From what I've read, people who are multilingual sometimes think in one language, sometimes in another. When I was taking German in college, I sometimes found myself thinking in German a little bit. I kind of suspect that we are thinking in that proto-language and just think we are thinking in our language. Even as I say that, I have to think it's different when we are writing, because then we are almost certainly thinking in English. It's interesting that sometimes you have a train of thought, and you are going down that road, and the ensuing points are like misty mountains in the distance, and you don't know how you will put them into words until you get there.
I don't think AI has a language in the sense that we think of languages. There are computer languages, but they are not really languages, and anyway, when they are compiled they are just machine language, which is ones and zeros. I was reading about AI lately where it was used for some device that, oh, picked a berry out of a box and put it on a plate. We carbon units can do that pretty easily; we might ask which berry or which plate, but we don't even think about the operation. We don't think: reach out the arm, slip the thumb underneath and the forefinger on top, firmly but not too firmly, and release over the plate, but not too far over.
Just an aside here, but I am thinking about the times when we are carrying several things, in our hands, under our arms, and then we have to carry one more thing, and we just kind of shift everything around: this thing goes under that arm, which frees that hand. We never think about what we are doing; we just shift things around and everything finds its place, or else we decide to make another trip.
Anyway, back to that berry picker, probably some kind of tweezers at the end of a mechanical arm. Maybe it is told when it fucks up, or maybe it can sense that, and then it has to adjust accordingly. Say it comes in at 27 degrees with a pressure of 5, and it squashes the berry. Does it then tell itself never to do that again? Does its self-written algorithm consist of a list of approaches that are no good? Or does it evolve some more general rule, something more like the language rule "go from the top and grip gently"?
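Just to spell out for myself what that difference might look like, here is a rough sketch in Python. Everything in it, the names, the numbers, the whole setup, is made up for illustration; it is not how any real berry picker works.

```python
# A toy illustration (not any real robot's code) of two ways a gripper
# might "learn" from squashing a berry.

class BanList:
    """Memorize the exact (angle, pressure) combinations that failed."""
    def __init__(self):
        self.bad = set()

    def record_failure(self, angle, pressure):
        self.bad.add((angle, pressure))

    def allowed(self, angle, pressure):
        return (angle, pressure) not in self.bad


class GeneralRule:
    """Evolve a rule of thumb: cap the pressure, prefer coming in from the top."""
    def __init__(self, max_pressure=10.0, preferred_angle=90.0):
        self.max_pressure = max_pressure        # anything above this is too hard a squeeze
        self.preferred_angle = preferred_angle  # 90 degrees = straight down from the top

    def record_failure(self, angle, pressure):
        # Squashed berry: assume the grip was too hard, so lower the cap a bit.
        self.max_pressure = min(self.max_pressure, pressure) * 0.9

    def allowed(self, angle, pressure):
        # Gentle enough, and roughly from the top.
        return pressure <= self.max_pressure and abs(angle - self.preferred_angle) < 45


if __name__ == "__main__":
    ban, rule = BanList(), GeneralRule()
    ban.record_failure(27, 5)
    rule.record_failure(27, 5)
    # The ban list only rules out the one exact failure; the general rule
    # now refuses anything that squeezes as hard, from any angle.
    print(ban.allowed(27, 5), ban.allowed(28, 5))    # False True
    print(rule.allowed(27, 5), rule.allowed(90, 3))  # False True
```

The first one never makes the same mistake twice but learns nothing from it; the second one is starting to sound like that language rule.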
I don't think those early computer chess machines had much in the way of strategy; they just looked through all the possibilities of every different piece moving through several moves and picked the one that came out most advantageously. I don't know how they judged advantageous, but I assume there was some measure.
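My rough guess at how that brute-force approach worked, and it is only a guess, is something like the sketch below: the "measure" is just counting up the pieces, and the search tries every move and every reply a couple of levels deep and picks whatever line comes out best. It leans on the third-party python-chess library for the rules of the game; nothing else here is taken from any actual chess program.

```python
# A toy depth-limited search in the spirit of those early programs:
# try every legal move, score the resulting position by counting material,
# and pick whatever comes out best.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """The crude 'measure': White's material minus Black's."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def search(board, depth):
    """Look ahead `depth` half-moves, assuming both sides pick their best reply."""
    if depth == 0 or board.is_game_over():
        return material(board)
    best = -float("inf") if board.turn == chess.WHITE else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = search(board, depth - 1)
        board.pop()
        best = max(best, score) if board.turn == chess.WHITE else min(best, score)
    return best

def pick_move(board, depth=2):
    """Pick the move whose line comes out most 'advantageous' by the measure above."""
    def line_score(move):
        board.push(move)
        score = search(board, depth - 1)
        board.pop()
        return score
    moves = list(board.legal_moves)
    return max(moves, key=line_score) if board.turn == chess.WHITE else min(moves, key=line_score)

if __name__ == "__main__":
    board = chess.Board()
    # Every opening line ties on material at this depth, so any move comes out "best".
    print(pick_move(board))
```

Nothing in there that anyone would call strategy; just arithmetic and exhaustion.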
These newer ones, the ones that taught themselves the game in four hours and then beat masters: I wonder if they just look more moves ahead, or if they develop some sort of strategy, write some sort of master algorithm that could be translated, by other computers of course, into English. What if we submitted our toughest problems to these computers, set them at it, and then translated what they came up with into English?
This seems so logical, so clear-sighted to me, that it must be a crackpot idea.
Here's another one. These computers are exactly the same when they come out of the factory. One assumes they get the same kind of programming, so that when they teach themselves they must all go through exactly the same operations. What happens when they play each other? Do they stalemate?
Did Old Dog just say that he made up eating pickled herring for New Year's? I remember as a wee lad doing that in Gage Park and being told it was a Czech tradition. Only did it once. Foul stuff.