Yesterday the University of Reading announced the winner of this year's Loebner Prize, an annual Artificial Intelligence competition: a chatbot called Elbot. You can test it out at www.elbot.com
The claim is that Elbot achieved a 25% success rate - reportedly fooling three of the twelve judges - and that a 30% success rate, the figure Turing himself predicted machines would reach, is what the competition's rules require for a machine to pass the test and so count as sentient. Therefore, the argument goes, we are getting very close to creating a sentient thing.
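For what it's worth, the arithmetic behind those headline figures is simple (the three-of-twelve count is as reported for the 2008 competition):

```python
# The arithmetic behind the headline figures, as reported for the 2008 contest:
# a machine "passes" if it fools at least 30% of the judges, Turing's predicted figure.
judges_fooled = 3    # judges who reportedly took Elbot for a human
judges_total = 12
success_rate = judges_fooled / judges_total
print(f"success rate: {success_rate:.0%}")                  # success rate: 25%
print(f"passes the 30% threshold: {success_rate >= 0.30}")  # passes the 30% threshold: False
```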
Having given the chatbot a test run this morning, I found that it felt more like a set of scripted responses triggered by keywords in my questions than like an intelligence. This hardly seems like something on the brink of sentience.
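To show what I mean by scripted keyword responses, here is a minimal sketch of a chatbot built that way, in the spirit of ELIZA. This is purely illustrative - I have no knowledge of how Elbot is actually implemented:

```python
# A minimal keyword-driven chatbot, in the spirit of ELIZA.
# Illustrative only: this is not Elbot's code.
import re

# Each rule pairs a keyword pattern with a canned response, checked in order.
RULES = [
    (re.compile(r"\b(mother|father|family)\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bchair\b", re.I), "Chairs are a fascinating human invention."),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]
FALLBACK = "How interesting. Please go on."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("I sat on a chair today"))  # Chairs are a fascinating human invention.
print(reply("What do you think?"))      # We were discussing you, not me.
```

A system like this recognises surface strings, not meanings: it would produce its chair response just as happily in a universe that contained no chairs at all.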
The problem, in my opinion, is that modern science is working on outdated philosophical principles. The Turing test is steeped in the kind of flawed theory of mind that arose from attempts to answer the problems posed by Dualism, typified by the philosophy of Descartes.
Dualism contains the idea of the ghost in the machine: that we are fundamentally made up of a physical body and a mysterious thinking thing that somehow controls it. Historically this idea opened a rather large philosophical can of worms. In this context in particular, it raises the question of how we can know that there is a ghost in the machine. That is to say, how can we be sure that what we take to be other human beings are not illusions, or robots, or even chatbots like Elbot?
The Turing test answers this question rather clumsily, by presuming that human behaviour is somehow special and recognisable to other human beings, and that we just instinctively know when we are in the presence of a sentient something. There are clearly a great many philosophical holes to fill before this idea holds water.
But any modern philosopher will tell you that over the last century or so pretty much every avenue has been explored, to no avail, in the attempt to remove the philosophical doubt cast by Cartesian dualism - up to and including removing the ghost from the machine altogether and claiming that we are all simply automatons.
These days it is widely thought that the whole model of the ghost in the machine, with or without the ghost, is flawed, if for no other reason than that it casts us into an unresolvable world of doubt in which we cannot postulate with any certainty the existence of any sentient thing other than ourselves. A new model of our existence is required, one in which the existence of other sentient beings is fundamental, not something that we struggle to believe in.
20th-century philosophers such as Wittgenstein talked about our knowledge and our language as things so wrapped up in the existence of a community of sentient beings that they make no sense without it. And I think that this is intuitively true. The word "chair" is meaningless outside the context of a world with people who sit on chairs. What could the word possibly mean in a universe with no chairs, legs, bums or steep hills to walk up? Fundamentally, language presumes the existence of things like chairs, and indeed of people to sit on them. Without the existence of a community, knowledge and language themselves start to unravel.
If shared context is the key to communication, Elbot is doomed to failure. The word "chair", like any other word, cannot mean anything to Elbot, because Elbot cannot sit down. Elbot does not share any experience with us, so it can never communicate with us in a meaningful way.
The Turing test asks us to sit behind a wall and guess whether or not we are talking to another human being, but day-to-day human interaction is nothing like this. We could certainly play it as a game, and I'm sure that many of us would fail the test. The point is that in playing that game we create a bizarre and artificial scenario, very far removed from our normal interaction. The fact that it is so bizarre only underlines that the dualist model is wrong.
If we create AI on sounder philosophical foundations, the bogus question asked by the Turing test - is there a ghost in the machine? - will melt away.
And yes, this is already being done. Asimo has begun to understand what a chair is because it exists in the same world as the chair. Asimo probably recognises chairs by seeing their structure, and perhaps also by judging their size relative to its own. Asimo would have an even better idea of what a chair is if it could sit down, and perhaps save energy by doing so.
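As a toy sketch of that kind of embodied judgement - size measured against the robot's own body - something like the following would do. Every number here is an assumption for illustration and bears no relation to Asimo's actual vision system:

```python
# A toy heuristic for recognising a chair by its shape and its size
# relative to the robot. Thresholds and robot height are invented.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    height_m: float         # estimated height of the object's seat surface
    has_flat_surface: bool  # a horizontal surface one could sit on
    has_legs: bool          # supports detected beneath the surface

ROBOT_HEIGHT_M = 1.3  # roughly Asimo-sized; an assumption for this sketch

def looks_like_a_chair(obj: DetectedObject) -> bool:
    """Chair-shaped, with a seat at roughly a quarter to half the robot's height."""
    seat_height_ok = 0.25 * ROBOT_HEIGHT_M < obj.height_m < 0.5 * ROBOT_HEIGHT_M
    return obj.has_flat_surface and obj.has_legs and seat_height_ok

print(looks_like_a_chair(DetectedObject(0.45, True, True)))  # True: chair-sized seat
print(looks_like_a_chair(DetectedObject(0.75, True, True)))  # False: table-sized
```

Crude as it is, even this rule only makes sense relative to a body that could sit: "chair-sized" is defined by the sitter, which is exactly the point about shared context.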