Read an interesting article yesterday about Duke University neuroscientist Miguel Nicolelis, who takes issue with the famous Singularity predicted by science fiction author Vernor Vinge and futurist Ray Kurzweil: that point at which computer intelligence emerges and outstrips human intelligence, a notion that has been a staple of science fiction for years.
(“Robot,” by ewen and donabel, from Flickr under Creative Commons.)
The viewpoint article, “The Brain Is Not Computable,” introduces Nicolelis and his new book on the brain and human thought. Where Kurzweil et al. foresee artificial intelligence being developed in the next few decades as computers grow ever more powerful, Nicolelis posits that the functions of the human brain, including the random and unpredictable interactions among its myriad neurons, will not be replicated inside a machine.
That reminded me of a conversation I had during a panel discussion at a science fiction and fantasy convention many years ago,* in which I expressed my own doubts about artificial intelligence. I'm dubious that it will appear any time soon, not from the perspective of computer science but from that of Theory of Knowledge.
Specifically, the emergence of true AI would seem to require the computer (or network of computers) to transcend its own programming. We have seen tremendous performances by machines as repositories of quickly accessible data: the “Watson” computer that competed so well at Jeopardy! was such a machine, capable of parsing the answer and finding the components of the most likely question. But as I understand it, Watson was still following instructions: still performing tasks it had been programmed to perform.
I contend that machines such as Watson are at the lowest end of what I think of as the chain of intelligence: Data are interpreted into Knowledge, and Knowledge is applied and refined into Wisdom.
A true AI (or, if you will, an intelligent artifice) will have to be much more than a sophisticated data-mining tool. For it to adhere to Theory of Knowledge, it will have to be able to form concepts based on the data presented to it; to convey knowledge, those concepts will have to be predictive in nature; and the artifice will have to test those predictions against reality and, if needed, modify them and test again. Once it can rely on the accuracy of its predictions enough to carry out independent,** routine tasks without recourse to intervention by its programmers, we might consider it intelligent. But as its intelligence is tried in the fire of reality, will that artifice develop anything approaching wisdom?
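That concept-prediction-refinement loop is easier to see in code than in prose. Here is a minimal sketch in Python, built around an entirely hypothetical Artifice class of my own invention; the "concept" it forms is deliberately trivial (an estimated slope), and the refinement rule is an ordinary normalized error-correction step, not anyone's real AI architecture.

```python
import random

# A toy of the Data -> Knowledge -> (maybe) Wisdom loop: form a concept
# from data, use it to predict, test predictions against reality, and
# refine the concept whenever a prediction fails.

class Artifice:
    def __init__(self, tolerance=0.5):
        self.slope = 0.0          # the current "concept" formed from data
        self.tolerance = tolerance
        self.trusted = False      # True once predictions hold up against reality

    def predict(self, x):
        """Apply the current concept: knowledge must be predictive."""
        return self.slope * x

    def observe(self, x, actual):
        """Test a prediction against reality; refine the concept if it fails."""
        error = actual - self.predict(x)
        if abs(error) > self.tolerance:
            # Prediction failed: modify the concept and keep testing.
            self.slope += 0.1 * error * x / (1.0 + x * x)
            self.trusted = False
        else:
            # Prediction held: confidence in the concept grows.
            self.trusted = True
        return self.trusted


# "Reality," unknown to the artifice: y = 2x plus a little noise.
artifice = Artifice()
for _ in range(200):
    x = random.uniform(0.0, 10.0)
    artifice.observe(x, 2.0 * x + random.gauss(0.0, 0.1))

print(f"learned slope ~ {artifice.slope:.2f}; trusts its predictions: {artifice.trusted}")
```

Of course, even this toy makes my point for me: every step of the loop, including the decision to refine, is something I told it to do. Nothing in it transcends its programming, and nothing in it supplies the will footnoted below.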
Will such a device — artificial, independent, and intelligent — be developed in our lifetimes, and will it approach (let alone surpass) the functions of the human brain? I’m aware of the danger of saying anything will never happen, so I won’t say no … but I doubt it.
The cyberneticists are welcome to prove me wrong.
___
*TriNoCon, perhaps? NASFiC? I don’t remember … and that bothers me.
**Which brings up another thorny issue with respect to any artifice: whence shall it develop the will to act independently?