The Technofile Web site has moved.


Technofile is now located at http://twcny.rr.com/technofile/
Please update your links, bookmarks and Favorites.  

Letter from Grant Ingersoll

technofile  by al fasoldt

Columns and commentaries in a life-long dance with technology
 



Response to column on whether computers can think
 

May 18, 1997

Dear Mr. Fasoldt,

I am sorry, but I must respectfully disagree with your position on thinking computers. You are correct in pointing out that Kasparov was indeed beaten by cleverly written software that could analyze millions of positions per second, and you are also correct in your other reasoning (Uncle Henry, etc.). However, I fail to see how this proves that computers will never be able to think or act intelligently. Will it happen in our lifetime? Probably not. However, consider this: if the processing speeds of computers continue to increase at their current rate, in 30 years they will be able to process information at the same rate as the human central nervous system (I saw this factoid recently on The Learning Channel). Will this speed make them intelligent? Of course not, but it may very well open doors that now seem incomprehensible.

As you pointed out, the real question is not how to make a computer think, but how to figure out how a human being thinks. The argument can be made that the human mind is simply a series of chemicals and neurons either firing or not firing in parallel, in a complicated and difficult-to-understand process. (This is quite a simplification.) What is a computer but a bunch of bits either turned on or off, much like a neuron? You ask how a computer knows whether a word is misspelled, but again, the real question is how does a human being know? Has not your mind simply been trained to do so through repetition? (My very naive and limited knowledge of the brain tells me that neurons and chemicals work to create pathways through the brain. These pathways are essentially storage units.) Do you not have stored in memory somewhere a listing of all the words you know? Is not the computer just as adept, if not more so, at storing these pathways? Who is to say that the human mind is not just a complicated program that has learned to rewrite certain aspects of itself? If that is the case, why is it inconceivable that we could write a similar program? (In fact, if you look at genetic programming principles, among others, you will see this effort is already under way, though it is very primitive.)

Would you argue that your 3-year-old thinks on the same level as most adults? Probably not. So what happens to the child as he or she grows? Is the child born with all of the knowledge it will ever have, but just hasn't figured out how to access it yet? I doubt it; he or she learns through experience and schooling. In a sense, a massive knowledge base is built over time that can be searched and recalled more or less on demand throughout life. This is where computers and AI have a distinct advantage over humans: a computer can build a similar knowledge base over time, and that knowledge base can be passed on INSTANTLY to its "offspring." The offspring need not sit through 12-plus years of schooling to attain the level of knowledge of its parent; it can instantly pick up where the other left off and continue building.

As you pointed out, we tend to be smug about things we think we know a lot about. The fact is that although there may be thousands of books written on the topic of artificial intelligence, we are nowhere near having enough knowledge even to venture an opinion on it, much less to say that something as grandiose as AI could or could not take place. True smugness seems to come from those who think that something can't happen just because they don't see how it ever could. With that kind of thinking, we would all still be living in Europe, thinking the Earth was flat and all of the planets revolved around us.

Sincerely,

Grant Ingersoll