
Thinking Machines: The Artificial Intelligence Debate


Brown, D. S. (2016, January 22). Thinking Machines: The Artificial Intelligence Debate.

Takeaway: Many believe that true artificial intelligence exists today, and that it is actively at work in the service of science. Does this mean that machines actually think for themselves? Just what sort of intelligence can machines have? What does this mean for humanity?

There seem to be more questions than answers when it comes to artificial intelligence. The true nature and future of thinking machines may still be debated, but one thing is certain: humans think about them a lot.

The Turing Test

In a 1950 article published in the journal Mind, Alan Turing asked, “Can machines think?” To find the answer, he proposed an “imitation game” (which later came to be known as the Turing test), in which an interrogator is tasked with determining which of two other players is the machine. The results of this test, he argued, would answer the question.
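The structure of the imitation game can be sketched in a few lines of code. This is only an illustration of the setup, not of the test itself: the responders and the interrogator below are hypothetical stand-ins, whereas Turing's interrogator is a human conversing freely over text.

```python
import random

def machine(question):
    # Canned replies stand in for a conversational program.
    return {"Are you human?": "Of course."}.get(question, "Interesting question.")

def human(question):
    return {"Are you human?": "Yes, last I checked."}.get(question, "Let me think about that.")

def imitation_game(interrogate):
    # Hide which player is the machine behind the labels "A" and "B".
    labels = ["A", "B"]
    random.shuffle(labels)
    players = dict(zip(labels, [machine, human]))
    ask = lambda label, question: players[label](question)
    guess = interrogate(ask)          # interrogator names "A" or "B"
    return players[guess] is machine  # True if the machine was unmasked

# A naive interrogator that asks one question and guesses from the reply.
def naive_interrogator(ask):
    return "A" if ask("A", "Are you human?") == "Of course." else "B"
```

A machine "passes" the test to the extent that interrogators like this one can do no better than chance at unmasking it.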

Admitting that he has “no very convincing arguments” to prove that machines actually do think, he addresses various objections. Along the way, he deals with some interesting questions: Can a machine surprise you? Is it possible for a machine to fall in love or enjoy strawberries and cream? Could God confer a soul upon a computer? Can a machine do more than you tell it to do? Can a computer, as a “child machine,” be made to learn?

Turing believed that by the year 2000 computers would be able to imitate humans well enough to pass the test. Are we there yet? Artificial intelligence experts say no. Some even argue that matching human performance should not be the goal of AI at all, and is actually a distraction. That has not stopped efforts to simulate the brain electronically, or to anthropomorphize the machine.

At any rate, comparisons with human intelligence are standard in the field of AI research. Artificial general intelligence (AGI) refers to a computer whose capabilities equal human intelligence; artificial superintelligence (ASI) to a level that surpasses it. The term "singularity" has been coined for the point of no return, when machine intelligence finally exceeds human intelligence.

“We may hope that machines will eventually compete with men in all purely intellectual fields,” wrote Turing. He dismissed Lady Lovelace's objection that “the Analytical Engine has no pretensions to originate anything” by suggesting that her reference did not apply to a more capable machine that might come along later. “Machines take me by surprise with great frequency,” said Turing.

The Chinese Room

One challenge to Turing's AI prediction came from John Searle in 1980. Searle associated “weak AI” with the use of the computer as a valuable tool, whereas according to “strong AI,” “the appropriately programmed computer really is a mind.” Searle concluded that “strong AI has little to tell us about thinking.”

In Searle's thought experiment, a subject is given cards with unknown characters on them. These turn out to be Chinese characters, but the subject doesn't know any Chinese at all. He is then given successive cards in Chinese, as well as written instructions in English to help him with his task. Based on the instructions, he returns certain responses that also turn out to be Chinese characters. The subject is successful in making those who have sent the cards believe that he actually knows Chinese. One might conclude that the subject has passed the Turing test by use of a programmed response.
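Searle's setup — symbols in, a rulebook, symbols out — can be caricatured as a lookup table. The entries below are hypothetical, but they make his point concrete: the program matches shapes and returns shapes, and at no step does anything in it "know" that the tokens are Chinese.

```python
# A toy "rulebook" mapping input symbol strings to output symbol strings.
# The characters are treated as opaque tokens throughout.
RULEBOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会中文吗": "当然会",  # "Do you know Chinese?" -> "Of course"
}

def chinese_room(card):
    # Pure symbol manipulation: look up the card, return the listed reply.
    return RULEBOOK.get(card, "请再说一遍")  # fallback: "Please say that again"
```

To an outside observer exchanging cards with this function, the room appears to understand Chinese; inside, there is only rule-following.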

Searle's point was that simulation is not duplication. For simulation, you just need the right input and output and a program in the middle. Attempts to confer consciousness on a machine by algorithmic means will simply fall short. Humans have beliefs; machines don't. He summarized that thinking is limited to “only very special kinds of machines, namely brains and machines that had the same causal powers as brains.” And those other kinds of machines do not exist. Intentionality is a biological phenomenon – an aspect of the human brain. (To learn more on this, see Will Computers Be Able to Imitate the Human Brain?)

The Spiritual Machine

“Imagine a world where the difference between man and machine blurs, where the line between humanity and technology fades, and where the soul and the silicon chip unite.” These are the words of Ray Kurzweil, the “restless genius” who gave us optical character recognition, print-to-speech and speech-to-print technologies and a great music synthesizer. Now imagine a world where technology solves problems like poverty and disease.

Kurzweil is an advocate of transhumanism, an intellectual movement that seeks technological solutions for human problems. Some transhumanists are almost religious in their devotion. Whether for extending life, enhancing the body with computerized prosthetics or a myriad of other projects, the concept is to eventually meld with the machine or to confer upon it consciousness.

Kurzweil is seen as a visionary. Believers look forward to a Singularity, the point at which machine intelligence surpasses that of a human. From there, self-improvement by self-programming will create a runaway effect. Kurzweil believes that the results of the ensuing intelligence explosion will be positive. Others are not so sure.

Benefits and Challenges

Whether machines can think may be less important to those who are interested in their potential benefits. Businesses want machines that are better, faster, more powerful and more interactive. AI solutions have powered space shuttles, diagnosed medical conditions, guided driverless cars, performed data mining and become the voices of our smartphones. IBM's Deep Blue defeated world chess champion Garry Kasparov, and its Watson beat Jeopardy! champions Brad Rutter and Ken Jennings.

But not all AI stories are positive. AI has replaced travel agents, grocery store clerks, bank tellers and stock brokers. During the 2010 “Flash Crash,” the Dow Jones Industrial Average dropped 600 points in five minutes. (Some 70 percent of securities trading is done by computer algorithms.) “AGI is a ticking timebomb,” says Eliezer Yudkowsky. Stephen Hawking says that “the danger is real” that computers could develop intelligence and “take over the world.” General Keith Alexander of U.S. Cyber Command believes that “the next war will begin in cyberspace.” Bill Joy raised concerns about self-replicating intelligent robots. Enthusiasts and skeptics disagree over the future of AI. (For more on the future of AI, see Don't Look Back, Here They Come! The Advance of Artificial Intelligence.)

The Current State of the Debate

Can a machine think? “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim,” wrote Edsger W. Dijkstra. The AI debate has moved on. The next question: are there sufficient safeguards against the potential dangers of AI? James Barrat cautioned that we should program friendliness into machines. Some suggest relinquishment or apoptosis. Others seem to minimize the risks.

In a Vanity Fair article published in November 2014, the author recognizes that AI is suddenly everywhere. The question now is whether the future Singularity will bring utopia or apocalypse. The existential argument among AI leaders brings to mind current science fiction movies. What awaits us when Pandora's box of genetics, nanotechnology and robotics (GNR) is opened? Elon Musk said that “with artificial intelligence we are summoning the demon.”

Will computers become truly sentient beings? Will they save the world or destroy it? Will Kurzweil's Singularitarians participate in the development of machine consciousness? These points will not be decided here. Turing wrote, “We can only see a short distance ahead, but we can see plenty there that needs to be done.” Our shortsightedness remains.
