Tuesday 16 December 2008

The Inaugural Post

Ahhhh, the inaugural post, and so often the only post. It's amazing how many blogs get abandoned only minutes after they are started. But this time it's different. This time I have friends. I realized that I myself do not have the posting volume to keep anyone entertained, so I have recruited a team of specialists to help me keep this space populated with mind-melting brain benders and ideas. Politics, philosophy, theology, and societal critiques are all fair game, and the more confusing and far out the better!

For those of you who haven't yet said, "Lame, I actually have to use my brain for a change," I have devised a fiendishly interesting query. It centres on the Turing Test, proposed by Alan Turing in the fifties as a means of distinguishing between real and artificial intelligence, if indeed such a distinction exists. If you are not already familiar with the test, you can read about it here.

Assuming you understand the basic premise of the test, what question(s) would you ask in order to determine whether you were speaking with a computer or a human being?


  1. When I first read your post, I thought it would be easy to come up with a question for the computer, but it's been 20 minutes and I'm still sitting here.

    "Who is your best friend?" would be far too easy, because the computer could simply search the web, find a story about a person and their best friend, change the names, and masquerade it as its own.

    I suppose my question would have to be: "Why are you lying to me?"

  2. I must needs indeed think upon this further. I would use a series of questions, each proceeding from what I thought of the previous responses; one question would not be enough. I would pose inquiries targeting personality, individualism, morals, and humor. I would observe the subject's responses to see whether it displayed the aforementioned attributes or capabilities, and also study its interaction with a real human and its understanding of the concepts. I would also see if it could learn a new language or understand different, non-standard uses of language. There is a difference between knowledge and understanding, facts and feeling, intelligence and wisdom; can an artificial being have both in each distinction, or only the knowledge, the facts, and a synthetic excuse for intelligence?

    I would also ask: “Did you ever tell your mother that you loved her?”

  3. The problem I see with those questions is that they could be defeated with a search algorithm. The computer will be programmed to know a little about sentence structure; it will pick the important words out of your sentence, run a Google search for information, and then regurgitate a canned story it found somewhere on the web. There are plenty of people who have websites about how much they love their mothers. No one ever said the computer could not lie.
    Someone disputed this, saying that the internet is not a computer. I beg to differ. In the same way that supercomputers are networks of machines inside a giant climate-controlled room, the internet is a big network, and when computers are connected, it's pretty much like plugging in some more hard drives, webcams, speech-synthesis generators, and so on.

  4. The purpose of the questions is not to stump the computer into being incapable of answering, or even of giving a satisfactory answer, but to engage in a conversation that will permit the inquisitor to evaluate the subject's intelligence. Targeting metaphysical topics to evaluate the understanding and intelligence of the subject is how I would gain a feel for the subject's cognitive capabilities.

    There are a few problems I see with the Turing Test:

    There is the concept of the Chinese Room, John Searle's thought experiment: a hypothetical room containing a book that allows a person to simulate knowing the Chinese language. The book contains a list of Chinese characters and strings of characters, together with proper responses, enabling the user to take a correspondence document and, without being able to understand or even read it, formulate a reply. Outside the Chinese Room is a panel of evaluators, all fluent in Chinese. Papers are exchanged between the two rooms, facilitating a written conversation between the Chinese evaluators and the one in the room using the book. The question is: even if the person can compel the Chinese evaluators to believe that an actual, intelligent conversation is being conducted in Chinese, is the person using the book actually speaking Chinese? It could be argued that the panel of evaluators are indeed having a conversation in Chinese, but with the book's author or authors, not with the one who is using the book. Similarly, would the people questioning the computer not be conversing with the computer at all, but actually with its programmers and anyone who gave input to it, such as the web authors whose pages the computer uses?

    Convincing people that the intelligence is human, and fooling them into seeing no difference, is a demonstration of the foolishness and gullibility of humanity, not proof of the genuineness of the intelligence.

    It might test how good the intelligence may be, but not whether it is an intelligence at all. Locks are the oldest form of artificial intelligence (as far as I know); they take input, evaluate it, make a decision, and do something. Some computers are smarter than others, and some people are smarter than others, and some animals are smarter than others, and some locks are smarter than others, and some bombs are smarter than others.

    Can a lie detector really be perceptive and identify if a person is lying? Or does a lie detector only gather and publish data that the operator of the machine can use to perceive if the person is lying? Are artificial intelligences really intelligent, or do they simply perform complicated, or simplistic, calculations based on how they are designed and programmed to function by the intelligence that constructed them?

    So, are locks, computers, and other machines that "think" somehow, really intelligences? Or are they merely devices and systems designed intelligently that perform operations, but are only automated actions and systems?

    My awesome three-volume dictionary, the Webster's Third New International Dictionary, unabridged, 1986, defines intelligence as "1a The faculty of understanding: capacity to know or apprehend. b The available ability as measured by intelligence tests or by other social criteria to use one's existing knowledge to meet new situations and to solve new problems, to learn, to foresee problems, to use symbols or relationships, to create new relationships, to think abstractly: the ability to perceive one's environment, to deal with it symbolically, to deal with it effectively, to adjust it, to work toward a goal: the degree of one's alertness, awareness, or acuity: the ability to use with awareness the mechanisms of reasoning whether conceived as a unified intellectual factor or as the aggregate of many intellectual factors or abilities, as intuitive or as analytic, as organismic, biological, physiological, psychological, or social in origin and nature. c Mental acuteness."

    Rather than checking knowledge or understanding, I would prefer to see if the intelligence can come up with something genuinely new.

  5. I've got it! You are right, Brijn, let's have the computer try to replicate human innovation by creating new things. The difference that I think would help us separate the computers from the people is that the computers have no needs or desires other than the ones they were programmed to exhibit. Therefore, while a human would innovate to produce more food, and then decide to create a new form of transportation, a computer would create a new way to produce food because it was programmed to, and unless there is more in its programming it will be stumped. It has no needs that say to it, "How about developing a better solar panel?" or "Bell-bottom jeans need a revival," and the only way to bypass this lack of consciousness (if you can call it that) is for the computer to keep and monitor subjects and build things for their needs and desires. It could do this by reading the internet, by asking a real human questions, or by taking itself to a programming office where it is given new tasks.
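The canned-response strategy raised in comments 3 and 4 (pick out the important words, look up a pre-written reply) is simple enough to sketch. Here is a minimal, hypothetical "Chinese Room" in Python; the keywords and replies are invented for illustration. The rule book is just a dictionary, and the code that applies it understands nothing about the conversation.

```python
import string

# A minimal "Chinese Room": the rule book is a plain lookup table
# mapping recognized keywords to canned replies. Whatever
# "understanding" exists lives with the table's author, not here.
RULE_BOOK = {
    "friend": "My best friend is someone I met in school; we still talk often.",
    "mother": "Of course I told my mother I loved her, all the time.",
    "lying": "I am not lying. Why would you think that?",
}

# Used when no keyword matches, to keep the conversation going.
FALLBACK = "That is an interesting question. What makes you ask?"

def room_reply(question: str) -> str:
    """Pick the 'important words' out of the question, return a canned reply."""
    words = [w.strip(string.punctuation) for w in question.lower().split()]
    for keyword, reply in RULE_BOOK.items():
        if keyword in words:
            return reply
    return FALLBACK

print(room_reply("Who is your best friend?"))
print(room_reply("Did you ever tell your mother that you loved her?"))
```

A real entrant would need a vastly bigger table, or a web search in place of the dictionary as comment 3 suggests, but the principle is the same: the responder can hold up its end of a conversation without understanding a word of it.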