In Response

Hugh Loebner

Shieber [in his article Lessons from a Restricted Turing Test] would like to tell me how I should spend my money. He suggests alternative prizes for my "largess." In my letter of December 30, 1988 to Dr. Robert Epstein, wherein I authorized Dr. Epstein to move forward with the contest, and referring to the Turing Test, I concluded with these words: "Robert, in years to come, there may be richer prizes, and more prestigious contests, but gads, this will always be the oldest." Well, one out of three isn't bad. I was aware when I penned those words that I had no patent on prizes, and that opportunities to reward advances in AI were not barred to others. I look forward with great anticipation to The Shieber Prize.

Why a Loebner Prize?

Shieber asks "Why a Loebner Prize?" I can best answer his question by explaining how I thought of the idea and why I decided to actually implement it.

I spent a year in Boston in 1980 and 1981 on a one-year contract working for the State of Massachusetts. I provided expertise in the DMS1100 data base management system on Univac 1100 computer systems. In the course of my duties I discovered the Univac computer language "MACRO," which was bundled with DMS1100. Input to a MACRO program is a text stream. In an initial declarative section of a MACRO program, "tokens" are defined as strings of characters. The procedure section consists of a collection of "macros." These are subroutines, each of which is headed by a pattern of tokens. As the input text stream is read by a program, it is tokenized. When a pattern of tokens in the input stream matches a pattern defined for a macro, that macro is activated. MACRO is recursive, and it allows associative subscripting.
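To give a flavor of the mechanism, here is a minimal sketch, in Python rather than MACRO, of the idea just described: an input stream is tokenized, and a "macro" fires when its pattern of tokens appears in the stream. The token classes, patterns, and responses are purely illustrative; nothing here reproduces Univac MACRO itself.

# A toy sketch, in Python rather than MACRO, of the mechanism described
# above: the input stream is tokenized, and a "macro" (handler) fires when
# its pattern of tokens appears. Token classes, patterns, and responses are
# illustrative only; this does not reproduce Univac MACRO.

TOKEN_CLASSES = {
    "GREETING": {"hello", "hi"},
    "QUERY": {"what", "which"},
    "TIME": {"time"},
}

def tokenize(text):
    """Map each word of the input stream to a token class, or pass it through."""
    tokens = []
    for word in text.lower().replace("?", " ?").split():
        for cls, words in TOKEN_CLASSES.items():
            if word in words:
                tokens.append(cls)
                break
        else:
            tokens.append(word)
    return tokens

# Each "macro" is headed by a pattern of tokens and is activated when that
# pattern occurs in the tokenized input stream.
MACROS = [
    (["QUERY", "TIME"], lambda: "You asked about the time."),
    (["GREETING"], lambda: "Hello yourself."),
]

def run(text):
    tokens = tokenize(text)
    for pattern, action in MACROS:
        for i in range(len(tokens) - len(pattern) + 1):
            if tokens[i:i + len(pattern)] == pattern:
                return action()
    return "No macro matched."

print(run("What time is it?"))   # -> You asked about the time.
print(run("Hello there"))        # -> Hello yourself.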

For me, MACRO was a revelation. MACRO's ability to create and match patterns of tokens seemed ideal for natural language analysis. I began to plan how I could use MACRO to develop an AI system of my own. I thought about associative subscripting. How could I use the technique for natural language analysis? Possibilities presented themselves, almost without limit. I decided then, and still think today, that thought processes are associative, and that an associative language, rather than a list processing one, will be needed to create machine intelligence.

My contract expired. I returned to the University of Maryland Baltimore County (UMBC), and to my position as assistant director of computing. The University's Univac was scheduled for removal, and MACRO was available only for it. I could not find a substitute available on any computer system to which I had access. Although I hoped to work on AI, in the end I did nothing. (I later learned that there are comparable languages available for other computer systems.)

Although I did not create an AI system, my thoughts about how I might use MACRO to create one gave rise to another idea. I had, of course, read about Alan Turing's famous test. In the course of my plans to develop an AI system, I had the Turing Test in mind as a criterion. I soon realized that even if I were to succeed in developing a computer that could pass Turing's Test, I had no venue available in which to prove it. I thought: "Well, why not establish a Turing Test contest?" That is how I thought of the idea. Now here are three reasons why I actually implemented it.

1. My primary purpose was to develop the Turing Test itself. By the time I thought of my prize, Turing's article proposing the test was decades old. AI scientists and philosophers regularly discussed the test, yet no one had taken steps to implement it. (I was misquoted by Computerworld. I said that "no one was doing anything about the Turing Test, not AI.") The initial Loebner Prize contest was the first time that the Turing Test had ever been formally tried. This in itself justified the endeavour. It also introduced the Turing Test to a wide public, and stimulated interest in it. To my knowledge, no one had asked, let alone answered, the many important questions about the Turing Test which must eventually be answered. The title of Dr. Shieber's article is proof to me that this contest will advance the study of the Turing Test. "To see something once is better than to be told a thousand times" is the old Chinese saying.

2. Related to my desire to develop the Turing Test was my desire to advance AI. I have, since adolescence, been interested in the field. When I was in high school, I thought that computers and robots should do all work. I called this philosophy "automated parasitism." I was told this was impossible: to build a computer as complex as the human brain would require a computer as large as the Empire State Building, and it would require all the water of Niagara Falls to cool it. Today, I remain an unrepentant utopian. I want to see total unemployment. That, for me, is the ultimate goal of AI and automation. (The problem is to design a society that can equitably distribute the fruits of automation among its members.)

I believe that this contest will advance AI and serve as a tool to measure the state of the art. As time passes we shall measure the advances in the field. For this reason I have made two stipulations regarding the contest: (a) The contest must be held annually and (b) The prize must be awarded if there is at least one entry. The certainty of the award of a prize each year is the inducement. The contest must be dependable. Each contestant must understand that he is competing against other entrants, not against someone's idea of the perfect program. I am not worried that the winning entries in early years are primitive. Their inadequacies are incentives for others to enter the contest.

Shieber opines that the contest is premature. He feels that, because of our early stage of knowledge of AI, research efforts will be wasted and misguided. He speculates on an imaginary prize, prematurely announced, that is directed at developing flight. "Springs!" he says; that is what everybody will use for flight, because there is no time to study the airfoil. I have two comments. The first is that Dr. Shieber's argument (study springs because there is no time to study airfoils) is precisely why I have insisted that there be an annual contest with a prize awarded every year. There will be time to consider long-term goals, and there will be a reward each year.

My second comment has to do with Mozart's backside. When Mozart rode to Vienna in 1781 he wrote complaining of the pain the mail coach inflicted on his backside (Mozart in Vienna, V. Braunbehrens, trans. T. Bell, Grove Weidenfeld, NY, 1990, p. 17 [most of what you think you know about Mozart is wrong unless you have read this book]). This resulted, I must suppose, in part from the poor suspension of the coach. The study of elasticity, stress, and strain did not result in a swift and straight arrival at understanding. Suppose a concerted effort had been made, early on, to fly using springs. Perhaps the concepts of stress and strain would have been invented sooner, along with advances in spring technology that would have been a boon to humanity, and to Mozart's buttocks. There is probably still room for improvements in springs. [For an interesting discussion of the development of the concepts of stress and strain, see J. E. Gordon, Structures, or Why Things Don't Fall Down, Da Capo Press, NY, 1978.] Research would be boring, indeed, if every effort resulted in answers only to the question or problem intended. Perhaps my prize will not lead down the straightest path to AI. It will prove useful, nonetheless, perhaps in very unexpected ways.

3. The third purpose of the prize was to perform a social experiment. By training I am a sociologist, with an interest in methodology and mathematical sociology. Chaos theory has interested me from my first exposure to it. Chaos theory posits the existence of non-linear systems which are "highly sensitive to initial conditions." The weather, for instance, may be such a system. Indeed, it may be so sensitive to small perturbations that a butterfly flapping its wings over New York can determine the weather over Paris two weeks later. This is known as the "butterfly effect." I believed that chaos theory was applicable to many social phenomena, including, I hypothesized, the field of AI. I thought the years of discussion of the Turing Test and the years of effort to create AI systems had created a powerful "potential social energy" which "energized" the field of AI. If my hypothesis was correct, the time was overripe to implement a prize for the Turing Test. It might only require a minimal effort by one individual to act as the catalyst. Whoever first proposed the contest would act as the seed around whom this potential social energy would focus. Hence The Loebner Prize; I have always wanted to be a social butterfly.
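As an aside, the sensitivity that chaos theory describes is easy to demonstrate numerically. The short sketch below iterates the logistic map, a standard textbook example of a chaotic system, from two nearly identical starting points; it is offered only as an illustration of the butterfly effect, not as a model of the social dynamics I have in mind.

# The logistic map x -> r*x*(1-x) with r = 4 is chaotic: two orbits that
# start one part in a million apart soon bear no resemblance to each other.

def logistic_orbit(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)   # a "butterfly-sized" perturbation

for n in (0, 10, 20, 30):
    print(f"n={n:2d}  x={a[n]:.6f}  x'={b[n]:.6f}  |diff|={abs(a[n] - b[n]):.6f}")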

Finally, there is a Loebner Prize because of the hard work and diligent efforts of many people, especially the contest's director Dr. Robert Epstein, the members of the Loebner Prize Committee, and of course those who submitted entries.

Future Loebner Prize Contests

The task in the present contest was to answer the question: "Which terminal represents interaction with a computer, and which with a human?" This is, after all, the Turing Test. In order to make this question interesting, The Loebner Prize Committee decided to require that conversations be restricted to a single topic. That decision was controversial. I agree with Dr. Shieber that a restricted test is not the best way of conducting a Turing Test; I think everyone would agree with that. However, with this design we have learned something about conducting a Turing Test. As I hoped, we can see results. For a full discussion of the lessons we have learned from our restricted Turing Test, see Dr. Shieber's companion article "Lessons from a Restricted Turing Test" in this issue. Now we will use the lessons that we have learned to improve the contest.

(1) At the current state of the art, I suggest that the appropriate orientation for the contest is to determine which of the obviously artificial computer entries is the best entry, i.e., the most human-like, and to nominate its authors as "winners." It should not be to determine whether a particular terminal is controlled by a human or a computer. If we maintain this orientation, there should be no problems holding unrestricted tests. Therefore I intend the Loebner Prize to be unrestricted for at least four years, starting with the Fifth Annual Loebner Prize Contest in 1995. There are two advantages to an unrestricted test. The first is that it is the Turing Test, and it is time we conducted one. The second is that it is much simpler and less expensive. As an old DP professional, I strongly believe in KISS wherever possible. By eliminating topics, we eliminate the need for referees to ensure that conversations are properly restricted. At the end of four years, we can evaluate the status of the contest and perhaps opt for another period of restricted tests.

(2) I am concerned with verification. I don't think that any entrant has cheated. However, to ensure that there is no cheating, starting with the Fifth Annual Contest, all programs must be subject to verification. The easiest way is to require that software and hardware run on computers located on-site and in-sight at the contest location. This also eliminates communications costs and complexity.

(3) Starting with the Fifth Annual Contest, each program must be self-reporting. That is, each program will be required to produce a text file containing the transcripts of its conversations, with each sentence time-stamped (a minimal sketch of such a transcript appears below). In the past this was captured by the communications program. This rule is in keeping with my desire to simplify the running of the contest. There is one question regarding future contests whose answer I am not sure of. It has to do with publication of a winning entry. Should winning programs and hardware enjoy a cloak of secrecy? If the contest is to advance the field of AI, it may be said that workers in the field must be given the opportunity to learn from the winner. Opposing this position is the possibility that winners will desire secrecy. An artificial intellect can have great commercial and possibly military value. Some might go so far as to think that it should be classified secret. Are there super programs in existence that their authors will not enter in the contest for this reason?

A possible solution is for winners whose programs or hardware represent significant "non-obvious" advances to seek patent or copyright protection. This ensures both publication and protection. A second alternative is for the winner to provide a copy of the software and any schematics of hardware with the understanding that they will be kept secret for a period of time. I do not know how long they should be kept secret; perhaps 10 years is appropriate. This would maintain the winner's lead, but eventually the details would be revealed. I do not know the answer to this conundrum. I hope to see a discussion of this by the AI community, and I solicit direct comments from those to whom this question is of interest.
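For concreteness, here is a minimal sketch of the kind of self-reporting described in item (3) above: each utterance is appended to a transcript file with a time stamp. The file name, line format, and function names are illustrative assumptions, not contest specifications.

# A sketch of a self-reporting transcript: each utterance is appended to a
# text file with a time stamp. File name, line format, and function names
# are assumptions for illustration, not contest specifications.

from datetime import datetime

TRANSCRIPT_FILE = "transcript.txt"   # assumed name

def log_utterance(speaker, text, path=TRANSCRIPT_FILE):
    """Append one time-stamped line of conversation to the transcript file."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"[{stamp}] {speaker}: {text}\n")

# Inside a contest program's read/respond loop one might write, for example:
# log_utterance("JUDGE", "What is your favorite color?")
# log_utterance("PROGRAM", "I have always been partial to blue.")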

Can a machine think, or what is intelligence?

My reaction to intelligence is the same as my reaction to pornography: I can't define it, but I like it when I see it. There are those who argue that intelligence must be more than an algorithm. I, too, believe this (see Box). I think that there has been an undue tendency to consider only natural-language interactions when discussing the Turing Test. I do not think Alan Turing would necessarily have agreed with this approach. In the conclusion of his article, Turing has a brief but extraordinarily prescient discussion of heuristic learning by a computer. He writes:

"We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and teach it to understand and speak English. This process could follow the normal teaching of a child." [A. M. Turing, "Computing Machinery and Intelligence," MIND, LIX, 1950, p. 460]

Now, in his article, Turing did write at length on finite state machines operating under the influence of stored instructions. However, he also wrote "... we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well." For this reason, the winner of the Loebner Grand Prize must develop a computer, with associated hardware, that can respond "intelligently" to audio-visual input in the manner that a human would. Some may say that this is an "impure" Turing Test, but I do not. As Turing wrote, "The question and answer method seems suitable for introducing any one of the fields of human endeavour that we wish to include." Well, I would like to ask questions about images and pattern recognition. If the computer answers appropriately, it is intelligent.

Conclusion

I would like to conclude this paper with a final thought. There is a nobility in this endeavour. If we humans can succeed in developing an artificial intellect, it will be a measure of the scope of our intellect. Let me repeat a joke. You may already have heard it. A group of computer scientists build the world's most powerful computer. Let us call it "HyperThought." HyperThought is massively parallel, it contains neural networks, it has teraflop speed, etc. The computer scientists give HyperThought a shakedown run. It easily computes pi to 10,000 places and factors a 100-digit number. The scientists try to find a difficult question that may stump it. Finally, one scientist exclaims: "I know!" "HyperThought," she asks, "is there a God?" "There is now," replies the computer.

Now, most people, when they hear the joke, assume that the computer is asserting its own divinity. But, when asked who God is, these same people reply "He is my creator." And to HyperThought, humans will be its creators.

I suggest Loebner's Corollary to Asimov's Laws of Robotics: "Humans are gods."

We may ask "Is it ethical for us to teach intelligent computers this?" If we want intelligent robots and computers to care for us, to fetch and to carry for us, as I do, then this belief system will facilitate the matter. And, in fact, we will have created them.

It amuses me to imagine a day in the distant future when humans have become extinct, surpassed by our creations, robots, who roam the universe. I like to think that these robots may have a memory of us humans, perhaps as semi-mythic fractious demigods from the distant past who created them. And, just possibly, they will remember me.


-----BOX------

I have the following problem with "algorithmic" intelligence. Consider the input string "What", "time", "is", "it", and "?" Let us speculate on two ways an algorithm might function on it. (I will use bold to represent input/output and italic to represent the algorithm):
(1) IF What, time, is, it, ?, THEN
IF the time is 12:01 AM RESPOND It, is, 1,2,:,0,1, AM
ELSE IF the time is 12:02 AM RESPOND It, is, 1,2,:,0,2, AM etc....


(2) IF What, time, is, it, ?, THEN RESPOND I, do, not, know, .

Both algorithms produce syntactically correct responses, but in the first example, the "intelligent" response, the algorithm must be "aware" of reality. We can easily think of other examples, e.g.:

"What", "is", "the", "color", "of", "the", "ball", "on", "the", "table", "?"

(1) IF What, is, the, color, of, the, ball, on, the, table, ? THEN
IF there is a ball on the table and it is red RESPOND The, ball, is, red
ELSE IF there is a ball on the table and it is blue RESPOND The, ball, is, blue etc....

(2) IF What, is, the, color, of, the, ball, on, the, table, ?
THEN RESPOND I, do, not, know.

It seems to me that these simple examples suggest that there exists a population of sentences (questions about the universe) that can be answered "intelligently" (telling a questioner something about the universe that the questioner doesn't know) by a computer operating under an algorithm only if the computer has an "awareness" of the universe. By "awareness" I mean that it has access to "sense organs," or physical transducers, that can measure some state of the universe at the time the input string is encountered. Some of the states in this finite state machine depend on states of the external universe and are not the result of prior instructions.
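The distinction can be made concrete with a small runnable sketch. The "aware" responder below consults a physical transducer (the system clock, standing in for a sense organ); the "unaware" responder can only emit a canned reply. The function names are illustrative.

# Two responders for "What time is it?". The "unaware" one can only emit a
# canned reply; the "aware" one consults a transducer -- here the system
# clock stands in for a sense organ. Function names are illustrative.

from datetime import datetime

def respond_unaware(question):
    if question == "What time is it?":
        return "I do not know."
    return "I do not understand."

def respond_aware(question):
    if question == "What time is it?":
        # The answer depends on a state of the external universe, measured
        # at the moment the input string is encountered.
        return "It is " + datetime.now().strftime("%I:%M %p") + "."
    return "I do not understand."

print(respond_unaware("What time is it?"))   # always the same canned reply
print(respond_aware("What time is it?"))     # reflects the actual time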