I wrote this essay in December 1998 for PHIL342a: Minds and Machines, an excellent course taught by Prof. Charles Morgan that I took as part of my undergraduate degree. I’m now blogging it for posterity.

Introduction

First off, it is necessary to attempt to characterize what we mean by mental capacities. There will likely never be a universally agreed-upon definition of this term, or of intelligence, mind, sentience, consciousness, etc. There have been many attempts, and they vary greatly. Yet all of these terms are generally understood to be related. One way to ask the question, “What are mental capacities?”, is to ask, “What would an entity have to be like in order to justify the attribution of mental capacities?”, where “entity” could be human, machine, extra-terrestrial life form, or what have you. This question, too, has been answered in many different ways. For some people, intelligence is something that humans and humans alone can have, by definition. Others are willing to attribute mental capacities to the lowliest of creatures or even to plants.

Alan Turing, in his landmark 1950 paper entitled “Computing Machinery and Intelligence”, proposed to answer a very similar question to the last one above. He argued in essence that if a machine were able to fool a human into thinking that it, too, was human, then we would be justified in saying that it could think.1 Out of this sprang the popular “Turing Test” for intelligence, which involves a machine candidate, a human confederate, and a human interrogator. The three participants are isolated from one another. The machine and the human confederate are not allowed to interact; however, the interrogator may communicate freely with both the human and the machine via a computer terminal by which textual messages may be sent back and forth. It is the interrogator’s task to determine in a limited amount of time2 which one is the machine, and which one is the human. If the machine is able to convince the interrogator of its “humanness” as often as the other human is, then the machine is considered to have passed the Test.

Turing observed that his Test may place the machine at an unfair disadvantage. Is it really necessary to demand such faithfulness to human-like intelligence, in particular? Isn’t there a more general test which a machine might be capable of passing without in fact having to pass itself off as human? Perhaps being able to pass the Turing Test is not necessary for the attribution of mental capacities, but surely it is sufficient, or so he argued. (op. cit.: 55)

Probably the most famous instantiation of the Turing Test is the Loebner Prize, an annual competition first run in 1991, in which computers vie for the title “most human-like”. The following is an excerpt from the transcript of the 1998 winner of the Loebner Prize, Robitron Software Research Inc.’s “Albert One”:3

Interrogator: what do you work as?
Program: I’ve worked on a lot of PC’s and some mini computers and mainframes when I was in college.
Interrogator: what did you do at uni?
Program: You should worry more about what you mean.
Interrogator: what do you mean?
Program: You should worry more about what you mean.
Interrogator: and what does that mean?
Program: Tiberius
Interrogator: okay, well if you’re going to be completely incoherent how am I supposed to have a conversation with you?
Program: Do you like Mexican food?
Interrogator: never had any. do you like oysters?
Program: I can certainly hope that any. You aren’t going to disappoint me I hope?
Interrogator: what do you mean by that?
Program: I don’t know what I mean. You tell me what you think and I’ll tell you if I agree or not.

Needless to say, this is not an altogether impressive performance, and indicates that the current state of the art in conversational software is still not very advanced. It certainly falls far short of the sort of dialog that Turing anticipated: (op. cit.: 60)

Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?
Witness: It wouldn’t scan.
Interrogator: How about “a winter’s day”? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter’s day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

This exchange is far more convincing. Yet even this level of competency wouldn’t satisfy everyone. Gunderson (1964), among others, has made the claim that the Turing Test measures too narrow a range of human behaviour. Such complaints might be paraphrased: “Hold an intelligent conversation? Is that all it can do?! Then it can’t possibly be intelligent!” In particular, there is the sentiment that having a more human-like means of interaction with the physical world should be required; for instance, being able to recognize objects visually. Even Hugh Loebner, sponsor of the aforementioned prize, has high expectations for the as yet unawarded $100,000 grand prize. In contrast to the yearly $2000 “most human” winners, the grand prize winner will have to deal with audio/visual input.4 (Loebner 1994)

My thinking machine needn’t be able to throw a football or recognize an orange, because while those activities do require humans to perform some mental computations, they do not exemplify what I consider to be our most important abilities. I agree with Hofstadter, who lists the following mental faculties as minimum requirements for intelligence (1979: 26 — I enumerated the list for ease of reference):

  1. To respond to situations very flexibly;
  2. To take advantage of fortuitous circumstances;
  3. To make sense out of ambiguous or contradictory messages;
  4. To recognize the relative importance of different elements of a situation;
  5. To find similarities between situations despite differences which may separate them;
  6. To draw distinctions between situations despite similarities which may link them;
  7. To synthesize new concepts by taking old concepts and putting them together in new ways; and,
  8. To come up with ideas which are novel.

There are some noteworthy omissions from this list, not the least of which is the capacity for sophisticated language use! This is not to say that sophisticated language use is not important, merely that I do not consider it necessary for the attribution of mental capacities since I can imagine beings without it that I would still consider intelligent. Nor is there an item requiring audio/visual capacities or any other specific means of interaction with the world. Some means of interaction with the world must be assumed, of course, but there is no stipulation as to what form this must take. It may be as complicated as a diverse set of sensory-perceptual systems similar to that of humans, or it may be as elementary as a dumb terminal. Language, at least, does provide us with a means of determining whether Hofstadter’s requirements are met by a particular candidate, so it could be seen as a necessary component in the absence of any other complicated means of interaction.

The programs claimed to have passed the Turing Test for intelligence have not yet come close to satisfying Hofstadter’s criteria. They are remarkable in their ability to contend with a variety of situations that arise in superficial conversation with forgiving interrogators, but their shortcomings are obvious as evidenced by the dialog above. Even the simple (for a human) ability to make analogies is lost on most conversational programs. Unfortunately, the interrogators often don’t know what to look for, and are satisfied with whimsical, incoherent conversations that change topic every sentence or spit back the interrogator’s own remarks with pronouns reversed:

Interrogator: I think you are a brainless automaton.
Program: Why do you think I am a brainless automaton?
Interrogator: Because you parrot everything I say.
Program: Does it bother you that I parrot everything you say?

Compare this to Turing’s poetry dialog from earlier.
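
The pronoun-reversal trick on display here is, in fact, almost trivial to program. The sketch below is my own illustration of the general idea in Python, not the code of any actual Loebner entrant; note how its crude output (“I are”) betrays just how shallow the technique is.

```python
# Minimal sketch of the ELIZA-style pronoun-reversal trick: echo the
# interrogator's remark back with first- and second-person terms swapped.
# Purely illustrative; not the code of any actual Loebner Prize entrant.

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(remark: str) -> str:
    """Swap person-markers word by word, keeping everything else as-is."""
    words = remark.strip().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

def respond(remark: str) -> str:
    """Turn any statement into a content-free follow-up question."""
    return f"Why do you say that {reflect(remark)}?"

print(respond("I think you are a brainless automaton."))
# -> Why do you say that you think I are a brainless automaton?
```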

To suggest that mental capacities either exist or do not in a given candidate is certainly misguided. There are obviously varying degrees of intelligence, and no sharp borderline between intelligent and non-intelligent. In addition, the “intelligence” of some machines is exhibited in only a restricted domain, such as chess or the works of Shakespeare. It is possible that some of these expert systems could meet Hofstadter’s criteria within their limited domain. If the domain is too limited, we would not want to attribute mental capacities; instead, we require a more general intelligence. So, we may be more confident in our attribution of mental capacities the better the candidate is able to meet Hofstadter’s criteria in a very large domain. Exactly how large is open to debate.

The Argument For

Proceeding with the idea that mental capacities exist if Hofstadter’s criteria are met within a sufficiently large domain, we will look at what I will call the Argument by Mental Faculties:

a. Meeting Hofstadter’s criteria is sufficient for mental capacities.
b. It is possible to design a machine that meets each of Hofstadter’s criteria.
c. Therefore, it is possible to design a machine with mental capacities.

I consider this argument to be the “best” one in favour of the conclusion, not because it is the most obvious or the easiest to support, but because I think that it offers the most insight into the meaning of mental capacities. It is not based on a characterization of mental capacities as simply “what humans do”, but rather on a distilled set of behaviours. In support of premise b, we’ll now look at the eight mental faculties in question.

1. The impression of computers as unbending, rule-following automatons is completely justified: that’s exactly what they are, looked at from the bottom up. But it is misleading to suppose that this low-level adherence to rules necessitates higher-level inflexibility. Flexibility can be programmed, though it may sound like a contradiction in terms. Indeed, teaching computers to become more flexible has been a main thrust of AI research since its inception, and not without some measure of success. Some excellent examples can be found in modern computer games implementing complex interactive 3D environments, in which computer-controlled opponents exhibit varied and seemingly intelligent behaviour in response to the player’s actions.
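
As a concrete (and entirely invented) illustration of how rigid low-level rules can yield varied high-level behaviour, consider the kind of finite-state logic a game might use to drive a computer-controlled opponent; this is a toy sketch of my own, not code from any actual game:

```python
# Toy finite-state controller for a game opponent. Every step is rigidly
# rule-following, yet the opponent's visible behaviour varies with the
# situation: it patrols, searches, attacks, or flees as circumstances change.

def next_state(state: str, health: int, player_visible: bool, distance: float) -> str:
    if health < 25:
        return "flee"                      # self-preservation overrides all else
    if not player_visible:
        return "search" if state in ("attack", "search") else "patrol"
    if distance > 20:
        return "search"                    # player spotted, but far away
    return "attack"

state = "patrol"
for health, visible, dist in [(100, False, 0), (100, True, 30), (80, True, 5), (20, True, 5)]:
    state = next_state(state, health, visible, dist)
    print(state)                           # patrol, search, attack, flee
```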

2 – 4. Taking advantage of fortuitous circumstances, dealing with ambiguous and contradictory data, and identifying salient features of a situation: all these abilities involve having some sort of a conceptual model of the environment or situation, being able to recognize changes in it, and being able to assign significance to different parts of it. For humans, this is called “common sense”. When it comes to computers, though, common sense is not so common. The most straightforward approach to endowing a computer with common sense is being taken by Douglas Lenat, whose CYC project aims to explicitly “prime the pump with the millions of everyday terms, concepts, facts, and rules of thumb that comprise human consensus reality”. CYC has been in development since 1984. By 2001, it is hoped, it will have accumulated a critical mass of common sense, and will be ready to start learning on its own, “by automated-discovery methods”, thus creating a snowball effect of knowledge acquisition. (Lenat 1997: 201-3) Lenat’s method is regarded as something of a brute-force approach, but it will nonetheless result in a program with pretty good knowledge of how the world works, and thus the ability to model a broad range of real-world situations effectively.
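
To give a flavour of what explicitly “priming the pump” amounts to, here is a toy sketch of a hand-entered knowledge base with a single mechanical inference rule. The representation and the facts are my own inventions for illustration only; CYC’s actual language (CycL) and knowledge base are vastly richer.

```python
# Toy illustration of an explicit common-sense knowledge base: hand-entered
# facts plus a simple rule that derives new facts (transitivity of "isa").
# My own invented sketch; not CYC's actual representation or content.

facts = {
    ("isa", "strawberry", "fruit"),
    ("isa", "fruit", "food"),
    ("property", "food", "edible"),
}

def close_under_transitivity(facts):
    """Repeatedly add ("isa", a, c) whenever ("isa", a, b) and ("isa", b, c) hold."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        isa = [(a, b) for (rel, a, b) in derived if rel == "isa"]
        for a, b in isa:
            for c, d in isa:
                if b == c and ("isa", a, d) not in derived:
                    derived.add(("isa", a, d))
                    changed = True
    return derived

kb = close_under_transitivity(facts)
print(("isa", "strawberry", "food") in kb)   # True: a strawberry is a food
```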

5 – 6. The ability to form analogies, which is what these two criteria signify, is considered by psychologists to be a central feature of human intelligence. This is evidenced by the fact that a considerable portion of the questions on any standard IQ test isolates this skill in the form of either verbal or visual analogy completions. (Yet the Turing Test makes no such demands on the candidate! — at least not explicitly.) Hofstadter’s more recent work (1995) has dealt largely with computer models of analogical thought. Of particular interest are his programs Seek-Whence, which solves analogy puzzles using sequences of integers,5 and the more elaborate CopyCat, which watches the user make some sort of change to a string of characters, and then performs an analogous change to a different string.6 The domain of these analogies is much more limited than what a human is comfortable with, but it at least serves as a proof of concept.
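
CopyCat itself is an elaborate architecture of competing “codelets” operating over a network of concepts, which I will not attempt to reproduce. Purely to give the flavour of the letter-string domain, here is a literal-minded sketch of my own that assumes the change to be copied is a substitution at a single position:

```python
# Crude, literal-minded sketch of the letter-string analogy domain.
# Nothing like Hofstadter's actual Copycat architecture; it simply assumes
# the observed change is a one-position substitution and reapplies it.

def infer_change(before: str, after: str):
    """Find the single position at which the string changed, and the new letter."""
    diffs = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    if len(diffs) != 1:
        raise ValueError("expected exactly one changed position")
    return diffs[0], after[diffs[0]]

def apply_change(target: str, change) -> str:
    """Do 'the same thing' to another string, reading 'same' purely positionally."""
    i, letter = change
    return target[:i] + letter + target[i + 1:]

# "I change efg into efw. Can you do the same thing to ghi?"
print(apply_change("ghi", infer_change("efg", "efw")))   # -> ghw
```

For these particular strings, “change position three” and “change the last letter” happen to coincide; the interest of a program like CopyCat lies precisely in the cases where such competing readings pull in different directions.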

7 – 8. These last two criteria have to do with creativity. Everyone knows that machines are utterly uncreative, right? A passage from Turing may be enlightening at this point. Turing anticipated a variety of objections to the idea of machine intelligence, one of which he called Arguments from Various Disabilities. Such arguments make the claim, “you will never be able to make a machine do X,” where X can be any of a number of behaviours including “do something really new”. However, he noted, no support is ever offered for such claims. With subtle humor, he continues:

I believe they are mostly founded on the principle of scientific induction. A man has seen a thousand machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for minutely different purposes they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general. (op. cit.: 61)

Computers have written short stories, novels, and classical music, as well as created original drawings. These programs often come up with “ideas” completely unforeseen by their creators. The quality of their creative works may be disputed, but again we have a proof of concept.7

It will likely be quite some time before we see a machine capable of meeting Hofstadter’s criteria in any domain as broad as that in which humans operate — one to which we would attribute mental capacities. Nonetheless, each of the criteria has already been met in limited domains.

By boldly and explicitly stating a set of qualifying abilities for mental capacities, the Argument by Mental Faculties not only narrows our focus down to the important issues, but also suggests research directions.

The Argument Against

One thing has been monstrously difficult to reconcile in any formal way with the possibility of designing mental capacities into machines: the subjective experience or feeling of mind, that “inner life”, that sense of being or sentience—consciousness itself. How can a sequence of neural firings or any other physical process create sensations like the taste of a strawberry? While we might be able to create a robot capable of running a thorough chemical analysis on a fruit sample, determining the proportions of the constituent compounds and thus identifying the fruit as a strawberry rather than, say, a kiwi, it is not at all clear that we could also program in the experience of taste, or of color, or of elation, or of satisfaction. Suppose our robot spoke in the first person only because that’s what it had been programmed to do, and that in fact it had no sense of self, no inner fire, no idea of what it is like to “be”. Would it be fair to attribute mental capacities to our robot, if we knew this? Most people would be inclined to say “no”.

This so-called mind-body problem (as well as the related other minds problem8) has stumped an astounding number of thinkers. In his 1997 book How the Mind Works, arguably one of the most comprehensive and authoritative modern treatments of its subject, Steven Pinker can only mark the issue of subjective experience as a problem which “continues to baffle the modern mind”. (Pinker 1997: 558) Noam Chomsky suggests that some problems may simply fall outside human cognitive capabilities, due to biological restrictions. For such problems he reserves the term “mystery”, since they are bound to remain forever unsolved by humans. He speculates that mental experience may be one such mystery. (Chomsky 1993: 44-6)

The poor general understanding of this issue has naturally led to some doubts as to whether subjective experience is in fact replicable in a machine. We often see the following Argument by Consciousness, a simple extension of the other minds problem to machines:

d. Subjective experience is a necessary condition for mental capacities.
e. It is impossible to verify that a machine has subjective experience.
f. Therefore, it is impossible to verify that a machine has mental capacities.

And then, if we can’t detect it, how can we create it?

f. It is impossible to verify that a machine has mental capacities. (from above)
g. But in order to design a machine with property P, we must have some way of telling when P is present.
h. Therefore, it is impossible to design a machine with mental capacities.

Turing quotes a certain Professor Jefferson as saying, “No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse…” (op. cit.: 60) I have found a similar sentiment to be rampant in the general population. Many people find it easy to imagine an entity bodily and behaviourally similar to humans, but lacking a sense of self-awareness, i.e. lacking subjective experience of the type that you and I enjoy; sadly, that is an ability I do not share. Such a robot with sensory apparatus can feel, it is supposed, but only in the sense of registering sensations without there actually being anything it “feels like” to have them. I have a very hard time imagining such a thing, but there is no use in arguing over whose imagination is closer to the truth.

Instead, what I want to attack is premise e, that verification of subjective experience is impossible. How do we know this? The only way we can observe the subjective experience of some other entity E first-hand is to actually be E. Since I am not E, I will never be able to know first-hand whether E has a subjective experience. Granted. The problem is in the idea that the only way to be certain that subjective mental experience exists is to observe it first-hand.

I believe that this idea is mostly founded on the principle of scientific induction. A man observes a limited number of minds in his lifetime (i.e. his own). From what he sees of them, he draws a number of general conclusions. They are observable, by his own first-hand experience. Naturally, he assumes that these are necessary properties of minds in general.9

But let us build a robot, give it an assortment of high-fidelity sensory apparatus and some complicated software for interpreting the data and integrating it with a knowledge base, give it a memory for past experiences, and let it meet Hofstadter’s criteria for intelligence over a very broad domain. I submit that such a contraption necessarily has a subjective experience, and that to imagine anything different is to entertain pure fantasy. After all, what is subjective experience but the feeling of being there? And is it not a contradiction in terms to speak of something “sensing without feeling”?

Now obviously, there is a range of possible levels of “quality” or “intensity” in subjective experience, directly related to the complexity and fidelity of the sensory apparatus, and to how effectively the subject is able to integrate, interpret, and thus understand the sensory data. This process may rightly be called its “perception”. If, as a thought experiment, we were to remove your ability to perceive the world, including cutting off sensory input from within your own body so that you truly couldn’t feel anything (assuming we could do this without killing you), would you have any grounds for maintaining that you were still conscious? “Yes,” some would argue, “I would still have an inner life, as I would still be able to observe my own thoughts.” Perhaps, but would there be any separation between you and the rest of the world? Would you be self-conscious? “Yes,” some would still argue, “in my memory.” Ah, but what of when those memories lost their immediacy? I think now you would have a very hard time maintaining that you were self-conscious. Sensation-perception and sensory-perceptual memory are necessary for continuing self-consciousness, i.e. subjective experience or awareness. But more than this, they are also sufficient for continuing self-consciousness, by the observation that when we have taken them away from consciousness, nothing remains.

Final Remarks

All this implies that consciousness and mental capacities are not in fact the same thing, though they are often confused, and furthermore that it might be possible to have one and not the other. For example, consider a computer program capable of passing the Turing Test with flying colors, capable of meeting all of Hofstadter’s criteria for intelligence in a very large and complex domain. Then we would grant that this program has mental capacities. Yet if its only means of interaction with the world is via a dumb terminal, then its sensory-perceptual apparatus is very limited in scope, and our degree of confidence in saying that the program was conscious would be quite low. Alternatively, consider a robot with an extremely complicated, varied and integrated sensory-perceptual system, and with a wide range of responsive behaviours, but which fails to exhibit any reasonable amount of intelligence in the sense of Hofstadter’s criteria. Then we would be perfectly justified in describing this entity as conscious, but not as having mental capacities.10 Looked at this way, the Argument by Consciousness against machine intelligence is off the mark.

References

Chomsky, N. (1993) Language and Thought. Moyer Bell, Wakefield, 1997.

Gunderson, K. (1964) “The Imitation Game” Mind 73: pp. 234-45.

Hofstadter, D. (1979) Gödel, Escher, Bach: An Eternal Golden Braid. Vintage edition, New York, 1989.

Hofstadter, D. (1995) Fluid Concepts and Creative Analogies. BasicBooks.

Kurzweil, R. (1997) “When Will HAL Understand What We Are Saying? Computer Speech Recognition and Understanding”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 131-69.

Lenat, D. (1997) “From 2001 to 2001: Common Sense and the Mind of HAL”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 193-209.

Loebner, H. (1994) “In response”, Communications of the ACM, Vol. 37, No. 6: pp. 79-82.

Pinker, S. (1997) How the Mind Works. Norton & Co., New York.

Rosenfeld, A. (1997) “Eyes for Computers: How HAL Could ‘See’”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 211-35.

Shieber, S. (1994) “Lessons from a restricted Turing test”, Communications of the ACM, Vol. 37, No. 6: pp. 70-78.

Turing, A. (1950) “Computing Machinery and Intelligence”, reprinted in The Mind’s I, Hofstadter and Dennett, eds. Bantam, New York, 1988: pp. 53-67.

Notes

1 Actually, Turing considered the question, “Can machines think?” to be “too meaningless to deserve discussion.” He nevertheless predicted that by the end of the century, common usage of the word would be extended to include some machine activities. (1950: 47)

2 Turing suggested five minutes, but the exact length of time is not considered to be terribly important.

3 The full transcript is available on the web at <http://www.phm.gov.au/whatson/pc1.htm>. More information about the Loebner Prize, including contest rules and transcripts of previous winning conversations, can be found at the official website http://www.loebner.net/Prizef/loebner-prize.html. Shieber (1994) explains how the 1991 competition was run, and comments on its efficacy.

4 It is not anticipated that the grand prize will be won any time soon. The current state of the art in speech recognition (see Kurzweil 1997) and computer vision (see Rosenfeld 1997) is nowhere near human-level.

5 For example, “What in ‘12344321’ corresponds to ‘4’ in ‘1234554321’?” (Hofstadter 1995: 195)

6 For example, “I change efg into efw. Can you ‘do the same thing’ to ghi?” (op. cit.: 202)

7 Examples of such software include Harold Cohen’s drawing program “Aaron”, and Chamberlain & Etter’s “Racter”, which writes original prose.

8 Specifically, the mind-body problem is the problem of establishing the exact nature of the relationship between “mental phenomena” and “physical phenomena”, where the two are assumed to be distinct. The other minds problem concerns the apparent impossibility of verifying the existence of other minds, i.e. of subjective experiences other than one’s own.

9 Turing might agree with me on this. ;-)

10 Most animals fall into this category.
