Facsimile Magazine, Published by Haoyan of America. Volume Three, Number Ten, 2009. ISSN 1937-2116.
The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
- Alan Turing, 1950
By Cynthia Goodman, from Digital Visions: Computers & Art, 1987
Computers are making unprecedented aesthetic experiences possible and revolutionizing the way art is conceived, created, and perceived. The profound impact of digital technology on the art of the last twenty years and what it portends for the future is only beginning to be appreciated. Although the first computer-aided artistic experiments took place just twenty-five years ago, computers have since been applied to every facet of the artmaking process. No other medium has had such an extraordinary effect on all the visual arts so soon after its inception. Painters, sculptors, architects, printmakers, filmmakers, and video and performance artists, irrespective of their stylistic creeds, are responding to the rapidly developing possibilities of the new, quickly evolving technology and the dazzling array of options computers offer them and the art-viewing public alike.
Not long ago, artists were thrilled when quick-drying acrylic paints were perfected. Today, pigment is not even necessary; electronic color creation can be achieved instantaneously, entire compositions can be recolored in seconds, and lighting and positioning can be transformed with the mere touch of a light-sensitive cursor. Some computer systems offer palettes of over sixteen million colors, the maximum number discernible by the human eye on a video monitor. Other intriguing options that lure artists to experiment with computer-imaging techniques include the manipulation of compositional scale and format in ways for the most part impossible in physical mediums. Live video images can be transformed by electronic paint, and pictures may be saved at any stage of their creation, referred to later, or restructured without irreversibly altering the original art. Software programs for the three-dimensional modeling of images have enabled artists to create representations of astonishing verisimilitude that can be rotated, relocated, and seen from any viewpoint or in any perspective on the computer screen, as if they were objects in actual space. It is almost beyond conception that such amazingly realistic pictures, endowed with textured surfaces and lighting effects, exist only as binary digits stored in the memory of a computer.
The interactive abilities of computer systems hold the key to radical changes within the artmaking process. Ingenious developments of this potential by artists in every field are producing unique and previously inconceivable art forms. Through electronic implementation, sculptures and environments can be activated to follow programmed patterns of movement or even to respond to external stimuli. (Interactive transformations occur in "real time"; that is, the processing happens as soon as the stimuli are received, and the results are visible immediately.) In the case of some interactive installations, the presence of the artist is not required for the viewer to be both a participant in and observer of the creative process. Once an artist establishes certain boundaries, the interactive behavior of a piece is limited only by the inventiveness of the spectator. With such works it may be impossible to witness the same interaction twice. In a paradigmatic union of art and science, artist, viewer, and computer become collaborators in a cycle of both controlled and surprising events.
In 1986, Bartlett was invited along with several prominent artists - David Hockney, Howard Hodgkin, Sir Sidney Nolan, and Larry Rivers - to the plant in Newbury, England, where Quantel manufactures Paintbox, a computer animation system developed for the broadcasting industry and commonly used to create logos and special effects for television. With this system, an artist can conduct an interactive dialogue with the computer by drawing with a light pen on a digitizing tablet and selecting colors and brushes from a menu of options on the screen. Paint systems allow the artist to take advantage of the computer's range of colors and effects, its memory, and its unique aesthetic without radically altering traditional working methods. As the artist draws, the results appear on the screen, though working this way does require an adjustment of hand-to-eye coordination. After the artists learned to use the computer, the series of images they produced showed the amazing versatility of the Quantel Paintbox.
The new technology is affecting every aspect of our art-viewing lives. Calling up a painting on a computer screen may well become as commonplace as going to a museum. Digital art may soon be transmitted via a subscription channel on television or rented for the evening as movies are today. Nam June Paik, the acclaimed pioneer of video art, envisions television screens the size of murals hanging on our walls to display video images as animated works of art. Art will be sold like record albums, he predicts, and there will be a top ten chart of the most popular hits.
Developments in the computer field are occurring at an almost inconceivable pace. Eagerly anticipated capabilities, previously possible only on sophisticated "high-end" systems, are announced as within reach of the personal computer one year and widely available the next. Established guidelines for software and hardware hold true for minimal amounts of time. Major breakthroughs continually make the machines more capable, quicker, less expensive, and easier to use.
It must be noted, however, that numerous state-of-the-art capabilities are still costly to produce and require highly sophisticated programming and powerful computing systems. Many of the more advanced effects are therefore developed for television and motion pictures, whose budgets are commensurate with the computational requirements. In feature films, computer-generated special effects are increasingly commonplace, often convincingly situating actors in the unfamiliar regions of outer space. Yet even supercomputers are still put to the test in the realm of digital-image synthesis. Images of photographic realism, in particular, are among the computer's most impressive technical accomplishments to date. The twenty-five-minute, ominously lifelike sequence in Lorimar's film The Last Starfighter, for example, required more than a quadrillion calculations.
Laposky's "oscillons" were the first graphics made on an analog computer. For many years, they represented the most advanced achievements of what was known as computer art. His oscillons are photographs of electronic wave forms displayed on a cathode-ray tube.
Although enthusiastically welcomed by the film and broadcasting industries, computers have not been readily adopted by most artists. Given their enormous potential as visualizing tools, the reticence of the art community is somewhat perplexing. Musicians and poets were considerably more accepting. As early as 1957, Lejaren A. Hiller programmed an electronic composition, "Illiac Suite," on the ILLIAC computer at the University of Illinois at Champaign. For musicians who had long worked with and been frustrated by the imprecision and unreliability of analog systems, the digital computer was welcomed as a means of creating highly defined sounds. Poets, who are always searching for novel means of restructuring language, were also quick to grasp the ability of computer programming to offer them unanticipated combinations of words. The enthusiasm and interest with which fine artists are just now responding to the mention of computers is as profound as their disinterest and antagonism only a few years ago. Artists as diverse in their interests and styles as Kenneth Noland, Jack Youngerman, and Nam June Paik are anxious to have access to computers - as a "digital sketchpad," an "electronic thinking cap," or a collaborative partner - that can realize some of their artistic goals.
This overwhelming change of attitude reflects the impact of computers on all aspects of our daily lives - a phenomenon directly attributable to recent developments in microelectronics and the consequent impact on the cost of hardware. According to sculptor Milton Komisar, the only way he was able to acquire a personal computer in 1973, when he first became interested in making computer-controlled light sculptures, was to build one from a do-it-yourself kit, a challenge only a few artists were up to. With the introduction of the microprocessor in the late 1970s, the capabilities available today on relatively inexpensive computers, costing as little as fifteen hundred dollars, are commensurate in some ways with those that existed only on mainframes, costing one hundred thousand dollars and up, a few years ago. Moreover, the enormous mainframes occupied entire rooms and required a large staff to maintain them. Their settings did not appeal to most artists, who understandably preferred the comfort of their studios to sterile laboratories and the seemingly labyrinthine procedures that often accompanied admittance to sophisticated computer systems. Both the ambience and the manner of working in an automated environment were disturbingly at odds with the habits of painters and sculptors who were used to realizing their creative ideas in pencil and paint or clay and metal.
This image, which is meant to approximate Piet Mondrian's Composition With Lines, was generated by a digital computer and a microfilm plotter using pseudorandom numbers. When Noll, in a much-publicized experiment, showed Xeroxes of both pictures to one hundred people, fifty-nine preferred the composition of the computer-generated picture.
Komisar's example may seem extreme; before the late 1970s, however, computers were very much restricted to governmental, industrial, and academic workplaces. Even if an artist ingeniously gained access to a computer system, the successful realization of an image was a direct correlation of his ability to convey an artistic concept to a programmer, who then attempted to find a mathematical equivalent for it. No longer are such collaborations necessary. The personal computer software on the market today is "user friendly"; that is, easy for anyone to operate. Furthermore, the applications are diverse enough for artists to use regardless of stylistic constraints.
The potential applications of computers to artmaking are much broader than might be suspected. For some artists, the computer is merely a tool that facilitates design decisions; for others, the artwork itself assumes the form of direct computer output; still others think of computer output as the point of departure for further elaboration and execution in an entirely different medium. The most frequent practice, of course, is the generation of images on a screen that are then retrieved from storage in the digital memory of a computer and output in a tangible form called "hard copy." Hard-copy devices can produce images in many formats, including film, printer drawings, plotter drawings, color Xeroxes, textiles, and video-each with unique aesthetic values. Moreover, it is increasingly common for computer-generated imagery to be translated into traditional mediums. For a growing number of artists who have chosen to develop their images with computers, the thrill of filling an area with color by choosing the appropriate commands from a menu of options is still not comparable to squeezing pigment from the tube and being conscious of the smell and texture of paint as it is spread across a surface. Consequently, they have found ingenious ways to reintroduce the touch, physical contact, and the immediacy of materials they miss when working with computer technology. One solution is to project a computer-generated image onto a canvas and then to paint it by hand; another is to enhance hard copy with watercolor or pastel. Traditional mediums are used, and the artist benefits from the new range of design possibilities the computer offers.
James Pallas, with Progmod, a computer-driven sculpture that can "see" and "hear." In response to audio or visual sensations, it creates abstract patterns on its circular screen while displaying the numerical data that control the patterns on its video monitor. The front of Progmod is designed as a convenient desk for writing programs; its monitor and keyboards are located close by.
Until recently, most artists who used computer technology considered themselves part of a relatively closed and small community. Today, a computer art community still exists, but its mandate is broad and its membership vast. Literally thousands of artists who consider the computer their primary medium attend yearly meetings of SIGGRAPH (Special Interest Group for Graphics of the Association for Computing Machinery) and NCGA (National Computer Graphics Association), which are to the computer graphics world what the annual meetings of the College Art Association are to traditional artists and art historians. SIGGRAPH's ranks swelled from five hundred attendees at its first conference in 1974 to more than twenty-five thousand in 1986.
The most eagerly awaited events in the computer graphics world are the film and video reviews at the SIGGRAPH and NCGA conventions, when the latest computer animation techniques are unveiled to the rousing cheers and thunderous applause of the appreciative audience. Since the common goal of much high-end research is the simulation of reality through three-dimensional modeling techniques, last year's sensation was an animation called Luxo Jr., featuring two Luxo desk lamps endowed with humorous personalities and the ability to communicate with each other. "Reality is a convenient measure of complexity," says Alvy Ray Smith, one of the developers of the computer that Pixar specially designed to create the photographic-quality, computer-animated imagery with which the film was made. "But why be restricted to reality?"
That the term "computer graphics" is applied to animations such as Luxo Jr., flying logos that announce the national news on television, and electronically generated images of nudes created by Philip Pearlstein is understandably confusing. Indeed, the mention of computer art usually conjures up widely seen commercial images rather than the less-publicized fine arts applications. Although the tools for commercial and artistic work may be identical and the look at times strikingly similar, this book's selections focus only on the artistic applications of computers. With the elimination of exclusively commercial work, some basic criteria can be established to evaluate this new medium strictly as an artistic tool. (That the boundaries between art and technical virtuosity are not more clear is largely a function of the structure of the computer graphics world, in which some of the most spectacular digital imaging is still being done at the Los Alamos National Laboratory in New Mexico, the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, the Lawrence Livermore National Laboratory in California, and the Jet Propulsion Laboratory of the California Institute of Technology. It is these facilities that are equipped with supercomputers and research scientists who are developing the most advanced imaging capabilities for the entire field.)
Robert Abel's integration of computer-generated imagery with live-action performance has revolutionized the art of television commercials and has captivated audiences with its innovative look. Some of his most memorable commercials were for 7-Up and Levi Strauss and Company. In this witty animation, the colorful, three-dimensionally modeled figure of Ava revolves gracefully through fairyland settings, explicitly demonstrating the impressive powers of three-dimensional animation. In this scene Ava is dancing with her umbrella. The texture on her body was first digitized as a flat two-dimensional image and then wrapped around her body using the Evans and Sutherland PS2 System. The sky was hand-painted using a paint program.
Although the concern of this book is the fine arts, the work of some scientists is included in recognition of the fact that the arts are still inextricably intertwined with the achievements of computer researchers. There is a certain degree of irony in this situation. The same scientists who have done so much to advance computer graphics have also contributed to the confusion and criticism of the discipline. Indeed, rejection of computer art was initially based as much on the dubious aesthetic quality of early computer graphics accomplishments by scientists, who were mislabeled as artists, as on a fear of the machine itself. The computer's reception has been like that of photography in the nineteenth century. Just as photography was initially scorned and engendered vicious hostility, only to gain increasing acceptance and widespread application, the use of computers by artists will inevitably follow suit. Artists have always experimented with the latest tools, and computers are now especially conducive to artistic improvisation.
Before computer-generated art can be accepted unquestioningly as a legitimate artistic medium, some of the challenging aesthetic and philosophical issues it raises must be resolved. The most haunting questions concern the impact of the technology on the artist, the creative process, and the nature of art. More specifically, it is asked, to what extent do the available systems and software determine the results? Is an artist creatively restrained by the options available to him, either by available data or by the way in which it may be retrieved? Are new aesthetic criteria required to evaluate computer-aided art? Is the value of some computer art decreased by its non-unique nature and the fact that it may have been executed by a machine instead of by hand? Are all works displayable in hard copy merely multiples? It is too soon for answers to these questions. Recent accomplishments, however, clearly demonstrate the ability of an individual working with computer technology to assert a distinct form of creative expression.
First displayed in 1970 at Software, an exhibition of artistic uses of computer technology, Seek was a Plexiglas-encased, computer-controlled environment inhabited by gerbils, whose primary activity consisted of rearranging a group of small blocks. Once the arrangement was disrupted, a computer-controlled robotic arm rebuilt the block configurations in a manner its programmers believed followed the gerbils' objectives. The designers, however, did not successfully anticipate the reactions of the animals, who often outwitted the computer and created total disarray.
In spite of misconceptions, computers have had an impact on all the art forms and movements prominent in the last twenty years, including Conceptual Art, Earth Art, photo-realism, Performance Art, Minimal Art, holography, and robotics, as well as the more traditional genres of portraiture, landscape, and still life. Moreover, many artists who were seduced by the seemingly limitless possibilities of electronic media have reevaluated and transformed their total approach to the artmaking process. For these artists, the commitment to digital technology is philosophical as well as aesthetic.
The challenge of artificial intelligence is but one area that continues to inspire provocative research. Sculptural environments and computer graphics systems are being developed to simulate the intellectual logic and methodology of humans. British artist Harold Cohen has taken the concept of an intelligent machine in a direction that embodies the dreams of both futuristic enthusiasts in the artificial intelligence field and the nightmares of many traditional artists. He has programmed a computer to control a mechanical drawing machine that is quite capable of making remarkably naturalistic drawings on its own.
With the increasing accessibility and affordability of computers, a growing understanding of the potential applications, the development of new software tailored to artistic requirements, and a generally more open-minded attitude about their use in creative endeavors, computers in all likelihood will soon be unchallenged as one of the implements available to an artist for the creation of a work of art. Research and development in computer graphics is proceeding apace around the world, suggesting future developments in electronic imaging capabilities that will be adaptable to the creative needs of virtually any artist.
From The HP 9845 Project
The demo package for the HP 9845C is remarkable for several reasons. One aspect is that it is one of the most complex BASIC programs ever developed for the 9845 series. The demo required two full tapes as distribution media, and the main program consisted of more than 4,000 lines of BASIC code. The other, more important aspect lies in the fact that HP's engineers not only demonstrated the outstanding capabilities of their 9845C system with high resolution graphics, a fast vector engine, and up to 4,913 colors, they also used the 9845C as a platform for implementing many state-of-the-art concepts in computer graphics and human interaction, such as 3D shading, ordered dithering, wireframe rendering, interactive light pen control, or simply using color for better visualization of complex data.
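Incidentally, the figure of 4,913 colors is less arbitrary than it looks: 4,913 = 17 cubed, which suggests a palette built from 17 discrete levels on each of three color axes. (The three-axis interpretation is an assumption here; the 9845C's actual color model is defined in HP's documentation.) A quick sanity check:

```python
# The 9845C's 4,913-color palette factors neatly as 17^3, consistent
# with 17 discrete levels on each of three color axes (an assumption;
# HP's documentation defines the exact color model).
levels = 17
palette_size = levels ** 3
print(palette_size)  # 4913
```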
The examples chosen for the demo cover the broad range of applications the 9845C was intended for. In total there are six categories, each consisting of six demo applications, giving altogether 36 different demo cases. Each demo case has its own controls, so, for example, different visualization parameters, data sets, object parameters, or viewpoints can be selected. The whole demo application is completely menu-driven, with one main menu and six submenus. Navigation within those menus is possible with either light pen or soft key control. The categories are:
These sections range from basic information, how to produce and to use color on a computer system, to general data visualization techniques and common engineering, scientific and business applications, up to using 3D graphics for architects and a light pen controlled, color graphics computer game (Gravity).
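The two-level structure described above (one main menu of six categories, each opening a submenu of six demo cases) can be sketched as a simple nested dispatch table. This is purely illustrative; the category and demo names below are placeholders, not the identifiers used in the actual BASIC program:

```python
# Sketch of the demo's two-level menu: a main menu of six categories,
# each listing six selectable demo cases (all names are illustrative).
menus = {
    f"Category {c}": [f"Demo {c}.{d}" for d in range(1, 7)]
    for c in range(1, 7)
}

def run_demo(category: str, index: int) -> str:
    """Dispatch a demo case the way a menu selection would (1-based index)."""
    return menus[category][index - 1]

# 6 categories x 6 demos = 36 cases in total
total_cases = sum(len(cases) for cases in menus.values())
print(total_cases)                 # 36
print(run_demo("Category 3", 5))   # Demo 3.5
```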
The demo software was part of each HP 9845C distribution package as part number 09845-15240 and included two software tapes plus a user manual. Although in principle there are still enough demo software tapes around today, most suffer from data loss due to defects in the magnetic coating. Since several versions were produced, it was not possible for me to reconstruct one complete demo program from the tapes alone. Fortunately, Jon from the hpmuseum found one complete set of demo files on one of his backup discs, so it was possible to run the full demo and take example screenshots in order to get an idea of the capabilities of the 9845C.
Actually, it is not clear whether it is easier to find a complete set of demo software or a working 9845C system to run it.
If you would like to see the demo in action, here is a recording of a Polish science series from 1980 in which the HP 9845C was introduced. Of course, you need to know Polish to understand the dialogue; maybe someone would like to add subtitles...
Author's Preface by Fra. Belarion, O.T.O.
Since I first wrote this essay in 1946, some of the more ominous predictions have been fulfilled. Public employees have been subjected to the indignity of "loyalty" oaths and the ignominy of loyalty purges. Members of the United States Senate, moving under the cloak of immunity and the excuse of emergency, have made a joke of justice and a mockery of privacy. Constitutional immunity and legal procedure have been consistently violated and that which once would have been an outrage in America is today refused even a review by the Supreme Court.
The golden voice of social security, of socialized "this" and socialized "that", with its attendant confiscatory taxation and intrusion on individual liberty, is everywhere raised and everywhere heeded. England has crept under the aegis of a regime synonymous with total regimentation. Austria, Hungary, Yugoslavia and Czechoslovakia have fallen victims to communism while the United States makes deals with the corrupt dictatorships of Argentina and Spain.
As I write, the United States Senate is pursuing a burlesque investigation into the sphere of private sexual morals, which will accomplish nothing except to bring pain and sorrow to many innocent persons.
The inertia and acquiescence which allows the suspension of our liberties would once have been unthinkable. The present ignorance and indifference is appalling. The little that is worthwhile in our civilization and culture is made possible by the few who are capable of creative thinking and independent action, grudgingly assisted by the rest. When the majority of men surrender their freedom, barbarism is near but when the creative minority surrender it, the Dark Age has arrived. Even the word liberalism has now become a front for a new social form of Christian morality. Science, that was going to save the world back in H.G. Wells' time, is regimented, strait-jacketed and scared; its universal language is diminished to one word, security.
In this 1950 view some of my more hopeful utterances may appear almost naive. However, I was never so naive as to believe that freedom in any full sense of the word is possible for more than a few. But I have believed and do still hold that these few, by self-sacrifice, wisdom, courage and continuous effort, can achieve and maintain a free world. The labor is heroic but it can be done by example and by education. Such was the faith that built America, a faith that America has surrendered. I call upon America to renew this faith before she perishes.
We are one nation but we are also one world. The soul of the slums looks out of the eyes of Wall Street and the fate of a Chinese coolie determines the destiny of America. We cannot suppress our brother's liberty without suppressing our own and we cannot murder our brothers without murdering ourselves. We stand together as men for human freedom and human dignity or we will fall together, as animals, back into the jungle.
In this very late hour it is with solutions that we must be primarily concerned. We seem to be living in a nation that simply does not know what we are told we have and that we tell each other we have. Indeed, it is far more than that. It is to the definition of freedom, to its understanding, in order that it may be attained and defended, that this essay is devoted. I need not add that freedom is dangerous -- but it is hardly possible that we are all cowards.
By James Thurber from Fables for Our Time, 1940
A young and impressionable moth once set his heart on a certain star. He told his mother about this and she counseled him to set his heart on a bridge lamp instead. "Stars aren't the thing to hang around," she said; "lamps are the thing to hang around." "You get somewhere that way," said the moth's father. "You don't get anywhere chasing stars." But the moth would not heed the words of either parent. Every evening at dusk when the star came out he would start flying toward it and every morning at dawn he would crawl back home worn out with his vain endeavor. One day his father said to him, "You haven't burned a wing in months, boy, and it looks to me as if you were never going to. All your brothers have been badly burned flying around street lamps and all your sisters have been terribly singed flying around house lamps. Come on, now, get yourself scorched! A big strapping moth like you without a mark on him!"
The moth left his father's house, but he would not fly around street lamps and he would not fly around house lamps. He went right on trying to reach the star, which was four and one-third light years, or twenty-five trillion miles, away. The moth thought it was just caught in the top branches of an elm. He never did reach the star, but he went right on trying, night after night, and when he was a very, very old moth he began to think that he really had reached the star and he went around saying so. This gave him a deep and lasting pleasure, and he lived to a great old age. His parents and his brothers and his sisters had all been burned to death when they were quite young.
From "Aha!" by Crowley, quoted in Book 4
Do what thou wilt shall be the whole of the Law
Love is the Law, Love under Will
There are seven keys to the great gate,
Being eight in one and one in eight.
First, let the body of thee be still,
Bound by the cerements of will,
Corpse-rigid; thus thou mayst abort
The fidget-babes that tease the thought.
Next, let the breath-rhythm be low,
Easy, regular, and slow;
So that thy being be in tune
With the great sea's Pacific swoon.
Third, let thy life be pure and calm,
Swayed softly as a windless palm.
Fourth, let the will-to-live be bound
To the one love of the profound.
Fifth, let the thought, divinely free
From sense, observe its entity.
Watch every thought that springs; enhance
Hour after hour thy vigilance!
Intense and keen, turned inward, miss
No atom of analysis!
Sixth, on one thought securely pinned
Still every whisper of the wind!
So like a flame straight and unstirred
Burn up thy being in one word!
Next, still that ecstasy, prolong
Thy meditation steep and strong,
Slaying even God, should He distract
Thy attention from the chosen act!
Last, all these things in one o'erpowered,
Time that the midnight blossom flowered!
The oneness is. Yet even in this,
My son, thou shalt not do amiss
If thou restrain the expression, shoot
Thy glance to rapture's darkling root,
Discarding name, form, sight, and stress
Even of this high consciousness;
Pierce to the heart! I leave thee here:
Thou art the Master. I revere
Thy radiance that rolls afar,
O Brother of the Silver Star!
By Chris Wenham from disenchanted.com
Summary: The term "Artificial Intelligence" is so abused that it's impossible to properly define what it is anymore. Disenchanted takes on the problem by splitting it into five categories, with conversational machines being the topic of this first article in the series.
Marvin Minsky says we shouldn't intimidate ourselves by admiration of our Beethovens and Einsteins, and that there isn't much difference between ordinary thought and highly creative thought. If we could just somehow simulate the basic process on a computer, then in theory that computer should be capable of creativity as well. "Artificial" intelligence is our objective—a machine that thinks just like humans do. This field is broad enough that I've split it up into five categories that I'll be discussing in five articles; with one each month. What I'll discuss will not only include how intelligence works and could be simulated, but what purposes mankind will put it to and the impacts it'll have on humanity's own way of thinking. (Plus, you'll also see why I've chosen to put quote-marks around the word "Artificial.") This article is the first in the series and will talk about the first and best known category of AI—the conversational, or "chatty", machines.
Wouldn't you rather play a good game of chess?
Conversational AI begins with the prejudice that language is not just the best way to judge sentience, but is also the best way for the computer to perceive the world. In fact, the language prejudice is embodied in the de-facto test of an AI's worth: the Turing Test.
It works like this: Put a human in one room, a computer in another, and a human Interrogator in a third room. They're all connected together by computers and copies of AOL Instant Messenger (we've modernized the tools). The Interrogator isn't told which of the two contestants is the computer, merely that one is called ‘A’ and the other is called ‘B’. The Interrogator proceeds to ask questions of the two contestants until he figures out which one must be the computer. The computer, of course, is pretending to be a human.
Let's say that the Interrogator asks "What's the weather like today?" and gets the following responses from the two contestants.
Player A says: "It's hotter today than it was yesterday, and my clothes are sticking to my skin."
Player B says: "I hate it. I'm sweating like a pig and it's making me too uncomfortable to sleep at night."
The Interrogator might already suspect that Player A is the robot, since it merely relayed factual information (and unimaginatively, at that), while Player B actually expressed feelings. The comment about "my clothes are sticking to my skin" could have been pulled out of a table of responses keyed to the topic of the question—responses thought of in advance by the programmer.
The Turing Test is rather inexact, though, since a real but unimaginative human could potentially lose. For this reason most contests subject the machines to more than one judge. In the case of the Loebner prize, for example, the AI that fools the most judges "wins" (the Loebner prize actually has two levels of "winning"—$2,000 for the one that fools the most judges, and a real prize of $100,000 if the computer fools more than half of the judges).
It's of interest to note that the original form of the Turing test, as described by Alan Turing himself, doesn't involve any machine candidates at all, but has a man pretending to be a woman instead (suddenly, AOL Instant Messenger is relevant, again).
The second forgotten detail of Turing's original test is that the Interrogator isn't supposed to know he's looking for a computer at all.
We call it Voight-Kompf, for short.
If the Interrogator knows he's trying to find the robot then it makes it a lot harder for the computer to fool him, perhaps unfairly harder. After all, we don't interrogate each other in routine life. If a computer is simply looking to blend in with a crowd, then there are programs that have already succeeded.
While the original Turing Test gives the computer such an opportunity, its perversions of late have led to non-constructive trick questions as part of the Interrogator's repertoire. "What does the letter ‘M’ look like upside-down?" may be impossible for a computer to answer without a programmer anticipating it, but its failure to answer that question doesn't say anything about its worthiness as a mind. Most humans blind from birth can't answer that question, either.
A better test of a robot's intelligence may be to lock it in a room with a basket of household cleaning products, a book on explosives, and the instruction to "get out."
This kind of test would check not only for problem-solving intelligence, but also for self-awareness. A robot that solves the problem of making the explosive, but uses its own body and battery to hold and detonate the charge, will fail as a human-like thinker. The machine certainly solved the problem in the mathematical sense, but destroyed itself in the process. Why? Because animals and machines that aren't self-aware will inventory their own bodies as a disposable resource, never making the cognitive link between the body and the source of its own thoughts. Although it might alter the outcome of the test, you can't just give the robot a new directive ("do not destroy this unit") and expect the result to mean the same thing.
Hmm... the unfreezing process seems to have left me with no internal monologue!
We don't need real rooms, explosives and robots to do these kinds of tests, for all could be simulated on a computer. The language prejudice, however, might come not from the expense or difficulty of making physical simulations, but from the internal monologue—the "voice inside our head." We "think" in our native tongue, and so believe a computer should, too.
But it's not altogether clear that language skills are inseparable from intelligence, and again we can look to the capable intelligences of our blind, deaf, and brain-damaged for indicators.
Aphasia is a condition brought on by physical damage to the "language organ" of the brain, such as from a stroke or a wound. Since language skills seem to be spread across the brain (with hearing, speaking, reading and writing taking place in different parts), the severity of aphasia will depend on where the brain injury occurred. Broca's aphasia—for example—affects the ability to generate words (the patient may say "Walk dog" to mean anything from "I want to walk the dog" to "you take the dog for a walk"), while those afflicted with Wernicke's aphasia will speak in long and poorly constructed sentences with lots of nonsense words ("You know that smoodle pinkered and that I want to get him round and take care of him like you want before" means "The dog wants to go out, so I'll take him for a walk"). Global aphasia victims may lose all understanding of language completely.
Yet despite losing or confusing the gift of the tongue, aphasia victims do not become stupid. In fact they're painfully aware of their problem and are horribly frustrated with it. And they can still solve problems, deal with abstract concepts, recognize patterns, make cognitive leaps and behave as self-aware beings. Could this suggest that language, while a characteristic of intelligence, is not critical for it?
Must... think... in Russian. Think... in Russian.
You can argue that aphasia doesn't discount the possibility that the victim may still have a linguistic internal monologue and is simply unable to express it. A leading theory of thought is that we don't actually think in words anyway, but in structures of associations. It's our own sense of "listening to ourselves think" that's fooling us because the associations to our memories of spoken words are so strong and "well trodden." Imagine the President of the United States in a clown suit. Have you actually seen him in a clown suit before? If not, how did you manage to imagine that scene? Is there a copy of Photoshop inside your brain that is frantically manipulating bitmaps as fast as you can think?
What could more easily explain that ability is if your brain has not actually "painted a picture" of George Bush in a clown suit (no offense intended, Mr. Bush, we just picked the example because we're pretty sure nobody's seen you with a big rubber nose) but merely manipulated sequences of associations. We remember seeing a clown before, our brains' visual center was able to identify all the sub-objects and attributes of the face (eyes, nose, mouth, ears, color, etc.), and all we had to do was think of the attributes that represent our Fine Leader's face combined with a few extras that we remember from elsewhere.
We think there's a crisp photograph inside our heads, but is there? Could you wire your brain up to a computer that could measure synaptic activity and build a picture on the screen by finding the grid of neurons that presumably represent all the pixels of the picture? No, you could not.
You're being a bit brief, perhaps you could go into detail.
To get an idea of what we mean by "structures of associations", we can examine one of the many ways you can write a conversational AI. Returning to the language prejudice, we want a routine that can parse an English sentence and figure out its meaning. "George is a cat", for example, can be picked apart into a very simple knowledge tree.
The computer has focused on "is a," which it knows is a type of relationship. Elsewhere in the machine's memory is another knowledge tree that describes what the "is a" means in template form.
Matching the parsed sentence with the template tells it that "George" is an Entity identifier (a name), and that "cat" is an entity type. The number 5 represents a confidence level—it's halfway sure that any sentence matching this template is identifying an entity (George) and then giving it a type (cat). Going further on, the programmer might have also told it a few things about the word "cat."
Notice here that the computer is pretty sure (confidence is 8) that "cat" on its own nearly always refers to an animal, but is also "aware" (just less confident) that we might be using slang for a certain British automobile instead. If we'd said more in our opening sentence, like "George is a cat, his claws are sharp", then it could have matched the attribute of claws—boosting its confidence up to 10. Then it'd be really sure we're talking about an animal, specifically one that has fur and makes its own decisions. (Do you know a cat that doesn't have a mind of its own?)
It doesn't matter which words you use to label each node of the tree; we could use numbers instead to represent concepts such as a "cat" and then associate them with memories of words (analogous to our memories of word sounds and word shapes). But these knowledge trees do not represent understanding, they're merely a convenient data-structure upon which we can hang rules, so our "chat-bot" can at least produce replies that make sense.
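The template-matching just described can be sketched in a few lines of Python. Everything here is invented for illustration: the lexicon, the 0-to-10 confidence scale, and the single "is a" template stand in for the far richer trees a real chat-bot would need.

```python
# A minimal sketch of the "knowledge tree" idea described above.
# The lexicon entries and confidence numbers (0-10) are illustrative
# assumptions supplied "by the programmer," not a real implementation.

# Facts supplied in advance: word -> list of (meaning, confidence)
lexicon = {
    "cat": [("animal", 8), ("british-automobile", 3)],
    "dog": [("animal", 8)],
}

def parse_is_a(sentence):
    """Match the 'X is a Y' template and return a tiny knowledge tree."""
    words = sentence.rstrip(".").split()
    if len(words) == 4 and words[1:3] == ["is", "a"]:
        name, kind = words[0], words[3]
        senses = lexicon.get(kind, [("unknown", 0)])
        # Pick the sense the machine is most confident about.
        meaning, confidence = max(senses, key=lambda s: s[1])
        return {"entity": name, "type": kind,
                "meaning": meaning, "confidence": confidence}
    return None  # sentence didn't match any template we know

tree = parse_is_a("George is a cat.")
print(tree)
# -> {'entity': 'George', 'type': 'cat', 'meaning': 'animal', 'confidence': 8}
```

Note that "cat" resolves to "animal" simply because that sense carries the higher confidence; matching an extra attribute like "claws" would be a matter of boosting that number, exactly as described above.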
The job of formulating a response is a little trickier, but again we can use a tree, this time as a decision-making structure.
What this is saying is that if the computer is really sure (confidence is 10) that the subject is a self-directing entity, then it'll ask a question that assumes so ("How is he doing, these days?"—assuming another knowledge tree matches "George" with a male pronoun), but if it's less sure (confidence is between 5 and 10), then it'll play it safe with a more generic response.
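That decision-making tree reduces to a cascade of confidence checks. The thresholds and canned phrasings below are invented for illustration, in the spirit of the example above.

```python
# A sketch of the decision-making tree: pick a reply based on how
# confident we are that the subject is a self-directing entity.
# Thresholds and phrasings are invented for illustration.

def formulate_response(name, pronoun, confidence):
    if confidence >= 10:   # really sure: ask a question that assumes it
        return f"How is {pronoun} doing, these days?"
    elif confidence >= 5:  # halfway sure: play it safe
        return f"Tell me more about {name}."
    else:                  # no idea what we're talking about
        return "I'm not sure I follow."

print(formulate_response("George", "he", 8))
# -> Tell me more about George.
```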
The more sophisticated the knowledge and decision making trees are, the more "intelligent" the computer will seem. A clever programmer will have it ask questions that would resolve a lack of confidence, to make it seem as if the computer is able to intuit what we're talking about. And these examples of knowledge trees, although very simple, are meant to give you an idea of how a structure (tree) of associations (to memories of sensations) might work in your own mind. It's certainly easier to solve our earlier thought problem, now.
But lest we imagine that given a sophisticated enough set of knowledge and decision-making trees, the computer could begin to solve problems (or even have emotions), remember that this kind of program is merely reactive. No thought processes occur until the human has finished typing and presses the ENTER key, and then all it does is follow a set of rules that a programmer thought of beforehand. It's not solving problems, it's merely matching the input with solutions that have already been thought of by the programmer. If trees of associations are how we store knowledge in our own minds, then all the computer is doing is borrowing a couple of tricks—but not The Trick.
These types of programs, where knowledge and decisions are represented in trees, are better known as expert systems, and they're great for diagnosing car trouble.
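The car-trouble quip is worth taking literally: an expert system really can be as simple as a tree of yes/no questions leading to a diagnosis. The questions and faults below are made up for illustration, but the walk-the-tree mechanism is the whole trick.

```python
# A toy expert system: knowledge stored as a tree of yes/no questions,
# each answer leading to another question or a final diagnosis.
# The questions and faults are invented for illustration.

diagnosis_tree = {
    "question": "Does the engine turn over?",
    "no":  {"question": "Do the headlights work?",
            "no":  "Dead battery",
            "yes": "Faulty starter motor"},
    "yes": {"question": "Does it start and then stall?",
            "no":  "Check the spark plugs",
            "yes": "Clogged fuel filter"},
}

def diagnose(node, answers):
    """Walk the tree using a dict of question -> 'yes'/'no' answers."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node  # a leaf: the diagnosis string

answers = {"Does the engine turn over?": "no",
           "Do the headlights work?": "no"}
print(diagnose(diagnosis_tree, answers))
# -> Dead battery
```

Every answer the "expert" gives was thought of in advance by whoever built the tree, which is exactly the point made above: tricks, but not The Trick.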
Open the Pod-bay doors, please, HAL
Expert systems, combined with a rudimentary ability to break apart sentences and find their meanings, give us a practical application for conversational AI: user interfaces. We've come a long way since Leisure Suit Larry, and adding mature voice recognition (which itself makes use of another AI technology—the neural net—which we'll discuss in Part 2A) means that soon you'll be able to give arbitrarily constructed commands to a computer for it to follow.
Five years ago you could use halting English to give commands like "Open word processor. File. Open. Essay dot doc. Enter. File. Print. Okay.", but today it's possible to simply say "Open my essay and print it", and in the near future you'll combine a sophisticated sequence of commands into one sentence. "Use the Jones Project as a template for this new contract with the Smiths, figure out the budget for twice as many shipments, then have a printed copy ready for me along with a noon ticket to fly into Seattle." It doesn't take any intelligence at all to follow those commands, it just takes a sophisticated enough set of rules to break it up into simple enough commands.
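Breaking a sentence up into simple commands can be sketched with rules alone. The vocabulary, command names, and pronoun handling below are all hypothetical; the point is only that no intelligence is involved, just rules.

```python
# A sketch of turning one natural sentence into a sequence of simple
# commands, as described above. The verbs, command names, and the
# "it" rule are invented assumptions; a real system needs far more.

def decompose(sentence):
    commands = []
    target = None  # remember the last object, so "it" can refer back
    for clause in sentence.lower().rstrip(".").split(" and "):
        words = clause.split()
        verb = words[0]
        if verb == "open":
            target = words[-1]                 # e.g. "essay"
            commands.append(("OPEN", target))
        elif verb == "print":
            obj = target if words[-1] == "it" else words[-1]
            commands.append(("PRINT", obj))
    return commands

print(decompose("Open my essay and print it"))
# -> [('OPEN', 'essay'), ('PRINT', 'essay')]
```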
As for winning Turing tests and solving problems, AI research needs to unshackle itself from the language prejudice. A computer will not fool judges as long as it can only perceive the world through grammar (check out how bad the current state of the art is). They need to deal with the slightly less linguistic problem of representing and understanding the rules of the physical world.
The process of thought might be deceptively simple, like aerodynamic lift is, but without an equivalent to Bernoulli's principle we can't fathom how it might work. One obstacle is the learning process—the way a mind remembers what happened in the past, so it can apply it to decisions made in the present. It may be very separate from other functions of intelligence.
While some scientists bang their heads on that problem, others have attempted to bypass it and get on with exploring the more useful applications of reasoning, pattern recognition, and prediction. This brings us to the second category of AIs: the Hybrids. These are programs that, like all conversational AIs, have pre-digested memories supplied to them by humans, so they can get on with the far more interesting job of making deductions. And that's the topic of the next article in the series.
This month, January 12, to be precise, sees the birthday of HAL, the mission-control computer on the Jupiter-bound spaceship Discovery in Arthur C. Clarke's celebrated science fiction novel 2001: A Space Odyssey.
According to the book, HAL was commissioned at Urbana, Illinois, on January 12, 1997. In Stanley Kubrick's 1968 movie version, the date of HAL's birth was inexplicably changed to January 12, 1992. In any event, whether HAL is just about to be born or preparing to celebrate its fifth birthday, with the year 2001 practically upon us, it's natural to ask how correct Clarke and Kubrick's vision of the future has turned out to be.
Thirty years ago when the film was made, director Kubrick endowed HAL with capabilities computer scientists thought would be achieved by the end of the century. With a name that, despite Clarke's claim to the contrary, some observers suggested was a simple derivation of IBM (just go back one letter of the alphabet), HAL was, many believed, science fiction shortly to become fact.
In the movie, a team of five new millennium space explorers set off on a long journey of discovery to Jupiter. To conserve energy, three of the team members spend most of the time in a state of hibernation, their life-support systems being monitored and maintained by the on-board computer HAL. Though HAL controls the entire spaceship, it is supposed to be under the ultimate control of the ship's commander, Dave, with whom it communicates in a soothingly soft, but emotionless male voice (actually that of actor Douglas Rain). But once the vessel is well away from Earth, HAL shows that it has developed what can only be called a "mind of its own." Having figured out that the best way to achieve the mission for which it has been programmed is to dispose of its human baggage (expensive to maintain and sometimes irrational in their actions), HAL kills off the hibernating crew members, and then sets about trying to eliminate its two conscious passengers. It manages to maneuver one crew member outside the spacecraft and sends him spinning into outer space with no chance of return. Commander Dave is able to save himself only by entering the heart of the computer and manually removing its memory cells. Man triumphs over machine--but only just.
It's a good story. (There's a lot more to it than just described.) But how realistic is the behavior of HAL? We don't yet have computers capable of genuinely independent thought, nor do we have computers we can converse with using ordinary language. True, there have been admirable advances in systems that can perform useful control functions requiring decision making, and there are working systems that recognize and produce speech. But they are all highly restricted in their scope. You get some idea of what is and is not possible when you consider that it has taken AT&T over thirty years of intensive research and development to produce a system that can recognize the three words 'yes', 'no', and 'collect' with an acceptable level of reliability for a range of accents and tones. Despite the oft-repeated claims that "the real thing" is just around the corner, the plain fact is that we are not even close to building computers that can reproduce human capabilities in thinking and using language. And according to an increasing number of experts, we never will.
Despite the present view, at the time 2001 was made, there was no shortage of expert opinion claiming that the days of HAL ("HALcyon days," perhaps?) were indeed just a few years off. The first such prediction was made by the mathematician and computer pioneer Alan Turing. In his celebrated article Computing Machinery and Intelligence, written in 1950, Turing claimed, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Though the last part of Turing's claim seems to have come true, that is a popular response to years of hype rather than a reflection of the far less glamorous reality. There is now plenty of evidence, from psychology, sociology, and linguistics, to indicate that the original ambitious goals of machine intelligence are not achievable, at least when those machines are electronic computers, no matter how big or fast they get. So how did the belief in intelligent machines ever arise?
Ever since the first modern computers were built in the late 1940s, it was obvious that they could do some things that had previously required an "intelligent mind." For example, by 1956, a group at Los Alamos National Laboratory had programmed a computer to play a poor but legal game of chess. That same year, Allen Newell, Clifford Shaw, and Herbert Simon of the RAND Corporation produced a computer program called The Logic Theorist, which could--and did--prove some simple theorems in mathematics.
The success of The Logic Theorist immediately attracted a number of other mathematicians and computer scientists to the possibility of machine intelligence. Mathematician John McCarthy organized what he called a "two month ten-man study of artificial intelligence" at Dartmouth College in New Hampshire, thereby coining the phrase "artificial intelligence," or AI for short. Among the participants at the Dartmouth program were Newell and Simon, Minsky, and McCarthy himself. The following year, Newell and Simon produced the General Problem Solver, a computer program that could solve the kinds of logic puzzles you find in newspaper puzzle columns and in the puzzle magazines sold at airports and railway stations. The AI bandwagon was on the road and gathering speed.
As is often the case, the mathematics on which the new developments were based had been developed many years earlier. Attempts to write down mathematical rules of human thought go back to the ancient Greeks, notably Aristotle and Zeno of Citium. But the really big breakthrough came in 1847, when an English mathematician called George Boole published a book called An Investigation of the Laws of Thought. In this book, Boole showed how to apply ordinary algebra to human thought processes, writing down algebraic equations in which the unknowns denoted not numbers but human thoughts. For Boole, solving an equation was equivalent to deducing a conclusion from a number of given premises. With some minor modifications, Boole's nineteenth century algebra of thought lies beneath the electronic computer and is the driving force behind AI.
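Boole's idea, that deduction is algebra over true/false unknowns, survives almost unchanged in every programming language. A crude modern sketch: write the premises as Boolean functions, enumerate every truth assignment, and see what holds in all the assignments consistent with the premises. (The premises here are our own toy example, not Boole's.)

```python
# Boole's algebra of thought, sketched: propositions are unknowns
# taking the values True/False, and deduction means finding what is
# true in every assignment consistent with the premises.

from itertools import product

# Toy premises: "if it rains, the ground is wet" and "it rains".
premises = [
    lambda rains, wet: (not rains) or wet,  # rains implies wet
    lambda rains, wet: rains,               # it rains
]

# Keep only the truth assignments that satisfy every premise.
models = [(rains, wet)
          for rains, wet in product([False, True], repeat=2)
          if all(p(rains, wet) for p in premises)]

print(models)                         # -> [(True, True)]
# "wet" holds in every consistent assignment, so it follows logically.
print(all(wet for _, wet in models))  # -> True
```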
Another direct descendent of Boole's work was the dramatic revolution in linguistics set in motion by MIT linguist Noam Chomsky in the early 1950s. Chomsky showed how to use techniques of mathematics to describe and analyze the grammatical structure of ordinary languages such as English, virtually overnight transforming linguistics from a branch of anthropology into a mathematical science. At the same time that researchers were starting to seriously entertain the possibility of machines that think, Chomsky opened up (it seemed) the possibility of machines that could understand and speak our everyday language.
The race was on to turn the theories into practice. Unfortunately (some would say fortunately), after some initial successes, progress slowed to a crawl. The result was hardly a failure in scientific terms. For one thing, we do have some useful systems, and they are getting better all the time. The most significant outcome, however, has been an increased understanding of the human mind: how unlike a machine it is and how unmechanical human language use is.
One reason why computers cannot act intelligently is that logic alone does not produce intelligent behavior. As neuroscientist Antonio Damasio pointed out in his 1994 book Descartes' Error, you need emotions as well. That's right, emotions. While Damasio acknowledges that allowing the emotions to interfere with our reasoning can lead to irrational behavior, he presents evidence to show that a complete absence of emotion can likewise lead to irrational behavior. His evidence comes from case studies of patients for whom brain damage--either by physical accident, stroke, or disease--has impaired their emotions but has left intact their ability to perform 'logical reasoning', as verified using standard tests of logical reasoning skill. Take away the emotions and the result is a person who, while able to conduct an intelligent conversation and score highly on standard IQ tests, is not at all rational in his or her behavior. Such people often act in ways highly detrimental to their own well being. So much for western science's idea of a 'coolly rational person' who reasons in a manner unaffected by emotions. As Damasio's evidence indicates, truly emotionless thought leads to behavior that by anyone else's standards is quite clearly irrational.
And as linguist Steven Pinker explained in his 1994 book The Language Instinct, language too is perhaps best explained in biological terms. Our facility for language, says Pinker, should be thought of as an organ, along with the heart, the pancreas, the liver, and so forth. Some organs process blood, others process food. The language organ processes language. Think of language use as an instinctive, organic process, not a learned, computational one, says Pinker.
So, while no one would deny that work in AI and computational linguistics has led to some very useful computer systems, the really fundamental lessons that were learned were not about computers but about ourselves. The research was successful in terms not of engineering but of understanding what it is to be human. Though Kubrick got it dead wrong in terms of what computers would be able to do by 1997, he was right on the mark in terms of what we ultimately discover as a result of our science. 2001 shows the entire evolution of mankind, starting from the very beginnings of our ancestors Homo Erectus and taking us through the age of enlightenment into the present era of science, technology, and space exploration, and on into the then-anticipated future of routine interplanetary travel. Looking ahead forty years to the start of the new millennium, Kubrick had no doubt where it was all leading. In the much discussed--and much misunderstood--surrealistic ending to the movie, Kubrick's sole surviving interplanetary traveler reached the end of mankind's quest for scientific knowledge, only to be confronted with the greatest mystery of all: Himself. In acquiring knowledge and understanding, in developing our technology, and in setting out on our exploration of our world and the universe, said Kubrick, scientists were simply starting on a far more challenging journey into a second unknown: the exploration of ourselves.
The approaching new millennium sees Mankind about to pursue that new journey of discovery. Far from taking away our humanity, as many feared, attempts to get computers to think and to handle language have instead led to a greater understanding of who and what we are. As a human being, I like that. For today's scientist, inner space is the final frontier, a frontier made accessible in part by attempts to build a real-world HAL. As a mathematician, I like that, too. Happy birthday, HAL.
The above celebration of the birth of HAL, the computer in the book and film 2001, is abridged from the book Goodbye Descartes: The End of Logic and the Search for a New Cosmology of Mind, by Keith Devlin, published by John Wiley and Sons in January, 1997.
From Stanley Kubrick's 2001: A Space Odyssey
By Karl Sims, 1994
This video shows results from a research project involving simulated Darwinian evolutions of virtual block creatures. A population of several hundred creatures is created within a supercomputer, and each creature is tested for its ability to perform a given task, such as the ability to swim in a simulated water environment. Those that are most successful survive, and their virtual genes, containing coded instructions for their growth, are copied, combined, and mutated to make offspring for a new population. The new creatures are again tested, and some may be improvements on their parents. As this cycle of variation and selection continues, creatures with more and more successful behaviors can emerge.
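The variation-and-selection cycle Sims describes can be reduced to a few lines, if we shrink "creatures" down to lists of numbers and "swimming well" down to maximizing their sum. This toy has none of the physics or genome structure of the real project; it only shows the evolutionary loop itself.

```python
# The variation-and-selection cycle, reduced to a toy: "creatures"
# are lists of numbers, and the "task" is to maximize their sum.
# All sizes, rates, and the fitness test are invented stand-ins.

import random
random.seed(0)  # reproducible run

def fitness(genes):
    return sum(genes)            # stand-in for "swims well"

def mutate(genes):
    g = genes[:]                 # copy the virtual genes...
    i = random.randrange(len(g))
    g[i] += random.uniform(-1, 1)  # ...with a small mutation
    return g

# A population of 100 creatures, 8 "genes" each.
population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(100)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]  # the most successful survive
    # Offspring are mutated copies of random survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

best = max(population, key=fitness)
print(round(fitness(best), 2))   # far better than any first-generation creature
```

Because the fittest creatures are copied into the next generation unchanged, the best score never decreases, and the occasional lucky mutation ratchets it upward, which is the whole engine behind the swimming, walking, and cube-fighting behaviors in the video.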
The creatures shown are results from many independent simulations in which they were selected for swimming, walking, jumping, following, and competing for control of a green cube.
Karl Sims studied computer graphics at the MIT Media Lab, and Life Sciences as an undergraduate at MIT. He is the Founder of GenArts, Inc. of Cambridge, Massachusetts, which creates special effects software for the motion picture industry. He previously held positions at Thinking Machines Corporation, Optomystic, and Whitney/Demos Productions. He is the recipient of various awards including a MacArthur Fellowship Grant.