Monday, April 26, 2010

Is Artificial Intelligence Possible?

"Artificial Intelligence has been brain-dead since the 1970s." This rather ostentatious remark by Marvin Minsky, co-founder of the world-famous MIT Artificial Intelligence Laboratory, referred to the fact that researchers have been primarily interested in small facets of machine intelligence rather than looking at the problem as a whole. This article examines the contemporary issues of artificial intelligence (AI), looking at the current status of the AI field together with potent arguments offered by leading experts, to illustrate whether AI is an impossible goal to achieve.


Because of its scope and ambition, artificial intelligence defies simple definition. Initially AI was defined as "the science of making machines do things that would require intelligence if done by men". This fairly meaningless definition shows how young a discipline AI still is, and such early definitions have been shaped by the technological and theoretical progress made in the subject. For the time being, a good general definition that illustrates the future challenges of the AI field comes from the American Association for Artificial Intelligence (AAAI), which describes AI as the "scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines".

The term "artificial intelligence" was first coined by John McCarthy at a conference at Dartmouth College, New Hampshire, in 1956, but the concept of machine intelligence is in fact much older. In ancient Greek mythology the smith-god Hephaestus is credited with making Talos, a "bull-headed" bronze man who guarded Crete for King Minos by patrolling the island and frightening off intruders. Similarly, in the 13th century mechanical talking heads were said to have been created to scare intruders, with Albert the Great and Roger Bacon reputedly among the owners. Even so, it is only in the last 50 years that AI has really begun to pervade popular culture. Our fascination with "thinking machines" is clear, but it has been distorted by the science-fiction connotations seen in literature, film and television.

In reality the AI field is far from creating the sentient beings seen in the media, but this does not mean that successful progress has not been made. AI has been a rich branch of research for 50 years, and many famed theorists have contributed to the field, but one computer pioneer who shared his thoughts at the beginning, and whose assessment and arguments remain timely, is the British mathematician Alan Turing. In 1950 Turing published a paper titled "Computing Machinery and Intelligence" in which he proposed an empirical test that identifies intelligent behaviour "when there is no discernible difference between the conversation generated by the machine and that of an intelligent person." The Turing test measures the performance of an allegedly intelligent machine against that of a human being, and is arguably among the best evaluation experiments available at present. The Turing test, also referred to as the "imitation game", is carried out by having a human interrogator engage in a natural-language conversation with two other participants, one a human and the other the "intelligent" machine, communicating entirely through textual messages. If the judge cannot reliably identify which is which, the machine is said to have passed and is therefore intelligent. Although the test has a number of justifiable criticisms, such as not being able to test perceptual skills or manual dexterity, it is a great accomplishment if a machine can converse like a human and cause a human, by conversation alone, to judge it humanly intelligent.
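The protocol Turing described can be sketched as a toy harness. The canned respondent functions and the naive judge below are hypothetical stand-ins for illustration, not real conversational programs:

```python
import random

def imitation_game(judge, machine, human, questions):
    """Toy imitation game: the judge sees only labelled text replies
    and must name the label it believes belongs to the human."""
    pair = [("A", machine), ("B", human)]
    random.shuffle(pair)  # hide which label is which, as in Turing's setup
    transcript = {label: [respond(q) for q in questions] for label, respond in pair}
    return judge(transcript)

# Hypothetical stand-ins for illustration only.
machine = lambda q: "I compute, therefore I am."
human = lambda q: "Honestly, that depends on what you mean."
naive_judge = lambda t: max(t, key=lambda label: len(t[label][0]))  # guesses by reply length

verdict = imitation_game(naive_judge, machine, human, ["Are you conscious?"])
```

A real test would of course use a human judge and a free-form conversation; the point of the sketch is only that the judge's verdict rests on text alone.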

Many theorists have disputed the Turing test as an acceptable means of proving artificial intelligence. One argument, posed by Professor Geoffrey Jefferson in his Lister Oration, states: "not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain". Turing replied that "we have no way of knowing that any individual other than ourselves experiences emotions, and therefore we should accept the test." Even so, Jefferson had a valid point to make: developing an artificial consciousness. Intelligent machines already exist that are autonomous; they can learn, communicate and teach one another. But creating an artificial intuition, a consciousness, "is the holy grail of artificial intelligence." When modelling AI on the human mind, many illogical paradoxes surface, and one begins to see how the complexity of the brain has been underestimated and why simulating it has not been as straightforward as experts believed in the 1950s. The problem with human beings is that they are not algorithmic creatures; they prefer to use heuristic shortcuts and analogies to familiar situations. However, this is a psychological implication: "it is not that people are smarter than explicit algorithms, but that they are sloppy and yet do well in most cases."

The phenomenon of consciousness has caught the attention of many philosophers and scientists throughout history, and innumerable papers and books have been published on the subject. Yet no other biological singularity has remained so resistant to scientific evidence and "persistently ensnarled in fundamental philosophical and semantic tangles." Under ordinary circumstances we have little difficulty in determining when others lose or regain consciousness, and as long as we avoid trying to describe it, the phenomenon remains intuitively clear. Most computer scientists believe that consciousness was an evolutionary "add-on" and can therefore be algorithmically modelled. Many recent claims oppose this theory, however. Sir Roger Penrose, an English mathematical physicist, argues that the rational processes of the human mind are not entirely algorithmic and thus transcend computation, while Professor Stuart Hameroff proposes that consciousness emerges as a macroscopic quantum state from a critical level of coherence of quantum-level events in and around cytoskeletal microtubules inside neurons. Although these are all theories with little or no empirical evidence, it is still important to consider them, because it is vital that we understand the human mind before we can duplicate it.

Another key problem with duplicating the human mind is how to incorporate the various transitional states of consciousness, such as REM sleep, hypnosis, drug influence and some psychopathological states, within a new paradigm. If these states are removed from the design because of their complexity or their irrelevance in a computer, then it should be pointed out that perhaps consciousness cannot be artificially imitated, because these altered states have a biophysical significance for the functionality of the mind.

If consciousness is not algorithmic, then how is it created? Evidently we do not know. Scientists concerned with subjective awareness study the objective facts of neurology and behaviour and have shed new light on how our nervous system processes and discriminates among stimuli. But although such sensory mechanisms are necessary for consciousness, they do not unlock the secrets of the cognitive mind, since we can perceive things and respond to them without being consciously aware of them. A prime example of this is sleepwalking, which occurs in roughly 25 percent of all children and 7 percent of adults. Many sleepwalkers carry out mundane or even dangerous tasks, yet some carry out complicated, distinctively human-like tasks, such as driving a car. One may dispute whether sleepwalkers are truly unconscious, but if it is indeed true that they have no awareness or recollection of what happened during a sleepwalking episode, then perhaps here is a key to the cognitive mind. Sleepwalking suggests at least two general behavioural deficiencies associated with the absence of consciousness in humans. The first is a deficiency in social skills. Sleepwalkers typically ignore the people they encounter, and the "rare interactions that occur are perfunctory and clumsy, or even violent." The other major deficit in sleepwalking behaviour is linguistic. Most sleepwalkers respond to verbal stimuli with only grunts or monosyllables, or make no response at all. These two apparent deficiencies may be important. Sleepwalkers' use of protolanguage (short, grammar-free utterances that carry referential meaning but lack syntax) may illustrate that consciousness is a social adaptation, and that other animals lack not understanding or sensation but language skills, and so cannot reflect on their sensations and become self-aware.
Francis Crick, co-discoverer of the double-helix structure of DNA, broadly believed this hypothesis. After he and James Watson solved the mechanism of inheritance, Crick moved to neuroscience and spent the rest of his life trying to answer the biggest biological question: what is consciousness? Working closely with Christof Koch, he published his final paper in the Philosophical Transactions of the Royal Society of London, in which he proposed that an obscure part of the brain, the claustrum, acts like the conductor of an orchestra and "binds" vision, olfaction and somatic sensation, together with the amygdala and other neuronal processing, into a unified experience of thought and emotion. The fact that all mammals have a claustrum means it is possible that other animals possess high intelligence.

So how different are the minds of animals from our own? Can their minds be algorithmically simulated? Many scientists are reluctant to discuss animal intelligence, as it is not an observable property, and so there is little published research on the matter. But by avoiding the comparison of human mental states to those of other animals, we are impeding the use of a comparative method that might unravel the secrets of the cognitive mind. Even so, primates and cetaceans have been considered by some to be extremely intelligent creatures, second only to humans. Their exalted status in the animal kingdom has led to their involvement in almost all published experiments related to animal intelligence. These experiments, coupled with analysis of primate and cetacean brain structure, have led to many theories as to the development of higher intelligence as a trait. Although these theories seem plausible, there is some controversy over the degree to which non-human studies can be used to draw inferences about the structure of human intelligence.

By many of the physical methods of comparing intelligence, such as measuring the brain-size to body-size ratio, cetaceans surpass non-human primates and even rival human beings. For instance, "dolphins have a cerebral cortex which is about 40% larger than a human being's. Their cortex is also stratified in much the same manner as a human's. The frontal lobe of dolphins is also developed to a level comparable to humans. Additionally, the parietal lobe of dolphins, which 'makes sense of the senses', is larger than the human parietal and frontal lobes combined. The similarities do not end there; most cetaceans have large and well-developed temporal lobes which contain sections equivalent to Broca's and Wernicke's areas in humans."

Dolphins exhibit complex behaviours. They have a social hierarchy and demonstrate the ability to learn complex tricks. When scavenging for food on the sea floor, some dolphins have been seen tearing off pieces of sponge and wrapping them around their "bottle nose" to prevent abrasions, illustrating yet another complex cognitive process once thought to be restricted to the great apes. They apparently communicate by emitting two very distinct kinds of acoustic signals, which we call whistles and clicks. Lastly, dolphins do not use sex purely for procreative purposes; some have been recorded having homosexual sex, which suggests they must have some form of consciousness. Dolphins have a different brain structure from humans, one that could perhaps be algorithmically simulated. One example of their dissimilar brain structure and intelligence is their sleep technique. While most mammals and birds show signs of REM (rapid eye movement) sleep, reptiles and cold-blooded animals do not. REM sleep stimulates the brain regions used in learning and is often associated with dreaming. The fact that cold-blooded animals lack REM sleep could be taken as evidence that they are not conscious, and so their brains could more plausibly be emulated; conversely, warm-blooded creatures display signs of REM sleep, and thus dream, and so must have some environmental awareness. Dolphins, however, are "conscious" breathers: if they fell fully asleep they could drown. Evolution has solved this problem by letting one half of the brain sleep at a time. Because dolphins sleep unihemispherically, they lack REM sleep, and so a high intelligence, perhaps even consciousness, may be possible that does not incorporate the transitional states mentioned earlier.

The evidence for animal consciousness is indirect. But so is the evidence for the big bang, neutrinos, or human evolution. As with any such claim, these unusual assertions must be subjected to rigorous scientific procedure before they can be accepted as even vague possibilities. Intriguing, but more proof is required. Even so, merely because we do not understand something does not mean that it is false, or indeed that it is true. Studying other animal minds is a useful comparative method and could even lead to the creation of an artificial intelligence (one that omits transitional states irrelevant to an artificial entity), based on a model less complex than our own. The central point, though, is how limited our understanding of the human brain, or any other brain, really is, and how a seemingly concrete theory can change in the light of new findings.

An analogous incident that exemplifies this argument happened in 1848, when a railroad workman, Phineas Gage, shed new light on the field of neuroscience after a rock-blasting accident sent an iron rod through the frontal region of his brain. Miraculously he survived, but even more astonishing to the scientific community of the time were the marked changes in Gage's personality after the rod punctured his brain. Where before Gage had been characterised by his mild-mannered nature, he now became aggressive and rude, "indulging in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires", according to the physician Harlow in 1868. Even so, Gage sustained no impairment of his intelligence or memory.

The serendipity of the Phineas Gage incident demonstrates how architecturally robust the structure of the brain is, and by comparison how rigid a computer is. Almost any mechanical system or algorithm would stop functioning, partially or entirely, if an iron rod punctured it, with the exception of artificial neural systems and their distributed, parallel structure. In the last decade AI has begun to resurge thanks to the promising approach of artificial neural systems.

Artificial neural systems, or simply neural networks, are modelled on the logical associations made by the human brain. They are based on mathematical models that accumulate data, or "knowledge", according to parameters set by administrators. Once the network is "trained" to recognise these parameters, it can make an evaluation, reach a conclusion and take action. In the 1980s neural networks became widely used with the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably, in 1997 IBM's Deep Blue supercomputer defeated the world chess champion Garry Kasparov. After the match Kasparov was quoted as saying the computer played "like a god."
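The weight-update step at the heart of backpropagation can be illustrated on the smallest possible case: a single sigmoid unit learning the AND function by gradient descent. This is an illustrative sketch of the delta rule, not Werbos's original formulation:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Train a single sigmoid unit on the AND function via gradient descent.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]  # two input weights + bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
lr = 0.5  # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
        # Delta rule: error scaled by the sigmoid's derivative out * (1 - out).
        delta = (out - target) * out * (1 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        w[2] -= lr * delta

preds = [round(sigmoid(w[0] * a + w[1] * b + w[2])) for (a, b), _ in data]
```

In a multi-layer network the same delta is propagated backwards through the hidden layers, which is where the algorithm gets its name.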

That chess match and all its implications raised profound questions about machine intelligence. Many saw it as evidence that true artificial intelligence had finally been achieved. After all, "a man was beaten by a computer in a game of wits." But it is one thing to program a computer to solve the kind of complex problems found in chess; it is quite another for a computer to make logical deductions and decisions on its own.

Using neural networks to emulate brain function offers many desirable properties, including parallel functioning, relatively quick completion of complicated tasks, distributed representation of information, graceful degradation in the event of network damage (recall Phineas Gage), and learning abilities, i.e. adaptation to changes in the environment and improvement based on experience. These beneficial properties have inspired many scientists to propose neural networks as a solution for most problems: with a sufficiently large network and adequate training, the argument goes, a network can accomplish many arbitrary tasks without a detailed mathematical algorithm for the problem being known. Currently, the remarkable ability of neural networks is best demonstrated by Honda's Asimo humanoid robot, which can not only walk and dance but even ride a bicycle. Asimo, an acronym for Advanced Step in Innovative Mobility, has 16 flexible joints, requiring a four-processor computer to control its movement and balance. Its exceptional human-like mobility is only possible because the neural networks connected to the robot's motion and positional sensors, which control its "muscle" actuators, can be "taught" to perform a particular activity.

The significance of this sort of robot motion control lies in the virtual impossibility of a programmer actually producing a set of detailed instructions for walking or riding a bicycle which could then be built into a control program. The learning ability of the neural network removes the need to define these instructions precisely. Even so, despite the impressive performance of its neural networks, Asimo still cannot think for itself, and its behaviour remains firmly anchored at the lower end of the intelligence spectrum, at the level of reaction and regulation.

Neural networks are slowly finding their way into the commercial world. Recently Siemens launched a new fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the regular room environment, such as dust. Over fifty percent of fire call-outs are false, and of these well over half are due to fire detectors being triggered by everyday activities rather than actual fires, so this is clearly a beneficial use of the paradigm.
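The internals of Siemens' detector are of course proprietary, but the underlying idea of sensor fusion can be sketched: a single trained linear unit combines several normalised sensor readings into one fire/no-fire decision. The weights and readings below are hand-picked for illustration, not real trained values:

```python
# Illustrative weights only; a real detector would learn these from training data.
WEIGHTS = {"smoke": 0.6, "heat": 0.3, "co": 0.4}
BIAS = -0.5

def is_fire(readings):
    """Fuse normalised sensor readings (0..1 each) into a single thresholded score."""
    score = sum(WEIGHTS[name] * value for name, value in readings.items()) + BIAS
    return score > 0

# Dust trips the smoke sensor alone; a fire raises heat and CO as well.
dusty_room = {"smoke": 0.4, "heat": 0.1, "co": 0.0}
real_fire = {"smoke": 0.8, "heat": 0.7, "co": 0.6}
```

Because no single sensor can trigger the alarm by itself, everyday activities that excite only one reading are filtered out, which is exactly the false-call-out problem described above.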

But are there limitations to the capabilities of neural networks, or will they be the solution to creating strong AI? Artificial neural networks are biologically inspired, but that does not mean they are necessarily biologically plausible. Many scientists have published their thoughts on the intrinsic limitations of neural networks; one book that received high exposure in the computer science community in 1969 was Perceptrons by Minsky and Papert. Perceptrons imparted clarity to the limitations of neural networks: although many scientists were aware of the restricted ability of a simple perceptron to classify patterns, Minsky and Papert's approach of asking "what are neural networks good for?" illustrated what was impeding further development. For its time Perceptrons was exceptionally constructive, and its content gave the impetus for later research that overcame some of the computational problems restricting the model. A good example is the exclusive-or problem. The exclusive-or problem contains four patterns of two inputs each; a pattern is a positive member of the set if either of the input bits is on, but not both. Thus changing the input pattern by one bit changes the classification of the pattern. This is the simplest example of a linearly inseparable problem. A perceptron using linear threshold functions needs a layer of internal units to solve this problem, and since the connections between the input and internal units could not be trained, a perceptron could not learn this classification. Eventually this restriction was overcome by incorporating extra "hidden" layers.
Although advances in neural network research have addressed many of the limitations identified by Minsky and Papert, several still remain: networks using linear threshold units still violate the restricted-order constraint when faced with linearly inseparable problems, and the scaling of weights as the size of the problem space increases also remains an issue.
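The hidden-layer fix for the exclusive-or problem can be written down directly: two linear threshold units computing OR and NAND feed a third unit computing AND. The weights here are hand-set to make the construction explicit; no learning is involved in this sketch:

```python
def step(x):
    """Linear threshold unit: fires (1) when its weighted input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """XOR via one hidden layer: OR and NAND hidden units feed an AND output unit."""
    h_or = step(x1 + x2 - 0.5)        # fires if at least one input is on
    h_nand = step(-x1 - x2 + 1.5)     # fires unless both inputs are on
    return step(h_or + h_nand - 1.5)  # fires only if both hidden units fire

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

No choice of weights for a single `step` unit can produce this truth table, which is precisely the linear inseparability that Minsky and Papert identified.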

It is clear that the dismissive views about neural networks disseminated by Minsky, Papert and many other computer scientists have some evidential support, but many researchers have nonetheless ignored their claims and refused to abandon this biologically inspired system.

There have been many recent advances in artificial neural networks, integrating other specialised theories into the multi-layered structure in an attempt to improve the system methodology and move one step closer to creating strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh. Other admirable algorithmic ideas include quantum-inspired neural networks (QUINNs) and the "network cavitations" proposed by S. L. Thaler.
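At the core of Zadeh's fuzzy logic are membership functions, which replace hard true/false boundaries with degrees of membership. A minimal sketch, with assumed triangular shapes and illustrative temperature ranges:

```python
def triangular(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A reading of 28 degrees is partly "warm" and partly "hot" at the same time,
# rather than falling cleanly into one crisp category.
warm = triangular(28.0, 15, 25, 35)  # degree of membership in "warm"
hot = triangular(28.0, 25, 35, 45)   # degree of membership in "hot"
```

Feeding such graded memberships into a network, instead of crisp 0/1 inputs, is one way the two paradigms are combined.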

The history of artificial intelligence is replete with theories and failed attempts. It is inevitable that the discipline will progress with technological and scientific discoveries, but will it ever clear the final hurdle?

1 comment:

Monica Anderson said...

For a possible way out of the AI swamp, see http://videos.syntience.com and http://artificial-intuition.com or my blog at http://monicasmind.com .

You may not like what you see. Consciousness is overrated. Logic is overrated. Reasoning is overrated. A lot of re-thinking will have to happen; but several people have been pushing us in this direction for decades, including people like Erwin Schrödinger, Jean Piaget, Donald T. Campbell. The majority of AI researchers have been shielded from these ideas and those few that understand the issues are often prevented from switching tracks because of establishment inertia.

Also, Perceptrons are overrated. In fact, they have been obsolete since 1984 with the emergence of Modern Connectionism (see Rumelhart and McClelland: PDP).