Monday, May 3, 2010

How Will Artificial Intelligence Affect Our Lives In The Next Ten Years

The primary focus of this essay is the future of Artificial Intelligence (AI). To better understand how AI is likely to grow, I intend to first explore its history and current state. By showing how its role in our lives has changed and expanded so far, I will be better able to predict its future trends.

John McCarthy first coined the term artificial intelligence in 1956 at Dartmouth College. At this time electronic computers, the clear platform for such a technology, were still less than thirty years old, the size of lecture halls, and had storage and processing systems that were too slow to do the concept justice. It wasn't until the digital boom of the 80's and 90's that the hardware to
build the systems on began to gain ground on the ambitions of the AI theorists, and the field really started to pick up. If artificial intelligence can match in the coming decade the advances made in the last one, it is set to become as common a part of our daily lives as computers have become in our lifetimes.

Artificial intelligence has had many different descriptions put to it since its birth, and the most significant shift it has made in its history so far is in how it has defined its aims. When AI was young its aims were restricted to replicating the function of the human mind. As the research developed, new intelligent things to replicate, such as insects or genetic material, became apparent. The limitations of the field were also becoming clear, and out of this AI as we understand it today emerged.

The first AI systems followed a purely symbolic approach. Classic AI's approach was to build intelligences on a set of symbols and rules for manipulating them. One of the main problems with such a system is that of symbol grounding. If every piece of knowledge in a system is represented by a set of symbols, and a particular set of symbols ("dog", for instance) has a definition made up of other symbols ("canine mammal"), then that definition requires a definition ("mammal: creature with four limbs and a constant internal temperature"), and this definition requires a definition, and so on. At what point does this symbolically represented knowledge get described in a way that does not need a further definition to be complete? These symbols need to be defined outside of the symbolic world to avoid an eternal recursion of definitions. The way the human mind does this is to link symbols with sensory stimulation: when we think of a dog we do not think "canine mammal", we remember what a dog looks like, smells like, feels like and so on. This is called sensorimotor categorization.
By allowing an AI system access to senses beyond a typed message, it could ground the knowledge it has in sensory input in the same way we do. That's not to say that classic AI was an entirely flawed strategy; it turned out to be successful for several of its applications. Chess-playing algorithms can beat grand masters, expert systems can diagnose diseases with greater accuracy than doctors in controlled situations, and guidance systems can fly planes better than pilots. This model of AI developed at a time when the understanding of the brain wasn't as complete as it is today. Early AI theorists believed that the classic approach could achieve the goals set out for AI because computational theory supported it. Computation is largely based on symbol manipulation, and according to the Church-Turing thesis, computation can potentially simulate anything symbolically. Even so, classic AI's methods do not scale up well to more complex tasks.

Turing also proposed a test to judge the worth of an artificially intelligent system, known as the Turing test. In the Turing test, two rooms with terminals capable of communicating with each other are set up. The person judging the test sits in one room; in the second room there is either another person or an AI system designed to emulate a person. The judge communicates with the person or system in the second room, and if he ultimately cannot distinguish between the person and the system then the test has been passed. Even so, this test isn't broad enough (or is too broad...) to be applied to modern AI systems. The philosopher John Searle made the Chinese room argument in 1980, stating that if a computer system passed the Turing test for speaking and understanding Chinese, this would not necessarily mean that it understands Chinese: Searle himself could execute the same program, giving the impression that he understands Chinese, while not in reality understanding the language, just manipulating symbols in a system.
If he could give the impression that he understood Chinese while not in reality understanding a single word, then the true test of intelligence must go beyond what this test lays out.
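The eternal recursion of definitions described above can be made concrete with a toy sketch. The tiny dictionary below is invented for illustration: every symbol is defined only in terms of other symbols, so chasing a definition never reaches anything outside the symbolic world.

```python
# Toy illustration of the symbol grounding problem: a purely symbolic
# "dictionary" in which every definition is itself made of symbols.
definitions = {
    "dog": ["canine", "mammal"],
    "canine": ["dog-like", "mammal"],
    "mammal": ["creature", "warm-blooded"],
    "creature": ["living", "thing"],
    # ...every entry only points at more symbols, never at the world.
}

def expand(symbol, depth=0, max_depth=5):
    """Chase a symbol's definition through more symbols.

    Without grounding, the chase either recurses indefinitely or
    bottoms out in symbols that simply have no definition at all.
    """
    if depth == max_depth or symbol not in definitions:
        return symbol  # we never reach anything non-symbolic
    parts = (expand(s, depth + 1, max_depth) for s in definitions[symbol])
    return f"{symbol}({', '.join(parts)})"

print(expand("dog"))
```

However deep the expansion runs, the result is still only symbols; sensorimotor grounding is what would let "dog" bottom out in something non-symbolic.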

Today artificial intelligence is already a major part of our lives. For instance, there are several separate AI-based systems just in Microsoft Word. The little paper clip that advises us on how to use office tools is built on a Bayesian belief network, and the red and green squiggles that tell us when we've misspelled a word or poorly phrased a sentence grew out of research into natural language. Even so, you could argue that this hasn't made a positive difference to our lives; such tools have just replaced good spelling and grammar with a labour-saving device that produces the same outcome. For instance, I compulsively spell the word 'successfully' (and a number of other words with multiple double letters) wrong every time I type them. This surely does not matter, because the software I use automatically corrects my work for me, taking the pressure off me to improve. The end result is that these tools have damaged rather than improved my written English skills.

Speech recognition is another product that has emerged from natural language research, and it has had a much more dramatic effect on people's lives. The progress made in the accuracy of speech recognition software has enabled a friend of mine with an incredible mind, who two years ago lost her sight and limbs to septicaemia, to attend Cambridge University. Speech recognition had a very poor start, as the success rate when using it was too low to be useful unless you had perfect and predictable spoken English, but it has now progressed to the point where on-the-fly language translation is possible. One system in development now is a telephone system with real-time English-to-Japanese translation. These AI systems are successful because they do not try to emulate the entire human mind the way a system designed to pass the Turing test might. They instead emulate very specific parts of our intelligence.
Microsoft Word's grammar system emulates the part of our intelligence that judges the grammatical correctness of a sentence. It does not know the meaning of the words, as this is not necessary to make a judgement. The voice recognition system emulates another distinct subset of our intelligence: the ability to deduce the symbolic meaning of speech. And the on-the-fly translator extends voice recognition systems with voice synthesis. This shows that the more precisely the function of an artificially intelligent system is defined, the more accurate it can be in its operation.
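One classic technique behind spell-check squiggles is edit distance: suggest the dictionary word requiring the fewest single-character changes to reach the typo. This is a minimal sketch of that idea only; Word's actual implementation is far richer, and the four-word dictionary is invented for illustration.

```python
# Minimal sketch of edit-distance spelling correction (illustrative
# only -- not Microsoft Word's actual algorithm).

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

dictionary = ["successfully", "success", "succession", "necessary"]

def suggest(word: str) -> str:
    """Return the dictionary word closest to the typed word."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(suggest("sucessfully"))  # prints "successfully"
```

Note the checker needs no notion of what "successfully" means; like the grammar system, it judges form, not meaning.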

Artificial intelligence has now reached the point where it can provide invaluable assistance in speeding up tasks still performed by people, such as the rule-based AI systems used in accounting and tax software; enhance automated tasks, such as search algorithms; and enhance mechanical systems, such as braking and fuel injection in a car. Curiously, the most successful examples of artificially intelligent systems are those that are almost invisible to the people using them. Very few people thank AI for saving their lives when they narrowly avoid crashing their car because of the computer-controlled braking system.

One of the main issues in modern AI is how to simulate the common sense people pick up in their early years. There is a project currently underway, started in 1990, known as the CYC project. The intent of the project is to provide a common sense database that AI systems can query to allow them to make more human sense of the data they hold. Search engines such as Google are already starting to make use of the information compiled in this project to improve their service. For instance, consider the words 'mouse' and 'string': a mouse could be either a computer input device or a rodent, and string could mean an array of ASCII characters or a length of string. In the sort of search facilities we're used to, if you typed in either of these words you would be presented with a list of links to every document found containing the specified search term. Using an artificially intelligent system with access to the CYC common sense database, when the search engine is given the word 'mouse' it could ask you whether you mean the electronic or furry variety. It could then filter out any search result that contains the word outside of the desired context. Such a common sense database would also be invaluable in helping an AI pass the Turing test.
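The filtering step described above can be sketched in a few lines. The tiny "common sense" table below is invented for illustration; the real CYC knowledge base holds millions of hand-coded assertions, and a real search engine would score context far more subtly.

```python
# Hypothetical sketch of common-sense word disambiguation: keep only
# documents whose surrounding words fit the sense the user chose.
common_sense = {
    "mouse": {
        "computer input device": {"click", "usb", "cursor", "scroll"},
        "rodent": {"cheese", "tail", "cat", "nest"},
    },
}

documents = [
    "Plug the mouse into a usb port and move the cursor.",
    "The cat chased a mouse across the kitchen floor.",
]

def filter_results(term, sense, docs):
    """Keep documents containing the term alongside clue words
    for the desired sense; drop the rest."""
    clues = common_sense[term][sense]
    return [d for d in docs
            if term in d.lower()
            and clues & set(d.lower().replace(".", "").split())]

print(filter_results("mouse", "rodent", documents))
```

Asking "electronic or furry?" amounts to picking which clue set to filter with.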

So far I have only talked about artificial systems that interact with a very closed world. A search engine always gets its search terms as a list of characters, grammatical parsers only have to deal with strings of characters that form sentences in one language, and voice recognition systems customise themselves for the voice and language their user speaks in. This is because, for current artificial intelligence methods to be successful, the function and the environment must be carefully defined.

In the future, AI systems will need to be able to operate without knowing their environment first. For instance, you can now use Google to search for pictures by inputting text. Imagine if you could search for anything using any means of search description. You could instead go to Google and give it a picture of a cat; it could recognise that it has been given a picture and try to assess what it is a picture of, isolate the subject of the picture and recognise that it is a cat, then consider what it knows about cats and recognise that it is a Persian cat. It could then separate the search results into categories relevant to Persian cats, such as grooming, where to buy them, pictures and so on.

This is just one example, and I do not know whether any research is currently being done in this direction. What I am trying to emphasise with it is that the future of AI lies in merging existing techniques and methods of representing knowledge to make use of the strengths of each. The example I gave would require image analysis to recognise the cat, intelligent data classification to choose the right categories to subdivide the search results into, and a strong element of common sense such as that offered by the CYC database. It would also have to deal with data from several separate databases with different methods of representing the knowledge they contain. By 'representing the knowledge' I mean the data structure used to map the knowledge.
Each method of representing knowledge has different strengths and weaknesses for different applications. Logical mapping is an ideal choice for applications such as expert systems to assist doctors or accountants, where there is a clearly defined set of rules, but it is often too inflexible in areas such as the robotic navigation performed by the Mars Pathfinder probe. For that application a neural network might be more suitable, as it could be trained across a range of terrains before landing on Mars. Even so, for other applications such as voice recognition or on-the-fly language translation, neural networks would be too inflexible, as they require all the knowledge they contain to be broken down into numbers and sums. Other methods of representing knowledge include semantic networks, formal logic, statistics, qualitative reasoning and fuzzy logic, to name a few. Any one of these methods might be more suitable for a particular AI application, depending on how precise the results of the system must be, how much is already known about the operating environment, and the range of different inputs the system is likely to have to deal with.
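Logical mapping, the style used in the classic expert systems mentioned above, can be sketched as explicit if-then rules over a set of known facts. The rules below are invented for illustration and are not real medical knowledge; a production expert system would also handle uncertainty and chains of inference.

```python
# Minimal sketch of logical mapping: knowledge stored as if-then rules,
# as in a classic rule-based expert system. (Illustrative rules only.)
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing"}, "possible allergy"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present in the facts."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "sneezing"}))
```

The strength and the weakness are the same thing: every conclusion traces back to an explicit rule, which makes the system auditable but brittle whenever the world refuses to fit the rule set.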

In recent times there has also been a marked increase in investment in AI research. This is because business is realising the time- and labour-saving potential of these tools. AI can make existing applications easier to use, more intuitive to user behaviour and more aware of changes in the environment they run in. In the early days of AI research the field failed to meet its goals as quickly as investors believed it would, and this led to a slump in new capital. Even so, it is beyond doubt that AI has more than paid back its thirty years of investment in saved labour hours and more efficient software. AI is now a top investment priority, with benefactors from the military, commercial and government worlds. The Pentagon has recently invested $29m in an AI-based system to assist officers in the same way a personal assistant normally would.

Since AI's birth in the fifties it has expanded out of maths and physics into evolutionary biology, psychology and cognitive studies, in the hope of gaining a more complete understanding of what makes a system, whether organic or electronic, an intelligent one. AI has already made a big difference to our lives in leisure pursuits, communications, transportation, the sciences and space exploration. It can be used as a tool to make more efficient use of our time in designing complex things such as microprocessors, or even other AIs. In the near future it is set to become as big a part of our lives as computers and automobiles did before it, and may well begin to replace people in the same way the automation of steel mills did in the 60's and 70's. Many of its applications sound incredible: robot toys that help children to learn, intelligent pill boxes that nag you when you forget to take your medication, alarm clocks that learn your sleeping habits, or personal assistants that can constantly learn via the internet. Even so, many of its applications sound like they could lead to something terrible. The Pentagon is one of the largest investors in artificial intelligence research worldwide. There is much advanced research currently underway into AI soldier robots that look like small tanks and assess their targets automatically, without human intervention. Such a device could just as easily be re-applied to cheap domestic policing. Fortunately the dark future of AI is still a Hollywood fantasy, and the most we need to worry about for the near future is being beaten at chess by a children's toy.

Best Regards,

Sam Harnett MSc mBCS

Pixeko Studio - Web Developers in Kent
