Stewart Dean's Guide to Artificial Life

Neural Networks:
(Artificial Intelligence the easy way.)

A creature without a brain could be said to be a plant: it cannot do much apart from chemically react to simple stimuli. So in creating aLife, intelligence is a topic that always comes up, and it becomes ever more important when a creature has to interact with its environment. Evolution can only work so fast, since it depends on creatures dying or failing to mate, and so cannot cope quickly enough with environmental change. The more intelligent a creature is, the quicker it can adapt to a changing environment. For the ultimate example of this you need only look at mankind.

Naturally man has always been interested in knowing how our intelligence works. Many still claim that human intelligence is impossible to surpass artificially, and in 1989 the British mathematician Roger Penrose claimed that the workings of the human brain cannot be duplicated by a machine, even in principle. Many also believe that there is more to human thought than the mere firing of synapses.

The human brain is currently the most sophisticated computer known to man - this is undeniable - but it works on the same principles as any other animal brain. Kurt Gödel's theorem states that you cannot fully understand a system without going beyond that system, passing from one branch of mathematics to another. In simpler terms, it is very hard to draw a building just by looking out of one of its windows. To fully understand intelligence we first have to understand simple thought. Attempting to leap directly to the complex workings of the human brain is next to impossible and can only lead to conclusions such as Penrose's.

An increasingly common way of understanding things is to model them on a computer and so produce them artificially. Just as aLife attempts to understand life better by modelling it, and in the process creates something new, so neural computing is an attempt at modelling the workings of a brain.

There are two main ways of approaching artificial intelligence. Computer vision, for example, has often been approached by constructing algorithms and then applying them to an input. Each step of the sight process has to be evaluated, and an algorithm devised to transform the input data into a form that is easier to handle. This is known as the top-down, or symbolic, approach to AI. It was people like James Lighthill who pointed out that most of these methods were highly machine intensive and only suitable for very restricted problems. This approach also relies entirely on the knowledge of the programmer; nothing new can be added automatically.

The other way is to construct a neural net especially for converting an image into information. In the 1960s neural computing centred on a net called the perceptron, a term coined in 1962 by Frank Rosenblatt; interest in it was only revived in the mid 80s. The perceptron was a mixture of neural net and pre-processing that allowed images to be recognised by a computer. It was based on what was known of the first stages of primate vision, and later on work carried out on cat retinas. Using this set-up, patterns are easily recognised, as are more complex visual problems like telling the gender of a person from their face or seeing how full a tube train platform is.

How they work.

A neural net is a physical (as in electronics) or virtual (a computer program) collection of nodes, or neurons, each in some way connected to the others. Each neuron has several inputs and several outputs. Input starts out as the message from an array of sensors. This message is often passed through associative nets which, in a vision system, do much of the pre-processing of the signal before it is passed on to neurons based on the McCulloch and Pitts model.

Warren McCulloch and Walter Pitts were a neurophysiologist and a logician who in 1943 built a model, involving resistors and amplifiers, which mimicked what was then known about natural neurons. A neuron takes weighted inputs and then, depending on the result, either fires or does not. The firing is passed on to several other neurons, each of which takes this input and, according to its own weighting, acts or does not act. The whole model is a network of interconnected cells, each affecting the next.
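As a rough sketch of that idea, here is a minimal McCulloch and Pitts style neuron in Python. The weights, thresholds and input values are invented for the illustration, not taken from the original 1943 model.

    # A minimal McCulloch-Pitts style neuron: it sums its weighted
    # inputs and fires (outputs 1) only if the sum reaches its
    # threshold. Weights and thresholds here are illustrative only.
    def neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Two neurons wired together: the first neuron's output becomes
    # one of the inputs to the second, each cell affecting the next.
    first = neuron([1, 0, 1], [0.5, 0.5, 0.5], threshold=1.0)
    second = neuron([first, 1], [0.6, 0.6], threshold=1.0)
    print(first, second)  # prints: 1 1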

Eventually the signal in a neural net reaches an output stage. This can be a single value (male, female) or an array of outputs (a sound or a picture). At first this result will be near random, until the net has been trained, and trained correctly: the net has to receive enough information through its input to be able to make the correct assumptions.

For example, one neural net was used by the military to aid the recognition of tanks. The net was given different pictures of tanks and had to decide whether they were Russian or American. Each time it got one wrong, it would learn, reorganising its connections and weights. Eventually the net was achieving perfect results, and other pictures from the same set as the training photographs were also correctly sorted into American and Russian. Problems arose when a new set of photos of the same tanks was given to the machine: it went back to making mistakes. This was puzzling until someone pointed out the times of day at which the photos were taken. The shadows on the trees and tanks fell at different angles in the American and the Russian photos, so the computer had been sorting the photos by time of day, not by shape of tank. After different sets of photos taken at varying times of day were used, the net learnt the error of its ways and went back to being correct most of the time.
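The error-driven learning in the tank story can be sketched with the classic perceptron learning rule: when the net guesses wrong, each weight is nudged in the direction that would have reduced the error. What follows is a toy reconstruction, not the military system; the two-number "images", labels, learning rate and number of passes are all invented for the illustration.

    # Toy perceptron training. Each example is a tiny two-number
    # "image" with a label (0 or 1, standing in for Russian or
    # American). On a wrong guess every weight is nudged towards
    # the answer that should have been given. All values invented.
    def predict(inputs, weights, bias):
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 if total >= 0 else 0

    examples = [([0.9, 0.1], 0), ([0.8, 0.3], 0),
                ([0.2, 0.9], 1), ([0.1, 0.7], 1)]

    weights, bias, rate = [0.0, 0.0], 0.0, 0.1
    for _ in range(20):                  # repeat until the weights settle
        for inputs, label in examples:
            error = label - predict(inputs, weights, bias)
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
            bias += rate * error

    # A pattern the net never saw during training; the answer comes
    # from the learned weights generalising, not from memory.
    print(predict([0.15, 0.8], weights, bias))  # prints: 1

The last line is the important one: a correct answer on an unseen pattern comes from generalisation, which is also why the consistent shadows in the tank photos were enough to lead the real net astray.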

This could be said to be an example of the old adage: garbage in, garbage out. It shows that for a correct conclusion to be reached, by man or machine, all the correct information must be available. When there are inconsistencies, or we cannot work out why we arrived at an answer, we put it down to common sense. Neural nets are the same: the weighting of the neurons lets them guess answers some of the time, using what they already know to fill in the gaps.

In the future computers may be a hybrid of neural net and conventional Turing-based computing. Conventional computing has the advantage of being logical and fast at well-defined mathematical problems. Neural nets are not good at number crunching, much as the human brain finds sums harder to handle than music; instead they excel at pattern recognition, at tasks that require filtering and analysing data. See the section on the uses of aLife in interactive system design for more details of how neural nets could be of use in the future.

What should be pointed out is that current neural networks are about as intelligent as a stupid insect. Neural computing, despite its history, is still a young subject and has yet to be understood with real precision. Having said that, it has already produced complex results and is being used in many different fields. In the future HAL-type computers could be entirely possible, though this is not to say an artificial intelligence will take the same form as ours. Our emotions are our motivation for the things we do; with an artificial intelligence those motivations might be totally different. As with the rest of aLife, there is no reason why our creations should have to take nature's forms.

Future man-made intelligences may live their entire lives in environments alien to the human mind. They could exist in different bodies and spend their time thinking about things we would deem unimportant. They could be specialist intelligences, possibly not directly comparable to our own. For an example of how different an intelligence can be, you need only look at the second closest intelligence on the planet: dolphins. The world of dolphins and whales is radically different from ours; it is a world of the oceans. We now know that dolphins and whales have a symbolic language, if a much simpler one than human language. Dolphins give each other names but, like whales, spend as much time on navigation as we spend on trying to manipulate tools. Different environments put different priorities on the creatures living in them; it is currently hard for us to imagine what form of intelligence a creature living purely in the datasphere would take.



I'm sorry Dave, I can't do that...

HAL was the infamous computer in the books and films of 2001 and 2010, created by Arthur C. Clarke. The name HAL is said to be a play on IBM, each letter stepped one down the alphabet, although Mr Clarke says it's not.



Please send any corrections, comments or additions to: alife(at)stewdean(dot)com
 
