Research, Uses, and the Future

AT&T Bell Laboratories

AT&T's Bell Laboratories are doing pioneering work on neural microchips. The group first developed a circuit modelled on the learning functions of the Limax garden slug. Slugs have very large neurons that are easily studied, and they have only about twenty thousand of them, which means that each neuron can be catalogued. The Bell Labs group has been using a database to store information about each neuron.

A neural net has been built using the information collected so far.

[Image: slug neuron map]

The four taste neurons are presented with taste information. The net uses an energy-relaxation approach: when the net stabilises, the "slug" either "eats" the taste or "flees" from it.
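As a rough illustration of energy relaxation (a toy sketch, not the actual Bell Labs circuit), the following builds a tiny Hopfield-style net whose last unit is a hypothetical "eat" output driven by four taste units; the weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric weights, zero diagonal; the last row/column couples the
# four taste units to the eat/flee output unit. All values invented.
W = np.array([
    [ 0.0,  0.5, -0.5,  0.0,  1.0],
    [ 0.5,  0.0,  0.0, -0.5,  1.0],
    [-0.5,  0.0,  0.0,  0.5, -1.0],
    [ 0.0, -0.5,  0.5,  0.0, -1.0],
    [ 1.0,  1.0, -1.0, -1.0,  0.0],
])

def energy(s):
    return -0.5 * s @ W @ s

def relax(s):
    # Asynchronous updates: each flip strictly lowers the energy,
    # so the net must settle into a stable, minimum-energy state.
    s = s.copy()
    changed = True
    while changed:
        changed = False
        for i in rng.permutation(len(s)):
            field = W[i] @ s
            new = s[i] if field == 0 else (1 if field > 0 else -1)
            if new != s[i]:
                s[i], changed = new, True
    return s

taste = np.array([1, 1, -1, -1, -1])   # a "pleasant" taste; output starts at flee
stable = relax(taste)
print("energy:", energy(taste), "->", energy(stable))
print("decision:", "eat" if stable[-1] == 1 else "flee")
```

With these weights the first two taste units excite the output, so the relaxed net ends in the "eat" state at a lower energy than it started.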

AT&T has now set up joint projects involving specialists in mathematics, computer science, physiology and psychology to investigate the fundamental underpinnings of neural science.

The methods Bell Laboratories uses to etch features on to microchips are now being adapted to make ten-micron-wide boxes on silicon chips. The boxes are flooded with liquid nutrient. Neurons are then transplanted into the boxes and "farmed", one neuron to a box. Once in this new environment the neurons grow new connections to other farmed neurons. This is not intended to become a living neural net, but by studying the formation and use of these connections it is hoped that better neural nets can be built.

It was once thought that a single neuron represented a single bit, but research at AT&T now shows that neurons are at least as complex as the microcomputer used in a handheld calculator. Neurons can also use time and distance in their operations.

A new chip built at AT&T has 512 neurons with over 256 thousand connections, designed using Hopfield's principles. The 512 amplifiers are placed in blocks of 128 around the edges of the chip. The resistors for the circuit are built at the intersections of the "wire" grid. The wires are two microns wide, which is relatively coarse; wires 0.1 microns wide have been built, and at the limits of the system memory chips could be 600 times more powerful than today's leading-edge digital systems.

[Image: Hopfield's interconnection chip]

Jet Propulsion Laboratories

The Bell Labs chips cannot be programmed after construction, but another chip being researched at the Jet Propulsion Laboratories can. This means the chip will be able to learn the new solutions a user needs by altering the resistances of its connections. The resistances are changed in accordance with Hebb's law.
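Hebb's law says that the strength of a connection grows in proportion to the correlated activity of the two units it joins. A minimal sketch, with a learning rate and patterns invented for illustration (not taken from the JPL chip):

```python
import numpy as np

eta = 0.1                      # learning rate (illustrative)
w = np.zeros((4, 4))           # the programmable "resistances"

patterns = [np.array([1, 1, -1, -1]),
            np.array([1, -1, 1, -1])]

for x in patterns:
    w += eta * np.outer(x, x)  # Hebbian update: dw_ij = eta * x_i * x_j
np.fill_diagonal(w, 0)         # no self-connections

print(w)
```

Units that are repeatedly active together (here units 0 and 3 always take opposite signs) end up with strong connections, positive or negative, while uncorrelated pairs cancel out to zero.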

The forerunner of this chip was designed by Carver Mead.

[Image: Carver Mead's chip]

It has 22 analogue amplifiers arranged on the diagonal of a 22-by-22 cell array. An innovative feature is its dual-wire method of representing both excitatory and inhibitory connections, something not previously thought possible on a microchip. The chip is remarkably failure-tolerant: one had a component failure rate of over 50% but could still store two 22-bit vectors and recall them.
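The failure tolerance can be sketched in simulation: two 22-bit patterns are stored as outer-product (Hebbian) connection weights, then over half of the connections are "failed" by setting them to zero. The patterns here are random stand-ins, not data from Mead's chip.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 22
a = rng.choice([-1, 1], size=n)       # two stored 22-bit vectors
b = rng.choice([-1, 1], size=n)

W = np.outer(a, a) + np.outer(b, b)   # store both via outer products
np.fill_diagonal(W, 0)
W[rng.random((n, n)) < 0.55] = 0      # "break" 55% of the connections

# One synchronous recall step starting from the stored pattern.
recalled = np.where(W @ a >= 0, 1, -1)
print("agreement with stored pattern:", (recalled == a).mean())
```

Because each bit of the recalled pattern is a majority vote over many surviving connections, losing over half of them still leaves the stored vector almost entirely intact.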

Mead has also done work on an electronic eye modelled on living systems. He argued that the eye does not send a series of "photographs" to the brain, as some people think. It would take far more processing to construct moving images from that information than it would if the eye did much of the processing itself, recording not the images but merely the changes.
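This is not Mead's circuit, but the economy of the argument is easy to demonstrate: if only the changes between frames are reported, a mostly static scene needs far less data than sending every "photograph" in full. The frame size and the moving object below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))   # a 64x64 "photograph"

next_frame = frame.copy()
next_frame[10:14, 20:24] += 50                # a small object moves

changed = next_frame != frame
print("pixels in a full frame:", frame.size)
print("pixels that changed:   ", int(changed.sum()))
```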

Over 50 types of receptor cell have been found in the human eye, each of which detects one or more features, such as sharp edges, moving or not, or entire objects. In a frog's eye there is a set of cells that detects flying insects, dead or alive, but not pictures of them.

[Image: lattice of real neurons]
[Image: lattice of artificial neurons]

The top diagram shows part of the peripheral-vision area of the eye and the bottom shows the neural net developed by Mead. There are noticeable similarities between them.

Oregon Graduate Center

The information capacity of a Hopfield neural net is roughly proportional to the density of interconnections between neurons. However, the more neurons you have, the longer the connections must be, and full interconnection quickly becomes impractical. In the Oregon Graduate Center's Cognitive Architecture Project, Dan Hammerstrom noted that although little is known about the details of how the brain handles connections, the probability of two neurons being connected decreases as the distance between them increases. This observation is now being applied, and it may become possible to make million-neuron systems with a billion connections. It may be unnecessary to make such large systems, however, if the intricacies of the brain's networks are unravelled.
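A sketch of the idea: instead of wiring every neuron to every other, let the probability of a connection fall off with distance. The exponential fall-off and its scale below are assumptions chosen for illustration, not Hammerstrom's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pos = np.arange(n)                          # neurons laid out on a line

d = np.abs(pos[:, None] - pos[None, :])     # pairwise distances
p = np.exp(-d / 10.0)                       # connection probability decays
np.fill_diagonal(p, 0)                      # no self-connections

connected = rng.random((n, n)) < p
print("connections:", int(connected.sum()),
      "of", n * (n - 1), "possible")
```

Each neuron ends up with a few dozen mostly local connections rather than hundreds of long ones, which is what makes very large systems plausible to wire.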

These systems are all dedicated neural nets, but neural nets can, to an extent, be simulated on conventional computers. Some researchers have done this rather than experiment with the newer systems.


Parallel computers that use many small processors running at once match the brain far more closely than ordinary single-processor systems. A large parallel computer called "Butterfly" was built with funding from America's Defence Department. It is a "budget" supercomputer which makes use of the latest technology to give almost supercomputer power at a far lower price. A simulation on this computer has managed one hundred-thousandth of the brain's neurons and one ten-billionth of its connections. This is an exception: most simulations of neural nets today have relatively few neurons, and simulations with as few as six neurons have provided useful results.

Digital Equipment Corporation has a research project called DECtalk. This computer program can read English and produce the correct pronunciation with about 95% accuracy. The rules it uses were derived by a team of linguists, and it has a large dictionary of exceptions. The program, on being given a word, first looks it up in the dictionary of exceptions and then, if necessary, applies its linguistic rules. This system took about twenty years to perfect and is not neural net based.
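The two-stage strategy can be sketched in a few lines. The entries and "rules" below are invented stand-ins; the real DECtalk rule set was built by linguists over many years.

```python
# Hypothetical exception dictionary: words whose spelling breaks the rules.
EXCEPTIONS = {"one": "wahn", "two": "too"}

def rule_based(word):
    # A crude stand-in for the real letter-to-sound rules.
    sounds = {"a": "ae", "e": "eh", "i": "ih", "o": "ah", "u": "uh"}
    return "".join(sounds.get(ch, ch) for ch in word)

def pronounce(word):
    word = word.lower()
    if word in EXCEPTIONS:          # exception dictionary first
        return EXCEPTIONS[word]
    return rule_based(word)         # then the general rules

print(pronounce("one"))   # found in the exception dictionary
print(pronounce("pot"))   # handled by the rules
```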

Inspired by this, Terrence Sejnowski of Johns Hopkins University built a simulated net called NETtalk to do the same job; he claims to have made it during his summer vacation.

He used Rumelhart's back-propagation of errors to teach the net, which has 300 neurons and 18 thousand connections. The net was given words together with their pronunciations and then tried to say each word itself. At first the noise produced was a continuous wail, then a babble, almost baby-like. Finally meaningful words began to emerge. The text it was taught came from a child's essay, at about 3rd-grade level. The net managed to read the 100-word example with 98% accuracy, and other previously unseen similar texts with almost the same accuracy. The training time was only sixteen hours. The only major advantage Sejnowski had was the synthesiser built by the DECtalk team to pronounce the phonemes of the speech.
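A minimal sketch of back-propagation of errors, on a toy problem (XOR) rather than text-to-speech; the network size, learning rate and number of passes are illustrative, not NETtalk's.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # inputs
y = np.array([[0], [1], [1], [0]], float)               # target answers

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output
sig = lambda z: 1 / (1 + np.exp(-z))

def loss():
    return float(((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

start = loss()
for _ in range(5000):
    h = sig(X @ W1 + b1)                      # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)        # error propagated backwards
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(f"error before {start:.3f}, after {loss():.3f}")
```

As in NETtalk, the net is repeatedly shown inputs together with the right answers, and the error is pushed backwards through the connections until the output improves.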

James Anderson of Brown University used a simple auto-associative memory simulation that uses only a linear input-output function, rather than more complicated sigmoid systems or Boltzmann machines, but he has still managed to make a useful system. It acts much like a conventional AI system, and its first application area was a medical database. Unlike an ordinary expert system it can try to answer questions it has never seen before, and Anderson contends that the information volunteered by the system often resembles a very shrewd guess.
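A sketch in the spirit of such a linear auto-associative memory: storage is a sum of outer products and recall is a single linear pass, with no sigmoid or stochastic machinery. The "medical" patterns are invented for illustration, not Anderson's database.

```python
import numpy as np

# Hypothetical stored patterns: [fever, cough, rash, flu?, measles?]
flu     = np.array([1.0, 1.0, 0.0, 1.0, 0.0])
measles = np.array([1.0, 0.0, 1.0, 0.0, 1.0])

W = np.outer(flu, flu) + np.outer(measles, measles)   # linear storage

# A query never stored as such: a patient with fever only.
query = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
guess = W @ query            # one linear pass
print(guess)
```

The query is not one of the stored cases, yet both diagnosis units receive some activation, because fever overlaps both stored patterns: the kind of "shrewd guess" described above.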


Demetri Psaltis has been trying to use new theories from neural net researchers to develop a massively parallel read-head for optical disks. It would be able to read up to a million bits simultaneously, compared with the one bit at a time that present systems read. The read-head could handle gigabytes of information. It may soon be possible to write on to optical disks, and this head could then allow a new type of database: the neural-net-like aspect of the read-head would allow relevant information to go to the computer immediately.

Neural nets do not have to be built out of electronics or run as simulations; optical systems have already been made. One, designed at the Naval Research Laboratory, is used to sort out which targets are real from the host of real and false targets seen in modern battles.

[Image: radar sensor]

Harold Szu used a unique type of Boltzmann machine which is well suited to optical electronics and is also often faster than the original design. The machine is based on a type of probability calculation first studied by the nineteenth-century mathematician Augustin-Louis Cauchy. This is one of the few real-world problems the machine has been applied to, but the calculations can be done very quickly.
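The idea behind the speed-up can be sketched in simulation (this is only an illustration of Cauchy-style annealing on an invented test function, not Szu's optical design): the heavy-tailed Cauchy distribution allows the occasional long jump out of a local minimum, which permits faster cooling than the short Gaussian moves of an ordinary Boltzmann machine.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                       # a bumpy function with many local minima
    return x * x + 10 * np.sin(3 * x)

x = 4.0                         # start in a poor local valley
best = f(x)
for t in range(1, 2001):
    T = 5.0 / t                 # fast cooling schedule (illustrative)
    step = T * np.tan(np.pi * (rng.random() - 0.5))   # Cauchy-distributed jump
    cand = x + step
    # accept downhill moves always, uphill moves with Boltzmann probability
    if f(cand) < f(x) or rng.random() < np.exp(-(f(cand) - f(x)) / T):
        x = cand
    best = min(best, f(x))

print(f"best value found: {best:.3f}")
```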

Processing is done by randomly rotating the light sources to simulate the randomness in a physical system. Light travels from these sources to an array of light detectors, which means that every light source can affect every light detector. The network is programmed by placing transparencies in front of the detectors that selectively filter the light.

[Image: light sensor]


This is a military application, and it is not unusual in this respect. Neural nets have caused interest in this field because of their intelligent behaviour. In areas which have needed a person to interpret information or control something, a small electronic box may be able to take over. Alternatively, situations in which no person can be present can be improved. The British military are trying to develop circuitry that can use the ill-defined and hard-to-interpret information from infrared cameras to steer homing missiles.

There are, however, things that affect us more directly which may start using neural nets. Modern security systems are good, but if a person obtains the password or the access card they will be able to get in easily unless a human is there to recognise them as being out of place. A net has been developed that can recognise people from records on "smart cards" carried by the person. It does not matter who gets the card; only those with the right face will be able to get in.

Another application is in the money markets. Nets can be taught by watching a dealer's actions, and then the net will start to make decisions itself. This is already being tested by one company, and so it may become accepted practice within a few years.


In the future, as nets become more complicated, they will be able to do more and more human tasks as well as, perhaps better than, humans themselves. Such tasks may include driving. A net would not need to sleep, would not be thinking about what is going to happen when it arrives, and would not have a little drink before leaving.

As speech recognition progresses, voicewriters or voice-controlled equipment may become commonplace. Coffee machines will learn just how you like your morning cup of coffee to go with your perfectly done toast. Kitchen scales will work out how much fat you are eating and tell you when to stop. The vacuum cleaner will be cleaning another room, as it knows you don't like to be bothered by it. These and many other items will come from neural net developments.

This does not mean that the "old" digital computers will go, however. They are fast and efficient for many functions, and it would not be useful to make a net to do those jobs. Computers have one great advantage over nets: they are always right. If they are wrong it is because they are broken or have not been designed or programmed correctly. Neural nets, on the other hand, will get most things correct, depending on how much they have been taught, but they will try to answer questions they have never seen before and may not get them right. Even a commonplace question may get the wrong answer in unusual circumstances. The problem is that in copying the brain to get its good points we must also take its bad points, that is, its inaccuracies.

What may overcome this is a new breed of computer which combines standard computers with neural nets to produce a hybrid: a neurocomputer. The neural net will be able to stop the computer from looking in unnecessary detail and will jump to conclusions faster than a normal computer alone could. The computer part stops the net from making those "human errors" and makes some of the basic calculations, to which it is more suited.