Neural Nets History

Brain & Nerves

Neural nets are modelled on the brain, but understanding how the brain works has required a long history of research. The table below summarises the development of ideas on the subject.

When | Who | What
400 BC | Plato | The brain concentrated ethereal spirits which rained down from a perfect heaven.
130 AD | Claudius Galenus (Roman physician) | Discovered that sensory nerves could be damaged without affecting the motor nerves. Held that nerves were hollow tubes through which a fine substance flowed, coming from food via the liver and heart.
17th C | Rene Descartes | The soul resides in the brain, specifically the pineal gland.
1660 | | Microscope invented; nerves were found not to be hollow, yet Galenus's theory persisted.
| Isaac Newton | Suggested that nerves transmitted vibrations through an infinitely fine ether inside the nerves.
late 18th C | Henry Cavendish | Found that shocks from eel-like fish could not be distinguished from static-electricity shocks, but did not draw the conclusion that the nerve force was electrical.
1791 | Luigi Galvani | Used electricity to stimulate frogs' legs, showing that electricity was the force inside nerves.
1850 | Emil du Bois-Reymond | With a good galvanometer and electrodes he was able to measure the electricity in nerves. The work was confused by the "current of injury" that damaged nerves emit, but he soon realised that nerves emit spikes or pulses, with no signal when they are "off".
post 1850s | Richard Caton | Proved the difference between motor and sensory nerves. Wrote of brain waves in monkeys some fifty years before Hans Berger discovered alpha waves in humans.
neural spike image

A debated question in the 19th century was how fast signals travel in nerves. Electricity seemed to travel very fast indeed, and estimates of nerve-signal speed ranged up to ten million miles per second. Johannes Müller said the speed of nervous energy would never be measured; later, one of his students, Hermann von Helmholtz, measured it at about 90 mph.

Jan Purkinje was the first person to identify individual neurons. He saw that nerves were made of two parts: a central cell and a web of fibres. He recognised that this matched the cell theory of life, which had recently been put forward. It was not until better technology had been developed that Santiago Ramón y Cajal showed that nerves were not one continuous network but separate cells connected by synapses. This explained why signals travel so slowly in nerves.

neuron image

The neuron's single axon is shown leading off to the lower right. Thousands of smaller dendrites extend in all directions. The black dendrites of other neurons connect to this neuron via synapses.

More detailed research into the synaptic cleft was done at Washington University. The researchers found that nerves keep themselves negatively charged with chloride ions, while the surrounding bodily fluids are positively charged with sodium ions. When a nerve needs to signal the nerves it is joined to, it releases the ions, and this causes the other nerves to fire. The nerve then takes about a thousandth of a second to "recharge".
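
This fire-then-recharge cycle can be sketched as a toy threshold unit in Python: it fires once enough charge has arrived and then ignores input for a short refractory period. The threshold and the one-step refractory period standing in for the millisecond of recharging are illustrative assumptions, not measured values.

```python
# Toy "fire and recharge" neuron: illustrative values only, not physiology.
THRESHOLD = 1.0          # assumed firing threshold (arbitrary units)
REFRACTORY_STEPS = 1     # one step stands in for the ~1 ms recharge

def simulate(inputs):
    """Return a list of 0/1 spikes for a sequence of input charges."""
    charge, recharge_left, spikes = 0.0, 0, []
    for x in inputs:
        if recharge_left > 0:           # still "recharging": ignore input
            recharge_left -= 1
            spikes.append(0)
            continue
        charge += x                     # accumulate incoming charge
        if charge >= THRESHOLD:         # enough charge has arrived: fire
            spikes.append(1)
            charge = 0.0                # release the charge
            recharge_left = REFRACTORY_STEPS
        else:
            spikes.append(0)
    return spikes

print(simulate([0.6, 0.6, 0.6, 0.6]))   # -> [0, 1, 0, 0]
```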

In the 1930s it was discovered that various chemicals could affect nervous activity. It was after this that common tranquilisers, LSD and nerve gas were developed.

Computers

Charles Babbage was the first to try to build a computer. It was only a calculator built with gears, and because his ideas were beyond the technology of the day the work was never finished. This partial success led business machine companies to offer information-processing products of a more modest design, relying on punch cards, pegs and gears. The United States census began to use, and then to rely on, these machines as the amount of information it processed increased.

Alan Turing made the next step in computing by producing a precise yet general model of computation. The machine he postulated read symbols from an external tape and then, according to its internal set of instructions, manipulated those symbols.

turing machine image

A universal machine of this type would be able to read the instructions of any basic machine of the same kind and then operate as that machine would. All modern computers can be said to be of this type.
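
As a rough sketch of the idea in Python, such a machine is just a rule table mapping (state, symbol) to (symbol to write, direction to move, next state), driving a head along a tape. The example rules, which merely flip a string of bits, are an invented illustration rather than anything Turing described.

```python
# Minimal Turing-machine sketch: a rule table drives a head over a tape.
def run(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]   # look up the rule
        if 0 <= head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Invented example: flip every bit, halt at the blank marking the end.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("10110_", rules))   # -> "01001_"
```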

A chance meeting on a train platform led John von Neumann to see the ENIAC computer, and he prepared a summary of what should be done to develop it in the future. This went beyond what the original researchers had considered. The next computers would have a common memory for both instructions and data, a design now known as the von Neumann architecture. This architecture has been used up to the present day, and it led to the bottleneck problem: instructions and data must pass along the same path between memory and processor. Von Neumann's "First Draft" paper also discussed neurons and biological links, but these were not pursued with much enthusiasm by his followers.
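
The stored-program idea can be sketched in a few lines of Python: one memory holds both the instructions and the data they operate on, and a fetch-decode-execute loop works through it. The three-instruction machine below is an invented toy, not a description of ENIAC or of any design in the "First Draft".

```python
# Toy stored-program machine: instructions and data share one memory.
def run(memory):
    acc, pc = 0, 0                           # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch
        pc += 2
        if op == "LOAD":                      # decode and execute
            acc = memory[arg]                 # read data from the same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

# Program occupies cells 0-5; its data sits in cells 6-7 of the same memory.
memory = ["LOAD", 6, "ADD", 7, "HALT", 0, 40, 2]
print(run(memory))   # -> 42
```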

In 1948 transistors were first produced, and computer designers quickly brought them into use. They were rugged, used little electricity, and their inherent inaccuracies did not matter when only digital signals were involved. Analogue computers needed ever more exact voltages to become more accurate, and these were hard to achieve; von Neumann foresaw that this would be their downfall. Digital machines could be made more accurate simply by adding more components.
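
The accuracy point can be made concrete with a small worked example: in a digital representation, every extra bit of components halves the worst-case rounding error, whereas an analogue machine would need its voltages to be physically twice as precise. The figures below are purely illustrative.

```python
# Each extra bit of a digital representation halves the worst-case error.
full_scale = 1.0                         # assume values lie in [0, 1]
for bits in (4, 8, 12, 16):
    step = full_scale / (2 ** bits)      # smallest representable difference
    print(f"{bits:2d} bits -> worst-case error {step / 2:.6f}")
```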

Transistors led to microchips, which have grown more and more complex; as many as 4 million transistors on a chip have been achieved by several electronics companies. Computers, however, still cannot be fitted onto a single chip, so several chips must be connected together on boards. Heat build-up in the chips can make an otherwise good design useless. To make a system more powerful you can either make the devices run faster, which increases heat build-up, or add more chips, which increases the difficulty of linking them together.

Computers left the purely scientific field and moved into the business community. Mainframes appeared and became the mainstay of the computer industry. The digital computer's ability to simulate other systems allowed it to take over from the older methods, and the digital approach became the tried and tested one.

Software became available that allowed the computer to communicate with the user in a more English-like fashion rather than in the computer's binary.

Cheaper versions of mainframes, called minicomputers, were made which specialised in certain tasks. The microprocessor was then invented; although microprocessors were originally used in peripherals, they were soon incorporated into the computers themselves.

Memory systems were also improved, moving from small magnetised pieces of iron to transistors. These transistor memory devices are known as RAM. They could be accessed quickly and so allowed processors to work at higher speeds.

standard computer image

Large supercomputers today work at the limit of technology, using large numbers of the fastest memory chips to make full use of their high processor speeds. Cray Research Inc. found that replacing dynamic RAM with static RAM could triple system performance. The next Cray machine will not use silicon but a new, faster semiconductor, gallium arsenide. To limit time delays, transistors are packed more densely onto chips; the resulting heat would melt the processor unless expensive cooling systems were present.

Software

None of these systems is intelligent, or even partially so. There are some systems that try to be intelligent, and these go under the blanket term of artificial intelligence. A particular type called expert systems have a "knowledge base" and a set of rules and procedures, and can give answers to problems only in very limited fields. The knowledge and rules have to be compiled by people, normally by a group extensively questioning experts in the field to gather the information before programming the expert system.
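
A minimal sketch of this knowledge-base-plus-rules idea, with invented facts and rules: the program simply keeps applying its hand-compiled if-then rules to its hand-compiled facts until nothing new can be concluded.

```python
# Toy expert system: hand-compiled facts and if-then rules, forward chaining.
facts = {"engine_cranks", "no_spark"}                # invented knowledge base
rules = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault"}, "check_spark_plugs"),
]

changed = True
while changed:                                       # apply rules until stable
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now includes the derived advice "check_spark_plugs"
```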

Expert systems are, however, only pretending to be intelligent, as they cannot learn anything new and must rely on their programmed knowledge. Normally computers are not allowed to learn anything because of the difficulty of predicting what a machine will do once it has learned new rules.

Some learning programs have been written. One chess-playing program has managed a new level of complexity: as it plays against better opponents it becomes more sophisticated by watching them. Unfortunately there is a drawback.

learning curves image

The computer cannot tell a good move from a bad one, and it can learn the bad habits of a player even when it is already better than that player. An expert system which played chess would not learn the bad habits, but it would not learn the good ones either, and so it would be fixed permanently at one level of play.
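
The drawback amounts to naive imitation: the program raises its preference for whatever move it sees, with no notion of the move's quality, so a teacher's blunders are absorbed as readily as good play. The sketch below, with invented moves, illustrates the failure; it is not the chess program described above.

```python
from collections import defaultdict

# Naive imitation learner: count observed moves, prefer the most frequent.
preference = defaultdict(int)

def observe(move):
    preference[move] += 1          # no judgement of whether the move is good

def choose():
    return max(preference, key=preference.get)

for move in ["good_opening", "good_opening", "blunder", "blunder", "blunder"]:
    observe(move)                  # the teacher's blunders count just as much

print(choose())   # -> "blunder"
```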

Neural Nets

Neural nets are unlike ordinary computers in that they can learn. One of the earliest neural networks was theoretical, built up from the behaviour of neurons modelled with electronic components. Warren McCulloch and Walter Pitts were responsible for these first steps: in 1943 they argued that any behaviour that can be described in language, or in any other notation, can be produced by a suitably connected network of these neurons.
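
Their idealised neuron can be sketched as a unit that outputs 1 when the weighted sum of its binary inputs reaches a threshold. With suitable weights and thresholds such units behave as logic gates, which is the sense in which networks of them can produce any describable behaviour. The weights and thresholds below are standard textbook choices, not values from the original paper.

```python
# McCulloch-Pitts style threshold unit over binary inputs.
def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logic gates as single units (textbook weight/threshold choices).
AND = lambda a, b: unit((a, b), (1, 1), 2)
OR  = lambda a, b: unit((a, b), (1, 1), 1)
NOT = lambda a:    unit((a,),  (-1,),  0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```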

Von Neumann followed this work and came up with a claim that ran counter to reductionism. He claimed that the full description of human object-recognition abilities might turn out to be more complex than the network of neurons in the brain that performs them. This suggests that any attempt to break the ability down into simpler machines, such as expert systems, will fail. But if you cannot describe the network, how can you build it? The answer is to look at nature. Nature starts with something simple that does not work very well and evolves it so that it works better, and then better again, until finally there is a complex, efficient system that nobody knew how to make in the first place.

In 1962 the first proper neural network was built. It was called the perceptron and had first been described mathematically by Frank Rosenblatt. In 1969 it came under criticism, the hallmark of which was that the perceptron could not learn the logical relation called exclusive-OR. The simple version of the perceptron could not be made to do this; a more complicated version could, but that was dismissed as making the machine too complicated. Rosenblatt continued to claim more and more for the perceptron until he was claiming more than it could actually do. This held back artificial intelligence research, as interest in neural networks declined.
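
The exclusive-OR point can be seen in a short sketch: a single-layer perceptron draws one straight line through its input space, and no single line separates the XOR cases, so the classic perceptron learning rule never finds weights that get all four cases right. The learning rate and number of passes below are arbitrary illustrative choices.

```python
# Single-layer perceptron trained with the classic perceptron rule.
def train(examples, passes=100, rate=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(passes):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + bias >= 0 else 0
            err = target - out
            w0 += rate * err * x0
            w1 += rate * err * x1
            bias += rate * err
    return w0, w1, bias

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, w1, bias = train(XOR)
for (x0, x1), target in XOR:
    out = 1 if w0 * x0 + w1 * x1 + bias >= 0 else 0
    print((x0, x1), "target", target, "perceptron says", out)
# No choice of w0, w1 and bias classifies all four XOR cases correctly.
```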