What could be better than an artificially created intelligence in a machine? No, I mean besides bacon. That’s right, a real human-like intelligence in a machine! Imagine an artificial construct, a piece of electronic hardware, which could actually think like a human! It might be frightening to some, but there are a lot of research groups currently working on this very project.
[Main Image: original from Stock.Xchng with manipulation by Rajasegar Chandiran]
It is probably the most ambitious artificial intelligence project imaginable. We know so little about neurons and human intelligence that reproducing them in electronic machines might seem like sheer insanity. But that’s not stopping researchers from attempting the seemingly impossible. They’re trying to get computer nodes to behave just like neurons (brain cells). And yet those neurons are so complicated that we don’t quite know how they function.
Neurons are extremely complex structures. We’re talking so complex, it makes the IRS tax code simple by comparison! But to simplify, there are a few primary components. The soma is the body of the cell, where the nucleus is located. This is where the complex functions occur. On one end is a long piece called the axon. At the terminal part of the axon are connections called synapses, which release neurotransmitter chemicals to communicate information to other neurons. On the other end is a network of connections called dendrites. These are the receptors for information from other brain cells. Some way, somehow, these cells work together to produce what we would refer to as thought and intelligence.
Multiple neurons working together are traditionally known as a neural network. But the term now also refers to computer networks that attempt to emulate the same functions. A neural network of the artificial variety has a very creative name to refer to it … “Artificial Neural Network” or “ANN”. (Shakespeare would be proud.)
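To make the analogy concrete, here’s a minimal sketch of one artificial neuron, in the spirit of the anatomy above: weighted “dendrite” inputs, a “soma” that sums them, and an “axon” that fires if the sum crosses a threshold. The weights and threshold are made-up illustrative values, not anything a real ANN library would hand you.

```python
# A cartoon artificial neuron: "dendrites" are weighted inputs, the "soma"
# sums them, and the "axon" fires if the sum crosses a threshold.
# The weights and threshold here are made-up illustrative values.

def neuron(inputs, weights, threshold=0.5):
    total = sum(x * w for x, w in zip(inputs, weights))  # soma: sum dendrite signals
    return 1 if total >= threshold else 0                # axon: fire, or stay quiet

print(neuron([1, 0, 1], [0.4, 0.9, 0.2]))  # 0.4 + 0.2 = 0.6 >= 0.5, fires: 1
print(neuron([1, 0, 0], [0.4, 0.9, 0.2]))  # 0.4 < 0.5, stays quiet: 0
```

Real neurons do vastly more than this, of course, but this weighted-sum-and-threshold caricature is the basic building block that ANNs stack by the thousands.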
So what is the real advantage of a neural network over a standard computer system with a basic artificial intelligence? A normal computer solves problems by way of algorithms. It uses a structured, pre-programmed procedure to deal with issues. An ANN learns by example. Its processes are parallel, with multiple inputs (dendrites) feeding hidden processes working simultaneously (and often redundantly), then feeding out to the next node (axon). This allows for processing that is less linear, structured, and limited. It creates its own organization for learning and processing information. This is not to say that the algorithmic approach doesn’t have its own merits. For arithmetic and other forms of straightforward logic, algorithms are the better choice. But when it comes to learning and reasoning in a particular area, the ANN is a much better choice.
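“Learning by example” can be shown in a few lines. Below is a sketch of a single-layer perceptron — the simplest classic ANN learning scheme — that is never told the rule for logical AND; it only sees examples and nudges its weights whenever it guesses wrong. The learning rate and epoch count are arbitrary illustrative choices.

```python
# "Learning by example" in miniature: a perceptron is shown input/output
# examples and adjusts its weights after each mistake, instead of being
# handed a pre-programmed algorithm. Rate and epochs are arbitrary choices.

def train_perceptron(examples, n_inputs, rate=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            out = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
            err = target - out                                   # how wrong were we?
            weights = [w + rate * err * x for w, x in zip(weights, inputs)]
            bias += rate * err                                   # nudge toward correct
    return weights, bias

# Learn logical AND purely from examples -- the rule is never written down:
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, 2)

def predict(xs):
    return 1 if sum(x * wi for x, wi in zip(xs, w)) + b > 0 else 0

print([predict(xs) for xs, _ in data])  # [0, 0, 0, 1] -- matches the targets
```

A conventional program would just hard-code `a and b`; the point here is that the same answers emerge from trial, error, and correction.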
So is it likely for an ANN to become “smarter” than a human being? Quite possibly. But there are barriers for it to overcome. Probably the biggest is the fact that estimates for the number of neurons in the human brain range anywhere from 50 billion to over 200 billion. The number of transistors on an integrated circuit (microchip) is in the millions. It takes quite a few processors to handle the full function of a single neuron. Not even Homer Simpson is that devoid of processing capability.
ANNs in use today are made to be specific in their scope of operation. They are used in places like business and medicine. Where there is a need for adaptive reasoning, an ANN fits the bill. When machines are needed to analyze marketing data, forecast sales figures, carry out customer research, or diagnose disease based on data from MRIs and such, the ANN can swoop in and save the day while wearing a cape and tights. (The cape and tights are virtual, not literal.)
To understand how an ANN thinks, let’s look at a comparative scenario. Let’s say that we want to get a computer to speak. On a standard computer system, the software would have to be written to identify sound and articulation, the primary memory would have to carry a database of vocabulary and grammatical structure, and it would have to work with the sound output technology to produce the sounds drawn from its vocabulary database. An ANN, on the other hand, would hear sounds and start trying to figure out how to reproduce them. At first, it would babble like a baby. Gradually, it would start picking out words. Then it would learn, through trial and error, how to structure thoughts and sentences, as well as what verbiage would cause it to have its sound card washed out with soap and water. It could then better communicate with a human user.
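The babble-then-improve idea can be caricatured with a toy hill-climbing loop: start with random “sounds,” try random changes, and keep only the ones that get closer to the target word. The target word, the letter set, and the fixed random seed are all purely illustrative choices, and this is a cartoon of trial-and-error learning, not how real speech systems work.

```python
# A toy "trial and error" learner: it starts by babbling random letters,
# experiments with one change at a time, and keeps changes that bring it
# closer to the target. Target word and seed are illustrative choices.
import random

def babble_toward(target, seed=42):
    random.seed(seed)  # fixed seed so the run is repeatable
    letters = "abcdefghijklmnopqrstuvwxyz "
    guess = [random.choice(letters) for _ in target]     # pure babbling
    score = sum(g == t for g, t in zip(guess, target))
    attempts = 0
    while score < len(target):
        i = random.randrange(len(target))                # try changing one "sound"
        old = guess[i]
        guess[i] = random.choice(letters)
        new_score = sum(g == t for g, t in zip(guess, target))
        if new_score >= score:
            score = new_score                            # keep what works
        else:
            guess[i] = old                               # undo failed experiments
        attempts += 1
    return "".join(guess), attempts

word, tries = babble_toward("hello")
print(word, "after", tries, "attempts")
```

No vocabulary database, no grammar rules — just guesses, feedback, and correction, which is the essence of the ANN approach described above.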
While there is a lot of benefit to the ANN, it stands to reason that there would be a bit of drawback as well. If a computer can think like a human, then it is also subject to the same frailties as the human mind. A machine could become neurotic, psychotic, sociopathic, or just plain loony. This is the scenario Arthur C. Clarke envisioned when he came up with the HAL 9000 computer. A simple moral conflict creating a maniacal and unstable killing machine should be a bit unnerving to anybody, human or sentient machine alike. The last thing we need is an army of Terminators with paranoid schizophrenia!
So perhaps there should also be some digital psychiatrists. Oh, well. Progress is progress, and we will adapt. Humans are known for that kind of thing. Whether machines can gain that ability remains in question. So far, great strides are being made. But true human-like thought in a machine is still quite a ways away. We shouldn’t have to worry about computers becoming better humans than humans are anytime soon. We’re still at the top of the totem pole on that one. One day, we could be faced with the dilemma of electronics out-humaning humans. But for right now, we can rest assured that we’ve still cornered the whole neurotic/psychotic/sociopathic/loony market. And for that, we should be proud.
Up next, Part 4: When Machines Are People Too –>