Čapek's play: Rossum's Universal Robots (1920)

A.I. Part 1: The Quest For Artificial Intelligence

Some scientists are paranoid.  They say that it's just a matter of time before machines become smarter than humans and that we're inventing ourselves into extinction or slavery.  Others are excited.  They see the possibility of machines that can truly fill in the gaps humans can't handle, or even expand human intelligence.  But whether one's perspective is based on The Terminator or Star Wars, the inevitability of comprehensive and ubiquitous artificial intelligence is very real.

But to start out, let's accept a simple fact that programmers have known for a very long time: computers are stupid.  When I say stupid, I mean they're dumber than a box of rocks.  Whenever you see a computer doing something "smart", it's really just running an algorithm that was thought up by some corporate lackey sitting in a cubicle, eating day-old donuts and drinking stale coffee.  That is, unless they work for a company like Google, in which case the donuts are fresh and the coffee was just brewed.  Nonetheless, computers are complete dolts.

So what is the possibility that computers might become highly intelligent?  Some would argue that the switching and electrical impulses in the transistors of an integrated circuit (microchip) are no different than the impulses in the neurons and synaptic patterns of the human brain.  They are quick to point out that electronic hardware and software tend to operate in a pattern very similar to biological wetware; the same wetware that has given us inventions such as the Bedazzler and the Salad Shooter.  Others say that is proof of our impending doom.

When might computers surpass human intelligence?  Nobody really knows, because nobody knows enough about our own intelligence.  Estimates of how many neurons (brain cells) there are in the average human brain vary from around 50 billion to over 200 billion.  We also don’t completely know how all these little buggers work together to produce intelligence.

Our memory capacity?  Nobody knows that either.  I've seen estimates of up to 2.5 petabytes!  For those who don't know, a single switch or transistor (and therefore one electrical impulse) makes up one bit.  It takes a group of bits to make up one usable piece of information, or byte, and the standard is 8 bits to a byte.  (The 32-bit and 64-bit labels on modern computers refer to how many bits they chew through at once, not to the size of a byte.)  One usable piece of information (byte) would be a single character, pixel, and so on.  A megabyte is a million bytes, and a gigabyte is a billion bytes.  A petabyte is a billion megabytes.  Now why can't that kind of memory ever allow me to remember where I put my keys?
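If the arithmetic helps, here's a quick back-of-the-envelope sketch (in Python, purely for illustration) of what a 2.5-petabyte memory would work out to in more familiar units; the 2.5 PB figure is just the estimate mentioned above, not a settled fact:

```python
# Back-of-the-envelope unit arithmetic for the 2.5-petabyte estimate above.
BITS_PER_BYTE = 8
MEGABYTE = 10**6      # a million bytes
GIGABYTE = 10**9      # a billion bytes
PETABYTE = 10**15     # a billion megabytes

estimate_bytes = 2.5 * PETABYTE
print(f"2.5 PB = {estimate_bytes:.0f} bytes")
print(f"       = {estimate_bytes / GIGABYTE:.0f} gigabytes")
print(f"       = {estimate_bytes * BITS_PER_BYTE:.0f} bits")
```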

The first commercially available microprocessor (a single chip packed with thousands of transistors) was produced by the Intel Corporation.  One of its founders, Gordon Moore, had written a paper back in 1965 observing that the number of transistors that could be squeezed onto a chip was doubling at a steady clip.  That observation, later refined to a doubling roughly every two years, has since become known as "Moore's Law" and has proven to be eerily accurate.  Most guesses as to when machines will surpass humans have been based strictly on Moore's Law.  Still, there's no agreement.
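For the curious, here's a rough sketch of what that doubling looks like in practice.  The starting point (the 2,300 transistors commonly cited for Intel's first microprocessor in 1971) and the neat two-year period are used purely for illustration:

```python
# A rough sketch of Moore's Law: transistor counts doubling every two years.
# Starting figures (Intel 4004, 1971, ~2,300 transistors) are illustrative only.
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```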

Even if we don’t create our new masters as we invent ourselves out of existence, there is still a lot of use for artificial intelligence.  Machines would be able to assess issues and deal with them without human intervention.  This autonomy would allow work to be done and exploration carried out in places which humans would be unable to go, whether it’s physically impossible or simply unsafe.  AI would also allow human/machine interaction which is both easier and more effective.  Imagine being able to give directives by using normal speech rather than obscure commands.

Luckily, we don’t need to know a lot about human intelligence to create an effective AI.  All we need is to create an interface that seems real enough that humans can easily interact with it, and a problem-solving ability to handle the tasks at hand.  There’s no reason for a machine to appreciate the arts in order to make toast.  There’s no reason for a machine to recite Shakespeare in order to build a car.  There’s no reason for a machine to know how to make toast and build cars if that machine is a repository for artistic works, including the works of Shakespeare.  So the concept of AI becomes a lot more manageable, and a lot less like SkyNet.  That means engineers would need to work less with psychologists and work more with the marketing department.

Artificial intelligence would probably benefit from focusing less on the intelligence aspect and more on the artificial aspect.  Granted, there's the field of neural networks, which seeks to mimic natural intelligence using artificial components.  But for most purposes, true human-like intelligence is completely unnecessary.
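As a toy illustration of those artificial components, here's a minimal sketch of a single artificial neuron (a perceptron), with hand-picked weights that happen to implement a Boolean AND.  Real neural networks wire up thousands or millions of these and learn their weights from data:

```python
# A minimal artificial neuron: weighted inputs, a bias, and a hard threshold.
# The weights below are hand-picked for illustration, not learned.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these weights the neuron fires only when both inputs are 1 (logical AND).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1, 1], bias=-1.5))
```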

Topio 3.0 plays ping pong. (photo: Wikipedia)

Machines are tools, and are typically better when treated as tools.  We build mechanical and electrical devices to do the things that we as humans can’t do.  We can make music, paint pictures, write poetry, and tell the story of the life of a Pacific sea clam through interpretive dance.  We can’t do such important things as sealing blood vessels on a nano scale, grabbing fresh samples from a volcano, or chucking pumpkins over half a mile.  That’s what we need our machines for.  On the other hand, artistic capability has been known to lead to mimes.

There’s a major obstacle to full AI.  Computers operate on pure Boolean logic (yes/no, true/false, on/off, etc.).  Human development and critical thinking often hinges on human emotions.  Emotions can be simulated in many ways on a machine, but full critical thinking which humans are capable of require true emotions.  Without emotion, there is no drive or desire, and no motivation other than carrying out a directive.  Logic dictates that emotion gives humans certain advantages which a machine cannot emulate.  This means that robots and computers, without a fully effective emotional protocol, are less like R2-D2 and more like really smart screwdrivers.  It’s hard to dominate the human race when pure logic dictates that doing so is a bad idea.

So I don’t think we need to worry about machines taking over our lives anytime soon.  We as human beings would be able to use tools that we can relate to better.  But as long as electronic intelligence falls short, there should be no reason for us to ever be enslaved by machines.  There are a variety of reasons for this.  But first, you’ll need to hold on.  I’m getting a Facebook update on my smartphone …

Up next … Part 2:  Building An Effective AI  –>

 

Daniel C. Handley

Dan Handley was raised a Trekkie, fell in love with "Star Wars" at an early age, and became obsessed with comic book superheroes. He spent his youth dreaming of how to get real superpowers, starships, and so on.
