Artificial Intelligence

by Krishna on September 7, 2008

From talking cars to androids, artificial intelligence is a very popular theme for movies. It is an intriguing subject with so many potential applications, both useful and harmful. Many science fiction writers predicted that today (in 2008) we would live in a world where robots do most of the work, with human beings either enjoying the fruits of the robots' labor or being subjugated by them. Neither is true, of course.

As a layman, I wonder at how little advanced technology has delivered on the promises of artificial intelligence. Even though we have made tremendous leaps in information processing and communication, we are not yet at the point where machines have taken over human decision making and creativity. I do not say this to mock or belittle technology, as some do when they suggest that humans have some perfect traits that technology can never conquer.

Rather, I sometimes feel that it is the opposite. Humans are not perfect. They make lots of mistakes. They make unpredictable errors in judgment. They are inconsistent in applying the same rules to similar situations. Not only that, every human being operates from different principles and viewpoints, and frequently changes them too. This imperfect and unique human intelligence is the cause of both beauty and disaster in this world.

On the good side, we have creativity because every human being thinks differently. Because there are so many of us having ideas and creating things, we sometimes end up with great works of art and improvements in our lives. If you create 1000 robots on an assembly line, you get 1000 similar brains operating the same way with the same rules. To match human creativity, you would instead need a unique brain for every robot, with different starting rules and the ability to change its core beliefs, and then you would need a whole lot of them, with no guarantee that you will produce something beautiful.

On the bad side, human beings are responsible for car crashes, stock market bubbles, and prescription errors, all on the basis of poor decisions. Technology can improve decision making, but it cannot entirely avoid mistakes in the real world. Technology cannot predict and prepare for every possible future scenario, and it may not have any similar past situation to rely on for making a correct decision.

In a situation where a negative outcome, however improbable, can mean grievous harm or death, which users would trust technology? You may be willing to let your car parallel-park itself, but would you be ready to trust it to drive itself on the road? What company would be ready to release such a product and face the inevitable legal consequences?

We understand human deficiencies in processing information. If you ask a person to do something, there is a significant chance that they will misunderstand it, forget it or do it incorrectly. That is the main reason we have standards, documentation, testing and so on. Human beings take several years after birth to become functional, and they are not even trusted to make important decisions until they are 18 or 21. Many organizations do not even trust their adult workers to make decisions.

So, when we imagine AI robots, what are we talking about? Do they come out of the factory packed with a 21-or-more-year-old adult brain? What experiences do they base their decisions on? Do we trust them to make choices for us? Do we debate with them? Do they have any moral or ethical basis for making decisions?

Raymond Kurzweil, an inventor and popular AI writer, suggests in one of his books that in the future we will have the technology to scan a human brain and recreate it. I wonder whether that is any different from cloning, apart from the biological difference. After all, you may be able to have an interesting conversation with this electronic brain, but you would not gain anything in terms of improved creativity or judgment.

2 comments

Peter Raitt September 8, 2008 at 1:44 am

Regarding your paragraph beginning "In a situation where a negative outcome, however improbable, can mean grievous harm or death, which users would trust technology?": it rather implies that no-one would choose a technology with a very small chance of lethal failure or misjudgement, and yet we choose ourselves, who have a very significant chance of lethal failure (well, as regards driving anyway). Should the question not be whether we will choose technology with a lesser chance of disastrous negative consequence than we would have on our own?

I would presume that robots could (not yet though!) come out of the factory with the equivalent of a 21-year-old brain, since, as we refine computer learning techniques, they can learn very, very fast; alternatively, they could be pre-loaded with the learning of a previous robot. The latter, of course, leads to the "similar" brains you point out.

Anyway, nice points and an interesting post.

Krish September 8, 2008 at 7:25 am

Thanks for your comment, Peter.

We do tend to trust ourselves even though we have a significant chance of failure, because we wish to take responsibility for ourselves. Only when someone or something is much better than us at a particular activity do we allow them to handle it.

For example, we may not allow a friend to drive a huge truck in which we are sitting, but we are ready to entrust ourselves to a professional truck driver.

