Tonight I was browsing Google (don’t remember what for) and I came across this article
which deals with the possibility of robots eliminating 50% of American jobs by 2055. I have to admit I agree with the guy to some extent. This is not a new subject by any means. It’s been talked about for years and years. But I think it’s only within the last 20 years that people have come to accept it is actually possible within our lifetime.
I’m not a big fan of the “robots are evil” plotlines in movies because I generally believe that some good will come out of them. They will eventually have emotion and with it the ability to make the right and wrong decisions, just like us. It’s inevitable that humanoid robotics containing A.I. of some degree or another will make it into our daily lives. Here is the thought cycle of people who discover new technologies:
1. Men will seek to build them purely to see if it’s possible.
2. Once they discover it’s possible they will find ways to put it to work. This is how they justify all the time and research they put in to see if it was even possible.
3. Once in place at work, demand will drive better performance, new functions and lower prices.
4. Lower prices will yield market saturation, making them a daily occurrence for more people.
5. Daily occurrences are non-threatening because we are used to them. At worst they are annoying (the blinking 12:00 on a VCR).
By step 5 I can see how robots could take over our lives. Even if laws are enacted that state a company must employ a certain number of humans, we still have to fear the loopholes in those laws and the ever-growing lack of common sense in our government. Looks like we’re fucked. Here’s why.
Once a new technology is discovered it’s virtually impossible to curb its development or use. Look at all the nuclear weapons being built, look at peer-to-peer file swapping. Just a couple of examples. Even if US law states that so many jobs must be held by humans, you’ll have different laws in different countries. Just like we do today, and that’s the reason why so many jobs are exported to third world countries.
As robots grow smarter and more agile they will take on jobs not suited for human beings (as they already do in some cases). Space exploration is a good example and a place it’s already happening. Think of the kind of money it takes to put a human into space with training, food, water and oxygen, and even then humans are faced with muscle atrophy and bone mass reduction due to lack of gravity. As well as the possibility of loss of life due to mishaps such as shuttle explosions, lack of fuel and miscalculated courses. Space travel is risky, and we’ve already proven that we can put a human on the moon or in a space station. So until we can put a human on another planet like Mars, the public’s interest in the space program is waning. Thus there’s no need to go with risky humans except when you want to make an example for publicity reasons. So my guess is that most of the difficult and routine work will be done by robots with 3D vision who are guided by human scientists safely back on earth. If not guided by themselves via a preprogrammed mission objective.
All of this may be bad for human beings, but here is the question we have to ask ourselves in the long run. Does it matter if it’s bad for us? After all, who are we to stand in the way of evolution? If a robot containing A.I. is smarter than us, do we have the right to call it Artificial Intelligence? Shouldn’t we just call it Intelligence? And if the robot is more capable than we are, do we still have the right to call ourselves the dominant species on earth? At that point the only thing standing in our way is our ego. The thing that tells us we are better than robots because we created them. But is that really true when you look at the facts by that point? For all we know, God could have been here in the physical form of another race of beings thousands of years ago when we were created, and we took over because we were better/more suited. Soon we will be trumped ourselves.
Barring a nuclear war, alien invasion or something else that sets our technological progress back, the best we can do now is try to find a way to co-exist. Science fiction has taught us that a cold emotionless robot like the Terminator can kill us all, and that a robot with A.I. can out-think us and one day discover for itself that we are inferior and it doesn’t need us. But what we need to do is teach a robot capable of A.I. at any level to understand both love and fear. Isaac Asimov’s Three Laws won’t cut it. It’s like communism: good in theory, but in the real world it won’t work. Teaching a robot to love and fear won’t stop murders by robots or the other problems humans face today. But by giving the robots the same problems we as humans face, they themselves will be in the same boat and will be able to relate to us and in turn respect us. Thus we can coexist.