Robots have captured our imagination for as long as the idea of technology has existed. Whether you’re talking about steam shovels or automated check-out kiosks, there has always been that pervasive, lingering concern that we will ultimately lose some kind of battle against the machines that we are creating to make our lives easier.

Even mused over idly, the premise seems silly; it conjures images pulled straight from a dystopian science fiction novel, where metal feet stomp coldly through a barren wasteland, crushing human skulls as they march. But humans are nothing if not creative, and we still love to let our minds run wild, regardless of how far-fetched our worries are—especially if there’s money to be made in the process.


In particular, the media loves sensationalizing the idea that robots and computers will one day make us obsolete. A Wired article from 2012 talks about an algorithm created to write news stories, pondering whether it could “write a better news story than a human reporter?” In the article, Kristian Hammond, the founder of the tech company that created the algorithm, predicts that, in 15 years, computers will most likely be writing “more than 90 percent” of our news. If that seems scary, don’t worry—the article goes on to explain that, rather than putting journalists out of jobs, the total volume of news produced would probably multiply considerably “as computers mine vast troves of data to produce ultracheap [sic], totally readable accounts of events, trends, and developments that no journalist is currently covering.” To put it simply, when you start with a much bigger pie, there are way more slices to go around. The story concludes by soothing the fears it created in the first place, quoting Hammond again: “Nobody has lost a single job because of us.” Then comes the dramatic sting! “At least not yet,” finishes the author. Wow. Thanks. That’s the journalistic equivalent of “The End...?”

We can’t really blame Wired alone for these kinds of articles, though. In February, NBC News did a piece on “Robots Replacing Human Factory Workers,” where they talk about manufacturing machines “taking” away jobs as a cost-saving measure. They even include a figure estimating that the cost of a robotic spot welder will drop from $133,000 to $103,000 by 2025. Yes, tech is getting cheaper, but for a robot that does a single task, $100k still isn’t exactly chump change. Then in March, CNET published an article where they talked to Bill Gates (of all people) about the possibility of us being outpaced by the speed of our ever-developing technology or, as the author puts it, how it might “overpower us.” Thankfully, Gates comes across as pretty level-headed in the article, asserting that it’s an issue we can overcome with the right strides in education and labor management. The author isn’t satisfied with that, however, and begins to speculate: “[Gates] sounds methodical, scientific. One can’t help wondering, though, whether he also feels a genuine fear that it’s unavoidable — that the more powerful a robot you create, the more potential there is for humans’ own demise.” Sunny as a summer afternoon, isn’t he?

Let’s address the most realistic and obvious question first. Is automation going to eliminate jobs? Well, yes and no. When cars started being mass-produced and horse-drawn wagons became outdated, a lot of farriers probably got nervous. But now, instead of farriers in every town, we have auto workers in factories. As our innovations become more sophisticated, the parts that we play in maintaining our tech-enabled lifestyles evolve. It seems like a logical conclusion, and there are even studies that confirm it. In a study done at the IBM Thomas J. Watson Research Center, Aaron B. Brown and Joseph L. Hellerstein investigate whether IT operations will inevitably shift toward automation as a means of cost management. Their main findings were that “introducing automation creates extra processes to deploy and maintain that automation” and that “detecting and removing errors from an automated process is considerably more complicated than for a manual process.” In other words, the answer winds up being the same every time this question is asked: practically speaking, robots are only efficient for some of the work, and we still need humans to supervise.


That assertion is observable in our everyday lives as well. Automated check-out scanners are convenient, but they only work perfectly some of the time, and you still need someone keeping an eye out for theft. An ATM works pretty much independently, but it still has to be serviced, and if it runs out of cash for some reason, it’s nothing more than a big metal doorstop. So if we are creating jobs at the same time machines are eliminating them and, on the whole, technology has made our lives quantifiably easier, why are we still publishing books and articles predicting that humans will finally create the one terrible machine that brings about our “own demise”?

Those kinds of leaps, from toaster to Terminator, are ubiquitous in our collective imagination. First, we create the robots to do our jobs cheaper and better. Then the robots get a little too good at their jobs. Then they realize how much better they are than us and choose to get rid of us. Rather than asking ourselves whether such a scenario is feasible (decades before our technology comes close to true artificial intelligence), why don’t we instead examine the reasons we keep asking that question in the first place? Is it a vague, dreadful fear born of the observation that humans have historically been a destructive force, always finding new ways to kill each other—that our nature is essentially vicious and that quality will be reflected in the things we create? Is it an Aesop’s fable come to life, where we fear that our own hubris will be our downfall because we attempt to create something that no mortal should? Is it a fear of the limits of human capability and intelligence, that we are small, narrow-minded creatures that can’t hope to compete with robots built by computers that already need little guidance from us?

Many people are already asking those kinds of questions. Over at the blog Robot Zeitgeist, an author nicknamed “Awesome-o” seems to think it stems from a general lack of education about how robots work. That is a salient point; many of humanity’s fears come from ignorance. The author also makes the excellent observation that, although robots are capable of performing many varied types of tasks, we are the ones giving them the orders. As of now, they are simply the tools we use to accomplish those tasks, like a carpenter using a hammer. Of course, many people will argue that tools sometimes break and machines sometimes fail. What happens if a robot malfunctions and goes all psycho on us? But if you think about it, machines fail all the time. Car brakes can fail, laptop batteries can get hot enough to melt stuff, and have you ever tried to start a lawn mower with a pull cord? Nearly fucking impossible.

The solution is to build in as many fail-safes as possible. It’s also important to note that people can malfunction too. We still haven’t fully determined what combination of genetic and environmental factors produces the myriad personality disorders that affect humans. We also can’t draw any definitive causal relationship between those kinds of disorders and acts of manipulation and violence. Unlike with a dangerous person, though, we can actually determine the root cause of what makes a machine malfunction, then go in and fix it. More importantly, when a machine malfunctions, it does so without any specific motivation. It happens because of a glitch, a faulty program, or a frayed wire. As the BBC puts it in a 2011 article, “our tools could fail someday - but it doesn’t mean they’re malevolent or immoral or have an ethical bias.” From that idea we can conclude that, yes, robots are fundamentally not like humans. In fact, they are better than humans, because (unless we program them in) they don’t hold any prejudices.

Within that realization lies the truly frightening aspect of the eventual development of artificial intelligence. Humanity, with its fallibility and ever-present social ills, is burdened with the heavy responsibility of ensuring its creations are devoid of the flaws inherent in itself. And it’s not just the execution of a robot’s programming that we need to be concerned with—it’s also the purposes we task them with. The reason you create something is a huge factor in both how it is used and how it is misused. We supposedly created nuclear bombs as a deterrent to war, and first used them as a way to show that we had the biggest stick on the playground. Now everyone is carrying sticks, and we have to keep an eye out at all times to make sure no one has too many. If we wind up creating artificial intelligence in our image, especially if we do so out of the classically human motivations to claim the most resources and spread our individual ideologies, then we most definitely do have something to fear.

It seems that the writer Ray Bradbury said it best in one of his personal letters: “[Robots] are extensions of people, not people themselves...I am not afraid of robots. I am afraid of people...”
