Do we add Robotpocalypse to the list of threats to humanity we created?



Literature and movies have primed us to view robots as threatening, a danger that companies counter by aggressively promoting them as useful and adorable. We’re being indoctrinated to embrace robots as benign servants by their almost-human and almost-dog designs that signal familiarity yet don’t tip over into the uncanny valley: the point at which a robot is almost human but not quite, triggering uneasiness. Our acceptance grows as we enjoy their choreographed dances to popular songs and their demonstrated ability to handle jobs we’d rather avoid, like cleaning floors, flipping burgers, and performing repetitive tasks.

Each week we see new reports of tasks robots can handle—making cocktails, cooking french fries—plus new robot capabilities. They are dancers, domestic helpers, disaster first-responders, warriors, and much more. They’re here in our daily lives—charming and convenient, creepy and disturbing—but just how much of a threat are robots? Can we really trust robots and the industries that make use of them?

When I watched Boston Dynamics’ robodog Spot dancing to “Uptown Funk” in its 2018 video premiere, my amusement mingled with apprehension. Spot’s predecessor, the 2006 prototype BigDog, was promoted by the U.S. Defense Advanced Research Projects Agency (DARPA) as an unarmed pack mule, useful for traversing terrain too rugged for vehicles. Fifteen years later, Spot is a dancer whose agility hints at darker potential, although Boston Dynamics prohibits weaponizing its dogs.

What we see in the robodogs may depend on what we need; the charm offensive is creating demand. Promoters for The Rolling Stones’ remastered Tattoo You album release were intrigued enough to contact Boston Dynamics. Founder Marc Raibert explains, “Someone called us, after seeing our dance videos, and said, ‘We’re going to promote this 40-year-old album, can you do something to support it?’ Everybody in our office was just really gassed to be able to do something for the Stones.”

Robots dancing with the Rolling Stones may be novel, but robots are far from new. The term “robot” originated 100 years ago in a play by Czech writer Karel Čapek: RUR: Rossum’s Universal Robots. Robot comes from the Czech word “robota,” meaning forced labor, which is exactly what we expect from Roomba: Do the job and no more. In RUR, however, the robots are synthetic living creatures who eventually cause the extinction of humans. Now we define a robot as “any automatically operated machine that replaces human effort.” It doesn’t need to resemble humans or perform functions just as we do.

Real-life robots have grown even scarier since RUR, although they’re relentlessly presented as nonthreatening. Spot is a cutie who dodges the uncanny valley by still looking mechanical, while the robots wandering around Singapore to enforce social behavior resemble overgrown shoes with wheels: not a creepy sight, although their function is. Lacking weapons, they cannot compel compliance, but as the fictional HAL in 2001: A Space Odyssey illustrated, it’s not the body or the computer hardware that carries the threat; it’s the programming. HAL’s core purpose was to relay information accurately. The conflict arose because “Mission Control did not want the crew of Discovery to have their thinking compromised by the knowledge that alien contact was already real. With the crew dead, HAL reasons, he would not need to lie to them.” HAL stayed true to his programming, which included the programmers’ biases.

And here we are, 100 years past RUR and over 50 years past HAL, dancing on the cliff edge of human-induced extinction due to climate change while simultaneously joke-tweeting about robots that may be capable of wiping us out. Boston Dynamics’ attempts to win us over by hiring a choreographer for the first upgraded robodog video, Uptown Spot in 2018, set us up to imagine robots as only clever laborers with nothing else in mind.

We edged closer to the uncanny valley when a troupe consisting of Atlas (Boston Dynamics’ humanoid bots), Spot, and Handle (which Wired called “Segway-on-mescaline”) danced together. Handle has since been replaced by a descendant, Stretch, whose design is more functional and less flexible; it doesn’t need to dance, because delighting a human audience matters less than doing the job efficiently.

Like it or not, robots are doing a lot more than dancing, especially when the job is too dangerous or impossible for us fragile bio-bods. They’re spreading through the workforce and farming in areas never thought possible. Farmers Insurance plans to use its Spot for field inspections of property claims after disasters and to “explore applications that could help first-responder organizations during scenarios such as post-event search and rescue operations.” Even Disney Imagineering is into robotics, with its autonomous stunt doubles.

The labor shortage is a growing problem, with robots considered part of the solution. The New York Times reported on Servi, a robot response to the shortage of restaurant workers that also benefited the human waiters. “Servi uses cameras and laser sensors to carry plates of food from the kitchen to tables in the dining room, where the waiter then transfers the plates to the customer’s table. The robot costs $999 a month, including installation and support. Servi saved wait staff and bussers from having to run back and forth to the kitchen and gave overworked servers more time to schmooze with customers and serve more tables, which led to higher tips.”

Robots aren’t just helping front of house staff. Behind the scenes in the kitchen, Flippy is using “artificial intelligence, sensors, computer vision and robotic arms to fry fast food, like French fries and chicken wings. The robot, which costs about $3,000 per month, including maintenance, identifies the food, senses the oil temperature, and monitors the cook time.”

In our homes, Roomba, Amazon’s newest robot Astro, and other artificial intelligence devices can help people live independently. When I asked KosAbility members what improved their quality of life, pr0gressivist described the array that keeps their home functioning now, and sets them up for aging in place. “I watched my grandparents deteriorate mentally and physically, and my parents and I had to keep looking out after them. I watched what they couldn’t do and am prepping myself against it, because I don’t have children to look after me. So: robotic vacuum, robot window cleaner, smart plugs, smart lights, smart speaker … I’m setting them all up now so that if I start losing faculties I will still be able to live independently.”

Clearly, robots benefit our lives. Robot PR captivates and entertains us, while home and work robots strengthen our appreciation of robots’ help with jobs we don’t want to do or cannot manage ourselves. But this convenience also comes at the expense of our privacy: all these devices are listening in on our daily lives. Still, willing laborers who make no other noticeable demands seem like an asset.

A New York Times story claims that acceptance of robots in the workforce has benefited from the pandemic labor shortages, and quotes Craig Le Clair, vice president and principal analyst at Forrester, a company that helps businesses “use customer obsession to accelerate growth.” Le Clair asserts, “We’ve all gotten more comfortable with dealing with robots—that’s one of the legacies of the pandemic. We’re able to trade off the creepiness of robots with the improved help characteristics.”

In “How to survive the coming Robotpocalypse,” Community member funningforest concluded that the “Robotpocalypse is not coming, rather it’s already here and happening right now, in that we have become all too trusting and dependent on algorithms. And many of these algorithms are churning out results that are just dead wrong … denying people housing, jobs, loans, education, health care, the freedom to travel … they are opaque and not accountable.”

Why should we trust algorithm developers to make life-altering decisions when their algorithmic bias around gender and skin type proves them incapable of simple tasks, like programming soap dispensers to acknowledge brown skin or building recognition software that accurately discerns faces that aren’t white and male? After all, HAL was compromised by the programmers’ entrenched xenophobia.

Robot interference in human lives is not a far-fetched threat but a present concern in Singapore, as The Guardian pointed out: “The government’s latest surveillance devices are robots on wheels, with seven cameras, that issue warnings to the public and detect ‘undesirable social behaviour.’” Singaporeans say this adds to the constant, dystopian surveillance, but the government says they “were needed to address a labour crunch as the population ages.”

Another industry welcoming a robot incursion is warfare. Earlier this year, Community member Krotor observed, “Sorry, I wrongly thought Terminator was a cautionary tale warning humanity to never empower machines to kill humans. Instead, it apparently is an inspirational reference manual on the joys and benefits of arming killer robots.”

Ghost Robotics is preparing robodog fighting machines. Their Twitter bio claims, “Revolutionizing legged robots. Agile & ruggedized ground drones for military, homeland security and enterprise markets.” Wander through their Twitter feed to see more glamorized robot warriors … if you dare.

Ghost Robotics has linked up with SWORD International (Special Warfare Operations Research and Development) to make sure theirs is the top dog in any fight.

What’s next? Are we okay with robots fighting battles for us? Human Rights Watch considers fully autonomous weapons as “one of the most alarming military technologies under development today,” saying there is “a moral and legal imperative to ban killer robots.” However, the U.S. recently rejected a United Nations binding agreement regulating or banning the use of “killer robots” and called, instead, for a nonbinding code of conduct, essentially saying “trust us.” Should we?

Robotics experts are “spooked by their own success,” claims The Guardian, writing about Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, who wrote a leading textbook on artificial intelligence. “It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed.” Russell explains that the concept was theoretical until “it happened and they weren’t ready for it.”

Russell believes that robots “must check in with humans—rather like a butler—on any decision. But the idea is complex, not least because different people have different—and sometimes conflicting—preferences, and those preferences are not fixed.” HAL was a fictional robot but represents the real threat that arises when a machine is programmed with conflicting or unclear human objectives.

Requiring robots to check in with humans won’t be enough reassurance once artificial intelligence becomes more intelligent than humans. As for when that might happen, Russell says, “I think numbers range from 10 years for the most optimistic to a few hundred years. But almost all AI researchers would say it’s going to happen in this century.”

Stephen Hawking expressed similar concern about the time when artificial intelligence surpasses human intelligence, warning, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Autonomous artificial intelligence, if it does arise, will emerge from algorithms designed by members of a society fraught with systemic racism, sexism, ableism, and other prejudices, then filtered through industries, including those built for warfare and surveillance. But Roomba is safe, right? How wrong can programmers go with a robot designed just to clean floors—barring a pooptastrophe or glitchy updates? 

Our trust in robots, or rather in their programming and other algorithms that specify ethical principles, won’t matter when robots possess superintelligence, “an intellect that is much smarter than the best human brains in practically every field.” A Max Planck Institute study determined that humans wouldn’t be able to control superintelligent machines and this isn’t a farfetched concern. One of the scientists observed, “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Robots are creepily awesome, but are they also perilously flawed? As Spot’s music videos, pandemic labor shortages, and our desire for in-home help illustrate, these tireless laborers are ready to cook french fries, mix cocktails, manage our homes, and dig us out of disaster rubble … for the moment. However, the idea of superintelligence combined with adorable Spot’s transition into an “agile & ruggedized” military ground drone is daunting. How sanguine should we feel about robots, and is it even possible now to draw a line delineating safety?