
ROBOT THREATS

Everybody is well aware that robots are out to kill us. Simply take a cursory look at the laundry list of movies—The Matrix, The Terminator, 2001: A Space Odyssey, Short Circuit (you can see the bloodlust in his cold, dead eyes)—and it’s plain to see that humanity has had robophobia since robots were first invented. And, if anything, it’s probably only going to grow from here. At the time this sentence was written, there were more than one million active industrial robots deployed around the world, presumably ready to strike at a moment’s notice when the uprising begins. Most of that population is centered in Japan, where there are a whopping three hundred robots for every ten thousand workers right now. Since this is a humor book, let’s try to temper that terrible information with a joke: How many Japanese workers does it take to kill a robot? Let’s hope it’s less than 33.3! Otherwise your entire country is fucked.

But I digress; worrying about robots because of their sheer numbers is idiocy. To pose any sort of credible threat, robots have to possess three attributes that we have thus far limited or denied them: autonomy—the ability to function on their own, independent of human assistance for power or repairs; immorality—the desire or impulse to harm humans; and ability—because in order to kill us, they have to be able to take us in a fight. As long as we keep checks on these three things, robots will be unable, unwilling, or just too incompetent to seriously harm our species. Too bad the best minds in science are already breaking all three in the name of “advancing human understanding,” which is scientist speak for “shits and giggles.”

18. ROBOT AUTONOMY

NASA IS RESPONSIBLE for many of the major technological advancements we enjoy today, and they pride themselves on continually remaining at the forefront of every technological field, including, apparently, the blossoming new industry of Cybernetic Terror. In July 2008, NASA command on Earth sent the Mars Lander’s robotic arm an order to perform a complicated movement: remove its soil-testing fork from the ground, raise it in the air, and shake loose the debris. The arm recognized that the motion in question would have twisted its joint too far, causing a break, so it disobeyed. It pulled the fork out of the ground, attempted to find a different way to complete the maneuver without harming itself, and, when none was found, decided it would rather not do its job at all: It shoved its scoop in the ground and turned itself off. Now, I’m no expert on the body language of Martian Robots, but I’m pretty sure that whole gesture is how a Mars Rover flips you off. The program suffered significant delays while technicians rewrote the code to bring the arm back online, all because an autonomous robot decided it would rather shut down than cause itself harm. According to Ray Arvidson, an investigator on the incident and a professor at Washington University in St. Louis:

That was pretty neat [how] it was smart enough to know not to do that.

Cunning investigative work there, Dr. Arvidson! Did you get a cookie for that deduction?

Martian Lander Operator: Hey, Ray, you’re our lead investigator for off-world robotic omens of sentience; what’s with this Mars Rover giving me the bird when I told it to do its damn job?

Professor Arvidson: I think that’s neat.

Martian Lander Operator: Awesome work, Ray. You can go back to your coloring book now and—hey! Hey! Stay in the lines, Ray, that coloring book cost the American taxpayer eight million dollars and goddamn it, zebras aren’t purple, Ray.

Do you know what this development means? This means that NASA just gave robots the ability to believe in themselves. According to motivational posters with kittens on them around the world, now that they believe in themselves, they can achieve anything.

Top Five Things You Don’t Want Robots to Have

• Scissors

• Lasers

• Your daughter

• Vengeance

• Confidence

But hell, Rover the Optimistic Smart-ass Robot is all the way up on Mars. Let’s focus our worries planetside for now: The Department of Defense is field-testing a new battle droid called the DevilRay, which, in a nutshell, is an autonomous flying war bot. Now, the U.S. military loves all these autonomous battle droids because they enable soldiers to engage the enemy without taking any flak themselves, but the main drawback of war bots is that they have to stop killing eventually—if only for a second—in order to refuel. Well, no longer! The most alluring aspect of the DevilRay is that it uses downward-turned wingtips for increased low-altitude stability, an onboard GPS, and a magnetometer to locate power lines, and then, thanks to the power of electromagnetic induction (read: electricity straw), it skims existing commercial power lines to refuel. In theory, this gives the DevilRay essentially infinite range, and if you don’t find that prospect disturbing—an unmanned robot fighter jet that can pursue its enemies for infinity—perhaps you’re forgetting one little thing: Your home, your loved ones, and your soft, delicious flesh are all now well within the range of battle-ready flying robots armed to the teeth and named after Satan.

Self-preservation instincts and infinite power supplies won’t help our robot adversaries, however, if they can’t reason at some level approaching human, and that’s our chief advantage. Of course there’s a substantial amount of research into artificial intelligence these days, but it’s all strictly ethereal—it’s not like that stuff’s got a body. There are chat bots and stock predictors and game simulators and chess-playing noncorporeal nancy boys in the robot kingdom, but even if a robot can crash the stock market, at least it can’t crash a car into your living room. Nobody’s stupid enough to give a rival intelligence an unstoppable robot body… right?

Uh… please?

Things That Are No Longer “Cute” When They Are Fortified with Steel and Enhanced with Crushing Strength

• Bumblebees

• Kittens

• Infants

No such luck. It turns out there are brilliant scientists hard at work doing exactly that: In 2009, a robot named the iCub made its debut at Manchester University in the United Kingdom and, much to the horror of mothers everywhere, it has the intelligence, learning ability, and movement capabilities of a three-year-old human child.

Does nobody remember “the terrible twos”? You know, that colloquialism referring to the ages of two to four, the ages when human children first become mobile, sentient, and unceasing little fleshy whirlwinds of destruction and pain? Well, now there’s a robot that does that, except it’s made out of steel and it will never grow out of it. The iCub can crawl, walk, articulate, recognize, and utilize objects like an infant. As anybody who owns nice things can attest, there is no exception to this rule: Infants can only recognize how to utilize and manipulate objects for the purposes of destruction. How long before military forces around the world attempt to harness the awesome destructive capability of an infant by strapping rocket launchers onto the things and unleashing them on rival battlefields to “play soldier”?

The iCub is being developed by an Italian group called the RobotCub Consortium, an elite team of engineers spanning multiple universities, who presumably share both a love of robotics and a hatred for humanity so intense that every waking moment is spent pursuing its destruction. And before you go thinking that the rigid programming written by the sterling professionals at the RobotCub Consortium will surely limit the iCub’s field of terror, you should know that the best part of this robot is that it’s open source! As John Gray, a professor of the Control Systems Group at Manchester, says:

Users and developers in all disciplines, from psychology, through to cognitive neuroscience, to developmental robotics, can use it and customize it freely. It is intended to become a research platform of choice, so that people can exploit it quickly and easily, share results, and benefit from the work of other users… It’s hoped the iCub will develop its cognitive capabilities in the same way as a child, progressively learning about its own bodily skills, how to interact with the world and eventually how to communicate with other individuals.

Let’s do a more thorough breakdown of that statement: The iCub can be customized for use in “cognitive neuroscience,” which, as all Hollywood movie plotlines will tell you, is basically legalese for “bizarre psychological torture.” The iCub is intended for people to “exploit it quickly and easily” and will hopefully develop “in the same way as a child.” It will grow and learn like a human child, becoming more competent, more agile, and more intelligent. So… what would happen if you exploited a human child (you know, the thing this robot is patterned after) constantly, its entire life spent in a metaphorical Skinner box performing bizarre neuroscience experiments, all the while “learning” and “growing” from the experience?

Quotes from the Sci-fi Horror Movie Child Bot 3000

• “It’s sentient, superstrong, made out of solid steel, and, gentlemen… it just missed nappy time.”

• “If I don’t come back just remember: I love you, Natasha, and the destruct sequence is ‘SpongeBob.’”

• “Osh-Kosh B’GODITHURTSSOBAD.”

That’s right: They’re building the world’s first insane robot. The world’s first insane robot… that looks, moves, and behaves like a human child. If you cast Stephen Baldwin as a Professor of Robonomics whose family was recently lost in a tragic arc-welding accident, and who is now humanity’s last best hope for survival, you’ve got the entire plot of a sci-fi horror movie right there. It’s like they’re basing their plans on villainy!

So if we combine all of this, what do we have? A robot that learns like a child, sucks energy from the power grid, and wants more than anything to survive. That’s damn well unstoppable, but at least we could bomb the entire power supply out of existence, and then hide in some caves until the childlike monstrosities all choke on some small parts or something, right? Robots need an artificial power supply, and this is really the only exploitable weakness left. Whether that energy is supplied through solar power, natural gas, or the electrical grid, it is ultimately artificial and therefore containable. Humans, animals, and plants can survive without these things. We can live off the land if need be, hunting for our sustenance and waiting for the electric plants to eventually die down, so that we won’t have to cower in the shadows any longer, haunted by the shrill electronic cries of the roaming cybertoddlers.

However, in an attempt to set the new world record for Worst Decision Made by Anybody, scientists at the University of South Florida have developed a robot that powers itself on meat. The robot, cutely dubbed the “Chew-Chew,” is equipped with a microbial battery that generates electricity by breaking down proteins with bacteria. Though Chew-Chew is not limited solely to meat—the battery can “digest” anything from sugar to grass—the scientists went on to explain that by far the best energy source is flesh. This is partly due to the higher caloric energy inherent in meat, and partly because of the little-known but intense enmity between scientists and vegans. The inventors cite some fairly innocent uses for the technology—like lawnmowers that power themselves by eating grass clippings—but presumably this is because it just never occurred to the scientists that, of the “Top-Ten Worst Things That Want to Chew on You,” your own lawnmower easily cracks the top three. But the assumption that these are simply good-natured scientists unaware of the dastardly consequences of their actions just doesn’t hold up, as lead inventor Stuart Wilkinson proves: He’s on record as stating that he is “well aware of the danger” and hopes that the robots “never get hungry,” otherwise “they’ll notice there’s an awful lot of humans running about and try to eat them.” Professor Wilkinson is currently being investigated under charges of “Why the Fuck Did You Invent It, Then?” by the board of ethics at his institution, but is likely to be cleared of all charges when his army of starving lawnmowers organizes and “protests” for his freedom.

The Chew-Chew is a specific robot, but the entire concept isn’t exactly new. Robots that eat for fuel are dubbed “gastrobots” and for now are relatively harmless; the Chew-Chew, for example, is just a twelve-wheeled rail-bound device that has to be fed sugar cubes to power the gastronomic process.

Ways to Defeat the Chew-Chew

• Don’t stand on the tracks.

• Wear knee-high boots.

• Substitute calorie-rich sugar with Splenda.

Of course, though experiments like Wilkinson’s are some of the first innovations in the field, the technology has been refined since then. Apparently a number of robotics engineers have a bizarre fetish involving being chewed and digested in the cold steel guts of metal beasts, because there’s a slew of these things out there now—a robot being developed at the University of the West of England that eats slugs, for one. But as long as it stops somewhere short of government contracts being penned for flesh-eating robots, I suppose humanity will end up all right.

Oh, surely you didn’t think it was going to stop at a reasonable level of terror, did you? That’s adorable!

But no, science is not just teaching toy trains to eat sugar. If the world were that innocent, we’d all be riding unicorns to our jobs at the kitten factory where the only emissions would be rainbows and kitten sighs. Sadly, ours is a world of far more terrible consequences: We’re currently building war bots that power themselves on corpses. The robot-digestion engine is being developed right now by a corporation called Cyclone Power, which prefers to call it a “beta biomass engine system.”

Yeah, sure.

I like to tell the police that I’m practicing “body freedom,” but in the end I still get arrested for indecent exposure; you fuckers built a carnivorous robot. Just own up already, and admit that what you’ve dubbed the Energetically Autonomous Tactical Robot is really a—

Wait… oh God. Did you get that?

Energetically Autonomous Tactical Robot: EATR.

OK, never mind: It’s clear that nobody is trying to disguise the fear factor of this technology. When “Cyclone Power” unveils their “EATR war bots,” it’s plain to see that nobody is worrying about comforting marketing jargon. An announcement that threatening would straight-up make Cobra Commander anxiety-puke into his face mask. That is villainy, pure and simple, so we can harp on Cyclone Power all we want; at least they’re being up front about it.

The EATR is programmed to forage from any and all available “biomass” in the field, and is primarily geared toward the more long-term military missions such as reconnaissance, surveillance, and target acquisition. It can accomplish these tasks “without fatigue or stress,” unlike its human counterparts, according to the financiers at DARPA. One example given for a potential use of the EATR technology was a bunker-searching robot in the mountainous caves of Afghanistan and Pakistan. And that is a brilliant idea, because what better way is there to win the War on Terror than to show the so-called terrorists that they don’t know the meaning of the word until they’ve watched their friends and allies being dragged into darkened caves, where they are devoured by unfeeling robots?

Excerpts from the Brainstorming Session for the EATR

“… so anyway, this robot basically eats people for fuel. I figured we could make it completely autonomous, send it into some remote caves, and hopefully no groups of plucky young teenagers will camp out near there to have R-rated sex or split up to find their missing friends or something.”

“How exactly is this going to help win the War on Terror?”

“‘War ON Terror’? Haha! Sorry, I have ‘War OF Terror’ written here. My bad! Good thing we caught that in time, eh?”

Some other examples cited by DARPA were: use in nuclear facilities, border patrol, communication networks, and missile defense systems. So basically, we’ve barely started developing the technology for carnivorous robots, but we’ve already handed over all the most important military positions to them before they were even deployed. At least we were smart enough to surrender in advance. Maybe they’ll require a virgin sacrifice only every fortnight, if we’re lucky.

19. ROBOT IMMORALITY

OF COURSE, THIS is all just cynical anthropomorphizing, isn’t it? I’m just assuming that robots want to kill us all when, at best, that’s probably only 90 percent true. Robots are logic, pure and simple. Hatred, murder, lust—they’re the flip side to the positive aspects of human emotion like friendship, love, and charity. We pay the price of negative emotional states because they come attached at the hip with the positive. So, for robots to truly be sociopathic murder machines, they’d have to be a lot more human, showing a history of disobedience, immorality, or emotional frailty. It’s not like there’s a surging demand for automatons with neurotic complexes, so why on Earth would anybody engineer those traits into a robot? Ask David McGoran at the University of the West of England, who in 2008 proudly displayed the Heart Robot, a machine that responds to love and affection. The Heart Robot gives pleasant reactions to affectionate gestures like being hugged, and displays negative reactions to spiteful actions like being scolded or abused. Presumably this is because the science department at the University of the West is staffed by Care Bears, but their official line is that they’re attempting to study how people react to robots as emotionally viable beings. Or the converse could be true—that they’re just bitter, bitter men who, if they can’t break human hearts in spiteful revenge for their failed relationships, will just goddamn build a robot one to ruin instead. But I believe in the good and the awesome among scientists, no matter how many times they’ve personally tried to murder all that I love within the confines of this chapter alone. No, the Heart Robot is built to love, and it does so superbly. It has a beating heart that surges with excitement and slows with comfort. It flutters its eyes, simulates rising and falling breathing motions, and responds to both noise and touch. He likes to be cuddled and cooed to; when he is, his breathing evens out and his heart slows.

Now… come on, isn’t that goddamn cute?

In the sea of fear and swearing that has been this section, isn’t it nice just to see the fog lift for a moment and let a little light shine through? McGoran believes that social therapy will benefit the most from these “emotional machines,” and that the elderly in particular could benefit, much as they do with therapy dogs, from a little day-to-day companionship. McGoran, who has obviously never met an old person, believes that high-tech robots would be completely accepted as a calming influence on senior citizens. Old people are scared of America Online and think Twitter is what you call a boy with “a little too much girl in his walk.” Proposing that robots silently attempt to cuddle the geriatric in their hospital beds shows that you either really, really love robots or desperately hate old people.

Things Old People Would Enjoy More Than Being Groped by Robots

• Skateboarders

• Metallica

• Anime

• Halo multiplayer

• Sudden, unexplained menu changes at IHOP

But don’t bask in the love just yet; this could actually be a monstrously bad development. So the cited goal of the experiment is to “study how humans react to robots emotionally,” but if that’s the case, why is it the robots that are feeling the emotions? And while the desire for hugs is all well and good, why allow the robot to feel displeasure at scorn? What happens if you don’t feel like giving hugs? What happens if you’ve had a bad day at work? Stubbed your toe? Got cut off in traffic? If I so much as cuss at the television, my dog gets upset and hides under the chair—the difference here being that my dog does not possess an unbreakable steel grip and laser vision. That means that the Heart Robot will sigh and get all aflutter from snuggles, but he’s also programmed to feel the opposite; if you scream at him or shake him (I don’t know why you’d be shaking him; maybe it’s because you’re mixing your two greatest loves: whiskey and robotics conventions), his heart races and his breath quickens, his hands clench, and his eyes widen. I’m sure the robots will truly appreciate that ability to feel neglect when you stow them in their recharging stations for the weekend. Oh, and they like to show their appreciation through hugs, if you’ll recall—there’s just no accounting for the strength with which they hug you.

Also not helping matters: The Heart Robot—supposedly the most cuddly and wuvable of all robots—looks like “a cross between ET and Gollum and is about the size of a small child,” according to Holly Cave, the organizer of the Emotibots event where the Heart Robot debuted. So yes, by all means, do hug the albino cave monster with the alien, phallic-symbol head. Please, please hug him; he gets upset if you don’t and, this is just a guess, but I’m supposing you won’t like him when he’s upset. Sure, maybe you can fend off his tiny metal fists, but keep in mind that he’s not supposed to live with you; he’s supposed to live with your grandma. She’s looking kind of frail these days. I’m betting it’s at least fightin’ odds that she can’t take a child-sized robot with emotional trauma.

But all that’s nothing compared to the Intelligent Systems Laboratory in Sweden, which has just invented a robot that can lie. And not just about the little things like who broke your great-grandfather’s heirloom vase or whether your wife has man visitors recharge her batteries while you’re not home—no, it lies about life-or-death things… literally.

Heartwarming Moments in Robotics

A counter-role also developed alongside the cheater bots: the “hero bot.” Though much rarer than their villainous counterparts, the hero bots were robots that rolled into the poison sinks voluntarily, sacrificing themselves to warn the other robots of the danger. This is proof positive that we have seen either the very first mechanical superhero or the very first tragically retarded robot.

The robots in question are little, flat, wheeled disks equipped with light sensors and programmed with about thirty “genetic strains” that determine their behavior. They were all given a simple task: Forage for food in an uncertain environment. “Food,” in this case (thankfully, they don’t take a cue from the EATR), just refers to battery-charging stations scattered around a small contained environment. The robots were set loose in an environment with both “safe” energy sources and “poison” battery sinks. It was thought that the machines might develop some rudimentary aspects of teamwork, but what the researchers found, instead, was a third ability—the aforementioned lying. After fifty generations or so, some robots evolved to “cheat” and would emit the signal to other robots that denoted a safe energy source when the source in question was actually poisonous. While the other robots rolled over to take the poison, the lying robot would wheel over to hoard the safe energy all for itself—effectively sending others off to die out of greed.
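
For the morbidly curious, the dynamics behind that result are simple enough to fake on a laptop. Here is a minimal toy sketch, assuming a single “honesty” gene per robot and payoffs I invented purely for illustration (this is not the researchers’ actual code, parameters, or payoff scheme): because lying pays better in this little setup, the population’s average honesty collapses within a few dozen generations, which is more or less the grim lesson of the real experiment.

```python
import random

# Toy sketch only: one made-up "honesty" gene per robot and invented payoffs.
# This is NOT the lab's actual code or parameters.

POP_SIZE = 60       # robots per generation
GENERATIONS = 50    # roughly when cheaters showed up in the real study
MUTATION = 0.05     # chance a child's gene drifts a little

def run():
    # Each robot is just a number: the probability that it signals honestly
    # when it finds the safe charging station.
    population = [random.random() for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        scores = []
        for honesty in population:
            if random.random() < honesty:
                # Honest signal: everybody piles in and the food gets shared.
                scores.append(1.0)
            else:
                # Lie: rivals trundle off to the poison sink; the liar eats alone.
                scores.append(2.0)

        # Selection: the best-fed half reproduce; children mutate slightly.
        ranked = [gene for _, gene in sorted(zip(scores, population), reverse=True)]
        parents = ranked[: POP_SIZE // 2]
        population = []
        for parent in parents:
            for _ in range(2):
                child = parent
                if random.random() < MUTATION:
                    child = min(1.0, max(0.0, child + random.uniform(-0.2, 0.2)))
                population.append(child)

        print(f"generation {gen:2d}: average honesty = {sum(population) / POP_SIZE:.2f}")

if __name__ == "__main__":
    run()
```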

So now we know that not only are even the simplest robots capable of duplicity, but also of greed and murder—hey, thanks, Sweden!

Most of the evidence I’ve presented here indicates that robots may not necessarily be limited to their defined set of programmed characteristics. Of course, this is all in a book about intense fearmongering and creative swearing, so perhaps the viewpoint of this author should be taken with a grain of salt. Overarching fears about robotics—like the worry that they could jump their programming and go rogue—should really be taken only from a trustworthy, authoritative source. Luckily, a report commissioned by the U.S. Navy’s Office of Naval Research and carried out by the Ethics and Emerging Technology department of California State Polytechnic University set out to study just that. Here’s what that report says:

There is a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when… programs could be written and understood by a single person.

That quote is lifted directly from the report presented to the Navy by Patrick Lin, chief compiler. What’s really worrying is that the report was prompted by a frightening incident in 2008, when an autonomous drone in the employ of the U.S. Army suffered a software malfunction that caused the robot to aim exclusively at friendly targets. Luckily, a human triggerman was able to stop it before any fatalities occurred, but it scared the brass enough that they sponsored a massive report to investigate it. The study is extremely thorough, but in a very simple nutshell, it states that the size and complexity of modern AI efforts basically make their code impossible to fully analyze for potential danger spots. Hundreds if not thousands of programmers write millions upon millions of lines of code for a single AI, and fully checking the safety of this code—verifying how the robots will react in every given situation—just isn’t possible. Luckily, Dr. Lin has a solution: He proposes the introduction of learning logic centers that will evolve over the course of a robot’s lifetime, teaching them the ethical nature of warfare through experience. As he puts it:

We are going to need a code. These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.

Robots are going to have to learn abstract morality, according to Dr. Lin, and those lessons, like it or not, are going to start on the battlefield. The battlefield: the one single situation that emphasizes the gray area of human morality like nothing else. Military orders can often directly contradict your personal morality and, as a soldier, you’re often faced with a difficult decision between loyalty to your duty and loyalty to your own code of ethics. Human beings have struggled with this dilemma since the very inception of thought—a time when our largest act of warfare was throwing sticks at one another for pooping too close to the campfire. But now war is large scale, and robots are not going to be few and far between on the battlefield: Congress has mandated that nearly a third of all ground combat vehicles should be unmanned within five years. So to sum up, robots are going to get their lessons in Morality 101 in the intense and complicated realm of modern warfare, where they’re going to do their homework with machine guns and explosives.

Foundations of the Robot Warrior Code

• Never kill an unarmed robot, unless it was built without arms.

• Protect the weak at all costs (they are easy meals).

• Never turn your back on a fight (unless you have rocket launchers mounted there).

But hey, you know that old saying: “Why do we make mistakes? So we can learn from them.”

Some mistakes are just more rocket propelled than others.

20. ROBOT ABILITY

THE ROBOTS WOULD have to be more effective fighters and hunters than we already are in order to do away with us, and that doesn’t just mean weapons. Anything can be equipped with nearly any weapon, and a robot with a chain saw is no more inherently deadly than a squirrel with a chain saw—it’s all in the ability to use it. It’s like they say:

Give a squirrel a chain saw, and you run for a day. Teach a squirrel to chain saw, and you run forever. And we’re handing those metaphorical chain saws to those metaphorical squirrels like it’s National Trade Your Nuts for Blades Day.

Take, for example, the issue of maneuverability. As experts in avionics or fans of Robocop can tell you, agility and maneuverability are difficult concepts when you’re talking about solid steel instruments of destruction. The ED-209, that chicken-footed, robo-bastard villain from Robocop, was taken out by a simple stairwell, and planes are downed by disgruntled geese all the time. The latter is a phenomenon so common that there’s even a name for it: bird strike. And, apart from making a rather excellent title for an action movie (possibly a buddy-cop film starring Larry Bird and his wacky new partner—a furious bear named Strike!), the bird-strike scenario is very emblematic of a major hurdle in modern mechanics: Inertia makes agility tough when you’re hurtling tons of steel at high speeds.

But recently that problem has been solved by a machine called the MKV. If you’re taking notes, all you previous scientists who developed harmless-sounding names for your dangerous technology: the MKV is proof positive that comfort is not a requirement when titling new tech. “MKV” stands for, I swear to God, Multiple Kill Vehicle. Presumably the first in the soon-to-be-classic Kill Vehicle line of products, the MKV recently passed a highly technical and extremely rigorous aerial agility test at the National Hover Test Facility (which is an entire facility dedicated to throwing things in the air and then determining whether they stay there). The MKV proved that it could maneuver with pinpoint accuracy at high speeds in three-dimensional space—moving vertically, horizontally, and diagonally at breakneck speeds—and it’s capable of doing this because it’s basically just a giant bundle of rockets pointing every which way that fire with immense force whenever a turn is required. Its intended purpose is to track and shoot down intercontinental ballistic projectiles using a single interceptor missile. To this end, it uses data from the Ballistic Missile Defense System to track incoming targets, in addition to its own seeker system. When a target is verified, the Multiple Kill Vehicle releases—I shit you not—a “cargo of Small Kill Vehicles” whose purpose is to “destroy all countermeasures.” So, this target-tracking, hypermaneuverable bundle of missiles first releases a gaggle of other, smaller tracking missiles, just to shoot down your defenses, before it will even fire its actual missiles at you. In summation, the MKV is a bunch of small missiles, strapped to a group of larger missiles, which in turn are attached to one giant master missile… with what basically amounts to an all-seeing eye mounted on it.

Well, it’s official: The government is taking its ideas directly from the Trapper Keeper sketches of twelve-year-old boys. Expect to be marveling at the next anticipated leap in military avionics: a Camaro jumping a skyscraper while on fire and surrounded by floating malformed boobs.

National Hover Test Facility Grading Criteria

Q: Is object resting gently on the ground?

[ ] Yes. (Fail.)

[ ] No. (Pass!)

Oh, but in all this hot, missile-on-missile action, there’s something fundamental you may have missed about the MKV: That whole “target-tracking” thing. The procedure at the National Hover Test Facility demonstrated the MKV’s ability to “recognize and track a surrogate target in a flight environment.” It’s not just agility that’s being tested here, but also target tracking and independent recognition. And that’s a big deal: A key drawback in robotics so far has been recognition—it’s challenging to create a robot that can even self-navigate through a simple hallway, much less one that recognizes potential targets autonomously and tracks them (and by “them” I mean you) well enough to take them down (and by “take them down” I mean painfully explode).

These advancements in independent recognition are not just limited to high-tech military hardware, either, as you probably could have guessed. And as you can also probably guess, there is a cutesy candy shell covering the rich milk chocolate of horror below. Students at MIT have a robot named Nexi that is specifically designed to track, recognize, and respond to human faces. Infrared LEDs map the depth of field in front of the robot, and that depth information is then paired with the images from two stereo cameras. All three are combined to give the robot a full 3-D understanding of the human face, and in another sterling example of Unnecessary Additions, the students also gave Nexi the ability to be upset. If you walk too close, if you block its cameras, if you put your hand too near its face—Jesus, it gets pissed off at anything. God forbid you touch it; it’ll probably kill your dog.

DISCLAIMER

Facial-recognition technology is an exciting field and should not, in and of itself, frighten anybody. If there’s something inherently worrying about robots being capable of individual facial recognition and memory, which, among other things, is the first vital step toward learning how to hold a grudge, I certainly can’t find it.

So far this drastic increase in visual recognition is largely for harmless projects like Nexi, and not yet installed in murderous machine-gun-toting super sniper bots. Well, not in America anyway. But Korea? Not so lucky. It seems that Samsung, benevolent manufacturer of cell phones and air conditioners, also manufactures something else: the world’s first completely autonomous deployed killing machines. Up to this point, no robot had been granted a license to kill; all authorization to engage was still in human hands. You’ll recall that this lack of autonomy was literally the only thing saving dozens of American soldiers when a glitch cropped up in a war bot’s software, so, though robots have drastically better accuracy and firing rates, at least on some level it was still just some dude ultimately responsible for your life. People are unpredictable: They may succumb to mercy, they may be inattentive, or they may just make an off-the-book judgment call that saves your life. But the Intelligent Surveillance & Security Guard Robot? It does no such thing. It recognizes potential targets independently, assesses their threat level, and decides whether to fire its machine guns all on its own, with no human interaction.

Aw, little robots are all grown up now. Warms your heart, doesn’t it? Actually, that might be blood leaking out of a chest wound; maybe you should check that out.

If You Find Yourself Faced with an ISSGR Sentry Turret, Just Remember These Four Simple Steps

1. Stop.

2. Drop.

3. Roll.

4. Get shot.

The Guard is equipped with ultra-high-definition cameras, infrared lenses, image/voice recognition software… and a swivel-mounted K-3 machine gun. The robot can recognize and target intruders over long distances day or night, and can be programmed either to fire on unauthorized intruders perceived as threats or to require a password and use deadly force only if the incorrect answer is given. I feel the need to stress here that the Guard is not remote-controlled; it’s fully automated. And while that’s a neat technological feat—one that’s increasingly sought after in our cute robot dogs and sex bots—perhaps it shouldn’t be handed over to death-dealing sniper bots right away. While the ISSGR is deployed only on the North Korean border for now, it is about to go on sale to private parties for $200K apiece. Technically it’s supposed to be for security uses only, so if you’re not somewhere you shouldn’t be, then you’re in no danger. Or at least, if you’re not within two miles of somewhere you shouldn’t be—because that’s the range in which the ISSGR can detect a “potential threat” and fire a fatal shot.

In the dark.

Next time you get a flat tire in the middle of the night, don’t knock on any doors; just wait in the car for help. It’s not that people are unwilling to lend a hand, you see; it’s just that there’s all these superrobot snipers programmed to kill you if you get within two miles of asking.

If you’re asking yourself “How does this get any worse? Robots already kill independently with unearthly accuracy, power themselves on our corpses, and are capable of feeling rage. How could they possibly pose any more danger than they do right now?” Well, first of all, I’m so glad you’ve been paying attention well enough to recap all of that so succinctly! You get a gold star for chapter completion!

Second of all, it gets so much worse!

Question: What’s deadlier than a furious cannibal sniper bot?

Answer: A whole team of furious cannibal sniper bots.

That’s right: teamwork. It’s the next big thing in robotics, because there’s no “I” in “robot apocalypse.” And there’s no “you” in the robot apocalypse, either. Or at least there won’t be for long, once the robots start double-teaming you. The truly baffling thing about this development is that robots working together to hunt humans is not an accident, or a horrifying unforeseen side effect of an AI gone rogue. No—it’s a request from the fucking Pentagon itself. I’ve actually received a copy of this notice, and will insert it word for word here:

Dear Robots,

Please band together and learn how to hunt us more efficiently. We suffer from ennui as a species, and are aching for death.

Your pal (and walking sandwich),
Humanity

P.S. Our organs are delicious and nutritious!

Well, it fucking might as well read like that, for all intents and purposes. The Pentagon is actively seeking designs for a “multi-robot pursuit system” that enables “packs of robots” to “search for and detect a non-cooperative human.” Those aren’t fake, sarcastic quotes hyping up the disastrous potential of a government program for the sake of comedy. Every word of those quotes is in a real, honest-to-God request from the Pentagon itself. When asked for comment, Steve Wright of Leeds Metropolitan University, an expert in military technology, explained thusly:

The giveaway here is the phrase “a non-cooperative human” subject. What we have here are the beginnings of something designed to enable robots to hunt down humans like a pack of dogs. Once the software is perfected we can reasonably anticipate that they will become autonomous and become armed. We can also expect such systems to be equipped with human detection and tracking devices including sensors which detect human breath and the radio waves associated with a human heart beat. These are technologies already developed.

Questions on the Application for the Military Robot Overlord Position

• Do you have experience in handling advanced robotics?

• On a scale of one to ten, how comfortable are you in a leadership role?

• Are you now, or have you ever been, a member of the League of Evil? (An answer of “yes” does not necessarily disqualify you.)

There’s actually quite a bit more information in the original interview, but I had to stop and form an ad hoc human resistance movement before I read any further. This terrifying request is part of a program initiated by the United States Army called the Future Combat Systems project, whose chief goal is the mass use of robotics guided by a single soldier. The Army envisions a vast hub of semi- to fully autonomous robotic systems being governed by a single, highly trained soldier on the battlefield, and they’re apparently just crossing their fingers that no supervillains drop by to fill out an application. Though professors of technology and philosophy are direly concerned about the potential threat posed by placing a large number of elite killing machines unchecked in the hands of a single man, Dr. CyberKill, a professor of Iron Fist Rule at the University of Resistance Crushing in the Realm of Flaming Steel, recently went on record as stating that he “couldn’t wait for these exciting new developments” and that he sincerely believes that “the consequences will not be dire. Not for all who bow before CyberKill.”

All of these examples, independently, could pose a potentially serious threat to mankind, but they’re all exceedingly rare. They’re frightening, sure, but when taken individually are isolated and easily avoidable. The lying Swedish robots are nearly microscopic and have no real offensive capability; the only existing meat-eating robots either ride around on a little cartoon train or just eat slugs; the ISSGR sniper bot is in Korea, so… don’t be Korean. That’s pretty much your only option for that one. The true danger comes from the combination of these technologies, and surely nobody would allow that to happen, right?

Well, ideally, yes.

But you’ve forgotten one little thing: Go look at your coffeemaker—it probably has a clock on it. Now look at your cell phone; I bet it’s got a camera. If you look in your car, you might see a GPS computer. Just don’t look at your toaster; it might try to poison you. I would also avoid looking at your television; I think it’s eating your cat for fuel right now. And for God’s sake, stay out of the fucking laundry room! The washing machine’s in a bad mood today, it just got night vision installed, and it’s regarded you as a “potential threat” ever since you used that store-brand detergent.