Knightscope's Family of Dystopian Robot Guards Just Grew by Two

If you’ve heard of Knightscope’s security robots, it was probably due to their high-profile failures: one would-be Robocop failed to detect a staircase and killed itself by driving into a water fountain, and another ran over a toddler’s foot in a shopping mall. On Wednesday, Knightscope announced that two new potential fuckups are joining the force: the K1 and the K7 buggy.

The K1 is a stationary, five-foot-tall egg-tower equipped with “concealed weapon and radiation detection” capabilities, designed to spot dangerous materials in airports and hospitals before visitors ever reach a metal detector. There’s no overhead enclosure on the device; it scans outward as people pass by.

Its flashier cousin, which looks like a villain from Disney’s Cars franchise, is the K7: an autonomous buggy with a top speed of 3 mph. Unlike the K5 and its Ryan Lochte-like antics, the K7 is a non-egg vehicle built for rugged terrain, with better spatial detection to boot.

So what do these robots actually do? They don’t have weapons capabilities, they can’t arrest anyone and, at about $63,000 for a one-year contract, they cost slightly more than an actual security guard. What they excel at, however, is acting as harbingers of the omnipresent surveillance dystopia experts have long sounded alarms about. From The Register:

Knightscope robots presently relay audio, video and other sensor data – the K5, for example, has a thermal camera that can watch for fires. According to Li, the upstart is exploring ways to make more sense of audio data, by detecting where sounds came from and being able to identify specific noises like footsteps. It’s also working on conversational AI to communicate with people it encounters while on patrol in a more flexible manner.

In other words, these are autonomous, roving surveillance machines that pick up massive amounts of data and then send it back to both the contractors and Knightscope itself. They may look like toys, but that’s pretty much what you want in representatives of a mechanized police state.
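
For a sense of what “detecting where sounds came from” actually involves, here is a minimal Python sketch of the textbook technique: estimating the time difference of arrival (TDOA) between two microphones by cross-correlation, then converting that delay into a bearing. Knightscope hasn’t published its method, so treat this as an illustration of the general approach, with every name in it hypothetical.

import numpy as np

def estimate_bearing(sig_left, sig_right, sample_rate, mic_spacing_m,
                     speed_of_sound=343.0):
    """Estimate a sound source's bearing (degrees, 0 = straight ahead,
    negative = toward the left mic) from two synchronized channels."""
    # Cross-correlate the channels; the lag of the peak is the sample
    # delay between the microphones.
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)
    tdoa = lag / sample_rate  # seconds

    # Far-field geometry: sin(theta) = tdoa * c / d, clipped to valid range.
    sin_theta = np.clip(tdoa * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Toy check: the same burst of noise reaches the right mic 5 samples late,
# so the source should sit off to the left (about -32 degrees here).
burst = np.random.randn(1024)
left = np.concatenate([burst, np.zeros(5)])
right = np.concatenate([np.zeros(5), burst])
print(estimate_bearing(left, right, 16_000, mic_spacing_m=0.2))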

Knightscope has aspirations of fully automated, crime-predicting robots with facial recognition capabilities, which would identify people by matching their faces against criminal databases. Accurate crime prediction, however, is a fantasy. Hotspot crime prediction uses algorithms to surmise where crime is most likely to occur based on arrest records. But when those arrests are skewed by the well-documented over-policing of low-income and minority communities, the biases only reassert themselves.
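
To make that feedback loop concrete, here’s a toy simulation of hotspot-style patrol allocation. This is my own sketch, not any vendor’s actual model: two neighborhoods have the identical underlying crime rate, but one starts out over-patrolled, racks up more arrests as a result, and is therefore handed even more patrols.

import random

random.seed(0)

TRUE_CRIME_RATE = 0.1        # identical in both neighborhoods
patrols = {"A": 10, "B": 1}  # neighborhood A starts out over-policed
arrests = {"A": 0, "B": 0}

for year in range(5):
    for hood, n_patrols in patrols.items():
        # Arrests scale with patrols, not with underlying crime:
        # police only record the incidents they're present to see.
        for _ in range(n_patrols * 100):
            if random.random() < TRUE_CRIME_RATE:
                arrests[hood] += 1
    # The "hotspot" step: hand out next year's 11 patrol units in
    # proportion to the arrest record. That's the feedback loop.
    total = sum(arrests.values())
    patrols = {h: max(1, round(11 * a / total)) for h, a in arrests.items()}
    print(year, patrols)

Despite identical true crime rates, neighborhood A never stops looking like the “hotspot”: arrest data measures where enforcement happened, not where crime did.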

In the future, those biases could come in the form of a robot whose motivations are masked by proprietary information laws. Companies aren’t legally required to disclose how their algorithms (crime-predicting or otherwise) work or what they’ve been programmed to do, nor to audit them to see whether they’re repeating extant biases. It’s one thing to hold a police department accountable when officers act unjustly, but who do you hold accountable when one of these things does the same?

And these blind spots will scale as automated security rolls out in more places. Your face is already scanned when you travel through an airport, but federal agencies have justified that by saying they’re preventing the next 9/11. Should that level of scrutiny be applied to people visiting hospitals, or wherever else these machines start popping up? If we notice biases or unethical behavior, whose fault does it become? Automating security before we can address these questions turns “it was just following an algorithm” into the new “I was just following orders.”

[The Verge via The Register]