What Hong Kong's Protestors Can Teach Us About the Future of Privacy

Photo: AP

Something odd happened in Hong Kong last week. Protestors against a controversial proposed extradition bill were afraid to use their metro cards. Instead of swiping their cards through the turnstiles of the city’s subway system, they lined up to buy single-journey tickets with cash, apparently worried about “leaving a paper trail” that could prove their presence at the demonstration.

Anyone who has been to Hong Kong knows how ubiquitous the Octopus payment card is. When I first visited to attend a privacy conference in 2017, a souvenir card was part of the conference swag bag, and I used it for everything from taking the ferry to shopping at 7-Eleven. Using a physical ticket instead of a rechargeable Octopus card isn’t just less convenient, it’s also more expensive. What made the protestors so worried?


Around the world, police and intelligence agencies are conducting secret, real-time surveillance of civic spaces—and not just during times of protest. In addition to greatly expanding the depth and breadth of surveillance, new technologies are changing how it’s performed. Today, spying can be conducted remotely and invisibly. Imposter cell towers called IMSI catchers intercept text messages, calls, and internet traffic from anyone who happens to be present in a particular area. Cameras equipped with facial recognition tech are turning people into walking ID cards. Even something as innocent-seeming as public wifi has been used to track commuters.

With powerful tools like these out there, the data traces left by subway swipes might seem like a secondary concern. (Most versions of the Octopus card are anonymous, though they can be linked to people’s credit cards and police have used the cards to track down suspects.) The scene in Hong Kong, however, illustrates an important truth: Privacy harms are often a time-shifted risk. In a world that runs on data, everyday uses of technology can suddenly put people in danger when circumstances change. This also means the opposite: Most of the time, most people won’t feel the cost of having their data exploited.

The vast majority of people won’t notice that a facial recognition camera has identified them in a crowd and run their face against a list of suspects. We don’t feel anything when some of the world’s largest apps leak our data to third parties in the murky advertising ecosystem. And when our journeys through increasingly connected cities leave a disturbingly detailed trail, it doesn’t usually result in negative consequences—up until the moment it does. In fact, most data-intensive systems are designed to be frictionless and easy. “Cash is awkward,” a VISA ad that showed tourists fumbling with coins and inconveniently placed money belts declared in 2017.


This is why the loss of privacy often seems like no big deal—a small price to pay for the convenience of the digital world. That is an all-too-common misunderstanding, and one with grave consequences: privacy invasions are often invisible, their harms frequently arrive only in the future, and they always fall more heavily on some people than on others.

The moment you are protesting against your government, a seamless public transit system can turn into a rich source of data for surveillance and crowd control. Today, you might be young, healthy, and (if you’re lucky!) live in a country that has universal health care. Tomorrow, social services might get cut, putting you in desperate need of private insurance. Through data brokers, that private insurance company could obtain information from the mood tracking app on your phone, your purchases at your online pharmacy, or your route to your regular therapy sessions. These are all types of data that are routinely tracked, sold, and shared today. Whether it’s due to austerity or a changing political climate, our digital doppelgängers might come back to haunt us. In fact, they are already haunting some.


As is often the case with technology, the future is already here; it’s just not equally distributed. As Sam Adler-Bell put it in The New Inquiry, “for the underclasses, privacy — in the form of access to ungovernable spaces — has never been on offer.” Today, old inequalities are reappearing in novel and unexpected forms. Facial recognition is a perfect example. For basically anyone who isn’t white or male, error rates are much higher. The hands-free convenience of paying with your face is only convenient if your face is actually being recognized. And the mass deployment of this tech is only invisible if those systems don’t confuse your face with that of a wanted suspect. Furthermore, for political dissidents, investigative journalists, and undocumented immigrants, facial recognition may already mean the end of anonymity in public spaces.

In an increasingly datafied world, we need to future-proof the devices, services, and infrastructures we rely on to make sure they don’t betray us when we need them most: at times of political unrest or when we are at our most vulnerable. Engineering approaches like Privacy By Design help to build systems that incorporate privacy from the outset. We need to stop thinking about privacy as an individual experience and start thinking about it as a right whose violation affects us collectively—with the heaviest burden often imposed on those who are already marginalized. Most importantly, we have to recognize that a world of technological totalitarianism is not inevitable. One of the most pressing tasks of our generation will be to establish and vigorously defend the rights, norms, and rules that govern powerful technologies. This includes the companies that build them, as well as the states that deploy them.


Under the guise of protecting democratic society, governments across the globe are introducing new surveillance technologies without adequate regulations and safeguards. These very same technologies, however, can threaten democratic participation and dissent—undermining democracy itself. This is not to say that new technologies should never be used, but they should be regulated, used transparently, deployed based on reasonable suspicion, designed to minimize impact on our digital security, and subject to effective and independent control and supervision.

Frederike Kaltheuner works on emerging technology and policy. She heads Privacy International’s work on corporate surveillance.

