All This Dystopia, and for What?

When privacy-eroding technology doesn’t deliver on its promises.

Mr. Warzel is an Opinion writer at large.


When you signed up for this newsletter you may have noticed the language indicated it would be a “limited run.” And like all limited runs, ours is coming to an end next week.

We’re winding down next Tuesday and taking a brief hiatus. Next month, The Privacy Project newsletter will evolve into The New York Times’s tech newsletter, written by my colleague Shira Ovide. Every weekday, it’ll help you understand how technology is changing all aspects of our lives.

If you no longer wish to receive the email, simply unsubscribe at the bottom of this newsletter before March 1. We’re eager to hear what you’d like more or less of so we can make the newsletter even better for you. Please share your thoughts on this form. A reporter or editor may follow up with you to learn more.

I’ll save the goodbyes and lessons for next week’s edition, but I just wanted to say that I so appreciate your readership and thoughtful emails and comments over this period.

A correction: Last week’s column misidentified the developer of Apache Struts. Apache Struts is an open source project of the Apache Software Foundation; Adobe was not the developer.

In the year I’ve been writing this column, and voraciously reading articles about digital privacy, an unsettling theme has emerged. A report introduces a piece of technology with terrifying, privacy-eroding implications. The technology — facial recognition, digital ad tracking, spyware, you name it — is being rapidly deployed by companies that aren’t considering the potential societal harms. The report produces understandable frustration and concern. Then, upon further examination, the claims regarding the technology break down. That groundbreaking piece of technology, it turns out, is deeply flawed. Instead of a perfect panopticon, you have a surveillance-state equivalent of a lemon, or worse yet, total snake oil.

The trend is most common when it comes to facial recognition. Clearview AI, the facial recognition company that scrapes billions of images from websites and social media platforms, claimed 100 percent accuracy when pitching its product to police departments and suggested it employed testing methodology “used by the American Civil Liberties Union.” The A.C.L.U. vehemently disagreed, telling BuzzFeed News that Clearview’s accuracy claim “is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands.”

NEC, another facial recognition giant, is facing similar scrutiny. A recent profile of the company on the website OneZero cites a 2018 analysis of commercial facial recognition systems that shows “the algorithms were more than 30 percent less accurate when attempting to identify women of color compared to white men, making systems little more accurate than a coin toss.”

Facial recognition testing in general is still new, and privacy experts are concerned about its rigor. Independent audits of facial recognition systems are few and far between, and those that exist are not reassuring. “In trials of the NEC technology in London, one of the only independent analyses of NEC’s algorithm found that 81 percent of 42 people flagged by the facial recognition algorithm were not actually on a watch list,” the OneZero report said.

An NBC News investigation into Amazon’s Ring doorbell cameras suggested that their porch-surveillance technology hasn’t proved all that effective in catching criminals. Thirteen of the 40 jurisdictions NBC News reached “said they had made zero arrests as a result of Ring footage,” while around a dozen others “said that they don’t know how many arrests had been made as a result of their relationship with Ring — and therefore could not evaluate its effectiveness.”

The examples are everywhere. Software intended to scan job candidates’ social media posts for background checks sounds like a creepy way to judge applicants, but in practice the software seems unable to recognize and appropriately categorize common human traits like sarcasm or humor, rendering it mostly useless.

The online advertising industry, which lays the groundwork for most of the everyday tracking and data collection we face, is equally unreliable. Though apps, platforms and data brokers are following our every click, keystroke and physical movement via our phones, the profiles they assemble can still be full of errors. Take Equifax, the data broker hacked by the Chinese in 2017. As Aaron Klein of the Brookings Institution wrote in the wake of the hack, “More than one in five consumers have a ‘potentially material error’ in their credit file that makes them look riskier than they are” to lenders.

And while digital marketers are keen to play up the customer insights from the metadata they collect via our browsing, our understanding of the effectiveness of data to influence user behavior is still quite new. For example, despite the (justifiable) shock and outrage over the Cambridge Analytica scandal, it’s still hard to quantify exactly what role psychographic profiling played in influencing votes during Brexit or the 2016 election. Some skeptics suggest there’s not enough empirical evidence to reach a scientifically sound conclusion about Big Data’s ability to influence complex behavior like voting.

The same may be true for the entire digital ad industry. A fantastic deep dive into the ad world by The Correspondent illustrated that despite the assumptions of many marketers, there’s a great deal that’s unknown about the efficacy of digital ads.

“When these experiments showed that ads were utterly pointless, advertisers were not bothered in the slightest. They charged gaily ahead, buying ad after ad,” the article said. “Even when they knew, or could have known, that their ad campaigns were not very profitable, it had no impact on how they behaved.” Hundreds of billions of dollars are spent globally in the industry, but as the report concluded, “Is online advertising working? We simply don’t know.”

The above examples all represent a different, equally troubling brand of dystopia — one full of false positives, confusion and waste. In these examples the technology is no less invasive. Your face is still scanned in public, your online information is still leveraged against you to manipulate your behavior and your financial data is collected to compile a score that may determine whether you can buy a home or a car. Your privacy is still invaded, only now you’re left to wonder if the insights were accurate.

As lawmakers ponder facial recognition bans and comprehensive privacy laws, they’d do well to consider this fundamental question: Setting aside even the ethical concerns, are the technologies that are slowly eroding our ability to live a private life actually delivering on their promises? NEC and other companies argue that outright bans on technology like facial recognition “stifle innovation.” Though I’m personally not convinced, there may be kernels of truth to that. But before giving these companies the benefit of the doubt, we should look more closely at the so-called innovation to see what we’re really gaining as a result of our larger privacy sacrifice.

Right now, the trade-off doesn’t look so great. Perhaps the only thing worse than living in a perfect surveillance state is living in a deeply flawed one.

L.A.P.D. automatic license plate readers pose a massive privacy risk, audit says.

The myth of the privacy paradox.

I got a Ring doorbell camera. It scared the hell out of me.


