Signal App vs. Noise

Privacy tech is at war with authoritarian governments, the legacy tech industry, the legacy financial system… and now the AI industry?

Dana J. Wright
11 min read · Feb 13, 2023
Desert warrior with Signal App helmet, by Dana J. Wright
Image created by the author with Midjourney and Photoshop.

An interesting piece surfaced in The New York Times recently.

At first blush it kind of reads like one of those “searing takedowns” of our technology overlords.

“We have a technologically driven shift of power to ideological individuals and organizations whose lack of appreciation for moral nuance and good governance puts us all at risk,” it says.

But the villain of the piece is not Bezos, Zuckerberg or Musk — it’s the 40 or so software engineers who support the open source messaging app, Signal.

“The ethical universe, according to Signal, is simple,” the author writes.

“The privacy of individuals must be respected above all else, come what may. If terrorists or child abusers or other criminals use the app or one like it to coordinate activities… behind impenetrable closed doors, that’s a shame — but privacy is all that matters.”

What’s this guy’s beef with Signal?

Turns out the author, Reid Blackman, is a prominent government consultant specializing in AI, an important piece of context The New York Times failed to disclose.

According to one of his online bios, Blackman is a “renowned AI ethicist and philosopher advising leaders on ethical AI principles for risk mitigation.”

Ethical Machines book cover

He also wrote a book, titled “Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI.”

When I discovered this, his takedown of Signal (and privacy tech more broadly) made a lot more sense.

See, one of the main constraints on the power of AI is the amount of source material it has to draw from in order to build predictive models and produce useful results.

This is not a trivial issue.

AI is currently enjoying a moment in the sun, kind of like Napster circa 2000, right before the lawsuits started flying.

It’s a true revolution with no precedents, no guard rails and no rules.

I myself have been having a lot of fun with it. Hardly a day goes by that I don’t use ChatGPT or Midjourney for one thing or another, just like millions of other people.

The general consensus seems to be that AI has won: there is no going back, and these tools are here to stay.

Screenshot of Napster
Napster client for Mac OS 9 (aka Macster) which launched in 2001. Source: Wikipedia.

But I’m not so sure.

Getty just sued Stability AI for stealing its images.

A group of independent artists also sued Stability AI, Midjourney and DeviantArt — for “infringing on the rights of millions of artists” by training their AIs on five billion images scraped from the web without consent.

And perhaps the most interesting one: Matthew Butterick is suing Microsoft, GitHub, and OpenAI — alleging that the companies’ AI-powered coding assistant, GitHub Copilot, is effectively engaging in mass-scale software piracy.

It’s shaping up to be a grueling battle through one of the murkiest of all legal territories: copyright.

Meanwhile, the burgeoning privacy tech movement is giving everyday people the tools to opt out of data collection altogether, cutting the whole thing off at the knees.

All this taken together, maybe things don’t look so good for AI?

The tip of the spear of privacy tech

Screenshot of Signal.org homepage
Signal.org

Signal’s homepage reads, “State-of-the-art end-to-end encryption keeps your conversations secure.”

“We can’t read your messages or listen to your calls, and no one else can either. Privacy isn’t an optional mode — it’s just the way that Signal works. Every message, every call, every time.”

The app provides sanctuary of sorts, where it’s technically impossible for any third party to access the content shared there.
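To see why that’s technically true rather than just a marketing claim, here is a minimal sketch of the idea behind end-to-end encryption, using a toy Diffie-Hellman key exchange. This is purely illustrative: Signal’s actual protocol (X3DH plus the Double Ratchet over Curve25519) is far more sophisticated, and the modulus, cipher, and names below are assumptions for the demo, not anything Signal uses.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; a toy modulus, not a hardened DH group
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def shared_key(my_priv, their_pub):
    # both sides end up computing G^(a*b) mod P, then hash it to 32 bytes
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    # toy symmetric cipher (NOT secure); the same function encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Alice and Bob exchange only their public keys, via the server.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side derives the same shared key; the relay server, seeing only
# a_pub and b_pub, cannot compute it without one of the private keys.
k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob

ciphertext = xor_cipher(k_alice, b"meet at noon")  # all the server ever sees
print(xor_cipher(k_bob, ciphertext))  # b'meet at noon'
```

The point of the sketch is the asymmetry: the math that lets Alice and Bob agree on a key is easy in one direction and infeasible to reverse, so “we can’t read your messages” is a property of the design, not a policy promise.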

With over half a billion downloads, Signal is perhaps the leader in a brand new category of privacy-preserving tech platforms.

But there are others and the ecosystem is growing fast.

How the private, decentralized web was lost in the first place

Screenshot of Archie homepage
Archie, the first known search engine developed in 1990. Source: Digital Archeology
Screenshot of Iris
Iris, a popular text based IRC client launched in 1992. Source: Wikipedia
Screenshot of Usenet
FreeAgent, a Usenet newsreader client for Windows launched in 1993. Source: courtland.edu
Screenshot of PizzaNet.
PizzaNet, the first food delivery app launched in 1994. Source: Pizza Hut

Those who weren’t around for the early iterations of the internet may not know that in the beginning it was quite decentralized and private.

In the early 90s, the net consisted of a few disconnected protocols like Internet Relay Chat and Simple Mail Transfer Protocol (email), plus a handful of bulletin board systems, the most popular of which was called Usenet.

As more and more people came online, the problem that early internet platforms tried to solve was discoverability — basically how to find what you were looking for across all these little pockets of activity.

A useful way to think about it is that there were “discovery layers,” like Archie and Yahoo. And “presentation layers” like AOL and Amazon.

Some were organized as non-profits, like Wikipedia or the Creative Commons. Others, like AOL and Yahoo, were private companies and very much for-profit.

Long story short, search engines and social networks solved the discoverability problem, but in the process they made some technical design decisions that would prove consequential.

Screenshot of Yahoo.com
Yahoo.com in 1996. Source: The Wayback Machine

For example, they decided to bring together three pieces of the stack that had previously been discrete: the protocol, the client and the hosting.

And they centralized the ownership of user data.

The wild success of online advertising, and the effect it had on practically every industry is well known. What is less well known is the effect that it had on the evolution of the internet itself.

The ad model created a powerful momentum. It influenced what kinds of tech companies were funded, what systems were built, and perhaps inadvertently, what kind of content was most discoverable.

The success was so swift and decisive that the early internet pioneers never really had a chance to properly explore other models that might have worked (with the exception perhaps of grid computing, which appears to be making a comeback).

Nor did they think through the downstream consequences of what they were building.

The age of surveillance capitalism

The Age of Surveillance Capitalism book cover.
The Age of Surveillance Capitalism, by Shoshana Zuboff, published in 2019. Source: rose.ph

Another oversight was that no one really considered what it would mean to create a multi-billion-dollar shadow economy for user data.

That little development flew well under the radar for over a decade, but has recently exploded into public consciousness.

Shoshana Zuboff dubbed the phenomenon “surveillance capitalism,” describing it with colorful prose in her book, The Age of Surveillance Capitalism, which I highly recommend.

“Futuristic as this may sound, the vision of individuals and groups as so many objects to be continuously tracked, wholly known, and shunted this way or that for some purpose of which they are unaware has a history. It was coaxed to life nearly sixty years ago under the warm equatorial sun of the Galapagos Islands, when a giant tortoise stirred from her torpor to swallow a succulent chunk of cactus into which a dedicated scientist had wedged a small machine.

The solutions once concocted by scholars of elk herds, sea turtles, and geese have been refurbished by surveillance capitalists and presented as an inevitable feature of twenty-first-century life on Earth. All that has changed is that now we are the animals.”

Brilliant historical analogs aside, the problems this presents are not purely academic or hypothetical.

Surveillance capitalists have caused front page scandal after front page scandal in recent years, rocking the foundations of western society’s most sacred institutions and values to their core.

The subversion of democracy, the systematic curbing of free speech, and the evisceration of privacy rights, to name a few.

In my research for this article, I also discovered a recent Motherboard report, which revealed a massive, previously unknown data supply chain that sends ordinary people’s personal data to brokers, contractors, and the military.

According to the report, the chain begins with a couple hundred seemingly benign apps that are available to download in the Android and iOS app stores.

For example, an app for shopping on Craigslist, a Muslim dating app, and a level app (like, for installing shelves).

Once downloaded, these apps harvest location and tons of other sensitive data from their users’ phones. They then feed that data to one or more “data brokers,” who package it into “data products” and sell it to the highest-paying customers on earth — namely, taxpayer-funded three-letter agencies and the military.

All made possible by the centralization of data collection.

Cryptography, the last line of defense

Photo of a woman in the audience at a crypto conference.
A crypto design conference in Amsterdam. Photo by Sebastiaan ter Burg.

Observing this state of affairs, a new generation of software engineers and product leaders are not standing idly by.

They’re not wondering if the value prop of centralization and surveillance is worth the collateral damage, nor are they trying to convince the Googles, Facebooks and TikToks to change their ways.

They know that’s not going to happen.

Demanding privacy from surveillance capitalists, after all, is “like asking Henry Ford to make each Model T by hand,” according to Zuboff.

Such demands represent “existential threats that violate the basic mechanisms of the entity’s survival.”

Instead, the young guns are subverting the power of big tech by peeling back all the layers, rethinking old assumptions (like the centralization of protocol, client and hosting), and building a new model that protects users with public/private key cryptography.

Back in 2008, Satoshi Nakamoto published the Bitcoin Whitepaper, which laid out a fundamentally new design for data ownership and transmission.

One of the beautiful things about crypto is that you’re not tied to a client, aka a wallet. You can import your private key into any wallet at any time and your funds just magically appear there.

Privacy tech essentially does the same thing but for communication. With your keys, you can easily explore different “presentation layers” and take all your stuff with you.
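The portability mechanism can be sketched with a toy ledger. This is a hypothetical illustration, not how Bitcoin actually works (real chains derive addresses from elliptic-curve keys with Base58Check or bech32 encoding); the point is only that the shared ledger, not any one client, holds your state, so any client that knows your key can show it.

```python
import hashlib

# Shared public state: address -> balance. In a real system this is the
# blockchain; here it's just a dict.
LEDGER = {}

def address_from_key(private_key: bytes) -> str:
    # stand-in for elliptic-curve public-key derivation + address encoding
    return hashlib.sha256(private_key).hexdigest()[:16]

class WalletClient:
    """Any number of independent clients can be built over the same ledger."""

    def __init__(self, name: str):
        self.name = name

    def balance(self, private_key: bytes) -> int:
        # the client derives your address from your key and reads the ledger;
        # it never "holds" the funds itself
        return LEDGER.get(address_from_key(private_key), 0)

key = b"correct horse battery staple"
LEDGER[address_from_key(key)] = 100  # funds recorded on the shared ledger

# Import the same key into two different wallet apps: the same funds appear.
print(WalletClient("wallet A").balance(key))  # 100
print(WalletClient("wallet B").balance(key))  # 100
```

The design choice worth noticing is that the client is disposable: because identity is derived deterministically from the key, switching “presentation layers” costs nothing, which is exactly the leverage privacy tech wants to give users over centralized platforms.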

The benefits of this new model go far beyond privacy.

Imagine being able to port all your Facebook friends over when you sign up for TikTok, for example.

Crypto and privacy tech are similar in this way — they completely change incentive structures and tilt the power back to the user.

Circling back to Blackman’s article…

“There’s something somewhat sneaky in all of this,” he writes. “Small groups of technologists are developing and deploying applications of their technologies for explicitly ideological reasons, with those ideologies baked into the technologies.”

“Users are witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.”

Abstract image of shadowy technologists, by Dana J. Wright.
What came back when I entered “Small group of technologists developing and deploying applications of their technologies for explicitly ideological reasons --ar 3:2” on Midjourney.

Yes, those shadowy technologists… surreptitiously recruiting users to carry out their extreme ideology of privacy.

You can hear the echoes of Elizabeth Warren’s “shadowy super coders” rhetoric.

And the same fear-mongering claim — that the only reason anyone would use this new technology is for criminal activity.

The fact that the claim has no basis in reality is not important; what matters is the message:

If you have nothing to hide, you have nothing to lose from being tracked and monitored at all times.

Nothing to hide, nothing to lose

Statue of Joseph Stalin.
Image by agitprop on Creative Commons.

Students of history will recognize this message, as it has been used time and again by authoritarians to persuade their citizens to surrender their rights.

It tends to be most effective during times of crisis, but it rests on some shaky premises:

That the state itself is never the bad actor; that individual agents of the state are always morally pure and never make mistakes; and that completely unchecked power does not eventually corrupt even those with the best of intentions.

“The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.” — Shoshana Zuboff

Again, these are not hypothetical concerns.

Nor are they foreign problems that only affect people who live in overtly authoritarian countries like China. You can be persecuted right here in Joe Biden’s America, and people are every day.

For example…

A young woman has an abortion in a state like Kentucky, where doing so is currently a criminal offense punishable by one to five years in prison.

Someone with brown skin gets pulled over for no reason and searched by a frantic cop looking for any reason to make an arrest.

A Black Lives Matter protester joins a peaceful demonstration and is greeted by riot cops with tear gas and rubber bullets.

An undocumented person asks a question about their rights, but the venue they chose is surveilled by ICE and next thing they know, they get picked up and deported.

To Blackman, this is all just fine.

These individuals are criminals ipso facto.

They have no right to privacy and any technology that protects them from being monitored by the state (aka an unlawful search) should be regarded as “morally dangerous.”

Conclusion

If the AI industry wants a beef with privacy-preserving tech, it’ll need to take a number and get in line.

The shadowy technologists are already at war with authoritarian government overreach, the entire legacy tech industry, and the legacy financial system.

Adding AI to the list won’t tip the odds all that much.

I do agree with Blackman on one thing:

“Signal’s influence doesn’t necessarily hit us at the belief level. It hits us at the action level: what we do, how we operate, day in and day out. In using this technology, we are acting out the ethical and political commitments of the technologists.”

Yes.

Thanks to them, everyday folks have a powerful tool to protect themselves from surveillance capitalism and the prying eyes of the state.

I just hope the readers of the Times are able to spot the rest of the piece for what it is — a manipulative and alarmist attack on the notion that we have any right to online privacy, in the service of yet another industry that depends on us not having it.

__

Thanks for reading until the end! I write about crypto and other stuff that gets me fired up. You can find me on Twitter @danajwright_
