Why Encryption Use Is Problematical When Advocating For Social Change

I believe decentralized knowledge sharing is important, especially for disaster preparedness. I also believe encryption is important in practice, in the same way that many people have locks on their doors. Such things affect the balance between state power and individual power, which matters in a democracy, and they also make it harder for vandals and criminals to operate. So a project like Briar, which supports decentralized communications and encryption, is important for those and other reasons. Still, as my father (a machinist among other things) used to say, "Locks only keep honest people honest." Here is a partial list of the ways a tool like Briar can fail when used by activists engaged in controversial political actions.

This is an elaboration of points I've written about before, for example here:
http://it.slashdot.org/comments.pl?sid=6317951&cid=48552921

"Encryption is conceptually broken because you can't organize a mass political movement or broad cultural change by hiding what you are doing. You need to convince people to believe in a cause and be willing to commit resources to support it. And overall that requires broad mass communications and engaging more and more people, any one of whom could report you to "authorities". Successful broad change in a democracy is going to be focused on legal & non-violent means to change public opinion. Encryption is generally about hiding communications and their contents, which is the opposite of what you need to be doing to make large scale social change."

Encryption algorithms can't really be verified by almost any user; for almost everyone, using encryption means relying on the public statements of a handful of encryption wizards both to design the algorithms and to implement them. If you can't verify something, why should you trust it? The best security is generally built on simplicity and understandability. The only encryption simple enough for most people to understand is a one-time pad using XOR on messages, or some similar variant (assuming you have truly random numbers to make the pads, itself problematical). But hardly anyone uses those. Even if you use one-time pads, they could be compromised by whatever process you use to exchange or store the pads (even if you exchange them physically).
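To make the one-time-pad point concrete, here is a minimal sketch in Python (purely hypothetical, not part of Briar or any real tool): it XORs a message with a random pad of the same length, and XORing again with the same pad recovers the message. Every caveat above still applies to how the pad is generated, exchanged, and stored.

    import secrets

    def make_pad(length):
        # The pad must be truly random and at least as long as the message.
        return secrets.token_bytes(length)

    def xor_bytes(data, pad):
        # XOR each byte of the data with the corresponding pad byte.
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"Meet at the library at noon"
    pad = make_pad(len(message))
    ciphertext = xor_bytes(message, pad)    # encrypt
    recovered = xor_bytes(ciphertext, pad)  # decrypt (XOR is its own inverse)
    assert recovered == message

Even this toy example depends on the quality of the operating system's random number generator, on never reusing a pad, and on keeping the pad secret -- which shows how quickly "simple" encryption stops being simple.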

Even the most well-meaning people make mistakes in writing software. And not everyone writing software is well-meaning (see the quoted excerpt at the end). It can even be hard to tell the difference, as was the case with Heartbleed, the vulnerability in the OpenSSL library. So, even with a perfect theoretical system, the actual system you are using can be flawed. And if a system you used in the past turns out to have been flawed then, an activist is compromised right now and forevermore.

You can't ever really be sure who is at the other end of encrypted communications. The person might not be who they say they are. Even if they are who they say they are, they may have divided or complex loyalties. This is one of the single biggest risks of relying on encryption (or on any sort of communications you expected to be private for other reasons). For example, Bradley/Chelsea Manning was turned in to the authorities by someone (Adrian Lamo) Manning had communicated with. Even when the people you communicate with are loyal, their cooperation with others may be obtained through drugs or other means, as in the XKCD comic on wrenches and "security".

You can't ever really be sure about the integrity of the computer of someone you are communicating with. Their system may be compromised in any number of ways, now or in the future, meaning everything you ever sent them in the past or send them in the future is available for review.

Related to the previous point, you can't be sure of the integrity of your own computer, even if you are a security professional. Everyone is only one software update away from being compromised. Operating systems now update themselves automatically. So do many applications (including games). Firmware asks to be updated. Essentially no users are able to evaluate these updates for what they really do. Yet if users don't upgrade, they become vulnerable to any security holes the upgrades supposedly patch -- holes which are then widely known, because the update will be examined to understand what it fixes. Because of software update risks, essentially no users can guarantee the integrity of any part of their system in practice -- including the best, most dedicated, and most thorough security researchers implementing or evaluating encryption systems. Patches can be compromised in multiple ways (compromising the patch itself, where the patch is stored, the communications system that supplies the patch, how the patch is installed or checked, etc.), making it hard to defend against patching risks.
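As one small illustration of why "checking" a patch does not resolve the trust problem, here is a hypothetical Python sketch (the file name and expected hash are invented) that verifies a downloaded update against a published SHA-256 checksum. The verification is only as trustworthy as the channel that delivered the expected hash and the machine doing the checking.

    import hashlib

    def file_sha256(path):
        # Hash the file in chunks so large updates don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical values; the expected hash was itself downloaded from somewhere.
    expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    actual = file_sha256("update-1.2.3.bin")

    if actual != expected:
        print("Checksum mismatch: do not install this update.")
    else:
        print("Checksum matches -- but only relative to wherever 'expected' came from.")

If the web page, mirror, or build server that published the expected hash is itself compromised, the check passes and the user is none the wiser.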

Vulnerabilities may interact in unexpected ways, making it hard for anyone to create truly secure systems. And the more complex systems get, the more likely they are to have exploitable flaws, from both a growing attack surface and a sort of combinatorial increase in possible interactions. Example:
http://blog.checkpoint.com/2015/08/04/wordpress-vulnerabilities-1/

Hardware may be compromised during production at various levels (chips, assemblies like memory or disk drives or batteries, lowest-level BIOS). Cell phones in particular are vulnerable to this because they generally have a separate, often proprietary, processor for interfacing with the cell phone network. As with the previous point, that separate cell phone processor may also update itself on its own schedule, independent of user control.

Communications can be stored even when not understood or decryptable. Years in the future, means may be found to decrypt such communications (encryption keys obtained, exponentially increasing computing power, algorithm flaws discovered, etc.). Mass movements may take decades to play out. That means activists are at risk for anything they said years ago on such systems even if they worked perfectly at the time.

Because any person using a computer today will almost certainly use public services, users will need to switch between secure and insecure communications routinely. Just one mistake in choosing the wrong system for a private message could compromise all their years of security precautions.

Metadata about communications is hard to mask. Almost any message can be traced back to its origin if enough information can be correlated. Even if a physical billboard approach to messaging is taken to put out a message "anonymously", surveillance cameras (or local people) can record movements and record who specifically is writing the graffiti. Even when metadata can be hidden in internet communications, by that time, with all sorts of restrictions on what is communicated and how (like via Tor, assuming that really works correctly all the time), you are going to have a lot of trouble building a mass movement. Masking metadata may get easier with changes to technical infrastructure (if everyone used Tor for everything, assuming Tor really works and is not compromised), but then we are still left with the other points.
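To make the metadata point concrete, here is a hypothetical sketch (the field names and addresses are made up, and no real protocol is implied) of what an intermediary can still observe when only the message body is encrypted: who is talking to whom, when, and roughly how much.

    import time

    def make_envelope(sender, recipient, ciphertext):
        # The body is opaque, but the routing information is not.
        return {
            "from": sender,             # visible: who is talking
            "to": recipient,            # visible: to whom
            "timestamp": time.time(),   # visible: when
            "length": len(ciphertext),  # visible: roughly how much was said
            "body": ciphertext,         # the only part that is actually hidden
        }

    envelope = make_envelope("organizer@example.org", "volunteer@example.org", b"\x8a\x1f\x07")
    observable = {k: v for k, v in envelope.items() if k != "body"}
    print(observable)  # a passive observer learns the social graph and timing

Correlating many such envelopes over time reveals who organizes with whom, how often, and around which events -- no decryption needed.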

In general, a system intended to ensure private communications is only as secure as its weakest link. If any one of these layers is compromised (hardware, firmware, OS, application, algorithm theory, algorithm implementation, user error, user loyalty, etc.), then your communications are compromised.
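As a rough, purely illustrative back-of-the-envelope calculation (the probabilities below are invented, and the layers are treated as independent, which is a simplification): even when each layer is individually quite likely to hold, the chance that every layer holds shrinks quickly as layers multiply.

    # Invented per-layer probabilities that each layer is NOT compromised.
    layers = {
        "hardware": 0.99,
        "firmware": 0.99,
        "operating system": 0.98,
        "application": 0.97,
        "algorithm theory": 0.999,
        "algorithm implementation": 0.97,
        "user error avoided": 0.95,
        "user loyalty": 0.95,
    }

    overall = 1.0
    for name, p in layers.items():
        overall *= p  # every layer must hold for the communication to stay private

    print(f"Chance every layer holds: {overall:.2%}")  # roughly 81% with these guesses

With these invented numbers, nearly one in five supposedly private exchanges fails somewhere along the chain, and over years of activism the odds of at least one failure approach certainty.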

In practice, professionals work on both "proactive" and "reactive" security. So, they might suggest virus scanners as a way to avoid some issues while also knowing how to systematically reinstall software to deal with a compromised system. When a business or financial system is compromised through some failure of proactive security, the security professionals pick up the pieces in reaction to the compromise. Personal communications that rely on encryption are completely dependent on "proactive" security. An activist whose proactive security fails has no strategy for reacting to that situation, other than perhaps fleeing the situation of concern. Otherwise, any compromise also makes an activist subject to blackmail or being forced to work for others -- which can then lead to compromising more activists, and so on.

If you want to build a mass movement, at some point, you need to engage people. In practice, for social psychology reasons, engaging people is very difficult, if not impossible, to do completely anonymously in an untraceable way.

People have historically built mass movements without computers or the internet. It's not clear whether the internet really makes this easier for activists, or instead just easier for the status quo that wants to monitor them.

If you work in public, you don't have to fear the loss of secure communications, because you never structured your movement to rely on them. If you rely on "secure" communications, then you may set yourself up to fail when those communications are compromised. If your point is to build a mass movement, then where should your focus be?

Also related on the limits of encryption and politics:
http://it.slashdot.org/story/14/12/07/0529200/neglecting-the-lessons-of-cypherpunk-history
http://www.truth-out.org/opinion/item/27783-glenn-greenwald-forgets-cypherpunk-history

"History demonstrates that Greenwald's encryption-laden narrative is the stuff of pleasant fiction and that the outward acts of bold defiance tend to indicate concealed acts of collaboration. Once more the most widely used products are also the most likely to be subverted. What better way to intercept sensitive information than to convince users to mistakenly put their faith in technology that they magically believe will keep their secrets safe?

Back in the 1980s and 1990s, a group of encryption mavens known as cypherpunks sought to protect individual privacy by making "strong" encryption available to everyone. To this end they successfully spread their tools far and wide such that there were those in the cypherpunk crowd who declared victory. Thanks to Edward Snowden, we know how this story actually turned out. The NSA embarked on a clandestine, industry-spanning, program of mass subversion that weakened protocols and inserted covert backdoors into a myriad of products. Technology promoted as "secure" quietly and intentionally failed on behalf of national security.

The depth of this betrayal is hard to overstate.

One lesson that can be derived from cypherpunk lore is that it's extremely hazardous to put blind faith in technology. The public record shows that prominent high-tech companies actively assisted the surveillance state in relationships that have existed for decades. Corporate spokespeople brazenly lied about doing so when confronted with accusations of complicity. Are we to assume that they've turned over a new leaf? ...

The surveillance state is motivated by the desire for power, the power to subvert technology and raise up an Eye of Providence behind a shroud of official secrecy. Power is rooted in politics. To put all of your eggs in the encryption basket is to chase after an illusion conjured artfully by propagandists. To save our civil liberties, we must recall our constitutional duty as citizens in a republic born out of revolution. Small as the windows of opportunity may seem we still have a system that admits the possibility of change. We must rise to seize this possibility, to recapture our government and remake the rules by which it operates. People in the past have mobilized to implement fundamental changes and we must do so again."

--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity.