On Encryption, Part 2: Of Rung Bells and Cats Out of Bags

Last time, we discussed the basics of encryption, and talked about concepts like security through obscurity, Kerckhoffs’ principle, Linus’s law, and the fundamental challenge of encryption (the adversary and you). In this post, we’ll discuss how those concepts apply to ideas like backdoors and deliberate flaws in encryption software, and then I’ll end with a few observations about the modern computing landscape and where I think we ought to go.

For the previous post, click here.

Flawed by design

In popular discussions, the term “backdoor” is often thrown around without being clearly defined. I wish to make a distinction here between a “backdoor” and a deliberate flaw. I consider a “backdoor” to be any feature or tool purposely designed to allow someone other than the user to access data encrypted by that user. For instance, keeping a second copy of the user’s key on file and providing it to the government on request would constitute a backdoor, as would any system designed so that data can be decrypted by either the user’s key or a key held by the company.
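To make that last example concrete, here is a minimal sketch in Python (using the open source `cryptography` package) of what such a dual-key design might look like. It is an illustration of the concept, not any vendor’s actual implementation: the data is encrypted once with a random data key, and that data key is then wrapped separately for the user and for an escrow key held by the company, so that either party can decrypt.

```python
# Illustrative sketch only, not any real vendor's design.
# Requires the open source `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held by the company

# Encrypt the data once with a random symmetric "data key"...
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"my private data")

# ...then wrap that data key for BOTH the user and the company's escrow key.
wrapped_for_user = user_key.public_key().encrypt(data_key, oaep)
wrapped_for_escrow = escrow_key.public_key().encrypt(data_key, oaep)  # the backdoor

# Either private key now recovers the plaintext; the escrow holder does not
# need the user's cooperation at all.
recovered = escrow_key.decrypt(wrapped_for_escrow, oaep)
assert Fernet(recovered).decrypt(ciphertext) == b"my private data"
```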

In contrast, a deliberate flaw is a known weakness introduced into an encryption algorithm or cryptosystem, generally with the goal of later exploiting that weakness to gain access to data through cryptanalysis. One might reasonably ask how this differs from the “backdoor” defined above; I would contend that while a “backdoor” might be kept secret or disclosed openly, a deliberate flaw would by design always be kept secret, to avoid driving users to another, less flawed algorithm.

Let’s first discuss the idea of purposefully weakening an algorithm. This is, to put it bluntly, a very bad idea. It sounds reasonable at first: “Only the good guys will know how to break it. The bad guys won’t!” But Linus’s law tells us otherwise. Given enough eyeballs, someone will discover the flaw. Remember, encryption has only the adversary and you – it does not (and indeed cannot) tell the difference between a “just” adversary and an “evil” one. Once you put a flaw into your system, you must assume it will be used by your adversaries, good or bad.
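If that seems abstract, here is a deliberately simplified toy sketch in Python (again using the open source `cryptography` package; the weakened scheme is invented for illustration and is not any real product’s design). The 32-byte key contains only three bytes of real randomness, and anyone who learns of that fact, “good guy” or “bad guy” alike, can brute-force the roughly sixteen million remaining possibilities on an ordinary laptop:

```python
# Toy example of a deliberately weakened keyspace, for illustration only.
import base64, itertools, os
from cryptography.fernet import Fernet, InvalidToken

FIXED_PADDING = b"\x00" * 29  # the "flaw": 29 of the 32 key bytes are a known constant

def weakened_key(secret: bytes) -> bytes:
    """Build a Fernet key from only 3 bytes of real secret material."""
    return base64.urlsafe_b64encode(secret + FIXED_PADDING)

token = Fernet(weakened_key(os.urandom(3))).encrypt(b"top secret")

# Any adversary who knows the flaw can simply try all 256**3 (~16.7 million)
# candidate keys; what should take longer than the age of the universe to
# brute-force instead falls in hours on a laptop.
for candidate in itertools.product(range(256), repeat=3):
    try:
        plaintext = Fernet(weakened_key(bytes(candidate))).decrypt(token)
        print(plaintext)
        break
    except InvalidToken:
        continue
```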

The natural response here might be to say “Well, we won’t expose how the code works, to help guard against the flaw.” Doing this leads us back into “security through obscurity”, long since abandoned by modern cryptography as unreliable. A system that is secure only by secrecy becomes insecure the moment even one person reveals the secret.

Building a backdoor

Consider instead the idea of engineering a “backdoor” into your system, perhaps at the request (or even the insistence) of the government. There are serious issues with this as well, but they relate less to the mathematical aspects of encryption, and more to the social, political, and human factors surrounding it.

For the sake of argument, let’s assume that we are using a service which discloses that there is a “backdoor”: the company keeps a copy of our key on its servers and can use it to unlock our data when served with a legal request from the government. If this is common knowledge, that key store becomes the primary attack vector for malicious adversaries. If I were an attacker and knew that some company was holding keys which could decrypt every single customer’s data, I would know exactly where to focus my attention. That set of keys would have immense value in the modern economy.

Again, one might say “Why not keep the backdoor a secret?” First, it violates commonly established principles of doing business by deceiving the customer about what is happening. Second, it again relies on security through obscurity – eventually, the information will be revealed. Third, the backdoor cannot be meaningfully kept secret while still being functional; people would have to build it, maintain it, secure the keys, process requests, and so on. The “secret” would be known to dozens, if not hundreds, of people, and history shows that a secret gets harder to keep as more people know it.

But, okay, one could argue this doesn’t, strictly speaking, damage the encryption itself. It’s not weakening the encryption per se; instead, it’s weakening your personal security, like leaving a key in a fake rock in your yard. You might say it’s worth the risk. This is where we must shift from discussing technical features to discussing society and values.

The risk and the reward

On the surface, the idea of forcing companies to build publicly disclosed backdoors might not seem so ridiculous. You’d still be secure in your day-to-day life, but in an investigation, the government could access data on your phone, so long as it had a warrant. Reasonable people could look at that concept and find it, relatively speaking, acceptable.

Look below the surface, though, and you start to see the flaws with this idea. Let’s say you store your data in the cloud with Microsoft. Microsoft discloses to you, openly and without any attempt at duplicity, that they also have a second key which will let them access your data. They state that they only use this key to comply with government requests for data, which are legally vetted. Microsoft states that they employ strict measures to securely store their key, to avoid it being used maliciously or acquired by individuals who should not have access.

How confident can you be that all of this will hold? Can you be confident that Google, and Amazon, and every other place with your data, will behave just as well? After all, you are counting on the following:

  • The government will never abuse their power to access your data, and will always do so based on a legally valid reason
  • The government will never lie to you about collecting data or looking at your information
  • Microsoft will never be the victim of a technical attack that results in them losing your keys
  • Microsoft will never have a group of employees who, when offered sufficient money, will betray the company and steal the key to sell on the Internet
  • Microsoft will always disclose to you everything that they are doing

Suddenly, this is starting to look a lot different, isn’t it? Now, it looks like a world where you must regularly trust big, complex, opaque companies. Even if you totally believe in the government, and even if you totally believe that every company is beneficent and would not cause you harm, you still face the fact that there are a lot of absolute statements above. Humans are fallible. If a bunch of different attacks are coming at those companies at all times of day (and they would be, since the hackers would know the value of what they were holding), eventually, one is going to get through. That’s the law of averages, if nothing else.

But let’s say you look at this, and you say “It’s worth the risk, though, because it’s about protecting us from bad guys. We can engineer legal safeguards to protect us!” I’ll buy that we might be able to engineer legal safeguards. And I understand the idea that this is about protecting people; that is a powerful motive, driven by noble intentions. The problem is that all of this – the backdoors, and the risks, and the potential invasions of privacy – will only affect people who use these commercial services.

It’s not as though the current, high-quality, open source encryption software on the Internet will disappear once backdoors are put into place. If you are really concerned about bad guys, then you ought to know that those bad guys are going to realize that Microsoft, Google, and the others have engineered backdoors into their products. So they will take extra precautions, like encrypting their data with open source software, using phones engineered for privacy and security, and relying on other technical tools to circumvent the backdoors. The backdoors would trade our privacy for the illusion of security.
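To see how low that bar is, here is a minimal sketch of what “taking extra precautions” can look like: a handful of lines of Python built on the freely available open source `cryptography` package that encrypt a file locally before it ever reaches a cloud provider. The file name and passphrase are placeholders; the point is that no backdoored service ever sees the plaintext or the key.

```python
# Minimal sketch of client-side encryption with freely available open source
# tools (the `cryptography` package); file name and passphrase are placeholders.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Derive a key from a passphrase that only the user knows.
salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(b"a passphrase only the user knows"))

with open("records.pdf", "rb") as f:
    encrypted = Fernet(key).encrypt(f.read())

# Only the encrypted blob (plus the salt) ever gets uploaded; the provider's
# backdoor, if any, has nothing useful to unlock.
with open("records.pdf.enc", "wb") as f:
    f.write(salt + encrypted)
```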

Bells and cats

The title of this post is “Of Rung Bells and Cats Out of Bags”, because that is the reality of what we face. You cannot unring the bell, and the cat is already out of the bag when it comes to encryption. At the core, encryption is math. The study of it is a matter of public record, and the field of cryptography is itself reliant on openness to ensure security. There exist modern, free, open source tools which excel at encryption – which can, indeed, approach being functionally unbreakable. This is the landscape we must navigate.

Adding backdoors and permitting the government to access phones would not be entirely futile. There are criminals and terrorists who would not employ strong encryption, and who would thus be caught, tried, and convicted if the government could break into phones using backdoors. It would be dishonest to claim otherwise.

But here I am arguing that the value tradeoff is a steep one. Your data and your privacy are at stake, of course, but consider the broader implications. The modern economy runs – and relies – on encryption. If you’ve ever bought something online, you have gone through an encrypted website. If you have ever checked your bank statement online, you’ve used encryption. Huge swaths of our lives rely on encryption working, and on people being confident in encryption. Once you start poking holes in it, even for the most noble of reasons, you risk damaging the engine of innovation that propels the world.

You cannot use the laws of human beings to overrule the laws of mathematics, even though this is, to an extent, what is often attempted. Nothing can change the fact that encryption operates against a universal adversary, or that it cannot judge the morality of that adversary. Nothing can alter the fact that a backdoor represents a risk, even in the best of all possible worlds (the world we probably don’t live in). And nothing can stop people from having and protecting secrets.

I hope that this series of posts has been informative, and that I have distinguished clearly between the fundamentals of encryption and the personal value judgments I draw from them. Go forth, be better educated, and most importantly, discuss this important subject. Be informed. Be engaged. Be citizens, not subjects.
