Jurisdiction Is Nearly Irrelevant to the Security of Encrypted Messaging Apps


Every time I lightly touch on this point, someone insists on arguing with me about it, so I thought it would be worth making a dedicated, singularly focused blog post about this topic without worrying too much about tertiary matters.

Here’s the TL;DR: If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

The notion of some apps being somehow “more secure” because they shovel data into Switzerland rather than a US-based cloud server is laughable.

But this line of argument sometimes becomes sinister when people evangelize storing plaintext instead of using end-to-end encryption, and then try to justify not using cryptography by appealing to jurisdiction instead.

That more extreme argument is patently stupid. That is all I will say about it, lest this turn into a straw man argument. But if I didn’t bring it up somewhere, someone would tell me I “forgot” about it, so I’m mentioning it for completeness.

Let’s start with the premise of the TL;DR.

What does “actually [building] your cryptography properly” mean?

Properly Built Cryptography

An end-to-end encrypted messaging app isn’t as simple as “I called AES_Encrypt() somewhere in the client-side code. Job done!”

If you’ve implemented the cryptography properly, you might even be a contender for a real alternative to Signal. This isn’t an exercise for the faint of heart.

To begin with, you need to solve key management. This means both client-side secret-key management (and deciding whether or not to pass The Mud Puddle Test) and providing some mechanism for validating that the public key vended by the server is the correct one for the other conversation participant.

The cryptography community tried for over three decades to make “key fingerprints” happen, but I know professional cryptographers who have almost never verified a PGP key fingerprint or Signal safety number in practice. I’m working on a project to provide Key Transparency for the Fediverse. This is a much better starting point. Feel free to let power users do whatever rituals they want, but don’t count on most people bothering.
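For the unfamiliar, here’s a minimal sketch of what fingerprint verification amounts to (the hash choice and formatting here are illustrative, not any particular app’s scheme):

```python
import hashlib

def key_fingerprint(public_key: bytes) -> str:
    """Hash the public key and format it for human comparison."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group into 4-character chunks so humans can read them aloud.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Both parties compute this locally, then compare the strings
# out-of-band (in person, over a phone call, etc.). The ritual only
# helps if people actually perform it, which most never do.
print(key_fingerprint(b"\x01" * 32))
```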

Separately, the app that ships the cryptography should itself strictly adhere to reproducible builds and binary transparency (e.g., via Sigstore).
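To make the reproducible-builds half of that concrete: anyone should be able to rebuild the app from source and get a byte-for-byte identical artifact. A sketch of the verification step (the file name and published digest are hypothetical):

```python
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical: the digest the developer published in a release
# attestation, ideally recorded on a binary transparency log.
PUBLISHED_DIGEST = "0f1e2d..."  # placeholder

local_digest = sha256_file("app-release.apk")  # your own rebuild
if local_digest != PUBLISHED_DIGEST:
    raise SystemExit("Rebuild does NOT match the published release!")
print("Local rebuild matches the published release.")
```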

What’s This About Transparency?

Both “Key Transparency” and “Binary Transparency” are specific instances of a general notion of using a Transparency Log to keep a privileged system honest.

Also, “Key Transparency” is an abbreviated term: the thing you’re being incredibly transparent about is a user’s public keys. If that weren’t the case, key transparency would be a dangerous and scary idea.

If you don’t know what a public key is, this blog post might be too technical for you right now.

If that’s the case, start here to get a sense for how people try to explain it simply.

Separate to both of those topics, Certificate Transparency is already being used to keep the Certificate Authorities that secure Internet traffic honest.

But either way, all three are just specific instances of using a transparency log to provide some security property to an ecosystem.

What’s a Transparency Log?

A transparency log is a type of log or ledger that uses an append-only data structure, such as a Merkle tree.

They’re designed such that anyone can verify the integrity and consistency of the log’s entries. See this web page for more info.
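Here’s a toy Merkle tree with inclusion proofs, to make the idea concrete. It assumes a power-of-two number of leaves for brevity (real logs, like those in RFC 6962, handle arbitrary sizes), and borrows RFC 6962’s domain-separated leaf/node hashing:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Distinct prefixes so a leaf can never be confused with an
# interior node (RFC 6962 style).
def leaf_hash(entry: bytes) -> bytes:
    return h(b"\x00" + entry)

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)

def merkle_root(leaves):
    """Root of a (power-of-two, for simplicity) list of entries."""
    level = [leaf_hash(e) for e in leaves]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [leaf_hash(e) for e in leaves]
    proof = []
    while len(level) > 1:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(entry, proof, root):
    acc = leaf_hash(entry)
    for sibling, sibling_is_left in proof:
        acc = node_hash(sibling, acc) if sibling_is_left \
            else node_hash(acc, sibling)
    return acc == root

entries = [b"alice:pk1", b"bob:pk1", b"carol:pk1", b"alice:pk2"]
root = merkle_root(entries)
proof = inclusion_proof(entries, 1)
assert verify_inclusion(b"bob:pk1", proof, root)
```

The punchline: a client can verify that its entry is in the log by checking a logarithmic number of hashes, without downloading the whole log.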

Sometimes you’ll hear cryptographers talk about a “secure bulletin board” in a protocol design. What they almost always mean is a transparency log, or something fancier built on top of one.

If this vaguely sounds blockchainy to you, you would be correct: Every cryptocurrency ledger is a consensus protocol (often “proof-of-work”) stapled onto a transparency log, and from there, they build fancier features like smart contracts and zero-knowledge virtual machines.

Independent Third-Party Monitors Are Essential

There is little point in running any sort of transparency log if you do not have independent third parties that monitor the log entries.

Even better if you take a page out of Sigsum’s book and implement witness co-signatures as a first-class feature.
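A rough sketch of what checking witness co-signatures might look like (loosely inspired by Sigsum’s model; the checkpoint format, witness names, and threshold policy below are made up for illustration):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def enough_witnesses(checkpoint: bytes, signatures: dict,
                     witness_keys: dict, threshold: int) -> bool:
    """Accept a log checkpoint only if at least `threshold` known,
    independent witnesses have co-signed it."""
    valid = 0
    for name, sig in signatures.items():
        key = witness_keys.get(name)
        if key is None:
            continue  # unknown witness; doesn't count
        try:
            key.verify(sig, checkpoint)
            valid += 1
        except InvalidSignature:
            pass  # bad signature; doesn't count
    return valid >= threshold

# Usage sketch with freshly generated witness keys:
witnesses = {f"w{i}": Ed25519PrivateKey.generate() for i in range(3)}
checkpoint = b"tree_size=42 root_hash=..."  # placeholder format
sigs = {name: sk.sign(checkpoint) for name, sk in witnesses.items()}
keys = {name: sk.public_key() for name, sk in witnesses.items()}
assert enough_witnesses(checkpoint, sigs, keys, threshold=2)
```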

What Does Transparency Give You?

If you’re wondering, “Okay, so what?” then let me try to connect the dots.

If you want to surreptitiously compromise a messaging app, you might try to:

  1. Backdoor the client-side software.

    But binary transparency and reproducible-build verification make this extremely easy to detect, or (even worse for the attacker) to mitigate entirely.

  2. Compromise the server to distribute the wrong public keys.

    But key transparency prevents the server from successfully lying about which public keys belong to a given user. Additionally, it prevents the server from rewriting history without being detected (see the sketch below).
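On that second point, “changing history” means showing different versions of the log to different observers, a split view. Detecting it is conceptually simple once monitors gossip checkpoints with each other (a toy sketch; real systems compare signed checkpoints):

```python
def detect_split_view(checkpoints) -> bool:
    """checkpoints: iterable of (observer, tree_size, root_hash).
    If the log ever shows two different roots for the same tree
    size to different observers, it has equivocated."""
    seen = {}
    for observer, tree_size, root in checkpoints:
        if tree_size in seen and seen[tree_size] != root:
            return True  # conflicting histories: the log lied
        seen[tree_size] = root
    return False

assert detect_split_view([
    ("monitor-a", 42, "root-x"),
    ("monitor-b", 42, "root-y"),  # same size, different root: busted
])
```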

For a more detailed treatment, refer to the threat model I wrote for the public key directory project.

What Else Is Needed for Proper Implementations?

Once you have reproducible builds, binary transparency, secret-key management (which may or may not include secure backups), and public key transparency, you next need to actually ship a secure end-to-end encryption protocol.

The two games in town are MLS and the Signal Protocol. My previous blog post compared the two. They provide subtly different security properties, serve slightly different use cases, and have similar but not identical threat models.

If you want to go with a third option, it MUST NOT tolerate plaintext transmission at all. Otherwise, it doesn’t qualify.

If your use case focuses on efficiently scaling group chats up to large numbers of participants, and you don’t care about obfuscating metadata or social graphs, you might find MLS a more natural fit for your application.

Cryptographers use formal notions to describe the security goals of a system, and prove the security of a design in a game-based model, showing that an attacker’s advantage stays below some threshold (usually something like “the birthday bound of a 256-bit random function”).
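In notation, such a bound typically has the following shape (a generic template, not the exact theorem statement for either protocol), where $q$ is the number of queries the attacker $\mathcal{A}$ gets to make:

$$
\mathsf{Adv}_{\mathcal{A}}(\lambda) \;=\; \left|\,\Pr[b' = b] - \tfrac{1}{2}\,\right| \;\le\; \frac{q^2}{2^{256}} + \varepsilon(\lambda)
$$

So long as $q$ stays far below $2^{128}$, the first term (the birthday bound of a 256-bit random function) remains negligible.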

If you use the same algorithm (e.g., a hash function) in more than one place, you should take extra care to apply domain separation. Both of the protocols I mentioned above do this properly, but any custom features you introduce will also need to be implemented with great care.
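For instance, a cheap way to domain-separate a hash function is to prefix every input with a length-prefixed context tag (the tag strings here are hypothetical):

```python
import hashlib

def domain_hash(domain: bytes, message: bytes) -> bytes:
    """SHA-256 with an unambiguous domain prefix, so an output from
    one context can never be confused with one from another."""
    # Length-prefix the tag to avoid ambiguity between, e.g.,
    # (domain=b"AB", message=b"C") and (domain=b"A", message=b"BC").
    assert len(domain) < 256
    return hashlib.sha256(bytes([len(domain)]) + domain + message).digest()

session_secret = b"\x00" * 32  # stand-in for a real shared secret

mac_key = domain_hash(b"MyApp-v1:mac-key", session_secret)
enc_key = domain_hash(b"MyApp-v1:enc-key", session_secret)
assert mac_key != enc_key  # same input, different domains
```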

Your protocol should not allow the server to do dumb things, like control group memberships. Also, don’t even think about letting any AI (not even a local model) have access to message contents.

Once you think you’re secure, you should hire cryptographers and software security experts to audit your designs and try to break them. This is something I do professionally, and I’ve written about my general approach to auditing cryptography products if you’re interested.

Any mechanism (static analysis, etc.) you can introduce into your CI/CD pipeline that fails the build when you introduce a memory-safety bug or a cryptographic side-channel is a wonderful idea.
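As a concrete example of the kind of side channel such tooling can flag mechanically: comparing MAC tags with `==` short-circuits on the first mismatched byte, leaking timing information. In Python, the fix is a constant-time comparison:

```python
import hmac

def verify_tag(expected: bytes, received: bytes) -> bool:
    # BAD:  `expected == received` exits early on the first mismatch.
    # GOOD: compare_digest takes time independent of where they differ.
    return hmac.compare_digest(expected, received)
```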

Section Recap

If you actually built your cryptography correctly, then the server should never see any plaintext messages from users.

Furthermore, if the server attempts to substitute one user’s public key for another, it will fail, thanks to key transparency, third-party log monitors, and automatic Merkle tree inclusion-proof verification.

While you’re at it, your binary releases should be reproducible from the source code, and the release process should emit attestations on a binary transparency log.

If you do all this, and managed to avoid introducing cryptographic vulnerabilities in your app’s design, congratulations! You have properly implemented the cryptography.

Interlude: Who’s Proper Today?

As of right now, there isn’t a perfect answer. I’m setting a high bar, after all. The main sticking point is key transparency.

WhatsApp uses key transparency, but is owned by Meta and is shoving AI features into the product, so I doubly distrust it. Factor in WhatsApp being closed source, and it’s immediately disqualified.

Matrix, OMEMO, Threema, Wire, and Wickr all rely on key fingerprints. The same can be said for virtually every PGP-based product (e.g., DeltaChat).

As of this writing, Signal’s key transparency feature still has not shipped (though it is being developed).

Today, “safety numbers” are Signal’s mechanism for letting users detect whether a conversation partner’s public key has been substituted. This is morally equivalent to key fingerprints. As soon as key transparency launches, Signal will be a proper implementation.

Signal offers reproducible builds, but there isn’t enough attention on third-party verification of their builds. This is probably more of an incentive problem than a technical one.

None of the mainstream apps currently use binary transparency, but that’s an easier lift.

Enter, Jurisdiction

Now that the premise has been explained in sufficient detail, let’s revisit the argument I made at the top of the page:

If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

At the bottom of the cryptography used by a properly built E2EE app, you will have an AEAD mode that carries a security proof: without the secret key, an encrypted message is indistinguishable from an encryption of all zeroes of the same length as the actual plaintext.

This means that the host country cannot learn anything useful about the actual contents of the communication.

They can only learn metadata (message length, if padding isn’t used; time of transmission; sender and recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions; the systems that do pursue it generally build atop the Tor network. This is why having a clear threat model, as discussed in the previous section, is important.

Regardless, if the only thing you’re seeing on the server is encrypted data, then where the data is stored doesn’t really matter at all (outside of general availability concerns).
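To see that property in action, here’s a sketch with ChaCha20-Poly1305 (the message is made up; any modern AEAD behaves this way):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

def encrypt(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # never reuse a nonce under the same key
    return nonce + aead.encrypt(nonce, plaintext, None)

message = b"attack at dawn!"
real = encrypt(message)
zeros = encrypt(b"\x00" * len(message))

# Identical length, and without the key there is no feasible way to
# tell which ciphertext hides the real message. That's exactly what
# the server (in any country) gets to see.
assert len(real) == len(zeros)
```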

But What If The Host Country…

…Wants to Stealthily Backdoor the App?

Binary transparency and reproducible builds would prevent this from succeeding stealthily. If the government wants the attack to succeed, they have to accept that it will be detected.

…Legally Compels the App Store to Ship Malware?

This is an endemic risk to smartphones, but binary transparency makes this detectable.

That said, at minimum, the developer should control their own signing keys.

…Wants to Replace A User’s Public Key With Their Own?

Key transparency + independent third-party log monitors. I covered this above.

…Purchases Zero-Day Exploits To Target Users?

This is a table-stakes risk for virtually all high-profile software. But if you think your threat model is Mossad, you’re not being reasonable.

When Does Jurisdiction Matter?

If the developers of an app do not live in a liberal democracy with a robust legal system, they probably cannot tell their government, “No,” if they’re instructed to backdoor the app and cut a release (stealth be damned).

Of course, that’s not the only direction a government demand could take. As we saw with Shadowsocks, sometimes they’re only interested in pulling the plug.

If you’re worried about the government holding a gun to some developer’s head and instructing them to compromise millions of people, including their own employees and innocent civilians, just to get access to your messages specifically, you might be better served by learning some hacker opsec (STFU is the best policy) than by trying to communicate at all.

In Conclusion

If you’re trying to weigh the importance of jurisdiction in your own personal risk calculus for deciding between different encrypted messaging apps, it should rank near the very bottom of your list of considerations.

I will always recommend the app that actually encrypts your data securely over the one that shovels weakly-encrypted (or just plaintext) data to Switzerland.

It’s okay to care about data sovereignty (if you really want to), but that’s really not a cryptographic security consideration. I’ve found that a lot of Europeans prioritize this incorrectly, and it’s kind of annoying.


Header art: AJ, photo from FWA 2025 taken by 3kh0.
