In the recent Signalgate scandal, several senior Trump administration appointees used the Signal app on their phones to discuss an attack on the Houthis. Commentators raised the risk of the phones themselves being compromised, and of human error, such as adding a prominent journalist to the group chat by mistake. But almost no one questioned the cryptography itself on this widely available app.
It wasn't always this way. Cryptography used to be a cat-and-mouse game, most notably the breaking of the German Enigma machine, dramatized in the movie The Imitation Game. Then Diffie and Hellman, in their classic 1976 paper "New Directions in Cryptography," stated:
Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.
And in recent memory we've had plenty of successful cybersecurity attacks, but essentially never because the encryption itself was broken; attackers go around the cryptography, not through it.
We've made incredible progress on problems once thought unsolvable: language translation, chess, Go, protein folding and the traveling salesman problem. We have great SAT and mixed-integer programming solvers. We've blown through the Turing test. None of these algorithms works all of the time, but hard problems no longer seem so hard. Yet cryptography remains secure. Why? How did we get to this Optiland, where only the problems we want to be hard are hard? Quantum computers, if we ever have them, could attack some cryptographic protocols, but we're a very long way from having that capability.
My latest theory involves compression. Machine learning works by finding a representation of a function, in a neural net or some other form, that gives an imperfect compressed version of that function, stripping out the random components to reveal the structure inside. You get a smaller representation that, by Occam's Razor, is hopefully a mostly accurate version of that data. For example, we learn a representation of a Go player by training a neural net as the computer plays itself over and over again.
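To make that concrete, here's a toy sketch of learning-as-compression (my illustration, nothing more; it uses numpy, with a low-degree polynomial standing in for the neural net): eight learned parameters summarize a thousand noisy samples, recovering the underlying structure while the noise gets left behind.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 noisy samples of a structured function: y = sin(x) + noise.
x = np.linspace(0, 2 * np.pi, 1000)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

# "Learn" a compressed representation: a degree-7 polynomial,
# i.e., 8 parameters standing in for 1,000 data points.
coeffs = np.polyfit(x, y, deg=7)
y_hat = np.polyval(coeffs, x)

# The model captures the structure (the sine) but can't memorize the noise:
# its error against the clean signal is far below the noise level.
print("params:", coeffs.size, "data points:", x.size)
print("fit vs noisy data :", np.sqrt(np.mean((y_hat - y) ** 2)))
print("fit vs true signal:", np.sqrt(np.mean((y_hat - np.sin(x)) ** 2)))
```

The error against the noisy data stays near the noise level, while the error against the true signal is much smaller: the eight coefficients had room to store only the structure, not the randomness.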
Cryptography is designed to look completely random. No compression. If you remove the randomness, you have nothing left. And thus modern algorithms have no hook with which to attack it.
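You can watch this failure to compress directly. Here's a small sketch using only Python's standard library, with SHA-256 in counter mode standing in for real cipher output (my stand-in, chosen because, like a modern cipher, it's designed to be indistinguishable from random):

```python
import hashlib
import zlib

# Structured data compresses; anything that looks random shouldn't.
structured = b"attack the Houthis at dawn " * 1000

# Stand-in for cipher output: SHA-256 applied to a counter.
pseudorandom = b"".join(
    hashlib.sha256(i.to_bytes(8, "big")).digest() for i in range(1000)
)

for name, data in [("structured", structured), ("pseudorandom", pseudorandom)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:>12}: {len(data)} bytes -> {ratio:.2%} of original")
```

The structured text shrinks to a few percent of its size; the pseudorandom stream actually grows slightly under compression. There is no residue of structure for a learner to grab.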
This is just the beginning of a theory. I don't yet have a good formalization, or even the right questions to ask of it.
So to me it's still a big mystery, and one that deserves more thought if we really want to understand computational complexity in the world we actually live in.