
Your extensive posting history on r/birdswitharms and your old fanfiction-heavy livejournal are both one tiny math problem away from becoming public knowledge. That math problem is prime number factoring, and the new era of quantum computers may lay bare your indiscretions, as well as collapse the entire digital economy. Unless we get us some post-quantum cryptography post-haste. So, how close are we?

--

It’s said that quantum computers will threaten the security of our digital civilization because they can easily overcome the encryption method that underpins almost all of it. Every email, every online purchase, every login is secured by the fact that it’s a lot harder to factor a number into primes than it is to multiply those primes together. Large products of prime numbers become encryption locks, and the prime factors of those numbers become the keys to the secrets in your inbox. Your secrets are safe-ish for now because the most powerful classical computers would take thousands to billions of years to find those factors, depending on the key size. But in 1994, a mathematician named Peter Shor developed an algorithm - Shor’s algorithm - that could use a quantum computer to find those factors in, well, a human lifetime. Or a human lunch break.

Quantum computers didn’t exist in 1994, but just last year, Google’s quantum computer, Sycamore, achieved so-called “quantum supremacy” - outpacing the best classical supercomputers on a very specific task. According to the researchers, Sycamore performed calculations to simulate another quantum system exponentially faster than would have been possible with a purely classical computer. We’ll talk about exactly what happened there in another episode - but what does this mean for internet cryptography? Is it game over?

Not quite - quantum computers need to become far more reliable - to have better “fault tolerance” - before they can do the sort of prime factoring required for decryption. But those computers will eventually arrive. So what’s to be done? One option is to match quantum decryption with new quantum encryption techniques that replace prime factoring. We talked about all this in our episode on quantum key distribution - and we’ve also talked about the challenges. To distribute a quantum key, you also need a quantum internet to transport quantum states, and that’s a far more challenging problem. Quantum states are insanely fragile and difficult to transport, and the quantum internet may not arrive before better quantum computers can crack current encryption techniques and collapse the modern digital world.

Fortunately there is another option - and one that’ll be a lot cheaper than building a quantum internet. It turns out there are some ingenious non-quantum ways to thwart the hacking powers of quantum computers. Enter post-quantum cryptography, or quantum-resistant algorithms - which might replace our vulnerable prime-factoring-based cryptography.

To understand why some algorithms are vulnerable to quantum computing attacks while others are thought to be quantum resistant, we need to review a little bit about encryption.

Prime factoring is an example of what we call a one-way function. That’s a mathematical operation that’s very easy to figure out in one direction, but very difficult to reverse. Let’s start with a more visual example. Let’s say Alice has a secret shade of yellow, Bob has a secret shade of blue, and there’s a shade of red that is publicly broadcast. That’s the public key. Alice combines her yellow with a public shade of red, which gives her an orange. Bob combines his blue with red to get a purple. Alice sends her orange to Bob, and Bob sends his purple to Alice. Then, Alice combines Bob’s purple with her yellow and Bob combines Alice’s orange with his blue. The resulting brown is a shared secret that can be used to encode text between them. Unless an eavesdropper - Eve - can unmix Alice or Bob’s colors, she can’t figure out what their shared secret key is. 
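
This color-mixing dance maps most directly onto Diffie-Hellman key exchange, where the “mixing” is modular exponentiation - easy to compute, hard to reverse. Here’s a minimal sketch with toy numbers; the specific values are illustrative only, and real keys use primes thousands of bits long:

```python
# Diffie-Hellman with toy numbers: the "colors" become numbers mod p.
p, g = 23, 5                          # public "shade of red": a prime and a base

alice_secret = 6                      # Alice's secret "yellow" - never sent
bob_secret = 15                       # Bob's secret "blue" - never sent

alice_mix = pow(g, alice_secret, p)   # Alice's "orange": easy to mix...
bob_mix = pow(g, bob_secret, p)       # Bob's "purple": ...hard to unmix

# Each side combines the other's public mix with their own secret color.
alice_shared = pow(bob_mix, alice_secret, p)
bob_shared = pow(alice_mix, bob_secret, p)

print(alice_shared == bob_shared)     # True - the shared "brown": g^(a*b) mod p
```

Eve sees p, g, and both public mixes, but recovering a secret from its mix means solving a discrete logarithm - another one-way function that, like prime factoring, falls to Shor’s algorithm.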

The current dominant encryption protocol uses prime factors instead of colours: this is the RSA protocol, named after Ron Rivest, Adi Shamir, and Leonard Adleman, who came up with it in 1977. It works great, except for its vulnerability to Shor’s algorithm and quantum computers. But prime factoring is only one example of a one-way function. Are there other possibilities that don’t have the same vulnerability? To understand that, we need a brief word on how quantum computers do their magic.

If you try prime factorization using a classical computer, you’re stuck using an algorithm similar to one developed over 2,000 years ago by the Greek mathematician Eratosthenes. First, divide the number by 2 and see if you get an integer. If not, try 3, 5, 7, 11, 13, and so on. This is laborious - it takes an insane amount of time to factor large numbers this way. Or you could run an insane number of computers in parallel, each checking different candidate factors. Still effectively impossible.
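
For a sense of how that brute-force search looks in practice, here’s naive trial division in a few lines of Python - fine for small numbers, hopeless for the hundreds of digits used in real keys:

```python
def trial_division(n: int) -> list[int]:
    """Factor n by trying each candidate divisor in turn."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out each factor as we find it
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)     # whatever's left over is itself prime
    return factors

print(trial_division(2021))   # [43, 47] - a tiny RSA-style product of two primes
```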

Shor’s algorithm is potentially exponentially faster. For the full explanation, I’m going to point you to the episodes made by Infinite Series a while back. Today we’ll just do the quick version. It turns out there’s a structure to factorization that can be exploited by quantum computers. That structure is a period. For example, let’s consider powers of 2: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on. If we divide these powers by 5 and look at the remainder (an operation called “mod”) we get 2, 4, 3, 1, 2, 4, 3, 1, 2, 4... - it repeats every four numbers. It turns out that if you can figure out the period of a sequence like this, you can work backwards to the factors of the number you’re dividing by. The 18th century mathematician Leonhard Euler figured out that this periodic structure extends to numbers that are the product of two primes - just like our RSA numbers.
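
To see the period-to-factors connection concretely, here’s the classical bookkeeping of Shor’s algorithm on a toy number. Finding the period r is the expensive part (the part the quantum computer accelerates); turning r into factors is just a couple of gcd calls:

```python
from math import gcd

N, a = 15, 2                  # N = 3 * 5 is our toy "RSA number"; a is the base

# Brute-force the period of a^x mod N - the step a quantum computer
# can do exponentially faster for large N.
r = 1
while pow(a, r, N) != 1:
    r += 1
print(r)                      # 4: the remainders 2, 4, 8, 1 repeat every four

# For an even period, gcd(a^(r/2) +/- 1, N) reveals the prime factors.
print(gcd(pow(a, r // 2) - 1, N))   # 3
print(gcd(pow(a, r // 2) + 1, N))   # 5
```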

That doesn’t help us immediately - for the powers of large numbers, guessing the period is as hard as guessing the factors. It would take at least as long, or require at least as many parallel classical computers. This is where quantum weirdness comes in. 

Classical computers store information in bits that are either 0 or 1. Quantum computers store information in qubits, which, until they give an output, are in a quantum superposition of 0 and 1, with some probability of either being measured when you try to read out the qubit. So a set of qubits can be in a superposition of a huge number of possible states, each representing a different number, each with a different probability. You can think of each state as being like a parallel processor - but now instead of processing across different computers, you’re processing in different parts of the quantum wavefunction - or in different parallel realities, if you’re into the Many Worlds interpretation of quantum mechanics.

The problem with this parallel processing is that you only get one answer when you try to read out the qubits. If you tried to use a quantum computer to simply guess the factors of a large number, you’d read out just one of the guesses - and it’s no more likely to be the correct one than if you’d used a classical computer.

But it turns out there’s a way to boost the chances of getting the right answer. Instead of trying to hold possible prime factors in our qubits, our array of qubits holds those repeating remainders from Shor’s algorithm - one per quantum state in the superposition. But now the entire superposition - all elements of the wavefunction - is related by the period of its repetition. And now it’s possible to perform an operation that suppresses all possible periods besides the correct one - essentially, you cause the incorrect parts of the wavefunction to destructively interfere, leaving the correct period boosted. Once you read out that period, you can determine the prime factors.
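
Here’s a classical cartoon of that interference step, using an ordinary discrete Fourier transform as a stand-in for the quantum Fourier transform (the register size M and period r are toy values). We load an amplitude vector that repeats with period 4; after the transform, everything except multiples of M/r cancels out, and the period falls right out:

```python
import numpy as np

M, r = 64, 4                  # register size and the hidden period

# Equal amplitude on every r-th basis state - a cartoon of the register
# after the modular-exponentiation step of Shor's algorithm.
state = np.zeros(M)
state[::r] = 1.0
state /= np.linalg.norm(state)

# The (quantum) Fourier transform makes wrong periods destructively
# interfere; only frequencies at multiples of M/r survive.
probs = np.abs(np.fft.fft(state) / np.sqrt(M)) ** 2

peaks = np.nonzero(probs > 1e-9)[0]
print(peaks)                  # [ 0 16 32 48] - spaced M/r apart
print(M // peaks[1])          # recovers the period: 4
```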

So far, quantum computers are stuck under 100 qubits, and none has managed to factor a number higher than 21 with Shor’s algorithm. Which, honestly, I can do in my head. Does that mean I’m a quantum computer? There are some who might say so - I reserve my opinion. At any rate, although current technology isn’t there yet, once we figure out how to build quantum computers with thousands of fault-tolerant qubits, even very long RSA keys will not be safe.

So how do we fix this problem? The answer is to find one-way functions that don’t have a known exploitable structure like the periodicity of prime factoring. That’s the goal of a competition currently being run by NIST, the National Institute of Standards and Technology. Algorithm applicants have been whittled down from a field of nearly 70 to only 7 finalists and a handful of alternates, announced in mid-2020. In 2022, NIST is expected to narrow it down to one or two quantum resistant algorithms. These will become the new standards for public key encryption and digital signatures - and they’ll hopefully be able to protect our data from quantum computing attacks.

One of the NIST finalists achieves that through the McEliece cryptosystem, which gets its name from cryptographer Robert McEliece, who developed the hard math problem back in the 1970s. The principle behind McEliece is that it’s really difficult to repair errors in large messages. That gives us the potential for a one-way function.

Here's a very basic example of error decoding. Suppose Alice wants to send a message to Bob: "CAT", which is represented by the binary string 1 1 0 1. Once in a while a bit flips in transit, and Bob receives 1 0 0 1 - which means DOG. Bob knows Alice doesn’t have a dog, so he’s pretty sure there’s been an error. What can he do?

Well, Bob can cross-reference his received message against a table of the codewords with every plausible error added to each. We’re working in binary, so 1+1=0. Adding the error 0100 to CAT (1101) gives DOG (1001), which is exactly the message Bob got. So Bob can decode his message and recover Alice’s true intent.


Each cell below is the codeword at the top with the error vector on the left added to it:

Error vector   DOG (1001)   CAT (1101)   BIRD (0011)   COW (1111)
0000           1001         1101         0011          1111
1000           0001         0101         1011          0111
0100           1101         1001         0111          1011
0010           1011         1111         0001          1101
0001           1000         1100         0010          1110
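
In code, Bob’s cross-referencing is just a lookup: try each codeword and check whether its difference from the received string is one of the allowed error vectors. Here’s a minimal sketch using the table above (the decode helper is hypothetical; note this toy code is ambiguous - DOG and CAT differ by a single bit - which is exactly why Bob also leans on context, and why real codes space their codewords much further apart):

```python
CODEWORDS = {"DOG": 0b1001, "CAT": 0b1101, "BIRD": 0b0011, "COW": 0b1111}
ERRORS = {0b0000, 0b1000, 0b0100, 0b0010, 0b0001}   # at most one flipped bit

def decode(received: int) -> str:
    """Return a codeword that differs from `received` by an allowed error."""
    for word, code in CODEWORDS.items():
        if (code ^ received) in ERRORS:   # XOR is binary addition, where 1+1=0
            return word
    raise ValueError("too many bit flips to correct")

print(decode(0b1001))   # DOG: received exactly
print(decode(0b1100))   # CAT: 1101 with its last bit flipped
```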

But decoding errors becomes much, much harder as the codeword size increases - unless you know some things about the codeword ahead of time. In the McEliece cryptosystem, the trick is to encode the message, in a reversible way, into a very large codeword - then add errors to that codeword. Unless you know how that encoding was done, it’s essentially impossible to remove the added errors. McEliece does this by coding messages into large matrices. You have a big array of numbers - a matrix - and you scramble it using secret key matrices, then code your message into the result and add some errors. If you know the keys - the matrices used to do that scramble - you can pretty easily eliminate the errors and find the message that makes sense.
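
Here’s a toy version of that scramble-encode-corrupt flow, with a tiny [7,4] Hamming code standing in for the enormous Goppa codes real McEliece uses. The matrix names (G, S, and the permutation) follow the usual textbook description; this is purely illustrative and wildly insecure at this size:

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],    # generator matrix of a [7,4] Hamming
              [0, 1, 0, 0, 1, 0, 1],    # code; systematic, so the message is
              [0, 0, 1, 0, 0, 1, 1],    # the first 4 bits of a codeword
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],    # parity-check matrix: H @ codeword = 0;
              [1, 0, 1, 1, 0, 1, 0],    # a single bit flip's syndrome matches
              [0, 1, 1, 1, 0, 0, 1]])   # the flipped position's column of H

S = np.array([[1, 1, 0, 0],             # secret scrambling matrix...
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
S_inv = np.array([[1, 1, 1, 1],         # ...and its inverse over GF(2)
                  [0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]])

rng = np.random.default_rng(42)
perm = rng.permutation(7)               # secret column permutation
G_pub = (S @ G % 2)[:, perm]            # the public key: a scrambled generator

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1              # deliberately flip one random bit
    return (m @ G_pub + e) % 2

def decrypt(c):
    y = np.zeros(7, dtype=int)
    y[perm] = c                         # undo the secret permutation
    syndrome = tuple(H @ y % 2)
    for p in range(7):                  # the syndrome pinpoints the bad bit
        if tuple(H[:, p]) == syndrome:
            y[p] ^= 1
    return y[:4] @ S_inv % 2            # unscramble to recover the message

m = np.array([1, 0, 1, 1])
print(decrypt(encrypt(m)))              # [1 0 1 1]
```

With real parameters, that public generator matrix is hundreds of thousands of bits on a side - which is exactly the key-size problem discussed below.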

But without those keys it’s near impossible to decode the errors from this gigantic matrix, and the presence of the errors makes it impossible to brute force the one-way function in the backwards direction.

McEliece avoids the periodicity that makes RSA vulnerable to Shor’s algorithm - computing in parallel universes isn’t helpful here. But there’s a reason we haven’t been using McEliece all along: the public key - this giant matrix - is, well, giant. RSA uses public keys on the order of a kilobyte. To be secure against quantum attacks for the foreseeable future, McEliece would need an 8 MB public key - 8,000 times larger. Right now, network protocols aren’t built for keys that size, and it could cause dramatic slowdowns in transactions.

This problem is shared by three of the other public key encryption finalists - NTRU, CRYSTALS-KYBER, and SABER - which are lattice-based cryptosystems. Lattice-based cryptosystems rely on the difficulty of problems like the shortest vector problem.

Suppose you have an enormous space dotted regularly with points - imagine giant parallelepipeds stacked to fill the whole volume. This is the lattice. But here’s a question: what is the shortest vector - the shortest hop from one lattice point to another? So far there’s no known classical or quantum algorithm that solves this quickly for large, high-dimensional lattices. Unfortunately, those large lattices are also why the lattice candidates have large public keys.

[graphic]
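
Here’s a brute-force taste of the shortest vector problem in two dimensions, where it’s still easy. The basis numbers below are made up for illustration: two long, nearly parallel vectors that hide a much shorter lattice vector. In the hundreds of dimensions used by the lattice candidates, no known algorithm, classical or quantum, finds it efficiently:

```python
import numpy as np

basis = np.array([[201, 37],        # a deliberately skewed ("bad") basis:
                  [1648, 297]])     # long vectors hiding a short one

# Every lattice point is an integer combination a*basis[0] + b*basis[1];
# search a small window of coefficients for the shortest nonzero vector.
shortest = min(
    (a * basis[0] + b * basis[1]
     for a in range(-50, 51)
     for b in range(-50, 51)
     if (a, b) != (0, 0)),
    key=lambda v: v @ v,            # compare squared lengths
)
print(shortest)                     # [-1 -32]: length ~32, versus ~204
                                    # and ~1674 for the two basis vectors
```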

Perhaps a bigger issue than the size of public keys is the question of whether these algorithms are really robust against quantum and classical attacks. Well, one point in their favor is age. The line of thought is basically this: the longer a cryptosystem has been around, the longer someone has had to break it. Since they haven’t, age is a good sign. People have more confidence in a 40-year-old scheme like McEliece than in more recent lattice-based systems.

So will post-quantum cryptography save digital civilization? It’s hard to say. It’s true that these quantum resistant algorithms seem like they will be hard, if not impossible, for quantum computers to crack - but that doesn’t mean no one will ever come up with a way to do it. In fact, there’s no guarantee that someone won’t find a fast classical algorithm to break them.

Mathematicians and cryptographers have decades of security to point to as proof of their encrypting and decrypting abilities. But at the end of the day, quantum key distribution proponents would rather put their faith in the fundamental laws of physics - pretty darn secure, if only we had that quantum internet... In the meantime, post-quantum crypto may be our only hope as black-hat quantum hackers attempt to decrypt your embarrassing emails across the parallel quantum space time.
