
[This is a transcript with links to references.]

It’s been a while since we talked about quantum computing! The last time I did an overview video was in 2021, but that might as well have been in the stone age, so much has happened since! What’s going on, what’s new and who is saying what? That’s what we’ll talk about today.

First things first, quantum computers calculate with “quantum bits”, qubits for short, that can become entangled. Entanglement is a type of correlation, but one that requires quantum effects. So you can’t build a quantum computer with Legos, I’m sorry. But with entanglement you can encode a huge number of states into the qubits, and that can speed up the solution of certain mathematical problems.
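To make that “huge number of states” concrete, here’s a minimal sketch in Python (my illustration, not from the video; it only needs numpy): the state of n qubits is a vector of 2^n complex amplitudes, and a Bell state is the simplest entangled example, one that can’t be split into two independent single-qubit states.

```python
import numpy as np

# A single qubit is a 2-component complex vector; n qubits together live
# in a 2**n-dimensional space, which is why a quantum register encodes
# exponentially many amplitudes.
zero = np.array([1, 0], dtype=complex)  # |0>
one = np.array([0, 1], dtype=complex)   # |1>

def n_qubit_state(n):
    """The state |00...0> of n qubits, built with tensor (Kronecker) products."""
    state = zero
    for _ in range(n - 1):
        state = np.kron(state, zero)
    return state

# A Bell state: the simplest entangled state of two qubits.
# It is a superposition of |00> and |11> that cannot be factored
# into a product of two single-qubit states.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

print(len(n_qubit_state(10)))  # 10 qubits need 2**10 = 1024 amplitudes
```

The last line is the point: the amplitude count doubles with every added qubit, which is where the potential speed-up for certain problems comes from.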

The relevant word in the previous sentence is “certain”. Quantum computers only bring an advantage for certain mathematical problems. The most important ones are in the areas of quantum chemistry, logistics, finance, and code cracking. I made a video about just what type of problems those are previously.

The other relevant thing you need to know about quantum computers is that those quantum states are really fragile. They get destroyed easily by the smallest perturbation, and you need to get the calculation done before the quantum states are gone. It’s kind of like when you have to talk very fast to get out what you want to say before you forget it. Now, what was I about to say?

How many qubits you need for practical applications depends on how large an error you’re willing to tolerate. Optimistic estimates say it’s at least a few hundred thousand; the pessimistic ones say more like 10 million, which makes me personally think it’ll probably be more like 100 million.

It used to be that there were basically only two approaches to quantum computing: trapped ions and superconducting circuits. And those are still leading the pack, but other approaches are catching up and there’s good reason to think some will take over in the next couple of years.

Superconducting circuits are the main approach pursued by Google, IBM, and Rigetti, among others. The downsides of this technology are that the qubits need to be cooled to a few millikelvin and that they have a coherence time of a few tens of microseconds on a good day. The coherence time measures how long the quantum effects last, and a few tens of microseconds is hardly long. On the other hand, these qubits can also be operated and read out very quickly, so the short coherence time isn’t necessarily a disadvantage.

Superconducting circuits are also the technology that Google used for the first demonstration of quantum supremacy, that is, when a quantum computer performs a task faster than a conventional computer, though it’s since been renamed to quantum advantage because reasons.

Google published its demonstration of quantum advantage in 2019. They used a 53-qubit quantum computer and claimed to have completed, in just three and a half minutes, a calculation that would have taken 10 thousand years on a conventional computer. The 10 thousand years claim was swiftly questioned by IBM, and indeed the calculation was later done by a Chinese group on a conventional computer in only 5 minutes.

Quantum advantage however just means that the calculation was fast, it doesn’t mean that it actually produced an interesting result. The calculation in question basically produced a certain random distribution.

A similar feat was achieved by a Chinese group in 2021. They measured coincident arrivals of 100 photons, which again produced a random distribution that is difficult to calculate by any other means, though this random distribution could one day have some applications. Here’s a photo of their setup.

Then, earlier this year, IBM made up for being beaten on quantum supremacy by claiming the first demonstration of quantum utility, again with superconducting circuits.

They used a quantum processor with 127 qubits, called the Eagle processor, to calculate what’s called the Trotterised time evolution of a 2D transverse-field Ising model.

What the heck is this? The Ising model is a model of coupled quantum spins; transverse-field means a magnetic field is applied perpendicular to the direction along which the spins couple, which is what makes the model genuinely quantum. That it’s a Trotterised time evolution means they used a specific approximation method to calculate how these coupled quantum spins change in time. And 2D means Transdimensional Doodles. Just checking if you’re listening.
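For the curious, here’s what Trotterisation does in a toy sketch (my illustration, not IBM’s actual 127-qubit computation): take a two-spin transverse-field Ising Hamiltonian, H = −J Z⊗Z − h (X⊗I + I⊗X). The exact time evolution exp(−iHt) is approximated by alternating many small steps generated separately by the two non-commuting parts, and the approximation improves as the number of steps grows.

```python
import numpy as np

# Pauli matrices and the identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy two-spin transverse-field Ising Hamiltonian: H = H_zz + H_x
J, h, t = 1.0, 0.7, 1.0
H_zz = -J * np.kron(Z, Z)                      # spin-spin coupling
H_x = -h * (np.kron(X, I2) + np.kron(I2, X))   # transverse field

def evolve(H, t):
    """exp(-i H t) for a Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

U_exact = evolve(H_zz + H_x, t)

def trotter(n):
    """First-order Trotter approximation: n alternating small steps."""
    step = evolve(H_zz, t / n) @ evolve(H_x, t / n)
    return np.linalg.matrix_power(step, n)

# The error shrinks roughly like 1/n, because H_zz and H_x don't commute
for n in (1, 10, 100):
    print(n, np.linalg.norm(trotter(n) - U_exact))
```

On a quantum computer, each small step corresponds to a layer of gates, which is why the method maps so naturally onto hardware; IBM’s experiment did this on 127 spins, where the exact matrix is far too large to write down.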

Now, this is a pretty amazing calculation if you’re interested in the Ising model, but I think it’s fair to say that this isn’t exactly everyone’s notion of utility.

The current record for the number of physical qubits is 433, held by the IBM Osprey chip, which they are planning to make available in the cloud soon. IBM also has a roadmap according to which they want to reach more than 1000 qubits later this year. They say this will be a modular approach which can then be scaled up to a million within 10 years. Google has its own roadmap with certain milestones but hasn’t put particular dates on it.

The other front runner has for a long time been ion traps, pursued by companies such as IonQ and Quantinuum. In this case the qubits are ions that are trapped by electromagnetic fields and entangled with lasers. Ion traps also have to be cooled, though not to millikelvin, but to a balmy three Kelvin or so. Ion traps have longer coherence times than superconducting circuits, but they’re also slower to operate, so it’s not a priori clear which is better.

In the past two years, ion traps have lagged severely behind in the number of qubits. The company IonQ is currently at around 30 qubits, and Quantinuum is at around the same.

The first newcomer I want to mention is photonic quantum computing. When I say “new” I don’t mean to say that the idea is new, I just mean that the technology matured a lot recently and they have something to show for it.

Photonic quantum computing, as the name suggests, uses photons as qubits. In the simplest case the two qubit states are just whether the photon is there or not there. The good thing about photons is that you can use them at room temperature, though you still need to cool the detectors down because otherwise you can’t find the single photons in the noise.

The challenge for photonic quantum computing is, well, first you need reliable sources for single photons, and then you have to shrink down all the elements that you normally have on an optical table and put them on chips. This isn’t all that easy because photons are the quanta of light, so they move, well, at the speed of light. They don’t just sit around and wait for you to get things done.

You have to shrink all these elements down so you can scale the device up. Remember the setup from the Chinese group? That’s for 100 photons. Let’s say it’s about one or two square meters, and now scale this up to a million photons. That would bring us to more than 10 thousand square meters, which is at least the size of a football field. It’s kind of hard to control perturbations in a device that large, so maybe you begin to see the problem here.

But scientists have managed to build chips that can perform many of the operations of an optical table, and some basic photonic quantum computing chips now exist. For example, a Dutch group related to the startup QuiX Quantum recently presented a 20-qubit photonic chip.

In 2022, the Canadian company Xanadu even put forward a 216-qubit photonic quantum computing chip, which they called Borealis. They are using a technology made of four layers to encode the quantum state and then operate on it. Their qubits are squeezed states of light, states in which the quantum uncertainty in one variable is reduced at the expense of another. Though they stress that, by the nature of these states, their system is not equivalent to a universal 216-qubit processor, as it can only do certain tasks.

They have published a paper in Nature demonstrating quantum advantage, also for a sampling method. Earlier this year, the Canadian government invested 40 million Canadian dollars into the company. Xanadu says that their technology can be scaled up to a million qubits at least, and they have made Borealis available for everyone to use over the cloud.
 
Yes, this is well behind IBM’s 400 or maybe soon 1000, but this delay might not matter all that much in the end. This is certainly also what the company PsiQuantum thinks.

The company has partnered with GlobalFoundries, a leading semiconductor producer, to manufacture chips for photonic quantum computing en masse. They say that they want to have the facility for chip fabrication in place “by the middle of the decade” and then have their million-qubit quantum computers “shortly after that”.

PsiQuantum has been extremely quiet on their technology so it’s hard to say how far along they really are. That you can’t really tell what they’re doing could be a good sign because they might be worried that they’re getting scooped, or it might be a bad sign because it’s really just hot air.

The Germans too are interested in photonic quantum computing. A public-private partnership of 14 organizations is working on a project called PhoQuant, which aims to develop a photonic quantum computer with up to 100 qubits by 2026.

Newcomer number two are atoms in tweezers. Not exactly the kind of tweezers you pluck your eyebrows with, but tweezers made of light, so called “optical tweezers”.

Atoms in tweezers are a variation of the idea of ion traps. Ions are atoms that are missing some electrons, so they’re positively charged. On the one hand that’s good, because if they’re charged, they’re easier to trap. On the other hand, it’s bad because they all repel each other, which makes for awkwardly unstable configurations.

In the currently used ion traps, the ions all sit in a row. It’s called a linear trap, and it’s convenient because the ions in a linear trap are reasonably easy to handle. You can link several of those traps with each other, and that’s been done, but if you want to keep lining them up forever, that doesn’t scale very well if you also want them all to interact.

If you instead take neutral atoms, it’s easier to build 3-dimensional configurations, and that scales better. This is what you do with the optical tweezers. The tweezers are weak, so they don’t work well on charged particles like those ions, but they’re big at the moment because they’re more scalable. You then need some way to encode qubits in the atoms, and there are several ways to do that.

You can do it for example with nuclear spin states, which is what the California-based startup Atom Computing is doing. They are using two different spin states of strontium-87 atoms. In a paper last year, they said they observed a coherence time longer than 20 seconds. I can report from my own experience that when it comes to quantum computing, staying coherent for more than 20 seconds is a remarkable success indeed. The company says that to date they can work with about 100 qubits, which is pretty good.

There’s also the company ColdQuanta, which said it wanted to have 100 qubits in the form of cold atoms by 2022. But 2022 came and went, and I saw no press release for such a device, though they published a paper in which they report successfully computing with six qubits. The website now says they want to have 1000 qubits by 2024. Another company working on this is Pasqal, and a lot of research institutions are looking into it as well.

The third newcomer is topological quantum computing.

Topological quantum computing is a somewhat different idea from all the rest in that the term doesn’t refer to the physical basis of the qubit, but rather to its type. A topological qubit can be realized in many different ways; the relevant point is that it’s a collective excitation in some kind of medium, a quasi-particle, and the quantum properties are protected because they’re a conserved quantity, a topological property. They’re like smoke rings: made of smaller particles, but with a shape that they want to keep. This makes them robust to noise, and that’s why they’re interesting.

Topological quantum computing was basically only pursued by Microsoft, or so everyone thought, and that wasn’t going all that well. A paper published in 2018 in Nature by a Microsoft-led team claimed to have found evidence of Majorana modes, which are a type of topological quantum state. However, the paper was retracted in 2021 after the authors admitted to having made mistakes in their data analysis and presentation.

But in June this year, another group at Microsoft put out a new paper in which they say again that they’ve successfully created such Majorana modes. They used a thin semiconducting wire coupled to superconducting aluminium and claim they have convincing evidence for Majorana modes at the end points of the wire. This device still needs to be cooled to 20 millikelvin or so, but if they’re right, it would make quantum computers much easier to scale up.

Microsoft also has a roadmap to quantum computing. It makes no specific statements about how long it’s going to take but in an interview with TechCrunch, Krysta Svore, who leads the quantum research group at Microsoft, said that it would take less than 10 years.

The maybe most surprising development, however, has been that suddenly several other companies are interested in topological quantum computing as well. In late 2022, Google’s Quantum AI group announced that they too had successfully created a different type of topological quantum state, non-Abelian anyons. They said they had made that work on a superconducting processor, and that they also managed to encode information in them.

And in May this year, a group of German and American physicists, together with the quantum computing company Quantinuum, created another type of topological quantum state, similar to that of the Google group. They realised these states in an ion trap with 27 qubits, and they also managed to entangle them.

In summary, a lot has happened in quantum computing in the past two years, with two strong newcomers, photonics and optical tweezers, and a dark horse catching up: topological qubits. I quite frequently talk about this in my weekly science news, so rest assured I’ll keep you up to date.


Comments

Anonymous

This reminds me of the AI enthusiasm thirty years ago. A lot of money gets wasted trying to do something that isn't going to be real for a very long time. I worked on fusion for 30 years, so I ought to know (:->

Anonymous

Paul: Too right! Perhaps the story with quantum computing will end up with it, in some limited form, finding some niche applications where it might be really useful. The very work put into developing this seems likely to have some significantly useful technological spinoffs, at the very least. We'll see. As to other technological enthusiasms that have come and gone, or are still around, in long-term suspended animation: I had just finished college and remember the AI enthusiasm you mentioned, starting in the mid-Seventies. Computers (with less than 1 GB of memory, slow CPUs, and hard disk drives as large as washing machines) were going to translate from one language to another and understand natural language, so we could tell them what to do by speaking it loud and clear and they would answer our questions in the same way, just as in "Star Trek", in a matter of a few more years. Fusion was just 10 years from powering our porch lights, along with everything else electric. In the 1990's, cars were soon going to be driving themselves with their electronic brains, eyes and ears, taking us safely and quite comfy as their passengers, reading, playing games, or enjoying the view. Then came those Internet startups that were going to change everything, in around 2000. We would be mining asteroids for rare metals and settling Mars by now. And the blockchain and encryption were going to change the way we live, with the faceless rulers of Central Banks ultimately no longer dictating how we, as truly free citizens, conducted our business. And so it goes, as Kurt Vonnegut liked to write in his novels, at the end of some particularly melancholic chapters. Things like these, if at all possible, or even really desirable in the end, may take decades and even centuries in coming, as one realizes if one only takes the time to think about it.
Meanwhile, we have to live, right now, in only one world, not particularly large, dangerously armed with unimaginably powerful weapons, with eight billion mouths and still counting that need to be fed. I do not know why, but considering this theme has brought this old haiku to mind:
The cry of the cicada
Gives us no sign
That presently it will die.
Matsuo Basho

Anonymous

What’s a “lot of money” and how long is a “long time”? I see the exact opposite here. AI is available on your handheld and I don’t recall it ever being a topic for appropriations committees. Funding for Quantum Computing is coming from the public as well as the private sectors, with examples of promising applications in materials science already in the literature. That’s hardly a waste of money compared to the trillions we spent fighting wars in the Middle East to eradicate terrorism, only to see it rear its ugly head over the weekend. If we had spent a tenth as much on fusion research, maybe your project would have been further along in those 30 years.