Understanding Google's Quantum Error Correction Breakthrough (www.quantum-machines.co)
80 points by GavCo 6 hours ago | 52 comments

Is this an actually good explanation? The introduction immediately made me pause:

> In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit

No. In classical computers, memory errors are handled with error-correcting codes, not by duplicating bits and majority voting. Duplicating bits would be a very wasteful strategy when you can add significantly fewer bits and achieve the same result, which is what you get with error correction techniques like ECC. Maybe they confused it with logic circuits, where there isn't any more efficient strategy?
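To make the overhead difference concrete, here's a rough sketch (my own illustration, not something from the article): a Hamming(7,4) code corrects any single flipped bit with only 3 parity bits per 4 data bits, whereas duplicate-and-vote needs two full extra copies for the same single-error protection.

    # Hamming(7,4): 4 data bits + 3 parity bits, corrects any single bit flip
    def hamming74_encode(d):                # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]             # parity over codeword positions 1,3,5,7
        p2 = d[0] ^ d[2] ^ d[3]             # parity over codeword positions 2,3,6,7
        p3 = d[1] ^ d[2] ^ d[3]             # parity over codeword positions 4,5,6,7
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword, positions 1..7

    def hamming74_decode(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3     # 0 = clean, otherwise 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1            # flip the erroneous bit back
        return [c[2], c[4], c[5], c[6]]     # recovered data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                            # corrupt one bit in transit
    assert hamming74_decode(word) == [1, 0, 1, 1]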


Physicist here. Classical error correction may not always be a straight up repetition code, but the concept of redundancy of information still applies (like parity checks).

In a nutshell, in quantum error correction you cannot use redundancy because of the no-cloning theorem, so instead you embed the qubit subspace in a larger space (using more qubits) such that when correctable errors happen the embedded subspace moves to a different "location" in the larger space. When this happens it can be detected and the subspace can be brought back without affecting the states within the subspace, so the quantum information is preserved.
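If it helps, here's a toy numeric sketch of that idea (my own illustration, using the simplest case, the 3-qubit bit-flip code): a single error moves the embedded subspace to an orthogonal one, and measuring the stabilizers tells you where it went without ever learning the amplitudes a and b.

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    kron = lambda *ops: reduce(np.kron, ops)

    a, b = 0.6, 0.8                          # arbitrary logical amplitudes, |a|^2 + |b|^2 = 1
    logical = np.zeros(8)
    logical[0], logical[7] = a, b            # a|000> + b|111>, the embedded 2-dim subspace

    error = kron(I2, X, I2)                  # bit flip on the middle qubit
    corrupted = error @ logical

    # Stabilizer expectation values (in hardware these are measured via ancilla qubits)
    Z1Z2 = kron(Z, Z, I2)
    Z2Z3 = kron(I2, Z, Z)
    syndrome = (corrupted @ Z1Z2 @ corrupted, corrupted @ Z2Z3 @ corrupted)
    print(syndrome)                          # (-1.0, -1.0): both checks flipped -> middle qubit

    recovered = error @ corrupted            # applying X to the flagged qubit undoes the error
    assert np.allclose(recovered, logical)   # a and b were never disturbed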


This seems like the kind of error an LLM would make.

It is essentially impossible for a human to confuse error correction and “majority voting”/consensus.


People have always made bad assumptions or had misunderstandings. Maybe the author just doesn't understand ECC and always assumed it was consensus-based. I do things like that (though I try not to write about them without verifying); I'm confident that so do you and everyone reading this.

>Maybe the author just doesn't understand ECC and always assumed it was consensus-based.

That's likely, or it was LLM output and the author didn't know enough to know it was wrong. We've seen that in a lot of tech articles lately, where authors assume that something that is true-ish in one area is also true in another, and it's obvious they just don't understand the other area they're writing about.


Frankly, no state-of-the-art LLM would make this error. Perhaps GPT-3.5 would have, but the space of errors they tend to make now is in areas of ambiguity or things that require deductive reasoning, math, etc. In areas that are well described in the literature, they tend not to make mistakes.

I don't believe it is the result of an LLM; more likely an oversimplification, or maybe a minor fuckup on the part of the author, as simple majority voting is often used in redundant systems, just not for memories, where there are better ways.

And as for an LLM result, this is what ChatGPT says when asked "How does memory error correction differ from quantum error correction?", among other things.

> Relies on redundancy by encoding extra bits into the data using techniques like parity bits, Hamming codes, or Reed-Solomon codes.

And when asked for a simplified answer

> Classical memory error correction fixes mistakes in regular computer data (0s and 1s) by adding extra bits to check for and fix any errors, like a safety net catching flipped bits. Quantum error correction, on the other hand, protects delicate quantum bits (qubits), which can hold more complex information (like being 0 and 1 at the same time), from errors caused by noise or interference. Because qubits are fragile and can’t be directly measured without breaking their state, quantum error correction uses clever techniques involving multiple qubits and special rules of quantum physics to detect and fix errors without ruining the quantum information.

Absolutely no mention of majority voting here.

EDIT: GPT-4o mini does mention majority voting as an example of a memory error correction scheme, but not as the way to do it. The explanation is overall more clumsy but generally correct; I don't know enough about quantum error correction to fact-check.


That threw me off as well. Majority voting works for industries like aviation, but that's still about checking results of computations, not all memory addresses.

Maybe they were thinking of control systems where duplicating memory, lockstep cores and majority voting are used. You don't even have to go to space to encounter such a system, you likely have one in your car.
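For reference, that control-system style of redundancy (triple modular redundancy) is literally a bitwise majority vote across lockstep replicas; a minimal sketch:

    def tmr_vote(a: int, b: int, c: int) -> int:
        # a bit is set in the output iff at least two of the three replicas agree on it
        return (a & b) | (a & c) | (b & c)

    assert tmr_vote(0b1010, 0b1010, 0b0010) == 0b1010   # one corrupted replica is outvoted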

The explanation of Google's error correction experiment is basic but fine. People should keep in mind that Quantum Machines sells control electronics for quantum computers, which is why they focus on the control and timing aspects of the experiment. I think a more general introduction to quantum error correction would be more relevant to the Hacker News audience.

I think it's fundamentally misleading, even on the central quantum stuff:

I missed what you saw, that's certainly a massive oof. It's not even wrong, in the Pauli sense, i.e. it's not just a simplistic rendering of ECC.

It also strongly tripped my internal GPT detector.

Also, it goes on and on about real-time decoding; the foundation of the article is that Google's breakthrough is real-time, and the Google article was quite clear that it isn't real-time.*

I'm a bit confused, because it seems completely wrong, yet they published it, and there's enough phrasing that definitely doesn't trip my GPT detector. My instinct is someone who doesn't have years of background knowledge / formal comp sci & physics education made a valiant effort.

I'm reminded that my thoroughly /r/WSB-ified MD friend brings up "quantum computing is gonna be big, what stonks should I buy" every 6 months, and a couple days ago he sent me a screenshot of my AI app that had a few conversations with him hunting for opportunities.

* "While AlphaQubit is great at accurately identifying errors, it’s still too slow to correct errors in a superconducting processor in real time"


This is not about AlphaQubit. It's about a different paper, https://arxiv.org/abs/2408.13687 and they do demonstrate real-time decoding.

> we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor’s fast 1.1 μs cycle duration


Oh my, I really jumped to a conclusion. And what fantastic news to hear. Thank you!

Yeah, I didn't want to just accuse the article of being AI-generated, since quantum isn't my specialty, but this kind of error instantly tripped my "it doesn't sound like this person knows what they're talking about" alarm, which likely indicates a bad LLM helped summarize the quantum paper for the author.

ECC is not easy to explain, and "error correction is done with error correction" sounds like a tautology rather than an explanation, unless you give a full technical explanation of exactly what ECC is doing.

Regardless of whether the parent's sentence is a tautology, the explanation in the article is categorically wrong.

Categorically might be a bit much. Duplicating bits with majority voting is an error correction code, it's just not a very efficient one.

Like, it's wrong, but it's not totally out-of-this-world wrong. Or more specifically, it's in the correct category.


It's categorically wrong to say that that's how memory is error corrected in classical computers because it is not and never has been how it was done. Even for systems like S3 that replicate, there's no error correction happening in the replicas and the replicas are eventually converted to erasure codes.

I'm being a bit pedantic here, but it is not categorically wrong. Categorically wrong doesn't just mean "very wrong"; it is a specific type of being wrong, a type that this isn't.

Repetition codes are a type of error correction code. It is thus in the category of error correction codes. Even if it is not the right error correction code, it is in the correct category, so it is not a categorical error.


Eh, I don't think it is categorically wrong… ECCs are based on the idea of sacrificing some capacity by adding redundant bits that can be used to correct for some number of errors. The simplest ECC would be just duplicating the data, and it isn't categorically different from the real ECCs used.

Then you're replicating, not error correcting. I've not seen any replication systems that use the replicas to detect errors. Even RAID 1, which is a pure mirroring solution, only fetches one of the copies when reading and will ignore corruption on one of the disks unless you initiate a manual verification. There are technical reasons for that, related to read amplification as well as what it does to your storage cost.

I guess that is true; pure replication would not allow you to correct errors, only detect them.

However, I think explaining the concept as duplicating some data isn't horribly wrong for non-technical people. It is close enough to allow the person to understand the concept.


Yeah, I couldn't quite remember if ECC is just Hamming codes or uses something more modern like fountain codes, although those are technically FEC. So in the absence of stating something incorrectly, I went with the tautology.

Note the paper they are referring to was published August 27, 2024

https://arxiv.org/pdf/2408.13687


While I'm still eager to see where Quantum Computing leads, I've got a new threshold for "breakthrough": Until a quantum computer can factor products of primes larger than a few bits, I'll consider it a work in progress at best.

> While I'm still eager to see where Quantum Computing leads

Agreed. Although I'm no expert in this domain, I've been watching it a long time as a hopeful fan. Recently I've been increasing my (currently small) estimated probability that quantum computing may not ever (or at least not in my lifetime) become a commercially viable replacement for SOTA classical computing to solve valuable real-world problems.

I wish I knew enough to have a detailed argument but I don't. It's more of a concern triggered by reading media reports that seem to just assume "sure it's hard, but there's no doubt we'll get there eventually."

While I agree quantum algorithms can solve valuable real-world problems in theory, it's pretty clear there are still a lot of unknown unknowns in getting all the way to "commercially viable replacement solving valuable real-world problems." It seems at least possible we may still discover some fundamental limit(s) preventing us from engineering a solution that's reliable enough and cost-effective enough to reach commercial viability at scale. I'd actually be interested in hearing counter-arguments that we now know enough to be reasonably confident it's mostly just "really hard engineering" left to solve.


I guess like most of these kinds of projects, it'll be smaller, less flashy breakthroughs or milestones along the way.


There will be a thousand breakthroughs before that point.

That just means that the word "breakthrough" has lost its meaning. I would suggest the word "advancement", but I know this is a losing battle.

>That just means that the word "breakthrough" has lost its meaning.

This. Small, incremental and predictable advances aren't breakthroughs.


Quantum computers can (should be able to; do not currently) solve many useful problems without ever being able to factor products of large primes.

What are some good examples?

The one a few years ago where Google declared "quantum supremacy" sounded a lot like simulating a noisy circuit by implementing a noisy circuit. And that seems a lot like simulating the falling particles and their collisions in an hourglass by using a physical hourglass.


The only one I can think of is simulating physical systems, especially quantum ones.

Google's supremacy claim didn't impress me; besides being a computationally uninteresting problem, it really just motivated the supercomputer people to improve their algorithms.

To really establish this field as a viable going concern probably needs somebody to do "something" with quantum that is experimentally verifiable but not computable classically, and is a useful computation.


Yeah I think that's the issue that makes it hard to assess quantum computing.

My very layman understanding is that there are certain things it will be several orders of magnitude better at, but at "simple" things for a normal machine, quantum will be just as bad if not massively worse.

It really should be treated as a different tool for right now. Maybe some day in the very far future, if it becomes easier to make quantum computers, an abstraction layer will arrive in some manner that means the end user thinks it's just like a normal computer, but from the perspective of "looking at a series of 1s and 0s" versus "looking at a series of superposed particles" it's extremely different in function.


Wow, they managed to make a website that scales everything except the main text when adjusting the browser's zoom setting.

I'm someone not really aware of the consequences of each quantum of progress in quantum computing. But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.

How much closer does this work bring us to the Quantum Crypto Apocalypse? How much time do I have left before I need to start budgeting it into my quarterly engineering plan?


> But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.

Probably not. Unless a real sudden unexpected breakthrough happens, best practice will be to use quantum-resistant algorithms long before this becomes a relevant issue.

And practically speaking, it's only public-key crypto that is an issue; your symmetric keys are fine (oversimplifying slightly, but practically speaking this is true).
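The back-of-the-envelope argument for the symmetric case (my sketch of the usual Grover reasoning): a quantum computer only gets a quadratic speedup on brute-force key search.

    classical_ops = 2 ** 256               # brute-forcing a 256-bit key today
    grover_ops = 2 ** (256 // 2)           # Grover: ~sqrt(2^256) = 2^128 quantum operations
    print(f"still ~2^{grover_ops.bit_length() - 1} operations")   # 2^128, far beyond feasible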


The primary threat model is data collected today via mass surveillance that is currently unbreakable will become breakable.

There are already new “quantum-proof” security mechanisms being developed for that reason.


Perhaps, but you've got to ask yourself how valuable your data will be 20-30 years in the future. For some people that is a big deal, maybe. For most people that is a very low-risk threat. Most private data has a shelf life after which it is no longer valuable.

Yes, and people are recording encrypted communications now for this reason.

You'll need to focus on asym and DH stuff. If your symmetric keys are 256 bits you should be fine there.

The hope is that most of this should just be: update to the latest version of openssl / openssh / golang-crypto / what have you, and make sure your handshake settings use the latest crypto algorithms. This is all kind of far-flung because there is very little consensus around how to change protocols, for various human reasons.

At some point you'll need to generate new asym keys as well, which is where I think things will get interesting. HW-based solutions just don't exist today and will probably take a long time due to the inevitable cycle of: companies want to meet US federal government standards due to regulations / selling to fedgov; fedgov is taking their sweet time to standardize protocols and seems interested in adding more certified algorithms as well; actually getting something approved for FIPS 140 (the relevant standard) takes over a year at this point just to get your paperwork processed; everyone wants to move faster. Software can move quicker in terms of development, but you have the normal tradeoffs there, with keys being easier to exfiltrate and the same issue with formal certification.


Maybe my tinfoil hat is a bit too tight, but every time fedgov wants a new algo certified I question how strong it is and whether they've already figured out a weakness. Once bitten, twice shy, or something?

The NSA has definitely weakened or back-doored crypto. It’s not a conspiracy or even a secret! It was a matter of (public) law in the 90s, such as “export grade” crypto.

Most recently Dual_EC_DRBG was forced on American vendors by the NSA, but the backdoor private key was replaced by Chinese hackers in some Juniper devices and used by them to spy on westerners.

Look up phrases like "nobody but us" (NOBUS), which is the aspirational goal of these approaches, but it often fails, leaving everyone, including Americans and their allies, exposed.


You should look up the phrase "once bitten, twice shy", as I think you missed the gist of my comment. We've already been bitten at least once by incidents like the ones you've described. From then on, it will always be in the back of my mind that friendly little suggestions on crypto algos from fedgov should be received with suspicion. Accepting that, most people who are unaware will assume someone is wearing a tinfoil hat.

I'm not sure anyone really knows this, although there is no shortage of wild speculation.

If you have keys that need to be robust for 20 years you should probably be looking into trying out some of the newly NIST approved standard algorithms.


Does anyone on HN have an understanding of how close this achievement brings us to useful quantum computers?

This is another hype piece from Google's research and development arm. This is a theoretical application to increase the number of logical qubits in a system by decreasing the error caused by quantum circuits. They just didn't do the last part yet, so the application is yet to be seen.

https://arxiv.org/abs/2408.13687

"Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

Google forgot to test if it scales I guess?


It's the opposite of a theoretical application, and it's not a hype piece. It's more like an experimental confirmation of a theoretical result mixed with an engineering progress report.

They show that a certain milestone was achieved (error rate below the threshold), show experimentally that this milestone implies what theorists predicted, talk about how this milestone was achieved, and characterize the sources of error that could hinder further scaling.

They certainly tested how it scales up to the scale that they can build. A major part of the paper is how it scales.
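To make "how it scales" concrete: below threshold, the surface-code logical error rate is expected to fall exponentially with code distance. A toy illustration of that relation (made-up constants, not the paper's measured numbers):

    # eps_d ~ A * Lambda**(-(d + 1) / 2): each increase of d by 2 divides the error by Lambda
    A, Lambda = 0.03, 2.0                  # hypothetical values, for illustration only
    for d in (3, 5, 7, 9, 11):
        eps = A * Lambda ** (-(d + 1) / 2)
        print(f"distance {d}: logical error/cycle ~ {eps:.2e}")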

>> "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

> Google forgot to test if it scales I guess?

Remember that quantum computers are still being built. The paper is the equivalent of

> We tested the scaling by comparing how our algorithm runs on a chromebook, a server rack, and google's largest supercomputing cluster and found it scales well.

The sentence you tried to interpret was, continuing this analogy, the equivalent of

>Google's largest supercomputing cluster is not large enough for us, we are currently building an even bigger supercomputing cluster, and when we finish, our algorithm should (to the best of our knowledge) continue along this good scaling law.


Lol, yeah, the whole problem with quantum computation is the scaling; that's literally the entire problem. It's trivial to make a qubit, harder to make 5, impossible to make 1000. "If it scales" is just wishy-washy language to cover "in the ideal scenario where everything works perfectly and nothing goes wrong, it will work perfectly".

The fact that there is a forward-looking subsection about “the vision for fault tolerance” (emphasis mine) almost entirely composed of empty words and concluding in “we are just starting this exciting journey, so stay tuned for what’s to come!” tells you “not close at all”.

Doesn't feel like a breakthrough. A positive engineering step forward, sure, but not a breakthrough.

And wtf does AI have to do with this?


It's not a major part of the paper, but Google tested a neural network decoder (which had the highest accuracy), and some of their other decoders used priors that were found using reinforcement learning (again for greater accuracy).


