The big unsolved problem in QC is the lifetime of the QC itself before decoherence. Imagine a computer where you can execute a maximum of 1800 commands (IBM Heron) and then it is broken. That is the status of QC at the moment. A QC might store TB of data and do searches in O(sqrt(n)) with Grover, but there is no way (at the moment) to upload a TB database to the QC. What we need is a quantum processor that lives for hours or days in a coherent state, but what we have is milliseconds.
Just my 2 ct.
edit: IBM's roadmap shows a QC with 1 billion commands (gates) after 2033. With that machine they could (in principle) upload a 100 MB database and do searches.
The bigger question is how many qubits exactly you can keep arbitrarily entangled on your preferred time scale. Because a linear increase in the number of qubits gives you an exponential increase in capability (compared to a classical computer) for problems that are amenable to the quantum approach. So if adding qubits turns out to be exponentially difficult, then QC will not amount to much since you could've done the same thing in a classical simulation. If it can be done more or less arbitrarily with a non-exponential cost, it's a true asymptotic change for the kinds of problems QC can address.
It's not a big mystery how this will get accomplished however, which is why participants in the field seem hopeful (if modest and tempered). The theory of quantum error correction is rich and pretty well developed. It's just that engineering a system which has enough runway to be error corrected requires a lot of development and innovations in a lot of directions: qubit design and fabrication, quantum compilers, rapid experimental procedures, etc.
Edit to edit: Thinking of gate depths as permitting such-and-such megabyte databases as being "uploaded" isn't really a good or accurate metric in my opinion.
It's an unintuitive theorem in quantum computation that as long as you can protect against two types of errors (bit flips and sign flips) you are protected against almost all errors.
There isn't a good non-technical answer to your previous question but lemma 3.3 of this paper [1] says that if you can correct a finite set of errors for each qubit individually then you can also (essentially for free) correct all other errors. Specifically if you correct a set of errors which spans the set of possible errors (for example the Pauli errors on each qubit do this) then you correct all errors.
This is essentially the reason people are still interested in quantum computing, and why quantum error correction is viable at all. You can protect highly entangled crazy states from highly non-local complicated errors using only the resources needed to correct simple local errors.
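The "correct the Paulis, get everything for free" fact is easy to check numerically. A small numpy sketch (not tied to any particular QEC code): decompose an arbitrary single-qubit error operator into I, X, Y, Z and verify the decomposition is exact, so a code that corrects bit flips (X), sign flips (Z), and their product (Y) corrects this error too.

```python
import numpy as np

# Pauli basis for 2x2 complex operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coeffs(E):
    """Decompose a 2x2 operator E as a*I + b*X + c*Y + d*Z."""
    return [np.trace(P.conj().T @ E) / 2 for P in (I, X, Y, Z)]

# An arbitrary "small rotation" error: not a clean bit flip or sign flip
theta = 0.1
E = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

coeffs = pauli_coeffs(E)
reconstructed = sum(c * P for c, P in zip(coeffs, (I, X, Y, Z)))
assert np.allclose(reconstructed, E)  # E is exactly a linear combination of Paulis
```

Because the Paulis span all 2x2 operators, any continuous single-qubit error is a superposition of correctable errors, and the correction procedure collapses it onto one of them.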
Decoherence here pretty much just means "enough individual errors that you can't recover from them anymore". Theoretically yes, it is quite clear that you can protect against this. Practically doing so at the scale you need for a real, useful computer is very hard.
Quantum error correction corrects more than just bit flips. It corrects phase flips, accidental measurement, qubit erasure, small global rotations... lots of stuff.
I don't know exactly what you have in mind as "loss of entanglement"... But quantum error correction will very much preserve entanglement at the logical level against decay of entanglement at the physical level.
I'm a bit biased, but non-superconducting modalities have considerably better coherence-time properties. Neutral-atom, for example, is on the order of seconds. It has other constraining issues, but in theory the coherence times are better.
IonQ's roadmap has them producing 64 logical qubits with error correction within 2025. Whether or not they succeed, I think time will tell, as they've stopped publishing and a cofounder stepped down to return to teaching and research.
Quantinuum, the Honeywell merger, expects billions in revenue in 2026. Honeywell gave up on transmons and merged its quantum division with Cambridge Quantum alongside a $260M investment.
Speaking of transmons, Rigetti is facing delisting.
IBM is still going hard on transmons and selling mainly to universities.
Google has Sycamore, but we're still waiting to see if they've solved the cascading errors inherent to scaling.
Microsoft invested heavily in Majorana fermions for compute, which might not even exist. They are not an authority on the physics.
IBM's quantum roadmap[0] also looks... curious. There's some modest scaling until 2028. Then there are sudden orders-of-magnitude jumps in the following years. Perhaps that's backed by technology they have in the labs and plan to have ready by then, but without details it looks too good to be true.
It means that they are willing to claim to the public that there is a business case for their technology providing billions in value in that year. They do not have the bookings.
That's just execs tossing out big numbers about a market they don't understand. They do that all the time. Like babies, they cannot help but fill diapers. That doesn't mean we need to discuss it at the water cooler.
Do they have hardware? Is it available? How many nines? Do they have a useful software stack? Does it give developers meaningful leverage to solve problems? The most incredible thing about quantum hype is that ordinarily detail-focused engineer types are getting snowed and buying it because the punchline is too good to be true. But sometimes, these companies make testable claims: if we'd only talk about the externally verifiable ones, and leave the vapor out of it, I think it would make for a much more interesting conversation.
But more specifically they have customers with use cases studied in detail, where a modest quantum advantage becomes highly profitable, so all they need to do now is ship and the bookings will come in.
Sorry, but that's exactly the kool-aid swilling nonsense I'm talking about. Did you notice that your "all they need" is literally a quantum-computer shaped hole in the plan? You're curiously bullish on exactly one brand of vaporware. Why?
Quantum has a level of bullshit where an enterprise could tell a customer their systems will solve NP-hard problems outright. Clearly we don't have complexity theory that supports that: we can only solve BQP problems better with them.
The industry has customers with use cases proven for BQP on restricted quantum computers, ready to go. The linked article leans toward saying that's not going to be true, but the companies out there building contradict this and say their customers will be ready to pull the trigger on billions in quantum compute in the next 2-3 years at most.
It could be another Tesla, with self-driving perpetually 2-4 years out. But look at Waymo: clearly tech changes happen.
What people are accomplishing today certainly seems closer to reality. But nobody has really shown us just yet.
As for "one brand of vaporware", I'm cynical on Quantinuum. If their tech were strong they would have raised without Honeywell.
> their customers will be ready to pull the trigger on billions in quantum compute in the next 2-3 years at most
Right. I'm sure these customers would be willing to spend billions of dollars on a working quantum computer that solves a business problem for them in the next 2-3 years. What GP is pointing out, however, is that statement presupposes that said quantum computer will actually exist.
I would bet money that no such machine will be available for purchase in 2-3 years. What will exist in 2-3 years are more press releases about new QCs with even larger numbers of noisy physical qubits that still don't amount to a single fully error corrected logical qubit that can factor 35 without resorting to tricks like precompilation. Along with more press releases proclaiming loudly that commercial QCs are a mere 2-3 years away.
I don't think it's just skeptics. AFAIK most researchers (other than those trying to attract investment) have been very clear that we are still a ways off from practical quantum computing, and even then the applications of QC are limited to specific domains.
I agree with this. Unimaginable hype was mostly from those with an obvious incentive to raise money, to put themselves on a shortlist for a Nobel, and all that. Most of the ground-floor researchers and experimentalists have been extraordinarily realistic, though sometimes passive or quiet since there's an incentive to not bite the hand that feeds them.
It also doesn't help that the giant PR machines of IBM, Google, IonQ, et al. have been commanding the narrative. At this point you'd think transmons, ions, and atoms are the only commercially/at-scale interesting options. :)
Some of the hype is generated in a quasi-faceless fashion, such as the disaster of a Nature [1] paper by Google, and the surrounding PR claiming they constructed a wormhole with a quantum computer.
Or how about Leo Kouwenhoven [2], supported by Microsoft's quantum group, and the retracted Nature papers after their hand was forced?
Witness any number of talks at past Q2B conferences to see the audacious claims in applications (healthcare, logistics, ...), chip improvements, etc.
[1] "Nature" is a prestigious journal that is a prime target for quantum groups to publish in and consequently milk for PR.
[2] "Unimaginable hype was mostly from those with an obvious incentive to raise money, to put themselves on a shortlist for a Nobel"
is not evidenced by
"It claimed to have found evidence of Majorana particles, long-theorized but never conclusively detected... repeating the experiment revealed a miscalibration error that skewed the original... “We apologize to the community for insufficient scientific rigor in our original manuscript.”"
The Majorana particle stuff has a long history: the paper was retracted not due to a simple scientific error, but to alleged intentional misrepresentation or negligence.
TL;DR: Still unclear what either example has to do with the claims.
[1] Majorana particle
A. Zenodo link: the authors use 'manipulating' to mean 'presentation of data', you are using it to mean 'made up data to unimaginably hype [the idea] quantum computing [applications will be ready in the short term] in order to get short-listed for a Nobel'.
B. The only thing tying this to QC is anything you've said is the idea MS would be excited by the paper. That's it. Nothing else tying it to QC or QC applications or hype.
C. If they purposefully manipulated the experiment to falsify data, they must be some of the stupidest people on earth: you don't get a Nobel without reproduction. Are you sure they did? No one else seems to be claiming any of that.
[2] Google wormhole
A. Source: "Last fall, a team of physicists announced that they had teleported a qubit through a holographic wormhole in a quantum computer. Now another group suggests that’s not quite what happened."
B. Claim: 'made up data to unimaginably hype [the idea] quantum computing [applications will be ready in the short term] in order to get short-listed for a Nobel'
C. QED: claim unrelated to source
[3] Follow up question from me
Are you comfortable making up people, claims about them and their actions, without sourcing?
If so, I find that interesting, because you seem very, very, very, dedicated to the idea that researchers who ever make a mistake are doing it intentionally for attention and rewards. That's not how research works. We applaud retractions.
We already have options that are quantum-resistant (just not as well tested as we would like). There will probably be a short intermediate period where QC can break, say, 1024-bit RSA keys but not 4096-bit ones. The moment that happens, everyone will switch to other algos very quickly.
It's not like it's all that different from the situation with MD5 or the current situation with SHA-1. The world survived. A few people got hacked... but that was very much the exception, and mostly software migrated to new algos.
Personally, I think quantum simulation is a much more interesting application than factoring.
Without a global government or geopolitically stable world, there's no way to unwind it. All countries have different resources and needs and views, so nobody will disarm. Only MAD.
I'd like to double down on my previous statement: if cryptography becomes impossible due to some fundamental property of the universe, then there's nothing to be done. We're where we are today because of a fluke, and that's it. Once the dam bursts, it's over.
Maybe that happens, maybe that doesn't. Maybe there's something else we can base our secrets on.
Not sure I follow your point re global government, so I’ll return to your double down.
If there's a fundamental property of the universe that disallows public-key cryptography then it's a fact of life. What baffles me is someone's desire to reach that state sooner rather than later, given how much current global civilization depends on it.
In the same vein, global warming is a fact of life, since in a few billion years the sun will engulf the earth. Yet I've yet to hear a sentiment describing that as a desirable state of affairs, because "the free energy alone would be worth its weight in gold".
General purpose QC isn't going to happen all that soon, and may always be faced with scaling issues.
But we will see QC in a very specific application before the decade is out.
It's kind of a perfect marriage between ML and QC. If you want stochastic outputs and aren't going to know what's going on at each step in the network anyways, the issues of error correction around measurement aren't nearly as crippling as they are for general purpose QC.
While initially the work will be converting from classical to optoelectronic hardware for matching current operations, such as MIT's work this year, once we've seen greater availability of optoelectronic hardware I suspect we'll see algorithms for ML developed that would only work in photonic networks and fully exploit the quantum properties therein.
I am a researcher in quantum information and this seems highly unlikely to me.
Quantum computers, as far as I can tell, get significant improvements over classical computers only for quite specific highly mathematically structured problems like the discrete log problem or simulating other quantum systems.
I don't think it's likely that there will be substantial "quantum advantage" in the machine learning/AI area.
Don't greyed out comments mean someone is downvoting you? What's going on in this thread, this is the second comment of yours that's factually far better than the preceding one but greyed out.
I agree with you, quantum ai has only ever been people shouting "quantum" at black box neural nets, as if that had ever produced a viable algorithm yet...
I'm not a researcher here, but I'm pretty sure Bayesian neural networks are realizable with quantum computers, which is a substantial "quantum advantage".
It's not really any different from the current state of models, it's just that moving the same constraints to a different hardware foundation opens up significant performance gains.
"While initially the work will be converting from classical to optoelectronic hardware for matching current operations, such as MIT's work this year,"
I'm pretty sure you're talking about Dirk Englund's photonic matrix multiplication. Also, you should do better at explaining what you mean, because if I weren't already very familiar with this I would have no clue what you're talking about.
What do you mean by "no"? Do you mean, not that paper (that neither of us linked because there are probably several on matrix multiplication) but instead this other one that also has optics and ml and Dirk? I'm seeing a pattern of you understanding what you write, but it being impossible to decipher on my end.
I got curious about quantum computers so I ordered and read (most of) Quantum Computation and Quantum Information: 10th Anniversary Edition.
It pretty much proved to me that quantum computing is at best 100+ years away or at worst a pipe dream fantasy. Until decoherence at scale is solved there is no way quantum computing will be useful beyond current computing abilities. There is not even a hint that this problem will be solved anytime soon.
I co-authored that book (written in the late 1990s). I don't know when quantum computing will happen - I only follow developments very, very loosely now - but think you're being much too pessimistic. It is a very (very!) challenging problem, and it's still early days, but there's also been steady and impressive progress for many years.
How do people know that real-world QC is even possible?
I understand enough of nuclear physics and quantum physics to see that fusion is mostly a technical/engineering problem while QC is widely speculative.
The biggest reason I think its possible is that there are a lot of different ways you could build a quantum computer. Different groups are exploring building physical qubit systems out of transmons in superconductors, topological systems, linear optics, trapped ions, quantum dots, NMR driven spins, cavity QED and probably many other setups I don't know about.
It would be weird (and scientifically quite interesting) if all of these approaches fail for some reason.
You may well be correct in that bet. I lean that way myself more than most of my colleagues. On the other hand your reasoning is quite wrong, entanglement is both very well understood and fairly robust. Scale and gate fidelity are significant problems in building quantum computers, entanglement isn't really (by this I mean that building good entangling gates is hard, but the entanglement once you've made it is ok).
The theory (which is just quantum mechanics) matches the reality better than any other known physical theory ever has. It is certainly not "just literature". Of course it is wrong in some sense, since it doesn't appear to cover gravity, but it is also right in some pretty meaningful sense.
QC is on just as solid theoretical footing as fusion. It is mostly a technical/engineering/materials problem, with plenty of room for clever physics to make it easier.
I don't see how you can read and understand Nielsen and Chuang in one sitting, unless you are already a quantum computation theorist. I also don't see how reading what is essentially an algorithms textbook can lead you to an informed opinion about the state of quantum computer engineering…
it's like saying "I was curious about how computer software works, so I ordered and read CLRS, and I don't think faster computers are anywhere on the horizon in 100 years…"
It was not one sitting... That's a textbook; it took a while, probably over a year. I was also learning some of the math involved in another course along the way. Also, my SO is a physicist, so I had some help.
The theory is great. The problem is that it all hinges on a scientific breakthrough that has not happened yet, and I don't see it happening soon. Just my not-totally-uneducated opinion; I have no horse in the race. I think the people claiming it will work "soon" are being a bit dishonest with themselves as well as with everyone else. For all we know it will end up taking several other scientific breakthroughs to get all the parts needed. I personally think that is the case, which is why I say it will not be in our lifetime.
Except published research demonstrates continual improvements in coherence time and implementations of error correction protocols. So "not even a hint" is at best hyperbolic or at worst just wrong.
But it is solved theoretically, by quantum error correction. I'm not denying there are plenty of problems but can you justify that decoherence is the limiting problem for architectures other than superconducting qubits?
To expand on this - I went into my first AI class in college thinking it'd be amazing. I left with a very bland taste, having solved n-queens a bunch of different ways and written genetic algorithms to "evolve" images.
I read Kurzweil before this. I had thought we were decades away from digital immortality. Taking the AI course and algorithm analysis was quite disappointing. Reality set in. Things are harder than we hand wave away.
I then started taking bio, read papers on neuroscientists decoding visual signals from mammalian LGN, and went deep down the biology rabbit hole. That only further convinced me that Kurzweil was wishfully wrong. Here are systems more complex than anything previously described to me in my entire life.
But now we're confronted with a pace of innovation that is frankly quite humbling. Things I had written off no longer seem impossible.
For all the people saying this is a scam/waste of money/always 10 years away, I’m curious what you envision the funding model for this kind of speculative tech to be. The government ceased to be the bankroll for this kind of stuff since the end of the Cold War, and for better or for worse, VC is where the money is right now. The marketing BS with stuff like quantum, fusion, carbon capture, etc. is simply a cost of doing business in this environment.
If it’s such a travesty that VCs are making bets on tech like this, how then do you fund long term R&D projects with a high risk of failure? Isn’t that the point?
there is general public funding for all kinds of science, including physics
I never got the intense interest in quantum computing. My pet theory is that (CS-educated) rich VCs were taken by the word 'computing' , but in reality QC is a theoretical sub-branch of quantum physics. I should test my theory by publishing my tomato computing theory.
Anything being done in the public sector (e.g. academia or government) is pocket change by comparison, and so bound up with red tape these days that most truly high risk/reward bets are passed over. That, and you’re at the mercy of congressional budget cycles and the usual grant writing hunger games. If innovation emerges from that environment, it’s in spite of it, not because of it.
If private capital wants to fund deep tech, more power to them. If they want to forgo due diligence and fund tomato computers, that's (literally) their business.
Moore's law was enabled by lithography, which offered a pretty clear path forward on how to shrink transistors over time. AFAIK there is no similar enabling technology for qubits. Maybe that will change, but until then I don't really see much hope for quantum computers' future.
I get the impression that cavity-QED is a similar path for QC that lithography was for classical compute. But I'm not at all an expert, just a curious and interested observer.
You only need a linear increase in number of error-corrected qubits to get exponential gains on the limited subset of problems they can improve, whereas Moore's law has provided an exponential increase in the number of elements for classical computers.
So, Moore's law isn't needed for something similar to Moore's law gains in quantum computing, a linear Moore's law will do if quantum error correction doesn't face scaling laws for the whole ensemble.
That might mean they only keep pace with classical computing if Moore's law continues to hold there, but eventually that hits the Landauer limit without new physics (or possibly reversible computing).
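The "linear qubits, exponential capability" point is just state-vector arithmetic. A back-of-envelope sketch (assuming the usual 16 bytes per complex128 amplitude that a classical state-vector simulator needs):

```python
# Memory needed to hold a full n-qubit state vector classically:
# 2**n complex amplitudes at 16 bytes each (complex128).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
# 30 qubits -> 16 GiB; 40 qubits -> 16,384 GiB; 50 qubits -> 16,777,216 GiB
```

Each added qubit doubles the classical simulation cost, which is why a merely linear growth in (error-corrected) qubits is enough to outrun classical hardware on the problems that qualify.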
IMO this is very wrong. Most real applications of quantum computers will need thousands of error-corrected qubits (i.e. millions of physical qubits). At the current growth rate of <100 qubits per year, if growth were linear it would be about 10,000 years before we had quantum computers solving real problems.
I've yet to see a single business case of money entering the QC value chain outside of (a) other QC businesses and (b) moonshot investments.
After ~15 years of literally the world's smartest people trying to come up with exactly that (some of whom are dear friends), the only conscionable way I can regard QC is as a vapor bubble.
Quantum computing may be the "cold fusion" of the 2000's.
In 2000, it was a fantastic looking technology that looked like it was going to leapfrog classical computing in a whole lot of ways. Now, classical computing has gotten so fast that (for example) O(sqrt(n)) searching of an in-memory structure is just not that exciting - O(n) is totally fine with a 100 GB dataset for many cases, and loading your database into your quantum computer would be O(n) anyway. Ideas for quantum machine learning have been supplanted by LLMs, and Ising optimization machines have already failed in the free market compared to a lot of classical computers.
The remaining problems of interest are quantum simulation and encryption cracking, both of which are relatively niche markets.
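To put numbers on the Grover-vs-linear-scan point above (back-of-envelope only; the dataset size and the idea of one entry per byte are illustrative assumptions):

```python
import math

n = 100 * 10**9  # ~100 billion entries, roughly a 100 GB dataset

classical_queries = n           # linear scan, worst case
grover_queries = math.isqrt(n)  # ~ (pi/4) * sqrt(n) oracle calls, order sqrt(n)

print(f"classical ~{classical_queries:.1e} queries, Grover ~{grover_queries:.1e}")
# sqrt(1e11) is ~3.2e5: a big asymptotic win on paper, but each Grover "query"
# is a slow, error-corrected quantum oracle call, while a classical scan runs
# at memory bandwidth - and loading the data is O(n) either way.
```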
If the encryption cracking market gets off the ground, would a quantum encryption hardening market come into existence, or can classical encryption algorithms suffice?
There would be only minimal impact from quantum computers on the bulk (symmetric) encryption algorithms (i.e. the 128-bit keys used today when speed is prioritized over security would become deprecated).
Where changes would be needed is in digital signatures and certificates and in the key exchange algorithms for the establishment of communication connections in the public Internet, where pre-shared secret keys are not used.
Many algorithms have been studied, but they are significantly less efficient than those used today.
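The symmetric-key side of this is just Grover's square-root speedup. A trivial sketch of the usual rule of thumb (effective bit-security, not a statement about any concrete attack):

```python
# Grover's search reduces brute-forcing a k-bit symmetric key
# from ~2**k trials to ~2**(k/2) quantum oracle calls,
# halving the effective security level.
def effective_bits(key_bits: int) -> int:
    return key_bits // 2

for k in (128, 256):
    print(f"{k}-bit key: ~{effective_bits(k)}-bit quantum security")
# 128-bit keys fall to ~64-bit security (hence "deprecated");
# 256-bit keys retain a comfortable ~128 bits.
```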
> Many algorithms have been studied, but they are significantly less efficient than those used today.
Actually, several post-quantum algorithms are considerably faster than current algorithms. But they have much larger ciphertexts, signatures, and public keys.
Feels like me with Bitcoin. I discovered it "early" but got put off, as it sounded too complex for what it did. That wisdom probably cost me millions, lol! Though I probably would have lost the drive, as so many did!
I was recently learning about Shor's algorithm, which was kind of mind-blowing until I found out it can't actually be executed on anything right now. We're still in the super early days. Like where microcomputers were 40-50 years ago
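The classical scaffolding around Shor's algorithm is simple enough to run today; only the order-finding step needs a quantum computer. A toy sketch (with the quantum step replaced by brute force, so it only works for tiny N):

```python
from math import gcd

def order(a, n):
    """Brute-force the multiplicative order of a mod n.
    This is the step a quantum computer would do in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical wrapper of Shor's algorithm for one chosen base a."""
    if gcd(a, n) != 1:
        return gcd(a, n)   # lucky guess: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None        # odd order: retry with another a
    y = pow(a, r // 2, n)  # square root of 1 mod n
    if y == n - 1:
        return None        # trivial root: retry with another a
    return gcd(y - 1, n)   # nontrivial root gives a factor

print(shor_classical(15, 7))  # order of 7 mod 15 is 4; gcd(7**2 - 1, 15) = 3
```

The brute-force `order` loop is exponential in the bit-length of n, which is exactly the part the quantum period-finding subroutine replaces.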
> We’re still in the super early days. Like where microcomputers were 40-50 years ago
There's a trap in that sort of thinking. It assumes that it's inevitable we will at some point arrive at the equivalent of the current day for quantum computers, that technology is a straight path in whatever direction we set out. I don't think that is a given. There are plenty of things out there that physics just can't do. Things that, no matter how much effort and thought and research we put into them, we just can't make the world do in a scalable, affordable fashion. I don't have the knowledge to make a believable claim that quantum computers are one of those, but you have to consider that they might be.
It's possible that quantum computers aren't at the stage of the 8086, but rather at the stage of the flying cars in The Jetsons: doomed to forever be an unrealistic dream.
Progress only looks like a straight shot in hindsight. Yes, we can take any invention today and trace every step back to the previous to make a single unbroken chain to the invention of the wheel.
But for every airplane, there's a Bored Ape Yacht Club NFT.
Related: "Quantum computing worst case scenario: we are Lovelace and Babbage" [1]
For scale: Babbage's planned analytical engine had a word size of 50 digits, a clock rate of 7Hz, and a physical size of roughly a locomotive [2]. Contrast [3] where it's estimated that 600-digit superposed additions would run at 27Hz (by dedicating millions of qubits to magic state distillation of the underlying AND gates). Given current plans, a quantum computer capable of doing arithmetic operations as wide and as fast as the analytical engine would probably be larger than the analytical engine.
We can see how to do reliable quantum computation in principle. The overhead of error correction makes it daunting in scale. It sure would be nice if someone came along and invented the quantum computing equivalent of a vacuum tube or a transistor.
Last I looked, expert consensus on factoring RSA-2048 was about 15-20 years out, but with a wide divergence of opinion. You might find a less optimistic take today.
The first microprocessor, the MP944 (1970), had 5,360 transistors (73k with memory).
A quantum computer with 1,123 qubits was released just this month. However, it doesn't work properly yet, because large systems don't behave as clean quantum systems, so the error rate is high. A large quantum computer today is like an ENIAC during an earthquake.
Can someone explain this to me? I don't know, but I have this idea that ideally a quantum computer is like artificial neural network hardware which can change its connections in real time.
I just cannot help seeing how cubits are very much like the probabilities that come out of ML, something between 0..1.
Am I completely lost here? I'm curious to understand better :)
PS. Is it possible that the brain is an organic quantum computer?
Also: do you see it as possible that our senses and awareness are "prompting" the models we have built during life, and thoughts are what comes back from the prompts?
Pretty lost yeah, but you're asking questions which is better than 90% of the people who are lost. Some very brief answers:
They're called qubits; a cubit is a length about 50cm. Pronounced the same though.
Quantum computers are not about being able to change connections in real time. We know how powerful such a computer could be (it can be simulated by a classical computer in polynomial time), and quantum computers are more powerful than that. (Some nitpicker is going to swoop by and say that technically we haven't proven that BQP != P, but if BQP==P then quantum computers are useless anyways.)
Quantum amplitudes are not just probabilities: they can be imaginary, and they can be negative. You can add up two non-zero amplitudes and get 0, which is very much not how probabilities work. You can't have a non-zero probability that A happens and a non-zero probability that B happens but a zero probability that A OR B happens, yet with quantum mechanics you get exactly that.
Whatever intuition you have for how quantum computers works, it's wrong.
Plenty of people have considered that the brain is doing quantum computations. It seems unlikely because the brain is large, wet, and hot, and quantum mechanical systems really like to decohere under those circumstances (breaking the computation). But Roger Penrose still thinks they are.
The main takeaway I'd like to convey is that quantum computers are vastly better than classical computers for a few very specialized tasks like taking discrete logarithms and factoring, and no better than classical computers for most everything else. (Vastly better meaning exponentially better: it takes a classical computer with roughly 2^1000 bits of memory to simulate a quantum computer with 1000 qubits.)
I'm pretty certain QC development will continue, even if at a slightly slower rate. And maybe AI can help with manufacturing more reliable and cheaper versions :-) What I think has not been investigated so much is what can be done with a not-so-reliable quantum computer. At the start of computing, Colossus was not ultra-reliable, but that didn't matter: it just cycled around trying different possibilities on codes, so failing at the task every so often wouldn't have made much difference.
One of the big challenges is being able to frame real-world, concrete problems in terms that are solvable by quantum algorithms. There is a shocking lack of examples out there.
Even when the hardware is there, it isn’t clear how quantum computing can do useful things. But I suspect this is a solvable problem of good communications. But right now, experts in quantum don’t seem to be able to provide these examples. Or they don’t exist.
Do you see any examples of real-world problems in there? Mostly, it is very abstract. This creates a problem if it is unclear how to use quantum computing to address real-world information processing problems.
I think this is partly an exercise in defining what "real world" means. To many, solving number theory problems, especially factorization, is real world enough. It's just not Silicon Valley monetizable.
Quantum computing is definitely overhyped right now. In my eyes, I don't see how this fundamentally changes computing. AI, for example, literally lets you do things that weren't possible before. This just lets you store more data and manipulate it faster. It's not a shift in the core framework of the technology.
Like Bletchley Park during the last world war, the best-case scenario for intelligence is that the government has quantum computers that break public-key encryption and nobody finds out. It’s hard to know what machines exist behind the scenes.
If you mean quantum computers can't fundamentally work: you can publish a peer-reviewed paper about why it wouldn't work and the field would be immensely interested.
Kalai and Dyakonov are popular in sharing their opinions and speculations on this matter, but they haven't managed to make a convincing scientific argument that controlling a large scale quantum system is intractable or infeasible. That doesn't mean they're wrong, but that they too would need to engage in quantum computing as researchers doing science to be convincing.
Generally, fields of academic study are not interested in accepting publications on why their field is unnecessary, even when you come at them with evidence.
What the OP suggested was that "it is a scam". It clearly isn't, there is nothing in QC that isn't traditional QM which is a 100 year old incredibly well-tested theory. As far as we currently know, it's an engineering problem, not a question of "will it work". If there is evidence to the contrary, I'm sure any journal would be happy to publish it (that would make it to Science or Nature, not a quantum-computing specific journal).
source: https://www.tomshardware.com/tech-industry/quantum-computing...