In case anyone is wondering, this doesn't 'solve graph problems' with quantum magic any faster than a normal laptop solves them. The preprint is on arxiv: https://arxiv.org/abs/2302.00936v2
but I can't see any supplemental materials on arxiv, including any code, even though the paper cites them ("[26] See Supplemental Materials.").
Title and subtitle of the Synopsis on physics.aps.org:
> "Large Photonic Processor Solves Graph Problems: A quantum photonic device can perform some real-world tasks more efficiently than classical computers."
What the paper itself says (it's a "Letter", not a normal paper with sections like introduction, methods, discussion, conclusions, etc.):
> "It is an open question, however, whether the GBS can yield advantage compared to improved classical algorithms and quantum-inspired algorithm."
These are the two problems they solve stochastically:
The Max-Haf problem is, for a complex-valued matrix B of any dimension, to find a submatrix B_S of fixed even dimension k = 2m with the largest Hafnian in absolute value squared. The Hafnian was originally introduced in interacting quantum field theory and plays a variety of roles in physics and chemistry [8, 30–36]. When the matrix is an adjacency matrix composed of 0s and 1s, the Hafnian can be interpreted as the number of perfect matchings of the graph [11].
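To pin down that objective, here's a brute-force sketch in Python: the Hafnian as a sum over perfect matchings, and Max-Haf as an exhaustive search over k-subsets. This is purely illustrative (exponential time) and emphatically not how the paper or any serious solver computes it:

```python
from itertools import combinations

def hafnian(B):
    # Sum over all perfect matchings of the product of matched entries.
    # Exponential time; illustrates the definition only.
    n = len(B)
    if n == 0:
        return 1
    if n % 2:
        return 0
    rest = list(range(1, n))
    total = 0
    for idx, j in enumerate(rest):
        # Match index 0 with j, then recurse on the remaining indices.
        remaining = rest[:idx] + rest[idx + 1:]
        sub = [[B[r][c] for c in remaining] for r in remaining]
        total += B[0][j] * hafnian(sub)
    return total

def max_haf(B, k):
    # Max-Haf: the k-subset S (k even) whose submatrix B_S maximizes |Haf(B_S)|^2.
    return max(combinations(range(len(B)), k),
               key=lambda S: abs(hafnian([[B[i][j] for j in S] for i in S])) ** 2)
```

Sanity check: for the adjacency matrix of K4 (all off-diagonal entries 1), hafnian returns 3, which is indeed the number of perfect matchings of that graph.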
The dense k-subgraph problem is, for an n-vertex graph G with adjacency matrix Δ, to find its subgraph G_S of k < n vertices with the largest density

$$W(G_S) = \sum_{i,j=1}^{k} (\Delta_S)_{i,j}, \qquad (4)$$

where Δ_S is the adjacency matrix of G_S. The dense k-subgraph problem is of fundamental interest in both mathematics [37] and applied fields like data mining, bioinformatics, finance and network analysis [38–45].
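Again just to make the objective concrete, a sketch of Eq. (4) and the exact-but-exponential brute-force baseline in Python (function names are mine, not the paper's):

```python
from itertools import combinations

def density(Delta, S):
    # W(G_S): sum of the adjacency-matrix entries restricted to vertex set S (Eq. 4).
    return sum(Delta[i][j] for i in S for j in S)

def dense_k_subgraph_bruteforce(Delta, k):
    # Exact but exponential: score every k-subset of vertices.
    return max(combinations(range(len(Delta)), k),
               key=lambda S: density(Delta, S))
```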
Their use of a 144-node fully-connected photonic processor means they're limited to graphs of at most 144 nodes...
Dense k-Subgraph, eh? I wrote a dissertation on that in a past life.
From a quick read of the paper, it looks like the only classical algorithms they compare to are greedy, random search, and simulated annealing (which is more or less random plus greedy). For random or semi-random instances like those described in this writeup, there are oftentimes substantially better classical algorithms (both random and deterministic) that one can use to try to find dense components (including, ironically, so-called "spectral" algorithms which are entirely classical).
They don't provide the specific parameter settings, so I can't point to a single definitively better method, but I'm relatively unimpressed by the finding that a fairly heavily engineered approach can defeat a handful of braindead classical algorithms.
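For concreteness, here's the flavor of "spectral" classical baseline the parent is alluding to, sketched in Python with NumPy. The function name is mine, the paper doesn't use this, and a real implementation would add local search on top of the seed set:

```python
import numpy as np

def spectral_k_subgraph(Delta, k):
    # Take the k vertices with the largest entries in the leading eigenvector
    # of the adjacency matrix -- a simple, entirely classical heuristic for
    # seeding a dense component.
    A = np.asarray(Delta, dtype=float)
    _, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])         # eigenvector of the largest eigenvalue
    return sorted(np.argsort(v)[-k:])  # indices of the k largest coordinates
```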
This is interesting to me because it advances work on quantum approaches to graph problems.
I'm sure we've all heard how quantum computers could one day decrypt information harvested today. There's a lot of research on how QC may be able to efficiently factor large semiprimes and break our existing cryptographic algorithms, but to me that's the more mundane side of QC.
The exciting side to me is that many graph problems, particularly whole-graph problems like connectivity and shortest paths, have a potential quantum advantage. This is especially relevant for sparse and hypersparse graphs that have billions of nodes but relatively low node degree. Language models, chemical assay databases, proteomics, causal inference, and fraud detection are just a few domains that involve huge sparse graphs and could get a huge boost from quantum.
And to show my own bias here [1], I think the future of graph algorithms, including quantum ones, is expressing them in linear-algebraic form with the GraphBLAS API. Using the GraphBLAS, you write your algorithm in a mathematical form as operations over adjacency matrices, and it is then synthesized into an optimized form for a given architecture.
The same code you write can then run on a variety of backends, currently CPUs and CUDA via SuiteSparse's new JIT, and soon FPGAs and, yes, quantum computers. Parallelism will become so broad and conceptually divergent that you won't be able to conceive of a single efficient hand-written function for all possible platforms.
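To illustrate the idea (not the actual GraphBLAS API), here's BFS written as repeated matrix-vector products in plain dense NumPy; A is assumed to be a 0/1 adjacency matrix, and the function name is mine. A real GraphBLAS backend expresses the same loop with sparse matrices, semiring objects, and masks, which is what lets it retarget to different hardware:

```python
import numpy as np

def bfs_levels(A, source):
    # BFS as linear algebra: each step "multiplies" the frontier vector by the
    # adjacency matrix over a Boolean-like semiring, then masks out visited nodes.
    n = A.shape[0]
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    visited = frontier.copy()
    level = np.full(n, -1)
    level[source] = 0
    d = 0
    while frontier.any():
        d += 1
        # One hop: v joins the next frontier iff it has an in-edge from the frontier.
        frontier = ((A.T @ frontier.astype(int)) > 0) & ~visited
        level[frontier] = d
        visited |= frontier
    return level  # BFS depth per node, -1 if unreachable
```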
FWIW, while existing graph algorithms can't solve these problems optimally, they do get quite close using heuristics. It's unclear how much benefit something like this would have for online (i.e. latency-sensitive) applications such as internet routing if the time constant to set up the QC, execute, and get the result back is higher than that of a heuristic approach. My hunch is that it won't be useful there for a long time, if ever.
That being said, for FPGA/ASIC layout and similar problems, it's possible that QC is worth it as an extra optimization level, even if slower in absolute terms, to eke out that last bit of optimality (an LTO equivalent, so to speak). It may also be useful for evaluating how well existing heuristics perform against the true optimum, to see whether there are improvements to be made to the heuristic algorithms or to the heuristics themselves.
Yes, that's true: there are efficient heuristic graph algorithms for many problems. The travelling salesman problem, for example, has several good heuristics that give you a short tour, though perhaps not the exact shortest one.
But for problems like connected components there is only one right answer: a component is connected or it is not, so there is no useful heuristic solution. The same goes for shortest paths to all destinations. Yes, A* is a heuristic for finding an optimally short path from a specific source to a specific destination, but to get shortest paths to all destinations you have to visit all the nodes anyway, so there is only one useful solution.
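That point is easy to see in code: exact connected components is a near-linear-time classical computation, so there's no gap for a heuristic (or a quantum sampler) to close. A minimal union-find sketch:

```python
def connected_components(n, edges):
    # Exact components via union-find: a vertex either is or isn't in a
    # component, so there is nothing to approximate.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    return [find(x) for x in range(n)]  # component label per vertex
```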
"By treating each of the processor’s output ports as a graph vertex and each detected photon as a subgraph vertex, they determined which subgraph mapped to the solution. Their processor arrived at a solution after obtaining 221,891 samples, each of which corresponded to a particular distribution of up to 80 detected photons. Each sample would require 700 seconds on the world’s fastest supercomputer using an exact algorithm."
okay but how long did the sampling take the photonic computer?
This is a meta question, but I made a comment here twelve hours ago, and now it's showing the original post was from only 3 hours ago and that my comment was only 2 hours ago. What's going on with that? You can also see if you look at my comment profile history when I made that comment twelve hours ago it has the time correctly marked there but not in this thread.
oh that's interesting, how do posts get queued there?
EDIT: i see they are picked manually "HN's second-chance pool is a way to give links a second chance at the front page. Moderators and a small number of reviewers go through old submissions looking for articles that are in the spirit of the site—gratifying intellectual curiosity—and which seem like they might interest the community."