I wrote most of this list to procrastinate on the flight back from TQC (which was great!). So, for my own reference: here are some settings where efficient state preparation / data loading is possible, along with classical versions of these protocols. Notes:
- There might be errors, especially in details of the quantum protocols, and some of the algorithms may be suboptimal (note the streaming setting, in particular). Let me know if you notice either of these.
- Some relevant complexity research here is in QSampling (Section 4).
- All these runtimes should have an extra $O(\log n)$ factor, since we assume that indices and entries take $O(\log n)$ bits/qubits to specify. However, I’m going to follow the convention from classical computing and ignore these factors, hopefully with little resulting confusion.
For all that follows, we are given $v \in \mathbb{C}^n$ in some way and want to output
- for the quantum case, a copy of the state $|v\rangle := \frac{1}{\|v\|}\sum_{i=1}^n v_i |i\rangle$, and
- for the classical case, the pair $(i, v_i)$ output with probability $\frac{|v_i|^2}{\|v\|^2}$.
You could think about this as strong quantum simulation of state preparation protocols.
| | sparse | bounded $\|v\|_\infty$ | integrable | data structure | streaming |
|---|---|---|---|---|---|
| quantum | $O(s)$ | $O(\frac{\sqrt{n}\,\|v\|_\infty}{\|v\|})$ | $O(\log n)$ | $O(\log n)$ depth | $O(\log n)$ space with 2 passes |
| classical | $O(s)$ | $O(\frac{n\|v\|_\infty^2}{\|v\|^2})$ | $O(\log n)$ | $O(\log n)$ | $O(\log n)$ space with 1 pass |
Recall that if we want to prepare an arbitrary quantum state, we need at least $\Omega(\sqrt{n})$ time by search lower bounds, so for some settings of the above constants, these protocols are exponentially faster than the naive strategy. Further recall that state preparation and sampling both have easy protocols running in $O(n)$ time.
$v$ is sparse

We assume that $v$ has at most $s$ nonzero entries and we can access a list of the nonzero entries $(i_1, v_{i_1}), \dots, (i_s, v_{i_s})$. Thus, we have the oracle $|k\rangle|0\rangle \mapsto |k\rangle|i_k, v_{i_k}\rangle$.
We can prepare the quantum state and classical sample by preparing the $s$-dimensional vector $w$ where $w_k = v_{i_k}$, and then using the oracle to swap out the index $k$ with $i_k$. This gives classical and quantum $O(s)$ time.
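The classical side of this is just sampling from the length-$s$ list of nonzeros. Here is a minimal sketch in Python; the `nonzeros` list-of-pairs argument is my own stand-in for the oracle access described above:

```python
import random

def sample_sparse(nonzeros):
    """Sample an index i with probability |v_i|^2 / ||v||^2, given the list
    of nonzero entries [(i_1, v_{i_1}), ..., (i_s, v_{i_s})]. Runs in O(s) time."""
    norm_sq = sum(abs(val) ** 2 for _, val in nonzeros)
    r = random.uniform(0.0, norm_sq)
    acc = 0.0
    for idx, val in nonzeros:
        acc += abs(val) ** 2
        if r <= acc:
            return idx, val
    return nonzeros[-1]  # guard against floating-point rounding
```

For example, `sample_sparse([(2, 0.6), (7, 0.8)])` returns index 7 with probability $0.64$ and index 2 with probability $0.36$.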
$\|v\|_\infty$ is bounded

We assume that we know $\|v\|$ and a bound on $\|v\|_\infty$. Notice that we don’t give a lower bound on the size of entries, but we can’t have too many small entries, since this would lower the norm. Also notice that $\|v\| \le \sqrt{n}\,\|v\|_\infty$.
Quantumly, given the typical oracle $|i\rangle|0\rangle \mapsto |i\rangle|v_i\rangle$, we can prepare the state

$$\frac{1}{\sqrt{n}}\sum_{i=1}^n |i\rangle\left(\frac{v_i}{\|v\|_\infty}|0\rangle + \sqrt{1 - \frac{|v_i|^2}{\|v\|_\infty^2}}\,|1\rangle\right).$$

Measuring the ancilla and post-selecting on $0$ gives $|v\rangle$. This happens with probability $\frac{\|v\|^2}{n\|v\|_\infty^2}$, and with amplitude amplification this means we can get a copy of the state with constant probability in $O(\frac{\sqrt{n}\,\|v\|_\infty}{\|v\|})$ time.
Classically, we perform rejection sampling from the uniform distribution: pick an index $i$ uniformly at random, and keep it with probability $\frac{|v_i|^2}{\|v\|_\infty^2}$; otherwise, restart. This outputs the correct distribution and gives a sample in $O(\frac{n\|v\|_\infty^2}{\|v\|^2})$ expected time.
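A minimal sketch of this rejection sampler, with the vector and the $\ell_\infty$ bound passed as plain Python arguments standing in for the oracle access:

```python
import random

def rejection_sample(v, linf_bound):
    """Sample i with probability |v_i|^2 / ||v||^2 by rejection from the
    uniform distribution, assuming |v_i| <= linf_bound for all i.
    Uses O(n * linf_bound^2 / ||v||^2) proposals in expectation."""
    n = len(v)
    while True:
        i = random.randrange(n)  # uniform proposal
        # accept with probability |v_i|^2 / linf_bound^2, else restart
        if random.random() < abs(v[i]) ** 2 / linf_bound ** 2:
            return i
```

Note that a small $\|v\|$ relative to $\sqrt{n}\,\|v\|_\infty$ directly translates into many rejections, matching the expected-time bound above.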
$v$ is efficiently integrable
We assume that, given $a \le b$, I can compute $\sum_{i=a}^{b} |v_i|^2$ in $O(1)$ time. This assumption and the resulting quantum preparation routine come from Grover-Rudolph.
The quantum algorithm uses one core subroutine: adding an extra qubit, sending $|x\rangle \mapsto |x\rangle\big(\sqrt{p_{x,0}}\,|0\rangle + \sqrt{p_{x,1}}\,|1\rangle\big)$, where $p_{x,b}$ is the probability that an index drawn from the distribution $\frac{|v_i|^2}{\|v\|^2}$ has next bit $b$, conditioned on its first bits being $x$.
All that’s necessary is to apply it $\log n$ times and add the phases at the end. I haven’t worked it out, but I think you can run the subroutine efficiently using three calls to the integration oracle, giving $O(\log n)$ time.
Classically, we can do essentially the same thing: the integration oracle means that we can compute marginal probabilities; that is, the probability that a sample begins with a given prefix of bits is an interval sum $\sum_{i=a}^{b}\frac{|v_i|^2}{\|v\|^2}$.
Thus, we can sample from the distribution on the first bit, then sample from the distribution on the second bit conditioned on our value of the first bit, and so on. This also gives $O(\log n)$ time.
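Here is a sketch of that classical procedure, with the integration oracle modeled as a hypothetical `interval_sum(a, b)` callable returning $\sum_{i=a}^{b} |v_i|^2$; sampling the index bit by bit becomes a randomized binary search:

```python
import random

def sample_integrable(interval_sum, n):
    """Sample i in [0, n) with probability |v_i|^2 / ||v||^2, using
    O(log n) calls to the interval-sum oracle interval_sum(a, b)."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        left = interval_sum(lo, mid)   # mass of the "next bit = 0" half
        total = interval_sum(lo, hi)   # mass of the current prefix
        if random.random() < left / total:
            hi = mid                   # next bit of the index is 0
        else:
            lo = mid + 1               # next bit of the index is 1
    return lo
```

Each level of the search conditions on the bits chosen so far, exactly as described above.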
$v$ is stored in a dynamic data structure
We assume that our vector can be stored in a data structure that supports efficient updating of entries. Namely, we use the standard binary search tree data structure (see, for example, Section 2.2.2 of Prakash’s thesis). This is a simple data structure with many nice properties, including $O(\log n)$-time updates. If you want to prepare many states corresponding to similar vectors, this is a good option.
There’s not much more to say, since the protocol is the same as the integrability protocol. The only difference is that, instead of assuming that we can compute interval sums efficiently, we instead precompute and store all of the integration oracle calls we need for the state preparation procedure in a data structure.
The classical runtime is $O(\log n)$, and the quantum circuit takes $O(n)$ gates but only $O(\log n)$ depth. The quantum algorithm is larger because here, we need to query a linear number of memory cells, as opposed to the integrability assumption, where we only needed to run the integration oracle in superposition.
While it may seem that the classical algorithm wins definitively here, the small depth leaves potential for this protocol to run in $O(\log n)$ time in practice, matching the classical algorithm.
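For concreteness, here is a sketch of the classical data structure: a complete binary tree whose internal nodes cache subtree sums of $|v_i|^2$, giving $O(\log n)$ updates and $O(\log n)$ samples. The heap-layout array is my own implementation choice; any balanced tree with subtree sums works:

```python
import random

class SampleTree:
    """Binary tree over n = 2^k slots; each internal node stores the sum of
    |v_i|^2 in its subtree. Updates and samples both take O(log n) time."""

    def __init__(self, n):
        self.n = n
        self.sums = [0.0] * (2 * n)  # 1-indexed heap layout; leaves at n..2n-1
        self.vals = [0.0] * n

    def update(self, i, value):
        """Set v_i = value and refresh the sums on the leaf-to-root path."""
        self.vals[i] = value
        node = self.n + i
        self.sums[node] = abs(value) ** 2
        node //= 2
        while node >= 1:
            self.sums[node] = self.sums[2 * node] + self.sums[2 * node + 1]
            node //= 2

    def sample(self):
        """Return index i with probability |v_i|^2 / ||v||^2 by walking down
        the tree, branching proportionally to the cached subtree sums."""
        r = random.uniform(0.0, self.sums[1])
        node = 1
        while node < self.n:
            left = self.sums[2 * node]
            if r < left:
                node = 2 * node
            else:
                r -= left
                node = 2 * node + 1
        return node - self.n
```

The walk down the tree is exactly the bit-by-bit conditional sampling from the integrability section, with the cached sums playing the role of precomputed integration-oracle calls.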
$v$ is streamed

We assume that we can receive a stream of the entries of $v$ in order; we wish to produce a state/sample using as little space as possible.
Classically, we can do this with reservoir sampling. The idea is that we maintain a sample from all of the entries we’ve seen before, along with their squared norm $W$. Then, when we receive a new entry $v_t$, we swap our sample to $t$ with probability $\frac{|v_t|^2}{W + |v_t|^2}$ and update our $W$ to $W + |v_t|^2$. After we go through all of $v$’s entries, we get a sample using only $O(\log n)$ space. (This is a particularly nice algorithm for sampling from a vector, since it has good locality and can be generalized to get $k$ samples in $O(k\log n)$ space and one pass.)
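A sketch of this weighted reservoir sampler, taking the stream as a plain Python iterable:

```python
import random

def reservoir_sample(stream):
    """One-pass weighted reservoir sampling: after consuming the stream of
    entries v_1, v_2, ..., holds index t with probability |v_t|^2 / ||v||^2,
    using O(1) words of working memory."""
    sample_idx, sample_val, total = None, None, 0.0
    for t, v_t in enumerate(stream):
        w = abs(v_t) ** 2
        total += w  # running squared norm W of everything seen so far
        # swap the held sample to entry t with probability w / W
        if total > 0 and random.random() < w / total:
            sample_idx, sample_val = t, v_t
    return sample_idx, sample_val
```

An easy induction shows correctness: after step $t$, each seen index $j$ is held with probability $|v_j|^2 / W_t$, and the swap rule preserves this when $v_{t+1}$ arrives.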
Quantumly, I only know how to prepare a state in one pass with sublinear space if the norm $\|v\|$ is known. If you know $\|v\|$, then you can prepare $|1\rangle$, and as entries come in, rotate to get $\frac{v_1}{\|v\|}|1\rangle + \sqrt{1 - \frac{|v_1|^2}{\|v\|^2}}\,|2\rangle$, then $\frac{v_1}{\|v\|}|1\rangle + \frac{v_2}{\|v\|}|2\rangle + \sqrt{1 - \frac{|v_1|^2 + |v_2|^2}{\|v\|^2}}\,|3\rangle$, and so on. This uses only $O(\log n)$ qubits, which I notate here as $O(\log n)$ space.
You can relax this assumption to just having an estimate $Z$ of $\|v\|$ such that $\|v\| \le Z \le O(1)\cdot\|v\|$. Finally, if you like, you can remove the assumption that you know the norm just by requiring two passes instead of one; in the first pass, compute the norm, and in the second pass, prepare the state. But it’d be nice to remove the assumption entirely.
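Since the one-pass protocol is just a sequence of rotations, we can sanity-check it by tracking its amplitudes classically. This sketch assumes real entries and an exactly known norm, and is purely illustrative:

```python
import math

def simulate_stream_prep(v, norm):
    """Classically track the amplitudes produced by the one-pass protocol:
    start in |1>, and on receiving v_t, rotate so that |t> gets amplitude
    v_t / norm while the remainder is carried forward on |t+1>.
    Real entries only; indices here are 0-based."""
    n = len(v)
    amp = [0.0] * (n + 1)
    carried = 1.0  # amplitude currently sitting on the "unused" basis state
    for t, v_t in enumerate(v):
        amp[t] = v_t / norm
        carried = math.sqrt(max(carried ** 2 - (v_t / norm) ** 2, 0.0))
    amp[n] = carried  # leftover amplitude; zero when the norm is exact
    return amp
```

With an overestimate $Z > \|v\|$, the leftover amplitude on the final basis state is nonzero, which is exactly why the relaxed version needs post-selection or amplification.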