
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$

Grover's Algorithm Analysis

First created in August 2018

This analysis is based on the article Grover's Algorithm.

Why It Works

For a 2-qubit 4-item case with $\Ket{10}$ as marked state,

$\Ket{s}=\sqrt{\frac{1}{4}}\big(\Ket{00}+\Ket{01}+\Ket{10}+\Ket{11}\big).~~ \Ket{w}=\Ket{10}.~~ \Ket{s'}=\sqrt{\frac{1}{3}}\big(\Ket{00}+\Ket{01}+\Ket{11}\big).~~ \Ket{s}=\sqrt{\frac{3}{4}}\Ket{s'}+\sqrt{\frac{1}{4}}\Ket w .$

In Kou's notation, $\Ket{s}=\wr\big(\Ket{00}+\Ket{01}+\Ket{10}+\Ket{11}\big).~~ \Ket{w}=\Ket{2}=\Ket{10}.~~ \Ket{s}=\wr\Ket{s'}+\wr\Ket w .$
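As a quick numerical sanity check (a small NumPy sketch of my own, not from the original article), we can write these states as 4-dimensional vectors in the basis order $\Ket{00},\Ket{01},\Ket{10},\Ket{11}$ and confirm the decomposition:

```python
import numpy as np

# Basis order: |00>, |01>, |10>, |11>
s = np.full(4, 0.5)                                      # |s>: amplitude 1/sqrt(4) everywhere
w = np.array([0.0, 0.0, 1.0, 0.0])                       # |w> = |10>, the marked state
s_prime = np.array([1.0, 1.0, 0.0, 1.0]) / np.sqrt(3)    # |s'>: the unmarked states, renormalised

# |s> = sqrt(3/4)|s'> + sqrt(1/4)|w>
print(np.allclose(s, np.sqrt(3 / 4) * s_prime + np.sqrt(1 / 4) * w))   # True
```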

...

Success Rate

With 4 items, to encode $\Ket2$ (or $\Ket{10}$), you need:

$U_f =(S\otimes I)(I\otimes H)cX(I\otimes H)(S\otimes I) =(S\otimes H)cX(S\otimes H) .$

$U_s =2\Ket s\Bra s-I =2\left(H^{\otimes n}\Ket0^{\otimes n}\Bra0^{\otimes n}H^{\otimes n}\right)-H^{\otimes n}~I~H^{\otimes n} =H^{\otimes n}\left(2\Ket0^{\otimes n}\Bra0^{\otimes n}-I\right)H^{\otimes n} .$
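A small NumPy sketch (my own; the left qubit is taken as the control of $cX$ and as the more significant bit, which is an assumed convention, not stated in the article) shows that this $U_f$ flips only the phase of $\Ket{10}$, and that a single iteration of $U_s U_f$ turns $\Ket s$ into $\Ket{10}$ exactly:

```python
import numpy as np

# Single-qubit gates
H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S  = np.array([[1, 0], [0, 1j]])
# CNOT with the left (more significant) qubit as control
cX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# Oracle U_f = (S (x) H) cX (S (x) H): flips the phase of |10> only
U_f = np.kron(S, H) @ cX @ np.kron(S, H)
print(np.round(np.diag(U_f).real, 3))           # [ 1.  1. -1.  1.]

# Diffuser U_s = 2|s><s| - I
s   = np.full((4, 1), 0.5)                      # |s>, the uniform superposition
U_s = 2 * s @ s.T - np.eye(4)

# One Grover iteration on |s>
psi = U_s @ U_f @ s
print(np.round(np.abs(psi.flatten())**2, 3))    # [0. 0. 1. 0.] -> measures |10> with certainty
```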


With 8 items, $\beta={1\over\sqrt{8}},~~\theta=\sin^{-1}(\beta)\approx 0.3614.$ Given $\theta_{t_\mathrm{final}}=(2t_\mathrm{final}+1)\theta\approx\pi/2,~~t_\mathrm{final}\approx{\pi\over 4\theta}-{1\over 2}\approx 1.6734\approx 2.$

$\beta_2=\sin(\theta_2)=\sin(5\theta)\approx 0.9723.$ So the probability of getting it right after 2 iterations is $\beta_2^2\approx 94.5$%.

With 16 items, following the same process above, $\theta=0.2527,~~t_\mathrm{final}\approx 2.6083\approx 3.$ $\beta_3=\sin(7\theta)\approx 0.9805,$ giving a probability of 96.1%.
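The same arithmetic can be scripted for any list size; here is a small helper of my own (not from the article) that reproduces the numbers above and the table below:

```python
import numpy as np

def grover_success(n_qubits):
    """Iteration count and success probability for a list of 2**n_qubits items."""
    N = 2 ** n_qubits
    theta = np.arcsin(1 / np.sqrt(N))        # beta = 1/sqrt(N), theta = arcsin(beta)
    t = round(np.pi / (4 * theta) - 0.5)     # nearest integer to pi/(4*theta) - 1/2
    p = np.sin((2 * t + 1) * theta) ** 2     # probability of measuring |w> after t iterations
    return t, p

for n in (2, 3, 4, 5):
    t, p = grover_success(n)
    print(f"{n} qubits, {2**n} items: {t} iteration(s), success rate {p:.2%}")
# 2 qubits, 4 items: 1 iteration(s), success rate 100.00%
# 3 qubits, 8 items: 2 iteration(s), success rate 94.53%
# 4 qubits, 16 items: 3 iteration(s), success rate 96.13%
# 5 qubits, 32 items: 4 iteration(s), success rate 99.92%
```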


Generally (but not absolutely), the longer the search list, the closer the probability is to unity.

Qubits    Items            Iterations    Success Rate
2         4                1             100%
3         8                2             94.53%
4         16               3             96.13%
5         32               4             99.92%
6         64               6             99.66%
7         128              8             99.56%
8         256              12            99.99%
9         512              17            99.94%
10        1024             25            99.95%
...       ...              ...           ...
20        1,048,576        804           99.99998%
...       ...              ...           ...
30        1,073,741,824    25,735        99.99999993%

However, there are exceptions, like the drop from a success rate of 99.92% with 5 qubits to 99.66% with 6, then to 99.56% with 7.

From 2 qubits to 3 is a significant drop, though no configuration can beat the 100% certain success that the algorithm achieves on a 4-item list. After that, periodic drops still happen, like from 5 qubits to 6, and from 8 to 9.


Some points to note:

  • $\theta_{t_\mathrm{final}}$ may end up being (slightly) larger than $\pi/2$, which is fine, as long as it is closer to $\pi/2$ than with the previous iteration. We are minimising $\lvert\pi/2-\theta_t\rvert$.

  • Based on the above point, we cannot use more iterations to increase the probability, as $\theta_t$ will start to move away from $\pi/2$ and $\lvert\pi/2-\theta_t\rvert$ will start to increase.

  • More qubits are required to represent a longer list. $\beta=\Rsr N$ becomes smaller, and so does $\Delta\theta=2\theta$ (the amount each iteration advances $\theta_t$), so the final step lands closer to $\pi/2$, as $\lvert\pi/2-\theta_{t_\mathrm{final}}\rvert\le\Delta\theta$.

The final point is the reason we are seeing a higher success rate for a longer list; it is not that you can brute-force a higher success rate simply by using more qubits.


To elaborate further, while $\theta_t$ approaches $\pi/2$ linearly with every iteration, it is how close $\theta_t$ is to $\pi/2$ at the final iteration that determines the success rate, and this rate does not necessarily increase as the number of qubits grows.

Nevertheless, the general trend is that the more qubits are used, the higher the success rate that can be achieved, because as $N$ increases, $\Delta\theta$ approaches zero. That said, "padding" the item list unnecessarily increases compute time without necessarily increasing probabilities.
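To see the first two points in numbers, the sketch below (my own, in the same style as the helper above) prints $\lvert\pi/2-\theta_t\rvert$ and the success probability for the 8-item case after each iteration:

```python
import numpy as np

theta = np.arcsin(1 / np.sqrt(8))           # 3 qubits, 8 items
for t in range(1, 5):
    theta_t = (2 * t + 1) * theta
    gap = abs(np.pi / 2 - theta_t)          # how far the state is from pointing at |w>
    print(f"t={t}: |pi/2 - theta_t| = {gap:.4f}, success rate = {np.sin(theta_t)**2:.1%}")
```

With 3 qubits this gives roughly 78% after one iteration and 94.5% after two, then it falls back to about 33% after three and around 1% after four, so overshooting the optimal count is actively harmful.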

After all, a list as small as 256 items can have a success rate of 99.99%. This is better than what many classical circuits can achieve. Besides, a simple sanity check using the oracle can confirm the success beyond doubt (it returns a negative $\Ket w$ if the measured item is the marked one); if the check fails, a second run would correct that.

Complexity

Let us look at the classical algorithm complexity $O(N)$ vs Grover's $O(\sqrt N)$.

The complexity is measured in the number of iterations, i.e. how many times the classical $f(x)$ or the quantum $U_f\Ket x$ is applied.

$f(x)$ gives you the result of one item with certainty, while $U_f\Ket x$ evaluates all items at once with a decreasing uncertainty.


Although $U_f$ is mathematically ahead in terms of complexity, the residual probability left on the unmarked items cannot be ignored.

In short, there is always a chance that the wrong result is returned, no matter how small that chance is.

It may require some classical algorithm to verify the result (e.g. hashing). Details are to be explored.
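For example (a sketch of my own, with an arbitrary marked value; the hashing idea is not shown), the measured bit string can simply be fed back into the classical predicate, and the search repeated if the check fails:

```python
def f(x, marked=0b110):
    """Classical oracle: returns True only for the marked item (example value)."""
    return x == marked

measured = 0b110           # hypothetical outcome of one Grover run
if f(measured):
    print("verified")      # accept the answer
else:
    print("rerun Grover")  # a wrong item slipped through: run the search again
```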

What is an Oracle

For a list with $2^n$ items, $n$ bits are required as input to the algorithm, classical or quantum alike. In this context, only one pattern among these $2^n$ items is "marked".

The classical "oracle" is easy to understand: You give an $n$-bit input to the function and it gives you a yes-or-no answer.


The quantum oracle, however, is encoded into a $2^n\times 2^n$ matrix for a $2^n$-item list. The input is a vector of $2^n$ dimensions (carried by $n$ qubits), and so is the output.

Being able to process multiple inputs at once gives the oracle an advantage. On the other hand, giving multiple outputs with a probability distribution is something we need to deal with.
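Concretely, in the matrix view (a sketch of my own, using the same phase-flip convention as the 4-item example; marking item 13 out of 16 is an arbitrary choice), the oracle is just the identity with a $-1$ at the marked index, and iterating $U_s U_f$ concentrates the amplitude there:

```python
import numpy as np

n, marked = 4, 13                        # 16-item list, item 13 marked (example values)
N = 2 ** n

U_f = np.eye(N)
U_f[marked, marked] = -1                 # phase flip on the marked basis state only

s = np.full(N, 1 / np.sqrt(N))           # uniform superposition |s>
U_s = 2 * np.outer(s, s) - np.eye(N)     # diffuser 2|s><s| - I

psi = s
for _ in range(3):                       # 3 iterations, per the table above
    psi = U_s @ (U_f @ psi)
print(f"P(item {marked}) = {np.abs(psi[marked])**2:.2%}")   # about 96.1%
```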


One might be too focused on the mathematics and think: although we cannot "open up" the oracle to see what it is doing (like which item it would mark), after just one evaluation the bit pattern already shows up as a "slightly" increased amplitude.

e.g. For $n=4$, if qubits 0, 2 and 3 show the higher amplitude, item 13 ($1101_2$) is the marked one.


The challenge is that we cannot see the state (except in a simulator), so the amplitudes are unknown. You need to repeat the process multiple times in order to approximate the probability distribution.

So it is a trade-off between spending the compute on $\sqrt N$ iterations with one measurement, or on fewer iterations but more measurements to increase the confidence.

As the latter would increase with $O(N)$, I would say that doing the right number of iterations, $O(\sqrt N)$, with fewer measurements (or one) will give a better success rate.

That said, the limitation on circuit depth due to decoherence cannot be ignored. In short, you may not be able to do many iterations, so you may need to live with a final state that is close enough but not the closest. Repeat the same shallow circuit, collect the distribution of measurements, and take the most likely outcome as the result.
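As a rough illustration of that trade-off (my own sketch, reusing the matrix simulation above; the shot count of 50 is arbitrary), a deliberately shallow circuit with only one iteration on a 16-item list still lets the marked item dominate once enough shots are collected:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(seed=1)

def state_after(n_qubits, marked, t):
    """Amplitudes after t Grover iterations (plain matrix arithmetic, simulator-style)."""
    N = 2 ** n_qubits
    U_f = np.eye(N)
    U_f[marked, marked] = -1
    s = np.full(N, 1 / np.sqrt(N))
    U_s = 2 * np.outer(s, s) - np.eye(N)
    psi = s
    for _ in range(t):
        psi = U_s @ (U_f @ psi)
    return psi

# Shallow circuit: 1 iteration instead of the optimal 3 for 16 items,
# compensated by repeated shots; take the most frequent outcome as the answer.
probs = np.abs(state_after(4, marked=13, t=1)) ** 2      # item 13 gets ~47%, the rest ~3.5% each
shots = rng.choice(len(probs), size=50, p=probs)
print(Counter(shots.tolist()).most_common(1))            # item 13 should come out on top
```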

 
