
Overview

In 1994 Adleman attracted a great deal of attention by describing a biomolecular solution to the traveling salesman problem [59.1] (see also [59.2]). It is no exaggeration to say that Adleman thereby founded a new kind of science, even though the ideas of bio-inspired computational models are far older. For example, the theoretical properties of artificial neural networks had been studied decades earlier, and a notable exposition was published by Minsky and Papert in 1969 [59.3]. However, previous studies on bio-inspired computing focused on analogies of biological processes, and that is exactly where Adleman went one step further: he proposed that instead of simulating biochemical processes, it could be useful to utilize them directly to perform computation.

By coincidence, another new branch of science also gained a lot of attention in 1994. In that year Shor published his famous polynomial-time algorithm for integer factorization [59.4]. Shorʼs study was remarkable for several reasons. First, the quest for an efficient integer factorization procedure has continued since antiquity without success. Second, and very practically, the security of the widely used RSA encryption system is based on the assumption that no efficient method for factoring integers exists. Third, the factoring method was designed for quantum computers and could not be directly described in terms of traditional Turing machines [59.5].

It is also true that quantum computing did not begin with Shorʼs article. The first ideas of quantum information were coined by von Neumann in 1927 [59.6,7], but the first ideas of the quantum computer were introduced as late as the early 1980s by Benioff [59.8,9] and, very notably, by Feynman [59.10], who actually suggested that quantum computers may be more efficient than traditional ones. The theory was further developed by Deutsch, who introduced quantum Turing machines [59.11] and quantum networks [59.12]. The first examples of the superior efficiency of quantum computers were also given by Deutsch and Jozsa [59.13] in 1992. In 1994, Simon presented a more usable example [59.14], and Shor actually built his factoring procedure on Simonʼs algorithm. In 1997 Bernstein and Vazirani established quantum complexity theory using an improved version of Deutschʼs quantum Turing machine [59.15].

Since the early days, the theory of quantum computing has been under strong development, but many basic questions still remain unresolved. For instance, we know that factoring integers is feasible for quantum computers, but we cannot prove it infeasible for classical computers; all we know is that no-one has discovered a feasible classical factoring method. In fact, we do not know for sure of any problem for which quantum computers are provably more powerful. This lack of knowledge becomes understandable when one notices that many basic questions on classical computing remain unresolved, too: as of 2013, we do not know whether polynomial-time nondeterministic computing is any more powerful than its deterministic counterpart. This so-called P versus NP problem is generally acknowledged as one of the most difficult problems in contemporary mathematics [59.16], and there is no reason to believe that analogous problems on the relations between quantum and classical computing would be any simpler to resolve.

Quantum computing has an important common feature with molecular computing: in both paradigms, the idea is to perform computation on non-traditional hardware. Running quantum algorithms requires a quantum computer, i.e., a computer capable of storing and handling quantum information. For that purpose, the information should be presented by physical systems so small that quantum effects occur. From time to time, the analogies between quantum and biomolecular computing encourage authors to propose a joint computing model. However, the success of such models has been very limited so far, and the main purpose of this article is to explain why to a reader only weakly familiar with quantum computing concepts. This will be done by introducing the basic notions of quantum computing in a superficial way and pointing out the essential differences between the two computing paradigms. One single, and perhaps the most essential, difference between the computational paradigms can be introduced without presenting any deeper structures. The objects of molecular computing (DNA molecules) are microscopic to humans, yet macroscopic from the quantum computing perspective. Hence the information in molecular computing is treated as classical information, whereas the starting point of quantum computing is the quantumness of information.

As a secondary purpose, we describe briefly the four types of existing quantum algorithms and some restrictions of quantum computing to present an idea of what can be achieved by using quantum computing. The types presented cover almost all known quantum algorithms.

Biomolecular and Quantum Parallelism

Adleman designed and expressed his algorithm for the traveling salesman problem (actually, Adlemanʼs formulation was of what should be called the Hamiltonian path problem) by using biomolecular operations. Anyone interested can learn the details of those operations in [59.1], but roughly speaking, Adlemanʼs procedure can be described as follows. The problem itself is to decide whether a given directed graph (city map) contains a path beginning and ending at fixed vertices and visiting every vertex exactly once. Adlemanʼs solution was to encode each vertex and edge into a single-stranded DNA sequence in such a way that if there is an edge e from vertex c_1 to c_2, then half of the strand encoding c_1 is complementary to a half of the strand encoding e, and the latter half of e is complementary to a half of the strand encoding c_2. As there may be multiple incoming and outgoing edges, there may also be multiple encodings of cities. In a test tube containing multiple copies of DNA strands encoding both vertices and edges, the single (let us call them lower) strands c_1, c_2, … tend to form longer strands c_{i_1} c_{i_2} c_{i_3} … bound by the upper (single) strands e_{i_1} e_{i_2} … (e_{i_1} extends over c_{i_1} and c_{i_2}, e_{i_2} over c_{i_2} and c_{i_3}, etc.). The encoding ensures that c_{i_j} and c_{i_{j+1}} will be adjacent only if there is an edge from c_{i_j} to c_{i_{j+1}}. This means that the chemical tendency of DNA to stick to its complementary counterpart will generate various double-stranded DNA sequences encoding paths in the originally encoded graph.

The problem is then to detect whether a strand encoding a desired path exists. Using electrophoresis it is possible to filter out DNA strands of the wrong length, and additional existing techniques suffice to filter out exactly the desired paths, if there are any. The crucial point is that it is known how to duplicate existing DNA. Once the encodings are available, they can be duplicated to an extent guaranteeing that the above procedure will detect a desired solution with high probability, if any exists.

The above description emphasizes that Adlemanʼs DNA-based solution actually utilizes heavy parallelism. A test tube containing multiple copies of vertex and edge encodings acts like a nondeterministic device generating potential paths, and the problem is to filter out the desired path – a solution, if any exists. An existing biotechnique is sufficient to do the filtering.
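The filtering view above can be illustrated with a small classical simulation. The following sketch is a toy model, not Adlemanʼs original experiment: the graph, the vertex names, and the trial count are all illustrative choices. Random edge-concatenations mimic the massively parallel annealing in the test tube, and a filtering step keeps only Hamiltonian paths.

```python
import random

# Toy model of Adleman's massive parallelism: the "test tube" is simulated
# by generating many random walks along the edges of a small directed graph
# and keeping those that form a Hamiltonian path from START to END.
EDGES = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
START, END, N_VERTICES = 0, 3, 4

def random_path(max_len):
    """Concatenate edges at random, as strands anneal in the tube."""
    path = [START]
    while len(path) < max_len and EDGES[path[-1]]:
        path.append(random.choice(EDGES[path[-1]]))
    return path

def is_hamiltonian(path):
    """The 'filtering' step: right length, right endpoints, no repeats."""
    return (len(path) == N_VERTICES and path[0] == START
            and path[-1] == END and len(set(path)) == N_VERTICES)

random.seed(0)
hits = [p for p in (random_path(N_VERTICES) for _ in range(10000))
        if is_hamiltonian(p)]
print(len(hits) > 0)   # True: the path 0-1-2-3 is generated many times
```

As in the test tube, correctness comes not from guiding any single strand but from generating so many random candidates that a solution appears with high probability.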

In quantum computing, parallelism also occurs – quantum parallelism, which will be described in the rest of this chapter. The presentation here is informal; the notions will be defined precisely in the next chapter. It is not necessary to focus on any specific NP-complete problem, so we choose to study a general version.

A General NP-Complete Problem

  • Input: N ∈ ℕ and a polynomial-time (in N) computable function f: {0,1}^N → {0,1}.

  • Output:

    $$\begin{cases} 1, & \text{if there is } x \in \{0,1\}^N \text{ so that } f(x) = 1,\\ 0, & \text{otherwise}. \end{cases}$$

Here and hereafter, {0,1}^N stands for the set of bit strings of length N, so the general NP-complete problem is a search problem: one has to decide whether there is an N-bit string x so that f(x) = 1. A solution can evidently be obtained by exhaustive search: try all 2^N candidates x ∈ {0,1}^N and check whether any of them satisfies f(x) = 1. But that is computationally expensive: although by assumption any single value f(x) can be computed in polynomial time (in N), there are exponentially many (2^N) candidates to be checked.
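The exhaustive search just described can be sketched in a few lines; the predicate f below is a hypothetical stand-in with exactly one solution, chosen only for illustration.

```python
from itertools import product

# Exhaustive classical search for the general NP-complete problem:
# try all 2**N bit strings and report 1 if some x satisfies f(x) = 1.
def f(x):
    return 1 if x == (1, 0, 1, 1) else 0   # arbitrary instance, one solution

def exhaustive_search(f, N):
    return 1 if any(f(x) == 1 for x in product((0, 1), repeat=N)) else 0

print(exhaustive_search(f, 4))   # 1, after at most 2**4 = 16 evaluations of f
```

Each f-evaluation is cheap, but the loop body runs up to 2^N times; that exponential factor is the whole difficulty.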

In quantum computing, it is possible to form a state

$$\sum_{x \in \{0,1\}^N} \frac{1}{\sqrt{2^N}}\,|x\rangle\,,$$
(59.1)

the so-called superposition of all bit strings x ∈ {0,1}^N, and then to compute the function f on all possible inputs simultaneously, at the cost of a single computation, to obtain

$$\sum_{x \in \{0,1\}^N} \frac{1}{\sqrt{2^N}}\,|x\rangle|f(x)\rangle\,.$$
(59.2)

This is what quantum parallelism means: all values x ∈ {0,1}^N occur, in a sense, in parallel in (59.1), and all values f(x) are, again in a sense, computed simultaneously.

However, this so-called quantum parallelism is very different from the biomolecular parallelism described earlier. In (59.1) there are not 2^N physical systems each consisting of N bits, but only a single physical system of N bits, in a state which allows, in a sense, an interpretation as any x ∈ {0,1}^N. Quantum parallelism is not comparable to DNA computing parallelism, and the incomparability is underlined by the physical interpretation of (59.2). Observing (59.2) will give some pair (x, f(x)), each with probability $|1/\sqrt{2^N}|^2 = 1/2^N$, but in the measurement (59.2) is destroyed irreversibly.

It is worth noticing that quantum parallelism, as described above, is not far from probabilistic parallelism: toss a fair coin N times to obtain a random bit string x ∈ {0,1}^N, then compute f(x). A string x with the property f(x) = 1 (if any exists) will be found with exactly the same probability as by observing (59.2). Hence the problem with straightforward quantum parallelism (59.2) is the same as with probabilistic parallelism: if there are only a few, or even only one, string x (let us call it a solution) such that f(x) = 1, then the cases a solution exists and no solution exists cannot be distinguished from each other with any better probability than 1/2^N.
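The probabilistic strategy can be sketched directly; the hidden solution string and the trial count below are arbitrary illustrative choices.

```python
import random

# Probabilistic "parallelism": toss N fair coins to get x, then evaluate
# f(x). With one solution among 2**N strings, each trial succeeds with
# probability 1/2**N, exactly matching one observation of state (59.2).
N = 10
SOLUTION = tuple(random.Random(1).choices((0, 1), k=N))  # hidden solution

def f(x):
    return 1 if x == SOLUTION else 0

rng = random.Random(0)
trials = 200000
hits = sum(f(tuple(rng.choices((0, 1), k=N))) for _ in range(trials))
print(hits / trials)   # empirically close to 1/2**10 ≈ 0.00098
```

The empirical success rate stays pinned near 1/2^N; no amount of repetition changes the per-trial probability, which is the point of the comparison.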

However, as the discoveries of Deutsch and Jozsa [59.13], Simon [59.14], and Shor [59.4] demonstrated, quantum computing offers possibilities to solve some problems more efficiently than any known classical procedure allows. It may be useful to underline right away, however, that it is strongly believed (although not proved) that quantum computers cannot solve NP-complete problems in polynomial time.

Quantum Computing Preliminaries

In this chapter, the basics of quantum information will be presented only very superficially, and a reader desiring a more detailed exposition is advised to consult [59.17,18] or [59.19]. To understand the formalism, it is necessary to accept that quantum mechanics is a stochastic theory: in general, the complete description of a system, its state, does not determine the outcome of a measurement, only a probability distribution over the potential outcomes. The probabilistic structure of quantum mechanics is very well studied, and many results are available. One could ask, for instance, whether a deterministic theory could lie beneath quantum mechanics, with the probability distribution due only to unknown boundary values (hidden variables). For instance, when tossing a classical coin, one could imagine that if the initial circumstances were known precisely enough, one could always predict the outcome.

Deep investigation has shown that quantum randomness cannot emerge from any deterministic procedure of the kind described above; randomness is an inherent, inseparable feature of quantum mechanics (see [59.20], for instance).

Hilbert Space Basic Structure

The Hilbert space formalism of quantum computing is usually based on pure states and requires some basic notions. An n-level quantum system means a (quantum) physical system with n different states which are mutually distinguishable with certainty. An n-dimensional Hilbert space H_n is the complex vector space ℂ^n equipped with the Hermitian inner product $\langle x|y\rangle = x_1^* y_1 + \cdots + x_n^* y_n$. The inner product induces the norm $\|x\| = \sqrt{\langle x|x\rangle}$. For any element (x_1, …, x_n) ∈ ℂ^n, a ket-vector is defined as the column vector (n × 1 matrix)

$$|x\rangle = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},$$

and a bra-vector as the 1 × n matrix (row vector)

$$\langle x| = (x_1^*, x_2^*, \ldots, x_n^*)\,.$$

Usually ℂ^n is identified with the space of ket-vectors (n × 1 matrices), and we say that H_n is the state space of the quantum system. The mathematical description of an n-level quantum system is based on an n-dimensional Hilbert space in the following way: an orthonormal basis {|x_1⟩, …, |x_n⟩} is fixed as a computational basis, and a general state of the system is presented as a superposition of computational states

$$\alpha_1 |x_1\rangle + \cdots + \alpha_n |x_n\rangle\,,$$
(59.3)

so that |α_1|² + |α_2|² + ⋯ + |α_n|² = 1. In other words, a general state of an n-level system can be represented as a unit-length vector in H_n. Basically any basis of H_n could be chosen for representation (59.3), but some bases may be preferred because of the physical implementation. Hence the term computational basis should not be understood as a mathematical definition, but as a chosen reference basis. The states of the computational basis are also called basis states, and the complex coefficients α_i amplitudes.

An observable of a quantum system H_n is a collection of mutually orthogonal subspaces {V_1, …, V_k} so that H_n = V_1 ⊕ ⋯ ⊕ V_k. The intuitive meaning of the notion is that each subspace V_i refers to a physical property the system can have. For example, the computational basis itself induces an observable {L(x_1), …, L(x_n)}, where L(x_i) stands for the subspace generated by x_i.

The minimal interpretation of quantum physics is an axiom connecting the mathematical structure to the real world. For this presentation, it is sufficient to introduce the minimal interpretation in the following way: let {V_1, …, V_k} be an observable and x = α_1 x_1 + ⋯ + α_k x_k a presentation of state x so that x_i ∈ V_i and $\|x_i\| = 1$ for each i. Then the probability that a quantum system in state x is seen to have property V_i is

$$P(i) = |\alpha_i|^2\,.$$
(59.4)

It may be worth mentioning here that usually an extra element is associated with an observable: a real number λ_i attached to each subspace V_i. The number λ_i is the value of the observable, and equation (59.4) should then read

$$P(\lambda_i) = |\alpha_i|^2\,,$$

meaning that the probability that the measured value of the observable is λ_i equals |α_i|². However, when studying quantum computation it is usually not necessary to address the explicit values λ_i ∈ ℝ; it is enough to identify λ_i with its index i.

According to the projection postulate, the quantum system collapses into the observed state, and the superposition is irreversibly lost. That is, if property i was observed, then the state immediately after the observation is x_i. The projection postulate is among the most problematic features of quantum mechanics, but this article cannot be extended to treat it specifically.

Example 59.1:

A two-level quantum system is referred to as a quantum bit, or qubit for short. We fix the orthonormal computational basis |0⟩ = (1, 0)^T, |1⟩ = (0, 1)^T of H_2 (T stands for transposition), and a general state of a quantum bit is a vector

$$\alpha|0\rangle + \beta|1\rangle\,,$$
(59.5)

where |α|² + |β|² = 1 (meaning that the length of (59.5) is 1). Let C = {L(|0⟩), L(|1⟩)} be the observable consisting of the two subspaces generated by |0⟩ and |1⟩, respectively, and C′ = {L(|0′⟩), L(|1′⟩)} another observable, where $|0'\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and $|1'\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$. If a quantum bit is in state (59.5) and observable C is measured, then 0 is seen with probability |α|² and 1 with probability |β|².

Hence a qubit state (59.5) may look like a generalized probability distribution, but that is not the case. It is perfectly possible to measure observable C′ in state (59.5) as well, and the outcome may be totally different. In fact,

$$\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle = 1 \cdot |0'\rangle + 0 \cdot |1'\rangle\,,$$
(59.6)

so measuring observable C results in 0 with probability 1/2 and 1 with probability 1/2. On the other hand, measuring observable C′ results in 0′ with probability 1 and 1′ with probability 0. This emphasizes that the state of a quantum system cannot be treated as a probability distribution. In fact, for every state of a finite-level quantum system (excluding the trivial case n = 1) there is a nontrivial observable (i.e., one with more than one potential value) such that a single value will be observed with probability 1.
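The probabilities in this example can be checked numerically. The following sketch computes them from the minimal interpretation: the probability of an outcome is the squared absolute value of the inner product of the state with the corresponding basis vector.

```python
import numpy as np

# Measurement probabilities of Example 59.1: the state (|0> + |1>)/sqrt(2)
# looks maximally random in observable C but is deterministic in C'.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket0p = (ket0 + ket1) / np.sqrt(2)   # |0'>
ket1p = (ket0 - ket1) / np.sqrt(2)   # |1'>

psi = (ket0 + ket1) / np.sqrt(2)     # the state on the left of (59.6)

def prob(basis_vec, state):
    """Born-rule probability |<b|psi>|^2 (real vectors, so no conjugation)."""
    return abs(basis_vec @ state) ** 2

print(round(prob(ket0, psi), 3), round(prob(ket1, psi), 3))    # 0.5 0.5
print(round(prob(ket0p, psi), 3), round(prob(ket1p, psi), 3))  # 1.0 0.0
```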

Based on the above definitions, we can now clarify the role of the computational basis a little. Observing state (59.3) would in general require specifying the observable to be measured, but it is traditional to speak of observing a state when the observable is the one induced by the computational basis.

Compound Systems

The states of a quantum system consisting of two distinguishable subsystems can be presented by using a tensor product construction. For the purposes of this article, it is not necessary to define the tensor product exactly; it is enough to know that it is an (essentially) associative and distributive, but non-commutative, product of vectors obeying the obvious scalar rules. For more details, see [59.17] or [59.19]. It is also worth emphasizing that the counterpart of the tensor product for concrete objects such as matrices is the Kronecker product.

Now if H_m and H_n are the state spaces of m- and n-level quantum systems with computational bases {|x_1⟩, …, |x_m⟩} and {|y_1⟩, …, |y_n⟩}, then the state space of the compound system is the mn-dimensional tensor product H_m ⊗ H_n, whose computational basis can be chosen as

$$\{\, |x_i\rangle \otimes |y_j\rangle \mid (i,j) \in \{1,\ldots,m\} \times \{1,\ldots,n\} \,\}\,.$$

It is common to use the shorthand notations |x_i⟩ ⊗ |y_j⟩ = |x_i⟩|y_j⟩ = |x_i, y_j⟩ (even |x_i y_j⟩ is used if there is no danger of confusion), so a state of the compound system can be represented as

$$\sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_{ij}\,|x_i, y_j\rangle\,,$$

where

$$\sum_{i=1}^{m} \sum_{j=1}^{n} |\alpha_{ij}|^2 = 1\,.$$
(59.7)

It is clear that observables of the subsystems give rise to observables of the compound system.

State (59.7) is called decomposable if it can be presented as a product state

$$\left( \sum_{i=1}^{m} \alpha_i |x_i\rangle \right) \otimes \left( \sum_{j=1}^{n} \beta_j |y_j\rangle \right),$$

otherwise, the state is called entangled.

Example 59.2:

A two-qubit state

$$\frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle + \frac{1}{2}|11\rangle$$

is decomposable, as

$$\frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle + \frac{1}{2}|11\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\,.$$

On the other hand, a two-qubit state

$$\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$$
(59.8)

is entangled, since assumption

$$\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle = (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle) = \alpha_0\beta_0|00\rangle + \alpha_0\beta_1|01\rangle + \alpha_1\beta_0|10\rangle + \alpha_1\beta_1|11\rangle$$

leads to the equations $\alpha_0\beta_0 = \alpha_1\beta_1 = \frac{1}{\sqrt{2}}$ and $\alpha_0\beta_1 = \alpha_1\beta_0 = 0$, which clearly cannot all hold.
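For two qubits, decomposability can also be tested mechanically: a state is a product state exactly when the 2 × 2 matrix of its amplitudes α_ij has rank 1 (a single nonzero singular value). A small sketch using this standard criterion:

```python
import numpy as np

# Decomposability test for two-qubit states: arrange the amplitudes a_ij
# (ordered |00>, |01>, |10>, |11>) into a 2x2 matrix; the state is a product
# state exactly when that matrix has rank 1, otherwise it is entangled.
def is_entangled(amplitudes, tol=1e-12):
    m = np.asarray(amplitudes, dtype=complex).reshape(2, 2)
    return np.linalg.matrix_rank(m, tol=tol) > 1

uniform = [0.5, 0.5, 0.5, 0.5]               # the decomposable state above
epr = [1/np.sqrt(2), 0, 0, 1/np.sqrt(2)]     # the EPR state (59.8)
print(is_entangled(uniform))   # False
print(is_entangled(epr))       # True
```

The impossible equation system in the example is exactly the statement that the diagonal amplitude matrix of (59.8) cannot be written as an outer product of two vectors.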

The entangled state (59.8) is historically and philosophically of great interest. Indeed, Bohm used a state analogous to it [59.21] to reformulate an apparent paradox of quantum mechanics introduced by Einstein, Podolsky, and Rosen [59.22]. That formulation eventually led Bell to present a resolution of the paradox [59.23] (for an exposition, see [59.20]). For these reasons, state (59.8) is called an EPR state, and a pair of qubits in state (59.8) an EPR pair.

The minimal interpretation implies directly that if the state (59.8) is observed (i.e., the observable generated by the computational basis is measured), then we will see 00 with probability 1/2 and 11 with probability 1/2. Hence the quantum bits of an EPR pair (59.8) are perfectly correlated: when observed, they always have the same value, which can be 0 or 1, either with probability 1/2.

It has been experimentally demonstrated that the correlation of the EPR pairs as described above is detectable even if the two physical systems (quantum bits) are spatially separated by 144 km [59.24]. However, the correlation over distance should not be surprising or anything specific to quantum physics; it is left to the reader to describe a non-quantum bipartite system with distant correlations analogous to the EPR state.

Whereas the correlation itself is not specific to quantum mechanics, the violation of Bell inequalities is (see [59.20], for instance). Violation of Bell inequalities has also been experimentally detected over a physical distance of 144 km [59.24].

The mathematical description of compound systems with more than two subsystems is again based on the tensor product construction. In this article we will not focus on the details, but merely present an example.

Example 59.3:

A system of N quantum bits has its description in the state space H_2 ⊗ ⋯ ⊗ H_2, a Hilbert space isomorphic to H_{2^N}. A general state of H_{2^N} can be described as

$$\sum_{x \in \{0,1\}^N} \alpha_x |x\rangle\,,$$

where

$$\sum_{x \in \{0,1\}^N} |\alpha_x|^2 = 1\,.$$

State

$$\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \cdots \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \frac{1}{\sqrt{2^N}} \sum_{x \in \{0,1\}^N} |x\rangle$$
(59.9)

presents the uniformly distributed superposition over all basis states |x⟩. It is worth noticing that presentation (59.9) shows the state to be clearly decomposable.
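State (59.9) can be built numerically via the Kronecker product, as a quick sanity check of the identity above (N = 5 is an arbitrary choice):

```python
import numpy as np
from functools import reduce

# Build state (59.9) as a Kronecker product of N copies of (|0>+|1>)/sqrt(2)
# and check it equals the uniform amplitude vector of length 2**N.
N = 5
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = reduce(np.kron, [plus] * N)

print(state.shape == (2**N,))                              # True
print(np.allclose(state, np.full(2**N, 1/np.sqrt(2**N))))  # True
print(np.isclose(np.sum(np.abs(state)**2), 1.0))           # True: unit norm
```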

Quantum Operations

It was explained in the previous sections, in a very simplified way, how quantum information is presented in pure states. It is clear, however, that static pictures of quantum states are not sufficient for using the theory; it is also necessary to describe how quantum systems change in time. For most quantum computing models, and also for this article, it suffices to describe closed quantum system transformations, formalized as follows: a (closed) quantum system state transformation is a unitary mapping H_n → H_n. A linear mapping U is unitary if U*U = UU* = I (the identity mapping), where U* is the complex conjugate transpose of U. A closed quantum system state transformation is also called a quantum gate; see [59.25] for a study of quantum gates.

Example 59.4:

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

It is straightforward to verify that H* = H and that H*H = HH* = HH = I, meaning that H is unitary. Hence H is a unary quantum gate, i.e., a gate on one qubit. The action of H on the computational basis {|0⟩ = (1, 0)^T, |1⟩ = (0, 1)^T} is given by

$$H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \quad\text{and}\quad H|1\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\,.$$

Gate H is called a Hadamard transform or a Walsh transform.

Example 59.5:

Mapping

$$C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

can easily be verified to be unitary. C is a binary gate called controlled not, and its name is justified by computing its action. First, the matrix presentations of |00⟩, |01⟩, |10⟩, |11⟩ are obtained via the Kronecker product:

$$|00\rangle = |0\rangle \otimes |0\rangle = \begin{pmatrix}1\\0\end{pmatrix} \otimes \begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad |01\rangle = \begin{pmatrix}1\\0\end{pmatrix} \otimes \begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},\quad |10\rangle = |1\rangle \otimes |0\rangle = \begin{pmatrix}0\\1\end{pmatrix} \otimes \begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},$$

and

$$|11\rangle = \begin{pmatrix}0\\1\end{pmatrix} \otimes \begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}.$$

The action of C is then easy to compute: C|00⟩ = |00⟩, C|01⟩ = |01⟩, C|10⟩ = |11⟩, and C|11⟩ = |10⟩, meaning that the second qubit is flipped exactly when the first qubit, the control bit, equals 1.
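These computations are easy to verify numerically; the following sketch builds the basis vectors with the Kronecker product and checks the controlled-not action:

```python
import numpy as np

# Verify Example 59.5: build |00>,...,|11> via the Kronecker product and
# check that C flips the second qubit exactly when the first one is 1.
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

ket = {0: np.array([1, 0]), 1: np.array([0, 1])}
basis = {(a, b): np.kron(ket[a], ket[b]) for a in (0, 1) for b in (0, 1)}

for (a, b), v in basis.items():
    out = C @ v
    # the target bit b is flipped iff the control bit a equals 1
    assert np.array_equal(out, basis[(a, b ^ a)])

print(np.allclose(C.conj().T @ C, np.eye(4)))  # True: C is unitary
```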

It is possible to establish quantum computing on quantum gates alone, which leads to the quantum circuit formalism [59.17,18]. On the other hand, there are also other ways to establish the formalism of quantum computing; in fact, for any classical model of computing there is a canonical way of transforming it into a quantum version. For the definitions of quantum finite automata, see [59.26,27], and for quantum Turing machines, see [59.17] or [59.19]. It must be emphasized, however, that unitary mappings are invertible by definition, and therefore all quantum computing models based on them are reversible. For quantum Turing machines reversibility brings no disadvantage, as it is well known that all computation can be made reversible by introducing extra space [59.28]. On the other hand, many unitary models of finite automata are strictly weaker than the traditional model [59.26,27], precisely because the transformation into a reversible machine would require extra space. Unitarity can be relaxed by using open system transformations, but presenting them would require too much space in this article; see [59.29] for an automaton model with open time evolution.

Let us now revisit quantum parallelism and add more details. If f: {0,1}^N → {0,1} is computable in polynomial time, there is also a polynomial-size quantum circuit computing f; in fact, an algorithm computing f can be efficiently turned into such a circuit [59.17]. Usually it is necessary to add some auxiliary bits to satisfy the reversibility requirement, but those extra bits are usually not written down explicitly. This implies that it is possible to construct a unitary mapping U_f from a polynomial number of simple quantum gates (selected from a finite set) with the following action:

$$U_f\, |x\rangle|0\rangle = |x\rangle|f(x)\rangle\,,$$

where x is a sequence (register) of N quantum bits. Applying the Hadamard transform to the first N quantum bits of the state

$$|0\rangle \cdots |0\rangle\, |0\rangle$$

will result in the state

$$\frac{1}{\sqrt{2^N}} \sum_{x \in \{0,1\}^N} |x\rangle|0\rangle\,,$$

which is state (59.9) with the ancilla qubit appended; a further application of U_f will lead to the state

$$\frac{1}{\sqrt{2^N}} \sum_{x \in \{0,1\}^N} |x\rangle|f(x)\rangle\,,$$
(59.10)

an equally balanced superposition over all potential pairs (x, f(x)); this is the state (59.2) introduced earlier. From the computational complexity point of view, it is important to realize that state (59.10) can be generated by N Hadamard gate actions plus the number of quantum gates required to implement U_f (polynomial in N). This is exactly what quantum parallelism means: with a polynomial number of actions it is possible to generate the state (59.10), which extends over exponentially many basis states.
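The whole construction can be simulated classically for small N (at exponential cost, of course). In the sketch below, U_f is realized as a permutation matrix mapping |x⟩|b⟩ to |x⟩|b ⊕ f(x)⟩; the function f is an arbitrary stand-in.

```python
import numpy as np
from functools import reduce

# Simulate quantum parallelism: apply H to each of the N input qubits of
# |0...0>|0>, then apply U_f (a permutation matrix sending |x>|b> to
# |x>|b XOR f(x)>), producing state (59.10).
N = 3
f = lambda x: 1 if x == 0b101 else 0   # illustrative predicate

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
H_all = reduce(np.kron, [H] * N + [I2])   # H on N inputs, identity on ancilla

Uf = np.zeros((2**(N + 1), 2**(N + 1)))
for x in range(2**N):
    for b in (0, 1):
        Uf[(x << 1) | (b ^ f(x)), (x << 1) | b] = 1

state = np.zeros(2**(N + 1)); state[0] = 1.0   # |0...0>|0>
state = Uf @ (H_all @ state)                   # state (59.10)

# Each pair (x, f(x)) now carries amplitude 1/sqrt(2**N):
ok = all(np.isclose(state[(x << 1) | f(x)], 1/np.sqrt(2**N))
         for x in range(2**N))
print(ok, np.isclose(np.linalg.norm(state), 1.0))  # True True
```

Note that only N + 1 Hadamard/identity factors and one application of U_f were needed, even though the resulting vector has 2^(N+1) entries.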

Unfortunately, (59.10) is only a mathematical description of the state of a physical system. In particular, the exponentially many values are not physically observable to us: observing (59.10) will give only a single pair (x, f(x)), each with probability 1/2^N, and the observation will make (59.10) collapse into the state |x⟩|f(x)⟩.

Hence it is not possible to use quantum parallelism, at least not in this straightforward way, to resolve efficiently NP-complete problems.

Quantum Algorithms

The last example of the previous section clearly justifies the following question: why should we regard quantum parallelism as any better than a simple probability distribution? In fact, we could obtain equally good results simply by selecting N random bits to form a bit string x and then computing f(x). If necessary, we could even invent a notation for the probability distribution: let us agree that

$$\sum_{x \in \{0,1\}^N} \frac{1}{2^N}\,[x, 0]$$

stands for the probability distribution over the (N+1)-bit strings (x, 0), where each x ∈ {0,1}^N occurs with probability 1/2^N. Then computing f results in

$$\sum_{x \in \{0,1\}^N} \frac{1}{2^N}\,[x, f(x)]\,,$$

and any pair (x, f(x)) is seen with probability 1/2^N, as in the case of (59.10).

The answer is that the straightforward use of (59.10) is obviously not the only possible strategy. It should be noted that amplitudes, unlike probabilities, can also be negative, and consequently it may be possible to design quantum computation in such a way that the desirable basis states gain more visibility because their amplitudes sum up, while the undesirable ones cancel each other. Should that happen, we call the former constructive interference and the latter destructive interference.

Example 59.6:

State |0⟩ turns into $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ when affected by the Hadamard transform. If the resulting state were observed, one would see 0 and 1 each with probability 1/2. On the other hand, if the state is not observed but another Hadamard transform is applied, we get the following:

$$H\,\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \frac{1}{\sqrt{2}}(H|0\rangle + H|1\rangle) = \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) + \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = \left(\frac{1}{2} + \frac{1}{2}\right)|0\rangle + \left(\frac{1}{2} - \frac{1}{2}\right)|1\rangle = |0\rangle\,,$$

which demonstrates in detail how the amplitudes 1/2 and 1/2 sum up to 1, while 1/2 and −1/2 cancel each other.
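In matrix form the computation is immediate, since HH = I:

```python
import numpy as np

# Example 59.6 numerically: applying H twice returns |0>, because HH = I.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

once = H @ ket0    # (|0> + |1>)/sqrt(2): 0 and 1 each seen with prob. 1/2
twice = H @ once   # back to |0>: 0 seen with probability 1

print(np.round(np.abs(once)**2, 3))   # [0.5 0.5]
print(np.round(np.abs(twice)**2, 3))  # [1. 0.]
```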

A parallel to classical information processing could be as follows: the Hadamard transform may be interpreted as a fair coin toss. Beginning from either |0⟩ or |1⟩, one reaches a state $\frac{1}{\sqrt{2}}(|0\rangle \pm |1\rangle)$, where 0 and 1 are both seen with probability 1/2 (a single coin toss). If the coin is tossed twice in the classical setting, then again 0 and 1 are both seen with 50% probability. But in this example, the second coin toss returns the state to |0⟩, and hence 0 is seen with 100% probability. This is a feature clearly impossible with classical information.

Powerful quantum algorithms, such as Shorʼs factoring algorithm, are indeed all algorithms that utilize quantum interference in a clever manner. Unfortunately, the interference behavior of a quantum algorithm is quite difficult to control in practice, and consequently only a few families of quantum algorithms are known to date.

The known quantum algorithm families enclosing almost all known quantum algorithms are:

  1. Quantum algorithms based on Fourier transforms
  2. Amplitude amplification methods
  3. Quantum random walks
  4. Adiabatic quantum algorithms.

Quantum Algorithms Based on Fourier Transforms

This class of quantum algorithms usually provides an apparent exponential speedup over the classical counterparts. The algorithms in this class attempt to construct a superposition

$$\sum_{x} \alpha_x |x\rangle$$

whose amplitudes α_x form a periodic or almost periodic sequence. In many cases it is then possible to perform the discrete Fourier transform on the (exponentially many) amplitudes with only a polynomial number of operations on quantum bits (see, e.g., [59.17]). This structure arranges the interference of quantum computational paths by using centuries-old knowledge of discrete Fourier transforms.
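The underlying Fourier mechanism can be illustrated classically: a periodic amplitude sequence produces sharp spectral peaks under the discrete Fourier transform, from which the period can be read off. The length and period below are arbitrary illustrative choices; the quantum algorithms perform the analogous transform on exponentially many amplitudes with polynomially many gates.

```python
import numpy as np

# Periodic amplitudes (spikes every `period` positions) produce DFT peaks
# at multiples of M/period; the spacing of the peaks reveals the period.
M, period = 64, 8
amps = np.array([1.0 if i % period == 0 else 0.0 for i in range(M)])
amps /= np.linalg.norm(amps)                      # normalize like a state

spectrum = np.abs(np.fft.fft(amps, norm="ortho")) ** 2
peaks = np.flatnonzero(spectrum > 1e-9)

print(peaks.tolist())           # multiples of M/period = 8
print(M // int(np.diff(peaks)[0]))   # recovered period: 8
```

In Shorʼs algorithm the periodic sequence arises from modular exponentiation, and measuring after the quantum Fourier transform yields (with high probability) information equivalent to these peak positions.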

The approach has been very successful. The Deutsch–Jozsa algorithm [59.13], Simonʼs algorithm [59.14], and Shorʼs algorithm [59.4] all control their amplitudes by a structure given by discrete Fourier transforms. The most prominent examples of exponential speedups are provided by Fourier transform-based quantum algorithms. For details, see [59.17] or [59.19].

Amplitude Amplification Methods

The amplitude amplification method was presented by Grover in 1996 [59.30] and has been extended since. The basic form of Groverʼs method involves a function f: {0,1}^N → {0,1}, assumed to be computable in polynomial time, applied to a superposition

$$\sum_{x \in \{0,1\}^N} \alpha_x |x\rangle\,.$$

The purpose is to use quantum interference to increase the joint squared absolute value of those amplitudes α_x for which f(x) = 1, to make such values x more likely to be observed. Grover presented an iterative procedure for this purpose; an iteration step typically involves one evaluation of f.

Among all quantum algorithms, the amplitude amplification methods perhaps best deserve to be called purely quantum methods, as all the other families have an analog or a counterpart in classical computing. Groverʼs method has none; it is a method purely originating from quantum computing.

The most remarkable consequence of Groverʼs method is that the general NP-complete problem can be solved (with high probability) by quantum computers using only $O(\sqrt{2^N})$ evaluations of the function f. Compared to the $O(2^N)$ evaluations needed in the classical case, this is an essential improvement, but not yet an exponential one: $\sqrt{2^N} = (\sqrt{2})^N$ is again an exponential function, although with a smaller base. For details, see [59.17] or [59.19].
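Groverʼs iteration is easy to simulate with explicit state vectors for small N. Each step below consists of the oracle sign flip and the inversion-about-the-mean diffusion step; the marked index is an arbitrary choice.

```python
import numpy as np

# State-vector simulation of Grover's amplitude amplification over N = 8
# qubits (2**8 = 256 candidates, one marked solution). Roughly
# (pi/4) * sqrt(2**N) iterations drive the success probability near 1.
N = 8
size = 2**N
marked = 77                                  # arbitrary solution index

state = np.full(size, 1/np.sqrt(size))       # uniform superposition (59.9)
iterations = int(np.floor(np.pi/4 * np.sqrt(size)))

for _ in range(iterations):
    state[marked] *= -1                      # oracle: phase flip where f(x)=1
    state = 2*state.mean() - state           # diffusion: inversion about mean

p_success = float(state[marked] ** 2)
print(iterations, round(p_success, 3))       # 12 iterations, probability ≈ 1
```

Note that only about √(2^N) = 16 oracle calls are budgeted here, versus up to 2^N = 256 evaluations for exhaustive classical search.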

Quantum Random Walks

Quantum random walks are a straightforward analog of classical random walks. Quantum random walks are known sometimes to have exponentially faster hitting times than their classical counterparts [59.31], and the technique can be attempted on a great variety of computational problems. This article is too narrow to address quantum random walks in detail; for an exposition, see [59.32].
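A minimal simulation of a coined quantum walk on the line (the Hadamard coin and the symmetric initial state below are standard textbook choices, not prescriptions from the cited works) illustrates the faster spreading: the standard deviation after t steps grows linearly in t, whereas the classical random walk spreads only as √t.

```python
import numpy as np

steps = 100
positions = 2 * steps + 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# amp[p, c]: amplitude of the walker at position p with coin state c;
# the symmetric initial coin (|0> + i|1>)/sqrt(2) gives an unbiased walk.
amp = np.zeros((positions, 2), dtype=complex)
amp[steps, 0] = 1 / np.sqrt(2)
amp[steps, 1] = 1j / np.sqrt(2)

for _ in range(steps):
    amp = amp @ H.T                    # toss the quantum coin everywhere
    shifted = np.zeros_like(amp)
    shifted[1:, 0] = amp[:-1, 0]       # coin 0: step right
    shifted[:-1, 1] = amp[1:, 1]       # coin 1: step left
    amp = shifted

probs = (np.abs(amp) ** 2).sum(axis=1)
x = np.arange(positions) - steps
sigma = np.sqrt((probs * x ** 2).sum() - (probs * x).sum() ** 2)
print(sigma)   # grows linearly in steps; a classical walk gives sqrt(steps) = 10
```

The interference between the two coin components is what prevents the distribution from collapsing into the classical Gaussian shape.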

Adiabatic Quantum Algorithms

Adiabatic quantum computing is not so much a single algorithm as a technique for designing quantum algorithms; in its generality, it can be loosely compared to classical evolutionary algorithms. Adiabatic quantum computing can be adapted to any computational problem.

Adiabatic quantum computing is based on the adiabatic theorem, which says that if a Hamiltonian operator H_0 is transformed into another Hamiltonian H_1 slowly enough, then the ground state |x_0⟩ of H_0 is transformed into the ground state |x_1⟩ of H_1 as well. The terminology is not explained here; the reader is advised to consult [59.33]. Instead of detailed definitions and descriptions, we just mention that adiabatic computation has been shown to be exactly as powerful as quantum computing based on quantum gates [59.33]. The technique of adiabatic computing merely offers the advantage that explicit algorithm design can be circumvented in some cases.
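The following numpy sketch (a toy three-qubit instance with an invented cost function, not a model of a physical adiabatic computer) illustrates the idea: interpolating slowly from H_0 = −∑ X_i, whose ground state is the uniform superposition, to a diagonal H_1 encoding a cost function keeps the state near the instantaneous ground state, so a final measurement reveals the minimal-cost bit string.

```python
import numpy as np

n = 3
dim = 2 ** n
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# H0 = -sum_i X_i: its ground state is the uniform superposition.
H0 = np.zeros((dim, dim))
for i in range(n):
    op = np.array([[1.0]])
    for j in range(n):
        op = np.kron(op, X if j == i else np.eye(2))
    H0 -= op

# H1 encodes a made-up cost function; its ground state is the basis
# vector of the minimal-cost bit string, here index 3 (cost 1).
cost = np.array([5.0, 3.0, 6.0, 1.0, 7.0, 4.0, 8.0, 2.0])
H1 = np.diag(cost)

state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # ground state of H0
T, steps = 50.0, 5000                                  # slow schedule
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps
    vals, vecs = np.linalg.eigh((1 - s) * H0 + s * H1)
    # exact evolution under the piecewise-constant Hamiltonian H(s)
    state = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ state))

print(np.argmax(np.abs(state) ** 2))   # should print 3, the argmin of cost
```

If the total time T is shortened too much, the evolution becomes diabatic and the final state leaks into excited states; the required T scales with the inverse square of the minimal spectral gap along the path.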

Restrictions of Quantum Computing

Theoretical computer science seems to suffer from a shortage of provable absolute limitations on computational models. It has already been mentioned that the question P ≠ NP awaits resolution, and there are many analogous unsolved problems. From the quantum computing point of view, the most important such problem is probably: is polynomial-time quantum computing more powerful than its classical counterpart, at least for some problems? Very likely this problem is at least as difficult as the P versus NP problem, and in light of present understanding, there is no apparent route to even approach these problems.

However, there are easy ways to obtain relativized lower bounds for computational complexity. For relativization in computing we refer to [59.34], but the basic idea is easy to state: the general NP-complete problem described in an earlier section is relativized in the sense that no structure of the function f is available. I should apologize to complexity theory experts for the simplifications in this explanation (but on second thought, I will not), for the intuitive idea in a relativized lower bound is the following: if nothing is known about the structure of f (its values may be chosen arbitrarily), then deciding the existence of an x ∈ {0,1}^N with f(x) = 1 inherently takes 2^N evaluations of f, since for any procedure asking fewer values, it is possible to construct an f assigning a wrong value to a non-queried string. This certainly gives reasons to believe that P ≠ NP, but does not constitute a proof, since no polynomial-time computable function is without structure.
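The adversary idea can be made concrete in a few lines (a toy illustration of the counting argument, not a formal proof technique): if a deterministic procedure queries fewer than 2^N strings and receives only zeros, the adversary may still place the unique solution on an unqueried string.

```python
import itertools

N = 3
domain = list(itertools.product([0, 1], repeat=N))

def fooled(queried):
    """After every query in `queried` was answered 0, return an f that is
    consistent with those answers yet has a solution - or None if all
    2^N strings were inspected and no escape remains."""
    unqueried = [x for x in domain if x not in queried]
    if not unqueried:
        return None
    hidden = unqueried[0]
    return lambda x: 1 if x == hidden else 0

queried = domain[:-1]        # a procedure asking only 2^N - 1 questions...
f = fooled(queried)
print(f(domain[-1]))         # ...misses the solution: prints 1
```

Only after all 2^N values have been inspected does `fooled` return None, which is exactly the content of the relativized lower bound.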

Relativized lower bounds are known for quantum computing, too. It is, for example, possible to show that for a general relativized NP-complete problem, quantum computing offers no polynomial-time solution: at least √(2^N) computational steps are needed (which is to say that Groverʼs method is asymptotically optimal). There are various techniques for obtaining relativized lower bounds for quantum computing; the most notable are the polynomial technique [59.35] and the adversary technique [59.36].

Physical Realization of Quantum Algorithms

As of 2013, quantum algorithms have been under development for almost three decades, and much is known about them. In the previous sections, we listed four general families of quantum algorithms and mentioned some techniques for proving lower bounds for quantum computing. Even though the development may seem modest in some sense, there are enough interesting quantum algorithms to justify the effort of building a quantum computer. Unfortunately, building a quantum computer has turned out to be a very challenging task.

In principle, any quantum physical two-state system could serve as a quantum bit. Unfortunately, such a system is always very vulnerable to external disturbances, and consequently, in many realizations the lifetime of a qubit is only a tiny fraction of a second. The following physical realizations (among others) have been proposed: cold trapped ions [59.37], nuclear spins [59.38], and photon polarization [59.39] are all potential implementations of quantum bits, but the most advanced quantum computer to date (with respect to the number of quantum bits) holds only 12 quantum bits [59.40]. A quantum factoring algorithm for N-bit integers requires approximately 2N qubits [59.41]. This implies that current quantum computers cannot handle enough bits to perform universal computation that would truly challenge RSA or the other public-key cryptosystems in present use.

Hence we have to conclude that with modern technology, we cannot yet realize very much quantum computing. The most important lesson of three decades of research on quantum computing is, therefore, new insight into the theory of computing and into the relations between the theory of computation and the physical world.

Biological Applications of Quantum Computing

Even though large-scale quantum computers do not yet exist, various potential applications are already known. A fast integer factoring algorithm would be very influential, even though its influence could not be called entirely positive. Almost all quantum algorithms providing an exponential speed-up over known classical ones are designed using the quantum Fourier transform, and hence they are applicable only to problems with a suitable periodic structure. Such structures do not typically exist in biological problems, and therefore an exponential speed-up on biological problems is not very likely.

On the other hand, a quantum computer would provide a quadratic speed-up on all search problems, and there the potential applications are numerous. For instance, protein folding problems are typical search problems that could benefit from a quadratic speed-up; but since this would apply to any search problem, we are not going to list specifically biological search problems.

Instead, we will conclude this article by pointing out a specific problem that quantum computers are good at and which may have consequences for biology as well. This specific problem also closes the circle: quantum computers are good at simulating quantum physics, just as Feynman explained in his article [59.10]. In general, simulating a quantum mechanical system of N particles on a classical computer seems to incur an exponential slowdown in simulation efficiency, and Feynman proposed that a quantum computer could be used to avoid the slowdown.
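A back-of-the-envelope calculation (assuming double-precision complex amplitudes, i.e., 16 bytes per amplitude) shows how quickly the classical cost of storing such a state explodes:

```python
def state_vector_bytes(n):
    # an n-qubit (two-state particle) system has 2^n complex amplitudes
    return (2 ** n) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n) / 2 ** 30, "GiB")
# 30 particles already demand 16 GiB; 50 demand over 16 million GiB.
```

A quantum computer sidesteps this blow-up by storing the 2^n amplitudes in the physical state of n qubits.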

As quantum mechanics governs microsystems, and large biomolecules are built of smaller particles, one could expect quantum mechanics to have important explanatory value for biomolecular processes [59.42]. Perhaps it does, but there is not much existing research on this topic. One reason is that structures like DNA are so complex from the physical perspective that even modeling them becomes extremely hard, to say nothing of explicit solutions. Another reason is that the help provided by computers will not reach very far – as long as we do not have quantum computers.