1 Introduction

Quantum image processing (QIP) is an emerging sub-discipline focused on extending conventional image processing tasks and operations to the quantum computing framework. It is primarily devoted to utilizing quantum computing technologies to capture, manipulate, and recover quantum images in different formats and for different purposes [1, 2]. In terms of applications, the available literature on quantum image processing can be broadly classified into two groups: quantum-inspired image processing and classically inspired quantum image processing [3–5]. Due to some of the astounding properties inherent to quantum computation, notably entanglement and parallelism, it is anticipated that QIP technologies will offer capabilities and performances that are, as yet, unrivalled by their traditional equivalents [1, 2]. The quantum computer has demonstrated a bright prospect over the classical computer, particularly in Feynman’s computation model [6], Deutsch’s quantum parallelism assertion [7], Shor’s integer factoring algorithm [8], and Grover’s database searching algorithm [9]. The development of QIP started in 1997, when Vlasov [10] proposed using quantum computation for image recognition. This was closely followed by quantum imaging [11], quantum signal processing [12], and pattern recognition on a quantum computer [13].

The primary work of QIP begins with how to represent images on a quantum computer. The pioneering work is the Qubit Lattice quantum image model, which uses one qubit to hold one pixel [14]. Following that, entangled images, in which geometric shapes are encoded in quantum states [15], and Real Ket, in which images are quantum states having gray levels as coefficients of the states [16], were proposed by Venegas-Andraca and Latorre, respectively. More recently, Le et al. [17] proposed the flexible representation of quantum images (FRQI), which uses a quantum superposition state to store the colors and the corresponding positions of an image. Years later, more quantum image representations were proposed. The novel enhanced quantum representation (NEQR) [18], which uses q qubits to encode the gray-scale values, can perform complex and elaborate color operations more conveniently than FRQI. The quantum log-polar image [19] was proposed as a novel quantum image representation storing images sampled in log-polar coordinates. A color image representation utilized two sets of quantum states to store M colors and N coordinates, respectively [20]. A normal arbitrary quantum superposition state was used to represent a multi-dimensional image [21]. A simple quantum representation of infrared images was proposed based on the characteristic that infrared images reflect the infrared radiation energy of objects [22]. A novel quantum representation of color digital images was proposed in 2016 [23]. Then, based on these quantum image representation models, many quantum image processing algorithms have been developed, including quantum image geometric transformation [24–28], quantum image scaling [29–32], quantum image scrambling [33–35], quantum image watermarking [36–42], and quantum image matching [43, 44].

Image edge extraction is an important issue in digital image processing. The edge of an image is defined as a set of locally connected picture elements (pixels) characterized by rapid intensity variations. In order to find all the discontinuous pixels, every classical algorithm needs to analyze and calculate the gradient of the image intensity for every pixel. Thus, for a \( 2^{n} \times 2^{n} \) image, any classical edge extraction algorithm has a computational complexity of no less than \( {\rm O}(2^{2n} ) \). This is the main reason that classical edge extraction algorithms struggle to run in real time as the amount of image data increases sharply. On the contrary, QIP utilizes the unique superposition and entanglement properties of quantum mechanics, which allow all the pixels of an image to be processed simultaneously. This is also why QIP algorithms are faster than their classical counterparts.

At present, research concerning quantum image edge detection is relatively rare. The edge extraction algorithms proposed in [45, 46] are both based on the Qubit Lattice model and do not resolve the overwhelming computational complexity problem, because the Qubit Lattice model does not exploit the unique properties of quantum mechanics, i.e., superposition, entanglement, and so on; their computational complexity therefore remains \( {\rm O}(2^{2n} ) \) for a \( 2^{n} \times 2^{n} \) image. Following that, based on the FRQI and NEQR models, Zhang et al. [47, 48] proposed two quantum image edge extraction algorithms, respectively, whose circuit complexity realizes an exponential decrease. However, their schemes do not consider the case in which the original input quantum image is noisy, and they will therefore misclassify noise pixels as edge points. In order to address this problem, we investigate an enhanced quantum image edge detection algorithm based on the NEQR model, which adopts classical Laplacian filtering to smooth the original image first; then, the zero-cross method is used to detect the image edge.

This paper is organized as follows. Section 2 briefly introduces the quantum image models FRQI and NEQR, as well as edge detection based on the Laplacian operator and the zero-cross method. Section 3 presents a sequence of predefined quantum circuit modules that are used in our investigated quantum edge detection algorithm. Our proposed quantum edge detection algorithm as well as its circuit design is described in detail in Sect. 4. Section 5 analyzes the circuit complexity and the simulation results. Conclusions are drawn in Sect. 6.

2 Related works

2.1 The flexible representation of quantum images (FRQI)

The quantum image model of FRQI [17] integrates the information of an image into a normalized quantum state given by

$$ |I(\theta )\rangle = \frac{1}{{2^{n} }}\sum\limits_{i = 0}^{{2^{2n} - 1}} {(\cos \theta_{i} |0\rangle + \sin \theta_{i} |1\rangle )} |i\rangle , $$
(1)

where \( |i\rangle ,\;i = 0,1, \ldots ,2^{2n} - 1 \), are the \( 2^{2n} \)-dimensional computational basis quantum states encoding the position information of the image, and \( \theta = (\theta_{0} ,\theta_{1} , \ldots ,\theta_{{2^{2n} - 1}} ),\;\theta_{i} \in [0,\pi /2] \), is the vector of angles encoding the color information of the corresponding positions. The quantum state of FRQI is a normalized state, meaning that \( \left\| {|I(\theta )\rangle } \right\| = 1 \):

$$ \left\| {|I(\theta )\rangle } \right\| = \frac{1}{{2^{n} }}\sqrt {\sum\limits_{i = 0}^{{2^{2n} - 1}} {\left( {\cos^{2} \theta_{i} + \sin^{2} \theta_{i} } \right)} } = 1 $$
(2)

A simple image and its FRQI state are shown in Fig. 1.

Fig. 1 A simple image and its FRQI state

2.2 The novel enhanced quantum representation of digital images

The novel enhanced quantum representation of digital images [18] is described as follows:

Supposing the range of the gray-scale values is from 0 to \( 2^{q} - 1 \), the gray-scale value \( C_{YX} \) of the pixel at coordinate \( (Y,X) \) can be expressed by Eq. (3).

$$ C_{YX} = C_{YX}^{q - 1} C_{YX}^{q - 2} \ldots C_{YX}^{1} C_{YX}^{0} ,C_{YX}^{k} \in \{ 0,1\} ,\quad C_{YX} \in [0,2^{q} - 1] $$
(3)

Hence, NEQR for a \( 2^{n} \times 2^{n} \) quantum image can be written as:

$$ \left| I \right\rangle = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {C_{YX} } \right\rangle \left| Y \right\rangle \left| X \right\rangle} }= \frac{1}{{2^{n} }}\sum\limits_{YX = 0}^{{2^{2n} - 1}} {\mathop \otimes \limits_{k = 0}^{q - 1} \left| {C_{YX}^{k} } \right\rangle } \left| {YX} \right\rangle $$
(4)

Figure 2 shows an example of a \( 2 \times 2 \) image on the left and its corresponding NEQR state on the right.

Fig. 2 An example of a 2 × 2 image and its NEQR

Compared to the FRQI model, the NEQR model uses a q-qubit sequence containing \( 2^{q} \) computational basis states to encode the gray scale, rather than a single-qubit superposition state that encodes the gray-scale information in the amplitude of the qubit. The color information stored in the q-qubit sequence of the NEQR model supports more complex operations than the FRQI model does. Furthermore, based on the NEQR model, the pixel information can be retrieved exactly instead of probabilistically as in the FRQI model. Therefore, the NEQR model, rather than the FRQI model, is chosen for investigating the quantum image edge detection algorithm.
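As a concrete illustration of Eq. (4) (a minimal classical sketch, not part of the original scheme; the function name and the example pixel values are our own), the following Python snippet builds the NEQR amplitude vector of a small gray-scale image by placing amplitude \( 1/2^{n} \) at the basis index corresponding to \( \left| {C_{YX} } \right\rangle \left| Y \right\rangle \left| X \right\rangle \):

```python
import numpy as np

def neqr_statevector(img, q):
    """Hypothetical helper: NEQR amplitude vector of a 2^n x 2^n image with q-bit pixels."""
    img = np.asarray(img, dtype=np.int64)
    n = int(np.log2(img.shape[0]))              # image side length is assumed to be 2^n
    state = np.zeros(2 ** (q + 2 * n), dtype=complex)
    for Y in range(2 ** n):
        for X in range(2 ** n):
            idx = (int(img[Y, X]) << (2 * n)) | (Y << n) | X   # index of |C_YX>|Y>|X>
            state[idx] = 1.0 / 2 ** n                          # uniform amplitude 1/2^n
    return state

# Example: a 2 x 2 image with 8-bit gray levels (values chosen arbitrarily)
psi = neqr_statevector([[0, 100], [200, 255]], q=8)
assert np.isclose(np.linalg.norm(psi), 1.0)                    # the NEQR state is normalized
```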

2.3 Edge detection

Most classical edge detection algorithms are based on gradients and local maxima, such as the Sobel, Prewitt, and Roberts methods, which find edges using their own approximations to the derivative and return edges at those points where the gradient of the original image is maximal. However, these methods cannot obtain satisfactory results on curved boundaries: they are effective for straight-line boundaries rather than curved ones. In order to obtain ideal edge information for both straight and curved boundaries in an image, we present a new quantum image edge detection algorithm, that is, finding edges by looking for zero crossings after filtering the original image with a Laplacian filter.

2.3.1 Laplacian operator

The Laplacian operator [49], a two-dimensional equivalent of the second derivative, is constructed according to the principle that the second derivative crosses zero at edge points. Its functional form is defined in Eq. (5)

$$ \nabla^{2} f = \frac{{\partial^{2} f}}{{\partial x^{2} }} + \frac{{\partial^{2} f}}{{\partial y^{2} }} $$
(5)

The second-order partial derivative in the x-direction is approximated by the difference equation defined in Eq. (6)

$$ \frac{{\partial^{2} f}}{{\partial x^{2} }} = \frac{{\partial \left( {f(i,j + 1) - f(i,j)} \right)}}{\partial x} = f(i,j + 1) - 2f(i,j) + f(i,j - 1) $$
(6)

Similarly, the second-order partial derivative in the y-direction is approximated by the difference equation in Eq. (7)

$$ \frac{{\partial^{2} f}}{{\partial y^{2} }} = \frac{{\partial \left( {f(i + 1,j) - f(i,j)} \right)}}{\partial y} = f(i + 1,j) - 2f(i,j) + f(i - 1,j) $$
(7)

In practical applications, we always consider the four directions within a \( 3 \times 3 \) pixel neighborhood; that is to say, the mask of the Laplacian operator is defined as:

$$ \nabla^{2} = \left( {\begin{array}{*{20}c} 1 & 1 & 1 \\ 1 & { - \;8} & 1 \\ 1 & 1 & 1 \\ \end{array} } \right) $$
(8)

which can be used to smooth a noisy image before implementing the image edge extraction algorithm.

Thus, the pixel \( S(i,j) \) in the image smoothed via Laplacian filtering can be calculated as follows:

$$ S(i,j) = \left| {\begin{array}{*{20}l} {f(i - 1,j - 1) + f(i - 1,j) + f(i - 1,j + 1) + f(i,j - 1)} \hfill \\ { + f(i,j + 1) + f(i + 1,j - 1) + f(i + 1,j) + f(i + 1,j + 1) - 8f(i,j)} \hfill \\ \end{array} } \right| $$
(9)
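For clarity, a minimal classical sketch of Eq. (9) is given below (our own illustration, not the quantum circuit of Sect. 4); it applies the \( 3 \times 3 \) mask of Eq. (8) with cyclic boundary handling, mirroring the cyclic shift transformations used later:

```python
import numpy as np

def laplacian_filter(img):
    """Hypothetical helper: |S(i, j)| of Eq. (9) with wrap-around borders."""
    f = np.asarray(img, dtype=np.int64)
    s = np.zeros_like(f)
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    for dy, dx in offsets:
        # np.roll with negative shifts places f(i+dy, j+dx) at position (i, j)
        s += np.roll(f, (-dy, -dx), axis=(0, 1))
    return np.abs(s - 8 * f)          # sum of the eight neighbors minus 8 times the center
```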

2.3.2 Zero-cross method

It is obvious from basic calculus that local maxima of the first derivative correspond to zero crossings of the second derivative. Edge pixels therefore exhibit both a maximum of the first derivative and a zero crossing of the second derivative. Based on the zero-cross method [50, 51], the second derivatives in four directions within the \( 3 \times 3 \) neighborhood of pixel (Y, X) (i.e., up and down, left and right, and the two diagonals) are calculated as follows, and the corresponding sketch map is illustrated in Fig. 3.

$$ \begin{aligned} G_{{_{YX} }}^{1} & = \left[ {f(Y + 1,X) - f(Y,X)} \right] - \left[ {f(Y,X) - f(Y - 1,X)} \right] \\ & = f(Y + 1,X) + f(Y - 1,X) - 2 \times f(Y,X) \\ G_{{_{YX} }}^{2} & = \left[ {f(Y + 1,X - 1) - f(Y,X)} \right] - \left[ {f(Y,X) - f(Y - 1,X + 1)} \right] \\ & = f(Y + 1,X - 1) + f(Y - 1,X + 1) - 2 \times f(Y,X) \\ G_{{_{YX} }}^{3} & = \left[ {f(Y,X + 1) - f(Y,X)} \right] - \left[ {f(Y,X) - f(Y,X - 1)} \right] \\ & = f(Y,X + 1) + f(Y,X - 1) - 2 \times f(Y,X) \\ G_{{_{YX} }}^{4} & = \left[ {f(Y + 1,X + 1) - f(Y,X)} \right] - \left[ {f(Y,X) - f(Y - 1,X - 1)} \right] \\ & = f(Y + 1,X + 1) + f(Y - 1,X - 1) - 2 \times f(Y,X) \\ \end{aligned} $$
(10)
Fig. 3 The \( 3 \times 3 \) neighborhood pixels of pixel (Y, X), and the sub-gradients in four directions in this mask

However, in real applications, there exist points whose gradient is larger than that of other points but which are not actually edge points. Therefore, in this paper, the gradients in the four directions, i.e., \( G_{YX}^{1} \), \( G_{YX}^{2} \), \( G_{YX}^{3} \), and \( G_{YX}^{4} \), calculated according to the zero-cross method are compared with a given threshold T, and those pixels whose gradients reach the threshold are marked as edge points. That is to say, if any one of them is equal to or greater than T (i.e., \( \left| {G_{YX}^{1} } \right| \ge T \) or \( \left| {G_{YX}^{2} } \right| \ge T \) or \( \left| {G_{YX}^{3} } \right| \ge T \) or \( \left| {G_{YX}^{4} } \right| \ge T \)), the pixel is classified as an edge point; otherwise, it is not.
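The following minimal classical sketch (our own, with a hypothetical function name) computes the four directional second derivatives of Eq. (10) on a filtered image L and applies the threshold test above:

```python
import numpy as np

def zero_cross_edges(L, T):
    """Hypothetical helper: binary edge map from Eq. (10) and threshold T."""
    L = np.asarray(L, dtype=np.int64)
    shift = lambda dy, dx: np.roll(L, (-dy, -dx), axis=(0, 1))   # value L(Y+dy, X+dx) at (Y, X)
    g1 = shift(1, 0)  + shift(-1, 0)  - 2 * L    # up/down direction
    g2 = shift(1, -1) + shift(-1, 1)  - 2 * L    # one diagonal
    g3 = shift(0, 1)  + shift(0, -1)  - 2 * L    # left/right direction
    g4 = shift(1, 1)  + shift(-1, -1) - 2 * L    # the other diagonal
    # a pixel is an edge point if any |G^i| reaches the threshold T
    return (np.abs(np.stack([g1, g2, g3, g4])) >= T).any(axis=0).astype(np.uint8)
```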

3 Quantum arithmetic operations for image edge extraction

3.1 Quantum comparator

The quantum comparator (QC) illustrated in [52] can be used to compare the magnitude relationship between two numbers. As illustrated in Fig. 4, where \( \left| A \right\rangle = \left| {a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} } \right\rangle \) and \( \left| B \right\rangle = \left| {b_{n - 1} b_{n - 2} \ldots b_{1} b_{0} } \right\rangle \), the two output qubits of the QC module represent the result of the comparison as follows (a classical illustration is given after the list):

Fig. 4 Quantum circuit for quantum comparator

  • If \( c_{1} c_{0} = 10 \), then \( A > B \);

  • If \( c_{1} c_{0} = 01 \), then \( A < B \);

  • If \( c_{1} c_{0} = 00 \), then \( A = B \).
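The semantics of the two output qubits can be emulated classically as follows (our own illustration of the comparison rule above, not the circuit of [52]):

```python
def compare(A, B):
    """Return (c1, c0) as defined by the QC module's comparison rule."""
    if A > B:
        return (1, 0)   # c1 c0 = 10
    if A < B:
        return (0, 1)   # c1 c0 = 01
    return (0, 0)       # c1 c0 = 00
```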

3.2 Quantum image shift transformations

Le et al. [25] defined the cyclic shift transformation, which can be used to shift the whole image so that every pixel can access the information of its neighborhood simultaneously. Here, the cyclic shift (CS) transformation along the X-axis (i.e., \( {\text{CS}}_{X + } \) and \( {\text{CS}}_{X - } \)) and its matrix expression for a NEQR image of size \( 2^{n} \times 2^{n} \) are given as follows:

$$ \begin{aligned} {\text{CS}}_{X + } \left| I \right\rangle & = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {C_{{YX^{\prime } }} } \right\rangle \left| Y \right\rangle \left| {\left( {X + 1} \right)\, \bmod \;2^{n} } \right\rangle } } ,\;X^{\prime } = \left( {X - 1} \right)\, \bmod \;2^{n} \\ {\text{CS}}_{X - } \left| I \right\rangle & = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {C_{{YX^{\prime } }} } \right\rangle \left| Y \right\rangle \left| {\left( {X - 1} \right)\, \bmod \;2^{n} } \right\rangle } } ,\;X^{\prime } = \left( {X + 1} \right)\, \bmod \;2^{n} \\ {\text{CS}}_{X + } & = \left[ {\begin{array}{*{20}l} 0 \hfill & 1 \hfill \\ {I_{{2^{n} - 1}} } \hfill & 0 \hfill \\ \end{array} } \right],\;{\text{CS}}_{X - } = \left[ {\begin{array}{*{20}l} 0 \hfill & {I_{{2^{n} - 1}} } \hfill \\ 1 \hfill & 0 \hfill \\ \end{array} } \right] \\ \end{aligned} $$
(11)

where \( X^{\prime} = \left( {X \mp 1} \right)\bmod \;2^{n} \) (Fig. 5).

Fig. 5 Quantum circuit for cyclic transformation: a \( {\text{CS}}_{x + } \), b \( {\text{CS}}_{x - } \)

Based on the above analysis, it is easy to see that \( {\text{CS}}_{X + } \) and \( {\text{CS}}_{X - } \) move all the pixels of a quantum image one unit to the right and left, respectively. It is worth noting that this operation is cyclic, so the border pixels of the image wrap around to the opposite border in the transformed image. The example in Fig. 6 demonstrates the functions of \( {\text{CS}}_{X + } \) and \( {\text{CS}}_{X - } \).

Fig. 6 a The original image, b the transformed image after the \( {\text{CS}}_{X + } \) operation, c the transformed image after the \( {\text{CS}}_{X - } \) operation

For a two-dimensional digital image, the cyclic shift operations along the X-axis and Y-axis are symmetric and similar, so the cyclic shift transformation \( {\text{CS}}_{Y \pm } \) along the Y-axis is omitted here for brevity. According to the circuit complexity analysis reported in [25], the time complexity of the cyclic shift transformations along both the X-axis and the Y-axis is no more than \( {\rm O}(n^{2} ) \) for an n-qubit position sequence.
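The effect of \( {\text{CS}}_{X \pm } \) on the pixel grid can be emulated classically with a cyclic array shift (a minimal sketch of the action of Eq. (11), not the quantum circuit; the function name is our own):

```python
import numpy as np

def cs_x(img, direction=+1):
    """Hypothetical helper: direction=+1 emulates CS_{X+}, direction=-1 emulates CS_{X-}."""
    # every pixel moves one unit along the X-axis; border pixels wrap around cyclically
    return np.roll(np.asarray(img), shift=direction, axis=1)
```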

3.3 Parallel Controlled-NOT operations

The parallel Controlled-NOT (CNOT) operation utilizes n CNOT gates to copy an n-qubit computational basis state \( \left| C \right\rangle = \left| {c_{n - 1} c_{n - 2} \ldots c_{0} } \right\rangle \) into the ancillary qubits \( \left| 0 \right\rangle^{ \otimes n} \). The quantum circuit for this operation is shown in Fig. 7.

Fig. 7 Quantum circuit for parallel Controlled-NOT operations
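Assuming Qiskit is available, a minimal sketch of this copy operation on a q-qubit basis-encoded register is shown below (the register layout and function name are our own choices):

```python
from qiskit import QuantumCircuit

def parallel_cnot_copy(q):
    """q CNOT gates copying the basis state |C> (qubits 0..q-1) into ancillae (qubits q..2q-1)."""
    qc = QuantumCircuit(2 * q)
    for k in range(q):
        qc.cx(k, q + k)     # copy bit c_k into the k-th ancillary qubit
    return qc
```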

3.4 Quantum operation of multiplied by powers of 2

Based on the particularities of binary arithmetic, we first give a theorem, assuming an n-bit binary number \( A = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} \).

Theorem 1

\( 2A = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} 0 \).

Proof

It is easy to know that \( A = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} \) has another representation written as follows:

$$ A = a_{n - 1} \times 2^{n - 1} + a_{n - 2} \times 2^{n - 2} + \cdot \cdot \cdot + a_{1} \times 2^{1} + a_{0} \times 2^{0} $$

Thus,

$$ \begin{aligned} 2A & = 2\left( {a_{n - 1} \times 2^{n - 1} + a_{n - 2} \times 2^{n - 2} + \cdot \cdot \cdot + a_{1} \times 2^{1} + a_{0} \times 2^{0} } \right) \\ & = a_{n - 1} \times 2^{n} + a_{n - 2} \times 2^{n - 1} + \cdot \cdot \cdot + a_{1} \times 2^{2} + a_{0} \times 2^{1} \\ & = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} 0 \\ \end{aligned} $$

Based on Theorem 1, we can deduce that \( 2^{n} A = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} \underbrace {0 \cdot \cdot \cdot 0}_{n} \). Therefore, the quantum circuit for multiplying an integer by \( 2^{n} \) can be realized by appending n ancillary qubits \( \left| 0 \right\rangle^{ \otimes n} \) below the lowest qubit, i.e., \( 2^{n} \left| A \right\rangle = \left| {a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} \underbrace {0 \cdot \cdot \cdot 0}_{n}} \right\rangle \).
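Classically, Theorem 1 is simply a left shift; the following check (our own) confirms that appending n zeros to the binary expansion of A multiplies it by \( 2^{n} \):

```python
A, n = 0b1011, 3                              # A = 11, n = 3
assert (A << n) == A * 2 ** n == 0b1011000    # appending n zeros equals multiplying by 2^n
```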

3.5 Quantum absolute module

The quantum circuit for calculating the absolute value (CAV) module illustrated in [35] can be used to calculate the absolute value of the difference between two integers. The CAV module contains two basic quantum modules: a reversible parallel subtractor (RPS) and a complement operation (CO). Figure 8 gives the simplified quantum module showing how the RPS and CO modules construct the CAV module, wherein \( \left| Y \right\rangle = \left| {y_{n - 1} y_{n - 2} \ldots y_{0} } \right\rangle \) and \( \left| X \right\rangle = \left| {x_{n - 1} x_{n - 2} \ldots x_{0} } \right\rangle \). For more detailed information, readers are referred to [35].

Fig. 8 Quantum circuit for CAV module
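A minimal classical sketch of the CAV idea is given below (our own illustration of the RPS-plus-CO composition, not the circuit of [35]; the function name is hypothetical):

```python
def absolute_difference(Y, X, q):
    """|Y - X| for q-bit operands via two's-complement subtraction and a conditional complement."""
    width = q + 1                      # one extra bit so the sign of Y - X is always represented
    mask = (1 << width) - 1
    diff = (Y - X) & mask              # reversible parallel subtractor (RPS) in two's complement
    if diff >> (width - 1):            # sign bit set, i.e., Y < X
        diff = (~diff + 1) & mask      # complement operation (CO) restores the magnitude
    return diff
```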

3.6 Quantum ripple-carry adder

Cuccaro et al. [53] proposed a quantum ripple-carry adder (QRCA for short) based on the ripple-carry approach, which starts with the low-order bits of the input and works up to the high-order bits. Suppose that our goal is to compute the sum of two n-bit numbers A and B, where \( A = a_{n - 1} a_{n - 2} \ldots a_{1} a_{0} \) and \( B = b_{n - 1} b_{n - 2} \ldots b_{1} b_{0} \), with \( a_{0} \) and \( b_{0} \) being the lowest-order bits, respectively. The integrated quantum circuit for the QRCA is illustrated in Fig. 9. Therein, Fig. 9a shows the circuit design for the QRCA, in which the two basic quantum circuit modules MAJ and UMA are shown in Fig. 9b. Figure 9c gives the simplified module of the QRCA that omits some ancillary and garbage qubits.

Fig. 9 Quantum circuit for QRCA and its simplified module
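A minimal classical emulation of the ripple-carry idea behind the QRCA is given below (our own sketch; it reproduces the carry propagation from low-order to high-order bits, not the MAJ/UMA circuit itself):

```python
def ripple_carry_add(a_bits, b_bits):
    """Bitwise ripple-carry addition; index 0 holds the lowest-order bits a_0 and b_0."""
    carry, s = 0, []
    for a, b in zip(a_bits, b_bits):
        s.append(a ^ b ^ carry)                  # sum bit
        carry = (a & b) | (carry & (a ^ b))      # carry ripples toward the high-order bits
    s.append(carry)                              # final carry-out (the extra high-order qubit)
    return s

# Example: 6 + 3 = 9, i.e., 110 + 011 = 1001 (bits listed low-order first)
assert ripple_carry_add([0, 1, 1], [1, 1, 0]) == [1, 0, 0, 1]
```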

4 Quantum image edge detection based on NEQR model

In this section, our investigated quantum image edge detection algorithm based on the NEQR model is introduced first. Then, the corresponding quantum circuit design for image edge extraction is presented.

4.1 Algorithms for edge detection

In order to detect the image edge information accurately, our proposed quantum image edge detection algorithm can be described as follows:

  1. Prepare the original quantum image \( \left| I \right\rangle \) based on the NEQR model.

  2. Use a Laplacian filter to process the original quantum image \( \left| I \right\rangle \), and denote the filtered image as \( \left| L \right\rangle \).

  3. Compute the second derivatives of the pixels of \( \left| L \right\rangle \) in four directions according to the zero-cross method, and record them as \( \left| G \right\rangle \).

  4. Compare the absolute values of the pixels of \( \left| G \right\rangle \) with a given threshold and mark those no less than the threshold as edge points; the remaining pixels are non-edge points.

  5. Obtain the final edge detection output \( \left| E \right\rangle \), implement the quantum measurement operation to retrieve the classical image, and the algorithm is finished (Fig. 10). A classical emulation of this workflow is sketched after the figure caption below.

    Fig. 10 Workflow of our investigated quantum edge detection algorithm
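The following minimal classical emulation of the five steps (our own sketch; it reuses the hypothetical helpers laplacian_filter and zero_cross_edges from Sects. 2.3.1 and 2.3.2, and replaces NEQR preparation and measurement with plain array I/O) may help fix the workflow:

```python
import numpy as np

def qlaplacian_edges(img, T):
    """Classical stand-in for steps 1-5: smooth with the Laplacian mask, then threshold Eq. (10)."""
    L = laplacian_filter(img)        # step 2: Laplacian filtering, Eq. (9)
    return zero_cross_edges(L, T)    # steps 3-4: directional gradients vs. threshold T

# Example usage on a synthetic 8 x 8 test image with a bright square
img = np.zeros((8, 8), dtype=int)
img[2:6, 2:6] = 255
edges = qlaplacian_edges(img, T=64)  # binary edge map corresponding to |E> in Eq. (22)
```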

4.2 Quantum circuit design for edge detection

Based on the quantum edge detection algorithm and detailed workflow above, the corresponding quantum circuit that achieves the goal of extracting the edge of an image can be organized into three stages: Laplacian filtering, calculating the gradient, and classifying the edge points.

4.2.1 Laplacian filtering

To enhance the image edge extraction, the Laplacian operator is used to smooth the original quantum image. In order to achieve this aim, we need to prepare a \( 2^{n} \times 2^{n} \) original quantum image \( \left| I \right\rangle \) with gray-scale range \( [0,2^{q} - 1] \) as follows (Fig. 11):

$$ \left| {I_{Y,X} } \right\rangle = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {C_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle = } } \frac{1}{{2^{n} }}\sum\limits_{YX = 0}^{{2^{2n} - 1}} {\mathop \otimes \limits_{k = 0}^{q - 1} \left| {C_{Y,X}^{k} } \right\rangle } \left| Y \right\rangle \left| X \right\rangle $$
(12)
Fig. 11 Obtain the nine image pixels in any \( 3 \times 3 \) neighbor window

Based on the previous analysis, in order to implement Laplacian filtering, we need all the pixels in a neighborhood set. In this paper, for circuit simplicity, we only consider a \( 3 \times 3 \) pixel neighborhood. Assuming that enough auxiliary qubits are supplied, this task can be completed with the following steps:

  • Step 1 Apply certain cyclic shift transformations and parallel Controlled-NOT operations to \( \left| {I_{Y,X} } \right\rangle \) to obtain the eight shifted quantum image pixels of any \( 3 \times 3 \) neighborhood set simultaneously. In this step, eight groups of ancillary qubits \( \left| 0 \right\rangle^{ \otimes q} \) are used to store the other eight neighborhood pixels. Therefore, the ancillary qubits are in tensor product with the originally prepared quantum image state, expressed as follows:

    $$ \begin{aligned} \left| 0 \right\rangle^{ \otimes 8q} \otimes \left| {I_{Y,X} } \right\rangle & = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| 0 \right\rangle^{ \otimes 8q} \left| {C_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } \\ & = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| 0 \right\rangle^{ \otimes q} \cdot \cdot \cdot \left| 0 \right\rangle^{ \otimes q} \left| {C_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } \\ \end{aligned} $$
    (13)

    The quantum circuit for this step is illustrated in Fig. 11. After this step, the eight neighbors of any pixel within its \( 3 \times 3 \) neighborhood in the original quantum image \( \left| {I_{Y,X} } \right\rangle \) are computed and stored in the quantum states, encoded as follows:

    $$ \begin{aligned} & \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {C_{Y - 1,X} } \right\rangle \otimes \left| {C_{Y - 1,X + 1} } \right\rangle \otimes \left| {C_{Y,X + 1} } \right\rangle \otimes \left| {C_{Y + 1,X + 1} } \right\rangle \otimes \left| {C_{Y + 1,X} } \right\rangle } } \\ & \quad \otimes \left| {C_{Y + 1,X - 1} } \right\rangle \otimes \left| {C_{Y,X - 1} } \right\rangle \otimes \left| {C_{Y - 1,X - 1} } \right\rangle \otimes \left| {C_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle \\ \end{aligned} $$
    (14)
  • Step 2 According to the Laplacian filter method introduced in the previous section, the pixel values of the filtered image are calculated from the neighbors in Eq. (14) via some specific quantum arithmetic operations. Figure 12 gives the concrete quantum circuit realization of calculating the resulting pixel values in the filtered image.

    As illustrated in Fig. 12, this step can be described as follows:

    $$ \begin{aligned} \left| {S_{Y,X}^{1} } \right\rangle & = {\text{QRCA}}\left( {\left| {C_{Y - 1,X} } \right\rangle ,\left| {C_{Y - 1,X + 1} } \right\rangle } \right),\;\left| {S_{Y,X}^{2} } \right\rangle = {\text{QRCA}}\left( {\left| {C_{Y,X + 1} } \right\rangle ,\left| {C_{Y + 1,X + 1} } \right\rangle } \right) \\ \left| {S_{Y,X}^{3} } \right\rangle & = {\text{QRCA}}\left( {\left| {C_{Y + 1,X} } \right\rangle ,\left| {C_{Y + 1,X - 1} } \right\rangle } \right),\;\left| {S_{Y,X}^{4} } \right\rangle = {\text{QRCA}}\left( {\left| {C_{Y,X - 1} } \right\rangle ,\left| {C_{Y - 1,X - 1} } \right\rangle } \right) \\ \left| {L_{Y,X} } \right\rangle & = {\text{CAV}}\left( {\left| {C_{Y,X} \times 8} \right\rangle ,\left| {S_{Y,X}^{1} + S_{Y,X}^{2} + S_{Y,X}^{3} + S_{Y,X}^{4} } \right\rangle } \right) \\ \end{aligned} $$
    (15)
    Fig. 12 Calculate the resulting pixels value after implementing Laplacian filtering

    After this step, we can choose the desired quantum wires as the final output quantum state (a classical sketch of the computation in Eq. (15) is given after Eq. (16) below), that is:

    $$ \left| L \right\rangle = \frac{1}{{2^{n} }}\sum\limits_{YX = 0}^{{2^{2n} - 1}} {\left| {L_{Y,X} } \right\rangle \left| Y \right\rangle } \left| X \right\rangle $$
    (16)
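A classical sketch of Eq. (15) (our own; it reuses the hypothetical absolute_difference helper from Sect. 3.5 in place of the CAV module, and the function name is ours) is:

```python
def laplacian_pixel(neighbors, center, q=11):
    """Pairwise QRCA-style additions of the eight neighbors, then a CAV-style step against 8*C_{Y,X}."""
    # neighbors: C_{Y-1,X}, C_{Y-1,X+1}, C_{Y,X+1}, C_{Y+1,X+1}, C_{Y+1,X}, C_{Y+1,X-1}, C_{Y,X-1}, C_{Y-1,X-1}
    s1 = neighbors[0] + neighbors[1]          # |S^1> = QRCA(C_{Y-1,X}, C_{Y-1,X+1})
    s2 = neighbors[2] + neighbors[3]          # |S^2>
    s3 = neighbors[4] + neighbors[5]          # |S^3>
    s4 = neighbors[6] + neighbors[7]          # |S^4>
    # q = 11 bits suffice for 8-bit pixels, since 8 * 255 < 2^11
    return absolute_difference(8 * center, s1 + s2 + s3 + s4, q)
```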

4.2.2 Calculate the gradient

Based on the smoothed quantum image \( \left| L \right\rangle \) obtained via Laplacian filtering in the previous section, we further need to calculate the second derivatives in four directions within a \( 3 \times 3 \) neighborhood window. Similar to Laplacian filtering, the eight pixels in any \( 3 \times 3 \) neighborhood window first need to be computed and stored in quantum states. Then, based on these neighboring pixels, the sub-gradients in the four directions are calculated according to the zero-cross method. Therefore, the process of calculating the gradients is described in detail in the following two steps:

  • Step 1 Similar to Step 1 of the Laplacian filtering, the eight pixels of any \( 3 \times 3 \) neighborhood in the filtered quantum image \( \left| L \right\rangle \) are computed and stored in quantum states. Because the quantum circuit realization of this step is similar to Step 1 of the Laplacian filtering, the quantum circuit is omitted here for brevity. It is worth noting that ancillary qubits \( \left| 0 \right\rangle^{ \otimes 8q} \) are again in tensor product with the quantum state of image \( \left| L \right\rangle \), which can be written as follows:

    $$ \begin{aligned} \left| 0 \right\rangle^{ \otimes 8q} \otimes \left| L \right\rangle & = \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| 0 \right\rangle^{ \otimes 8q} \left| {L_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } \\ & = \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| 0 \right\rangle^{ \otimes q} \cdot \cdot \cdot \left| 0 \right\rangle^{ \otimes q} \left| {L_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } \\ \end{aligned} $$
    (17)

    The final output quantum state is then encoded as follows:

    $$ \begin{aligned} & \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {L_{Y - 1,X} } \right\rangle \otimes \left| {L_{Y - 1,X + 1} } \right\rangle \otimes \left| {L_{Y,X + 1} } \right\rangle \otimes \left| {L_{Y + 1,X + 1} } \right\rangle \otimes \left| {L_{Y + 1,X} } \right\rangle } } \\ & \quad \otimes \left| {L_{Y + 1,X - 1} } \right\rangle \otimes \left| {L_{Y,X - 1} } \right\rangle \otimes \left| {L_{Y - 1,X - 1} } \right\rangle \otimes \left| {L_{Y,X} } \right\rangle \left| Y \right\rangle \left| X \right\rangle \\ \end{aligned} $$
    (18)
  • Step 2 Based on the pixels obtained in the \( 3 \times 3 \) neighborhood window, a series of quantum arithmetic operations is implemented to compute the gradients in the four directions according to Eq. (10). Figure 13 gives the concrete quantum circuit realization of calculating the corresponding gradients in the four directions.

    Fig. 13 Quantum circuit for obtaining the eight pixels in a \( 3 \times 3 \) neighbor window in image \( \left| L \right\rangle \)

    Similarly, the desired output quantum states \( \left| Z \right\rangle \) can be expressed as follows:

    $$ \left| Z \right\rangle = \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {G_{{_{Y,X} }}^{4} } \right\rangle \left| {G_{{_{Y,X} }}^{3} } \right\rangle \left| {G_{{_{Y,X} }}^{2} } \right\rangle \left| {G_{{_{Y,X} }}^{1} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } $$
    (19)

4.2.3 Classify the edge points

In order to extract the edge of the original quantum image \( \left| I \right\rangle \), the four sub-gradients \( G_{{_{Y,X} }}^{1} ,G_{{_{Y,X} }}^{2} ,G_{{_{Y,X} }}^{3} ,G_{{_{Y,X} }}^{4} \) calculated and stored according to the zero-cross method are compared with a given threshold \( \left| T \right\rangle \). It is worth noting that an ancillary qubit \( \left| 0 \right\rangle \) is in tensor product with the quantum state \( \left| Z \right\rangle \), encoded as follows:

$$ \left| 0 \right\rangle \otimes \left| Z \right\rangle = \frac{ 1}{{ 2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| 0 \right\rangle \left| {G_{{_{Y,X} }}^{4} } \right\rangle \left| {G_{{_{Y,X} }}^{3} } \right\rangle \left| {G_{{_{Y,X} }}^{2} } \right\rangle \left| {G_{{_{Y,X} }}^{1} } \right\rangle \left| Y \right\rangle \left| X \right\rangle } } $$
(20)

The quantum circuit for detecting the edge of the original quantum image is shown in Fig. 14. Therein, the sub-gradients in the four directions \( \left| {G_{{_{Y,X} }}^{1} } \right\rangle \), \( \left| {G_{{_{Y,X} }}^{2} } \right\rangle \), \( \left| {G_{{_{Y,X} }}^{3} } \right\rangle \), and \( \left| {G_{{_{Y,X} }}^{4} } \right\rangle \) are compared with a given threshold \( \left| T \right\rangle \) via four quantum comparators, respectively. Note that the comparison result of each QC module can be denoted as follows:

$$ \begin{aligned} {\text{if}}\;\left| {G_{Y,X}^{i} } \right\rangle \ge \left| T \right\rangle ,\quad {\text{then}}\;\left| {c_{0} } \right\rangle = \left| 0 \right\rangle \hfill \\ {\text{if}}\;\left| {G_{Y,X}^{i} } \right\rangle < \left| T \right\rangle ,\quad {\text{then}}\;\left| {c_{0} } \right\rangle = \left| 1 \right\rangle \hfill \\ i = 1,2,3,4 \hfill \\ \end{aligned} $$
(21)
Fig. 14 Detect the edge of quantum image and store it in quantum states

Therefore, to store the edge information into quantum states, a NOT gate and a 4-qubit controlled-NOT gate are used to binarize the edge information and store it in the ancillary qubit \( \left| 0 \right\rangle \). That is to say, if and only if all of the gradients \( \left| {G_{{_{Y,X} }}^{1} } \right\rangle \), \( \left| {G_{{_{Y,X} }}^{2} } \right\rangle \), \( \left| {G_{{_{Y,X} }}^{3} } \right\rangle \), and \( \left| {G_{{_{Y,X} }}^{4} } \right\rangle \) are less than the threshold \( \left| T \right\rangle \), the pixel is classified as a non-edge point denoted by \( \left| 0 \right\rangle \); otherwise, it is classified as an edge point denoted by \( \left| 1 \right\rangle \). The final desired output quantum state can be expressed as follows:

$$ \left| E \right\rangle = \frac{1}{{2^{n} }}\sum\limits_{Y = 0}^{{2^{n} - 1}} {\sum\limits_{X = 0}^{{2^{n} - 1}} {\left| {E_{Y,X} } \right\rangle } } \left| Y \right\rangle \left| X \right\rangle ,\;E_{Y,X} \in \left\{ {0,\;1} \right\} $$
(22)

wherein \( E_{Y,X} = 1 \) means that the current pixel is an edge point; otherwise, it is not.

It is worth noting that, to retrieve the classical image edge information, a quantum measurement would be performed on the quantum state given in Eq. (22). However, in many practical applications, since edges carry important information, edge maps are widely used in subsequent tasks such as image segmentation, pattern recognition, and so on. That is to say, edge detection is just one step of image preprocessing, and usually we do not focus on which individual pixels are edge points; many complex operations will act on the edge superposition to obtain more significant information. Thus, we do not need to consider the measurement time required to read out the edge information, and the detailed information concerning the quantum measurement operation is omitted here.

5 Circuit complexity and simulation results analysis

In this section, the circuit complexity of our investigated edge detection algorithm is discussed first. Then, the performance of our scheme is compared with that of other edge extraction algorithms.

5.1 Circuit complexity

Here, we consider a digital image of size \( 2^{n} \times 2^{n} \) as an example. Because our proposed quantum image edge detection algorithm is realized step by step, the quantum circuit complexity is also analyzed step by step.

Quantum image preparation: this quantum circuit constructs the quantum image \( \left| I \right\rangle \) based on the NEQR model. From Ref. [19], it is known that the computational complexity of this process is no more than \( {\rm O}(qn2^{2n} ) \) because some operations must be performed for every pixel one by one.

Laplacian filtering: this stage can be divided into two processes; the first is obtaining the neighborhood pixels via certain shift transformations and parallel Controlled-NOT operations, and the other is calculating the pixel values of the filtered image. Each shift transformation costs \( {\rm O}\left( {n^{2} } \right) \) time according to [25], and every parallel Controlled-NOT operation contains q CNOT gates, whose complexity is \( {\rm O}\left( q \right) \). On the other hand, the quantum circuit for calculating the pixel values of the filtered image is composed of seven QRCA modules together with an additional CAV module. For a single q-qubit QRCA module illustrated in Fig. 9, it is easy to conclude that the circuit complexity is \( {\rm O}\left( q \right) \). The circuit complexity of the CAV module is \( {\rm O}\left( {q^{2} } \right) \) according to [35]. Therefore, the circuit complexity of Laplacian filtering is \( {\rm O}\left( {n^{2} + q^{2} } \right) \).

Calculate the gradient: this stage is similar to the Laplacian filtering above; it also uses a sequence of shift transformations and parallel Controlled-NOT operations to compute and store the relevant neighboring pixel information, as well as some QRCA and CAV modules to calculate the gradients based on the zero-cross method. Thus, the circuit complexity of this stage is also \( {\rm O}\left( {n^{2} + q^{2} } \right) \).

Classify the edge points: in this stage, four QC modules are used to classify whether a pixel belongs to the edge points or not. For a single QC module that compares two q-qubit integers, the circuit complexity is \( 24q^{2} + 6q \) according to [27]. Thus, the circuit complexity of this stage is \( {\rm O}\left( {q^{2} } \right) \).

Generally, the quantum image preparation and measurement operations are not considered part of a quantum image processing algorithm. Therefore, omitting the circuit complexity of these two processes, the circuit complexity of our proposed quantum edge detection algorithm for a \( 2^{n} \times 2^{n} \) NEQR quantum image is about \( {\rm O}\left( {n^{2} + q^{2} } \right) \). Hence, we can conclude that our investigated quantum edge extraction algorithm achieves an exponential speedup over the classical edge extraction algorithms and the existing quantum edge extraction algorithms. Table 1 shows the performance comparison between our presented algorithm and other edge extraction algorithms, including two famous classical algorithms [54, 55] and the existing quantum edge extraction algorithms of [45, 46] and Zhang et al. [47, 48], for a digital image of size \( 2^{n} \times 2^{n} \).

Table 1 Performance comparisons between our method and other edge extraction algorithms for a digital image with a size of \( 2^{n} \times 2^{n} \)

5.2 Simulation results analysis

In order to evaluate the performance of our investigated quantum edge extraction algorithm compared with previous work [47], this section presents a simulation analysis. Because physical quantum hardware is not available to us to execute our method, our proposed quantum edge detection scheme is simulated on a classical computer (Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz, 8.00 GB RAM) with the MATLAB R2016b environment. Figure 15 shows the two tested gray-scale images, Lena and Cameraman, of size \( 128 \times 128 \) used in our simulations. Figures 16, 17, and 18 show the experimental results using our presented method, the zero-cross method [47], and the classical Laplacian method under different thresholds, respectively. Because we only use the \( 3 \times 3 \) neighborhood information, it is inevitable that some erroneous edge information exists in the resulting images.

Fig. 15 Two tested images are used in our simulation. a Lena. b Cameraman

Fig. 16 Simulation results using our investigated method under different thresholds T

Fig. 17 Simulation results using zero-cross method under different thresholds T

Fig. 18 Simulation results using classical Laplacian method under different thresholds T

Based on the experimental results shown in Figs. 16 and 17, it is easy to conclude that our investigated method can enhance the edge information of an image, and thus we obtain a better result than the zero-cross method proposed in [47]. Compared with classical Laplacian edge detection, our proposed edge extraction can detect smaller edge points than the classical method under the same threshold.

6 Conclusions

In recent years, with the sharp increase in image data in practical applications, the real-time problem has become a limitation in classical image processing. In this paper, QLaplacian, a novel quantum edge extraction algorithm, is designed, which combines the classical Laplacian algorithm and the NEQR quantum image model. The implementation circuit of QLaplacian shows that it can extract edges with no more than \( {\rm O}(n^{2} + 27q^{2} ) \) computational complexity for a NEQR quantum image of size \( 2^{n} \times 2^{n} \). Compared with all the classical edge extraction algorithms and the existing quantum edge extraction algorithms, QLaplacian achieves an exponential speedup over any classical edge extraction algorithm, as shown in Table 1. Therefore, QLaplacian can resolve the real-time problem of image edge extraction.

The drastic improvement of QLaplacian is attributable to the peculiar properties of the NEQR quantum image model, which utilizes a quantum superposition state to store all the pixels of an image; therefore, we can process the information of all the pixels simultaneously. Similarly, many other quantum image processing algorithms can also utilize quantum computing technologies for processing big data.

However, there are still some drawbacks in the QLaplacian algorithm. Firstly, the Laplacian operator can cause some edge information of the image to be lost; therefore, it is necessary to design an algorithm that detects image edges more accurately. Secondly, the Laplacian operator is a second derivative, which is more sensitive to noise than the first derivative and amplifies the influence of noise on the image. Thus, the design of a quantum image smoothing algorithm is one of our future works.