1 Introduction

The beginning of the twentieth century witnessed a revolution in physics that led to the development of quantum mechanics, which proved able to solve problems beyond the reach of classical physics at very small scales and to predict, accurately and elegantly, the behavior of sub-atomic particles. From the beginning, chemistry has been a natural field of application for quantum mechanics, since quantum effects are relevant in many phenomena at the molecular scale, giving rise to the field of quantum chemistry—see, e.g., Levine (2014). The same holds in biology, where quantum effects are known to be relevant in several processes and are even believed to help explain several macro-phenomena in the life sciences (Abbott et al. 2008).

However, approaching these disciplines through quantum mechanics faces major obstacles, as calculations rapidly become intractable as the size of the molecular systems grows, even with the help of the most advanced classical computational tools. The concept of quantum simulation, envisioned by Feynman (1982) in the 1980s and later refined by Lloyd (1996), has raised expectations of mitigating some of these problems by achieving an exponential gain in the simulation of quantum systems, with potential impact throughout all areas of physics (Georgescu et al. 2014), including quantum chemistry (Cao et al. 2019) and the life sciences (Wang et al. 2018). Recently, as the “second quantum revolution”Footnote 1 comes of age, the first quantum computers are starting to emerge and become available to the broad research community, providing the means to fulfill Feynman’s vision. Compared with classical computers, quantum devices are ultimately expected to perform quantum chemistry calculations more quickly and accurately, handling larger molecules than is possible with classical algorithms. This “quantum speedup” may lead to the design and discovery of new pharmaceuticals, materials, and industrial catalysts (Sim et al. 2018). A number of successful cases are described in the literature on the efficient calculation of properties of interest for chemistry, such as the electronic structure of molecules, phase diagrams, or reaction rates (Lidar and Wang 1999; Paesani et al. 2017; Aspuru-Guzik et al. 2005; Lanyon et al. 2010). A good review of the subject is given by Cao et al. (2019).

The conceptualization of a quantum simulation, from theory to experiment, poses many challenges (Whitfield et al. 2011), with no general recipe to tackle them. We hope to contribute to progress in this area by exploring the simulation of two molecular systems, hydrogen (H\(_2\)) and lithium hydride (LiH), on a commercially available quantum computer, the IBM Q, accessed through the QuantaLab UMinho Academic Q Hub and programmed using the QISKit platform (Cross 2018). The hydrogen molecule, the simplest in existence and also very important in nature, has been the natural test case for experimental and theoretical research. In particular, its ground-state properties and dissociation curve have recently been recalculated using advanced classical (Vuckovic et al. 2015) and quantum (Colless et al. 2018) algorithms (the latter with an extension to excited states). In a recent work, Rubin et al. (2020) describe Hartree–Fock calculations (performed on the Google Sycamore quantum processor) for linear chains of up to twelve hydrogen atoms and discuss the resulting errors in the system’s energy, along with possible ways to mitigate them. Similar works are likely to appear in rapidly growing numbers; their importance lies not in increased speed or accuracy in tackling the corresponding quantum-chemical problems compared with established “conventional” algorithms, but in demonstrating that these problems are becoming practically feasible on quantum computers. As experience accumulates in obtaining accurate and stable results for benchmark systems and in testing different algorithms, with the power of working quantum computers simultaneously on the rise, the question of “quantum supremacy” may soon enough be posed for problems of genuine challenge to contemporary quantum chemistry.

In this work, we extend the study of the H\(_2\) molecule as a standard benchmark toward the case of the asymmetric LiH molecule, whose ground-state calculation requires the inclusion of p-type atomic orbitals. Moreover, we investigate the stationary electronic Stark effect, i.e., the ground-state energy shift in response to a static external electric field (Gurav et al. 2018). We try to elucidate the essence of the quantum simulation algorithms for the broad community of physicists and chemists who may find the original works on quantum computation too technical to follow. We start from the definition of the molecular Hamiltonian, proceed through its preparation for quantum simulation to the application of the variational quantum eigensolver (VQE) method, and describe its implementation and testing on the IBM Q.

The article is organized as follows: in Sect. 2, we briefly introduce the quantum Hamiltonian formalism for many-body systems, the Hartree–Fock approximation and the second quantization representation; in Sect. 3, we explain the mapping onto a system of qubits, the design of the quantum circuit corresponding to the initial Hamiltonian, and the working principle of the VQE. Section 4 is dedicated to the case study of the H\(_2\) and LiH molecules, where we present and discuss the procedure details and the results of the calculation of the dissociation curves in the presence of an electric field. The last section offers a summary and concluding remarks. “Appendix A” contains details of the calculation of the necessary matrix elements for this molecular setting, which are not commonly available in the literature.

2 Quantum chemistry background

2.1 Quantum Hamiltonian formalism

In this section, we outline the basic principles of the formulation of molecular Hamiltonians and the latter’s “preparation” for numerical calculation of electronic characteristics relevant for physics and chemistry. This is the domain, albeit represented by a quite simplistic case, of traditional quantum chemistry. A good introduction to the subject has been offered, for instance, by Levine (2014) and Szabo and Ostlund (2012). Here, we briefly describe just a few concepts and approximations essential for the formulation of the computational problem to be solved using quantum tools.

The quantum Hamiltonian formalism, in the Schrödinger’s formulation, is centered at the Hamiltonian operator, \(H = T + V\), T being the kinetic energy of the constituent particles and V the potential energy of all interactions and fields in the system, both internal and external. The action of this operator on the system’s wavefunction (WF), \(\vert {\Psi }\rangle \), describes the latter’s evolution,

$$\begin{aligned} i\hbar \frac{\partial }{\partial t}\vert {\Psi (t)}\rangle = H \vert {\Psi (t)}\rangle \; , \end{aligned}$$
(1)

or yields the total energy of the system if it is in a stationary state,

$$\begin{aligned} H\vert {\Psi }\rangle = E \vert {\Psi } \rangle \;. \end{aligned}$$
(2)

The wavefunction \(\vert {\Psi }\rangle \) depends, in addition to time, on other arguments (such as spatial coordinates and spin components) according to the representation used. Usually, there are several possible solutions of the equation, corresponding to different values of the energy (energy levels or eigenvalues, \(E_n\)), which are discrete for a confined (or bound) physical system. These states, called stationary states or eigenstates, are denoted \(\vert {\Psi }_n\rangle \), with the index \(n=1,\dots ,m\) corresponding, in general, to a set of so-called quantum numbers that distinguish the eigenstates. The set of eigenstates constitutes the eigenbasis of the system, which can be seen as a set of mutually orthogonal vectors in a Hilbert space of dimension m. The quantum system is also allowed to be in a superposition state,

$$\begin{aligned} \vert {\Psi } \rangle = \lambda _1 \vert {\Psi _1} \rangle + \lambda _2 \vert {\Psi _2} \rangle + \cdots + \lambda _m \vert {\Psi _m} \rangle \;, \end{aligned}$$
(3)

whose energy is not well defined (and, therefore, such a state is non-stationary). According to the statistical interpretation of quantum mechanics originally proposed by M. Born (Saunders et al. 2010), a measurement of such a quantum state randomly yields one of the energy eigenvalues, \(E_n\), with probabilities given by the squared moduli of the amplitudes of the participating basis eigenstates, \(\vert \lambda _n \vert ^2\).
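As a minimal numerical illustration of Born’s rule (the amplitudes and eigenvalues below are arbitrary example numbers, not taken from any specific system), one can check that the squared moduli of the expansion coefficients of a normalized superposition sum to one and give the measurement probabilities of the energies \(E_n\):

```python
import numpy as np

# Arbitrary example: a superposition of m = 3 eigenstates with
# (unnormalized) complex amplitudes lambda_n.
lam = np.array([1.0 + 0.5j, -0.3j, 0.8])
lam = lam / np.linalg.norm(lam)          # normalize the state

E = np.array([-1.1, -0.4, 0.2])          # example eigenvalues E_n (a.u.)
prob = np.abs(lam) ** 2                  # Born's rule: P(E_n) = |lambda_n|^2

print(prob, prob.sum())                  # probabilities sum to 1
print(np.dot(prob, E))                   # mean energy <H> of the superposition
```

The mean value \(\sum_n \vert\lambda_n\vert^2 E_n\) computed in the last line is precisely the expectation value of the energy for a non-stationary state.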

2.2 Many-particle systems

The Schrödinger equation for a system of non-interacting particles can be decomposed into a set of uncoupled equations for each particle, and the system’s WF can be factorized. A combination of two non-interacting and non-entangled systems can be described by applying the tensor product on the two vector spaces,Footnote 2 with resultant basis given as follows:

$$\begin{aligned} \vert {{\Psi }^{(1)}}\rangle \otimes \vert {{\Psi }^{(2)}} \rangle&= \sum ^{M_1}_{\alpha } \sum ^{M_2}_{\beta } \lambda _{\alpha }\mu _{\beta } {\vert {\Psi ^{(1)}_{\alpha }} \rangle \otimes \vert {\Psi ^{(2)}_{\beta }}}\rangle \nonumber \\&= \sum ^{M_1}_{\alpha } \sum ^{M_2}_{\beta } { \lambda _{\alpha }\mu _{\beta } \vert {\Psi ^{(1)}_{\alpha }\Psi ^{(2)}_{\beta }}}\rangle \;. \end{aligned}$$
(4)

In Eq. (4), \(\Psi _{\alpha }^{(s)}\) denotes an eigenfunction of a state \({\alpha }=1,\dots ,M_s\) of the system \(\Psi ^{(s)}\) (\(s=1,2\)). The dimension of the product space is \(\mathrm{dim}(\Psi ^{(1)})\cdot \mathrm{dim}(\Psi ^{(2)})=M_1 M_2\).
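The dimension counting in Eq. (4) is easy to verify numerically with the Kronecker product (the state vectors below are arbitrary normalized examples):

```python
import numpy as np

# Two independent subsystems with M1 = 2 and M2 = 3 basis states.
psi1 = np.array([0.6, 0.8])               # normalized state of system 1
psi2 = np.array([1.0, 2.0, 2.0]) / 3.0    # normalized state of system 2

# Tensor (Kronecker) product of the two state vectors, as in Eq. (4).
psi = np.kron(psi1, psi2)

print(psi.shape)              # dimension M1 * M2 = 6
print(np.linalg.norm(psi))    # the product of normalized states stays normalized
```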

When the particles constituting the system are identical, their spin becomes highly relevant. The spin, an intrinsic angular momentum of the particle, distinguishes two different types of particles: bosons (e.g., photons) and fermions (e.g., electrons and protons). For fermions, the Pauli exclusion principle states that the system’s WF must be antisymmetric with respect to the permutation of any two particles. This imposes an important restriction upon the WF, namely that the product vector (4), if applied to a pair of non-interacting electrons, is not compatible with the Pauli principle.

In quantum chemistry, a single-electron WF is called an orbital (Szabo and Ostlund 2012). One can distinguish spatial orbitals \(\phi ({\mathbf {r}})\), where \({\mathbf {r}}\) denotes the spatial coordinates, and spin orbitals \(\chi ({\mathbf {x}})\), where \({\mathbf {x}}=({\mathbf {r}};s)\) and \(s=\uparrow , \downarrow \) stands for the two possible orientations of the electron’s spin. For two electrons, the Pauli principle means that

$$\begin{aligned} \chi ({\mathbf {x}}_1,{\mathbf {x}}_2)=-\chi ({\mathbf {x}}_2,{\mathbf {x}}_1) \end{aligned}$$
(5)

or, equivalently,

$$\begin{aligned} \phi ({\mathbf {r}}_1, {\mathbf {r}}_2) =\mp \phi ({\mathbf {r}}_2, {\mathbf {r}}_1)\,, \end{aligned}$$
(6)

where the upper (lower) sign corresponds to parallel (anti-parallel) spins of the two electrons. If the electron–electron interaction is neglected, the correct (i.e., compatible with the Pauli principle) two-electron WF is written in the form of the so-called Slater determinant,

$$\begin{aligned} \vert {\chi ^{(1)}_{\alpha }\,\chi ^{(2)}_{\beta }}\rangle =\frac{1}{\sqrt{2}} \left| \begin{array}{cc} \chi _{\alpha } ({\mathbf {x}}_1) &{} \chi _{\beta } ({\mathbf {x}}_1)\\ \chi _{\alpha } ({\mathbf {x}}_2) &{} \chi _{\beta } ({\mathbf {x}}_2) \end{array}\right| , \end{aligned}$$
(7)

where \(\chi _{\alpha }({\mathbf {x}})\) and \(\chi _{\beta }({\mathbf {x}})\) designate different spin orbitals. A Slater determinant can be straightforwardly generalized to the case of N identical non-interacting particles. It vanishes when any two electrons “occupy” the same spin orbital, as required by the Pauli exclusion principle.
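The antisymmetry of the two-electron WF (7) can be verified numerically. In the sketch below, two one-dimensional Gaussians stand in for the spin orbitals \(\chi_\alpha\) and \(\chi_\beta\); they are purely illustrative placeholders, not actual molecular orbitals:

```python
import numpy as np

# Illustrative "spin orbitals": two distinct 1D Gaussian functions.
def chi_a(x):
    return np.exp(-x**2)

def chi_b(x):
    return x * np.exp(-x**2)   # orthogonal to chi_a by parity

def slater2(x1, x2):
    """Two-electron WF built as a 2x2 Slater determinant, Eq. (7)."""
    m = np.array([[chi_a(x1), chi_b(x1)],
                  [chi_a(x2), chi_b(x2)]])
    return np.linalg.det(m) / np.sqrt(2.0)

x1, x2 = 0.3, -0.7
print(slater2(x1, x2) + slater2(x2, x1))   # antisymmetry: the sum is zero

# If both electrons "occupy" the same orbital, the determinant vanishes:
m_same = np.array([[chi_a(x1), chi_a(x1)],
                   [chi_a(x2), chi_a(x2)]])
print(np.linalg.det(m_same))               # zero, as the Pauli principle requires
```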

The Slater determinant is a simple way of constructing a many-electron WF from spin orbitals representing non-interacting electrons. Complete neglect of the Coulomb interaction between the electrons would be too crude an approximation, while solving the many-electron Schrödinger equation directly is an intractable problem. A compromise is achieved by a self-consistent field method, also called the Hartree–Fock (HF) approximation. An effective one-electron operator, \(v^{HF}({\mathbf {x}})\), called the Fock operator, is introduced, which includes, as part of the single-electron potential energy, the electron’s interaction with all other electrons, whose positions are averaged under the assumption that the WF representing the system of N electrons is a single Slater determinant. An explicit expression for \(v^{HF} ({\mathbf {x}})\) will be presented below.

2.3 Molecular Hamiltonian and Hartree–Fock approximation

The general form of a molecular Hamiltonian is (in atomic units):

$$\begin{aligned} H_{\mathrm{mol}} =&- \sum _{i=1}^N \frac{1}{2} \nabla _i^2 -\sum _{A=1}^M \frac{1}{2M_A} \nabla _{A}^2 - \sum _{i=1}^N \sum _{A=1}^M \frac{Z_A}{r_{iA}} \nonumber \\&+ \sum _{i=1}^N \sum _{j>i}^N \frac{1}{r_{ij}} +\sum _{A=1}^M \sum _{B>A}^M \frac{Z_AZ_B}{r_{BA}}\,. \end{aligned}$$
(8)

The first and second terms of (8) correspond to the kinetic energy of the electrons (numbered by i and \(j=1,\dots ,N\)) and the nuclei (numbered by \(A=1,\dots ,M\)), respectively. The third one represents the Coulomb attraction of each electron to each nucleus, with \(r_{iA}\) being the electron–nucleus distance and \(Z_A\) the nuclear charge. Finally, the fourth and fifth terms correspond to the repulsion among the electrons and among the nuclei, respectively. It is common and well justified to use the Born–Oppenheimer approximation, which neglects the motion of the nuclei because they are much heavier than the electrons, whereby the potential energy of the nucleus–nucleus interactions becomes a constant (for a fixed placement of the nuclei) and hence a parameter of the electron problem. With this, the electron Hamiltonian (8) reduces to:

$$\begin{aligned} H_{el} = - \sum _{i=1}^N \frac{1}{2} \nabla _i^2 - \sum _{i=1}^N \sum _{A=1}^M \frac{Z_A}{r_{iA}}+ \sum _{i=1}^N \sum _{j>i}^N \frac{1}{r_{ij}} \,. \end{aligned}$$
(9)

For the H\(_2\) molecule, the Hamiltonian (9) depends on a single parameter, the distance between the protons d. If the lowest eigenvalue of (9), \(E_0(d)<0\), is larger in absolute value than the proton–proton repulsion energy, \(E_{rep}(d)=d^{-1}\), the molecule is bound, as illustrated in Fig. 1.

Fig. 1
figure 1

Left: the hydrogen atom consists of a single electron and a proton and has the energy of \(-0.5\) a.u. in the ground state. Right: in the hydrogen molecule H\(_2\), made of two nuclei and two electrons, the total energy can be lower than \(-1\) a.u., which makes the molecule stable

The Hamiltonian (9) has to be reduced to a single-electron one in order to proceed with finding its eigenvalues, which is achieved by means of the HF approximation, where one takes an average over the positions and spins of all electrons but one (labeled \(i=1\)). This is done by multiplying (9) by \(\vert {\chi ^{(1)}_{\alpha }\chi ^{(2)}_{\beta }\dots \chi ^{(N)}_{\gamma }}\rangle \) and the corresponding “bra,” both in the form of Slater determinants of dimension N (the number of electrons in the system), and integrating over \({\mathbf {x}}_2,\, {\mathbf {x}}_3,\,\dots ,\,{\mathbf {x}}_N\), which leads to:

$$\begin{aligned} \left( - \frac{1}{2} \nabla _1^2 - \sum _{A=1}^M \frac{Z_A}{r_{1A}}+ v^{HF}_{1}\right) \chi _{\alpha } ({\mathbf {x}}_1) =\varepsilon _{\alpha } \chi _{\alpha } ({\mathbf {x}}_1), \end{aligned}$$
(10)

where \(v^{HF}_{1}\) is the average potential experienced by the “chosen” electron and \(\varepsilon _{\alpha }\) is the single-electron energy. The HF potential can be written in the form:

$$\begin{aligned} v^{HF}_{1}= & {} \sum _{\beta }\int \! \vert \chi _\beta ({\mathbf {x}}_2)\vert ^2\frac{1}{|r_{12}|}\, d{\mathbf {x}}_2 \nonumber \\&-\;\frac{\displaystyle \sum _{\beta } \int \! \chi _\alpha ^{*} ({\mathbf {x}}_1)\chi _{\beta }^{*} ({\mathbf {x}}_2) \frac{1}{|r_{12}|}\chi _\beta ({\mathbf {x}}_1)\chi _\alpha ({\mathbf {x}}_2) \, d{\mathbf {x}}_2}{\displaystyle \vert \chi _\alpha (\mathbf{x}_1)\vert ^2}\;.\nonumber \\ \end{aligned}$$
(11)

The two terms in Eq. (11) are called the Coulomb and exchange energies, respectively. The latter poses the main difficulty in solving Eq. (10); however, neglecting it (the Hartree approximation) results in an unacceptable error. Due to the nonlinearity of the HF approximation, the equations are solved in practice by self-consistent (iterative) methods, using a finite set of spatial basis functions, \(\phi _\mu ({\mathbf {r}})\) (\(\mu =1,2,\dots ,K\))—see, e.g., Szabo and Ostlund (2012). The solution yields a set of HF spin orbitals \(\{\chi _\alpha \}\) with corresponding energies \(\{\varepsilon _\alpha \}\), \(\alpha =1,2,\dots ,2K\), where \(2K\ge {N}\), the number of electrons in the system. The possible ways of placing N electrons over 2K spin orbitals give rise to \((2K)!/(N!(2K-N)!)\) Slater determinants, one of which represents the ground state of the system, while the others correspond to excited states. The HF approximation takes into account the quantum mechanical correlation caused by the Pauli principle, however, only between electrons with parallel spins. The difference between the approximate HF energy and the exact energy of the system is known as the correlation correction (or correlation energy).
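The combinatorial count \((2K)!/(N!(2K-N)!)\) can be evaluated directly with `math.comb`. As an illustration (assuming the minimal-basis sizes used for the two molecules studied later: 2K = 4 spin orbitals for H\(_2\) and 2K = 12 for LiH):

```python
from math import comb

def n_determinants(n_spin_orbitals, n_electrons):
    """Number of Slater determinants: (2K)! / (N! (2K - N)!)."""
    return comb(n_spin_orbitals, n_electrons)

# H2 in a minimal basis: 2K = 4 spin orbitals, N = 2 electrons.
print(n_determinants(4, 2))    # 6

# LiH in a minimal basis: 2K = 12 spin orbitals, N = 4 electrons.
print(n_determinants(12, 4))   # 495
```

The rapid growth of this count with basis size is one face of the exponential complexity that motivates quantum simulation.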

It is common to use, as the initial approximation for the basis sets representing molecular orbitals (MO) in the HF equations, linear combinations of atomic orbitals (LCAO). Since the exact atomic orbitals for a given many-electron atom are difficult to construct, the so-called Slater-type orbitals (STOs) are sometimes used, which are inspired by the (exactly known) radial asymptotics of the spatial orbitals of the hydrogen atom,Footnote 3

$$\begin{aligned} \phi ({\mathbf {r}}) {\sim }r^{n-1}e^{-{\zeta }r}Y_{l,m}(\theta , \varphi ) \end{aligned}$$

(here \(Y_{l,m}\) is a spherical harmonic). For instance, one can use

$$\begin{aligned} \phi ^{\mathrm{STO}}_{1s}(\zeta ,{\mathbf {r}}-{\mathbf {R}}_{\mathrm{A}})= \left( \!\frac{\zeta ^3}{\pi }\!\right) ^{\!\!\tfrac{1}{2}}\!\!e^{-\zeta |{\mathbf {r}}-{\mathbf {R}}_\mathrm{A}|} \end{aligned}$$

for s-states, where \(\zeta \) is the Slater orbital exponent. As the STO functions are difficult to handle in many-center integrals, one practical resort consists of approximating these functions with linear combinations of Gaussian functions, known as STO-LG functions. The calculation of the necessary matrix elements is then greatly facilitated, because multi-center integrals with Gaussian functions can be evaluated analytically (see “Appendix A”). In this work, a set of such functions with \(n=3\) Gaussians mimicking each STO function, named the STO-3G basis, is used. For the 1s state, such a function is:

$$\begin{aligned}&\phi ^{\mathrm{STO-3G}}_{1s}(\zeta ,{\mathbf {r}}) = c_1\!\left( \!\frac{2\alpha _1}{\pi }\!\right) ^{\!\!\frac{3}{4}}\!\!e^{-\alpha _1 r^2} \nonumber \\&\quad +\; c_2\!\left( \!\frac{2\alpha _2}{\pi }\!\right) ^{\!\!\frac{3}{4}}\!\!e^{-\alpha _2 r^2} +\; c_3\!\left( \!\frac{2\alpha _3}{\pi }\!\right) ^{\!\!\frac{3}{4}}\!\!e^{-\alpha _3 r^2}\,. \end{aligned}$$
(12)

Here, \(\alpha _i\) are the Gaussian orbital exponents that have been optimized for the best possible approximation of \(\phi ^\mathrm{STO}_{1s}(\zeta ,{\mathbf {r}})\) for a given \(\zeta \) (Hehre et al. 1969). The corresponding spin orbitals, \(\chi _\alpha ({\mathbf {x}})\), are obtained from \(\phi ^{\mathrm{STO-3G}}_{\mu }\) by multiplying them by a spinor \(\psi (s)\), \(s =\uparrow ,\, \downarrow \).
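A quick numerical check of Eq. (12): the contracted STO-3G function is normalized to unity. The sketch below uses the widely tabulated hydrogen 1s parameters (with the \(\zeta\) scaling already folded into the exponents; these values should be double-checked against Hehre et al. (1969) before serious use) and the analytical overlap of two normalized s-type Gaussians, \(S_{ij}=\bigl(2\sqrt{\alpha_i\alpha_j}/(\alpha_i+\alpha_j)\bigr)^{3/2}\):

```python
import numpy as np

# Commonly tabulated STO-3G parameters for the hydrogen 1s orbital
# (Gaussian exponents include the Slater-exponent scaling).
alpha = np.array([3.42525091, 0.62391373, 0.16885540])
c     = np.array([0.15432897, 0.53532814, 0.44463454])

# Overlap of two *normalized* s-type Gaussians with exponents ai, aj:
# S_ij = (2 sqrt(ai aj) / (ai + aj)) ** (3/2)
ai, aj = np.meshgrid(alpha, alpha)
S = (2.0 * np.sqrt(ai * aj) / (ai + aj)) ** 1.5

# Norm of the contracted function, <phi|phi> = sum_ij c_i c_j S_ij
norm = c @ S @ c
print(norm)   # close to 1: the contraction is normalized
```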

2.4 Second quantization

In the quantum mechanics of systems consisting of a number of identical particles (electrons, in our case), it is common to use the formalism called second quantization, originally introduced by P. Dirac—see, e.g., Dirac (1981). This formalism deals with the whole system of particles, instead of each particle individually, by introducing a new way of describing states, namely by their occupation numbers. Let \(\{\chi _\alpha ({\mathbf {x}})\}\) be a complete set of one-electron (atomic or molecular) spin orbitals that constitute the Hilbert space of a single particle. If the particles were non-interacting bosons, a state of the whole system could be entirely specified by indicating the numbers of particles, \(n_\alpha \), occupying each of these orbitals. Such an occupation number state is designated by a state vector \(\vert {n_1,n_2,\ldots } \rangle \). If the particles interact with an external field or with each other (still assuming that they are bosons and that no restrictions are imposed by the particles’ spin), the state vector in the occupation number representation will evolve with time, obeying the time-dependent Schrödinger equation (1) with the Hamiltonian written in the occupation number representation:

$$\begin{aligned} H=H_1+H_2= \sum _{\alpha ,\beta } \tau _{\alpha \beta } a_{\alpha }^{\dagger } a_{\beta }+\frac{1}{2} \! \sum _{\begin{array}{c} \alpha ,\beta ,\\ \gamma ,\delta \end{array}} \mu _{{\alpha }{\beta }{\gamma }{\delta }} a_{\alpha }^{\dagger } a_{\gamma }^{\dagger } a_{\delta } a_{\beta }\,.\nonumber \\ \end{aligned}$$
(13)

The summation is over states in the single-particle Hilbert space, e.g., 1s-, 2p-like, etc., \(\tau _{\alpha \beta }\) being a matrix element of the single-electron energy,

$$\begin{aligned} \tau _{\alpha \beta }=\int d{\mathbf {x}}_1 \chi _\alpha ^* ({\mathbf {x}}_1)\left( \frac{-\nabla ^2}{2}-\sum _A \frac{Z_A}{|r_{A1} |}\right) \chi _\beta ({\mathbf {x}}_1)\,. \end{aligned}$$
(14)

The second term in (13) represents the Coulomb interactions between the particles, with the matrix element given [according to the convention used in quantum chemistry (Szabo and Ostlund 2012)] by:

$$\begin{aligned} \mu _{{\alpha }{\beta }{\gamma }{\delta }}=\int d\mathbf{x}_1 d{\mathbf {x}}_2 \chi _\alpha ^* ({\mathbf {x}}_1)\chi _\beta (\mathbf{x}_1)\left( \frac{1}{| r_{12}|}\right) \chi _{\gamma }^{*} (\mathbf{x}_2)\chi _\delta ({\mathbf {x}}_2)\,.\nonumber \\ \end{aligned}$$
(15)

The integration in Eqs. (14) and (15) is over coordinates (and summation over spins) of one or two electrons labeled 1, 2.

The Hamiltonian (13) is written in terms of the so-called creation, \(a^{\dagger }\), and annihilation, a, operators, which add one particle to, or remove one from, an orbital \(\alpha \), respectively:

$$\begin{aligned} a_{\alpha }^{\dagger }\,\vert {n_1,\ldots ,n_\alpha ,\ldots }\rangle= & {} \sqrt{n_\alpha \!+\!1}\, \vert n_1,\ldots ,n_\alpha \!+\!1,\ldots \rangle \, ;\nonumber \\ a_{\alpha } \,\vert {n_1,\ldots ,n_\alpha ,\ldots }\rangle= & {} \sqrt{n_\alpha }\,\vert n_1,\ldots ,n_\alpha \!-\!1,\ldots \rangle \, . \end{aligned}$$
(16)

The product \(a_{\alpha }^{\dagger }a_\alpha \) is the occupation number operator for the orbital \(\alpha \). In the case of bosons, the creation and annihilation operators for different \(\alpha \) and \(\beta \) commute, because different orbitals are filled independently. This is not the case for fermions, because of the Pauli exclusion principle. By virtue of this, the following (anti-commutation) relations hold for the electron operators:

$$\begin{aligned} a_{\alpha }\,a_{\beta }^{\dagger } + a_{\beta }^{\dagger }\,a_{\alpha } = \delta _{{\alpha }{\beta }}\,. \end{aligned}$$
(17)

It can be shown that (17) guarantees that the occupation numbers can take only the values 0 and 1, in accordance with the Pauli principle (Dirac 1981). Therefore, the Hamiltonian (13) has the same form for bosons and fermions, the only difference being in the (anti-)commutation relations of the creation and annihilation operators. For fermions, each state \(\vert {n_1,n_2,\ldots } \rangle \) of this Hamiltonian corresponds to a Slater determinant built from the 2K available spin orbitals, with the number of columns and rows equal to the number of electrons in the system, \(N=\sum _{\alpha =1}^{2K} n_\alpha \).

The choice of single-electron basis functions \(\chi _\alpha ({\mathbf {x}})\) is, in principle, arbitrary, but if we “guess” their form close to the “true” WFs of the system (which actually are not well defined in the single-electron form!), the off-diagonal elements of the matrices \(\tau _{\alpha \beta }\) and \(\mu _{{\alpha }{\beta }{\gamma }{\delta }}\) will be much smaller than the diagonal ones. For practical calculations of these integrals, the basis functions are expressed in terms of the STO-3G sets explained in the previous section. The choice of molecular orbitals is based on the MO-LCAO approximation. One can improve this initial approximation by first solving the HF equation (10) and using its solutions to calculate the matrix elements. Then, the diagonalization of Eq. (13) amounts to the evaluation of the correlation energy.

In this article, we shall also consider the stationary Stark effect, described by the following (single-electron) Hamiltonian:

$$\begin{aligned} H_{S} = -{\mathbf {E}}\cdot {\mathbf {r}}, \end{aligned}$$
(18)

where \({\mathbf {E}}\) is the electric field intensity. Its second-quantization representation has the same form as \(H_1\) in (13), and the corresponding matrix element is written as

$$\begin{aligned} \tau _{\alpha \beta }^{S}=\int d{\mathbf {x}}_1 \chi _\alpha ^{*}({\mathbf {x}}_1)\left( -{\mathbb {E}}\,r\cos {\theta }\right) \chi _\beta ({\mathbf {x}}_1), \end{aligned}$$
(19)

where \({\mathbb {E}}=\vert {\mathbf {E}}\vert \) and the z-axis is assumed to be directed along \({\mathbf {E}}\). The use of the second quantization formalism is facilitated, for instance, by the PyQuante (Muller 2017) and PySCF (Sun et al. 2018) tools, Python libraries targeted at quantum chemistry calculations. We present the matrix elements (14), (15) and (19) calculated for the 1s, 2s and \(2p_z\) atomic orbitals in “Appendix A”.

3 Quantum simulation of a quantum chemistry Hamiltonian

3.1 Mapping the fermion Hamiltonian onto a qubit representation

In order to perform quantum computations, one needs to map the second-quantization Hamiltonian onto a qubit (spin) representation and then design the corresponding quantum circuit that implements it. The basic idea is to replace the fermionic operators a and \(a^{\dagger }\) with tensor products of the Pauli matrices,

$$\begin{aligned} \sigma _x = \left[ \begin{array}{cc} 0 &{} 1 \\ 1 &{} 0 \end{array}\right] ,\qquad \sigma _y = \left[ \begin{array}{cc} 0 &{} -i \\ i &{} 0 \end{array}\right] ,\qquad \sigma _z = \left[ \begin{array}{cc} 1 &{} 0 \\ 0 &{} -1 \end{array}\right] , \end{aligned}$$

which can be done in a number of ways, such as the Jordan–Wigner or Bravyi–Kitaev transformations (Cao et al. 2019). The former, addressed in this section, is a method based on the isomorphism between the algebra of the creation and annihilation operators and that of the Pauli matrices (Whitfield et al. 2011).

In the case of a single (one-electron) state, the Jordan–Wigner (JW) mapping is simple:

$$\begin{aligned}&a^{\dagger } {\;\Leftrightarrow \;} \sigma ^-\,{\equiv } \frac{1}{2}\left( \sigma _x -i \sigma _y \right) = \left[ \begin{array}{cc} 0 &{} 0 \\ 1 &{} 0 \end{array}\right] \,; \end{aligned}$$
(20)
$$\begin{aligned}&\quad a {\;\Leftrightarrow \;} \sigma ^+\,{\equiv }\frac{1}{2}\left( \sigma _x +i \sigma _y \right) = \left[ \begin{array}{cc} 0 &{} 1 \\ 0 &{} 0 \end{array}\right] \,; \end{aligned}$$
(21)
$$\begin{aligned}&\quad a^{\dagger }a-\tfrac{1}{2} {\;\Leftrightarrow \;} -\frac{1}{2}\sigma _z = \left[ \begin{array}{cc} -\tfrac{1}{2} &{}\quad 0 \\ \quad 0 &{} \quad \tfrac{1}{2} \end{array}\right] \,. \end{aligned}$$
(22)
Fig. 2
figure 2

A scheme illustrating the mapping of a fermion onto a qubit. We may assume that the magnetic field splitting the two spin states is directed downwards, so that \(\vert \downarrow \rangle \equiv \vert 0\rangle \) is the ground state

This mapping is illustrated in Fig. 2. The matrices \(\sigma ^{\pm }\) represent the spin-raising and spin-lowering operators, respectively, while \(\sigma _z\) is related to the occupation number operator.

In the case of \(N>1\) fermions, in order to satisfy the anti-commutation relations (17) between any pair of fermionic operators, one enumerates the states by a single index (\(\alpha \)) and adds a string factor, i.e., [spin]=[fermion]\(\times \)[string], which takes into account the occupation numbers, \(n_\beta \), of the states with \(\beta < \alpha \), for a given \(\alpha \):

$$\begin{aligned} \sigma ^+_\alpha {\;\Leftrightarrow \;} a_\alpha {e^{i\pi \sum _{\beta<\alpha }n_\beta }}, \qquad \sigma ^-_\alpha {\;\Leftrightarrow \;} a^{\dagger }_\alpha {e^{i\pi \sum _{\beta <\alpha }n_\beta }}\,. \end{aligned}$$
(23)

The relation (23) holds for multiple fermions and the phase factors can be represented by the Pauli matrix \(\sigma _z\) acting on the corresponding fermionic state. Therefore, the fermionic operators are mapped onto direct products of Pauli matrices as follows:

$$\begin{aligned} a_\alpha \quad&{\Leftrightarrow }&\quad {\mathbf {1}}^{{\otimes }(\alpha -1)}{\otimes } \,(\!{\sigma ^+}\!)_{\alpha }{\otimes } \,\!(\!\sigma _z\!)^{{\otimes }(N-\alpha )} \nonumber \\&\quad = \left[ \!\!\begin{array}{c@{\quad }c}1 &{} 0 \\ 0 &{} 1 \end{array}\!\!\right] ^{\!{\otimes }(\alpha -1)} \!\!\!{\otimes } \left[ \!\!\begin{array}{c@{\quad }c}0 &{} {1} \\ 0 &{} 0 \end{array}\!\!\right] _{\alpha } \!{\otimes }\! \left[ \!\!\begin{array}{cc}1 &{} 0 \\ 0 &{} -1 \end{array}\!\!\right] ^{\!{\otimes }(N-\alpha )}\,; \end{aligned}$$
(24)
$$\begin{aligned} a_\alpha ^{\dagger }\quad&{\Leftrightarrow }&\quad {\mathbf {1}}^{{\otimes }(\alpha -1)}{\otimes } \,(\!{\sigma ^-}\!)_{\alpha }{\otimes } \,\!(\!\sigma _z\!)^{{\otimes }(N-\alpha )} \nonumber \\&\quad = \left[ \!\!\begin{array}{c@{\quad }c}1 &{} 0 \\ 0 &{} 1 \end{array}\!\!\right] ^{\!{\otimes }(\alpha -1)} \!\!\!{\otimes } \left[ \!\!\begin{array}{c@{\quad }c}0 &{} 0 \\ {1} &{} 0 \end{array}\!\!\right] _{\alpha } \!{\otimes }\! \left[ \!\!\begin{array}{cc}1 &{} 0 \\ 0 &{} -1 \end{array}\!\!\right] ^{\!{\otimes }(N-\alpha )}\,. \end{aligned}$$
(25)

Thus, any Hamiltonian operator written in the second quantization representation can be rewritten in terms of the raising and lowering spin operators and the Pauli matrix \(\sigma _z\). A catalogue of such translations can be found in Table A2 of the work by Whitfield et al. (2011). For a Hilbert space of 2K spin orbitals, a register of 2K qubits is required for the JW mapping. The resulting qubit Hamiltonian has the following generic form:

$$\begin{aligned} {H} = \sum _{i;\, q} h_{q}^{i} {{\sigma }_{i}^{(q)}} + \sum _{i_1,i_2;\, q_1,q_2} h_{q_1,q_2}^{i_1,i_2} {\sigma }_{i_1}^{(q_1)}\otimes {\sigma }_{i_2}^{(q_2)} + \cdots \end{aligned}$$
(26)

where the indices i denote the type of Pauli matrix (x, y or z), the indices q run over the qubits, and h are numerical coefficients. This form is useful for the algorithms discussed in the next section.
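The mapping (24)–(25) is straightforward to implement and verify with Kronecker products. A minimal sketch (following the operator ordering of Eqs. (24)–(25), with the identity factors before and the \(\sigma_z\) string after the mode index) checks the anti-commutation relations (17) for two fermionic modes:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def a_op(alpha, N):
    """Jordan-Wigner image of a_alpha: 1^(alpha-1) x sigma^+ x sz^(N-alpha)."""
    return kron_all([I2] * (alpha - 1) + [sp] + [sz] * (N - alpha))

def anticomm(A, B):
    return A @ B + B @ A

N = 2
a1, a2 = a_op(1, N), a_op(2, N)

print(np.allclose(anticomm(a1, a1.conj().T), np.eye(2**N)))  # {a1, a1^dag} = 1
print(np.allclose(anticomm(a1, a2.conj().T), 0))             # {a1, a2^dag} = 0
print(np.allclose(anticomm(a1, a2), 0))                      # {a1, a2} = 0
```

Without the \(\sigma_z\) string, operators on different qubits would commute rather than anti-commute, which is exactly what the phase factors in Eq. (23) correct.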

3.2 Quantum computation of the eigenvalues of a Hamiltonian

Once the molecule’s Hamiltonian has been transformed into the qubit representation, the ground-state energy can be evaluated using several methods. One such method, where a quantum advantage seems likely, is the calculation of the eigenvalues of Hamiltonians through the quantum phase estimation (QPE) algorithm (Luis and Peřina 1996), which also has several other applications, such as the solution of systems of linear equations (Harrow et al. 2009). The method requires approximating the evolution operator, \({\hat{U}}=\exp {(-i{H}t})\) (t is time), and applying it to the initial state an appropriate number of times. For an eigenstate, the application of \({\hat{U}}\) adds a phase \((-Et)\), from which the energy eigenvalue E can be estimated. Unfortunately, despite its theoretical appeal and broad scope of possible applications, the method poses serious technical difficulties, which make its practical realization unlikely at the present level of maturity of quantum computers. Namely, the QPE method requires a very large number of entangled qubits and quantum gates to be effective.

Alternatively, one can adopt a strategy of applying the Hamiltonian to a state several times and measuring the result (i.e., performing quantum sampling), in order to obtain an estimate of the expected eigenvalue, for which effective algorithms are available, particularly the quantum expected eigenvalue estimation (QEE) method. The method requires that the Hamiltonian operator can be decomposed into a polynomial number (M) of independent n-qubit operators, as exemplified by Eq. (26), and consists in the “measurement” of the expectation values of such operators for a trial state \(\vert \Psi \rangle \) (also known as the ansatz):

$$\begin{aligned} \langle {H} \rangle= & {} \langle \Psi \vert H \vert \Psi \rangle \nonumber \\= & {} \sum _{i;\, q} h_{q}^{i} \langle {{\sigma }_{i}^{(q)}}\rangle + \!\!\sum _{\begin{array}{c} i_1,i_2;\\ q_1,q_2 \end{array}}\!\! h_{q_1,q_2}^{i_1,i_2} \langle {\sigma _{i_1}^{(q_1)}\otimes \sigma _{i_2}^{(q_2)}} \rangle + \cdots \end{aligned}$$
(27)

The estimation of the expectation values, \(\langle \cdots \rangle \), requires repeated measurements with a large number of qubits, but, on the other hand, the computational effort amounts to the evaluation of a polynomial number of independent terms.
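The statistical side of this quantum sampling can be mimicked classically. A minimal sketch, assuming a single qubit and a hypothetical trial state, estimates \(\langle \sigma _z \rangle \) from repeated simulated measurements in the Z basis:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical trial state |psi> = cos(a)|0> + sin(a)|1>.
a = 0.6
psi = np.array([np.cos(a), np.sin(a)])

# Exact expectation of sigma_z is P(0) - P(1).
p0 = abs(psi[0]) ** 2
exact = p0 - (1 - p0)

# "Quantum sampling": repeated projective measurements, each
# yielding +1 (outcome |0>) or -1 (outcome |1>).
shots = 200_000
outcomes = rng.choice([1, -1], size=shots, p=[p0, 1 - p0])
estimate = outcomes.mean()
```

The statistical error shrinks as \(1/\sqrt{\text{shots}}\), which is the origin of the \(O(p^{-2})\) sampling cost per term quoted below for QEE.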

Table 1 Comparison of resources needed for two methods, QPE and QEE

An objective comparison of the QPE and QEE methods is presented by McClean et al. (2016) and summarized in Table 1. One main advantage of QEE over QPE is that it greatly reduces the number of gates required and, more importantly, the amount of time the entanglement over sets of qubits has to be maintained: the required coherence time is O(1) (independent of the precision, p), which is within reach of existing quantum computers, whereas for QPE it scales as \(O(p^{-1})\), growing with the required precision. However, QEE needs more copies of the ansatz to maintain the independence of the terms in Eq. (27) (O(M) against O(1) for QPE), requiring polynomially more memory, i.e., more qubits. Moreover, for a desired precision p, the number of necessary sampling steps is \(O({|h_{max}|}^2 M p^{-2})\), where \(h_{max}\) is the term with the maximum norm in the decomposition of the Hamiltonian. In summary, the QEE method reduces the required minimum coherence time but introduces a polynomial penalty, both in memory and in the number of steps. Yet, it still holds an exponential advantage over classical methods.

3.3 Trial wave functions (ansätze)

The ground-state energy estimation requires an appropriate ansatz. If the number of electrons in the system, N, is fixed, one may use the Slater determinant solution of the HF problem for the considered molecule, corresponding to its ground state. We shall denote it by \(\vert {\Psi _0}\rangle \) and it may be written as

$$\begin{aligned} \vert {\Psi _0}\rangle =\prod _\alpha ^{N} a^{\dagger }_\alpha \vert \text{ vac }\rangle , \end{aligned}$$

where \(\alpha \) runs over occupied orbitals and \(\vert \text{ vac }\rangle \) denotes the vacuum (with no particles). Alternatively, one may start by defining a new “vacuum” state in the N-particle sector of the Fock space, which can be chosen as \(\vert {\Psi _0}\rangle \) and used to prepare the parametrized trial quantum state (Barkoutsos et al. 2018). This can be done by a quantum circuit implementing a unitary operator, \({\hat{U}}\), that applies a set of perturbations to the state \(\vert {\Psi _0}\rangle \):

$$\begin{aligned} \vert \Psi (\overrightarrow{\theta })\rangle = {\hat{U}}(\overrightarrow{\theta }) \vert \Psi _0\rangle \, , \end{aligned}$$
(28)

The parametrized ansatz will be used to estimate the energy with respect to the Hamiltonian. Here \(\overrightarrow{\theta }\) stands for the whole set of parameters (also called “gate angles” in this context) that can be adjusted and used in the optimization procedure (see Sec. 3.4 below).
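A minimal sketch of such a parametrized state preparation, with a single \(R_y\) gate angle standing in for the full set \(\overrightarrow{\theta }\) (an illustrative simplification, not the ansatz used in this work), could read:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation gate R_y(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Reference state |Psi_0> = |0>, standing in for the HF determinant.
psi0 = np.array([1.0, 0.0])

def ansatz(theta):
    """|Psi(theta)> = U(theta) |Psi_0>, cf. Eq. (28)."""
    return ry(theta) @ psi0

psi = ansatz(0.8)  # one choice of gate angle
```

Because \(U(\theta )\) is unitary, the trial state stays normalized for every gate angle, which is what makes such circuits directly implementable on quantum hardware.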

There are several possible ways of constructing this operator, leading, e.g., to the so-called unitary coupled cluster (UCC) and heuristic approaches that have been overviewed by Cao et al. (2019) and Barkoutsos et al. (2018). Several different ansätze are implemented in the QISKit package. Let us briefly consider the UCC approach, which has mainly been used in this work.

A flexible way to generate multi-determinantal reference states (hence overcoming the HF approximation) within the coupled cluster (CC) method, suggested by Jeziorski and Monkhorst (1981), has been translated by Barkoutsos et al. (2018), specifically from the perspective of quantum algorithms for electronic structure calculations, into the unitary version of the CC approach (UCC). The operator acting on the “vacuum” state according to Eq. (28) is chosen as follows:

$$\begin{aligned} \vert {\Psi (\overrightarrow{\theta })}\rangle = e^{\hat{T}(\overrightarrow{\theta })-\hat{T}^{\dagger }(\overrightarrow{\theta })} \vert {\Psi _0}\rangle \,. \end{aligned}$$
(29)

Here \({\hat{T}}\) is an operator representing excitations from occupied to unoccupied states (labeled below by Greek and Latin indices, respectively), composed of hierarchical terms,

$$\begin{aligned} {\hat{T}}={\hat{T}}_1+{\hat{T}}_2+\dots , \end{aligned}$$

corresponding to n-particle excitations, namely,

$$\begin{aligned} {\hat{T}}_1 (\overrightarrow{\theta })= & {} \sum _{\alpha ,a} \theta _{\alpha }^{a}{a}_a^{\dagger }\,{a}_\alpha , \end{aligned}$$
(30)
$$\begin{aligned} {\hat{T}}_2 (\overrightarrow{\theta })= & {} \frac{1}{2} \sum _{\alpha ,\beta ;\;a,b} \theta _{\alpha \,\beta }^{a\,b}a_a^{\dagger }\,a_b^{\dagger }\,{a}_\alpha \,{a}_\beta , \nonumber \\&\cdots&\end{aligned}$$
(31)

The UCC ansatz usually retains only the first two terms in the expansion of \({\hat{T}}\), i.e., it neglects 3-particle and higher-order excitations. The expansion coefficients in (30), (31) can be interpreted as matrix elements of a certain excitation operator between occupied and unoccupied orbitals. They can be assumed real, i.e., \(\{\theta _{\alpha }^{a},\; \theta _{\alpha \,\beta }^{a\,b}, \ldots \} \in {\mathbb {R}}\).
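Under this truncation, the number of independent variational parameters can be counted directly. The sketch below assumes the doubles amplitudes are antisymmetric in each index pair, so that only distinct pairs of occupied and of virtual spin orbitals are counted (a common convention, stated here as an assumption):

```python
from itertools import combinations

def uccsd_parameter_count(n_occ, n_virt):
    """Number of independent real amplitudes in a UCC ansatz
    truncated at singles (T1) and doubles (T2)."""
    # Singles: one theta_alpha^a per (occupied, virtual) pair.
    singles = n_occ * n_virt
    # Doubles: one theta per distinct occupied pair x virtual pair.
    occ_pairs = len(list(combinations(range(n_occ), 2)))
    virt_pairs = len(list(combinations(range(n_virt), 2)))
    return singles + occ_pairs * virt_pairs

# Example: 2 occupied and 2 virtual spin orbitals (an H2-sized
# problem in a minimal basis): 4 singles + 1 double = 5 parameters.
n_params = uccsd_parameter_count(2, 2)
```

The polynomial growth of this count with system size is what the classical optimizer of Sect. 3.4 has to contend with.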

The anti-Hermitian combination \({\hat{T}}-{\hat{T}}^{\dagger }\) in (29) makes the exponential operator unitary. Unitary operations are natural on quantum computers, yet the implementation as a quantum circuit is not straightforward, because the different terms in the exponent do not commute, so the order in which they are written matters. This difficulty is bypassed by using the Trotter identity:

$$\begin{aligned} e^{({\hat{A}}+{\hat{B}})}=\lim _{n{\rightarrow }\infty }\left[ e^{{\hat{A}}/n}\,e^{{\hat{B}}/n}\right] ^n, \end{aligned}$$
(32)

where \({\hat{A}}\) and \({\hat{B}}\) are two non-commuting operators, e.g., \({\hat{A}}={\hat{T}}_1-{\hat{T}}_1^{\dagger }\) and \({\hat{B}}={\hat{T}}_2-{\hat{T}}_2^{\dagger }\). Exact in the limit \(n{\rightarrow }\infty \), it is an approximation for finite n. Different Trotter approximations of the operator (29) can be implemented on a quantum computer by transforming it to the qubit representation and using standard circuit compilation techniques for the “exponentiation” of the Pauli matrices (Cao et al. 2019). Some examples of such circuits and comparison of results obtained for different orders (n) of the Trotter approximation can be found in the work by Barkoutsos et al. (2018).
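The convergence of the Trotter formula with n can be illustrated numerically. The sketch below uses random real antisymmetric (hence anti-Hermitian) matrices as stand-ins for \({\hat{T}}_1-{\hat{T}}_1^{\dagger }\) and \({\hat{T}}_2-{\hat{T}}_2^{\dagger }\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=1)

# Two random anti-Hermitian operators that do not commute.
M1 = rng.standard_normal((4, 4)); A = M1 - M1.T
M2 = rng.standard_normal((4, 4)); B = M2 - M2.T

exact = expm(A + B)

def trotter(A, B, n):
    """First-order Trotter approximation [e^{A/n} e^{B/n}]^n, Eq. (32)."""
    step = expm(A / n) @ expm(B / n)
    return np.linalg.matrix_power(step, n)

# The approximation error of the first-order formula decreases
# roughly as 1/n, dominated by the commutator [A, B].
err = [np.linalg.norm(trotter(A, B, n) - exact) for n in (1, 4, 16, 64)]
```

On a quantum computer, larger n means deeper circuits, so the Trotter order is a direct trade-off between accuracy and gate count.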

3.4 Variational Quantum Eigensolver

The variational method for the calculation of the ground-state energy, also known in physics as the Rayleigh-Ritz method, has widely been used for a long time in quantum chemistry—see, e.g., Levine (2014). It is an approximation method used to estimate the lowest eigenvalue (the ground-state energy) of a Hamiltonian,

$$\begin{aligned} E [\Psi (\overrightarrow{\theta })]= \frac{\langle {\Psi (\overrightarrow{\theta })|H |\Psi (\overrightarrow{\theta })\rangle }}{\langle {\Psi (\overrightarrow{\theta })|\Psi (\overrightarrow{\theta })\rangle }}\, . \end{aligned}$$
(33)

The optimization consists in determining the set of parameters \(\overrightarrow{\theta }\) that minimizes the energy functional E.

In the hybrid quantum-classical algorithm implemented as the variational quantum eigensolver (VQE), the quantum computer prepares the parametrized trial function \(\Psi (\overrightarrow{\theta })\), as discussed in Sect. 3.3, and evaluates the energy with respect to the system’s Hamiltonian, as discussed in Sect. 3.2. Then, a classical optimization routine updates the parameters \(\overrightarrow{\theta } \in {\mathbb {R}}^n\) of the quantum state, and the previous step is repeated until the convergence criteria (e.g., in energy and/or iteration number) are satisfied. Any optimization method capable of performing this task can, in principle, be used. On IBM Q (Cross 2018), a few methods for this purpose are available, for instance, the simultaneous perturbation stochastic approximation algorithm (SPSA, see Bhatnagar et al. 2012), characterized by a very good performance under noise, or the COBYLA method (Powell 2007).
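The structure of the VQE loop can be sketched entirely classically, with exact expectation values replacing quantum sampling and a made-up 2×2 Hamiltonian; this is a schematic of the hybrid iteration, not the actual IBM Q implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-qubit "molecular" Hamiltonian; the entries are illustrative.
H = np.array([[-1.0, 0.3], [0.3, 0.2]])

def ansatz(theta):
    """|Psi(theta)> = R_y(theta)|0>, the simplest parametrized state."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Energy functional of Eq. (33); on real hardware this step
    would be done by quantum sampling of the Pauli terms."""
    psi = ansatz(theta[0])
    return float(psi @ H @ psi)

# Classical outer loop: COBYLA updates the gate angle to convergence.
result = minimize(energy, x0=[0.1], method="COBYLA")
exact_ground = np.linalg.eigvalsh(H)[0]
```

By the variational principle, \(E[\Psi (\overrightarrow{\theta })]\) is bounded from below by the true ground-state energy, so the optimizer can only approach it from above.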

The VQE was introduced by Peruzzo et al. (2014) and applied since then in a number of quantum simulation/optimization tasks—see, e.g.,  Moll et al. (2017). The scheme of the method is depicted in Fig. 3, adapted from the latter work. A good additional discussion of this method can be found in the work by McClean et al. (2016).

Fig. 3
figure 3

Application of the variational method to fermionic problems, adapted from Moll et al. (2017)

3.5 Procedure summary

The principal steps can be summarized as follows.

  • The effect of fermionic annihilation-creation operators, \(a_{\alpha }\) and \(a^{\dagger }_{\beta }\), on the system of one-electron states is mapped onto \((\uparrow ,\!\downarrow )\) states in a model system of \(s\!=\!\tfrac{1}{2}\) spins (via the Jordan–Wigner transformation).

  • The state of each spin is represented by a qubit.

  • Excitations in the multi-electron system are then represented by qubits, which interact and evolve through a quantum circuit.

  • The circuit consists of a number of basic elements (quantum gates), arranged according to the structure of equations to solve.

  • At the beginning, each qubit is prepared according to the starting configuration (i.e., occupation of the electron orbitals) chosen.

  • The output of the circuit (measurement) yields the expectation value of each qubit. It can be fed back to the input until convergence.

  • The configuration emerging from this iterative process, taken together with the (previously calculated) matrix elements, yields the physical solution (energy and wave function).

4 Results and discussion

4.1 Calculation details

We used the procedure outlined in the previous sections to calculate the ground-state energy (which can be straightforwardly converted into the dissociation energy) of two molecules, hydrogen (H\(_2\)) and lithium hydride (LiH), also, presumably for the first time, under the action of stationary electric fields of four different magnitudes (\({\mathbb {E}}=\) 0.0001, 0.001, 0.01, 0.1 atomic units; 1 a.u. \(\approx 5\cdot 10^{11}\) V/m). These calculations were performed for interatomic distances, d, from 0.2 to 4 Å in steps of 0.1 Å.

The computational environment where these experiments were conducted was the IBM Q, an ensemble of quantum computers and simulators able to perform quantum computation. This environment is available remotely through the internet and can be accessed and programmed using the QISKit framework, written in the Python language. The code developed for this work is available in the following GitHub repository: https://github.com/arcalab/experiments_quantum_chemistry/tree/master/Qiskit_Programmatic_version_src. It makes use notably of the QISKit and PySCF Python frameworks.

The PySCF tool was used to specify the molecules and calculate the respective one-body and two-body integrals, already encompassing the action of electric fields, using the theory developed in “Appendix A”. Both molecules were assumed to have zero net charge and zero spin; the STO-3G basis (12) was used to calculate the integrals.

The tasks of evaluating the corresponding integrals can then be reformulated as the assembly of quantum circuits, to be executed on the quantum computers provided, using the set of software packages available, e.g., in the QISKit framework: Terra, Aer, Aqua and Ignis. The calculation of the dissociation curves requires computing the ground-state energies (discussed in Sect. 3.4) over a range of distances, in order to identify the minimum (bound molecule) and the asymptotics (separated atoms). For this purpose, we used two methods: the exact eigensolver (a classical matrix method, as a benchmark) and the VQE.
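The post-processing of such a scan (locating the minimum and the asymptote) can be sketched as follows, with a Morse potential with made-up parameters standing in for the computed E(d) points:

```python
import numpy as np

# Illustrative stand-in for the computed E(d) points: a Morse
# potential with made-up parameters (the real values would come
# from the exact-eigensolver or VQE runs described in the text).
def morse(d, D=4.7, a=1.9, d_eq=0.74):
    return D * (1.0 - np.exp(-a * (d - d_eq))) ** 2 - D

distances = np.arange(0.2, 4.0 + 1e-9, 0.1)   # same grid as in the text
energies = morse(distances)

# Equilibrium distance = grid point of minimum energy; the
# dissociation energy is the well depth relative to large d.
d_eq = distances[np.argmin(energies)]
dissociation = energies[-1] - energies.min()
```

The 0.1 Å grid limits how precisely the equilibrium distance can be located, which is relevant to the small \(d_{eq}\) discrepancies between solvers reported below.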

We used the UCC (discussed in Sect. 3.3) as the variational method, i.e., the technique to build the ansätze for the molecules under study, and the HF approximation to obtain the initial solution for the VQE method. Several parameters had to be considered in this connection: the maximum number of iterations of the COBYLA method,Footnote 4 the optimization level (an IBM Q-specific parameter determining the degree of optimization of the generated circuits), and the mapping method to use, such as the Jordan–Wigner (25), Bravyi–Kitaev, or parity methods [see Cao et al. (2019) for more information on these methods], each offering a different trade-off between precision and circuit size. The technical parameters of the calculation, selected after a course of trial and error, are summarized in Table 2.

Table 2 The set of technical parameters used for quantum calculations

The quantum or hybrid (such as VQE) procedures in the IBM Q require that a backend is specified, i.e., an actual processing node able to execute the quantum circuits; this can be either a classical computer simulating the quantum computation, with or without simulated quantum noise, or a real quantum device with between 2 and 53 qubits. The results of this work were obtained using a simulator, the qasm_simulator.

4.2 Results: H\(_2\) molecule

Fig. 4
figure 4

Dissociation curve of H\(_2\) molecule, as calculated with a classical solver (full lines) and with the VQE (symbols connected by lines), for several values of the external electric field \({\mathbb {E}}\) marked by color. The Stark effect (i.e., the shift of the minimum energy with electric field) is shown in the inset

The total energy as a function of the interatomic distance, hence the molecule’s dissociation curve for different values of the electric field, is depicted in Fig. 4. The effect of the electric field on the shape of the dissociation curve remains negligible at the smaller field values inspected, yet results in a drastic change of the \(d{\rightarrow }\infty \) asymptotic slope and in a noticeable shift of the equilibrium position for \({\mathbb {E}}=0.1\) a.u. The abrupt change in the slope of the E(d) dependence at large distances, for the very large electric field \({\mathbb {E}}=0.1\) a.u., can be related to the onset of the molecule’s dissociation, which becomes possible via tunneling through the energy barrier (blue curves in Fig. 4).

The inspection of the VQE results, represented by symbols connected by lines in Fig. 4, reveals a numerical noise that apparently increases with the electric field magnitude. Possibly, the HF approximation used as input for the quantum calculation becomes unstable under the action of a strong electric field.

The inset of Fig. 4 shows the Stark effect for the molecule under study, that is, the shift between the ground-state energy calculated under the action of the electric field and that at \({\mathbb {E}}=0\). The distance at which the respective energies were extracted was the energy-minimum position yielded by the classical solver at \({\mathbb {E}}=0\), \(d_{eq}=0.7\) Å. We chose this option because of the fluctuations of E(d) obtained with the quantum solver.

For a nonpolar molecule without an intrinsic dipole moment, as is the case for H\(_2\), the stationary electronic Stark effect should be quadratic in the electric field. However, with the limited minimal basis used, it appears even weaker and becomes noticeable only for very strong fields.
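The expected power law can be verified on synthetic shifts; the sketch below assumes a hypothetical polarizability \(\alpha \) and checks the quadratic exponent \(\Delta E = -\tfrac{1}{2}\alpha {\mathbb {E}}^2\) by a log-log fit:

```python
import numpy as np

# Hypothetical polarizability (atomic units); illustrative only.
alpha = 5.4
# The same field magnitudes as in Sect. 4.1 (atomic units).
fields = np.array([1e-4, 1e-3, 1e-2, 1e-1])
shifts = -0.5 * alpha * fields ** 2     # quadratic Stark shifts

# A linear fit of log|shift| vs log(field) recovers the exponent 2.
slope, intercept = np.polyfit(np.log(fields), np.log(-shifts), 1)
```

Applied to the actual computed shifts, such a fit would distinguish the purely quadratic response of H\(_2\) from the linear-plus-quadratic response of the polar LiH molecule discussed in the next subsection.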

4.3 Results: LiH molecule

Fig. 5
figure 5

Same as in Fig. 4 for the LiH molecule

The results for the lithium hydride molecule are shown in Fig. 5, where the effect of the applied electric field is quite noticeable. The displacement of the E(d) curve increases with the electric field: already for 0.01 a.u. the shift of the dissociation curve becomes appreciable. The Stark effect (inset in Fig. 5) increases with the field magnitude much faster than for the H\(_2\) molecule. This is because of the intrinsic dipole moment that the LiH molecule already possesses in the ground state. The Stark effect is linear in \({\mathbb {E}}\) for small fields but then starts growing much faster because of the additional polarization of the ground state induced by the external field.

As in the case of the H\(_2\) molecule, numerical noise is visible in the results and becomes more pronounced in stronger electric fields. Also, the two solvers yield different values of the equilibrium distance at \({\mathbb {E}} =0\): \(d_{eq}=1.5\) Å for the quantum solver and 1.6 Å for the classical one. Again, the latter was taken as the reference value for the Stark effect evaluation.

5 Conclusions

We attempted to outline, in a concise way yet indicating the essential elements and the underlying theory, a representative practical resolution of a simple quantum chemistry problem on a quantum computer. Special attention has been paid to the connection between the fermionic Hamiltonians and the quantum circuits, as well as to the state preparation, the running of the algorithm, and the evaluation of the results. An interested reader may find more details and discussion in the excellent recent review by Cao et al. (2019). In practical terms, we programmed and executed the calculation of the ground-state energies of the H\(_2\) and LiH molecules on the recently commercially available quantum computer, IBM Q, of which we used the quantum device simulator.

The calculated results comprise the total energy as a function of bond length (i.e., the dissociation curve), also under an applied stationary electric field. We also evaluated the shift of the molecule’s energy with the electric field at a fixed d (equal to the equilibrium interatomic distance), i.e., the stationary electronic Stark effect, which is expected to be quadratic in \({\mathbb {E}}\) and small for the nonpolar H\(_2\) molecule, but to contain a linear term and be much stronger in the case of the polar LiH molecule. The quantum calculations were characterized by considerable numerical noise, whose magnitude increases with the strength of the electric field. The nature of these instabilities is still under investigation. In total, our case study seems to provide evidence for the feasibility of using this quantum computer for small molecules, with a reasonable number of iterations. Thus, the current quantum computation and simulation technology, even though still far from being able to address large molecules and answer relevant questions in chemistry and biology, is already able to provide physically meaningful results for small systems, constituting an important milestone for further work.