Abstract
As quantum computing approaches its first commercial implementations, quantum simulation emerges as a potentially ground-breaking technology for several domains, including biology and chemistry. However, taking advantage of quantum algorithms in quantum chemistry raises a number of theoretical and practical challenges at different levels, from conception to actual execution. We go through these challenges in a case study of a quantum simulation for the hydrogen (H\(_2\)) and lithium hydride (LiH) molecules, on an actual commercially available quantum computer, the IBM Q. The former molecule has always been a playground for testing approximate calculation methods in quantum chemistry, while the latter is slightly more complex, lacking the mirror symmetry of the former. Using the variational quantum eigensolver method, we study each molecule's ground-state energy versus interatomic distance, under the action of stationary electric fields (Stark effect). Additionally, we review the necessary calculations of the matrix elements of the second-quantization Hamiltonian, encompassing the extra terms concerning the action of electric fields, using STO-LG-type atomic orbitals to build the minimal basis sets.
1 Introduction
The beginning of the twentieth century witnessed a revolution in physics, which led to the development of quantum mechanics, a theory that proved able to solve problems of classical physics at very small scales and to predict accurately and elegantly the behavior of sub-atomic particles. From the beginning, chemistry has been a natural field of application for quantum mechanics, as quantum effects are relevant in many phenomena at the molecular scale, originating the new field of quantum chemistry—see, e.g., Levine (2014). The same happens in biology, where quantum effects are known to be relevant in several processes, and it is even believed they can help explain several macro-phenomena in the life sciences (Abbott et al. 2008).
However, applying quantum mechanics to these disciplines faces major obstacles, as calculations rapidly become intractable as the size of the molecular system grows, even with the help of the most advanced classical computational tools. The concept of quantum simulation, idealized by Feynman (1982) in the 1980s and later refined by Lloyd (1996), has raised expectations of mitigating some of these problems by achieving an exponential gain in the simulation of quantum systems, with potential impact throughout all areas of physics (Georgescu et al. 2014), including quantum chemistry (Cao et al. 2019) and the life sciences (Wang et al. 2018). Recently, as the "second quantum revolution" is coming of age, the first quantum computers are starting to emerge and become available to the broad research community, giving means to the fulfillment of Feynman's vision. Compared with classical computers, quantum devices are ultimately expected to perform quantum chemistry calculations more quickly and accurately, handling larger molecules than is possible with classical algorithms. This "quantum speedup" may lead to the design and discovery of new pharmaceuticals, materials, and industrial catalysts (Sim et al. 2018). A number of successful cases are described in the literature on the efficient calculation of properties of interest for chemistry, such as the electronic structure of molecules, phase diagrams, or reaction rates (Lidar and Wang 1999; Paesani et al. 2017; Aspuru-Guzik et al. 2005; Lanyon et al. 2010). A good review on the subject is given by Cao et al. (2019).
The conceptualization of a quantum simulation, from theory to experiment, poses many challenges (Whitfield et al. 2011), with no general recipe to tackle them. We hope to contribute to the progress in this area by exploring the simulations of two molecular systems, hydrogen (H\(_2\)) and lithium hydride (LiH), on a commercially available quantum computer, the IBM Q, accessed through the QuantaLab UMinho Academic Q Hub and programmed using the QISKit platform (Cross 2018). The hydrogen molecule, the simplest one in existence and also very important in nature, has been the natural test case of experimental and theoretical research. In particular, its ground-state properties and the dissociation curve have recently been recalculated using advanced classical (Vuckovic et al. 2015) and quantum (Colless et al. 2018) algorithms (the latter with extension to excited states). In a recent work, Rubin et al. (2020) describe Hartree–Fock calculations (done on the Google Sycamore quantum processor) for linear chains of up to twelve hydrogen atoms and discuss the resulting errors in the system's energy, along with possible ways to mitigate them. Similar works are likely to appear in rapidly growing numbers; their importance lies not in increased speed or accuracy in tackling the corresponding quantum-chemical problems, as compared with established "conventional" algorithms, but in the demonstration that these problems enter the circle of practical feasibility for quantum computers. As experience accumulates in obtaining accurate and stable results for benchmark systems and in testing different algorithms, with the power of working quantum computers simultaneously on the rise, the question of "quantum supremacy" may soon enough be posed for problems of genuine challenge for contemporary quantum chemistry.
In this work, we extend the study of the H\(_2\) molecule as a standard benchmark toward the case of asymmetric LiH, whose ground-state calculation requires the inclusion of p-type atomic orbitals. Moreover, we investigate the steady-state electronic Stark effect, i.e., the ground-state energy shift in response to a stationary external electric field (Gurav et al. 2018). We try to elucidate the essence of the quantum simulation algorithms for the broad community of physicists and chemists who may find the original works on quantum computation too technical to follow. We start from the definition of the molecular Hamiltonian and proceed through its preparation for quantum simulation to the application of the variational quantum eigensolver (VQE) method, as well as its implementation and testing on the IBM Q.
The article is organized as follows: in Sect. 2, we briefly introduce the quantum Hamiltonian formalism for many-body systems, the Hartree–Fock approximation and the second quantization representation; in Sect. 3, we explain the mapping onto a system of qubits, the design of the quantum circuit corresponding to the initial Hamiltonian, and the working principle of the VQE. Section 4 is dedicated to the case study of the H\(_2\) and LiH molecules, where we present and discuss the procedure details and the results of the calculation of the dissociation curves in the presence of an electric field. The last section offers a summary and concluding remarks. "Appendix A" contains details of the calculation of the necessary matrix elements for this molecular setting, which are not commonly available in the literature.
2 Quantum chemistry background
2.1 Quantum Hamiltonian formalism
In this section, we outline the basic principles of the formulation of molecular Hamiltonians and the latter's "preparation" for the numerical calculation of electronic characteristics relevant for physics and chemistry. This is the domain of traditional quantum chemistry, here represented by quite simple cases. A good introduction to the subject has been offered, for instance, by Levine (2014) and Szabo and Ostlund (2012). Here, we briefly describe just a few concepts and approximations essential for the formulation of the computational problem to be solved using quantum tools.
The quantum Hamiltonian formalism, in Schrödinger's formulation, is centered on the Hamiltonian operator, \(H = T + V\), T being the kinetic energy of the constituent particles and V the potential energy of all interactions and fields in the system, both internal and external. The action of this operator on the system's wavefunction (WF), \(\vert {\Psi }\rangle \), describes the latter's evolution,
\[ i\hbar \frac{\partial }{\partial t} \vert {\Psi }\rangle = H \vert {\Psi }\rangle , \qquad (1) \]
or yields the total energy of the system if it is in a stationary state,
\[ H \vert {\Psi }\rangle = E \vert {\Psi }\rangle . \qquad (2) \]
The wavefunction \(\vert {\Psi }\rangle \) depends, beyond time, on other arguments (such as spatial coordinates and spin components) according to the representation used. Usually, there are several possible solutions to the equation, which correspond to different values of the energy (energy levels or eigenvalues, \(E_n\)), which are discrete for a confined (or bound) physical system. These states, called stationary states or eigenstates, are denoted \(\vert {\Psi }_n\rangle \), with the index \(n=1,\dots ,m\), in general, corresponding to a set of so-called quantum numbers that distinguish the eigenstates. The set of eigenstates constitutes the eigenbasis of the system that can be seen as a set of mutually orthogonal vectors in a Hilbert space of dimension m. The quantum system is also allowed to be in a superposition state,
\[ \vert {\Psi }\rangle = \sum _{n=1}^{m} \lambda _n \vert {\Psi }_n\rangle , \qquad (3) \]
whose energy is not well defined (and, therefore, such a state is non-stationary). According to the statistical interpretation of quantum mechanics originally proposed by M. Born (Saunders et al. 2010), a measurement of such a quantum state can randomly yield one of the eigenvalues of its energy, \(E_n\), with the probabilities given by the squared amplitudes of the basis eigenstates participating, \(\vert \lambda _n \vert ^2\).
2.2 Many-particle systems
The Schrödinger equation for a system of non-interacting particles can be decomposed into a set of uncoupled equations for each particle, and the system's WF can be factorized. A combination of two non-interacting and non-entangled systems can be described by applying the tensor product to the two vector spaces, with the resultant basis given as follows:
\[ \Psi _{\alpha \beta } = \Psi _{\alpha }^{(1)} \otimes \Psi _{\beta }^{(2)} . \qquad (4) \]
In Eq. (4), \(\Psi _{\alpha }^{(s)}\) denotes an eigenfunction of a state \({\alpha }=1,\dots ,M_s\) of the system \(\Psi ^{(s)}\) (\(s=1,2\)). The dimension of the product space is \(\mathrm{dim}(\Psi ^{(1)})\cdot \mathrm{dim}(\Psi ^{(2)})=M_1 M_2\).
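In code, the tensor product of Eq. (4) is just a Kronecker product; this minimal sketch (with arbitrary toy dimensions \(M_1=2\) and \(M_2=3\)) confirms that the combined space has dimension \(M_1 M_2\):

```python
import numpy as np

# Two independent subsystems with basis sizes M1 = 2 and M2 = 3 (toy values)
psi1 = np.array([1.0, 0.0])          # a basis state of system 1
psi2 = np.array([0.0, 1.0, 0.0])     # a basis state of system 2

combined = np.kron(psi1, psi2)       # tensor (Kronecker) product state
dim = combined.size
print(dim)  # M1 * M2 = 6
```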
When the particles constituting the system are identical, their spin becomes highly relevant. The spin, an intrinsic angular momentum of the particle, distinguishes two different types of particles: bosons (e.g., photons) and fermions (e.g., electrons and protons). For fermions, the Pauli exclusion principle states that the system's WF must be antisymmetric with respect to the permutation of any two particles. This imposes an important restriction upon the WF: the product vector (4), if applied to a pair of non-interacting electrons, is not compatible with the Pauli principle.
In quantum chemistry, a single-electron WF is called an orbital (Szabo and Ostlund 2012). One can distinguish spatial orbitals \(\phi ({\mathbf {r}})\), where \({\mathbf {r}}\) corresponds to the spatial coordinates, and spin orbitals \(\chi ({\mathbf {x}})\), where \({\mathbf {x}}=({\mathbf {r}};s)\) and \(s=\uparrow , \downarrow \) stands for the two possible orientations of the electron's spin. For two electrons, the Pauli principle means that
\[ \Psi ({\mathbf {x}}_1,{\mathbf {x}}_2) = -\Psi ({\mathbf {x}}_2,{\mathbf {x}}_1) , \qquad (5) \]
or, equivalently,
\[ \phi ({\mathbf {r}}_1,{\mathbf {r}}_2) = \mp \phi ({\mathbf {r}}_2,{\mathbf {r}}_1) , \qquad (6) \]
where the upper (lower) sign corresponds to parallel (anti-parallel) spins of the two electrons. If the electron–electron interaction is neglected, the correct (i.e., compatible with the Pauli principle) two-electron WF is written in the form of the so-called Slater determinant,
\[ \Psi ({\mathbf {x}}_1,{\mathbf {x}}_2) = \frac{1}{\sqrt{2}} \begin{vmatrix} \chi _{\alpha }({\mathbf {x}}_1) & \chi _{\beta }({\mathbf {x}}_1) \\ \chi _{\alpha }({\mathbf {x}}_2) & \chi _{\beta }({\mathbf {x}}_2) \end{vmatrix} , \qquad (7) \]
where \(\chi _{\alpha }({\mathbf {x}})\) and \(\chi _{\beta }({\mathbf {x}})\) designate different spin orbitals. A Slater determinant can be straightforwardly generalized toward the case of N identical non-interacting particles. It vanishes when any two electrons “occupy” the same spin orbital, as required by the Pauli exclusion principle.
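A short numerical sketch of the two-electron Slater determinant construction, using made-up one-dimensional "orbitals" in place of real spin orbitals, verifies the two properties just mentioned: antisymmetry under particle exchange and vanishing of the WF when both electrons occupy the same orbital:

```python
import numpy as np

def slater_2e(chi_a, chi_b, x1, x2):
    """Two-electron Slater determinant built from spin orbitals chi_a, chi_b
    (callables of a single coordinate), normalized by 1/sqrt(2)."""
    m = np.array([[chi_a(x1), chi_b(x1)],
                  [chi_a(x2), chi_b(x2)]])
    return np.linalg.det(m) / np.sqrt(2.0)

# Toy 1D "orbitals" standing in for chi_alpha, chi_beta (illustrative only)
chi_a = lambda x: np.exp(-x**2)
chi_b = lambda x: x * np.exp(-x**2)

# Antisymmetry under exchange of the two electrons
assert np.isclose(slater_2e(chi_a, chi_b, 0.3, 1.1),
                  -slater_2e(chi_a, chi_b, 1.1, 0.3))
# Pauli principle: the determinant vanishes for a doubly occupied orbital
assert np.isclose(slater_2e(chi_a, chi_a, 0.3, 1.1), 0.0)
```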
The Slater determinant is a simple way of constructing a many-electron WF from spin orbitals representing non-interacting electrons. Completely neglecting the Coulomb interaction between the electrons would be too crude an approximation, while solving the many-electron Schrödinger equation directly is an intractable problem. A compromise is achieved by a self-consistent field method, also called the Hartree–Fock (HF) approximation. An effective one-electron operator is introduced, \(v^{HF}({\mathbf {x}})\), called the Fock operator, which includes, as a part of the single-electron potential energy, the electron's interaction with all other electrons, whose positions are averaged under the assumption that the WF representing the system of N electrons is a single Slater determinant. An explicit expression for \(v^{HF} ({\mathbf {x}})\) will be presented below.
2.3 Molecular Hamiltonian and Hartree–Fock approximation
The general form of a molecular Hamiltonian is (in atomic units):
\[ H = -\sum _{i=1}^{N} \frac{\nabla _i^2}{2} - \sum _{A=1}^{M} \frac{\nabla _A^2}{2M_A} - \sum _{i=1}^{N} \sum _{A=1}^{M} \frac{Z_A}{r_{iA}} + \sum _{i=1}^{N} \sum _{j>i}^{N} \frac{1}{r_{ij}} + \sum _{A=1}^{M} \sum _{B>A}^{M} \frac{Z_A Z_B}{R_{AB}} . \qquad (8) \]
The first and second terms of (8) correspond to the kinetic energy of the electrons (numbered by i and \(j=1,\dots ,N\)) and of the nuclei (numbered by \(A=1,\dots ,M\)), respectively. The third one represents the Coulomb attraction of each electron to each nucleus, with \(r_{iA}\) being the electron–nucleus distance and \(Z_A\) the nucleus charge. Finally, the fourth and fifth terms correspond to the repulsion among the electrons and among the nuclei, respectively. It is common and well justified to use the Born–Oppenheimer approximation, which neglects the motion of the nuclei because they are much heavier than the electrons; the potential energy of the nucleus–nucleus interactions thereby becomes a constant (for a fixed placement of the nuclei), hence a parameter for the electron problem. With this, the electron Hamiltonian (8) reduces to:
\[ H_e = -\sum _{i=1}^{N} \frac{\nabla _i^2}{2} - \sum _{i=1}^{N} \sum _{A=1}^{M} \frac{Z_A}{r_{iA}} + \sum _{i=1}^{N} \sum _{j>i}^{N} \frac{1}{r_{ij}} . \qquad (9) \]
For the H\(_2\) molecule, the Hamiltonian (9) depends on a single parameter, the distance between the protons d. If the lowest eigenvalue of (9), \(E_0(d)<0\), is larger in absolute value than the proton–proton repulsion energy, \(E_{rep}(d)=d^{-1}\), the molecule is bound, as illustrated in Fig. 1.
The Hamiltonian (9) has to be reduced to a single-electron one in order to proceed with finding its eigenvalues, which is achieved by means of the HF approximation, where one takes an average over the positions and spins of all electrons but one (to be labeled by \(i=1\)). This is done by multiplying (9) by \(\vert {\chi ^{(1)}_{\alpha }\chi ^{(2)}_{\beta }\dots \chi ^{(N)}_{\gamma }}\rangle \) and the corresponding "bra," both in the form of Slater determinants of dimension N (the number of electrons in the system), and integrating over \({\mathbf {x}}_2,\, {\mathbf {x}}_3,\,\dots ,\,{\mathbf {x}}_N\), which leads to:
\[ \left( -\frac{\nabla _1^2}{2} - \sum _{A=1}^{M} \frac{Z_A}{r_{1A}} + v^{HF}_{1} \right) \chi _{\alpha }({\mathbf {x}}_1) = \epsilon _{\alpha }\, \chi _{\alpha }({\mathbf {x}}_1) , \qquad (10) \]
where \(v^{HF}_{1}\) is the average potential experienced by the "chosen" electron and \(\epsilon _{\alpha }\) is the single-electron energy. The HF potential can be written in the form:
\[ v^{HF}_{1}\, \chi _{\alpha }({\mathbf {x}}_1) = \sum _{\beta } \left[ \int \frac{\vert \chi _{\beta }({\mathbf {x}}_2)\vert ^2}{r_{12}}\, d{\mathbf {x}}_2 \; \chi _{\alpha }({\mathbf {x}}_1) - \int \frac{\chi _{\beta }^{*}({\mathbf {x}}_2)\, \chi _{\alpha }({\mathbf {x}}_2)}{r_{12}}\, d{\mathbf {x}}_2 \; \chi _{\beta }({\mathbf {x}}_1) \right] , \qquad (11) \]
with the sum running over the occupied spin orbitals.
The two terms in Eq. (11) are called the Coulomb and exchange energies, respectively. The latter poses the main difficulty in solving Eq. (10); however, neglecting it (an approximation known as the Hartree approximation) results in an unacceptably large error. Due to the nonlinearity of the HF approximation, the equations are solved in practice by self-consistent (iterative) methods, using a finite set of spatial basis functions, \(\phi _\mu ({\mathbf {r}})\) (\(\mu =1,2,\) \(\dots \), K)—see, e.g., Szabo and Ostlund (2012). The solution yields a set of HF spin orbitals \(\{\chi _\alpha \}\) with corresponding energies \(\{\epsilon _\alpha \}\), \(\alpha =1,2,\dots ,2K\), where 2K must be at least N, the number of electrons in the system. The possible ways of placing N electrons over 2K spin orbitals give rise to \((2K)!/(N!(2K-N)!)\) Slater determinants, one of which represents the ground state of the system while the others correspond to excited states. The HF approximation takes into account the quantum mechanical correlation caused by the Pauli principle, however, only between electrons with parallel spins. The difference between the approximate HF energy and the exact energy of the system is known as the correlation correction (or correlation energy).
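The counting of Slater determinants above is a plain binomial coefficient; a minimal sketch (with K and N chosen to match the minimal-basis H\(_2\) setting discussed later):

```python
from math import comb

def n_determinants(K, N):
    """Number of Slater determinants for N electrons in 2K spin orbitals."""
    return comb(2 * K, N)

# H2 in a minimal basis: K = 2 spatial orbitals, i.e., 2K = 4 spin orbitals, N = 2 electrons
print(n_determinants(2, 2))  # 6: the HF ground state plus excited configurations
```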
It is common to use, as initial approximation basis sets to represent molecular orbitals (MO) in the HF equations, linear combinations of atomic orbitals (LCAO). Since the exact atomic orbitals for a given many-electron atom are difficult to construct, the so-called Slater-type orbitals (STOs) are sometimes used, which are inspired by the (exactly known) radial asymptotics of the spatial orbitals of the hydrogen atom,
\[ \phi ^{\mathrm {STO}}_{n,l,m}(\zeta ,{\mathbf {r}}) \propto r^{\,n-1}\, e^{-\zeta r}\, Y_{l,m}(\theta ,\varphi ) \qquad (12) \]
(here \(Y_{l,m}\) is a spherical harmonic). For instance, one can use
\[ \phi ^{\mathrm {STO}}_{1s}(\zeta ,{\mathbf {r}}) = \left( \frac{\zeta ^{3}}{\pi } \right) ^{1/2} e^{-\zeta r} \]
for s-states, where \(\zeta \) is the Slater orbital exponent. As the STO functions are difficult to handle in many-center integrals, one practical resort consists of approximating these functions with linear combinations of Gaussian functions, known as STO-LG functions. The calculation of the necessary matrix elements is then greatly facilitated, because the multi-center integrals with Gaussian functions can be evaluated analytically (see "Appendix A"). In this work, a set of such functions with \(n=3\) Gaussians mimicking each STO function, named the STO-3G basis, is used. For the 1s state, such a function is:
\[ \phi ^{\mathrm {STO-3G}}_{1s}({\mathbf {r}}) = \sum _{i=1}^{3} c_i \left( \frac{2\alpha _i}{\pi } \right) ^{3/4} e^{-\alpha _i r^{2}} , \]
with \(c_i\) being the contraction coefficients.
Here, \(\alpha _i\) are the Gaussian orbital exponents that have been optimized for the best possible approximation of \(\phi ^\mathrm{STO}_{1s}(\zeta ,{\mathbf {r}})\) for a given \(\zeta \) (Hehre et al. 1969). The corresponding spin orbitals, \(\chi _\alpha (x)\), are obtained from \(\phi ^{\mathrm{STO-3G}}_{\mu }\) by multiplying them with a spinor \(\psi (s)\), \(s =\uparrow ,\, \downarrow \).
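To make the STO-3G construction concrete, the sketch below evaluates the contracted 1s function using the commonly tabulated STO-3G exponents and contraction coefficients for hydrogen (corresponding to a scaled \(\zeta \approx 1.24\); these values are quoted from standard basis-set tables and should be cross-checked against Hehre et al. 1969) and verifies its normalization numerically:

```python
import numpy as np

# Tabulated STO-3G parameters for the hydrogen 1s orbital
alphas = np.array([3.42525091, 0.62391373, 0.16885540])   # Gaussian exponents alpha_i
coeffs = np.array([0.15432897, 0.53532814, 0.44463454])   # contraction coefficients c_i

def phi_sto3g_1s(r):
    """Contracted 1s function: sum of normalized s-type Gaussian primitives."""
    norms = (2.0 * alphas / np.pi) ** 0.75
    return np.exp(-np.multiply.outer(np.asarray(r, float) ** 2, alphas)) @ (coeffs * norms)

# Radial normalization check: 4*pi * int_0^infty |phi(r)|^2 r^2 dr should be ~1
r = np.linspace(0.0, 20.0, 200_001)
dr = r[1] - r[0]
vals = phi_sto3g_1s(r)
norm = 4.0 * np.pi * np.sum(vals**2 * r**2) * dr
print(norm)
```

The contraction coefficients are defined so that the contracted function is (very nearly) normalized, which the numerical integral confirms.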
2.4 Second quantization
In the quantum mechanics of systems consisting of a number of identical particles (electrons, in our case), it is common to use the formalism called second quantization, originally introduced by P. Dirac—see, e.g., Dirac (1981). This formalism deals with the whole system of particles, instead of each particle individually, by introducing a new way of describing states, via the latter's occupation numbers. Let \(\{\chi _\alpha ({\mathbf {x}})\}\) be a complete set of one-electron (atomic or molecular) spin orbitals that constitute the Hilbert space of a single particle. If the particles were non-interacting bosons, a state of the whole system could be entirely specified by indicating the numbers of particles, \(n_\alpha \), occupying each of these orbitals. Such an occupation number state can be designated by a state vector \(\vert {n_1,n_2,...} \rangle \). If the particles interact with an external field or with each other (but still assuming that they are bosons and no restrictions are imposed by the particles' spin), the state vector in the occupation number representation will evolve with time, obeying the time-dependent Schrödinger equation (1) with the Hamiltonian written in the occupation number representation:
\[ H = H_1 + H_2 = \sum _{\alpha ,\beta } \tau _{\alpha \beta }\, a^{\dagger }_{\alpha } a_{\beta } + \frac{1}{2} \sum _{\alpha ,\beta ,\gamma ,\delta } \mu _{\alpha \beta \gamma \delta }\, a^{\dagger }_{\alpha } a^{\dagger }_{\gamma } a_{\delta } a_{\beta } . \qquad (13) \]
The summation is over states in the single-particle Hilbert space (e.g., 1s-like, 2p-like, etc.), \(\tau _{\alpha \beta }\) being a matrix element of the single-electron energy,
\[ \tau _{\alpha \beta } = \int \chi _{\alpha }^{*}({\mathbf {x}}_1) \left( -\frac{\nabla _1^2}{2} - \sum _{A=1}^{M} \frac{Z_A}{r_{1A}} \right) \chi _{\beta }({\mathbf {x}}_1)\, d{\mathbf {x}}_1 . \qquad (14) \]
The second term in (13) represents the Coulomb interactions between the particles, with the matrix element given [according to the convention used in quantum chemistry (Szabo and Ostlund 2012)] by:
\[ \mu _{\alpha \beta \gamma \delta } = \int \!\! \int \chi _{\alpha }^{*}({\mathbf {x}}_1) \chi _{\beta }({\mathbf {x}}_1)\, \frac{1}{r_{12}}\, \chi _{\gamma }^{*}({\mathbf {x}}_2) \chi _{\delta }({\mathbf {x}}_2)\, d{\mathbf {x}}_1\, d{\mathbf {x}}_2 . \qquad (15) \]
The integration in Eqs. (14) and (15) is over coordinates (and summation over spins) of one or two electrons labeled 1, 2.
The Hamiltonian (13) is written in terms of the so-called creation, \(a^{\dagger }\), and annihilation, a, operators, which add one particle to (or remove one from) an orbital \(\alpha \), respectively:
\[ a^{\dagger }_{\alpha } \vert n_1, \dots , n_{\alpha }, \dots \rangle \propto \vert n_1, \dots , n_{\alpha }+1, \dots \rangle , \quad a_{\alpha } \vert n_1, \dots , n_{\alpha }, \dots \rangle \propto \vert n_1, \dots , n_{\alpha }-1, \dots \rangle . \qquad (16) \]
The product \(a_{\alpha }^{\dagger }a_\alpha \) is the occupation number operator for the orbital \(\alpha \). In the case of bosons, the creation and annihilation operators for different \(\alpha \) and \(\beta \) commute, because different orbitals are filled independently. This is not the case for fermions, because of the Pauli exclusion principle. By virtue of this, the following (anti-commutation) relations hold for the electron operators:
\[ \{ a_{\alpha }, a^{\dagger }_{\beta } \} = \delta _{\alpha \beta }, \qquad \{ a_{\alpha }, a_{\beta } \} = \{ a^{\dagger }_{\alpha }, a^{\dagger }_{\beta } \} = 0 . \qquad (17) \]
It can be shown that (17) guarantees that the occupation numbers can take only the values 0 and 1, in accordance with the Pauli principle (Dirac 1981). Therefore, the Hamiltonian (13) has the same form for bosons and fermions, the only difference being in the (anti-)commutation relations of the creation and annihilation operators. For fermions, each state \(\vert {n_1,n_2,...} \rangle \) of this Hamiltonian corresponds to a Slater determinant built on the 2K spin orbitals of the basis, with the number of columns and rows equal to the number of electrons in the system, \(N=\sum _{\alpha =1}^{2K} n_\alpha \).
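The anti-commutation relations (17) and the 0/1 occupation numbers can be checked with an explicit matrix representation of two fermionic orbitals on their 4-dimensional Fock space; this toy construction, with one self-consistent sign convention, in fact anticipates the Jordan–Wigner mapping of Sect. 3.1:

```python
import numpy as np

# Single-orbital matrices in the basis {|0>, |1>} (|1> = occupied)
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
a_single = np.array([[0.0, 1.0],    # annihilation: |1> -> |0>
                     [0.0, 0.0]])

# Two-orbital operators; orbital 2 carries a sign string to enforce (17)
a1 = np.kron(a_single, I2)
a2 = np.kron(sz, a_single)

def anti(x, y):
    return x @ y + y @ x

# Anti-commutation relations, Eq. (17)
assert np.allclose(anti(a1, a1.T), np.eye(4))   # {a1, a1^dag} = 1
assert np.allclose(anti(a1, a2.T), 0.0)         # {a1, a2^dag} = 0
assert np.allclose(anti(a1, a2), 0.0)           # {a1, a2} = 0

# Occupation number operator has eigenvalues 0 and 1 only (Pauli principle)
n1 = a1.T @ a1
print(sorted(set(np.round(np.linalg.eigvalsh(n1), 8))))  # [0.0, 1.0]
```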
The choice of the single-electron basis functions \(\chi _\alpha ({\mathbf {x}})\) is, in principle, arbitrary, but if we "guess" their form to be close to the "true" WFs of the system (which actually are not well defined in the single-electron form!), the non-diagonal elements of the matrices \(\tau _{\alpha \beta }\) and \(\mu _{{\alpha }{\beta }{\gamma }{\delta }}\) will be much smaller than the diagonal ones. For practical calculations of these integrals, the basis functions are expressed in terms of the STO-3G sets explained in the previous section. The choice of molecular orbitals is based on the MO-LCAO approximation. One can improve this initial approximation by first solving the HF equation (10) and using its solutions to calculate the matrix elements. Then, the diagonalization of Eq. (13) amounts to the evaluation of the correlation energy.
In this article, we are also going to consider the stationary Stark effect, described by the following (single-electron) Hamiltonian:
\[ H_E = {\mathbf {E}} \cdot \sum _{i=1}^{N} {\mathbf {r}}_i , \qquad (18) \]
where \({\mathbf {E}}\) is the electric field intensity. Its second-quantization representation is identical to \(H_1\) in (13), and the corresponding matrix element is written as
\[ (\tau _{E})_{\alpha \beta } = {\mathbb {E}} \int \chi _{\alpha }^{*}({\mathbf {x}})\, z\, \chi _{\beta }({\mathbf {x}})\, d{\mathbf {x}} , \qquad (19) \]
where \({\mathbb {E}}=\vert {\mathbf {E}}\vert \) and z-axis is assumed to be directed along \({\mathbf {E}}\). The use of second quantization formalism is facilitated, for instance, by the PyQuante (Muller 2017) and the PySCF (Sun et al. 2018) tools, Python libraries targeted to quantum chemistry calculations. We present the matrix elements (14), (15) and (19) calculated for 1s, 2s and \(2p_z\) atomic orbitals in “Appendix A”.
3 Quantum simulation of a quantum chemistry Hamiltonian
3.1 Mapping the fermion Hamiltonian onto a qubit representation
In order to perform quantum computations, one needs to map the second-quantization Hamiltonian onto a qubit (spin) representation and then design the corresponding quantum circuit that implements it. The basic idea is to replace the fermionic operators a and \(a^{\dagger }\) with tensor products of the Pauli matrices,
\[ \sigma _x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} , \quad \sigma _y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} , \quad \sigma _z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} , \qquad (20) \]
which can be done in a number of ways, such as the Jordan–Wigner or Bravyi–Kitaev transformations (Cao et al. 2019). The former, addressed in this section, is a specific method based on the isomorphism between the creation and annihilation operators and the algebra of the Pauli matrices (Whitfield et al. 2011).
In the case of a single (one-electron) state, the Jordan–Wigner (JW) mapping is simple:
\[ a^{\dagger } \mapsto \sigma ^{+}, \qquad a \mapsto \sigma ^{-}, \qquad (21) \]
\[ \sigma ^{\pm } = \frac{1}{2} \left( \sigma _x \pm i \sigma _y \right) . \qquad (22) \]
It is illustrated in Fig. 2. The matrices \(\sigma ^{\pm }\) represent the spin-raising and spin-lowering operators, respectively, while \(\sigma _z\) is related to the occupation number operator.
In the case of \(N>\!\!1\) fermions, in order to satisfy the anti-commutation relations (17) between any pair of fermionic operators, one enumerates the states by a single index (\(\alpha \)) and adds a string operator, i.e., [spin]=[fermion]\(\times \)[string], taking into account the occupation numbers, \(n_\alpha \), of the states with \(\beta < \alpha \), for a given \(\alpha \):
\[ a_{\alpha } \mapsto \Big ( \prod _{\beta <\alpha } (-1)^{n_{\beta }} \Big )\, \sigma ^{-}_{\alpha } . \qquad (23) \]
The relation (23) holds for multiple fermions, and the phase factors can be represented by the Pauli matrix \(\sigma _z\) acting on the corresponding fermionic states. Therefore, the fermionic operators are mapped onto direct products of Pauli matrices as follows:
\[ a_{\alpha } \mapsto \Big ( \bigotimes _{\beta <\alpha } \sigma _z^{(\beta )} \Big ) \otimes \sigma ^{-}_{\alpha } , \qquad (24) \]
\[ a^{\dagger }_{\alpha } \mapsto \Big ( \bigotimes _{\beta <\alpha } \sigma _z^{(\beta )} \Big ) \otimes \sigma ^{+}_{\alpha } . \qquad (25) \]
Thus, any Hamiltonian operator written in the second quantization representation can be rewritten in terms of the raising and lowering spin operators and the Pauli matrix \(\sigma _z\). A catalogue of such translations can be found in Table A2 of the work by Whitfield et al. (2011). For a Hilbert space of 2K spin orbitals, a register of 2K qubits is required for the JW mapping. The resulting qubit Hamiltonian has the following generic form:
\[ H = \sum _{i,q} h^{q}_{i}\, \sigma ^{q}_{i} + \sum _{ij,\,qr} h^{qr}_{ij}\, \sigma ^{q}_{i} \sigma ^{r}_{j} + \dots , \qquad (26) \]
where the indices i mean the type of the Pauli matrix (x, y or z), the indices q run over qubits and h are some coefficients. This form is useful for the algorithms discussed in the next section.
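As a minimal numerical check of the JW construction (under one self-consistent sign convention, which may differ from other references), one can verify that a single hopping term between two orbitals coincides with a two-term sum of Pauli strings of the generic form (26):

```python
import numpy as np

# Single-orbital matrices: |0> = empty, |1> = occupied
a = np.array([[0.0, 1.0], [0.0, 0.0]])          # annihilation
I2, sz = np.eye(2), np.diag([1.0, -1.0])

# Jordan-Wigner representation of two fermionic modes (mode 2 carries the Z string)
a1, a2 = np.kron(a, I2), np.kron(sz, a)

# A one-electron "hopping" term tau * (a1^dag a2 + a2^dag a1), cf. H_1 in Eq. (13)
tau = 0.7
H_fermion = tau * (a1.T @ a2 + a2.T @ a1)

# The same operator as a sum of Pauli strings, the generic form of Eq. (26)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
H_qubit = 0.5 * tau * (np.kron(X, X) + np.kron(Y, Y))

assert np.allclose(H_fermion, H_qubit)
print(np.round(np.linalg.eigvalsh(H_qubit), 6))  # eigenvalues -tau, 0, 0, +tau
```

The matching spectra confirm that the spin (qubit) Hamiltonian faithfully reproduces the fermionic one.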
3.2 Quantum computation of the eigenvalues of a Hamiltonian
Once the molecule's Hamiltonian has been transformed into the qubit representation, the ground-state energy can be evaluated using several methods. One such method, where a quantum advantage seems likely, is the calculation of the eigenvalues of Hamiltonians through the application of the quantum phase estimation (QPE) algorithm (Luis and Peřina 1996), which also has several other applications, such as in the resolution of linear equations (Harrow et al. 2009). The method requires approximating the evolution operator, \({\hat{U}}=\exp {(-i{H}t})\) (t is time), and applying it to the initial state an appropriate number of times. For an eigenstate, the application of \({\hat{U}}\) results in adding a phase \((-Et)\), so that the energy eigenvalue E can be estimated. Unfortunately, despite its theoretical attractiveness and broad scope of possible applications, the method poses serious technical difficulties, which make its practical realization unlikely at the present level of maturity of quantum computers. Namely, the QPE method requires a very large number of entangled qubits and quantum gates to be effective.
Alternatively, one can adopt the strategy of applying the Hamiltonian to a state several times and measuring the result (i.e., performing quantum sampling), in order to obtain an estimate of the expected eigenvalue, for which effective algorithms are available, particularly the quantum expected eigenvalue estimation (QEE) method. The method requires that the Hamiltonian operator can be decomposed into a polynomial number (M) of independent n-qubit operators, as exemplified by Eq. (26), and consists in the "measurement" of the expectation values of such operators for a trial state \(\vert \Psi \rangle \) (also known as the ansatz):
\[ \langle H \rangle = \sum _{i=1}^{M} h_i \langle \Psi \vert \hat{P}_i \vert \Psi \rangle , \qquad (27) \]
where \(\hat{P}_i\) stands for the i-th term (a product of Pauli matrices) of the decomposition (26).
The estimation of the expectation values, \(\langle \cdots \rangle \), requires repeated measurements with a large number of qubits, but, on the other hand, the computational effort amounts to the evaluation of a polynomial number of independent terms.
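The QEE bookkeeping of Eq. (27) can be sketched in a few lines: a toy 2-qubit Hamiltonian is written directly as a Pauli decomposition (the coefficients are invented for illustration, not the H\(_2\) values computed later), and the term-by-term sum of expectation values is checked against the direct evaluation of \(\langle \Psi \vert H \vert \Psi \rangle \):

```python
import numpy as np

# One-qubit Pauli basis
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    """Tensor product of Pauli matrices, e.g. 'ZX' -> Z (x) X."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, paulis[ch])
    return op

# Toy 2-qubit Hamiltonian given as a Pauli decomposition, H = sum_i h_i P_i
terms = {"II": -1.05, "ZI": 0.39, "IZ": 0.39, "ZZ": -0.01, "XX": 0.18}

# A (not necessarily optimal) normalized trial state |Psi>
psi = np.array([0.99, 0.0, 0.0, 0.14], dtype=complex)
psi /= np.linalg.norm(psi)

# QEE-style estimate: sum of separately "measured" expectation values, Eq. (27)
energy_terms = sum(h * np.real(np.vdot(psi, pauli_string(p) @ psi))
                   for p, h in terms.items())

# Direct evaluation <Psi|H|Psi> for comparison
H = sum(h * pauli_string(p) for p, h in terms.items())
energy_direct = np.real(np.vdot(psi, H @ psi))
assert np.isclose(energy_terms, energy_direct)
print(energy_terms)
```

On a real device, each \(\langle \hat{P}_i \rangle \) is estimated from repeated measurements in a rotated basis rather than computed exactly, but the linear recombination of the M terms is the same.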
An objective comparison of the QPE and QEE methods is presented by McClean et al. (2016) and summarized in Table 1. One main advantage of QEE, when compared with QPE, is that it largely reduces the number of gates needed and, more importantly, the amount of time the entanglement over sets of qubits has to be maintained, i.e., the coherence time: for QEE it is O(1) (independent of the precision, p), which is within the grasp of existing quantum computers, whereas for QPE it scales as \(O(p^{-1})\), growing as the required precision tightens. However, QEE introduces the need to prepare more copies of the ansatz to maintain the independence of the terms in Eq. (27), O(M) against O(1) for QPE, requiring polynomially more memory, i.e., more qubits. Moreover, for a desired precision p, the number of necessary sampling steps is \(O({|h_{max}|}^2 M p^{-2})\), where \(h_{max}\) is the term with the maximum norm in the decomposition of the Hamiltonian. In summary, the QEE method reduces the required minimum coherence time but introduces a polynomial complexity penalty, both in memory and in the number of steps necessary. Yet, it still holds an exponential advantage when compared to classical methods.
3.3 Trial wave functions (ansätze)
The ground-state energy estimation requires an appropriate ansatz. If the number of electrons in the system, N, is fixed, one may use the Slater determinant solution of the HF problem for the considered molecule, corresponding to its ground state. We shall denote it by \(\vert {\Psi _0}\rangle \); it may be written as
\[ \vert {\Psi _0}\rangle = \prod _{\alpha } a^{\dagger }_{\alpha } \vert \text{ vac }\rangle , \]
where \(\alpha \) runs over the occupied orbitals and \(\vert \text{ vac }\rangle \) denotes the vacuum (with no particles). Alternatively, one may start by defining a new "vacuum" state in the N-particle sector of the Fock space, which can be chosen as \(\vert {\Psi _0}\rangle \) and used to prepare the parametrized trial quantum state (Barkoutsos et al. 2018). This can be done by a quantum circuit implementing a unitary operator, \({\hat{U}}\), that represents a set of perturbations to the state \(\vert {\Psi _0}\rangle \):
\[ \vert \Psi (\overrightarrow{\theta }) \rangle = {\hat{U}}(\overrightarrow{\theta })\, \vert {\Psi _0}\rangle . \qquad (28) \]
The parametrized ansatz will be used to estimate the energy with respect to the Hamiltonian. Here \(\overrightarrow{\theta }\) stands for the whole set of parameters (also called “gate angles” in this context) that can be adjusted and used in the optimization procedure (see Sec. 3.4 below).
There are several possible ways of constructing this operator, leading, e.g., to the so-called unitary coupled cluster (UCC) and heuristic approaches that have been overviewed by Cao et al. (2019) and Barkoutsos et al. (2018). There are options for choosing different ansätze implemented in the QISKit package. Let us briefly consider the UCC approach, which has mainly been used in this work.
A flexible way to generate multi-determinantal reference states (hence overcoming the HF approximation) within the coupled cluster (CC) method, suggested by Jeziorski and Monkhorst (1981), has been translated by Barkoutsos et al. (2018) (specifically from the angle of quantum algorithms for electronic structure calculations) into the unitary version of the CC approach (UCC). The operator acting on the "vacuum state" according to Eq. (28) is chosen as follows:
\[ {\hat{U}}(\overrightarrow{\theta }) = e^{{\hat{T}} - {\hat{T}}^{\dagger }} . \qquad (29) \]
Here \({\hat{T}}\) is an operator representing excitations from occupied to unoccupied states (labeled below by Greek and Latin indices, respectively), composed of hierarchical terms,
\[ {\hat{T}} = {\hat{T}}_1 + {\hat{T}}_2 + {\hat{T}}_3 + \cdots , \qquad (30) \]
corresponding to n-particle excitations, namely,
\[ {\hat{T}}_1 = \sum _{\alpha ;\, a} \theta _{\alpha }^{a}\, a^{\dagger }_{a} a_{\alpha }, \qquad {\hat{T}}_2 = \frac{1}{4} \sum _{\alpha ,\beta ;\, a,b} \theta _{\alpha \beta }^{a b}\, a^{\dagger }_{a} a^{\dagger }_{b} a_{\beta } a_{\alpha }, \qquad \dots \qquad (31) \]
The UCC ansatz usually retains only the first two terms in the expansion of \({\hat{T}}\), i.e., it neglects 3-particle and higher-order excitations. The expansion coefficients in (30), (31) can be interpreted as matrix elements of a certain excitation operator between occupied and unoccupied orbitals. They can be assumed real, i.e., \(\{\theta _{\alpha }^{a},\; \theta _{\alpha \,\beta }^{a\,b}, \ldots \} \in {\mathbb {R}}\).
The anti-Hermitian combination \({\hat{T}}-{\hat{T}}^{\dagger }\) in (29) makes the exponential operator unitary. Unitary operations are natural on quantum computers, yet the implementation into quantum circuits is not straightforward, because of the non-commutation of different parts of the Hamiltonian: the order in which the different terms are written in the exponent matters. This difficulty is bypassed by using the Trotter identity:
\[ e^{{\hat{A}}+{\hat{B}}} = \lim _{n \rightarrow \infty } \left( e^{{\hat{A}}/n}\, e^{{\hat{B}}/n} \right) ^{n} , \qquad (32) \]
where \({\hat{A}}\) and \({\hat{B}}\) are two non-commuting operators, e.g., \({\hat{A}}={\hat{T}}_1-{\hat{T}}_1^{\dagger }\) and \({\hat{B}}={\hat{T}}_2-{\hat{T}}_2^{\dagger }\). Exact in the limit \(n{\rightarrow }\infty \), it is an approximation for finite n. Different Trotter approximations of the operator (29) can be implemented on a quantum computer by transforming it to the qubit representation and using standard circuit compilation techniques for the “exponentiation” of the Pauli matrices (Cao et al. 2019). Some examples of such circuits and comparison of results obtained for different orders (n) of the Trotter approximation can be found in the work by Barkoutsos et al. (2018).
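The practical content of the Trotter identity (32), namely that a finite n gives an approximation whose error shrinks as n grows, can be illustrated with two small non-commuting anti-Hermitian generators (toy stand-ins for \({\hat{T}}_1-{\hat{T}}_1^{\dagger }\) and \({\hat{T}}_2-{\hat{T}}_2^{\dagger }\)):

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential via Taylor series (adequate for the small matrices here)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two non-commuting anti-Hermitian generators (i times Hermitian Pauli matrices)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A, B = 1j * 0.8 * X, 1j * 0.5 * Z

exact = expm_taylor(A + B)           # the unitary exp(A + B)

def trotter(n):
    """Finite-n Trotter approximation (exp(A/n) exp(B/n))^n."""
    step = expm_taylor(A / n) @ expm_taylor(B / n)
    return np.linalg.matrix_power(step, n)

errs = [np.linalg.norm(trotter(n) - exact) for n in (1, 2, 4, 8, 16)]
print(errs)
```

The printed errors shrink as n grows, in line with the first-order Trotter error bound governed by the commutator of the two generators.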
3.4 Variational Quantum Eigensolver
The variational method for the calculation of the ground-state energy, also known in physics as the Rayleigh–Ritz method, has long and widely been used in quantum chemistry—see, e.g., Levine (2014). It is an approximation method used to estimate the lowest eigenvalue (the ground-state energy) of a Hamiltonian,
\[ E(\overrightarrow{\theta }) = \frac{ \langle \Psi (\overrightarrow{\theta }) \vert H \vert \Psi (\overrightarrow{\theta }) \rangle }{ \langle \Psi (\overrightarrow{\theta }) \vert \Psi (\overrightarrow{\theta }) \rangle } \; \ge \; E_0 . \qquad (33) \]
The optimization consists in the determination of the set of parameters \(\overrightarrow{\theta }\) that minimizes the energy function \(E(\overrightarrow{\theta })\).
In the hybrid quantum-classical algorithm implemented as the variational quantum eigensolver (VQE), the quantum computer prepares the parametrized trial function \(\Psi (\overrightarrow{\theta })\), as discussed in Sect. 3.3, and evaluates the energy with respect to the system’s Hamiltonian, as discussed in Sect. 3.2. Then, a classically implemented algorithm updates the parameters \(\overrightarrow{\theta } \in {\mathbb {R}}^n\) of the quantum state using a classical optimization routine and then repeats the previous step until convergence criteria (e.g., in energy and/or iteration number) are satisfied. Any optimization method capable of performing this task can, in principle, be used. On IBM Q (Cross 2018), a few methods for this purpose are available, for instance, the simultaneous perturbation stochastic approximation algorithm (SPSA, see Bhatnagar et al. 2012), characterized by a very good performance under noise, or the Cobyla method (Powell 2007).
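The hybrid loop can be sketched entirely classically: a toy \(2\times 2\) Hamiltonian and a one-parameter trial state stand in for the qubit Hamiltonian of Sect. 3.2 and the ansatz of Sect. 3.3, with SciPy's COBYLA playing the role of the classical optimizer. This is a minimal sketch with hypothetical values, not the IBM Q implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the qubit Hamiltonian (illustrative values)
H = np.array([[1.0, 0.5],
              [0.5, -0.3]])

def ansatz(theta):
    """One-parameter trial state, playing the role of Psi(theta)."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta_vec):
    psi = ansatz(float(theta_vec[0]))
    return float(psi @ H @ psi)  # expectation value <Psi|H|Psi>

# Classical outer loop: COBYLA, one of the optimizers available on IBM Q
result = minimize(energy, x0=[0.1], method="COBYLA")

ground = np.linalg.eigvalsh(H)[0]  # exact-eigensolver benchmark
```

In the actual VQE, `energy` would be estimated from repeated measurements on the quantum device rather than computed exactly, which is why noise-robust optimizers such as SPSA are attractive.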
The VQE was introduced by Peruzzo et al. (2014) and applied since then in a number of quantum simulation/optimization tasks—see, e.g., Moll et al. (2017). The scheme of the method is depicted in Fig. 3, adapted from the latter work. A good additional discussion of this method can be found in the work by McClean et al. (2016).
3.5 Procedure summary
The principal steps can be summarized as follows.
- The effect of fermionic annihilation-creation operators, \(a_{\alpha }\) and \(a^{\dagger }_{\beta }\), on the system of one-electron states is mapped onto \((\uparrow ,\!\downarrow )\) states in a model system of \(s\!=\!\tfrac{1}{2}\) spins (via the Jordan–Wigner transformation).
- The state of each spin is represented by a qubit.
- Excitations in the multi-electron system are then represented by qubits, which interact and run through a quantum circuit.
- The circuit consists of a number of basic elements (quantum gates), arranged according to the structure of the equations to solve.
- At the beginning, each qubit is prepared according to the chosen starting configuration (i.e., the occupation of the electron orbitals).
- The output of the circuit (measurement) yields the expectation value of each qubit. It can be redirected to the input until convergence.
- The configuration emerging from this repetitive process, taken together with the (previously calculated) matrix elements, yields the physical solution (energy and wavefunction).
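The first step above, the Jordan–Wigner mapping, can be illustrated numerically: \(a_p\) is built as a string of Z matrices followed by a lowering operator, and the canonical fermionic anticommutation relations are then verified. This is an illustrative sketch (the number of orbitals is arbitrary), not the authors' code:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # lowering operator |0><1|

def kron_all(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilate(p, n):
    """Jordan-Wigner image of a_p on n qubits: Z x ... x Z x sigma^- x I x ... x I."""
    return kron_all([Z] * p + [sm] + [I2] * (n - p - 1))

n = 3  # illustrative number of spin orbitals
a = [annihilate(p, n) for p in range(n)]

# Maximum deviation from {a_p, a_q^dagger} = delta_pq over all pairs
# (the matrices are real, so the dagger is a plain transpose here)
max_dev = max(
    np.max(np.abs(a[p] @ a[q].T + a[q].T @ a[p] - (p == q) * np.eye(2 ** n)))
    for p in range(n) for q in range(n)
)
```

The Z-string is what restores the fermionic sign structure that a bare spin flip would miss.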
4 Results and discussion
4.1 Calculation details
We used the procedure outlined in the previous sections to calculate the ground-state energy (which can be straightforwardly converted into the dissociation energy) of two molecules, hydrogen (H\(_2\)) and lithium hydride (LiH), also (presumably a novel result) under the action of stationary electric fields of four different magnitudes (\({\mathbb {E}}=\) 0.0001, 0.001, 0.01, 0.1 atomic units; 1 a.u. \(\approx 5\cdot 10^{11}\) V/m). These calculations were performed for interatomic distances, d, from 0.2 to 4 Å with a step of 0.1 Å.
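As a side check (not in the original text beyond the quoted approximation), the conversion 1 a.u. \(\approx 5\cdot 10^{11}\) V/m follows from \(E_{\mathrm{a.u.}}=E_h/(e\,a_0)\) and can be reproduced from CODATA constants:

```python
from scipy.constants import e, physical_constants

hartree = physical_constants["Hartree energy"][0]  # J
bohr = physical_constants["Bohr radius"][0]        # m

# Atomic unit of electric field, in V/m (approximately 5.14e11)
field_au = hartree / (e * bohr)

# The four field magnitudes of the calculation, in SI units
fields_si = [f * field_au for f in (1e-4, 1e-3, 1e-2, 1e-1)]
```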
The computational environment where these experiments were conducted was the IBM Q, an ensemble of quantum computers and simulators able to perform quantum computation. This environment is available remotely through the internet and can be accessed and programmed using the QISKit framework, written in the Python language. The code developed for this work is available in the following GitHub repository: https://github.com/arcalab/experiments_quantum_chemistry/tree/master/Qiskit_Programmatic_version_src. It notably makes use of the QISKit and PySCF Python frameworks.
The PySCF tool was used to specify the molecules and calculate the respective one-body and two-body integrals, encompassing already the action of electric fields, using the theory developed throughout “Appendix A”. Both molecules were assumed to have zero global charge and spin zero; the STO-3G basis (12) was used to calculate the integrals.
The tasks of evaluating the corresponding integrals can then be reformulated as the assembly of quantum circuits, to be executed on the quantum computers supplied, using the software packages available in the QISKit framework: Terra, Aer, Aqua and Ignis. The calculation of the dissociation curves requires computing the ground-state energies (discussed in Sect. 3.4) over a range of distances, in order to identify the minimum (bound molecule) and the asymptotics (separated atoms). For this purpose, we used two methods: the exact eigensolver (a classical matrix method, used as a benchmark) and the VQE.
We used the UCC (discussed in Sect. 3.3) as the variational method, i.e., the technique to build the ansätze for the molecules under study, and the HF approximation to obtain the initial solution for the VQE method. In this regard, several parameters had to be considered: the maximum number of iterations with the Cobyla method,Footnote 4 the optimization level (an IBM Q-specific parameter determining the degree of optimization of the circuits generated), and the mapping method to use, such as the Jordan–Wigner (25), Bravyi–Kitaev, or parity methods [see Cao et al. (2019) for more information on these methods], each offering a different (precision)/(circuit size) trade-off. The technical parameters of the calculation, selected after a course of trial and error, are summarized in Table 2.
The quantum or hybrid (such as VQE) procedures in the IBM Q require that a backend is specified, i.e., an actual processing node able to execute the quantum circuits, which can be either a classical computer able to perform the quantum computation (simulator), with or without simulated quantum noise, or a real quantum device, with a number of qubits from 2 to 53. The results of this work were obtained using a simulator, the qasm_simulator.
4.2 Results: H\(_2\) molecule
The total energy as a function of the interatomic distance, hence the molecule’s dissociation curve for different values of the electric field, is depicted in Fig. 4. The effect of the electric field on the shape of the dissociation curve is negligible for the smaller field values inspected, yet it results in a drastic change of the \(d{\rightarrow }\infty \) asymptotic (slope) and in a noticeable shift of the equilibrium position for \({\mathbb {E}}=0.1\) a.u. The abrupt change in the slope of the E(d) dependence at large distances, for the very large electric field \({\mathbb {E}}=0.1\) a.u., can be related to the onset of the molecule’s dissociation, which becomes possible via tunneling through the energy barrier (blue curves in Fig. 4).
The inspection of the VQE results, represented by symbols connected by lines in Fig. 4, reveals a numerical noise that apparently increases with the electric field magnitude. Possibly, the HF approximation used as input for the quantum calculation becomes unstable under the action of a strong electric field.
The inset of Fig. 4 shows the Stark effect for the molecule under study, that is, the shift between the ground-state energy calculated under the action of the electric field and at \({\mathbb {E}}=0\). The distance at which the respective energies have been extracted was the energy minimum position yielded by the classical solver at \({\mathbb {E}}=0\), \(d_{eq}=0.7\) Å. We took this option because of the fluctuations of E(d) obtained with the quantum solver.
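One noise-tolerant way to extract \(d_{eq}\) from fluctuating E(d) samples, in the spirit of the procedure above, is a least-squares parabola around the minimum. The data below are synthetic (a hypothetical parabolic well at 0.7 Å with added noise), not the paper's results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy samples of E(d) around a parabolic minimum at 0.7 angstrom
d = np.arange(0.5, 0.95, 0.05)
E = 0.8 * (d - 0.7) ** 2 - 1.13 + rng.normal(scale=1e-3, size=d.size)

# Least-squares parabola; its vertex gives a noise-tolerant d_eq estimate
c2, c1, _ = np.polyfit(d, E, 2)
d_eq = -c1 / (2.0 * c2)
```

Fitting averages the noise over all samples, which is why the vertex is far more stable than the raw pointwise minimum.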
For a nonpolar molecule without intrinsic dipole moment, as is the case for H\(_2\), the stationary electronic Stark effect should be quadratic in the electric field. However, with the limited minimal basis used, it appears even weaker and becomes noticeable only for very strong fields.
4.3 Results: LiH molecule
The results for the lithium hydride molecule are shown in Fig. 5, where the effect of the applied electric field is quite noticeable. The displacement of the E(d) curve increases with the electric field: already for 0.01 a.u. the shift of the dissociation curve becomes appreciable. The Stark effect (inset in Fig. 5) increases with the field magnitude much faster than for the H\(_2\) molecule. This is because of the intrinsic dipole moment the LiH molecule already possesses in the ground state. The Stark effect is linear in \({\mathbb {E}}\) for small fields but then starts growing much faster because of the additional polarization of the ground state induced by the external field.
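The contrast between the two molecules can be caricatured by a two-level model with hypothetical parameters (purely illustrative, not the molecular Hamiltonians of this work): a diagonal dipole term produces a linear Stark shift, while an off-diagonal coupling alone produces a quadratic one:

```python
import numpy as np

def ground_energy(field, mu_diag, mu_off, gap):
    """Ground state of a two-level system in a static field (length gauge)."""
    H = np.array([[-mu_diag * field, -mu_off * field],
                  [-mu_off * field, gap]])
    return np.linalg.eigvalsh(H)[0]

fields = [1e-4, 1e-3]

# No permanent dipole (H2-like): shift is quadratic, ~ -mu_off^2 F^2 / gap
shift_nonpolar = [ground_energy(f, 0.0, 1.0, 1.0) for f in fields]
# Permanent dipole (LiH-like): shift is linear for small fields, ~ -mu_diag F
shift_polar = [ground_energy(f, 1.0, 1.0, 1.0) for f in fields]
```

Increasing the field tenfold multiplies the nonpolar shift by about 100 but the polar shift only by about 10, mirroring the qualitative difference between Figs. 4 and 5.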
Similar to the case of the H\(_2\) molecule, numerical noise is visible in the results and becomes more pronounced in stronger electric fields. Also, the quantum and classical solvers yield slightly different values of the equilibrium distance, \(d_{eq}\): 1.5 Å and 1.6 Å, respectively, at \({\mathbb {E}} =0\). Again, the latter was taken as the reference value for the Stark effect evaluation.
5 Conclusions
We attempted to outline, in a concise way yet indicating the essential elements and the underlying theory, a representative practical resolution of a simple quantum chemistry problem on a quantum computer. Special attention has been paid to the connection between the fermionic Hamiltonians and the quantum circuits, as well as to the state preparation, the running of the algorithm and the evaluation of the results. An interested reader may find more details and discussions in the excellent recent review by Cao et al. (2019). In practical terms, we programmed and executed the calculation of the ground-state energies of two molecules (H\(_2\) and LiH) on the recently commercialized quantum computer IBM Q, of which we used the quantum device simulator.
The calculated results comprise the total energy as a function of bond length (i.e., the dissociation curve), also under an applied stationary electric field. We also evaluated the shift of the molecule’s energy at a fixed d (equal to the equilibrium interatomic distance) with the electric field, i.e., the stationary electronic Stark effect, which is supposedly quadratic in \({\mathbb {E}}\) and small for the nonpolar H\(_2\) molecule but contains a linear term and is much stronger in the case of the polar LiH molecule. The quantum calculations were characterized by a considerable numerical noise, the magnitude of which increases with the strength of the electric field. The nature of these instabilities is still under inspection. In total, our case study seems to provide evidence for the feasibility of using this quantum computer for small molecules, with a reasonable number of iterations performed. Thus, the current quantum computation and simulation technology, even though still far from being able to address large molecules in order to answer relevant questions in chemistry and biology, is already able to provide physically meaningful results for small systems, constituting an important milestone for further work.
Notes
For interacting or entangled systems, the total WF cannot be written as a product of those of its parts. Entangled parts of a system, even if they do not interact physically, may not be described by a wave function; they can only be represented by a density matrix. Entanglement is out of the scope of this article; the interested reader may refer to an appropriate textbook, e.g., that of Schumacher and Westmoreland (2010).
The STO includes a simple power function of the radius instead of a polynomial, and hence does not possess radial nodes.
In this quantum computation setting, an iteration of the Cobyla method is an expensive operation in terms of computation time, and therefore one may wish to limit the number of iterations. However, the method stops once convergence is verified, and in our particular case it always converged within 15,000 iterations.
References
Abbott D, Davies PCW, Pati A (2008) Quantum aspects of life. World Scientific, Singapore
Aspuru-Guzik A, Dutoi A, Love PJ, Head-Gordon M (2005) Simulated quantum computation of molecular energies. Science 309(5741):1704–1707
Barkoutsos P, Gonthier JF, Sokolov I, Moll N, Salis G, Fuhrer A, Ganzhorn M, Egger DJ, Troyer M, Mezzacapo A et al (2018) Quantum algorithms for electronic structure calculations: particle-hole Hamiltonian and optimized wave-function expansions. Phys Rev A 98(2):022322
Bhatnagar S, Prasad HL, Prashanth LA (2012) Stochastic recursive algorithms for optimization: simultaneous perturbation methods, vol 434. Springer, Berlin
Cao Y, Romero J, Olson JP, Degroote M, Johnson PD, Kieferová M, Kivlichan ID, Menke T, Peropadre B, Sawaya NPD, Sim S, Veis L, Aspuru-Guzik A (2019) Quantum chemistry in the age of quantum computing. Chem Rev 119(19):10856–10915
Colless JI, Ramasesh VV, Dahlen D, Blok MS, Kimchi-Schwartz ME, McClean JR, Carter J, de Jong WA, Siddiqi I (2018) Computation of molecular spectra on a quantum processor with an error-resilient algorithm. Phys Rev X 8:011021
Cross A (2018) The IBM Q experience and QISKit open-source quantum computing software. APS 2018:L58–003
Dirac P (1981) The principles of quantum mechanics, vol 27. Oxford University Press, Oxford
Feynman RP (1982) Simulating physics with computers. Int J Theor Phys 21(6–7):467–488
Georgescu IM, Ashhab S, Nori F (2014) Quantum simulation. Rev Modern Phys 86(1):153
Gurav ND, Gejji SP, Pathak RK (2018) Electronic Stark effect for a single molecule: theoretical UV response. Comput Theor Chem 1138:23
Harrow AW, Hassidim A, Lloyd S (2009) Quantum algorithm for linear systems of equations. Phys Rev Lett 103(15):150502
Hehre WJ, Stewart RF, Pople JA (1969) Self-consistent molecular-orbital methods. I use of gaussian expansions of Slater-type atomic orbitals. J Chem Phys 51(6):2657
Jeziorski B, Monkhorst HJ (1981) Coupled-cluster method for multideterminantal reference states. Phys Rev A 24(4):1668
Lanyon BP, Whitfield JD, Gillett GG, Goggin ME, Almeida MP, Kassal I, Biamonte JD, Mohseni M, Powell BJ, Barbieri M et al (2010) Towards quantum chemistry on a quantum computer. Nat Chem 2(2):106
Levine IN (2014) Quantum chemistry. Pearson advanced chemistry series, Pearson. https://books.google.pt/books?id=ht6jMQEACAAJ
Lidar DA, Wang H (1999) Calculating the thermal rate constant with exponential speedup on a quantum computer. Phys Rev E 59(2):2429
Lloyd S (1996) Universal Quantum Simulators. Science 273(5278):1073–1078
Luis A, Peřina J (1996) Optimum phase-shift estimation and the quantum description of the phase difference. Phys Rev A 54(5):4564
McClean J, Romero J, Babbush R, Aspuru-Guzik A (2016) The theory of variational hybrid quantum-classical algorithms. New J Phys 18(2):023023
Moll N, Barkoutsos P, Bishop LS, Chow JM, Cross A, Egger DJ, Filipp S, Fuhrer A, Gambetta JM, Ganzhorn M et al (2017) Quantum optimization using variational algorithms on near-term quantum devices. arXiv:1710.01022
Muller R (2017) Pyquante-python quantum chemistry. URL http://pyquante.sourceforge.net
Nielsen MA, Chuang IL (2010) Quantum computation and quantum information. Cambridge University Press, Cambridge
Paesani S, Gentile AA, Santagati R, Wang J, Wiebe N, Tew DP, O’Brien JL, Thompson MG (2017) Experimental Bayesian quantum phase estimation on a silicon photonic chip. Phys Rev Lett 118(10):100503
Peruzzo A, McClean J, Shadbolt P, Yung M, Zhou X, Love PJ, Aspuru-Guzik A, O’Brien J (2014) A variational eigenvalue solver on a photonic quantum processor. Nat Commun 5:4213
Powell M (2007) A view of algorithms for optimization without derivatives. Math Today-Bull Institute of Math Appl 43(5):170–174
Rubin B (2020) Hartree-Fock on a superconducting qubit quantum computer. Science 369:1084–1089
Saunders S, Barrett J, Kent A, Wallace D (2010) Many worlds?: Everett, quantum theory, & reality. Oxford University Press, Oxford
Schumacher B, Westmoreland MD (2010) Quantum processes, systems, and information. Cambridge University Press, Cambridge
Sim S, Romero J, Johnson PD, Aspuru-Guzik A (2018) Quantum computer simulates excited states of molecule. Physics 11(2):14
Sun Q, Berkelbach TC, Blunt NS, Booth GH, Guo S, Li Z, Liu J, McClain JD, Sayfutyarova ER, Sharma S et al (2018) PySCF: the Python-based simulations of chemistry framework. Wiley Interdisciplin Rev Comput Molecular Sci 8(1):e1340
Szabo A, Ostlund NS (2012) Modern quantum chemistry: introduction to advanced electronic structure theory. Courier Corporation, North Chelmsford
Vuckovic S, Wagner LO, Mirtschink A, Gori-Giorgi P (2015) Hydrogen molecule dissociation curve with functionals based on the strictly correlated regime. J Chem Theory Computat 11:3153
Wang B, Tao M, Ai Q, Xin T, Lambert N, Ruan D, Cheng Y, Nori F, Deng F, Long G (2018) Efficient quantum simulation of photosynthetic light harvesting. NPJ Quantum Inf 4:52
Whitfield JD, Biamonte J, Aspuru-Guzik A (2011) Simulation of electronic structure Hamiltonians using quantum computers. Molecular Phys 109(5):735–750
Acknowledgements
The authors wish to thank Luís Barbosa for helpful discussions and for his suggestions during the course of this work, as well as the students of Physics Engineering at the University of Minho—Carolina Alves, Daniel Carvalho, Michael de Oliveira and Paulo Ribeiro—for their helpful contributions at the preliminary stage of this work. Carlos Tavares was funded by the FCT–Fundação para a Ciência e Tecnologia (FCT) by the grant SFRH/BD/116367/2016, funded under the POCH programme and MCTES national funds. This work was also funded by the project “SmartEGOV: Harnessing EGOV for Smart Governance (Foundations, Methods, Tools)/NORTE-01-0145-FEDER-000037,” supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (EFDR). Funding from the FCT in the framework of the Strategic Funding UID/FIS/04650/2019 is also gratefully acknowledged.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Communicated by Tomas Veloz.
Calculation of the matrix elements
1.1 STO-LG wavefunctions
The STO-3G-type combinations of Gaussian functions are used to calculate the matrix elements of the various electronic interactions in the molecules under study. As the minimal basis of the H\(_2\) molecule includes s-type orbitals only, whereas that for LiH comprises both s- and p-type orbitals, by covering the latter molecule throughout we retain the possibility of falling back to the H\(_2\) case by removing the factor of 3 (the Li nuclear charge) in those matrix elements where it appears explicitly (namely, in Table 4). Also, the parameters of the STO-3G functions have to be chosen accordingly (see Table 3).
The minimal basis will include the following atomic orbitals: 1s for H; 1s, 2s and \(2p_z\) for Li. All of them will be approximated by the STO-3G-type combinations of the following Gaussian functions (Szabo and Ostlund 2012):
Here \(\zeta \) is a parameter appearing in the Slater-type orbitals (\(\zeta =1.24\) for H and \(\zeta =2.69\) as the “recommended” value for Li1s); the coefficients \(d_{i}\) and \(\alpha _{i}\) are fitted parameters and g are the normalized Gaussian functions:
The fitted Gaussian exponents and the corresponding coefficients \(d_{i}\) depend on the parameter \(\zeta \) in the Slater orbital, also called the “scaling factor,” which is different for each atomic shell (e.g., for the 2s and 2p states of Li the recommended value is \(\zeta =0.75\)). The exponents for \(\zeta =1\) are given in Table 3.7 of Szabo and Ostlund (2012); for \(\zeta \ne 1\) they scale as \(\alpha (\zeta )=\alpha (1)\cdot \zeta ^{2}\), whereby the coefficients d are the same for each type of state in different atoms—e.g., 1s (H) and 1s (Li)—although the \(\alpha \)’s are different. The parameters used by us are compiled in Table 3.
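The scaling rule can be verified numerically with the \(\zeta =1\) 1s contraction data of Szabo and Ostlund (Table 3.7): the contracted orbital is normalized to \(\approx 1\), and the norm is exactly invariant under \(\alpha \rightarrow \alpha \zeta ^{2}\) because the primitive overlaps depend only on exponent ratios. A small sketch:

```python
import numpy as np

# STO-3G 1s contraction for zeta = 1 (Szabo & Ostlund, Table 3.7)
alpha = np.array([0.109818, 0.405771, 2.227660])
d = np.array([0.444635, 0.535328, 0.154329])

def contracted_norm(alpha, d):
    """<phi|phi> for phi = sum_i d_i g_1s(alpha_i), with normalized primitives."""
    ai, aj = np.meshgrid(alpha, alpha)
    S = (2.0 * np.sqrt(ai * aj) / (ai + aj)) ** 1.5  # primitive s-s overlaps
    return float(d @ S @ d)

zeta = 1.24  # hydrogen scaling factor used in the text
n_unscaled = contracted_norm(alpha, d)
n_scaled = contracted_norm(alpha * zeta ** 2, d)  # alpha(zeta) = alpha(1)*zeta^2
```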
1.2 One-electron matrix elements
We shall use spherical coordinates with the origin at the Li atom, as shown in Fig. 6. From now on, the Li atom will be denoted “B” and the H atom will be “A,” and, according to the previous section, we shall consider the matrix elements between the following three functions:
1.2.1 Nuclear Potential Energy Matrix Elements
To calculate the nuclear potential energy matrix elements, one needs to calculate the following integrals:
These integrals are the same as for the H\(_2\) molecule, so we can use the result of Equation (A33) from Szabo and Ostlund (2012):
where \(F_o(x)\) is expressed via the error function, \(F_{o}(x)=\sqrt{\dfrac{\pi }{4x}}\text{ erf }\bigl (\sqrt{x}\bigr )\). The matrix elements involving the p-orbital are:
where \(f_{1}(\mathbf {r}) = e^{-\alpha (\mathbf {r}-\mathbf {d})^{2}}\) and \(f_{2}(\mathbf {r})=\cos {\theta } e^{-\beta r^{2}}\). It is convenient to use the Fourier transform of these functions:
For \(f_2(\mathbf {k})\) we need to express \(\cos {\theta }\) in terms of \(\cos {\gamma }\), since \(\mathbf {k}\cdot \mathbf {r}=kr\cos {\gamma }\). The vectors \(\mathbf {k}\), \(\mathbf {e_{z}}\) and \(\mathbf {r}\), in general, do not lie in the same plane, so we need to consider the spherical triangle shown in Fig. 7. We can use the following formula relating the angles \(\theta \), \(\theta _{k}\) and \(\gamma \):
Using (47), we obtain:
[notice that the integration over \(\phi \) eliminated the second term in (47)]. The integral with respect to \(\cos {\gamma }\) yields:
where \(j_{i}(x)\) is the spherical Bessel function. Then,
whereby
in which \(\text{ erfi }(t) = -i\,\text{ erf }(it)\) is the imaginary error function; \(F_D\) is called the Dawson function. Then
The angular part of the integral in (51) is:
and we have:
Another integral of this type, describing the interaction of the electrons with the H atom, is:
where \(f_{1}(\mathbf {r})=r\cos {\theta } e^{-\beta r^{2}} \) and \(f_{2}(\mathbf {r})=\dfrac{1}{|\mathbf {r}-\mathbf {d}|}e^{-\alpha (\mathbf {r}-\mathbf {d})^{2}}\). The Fourier transforms of these functions are:
With this,
where \(b'=\dfrac{1}{4\beta d^{2}}\) and \(a'=\dfrac{1}{2\sqrt{\alpha }d}\), and \(F_{D}\) is the Dawson function (50). Note that the dimensions of the normalization constants are \([c_{\alpha }^{(s)}]=L^{-\tfrac{3}{2}}\), \([c_{\beta }^{(p)}]=L^{-\tfrac{5}{2}}\), while \([\alpha ]=[\beta ]=L^{-2}\); thus, the overall dimension of (57) is \(L^{-1}\), as it should be. The integral in (57) could not be evaluated analytically, so it has to be calculated numerically.
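Since several of the matrix elements above reduce to the Dawson function, a numerical cross-check of its definition \(F_D(t)=e^{-t^{2}}\int _0^t e^{s^{2}}\,ds\) against SciPy's implementation may be useful (an illustrative sketch, with arbitrarily chosen test points):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def dawson_quad(t):
    """F_D(t) = exp(-t^2) * integral_0^t exp(s^2) ds, by direct quadrature."""
    val, _ = quad(lambda s: np.exp(s ** 2), 0.0, t)
    return np.exp(-t ** 2) * val

# Compare the quadrature against scipy.special.dawsn at a few points
deviation = max(abs(dawsn(t) - dawson_quad(t)) for t in (0.3, 1.0, 2.5))
```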
We still need matrix elements of \(r^{-1}\) diagonal in atomic index, which are as follows:
Here, we use the following expansion:
where \(x=\dfrac{r}{d}\); since \(\cos {\theta }=P_{1}(\cos {\theta })\) (\(P_{l}\) are the Legendre polynomials), the angular integration in (62) eliminates all the terms in the sum over l except \(l=1\). Therefore, we have:
Finally, the last integral of this type is:
Again, we use the formula (63) and the relation
Using (66), the angular integration in (65) yields:
The result is:
1.3 Kinetic energy matrix elements
The calculation of the kinetic energy matrix elements involves the following integrals:
where \(f_{2}(\mathbf {r}) = e^{-\alpha (\mathbf {r}-\mathbf {d})^{2}}\) and \(f_{1}(\mathbf {r}) = \nabla ^{2} e^{-\beta r^{2}}\). Fourier transforms of these functions are:
Then,
Denoting \(x=kd\), we have the integral \(\int _{0}^{\infty } \sin {x}\; e^{-bx^{2}}x^{3}\, dx\), where \(b=\dfrac{\alpha +\beta }{4\alpha \beta d^{2}}\). The result of the integration reads:
A similar integral, involving the s and p states, is:
The integral is calculated with the help of Mathematica, with the result:
where \(b=\dfrac{\alpha +\beta }{4\alpha \beta d^{2}}\) and \(F_{D}\) is the Dawson’s function (50). The matrix elements diagonal in atomic index are as follows:
1.3.1 Summary of the one-electron Hamiltonian (for zero external field)
The one-electron Hamiltonian in the absence of external electric field is as follows:
For convenience, the necessary integrals are presented in Table 4, and Table 5 indicates the reference of the corresponding equation.
1.4 Matrix elements of the interaction with external electric field
We shall consider the field parallel to the z axis, so the interaction Hamiltonian reads:
We shall keep the same notation as for the kinetic energy matrix elements just changing \(K\rightarrow J\). First, we have:
because the diagonal matrix elements for any atom vanish for non-degenerate atomic states and \(J_{aa}\) is compensated by the energy of the proton at point \(\mathbf {d}\) (see Fig. 6). For the matrix element between the s and p-orbitals of the Li atom, we have:
The matrix elements \(J_{ab}^{(s)}\) are the same as for H\(_{2}\):
We use the transformation:
where \(p=\alpha +\beta \) and \(\mathbf {R}_P=\dfrac{1}{p}\left( \alpha \mathbf {R}_A+\beta \mathbf {R}_B\right) =\dfrac{\alpha }{p}\mathbf {d}\). Then,
Thus, we have:
Obviously, \( J_{ab}^{(s)}=J_{ba}^{(s)}\). Now we shall calculate
where
The Fourier transform of \(f_{1}(\mathbf {r})\) is:
where we made use of (47). The term linear in \(\sin {(\phi -\phi _{k})}\) vanishes after integration over \(\phi \), while \(\int _{0}^{2\pi }\sin ^{2}{(\phi -\phi _{k})}\,d\phi =\pi \). Therefore,
and
The integral (84) is given by
In (87), the following angular integrals come about:
and
In (88) and (89), \(j_{l}(c)\) are the spherical Bessel functions and Z(x) is just a short-hand notation. With this, Eq. (87) reduces to:
where \(b=\dfrac{\alpha +\beta }{4\alpha \beta d^{2}}\). The calculation of the integral in (90) yields:
where \(a=2\beta d^{2}\).
1.4.1 Summary of the perturbation operator
The matrix elements of the perturbation operator due to the external electric field, \(H_{S}\), are summarized in Table 6, and the corresponding equations are referred to in Table 7. Notice that the proton energy (\(-Ed\)) has been added to compensate \(J_{aa}\); in the appropriate relations, \(\alpha _{i,1s}\) and \(\alpha _{i,2sp}\) are to be substituted for \(\alpha \) and \(\beta \), respectively, with \(\alpha _{i,1s}'\) used for Li.
1.5 Two-electron matrix elements
Matrix elements of the electron–electron interaction, \(r_{12}^{-1}=|\mathbf {r_{1}}-\mathbf {r_{2}}|^{-1}\), in the “chemist’s notation,” are written in round brackets (Szabo and Ostlund 2012):
which is different from the physicist’s notation for the same thing, \(\langle ik|r_{12}^{-1}|jl\rangle \), which uses angular brackets and different order of orbitals. Here, \(\psi _{i}\) denotes a molecular spatial orbital constructed as a linear combination of atomic orbitals, i.e., in our case
The HF energy includes the so called Coulomb and exchange integrals:
Since \(|i\rangle \) and \(|j\rangle \) are linear combinations of the \(g_{1s}(\mathbf {r}-\mathbf {d})\), \(g_{1s}(\mathbf {r})\) and \(g_{2p}(\mathbf {r})\) functions with different coefficients in the exponent, several kinds of integrals occur in (92) and (93), namely: (i) four kinds of one-center integrals; (ii) four kinds of two-center integrals. We proceed by elaborating on the first type, the one-center integrals (i).
The same expression applies to \(D_{bb}^{(ss)}(\alpha ,\beta ,\gamma ,\delta )\).
where we used the Fourier transform result (85). The calculation of such integrals finally yields:
In the calculation of exchange-type integrals,
where we used the Fourier transform (55). The calculation of the integral finally yields:
For the Coulomb-type integrals,
Passing now to the discussion of two-center integrals, we begin with the exchange-type ones, involving the s functions on both centers:
where
Following Szabo and Ostlund (2012, “Appendix A”), we first express the products of Gaussian functions occurring in \(f_{1}(\mathbf {r}_1)\) and \(f_{3}(\mathbf {r}_2)\) as other Gaussians. Normalization constants are ignored at this step; they will be reintroduced in the final results. The integral in (98) becomes:
where
Now we can use Fourier transform for each factor in the integral (99):
The integrals over \(\mathbf {r}_1\) and \(\mathbf {r}_2\) introduce two \(\delta \)-functions of \(\mathbf {k}\) and remove two integrations over different \(\mathbf {k}\)-vectors that appear after substituting the Fourier integrals into (99), so we obtain:
where \(R_{z}=|R_{p}-R_{q}|\).
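The reduction above rests on the Gaussian product rule: two Gaussians centered at \(\mathbf {d}\) and the origin combine into a single Gaussian at \(\mathbf {R}_P=(\alpha /p)\mathbf {d}\), \(p=\alpha +\beta \), times the prefactor \(\exp \bigl (-\alpha \beta d^{2}/p\bigr )\) appearing in (100). A one-dimensional numerical check with illustrative exponents (not values from the paper):

```python
import numpy as np

alpha, beta, dist = 0.8, 1.3, 1.4  # illustrative exponents and separation (a.u.)
p = alpha + beta
R_p = alpha * dist / p                      # weighted center of the product
M = np.exp(-alpha * beta * dist ** 2 / p)   # prefactor, cf. Eq. (100)

x = np.linspace(-3.0, 4.0, 201)
lhs = np.exp(-alpha * (x - dist) ** 2) * np.exp(-beta * x ** 2)
rhs = M * np.exp(-p * (x - R_p) ** 2)
deviation = float(np.max(np.abs(lhs - rhs)))
```

Because the rule holds per Cartesian component, the same one-dimensional identity is all that is needed for the three-dimensional two-center integrals.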
The two-center s-s Coulomb-type integrals read:
We can use here the previous result with \(\mathbf {R}_p=\mathbf {d}\), \(\mathbf {R}_q=0\) and \(M\rightarrow \exp {\left( -\dfrac{\alpha \beta }{\alpha +\beta }d^{2} \right) }\). Explicitly, we have:
The two-center exchange-type integrals involving s and p-functions are:
where:
Now we shall use Fourier transform in the integral (105):
As before,
Substituting this into (105),
where \(I_1\), \(I_2\), \(I_3\) are given by the following expressions:
with \(R_{z}=|R_{p}-R_{q}|\) and \(s=\dfrac{p+q}{4pqR_{z}^{2}}\);
Thus, \(D_{ab}^{(spE)}\) is given by (109) where M is given by (100), \(p=\alpha +\beta \), \(q=\gamma +\delta \); \(I_{1}\), \(I_{2}\) and \(I_{3}\) are given by Eqs (110)–(112), \(R_{z}=|R_{p}-R_{q}|\), \(R_{p}=\dfrac{\alpha }{p}d\), \(R_{q}=\dfrac{\gamma }{q}d\) and \(s=\dfrac{p+q}{4pqR_{z}^{2}}\).
Finally, the evaluation of the Coulomb-type integrals between s and p functions at different sites proceeds as follows:
where
The Fourier transforms of these functions are:
as it has been calculated before, Eq. (85). Therefore,
with
where \(\; s = \dfrac{p+q}{4pqd^{2}}. \;\) Finally, we obtain:
The equations according to which the matrix elements summarized in Table 8 are calculated are given in Table 9.
Tavares, C., Oliveira, S., Fernandes, V. et al. Quantum simulation of the ground-state Stark effect in small molecules: a case study using IBM Q. Soft Comput 25, 6807–6830 (2021). https://doi.org/10.1007/s00500-020-05492-5