1 Preface

This paper presents an introduction to and a survey of time-dependent density functional theory (TDDFT). The purpose of the paper is to explain in a nutshell what TDDFT is and what it can do. We will discuss the basics of the formal framework of TDDFT as well as the current state of the art, skipping over details of the proofs, and highlight some of the most important applications. Readers who would like a more detailed treatment and more literature references are encouraged to consult recent books [1, 2] and review articles [3–5].

TDDFT is a theoretical approach to the dynamical quantum many-body problem; it can be used to describe quantum systems that are not stationary. As a consequence, TDDFT provides formally exact and practically convenient methods to calculate electronic excitation energies. By contrast, density functional theory (DFT) is a ground-state theory: in other words, it is used to find the ground state of a quantum system and calculate related quantities of interest, such as the ground-state energy. In many, if not most, situations of practical interest, we have to determine the ground state of the system before we can study its dynamics or calculate its excitations.

The beginnings of ground-state DFT date back to the years 1964/1965, when the famous papers by Hohenberg and Kohn [6] and Kohn and Sham [7] were published. Since then, DFT has developed into a dominant method for electronic structure calculations in physics, chemistry, materials science, and many other areas (see Ref. [8] for an up-to-date account of DFT). Although TDDFT is of much more recent origin [9], it has by now reached a similar status for calculating electronic excitations.

TDDFT uses many familiar concepts from DFT, most prominently, the Kohn–Sham idea of replacing the real interacting many-body system by a noninteracting system that reproduces the same density. But there are also many concepts that are unique to the time-dependent case, such as memory and initial-state dependence. To gain a thorough understanding of TDDFT, it is hence advisable to begin with a study of the basic concepts of DFT. We refer the reader to the very nice introductions to DFT by Capelle [10] and by Burke and Wagner [11]. There exist a number of books on DFT, some of which are very accessible to newcomers in the field [12, 13], others are more advanced [14].

2 Ground-State DFT in a Nutshell

2.1 The Many-Body Problem

We consider a system of N interacting electrons that is described by the nonrelativistic Schrödinger equation:

$$ \hat H \, \Psi_{j}(\mathbf{r}_{1},\ldots,\mathbf{r}_{N}) = E_{j} \Psi_{j}(\mathbf{r}_{1},\ldots,\mathbf{r}_{N}) \:,\quad j=0,1,2,\ldots \:. $$
(1)

For a given D-dimensional system, the jth eigenstate Ψj(r1,…,rN) is a function of DN spatial variables. In the following, we use the abbreviation Ψj. Of particular interest to us is the ground-state wave function Ψgs.

The many-body Hamiltonian is given by

$$ \hat H = \hat T + \hat V + \hat W \:, $$
(2)

where the kinetic energy and scalar potential operators are

$$ \hat T = \sum\limits_{j=1}^{N} -\frac{\nabla_{j}^{2}}{2} \:, \qquad \quad \hat V = \sum\limits_{j=1}^{N} v(\mathbf{r}_{j}) \:, $$
(3)

and the electron–electron interaction operator is

$$ \hat W = \frac{1}{2}\sum\limits_{i,j=1 \atop i\ne j}^{N} w(|\mathbf{r}_{i} - \mathbf{r}_{j}|) \:. $$
(4)

Notice that we use atomic (Hartree) units throughout, i.e., m = e = ℏ = 1. The electron–electron interaction is usually taken to be the Coulomb interaction, w(|r − r′|) = 1/|r − r′|, but other forms of two-particle interactions, or zero interactions, are also possible.

The single-particle potential v(r) describes the total potential acting on the electrons. If one is interested in describing the properties of matter (atoms, molecules, or solids), v(r) is the sum of the Coulomb potentials of the atomic nuclei. However, to define the formal framework of DFT, it is not necessary to specify where the potential comes from, as long as it has a mathematically well-behaved form.

From the solutions of the Schrödinger equation, we calculate the expectation value of an observable in the jth eigenstate:

$$ O_{j} = \langle \Psi_{j} | \hat{O} | \Psi_{j}\rangle . $$
(5)

Here, \(\hat {O}\) is a Hermitian operator corresponding to a quantum mechanical observable.

Let us make two remarks on our formulation of the many-body problem:

  1. We implicitly made the Born–Oppenheimer approximation (see Section 7.3). In other words, if our system contains nuclear degrees of freedom (as is the case in all forms of real matter), we treat them classically. The many-body wave functions therefore depend only on the electronic coordinates (r1,…,rN), and the nuclei only act as sources of scalar potentials. In Section 8, we will briefly discuss what happens if this approximation is not made and the electronic and nuclear degrees of freedom are coupled.

  2. We have not indicated any spin indices, mainly for notational simplicity. In other words, Ψj(r1,…,rN) describes spinless electrons. Including spin, the many-body wave function can be written as Ψ(x1,…,xN), where xi = (ri, σi) denotes the spatial and spin coordinates of the ith electron.

2.2 The Basic Idea Behind DFT

Everything we wish to know about our system (energy, geometry, excitation spectrum, etc.) can be obtained from the wave functions, see (5). The exact wave functions can be calculated if the system is small, with no more than one or two electrons, but this becomes very difficult if N is greater than that: the many-body Schrödinger equation becomes too hard to solve, and the usefulness of the wave function itself becomes more and more questionable for large N [15].

There exist many approaches to find approximate solutions of the many-body problem. So-called “wave-function-based techniques” such as Hartree–Fock (HF) or configuration interaction (CI) attempt to find variational solutions of the Schrödinger equation using expansions of the wave function in terms of Slater determinants. This approach has been very successful in theoretical chemistry, but has its limitations for large systems.

The essence of the density functional approach is that it is in principle possible to obtain all desired information about an N-electron system without having to calculate its full wave function: instead, all one needs is the one-particle probability density of the ground state,

$$ n_{0}(\mathbf{r}) = N\int d^{3}r_{2}\ldots \int d^{3}r_{N} |\Psi_{\rm gs}(\mathbf{r},\mathbf{r}_{2},\ldots,\mathbf{r}_{N})|^{2} \:. $$
(6)

This can be mathematically proven (see below), but before doing so, it is helpful to give a simple illustration. Consider a one-electron system which satisfies the Schrödinger equation

$$ \left[-\frac{\nabla^{2}}{2} + v(\mathbf{r})\right] \varphi_{j}(\mathbf{r}) = \varepsilon_{j} \varphi_{j}(\mathbf{r}) \:. $$
(7)

The usual procedure is to solve this equation for a given potential v(r) and determine the ground-state probability density as n0(r) = |φ0(r)|². But now, imagine the reverse situation: we are given a density function n0(r), normalized to 1, and we ask in which potential this is the ground-state density. Assuming that the wave function is real, so \(\varphi _{0}(\mathbf {r})=\sqrt {n_{0}(\mathbf {r})}\), the Schrödinger equation (7) is easily inverted (choosing the eigenvalue ε0 = 0, since the inversion fixes the potential only up to this constant), and we obtain

$$ v(\mathbf{r}) = \frac{\nabla^{2} n_{0}(\mathbf{r})}{4n_{0}(\mathbf{r})} - \frac{|\nabla n_{0}(\mathbf{r})|^{2}}{8n_{0}(\mathbf{r})^{2}} \:. $$
(8)

A one-dimensional example is given in Fig. 1.

Fig. 1

The density n0(x) = cos²(πx) + cos²(3πx) (dashed line, scaled by a factor 60), for \(-\frac {1}{2}<x<\frac {1}{2}\), is the ground-state density of the potential v(x) (full line), which was constructed using (8)
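The reconstruction (8) is easy to try out numerically. The following sketch builds v(x) from the density of Fig. 1 by finite differences; the grid, the consistency check, and the choice ε0 = 0 (the inversion fixes v only up to this constant) are our own illustrative assumptions, not from the text.

```python
import numpy as np

# Reconstruct v(x) from a one-electron ground-state density via (8):
#   v = n''/(4n) - (n')^2/(8 n^2),   taking the eigenvalue to be zero.
x = np.linspace(-0.499, 0.499, 2001)
dx = x[1] - x[0]
n0 = np.cos(np.pi * x)**2 + np.cos(3 * np.pi * x)**2  # density of Fig. 1 (unnormalized)

dn = np.gradient(n0, dx)        # n'(x)
d2n = np.gradient(dn, dx)       # n''(x)
v = d2n / (4 * n0) - dn**2 / (8 * n0**2)

# Consistency check: phi = sqrt(n0) should satisfy -phi''/2 + v*phi = 0
phi = np.sqrt(n0)
residual = -np.gradient(np.gradient(phi, dx), dx) / 2 + v * phi
print(np.max(np.abs(residual[100:-100])))  # small away from the endpoints
```

The residual is small in the interior; near x = ±1/2 the density goes to zero and the reconstructed potential develops the steep walls visible in Fig. 1.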

What has been accomplished? From the ground-state density n0(r), we were able to reconstruct the potential v(r). But this means that we have reconstructed the Hamiltonian \(\hat H\) of the system, and we can thus solve the Schrödinger equation (7) and get all the wave functions! This logical chain can be represented as follows:

$$ n_{0}(\mathbf{r}) \to v(\mathbf{r}) \to \hat H \to \{\Psi_{j}\}. $$
(9)

The reconstruction of the potential from the density is easy for one-electron systems. For interacting systems with many electrons, there is no explicit formula such as (8). Nevertheless, there exists a unique potential for each mathematically well-behaved density function such that it is the ground-state density in this potential. This was proved by Hohenberg and Kohn in [6].

The Hohenberg–Kohn theorem states that it is impossible for two different potentials, v(r) and v′ (r), to produce the same ground-state density (v′ is considered to be different from v if it is not just v shifted by a constant). In other words, the relationship between potentials and ground-state densities is one-to-one:

$$ n_{0}(\mathbf{r}) \leftrightarrow v(\mathbf{r}) \:. $$
(10)

The proof of this theorem is relatively straightforward, making use of the Rayleigh–Ritz minimum principle. It can be found in any textbook on DFT, so we won’t repeat it here.

Formally, this logical dependence of the wave functions on the ground-state density constitutes a functional relationship, which is written as Ψj[n0]. Hence the name density functional theory. Every quantum mechanical observable can thus be written as a density functional.

In particular, the total energy functional of a system with potential v0(r) is

$$ \label {II.6} E_{v_{0}}[n]=\langle \Psi[n]|\hat T + \hat V_{0} + \hat W | \Psi[n]\rangle \:, $$
(11)

where n is some N-electron density and Ψ[n] is that ground-state wave function which reproduces this density. The energy functional (11) is minimized by the ground-state density n0 which belongs to v0, where it becomes equal to the ground-state energy:

$$\begin{array}{l} E_{v_{0}}[n] > E_{0} \quad \mbox{for} \quad n(\mathbf{r}) \ne n_{0}(\mathbf{r})\:, \\ E_{v_{0}}[n] = E_{0} \quad \mbox{for} \quad n(\mathbf{r}) = n_{0}(\mathbf{r}) \:. \end{array} $$
(12)

2.3 The Kohn–Sham Approach

The fact that all observables are functionals of the density opens up the way for an enormous computational simplification, since the density is a function of only D variables (and not of DN variables as the wave function). But how can we take advantage of this in practice? To obtain the density, one still needs to solve the full many-body problem. This means that nothing has been gained, unless we find a way to bypass the full Schrödinger equation and obtain the density in some other, easier way, at least approximately. Fortunately, a very elegant method exists to do this, known as the Kohn–Sham formalism [7].

We use the following trick: we define a noninteracting system in such a way that it reproduces the exact ground-state density of the interacting system. This means that we can calculate the exact density as the sum of squares of single-particle orbitals,

$$ n_{0}(\mathbf{r}) = \sum\limits_{j=1}^{N} |\varphi_{j}(\mathbf{r})|^{2}\:, $$
(13)

where the orbitals satisfy the following equation:

$$ \left[ -\frac{\nabla^{2}}{2} + v_{s}[n](\mathbf{r})\right] \varphi_{j}(\mathbf{r}) = \varepsilon_{j} \varphi_{j}(\mathbf{r}) \:. $$
(14)

This equation is known as the Kohn–Sham equation; it is formally a single-particle Schrödinger equation, like (7). However, the potential vs is very special: it is defined to be that single-particle potential which produces orbitals that give the exact ground-state density of the interacting system via (13). It is therefore a functional of the density, vs[n](r).

The trick is now to write the unknown effective potential vs[n] in a smart way. No doubt, the given external potential v0(r) will contribute to it. The remainder, vs[n](r) − v0(r), then accounts for the electronic many-body effects. A large portion of the latter is given by the classical Coulomb potential associated with the density distribution, also known as the Hartree potential,

$$ v_{\rm H}(\mathbf{r}) = \int d^{3}r' \frac{n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \:. $$
(15)
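For a spherically symmetric density, the three-dimensional integral (15) reduces to a one-dimensional shell integral, which is easy to evaluate numerically. The sketch below (the grid and the hydrogen 1s test density are illustrative choices) checks the result against the known closed-form Hartree potential of the 1s density.

```python
import numpy as np

# Hartree potential (15) for a spherical density, via the shell formula:
#   v_H(r) = (4*pi/r) * int_0^r n(s) s^2 ds + 4*pi * int_r^inf n(s) s ds
# Test case: hydrogen 1s density n(r) = exp(-2r)/pi, whose Hartree potential
# is known analytically: v_H(r) = 1/r - exp(-2r) * (1 + 1/r).
r = np.linspace(1e-4, 20.0, 20000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi                       # normalized 1s density

inner = np.cumsum(n * r**2) * dr                 # int_0^r n s^2 ds (rectangle rule)
outer = (np.sum(n * r) - np.cumsum(n * r)) * dr  # int_r^inf n s ds
v_H = 4 * np.pi * (inner / r + outer)

v_exact = 1 / r - np.exp(-2 * r) * (1 + 1 / r)
print(np.max(np.abs((v_H - v_exact)[r > 0.5])))  # small on this grid
```

Note that v_H(r) → 1/r at large r, i.e., the Hartree potential of one electron does not vanish for its own density; this is the self-interaction that the xc potential must cancel (see Section 2.4).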

The remainder is called the exchange-correlation (xc) potential, vxc[n](r), so that

$$ v_{s}[n](\mathbf{r}) = v_{0}(\mathbf{r}) + v_{\rm H}(\mathbf{r}) + v_{\rm xc}[n](\mathbf{r}) \:. $$
(16)
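Equations (13), (14), and (16) must be solved self-consistently: the potential depends on the density, which depends on the orbitals, which depend on the potential. A minimal sketch of this cycle for a 1D toy model (two same-spin particles in a harmonic well, a soft-Coulomb Hartree kernel, and vxc simply set to zero; all model parameters are illustrative assumptions, not from the text) could look like this:

```python
import numpy as np

M = 301                                   # grid points
x = np.linspace(-6, 6, M)
dx = x[1] - x[0]
N = 2                                     # number of occupied orbitals
v0 = 0.5 * x**2                           # external potential
w = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-Coulomb kernel

# kinetic energy via the three-point Laplacian
lap = (np.diag(np.ones(M - 1), -1) - 2.0 * np.eye(M)
       + np.diag(np.ones(M - 1), 1)) / dx**2
T = -0.5 * lap

n = np.full(M, N / (M * dx))              # initial guess: uniform density
for it in range(500):
    v_H = w @ n * dx                      # Hartree potential, cf. (15)
    eps, phi = np.linalg.eigh(T + np.diag(v0 + v_H))   # KS equation (14), v_xc ~ 0
    phi = phi / np.sqrt(dx)               # grid-normalized orbitals
    n_new = np.sum(phi[:, :N]**2, axis=1) # density from the orbitals, (13)
    if np.max(np.abs(n_new - n)) < 1e-9:
        break
    n = 0.5 * n + 0.5 * n_new             # linear mixing for stability

print("converged after", it, "iterations; lowest eigenvalues:", eps[:N])
```

The linear density mixing in the last line is the simplest way to stabilize the iteration; production codes use more sophisticated schemes, but the structure of the loop is the same.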

It turns out that the solution of the Kohn–Sham equation (14)—that is, the density (13)—is precisely that density which minimizes the total energy functional (11). The connection is made by rewriting the energy as follows:

$$\begin{array}{rll} E_{v_{0}}[n] &=& T[n] + \int d^{3}r \: v_{0}(\mathbf{r}) n(\mathbf{r}) + W[n] \\ &=& T_{s}[n] +\int d^{3}r v_{0}(\mathbf{r}) n(\mathbf{r}) \\ &&+(T[n]-T_{s}[n] + W[n] ) \\ &\equiv& T_{s}[n] + \int d^{3}r v_{0}(\mathbf{r}) n(\mathbf{r})\\ &&+ E_{\rm H}[n] + E_{\rm xc}[n] \:. \end{array} $$
(17)

Here, T[n] is the kinetic energy functional of the interacting system, whereas Ts[n] is the kinetic energy functional of a noninteracting system. Neither T[n] nor Ts[n] is known as an explicit density functional, but it is very straightforward to write down Ts[n] as an explicit functional of the orbitals:

$$ T_{s}[n] =-\frac{1}{2} \sum\limits_{j=1}^{N} \int d^{3}r \: \varphi_{j}^{*}(\mathbf{r}) \nabla^{2} \varphi_{j}(\mathbf{r}) \;, $$
(18)

where the orbitals φj(r) come from the Kohn–Sham equation (14) and are hence implicit density functionals.

In the last line of (17), we define the Hartree energy

$$ E_{\rm H}[n] = \frac{1}{2} \int d^{3}r \int d^{3}r' \: \frac{n(\mathbf{r}) n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} $$
(19)

and the xc energy as

$$ E_{\rm xc}[n] = T[n] - T_{s}[n] + W[n] - E_{\rm H}[n] \:. $$
(20)

This shows that the xc potential is given by the following functional derivative:

$$ v_{\rm xc}(\mathbf{r}) = \frac{\delta E_{\rm xc}[n]}{\delta n(\mathbf{r})} \:. $$
(21)

It is straightforward to show that the total energy (17) can be expressed as follows:

$$E_{v_{0}}[n] = \sum\limits_{j=1}^{N} \varepsilon_{j} - E_{\rm H}[n] - \int d^{3}r \: v_{\rm xc}(\mathbf{r}) n(\mathbf{r}) + E_{\rm xc}[n] \:. $$
(22)

2.4 Discussion and Exact Properties

Let us now summarize some of the most important properties of the Kohn–Sham approach. Our discussion is by no means complete, but the following properties will be relevant for the time-dependent case as well.

Meaning of the wave function. The Kohn–Sham system is noninteracting, so its total N-particle wave function can be written as a single Slater determinant:

$$ \Psi^{\rm KS}_{\rm gs}(\mathbf{r}_{1},\ldots,\mathbf{r}_{N}) = \frac{1}{\sqrt{N!}} \mbox{det}\{\varphi_{j}\} \:. $$
(23)

The Kohn–Sham Slater determinant has only one purpose: to reproduce the exact ground-state density when substituted in (6). It is not meant to reproduce the exact ground-state wave function, i.e., \(\Psi _{\rm gs}^{\rm KS} \ne \Psi _{\rm gs}\) in general.

Meaning of the Kohn–Sham energies. The energy eigenvalues ε j do not have a rigorous physical meaning, except for the highest occupied eigenvalue. We have

$$ \varepsilon_{N}(N) = E(N)-E(N-1) = -I(N) \:, $$
(24)

i.e., the highest occupied eigenvalue of the N-particle system equals minus the ionization energy of the N-particle system, and

$$ \varepsilon_{N+1}(N+1) = E(N+1)-E(N) = -A(N) \:, $$
(25)

i.e., the highest occupied eigenvalue of the N + 1-particle system equals minus the electron affinity of the N-particle system.

Eigenvalue differences εa − εi, where a labels an unoccupied single-particle state and i an occupied one, should not be interpreted as excitation energies of the many-body system (although they often are).

Asymptotic behavior of the Kohn–Sham potential. A neutral atom with N electrons has the nuclear potential v0(r) = −N/r, and its Hartree potential behaves as vH(r) → N/r for r → ∞. If an electron is far away in the outer regions of the atom, it should see the Coulomb potential of the remaining positive ion, −1/r. Since v0 and vH cancel asymptotically, this implies that the xc potential must behave as

$$ v_{\rm xc}(\mathbf{r}) \to -\frac{1}{r} $$
(26)

for large r, for any finite system. This asymptotic behavior of vxc(r) reflects the fact that the exact Kohn–Sham formalism is free of self-interaction: for a one-electron system, the Hartree and xc potentials cancel exactly.

Spin-dependent formalism. In practice, the Kohn–Sham formalism is usually written down and applied in its spin-polarized form, even if the system does not have a net spin polarization. We then have

$$ \left[ -\frac{\nabla^{2}}{2} + v_{0\sigma}(\mathbf{r})+v_{\rm H}(\mathbf{r}) + v_{\rm xc\sigma}(\mathbf{r})\right] \varphi_{j\sigma}(\mathbf{r}) = \varepsilon_{j\sigma} \varphi_{j\sigma}(\mathbf{r}) \:, $$
(27)

where σ = ↑, ↓. Here, the external potential v0σ carries a spin index, which could come from a static magnetic field, and the spin-polarized xc potential is defined as a functional of the spin-up and spin-down densities:

$$ v_{\rm xc \sigma}[n_{\uparrow},n_{\downarrow}](\mathbf{r}) = \frac{\delta E_{\rm xc}[n_{\uparrow},n_{\downarrow}]}{\delta n_{\sigma}(\mathbf{r})} \:, $$
(28)

where

$$ n_{\sigma}(\mathbf{r}) = \sum\limits_{j=1}^{N_{\sigma}} |\varphi_{j\sigma}(\mathbf{r})|^{2} \:. $$
(29)

Exact exchange. The xc energy can be decomposed into exchange and correlation energy. The exact exchange energy is given by

$$\begin{array}{rll} E_{\rm x}^{\rm exact}[n] &=& -\frac{1}{2} \sum\limits_{\sigma = \uparrow,\downarrow} \sum\limits_{i,j=1}^{N_{\sigma}}\int d^{3}r \int d^{3}r' \\ && \times \frac{\varphi_{i\sigma}^{*}(\mathbf{r}) \varphi_{j\sigma}(\mathbf{r})\varphi_{j\sigma}^{*}(\mathbf{r}') \varphi_{i\sigma}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \:, \end{array} $$
(30)

where the φiσ(r) are the exact Kohn–Sham orbitals. \(E_{\rm x}^{\rm exact}\) is a so-called implicit density functional.

2.5 DFT in Practice

The number of applications of DFT in various areas of science and engineering is almost impossible to count. Nice practical introductions from the perspectives of chemistry and of materials science, respectively, can be found in recent books and review articles [12, 13, 16]. Here, we will only make some general remarks and discuss a couple of representative examples.

Even though DFT is in principle exact (as we have emphasized), any practical application necessarily involves two kinds of approximations: (1) the xc energy functional Exc[n], and the xc potential following from it via (21), are not exactly known and need to be approximated; (2) the Kohn–Sham equation (14) needs to be solved using some computational scheme, which can introduce various types of numerical inaccuracies.

Over the years, many approximate xc functionals have been proposed, some constructed from physical arguments, constraints, and exact conditions, others from parametrizations fitted to reference data. Which functional should one choose? This question cannot be easily answered in general [17] but requires some experience. Practitioners of DFT who use popular software packages of quantum chemistry or solid-state physics often face daunting choices between many different menu options for vxc. Some functionals have turned out to be more popular and successful than others and are chosen in the majority of applications.

The xc energy of any system can be written as

$$ E_{\rm xc}[n] = \int d^{3}r \: e_{\rm xc}[n](\mathbf{r}) \:, $$
(31)

where exc[n](r) is the xc energy density, whose dependence on the density is, in general, nonlocal: exc at a particular point r is determined by the density n(r′) at all points in space. The goal is to approximate exc[n](r).

Much of the success of DFT can be attributed to the fact that a very simple approximation, the local density approximation (LDA), gives very useful results in many circumstances. The LDA has the following form:

$$ E_{\rm xc}^{\rm LDA}[n] = \int d^{3}r \: e_{\rm xc}^{h}(n(\mathbf{r}))\:. $$
(32)

Here, the xc energy density of a homogeneous electron liquid, \(e_{\rm xc}^{h}(\bar n)\) (which is a simple function of the uniform density \(\bar n\)), is evaluated at the local density at point r of the actual inhomogeneous physical system: \(e_{\rm xc}^{h}(n(\mathbf {r})) = \left .e_{\rm xc}^{h}(\bar n)\right |_{\bar n = n(\mathbf {r})} \). The \(E_{\rm xc}^{\rm LDA}[n]\) defined in this way is exact in the limit where the system becomes uniform, and should be accurate when the system varies only slowly in space.

The LDA requires \(e_{\rm xc}^{h}(\bar n)\) as input [18]. We can write

$$ e_{\rm xc}^{h}(\bar n) = e_{\rm x}^{h}(\bar n) + e_{\rm c}^{h}(\bar n) \:, $$
(33)

where the exchange energy density can be calculated exactly using Hartree–Fock theory; the result (for the spin-unpolarized electron liquid) is

$$ e_{\rm x}^{h}(\bar n) = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3} \bar n^{4/3} \:. $$
(34)

This gives the following expression for the LDA exchange potential:

$$\begin{array}{rll} v_{\rm x}^{\rm LDA}(\mathbf{r}) &=& \frac{\delta}{\delta n(\mathbf{r})} \left[ -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}\int d^{3}r' n(\mathbf{r}')^{4/3}\right] \\ &=& -\left(\frac{3}{\pi}\right)^{1/3} n(\mathbf{r})^{1/3} \:. \end{array} $$
(35)
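The step from (32) with (34) to (35) is an ordinary functional derivative, which can be checked numerically: perturb the density at one grid point and compare the finite-difference change of the energy with the analytic potential. A sketch (the 1D grid and Gaussian test density are illustrative choices):

```python
import numpy as np

x = np.linspace(-5, 5, 1001)   # 1D grid standing in for d^3r
dx = x[1] - x[0]
n = np.exp(-x**2)              # illustrative test density

def E_x(dens):
    # LDA exchange energy, (32) with (34), discretized on the grid
    return np.sum(-0.75 * (3 / np.pi)**(1 / 3) * dens**(4 / 3)) * dx

# analytic functional derivative, (35)
v_x = -(3 / np.pi)**(1 / 3) * n**(1 / 3)

# finite-difference functional derivative at the midpoint of the grid
i, eps = 500, 1e-5
n_pert = n.copy()
n_pert[i] += eps
fd = (E_x(n_pert) - E_x(n)) / (eps * dx)
print(v_x[i], fd)   # the two should agree
```

The division by eps*dx (rather than eps) reflects the definition of the functional derivative: δE/δn(r) is the energy change per unit density change per unit volume.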

The correlation energy density \(e_{\rm c}^{h}(\bar n)\) is not exactly known, but very accurate numerical results exist from quantum Monte Carlo calculations. Based on these results, parametrizations for the correlation energy of the homogeneous electron liquid have been derived [19–21].

The LDA generally performs very well across the board. It produces atomic and molecular total ground-state energies within 1–5 % of the exact value, and yields molecular equilibrium distances and geometries within about 3 %. For solids, Fermi surfaces in metals come out within a few percent, lattice constants of solids within about 2 %, and vibrational frequencies and phonon energies are obtained within a few percent as well.

On the other hand, the LDA has several shortcomings. For instance, the LDA is not self-interaction free; as a consequence, the xc potential goes to zero exponentially fast rather than as −1/r, see (26). This causes the Kohn–Sham energy eigenvalues to be too low in magnitude in general; in particular, the highest occupied eigenvalue εN underestimates the ionization energy, typically by 30–50 %. The LDA does not produce any stable negative ions, and it underestimates the band gap in solids. Dissociation of heteronuclear molecules in the LDA produces ions with fractional charges.

Overall, the LDA often gives good results in solid-state physics and materials science, but it is usually not accurate enough for many chemical applications.

The LDA can be improved by including a dependence not only on the local density itself but also on gradients of the density. This defines the so-called generalized gradient approximation (GGA), which has the following generic form:

$$ E_{\rm xc}^{\rm GGA}[n_{\uparrow},n_{\downarrow}] = \int d^{3}r \: e_{\rm xc} \left(n_{\uparrow}(\mathbf{r}),n_{\downarrow}(\mathbf{r}), \nabla n_{\uparrow}(\mathbf{r}),\nabla n_{\downarrow}(\mathbf{r})\right) . $$
(36)

There exist hundreds of different GGA functionals, and it is impossible to list all of them here. Among the most famous ones are the B88 exchange functional [23], the LYP correlation functional [24] (which, combined together, give the BLYP functional), and the PBE functional [25]. The exchange part of the latter has the following form:

$$ E_{\rm x}^{\rm PBE} = \int d^{3}r e_{\rm x}^{h}(n)\left[ 1 + \kappa - \frac{\kappa}{1 + \beta \pi^{2} s^{2}/3\kappa} \right], $$
(37)

where \(s(\mathbf{r}) = |\nabla n(\mathbf{r})|/[2n(\mathbf{r})k_{F}(\mathbf{r})]\) is the reduced density gradient, \(k_{F}(\mathbf{r}) = [3\pi^{2} n(\mathbf{r})]^{1/3}\) is the local Fermi wavevector, and κ and β are given parameters.
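With the published PBE parameters κ = 0.804 and β ≈ 0.066725, the bracket in (37) defines an exchange enhancement factor Fx(s) that interpolates between the LDA limit Fx(0) = 1 and a saturation value 1 + κ at large reduced gradient. A small sketch:

```python
import numpy as np

kappa, beta = 0.804, 0.066725      # published PBE parameters
mu = beta * np.pi**2 / 3           # small-s gradient coefficient

def F_x(s):
    # exchange enhancement factor implied by the bracket in (37)
    return 1 + kappa - kappa / (1 + mu * s**2 / kappa)

# F_x(0) = 1 recovers the LDA; F_x saturates at 1 + kappa for large s
print(F_x(0.0), F_x(10.0))
```

The saturation at 1 + κ is what enforces a rigorous bound on the exchange energy; the small-s expansion Fx ≈ 1 + μs² recovers the gradient expansion of the slowly varying electron gas.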

The GGAs have been crucial in the great success story of DFT over the past couple of decades, due to their accuracy combined with computational simplicity. However, improvements are still desirable. One of the most important breakthroughs has been the development of the so-called hybrid functionals, which mix in a fraction of exact exchange:

$$ E_{\rm xc}^{\rm hybrid} = a E_{\rm x}^{\rm exact} + (1-a) E_{\rm x}^{\rm GGA} + E_{\rm c}^{\rm GGA} \:, $$
(38)

where a is a mixing coefficient that has a value of around 0.25. The most famous hybrid functional is B3LYP [26], which nowadays has become the workhorse of computational chemistry. It should be noted that the exact exchange mixed in here prevents the easy construction of a local xc potential, so hybrid functionals are defined in the so-called generalized Kohn–Sham scheme [27, 28].

Table 1 gives an assessment of various approximate xc functionals, carried out for large molecular test sets [22]. All xc functionals perform much better than Hartree–Fock. It is evident that the B3LYP functional gives the best overall results, with accuracies that come close to the requirements for predicting chemical reactions (the so-called “chemical accuracy” of 1 kcal/mol).

Table 1 Mean absolute errors in several molecular properties calculated for various test sets [22]

In solids, hybrid functionals such as B3LYP perform less well, due to the fact that they do not reduce to the exact homogeneous electron gas limit [29]. A detailed assessment of the performance of modern density functionals for bulk solids was given by Csonka et al. [30]. Generally speaking, GGA functionals do not improve the lattice constants of nonmolecular solids obtained with the LDA (which are already very good!): while the LDA systematically underestimates lattice constants, GGAs overestimate them. Conversely, bulk moduli and phonon frequencies are typically overestimated by the LDA and underestimated by GGAs. This clearly affects many volume-dependent properties of solids, such as their magnetic behavior. Some typical results for lattice constants are given in Table 2.

Table 2 Equilibrium lattice constants of some representative bulk solids [30]

A particular class of hybrid functionals, called range-separated hybrids, has attracted much interest lately [33]. The basic idea is to separate the Coulomb interaction into a short-range (SR) and a long-range (LR) part:

$$ \frac{1}{|\mathbf{r} - \mathbf{r}'|} = \frac{f(\mu|\mathbf{r} - \mathbf{r}'|)}{|\mathbf{r} - \mathbf{r}'|} + \frac{1-f(\mu|\mathbf{r} - \mathbf{r}'|)}{|\mathbf{r} - \mathbf{r}'|} \:, $$
(39)

where the function f has the properties f (μx → 0) = 1 and f (μx → ∞) = 0.

Common examples are f(μx) = \(e^{-\mu x}\) and f(μx) = erfc(μx). The separation parameter μ is determined either empirically [34–38] or using physical arguments [33, 39]. The resulting range-separated hybrid xc functional then has the following generic form:

$$ E_{\rm xc} = E_{\rm x}^{\rm SR-DFA} + E_{\rm x}^{\rm LR-HF} + E_{\rm c}^{\rm DFA} \:, $$
(40)

where DFA stands for any standard density-functional approximation such as the LDA or GGA. The main strength of range-separated hybrids is that they have the correct (Hartree–Fock) long-range asymptotic behavior and at the same time take advantage of the good short-range behavior of the LDA or GGA. This, in turn, leads to a significant improvement in properties such as the polarizabilities of long-chain molecules, bond dissociation, and, particularly important for TDDFT, Rydberg and charge-transfer excitations (see Section 6.3).
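The separation (39) with f(μx) = erfc(μx) is exact: since erf + erfc = 1, the short- and long-range pieces resum to the full Coulomb interaction for any μ. A quick numerical check (the value of μ is an arbitrary illustrative choice):

```python
import math

# Short-/long-range split of the Coulomb interaction, (39), with
# f(mu*x) = erfc(mu*x)
mu = 0.4

def coulomb_split(r, mu):
    sr = math.erfc(mu * r) / r   # short-range part, decays beyond ~1/mu
    lr = math.erf(mu * r) / r    # long-range part, -> 1/r at large r
    return sr, lr

# erf + erfc = 1 identically, so the two pieces resum to 1/r exactly
for r in (0.1, 1.0, 5.0, 20.0):
    sr, lr = coulomb_split(r, mu)
    assert abs(sr + lr - 1 / r) < 1e-12
```

The erfc form is particularly convenient in practice because the resulting two-electron integrals over Gaussian basis functions remain analytic.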

This concludes our very brief survey of ground-state DFT. Let us now come to the dynamical case.

3 Survey of Dynamical Phenomena

The stationary many-body problem was defined in Section 2.1. Solving the Schrödinger equation (1) allows us to obtain the eigenstates of an N-particle system. The time-dependent Schrödinger equation is given by

$$ i \frac{\partial}{\partial t} \Psi(\mathbf{r}_{1},\ldots,\mathbf{r}_{N},t) = \hat H(t) \Psi(\mathbf{r}_{1},\ldots,\mathbf{r}_{N},t) \:, $$
(41)

where the time-dependent Hamiltonian is defined as

$$ \hat H(t) = \hat T + \hat V(t) + \hat W \;. $$
(42)

The time-dependent Hamiltonian has the same kinetic energy and electron–electron interaction parts \(\hat T\) and \(\hat W\) as the static Hamiltonian (2), but it features an external potential operator that is explicitly time-dependent:

$$ \hat V(t) = \sum\limits_{j=1}^{N} v(\mathbf{r}_{j},t) \;. $$
(43)

The time-dependent Schrödinger equation (41) formally represents an initial value problem. We define a time t0 as our initial time (often, t0 = 0), and we start with a given initial many-body wave function of the system, Ψ(t0) ≡ Ψ0 (notice that this is not necessarily the ground state). This state is then propagated forward in time, describing how the system evolves under the influence of the time-dependent potential v(r, t). In many situations of practical interest, the time-dependent single-particle potential can be written as

$$ v(\mathbf{r},t) = v_{0}(\mathbf{r}) + \theta (t-t_{0})v_{1}(\mathbf{r},t) \:, $$
(44)

i.e., the potential is static and equal to v0 until time t0, when an explicitly time-dependent additional potential v1(r, t) is switched on.

The time-dependent wave function allows us to calculate whatever observable we may be interested in,

$$ O(t) = \langle \Psi(t) | \hat{O} | \Psi(t)\rangle . $$
(45)

Here, O(t) is the time-dependent expectation value of the Hermitian operator \(\hat {O}\) corresponding to a quantum mechanical observable. Two key quantities for TDDFT are the time-dependent density and current density, n(r, t) and j(r, t). They can be defined via the one-particle density operator and current density operator,

$$\hat{n}(\mathbf{r}) = \sum\limits_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_{i})$$
(46)
$$\hat{\mathbf{j}}(\mathbf{r}) = \frac{1}{2i}\sum\limits_{i=1}^{N}[\nabla_{i}\delta(\mathbf{r} - \mathbf{r}_{i}) + \delta(\mathbf{r} - \mathbf{r}_{i})\nabla_{i}] \:, $$
(47)

so that \(n(\mathbf {r},t) = \langle \Psi (t) | \hat {n}(\mathbf {r}) |\Psi (t)\rangle \), and similarly for j(r, t). A connection between density and current density is provided by the continuity equation,

$$ \frac{\partial}{\partial t} n(\mathbf{r},t) = -\nabla \cdot \mathbf{j}(\mathbf{r},t)\:. $$
(48)
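The continuity equation (48) can be verified numerically for a single particle by propagating the time-dependent Schrödinger equation over one small step. The sketch below uses a Crank–Nicolson propagator; the harmonic potential, the Gaussian initial packet, and the grid and time step are illustrative choices, not taken from the text.

```python
import numpy as np

M = 400
x = np.linspace(-10, 10, M)
dx = x[1] - x[0]
dt = 0.005
v = 0.5 * x**2

# Hamiltonian with a three-point Laplacian, and the unitary Crank-Nicolson step
lap = (np.diag(np.ones(M - 1), -1) - 2.0 * np.eye(M)
       + np.diag(np.ones(M - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(v)
I = np.eye(M)
U = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

psi0 = np.exp(-(x - 2.0)**2 + 0.5j * x)   # displaced, moving Gaussian packet
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)
psi1 = U @ psi0

# current density j = Im(psi* dpsi/dx), evaluated at the midpoint of the step
psi_mid = 0.5 * (psi0 + psi1)
j = np.imag(np.conj(psi_mid) * np.gradient(psi_mid, dx))
residual = (np.abs(psi1)**2 - np.abs(psi0)**2) / dt + np.gradient(j, dx)
print(np.max(np.abs(residual)))   # small: continuity holds on the grid
```

Evaluating the current at the midpoint wave function matters: the Crank–Nicolson step conserves the norm exactly, and the discrete analogue of (48) holds to the accuracy of the spatial finite differences.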

There are many different types of quantum mechanical time evolution that are of practical interest. Many of them belong to one of the following two generic scenarios.

First scenario.

Consider a system that starts from a nonequilibrium initial state and then freely evolves in a static potential. A simple one-dimensional example is illustrated in Fig. 2: at the initial time t0, the density has an asymmetric shape which clearly does not come from an eigenstate of the square-well potential. The density is then “released” and starts to oscillate back and forth, while the square-well potential remains static.

Fig. 2

(Colored online) First scenario of time evolution: the external potential is static, but the system starts with a nonequilibrium initial state. The density then oscillates back and forth

This kind of free time evolution occurs in practice when the system is subject to a sudden switching or a short “kick” at the initial time and is then left to itself. For example, charge-density oscillations that are triggered in this way play an important role in the field of “plasmonics” [40].

Second scenario.

Consider now a system that is initially in the ground state and is then subject to a time-dependent potential that is switched on at time t0. This is illustrated in Fig. 3 for a square-well potential that is “shaken” by superimposing it with a time-dependent linear potential, which again leads to an oscillating density.

Fig. 3

(Colored online) Second scenario of time evolution: the system starts from the ground state and evolves under a time-dependent external potential that is switched on at the initial time t0

For example, this scenario takes place if an atom or molecule is hit by a strong laser pulse: the wave function is driven by the external field and gets “shaken up,” which can then lead to ionization.

TDDFT will allow us to describe both dynamical scenarios formally exactly for arbitrary many-body systems. To do this, we will derive a dynamical version of the Kohn–Sham equations, which will allow us to carry out real-time propagations of quantum systems, starting from arbitrary initial states and under the influence of arbitrary time-dependent potentials. We will derive the formal framework of TDDFT in Section 4, and we will discuss practical aspects and applications in Section 5.

Of particular importance are situations in which the external time-dependent potential can be considered a weak perturbation. Very often, one is interested in the first-order response of the system to a perturbation, because many spectroscopic techniques operate in this regime. In particular, the linear response of a material is directly related to its spectrum of excitations. As we will see in Sections 6 and 7, TDDFT in the linear response regime is a very powerful approach to calculate excitation energies and optical spectra. In fact, this is where the majority of TDDFT applications are carried out at present.

4 The Formalism of TDDFT

4.1 The Runge–Gross Theorem

The foundation of ground-state DFT is the Hohenberg–Kohn theorem, which we discussed in Section 2.2. The unique 1:1 correspondence between ground-state densities and potentials makes it possible to construct density functionals in a meaningful way, and to determine ground-state properties in principle exactly via self-consistent solution of the Kohn–Sham equation.

For the time-dependent case, we would like a similar rigorous formal foundation. But the situation differs from the ground-state case in two important ways. First, we do not have a variational minimum principle in the time-dependent case. Second, the Schrödinger equation (41) is an initial value problem, so whatever we prove has to be done with a given initial state in mind.

The first to deliver an existence proof for TDDFT were Runge and Gross in [9]. They proved that if two N-electron systems start from the same initial state, but are subject to two different time-dependent potentials, their respective time-dependent densities will be different.

We consider two time-dependent potentials to be different if their difference is more than just a time-dependent constant,

$$ v(\mathbf{r},t) - v'(\mathbf{r},t) \ne c(t) $$
(49)

for t > t0. Otherwise, they would give rise to two wave functions that differ only by a purely time-dependent phase factor \(e^{-i\alpha(t)}\), where \(d\alpha(t)/dt = c(t)\), as can easily be shown. Such phase factors cancel out when one forms expectation values of operators using (45).
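This cancellation can be checked numerically. Here is a minimal sketch, where the particular c(t), the toy single-point wave function, and the crude quadrature for α(t) are all arbitrary choices for illustration:

```python
import numpy as np

# A potential shifted by a purely time-dependent constant c(t) only multiplies
# the wave function by a phase e^{-i alpha(t)} with d alpha/dt = c(t),
# so the density |psi|^2 is unchanged.
t = np.linspace(0.0, 2.0, 201)
c = np.sin(t)                                # some c(t)
alpha = np.cumsum(c) * (t[1] - t[0])         # alpha(t) = integral of c (crude quadrature)
psi = np.exp(-1j * 0.5 * t)                  # toy psi(t) at one fixed point r
psi_prime = np.exp(-1j * alpha) * psi        # the phase-shifted wave function
print(np.max(abs(abs(psi_prime)**2 - abs(psi)**2)))   # → 0 (to machine precision)
```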

The Runge–Gross theorem applies to potentials that can be expanded in a Taylor series about the initial time:

$$ v(\mathbf{r},t) = \sum\limits_{k=0}^{\infty} \frac{v_{k}(\mathbf{r})}{k!} \: (t-t_{0})^{k} \:. $$
(50)

For such potentials, the following unique 1:1 correspondence can be proven:

$$ v(\mathbf{r},t) \; \stackrel{1:1}{\longleftrightarrow} \; n(\mathbf{r},t) \qquad \mbox{for fixed } \Psi_{0} \:. $$
(51)

The proof proceeds in two steps. In the first step, it is established that different potentials produce different current densities infinitesimally later than the initial time t0. One then goes on to show that if the current densities are different, the densities must be different as well; to prove this, the continuity equation (48) is used.

Just like in ground-state DFT, the unique 1:1 correspondence (51) allows us to write the potential as a functional of the density:

$$ v(\mathbf{r},t) = v[n,\Psi_{0}](\mathbf{r},t) \:. $$
(52)

Notice the formal dependence on the initial state. However, this dependence goes away if the system starts from the ground state, i.e., Ψ0 = Ψgs: the Hohenberg–Kohn theorem then tells us that Ψgs[n] is a functional of the density, and v(r, t) can thus be written as a functional of the density only.

Since the potential can be written as a functional of the density, the time-dependent Hamiltonian becomes a density functional as well, and hence the time-dependent wave function and all observables:

$$ O(t) = \langle \Psi[n,\Psi_{0}](t) | \hat O | \Psi[n,\Psi_{0}](t)\rangle = O[n,\Psi_{0}](t) \:. $$
(53)

4.2 Time-Dependent Kohn–Sham Formalism

The Kohn–Sham formalism (see Section 2.3) has been tremendously successful in ground-state DFT. Its time-dependent counterpart looks very similar. The exact time-dependent density, n(r, t), can be calculated from a noninteracting system with N single-particle orbitals:

$$ n(\mathbf{r},t) = \sum\limits_{j=1}^{N} |\varphi_{j}(\mathbf{r},t)|^{2} \:. $$
(54)
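As a minimal illustration of (54), the sketch below assembles a density from two normalized particle-in-a-box orbitals (stand-ins for time-dependent Kohn–Sham orbitals; the box length and grid size are arbitrary) and checks that it integrates to the particle number:

```python
import numpy as np

# Eq. (54): the density is the sum of the squared orbitals.
L, npts = 1.0, 400
x = np.linspace(0.0, L, npts)
dx = x[1] - x[0]
# two normalized particle-in-a-box states, k = 1, 2
phi = [np.sqrt(2.0/L) * np.sin(k * np.pi * x / L) for k in (1, 2)]
n = sum(abs(p)**2 for p in phi)      # n(x) = sum_j |phi_j(x)|^2
total = np.sum(n) * dx               # integrates to the particle number N = 2
print(total)
```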

The orbitals φj(r, t) satisfy the time-dependent Kohn–Sham equation:

$$ i \frac{\partial}{\partial t} \varphi_{j}(\mathbf{r},t) = \left[ -\frac{\nabla^{2}}{2} + v_{s}(\mathbf{r},t)\right] \varphi_{j}(\mathbf{r},t) \:, $$
(55)

where the time-dependent effective potential is given by

$$ v_{s}[n,\Psi_{0},\Phi_{0}](\mathbf{r},t) = v(\mathbf{r},t) + v_{\rm H}(\mathbf{r},t) + v_{\rm xc}[n,\Psi_{0},\Phi_{0}](\mathbf{r},t) \:. $$
(56)

Here, v(r, t) is the time-dependent external potential, which we assume to have the form (44). The time-dependent Hartree potential,

$$ v_{\rm H}(\mathbf{r},t) = \int d^{3}r' \frac{n(\mathbf{r}',t)}{|\mathbf{r} - \mathbf{r}'|} \:, $$
(57)

depends on the instantaneous time-dependent density only. The time-dependent xc potential formally has a functional dependence on the density, the initial many-body state Ψ0 of the exact interacting system, and the initial state of the Kohn–Sham system Φ0. This is schematically illustrated in Fig. 4.
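The Hartree potential (57) is straightforward to evaluate on a grid. The sketch below does this in one dimension with a soft-Coulomb kernel, a common 1D stand-in for 1/|r − r′|; the model density and the softening parameter are arbitrary choices of this illustration:

```python
import numpy as np

# Eq. (57) on a 1D grid: v_H(x) = integral dx' w(x,x') n(x'),
# with the soft-Coulomb kernel w = 1/sqrt((x-x')^2 + 1).
x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]
n = np.exp(-x**2) / np.sqrt(np.pi)                    # model density, integral 1
w = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # softened kernel matrix
v_H = w @ n * dx                                      # discretized integral
# far from the charge, v_H approaches the potential of a unit point charge
print(v_H[-1])
```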

Fig. 4 The time-dependent Kohn–Sham equation determines the time-dependent density self-consistently between the initial time t0 and some final time t1. The xc potential at time t depends on densities at times t′ ≤ t, as well as on the initial states of the interacting and of the Kohn–Sham system

4.3 Discussion: Beyond Runge–Gross

The Runge–Gross theorem in and by itself is sufficient to serve as the fundamental formal basis of TDDFT. However, there are some subtle questions that it leaves unanswered and some situations that are not covered by it. Extending the Runge–Gross theorem, or coming up with alternative proofs, has therefore been an area of significant research activity.

This section can be skipped by readers who may be less interested in the formal details of TDDFT, and more interested in practical aspects.

4.3.1 v-Representability and the van Leeuwen Theorem

An important question in ground-state DFT is the following: given a well-behaved (i.e., continuous and nonsingular) mathematical function n(r) with \(\int d^{3}r \, n(\mathbf {r})=N\), can one always find a potential v0(r) for which this n(r) is the ground-state density? This is known as the v-representability question; one distinguishes between the interacting and the noninteracting v-representability problem, depending on whether the given density is to be reproduced in the physical (interacting) or in the Kohn–Sham (noninteracting) system.

Why is v-representability an important issue? If there exist density functions that are not v-representable (VR), then the domain of the functional \(E_{v_{0}}[n]\) would be ill-defined, and one would run into formal problems in defining functional derivatives such as in (21). The v-representability problem in DFT is still not fully solved, but at least we do know that all density functions on lattice systems are VR [41] (ensemble-VR, to be precise). Fortunately, it turns out that the v-representability problem in ground-state DFT can be circumvented in an elegant way with the so-called constrained search formalism [42, 43], which is, essentially, a clever reformulation of the variational minimum principle as a search over antisymmetric N-particle wave functions, so that

$$ E_{v_{0}}[n]=\mbox{min}_{\Psi \to n} \langle \Psi | \hat T + \hat V_{0} + \hat W | \Psi \rangle . $$
(58)

For TDDFT, the situation is different, due to a fundamental difference between the ground-state problem and the time-dependent problem: rather than finding a ground state, TDDFT describes the time propagation of many-body systems under the influence of external potentials. Due to the central role the external potential plays, the v-representability problem [i.e., whether there exists a v(r, t) for every n(r, t)] seems unavoidable.

TDDFT is not formulated on the basis of a variational minimum principle, since there is no quantity in time-dependent systems that plays a role analogous to that of the energy in the ground state. Instead, it is possible to formulate TDDFT via a stationary-action principle [44–46]. However, the uniqueness of the stationary-action point remains unproven. A rigorous time-dependent version of the constrained-search approach does not exist, despite some attempts [47, 48].

Some progress has been made with the time-dependent v-representability problem for lattice systems [49]. Interestingly, it can easily happen that perfectly well-behaved lattice densities are not VR, although in this case for well-understood reasons [50].

The van Leeuwen theorem [51] made an important contribution towards the resolution of the v-representability problem in TDDFT. It makes a statement about two many-body systems with different particle–particle interactions, w(r − r′) (system 1) and w′(r − r′) (system 2), see Fig. 5. If a time-dependent density n(r, t) is produced by an external potential v(r, t) in system 1 (starting from a given initial state), then one can uniquely construct the potential v′(r, t) that produces the same density in system 2 (the choice of initial state in system 2 is unique, too). There are some restrictions on the admissible densities: they must possess a Taylor expansion in t about the initial time (we denote such densities as t-TE). Below, we show that this assumption can be problematic.

Fig. 5 The van Leeuwen theorem states that a time-dependent density n(t) coming from a many-body system with interaction w(r − r′) and potential v(r, t) can be reproduced in a system with a different interaction w′(r − r′) and potential v′(r, t). The potential v′ is uniquely determined

The van Leeuwen theorem has two important special cases. The first is that of w = w′, i.e., the two systems are identical. It turns out that in this way, one gains an alternative proof of the Runge–Gross theorem. The second case is w′ = 0, i.e., the second system is noninteracting. This establishes noninteracting v-representability in TDDFT, and hence provides formal justification of the time-dependent Kohn–Sham approach.

4.3.2 Non-Taylor-Expandable Densities

The van Leeuwen theorem shows that for t-TE densities, one can always construct the corresponding t-TE potential for the TDKS system. However, there is a subtle difference between the domains of the van Leeuwen theorem and the Runge–Gross theorem: the latter only requires the external potentials to be t-TE (50), but not the densities. The van Leeuwen theorem does not apply to non-t-TE densities, which are allowed by the Runge–Gross theorem. Such densities are commonly regarded as pathological and hence not considered to pose any threat. However, it turns out [52, 53] that the densities of most real-world systems can become non-t-TE, including atoms, molecules, and solids!

In the usual nonrelativistic quantum mechanical description, the nuclei and electrons interact through the diverging Coulomb potential, and the densities always have cusps at the positions of the nuclei [54]. The dynamics of the system, including the time-dependent density, is determined by the time evolution operator \(\hat U(t,t_{0})\), which in turn follows from the Hamiltonian \(\hat H(t)\). In the presence of spatially nonanalytic features such as cusps, time nonanalyticities appear because of the kinetic energy operator \(\hat T\), which is a differential operator in space. Thus, the time-dependent density can become non-t-TE.

A striking example [52] demonstrating the difference between the exact density and the t-TE density is shown in Fig. 6 (the t-TE density is defined as the result of applying the t-TE time evolution operator on the initial state). At the initial time, a density with a cusp is prepared, and then allowed to freely evolve in time. The upper panel of Fig. 6 shows that the density rapidly becomes smooth and spreads out. By contrast, if one attempts to find the time evolution by using a Taylor expansion, the density does not move at all!
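The free spreading of a cusped state can be reproduced with a few lines of spectral propagation; the grid parameters below are arbitrary, and the point is only that the exact peak density decreases, whereas the Taylor-expanded evolution would leave the density unchanged:

```python
import numpy as np

# Free propagation of the cusped initial state exp(-|x|) (cf. Fig. 6),
# using the exact Fourier-space free-particle propagator exp(-i k^2 t / 2).
npts, L = 1024, 80.0
dx = L / npts
x = (np.arange(npts) - npts//2) * dx
psi0 = np.exp(-abs(x))
psi0 = psi0 / np.sqrt(np.sum(abs(psi0)**2) * dx)      # normalize on the grid
k = 2*np.pi * np.fft.fftfreq(npts, d=dx)
psi1 = np.fft.ifft(np.exp(-0.5j * k**2 * 1.0) * np.fft.fft(psi0))   # t = 1
# the exact density spreads: the peak value at the cusp decreases
print(abs(psi1[npts//2])**2 < abs(psi0[npts//2])**2)   # → True
```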

Fig. 6 (Color online) Upper panel: time-dependent density of a 1D system with initial state ψ(x) = exp(−|x|), propagating with no external potential. Lower panel: using a Taylor expansion in time, the initial density remains stationary. This wrong behavior is due to the nonanalyticity of the density

We emphasize that although the Runge–Gross theorem is explicitly formulated for t-TE densities, the original proof remains valid despite the existence of non-t-TE densities [53]. Thus, the foundations of TDDFT remain sound.

4.3.3 Fixed-Point Proofs

Recent work on the v-representability problem and related questions focuses on developing so-called fixed-point proofs [55, 56], in which the previous restriction to t-TE quantities is lifted. The van Leeuwen theorem provides a way of constructing the time-dependent external potential for a given density, provided the density is t-TE; if applied to a non-t-TE density, the constructed potential does not correspond to the exact density, but instead reproduces the t-TE density [53]. The fixed-point proofs [55, 56] therefore focus on explicitly showing the one-to-one correspondence between the potential and the density. The proof starts from the equation of motion of the density [51]:

$$ \frac{\partial^{2} n(\mathbf{r},t)}{\partial t^{2}}-\nabla\cdot[n(\mathbf{r},t)\nabla v(\mathbf{r},t)]=q(\mathbf{r},t). $$
(59)

The density and the quantity q can be seen as functionals of the potential, and thus (59) uniquely maps a potential v0 to q[v0], with the density n[v0, Ψ0] determined by v0 and the initial wave function Ψ0. From another perspective, (59) can also be seen as a differential equation for the potential, when n and q are given. If this given density is chosen to coincide with the initial density of the system and with its first-order time derivative, and q is chosen to be q[v0], then (59) can be solved for the potential, denoted v1. Ref. [55] proves that, under mild restrictions, v0 = v1, establishing the mutual correspondence between the density and the potential. The proof is supported by recent numerical simulations [56]. The fixed-point proofs apply to densities confined within a finite (but arbitrarily large) spatial region, and density cusps are included in a limiting sense. It is not clear at present whether these restrictions are general enough to settle the v-representability problem.

4.3.4 Vector Potentials and Time-Dependent Current-DFT

TDDFT applies to electronic many-body systems in the presence of time-dependent scalar potentials. But there are important classes of time-dependent processes that are not included, namely, many-body systems in time-dependent magnetic fields or under the influence of electromagnetic waves. This is a severe omission: strictly speaking, it precludes discussing the interaction between light and matter! In practice, we can often get around this restriction by treating electromagnetic fields in the dipole approximation, so that TDDFT is applicable. But in the general case, to deal with vector potentials of the form A(r, t), we need a theory that goes beyond TDDFT.

In general, a system can be under the influence of both a scalar and a vector potential, v(r, t) and A(r, t). The many-body Hamiltonian is then given by

$$ \hat H(t) = \sum\limits_{j=1}^{N}\left\{ \frac{1}{2}\left[ \frac{\nabla_{j}}{i} + \mathbf{A}(\mathbf{r}_{j},t)\right]^{2} +v(\mathbf{r}_{j},t)\right\} + \hat W. $$
(60)

The time-dependent many-body wave function associated with \(\hat H(t)\) determines the density n(r, t) and the current density j(r, t). It is important to keep in mind that the current density, like any general vector field, has a longitudinal and a transverse component,

$$ \mathbf{j} (\mathbf{r},t) = \mathbf{j}_{L}(\mathbf{r},t) + \mathbf{j}_{T}(\mathbf{r},t) \:. $$
(61)

The longitudinal current density is related to the density via the continuity equation:

$$ \frac{\partial }{\partial t} n(\mathbf{r},t) = -\nabla \cdot \mathbf{j}_{L}(\mathbf{r},t) \:, $$
(62)

but the transverse component jT(r, t) is not determined by n. Hence, current densities are, in general, not VR [57]: if \(\mathbf {j}(\mathbf {r},t) = \mathbf {j}_{L}(\mathbf {r},t) + \mathbf {j}_{T}(\mathbf {r},t)\) comes from a potential v(r, t), then \(\mathbf {j}'(\mathbf {r},t) = \mathbf {j}_{L}(\mathbf {r},t) + \mathbf {j}'_{T}(\mathbf {r},t)\) (same longitudinal but different transverse component) cannot come from any potential v′(r, t), since this would violate the Runge–Gross theorem. Hence, we need the full mapping

$$ (v,\mathbf{A}) \leftrightarrow (n,\mathbf{j}) \:. $$
(63)

However, this map is unique only up to a gauge transformation:

$$v(\mathbf{r},t) \to v(\mathbf{r},t) - \frac{\partial}{\partial t} \Lambda(\mathbf{r},t)$$
(64)
$$ \mathbf{A}(\mathbf{r},t) \to \mathbf{A}(\mathbf{r},t) + \nabla \Lambda(\mathbf{r},t) \:, $$
(65)

where Λ(r, t) is an arbitrary (but well-behaved) gauge function which vanishes at the initial time. Often, one chooses the gauge function in such a way that the scalar potential vanishes.

Ghosh and Dhara [58] were the first to give a formal proof of time-dependent current-DFT (TDCDFT). More recently, an alternative existence proof of TDCDFT, in the spirit of the van Leeuwen theorem, was provided by Vignale [59]. TDCDFT on lattice systems was discussed by Tokatly [60]. The time-dependent Kohn–Sham equation in TDCDFT becomes

$$ i \frac{\partial }{\partial t} \varphi_{j}(\mathbf{r},t) = \left\{\frac{1}{2} \left[\frac{\nabla}{i} + \mathbf{A}_{s}(\mathbf{r},t)\right]^{2} + v_{s}(\mathbf{r},t)\right\} \varphi_{j}(\mathbf{r},t)\:, $$
(66)

where the effective scalar potential, as before, is given by (56), and the effective vector potential is

$$ \mathbf{A}_{s}(\mathbf{r},t) = \mathbf{A}(\mathbf{r},t) + \mathbf{A}_{\rm xc}(\mathbf{r},t) \:. $$
(67)

Notice that the effective vector potential does not contain a Hartree-like term due to induced currents, since such a term would be relativistically small. The gauge-invariant physical current density is given by

$$ \mathbf{j}(\mathbf{r},t) = n(\mathbf{r},t)\, \mathbf{A}_{s}(\mathbf{r},t) + \sum\limits_{j=1}^{N} \Im [\varphi_{j}^{*}(\mathbf{r},t) \nabla \varphi_{j}(\mathbf{r},t)] \:. $$
(68)

Let us summarize the key points of TDCDFT:

  1.

    TDCDFT overcomes formal limitations of TDDFT, allowing the treatment of general vector potentials, electromagnetic waves, and time-varying magnetic fields. However, electromagnetic waves are usually treated in the dipole approximation, so one rarely makes use of TDCDFT in this way.

  2.

    The Runge–Gross theorem of TDDFT has been proved for finite systems, where the density vanishes at infinity. However, it also holds for periodic systems [61], provided the external potential is periodic as well. The Runge–Gross theorem does not apply when a spatially uniform field acts on a periodic system; this case, however, is formally included in TDCDFT [59].

  3.

    TDCDFT can be very useful in situations that could, in principle, be fully described with TDDFT; using the current as basic variable, rather than the density, can make it easier to develop approximations for dynamical xc effects [62, 63].

5 Practical Aspects

Applying TDDFT in practice requires the following:

  • A suitable approximation for the time-dependent xc potential needs to be found;

  • The time-dependent Kohn–Sham equations need to be solved numerically;

  • The physical observables of interest need to be obtained from the time-dependent density.

Each of these points has its own challenges. We shall now address them individually, including some examples.

5.1 The Time-Dependent xc Potential

As we said in Section 4.2, the time-dependent xc potential is formally a functional of the time-dependent density as well as of the initial states, vxc[n, Ψ0, Φ0](r, t). In practice, one is usually interested in situations where the system is initially in the ground state. If this is the case, things simplify considerably: thanks to the Hohenberg–Kohn theorem of ground-state DFT, the initial states become functionals of the initial (ground-state) density, and the xc functional can be written as a density functional only, vxc[n](r, t).

However, the density dependence of the xc potential is complicated and nonlocal: the xc potential at space-time point (r, t) depends on densities at all other points in space and at all previous times, n(r′, t′), where t′ ≤ t (the potential cannot depend on densities in the future—this would violate the fundamental principle of causality).

The most widely used approximation for the xc potential is the adiabatic approximation:

$$ v_{\rm xc}^{\rm A}(\mathbf{r},t)=v_{\rm xc}^{\rm gs}[n_{0}](\mathbf{r})|_{n_{0}(\mathbf{r})=n(\mathbf{r},t)}, $$
(69)

where \(v_{\rm xc}^{\rm gs}\), the ground-state xc potential defined in (21), is evaluated at the instantaneous time-dependent density. Equation (69) becomes exact in the limit of an infinitely slowly varying system which remains in its ground state at all times. In practice, this is of course not the case (unless one considers a time-dependent system which just sits there in its ground state, doing nothing).

One of the most important questions in TDDFT is under what circumstances the adiabatic approximation works well. Numerical studies [64–66] demonstrate that the adiabatic approximation may break down if the system undergoes very rapid changes, but it turns out that the adiabatic approximation still works surprisingly well in many cases. This will be further addressed below when we discuss the calculation of excitation energies.

As of today, very few applications of TDDFT have been carried out with nonadiabatic, explicitly memory-dependent xc functionals [67–70]. Due to its simplicity, the overwhelming majority of time-dependent Kohn–Sham calculations use the adiabatic LDA (ALDA),

$$ v_{\rm xc}^{\rm ALDA}(\mathbf{r},t) = v_{\rm xc}^{\rm LDA}(n(\mathbf{r},t)) \:, $$
(70)

or any adiabatic GGA defined in a similar way, by replacing the ground-state density with the instantaneous time-dependent density.
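As a concrete sketch of (70), here is the exchange-only part of the ALDA potential, v_x(n) = −(3n/π)^{1/3} in Hartree atomic units, evaluated at a snapshot of the density; the grid values are made up, and a full ALDA would add the correlation contribution:

```python
import numpy as np

# Exchange-only LDA potential, v_x(n) = -(3n/pi)^(1/3) in Hartree atomic units,
# evaluated at the instantaneous density n(r,t) -- the adiabatic prescription.
def v_x_alda(n):
    return -(3.0 * n / np.pi)**(1.0/3.0)

n_snapshot = np.array([0.1, 0.5, 1.0])   # n(r,t) at three grid points, some time t
print(v_x_alda(n_snapshot))
```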

5.2 Observables

In Section 4.1, we showed that all physical observables are formally functionals of the time-dependent density, see (53). TDDFT gives, in principle, the exact time-dependent density n(r, t), and all quantities of interest must be obtained from it. Some observables are easily calculated in this way, but others are not. We will now give examples of both kinds.

5.2.1 Easy Observables

The easiest observable is the density itself, which shows how the electrons move during any time-dependent process. This is certainly useful for visualizing molecular geometries or structural changes during chemical reactions or photoinduced processes, but it does not reveal important quantum mechanical features such as atomic shell structure, covalent molecular bonds, or lone pairs. Such information can be gained from a convenient visualization tool known as the time-dependent electron localization function (TDELF) [71]. The TDELF is a positive quantity taking values between zero and one:

$$ f_{\rm ELF}(\mathbf{r},t) = \frac{1}{1 + [D_{\sigma}(\mathbf{r},t)/D_{\sigma}^{0}(\mathbf{r},t)]^{2}} \:. $$
(71)

The quantity

$$ D_{\sigma}(\mathbf{r},t) = \tau_{\sigma}(\mathbf{r},t) - \frac{|\nabla n_{\sigma}(\mathbf{r},t)|^{2}}{8n_{\sigma}(\mathbf{r},t)} - \frac{|\mathbf{j}_{\sigma}(\mathbf{r},t)|^{2}}{2n_{\sigma}(\mathbf{r},t)} $$
(72)

is a measure of the probability of finding an electron in the vicinity of another electron of the same spin σ at (r, t). Clearly, Dσ(r, t) is not an explicit density functional, but it is expressed in terms of the density, the current density, and the orbitals via the kinetic energy density \(\tau _{\sigma }(\mathbf {r},t) = \frac {1}{2}\sum _{j=1}^{N_{\sigma }} |\nabla \varphi _{j\sigma }(\mathbf {r},t)|^{2}\). The reference value \(D_{\sigma }^{0}\) in (71) is given by the kinetic energy density of the homogeneous electron liquid:

$$ D_{\sigma}^{0}(\mathbf{r},t) = \frac{3}{10}(6\pi^{2})^{2/3} n_{\sigma}^{5/3}(\mathbf{r},t) = \tau_{\sigma}^{h}(\mathbf{r},t) \:. $$
(73)
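A quick sanity check of (71)–(73): for a homogeneous electron liquid there are no density gradients and no current, so Dσ equals the homogeneous kinetic energy density, Dσ/D0σ = 1, and f_ELF = 1/2:

```python
import numpy as np

# ELF from Eqs. (71)-(72); tau is the kinetic energy density, grad_n the
# density gradient, j the current density (all for one spin channel sigma).
def f_elf(tau, n, grad_n, j):
    D = tau - grad_n**2 / (8.0*n) - j**2 / (2.0*n)
    D0 = 0.3 * (6.0*np.pi**2)**(2.0/3.0) * n**(5.0/3.0)
    return 1.0 / (1.0 + (D/D0)**2)

n = 0.7                                             # uniform spin density (arbitrary)
tau_h = 0.3 * (6.0*np.pi**2)**(2.0/3.0) * n**(5.0/3.0)   # homogeneous-liquid value
print(f_elf(tau_h, n, 0.0, 0.0))                    # → 0.5
```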

The time propagation is unitary, so the total norm is conserved; but to describe ionization or charge transfer processes, it is often of interest to obtain the number of electrons that escape from a given spatial region \(\cal V\):

$$ N_{\rm esc}(t) = N - \int_{\mathcal{V}} d^{3}r \: n(\mathbf{r},t) \:. $$
(74)

Here, \(\cal V\) can be thought of as a “box” that surrounds the entire system (in case we wish to calculate ionization rates of atoms or molecules), or it could be a part of a larger molecule or part of a unit cell of a periodic solid.

Another easy class of observables are moments of the density, such as the dipole moment:

$$ \textbf{d}(t) = \int d^{3}r \: \mathbf{r} n(\mathbf{r},t) \:. $$
(75)

The dipole moment can be considered directly, i.e., in real time, to study the behavior of charge-density oscillations. Alternatively, it can be Fourier transformed to yield the dipole power spectrum |d(ω)|² or related observable quantities such as the photoabsorption cross section.
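The Fourier route can be sketched in a few lines. The dipole signal below is synthetic (a single damped oscillation standing in for a propagated d(t)), and the frequency, damping constant, and time step are arbitrary choices:

```python
import numpy as np

# Turn a real-time dipole signal d(t) into a power spectrum |d(omega)|^2.
dt, nsteps, omega0 = 0.1, 4096, 0.3
t = dt * np.arange(nsteps)
d = np.cos(omega0 * t) * np.exp(-0.002 * t)   # damping mimics a finite lifetime
d_omega = np.fft.rfft(d) * dt                 # discretized Fourier integral
omega = 2*np.pi * np.fft.rfftfreq(nsteps, d=dt)   # angular frequency axis
peak = omega[np.argmax(abs(d_omega)**2)]
print(peak)                                   # spectral peak close to omega0 = 0.3
```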

Higher moments of the density, such as the quadrupole moment, can be calculated just as easily, but are less frequently considered.

5.2.2 Difficult Observables

Equation (74) gives the total number of escaped electrons, which in general can be nonintegral. For instance, if we consider an atom in a laser field, a value of Nesc = 0.5 would indicate that, on average, half an electron has been removed. In reality, there are of course no “half-electrons,” so we have to interpret this result in a probabilistic sense: it could, for instance, mean that there is a 50 % probability that the atom is singly ionized and a 50 % probability that it is not ionized; other scenarios, involving double ionization, are also possible. The probabilities of finding an atom or molecule in a charge state +m can be defined as follows [72]:

$$ P^{0}(t) = \int\limits_{\cal V} d^{3}r_{1} \ldots \int\limits_{\cal V} d^{3}r_{N} \:|\Psi(\mathbf{r}_{1},\ldots,\mathbf{r}_{N},t)|^{2}$$
(76)
$$ P^{+1}(t) = \int\limits_{\overline{\mathcal V}} d^{3}r_{1} \int\limits_\mathcal {V} d^{3}r_{2} \ldots \int\limits_\mathcal {V} d^{3}r_{N}|\Psi(\mathbf{r}_{1},\ldots,\mathbf{r}_{N},t)|^{2} $$
(77)

and similarly for all other P+m(t). Here, \(\overline{\mathcal{V}}\) denotes all space outside the integration box \(\cal V\) surrounding the system. The ion probabilities are defined in terms of the full many-body wave function Ψ(t), which is a density functional according to the Runge–Gross theorem; but it is not possible to extract the ion probabilities P+m(t) directly from the density in an elementary way.

Since the full wave function is prohibitively expensive to deal with, a pragmatic solution is to replace Ψ(t) by the Kohn–Sham Slater determinant Φ(t), in spite of the fact that the latter has no rigorous physical meaning. One then obtains the Kohn–Sham ion probabilities

$$ P_{s}^{0}(t) = N_{1}(t) N_{2}(t) \ldots N_{N}(t) $$
(78)
$$ \begin{array}{rll} P_{s}^{+1}(t)& = &\sum\limits_{j=1}^{N} N_{1}(t)\ldots N_{j-1}(t)\big(1-N_{j}(t)\big) \\ && \times N_{j+1}(t) \ldots N_{N}(t)\end{array} $$
(79)

and similarly for all other \(P_{s}^{+m}(t)\), where

$$ N_{j}(t) = \int\limits_{\mathcal{V}} d^{3}r |\varphi_{j}(\mathbf{r},t)|^{2} \:. $$
(80)
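Equations (78)–(79) generalize to arbitrary m by summing over all ways in which exactly m orbitals have escaped. A sketch, with made-up orbital norms for a two-electron system:

```python
import itertools

# Kohn-Sham ion probabilities P_s^{+m} from the orbital norms N_j inside
# the box V, Eq. (80): each orbital stays with probability N_j or has
# escaped with probability 1 - N_j.
def ks_ion_probabilities(N_orb):
    nmax = len(N_orb)
    P = [0.0] * (nmax + 1)
    for escaped in itertools.product([0, 1], repeat=nmax):
        m = sum(escaped)                     # number of escaped orbitals
        term = 1.0
        for Nj, out in zip(N_orb, escaped):
            term *= (1.0 - Nj) if out else Nj
        P[m] += term
    return P                                 # [P_s^0, P_s^{+1}, ..., P_s^{+nmax}]

P = ks_ion_probabilities([0.9, 0.8])         # hypothetical norms N_1, N_2
print(P)                                     # probabilities sum to 1
```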

The Kohn–Sham ion probabilities are easily obtained from the orbitals; but, apart from certain limiting cases [72], they have no rigorous physical meaning [73, 74]. Here are some other examples of difficult observables:

Photoelectron spectra

The photoelectron kinetic energy distribution spectrum is formally defined as

$$ P(E)dE=\lim\limits_{t\to\infty}\sum\limits_{k=1}^{N}|\langle \Psi_{E}^{k}|\Psi(t)\rangle|^{2}dE \:, $$
(81)

where \(|\Psi _{E}^{k}\rangle \) is a many-body eigenstate with k electrons in the continuum and total kinetic energy E of the continuum electrons. There are approximate ways of calculating photoelectron spectra from the density or from the Kohn–Sham orbitals [75–77].

State-to-state transition probabilities

The S-matrix describes the transition between two states:

$$ S_{i,f} = \lim\limits_{t\to \infty} \langle \Psi_{f} | \Psi(t)\rangle \:, $$
(82)

for given initial and final many-body states Ψi and Ψf. To extract the S-matrix from the density, a cumbersome implicit readout procedure has been proposed [78].

Momentum distributions

Ion recoil momenta are of great interest in intense-field or scattering experiments. The problem is formally similar to that of calculating ion probabilities from the density, and in principle requires the full wave function in momentum space. The Kohn–Sham momentum distributions can be taken as an approximation, without formal justification [79].

Transition density matrix

The transition density matrix is a quantity that is defined in the linear response regime. As the name indicates, it refers to a specific excitation of the system (typically, a large molecular system), and maps the distribution and coherences of the excited electron and the associated hole. In particular, the transition density matrix is useful to visualize excitonic effects. There is no easy way to obtain it directly from the density; the best we can do is to construct the transition density matrix from Kohn–Sham orbitals [80].

All the above examples have in common that they are explicit expressions in terms of the many-body wave function, or of the N-body density matrix, but can only be implicitly expressed as density functionals. One can get approximate results by replacing the full many-body wave function with the Kohn–Sham Slater determinant Φ(t), but there is no guarantee that this will give good results.

5.3 Applications

Real-time TDDFT has been implemented in several computer codes, most notably the open-source code octopus [81, 82]. A TDDFT code must deal with two basic numerical tasks: (1) The Kohn–Sham orbitals of the system, and its density, must be represented in space. This can be done either with a suitable basis, or on a spatial grid using finite-element or finite-difference discretization schemes (octopus uses the latter). (2) Time must be discretized as well, and the time-dependent Kohn–Sham equations are propagated forward in time, step by step, ensuring norm conservation.

Let us say a few words about the time propagation. Suppose we know the Kohn–Sham orbitals up to some time τn. The orbitals at the next time step, \(\tau _{n+1} = \tau _{n} + \Delta \tau \), can then formally be written as

$$ \varphi_{j}(\tau_{n} + \Delta \tau) = \hat U(\tau_{n} + \Delta \tau,\tau_{n})\varphi_{j}(\tau_{n})\:, $$
(83)

where \(\hat U(\tau _{n} + \Delta \tau ,\tau _{n})\) is the time evolution operator, which propagates the orbitals forward by one time step Δτ. If Δτ is sufficiently small, we can approximate \(\hat U\) by

$$ \hat U (\tau_{n} + \Delta \tau,\tau_{n}) \approx e^{-i \hat H_{s}(\tau_{n} + \Delta \tau/2) \Delta \tau} \:, $$
(84)

where Hs(τn + Δτ/2) is the time-dependent Kohn–Sham Hamiltonian evaluated midway between the two time steps (in practice, this requires a so-called predictor–corrector scheme [1]). The time propagation (83) can be numerically implemented in various ways [83]; an example is the Crank–Nicolson algorithm:

$$ e^{-i\hat H_{s} \Delta \tau} \approx \frac{1-i\hat H_{s} \Delta \tau/2}{1+i\hat H_{s} \Delta \tau/2} \:, $$
(85)

which is correct to second order in Δτ and unitary (hence, the norm of the wave functions is conserved). This converts the time-dependent Kohn–Sham equations into a set of linear equations that can be solved numerically.
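Here is a minimal sketch of this propagation scheme for a single orbital in a static 1D harmonic well; the grid and time-step parameters are arbitrary, and a real time-dependent Kohn–Sham run would additionally rebuild the Hamiltonian from the instantaneous density at each step, which is omitted here:

```python
import numpy as np

# Crank-Nicolson propagation, Eq. (85), of one orbital in a 1D harmonic well.
npts, dx, dtau = 300, 0.1, 0.01
x = (np.arange(npts) - npts//2) * dx
# kinetic energy -(1/2) d^2/dx^2 by second-order finite differences
T = (np.diag(np.full(npts, 2.0)) - np.diag(np.ones(npts-1), 1)
     - np.diag(np.ones(npts-1), -1)) / (2.0 * dx**2)
H = T + np.diag(0.5 * x**2)                            # static "KS" Hamiltonian
phi = np.exp(-(x - 1.0)**2 / 2.0).astype(complex)      # displaced ground state
phi /= np.sqrt(np.sum(abs(phi)**2) * dx)
Id = np.eye(npts)
# U = (1 - i H dtau/2) / (1 + i H dtau/2), exactly unitary for Hermitian H
U = np.linalg.solve(Id + 0.5j*dtau*H, Id - 0.5j*dtau*H)
for _ in range(100):                                   # propagate to t = 1
    phi = U @ phi
norm = np.sum(abs(phi)**2) * dx                        # stays 1 (unitarity)
dip = np.sum(x * abs(phi)**2) * dx                     # <x>(t) ≈ cos(t) here
print(norm, dip)
```

Note that in practice one never inverts the full matrix as above; the tridiagonal linear system (85) is solved directly at each step.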

The applications of real-time TDDFT can be roughly divided into two categories, related to the two scenarios we discussed in Section 3.

In the first class of applications, the system is initially prepared in a nonequilibrium state through a sudden switching or a short impulsive excitation, and then allowed to propagate freely in time [84–86]. The initial perturbation is kept weak in order to avoid any nonlinear effects, but it is spectrally broad and hence triggers a dynamical behavior of the system in which essentially the entire range of excitations participates. The time-dependent dipole moment d(t), (75), is calculated over a certain time span, and Fourier transformation yields the optical spectrum of the system. The time propagation method has certain advantages especially for large systems [87–90] and metallic clusters [91], but is less frequently used for low-lying excitations of smaller molecules. Below, in Section 6, we will discuss an alternative way of calculating excitation energies.
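The kick-and-Fourier-transform recipe can be sketched for a toy model: a single electron in a 1D harmonic well (a made-up system, not CO2), where the self-consistent update of the Hartree and xc potentials is omitted for brevity. The lowest dipole-allowed transition of this model lies at ω = 1 in atomic units, and that is where the computed spectrum peaks:

```python
import numpy as np

# Toy kick-and-spectrum calculation: bare time-dependent quantum mechanics
# in a fixed 1D harmonic potential (all parameters are arbitrary choices).
N, L = 300, 30.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
lap = np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), -1)
H = -0.5 * lap / dx**2 + np.diag(0.5 * x**2)
E, V = np.linalg.eigh(H)

# weak dipole "kick" at t = 0: multiply the ground state by exp(i*k*x)
k = 0.01
c = V.T @ (V[:, 0] * np.exp(1j * k * x))       # expansion coefficients

# record the dipole moment d(t) = <x>(t) over a time span ...
X = V.T @ np.diag(x) @ V                       # dipole matrix in the eigenbasis
dt, nsteps = 0.1, 2048
d = np.empty(nsteps)
for it in range(nsteps):
    psi = c * np.exp(-1j * E * it * dt)
    d[it] = np.real(np.vdot(psi, X @ psi))

# ... and Fourier transform it to get the spectrum
spec = np.abs(np.fft.rfft(d - d.mean()))
freqs = 2.0 * np.pi * np.fft.rfftfreq(nsteps, dt)
omega_peak = freqs[np.argmax(spec)]            # dominant absorption peak
```

The dominant peak sits at the lowest transition frequency of the model, ω ≈ 1, illustrating how peaks in the Fourier-transformed dipole signal mark the excitation energies.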

Figure 7 shows an example of such a calculation for the CO2 molecule. The optical absorption spectrum, obtained by Fourier transforming the time-dependent dipole moment, agrees well with a spectrum that is obtained from linear response TDDFT (we will discuss this approach in the following section). Both spectra, in turn, agree well with experiment [92].

Fig. 7

Time-dependent Kohn–Sham calculation for a CO2 molecule. Top: time-dependent dipole moment d(t) induced by an initial “kick.” Bottom: dipole spectrum, obtained by Fourier transforming d(t) (full line), compared with the spectrum obtained from linear response TDDFT (thin line)

The second class of applications is in the nonlinear regime, and deals with systems that are subject to strong excitations such as high-intensity laser pulses or collisions with fast, highly charged ionic projectiles. The response following such excitations can be highly nonlinear and far beyond any treatment using perturbative methods. Propagation of the time-dependent Kohn–Sham equations yields the response to all orders, in principle exactly, including collective many-body effects. Quantities of interest include easy observables such as total ionization yields and high-harmonic generation spectra, and difficult observables such as photoelectron spectra, ion probabilities, or momentum distributions.

Figure 8 shows an example. A CO2 molecule is hit with a very short, high-intensity laser pulse which deposits a large amount of excitation energy in a very short time. The snapshot at t = 10.6 a.u. (1 a.u. equals 24 as) shows how a packet of density flies off, and the remaining density is strongly distorted. The TDELF, (71), illustrates how the electronic orbitals have become extremely diffuse, and the bonds are essentially destroyed, which will cause the molecule to break up.

Fig. 8

(Color online) Two snapshots of the time-dependent electron localization function for a CO2 molecule, excited by a laser pulse of photon energy 20 eV and intensity \(1.2\times 10^{15}\,\mathrm{W/cm^{2}}\). Insets: density isosurfaces

TDDFT calculations for strong excitations have been carried out over the past two decades for a variety of atomic and molecular systems [73, 74, 79, 93–98] (see [99] for a review). An intriguing question is whether it is possible to design the excitation (i.e., the laser intensity, pulse shape, and spectral composition) in such a way that a specific control goal can be achieved. The formal framework of TDDFT and optimal control has been worked out [100, 101], but some of the more interesting control goals may be difficult to achieve with standard (adiabatic) TDDFT approaches [102–106].

6 TDDFT and Linear Response

6.1 Formalism

In many situations of practical interest, systems are subjected to small perturbations and hence do not deviate strongly from their initial state. This happens in most applications of spectroscopy, where the response to a weak probe is used to determine the spectral properties of a system. In this case, it is not necessary to seek a fully-fledged solution of the time-dependent Schrödinger or Kohn–Sham equations (although this would yield the desired information, too, as we have seen in Fig. 7). Instead, one can use perturbation theory. The goal of linear response theory is to directly calculate the change of a certain variable or observable to first order in the perturbation, without calculating the change of the wave function. For us, the most important example is the linear density response.

We consider the case where the system is initially in the ground state and a time-dependent potential is switched on at time t 0, see (44). Now, however, v 1(r, t) is treated as a small perturbation. This perturbation will cause some (small) time-dependent changes in the system, and the density will become time-dependent. We expand it as follows:

$$ n(\mathbf{r},t) = n_{0}(\mathbf{r}) + n_{1}(\mathbf{r},t) + n_{2}(\mathbf{r},t) + \ldots . $$
(86)

Here, n 0 is the ground-state density, n 1 is the linear density response (the first-order change in density induced by the perturbation v 1), n 2 is the second-order density response (quadratic in the perturbation v 1), and there will be higher-order terms which we have not explicitly indicated. If the perturbation is small, the linear density response dominates over all higher-order terms in the expansion (86). On the other hand, if the perturbation is strong, a perturbation expansion may not even converge! In that case, it makes more sense to solve the Schrödinger (or Kohn–Sham) equations instead. Notice that all contributions to the density response integrate to zero, e.g., \(\int d^{3}r \, n_{1}(\mathbf {r},t) = 0\), due to norm conservation.

The linear density response can be formally written as

$$ n_{1}(\mathbf{r},t) = \int_{-\infty}^{\infty} dt' \int d^{3}r' \chi(\mathbf{r},t,\mathbf{r}',t') v_{1}(\mathbf{r}',t') \:. $$
(87)

Here, χ(r, t, r′, t′) is the density–density response function, defined as [1, 18]

$$ \chi(\mathbf{r},t,\mathbf{r}',t') = -i\theta(t-t') \langle \Psi_{\rm gs}|[\hat n(\mathbf{r},t-t'),\hat n(\mathbf{r}')]|\Psi_{\rm gs}\rangle \:. $$
(88)

The step function θ(t − t′) ensures that the response is causal, i.e., the response comes after the perturbation. Equation (88) shows that the response function is obtained from the many-body ground state Ψgs, involving a commutator of density operators (in the interaction representation). Hence, via the Hohenberg–Kohn theorem, it is formally a functional of the ground-state density, χ[n 0]. Usually, one is more interested in the frequency-dependent response than in the real-time response:

$$ n_{1}(\mathbf{r},\omega) = \int d^{3}r' \chi(\mathbf{r},\mathbf{r}',\omega) v_{1}(\mathbf{r}',\omega) \:. $$
(89)

The Fourier transform of the response function (88) can be written in the following form, known as the Lehmann representation [1, 18]:

$$\begin{array}{rll} \chi(\mathbf{r},\mathbf{r}',\omega) &=& \sum\limits_{n=1}^{\infty} \bigg\{ \frac{\langle \Psi_{\rm gs} | \hat n(\mathbf{r}) | \Psi_{n}\rangle \langle \Psi_{n} | \hat n(\mathbf{r}') | \Psi_{\rm gs}\rangle} {\omega - \Omega_{n} + i\eta} \\ && \qquad\quad- \frac{\langle \Psi_{\rm gs} | \hat n(\mathbf{r}') | \Psi_{n}\rangle \langle \Psi_{n} | \hat n(\mathbf{r}) | \Psi_{\rm gs}\rangle} {\omega + \Omega_{n} + i\eta} \bigg\}, \hspace{3mm} \end{array} $$
(90)

where the limit η → 0+ is understood. Here,

$$ \Omega_{n} = E_{n} - E_{0} $$
(91)

is the nth excitation energy of the many-body system. This shows explicitly that the response function has poles at the exact excitation energies of the system. This makes sense: if we apply a perturbation v 1(r, ω) whose frequency matches one of the excitation energies, the response of the system is very large (we see a peak in the spectrum).
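The structure of the Lehmann representation (90) is easy to verify numerically on a made-up few-level system, with fixed eigenvalues standing in for the E n and a fixed Hermitian matrix standing in for the density operator (all numbers below are arbitrary toy choices):

```python
import numpy as np

# Toy three-level check of the Lehmann representation (90). The eigenvalues
# E and the Hermitian matrix "na" (standing in for the density operator at
# some point r) are arbitrary made-up numbers.
E = np.array([0.0, 1.0, 2.5])                 # E_0, E_1, E_2
Omega = E[1:] - E[0]                          # excitation energies, Eq. (91)
na = np.array([[0.0, 1.0, 0.3],
               [1.0, 0.0, 0.5],
               [0.3, 0.5, 0.0]])              # <m| n(r) |n> in the eigenbasis
m = na[0, 1:]                                 # matrix elements <gs| n(r) |n>

def chi(omega, eta=1e-3):
    """Lehmann sum (90), evaluated at r = r' so both matrix elements coincide."""
    return np.sum(np.abs(m)**2 / (omega - Omega + 1j * eta)
                  - np.abs(m)**2 / (omega + Omega + 1j * eta))

on_res = abs(chi(Omega[0]))                   # huge: chi has a pole at Omega_1
off_res = abs(chi(Omega[0] + 0.5))            # small away from the poles
```

Evaluated at an excitation energy, the sum is dominated by the near-singular resonant term, while off resonance the response stays small: exactly the peak structure described in the text.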

If we knew the response function χ of the many-body system, calculating the density response would be straightforward: all we have to do is evaluate expression (89). From the density response, spectroscopic observables of interest can then be calculated. For instance, one often considers a monochromatic dipole field along, say, the z direction,

$$ v_{1}(\mathbf{r},t) = \mathcal{E} z \sin(\omega t)\:. $$
(92)

The dynamic dipole polarizability follows as

$$ \alpha(\omega) = -\frac{2}{\cal E} \int d^{3}r \: z n_{1}(\mathbf{r},\omega)\:, $$
(93)

and the photoabsorption cross section σ(ω) is given by

$$ \sigma(\omega) = \frac{4\pi\omega}{c} \: \Im \alpha(\omega) \:. $$
(94)

In TDDFT, the linear density response can be calculated, in principle exactly, as the response of the noninteracting Kohn–Sham system to an effective perturbation [107]:

$$ n_{1}(\mathbf{r},t) = \int dt'\int d^{3}r' \chi_{s}(\mathbf{r},t,\mathbf{r}',t') v_{s1}(\mathbf{r}',t') \:. $$
(95)

Here, χ s (r, r′, t − t′) is the density–density response function of the Kohn–Sham system. The effective perturbation is given as the sum of the real external perturbation plus the linearized Hartree and xc potentials:

$$\begin{array}{rll} v_{s1}(\mathbf{r},t) &=& v_{1}(\mathbf{r},t) + \int d^{3}r'\: \frac{n_{1}(\mathbf{r}',t)}{|\mathbf{r} - \mathbf{r}'|} \\ &&+ \int dt' \int d^{3}r' f_{\rm xc}(\mathbf{r},t,\mathbf{r}',t') n_{1}(\mathbf{r}',t') \:. \end{array} $$
(96)

The so-called xc kernel is defined as the functional derivative of the time-dependent xc potential with respect to the time-dependent density, evaluated at the ground-state density:

$$ f_{\rm xc}(\mathbf{r},t,\mathbf{r}',t') = \left. \frac{\delta v_{\rm xc}[n](\mathbf{r},t)}{\delta n(\mathbf{r}',t')}\right|_{n_{0}(\mathbf{r})} \:. $$
(97)

The effective perturbation (96) depends on the density response, so the TDDFT response equation (95) has to be solved self-consistently. Again, we are usually more interested in the frequency-dependent response, given by

$$ n_{1}(\mathbf{r},\omega) = \int d^{3}r' \chi_{s}(\mathbf{r},\mathbf{r}',\omega) v_{s1}(\mathbf{r}',\omega) \:, $$
(98)

and

$$\begin{array}{rll} v_{s1}(\mathbf{r},\omega) &=& v_{1}(\mathbf{r},\omega) \\ &&+ \int d^{3}r'\left\{ \frac{1}{|\mathbf{r} - \mathbf{r}'|} + f_{\rm xc}(\mathbf{r},\mathbf{r}',\omega)\right\} n_{1}(\mathbf{r}',\omega) \:. \end{array} $$
(99)

The frequency-dependent xc kernel is the Fourier transform of f xc(r, t, r′, t′) with respect to (t − t′).

The Kohn–Sham response function is given by

$$ \chi_{s}(\mathbf{r},\mathbf{r}',\omega) = \sum_{j,k=1}^{\infty} (f_{k} - f_{j}) \frac{\varphi_{j}(\mathbf{r}) \varphi_{k}^{*}(\mathbf{r}) \varphi_{j}^{*}(\mathbf{r}') \varphi_{k}(\mathbf{r}')}{\omega - \omega_{jk} + i\eta} \:, $$
(100)

where f j and \(f_{k}\) are occupation numbers referring to the configuration of the Kohn–Sham ground state (1 for occupied and 0 for empty Kohn–Sham orbitals), and the \(\omega _{jk}\) are defined as

$$ \omega_{jk} = \varepsilon_{j} - \varepsilon_{k} \:. $$
(101)

Thus, \(\chi _{s}(\mathbf {r},\mathbf {r}',\omega )\) has poles at the excitation energies of the noninteracting Kohn–Sham system. Naively, one might conclude from this that the TDDFT linear response must be wrong, since it contains a response function with the wrong pole structure (we pointed out above that the exact response function has poles at the exact excitation energies \(\Omega _{n}\)). The resolution to this apparent contradiction lies in the self-consistent nature of the TDDFT response equation, which “cancels out” the wrong poles and restores the correct poles of the many-body system.
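This pole-shifting mechanism can be illustrated with a scalar one-pole toy model (all numbers made up): take a Kohn–Sham response function with a single pole at ω s and a constant Hartree-xc kernel f. The self-consistent response χ = χ s/(1 − fχ s) is then finite at ω s, and its pole is shifted to Ω = (ω s² + 2ω s f)^{1/2}:

```python
import numpy as np

# Scalar one-pole model of the TDDFT response equation (numbers made up):
# chi_s has a single pole at omega_s; solving n1 = chi_s*(v1 + f*n1)
# self-consistently gives chi = chi_s / (1 - f*chi_s).
omega_s, f = 1.0, 0.3

def chi_s(omega):
    return 2.0 * omega_s / (omega**2 - omega_s**2)

def chi(omega):
    return chi_s(omega) / (1.0 - f * chi_s(omega))

# chi stays finite at the Kohn-Sham pole omega_s (the "wrong" pole cancels) ...
finite_at_ks_pole = abs(chi(omega_s + 1e-9))

# ... and the true pole, located by bisection on 1 - f*chi_s(omega) = 0,
# sits at the shifted frequency Omega
lo, hi = omega_s + 1e-9, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 1.0 - f * chi_s(mid) > 0.0:
        hi = mid
    else:
        lo = mid
Omega_numeric = 0.5 * (lo + hi)
Omega_analytic = np.sqrt(omega_s**2 + 2.0 * omega_s * f)
```

The shifted pole reproduces, in this scalar setting, the two-level formula (111) derived later in this section.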

The TDDFT linear response formalism can be generalized to a spin-dependent form. The response equation is then given by

$$ n_{1\sigma}(\mathbf{r},t) = \sum\limits_{\sigma'}\int dt'\int d^{3}r' \chi_{s\sigma\sigma'}(\mathbf{r},t,\mathbf{r}',t') v_{s1\sigma'}(\mathbf{r}',t') \:, $$
(102)

where the Kohn–Sham response function is diagonal in the spin index:

$$\begin{array}{rll} \chi_{s\sigma\sigma'}(\mathbf{r},\mathbf{r}',\omega)&=& \delta_{\sigma \sigma'}\sum_{j,k=1}^{\infty} (f_{k\sigma} - f_{j\sigma}) \\ &&\times \frac{\varphi_{j\sigma}(\mathbf{r}) \varphi_{k\sigma}^{*}(\mathbf{r}) \varphi_{j\sigma}^{*}(\mathbf{r}') \varphi_{k\sigma}(\mathbf{r}')} {\omega - \omega_{jk\sigma} + i\eta} \:, \end{array} $$
(103)

and \(\omega _{jk\sigma } = \varepsilon _{j\sigma } - \varepsilon _{k\sigma }\), consistent with (101). The effective perturbation is

$$\begin{array}{rll} v_{s1\sigma}(\mathbf{r},\omega) &=& v_{1\sigma}(\mathbf{r},\omega)+ \sum_{\sigma'} \int d^{3}r'\nonumber\\&&\nonumber \times \left\{ \frac{1}{|\mathbf{r} - \mathbf{r}^{\prime}|} + f_{\rm xc\sigma\sigma'}(\mathbf{r},\mathbf{r}',\omega)\right\} n_{1\sigma'}(\mathbf{r}',\omega),\end{array} $$
(104)

featuring the spin-dependent xc kernel \(f_{\rm xc \sigma \sigma '}\).

6.2 How to Calculate Excitation Energies

The excitation energies of a many-body system are defined as the differences between the ground-state energy \(E_{0}\) and the energies of higher-lying eigenstates, \(E_{n}\), see (91). In other words, they are obtained by comparing the energies of stationary states. Why, then, would one want to use a time-dependent approach such as TDDFT? Isn’t that unnecessarily complicated?

It helps to think of an excitation in a different way, namely, as a dynamical process where the system transitions between two eigenstates; the excitation energy then corresponds to a characteristic frequency, which describes the rearrangements of probability density during the transition process. In other words, each excitation corresponds to a characteristic eigenmode of the interacting N-electron system.

The concept of electronic eigenmodes has a familiar analog in classical mechanics [108]. A system of s coupled oscillators carrying out small oscillations is described by the homogeneous linear system of equations

$$ \sum\limits_{j=1}^{s}(k_{ij}-\Omega^{2} m_{ij})A_{j} = 0 \:, \quad i=1,\ldots,s, $$
(105)

where the matrices \(k_{ij}\) and \(m_{ij}\) determine the potential and kinetic energy of the system, respectively:

$$ U= \frac{1}{2} \sum\limits_{ij}^{s} k_{ij} q_{i} q_{j}$$
(106)
$$ T = \frac{1}{2}\sum\limits_{ij}^{s} m_{ij} \dot q_{i} \dot q_{j} $$
(107)

(the \(q_{j}\) are generalized coordinates). Clearly, \(k_{ij}\) and \(m_{ij}\) generalize the concept of spring constant and mass of a simple harmonic oscillator. The solutions of (105) are obtained by finding the roots of the determinant,

$$ \mbox{det}|k_{ij} - \Omega^{2} m_{ij}|=0 \:. $$
(108)

The s solutions \(\Omega _{\alpha }^{2}\), \(\alpha =1,\ldots ,s\), are the eigenfrequencies of the system, and the associated eigenvectors \(A_{j \alpha }\) indicate the profile of the eigenmode, and can be used to determine the normal modes of the system.
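As a concrete instance of (105)–(108), the following sketch solves the textbook case of two equal masses joined to each other and to two walls by springs (all parameters made up); the exact eigenfrequencies are Ω² = k/m and (k + 2κ)/m:

```python
import numpy as np

# Two equal masses m between three springs (wall-m-m-wall); k is the outer
# spring constant, kappa the coupling spring. All parameters are made up.
m_mass, k, kappa = 1.0, 1.0, 0.5
K = np.array([[k + kappa, -kappa],
              [-kappa, k + kappa]])        # potential-energy matrix k_ij
M = m_mass * np.eye(2)                     # mass matrix m_ij

# generalized eigenvalue problem (105)/(108): det|K - Omega^2 M| = 0
Omega2, A = np.linalg.eig(np.linalg.solve(M, K))
Omega2 = np.sort(Omega2.real)              # exact: k/m = 1 and (k+2*kappa)/m = 2
```

The two eigenvectors are the in-phase and out-of-phase normal modes, the classical analog of the electronic eigenmodes discussed above.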

It turns out that calculating excitation energies with TDDFT is very similar to describing the small oscillations of a classical system. The starting point is the TDDFT response equation, (98), but without any external perturbation:

$$ n_{1}(\mathbf{r},\omega) = \int d^{3}r' \chi_{s}(\mathbf{r},\mathbf{r}',\omega) \int d^{3}r'' f_{\rm Hxc}(\mathbf{r}',\mathbf{r}'',\omega)n_{1}(\mathbf{r}'',\omega) $$
(109)

where we define the combined Hartree-xc kernel as \(f_{\rm Hxc}(\mathbf {r},\mathbf {r}',\omega )=|\mathbf {r} - \mathbf {r}'|^{-1} + f_{\rm xc}(\mathbf {r},\mathbf {r}',\omega )\). Equation (109) has the trivial solution \(n_{1}=0\) for all frequencies ω, but at certain special frequencies Ω, there are also nontrivial solutions where the density response is finite and self-sustained, despite the fact that there is no external perturbation. These frequencies correspond to the excitation energies of the system, and n 1(r, Ω) is the profile of the associated electronic eigenmode.

To illustrate how this works, consider the simple case of two electrons in a two-level system with Kohn–Sham orbitals \(\varphi _{1}(\mathbf {r})\) and \(\varphi _{2}(\mathbf {r})\), assumed to be real. Each level is twofold degenerate, and the lower level is doubly occupied. Dropping the infinitesimal \(i\eta \), the Kohn–Sham response function (100) then simplifies to

$$ \chi_{s}(\mathbf{r},\mathbf{r}',\omega) = \frac{4\omega_{21}}{\omega^{2} - \omega_{21}^{2}} \:\varphi_{1}(\mathbf{r}) \varphi_{2}(\mathbf{r}) \varphi_{1}(\mathbf{r}') \varphi_{2}(\mathbf{r}'). $$
(110)

We substitute this into (109), and after a few simple manipulations we find the condition

$$ \omega^{2} = \omega_{21}^{2} + 4 \omega_{21} K(\omega)\:, $$
(111)

where

$$ K(\omega) = \int d^{3}r \int d^{3}r' \varphi_{1}(\mathbf{r}) \varphi_{2}(\mathbf{r}) f_{\rm Hxc}(\mathbf{r},\mathbf{r}',\omega) \varphi_{1}(\mathbf{r}')\varphi_{2}(\mathbf{r}'). $$
(112)

It is a simple exercise to repeat the above example using the spin-dependent response formalism. Assuming that the ground state is not spin-polarized (i.e., the spin-up and spin-down orbitals are the same), one finds the following solutions for the eigenmodes:

$$ \omega^{2}_{\pm} = \omega_{21}^{2} + 2 \omega_{21} [K_{\sigma\sigma}(\omega) \pm K_{\sigma\bar\sigma}(\omega)]. $$
(113)

The plus sign represents a singlet excitation, and the minus sign represents a triplet excitation.

The simple examples for two-level systems are instructive, but turn out not to be quantitatively accurate in practice [109–111]. The eigenmodes can be calculated, in principle exactly, using the so-called Casida equation [112]:

$$ \left(\begin{array}{cc} \textbf{A} & \textbf{K} \\ \textbf{K} & \textbf{A} \end{array}\right)\left(\begin{array}{c} \textbf{X} \\ \textbf{Y} \end{array}\right)= \Omega\left(\begin{array}{cc} -{\bf 1} & {\bf 0} \\ {\bf 0} & {\bf 1} \end{array}\right)\left(\begin{array}{c} \textbf{X} \\ \textbf{Y} \end{array}\right), $$
(114)

where the matrix elements of A and K are given by

$$ A_{ia \sigma,i'a'\sigma'}(\omega) = \delta_{ii'} \delta_{aa'} \delta_{\sigma\sigma'} \omega_{ai \sigma} + K_{ia \sigma,i'a'\sigma'}(\omega) $$
(115)
$$\begin{array}{rll} K_{ia \sigma,i'a'\sigma'}(\omega)&=&\int d^{3}r \int d^{3}r' \varphi_{i\sigma}^{*}(\mathbf{r}) \varphi_{a\sigma}(\mathbf{r}) \\ &&\times f_{\rm Hxc\sigma\sigma'}(\mathbf{r},\mathbf{r}',\omega) \varphi_{i'\sigma'}(\mathbf{r}')\varphi_{a' \sigma'}^{*}(\mathbf{r}') \hspace{5mm} \end{array} $$
(116)

and i, i′ and a, a′ run over occupied and unoccupied Kohn–Sham orbitals, respectively. A detailed derivation of (114) can be found in Ref. [1].

If one assumes that the Kohn–Sham orbitals are real and that the xc kernel is frequency-independent (more about this assumption in Section 6.4), it is possible to recast the Casida equation into the following form:

$$ \sum\limits_{i'a'\sigma'} \left[ \delta_{ii'} \delta_{aa'} \delta_{\sigma\sigma'}\left(\omega^{2}_{ia\sigma}-\Omega^{2}\right) + 2\sqrt{\omega_{ia\sigma}\omega_{i'a'\sigma'}}\, K_{ia\sigma,i'a'\sigma'}\right]Z_{i'a'\sigma'} =0 \:. $$
(117)

This equation can be viewed as the TDDFT counterpart of the eigenvalue equation (105) for classical small oscillations. Hence, (117) yields the excitation energies and eigenmodes of the given system.

Equation (114) mixes excitations and de-excitations (X and Y, respectively). One may simplify (114) by setting the off-diagonal K matrix to zero, which decouples excitations and de-excitations. This so-called Tamm–Dancoff approximation (TDA) is valid if the excitation frequencies are not close to zero, which is the case for molecules, semiconductors, and insulators. The TDA often helps to compensate for deficiencies that arise because the xc functionals are not exactly known and have to be approximated; in certain situations (e.g., triplet instabilities [113], conical intersections [114], and excitons [115]), the TDA can therefore be preferable to the full calculation, in the sense of giving qualitatively correct results.
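The following sketch solves the Casida problem in the symmetric form (117) for a made-up system of just two Kohn–Sham transitions with a frequency-independent (adiabatic) coupling matrix, and compares it to the Tamm–Dancoff approximation, which diagonalizes the A matrix (115) alone:

```python
import numpy as np

# Two made-up Kohn-Sham transitions with bare frequencies omega and an
# adiabatic (frequency-independent) coupling matrix K, as in (115)-(117).
omega = np.array([0.5, 0.8])               # bare KS transition frequencies
K = np.array([[0.10, 0.05],
              [0.05, 0.12]])               # toy coupling matrix, Eq. (116)

# full solution of (117): the Omega^2 are the eigenvalues of
# C = diag(omega**2) + 2*sqrt(omega_i*omega_j)*K_ij
C = np.diag(omega**2) + 2.0 * np.sqrt(np.outer(omega, omega)) * K
Omega_full = np.sort(np.sqrt(np.linalg.eigvalsh(C)))

# Tamm-Dancoff approximation: diagonalize A = diag(omega) + K alone
Omega_tda = np.sort(np.linalg.eigvalsh(np.diag(omega) + K))
```

Both spectra are shifted upward from the bare Kohn–Sham frequencies by the positive coupling; for this toy choice the TDA energies lie slightly above the full Casida energies.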

6.3 Charge-Transfer Excitations

An important class of excitations are those in which charge physically moves from one region (the donor) to a second region (the acceptor) which is spatially separated from the first. Such processes can occur in a wide range of systems, such as in complexes of two or more molecules, or between different functional groups within the same molecule. Unfortunately, the standard approximations in TDDFT fail for charge-transfer excitations [116–118].

Consider the case where the donor and acceptor subsystems are separated by a large distance R. The minimum energy required to remove an electron from the donor is given by the donor’s ionization potential \(I_{d}\). When the electron attaches to the acceptor, some of that energy is regained via the acceptor’s electron affinity \(A_{a}\). Once the electron has moved from donor to acceptor, the two systems feel the electrostatic interaction energy −1 / R of the induced electron–hole pair. The exact charge-transfer energy is therefore

$$ \Omega_{ct}^{\rm exact} = I_{d} - A_{a} - \frac{1}{R} \:. $$
(118)

Now, let us compare this with TDDFT. To make our point, it is sufficient to consider the two-level approximation, (111). After linearization, we obtain

$$\begin{array}{rll} \Omega_{ct} &=& \varepsilon^{a}_{L} - \varepsilon^{d}_{H} + 2\int d^{3}r \int d^{3}r' \: \varphi^{a}_{L}(\mathbf{r}) \varphi^{d}_{H}(\mathbf{r}) \\ &&{}\times f_{\rm Hxc}(\mathbf{r},\mathbf{r}',\omega) \varphi^{a}_{L}(\mathbf{r}') \varphi^{d}_{H}(\mathbf{r}') \:, \end{array} $$
(119)

where \(\varphi ^{d}_{H}(\mathbf {r})\) is the highest occupied donor orbital and \( \varphi ^{a}_{L}(\mathbf {r})\) is the lowest unoccupied acceptor orbital, which have exponentially vanishing overlap in the limit of large separation. Hence, the double integral in (119) becomes zero (assuming that the xc kernel remains finite, which is certainly the case for all standard approximations), and TDDFT simply collapses to the difference between the bare Kohn–Sham eigenvalues,

$$ \Omega_{ct} \longrightarrow \varepsilon^{a}_{L} - \varepsilon^{d}_{H} \:. $$
(120)

This explains why TDDFT often drastically underestimates charge-transfer excitations when conventional xc functionals are used. Hybrid xc functionals [119, 120], in particular the range-separated hybrids of Section 2.5, offer a solution to this problem, and have been successfully used to describe charge-transfer excitations in a variety of systems [121–123].
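The collapse of (119) to (120) can be made quantitative in a toy model: represent the donor and acceptor orbitals by 1D Gaussians a distance R apart (the unit widths and the contact form of the kernel below are arbitrary choices). The coupling integral then dies off exponentially with R:

```python
import numpy as np

# Donor/acceptor orbitals modeled as unit-width 1D Gaussians centered at
# -R/2 and +R/2 (arbitrary toy choices). For a contact kernel
# f_Hxc ~ delta(r - r'), the coupling in (119) reduces to the integral of
# (phi_d * phi_a)**2, which vanishes exponentially with the separation R.
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

def coupling(R):
    phi_d = np.pi**-0.25 * np.exp(-(x + R / 2.0)**2 / 2.0)   # donor HOMO
    phi_a = np.pi**-0.25 * np.exp(-(x - R / 2.0)**2 / 2.0)   # acceptor LUMO
    return np.sum((phi_d * phi_a)**2) * dx

K_values = [coupling(R) for R in (2.0, 5.0, 10.0)]
# analytically, coupling(R) = exp(-R**2/2) / sqrt(2*pi) for this model
```

Already at modest separations the coupling is negligible, so the predicted excitation energy reduces to the bare orbital-energy difference, as in (120).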

6.4 Beyond the Adiabatic Approximation

The exact excitation spectrum of a physical system is determined by the poles of the full response function χ, (90). All of the excitation energies Ω n of the many-body system are, in principle, obtained by solving the Casida equation (114). But it is found that within the adiabatic approximation for \(f_{\rm xc}\), some of the excitations are missing [124–126]! The missing excitations turn out to be those that have the character of double (or multiple) excitations, i.e., the associated many-body excited states, if expanded in a basis of Kohn–Sham Slater determinants, contain dominant contributions of doubly excited configurations.

The Kohn–Sham noninteracting response function χ s (100) has poles at the Kohn–Sham single excitations. Compared with the many-body response function (90), χ s has fewer poles, since a noninteracting system cannot have double and multiple excitations in linear response. Solving the Casida equation in a finite basis and using the adiabatic approximation for \(f_{\rm xc}\), as is done in practice, will not change the number of poles, but just shift them. To obtain double excitations, a frequency-dependent \(f_{\rm xc}(\omega )\) is needed which will generate additional solutions, since the Casida equation then becomes a nonlinear eigenvalue problem.

Thus, we can say the following about the adiabatic approximation in TDDFT:

  • The adiabatic approximation works well for those excitations of the physical system for which a correspondence to a single excitation in the Kohn–Sham system exists. The Casida equation then shifts the Kohn–Sham excitations towards the true single excitations.

  • The frequency dependence of \(f_{\rm xc}\) must kick in for those excitations of the physical system that are missing in the Kohn–Sham system, namely, double or multiple excitations.

Several nonadiabatic TDDFT approaches for the description of molecular double excitations have been explored in the literature. One of them is known as dressed TDDFT [127], where a frequency-dependent xc kernel is explicitly constructed within a small subspace. Other nonadiabatic approaches are based on many-body theory [128–131]. However, none of these approaches is sufficiently straightforward to be part of mainstream TDDFT.

6.5 Periodic Systems and Long-Range Behavior

As seen from (115) and (116), the Casida equation is expressed in the space spanned by one-particle Kohn–Sham transitions [132]. Real-space kernels are suitable for calculations of finite systems such as atoms and molecules. For periodic systems like solids, the momentum space representation of the Hartree-xc kernel is more convenient. In Section 7.4, we will use this approach to describe the optical properties of insulating solids.

The real-space representation of the kernel is related to the momentum space representation as

$$\begin{array}{rll} f_{{\rm Hxc}\sigma\sigma'}(\mathbf{r},\mathbf{r}',\omega)&=&\frac{1}{V}\sum\limits_{\mathbf{q}\in{\rm FBZ}}\sum\limits_{\mathbf{G},\mathbf{G}'}e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\\ &&\times f_{{\rm Hxc}\sigma\sigma'}(\mathbf{q},\mathbf{G},\mathbf{G}',\omega)e^{-i(\mathbf{q}+\mathbf{G}')\cdot\mathbf{r}'},\\ \end{array} $$
(121)

where G, G′ are reciprocal lattice vectors. With (121), the Hartree-xc kernel in transition space, (116), becomes

$$\begin{array}{rll} K_{ia\sigma,i'a'\sigma'}{}&=&{} \frac{1}{V}\sum\limits_{\mathbf{q}\in{\rm FBZ}}\sum\limits_{\mathbf{G},\mathbf{G}'} \langle i\mathbf{k}_{i}\sigma | e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}| a\mathbf{k}_{a}\sigma \rangle \\ &\times&{} f_{{\rm Hxc}\sigma\sigma'}(\mathbf{q},\mathbf{G},\mathbf{G}') \langle a'\mathbf{k}_{a'}\sigma' | e^{-i(\mathbf{q}+\mathbf{G}')\cdot\mathbf{r}'}| i'\mathbf{k}_{i'}\sigma' \rangle \\ &\times&\delta_{\mathbf{k}_{a}-\mathbf{k}_{i}+\mathbf{q},\mathbf{G}_{0}}\delta_{\mathbf{k}_{a'}-\mathbf{k}_{i'}+\mathbf{q},\mathbf{G}'_{0}}, \end{array} $$
(122)

with the matrix elements defined as

$$ \langle i\mathbf{k}_{i}\sigma | e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}} | a\mathbf{k}_{a}\sigma \rangle \equiv\int d^{3}r \phi_{i\mathbf{k}_{i}\sigma}^{*}(\mathbf{r})e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\phi_{a\mathbf{k}_{a}\sigma}(\mathbf{r}), $$
(123)

where k’s are the Bloch wavevectors of the corresponding wavefunctions, and G 0, \(\textbf {G}'_{0}\) can be any reciprocal lattice vector. The Kronecker-δs in (122) are a consequence of Bloch’s theorem.

The Hartree part of \(f_{\rm Hxc}\) can be shown to be largely irrelevant for the optical properties of insulators close to the gap [133]; we therefore focus on the xc part in the following. For G = G′ = 0 (the so-called head of f xc) in the important limit of q → 0, which corresponds to infinite range in real space, both matrix elements in (122) behave as \(O(q^{1})\). All the local and semilocal xc kernels (derived from LDA and GGA in the adiabatic approximation) have finite values for the head. Since the two matrix elements in (122) together vanish as \(O(q^{2})\), the head contribution to the sum in (122) is zero for all (semi)local kernels. For these kernels, all changes to the Kohn–Sham spectrum come from the body of \(f_{\rm xc}\) (where G ≠ 0, G′ ≠ 0).

Gonze et al. [134, 135] pointed out that the head of \(f_{\rm xc}\) has to diverge as \(q^{-2}\) for q → 0 to correctly describe the polarization of periodic insulators. With the \(q^{-2}\) divergence, the head of \(f_{\rm xc}\) contributes to the sum in (122), dominating the other parts of \(f_{\rm xc}\) [wings (G = 0, G′ ≠ 0 or vice versa) and body]. Local and semilocal xc kernels do not have this long-range behavior, and there is no obvious and consistent way of modifying them to include the long range.

The long-range behavior of the xc kernel is unimportant for low-lying excitations in finite systems such as atoms and molecules, which means that local and semilocal xc kernels will work reasonably well. However, for extended and periodic systems, it is crucial to have xc kernels with the proper long-range behavior to obtain correct optical spectra [133, 136]. We will discuss this further in Section 7.4.

7 Applications in Linear Response

Linear response TDDFT has been implemented in many computer codes in quantum chemistry and materials science. In this section, we will give an overview of some of the most important areas of application.

7.1 Standard Approximations for the xc Kernel

To carry out a TDDFT calculation in the linear response formalism, one must know the xc kernel. The simplest thing to do is to use the random phase approximation (RPA), where the xc kernel is set to zero:

$$ f_{\rm xc}^{\rm RPA}(\mathbf{r},\mathbf{r}',\omega)=0. $$
(124)

This seemingly trivial kernel originates from many-body theory, where one sums up all the ‘bubble’-type diagrams [18]. Though the form is similar to time-dependent Hartree theory, TDDFT RPA is fundamentally different due to the use of the Kohn–Sham system. The RPA kernel has been applied to molecules and is known to produce reasonably good results. For insulating solids, however, the RPA spectra miss important features such as excitonic effects (see below).

The proper way to obtain \(f_{\rm xc}\) is via (97): first approximate the time-dependent xc potential, calculate \(f_{\rm xc}(\mathbf {r},t,\mathbf {r}',t')\) by taking the functional derivative, and then get the frequency-dependent kernel \(f_{\rm xc}(\mathbf {r},\mathbf {r}',\omega )\) via Fourier transform. However, these steps are rarely carried out in practice, since most of the xc kernels in use are adiabatic kernels. Recall the adiabatic approximation for the xc potential, (69), which uses the ground-state functional and evaluates it at the time-dependent density. The adiabatic approximation for the xc kernel is

$$ f_{\rm xc}^{\rm A}(\mathbf{r},\mathbf{r}') = \frac{\delta v_{\rm xc}^{\rm gs}[n_{0}](\mathbf{r})}{\delta n_{0}(\mathbf{r}')} = \frac{\delta^{2} E_{\rm xc}[n_{0}]}{\delta n_{0}(\mathbf{r}) \delta n_{0}(\mathbf{r}')} \:, $$
(125)

which is frequency-independent.

An important example is the ALDA xc kernel:

$$ f_{\rm xc}^{\rm ALDA}(\mathbf{r},\mathbf{r}') = \left. \frac{d^{2} e_{\rm xc}^{h}(\bar n)}{d \bar n^{2}}\right|_{\bar n = n_{0}(\mathbf{r})} \delta(\mathbf{r}-\mathbf{r}') \:, $$
(126)

whose exchange part is explicitly given by

$$ f_{\rm x}^{\rm ALDA}(\mathbf{r},\mathbf{r}')=-[9\pi n_{0}^{2}(\mathbf{r})]^{-1/3}\delta(\mathbf{r}-\mathbf{r}'), $$
(127)

and the correlation part can be obtained by applying (126) to any of the interpolations of \(e_{\rm c}^{\rm LDA}\) [19–21].

This xc kernel is not only frequency-independent, it is also local. One can derive adiabatic-GGA kernels in a similar fashion, starting from any of the standard GGA functionals such as those discussed in Section 2.5. Adiabatic hybrid kernels, most notably B3LYP, are very widely used and have contributed much to the success of TDDFT in quantum chemistry.
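As a quick consistency check of (126) and (127), one can compare the analytic ALDA exchange kernel with a numerical second derivative of the LDA exchange energy density e x(n) = −(3/4)(3/π)^{1/3} n^{4/3} (the density value and step size below are arbitrary):

```python
import numpy as np

# LDA exchange energy density (per unit volume) and the analytic ALDA
# exchange kernel (127); n0 and h below are arbitrary test values.
def e_x(n):
    return -0.75 * (3.0 / np.pi)**(1.0 / 3.0) * n**(4.0 / 3.0)

def f_x_analytic(n):
    return -(9.0 * np.pi * n**2)**(-1.0 / 3.0)   # prefactor of delta(r - r')

n0, h = 0.3, 1e-4
# central-difference second derivative, cf. the adiabatic kernel (125)/(126)
f_x_numeric = (e_x(n0 + h) - 2.0 * e_x(n0) + e_x(n0 - h)) / h**2
```

The finite-difference second derivative agrees with the closed form in (127), confirming that the two expressions are the same functional derivative.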

7.2 Molecular Excitations

As an example, let us consider the benzene molecule. Table 3 shows eight low-lying singlet and triplet excitation energies of benzene, calculated with various xc functionals [137]. As an overall measure of the accuracy of the calculations, the mean absolute error (MAE) was also calculated for each functional. Based on this measure, the nonhybrid xc functionals (LSD, PBE, and TPSS) perform at about the same level, with an MAE of 0.3–0.4 eV. The hybrid functionals (PBE0 and B3LYP) used in this study perform somewhat better, with an MAE ranging from 0.18 to 0.27 eV. As we will see in the following examples, these findings are quite typical.

Table 3 Low-lying excitation energies (in eV) of the benzene molecule (C6H6) calculated with TDDFT using various xc functionals with the basis set 6-31++G(3df,3pd), and geometry optimized using the respective functionals with the same basis [137]

Figure 9 shows the MAE for 28 xc functionals and for HF, obtained by calculating 103 low-lying vertical excitation energies for a test set of 28 medium-sized organic molecules [139], compared against accurate theoretical benchmarks. The Kohn–Sham ground states were obtained with the same xc functionals that were used, in the adiabatic approximation, for the TDDFT calculations. Identical molecular geometries were used for each xc functional.

Fig. 9
figure 9

Mean absolute error for the lowest vertical excitation energies of a test set of 28 medium-sized organic molecules (103 excited states). Reproduced with permission from ACS from Ref. [139] Ⓒ2009

TDHF gives very large errors (over 1 eV), almost always overestimating the transition energies; any TDDFT calculation reduces the error by at least half. Among the xc functionals, we can distinguish between pure density functionals (LDA and GGA), meta-GGAs, hybrid GGAs, and long-range-corrected hybrids (the first eight functionals in Fig. 9). The LDA and GGAs all give an MAE of order 0.5 eV. Meta-GGAs (VSXC and TPSS) give better agreement (about 0.4 eV). But the best choices are clearly the hybrid GGAs (B3LYP, X3LYP, B98, mPW1PW91, and PBE0). In this case, the MAE is reduced to less than 0.25 eV. Similar findings were also reported in a more recent benchmark study [140].

The long-range-corrected hybrids such as CAM-B3LYP give a slightly higher error, owing to a general overestimation of the transition energies. This is mainly due to the choice of the test set, in which charge-transfer excitations are not significantly represented. The advantage of long-range-corrected hybrids emerges for such excitations in larger molecules.

As these examples illustrate, TDDFT offers an excellent compromise between computational efficiency and accuracy. TDDFT scales as \(N^{2}\) to \(N^{3}\), depending on the implementation; wave-function-based methods of comparable accuracy scale at least one or two orders of magnitude worse. The current limit of high-end wave-function-based methods is about 50 atoms [139, 141, 142]. By contrast, TDDFT allows the treatment of molecules containing hundreds of atoms. Examples of medium-sized systems are shown in Figs. 10 and 11.

Fig. 10
figure 10

Circular dichroism spectrum of \(D_{2}\)-C84, comparing TDDFT with experiment. (ε: molar decadic absorption coefficient; R: rotatory strength; ΔE: excitation energy). Reproduced with permission from ACS from Ref. [143] Ⓒ2002

Fig. 11
figure 11

(Colored online) Calculated (blue line) and experimental (red line) absorption spectra of an Iridium(III) cyclometallated complex. Blue vertical lines correspond to the unbroadened oscillator strengths of the calculated singlet–singlet transitions. Reproduced with permission from Elsevier from Ref. [144] Ⓒ2009

Figure 10 shows the circular dichroism spectrum of a large chiral fullerene molecule. TDDFT was able to resolve a debate regarding the molecular configuration of this system [143]. Figure 11 shows the absorption spectrum of an Iridium(III) cyclometallated complex [144].

7.3 Potential Energy Surfaces

Consider a system with \(N_{e}\) electrons and \(N_{n}\) nuclei, with nuclear masses \(M_{j}\) and charges \(Z_{j}\), where \(j = 1,\ldots,N_{n}\). Formally, all electrons and all nuclei are quantum mechanical particles, forming an interacting \(N_{e}+N_{n}\)-body system. For instance, the wave function of the H\(_{2}\) molecule depends on the coordinates of the two electrons, \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\), and on the coordinates of the two protons, \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\): hence, it is a four-body problem.

We denote the sets of electronic and nuclear spatial coordinates by \( \underline {\underline {\rm r}} \equiv \{\mathbf {r}_{1},\ldots ,\mathbf {r}_{N_{e}}\}\) and \(\underline {\underline {\rm R}} \equiv \{\mathbf {R}_{1},\ldots ,\mathbf {R}_{N_{n}}\}\), respectively. The many-body eigenstates of the system are a function of the two sets of coordinates, \(\Psi _{j}(\underline {\underline {\rm r}},\underline {\underline {\rm R}})\), and obey the following many-body Schrödinger equation

$$ \hat{H}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) \Psi_{j}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) = E_{j} \Psi_{j}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) \:. $$
(128)

In the absence of any external potentials, the Hamiltonian of the coupled electron-nuclear system is given by

$$\begin{array}{rll} \hat{H}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) &=& -\sum\limits_{j=1}^{N_{e}} \frac{\nabla_{\mathbf{r}_{j}}^{2}}{2} + \frac{1}{2}\sum\limits_{{j,k}\atop{j\ne k}}^{N_{e}} \frac{1}{|\mathbf{r}_{j} - \mathbf{r}_{k}|} -\sum\limits_{j=1}^{N_{n}}\frac{\nabla_{\mathbf{R}_{j}}^{2}}{2M_{j}} \\ && + \frac{1}{2}\sum\limits_{{j,k}\atop{j\ne k}}^{N_{n}} \frac{Z_{j} Z_{k}}{|\mathbf{R}_{j} - \mathbf{R}_{k}|} - \sum\limits_{j=1}^{N_{e}}\sum\limits_{k=1}^{N_{n}} \frac{Z_{k}}{|\mathbf{r}_{j} - \mathbf{R}_{k}|} \\ &\equiv& \hat{T}_{e} + \hat{W}_{ee} + \hat{T}_{n} + \hat{W}_{nn} + \hat{W}_{en} \:. \end{array} $$
(129)

As can be seen, \(\hat {H}(\underline {\underline {\rm r}},\underline {\underline {\rm R}})\) is the sum of an electronic Hamiltonian containing kinetic energy and electron–electron interaction, \(\hat {T}_{e} + \hat {W}_{ee}\), a similar nuclear Hamiltonian \(\hat {T}_{n} + \hat {W}_{nn}\), and an electron-nuclear coupling term \(\hat {W}_{en}\).

The full coupled electron-nuclear many-body problem is too difficult to solve in general; one usually works in the Born–Oppenheimer (BO) approximation to obtain the structure of molecules and solids. The central idea of the BO approximation is that because of the large difference between the electronic and nuclear masses (the proton is 1,836 times more massive than the electron), the two sets of degrees of freedom are essentially decoupled.

The BO Hamiltonian is defined as the full Hamiltonian (129) minus the nuclear kinetic energy term:

$$\begin{array}{rll} \hat{H}_{\rm BO}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) &=& -\sum\limits_{j=1}^{N_{e}}\frac{\nabla_{\mathbf{r}_{j}}^{2}}{2} + \frac{1}{2}\sum\limits_{{j,k}\atop{j\ne k}}^{N_{e}} \frac{1}{|\mathbf{r}_{j} - \mathbf{r}_{k}|} \\ && + \frac{1}{2}\sum\limits_{{j,k}\atop{j\ne k}}^{N_{n}} \frac{Z_{j} Z_{k}}{|\mathbf{R}_{j} - \mathbf{R}_{k}|} - \sum\limits_{j=1}^{N_{e}}\sum\limits_{k=1}^{N_{n}} \frac{Z_{k}}{|\mathbf{r}_{j} - \mathbf{R}_{k}|} \:. \end{array} $$
(130)

This Hamiltonian depends parametrically on the nuclear coordinates: this means that the nuclear positions \(\mathbf {R}_{1},\ldots ,\) \(\mathbf {R}_{N_{n}}\) are just treated as a set of given numbers, indicating a given nuclear configuration; they are no longer quantum mechanical operators. For each configuration, one solves the Schrödinger equation

$$ \hat{H}_{\rm BO}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) \Psi_{j}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) = E_{j} (\underline{\underline{\rm R}}) \Psi_{j}(\underline{\underline{\rm r}},\underline{\underline{\rm R}}) \:. $$
(131)

The energy eigenvalues \(E_{j} (\underline {\underline {\rm R}})\) define the landscape of potential energy surfaces, whose dimensionality depends on the degrees of freedom of the molecule. Thus, for a diatomic molecule, \(E_{j} (\underline {\underline {\rm R}})\) can be represented simply by a curve as a function of the internuclear distance, whereas for \(N_{n} \ge 3\) it is a function of \(3N_{n} - 6\) coordinates (\(3N_{n} - 5\) for linear molecules) and should therefore more appropriately be called a “hypersurface”; a potential energy surface in the literal sense is a 2D section through this higher-dimensional space. In common usage, however, the distinction between a surface and a hypersurface is usually not made.
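The dimension counting above is easily encoded; the helper below is purely illustrative:

```python
def pes_dimension(n_nuclei, linear=False):
    """Number of internal coordinates on which E_j(R) depends:
    1 for a diatomic, else 3*N - 5 (linear) or 3*N - 6 (nonlinear)."""
    if n_nuclei == 2:
        return 1
    return 3 * n_nuclei - 5 if linear else 3 * n_nuclei - 6

print(pes_dimension(2))               # 1: a curve vs. internuclear distance
print(pes_dimension(3, linear=True))  # 4: e.g., CO2
print(pes_dimension(3))               # 3: e.g., H2O
```

Note that the diatomic case is consistent with the linear-molecule formula, since 3·2 − 5 = 1.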

The ground-state potential energy surface \(E_{0} (\underline {\underline {\rm R}})\) is of particular interest because its minimum defines the molecular equilibrium position. However, excited-state potential energy surfaces are important too, and play a crucial role in chemical reactions, photochemical processes, and in spectroscopy.

All potential energy surfaces following from (131) are called adiabatic, indicating a complete decoupling of electronic and nuclear degrees of freedom. The calculation of adiabatic potential energy surfaces is one of the key tasks of computational chemistry. The lowest potential energy surface can be obtained exactly, in principle, using ground-state DFT; for excited-state potential energy surfaces, forces, and vibrational frequencies, the appropriate method is TDDFT [143, 145].

Figure 12 shows the \(^{1}A_{1}\) manifold of the CO-stretch potential energy curves of planar formaldehyde [146]. These are excited states, several eV above the ground-state potential energy curve (whose minimum is set at 0 eV). The dashed lines are results from a multireference doubles CI benchmark calculation; the full lines were obtained with TDDFT, using the ALDA with an asymptotic correction. An xc functional with the correct asymptotics is important here because these are high-lying (Rydberg) excitations.

Fig. 12
figure 12

\(^{1}A_{1}\) CO-stretch potential energy curves of planar formaldehyde (CH2O). Full lines: TDDFT. Dashed lines: multireference doubles CI. Reproduced with permission from Wiley from [146] Ⓒ1998

A prominent feature in Fig. 12 is the avoided crossing between the states labeled \((\pi,\pi^{*})\) and \((n,3p_{y})\). TDDFT reproduces this avoided crossing qualitatively correctly, thanks to the configuration mixing of individual single-particle transitions induced by the off-diagonal matrix elements \(K_{ia\sigma,i'a'\sigma'}\) in the Casida equation (114) [147].
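The generic mechanism can be illustrated with a schematic two-level model (not the actual formaldehyde states): two diabatic curves that cross linearly, coupled by a constant off-diagonal element c, yield adiabatic curves whose minimum separation is 2|c|:

```python
import numpy as np

def adiabatic_curves(R, slope=1.0, R0=0.0, c=0.1):
    """Eigenvalues of the model Hamiltonian H(R) = [[a*(R-R0), c], [c, -a*(R-R0)]];
    the coupling c plays the role of the off-diagonal Casida matrix elements."""
    d = slope * (R - R0)
    gap = np.sqrt(d**2 + c**2)
    return -gap, gap  # lower and upper adiabatic curves

lower, upper = adiabatic_curves(0.0)  # at the diabatic crossing point
print(upper - lower)  # minimum splitting, 2*|c|
```

Far from the crossing, the adiabatic curves approach the diabatic ones; only near \(R_0\) does the coupling dominate.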

The \((n,3p_{y})\) curve is almost on top of the exact curve, at least for C–O distances before the avoided crossing. On the other hand, the \((n,3d_{yz})\) curve comes out about 1 eV too high, primarily owing to limitations of the xc functional used in this calculation.

There are many TDDFT studies in organic and inorganic photochemistry calculating excited-state potential energy surfaces [148–152]. The performance of TDDFT depends strongly on the xc functional used (choosing appropriate basis sets is another important factor). Complications can arise for potential energy surfaces associated with excitations that have a long-range, charge-transfer character [153, 154]. In that case, local or semilocal xc functionals will fail, and one needs to use xc functionals with the correct long-range behavior, see Section 6.3.

Another source of complications is a ground state with an intrinsically multiconfigurational character. This can lead to circumstances in which two potential energy surfaces become degenerate and touch each other, giving rise to so-called conical intersections. The name reflects the topology in the vicinity of the point of degeneracy, which looks like an inverted cone balancing on the tip of another cone. TDDFT has serious problems with conical intersections [114, 155, 156]: it typically produces the wrong topology in the vicinity of the intersection point. These difficulties are closely related to the problems of TDDFT in describing double excitations: an explicitly frequency-dependent xc kernel \(f_{\rm xc}(\omega)\) is required for a proper description of conical intersections [147].

7.4 Optical Properties of Solids

At present, the majority of applications of TDDFT are in the area of computational (bio)chemistry. However, applications in solid-state physics and materials science are emerging at a rapid rate. In this section, we will highlight some of the most important issues for TDDFT in solids: the band-gap problem, excitons in insulators, and plasmons in metals.

7.4.1 The Band Gap versus the Optical Gap

The fundamental band gap \(E_{g}\) is a key quantity that characterizes insulating materials. It is defined as follows:

$$ E_{g}(N) = I(N)-A(N), $$
(132)

where I(N) and A(N) are the ionization potential and the electron affinity of the N-electron system, see (24) and (25). Since, in exact Kohn–Sham DFT, the ionization potential is given by the highest occupied eigenvalue, \(I(N) = -\varepsilon_{N}(N)\), and the electron affinity is the ionization potential of the (N + 1)-electron system, \(A(N) = I(N+1) = -\varepsilon_{N+1}(N+1)\), we obtain

$$ E_{g}(N) = \varepsilon_{N+1}(N+1) - \varepsilon_{N}(N) \:. $$
(133)

It is important to note that the right-hand side of (133) contains the highest occupied Kohn–Sham eigenvalues of two different systems, namely with N and with N + 1 electrons. In a macroscopic solid with \(10^{23}\) electrons, it would of course be impossible to calculate the band gap according to this definition.

The band gap in the noninteracting Kohn–Sham system, also known as the Kohn–Sham gap, is defined as

$$ E_{g,s}(N) = \varepsilon_{N+1}(N) - \varepsilon_{N}(N) \:. $$
(134)

In contrast with the interacting gap \(E_{g}\), the Kohn–Sham gap \(E_{g,s}\) is simply the difference between the highest occupied and lowest unoccupied single-particle levels of the same N-particle system. This quantity is what is usually taken as the band gap in standard DFT band structure calculations. We can relate the two gaps by

$$ E_{g} = E_{g,s} + \Delta_{\rm xc}, $$
(135)

which defines \(\Delta_{\rm xc}\) as a many-body correction to the Kohn–Sham gap. By making use of the previous relations, we find \(\Delta_{\rm xc} = \varepsilon_{N+1}(N+1) - \varepsilon_{N+1}(N)\). It turns out that the many-body gap correction \(\Delta_{\rm xc}\) can be related to a very fundamental property of density functionals, known as derivative discontinuities [157–160].

The so-called band-gap problem of DFT reflects the fact that, in practice, \(E_{g,s}\) is often a poor approximation to \(E_{g}\), typically underestimating the exact band gap by as much as 50 %. The reason for this is twofold: commonly used approximate xc functionals (such as LDA and GGA) tend to underestimate the exact Kohn–Sham gap \(E_{g,s}\), and they do not yield any discontinuity correction \(\Delta_{\rm xc}\). An extreme example of the second failure is given by Mott insulators, which are typically predicted to be metallic by DFT. This is no accident: in Mott insulators, the exact Kohn–Sham system is metallic (i.e., \(E_{g,s} = 0\)), so that \(E_{g} = \Delta_{\rm xc}\). Clearly, standard xc functionals (where \(\Delta_{\rm xc}\) vanishes) are unfit to describe Mott insulators.
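The bookkeeping in (132)–(135) can be made concrete with a toy calculation; all eigenvalues below are invented and serve only to illustrate the definitions:

```python
def gaps(eps_homo_N, eps_lumo_N, eps_homo_Nplus1):
    """Fundamental gap (133), Kohn-Sham gap (134), and the discontinuity
    correction Delta_xc = E_g - E_gs of (135), from Kohn-Sham eigenvalues."""
    E_g = eps_homo_Nplus1 - eps_homo_N   # eps_{N+1}(N+1) - eps_N(N)
    E_gs = eps_lumo_N - eps_homo_N       # eps_{N+1}(N)   - eps_N(N)
    return E_g, E_gs, E_g - E_gs

# Invented eigenvalues (eV): HOMO of N system, LUMO of N system, HOMO of N+1 system
E_g, E_gs, Delta_xc = gaps(-6.0, -4.5, -3.8)
print(E_g, E_gs, Delta_xc)
```

Note that the returned correction equals \(\varepsilon_{N+1}(N+1) - \varepsilon_{N+1}(N)\), consistent with the relation quoted after (135).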

It is important to distinguish between the fundamental band gap and the optical gap [123]. The band gap describes the energy that an electron must have so that, when added to an N-electron system, the result is an N + 1 electron system in its ground state. The total charge of the system changes by −1 in this process. By contrast, the optical gap describes the lowest neutral excitation of an N-electron system: here, the number of electrons remains unchanged. The two gaps are schematically illustrated in Fig. 13 together with the Kohn–Sham gap.

Fig. 13
figure 13

Schematic illustration of the different types of gaps in DFT and TDDFT. The Kohn–Sham gap is defined as the difference of the highest occupied and lowest unoccupied Kohn–Sham eigenvalues of the N-electron system, see (134). The fundamental band gap [or quasiparticle (QP) gap] is the Kohn–Sham gap plus the derivative discontinuity, see (135). The optical gap is the band gap minus the lowest exciton binding energy \(E_{0}^{\rm ex}\). The Kohn–Sham gap can be viewed as an approximation for the optical gap

The band gap of insulators can be accurately obtained from the so-called quasiparticle energies, which are defined as the single-particle energies of a noninteracting system whose one-particle Green’s function is the same as that of the real interacting system (note how this is different from the definition of the Kohn–Sham system). In practice, this is often done using the GW method [133, 161, 162]. GW calculations are more demanding than DFT, but they produce band structures of solids that agree very well with experiment. Generalized Kohn–Sham schemes [27, 28] can also give good band gaps.

While the band gap can be measured using techniques in which electrons are added or removed from the system (such as photoemission spectroscopy), the optical gap refers to the lowest neutral excitation. The difference between quasiparticle band gap and optical gap is the lowest exciton binding energy, \(E_{0}^{\rm ex}\). Excitons can be viewed as bound electron–hole pairs, whose bound states form a Rydberg series, analogous to the hydrogen atom [115]. The band gap is given by the asymptotic limit of the excitonic Rydberg series [163] (at least for direct-gap insulators and semiconductors).

TDDFT can be used to calculate optical spectra of materials in principle exactly. In the case of insulators and semiconductors, this means that it should, in principle, yield the correct optical gap, the correct excitonic Rydberg series (if the material under study has one), and hence the correct band gap (obtained as the limit of the excitonic Rydberg series). We will discuss in detail in the following section how optical spectra of insulators and semiconductors are calculated with TDDFT in practice.

As always in TDDFT, the burden rests on the xc kernel. In the case of bulk insulators, \(f_{\rm xc}\) needs to accomplish two things: it needs to “open up” the gap (i.e., compensate the fact that the Kohn–Sham gap underestimates the band gap), and it needs to produce the electron–hole interaction that is responsible for the formation of excitons. Formally, we can write this as follows [164]:

$$ f_{\rm xc} = f_{\rm xc}^{\rm qp} + f_{\rm xc}^{\rm ex} \:. $$
(136)

The xc kernel is written as the sum of a quasiparticle part \(f_{\rm xc}^{\rm qp}\) (which opens up the gap) and an excitonic part \(f_{\rm xc}^{\rm ex}\) (which causes excitonic effects). The excitonic part turns out to be easier to approximate than the quasiparticle part (see below). In fact, no suitable approximations to \(f_{\rm xc}^{\rm qp}\) exist at present. To a large extent, this is due to the fact that the quasiparticle part is intrinsically nonadiabatic [165]: the frequency dependence is essential to shift the Kohn–Sham gap and to produce an excitonic Rydberg series [115]. In view of this, one usually ignores the quasiparticle part of \(f_{\rm xc}\) and starts from a band structure in which the gap has been corrected by other means (such as via GW, or with a simple scissor operator [166]).

7.4.2 Optical Spectra Of Semiconductors and Insulators

In the optical spectroscopy of solids, a central quantity is the complex index of refraction \(\tilde n\), defined as [167]

$$ \tilde n^{2} = \epsilon_{\rm mac}(\omega) \:, $$
(137)

where \(\epsilon_{\rm mac}(\omega)\) is the macroscopic dielectric function. The imaginary part of \(\epsilon_{\rm mac}(\omega)\) hence describes the photoabsorption of a solid, as illustrated in Fig. 14 for the case of silicon. To calculate the macroscopic dielectric function from first principles, we need to take a detour and first calculate the microscopic dielectric matrix, \(\epsilon(\mathbf{q},\mathbf{G},\mathbf{G}',\omega)\), where \(\mathbf{G}\) and \(\mathbf{G}'\) are reciprocal lattice vectors. The macroscopic dielectric function then follows as the limit [133]

$$ \epsilon_{\rm mac}(\omega)= \lim_{\mathbf{q} \to0}\frac{1}{\epsilon^{-1}(\mathbf{q},\mathbf{G}=0,\mathbf{G}'=0,\omega)} \:. $$
(138)

In turn, the inverse dielectric function of a periodic system can be obtained from the response function as

$$ \epsilon^{-1}(\mathbf{q},\mathbf{G},\mathbf{G}',\omega) = \delta_{\mathbf{G} \mathbf{G}'} + v_{\mathbf{G}}(\mathbf{q}) \chi(\mathbf{q},\mathbf{G},\mathbf{G}',\omega) \:, $$
(139)

where \(v_{\mathbf{G}}(\mathbf{q}) = 4\pi/|\mathbf{q} + \mathbf{G}|^{2} \). In TDDFT, the full response function is expressed as

$$\begin{array}{@{}rcl@{}} \chi(\mathbf{q},\mathbf{G},\mathbf{G}',\omega) &=& \sum_{\mathbf{G}''}\bigg[\delta_{\mathbf{G}_{1}\mathbf{G}_{2}} - \sum_{\mathbf{G}_{3}}\chi_{s}(\mathbf{q},\mathbf{G}_{1},\mathbf{G}_{3},\omega)\\ && \times\, f_{\rm Hxc}(\mathbf{q},\mathbf{G}_{3},\mathbf{G}_{2},\omega)\bigg]^{-1}_{\mathbf{G}\mathbf{G}''}\,\chi_{s}(\mathbf{q},\mathbf{G}'',\mathbf{G}',\omega), \end{array} $$
(140)

where the xc kernel in reciprocal space was defined in (121). By calculating χ on a frequency grid, one thus obtains the optical spectrum (including a finite broadening in order to make the spectrum smooth). The size of the matrices involved is determined by the number of reciprocal lattice vectors kept in the numerical discretization scheme. The spectral contribution from large G and G′ elements of χ typically decays rapidly, so only a few reciprocal lattice vectors need to be considered.
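Numerically, the chain (138)–(140) is a few lines of matrix algebra in the reciprocal-lattice indices. The sketch below uses invented 2×2 matrices at a single (q, ω) point; no attempt is made at realistic values:

```python
import numpy as np

def chi_from_chi_s(chi_s, f_hxc):
    """TDDFT Dyson equation (140) in matrix form: chi = [1 - chi_s f_Hxc]^(-1) chi_s."""
    one = np.eye(chi_s.shape[0], dtype=complex)
    return np.linalg.inv(one - chi_s @ f_hxc) @ chi_s

def eps_mac(chi, v_head):
    """Macroscopic dielectric function via (139) and (138):
    eps^(-1)_{00} = 1 + v_0 chi_{00}, then eps_mac = 1/eps^(-1)_{00}."""
    return 1.0 / (1.0 + v_head * chi[0, 0])

# Invented Kohn-Sham response and Hxc kernel matrices at one (q, omega) point
chi_s = np.array([[-0.30, 0.02], [0.02, -0.10]], dtype=complex)
f_hxc = np.array([[2.00, 0.00], [0.00, 0.50]], dtype=complex)
chi = chi_from_chi_s(chi_s, f_hxc)
print(eps_mac(chi, v_head=2.0))
```

In a real calculation, the same inversion is repeated for every frequency on the grid, and the imaginary part of the result gives the absorption spectrum.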

Fig. 14
figure 14

Optical absorption spectrum of bulk Si. RPA and TDLDA fail to reproduce the optical gap and the excitonic peak. Reproduced with permission from APS from [136] Ⓒ2004

As discussed in Section 6.5, the head of the xc kernel plays a dominant role in periodic solids. Figure 14 shows the experimental spectrum of Si together with the calculated spectrum of ALDA, which has a vanishing head of the xc matrix. Besides producing a red-shifted spectrum due to the band-gap problem, the ALDA spectrum lacks the strong excitonic peak near the gap. As expected, local and semilocal functionals such as the ALDA break down for the highly nonlocal excitonic effects. Big improvements can be achieved by having a finite head in the xc kernel. We now list a few xc kernels which have the proper long-range behavior that is required for a finite head of the xc matrix.

The long-range corrected (LRC) kernel [135] is a simple ad hoc approximation developed mainly for studying the effect of the long-range behavior. It has the form

$$ f_{\rm xc}^{\rm LRC}(\mathbf{q},\mathbf{G},\mathbf{G}',\omega)=-\frac{\alpha}{|\mathbf{q}+\mathbf{G}|^{2}}\: \delta_{\mathbf{G},\mathbf{G}'}, $$
(141)

where α is a system-dependent fitting parameter. Despite its simple form, LRC spectra (with properly chosen α) can be in good agreement with experiments [136, 168] since the head contribution of the kernel usually overwhelms the body contributions (sometimes called local field effects). A simple connection of the parameter α with the high-frequency dielectric constant has been suggested [136]. This xc kernel should not be confused with the long-range correction in ground-state DFT, where it means a correction term to fix the rapid decay of local and semilocal xc potentials away from nuclei [169].

The Bethe–Salpeter equation (BSE) [133, 170] is a many-body equation for a two-particle polarization function (which is closely related to the two-particle Green’s function) [171]. Today, the BSE, combined with the GW method, is the most accurate approach to calculating optical properties of materials. However, the scaling of the computational cost versus system size is not favorable; the use of GW-BSE has therefore been limited to moderate system sizes, despite recent progress [172–175]. From the point of view of TDDFT, the BSE has been an important guide towards the development of very accurate excitonic xc kernels. The idea is to construct \(f_{\rm xc}^{\rm ex}\) via an integral equation that features the same four-point response functions that are featured in the BSE [133, 176]. The resulting xc kernel reproduces the results of the full BSE [168, 177–183]. However, the computational cost is essentially as high as that of solving the full BSE; therefore, this xc kernel has mainly served as a proof of concept that TDDFT is capable of producing accurate excitonic effects. Furthermore, the LRC xc kernel can be shown to emerge from this BSE-based xc kernel in the long-range limit [184].

A computationally much simpler alternative is the recently proposed ‘bootstrap’ kernel [185, 186], defined as

$$f_{\rm xc}^{\rm boot}(\mathbf{q},\mathbf{G},\mathbf{G}',\omega)=\frac{\epsilon^{-1}(\mathbf{q},\mathbf{G},\mathbf{G}',\omega=0)}{\chi_{0}(\mathbf{q},\mathbf{G}=0,\mathbf{G}'=0,\omega=0)}.$$
(142)

Due to the inclusion of \(v_{\mathbf{G}}(\mathbf{q})\) in the numerator, the bootstrap kernel has the correct \(O(q^{-2})\) long-range behavior. The bootstrap kernel performs well for a wide range of solids, as illustrated in Fig. 15, and even works for the case of strongly bound excitons such as in solid argon or LiF (note that the noninteracting response function \(\chi_{0}\) typically contains a band-gap correction such as a scissor operator or GW).
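Assembling the bootstrap kernel (142) is equally simple once the static inverse dielectric matrix and the head of \(\chi_{0}\) are at hand; the static inputs below are invented:

```python
import numpy as np

def f_xc_bootstrap(eps_inv_static, chi0_head_static):
    """Bootstrap kernel (142): the static inverse dielectric matrix divided by
    the head (G = G' = 0 element) of the static Kohn-Sham response function."""
    return np.asarray(eps_inv_static, dtype=float) / chi0_head_static

# Invented static inputs (the chi0 head is negative for a stable system)
eps_inv = np.array([[0.4, 0.01], [0.01, 0.9]])
f_boot = f_xc_bootstrap(eps_inv, chi0_head_static=-0.25)
print(f_boot[0, 0])  # negative head element, as for the LRC kernel
```

In the actual bootstrap scheme the kernel and the dielectric matrix are iterated to self-consistency; the division above is one step of that cycle.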

Fig. 15
figure 15

(Colored online) Optical absorption spectra of various bulk semiconductors calculated with TDDFT using the bootstrap xc kernel, (142). Reproduced with permission from APS from [185] Ⓒ2011

We also briefly mention that the VS98 meta-GGA [187] has recently shown some promise for calculating optical spectra of insulators with TDDFT [188].

As an alternative to obtaining optical spectra via the dielectric matrix, a direct calculation of excitonic binding energies of insulators and semiconductors via the Casida equation is also possible [53, 189191]. The advantage of this approach is that excitonic binding energies—which can be in the meV range for materials such as GaAs—can be numerically well-resolved; this is much more difficult to do from the dielectric function, which typically yields relatively low-resolution optical spectra such as in Figs. 14 and 15. It is found that the bootstrap kernel yields good results for strongly bound excitons, but is less accurate for the more weakly bound cases [190]. Accurate triplet exciton binding energies are even more difficult to obtain. The development of xc kernels for excitonic effects in solids thus remains an important task for future research.

It should be noted that (138) and (139) imply that the eigenvalues in the Casida equation approach are the poles of \(\epsilon^{-1}\) instead of \(\epsilon_{\rm mac}\), so that the absorption peaks are not given directly. This problem is solved through a modification of the Hartree kernel:

$$ \bar f_{\rm H}(\mathbf{q},\mathbf{G},\mathbf{G}')=\left\{\begin{array}{cl}0 & G=G'=0,\\ \frac{4\pi}{|\mathbf{q}+\mathbf{G}|^{2}}\delta_{\mathbf{G}\mathbf{G}'} & \text{otherwise}.\end{array}\right. $$
(143)

By using \(\bar f_{\rm H}\) instead of \(f_{\rm H}\) in TDDFT, \(\epsilon_{\rm mac}\) becomes [133]

$$\epsilon_{\rm mac}(\omega)=\lim_{\mathbf{q}\to0}[1-v_{G=0}(\mathbf{q})\bar\chi(\mathbf{q},\mathbf{G} = 0, \mathbf{G}^{\prime}= 0,\omega)], $$
(144)

where \(\bar \chi \) is the modified response function resulting from TDDFT with \(\bar f_{\rm H}\). Thus, the Casida equation with \(\bar f_{\rm H}\) yields eigenvalues corresponding to the peaks in the optical spectra. Since (144) avoids the matrix inversion involved in (138), the use of \(\bar f_{\rm H}\) is also a standard practice in the response function approach of TDDFT.
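Implementing (143) amounts to zeroing the head of the bare Coulomb kernel; in the sketch below the first entry represents G = 0, and the |q + G| values are invented:

```python
import numpy as np

def f_hartree_bar(q_plus_G_norms):
    """Modified Hartree kernel (143): diagonal 4*pi/|q+G|^2 with the
    head (G = G' = 0, taken as the first entry) set to zero."""
    norms = np.asarray(q_plus_G_norms, dtype=float)
    diag = 4.0 * np.pi / norms**2
    diag[0] = 0.0  # remove the long-range G = 0 term
    return np.diag(diag)

fH = f_hartree_bar([0.05, 1.0, 1.4])
print(fH[0, 0], fH[1, 1])  # 0.0 and 4*pi
```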

7.4.3 Metallic Systems

The optical properties of metallic systems (bulk metals or metallic nanoparticles) are strongly determined by the fact that they have a sea of delocalized conduction electrons with a Fermi surface. Hence, their low-energy elementary excitations are quite different from those of systems with a gap (insulators and semiconductors). Whereas the outstanding features of the optical spectra of insulators are excitons, the spectra of metallic systems are dominated by plasmons.

Excitons and plasmons are observed using different experimental techniques: excitons are seen in optical absorption spectra (i.e., via coupling to transverse optical fields); on the other hand, plasmons couple to longitudinal fields, and are thus observed using electron energy loss spectroscopy or inelastic light (or X-ray) scattering spectroscopy [192195].

From a TDDFT perspective, both excitons and plasmons are collective excitations of the many-body system. However, there is a big difference as to what causes the collective behavior in the Kohn–Sham system. Excitons can be viewed as a coherent superposition of a large number of individual particle-hole excitations between valence and conduction band, mediated via long-range dynamical xc effects [115] (see Fig. 16). As we discussed in the previous subsection, it is not easy to find xc kernels which reproduce excitonic effects: all electron gas-based approximations (such as ALDA) will fail.

Fig. 16
figure 16

(Colored online) a Excitons arise from a coupling of single-particle excitations between valence and conduction band in an insulator, mediated by dynamical xc effects. b Particle-hole excitations with momentum transfer q across the Fermi surface of a simple metal. A plasmon is a coherent superposition of many such excitations, coupled by Coulomb interactions

On the other hand, plasmon excitations in metallic systems are relatively easy to capture within TDDFT. The reason is that plasmons can be viewed as collective charge–density oscillations, and it is a straightforward textbook exercise in electromagnetism to show that such oscillations arise already from classical electrostatic (RPA-type) interactions; many-body xc effects only cause relatively minor corrections (but are important and subtle for plasmon damping, see below). One thus derives the classical plasma frequency as

$$ \omega_{\rm pl} = \sqrt{ \frac{4\pi n e^{2}}{m }} \:. $$
(145)

The plasmon dispersion of a homogeneous electron liquid can be calculated using TDDFT linear response theory, along similar lines as finding the zeros of the Lindhard dielectric function [18]. The analytic form of the plasmon dispersion up to order q 2 is given by

$$\Omega(q) = \omega_{\rm pl}\left[1 + \left(\frac{3 k_{F}^{2}}{10 \omega_{\rm pl}^{2}} + \frac{1}{8\pi}f_{\rm xc}(q=0,\omega_{\rm pl})\right)q^{2}\right], $$
(146)

where the terms without \(f_{\rm xc}\) are the RPA result. For small q, the plasmon lies outside the particle-hole continuum, as illustrated in Fig. 17. As soon as the plasmon dispersion enters the particle-hole continuum, it becomes subject to Landau damping (decay into incoherent particle-hole excitations). This damping occurs already in RPA [197]. But outside the particle-hole continuum, the only source of plasmon damping comes from the imaginary part of the xc kernel. The physical origin of this low-q plasmon damping is decay into multiple particle-hole excitations. A frequency-independent \(f_{\rm xc}\) (such as the ALDA) has no imaginary part and hence leaves the plasmon undamped.
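Equations (145) and (146) can be evaluated directly. The sketch below works in atomic units (e = m = 1) and sets \(f_{\rm xc}=0\), i.e., the RPA limit; the density corresponds to \(r_{s}=4\), roughly that of sodium:

```python
import numpy as np

def omega_pl(n):
    """Classical plasma frequency (145) in atomic units (e = m = 1)."""
    return np.sqrt(4.0 * np.pi * n)

def plasmon_dispersion(q, n, f_xc=0.0):
    """Small-q plasmon dispersion (146); f_xc = 0 recovers the RPA result,
    and a negative static f_xc lowers the dispersion coefficient."""
    wp = omega_pl(n)
    k_F = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)
    coeff = 3.0 * k_F**2 / (10.0 * wp**2) + f_xc / (8.0 * np.pi)
    return wp * (1.0 + coeff * q**2)

n = 3.0 / (4.0 * np.pi * 4.0**3)  # electron density for r_s = 4 (atomic units)
print(omega_pl(n), plasmon_dispersion(0.1, n))
```

For \(r_{s}=4\) this gives \(\omega_{\rm pl}=\sqrt{3/r_{s}^{3}}\approx 0.22\) a.u., i.e., about 6 eV, in line with the measured sodium plasmon energy.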

Fig. 17
figure 17

(Colored online) Schematic illustration of the particle-hole continuum of a 3D homogeneous electron liquid, and the RPA plasmon dispersion. In RPA, the plasmon is undamped until it enters the particle-hole continuum, where it decays into incoherent particle-hole excitations (Landau damping). TDDFT gives very similar results [196]

Figure 18 shows a comparison of experimental and theoretical results for the plasmon dispersions of bulk sodium and aluminum [195]. The agreement is very good for small plasmon wavevectors, but for larger wavevectors, all TDDFT approaches fail (even the nonadiabatic xc kernel of Gross and Kohn [107]). Good agreement is achieved by a hybrid approach in which many-body quasiparticle lifetimes are put by hand into the response formalism (TDLDA-LT).

Fig. 18
figure 18

(Colored online) Plasmon dispersions of bulk sodium and aluminum: comparison of experiment and TDDFT. Reproduced with permission from APS from [195] Ⓒ2011

Plasmonic effects are found not only in bulk metals, but also in many types of nanostructures. TDDFT has been extensively used for collective excitations in metallic clusters and nanoparticles. In general, the results are excellent: plasmon peaks and line shapes for simple metal clusters are very well reproduced, even at the ALDA level [91, 198]. Applications to gold and silver clusters have also been quite successful, and nicely demonstrate the evolution from atomic-like discrete spectra to plasmon spectra as the cluster size increases [199201].

A similar picture holds for doped semiconductor nanostructures such as quantum wells, wires, or dots. Here, collective excitations in the charge and spin channel have been well studied using TDDFT methods; in general, plasmon dispersions are well reproduced [64]. The issue of plasmon damping in quantum wells has received a good deal of attention; in particular, intersubband plasmons in quantum wells have been used to test the Vignale-Kohn approximation of TDCDFT [62, 63, 203], with considerable success [204, 205, 207].

8 The Future of TDDFT

In the final section of our overview, we attempt a forecast of the directions in which the field of TDDFT will be progressing. We will highlight some areas in which applications of TDDFT are likely to see a lot of activity because of their practical importance. We will also give a list of issues and challenges—some of them formal, some of them practical—which will keep the TDDFT community busy for years to come.

Biological Systems. It has been said that “if the 20th century was the century of physics, the 21st century will be the century of biology” [208]. Without doubt, DFT and TDDFT methods will play a key role in the scientific effort to understand the links between structure and functionality in biochemistry and biology. This is because DFT is the only method capable of delivering ab initio descriptions of the electronic structure of systems with tens of thousands of atoms; thanks to the development of linear-scaling methods, even systems with millions of atoms are now within reach [209–211].
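To make the linear-scaling idea concrete: many O(N) DFT codes avoid diagonalization altogether and iterate directly on the density matrix, whose sparsity they exploit. The following toy sketch (dense matrices and a tight-binding Hamiltonian invented purely for illustration, not taken from the codes cited above) shows the McWeeny purification iteration that lies at the heart of several such schemes:

```python
import numpy as np

def mcweeny_density_matrix(H, steps=40):
    """Grand-canonical McWeeny purification, P <- 3 P^2 - 2 P^3.

    The spectrum of H is first mapped into [0, 1] using Gershgorin
    bounds; the iteration then drives every eigenvalue to 0 or 1,
    yielding the projector onto all states below the spectral
    midpoint.  Linear-scaling codes run this kind of iteration with
    truncated *sparse* matrices; this dense version only shows the
    algorithm itself.
    """
    n = len(H)
    radii = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    e_min = np.min(np.diag(H) - radii)
    e_max = np.max(np.diag(H) + radii)
    P = (e_max * np.eye(n) - H) / (e_max - e_min)  # spectrum now in [0, 1]
    for _ in range(steps):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P                # purification step
    return P

# Half-filled tight-binding chain: P converges to a rank-N/2 projector.
N = 8
H = -np.eye(N, k=1) - np.eye(N, k=-1)
P = mcweeny_density_matrix(H)
```

In production codes all intermediates are stored as sparse matrices with small elements truncated, which is what brings the cost down from O(N³) diagonalization to O(N).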

Applications of TDDFT for large biomolecules have begun to emerge at a rapid rate [90, 212–218]. Many of these studies are concerned with the electronic and optical properties of DNA fragments, or the properties of light-harvesting complexes. Apart from the availability of the necessary computer power (hardware as well as software), there are several developments in DFT which facilitate this trend towards large organic systems:

  • With the range-separated hybrid functionals, we now have the tools for describing charge-transfer excitations with TDDFT (see Section 6.3).

  • A new generation of DFT approaches for van der Waals interactions has emerged [219–225], allowing first-principles calculations of the structure of sparse matter, adsorption on surfaces, and many other applications.

Coupled electron-nuclear dynamics. The coupling of electronic and structural degrees of freedom is a deciding factor in many functionalities of biological systems. Examples are photoinduced processes such as photoisomerization. As discussed in Section 7.3, TDDFT gives access to excited-state potential energy surfaces. But things get really interesting when the dynamics goes beyond the Born–Oppenheimer approximation, giving rise to effects such as structural relaxation or ultrafast laser-driven molecular reorganization or dissociation. In such situations, TDDFT can be combined with molecular dynamics, at various levels of sophistication [226–228]. For a recent review of nonadiabatic dynamics, see Ref. [229].

The most straightforward TDDFT approach for coupling electronic and nuclear dynamics is via the Ehrenfest approximation, a mixed quantum-classical treatment in which the forces on the classical ions result from a mean-field average over the electronic states. Ehrenfest dynamics works well in many situations [230–233], but has clear limitations in situations where a branching of ionic trajectories occurs, and where the excited states involve multiple pathways. Such phenomena can be described with the so-called surface-hopping schemes [234–236], in which multiple excited-state potential energy surfaces can participate in the dynamics, governed by a stochastic hopping algorithm.
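As an illustration of the Ehrenfest scheme, the following sketch propagates a two-state model coupled to a single classical coordinate. All parameters are invented for illustration; a real TDDFT implementation would evolve Kohn–Sham orbitals and many nuclei, but the structure of the algorithm (quantum propagation, then a mean-field force on the classical coordinate) is the same:

```python
import numpy as np

def H_el(x, gap=0.02, c=0.005):
    """Two-state diabatic Hamiltonian H(x) for an avoided crossing."""
    return np.array([[gap * np.tanh(x), c],
                     [c, -gap * np.tanh(x)]])

def dH_dx(x, gap=0.02, c=0.005):
    """Derivative of H(x); enters the mean-field force."""
    d = gap / np.cosh(x) ** 2
    return np.array([[d, 0.0], [0.0, -d]])

def ehrenfest(x0=-5.0, p0=20.0, mass=2000.0, dt=0.5, steps=4000):
    """Mixed quantum-classical propagation: the electronic state obeys
    the TDSE with H(x(t)), while the classical coordinate feels the
    mean-field (Hellmann-Feynman) force F = -<psi| dH/dx |psi>."""
    x, p = x0, p0
    psi = np.array([1.0, 0.0], dtype=complex)      # start on state 1
    for _ in range(steps):
        # exact short-time electronic propagator exp(-i H(x) dt)
        w, V = np.linalg.eigh(H_el(x))
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
        # classical step with the Ehrenfest force
        p += -np.real(psi.conj() @ dH_dx(x) @ psi) * dt
        x += p / mass * dt
    return x, p, psi

x, p, psi = ehrenfest()
```

The single mean-field trajectory is exactly what fails when the wavepacket should branch; surface hopping repairs this by running an ensemble of trajectories that switch stochastically between adiabatic surfaces.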

But all of these approaches are based on classical nuclear dynamics and are thus missing out on nuclear quantum effects. Important effects of nuclear dynamics such as interference, decoherence, or tunneling are therefore not captured. There are already some efforts underway to develop approaches that combine electronic TDDFT with nuclear quantum dynamics [237–243]. It can be expected that the field will continue to advance towards a comprehensive and practical treatment of electronic and nuclear degrees of freedom. This would open up a large area of interesting new applications of TDDFT.

Linear and nonlinear optics in materials. In Section 7.4, we discussed how linear response TDDFT is applied to describe optical properties of materials (insulators and metals). It can be expected that this will remain a highly active area of research. Significant progress can be expected along several directions.

There is a need for better xc kernels for solids. It is very likely that these kernels will be expressed in terms of occupied and unoccupied orbitals, rather than the density. The bootstrap kernel, (142), is an important step in the right direction, but it is not so clear how it can be systematically improved. For instance, a spin-dependent generalization of the bootstrap kernel (which would allow a description of singlet and triplet excitons) is problematic [190].

A particularly hot area of research is photovoltaic processes in organic systems (polymers or biological light-harvesting complexes) [244–247]. There is a rich variety of photophysical processes involved, such as formation and diffusion of excitons, formation of charge-transfer complexes, relaxation, and charge separation. At present, no comprehensive ab initio picture of these processes exists. This represents one of the major challenges for TDDFT, and should soon be within reach, based on existing methodologies and new developments. A promising idea is the recently proposed real-time visualization of exciton dynamics using the time-dependent transition density matrix [80].

In the majority of applications of TDDFT in periodic solids, the dielectric function (or a related response property) is calculated, yielding optical spectra or scattering cross sections. But there are many nonlinear or explicitly time-dependent processes of interest, which go beyond response theory and require, in principle, a time-dependent calculation. Real-time TDDFT calculations for periodic solids are beginning to emerge [248–251] to simulate hot carrier generation, dielectric breakdown, and coherent phonons in semiconductors and insulators. Such calculations, in particular if light propagation effects are included via a coupling with Maxwell’s equations, pose a significant computational challenge and call for the development of new multiscale or multidomain approaches [252, 253].
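A minimal sketch of the real-time strategy is shown below, using a single orbital in a fixed one-dimensional potential (all parameters invented for illustration). In an actual real-time TDDFT run the Hamiltonian would be rebuilt from the time-dependent density at every step, but the core operation is the same unitary short-time propagation; here it is done with a Crank–Nicolson step:

```python
import numpy as np

def crank_nicolson_step(psi, H, dt):
    """One unitary step: (1 + i dt/2 H) psi' = (1 - i dt/2 H) psi."""
    n = len(psi)
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

# 1D grid, harmonic potential, displaced Gaussian initial orbital.
n, dx, dt = 200, 0.1, 0.01
x = dx * (np.arange(n) - n // 2)
lap = (np.eye(n, k=1) + np.eye(n, k=-1) - 2.0 * np.eye(n)) / dx ** 2
H = -0.5 * lap + np.diag(0.5 * x ** 2)

psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.linalg.norm(psi)
E0 = np.real(psi.conj() @ H @ psi)     # energy before propagation
for _ in range(100):
    psi = crank_nicolson_step(psi, H, dt)
```

Since the Cayley transform is exactly unitary for a Hermitian H, the norm (and, for this static H, the energy) is conserved to machine precision; real-time TDDFT codes wrap a predictor-corrector or exponential-midpoint rule around such a step because H depends on the instantaneous density.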

Other developments. Let us conclude with a mixed bag of various formal and practical challenges and unsolved problems for present and future TDDFT research.

  • Nonadiabatic xc functionals. Nonadiabatic xc functionals are needed for double excitations in finite systems, for dissipation in extended systems, for exciton Rydberg series, for conical intersections, and many other important phenomena. Electron gas-based functionals [62, 63] are of limited use [254]; a connection with many-body approaches seems the most promising avenue towards the development of simple, practically useful nonadiabatic functionals [128–131]. Another possibility could be via reduced density matrix functional theory [255–258].

  • Open systems. TDDFT for open systems is of interest for the description of transport through nano- or mesoscopic systems, where a region of interest (e.g., a molecule) is connected to energy and particle reservoirs via metallic leads [259]. It is also of interest for treating dissipative dynamics. The coupling to a reservoir can be formally treated within TDDFT in various ways: with a master equation approach [260], using stochastic methods [261–263], and by mapping the open physical system onto a noninteracting closed system [264–266]. The formal aspects are complicated and the subject of ongoing debate [267]; practical xc functionals for open systems and applications beyond simple model systems can be expected in the future.

  • Strongly correlated systems. There has been some interesting recent work in which TDDFT methods were successfully applied to the transport in strongly correlated model lattice systems exhibiting Coulomb blockade and the Kondo effect [268–271]. A subtle feature of the xc potential, its derivative discontinuity upon change of particle number (briefly mentioned in Section 7.4), turns out to be crucial for capturing these effects. Most of these studies are for one-dimensional Hubbard-type lattice systems [272, 273], but three-dimensional systems were also considered [274, 275]. In the future, work along these lines is likely to make an impact in the description of realistic strongly correlated systems and materials, which so far have remained problematic for (TD)DFT.

  • Extensions of the formalism. Ground-state DFT has long ago been extended to finite temperatures [276] and to relativistic systems [14]. The corresponding TDDFT versions are not yet available, but would be of great interest for matter under extreme conditions. Finite-temperature TDDFT, which might include elements of nonequilibrium thermodynamics and time-dependent thermal ensembles, could also be of interest for thermal transport and thermoelectric properties. Relativistic TDDFT has been used for calculating molecular excitation energies and response properties [277–281], and real-time Dirac–Kohn–Sham calculations have been explored [282], but formally rigorous general existence proofs have yet to be worked out. Some promising developments have recently occurred in the application of TDDFT methods for quantum electrodynamics [283, 284].
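To illustrate the master-equation route to open systems mentioned above [260], the following sketch integrates a Lindblad equation for a damped two-level system (toy parameters; TDDFT master-equation schemes evolve the density of an interacting many-electron system instead, with the dissipator encoding the reservoir coupling):

```python
import numpy as np

def lindblad_rhs(rho, H, L):
    """Right-hand side of the Lindblad master equation
    d rho/dt = -i[H, rho] + L rho L+ - (1/2){L+L, rho}."""
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T
            - 0.5 * (LdL @ rho + rho @ LdL))

H = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])            # level splitting
L = np.sqrt(0.1) * np.array([[0.0, 1.0], [0.0, 0.0]])    # decay |1> -> |0>
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start excited

dt = 0.01
for _ in range(2000):                  # simple forward-Euler integration
    rho = rho + dt * lindblad_rhs(rho, H, L)
```

The dissipator is trace-free, so the total population stays at one while the excited-state occupation decays as roughly exp(-0.1 t); in a TDDFT setting the open-system xc functional would have to reproduce this kind of relaxation for the true many-body density.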