4.1 Introduction

Nanomaterials and nanotechnology may lead to breakthroughs in various fields such as VLSI circuits [1], energy storage solutions [2, 3], environmental protection [4, 5], and biomedical applications [6, 7]. Despite the remarkable advances in characterization tools and techniques, the need to carry out computational simulations of nanomaterials with atomistic resolution is greater than ever. This is partly because nanoscale properties are extremely difficult to measure or manipulate, but more importantly because such properties can be very sensitive to subtle environmental changes and perturbations, which makes repeated measurements challenging. This is where computational modeling has much to offer to the booming fields of nanomaterials design and nanomedicine: it supplies “virtual experiments” with which to investigate the mechanisms behind phenomena and even to design artificial structures with desirable properties [8–10]. One good example is the design of a nitrogen-doped nanotube as a chemical sensor with potential applications in biomedical fields. It was first proposed on the basis of a gas-response sensitivity analysis from first-principles calculations [11] and confirmed by experiments a year later, in which fabricated CNx nanotubes showed rapid sensing of low concentrations of toxic gases such as ammonia, acetone, and OH groups [12]. Another example is the explanation, by molecular dynamics simulations, of the grain size at which nanocrystalline metals are strongest [13]. Computational simulations can provide detailed insight into phenomena that are very difficult or even impossible to probe experimentally. In both cases computational modeling supplies invaluable information, and it is computational techniques that give deep insight into the structure-property and structure-functionality relationships in nanomaterials.

Another driver of the rising interest in nanomaterials design and computational modeling is that, with the ever-increasing power of CPUs and better-designed parallel algorithms, the capability of computer simulation has greatly expanded. Modern supercomputers now deliver teraflop computational power: full quantum mechanical simulations can tackle molecular systems consisting of millions of atoms, while state-of-the-art molecular dynamics codes can handle billions of atoms. This puts nanomaterials modeling squarely into experimentally accessible regimes, and the ability to mimic real experimental systems has made computer simulation a complementary tool for understanding phenomena on the nanoscale.

Over the past few decades, we have seen a booming interest in computational modeling and the emergence of a number of new theoretical and numerical techniques. In order to tackle nanomaterial systems on different time and size scales, computational modeling has developed several branches that aim to suitably describe various properties of assorted systems. The number-one rule of thumb in computational simulation is to choose the most suitable method for a specific problem. Generally speaking, based on accuracy and size constraints, simulation methods can be categorized as first-principles (FP) methods (accurate, 10–1,000 atoms), tight binding (TB) (approximate electronic structure information, 10²–10⁵ atoms), and molecular dynamics (MD) (empirical potentials, >10⁶ atoms) (Fig. 4.1), which together cover a wide range from sophisticated electronic structure to massive bulk properties. In the following sections, the fundamentals of these three simulation techniques are outlined and their applications in nanomaterials modeling are demonstrated in detail.

Fig. 4.1

Diagram illustrating suitable system sizes and time scales of FP, TB, and MD methods

4.2 First-Principles Methods

First-principles methods, as the name suggests, obtain the electronic structure of materials from the very foundation of quantum mechanics, the Schrödinger equation. Apart from a few basic constants, such as the Planck constant, atomic masses, and the Bohr radius, and certain approximations required to keep the numerics tractable, such as the Born-Oppenheimer approximation and the local density approximation, first-principles methods calculate the essential quantities directly, without preset or empirical parameters. These methods are therefore highly accurate and applicable to most materials. On the other hand, owing to the computationally demanding self-consistent solution procedure, FP methods are very time-consuming and can only treat relatively small systems.

4.2.1 Born-Oppenheimer Approximation

The single most important approximation employed in most FP calculations is the Born-Oppenheimer approximation, which effectively decouples the electronic and nuclear degrees of freedom and thus greatly simplifies the numerical solution. In Hartree atomic units, the Hamiltonian of an N-ion, n-electron system reads

$$ \widehat{H}=-{\displaystyle \sum}_{I=1}^N\frac{\nabla_I^2}{2{M}_I}-{\displaystyle \sum}_{i=1}^n\frac{\nabla_i^2}{2}+\frac{1}{2}{\displaystyle \sum}_{I,J,I\ne J}^N\frac{Z_I{Z}_J}{R_{IJ}}+\frac{1}{2}{\displaystyle \sum}_{i,j,i\ne j}^n\frac{1}{r_{ij}}-{\displaystyle \sum}_{I=1}^N{\displaystyle \sum}_{i=1}^n\frac{Z_I}{r_{iI}} $$
(4.1)

Capital and lower-case letters indicate ions and electrons, respectively. Since ions are about 10⁴ times heavier than electrons (\( {M}_I\sim {10}^4 \) in atomic units), it is safe to say that electrons move much faster than ions do.

In other words, at every moment that ions move to a new configuration, electrons will instantaneously relax to their new ground state. Therefore, the movements of electrons and ions can be separated. In mathematical terms, this separation is realized by expressing the total wavefunction as follows:

$$ \varPhi \left(\left\{{\overrightarrow{r}}_i\right\};\left\{{\overrightarrow{R}}_I\right\}\right)=\phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\Big|{}_{\left\{{\overrightarrow{R}}_I\right\}}\chi \left(\left\{{\overrightarrow{R}}_I\right\}\right) $$
(4.2)

\( \phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\Big|{}_{\left\{{\overrightarrow{R}}_I\right\}} \) is the electronic wavefunction for the ionic configuration \( \left\{{\overrightarrow{R}}_I\right\} \), and \( \chi \left(\left\{{\overrightarrow{R}}_I\right\}\right) \) is the ionic wavefunction. Furthermore, \( \phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\Big|{}_{\left\{{\overrightarrow{R}}_I\right\}} \) satisfies the Schrödinger equation:

$$ \left\{-{\displaystyle \sum}_{i=1}^n\frac{\nabla_i^2}{2}+\frac{1}{2}{\displaystyle \sum}_{i,j,i\ne j}^n\frac{1}{r_{ij}}-{\displaystyle \sum}_{I=1}^N{\displaystyle \sum}_{i=1}^n\frac{Z_I}{r_{iI}}\right\}\phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\left|{}_{\left\{{\overrightarrow{R}}_I\right\}}={\varepsilon}_{elec}\left(\left\{{\overrightarrow{R}}_I\right\}\right)\phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\right|{}_{\left\{{\overrightarrow{R}}_I\right\}} $$
(4.3)

Equations 4.2 and 4.3 constitute the Born-Oppenheimer approximation. It separates the motion of the electrons from that of the ions, and serves as the foundation of most modern computational quantum chemistry/physics methods. Combining the above three equations, we obtain the equation satisfied by \( \chi \left(\left\{{\overrightarrow{R}}_I\right\}\right) \):

$$ \left\{-{\displaystyle \sum}_{I=1}^N\frac{\nabla_I^2}{2{M}_I}+\frac{1}{2}{\displaystyle \sum}_{I,J,I\ne J}^N\frac{Z_I{Z}_J}{R_{IJ}}+{\varepsilon}_{elec}\left(\left\{{\overrightarrow{R}}_I\right\}\right)\right\}\chi \left(\left\{{\overrightarrow{R}}_I\right\}\right)=\varepsilon \chi \left(\left\{{\overrightarrow{R}}_I\right\}\right) $$
(4.4)

Equation 4.4 shows that the ions move in a potential field \( {\varepsilon}_{elec} \), which is therefore also called the “Born-Oppenheimer potential surface”. From here on, we focus our discussion on solving for the electronic wavefunction \( \phi \left(\left\{{\overrightarrow{r}}_i\right\}\right)\Big|{}_{\left\{{\overrightarrow{R}}_I\right\}} \). More information can be found in Grosso and Parravicini’s book [15].

4.2.2 Density Functional Theory

Though the Born-Oppenheimer approximation simplifies the Schrödinger equation for solid systems, Eq. 4.3 is still very difficult to solve, mainly because the coupling term \( 1/{r}_{ij} \) makes Eq. 4.3 a nonlinear, coupled n-body equation. The common technique for overcoming this complication is to transform the n-body problem into a single-body one. The rigorous realization of this idea is density functional theory (DFT) [16, 17]. DFT does not consider concrete electronic orbital configurations, but focuses on the relationship between the total energy and the charge distribution of the system. Employing the variational principle with constraints, DFT reformulates Eq. 4.3 as a single-body equation describing the state of a single electron moving in an effective potential field, with all many-body interactions lumped into a so-called “exchange-correlation” functional. As Hohenberg and Kohn pointed out, the ground-state energy of a system can be expressed as a universal functional of its ground-state charge distribution [16]:

$$ {E}^{\mathrm{HK}}\left[\rho \left(\overrightarrow{r}\right);{V}_{ext}\left(\overrightarrow{r}\right)\right]=T\left[\rho \left(\overrightarrow{r}\right)\right]+{E}_{ee}\left[\rho \left(\overrightarrow{r}\right)\right]+{\displaystyle \int }{V}_{ext}\left(\overrightarrow{r}\right)\rho \left(\overrightarrow{r}\right)d\overrightarrow{r}+{E}_{II} $$
(4.5)

The terms on the right-hand side are the kinetic energy, the electron-electron interaction, the electron-ion interaction, and the ion-ion interaction, respectively. Let us ignore the last term for now, since it is a constant shift for a given atomic configuration. One can in principle find the minimum of \( {E}^{\mathrm{HK}} \) by taking the variation of Eq. 4.5 with respect to ρ for a given external potential \( {V}_{ext} \). Unfortunately, this is not practical, at least in the near future, since the kinetic-energy functional of an interacting electron gas is unknown. To overcome this difficulty, Kohn and Sham formulated the KS equation by mapping onto the real system a non-interacting electron gas whose charge distribution \( {\rho}_0\left(\overrightarrow{r}\right) \) is identical to the ground-state charge distribution \( \rho \left(\overrightarrow{r}\right) \) [17]. The key point of this ansatz is that \( {\rho}_0\left(\overrightarrow{r}\right) \) can be expressed as a sum over single-electron wavefunctions \( {\varphi}_j\left(\overrightarrow{r}\right) \):

$$ {\rho}_0\left(\overrightarrow{r}\right)={\displaystyle \sum}_i{\varphi}_i^{*}\left(\overrightarrow{r}\right){\varphi}_i\left(\overrightarrow{r}\right) $$
(4.6)

and so can \( \rho \left(\overrightarrow{r}\right) \). The kinetic-energy functional of a non-interacting electron gas, \( {T}_0\left[\rho \left(\overrightarrow{r}\right)\right] \), can be calculated analytically, and we take advantage of this. Furthermore, Kohn and Sham used the Hartree interaction \( {E}_H\left[\rho \left(\overrightarrow{r}\right)\right] \) instead of \( {E}_{ee}\left[\rho \left(\overrightarrow{r}\right)\right] \):

$$ {E}_H\left[\rho \left(\overrightarrow{r}\right)\right]=\frac{1}{2}{\displaystyle \int}\frac{\rho \left(\overrightarrow{r}\right)\rho \left({\overrightarrow{r}}^{\prime}\right)}{\left|\overrightarrow{r}-{\overrightarrow{r}}^{\prime}\right|}d\overrightarrow{r}d{\overrightarrow{r}}^{\prime } $$
(4.7)

Clearly, \( {T}_0\left[\rho \left(\overrightarrow{r}\right)\right]+{E}_H\left[\rho \left(\overrightarrow{r}\right)\right] \) differs from \( T\left[\rho \left(\overrightarrow{r}\right)\right]+{E}_{ee}\left[\rho \left(\overrightarrow{r}\right)\right] \) in Eq. 4.5. To compensate for this difference, one needs an extra term:

$$ {E}_{xc}\left[\rho \right]=T\left[\rho \right]+{E}_{ee}\left[\rho \right]-{T}_0\left[\rho \right]-{E}_H\left[\rho \right] $$
(4.8)

The importance of \( {E}_{xc}\left[\rho \right] \) is that it contains all the effects of many-body interactions. Combining Eqs. 4.5, 4.6, 4.7, and 4.8, and taking the variation of the total energy with respect to \( {\varphi}_j\left(\overrightarrow{r}\right) \) under the constraint condition:

$$ {\displaystyle \sum}_j\left\langle {\varphi}_j\Big|{\varphi}_j\right\rangle =N $$
(4.9)

we obtain the equation satisfied by the ground-state wavefunctions \( {\varphi}_j\left(\overrightarrow{r}\right) \):

$$ \left[-\frac{\nabla^2}{2}+{V}_H\left(\overrightarrow{r}\right)+{V}_{ext}\left(\overrightarrow{r}\right)+{V}_{xc}\left(\overrightarrow{r}\right)\right]{\varphi}_j\left(\overrightarrow{r}\right)={\varepsilon}_j{\varphi}_j\left(\overrightarrow{r}\right) $$
(4.10)

Explicitly, the exchange-correlation potential is defined as:

$$ {V}_{xc}\left(\overrightarrow{r}\right)=\frac{\delta {E}_{xc}\left[\rho \left(\overrightarrow{r}\right)\right]}{\delta \rho \left(\overrightarrow{r}\right)} $$
(4.11)

Equation 4.10 is the Kohn-Sham (KS) equation. Using the KS equation, the ground-state energy can be rewritten as

$$ {E}_0={\displaystyle \sum}_j{\varepsilon}_j-\frac{1}{2}{\displaystyle \int}\frac{\rho \left(\overrightarrow{r}\right)\rho \left({\overrightarrow{r}}^{\prime}\right)}{\left|\overrightarrow{r}-{\overrightarrow{r}}^{\prime}\right|}d\overrightarrow{r}d{\overrightarrow{r}}^{\prime }+{E}_{\mathrm{xc}}\left[\rho \left(\overrightarrow{r}\right)\right]-{\displaystyle \int }{V}_{\mathrm{xc}}\left(\overrightarrow{r}\right)\rho \left(\overrightarrow{r}\right)d\overrightarrow{r} $$
(4.12)

The first term on the right-hand side is called the band-structure energy, and the other three terms are called double-counting terms.

One should understand that an exact analytical formula for the exchange-correlation energy \( {E}_{xc}\left[\rho \right] \) is generally unavailable. Its correlation component, however, can be obtained numerically by the quantum Monte Carlo (QMC) method, as done by Ceperley and Alder [18]. Several research groups have fitted these data with different analytic functions and incorporated the fits into the KS equation [18–22].
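For the exchange part of \( {E}_{xc}\left[\rho \right] \), the local density approximation does have a closed form, the Dirac/Slater exchange in Hartree atomic units, \( {\varepsilon}_x\left(\rho \right)=-\frac{3}{4}{\left(3/\pi \right)}^{1/3}{\rho}^{1/3} \). A minimal Python sketch (function names are ours) evaluates this energy density and the corresponding potential from Eq. 4.11, using \( {V}_x=\delta \left(\rho {\varepsilon}_x\right)/\delta \rho =\frac{4}{3}{\varepsilon}_x \):

```python
import numpy as np

def lda_exchange_energy_density(rho):
    """Dirac/Slater exchange energy per electron (Hartree atomic units):
    eps_x(rho) = -(3/4) * (3/pi)**(1/3) * rho**(1/3)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * np.cbrt(rho)

def lda_exchange_potential(rho):
    """Functional derivative V_x = d(rho * eps_x)/d(rho) = (4/3) eps_x(rho),
    since eps_x is proportional to rho**(1/3)."""
    return (4.0 / 3.0) * lda_exchange_energy_density(rho)
```

The correlation part, by contrast, has no such closed form and must be taken from the QMC parametrizations cited above.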

4.2.3 Self-Consistent Field Processes in DFT

Looking back at Eq. 4.10, a paradox appears: to build the \( {V}_H \) and \( {V}_{ext} \) parts of the Hamiltonian in Eq. 4.10, one has to know \( \rho \left(\overrightarrow{r}\right) \) in advance, yet \( \rho \left(\overrightarrow{r}\right) \) is precisely what needs to be solved for. In other words, \( {\varphi}_j\left(\overrightarrow{r}\right) \) appears on both sides of Eq. 4.10. The KS equation therefore has to be solved self-consistently. First, an initial guess of \( {\varphi}_j\left(\overrightarrow{r}\right) \) or \( \rho \left(\overrightarrow{r}\right) \) is made. Second, the Hamiltonian is built and Eq. 4.10 is solved. Third, \( \rho \left(\overrightarrow{r}\right) \) is updated from the output and input of the current step, and the Hamiltonian is reconstructed. These steps are repeated until a convergence criterion is satisfied, usually chosen as the change between the input and output values of the total energy at the current step. The whole procedure is called a self-consistent field (SCF) calculation. Once a set of high-quality eigenfunctions is obtained, the electronic structure can be constructed and analyzed: charge density, energy band structure, density of states, etc. The details of SCF methods are far beyond the scope of this chapter; more can be found in the books by Martin [23] and Kohanoff [24].
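The SCF cycle described above can be sketched with a toy model in which the effective potential depends on the density itself. All model details below (a 4-site chain, an on-site term U·ρ standing in for V_H + V_xc, linear density mixing) are illustrative choices of ours, not a real DFT code:

```python
import numpy as np

# Toy SCF loop: a 4-site chain whose on-site potential depends on the
# electron density (a stand-in for V_H + V_xc in Eq. 4.10).
n_sites, n_electrons, U, mixing = 4, 2, 1.0, 0.5

hopping = -np.eye(n_sites, k=1) - np.eye(n_sites, k=-1)  # kinetic part
rho = np.full(n_sites, n_electrons / n_sites)            # initial guess

for step in range(200):
    H = hopping + np.diag(U * rho)        # build Hamiltonian from density
    eps, phi = np.linalg.eigh(H)          # "solve Eq. 4.10"
    occ = phi[:, : n_electrons // 2]      # doubly occupied lowest orbitals
    rho_out = 2.0 * np.sum(occ**2, axis=1)  # new density, as in Eq. 4.6
    if np.max(np.abs(rho_out - rho)) < 1e-8:  # convergence criterion
        break
    rho = (1 - mixing) * rho + mixing * rho_out  # linear density mixing

print(step, rho)
```

The linear mixing in the last line is the simplest density-update scheme; production codes use more sophisticated mixers (e.g. Pulay/DIIS) for the same purpose.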

4.2.4 Examples

Since DFT methods, and first-principles methods more generally, do not depend on the availability of parameters for a given system, they are very well suited to studying materials that are either novel or exhibit strong quantum effects [25, 26]. More importantly, as shown in Eqs. 4.6 and 4.10, the ground-state charge distribution is sensitive to the specific external potential, so arbitrary structural defects and/or alloying elements can in principle introduce unique electronic structures and even properties. First-principles methods can therefore be employed to predict or even design new classes of materials with desirable properties, which is a very important and vibrant aspect of modern computational materials science.

As an example, a study of the diffusion of an adatom on the Sn-alloyed Cu(111) surface is presented here [14]. Figure 4.2 shows the potential energy surface (PES) of a Cu adatom on such an alloyed surface. Clearly, Sn atoms disturb the profile of the PES in two ways: (1) they increase the value of the PES at their own sites, and (2) they introduce a forbidden region around each of them by turning local valleys into slopes, shown as green areas in Fig. 4.2. These two features make Sn atoms diffusion blockers for a Cu adatom, because it is energetically very unfavorable for a Cu adatom to approach sites occupied by Sn atoms. This phenomenon can be attributed to the fact that Sn atoms are larger than Cu atoms and therefore protrude from the surface. Besides this geometrical factor, the different electronic configurations of Sn and Cu atoms also contribute to the blocking effect of Sn. Figure 4.3 shows two minimum energy paths (MEPs). As a Cu adatom approaches Sn sites, it climbs uphill: not only the local minima but also the migration barriers rise. This feature is more apparent along the path that passes between the two Sn atoms (Fig. 4.3b). To explain the MEPs, Fig. 4.3c presents the local density of states (LDOS) of the surface layer at the Fermi energy, \( D\left({E}_F\right) \). \( D\left({E}_F\right) \) is higher between the two Sn atoms because of the contribution from the p-orbitals of Sn, which strengthens the binding between the surface and the Cu adatom according to the Newns-Anderson model. The results of this first-principles simulation agree well with the experimental observation that alloying with Sn increases the service lifetime of Cu interconnects tenfold [27].

Fig. 4.2

The landscape of the total energy of the system with a single Cu adatom on the Cu(111) surface alloyed with two Sn atoms. The brighter (darker) areas indicate weak (strong) adsorption sites for the Cu adatom. The lowest energy is set to zero [14]

Fig. 4.3

(a) Migration paths I and II for a Cu adatom on the Cu(111) surface with two 4th-nearest-neighbor Sn surface atoms. Yellow (gray) circles denote the Cu (Sn) atoms, while the green (red) circles mark hcp (fcc) sites, labeled by Greek and Latin letters, respectively. (b) Migration energy landscape versus reaction coordinate for pathways I and II, respectively. (c) LDOS contours at the Fermi level, with bright (dark) color indicating high (low) values of the LDOS [38]

4.3 Tight Binding

The tight binding (TB) method is a widely used semi-empirical computational method. From a set of well-chosen basis functions, TB builds the Hamiltonian matrix H of a given system and obtains eigenvalues and eigen-wavefunctions by diagonalizing H. Electronic-structure information such as the charge density, band structure, and optical absorption spectrum can then be obtained. In contrast to first-principles methods, the elements of H in TB are not calculated directly through SCF; they are expressed as functions of the atomic positions with a set of pre-determined parameters. With high-quality parameters, TB can accurately simulate systems of 10³–10⁴ atoms. This is important, since systems of this size can contain complicated atomic structures or functional groups. The TB method is therefore attractive for nanomaterial simulations, because the objects in this field usually have artificial structures and peculiar electronic structures that need to be identified.

4.3.1 Linear Combination of Atomic Orbitals

The theoretical foundation of TB can be understood through the linear combination of atomic orbitals (LCAO) method [28]. Suppose a system contains N atoms, with \( {n}_i \) orbitals belonging to the i-th atom. An eigen-wavefunction can be expressed as

$$ \varPsi \left(\overrightarrow{r}\right)={\displaystyle \sum}_{i,\alpha }{c}_{i\alpha }{\phi}_{\alpha}\left(\overrightarrow{r}-{\overrightarrow{R}}_i\right) $$
(4.13)

where i and α are the indices of atoms and orbitals, respectively. Accordingly, the energy of the system, E, is

$$ E=\frac{\left\langle \varPsi \left|\widehat{H}\right|\varPsi \right\rangle }{\left\langle \varPsi \Big|\varPsi \right\rangle }=\frac{{\displaystyle \sum}_{i\alpha, j\beta }{c}_{i\alpha}^{*}{c}_{j\beta}\left\langle {\phi}_{i\alpha}\left|\widehat{H}\right|{\phi}_{j\beta}\right\rangle }{{\displaystyle \sum}_{i\alpha, j\beta }{c}_{i\alpha}^{*}{c}_{j\beta}\left\langle {\phi}_{i\alpha}\Big|{\phi}_{j\beta}\right\rangle } $$
(4.14)

The variation of E with respect to \( {c}_{i\alpha}^{*} \) is easily calculated:

$$ \frac{\delta E}{\delta {c}_{i\alpha}^{*}}=\frac{1}{{\displaystyle \sum}_{i\alpha, j\beta }{c}_{i\alpha}^{*}{c}_{j\beta}\left\langle {\phi}_{i\alpha}\Big|{\phi}_{j\beta}\right\rangle }{\displaystyle \sum}_{j\beta }{c}_{j\beta}\left[\left\langle {\phi}_{i\alpha}\left|\widehat{H}\right|{\phi}_{j\beta}\right\rangle -E\left\langle {\phi}_{i\alpha}\Big|{\phi}_{j\beta}\right\rangle \right] $$
(4.15)

To obtain the lowest value of E, Eq. 4.15 must equal zero for every i and α. We thus obtain the secular equation for the coefficients \( {c}_{j\beta} \):

$$ \left|{H}_{i\alpha, j\beta }-E{S}_{i\alpha, j\beta}\right|=0 $$
(4.16)

Equation 4.16 is called the generalized eigenvalue equation. \( {H}_{i\alpha, j\beta} \) and \( {S}_{i\alpha, j\beta} \) are the elements of the Hamiltonian matrix and the overlap matrix in the basis \( \left\{{\phi}_{i\alpha}\right\} \), respectively. If \( \left\{{\phi}_{i\alpha}\right\} \) is a set of orthogonal functions, Eq. 4.16 reduces to

$$ \left|{H}_{i\alpha, j\beta }-E{\delta}_{i\alpha, j\beta}\right|=0 $$
(4.17)

which is familiar from Sect. 4.2. Equations 4.16 and 4.17 constitute the LCAO method. If the \( \left\{{\phi}_{i\alpha}\right\} \) are not chosen as actual atomic orbitals, the method is usually called “tight binding”.
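Numerically, the generalized eigenvalue problem of Eq. 4.16 is commonly reduced to an ordinary one by Löwdin orthogonalization: with \( X={S}^{-1/2} \), the problem \( Hc=ESc \) becomes \( \left(XHX\right)y=Ey \) with \( c=Xy \). A minimal Python sketch with toy 2×2 matrices (all numbers illustrative):

```python
import numpy as np

# Solve the generalized eigenvalue problem |H - E S| = 0 (Eq. 4.16) by
# Loewdin orthogonalization. Toy two-orbital model; S = I would recover
# the ordinary problem of Eq. 4.17.
H = np.array([[-1.0, -0.3],
              [-0.3, -0.5]])
S = np.array([[1.0, 0.1],
              [0.1, 1.0]])          # overlap matrix of the basis

s_val, s_vec = np.linalg.eigh(S)
X = s_vec @ np.diag(s_val**-0.5) @ s_vec.T   # X = S^(-1/2)
E, Y = np.linalg.eigh(X @ H @ X)             # ordinary eigenproblem
C = X @ Y                                    # coefficients c of Eq. 4.13
print(E)
```

One can verify that each column of C satisfies H c = E S c, i.e. the secular condition of Eq. 4.16.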

4.3.2 Slater-Koster Two-Center Approximation

According to the above discussion, the key step in the TB method is constructing the Hamiltonian matrix H. Appropriate approximations can greatly simplify the construction of the elements of H. In 1954, Slater and Koster suggested the two-center approximation in their classic paper [29], which expresses all Hamiltonian elements through a limited number of two-center integrals; the total number of these basic integrals is around 30. The Slater-Koster two-center approximation thus makes the TB method practical and is the foundation of modern TB simulation packages.

Two points of the Slater-Koster two-center approximation (SK approximation for short) should be emphasized. First, let us explicitly write down one element of H for a periodic system, i.e., a crystal. In this case the basis function is \( {\varphi}_{i\alpha} \), the Bloch sum of atomic orbitals \( {\phi}_{\alpha} \):

$$ {\varphi}_{i\alpha}\left(\overrightarrow{r}\right)=\frac{1}{\sqrt{N_{\mathrm{cell}}}}{\displaystyle \sum}_{n=1}^{N_{cell}} \exp \left(i\overrightarrow{k}\cdot {\overrightarrow{R}}_i^n\right){\phi}_{\alpha}\left(\overrightarrow{r}-{\overrightarrow{R}}_i^n\right) $$
(4.18)

The superscript n is the cell index and \( \overrightarrow{k} \) is a vector in reciprocal space. After some basic algebra, we obtain \( {H}_{i\alpha, j\beta}\left(\overrightarrow{k}\right) \):

$$ {H}_{i\alpha, j\beta}\left(\overrightarrow{k}\right)=\frac{1}{N_{\mathrm{cell}}}{\displaystyle \sum}_{n,m=1}^{N_{\mathrm{cell}}} \exp \left[i\overrightarrow{k}\cdot \left({\overrightarrow{R}}_j^m-{\overrightarrow{R}}_i^n\right)\right]\times {\displaystyle \int }d\overrightarrow{r}{\phi}_{\alpha}^{*}\left(\overrightarrow{r}-{\overrightarrow{R}}_i^n\right)\widehat{H}{\phi}_{\beta}\left(\overrightarrow{r}-{\overrightarrow{R}}_j^m\right) $$
(4.19)

Because \( \widehat{H} \) is a function of the positions of the electrons and atoms, the integrals in Eq. 4.19 fall into three types: (1) on-site integrals, in which the three factors \( {\phi}_{\alpha} \), \( \widehat{H} \), and \( {\phi}_{\beta} \) are centered on the same atom; (2) two-center integrals, in which two of these factors share one center and the third sits on another; and (3) three-center integrals, in which each factor has its own center. Because of the local nature of \( {\phi}_{\alpha} \) and \( {\phi}_{\beta} \), in most cases the on-site integrals are the largest while the three-center integrals are the smallest. The contributions to \( {H}_{i\alpha, j\beta}\left(\overrightarrow{k}\right) \) are therefore truncated at two-center integrals; three-center and higher-order integrals are ignored. This truncation greatly decreases the number of terms in \( {H}_{i\alpha, j\beta}\left(\overrightarrow{k}\right) \) and is the first point of the SK approximation.

Two-center integrals are usually understood as “bonding” between two orbitals centered on two atoms. They are explicit functions of the relative position \( {\overrightarrow{R}}_{ij} \) of the two atoms. Though they can differ greatly from one another, since \( {\overrightarrow{R}}_{ij} \) can be any vector, they can always be expressed as combinations of a few basic two-center terms. This is the second key point of the SK approximation. These basic terms are referred to as “bonding terms” and are written as \( {V}_{ss\sigma } \), \( {V}_{sp\sigma } \), \( {V}_{pp\sigma } \), and \( {V}_{pp\pi } \), etc. The first two subscripts are the angular momentum labels of the two orbitals, and the third indicates the bonding type, which depends on the relative orientation and symmetry of the two orbitals.

Figures 4.4 and 4.5 show simple examples of how to express two-center integrals in terms of bonding terms. One atom is at the origin and the other at \( \overrightarrow{R} \), whose direction cosines with respect to the x, y, and z axes are l, m, and n, respectively. The s-s term is independent of the orientation of \( \overrightarrow{R} \): \( \left\langle s\left|\widehat{H}\right|s\right\rangle ={V}_{ss\sigma } \) (Fig. 4.4a). The p_y-p_y interaction can be decomposed as \( \left\langle {p}_y\left|\widehat{H}\right|{p}_y\right\rangle ={m}^2{V}_{pp\sigma }+\left(1-{m}^2\right){V}_{pp\pi } \) (Fig. 4.4b), and the s-p_y term is \( \left\langle s\left|\widehat{H}\right|{p}_y\right\rangle =m{V}_{sp\sigma } \) (Fig. 4.5).

Fig. 4.4

Illustration of two-center approximation of sp-type interactions. (a) is s-s term and (b) is p y -p y term

Fig. 4.5

Illustration of two-center approximation of s-p y term

Thus four basic bonding terms suffice to express all sp-type interactions, which is why the SK approximation has been so tremendously successful. For higher-order orbitals, i.e., d, f, g, etc., one has to use angular momentum theory to calculate the two-center integrals, as discussed in detail in a couple of references [30, 31].
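The sp-type rules above generalize to \( \left\langle s\left|\widehat{H}\right|{p}_i\right\rangle ={l}_i{V}_{sp\sigma } \) and \( \left\langle {p}_i\left|\widehat{H}\right|{p}_j\right\rangle ={l}_i{l}_j{V}_{pp\sigma }+\left({\delta}_{ij}-{l}_i{l}_j\right){V}_{pp\pi } \), with \( \left({l}_1,{l}_2,{l}_3\right)=\left(l,m,n\right) \). They can be packaged into a small routine; in the Python sketch below, the function name and the sign convention for \( \left\langle {p}_i\left|\widehat{H}\right|s\right\rangle \) are our own choices:

```python
import numpy as np

def sk_sp_block(direction, Vss, Vsp, Vpps, Vppp):
    """Slater-Koster two-center block for an sp basis, ordered (s, px, py, pz).
    `direction` is the vector from atom 1 to atom 2; its normalized
    components are the direction cosines (l, m, n).
    Vss=V_ss_sigma, Vsp=V_sp_sigma, Vpps=V_pp_sigma, Vppp=V_pp_pi."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    H = np.empty((4, 4))
    H[0, 0] = Vss                      # <s|H|s> = V_ss_sigma (Fig. 4.4a)
    H[0, 1:] = d * Vsp                 # <s|H|p_i> = l_i V_sp_sigma
    H[1:, 0] = -d * Vsp                # odd parity of p under inversion
    H[1:, 1:] = (np.outer(d, d) * Vpps
                 + (np.eye(3) - np.outer(d, d)) * Vppp)
    return H
```

For a bond along y (l = n = 0, m = 1), the block reproduces the cases of Figs. 4.4 and 4.5: the s-p_y element is \( m{V}_{sp\sigma } \) and the p_y-p_y element is purely σ, while p_x-p_x is purely π.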

4.3.3 Total Energy in TB

By diagonalizing the Hamiltonian matrix, a set of eigenvalues is obtained, and the corresponding energy of the system can be expressed as

$$ {E}_{\mathrm{tot}}=2{\displaystyle \sum}_{\lambda }{\varepsilon}_{\lambda }f\left({\varepsilon}_{\lambda}\right) $$
(4.20)

Here λ is the band index, the factor 2 comes from the spin degeneracy of each band, and \( f\left({\varepsilon}_{\lambda}\right) \) is the Fermi-Dirac distribution at 0 K. However, the accuracy of the TB method might be challenged, since the above equation takes into account only the on-site terms and “bonding” contributions, whereas \( {E}_{\mathrm{tot}} \) should also contain the Coulomb repulsion of electrons and ions. Equation 4.20 therefore severely overestimates the cohesive energy of the system. In practice, the Coulomb repulsion is dealt with by adding an extra term \( {E}_{\mathrm{rep}} \):

$$ {E}_{\mathrm{tot}}=2{\displaystyle \sum}_{\lambda }{\varepsilon}_{\lambda }f\left({\varepsilon}_{\lambda}\right)+{E}_{\mathrm{rep}}=2{\displaystyle \sum}_{\lambda }{\varepsilon}_{\lambda }f\left({\varepsilon}_{\lambda}\right)+\frac{1}{2}{\displaystyle \sum}_{i,j}A \exp \left(-{R}_{ij}/{R}_0\right) $$
(4.21)

A and \( {R}_0 \) are parameters determined from experiments or DFT calculations.
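Equation 4.21 can be made concrete with a toy model: a ring of N atoms with one orbital per atom, nearest-neighbor hopping t, and the exponential repulsion restricted to nearest neighbors. All parameter values below are illustrative:

```python
import numpy as np

# Total TB energy (Eq. 4.21) for a toy N-atom ring at half filling:
# band term 2 * sum of occupied eigenvalues, plus pairwise repulsion
# (1/2) sum_{i,j} A exp(-R_ij/R0) kept to nearest neighbors only.
N, t, a = 8, -1.0, 2.5        # atoms, hopping, spacing (arbitrary units)
A, R0 = 5.0, 1.0              # repulsion parameters (illustrative)

H = t * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = t       # periodic boundary condition (ring)
eps = np.linalg.eigvalsh(H)   # eigenvalues, sorted ascending

n_electrons = N               # half filling: N electrons in N/2 bands
E_band = 2.0 * np.sum(eps[: n_electrons // 2])   # Eq. 4.20
E_rep = N * A * np.exp(-a / R0)  # each atom has 2 neighbors; 1/2 * N * 2
E_tot = E_band + E_rep
print(E_band, E_rep, E_tot)
```

For this ring the eigenvalues are \( 2t \cos \left(2\pi k/N\right) \), so the band term can be checked analytically against the numerical sum.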

4.3.4 Examples

Compared with full self-consistent first-principles calculations, the TB method does suffer from limited generality and transferability of its parameters. However, its transparent physical picture and its approximate but reasonable description of electronic structure make it a very useful tool for analyzing the relationship between structure and electronic properties in nanomaterials. An example of using the TB method to design a quantum dot based on a single nanotube is illustrated here (Fig. 4.6) [32].

Fig. 4.6

The geometry of tubes used to create nanotube-based quantum wells. The two bending regions are separated by an undeformed segment (a) 6.7 nm long in the (10,0) tube and (b) 6.4 nm long in the (9,0) SWNT [32]

As is well known, a nanotube can be thought of as a graphene sheet rolled into a cylindrical tube. It has a tunable band gap that depends strongly on its topological structure, and this unique electronic property may find many applications in nanoelectronics. For example, by changing the chirality of a single nanotube using topological defects, a variety of metal-semiconductor, metal-metal, and semiconductor-semiconductor junctions can be generated. A quantum dot (QD) can be fabricated on a single-wall nanotube (SWNT) by mechanical deformation. As shown in Fig. 4.6a, kinks in a semiconducting SWNT create dips in the energy band gap. Together with the information from the eigenstate wavefunctions, one can conclude that these kinks effectively behave like acceptor QDs. This could be an effective and simple way of creating room-temperature quantum dot devices. For a metallic SWNT, on the other hand, the electronic structure responds only weakly to the deformation (Fig. 4.6b).

4.4 Molecular Dynamics

Although the molecular dynamics (MD) method performs atomistic simulations, it has no quantum mechanical background, which distinguishes it from the DFT and TB methods. MD treats atoms as classical particles whose motion is governed by Newton’s equations. The atomic interactions are represented by empirical potentials, given either as analytical functions with parameters or as data on grids. The MD method therefore performs neither SCF calculations nor diagonalization of large matrices, and can be employed to study the dynamical evolution of large systems (10⁶–10⁸ atoms, as shown in Fig. 4.1) under complicated loading conditions, e.g., stress, temperature, and ion bombardment [33–35].

4.4.1 Empirical Potentials

4.4.1.1 Lennard-Jones Potential

The Lennard-Jones potential is a famous type of pair potential. It describes atomic interactions as

$$ V(r)=4\varepsilon \left[{\left(\frac{\sigma }{r}\right)}^{12}-{\left(\frac{\sigma }{r}\right)}^6\right] $$
(4.22)

ε and σ are parameters determined by fitting important properties, e.g., the equilibrium distance and binding energy. In modern MD simulations, the Lennard-Jones potential is usually used to describe interactions between gas atoms.
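Equation 4.22 and the corresponding pair force are straightforward to implement; the sketch below (function names ours, reduced units) also makes the fitting targets explicit: V(σ) = 0, and the well minimum sits at \( r={2}^{1/6}\sigma \) with depth −ε, where the force vanishes:

```python
import numpy as np

def lj_potential(r, eps, sigma):
    """Lennard-Jones pair energy, Eq. 4.22."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_force(r, eps, sigma):
    """Radial force -dV/dr (positive = repulsive), from differentiating
    Eq. 4.22: 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r
```

In a real MD code these pairwise terms are summed over neighbor lists with a cutoff radius, but the functional form is exactly the one above.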

4.4.1.2 Embedded Atomic Method

The embedded atom method (EAM) gives better results for bulk materials than pair potentials [36]. Besides a pair potential, EAM introduces an additional term, so the total energy of the system is expressed as

$$ {E}_{\mathrm{tot}}=\frac{1}{2}{\displaystyle \sum}_{i,j}V\left({R}_{ij}\right)+{\displaystyle \sum}_iF\left[{\displaystyle \sum}_j\rho \left({R}_{ij}\right)\right] $$
(4.23)

F[ρ] is called the “embedding energy”: the energy gained when an atom is placed into the electron gas contributed by the other atoms in its vicinity. Clearly, EAM embodies concepts similar to “atomic bonds” and the density functional. V(R), ρ(R), and F[ρ] are usually functions with 10–20 parameters. One fits these parameters to reproduce the lattice constant, cohesive energy, vacancy formation energy, surface energy, several elastic constants, the migration energy of a point defect, and so on. More recently, Ercolessi and Adams developed the force-matching method [37]. This method employs first-principles calculations to obtain the force acting on each atom in a series of reference configurations. It then defines a target function as follows:

$$ F(p)={F}_f(p)+{F}_c(p)={\left(3{\displaystyle \sum}_{k=1}^M{N}_k\right)}^{-1}{\displaystyle \sum}_{k=1}^M{\displaystyle \sum}_{i=1}^{N_k}\left|{f}_{ki}(p)-{f}_{ki}^{\mathrm{ref}}\right|{}^2+{\displaystyle \sum}_{r=1}^{N_c}{W}_r\left|{A}_r(p)-{A}_r^{\mathrm{ref}}\right|{}^2 $$
(4.24)

M is the number of reference configurations and Nk is the number of atoms in the k-th configuration; f is the atomic force, A_r is one of the abovementioned material properties, W_r is its assigned weight, and p is the set of parameters. By minimizing Eq. 4.24, one obtains the desired EAM potential [39]. Different groups have applied the force-matching method to obtain high-quality potentials for several metals and even binary systems [34, 40], which attests to the fidelity of this method.
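The structure of Eq. 4.24 translates directly into code. The sketch below (Python; the linear trial force model and the toy configuration are hypothetical placeholders for illustration, not the potentials of Refs. [37, 39]) evaluates F(p) = F_f(p) + F_c(p) for a given parameter set:

```python
def target_function(p, configs, f_ref, force_model,
                    properties=(), prop_refs=(), weights=()):
    """Force-matching target function F(p) = F_f(p) + F_c(p), Eq. 4.24.

    configs     : M configurations, each a list of (x, y, z) positions
    f_ref       : matching first-principles reference forces per configuration
    force_model : callable (positions, p) -> model forces for parameters p
    properties  : callables A_r(p) for the fitted material properties
    """
    # F_f: mean-squared force deviation over all 3 * sum(N_k) components
    n_comp = 3 * sum(len(c) for c in configs)
    f_term = 0.0
    for positions, ref in zip(configs, f_ref):
        for f_model, f_first in zip(force_model(positions, p), ref):
            f_term += sum((a - b) ** 2 for a, b in zip(f_model, f_first))
    f_term /= n_comp
    # F_c: weighted squared deviations of the material properties A_r
    c_term = sum(w * (A(p) - A_r) ** 2
                 for A, A_r, w in zip(properties, prop_refs, weights))
    return f_term + c_term

# Tiny self-check with a hypothetical linear force model f_i = p * r_i:
# "reference" forces generated at p = 2 are matched exactly only at p = 2.
demo_model = lambda pos, p: [(p * x, p * y, p * z) for x, y, z in pos]
demo_configs = [[(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]]
demo_ref = [demo_model(demo_configs[0], 2.0)]
```

Minimizing this function over p, for example with a standard least-squares optimizer, yields the fitted potential parameters.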

4.4.2 Integrators for the Equations of Motion

Given a reliable empirical potential, MD calculates the force on each atom as the negative derivative of the potential energy with respect to the atomic position, and then updates the velocity and position of each atom according to Newton's equation. Mathematically, MD therefore amounts to solving a second-order ordinary differential equation with initial conditions (r0, v0):

$$ \frac{d^2\overrightarrow{r}}{d{t}^2}=f\left(t,r,v\right),\kern1em \overrightarrow{r}(0)={\overrightarrow{r}}_0,\kern0.5em \overrightarrow{v}(0)={\overrightarrow{v}}_0 $$
(4.25)

Appropriate differential algorithms are essential for MD simulations.

4.4.2.1 Verlet Algorithm and Prediction-Correction Algorithm

The Verlet algorithm and the prediction-correction algorithm are discussed here, since they are widely used and form the basis of more advanced algorithms as well. Given the position r(t), velocity v(t), and force f(t) at time t, we can obtain r(t + Δt) and r(t − Δt) through Taylor expansion:

$$ \overrightarrow{r}\left(t+\varDelta t\right)=\overrightarrow{r}(t)+\overrightarrow{v}(t)\cdot \varDelta t+\frac{\overrightarrow{f}(t)}{2m}\cdot \varDelta {t}^2+\frac{\varDelta {t}^3}{6}\overset{\dddot{}}{r}+O\left(\varDelta {t}^4\right) $$
(4.26)
$$ \overrightarrow{r}\left(t-\varDelta t\right)=\overrightarrow{r}(t)-\overrightarrow{v}(t)\cdot \varDelta t+\frac{\overrightarrow{f}(t)}{2m}\cdot \varDelta {t}^2-\frac{\varDelta {t}^3}{6}\overset{\dddot{}}{r}+O\left(\varDelta {t}^4\right) $$
(4.27)

Summing Eqs. 4.26 and 4.27 gives the position update, and subtracting them gives the velocity:

$$ \overrightarrow{r}\left(t+\varDelta t\right)=2\overrightarrow{r}(t)-\overrightarrow{r}\left(t-\varDelta t\right)+\frac{\overrightarrow{f}(t)}{m}\cdot \varDelta {t}^2+O\left(\varDelta {t}^4\right) $$
(4.28)
$$ \overrightarrow{v}(t)=\frac{\overrightarrow{r}\left(t+\varDelta t\right)-\overrightarrow{r}\left(t-\varDelta t\right)}{2\varDelta t}+O\left(\varDelta {t}^2\right) $$
(4.29)

Equations 4.28 and 4.29 constitute the Verlet algorithm. Since positions and velocities at time t are obtained simultaneously, the Verlet algorithm can be used to evaluate the total energy of the system. Another key feature of the Verlet algorithm is time reversibility: if the velocity of each atom is suddenly flipped at time t = nΔt, the system will retrace the same trajectory back to its initial positions after n steps. As a consequence, the total energy of the system is conserved in the Verlet algorithm. Detailed analysis shows that this feature follows from the Liouville equation for conservative force systems [41]. The Verlet algorithm is therefore ideal for microcanonical ensembles. Its main limitation is that the short-time fluctuation of the energy is large, since the velocities are accurate only to the order of Δt², as shown in Eq. 4.29.
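The update rules of Eqs. 4.28 and 4.29 translate directly into code. The sketch below (Python, one dimension, with a harmonic test force as an illustrative assumption; r(−Δt) is bootstrapped from the Taylor expansion, Eq. 4.27) propagates a trajectory and records position and velocity at each step:

```python
def verlet(force, m, r0, v0, dt, n_steps):
    """Propagate 1D motion with the Verlet scheme, Eqs. 4.28 and 4.29."""
    # Bootstrap r(-dt) from the Taylor expansion, Eq. 4.27.
    r_prev = r0 - v0 * dt + force(r0) / (2.0 * m) * dt ** 2
    r = r0
    traj = []
    for _ in range(n_steps):
        r_next = 2.0 * r - r_prev + force(r) / m * dt ** 2   # Eq. 4.28
        v = (r_next - r_prev) / (2.0 * dt)                   # Eq. 4.29
        traj.append((r, v))                                  # state at time t
        r_prev, r = r, r_next
    return traj

# Harmonic oscillator with m = k = 1: the exact solution is r(t) = cos(t),
# and the total energy E = (v**2 + r**2) / 2 should stay near 0.5.
traj = verlet(lambda r: -r, 1.0, 1.0, 0.0, 0.01, 1000)
```

Because r and v are available at the same time step, the total energy can be monitored along the trajectory; its conservation is a practical check of the integrator.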

As shown in Eq. 4.25, an MD simulation amounts to solving a second-order ODE. The prediction-correction (PC) algorithm can be used to obtain the trajectory of the system. First, the position and velocity at time t + Δt are expressed as linear combinations of the forces of the previous k steps, with the linear coefficients chosen to match the corresponding coefficients of the Taylor expansion up to the term of order Δt^k; the position and velocity are thus updated as follows:

$$ \begin{array}{l}\overrightarrow{r}\left(t+\varDelta t\right)=\overrightarrow{r}(t)+\overrightarrow{v}(t)\cdot \varDelta t+\varDelta {t}^2{\displaystyle \sum}_{i=1}^{k-1}{\alpha}_i\overrightarrow{f}\left[t+\left(1-i\right)\varDelta t\right]\hfill \\ {}\overrightarrow{v}\left(t+\varDelta t\right)=\frac{\overrightarrow{r}(t)-\overrightarrow{r}\left(t-\varDelta t\right)}{\varDelta t}+\varDelta t{\displaystyle \sum}_{i=1}^{k-1}{\alpha}_i^{\prime}\overrightarrow{f}\left[t+\left(1-i\right)\varDelta t\right]\hfill \end{array} $$
(4.30)

This is called the prediction step. Second, a correction step is performed: compute \( \overrightarrow{f}\left(t+\varDelta t\right) \) at the predicted position, take it as a new data point, and re-estimate \( \overrightarrow{r}\left(t+\varDelta t\right) \) and \( \overrightarrow{v}\left(t+\varDelta t\right) \):

$$ \begin{array}{l}\overrightarrow{r}\left(t+\varDelta t\right)=\overrightarrow{r}(t)+\overrightarrow{v}(t)\cdot \varDelta t+\varDelta {t}^2{\displaystyle \sum}_{i=1}^{k-1}{\beta}_i\overrightarrow{f}\left[t+\left(2-i\right)\varDelta t\right]\hfill \\ {}\overrightarrow{v}\left(t+\varDelta t\right)=\frac{\overrightarrow{r}(t)-\overrightarrow{r}\left(t-\varDelta t\right)}{\varDelta t}+\varDelta t{\displaystyle \sum}_{i=1}^{k-1}{\beta}_i^{\prime}\overrightarrow{f}\left[t+\left(2-i\right)\varDelta t\right]\hfill \end{array} $$
(4.31)

Table 4.1 presents the PC coefficients for k = 4. The PC algorithm is well suited to complex simulations owing to its flexibility. However, the total energy of the system is not a conserved quantity in the PC algorithm, because the algorithm is not time-reversible. This is not a severe problem in canonical-ensemble simulations, since the total energy is constantly adjusted there anyway.
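The coefficients in Table 4.1 define one particular k = 4 scheme. As a self-contained illustration of the predict-then-correct idea, the sketch below uses the Beeman integrator instead (an assumption of this example, not the Table 4.1 coefficients): the position is predicted from the two most recent forces, the force is re-evaluated at the predicted position, and the velocity is then corrected with it.

```python
def beeman_step(r, v, a, a_prev, force, m, dt):
    """One predict/correct step of the Beeman scheme (illustrative only)."""
    # Prediction: new position from the current and previous accelerations.
    r_new = r + v * dt + dt * dt * (4.0 * a - a_prev) / 6.0
    # Evaluate the force at the predicted position ...
    a_new = force(r_new) / m
    # ... and correct the velocity with the freshly computed acceleration.
    v_new = v + dt * (2.0 * a_new + 5.0 * a - a_prev) / 6.0
    return r_new, v_new, a_new, a

# Harmonic oscillator sanity run (m = k = 1, so r(t) = cos(t)).
force = lambda r: -r
dt = 0.01
r, v = 1.0, 0.0
a = a_prev = force(r) / 1.0   # bootstrap: assume a constant force before t = 0
for _ in range(1000):
    r, v, a, a_prev = beeman_step(r, v, a, a_prev, force, 1.0, dt)
```

As noted above, such schemes are not time-reversible, so the total energy drifts slightly over long runs; in canonical-ensemble simulations this drift is absorbed by the thermostat.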

Table 4.1 PC coefficients with k = 4

4.4.3 Examples

An example of an MD simulation of the interaction between a ½<111> screw dislocation and a Cu precipitate in a BCC Fe matrix is presented here [38]. The whole system contains 576,000 atoms. As shown in Fig. 4.7, the diameter of the spherical Cu precipitate, which has the BCC structure, is 2.3 nm. Under an external stress of 700 MPa, the dislocation penetrates into the Cu precipitate, as shown in Fig. 4.7a, b. However, the dislocation becomes pinned as it approaches the opposite precipitate-matrix interface, and hence is unable to glide out of the precipitate. Upon increasing the external stress, the dislocation line outside the precipitate continues to glide forward while the short dislocation segment within the precipitate remains pinned, resulting in a bowing-out of the dislocation. The bow-out angle θ gradually decreases from 180° at 700 MPa as the external stress increases, until it reaches the critical value θc = 144° under a shear stress of 1,000 MPa, at which point the dislocation suddenly detaches from the Cu precipitate and the dislocation line becomes straight.

Fig. 4.7
figure 7

MD snapshots of the dislocation core interacting with the 2.3 nm Cu precipitate as the dislocation glides on the \( \left(10\overline{1}\right) \) plane along the \( \left[1\overline{2}1\right] \) direction at (a) 243 ps, (b) 272 ps, (c) 312 ps, and (d) 320 ps, respectively. Green and red circles represent Cu and Fe atoms, respectively. [38]

Figure 4.8 presents the dislocation core structures during the pinning process shown in Fig. 4.7. Note that the dislocation core spreads along three directions inside the Cu precipitate (polarized core), while it spreads along six directions in the BCC Fe matrix (non-polarized core). When the dislocation reaches the boundary, it stops moving and its structure transforms from a polarized core to a non-polarized core (Fig. 4.8b, c). This transformation corresponds to the pinning, and the energy cost of the process is supplied by the bowing-out of the dislocation line. These results reveal that the dislocation/precipitate detachment process is accompanied by a polarized → non-polarized core transition, which may be responsible for the pinning effect. The above discussion thus presents a plausible precipitate-size-induced strengthening mechanism.

Fig. 4.8
figure 8

The dislocation core structure during the detachment process in Fig. 4.1 at (a) 280 ps, (b) 297 ps, (c) 303 ps, and (d) 315 ps. The red and green spheres represent Fe and Cu atoms, respectively. The dislocation glides along the \( \left[1\overline{2}1\right] \) direction under an external stress of 1,000 MPa [38]

4.5 Conclusions and Future Outlook

Computational simulations with atomistic resolution for nanomaterials have received increasing interest, because nanoscale properties are extremely difficult to measure or manipulate and, more importantly, are probably very sensitive to subtle environmental changes and perturbations, making repeated measurements challenging. In this chapter, the theoretical background of three widely used material simulation methods, namely the first-principles method, the tight-binding method, and molecular dynamics, has been introduced, followed by a detailed discussion of further theories with the necessary mathematical treatments and examples for each method. A rough picture of the applications of atomistic simulation methods in materials science can thus be formed.

Although each kind of simulation method has its essential limitations, virtual material modeling and design has achieved massive successes in the past four decades. Corrections, extensions, and even renovations of current simulation methods have been, or will be, introduced to match the rapid development of materials science. There is every reason to believe that virtual material modeling and design will become more and more important in both science and technology in the near future.