It is well-known that quantum chromodynamics (QCD) is non-perturbative in the low-energy region where atomic nuclei exist. This feature prevents a direct application of perturbation theory. To make progress, two complementary approaches are presently employed: lattice QCD (LQCD) [1] and chiral effective field theory (\(\chi \)EFT) [2]. The former amounts to a numerical evaluation of the QCD path integral on a space-time lattice, while the latter aims at exploiting the decoupling principles of the renormalization group (RG) to systematically formulate a potential description of the nuclear interaction rooted in QCD. LQCD is a computationally expensive approach that requires at least exascale resources for a realistic analysis of multi-nucleon systems, and it will most likely not be the most economical choice for analyzing nuclear systems. Nevertheless, in cases where numerically converged results can be obtained, LQCD offers a unique computational laboratory for theoretical studies of QCD in a low-energy setting [3].

The derivation of a nuclear potential in \(\chi \)EFT proceeds via the construction of an effective Lagrangian consisting of pions, nucleons, and sometimes also the \(\varDelta \) isobar, endowed with all possible interactions compatible with the symmetries of low-energy QCD. The details can be found in extensive reviews [4,5,6]. All short-distance physics, normally associated with quarks and gluons, resides beyond a hard momentum scale \(\varLambda _b \sim 1\) GeV that remains unresolved in \(\chi \)EFT. Such high-momentum dynamics is instead encoded in a set of low-energy constants (LECs) that must be determined from experimental data or, in a future scenario, hopefully computed directly from LQCD. \(\chi \)EFT is the theoretical framework for calculating observables as an expansion in powers of the small ratio \(Q/\varLambda _b\), where Q is a soft momentum scale \(\sim m_{\pi }\). If done right, this approach allows for a systematically improvable description of low-energy nuclear properties in harmony with the symmetries of low-energy QCD.

Fig. 90.1

Diagrammatic order-by-order representation of the \(\varDelta \)-full two-nucleon (NN) and three-nucleon (NNN) nuclear interaction up to NNLO in \(\chi \)EFT, based on so-called Weinberg power counting (WPC)

The promise of being systematically improvable is a unique selling point of \(\chi \)EFT, or any EFT for that matter. Indeed, although the order-by-order expansion contains an infinite number of terms and must be truncated, the omitted terms represent neglected physics and contribute to the systematic uncertainty. The upshot is that higher-order corrections should be less important and follow a pattern determined by the EFT expansion ratio. The organization of this expansion, such that increasingly unimportant physics appear at consecutively higher orders, is called power counting (PC).

The leading order (LO) in this expansion consists of the well-known one-pion exchange potential (Yukawa term) accompanied by a contact potential that describes any unresolved short-range physics at this order. The potentials at higher orders, i.e. next-to-leading order (NLO) etc., systematically introduce multiple-pion exchanges, accompanied by additional zero-range contact potentials, possible \(\varDelta \) excitations, and irreducible many-nucleon interactions, see Fig. 90.1.

Achieving an accurate theoretical description of the nuclear interaction, with quantified statistical and systematic uncertainties of the theoretical predictions, can be referred to as reaching a state of precision nuclear physics. There are several interesting facets of this ongoing endeavor:

  • On a fundamental physics level, it is well-known that the nuclear potentials from \(\chi \)EFT that are based on Weinberg power counting (WPC) do not generate observables that respect RG invariance, see e.g. [7] and references therein. At the same time, there is an ongoing debate regarding the need for, and validity of, probing large momenta in a potential description from an EFT that is only valid at low energies to begin with, see e.g. [8,9,10] for a selection of viewpoints. Presently, most ab initio calculations of atomic nuclei, including the calculations presented here, employ potentials based on WPC. There exist potentials with alternative PCs that fulfill the fundamental tests of RG invariance for observables in two- and three-nucleon systems, see e.g. [11]. Unfortunately, such potentials have not yet been employed in nuclear many-body calculations.

  • The numerical values of the LECs in \(\chi \)EFT must be determined from data before any quantitative analysis can proceed. From a frequentist perspective, parameter estimation often amounts to maximizing a likelihood. For \(\chi \)EFT, this turns into a non-linear optimization problem over a high-dimensional parameter domain [12,13,14]. Bayesian parameter estimation is increasingly explored in ab initio nuclear theory [15, 16]. This approach captures the entire probability distribution of the relevant parameters, not just their values at the mode of the distribution. However, the computational demands are substantially higher than for most frequentist methods, mainly due to repeated sampling of the model across the parameter domain; see the sketch following this list.

  • There are several sources of uncertainty in model calibration. For instance, the calibration data themselves come with uncertainties. Thus, any parameter estimation process will produce parameter covariances that must be quantified and propagated. There exist well-known methods, frequentist as well as Bayesian, for quantifying the statistical uncertainties at any level of the calculation, see e.g. [17, 18]. However, it remains a challenge to achieve full uncertainty quantification in complex models that require substantial high-performance resources for a single evaluation at one point in the parameter domain. Well-designed surrogate models can hopefully provide some leverage, see e.g. [19,20,21].

  • A theoretical model will never represent nature fully. Consequently, there are theory errors (sometimes referred to as systematic uncertainties or model discrepancies). The statistical uncertainties stemming from the calibration data discussed above are typically not the main source of error in \(\chi \)EFT predictions [22, 23]. It is therefore of key importance to identify and quantify the sources of systematic error in \(\chi \)EFT. At the moment, such analyses are rarely performed in ab initio nuclear theory. \(\chi \)EFT models combined with ab initio methods are often computationally complex and require substantial computational resources; as such, Markov chain Monte Carlo sampling with long mixing times can be prohibitively expensive. Furthermore, it is not clear how to identify and exploit the relevant momentum scales in descriptions of atomic nuclei.
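To make the cost argument above concrete, here is a minimal random-walk Metropolis sketch in Python. The Gaussian log_posterior over two parameters is a deliberately trivial stand-in for a \(\chi \)EFT likelihood, and all names are illustrative; the point is that every proposal requires a fresh model evaluation, which in a realistic calibration is an expensive ab initio calculation.

```python
import numpy as np

def log_posterior(alpha):
    # Toy stand-in: independent standard-normal posterior over two "LECs".
    # In a real calibration this call would trigger a full model evaluation.
    return -0.5 * np.sum(alpha**2)

rng = np.random.default_rng(0)
alpha, step, chain = np.zeros(2), 0.5, []
for _ in range(20_000):
    proposal = alpha + step * rng.normal(size=alpha.size)
    # Metropolis rule: accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(alpha):
        alpha = proposal
    chain.append(alpha)

chain = np.array(chain)
print("posterior mean:", chain.mean(axis=0))  # close to [0, 0]
print("posterior std: ", chain.std(axis=0))   # close to [1, 1]
```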

1 Ab initio Nuclear Theory with \(\chi \)EFT

Ab initio methods, such as the no-core shell-model (NCSM) [24], the coupled cluster method (CC) [25], the in-medium similarity renormalization group (IM-SRG) [26], or lattice EFT [27], for solving the many-nucleon Schrödinger equation

$$\begin{aligned} \left( \sum _{i=1}^{A} \frac{p_{i}^2}{2m_N} + \sum _{i<j=1}^{A}\mathcal {V}^{NN}_{ij}(\mathbf {\alpha }) + \sum _{i<j<k=1}^{A}\mathcal {V}^{NNN}_{ijk}(\mathbf {\alpha }) \right) |\varPsi \rangle =E|\varPsi \rangle \end{aligned}$$
(90.1)

with two-nucleon (NN) and three-nucleon (NNN) potentials derived from \(\chi \)EFT with a set of LECs \(\mathbf {\alpha }\), make use of controlled mathematical approximations. Such many-body approaches can provide numerically exact nuclear wave functions for several bound, resonant, and scattering states in isotopes well into the region of medium-mass nuclei [28,29,30,31]. This development has drastically changed the agenda for developing nuclear interactions for atomic nuclei.
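To illustrate the eigenvalue-problem structure behind (90.1), though not the actual many-body machinery of [24,25,26,27], the following sketch solves a two-body bound-state problem on a discretized momentum mesh with a separable Yamaguchi-type potential. Units are schematic (\(\hbar = 2\mu = 1\)) and the parameters lam and beta are illustrative choices of ours, not a realistic fit.

```python
import numpy as np

def yamaguchi(p, pp, lam=2.0, beta=1.0):
    """Separable S-wave potential V(p, p') = -lam * g(p) * g(p')."""
    g = lambda q: 1.0 / (q**2 + beta**2)
    return -lam * g(p) * g(pp)

def ground_state_energy(n_mesh=64, p_max=40.0):
    # Gauss-Legendre quadrature on [0, p_max] for the radial momentum integral
    x, w = np.polynomial.legendre.leggauss(n_mesh)
    p = 0.5 * p_max * (x + 1.0)
    w = 0.5 * p_max * w
    # Symmetrized Hamiltonian: H_ij = p_i^2 delta_ij + s_i V(p_i, p_j) s_j,
    # where s_i = p_i sqrt(w_i) absorbs the p^2 dp integration measure.
    s = p * np.sqrt(w)
    H = np.diag(p**2) + np.outer(s, s) * yamaguchi(p[:, None], p[None, :])
    return np.linalg.eigvalsh(H)[0]

# The bound-state energy converges as the basis (mesh) is refined, a toy
# version of the controlled approximations used by ab initio methods.
for n in (16, 32, 64):
    print(f"n = {n:3d}  E0 = {ground_state_energy(n):+.6f}")
```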

In the beginning of the previous decade, a lot of effort was spent on constructing so-called high-precision nuclear interactions, most prominently Idaho-N3LO [32], AV18 [33], and CD-Bonn [34], that could reproduce the collected data on NN scattering below the pion-production threshold with nearly surgical precision. We now know that such interactions often fail to reproduce important bulk properties of atomic nuclei [31, 35,36,37]. However, fifteen years ago it was unclear how to gauge the quality of the many-nucleon wave functions, since they relied on a series of involved approximations. Although it is still a challenge to quantify the theoretical uncertainty in many-body calculations, modern ab initio methods are tremendously refined. Indeed, their fidelity and domain of applicability have been dramatically extended during the recent decade. This development has led to an increased focus on designing improved microscopic nuclear potentials based on novel fitting protocols. To ensure steady progress, we need to critically examine and systematically compare the quality of different sets of interaction models and their predictive power.

1.1 Optimization of LECs and Uncertainty Quantification of Predictions from \(\chi \)EFT

The canonical approach to estimating the numerical values of the LECs \(\mathbf {\alpha }\) in \(\chi \)EFT is to minimize some weighted sum of squared residuals

$$\begin{aligned} \chi ^2(\mathbf {\alpha }) = \sum _{i \in \mathcal {D}} \left( \frac{\mathcal {O}^\mathrm{theo}_{i}(\mathbf {\alpha }) - \mathcal {O}_{i}^\mathrm{exp}}{\sigma _i}\right) ^2, \end{aligned}$$
(90.2)

where \(\mathcal {D}\) represents the calibration dataset, and \(\mathcal {O}_{i}\) denotes experimental and theoretical values for observable i in \(\mathcal {D}\) with an obvious notation. The theoretical description of each observable depends explicitly on the LECs \(\mathbf {\alpha }\). In the limit of independent data, the uncertainty associated with each observable is represented by \(\sigma _i\). Known, or estimated, correlations across the data can also be incorporated  [38, 39]. Using well-known methods from statistical regression analysis, often assuming normally distributed residuals, it is possible to extract the covariance matrix of the parameters that minimize the objective.
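In code, this estimation step could look like the following hedged sketch, where toy_observable stands in for the \(\chi \)EFT prediction \(\mathcal {O}^\mathrm{theo}_{i}(\mathbf {\alpha })\), the data are synthetic, and the covariance matrix follows from the Jacobian of the weighted residuals at the optimum (the standard Gaussian approximation).

```python
import numpy as np
from scipy.optimize import least_squares

def toy_observable(alpha, x):
    # Illustrative stand-in for O_i^theo(alpha): a simple polynomial model.
    return alpha[0] + alpha[1] * x + alpha[2] * x**2

# Synthetic calibration dataset D with independent errors sigma_i
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 25)
sigma = 0.05 * np.ones_like(x)
data = toy_observable(np.array([1.0, -0.5, 0.3]), x) + rng.normal(0.0, sigma)

def residuals(alpha):
    # Weighted residuals, so that chi^2(alpha) = sum_i r_i^2 as in Eq. (90.2)
    return (toy_observable(alpha, x) - data) / sigma

fit = least_squares(residuals, x0=np.zeros(3))
# Covariance of the optimal parameters from the Jacobian at the minimum
cov = np.linalg.inv(fit.jac.T @ fit.jac)
print("alpha* =", fit.x)
print("1-sigma uncertainties =", np.sqrt(np.diag(cov)))
```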

An order-by-order uncertainty analysis of chiral interactions up to NNLO was undertaken in [13]. The objective function in that work incorporated an estimate of the theory uncertainty from \(\chi \)EFT, and the dataset \(\mathcal {D}\) comprised \(\pi N\) and NN scattering data as well as bound-state observables in \(A=2,3\) nuclei. The total covariance matrix for the LECs was determined for each analyzed interaction model. Additional components of the systematic uncertainty were probed by varying the regulator cutoff \(\varLambda \in [450,600]\) MeV as well as the maximum allowed scattering energy in the employed database of measured scattering cross sections. This effort resulted in a family of 42 chiral interactions at NNLO. Together, they furnish a valuable tool for probing uncertainties in ab initio few-nucleon predictions, see [40, 41] for representative examples of their use.

1.2 With an EFT, We Can Do Better

One way to estimate the effect of the first excluded term in an EFT expansion was suggested in [42]. Building on the work in [43], this was given a Bayesian interpretation in  [18]. In brief, if we write the order-by-order expansion of some observable \(\mathcal {O}\) as

$$\begin{aligned} \mathcal {O} = \mathcal {O}_0(a_0 q^0 + a_1 q^1 + a_2 q^2 + a_3 q^3 +\ldots ), \end{aligned}$$
(90.3)

where \(\mathcal {O}_0\) is the overall scale, e.g. the leading-order contribution, and we know the expansion parameter, e.g. \(q = (Q/\varLambda _b)\), then we can compute the probability distribution of the expansion coefficient \(a_i\) provided that we know the values of the lower-order coefficients \(a_0,\ldots ,a_{i-1}\). Applying Bayes' theorem with independent, identically distributed, and unbounded uniform prior distributions for the expansion coefficients \(a_i\) leads to a simple expression for the estimate of \(a_i\), with \((100 \times i/(i+1))\%\) confidence, given by

$$\begin{aligned} a_i = \max \{|a_n|\}_{n<i}. \end{aligned}$$
(90.4)

Although the above expression only provides an estimate, theoretical predictions equipped with truncation errors provide important guidance and demonstrate one of the main advantages of using an EFT. Refining the methods for quantifying EFT truncation errors in nuclear physics is of key importance.
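A compact implementation of the recipe in (90.3) and (90.4) might look as follows. The function name is ours, and we assume the caller supplies the powers of q that actually appear at each order, since these depend on the EFT at hand.

```python
import numpy as np

def eft_truncation_error(order_values, q, powers):
    """Estimate the first omitted term of an EFT expansion, Eqs. (90.3)-(90.4).

    order_values: cumulative order-by-order predictions [O_LO, O_NLO, ...]
    q:            expansion parameter Q / Lambda_b
    powers:       power of q contributing at each listed order
    """
    O0 = order_values[0]                              # overall scale from LO
    contribs = np.diff(order_values, prepend=0.0)     # per-order contributions
    a = contribs / (O0 * q ** np.asarray(powers))     # extract coefficients a_i
    a_next = np.max(np.abs(a))                        # Eq. (90.4)
    return abs(O0) * a_next * q ** (powers[-1] + 1)   # first omitted term
```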

2 Muon-Capture on the Deuteron

An excellent example of where theoretical uncertainty quantification plays an important role is in the theoretical analysis of the muon-deuteron \(\mu -d\) (doublet) capture rate \(\varGamma _D\), i.e. the rate of

$$\begin{aligned} \mu ^{-} + d \rightarrow \nu _{\mu } + n + n. \end{aligned}$$
(90.5)

Experimentally, this will be determined with 1.5% precision in the MuSun experiment. Such precision, if attained, corresponds to a tenfold improvement over previous experiments. The centerpiece of the MuSun experiment is to extract the two-body weak LEC \(d_R\) from a two-nucleon process. This LEC is of central importance in several other low-energy processes that are currently studied. It enters the proton-proton (pp) fusion cross section, an important low-energy process that generates energy in the Sun. Given its extremely low cross section, this process cannot be measured on Earth. The LEC \(d_R\) also appears in neutrino-deuteron scattering, and once the \(\pi N\) couplings \(c_3\) and \(c_4\) are fixed, it determines the LEC \(c_D\) which governs the strength of the one-pion exchange plus contact piece of the leading NNN interaction. A thorough analysis of the uncertainties in theoretical descriptions of \(\mu -d\) capture was carried out in [44], using the covariance matrices from [13], yielding for the \(S\)-wave contribution \( \varGamma _D^{^1S_0} = 252.4 ^{+1.5}_{-2.1} \, \mathrm{s}^{-1}.\)

Exploiting the Roy-Steiner analysis from [45], it was also possible to quantify the correlation between the \(\mu -d\) capture rate and the pp-fusion low-energy cross section in terms of the LEC \(c_D\). Furthermore, assuming an EFT expansion ratio \(q=\frac{m_{\pi }}{\varLambda _b} \sim 0.28\), i.e. estimating \(\varLambda _b \sim 500\) MeV, allowed for an order-by-order estimate of the EFT truncation error of the capture rate along the lines presented in Sect. 90.1.2. The LO-NLO-NNLO predictions of the capture rate are \(\varGamma _{D}^{^1S_0} = 186.3 + 61.0 + 5.5\) s\(^{-1}\), where the second and third terms are the NLO and NNLO contributions, respectively, added to the LO result (first term). This information leads to an estimated EFT truncation error of 4.6 s\(^{-1}\), with 75% confidence, which is clearly the dominating source of uncertainty.
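As a numerical cross-check of the quoted estimate (a sketch under our assumptions: \(m_\pi \approx 138\) MeV, \(\varLambda _b = 500\) MeV, and no linear term in the chiral NN expansion, so the listed orders carry powers \(q^0, q^2, q^3\)):

```python
import numpy as np

q = 138.0 / 500.0                          # assumed expansion parameter, ~0.28
O0 = 186.3                                 # LO capture rate (s^-1)
contribs = np.array([186.3, 61.0, 5.5])    # LO, NLO, NNLO contributions (s^-1)
powers = np.array([0, 2, 3])               # assumed: no q^1 term at NLO
a = contribs / (O0 * q**powers)            # extracted coefficients a_0, a_2, a_3
delta = O0 * np.abs(a).max() * q**4        # Eq. (90.4) applied to the q^4 term
print(f"estimated truncation error: {delta:.1f} s^-1")  # prints ~4.6
```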

3 From Few to Many

Increasing the number of nucleons in the system under study introduces several new challenges. The presence of multiple scales and the emergence of many-body effects such as collectivity, clusterization, and saturation are not trivial to understand from first principles, nor particularly easy to handle when solving the Schrödinger equation, and are therefore not straightforward to incorporate when calibrating the interaction. In [36], the LECs of a chiral NNLO interaction were optimized to reproduce few-nucleon data as well as binding energies and radii in \(^{14}\)C and selected oxygen isotopes. This approach to parameter estimation, resulting in the NNLO\(_\mathrm{sat}\) interaction, was facilitated by a novel application of the POUNDERs optimization algorithm [46] coupled to Jacobi-NCSM and CC methods. NNLO\(_\mathrm{sat}\) has enabled accurate predictions of radii and ground-state energies in selected medium-mass nuclei [47].

It should be pointed out that the NNLO\(_\mathrm{sat}\) interaction does not provide an accurate description of NN scattering cross sections, in particular for pp scattering, at relative momenta beyond \(\sim m_{\pi }\). At the same time, it is not obvious how to determine the domains of applicability of an interaction model and exploit this information such that the risk of overfitting is minimized. This and other challenges are intimately related to quantifying truncation errors in \(\chi \)EFT and in predictions from ab initio nuclear theory.

3.1 Delta Isobars and Nuclear Saturation

It turns out that the inclusion of the \(\varDelta \) isobar as an explicit low-energy degree of freedom in the effective Lagrangian, in addition to pions and nucleons, plays an important role in accurately reproducing the saturation properties of the nuclear interaction. See [48] for additional details. Figure 90.2 demonstrates the effect of incorporating the \(\varDelta \) up to NNLO in CC calculations of symmetric nuclear matter. Additional advantages of including the \(\varDelta \) were observed in [49,50,51]. Such results are not surprising from an EFT perspective, given that the \(\varDelta -N\) mass splitting is only about twice the pion mass and therefore below the expected breakdown scale of \(\chi \)EFT potentials [52]. Thus, the \(\varDelta \)-full chiral interaction provides a valuable starting point for constructing more refined \(\chi \)EFT interactions with improved uncertainty estimates.

Fig. 90.2

CC calculations of the energy per nucleon (in MeV) in symmetric nuclear matter at NNLO in \(\chi \)EFT with (solid line) and without (dashed line) the \(\varDelta \) isobar. Both interactions employ a momentum regulator-cutoff \(\varLambda = 450\) MeV. The shaded areas indicate the estimated EFT-truncation errors following the prescription presented in Sect. 90.1.2. The diamonds mark the saturation point and the black rectangle indicates the region \(E/A = -16 \pm 0.5\) MeV and \(\rho = 0.16 \pm 0.01\) fm\(^{-3}\)

4 Discussion and Outlook

It is clear that the computational capabilities in ab initio nuclear physics now exceed the accuracy of available chiral interactions. Making further progress requires improved statistical analysis and evaluation of interaction models. Hopefully, such efforts will bring us closer to a well-founded and microscopically rooted formulation of the nuclear interaction. There are several interesting challenges ahead of us. We must push the frontier of accurate ab initio methods further towards exotic systems and decays; systematically exploit information from NNN scattering data, decay probabilities, and saturation properties of infinite matter when optimizing the LECs of chiral interactions; demonstrate a connection between EFT(s) applied to nuclei and low-energy QCD, e.g. test PCs for RG invariance; and quantify systematic and statistical uncertainties in theoretical predictions. Continuous development of efficient computer codes to harness high-performance computing resources will hopefully enable detailed Bayesian analyses of ab initio calculations in the near future.