1 Preface

A sound theoretical description of nuclear forces is pivotal for understanding many important physical observables over a wide range of energy scales and densities: from few-nucleon physics to nuclear structure and reaction observables as well as astrophysical environments and associated phenomena.

Within the last 3 decades, significant progress in nuclear physics has been made possible, in part, thanks to the development of powerful ab initio many-body methods for approximately solving the nuclear Schrödinger equation, and the development of nuclear forces using effective field theory (EFT), in particular chiral EFT (\(\chi \)EFT). This progress means that it has now become increasingly important to quantify the theoretical uncertainties of the predictions, in particular the uncertainties stemming from the nuclear Hamiltonian itself because they often dominate the theoretical error budget. These uncertainties, primarily due to unknown or neglected physics, can lead to sizable errors when predicting nuclear observables of interest for next-generation experiments and astrophysical observations and, therefore, need to be managed and reliably quantified to enable precision nuclear physics. Indeed, theoretical predictions with quantified uncertainties facilitate the most meaningful comparisons with experimental and observational data.

The Institute for Nuclear Theory at the University of Washington hosted a virtual 3-week program to assess the state of low-energy nuclear physics and to evaluate pathways to further progress, with an emphasis on nuclear forces. The overarching questions addressed during the program were:

  • What are the current limitations of nuclear Hamiltonians? Which few- and many-body observables are ideal to constrain nuclear forces?

  • How can novel computational and statistical tools be used to improve nuclear forces and their uncertainty estimates? What precision can be achieved by going to higher orders in \(\chi \)EFT?

  • What is a suitable power counting for \(\chi \)EFT? What is the role of lattice quantum chromodynamics (LQCD) studies of few-nucleon systems in constraining nuclear EFTs?

  • What can be learned from quantum-information analyses of low-energy nuclear systems? Can quantum computing change the computational paradigm in nuclear physics in the upcoming decades?

The program brought together researchers with expertise in nuclear many-body techniques, EFT, and LQCD for nuclear physics to share recent advances and new developments, and to discuss shortcomings, generate new ideas, and identify pathways to address the questions above.

To finish the program with a summary of outstanding problems and questions, possible benchmarks and solutions, and clearly stated tasks for the community, all participants were invited to contribute short perspective pieces. These have been collected and merged into the present document. The wide range of topics covered by the contributed perspectives reflects the rich and stimulating developments that presently characterize a highly active nuclear-physics community. The various pieces touch upon renormalizability, power counting, unitarity, emulators, the determination of low-energy constants, the complex nature of open-source computing in science, the three-nucleon continuum, collectivity, regulator dependencies, matching LQCD to EFTs, variational LQCD spectroscopy, quantum information and quantum entanglement, and quantum computing and its migration to nuclear physics.

The hope is that this document will serve as an anthology for the community and help guide future developments, facilitate collaborative work between different sub-communities, and provide a reference against which to assess the progress made in the next few years.

The program organizers and collection editors:

Zohreh Davoudi, Andreas Ekström, Jason D. Holt, and Ingo Tews

2 Nuclear Forces for Precision Nuclear Physics: Status, Challenges, and Prospects by Zohreh Davoudi, Andreas Ekström, Jason D. Holt and Ingo Tews

This first contribution to the collection presents the perspective of the editors as well as a brief overview of the discussions during the program and of the contributions to the collection. It therefore spans a wide range of topics: current limitations of nuclear Hamiltonians and the calibration of nuclear forces, improved nuclear forces using novel computational and statistical methods, and improved power-counting schemes. It also enumerates ideas and questions related to large-\(N_c\) analysis for low-energy nuclear processes, LQCD calculations for nuclear physics and their matching to the EFTs, and the role of quantum information sciences and quantum computing in theoretical nuclear physics.

2.1 Current Limitations of Nuclear Hamiltonians and Calibrating Nuclear Forces Using Data for Few- and Many-Body Observables

A recurring question in the field is why some interactions derived in \(\chi \)EFT, even though adjusted to reproduce similar data, work better than others for particular observables across the nuclear chart. This question is related to several open challenges pertaining to the (chiral) Hamiltonians used in ab initio many-body methods: uncertainty quantification, the regularization scheme and scale dependence, and the possibility of identifying an ideal set of observables to constrain Hamiltonians. In the coming years, it will be crucial to address these questions to identify which components of nuclear interactions are most important for accurately reproducing and predicting relevant nuclear observables.

When talking about different interactions and their success in describing various nuclear observables, it is important to distinguish between the EFT itself and the individual model realizations of it. The latter are typically referred to as interactions and depend on choices for where, and how, to truncate the (asymptotic) EFT series, how to identify the low-energy constants (LECs) and their numerical values, and how to regularize the potential. These choices all contribute to the theoretical uncertainty of the interaction and the resulting predictions.

In addition, when comparing theoretical predictions for individual observables, further uncertainties arise from the approximations made by the many-body method employed to solve the Schrödinger equation. Of course, the underlying assumptions made when estimating theoretical uncertainties will also play a significant role.

It is crucial to estimate uncertainties in theory as well as experiment, without which one cannot identify relevant tensions or discrepancies between model predictions and experiment. Bayesian statistical inference is becoming the prevailing approach for uncertainty quantification, parameter estimation, and various statistical analyses of theoretical predictions and models. In recent years, Bayesian tools and prescriptions have become available to, e.g., estimate truncation errors in EFT, and it is very informative to specify such uncertainties in theoretical analyses of nuclear observables. Alongside any uncertainty estimation, it is key to specify the assumptions made and, if possible, enumerate any additional sources of uncertainty not accounted for.
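As one concrete illustration of such a prescription (a minimal sketch of a commonly used pointwise model; the symbols below are generic and not tied to any specific analysis), order-by-order predictions are written as

$$\begin{aligned} y^{(k)} = y_{\mathrm {ref}} \sum _{n=0}^{k} c_n Q^n , \qquad \delta y^{(k)} \approx y_{\mathrm {ref}}\, \bar{c}\, \frac{Q^{k+1}}{1-Q} , \end{aligned}$$

where \(Q\) is the EFT expansion parameter (a typical momentum or the pion mass divided by the breakdown scale), the dimensionless coefficients \(c_n\) are extracted from the order-by-order results, and \(\bar{c}\) characterizes the expected size of the omitted coefficients, for instance inferred from the observed \(c_n\) via a Bayesian prior. The assumptions entering such an estimate, e.g., naturalness of the \(c_n\) and the assigned breakdown scale, are precisely the kind that should be stated explicitly.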

In this context, a relevant question arises: how to best estimate the uncertainties due to approximations made when solving the many-body Schrödinger equation? This is sometimes referred to as the (many-body) method error. It will be very important for the community to find ways of better estimating these uncertainties, e.g., by comparing many-body methods at different levels of approximation against available benchmark data, and by comparing predictions between different ab initio methods against each other and potentially against phenomenological models when data are not available. To facilitate such comparisons, it will be useful to more freely distribute relevant interaction matrix elements within the community and, if possible, make the many-body codes, as well as accurate emulators for many-body methods, available to other researchers. One way forward might be to create an online repository for such resources. While many obvious questions arise regarding storage space, documentation, and a recognition of scientific credit to the developers, it is nevertheless important to find ways to tackle these practical and logistical challenges.

Additionally, it is crucial to quantify the effects of different regulator schemes that might influence the performance of nuclear interactions by regulating different parts of the nuclear interaction differently. It might be that some problems with nuclear interactions are more persistent in some schemes compared to others. Can the community find arguments for or against certain schemes? For example, it is difficult to maintain relevant symmetries of the interactions with most regulators and nontrivial to consistently regulate currents and interactions. It is expected that regulator artifacts, i.e., systematic uncertainties due to the regulator choices, decrease at high orders in the EFT and for larger cutoffs. However, as was brought up in the program, if one needs very high orders in the calculations then one is likely working with the wrong expansion. Furthermore, high cutoffs are not accessible with most many-body methods, even though future method developments will enable the community to treat stiffer and bare \(\chi \)EFT interactions.

Finally, it is important to investigate which observables are ideal for calibrating interactions. In principle, the LECs of a low-energy EFT, and any additional parameters necessary for uncertainty quantification, can be inferred from any set of low-energy data within the applicability domain of the EFT. The challenge lies in identifying and combining a set of calibration data with sufficient information content to yield useful predictions. In addition to commonly used calibration data such as nucleon–nucleon scattering cross sections and bulk nuclear observables, a calibration data set could also include, e.g., nucleon-nucleus or nucleus-nucleus scattering, astrophysical observations of neutron stars, or data on collective phenomena. Hence, we can ask ourselves if it would be useful to come up with a minimal set of observables for validation of ab initio approaches and interaction models. Sensitivity studies might help identify which observables are most useful for constraining and testing the various parts of nuclear interactions and should therefore be included in such a set. We stress, however, that such a set is only useful in combination with robust estimates of all uncertainties.

2.2 Improving Nuclear Forces Using Novel Computational Methods and Going to Higher Orders in EFT

Since the introduction of nuclear EFTs, the EFT paradigm has proven itself as a useful principle for constructing high-precision interactions with the added benefit of a systematic assessment of uncertainties. Going to higher orders in the EFT corresponds to including additional information on the short-range physics with the hope of improving the accuracy of the theoretical predictions. Predictions in the few-nucleon sector have now reached a high level of precision and accuracy when based on EFTs at sufficiently high order; even fifth-order calculations exist in some cases. An open question is how to similarly improve the predictions for observables in heavier-mass systems. There are not yet any clear signs of systematic improvements in such systems when increasing the (chiral) order in the EFT. More order-by-order comparisons of delta-full and delta-less interactions, constructed using the same methodology, are needed. Generally, it would most likely be useful for different groups to compare different schemes for constructing interactions in a more systematic way.

From a quantitative perspective, at least two complications arise when going to higher orders. First, each additional order comes with additional LECs whose numerical values need to be determined. Second, higher-order EFTs entail many-body interactions, e.g., three-nucleon forces, with associated unknown LECs that must be determined using data from three-nucleon systems, or beyond. It is computationally expensive to calibrate interactions using data from observables in few- and many-nucleon systems.

In recent years, a large number of nuclear interactions have been constructed by the community. The question arises whether this “Skyrmification” of interactions is a positive or a negative trend. Clearly, as long as the predictions from various interaction models agree within uncertainties, there is, in principle, no problem. Indeed, a systematically developed family (or distribution) of interactions enables coherent model predictions and allows us to assess correlations. In addition, operating with more than one interaction is a straightforward way of gauging theoretical uncertainties. As such, an “antidote” to this “Skyrmification” is a careful and honest uncertainty estimation. Theoretical predictions with relevant estimates of the underlying uncertainty will likely become standard practice in the coming years. It is important to note that the canonical \(\chi ^2\)-per-datum measure does not account for, e.g., model or method errors, but it is nevertheless a useful quantity for gauging the reproduction of, e.g., scattering data.
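For definiteness, the measure referred to here is

$$\begin{aligned} \chi ^2/\mathrm {datum} = \frac{1}{N_{\mathrm {data}}} \sum _{i=1}^{N_{\mathrm {data}}} \frac{\left( y_i^{\mathrm {exp}} - y_i^{\mathrm {th}} \right)^2}{\sigma _i^2} , \end{aligned}$$

where \(\sigma _i\) typically contains only the quoted experimental uncertainties (and possibly normalization errors), which is precisely why this measure cannot by itself account for EFT truncation, model, or method errors.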

Emulators, i.e., computationally cheap, yet accurate, surrogate models for predicting the structure and reactions of few- and many-body systems, have emerged as powerful and useful tools since they provide access to an entirely new class of computational statistics methods for parameter estimation, sensitivity analysis, model comparison, and model mixing. Emulators based on eigenvector continuation appear to be particularly efficient and accurate. This is an exciting development with the potential to facilitate new discoveries and to address several of the open problems mentioned before. Still, using emulators requires careful uncertainty quantification of the corresponding emulation error. Some methods, like Gaussian processes, yield uncertainties by design, but it remains to be established how to estimate the errors induced by eigenvector-continuation emulators. Not all many-body methods lend themselves to emulation via eigenvector continuation, however, and the construction of emulators requires access to “split-format” interaction input. This again highlights the importance of a community repository for interaction codes and emulators.
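To make the idea concrete, below is a minimal sketch (with placeholder random matrices rather than actual nuclear Hamiltonians) of an eigenvector-continuation emulator for a Hamiltonian that is affine in a single coupling, \(H(\theta )=H_0+\theta V\); real applications involve many LECs and large many-body bases, but the projection step is the same.

```python
import numpy as np
from scipy.linalg import eigh

def ec_emulator(H0, V, training_thetas, target_thetas):
    """Eigenvector-continuation (subspace-projection) emulator for
    H(theta) = H0 + theta * V.  Ground-state eigenvectors computed at a few
    training couplings span a small subspace; target-coupling energies follow
    from a generalized eigenvalue problem projected onto that subspace."""
    # Snapshot ground-state eigenvectors at the training couplings.
    snapshots = []
    for theta in training_thetas:
        _, vecs = eigh(H0 + theta * V)
        snapshots.append(vecs[:, 0])
    X = np.column_stack(snapshots)             # columns = training eigenvectors

    # Project the Hamiltonian pieces and the norm matrix onto the subspace.
    h0 = X.T @ H0 @ X
    v = X.T @ V @ X
    norm = X.T @ X                             # training vectors are not orthogonal

    energies = []
    for theta in target_thetas:
        evals = eigh(h0 + theta * v, norm, eigvals_only=True)
        energies.append(evals[0])              # emulated ground-state energy
    return np.array(energies)

# Toy demonstration with a random symmetric "Hamiltonian" (illustration only).
rng = np.random.default_rng(0)
dim = 200
A = rng.normal(size=(dim, dim)); H0 = 0.5 * (A + A.T)
B = rng.normal(size=(dim, dim)); V = 0.5 * (B + B.T)

emulated = ec_emulator(H0, V, training_thetas=[-1.0, 0.0, 1.0],
                       target_thetas=[0.5])
exact = eigh(H0 + 0.5 * V, eigvals_only=True)[0]
print(f"emulated: {emulated[0]:.4f}   exact: {exact:.4f}")
```

The training cost is a handful of exact diagonalizations; each emulated evaluation then requires only a tiny generalized eigenvalue problem, which is what makes parameter estimation with very large numbers of samples feasible.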

2.3 Improved Power-Counting Schemes and Constraining Nuclear Forces from Lattice QCD

To achieve renormalization group invariance it is of key importance to have the correct operators in place at the respective orders in the nuclear EFT expansion. There are, however, decades-long diverging viewpoints in the nuclear-theory community about (non-perturbative) renormalization and power counting in \(\chi \)EFT. This was also a prominent topic of discussion during the program.

In this context, the regulator cutoff plays a central role. It is an intermediate quantity necessary to regulate the interaction, and is often kept relatively small to converge present-day many-body calculations, but beyond this function it is not part of the underlying physical theory. It is clear that it is not meaningful to take the cutoff (much) smaller than the hard scale, or breakdown scale, of the EFT. In principle, the cutoff can be taken larger than the breakdown scale, but there are opposing viewpoints on how large it is meaningful to take it. This is intimately related to the question of inferring the importance of counterterms in the potential without understating or overstating their importance, as well as possible changes to the power counting in A-body systems.

To make progress, it is of interest to the community to find simple, or well-understood, benchmark systems to analyze renormalization, regularization, and power-counting strategies. The present list of relevant, or realistic, benchmark systems appears to be rather short, and includes the zero-range limit at unitarity, systems described by pionless EFT, and the two-nucleon system. In addition, when studying such systems at high values for the cutoff, spurious bound states might appear that could be difficult to treat in certain many-body methods.

An EFT does not dictate what the leading order (\(\mathrm{LO}\)) should contain beyond what is necessary to fulfill minimal symmetry and renormalization-group requirements. Studies of finite and infinite nuclear systems at \(\mathrm{LO}\) in \(\chi \)EFT point to deficiencies regarding saturation and spin-orbit interactions, two important properties observed in nuclear systems. Several questions related to the topic of constructing a \(\mathrm{LO}\) interaction emerged during discussions, such as: What is an “optimal” convergence pattern for an EFT if one has to choose between a “smooth and steady” pattern requiring more orders and an “irregular” start followed by a rapid approach, or “convergence,” within fewer orders?

A standard avenue for constraining the LECs of the EFTs is to match to relevant experimental data. This may not be a straightforward endeavor when direct experimental measurements do not exist, necessitating the use of other related quantities and indirect phenomenological constraints. Among various examples enumerated in this collection is a recent estimate of the \(\mathrm{LO}\) nucleon–nucleon (\(NN\)) isotensor contact term in neutrinoless double-\(\beta \) decay within a minimal extension of the Standard Model. This was enabled by the application of a formalism similar to the Cottingham formula used in the study of the neutron-proton mass difference, with the result expressed in terms of a renormalized amplitude. This example highlights the need for a direct matching of the EFT to calculations based in QCD for a variety of beyond-the-Standard-Model processes in the few-nucleon sector, from lepton-number non-conservation and CP violation to dark-matter-nucleus cross sections.

While LQCD is the method of choice for constraining unknown LECs, its computational cost has hindered precise computations in the nuclear sector to date. In the absence of direct LQCD constraints for the time being, large-\(N_c\) considerations can provide valuable insights into the size, and hence relative importance, of interactions in the EFT, and may motivate the prioritization of certain LQCD calculations over others. Among the examples enumerated in this collection is hadronic parity violation in the \(NN\) sector, where a combined large-\(N_c\) and (pionless-)EFT analysis leads to only two independent \(\mathrm{LO}\) parity-violating operators. Importantly, there is an isotensor parity-violating LEC that contributes at this order. Such guidance has motivated LQCD calculations of the isotensor quantity, which are computationally more accessible. Recent large-\(N_c\) analyses have also revealed how questions regarding the naturalness of the LECs, and hence the size of contributions at given EFT orders, may be impacted by the choice of basis.

Open questions to be studied in the coming years concern the expansion of nuclear binding energies in \(1/N_c\), a better understanding of the role of the \(\Delta \) in large-\(N_c\) analyses, and accidental cancellations that may spoil the large-\(N_c\) counting. LQCD can also play a role in the development of large-\(N_c\) studies in nuclear physics by providing constraints on higher partial waves in \(NN\) scattering, parity-violating nuclear matrix elements, and three-nucleon observables for the organization of three-nucleon operators. Additionally, LQCD calculations at \(N_c \ne 3\) may provide insight into many of these questions, including the role of the \(\Delta \) in the single- and multi-nucleon sectors.

Early and recent work in matching LQCD results to EFTs has resulted in constraints on the two-body nuclear and hypernuclear interactions, revealing symmetries predicted by large-\(N_c\) and entanglement considerations, albeit at unphysically large quark masses. It has also enabled the first QCD-based constraints on the \(\mathrm{LO}\) LECs in the pionless-EFT descriptions of the deuteron’s electromagnetic properties, and of the np radiative capture, tritium \(\beta \) decay, pp fusion, and two-neutrino double-\(\beta \) decay processes. A significant advance in this matching program involved making predictions for nuclei with atomic numbers larger than those directly accessible to LQCD, hence demonstrating a full workflow involving LQCD calculations, EFT matching, and ab initio many-body calculations based on the constrained EFTs, as described in this collection.

As the field moves forward, particularly once LQCD computations of light nuclear systems become a reality at the physical quark masses, more possibilities may be explored in this critical matching program. LQCD in a finite volume matched to EFTs may help identify convergence issues in the EFTs, or help quantify the energy scale at which the nucleonic description of nuclei breaks down. Such a matching of LQCD and EFT results in a finite volume can also facilitate constraints on few-nucleon operators without the need for complex, and generally not-yet-developed, matching formalisms to scattering amplitudes. A similar matching may be considered between LQCD calculations at a given lattice spacing and the EFT-based many-body calculations at a corresponding UV scale. Additionally, phenomenological or EFT-inspired nuclear wavefunctions may lead to the construction of better interpolating operators for nuclear states in LQCD calculations. To make progress, many-body methods that are set up fully perturbatively to eliminate the need for iterating the potential (hence preserving strict renormalization-group invariance) may be preferred; nonetheless, these methods need to overcome their present drawback of underbinding larger nuclei such as \(^{16}\)O.

Since the first LQCD calculations of few-nucleon systems at unphysically large quark masses in the early 2010s, the field has come a long way in pushing towards lighter quark masses and expanding the observables studied beyond the lowest-lying energies, as described in this collection. A decade later, given advances in algorithms and methods and the growth in computational resources, the field stands at a critical point where the first ground-breaking, but uncontrolled, calculations will give way to a new generation of calculations that involve, for the first time, a more comprehensive set of two- and eventually multi-nucleon interpolating operators, enabling a systematic variational spectroscopy of nuclei with better control over excited-state effects. These will also involve ensembles with more than one lattice spacing such that the continuum limit of the lattice results can be taken systematically. Furthermore, the quark masses can be tuned at or near the physical values such that the results will correspond to those in nature.

The first variational studies of \(NN\) systems have emerged in recent years, albeit still at large quark masses, with variational bounds on the lowest-lying energies that are in tension with the previous non-variational estimates. These tensions may be attributed to one or more of the following: (i) the variational basis of interpolating operators may yet be incomplete, and while the upper bounds on energies are reliable, they may miss the presence of one or more lower-energy states if no operator in the set has significant overlap onto such states, (ii) the previous non-variational ground-state results were dominated by excited-state effects at early times and misidentified the ground-state energies, or (iii) as one study suggests, lattice-spacing effects may be significant, and comparing the results of two calculations at different input parameters may be ambiguous due to scale-setting inconsistencies. Investigating such possibilities will constitute a major endeavor in this field in the upcoming years, with promising directions already explored by various collaborations, as enumerated in this collection.
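For context, the variational method alluded to here rests on the generalized eigenvalue problem (GEVP) for a matrix of two-point correlation functions built from a set of interpolating operators \(\{ O_i \}\) (shown schematically, omitting details of momentum projection and boundary conditions):

$$\begin{aligned} C_{ij}(t) = \left\langle O_i(t)\, O_j^\dagger (0) \right\rangle , \qquad C(t)\, v_n(t,t_0) = \lambda _n(t,t_0)\, C(t_0)\, v_n(t,t_0) , \end{aligned}$$

where the principal correlators behave as \(\lambda _n(t,t_0) \sim e^{-E_n (t-t_0)}\) at large times and yield variational upper bounds on the low-lying energies \(E_n\); a bound can only be as good as the overlap of the operator basis onto the corresponding state, which is exactly the concern raised in point (i) above.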

It is important to note that there is already a significant body of work, and related formal and numerical developments, in place for accessing phenomenologically interesting quantities in nuclear physics, from the spectra and structure of light nuclei to nuclear scattering and reaction amplitudes. Therefore, once reliable and sufficient variational bases of operators are found and all systematic uncertainties are controlled, progressing toward the goal of matching QCD to EFT and many-body calculations will be within reach.

2.4 Prospects of Quantum Information Sciences and Quantum Computing in Nuclear Physics

The field of quantum information sciences (QIS) has grown to become a major area of scientific and technological development, benefiting from various partnerships between academia, government, and industry, as well as an ever-growing workforce. Nuclear theorists, among other domain scientists, have recognized the potential of quantum computing in advancing many areas of nuclear physics that currently suffer from computationally intractable problems. These problems include accurate predictions for finite-density systems such as nuclei, phases and decomposition of dense matter (of relevance in neutron stars), real-time phenomena for the description of reaction processes and the evolution of matter after high-energy collisions (of relevance in collider experiments), as well as nuclear response functions (of relevance in long-baseline neutrino experiments) and nuclear-structure quantities (of relevance to the upcoming Electron-Ion Collider).

In fact, the very first rudimentary (due to limited hardware technology) but ground-breaking (given their novel approach) calculations of small nuclear quantities have emerged, including quantum computations of the binding energies of the deuteron and \(^4\hbox {He}\), simulations of models of nuclear response functions and of simple nuclear reactions, along with explorations of coherent neutrino propagation using simple models. Furthermore, with an eye on the grand challenge of obtaining nuclear dynamics from first-principles QCD-based studies, the field has witnessed a proliferation of concrete ideas, proposals, and algorithms for simulating quantum field theories, and illuminating hardware implementations in small systems. Additionally, by incorporating quantum entanglement and coherence in theoretical descriptions of nuclei, new understanding and insights have been reached in recent years. These research directions, which are still at an early stage, will form an exciting subfield of nuclear theory in the coming decade.

Many interesting open questions and underdeveloped areas will be studied by QIS-oriented nuclear theorists in the coming years: Can quantum entanglement provide a better organizational scheme for nuclear interactions, and a window into emergent symmetries beyond traditional considerations? Can the observed entanglement minimization in low-energy baryon-baryon interactions be understood from an ab initio QCD analysis? Can the intricate balance between repulsion and attraction in nuclear media leading to complexities be characterized via quantum-information measures? Do the highly regular patterns in the shapes of nuclei (pointing to an emergent approximate symplectic symmetry), as verified by recent ab initio studies based in chiral potentials, signal interesting quantum correlations and entanglement structure? Can entanglement structure of nuclear wavefunctions and Hamiltonians in given nuclear many-body methods provide guidance on the most efficient bases for classical or quantum computation of certain nuclear processes?

Can customized gates provide a quicker and more scalable road to simulating nuclear dynamics than standard universal gates? What would be the role of hardware co-design, a process in which domain scientists, such as nuclear theorists, work in a feedback loop with quantum-hardware developers to impact the design of the next-generation quantum devices? Can nuclear theorists bring any benefit to the QIS field, given their long and advanced expertise in numerical Hamiltonian simulations, by providing state-of-the-art tools for simulating, hence optimizing, the quantum hardware? Have nuclear theorists identified concrete problems, i.e., the first realistic applications of quantum computing, so that quantum supremacy in the realm of nuclear physics can be claimed in the upcoming years?

What are the lessons to be learned from the integration of new theory and computing perspectives in nuclear theory over the past 3 decades (such as EFTs and LQCD), such that we can effectively incorporate QIS tools and talent in nuclear theory too? What are the lessons to be learned, and projections to be made, from the course of developments in high-performance computing and its application in domain sciences, so that one can envision a high-performance quantum-computing era in nuclear physics? Insights into these questions, and references to relevant recent progress, are presented in this collection.

Acknowledgements: We are grateful to the Institute for Nuclear Theory, as well as an engaged, thoughtful, and critical nuclear-theory community, in particular the speakers, discussion leads, and participants of the INT program 21-1b on Nuclear Forces for Precision Nuclear Physics.

Z.D. acknowledges support from the U.S. Department of Energy’s (DOE’s) Office of Science Early Career Award DE-SC0020271, the Alfred P. Sloan Foundation, and the Maryland Center for Fundamental Physics at the University of Maryland, College Park. A.E. acknowledges support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 758027) and the Swedish Research Council project grant (Grant Agreement No. 2020-05127). J.D.H. acknowledges support from the Natural Sciences and Engineering Research Council of Canada under grants SAPIN-2018-00027 and RGPAS-2018-522453, and from the Arthur B. McDonald Canadian Astroparticle Physics Research Institute. I.T. was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No. DE-AC52-06NA25396, and by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) NUCLEI program.

3 Reflections on Progress and Challenges in Low-Energy Nuclear Physics by Dean Lee and Daniel R. Phillips

During the program we enjoyed the many innovative talks and spirited discussions covering important and timely questions on nuclear forces, effective field theory, power counting, emulators, and lattice quantum chromodynamics. Here we give some brief comments on a few of the topics. We hope our comments might have some value for others working in this field.

3.1 Unitary Limit and Nuclear Physics

It is clear that the unitary limit for two-component fermions is immediately useful for describing the physics of dilute neutron matter. Recent work also establishes that the unitary limit of four-component fermions is of relevance for understanding atomic nuclei [1]. In particular, the proximity of the (nucleon–nucleon) \(NN\) system to the unitary limit results in the presence of universal correlations between few-nucleon observables [2,3,4,5,6] and suggests that the Efimov effect may be nearly realized in the three-nucleon (\(3N\)) system [7,8,9].
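For orientation, recall the S-wave effective-range expansion,

$$\begin{aligned} k \cot \delta _0(k) = -\frac{1}{a} + \frac{1}{2} r_e k^2 + \cdots , \end{aligned}$$

in which the unitary limit corresponds to \(|a| \rightarrow \infty \) with negligible effective range, so that \(\delta _0 = \pi /2\) and the two-body system has no intrinsic scale. The large \(NN\) scattering lengths (roughly \(-24\) fm in the spin-singlet np channel and \(+5.4\) fm in the spin-triplet channel, compared to effective ranges of order 2 fm) are what place nuclear physics in the neighborhood of this limit.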

If we consider quantum chromodynamics in the limit of large numbers of colors, then the most important \(NN\) interactions have an underlying spin-flavor symmetry [10,11,12,13,14,15] as do the dominant pieces of the three-nucleon force [16]. For low momenta, the operators that are leading in large-\(N_c\) are those permitted by Wigner-SU(4) symmetry [17]. The Wigner-SU(4) symmetry [17] also emerges as a symmetry of nuclear forces in the unitary limit [18]. The combined use of the momentum and large-\(N_c\) expansion leads to useful insights into \(3N\) observables [19] and the parity-violating NN force [20,21,22].
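In pionless-EFT language this statement can be made concrete: in one common convention, the LO contact Lagrangian contains the two operators

$$\begin{aligned} \mathcal {L}_{\mathrm {LO}} = -\frac{1}{2} C_S \left( N^\dagger N \right)^2 - \frac{1}{2} C_T \left( N^\dagger \vec{\sigma } N \right) \cdot \left( N^\dagger \vec{\sigma } N \right) , \end{aligned}$$

and the large-\(N_c\) counting suppresses the spin-dependent combination, \(C_T/C_S = \mathcal {O}(1/N_c^2)\), so that the surviving LO interaction is invariant under Wigner's combined spin-isospin SU(4) transformations.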

More recently, it has been suggested that the leading interactions in such a Wigner-SU(4) organization of the NN-force problem are also those that result in minimal entanglement in the \(NN\) S-matrix [23, 24].

The fact that expansions around the unitary, chiral, and large-\(N_c\) limits all provide insights into nuclear forces leads us to ask: can one systematically combine the around-unitarity and chiral expansions [25]? How about the chiral and large-\(N_c\) expansions? Which limit is nuclear physics closer to: \(m_q \rightarrow 0\) or \(N_c \rightarrow \infty \)? And what if the success of large-\(N_c\) and/or the closeness of nuclear physics to the unitary limit are somehow a manifestation of a deeper quantum-information-theoretic phenomenon in QCD? If that were the case, how would it get built into an EFT for nuclear physics?

3.2 Some Questions About Power Counting in Chiral EFT

  1. What order is the \(3N\) force in such a unified EFT? \(\mathrm{LO}\)—as suggested by Efimov physics [26]? \(\mathrm{NLO}\)—as it is in \(\chi \)EFT with explicit Deltas [27]? \(\mathrm{NNLO}\)—as is currently practiced [28]? If it is \(\mathrm{NLO}\), is that just the Fujita-Miyazawa piece of the 3N force? Or should it also include the short-distance operators with undetermined LECs?

  2. When we add higher-order corrections to a leading-order Hamiltonian in a \(\chi \)EFT, do we intend those higher-order corrections to be treated in perturbation theory? How should we view the results if those higher-order corrections happen to make the Hamiltonian unbounded from below in certain parameter ranges?

3.3 Does a Quantum Phase Transition Make Zero-Range Interactions a Poor Tool with Which to Describe Nuclei?

The parameters of nuclear physics appear to mean that nuclei sit near a quantum phase transition [29]. The phase boundary is between a Bose gas and a nuclear liquid. Which phase appears is controlled by the alpha–alpha scattering length. If the range of the nucleonic interactions is shorter than the size of the alpha particles, then Pauli blocking between identical nucleons will cause the alpha–alpha interaction to be weakly attractive or even repulsive [30, 31]. Therefore, the alpha-particle size takes on a critical role in the structure of alpha-conjugate nuclei.

The relationship between the alpha-particle size and the range of nucleonic interactions explains, for example, the instability of \(^{16}\hbox {O}\) against breakup into four alpha particles at leading order in pionless effective field theory [32]. The zero-range limit seems problematic for these systems, but a simple interaction near infinite scattering length with Wigner SU(4) symmetry, nonzero range, and significant local interactions seems to provide a useful starting point for studying atomic nuclei across the nuclear chart [33].

Similarly, if the nucleonic interactions have a significant range but the interactions are not local, then the alpha–alpha interaction may again not be sufficient to produce a nuclear liquid. While the four-component unitary limit is relevant and useful for studying the physics of atomic nuclei with more than four nucleons, significant care must be taken regarding the range and locality of the nucleonic interactions relative to the size of alpha particles.

3.4 Some Questions About Eigenvector Continuation

Eigenvector continuation has recently emerged as a powerful tool that can reduce the computational load involved in solving the quantum many-body problem [34,35,36,37,38].

  1. A meta-question is whether this method should really even be called eigenvector continuation. Recently the subspace-emulation strategy proposed in Ref. [34] has been applied without using any eigenvectors [38]. This raises the question of what the minimal conditions are for this very successful strategy to be used.

  2. How do we estimate the errors/convergence of eigenvector continuation? Some work in this direction is reported in Ref. [39].

  3. What does the workflow for improving nuclear forces with novel fitting strategies and eigenvector continuation look like in practice? A recent work on constraining \(3N\)-force parameters by the BUQEYE collaboration [40] uses eigenvector-continuation emulation of \(3N\) and \(4N\) bound-state calculations to facilitate simultaneous calibration of both the \(\chi \)EFT parameters and the parameters of the statistical model that encodes the impact of the \(\chi \)EFT truncation on observables.

  4. Can we use eigenvector continuation to extend a power counting scheme where perturbation theory is not converging? For example, what happens if we do continuation in \(g_A\) (or \(g_A^2/f_\pi ^2\)) to control the strength of the one-pion-exchange potential?

Acknowledgements: The perspectives presented here are informed by research supported by the U.S. Department of Energy under grants DE-SC0013365 and DE-SC0021152 (DL) and DE-FG02-93ER-40756 (DRP) and the Nuclear Computational Low-Energy Initiative (NUCLEI) SciDAC-4 project, DE-SC0018083 (DL).

4 Dependence of Nuclear Ab Initio Calculations for Medium Mass Nuclei on the Form of the Nuclear Force Regulator by Petr Navrátil

Ab initio calculations for medium-mass nuclei with chiral nuclear interactions as input show a substantial dependence of binding energies, radii, and other observables on the regulator used, in particular in the three-nucleon interaction. This dependence is rather weak in light nuclei and was therefore overlooked at first. Chiral effective field theory (\(\chi \)EFT) suggests that regulator effects should be of higher order than the truncation order of the employed expansion. However, in practical calculations for medium-mass nuclei the effect is large, bigger than what one would anticipate from chiral perturbation theory (\(\chi \)PT).

The sensitivity to the functional form of the regulator, local versus non-local or semi-local, is particularly significant. The use of local regulators in the chiral \(3N\) interaction, i.e., regulators depending on the transferred momentum, was widespread in the past because it is technically easier to implement the most complicated chiral three-nucleon (\(3N\)) terms with this type of regulator: the resulting interaction is local in coordinate space [41]. However, the obtained Hamiltonians, with low-energy constants (LECs) typically determined in mass \(A = 2\)–4 systems, overbind medium-mass nuclei and underestimate nuclear radii [42, 43]. On the other hand, applications of non-local regulators in the \(3N\) interaction, which depend on the relative nucleon momenta, give much better results in medium-mass nuclei for both binding energies and radii [44]. It should be noted that most chiral nucleon–nucleon (\(NN\)) interactions used in ab initio calculations include non-local regulators [45,46,47], i.e., the use of a non-local regulator in the \(3N\) interaction appears to be more consistent. Still, it has been argued recently that the theoretically best-justified choice is the application of semi-local regulators that preserve chiral symmetry [48, 49]. However, the corresponding consistently regularized \(3N\) interactions have not yet been developed beyond \(\mathrm{NNLO}\), and the results available so far show a similar overbinding problem as calculations with the local \(3N\) forces [50].
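Schematically, and ignoring details that differ between implementations, the regulator classes discussed here differ in which momenta they suppress, e.g.,

$$\begin{aligned} f_{\mathrm {local}}(q) = e^{-(q/\Lambda )^{2n}} , \qquad f_{\mathrm {non\text{-}local}}(p,p') = e^{-(p/\Lambda )^{2n}-(p'/\Lambda )^{2n}} , \end{aligned}$$

where \(q = |\vec{p}\,' - \vec{p}\,|\) is the momentum transfer, so that the local form becomes a function of the inter-nucleon distances after Fourier transformation, while \(p\) and \(p'\) denote initial and final relative (Jacobi) momenta; semi-local schemes regulate the long-range pion exchanges locally and the contact terms non-locally. The exponents and functional forms shown here are only representative choices.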

Recently, \(3N\) interactions that combine the use of both local and non-local regulators have been introduced [51]. These interactions provide a good description of binding energies of light and medium mass nuclei including \(^{132}\)Sn [52]. At the same time, the calculated radii are typically underestimated compared to experiment [53] and compared to the most successful interaction for the description of radii, the \(\mathrm{NNLO}_\text {sat}\)  [54].

The strong dependence of binding energies and radii of medium mass nuclei on the regulator type needs to be further investigated and understood as it appears contrary to expectations from \(\chi \)PT.

4.1 Inclusion of the \(3N\) Contact Interaction at \(\mathrm{N^4LO}\)

Contact terms contributing to the chiral \(3N\) interaction at \(\mathrm{N^4LO}\) have been derived recently [55]. These fourteen terms, accompanied by LECs, impact, among other effects, the spin-orbit strength and the isospin dependence of the nuclear force. It has been demonstrated that the analyzing-power puzzle in the d-p data can be resolved by considering even a subset of these terms [56, 57]. With 14 additional LECs at one's disposal, a high-precision fit to three-nucleon data should be possible, i.e., a partial-wave analysis similar to that performed for the nucleon–nucleon data should be feasible. Such an analysis would undoubtedly result in a much better quality \(3N\) interaction for applications in nuclei across the nuclear chart.

Recently, the spin-orbit (\(E_7\)) \(\mathrm{N^4LO}\) contact term has been tested in calculations of the \(^7\)Be(p,\(\gamma \))\(^8\)B radiative capture. This term was shown to considerably improve the analyzing power in p-d scattering. Its inclusion improved the description of the structure of the \(^7\)Be and \(^8\)B nuclei as well as of the \(^7\)Be(p,\(\gamma \))\(^8\)B S factor when compared to experiment [58].

The \(3N\) interaction \(\mathrm{N^4LO}\) contact terms include a \(T{=}3/2\) contribution. It might help to improve the description of nuclei far from stability with a large neutron or proton excess. Applications of this \(T{=}3/2\) \(3N\) contribution are worth exploring.

4.2 Importance of Calculations that Include Continuum Effects

Most ab initio calculations used to test nuclear forces involve bound-state observables such as binding energies, excitation energies, radii, and electroweak transitions between bound states. It should be noted that nuclear properties affected by the continuum provide a complementary and often more comprehensive and stringent test of nuclear forces. The reason is that even straightforward experimental information, such as an elastic scattering cross section, comprises information from bound states of the scattering nuclei, resonances of the composite system, background phase shifts, etc. [59]. Even for describing an isolated resonance, one needs to calculate and compare not only its energy but also its width. Methods capable of including continuum effects should be applied more broadly in tests of nuclear forces. This is obviously done in few-body systems, e.g., \(NN\) scattering and nucleon–deuteron scattering [60,61,62]. However, continuum calculations for light and medium-mass nuclei should also be considered [63, 64].

Acknowledgements: The presented perspectives are based upon work supported by the NSERC Grant No. SAPIN-2016-00033. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada. Computing support came from an INCITE Award on the Summit supercomputer of the Oak Ridge Leadership Computing Facility (OLCF) at ORNL, from Livermore Computing, and Compute Canada.

5 Collective Observables for Nuclear Interaction Benchmarks by Kristina D. Launey, Grigor H. Sargsyan, Kevin Becker, David Kekejian

Wave functions are not observables; however, they can provide critical information about the nuclear correlations and spin mixing in nuclei. Correlations drive properties of nuclei beyond the mean field and define important observables such as transitions, excitation spectra, and reaction observables (see, e.g., Ref. [65]). Collective correlations are responsible for the deformation of nuclei, and the ab initio symmetry-adapted no-core shell-model (SA-NCSM) approach [66, 67] has unveiled the ubiquitous presence of deformation in light to medium-mass nuclei. The importance of deformation is anticipated to hold even more strongly in heavy nuclei [68,69,70,71,72]. Specifically, Refs. [67, 73] show that nuclei and their excitations are dominated by only a few collective shapes that rotate (Fig. 1a), which naturally emerge from first principles. Typically, low-lying states have a predominant shape that is realized by the most deformed configuration in the valence shell and particle-hole excitations above it. However, the probability amplitudes of these shapes vary to some extent from one parametrization of the chiral potentials to another, and this has a significant effect on reproducing collective observables.

Fig. 1 a Emergence of almost perfect symplectic symmetry in nuclei from first principles [65, 66], enabling ab initio descriptions of clustering and collectivity in terms of nuclear shapes. b Observables for \(^{6}\)Li calculated in the SA-NCSM using only a small number of nuclear shapes (specified in the x-axis labels) and compared to experiment (“Expt.”); dimensions of the largest model spaces used are also shown. c The same, but for \(^{20}\)Ne; for comparison, the corresponding complete dimension is \(3.8\times 10^{10}\). d Experimental and theoretical \(B(E2; {5\over 2}^+ \rightarrow {1\over 2}^+)\) values for \(T_z = -{3 \over 2}\); for \(A = 21\), the ab initio SA-NCSM calculation is shown without the use of effective charges (figure adapted from Ref. [74]). e Symplectic \(\mathrm {Sp}( 3,\mathbb {R} )\) irreps or nuclear shapes that compose the rotational band states of \(^6\)Li; each irrep is specified by its equilibrium shape, labeled by the deformation \(\beta \) and total intrinsic spin S. Figures adapted from Ref. [67], unless otherwise stated

5.1 Nearly Perfect Symplectic Symmetry in Nuclei–Radii and Quadrupole Moment Operators

First-principles nuclear-structure calculations with various chiral potentials show that the special nature of the strong nuclear force determines highly regular patterns in nuclei that can be tied to an emergent approximate symmetry, the \(\mathrm {Sp}( 3,\mathbb {R} )\) symplectic symmetry [66, 67, 75]. Since this symmetry does not mix nuclear shapes and is only slightly broken in nuclei, nuclear states are readily described by only a few subspaces that respect this symmetry, or a few nuclear shapes (Fig. 1a–c). These subspaces extend to higher-lying harmonic oscillator (HO) shells and are imperative for reproducing collective observables—see, e.g., Fig. 1d for the B(E2) in \(^{21}\)F; see also the outcomes of a many-particle modeling [76, 77] that utilizes interactions inspired by the symplectic effective field theory [78]; see also Refs. [79, 80] for E2 transitions in Mg isotopes. Besides the predominant shape(s), there is a manageable number of shapes, each of which contributes at a level that is typically at least an order of magnitude smaller, as shown in Fig. 1e. Furthermore, practically the same symplectic content observed for the low-lying states in \(^{20}\)Ne, Fig. 1a, and for those in \(^6\)Li, Fig. 1e, is a rigorous signature of rotations of a shape and can be used to identify members of a rotational band. A notable outcome is that excitation energies and transition rates for a few nuclear shapes closely reproduce the experimental data, Fig. 1b and c.

This has important implications:

Deformation:

   The nuclear deformation is calculated by the quadrupole moment operator

$$\begin{aligned} Q_{2}=\sqrt{16\pi /5 }\sum _{k=1}^A r_k^2Y_{2}(\hat{r}_k). \end{aligned}$$

This operator is a symplectic generator, which means that it has zero matrix elements between two different shapes \(\sigma _i\) and \(\sigma _j\): \(\left\langle \sigma _i | Q_2 | \sigma _j \right\rangle =0\), \(\forall i \ne j\). This is important, since the largest fraction of the quadrupole moments and E2 transition strengths, and hence nuclear collectivity, necessarily emerges within the predominant symplectic irrep(s) or nuclear shape(s). The more the shapes mix, the smaller these collective observables become.

Radii:

   The nuclear size is calculated by the monopole moment operator \(r^2=\sum _{k=1}^A{\vec {r}_k \cdot \vec {r}_k}\). This operator is a symplectic generator, which means that it has zero matrix elements between two different shapes \(\sigma _i\) and \(\sigma _j\): \(\left\langle \sigma _i | r^2 | \sigma _j \right\rangle =0\), \(\forall i \ne j\). Different from the quadrupole moment, the rms radius provides the average radius of a shape, describing its size regardless of whether it is deformed or not, and thereby converges in comparatively smaller model spaces than deformation-related observables.

Spin mixing:

   The total intrinsic spin is a good quantum number for each nuclear shape. Hence, more spin mixing implies more mixing of shapes, thereby reducing collective observables.

Shape vibrations:

   A nuclear shape is composed of an equilibrium shape (typically, a configuration in the valence shell) and vibrations. An important point is that a nuclear shape becomes energetically favored only when vibrations are allowed to develop within a model space. In limited model spaces, collective observables are highly reduced for two reasons: (1) shapes of enhanced deformation are suppressed, while other, less deformed shapes enter the eigenfunctions, and (2) vibrations are largely suppressed; thus, for example, for \(^{20}\)Ne’s predominant shape, \(B(E2; 2^+ \rightarrow 0^+_\mathrm{gs})= 13.4(14)\) W.u. in 11 shells (Fig. 1c), whereas it reduces to 4.2 W.u. for the equilibrium shape only (valence shell).

Note the critical difference between rms radii, \(\left\langle \Psi | r^2 | \Psi \right\rangle =\sum _{i}c_i^2\left\langle \sigma _i | r^2 | \sigma _i \right\rangle \), and quadrupole moments, \(\left\langle \Psi | Q_2 | \Psi \right\rangle =\sum _{i}c_i^2\left\langle \sigma _i | Q_2 | \sigma _i \right\rangle \): radii of individual shapes always add, whereas quadrupole moments can add (for oblate shapes), subtract (prolate) or be zero (spherical shapes). The quadrupole moments, and, hence, E2 transitions further decrease if largely deformed shapes are suppressed. This exposes the role of the symplectic symmetry, established as a remarkably good symmetry of the strong nuclear force in the low-energy regime, in guiding toward and calculating precise nuclear observables.
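As a purely illustrative example with invented numbers: for a state \(|\Psi \rangle \) with \(c_1^2 = 0.8\) in a strongly deformed shape with \(\left\langle \sigma _1 | Q_2 | \sigma _1 \right\rangle = -20\ e\,\mathrm{fm}^2\) and \(c_2^2 = 0.2\) in a nearly spherical shape with \(\left\langle \sigma _2 | Q_2 | \sigma _2 \right\rangle = +2\ e\,\mathrm{fm}^2\),

$$\begin{aligned} \left\langle \Psi | Q_2 | \Psi \right\rangle = 0.8\,(-20) + 0.2\,(+2) = -15.6\ e\,\mathrm{fm}^2 , \end{aligned}$$

so the admixture of the less deformed shape dilutes the quadrupole moment by roughly 20%, whereas the rms radius, being a weighted average of two positive numbers of similar size, is barely affected.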

5.2 Recommendations

Collective observables are essential for parametrizing nuclear interactions and for benchmark studies to ensure the proper account of the physics of nuclear dynamics in ab initio modeling. First, it is important to monitor collective observables for various potential models and, in particular, to perform global sensitivity analyses that probe the sensitivity of collective observables [81] to the low-energy constants that enter the chiral potentials, similarly to what has been done for bulk properties [36]. Second, for benchmark studies, it is imperative to include quadrupole moments and/or E2 transitions. An ideal test case is the \(B(E2; 2^+ \rightarrow 0^+_\mathrm{gs})\) in \(^{12}\)C, which is well measured, 4.65(26) W.u.; alternatively, the quadrupole moment of the first excited \(2^+\) state in \(^{12}\)C could be used (this requires the calculation of only a single state), although there are still large uncertainties in the recommended value, 6(3) e fm\(^2\). A particularly interesting case is the \(B(E2; 3^+ \rightarrow 1^+_\mathrm{gs})\) transition rate in \(^{6}\)Li, which remains a challenge for realistic interactions. For such calculations, it is beneficial to use a many-body approach, such as the SA-NCSM, that does not require renormalization of the chiral interactions in the nuclear medium, does not use effective charges, and admits any type of nuclear interaction, including non-local interactions.

Acknowledgements: This work was supported by the U.S. National Science Foundation (PHY-1913728) and benefitted from computing resources provided by the National Energy Research Scientific Computing Center NERSC (under Contract No. DE-AC02-05CH11231), Frontera computing project at the Texas Advanced Computing Center (under National Science Foundation award OAC-1818253) and LSU (www.hpc.lsu.edu).

6 Role of the Continuum Couplings in Testing the Nuclear Interactions by Marek Płoszajczak

The development of nuclear forces for precision nuclear physics demands reliable methods to improve interactions by analyzing the discrepancies between the results of a chosen many-body approach and experimental data. Hence, the appropriate many-body approach and the relevant choice of studied observables are the two essential ingredients in testing the interactions.

The many-body calculation of nuclear observables brings new aspects to the problem of testing the interactions, such as the dependence of the interaction in a given nucleus on the nucleon number, or the role of couplings to reaction channels and the scattering continuum. These two aspects are a consequence of the attempts to improve interactions by calculating various observables in heavier nuclei.

One aspect concerns the changing role of the many-body forces with an increasing number of nucleons, because the interplay between two- and higher-body forces depends on the number of possible 2-body, 3-body, etc. couplings in many-particle systems (talk of C.-J. Yang), as the simple counting below illustrates. This induces a dependence of the EFT power counting on the particle number [82]. On the other hand, the relative number of the different k-tuple couplings depends on the nucleon density distribution, i.e., the relative number of nucleons in the surface region and in the interior of the nucleus. These many-body effects could be absorbed in the effective parameters of the EFT interaction fitted locally to individual nuclei or to small sets of them.
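Explicitly (a schematic counting only), an \(A\)-nucleon system contains

$$\begin{aligned} \binom{A}{2} = \frac{A(A-1)}{2} \quad \text {pairs} \qquad \text {and} \qquad \binom{A}{3} = \frac{A(A-1)(A-2)}{6} \quad \text {triplets} , \end{aligned}$$

so the ratio of triplet to pair couplings grows like \((A-2)/3\), and the relative weight of \(3N\) contributions cannot be strictly independent of the nucleon number, or of how the nucleons are distributed between the surface and the interior.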

Another aspect is related to the role of the continuum coupling, which leads to the appearance of new energy scale(s) in the EFT power counting, related to the distance from the threshold of the reaction channel(s). An optimal way to test interactions in the vicinity of different particle-emission channels is to employ a many-body approach which preserves unitarity at each opening of a new reaction channel.

Weak binding brings yet another aspect, related to the coupling to scattering states and various particle-decay channels. Indeed, experimental data even at low excitation energies contain states which are either weakly bound or unbound, such as the \(A=5\) nuclei, which are all unbound even in their ground states. The calculation of binding energies in long isotopic chains raises the issue of the dependence on the asymmetry of the proton (\(S_p\)) and neutron (\(S_n\)) separation energies [83, 84], such as the weakening of the interaction between unlike nucleons and the asymmetry of the neutron-neutron and proton-proton interactions.

The continuum couplings can be included either in the complex-energy continuum shell model using the Berggren ensemble of single-particle states [64, 85], the so-called Gamow shell model, or in the real-energy continuum shell model in projected subspaces [86]. The Gamow shell model can be formulated either in a core + valence-particle model space [85, 87] or in a no-core basis [88]. The Berggren representation has also been used in the coupled-cluster approach [89]. Another approach, which has been used extensively, is the no-core shell model including couplings to the reaction channels via the R-matrix approach [90].
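For reference, the Berggren ensemble mentioned above generalizes the completeness relation of single-particle states by deforming the momentum integration into the complex plane; schematically (in biorthogonal notation),

$$\begin{aligned} \sum _{n\, \in \, \mathrm {bound,\ resonant}} | u_n \rangle \langle \tilde{u}_n | \; + \; \int _{L^+} dk\, | u_k \rangle \langle \tilde{u}_k | = 1 , \end{aligned}$$

where the discrete sum runs over bound and selected resonant (Gamow) states, the contour \(L^+\) in the complex-\(k\) plane encloses those resonances, and the tilde denotes the dual (biorthogonal) states. This is what allows bound, resonant, and non-resonant continuum configurations to be treated on the same footing in the Gamow shell model.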

The problem of deriving a reliable inter-nucleon interaction (\(NN\), \(3N\), \(4N\), \(\ldots \)) for precision nuclear physics calculations cannot be separated from the choice of the model space and the many-body approach. Unitarity, a fundamental property of quantum mechanics, is violated by most microscopic nuclear theories in use. Coupling to the environment of the scattering continuum and decay channels does not reduce to refitting parameters of the interaction that have been adjusted to observables in well-bound states. It is necessary that new families of interactions be tested on a broader set of data, including resonances and low-energy scattering observables, using many-body frameworks that respect unitarity.

7 Comments on the \(\varvec{\chi }^2\) Values in the Three-Nucleon Sector by Alejandro Kievsky

In recent years substantial progress has been achieved in the development of accurate descriptions of the interaction between nucleons using the systematic framework provided by chiral perturbation theory. In the nucleon–nucleon (\(NN\)) sector, potentials obtained up to 4th and 5th order in the nuclear EFT expansion have provided an extremely accurate description of the \(NN\) world data, with a value of \(\chi ^2\) per datum close to one. Most of the data that these potentials describe are proton-proton and neutron-proton scattering cross sections and analyzing powers. The same class of data exists in other sectors, such as three-nucleon (\(3N\)) or four-nucleon (\(4N\)) scattering. Focusing on the \(3N\) system, it is a fact that all realistic \(NN\) potentials describe some of these observables poorly. In particular, vector and tensor analyzing powers in proton–deuteron scattering are underpredicted by almost all of those \(NN\) potentials, even when they are supplemented by a \(3N\) force. However, at present the \(NN\) and \(3N\) interactions are considered at different orders. As an example, in Table 1 we show the \(\chi ^2\) per datum obtained after solving proton–deuteron scattering at low energies [91, 92]. Two different interactions are considered: the widely used Argonne V18 interaction (AV18), without and with the inclusion of the Urbana IX (UR) \(3N\) force, and the chiral interaction by Entem and Machleidt (Idaho-\(\mathrm{N^3LO}\)), with and without the inclusion of the \(3N\) force at \(\mathrm{NNLO}\). The strength of this force depends on two low-energy constants (LECs) that have been determined by fixing the triton binding energy and the doublet neutron-deuteron scattering length. In all the cases examined, the \(\chi ^2\) per datum of the analyzing powers can exceed one hundred, in particular for the vector analyzing powers \(A_y\) and \(iT_{11}\). The tensor analyzing powers are described slightly better; however, the \(\chi ^2\) value of \(T_{21}\) is very high in some cases.

Table 1 \(\chi ^2\) per datum obtained in the description of the proton–deuteron vector and tensor analyzing powers

These results suggest that the complicated structure of the \(3N\) force has to be analysed further. Recently, the contact three-nucleon interaction at \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\) has been worked out, showing that there are thirteen new LECs to be determined. Moreover, its spin structure is sufficiently flexible to guarantee a better description of the polarization observables at low energies. The results of Ref. [93] show that, using the proton–deuteron data as input, it is possible to fit these \(3N\) LECs and obtain values of the \(\chi ^2\) per datum similar to those obtained in the \(NN\) sector.

8 Perspectives on an Open-Source Toolchain for ab initio Nuclear Physics by Matthias Heinz, Thomas Duguet, Harald W. Grießhammer, Heiko Hergert, Alexander Tichai

One topic of discussion at the workshop was the open-sourcing of codes necessary to evaluate and transform nuclear interaction matrix elements for use in many-body codes. We offer here some perspectives on what this could look like and how it could be achieved.

8.1 Introduction

The long-term benefits of open-sourcing computational tools are obvious: a standard set of validated, time-tested tools would provide a strong base for any further interaction and many-body developments. Even in the short term, the open availability of nuclear interaction matrix elements in formats used for many-body methods should allow many-body practitioners to experiment with different interactions and encourage greater cross-talk between many-body specialists and interaction specialists. Another consequence would be that the development effort required to begin research on many-body methods would be reduced, opening access to the field for more researchers without the resources to obtain or develop all of the required matrix-element-processing tools. Combined with the continuous open-sourcing and/or publication of many-body codes, this would provide an excellent environment for nuclear theorists with various backgrounds to perform different studies of nuclear interactions in medium-mass systems.

However, there is a natural concern about how cutting-edge developments that are open-sourced are appropriately acknowledged and how the people responsible for these developments receive the scientific credit they are due. If implemented with proper usage guidelines, the open availability of interaction matrix elements can provide an enforceable standard for how to deal with citations and acknowledgments when using interactions and tools provided by others.

8.2 Sharing and Distribution of Interaction Matrix Elements

A simple starting point would be to make matrix elements available in formats used by configuration-space many-body methods, i.e., (angular-momentum- and isospin-coupled) single-particle harmonic-oscillator (HO) matrix elements, through a public read-only repository. These could also be made available in the form of individual terms proportional to different low-energy constants (LECs) to allow for fits or sensitivity studies in realistic applications. The matrix elements should be provided with some attached metadata on the code(s) and version(s) used to generate the matrix elements and who actually generated them. Ideally, this metadata should have all the information needed to reproduce the matrix elements using standard tools available in the repository, which we discuss next. For this reason, the metadata should support updates that help to clarify different strategies or approximations that went into generating the matrix elements to make them more reproducible. The starting interaction matrix elements can also be provided in code form (or as data with a read-in library), with standard interfaces to obtain coordinate-space/momentum-space matrix elements.
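As a purely illustrative sketch of what such metadata might contain (all field names and values below are hypothetical, not a proposed standard), one could imagine a record along these lines:

```python
# Hypothetical metadata record accompanying a matrix-element file.
# All field names and values are illustrative only.
metadata = {
    "interaction": "example chiral NN+3N label",
    "generator_code": "example-transform-code",      # code used to produce the file
    "generator_version": "v1.2.0",                   # pinned version in the repository
    "produced_by": "A. Researcher <a.researcher@example.org>",
    "basis": "single-particle HO, angular-momentum- and isospin-coupled",
    "hbar_omega_MeV": 16.0,                          # oscillator frequency
    "emax": 12,                                      # model-space truncation
    "e3max": 14,
    "srg_resolution_scale_fm^-1": 2.0,               # if SRG-evolved
    "approximations": ["example: normal-ordered two-body treatment of 3N"],
    "reproduction_recipe": "see documentation of generator_code, commit abc1234",
}
```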

8.3 Tools for Matrix Element Usage

To make the generation of single-particle HO matrix elements fully reproducible, one would make available the tools to do the individual transformations required to generate these matrix elements (starting from the center-of-mass (COM) to lab frame transformation). A full set of these tools would be nucleon–nucleon (\(NN\)) and three-nucleon (\(3N\)) COM-to-lab transformation codes, \(NN\) and \(3N\) coordinate-space/momentum-space to Jacobi HO transformation codes, and possibly momentum-space and Jacobi HO similarity renormalization group codes. Any matrix elements made available in a data format should provide a link to some documentation that defines how the data format is structured. Additionally, matrix element formats should provide routines to read in matrix elements and access specific matrix elements as a small library (to be employed by the end user). A set of conversion tools to convert between equivalent matrix element formats may also prove useful.
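As a minimal illustration of the kind of read-in library interface meant here (class and method names are hypothetical and not tied to any existing code or file format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TwoBodyChannel:
    """Quantum numbers labeling a coupled two-body block (illustrative)."""
    J: int       # total angular momentum
    T: int       # total isospin
    parity: int  # +1 or -1

class MatrixElementFile:
    """Hypothetical reader for matrix elements stored in a documented format."""

    def __init__(self, path: str):
        self.path = path
        self._blocks = {}  # channel -> 2D array; would be filled by parsing the file

    def channels(self) -> list:
        """Return the channels stored in the file."""
        return list(self._blocks.keys())

    def element(self, channel: TwoBodyChannel, bra: int, ket: int) -> float:
        """Return a single matrix element within the given channel block."""
        return self._blocks[channel][bra][ket]
```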

8.4 Explorations of Nuclear Forces in Medium-Mass Systems

Providing \(NN\) and \(3N\) matrix elements broken up into terms proportional to individual LECs in a single-particle HO format would immediately allow many-body theorists to experiment with LEC choices and study their effects in medium-mass nuclei. For interaction specialists to do the same, one would need open-source versions of (at least) standard many-body solvers for closed-shell systems. We believe that the effort to open-source matrix elements outlined here should be paralleled by the practice of open-sourcing many-body codes. The ideal candidates for open-sourcing are stable older versions of codes where the original developers have completed the studies for which the code was intended. This way, many-body practitioners may still profit exclusively from their cutting-edge developments, while the publication of established versions opens their code to more people, leading to citations and collaboration opportunities with other interested nuclear theorists. In nuclear theory, there are some examples of open-source many-body codes that have helped open the field of many-body theory to more people: the self-consistent Green’s function code by Barbieri [94,95,96,97,98], the in-medium similarity renormalization group code by Stroberg [99, 100], and the many shell-model codes made available by their developers, such as NuShellX [101, 102], KShell [103, 104], BIGSTICK [105, 106], and Antoine [107, 108].
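Because the interaction depends linearly on these LECs, matrix elements stored term by term can be recombined on the fly without rerunning any transformation codes. A minimal sketch of the idea (file names and LEC labels are hypothetical):

```python
import numpy as np

def assemble_interaction(lecs: dict) -> np.ndarray:
    """Assemble matrix elements as V = V_0 + sum_i c_i * V_i from stored pieces.

    Hypothetical files: 'V_0.npy' holds the LEC-independent part, and each
    'V_<name>.npy' holds the matrix elements multiplying the LEC <name>.
    """
    V = np.load("V_0.npy")
    for name, value in lecs.items():
        V = V + value * np.load(f"V_{name}.npy")
    return V

# Example: scan the 3N contact LEC c_E at fixed c_D without regenerating matrix elements.
# for cE in np.linspace(-1.0, 1.0, 21):
#     V = assemble_interaction({"cD": 0.5, "cE": cE})
#     ...pass V to a many-body solver...
```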

8.5 Ensuring Proper Credit and Acknowledgment

By standardizing access to and use of interaction matrix elements, there is an opportunity to set usage guidelines that would ensure “good behavior” by users. These should be detailed in a general usage license for the tools and data made available on this public repository. Additionally, publishers of interaction matrix elements (including those of single-particle HO matrix elements that employ advanced strategies to handle 3N forces) should be allowed to extend the general usage license to include required citations and/or acknowledgment. There is also the opportunity to require that certain procedures (e.g., convergence checks) be carried out and discussed. However, these requirements beyond general usage and acknowledgment would be additional restrictions and should be added with caution so they do not inadvertently restrict the usage of matrix elements too much. “Good behavior” would encompass abiding by the license and using the publicly available codes where possible to standardize the matrix elements, as small undocumented changes in the matrix element transformations can lead to unexpectedly large changes in observables, which could hurt reproducibility.

8.6 Considerations for Publishing Codes

For codes made publicly available, “publication” would generally mean pushing a version of the source to a public repository (for example, on GitHub or GitLab). The code in the repository would be available for all to use (provided they comply with the license, which should be generally compatible with the usage license for the matrix element repository outlined above) and would allow for community inspection and contributions (for example, optimizations or added support for other file formats). The matrix element repository would specify “pinned” standard versions of relevant codes that are the currently best supported versions and whose usage to generate matrix elements for the repository would be considered “good behavior.” For recent developments, it may make sense to associate a released version of the code with a journal publication. This should be included in the usage license and would make appropriate acknowledgment unambiguous.

8.7 Closing Thoughts

We have outlined what an open-source repository of matrix elements and transformation tools might look like, along with some principles to guide behavior and encourage people to contribute. This is only the very beginning, and next steps would be to secure buy-in from more of the community and to form a “collaboration” to secure computational and organizational resources to manage this. On the latter point, there is the possibility of drawing on prior experience from other fields where these practices are more established, like lattice quantum chromodynamics [109], quantum chemistry [110], and astrophysics. We acknowledge that some institutions (for example, national laboratories) may make it challenging to contribute to open-source projects; we hope that establishing an organization for these efforts could streamline contributions by researchers who would otherwise be hindered by bureaucratic overhead. Buy-in from key matrix element producers (broadly speaking, including those who generate single-particle matrix elements for use in their many-body codes) and an inclusive global partnership would make open-source matrix elements the new standard and allow nuclear theory to profit from the unprecedented open accessibility of information and computational tools for ab initio nuclear-theory calculations.

Acknowledgments: We thank Benjamin Bally and Zohreh Davoudi for useful discussions on open-source collaborations. This work was supported in part by the US Department of Energy under contracts DE-SC0015393 (H.W.G.), DE-SC0017887 (H.H.), and DE-SC0018083 (NUCLEI SciDAC-4 Collaboration, H.H.), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 279384907—SFB (M.H., A.T.), and by the BMBF Contract No. 05P18RDFN1 (M.H., A.T.).

9 Few-Body Emulators Based on Eigenvector Continuation by Christian Drischler, Xilin Zhang

In this contribution we briefly recapitulate the progress made in constructing fast and accurate emulators for few-body scattering and reaction observables based on eigenvector continuation.Footnote 2 Emulators have been game changers, and we envision that they will play a key role in future workflows in nuclear physics and beyond. They have the potential to push the frontier of precision nuclear physics even further by enabling full Bayesian analyses of nuclear structure, scattering, and reaction observables, as well as by facilitating constraints on chiral interactions from (lattice) quantum chromodynamics (QCD). The future will show what other exciting applications are within reach.

9.1 Making the Impossible Possible: Emulators

The power of emulators lies in trading an exact solution for a highly accurate approximation obtained using only a (small) fraction of the computational resources. One needs to train an emulator only once on a small number of exact solutions (e.g., to the many-body Schrödinger equation) in the model’s parameter space and can then efficiently make approximate predictions at all other points instead of evaluating the model exactly. Trained emulators also allow one to make intricate model calculations publicly available as self-contained mini-applications. This enables users to make fast and accurate model predictions without having the detailed knowledge and computational resources otherwise necessary to build and run an application from complex (and sometimes closed-source) code bases. We consider this an important feature for future workflows in nuclear physics and beyond.

Emulators have been game changers in nuclear physics, where Bayesian methods have become standard tools for rigorously quantifying uncertainties in model predictions [112,113,114]; e.g., for low-energy observables derived from chiral effective field theory (\(\chi \)EFT) [25, 46, 115, 116]. Bayesian parameter estimation [113, 117], model comparison [118], and sensitivity analysis [36] can provide important insights to validate and improve model predictions. But applying them in statistical analyses typically requires the model to be evaluated repeatedly across its parameter space for (large-scale) Monte Carlo sampling, which was in most cases prohibitively slow due to the computationally expensive nature of nuclear structure, scattering, and reaction calculations. Emulators have recently removed this practical limitation and made (what was thought to be) the impossible possible [35, 36] by significantly reducing the computational cost of evaluating models.Footnote 3

Implementations of emulators include Gaussian processes [121], neural networks (see e.g., Refs. [119, 120]), and eigenvector continuation (EC) [34, 39]. In low-energy nuclear physics, the number of EC-driven emulators applied to few- and many-body bound state calculations (i.e., subspace projection methods) is increasing [35, 36, 122,123,124]. As discussed in Sarah Wesolowski’s talk [125] and her recent work [40], such a few-body bound state emulator enabled the construction of the first set of order-by-order chiral interactions with theoretical uncertainties fully quantified. The challenge is now to extend these efficient emulators for bound state calculations to scattering and reactions.
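To make the basic idea concrete, here is a minimal sketch of an EC-driven bound-state emulator for a toy Hamiltonian (not any particular nuclear interaction): exact eigenvectors computed at a few training values of a coupling serve as a reduced basis, and predictions at new couplings reduce to a small generalized eigenvalue problem.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy stand-in for a many-body problem: H(c) = H0 + c * H1 on a 200-dimensional space.
dim = 200
A = rng.standard_normal((dim, dim))
B = rng.standard_normal((dim, dim))
H0, H1 = (A + A.T) / 2, (B + B.T) / 2

def exact_ground_state(c):
    """Exact ground-state energy and eigenvector of H(c) (the expensive step in practice)."""
    vals, vecs = np.linalg.eigh(H0 + c * H1)
    return vals[0], vecs[:, 0]

# Training: a handful of exact solutions at selected values of the coupling c.
train_c = [-1.0, 0.0, 1.0, 2.0]
X = np.column_stack([exact_ground_state(c)[1] for c in train_c])  # snapshot basis

# Project H0, H1, and the norm matrix onto the snapshot basis once.
h0, h1, norm = X.T @ H0 @ X, X.T @ H1 @ X, X.T @ X

def emulate_ground_state(c):
    """EC emulator: a tiny generalized eigenvalue problem in the snapshot basis.
    (In realistic applications the norm matrix may be ill-conditioned and need regularization.)"""
    return eigh(h0 + c * h1, norm, eigvals_only=True)[0]

c_test = 3.5  # a coupling outside the training set
print("exact:", exact_ground_state(c_test)[0], "  emulated:", emulate_ground_state(c_test))
```

Because the Hamiltonian is affine in the coupling, the projected matrices can be assembled for any new parameter value at negligible cost; this is the same factorization that makes the emulators discussed below so fast.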

In this contribution to the “perspective pieces” we focus on the extension of EC-driven emulators to two-body (Sect. 9.2) and higher-body scattering (Sect. 9.3), as presented in our talks [126, 127] at the 2021 INT program “Nuclear Forces for Precision Nuclear Physics”.Footnote 4 We then provide an outlook on new EC-driven emulators for few-body systems in periodic boxes and external potential traps as well as their interesting applications (Sect. 9.4).

9.2 Setting the Stage: Two-Body Scattering and Reactions

Furnstahl et al. [37] have recently demonstrated that EC can be used to construct extremely effective trial wave functions for fast and accurate variational calculations of two-body scattering observables, as discussed in Christian Drischler’s talk [126]. Specifically, Ref. [37] applied the Kohn variational principle (KVP) for the K-matrix to a range of test potentials, including nucleon–nucleon (\(NN\)) and optical potentials. More recently, Melendez et al. [38] studied Newton’s variational method using EC-motivated trial K-matrices (instead of trial wave functions) and emulated, e.g., neutron-proton cross sections based on a modern chiral \(NN\) potential. The emulator’s high accuracy and speedup compared to exact scattering calculations are remarkable. Following a different approach, Miller et al. [128] implemented the wave-packet continuum discretization (WPCD) method with GPU acceleration, which is capable of quickly approximating scattering solutions at several energies simultaneously.
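Schematically, and up to normalization conventions that differ between references, the emulator of Ref. [37] expands the trial scattering wave function in exact training solutions, \(|\psi_{\mathrm{trial}}\rangle = \sum_i c_i |\psi_i\rangle\) with \(\sum_i c_i = 1\), and estimates the K-matrix from the stationary point of the KVP functional

\[
\mathcal{K}[\psi_{\mathrm{trial}}] \;=\; \sum_i c_i K_i \;-\; \sum_{ij} c_i\, \Delta\widetilde{U}_{ij}\, c_j,
\qquad
\Delta\widetilde{U}_{ij} \;\propto\; \langle \psi_i | H(\boldsymbol{\theta}) - E | \psi_j \rangle,
\]

so that, once the small matrix \(\Delta\widetilde{U}\) has been assembled for a new parameter set \(\boldsymbol{\theta}\) (which is cheap when the parameters factorize from the operators), the emulation reduces to a constrained linear solve for the coefficients \(c_i\).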

Comparing the efficacies of the available emulators for scattering observables quantitatively is an important task for future work. Such a comparison requires a scattering scenario with matching (real and complex) interactions and a common definition of the term exact solution for the reference calculations. Important benchmarks then include the emulators’ accuracy and speedup relative to the exact scattering solution, and their susceptibility to numerical noise as well as to spurious singularities known as Kohn (or Schwartz) anomalies [129]. We also envision studies of a wide range of modern interactions (e.g., with different resolution scales and regularization schemes) for both proof-of-principle calculations and uncertainty quantification. Applications of statistical methods using emulators will provide important insights into the idiosyncrasies of nuclear potentials to help address known issues. For issues in \(\chi \)EFT, see, e.g., Ref. [130].

Also important are studies aimed at quantifying the emulator’s intrinsic errors and their dependence on the underlying training set such as the position and number of the training points. In particular, the rate of convergence of EC for scattering calculations needs to be investigated further. Progress along those lines has already been made for bound-state calculations [39]. A machine learning algorithm that positions and/or adapts a given number of training points such that the emulator’s intrinsic errors are minimized would be extremely useful for future applications of EC-driven emulators.

Variational calculations of scattering observables are known to be prone to Kohn anomalies (see Refs. [38, 131] for recent discussions). For a given set of model parameters, the anomalies occur at energies where the variational functional has no (unique) stationary approximation. While Kohn anomalies can be straightforwardly spotted in proof-of-principle calculations (in which the exact solution is also computed), their presence can in practice limit the applicability of, e.g., Monte Carlo sampling of the model’s parameter space. As discussed in Christian Drischler’s talk [126, 131], the generalized KVP has been used to efficiently detect and mitigate those anomalies (see also Ref. [132]). To this end, the method in Ref. [131] assesses the consistency of stationary approximations obtained from a family of functionals with different scattering boundary conditions. This strategy is applicable to other variational calculations. Although Kohn anomalies were not an issue in the proof-of-principle calculations in Refs. [38, 130], it is important to study their emergence in more detail and implement efficient detection algorithms, especially for Monte Carlo sampling. The rich literature in this field is an excellent starting point for future work along those lines [133,134,135].

These advances in developing fast and accurate emulators for two-body scattering are promising for future extensions to scattering problems where emulators are essential, such as three-body scattering (see Sect. 9.3).

Further, the fast convergence observed by Melendez et al.  [38] using EC-motivated trial matrices (rather than trial wave functions) might indicate that the EC concept could also be applied to other stationary calculations. This would open exciting possibilities for future applications of emulators, some of which might not even be in sight at the moment.

9.3 Rising Action: Three and Higher-Body Scattering and Reactions

The two-body EC scattering and reaction emulator [37] has been generalized to three-boson elastic s-wave scattering, as reported in Xilin Zhang’s talk [127]. The first results are encouraging: the emulators have accuracy and computing speed similar to those in the two-body sector. The speedup can reach a factor of order \(10^6\): directly solving a three-body nuclear scattering problem takes about \(10^3\) seconds, while the emulator runs in milliseconds on a laptop if the interactions’ parametric dependencies are factorized from the operators. The paper summarizing this work is in preparation [136].

This work opens up the possibility of fitting chiral three-nucleon interactions efficiently to proton–deuteron scattering and reaction data, which has been difficult due to the high computational cost of solving the three-body Faddeev equations. However, further progress needs to be made before achieving this goal, including generalizations to arbitrary partial-wave channelsFootnote 5 and spin statistics, nontrivial generalizations above the deuteron break-up threshold,Footnote 6 and implementations of these emulators for fitting chiral interactions.

Three-body emulators will also be useful in analyzing deuteron-nucleus scattering and reactions (and in fact any process that can be described in terms of three-cluster dynamics). Full exploration of an effective Hamiltonian’s parameter space (including testing new interaction operators) will become possible thanks to the emulators’ speed. However, a new challengeFootnote 7 in these applications comes from the fact that some parametric dependence cannot be factorized from the associated operator, such as the range of the interactions. An immediate solution [136] is to decompose the interaction potential into a linear combination of a series of potentials (e.g., those constructed using orthogonal polynomials), but its feasibility, which depends on the required number of basis potentials, needs to be studied case by case. Ref. [136] also explores an emulator-in-emulator method, which uses Gaussian processes within the EC-driven emulators, and demonstrates that it solves this issue.
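As a rough numerical illustration of the first strategy (a toy example, not the actual procedure of Ref. [136]), a potential whose dependence on a range parameter does not factorize can be approximated by a short sum of fixed basis potentials multiplied by polynomial coefficient functions of that parameter:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy potential with non-factorizable range dependence: V(r; R) = -V0 * exp(-(r/R)^2).
V0 = 50.0
r = np.linspace(0.0, 10.0, 200)         # radial grid (arbitrary units)
R_train = np.linspace(1.0, 3.0, 41)     # sampled values of the range parameter R

# Shape (n_R, n_r): one row per sampled R value.
V_samples = np.array([-V0 * np.exp(-(r / R) ** 2) for R in R_train])

# Fit the R-dependence at every radial point with Legendre polynomials in x(R) in [-1, 1].
x_train = (R_train - 2.0) / 1.0
deg = 8
coeffs = legendre.legfit(x_train, V_samples, deg)   # shape (deg+1, n_r): fixed "basis potentials"

def V_approx(R):
    """Reconstruct V(r; R) ~ sum_n P_n(x(R)) * V_n(r) from the fitted basis potentials."""
    return legendre.legval((R - 2.0) / 1.0, coeffs)

R_test = 1.7
exact = -V0 * np.exp(-(r / R_test) ** 2)
print("max deviation at R = %.2f: %.2e" % (R_test, np.max(np.abs(V_approx(R_test) - exact))))
```

Whether such an expansion remains manageable depends on how many basis potentials are required for the accuracy goal, which is the case-by-case question raised above.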

In the longer term, four-nucleon interactions will become relevant. With that in mind, it will be necessary to further extend the EC scattering and reaction emulators to four-body systems. The emulators will not only enable parameter estimation but also provide users with effortless access to these expensive calculations in their own research.

9.4 (Not Really the) Final Act: Few-Body Systems with Discrete Energy Levels

Besides experimental constraints, there will be valuable information on NN interactions from lattice QCD calculations at and around the physical pion mass. However, the hadronic systems studied in LQCD live in periodic boxes instead of free space. This requires extra steps to connect the LQCD results (without infinite-volume extrapolation) to the free-space observables we are interested in. For two-hadron systems, the Lüscher formula [142] is often used to extract free-space scattering phase shifts from discrete eigenenergies at various box sizes. Its generalization to three-hadron systems is being studied intensively by multiple groups (see, e.g., Ref. [143]).
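Schematically, for two particles in the s-wave and up to corrections that are exponentially suppressed in the box size \(L\), the Lüscher relation between a finite-volume energy level (with relative momentum \(p\)) and the infinite-volume phase shift reads

\[
p\cot\delta_0(p) \;=\; \frac{1}{\pi L}\,\lim_{\Lambda\to\infty}\left[\,\sum_{\substack{\boldsymbol{n}\in\mathbb{Z}^3\\ |\boldsymbol{n}|<\Lambda}} \frac{1}{\boldsymbol{n}^2-q^2} \;-\; 4\pi\Lambda\right],
\qquad q=\frac{pL}{2\pi},
\]

so that each measured energy level in a box of size \(L\) provides one point of \(p\cot\delta_0(p)\).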

For constraining NN interactions using LQCD results, we suggest an alternative to the three-body Lüscher method: treating the LQCD simulations as computer experiments. The chiral potentials can be solved in the same periodic boxes as used in the LQCD calculations in order to construct the mapping between the low-energy couplings in the nuclear interactions and the discrete energy levels. Again, the EC-driven emulators (here for few-nucleon systems in periodic boxes) will be the key to fitting those couplings to the eigenenergies, as they enable the needed rapid solutions of the corresponding Faddeev equations. The computer-experiment strategy was used by Barnea et al. [144] to analyze LQCD results (at an unphysical pion mass) in few-nucleon systems, as discussed in Nir Barnea’s talk at this INT program. However, instead of using an emulator, they employed the so-called Stochastic Variational Method to speed up solving the few-body Schrödinger equation. As pointed out during this INT program, a method different from the generalized Lüscher method is desirable. It will be interesting to compare the different methods of extracting constraints on NN interactions from LQCD in the future.

In the area of nuclear calculations, Xilin Zhang and collaborators [145, 146] have been developing analysis tools to extract two-cluster scattering phase shifts from the system’s discrete eigenenergies in harmonic potential traps computed using nuclear many-body methods. The goal is to take advantage of the progress in many-body calculations for medium-mass nuclei to enable ab initio calculations of scattering and reactions in the same mass region.

For two-cluster systems in external potential traps, the so-called BERW formula, akin to the Lüscher formula in LQCD, has been studied for some time and was recently improved (see Ref. [145] and references therein). For three-cluster systems, both the three-hadron Lüscher method and the EC-emulator approach are worth pursuing. Therefore, EC-driven emulators for few-particle systems in external potential traps need to be developed as well.

From a broader perspective, few-body systems in periodic boxes and harmonic traps have discrete energy levels just as self-bound systems do, while the scattering information of their subsystems is encoded in these energy levels thanks to the Lüscher and BERW formulas. Developing EC-driven emulators for these systems will reveal the elusive connection between the bound-state [35, 36] and scattering-state emulators, paving the way for a unified understanding of the two as well as of the associated variational principles.

Acknowledgements: We thank R. J. Furnstahl, A. J. Garcia, P. Giuliani, A. E. Lovell, J. A. Melendez, F. M. Nunes, and M. Quinonez for sharing their invaluable insights with us. We are also grateful to the organizers of the (virtual) INT program “Nuclear Forces for Precision Nuclear Physics” (INT–21–1b) for creating a stimulating environment to discuss eigenvector continuation and variational principles. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance award DE-SC0013617. Xilin Zhang was also supported by the National Science Foundation under Grant No. PHY–1913069 and by the NUCLEI SciDAC Collaboration under Department of Energy MSU subcontract RC107839-OSU.

10 The Relevance of the Unitarity Limit and the Size of Many-Body Forces in Chiral Effective Field Theory by Harald W. Grießhammer, Sebastian König, Daniel R. Phillips, Ubirajara van Kolck

We collect in this document some thoughts which we believe are relevant for systematically incorporating the closeness of nuclear physics to the unitarity limit into chiral effective field theory (\(\chi \)EFT). Our discussion contains no quantitative insights beyond those already made in the literature. But the excellent talks and stimulating discussions at the recent INT program gave us an opportunity to take stock of the role of the unitarity limit in the precision description of nuclei. We thank the organizers for their efforts to overcome the constraints of the online format and for fostering an interactive and collaborative atmosphere—and for the invitation to contribute this piece to the various “Perspectives” coming out of the program.

The first section focuses on many-body forces in \(\chi \)EFT and the interplay of the chiral and unitarity limits in nuclear systems with \(A \le 4\); a second section discusses the particular (peculiar?) role of the \({}^1\)S\(_0\) channel; a final section then looks at how the issues discussed in the previous two play out in nuclei with \(A=6\) and beyond.

10.1 Chiral Limit or Unitarity Limit: Which One is More Relevant for Nuclei with \(A \le 4\)?

Both the pionless EFT expansion around the unitarity limit and the \(\chi \)EFT expansion around the chiral limit yield good descriptions of nuclei with \(A\le 4\) nucleons. Pionless EFT and \(\chi \)EFT are distinctively different since they are constructed as expansions around entirely different limits: the former embraces the large nucleon–nucleon (\(NN\)) scattering lengths as an emergent low-momentum scale and treats the pion mass \(m_\pi \) as a high scale, while the latter treats \(m_\pi \) as a low scale. In light of this contrast, what is the connection between the two expansions?

  • Recent work [82, 147,148,149] has shown that a renormalizable, perturbative approach to \(\chi \)EFT with reasonable convergence is possible for light nuclei. Results following the EFT power counting that converge to the experimental values are obtained not only for binding energies, but also for charge radii, with the caveat that so far only the lowest two orders have been studied.

  • The unitarity expansion in pionless EFT exhibits more rapid convergence for nuclei with \(A \le 4\) [1, 150] than \(\chi \)EFT, a success that stems largely from having a three-nucleon (\(3N\)) force at leading order (\(\mathrm{LO}\)) [2]. This force can be used to put the \(3N\) bound state at the right energy, and few-body universality then guarantees that the four-body binding energy [4] and the three-body radius [151] will be approximately reproduced. This makes for an excellent starting point for the description of \(A \le 4\) (bound) systems. The presence of a four-nucleon (\(4N\)) force at next-to-leading order (\(\mathrm{NLO}\)) [152] spoils the predictivity a bit. However, only a single input parameter is needed to fix this force, and fitting it to the \(^4\)He binding energy one can still predict the radius and other properties. The results that this produces for few-body observables will be studied in future work.

10.1.1 Connections

  • Recent studies of correlations between few-body observables in \(\chi \)EFT show that \(A=3,4\) binding energies and radii provide highly degenerate information on the \(3N\) force [40]. This observation is consistent with light nuclei being within the lower-energy regime of pionless EFT. Based on this phenomenological effect, it has been argued before [26] that \(\chi \)EFT should feature a (pure contact) \(3N\) force at \(\mathrm{LO}\) as well, but a clear justification for this conjecture is not provided by that work.

  • It is remarkable how close the Tjon lines come out to each other in \(\chi \)EFT and the unitarity expansion [149]. Assuming convergence of both expansions to the physical point means that they ultimately have to intersect there, and we already know that the Tjon band from pionless EFT captures well all the points from phenomenological potentials. But is there anything interesting to be learned from the fact that the slope of the \(\mathrm{LO}\) chiral Tjon line (generated by residual cutoff dependence) agrees well with the relation between three- and four-body binding energies \(B_4 = 4.610(1) B_3\) that prevails in the unitarity limit [153]?

  • It is instructive to consider the importance of many-body forces from the perspective of varying the resolution of the interaction. This can be done by Similarity Renormalization Group (SRG) transformations. It is well known that maintaining unitarity of these transformations induces higher-body forces, even if the interaction initially is given only at the two-body level. Moving from the regime of \(\chi \)EFT (typical momentum \(Q \sim m_\pi \)) to pionless EFT (\(Q \ll m_\pi \)), where pions are “integrated out,” corresponds to lowering the effective resolution. In light of this it makes sense to consider pionless EFT as the ultimate limit of a \(\chi \)EFT interaction transformed to very low resolution, and in this limit there is a \(3N\) contact force at \(\mathrm{LO}\) and a \(4N\) force at \(\mathrm{NLO}\). In the two-body sector, SRG-induced operator structures have been found to resemble simple contact terms [154, 155], so it is not unreasonable to assume the dominant induced three-body forces have a similar form. Based on this picture, one might imagine that considering the low-resolution limit will inform, and potentially adjust, the \(\chi \)EFT power counting.
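As a reminder of how such resolution changes are implemented in practice, the SRG evolves the Hamiltonian through a continuous sequence of unitary transformations governed by the flow equation

\[
\frac{dH(s)}{ds} \;=\; \big[\eta(s),\,H(s)\big],
\qquad
\eta(s) \;=\; \big[T_{\mathrm{rel}},\,H(s)\big],
\]

where the commonly used generator is built from the relative kinetic energy \(T_{\mathrm{rel}}\); the induced many-body forces mentioned above arise because the commutators do not close within the two-body sector.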

10.1.2 Conjectures

  • What would be the best, most rigorous arguments to determine the order at which a contact \(3N\) force first appears in \(\chi \)EFT? We know such a force is not required for RG invariance at \(\mathrm{LO}\) [82, 147], but is there a possible numerical signature that would indicate a departure from naive dimensional analysis? It could be informative to study how the coupling constant of such a force runs with the cutoff when it is included at \(\mathrm{LO}\) and fixed to reproduce exactly the triton binding energy—as done in pionless EFT [2]. For recent efforts in this direction in the context of \({}^4\)He trimers and the long-range van der Waals force see Ref. [156]. Alternatively, analytic results for the \(\mathrm{LO}\) three-body wave function at short distances would enable an analysis of the anomalous dimensions induced by the strong \(\mathrm{LO}\) interactions. The analog of the analysis in Ref. [157] for electroweak operators in the two-body sector would then reveal the extent to which those \(\mathrm{LO}\) interactions alter the naive dimensional analysis (NDA) result for the three-body contact.

  • For the intermediate-range \(3N\) force in Deltaless \(\chi \)EFT (\(\sim c_D\)), one can consider matrix elements between correlated \({}^1\)S\(_0\) \(NN\) states plus a third spectator nucleon. The large scattering length in this channel means one should promote this force by one order compared to its NDA order [157,158,159,160]. This moves the \(c_D\) term from \(\mathcal {O}(Q^3/M_{\text {hi}}^3)\) to \(\mathcal {O}(Q^2/M_{\text {hi}}^2)\),Footnote 8 where \(M_{\text {hi}}\sim m_\rho \) (the rho mass) is the assumed breakdown scale of \(\chi \)EFT. RG invariance would then seem to require that the pure contact \(3N\) force (\(\sim c_E\)) is promoted to the same degree, since matrix elements of \(c_D\) alone will be regulator dependent.

  • If \(3N\) forces appear earlier than Weinberg’s \(\mathcal {O}(Q^3/M_{\text {hi}}^3)\), then at what order in \(\chi \)EFT do \(4N\) forces enter? The argument of the previous bullet also implies a promotion of the particular piece of the \(4N\) force that is proportional to \(c_D^2\). In this context, it is interesting to note that calculations of nuclear matter that employ \(NN\) and \(3N\) forces adjusted to reproduce \(NN\) data and the triton binding energy still generate at least a portion of the Coester line [162].

  • Adjusting the counting of factors of \(4\pi \) associated with (nonrelativistic) reducible loops, as suggested by Friar [163] and inferred from pionless EFT  [164], gives a promotion over Weinberg’s power counting by one order for the \(3N\) force and by two orders for the \(4N\) force. This is in addition to any other enhancement.

10.2 The Perpetually Vexatious \({}^1\)S\(_0\) Channel

What features of the \({}^1\)S\(_0\) channel matter for finite nuclei? For \(A \le 4\), it seems to be sufficient to formulate a LO that has the \({}^1\)S\(_0\) amplitude close to the unitarity limit at low energies. This is one of the ingredients of the success of pionless EFT, and the unitarity limit (and nothing else) delivers the promising \(\mathrm{LO}\) results in Ref. [1].

But as the \(NN\) energy increases away from threshold, the \(\mathrm{LO}\) \({}^1\)S\(_0\) phase shift predictions in such an approach rapidly deviate from data. Attempts to formulate an EFT that reproduces higher-energy features of the phase shift in this channel at LO have a more-than-two-decade history now [165]. The idea is to change the \(\chi \)EFT power counting in order to generate more energy dependence in the \({}^1\)S\(_0\) phase shift at lower orders. However, only recently have the implications of such an approach for finite nuclei been elucidated in a systematic way.

  • One can consider a promotion of the short-distance operator that contributes to the \(NN\) effective range [166, 167]. Unfortunately that can only be done in an RG-invariant fashion with the energy dependence stemming from a dibaryon field.

  • The \({}^1\)S\(_0\) phase shift goes through zero at a momentum of \(\approx 340\) MeV. If the zero is not present at \(\mathrm{LO}\), higher orders must overcome \(\mathrm{LO}\) at larger momenta, which implies poor convergence in the vicinity of this zero. The analysis of Ref. [168] formulates an EFT that at \(\mathrm{LO}\) respects the unitarity limit and also reproduces the zero. With the (potentially significant) caveat that the interaction is transformed from an energy-dependent form to a momentum-dependent form, leaving only on-shell results invariant, Ref. [169] reports that including this higher-energy feature of the \({}^1\)S\(_0\) phase shift in the Weinberg \(\mathrm{LO}\) EFT description of \(NN\) physics improves the description of finite nuclei.

  • Alternatively, one can consider a promotion of correlated two-pion exchange contributions [170, 171] to lower orders. Promising results for finite nuclei and nuclear matter have recently been obtained when these diagrams with \(N\Delta \) intermediate states are included at \(\mathcal {O}(Q^2/M_{\text {hi}}^2)\) [172, 173]. Promotion of “sub-leading” Deltaless two-pion-exchange contributions proportional to \(c_1\), \(c_3\), and \(c_4\) is supported by the large-\(N_c\) limit, where two-pion-exchange mechanisms involving the \(\Delta (1232)\) are the SU(4) partners of the iterated one-pion-exchange diagrams that appear at \(\mathrm{LO}\) in chiral EFT in the \({}^3\)S\(_1\) channel. But large-\(N_c\) presumably demands that such mechanisms be included at \(\mathrm{LO}\) if the EFT is to respect both chiral symmetry and the large-\(N_c\) limit. If two-pion-exchange graphs with \(\Delta (1232)\) intermediate states are \(\mathrm{LO}\) in the \(NN\) system, then would that not mean that the Fujita-Miyazawa 3NF is leading order in the three-body system? And is there really enough scale separation between the momentum associated with \(N\Delta \) intermediate states and, say, \(m_\rho \) to justify treating the former as a low-energy excitation and the latter as a high-energy one?

10.3 What About “Real” Nuclei?

For \(A\ge 5\), both pionless EFT  [6, 32, 174,175,176] and \(\chi \)EFT  [82] tend to yield unstable nuclei at \(\mathrm{LO}\). What is the significance of this additional similarity between the two EFTs?

  • While instability is appropriate for \(A=5,8\), it raises the question: how can stability arise in other nuclei, for example \(A=16\)? Can it be obtained in (distorted-wave) perturbation theory?

  • In Ref. [175], \(\mathrm{NLO}\) pionless EFT corrections were iterated. This effectively includes the interaction range at \(\mathrm{LO}\), but destroys RG invariance [177]. Is this the only practical calculational scheme to produce p-shell nuclear stability in pionless EFT?

  • In \(\chi \)EFT the interaction range is included at \(\mathrm{LO}\) via one-pion exchange, and still it is not sufficient in a renormalized approach to ensure stability, at least at accessible cutoff values [82]. Should one trade the range for other formally higher-order interactions, such as the \(3N\) force in \(\chi \)EFT or the \(4N\) force in Pionless EFT?

  • Jerry Yang has suggested in his talk during the program [178] that few-body forces might be enhanced by combinatorial factors of A. This might be sufficient to promote few-body forces to \(\mathrm{LO}\) in \(\chi \)EFT. Preliminary results indicate that this is sufficient for \(A=16\) stability thanks to the \(3N\) force. But then \(4N\) forces might become important for heavier nuclei, complicating the description of nuclei significantly.

  • With the original Weinberg prescription [161], there is no saturation in symmetric nuclear matter until one reaches the order in the expansion where \(3N\) forces appear [162, 179]. Renormalized \(\chi \)EFT leads, without \(3N\) forces in LO, to saturation with significant underbinding [180]. Can a nuclear EFT that at \(\mathrm{LO}\) (and \(\mathrm{NLO}\) in the Weinberg prescription) does not produce realistic nuclear matter be a convergent description of nuclei? Ref. [181] argues that the \(\mathrm{LO}\) and \(\mathrm{NLO}\) uncertainties at the canonical saturation density of \(n_0=0.16~\mathrm{fm}^{-3}\) are too large to say whether or where saturation occurs. However, results computed at \(\mathrm{NNLO}\) (and \(\mathrm{N}\) \(^3\) \(\mathrm{LO}\)) (within the Weinberg prescription) fall within the \(\mathrm{LO}\) and \(\mathrm{NLO}\) error bands and show saturation occurring at a density and binding energy per nucleon consistent with the “empirical” value. In contrast, adding a \(3N\) force to Weinberg’s LO does yield nuclear saturation [182]. What do these results tell us about the ordering of few-body forces and the organization of nuclear EFT?

Acknowledgement: This work was supported in part by the US Department of Energy under contracts DE-SC0015393 (HWG), DE-FG02-93ER-40756 (DRP) and DE-FG02-04ER41338 (UvK), and by the US National Science Foundation under Grant No. PHY-2044632 (SK). This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance, award DE-SC0013617.

11 Nuclear Forces in a Manifestly Lorentz-Invariant Formulation of Chiral Effective Field Theory by Xiu-Lei Ren, Evgeny Epelbaum, Jambul Gegelia

We outline the advantages and disadvantages of the manifestly Lorentz-invariant formulation of chiral effective field theory (\(\chi \)EFT) for nuclear forces compared to the non-relativistic formalism.

11.1 Introductory Remarks

Chiral perturbation theory is an effective field theory (EFT) of the strong interaction applicable at low energies. It shares all symmetries of the underlying fundamental theory, quantum chromodynamics (QCD). While Lorentz invariance is a cornerstone of quantum field theories in general, a systematic non-relativistic expansion can be made for physical quantities if particle velocities are much smaller than the speed of light. This expansion can also be done at the Lagrangian level, leading to a non-relativistic EFT. The two approaches yield exactly the same non-relativistic expansions for physical quantities, provided that one takes special care to address the non-commutativity of the expansion of the effective Lagrangian with the calculation of quantum corrections. It is important to keep in mind that the ultraviolet (UV) behavior of the loop integrals of quantum corrections is completely different in the two approaches. In the few-nucleon sector, one deals with effective potentials, whose low-energy behavior is systematically calculable order-by-order in \(\chi \)EFT. On the other hand, the (infrared) power counting has no status in the UV region. As we do not know the short-range behavior of few-body potentials, one might argue that all UV extensions of the effective potential are equally good/bad. While the effective potentials are not even uniquely defined, we do know the physical spectrum of QCD (assuming that it indeed describes nature). Considering, e.g., nucleon–nucleon (\(NN\)) scattering, the actual short-distance behavior of the nuclear force can certainly not be singular, since no corresponding deeply bound states are observed in nature. At the conceptual level, all complications caused by singular EFT interactions result from a naive extension of non-relativistic potentials from large distances to short ones, where the infrared ordering of various contributions is invalid. Therefore, approximating a non-singular potential with a singular leading-order (\(\mathrm{LO}\)) contribution, supplemented by a finite number of contact interactions, is only appropriate if the cutoff is kept of the order of the hard scale of the problem. On the other hand, if one employs a non-singular extension of the one-pion exchange potential from long to short distances, then the short-range details of the \(\mathrm{LO}\) approximation indeed do not matter after removing the regulator, since a finite number of counterterms is required to renormalize the amplitude. The formalism based on the manifestly Lorentz-invariant formulation is well suited for this purpose. While being equivalent to the non-relativistic formulation in the infrared region, performing a resummation of a certain class of 1/m-corrections in a way consistent with the underlying Lorentz symmetry leads to effective potentials with better UV behavior.

11.2 Chiral Nuclear Forces from the Lorentz-Invariant Lagrangian Using Time-Ordered Perturbation Theory

These ideas have been taken up in Ref. [183] to formulate a renormalizable framework for \(NN\) scattering based on the manifestly Lorentz-invariant effective Lagrangian. In the resulting modified Weinberg approach, the \(\mathrm{LO}\) amplitude is obtained by solving the Kadyshevsky equation, while higher-order corrections are treated perturbatively. Symmetry-preserving regularization within this formalism has been considered in Ref. [184]. A fully Lorentz-covariant form of the effective potential based on a new power counting has been suggested in Refs. [185, 186]. A systematic approach relying on the Lorentz-invariant Lagrangian and time-ordered perturbation theory has been further developed in Ref. [187], where the effective potential and the scattering equation (Kadyshevsky equation) are obtained within the same framework. Restricting the non-perturbative treatment to the (non-singular) \(\mathrm{LO}\) potential and assuming the validity of perturbation theory for higher-order interactions, one can systematically remove all divergences from the amplitude and, therefore, employ arbitrarily large values of the cutoff. Alternatively, the full effective potential can be treated non-perturbatively. The milder UV behavior then offers a larger flexibility regarding admissible cutoff values, which generally need to be kept of the order of the breakdown scale. Therefore, we expect that this approach should lead to a better description of systems with larger numbers of nucleons. However, one should keep in mind that the derivation of corrections to the interaction beyond \(\mathrm{LO}\) is computationally more demanding than in its non-relativistic counterpart. Work is in progress towards extending the analysis of \(NN\) scattering within the modified Weinberg approach beyond \(\mathrm{LO}\) [188]. Notice further that the actual solution of the Kadyshevsky integral equation is facilitated by the fact that it can be rewritten in the form of the standard Lippmann–Schwinger equation for a modified potential. Last but not least, the relativistic formulation of \(\chi \)EFT can also be merged with the Dirac–Brueckner–Hartree–Fock theory. It would further be interesting to perform ab initio studies of finite nuclei and nuclear matter using the relativistic version of chiral nuclear interactions.

Acknowledgements: This work was supported in part by BMBF (Grant No. 05P18PCFP1), by DFG and NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, Project-ID 196253076 - TRR 110), by Collaborative Research Center “The Low-Energy Frontier of the Standard Model” (DFG, Project No. 204404729 - SFB 1044), by the Cluster of Excellence “Precision Physics, Fundamental Interactions, and Structure of Matter” (PRISMA\(^+\), EXC 2118/1) within the German Excellence Strategy (Project ID 39083149), by the Georgian Shota Rustaveli National Science Foundation (Grant No. FR17-354) and by the ERC AdG NuclearTheory (Grant No. 885150).

12 Nuclear Forces for Precision Nuclear Physics: Some Thoughts on the Status, Controversies and Challenges by Evgeny Epelbaum, Ashot Gasparyan, Jambul Gegelia, Hermann Krebs

We outline the status of chiral effective field theory (\(\chi \)EFT) for nuclear systems, summarize our understanding of renormalization group invariance in this context and discuss some of the most pressing challenges and opportunities in the field.

12.1 Introductory Remarks and Disclaimer

This paper is a summary of the contributions and opinions of the Bochum participants in the INT program on Nuclear Forces for Precision Nuclear Physics, held at the INT, Seattle, April 19–May 7, 2021, regarding some of the topics addressed during this meeting. It is not intended to provide a review of the field, and we also made no attempt to be exhaustive in the references. A more complete and detailed discussion of (most of) the considered topics can be found in the recent review article by some of us [116] and in references therein, see also the earlier review [115].

Our paper is organized as follows. In Sect. 12.2, we discuss the status of \(\chi \)EFT for nuclear systems. We limit ourselves to its standard, finite-cutoff formulation based on the Weinberg approach [161, 192] and summarize the most pressing open issues in the field. Section 12.3 is devoted to the ongoing debate concerning a proper renormalization of nuclear chiral EFT. We provide a brief account of the renormalization program in the meson and single-baryon sectors of chiral perturbation theory (\(\chi \)PT) before critically addressing the so-called renormalization group (RG) invariant approach in the few-nucleon sector. Formal aspects of renormalization in the finite-cutoff formulation of \(\chi \)EFT are discussed in Sect. 12.4, while some concluding remarks are made in Sect. 12.5.

12.2 Nuclear Interactions from \(\chi \)EFT: Current Status and Open Questions

12.2.1 The Nucleon–Nucleon Sector

Starting from the pioneering work by Weinberg in the early nineties [161, 192], see also [170, 193], the nucleon–nucleon (\(NN\)) interactions have been worked out up to fifth order (\(\mathrm{N}\) \(^4\) \(\mathrm{LO}\)) in \(\chi \)EFT [47, 49], see [116] for a recent review and references therein. At the highest EFT order, the interactions from Ref. [49] lead to an excellent description of \(NN\) scattering data below the pion production threshold with \(\sim 40\%\) fewer adjustable parameters than high-precision phenomenological potentials. They also show clear evidence of the (parameter-free) chiral two-pion exchange potential. In Ref. [194], a full-fledged partial-wave analysis of \(NN\) data up to the pion production threshold has been performed in the framework of \(\chi \)EFT (including a selection of mutually consistent data and a complete treatment of isospin-breaking interactions), thereby achieving a statistically perfect description of experimental data, see the left panel of Fig. 2 as a representative example. In this sense, one may claim “mission accomplished” in the \(NN\) sector of \(\chi \)EFT.

Fig. 2 \(\chi \)EFT predictions for the analyzing power P in proton-proton scattering at \(E_\mathrm{lab} = 142\) MeV (left panel) and for the tensor analyzing power \(A_{xx}\) in nucleon–deuteron elastic scattering at \(E_\mathrm{N} = 135\) MeV (right panel). The light- (dark-) shaded bands depict the \(68\%\) (\(95\%\)) degree-of-belief truncation errors at the corresponding order. Solid and open circles with error bars are experimental data from Refs. [189, 190], respectively. Open circles without error bars show the predictions of the Nijmegen partial-wave analysis [191]. For more details see Ref. [116].

12.2.2 The Three-Nucleon Force Challenge

The situation with three-nucleon (\(3N\)) forces is more intricate. Although they have been worked out completely to fourth chiral order (\(\mathrm{N}\) \(^3\) \(\mathrm{LO}\)) a long time ago [195, 196], and partially even to \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\) [55, 197, 198], their implementation in few-/many-body calculations is more demanding. The issue is rooted in the regularization of the \(3N\) force. Loop contributions to the \(3N\) force starting from \(\mathrm{N}\) \(^3\) \(\mathrm{LO}\) have been derived using dimensional regularization (DR). On the other hand, the A-body Schrödinger equation is regularized using a cutoff. In contrast to the \(NN\) sector, this mismatch in the regularization cannot be compensated by counterterms from the effective chiral Lagrangian [116]. Curing this conceptual problem will require a re-derivation of the three- and four-nucleon (\(4N\)) forces starting from \(\mathrm{N}\) \(^3\) \(\mathrm{LO}\) using a cutoff regulator which (i) maintains the chiral symmetry and (ii) is consistent with the one employed in the \(NN\) sector. The higher derivative regularization method proposed in Ref. [199] offers one possible approach to tackle this challenge by introducing the cutoff at the level of the effective Lagrangian. Work along this line is in progress.

A precise description of nucleon–deuteron scattering data remains a major unsolved challenge in nuclear physics as reflected by very large values of the \(\chi ^2/\mathrm{datum}\) in the \(3N\) sector [200, 201] as compared to \(\chi ^2/\mathrm{datum} \sim 1\) for the reproduction of \(NN\) scattering data [49, 194]. \(\chi \)EFT predictions up to \(\mathrm{NNLO}\) generally agree with the data, see the right panel of Fig. 2 for an example, but the accuracy at this order is fairly low. Based on the experience in the \(NN\) sector, the solution of the \(3N\) force challenge will likely require pushing the EFT expansion of the \(3N\) force (at least) to \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\). Apart from the regularization issue mentioned above, one will then face the computational challenge associated with the determination of low-energy constants (LECs) entering the \(3N\) force from \(A \ge 3\)-nucleon data, see Ref. [93] for pioneering steps along this line. The eigenvector continuation technique [34, 36, 40] may be the key technology to make such an analysis feasible in the near future.

12.2.3 Nuclear Currents

Nuclear vector, axial-vector, pseudoscalar and scalar currents have also been extensively studied in \(\chi \)EFT, see Ref. [202] and references therein. The vector and axial-vector currents have been worked out using two different techniques, namely the method of a unitary transformation by the Bochum–Bonn group [203,204,205,206,207] and time-ordered perturbation theory by the JLab-Pisa group [208,209,210,211,212]. The results of these calculations disagree with each other, with the differences being most pronounced for the axial-vector currents. As an attempt to shed light on this issue, the Bochum-Bonn group has tried to reproduce the results of the JLab-Pisa group for the axial-vector currents using their method [213], thereby confirming their own expressions. To exclude the possibility that the approach of the JLab-Pisa group has been misinterpreted, a comparison could be carried out at the level of Hilbert-space operators [213].

Independently of this discrepancy, the implementation of the nuclear currents starting from \(\mathrm{N}\) \(^3\) \(\mathrm{LO}\) is also affected by the already mentioned issue with mixing up two different regularization methods [202]. Even though no results for the current operators using the higher-derivative regularization are available yet, a high-accuracy calculation of the deuteron charge and quadrupole form factors has been performed recently at \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\) [214, 215]. This was possible since the loop corrections to the two-body charge density, whose consistent regularization is not yet available, do not contribute to the deuteron form factors thanks to the isospin selection rules.

12.2.4 On the Role of the \(\Delta \)(1232) Resonance

Nuclear interactions discussed so far have been derived using pions and nucleons as the only explicit degrees of freedom. Given the low excitation energy of the \(\Delta (1232)\) resonance and its strong coupling to the \(\pi \)N system, one may expect its explicit inclusion in the effective Lagrangian to yield a more efficient EFT framework. Indeed, clear evidence of the improved convergence of the \(\Delta \)-full formulation of \(\chi \)EFT for pion-nucleon scattering was found in recent studies [216,217,218]. On the other hand, the available results for the \(NN\) system in the \(\Delta \)-less framework already show a generally good convergence pattern, with the estimated breakdown scale of the order of \(\Lambda _b \sim \) 600–650 MeV [219,220,221]. Moreover, the LECs accompanying the \(NN\) contact interactions come out to be of a natural size [49, 116] with no signs of enhancement from the \(m_\Delta - m_N \sim 2 M_\pi \) scale. This may indicate that the longest-range contributions of the \(\Delta \)-resonance to the \(NN\) force are efficiently mimicked via the saturation of \(\pi \)N LECs.Footnote 9 It is thus not a priori clear that the explicit inclusion of the \(\Delta \)-resonance would result in a significantly larger value of \(\Lambda _b\), leading to a better convergence of \(\chi \)EFT in the few-nucleon sector.

\(\Delta \)-contributions to the nuclear forces have so far been worked out to third order (\(\mathrm{NNLO}\)) [171, 222], and the results up to this order do seem to indicate a superior performance of the \(\Delta \)-full approach [223], see also Refs. [172, 224] and references therein. Clearly, to assess the role of the \(\Delta \)-resonance in quantitative terms, the calculations within the \(\Delta \)-full approach will have to be pushed beyond \(\mathrm{NNLO}\). As a first step along this line, some of us have recently worked out the \(\mathrm{NNLO}\) contributions of the \(\Delta (1232)\) to the two-pion exchange \(3N\) force topology using DR [225]. Notice, however, that the already mentioned issue with the inconsistent regularizations is also relevant for the \(\Delta \)-full formulation of \(\chi \)EFT, and all loop contributions will need to be (re-) derived using e.g. the higher derivative regularization.

12.3 Renormalization Group Invariance and Power Counting

12.3.1 Renormalization in Chiral Perturbation Theory

In the strictly perturbative domain of chiral perturbation theory (\(\chi \)PT), comprising its mesonic and heavy-baryon formulations, a finite number of counterterms is needed to remove divergences at every order in the chiral expansion. When using DR, all the required counterterms are generated by bare LECs from the effective Lagrangian of the corresponding order. Consequently, renormalized amplitudes calculated up to any finite order are independent of the DR scale \(\mu \) (see Footnote 10). On the other hand, the inclusion of an infinite number of counterterms is required (or implicitly assumed), e.g., in the infrared-regularized formulation of manifestly Lorentz-invariant baryon \(\chi \)PT [226] that allows one to non-perturbatively resum 1/m-corrections within the heavy-baryon approach. This leads to a residual dependence of renormalized amplitudes on the renormalization scale. This feature is, however, perfectly acceptable from the EFT point of view since scale-dependent terms are of a higher chiral order, both formally and numerically.

12.3.2 Residual Renormalization-Scheme Dependence in Pionless EFT

In the few-nucleon sector of \(\chi \)EFT, certain types of diagrams must be resummed non-perturbatively to accommodate the appearance of shallow bound and virtual states. Residual renormalization-scheme dependence of the calculated observables does not pose a conceptual problem in this case either. In particular, resonant P-wave systems have been studied recently in pionless EFT without auxiliary dimer fields [227]. This analytically solvable example provides an explicit demonstration that exact RG invariance at order \(\mathcal {O} (p^n)\) (i.e., \(\partial T^{(n)}/\partial \mu _i = 0\) for all \(\mu _i\)) is not necessary for a consistent EFT. Rather, it is sufficient to have \(\partial T^{(n)}/\partial \mu _i = \mathcal {O} (p^{n+1})\).

12.3.3 \(\chi \)EFT for Nuclear Systems and the Cutoff Choice

In contrast to the previously considered cases, renormalization is carried out implicitly in the non-perturbative domain of \(\chi \)EFT and in pionless EFT for systems with more than two nucleons. This is achieved by numerically expressing bare LECs \(C_i (\Lambda )\) in terms of observables, in other words, by fitting them to experimental data.

Removing the UV divergences in each order of the loop expansion of the scattering amplitude \(T(\Lambda ) = \sum _{n=0}^{\infty } \hbar ^n T_n (\Lambda )\) requires the inclusion of an infinite number of counterterms in the case of the one-pion exchange (OPE) potential (see, e.g., Ref. [228]), which is usually not possible in practical calculations. Notice that the operations of summation in \(T(\Lambda ) = \sum _{n} \hbar ^n T_n\) and of taking the limit \(\lim _{\Lambda \rightarrow \infty } T(\Lambda )\) do not commute unless all counterterms needed to render all terms in the series finite are taken into account. Without including all the necessary counterterms, the resulting partially renormalized [229] amplitude \(T(\Lambda )\) for \(\Lambda \gg \Lambda _b\) is actually determined by the ambiguous behaviour of V(r) at \(r \ll \Lambda _b^{-1}\), which is outside the EFT validity range. Within such a treatment, we see no a priori reason for \(T(\Lambda )\big |_{\Lambda \gg \Lambda _b}\) to represent a meaningful result (in the EFT sense). Under such circumstances, the cutoff \(\Lambda \) should be kept of the order of the expected breakdown scale \(\Lambda _{b}\) [229,230,231]. We also see no immediate relation between \(T(\Lambda )\big |_{\Lambda \gg \Lambda _b}\) and the power counting for renormalized short-range interactions.

12.3.4 The RG-Invariant Formulation of \(\chi \)EFT

As stated in Ref. [25], the approximate RG invariance of the truncated amplitude \(T^{(\nu )} (p, \Lambda )\) requires the condition

$$\begin{aligned} \frac{\Lambda }{T^{(\nu )} (p, \Lambda )} \frac{d T^{(\nu )} (p, \Lambda )}{d \Lambda } = \mathcal {O} \left( \frac{p^{\nu + 1}}{\Lambda _b^\nu \Lambda } \right) \end{aligned}$$
(11.1)

to be satisfied. As argued in that paper, in the absence of analytical results, “varying the regulator parameter widely above the breakdown scale is usually the only tool available to check RG invariance”. Moreover, RG invariance of the truncated amplitude is often identified with the requirement that “the result converges with respect to \(\Lambda \), i.e., the observable can only depend on negative power of \(\Lambda \) after renormalization” [232]. The final result for the calculated observables is then understood as the corresponding \(\Lambda \rightarrow \infty \) limits, see e.g. Refs. [147, 233].

12.3.5 A Practical Implementation of the RG-Invariant Approach: Lessons from a Toy-Model Example

While we agree with Eq. (11.1), we object to the suggested method of its numerical verification by varying \(\Lambda \) widely above \(\Lambda _b\). To demonstrate possible issues with taking \(\Lambda \) very large without subtracting all divergences, consider a toy-model example of the scattering amplitude of two heavy particles in \(4+1\) space-time dimensions (see Footnote 11)

$$\begin{aligned} T\left( \vec {p},\vec {q} \, \right) =V\left( \vec {p},\vec {q}\, \right) +m \int \frac{d^4 k }{(2\pi )^4}\,V( \vec {p},\vec {k})\,\frac{1}{m E -k^2+i\,0^+}\,T( \vec {k},\vec {q} \, )\,, \end{aligned}$$

with the effective potential given by

$$\begin{aligned} V(\vec {p},\vec {q} \, )\; =\; \frac{\alpha }{\big [ \left( \vec {p}-\vec {q} \, \right) ^2+M^2 \big ]^2} + V_C \; \equiv \; V_L(\vec {p},\vec {q} \, ) \, +\, C \, +\, \cdots . \end{aligned}$$

Here, \(V_C\) is a series of contact interactions with C being the LO term. Further, E and m refer to the energy and mass of the scattered particles, respectively, while M denotes the light mass of the exchanged meson that sets the soft scale in the problem. The coupling constant \(\alpha \) is chosen such that the long-range potential \(V_L\) contributes at LO for momenta of the order of M. The solution to the LO integral equation can be written as

$$\begin{aligned} T(\vec {p},\vec {q} \, ) \; =\; T_L (\vec {p},\vec {q} \, ) \, +\, \frac{\Psi _L (\vec {p} \, ) \Psi _L (\vec {q} \, )}{1/C-G_E}\,, \end{aligned}$$
(11.2)

where the finite quantities \(T_L\) and \(\Psi _L \) are given by

$$\begin{aligned} T_L \left( \vec {p},\vec {q} \, \right)&= V_L \left( \vec {p},\vec {q} \, \right) \, + \, m \int \frac{d^4 k }{(2\pi )^4}\, V_L( \vec {p},\vec {k}) \frac{1}{m E -k^2+i\,0^+} T_L ( \vec {k},\vec {q} \,)\,, \nonumber \\ \Psi _L (\vec {q} \, )&= 1 + m \int \frac{d^4 k }{(2\pi )^4}\,\frac{1}{m E-k^2+i\,0^+}\,T_L( \vec {k},\vec {q} \, )\,. \end{aligned}$$
(11.3)

On the other hand, \(G_E\) contains divergences and is given by

$$\begin{aligned} G_E \; = \; m \int \frac{d^4 k }{(2\pi )^4}\,\frac{1}{mE-k^2+i\,0^+} \, + \, m^2 \int \frac{d^4 k_1 }{(2\pi )^4}\,\frac{d^4 k_2 }{(2\pi )^4}\,\frac{T_L( \vec k_1,\vec k_2)}{\left( m E-k_1^2+i\,0^+ \right) \left( m E -k_2^2+i\,0^+ \right) }\,. \end{aligned}$$

Using cutoff regularization, \(G_E\) can be written as

$$\begin{aligned} G_E \; =\; a_1 \Lambda ^2 \, +\, a_2 \ln ^2\frac{\Lambda }{M} \, +\, (a_3+ a_4 E)\ln \frac{\Lambda }{m} \, + \, G_E^f \, + \, \mathcal{O}\left( \frac{1}{\Lambda ^2}\right) , \end{aligned}$$
(11.4)

where \(a_i\) are some constant factors and \(G_E^f\) is a finite \(\Lambda \)-independent part.

Coming back to the amplitude in Eq. (11.2), we perform implicit renormalization by fixing the bare LEC \(C(\Lambda )\) from the requirement to reproduce the on-shell amplitude at some kinematical point \(E_0\). For the sake of definiteness, suppose that the denominator of the last term in Eq. (11.2) takes some value d for \(E=E_0\). This finally yields the scattering amplitude

$$\begin{aligned} T(\vec p,\vec q \, ) \; =\; T_L (\vec p,\vec q \; )\, +\, \frac{\Psi _L (\vec p\, ) \Psi _L (\vec q\, )}{d -a_4 (E - E_0) \ln (\Lambda /m) - ( G_E^f - G_{E_0}^f )}\,, \end{aligned}$$
(11.5)

where we have dropped the irrelevant \(\mathcal{O} (\Lambda ^{-2})\)-terms. The survival of the \( \ln (\Lambda /m) \)-term in this result is a consequence of the amplitude being only partially renormalized by the counterterms generated by the LEC C.

On the other hand, the amplitude in Eq. (11.2) can be fully renormalized using the BPHZ procedure, i.e. by subtracting all divergences and subsequently taking the \(\Lambda \rightarrow \infty \) limit. Choosing the subtraction points of the order of the hard scale \(\Lambda _b \sim m\) in order to avoid distortion of the long-range interaction and fixing the renormalized LEC \(C_R\) at the same kinematical point, the subtractively renormalized amplitude takes the form

$$\begin{aligned} T(\vec p,\vec q \, ) \; =\; T_L (\vec p,\vec q \; )\, +\, \frac{\Psi _L (\vec p\, ) \Psi _L (\vec q\, )}{d -a_4 (E - E_0) \ln (\Lambda _b/m) - ( G_E^f - G_{E_0}^f )}\,. \end{aligned}$$
(11.6)

This result agrees, to good accuracy, with the partially renormalized one in Eq. (11.5) as long as \(\Lambda \sim \Lambda _b\). On the other hand, choosing \(\Lambda \gg m \), one approaches, for \(E \ne E_0\), the expression

$$\begin{aligned} T(\vec p,\vec q \, ) \approx T_L(\vec p, \vec q \, )\,. \end{aligned}$$
(11.7)

While finite in the \(\Lambda \rightarrow \infty \) limit, this result cannot be correct in general (see Footnote 12). Another analytic example along this line, with a long-range interaction of a separable type, is presented in Ref. [234].

The above considerations illustrate the issues that can arise when attempting to numerically verify the validity of Eq. (11.1) from the variation of \(\Lambda \) in a wide range above \(\Lambda _b\). For the considered model, Eq. (11.1) is fulfilled both for \(\Lambda \sim \Lambda _b\) and for \(\Lambda \gg \Lambda _b\). However, if the scattering amplitude were only available numerically, following the approach advocated by the practitioners of the RG-invariant method, as done e.g. in Refs. [25, 147, 232, 233], would lead one to choose the approximately \(\Lambda \)-independent but incorrect solution given in Eq. (11.7).
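To make the above point concrete, the following minimal numerical sketch (not taken from Ref. [234]; all parameter values are illustrative assumptions, with \(T_L\) and \(\Psi _L\) set to 1 and units chosen such that \(m = \Lambda _b = 1\)) evaluates the contact-term contribution appearing in Eqs. (11.5) and (11.6). The subtractively renormalized result is \(\Lambda \)-independent by construction, whereas the partially renormalized denominator drifts logarithmically with \(\Lambda \), passes through a spurious pole, and eventually suppresses the contact term entirely, reproducing the behavior of Eq. (11.7).

```python
import numpy as np

# Illustrative (made-up) toy-model parameters, in units where m = Lambda_b = 1
d = 1.0            # value of the denominator at the matching point E = E_0
a4 = 1.0           # coefficient of the energy-dependent log in Eq. (11.4)
E, E0 = 0.5, 0.0   # prediction energy and matching energy
dGf = 0.0          # (G_E^f - G_{E_0}^f), neglected for simplicity

def contact_term(scale):
    """Contact-term piece of Eqs. (11.5)/(11.6) with Psi_L set to 1:
    1 / [d - a4 (E - E0) ln(scale/m) - (G_E^f - G_{E_0}^f)]."""
    return 1.0 / (d - a4 * (E - E0) * np.log(scale) - dGf)

print("subtractively renormalized, Eq. (11.6):", contact_term(1.0))
for Lam in [1.0, 10.0, 1e3, 1e6, 1e12]:
    print(f"partially renormalized, Lambda = {Lam:8.0e}:", contact_term(Lam))
```

For \(\Lambda \sim \Lambda _b\) the two results coincide, while for \(\Lambda \gg \Lambda _b\) the partially renormalized contact term slowly dies away and the full amplitude approaches \(T_L\), i.e. the \(\Lambda \)-stable but incorrect limit of Eq. (11.7).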

In our view, a valid alternative approach to verify the approximate RG invariance as defined in Eq. (11.1) is to compare the residual \(\Lambda \)-dependence over the available cutoff range, \(\Lambda \sim \Lambda _b\), against the expected truncation uncertainty, which can be estimated using Bayesian methods [220, 235]. Such self-consistency checks are, in fact, already being routinely performed in chiral EFT calculations, see e.g. Refs. [48, 49, 215, 219, 236, 237].
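As a schematic illustration of such a self-consistency check (a sketch only: the order-by-order values, the expansion parameter Q, and the cutoff spread below are invented numbers, and the simple "largest extracted coefficient times the first omitted power of Q" prescription is used as a crude stand-in for the full Bayesian machinery of Refs. [220, 235]):

```python
import numpy as np

# Invented order-by-order predictions for some observable (LO, NLO, N2LO, N3LO)
X_orders = np.array([1.00, 1.35, 1.28, 1.30])
Q = 0.3                                   # assumed expansion parameter

# Extract dimensionless coefficients from the order-by-order shifts,
# X^(nu) - X^(nu-1) ~ X_ref * c_nu * Q^nu  (no correction at order nu = 1)
X_ref = abs(X_orders[0])
powers = np.array([2, 3, 4])
c = np.diff(X_orders) / (X_ref * Q**powers)

# Truncation-error estimate at the highest order: size of the first omitted term
delta_trunc = X_ref * np.max(np.abs(c)) * Q**(powers[-1] + 1)

# Invented residual cutoff dependence of the highest-order result for Lambda ~ Lambda_b
X_cutoffs = np.array([1.296, 1.300, 1.304])
delta_cutoff = X_cutoffs.max() - X_cutoffs.min()

print(f"truncation estimate: {delta_trunc:.4f}, cutoff spread: {delta_cutoff:.4f}")
print("consistent" if delta_cutoff <= delta_trunc
      else "cutoff spread exceeds truncation estimate")
```

If the residual cutoff spread over \(\Lambda \sim \Lambda _b\) stays within the estimated truncation uncertainty, the calculation passes this consistency check.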

12.3.6 The RG-Invariant Formulation of Nuclear \(\chi \)EFT: Open Issues

In light of the above considerations, we encourage the supporters of the large-cutoff RG-invariant approach as defined above to take a position on the following issues:

  • The large-cutoff (i.e., \(\Lambda \gg \Lambda _b\)) behavior of the scattering amplitude for singular LO interactions \(V_\mathrm{LO}(r)\), like e.g. the OPE potential, is controlled by the (ambiguous) behavior of \(V_\mathrm{LO}(r)\) at short distances \(r \ll \Lambda _b^{-1}\) that governs the terms with positive powers and/or logarithms of \(\Lambda \) in the loop expansion \(T_\mathrm{LO}= \sum _n \hbar ^n T_n\) (unless one succeeds in completely renormalizing the amplitude by subtracting all UV-divergent terms in \(T_\mathrm{LO}\), which requires the inclusion of an infinite number of counterterms). If only a finite number of counterterms are included, what is the rationale behind expecting \(T_\mathrm{LO} \big |_{\Lambda \gg \Lambda _b}\) to represent a valid/meaningful EFT prediction?

  • In Ref. [227], some of us have considered resonant P-wave systems using the formulation of pionless EFT without auxiliary dimer fields. For P-wave systems with an enhanced scattering volume, expressing the leading order (\(\mathrm{LO}\)) LECs \(C_2(\Lambda )\), \(C_4(\Lambda )\) in terms of the scattering volume and effective “range” r is only possible for \(\Lambda \sim \Lambda _b \sim r\) as a consequence of the Wigner bound. How is this feature to be interpreted from the point of view of the RG-invariant EFT approach?

  • Given the apparent non-uniqueness of the approximate RG-invariance criterion in Eq. (11.1) when varying the cutoff from \(\Lambda \sim \Lambda _b\) to \(\Lambda \gg \Lambda _b\) as demonstrated in the toy-model example, how can one make sure to avoid running into a UV stable but unphysical solution if no analytical results are available?

12.4 Renormalizability in the EFT Sense: A Formal Proof

Formal aspects of renormalizability of the finite-cutoff formulation of \(\chi \)EFT for \(NN\) scattering were discussed in the talk by Ashot Gasparyan, see Ref. [238] for more details. In the considered framework, the cutoff \(\Lambda \sim \Lambda _b\) is chosen in such a way that no spurious bound states are generated at leading order (\(\mathrm{LO}\)) in the \(NN\) spin-triplet channels, but its form is not restricted otherwise (in particular, it can be of a local or non-local type). Following the Weinberg power counting, the LO interaction is resummed up to an infinite order by iterating the Lippmann–Schwinger equation, whereas the subleading (\(\mathrm{NLO}\)) terms are iterated only once.

The standard requirement for a theory to be renormalizable is the ability to absorb all UV divergences appearing in the S-matrix into a redefinition (renormalization) of parameters in the underlying Lagrangian. Clearly, introducing a finite cutoff automatically tames all infinities in the scattering amplitude, and the problem is shifted to the appearance of positive powers of \(\Lambda \) in place of the soft scales such as \(M_\pi \) or external three-momenta as dictated by the power counting. Such power-counting violating contributions originate from the integration regions with momenta of the order of the cutoff. It is thus natural to extend the notion of renormalizability by demanding that all power-counting breaking terms are absorbable into shifts of the LECs at lower orders.

For the \(\mathrm{LO}\) \(NN\) amplitude, one usually assumes that no power-counting breaking contributions appear since positive powers of the cutoff in the iterations of the Lippmann–Schwinger equation are compensated by the corresponding inverse powers of the hard scale that appears as a prefactor in the \(\mathrm{LO}\) potential. This conjecture, along with the renormalizability of the NLO scattering amplitude, is rigorously proven in Ref. [238] to all orders in the loop expansion. An extension of the proof to the purely non-perturbative case with a non-convergent LO series is in progress.

To accomplish the proof, it was essential to introduce the regulator at the level of the Lagrangian without actually affecting it. This is achieved by adding the regulator terms to the LO interaction while systematically subtracting them from the perturbative \(\mathrm{NLO}\) interaction. The resulting approach allows one to strongly reduce the cutoff dependence of observables, a feature that has been verified by considering several examples of \(NN\) phase shifts [238].

12.5 Concluding Remarks

\(\chi \)EFT offers a model-independent and systematically improvable approach to low-energy nuclear dynamics, which—if pushed to sufficiently high orders in the EFT expansion—should be capable of making reliable and accurate predictions. It is thus expected to shed light on long-standing problems in nuclear physics such as the \(3N\) force challenge. Today, 30 years after Weinberg’s seminal papers [161, 192] that laid out the foundations of the method, the term “Precision Nuclear Physics” is not merely a dream anymore. Indeed, modern \(NN\) interactions derived in \(\chi \)EFT have reached the precision of the most sophisticated phenomenological potentials. Further recent examples of precision nuclear physics studies in \(\chi \)EFT include the determination of the pion-nucleon coupling constants from \(NN\) data at the \(\sim 1\%\) accuracy level [194] and the calculation of the deuteron structure radius at the \(\sim 0.1\%\) accuracy level [214, 215], both carried out at \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\). To push the precision frontier beyond the \(NN\) sector, it will be necessary to develop consistently regularized high-precision many-body forces and currents up through \(\mathrm{N}\) \(^4\) \(\mathrm{LO}\). Work on this ambitious goal is in progress, and it will hopefully help low-energy nuclear physics mature into a precision science.

Acknowledgements: It is a pleasure to thank Zohreh Davoudi, Andreas Ekström, Jason Holt and Ingo Tews for making this wonderful INT program possible. We are also grateful to our long-standing collaborator Ulf-G. Meißner as well as to Patrick Reinert, Xiu-Lei Ren and the LENPIC Collaboration for sharing their insights into the discussed topics. This work was supported in part by BMBF (Grant No. 05P18PCFP1), by DFG and NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD” (NSFC Grant No. 12070131001, Project-ID 196253076 - TRR 110), by DFG (Grant No. 426661267), by the ERC AdG NuclearTheory (Grant No. 885150), and by the Georgian Shota Rustaveli National Science Foundation (Grant No. FR17-354).

13 Challenges and Progress in Computational and Theoretical Low-Energy Nuclear Physics by Chieh-Jen Yang

I consider that the challenges we have been facing in theoretical nuclear physics fall mainly into two categories: (i) the challenge of computing complex systems, and (ii) the challenge of searching for a better theoretical foundation. Breakthroughs in these two directions are both important and should complement each other in order to make true progress. Fortunately, several important achievements in both directions were presented in this workshop. In the following, I highlight two breakthroughs presented at this workshop (one in each direction) and one problem that requires further investigation.

  • Breakthroughs in computational aspect

    Eigenvector continuation is a powerful tool, which allows fast simulations and tests in ab-initio calculations [34]. It has been applied to no-core shell-model and coupled-cluster methods [36] and is crucial for optimizing the low-energy constants (LECs) in order to obtain a better global fit. So far this technique has been applied mainly to bound-state problems. The effort to extend it to 3-body scattering, as presented in this workshop [136], is therefore very interesting (a minimal sketch of the basic emulator idea is given after this list).

  • Breakthroughs in theoretical foundation

    Many-body forces play an important role in complex systems. They emerge naturally when the degrees of freedom are reduced from elementary particles to composite ones. Regardless of how they are derived, most of the existing calculations performed today treat them as (parts of) the potential on top of two-body interactions without additional considerations. One of the most intriguing recent discoveries, as presented in this workshop, is that this could be very wrong. Due to a combinatorial argument [232], three-nucleon forces are estimated to be as important as the leading two-nucleon forces for nuclei with nucleon number A = 10–20. This means that, within chiral effective field theory (\(\chi \)EFT), the leading three-nucleon forces—which are conventionally regarded as next-to-next-to-leading order (\(\mathrm{NNLO}\))—should be promoted to leading order (\(\mathrm{LO}\)) in the calculation of \(^{16}\)O. This is confirmed by explicit calculations, where a physical \(^{16}\)O is obtained for the first time within a consistent \(\chi \)EFT at \(\mathrm{LO}\)  [178]. Further investigations in this direction, e.g., of the importance of four-nucleon forces and higher-order corrections, are highly desirable.

  • One problem requiring further investigation

    It was shown in this workshop that, at least under the widely adopted Weinberg power counting (WPC), there is a limitation in optimizing the LECs in \(\chi \)EFT potentials. In particular, one faces the choice of either sacrificing the description of nucleon–nucleon (\(NN\)) and few-body observables in order to describe saturation-related properties, or the other way around [173, 239]. Since the potentials being tested are of considerably high order (\(\mathrm{NNLO}\)), such a large discrepancy/uncertainty is not acceptable. A rearrangement of the EFT power counting that takes the number of nucleons into account, as suggested in Ref. [178], might therefore be necessary. Naively, this would partially release the burden on the LECs present in the three-nucleon forces, so that they do not have to fit all systems (from light to heavy-mass nuclei) at the same time.
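To make the eigenvector-continuation idea from the first item above concrete, the following minimal sketch uses a toy model with a hypothetical two-parameter Hamiltonian \(H(c) = H_0 + c_1 V_1 + c_2 V_2\) on a random model space (none of this refers to an actual nuclear interaction or to the codes of Refs. [34, 36, 136]): ground-state snapshots are collected at a few training values of the LECs, and the ground-state energy at a new point is then emulated by solving a small generalized eigenvalue problem in the snapshot subspace.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
dim = 200                                   # size of the toy model space

def rand_sym(n):                            # random symmetric matrix
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2

H0, V1, V2 = rand_sym(dim), rand_sym(dim), rand_sym(dim)

def H(c):                                   # hypothetical two-parameter Hamiltonian
    return H0 + c[0] * V1 + c[1] * V2

# 1) Training: exact ground-state vectors ("snapshots") at a few LEC values
train = [(0.8, 0.2), (1.0, 0.5), (1.2, 0.8), (0.9, 0.9)]
X = np.column_stack([eigh(H(c))[1][:, 0] for c in train])

# 2) Emulation: project H(c) and the norm matrix onto the snapshot subspace
#    and solve the small generalized eigenvalue problem
def ec_ground_state_energy(c):
    h = X.T @ H(c) @ X
    n = X.T @ X
    return eigh(h, n)[0][0]

# 3) Compare the emulator (a variational upper bound) with exact diagonalization
c_new = (1.1, 0.3)
print("EC emulator:", ec_ground_state_energy(c_new))
print("exact      :", eigh(H(c_new))[0][0])
```

Because the Hamiltonian depends linearly on the parameters, the projected matrices can be precomputed once per snapshot basis, which is what makes such emulators fast enough for global LEC fits; for parameter values inside the training region the emulator typically agrees closely with the exact energy while always remaining a variational upper bound.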

Acknowledgements: This material is based upon work supported by the Czech Science Foundation GACR grants 19-19640S and 22-14497S, e-Infrastruktura CZ (e-INFRA CZ LM2018140), and IT4Innovations at the Czech National Supercomputing Center under project number OPEN-24-21 1892.

14 On the Determination of \(\varvec{\pi N}\) and \(\varvec{NN}\) Low-Energy Constants by Martin Hoferichter

Constructing precision nuclear forces from chiral effective field theory (\(\chi \)EFT) requires good control over subleading orders in the chiral expansion, in particular, of the low-energy constants (LECs) that parameterize degrees of freedom beyond the range of validity of the EFT. While some of the LECs can be determined from other observables, there are many cases in which this is not possible, leaving ultimately lattice QCD (LQCD) as the tool of choice. Here, we describe some of the recent developments and benchmarks of this program.

  1. 1.

    The long-range part of the nucleon–nucleon (\(NN\)) and three-nucleon forces is related to \(\pi N\) physics, encoded in the LECs \(c_i\), \(d_i\), and \(e_i\) at the respective order. At a given order, these LECs can be determined precisely by matching to the subthreshold parameters of \(\pi N\) scattering via the solution of Roy–Steiner equations [240, 241] in combination with experimental input from pionic atoms [242,243,244,245,246], leaving the convergence of the chiral expansion as the dominant uncertainty.

  2. 2.

    These issues in the chiral convergence become apparent when comparing the expansion in the subthreshold and threshold regions—with the former being most relevant for \(NN\) kinematics—as the heavy-baryon expansion fails to simultaneously describe these two kinematic regions. The convergence improves with a covariant formulation and when including explicit \(\Delta \) degrees of freedom [218], but in the latter case at the expense of introducing additional LECs. Only the leading one, the \(\pi N \Delta \) coupling \(h_A\), can be determined from phenomenology, while the subleading coefficients, \(g_1\), \(b_{4,5}\), are only constrained by large-\(N_c\) arguments [218].

  3. 3.

    Not all subleading \(\pi N\) LECs can be directly extracted from \(\pi N\) scattering: the LEC \(c_5\), which appears as an isospin-breaking contribution, is determined from the strong part of the proton–neutron mass difference, which is not directly observable. A phenomenological determination is possible via the Cottingham formula [247], which relates the elastic contribution to nucleon form factors and the inelastic ones to nucleon structure functions when assuming a suitable high-energy behavior. The resulting separation of the nucleon mass difference into strong and electromagnetic contributions [248,249,250,251] differs from LQCD  [252,253,254] by \(2.3\sigma \).

  4. 4.

    A formalism similar to the Cottingham approach was used in [255, 256] to estimate the leading-order contact term [257, 258] in neutrino-less double-\(\beta \) decay, based on the elastic contribution, which gives the dominant effect in the case of the nucleon mass difference. This defines a benchmark for future calculations in LQCD  [259]. Moreover, the calculation in [255, 256] is performed in dimensional regularization, but the result is presented in terms of a renormalized, physical amplitude, which can then be matched to schemes applied in nuclear-structure calculations [260, 261]. This strategy may prove useful for future LQCD calculations as well.

  5. 5.

    Before turning to the \(NN\) sector, benchmark quantities for simpler \(\pi N\) matrix elements include the axial coupling \(g_A\) and the \(\sigma \)-term \(\sigma _{\pi N}\). While for the former LQCD calculations have reached few-percent accuracy [262, 263], the situation for the latter remains unresolved, with LQCD  [264,265,266,267,268,269] favoring values significantly smaller than phenomenological determinations [270,271,272]. Recently, it was suggested that the origin could lie in larger-than-expected excited-state contamination [273], which may be of relevance for LQCD calculations of other \(\pi N\) and \(NN\) LECs.

  6. 6.

    Given its relation to phenomenology via the Cheng–Dashen low-energy theorem [274], the \(\sigma \)-term also serves as an important benchmark for matrix elements required for searches for physics beyond the Standard Model. Another such indirect relation that allows one to determine LECs for non-standard currents proceeds via a unitarity argument [275, 276], connecting the energy dependence of vector and antisymmetric tensor matrix elements [277, 278].

Acknowledgements: Support by the Swiss National Science Foundation (Project No. PCEFP2_181117) is gratefully acknowledged.

15 The Possible Role of the Large-\(\varvec{N_c}\) Limit in Understanding Nuclear Forces from QCD by Thomas R. Richardson, Matthias R. Schindler, Roxanne P. Springer

The community has developed a procedure for attempting to understand nuclear physics starting from QCD: lattice QCD (LQCD) is used to calculate the nonperturbative physics that determines the low energy constants (LECs) of an effective field theory (EFT) possessing the symmetries of QCD (and/or beyond-the-standard-model physics), which is then input into many-body calculations to address heavier nuclei. The large-\(N_c\) limit of QCD [279], where \(N_c\) is the number of colors, can play a role in this procedure. One- and two-nucleon matrix elements can be expanded in powers of \(1/N_c \). When combined with an EFT expansion, either pionless or chiral, the number of independent LECs at a given order in the combined expansion may be reduced. These constraints can be used to prioritize LQCD calculations and also provide some simplifications to the input needed for many-body calculations.

The large-\(N_c\) limit of QCD has been used to provide theoretical constraints for a variety of applications in the two- and three-nucleon (\(3N\)) sectors, see, e.g., Refs. [10, 11, 13, 16, 20, 21, 280,281,282,283,284,285,286,287,288]. In the large-\(N_c\) limit, Wigner’s SU(4) symmetry emerges, which is also manifest in the beta decays of some medium-mass nuclei [10, 11]. In the SU(3) sector, an SU(6) symmetry among baryon-baryon interactions is predicted in the large-\(N_c\) limit, with an accidental SU(16) emerging for certain values of LECs. These patterns have been observed in LQCD calculations with larger-than-physical values of the quark masses [289, 290]. In the \(3N\) sector, LECs in \(\chi \)EFT also broadly align with the large-\(N_c\) hierarchy [16, 284]. The large-\(N_c\) analysis in the parity-violating sector demonstrates that the number of leading-order couplings in pionless EFT is reduced from five to two [20]. This also highlights the need for a determination of the isotensor parity-violating LEC. The isotensor LEC in particular is an opportunity for LQCD to make a prediction in the absence of experimental data. Reference [286] considered the impact of the dual expansion on T-violating interactions. The application of the large-\(N_c\) approach to external magnetic and axial vector fields [287] offers a partial explanation for the disparate sizes of the isoscalar and isovector magnetic LECs despite these terms occurring at the same order in the pionless EFT power counting. These results also indicate that naturalness, i.e., the concept that LECs at the same order in the power counting should be the same size, may be hidden depending on the choice of basis; therefore, caution should be taken when attempting to quantify naturalness. Finally, the large-\(N_c\) analysis of charge-independence-breaking (CIB) two-nucleon interactions [288] provides a justification for the assumptions of Refs. [257, 258] relating a new lepton-number-violating LEC to an experimentally determined combination of CIB LECs. This result was recently corroborated using a different method [255, 256].

These examples demonstrate the utility of combining the large-\(N_c\) and EFT expansions. Further, this dual expansion could be used to estimate the relative sizes of the couplings, which could potentially reduce the number of contributions required at any given order for many-body calculations. Additionally, large-\(N_c\) constraints may help prioritize calculations for the lattice community. Lastly, a large-\(N_c\) analysis of new beyond-the-standard-model couplings can provide constraints to potentially guide the interpretation of experimental results, e.g., for dark matter direct detection. We think that the procedure of understanding nuclear phenomena via the combination of LQCD, EFTs, and many-body techniques may benefit from implementing large-\(N_c\) constraints.

Acknowledgements: We are grateful to Saori Pastore for useful discussions. We thank the Institute for Nuclear Theory at the University of Washington for its stimulating research environment during the INT-21-1b program “Nuclear Forces for Precision Nuclear Physics,” which was supported in part by the INT’s U.S. Department of Energy grant No. DE-FG02-00ER41132. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Numbers DE-SC0019647 (TRR and MRS) and DE-FG02-05ER41368 (RPS).

16 Towards Robustly Grounding Nuclear Physics in the Standard Model by Zohreh Davoudi, William Detmold, Marc Illa, Assumpta Parreño, Phiala E. Shanahan, Michael L. Wagman

Nuclear physics is entering an exciting era in which aspects of nuclear structure and reactions can be directly computed from the Standard Model of particle physics. Lattice quantum chromodynamics (LQCD) will play a vital role in this era by providing a systematically improvable route through which to obtain nonperturbative quantum chromodynamics (QCD) predictions for few-nucleon systems. In particular, robust QCD predictions with quantified uncertainties for observables, including the energy spectra of multi-nucleon systems and matrix elements of electroweak and beyond-Standard-Model (BSM) currents, will provide valuable information about nuclear structure and interactions complementary to that obtained from experimental measurements. Such predictions can be used to constrain the parameters of low-energy effective field theories (EFTs), as well as to validate and inform phenomenological models of nuclei based on nucleon degrees of freedom.

Both physical and computational challenges will restrict LQCD studies to few-nucleon systems for the foreseeable future; exponential degradation of signal versus noise at large Euclidean times [291, 292] arising from sign problems [293] and tensor contraction complexity [294] make LQCD calculations of (multi-)baryon correlation functions computationally demanding, while the smallness of finite-volume (FV) energy gaps between states in such systems complicates their analysis. Fortunately, the most relevant inputs to EFTs of nuclei are two- and three-baryon interactions as well as one- and two-baryon electroweak and BSM currents. It is in this relatively computationally accessible few-baryon sector that LQCD calculations will have the largest impact on nuclear EFTs. Pioneering LQCD calculations of few-nucleon systems performed over the last 2 decades have been used to motivate, develop, and test different strategies for using the immediate results of LQCD calculations—FV Euclidean correlation functions formed from particular sets of composite operators designed to interpolate to the desired states—to obtain FV energy spectra and matrix elements and to constrain the inputs of nuclear effective theories. The first dynamical LQCD calculations of two-nucleon correlation functions, performed by the NPLQCD Collaboration in 2006 [295], used asymmetric correlation functions with localized sources and non-local sinks to constrain the contact operators describing two-nucleon interactions with both Weinberg [192] and Beane-Bedaque-Savage-van-Kolck [166] power counting in the two s-wave scattering channels. Calculations of analogous two-baryon correlation functions with non-zero strangeness by the NPLQCD Collaboration in 2012 were used to constrain contact interactions in an EFT for hyperon-nucleon systems that was then used to predict hyperon-nucleon phase shifts as well as in-medium energy shifts of hyperons relevant for the neutron-star equation of state [296]. Instead of obtaining scattering amplitudes via EFTs that are constrained by LQCD, Lüscher’s quantization condition [297] and its generalizations (reviewed in Refs. [298, 299]) have also been explored as a complementary strategy for relating the immediate results of LQCD calculations for two-baryon systems to infinite-volume quantities such as scattering phase shifts. Constraints on s-wave scattering at particular values of the quark masses have been made using this method by the NPLQCD Collaboration [295, 300]. Constraints on higher-partial-wave scattering were first made by the CalLat Collaboration [301] in 2015 by applying these methods to asymmetric LQCD correlation functions with displaced as well as local sources.

LQCD and EFT have advanced together over the last decade and been applied to study increasingly complex systems. In 2013, calculations of baryon-number \(A\in \{2,3,4\}\) nuclear (and \(A\in \{2,3,4,5\}\) hypernuclear) correlation functions by the NPLQCD Collaboration with unphysically large quark masses corresponding to \(m_\pi = 806\) MeV [302] were used to constrain two- and three-body contact interactions in pionless EFT by Barnea et al. [303], who went on to predict binding energies of \(A\in \{5,6\}\) nuclei at these quark masses. More refined EFT matching directly to FV energies has been recently pursued in Ref. [144]. Calculations of larger nuclei in pionless EFT including \({}^{16}\)O and \({}^{40}\)Ca at both \(m_\pi = 806\) MeV (matched to the aforementioned LQCD results) and with physical quark masses were performed by multiple groups in 2017 [32, 175]. Calculations of additional hyperon-nucleon and hyperon-hyperon scattering channels by the NPLQCD Collaboration in 2017 suggested new emergent symmetries in baryon-baryon interactions [289]. The appearance of these symmetries at lighter quark masses has been tested by recent calculations constraining SU(3)\(_f\)-breaking hypernuclear interactions [290]. The structure of light nuclei with \(m_\pi = 806\) MeV and \(m_\pi = 450\) MeV has been probed by calculations of scalar, axial, tensor, and vector nuclear matrix elements by the NPLQCD Collaboration over the last several years [304,305,306,307,308,309,310,311] that have revealed shell-model-like structure at unphysically large quark masses. The first nuclear-reaction studies from LQCD, albeit at large quark masses, were reported in Refs. [306, 309, 310, 312], paving the way to constraining short-distance LECs of the EFTs in pp fusion, and single- and double-\(\beta \) decay processes in light nuclei [313]. Techniques for matching FV results for few-nucleon systems between LQCD and EFT have been further developed and in the last year have been used to enable a quark-mass extrapolation of the Gamow–Teller matrix element governing triton \(\beta \) decay [314, 315] as well as first constraints on the quark momentum fractions of \({}^3\)He [316].

Enabled by the early development of efficient algorithms [294, 317], all the LQCD calculations of multi-baryon systems described above used local, or sometimes displaced, sources and non-local sinks built from products of momentum-projected baryons. An alternative approach developed by the HALQCD Collaboration is based on determining nuclear potentials from Bethe-Salpeter wavefunctions of multi-baryon systems [318, 319] and is argued to avoid systematic uncertainties from excited-state effects involving unbound elastic scattering states [320] (inelastic states still contaminate the correlation functions used in this approach). However, short-distance features of the potentials determined using these methods depend on the sink interpolating operator choice, making it very challenging to quantitatively assign systematic uncertainties to predictions that depend on these short-distance features [321,322,323,324,325].

In the last few years, there has been exciting progress in enlarging the scope of interpolating operators that can be practically included in multi-baryon LQCD calculations. Increased computing power and new algorithmic approaches have allowed rigorous variational constraints on finite-volume energies of two-baryon systems. The first variational calculations of two-nucleon systems using symmetric correlation functions with multi-baryon sources and sinks were performed by Francis et al. [326] and were enabled by the Laplacian–Heaviside method [327] for computing approximate all-to-all quark propagators. A follow-up to this calculation [328] found significant discretization effects in multi-nucleon FV energy shifts and, perhaps relatedly, interesting tensions in comparison with previous results from the NPLQCD and CalLat Collaborations using asymmetric correlation functions. A further variational calculation [329] using the stochastic Laplacian-Heaviside method [330] included two-nucleon interpolating operators with zero and one unit of relative momentum in correlation-function matrices with several values of center-of-mass momentum that were diagonalized to construct orthogonal approximations to the ground state and first unbound excited state. These results using different discretizations and interpolating operators again show tensions with earlier results. The most recent variational study of two-nucleon systems as of this writing was performed by the NPLQCD Collaboration [331]; it used sparsened timeslice-to-all quark propagators [332] and included a considerably larger set of hexaquark operators, quasi-local operators with exponential nucleon wavefunctions inspired by EFT bound-state wavefunctions, and scattering operators with up to \(\sqrt{6}\) units of relative momentum (in the center-of-mass frame). Direct comparisons between asymmetric correlation functions and variational results on the same gauge-field ensemble in this study indicate that estimates of the FV energy spectrum depend significantly on the interpolating-operator set, such that it is difficult to achieve systematically controlled results at the available level of statistics. Similarly, comparison of variational results from different choices of interpolating-operator sets leads to different bounds on the ground-state energy, although the upper bounds on energy levels provided by variational methods are robust in all cases.

There are multiple possible explanations of these results. On the one hand, asymmetric correlation functions could appear to be exhibiting single-state dominance due to delicate cancellations between ground- and excited-state contributions, and the true ground-state energy could be larger than the value determined by previous asymmetric calculations [331]. If such a delicate cancellation is in place, the observed volume insensitivity of asymmetric correlation functions associated with the obtained ground states in previous two-nucleon studies [289], which signals the bound nature of the state in the volume, would be a surprising coincidence. On the other hand, it is straightforward to construct interpolating-operator overlap models for which asymmetric correlation functions would reveal the true ground-state energy while variational methods provide an upper bound that is dominated, at realistic statistical precision, by contributions from a higher-energy state that has larger overlap with all of the interpolating operators used in the study [331]. A simple toy example of this is given by a pair of interpolating operators A and B that are used to probe a three-state system with true energy levels

$$\begin{aligned} E_0^{(AB)} = \eta - \Delta , \quad E_1^{(AB)} = \eta , \quad E_2^{(AB)} = \eta + \delta . \end{aligned}$$
(15.1)

Define normalized overlap factors for operators A and B onto these states by

$$\begin{aligned} \mathcal {Z}_A = (\epsilon ,\sqrt{1 - \epsilon ^2},0), \quad \mathcal {Z}_B = (\epsilon ,0,\sqrt{1 - \epsilon ^2}), \end{aligned}$$
(15.2)

where \(\epsilon \ll 1\) is a real parameter. Solving a generalized eigenvalue problem (GEVP) using \(2\times 2\) correlation-function matrices with interpolating operators \(\{A,B\}\) and times \(t_0\) and \(t > t_0\) gives eigenvalues

$$\begin{aligned} \begin{aligned} \lambda _0^{(AB)}&= e^{-(t-t_0)\eta }\left[ 1 + \epsilon ^2 \left( e^{t \Delta } - e^{t_0 \Delta } \right) + \mathcal {O}(\epsilon ^4) \right] , \\ \lambda _1^{(AB)}&= e^{-(t-t_0)(\eta + \delta ) }\left[ 1 + \epsilon ^2 \left( e^{t (\Delta + \delta )} - e^{t_0 (\Delta + \delta )} \right) + \mathcal {O}(\epsilon ^4) \right] . \end{aligned} \end{aligned}$$
(15.3)

Unless t is large enough such that \(e^{t \Delta }\) compensates for the \(O(\epsilon ^2)\) overlap-factor suppression, the bound obtained from the lowest GEVP eigenvalue will significantly overestimate the true ground-state energy. However, an asymmetric correlation function of the form \(\langle B(t) \overline{A}(0) \rangle \) will overlap perfectly with the true ground state with zero excited-state contamination. This example can be trivially generalized to include more states that have small overlap with the interpolating-operator set \(\{A,B\}\) without changing the need for achieving large enough \(e^{t \Delta }\) in order to compensate for the smallness of the overlap factors present. Including additional interpolating operators that have small overlap with the ground state also does not improve GEVP ground-state energy estimates in this example—it is the inclusion of interpolating operators with sufficiently large overlap with states of interest that is essential for the success of variational methods.
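The following minimal numerical sketch evaluates this toy model directly (the parameter values \(\eta \), \(\Delta \), \(\delta \), and \(\epsilon \) are invented for illustration): it compares the effective energy obtained from the lowest-lying GEVP solution of the \(2\times 2\) correlation-function matrix with that obtained from the asymmetric correlator \(\langle B(t) \overline{A}(0) \rangle \), which in this model couples only to the true ground state.

```python
import numpy as np

# Invented toy-model parameters (in units of inverse Euclidean time)
eta, Delta, delta, eps = 1.0, 0.10, 0.30, 0.05
E = np.array([eta - Delta, eta, eta + delta])       # true energies, Eq. (15.1)
ZA = np.array([eps, np.sqrt(1 - eps**2), 0.0])      # overlap factors, Eq. (15.2)
ZB = np.array([eps, 0.0, np.sqrt(1 - eps**2)])

def corr(Zi, Zj, t):                                # <O_i(t) O_j^dagger(0)>
    return np.sum(Zi * Zj * np.exp(-E * t))

def C(t):                                           # 2x2 correlation-function matrix
    return np.array([[corr(ZA, ZA, t), corr(ZA, ZB, t)],
                     [corr(ZB, ZA, t), corr(ZB, ZB, t)]])

def gevp_lowest(t, t0=1.0):                         # slowest-decaying GEVP eigenvalue
    lam = np.linalg.eigvals(np.linalg.inv(C(t0)) @ C(t))
    return np.max(lam.real)

dt = 1.0
for t in [3.0, 10.0, 40.0, 120.0]:
    e_gevp = -np.log(gevp_lowest(t + dt) / gevp_lowest(t)) / dt
    e_asym = -np.log(corr(ZB, ZA, t + dt) / corr(ZB, ZA, t)) / dt
    print(f"t = {t:5.0f}: E_eff(GEVP) = {e_gevp:.4f}, E_eff(asym) = {e_asym:.4f}")
print("true ground-state energy:", E[0])
```

At moderate t the GEVP-based effective energy sits near \(\eta \) rather than at the true ground-state energy \(\eta - \Delta \), and only once \(t\Delta \gtrsim \ln (1/\epsilon ^2)\) does it relax to the correct value, whereas the asymmetric correlator reproduces \(E_0\) exactly at all t in this deliberately engineered example.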

The existence of such models demonstrates that, in order to conclusively determine whether two-nucleon systems are bound or unbound with larger-than-physical values of the quark masses, further variational studies are required to span the subspace of the Hilbert space that might be associated with a two-nucleon bound state. While it is possible for variational studies to conclusively demonstrate the presence of a bound state, by their very nature they cannot rule out such a state unless the interpolating operators that are used form a basis for the Hilbert space—a scenario that cannot be realistically achieved. Similarly, energies extracted from asymmetric correlation functions provide estimates of the ground-state energy that are subject to systematic uncertainties from choices of interpolating operators that may be difficult to estimate.

LQCD studies of multi-nucleon systems will become increasingly refined in the coming years. In future studies, systematic uncertainties in LQCD determinations of nuclear properties associated with lattice-spacing and quark-mass extrapolations will be controlled through the use of larger sets of gauge-field ensembles. Excited-state effects will be controlled through variational studies that include more, and more varied, interpolating operators that better cover the low-energy sector of the Hilbert space. The ongoing development of strategies for matching LQCD results to nuclear effective theories and other ways to relate FV and infinite-volume observables will be increasingly essential for extending the reach of robust predictions grounded in the Standard Model up the chart of the nuclides.

Acknowledgements: We are grateful to the former and current members of the NPLQCD Collaboration, especially Martin Savage, for many insightful discussions and valuable collaborations around the topics discussed in this piece. ZD acknowledges support from the Alfred P. Sloan fellowship, Maryland Center for Fundamental Physics at the University of Maryland, College Park, and the U.S.Department of Energy’s (DOE’s) Office of Science Early Career Award DE-SC0020271. WD and PES are supported in part by the U.S. DOE’s Office of Science, Office of Nuclear Physics under grant Contract DE-SC0011090. WD is further supported in part by the SciDAC4 award DE-SC0018121, and within the framework of the TMD Topical Collaboration of the U.S. DOE’s Office of Science, Office of Nuclear Physics. PES is additionally supported by the National Science Foundation under EAGER grant 2035015, by the U.S. DOE’s Office of Science Early Career Award DE-SC0021006, by a NEC research award, and by the Carl G and Shirley Sontheimer Research Fund. WD and PES are supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/). MI is supported in part by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center, and in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) through the Quantum Horizons: QIS Research and Innovation for Nuclear Science, under Award Number DOE (NP) Award DE-SC0020970. AP acknowledges financial support from the State Agency for Research of the Spanish Ministry of Science and Innovation through the "Unit of Excellence María de Maeztu 2020-2023" award to the Institute of Cosmos Sciences (CEX2019-000918-M), the European FEDER funds under the contract PID2020-118758GB-I00, and from the EU STRONG-2020 project under the program H2020-INFRAIA-2018-1, grant agreement No. 824093. This piece has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.

17 On the Reliable Lattice-QCD Determination of Multi-Baryon Interactions and Matrix Elements by Raúl Briceño, Jeremy R. Green, Andrew D. Hanlon, Amy Nicholson, André Walker-Loud

For about a decade, there has persisted a discrepancy in the literature: most groups performing calculations of nucleon–nucleon (\(NN\)) systems with lattice QCD (LQCD) reported the identification of deeply bound di-nucleon systems at pion masses larger than in nature [289, 301, 302, 333, 334], while the HAL QCD Collaboration, utilizing an alternative method known as the HAL QCD potential [318, 335], reported that the di-nucleon systems do not support bound states at these heavy pion masses [336, 337]. It was initially asserted by many groups that this discrepancy was a sign of unquantified systematic uncertainties in the HAL QCD approach, as this method requires additional assumptions beyond those needed for the Lüscher formalism [142, 338,339,340].

However, subsequent work uncovered significant dependence of the extracted spectrum upon the type of local creation operator used [341], raising significant concerns about whether or not the previous works correctly determined the \(NN\) spectrum. The spectrum does not depend upon the creation/annihilation interpolating operators, and so the observation of such dependence is indicative either of a misidentification of the spectrum through “false plateaus” [341], or of a practical issue: the operators used may couple so poorly to a given state that, at the available finite statistics, one cannot numerically resolve the presence of the state through the analysis of the correlation functions.

A shortcoming of all previous works that have identified deeply bound di-nucleons is that they employed asymmetric correlation functions in which the \(NN\) creation and annihilation interpolating operators were not conjugate to each other. In this setup, the overlap factors for the excited states are not guaranteed to have the same sign as for the ground state, making the analysis susceptible to false plateaus. There are now three independent calculations of two-baryon systems which utilize momentum-space creation and annihilation operators, leading to Hermitian matrices of correlation functions and allowing for a variational approach [342, 343]: the Mainz group has computed the H dibaryon and di-neutron systems [326] using the distillation method [327]; the sLapHnn Collaboration has computed the di-nucleon systems [329] using the stochastic Laplacian Heaviside method [330]; the NPLQCD Collaboration has computed the di-nucleon systems [331] using a momentum-sparsening method [332]. None of these newer works have identified deeply bound di-nucleon systems, including the calculation by NPLQCD which included both momentum-space and local hexaquark interpolating fields in the linearly independent set of operators.

Resolving the nature of the \(NN\) systems at heavy pion mass is critical for the application of LQCD to nuclear physics. It is a test of the underlying physics and, at present more importantly, a test of our ability to perform the calculations with fully quantified theoretical uncertainties. For example, if it is determined that calculations which utilize local hexaquark creation operators lead to a misidentification of the spectrum, all subsequent calculations of two- and more-nucleon matrix elements that utilize such a set of creation operators will have unquantified corrections.

A clear picture has emerged: the only calculations which identify deeply bound di-nucleons are those that utilize local hexaquark creation operators and momentum-space annihilation operators. Can we understand more quantitatively why these sets of correlation functions indicate deeply bound di-nucleons? HAL QCD has suggested they emerge as a false plateau generated by a linear combination of elastic \(NN\) scattering states, with differing signs for the overlap factors [344]. This can be tested with a matrix of correlation functions including both momentum-space and hexaquark operators.

Another recent troubling result is from the Mainz-group calculation of the H dibaryon in the SU(3)-flavor symmetric limit, utilizing six lattice spacings, from which they observed very large discretization corrections to the binding energy [328]. In contrast, important discretization effects in two-meson systems, which are generally more precisely computed, have so far not been observed. Therefore, the observation of such corrections in this dibaryon system raises several questions. Can this be confirmed from independent calculations? Is it specific to the lattice action used? Is it unique to the H dibaryon or a general feature of dibaryon interactions? With such large discretization corrections, does one need to consider discretization corrections to the Lüscher quantization condition?

In the following, we comment on these and other issues related to LQCD calculations of two-baryon systems. We provide some suggestions on how to further elucidate some of the perplexing issues that have arisen in the literature, and on how the field can make progress. The first step in making progress is to reliably determine the two-baryon spectrum, prior to moving on to more baryons and/or their matrix elements.

17.1 Reliable Spectral Analysis

For reliable conclusions to be drawn from the Lüscher finite-volume formalism, it is essential that the energies input into the quantization condition are accurate. There are three major challenges to obtaining accurate energies:

  • At early Euclidean time, the correlation functions are contaminated by excited states;

  • The lowest excited-state gap in the two-baryon system is given by elastic scattering modes, which have an energy gap corresponding roughly to \(p^2/M\approx (2\pi /L)^2/M\approx \) 20–50 MeV for typical values of L used in present calculations. The time scale for these excited states to decay is given by the inverse energy gap, which corresponds to 4–10 fm (a rough numerical estimate of these scales is given after this list);

  • Empirically, it is observed that the noise of two-baryon systems overwhelms the signal before \(t\approx 2\) fm, for calculations at larger-than-physical pion mass. The exponential degradation of the signal becomes worse as the pion mass is reduced towards the physical point, which for \(NN\) systems scales as \(e^{-2(M_N-\frac{3}{2}m_\pi )t}\) at asymptotically large times.
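As a rough numerical cross-check of the scales quoted in the last two items (a back-of-the-envelope sketch; the box sizes, nucleon mass, and pion mass below are assumed representative values for heavier-than-physical-pion-mass ensembles, not parameters of any particular calculation):

```python
import numpy as np

hbarc = 197.327                      # MeV fm

# Assumed representative values for a heavier-than-physical-pion-mass ensemble
M_N, m_pi = 1500.0, 800.0            # nucleon and pion masses in MeV

for L in [5.0, 6.0, 7.0]:            # spatial box sizes in fm
    p = 2 * np.pi * hbarc / L        # one unit of back-to-back momentum, in MeV
    gap = p**2 / M_N                 # elastic excited-state gap ~ (2*pi/L)^2 / M
    tau = hbarc / gap                # decay time of that excited state, in fm
    print(f"L = {L:.0f} fm: gap ~ {gap:4.0f} MeV, decay time ~ {tau:4.1f} fm")

# Relative signal-to-noise scale for an NN correlator, exp[-2 (M_N - 3/2 m_pi) t]
for t in [1.0, 2.0, 3.0]:            # Euclidean times in fm
    sn = np.exp(-2 * (M_N - 1.5 * m_pi) * t / hbarc)
    print(f"t = {t:.0f} fm: signal-to-noise suppression ~ {sn:.1e}")
```

With these inputs the elastic gaps come out in the quoted 20–50 MeV range, the corresponding decay times span several fm, and the exponential factor illustrates how quickly the relative signal-to-noise degrades with Euclidean time.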

Given the results in the literature, the only promising strategy to overcome these challenges is to use a variational method to extract the spectrum [342, 343] and then use the Lüscher quantization condition [142, 338,339,340] to provide important diagnostics on the consistency of the extracted spectrum [345, 346]. Such a strategy has been used successfully in many studies of two-meson systems, where it has become the standard tool, see the review [298] and references therein.

The variational method involves forming a Hermitian matrix of correlation functions with elements defined as

$$\begin{aligned} C_{ij} (t) \equiv \langle \mathcal {O}_i (t + t_0) \mathcal {O}_j^\dagger (t_0) \rangle , \end{aligned}$$
(16.1)

using a set of N linearly independent operators \(\left\{ \mathcal {O}_i\right\} \) that, ideally, have strong overlap with the states that one wants control over. For example, a baryon-baryon operator could be a linear combination of objects having the form

$$\begin{aligned} \mathcal {O}_{BB}(t,\vec P) = \sum _{\vec x,\vec y} e^{-i\vec p_1\cdot \vec x}e^{-i(\vec P-\vec p_1)\cdot \vec y} (qqq)(t,\vec x)(qqq)(t,\vec y), \end{aligned}$$
(16.2)

corresponding to the momentum-space operators discussed above, whereas a hexaquark operator has the structure

$$\begin{aligned} \mathcal {O}_H(t,\vec P) = \sum _{\vec x} e^{-i\vec P\cdot \vec x} (qqqqqq)(t,\vec x). \end{aligned}$$
(16.3)

The finite-volume spectrum can be obtained by solving the generalized eigenvalue problem (GEVP) on the resultant correlator matrix

$$\begin{aligned} C(t) \upsilon _n (t, \tau _0) = \lambda _n (t, \tau _0) C(\tau _0) \upsilon _n (t, \tau _0), \end{aligned}$$
(16.4)

whose eigenvalues give

$$\begin{aligned} \lambda _n(t, \tau _0) = |A_n|^2 e^{-E_n (t - \tau _0)}\big [1 + O (e^{-\Delta _n t}) \big ] , \end{aligned}$$
(16.5)

where \(E_n\) is the energy of the nth eigenstate in the system and \(n = 0, \ldots , N - 1\). Thus, the lowest N eigenstates that have overlap with the chosen set of operators can readily be extracted from these generalized eigenvalues. But, given the practical limitations on the size of t due to the exponentially bad signal-to-noise ratio, the reliability of this extraction depends strongly on the size of the gap \(\Delta _n\). It has been shown that solving the GEVP with \(\tau _0 \ge t/2\) leads to a gap of \(\Delta _n = E_N - E_n\) [343], which removes the contribution from all states with \(E_m \ne E_n < E_N\) from \(\lambda _n(t, \tau _0)\). This is in contrast to solving for the eigenvalues of C(t) directly, in which case the gap is in general given by \(\Delta _n = \min _{m \ne n} |E_n - E_m|\) [342], and thus does not help in controlling the contamination from different states. Hence, by using the GEVP, the gap can be made arbitrarily large by including more operators in the correlator matrix that couple well to the relevant states.
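As a minimal illustration of Eqs. (16.4) and (16.5) (a sketch built from synthetic data: the energies and overlap factors below are invented, not taken from any lattice calculation), one can construct the correlator matrix for a handful of operators, solve the GEVP with \(\tau _0 = t/2\), and read off the energies from ratios of the generalized eigenvalues at neighboring times:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Synthetic spectrum and overlap factors (invented numbers, not lattice data)
E = np.array([0.90, 1.15, 1.40, 1.90])      # energies in lattice units
Z = rng.normal(size=(3, len(E)))            # overlaps of 3 operators onto 4 states

def C(t):                                   # correlator matrix, cf. Eq. (16.1)
    return (Z * np.exp(-E * t)) @ Z.T

for t in [8, 12, 16]:
    t0 = t // 2                             # choose tau_0 = t/2, cf. Ref. [343]
    lam_t  = eigh(C(t),     C(t0), eigvals_only=True)[::-1]   # descending order
    lam_tp = eigh(C(t + 1), C(t0), eigvals_only=True)[::-1]
    E_eff = -np.log(lam_tp / lam_t)         # effective energies from Eq. (16.5)
    print(f"t = {t:2d}, tau_0 = {t0:2d}:", np.round(E_eff, 3))
print("true E_n (lowest three):", E[:3])
```

With \(\tau _0 \ge t/2\) the residual contamination of each level is governed by the gap to the first state outside the operator basis, so the three extracted effective energies approach \(E_0\), \(E_1\), and \(E_2\) as t grows.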

The GEVP is also amenable to self-consistency checks by varying the operators used in the correlator matrix and observing how the resulting spectrum is affected (e.g. see Ref. [331] for a recent investigation). This can help to determine operators that are irrelevant, or, more importantly, essential. As stated above, further consistency checks can be made by utilizing the quantization condition to look for any inconsistent behavior in the phase shift coming from the energies extracted from the GEVP. The resulting phase shift can also be used to predict the energy spectrum and look for any missing energies. Thus, the GEVP method in combination with the finite-volume Lüscher formalism is a powerful method for validating the extracted spectrum.

Pionless EFT indicates that, regardless of whether a deep bound state exists in the system or not, a modest variational basis containing only momentum-space operators is sufficient to correctly determine the spectrum, and the inclusion of a hexaquark operator does not improve the convergence [347] (see also Ref. [326]). One lattice calculation found that including a hexaquark operator gave rise to an additional energy level well above threshold, without affecting any other levels [331]. Such a state, if it exists, would have to be a very narrow resonance that is weakly coupled to the \(NN\) system, such that it leaves an otherwise imperceptible imprint on the nearby spectrum and the resulting phase shift. It is important to verify the validity of this state to further understand these strongly interacting systems.

All applications of the variational method to two-baryon systems at larger-than-physical pion masses either strongly disfavor a bound \(NN\) state [329] or are inconclusive in this regard [326, 331]. Direct comparisons, on the same ensemble, of variational results to those using the asymmetric correlator setup described above show inconsistencies in the extracted phase shifts, providing further evidence that these early studies were affected by uncontrolled excited-state contamination.

The recent switch to the variational method in two-baryon systems has resulted in great strides toward resolving the two-baryon controversy in the literature, but more work is certainly needed. For example, it will be illuminating to compare the phase shifts determined on the same configurations from the HAL QCD potential method and the variational method. A shift to controlling other sources of systematic error may then be the next hurdle for obtaining reliable estimates of two-baryon observables. We discuss first steps toward this in the next section.

17.2 Quantization Condition and Discretization Effects

Until recently, every lattice calculation of baryon-baryon interactions was done using a single lattice spacing a. This was based on the assumption that discretization effects mostly cancel when taking the difference between baryon-baryon energy levels and the sum of two single-baryon energy levels [348]. In Ref. [328], two of us studied the H dibaryon for a fixed choice of quark masses using six lattice spacings and found very large discretization effects: the binding energy in the continuum limit was 4.6(1.3) MeV, whereas on the coarsest lattice spacing it was above 30 MeV. Given this first result in a single physical system, it will be important to check other systems such as \(NN\) systems to understand whether large discretization effects are common; work in this direction is in progress [349]. In addition, since discretization effects are not universal, it will be worthwhile to also perform studies using different lattice actions. Some input from EFTs or toy models could help in understanding these effects and whether any lattice action should be preferred. It will also be interesting and important to understand why discretization effects may be so relevant for two-baryon systems but do not seem nearly as relevant for two-meson systems.

If large discretization effects are widespread, this implies that many previous calculations may also contain large systematic errors. In the future, it will be important to perform calculations using multiple lattice spacings or a single lattice spacing that is finer than has typically been used in the past.

17.2.1 Applying Quantization Conditions at Nonzero Lattice Spacing

A now-standard approach for studying multihadron interactions with LQCD is to use finite-volume quantization conditions, which relate the scattering amplitude to the finite-volume spectrum [142, 338,339,340]. Given that these conditions have been derived in the continuum, a natural question is how best to analyze a spectrum computed at nonzero lattice spacing. Two possible strategies are illustrated in Fig. 3.

Fig. 3 Two paths, red and blue, from the lattice finite-volume energy levels \(E(L,a)\) to the continuum phase shift \(\delta (p^2)\)

The most theoretically clean approach is to follow the red path, by first performing continuum extrapolations at fixed volume to obtain the continuum finite-volume spectrum, then analyzing it using standard quantization conditions. However, the corresponding lattice calculations are challenging, since they require matched volumes at different lattice spacings.

Alternatively, one could follow the blue path, using quantization conditions to obtain scattering amplitudes at finite lattice spacing and then extrapolating those to the continuum. Two simplifying assumptions can be made: the continuum quantization condition can be applied to data at nonzero lattice spacing, and the scattering amplitude at nonzero lattice spacing has the same structure as the continuum one. This is the main approach used in Ref. [328]. Along with a fit ansatz in which \(p\cot \delta _0(p)\) is given by a polynomial in \(p^2\) whose coefficients are affine functions of \(a^2\), this approach produced a good description of the lattice spectra.
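
To make the fitting strategy concrete, the sketch below sets up one possible version of such a global fit: \(p\cot \delta _0(p)\) modeled as a first-order polynomial in \(p^2\) with coefficients linear in \(a^2\), fit simultaneously to points from several lattice spacings. The data arrays, the polynomial order, and the use of scipy are illustrative assumptions, not the actual analysis of Ref. [328].

```python
import numpy as np
from scipy.optimize import curve_fit

def pcot_model(x, c0, d0, c1, d1):
    """p*cot(delta_0) = (c0 + d0*a^2) + (c1 + d1*a^2)*p^2 (order-1 polynomial in p^2)."""
    psq, asq = x
    return (c0 + d0 * asq) + (c1 + d1 * asq) * psq

# psq, asq, pcot, pcot_err: 1-d arrays collected by applying the continuum
# quantization condition to the spectra at each lattice spacing (hypothetical inputs)
# popt, pcov = curve_fit(pcot_model, (psq, asq), pcot, sigma=pcot_err, absolute_sigma=True)
# continuum prediction on a momentum grid: pcot_model((psq_grid, 0.0), *popt)
```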

A better understanding of discretization effects could help to put the strategy of following the blue path on theoretically more solid ground. It would be beneficial to derive a quantization condition that accommodates at least the leading discretization effects. This would, of course, have to incorporate the \(\mathrm {O}(a^2)\) discretization effects that reduce the O(4) Euclidean-rotational group down to the hypercubic group. At this stage it is unclear if these effects could be accounted for using a universal framework or if it is necessary to resort to a specific EFT evaluated to a finite order. First steps in this direction were performed in Ref. [350] for a simple theory.

17.2.2 Energy Levels on Left-Hand Cuts

Here we note another important direction of investigation for the two-particle quantization conditions. It is well known that the standard quantization conditions break down above certain inelastic thresholds, with considerable progress being made in recent years to understand quantization conditions above multi-channel and three-particle thresholds (see Refs. [298, 299] for recent reviews). However, they also fail for energies that overlap with left-hand cuts that occur below the lowest threshold. One can see this in the simplest case where the quantization condition is truncated to S-wave:

$$\begin{aligned} p\cot \delta _0(p) = \frac{2}{\sqrt{\pi }L\gamma }Z_{00}^{\vec PL/(2\pi )}\left( 1,\left( \frac{pL}{2\pi }\right) ^2\right) . \end{aligned}$$
(16.6)

Here, \(Z_{00}^{\vec P L/(2\pi )}\) is a generalized zeta function that is real for real \(p^2\). Below the start of the left-hand cut, \(p\cot \delta _0(p)\) is generically complex while the right-hand side remains real; as a result, the equation has no solutions.

On the other hand, lattice energy levels below the start of the left-hand cut have now been observed [328]. For SU(3) singlet baryon-baryon scattering relevant for the H dibaryon, the first left-hand cut is caused by t-channel exchange of a pseudoscalar octet meson. On all but one of the ensembles in Ref. [328], the ground state in the rest frame lies below the start of the t-channel cut; these levels were discarded from the analysis. For nucleon–nucleon scattering at the physical pion mass, the t-channel cut starts about 5 MeV below threshold; naïve applications of quantization conditions to the scattering amplitude in the deuteron sector (using models that do not contain a t-channel cut) predict that the ground state in the rest frame will lie below the start of this cut when \(L<8\) fm [351, 352].
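
As a quick numerical check of the 5 MeV figure quoted above, one can use the standard result that the branch point from t-channel exchange of a particle of mass \(m_\pi\) sits at \(p^2 = -m_\pi^2/4\) in the partial-wave amplitude; the snippet below evaluates the corresponding distance below the \(NN\) threshold (the masses used are isospin-averaged values, chosen only for illustration).

```python
import numpy as np

m_pi, m_N = 139.57, 938.92                      # MeV, illustrative isospin-averaged masses
E_cut = 2.0 * np.sqrt(m_N**2 - m_pi**2 / 4.0)   # CM energy where the one-pion-exchange cut begins
print(2.0 * m_N - E_cut)                        # ~5.2 MeV below the NN threshold
print(m_pi**2 / (4.0 * m_N))                    # nonrelativistic estimate, also ~5.2 MeV
```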

It would be valuable to have quantization conditions that are valid on left-hand cuts, which could provide subthreshold information on the scattering amplitude. In fact, the recently-proposed method of Ref. [353] might already be applicable to some of these challenges. At this point, it is not clear if one could cast such a formalism in a universal form that may be applicable for arbitrary channels.

17.3 Two-Body Matrix Elements

Ultimately, the spectrum of the \(NN\) system serves as a first step towards the determination of more physically interesting quantities, including electroweak elastic and transition form factors and QCD contributions to processes that may provide smoking guns of BSM physics (e.g. neutrinoless double-beta decay). Many of these reactions may be constrained via the evaluation of matrix elements of external currents. With the aim of rigorously determining these matrix elements via LQCD, there has been significant progress towards providing a non-perturbative connection between finite- and infinite-volume few-body matrix elements [354, 355]. If we consider the simplest system, where the particles carry no relative angular momentum or intrinsic spin, the relation between the finite- and infinite-volume matrix elements of a local scalar current (\(\mathcal {J}\)) can be compactly written as [354, 355]

$$\begin{aligned} L^{3} \langle {P_{f},L}| \mathcal {J}(x=0) |{P_{i},L}\rangle = \bigg ( \mathcal {W}_\mathrm{df }(P_{f}, P_{i}) + \mathcal {M}(P_f^2) \cdot G(P_{f}, P_{i}, L) \cdot \mathcal {M}(P_i^2) \bigg ) \, \sqrt{ \mathcal {R}(P_{f},L) \mathcal {R}(P_{i},L) }, \end{aligned}$$
(16.7)

where G is a new finite-volume function that is closely related to the \(Z_{00}\) appearing in Eq. (16.6), \(\mathcal {R}\) is the so-called Lellouch-Lüscher factor [356, 357], \(\mathcal {M}\) is the purely hadronic two-body amplitude, and \(\mathcal {W}_\mathrm{df}\) is the desired infinite-volume matrix element. Although this formalism has not yet been implemented in a LQCD calculation, important checks have been performed on it, including consistency with perturbation theory, the Feynman-Hellmann theorem, and charge conservation [352, 358].

It is worth emphasizing that \(\mathcal {R}\) requires the evaluation of the derivatives of the scattering amplitude and the \(Z_{00}\) functions with respect to energy. The derivative of the scattering amplitude is not directly accessible via LQCD, and as a result one must resort to parametrizations of the amplitude. This, together with the additional explicit dependence on \(\mathcal {M}\) in Eq. (16.7), means that in order to obtain infinite-volume matrix elements of two-particle states, one requires tight and accurate constraints on the two-body spectrum and, subsequently, on the two-body scattering amplitudes.
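
The sketch below illustrates this point under simple assumptions: the amplitude is parametrized by a leading-order effective-range expansion (the parameter values, the mass, and the overall normalization convention are placeholders), and the energy derivative needed for \(\mathcal {R}\) is then evaluated from the parametrization by a finite difference.

```python
import numpy as np

m_N = 938.92                      # MeV, single-nucleon mass (illustrative)
a0, r0 = 5.0e-3, 1.0e-2           # MeV^-1: hypothetical scattering length and effective range

def amplitude(E_cm):
    """S-wave amplitude from the effective-range expansion, up to an overall
    convention-dependent factor."""
    p2 = (E_cm / 2.0) ** 2 - m_N**2          # CM momentum squared
    p = np.sqrt(p2 + 0j)                     # analytic continuation below threshold
    pcot = -1.0 / a0 + 0.5 * r0 * p2         # effective-range expansion
    return 1.0 / (pcot - 1j * p)

def dM_dE(E_cm, h=1.0e-3):
    """Central finite-difference estimate of dM/dE entering the Lellouch-Luscher factor."""
    return (amplitude(E_cm + h) - amplitude(E_cm - h)) / (2.0 * h)
```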

As already mentioned, this formalism has not been implemented in the study of \(NN\) matrix elements. Instead, the published results [306, 309, 311] have restricted their attention to systems that support bound states and relied on the fact that matrix elements of two-body bound states have exponentially suppressed finite-volume effects [352]. If there are no bound states, as indicated by improved spectroscopy calculations, then this approach for avoiding Eq. (16.7) is not valid. Furthermore, any uncontrolled systematic errors in the spectra will propagate into errors in \(\mathcal {M}\) and \(\mathcal {R}\), which can be large. In addition, the finite-volume matrix element on the left-hand side of Eq. (16.7) generally cannot be reliably extracted in a regime where the energy of the corresponding finite-volume state has not been reliably isolated. Although it has not been definitively demonstrated whether the published results for the \(NN\) matrix elements are contaminated by these uncontrolled systematics, this will need further investigation, and these calculations will need to be done using variationally optimized operators, as done in, for example, Ref. [359] for excited mesonic states.

For shallow bound states, like the physical deuteron, finite-volume effects cannot be ignored. Instead, one will need to use multiple volumes and/or total momenta to scan the pole region of \(\mathcal {W}_\mathrm{df}\). At the bound-state energy, this amplitude acquires an energy-dependent pole associated with the initial and final states coupling to the bound state [360]. From the residue of this pole, one can access the form factors of such a state.

To study the matrix elements of \(NN\) states, it will be necessary to generalize the formalism presented in Refs. [352, 358] to systems with non-zero intrinsic spin. Finally, as previously emphasized, the formalism discussed only supports currents that are local in time. In other words, it would not be suitable for, for example, the study of Compton scattering or double-beta decay. Such observables would need extensions of the formalism to accommodate the insertion of two currents separated by an arbitrary time. Efforts along these lines are under way [259, 361,362,363,364].

17.4 States Coupling to Three or More Nucleons

Among the more exciting prospects of the few-nucleon LQCD program is the possibility of constraining three-nucleon dynamics directly from QCD. The procedure for this closely follows that of the two-body sector. Namely, at present the most rigorous pathway requires the accurate determination of the finite-volume spectra of states with the quantum numbers of three nucleons using a large set of operators. The spectrum would then need to be analyzed with the extensions of the quantization condition to three particles in order to constrain the infinite-volume amplitudes. Once the amplitudes have been constrained, they could be analytically continued below threshold to determine the locations and residues of possible bound-state poles.

These studies are significantly more challenging than their two-body analogues for multiple reasons, including

  • there is a larger density of states,

  • for systems with two-body bound states, there can be multiple thresholds,

  • the number of allowed contractions generally grows with the number of hadrons,

  • the numerical cost of evaluating each contraction is generally larger,

  • the stochastic noise grows with the number of nucleons,

  • the quantization condition is more complicated to derive and implement,

  • the quantization condition depends on one-, two-, and three-body observables,

  • the infinite-volume amplitudes have a larger class of singularities.

The first five items indicate that the determination of finite-volume spectra is generally more complicated, while the remaining items indicate that the analysis of those spectra is also more challenging.

This program hinges on a non-perturbative formalism to relate the finite-volume spectra and the desired infinite-volume amplitude. There has been significant progress towards this goal [299, 365,366,367,368]. Although these formalisms have not been implemented in the study of nuclear states, they have been successfully implemented in the mesonic sector [369,370,371,372,373,374], where calculations are computationally more affordable.

While the formalism for three-nucleon systems in finite volume does not yet exist, there are published results for the three- (and four-) baryon spectra [302, 333]. There is now evidence that the \(NN\) (and generally, two-baryon) spectra for these results have significant, unaccounted-for systematic errors, as the NPLQCD collaboration first published binding energies of \(B_{NN}\approx 20\) MeV for both di-nucleon systems [289, 302], while in their most recent work using momentum-space creation operators they do not find evidence for deep bound states [331]. This is suggestive of an \(\mathrm {O}(20 \text { MeV})\) systematic uncertainty on the \(NN\) binding energy. Ref. [302] also quoted a \(^3\)He binding energy of \(B\approx 50\) MeV, which is 30 MeV below the quoted proton–deuteron breakup threshold. Assuming a 20 MeV systematic uncertainty on the deuteron binding energy, it is not unreasonable to also assume a similar or larger uncertainty on the gap from the three-nucleon state to this first open threshold. This is of the same order as the systematic uncertainty in the two-body sector, which weakens the claim that a bound \({{}^3\mathrm He}\) was found for these quark masses. As a result, it will be necessary to perform a variational analysis of the spectrum using a larger set of interpolating operators before concluding that \({}^3\mathrm He\) is indeed bound for these larger pion masses in the range \(m_\pi \sim \) 300–800 MeV.

In preparation for such studies, the formalism continues to be developed [375,376,377,378] to allow for increasingly complex three-body systems. In parallel with these efforts, toy-model investigations have continued, exploring nuclear-like theories that support two- and three-body bound states in finite and infinite volume [379, 380]. Ultimately, these formalisms will need to be extended to accommodate systems with intrinsic spin.

Although calculations of correlation functions with four and more nucleons have been performed, they have used a single local creation operator. The challenges discussed above for the \(NN\) sector are expected to be more difficult for four and more nucleons, and the corresponding calculation will be numerically more expensive. Given these two points and the lack of existing formalism to test the validity of the resultant lattice spectra, robust investigations of systems composed of four or more nucleons will have to wait.

Acknowledgements: JRG acknowledges support from the Simons Foundation through the Simons Bridge for Postdoctoral Fellowships scheme. ADH is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Contract No. DE-SC0012704 and within the framework of the Scientific Discovery through Advanced Computing (SciDAC) award “Computing the Properties of Matter with Leadership Computing Resources.” RAB is supported in part by U.S. Department of Energy Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab, and is partly supported by U.S. Department of Energy Contract No. DE-SC0019229. ANN is supported by the U.S. National Science Foundation CAREER Award PHY-2047185. The work of AWL is supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award No. DE-AC02-05CH11231.

18 Entanglement in Nuclear Structure by Caroline Robin

As prime examples of quantum many-body systems, atomic nuclei exhibit non-classical correlations, among which entanglement is certainly the most fascinating. This peculiar phenomenon, inherent to quantum mechanics, allows particles that have interacted in some way to remain correlated even when separated by arbitrarily large distances. Such non-local correlations play an important role in quantum communication and are the essence of quantum computing.

In order to characterize entanglement in tightly bound many-body systems, where particles are separated by short distances and have overlapping wave functions, one must consider the distinguishable or indistinguishable character of the components. While there appears to be a consensus on the definition of entanglement between distinguishable particles, an extension of this concept to systems of identical particles is difficult and is still subject to debate [381]. The issue comes from the fact that, because of their indistinguishability, single components cannot be accessed individually. It is thus unclear how to trace over one subsystem and determine reduced density matrices, which are the key elements to quantify entanglement.

Although possible treatments of two-identical-particle systems have been investigated [382], how to address particle entanglement in larger systems is not straightforward, and several points of view on the notion of entanglement itself, as well as on its characterization, have been developed in recent years [383]. One possible way around this issue has been to consider the Fock-space formulation of the many-body state and evaluate entanglement between modes rather than between particles. In this case, the subsystems are formed by groups of distinguishable single-particle states, or orbitals, so that the total Hilbert space has the required tensor-product structure allowing for partial traces and the calculation of entanglement measures.

Below we summarize different ways that have been explored to characterize entanglement in the structure of atomic nuclei, and discuss how these studies can not only lead to efficient ways of treating quantum correlations on classical computers, but can also provide valuable insights for designing quantum computations of nuclei. Finally, we discuss the possible fundamental role of entanglement in the description of nuclei and nuclear forces, and mention some problems and questions to be addressed in the future.

18.1 Entanglement in Nuclear Structure Calculations

The presence of entanglement is the reason why calculating nuclei, and quantum many-body systems in general, on classical computers is so hard. Since systems with large entanglement cannot be well approximated by separable states, they do not possess an efficient classical representation, which leads to the exponential scaling of the required resources with the number of particles. In this context, careful investigations and possible manipulations of entanglement structures can allow for more efficient calculation schemes. This idea is exploited by methods such as the density-matrix renormalization group (DMRG) or tensor networks, which are widely used in other fields and more recently have been developed in nuclear physics [384,385,386,387,388,389,390,391,392,393,394]. As nuclei are composed of two particle species (Z protons and N neutrons), various partitionings of the nuclear state can be defined to investigate different forms of entanglement. For example, Refs. [384, 385, 395, 396] made use of the natural bi-partitioning of the nuclear state to study entanglement between neutron and proton subsystems. Analyses of the singular-value decomposition of shell-model ground states showed an exponential fall-off of the eigenvalues of the reduced density matrix (singular values), in particular in spherical nuclei with \(N>Z\), so that nuclear states could be represented by only a few (correlated) proton and neutron states.
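
As a minimal sketch of this type of analysis (the amplitude array and the truncation criterion are hypothetical, purely for illustration), the Schmidt coefficients of the proton-neutron bipartition follow from a singular-value decomposition of the coefficient matrix of the state:

```python
import numpy as np

def schmidt_analysis(C):
    """Schmidt coefficients and entanglement entropy for a bipartitioned state.

    C[p, n] holds the amplitudes of the state over proton configurations p and
    neutron configurations n (a hypothetical input from a shell-model calculation).
    """
    C = C / np.linalg.norm(C)                      # normalize the many-body state
    s = np.linalg.svd(C, compute_uv=False)         # Schmidt coefficients
    probs = s[s > 1e-14] ** 2
    entropy = -np.sum(probs * np.log(probs))       # von Neumann entanglement entropy
    return s, entropy

# rank needed to retain, e.g., 99.99% of the norm:
# s, S = schmidt_analysis(C); rank = int(np.searchsorted(np.cumsum(s**2), 0.9999)) + 1
```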

A few investigations of mode correlations and entanglement have now also been performed in the Lipkin model [397], and in two-nucleon [398] and many-nucleon systems [399, 400]. For example, Ref. [400] analyzed the entanglement properties of several single-particle bases within the ground state of helium isotopes, and found a clear link between the convergence of the ground-state energy (with respect to the size of the model space) and the entanglement structures. In particular, natural orbitals derived from a variational principle, which displayed the fastest convergence of the energy, exhibited much more localized structures of entanglement within the basis, as compared to, for example, harmonic-oscillator or Hartree-Fock orbitals, and minimized the total entanglement content of the nuclear ground state. Measures of entanglement and correlations appeared concentrated around the Fermi level, and showed that this basis also effectively decouples the active and inactive single-particle spaces. Analysis of the two-nucleon mutual information in \(^6\)He showed that the transformation of single-particle orbitals led to an emergent picture of two interacting p-shell neutrons decoupling from an \(^4\)He core, thus driving the wave function to a core-valence tensor-product structure. While this study was an a posteriori investigation of entanglement from no-core configuration-interaction calculations, one could use the structured and minimized entanglement patterns of the variational natural basis to design more efficient calculation schemes. One intuitive future step in this direction would be to combine such orbital optimization with DMRG, as has already been explored in quantum chemistry [401, 402].
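
The mode-entanglement measures referred to here are compact to evaluate. The sketch below assumes a number-conserving state, so that the one-orbital reduced density matrix is diagonal in the occupation, and takes the orbital occupation numbers and precomputed two-orbital entropies as hypothetical inputs in order to form single-orbital entropies and the mutual information \(I_{ij} = s_i + s_j - s_{ij}\):

```python
import numpy as np

def single_orbital_entropy(n):
    """s_i = -n_i ln n_i - (1 - n_i) ln(1 - n_i) from occupation numbers n_i."""
    n = np.clip(n, 1e-14, 1.0 - 1e-14)
    return -(n * np.log(n) + (1.0 - n) * np.log(1.0 - n))

def mutual_information(n, s2):
    """I_ij = s_i + s_j - s_ij, with s2[i, j] the two-orbital entropies (given)."""
    s1 = single_orbital_entropy(n)
    I = s1[:, None] + s1[None, :] - s2
    np.fill_diagonal(I, 0.0)          # mutual information is defined for i != j
    return I
```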

18.2 Entanglement to Guide Quantum Computations of Nuclei

While tremendous progress has been and can still be made in the classical computation of nuclei, quantum computers in principle offer a much more natural way to solve the quantum many-body problem [403] and, ultimately, could allow for exact treatments of systems beyond the limits of what could ever be achieved on classical machines. Thus, in the past decade a huge effort has been deployed to develop many-body quantum computations, and the first proof-of-principle calculations of few-nucleon systems have been performed on quantum devices [404,405,406,407,408]. These pioneering studies have so far been limited to very few qubits. Quantum-computing hardware is, however, progressing quickly, and one can expect machines with several hundred or a few thousand qubits to become available in the near future. To take advantage of these developments, it is of the utmost importance to develop clever algorithms that limit the error rate as much as possible. In this respect, understanding the entanglement structure of the system that will be mapped onto the quantum machine is crucial, and a careful organization and possible minimization of such structures could allow for smaller numbers of entangling operations on the device, potentially expanding the reach of quantum calculations. For example, studies of mode entanglement can be particularly useful for developing algorithms that map modes to qubits. In this context, the natural localization of entanglement into decoupled subspaces provided by the variational natural basis could be exploited to design hybrid classical-quantum algorithms on present and near-term devices possessing limited connectivity. In particular, the weakly entangled subspaces could be treated classically, while the strongly entangled part of the Hilbert space would be handled by the quantum device.

18.3 Discussion and Questions to Address in the Future

The investigation of entanglement in atomic nuclei is overall a rather new, and thus exciting, line of research. Beyond the computational advantages that an entanglement-based description of nuclear systems could bring, several studies now point to the fact that entanglement organization and minimization could in fact be fundamental to the description of matter [409]. In particular, it was revealed that entanglement suppression is connected to emergent symmetries of the strong interaction at low energy, suggesting that entanglement may be a basic notion related to the hierarchy of nuclear forces and could characterize a new power-counting scheme [23, 410]. Recently, singular-value decompositions of two-nucleon interactions showed that low-rank truncations can be safely applied to non-local potentials, and that the singular-value content is mostly maintained during similarity renormalization group (SRG) evolution of these potentials [393, 394], suggesting that the entanglement minimization could be preserved in the renormalization flow. Overall, it could be that entanglement minimization is the signal of a relevant description for a given energy scale. From the aforementioned nuclear-structure studies, it seems that this could also be manifest at the many-body level, although more investigations are needed to confirm this statement. In particular, it would be interesting to investigate whether, similarly to the transition from QCD to nucleons [23], an entanglement suppression also appears when transitioning from a description in terms of interacting nucleons to a regime where collective vibrations or rotations become relevant as degrees of freedom.
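
To make the low-rank statement concrete, the following sketch truncates the singular-value decomposition of a momentum-space potential matrix and measures the error of the truncation; the matrix V and the chosen rank are generic placeholders rather than any specific interaction.

```python
import numpy as np

def low_rank(V, rank):
    """Best rank-`rank` approximation of the matrix V in the least-squares sense."""
    U, s, Vt = np.linalg.svd(V)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# relative Frobenius-norm error of a rank-r truncation of V[k, k']:
# err = np.linalg.norm(V - low_rank(V, r)) / np.linalg.norm(V)
```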

Overall, the use of entanglement as a driving principle for the development of nuclear forces and many-body methods appears to be a promising path to keep exploring. In this context, re-interpreting existing techniques from an entanglement point of view can also be enlightening. In principle, density functional theory tells us that the exact energy of the interacting system can be obtained from a single Slater determinant (SD) [411], which, by definition, is separable and thus has no entanglement (beyond anti-symmetrization). The situation is similar in the in-medium SRG method [412], which shifts the complexity of the nuclear state to the nuclear Hamiltonian via continuous unitary transformations of the latter. The exact energy (and other observables) can then also be obtained from an SD. On the other hand, these separable states do not characterize the exact wave function of the system. Thus, it may not be straightforward to quantify entanglement directly in such approaches, and one may need to devise different ways to characterize it.

Several other fundamental problems as well as possible applications are to be explored in the future. These include studies of various forms of entanglement, such as bi-partite and multi-partite entanglement, in ground and excited states of diverse types of nuclear systems, from light to mid-mass and heavy nuclei, both near and far from stability. Such works would lead us towards a broader and deeper understanding of entanglement in nuclei and its evolution along the nuclear chart, and could shed light on relations to symmetry breaking and phase transitions, as well as possible links with the emergence of new degrees of freedom.

A more conceptual issue that should also be clarified is how to characterize entanglement between individual nucleons (as opposed to modes) in a way that is independent of the basis. This type of entanglement could potentially provide better insight into physical phenomena such as pairing or clustering, and could perhaps reveal possible experimental signatures of entanglement in nuclei.

19 A Perspective on Quantum Information and Quantum Computing for Nuclear Physics by Martin J. Savage

This is my contribution to the Panel Discussion on 6 May 2021 in the INT workshop INT 21-1b related to quantum information sciences (QIS) for low-energy nuclear physics (NP). Each Panelist provided initial comments to start the discussion (with topics that we distributed among ourselves to avoid duplication). The following text is an approximate transcription of my remarks regarding things to keep in mind when considering how to effectively transfer QIS techniques into and out of NP (theory) research. My remarks were mainly reflective in nature, outlining some of the means by which modern quantum field theory (QFT), the Standard Model, effective field theory (EFT), quantum chromodynamics (QCD) and lattice QCD (LQCD) techniques and technologies became integrated into the NP community.

Given the multi-disciplinary nature of QIS research, and the many points of connection with NP research, it is helpful to understand potential paths for integrating research in QIS and nuclear theory at the interface to create an effective, robust, and mutually beneficial research program. To provide some insight and guidance about handling the integration of relevant components of QIS research into NP and other domain sciences, and vice versa, it is worth reflecting on the integration of QFT, the Standard Model, EFT, QCD, and LQCD techniques and technologies into the NP community. While QCD was discovered in the early 1970s, and LQCD soon thereafter, its theoretical footprint essentially remained in the domain of high-energy physics (HEP) for more than 15 years. This was in part because of the success of perturbative QCD in systematically describing electroweak processes using EFTs and the renormalization group, and the lack of direct impact on NP at that time beyond hadronic modeling. Despite a rapidly growing experimental program probing QCD in NP, QFT was not universally considered central to nuclear-theory research even well into the 1990s. Generally, the existing theoretical tools were integrated into the NP portfolio by a modest number of NP theorists re-aligning their research efforts and re-tooling (to some extent), and by hiring early-career scientists with PhDs in particle theory, particularly phenomenology, with interest in low-energy problems and electroweak processes. This recruitment was remarkably successful; it coincided with one of the swings toward string theory in particle theory, which provided a significant pool of talent for NP to recruit from, and it has contributed in part to present-day cutting-edge nuclear-theory activities.

LQCD was somewhat delayed in its migration into NP despite its obvious future role, again because of its significant impact on the HEP experimental program, and because of the challenges faced in computing the properties of even one nucleon with precision. This situation evolved during the 2000s, with the NP community spawning further single-nucleon and multi-nucleon LQCD efforts and collaborations, and hiring a number of junior scientists trained in LQCD within HEP, to utilize the rapidly increasing classical computing resources. This was enabled and welcomed by the USQCD collaboration, which represented all LQCD practitioners in the U.S. and coordinated developments with SciDAC funding and HPC.

In addition to increased funding for efforts in new directions in local research groups at universities and national laboratories, and to national summer schools and conferences, major community-driven vehicles for enabling a deeper integration of new ideas and concepts into and out of nuclear theory were established around 1990, with the creation of the Institute for Nuclear Theory (INT) in Seattle, USA, embedded in a university physics department, followed a few years later by the European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT*) in Trento, Italy, connected with the local university physics department. This deliberate co-location provides a “low potential barrier” to engaging graduate students and postdocs in an immersive environment with a large and rotating selection of the world’s leading theorists at all career stages. Currently, ECT* is embracing QIS as an area of importance for future NP research, building upon its prior efforts in this area and also in HPC.

These discussions and examples of a previously successful integration pathway provide guidance for considering how to accomplish an analogous, mutually beneficial transfer of QIS techniques and technologies into and out of NP research. The situation for QIS has important differences, one of them being the major role of technology companies and startups, with the quantum economy expected to follow a path similar to that of the silicon economy, driven by Moore’s Law and financially supported by investors toward a significant fraction of a trillion dollars in the future. Having said this, there is no obvious reason to “re-invent the wheel”, and the NP community has the necessary institutional knowledge and infrastructure to rise and meet this challenge. The InQubator for Quantum Simulation (IQuS) was recently established with the goal of enabling this QIS-NP integration.

20 A Perspective on the Future of High-Performance Quantum Computing by David J. Dean

When contrasting the evolution of quantum computing with the advent of microchip technology, one sees interesting similarities. Let us start with transistors. In 1925, Julian Lilienfeld filed an early patent for the design of a field-effect transistor [413], marking the conceptual start of the microchip revolution. It took several years of development to move from concept to the first working transistor (its birthdate is given as 23 December 1947), for which the team of John Bardeen, Walter Brattain, and William Shockley won the Nobel Prize in 1956. Robert Noyce, who received the first integrated-circuit (IC) patent in 1961 (filed in 1959 [414]), worked at Shockley’s semiconductor laboratory before co-founding Fairchild Semiconductor, an early microchip production firm, and later founded Intel with Gordon Moore. Gordon Moore is credited with realizing that the number of transistors in a dense integrated circuit doubles about every 2 years (Moore’s law [415]). Indeed, from the Intel 4004 of 1971, with 2250 transistors on the chip, to today’s chips with billions of transistors, this observation has held true. The amazing advances made in computer technology have revolutionized every aspect of life and have had tremendous impacts on our approach to the sciences through the development and application of computational science across many domains.

Quantum computing is following a similar evolutionary path. We often credit Richard Feynman with the idea, see e.g. [403], of simulating quantum systems with quantum computers. Furthermore, the thermodynamics of computing (and the potential reversibility of quantum computing) were being discussed contemporaneously by Toffoli [416] and Bennett [417]. Influenced by this work on reversible computing, Benioff [418] developed quantum-mechanical models of computation. These tremendous leaps in the theoretical understanding of quantum information science and quantum computation mark the start of the current quantum computing era.

Of course, a theory does not mean a device that can compute. The next steps in the evolution of quantum computing required the hard work of developing and understanding how to make quantum circuits in the laboratory [419]. Many laboratories and researchers from across the world have contributed the advances required to make quantum computing a laboratory reality [420, 421]. Furthermore, these advances are leading to significant technical investments from different industrial sectors and are generating many new start-up technology companies. For example (and this is not an exhaustive list), progress is being made in building quantum computers from superconducting qubits (represented by work at IBM, Google, and Rigetti), trapped ions (represented by work at IonQ and Honeywell), and optical computing. In each case, the technical difficulty comes with the scaling of qubits and with error control. Nevertheless, the field continues to make quick progress in implementation within these technologies.

The US government has been keenly aware of the need to invest in quantum computing for several years. Starting a completely new industry plays into this need, as does the threat of competition from across the world [422]. The US Department of Energy began to formulate plans [423,424,425,426] for research in the area that would directly impact both coherence times and quantum-computing scalability. At the same time, the DOE began funding research in quantum algorithm development and in use cases that are germane to the DOE’s mission in scientific R&D. The reports (and others that I have not listed), coupled with the initial base programmatic research, formed the basis for DOE’s role in the National Quantum Initiative (NQI) Act, which was signed into law on December 21, 2018. The NQI Act establishes federal coordination among various US government agencies pursuing quantum R&D and provides funding for NQI Research Centers to be established by NSF, DOE, and the DOD. The DOE Office of Science funds five National Quantum Information Science Research Centers, each operating at $25M/year.

The Quantum Science Center (QSC), for which I was the PI until January 2022 when I transitioned to JLab, is dedicated to overcoming key roadblocks in quantum-state resilience, quantum-state controllability, and ultimately the scalability of quantum technologies. This mission is being achieved by integrating the discovery, design, and demonstration of revolutionary topological quantum materials, algorithms, and sensors, catalyzing the development of disruptive technologies. The QSC also develops the next generation of scientists and engineers through the active engagement of students and postdoctoral associates in research and professional development activities. Furthermore, by closely coordinating with industry, the QSC is strongly coupling its basic science foundation and technology development pathways to transition new applications to the private sector to make quantum technologies a reality. Specifically, the QSC is organized into three scientific thrusts. First, the QSC is addressing the fragility of quantum states through the design of new topological materials for quantum information science (QIS). QSC researchers focus on the design, synthesis [427], and characterization of topological superconductors and quantum spin liquids, both of which are candidate materials for discovering non-abelian quasiparticle states (anyons) that promise to yield quantum computing gates that are protected from environmental noise, thus increasing the robustness of quantum computation, see e.g. [428]. Ultimately, the QSC will demonstrate controlled interactions between these topological states, or Majorana zero modes, to realize scalable topological quantum computation. Recent work [429] includes the observation of quantum entanglement phenomena in a triangular antiferromagnet using neutron scattering. The material, KYbSe2, is a quantum spin liquid candidate which should exhibit anyonic behavior.

Second, the QSC is developing scalable algorithms and software to exploit the new physics enabled by topological systems. The QSC develops and tests these algorithms on several noisy intermediate-scale quantum platforms to characterize their behaviors and to devise algorithms that mitigate the noise [430]. In other computational work, QSC researchers presented and demonstrated entanglement-enhanced methods that can be used to learn an entire unitary rather than just its action on a low-lying subspace. The result is a framework for quantum machine learning of continuous variable (for instance, photonic) quantum systems capable of exponentially reducing input-output state training resources [431].

Third, the QSC is designing new quantum devices and sensors to unambiguously detect topological quasiparticles and to explore meV regions of dark-matter phase space. To manipulate Majorana states, one must first unambiguously identify them. To that end, the QSC is developing new capabilities for ultralow-noise, nondestructive local sensing of electromagnetic fields. These techniques are also being developed to detect, for the first time, “light” dark matter, one of the theoretically favored candidates for dark matter, which constitutes 85% of the matter in the universe. Recent results include the discovery of room-temperature single-photon emitters in SiN, which have the potential to enable direct, scalable, and low-loss integration of quantum light sources with well-established photonic on-chip platforms [432].

The QSC will transition discoveries in fundamental QIS through a progressive series of capability demonstrations that assess the readiness of quantum technology. The QSC method of co-design actively engages a broad range of researchers directly involved in pursuing an end goal to design and implement solutions. Thus, the co-design processes generate the scientific integration of QSC-wide research projects toward a common programmatic outcome, and each co-design process provides feedback across research projects to focus innovations on new quantum science and technologies. During Year 1, the leads for the co-design processes identified common interfaces between projects and adopted quarterly milestones for Year 1 activities. The co-design process for topologically protected quantum information focused in Year 1 on the co-design of materials that host anyon physics. The co-design process for quantum simulation demonstrates scientific applications by running tailored quantum algorithms on quantum-computing hardware. The co-design process for quantum sensing in real-world applications demonstrates quantum sensors developed for materials-science characterization and dark-matter detection.

Nuclear theorists, working with quantum-computing colleagues, pursued several early calculations on quantum computing platforms. For example, early cloud-based quantum computers were used to calculate the binding energy of the deuteron [404]. Furthermore, in these early explorations, researchers also studied the dynamics of the Schwinger model using quantum computers [433]. These early works laid the groundwork for collaborations of nuclear theorists and QIS theorists working at the interface of the two fields. Indeed, one recent example of progress coming from the QSC involves describing neutrino oscillations at high neutrino density [434]. These examples indicate how theoretical nuclear physics research is incorporating quantum-computing technology to solve interesting problems. It is still early days, but continuing collaborations, coupled with efficient utilization of increasingly powerful quantum computers, will enable considerable progress in the coming decade.

I can draw conclusions from the progress made during the first year of the QSC, and from the exciting work being performed across the quantum computing community. This could be considered a prediction of the future. As Yogi Berra said, ‘It’s tough to make predictions, especially about the future’. Nevertheless, I will end this brief discussion with some predictions for the next decade of QIS research, particularly as it pertains to quantum computing.

  • Errors will be addressed in quantum computing. Materials that make up quantum computers (particularly superconducting materials) will be engineered to the point where the materials are not the primary sources of loss of coherence. Longer coherence times will enable gate depth to increase, thus opening possibilities for tackling larger problems. Furthermore, algorithms that mitigate error will continue to develop and improve. We should remember that HPC derives its power to solve increasingly difficult scientific problems from both hardware and algorithm improvement. A similar story is emerging in quantum computing.

  • Quantum ‘supremacy’ will be claimed several times before it truly happens. Evidence suggests that the Google supremacy claim [435] rested on the use of a less than effective algorithm for the HPC comparison [436].

  • While leadership computing may hit a plateau in implementation, staying at the exascale for several years and likely moving away from the power-law progression that has characterized the top500 list for decades, quantum computers will show significant advances in scientific reach over the next decade. The scale of R&D funding in quantum computing today is enormous, with roughly $24.4B having been spent globally thus far. Trends suggest that the rate of R&D expenditures will continue to increase over the next decade.

  • Anyons will be unambiguously detected in two-dimensional superconductors or quantum spin liquids, and the manipulation and braiding of several anyons to produce two-qubit gates will occur within the next decade. The QSC is built on this scientific goal, and I look forward to the day when we can say that we succeeded.

  • Nuclear theorists, working with QIS experts, will develop research problems that only a quantum computer can solve, and that are relevant for advances in nuclear physics.

  • Quantum computing technology will eventually be folded into HPC as an accelerator technology. Recent press releases indicate that this development is already being pursued in Europe.

Without a doubt, scientists can tackle the challenges that remain in quantum computing. It will take years of sustained R&D and multidisciplinary cooperation to get there, and that is as exciting as the future that quantum computing will usher in.