1 Introduction

Coarse graining is a form of idealization in modern physics. One traditional issue about coarse graining in the philosophy of physics is the investigation of irreversibility: although fundamental theories, such as statistical mechanics, are time-reversible, macroscopic phenomena exhibit time irreversibility. To fill this gap, Gibbs (1902) proposed an approach that appeals to coarse graining, and the roles and validity of this strategy have been investigated from a philosophical perspective (Ridderbos, 2002; Robertson, 2020; te Vrugt, 2021). The renormalization group (RG) method is another philosophically investigated topic concerning coarse graining (Batterman, 2002; Batterman and Rice, 2014; Rodriguez, 2021; Wu, 2021): the RG method includes coarse graining, which plays an important role in explaining critical phenomena. However, coarse graining does not always play a substantial explanatory role. In classical mechanics, coarse graining provides a link between the description of a rigid body as a set of point particles and its description as a continuum, but this change of description does not give rise to any novel property or entity. How, then, can we distinguish the forms of coarse graining? This study aims to reveal the features of coarse graining and to draw philosophical implications from its analysis.

In modern physics, coarse graining is an indispensable method for establishing a link between microscopic and macroscopic descriptions. Beyond irreversibility, the RG method, and classical mechanics, which are examined in this article, several other cases in physics include this procedure. A typical instance is fluid dynamics, where a fluid composed of elementary particles is treated as a continuum through coarse graining. In electromagnetism, coarse graining is used to derive Maxwell’s equations in matter from Maxwell’s equations in a vacuum. In quantum gravity, coarse graining is part of a procedure that gives rise to space-time; for instance, the multi-scale entanglement renormalization ansatz has a structure similar to renormalization in quantum field theory and includes a coarse-graining procedure (Vidal, 2007). Coarse graining is thus used in several fields of modern physics, and investigating it contributes to a comprehensive understanding of explanation in modern physics.

In order to analyze coarse graining, this article focuses on the notion of emergence. When a composite system exhibits a feature that is novel relative to its components, we have a case of emergence. It has been pointed out that some cases of coarse graining are instances of emergence: coarse graining is a procedure that neglects details, and this idealization indicates a kind of autonomy of the composite from its components. Emergence in physics has been a traditional topic in the philosophy of science, and several definitions have been proposed. The philosophical analysis of emergence thus provides a perspective from which to investigate the features of some types of coarse graining.

In Section 2, some important definitions of emergence in physics are summarized. Among them, De Haro (2019) recently provided a more sophisticated definition, which reveals the importance of investigating coarse graining in physics. In Section 3, cases of coarse graining in physics are considered: in particular, the RG method, irreversibility, and the rigid body. In Section 4, the distinction between the types of coarse graining is established and the philosophical implications of this distinction are considered.

2 Overview: Emergence in philosophy of physics

The notion of emergence is a traditional topic in the philosophy of science (Nagel, 1979). In the philosophy of physics, Batterman (2002) and Butterfield (2011a, 2011b) propose general definitions of emergence and reduction. Their definitions are mutually incompatible and face some problems, as noted by Franklin and Knox (2018). More recently, De Haro (2019) has offered a general framework for emergence in physics, which overcomes the problems of the previous definitions and distinguishes ontological from epistemological emergence. In this section, in addition to an overview of the definitions of emergence in physics, the importance of investigating coarse graining is presented.

Batterman (2002) defines emergence as a failure of reduction in his sense. Batterman’s definition of reduction in physics is as follows:

A “more refined” theory \(\mathcal {T}_{f}\) corresponds to a “coarser” theory \(\mathcal {T}_{c}\) as some fundamental parameter (call it 𝜖) in the finer theory approaches a limiting value. Schematically, reduction in physics can be represented as follows:

$$ \lim_{\epsilon\rightarrow 0}\mathcal{T}_{f} = \mathcal{T}_{c}. $$

Only if the limit is regular can this limiting relation be called a reduction (Batterman, 2002, p. 18).

If the limit is not regular, that is, if it is singular, then the case is not reductive but emergent. The relationship between Newtonian mechanics and special relativity is an example of reduction in this sense. On the other hand, the rainbow and phase transitions are instances of emergence, and Batterman argues that “the novelty of these emergent properties [rainbow and phase transitions] is ⋯ a result of the singular nature of the limiting relationship between the finer and coarser theories that are relevant to the phenomenon of interest” (Batterman, 2002, p. 121, brackets added). In this manner, Batterman defines emergence as a limit-based notion.

Batterman’s definition, which appeals to mathematical limits, provides a criterion for distinguishing between emergence and reduction but does not characterize the meanings of emergence and reduction. Franklin and Knox (2018) argue that Batterman’s definition makes “the use of an asymptotic limit look mysterious at the lower level” (Franklin & Knox, 2018, p. 69). In addition, in asymptotic reasoning, finite systems are represented as infinite systems, and such cases are counted as emergence; however, it is difficult to find any essential connection between this kind of idealization and emergence (Bangu, 2015, p. 162; Franklin & Knox, 2018, p. 75).

Butterfield (2011a, 2011b) provides another definition of emergence, which differs from Batterman’s in that emergence is compatible with reduction, understood as a logical or formal notion. More precisely, Butterfield’s reduction between two theories is a logical relationship between their sets of propositions. On the other hand, Butterfield defines emergence as something robust and novel:

  • Robustness: something like: the same for various choices of, or assumptions about, the comparison class

  • Novelty: something like: not definable from the comparison class (Butterfield, 2011a, p. 921)

The notion of emergence in his sense is relative to the comparison classFootnote 1. For instance, an entangled state demonstrates novelty and robustness: it violates Bell’s inequality, which is a novel and robust property compared with its component states (Butterfield, 2011a, p. 955). According to his definition, the notion of emergence is conceptual.

In Butterfield’s framework, emergence is neither a logical nor a formal notion, and it is compatible with reduction. On this view, taking a mathematical limit can be regarded as a deduction, despite the appearance of novelty and robustness. Butterfield summarizes the morals of emergence in terms of its relationship with reduction as follows:

  1. Deduce: emergence is compatible with reduction. For example, considering \(N\rightarrow \infty \) enables us to deduce novel and robust behavior.

  2. Before: emergence can occur before getting to the limit, that is for finite N. (Butterfield, 2011b, p. 1069)

The first moral implies the compatibility of reduction and emergence in the case of \(N\rightarrow \infty \). The second moral is that emergence occurs within finite systems. Butterfield distinguishes the cases \(N=\infty \) and \(N\rightarrow \infty \): emergence in physics appears in \(N\rightarrow \infty \) but not in \(N=\infty \). Therefore, emergence does not require an ideal system whose number of particles N is infinite, and taking a limit is a mathematical tool for deriving emergent properties. The compatibility results from his definitions of emergence and reductionFootnote 2. In this manner, Butterfield distinguishes the notion of emergence from mathematical limits and avoids the problems of Batterman’s approach.

However, even on Butterfield’s definition, the meaning of novelty is ambiguous. As Franklin and Knox point out, “the wide applicability of this definition lies in its lack of specificity. It provides the scaffolding for a full account of emergence; an analysis of novelty and robustness is needed to finish the construction” (Franklin & Knox, 2018, p. 68). Because Butterfield defines the notion of emergence conceptually, the meaning of indefinability as a condition of novelty remains unclear. Another problem with his approach lies in the distinction between ontological and epistemological emergence (De Haro, 2019, p. 38): Butterfield’s definition decides whether a given case is emergent, but fails to deal with the variants of emergence.

Franklin and Knox (2018) aim to characterize the notion of emergence in a different manner, through an analysis of phonons, which they take to be ontologically emergent in the way photons are relative to the quantum field. They argue that both Batterman’s and Butterfield’s definitions of emergence fail to capture the case of phonons as an instance of emergence: while those definitions are implicitly or explicitly based on limits, the emergent character of phonons does not require limits. The novel explanatory power of phonons arises from a change to a small number of variables and constitutes novelty relative to the underlying crystal structureFootnote 3.

This approach, which characterizes novelty as explanatory novelty, is meant to establish that the case of phonons is ontological emergence. De Haro summarizes that the novelty in Franklin and Knox’s account rests on a claim about phonons: “[T]he change of variables is explanatory powerful” (De Haro, 2019, p. 38), and this claim entails explanatory novelty. However, as De Haro argues, a change of variables does not always imply novelty. Admittedly, some novel properties arise from a change of variables, but the relationship between the notion of emergence and the change of variables is as ambiguous as in the limit-based approach. Moreover, novel explanatory power does not immediately imply ontological emergence: while Franklin and Knox argue that phonons are a case of ontological emergence, explanatory novelty is insufficient to establish this claim.

De Haro (2019) provides a general framework for emergence (see Fig. 1). In this framework, he focuses not only on the relationships between theories but also on their domains. There is a higher, “top” theory \(T_{t}\) and a lower, “bottom” theory \(T_{b}\), with domains of application \(D_{t}\) and \(D_{b}\), respectively; a domain \(D\) is “a part of the empirical world” (De Haro, 2019, p. 2). Scientific models, such as the Ising model, are candidates for \(T_{t}\) and \(T_{b}\). The relationship between \(T_{t}\) and \(T_{b}\) is called a link, a physically motivated mathematical map that may include approximations, and the relationship between a theory and its domain is called an interpretation, that is, a map \(i: T \rightarrow D\). De Haro’s definition of emergence is as follows:

We have emergence iff two bare theories, \(T_{b}\) and \(T_{t}\), are related by a linkage map, and if in addition the interpreted top theory has novel aspects relative to the interpreted bottom theory (De Haro, 2019, p. 10).

Fig. 1 Ontological emergence in De Haro’s framework (De Haro, 2019, p. 15)

On the other hand, if \(i_{t}\circ \text{link} = i_{b}\), that is, if \(D_{t} = D_{b}\), the case is one of epistemological emergence. In the case of phonons, the change of variables establishes a link between the two models: one model includes phonons and the other does not, but the domains of application of both models are the same.

Regarding epistemological emergence, De Haro argues that mere coarse-graining entails epistemological, but not ontological, emergence (De Haro, 2019, p. 16). Roughly speaking, coarse graining refers to a procedure that dismisses some details, and such a procedure does not always entail ontological emergence. On the other hand, he admits that the RG method, which includes a coarse-graining procedure, is a case of ontological emergence.

De Haro’s characterization of emergence overcomes the problems faced by the previous approaches. First, De Haro does not appeal to mathematical limits as a condition for emergence; admittedly, in some cases the link includes mathematical limits, but the notion of emergence does not depend on them. Second, novelty is defined, in ontological emergence, as a difference between the domains of application of the theories and, in epistemological emergence, as a difference in the interpretations that connect the theories with their domains; the meaning of novelty is thus clear. Third, his framework clearly distinguishes ontological from epistemological emergence. In this sense, De Haro provides a more sophisticated definition of emergence.

In sum, previous studies that attempt to characterize emergence have addressed the following questions.

Q1:

What kinds of relationships imply emergence?

Q2:

What is the meaning of novelty that characterizes the notion of emergence?

Q3:

How can we distinguish ontological and epistemological emergence?

[Q1] Batterman (2002) attempts to answer this question by appealing to mathematical limits. Franklin and Knox (2018) point out that the change of variables causes emergence. [Q2] Butterfield (2011a, 2011b), Franklin and Knox (2018), and De Haro (2019) define novelty as a condition of emergence. [Q3] De Haro (2019) argues that his framework distinguishes between epistemological and ontological emergence and avoids the drawbacks of the previous attempts.

De Haro’s framework is a more sophisticated characterization of emergence. However, the notion of coarse graining in it remains ambiguous: as mentioned above, mere coarse-graining results in epistemological emergence, but some coarse graining, such as renormalization, is a case of ontological emergence. Given that providing a criterion for distinguishing ontological from epistemological emergence is a main motivation for De Haro’s framework, and that this criterion is its advantage over the previous literature, the notion of coarse graining calls for more explication.

3 Substantial and mere coarse-graining in physics

Whereas some instances of coarse graining are mere distortions and can be regarded as mere coarse-graining, others play greater explanatory roles and can be called substantial coarse-graining. To explicate the explanatory features of coarse graining and distinguish its variants, this section considers cases of coarse graining. First, the RG method is considered; it is regarded as an instance of emergence and includes a coarse-graining procedure. Second, the Gibbsian approach to deriving irreversibility in statistical mechanics is examined. The third case is the rigid body: in classical mechanics, coarse graining is used to explain a rigid body. This is an elementary case in physics, but it has not been much examined in the philosophy of science. In the cases of the RG method and the Gibbsian approach, coarse graining is substantial because it is required to explain important physical properties. In the case of the rigid body, by contrast, coarse graining merely provides a different description of the same target system and is therefore mere coarse-graining.

Coarse graining is, roughly, a procedure that lowers the resolution; its purpose is to reduce, but not eliminate, information. In this article, coarse graining refers to a transformation of the description of a system from a small-scale description to a large-scale one. In the small-scale description, the system is described in terms of smaller components and in greater detail; in the large-scale description, the components of the same system are larger and fewer. One way to implement a coarse-graining procedure is to average over a small volume, as in the Gibbsian approach (Zeh, 2007, p. 53).
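
To fix ideas, the following toy sketch implements coarse graining as averaging over cells. The one-dimensional “field”, its sample count, and the cell size are hypothetical choices for illustration only, not drawn from any of the cases below.

```python
import numpy as np

# A small-scale description: 1000 samples of a noisy one-dimensional field.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1000)
fine = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=x.size)

# Coarse graining: average over each cell of 50 samples. The large-scale
# description has fewer, larger components; information is reduced, not erased.
cell = 50
coarse = fine.reshape(-1, cell).mean(axis=1)

print(fine.size, "->", coarse.size)   # 1000 components -> 20 components
```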

3.1 Minimal Model and RG

As mentioned above, the RG method in statistical physics includes a coarse-graining procedure and has been regarded as a case of ontological emergence (De Haro, 2019; Morrison, 2012). This section presents the features and roles of coarse graining in the RG method. Batterman and Rice (2014) point out a key feature of the RG method on the basis of the notion of a “minimal model”Footnote 4, and this notion provides a perspective for understanding the features of coarse graining in the RG method.

The RG method comprises three main processes: 1. RG transformations, 2. RG flow and finding the fixed point, and 3. a scaling analysis (Takahashi and Nishimori, 2017; Wu, 2021).

  1. The RG transformation is composed of three mathematical processes. The first is (a) a partial reduction of the degrees of freedom. Consider a scale-transformation parameter b > 1 that characterizes the coarse graining. Through this transformation, the smallest length scale a becomes ba and the maximum wavenumber (cutoff) Λ becomes Λ/b. The second is (b) the scale transformation \(\textbf {r}\rightarrow \textbf {r}^{\prime }=\textbf {r}/b\) and \(\textbf {k}\rightarrow \textbf {k}^{\prime }=b\textbf {k}\), where r is the spatial coordinate and k is the wavenumber. Third, through processes (a) and (b), the coupling constant K that appears in the partition function transforms into \(K^{\prime }\), such that

    $$ \begin{array}{@{}rcl@{}} K\rightarrow K^{\prime}=\mathcal{R}_{b}(K). \end{array} $$
    (1)

    These transformations \(\mathcal {R}_{b}\) form a semigroup: \(\mathcal {R}_{b_{1}}\circ \mathcal {R}_{b_{2}}=\mathcal {R}_{b_{1}b_{2}}\), with no inverse transformation.

  2. This transformation (Eq. (1)) allows us to depict the RG flow. When each transformation is infinitesimal, a vector is defined at each point, representing the change of the coupling constants. This change is represented by a flow in the phase diagram, called the RG flow, and the RG flow provided by Eq. (1) determines the phase diagram. The transformation also determines the fixed point \(K^{*}\), such that

    $$ \begin{array}{@{}rcl@{}} K^{*}=\mathcal{R}_{b}(K^{*}). \end{array} $$
    (2)
  3. Finally, the following scaling analysis yields the critical exponents, which characterize critical phenomena.

    (a) Consider a point slightly displaced from the fixed point \(K^{*}\) and its RG flow. The transformation matrix \(\hat {T}^{(b)}\) is given by the following relationship:

      $$ \begin{array}{@{}rcl@{}} K_{\alpha}=K^{*}_{\alpha}+\delta K_{\alpha}\rightarrow K^{\prime}_{\alpha} = K^{*}_{\alpha}+\delta K^{\prime}_{\alpha}=K^{*}_{\alpha}+\sum\limits_{\beta}(\hat{T}^{(b)})_{\alpha\beta}\delta K_{\beta}. \end{array} $$
      (3)
    (b) Diagonalizing the matrix \(\hat {T}^{(b)}\) determines the scaling dimensions and scaling variables. When the transformation matrix is written as

      $$ \begin{array}{@{}rcl@{}} \hat{T}^{(b)}=b^{\hat{X}}, \end{array} $$
      (4)

      the matrix \(\hat {X}\) determines the scaling dimensions \(x_{\mu }\), which are its eigenvalues. The right eigenvectors \(R_{\mu }\) determine the scaling variables \(g_{\mu }\) via \(\delta K_{\alpha }={\sum }_{\mu }g_{\mu }(R_{\mu })_{\alpha }\).

    (c) In the neighborhood of the fixed point, a physical quantity f, with scaling dimension \(x_{f}\), satisfies the scaling form from which the critical exponents follow:

      $$ \begin{array}{@{}rcl@{}} f(K^{*}; g_{1}, g_{2}, g_{3}, \cdots) = b^{-x_{f}}f(K^{*}; b^{x_{1}}g_{1}, b^{x_{2}}g_{2}, b^{x_{3}}g_{3}, \cdots). \end{array} $$
      (5)

This general framework includes a coarse-graining procedure, represented by Eq. (1). Using this framework, critical phenomena can be explained. Critical phenomena are an instance of universality, which refers to behavior common to several systems whose microscopic structures differ. The RG transformation enables us to explain this universality, whereas the model before the transformation fails to do so. In this sense, the RG transformation, and in particular its coarse graining, plays an important explanatory role.
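
To make the three steps concrete, the sketch below iterates a one-parameter RG map numerically. The map used, the Migdal–Kadanoff recursion for the two-dimensional Ising model, \(K^{\prime }=\frac {1}{2}\ln \cosh 4K\) with b = 2, is an illustrative assumption not discussed above, but it exhibits an unstable fixed point (Eq. (2)) and a scaling dimension (Eq. (4)) in exactly the manner of the scheme.

```python
import numpy as np

# Migdal-Kadanoff recursion for the 2d Ising model, K' = R(K), scale factor b = 2.
def R(K):
    return 0.5 * np.log(np.cosh(4.0 * K))

b = 2.0

# Step 2: locate the unstable fixed point K* = R(K*) by bisection on R(K) - K,
# using the fact that R(K) - K changes sign between K = 0.1 and K = 1.0.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R(mid) - mid < 0:
        lo = mid
    else:
        hi = mid
K_star = 0.5 * (lo + hi)            # approx. 0.305

# The RG flow (Eq. 1): couplings off the fixed point flow away from it, toward
# the trivial fixed points K = 0 (disordered) or K -> infinity (ordered).
for K0 in (0.9 * K_star, 1.1 * K_star):
    K = K0
    for _ in range(8):
        K = R(K)
    print(f"K0 = {K0:.3f} flows to K = {K:.3f}")

# Step 3 (scaling analysis, Eqs. 3-4): linearize at K*. The eigenvalue is
# T = dR/dK at K*, i.e. 2 tanh(4 K*); the scaling dimension is x = ln T / ln b,
# and the correlation-length exponent is nu = 1/x.
T = 2.0 * np.tanh(4.0 * K_star)
x_t = np.log(T) / np.log(b)
print(f"K* = {K_star:.4f}, nu = {1.0 / x_t:.3f}")   # nu approx. 1.34
```

This crude recursion yields ν ≈ 1.34 rather than the exact two-dimensional value ν = 1, which is beside the point here: what matters is the structure of the scheme, in which coarse graining (Eq. (1)) drives the flow toward or away from the fixed point.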

Batterman and Rice (2014) refer to the model derived from the RG framework as a minimal model: it represents the factors important for the target phenomena, while the details left out of the minimal model are not merely irrelevant but can even prevent us from understanding those phenomena. They point out that “accuracy” or “correctness” is not always required for explanation (Batterman & Rice, 2014, p. 356). According to the RG method, the details prevent us from explaining universal phenomena, because detailed models do not provide adequate accounts of critical phenomena. The RG transformation maps models onto the fixed point, which is a minimal model, and the fixed point demonstrates universality. The coarse-graining procedure in the RG method is required to derive the minimal model; in this sense, the coarse graining in this method is substantial.

Furthermore, the RG method is an instance of emergence in physics (Morrison, 2012; Batterman, 2010). Morrison (2012), for instance, suggests that because the RG method provides coarse-grained systems, it is a case of emergence: the RG method implies the irrelevance of microscopic details to some macroscopic phenomena, and thereby implies emergence. Within De Haro’s framework, the RG method is a case of ontological emergence (see Fig. 2). In this case, the lower-level models \(M_{L}\) are microscopic models representing several kinds of materials, such as two-dimensional Ising models, whereas the higher-level model \(M_{H}\) corresponds to the fixed point. The repeated RG transformation (Eq. (1)), which includes the coarse-graining procedure, maps the lower-level models onto the higher-level model, the fixed point of Eq. (2). While the lower-level models fail to explain the critical phenomena, the higher-level model does. The domain of the lower-level models, \(D_{L}\), differs from that of the higher-level model, \(D_{H}\), implying ontological emergence.

Fig. 2 The RG method includes an RG transformation \(\mathcal {R}\), which includes a coarse-graining procedure. \(\mathcal {R}\), as coarse graining, maps lower-level models \(M_{L}\), such as two-dimensional Ising models, each of which represents a different kind of physical system, onto the higher-level model \(M_{H}\) (a fixed point). The domain of \(M_{L}\), denoted by \(D_{L}\), differs from the domain of \(M_{H}\), denoted by \(D_{H}\), because the higher-level model provides an explanation of the critical behavior and the lower-level models do not. The dashed line between M and D represents an interpretation that connects the mathematically represented models to their domains of application in the empirical world

Coarse graining in the RG method is indispensable for explaining universality and can therefore be regarded as substantial coarse-graining. This substantial coarse-graining demonstrates ontological emergence because, compared with the lower-level models, the higher-level model shows novel physical properties, such as the critical exponents.

3.2 Irreversibility in statistical mechanics

Another philosophical issue concerning coarse graining is the derivation of irreversibility within statistical mechanics. Gibbs (1902), one of the founders of statistical mechanics, originally proposed an approach based on coarse graining. Robertson (2020) investigates the conceptual foundations of this framework, in particular what she calls the Zwanzig-Zeh-Wallace (ZZW) framework (Wallace, 2011; Zeh, 2007; Zwanzig, 1960). Many processes in our world, for example the spreading of spilled milk, are not invariant under time-reversal transformations, whereas our contemporary fundamental theories, such as statistical mechanics, are time-reversible. To fill this gap, the ZZW framework appeals to coarse graining to derive irreversibility from statistical mechanics.

The ZZW framework introduces a coarse-graining projection operator \(\hat {P}\) on the space of probability density functions in order to derive irreversibility within statistical mechanics; the probability density is denoted by \(\rho \). The projection \(\hat {P}\) splits the probability density into two parts, the relevant probability density \(\rho _{r}\) and the irrelevant probability density \(\rho _{ir}\):

$$ \begin{array}{@{}rcl@{}} \hat{P}\rho=\rho_{r},~~~~(1-\hat{P})\rho=\rho_{ir}. \end{array} $$
(6)

This relevant probability density \(\rho _{r}\) demonstrates irreversibility. Given the density function \(\rho _{r}\), the entropy S is defined as follows:

$$ \begin{array}{@{}rcl@{}} S[\rho_{r}]:=S[\hat{P}(\rho)] := -k_{B}\int\hat{P}\rho(q, p)\ln\hat{P}\rho(q, p)d^{3N}q d^{3N}p. \end{array} $$
(7)

This coarse-grained entropy indicates irreversibility such that

$$ \begin{array}{@{}rcl@{}} \frac{dS[\rho_{r}]}{dt}\geq 0. \end{array} $$
(8)

In short, in the ZZW framework, time irreversibility appears through coarse graining (Robertson, 2020, pp. 549–556).
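
A discrete toy version of the projection makes Eqs. (6)-(8) concrete. The sketch below assumes, as a simple hypothetical choice in the spirit of the Gibbsian averaging mentioned above, that \(\hat {P}\) averages the density over fixed cells; the averaged density then never has lower Gibbs entropy than the original, and \(\hat {P}\) is idempotent, as a projection must be.

```python
import numpy as np

def entropy(p):
    """Discrete analogue of Eq. (7): -sum p ln p (k_B = 1, unit cell volume)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def P(rho, block=16):
    """Coarse-graining projection: replace rho within each block by its mean."""
    means = rho.reshape(-1, block).mean(axis=1)
    return np.repeat(means, block)

rng = np.random.default_rng(0)
rho = rng.random(1024) ** 8        # a sharply structured fine-grained density
rho /= rho.sum()

rho_r = P(rho)                     # relevant part, Eq. (6)
rho_ir = rho - rho_r               # irrelevant part; it carries no probability

print(entropy(rho) <= entropy(rho_r))   # True: coarse graining raises entropy
print(np.allclose(P(rho_r), rho_r))     # True: P is a projection (P P = P)
print(abs(rho_ir.sum()) < 1e-12)        # True: rho_ir integrates to zero
```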

As Robertson notes, one objection to approaches that appeal to coarse graining is that the resulting asymmetry, such as time-irreversibility, is illusory: coarse graining inevitably distorts the correct density, so the results of the procedure are illusoryFootnote 5. Robertson offers counterarguments to this objection. First, the asymmetry is not illusory but robust: whenever the density \(\rho (t)\) is coarse grained, the coarse-grained density \(\rho _{r}(t)\) exhibits the asymmetry. Second, coarse graining in this framework is not a Galilean idealization, that is, a distortion that makes a system more tractable. Scientists ultimately aim to remove Galilean idealizations and to investigate more accurate representations; Weisberg argues that “Galilean idealization takes place with the expectation of future de-idealization and more accurate representation” (Weisberg, 2013, p. 100). In the ZZW framework, the accurate density is already known, and coarse graining is used to separate out the irrelevant part of that density. Robertson argues that coarse graining in the ZZW framework is not Galilean idealization because the details prevent us from deriving irreversibility: neglecting details through the coarse-graining projection operator is what makes it possible to derive irreversibility. Coarse graining in the ZZW framework is thus not merely a distortion but plays an important role in explaining irreversibility. In this sense, this case can be regarded as substantial coarse-graining.

Coarse graining in the ZZW framework demonstrates ontological emergence (see Fig. 3). In this case, the lower-level model is represented by the probability density function \(\rho \). The coarse-graining projection \(\hat {P}\) maps this lower-level model onto the higher-level model represented by \(\rho _{r}\). Within this framework, the coarse-graining projection excludes the irrelevant factors in the genuine density function \(\rho \) and introduces the coarse-grained density function \(\rho _{r}\) in order to derive irreversibility; without this procedure, irreversibility does not appear. The domain of the model before coarse graining does not include irreversibility, but the domain of the model derived from coarse graining does. Therefore, the domain of the higher-level model is not a proper subset of the domain of the lower-level model, and this case is ontological emergence, as in the case of the RG method.

Fig. 3 In the ZZW framework, the coarse-graining projection \(\hat {P}\) maps a lower-level model \(M_{L}\), represented by \(\rho \), onto a higher-level model \(M_{H}\), represented by \(\rho _{r}\). The dashed line between M and D represents an interpretation that connects the mathematically represented models to their domains of application in the empirical world. The domain of \(M_{L}\), denoted by \(D_{L}\), differs from that of \(M_{H}\), denoted by \(D_{H}\), because the higher-level model demonstrates irreversibility and the lower-level model does not

Within the ZZW framework, coarse graining is indispensable for deriving irreversibility from statistical mechanics; in this sense, it plays an important role and is substantial. This substantial coarse-graining also implies ontological emergence, as in the case of the RG method. Although the original density function \(\rho \) does not demonstrate irreversibility, the coarse-grained density function \(\rho _{r}\) does. Irreversibility is a novel property compared with the model before coarse graining, whose density function is \(\rho \); in this sense, this case is ontological emergence.

3.3 Pictures of a Rigid body

In classical mechanics, a coarse-graining procedure is used to introduce the notion of a rigid body as a kind of continuum. This is an elementary case of coarse graining in physics. In classical mechanics, the rigid body is defined as “a system of particles such that the distances between the particles do not vary” (Landau and Lifshitz, 1976, p. 96). A rigid body is thought of as continuous and, at the same time, as a system composed of particles, and coarse graining establishes the relationship between these two understandings of a rigid body.

The passage from the formulae which involve a summation over discrete particles to those for a continuous body is effected by simply replacing the mass of each particle by the mass ρdV contained in a volume element dV (ρ being the density) and the summation by an integration over the volume of the body (Landau & Lifshitz, 1976, p. 96).

This replacement of the mass of each particle by the mass of a volume element is justified by the coarse-graining procedure. In other words, coarse graining maps a model of a system of discrete particles onto a model of a continuum.

First, consider a continuum with volume V and total mass M. The density at a point r of the continuum is denoted by ρ(r). For a small volume ΔV around r, let ΔM be the mass contained in this small volume. Then,

$$ \begin{array}{@{}rcl@{}} \rho(\textbf{r})=\lim_{\Delta V\rightarrow0}\frac{\Delta M}{\Delta V}. \end{array} $$
(9)

or, if ρ(r) is given,

$$ \begin{array}{@{}rcl@{}} {\Delta} M\approx{\Delta} V\rho(\textbf{r}). \end{array} $$
(10)

Strictly speaking, a continuum is composed of particles that obey quantum mechanics. Therefore, if ΔV is too small, Eq. (9) does not hold. In classical mechanics, the procedure of taking the mathematical limit \({\Delta } V\rightarrow 0\) is stopped halfway, at a scale at which the density function ρ(r) remains continuous. When the system is regarded as consisting of appropriately small segments of volume ΔVi, rather than of point particles, the density ρ is continuous and the system can be treated as a continuum. This approximation, which yields ρ(r) as a continuous function, is the coarse graining that connects the model of an aggregate of point particles with the model of a continuum.
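
A toy computation, with hypothetical numbers, shows why the limit is stopped halfway: for cells much larger than the inter-particle spacing, the ratio ΔM/ΔV is stable and defines a smooth density, while as ΔV approaches the spacing, the ratio fluctuates wildly.

```python
import numpy as np

# A rod of unit length modeled as 100,000 discrete particles of equal mass
# (total mass M = 1, mean spacing 1e-5). Estimate rho at r = 0.5 for a range
# of cell sizes dV: Eq. (9) is reliable only while dV stays well above the
# inter-particle spacing.
rng = np.random.default_rng(2)
n = 100_000
m = 1.0 / n                        # equal point masses
x = rng.random(n)                  # particle positions in [0, 1]

r = 0.5
for dV in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    dM = m * np.count_nonzero(np.abs(x - r) < dV / 2)
    print(f"dV = {dV:.0e}: dM/dV = {dM / dV:.2f}")   # ~1.0, then increasingly noisy
```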

Eqs. (9) and (10), given by coarse graining, establish the relationship between a continuum and a set of point particles. In classical mechanics, the total mass of a system of n point particles is

$$ \begin{array}{@{}rcl@{}} M=\sum\limits_{i=1}^{n} m_{i}, \end{array} $$
(11)

where the mass of each particle i is denoted by \(m_{i}\). On the other hand, when the total mass of the continuum is M, this body can be divided into n parts:

$$ \begin{array}{@{}rcl@{}} M=\sum\limits_{i=1}^{n}{\Delta} M_{i}. \end{array} $$
(12)

For a point \(\mathbf {r}_{i}\) in the i-th segment, whose mass is \({\Delta } M_{i}\), when the density function (Eq. (9)) is given, the total mass can be expressed as follows:

$$ \begin{array}{@{}rcl@{}} M=\lim_{n\rightarrow\infty} \sum\limits_{i=1}^{n}\rho(\mathbf{r}_{i}){\Delta} V_{i} =\int \rho(\mathbf{r})dV. \end{array} $$
(13)

Comparing Eqs. (11) and (13), the coarse-graining procedure licenses the following transformations, which establish the connection between the system of point particles and the continuum:

$$ \begin{array}{@{}rcl@{}} m_{i}\rightarrow{\Delta} M_{i}\rightarrow \rho(\mathbf{r})dV,\quad {\sum}_{i}\rightarrow\int \end{array} $$
(14)

The coarse-graining procedure maps the model of a system of point particles onto the model of the continuum.
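
A numerical sketch, with hypothetical values for the mass, length, and numbers of particles and cells, illustrates the point of Eq. (14): the discrete sum (Eq. (11)) and the coarse-grained continuum integral (Eq. (13)) yield the same classical mechanical quantities, here the total mass and the moment of inertia of a uniform rod.

```python
import numpy as np

# A uniform rod of length L = 2 and mass M = 6, described in two ways.
L_rod, M_tot, n = 2.0, 6.0, 100_000
x = np.linspace(0.0, L_rod, n)                  # positions of the point particles
m = np.full(n, M_tot / n)                       # equal point masses m_i

# (i) Discrete description: sums over particles (Eq. 11).
M_discrete = m.sum()
I_discrete = np.sum(m * (x - L_rod / 2) ** 2)   # moment of inertia about the center

# (ii) Coarse-grained description: bin the particles into cells of size dV to
# obtain rho_i = dM_i / dV (Eq. 10), then replace sums by integrals (Eq. 14).
n_cells = 50
dV = L_rod / n_cells
dM, edges = np.histogram(x, bins=n_cells, weights=m)
rho = dM / dV                                   # approximately constant: M / L
centers = 0.5 * (edges[:-1] + edges[1:])
M_continuum = np.sum(rho * dV)                  # discretized integral of rho dV
I_continuum = np.sum(rho * (centers - L_rod / 2) ** 2 * dV)

print(M_discrete, M_continuum)                  # both: 6.0
print(I_discrete, I_continuum, M_tot * L_rod**2 / 12)   # all ~ 2.0 = M L^2 / 12
```

Both descriptions agree on the total mass and on \(ML^{2}/12\); the continuum model adds no new target property, which is why, as argued below, this case counts as mere coarse-graining.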

Coarse graining in this case is mere coarse-graining. Since coarse graining is merely used to simplify the derivation of the physical properties of a rigid body, there exist both a model of the body as a set of point particles and a model of it as a continuum, and both exhibit the classical mechanical behavior of a rigid body. In this case, unlike the RG method and the ZZW framework, coarse graining is dispensable for showing the classical mechanical properties of the rigid body. Therefore, this case is mere coarse-graining.

Coarse graining in this case does not imply ontological emergence (see Fig. 4). The lower-level model, represented by Eq. (11), is a model of a rigid body as a set of point particles; the higher-level model, represented by Eq. (13), is a model of it as a continuum. Both models represent the rigid body, and both explain the same classical mechanical properties of the same rigid body. In fact, the procedure merely simplifies the derivation of the equations for classical mechanical properties (Landau & Lifshitz, 1976, p. 96). Therefore, this case demonstrates not ontological but epistemological emergence. The novelty in this case lies in our understanding of the rigid body: in the lower-level model the body is regarded as an aggregate of point particles, whereas in the higher-level model it is regarded as a continuum. This is an epistemological, not an ontological, differenceFootnote 6.

Fig. 4 In the case of a rigid body, the coarse-graining procedure (Eq. (14)) maps the lower-level model of point particles \(M_{L}\) onto the higher-level model of a continuum \(M_{H}\). The domain of application D of both models is the rigid body. The dashed line between M and D represents an interpretation that connects the mathematically represented models to their domains of application in the empirical world

4 Coarse graining and emergence

In the previous section, the case studies revealed two types of coarse graining. In the cases of the RG method and the ZZW framework, coarse graining plays an important role in explaining physical properties, and these cases imply ontological emergence. In the case of the rigid body, by contrast, coarse graining merely changes our description and implies epistemological emergence. This section analyzes the difference between these types of coarse graining and considers its philosophical implications.

4.1 A Distinction between substantial and mere coarse-graining

Coarse graining is a form of idealization, and I will first consider whether existing distinctions among idealizations correspond to the distinction between substantial and mere coarse-graining. In this article, this strategy is called a top-down approach, in the sense that the features of coarse graining are considered in terms of a general understanding of idealization.

Norton (2012) characterizes the notions of idealization and approximation as follows.

  • An approximation is an inexact description of a target system. It is propositional.

  • An idealization is a real or fictitious system, distinct from the target system, some of whose properties provide an inexact description of some aspects of the target system (Norton, 2012, p. 209, italics are original).

In short, an approximation does not give rise to a new system, while an idealization does. Because the property in the limit is not always a property of the system in the limit, the distinction between the two is crucial: if the limit property differs from the property of the limit system, the case should be regarded as an approximation. Norton argues that the RG method applied to phase transitions is an approximation because what is considered are coarse-grained properties (not systems), such as universality (Norton, 2012, pp. 219–223). Because both the higher-level model and the lower-level models represent the same target materials, the RG method does not bring about new systems. In the case of the ZZW framework, the models before and after coarse graining, represented by \(\rho \) and \(\rho _{r}\) respectively, deal with the same target system: the coarse-graining procedure introduces an inexact description of the probability density function, from which irreversibility can be derived, but the target system is the same. Similarly, mere coarse-graining in the case of the rigid body is an approximation, since the higher-level continuum model does not refer to the existence of a new system. Both substantial and mere coarse-graining are therefore approximations in Norton’s sense. In fact, as Norton argues, coarse graining in general is an approximation: the coarse-graining procedure enables us to derive some new properties but merely provides an inexact description of the target system. The case of continuum limits, which Norton (2012) examines, is a sort of coarse graining; he argues that the “sequence in halftone printing”, such as gray-scale printing, is an approximation but not an idealization. Therefore, Norton’s classification does not distinguish between the types of coarse graining.

Another philosophical investigation of idealization is Weisberg’s (2013), which enumerates several kinds of idealization: Galilean, minimalist, and multiple-models idealization. Galilean and minimalist idealization are distinguished by our purposesFootnote 7: both are procedures that omit some factors in order to introduce a model, and the difference between them lies in the purpose. Galilean idealization is carried out for the purpose of simplification; the frictionless plane, for instance, is a helpful assumption for understanding abstract classical mechanical behavior on a planeFootnote 8. Weisberg argues that this idealization serves pragmatic purposes, so scientists expect it to be de-idealized and replaced by a more accurate representation in the future. Minimalist idealization, on the other hand, is carried out to highlight the essential causal factors of phenomena; the Ising model, for instance, a highly idealized model, abstracts the causal factors important for ferromagnetism. Models introduced through minimalist idealization are taken to capture the factors essential to the phenomena.

As with Norton’s taxonomy, Weisberg’s classification does not distinguish the types of coarse graining. As seen above, Robertson (2020) argues that coarse graining in the ZZW framework is not Galilean idealization: the framework assumes that the coarse-grained density function is indispensable for explaining irreversibility in terms of statistical mechanics, whereas the purpose of Galilean idealization is mere simplification. Similarly, the RG method is not Galilean idealization, because it is required to explain critical phenomena that the lower-level models fail to explain. The RG method and the ZZW framework abstract away factors irrelevant to their targets; both cases of substantial coarse-graining are thus minimalist idealizations, which aim to capture the causal factors important for the phenomena.

Coarse graining in classical mechanics is not Galilean idealization either, even though it is mere coarse-graining. Admittedly, Landau and Lifshitz state that coarse graining simplifies the derivation of the equations for a rigid body (Landau and Lifshitz, 1976, p. 96), and this motivation looks Galilean, since the coarse graining is carried out for a pragmatic purpose. However, the model introduced through coarse graining represents the important factors of the rigid body. Shortly after mentioning that simplification is one role of coarse graining, Landau and Lifshitz note that “solid bodies may usually be regarded in mechanics as continuous, and their internal structure disregarded” (Landau & Lifshitz, 1976, p. 96). In other words, the coarse-grained description represents the essential structure of the rigid body well. Both the model before coarse graining and the model after it have their own explanatory roles, and no de-idealization is expected. In the end, this case of mere coarse-graining is minimalist idealization, like substantial coarse-graining.

The taxonomy of idealization therefore does not distinguish between the types of coarse graining: the top-down approach, which tries to draw the distinction from a general definition of idealization, fails. An alternative is a bottom-up approach that clarifies the differences on the basis of the case studies.

As seen above, in the cases of the RG method and the ZZW framework, the coarse-graining procedure plays an important role in deriving properties that the detailed (lower-level) model fails to show; coarse graining is required to exhibit these novel properties. In the case of the rigid body, on the other hand, coarse graining connects different models that provide different pictures of the rigid body, but both models demonstrate the same classical properties. These characteristics suggest the following distinction.

  • Mere coarse-graining refers to coarse graining that provides an inexact description of an original model, where the coarse-grained model does not demonstrate any new property compared with the original model.

  • Substantial coarse-graining refers to coarse graining that enables us to derive a property that the model before coarse graining fails to show.

Both types of coarse graining are mathematical transformations that derive an inexact description; the difference lies in their roles in practice. Substantial coarse-graining shows ontological emergence, and some instances of mere coarse-graining show epistemological emergence.

Substantial coarse-graining implies ontological emergence but does not bring with it reference to a new target system. In the cases of the RG method and the ZZW framework, the higher-level models demonstrate critical behavior and irreversibility, respectively, which are novel properties compared with the lower-level models; however, because coarse graining is an approximation in Norton’s sense, the models share the same system. In contrast, the models of the rigid body show no difference in their properties. In order to explicate how substantial coarse-graining implies ontological emergenceFootnote 9, the term target property is defined in this article as follows: a target property is a property that is regarded as the main property of the model by the agents who deal with itFootnote 10. For the RG method, the target property is the relationships between the critical exponents, or the universality of critical phenomena; in the ZZW framework, it is irreversibility. This notion is introduced in order to exclude trivial properties. For instance, the ZZW framework shows irreversibility, and at the same time the resultant model differs from the original model: the density functions \(\rho \) and \(\rho _{r}\) do not have the same form. But the difference in the forms of the density functions is not itself the main interest of the scientists, who aim to reveal a target property such as irreversibility. When a novel target property exists, the case is substantial coarse-graining and implies ontological emergence; otherwise, the case is mere coarse-graining and implies at most epistemological emergence.

Mere coarse-graining does not always imply epistemological emergence. Admittedly, in the case of the rigid body, coarse graining demonstrates epistemological emergence in the sense that the higher- and lower-level models provide different representations of the rigid body. However, this does not mean that mere coarse-graining always implies epistemological emergence. For instance, in the RG method, the iterated transformation \(\mathcal {R}\) leads to a fixed point; if we stop the iteration before approaching the fixed point \(K^{*}\), the transformation is mere coarse-graining, because the procedure merely distorts the original model, and the resulting distorted model provides no new properties. Consider a two-dimensional Ising model: if a single step of coarse graining and rescaling were applied to it, the resultant model would neither explain a new property nor provide a picture different from that of the original model, although the procedure is mere coarse-graining. This case of mere coarse-graining fails to show epistemological emergence.

In sum, to exclude trivially novel properties, the notion of a target property is defined as the property that is the aim of the model. When a target property that the lower-level model fails to show can be derived from the higher-level model obtained through coarse graining, the coarse-graining procedure is indispensable and the target property is novel. When the lower-level model fails and the higher-level model succeeds in showing the target property, the domains of the models differ; the existence of the novel target property thus entails a difference in the domains of the models. Substantial coarse-graining is coarse graining that is indispensable for showing a novel target property; mere coarse-graining does not show a novel target property but changes our description of the system.

4.2 Implication for the notion of emergence

So far, the distinction between the types of coarse graining and its philosophical implications have been considered. This section focuses on the more general philosophical issues about emergence raised in Section 2.

Q1:

What kinds of relationships imply emergence?

Q2:

What is the meaning of novelty that characterizes the notion of emergence?

Q3:

How can we distinguish ontological and epistemological emergence?

This section will examine these issues in turn.

[Q1] What kinds of relationships imply emergence? Batterman (2002) holds that mathematical limits are related to the notion of emergence, and Franklin and Knox (2018) argue that a change of variables causes the appearance of phonons as emergent. In these studies, the relationships that give rise to emergence are taken to be specific mathematical operations. Coarse graining, by contrast, is not always represented by any specific mathematical transformation: it refers to a procedure, or idealization, that abstracts away some of the details, and its mathematical representation varies. Furthermore, because coarse graining does not always imply emergence, specific mathematical transformations do not by themselves imply emergence.

[Q2] The second issue is the meaning of novelty as a condition of emergence. The case of mere coarse-graining shows that a difference in our understanding of the target system implies epistemological emergence. Substantial coarse-graining, on the other hand, demonstrates ontological emergence, because novel target properties, such as the critical exponents and irreversibility, appear. Scientists decide on the target property, which is the main purpose of the model; scientific models contain a great deal of information, and some trivially novel property may always be found, so the notion of a target property is required to exclude trivial novelty. While the scientists who deal with a model determine what its target property is, whether the target property appears in the higher-level model is objectively determined, independently of their intentions.

[Q3] The investigation of coarse graining shows that the boundary between substantial and mere coarse-graining corresponds to the boundary between ontological and epistemological emergence (see Fig. 5). However, whereas substantial coarse-graining implies ontological emergence, mere coarse-graining does not always imply epistemological emergence. Substantial coarse-graining provides an explanation of properties that the lower-level models do not capture, and such a case entails ontological emergence in the sense that the target property is at least partially autonomous from the details. Mere coarse-graining, on the other hand, does not bring about a novel property but only provides a different picture of the system, and only some, not all, differences in interpretation qualify as epistemological emergence.

Fig. 5 Coarse graining is classified into two kinds: substantial and mere coarse-graining. Emergence as a result of coarse graining is likewise classified into ontological and epistemological emergence. The dashed line represents the boundary between substantial and mere coarse-graining and between ontological and epistemological emergence; the boundary in coarse graining corresponds to the boundary in emergence. Substantial coarse-graining implies ontological emergence, whereas mere coarse-graining does not always imply epistemological emergence: in some cases it merely distorts the description of the system

The cases of coarse graining examined above have implications for these questions. In particular, the meaning of novelty is clarified through the distinction between the two forms of coarse graining. All coarse graining distorts models in some way, and, inevitably, some novelty appears; as demonstrated above, the notion of the target property provides a way to exclude such trivial novelty as a mark of emergence.

As a whole, the investigation of coarse graining has the following implications for novelty as a condition of emergenceFootnote 11.

  • Ontological Novelty: The higher-level model shows a novel target property that the lower-level models fail to show. The existence of the novel target property implies ontological emergence.

  • Epistemological Novelty: The higher-level model demonstrates a novel picture of the target system compared with the picture given by the lower-level model. The novelty concerns our ways of understanding, and such a case implies merely epistemological emergence.

5 Conclusion and outlook

This study has identified a key difference between types of coarse graining and drawn a significant distinction. An implication of this distinction is that the existence of a new target property implies ontological emergence, while a difference in ways of description implies epistemological emergence. Although coarse graining is a pervasive form of idealization in modern physics, its features have not received much attention, and in terms of the notion of emergence the distinction between substantial and mere coarse-graining called for clarification. This article has considered three main examples of coarse graining in physics: the RG method, the ZZW framework, and the rigid body. In the cases of the RG method and the ZZW framework, coarse graining is indispensable for deriving physical properties, universality and irreversibility respectively. In the case of the rigid body, by contrast, coarse graining is dispensable for deriving the classical mechanical behavior. Accordingly, the RG method and the ZZW framework, as cases of substantial coarse-graining, show ontological emergence, and the rigid body, as a case of mere coarse-graining, shows epistemological emergence. The explanatory features of coarse graining thus ground the distinction between its types. This distinction reveals that coarse graining does not always imply emergence: the boundary between substantial and mere coarse-graining corresponds to the boundary between ontological and epistemological emergence, although mere coarse-graining does not always imply epistemological emergence. Furthermore, the distinction reveals that the novelty that conditions ontological emergence is the existence of a novel target property, not of novel entities; the notion of a novel target property is introduced to exclude cases of trivial novelty.

The distinction between the types of coarse graining and its implications for the notion of emergence rely on what can be derived from a model. The notion of derivation should therefore be examined in order to reveal what scientific models can demonstrate; in fluid dynamics, for example, numerical calculations are indispensable for obtaining information from models, so the methods used to carry out such calculations should also be considered. I leave this topic for future research. Additionally, this study has considered some of the issues related to coarse graining, but it is not a comprehensive investigation. As mentioned in Section 1, there are other cases of coarse graining, some of which are related to the RG, such as effective field theory and crossover behavior in the context of critical phenomenaFootnote 12. The applicability of the distinction proposed in this study is another important topic for future work.