Abstract
Incomplete particle identification limits the experimentally available phase-space region for identified-particle analyses. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain the nth-order moments of the multiplicity distribution using the identity method, generalising previously published solutions for \(n=2\) and \(n=3\). Moreover, we present an open-source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified-particle multiplicity distributions from the measured ones, provided the response function of the detector is known.
1 Introduction
The search for the critical point of strongly interacting matter remains one of the most important goals of experimental heavy-ion physics [1, 2]. Its basic property – the increase of the correlation length of the considered system – forces experimenters to shift their interest from inclusive spectra to higher-order moments and cumulants of the particle multiplicity distributions. Particular attention is paid to net-proton fluctuations, which are considered the most sensitive to the sought phenomenon [3].
One of the most serious experimental issues, which largely limits the available phase-space coverage and possibly affects the studied signal, is incomplete particle identification caused by the finite detector resolution. To overcome this problem, an experimental technique called the identity method was proposed in Ref. [4] and extended in Refs. [5,6,7]. So far the identity method has been described for the second- [5] and third-order [6] moments. In Ref. [6] it was also used to reexamine the first moments of the identified-particle distributions. The impact of particle losses due to detector inefficiencies on results from the identity method is discussed in Ref. [7]; the author shows that the method remains applicable provided the detection efficiencies can be determined with sufficient accuracy. With the ongoing development of theoretical studies concerning higher-order moments, it seems appropriate to extend the experimental techniques and tools as well.
In the present study the identity method is extended in two ways. Firstly, a strict procedure to obtain the nth-order moments of the multiplicity distribution is derived. Secondly, a program called Idhim, which performs these calculations for any given number of particle types, is presented. It allows one to obtain moments up to any order, provided the detector response function is known. The modification of the first moments from Ref. [6], also included in Idhim, may address possible biases in other popular methods [8] (e.g., the maximum likelihood method [9, 10]).
The paper is organized as follows. In Sect. 2, the basic quantities of the identity method are presented. The computation of the nth moments of the true multiplicity distribution is shown in Sect. 3. Modifications necessary to apply the general formulas in practice are addressed in Sect. 4. A description of the Idhim program, which computes the moments of the true multiplicity distribution, is given in Sect. 5. Section 6 contains tests of the program with detector responses close to those measured in real experiments. The conclusion in Sect. 7 ends the paper.
2 Basic quantities
The identity method is developed under the assumption that particles are identified by measuring a quantity x (e.g., the mass) of the observed particles. Due to the finite detector resolution one obtains a continuous distribution of x, denoted by \(\rho _{j}(x)\), where the index j stands for one of the k particle types. The density is normalised so that it integrates to the mean multiplicity \(\langle N_{j}\rangle \) of this type:
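In the notation of Refs. [4, 5] this normalisation condition reads

\(\int \rho _{j}(x)\,\mathrm {d}x = \langle N_{j}\rangle .\)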
For a given particle observation, its conditional probability of being of a given type is expressed by a quantity called identity, defined as:
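Following Refs. [4, 5], the identity of an observed particle with measurement x is

\(w_{j}(x) \equiv \frac{\rho _{j}(x)}{\rho (x)}, \qquad \rho (x) \equiv \sum _{i=1}^{k}\rho _{i}(x).\)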
In the case of complete particle identification \(w_{j}\) is reduced to two extreme values: \(w_{j}=0\) for particles of types other than j and \(w_{j}=1\) for particles of type j.
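To illustrate how the identity variables behave, assume purely hypothetical Gaussian response functions for two particle types (all parameters below are made up, not taken from any experiment):

```python
import math

# Hypothetical Gaussian response functions rho_j(x) for two particle types,
# each normalised to an assumed mean multiplicity <N_j>.
TYPES = {"pion": (10.0, 0.0, 1.0), "kaon": (2.0, 3.0, 1.0)}  # <N_j>, mu_j, sigma_j

def rho(j, x):
    n, mu, sigma = TYPES[j]
    return n * math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def w(j, x):
    """Identity variable w_j(x) = rho_j(x) / sum_i rho_i(x)."""
    total = sum(rho(i, x) for i in TYPES)
    return rho(j, x) / total

# Far from the overlap region the identities approach 0 or 1;
# in the ambiguous region they take intermediate values.
print(round(w("pion", -3.0), 3))  # close to 1
print(round(w("pion", 1.5), 3))   # intermediate
```

By construction, the identities of one measurement sum to 1 over all particle types.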
In the same way, one can define an aggregated quantity for a given particle type:
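In the notation of Refs. [4, 5] it reads

\(W_{j} \equiv \sum _{i=1}^{N(\nu )} w_{j}(x_{i}),\)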
where \(N(\nu )\) is the total multiplicity (including all particle types) of the \(\nu \)th of the \(N_{ev}\) considered events. From these events one obtains the distributions of the different types of W, with moments defined as
where \(n_{j}\) denotes the order of the moment of the distribution of \(W_{j}\).
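In practice \(W_{j}\) is accumulated per event and the moments are then averaged over events; a minimal sketch with synthetic identity values (the event generator below is entirely made up):

```python
import random

random.seed(7)

# Toy event sample: per event, each particle contributes one identity value;
# here we fake w-values directly instead of simulating a detector response.
def fake_event():
    n = random.randint(0, 5)                    # total multiplicity N(nu)
    return [random.random() for _ in range(n)]  # one w-value per particle

events = [fake_event() for _ in range(10000)]

# Event-wise aggregate W_j = sum over particles of w_j(x_i)
W = [sum(ev) for ev in events]

# Raw moment <W_j^n> averaged over the N_ev events
def moment(values, n):
    return sum(v ** n for v in values) / len(values)

print(moment(W, 1), moment(W, 2))
```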
3 Computing the nth moments of multiplicity distribution
We will now show how one can compute all the nth moments of the multiplicity distribution \(\langle N_1^{n_1}\cdot N_2^{n_2}\cdot \ldots \cdot N_k^{n_k} \rangle \) with \(n_1+n_2+\dots +n_k=n\) using the moments of the measured identity variables. The procedure will be a generalisation of those published for \(n=2\) [5] and \(n=3\) [6]. First, we shall demonstrate how the value of a moment of identity variables \(\langle W_1^{n_1}\cdot W_2^{n_2}\cdot \ldots \cdot W_k^{n_k} \rangle \), depends on the multiplicity distribution. We have the following:
where \(\mathrm {P}(N_1,N_2,\ldots , N_k)\) is the multiplicity distribution, i.e., the probability of observing \(N_1\) particles of the first type, \(N_2\) particles of the second type and so forth, and \(\mathrm {P}_j(x)=\frac{\rho _j(x)}{\langle N_j\rangle }\) is the probability distribution of x for the jth type.
Let us firstly focus on the innermost part of Eq. 5, denoted hereafter by \(\omega \):
We will now use the multinomial theorem to expand the \(n_l\)th power. Let us first define the following notation for brevity:
In this notation the multinomial theorem is represented by
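The multinomial theorem used here can be cross-checked numerically with a short standalone script (a sketch for illustration, not part of the derivation):

```python
import math

def multinomial(n, ks):
    """Multinomial coefficient n! / (k_1! * ... * k_m!) for a composition ks of n."""
    assert sum(ks) == n
    out = math.factorial(n)
    for k in ks:
        out //= math.factorial(k)
    return out

def compositions(n, m):
    """All tuples of m nonnegative integers summing to n."""
    if m == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, m - 1):
            yield (first,) + rest

# Check (x1 + x2 + x3)^4 against the expanded sum over all eta with |eta| = 4.
x = (1.0, 2.0, 3.0)
n = 4
lhs = sum(x) ** n
rhs = sum(multinomial(n, eta) * math.prod(xi ** e for xi, e in zip(x, eta))
          for eta in compositions(n, len(x)))
print(lhs, rhs)  # both equal 1296.0
```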
which allows us to express \(\omega \) as
where the first summation is over all possible combinations of k nonnegative integers \(\eta ^1_{(l)},\ldots ,\eta ^k_{(l)}\) that sum up to \(n_l\). Let us now use the multinomial theorem again, this time to expand the \({\eta ^j_{(l)}}\)th power:
This formulation could be rearranged to give
If we now substitute \(\omega \) expressed in this way back into Eq. 5, we notice that the integration over \(x_i^j\) can be applied to the product \(w_1(x^j_i)^{\eta ^j_{i(1)}}\cdot \ldots \cdot w_k(x^j_i)^{\eta ^j_{i(k)}}\) to give
where function \(u_j\) is defined as
Let us now focus on the part of Eq. 12 depending on j, which will be denoted by \(\lambda ^j\),
Since each \(u_j\) depends on a tuple of values of length k, it is convenient to introduce a notation for such tuples:
This allows us to express \(\lambda ^j\) as
where the first summation is over all possible combinations of \(N_j\) tuples \(\varvec{\eta }^j_1,\ldots ,\varvec{\eta }^j_{N_j}\) (each containing k nonnegative integers) that sum up to \(\varvec{\eta }^j\). We can notice that zero tuples, i.e., \(\varvec{\eta }^j_i=(0,0,\ldots , 0)\), do not contribute to \(\lambda ^j\) since \(u_j(0,0,\ldots ,0)=1\). Let us then express the sequence \(\varvec{\eta }^j_1,\varvec{\eta }^j_2,\ldots ,\varvec{\eta }^j_{N_j}\) as a combination of several non-zero tuples.
Let \(\varGamma ^j\) denote a set of all such combinations possible for \(\varvec{\eta }^j\), so that each \(\gamma \in \varGamma ^j\) consists of \(|\gamma |\) different non-zero tuples: \(\varvec{\mu }^\gamma _1,\varvec{\mu }^\gamma _2,\dots ,\varvec{\mu }^\gamma _{|\gamma |}\), each occurring \(m^\gamma _1,m^\gamma _2,\dots ,m^\gamma _{|\gamma |}\) times, respectively. There are also \(N_j-\sum \nolimits _{p=1}^{|\gamma |} m^\gamma _p\) tuples equal to zero in the original sequence \(\varvec{\eta }^j_1,\varvec{\eta }^j_2,\ldots ,\varvec{\eta }^j_{N_j}\). We therefore have
If we use this to compute \(\lambda ^j\), we get
where \(a*b\) means that the value b appears a times in the multinomial symbol. The indicator variable is necessary so that in the case of \(\varvec{\eta }^j=0\), we have \(\lambda ^j=1\), as in Eq. 16.
If we now focus on the multinomial symbol involving \(N_j\), it can be expanded as
We can see it is a polynomial of \(N_j\) of degree \(\sum \nolimits _{p=1}^{|\gamma |} m^\gamma _p\), which is at least 1 and at most \(\eta ^j_\varSigma \equiv \sum \nolimits _{l=1}^k\eta ^j_{(l)}\).
Since \(\lambda ^j\) is a weighted sum of such polynomials (and indicator variable), it is a polynomial of \(N_j\) of at most the same degree and can therefore be expressed as
Further coefficients, i.e., \(\lambda ^j_p\) for \(p>\eta ^j_\varSigma \), equal zero.
Let us put this formulation of \(\lambda ^j\) back into Eq. 12. This gives us
We can rearrange it as
which finally gives us
Since \(\sum \nolimits _{j=1}^k\eta ^j_\varSigma =n_1+n_2+\cdots +n_k=n\), the order of the moments on the right-hand side is at most equal to n. We have therefore shown how to express any moment of the W distributions of order n as a sum of moments of the N distributions of order \(\le n\). Since this dependency is linear, we can formulate the whole problem as a set of linear equations. It will have the coefficients
where \(n_1+n_2+\cdots +n_k=n\) and \(q_1+q_2+\cdots +q_k=n\). We also need to define elements which will take into account the contribution of moments of N with orders lower than n:
To arrange the moments in a linear order, let us now choose any one-to-one function f from sequences of length k summing up to n to the numbers \(1, 2, \ldots , \left( {\begin{array}{c}n+k-1\\ k-1\end{array}}\right) \). We can use it to construct a matrix \(\mathbf A \) having elements \(A_{\xi ,\zeta }=a^{q_1,q_2,\ldots ,q_k}_{n_1,n_2,\ldots ,n_k}\) and a vector \(\mathbf B \) with elements \(B_\xi =b_{n_1,n_2,\ldots ,n_k}\) for \(\xi =f(n_1,n_2,\dots ,n_k)\) and \(\zeta =f(q_1,q_2,\ldots ,q_k)\). We can also arrange the unknown moments in a vector \(\mathbf N \) such that \(N_\zeta =\langle N_1^{q_1}\cdot N_2^{q_2}\cdot \ldots \cdot N_k^{q_k}\rangle \). This allows us to express Eq. 23 as
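One possible choice of such a function f, sketched in Python (the enumeration order below is arbitrary; only bijectivity matters):

```python
from math import comb

def index_map(n, k):
    """One possible bijection f from length-k index sequences summing to n
    to the numbers 1 .. C(n+k-1, k-1), built by enumerating the sequences
    in a fixed (lexicographic) order."""
    def sequences(n, k):
        if k == 1:
            yield (n,)
            return
        for first in range(n, -1, -1):
            for rest in sequences(n - first, k - 1):
                yield (first,) + rest
    return {seq: i + 1 for i, seq in enumerate(sequences(n, k))}

f = index_map(2, 3)
print(len(f))  # C(2+3-1, 3-1) = C(4, 2) = 6 second-order moments for 3 types
```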
or in matrix notation:
If \(\text {det} \mathbf A \ne 0\), the moment we are looking for can be computed as
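A minimal sketch of this final step, with a made-up \(2\times 2\) system standing in for \(\mathbf A \) and \(\mathbf B \) (Idhim itself performs this step with the EJML linear-algebra library):

```python
def solve(A, B):
    """Solve A x = B by Gauss-Jordan elimination with partial pivoting
    (pure Python, adequate for the small dense systems arising here)."""
    n = len(B)
    M = [row[:] + [b] for row, b in zip(A, B)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) == 0.0:
            raise ValueError("det A = 0: system is singular")
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [M[r][n] / M[r][r] for r in range(n)]

# Made-up illustration: B holds the measured W-moment terms, and the
# solution plays the role of the vector of unknown N-moments.
A = [[0.9, 0.1],
     [0.2, 0.8]]
B = [1.7, 2.6]
print(solve(A, B))
```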
4 Modifications
In the previous section we showed the procedure to compute the nth moments of the multiplicity distribution as a generalisation of the computations for \(n=2\) and \(n=3\); to apply it in practice, three modifications are needed.
Firstly, to compute the first moments as proposed in Ref. [6], we need to replace Eq. 1 with the following:
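The modified normalisation condition, with the right-hand side of Eq. 1 replaced, reads

\(\int \rho _{j}(x)\,\mathrm {d}x = A_{j}.\)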
Now the distribution of the measured x for a given particle type j is normalised to an arbitrary value \(A_j\), which does not have to equal \(\langle N_{j}\rangle \). As a result, we also need to modify Eq. 13, which now becomes
The rest of the procedure holds, and the corrected \(\langle N_j\rangle \) can be computed by applying it for \(n=1\).
Secondly, the measured x is traditionally associated with the particle mass, but it can be any measured quantity, not necessarily a single scalar value. In general, it can be a multi-dimensional vector \(\mathbf {x}\), e.g., mean energy loss and time-of-flight, as long as the integration in the function u is performed accordingly.
Thirdly, the measurement of x could be performed in several phase-space bins, corresponding to different detector configurations. In such cases Eq. 1 takes the form
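Summing over configurations, the normalisation presumably becomes

\(\sum _{\theta \in \varTheta }\int \rho _{j}(x,\theta )\,\mathrm {d}x = \langle N_{j}\rangle ,\)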
where \(\theta \) denotes a configuration from a configuration space \(\varTheta \). Analogously, the definition of w (Eq. 2) has to take into account \(\theta \) as well:
where \(w_j(x,\theta )\) denotes value of the jth identity variable for a measurement x registered in configuration \(\theta \). Finally, the computation of u (Eq. 13) has to take into account measurements in all configurations, so
All three modifications have been described here separately for simplicity, but could be combined if necessary.
5 Implementation
The Idhim program was designed to provide an easy way to obtain moments of the true multiplicity distribution of identified particles, provided the detector resolution is known.
The implementation in Java, using the EJML library for linear algebra operations, is available as open source. The required input to the program includes:
(i) a list of particle types in a text file, with each line providing a particle type name,
(ii) the \(\langle W_{1}^{n_1}\cdot \ldots \cdot W_k^{n_k}\rangle \) moments in a tsv (i.e., tab-separated values) file, with each line describing one moment as a list of \(n_1,\ldots ,n_k\) indices followed by the moment value,
(iii) a list of phase-space bins where the detector response is known, as a tsv file (if more than one kinematic variable defines such bins, multiple tab-separated indices may be provided),
(iv) a directory containing files with the detector response functions, one for each bin.
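A hypothetical minimal input set could look as follows (file names, indices and values are purely illustrative; the exemplary files shipped with the program define the authoritative format):

```
# particles.txt – one particle type per line
pion
kaon

# moments.tsv – n_1 ... n_k indices, then the measured moment value
1	0	10.02
0	1	2.01
1	1	20.13

# bins.tsv – one phase-space bin index per line
1
2
```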
An exemplary set of all needed files is provided with the program.
The input format makes the program applicable to a wide range of experiments. Firstly, the number of considered particle types is arbitrary. In a typical particle-identification case it depends on the collision energy and the available statistics; e.g., at low interaction energies one does not need to consider deuterons and/or helium-3, whereas at high energies or with large available statistics they must be taken into account. Secondly, only the \(\langle W_1^{n_1}\cdot \ldots \cdot W_k^{n_k} \rangle \) moments, not the full distributions, need to be provided. Finally, in a typical experiment particle identification is performed by a set of detectors with overlapping momentum coverage. Thus, the full momentum coverage of an experiment consists of regions where \(\rho \) is a 1D function, e.g., when particles are identified only by dE/dx or time-of-flight (ToF), or a 2D function, e.g., when particles are identified by combined measurements of dE/dx and ToF. An example of such a non-uniform detector acceptance is shown in Fig. 1. Bins of any dimensionality, reflecting changing detector configurations or particle yields, can be defined as long as the density functions of all particle types are given at the same points of the space.
The next section includes examples demonstrating the usefulness of the program features described above.
6 Test on simulated data
The computation of all moments of the multiplicity distributions up to the fourth order was tested on two models. The first is a Monte Carlo model (a so-called fast generator), where the number of particles of a given type produced in a single event is generated from a Poisson distribution with a separate free parameter \(\lambda \) for each considered particle type. The test included the four most popular particle types, namely electrons, pions, kaons and protons, with respective \(\lambda \) of 1, 10, 2 and 4. The number of events was set to 1,000,000.
Particles are generated according to Poisson distributions and are uncorrelated (except through the detector response), so the true values of the generated moments are
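For a Poisson distribution with parameter \(\lambda \) the nth raw moment is \(\langle N^{n}\rangle =\sum _{j=0}^{n}S(n,j)\,\lambda ^{j}\), where \(S(n,j)\) are Stirling numbers of the second kind. A short cross-check of these reference values (a standalone sketch, not part of Idhim):

```python
def stirling2(n, j):
    """Stirling number of the second kind via the standard recurrence."""
    if n == j:
        return 1
    if j == 0 or j > n:
        return 0
    return j * stirling2(n - 1, j) + stirling2(n - 1, j - 1)

def poisson_raw_moment(lam, n):
    """nth raw moment of a Poisson distribution: sum_j S(n, j) * lam^j."""
    return sum(stirling2(n, j) * lam ** j for j in range(n + 1))

# For lambda = 2 (the kaon parameter in the test described above):
print([poisson_raw_moment(2.0, n) for n in range(1, 5)])  # [2.0, 6.0, 22.0, 94.0]
```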
The generated cross-moments are defined as products of the pure ones.
The simulated detector response consists of mean energy-loss measurements in a Time Projection Chamber. For each particle, its mean energy loss was generated from a Gaussian distribution with parameters based on experimental data from Refs. [9, 11], in two bins simulating the momentum dependence of the detector response. Testing several different momentum dependencies showed that the distribution of particles between the bins does not affect the final results. An exemplary simulated dE/dx distribution in a single bin is shown in Fig. 2.
The Idhim program is used to obtain the reconstructed moments of the considered particle types up to the fourth order. The statistical uncertainty of the reconstructed moments results from the uncertainty of the fitted distributions \(\rho _{j}(x)\) as well as from the \(\langle W_{1}^{n_1}\cdot \ldots \cdot W_k^{n_k}\rangle \) moment values. The two sources are correlated, so standard error propagation is complicated and inconvenient. Instead, the statistical uncertainty is obtained using the bootstrap method [12].
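The bootstrap procedure itself is straightforward; a minimal sketch on toy data (sample, estimator and all numbers below are made up for illustration):

```python
import random
import statistics

random.seed(42)

def bootstrap_uncertainty(sample, estimator, n_resamples=200):
    """Standard deviation of an estimator over bootstrap resamples:
    draw len(sample) entries with replacement, re-evaluate, repeat."""
    n = len(sample)
    replicas = []
    for _ in range(n_resamples):
        resample = [sample[random.randrange(n)] for _ in range(n)]
        replicas.append(estimator(resample))
    return statistics.stdev(replicas)

# Toy stand-in for a measured sample of W values.
sample = [random.gauss(10.0, 2.0) for _ in range(1000)]

def second_moment(xs):
    return sum(x * x for x in xs) / len(xs)

err = bootstrap_uncertainty(sample, second_moment)
print(second_moment(sample), "+/-", err)
```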
Reconstructed and generated moments as well as their ratio are shown in Fig. 3. The ratio is 1 within the statistical uncertainty for all considered values.
Another test was performed using 3 million p+p interactions at \(\sqrt{s_{NN}}=17.3\) GeV generated with the EPOS [13, 14] model, with a detector acceptance containing two types of regions: dE/dx only, and combined ToF and dE/dx. An example of such a two-dimensional distribution is shown in Fig. 4. The shape of the 2D distribution and its parameters were based on the real-data analysis of Ref. [15]. Again, to mimic the momentum dependence of the detector response, the acceptance was divided into several bins.
Reconstructed and generated moments as well as their ratio are shown in Fig. 5. Again, the ratio is 1 within the statistical uncertainty for all considered values.
Both the procedure and its implementation function as expected. The difference between the generated and reconstructed first moments of N and W is negligible, but for the higher orders the differences can reach 70\(\%\). To accommodate different possible shapes of the \(\rho \) functions, they are delivered in binned form; thus a proper binning is important to describe the functions' shapes. The identity method does not address other detector biases or the detector efficiency. Such possible biases should be addressed by the appropriate experimental tools (for examples and details see Refs. [16,17,18]).
7 Conclusion
In this paper we extend the identity method in two ways. Firstly, a new strict procedure to obtain the nth-order moments of the multiplicity distribution of an arbitrary number of particle types is discussed. Secondly, a software implementation of this procedure is presented. Provided the detector response is known, it computes moments of any order, including the first ones. It is equally precise for low and high mean multiplicities. Two tests were performed to validate the program. The first test, based on a simple fast-generator check, showed that the program works well in the absence of correlations between particles: the difference between the reconstructed and generated moments is at the level of the statistical uncertainty or below. The second test, performed on p+p interactions simulated with the EPOS model, confirmed that correlations between particles do not affect the program's performance. It also showed that Idhim can easily be used in the case of a non-uniform detector acceptance comprising different detector types.
As a last comment, we would like to stress that a successful analysis of moments of identified-particle distributions depends on the understanding of the detector response. Possible flaws in the description of the \(\rho \) functions will propagate through the identity method to the final results. Moreover, the identity method does not compensate for a limited detector efficiency. Thus, the \(\rho \) distributions and the mean \(\langle W \rangle \)'s have to be corrected for the limited and often momentum-dependent detector efficiency by other known methods.
References
[1] N. Antoniou et al., Study of hadron production in hadron-nucleus and nucleus-nucleus collisions at the CERN SPS, CERN-SPSC-2006-034 (2006)
[2] The STAR Collaboration, Beam energy scan white paper: studying the phase diagram of QCD matter at RHIC (2014)
[3] Y. Hatta, M.A. Stephanov, Phys. Rev. Lett. 91, 102003 (2003) [Erratum: Phys. Rev. Lett. 91, 129901 (2003)]
[4] M. Gazdzicki, K. Grebieszkow, M. Mackowiak, S. Mrowczynski, Phys. Rev. C 83, 054907 (2011)
[5] M. Gorenstein, Phys. Rev. C 84, 024902 (2011)
[6] A. Rustamov, M. Gorenstein, Phys. Rev. C 86, 044906 (2012)
[7] C.A. Pruneau, Phys. Rev. C 96(5), 054902 (2017)
[8] R.R. Prado, in Proceedings of Science, 35th International Cosmic Ray Conference 2017, Busan, South Korea (2017)
[9] M. van Leeuwen, A practical guide to dE/dx analysis in NA49, note, CERN EDMS (2008)
[10] M. van Leeuwen, in Proceedings, 38th Rencontres de Moriond on QCD and High-Energy Hadronic Interactions, Les Arcs, France, March 22-29, 2003 (2003)
[11] A. Aduszkiewicz et al., Eur. Phys. J. C 77(10), 671 (2017)
[12] B. Efron, SIAM Rev. 21(4), 460-480 (1979)
[13] K. Werner, in Proceedings of the 14th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2006), Nucl. Phys. Proc. Suppl. 175-176, 81-87 (2008)
[14] T. Pierog, K. Werner, Nucl. Phys. Proc. Suppl. 196, 102-105 (2009)
[15] M. Kuich, in Proceedings of Science, The European Physical Society Conference on High Energy Physics, Venice, Italy (2017)
[16] A. Aduszkiewicz et al. [NA61/SHINE Collaboration], Eur. Phys. J. C 76(11), 635 (2016)
[17] A. Bzdak, R. Holzmann, V. Koch, Phys. Rev. C 94(6), 064907 (2016)
[18] M. Lorenz, Hadron production at SIS energies: an update from HADES, slides at Critical Point and Onset of Deconfinement, Wroclaw, Poland (2016)
Acknowledgements
We would like to thank M. Gazdzicki for fruitful discussions and comments. The work of M. M. P. was partially supported by the National Science Center, Poland Grant 2015/18/M/ST2/00125.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funded by SCOAP3
Maćkowiak-Pawłowska, M., Przybyła, P. Generalisation of the identity method for determination of high-order moments of multiplicity distributions with a software implementation. Eur. Phys. J. C 78, 391 (2018). https://doi.org/10.1140/epjc/s10052-018-5879-9