
1 Introduction

Statistical methods are excellent at modeling the semantic content of text documents [9]. More specifically, document clustering is widely used in a variety of applications such as text retrieval or topic modeling (see, e.g., [3]). Words in text documents usually exhibit appearance dependencies, i.e., if a word w appears once, it is more probable that the same word w will appear again. This phenomenon is called burstiness, and it has been shown that it can be addressed by introducing prior information into the construction of the statistical model, yielding several computational advantages [15]. Given that the Dirichlet distribution is generally taken as a conjugate prior to the multinomial, the most popular hierarchical approach is the Dirichlet Compound Multinomial (DCM) distribution [14]. While the Multinomial distribution fails to model word burstiness because of its independence assumption, the DCM distribution not only captures this behavior but also models text data better [14]. However, the Dirichlet distribution has its own limitations due to its negative covariance structure and equal confidence [11, 24]. Hence, a generalization of it called the Scaled Dirichlet (SD) distribution has been shown to be a good alternative as a prior to the multinomial, resulting in the Multinomial Scaled Dirichlet (MSD) distribution recently proposed in [25]. Indeed, MSD has shown high flexibility in count data modeling, with superior performance in several challenging real-life applications [25,26,27,28]. Despite its flexibility, the MSD distribution shares limitations with DCM since its parameter estimation is slow, especially in high-dimensional spaces. Thus, [28] proposed a close exponential-family approximation called EMSD that combines the flexibility of MSD with the desirable statistical and computational properties of the exponential family of distributions, including sufficiency. EMSD has been shown to reduce the complexity and computational effort, considering the sparsity and high dimensionality of count data.

In this work, we study the application of the Bayesian framework to learning the exponential-family approximation to the Multinomial Scaled Dirichlet (EMSD) mixture model, which has been shown to be an appropriate distribution for modeling burstiness in high-dimensional feature spaces. In particular, we propose a learning approach for the EMSD mixture model using Stochastic Expectation Propagation (SEP) [10] for parameter estimation. SEP combines Assumed Density Filtering (ADF) and Expectation Propagation (EP) in order to scale to large datasets while maintaining accurate estimates. EP alone is usually more accurate than methods such as variational inference and MCMC [1, 18], but its number of parameters increases with the number of data points; SEP addresses this problem. Thus, SEP is a deterministic approximate inference method that prevents memory overheads as the number of data points grows. EP has been shown to be appropriate for Gaussian mixture models [20], hierarchical models such as LDA [18], and even infinite mixture models [6]. Furthermore, SEP has been used with deep Gaussian processes [4], showing the benefits of scalable Bayesian inference and outperforming traditional Gaussian processes. The contributions of this paper are summarized as follows: 1) we show that SEP can provide effective parameter estimates when dealing with large datasets; 2) we derive the foundations to learn an EMSD mixture model using SEP; 3) we exhaustively evaluate the proposed approach on synthetic and real count data and compare its performance with other models and learning approaches.

2 The Exponential-Family Approximation to MSD Distribution

In the clustering setting, we are given a dataset \(\mathcal {X}\) with D samples, \(\mathcal {X} = \{\mathbf {x}_i\}_{i=1}^D\), where each \(\mathbf {x}_i\) is a vector of count data (e.g. a text document or an image, represented as a vector of frequencies of words or visual words, respectively). We assume the dataset has a vocabulary of size V.
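To make this representation concrete, the following minimal sketch (not part of the original work) shows how raw documents can be turned into count vectors \(\mathbf {x}_i\); scikit-learn's CountVectorizer is used here purely as an illustrative choice of tooling.

```python
# Minimal sketch (illustrative only): building count vectors x_i from raw documents.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

vectorizer = CountVectorizer()             # the vocabulary of size V is learned from the corpus
X = vectorizer.fit_transform(docs)         # D x V sparse matrix of word counts

print(vectorizer.get_feature_names_out())  # the V vocabulary words
print(X.toarray())                         # each row is a count vector x_i
```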

The Multinomial Scaled Dirichlet (MSD) distribution is the marginal distribution obtained by integrating out the probability parameter of the multinomial with respect to a Scaled Dirichlet prior, and it is given by [25]:

$$\begin{aligned} \mathcal {MSD}(\mathbf {x}\mid \mathbf {\varvec{\rho }},\varvec{\nu }) =\frac{n!}{\prod _{w=1}^Vx_w!}\frac{\Gamma (s)}{\Gamma (s+n)\prod _{w=1}^V\nu _w^{x_w}}\prod _{w=1}^V\frac{\Gamma (x_w+\rho _w)}{\Gamma (\rho _w)} \end{aligned}$$
(1)

Here \(n=\sum _{w=1}^Vx_w\) is the total number of word counts in \(\mathbf {x}\) and \(s=\sum _{w=1}^V\rho _w\). Note that the authors in [25] use the approximation \(\left( \sum _{w=1}^V\nu _w{p_w}\right) ^{\sum _{w=1}^Vx_w}\approx \prod _{w=1}^V\nu _w^{x_w}\). It is worth mentioning that DCM is a special case of MSD: when \(\varvec{\nu }=\mathbf {1}\) in Eq. (1), we obtain the Dirichlet Compound Multinomial (DCM) distribution [14]. Similar to DCM, the MSD model has an intuitive interpretation, representing the Scaled Dirichlet as a general topic and the Multinomial as a document-specific subtopic, making some words more likely in a document \(\mathbf {x}\) based on word counts.

The representation of text documents is very sparse, as many words in the vocabulary do not appear in most of the documents. Thus, in [28], the authors note that using only the non-zero entries of \(\mathbf {x}\) is computationally efficient, since \(x_w!=1\), \(\nu _w^{x_w}=1\) and \(\Gamma (x_w+\rho _w)/\Gamma (\rho _w)=1\) when \(x_w=0\). Moreover, since in high-dimensional data the parameters are very small [5], the following fact for small values of \(\rho \) when \(x\ge 1\) was used in [28]:

$$\begin{aligned} \lim _{\rho \rightarrow 0} \left[ \frac{\Gamma (x+\rho )}{\Gamma (\rho )}-\Gamma (x)\rho \right] =0 \end{aligned}$$
(2)

Thus, approximating \(\Gamma (x_w+\rho _w)/\Gamma (\rho _w)\approx \Gamma (x_w)\rho _w\) and using the fact that \(\Gamma (x_w)=(x_w-1)!\) leads to an approximation of the MSD distribution known as the Exponential-family approximation to the MSD distribution (EMSD), given by:

$$\begin{aligned} \mathcal {EMSD}(\mathbf {x}\mid \varvec{\alpha },\mathbf {\varvec{\beta }})= \frac{ n! }{\prod _{w:x_w\ge 1}^V x_w} \frac{\Gamma (s)}{\Gamma (s+n)} \prod _{w:x_w\ge 1}^V\frac{\alpha _w}{ \beta _w^{x_w}} \end{aligned}$$
(3)

The parameters of the EMSD distribution are denoted by \(\varvec{\alpha }\) and \(\varvec{\beta }\) to distinguish them from the MSD parameters for clarity.
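As an illustration, the following hedged sketch evaluates the EMSD log-likelihood of Eq. (3) for a single count vector; it assumes, as in the MSD case, that \(s\) is the sum of the shape parameters \(\alpha _w\) and that \(n\) is the document length.

```python
# Hedged sketch of the EMSD log-likelihood in Eq. (3); s is taken to be the sum of
# the shape parameters alpha (an assumption mirroring the MSD definition).
import numpy as np
from scipy.special import gammaln

def emsd_log_pmf(x, alpha, beta):
    """log EMSD(x | alpha, beta) for a single count vector x of length V."""
    x = np.asarray(x, dtype=float)
    nz = x >= 1                               # only non-zero counts contribute
    n = x.sum()                               # document length
    s = alpha.sum()                           # assumed: s = sum_w alpha_w
    log_p = gammaln(n + 1.0)                  # log n!
    log_p -= np.sum(np.log(x[nz]))            # log prod_{w: x_w >= 1} x_w
    log_p += gammaln(s) - gammaln(s + n)      # log Gamma(s) / Gamma(s + n)
    log_p += np.sum(np.log(alpha[nz]) - x[nz] * np.log(beta[nz]))
    return log_p

x = np.array([3, 0, 1, 0, 2])
alpha = np.full(5, 0.05)
beta = np.ones(5)
print(emsd_log_pmf(x, alpha, beta))
```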

3 Stochastic Expectation Propagation

Efficient inference and learning for probabilistic models that scale to large datasets are essential in the Bayesian setting. Thus, a variety of methods have been proposed from sampling approximations [17] to distributional approximations such as stochastic variational inference [8].

Another deterministic approach is Expectation Propagation (EP), which commonly provides more accurate approximations than sampling methods [21] and variational inference [19, 20]. Yet, its number of parameters grows with the number of data points, causing memory overheads and making it difficult to scale to large datasets. Assumed Density Filtering (ADF) [22], which was introduced before EP, maintains a single global approximating posterior; however, it results in poor estimates. Therefore, [10] proposed an alternative that pushes EP to large datasets, called Stochastic Expectation Propagation (SEP). SEP takes the best of these two methods by maintaining a global approximation that is updated locally. It does so by introducing a global site that captures the average effect of the likelihood sites and, as a result, avoids memory overheads.

Given a probabilistic model \(p(\mathcal {X}\mid \varvec{\theta })\) with parameters \(\varvec{\theta }\) drawn from a prior \(p_0(\varvec{\theta })\), SEP approximates a target distribution \(p(\varvec{\theta }\mid \mathcal {X})\), which is commonly the posterior, with a global approximation \(q(\varvec{\theta })\) that belongs to the exponential family. The target distribution must be factorizable such that the posterior can be split into D sites \(p(\varvec{\theta }\mid \mathcal {X})\propto p_0(\varvec{\theta })\prod _{i=1}^Dp_i(\varvec{\theta })\); the initial site \(p_0\) is commonly interpreted as the prior distribution and the remaining \(p_i\) sites represent the contribution of each ith item to the likelihood. The approximating distribution must admit a similar factorization, \(q(\varvec{\theta })\propto p_0(\varvec{\theta })\tilde{p}(\varvec{\theta })^D\).

Unlike EP, SEP maintains a global approximating site, \(\tilde{p}(\varvec{\theta })^D\), to capture the average effect of the likelihood on the posterior. Thus, we only have to maintain the parameters of the approximate posterior and of the approximate global site, which commonly belong to the exponential family. To refine a site, a cavity distribution is first formed by dividing the global approximation by one of the copies of the approximate site, \(q^{\setminus 1}(\varvec{\theta })\propto q(\varvec{\theta })/\tilde{p}(\varvec{\theta })\).

Additionally, in order to approximate each site, a tilted distribution is introduced by combining the cavity distribution with the current site, \(\hat{p}_i(\varvec{\theta })\propto p_i(\varvec{\theta })q^{\setminus 1}(\varvec{\theta })\).

Subsequently, a new posterior is found by minimizing the Kullback-Leibler divergence \(D_{KL}(\hat{p}_i(\varvec{\theta })\mid \mid q^{new}(\varvec{\theta }))\) such that \(\tilde{p}_i(\varvec{\theta })\approx p_i(\varvec{\theta })\). This minimization is equivalent to matching the moments of those distributions [1, 20]. Finally, the approximate site is revised by removing the remaining terms from the current approximation, employing damping [7, 18] to make a partial update, since \(\tilde{p}_i\) captures the effect of a single likelihood function:

$$\begin{aligned} \tilde{p}(\varvec{\theta }) = \tilde{p}(\varvec{\theta })^{1-\eta }\left( \frac{q^{new}(\varvec{\theta })}{q^{\setminus 1}(\varvec{\theta })}\right) ^{\eta } = \tilde{p}(\varvec{\theta })^{1-\eta }\tilde{p}_i(\varvec{\theta })^{\eta } \end{aligned}$$
(4)

Notice that \(\eta \) is the step size, and when \(\eta =1\), no damping is applied. A natural choice is \(\eta =1/D\).
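Because the sites belong to the exponential family, the damped update in Eq. (4) reduces to a convex combination of natural parameters. The short sketch below illustrates this for a hypothetical one-dimensional Gaussian site; the Gaussian choice and the toy values are assumptions for illustration only.

```python
# Minimal sketch (Gaussian site assumed): in natural parameters
# (precision, precision * mean), the damped update of Eq. (4) is a
# convex combination between the old site and the freshly estimated one.
def damped_site_update(site_old, site_new, eta):
    """site_* are dicts of natural parameters; eta is the step size (e.g. 1/D)."""
    return {k: (1.0 - eta) * site_old[k] + eta * site_new[k] for k in site_old}

# toy 1-D Gaussian site
old = {"prec": 0.5, "prec_mean": 0.1}
new = {"prec": 0.9, "prec_mean": 0.4}   # = q_new / cavity, already in natural form
print(damped_site_update(old, new, eta=0.1))
```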

4 EMSD Mixture Model

4.1 Clustering Model

We assume that we are given D documents drawn from a finite mixture of EMSD distributions, where each document \(\mathbf {x}_i\) is represented over a vocabulary of V words and \(K\ge 1\) is the number of mixture components. Thus, a document is drawn from its respective component j as follows: \(\mathbf {x}_{i}\sim \mathcal {EMSD}(\varvec{\alpha }_j,\varvec{\beta }_j)\).

In a mixture model, a latent variable \(\mathcal {Z}=\{\mathbf {z}_i\}_{i=1}^D\) is introduced for each document \(\mathbf {x}_i\) in order to represent the component assignment. We posit a Multinomial distribution for the component assignment such that \(\mathbf {z}_{i} \sim Mult(1, \varvec{\pi })\), where \(\varvec{\pi }=\{{\pi }_j\}_{j=1}^K\) represents the mixing weights, subject to the constraints \(0<\pi _j<1\) and \(\sum _j \pi _j=1\). In other words, \(\mathbf {z}_i\) is a K-dimensional indicator vector containing a value of one when document \(\mathbf {x}_i\) belongs to component j, and zero otherwise. Note that in this setting the value \(z_{ij}=1\) acts as the selector of the component that generates document \(\mathbf {x}_i\) with parameters \(\varvec{\alpha }_j\) and \(\varvec{\beta }_j\); hence, \(p(\mathbf {z}_i\mid \varvec{\pi })=\pi _j\). Thus, the full posterior is given in Eq. 5.

$$\begin{aligned} p(\varvec{\pi },\varvec{\alpha },\varvec{\beta }\mid \mathcal {X})&\propto p(\varvec{\pi })p(\varvec{\alpha })p(\varvec{\beta })\prod _i^D \sum _j^K \pi _j p(\mathbf {x}_i\mid \varvec{\alpha }_j,\varvec{\beta }_j) \end{aligned}$$
(5)

4.2 Parameter Learning

We use SEP in order to learn the parameters of the mixture model. We start by partitioning the likelihood into D sites and define a global approximating site for each of the latent variables (\(\varvec{\pi }\), \(\varvec{\alpha }\), and \(\varvec{\beta }\)). In principle, any distribution belonging to the exponential family can be used for the sites. We use a Gaussian distribution for the parameters of the EMSD distribution in order to facilitate the calculations [12]. For the mixing weights, we use a Dirichlet distribution since it is defined on the \(K-1\) simplex and satisfies the constraints imposed on the mixing weights. Equation 6 illustrates the choices for the approximate sites.

$$\begin{aligned} \tilde{p}(\varvec{\pi })\propto \prod _j \pi _j^{a_j} \quad \tilde{p}(\varvec{\alpha })=\prod _j^K\mathcal {N}(\varvec{\alpha }_j\mid \varvec{m}_j, p_j^{-1})\quad \tilde{p}(\varvec{\beta })=\prod _j^K\mathcal {N}(\varvec{\beta }_j\mid \varvec{n}_j, q_j^{-1}) \end{aligned}$$
(6)

Once the global approximate site has been defined, we compute the approximate posterior \(q(\varvec{\pi },\varvec{\alpha },\varvec{\beta })\) by introducing the priors and the average effect of the global site:

$$\begin{aligned} q(\varvec{\pi }, \varvec{\alpha },\varvec{\beta })\propto&p(\varvec{\pi }\mid \varvec{a}^0) \tilde{p}(\varvec{\pi }\mid \varvec{a})^D \prod _j^K p\left( \varvec{\alpha }_j\mid \varvec{m}^0_j,(p^0_j)^{-1}\right) \tilde{p}\left( \varvec{\alpha }_j\mid \varvec{m}_j,(p_j)^{-1}\right) ^D\\&p\left( \varvec{\beta }_j\mid \varvec{n}_j^0, (q_j^0)^{-1}\right) \tilde{p}\left( \varvec{\beta }_j\mid \varvec{n}_j, q_j^{-1}\right) ^D \end{aligned}$$

The approximate posterior distribution has the following parameters:

$$\begin{aligned} \varvec{a}' = 1 + \varvec{a}^0 + D \varvec{a} \quad (p^{'}_j)^{-1}= (p^{0}_j+D p_j)^{-1} \quad (q^{'}_j)^{-1}= (q^{0}_j+Dq_j)^{-1} \nonumber \\ \quad \varvec{m}_j^{'} = (p_j^{'})^{-1}(p^0_j \varvec{m}_j^0+D p_j \varvec{m}_j) \quad \varvec{n}_j^{'} = (q_j^{'})^{-1}(q^0_j\varvec{n}_j^0+Dq_j \varvec{n}_j) \end{aligned}$$
(7)
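For concreteness, the following hedged sketch implements Eq. (7) for a single component j, treating the precisions \(p_j\) and \(q_j\) as scalars and the means as vectors; the variable names and numeric values are illustrative assumptions.

```python
# Sketch of Eq. (7): parameters of the global approximate posterior obtained from
# the prior and D copies of the averaged site, for one mixture component j.
import numpy as np

def posterior_params(prior, site, D):
    a_post = 1.0 + prior["a0"] + D * site["a"]                    # Dirichlet exponent
    p_post = prior["p0"] + D * site["p"]                          # precision for alpha_j
    q_post = prior["q0"] + D * site["q"]                          # precision for beta_j
    m_post = (prior["p0"] * prior["m0"] + D * site["p"] * site["m"]) / p_post
    n_post = (prior["q0"] * prior["n0"] + D * site["q"] * site["n"]) / q_post
    return {"a": a_post, "p": p_post, "m": m_post, "q": q_post, "n": n_post}

prior = {"a0": 1.0, "p0": 1.0 / 3.0, "m0": np.zeros(4), "q0": 1.0 / 5.0, "n0": np.zeros(4)}
site = {"a": 0.01, "p": 0.02, "m": np.full(4, 0.05), "q": 0.02, "n": np.full(4, 1.0)}
print(posterior_params(prior, site, D=210))
```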

Consequently, we introduce a cavity distribution by removing the contribution of one of the copies of the global site, \(q(\varvec{\pi },\varvec{\alpha },\varvec{\beta })/\tilde{p}(\varvec{\pi },\varvec{\alpha },\varvec{\beta })\). The cavity distribution has parameters \(\varvec{a}^{\setminus 1}\), \(\left( p_j^{\setminus 1}\right) ^{-1}\), \(\varvec{m}_j^{\setminus 1}\), \(\left( q_j^{\setminus 1}\right) ^{-1}\), and \(\varvec{n}_j^{\setminus 1}\), which are calculated as shown in Eq. 8.

$$\begin{aligned} \varvec{a}^{\setminus 1} = \varvec{a}^{'} - \varvec{a} \quad \left( p_j^{\setminus 1}\right) ^{-1}=\left( p^{'}_j - p_j\right) ^{-1} \quad \left( q_j^{\setminus 1}\right) ^{-1}=\left( q^{'}_j - q_j\right) ^{-1} \nonumber \\ \quad \varvec{m}_j^{\setminus 1} = \left( p_j^{\setminus 1}\right) ^{-1}\left( p_j^{'} \varvec{m}_j^{'} - p_{j}\varvec{m}_{j}\right) \quad \varvec{n}_j^{\setminus 1} = \left( q_j^{\setminus 1}\right) ^{-1}\left( q_j^{'} \varvec{n}_j^{'} - q_{j}\varvec{n}_{j}\right) \end{aligned}$$
(8)
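Similarly, a hedged sketch of the cavity computation in Eq. (8), again for a single component j with scalar precisions and purely illustrative values:

```python
# Sketch of Eq. (8): the cavity is obtained by removing one copy of the averaged
# site from the approximate posterior (single component j, scalar precisions).
def cavity_params(post, site):
    p_cav = post["p"] - site["p"]
    q_cav = post["q"] - site["q"]
    return {
        "a": post["a"] - site["a"],
        "p": p_cav,
        "m": (post["p"] * post["m"] - site["p"] * site["m"]) / p_cav,
        "q": q_cav,
        "n": (post["q"] * post["n"] - site["q"] * site["n"]) / q_cav,
    }

post = {"a": 4.1, "p": 4.53, "m": 0.15, "q": 4.4, "n": 0.9}   # illustrative values
site = {"a": 0.01, "p": 0.02, "m": 0.05, "q": 0.02, "n": 1.0}
print(cavity_params(post, site))
```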

We use the cavity distribution and incorporate the ith site, resulting in the tilted distribution \(\hat{p} =\frac{1}{Z_i}p_iq^{\setminus {1}}\). We then minimize the KL divergence between this distribution and the approximate distribution, which is equivalent to matching their moments. However, in this case, matching the moments leads to an analytically intractable integral (i.e. \(Z_i = \sum _j^K \frac{a^{\setminus 1}_j}{\sum _k^K a_k^{\setminus 1}}\mathbb {E}_{p(\alpha _j, \beta _j)}\left[ p(x_i\mid \alpha _j,\beta _j)\right] \)). Thus, we compute this integral via Monte Carlo sampling. After matching the moments, we obtain the parameters of the updated approximate posterior:

$$\begin{aligned}&\varPsi (a_j^{'}) - \varPsi (\sum _j^K a_j^{'})= \varPsi (a_j^{\setminus 1})- \varPsi (\sum _j^K a_j^{\setminus 1})+\nabla _{a_j^{\setminus 1}}\log Z_i\nonumber \\ p_j^{'}=&\left( p_j^{\setminus 1}\right) ^{-1} \left( 2\nabla _{\left( p_j^{\setminus 1}\right) ^{-1}}\log Z_i + p_j^{\setminus 1}\right) \left( p_j^{\setminus 1}\right) ^{-1} -\left( \varvec{m}_j^{'}-\varvec{m}_j^{\setminus 1}\right) \left( \varvec{m}_j^{'}-\varvec{m}_j^{\setminus 1}\right) ^\intercal \nonumber \\&q_j^{'}= \left( q_j^{\setminus 1}\right) ^{-1} \left( 2\nabla _{\left( q_j^{\setminus 1}\right) ^{-1}}\log Z_i + q_j^{\setminus 1}\right) \left( q_j^{\setminus 1}\right) ^{-1} -\left( \varvec{n}_j^{'}- \varvec{n}_j^{\setminus 1}\right) \left( \varvec{n}_j^{'}-\varvec{n}_j^{\setminus 1}\right) ^\intercal \nonumber \\&\varvec{m}_j^{'}=\varvec{m}_j^{\setminus 1} + \left( p_j^{\setminus 1}\right) ^{-1}\nabla _{\varvec{m}_j^{\setminus 1}}\log Z_i \quad \varvec{n}_j^{'}=\varvec{n}_j^{\setminus 1} + \left( q_j^{\setminus 1}\right) ^{-1}\nabla _{\varvec{n}_j^{\setminus 1}}\log Z_i \end{aligned}$$
(9)
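Since the normalizer \(Z_i\) has no closed form, it is estimated by sampling from the cavity. The sketch below shows one possible Monte Carlo estimator; the gradients of \(\log Z_i\) appearing in Eq. (9) could then be obtained numerically or by automatic differentiation (not shown), and the positivity of the sampled parameters is enforced here with a crude absolute value, which is an assumption of this sketch rather than a detail taken from the text.

```python
# Hedged sketch: Monte Carlo estimate of Z_i for a single document x_i using the
# cavity parameters; emsd_log_pmf is a function such as the one sketched in Sect. 2.
import numpy as np

def estimate_Z_i(x_i, a_cav, m_cav, p_cav, n_cav, q_cav, emsd_log_pmf, S=200, rng=None):
    """a_cav: (K,) array; m_cav, n_cav: (K, V) arrays; p_cav, q_cav: (K,) precisions."""
    rng = np.random.default_rng() if rng is None else rng
    weights = a_cav / a_cav.sum()                 # a_j / sum_k a_k under the cavity Dirichlet
    Z = 0.0
    for j in range(len(a_cav)):
        # draw S samples of (alpha_j, beta_j) from the cavity Gaussians of component j
        alpha = rng.normal(m_cav[j], 1.0 / np.sqrt(p_cav[j]), size=(S, len(m_cav[j])))
        beta = rng.normal(n_cav[j], 1.0 / np.sqrt(q_cav[j]), size=(S, len(n_cav[j])))
        alpha, beta = np.abs(alpha), np.abs(beta)  # crude positivity constraint (assumption)
        lik = np.exp([emsd_log_pmf(x_i, a, b) for a, b in zip(alpha, beta)])
        Z += weights[j] * lik.mean()
    return Z
```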

The values of \(\varvec{a}^{'}\) are calculated using fixed-point iteration as described in [16]. Using this updated approximate posterior, we remove the cavity distribution in order to obtain an approximation to the ith site:

$$\begin{aligned}&\varvec{a} = \varvec{a}' - \varvec{a}^{\setminus 1} \quad (p_j)^{-1}=(p_j^{'}-p_j^{\setminus 1})^{-1} \quad \varvec{m}_j=(p_j)^{-1}\left( p_j^{'} \varvec{m}_j^{'}-p_j^{\setminus 1}\varvec{m}_j^{\setminus 1}\right) \nonumber \\&(q_j)^{-1}=(q_j^{'}-q_j^{\setminus 1})^{-1} \quad \varvec{n}_j=(q_j)^{-1}\left( q_j^{'}\varvec{n}_j^{'}-q_j^{\setminus 1}\varvec{n}_j^{\setminus 1}\right) \end{aligned}$$
(10)

Finally, we use damping to partially update the global approximate site. First, we update the parameters of the global site as \(\varTheta ^{new}=(1-\eta )\varTheta ^{old}+\eta \varTheta _i\), where \(\varTheta ^{old}\) are the current parameters of the global site and \(\varTheta _i\) are the parameters of the approximation to a single likelihood. Then, we introduce the global approximate site into the approximate distribution. The learning approach is described in Algorithm 1.

[Algorithm 1: SEP-based learning of the finite EMSD mixture model]
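The main loop of this learning procedure can be summarized with the following hedged outline, in which the per-step computations are injected as callables; all helper names are placeholders for the computations in Eqs. (4) and (7)-(10), not functions defined in the original work.

```python
# Hedged outline of the SEP learning loop described above; posterior_fn, cavity_fn,
# moment_match_fn and site_fn stand for the computations in Eqs. (7), (8), (9), (10).
def sep_emsd(X, priors, site, posterior_fn, cavity_fn, moment_match_fn, site_fn,
             n_epochs=10, eta=None):
    D = len(X)
    eta = 1.0 / D if eta is None else eta                 # natural choice of step size
    post = posterior_fn(priors, site, D)                  # Eq. (7)
    for _ in range(n_epochs):
        for x_i in X:
            cav = cavity_fn(post, site)                   # Eq. (8)
            new_post = moment_match_fn(x_i, cav)          # Eq. (9), Z_i via Monte Carlo
            site_i = site_fn(new_post, cav)               # Eq. (10)
            # Eq. (4): damped, partial update of the single global site
            site = {k: (1.0 - eta) * site[k] + eta * site_i[k] for k in site}
            post = posterior_fn(priors, site, D)          # refresh the global posterior
    return post, site
```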

5 Experimental Results

In this section, we describe the experiments carried out to test the validity of the proposed method on both synthetic and real count data.

5.1 Synthetic Dataset

We create a synthetic dataset \(\mathcal {X}=\{\mathbf {x}_i\}_{i=1}^D\) with \(D=210\) data points by sampling from the probabilistic mixture model. We use \(K=3\) components, each of which is an EMSD distribution, and the mixing weights are uniformly sampled. For simplicity, we set a fixed value of 1 for the scale parameter of the Scaled Dirichlet. Since the shape parameter is commonly \(\alpha _w\ll 1\) [5], we sample it from a Beta distribution.
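With the scale fixed to 1, the Scaled Dirichlet reduces to a Dirichlet, so one possible way to generate such data is to draw, for each document, a component, a Multinomial probability vector, and then the word counts. The sketch below follows this route; the vocabulary size, the document length and the Beta(1, 5) parameters are illustrative choices not specified above.

```python
# Hedged sketch of one way to generate the synthetic mixture: K = 3 components,
# uniformly sampled mixing weights, scale fixed to 1 (Scaled Dirichlet -> Dirichlet)
# and Beta-distributed shape parameters. V, n and Beta(1, 5) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D, K, V, n = 210, 3, 50, 120

pi = rng.dirichlet(np.ones(K))               # mixing weights sampled uniformly on the simplex
alpha = rng.beta(1.0, 5.0, size=(K, V))      # small shape parameters (alpha_w < 1)

X, z = [], []
for _ in range(D):
    j = rng.choice(K, p=pi)                  # component assignment z_i
    p = rng.dirichlet(alpha[j])              # scale = 1: Scaled Dirichlet -> Dirichlet
    X.append(rng.multinomial(n, p))          # word counts of document x_i
    z.append(j)
X = np.asarray(X)
```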

We initialize the priors of the model with covariance matrices \(5\varvec{I}\) and \(3\varvec{I}\) for the scale and shape parameters, respectively. Random values are used for the prior means and the mixing weights parameter. We set a step size of \(\eta =0.1\) and approximate the posterior using SEP. The mixing weights can be estimated using the expected value of \(\pi _j\), i.e. \(\mathbb {E}\left[ \pi _j\right] =a_j^{'}/\sum _{j=1}^Ka_j^{'}\).

Table 1. Original parameters and estimated parameters for the mixture of EMSD using the proposed approach.

The parameters used for generation as well as the estimated values are shown in Table 1. We notice that the estimates are very close to the target values. Since we only need to store the local and global parameters, we emphasize that SEP reduces memory consumption, allowing us to scale EP to larger datasets.

5.2 Sentiment Analysis

We analyze the problem of sentiment analysis in the setting where users of online platforms express opinions or experiences about a product or service through reviews. We exploit these data to investigate the validity of our framework in a case where the correct number of components is known (i.e. positive/negative, \(K=2\)). We use three benchmark datasets [13, 29]: 1) Amazon Review Polarity; 2) Yelp Review Polarity; 3) IMDB Movie Reviews. This section presents the details of our experiments and their results.

Before describing the experimental results, we first outline the key properties of the datasets and the experimental setup. We pre-process the datasets as follows: 1) lowercase all text; 2) remove non-alphabetical characters; 3) lemmatize the text. All datasets consist of reviews and contain two labels indicating whether a post has a positive or negative sentiment.
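A minimal sketch of this preprocessing pipeline follows; the text does not name a specific toolkit, so NLTK's WordNetLemmatizer is used here only as an example choice.

```python
# Hedged sketch of the described preprocessing: lowercase, strip non-alphabetical
# characters, lemmatize. NLTK is an illustrative tooling choice, not the paper's.
import re
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)               # required once for the lemmatizer
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    text = text.lower()                            # 1) lowercase all text
    text = re.sub(r"[^a-z\s]", " ", text)          # 2) remove non-alphabetical characters
    return [lemmatizer.lemmatize(tok) for tok in text.split()]   # 3) lemmatize

print(preprocess("The movies were GREAT, 10/10!"))
```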

Amazon Review Polarity contains 180k customer reviews of products on the Amazon.com website, spanning a period of 18 years. The dataset has an average of 75 words per review and a vocabulary of over 55k unique words.

Yelp Review Polarity contains 560k user reviews of local businesses from Yelp, with an average of 133 words per review and over 85k unique words. The polarity label is obtained by considering reviews with 1 and 2 stars as negative and those with 3 and 4 stars as positive.

IMDB Movie Reviews consists of 50k movie reviews with an average of 231 words per review and a vocabulary of over 76k unique words. Ratings on IMDB are given as star values \(\in [1,10]\), which were linearly mapped to [0, 1] and used as document labels (negative and positive, respectively).

We compare the clustering performance of the EMSD mixture model learned with the proposed SEP against the same model trained with other learning techniques, namely Expectation Propagation (EP) and maximum likelihood (ML). More precisely, we compare the performance of the EMSD models with the following models, which use maximum likelihood to estimate their parameters. First, we include a mixture of Multinomials (MM) [2]; even though the MM is appropriate for modeling common words rather than word burstiness, we add it to the comparison to evaluate its predictive power. Next, we compare against models that capture word burstiness, namely the Dirichlet Compound Multinomial (DCM) [14], the Exponential approximation to the Dirichlet Compound Multinomial (EDCM) [5], the Multinomial Scaled Dirichlet (MSD) [25], and the Exponential approximation to the Multinomial Scaled Dirichlet (EMSD) [28]. Furthermore, we compare with the EDCM mixture model learned with EP, as we recently proposed in [23]. We evaluate the performance of the considered models according to precision and recall, as illustrated in Table 2.

Table 2. Results on the three text datasets. Comparison using precision and recall. ML: maximum likelihood; EP: expectation propagation; SEP: stochastic expectation propagation.
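The text does not specify how clusters are aligned with the ground-truth labels before computing precision and recall; the following hedged sketch shows one common choice, matching clusters to labels with the Hungarian algorithm and then scoring with macro-averaged precision and recall.

```python
# Hedged sketch of one way to score clustering with precision and recall: clusters
# are matched to true labels with the Hungarian algorithm, then scored with
# macro-averaged metrics. This mapping is an assumption, not the paper's procedure.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import precision_score, recall_score, confusion_matrix

def clustering_precision_recall(y_true, y_pred):
    C = confusion_matrix(y_true, y_pred)
    row, col = linear_sum_assignment(-C)            # best cluster-to-label assignment
    mapping = dict(zip(col, row))
    y_mapped = np.array([mapping[c] for c in y_pred])
    return (precision_score(y_true, y_mapped, average="macro"),
            recall_score(y_true, y_mapped, average="macro"))

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1])               # cluster ids need not match label ids
print(clustering_precision_recall(y_true, y_pred))
```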

In general, most models are superior to the Multinomial mixture model (except on the Yelp dataset). We notice that SEP gives results comparable to the EDCM model in terms of precision and recall. Additionally, we evaluate an EDCM mixture that uses EP for parameter learning, and we observe that SEP computes approximations similar to EP with the advantage that there is no need to store the parameters of each approximate site; one of the main advantages is that we only store the local and global parameters, reducing memory usage. More specifically, for the Amazon dataset, EP and SEP are superior in terms of precision and recall compared with most models that use maximum-likelihood estimation. Our intuition is that the length of the documents plays a critical role in parameter estimation: in the Amazon dataset, for example, we obtain better precision and recall using a Bayesian approach since the documents are relatively shorter than in the other two datasets.

6 Conclusions

In this paper, we propose a Stochastic Expectation Propagation (SEP) algorithm to learn a finite EMSD mixture model. We derive the mathematical framework using SEP and, since performing moment matching leads to an intractable integral, we use sampling to compute the required moments. We then evaluate the proposed approach on both synthetic and real data and observe that SEP-EMSD provides results comparable to traditional approaches and, in some cases, superior to them. Although we evaluated the proposed learning method on text data, it can be applied to any type of count data, such as visual-word counts for images or videos. Notably, SEP does not need a site per data point; similar to variational inference, it maintains a global posterior approximation that is updated locally, which reduces memory consumption.