1 Introduction

Expert recommendation [17] enables the timely sharing of high-quality knowledge in community question answering (CQA) [16]. Unfortunately, despite several previous research efforts, expert recommendation remains problematic for various reasons. Firstly, question-answering (QA) communities are inherently time-evolving [21], with new users (both askers and answerers) joining daily and existing users changing their interests and behavior (e.g., long-term inactive users who turn into active participants). Hitherto, expert recommendation has been studied mostly by ignoring the temporal information of posts. Accordingly, the devised approaches are suitable neither for dealing with the natural drift of users’ interests over time nor for promoting short-term answerers (i.e., users with a limited recent answering history). This is a severe limitation that lowers the effectiveness of expert recommendation, since the recommended experts may no longer reply to questions matching interests that were once relevant but are now outdated. In addition, short-term answerers would likely not be recommended at all, since their expertise was gained over too short a period to build a solid reputation as actual experts. Secondly, there is no common agreement on the choice of discriminative content features to capture answerers’ expertise. In most studies, the latter is inferred from the raw text of answers, suitably weighted by the respective votes from the QA community. Tags are mainly ignored or, alternatively, incorporated into post contents as in [20]. However, tags are more insightful, concise and explicit user-generated explanations of both post meaning and topical expertise than the general topics inferable from textual post contents [19]. Thirdly, supplementing content features with further auxiliary data (e.g., networks of user interactions) for more effective expert recommendation requires devising a plausible joint processing of such information. Hitherto, both sources of information have been combined mainly through simplistic schemes such as linear interpolation [17].

In this paper, we propose a new collaborative approach to recommending question-specific experts in QA communities. The expertise of answerers is determined from the tags, votes and temporal information of their answers, as well as from the asking-answering relationships in the targeted QA community. More precisely, answer tags are employed to capture and represent the topical expertise of answerers. Votes indicate the degree to which answerers are publicly acknowledged within the QA community as experts under the tags of the respective answers. Posting time allows for discounting [8] earlier answers, so as to account for the natural drift of answerers’ interests over time without penalizing short-term answerers. Besides, asking-answering interactions inform the identification of experts, since repliers to expert askers are likely to be experts as well. Essentially, for each posted question, the presented approach recommends the answerers who are credited with the highest degree of expertise under the tags of that question at routing time. In particular, the expertise of answerers under already-replied tags is determined by means of the votes and temporal information of their answers to the questions labelled with such tags. Instead, the unknown expertise of answerers under tags not yet replied to is predicted through a latent-factor generative model of temporally-discounted user expertise and asking-answering behavior. Under such a model, Bayesian probabilistic matrix factorization [15] and the statistical formalization of asking-answering are seamlessly integrated. This allows for explaining both the expertise of users and their behavioral patterns as the result of a generative process governed by a certain number of latent factors. These are estimated via an MCMC algorithm that implements the derived mathematical details of Gibbs sampling inference under the devised model.

Extensive tests over real-world CQA data show that our approach outperforms several state-of-the-art competitors in recommendation effectiveness.

This paper proceeds as follows. Notation and preliminaries are introduced in Sect. 2. The devised model is developed in Sect. 3. Expertise prediction for recommendation and posterior inference are covered in Sect. 4. The experimental evaluation of our approach is presented in Sect. 5. Finally, conclusions are drawn in Sect. 6, where future research is also previewed.

2 Preliminaries

A question-answering (QA) community \(\varvec{D}\) can be formalized as a triple \(\varvec{D}\triangleq \langle \varvec{U}, \varvec{T}, \varvec{G}\rangle \), where

  • \(\varvec{U}= \{u_1, \ldots , u_N\}\) is a set of N users;

  • \(\varvec{T}= \{t_1, \ldots , t_M\}\) is a set of M tags;

  • \(\varvec{G}= \langle \varvec{V}, \varvec{A}\rangle \) is a directed communication network shaped by user interaction behavior, with \(\varvec{V}\subseteq \varvec{U}\) and \(\varvec{A}\subseteq \varvec{U}\times \varvec{U}\) being the set of nodes and edges, respectively.

The generic user \(u \in \varvec{U}\) can ask questions and/or provide answers. In order to capture the expertise of u, we focus on her answering history \(\varvec{a}_u = \{ a_{u,1}, \ldots , a_{u,N_u}\}\), i.e., the time-ordered sequence of \(N_u\) replies from u to as many questions posted by other users of \(\varvec{D}\). The arbitrary answer \(a_{u, h} \in \varvec{a}_u\) (with \(h=1,\ldots ,N_u\)) is associated with a respective timestamp \( ts _{u, h}\), an explicative set of tags \(\varvec{t}_{u, h} \subseteq \varvec{T}\) and a vote score \(s_{u, h}\). \( ts _{u, h}\) indicates when \(a_{u, h}\) was posted. For any two answers \(a_{u,h_i},a_{u,h_j} \in \varvec{a}_u\), \(h_i < h_j\) iff \( ts _{u, h_i} < ts _{u, h_j}\). Timestamps allow for dealing with the drift of the interests and skills of u across the answering history \(\varvec{a}_u\) by means of gradual forgetting [8]: the expertise of u is estimated from the whole answering history \(\varvec{a}_u\), with the earlier answers realistically considered to be outdated and, thus, less informative of her current interests and skills. The tags in \(\varvec{t}_{u, h}\) are an insightful description of the actual themes covered by \(a_{u, h}\). In principle, \(\varvec{t}_{u, h}\) is a more accurate representation of both the intended meaning of \(a_{u, h}\) and the topical expertise of u, in comparison with the more general topics inferable from the textual content of \(a_{u, h}\) [19]. For this reason, the wording of \(a_{u, h}\) is disregarded and, consequently, the computational burden of processing very large amounts of raw text is avoided. \(s_{u,h}\) indicates the acknowledged degree of expertise gained by u with regard to the question answered through \(a_{u, h}\) and, by extension, under each tag within \(\varvec{t}_{u, h}\).

At the current timestamp \( now \), the expertise of all users in \(\varvec{U}\) under the tags of \(\varvec{T}\) is summarized by matrix \(\varvec{E}^{( now )}\). Its generic entry \(E^{( now )}_{ut}\) quantifies the expertise of user u under tag t at time \( now \) as the below weighted average

$$\begin{aligned} E^{( now )}_{ut} = \frac{\sum _{a_{u,h} \in \varvec{a}_u} s_{u, h} \cdot \delta _{t, \varvec{t}_{u, h}} \cdot w^{( now )}_{u,h}}{\sum _{a_{u,h} \in \varvec{a}_u} \delta _{t, \varvec{t}_{u, h}} \cdot w^{( now )}_{u,h}} \end{aligned}$$
(1)

In Eq. 1, \(\delta _{t, \varvec{t}_{u, h}}\) is 1 iff \(t \in \varvec{t}_{u, h}\), and 0 otherwise. If \(\delta _{t, \varvec{t}_{u, h}} = 0\) for each \(h=1,\ldots ,N_{u}\), \(E^{( now )}_{ut}\) is assumed to be 0, which corresponds to an unknown or missing value. Besides, \(w^{(now)}_{u,h} = e^{-\lambda (now - ts _{u, h})}\) is a weighting scheme that implements gradual forgetting through exponential ageing. Intuitively, the earlier answers are not ignored in the estimation of the current expertise of u under t. Rather, their contribution to \(E^{( now )}_{ut}\) decays exponentially according to the respective timestamps. Remarkably, such a modeling choice does not penalize users with a short replying history (such as new users or formerly inactive users with a recent answering history), while still not discarding the old answers of long-term answerers. Notice that \(w^{( now )}_{u,h}\) is parameterized by the decay rate \(\lambda \), which determines how rapidly the contribution of answers to user expertise decays over time. Essentially, larger values of \(\lambda \) imply a faster decay of earlier answers.
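For concreteness, the computation of Eq. 1 can be sketched as follows in Python (an illustrative choice; the paper prescribes no implementation). The answer records, the reference time \( now \) and the decay rate used in the toy usage are hypothetical.

```python
import math
from collections import defaultdict

def expertise_matrix(answers, now, lam=0.2):
    """Sketch of Eq. 1: the expertise of user u under tag t is the vote average
    of her answers tagged with t, weighted by exponential ageing
    w = exp(-lam * (now - ts)). Pairs never observed stay missing (i.e., 0)."""
    num = defaultdict(float)  # numerator of Eq. 1, keyed by (user, tag)
    den = defaultdict(float)  # denominator of Eq. 1, keyed by (user, tag)
    for user, ts, tags, score in answers:        # one record per answer a_{u,h}
        w = math.exp(-lam * (now - ts))          # gradual-forgetting weight w_{u,h}
        for tag in tags:                         # delta_{t, t_{u,h}} = 1 only for these tags
            num[(user, tag)] += score * w
            den[(user, tag)] += w
    return {key: num[key] / den[key] for key in num}

# Toy usage (hypothetical data): the older answer contributes far less.
answers = [("alice", 10.0, {"python"}, 5), ("alice", 90.0, {"python", "numpy"}, 2)]
E = expertise_matrix(answers, now=100.0)
print(round(E[("alice", "python")], 3))  # close to 2, the score of the recent answer
```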

As a supplement to the information from the answering history of users, their asking-answering interactions are also captured as edges of \(\varvec{G}\). More precisely, an edge \(u_i \rightarrow u_j\) from a responder \(u_i\) to an asker \(u_j\) belongs to \(\varvec{A}\), if \(u_i\) answered at least one question posted by \(u_j\). By an abuse of notation, we also write \(\varvec{G}\) to denote the adjacency matrix associated with the asking-answering graph. The generic entry \(G_{ij}\) is 1 iff \(u_i \rightarrow u_j \in \varvec{A}\) and 0 otherwise.

2.1 Problem Statement

Given a question q, let \(\varvec{t}_{q}\) be the set of tags attached to q by the asker. Also, assume that \( now \) is the time when q is routed to the answerers. We aim to recommend q to the users with the highest acknowledged expertise under the tags of \(\varvec{t}_q\) at time \( now \), i.e., those most likely to reply with high-quality answers.

Unfortunately, in the context of the generic QA community \(\varvec{D}\), \(\varvec{E}^{( now )}\) and \(\varvec{G}\) are generally very sparse. Consequently, the expertise of users under specific tags within \(\varvec{t}_q\) may not be known. In this paper, we exploit latent-factor modeling to predict the unknown values of \(\varvec{E}^{( now )}\), which correspond to the current expertise of answerers under the various adopted tags. Thus, experts can be simply recommended from a list of answerers, ranked by their average expertise under the tags attached to q.

Hereinafter, to avoid cluttering notation, we will write \(\varvec{E}\) to mean \(\varvec{E}^{( now )}\).

3 The ENGAGE Model

ENGAGE (timE-evolviNG tAG-based Expertise) is a Bayesian generative latent-factor model of temporally-discounted expertise and asking-answering behavior in QA communities. Under ENGAGE, the matrices \(\varvec{E}\) and \(\varvec{G}\) of a QA community \(\varvec{D}\) are the result of a probabilistic generative process ruled by K latent factors. These are captured by embedding users and tags in a K-dimensional latent space, through the seamless integration of Bayesian probabilistic matrix factorization [15] and the statistical modeling of asking-answering behavior.

Formally, each user \(u \in \varvec{U}\) is associated with a column vector \(\varvec{L}_u \in \mathbb {R}^K\). The k-th entry of \(\varvec{L}_u\) (with \(k=1,\ldots , K\)) is a random variable representing the unknown degree to which the k-th latent factor explains the expertise of u. Analogously, each tag \(t \in \varvec{T}\) is associated with a column vector \(\varvec{H}_t \in \mathbb {R}^K\). The k-th entry of \(\varvec{H}_t\) (with \(k=1,\ldots , K\)) is a random variable representing the unknown extent to which the k-th latent factor is inherently characteristic of t. The latent-factor representations of all users and tags are collectively denoted as \(\varvec{L}\in \mathbb {R}^{K \times N}\) and \(\varvec{H}\in \mathbb {R}^{K \times M}\), respectively. The data likelihood (i.e., the conditional distribution over the entries of \(\varvec{E}\) and \(\varvec{G}\)) is

$$\begin{aligned}&\Pr ( \varvec{E}| \varvec{L}, \varvec{H}, \alpha ) = \prod _{u \in \varvec{U}}\prod _{t\in \varvec{T}}\mathcal {N}(E_{ut} ; \mu _{u,t}, \alpha ^{-1})^{\delta _{ut}} \end{aligned}$$
(2)
$$\begin{aligned}&\Pr \left( \varvec{G}| \varvec{L}, \beta \right) = \prod _{u_i\rightarrow u_j \in \varvec{A}} \mathcal {N}(G_{ij} ; \mu _{u_i,u_j}, \beta ^{-1}) \end{aligned}$$
(3)

with

$$\begin{aligned} \mu _{u,t} = \varvec{L}_u^T \cdot \varvec{H}_t \text{ and } \mu _{u_i,u_j} = \varvec{L}_{u_i}^T\varvec{L}_{u_j} \end{aligned}$$

In Eq. 2 and Eq. 3, \(\mathcal {N}(\cdot ; \mu , \alpha ^{-1})\) is the Gaussian distribution with mean \(\mu \) and precision \(\alpha \). In particular, according to Eq. 2, the current expertise of answerers under the adopted tags is centered around the explanation provided by the dot product of the respective latent-factor representations. \(\delta _{ut}\) is 1 iff \(E_{ut} > 0\) (i.e., if the expertise of u under t is actually acknowledged) and 0 otherwise. Equation 3 seamlessly incorporates the supplementary information regarding the asking-answering interactions of users. Specifically, according to Eq. 3, the asking-answering interactions are centered around the dot product of the latent-factor representations of the involved users. This provides a valuable contribution to the identification of experts, since users who answer questions from highly expert users are likely to be experts themselves.
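As a minimal illustration (not part of the original formulation), the joint log-likelihood of Eq. 2 and Eq. 3 might be evaluated as follows, assuming dense NumPy matrices with unobserved entries encoded as 0.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(E, G, L, H, alpha, beta):
    """Log of Eq. 2 and Eq. 3. E is N x M (E_ut = 0 means unobserved),
    G is the N x N adjacency matrix, L is K x N and H is K x M."""
    mask_E = E > 0                     # delta_ut of Eq. 2
    mu_E = L.T @ H                     # mu_{u,t} = L_u^T . H_t
    ll_E = norm.logpdf(E[mask_E], loc=mu_E[mask_E], scale=alpha ** -0.5).sum()

    mask_G = G > 0                     # edges u_i -> u_j of A in Eq. 3
    mu_G = L.T @ L                     # mu_{u_i,u_j} = L_{u_i}^T . L_{u_j}
    ll_G = norm.logpdf(G[mask_G], loc=mu_G[mask_G], scale=beta ** -0.5).sum()
    return ll_E + ll_G
```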

The latent-factor representations of users and tags stem from multivariate Gaussian prior distributions parameterized, respectively, by \(\varvec{\varTheta _{\varvec{L}}} = \left\{ \varvec{\mu }_{\varvec{L}}, \varLambda _{\varvec{L}} \right\} \) and \(\varvec{\varTheta }_{\varvec{H}} = \left\{ \varvec{\mu }_{\varvec{H}}, \varLambda _{\varvec{H}} \right\} \). In turn, such parameters are drawn from the below Gaussian-Wishart prior distributions (hereinafter indicated as \(\mathcal {NW}\)) [2]

$$ \Pr ( \varTheta _{\varvec{X}} | \varTheta _0) = \mathcal {N}\left( \varvec{\mu }_{\varvec{X}}; \varvec{\mu }_0, \left[ \beta _0 \varLambda _{\varvec{X}} \right] ^{-1} \right) \cdot \mathcal {W}\left( \varLambda _{\varvec{X}} ; \nu _0, \mathbf {W}_0 \right) $$

where \(\varvec{X} \in \{\varvec{L}, \varvec{H}\}\), \(\mathcal {W}\left( \varLambda _{\varvec{X}} ; \nu _0, \mathbf {W}_0 \right) \) denotes the Wishart distribution [2] and \(\varvec{\varTheta }_0 = \left\{ \varvec{\mu }_0, \beta _0,\nu _0, \mathbf {W}_0 \right\} \) is a set of hyperparameters.

The conditional (in)dependencies among the random variables under ENGAGE are shown by means of plate notation in Fig. 1a. Notice that unshaded nodes correspond to latent variables, whereas shaded nodes correspond to observed quantities. The generative process modeled by ENGAGE produces the realizations of the observed random variables (i.e., the individual entries of \(\varvec{E}\) and \(\varvec{G}\)) according to the conditional (in)dependencies of Fig. 1a, as detailed in Fig. 1b.

Fig. 1. Graphical representation of ENGAGE (a) and its generative process (b).
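For illustration only, the generative process of Fig. 1b can be sketched as follows; the hyperparameter defaults (\(\varvec{\mu }_0 = \mathbf {0}\), \(\nu _0 = K\), \(\mathbf {W}_0 = \mathbf {I}\)) are hypothetical choices of this sketch, not values prescribed by the model.

```python
import numpy as np
from scipy.stats import wishart, multivariate_normal as mvn

rng = np.random.default_rng(0)

def generate(N, M, K, alpha=2.0, beta=2.0, mu0=None, beta0=2.0, nu0=None, W0=None):
    """Sketch of the ENGAGE generative process: draw (mu_X, Lambda_X) from a
    Gaussian-Wishart prior, then the user/tag latent vectors, then E and G."""
    mu0 = np.zeros(K) if mu0 is None else mu0
    nu0 = K if nu0 is None else nu0
    W0 = np.eye(K) if W0 is None else W0

    def draw_theta():
        Lam = wishart.rvs(df=nu0, scale=W0, random_state=rng)   # Lambda_X ~ W(nu0, W0)
        mu = mvn.rvs(mean=mu0, cov=np.linalg.inv(beta0 * Lam), random_state=rng)
        return mu, Lam

    mu_L, Lam_L = draw_theta()
    mu_H, Lam_H = draw_theta()
    L = rng.multivariate_normal(mu_L, np.linalg.inv(Lam_L), size=N).T  # K x N
    H = rng.multivariate_normal(mu_H, np.linalg.inv(Lam_H), size=M).T  # K x M

    E = rng.normal(L.T @ H, alpha ** -0.5)   # Eq. 2: one draw per (user, tag) pair
    G = rng.normal(L.T @ L, beta ** -0.5)    # Eq. 3: one draw per (answerer, asker) pair
    return L, H, E, G
```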

4 Model Inference

Under ENGAGE, the experts for a given question q are found by ranking users based on a recommendation score that involves the latent-factor representations of users and tags. The recommendation score is introduced in Sect. 4.1. The inference of the latent-factor representations is discussed in Sect. 4.2.

4.1 Answerer Ranking for Recommendation

The rank of an answerer \(u \in \varvec{U}\) in the list of experts for q is determined by the score \(P_{uq}\) of her acknowledged/predicted expertise. \(P_{uq}\) is computed by averaging the current expertise of u under the individual tags of \(\varvec{t}_q\). This requires distinguishing between two cases. Let t be a tag of \(\varvec{t}_q\). If the expertise of u under t is acknowledged, \(E_{ut}\) can be directly used in the definition of \(P_{uq}\). Otherwise, if the expertise of u under t is an unknown entry of \(\varvec{E}\), then \(E_{ut}\) is predicted by resorting to the latent-factor representations of u and t under ENGAGE. Accordingly, \(P_{uq} = \frac{1}{|\varvec{t}_q|} \sum _{t \in \varvec{t}_q} \hat{E}_{ut}\), where \(\hat{E}_{ut}\) is defined in Eq. 4 below, so as to incorporate the current expertise of u under t according to the two above cases.

$$\begin{aligned} \hat{E}_{ut}= \left\{ \begin{array}{ll} E_{ut} &{} \text{ if } E_{ut} > 0\\ \frac{1}{S} \sum ^S_{s=1} \left( \varvec{L}^{(s)}_u\right) ^T \cdot \varvec{H}^{(s)}_t &{} \text{ if } E_{ut} = 0\\ \end{array} \right. \end{aligned}$$
(4)

In Eq. 4, S is the number of samples of both \(\varvec{L}_u\) and \(\varvec{H}_t\), respectively referred to as \(\varvec{L}^{(s)}_u\) and \(\varvec{H}^{(s)}_t\) (with \(s = 1, \ldots , S\)). Let \(\varvec{\varTheta } = \{ \varvec{L}, \varvec{H}\} \cup \varvec{\varTheta }_{\varvec{L}} \cup \varvec{\varTheta }_{\varvec{H}}\). In principle, all samples \(\varvec{L}^{(s)}_u\) and \(\varvec{H}^{(s)}_t\) should be drawn from the posterior distribution \(\Pr (\varvec{\varTheta }| \varvec{E}, \varvec{G}, \alpha , \beta , \varvec{\varTheta }_0)\). However, the latter is analytically intractable. Therefore, \(\varvec{L}^{(s)}_u\) and \(\varvec{H}^{(s)}_t\) are drawn through approximate posterior inference, as described in Sect. 4.2.
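A sketch of the resulting ranking score (Eq. 4 plus the averaging over \(\varvec{t}_q\)), again in illustrative NumPy with array shapes assumed as in Sect. 3:

```python
import numpy as np

def recommendation_scores(E, L_samples, H_samples, question_tags):
    """P_uq of Sect. 4.1 for every user u: the average, over the tag indices in
    question_tags, of the acknowledged expertise E_ut when E_ut > 0, or of its
    prediction averaged over the S posterior samples of L and H (Eq. 4).
    Shapes: E is N x M, L_samples is S x K x N, H_samples is S x K x M."""
    E_pred = np.mean([L.T @ H for L, H in zip(L_samples, H_samples)], axis=0)
    E_hat = np.where(E > 0, E, E_pred)           # keep acknowledged expertise as-is
    return E_hat[:, question_tags].mean(axis=1)  # one score P_uq per user

# Experts are then recommended by ranking users by decreasing score, e.g.:
# ranking = np.argsort(-recommendation_scores(E, L_samples, H_samples, t_q))
```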

4.2 Approximate Posterior Inference

A well-known technique for approximate stochastic inference [14] is Gibbs sampling. The latter defines a (first-order) Markov chain, whose stationary distribution is the true posterior distribution \(\Pr (\varvec{\varTheta }| \varvec{E}, \varvec{G}, \alpha , \beta , \varvec{\varTheta }_0)\). This is accomplished by means of reiterated transitions from the current sample of the model parameters \(\varvec{\varTheta }\) to a new one. More precisely, at the generic transition, each parameter \(\varvec{\theta } \in \varvec{\varTheta }\) is sequentially sampled from the respective full conditional \(\Pr (\varvec{\theta }| \varvec{\varTheta } - \varvec{\theta }, \varvec{E}, \varvec{G}, \alpha , \beta , \varvec{\varTheta }_0) \), i.e., the conditional distribution over \(\varvec{\theta }\) given all other parameters \(\varvec{\varTheta } - \varvec{\theta }\), the hyperparameters \(\varvec{\varTheta }_0\) and the observations \(\varvec{E}\) and \(\varvec{G}\).

The derived full conditional distributions over the individual parameters of ENGAGE are reported next, along with the algorithm designed to perform Gibbs sampling inference.

Parameters \(\varvec{L}_u\) and \(\varvec{H}_t\). Due to the conjugacy between the multivariate Gaussian distribution on \(\varvec{L}_u\) (with unknown parameters \(\varvec{\varTheta }_{\varvec{L}}\)) and the Gaussian-Wishart prior on \(\varvec{\varTheta }_{\varvec{L}}\), the full conditional on \(\varvec{L}_u\) is a multivariate Gaussian distribution, i.e.,

$$\begin{aligned} \varvec{L}_u \sim \mathcal {N}\left( \varvec{\mu }_{\varvec{L}}^{*(u) }, \left[ \varLambda _{\varvec{L}}^{*(u)}\right] ^{-1}\right) \end{aligned}$$
(5)

where

$$\begin{aligned} \varLambda _{\varvec{L}}^{*(u) }&= \varLambda _{\varvec{L}} + \alpha \sum _{t \in \varvec{T}} \delta _{ut}\varvec{H}_t\varvec{H}_t^T + \beta \sum _{v \in \varvec{U}} G_{uv}\varvec{L}_v\varvec{L}_v^T\\ \varvec{\mu }_{\varvec{L}}^{*(u) }&= \left[ \varLambda _{\varvec{L}}^{*(u) }\right] ^{-1} \left[ \alpha \sum _{t \in \varvec{T}}\delta _{ut}\varvec{H}_t E_{ut} + \beta \sum _{v \in \varvec{U}} \varvec{L}_v G_{uv} + \varLambda _{\varvec{L}} \varvec{\mu }_{\varvec{L}} \right] \end{aligned}$$

Likewise, because of the conjugacy between the multivariate Gaussian distribution on \(\varvec{H}_t\) (with unknown parameters \(\varvec{\varTheta }_{\varvec{H}}\)) and the Gaussian-Wishart prior on \(\varvec{\varTheta }_{\varvec{H}}\), the full conditional on \(\varvec{H}_t\) is a multivariate Gaussian distribution, i.e.,

$$\begin{aligned} \varvec{H}_{t} \sim \mathcal {N}\left( \varvec{\mu }_{\varvec{H}}^{*(t) }, \left[ \varLambda _{\varvec{H}}^{*(t)}\right] ^{-1}\right) \end{aligned}$$
(6)

with

$$\begin{aligned} \varLambda _{\varvec{H}}^{*(t) }&= \varLambda _{\varvec{H}} + \alpha \sum _{u\in \varvec{U}} \delta _{ut} \varvec{L}_u \varvec{L}_u^T\\ \varvec{\mu }_{\varvec{H}}^{*(t) }&= \left[ \varLambda _{\varvec{H}}^{*(t) }\right] ^{-1} \left[ \alpha \sum _{u \in \varvec{U}} \delta _{ut} \varvec{L}_u E_{ut} + \varLambda _{\varvec{H}}\varvec{\mu }_{\varvec{H}} \right] \end{aligned}$$
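In code, drawing \(\varvec{L}_u\) and \(\varvec{H}_t\) from the above full conditionals might look as follows (a sketch under the same dense-matrix assumptions as before, with \(\delta _{ut}\) encoded as \(E_{ut} > 0\) and \(\varvec{G}\) as a 0/1 matrix):

```python
import numpy as np

def sample_L_u(u, E, G, H, L, mu_L, Lam_L, alpha, beta, rng):
    """Draw L_u from its full conditional (Eq. 5). Shapes: E is N x M,
    G is N x N, L is K x N, H is K x M."""
    obs = E[u] > 0                                          # tags with acknowledged expertise
    prec = Lam_L + alpha * (H[:, obs] @ H[:, obs].T) \
                 + beta * ((L * G[u]) @ L.T)                # sum over askers v answered by u
    mean = np.linalg.solve(prec, alpha * (H[:, obs] @ E[u, obs])
                                 + beta * (L @ G[u])
                                 + Lam_L @ mu_L)
    return rng.multivariate_normal(mean, np.linalg.inv(prec))

def sample_H_t(t, E, L, mu_H, Lam_H, alpha, rng):
    """Draw H_t from its full conditional (Eq. 6)."""
    obs = E[:, t] > 0                                       # users with acknowledged expertise under t
    prec = Lam_H + alpha * (L[:, obs] @ L[:, obs].T)
    mean = np.linalg.solve(prec, alpha * (L[:, obs] @ E[obs, t]) + Lam_H @ mu_H)
    return rng.multivariate_normal(mean, np.linalg.inv(prec))
```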

Parameters \(\varvec{\varTheta }_{\varvec{L}}\) and \(\varvec{\varTheta }_{\varvec{H}}\). For each \(\varvec{X} \in \{\varvec{L}, \varvec{H}\}\), the full conditional distribution over \(\varvec{\varTheta }_{\varvec{X}} = \{\varvec{\mu }_{\varvec{X}}, \varvec{\varLambda }_{\varvec{X}}\}\) is the following Gaussian-Wishart distribution [6, p. 178]

$$\begin{aligned} \Pr (\varvec{\mu }_{\mathbf {X}}, \varvec{\varLambda }_{\varvec{X}}| \varvec{X}, \varvec{\varTheta }_0) =&\mathcal {N}(\varvec{\mu }_{\varvec{X}}|\varvec{\mu }_{\varvec{X}}^*, [(\beta _0 + c) \varvec{\varLambda }_{\varvec{X}}]^{-1}) \nonumber \\&\cdot \mathcal {W}(\varvec{\varLambda }_{\varvec{X}}|\nu _0 + c, \varvec{W}_{\varvec{X}}^*) \end{aligned}$$
(7)

where c is the number of columns within matrix \(\varvec{X}\) and

$$\begin{aligned}&\varvec{\mu }_{\varvec{X}}^* = \frac{\beta _0 \varvec{\mu }_0 + c \overline{\varvec{X}}}{\beta _0 + c}; \;\;\, \mathbf {S}_{\varvec{X}} = \frac{1}{c} \sum _{i=1}^{c} (\varvec{X}_i - \overline{\varvec{X}}) (\varvec{X}_i - \overline{\varvec{X}})^T; \;\;\, \overline{\varvec{X}} = \frac{1}{c} \sum ^{c}_{i=1} \varvec{X}_i \\&\left[ \mathbf {W}^*_{\varvec{X}}\right] ^{-1} = \mathbf {W}_0^{-1} + c \varvec{S}_{\varvec{X}} + \frac{\beta _0 c}{\beta _0 + c} (\varvec{\mu }_0 - \overline{\varvec{X}})(\varvec{\mu }_0 - \overline{\varvec{X}})^T \end{aligned}$$
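For illustration, resampling \(\varvec{\varTheta }_{\varvec{X}}\) according to Eq. 7 can be sketched as follows (the helper name and the use of SciPy are assumptions of this sketch):

```python
import numpy as np
from scipy.stats import wishart

def sample_theta(X, mu0, beta0, nu0, W0, rng):
    """Draw (mu_X, Lambda_X) from the Gaussian-Wishart posterior of Eq. 7,
    where X is the K x c matrix of current user (or tag) latent vectors."""
    c = X.shape[1]
    x_bar = X.mean(axis=1)
    S = np.cov(X, bias=True)                   # (1/c) sum_i (X_i - x_bar)(X_i - x_bar)^T
    diff = (mu0 - x_bar)[:, None]
    W_star_inv = np.linalg.inv(W0) + c * S + (beta0 * c / (beta0 + c)) * (diff @ diff.T)
    Lam = wishart.rvs(df=nu0 + c, scale=np.linalg.inv(W_star_inv), random_state=rng)
    mu_star = (beta0 * mu0 + c * x_bar) / (beta0 + c)
    mu = rng.multivariate_normal(mu_star, np.linalg.inv((beta0 + c) * Lam))
    return mu, Lam
```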

Gibbs Sampling. Algorithm 1 sketches the pseudo code of the sampler designed to implement approximate posterior inference under ENGAGE. After a preliminary initialization (line 1), the sampler enters a loop (lines 2–11), whose generic iteration h comprises two steps. \(\varvec{\varTheta }_{\varvec{L}}^{(h)}\) and \(\varvec{\varTheta }_{\varvec{H}}^{(h)}\) are drawn at the first step (lines 3–4) and are then used to draw \(\varvec{L}^{(h)}_u\) and \(\varvec{H}^{(h)}_t\) at the second step (lines 5–10).

The maximum number H of iterations is established by following the widely-adopted convergence criterion in [12]. This allows the Markov chain behind the Gibbs sampler to reach its equilibrium after an initial burn-in period. As a consequence, the S samples used in Eq. 4 can be drawn once convergence is met (i.e., after the burn-in period, during which samples are still sensitive to the preliminary initialization).

Algorithm 1. Gibbs sampling inference under ENGAGE (pseudo code).
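Since Algorithm 1 is only given as pseudo code, a compact Python rendition of the loop might look as follows; it reuses the hypothetical helpers sample_theta, sample_L_u and sample_H_t sketched earlier, and the fixed burn-in length is an illustrative placeholder rather than the convergence criterion of [12].

```python
import numpy as np

def gibbs_sampler(E, G, K, alpha, beta, mu0, beta0, nu0, W0,
                  n_iters=1000, burn_in=800, rng=None):
    """Sketch of Algorithm 1: alternate draws of the hyperparameters
    (Theta_L, Theta_H) and of the latent vectors L_u and H_t, keeping the
    post-burn-in samples that feed Eq. 4."""
    rng = rng or np.random.default_rng(0)
    N, M = E.shape
    L = rng.standard_normal((K, N))        # line 1: initialization
    H = rng.standard_normal((K, M))
    samples = []
    for h in range(n_iters):               # lines 2-11
        mu_L, Lam_L = sample_theta(L, mu0, beta0, nu0, W0, rng)   # line 3
        mu_H, Lam_H = sample_theta(H, mu0, beta0, nu0, W0, rng)   # line 4
        for u in range(N):                                        # lines 5-7
            L[:, u] = sample_L_u(u, E, G, H, L, mu_L, Lam_L, alpha, beta, rng)
        for t in range(M):                                        # lines 8-10
            H[:, t] = sample_H_t(t, E, L, mu_H, Lam_H, alpha, rng)
        if h >= burn_in:                    # keep only post burn-in samples
            samples.append((L.copy(), H.copy()))
    return samples
```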

5 Experimental Evaluation

We comparatively investigated the recommendation effectiveness of ENGAGE.

5.1 Data Set

All experiments were conducted on Stack Overflow [1, 16], a real-world QA community for sharing knowledge on computer programming. More precisely, we formed our training and test sets from an anonymized quarterly dump of all Stack Overflow data produced by its users within the time interval ranging from Jan 1, 2015 to July 31, 2015. Such a dump is publicly released by the Stack Exchange network under the Creative Commons BY-SA 4.0 licence. As far as the training set is concerned, we retained all tags that were adopted at least 50 times in the time interval from Jan 1, 2015 to June 30, 2015, and all users who provided more than 80 posts [20] in the same period. The selected tags and users, along with their answers, the respective questions, timestamps and votes, were included in the training set. Overall, the latter consists of 3,376 users, 40,382 questions and 60,968 answers. Regarding the test set, we focused on a collection \(\varvec{Q}\) of questions (with \(|\varvec{Q}| = 1,357 \)) posted by the users in the training set in the later time interval from July 1, 2015 to July 31, 2015. These questions are labelled with tags and answered by answerers from the training set. We chose such users, their answers to the questions of \(\varvec{Q}\), the respective timestamps and votes as the test set. As a whole, the latter consists of 1,357 questions, 3,376 users and 3,771 answers.
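For concreteness, the training-set filtering described above can be sketched roughly as follows; the DataFrame layout and column names are assumptions about how the dump might be loaded, not part of the paper's preprocessing pipeline.

```python
import pandas as pd

def filter_training(posts, start="2015-01-01", end="2015-06-30",
                    min_tag_count=50, min_user_posts=80):
    """Keep tags used at least 50 times and users with more than 80 posts in the
    training window; `posts` is assumed to have columns OwnerUserId,
    CreationDate (datetime) and Tags (list of tag names)."""
    window = posts[(posts.CreationDate >= start) & (posts.CreationDate <= end)]
    tag_counts = window.explode("Tags")["Tags"].value_counts()
    kept_tags = set(tag_counts[tag_counts >= min_tag_count].index)
    user_counts = window["OwnerUserId"].value_counts()
    kept_users = set(user_counts[user_counts > min_user_posts].index)
    return window[window.OwnerUserId.isin(kept_users)
                  & window.Tags.map(lambda tags: bool(kept_tags & set(tags)))]
```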

5.2 Competitors

We contrasted ENGAGE against a selection of competitors, described below.

Votes [17] ranks answerers based on the mean difference between the positive and negative votes of their answers, as well as the average percentage of positive votes.

InDegree [3] ranks answerers by their respective numbers of best answers.

The state-of-the-art model in [19], hereinafter called TER (Tag-based Expert Recommendation), infers user expertise from the factorization of the user-tag matrix. The latter is built so that the generic entry reflects the expertise of an answerer under a tag, as captured by averaging the votes of her answers marked with that tag. Unlike ENGAGE, TER ignores both the drift of users’ interests over time and their asking-answering behavior.

TEM [20] is a state-of-the-art joint model of topics and expertise. Essentially, under TEM, tags are incorporated into the textual content of posts in order to infer the topical interests of users. The specific expertise of users under the different topics is explicitly captured.

CQARank [20] combines the user topical interests and expertise under TEM with the link analysis of the asking-answering interaction graph, in order to enhance the inference of user topical expertise.

Both TEM and CQARank disregard the drift of users’ interests over time.

5.3 Recommendation Effectiveness

We comparatively assessed the recommendation performance of ENGAGE through several evaluation metrics. Let \(q \in \varvec{Q}\) be a generic question of the test set. Assume that \(\overline{\varvec{R}}^{(q)}\) and \(\varvec{R}^{(q)}\) are, respectively, the ground-truth and the recommended list of experts for q. Essentially, \(\overline{\varvec{R}}^{(q)}\) is the list of users, who actually answered q, ranked by the known scores of their answers. Instead, the users in \(\varvec{R}^{(q)}\) are ranked by the recommendation score of Sect. 4.1. \(R^{(q)}=|\varvec{R}^{(q)}|\) is the size of \(\varvec{R}^{(q)}\). \(R^{(q)}_i\) denotes the user at position i of \(\varvec{R}^{(q)}\). \(R^{(q)}_{best}\) indicates the rank of the best answerer. The adopted evaluation metrics are enumerated next.

  • Precision at top \(R^{(q)}\) \(( Precision ^{(q)}@R^{(q)})\) [9] is the correctness of \(\varvec{R}^{(q)}\), i.e., the fraction of top-\(R^{(q)}\) recommended experts, who are ground-truth answerers. More precisely,

    $$ Precision ^{(q)}@R^{(q)} = \frac{|\varvec{R}^{(q)} \cap \overline{\varvec{R}}^{(q)}|}{R^{(q)}}$$
  • Recall at top \(R^{(q)}\) \(( Recall ^{(q)}@R^{(q)})\) [9] is the coverage of \(\varvec{R}^{(q)}\), i.e., the fraction of ground-truth answerers in the top-\(R^{(q)}\) recommended experts. Specifically,

    $$ Recall ^{(q)}@R^{(q)} =\frac{|\varvec{R}^{(q)} \cap \overline{\varvec{R}}^{(q)}|}{|\overline{\varvec{R}}^{(q)}|}$$
  • \( nDCG \) [10] (normalized Discounted Cumulative Gain) measures the goodness of the ranking of the recommended experts, based on their position in \(\varvec{R}^{(q)}\). This is accomplished by accumulating expert relevance to question q along \(\varvec{R}^{(q)}\), so that the relevance of lower-ranked experts is suitably discounted. Formally, \( nDCG (q) = \frac{ DCG(q) }{ IDCG(q) }\), where

    $$ DCG (q) = s^{(q)}_{1} + \sum ^{R^{(q)}}_{i=2} \frac{s^{(q)}_i}{ log _2 i}$$

    In the above equation, \(s^{(q)}_{i}\) represents the relevance of \(R^{(q)}_i\) to q (according to thumbs-up/down). \( IDCG (q)\) is the \( DCG (q)\) value of the ideal ranking.

  • Accuracy (\( Acc ^{(q)}\)) [22] measures the quality of the best-answer’s rank, i.e.,

    $$ Acc ^{(q)} = \frac{ R^{(q)} - R^{(q)}_{best} }{ R^{(q)} - 1 }$$
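Putting the above metrics together for a single question, a minimal sketch (with hypothetical argument names) could be:

```python
import math

def question_metrics(recommended, ground_truth, relevance, best):
    """Per-question Precision@R, Recall@R, nDCG and Acc of Sect. 5.3.
    `recommended` is the ranked list R^(q), `ground_truth` the set of actual
    answerers, `relevance` maps users to their vote-based score s_i^(q), and
    `best` is the best answerer. Computing IDCG by re-sorting the same gains
    is a simplifying assumption of this sketch."""
    R = len(recommended)
    hits = sum(1 for u in recommended if u in ground_truth)
    precision = hits / R
    recall = hits / len(ground_truth)

    gains = [relevance.get(u, 0.0) for u in recommended]
    dcg = gains[0] + sum(g / math.log2(i) for i, g in enumerate(gains[1:], start=2))
    ideal = sorted(gains, reverse=True)
    idcg = ideal[0] + sum(g / math.log2(i) for i, g in enumerate(ideal[1:], start=2))
    ndcg = dcg / idcg if idcg > 0 else 0.0

    rank_best = recommended.index(best) + 1 if best in recommended else R
    acc = (R - rank_best) / (R - 1)
    return precision, recall, ndcg, acc
```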

Larger values of the above measures denote a higher recommendation effectiveness. Table 1 summarizes the average values of such measures over the whole set \(\varvec{Q}\) of questions for all competitors. The reported results were obtained with the following empirical settings. In all tests, the decay rate \(\lambda \) was fixed to 0.2. The number K of latent factors was set to 15. The number S of samples used in Eq. 4 was set to 200. The overall number H of iterations of Algorithm 1 was fixed to 1,000, in compliance with the convergence criterion in [12]. Additionally, for each \(q \in \varvec{Q}\), the number \(R^{(q)}\) of recommended answerers for q was set to 10.

By looking at Table 1, it is evident that ENGAGE outperforms all tested competitors. In particular, the lower effectiveness of Votes and InDegree is due to the fact that both focus only on the overall importance of users, without accounting for their specific, temporally-discounted expertise. TER is a state-of-the-art competitor that captures the tag-based expertise of answerers. Nonetheless, TER is still less effective than ENGAGE for two main reasons. Firstly, TER does not account for the drift of users’ interests over time. Secondly, TER does not exploit any auxiliary information from the communication network shaped by the asking-answering behavior. The latter is instead conveniently used, under ENGAGE, to more accurately inform the latent-factor representations of users and tags. TEM and CQARank are two state-of-the-art competitors that use tags to capture topical expertise. However, their effectiveness is penalized with respect to ENGAGE, since tags are mixed up with the textual content of posts, rather than being used as user-generated explanations of topical expertise. Moreover, neither TEM nor CQARank discounts the expertise of users in order to account for the drift of their interests over time.

Table 1. Recommendation effectiveness of the compared approaches

6 Conclusions and Further Research

We proposed a new latent-factor approach to expert recommendation in QA communities. The idea is to infer the time-evolving expertise of users from the tags of the answered questions, the votes and posting time of the respective answers, as well as the asking-answering behavior of the CQA users. A thorough experimentation on real-world CQA data showed that our approach outperforms several state-of-the-art competitors in recommendation effectiveness.

In future work, it is interesting to explore the impact of alternative implementations of gradual forgetting on recommendation effectiveness. In this regard, temporal hyperbolic discounting [21] is a viable choice. Finally, three further lines of research involve incorporating, respectively, user roles [5, 7, 13, 18], exposure [11] to posted questions, and recent generative models of text corpora (such as [4]) for more effective expert recommendation.