There is widespread recognition that salesperson performance is a dependent variable of extreme academic and managerial interest (see Verbeke et al. 2011 for a recent meta-analysis); however, how salesperson performance should be defined and operationalized remains an unresolved issue. Some scholars prefer salesperson performance operationalizations (SPOs) acquired from firms’ CRM systems (or other databases) as secondary data.Footnote 1 These scholars argue such measures represent unbiased, verifiable outcomes (Plouffe et al. 2016) that may enhance an article’s contribution to the discipline (e.g., Hochstein et al. 2019; Palmatier 2016).

Yet even if scholars focus on this subset of SPOs, countless options still remain, including sales volume (e.g., Bolander et al. 2015), number of calls (Ahearne et al. 2007), sales growth relative to a prior period (e.g., Gonzalez et al. 2014), sales-to-quota ratio (e.g., Hughes 2013), and others. Despite the vast variety of SPOs in use, “salesperson performance” is often discussed as though all articles, along with their disparate operationalizations, refer to the same underlying construct. This conflation is compounded when considering the breadth of activities salespeople conduct (e.g., prospecting, servicing), the various behaviors salespeople exhibit (e.g., adaptability, time management), and the industry differences in which salespeople operate (e.g., B2B, B2C). Clearly, the use of “salesperson performance” as an umbrella term that can be aligned to any SPO leads to confusion and inconsistency in the literature (see Park and Holloway 2003, p. 242).

Adding further complexity to this situation, the emergence of highly sophisticated CRM systems, such as Salesforce.com and Oracle’s NetSuite, allows companies to store an unprecedented amount and variety of data on salesperson performance (Beath et al. 2012; Hollison 2015). As evidence of this data availability, the CRM industry is expected to grow by almost 600% between 2010 and 2025 (Presswire 2019). Given this explosion of information, researchers obtaining secondary data from firms are likely to encounter measures far outside those to which they have become accustomed. This abundance of emerging secondary data and the absence of guidance for utilizing it highlight four key issues for researchers and reviewers: (1) which SPOs are most important to practitioners, (2) how SPOs relate to each other in a nomological network, (3) which SPO is most appropriate for a given research context, design, or objective, and (4) what scholars should consider when selecting an SPO. Without answers to these questions, scholars are significantly impaired as they look to build upon theory and maintain managerial relevance. Our goal is to prepare scholars for this rapidly evolving landscape and make sense of what is arguably the most important variable in all of sales management research.

To begin to unpack these important issues, this article proceeds in three stages. First, we conduct an exploratory survey with practitioners to discover the SPOs that salespeople and managers utilize. Practitioners are the primary source from which secondary sales metrics are collected (Homburg et al. 2011). Thus, it is imperative to understand the types of performance data practitioners collect and which SPOs they deem most important so that we can proceed from a place that is grounded in reality, ensuring the continued managerial relevance of sales research (Palmatier 2016). Second, we conduct a systematic literature review (see Palmatier et al. 2018) and, using a theoretically driven evaluative framework, classify 30 years of empirical sales research to better understand the SPOs in use by scholars. Third, we compare the practitioner and scholar perspectives to create an inclusive conceptual model of the different types of SPOs, provide their theoretical definitions, detail the nomological order of the SPOs within these categories, and offer simple transformations that can be applied to nearly any SPO to help scholars better align their SPO with their specific research objectives.

In all, this effort produces four key contributions. First, we provide clarity on the meaning of salesperson performance. We deconstruct and define salesperson performance and its components to align practitioners and scholars. By elucidating the conceptual meaning of this vital construct, we show that salesperson performance is a broader concept than sales performance, one that extends beyond strictly sales outcomes (e.g., revenue, growth). This view allows scholars to make use of a broader array of secondary data sources.

Second, through our extensive review of the literature, we highlight the strengths of both primary and secondary data. Our clarification of the pros and cons of each type of data (see Table 1) informs scholars about the suitability of each data type for research situations. This information allows researchers to identify potential advantages and shortcomings of their data. By acknowledging these differences, scholars will be more prepared to address their specific data challenges in order to strengthen the empirical findings of future research studies.

Table 1 Pros and cons of primary vs. secondary salesperson performance data

Third, our work brings together both practitioner and scholarly perspectives on salesperson performance. By comparing scholarly and managerial approaches to SPOs, we highlight a disparity between practitioner evaluations of performance and the literature. We investigate these perspectives by surveying sales professionals and using a systematic literature review to understand and align insights related to SPOs. In doing so, we offer a comprehensive view of SPOs using secondary data that is grounded in both practice and research.

Fourth, we offer guidelines and cautions for when and how to leverage different SPOs. These guidelines reduce the ambiguity around different SPOs by detailing the options that exist and which of those options may be appropriate for a given study. We offer suggestions to researchers on how secondary data can be used to operationalize salesperson performance. Specifically, we introduce a conceptual model of salesperson performance that gives guidance on the different aspects and categories of SPOs organized around the natural progression of the sales process. In doing so, we also draw a distinction between secondary data that is “objective” and secondary data that is “subjective” (e.g., human-generated, human-influenced), challenging the inconsistent selection of “any” SPO and the common assumptions found in the literature (e.g., that sales-to-quota ratio is an ideal SPO). Overall, our guidelines serve to improve the theoretical consistency and managerial relevance of future research by aligning scholarly and managerial perspectives on salesperson performance.

Conceptual background

The nature of salesperson performance

Before proceeding, it is important to discuss what is meant by salesperson performance in general. Salesperson performance is defined as “behavior that has been evaluated in terms of its contribution to the goals of the organization” (Walker et al. 1979, p. 33). Since the complex behaviors that salespeople enact tend to vary across and within industries, performance, a function of an individual’s behavior, is better thought of in terms of the quantity and quality (e.g., strategy, style) of a salesperson’s effort inputs and outputs (Campbell et al. 1993). An expansive review of the literature suggests that salesperson performance broadly encompasses four categories of SPOs: activity-, outcome-, conversion-, and relationship-based.

Activity-based performance refers to the behavioral metrics the firm collects that lead to pipeline development and progression. These activities reflect effort (e.g., calls, meetings, proposals) rather than effectiveness, but practitioners still view these as valuable performance metrics. Outcome-based performance refers to actual sales results. These metrics reflect some form of transaction(s) which affect an organization’s revenue. The notions of activity-based and outcome-based performance mirror research on sales managerial controls (Anderson and Oliver 1987). Behavior-based controls emphasize the monitoring and rewarding of employee inputs (e.g., activities) and how work gets done, while outcome-based controls rely on outputs and underscore results rather than methods (Oliver and Anderson 1995).

Conversion-based performance is unique because it shows the quality of a salesperson’s effort by comparing inputs (activities) to outcomes (e.g., “win rate,” “batting average”). This gives managers insight into salesperson strengths and weaknesses at various stages of their pipelines. For example, after how many meetings does a salesperson close a sale or how many cold calls must a salesperson make to set a meeting? This can be tied back to literature on salesperson productivity (Hall et al. 2015; Weitz 1981). More specifically, researchers have acknowledged that there are both activities and outcomes that need to be considered in tandem when examining salesperson effectiveness (Weitz et al. 1986). In other words, performance can be a ratio of sales outcomes and inputs (Boles et al. 1995).

Relationship-based performance metrics relate to the strength of the relationship a salesperson maintains with customers (e.g., loyalty, retention, net promoter score; see Keiningham et al. 2007; Morgan and Rego 2006). Because relationship-based metrics generally focus on long-term outcomes (Palmatier et al. 2013), these SPOs are thought to tap into one’s potential for sustained performance. This aligns closely with research highlighting the importance of relationship quality in the sales role (Crosby et al. 1990; Park et al. 2010). The SPO categories described here will be used throughout the remainder of this manuscript and contributed to the development of our survey, evaluative framework (for our systematic literature review), and conceptual model.

Measuring salesperson performance: Primary vs. secondary data

With some conceptual groundwork established, we now discuss two main approaches to measuring salesperson performance. Specifically, scholars tend to rely on either primary (e.g., McFarland et al. 2016; Miao and Evans 2013) or secondary (e.g., Ahearne et al. 2013b; Bolander et al. 2015) data collection methods. Primary data is generated by the researcher for a specific purpose (e.g., researcher-conducted surveys). Secondary data, on the other hand, is collected by a party other than the researcher for some other purpose (e.g., CRM records).

Primary data, which can be used to measure both salesperson behaviors and outcomes, involves making judgments about the overall performance (i.e., including financial and non-financial indicators) of an individual over a defined period (Murphy 2008). These judgments can be reported by individual salespeople (e.g., self-evaluations), sales managers (e.g., rating performance of subordinates), or customers (e.g., satisfaction with the salesperson). This approach allows for a more holistic and multidimensional perspective of selling activities that extends beyond those that are easily countable (Osterman 2007). For instance, the scale by Behrman and Perreault (1982) includes measurement items related to multiple dimensions of sales (detailed subsequently).

Secondary data consists of salesperson performance indicators that can be “seen” and counted, often in the form of company records. By incorporating organizationally relevant metrics, this type of data succinctly quantifies the inputs and outputs of salesperson actions, which may be used to determine job effectiveness (Neely et al. 1997). In the sales literature, research measuring salesperson performance with secondary data has drawn on a number of archival company data types, such as sales volume (e.g., Bolander et al. 2015), growth (e.g., Gonzalez et al. 2014), and quota attainment (e.g., Patil and Syam 2018).

One conclusion about SPOs is that there is no widespread agreement on which data type researchers should use. While the literature acknowledges differences between these types of data and suggests they should not be used interchangeably (e.g., Rich et al. 1999), there remains no “silver bullet” when it comes to the best indicator of salesperson performance. Indeed, there is merit in both primary and secondary data approaches. Researchers should weigh the complementarity of these approaches by considering which advantages (and disadvantages) of each data source are germane to their individual study objectives (see Table 1).

While many studies have involved primary data, in this study, given the rapid increase in the amount of company-generated secondary data available in firms, as well as new access to unique kinds of performance measures, our focus is on SPOs derived from secondary data. Specifically, we take a comprehensive approach by examining the practitioner and scholarly perspectives on SPOs in order to provide conceptual clarity as to the differences between SPO types and offer guidelines for researchers and reviewers regarding the use of secondary data sources to assess and operationalize different aspects of salesperson performance.

Practitioner perspective: Exploratory survey

In our effort to align practice and scholarship, we begin by assessing the practitioner perspective on salesperson performance. This starting point was selected for two reasons. First, given that our focus is on secondary data, which is stored by practitioners in various CRM systems and other databases, we must acknowledge that salespeople, managers, and customers represent the primary sources of information on salesperson performance. By elaborating on which SPOs practitioners collect and emphasize as key performance indicators, we can give scholars a more accurate idea of what is potentially available to them when working with firms. Second, managerial relevance is a consistent point of focus for leading marketing journals (Palmatier 2016). Thus, understanding managers’ thinking and the context in which it takes place ensures that scholars will use relevant SPOs to ground their empirical examinations and discourages research that is “uncoupled from the real world” (Tushman and O’Reilly III 2007, p. 770).

Method: Practitioner perspective

Sample and survey instrument

We used our initial review of the literature as well as interviews with practitionersFootnote 2 to lay the groundwork for our understanding of the SPO categories. We used this information to create our exploratory survey. The survey was distributed to practitioners using Qualtrics panel services with two criteria requested. First, to ensure that our results represented a balance of perspectives, we requested that approximately half the respondents be managers and the other half be sales representatives. Second, to ensure a variety of industry contexts were represented, we requested that approximately half of the respondents represent B2B domains and the other half B2C. The resulting panel included 143 participants from a variety of industries (e.g., technology, insurance, manufacturing) nationwide.

Panelists were asked about a variety of prominent SPOs that, based on our interviews, we expected to encounter (e.g., sales revenue, sales-to-quota ratio, cold calls, etc.); those questions were accompanied by a five-point scale that asked about the importance of each SPO (1-Not important to 5-Very important, and an “N/A-Not Used” option). Additionally, empty text fields prompted participants to report metrics that were not listed (unanticipated metrics) to ensure we had the opportunity to gather all possible SPOs. These respondent-reported SPOs were then accompanied by the same scale items to capture their importance.

Respondents were removed due to time-to-completion concerns and for their failure to pass quality checks. After the data cleaning, we were left with usable responses from 122 practitioners. Forty-three percent were from a B2B context (57% from B2C), and 43% were sales representatives (57% sales managers). Given the variety of respondent backgrounds in our sample and the exploratory nature of this survey, we feel confident that this sample provides a comprehensive view of current practitioner approaches to SPOs.

Practitioner survey findings

SPO categories, subcategories, and importance

We found it critical to begin our survey with questions about practitioners’ definition of performance. To ensure that questions were relevant and unambiguous, we instructed respondents to consider how salesperson performance is measured in their particular organizations. These questions were asked before any questions about SPOs to negate any priming effects on the participants’ answers. Using a similar procedure as in the preliminary interviews (see Web Appendix A), members of the research team reviewed and coded the open-ended responses. This process yielded the same categories and aspects of salesperson performance and indicated that practitioners consider varying aspects of performance (8.26% use activity-, 57.02% use outcome-, 4.96% use conversion-, and 8.26% use relationship-based, while 21.50% use a combination approach).

Our survey results corroborate the results of our conceptual background section and preliminary interview findings in that they confirm our four SPO categories. Furthermore, within each of the proposed categories, we find evidence of additional subcategories. A visual depiction of the subcategories, variety of specific SPO examples, and percentages of practitioners rating each category as highly important are detailed in Table 2.Footnote 3 These percentages are broken down by contextual domain to show differences between respondents in B2B and B2C contexts.

Table 2 Practitioner performance metric categories, examples, and importance

Results suggest that it is useful to divide activity-based SPOs into two subcategories: early stage and late stage activities. Early stage activities add prospects to a salesperson’s pipeline (e.g., cold calls, drop ins). Late stage activities focus on progress of the sales process through the pipeline toward a transaction (e.g., meeting, presentation, proposal). Early stage and late stage activities appear equally important to B2B respondents (40%), while B2C respondents report a greater emphasis on early stage activities (52%) as opposed to late stage (28%).

Similarly, outcome-based SPOs can be broken down into two subcategories: raw and comparative. Raw-outcome SPOs are raw sales volume metrics (e.g., revenue, units sold, profit) and provide the foundation for all other outcome SPOs. Comparative-outcome SPOs attempt to standardize a raw SPO to make it comparable across salespeople and territories; by dividing the raw SPO by some baseline (e.g., sales-to-quota ratio, “share of” measures, percent of total territory sales), firms can account for differences in territory potential. For example, a sales-to-quota ratio compares salespeople’s actual sales volume to a target sales volume, assuming that the quota is set in a way that allows for comparability across salespeople and territories. These dimensions of outcome-based SPOs showed relative consistency in their importance to practitioners in both the B2B (40%) and B2C (39%) domains.

Results also uncover two subcategories of conversion-based SPOs: activity conversions and outcome conversions. Activity conversions reveal a salesperson’s effectiveness in converting early stage activities to late stage activities (e.g., sales calls to sales meetings, sales meetings to proposals). Outcome conversions reveal a salesperson’s effectiveness at converting activities (at any stage) into sales outcomes (e.g., calls to revenue, proposals to profitability). Conversion-based SPOs were not used as frequently by practitioners as the other SPOs and exhibit similar patterns of importance in both B2B and B2C contexts (26–28%).

Relationship-based SPOs can also be divided into two subcategories: financial and non-financial. Financial relationship SPOs measure financial outcomes related to long-term client retention (e.g., customer lifetime value, recurring revenue, upselling) and allow researchers and practitioners to use behavioral data to assess customer loyalty (Watson et al. 2015). Non-financial relationship SPOs, in contrast, do not directly impact the bottom line (e.g., customer satisfaction, net promoter score, references); these SPOs are attitudinal measures of customers’ loyalty (Watson et al. 2015). Over half of B2B respondents (53%) rated financial relationship SPOs as highly important compared to only 41% of B2C, while non-financial relationship SPOs were rated as slightly more important in B2C (54%) than in B2B (51%).

Discussion: Practitioner perspective

Our exploratory survey results, which capture the practitioner perspective on SPOs, provide three key insights. First, in both interviews and surveys, practitioners confirm the existence of four general SPO categories: activity-, outcome-, conversion-, and relationship-based. Critically, outcome-based SPOs—arguably the most obvious type of SPO—are not unanimously or even frequently ranked as more important than other SPO types. Indeed, practitioners view activity- and relationship-based SPOs as especially valuable metrics, ranking them as more important than outcome-based SPOs in some cases (see Table 2).

Second, within these four broad SPO categories exist a variety of subcategories that are useful in organizing and categorizing SPOs. Moreover, some interesting differences emerge in how B2B and B2C respondents rank these subcategories’ importance. For example, regarding activity-based SPOs, B2C respondents place a greater emphasis on early stage activities than late stage activities, a point of divergence that might be explained in part by different sales cycles. Generally speaking, B2C companies have simpler sales cycles, while their B2B counterparts tend to have better-defined sales processes built around pipeline concepts (Ahearne et al. 2012). This differing emphasis on early stage activities may also reflect B2C firms’ belief that their customers are virtually unlimited (Peppers and Rogers 2005) and their resulting treatment of sales as a “numbers game” (Ward 2016). Additionally, regarding relationship-based SPOs, B2B respondents place more emphasis on financial SPOs than do B2C respondents. Again, we believe this makes sense, as the buyer-seller relationship in B2B contexts often involves more actors than does the same relationship in B2C contexts (Hartmann et al. 2018). These context-specific preferences and actions indicate the difficulty inherent in tracking non-financial relationship SPOs in the same way researchers track individual attitudes about a given issue or event, since complex buying centers cannot technically hold attitudes.

Third, while conversion-based SPOs are deemed important by a smaller percentage of practitioners (approximately 15–25%, compared to the 40–50% who rate the other SPO categories as highly important), we believe that these SPOs still warrant consideration, as the 28% who ranked conversion-based SPOs as highly important is still a notable portion of our respondents.

Scholarly perspective: Systematic literature review

Having detailed the practitioner perspective on SPOs, we now turn our attention to scholars. One of the key takeaways from the prior section is that practitioners take a broad view of SPOs, considering not only outcomes such as revenue or profit, but also process-oriented categories. Indeed, as shown from our initial study, practitioners monitor a wide variety of performance categories including activity-, outcome-, conversion-, and relationship-based SPOs.

Method: Systematic literature review

We investigate the scholarly perspective on SPOs by conducting a systematic literature review (Palmatier et al. 2018; Tranfield et al. 2003) to explore the current state of empirical salesperson performance research and derive meaningful insights. This is a rigorous and transparent approach to the review process that enhances replicability (Torraco 2005).

Search procedure and parameters

Given the scope of our study on secondary SPOs, we focused our search effort on articles published from 1989 to 2020. We chose this timeframe because, prior to 1989, most sales research focused on self-report performance measures. As per Baumgartner and Pieters (2003), Williams and Plouffe (2007), and Verbeke et al. (2011), we specifically searched within journals that have been identified as “top” marketing or management outlets or as outlets that are most likely to publish sales research. To keep our search manageable, we included the relevant journals that “count towards” the Financial Times research rank (Ormans 2016) as well as applicable specialty journals. Our final list includes nine journals (detailed in Web Appendix B). We also considered other journals that appeared potentially appropriate (e.g., Management Science, Marketing Letters, Journal of Business Research, Journal of Business and Industrial Marketing, Journal of Management, Journal of Service Research) but found too few—or, in many cases, zero—instances in which the dependent variable is a secondary SPO. Thus, we concluded that our focus on these nine journals is appropriate and representative of most empirical studies on salesperson performance.

We conducted our search of these journals via EBSCO’s Business Source Complete and used the following terms to search keywords and abstracts (all paired with the word “performance”): “sales,” “salesperson,” “objective,” “sales representative,” “sales associate,” “sales rep,” “account manager,” “business development,” “frontline employee,” and “FLE.” Even with these focused search terms, unsuitable articles surfaced. For example, the term “sales performance” uncovered articles centered on business unit-level sales performance (e.g., Nijssen et al. 2017) or a firm’s overall annual sales (e.g., Rowe and Skinner 2016) as opposed to individual salesperson performance. We excluded conference papers, editorials, meta-analyses, and non-empirical articles. We then carefully screened the resulting articles, reviewing their titles, abstracts, keywords, and methodology sections to ensure that the articles use SPOs pulled from secondary data. As a result, extensive manual evaluation was also a vital part of identifying the articles included in this review.

Evaluative framework

A well-defined and theoretically driven evaluative framework allows us to rigorously examine the nuances found in published studies (Katsikeas et al. 2016). We developed our framework by drawing on both firm- and individual-level performance reviews and conceptual articles in marketing (e.g., Katsikeas et al. 2000), management (e.g., Richard et al. 2009), international business (e.g., Hult et al. 2008), and sales (e.g., Boles et al. 1995). Taken together, these literatures suggest that the variables displayed in Table 3 should be evaluated as part of any comprehensive effort to examine SPOs.

Table 3 Evaluative framework for research on salesperson performance using secondary data

Review and extraction process

Our search efforts resulted in an initial set of 218 articles that appeared to include a measure of secondary salesperson performance. After manual evaluation, we eliminated 119 (55%) that use primary performance measures or do not examine salesperson performance at all.Footnote 4 An additional 19 articles, despite containing our search terms, were eliminated because they do not examine secondary performance at the individual level. This resulted in 80 articles, described in Web Appendix B, for our systematic review.

With the final list of studies determined, we sought to understand how performance is operationalized. We began the coding process by carefully reading each article and summarizing the SPO in a sentence. Once all articles were summarized, we independently examined each “case” (e.g., Watson et al. 2018) to extract information (Tranfield et al. 2003). The evaluation of these articles was completed by the four authors. The information was codified in a protocol list that included the criteria from the evaluative framework and the specifics of the SPO. For consistency, we maintained a spreadsheet for coding and met regularly to resolve any disagreements (Marques and McCall 2005; Scandura and Williams 2000). Table 4 details the summary statistics for our findings based on our evaluative framework.

Table 4 Summary of research using secondary data for salesperson performance (1989–2020)

Systematic literature review findings

Aspects of performance

Aspect of performance refers to the performance category with which an article’s SPO aligns. Specifically: (1) activity- (salesperson behaviors), (2) outcome- (salesperson results), (3) conversion- (comparing salesperson outcomes to activities performed), and (4) relationship-based (future-focused results with customers). Most articles in our literature review focus on outcome-based performance (88%). Many activity- and relationship-based metrics are collected using primary data (e.g., surveys of salesperson effort or customer loyalty), which may explain the relative lack of secondary research exploring these aspects of performance. Still, we see a few notable examples of the other performance aspects being operationalized with secondary data (see Table 5). For example, Ahearne and colleagues (2010a) use calls recorded in a CRM system as an activity-based SPO; Jasmand and colleagues (2012) create conversion-based SPOs to operationalize call effectiveness; and Wieseke and colleagues (2012) use customer-satisfaction data collected from a third-party firm as a relationship-based SPO. These examples, along with the articles highlighted in Table 5, should serve as models for future research to emulate.

Table 5 Exemplars of studies using secondary data for underrepresented SPO categories

Theoretical rationale

Next, we consider whether each study provides a formal definition of salesperson performance along with a theoretical or conceptual rationale that shows how its specific SPO aligns with this definition. When such rationale is provided, authors are able to plainly delineate their specific conceptualization of salesperson performance from alternatives in the broader domain of performance, and that conceptualization can then be used to articulate their choice of SPO and facilitate replication efforts (Katsikeas et al. 2016). Our review finds that 63% of articles do not provide theoretical rationale or justification for the designated SPO – a rate higher than that reported in Katsikeas and colleagues’ (2016) review of the marketing performance literature. With approximately two out of three articles failing to explain why the specific SPO was chosen, significant room for improvement remains.Footnote 5 Transparently sharing these details is critical for replication efforts (Freese and Peterson 2017).

SPO measurement occasions

Given that salesperson performance varies over time (Ahearne et al. 2010b), it is also critical that researchers evaluate each study’s treatment of its SPO as either a single occurrence or a repeated measure. Importantly, this is a separate issue from whether or not an article’s overall model is longitudinal. Consider a model in which independent variables are measured at one point, mediators at another, and dependent variables (like performance) at yet another: such a model is longitudinal, but it does not involve a repeated performance measure. Our review revealed that single occasion SPOs are used in the majority of studies. Certainly, such data can be sufficient to understand certain phenomena, but there are distinct advantages to using repeated measures of performance to understand how effects unfold over time (Bolander et al. 2017).

Another research design consideration recently acknowledged in sales management research involves whether a study utilizes a between- or within-person research design (Childs et al. 2019). A between-person research design views salesperson performance as “inter-individual;” in other words, salespeople are compared to each other. In a within-person design, performance is viewed as “intra-individual,” and salespeople are compared only to themselves. For example, Childs et al. (2019) detail articles that claim that increasing one’s self-efficacy would result in some outcome (a within-individual claim) using results derived from differences between individuals who demonstrate higher (vs. lower) self-efficacy levels (a between-individual result). Only 19% of articles consider performance over time, highlighting a need for more repeated measures research to explore causal and within-person relationships.

Table 6 details repeated measures studies using secondary salesperson performance data, the focal SPO, the aspect of performance, the advantage of the repeated measures design, and key insights derived that would have eluded a study evaluating performance at a single occasion. Of note, we currently identify no repeated measures secondary salesperson performance research that explores activity-, conversion-, or relationship-based performance. Repeated measures research analyzing causal and within-person relationships for these aspects of performance represents a clear opportunity for future research.

Table 6 Exemplars of repeated measures data using secondary data for salesperson performance

Referent

We identify and examine common reference points used to conceptualize and operationalize salesperson performance. Specifically, we consider referents that are: (1) absolute—a raw SPO with no specific referent other than zero (e.g., revenue), (2) relative—a ratio-based SPO in which the referent is a baseline of some sort (e.g., revenue to quota, new accounts to territory average), and (3) temporal—a change-focused SPO in which the referent is an individual’s change in performance over a specified time period (e.g., year-over-year revenue growth). Absolute referents are used frequently by researchers (44%). Their use seems appropriate to the extent that the salespeople under examination all have similar performance potential (i.e., few salient territory, manager, or economic differences exist). Otherwise, it may be easy to misattribute an apparently high-performing salesperson’s performance to the variables under study when, in reality, their success is the result of a favorable territory. We note several articles that handle this threat well by either running a multilevel model where individual performance is nested under territory (e.g., Ahearne et al. 2013a) or detailing why territory differences do not exist or are not a concern (e.g., Bolander et al. 2015). At the same time, others appear to suggest the presence of territory differences yet employ absolute referent SPOs.
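To make the nesting approach concrete, the minimal sketch below shows one way to absorb territory-level variance when using an absolute-referent SPO. The data file and column names (revenue, adaptiveness, territory_id) are hypothetical assumptions for illustration, not a reconstruction of any reviewed study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per salesperson, with a raw (absolute-referent)
# SPO, a focal predictor, and a territory identifier.
df = pd.read_csv("salesperson_data.csv")  # columns: revenue, adaptiveness, territory_id

# Random intercepts for territory absorb territory-level differences so that a
# favorable territory is not misattributed to the focal predictor.
model = smf.mixedlm("revenue ~ adaptiveness", data=df, groups=df["territory_id"])
result = model.fit()
print(result.summary())
```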

Relative-referent SPOs are also heavily represented in our review (51%), which makes sense given the likelihood of variance in salespeople’s performance potential (i.e., the presence of territory or manager differences). However, with 75% of the articles reviewed expecting territory differences, the proportion of relative to absolute measures should be weighted even more heavily toward relative-referent SPOs. Relative referents are intended to control for territory variance by viewing performance relative to a baseline such as a quota (which, if rigorously set, would account for potential differences) or the average sales numbers for the territory (Ahearne et al. 2010b). The sales-to-quota ratio is often used in this category (33%).

Finally, we find SPOs with a temporal referent are notably underrepresented in the literature (6%). These SPOs address possible territory differences by comparing a salesperson’s current performance to the same individual’s (in the same territory) performance at a prior time. In other words, if a given salesperson was capable of a certain performance level in the first quarter of last year, we can use that information to understand the potential of their unique territory in the first quarter of this year. It should be acknowledged, though, that the few articles using these SPOs (e.g., year-over-year sales growth) have all been published in top marketing outlets, which suggests that the field is receptive to these SPOs. Researchers should be careful, however, not to confuse temporal-referent SPOs with repeated measures designs: temporal-referent SPOs combine multiple waves of measurement into a single score that is then analyzed in the same manner as a variable measured at a single point in time.
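To illustrate that distinction, the sketch below uses hypothetical quarterly revenue figures to collapse two waves of data into a single year-over-year growth score per salesperson; the resulting variable is a temporal-referent SPO analyzed cross-sectionally, not a repeated measure.

```python
import pandas as pd

# Hypothetical quarterly revenue for two salespeople (illustration only)
df = pd.DataFrame({
    "salesperson_id": [1, 1, 2, 2],
    "quarter": ["2019Q1", "2020Q1", "2019Q1", "2020Q1"],
    "revenue": [100_000, 120_000, 80_000, 76_000],
})

# Collapse the two waves into one temporal-referent score per salesperson:
# year-over-year growth relative to the same quarter in the prior year.
wide = df.pivot(index="salesperson_id", columns="quarter", values="revenue")
wide["yoy_growth"] = (wide["2020Q1"] - wide["2019Q1"]) / wide["2019Q1"]

# yoy_growth is a single cross-sectional variable, not a repeated measure.
print(wide["yoy_growth"])
```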

Data considerations

The type of data (primary or secondary) and the data source (computer generated, human influenced, external human input, and self-reported human input) are also important criteria to consider in a general sense. However, since the intent of our study is to review secondary data and provide future guidance for this data type and source, we do not include such considerations as part of our evaluative framework. But we discuss secondary data subjectivity (human influenced) later in the manuscript as these important criteria warrant consideration for anyone reviewing the broader literature.

Study context

When measuring organizational performance, as Richard et al. (2009) note, researchers “must take into account heterogeneity of environments, strategies, and management practices” (p. 725). Similarly, salesperson performance is potentially context specific. As such, the contextual details surrounding each individual study are critical to understanding how salesperson performance is evaluated. These details are necessary to justify, among other things, the population used or the appropriateness of adopted measures (Hulland et al. 2018) and to elucidate the decisions that underlie SPO choice. As part of the study context, we consider whether data came from a B2B or B2C context, whether territory/office differences were expected, and if details regarding how sales quotas were set are available. Other study context details that may be of interest for future research include whether salespeople have pricing authority or the nature of a salesperson’s compensation. However, due to a lack of relevant information in the articles examined, we are unable to fully evaluate these details.

Discussion: Scholarly perspective

The results of our systematic literature review, used to capture the scholarly perspective on SPOs, provide three key insights. First, while there are clearly imbalances along the criteria we used to evaluate articles—for example, an overwhelming focus on outcome-based, single occasion SPOs—we are pleased to find that there are counter examples of these general trends that can serve as models for future research. Continuing to focus on the performance aspect, we see some excellent examples of studies utilizing secondary data for activity- (e.g., Ahearne et al. 2010a), conversion- (e.g., Jasmand et al. 2012), and relationship-based SPOs (e.g., Wieseke et al. 2012). Similarly, for those interested in working with repeated measures data, there exist several examples to use for reference (Ahearne et al. 2010b; Fu et al. 2010). This is encouraging since these papers offer guidance to those working to address these imbalances.

Second, there is at least some possibility that seemingly objective secondary data is subject to what we call “subjective confounding.” For example, our literature review identifies a potential area of concern in the combination of objective and subjective data as indicators of a latent aggregate construct. This is a novel approach, but it also raises some concerns; adding anything subjective to an objective SPO diminishes the resulting variable’s objectivity. So, if an article combines objective and subjective SPOs (e.g., survey items) to create a latent aggregate construct, that construct should no longer be considered objective. Further, commenting on Bommer et al.’s (1995) finding that objective and subjective SPOs share only 15% of their variance, Rich et al. (1999) state that the relationship between subjective and objective SPOs is “hardly what one would expect if the two types of measures assess the same underlying construct” (p. 42). So, combining SPOs to create a common latent (reflective) variable seems potentially problematic. Admittedly, conceptualizing the variable as a formative construct could alleviate the issue of limited overlap between subjective and objective items, but this approach is also concerning: studies that take it rarely use the same variables as indicators, and a researcher with access to a variety of distinct SPOs may find it more impactful to model each as a separate dependent variable for the sake of robustness tests (which are increasingly demanded in top marketing outlets; e.g., Gonzalez et al. 2014).

Yet even strictly secondary data can be confounded by subjectivity. Our review identifies considerable ambiguity regarding the way a specific firm may set its quota. Of the articles using a sales-to-quota ratio SPO, 56% failed to detail the process by which the quota was set. To the extent that a quota has been set analytically based on data that accounts for territory history, competitor actions, and macro-economic trends (see Ahearne et al. 2010b, p. 69), objective SPO claims may be justified. However, we know there are numerous methods for quota setting, including human guesswork (Rich 2016), that would call objectivity into question. So, dividing an objective SPO (e.g., revenue) by a questionable quota does not allow a researcher to claim the resulting value remains objective. When scholars neglect to report the details of the quota-setting method, readers are left wondering about the validity of the quota and, therefore, the results. Combining an objective value with a subjective one, whether as indicators of a common factor or by dividing one by the other, will rightly cast suspicion on the measure’s objectivity.

Third, regarding the relative lack of repeated SPOs in the literature, we note that secondary data is uniquely equipped to address this issue, as it is often recorded over many time periods (e.g., monthly, quarterly, etc.), giving researchers easy access to multiple occasions of a variety of SPOs (Bolander et al. 2017). In contrast, collecting primary, subjective performance data over multiple occasions would be far more cumbersome for the researcher and participants. But, despite this distinct advantage, our review uncovers a few articles by researchers with apparent access to multiple waves of performance data who still aggregate this data into a single variable, seemingly nullifying the data’s novelty. Thus, we see an opportunity for more research looking at repeated SPOs moving forward, as the data needed appears to be available.

Aligning perspectives: General discussion

We seek to unify the practitioner and scholarly perspectives via a conceptual model of SPOs. To this end, we detail the SPO categories, provide specific examples within each category, pair SPO categories with the appropriate corresponding selling stages, and recommend transformations that can prepare each SPO for within- or between-person research.

Conceptual model of salesperson performance

Researchers using secondary data for salesperson performance focus almost exclusively on outcome-based SPOs (e.g., revenue, profit), while practitioners acknowledge a much broader conceptualization of salesperson performance (i.e., activity-, conversion-, and relationship-based). We also find a majority of research focusing on single occasion and between-person questions, leaving much to be discovered via repeated performance measures and within-person research designs (Bolander et al. 2017; Childs et al. 2019). If marketing scholars hope to align their research with practice and ensure their work’s relevance (Palmatier 2016), these problems need to be deliberately addressed. To this end, we provide our conceptual model of SPOs in Fig. 1 to assist researchers with these objectives.

Fig. 1 Conceptual model of salesperson performance operationalization options

Our conceptual model is broken into three sections: selling stages, salesperson performance, and potential transformations. The selling stages specify well-defined stages of the selling process (as per Andzulis et al. 2012; Moncrief and Marshall 2005), salesperson performance identifies the nomological order of the four categories of SPOs and their respective subcategories identified in our research, and potential transformations details ways to transform secondary data relative to others and relative to time so researchers can appropriately address between- or within-person research questions.

Selling stages

Our conceptual model details three main selling stages of importance to sales scholars: pipeline development and progression, closing, and relationship management. Pipeline development refers to a salesperson’s prospecting and approaching abilities (i.e., hunting; e.g., DeCarlo and Lam 2016), and pipeline progression refers to advancing those prospects through the sales process via needs identification and solution presentation. Next, closing refers to a salesperson’s ability to convert prospects into customers through negotiation and by gaining commitment. Finally, relationship management focuses on building and maintaining relationships (i.e., farming) by servicing the sale, following up, and cross/upselling.

SPO category recommendations

To facilitate the appropriate use of these categories by scholars, we align the selling stages with the SPO categories that best measure the efficacy of the salesperson’s ability during each selling stage. Our recommendations begin with activity-based performance, which best assesses a salesperson’s pipeline development and progression. Early stage, activity-based SPOs measure a salesperson’s initial effort (e.g., calls), making them appropriate measures of pipeline development. Several articles examine this type of outcome using primary data (e.g., Sujan et al. 1994), and our conceptual model should make the application of secondary data for this purpose clear. In contrast, if a scholar is interested in assessing not only a salesperson’s initial effort but also their ability to progress an opportunity through the process, late stage activity-based SPOs (e.g., meetings) will be more appropriate.

Outcome-based performance measures a salesperson’s closing capabilities. Research collecting data from contexts where territory or managerial differences are thought to be negligible (e.g., Bolander et al. 2015) or explicitly interested in testing the effects of such expected differences (e.g., Wieseke et al. 2009) should use raw SPOs (e.g., revenue). Consider that if a scholar interested in territory or managerial differences models these variables’ effects on a comparative SPO (e.g., sales-to-quota ratio, which is thought to control for such differences), they are essentially “double-controlling” for these contextual effects, and their results, if any, would be difficult to interpret. Alternatively, if a researcher would like to suppress contextual differences to evaluate the influence of salesperson-specific variables, comparative SPOs (e.g., sales-to-quota ratio) may be more appropriate. Overall, though, outcome SPOs, whether raw or comparative, are ideal for those interested in hard outcomes rather than pipeline development or progression competency.

Conversion-based SPOs involve a comparison of inputs to outcomes to determine not only what a salesperson accomplished in terms of pipeline progression or closed business, but also how hard they had to work to achieve those results. Depending on the research question, one could assess activity conversions, which focus on a salesperson’s effectiveness in converting early stage activities to later stage ones (e.g., meetings per call), or outcome conversions, which focus on a salesperson’s effectiveness in turning activities into hard sales outcomes (e.g., units sold per call; Jasmand et al. 2012). To the extent that it is important for one’s model to differentiate a salesperson who sells, for example, $1 million in widgets by leveraging a close connection and making a single call from a salesperson who sells the same amount by working long hours and prospecting intensely, these SPOs will be essential.

Finally, a researcher interested in a salesperson’s ability to conduct the “farming” aspects of the sales role (i.e., maintaining post-sale client relationships) should use relationship-based SPOs. Financial relationship SPOs are advised for researchers interested in long-term customer purchases (e.g., cross/upselling). Non-financial relationship SPOs are relevant for research on attitudinal measures of customer relationships (e.g., customer satisfaction). This distinction can be very important, as the variables that predict, say, outcome-based SPOs may be quite different from those that predict repurchase or long-term customer satisfaction (Holmes et al. 2017).

Transforming the SPO

Once a researcher selects the best SPO, they must consider its functional form. If a firm provides a researcher with an SPO—whether calls, revenue, win rate, net promoter score, etc.—the form of the provided SPO may not make the most sense for the scholar’s study. If performance in the study is defined as performance over that of others (between-individual), and if a reasonable quota is unavailable, dividing each salesperson’s performance by a territory or unit average makes sense (e.g., Shi et al. 2017). Of note, this is the same rationale that drives the use of sales-to-quota ratios, but the relativization described here can be applied to any SPO. However, if performance in the study is defined as improvement relative to oneself (within-individual; e.g., Childs et al. 2019), assessing the difference between adjacent timepoints of a given SPO makes sense. Our conceptual model demonstrates that, even when a researcher feels constrained by the specific SPO a firm provides, they can still use simple transformations to align the SPO with their research design.
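As a minimal sketch of both transformations, the code below assumes a hypothetical monthly panel with columns salesperson_id, territory_id, month, and spo (any SPO a firm might provide); it computes a between-person version relative to the territory average and a within-person version relative to the salesperson’s own prior period.

```python
import pandas as pd

# Hypothetical monthly panel of any SPO (calls, revenue, win rate, etc.)
df = pd.read_csv("spo_panel.csv")  # columns: salesperson_id, territory_id, month, spo

# Between-person transformation: performance relative to the territory average
# for the same month (the same logic as sales-to-quota, but usable with any SPO).
df["spo_vs_territory"] = df["spo"] / df.groupby(["territory_id", "month"])["spo"].transform("mean")

# Within-person transformation: change from the salesperson's own prior period,
# suitable for within-individual research questions.
df = df.sort_values(["salesperson_id", "month"])
df["spo_change"] = df.groupby("salesperson_id")["spo"].diff()
```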

Guidelines for researchers

Embrace that salesperson performance is broader than sales performance

Despite the variety of SPO types that practitioners value, 88% of the articles in our literature review examine outcome-based performance. With only 12% of articles that use secondary data remaining to address the other three SPO categories, such SPOs appear underrepresented. We reiterate that the conceptualization of salesperson performance is, and should be considered, much broader than sales performance. Thus, researchers should consider a wider range of performance aspects (i.e., activity-, conversion-, and relationship-based). For instance, researchers might consider a “portfolio” approach to assessing salesperson performance (using various alternative SPOs to compare model results and conduct robustness tests; e.g., Gonzalez et al. 2014), especially in situations where it makes sense to view performance as consisting of processes, not merely outcomes. To maintain relevance, our perspective must move beyond outcome-based SPOs. Our conceptual model encourages scholars in this direction.

Reconsider predictors of salesperson performance

To expand on the above recommendation, and considering the sheer number of SPOs, we should question what we think we know about the antecedents of salesperson performance. Are these critical drivers—for example, selling-related knowledge, degree of adaptiveness, and cognitive aptitude (see Verbeke et al. 2011)—equally effective at driving each category of salesperson performance? Since most studies utilize outcome-based SPOs, we may not be able to answer this question. By treating salesperson performance too loosely, failing to provide the details of our SPO, or neglecting to consider whether observed relationships hold for alternative SPOs, we diminish our practical impact. Antecedent relationships to each performance type are a fruitful area for future research.

Consider secondary proxies for traditionally primary data

Considering the growing sophistication of CRM systems, we urge researchers to think creatively about ways they can operationalize previously primary variables using secondary data. A large portion of researchers collecting primary sales performance have used variations of the Behrman and Perreault (1982) items, which fall into five categories: sales objectives, technical knowledge, providing information, controlling expenses, and presentations. Using these categories as a guide, we see an opportunity for researchers to utilize secondary data proxies for these performance categories (see Fig. 2).

Fig. 2 Primary data categories and secondary proxy examples

The sales objectives category provides the most logical connection to secondary data because these items directly impact the firm’s bottom line; researchers can simply collect a secondary outcome-based measure (e.g., revenue). Technical knowledge refers to a salesperson’s knowledge about company products. Perhaps, rather than asking managers to report a salesperson’s product knowledge (e.g., Mariadoss et al. 2014), one could collect scores from product-training courses (e.g., easily conducted through Salesforce’s Trailhead). Providing information refers to a salesperson’s ability to execute company procedures. Rather than asking a salesperson about their ability to troubleshoot and resolve issues, one could collect the number of support tickets completed, the number outstanding, or the average completion time. Or, if accuracy of information recording is of interest (e.g., in the case of a loan officer or financial advisor), one could gather compliance data that the company records for regulatory purposes. Controlling expenses refers to responsibly using company funds. Secondary proxies could be found in expense systems like Concur or Lola, which are increasingly used in organizations. The data these expense systems collect would provide information about salesperson spending habits (e.g., credit card usage, car mileage) and can also be used to calculate profitability more accurately. Presentations, the last category of the performance scale, could be operationalized using average time in meetings as a proxy for customer engagement or using a conversion-based SPO as a proxy for presentation efficacy (e.g., revenue per meeting).

Extending beyond Behrman and Perreault (1982), we also see an opportunity for researchers to get creative with the use of secondary data. For example, instead of asking a salesperson about their social media use or social network data (Agnihotri et al. 2017; Bolander et al. 2015; Rapp et al. 2013), one could gather communication data registered in a social CRM application (e.g., Salesforce’s Chatter). Additionally, a researcher could use activity-based performance as a measure of “working hard” and conversion-based performance as a measure of “working smart” in lieu of survey-based measures (Fang et al. 2004).

Elaborate on theoretical definition of and justification for SPO

Replicability is the gold standard in scientific research (Jasny et al. 2011), but replicability is not merely the replication of relationships between vague concepts or meaningless data points. True scientific replication requires that the variables under examination have a clear meaning (Suddaby 2010). Yet we too often use the term “salesperson performance” in an overly abstract way. This tendency clouds the relationship between the term’s conceptual and theoretical meaning and its specific operationalization, impeding interpretation. The remedy is straightforward: authors should commit to fully explaining the nature of their SPOs (along with relevant contextual details) in all their work. Otherwise, replicability will suffer alongside managerial relevance.

Consider the possibility of subjective confounding of objective SPOs

While concrete, verifiable outcomes are thought to enhance an article’s relevance and contribution (Palmatier 2016), we reemphasize that not all secondary data is objective. It is important to consider the original source of the SPO. Data can originate from one of four sources: computer generated, human influenced, external human input, and self-reported human input. Computer generated data is automatically recorded (e.g., call records in a computer-based call system, sales transactions). The lack of human intervention in the recording of this data makes it the most objective source for SPOs.

However, the other sources of data may or may not be truly objective. Human influenced data, for example, combines computer and human generated data to create a new metric (e.g., sales-to-quota ratio, where sales is objective but the quota may not be). Entirely human generated metrics can come from external sources reporting about a specific salesperson (e.g., customer satisfaction, manager evaluations) or from the salesperson themselves (e.g., hours worked, calls made). Any data influenced, or entirely generated, by human input is susceptible to bias, error, or inaccuracy (e.g., manager favoritism, entry errors, poorly set quotas; Rich 2016). Self-reported human generated data should prompt the most skepticism, as the data is being reported by the individual most affected by the results. Investigation efforts can include discussions with management about the validity of salesperson reports or perhaps controlling for social desirability (Podsakoff et al. 2003). We see a need for more transparency about the SPO source in order to determine the objectivity of the SPO and establish confidence in a study’s findings.

Transform to align given SPO with research question and context

Scholars may believe that they are constrained by the SPO a firm is willing to provide. Although this is partially true, we emphasize that simple transformations can be performed on any SPO to better align it with the researcher’s needs. Our conceptual model highlights two such transformations: one that sets an individual’s performance relative to peers in the same office or territory (potentially controlling for territory differences in a way that aligns with between-individual research designs) and another that sets an individual’s performance relative to their own past performance (potentially controlling for territory differences in a way that aligns with within-individual designs). So, flexibility can be conscientiously exercised regardless of the SPOs a firm provides. Of course, we recognize that the firm providing the secondary data will be the final arbiter of what data the researcher receives, and one may not get everything one wishes for (multiple waves of performance data, for instance), but there is still value in researchers knowing what to ask for in order to maximize the value of the data they receive.

Conduct more repeated measures and within-person research

We see an immediate need to increase the amount of research examining salesperson performance with repeated measures and within-person designs. Both categories are underrepresented in the literature, impeding our understanding of causal relationships (Bolander et al. 2017) and within-person change (Childs et al. 2019).Footnote 6

Guidelines for reviewers

Ask more from authors conceptually and empirically

Our work provides value to journal reviewers and editors as well as researchers. Reviewers often value rigor in terms of analysis method. However, we suggest that the conceptual rigor of the construct under examination is just as important. When questions remain regarding the appropriateness of a firm’s quota, the presence of territory differences, appropriate referents, or relevant control variables, the eventual precision of our methodology is undermined. To help ensure strong empirical foundations, reviewers can request that authors provide more information about the elements (e.g., theoretical rationale, aspect of performance) found in the evaluative framework (Table 3) or indicate precisely where in Fig. 1 their SPO falls. Accordingly, rather than making assumptions about the veracity of a study, we encourage reviewers to ask for details about the data. It is surely appropriate to request more transparency from authors to gauge the strengths and weaknesses of a particular study more accurately.

Relatedly, reviewers can use the findings of our study to request evidence from authors that justifies their use of a specific SPO by empirically demonstrating that they are right to favor one SPO over another. For example, a reviewer might ask the authors to run the same model using a different SPO as a robustness test, or a reviewer may ask the researchers to account for additional control variables to show that the SPO choice is appropriate. To be clear, if an article claims that no territory differences exist, the truth of this claim could be easily demonstrated by including territory-level controls (e.g., population, office size, average income; e.g., Gonzalez et al. 2014) and showing them to be nonsignificant predictors of variance in performance.
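As a minimal sketch of such a check, assuming hypothetical variable names (revenue as the SPO, focal_predictor as the study’s variable of interest, and territory_population, office_size, and avg_income as territory-level controls), a reviewer could ask for an estimate along these lines and inspect whether the territory controls are nonsignificant.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional sample with territory-level descriptors
df = pd.read_csv("salesperson_sample.csv")

# If territory differences truly are negligible, the territory-level controls
# should be nonsignificant predictors of the raw (absolute-referent) SPO.
formula = "revenue ~ focal_predictor + territory_population + office_size + avg_income"
result = smf.ols(formula, data=df).fit()
print(result.summary())
```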

Be mindful of construct clarity

Suddaby (2010) highlights the importance of construct clarity in theory development and discusses the danger of creating a “Tower of Babel” in which researchers use different terms to describe the same underlying construct. We find the opposite problem as well (i.e., one term being used to describe different underlying constructs). By not properly articulating the theoretical definition of, or conceptual approach to, salesperson performance, our literature is equally susceptible to confounding effects. Indeed, the replication failures and conflicting findings in our literature could be the result of scholars researching fundamentally different constructs. This hinders scholars’ ability to accrue knowledge, which directly undermines theoretical and managerial relevance. Thus, reviewers play a vital role in demanding that articles contain details about the nature of the SPO being studied.

Conclusion

We sought to understand the variations in operationalizations of salesperson performance in the marketing and sales literature. We began by identifying the pros and cons of both primary and secondary data. Then, we directed our focus toward operationalizations of salesperson performance using secondary data. The lack of guidance in the literature led us to investigate both practitioner and scholar perspectives, which may increase the clarity with which we view this important issue. We find that salesperson performance is much broader than sales performance, and that a misalignment exists between managers and researchers in relation to SPOs. Our discussion and conceptual model bridge this divide by producing targeted recommendations for authors and reviewers in hopes of aligning practice, scholarship, and theory.