Abstract
Worker well-being is a hot topic in organizations, consultancy and academia. However, too often, the buzz about worker well-being, enthusiasm for new programs to promote it and interest in researching it have not been accompanied by universal enthusiasm for scientific measurement. Aiming to bridge this gap, we address three questions. To address the question ‘What is worker well-being?’, we explain that worker well-being is a multi-faceted concept that can be operationalized in a variety of constructs. We propose a four-dimensional taxonomy of worker well-being constructs to illustrate the concept’s complexity and classify ten constructs within this taxonomy. To answer the question ‘How can worker well-being constructs be measured?’, we present two aspects of measures: measure obtrusiveness (i.e., the extent to which obtaining a measure interferes with workers’ experiences) and measure type (i.e., closed question survey, word, behavioral and physiological). We illustrate the diversity of measures across our taxonomy and uncover some hitherto under-appreciated avenues for measuring worker well-being. Finally, we address the question ‘How should a worker well-being measure be selected?’ by discussing conceptual, methodological, practical and ethical considerations when selecting a measure. We summarize these considerations in a short checklist. It is our hope that, with this study, researchers – working in organizations, in academia or both – will feel more competent to find effective strategies for the measurement of worker well-being and, eventually, make policies and choices with a better understanding of what drives worker well-being.
“What we measure affects what we do; and if our measurements are flawed, decisions may be distorted.”
- Stiglitz, Sen and Fitoussi (2009, p. 7)
In light of changes in the conditions and nature of work, along with wider appreciation of the importance of social responsibility, organizations and consultancy firms have taken a serious interest in worker well-being (Scott and Spievack 2019). Indeed, an article in Forbes magazine on the human resources (HR) trends of 2020 suggests that worker well-being should be HR’s top priority, explaining, “Many companies concerned about the future of work focus on the massive disruption of jobs, automation, and workforce demographics. All of this is important but as HR leaders we need to start with making worker wellbeing a priority in 2020!” (Meister 2020). The current workplace wellness market is worth more than $45 billion and is projected to grow in the decades to come (Allied Market Research, 2020; Global Wellness Institute 2016). A lot of buzz surrounds worker well-being.
Numerous good reasons support widespread interest in worker well-being. The Forbes article highlights the purported role of worker well-being in workforce resilience and healthy organizational culture. Indeed, worker well-being may be an indicator of organizational ethics (Giacalone and Promislo 2010), and it has been found to predict other key indicators of organizational performance (Salas et al. 2017; Taris and Schaufeli 2015), such as productivity (Bellet et al. 2019; Oswald et al. 2015), absenteeism (Kuoppala et al. 2008), job performance (Judge et al. 2001) and voluntary turnover (Judge 1993; Wright and Bonett 2007; Wright and Cropanzano 1998). In addition to all of these ways in which worker well-being may be instrumentally valuable for advancing organizational objectives, worker well-being has great intrinsic value. Among the many things that might be thought to be good in themselves, human well-being is perhaps the one object most highly regarded as such (Aristotle, 350 C.E.; Mill 1859; Raz 1986; Sidgwick 1874). In sum, for many different reasons, the well-being of workers (and anyone else) is well worth pursuing.
Not only is there great interest in worker well-being among practitioners in organizations; academic researchers have also been paying close attention to the subject (Chen and Cooper 2014; Zheng et al. 2015). Over many decades, a rich and mature field of research has emerged, with thousands of psychological studies that conceptually and empirically study worker well-being constructs such as job satisfaction (Judge et al. 2017) and engagement (Macey and Schneider 2008; Purcell 2014). More recently, researchers from outside the psychological sciences have started to embrace the topic, including economics (Bryson et al. 2013; Golden and Wiens-Tuers 2006; Oswald et al. 2015), information systems (Gelbard et al. 2018; Jung and Suh 2019) and machine learning (Lawanot et al. 2019; LiKamWa et al. 2013). However, buzz about worker well-being, enthusiasm for new programs to promote it and interest in researching it have not been accompanied by universal enthusiasm for scientific measurement on the work floor. Hence, there remains a gap between the buzz surrounding worker well-being and the science needed to support it. Moreover, pushes to research and influence worker well-being without careful scientific measurement may be ineffective (Bartels et al. 2019). Even worse, these endeavors may be genuinely problematic: if researchers conceptualize or measure worker well-being inadequately, a scientific study may impede rather than advance the science that surrounds it (Podsakoff et al. 2016). If an organization touts purported improvements in well-being when, in fact, there has been no real improvement, it amounts to a case of “ethics washing” (Bietti 2020; Wagner 2018) and may hide the need for actual, meaningful improvement.
We believe that the gap between the burgeoning psychological science of worker well-being and the buzz around it in other domains is caused by the complexity of worker well-being itself and the vast array of approaches to measuring it, combined with the variety of goals stakeholders may have for studying it. For many, it can be difficult to choose, let alone confidently justify, the selection of a particular research strategy for studying worker well-being. The primary goal of this paper is to help close the gap by offering a conceptual overview of the science of worker well-being and practical guidance for leveraging it in light of the particular objectives motivating the study of worker well-being.
This work will be useful for researchers of various stripes. First and foremost, it will be relevant for research practitioners in organizations and academics outside the psychological sciences. After all, it is not straightforward to move from intuitions about the need to pay more attention to worker well-being to adequate conceptualization and rigorous measurement. Insufficient scientific rigor prevents policy and research initiatives from being as relevant as they could be. In addition, even experienced psychological researchers who have for years been administering well-being surveys – currently still the preferred instrument for measuring well-being (Nave et al. 2008) – may benefit from a synthesis of conceptual approaches and an enlargement of their inventory of approaches to measurement. As most psychologists are trained primarily in classic psychological methods (Aiken et al. 2008), a foray outside their comfort zone that updates them on methodological developments across other fields may prove useful. Inspiration to use new, innovative measures helps researchers address calls for increased attention to the construction of better well-being measures (Brulé and Maggino 2017; Diener 2012; Schneider and Schimmack 2009) and facilitates collaborative interdisciplinary research.
We build on prior work that offers direction through “the conceptual jungle that currently characterizes the employee wellbeing literature” (Mäkikangas et al. 2016, p. 62). For example, Johnson et al. (2018) and Zheng et al. (2015) offered conceptual overviews of employee well-being and provided a handful of examples of validated survey instruments that can be readily used. Focusing on particular well-being constructs, other academics have reviewed existing traditional survey measures (Cooke et al. 2016; Roscoe 2009; Schaufeli and Bakker 2010; Van Saane et al. 2003; Veenhoven 2017), non-survey measures (Luhmann 2017; Rossouw and Greyling 2020), or both (Diener 1994, 2012). Going beyond both disciplinary and construct borders, other academics have concentrated on the promise of certain devices (e.g., wearable devices, Chaffin et al. 2017; Eatough et al. 2016) and on measure categories for the measurement of psychological constructs in general (Ganster et al. 2017; Luciano et al. 2017). A commonality among these works is that they each focus on specific instruments or constructs. Such specificity is both a blessing and a curse. It is helpful for researchers wanting an overview of the state-of-the-science of a particular instrument (e.g., the use of physiological measures in organizational science) or construct (e.g., survey measures of job satisfaction), but of limited use for readers interested in the bigger picture. In our work, we therefore offer a comprehensive field guide, which we hope will have broad appeal. Notably, in its broad scope, our work is not meant as an exhaustive overview, but rather as an illustrative synthesis that maps the lay of the land and directs researchers to more specialized research. We structure our synthesis around three research questions:
(1) What is worker well-being?
(2) How can worker well-being be measured?
(3) How should a worker well-being measure be selected?
We will address the first question by offering a rationale about how to think about the concept of worker well-being and proposing a construct taxonomy that researchers can draw from to operationalize the concept of worker well-being. In doing so, we intend to disentangle the conceptual jungle that we find in the current literature. The second question will be addressed by creating an illustrative overview of measures for ten constructs that fall under the conceptual umbrella of worker well-being: life satisfaction, dispositional affect, moods, emotions, psychological well-being, job satisfaction, dispositional job affect, job moods, job emotions and work engagement. Looking beyond disciplinary borders, we will show that innovative, non-survey measures show promise for measuring worker well-being and, thereby, hopefully inspire researchers to enrich their methodological toolboxes. The third question will be answered by reviewing different conceptual, methodological, practical and ethical considerations for selecting a measure and doing so in ways that are responsive to the motivations driving researchers and practitioners to take an interest in worker well-being. These considerations are summarized into a checklist.
What Is Worker Well-Being?
Worker Well-Being and Related Concepts
We assume that worker well-being, at the most inclusive level, comes down to the general well-being of working people. To ensure clear conceptual boundaries, it is useful to differentiate worker well-being from concepts that relate to it. Worker well-being differs from employee well-being, as not all working people are employed by organizations, e.g., volunteers, independent contractors, executives and business owners. Even though most well-being constructs are relevant for both employees and non-employed working people, there may be some exceptions. For instance, the construct of satisfaction with pay will be inapplicable to volunteers. Satisfaction with co-workers and satisfaction with supervisor will likely be irrelevant concepts for independent contractors. Worker well-being differs from work-specific well-being, as constructs falling under that conceptual umbrella have their origin and application distinctively within the work context. For example, the construct of satisfaction with colleagues has its origin in the work context. Work-specific well-being’s manifestation can be within and outside the work context, e.g., a worker can feel contented about social relationships at work at the dinner table or before going to bed too, which can impact other parts of worker well-being. Worker well-being also differs from well-being at work, as this concept merely concerns the experience or state of well-being in the work setting or when working. Notably, the source of well-being at work can be unrelated to work. Workers could, for instance, be contemplating fights with their spouses or reliving a fun weekend while being at work. Finally, worker well-being differs from general individual-level well-being, as, in contrast to general individual-level well-being, it pertains specifically to the lives and experiences of working people.
A Taxonomy of Worker Well-Being Constructs
Many constructs have been proposed to operationalize the concept of worker well-being. We propose a theory-driven construct taxonomy that can be used to categorize constructs and map construct boundaries. To do this, we have drawn on eight other conceptual works on worker well-being (i.e., C. D. Fisher 2014; Ilies et al. 2007; Johnson et al. 2018; Page and Vella-Brodrick 2009; Taris and Schaufeli 2015; Warr 2012; Warr and Nielsen 2018; Zheng et al. 2015). We constructed our taxonomy along four dimensions: (i) philosophical foundation, (ii) temporal stability, (iii) scope and (iv) valence.
First, researchers have been adopting different philosophical foundations for conceptualizing well-being (Forgeard et al. 2011; Kashdan et al. 2008) and worker well-being (Taris and Schaufeli 2015). Among the most prevalent are the philosophical traditions of hedonia and eudaimonia (Linley et al. 2009; Ryan and Deci 2001). The hedonic approach regards well-being as the subjective experience of happiness (Diener et al. 1999; Veenhoven 2000); the eudaimonic approach focuses on the realization of human potential (Ryff 1989a; C. D. Ryff and Keyes 1995). The classification of constructs along the hedonic and eudaimonic continuum is not an easy task, because the different philosophical traditions are partially overlapping (C. D. Fisher 2014; Waterman 2008) and also empirically related (Linley et al. 2009; Pancheva et al. 2020). We categorize a construct as eudaimonic if intrinsic motivation, activation, purpose and meaningfulness are at its core (Ryan and Deci 2001). However, it is important that researchers acknowledge that a eudaimonic construct often contains a hedonic component.
Second, a classification can be made based on constructs’ temporal stability (Johnson et al. 2018; Mäkikangas et al. 2016). Well-being researchers have developed state-like and trait-like well-being constructs (C. D. Fisher 2014). State-like constructs are characterized by high variability over time due to high state variance, whereas trait-like constructs are characterized by greater stability over time (Schimmack et al. 2010). Some state-like constructs are truly momentary and last for a few minutes at most, while others remain somewhat stable (Kashdan et al. 2008). Some traits are inherited and are unlikely to change over a lifetime, while others are subject to some change over months or years (Johnson et al. 2018).
Third, two levels of scope of worker well-being constructs can be distinguished: context-free and domain-specific constructs (Ilies et al. 2007). Context-free constructs concern the worker’s life and experience in general, whereas domain-specific well-being constructs concern well-being within particular life domains (e.g., work, leisure, health, finance). Context-free and domain-specific (especially work-specific) constructs capture the bigger picture and subtleties of worker well-being, respectively (Page and Vella-Brodrick 2009).
Fourth, the valence of a construct can be considered. Some constructs are indicators of ill-being or the absence of well-being (e.g., burnout, stress, workaholism, negative affect), whereas others are indicators of well-being (e.g., work engagement, flow, job satisfaction, positive affect). Intuitively, the realization of constructs with positive valence is desirable, while the realization of those with negative valence is undesirable.
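The four dimensions just described can be summarized as a small data structure. The sketch below is purely illustrative (it is not an instrument from the literature); the four constructs shown are classified exactly as this paper describes them, and the helper `filter_constructs` is a hypothetical convenience for querying the taxonomy.

```python
from dataclasses import dataclass

# Illustrative sketch: each construct is positioned along the four
# taxonomy dimensions (philosophical foundation, temporal stability,
# scope and valence) described in the text.
@dataclass(frozen=True)
class WellBeingConstruct:
    name: str
    foundation: str  # "hedonic" or "eudaimonic"
    stability: str   # "state-like" or "trait-like"
    scope: str       # "context-free" or "domain-specific"
    valence: str     # "positive" or "negative"

# Classifications follow the paper's own descriptions of these constructs.
CONSTRUCTS = [
    WellBeingConstruct("life satisfaction", "hedonic", "trait-like",
                       "context-free", "positive"),
    WellBeingConstruct("psychological well-being", "eudaimonic", "trait-like",
                       "context-free", "positive"),
    WellBeingConstruct("job satisfaction", "hedonic", "trait-like",
                       "domain-specific", "positive"),
    WellBeingConstruct("work engagement", "eudaimonic", "trait-like",
                       "domain-specific", "positive"),
]

def filter_constructs(constructs, **criteria):
    """Return constructs matching all given dimension values."""
    return [c for c in constructs
            if all(getattr(c, dim) == value for dim, value in criteria.items())]

eudaimonic = filter_constructs(CONSTRUCTS, foundation="eudaimonic")
```

Such a representation makes the taxonomy queryable: a researcher can, for instance, retrieve all trait-like, domain-specific constructs when planning a study of stable work-related well-being.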
To illustrate, we describe ten worker well-being constructs that together span the breadth of the taxonomy. In light of its broad scope and alignment with our understanding of worker well-being, we build on Page and Vella-Brodrick’s (2009) Framework of Employee Mental Health. It revolves around three concepts: subjective well-being (SWB), psychological well-being (PWB) and workplace well-being (WWB). As made explicit by Page and Vella-Brodrick, the model does not include eudaimonic WWB constructs. However, we have included work engagement as a eudaimonic WWB construct. The constructs and their categorization are summarized in Table 1. Table 1 also contains a brief characterization based on the academic literature surrounding the individual constructs.
Subjective Well-Being
SWB encompasses diverse aspects of people’s evaluations of how their lives are going (Diener et al. 1999). Life satisfaction, the cognitive evaluation of satisfaction with life circumstances, is a trait-like, context-free, positive well-being construct (Diener et al. 1999). Affect, “people’s on-line evaluations of the events that occur in their lives” (Diener et al. 1999, p. 277), is constituted by both trait-like and state-like components, which can vary in their valence as well as their degree of arousal (active vs. passive, Barrett and Russell 1999). Some aspects of a person’s affect are relatively stable over time. Accordingly, dispositional affect is a trait-like construct and has been defined as “durable dispositions or long-term, stable individual differences that reflect a person’s general tendency to experience a particular affective state” (Gray and Watson 2007, p. 172). Other affect-related constructs within SWB follow a fluctuating course and classify as state-like (Gray and Watson 2007). For instance, moods are emotional states that can last days or even a week, occur relatively frequently, have nonspecific triggers and manifestations (e.g., positive mood), and are primarily manifested in behavior and subjective experiences. Emotions can last seconds to, at most, a few minutes, are intense, occur infrequently, have specific triggers and manifestations (e.g., anger, joy), and are manifested in different forms, e.g., behavior, subjective experiences, brain activity and physiological response (Gray and Watson 2007).
Psychological Well-Being
Although its various aspects can be studied individually, we treat PWB as a single construct concerning the “formulations of human development and existential challenges of life” (Keyes et al. 2002, p. 1007). PWB is often represented by Ryff’s (1989b) six-factor model, including self-acceptance, personal growth, purpose in life, positive relations with others, environment mastery, and autonomy. PWB is grounded in the eudaimonic well-being tradition, and is a trait-like, context-free, positive well-being construct (Page and Vella-Brodrick 2009; Ryan and Deci 2001).
Workplace Well-Being
Within WWB, we consider the constructs of job satisfaction, dispositional job affect, job emotions, job moods and work engagement. Job satisfaction can be defined as “a positive (or negative) evaluative judgment one makes about one’s job or job situation” (H. M. Weiss 2002, p. 175). Job satisfaction is a domain-specific, hedonic and trait-like construct (Bowling et al. 2005, 2010; C. D. Fisher 2014). As such, job satisfaction is the work-specific counterpart to the context-free life satisfaction construct we described above. Dispositional job affect, job moods and job emotions are equivalent to context-free conceptions of dispositional affect, moods and emotions, except for their narrower, work-specific focus. For example, we could be narrowly interested in a worker’s general affect while working (dispositional job affect) or more broadly interested in the worker’s general affect across life domains (dispositional affect, Ilies and Judge 2004). In contrast to these hedonic constructs, work engagement is a eudaimonic construct (C. D. Fisher 2014) concerned with how workers experience the exercise of their capacities at work. Work engagement has been defined in various ways, but is generally described as a domain-specific construct characterized by high levels of identification with work, positive affect, enthusiasm and energy (Bakker et al. 2008) and is theoretically distinct from other constructs, such as job satisfaction and organizational commitment (Schaufeli and Bakker 2010). It has been defined as the “harnessing of organization members’ selves to their work roles: in engagement, people employ and express themselves physically, cognitively, emotionally and mentally during role performances” (Kahn 1990, p. 694) and as “a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption” (Schaufeli et al. 2002, p. 74). Work engagement turns out to be relatively stable over time (Seppälä et al. 2015), hence its classification as trait-like.
How Can Worker Well-Being Constructs Be Measured?
Measure Classification
Constructs, like each of those just discussed, are put together to study real phenomena that cannot be observed directly and perfectly (Edwards and Bagozzi 2000). A measure, “an observed score gathered through self-report, interview, observation or some other means” (Edwards and Bagozzi 2000, p. 156), can therefore be regarded as the empirical equivalent of a construct. A measure thus does not necessarily perfectly reflect the well-being construct it is intended to measure; rather, it provides an instrument-dependent representation of it. In this article, we introduce two classifications that will prove important for selecting the most appropriate measure for a given construct. The first classification concerns the extent to which obtaining a measure interferes with workers’ affairs and experience, and the second considers the different types of data a researcher can obtain.
Measure Obtrusiveness
Regarding the extent of interference with workers’ affairs and experience, we distinguish between three measurement approaches for worker well-being: unobtrusive measurement, reaction-based obtrusive measurement and observation-based obtrusive measurement. Unobtrusive measures are methods that allow researchers to gain insights about subjects without the researcher, the subject, or others intruding into the research context; they draw their data from naturally occurring circumstances and events (Hill et al. 2014; Webb et al. 1966). Obtrusive measures, methods characterized by active cooperation of subjects (Hill et al. 2014; Webb et al. 1966), come in two forms. Reaction-based obtrusive measures are based on instruments that ask subjects for conscious, subjective input, whereas observation-based obtrusive measures are based on instruments that collect data automatically but require subjects to operate them. In other words, observation-based measures rely solely on the practical cooperation of subjects, and reaction-based measures rely both on practical cooperation and subjects’ effort to offer responses.
Measure Types
We distinguish between four types of measures: closed question measures, word measures, behavioral measures and physiological measures (Luciano et al. 2017). We will describe both the general characteristics of these types, as well as their relations to the obtrusiveness classifications just discussed.
Closed survey question measures are obtained from workers’ responses to one or more survey questions or statements with a finite number of answer categories, as with multiple-choice questions and discrete number scales. Most often, self-report closed survey question measures are used, which are inherently reaction-based obtrusive. In light of common method biases associated with self-report measures, well-being researchers have used other-report (e.g., spouses, friends, children, colleagues) well-being measures to validate self-report measures (Schneider and Schimmack 2010). Other-report measures are observation-based obtrusive, because, even though subjects do not have to exert cognitive effort, they must cooperate with a researcher to identify and contact relevant others who can fill out a survey.
Two classes of survey measures can be distinguished: attitudinal and experience-based measures (Grube et al. 2008). Attitudinal measures are designed to uncover a person’s overall, usually retrospective assessment of trait-like attitudes, such as life and job satisfaction. Experience-based measures are designed to measure a person’s momentary state, e.g., moods and emotions. Typical experience-based survey instruments pose questions about the whereabouts, events, company, activity and feelings of the respondent for several days, either multiple times during the day (i.e., the experience sampling method) or at the end of the day (i.e., the day reconstruction method; Kahneman et al. 2004).
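The aggregation logic behind experience sampling can be sketched computationally. The sketch below is a hypothetical illustration: the field names, the 1–5 mood scale and the `person_level_mood` helper are assumptions for demonstration, not features of any particular instrument. It averages momentary reports within each day first, so that days with more prompts do not dominate the person-level score.

```python
from statistics import mean

# Hypothetical experience-sampling records: several mood prompts per day,
# each a rating on an assumed 1-5 scale (all field names are illustrative).
responses = [
    {"worker": "A", "day": 1, "prompt": 1, "mood": 4},
    {"worker": "A", "day": 1, "prompt": 2, "mood": 3},
    {"worker": "A", "day": 2, "prompt": 1, "mood": 5},
    {"worker": "B", "day": 1, "prompt": 1, "mood": 2},
]

def person_level_mood(records, worker):
    """Aggregate a worker's momentary mood reports into one summary score.

    Averages within each day first, then across days, so that a day with
    many prompts does not outweigh a day with few.
    """
    days = {}
    for r in records:
        if r["worker"] == worker:
            days.setdefault(r["day"], []).append(r["mood"])
    return mean(mean(moods) for moods in days.values())

score_a = person_level_mood(responses, "A")  # day means 3.5 and 5 -> 4.25
```

Analogous two-step aggregation (within-day, then across days) is a common way to summarize state-like constructs such as job moods into a person-level score.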
Word measures are derived from spoken or written text, and can represent the relevant semantic content of the speech or writing (i.e., meaning), or the pattern of speech (Luciano et al. 2017). Word data can be manually analyzed by independent coders or processed automatically by computer software and can be collected either obtrusively (e.g., administering open-ended survey questions) or unobtrusively (e.g., scraping social media data).
Behavioral measures consist of observations of individual behavior, and come in many forms, e.g., data on movement, position, body posture, facial expression, online behavior, substance abuse, etc. (Luciano et al. 2017). Behavioral measures can be either unobtrusive (e.g., publicly available video data) or observation-based obtrusive (e.g., video data obtained from a lab experiment).
Physiological measures are markers that reveal the state of a person’s body or its subsystems (Luciano et al. 2017). Building on the work of Akinola (2010) on the most widely used physiological measures in organizational sciences, we distinguish four prominent subcategories: endocrine activity (e.g., cortisol, testosterone, oxytocin, dopamine and serotonin), electrodermal activity (e.g., skin conductance response, skin conductance level), cardiovascular activity (e.g., blood pressure, heart rate, cardiac efficiency) and neurological activity (e.g., frontal lobe activation). These markers reflect changes in the autonomic nervous system, a part of the peripheral nervous system that serves regulatory functions by helping the human body adapt to internal and external demands (Akinola 2010).
Because physiological data is not recorded naturally, researchers typically rely on observation-based obtrusive measures. The obtrusiveness of these instruments varies substantially (Eatough et al. 2016; Ilies et al. 2016). Devices such as arm-cuff digital blood pressure monitors, fingertip pulse oximeters and cotton swab saliva sampling require substantial effort for subjects (e.g., attaching a device to the body) and can be uncomfortable in use (e.g., some activities could be inhibited by the device), while devices such as wearable bracelets and smartphone applications are almost completely hassle-free.
Illustrations of Measures
In Table 2, we provide illustrations of measures for the constructs falling into the framework that we used for illustrating our construct taxonomy. More information on these measures can be found in Appendix A. We echo our previous disclaimer that the list of measurement options is non-exhaustive and will not cover all potential conceptual nuances. In addition, we want to note that the different measures vary in the degree to which they are valid for the constructs they are purported to measure. For example, evaluative constructs such as job satisfaction and life satisfaction are likely better measured using subjective measures, while affective constructs such as emotions and moods can be gauged with both subjective and objective measures (Brulé and Maggino 2017). We will discuss the validity of measures in the next section. Finally, several rows in Table 2 contain blank cells. These are areas where, as of yet, there has been little or no work applying the relevant measurement approach to the type of construct in question. As such, these blank areas signify current opportunities in the study of worker well-being.
How Should a Worker Well-Being Measure Be Selected?
With such a wide assortment of measures for worker well-being constructs, the next question is how to choose among them. In this section, we will show why demonstrating measurement fit, “the degree of alignment between how a construct is conceptualized and measured” (Luciano et al. 2017, p. 593), is a challenging task. Luciano et al.’s (2017) framework of measurement fit illustrates that researchers have to go through various (iterative) steps to make well-reasoned measurement decisions: researchers must explicate the construct thoroughly (e.g., map a construct’s content, dimensionality, stability and hypothesized manifestation), determine measurement features (e.g., identify a measure’s content, source and aggregation strategy), consider the research context (e.g., state-of-the-science and research purpose), weigh the ethics of a proposed research plan (e.g., privacy, discrimination, paternalism) and assess the feasibility, accuracy and completeness of a measure. Considering space concerns, we cannot follow Luciano et al.’s full model for each worker well-being construct. Instead, we sketch a high-level picture of the various relevant considerations for choosing a measure and refer the reader to dedicated works for more elaborate discussion. We summarize this overview in the form of a checklist in Table 3.
Conceptualization
One must decide on the construct or constructs of study before a measure can be selected. This decision is driven by many factors, e.g., the objective of the research, the employment situation of the workers under study, the research context and the research question(s). For example, when researchers are interested in evaluating the well-being enhancing potential of a new coffee machine, they are well-advised to select a very narrow, domain-specific construct, such as satisfaction with facility management, rather than a broader construct, such as job satisfaction. For another example, when researchers are tasked to evaluate the well-being enhancing potential of receiving a compliment, they may want to consider a more dynamic, state-like well-being construct, such as job emotions, rather than a stable, trait-like well-being construct, such as job satisfaction, because the effects of compliments will likely be only temporary.
For the selection of appropriate worker well-being constructs, we recommend researchers measure as many well-being constructs as possible and maximize diversity. As measures of different constructs are not easily aggregated, we urge researchers to report well-being measures individually, in the spirit of a dashboard (Forgeard et al. 2011). Such broad measurement of worker well-being is relevant for several reasons.
First, since most researchers’ goals for studying worker well-being will be largely motivated by moral considerations and general goodwill, it is important to ensure sufficient breadth of measurement. The reason for this is that constructs vary in their intrinsic value. Most context-free well-being constructs reflect theoretically and philosophically grounded conceptions of human value, e.g., PWB (Aristotle, 350 C.E.; Zagzebski 1996), life satisfaction (Sumner 1996) and dispositional affect (Bentham 1789; de Lazari-Radek and Singer 2014; Feldman 2004). For domain-specific constructs such as job satisfaction and work engagement, the moral case favoring attention to these constructs is slightly harder to make, as they do not necessarily and inherently contribute to worker well-being. Work engagement, for instance, could have a dark side (Bakker et al. 2011; Dolan et al. 2012), as illustrated by research showing that it, in some cases, may instigate work-family conflict (Halbesleben 2011; Halbesleben et al. 2009). None of this is to deny that varieties of domain-specific well-being may frequently, or even usually, drive general well-being, and thus are valuable. It is just that the value of domain-specific well-being constructs depends on the contingencies of their causal interplay with context-free well-being constructs, which better reflect a worker’s overall well-being.
Researchers can mind such well-being trade-offs by measuring a diverse set of constructs. To illustrate, it may be necessary to study constructs with negative valence, such as burnout or work addiction, to uncover downsides of policies driven by the goal of increasing positive affect at work. An organization’s increasing focus on social responsibility may increase engagement, but with the unintended effect of enticing some workers to be too engaged in their work, giving rise to work addiction (Brieger et al. 2019). A dashboard covering a variety of domain-specific and context-free constructs allows researchers to keep all possible trade-offs in view. However, if the selection of constructs must be constrained, researchers may prioritize the constructs that are most likely to uncover those trade-offs.
Second, for researchers who are motivated to study worker well-being in the service of other objectives, keeping an open mind to the measurement of multiple worker well-being constructs will likely pay off. This holds for researchers with various research objectives, e.g., academics interested in testing theory or practitioners aiming to advance organizational performance through the enhancement of worker well-being. The reason is that worker well-being constructs can be related to other constructs and factors in unexpected ways. To illustrate, concerning antecedents of worker well-being, a meta-analysis by Steffens et al. (2017) showed that social identification processes relate more strongly to positive well-being constructs than to negative well-being constructs. Regarding outcomes, a meta-analysis by Erdogan et al. (2012) demonstrated that life satisfaction correlates significantly more strongly with organizational commitment and turnover intention than with job performance. In conclusion, having a sufficiently broad measurement scope will enable researchers to uncover the most interesting and important relationships among variables.
For researchers interested in making an academic contribution, there is an additional impetus for measuring multiple constructs. Like many research fields in social sciences, the field of worker well-being is burdened with the problem of construct proliferation: “research streams are built around ostensibly new constructs that are theoretically or empirically indistinguishable from existing constructs” (Shaffer et al. 2016, p. 81). For example, research suggested that employee engagement is not distinct from constructs like job burnout (Cole et al. 2012) and job satisfaction (Christian et al. 2011). Measuring multiple, ostensibly distinct constructs will help researchers to demonstrate or refute the theoretical and empirical distinctiveness of well-being constructs and thereby advance the science of worker well-being.
Once one or more constructs have been chosen, researchers are well-advised to turn to established literature to carefully define the construct and understand its conceptual nuances. Articles covering best practices for construct definition (Podsakoff et al. 2016) and conceptual works on the conceptualization and categorization of worker well-being (e.g., our current work, Johnson et al. 2018; Page and Vella-Brodrick 2009; Zheng et al. 2015) could be helpful. When constructs have been selected and adequately conceptualized, researchers can move on to devising the constructs’ measurement strategy.
Measurement
One of the most important considerations in choosing a suitable measure is a measure’s validity. Validity can be described as “the degree to which scores on an appropriately administered instrument support inferences about variation in the characteristic that the instrument was developed to measure” (Cizek 2012, p. 35). A measure must be the causal outcome of a construct (Borsboom et al. 2004), which means that it has to satisfy the following four conditions for causality: (i) the definition of the construct must be chosen and articulated independently of and prior to the measure, so that the relationship between the two is not merely tautological; (ii) there must be substantial association (or covariation) between the construct and the measure; (iii) the construct must be realized temporally prior to the measurement; and (iv) rival explanations for the relationship between the construct and the measure, such as history and instrumentation, must be eliminated (Edwards and Bagozzi 2000). In summary, for a measure to be valid for a hypothesized construct, it must be the hypothesized construct – and only the hypothesized construct – that causes the measure.
Proving that a measure is valid requires a process of theoretical and empirical validation (Borsboom et al. 2004), “the ongoing process of gathering, summarizing, and evaluating relevant evidence concerning the degree to which that evidence supports the intended meaning of scores yielded by an instrument and inferences about standing on the characteristic it was designed to measure” (Cizek 2012, p. 35). Researchers interested in using a previously developed measure are therefore advised to understand how that measure has been validated and assess the adequacy of the validation process. Researchers aiming to innovate in the development of a new measure must accept the responsibility of performing, or otherwise ensuring, a proper process of measure validation. Either way, understanding the validation process is essential to avoid relying on misleading indicators of the relevant constructs and drawing specious conclusions.
Theoretical (or content) validation starts with a logical analysis of measure-construct fit, often performed by academic and/or practitioner subject matter experts (Bornstein 2011; Luciano et al. 2017). This is where the preparatory work from the conceptualization phase comes into play: a high-quality conceptual definition and a deep understanding of conceptual nuances are useful for making methodological decisions. For instance, as the definition of life satisfaction suggests that a valid measure of this construct should be based on a cognitive evaluation and will typically remain stable over time (Diener 1994; Shin and Johnson 1978), one can safely forego dynamic, unobtrusive or observation-based obtrusive word, behavioral or physiological measures, and narrow the methodological scope to reaction-based obtrusive, subjective measures, such as surveys and interviews. In sharp contrast, one is well advised to consider more objective behavioral and physiological measures when the measurement of affective states or other state-like constructs is of interest, as their conceptual definition permits it (Mauss and Robinson 2009). If the research context necessitates survey measurement of affect, one would need to accommodate the state-like nature of affect by focusing on experience-based measures instead of attitudinal measures (C. D. Fisher 2000).
After theoretical validation, a measure must be empirically validated. This is traditionally done by demonstrating adequate reliability of a measure and demonstrating appropriate statistical associations between a new measure and measures of related or unrelated constructs (Bornstein 2011; for early examples, see Campbell and Fiske 1959; Cronbach and Meehl 1955). More specifically, one can examine a new measure’s convergent validity, discriminant validity, predictive validity and incremental validity in relation to other validated measures, or design experiments to uncover biases in measures and to unravel the underlying mechanisms causing the measurements observed (Bornstein 2011; Edwards 2003). Often, one can draw on existing validation research to substantiate the empirical validity of a measure and pick appropriate validation tests (e.g., confirmatory factor analysis, internal consistency analysis, Edwards 2003). For example, in the development of new closed question job satisfaction measures, Ironson et al. (1989), Thompson and Phua (2012) and Bowling et al. (2018) all followed common practice (e.g., Clark and Watson 1995; Edwards 2003; Hinkin 1998) by examining the new measures’ convergent validity with (i.e., alignment with) existing job satisfaction scales and their discriminant validity from (i.e., departure from) measures of related, but distinct constructs.
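To make these validation statistics concrete, the sketch below computes an internal consistency estimate (Cronbach’s alpha) and convergent and discriminant correlations on simulated survey data. It is a minimal illustration of the kinds of checks involved, not a full validation workflow; the simulated constructs and variable names are ours, not drawn from any of the cited studies.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(42)
n = 500
latent = rng.normal(size=n)                                       # "true" job satisfaction
new_scale = latent[:, None] + rng.normal(scale=0.5, size=(n, 4))  # 4-item new measure
old_scale = latent + rng.normal(scale=0.6, size=n)                # established measure
unrelated = rng.normal(size=n)                                    # conceptually unrelated trait

alpha = cronbach_alpha(new_scale)                                 # reliability of the new scale
convergent = np.corrcoef(new_scale.mean(axis=1), old_scale)[0, 1]
discriminant = np.corrcoef(new_scale.mean(axis=1), unrelated)[0, 1]
# Expect high alpha, a high convergent correlation and a near-zero discriminant correlation.
```

In practice, such statistics would be reported alongside confirmatory factor analyses and the further validity evidence described in the guidelines cited above.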
During empirical validation, one should pay serious attention to the various kinds of measurement error that measures are susceptible to. For instance, closed question survey measures, word measures based on social media and physiological measures obtained from wearable sensors are all vulnerable to selection biases: subjects self-select themselves into participating in a survey, using social media and utilizing a wearable sensor (Ganster et al. 2017; Kern et al. 2016; Landers and Behrend 2015). Closed question survey measures and word based social media measures are both susceptible to social desirability biases (Marwick and Boyd 2011; Podsakoff et al. 2003; Wang et al. 2014), while physiological measures are not. Other sources of measurement error are relevant for specific measurement instruments. Surveys are vulnerable to careless responding, the tendency to respond to questions without regard to the content of items (Meade and Craig 2012; e.g., an intense experience sampling study, Beal 2015; lengthy batteries of job satisfaction questions, Kam and Meyer 2015). Word measures obtained through computer-aided textual analyses will be vulnerable to algorithm error, the pattern of error observed when multiple computer-aided textual analysis techniques produce different measures using the same methods and texts (McKenny et al. 2018; Short et al. 2010). Instruments collecting physiological data will inescapably introduce noise (Chaffin et al. 2017; Ganster et al. 2017). Researchers should ensure that they have the appropriate expertise to catch and mitigate the relevant sorts of errors.
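As one concrete screening example, careless responding in surveys is often flagged with a “longstring” index, the longest run of identical consecutive answers a respondent gives. The sketch below applies it to simulated 5-point Likert data; the cutoff of 10 is purely illustrative and should be tuned to the instrument and study at hand.

```python
import numpy as np

def longstring(responses):
    """Longest run of identical consecutive answers for each respondent."""
    scores = []
    for row in responses:
        best = run = 1
        for prev, cur in zip(row, row[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        scores.append(best)
    return np.array(scores)

rng = np.random.default_rng(0)
attentive = rng.integers(1, 6, size=(99, 40))   # varied answers on a 40-item survey
careless = np.full((1, 40), 3)                  # one straight-lining respondent
scores = longstring(np.vstack([attentive, careless]))
flagged = np.where(scores >= 10)[0]             # illustrative cutoff; tune per study
```

Flagged cases would typically be inspected, not dropped automatically, and combined with other indicators (e.g., response time, attention-check items) discussed in Meade and Craig (2012).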
We conclude with a note on the varying complexity of theoretically and empirically validating measures. As previously indicated, obtrusive measures such as closed questions, open questions and interviews are relatively straightforward to validate. For theoretical validation, this is mainly due to the deliberate alignment of the measure with the construct definition (e.g., during item pool generation and item purification, Brod et al. 2009; Hinkin 1998). By maximizing the semantic equivalence of the questions and the construct definition, researchers are able to eliminate alternative explanations prior to the collection of data. The theoretical validation of an unobtrusive measure is much less straightforward, because one has little to no influence over the way data is collected. With an unobtrusive measure we have much less assurance that the cause of the measurements is limited to factors relevant to the construct to be measured. Because of inherent differences between the instrument and the intended construct, one is forced to rely heavily on theory to make a case for why the content of a measure best resembles the construct of interest rather than related, but distinct constructs (Hill et al. 2014). The same pattern of difficulty holds for empirical validation. Empirical validation of obtrusive measures is relatively convenient, as a multitude of validation guidelines and validated measures have accumulated over time. Empirically validating an unobtrusive measure is much more challenging, as it is often impossible to find a well-validated unobtrusive measure for comparison and introducing a validated obtrusive (e.g., survey) measure in an unobtrusive measurement design takes away the valuable unobtrusive nature of the data (Hill et al. 2014).
Practicality
After conceptualization and measurement, researchers must consider the practicality of a measurement strategy in a given research context. In some way, all researchers must accommodate the preferences and demands of stakeholders, e.g., organizations, employees and institutions. At the same time, they must safeguard their scientific and ethical integrity. Finally, they must always remain mindful of their own resource limitations.
Organizations
Organizations may use their position as facilitator of worker well-being research to put pressure on researchers to do research as cheaply and efficiently as possible (Lapierre et al. 2018). For example, organizations may be hesitant to facilitate physiological measurement, as purchasing and distributing wearable devices are still much more costly than administering questionnaires (Akinola 2010; Ganster et al. 2017). Relatedly, organizations may prefer single-item measures over their psychometrically superior multi-item counterparts, as the opportunity costs associated with filling out multi-item measures are expected to be too high (G. G. Fisher et al. 2016; Gardner et al. 1998).
Beyond the need to deal with unequal power relations, it is important for researchers to be mindful of the values and leadership of an organization. In particular, for well-being research to have an effect on the well-being of workers, an organization’s leadership has to value both research and well-being (Nielsen et al. 2006; Nielsen and Noblet 2018). Without commitment from senior management, worker well-being research, regardless of its rigor, will be of limited value, as any resulting policy recommendations will not be implemented. Hence, it is advisable to start well-being research only if the topic is a strategic priority in the organization and there is a culture of receptivity to research and evidence-based practices. On the other hand, organizational change must always begin somewhere, and we should not lose hope that well-presented, well-timed research on a topic of moral importance may occasionally prove pivotal.
Workers
Research on worker well-being is, of course, typically motivated by a moral interest in the lives and experiences of workers. However, when striving to obtain valid measurements of worker well-being, researchers must not lose sight of the impact of measurement on those very workers whose well-being is to be measured. When choosing a well-being measurement strategy, the rights and interests of the research subjects matter for both practical and ethical reasons. Practically, without satisfactory buy-in from them, measures will be subject to substantial non-response or validity issues (Rogelberg et al. 2000). It is therefore advisable to accommodate workers’ tendency to dislike lengthy batteries of questions or long interviews, as participation can be unpleasant and distracting. Further ethical considerations emerge in light of the inherent moral significance of well-being research and the increasing convenience of collecting (big) data (see Israel and Hay 2006 for an extensive overview of research ethics for social science; Metcalf and Crawford 2016). Here we briefly touch upon important ethical considerations and direct readers to the referenced works for more information.
First of all, there is an obligation that will be obvious to academic researchers but perhaps less familiar to professionals in organizations: In order to ensure that research does not harm the workers who are the research subjects, researchers must adhere to the principles of research ethics (e.g., American Psychological Association 2017). In most instances, a review by an independent ethics committee is highly advisable (Wassenaar and Mamotte 2012), as any research conducted by an organization on employees of that same organization presents special problems, due to pressure employees may feel to “volunteer” for the research (Kim 1996). In cooperation with the ethics committee, researchers must be prepared to justify any measurement choices for which a less obtrusive, invasive or burdensome alternative might have been available.
Second, although it is sometimes neglected with novel forms of social research (Flick 2016), the informed consent of research subjects is of paramount importance. This requires that researchers adequately inform workers about the study, thereby taking into account their expectations and social norms (Brody et al. 2000; Manson and O’Neill 2007), and ensuring that their participation is voluntary, not coerced (Faden and Beauchamp 1986). The imperative of informed consent has implications for measurement strategies. When practical, it is advisable to use measures that have a clear and intuitive connection to the constructs to be measured (high face-validity), as is the case with most survey measures. This makes it more straightforward to fully inform workers about the connection between the research and their well-being, and hence reduces barriers to consent and willing participation. Where experimental design precludes full transparency in advance, a thorough debriefing session after the experiment becomes critically important (Brody et al. 2000; S. S. Smith and Richardson 1983; Sommers and Miller 2013). Novel or esoteric measurement techniques require extra care with regard to informing and debriefing, simply because these techniques may run contrary to workers’ expectations of the research process.
Third, it is essential to mind ethical considerations regarding autonomy and privacy, as respectively associated with obtrusive and unobtrusive measurement. Since obtrusive measures, by their nature, interfere with workers’ work and other experiences, use of such measures implicates the autonomy of workers. Significant interference with their lives should be limited as much as possible and explained clearly. This ensures that workers’ abilities to make their own choices are not unduly diminished, and not affected beyond the participation to which they have consented (Faden and Beauchamp 1986). On the flip side, unobtrusive measures raise special concerns about the privacy of workers, because, by the very design of the measurement methods, the subjects may not be aware of the information collected about them (Motro et al. 2020). Hence, it is incumbent upon researchers to ensure that workers are not monitored beyond what is relevant to the study, or beyond that to which they have consented. In general, pitfalls of both obtrusive and unobtrusive measures can be largely mitigated by diligent procedures for informed consent.
Institutions
Researchers must also navigate institutional pressure and legal requirements. The relevant regulations are very much dependent on the type of study and the location where the study is conducted. For instance, the General Data Protection Regulation (GDPR, European Parliament and Council 2016) has outlined strict rules on the analysis, collection, sharing and storage of individual-level data and, in particular, health data (e.g., biometric data, survey data on mental health, Guzzo et al. 2015). If the analysis of health data is of interest, researchers within an organization may want to consider a collaboration with external researchers who specialize in managing such data securely and responsibly. Finally, if the workers are unionized, proactive communication with union representatives is advisable. Although unions generally support initiatives to advance worker well-being, they may well be wary of measurement procedures that appear to diminish worker autonomy or privacy.
Researchers
In light of the many, often divergent preferences and demands of various stakeholders, researchers are forced to be pragmatic and accommodating. Making concessions, however, does not mean that the researchers’ own objectives should be discounted. The responsibility falls to researchers themselves to ensure that well-being is measured in a valid way and that, therefore, research questions are answered adequately. In addition, as researchers’ time, skills, and resources are finite, certain well-being measures will be infeasible in certain contexts. For instance, if an organization wants to evaluate a company-wide vitality promotion program using wearable devices and dynamic surveying, researchers must be certain to have enough time and resources available to prepare data collection (e.g., selecting vendors, customizing instruments, training subjects) and to analyze the data (e.g., collaborating with researchers in other fields, learning new analytical techniques; Chaffin et al. 2017; Eatough et al. 2016). Being pragmatic and minding resource limitations does not have to undermine the validity of measures. Researchers can draw from extant literature to select validated alternatives to the more time-consuming and costly measures. If one wants to measure job affect using the experience sampling method, and an organization suggests a cross-sectional survey to do this, researchers can suggest the day reconstruction method as a valid alternative (Dockray et al. 2010; Kahneman et al. 2004). If one wants to use well-established multi-item scales to measure well-being constructs, and an organization rejects this idea, researchers might want to suggest validated single-item measures (e.g., G. G. Fisher et al. 2016; Wanous et al. 1997) or shortened scales (e.g., Russell et al. 2004; Schaufeli et al. 2006, 2017).
This may allow investigation of several constructs with satisfactory precision instead of a single construct with higher precision, which should be a desirable trade-off in many contexts, for the reasons noted above.
In the process of managing stakeholders, good communication is key. Organizations, in particular, are not easily convinced by the presentation of statistical or theoretical evidence (Hodgkinson 2012). For this reason, it is essential to communicate about topics such as instrument validity, research design and construct choice in an understandable and persuasive manner (Lapierre et al. 2018). We refer the reader to research on the communication of evidence-based practice (Baughman et al. 2011; Highhouse et al. 2017; Hodgkinson 2012; Lapierre et al. 2018; Zhang 2018) and on bridging the academia-practice gap (Banks et al. 2016; Rynes 2012) for best practices.
Conclusion
Our work aimed at answering three questions that are relevant for the study of worker well-being. We addressed the first question, What is worker well-being?, by proposing a construct taxonomy based on four dimensions: philosophical foundation, scope, stability and valence. We illustrated the taxonomy by classifying ten worker well-being constructs within it. By synthesizing the many conceptual models of worker well-being, the taxonomy helps researchers to make sense of the burgeoning but messy field of worker well-being.
To answer the question, How can worker well-being constructs be measured?, we offered a multi-disciplinary overview of traditional (e.g., surveys and interviews) and novel data sources (e.g., wearable sensors) that can be leveraged to measure worker well-being. Therein, we distinguished four broad types of data sources: closed question survey, word, behavioral and physiological measures, and further classified them as either unobtrusive, reaction-based obtrusive or observation-based obtrusive.
Taken together, our construct taxonomy and our overview of existing measurement approaches uncovered some notable gaps in the current science of worker well-being. In particular, we showed that several of the most important work-specific well-being constructs have been measured primarily using closed question surveys. In light of the fact that the context-free counterparts of these constructs have undergone innovation in measurement methodology, we encourage researchers to draw from other research strands to develop new measures of these important work-specific constructs. More generally, we hope that our overview inspires researchers to think outside their current methodological toolboxes and to foster collaborations outside the social sciences to leverage new measurement and data collection techniques.
To address the final question, How should a worker well-being measure be selected?, we described the importance of good conceptualization, rigorous operationalization and pragmatic stakeholder management. Because of its broad scope, this discussion was not intended to be exhaustive. Instead, we hope that the discussion provides a useful map of the most important considerations and guidance to detailed references on particular topics (e.g., construct definition, validation, ethics, and communication).
In conclusion, with our work, we intended to bridge the gap between the popular buzz about worker well-being and the extant scientific research about it. Our work has provided guidelines to go beyond the ad-hoc study of worker well-being and conduct rigorous, responsible research. It is our hope that researchers, whether working in organizations, in academia or both, will feel more competent to take the well-being of workers into account, eventually permitting them to better understand what drives worker well-being and design policies to promote it accordingly.
Availability of Data and Material
Not applicable.
Code Availability
Not applicable.
Notes
Readers interested in the ethics of worker well-being may wonder why we have not considered the capability approach to well-being (Robeyns 2005). The reason, in short, is that we are addressing readers who are interested in well-being outcomes, in contrast to the general capabilities that support those outcomes. Although capabilities (and their distribution) have been held to be fundamentally important for justice (Nussbaum 2011), and thus central to politics and public policy, we are more concerned with the effects of conditions and policies of work and employment. Hence, we focus on well-being, as a lived outcome, rather than the capability for living well (cf. Veenhoven 2000).
We excluded hybrid constructs – broad constructs that integrate hedonic and eudaimonic constructs, such as human flourishing (Huta and Waterman 2014) and thriving (Spreitzer et al. 2005) – from our selection of worker well-being constructs. Considering their broad scope, hybrid constructs often lack clear theoretical justification and are characterized by fuzzy construct boundaries (Martela 2017).
Although it is most common to consider job satisfaction in terms of workers’ cognitive evaluations of their jobs, it is also worthwhile to examine workers’ affective psychological responses or feelings specifically regarding their jobs (Thompson and Phua 2012). If the affective component is emphasized, the resulting job satisfaction construct comes close to the dispositional job affect construct we discuss next, which also concerns workers’ feelings while at work, though not necessarily about work.
Some researchers have contended that state job satisfaction and state work engagement should be distinguished alongside the more trait-like conceptions of job satisfaction and work engagement, as workers’ levels on the two constructs may vary from week to week or from day to day (for discussions on state job satisfaction, see Grube et al. 2008; Ilies and Judge 2004; Niklas and Dormann 2005; for discussions on state work engagement, see Bakker and Bal 2010; Xanthopoulou et al. 2008; Sonnentag 2003). To limit the scope of the article, we focus on the trait-like constructs of job satisfaction and work engagement, which have traditionally been the most common focus of research.
In ethical theory, it is common to distinguish between what is intrinsically valuable and what is instrumentally valuable. Objects are intrinsically valuable when they are good in themselves and worth pursuing independent of any other goals. In contrast, objects are merely instrumentally valuable when their value depends on their capacity to help realize other things that are valuable. Of course, a single object can have both intrinsic and instrumental value (for discussion and finer distinctions, see Korsgaard 1983).
In light of the conceptual difference between affective and cognitive job satisfaction (Thompson and Phua 2012; H. M. Weiss 2002), researchers have to be mindful that some measures of job satisfaction relate more strongly to the cognitive component and others more to the affective component (Kaplan et al. 2009). Researchers thus can view job satisfaction measures on a continuum from primarily tapping into cognitive job satisfaction to primarily tapping into affective job satisfaction (C. D. Fisher 2000).
References
Abdel-Khalek, A. M. (2006). Measuring happiness with a single-item scale. Social Behavior and Personality: An International Journal, 34(2), 139–150.
Aiken, L. S., West, S. G., & Millsap, R. E. (2008). Doctoral training in statistics, measurement, and methodology in psychology: Replication and extension of Aiken, West, Sechrest, and Reno’s (1990) survey of PhD programs in North America. American Psychologist, 63(1), 32–50.
Akinola, M. (2010). Measuring the pulse of an organization: Integrating physiological measures into the organizational scholar’s toolbox. Research in Organizational Behavior, 30, 203–223.
Alharthi, R., Guthier, B., Guertin, C., & El Saddik, A. (2017). A dataset for psychological human needs detection from social networks. IEEE Access, 5, 9109–9117.
Allied Market Research. (2020). Increase in prevalence of chronic diseases, and surge in awareness and implementation of wellness programs by employers drive the growth of the global workplace wellness market. https://www.globenewswire.com/news-release/2020/08/25/2083396/0/en/Workplace-Wellness-Market-to-Reach-Valuation-of-74-00-Billion-by-2026-at-6-1-CAGR.html.
Amabile, T. M., Barsade, S. G., Mueller, J. S., & Staw, B. M. (2005). Affect and creativity at work. Administrative Science Quarterly, 50(3), 367–403.
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code/
Andrews, F. A., & Withey, S. B. (1976). Social indicators of well-being in America: The development and measurement of perceptual indicators. Plenum Press.
Aristotle. (350 C. E.). Nicomachean Ethics (R. Crisp, Trans.). Cambridge: Cambridge University Press.
Bakker, A. B., Albrecht, S. L., & Leiter, M. P. (2011). Work engagement: Further reflections on the state of play. European Journal of Work and Organizational Psychology, 20(1), 74–88.
Bakker, A. B., & Bal, P. M. (2010). Weekly work engagement and performance: A study among starting teachers. Journal of Occupational and Organizational Psychology, 83(1), 189–206.
Bakker, A. B., Schaufeli, W. B., Leiter, M. P., & Taris, T. W. (2008). Work engagement: An emerging concept in occupational health psychology. Work & Stress, 22(3), 187–200.
Banks, G. C., Pollack, J. M., Bochantin, J. E., Kirkman, B. L., Whelpley, C. E., & O’Boyle, E. H. (2016). Management’s science–practice gap: A grand challenge for all stakeholders. Academy of Management Journal, 59(6), 2205–2231.
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614–636.
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1–68.
Barrett, L. F., & Russell, J. A. (1999). The structure of current affect: Controversies and emerging consensus. Current Directions in Psychological Science, 8(1), 10–14.
Bartels, A. L., Peterson, S. J., & Reina, C. S. (2019). Understanding well-being at work: Development and validation of the eudaimonic workplace well-being scale. PloS One, 14(4), e0215957.
Baughman, W. A., Dorsey, D. W., & Zarefsky, D. (2011). Putting evidence in its place: A means not an end. Industrial and Organizational Psychology, 4(1), 62–64.
Beal, D. J. (2015). ESM 2.0: State of the art and future potential of experience sampling methods in organizational research. Annual Review of Organizational Psychology and Organizational Behavior, 2(1), 383–407.
Beal, D. J., & Ghandour, L. (2011). Stability, change, and the stability of change in daily workplace affect. Journal of Organizational Behavior, 32(4), 526–546.
Bellet, C., De Neve, J.-E., & Ward, G. (2019). Does employee happiness have an impact on productivity? Saïd Business School Working Paper, 13.
Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Clarendon Press.
Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 210–219.
Bleidorn, W., & Peters, A. L. (2011). A multilevel multitrait-multimethod analysis of self-and peer-reported daily affective experiences. European Journal of Personality, 25(5), 398–408.
Bollen, J., Mao, H., & Pepe, A. (2011). Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena. Fifth International AAAI Conference on Weblogs and Social Media.
Borg, I., & Zuell, C. (2012). Write-in comments in employee surveys. International Journal of Manpower, 33(2), 206–220.
Bornstein, R. F. (2011). Toward a process-focused model of test score validity: Improving psychological assessment in science and practice. Psychological Assessment, 23(2), 532.
Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061–1071.
Bowling, N. A., Beehr, T. A., Wagner, S. H., & Libkuman, T. M. (2005). Adaptation-level theory, opponent process theory, and dispositions: An integrated approach to the stability of job satisfaction. Journal of Applied Psychology, 90(6), 1044–1053.
Bowling, N. A., Eschleman, K. J., & Wang, Q. (2010). A meta-analytic examination of the relationship between job satisfaction and subjective well-being. Journal of Occupational and Organizational Psychology, 83(4), 915–934.
Bowling, N. A., Wagner, S. H., & Beehr, T. A. (2018). The Facet Satisfaction Scale: An Effective Affective Measure of Job Satisfaction Facets. Journal of Business and Psychology, 33(3), 383–403.
Boyle, G. J. (1992). Multidimensional Mood State Inventory (MMSI). Unpublished, Department of Psychology, University of Queensland.
Boyle, G. J., Helmes, E., Matthews, G., & Izard, C. E. (2015). Measures of affect dimensions. In Measures of personality and social psychological constructs (pp. 190–224). Elsevier.
Bradburn, N. M. (1969). The structure of psychological well-being. Aldine.
Brayfield, A. H., & Rothe, H. F. (1951). An index of job satisfaction. Journal of Applied Psychology, 35(5), 307–311.
Brey, P. (2012). Well-being in philosophy, psychology, and economics. In The good life in a technological age (pp. 33–52). Routledge.
Brief, A. P., Burke, M. J., George, J. M., Robinson, B. S., & Webster, J. (1988). Should negative affectivity remain an unmeasured variable in the study of job stress? Journal of Applied Psychology, 73(2), 193–198.
Brieger, S. A., Anderer, S., Fröhlich, A., Bäro, A., & Meynhardt, T. (2019). Too much of a good thing? On the relationship between CSR and employee work addiction. Journal of Business Ethics, 1–19.
Brod, M., Tesler, L. E., & Christensen, T. L. (2009). Qualitative research and content validity: Developing best practices based on science and experience. Quality of Life Research, 18(9), 1263.
Brodeur, A., Clark, A. E., Fleche, S., & Powdthavee, N. (2020). Covid-19, lockdowns and well-being: Evidence from Google Trends. Journal of Public Economics, 193, 104346.
Brody, J. L., Gluck, J. P., & Aragon, A. S. (2000). Participants’ understanding of the process of psychological research: Debriefing. Ethics & Behavior, 10(1), 13–25.
Brulé, G., & Maggino, F. (2017). Metrics of subjective well-being: Limits and improvements. Springer.
Bryson, A., Barth, E., & Dale-Olsen, H. (2013). The effects of organizational change on worker well-being and the moderating role of trade unions. ILR Review, 66(4), 989–1011.
Burke, M. J., Brief, A. P., George, J. M., Roberson, L., & Webster, J. (1989). Measuring affect at work: Confirmatory analyses of competing mood structures with conceptual linkage to cortical regulatory systems. Journal of Personality and Social Psychology, 57(6), 1091.
Cammann, C., Fichman, M., Jenkins, D., & Klesh, J. (1979). The Michigan organizational assessment questionnaire. Unpublished manuscript. University of Michigan, Ann Arbor.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81.
Cantril, H. (1965). The pattern of human concerns. Rutgers University.
Chaffin, D., Heidl, R., Hollenbeck, J. R., Howe, M., Yu, A., Voorhees, C., & Calantone, R. (2017). The promise and perils of wearable sensors in organizational research. Organizational Research Methods, 20(1), 3–31.
Chen, P. Y., & Cooper, C. (2014). Wellbeing: A complete reference guide, work and wellbeing (Vol. 3). John Wiley & Sons.
Christian, M. S., Garza, A. S., & Slaughter, J. E. (2011). Work engagement: A quantitative review and test of its relations with task and contextual performance. Personnel Psychology, 64(1), 89–136.
Cizek, G. J. (2012). Defining and distinguishing validity: Interpretations of score meaning and justifications of test use. Psychological Methods, 17(1), 31.
Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309.
Cole, M. S., Walter, F., Bedeian, A. G., & O’Boyle, E. H. (2012). Job burnout and employee engagement: A meta-analytic examination of construct proliferation. Journal of Management, 38(5), 1550–1581.
Collins, S., Sun, Y., Kosinski, M., Stillwell, D., & Markuzon, N. (2015). Are you satisfied with life?: Predicting satisfaction with life from facebook. In N. Agarwal, K. Xu, & N. Osgood (Eds.), Social Computing, Behavioral-Cultural Modeling, and Prediction. SBP 2015. Lecture Notes in Computer Science (Vol. 9021, pp. 24–33). Springer.
Commission of the European Communities. (2017). Eurobarometer. GESIS Data Archive.
Cooke, P. J., Melchert, T. P., & Connor, K. (2016). Measuring well-being: A review of instruments. The Counseling Psychologist, 44(5), 730–757.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281.
Daniels, K. (2000). Measures of five aspects of affective well-being at work. Human Relations, 53(2), 275–294.
Dasgupta, P. B. (2017). Detection and analysis of human emotions through voice and speech pattern processing. International Journal of Computer Trends and Technology, 51(1), 1–3.
de Lazari-Radek, K., & Singer, P. (2014). The point of view of the universe: Sidgwick and contemporary ethics. Oxford University Press.
Demerouti, E., Bakker, A. B., & Ebbinghaus, M. (2002). From mental strain to burnout. European Journal of Work and Organizational Psychology, 11(4), 423–441.
Denson, T. F., Spanovic, M., & Miller, N. (2009). Cognitive appraisals and emotions predict cortisol and immune responses: A meta-analysis of acute laboratory social stressors and emotion inductions. Psychological Bulletin, 135(6), 823–853.
Depue, R. A., & Collins, P. F. (1999). Neurobiology of the structure of personality: Dopamine, facilitation of incentive motivation, and extraversion. Behavioral and Brain Sciences, 22(3), 491–517.
Dickerson, S. S., & Kemeny, M. E. (2004). Acute stressors and cortisol responses: A theoretical integration and synthesis of laboratory research. Psychological Bulletin, 130(3), 355–391.
Diener, E. (1994). Assessing subjective well-being: Progress and opportunities. Social Indicators Research, 31(2), 103–157.
Diener, E. (2012). New findings and future directions for subjective well-being research. American Psychologist, 67(8), 590–597.
Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The satisfaction with life scale. Journal of Personality Assessment, 49(1), 71–75.
Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. Psychological Bulletin, 125(2), 276–302.
Diener, E., Wirtz, D., Tov, W., Kim-Prieto, C., Choi, D., Oishi, S., & Biswas-Diener, R. (2010). New well-being measures: Short scales to assess flourishing and positive and negative feelings. Social Indicators Research, 97(2), 143–156.
Dimotakis, N., Scott, B. A., & Koopman, J. (2011). An experience sampling investigation of workplace interactions, affective states, and employee well-being. Journal of Organizational Behavior, 32(4), 572–588.
Dockray, S., Grant, N., Stone, A. A., Kahneman, D., Wardle, J., & Steptoe, A. (2010). A comparison of affect ratings obtained with ecological momentary assessment and the day reconstruction method. Social Indicators Research, 99(2), 269–283.
Dodds, P. S., Harris, K. D., Kloumann, I. M., Bliss, C. A., & Danforth, C. M. (2011). Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PloS One, 6(12), e26752.
Dolan, S. L., Burke, R. J., & Moodie, S. (2012). Is there a ‘dark side’ to work engagement? Effective Executive, 15(4), 12.
Drake, G., Csipke, E., & Wykes, T. (2013). Assessing your mood online: Acceptability and use of Moodscope. Psychological Medicine, 43(7), 1455–1464.
Eatough, E., Shockley, K., & Yu, P. (2016). A review of ambulatory health data collection methods for employee experience sampling research. Applied Psychology, 65(2), 322–354.
Edwards, J. R. (2003). Construct validation in organizational behavior research. In J. Greenberg (Ed.), Organizational behavior: The state of the science (2nd ed., pp. 327–371). Lawrence Erlbaum Associates Publishers.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155–174.
Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology: II. Journal of Personality and Social Psychology, 58(2), 342.
Erdogan, B., Bauer, T. N., Truxillo, D. M., & Mansfield, L. R. (2012). Whistle while you work: A review of the life satisfaction literature. Journal of Management, 38(4), 1038–1083.
European Parliament and Council. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union.
Faden, R. R., & Beauchamp, T. L. (1986). A history and theory of informed consent. Oxford University Press.
Feldman, F. (2004). Pleasure and the good life: Concerning the nature, varieties, and plausibility of hedonism. Oxford University Press on Demand.
Fisher, C. D. (2000). Mood and emotions while working: Missing pieces of job satisfaction? Journal of Organizational Behavior, 185–202.
Fisher, C. D. (2014). Conceptualizing and Measuring Wellbeing at Work. In P. Y. Chen & C. L. Cooper (Eds.), Wellbeing: A Complete Reference Guide (3rd ed., pp. 9–33). John Wiley & Sons Ltd.
Fisher, G. G., Matthews, R. A., & Gibbons, A. M. (2016). Developing and investigating the use of single-item measures in organizational research. Journal of Occupational Health Psychology, 21(1), 3.
Flick, C. (2016). Informed consent and the Facebook emotional manipulation study. Research Ethics, 12(1), 14–28.
Ford, M. T., Jebb, A. T., Tay, L., & Diener, E. (2018). Internet searches for affect-related terms: An indicator of subjective well-being and predictor of health outcomes across US states and metro areas. Applied Psychology: Health and Well-Being, 10(1), 3–29.
Fordyce, M. W. (1977). The happiness measures: A sixty-second index of emotional well-being and mental health. Unpublished manuscript, Edison Community College.
Forgeard, M. J., Jayawickreme, E., Kern, M. L., & Seligman, M. E. (2011). Doing the right thing: Measuring wellbeing for public policy. International Journal of Wellbeing, 1(1).
Frisch, M. (1988). Life Satisfaction Interview. Unpublished Manuscript.
Ganster, D. C., Crain, T. L., & Brossoit, R. M. (2017). Physiological Measurement in the Organizational Sciences: A Review and Recommendations for Future Use. Annual Review of Organizational Psychology and Organizational Behavior, 5(1), 267–293.
Gardner, D. G., Cummings, L. L., Dunham, R. B., & Pierce, J. L. (1998). Single-item versus multiple-item measurement scales: An empirical comparison. Educational and Psychological Measurement, 58(6), 898–915.
Gelbard, R., Ramon-Gonen, R., Carmeli, A., Bittmann, R. M., & Talyansky, R. (2018). Sentiment analysis in organizational work: Towards an ontology of people analytics. Expert Systems, 35(5), e12289.
Giacalone, R. A., & Promislo, M. D. (2010). Unethical and unwell: Decrements in well-being and unethical activity at work. Journal of Business Ethics, 91(2), 275–297.
Gill, A. J., Gergle, D., French, R. M., & Oberlander, J. (2008). Emotion rating from short blog texts. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1121–1124.
Gilles, I., Mayer, M., Courvoisier, N., & Peytremann-Bridevaux, I. (2017). Joint analyses of open comments and quantitative data: Added value in a job satisfaction survey of hospital professionals. PloS One, 12(3), e0173950.
Glick, W. H., Jenkins, G. D., & Gupta, N. (1986). Method versus substance: How strong are underlying relationships between job characteristics and attitudinal outcomes? Academy of Management Journal, 29(3), 441–464.
Global Wellness Institute. (2016). The Future of Wellness at Work. https://globalwellnessinstitute.org/industry-research/the-future-of-wellness-at-work/.
Golden, L., & Wiens-Tuers, B. (2006). To your happiness? Extra hours of labor supply and worker well-being. The Journal of Socio-Economics, 35(2), 382–397.
Golder, S. A., & Macy, M. W. (2011). Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science, 333(6051), 1878–1881.
Gray, E. K., & Watson, D. (2007). Assessing positive and negative affect via self-report. In J. A. Coan & J. J. B. Allen (Eds.), Handbook of emotional elicitation and assessment (pp. 171–183). Oxford University Press.
Grewen, K. M., Girdler, S. S., Amico, J., & Light, K. C. (2005). Effects of partner support on resting oxytocin, cortisol, norepinephrine, and blood pressure before and after warm partner contact. Psychosomatic Medicine, 67(4), 531–538.
Greyling, T., Rossouw, S., & Afstereo. (2019). Gross national happiness index. The University of Johannesburg and Afstereo [producers]. http://gnh.today
Grube, A., Schroer, J., Hentzschel, C., & Hertel, G. (2008). The event reconstruction method: An efficient measure of experience-based job satisfaction. Journal of Occupational and Organizational Psychology, 81(4), 669–689.
Gurin, G., Veroff, J., & Feld, S. (1960). Americans view their mental health: A nationwide interview survey.
Guzzo, R. A., Fink, A. A., King, E., Tonidandel, S., & Landis, R. S. (2015). Big data recommendations for industrial–organizational psychology. Industrial and Organizational Psychology, 8(4), 491–508.
Hackman, J. R., & Oldham, G. R. (1974). The job diagnostic survey: An instrument for the diagnosis of jobs and the evaluation of job redesign projects. Yale University, Department of Administrative Sciences.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16(2), 250–279.
Halbesleben, J. R. (2011). The consequences of engagement: The good, the bad, and the ugly. European Journal of Work and Organizational Psychology, 20(1), 68–73.
Halbesleben, J. R., Harvey, J., & Bolino, M. C. (2009). Too engaged? A conservation of resources view of the relationship between work engagement and work interference with family. Journal of Applied Psychology, 94(6), 1452.
Hancock, J. T., Landrigan, C., & Silver, C. (2007). Expressing emotion in text-based communication. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 929–932.
Harker, L., & Keltner, D. (2001). Expressions of positive emotion in women’s college yearbook pictures and their relationship to personality and life outcomes across adulthood. Journal of Personality and Social Psychology, 80(1), 112–124.
Harter, J. K., Schmidt, F. L., & Hayes, T. L. (2002). Business-unit-level relationship between employee satisfaction, employee engagement, and business outcomes: A meta-analysis. Journal of Applied Psychology, 87(2), 268–279.
Heller, D., Watson, D., & Ilies, R. (2006). The dynamic process of life satisfaction. Journal of Personality, 74(5), 1421–1450.
Hernandez, I., Newman, D. A., & Jeon, G. (2015). Twitter analysis: Methods for data management and a word count dictionary to measure city-level job satisfaction. In S. Tonidandel, J. Cortina, & E. King (Eds.), Data at Work: The Data Science Revolution and Organizational Psychology. Routledge.
Highhouse, S., Brooks, M. E., Nesnidol, S., & Sim, S. (2017). Is a .51 validity coefficient good? Value sensitivity for interview validity. International Journal of Selection and Assessment, 25(4), 383–389.
Hill, A. D., White, M. A., & Wallace, J. C. (2014). Unobtrusive measurement of psychological constructs in organizational research. Organizational Psychology Review, 4(2), 148–174.
Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121.
Hodgkinson, G. P. (2012). The politics of evidence-based decision making. The Oxford Handbook of Evidence-Based Management, 404–419.
Huelsman, T. J., Nemanick Jr., R. C., & Munz, D. C. (1998). Scales to measure four dimensions of dispositional mood: Positive energy, tiredness, negative activation, and relaxation. Educational and Psychological Measurement, 58(5), 804–819.
Huta, V., & Waterman, A. S. (2014). Eudaimonia and its distinction from hedonia: Developing a classification and terminology for understanding conceptual and operational definitions. Journal of Happiness Studies, 15(6), 1425–1456.
Iacus, S., Porro, G., Salini, S., & Siletti, E. (2020). An Italian Composite Subjective Well-Being Index: The Voice of Twitter Users from 2012 to 2017. Social Indicators Research, 1–19.
Ilies, R., Aw, S. S., & Lim, V. K. (2016). A naturalistic multilevel framework for studying transient and chronic effects of psychosocial work stressors on employee health and well-being. Applied Psychology, 65(2), 223–258.
Ilies, R., Dimotakis, N., & Watson, D. (2010). Mood, blood pressure, and heart rate at work: An experience-sampling study. Journal of Occupational Health Psychology, 15(2), 120–130.
Ilies, R., & Judge, T. A. (2004). An experience-sampling measure of job satisfaction and its relationships with affectivity, mood at work, job beliefs, and general job satisfaction. European Journal of Work and Organizational Psychology, 13(3), 367–389.
Ilies, R., Schwind, K. M., & Heller, D. (2007). Employee well-being: A multilevel model linking work and nonwork domains. European Journal of Work and Organizational Psychology, 16(3), 326–341.
Ilies, R., Scott, B. A., & Judge, T. A. (2006). The interactive effects of personal traits and experienced states on intraindividual patterns of citizenship behavior. Academy of Management Journal, 49(3), 561–575.
Ironson, G. H., Smith, P. C., Brannick, M. T., Gibson, W. M., & Paul, K. B. (1989). Construction of a Job in General scale: A comparison of global, composite, and specific measures. Journal of Applied Psychology, 74(2), 193–200.
Israel, M., & Hay, I. (2006). Research ethics for social scientists. Sage.
Izard, C. E., Dougherty, F. E., Bloxom, B. M., & Kotsch, N. E. (1974). The Differential Emotions Scale: A method of measuring the subjective experience of discrete emotions. Vanderbilt University Press.
Jaidka, K., Giorgi, S., Schwartz, H. A., Kern, M. L., Ungar, L. H., & Eichstaedt, J. C. (2020). Estimating geographic subjective well-being from Twitter: A comparison of dictionary and data-driven language methods. Proceedings of the National Academy of Sciences, 117(19), 10165–10171.
Johnson, S., Robertson, I., & Cooper, C. L. (2018). Well-being. Productivity and happiness at work. Palgrave Macmillan.
Judge, T. A. (1993). Does affective disposition moderate the relationship between job satisfaction and voluntary turnover? Journal of Applied Psychology, 78(3), 395.
Judge, T. A., & Locke, E. A. (1993). Effect of dysfunctional thought processes on subjective well-being and job satisfaction. Journal of Applied Psychology, 78(3), 475–490.
Judge, T. A., Thoresen, C. J., Bono, J. E., & Patton, G. K. (2001). The job satisfaction–job performance relationship: A qualitative and quantitative review. Psychological Bulletin, 127(3), 376.
Judge, T. A., Weiss, H. M., Kammeyer-Mueller, J. D., & Hulin, C. L. (2017). Job attitudes, job satisfaction, and job affect: A century of continuity and of change. Journal of Applied Psychology, 102(3), 356–374.
Jung, Y., & Suh, Y. (2019). Mining the voice of employees: A text mining approach to identifying and analyzing job satisfaction factors from online employee reviews. Decision Support Systems, 123, 113074.
Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692–724.
Kahneman, D., Krueger, A. B., Schkade, D. A., Schwarz, N., & Stone, A. A. (2004). A survey method for characterizing daily life experience: The day reconstruction method. Science, 306(5702), 1776–1780.
Kam, C. C. S., & Meyer, J. P. (2015). How careless responding and acquiescence response bias can influence construct dimensionality: The case of job satisfaction. Organizational Research Methods, 18(3), 512–541.
Kammann, R., & Flett, R. (1983). Affectometer 2: A scale to measure current level of general happiness. Australian Journal of Psychology, 35(2), 259–265.
Kaplan, S. A., Warren, C. R., Barsky, A. P., & Thoresen, C. J. (2009). A note on the relationship between affect(ivity) and differing conceptualizations of job satisfaction: Some unexpected meta-analytic findings. European Journal of Work and Organizational Psychology, 18(1), 29–54.
Kashdan, T. B., Biswas-Diener, R., & King, L. A. (2008). Reconsidering happiness: The costs of distinguishing between hedonics and eudaimonia. The Journal of Positive Psychology, 3(4), 219–233.
Katz, L. D. (1999). Dopamine and serotonin: Integrating current affective engagement with longer-term goals. Behavioral and Brain Sciences, 22(3), 527–527.
Kern, M. L., Park, G., Eichstaedt, J. C., Schwartz, H. A., Sap, M., Smith, L. K., & Ungar, L. H. (2016). Gaining insights from social media language: Methodologies and challenges. Psychological Methods, 21(4), 507.
Keshtkar, F., & Inkpen, D. (2009). Using sentiment orientation features for mood classification in blogs. 2009 International Conference on Natural Language Processing and Knowledge Engineering, 1–6.
Keyes, C. L., Shmotkin, D., & Ryff, C. D. (2002). Optimizing well-being: The empirical encounter of two traditions. Journal of Personality and Social Psychology, 82(6), 1007.
Kim, P. T. (1996). Privacy Rights, Public Policy, and the Employment Relationship. Ohio St. LJ, 57, 671.
Korsgaard, C. M. (1983). Two distinctions in goodness. The Philosophical Review, 92(2), 169–195.
Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases trust in humans. Nature, 435(7042), 673–676.
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110, 5802–5805.
Kreibig, S. D. (2010). Autonomic nervous system activity in emotion: A review. Biological Psychology, 84(3), 394–421.
Kulkarni, S. S., Reddy, N. P., & Hariharan, S. I. (2009). Facial expression (mood) recognition from facial images using committee neural networks. Biomedical Engineering Online, 8(1), 16.
Kunin, T. (1955). The construction of a new type of attitude measure. Personnel Psychology, 8(1), 65–77.
Kuoppala, J., Lamminpää, A., Liira, J., & Vainio, H. (2008). Leadership, job well-being, and health effects—A systematic review and a meta-analysis. Journal of Occupational and Environmental Medicine, 50(8), 904–915.
Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational, Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology, 8(2), 142–164.
Lapierre, L. M., Matthews, R. A., Eby, L. T., Truxillo, D. M., Johnson, R. E., & Major, D. A. (2018). Recommended practices for academics to initiate and manage research partnerships with organizations. Industrial and Organizational Psychology, 11(4), 543–581.
Lawanot, W., Inoue, M., Yokemura, T., Mongkolnam, P., & Nukoolkit, C. (2019). Daily stress and mood recognition system using deep learning and fuzzy clustering for promoting better well-being. 2019 IEEE International Conference on Consumer Electronics (ICCE), 1–6.
LiKamWa, R., Liu, Y., Lane, N. D., & Zhong, L. (2013). Moodscope: Building a mood sensor from smartphone usage patterns. Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, 389–402.
Linley, P. A., Maltby, J., Wood, A. M., Osborne, G., & Hurling, R. (2009). Measuring happiness: The higher order factor structure of subjective and psychological well-being measures. Personality and Individual Differences, 47(8), 878–884.
Liu, P., Tov, W., Kosinski, M., Stillwell, D. J., & Qiu, L. (2015). Do Facebook Status Updates Reflect Subjective Well-Being? Cyberpsychology, Behavior, and Social Networking, 18(7), 373–379.
Lucas, R. E., Diener, E., & Suh, E. (1996). Discriminant validity of well-being measures. Journal of Personality and Social Psychology, 71(3), 616–628.
Luciano, M. M., Mathieu, J. E., Park, S., & Tannenbaum, S. I. (2017). A Fitting Approach to Construct and Measurement Alignment: The Role of Big Data in Advancing Dynamic Theories. Organizational Research Methods, 21(3), 1–41.
Luhmann, M. (2017). Using big data to study subjective well-being. Current Opinion in Behavioral Sciences, 18, 28–33.
MacEwen, K. E., & Barling, J. (1988). Interrole conflict, family support and marital adjustment of employed mothers: A short term, longitudinal study. Journal of Organizational Behavior, 9(3), 241–250.
Macey, W. H., & Schneider, B. (2008). The meaning of employee engagement. Industrial and Organizational Psychology, 1(1), 3–30.
Mäkikangas, A., Kinnunen, U., Feldt, T., & Schaufeli, W. (2016). The longitudinal development of employee well-being: A systematic review. Work & Stress, 30(1), 46–70.
Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics. Cambridge University Press.
Martela, F. (2017). Can good life be measured? The dimensions and measurability of a life worth living. In metrics of subjective well-being: Limits and improvements (pp. 21–42). Springer.
Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133.
Maslach, C., Jackson, S. E., Leiter, M. P., Schaufeli, W. B., & Schwab, R. L. (1986). Maslach burnout inventory (Vol. 21). Consulting Psychologists Press.
Maslach, C., & Jackson, S. E. (1981). The measurement of experienced burnout. Journal of Organizational Behavior, 2(2), 99–113.
Mauss, I. B., Levenson, R. W., McCarter, L., Wilhelm, F. H., & Gross, J. J. (2005). The tie that binds? Coherence among emotion experience, behavior, and physiology. Emotion, 5(2), 175.
Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition and Emotion, 23(2), 209–237.
Mazur, A., & Booth, A. (1998). Testosterone and dominance in men. Behavioral and Brain Sciences, 21(3), 353–363.
McKenny, A. F., Aguinis, H., Short, J. C., & Anglin, A. H. (2018). What doesn’t get measured does exist: Improving the accuracy of computer-aided text analysis. Journal of Management, 44(7), 2909–2933.
McNair, D. M., Lorr, M., & Droppleman, L. F. (1981). Profile of mood states: EdITS manual. Educational and Industrial Testing Service.
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437–455.
Mehta, P. H., & Josephs, R. A. (2006). Testosterone change after losing predicts the decision to compete again. Hormones and Behavior, 50(5), 684–692.
Meister, J. (2020). Top 10 HR Trends That Matter Most In The 2020 Workplace. Forbes. https://www.forbes.com/sites/jeannemeister/2020/01/15/top-10-hr-trends-that-matter-most-in-the-2020-workplace/
Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1), 2053951716650211.
Mill, J. S. (1859). Utilitarianism and On Liberty (2nd ed.) (M. Warnock, Ed.). Blackwell Publishing.
Miner, A., Glomb, T., & Hulin, C. (2005). Experience sampling mood and its correlates at work. Journal of Occupational and Organizational Psychology, 78(2), 171–193.
Mishne, G. (2005). Experiments with mood classification in blog posts. Proceedings of ACM SIGIR, 19, 321–327.
Moniz, A., & Jong, F. (2014). Sentiment analysis and the impact of employee satisfaction on firm earnings. Proceedings of European Conference on Information Retrieval, 519–527.
Motro, D., Ye, B., Kugler, T., & Noussair, C. N. (2020). Measuring Emotions in the Digital Age. MIT Sloan Management Review, 61(2), 1–4.
Nagy, M. S. (2002). Using a single-item approach to measure facet job satisfaction. Journal of Occupational and Organizational Psychology, 75(1), 77–86.
Nave, C. S., Sherman, R. A., & Funder, D. C. (2008). Beyond self-report in the study of hedonic and eudaimonic well-being: Correlations with acquaintance reports, clinician judgments and directly observed social behavior. Journal of Research in Personality, 42(3), 643–659.
Neugarten, B. L., Havighurst, R. J., & Tobin, S. S. (1961). The measurement of life satisfaction. Journal of Gerontology.
Nielsen, K., Fredslund, H., Christensen, K. B., & Albertsen, K. (2006). Success or failure? Interpreting and understanding the impact of interventions in four similar worksites. Work & Stress, 20(3), 272–287.
Nielsen, K., & Noblet, A. (2018). Organizational Interventions for Health and Well-being: A Handbook for Evidence-based Practice. Routledge.
Niklas, C. D., & Dormann, C. (2005). The impact of state affect on job satisfaction. European Journal of Work and Organizational Psychology, 14(4), 367–388.
Nussbaum, M. C. (2011). Creating capabilities. Harvard University Press.
OECD. (2013). OECD guidelines on measuring subjective well-being. OECD Publishing.
Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of Labor Economics, 33(4), 789–822.
Page, K. M., & Vella-Brodrick, D. A. (2009). The ‘what’, ‘why’ and ‘how’ of employee well-being: A new model. Social Indicators Research, 90(3), 441–458.
Pancheva, M. G., Ryff, C. D., & Lucchini, M. (2020). An Integrated Look at Well-Being: Topological Clustering of Combinations and Correlates of Hedonia and Eudaimonia. Journal of Happiness Studies, 1–23.
Parfit, D. (1984). Reasons and persons. Oxford University Press.
Pavot, W., Diener, E., Colvin, C. R., & Sandvik, E. (1991). Further validation of the Satisfaction with Life Scale: Evidence for the cross-method convergence of well-being measures. Journal of Personality Assessment, 57(1), 149–161.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
Podsakoff, P. M., MacKenzie, S. P., & Podsakoff, N. P. (2016). Recommendations for creating better concept definitions in the organizational, behavioral, and social sciences. Organizational Research Methods, 19(2), 159–203.
Poncheri, R. M., Lindberg, J. T., Thompson, L. F., & Surface, E. A. (2008). A comment on employee surveys: Negativity bias in open-ended responses. Organizational Research Methods, 11(3), 614–630.
Purcell, J. (2014). Disengaging from engagement. Human Resource Management Journal, 24(3), 241–254.
Raz, J. (1986). The morality of freedom. Clarendon Press.
Rich, B. L., Lepine, J. A., & Crawford, E. R. (2010). Job engagement: Antecedents and effects on job performance. Academy of Management Journal, 53(3), 617–635.
Robeyns, I. (2005). The capability approach: A theoretical survey. Journal of Human Development, 6(1), 93–117.
Rogelberg, S. G., Luong, A., Sederburg, M. E., & Cristol, D. S. (2000). Employee attitude surveys: Examining the attitudes of noncompliant employees. Journal of Applied Psychology, 85(2), 284.
Rojas, M. (2017). The subjective object of well-being studies: Well-being as the experience of being well. In Metrics of subjective well-being: Limits and improvements (pp. 43–62). Springer.
Roscoe, L. J. (2009). Wellness: A review of theory and measurement for counselors. Journal of Counseling & Development, 87(2), 216–226.
Rossouw, S., & Greyling, T. (2020). Big data and happiness. In K. F. Zimmermann (Ed.), Handbook of Labor, Human Resources and Population Economics (pp. 1–35). Springer.
Russell, J. A., Weiss, A., & Mendelsohn, G. A. (1989). Affect grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493–502.
Russell, S. S., Spitzmuller, C., Lin, L. F., Stanton, J. M., Smith, P. C., & Ironson, G. H. (2004). Shorter can also be better: The abridged job in general scale. Educational and Psychological Measurement, 64, 878–893.
Ryan, R. M., & Deci, E. L. (2001). On happiness and human potentials: A review of research on hedonic and eudaimonic well-being. Annual Review of Psychology, 52(1), 141–166.
Ryff, C. D. (1989a). Beyond Ponce de Leon and life satisfaction: New directions in quest of successful ageing. International Journal of Behavioral Development, 12(1), 35–55.
Ryff, C. D. (1989b). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57(6), 1069–1081.
Ryff, C. D., & Keyes, C. L. M. (1995). The structure of psychological well-being revisited. Journal of Personality and Social Psychology, 69(4), 719–727.
Ryff, C. D., Singer, B. H., & Love, G. D. (2004). Positive health: Connecting well-being with biology. Philosophical Transactions of the Royal Society B: Biological Sciences, 359(1449), 1383–1394.
Rynes, S. L. (2012). The research-practice gap in I/O psychology and related fields: Challenges and potential solutions. The Oxford Handbook of Organizational Psychology, 1, 409–452.
Salas, E., Kozlowski, S. W., & Chen, G. (2017). A century of progress in industrial and organizational psychology: Discoveries and the next century. Journal of Applied Psychology, 102(3), 589–598.
Sandvik, E., Diener, E., & Seidlitz, L. (1993). Subjective well-being: The convergence and stability of self-report and non-self-report measures. Journal of Personality, 61(3), 317–342.
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research, 20(1), 81–91.
Schaufeli, W. B., & Bakker, A. B. (2010). Defining and measuring work engagement: Bringing clarity to the concept. In A. B. Bakker & M. P. Leiter (Eds.), Work engagement: A handbook of essential theory and research (pp. 10–24). Psychology Press.
Schaufeli, W. B., Bakker, A. B., & Salanova, M. (2006). The measurement of work engagement with a short questionnaire: A cross-national study. Educational and Psychological Measurement, 66(4), 701–716.
Schaufeli, W. B., Salanova, M., González-Romá, V., & Bakker, A. B. (2002). The measurement of engagement and burnout: A two sample confirmatory factor analytic approach. Journal of Happiness Studies, 3(1), 71–92.
Schaufeli, W. B., Shimazu, A., Hakanen, J., Salanova, M., & De Witte, H. (2017). An ultra-short measure for work engagement: The UWES-3 validation across five countries. European Journal of Psychological Assessment.
Schimmack, U., Krause, P., Wagner, G. G., & Schupp, J. (2010). Stability and change of well-being: An experimentally enhanced latent state-trait-error analysis. Social Indicators Research, 95(1), 19.
Schneider, L., & Schimmack, U. (2009). Self-informant agreement in well-being ratings: A meta-analysis. Social Indicators Research, 94(3), 363.
Schneider, L., & Schimmack, U. (2010). Examining sources of self-informant agreement in life-satisfaction judgments. Journal of Research in Personality, 44(2), 207–212.
Schneider, L., Schimmack, U., Petrican, R., & Walker, S. (2010). Acquaintanceship length as a moderator of self-informant agreement in life-satisfaction ratings. Journal of Research in Personality, 44(1), 146–150.
Schwartz, H. A., Sap, M., Kern, M. L., Eichstaedt, J. C., Kapelner, A., Agrawal, M., & Kosinski, M. (2016). Predicting individual well-being through the language of social media. Biocomputing 2016: Proceedings of the Pacific Symposium, 516–527.
Scott, M. M., & Spievack, N. (2019). Making the Business Case for Employee Well-Being.
Seder, J. P., & Oishi, S. (2012). Intensity of smiling in Facebook photos predicts future life satisfaction. Social Psychological and Personality Science, 3(4), 407–413.
Seppälä, P., Hakanen, J., Mauno, S., Perhoniemi, R., Tolvanen, A., & Schaufeli, W. (2015). Stability and change model of job resources and work engagement: A seven-year three-wave follow-up study. European Journal of Work and Organizational Psychology, 24(3), 360–375.
Seppälä, P., Mauno, S., Kinnunen, M. L., Feldt, T., Juuti, T., Tolvanen, A., & Rusko, H. (2012). Is work engagement related to healthy cardiac autonomic activity? Evidence from a field study among Finnish women workers. The Journal of Positive Psychology, 7(2), 95–106.
Sequeira, H., Hot, P., Silvert, L., & Delplanque, S. (2009). Electrical autonomic correlates of emotion. International Journal of Psychophysiology, 71(1), 50–56.
Shacham, S. (1983). A shortened version of the Profile of Mood States. Journal of Personality Assessment, 47(3), 305–306.
Shaffer, J. A., DeGeest, D., & Li, A. (2016). Tackling the problem of construct proliferation: A guide to assessing the discriminant validity of conceptually related constructs. Organizational Research Methods, 19(1), 80–110.
Shin, D. C., & Johnson, D. M. (1978). Avowed happiness as an overall assessment of the quality of life. Social Indicators Research, 5(1–4), 475–492.
Shiota, M. N., Neufeld, S. L., Yeung, W. H., Moser, S. E., & Perea, E. F. (2011). Feeling good: Autonomic nervous system responding in five positive emotions. Emotion, 11(6), 1368.
Short, J. C., Broberg, J. C., Cogliser, C. C., & Brigham, K. H. (2010). Construct validation using computer-aided text analysis (CATA): An illustration using entrepreneurial orientation. Organizational Research Methods, 13(2), 320–347.
Sidgwick, H. (1874). The methods of ethics (7th ed.). Hackett Publishing.
Smith, B. L., Brown, B. L., Strong, W. J., & Rencher, A. C. (1975). Effects of speech rate on personality perception. Language and Speech, 18(2), 145–152.
Smith, S. S., & Richardson, D. (1983). Amelioration of deception and harm in psychological research: The important role of debriefing. Journal of Personality and Social Psychology, 44(5), 1075.
Sommers, R., & Miller, F. G. (2013). Forgoing debriefing in deceptive research: Is it ever ethical? Ethics & Behavior, 23(2), 98–116.
Sonnentag, S. (2003). Recovery, work engagement, and proactive behavior: A new look at the interface between non-work and work. Journal of Applied Psychology, 88(3), 518–528.
Spector, P. E. (1985). Measurement of human service staff satisfaction: Development of the Job Satisfaction Survey. American Journal of Community Psychology, 13(6), 693–713.
Spector, P. E., Dwyer, D. J., & Jex, S. M. (1988). Relation of job stressors to affective, health, and performance outcomes: A comparison of multiple data sources. Journal of Applied Psychology, 73(1), 11.
Spielberger, C. D., & Gorsuch, R. L. (1983). State-trait anxiety inventory for adults: Manual and sample: Manual, instrument and scoring guide. Consulting Psychologists Press.
Spreitzer, G., Sutcliffe, K., Dutton, J., Sonenshein, S., & Grant, A. M. (2005). A socially embedded model of thriving at work. Organization Science, 16(5), 537–549.
Steffens, N. K., Haslam, S. A., Schuh, S. C., Jetten, J., & Van Dick, R. (2017). A meta-analytic review of social identification and health in organizational contexts. Personality and Social Psychology Review, 21(4), 303–335.
Stiglitz, J. E., Sen, A., & Fitoussi, J.-P. (2009). Report by the commission on the measurement of economic performance and social progress.
Sumner, L. W. (1996). Welfare, happiness, and ethics. Clarendon Press.
Taber, T. D. (1991). Triangulating job attitudes with interpretive and positivist measurement methods. Personnel Psychology, 44(3), 577–600.
Taris, T. W., & Schaufeli, W. (2015). Individual well-being and performance at work: A conceptual and theoretical overview. In M. Van Veldhoven & R. Peccei (Eds.), Well-being and performance at work: The role of context. (pp. 24–43). Psychology Press.
Thege, B. K., Tarnoki, A. D., Tarnoki, D. L., Garami, Z., Berczi, V., Horvath, I., & Veress, G. (2014). Is flourishing good for the heart? Relationships between positive psychology characteristics and cardiorespiratory health. Annals of Psychology, 31(1), 55–65.
Thomas, L. E., & Chambers, K. O. (1989). Phenomenology of life satisfaction among elderly men: Quantitative and qualitative views. Psychology and Aging, 4(3), 284.
Thompson, E. R., & Phua, F. T. (2012). A brief index of affective job satisfaction. Group & Organization Management, 37(3), 275–307.
Tracy, J. L., & Matsumoto, D. (2007). More than a thrill: Cross cultural evidence for spontaneous displays of pride in response to athletic success. Proceedings of the National Academy of Sciences, 105(33), 11,655–11,660.
Trice, A. D., & Tillapaugh, P. (1991). Children’s estimates of their parents’ job satisfaction. Psychological Reports, 69(1), 63–66.
Urry, H. L., Nitschke, J. B., Dolski, I., Jackson, D. C., Dalton, K. M., Mueller, C. J., & Davidson, R. J. (2004). Making a life worth living: Neural correlates of well-being. Psychological Science, 15(6), 367–372.
Van Doornen, L. J., Houtveen, J. H., Langelaan, S., Bakker, A. B., Van Rhenen, W., & Schaufeli, W. B. (2009). Burnout versus work engagement in their effects on 24-h ambulatory monitored cardiac autonomic function. Stress and Health, 25(4), 323–331.
Van Katwyk, P. T., Fox, S., Spector, P. E., & Kelloway, E. K. (2000). Using the Job-Related Affective Well-Being Scale (JAWS) to investigate affective responses to work stressors. Journal of Occupational Health Psychology, 5(2), 219.
Van Saane, N., Sluiter, J., Verbeek, J., & Frings-Dresen, M. (2003). Reliability and validity of instruments measuring job satisfaction—A systematic review. Occupational Medicine, 53(3), 191–200.
Veenhoven, R. (2000). The four qualities of life: Ordering concepts and measures of the good life. Journal of Happiness Studies, 1(1), 1–39.
Veenhoven, R. (2017). Measures of happiness: Which to choose? In Metrics of subjective well-being: Limits and improvements (pp. 65–84). Springer.
Veenhoven, R. (2020). Measures of happiness. World Database of Happiness, Erasmus University Rotterdam, The Netherlands. https://worlddatabaseofhappiness.eur.nl/hap_quer/hqi_fp.htm
Verduyn, P., Delvaux, E., Van Coillie, H., Tuerlinckx, F., & Van Mechelen, I. (2009). Predicting the duration of emotional experience: Two experience sampling studies. Emotion, 9(1), 83.
Vytal, K., & Hamann, S. (2010). Neuroimaging support for discrete neural correlates of basic emotions: A voxel-based meta-analysis. Journal of Cognitive Neuroscience, 22(12), 2864–2885.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In Being Profiled: Cogitas Ergo Sum (pp. 84–90). Amsterdam University Press.
Wang, N., Kosinski, M., Stillwell, D. J., & Rust, J. (2014). Can well-being be measured using Facebook status updates? Validation of Facebook’s Gross National Happiness Index. Social Indicators Research, 115(1), 483–491.
Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good are single-item measures? Journal of Applied Psychology, 82(2), 247–252.
Warr, P. (1990). The measurement of well-being and other aspects of mental health. Journal of Occupational Psychology, 63(3), 193–210.
Warr, P. (2012). How to think about and measure psychological well-being. In Research methods in occupational health psychology (pp. 76–90). Routledge.
Warr, P., Cook, J., & Wall, T. (1979). Scales for the measurement of some work attitudes and aspects of psychological well-being. Journal of Occupational and Organizational Psychology, 52(2), 129–148.
Warr, P., & Nielsen, K. (2018). Wellbeing and work performance. In E. Diener, S. Oishi, & L. Tay (Eds.), Handbook of well-being. DEF Publishers.
Wassenaar, D. R., & Mamotte, N. (2012). Ethical issues and ethics reviews in social science research. The Oxford Handbook of International Psychological Ethics, 268–282.
Waterman, A. S. (2008). Reconsidering happiness: A eudaimonist’s perspective. Journal of Positive Psychology, 3(4), 234–252.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070.
Watson, D., Hubbard, B., & Wiese, D. (2000). General traits of personality and affectivity as predictors of satisfaction in intimate relationships: Evidence from self-and partner-ratings. Journal of Personality, 68(3), 413–449.
Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1966). Unobtrusive measures: Nonreactive research in the social sciences. (Vol. 2). Rand McNally.
Weinberger, D. A., Schwartz, G. E., & Davidson, R. J. (1979). Low-anxious, high-anxious, and repressive coping styles: Psychometric patterns and behavioral and physiological responses to stress. Journal of Abnormal Psychology, 88(4), 369.
Weiss, D. J., Dawis, R. V., & England, G. W. (1967). Manual for the Minnesota Satisfaction Questionnaire. Minnesota Studies in Vocational Rehabilitation. University of Minnesota, Industrial Relations Center.
Weiss, H. M. (2002). Deconstructing job satisfaction: Separating evaluations, beliefs and affective experiences. Human Resource Management Review, 12(2), 173–194.
Wijngaards, I., Burger, M., & Van Exel, J. (2019). The promise of open survey questions—The validation of text-based job satisfaction measures. PloS ONE, 14(12). https://doi.org/10.1371/journal.pone.0226408
Wijngaards, I., Burger, M., & Van Exel, J. (2021). Unpacking the quantifying and qualifying potential of semi-open job satisfaction questions through computer-aided sentiment analysis. Journal of Well-being Assessment. https://doi.org/10.1007/s41543-021-00040-w
Williams, C. E., & Stevens, K. N. (1972). Emotions and speech: Some acoustical correlates. The Journal of the Acoustical Society of America, 52(4), 1238–1250.
Wright, T. A., & Bonett, D. G. (2007). Job satisfaction and psychological well-being as nonadditive predictors of workplace turnover. Journal of Management, 33(2), 141–160.
Wright, T. A., & Cropanzano, R. (1998). Emotional exhaustion as a predictor of job performance and voluntary turnover. Journal of Applied Psychology, 83(3), 486.
Xanthopoulou, D., Bakker, A. B., Heuven, E., Demerouti, E., & Schaufeli, W. B. (2008). Working in the sky: A diary study on work engagement among flight attendants. Journal of Occupational Health Psychology, 13, 345–356.
Yang, C., & Srinivasan, P. (2016). Life satisfaction and the pursuit of happiness on Twitter. PloS ONE, 11(3), e0150881.
Young, L. M., & Gavade, S. R. (2018). Translating emotional insights from hospitality employees’ comments: Using sentiment analysis to understand job satisfaction. International Hospitality Review, 32(1), 75–92.
Zagzebski, L. T. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge University Press.
Zelenski, J. M., & Larsen, R. J. (2000). The distribution of basic emotions in everyday life: A state and trait perspective from experience sampling data. Journal of Research in Personality, 34(2), 178–197.
Zhang, D. C. (2018). Art of the sale: Recommendations for sharing research with mainstream media and senior leaders. Industrial and Organizational Psychology, 11(4), 589–593.
Zheng, X., Zhu, W., Zhao, H., & Zhang, C. (2015). Employee well-being in organizations: Theoretical model, scale development, and cross-cultural validation. Journal of Organizational Behavior, 36(5), 621–644.
Zilioli, S., Caldbick, E., & Watson, N. V. (2014). Testosterone reactivity to facial display of emotions in men and women. Hormones and Behavior, 65(5), 461–468.
Zou, C., Schimmack, U., & Gere, J. (2013). The validity of well-being measures: A multiple-indicator–multiple-rater model. Psychological Assessment, 25(4), 1247.
Zuckerman, M., & Lubin, B. (1985). Manual for the MAACL-R: The Multiple Affect Adjective Check List Revised. Educational and Industrial Testing Service.
Acknowledgments
We gratefully acknowledge Bart Voorn for spurring interdisciplinary collaboration and providing feedback on an earlier version of the manuscript.
Funding
Indy Wijngaards and Owen King were funded by a grant (652.001.003) from the Netherlands Organization for Scientific Research (NWO).
Contributions
Not applicable.
Ethics declarations
Ethics Approval
As the study did not involve human participants, ethical approval was not required.
Consent to Participate
As the study did not involve human participants, informed consent was not required.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Illustrations of measures
Life satisfaction
Life satisfaction is most often measured using closed question survey measures (Veenhoven 2017). These measures can be either single-item (Abdel-Khalek 2006; Cantril 1965; Commission of the European Communities 2017; OECD 2013) or multiple-item, e.g., Satisfaction With Life Scale (Diener et al. 1985), Happiness-Unhappiness Scale (Andrews and Withey 1976), Gurin Scale (Gurin et al. 1960), and the Happiness Measure (Fordyce 1977). In general, convergence exists between self-report and other-report measures of life satisfaction (Heller et al. 2006; Judge and Locke 1993; Lucas et al. 1996; Nave et al. 2008; Pavot et al. 1991; Sandvik et al. 1993; Schneider et al. 2010; Schneider and Schimmack 2010; Zou et al. 2013). For other closed question survey measures of life satisfaction and reflections on their validity, see Veenhoven (2017, 2020).
Beyond closed question survey measures, life satisfaction has been measured by analyzing naturally occurring texts on social media sites such as Facebook and Twitter (Collins et al. 2015; Liu et al. 2015; Schwartz et al. 2016; Yang and Srinivasan 2016) and transcripts from clinical life satisfaction interviews (Frisch 1988; Nave et al. 2008; Neugarten et al. 1961; Thomas and Chambers 1989). Facial expression data obtained from pictures have been linked to later life satisfaction (Harker and Keltner 2001; Seder and Oishi 2012). Unobtrusive data on online behavior has also been linked to life satisfaction (Collins et al. 2015; Kosinski et al. 2013). Some studies have found correlations between self-report life satisfaction scores and peripheral systolic and mean arterial blood pressure (Thege et al. 2014).
Dispositional affect
Dispositional affect has also been measured mostly with closed question survey measures, e.g., the Affect Balance Scale (ABS, Bradburn 1969), Differential Emotions Scale (DES, Izard et al. 1974), Positive and Negative Affect Schedule (PANAS; Watson et al. 1988), Multiple Affect Adjective Check-List-Revised (Zuckerman and Lubin 1985), State-Trait Anxiety Inventory (Spielberger and Gorsuch 1983), Scale of Positive and Negative Experience (SPANE, Diener et al. 2010) and Affectometer 2 (Kammann and Flett 1983). Often, self-report measures of dispositional affect converge substantially with other-report measures (Lucas et al. 1996; Pavot et al. 1991; Watson et al. 2000). For more complete overviews of closed question measures of dispositional affect, see Gray and Watson (2007) and Boyle et al. (2015). There is only limited research on measures of dispositional affect other than closed question surveys. Self-reported dispositional affect has been linked to the content of answers to open questions (Sandvik et al. 1993) and salivary cortisol (Ryff et al. 2004).
Moods
Moods are also typically measured using survey scales. These are either specially designed to measure moods, e.g., Profile of Mood States (POMS, McNair et al. 1981), Shortened POMS (Shacham 1983), Multidimensional Mood State Inventory (Boyle 1992), Four Dimension Mood Scale (Huelsman et al. 1998) and Affect Grid (Russell et al. 1989), or adaptations of general affect scales, e.g., PANAS, SPANE and DES. Self-report and other-report measures tend to converge (Bleidorn and Peters 2011; Pavot et al. 1991). Considering mood’s cyclic nature (Gray and Watson 2007), scholars have often used experience-based survey instruments, e.g., adopting experience sampling method (e.g., Dockray et al. 2010; Ilies and Judge 2004) and day reconstruction method designs (e.g., Dockray et al. 2010; Kahneman et al. 2004).
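To illustrate the kind of aggregation these experience-based designs involve, the sketch below (in Python, with invented ratings and worker labels) summarizes momentary mood reports into person-level scores; it is an illustration of the general logic, not a measure used in the cited studies.

```python
from statistics import mean, stdev

# Invented example: momentary pleasantness ratings (1-7 scale) collected
# at random prompts over several days for two hypothetical workers.
esm_ratings = {
    "worker_a": [5, 6, 4, 5, 6, 5, 4],
    "worker_b": [2, 6, 7, 1, 5, 3, 6],
}

# A common summary: the within-person mean captures average mood level,
# while the within-person SD captures mood variability over time.
summaries = {
    worker: {"level": round(mean(r), 2), "variability": round(stdev(r), 2)}
    for worker, r in esm_ratings.items()
}
```

Note that two workers with similar mean levels can differ sharply in variability, which is one reason repeated momentary measures can reveal more than a single retrospective rating.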
Concerning non-survey measures, various researchers have shown that word measures can be used to measure mood, e.g., sentiment in blog posts (Bollen et al. 2011; Keshtkar and Inkpen 2009; Mishne 2005), social media updates (Dodds et al. 2011; Golder and Macy 2011; Greyling et al. 2019; Iacus et al. 2020; Jaidka et al. 2020) and responses to open-ended questions (Amabile et al. 2005). Other studies have shown that behaviors can be used as a proxy for moods, e.g., facial behavior (Kulkarni et al. 2009) and online activity (Drake et al. 2013; LiKamWa et al. 2013).
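The word measures above all rest on some form of sentiment scoring of text. As a purely illustrative sketch, a simple lexicon-based score over a free-text response might look as follows; the word lists here are toy examples, whereas published studies use validated dictionaries or trained classifiers.

```python
# Toy sentiment lexicon; real applications use validated dictionaries
# or machine-learning models, not a hand-picked word list like this.
POSITIVE = {"happy", "great", "enjoy", "love", "good"}
NEGATIVE = {"sad", "stress", "tired", "awful", "bad"}

def sentiment_score(text: str) -> float:
    """Return (positive - negative word count) / total words, in [-1, 1]."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

score = sentiment_score("I enjoy my work, it is great")  # 2/7, positive
```

Scores like this, averaged over many posts or responses, are the raw material behind the blog, social media and open-question studies cited above.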
Emotions
Like moods, emotions are typically measured using experience-based closed question survey measures like the DES and PANAS (Verduyn et al. 2009; Zelenski and Larsen 2000). Beyond surveys, researchers have shown that emotions can be inferred from short instant messaging texts (Gill et al. 2008; Hancock et al. 2007). Other research has shown that social media (Greyling et al. 2019) and online search behavior can be used to monitor specific emotional states (Brodeur et al. 2020; Ford et al. 2018). Lab research has shown that emotions can be inferred from observation-based obtrusive measures, such as speech characteristics (Dasgupta 2017; B. L. Smith et al. 1975; Williams and Stevens 1972), combinations of acoustic variables (Banse and Scherer 1996) and voice pitch (Mauss and Robinson 2009). Researchers have found that data on body postures (Mauss and Robinson 2009; Tracy and Matsumoto 2007) and facial expressions can be used to infer emotions (Ekman et al. 1990; Mauss et al. 2005). There is, however, controversy about the use of facial expression behavior, as certain facial expressions may be associated with multiple emotions and their meaning varies substantially across cultures and situations (Barrett et al. 2019). Physiological measures are regularly used to measure emotions. For instance, emotional valence and arousal have been linked to neuroendocrine activity, e.g., cortisol levels (Denson et al. 2009; Dickerson and Kemeny 2004), testosterone (Mazur and Booth 1998; Mehta and Josephs 2006; Zilioli et al. 2014), oxytocin (Grewen et al. 2005; Kosfeld et al. 2005), dopamine (Depue and Collins 1999) and serotonin (Katz 1999); electrodermal activity, e.g., skin conductance response and skin conductance level (Akinola 2010; Kreibig 2010; Sequeira et al. 2009; Weinberger et al. 1979); cardiovascular activity, e.g., systolic and diastolic blood pressure, heart rate, heart rate variability, cardiac efficiency and respiration (Akinola 2010; Kreibig 2010; Shiota et al. 2011); and neurological activity (Sato et al. 2004; Vytal and Hamann 2010).
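Among the cardiovascular measures just mentioned, heart rate variability is a common example, and one widely used summary of it is RMSSD: the root mean square of successive differences between interbeat (RR) intervals. A minimal sketch, with interval values invented for illustration:

```python
from math import sqrt

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented interbeat intervals in milliseconds; higher RMSSD indicates
# greater beat-to-beat variability (parasympathetic activity).
value = rmssd([800, 810, 790, 805, 815])
```

In field studies such computations are performed by ambulatory monitoring software over long recordings, but the underlying statistic is this simple.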
Psychological well-being
PWB is most often measured with Ryff’s (1989b) attitudinal closed question survey measure, the Scales of Psychological Well-being. These scales have been linked to measures of psychological functioning and physical health, e.g., neuroendocrine, cardiovascular and immune (Ryff et al. 2004), cardiorespiratory (Thege et al. 2014) and neurological markers (Urry et al. 2004). Behavioral markers (e.g., expressive face, voice or gestures, social skills, awkward interpersonal style) and clinical ratings after an in-depth interview (e.g., productivity, aspiration level) have also been found to correlate with self-report measures of PWB (Nave et al. 2008).
Job satisfaction
Job satisfaction is most often measured using attitudinal single-item and multi-item survey scales (Gardner et al. 1998; Nagy 2002; Wanous et al. 1997). It is either measured by aggregating the scores on several job facets or by asking respondents directly about a general evaluation of their job (H. M. Weiss 2002). Frequently used job facet scales include the Job Satisfaction Survey (Spector 1985), Facet Satisfaction Scale (Bowling et al. 2018) and Job Diagnostic Survey (Hackman and Oldham 1974), and overall job satisfaction scales include the Minnesota Satisfaction Questionnaire (D. J. Weiss et al. 1967), Job in General Scale (Ironson et al. 1989), Abridged Job in General scale (Russell et al. 2004), Job Satisfaction Scale (Warr et al. 1979), Job Satisfaction Index (Brayfield and Rothe 1951), Michigan Organizational Assessment Questionnaire (Cammann et al. 1979), Faces scale (Kunin 1955) and Brief Index of Affective Job Satisfaction (Thompson and Phua 2012). Self-report measures and other-report measures of job satisfaction have been found to converge (Ilies et al. 2006; MacEwen and Barling 1988; Spector et al. 1988; Trice and Tillapaugh 1991).
Obtrusive, reaction-based word measures have also been used, for example, open and semi-open questions about job satisfaction (Borg and Zuell 2012; Gilles et al. 2017; Poncheri et al. 2008; Taber 1991; Wijngaards et al. 2019; 2021; Young and Gavade 2018). Job satisfaction has also been inferred from unobtrusive textual data sources such as job review websites (Jung and Suh 2019; Moniz and Jong 2014) and social media (Hernandez et al. 2015). Other research found that job satisfaction can be inferred from an overall impression of behavior (Glick et al. 1986).
Job affect
Because most research on job affect has been based on closed question measures, we group dispositional job affect, job moods and job emotions in one paragraph. In line with their conceptual distinction, dispositional job affect is generally measured using attitudinal measures (Brief et al. 1988; Van Katwyk et al. 2000), and job moods and job emotions are generally measured using experience-based measures (e.g., Beal and Ghandour 2011; Dimotakis et al. 2011; Miner et al. 2005). For this, dedicated job affect scales are most often used, e.g., Job Emotions Scale (C. D. Fisher 2000), Warr’s (1990) and Van Katwyk et al.’s (2000) Job-related Affective Well-being Scale, Job Affect Scale (Burke et al. 1989) and Affective Well-Being scale (Daniels 2000). Different versions of such measures can be used to accommodate the temporal dimension of the target construct (e.g., changing the reference frame from “in the last four weeks” to “today”).
Work engagement
Work engagement has mostly been measured using attitudinal closed question survey measures (Bakker et al. 2008; Schaufeli and Bakker 2010), e.g., the Maslach Burnout Inventory (MBI; Maslach et al. 1986), Oldenburg Burnout Inventory (OBI; Demerouti et al. 2002), Utrecht Work Engagement Scale (UWES-17, Schaufeli et al. 2002; UWES-9, Schaufeli et al. 2006; UWES-3, Schaufeli et al. 2017), Job Engagement Scale (Rich et al. 2010) and the Gallup Q12 (Harter et al. 2002). A handful of studies have considered measures other than self-report surveys. For example, studies have linked work engagement to cardiovascular activity (Seppälä et al. 2012; Van Doornen et al. 2009).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wijngaards, I., King, O.C., Burger, M.J. et al. Worker Well-Being: What it Is, and how it Should Be Measured. Applied Research Quality Life 17, 795–832 (2022). https://doi.org/10.1007/s11482-021-09930-w