Introduction

Value sensitive design represents a pioneering endeavor to proactively consider human values throughout the process of technology design. The work is grounded in the belief that the products we engage with strongly influence our lived experience and, in turn, our abilities to meet our aspirations. Initially, the phrase “value sensitive design” was an umbrella term used to highlight an orientation towards human values shared between otherwise unaffiliated projects (Friedman 1999). However, since that time, value sensitive design (VSD) has become a branded term, designating specific strategies and techniques to help researchers and designers explicitly incorporate the consideration of human values into their work. To date, VSD has primarily been applied in the area of human-computer interaction (HCI).

Other branded, values-oriented approaches have developed in HCI, including Values at Play (Flanagan et al. 2005; Flanagan and Nissenbaum 2007), Values in Design (Detweiler et al. 2011; Knobel and Bowker 2011), and Worth-Centered Computing (Cockton 2009a, b). Participatory design has historically attended to participants’ values (Iverson et al. 2010), although the term values is not always present. Still others are working in this area but do not present a branded account of how to do so (e.g., Flanagan et al. 2008), often focusing on specific values such as privacy (e.g., Palen and Dourish 2003; Barkhuus 2012). Within this field of endeavor, VSD is often recognized as the most extensive approach to date for addressing human values in technology design (Albrechtslund 2007; Le Dantec et al. 2009; Brey 2010; Fallman 2011; Manders-Huits 2011; Rode 2011; Yetim 2011).

We, the authors of this chapter, are members of the first cohort of scholars to receive doctoral training from the founders of VSD at the University of Washington. We were literally at the table as the approach was evolving. We participated in the tangled, formative debates that rarely receive mention in formal academic writing. Because of this background, we offer a distinct perspective on recent methodological and theoretical applications of the approach and related critiques. We are able to identify authors with strong affiliations to the VSD lab, relationships that can be hard to discern from authorship and citations. No longer members of the VSD lab, we do not claim to offer an officially authorized account of VSD from the University of Washington. Rather, we present our informed opinions of what is compelling, provocative, and problematic about recent manifestations of VSD. Our envisioned readers are scholars who are (1) exploring the history and uptake of VSD as developed by Friedman and colleagues at the University of Washington, (2) interested in applying VSD in their own work, (3) working to extend or modify VSD, or (4) working in cognate areas.

We concentrate the majority of our analysis on the development of VSD since Friedman, Kahn, and Borning’s seminal overview published in 2006. Friedman et al. (2006a) offer a thorough introduction to VSD and provide the first full articulation of what they term VSD’s “constellation of features.” The authors position this constellation as exclusive to value sensitive design (Friedman et al. 2006a). Taking this article and its claims as a point of departure, we examine how VSD has been appropriated and critiqued since the 2006 article was published. We draw from contemporary case studies to argue for a condensed version of the VSD constellation of features. We also propose a set of heuristics crafted from the writings of the VSD lab, appropriations and critiques of VSD, and related scholarly work. We present these heuristics for those who wish to draw upon, refine, and improve values-oriented approaches in their endeavors and may or may not choose to follow the tenets of value sensitive design.

Method for Collecting Articles for Review

The scholarship discussed in this chapter is primarily VSD-influenced research and design from the years 2007–2012. We began our search for related work in the ACM Digital Library. We proceeded to expand the search to databases and journals from cognate fields (information and library science). Specific search terms and date parameters are provided in Table 1. We removed from our analysis writings that were (1) conference workshop proposals or panel position papers, (2) magazine articles discussing designers’ practice around values that do not explicitly address values-oriented design scholarship, (3) pedagogical work, or (4) unpublished works in progress. We added some works not identified in our search, but cited by sources identified in our search.

Table 1 A summary of databases and search terms used in our search for recent VSD-influenced research

This chapter is not an exhaustive review of all work that has incorporated, cultivated, or critiqued VSD in the past 6 years. Our goal is to present scholarship that exemplifies the development of VSD to date. There is worthwhile work that we did not examine in this chapter.

History of Value Sensitive Design

Under explicit development since the early 1990s, VSD is claimed to provide a theory (Friedman and Freier 2005), a methodology (Friedman and Kahn 2003; Friedman 2004), methods (Miller et al. 2007; Nathan et al. 2007), and an approach (Nathan et al. 2008; Woelfer and Hendry 2009) for scaffolding consideration of human values during the design, implementation, use, and evaluation of interactive systems. An early explication of VSD is found within a National Science Foundation (NSF) workshop report (Friedman 1999). The report uses “value sensitive design” as a label for a wide range of projects undertaken by scholars who were likely unaware of the term, but whose interactive design work shared a proactive approach to addressing concerns other than efficiency and usability (Friedman 1999). Soon thereafter, Batya Friedman and a core group of collaborators at the University of Washington began publishing research describing and practicing what they formally termed “value sensitive design” in books, journals, conference proceedings, and white papers (http://www.vsdesign.org/publications.php). Over time, work cited as representative of VSD (Borning and Muller 2012) is typically coauthored by Friedman or other researchers affiliated with the VSD Lab, suggesting a proprietary relationship between the VSD approach and the VSD Lab.

What is a value according to VSD? Friedman’s early explications of VSD did not define the term “value” explicitly, but instead listed broad areas of concern including human dignity and welfare (Friedman 1999). She proceeds to highlight certain values deserving of attention: “trust, accountability, freedom from bias, access, autonomy, privacy, and consent” (Friedman 1999, p. 3). In the 2006 article, the term value was defined as “what a person or group of people consider important in life” (Friedman et al. 2006a, p. 349). However, this rather broad definition was circumscribed in part by another list of specific values “with ethical import that are often implicated in system design” (Friedman et al. 2006a, p. 349).

Through our review, we found no statements that VSD is a finished product; rather, we found it offered for others to continue to adapt and improve (Borning and Muller 2012). To position a discussion of how VSD has been used and critiqued, the following paragraphs briefly describe each of the areas mentioned above: theory, methodology, methods, and approach.

Theory

Key to value sensitive design is its basis in an interactional understanding of technological appropriation. This theoretical positioning claims that a technology’s influence on humanity is shaped by the features of its design, the context in which it is used, and the people involved in its use. To ignore any component of this emergent and relational process (tool features, context, or stakeholders) is problematic. The interactional perspective implies that the impact of a technology on human life is not fully determined by the technology’s design. Values are not embedded within a technology; rather, they are implicated through engagement. A technology can be appropriated in innumerable ways, shaped by individuals and societies and by the context of use, as well as by its form and content.

VSD collaborators and allies believe that a concerted effort to identify and address human values implicated by technology use – even though that effort is imperfect – can significantly improve the design of products. Conversely, when we ignore the influence of a product’s use on lived experience, the resulting interactions are more likely to have a range of unintended, negative impacts on human lives (Nathan et al. 2007). Moreover, as we discuss later, the effects of interactions with technology reach far beyond those who are directly involved in technology use.

Methodology

Early VSD literature emphasized the development of a methodology for addressing values. This “tripartite” methodology is composed of three types of iterative and integrative “investigations,” labeled conceptual, empirical, and technical (Friedman and Kahn 2003).

Conceptual investigations involve two primary activities. The first is identifying the stakeholders who will be affected by the technology under study. This includes those who use (or will use) a given product (direct stakeholders) and those who may not engage the technology directly, but whose lives will be influenced through others’ use (indirect stakeholders). As an example, a conceptual investigation of a building surveillance system would likely identify the security personnel who manipulate, maintain, and monitor the system as direct stakeholders. The indirect stakeholders might include building inhabitants and visitors (welcome and unwelcome) whose images will be captured by the camera. Although these latter individuals do not directly interact with the system, their lives are influenced by others’ use of the technology.

The second component of a conceptual investigation is identifying and defining the values implicated by use of a technology. For example, conceptual investigations of building security technology in a condominium in Vancouver, Canada, would include creating definitions of what values such as privacy mean in that context, where privacy is a legislated right. Value conflicts (or tensions) can emerge as soon as values are identified and discussed (Friedman et al. 2006a). In this example, residents’ conceptualizations of security, concerning their personal safety and possessions, might stand in tension with expectations related to the privacy of residents and nonresidents who enter the building.

Empirical investigations examine stakeholders’ “understandings, contexts, and experiences” in relation to technologies and implicated values (Friedman and Kahn 2003). Such investigations may employ a variety of methods – surveys, questionnaires, interviews, experiments, artifact analysis, participant observation, and so on – to inquire into stakeholders’ observable actions as well as their understandings, concerns, reflections, and aspirations.

Technical investigations are primarily concerned with specific features of technologies. These studies may include designing a new technology to support particular values or analyzing how particular features of existing technologies implicate certain values in a context of use.

It is worth reiterating that the investigations are presented as iterative and integrative (Friedman et al. 2006a). They are meant to inform each other rather than be engaged as separate, modular activities. Investigations may overlap, happen in different orders, or intertwine with each other. One activity can serve multiple purposes. Even researchers working with the VSD lab who call out VSD as a primary influence do not always elucidate these three investigations (e.g., Friedman and Hendry 2012).

Methods

Since its inception, VSD has incorporated the investigation of values into a range of standard social science methods, such as semi-structured interviews (Friedman et al. 2006a), surveys, observations, quasi-experimental designs, exploratory inquiries (Woelfer et al. 2008), and longitudinal case studies (Nathan et al. 2009). Researchers associated with the VSD lab in Seattle have taken a values orientation in their use of physiological measures (e.g., Kahn et al. 2008) and chat log analyses (Friedman et al. 2003; Kahn et al. 2005), as well as design methods such as probes (Nathan et al. 2009), sketching (Woelfer et al. 2011), and scenarios (Nathan et al. 2008; Woelfer and Hendry 2012).

Friedman recently laid claim to the development of 14 unique value sensitive design methods (Friedman 2012). The list consisted of (1) stakeholder analysis; (2) designer/stakeholder explicitly supported values; (3) coevolution of technology and social structure; (4) value scenarios; (5) value sketches; (6) value-oriented semi-structured interview; (7) granular assessments of magnitude, scale, and proximity; (8) value-oriented coding manual; (9) value-oriented mock-ups, prototypes, and field deployments; (10) ethnography focused on values and technology; (11) model for informed consent online; (12) value dams and flows; (13) value sensitive action-reflection model; and (14) envisioning cards. We were unable to find explicit descriptions of all of these in the published literature on VSD. Some (3, 4, 5, 6, 9, 10) are examples of values-oriented appropriations of social science and design methods mentioned in the preceding paragraph. Below, we discuss three methods from this list that are reported upon in the literature: direct and indirect stakeholder analysis, value dams and flows, and the envisioning criteria and cards. The first two can be positioned as values-oriented analyses; the last is a values-oriented toolkit.

Direct and Indirect Stakeholder Analyses. In a stakeholder analysis, researchers attempt to identify the roles of individuals who will be affected by the technology under study. This includes those who use (or will use) a given technology (direct stakeholders) and those who may not engage the technology directly, but whose lives will be influenced through others’ use of the technology (indirect stakeholders). These roles might be distinguished by job type (e.g., programmer, conductor), relation type (mother, daughter), interaction with technologies (e.g., contributor, reader, commenter), or any of the other myriad positions that individuals take on in daily life. The types of roles will depend on the context(s) under investigation.

Note that the term stakeholders refers to roles and not individual people. For example, we (Janet and Lisa) are not solely defined by our roles as authors of this chapter. Depending on the context, we may also be described as teachers, daughters, friends, women, citizens, voters, bicyclists, or gardeners. As we engage with a tool throughout a day, we take on any number of these different stakeholder roles, depending on the situation. Conceptualizing an individual as a mother, rather than as someone in the role of mother, risks ignoring the multiplicity of roles through which we engage with our environments (Friedman et al. 2006b; Miller et al. 2007).

In a robust stakeholder analysis, researchers consider both the roles of those who will use a tool along with the roles of those who will be affected by others’ use of the tool. For each role, the research team identifies the potential harms and benefits that these roles are likely to experience as a result of the tool under investigation being used.

Note that stakeholder analysis is not unique to VSD. Such analysis is also found in fields such as public policy, conflict resolution, and business administration. However, in these other areas, the concept of indirect stakeholder may not be present. In VSD, the goal of the stakeholder analysis is to iteratively inform further conceptual investigations focused on values, as well as empirical and/or technical investigations, by framing the roles that should be considered (Friedman et al. 2006a; Miller et al. 2007).
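To make the form of such an analysis concrete, the following sketch (in Python) records stakeholder roles together with anticipated harms and benefits. It is a minimal illustration only: the roles, harms, and benefits shown are hypothetical examples loosely echoing the building surveillance scenario above, not findings from any VSD study, and VSD itself prescribes no particular notation or tooling for this step.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StakeholderRole:
    """One stakeholder role (a role, not a person) in a stakeholder analysis."""
    name: str
    kind: str  # "direct" or "indirect"
    benefits: List[str] = field(default_factory=list)
    harms: List[str] = field(default_factory=list)

# Hypothetical roles for the building surveillance example (illustration only).
roles = [
    StakeholderRole(
        name="security personnel",
        kind="direct",
        benefits=["earlier detection of incidents"],
        harms=["fatigue from continuous monitoring duties"],
    ),
    StakeholderRole(
        name="building visitor",
        kind="indirect",
        benefits=["greater personal safety"],
        harms=["images captured without consent; loss of privacy"],
    ),
]

# Summarize anticipated harms and benefits per role, as a starting point
# for further conceptual, empirical, and technical investigations.
for role in roles:
    print(f"{role.name} ({role.kind})")
    print("  benefits:", "; ".join(role.benefits))
    print("  harms:   ", "; ".join(role.harms))

Such a summary is only a scaffold; in VSD the analysis is revisited as empirical and technical investigations surface roles, harms, and benefits that were not anticipated at the outset.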

Value Dams and Flows. The value dams and flows method is a process for making decisions regarding value tensions (Miller et al. 2007). Value tensions occur when conceptualizations of values or the design implications of values are found to be in friction with each other. Value tensions can surface in a variety of ways. For one, values articulated by an individual may conflict with those shared by a group. A familiar example of this type of tension can be found in information practices that develop around the use of organizational open calendaring systems. An open calendaring system is purported to support collaboration and accountability within a group. However, having one’s daily schedule available for all to see can also be perceived as intrusive. As a result, information practices may develop to address the varied tensions between collaboration, accountability, privacy, autonomy, and identity (Palen 1999). A second type of tension can occur when a system supports a societal value that runs counter to an organizational value. An example of this type of tension is an interactive system that supports collaboration (in this case a societal value), deployed in an organization that explicitly rewards fierce individualism and competition (Orlikowski 1992). Value tensions can also occur within individuals or across groups.

VSD does not claim that all value tensions can (or even should) be resolved. Once a tension has been identified, one may choose to address the tension or perhaps mark it for attention at another stage in the process (Miller et al. 2007; Nathan et al. 2007).

In the value dams and flows method, value dams are defined as technical features or organizational policies that some stakeholders (even just a few) strongly oppose, causing a value tension. The proposition is that it is important to identify potential opposition to how a tool functions (features) or is deployed (policies) because strong opposition by even a few can block appropriation of the technology. If the findings from a survey or set of interviews suggest that some stakeholders hold a strongly negative view of a technological feature or policy, it is likely that a value dam will develop, inhibiting effective use of the technology. Value flows are the flip side of this construct, calling attention to features and policies that a large number of stakeholders are in favor of incorporating. Attending to value flows in the design of tool features and policies may attract people to adopt the tool or to intensify their use (Miller et al. 2007).
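As a rough sketch of this reasoning, the following Python fragment flags candidate dams and flows from hypothetical survey results. The survey items, percentages, and threshold values are assumptions made for illustration; Miller et al. (2007) do not prescribe fixed numeric cutoffs, and in practice the judgment of where a dam or flow lies rests with the design team.

# Hypothetical survey results for an organizational calendaring system:
# fraction of respondents who strongly oppose, or favor, each feature/policy.
# The items and the numbers are invented for illustration.
survey = {
    "full schedule visible to all employees": {"strongly_oppose": 0.12, "favor": 0.55},
    "free/busy status visible to one's team": {"strongly_oppose": 0.02, "favor": 0.83},
    "managers may edit others' calendar events": {"strongly_oppose": 0.30, "favor": 0.20},
}

# Assumed cutoffs: even a small share of strong opposition marks a potential
# value dam; broad support marks a potential value flow. In practice these
# cutoffs are a project-specific judgment, not fixed constants.
DAM_THRESHOLD = 0.10
FLOW_THRESHOLD = 0.70

for item, responses in survey.items():
    if responses["strongly_oppose"] >= DAM_THRESHOLD:
        verdict = "potential value dam: redesign the feature or reconsider the policy"
    elif responses["favor"] >= FLOW_THRESHOLD:
        verdict = "potential value flow: emphasize in the design"
    else:
        verdict = "no strong signal: revisit in later investigations"
    print(f"{item}: {verdict}")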

Envisioning Criteria and Cards. The consideration of values when working with an interactive technology is complex and difficult (Fallman 2011). Undertaking such work may appear out of scope when designers are already pushed to meet numerous commitments with limited time and too few resources. To address this challenge, members of the VSD lab looked to scholarship in urban planning and design noir for insights on how to engage longer-term thinking in complex environments as well as address the reality that designs are often appropriated in unforeseen ways. This multidisciplinary inquiry yielded four “envisioning criteria”: stakeholders, time, values, and pervasiveness (Nathan et al. 2008). In turn, these four criteria were expanded into the Envisioning Cards toolkit, a product meant to support agile consideration of values during the design, redesign, or implementation of an interactive system (Friedman and Hendry 2012).

Each set of Envisioning Cards includes a stack of 32 cards (3″ × 5″), a sand timer, and a small information booklet. Four of the cards are blank, encouraging users to create new, context-specific cards. Each of the other 28 highlights one of the four envisioning criteria (stakeholders, time, values, or pervasiveness), a card theme, and an activity on one side. The reverse side has an image that is meant to evoke or support that card’s particular theme. Just as the order of VSD investigations is intentionally flexible, the Envisioning Cards are designed to provide opportunities for values-oriented reflection, iteration, and course correction throughout the design, implementation, and evaluation of information tools (Friedman and Hendry 2012). To date, documented uses of the cards are limited to supporting classroom activities, projects with members associated with the VSD lab in Seattle, and conference workshops (Kaptein et al. 2011; Friedman and Hendry 2012).

Approach: Variations in VSD Uptake

Since 2006, VSD’s influence has become increasingly apparent in publications beyond those of Friedman’s group at the University of Washington. Some of these appropriations have used the tripartite methodology, others the stakeholder analysis, and still others simply an orientation towards values where VSD’s influence is mentioned in the literature review. We are not interested in demarcating who is or who is not doing straight-up value sensitive design research. However, we believe that these variations in practice, different approaches to VSD, are important to identify and may lead to future lines of inquiry.

Here we attempt to describe the ways in which recent scholarship has related to VSD. We consider four categories: an affinity with VSD, prescriptive VSD, critical VSD, and formative VSD. These categories do not neatly divide the space; rather, they have fuzzy boundaries and regions of overlap. The works we point to within these categories are offered as examples.

An affinity with VSD. The values-oriented research area in the field of human-computer interaction has continued to evolve since Friedman’s 1999 NSF report (e.g., Blevis 2007; Flanagan et al. 2005; Klasnja et al. 2009; Le Dantec et al. 2009; Palen and Dourish 2003). Friedman’s lab at the University of Washington is just one strand. The affinity among these approaches grows from a shared concern regarding the complex interplay between human values and the design and use of information tools. The information tools under consideration are not always new and networked; there are also investigations of non-digital tools (e.g., Woelfer and Hendry 2009; Wyche et al. 2007; Wyeth 2006).

Such work may explicitly critique VSD (e.g., Le Dantec et al. 2009; Leitner et al. 2010) or draw inspiration from VSD (e.g., Chango 2007; Foong 2008), without claiming an adherence to VSD theory or methodology. Even when values-oriented work does not claim any particular relationship to VSD, it often shares key features such as an interactional perspective and a proactive approach to values in design. Notably, Flanagan et al. (2008) describe technical, philosophical, and empirical modes of inquiry, aligning with VSD’s technical, conceptual, and empirical investigations. This similarity may reflect ideas developed in prior collaborations between Friedman and Nissenbaum (1996, 1997).

Prescriptive VSD. Many researchers and designers have taken VSD at face value, applying it as presented by Friedman and colleagues, often in a new domain. Some work applies VSD to technology design in a particular context of use, for example, supporting healthcare workers in Africa (Walton and DeRenzi 2009) and blind transit users in the United States (Azencot et al. 2011). Other work aims to establish principles or frameworks that account for ethics in designing particular technologies, such as brain-computer interfaces (Simbasavian and Jackson 2007), persuasive technology (Davis 2009), social networks (Cotler and Rizzo 2010), healthcare robots (van Wynsberghe 2013), and nanopharmaceuticals (Timmermans et al. 2011).

Critical VSD. Other researchers have critiqued VSD from a variety of perspectives. Some have applied VSD and found that it falls short. For example, Johri and Nair (2011) apply VSD to an e-governance project in India. In their discussion, they observe that “there were several issues that were not as clear-cut in the field as noted in the framework” – in particular, contextual and emergent aspects of values, the importance of pragmatic issues, unresolved contradictions between values, and the role of intermediation in access to technology. Others critique VSD in explaining why they choose another approach (e.g., Le Dantec et al. 2009; Halloran et al. 2009). Still others critique VSD in order to develop or extend the approach. We consider such critiques of VSD at greater length in the next section.

Formative VSD. Much work inside and outside the VSD lab aims to develop or extend VSD.

While Borning and Muller (2012) aim to improve the overall practice of VSD and increase its uptake, others aim to fill gaps in VSD theory or in the methods offered. For example, Yetim (2011) argues that VSD suffers from the lack of an ethical theory and shows how discourse ethics can fill this gap, notably offering guidance for stakeholder analysis. Pommeranz et al. (2011) argue, after Le Dantec et al. (2009), that VSD empirical investigations need methods for situated elicitation of values; they compare the photo elicitation interview with other methods and propose a new value elicitation tool. Detweiler et al. (2011) argue that VSD lacks formal modeling methods to inform technical investigations of values; they demonstrate how to incorporate values into the Tropos approach to requirements modeling.

Critiques

As awareness of VSD has grown, so has the number of critiques. Some of these critiques are explicit, calling out problematic areas of VSD in terms of how it has been formulated, presented, and applied (Le Dantec et al. 2009; Borning and Muller 2012). Other critiques refer generally to values-oriented design approaches, carefully avoiding particular labels, but offering relevant insights (Bardzell 2010). As a body of scholarship, these critiques offer some of the most stimulating and provocative work in this area over the past 6 years. Here we engage the critiques that address core aspects of VSD: universal values, ethical commitments, stakeholder participation, the emergence of values, and the voices of researchers and participants.

Universal Values

A frequent critique of VSD concerns its position that certain values are universal, although those values may play out in different ways across cultures and contexts (Friedman et al. 2006a). Borning and Muller (2012) argue that this position sits atop a slippery slope: “The belief that there are universal values … has on occasion led to the further belief that a particular group, culture, or religion is the keeper of those values, and needs to impose them on others – with sometimes tragic consequences.” Borning and Muller (2012) further claim that VSD’s stated commitment to universal values has likely impeded uptake of the approach. Indeed, others have explicitly or implicitly critiqued this commitment (e.g., Le Dantec et al. 2009; Alsheikh et al. 2011), and even some who claim VSD as their approach reject VSD’s stance on universal values (Johri and Nair 2011).

Borning and Muller (2012) propose two different responses to such a quandary: to shift from a philosophical to an empirical basis for one’s stance or to make explicit the researcher’s position, implicitly acknowledging that theirs is not the only valid position. As Borning and Muller (2012) point out, the founders of VSD claim that their position is supported by empirical evidence (Friedman and Kahn 2003; Friedman et al. 2006a). Yet, this position is still vulnerable to critique, for systematic abuses of the unprivileged have often been justified on “scientific” grounds (Borning and Muller 2012). Moreover, empirical positions can be overturned by contradictory evidence. For example, Hofstede’s (1991) empirically based model of cultural values identifies dimensions along which residents of different nations espouse opposing values, such as individualism versus collectivism. Where Friedman et al. (2006a) claim privacy and autonomy as universal values (although their expression differs across different cultures), Saab (2008) instead casts these values as belonging to individualist cultures, in contrast to values such as group cohesion held in collectivist cultures. However, Saab (2008) also notes a need for further empirical validation of this ethnorelativist model, particularly in contrast to VSD’s universalist stance. Thus, a universalist position based on empirical evidence remains a contentious position.

Turning to the second response, Friedman et al. (2006a) certainly make their position explicit, but do so by claiming that a commitment to universal values is obligatory for those practicing VSD. Does VSD require universal values as part of its foundation? While arguing strongly that particular values are universal – specifically, human welfare and ownership of property – Friedman and Kahn (2003) acknowledge, as does Saab (2008), that not all values are universal and that some cultures hold contrasting values, such as cooperation versus competition. Furthermore, Friedman and Kahn (2003) go on to address the implications of cultural variability for design: when implicated values vary across cultures, systems may be appropriate only within a particular culture, unless designers make an extra effort to build in value adaptivity. Some variability is thus already accounted for in VSD, but there is not agreement on whether it provides enough flexibility.

Building on these ideas, Borning and Muller (2012) argue for a pluralistic stance: that VSD should not recommend any position on the universality or relativism of values, but rather leave VSD researchers and practitioners free to take and support their own positions in the context of particular projects.

Ethical Commitments

While some critique VSD for its adamant commitment to the general concept of universal values, others critique VSD for failing to make concrete ethical commitments. While praising VSD for addressing universal values and drawing on ethical theory, Albrechtslund (2007) points out that VSD leaves unclear “what values and which theories” it includes. Without an explicit commitment to an ethical theory, he claims, VSD is an ethically neutral tool, vulnerable to use in support of harmful values such as those of Nazism.

Indeed, descriptions of VSD do not recommend the use of any particular ethical theory. Rather, Friedman and Kahn (2003) argue that values relevant to information technology design find their basis in different kinds of ethical theories. Some values – such as not intentionally deceiving others – rest on theories of the right (consequentialist and deontological ethics), which concern moral obligations and prohibitions. Other values such as “warmth and friendliness,” they argue, would not be considered moral values under such theories, as there is no obligation to be friendly. But from the perspective of virtue ethics, or theories of the good, these are indeed moral values, as one is a better person for being warm and friendly (Friedman and Kahn 2003). Friedman and Kahn (2003) imply that designers must attend to values supported by theories of the right, which are obligatory, and may attend to values supported by theories of the good, which are discretionary. However, they make no commitments to particular theories.

Manders-Huits (2011) advances this critique, arguing that the notion of values in VSD is “underdeveloped.” She claims that VSD provides “no methodological account for distinguishing genuine moral values from mere preferences, wishes, and whims of those involved in the design process” (Manders-Huits 2011). That is, among the many things that stakeholders consider important in life, how does the investigator determine which ones correspond to values of ethical import that ought to be attended to in design? Without such an account, the VSD practitioner risks attending to a set of values that is unprincipled or unbounded. The solution, Manders-Huits (2011) argues, is that VSD requires a complementary ethical theory not only to demarcate moral values but also to provide a basis on which to make principled judgments about which values are most important to support. In extending VSD to “value conscious design,” Manders-Huits (2011) recommends that designers clarify their ethical goals and explicate their chosen ethical theory.

Several VSD projects have adopted ethical theories found suitable to the project domain. For example, Borning et al. (2005) draw upon discourse ethics to support the legitimation of an urban simulation, while Chatterjee et al. (2009) explicitly adopt deontological ethics in the context of developing collaboration systems. Writing about the design of a hypothetical weapons command and control system, Cummings (2006) draws on the theory of just war and the principles of proportionality and discrimination. Similarly, in developing a VSD-based framework for the ethical design of healthcare robots, van Wynsberghe (2013) adopts the perspective of care ethics, which emphasizes relationships and responsibilities over rights. Thus, the literature provides several models for following Manders-Huits’ (2011) recommendations to explicitly adopt an ethical theory alongside the VSD approach.

By contrast, Yetim (2011) argues that discourse ethics is a uniquely appropriate standpoint from which to critically examine VSD itself, going beyond any particular project domain such as urban simulation. Even though Borning et al. (2005) address the legitimacy of the simulation software and the transparency of the software development process, Yetim (2011) argues that they do not go far enough in addressing the legitimacy of the design process itself. Yetim (2011) therefore develops a general approach for adopting discourse ethics alongside VSD, in support of the legitimacy of the value sensitive design process, while at the same time acknowledging that discursive methods may play a different role in different design contexts.

Stakeholder Participation and the Emergence of Values

Turning the lens of discourse ethics to VSD itself provides Yetim (2011) with theoretical grounding for a critique of the role of stakeholders in VSD work. According to Yetim (2011), “the discourse principle suggests the inclusion of all those affected in discourse, which in turn requires a method to identify them.” This raises two issues. First, along with Manders-Huits (2011), Yetim (2011) claims that VSD fails to provide a systematic and comprehensive method for identifying stakeholders. Second, Yetim (2011) argues that VSD fails to address the use of deliberative methods and tools to promote joint reflection on values during the design process – in particular, reflection by stakeholders on their own values, value tensions, and implications for design, as participants in the design process.

Others, too, have called for greater stakeholder participation in the VSD process. For example, Kujala and Väänänen-Vainio-Mattila (2008) emphasize the need for stakeholders to reflect upon their own values, as do Pommeranz et al. (2011). Borning and Muller (2012) further argue that stakeholders should have a greater voice in the VSD process. They observe that in recent years participatory design (PD) has extended far beyond the workplace and workers; indeed, some recent PD work has aimed to “rekindle” attention to values in participatory design (e.g., Iverson et al. 2010; Halloran et al. 2009). Borning and Muller (2012) go on to recommend that VSD projects consider explicit commitments to codesign and power sharing.

Bolstering this call for greater stakeholder participation is a critique of VSD’s systematic approach to identifying values of concern. Le Dantec et al. (2009) are concerned that Friedman and Kahn’s (2003) list of “12 human values with ethical import” serves to reify the values already studied in the HCI community and further privilege them over the values held by stakeholders, which might be quite different. Although one of the purposes of empirical investigations is to serve as a check on conceptual investigations, Le Dantec et al. (2009) argue that having a list of values may blind the researcher to values that fall outside that list. Rather, they promote what Iverson et al. (2010) call an emergent approach to values, where the values at stake initially emerge from work with stakeholders rather than an initial conceptual investigation carried out by the researchers alone.

While still calling for greater stakeholder participation as noted earlier, Borning and Muller (2012) soften this critique, observing that VSD has evolved from being highly prescriptive in listing values worthy of concern (Friedman and Kahn 2003) to providing suggestive heuristics as in the Envisioning Cards (Friedman and Hendry 2012). Moreover, Borning and Muller (2012) argue that such heuristics can be useful: heuristics enable projects where VSD would otherwise be impractical to build on values-oriented work in the literature. Even when there is adequate time for empirical investigations, heuristics may reduce, rather than increase, the risk that designers will overlook areas of concern. At the same time, Borning and Muller (2012) caution that heuristics should be contextualized, recognizing who developed the heuristics (in this case, Western, upper-middle class academics).

Out of these critiques come methods for advancing the application of VSD. Yetim (2011) recommends the use of Ulrich’s (2000) “critically heuristic boundary questions” to identify stakeholders. These questions concern sources of motivation, power, knowledge, and legitimation. Yetim (2011) suggests that the questions be addressed iteratively, in both descriptive and prescriptive modes, to uncover unresolved issues.

Recent work demonstrates a spectrum of methods concerning emergent values and participation, varying from value elicitation activities in which researchers make meaning from observations of stakeholders to participatory approaches where researchers and stakeholders together create shared meanings:

  • Alsheikh et al. (2011) use ethnography to defamiliarize their own values in a cross-cultural setting; they use grounded theory to identify themes in their observations.

  • Woelfer et al. (2011) elicited participants’ understanding of the value safety through value sketches and scenarios created by the participants and interpreted by the researchers.

  • Le Dantec et al. (2009) recommend the photo elicitation interview (PEI) for “shifting the power dynamic towards the participants by letting them shape the direction of the interview.” Pommeranz et al. (2011) compare the PEI with the portrait value questionnaire (Schwartz and Bilsky 1990) and participant tagging of photographs, finding that the PEI gives more descriptive and more situated values, but still fails to elicit how values inform behavior, decisions, and trade-offs. They suggest the development of a mobile app for in situ elicitation of values.

  • Iverson et al. (2010) discuss a variety of methods for discovering, developing, grounding, and realizing values in collaboration with stakeholders. Halloran et al. (2009) show how participatory design workshops and interactive prototypes can elicit values, observing that “values [emerge] whether or not we look for them.”

In contrast, Yetim (2011) problematizes value discovery as part of design: it is infeasible to include all stakeholders in discourse about values, and interpretations of values and tools may change over time. He points out that UrbanSim is designed for technical flexibility so that developers can respond to concerns that emerge during the use of the system (Borning et al. 2005), yet UrbanSim includes no tools or process for eliciting those emergent concerns. In response, Yetim (2011) recommends “continuous participation and discourse”: systems should include tools for communication about breakdowns in the system itself. In particular, his DISCOURSIUM tool (Yetim 2008) draws on discourse ethics to structure reflection on comprehensibility, relevance, validity, and rationality.

Voice

Borning and Muller (2012) present two compelling critiques related to VSD and issues of voice. First, Borning and Muller (2012) call for greater attention to the voice of the researcher. They claim that, too often, VSD research is reported from a disembodied “we” position, with the authors failing to clarify who is making various claims throughout the work. Borning and Muller are not asking for the researchers to simply claim ownership of their statements, but to help the reader understand the researchers’ backgrounds and values. Do the researchers and the stakeholders they are reporting on have similar backgrounds? Are there conflicts or tensions in how stakeholders and researchers view the situation under study? Borning and Muller call attention to the influence of researchers’ perspective on what they find important in an investigation. A researcher is not a disembodied conduit for truth, but rather takes an active role in interpreting, analyzing, and designing. Strong examples of making the researchers’ position explicit include the scholarship of Ames et al. (2011) exploring the role of social class in technological appropriation, and a growing body of work by Woelfer and Hendry on information technology for homeless youth (particularly Woelfer et al. 2011). Both groups of authors make clear statements about their positions as researchers.

Second, Borning and Muller (2012) raise concerns regarding the voice of stakeholders and how the multiplicity of voices is identified, brought forward, and attended to throughout the lifecycle of a project. As mentioned in the previous section, Borning and Muller recommend that VSD scholars consider stakeholder participation and voice throughout the entire research process. Beyond issues of participation, Borning and Muller address the presentation of research, arguing that summaries and paraphrases place researchers at risk of unintentionally reporting their own values or thoughts as if they were the values or thoughts of the participants. The usual response to this problem in HCI and other fields is to liberally use direct quotations from participants in final publications. This provides readers with the opportunity (albeit imperfect) to engage directly with the stakeholder’s choice of words, their own voice. Several examples of this practice can be found in research on values and technology (Alsheikh et al. 2011; Czeskis et al. 2011; Fleischmann et al. 2011; Woelfer and Hendry 2012).

VSD Looking Forward: Commitments and Heuristics

As mentioned earlier, in 2006 Friedman, Kahn, and Borning identified a “constellation” of eight features distinguishing VSD from other approaches to design (Friedman et al. 2006a). Here, we reposition the constellation, paring it down to commitments that can guide those engaging in a VSD investigation and that are largely uncontested in the literature. Moreover, from the range of VSD appropriations, we draw out general heuristics for individuals embarking on values-oriented projects, who may or may not wish to position themselves as engaged in value sensitive design.

Core Commitments

Drawing upon the structure used by Friedman et al. (2006a), we identify four core commitments of VSD: proactive stance, interactional perspective, direct and indirect stakeholders, and tripartite methodology. We illustrate these commitments through recent case studies. We do not mean to imply that all VSD work rests equally on these four commitments, nor do we intend to draw a sharp line between research that adheres to each of these commitments and work that does not. Rather, we aim to demonstrate how these core commitments make unique contributions to a range of VSD manifestations.

Proactive stance. VSD is proactive in two ways. First, it positions researchers to proactively identify ethical concerns implicated by interactions with and through technology rather than waiting for an ethical problem to arise. For example, van Wynsberghe (2013) considers the nascent field of healthcare robotics, articulating the need for a framework that incorporates ethics and design. In the absence of universal guidelines or standards for robot design, she recommends the adoption of VSD in combination with a care ethics perspective. van Wynsberghe (2013) points out that VSD can be used both retrospectively, to analyze the ethical implications of existing care robots (regardless of whether problems have already occurred), and proactively, to guide consideration of ethics throughout the process of designing care robots.

Second, VSD “seeks to be proactive: to influence the design of technology early in and throughout the design process” (Friedman et al. 2006a). While this proactive stance is not unique to VSD, it is an essential feature that distinguishes a design approach from critique or analysis of existing technologies. This stance draws attention to values both early in the design process and throughout the design process. As with privacy, security, or usability, support for values cannot always be “bolted on” late in the design process, but rather requires that designers make fundamental decisions about requirements and architecture with those values in mind. Furthermore, key values should not be forgotten in the face of competing concerns, but rather reconsidered at each step of design and evaluation.

A proactive approach to values can make a difference in outcomes. Davis (2009) compares two contemporary projects with similar goals, one that takes a VSD approach and one that does not. Both projects aim to develop tools that facilitate knowledge sharing within an organization. BlueReach is designed from the perspective of persuasive technology (Brodie et al. 2007), while CodeCOOP is designed using VSD theory and methodology (Miller et al. 2007). Although both consider the value of reputation, the persuasive technology perspective considers reputation only as a strategy to promote knowledge sharing. The assumption is that an individual enhances her reputation through publicly sharing useful information (Brodie et al. 2007). In contrast, the VSD perspective led Miller et al. (2007) to also consider potential harms to reputation (e.g., from asking a silly question) that might impede use of the system. Moreover, Miller et al. (2007) considered a more expansive field of values from the start of the design process, including privacy, trust, and awareness, along with reputation. Early empirical investigations positioned the CodeCOOP designers to assess and mitigate these value tensions before building the CodeCOOP system, thereby leading to an apparently successful deployment (Miller et al. 2007). In contrast, empirical investigations of barriers to BlueReach’s use took place after a less-than-successful deployment; only then did harms to reputation emerge as a concern that stopped people from using the system (Singley et al. 2008). Thus, VSD’s proactive stance guided Miller et al. (2007) to address users’ concerns early in the design process, avoiding potential barriers to system adoption.

Interactional perspective. VSD takes an interactional perspective: “values are viewed neither as inscribed into technology (an endogenous theory) nor as simply transmitted by social forces (an exogenous theory). Rather, the interactional position holds that while the features or properties that people design into technologies more readily support certain values and hinder others, the technology’s actual use depends on the goals of the people interacting with it” (Friedman et al. 2006a). For the designer to hold an exogenous theory is a defeatist position: What is the point in designing for values if technology has no influence on how values are expressed? But neither should the designer adhere to an endogenous theory, which would overclaim the designer’s ability to determine which values the technology implicates and how values will ultimately be expressed. Falling between these two extremes, the interactional position is a widely accepted theoretical stance on socio-technical systems that can productively guide design.

Building on this interactional perspective, Albrechtslund (2007) takes issue with VSD’s positioning as “a principled and comprehensive account of human values in design” [italics ours]. Albrechtslund (2007) argues that no design process can be comprehensive with respect to human values; the multistability of human-technology relationships means that no one can fully account for all possible future uses of the designed technology. He cautions against falling prey to “the positivist problem” of assuming that the use of a technology corresponds to its design and against the hubris of assuming that all possible ethical problems with a technology have been accounted for in its design. At the same time, Albrechtslund (2007) acknowledges that many ethical problems can be anticipated and, indeed, that designers have a special obligation to pay attention to unintended uses. Methods developed in the VSD research lab at the University of Washington – notably, value scenarios and the Envisioning Cards – are steps towards helping designers imagine the multiplicity of unintended uses, users, and contexts of use for a technology, as well as the unintended consequences of intended use. Agreeing with Albrechtslund’s cautionary statements, we recommend a dose of humility alongside the use of such methods.

Woelfer and Hendry (2011) draw insightfully on the interactional perspective in their discussion of ubiquitous information systems for urban youth experiencing homelessness. Through a value scenario, Woelfer and Hendry (2011) explore the implications of digitizing a youth service agency flyer. This flyer provides the only comprehensive overview of services, the when and where for the youth agencies. Although it is the most comprehensive document concerning the agencies’ services, it is not distributed to the public; the print document is available only to users of services who visit the service agencies. Woelfer and Hendry (2011) observe that creating an open, online version of the service agency flyer would provide opportunities to improve usability and access for homeless young people. Yet, making this flyer publicly available on the Internet could also compromise their safety, as its information would be available not only to intended users but also to abusive parents, pimps, and drug dealers. It could bring greater attention – either helpful or harmful – from neighborhood businesses and home owners. Finally, the easy availability of the flyer online might reduce opportunities for positive face-to-face interactions between homeless youth and the adults who work at the service agencies. Thus, as Woelfer and Hendry (2011) imagine this new tool in its context of use, they realize that its likely implications would distort their original intention. In comparing the online flyer to the paper flyer, it is clear that the use of the technology is determined neither solely by the designers’ intentions nor solely by the values of its users but rather by interactions between the properties of the technology (online versus paper), the stakeholders, and the context of use.

Attention to direct and indirect stakeholders. VSD “identifies and takes seriously two classes of stakeholders: direct and indirect. Direct stakeholders refer to parties – individuals or organizations – who interact directly with the computer system or its output. Indirect stakeholders refer to all other parties who are affected by the use of the system” (Friedman et al. 2006a). In designing for values, it is important to consider all those who are significantly affected by a technology, not only the clients or users.

Some recent work focuses on direct stakeholders, those whose needs and values will be supported by new technology. For example, Azencot et al. (2011) develop the GoBraille application to support transit users who are blind, or both deaf and blind, while Woelfer et al. (2012) aim to design mobile applications that support the safety of homeless youths. At the same time, other people are recognized to have significant stakes in the technology because of their interactions with the direct stakeholders. Bus drivers could be helped or hindered in their support of blind and deaf-blind passengers (Azencot et al. 2011); service providers, police officers, and community members each have their own relationships with homeless youths (Woelfer et al. 2012).

These researchers design their empirical investigations to invest more effort in direct stakeholders while still including indirect stakeholders. Interviews often require greater mutual investment between researchers and stakeholders than a survey does, but interviews can also reveal greater qualitative detail. Azencot et al. (2011) apply these methods accordingly, interviewing direct stakeholders and surveying indirect stakeholders. They balance their constraints of time and resources, managing to elicit the perspectives of significantly involved indirect stakeholders while concentrating more time on the direct stakeholders. Woelfer et al. (2012) strike a different balance with respect to time investment. They use semi-structured interviews and value sketches to gain rich insights from both direct and indirect stakeholders while tailoring some interview questions to the different stakeholder roles. However, they interview a greater number of direct stakeholders (19 homeless youth) versus indirect stakeholders (four service providers and two police officers), thus investing more time in direct stakeholders while still benefiting from the nuanced perspectives of indirect stakeholders who are nonetheless very involved (Woelfer et al. 2012).

As noted earlier, the same individual can shift between roles, moving from direct to indirect stakeholder. As an example, Czeskis et al. (2011) consider parents and teens as both direct and indirect stakeholders in mobile phone applications designed to support teens’ safety. Parents and teens are direct stakeholders when they use these applications. But friendships between teenagers mean that teens and parents who have not chosen to adopt such applications are nonetheless affected by their use. For example, a mobile application that takes photographs to monitor a teen’s unsafe driving may also capture images of passengers. An application that discloses a teen’s text messages to their parents will also disclose information about those who send and receive the messages. Thus, parents and teens can also be indirect stakeholders depending upon the situation. Czeskis et al. (2010) develop value scenarios about exactly these situations. Taking these dual roles further, their empirical investigations ask parents and teens to reflect on both roles: as direct stakeholders who use the technology and as indirect stakeholders who are inadvertently and perhaps unknowingly involved.

For some technologies, large communities have a significant but indirect stake in the technology’s use. For UrbanSim, an urban planning simulator, stakeholders include all residents of the region (Borning et al. 2005). For the “Tribunal Voices” project, stakeholders include all Rwandans (Nathan et al. 2011). Because these important classes of stakeholders are large and diverse, both projects are concerned with engaging stakeholders who have different points of view. Both projects also aim to provide a path for indirect stakeholders to become direct stakeholders: that is, to provide those who are affected by the use of the technology with opportunities to influence its use or appropriate it for their own purposes. In the case of UrbanSim, an important and contested decision is the choice of indicators, or measures, to attend to in interpreting the simulation results. UrbanSim developers engage community organizations to present groups of indicators relevant to their perspectives and recommend indicators for future development; suggested future work would also enable citizens to comment on indicators (Friedman et al. 2008a). Nathan et al. (2011) worked with partner organizations to support workshops on international justice and sexual violence and also to develop online tools that encourage discourse around clips from the Tribunal Voices video collection. These studies suggest ways to consider representation when there is a large, diverse group of people occupying a particular stakeholder role and to include some of those stakeholders as cocreators and users of the technology.

Tripartite methodology. As discussed earlier, VSD “contributes a unique methodology that employs conceptual, empirical, and technical investigations, applied iteratively and integratively” (Friedman et al. 2006a). The tripartite methodology can be interpreted rigidly, as though the investigations are lockstep, discrete moves to be undertaken in a prescribed order (Le Dantec et al. 2009). Yet, when looking through recent VSD scholarship, we found many examples of the methodology being applied flexibly, in response to the particulars of the situation and the researchers’ goals.

Although the conceptual, empirical, and technical investigations are all considered important, particular studies may rest more heavily on just one or two. For example, a suite of studies starts with a conceptual investigation of the implications of using a digital camera and display to simulate a window in an interior office (Friedman et al. 2006a). But the bulk of the work consists of empirical investigations using multiple methods to address different values and stakeholder roles. One line of research concerns both short- and long-term impacts on the psychological and physiological well-being of those who work in a “room with a view” (Friedman et al. 2008b; Kahn et al. 2008). Another concerns reflections on privacy by “the watcher and the watched”: both the users of the digital display and those whose images are captured by the video camera (Kahn et al. 2008; Friedman et al. 2008c). These studies do not discuss technical investigations informing a product under design; rather, they are looking farther ahead, attempting to understand value implications of hypothetical, near-future technologies. As another example, van Wynsberghe (2013) focuses primarily on a conceptual investigation, applying care ethics to the nascent domain of healthcare robotics in order to identify stakeholders and values at stake. As a brief case study for her design framework, she conducts a technical analysis in which she compares the value implications of alternative designs for robots that assist with lifting patients: an autonomous robot versus a human-operated exoskeleton. van Wynsberghe (2013) indicates that future work applying her framework to design will need to iterate between technical and empirical investigations.

Two recent case studies are particularly instructive in that they discuss the interplay between all three types of investigations within a relatively self-contained design process. The CodeCOOP case study (Miller et al. 2007) presents a full design cycle of software developed with industry partners. In summarizing the design process, Miller et al. (2007) list all major design events and categorize them as conceptual, technical, or empirical. Similarly, Azenkot et al. (2011) are careful to articulate the investigations used in their design of the GoBraille tool to support blind and deaf-blind public transit users. Both of these design case studies begin with conceptual investigations of stakeholders and values. The CodeCOOP case proceeds to a technical investigation resulting in a software prototype; further technical investigations are interleaved with empirical investigations, both formative (surveys, Value Dams and Flows analyses, contests) and summative (usage data analysis, interviews, reflection). One activity – the final design reflection – is considered simultaneously a conceptual and an empirical investigation (Miller et al. 2007). By contrast, the GoBraille case follows the initial conceptual investigation with an empirical investigation: semi-structured interviews with blind transit users, deaf-blind transit users, and an orientation and mobility instructor. Building on the stakeholder analysis in the initial conceptual investigation, Azenkot et al. (2011) also surveyed bus drivers about their attitudes towards blind and deaf-blind passengers. In technical investigations, Azenkot et al. (2011) analyzed existing technologies and found them wanting, developed the low-cost MoBraille platform, and finally built the GoBraille application to support blind transit users in finding stops and identifying their buses. As in the CodeCOOP case, empirical investigations – here, field studies and semi-structured interviews – served to evaluate the new technology. Because the researchers realized that their understanding of deaf-blind people was limited, they also codesigned a version of GoBraille with a deaf-blind transit user, thus combining empirical and technical investigations (Azenkot et al. 2011). In both of these cases, we see an initial conceptual investigation driving an iterative development process that interleaves technical and empirical investigations, culminating in an empirical evaluation.

Walton and DeRenzi's (2009) case study of supporting healthcare in Africa is particularly interesting in that the authors do not explicate the integrative nature of their investigations. The case study describes two related but independent projects, one carried out by each of the coauthors. One project concerns the redesign of existing information technology support for vaccine delivery. Walton and DeRenzi (2009) report that this project engaged with VSD from the beginning; however, the software to be redesigned already existed. In the other project, concerning a tool to support community healthcare workers, VSD is applied to evaluate a proposed design before implementation and user training begin. Thus, although the application of VSD begins in both projects with a joint conceptual investigation of stakeholders and values, each project is at a different stage of an overall design process when that conceptual investigation is performed. The authors report that the vaccine delivery project continues with a technical investigation – the redesign of the software. However, this technical investigation involves discussion and codesign with stakeholders concerning the meanings of respect and accountability in the context of use, thus taking on an empirical overtone. Walton and DeRenzi (2009) frame their work on support for community health workers as an empirical investigation, including "rapport building," semi-structured interviews, and focus groups with a range of stakeholders. However, this empirical work overlaps with the beginning of design and development of the CommCare tool: an iterative process engaging both technical and less formal empirical investigations. This work thus illustrates how investigations overlap and intertwine so that the boundaries between them blur.

We disagree somewhat with Borning and Muller's (2012) claim that VSD can begin with any type of investigation. Friedman et al. (2006a) recommend a stakeholder analysis as one of the first steps. It does seem difficult to conduct empirical investigations without a reason to engage particular people, or to conduct technical investigations with no notion of the user or of others who might be affected. Indeed, all three of the case studies discussed above begin their application of VSD with a stakeholder analysis. While Le Dantec et al. (2009) claim that their work begins with an empirical investigation, Borning and Muller (2012) point out that the project truly begins with a conceptual move: the identification of homeless people as a stakeholder group worthy of interest. However, we agree with Borning and Muller (2012) that the first major investigation can take any of the three forms: for example, a careful conceptual analysis of values at stake, as in the Cookies and Informed Consent work (Millett et al. 2001); an empirical investigation focusing on values in relation to a technology, as in the Watcher and the Watched (Friedman et al. 2006b); or construction of a new technology, as in the CodeCOOP work (Miller et al. 2007). The separation of the investigations is a conceptual tool, a way to get designers to consider the interactional aspects of their work; it is not meant to create silos within the project.

In an attempt to counter the misperception that VSD investigations must proceed in a prescribed order, some recent VSD work focuses on components drawn from the three investigations – for example, indirect stakeholders (conceptual), iterative practices (technical, empirical), multiple methods (empirical, conceptual, technical), and feature analysis (technical) – rather than calling out the investigations by name. For example, Czeskis et al. (2010) do not refer to the tripartite methodology, but do include conceptual, empirical, and technical investigations in the forms of value scenarios, semi-structured interviews, and technical recommendations. We refer readers to Friedman et al. (2006a) for a discussion of the “Cookies and Informed Consent” case study, which emphasizes the iterative and integrative nature of the investigations using the “traditional” labels. Whether called out explicitly or not, we believe the construct of the tripartite investigation is useful in drawing attention to different ways of exploring the relationship between human values and technology design.

Heuristics

Beyond VSD's core commitments, we wish to propose several guiding questions based on VSD critiques, VSD case studies, and work that shares an affinity with VSD. We phrase these heuristics as questions addressed to you, the VSD investigator. Note that these heuristics are neither strictly orthogonal nor presented in the order in which they ought to be considered; rather, they intertwine, and your approach to addressing some heuristics will likely affect others. We look forward to future scholarship that continues to develop and add to this list.

Should you adopt an ethical theory alongside VSD? As discussed earlier, Friedman's presentations of VSD neither recommend nor forbid the adoption of an ethical theory alongside VSD. An ethical theory can help to identify, define, and prioritize relevant values. Some domains, such as healthcare (van Wynsberghe 2013), war (Cummings 2006), and politics (Borning et al. 2005), have well-developed ethical theories that technology design should draw upon. Although we acknowledge Manders-Huits' (2011) critiques and note that Yetim (2011) proposes discourse ethics as a generally applicable theory, we do not take a stand here as to whether VSD must always be complemented with an ethical theory.

How will you identify values of concern? Although the tripartite methodology facilitates the discovery of implicated values throughout the design process, we agree with Le Dantec et al. (2009) that it matters where researchers begin. When design begins with a technology or context of use, rather than a particular value, discovering stakeholders’ values through an initial empirical investigation can help to avert researcher bias. However, it can make sense to begin with conceptual investigation when time is short, and especially when the work can build on previous investigations of relevant values (Borning and Muller 2012). Moreover, applying a domain-specific ethical theory can provide a principled basis for identifying relevant values through conceptual investigations.

Where do you stand on universal values? Does it matter for this project? We agree with Borning and Muller (2012) that VSD can accommodate different stances regarding the universality of human values. If the researchers’ stance on universality affects their work, they should articulate that stance in reporting the research. The researchers’ stance is less likely to be problematic when designers have a nuanced understanding of the situations where their products will be engaged. The researchers’ stance is more likely to be an issue when designing across cultures (e.g., as articulated by Alsheikh et al. 2011), when explicitly designing for global use (as noted by Friedman and Kahn 2003), or when anticipating uses far beyond the intended context of use (e.g., with the Envisioning Cards).

What values, if any, will the project explicitly support? In work on UrbanSim, Borning et al. (2005) distinguish explicitly supported values – that is, values which the designers seek to explicitly support during the design process and in the final product – from designers' personal values and from stakeholders' values. Because of its role in a political process, UrbanSim's use engages the full diversity of stakeholder values; the designers' personal values should not be privileged over those of other stakeholders. As Alsheikh et al. (2011) point out, naming values to explicitly support helps prevent the designers' values from being supported by default. To foster legitimacy in the political process, UrbanSim's designers chose three values to explicitly support: fairness, accountability, and democracy (Borning et al. 2005). Borning and Muller (2012) recommend that designers consider participation and power-sharing as explicitly supported values.

How will you convey stakeholders' voices? As the previous question makes clear, there are opportunities for researchers to engage with and hear stakeholders' perspectives and voices throughout a project. From the initial framing of the work and its contexts to the development of theoretical lenses, methods, and analyses, opportunities can be created to engage meaningfully with stakeholders. How the research team shares the voices of stakeholders with the audience of their work, however, is another question entirely. Space constraints, stylistic norms, and disciplinary conventions can all push against the goal of directly representing stakeholders' voices. Still, a community can shift its conventions, particularly when there are strong examples of alternative practices (e.g., Alsheikh et al. 2011; Le Dantec 2009; Woelfer and Hendry 2012).

How will you present your own voice as a researcher? Other authors would have presented this overview of VSD in different ways than we have chosen to. Others would have interpreted the VSD literature differently, chosen different points to emphasize, and left different things unsaid. In short, it matters who wrote this chapter. So, too, does the researcher matter in research on design for values. It matters how stakeholders’ voices are interpreted, what is emphasized, and what is left out. Ames et al. (2011) and Woelfer et al. (2011) provide particularly strong examples of representing the researcher’s voice in design for values. We hope that this chapter serves as an example as well. Throughout this chapter, we have attempted to make clear our voices as authors with particular backgrounds, training, and interests. As we mentioned in the introduction, we have a long history with the VSD lab at the University of Washington. Many of the articles we critiqued have one or the other of us as an author. We believe that making this information apparent is important for positioning you, the reader, in evaluating our claims and entering the discussion.

Conclusion

In this chapter, we have demonstrated the evolving nature of VSD through the body of scholarship that has both critiqued and contributed to it since 2006. We conclude with our response to a provocative line of questioning posed by Chris Le Dantec at CHI 2012, in reaction to Borning and Muller's presentation of their paper, "Next Steps for Value Sensitive Design" (Borning and Muller 2012). Le Dantec asked: Is it necessary or desirable to continue building a branded approach for engaging with values in the design process? Should not all designers routinely think about values and the human condition, and should not all design educators teach their students to do so?

We agree with the vision that Le Dantec proposes. But as a field, we are still far from that ideal. Not only is design for human values not yet routine, but the development of methods and theories to support such work is also still at a nascent stage. Whether the label is VSD or something else, it helps to have a label so that researchers can identify their work in relation to something and build a discourse (as we are doing here) around what that something means and how to carry it out.

Consider an analogy with user-centered design, a familiar term within human-computer interaction. According to Abras et al. (2004), the term "user-centered design," or UCD, was coined by Don Norman in the 1980s. The term originally referred to a stance ("the user should be at the center of design"), a theory, and a set of principles. This stance was not obvious at the time, though it may seem so in retrospect. UCD grew over time, becoming a more general term, less closely associated with Norman's work, although Norman is still recognized as a founder of the field (Abras et al. 2004).

In the HCI community (broadly conceived), we now take for granted that it is important to consider the user in design and to include users in the design process. And yet the term "user-centered design" is still useful, because it distinguishes a user-centered stance from other stances, allowing researchers to position their own work in relation to it. Moreover, more than 20 years later, software development processes still do not always include attention to the user; that ideal has not yet been reached.

A hopeful position is that VSD is the next UCD: work in the area will continue to grow more nuanced and more reflective, and focusing design on human values will become an accepted rather than a novel perspective. Attention to the user is infused throughout HCI work and is gaining ground in software development practice, even when there is no explicit reference to UCD; we hope that someday attention to values will be just as pervasive, even if VSD (or another branded values-oriented methodology) is rarely referred to. What we learn from engaging with VSD today will influence how technology designers appreciate and address values in the future.