In the 1990s, the government of India began a program to digitize and open land records. Digitizing the Record of Rights, Tenancy, and Crops (RTC) along with demographic and spatial data was intended to empower citizens against state bureaucracies and corrupt officials through transparency and accountability. Sunshine would be the best disinfectant, securing citizens’ land claims against conflicting records. In fact, what happened was anything but democratic. The claims of the lowest classes of Indian society were completely excluded from the records, leading to the loss of their historic land tenancies to groups better able to support their land claims within the process defined by the data systems. Far from empowering the least well off, the digitization program reinforced the power of bureaucracies, public officials, and developers (Donovan 2012).

Two decades later, student data management firm inBloom provided data storage and aggregation services to primary and secondary schools enabling them to track student progress and success using not only local data but data aggregated from schools nationwide. Many districts and some entire states adopted inBloom, which was backed by education reform giants such as the Bill and Melinda Gates Foundation and the Carnegie Foundation. inBloom promised that its system, by putting advanced data in the hands of teachers and administrators, would provide the infrastructure layer for a personalized learning ecosystem that would better meet students’ needs while improving efficiency. But the aggregation of such data raised deep concerns about student privacy. After several states backed out of the arrangement because of these concerns, the company ceased operations in 2014. CEO Iwan Streichenberger attributed inBloom’s failure to its “passion” and a need to build public acceptance of its practices—in essence rejecting the legitimacy of the ethical concerns its critics raised (Singer 2014). Whether one accepts the legitimacy of those concerns or dismisses them as old-fashioned, there is no question that inBloom’s failure was not one of inadequate technology but of inadequate ethical vision: inBloom failed to appreciate the moral risks of its technologies and business model, and failed to convince the public of new principles that would support them.

Why the technologies of open data or data-driven personalized learning—and so many other information technologies that claim to bring about not just efficiency and prosperity but fairness, democracy, and freedom—failed to live up to their promises is the question to which this book is devoted. Information, as a social practice and a social structure, raises the same kinds of questions that we might ask of any other practice or structure: What should we do with it? How should it—and control over it—be distributed? These are questions that information ethicists have explored in some detail, especially with regard to privacy. But there are other, deeper questions that information ethicists are just beginning to explore, ones that emerge at the challenging intersection between ethics and social science: What role does data play in the structure of society, and society in the structure of data? How does information shift distributions of goods (material or otherwise) or balances of social and political power, especially among social groups? What beliefs—beliefs about information, beliefs about politics and society, beliefs about people—are assumed by and embedded in information systems? These questions, in turn, assume answers to more deeply philosophical questions about society’s relationship with information: How would a good society manage information, using it to further the best ends possible? What practices give us the fairest information processes and outcomes?

These are classical questions of justice. They give rise to a need for what I will call in this book “information justice.”

1.1 Toward a Theory of Information Justice

Information justice refers to the fundamental ethical judgment of social arrangements for the distribution of information and its effects on self-determination and human development. It is a subset of the broader notion of political justice, applied to questions of information and information technologies. A theory of information justice helps us understand the conditions under which a society can be said to be securing political justice within the realm of information. It is a critical question to be asked of “the information society,” whatever that may mean: any vision of a society driven by information should be expected to achieve justice generally only to the extent that its fundamental social institution is itself just. Even in more modest visions of society in which information technology is not the definitive structuring force, one cannot deny that information and information technologies are as important to the functioning of contemporary societies—not only the post-industrial north but increasingly the global south as well—as the political economies of those societies (though, of course, information is no more independent of the political economy than the political economy is of information).

A political philosophy that sees information as a socio-technical practice displays certain similarities to environmental philosophy. Environmental philosophers have long acknowledged questions of justice. David Schlosberg notes, however, that the field is marred by a weak understanding of justice itself, making “most theories of environmental justice . . . incomplete theoretically” (2004, p. 517). Relying on the work of Iris Marion Young and of Nancy Fraser, he argues for a more expansive understanding of justice composed not only of the common distributive framework but also of a need to secure recognition of and participation by all groups in society. Using this expanded understanding, he develops a framework of “critical pluralism” for environmental justice that makes sense not only philosophically but also in light of claims by social movements dedicated to securing environmental justice. Schlosberg’s success in framing a more complex vision of justice in environmental issues suggests the value of a similar framing for information practices and technologies. Drawing on Schlosberg’s conclusion that “justice itself is a concept with multiple, integrated meanings” (2004, p. 536), it may be possible to engage the challenges of information effectively from a justice-centered perspective.

That is to say, we can build a more effective social theory of information by addressing it in relation to, as Serge-Christophe Kolm defines justice, “the central ethical judgment regarding the effects of society on the situation of social entities” (1993, p. 438).Footnote 1 Justice is the primary standard by which social and political structures, actions, and practices are evaluated. Echoing Aristotle, John Rawls calls justice “the first virtue of social institutions, as truth is of systems of thought” (2005, p. 3); Young considers justice “the primary subject of political philosophy” (1990, p. 3). Information can be understood as a specific kind of political situation or condition of a social entity, that regarding information about the entity, which is affected by the situations and actions of other social entities. A framework for information justice can thus be developed that evaluates such situations according to judgments about their rightness, and that (hopefully) promotes information practices that tend toward right situations.

Broadly speaking, society might affect the situation of social entities in two ways. Distributive justice concerns the effects that occur “when the purposes of several such entities oppose each other, and the issue is how to arbitrate among their competing claims” (Kolm 1993, p. 438). Distributive justice might arbitrate among claims to material goods, but also claims to rights or political power. Arguably, most social effects on individuals can be understood as questions of the distribution of some good across social entities. However, not all social claims can be reduced to a distributive framework without doing significant violence to the claim itself, for example by ignoring the structural context that gives rise to the claim or taking as fixed social matters that are the product of relationships and processes (Young 1990, pp. 16–30). Hence we might also speak of structural justice, “the degree to which society contains and supports the institutional conditions necessary for the realization of…the values comprised in the good life,” values primarily concerned with self-development and self-determination (Young 1990, p. 37). Social choices such as the ones made in creating data, using it, and opening it for others to use will often have implications both for the distribution of material and social goods and for the social structures that shape individuals’ control over themselves.

From this perspective, the open data of the Indian land records system and the student data collected by inBloom are, in themselves, neither just nor unjust, nor do they inherently further justice or injustice. This is, as I will show in detail momentarily, not because open data is technologically neutral but because open data only exists in relation to a broader information system that gives it meaning: Open data as a-thing-in-itself does not exist in the real world. Moreover, openness is not the only value that ought to be pursued in an information system; data privacy, for example, is equally important (Nissenbaum 2010) and may often conflict with openness (Kaminski 2012). Whether we open or restrict data is thus best understood as one among many intermediate decisions in building an information system, decisions that should be made based on what will further justice given the nature of the data and circumstances. What is ultimately needed, then, is a way of understanding data in the context of an information system and in relation to justice directly: a framework for information justice. Such a framework would allow ethicists and practitioners to systematically identify the different ways in which data can present issues of justice, the relations among them, and the principles by which data can be made more just. Such a theory might pursue three parallel lines of inquiry: inquiries into the moral principles by which we might evaluate and govern data; into the socio-technical practices and institutions conducive to achieving information justice; and into the aims, capabilities, and conditions for the success of a social movement promoting information justice.

The discussion in this book serves as a starting point for the study of information as a matter of justice. It should not be read as an indictment of any data practitioners. The problems identified herein are mostly structural in nature. If contemporary societies—affluent and otherwise—are to be as structured around data as many expect, we will need to know how existing social structures are perpetuated, exacerbated, and mitigated by information systems. We will need to know what the ideal information system looks like. Most important, we will need to know what can be done about it. These questions of justice presented by the information systems and practices now emerging in most societies—how the questions arise and what we might do about them—are the focus of this book.

1.2 The Myth of Technological Neutrality

Information justice differs from traditional notions of justice in that its object is explicitly technological. Understanding information justice demands understanding how justice can apply to technologies. That is challenging. With a culture of technological neutrality (Johnson 2006) and radical individualism (Walls and Johnson 2011) dominating the information technology industry, it is exceptionally easy for data scientists and users to accept current data practices and outcomes as natural or inevitable, and to make data use the only moral question of interest. More dangerously, one might take information technologies as instances of what Langdon Winner called “inherently political technology,” which “unavoidably brings with it conditions for human relationships that have a distinctive political cast” (1980, p. 128). Advocates see such technologies as making democratic politics or individual liberty inevitable through a “naïve technological determinism” in which technology “molds society to fit its patterns” (1980, p. 122). The fundamental rejection of this view, and the recognition that technologies are neither mere artifacts nor the outcome of a purely scientific, rational design process, will prove to be a central premise of the later chapters.

Technological neutrality is present in fields far broader than just information technology. To say that a social conception of information raises questions of justice that until now have not been explored is not to say that moral questions about information have never been asked, nor that useful answers to such questions have not been offered. There is a well-established literature on information ethics among scholars of philosophy, law, and information studies. There is an even longer tradition in social and political thought of philosophical reflection on technology generally. The question of information justice sits between these perspectives, subsuming the specific questions asked by information ethics into a larger moral framework while maintaining connections to the complexities of individual technologies that are often lost in more general philosophies of “technology.”

What, precisely, do we mean when we speak of technology and technologies?Footnote 2 Most views of technology focus on technē, paying little attention to logos. It is somewhat surprising that “technology” generally does not refer to the study of something. To some extent this is a function of how “-ology” has come to designate that which is studied as much as the study itself: the biology of the mollusk, the ecology of the Arctic, the methodology of a study. It is perhaps endemic to contemporary society (and perhaps even modern life in general) that we confuse logos and episteme, not only etymologically but, more importantly, practically: In modern life, method is knowledge. Nowhere is this more the case than with technology and technologies.

A useful place to begin understanding technology pragmatically is thus with the word itself. Technology, Larry Hickman (2001) notes, literally means inquiry into technique. But it is used more commonly to designate (a) techniques, tools, and artifacts; (b) systems of these; and (c) applied science. When techniques, technical systems, and applied science work well, there is no need for inquiry into them. It is when they fail in some sense that inquiry into them is necessary, i.e., that we need technology in the literal sense. Technology, in strict speech, is thus “invention, development, and cognitive deployment of [physical and intellectual] tools and other artifacts brought to bear on raw materials and intermediate stock parts, with a view to the resolution of perceived problems” (2001, p. 12). We can use “systematic inquiry into technique” as a convenient shorthand for this.

But the problems of technology that we see are (or at least appear to be) found in areas defined by more conventional definitions of technology. They arise in techniques themselves. The problem is whether a particular technique should be used for a particular purpose, whether some people should be allowed to use a technique, or whether a technique poses a threat to a particular social value. This, of course, raises the question of what constitutes a technique. Hickman’s interpretation of John Dewey focuses on “tools and other artifacts brought to bear on raw materials and other intermediate stock parts,” that is, on tools that we use to interact with the world, both as it is given by nature and as it is created by us. The emphasis is on artifacts themselves. But we use these tools to carry out certain actions, to complete specific tasks. There is thus a technē, a craft or technique, to every artifact. It is when we conduct inquiry into these crafts that we engage in technology, that is, in the study of technical things.

Here we see a great divergence from conventional definitions of technology. In conventional definitions, as suggested above, technology is ultimately an artifact of some sort, usually a physical one but sometimes intellectual (a concept that we use to act, such as “markets”) or manual (a specific method of manipulation, as a physical therapist might use to inflict useful pain on a patient). Even in manual technologies, the technique treats practitioners as machines, carrying out tasks as if they were automata; it reduces the human to an artifact.

This artifact-driven view of technology leads to the thesis of technological neutrality. Technological neutrality is a vision of technology that begins with Bacon’s New AtlantisFootnote 3 and continues to be reflected quite strongly in popular discourse about technology. The thesis starts from the observation, shared with many critical perspectives on technology, that technologies are in important ways morally ambiguous (Feenberg 1991). As Melvin Kranzberg famously put it in what has come to be known as Kranzberg’s First Law, “Technology is neither good nor bad; nor is it neutral” (Kranzberg 1986). The thesis of technological neutrality is built on this ambiguity of technology, but takes the first clause of Kranzberg’s Law to imply precisely what the second clause denies: that technology is neutral.

The basic premise of technological neutrality is that technology is value-neutral. Technologies are simply physical and intellectual tools that have no intrinsic value. They can be used in different ways, some of which are good and some bad. It is human action that assigns value to a technology. Thus the normative evaluation of technologies focuses not on the technologies themselves but on what one does with them. Actions, not technologies, hold moral values (Tiles and Oberdiek 1995, pp. 13–17). The neutrality thesis can thus be stated as follows: Technologies are value-neutral tools that are used to fulfill valued functions; therefore moral characteristics can be attributed only to uses of technologies and not to technologies themselves. This view is seriously deficient, as I will show below; nonetheless, it remains the dominant view in contemporary western culture.

We can see this dominance most strongly in discussions of the responsibility of scientists and technologists for their creations. Two common (though ultimately flawed) arguments from neutrality identify a very limited scope for responsibility among scientists for their work. Both rely strongly on the ambiguity in the use of technology. The first suggests that the fact that technologies have both good and bad uses shows that a technology is neither good nor bad; goodness and badness attach to use. Since, the argument seems to assume, science can only gain value through technology (in this case understood as “applied science”), the neutrality of technology implies the neutrality of science and thus the freedom of the scientist from moral responsibility. Responsibility lies with those who use technologies, not those who create them. This is the view of Tom Lehrer’s satirical version of German-American rocket scientist Wernher von Braun: “‘Once the rockets are up, who cares where they come down/That’s not my department,’ says Wernher von Braun” (Lehrer 1965). A second argument suggests that the same body of scientific knowledge can lead to different technologies, some good and some bad. Since science can lead to both good and bad technologies, it must be neutral itself. Again the scientist is exempted from moral responsibility by the neutrality of their work (Forge 1998). In the first argument, the value-neutrality of technologies directly insulates scientists from responsibility because it places responsibility on those who use the technology. In the second, the value-neutrality is shifted from technology to science, but the ambiguity of technology remains.

To build an alternative view requires rejecting two key premises of the thesis of technological neutrality. If technology is more than just a tool to be used for whatever purpose one chooses, and if ends are part of the artifact, then its claim to value-neutrality becomes unsupportable. To assert that technology on the whole is permeated by embedded values and that using technologies embeds those values in society at large is a central claim of most critical theories of technology since the Second World War. Martin Heidegger (1993, pp. 307–341) argues that technology sees the world as standing reserve and ultimately leads to humans understanding other humans as such. Herbert Marcuse (1991) focuses our thinking on the role of technology in upholding bourgeois rule and encouraging commodification. Michel Foucault (1995) demonstrates the role of technology in imposing discipline and normalization. Richard Merelman (2000) shows that the political values implicit in modern technologies are fundamentally different from those in postmodern technologies. These all suggest that technology is itself value-laden, and that by implementing technology in any form one implements values.

The Social Construction of Technology (SCOT) approach is one of the more promising social science approaches to understanding technology as value-laden. SCOT agrees with the various critical perspectives on technology that values are inherent features of technologies. But it does so in a far more sophisticated way. The SCOT program treats the development of technology neither as a process fixed by nature (as the neutrality thesis assumes) nor as one fixed by universal social forces (as Heidegger, Marcuse, and Foucault assume in various ways). Technologies are instead created in a historically contingent process in which scientists and technologists make choices that are rooted, implicitly or sometimes explicitly, in non-scientific judgments.

Technological development, in the SCOT approach, is seen as a process of variation and selection that is guided by the meanings given to the artifacts by social groups. These meanings are historically contingent social factors at work in the development of the technology. Key to this process is the idea of the interpretive flexibility of a technological artifact. Relevant social groups, those who have some role in the process of development, hold competing social meanings of the artifact. The artifact is, in essence, underdetermined by its natural characteristics like its physical operation, use, or utility in ways very similar to how constructivist approaches to science see scientific theories as underdetermined by empirical observation. As the technology develops to its final form—a process of closure—these contingent meanings are lost through a process of stabilization in which the interpretive flexibility is gradually reduced by social processes rather than natural characteristics as some form of the artifact becomes dominant. Closure of the technological development process results in a technology that appears to be fully natural and developed through a linear, teleological process. But the SCOT program shows that there is nothing inevitable in a technology: “‘successful’ stages in the development are not the only possible ones,” and the selection of successful and unsuccessful stages is to be explained symmetrically by appealing to the social meanings at work in the choices that scientists and technologists make. Meanings, not nature, function, or utility, are the ultimate determinants of the form of a technology (Bijker 2001).

Both the critical political theories of technology and the SCOT empirical program lead to the same conclusion. Rather than being value-neutral, technologies embody and institutionalize certain values. Technologies are value-laden. The neutrality thesis cannot be maintained, and a fundamental contradiction in the superficial understanding of technology is exposed. The constructivist understanding of technology implies the precise opposite of the neutrality thesis. Technologies are shaped by normative social factors, not only by natural forces or a naturalized concept of utility. Ideas about the good, the beautiful, the healthy, the profitable are as much a part of technologies as the physics or chemistry of the artifact. Artifacts are designed and practices developed with these goals in mind, and these are ontologically part of the technologies as much as their physical characteristics. Far from technology being value-neutral, values are inherent in technologies.

What might these values look like? An analysis of the role of values in technology based on the constructivist framework leads to four ethical claims about the structure of technological values. The first is that values are embedded in technologies and thereby in society as a whole as well. A technology is not a value-neutral material tool because it is part of a structure of value-laden meanings. As Pinch and Bijker explain, “Obviously, the sociocultural and political situation of a social group shapes its norms and values, which in turn influence the meaning given to an artifact. . . . [D]ifferent meanings can constitute different lines of development” (Pinch and Bijker 2005). Constructivists hold that these meanings are ontologically part of the associated technologies (i.e., the technology cannot exist in its current state separately from these meanings) and embed the underlying values in technologies. If values are embedded in technologies, those values become embedded in society as well when the technology is implemented in society. As actors practice the technology, they bring about the consequences of the values embedded in it regardless of the values that the user holds. Implementing a technology is thus, Feenberg argues, the act of choosing among “civilizational alternatives” (Feenberg 1991): different societies differentiated by the values that technologies embed in them.

A second conclusion is that the imposition of values comes with each technology, not just with technology in general. Technologies do have common features. If technologies are built by common social structures, the values of those structures should be embedded in the technologies that result. If technology itself has some common value—for example, understanding improvement as increased efficiency—that value should be present in all technologies. But the common features of technology do not exhaust the set of embedded values. Understanding the social place of a technology demands understanding it specifically, as each will be composed of different meanings and therefore embed different values than others. A concept of human psychology is at work in both medical testing and mass media, but it is a very different one: rational action is embedded in medicine, while unconscious motivation is embedded in television commercials. Each specific case demands its own analysis.

The third point is closely related. If specific technologies, and not just technology in general, can embed values, then each will embed somewhat different values based on the contingencies of the relevant social groups, the process of stabilization, and the experiences that underlie the key relationships in the technologies. The embedded values of specific technologies and of technology in general will thus be pluralistic rather than monistic. Technology in general can be standing reserve, commodified and bourgeois, and normalizing simultaneously. Online shopping may encourage normalization through advertising at the same time that it empowers consumers to express their individual sense of style by expanding their choices. It is possible to embed many different values in a technology, and even to embed conflicting ones. Understanding the social consequences of technology requires understanding the complex patterns of value in each specific technology rather than (or at least in addition to) a general monistic theory of technology.

The final point is the most consequential for political practice: the values embedded in how we do things (i.e., the technology) can conflict with those of what we do (the action itself or its larger social context) when the neutrality thesis guides our understanding of the technology. The multiple values that could be embedded are now seen as either choices that individuals make in deciding how to use a “mature” technology or the natural (and therefore value-neutral) features of the technology itself. But if values are embedded in the technology, then the choice is made not in choosing how to use the technology but in the design process itself. In practice, the original values remain embedded in the technology, and implementing it remains an act that implements those values as well. By the time that the technology is ready for use (and thus ripe for the kinds of choices that the neutrality thesis focuses on), the values that it will embed in society will already be embedded in the technology by the process of constructive stabilization. Using the technology in any sense will embed those values whether we actually hold those values or not, choosing the resulting society whether we want it or not.

This leads to an important conclusion about normative problems associated with technologies. In a society dominated by technological neutrality, technologies will often pose irresolvable conflicts between the values embedded in and implemented through a technology and the values held by society more generally but not embedded in the technology. When we implement technologies, we assert their values as well, bringing about a particular society regardless of the values that we claim to hold. It is thus the former set of values, not the latter, that governs the social consequences of those technologies. The result is that the opportunity to choose among alternative directions for society is missed, hidden by the neutrality thesis.

1.3 A Critical-Constructive Alternative

A common thread in critical perspectives on technology is the rejection of realist or positivist views of data in favor of constructive views along the lines of the SCOT approach. Such views are deeply challenging to commonly held ideas about the moral status of data itself and the information technologies that manage data. As an alternative to technological neutrality, I present a critical-constructive view of technology that makes the details of technological development a central question. Technologies are formed through a process of selection in which alternative forms of the technology are winnowed into a final form by social and political forces as much as by scientific and engineering ones. These alternative forms allow one to explore the values of a technological system critically, opening technologies to examination as questions of justice. This philosophy of technology forms the basis for the analysis in the rest of the book.

Langdon Winner (1993) strongly criticizes the SCOT framework on several grounds related to its treatment of normative issues. He argues, in my view correctly, that SCOT is not generally concerned with the social consequences of technology and that it is generally ignorant of the larger moral and political questions that technology poses. A similar critique is offered by Hans Radder (1992), though his approach focuses primarily on normative implications of constructivist methodology rather than of the constructive nature of technologies themselves. One must certainly recognize the limits of Winner’s critique: His claims do little to fundamentally challenge the SCOT program itself, as these criticisms are less theoretical failures than consequences of the fact that SCOT is an empirical program for explaining the development of technologies.Footnote 4 But in a broader sense the point is compelling: SCOT alone cannot be critical of technology in the way that other philosophers of technology have been.

What is necessary is a critical-constructive approach to the values in technologies. That possibility emerges in considering the alternative forms of the technology that could have been. Young’s statement of the core premise of critical theory is central here:

Critical theory is a mode of discourse which projects normative possibilities unrealized but felt in a particular given social reality. Each social reality presents its own unrealized possibilities, experienced as lacks and desires. Norms and ideals arise from the yearning that is an expression of freedom: it does not have to be this way, it could be otherwise. (1990, p. 6)

Technologies offer many possible ends-in-view, and a critical view of technology facilitates rather than restricts making effective, critically reasoned choices in these questions. As Hickman summarizes John Dewey’s argument,

New technologies and techniques are multi-valent, that is, that they offer all sorts of new possibilities and that it is the obligation of those who use them to choose the best of those possibilities and then rework them in order to render them more valuable. (Hickman 2001, p. 59)

If one replaces the word “multi-valent” in this passage with “interpretively flexible” and shifts the locus of responsibility from use to development, one has a position very similar to the SCOT approach, but with the addition of an obligation on the part of those constructing the technology to do so responsibly and critically. Technology appears neutral in a sense because of its interpretive flexibility—because it is swimming in a sea of indeterminacy—in that it does not inherently entail any one set of values until closure is reached.Footnote 5 But it will ultimately be value-laden as closure is reached and possible forms of the technology are foreclosed. Those who move the technology toward closure are responsible for the values that are ultimately embedded in a technology because they are making the design choices that embed them.

Dewey holds that the apparent neutrality of science and technology leaves society “forced to consider the relation of human ideas and ideals to social consequences which are produced by science as an instrument” (1981, p. 390). Science and technology have social responsibilities, he argues; they “must, in short, plan [their] social effects with the same care with which in the past we have planned [their] physical operation and consequences” (1981, p. 392). To leave the choice of these consequences to private interests is to abdicate the responsibility that technology has to society. It may appear problematic that Dewey sees that responsibility as control until one notes that for Dewey control means most fundamentally the ability to act in a self-controlled manner, that is, to act with knowledge and understanding that allows one to bring about in practice the consequences that one expects from one’s beliefs (1981, p. 395). If the closure of technology will result in some values being built into society, then it is indeed irresponsible not to inquire into whether those values should be built into society.

Embedded values are seen not as universal claims but as ends-in-view that are therefore subject to evaluation and revision as well. As Dewey puts it:

Only recognition in both theory and practice that the ends to be attained (ends-in-view) are of the nature of hypotheses and that hypotheses have to be formed and tested in strict correlativity with existential conditions as means, can alter certain habits of dealing with social issues. (1981, p. 407)

At the very least, this critical-constructive philosophy of technology demands a kind of Weberian inquiry into technological values: we identify the values that are present, clarify them by making them more logically coherent, draw out their implications, and predict the consequences that one might expect from implementing technologies with particular values embedded in them (1949, pp. 20–21, 52–55). We will likely go at least as far as invoking the later ethics of Dewey’s predecessor in the development of pragmatism, Charles Sanders Peirce, who defines ethics as the “study of what ends we are deliberately prepared to adopt” (Peirce 1992, p. 200, vol. 2). The pragmatic evaluation of norms compels us to change our technologies if we are not prepared to deliberately adopt the ends embedded in them, because it reveals that we hold doubts about the rightness of those ends.

It might be possible to go a step further than this. Cheryl Misak (2000) holds that pragmatic inquiry is necessarily responsive to moral as well as observational experience. She argues that in truth-seeking inquiry, the assertion of a proposition entails that one believes that it is true, that one is committed to defending it, and that one is committed to abandon it in the face of compelling evidence and argument against it because one seeks truth in making a claim. This makes one sensitive to experience, which, Misak rightly shows, means more than just observational experience; a proof can be seen as an analytical experience. Misak shows that moral inquiry is subject to certain kinds of experience under conditions similar to those of the natural sciences. One’s moral judgments, for example, are shaped by background beliefs which vary much more than those of scientists but operate in the same fashion. Thus she concludes that one’s moral claims are sensitive to one’s experience—and that of others—in precisely the same way that other kinds of inquiry require. So long as one maintains that one’s moral belief is true, one is committed to respond to empirical and analytical experience just as with one’s empirical beliefs. Critical-constructive technology should thus be able to criticize the beliefs that are inherent in technology much as it could criticize empirical beliefs, at least within a broad framework of moral pluralism.

In building a theory of information justice, this book challenges especially such ideas of technological neutrality and determinism in information technology. For all of the celebration of (and weeping and gnashing of teeth over) the purported ubiquity of data collection (e.g., Shilton 2009) and data as the “detritus” of human life (Learmonth 2009) in contemporary affluent societies, data—which we can understand preliminarily as systematically collected and stored information—does not, in fact, simply happen, nor is it a neutral, objective reflection of reality. Data exists only when information is transformed into data through a process of formatting, recording, making it retrievable and relatable, and communicating that information. It is, in an important sense, a form of communication between actors that embeds the assumptions and worldview of those actors in what is communicated. It is, like all technologies, a construct, an operationalization of an actor’s concept and reality, interpreting between the physical world and the intellectual structures by which actors understand that world, and embedded in a set of social practices by which it is created, interpreted, and used. It exists as just one element of a technology of data analysis that also includes statistical methodologies, data management systems, and ends for which data can be used. Data systems are thus neither stores of objective information nor inherently democratic technologies but rather technological arrangements that serve as forms of order: “ongoing social process[es] in which scientific knowledge, technological invention, and corporate profit reinforce each other in deeply entrenched patterns that bear the unmistakable stamp of political and economic power” (Winner 1980, p. 126). Data systems should thus be viewed critically in the sense that Iris Young wrote of critical theory: “Each social reality presents its own unrealized possibilities… it does not have to be this way, it could be otherwise” (1990, p. 6). This makes data amenable to political analysis: Why should the data be the way it is rather than some other way? That is the fundamental question guiding the analysis in this book.

1.4 Theorizing from One’s Own Experience

While this is primarily a work of social theory, it was spurred in part by questions arising in my own experience with information systems in higher education and is written in close conversation with socio-technical practices, especially in higher education administration. It thus requires some deep exploration of the actual structures and practices of information technologies, and a justification for relying on my experience in that exploration. I will frequently draw on the data system in place at Utah Valley University (UVU), where I worked as a Senior Research Analyst in its Institutional Research & Information (IRI) office from 2009 to 2013. That experience involved extensive work in data extraction and limited database design and administration, primarily in the Banner Operational Data Store (ODS) database. This is supplemented by narrative analysis of the Structured Query Language (SQL) implementing the data systems and the data standards established by the federal Integrated Postsecondary Education Data System (IPEDS) and the Utah System of Higher Education (USHE) reporting processes.

Since UVU’s systems are a key touchstone for this work, it will be valuable to understand a bit about them. UVU’s data backbone during this time was the Ellucian Banner relational database running on an Oracle 10g database server.Footnote 6 Banner consists of a normalized set of several thousand data tables managing student and administrative data and optimized for Online Transactional Processing (OLTP)—entry and modification of individual data points to maintain records of transactions—locally referred to as “Prod” (a reference to it as the production database). The bulk of institutional data analysis is performed using the ODS, which consists of a denormalized set of fewer but much larger tables optimized for Online Analytical Processing (OLAP)—extracting full datasets for analysis. The data contained in the ODS is either identical to or derived from that in Prod but organized into a different structure of fields and tables. Both databases are extensively customized for UVU. Prod and the ODS also connect to several other data systems, including the advising information system, Ellucian Student Success CRM, and the learning management system.
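The practical difference between these two architectures can be sketched in SQL. The following is a minimal illustration only; the table and column names are invented for the purpose and are not Banner’s or the ODS’s actual schema.

-- Hypothetical sketch; names are invented, not the actual Banner/ODS schema.
-- OLTP (Prod): normalized tables optimized for transactions. Changing one
-- student's major updates a single row in one narrow table.
UPDATE student_major
   SET major_code = 'PHIL'
 WHERE student_id = 12345
   AND term_code  = '201240';

-- OLAP (ODS): denormalized tables optimized for analysis. A full dataset is
-- extracted from one wide table, with attributes repeated on every row.
SELECT student_id, term_code, gender, major_code, gpa
  FROM ods_student
 WHERE term_code = '201240';

The contrast in access patterns, single-row writes against whole-table reads, is what motivates maintaining the same data in two differently structured databases.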

Most government reporting comes from three customized relational tables. One table, referred to locally as STUDENT,Footnote 7 contains information that is constant about individual students across courses within a term such as demographics, contact information, or overall academic characteristics. The second table, COURSE, contains information that is constant across all students in a section for a term. The final table, STUDENT_COURSE, contains information specific to a student within a specific course, such as course grade or (since some courses can award variable credit) credits attempted. Using appropriate joins, STUDENT, COURSE, and STUDENT_COURSE can provide most of the information that the institution would need to understand its students and academic offerings. For example, joining STUDENT and STUDENT_COURSE would allow the institution to determine the distribution of courses taken by major and gender, as sketched below. STUDENT_COURSE would identify the courses taken by each student; STUDENT would provide the major and gender information. Each table is a “live” data table, showing data as it exists currently for all terms (including any transactions that affect data for a term after the term has ended, such as retroactive withdrawals from courses). A set of “freeze tables” contains data snapshots allowing time-series analysis throughout a term, and includes freezes for the official census and end-of-term reporting dates.
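In SQL, the join just described might look like the following sketch. The column names (student_id, term_code, major, gender, course_id) are assumptions for illustration rather than the actual fields of UVU’s tables.

-- Hypothetical sketch; column names are assumptions, not UVU's actual schema.
SELECT s.major,
       s.gender,
       sc.course_id,
       COUNT(*) AS enrollments
  FROM STUDENT s
  JOIN STUDENT_COURSE sc
    ON sc.student_id = s.student_id
   AND sc.term_code  = s.term_code
 GROUP BY s.major, s.gender, sc.course_id;

Each row of the result would give, for one course, the number of enrollments from students of a given major and gender, which is the distribution described above.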

This frozen data from the official reporting dates is used principally for state and federal government reporting. But there is a strong expectation that data reported by the institution for non-government purposes, including that used to make and justify decisions, will be consistent with the government reporting data. For example, between 2010 and 2012, UVU created a web-based data dashboard to provide more specific information on retention and graduation rates than was reported to IPEDS. It nonetheless relied on IPEDS definitions of retention and graduation rates, demographic categories, and reporting cohorts. The cohort definition is especially important, as the IPEDS cohort includes only first-time, full-time, degree-seeking undergraduates entering in fall, a relatively small portion of UVU’s students. Because of the expectation that locally used data will be consistent with government reporting data, the data processes in place at UVU are defined disproportionately by the rules that govern the three customized government reporting tables.
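A sketch can make concrete how such a cohort definition becomes embedded in code. The table and flag names below are invented for illustration; they are not actual IPEDS or Banner fields.

-- Hypothetical sketch of an IPEDS-style cohort count. Each condition quietly
-- excludes students: transfers, part-time students, non-degree students, and
-- anyone entering outside of fall.
SELECT COUNT(*) AS cohort_size
  FROM student_census_freeze
 WHERE term_code       = '201240'  -- fall entry term only
   AND student_level   = 'UG'      -- undergraduates only
   AND first_time_flag = 'Y'       -- excludes transfer students
   AND full_time_flag  = 'Y'       -- excludes part-time students
   AND degree_seeking  = 'Y';      -- excludes non-degree students

Every student filtered out by the WHERE clause is invisible to any retention or graduation rate computed from the result, which is why the cohort definition matters so much for what locally reported data can show.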

My work with UVU’s data systems forms the basis for developing a political theory of information. Theorizing based on this experience raises two challenges of justification. The first is methodological. While certainly the experience with this system is less systematic as a data collection technique than a traditionally empirical study would demand, given that the objective of this book is to establish a theoretical framework for understanding data as a type of social artifact that influences the achievement of social justice, it does not seem unreasonable to interpret that experience using frames and techniques common to emergent methods in social science. The approach used here shares some (but not all) features with constructivist grounded theory (Charmaz 2008). Grounded theory is especially appropriate for the study of information systems on three grounds, each also relevant to the study of information justice:

[F]irst, it was useful for areas where no previous theory existed; second, it incorporated the complexities of the organizational context into the understanding of the phenomena; and third, that [grounded theory method] was uniquely fitted to studying process and change. (Urquhart 2007, p. 341)

There are clear parallels between grounded theory and the work presented here. As I move between experience and theory, I use an abductive approach to building theory from experience in which both methods of inquiry and substantive findings are emergent rather than predetermined, testing the concepts developed previously for consistency with further iterations of inquiry. My approach also works at a distance from existing literature on other problems in information systems and technological ethics in order to avoid artificially constraining the emergence of a broader theory of information justice (Urquhart 2007, pp. 350–351).

However, I must stress that understanding the creation of data using grounded theory was not the intent at the outset of this project; grounded theory is itself emergent in this research. This research does not, for instance, rely on the formal data collection processes of open coding or memo writing. Kelle (2005) and Charmaz (2008) provide exceptional reviews of these specific techniques, defending respectively the two distinct methodological approaches created by the schism between Glaser and Strauss, the founders of grounded theory. But this departure may well be a virtue; at the least it is not as great a weakness as guides to grounded theory would imply. I would suggest that the focus on specific methods in that schism has missed the real strength of grounded theory: its reliance on abductively created theoretical concepts that are iteratively tested and refined. It is this aspect on which I draw in developing a theory of information justice.

This view of grounded theory is, moreover, more consistent with the approach’s Peircean roots, in which, I have previously argued (Johnson 2000), the origin of theory is a creative act and science consists not in the body of knowledge but in subjecting claims abstracted from experience to the examination of further experience. Specific approaches to code development are not necessary for the success of grounded theory in the way that, for example, successfully passing tests of statistical significance is for quantitative research using a hypothetico-deductive methodology. From this perspective the test of good grounded theory is its tendency to iteratively approach theoretical saturation rather than its compliance with any particular research procedure, and specific coding processes are evaluated from a purely instrumental perspective (i.e., whether they help move toward theoretical saturation). The lack of compliance with such procedures in this book might thus argue for its inefficiency but not its inadequacy as a work of grounded theory.

That said, this work is not remotely intended to approach theoretical saturation, and its quite weak implementation of grounded theory methodology is merely an initial iteration of the process and thus valuable as a preliminary approach to the emerging question of information justice. Ultimately, while written in conversation with and as an interpretation of experience, this book is a work of normative social theory, the aims of which include making sense of the empirical and structural contexts of a set of normative questions and showing that understanding the former is essential to answering the latter. My methods are suitable for that context—given the importance of structure and practice in my argument, they are far more suitable than straightforward philosophical theorizing—and I make no further claim to any sort of methodological rigor appropriate to more strictly empirical research.

But while the methods may be sufficient for theorizing my own experience, this only heightens the second challenge: What makes my own experience, rather than claims to universal principles, worth theorizing? Political theory is not oriented toward theories of the particular. This was the heart of Jeffrey Isaac’s (1995) seminal—by which I mean widely read, widely praised, and in practice widely disregarded—article, “The Strange Silence of Political Theory.” Isaac famously criticized political theory for its complete disregard of the collapse of communism as a topic for study—two of 384 articles in the major journals in the field published between 1989 and 1993 addressed the fall of the Iron Curtain. He argued that political theory had become too focused on the problems of “normal science” presented by the Western philosophical canon, which “engenders intellectual conformity and inhibits more engaged, colloquial, relevant kinds of inquiry.” As enabling as the canon can be, it can also be “a cloak…that conceals and obstructs political reality and our ability to experience it and interrogate it.” In consequence, political theory prefers abstract problems:

It seems almost beneath us to examine mundane, practical political problems located in space and time, in particular places with particular histories. These inquiries, we apparently reason, can be safely left to historians and political scientists. How much more edifying, rigorous, hip, virtuous, it is to discuss the constitution of the self, the nature of community, the proper way to read an old book, or the epistemological foundations or lack thereof that are involved in examining mundane political problems. (1995, p. 643)

The problems of the real world, for Isaac’s contemporaries (many of whom are still active two decades later), serve as examples of theory rather than objects for theory to engage and develop itself through. “Political theory,” he writes, “fiddles while the fire of freedom spreads, and perhaps the world burns” (1995, p. 649).

Isaac’s alternative motivates this book. Without rejecting the importance of the abstract or the exegetical, he called for political theory:

…to acknowledge this world as a source of intellectual and practical problems, to engage it in all of its empirical and historical messiness, to demonstrate that our categories help to illuminate this political reality and, dare I say, to improve it…. Real political problems ought not be the pretext for scholarly investigations of other things; they should be what drives our inquiries. (1995, p. 646)

Academic conversations about the disciplinary canon (of authors and topics) cannot be the only form of political theory, in Isaac’s view. Instead, political theory must embrace the kind of pragmatic political theory that was once characteristic of American political life, less concerned with ideological anchors and more concerned with living politics that makes major trends intelligible.

It has taken time—longer than it took me to move out of political theory and into administration because of the strange silence of political theory, not only on 1989 but on so many other political events—but we see much improvement today. As of this writing, the current volume of Political Theory includes essays on climate change and reinsurance (Lehtonen 2017) and on Nietzsche’s place in ethnographic fieldwork (Ignatov 2017) along with ones on Plato (Valiquette Moreau 2017) and Adam Smith (Pitts 2017). But it still has not published an article on information technology. Indeed, for several years I had taken to referring to myself as the world’s leading expert on information justice—by default.Footnote 8 That likely reflects in part Isaac’s criticisms of a political theory that remains focused on the canon even if it is broadening its view somewhat. The focus on canonical thinkers and a standard syllabus of topics about which we can theorize makes new topics difficult to engage, even to find. This is where it becomes important to theorize one’s experience. Especially as so many who are trained in political theory move into other walks of life amidst diminishing opportunities for the standard academic career of the late twentieth century—be they “alt-ac” or “post-ac”—there is an opportunity for many to ask new questions simply by looking around at their work, as I did, and asking the questions Young poses: Must it be this way? How could it be otherwise? Our answers to such questions will enrich both political theory and human practice.

1.5 Plan of Study

The aim of this book is to develop a political theory of information and its associated technologies in which justice serves as the primary consideration in normatively evaluating information practices. Chapter 2 examines two cases in which data presents questions of justice. Many argue as a philosophical principle that data sources should be available as widely as possible, the principle at the heart of the open data movement. But as I argue in that chapter, open data can just as easily lead to injustice: Like “garbage in, garbage out” in programming, “injustice in, injustice out” ought to be a principle of data. In the second case, I consider what big data means for higher education. After discussing some recent examples, I identify two types of ethical challenges in the increasingly common use of predictive analytics at universities: challenges related to the direct consequences of the systems and those rooted in the ideology of scientism that inspires them. Both the open data and big data cases prove quite problematic if the aim is just data.

Chapters 3 and 4 establish the political processes and structures behind information systems. In Chap. 3, I show that data is not an objective representation of reality but rather a constructed translation of observations into legible elements designed to support, broadly speaking, governance (be it by the state or by private actors). Both technical and social structures influence this translation; the technical aspects of database architecture are insufficient by themselves to define this translation regime. Such regimes can contain three characteristic translations: normalizing translations that separate the normal from the deviant, atomizing translations that separate complexity into individual elements, and unifying translations that group diverse characteristics into categories. At the same time, these data systems translate their subjects into “inforgs,” representations that consist of bundled information rather than actually existing subjects. These acts of translation, I conclude, are significant exercises in political power. Chapter 4 extends the analysis of the previous chapter to the role of metrics in political practice, using the U.S. standard graduation rate metric as a case. I argue that information is best understood as a process of communication in which observation is encoded into data through the translation regime and decoded into metrics, which are then institutionalized in political processes. In both processes, political factors are prominent, making metrics a political outcome at the least. I go further, however, showing that metrics play important distributive roles in politics, allocating material and moral goods as well as the conditions of political power. Metrics also exercise political control directly, working much like administrative procedures to select favored outcomes without direct legislative intervention and building the capacity of the state to exercise control over policy areas.

Chapters 5 and 6 examine two frameworks for justice in relation to information. In Chap. 5, I seek to go beyond contemporary theories of information privacy by subjecting the standard information flow models to analysis from the perspective of justice. I examine two perspectives on justice. At the least, one can see privacy as connected to justice instrumentally, that is, privacy is valuable not as a requirement of justice directly but because it is a useful means of achieving justice. This is, I argue, hardly adequate as an entire theory of information justice but it is too easily given short shrift in discussions of privacy (especially by the wealthiest Silicon Valley titans who can protect their interests directly). A more robust approach looks to theories of distributive justice. Theories of distribution that focus on the distributive process can address two significant weaknesses in information flow models of privacy: weak conceptions of informed consent and the inability to address the original acquisition of information. Pattern theories of distributive justice shift the focus from distributing information to distributing privacy rights, and provide significant insight into what it means to have rights to be left alone or forgotten. Each of these theories makes useful contributions to our understanding of privacy. But they are not wholly adequate to the task; for this, one needs to understand justice structurally as well as distributively.

Chapter 6 engages information from the perspective of structural justice using a case study of learning analytics in higher education, drawing heavily on the “Drown the Bunnies” case at Mount St. Mary’s University in 2016. This case suggests the outlines of an increasingly common approach to promoting student “success” in higher education in which early academic and non-cognitive data, often from students at other universities, are used to build a student success prediction algorithm that uses a triage approach to intervention, targeting middling students while writing off those in most need of help as inefficient uses of resources. Most common ethics approaches—privacy, individualism, autonomy, and discrimination—capture at best only part of the issues in play here. Instead I show that a full analysis of the “Drown the Bunnies” model requires understanding the ways that social structures perpetuate oppression and domination. Attention to more just organizational, politico-economic, and intellectual structures would greatly attenuate the likelihood of such cases, adding an important dimension to information justice. I conclude by contrasting the “Drown the Bunnies” model with an implementation of learning analytics at UVU, which did much better in part because of structural preconditions that support justice.

The concluding chapter (Chap. 7) summarizes the arguments of this book, situating them amidst the booming literature on information ethics that has emerged over the (too) long process of writing it. Unfortunately, nothing like a full theory of information justice has emerged from this, but we can now see important considerations for how we might think about information within what we already know about justice. That presents several possibilities for theoretically informed action and action-oriented theory. I also suggest a range of possible principles, policies, practices, and technologies that are worthy of a deeper look and that can engage data scientists, citizens, and governments. Ultimately, however, information justice (like political justice generally) is not likely to be something that can be established solely by easily executable principles. It will necessarily involve an information justice movement.