Abstract
The current discourse on fairness, accountability, and transparency in machine learning is driven by two competing narratives: sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better algorithms; and sociotechnical skepticism, which opposes many instances of automation on principle. Both perspectives, we argue, are reductive and unhelpful. In this chapter, we review a large, diverse body of literature in an attempt to move beyond this restrictive duality, toward a pragmatic synthesis that emphasizes the central role of context and agency in evaluating new and emerging technologies. We show how epistemological and ethical considerations are inextricably intertwined in contemporary debates on algorithmic bias and explainability. We trace the dialectical interplay between dogmatic and skeptical narratives across disciplines, merging insights from social theory and philosophy. We review a number of theories of explanation, ultimately endorsing a sociotechnical pragmatism that combines elements of Floridi’s levelism and Mayo’s reliabilism to place a special emphasis on notions of agency and trust. We conclude that this hybrid does more to promote fairness, accountability, and transparency in machine learning than dogmatic or skeptical alternatives.
Notes
1. In 2018, the conference was called FAT; in 2019, it was rebranded FAT* (pronounced “FAT star”). As of 2020, it goes by the current name, FAccT, which we will use henceforth.
2. As an exegetical aside, we observe that there is some dispute over the extent to which Marx was in fact a technological determinist, at least in the uncompromising sense in which modern authors occasionally employ the label. See Bimber (1990).
3. The concept now referred to as the Pareto frontier (also Pareto efficiency) is attributed to the Italian economist and sociologist Vilfredo Pareto and his works Course in Political Economy and Manual of Political Economy. For a more contemporary introduction to the concept, we recommend Lockwood (2017).
4. Hempel (1965) proposes a new class of explanations, called inductive-statistical (IS), to accommodate such cases, but the IS model struggles to account for low-probability events. The alternatives analyzed below are better equipped to handle statistical explanations.
5. This is a non-trivial assumption. According to the Duhem-Quine thesis, Popper’s falsificationism fails precisely because it is impossible to design a test that isolates the effects of L. We can always salvage any theory, no matter how anomalous the observations E, provided we make sufficient amendments to the conjunct S, e.g., by adding auxiliary hypotheses. See Duhem (1954) and Quine (1951).
6. Technically, this example should be formalized in first-order logic to quantify predicates over sets. We stick with propositional logic here for consistency with previous sections and ease of presentation. The example is sufficiently simple and familiar that we doubt the ambiguity will lead to any confusion.
References
Achinstein, P. (1983). The nature of explanation. Oxford University Press.
Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired.
Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. (2018). Learning certifiably optimal rule lists for categorical data. Journal of Machine Learning Research, 18(234), 1–78.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.
Aristotle. (1984). In J. Barnes (Ed.), The complete works of Aristotle. Princeton University Press.
Barocas, S., & Selbst, A. (2016). Big data’s disparate impact. California Law Review, 104(1), 671–729.
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. fairmlbook.org
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Berlin, I. (1997). The pursuit of an ideal. In H. Hardy & R. Hausheer (Eds.), The proper study of mankind: An anthology of essays. Pimlico.
Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. The MIT Press.
Bimber, B. (1990). Karl Marx and the three faces of technological determinism. Social Studies of Science, 20(2), 333–351. https://doi.org/10.1177/030631290020002006
Bloor, D. (1976). Knowledge and social imagery. University of Chicago Press.
Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems.
Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3), 199–231. https://doi.org/10.1214/ss/1009213726
Briggs, R. (2012). Interventionist counterfactuals. Philosophical Studies, 160(1), 139–166. https://doi.org/10.1007/s11098-012-9908-5
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. The MIT Press.
Bromberger, S. (1966). Why questions. In R. Colodny (Ed.), Mind and cosmos: Essays in contemporary science and philosophy. University of Pittsburgh Press.
Browning, M., & Arrigo, B. (2021). Stop and risk: Policing, data, and the digital age of discrimination. American Journal of Criminal Justice, 46(2), 298–316. https://doi.org/10.1007/s12103-020-09557-x
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st conference on fairness, accountability and transparency (pp. 77–91). PMLR.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Carnap, R. (1950). Logical foundations of probability. University of Chicago Press.
Carnap, R. (1952). The continuum of inductive methods. University of Chicago Press.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Crawford, K. (2021). The atlas of AI. Yale University Press.
Dafoe, A. (2015). On technological determinism: A typology, scope conditions, and a mechanism. Science, Technology, & Human Values, 40(6), 1047–1076. https://doi.org/10.1177/0162243915579283
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 1, 92–112. https://doi.org/10.1515/popets-2015-0007
Dewey, J. (1999). In L. Hickman & T. Alexander (Eds.), The essential Dewey. Indiana University Press.
Diamandis, P., & Kotler, S. (2013). Abundance: The future is better than you think. Free Press.
Doshi-Velez, F., & Kortz, M. (2017). Accountability of AI under the law: The role of explanation. Berkman Klein Center for Internet & Society.
Dowe, P. (2000). Physical causation. Cambridge University Press.
Du Sautoy, M. (2019). The creativity code: Art and innovation in the age of AI. Harvard University Press.
Duhem, P. (1954). In P. W. Wiener (Ed.), The aim and structure of physical theory. Princeton University Press.
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a “right to explanation” is probably not the remedy you are looking for. Duke Law and Technology Review, 16(1), 18–84. https://doi.org/10.2139/ssrn.2972855
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Fine, K. (2012). Counterfactuals without possible worlds. The Journal of Philosophy, 109(3), 221–246.
Fisher, R. A. (1935). The design of experiments. Oliver & Boyd.
Floridi, L. (2004). On the logical unsolvability of the Gettier problem. Synthese, 142(1), 61–79. https://doi.org/10.1023/B:SYNT.0000047709.27594.c4
Floridi, L. (2006). The logic of being informed. Logique et Analyse, 49(196), 433–460.
Floridi, L. (2008a). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.
Floridi, L. (2008b). Understanding epistemic relevance. Erkenntnis, 69(1), 69–92.
Floridi, L. (2010). Information, possible worlds and the cooptation of scepticism. Synthese, 175, 63–88. https://doi.org/10.1007/s11229-010-9736-0
Floridi, L. (2011a). A defence of constructionism: Philosophy as conceptual engineering. Metaphilosophy, 42(3), 282–304. https://doi.org/10.1111/j.1467-9973.2011.01693.x
Floridi, L. (2011b). Semantic information and the correctness theory of truth. Erkenntnis, 74(2), 147–175. https://doi.org/10.1007/s10670-010-9249-8
Floridi, L. (2012). Semantic information and the network theory of account. Synthese, 184(3), 431–454.
Floridi, L. (2013). The ethics of information. Oxford University Press.
Floridi, L. (2014). Open data, data protection, and group privacy. Philosophy & Technology, 27(1), 1–3. https://doi.org/10.1007/s13347-014-0157-8
Floridi, L. (2017). Infraethics – On the conditions of possibility of morality. Philosophy & Technology, 30(4), 391–394. https://doi.org/10.1007/s13347-017-0291-1
Floridi, L. (2019). The logic of information. Oxford University Press.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People — An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness.
Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23(6), 121–123. https://doi.org/10.2307/3326922
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). The MIT Press.
Goldman, A. (1979). What is justified belief? In G. S. Pappas (Ed.), Justification and knowledge (pp. 1–25). Reidel.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 76–99. https://doi.org/10.1609/aimag.v38i3.2741
Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California Law Review, 94(4), 945–967. https://doi.org/10.2307/20439056
Gross, N., Reed, I. A., & Winship, C. (Eds.). (2022). The new pragmatist sociology. Columbia University Press.
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46, 205–211. https://doi.org/10.1136/medethics-2019-105586
Haavelmo, T. (1944). The probability approach in econometrics. Econometrica, 12, 3–115. https://doi.org/10.2307/1906935
Habermas, J. (1981). Theory of communicative action (T. McCarthy, Trans.). Polity Press.
Hacking, I. (1983). Representing and intervening. Cambridge University Press.
Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020). Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 501–512). https://doi.org/10.1145/3351095.3372826
Hansson, S. O. (2017). Science and pseudo-science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Summer 201). Metaphysics Research Lab, Stanford University.
Hao, K. (2020, August 20). The UK exam debacle reminds us that algorithms can’t fix broken systems. MIT Technology Review.
Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.
Hayek, F. A. (1973). Law, legislation and liberty: A new statement of the liberal principles of justice and political economy. Routledge.
Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science. Free Press.
Hempel, C., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135–175.
Hey, T., Tansley, S., & Tolle, K. (Eds.). (2009). The fourth paradigm: Data-intensive scientific discovery. Microsoft Research.
HLEGAI. (2019). Ethics guidelines for trustworthy AI.
Hobsbawm, E. J. (1952). The machine breakers. Past & Present, 1(1), 57–70. https://doi.org/10.1093/past/1.1.57
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912
Horkheimer, M., & Adorno, T. (1947). Dialectic of enlightenment (G. S. Noerr, Ed.; E. Jephcott, Trans.). Stanford University Press.
Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2), 1–16. https://doi.org/10.1177/2053951716674238
James, W. (1975). Pragmatism: A new name for some old ways of thinking. Harvard University Press.
Jones, S. E. (2006). Against technology: From the luddites to neo-Luddism. Routledge.
Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
Kim, M., Reingold, O., & Rothblum, G. (2018). Fairness through computationally-bounded awareness. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems 31 (pp. 4842–4852). Curran Associates, Inc..
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. Salmon (Eds.), Scientific explanation (pp. 410–505). University of Minnesota Press.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017a). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237–293. https://doi.org/10.1093/qje/qjx032
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017b). Inherent trade-offs in the fair determination of risk scores. In C. H. Papadimitriou (Ed.), 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) (pp. 43:1–43:23). https://doi.org/10.4230/LIPIcs.ITCS.2017.43
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 113–174. https://doi.org/10.1093/jla/laz001
Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems (pp. 4066–4076). Curran Associates, Inc.
Latour, B., & Woolgar, S. (1979). Laboratory life: The construction of scientific facts. Princeton University Press.
Lee, M. S. A., Floridi, L., & Denev, A. (2021). Innovating with confidence: Embedding AI governance and fairness in a financial services risk management framework. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (pp. 353–371). Springer. https://doi.org/10.1007/978-3-030-81907-1_20
Legg, C., & Hookway, C. (2019). Pragmatism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. (Spring 201). Metaphysics Research Lab, Stanford University.
Lessig, L. (2006). Code (2nd ed.). Basic Books.
Lewis, D. (1973a). Causation. Journal of Philosophy, 70, 556–567.
Lewis, D. (1973b). Counterfactuals. Blackwell.
Lewis, D. (1979). Counterfactual dependence and time’s arrow. Noûs, 13(4), 455–476. https://doi.org/10.2307/2215339
Lewis, D. (1986). Philosophical papers, Volume II. Oxford University Press.
Lewis, D. (2000). Causation as influence. Journal of Philosophy, 97, 182–197.
Lockwood, B. (2017). Pareto efficiency. In The new Palgrave dictionary of economics (pp. 1–5). Palgrave Macmillan. https://doi.org/10.1057/978-1-349-95121-5_1823-2
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360
Marx, K. (1990). Capital (B. Fowkes, Trans.). Penguin.
Marx, K. (1992). Capital (D. Fernbach, Trans.). Penguin.
Mayer-Schönberger, V., & Ramge, T. (2018). Reinventing capitalism in the age of big data. John Murray.
Mayo, D. G. (1996). Error and the growth of experimental knowledge. University of Chicago Press.
Mayo, D. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press.
McQuillan, D. (2018). Data science as Machinic Neoplatonism. Philosophy & Technology, 31(2), 253–272. https://doi.org/10.1007/s13347-017-0273-3
Mendes, L. S., & Mattiuzzo, M. (2022). Algorithms and discrimination: The case of credit scoring in Brazil. In M. Albers & I. W. Sarlet (Eds.), Personality and data protection rights on the internet: Brazilian and German approaches (pp. 407–443). Springer. https://doi.org/10.1007/978-3-030-90331-2_17
Menzies, P., & Beebee, H. (2020). Counterfactual theories of causation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Spring 202). Metaphysics Research Lab, Stanford University.
Merton, R. (1973). The normative structure of science. In N. Storer (Ed.), The sociology of science: Theoretical and empirical investigations (pp. 267–278). University of Chicago Press.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
Mittelstadt, B. (2017). From individual to group privacy in big data analytics. Philosophy & Technology, 30(4), 475–494. https://doi.org/10.1007/s13347-017-0253-7
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3, 205395171667967. https://doi.org/10.1177/2053951716679679
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of FAT* ’19: Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3287560.3287574
Mökander, J. (2021). On the limits of design: What are the conceptual constraints on designing artificial intelligence for social good? In J. Cowls & J. Morley (Eds.), The 2020 yearbook of the digital ethics lab (pp. 39–52). Springer. https://doi.org/10.1007/978-3-030-80083-3_5
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2022). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2), 241–268. https://doi.org/10.1007/s11023-021-09577-4
Mökander, J., Juneja, P., Watson, D. S., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. the EU Artificial Intelligence Act: What can they learn from each other? Minds and Machines, 32(4), 751–758. https://doi.org/10.1007/s11023-022-09612-y
Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4–5), 446–463. https://doi.org/10.1177/1367549415577387
Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071–22080. https://doi.org/10.1073/pnas.1900654116
Narayanan, A. (2018). Tutorial: 21 fairness definitions and their politics. Retrieved April 8, 2020, from https://www.youtube.com/watch?v=jIXIuYdnyyk
Nasrabadi, N. (2014). Hyperspectral target detection: An overview of current and future challenges. IEEE Signal Processing Magazine, 31(1), 34–44. https://doi.org/10.1109/MSP.2013.2278992
Newman, N., Fletcher, R., Kalogeropoulos, A., & Nielsen, R. (2019). Reuters Institute Digital News Report 2019 (Vol. 2019). Reuters Institute for the Study of Journalism.
Noble, S. U. (2018). Algorithms of oppression. New York University Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
OECD. (2019). Recommendation of the council on artificial intelligence.
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459. https://doi.org/10.1007/s11023-019-09502-w
Pasquale, F. (2015). The Black Box Society. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
Peirce, C. S. (1999). The essential Peirce (The Peirce Edition Project ed.). Indiana University Press.
Plato. (1997). In J. M. Cooper & D. S. Hutchison (Eds.), Plato: Complete works. Hackett.
Popper, K. (1959). The logic of scientific discovery. Routledge.
Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. Routledge.
Popper, K. (1972). Objective knowledge: An evolutionary approach. Clarendon Press.
Prasad, M. (2021). Pragmatism as problem solving. Socius, 7, 2378023121993991. https://doi.org/10.1177/2378023121993991
Quine, W. v. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60(1), 20–43.
Romano, Y., Barber, R. F., Sabatti, C., & Candès, E. J. (2019). With malice towards none: Assessing uncertainty via equalized coverage. Harvard Data Science Review.
Rorty, R. (2021). In E. Mendieta (Ed.), Pragmatism as anti-authoritarianism. Harvard University Press.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Sale, K. (1996). Rebels against the future. Basic Books.
Salmon, W. (1971). Statistical explanation. In W. Salmon (Ed.), Statistical explanation and statistical relevance (pp. 29–87). University of Pittsburgh Press.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton University Press.
Sánchez-Monedero, J., Dencik, L., & Edwards, L. (2020). What does it mean to “solve” the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 458–468). https://doi.org/10.1145/3351095.3372849
Schapire, R. E., & Freund, Y. (2012). Boosting: Foundations and algorithms. MIT Press.
Schroeder, R. (2007). Rethinking science, technology, and social change. Stanford University Press.
Scriven, M. (1962). Explanations, predictions, and Laws. In H. Feigl & G. Maxwell (Eds.), Scientific explanation, space, and time (pp. 170–230). University of Minnesota Press.
Selbst, A., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1007/s13347-017-0263-5
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
Sharifi-Malvajerdi, S., Kearns, M., & Roth, A. (2019). Average individual fairness: Algorithms, generalization and experiments. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in neural information processing systems 32 (pp. 8242–8251). Curran Associates, Inc.
Taddeo, M. (2010a). An information-based solution for the puzzle of testimony and trust. Social Epistemology, 24(4), 285–299. https://doi.org/10.1080/02691728.2010.521863
Taddeo, M. (2010b). Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds and Machines, 20(2), 243–257. https://doi.org/10.1007/s11023-010-9201-3
Taddeo, M. (2019). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29(2), 187–191. https://doi.org/10.1007/s11023-019-09504-8
Taddeo, M., McCutcheon, T., & Floridi, L. (2019). Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1(12), 557–560. https://doi.org/10.1038/s42256-019-0109-1
Talbott, W. (2016). Bayesian epistemology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. (Winter 201). Metaphysics Research Lab, Stanford University.
Tarski, A. (1983). The concept of truth in formalized languages. In Logic, semantics, metamathematics (2nd ed., pp. 152–278). Hackett.
Thornton, S. (2019). Karl Popper. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 201). Metaphysics Research Lab, Stanford University.
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society, 37, 215–230. https://doi.org/10.1007/s00146-021-01154-8
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other (2nd ed.). Basic Books.
Upadhyay, A., & Khandelwal, K. (2018). Applying artificial intelligence: Implications for recruitment. Strategic HR Review, 17(5), 255–258. https://doi.org/10.1108/SHR-07-2018-0051
Ustun, B., & Rudin, C. (2019). Learning optimized risk scores. Journal of Machine Learning Research, 20(150), 1–75.
van Fraassen, B. C. (1980). The scientific image. Oxford University Press.
Véliz, C. (2020). Privacy is power: Why and how you should take back control of your data. Penguin.
Wachter, S., & Mittelstadt, B. D. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of Big Data and AI. Columbia Business Law Review, 2, 443–493.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99.
Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law and Technology, 31(2), 841–887.
Watson, D. S., & Floridi, L. (2021). The explanation game: A formal framework for interpretable machine learning. Synthese, 198(10), 9211–9242. https://doi.org/10.1007/s11229-020-02629-9
Watson, D. (2022a). Rational Shapley values. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1083–1094). https://doi.org/10.1145/3531146.3533170
Watson, D. S. (2022b). Conceptual challenges for interpretable machine learning. Synthese, 200(2), 65. https://doi.org/10.1007/s11229-022-03485-5
Watson, D. S., Gultchin, L., Taly, A., & Floridi, L. (2022). Local explanations via necessity and sufficiency: Unifying theory and practice. Minds and Machines, 32(1), 185–218. https://doi.org/10.1007/s11023-022-09598-7
Weber, M. (2002). The Protestant Ethic and the Spirit of Capitalism (T. Parsons, Trans.). Routledge.
Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195–200). https://doi.org/10.1145/3306618.3314289
Williams, M. (2016). Internalism, reliabilism, and deontology. In B. McLaughlin & H. Kornblith (Eds.), Goldman and his critics (pp. 1–21). Wiley.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Woodward, J. (2008). Cause and explanation in psychiatry: An interventionist perspective. In K. Kendler & J. Parnas (Eds.), Philosophical issues in psychiatry (pp. 287–318). Johns Hopkins University Press.
Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25(3), 287–318. https://doi.org/10.1007/s10539-010-9200-z
Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347. https://doi.org/10.1111/phpr.12095
Woodward, J. (2019). Scientific explanation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. (Winter 201). Metaphysics Research Lab, Stanford University.
Završnik, A. (2019). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18, 623–642. https://doi.org/10.1177/1477370819876762
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Watson, D., Mökander, J. (2023). In Defense of Sociotechnical Pragmatism. In: Mazzi, F. (eds) The 2022 Yearbook of the Digital Governance Research Group. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-28678-0_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-28677-3
Online ISBN: 978-3-031-28678-0