Models, Algorithms, and the Subjects of Transparency

  • Conference paper
  • In: Philosophy and Theory of Artificial Intelligence 2021 (PTAI 2021)
  • Part of the book series: Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE, volume 63)


Abstract

Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms ‘transparency’ and ‘opacity’ are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might be understood to refer either to properties of AI systems or to the epistemic situation of human agents with respect to these systems. While these diagnoses are independently discussed in the literature, juxtaposing them and exploring their possible interrelations will help to clarify the relevant distinctions between conceptions of opacity and their empirical bearing. In pursuit of this aim, two pertinent conditions affecting computer models in general and contemporary AI in particular are outlined and discussed: opacity as a problem of computational tractability and opacity as a problem of the universality of the computational method.


Acknowledgements

This work is dedicated to the memory of my dear and trusted PW colleague Helena Bulińska-Stangrecka, who tragically and prematurely died when I was finalising my manuscript. In terms of content, this paper owes a lot to my collaboration with Alessandro Facchini and Alberto Termine, who tried hard to rid me of my philosophers’ naiveté concerning how AI works. The same goes for Cameron Buckner, Holger Lyre, Jan Passoth and Carlos Zednik, who worked towards that goal earlier. Any remaining naiveté will not be the fault of any of those helpful minds.

Author information

Corresponding author: Hajo Greif.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Greif, H. (2022). Models, Algorithms, and the Subjects of Transparency. In V. C. Müller (Ed.), Philosophy and Theory of Artificial Intelligence 2021 (PTAI 2021). Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 63. Springer, Cham. https://doi.org/10.1007/978-3-031-09153-7_3
