Diskriminierungen und Verzerrungen durch Künstliche Intelligenz. Entstehung und Wirkung im gesellschaftlichen Kontext

Chapter in: Demokratietheorie im Zeitalter der Frühdigitalisierung

Abstract

The development and application of artificial intelligence (AI) is accompanied by social and ethical challenges, particularly in the form of biased or discriminatory AI outcomes. AI technologies often learn societal prejudices and patterns of behavior toward minorities and reproduce them accordingly. This chapter examines the underlying mechanisms at work. Drawing on the existing literature, it derives a theoretical explanatory model that depicts the emergence, operation, and impact of bias and discrimination in AI technologies.


Author information

Correspondence to Paul F. Langer.

Copyright information

© 2020 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Cite this chapter

Langer, P.F., Weyerer, J.C. (2020). Diskriminierungen und Verzerrungen durch Künstliche Intelligenz. Entstehung und Wirkung im gesellschaftlichen Kontext. In: Oswald, M., Borucki, I. (eds) Demokratietheorie im Zeitalter der Frühdigitalisierung. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-30997-8_11

  • DOI: https://doi.org/10.1007/978-3-658-30997-8_11

  • Publisher Name: Springer VS, Wiesbaden

  • Print ISBN: 978-3-658-30996-1

  • Online ISBN: 978-3-658-30997-8

  • eBook Packages: Social Science and Law (German Language)
