Back in the 1950s, Paul Meehl blew people's minds with his groundbreaking book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. He demonstrated that simple statistical algorithms often outperformed trained clinicians at prediction. Since then, many other studies have shown that algorithms are more accurate than we are in a wide range of situations. And yet there is a phenomenon called "algorithm aversion," where people are reluctant to use algorithms even when the algorithms are demonstrably more accurate than human judgment.

Researchers have described algorithm aversion in two ways: as a general aversion to algorithms, or as a reluctance to use algorithms after seeing them make mistakes (as Dietvorst et al. (2015) showed). However, I argue that algorithm aversion will disappear under either interpretation. I support this argument with an analysis of the algorithm aversion literature, an X (formerly Twitter) sentiment analysis,Footnote 1 and a Google Trends analysis.Footnote 2

Regarding the first interpretation, a general aversion to algorithms, the words we use to describe algorithms play a significant role in shaping people's perceptions and feelings toward them. Today, terms like "algorithm," "machine," or "robot" are being replaced by terms that carry human-like attributes, such as "learning" and "intelligence," and sound far more appealing. The concept of an algorithm as a rigid set of instructions is changing, and that is a good thing! Using more human-like language can reduce algorithm aversion, making people more willing to accept algorithms that behave like us.

A comprehensive review article on algorithm aversion (Mahmud et al. 2022) reveals an interesting pattern in the terminology used to refer to algorithms. The seven most cited papers since Dietvorst et al.'s (2015) paper used few or no human-like attributes to refer to algorithms.Footnote 3 Four papers used the term "algorithm," one used "chatbot," one used "recommender system," and one even used the word "machine." I went further and analyzed X (formerly Twitter) posts over a one-year period to get a better sense of people's sentiments toward these terms. Here is what I found: "algorithm" had a net sentiment (the percentage of positive minus negative conversations) of 15%, "chatbot" had 6%, "recommender system" had 3%, and "machine" had -7%. However, for terminology that humanizes algorithms, such as "artificial intelligence" or "machine learning," the net sentiment rose to 42% and 54%, respectively. The lesson? The language used in the algorithm aversion literature is not helping people feel more comfortable with modern algorithms. Simply referring to them in more human-like terms could improve their acceptance.
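To make the metric concrete: net sentiment, as used above, is simply the share of positive posts minus the share of negative posts. Here is a minimal sketch of that calculation; the data below is hypothetical, not the actual X dataset.

```python
from collections import Counter

def net_sentiment(labels):
    """Net sentiment = percentage of positive posts minus percentage of negative posts.

    `labels` is an iterable of sentiment labels, one per post:
    "positive", "negative", or "neutral".
    """
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return 100.0 * (counts["positive"] - counts["negative"]) / total

# Hypothetical example: 50 positive, 35 negative, 15 neutral posts
posts = ["positive"] * 50 + ["negative"] * 35 + ["neutral"] * 15
print(net_sentiment(posts))  # 15.0, the same scale as the 15% reported for "algorithm"
```

Neutral posts count toward the total but cancel out of the numerator, which is why a term can have a low net sentiment even if outright negative posts are rare.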

As for the second interpretation of algorithm aversion, the tendency to avoid algorithms after their mistakes become salient, recent technology has made those mistakes far less visible. Today, algorithms significantly outperform humans in many prediction tasks, so their inaccuracies are acknowledged less often. Take estimated time of arrival (ETA) predictions in transportation: apps like Waze and Google Maps use sophisticated algorithms that incorporate real-time traffic data and other factors, making their ETA predictions significantly more accurate than those made by humans, who often underestimate or overestimate travel times by a much larger margin.

Although task-specific high-performing algorithms have been entering society gradually, the release of ChatGPTFootnote 4 is expected to accelerate the adoption rate of algorithms. ChatGPT gained considerable attention within just a few months of its launch. For instance, in April 2024, Google Trends showed that the term "ChatGPT" was searched 28 times more often than the term "algorithm." While ChatGPT in its current free version has limitations, such as its knowledge cut-off date of January 2022, it is a remarkable tool that can enhance people's work and perform better than humans in certain contexts, such as language translation and text generation. In fact, ChatGPT is rapidly becoming more widespread and can be considered an early version of an artificial general intelligence system. With these technological advances, algorithms will become more accurate and make fewer mistakes; consequently, people will trust them more. Therefore, I believe we can expect less algorithm aversion. It is an exciting time for technology, and we can look forward to a future where algorithms are accepted for the remarkable tools they are.

Should we worry about algorithm aversion? Probably not. I believe that using more human-like terminology to refer to algorithms will positively change people's perceptions and use of them. Furthermore, recent technological advancements have demonstrated that algorithms can significantly outperform humans in many areas while making few errors. I expect the success of artificial intelligence tools like ChatGPT to push people to adopt algorithms even more. As algorithms improve and become more human-like and ubiquitous, algorithm aversion will naturally disappear.