
1 Introduction

The advent of the Internet has fundamentally changed how information is spread and perceived within societies. Not only are the barriers to entry much lower than in classical media, but the speed at which information can be shared worldwide is unrivaled.

The “post-factual” [1], also called “post-truth” [2], society finds itself amid an “information war”, which poses immense challenges for the media and the public sector within democracies [3, 4]. During the early stage of this revolution of interpersonal and mass communication via digital technologies, the paradigm change was perceived as a huge opportunity to reduce inequality by providing increased access to the public discourse and hence giving a voice to virtually everybody, which in turn would ultimately support democracy within our societies [5, 6], an assessment that still holds today. The downside, however, is that the easier access opens the door to disinformation from various (dangerous) sources [7]. In an age of innovation through knowledge for a sustainable, cohesive society [8], misinformation and fake news have a direct negative impact on public value creation through falsified or misleading information [9]. In this context, media [10] and public administrations [11] have a shared responsibility as gatekeepers to ensure the accuracy of public information. Due to this shared responsibility, we decided to focus our study on both parties from a combined point of view.

Following the argument of shared responsibility, journalists and the public sector are in a difficult situation. Trying to resolve misinformation and inform the public often results in the original misinformation being distributed even more intensively. This circumstance is partly due to the backfire effect [12]. This effect relates to potential cognitive biases within individuals: when information contradicts their deepest beliefs or world views, they perceive these beliefs as being “violated”. Consequently, the affected individuals will defend their beliefs even more vehemently, rendering the original intention of the correction entirely counterproductive. Also, studies have demonstrated that negative news is often more likely to be picked up and spread among the general public than positive news [13, 14].

Thus, the media and the public sector find themselves in a problematic field of tension between protecting free expression and the dissemination of information on the one hand and preventing the distortion of democratic elections through massive disinformation campaigns on the other. Moreover, within this field of tension, they must deal with distrust and attacks often driven by prejudice, fear, and hate [15, 16]. This problematic situation is further aggravated by social bots, which can massively amplify global disinformation campaigns [17,18,19]. In addition, continuous development in artificial intelligence (AI), especially deep fakes, makes it increasingly challenging, even for experienced communication experts, to distinguish information from disinformation [20, 21].

But AI can also be a potent solution for identifying and fighting fake news. However, many barriers impede the implementation and use of such tools by the leading media and the public sector to detect disinformation [20, 21]. To gain a deeper understanding of these barriers, we conducted a quantitative survey with more than 100 experts from the leading media and the public sector, with a particular focus on the use of AI to fight disinformation.

The remainder of this paper is structured as follows: Sect. 2 provides a short discourse about state-of-the-art solutions using AI to combat fake news. In Sect. 3, we present the underlying methodology of this study and the collected data, including an overall profile of the participants. Section 4 then continues with the analysis of the results of the survey. After that, Sect. 5 discusses key learnings and practical implications. Section 6 then closes the paper with the conclusions and outlook for future work.

2 Related Work

A growing body of literature exists concerning technical solutions for using AI to combat fake news and misinformation. In this section, we provide a short discourse along the lines of the work of Shahid et al. [22] to inform the reader about state-of-the-art solutions currently available in tools for the media and the public sector. Based on their analysis, current research streams can be separated into the following categories (ibid.):

  • Automatic detection: the idea behind this approach is to extract features of fake news within deep learning models to be used for the automated classification of news items. Examples of this approach include the research of Ozbay and Alatas [22], who developed a solution to detect fake news in social media via a transfer process of unstructured data toward structured data, combined with a multi-algorithm analysis.

  • Language-specific detection: this approach targets the development of a language-specific model beyond the limitation of English as the primary language. Studies that have used this approach, including the work of Faustini and Covões [23], build upon textual features and are not bound to a specific language, significantly increasing the overall usability, especially in an international context.

  • Dataset-based detection: the main goal is to develop highly specialized datasets to test and challenge existing and newly developed algorithms. Examples include Neves et al. [24], who developed a method of removing fingerprints of algorithms (i.e., Generative Adversarial Networks) in face manipulation of images to challenge existing detection tools.

  • Early detection: focuses on detecting fake news to limit its propagation at the earliest stage possible. Studies following this direction include Zhou et al. [25], who targeted the prevention of spreading fake news on social media via a supervised classification approach, building on social sciences and psychology theories.

  • Stance detection: the idea behind this approach is not only to detect fake news but to deepen the underlying understanding of it. This is achieved by also including the stance of the reporting news outlets toward the reported event or incident. Research following this idea includes the work of Xu et al. [26], who integrated the reputational factors of news distributors, such as registration behavior, timing, ranking of domains, and their popularity.

  • Feature-based detection: while this approach is similar to the automatic detection described before, it goes beyond classical textual features and includes topological and semantical features to improve the overall classification. Studies that have followed this idea include de Oliveira et al. [27], who incorporated stylistic information of social media posts, i.e., tweets, to improve the accuracy of fake news detection.

  • Ensemble learning: the concept behind this approach is to use not one but a combination of multiple algorithms to identify and classify fake news (a minimal illustrative sketch of this idea follows the list). Examples of such combined approaches include Elhadad et al. [28], who addressed the issue of misleading information in the context of the COVID-19 pandemic, combining ten machine-learning algorithms with several feature extraction approaches.
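
To make the automatic detection and ensemble learning categories more concrete, the following minimal sketch illustrates the general idea: several simple text classifiers are trained on TF-IDF features of news items and combined by majority vote. It is not a reimplementation of any of the cited systems; the corpus file and its columns are hypothetical placeholders.

```python
# Minimal illustrative sketch (not a reimplementation of any cited system):
# an ensemble of simple text classifiers over TF-IDF features, in the spirit
# of the "automatic detection" and "ensemble learning" categories above.
# The CSV file and its columns ("text", "label") are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

df = pd.read_csv("news_items.csv")  # hypothetical corpus: one news item per row
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# Majority vote over three heterogeneous baseline classifiers
ensemble = VotingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ],
    voting="hard",
)

# TF-IDF features over unigrams and bigrams feed the ensemble
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), ensemble)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Real-world systems in the categories above go considerably further, e.g., by using deep neural models, multilingual features, stance signals, and propagation patterns.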

3 Methodology and Data

In order to derive recommendations on how AI tools can be used for disinformation detection by the leading media and the public sector, it is crucial to consider several factors, motivations, and potential barriers. These include challenges with implementation and the working environment, technological maturity, data protection, uncertainty about AI, and advancing technological progress in general. To address this challenging domain rigorously, a questionnaire aimed at communication experts was created during the applied research project defalsif-AI (Detection of Disinformation via Artificial Intelligence). The questionnaire was based on the literature around dimensions of fake news, misinformation, and information disorder [5, 29, 30], with a particular focus on professionals and their perspectives on i) the types of media through which they are confronted with disinformation, ii) individual detection approaches, iii) types of fake news encountered, iv) attitudes toward AI and technological progress, as well as v) experience with AI tools currently used in their respective working environments.

In the first step, this questionnaire was circulated among the consortium partners; in the second step, a snowball sampling approach was applied to reach experts in other related areas. This approach helped us to significantly increase the overall coverage of experts who deal with fake news within their professional environment.

These experts included journalists, press officers, experts from different ministries, and scientists dealing with the topic of fighting fake news and disinformation. Overall, we collected n = 106 completed surveys. Since the population in these sectors is unknown, it is seldom possible for expert surveys to be representative. Nevertheless, they allow well-founded assessments of trends among professionals.

In addition to demographic data and questions about media genres and usage behavior, the questionnaire focused on the frequency and risk of dealing with fake news and misinformation in everyday work, intuitive and technological detection, research activities, and, last but not least, the desire for or possible rejection of AI-based software. The survey was conducted from May to June 2021.

49% of the respondents work in the media domain (journalists, press officers, PR professionals, etc.), while 31% work in communication and security, including fields such as the police or the Ministry of the Interior. 7% of the respondents work in the field of diplomacy or the Ministry of Foreign Affairs, and 13% in the field of research.

Concerning the age distribution of our participants, about 85% of them were at mid-career or late-career levels.

Regarding professional experience, about 62% of the participants had more than ten years of it. Hence, a high level of insight and proficiency is represented among the study participants.

4 Analysis and Results

4.1 Fake News and Misinformation Within Working Environments

Only 23.6% of the respondents see little or no threat to democracy in disinformation, whereas 76.4% consider fake news to be a high or very high risk for democracy. In the context of AI-based media forensics, it is necessary to understand the media through which experts most often come into contact with disinformation (see Fig. 1).

Fig. 1. Types of Mediation; agreement high and very high (in %, n = 106; 6-point Likert scale; multiple answers possible).

Most subjects are confronted with disinformation often or very often via text, followed by manipulated photos. Text and photos are also the favored means of communication in traditional media, although video and audio are becoming increasingly popular, primarily through social media. In this context, this also raises the question of whether even experts can recognize manipulation, given the rapid technological development of video and audio deep fakes. Studies also indicate that time of day, emotional state, fatigue, or age can significantly affect the ability to detect deepfakes [31, 32]. Concerning the odds of sharing misinformation such as deep fakes between individuals with a high interest in politics and those without, the latter seem more prone to forwarding such misinformation [33]. In addition, personality traits such as optimism, especially on social media, can also play a role in the classification and spreading of such content [34]. Ahmed points out that there is still limited knowledge about how social media users deal with this newer form of disinformation [33]. Our survey reveals a similar picture: when asked about their strategies in case of suspected fake news (see Fig. 2), the experts reported the following.

Fig. 2. Intuitive detection. Question: Based on which indications do you intuitively suspect whether it could be Fake News? (agreement high and very high, in %; n = 106; 6-point Likert scale; multiple answers possible).

These strategies include researching whether and how other media report on it (78.5%), taking a critical look at the imprint of the medium (64.5%), checking the background of the author (54.2%), researching how coherent contextual information such as geographic data, weather data, etc. is (39.3%), using fact-checking services like Mimikama, Correctiv, Hoaxsearch, etc. (28.9%), performing a reverse image search on the Internet to check the actual origin of an image, e.g., via Google Reverse Image Search, tineye.com, or Yandex (24.3%), and checking the metadata of an image (14.9%).
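
For transparency, the following minimal sketch shows how such shares (“agreement high and very high, in %”) can be derived from 6-point Likert responses. The file name, the column names, and the coding 1 = “very low” to 6 = “very high” are assumptions for illustration, not the project’s actual analysis pipeline.

```python
# Minimal sketch of how the reported shares ("agreement high and very high, in %")
# can be derived from 6-point Likert responses. The file, the column names, and the
# coding 1 = "very low" ... 6 = "very high" are assumptions for illustration only.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical: one respondent per row

strategy_items = [
    "other_media_research", "imprint_check", "author_background",
    "context_coherence", "fact_checking_services", "reverse_image_search",
    "image_metadata",
]

# Share of respondents answering 5 ("high") or 6 ("very high") per item, in percent
agreement = (responses[strategy_items] >= 5).mean().mul(100).round(1)
print(agreement.sort_values(ascending=False))
```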

Since the technology for mass manipulation is advancing rapidly, the results could indicate that training and AI-based tools will become increasingly necessary, especially for detecting deep fakes [35]. Especially since the sinister combination of manipulated videos and WhatsApp, e.g., in India [36], has already led to lynch mobs and innocent deaths, a change of modalities as well as further studies, training, and detection tools seem necessary in the context of security.

Continuing our analysis, we asked the participants to name the type(s) of misinformation and fake news most relevant to them in their daily business (see Fig. 3).

Fig. 3. Question: Which types of fake news are particularly relevant to you professionally? (agreement high and very high, in %; n = 106; 6-point Likert scale; multiple answers possible).

The respondents stated that they were mainly confronted with news fabrication in their professional life. Fabrication in this context implies that the generated news items are not based on facts; however, due to their style and presentation, they create the impression among readers that they are real. Similar to fabricated news is propaganda, usually originating from a political motivation to either praise or discredit an individual or entity. Examples of such approaches can, among others, be found within official Russian news channels, which deliberately use narratives to convey a particular image to their audience [3]. Similarly, misleading, tear-jerking headlines used to create click-bait were frequently named by our respondents as a challenge they have to cope with in their own professional routine.

Photo and video manipulation already rank second. At first glance, this observation appears to contradict Fig. 1, in which photo and video manipulation are not classified as particularly frequent. It seems reasonable to assume that these manipulations are challenging to recognize precisely because of the technical know-how and effort they require; thus, the motivation behind them must be exceptionally high. The respondents least frequently mentioned “mouse-to-mouse” propaganda, i.e., paid customer reviews, which are popular with large online retailers.

4.2 Barriers and Trust in AI

The application of technologies in the context of decision-making in the public sector always impacts the lives of citizens. Reasons for the introduction of these technologies often include cost savings, increased efficiency, and improved ‘objectivity’ due to ‘fair’ algorithms [27]. Yet these technologies can also trigger unintended side effects, which bear risks that are hard to foresee, measure, and thus be prepared for. When these risks materialize, the negative consequences affect both citizens and public administration [27].

In this field of tension, it is decided whether citizens gain trust due to better decisions or whether their distrust in government decision-making increases due to the perception of the underlying algorithms as ‘black boxes’; either way, this will impact all aspects of daily life and social cohesion. Research has shown that the increased automation and centralization of decisions will likely motivate distrust among citizens [29].

Especially in content filtering, e.g., to fight the spreading of fake news, the removal or restriction of content might be perceived as censorship [30]. Hence, it is essential to consider the ethical aspects of data-driven algorithms from the beginning of designing and implementing such systems.

Figure 4 shows the experts’ attitudes to technological progress and AI within their professional environment. In general, the respondents see technological progress as more problematic than positive. The majority fear difficulties with data protection, ethical problems, and significant dangers such as cyber-attacks and blackouts. Effects on leisure time are viewed in a balanced way. The most positive expectation, shared by 46.8% of respondents, concerned new opportunities for creativity and innovation.

Fig. 4. Attitudes to technological progress and AI; agreement high and very high (in %, n = 106; 6-point Likert scale).

The impact of disruptive technologies on the work environment should not be underestimated. It is often the case that decisions to implement such technologies are made with little prior knowledge of possible limitations or potentials. This lack of knowledge, in turn, can directly impact the work itself, its results, and its quality, both positively and negatively [37].

It is crucial to understand the processes and activities of potential users in order to include them in technology development. Only in this way can the new technologies cover the functions necessary for the users [37]. Grabowski et al. state that a technology is used when it is accepted. This, in turn, is related to trust in the technology, i.e., whether it can reliably fulfill the desired functions and enables more efficient work [38].

Among other things, the topics addressed revolve around a basic skepticism, grounded in experience, that new technologies do not necessarily simplify everyday work but can sometimes even lead to more work without a recognizable improvement in quality. However, it must be noted that the target groups are, by and large, technology-savvy and technology-friendly groups of people who rarely tend to be overwhelmed by new software solutions in this context.

Turning to the last part of the survey, we asked the participants to express their opinions concerning barriers to using a fake news detection software tool in their workplace (see Fig. 5).

Fig. 5. What do you think would be barriers to using a fake news detection software tool in your workplace? (agreement high and very high, in %; n = 106; 6-point Likert scale).

In addition to a lack of application options, the respondents see unclear or non-transparent strategies, high time and cost expenditure, and a lack of customized solutions as obstacles to using software solutions for fake news detection. Lack of acceptance by the workforce and high demands on data protection and security are mentioned the least, but more than a third of the respondents still cite them as possible obstacles. Winning the acceptance of employees should therefore be considered in training courses.

5 Lessons Learned and Propositions

The analysis of our survey has demonstrated the most pressing barriers that experts from the media and the public sector currently see in using AI to fight fake news and disinformation. Amongst the top-ranking results were: i) lack of trust in the technology, ii) non-transparent organizational strategies, and iii) ethical and privacy concerns. Hence, in the following, we provide selected propositions and discussion points on lessons learned and on what needs to be addressed to overcome the identified barriers.

In tools and data, we trust – attitudes towards AI as a ‘Colleague’.

Using AI to identify and communicate fake news to the general public is not without criticism, and trust in the technology is one of the key issues to ensure acceptance [39]. The literature shows that the same norms often come into play here as in interpersonal interaction [40]. In this context, it is also essential to consider that people tend to perceive AI as a “counterpart” and not as a tool [41]. AI and its results must also be trustworthy in times of personal uncertainty [42]. In-depth research into the influence of perception and trust in the context of AI is, therefore, necessary [43].

I know it as well as the back of my hand – the importance of personal experience with AI.

Many users have considerable reservations about AI-based fact-checking tools [44]. Overcoming these reservations is an open challenge due to such tools’ increasing distribution and use [45]. The accuracy of the analysis results is not always the decisive aspect of whether users trust the tools [43]. The users’ understanding of how to use the tools and how they work can have a lasting influence on their trust in the technology [46]. Personal experience in dealing with these tools [47] can also lead to realistic expectations of the tool itself [48] and, thus, to a more positive attitude toward AI [49]. It is, therefore, essential to define solutions that embed the presentation of results and the handling of the AI tool in the user’s experience. If this succeeds, it could lead to greater self-reflection and a more critical approach to news and information through fact-checkers and evaluation tools [43].

Digital ethics – the importance of societal consensus and consent.

Following the paradigm of digital humanism as a mindset of understanding the highly entangled and complex relationship between humans and technology [50], technology should ultimately foster the free development of individuals to their full potential while, at the same time, not negatively impacting others. This view also implies that tendencies towards anti-humanism through technology, e.g., artificial intelligence, should be identified and questioned [51]. This demand entails the fundamental need for ethical considerations embedded in all organizational processes. The essential question at hand is: where to start? A plethora of frameworks targets the ethical aspects of AI, among which interested individuals can quickly lose track [52].

Furthermore, many of these frameworks are either on a high meta-level and thus hard to operationalize or, on the opposite side, specific to a particular field or domain; hence, transferability is often limited [53]. Consequently, an approach needs to be selected that allows communication experts to map common principles of digital ethics and the use of AI onto their domain. Becker et al. have developed a three-step approach, i.e., analyzing principles, mapping the derived principles, and deriving an individual code of digital ethics [54]. Adopting this or a similar framework can support communication experts in building their respective codes of conduct and guidelines for using AI. This adoption would ease internal barriers, as most of them relate to missing knowledge and a lack of transparent, understandable guidelines.

6 Conclusions

Our study among professionals has demonstrated that the situation is critical and that although AI can be a significant support within the daily work of communication experts, it is a blessing and a curse simultaneously. While the technology enables them to identify potentially fake news and misinformation, they struggle to communicate the results quickly and to reach the necessary target audience. They also face fears and rejection concerning the use of AI by the general public. Censorship, violation of the free press, and intended overblocking are only a selection of the accusations they are confronted with. This backlash leads to the build-up of internal barriers to adopting artificial intelligence within their organizations. One of the biggest challenges lies in the lack of internal knowledge and capacity, which is also reflected in many follow-up barriers, such as fear of data privacy violations, mass surveillance, societal divide, or personal liability. What would be required is sophisticated training and proper adaptations to existing processes and work routines.

Consequently, this would lead to a deeper understanding of the underlying technology, its capabilities, and its limitations. In this context, the transparency of the use of algorithms and tools and of their underlying decision processes would be increased. In turn, responsible use would be strengthened, as would the overall accountability for the application, interpretation, and dissemination of results. This increased knowledge would also be beneficial in terms of privacy protection when working with various sources of data and information.

For future work, several paths open up based on our study results. The discussion around the regulation of AI within the EU is currently omnipresent. Thus, it will be of interest to examine to what extent the handling of disinformation is regulated at the national level in the DACH countries and at the EU level (e.g., GDPR, Digital Services Act), and which initiatives exist in this regard, in order to develop a well-founded recommendation for the future regulation of disinformation. For a responsible approach to AI-based disinformation detection, the EU’s AI Act is of high importance to both the research community and the community of practitioners, as are the consequences to be drawn from it from a legal perspective. In addition to the provisions of the AI Act, national developments should also be considered in order to develop a framework for the legal, ethical, and transparent use of AI systems to detect disinformation. The aim is to shed light on the legal framework for designing AI systems to detect disinformation and to make recommendations based on a comprehensive consideration of the fundamental rights of the citizens affected.

Another interesting aspect for future research arises from the ever-increasing flood of disinformation, not least multiplied by bots, trolls, and generative AI, which raises concerns about the destabilization of society and a post-factual future. Technological development enables a massive increase of disinformation in quantity and quality while, at the same time, also providing solutions in the area of detection. Paying particular attention to this ambivalent relationship with AI is vital, especially in the context of information dissemination in society. A representative survey of the Austrian population will empirically record the rejection, fears, and hopes regarding various aspects such as data protection, freedom of opinion, “overblocking”, and transparency. From this data material, concrete recommendations for action will be derived for promoting acceptance among the broad population, taking ethics and diversity into special consideration.