1 Introduction

The questions of algorithmic trust and credibility are gaining importance, particularly in the AI context. As more people rely on algorithms as news and information sources, users face the challenge of discerning whether news recommendations are credible. This challenge becomes even more problematic when users are unsure how and why specific news is recommended. Users of CJ need ways to assess the credibility of CJ-mediated news, a need that becomes more urgent when the source of the news and the process used by the underlying algorithms are not known to them. These issues are related to broader debates on AI such as fairness, accountability, transparency, and explainability (FATE), which are intricately interwoven with CJ and with algorithmic phenomena more broadly (Shin 2021a, b). FATE concerns new challenges of ensuring due process, responsibility, non-discrimination, and understandability in algorithmic processes. Concerns about the potentially discriminatory impact of algorithms call for further research into the risks of encoding bias into AI decisions (Crain 2018; Kitchin 2017). There is increasing concern that the black-box nature of AI may reduce the justification for important decisions made by algorithms (Hargittai et al. 2020). Algorithms exist and work invisibly behind the interface, learning from users and personalizing what people see online, but people do not know what these algorithms are or how they work (Bishop 2019). These issues, including credibility and trust regarding how we assess and accept AI, remain critical to algorithm design in the media domain (Beer 2017; Dörr and Hollnbuchner 2017).

Since algorithms are usually intangible and thus dubbed black-box mechanisms, they are not perceptible to the end user, and their coding details are not available to the public. Most lay users can never see into the code behind the platforms, and most people do not know how algorithms work or why they can be risky (Courtois and Timmermans 2018). The opacity of black-box processes has led to calls for research on AC in the AI era (Wölker and Powell 2020). Logg et al. (2019) conducted a study on algorithm heuristics: specifically, how users perceive algorithmic features, how algorithmic trust is created, and how users experience algorithm systems. This research echoes related work in diverse algorithmic contexts. Shin and Park (2019) examine users’ perceptions of transparency, fairness, and accountability in the experience of news recommender algorithms. Cotter and Reisdorf (2020) conceptualize users’ understanding and literacy with respect to the impact of algorithm-driven media. How users understand algorithmic characteristics, how they experience algorithm services, and how AC plays a role in such processes are essential issues to tackle in discussing and developing CJ and future AI-driven media. Recent findings on algorithmic behaviors (Araujo et al. 2020) allude to the heuristic dimension of FATE in the experience with AI services. When users experience algorithms, they inevitably encounter FATE issues, which are essentially related to people’s understanding of and engagement with algorithms (Cotter 2019). In examining such issues, it is important to consider the cognitive processes by which users evaluate AC and figure out the issues that arise from interaction with AI (Ananny and Crawford 2018). AC is best understood and practiced as a set of social practices: the ways people use algorithms in their everyday lives and the actual events mediated by real-world algorithmic services. The processes of evaluating AC and of human understanding in situations of high uncertainty or complexity are necessary to stimulate algorithm adoption decisions (Wölker and Powell 2020). Accordingly, the purpose of this study is to fill an AC research gap by uncovering the factors and the cognitive processes that users go through between perception and action. It is intended to uncover how users of chatbot news assess AC and develop information-seeking intentions. Against the rising concern over the black-box nature of algorithms and the resulting decline in public trust (Burrell 2016; Reisdorf and Blank 2020), this study operationalizes trust in reference to AC. It examines how AC is formed, what roles AC plays in the experience of CJ, and how users respond to the algorithms, especially in contexts where chatbots use algorithms to provide personalized information.

Just as credibility is important for conventional media and journalism, the credibility of algorithms and CJ is critical. AC plays a key part in mediating the effects of literacy and acceptance on users’ behavioral intentions. AC, defined as a user’s perceived believability of an algorithm as a communication channel (Shin and Park 2019), focuses on the algorithmic outlets that deliver messages rather than on news sources or the news messages themselves. AC can be applied to explain the effects of algorithm-mediated communication, such as personalized recommendations and customized news curation.

2 Literature review

The present study builds on the groundwork laid by prior studies on algorithmic trust and FATE (Shin 2020, 2021a, b). Extending these studies, the goal of this study is to identify the factors used for judging credibility and their subsequent effect on information-seeking decisions.

2.1 Algorithmic credibility and trust

AI firms rely on user data to enhance the design and delivery of personalized products and customized services. Ethics and privacy have been increasingly considered in any AI model that utilizes such sensitive, confidential data. Algorithmic trust is users’ perception that algorithms handle their data more trustworthily than operations managed by humans. Algorithmic trust helps ensure that firms are not exposed to the risk of losing the trust of their users and customers, which in turn endows the algorithmic firm with credibility (algorithmic credibility).

This new notion of algorithmic credibility is highly relevant to the algorithmic media domain, where public trust in a vast array of media channels continues to decrease, driven by the flood of inaccurate information and fake news in the news industry, as well as by ambivalence about news from automated, unknown sources (Thurman et al. 2019). This trend echoes how algorithm-based media and trust in algorithms come with serious concerns about issues of FATE. With the advent of AI, increasing attention has been paid to trust and credibility, and to ensuring FATE in order to provide more publicly accountable and socially responsible journalism from the perspective of users (Shin 2020).

Trust can be created in a recommender system by showing and clarifying how the system makes decisions and operates, and what responsibility is borne for the results of recommendations. For users to trust algorithms, they must be assured about issues of neutrality, impartiality, confidentiality, and objectivity (Kolkman 2020; Shin 2020). People can demand transparency, as well as legal and financial accountability, for the use of algorithms. People should expect the results of algorithmic decisions to be explained in a timely fashion to anyone who may be adversely affected by them, so that these individuals have a say in decision outcomes. Creators may also need to explain how individuals’ data are being used. There have been increasing demands that algorithmic journalism be open about the structure, functions, and processes of the algorithms used to search for, analyze, and generate automated news. In reality, however, the connected nature of algorithm technologies makes it difficult to understand where data come from, how data are used, and where data go in the context of algorithms.

Credibility is often characterized as a multifaceted concept that has been approached in terms of believability, trust, accuracy, fairness, objectivity, and reliability. Trust is a key dimension of credibility, since it includes the perceived integrity and morality of the source (Shin 2020). From a common-sense standpoint, information can be trustworthy if it appears to be fair, transparent, accountable, and reliable. Thus, AC can be seen as closely related to how users feel about issues of FATE (Ananny and Crawford 2018; Shin and Park 2019). Trust is not driven by monetary rewards, but by a shared understanding, or clarified affordance, of significant issues. Trust in algorithmic processes is an important factor in CJ (as well as in general algorithm-based services), and is likely to become an alternative paradigm for the operation and organization of algorithmic societies (Lokot and Diakopoulos 2015; Shin 2019). Following Shin and Park (2019), trust in this study is conceptualized as having confidence or faith in, or willingness to rely on, algorithms. It refers to a user’s feeling of confidence that the algorithms will perform actions that are beneficial. While trust is confidence in algorithmic qualities, credibility is the reputation of those algorithms, which affects their ability to be believed. In the CJ context, credibility is the extent to which a user considers information to be reliable (Lim and Heide 2015). News credibility is one of the key elements that constitute CJ media trust. Algorithmic media services have to be concerned about how their information is accepted, especially because evaluations of credibility play a critical role in readership patterns (Borah 2014). Just as media credibility has been approached from three areas of research—message credibility, source credibility, and medium credibility—AC can be considered a multidimensional concept drawing from several different aspects of coverage, such as fairness, transparency, accountability, balance, accuracy, and explainability (Lim and Heide 2015; Shin 2021a, b).

With this definition in mind, this study approaches trust in relation to issues of FATE in CJ. Algorithms influence decisions of major consequence for individuals in fundamental dimensions of daily experience. One key issue is that most of these decisions are made through inscrutable black boxes. Thus, urgent questions arise: how can we trust algorithmic systems, to what extent can we believe algorithmic processes, and how can we accept the results of algorithmic services (Alexander et al. 2018)? For example, a recommender system, including a news recommender, yields little value for users when they do not trust the system (Shin 2020).

2.2 FATE as an algorithmic literacy

While CJ can feed people trustworthy information, keeping journalism relevant and sustainable in the AI era (Ford and Hutchinson 2019), the journalistic principles of truthfulness, accuracy, objectivity, and impartiality remain unresolved, which raises broader ethical questions about the appropriateness and credibility of CJ. These journalistic principles are in line with recent discussions of FATE in AI, which are among the most contentious issues in the debate (Ferrario et al. 2020). These issues provide a key clue to understanding AI and its results (Dörr and Hollnbuchner 2017), and they can be components of AC. In much of the current discussion on trust in AI, issues of FATE are frequently touted as important normative values (Shin 2021a, b).

The concept of transparency in the context of personalized algorithms requires that recommendations made by algorithmic processes are obvious—that is, transparent—to users (Ananny and Crawford 2018). While fairness has been touted alongside the rise of AI, it is critical to consider what fairness means in the specific context of use cases (Shin and Park 2019). In designing AI systems, an important outcome to avoid is the creation or reinforcement of unfair bias and discrimination against certain groups (Diakopoulos 2016). Explainability in algorithms refers to how the methods and techniques applied in AI can be understood by a human; it is the extent to which the feature values of an instance are associated with its model prediction in a way that humans can understand (Rai 2020). The debate on accountability in AI centers on who is liable for the outcomes of AI services (Moller et al. 2018).
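To make the notion of explainability above concrete, the following is a minimal, hypothetical sketch (not drawn from this study) of how the contribution of each feature value to a single model prediction can be reported for a simple linear model; the feature names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: hypothetical user features -> predicted relevance of a news item
# (feature names are illustrative, not taken from the study)
X = np.array([[3, 1, 0.2], [5, 0, 0.9], [2, 1, 0.4], [4, 0, 0.7]])
y = np.array([0.4, 0.9, 0.3, 0.8])
features = ["past_clicks", "topic_match", "recency"]

model = LinearRegression().fit(X, y)

# Explain one recommendation: the additive contribution of each feature value
# to the prediction (coefficient * value), a simple form of explanation.
instance = X[1]
contributions = model.coef_ * instance
print(f"prediction = {model.predict([instance])[0]:.2f}")
for name, c in zip(features, contributions):
    print(f"{name}: {c:+.2f}")
```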

2.3 Algorithmic information processing

A number of research approaches to algorithmic cognitive development evolved out of Information Processing Theory (Shin 2021a, b). Algorithm researchers who adopt the information-processing perspective explain cognitive development in terms of motivational and behavioral changes in the user’s mind. The perspective is grounded in the notion that users process the algorithmic information they receive, rather than simply accepting the algorithmic outputs. This perspective equates the user’s mind with algorithms, based on the assumption that an algorithm is a reflection of what people think, how they use the Internet and search for information, and who they are (Shin and Park 2019). It addresses how users perceive algorithmic attributes and how they process and respond to the algorithmic outputs they receive through their cognitions and actions. Shin (2021a, b) highlights a continuous and dynamic pattern of development throughout the interaction with algorithms. This perspective is well suited to the opaque nature of AI (Klawitter and Hargittai 2018).

The opaque nature of algorithms poses a problem for information acquisition via chatbots, because users who are unaware of algorithms will not have a complete or accurate understanding of the conditions under which content is recommended to them (Cotter and Reisdorf 2020). A large part of users’ cognitive processes is devoted to making sense of the algorithms they face. They do this by seeking a heuristic understanding of the algorithms or by relying on existing knowledge or common presuppositions. In most situations, however, information processing alone is insufficient to transform disparate information into meaningful findings and confident decisions. In such cases, the drive for sense-making directs people’s attention, can lead them to seek out additional information, and prompts users to think about the performance of the algorithm in terms of personalization and accuracy.

Since algorithm-based services such as chatbot algorithms bring several competitive advantages, it is important to examine how users’ trust is gained, how it affects credibility, how AC works, and how credibility is constructed (Alexander et al. 2018). The algorithmic information-processing perspective is a suitable frame for this inquiry insofar as the model argues that pre-behaviors and post-experiences influence user cognitions, which in turn lead to satisfaction and intentions. It helps in understanding the process by which algorithmic attributes shape users’ sense-making of algorithms through literacy, as well as how users’ actions influence that sense-making. The framework is suitable for this inquiry because it is designed to examine user heuristics as a course of perception, experience, and the formulation of trust based on cognitive processes. With this framework in place, the following inquiries guide this examination:

RQ1: How do users make credibility judgments about algorithm-driven media, and how does AC influence users’ heuristics in the context of CJ?

RQ2: How do people trust CJ, and how is AC related to trust in the experience and interaction with chatbot services?

RQ3: How does algorithmic credibility affect chatbot experiences? How do users determine whether an algorithmic source is credible?

3 How do users determine if an algorithm source is credible and reliable?

The model includes FATE as trust constructs that influence AC, which then affects information seeking. The model also includes trust as a mediator of attitude and information seeking (Fig. 1).

Fig. 1
figure 1

Algorithmic information processing in chatbot news

3.1 Algorithmic literacy and credibility

Recommending content inherently involves a great deal of uncertainty due to the underlying issues of FATE. Recent fairness challenges and privacy breaches by AI industries have raised greater social awareness about these issues (Park and Skoric 2017). Questions such as how to develop algorithmic media that take fairness, accountability, and transparency into account, and how to address ethical concerns when designing AI-based systems, have become important considerations in the design and development of algorithm services (Koenig 2020; Shin and Park 2019).

In AI-driven recommendation systems, how personalization is carried out, whether the recommendations reflect user preferences, and whether the consequences are accountable all relate to matters of algorithmic literacy. It has been discussed that FATE constitutes algorithmic literacy (Reisdorf and Blank 2020; Swart 2020). Algorithmic literacy is related to the trustworthiness of AI news services (Montal and Reich 2017). Credibility concerns the worthiness of belief in AI news. Whether users trust certain systems or services affects their assessments, and in turn, such assurance leads to users’ willingness to share more data with the AI. When fair, accountable, and transparent services are guaranteed, users are more likely to perceive higher credibility in the personalization. Highly transparent algorithms can give users more insight into when and why AI algorithms produce personalized results and how to improve their performance. Fair and accountable recommendations afford users a feeling of trustworthiness. User understanding of algorithmic processes has itself been found to be significant in the construction of algorithm user interfaces. Shin (2020) shows that including explanations enhances users’ trust in and satisfaction with a machine learning system. Literacy has also been found to have a positive effect on platform trust (Reisdorf and Blank 2020). AC is built through activities in which people understand how algorithms are transparent, fair, accountable, and explainable. Given the existing research, a relation between algorithmic literacy and credibility can be hypothesized: when users have higher algorithmic literacy, they will attribute more credibility to chatbot news.

H1: The higher the transparency users perceive, the higher credibility they associate with chatbot services.

H2: The higher the fairness users perceive, the higher credibility they associate with chatbot services.

H3: The higher the accountability users perceive, the higher credibility they associate with chatbot services.

H4: The higher the explainability users perceive, the higher credibility they associate with chatbot services.

3.2 Trust and credibility

Whenever people use AI, they must decide whether, how, and to what extent to trust algorithm-based services (Shin 2021a, b). Personalized content is expected to be precise, since users expect personalized recommendations to match their preferences. Accuracy and personalization are related concepts and are two key measures defining a user's perceived utility of the system. Accuracy refers to whether the personalized system predicts those items that people have already rated or interacted with previously (a common way of quantifying this is sketched after H5 below). When users sense that news recommendations are optimized to their preferences, they consider the service valuable and feel more trusting of the content (Shin et al. 2020). Users consider algorithms credible and reliable as long as they perceive the recommended items or content as accurate (Kim and Lee 2019). Previous studies have validated these linkages in a range of algorithm services, in which accuracy and personalization are confirmed to be causes of trust and credibility (Shin 2020). Hence, it can be hypothesized that participants with high trust will regard chatbot news as more credible than those with low trust.

H5: The higher the trust users have, the higher the credibility they associate with chatbot services.
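As an aside on how accuracy in the sense described above is commonly quantified in recommender research (a sketch under our own assumptions, not a measure used in this study), precision at k compares the top recommended items against items a user has already rated or interacted with:

```python
def precision_at_k(recommended, interacted, k=5):
    """Fraction of the top-k recommended items the user had already
    rated or interacted with (a common proxy for recommendation accuracy)."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in interacted)
    return hits / k

# Hypothetical example: 3 of the top-5 recommendations match past interactions
recommended = ["a12", "b07", "c33", "d01", "e19", "f02"]
interacted = {"b07", "c33", "e19", "z99"}
print(precision_at_k(recommended, interacted))  # 0.6
```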

The existence of trust is key to promoting technology acceptance, as it facilitates openness and transparency in the adoption process. Findings in the work of Lee (2018) imply that trust plays a mediating role in algorithm acceptance. In the news recommendation context, trust is considered a mediator of the relationships between behavioral intentions, individual characteristics, and algorithm technology (Shin and Park 2019). It is therefore reasonable to expect that credibility plays a similar mediating role in the context of AI.

H6: Perceived credibility mediates the relationship between FATE and information seeking.

3.3 Information seeking

Information seeking is the activity or process of attempting to acquire information in both human and technological contexts. It has been defined as the process by which users deliberately make an effort to improve their state of knowledge (Borah 2014), and it is related to the motivation of the user to pursue specific information. People in the AI context may expend increased cognitive effort (Shin et al. 2020): they allocate more cognitive resources in algorithmic conditions because algorithmic processes are generally unknown and concealed. A recent study implies that users who are assured about algorithmic performance show an increased willingness to find information (Shin 2020). In the algorithm context, when users allocate increased cognitive resources, they engage in active, motivated processing. As a result of assurance about CJ credibility, users should show enhanced willingness to find information through AI. What people feel and how they trust thus provide important indicators of their information seeking through AI. When users confirm the value of a system, their attitudes toward the AI algorithm become positive and their intentions to use AI are formed. Hypotheses regarding the relationship of FATE and trust with intention have been widely validated in various contexts, including with respect to algorithms (Shin et al. 2020):

H7: Users’ perceived FATE has a significant effect on their information seeking in chatbots.

3.4 Interaction effects between trust and literacy

Common sense suggests that trust and literacy are positively correlated: the more people know, the more they trust, and vice versa. In the algorithm context, it has been argued that high competency can lead to user confidence and comfort, which increase user trust (e.g., Shin 2020). Algorithmic literacy can increase the level of trust because it allows users to examine what is happening, so that they can understand the operation (Klawitter and Hargittai 2018). Conversely, a higher level of trust in chatbots may improve users’ algorithmic literacy. When interacting with chatbots, trust can help users make sense of and develop a clearer understanding of the interaction. Given these possible interaction effects, it can be predicted that the effect of trust will be stronger in the high algorithm literacy group than in the low algorithm literacy group.

H8: The effect of trust is stronger when users have a high level of algorithm literacy.

4 Methods

This study used a 2 (degree of trust: low vs. high) × 2 (degree of literacy: low vs. high) between-subjects quasi-experimental design, which yielded four conditions varying users’ trust in and literacy with chatbot news services.

4.1 Data collection and sample

This study recruited a total of 200 participants residing in the United States (Table 1). The data were collected through a self-administered online survey over a period of six months in 2020 (January 3–June 30, 2020). Participants received $1.00 for completing the 15-min experiment and went through a thorough informed consent process. The sample was targeted at respondents who had prior experience with OTT or related services. Before answering the questionnaire, the participants were asked to recall their past experiences of using OTT services. All participants read a full-page consent form and consented to participate in the research study. We also included a survey code at the end of the survey for participants to enter on the MTurk platform, to address the possibility that participants would rush through the survey to receive payment. The survey code effectively prevented this from happening and allowed us to filter out participants who did not finish the survey as attentively as they should have. Overall, these procedures ensured the quality of the data used in subsequent analyses.

Table 1 Attributes of respondents per experimental group

To determine the sample size, we conducted a power analysis using the effect size obtained in a comparable recent study on the effects of trust and algorithms (Shin et al. 2020). We determined that our sample would require 200 individuals (50 per condition). We recruited groups based on their existing levels of algorithmic literacy and trust. In measuring literacy, we evaluated users’ objective and self-reported understanding of algorithms by asking about their explicit and implicit knowledge of algorithms: explicit algorithm usage time and general and technical understanding (search skills), and implicit FATE issues, such as awareness of the way algorithms select and process information, recommend content, and construct social realities for them. The pre-selection questionnaire was reviewed by experts in algorithms and AI; it was composed of nine questions, to which respondents provided self-reported responses. In recruiting groups with high and low algorithmic trust, we evaluated users’ existing level of trust toward algorithms and CJ.
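As an illustration of this step (a sketch with an assumed effect size, not the exact computation used here), an a priori power analysis for a four-group between-subjects design can be run as follows:

```python
from statsmodels.stats.power import FTestAnovaPower

# Assumed inputs: Cohen's f = 0.25 is a placeholder, not the effect size taken
# from Shin et al. (2020); alpha = .05, power = .80, and four groups for the
# 2 x 2 between-subjects design.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80,
                               k_groups=4)
print(f"total N required: {n_total:.0f} ({n_total / 4:.0f} per condition)")
```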

Recruited respondents were randomly allocated to one of the four conditions, with no statistically significant differences between the groups in gender, age, education, news media consumption, or prior knowledge about CJ. All study respondents were recruited via Amazon’s Mechanical Turk. The majority of the participants were Caucasian (49%), the average age was 34.89 (SD 11.25), and the gender ratio was almost equal (female: 51.6%; male: 48.4%). Most of the participants (84.0%) had completed college. Participants were randomly assigned to one of the versions of the same website, varied systematically according to the independent variables. Of the collected responses, 20 partial responses with missing information were excluded. The four groups did not differ in terms of age (p = 0.18), gender (p > 0.26), or prior experience (p > 0.23).

4.2 Procedures

Upon entering the online site, respondents were asked about their chatbot usage, media usage patterns, and interest in various kinds of news. Exposure to one of the experimental conditions followed. Directly after news exposure, the credibility of the message and source(s), as well as the likelihood of selecting the respective articles, was measured. Afterward, a manipulation check question was asked. Finally, demographics and knowledge about CJ were recorded. On average, respondents took 10 min (SD 3.92) to complete the experiment.

4.3 Stimuli

The chatbot platforms used for this study were news chatbots such as those of The Washington Post, BBC, Quartz, and The New York Times. Participants chose one of these chatbots or used their preferred chatbot application. Each participant was presented with articles suggested by the chatbot. After agreeing to a consent form, respondents completed a pre-screening questionnaire that rated their prior experiences with algorithms and chatbots. The stimulus material differed according to individual characteristics, usage behaviors, the way individuals interacted with chatbots, and specific news search behaviors, because the algorithms curate news selection, interact with users, rely on existing data, and follow current trends in news agendas. Respondents downloaded one of the chatbot apps, read news recommended through that chatbot, and were instructed to interact with the chatbot application as much as they wanted. Respondents concluded the experiment by completing a questionnaire that measured their normative values and assigned performance values. To validate the reliability of the responses, a series of confirmation questions was included in the survey. Data reduction was then performed on the initially collected data in terms of the validity and reliability of the responses.

4.4 Measures

The 18 measurement items in this study were drawn from previously developed and validated items. The FAT measurements were derived from Shin (2021a, b) and Shin et al. (2020). The explainability measurements were modified from Renijith et al. (2020) and Rai (2020). The credibility measurements were derived from Verhagen et al. (2014). The information seeking measurements were derived and modified from Borah (2014). Measurements were combinations of formerly used items and measurements adapted from other research. Some measurements required changes to reflect new traits of algorithms and AI services. Twenty college students with prior experience using algorithmic news services completed a pretest about a specific news topic.

The measured items were tested with Cronbach’s alpha, and the scores varied between 0.78 and 0.90, indicating acceptable internal consistency (Appendix). A confirmatory factor analysis was computed with the other half of the original sample using structural equation modeling; the analysis showed that the items had satisfactory factor loadings. The factor loadings for all measurements were statistically significant (p < 0.001), providing evidence for the convergent validity of all constructs. To check validity, correlation tests were conducted to determine the reciprocal relationships among variables. A simple linear correlation (Pearson’s r) was used to assess the significance of the observed relationships. The intercorrelations among the variables showed no signs of multicollinearity. For the discriminant validity check, the square root of the average variance extracted (AVE) for each construct was significantly higher than the variance shared between that construct and the other constructs in the model. The results from these tests suggest that the indicators account for a large portion of the variance of the corresponding latent constructs and thus provide evidence for the measurement model.
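For transparency about these reliability and validity computations, the following is a minimal sketch with hypothetical item data (not the study’s dataset) of how Cronbach’s alpha and the average variance extracted can be calculated:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE from standardized factor loadings; its square root is compared
    with inter-construct correlations for discriminant validity."""
    return float(np.mean(loadings ** 2))

# Hypothetical 7-point responses for a three-item construct (simulated data)
rng = np.random.default_rng(0)
base = rng.integers(2, 7, size=(200, 1))
items = np.clip(base + rng.integers(-1, 2, size=(200, 3)), 1, 7)

print(round(cronbach_alpha(items), 2))
print(round(average_variance_extracted(np.array([0.78, 0.82, 0.85])), 2))
```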

4.5 Analyses

A two-way analysis of variance (ANOVA) was performed to compare mean ratings of the dependent variables (credibility and information seeking) across the literacy and trust levels (independent variables) (Table 2). Furthermore, the indirect effect of credibility, as a mediator of selectivity, was assessed: credibility and literacy were set as mediator variables, and information seeking formed the dependent variable. Multiple one-way ANOVAs showed that the mean values of credibility [F(1, 199) = 228.37, p < 0.001; η2 = 0.177] and information seeking [F(1, 199) = 38.84, p < 0.001; η2 = 0.083] differed significantly between groups.
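A minimal sketch of this 2 × 2 analysis, using hypothetical column names and simulated data rather than the study’s data, can be written with statsmodels as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data frame: one row per participant with factor levels and a
# credibility rating (the same pattern applies to information seeking).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "trust": np.repeat(["low", "high"], 100),
    "literacy": np.tile(np.repeat(["low", "high"], 50), 2),
    "credibility": rng.normal(4, 1.5, 200),
})

# Two-way ANOVA with main effects and the trust x literacy interaction
model = ols("credibility ~ C(trust) * C(literacy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```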

Table 2 Experiment results

5 Results

A repeated-measures MANOVA was applied to the two dependent variables. The 2 × 2 factorial analysis showed that the groups differed significantly in credibility and information seeking: there were main effects of literacy and trust, and the interaction effect was significant, along with the mediation effect.

5.1 Main effects

5.1.1 Effects of algorithmic literacy

We found that participants with a high level of literacy attributed more credibility to the chatbots (M 4.30, SD 1.709) than those with a lower level of literacy (M 3.53, SD 1.845). Also, participants with a high level of literacy revealed higher information seeking intentions (M 3.96, SD 1.477) than their low literacy counterparts (M 3.18, SD 1.395). These differences were significant in the ANOVA.

For the effects on credibility, there was a significant main effect of the conditions of respondents’ level of FATE understanding on credibility evaluations [fairness F(6, 199) = 14.04, p < 0.001; accountability F(6, 199) = 6.602, p < 0.001; transparency F(6, 199) = 19.617, p < 0.001; and explainability F(6, 199) = 11.91, p < 0.001].

For the effects on information seeking, there was a significant main effect of the conditions of respondents’ level of FATE understanding on information seeking behavior [fairness F(6, 199) = 10.44, p < 0.001; accountability F(6, 199) = 6.738, p < 0.001; transparency F(6, 199) = 6.129, p < 0.001; and explainability F(6, 199) = 3.502, p < 0.001].

5.1.2 Effects of trust

We found that participants with a high level of trust attributed more credibility to the chatbots (M 5.24, SD 1.182) than participants with a low level of trust (M 2.59, SD 1.296). Also, participants with a high level of trust revealed higher information seeking intentions (M 4.17, SD 1.393) than their low trust counterparts (M 2.97, SD 1.329). These differences were significant in the ANOVA. For the effects on credibility, there was a significant main effect of the trust conditions on credibility assessment [F(1, 199) = 228.37, p < 0.001]. For the effects on information seeking, there was a significant main effect of the trust conditions on information seeking behavior [F(1, 199) = 38.843, p < 0.001].

5.2 Interaction effects: “you can see as much as you trust” the algorithm

Hypothesis 8 posited that trust would have a stronger effect when users have a high level of literacy. To test the interaction effect, we compared the effect of trust on the dependent variables in the high literacy case and the low literacy case. The results showed that trust had a much larger effect on credibility assessments in the high literacy conditions (M 4.30, SD 1.709 for the high trust case and M 3.53, SD 1.845 for the low trust case) than in the low literacy conditions (M 3.53, SD 1.845 for the high trust case and M 4.67, SD 1.51 for the low trust case).

An interaction effect was identified between trust and literacy with respect to credibility and information seeking [F(1, 199) = 5.58, p < 0.01], showing that the impact of credibility on information seeking was stronger for respondents with high literacy than for those with low literacy.

There was a significant difference in attributions of credibility to chatbots in the high literacy group as compared with the low literacy group (M 2.69 vs. 2.38). In support of the interaction effect, participants had significantly higher trust when they had a high level of literacy (M 3.22, SD 1.29) as compared to when they had a low level of literacy (M 3.90, SD 1.47), F(1, 199) = 6.53, p < 0.05.
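A sketch of how the simple effects behind such an interaction can be probed, continuing the hypothetical data frame from the sketch in Sect. 4.5 (variable names are assumptions, not the study’s data), is to compare the effect of trust on credibility separately within each literacy group:

```python
from scipy import stats

# Effect of trust on credibility within each literacy level
# (df is the simulated participant-level data frame sketched in Sect. 4.5)
for level in ["high", "low"]:
    sub = df[df["literacy"] == level]
    high_trust = sub.loc[sub["trust"] == "high", "credibility"]
    low_trust = sub.loc[sub["trust"] == "low", "credibility"]
    t, p = stats.ttest_ind(high_trust, low_trust)
    print(f"literacy={level}: t = {t:.2f}, p = {p:.3f}")
```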

The presence of the interaction effects indicates that a high trust tendency combined with a high literacy perception produced a more positive credibility level, consequently leading to more information seeking than a low trust tendency combined with a low literacy perception. The positive effect of trust on credibility was reinforced by a high level of literacy (Fig. 2). The figure illustrates the complementarity between trust and literacy in fostering credibility and information seeking behaviors. People with a high trust tendency found the algorithm more accountable, transparent, and fair than non-algorithmic services, whereas people with a low trust tendency found that non-algorithmic services yielded higher accountability, transparency, and fairness than algorithmic services. Trust tendencies and algorithmic literacy thus have combined effects on credibility and information seeking.

Fig. 2
figure 2

Interaction effect between trust and algorithmic literacy on credibility

5.3 Mediation effect

To examine the mediation effects of the credibility dimensions, a non-parametric bootstrapping approach was utilized to test the significance of the mediating effect. Bootstrapping is a crucial part of structural modeling when it comes to testing mediation effects. Based on the results of the ANOVA, we aimed to test the significance of the indirect paths between FATE and information seeking through credibility. Bootstrapping techniques were used to obtain confidence limits for the specific indirect effects (Hair et al. 2013). Variance accounted for (VAF) was used to examine the indirect effect: a VAF value greater than 80% indicates full mediation, while a VAF between 20% and 80% indicates partial mediation (Hair et al. 2013). The 95% confidence interval for the indirect effect via credibility was obtained using bootstrapped resampling; mediation is confirmed if such a confidence interval does not contain zero (Hayes 2013). The standardized indirect effects show that the exogenous latent constructs have partial mediation effects on information seeking through credibility. All direct and indirect paths are significant at the 0.05 level. The results confirmed the indirect effect of FATE on information seeking through credibility to be significant at the 95% confidence level. The mediation is partial: the indirect effects are significantly reduced without credibility, but the relationships remain valid (Fig. 3).
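A minimal sketch of the bootstrap logic (hypothetical variable names; simple OLS paths standing in for the full structural model, not the analysis actually run in this study) is to resample participants, estimate the indirect path, check whether the percentile confidence interval excludes zero, and compute VAF as the ratio of the indirect to the total effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_mediation(df, x="fate", m="credibility", y="info_seeking",
                        n_boot=5000, seed=0):
    """Percentile bootstrap of the indirect effect a*b and VAF = ab / (ab + c')."""
    rng = np.random.default_rng(seed)
    indirect, vaf = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))      # resample with replacement
        boot = df.iloc[idx]
        a = smf.ols(f"{m} ~ {x}", boot).fit().params[x]        # path X -> M
        fit_y = smf.ols(f"{y} ~ {x} + {m}", boot).fit()
        b, c_prime = fit_y.params[m], fit_y.params[x]          # M -> Y, direct path
        ab = a * b
        indirect.append(ab)
        vaf.append(ab / (ab + c_prime))
    lo, hi = np.percentile(indirect, [2.5, 97.5])
    return lo, hi, float(np.mean(vaf))

# Hypothetical participant-level scores (simulated, not the study's data)
rng = np.random.default_rng(2)
fate = rng.normal(4, 1, 200)
cred = 0.5 * fate + rng.normal(0, 1, 200)
seek = 0.4 * cred + 0.1 * fate + rng.normal(0, 1, 200)
data = pd.DataFrame({"fate": fate, "credibility": cred, "info_seeking": seek})

lo, hi, vaf = bootstrap_mediation(data, n_boot=1000)
# Mediation is supported when [lo, hi] excludes zero; a mean VAF between 20%
# and 80% would be read as partial mediation (Hair et al. 2013).
print(f"95% CI for indirect effect: [{lo:.3f}, {hi:.3f}]; mean VAF = {vaf:.2f}")
```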

Fig. 3
figure 3

Mediating role of credibility in information seeking

6 Discussion

This study aimed to examine the factors and processes that influence the credibility of, and the information seeking intentions of users of, algorithmically driven chatbot news services. It utilized an algorithmic information-processing frame to test the effect of literacy and trust on credibility and information seeking via chatbot algorithms. We analyzed the data using a 2 × 2 ANOVA design with the user’s literacy and trust levels as the independent variables, interaction with chatbot news as the conditions, and credibility and information seeking as the dependent variables. This analysis revealed that the group with high algorithmic literacy and high trust in algorithms showed the highest sense of CJ credibility and increased information seeking behaviors. This is consistent with previous studies reporting that FATE and trust are significant factors in algorithmic media (Shin 2020; Shin et al. 2020). The findings not only show the role of literacy and trust in the adoption of chatbots, but also further clarify the relation of literacy and trust to credibility (Cotter and Reisdorf 2020). Thus, the findings of the study provide proof-of-concept insights for developing trust-based algorithmic literacy processing models in a chatbot context. The model shows that interacting with algorithms involves algorithmic literacy processes wherein features of algorithms are cognitively processed to formulate a heuristic of user motivation and to trigger user intentions for chatbot news. The findings of this study offer meaningful implications for the triadic relationship between literacy, trust, and credibility in CJ. The discussion below is organized around the main, interaction, and mediation effects.

First, the findings indicate the critical dimension of trust in CJ and human–robot interactions. In highly dynamic and hyperconnected environments in which data are overloaded and uncertain, trust has become a triggering factor in the development of heuristic processes of AI decision-making (Shin 2020). When people trust algorithmic systems, they tend to believe that the services are created through transparent and fair processes and believe in the accountability of the results (Kim and Lee 2019). User trust provides systems with more access to user data: when users feel trust and comfort, they are willing to share more personal data with systems. More data improve predictive analytics, which can help systems produce more accurate search results. Hence, trust functions as a mechanism that enhances the credibility of news sources.

Second, this study clarified the role of algorithmic literacy in chatbot news consumption. The high literacy group clearly showed the heuristic process of FATE, whereas the low literacy group showed a low level of credibility. Low literacy users have a rudimentary understanding of the ways that algorithmic platforms function, yet they do not have significant awareness of the critical implications of these platforms, nor do they fully understand the FATE issues. High literacy users have a sophisticated understanding of what chatbots can do for users and, at the same time, of what limitations come with the news services. Algorithmic literacy comprises what users know about how algorithmic conditions affect human adoption, which thus determines the extent to which people can search for information and interact with algorithms effectively. How users evaluate FATE depends on user heuristics, existing dispositions, and context, which may be part and parcel of the entire adoption process. Heuristics play a role in understanding algorithmic features and assessing trust. The heuristic or facilitating roles suggest that FATE not only serves as algorithmic literacy in the use of chatbot platforms, but also has an underlying relation with trust and a significant relation to credibility. When users understand FATE, they attribute higher credibility to chatbots. This means that when people realize how algorithms work, they are likely to trust algorithms and attribute credibility to the information provided.

Third, the dimensions of algorithmic literacy are interdependent, and they interact with each other, forming interaction effects on credibility in an algorithmic system. The interaction effect shows that literacy has a positive effect on attributions of high credibility and subsequent information seeking when users have a high level of algorithmic trust. People perceive a chatbot news story as most credible when they know about the algorithm’s operations. In other words, high trusting individuals become even more trusting when they have high literacy in algorithms. This echoes the old saying “You can see as much as you know” applied to trust in algorithms (Shin 2021a, b). The interaction effects are expected and reasonable, since it is hard for lay users to discern what is transparent, what is fair, and what is accountable. Ordinary users rely on their existing trust in algorithms and technologies when they face issues such as FATE. They usually do not understand in the abstract what FATE is and how it affects the performance of algorithm services. In reality, the concepts of FATE are interrelated and interwoven. People simply use existing trust as a heuristic when weighing FATE in algorithms (Shin 2020).

The presence of a mediation effect, in which credibility plays the role of a mediator in the relationship between FATE and information seeking, is also consonant with the interaction effect. Trust is a cue for evaluating the FATE of algorithms, and processed perceptions of FATE trigger attributions of credibility and increased information seeking. The mediating effect implies that credibility is of utmost importance. When users take recommendation credibility into consideration in their information seeking, the credibility factor accounts for up to 51% of the relation between literacy and information seeking, generating a significant indirect effect (t ratio > 1.96). During the cognitive process of forming an information seeking intention, an appreciation of algorithmic literacy is enhanced through credibility, which allows users to proceed with information seeking. Once users consider chatbot recommendations credible through confirmed literacy, they pay more attention to information seeking. As soon as the credibility perception is established, they build intention grounded in their level of algorithmic literacy, while FATE alone does not directly affect the information seeking decision.

The effects of interaction and mediation shed light on users’ cognitive process of assessing algorithmic features, transferring trust into credibility, and determining information seeking decisions. How trust starts and evolves in the course of adoption may offer important clues for designing and developing chatbot media services, as more and more people are aware that algorithms are not neutral and may carry human prejudices. People would like to understand how algorithms function, how the processes work, and to what extent the results are fair and legitimate. The model in this study offers a clue as to how and by what factors trust is triggered. Trust exerts a facilitating effect on judgments of algorithmic credibility by expediting the processing of uncertain issues for the use and adoption of personalized algorithms.

Fourth, based on the effects identified, users’ cognitive process in chatbots can be inferred. The findings reveal that users use FATE as a heuristic tool to assess trust and credibility in algorithms. Users’ cognitive process of algorithmic literacy influences user trust, and increased trust influences systematic processing of performance expectancy, which is positively associated with credibility and information seeking. Algorithm users actively process the news they receive from chatbots, just as algorithms do. Users perceive, analyze, use, and adopt news, and that cognitive development is ongoing and contextual, neither stereotyped nor formulaic nor the same everywhere. In this light, algorithms serve as an information-processing tool, and FATE is used as a criterion to assess credibility. Not only do the qualities of FATE play a key role in establishing credibility, but they also play an anchoring role in support of user trust. Algorithm users develop their own processes of literacy based on cognitive processes related to FATE. User reactions to perceived performance are contingent upon, or at least closely related to, how users recognize, understand, and process information regarding FATE as algorithmic literacy. Such a relationship can be heuristic insofar as users rely on their perception of FATE to determine their feelings of trust and the credibility of chatbot services.

Many people are averse to using algorithms, preferring instead to depend on their own instincts when it comes to decisions about algorithms. Users develop their own heuristic understanding of algorithmic trust based on the cognitive processing of FATE. User perception of credibility is not automatically granted or pre-determined; rather, it depends on how users figure out FATE. This inference has heuristic implications for algorithm development and CJ. Issues of transparency and impartiality are hotly debated subjects in algorithm research, and many users are concerned with these issues. It can be inferred that trust is closely interrelated with these issues, as it plays a significant role in establishing credibility and heuristics (Shin 2021a, b). When users are assured about FATE issues, their trust is enhanced, and they are willing to allow more of their data to be used and analyzed. With increased trust between users and algorithms, more transparent processes are warranted, and more data will enable algorithms to produce accurate results tailored and individualized to users’ preferences and personal history. It would be worthwhile to examine this relationship further empirically. Trust will be a key factor in positive feedback loops between users and algorithm systems. It can be further inferred that trust is created and embodied via users’ cognition and enhanced through user interactions with algorithms (Shin 2021a, b). Through interaction, the more people are exposed to and use algorithms (and CJ), the more likely they are to trust algorithms and the news from CJ.

7 Implications

The contributions of this study are twofold: theoretical and practical. Theoretically, this study confirmed the role of literacy and trust in chatbot services. Practically, the results of the study bear design implications for AI practitioners seeking to support effective human–AI interaction (Sundar 2020), specifically how to echo or manifest FATE in AI media interface design.

7.1 Contributions to research: heuristic algorithmic literacy

This study contributes to the ongoing literature on algorithmic literacy, trust, and algorithm behaviors in the context of CJ. It conceptualized literacy along with FATE and tested the facilitating role of literacy. These findings stand to contribute to theoretical advances by proposing what algorithmic literacy is made of, how it works, and what effects literacy has in chatbot use, and, from there, how trust can be theoretically framed, empirically measured, and analyzed. Previous concepts of media literacy or information literacy may not be applicable to describe the uniqueness of users’ interactions with chatbot platforms, because AI differs greatly from existing media services. Previous notions of literacy have mainly remained on the surface of knowledge (know-what), leaving out users’ active processes of figuring out algorithms and delving into their roots for application (know-how and know-why). It might be useful to distinguish algorithmic literacy from concepts such as code literacy and programming capability. Unlike technical capability (being able to read and write code), algorithmic literacy goes beyond basic digital capacity and includes a heuristic understanding of the technical and social processes by which algorithms are generated, distributed, and consumed, and knowledge that allows users control over these processes (e.g., data approval, privacy control, and the right to explanation). Algorithmic literacy should include users’ understanding of the way algorithms convey meaning, structure our interactions with others, and affect what we see, how we see, and what we think. Through such understanding processes, trust emerges in users and credibility is attributed to chatbot messages. With heuristic literacy, users question why and how certain search results are favored. A critical heuristic is key to heuristic algorithm literacy. Just as algorithms sort humans and shape human society, users can sort algorithmic functions and shape algorithmic decisions by controlling their data, configuring their privacy, and critically evaluating algorithm performance (Park 2011).

By conceptualizing and developing scales to measure FATE as algorithmic literacy, this work contributes to the ongoing characterization of how to judge literacy in algorithms, how we can best exploit AI to support users and offer enhanced insights while avoiding biased and discriminatory decisions, and how we can balance the need for technological innovation and the public interest with fairness and transparency for users. As AI becomes ever more present and an everyday reality, algorithmic literacy will be even more critical. The relation of literacy and trust to credibility is particularly useful, as it clarifies where the attribution of algorithmic credibility comes from. While FATE factors have been considered critical in AI (Shin 2021a, b), how users process FATE information and how it affects trust remain underexplored. The confirmed relations between literacy and trust can be a starting point for further excavating the role of literacy in the use of AI.

Our findings offer integrated perspectives on how users perceive AI characteristics, how their trust is created and sustained, what cognitive affordances are realized, and what behavioral results are derived from these processes. Although previous studies have consistently shown the role of trust in AI (e.g., Alexander et al. 2018), this study empirically validated the function and relation of trust in chatbots, its antecedents, its interaction with literacy, and the heuristic–systematic process. Our findings on the interaction effects of trust are consistent with research suggesting the need for a multidimensional analysis of trust mechanisms (Shin 2020).

In chatbot algorithms, users obtain a sense of credibility when they are assured through algorithmic literacy and when they trust the algorithms. When users trust algorithm systems, they tend to believe that the system’s services are useful because they are personalized and tailored to their needs (Shin et al. 2020). The mediating role of trust supports its liaison function in algorithmic processes: linking heuristic and systematic evaluation (Ferrario et al. 2020). Trust significantly mediates the effects of literacy on users’ attitudes and information seeking. Affording strong user trust and assured emotion may assure users that their personal data will be processed in transparent and legitimate ways, thereby generating positive trust toward the chatbot recommendations and the platforms, ultimately leading to heightened levels of information seeking. It can be inferred that trust between users and algorithms is the underlying key factor in the acceptance and experience of chatbots.

The findings of this study will enable future studies to increase both the rigor of the existing literature and the range of questions addressed in the area of chatbots. As the findings imply, the functional features of algorithms are processed through users’ understanding and perceptions regarding perceived literacy, which are mediated by trust. Literacy thus facilitates the cognitive processes of quality, performance, attitude, and intention (Shin and Biocca 2018). Understanding the role of literacy in AI would provide a clue to the development of a user-centered interface for chatbots. It may be worthwhile to develop heuristic literacy measures, such that an individual’s capacity to use heuristic mechanisms in assessing algorithms and to approach algorithmic problems using a variety of heuristics can be measured.

7.2 Contributions to practice: algorithmic literacy as a social practice

For the providers of AI or other similar algorithmic services, the implications of this study can be constructive in designing AI interfaces and integration frameworks for chatbots. As AI continues to change the way we interact with media, how to guarantee transparent interaction and fair algorithms, and how to include explainability in the interface, are urgent tasks to be carried out. There is a surging need for algorithmic literacy education, and those who design algorithms should be trained in ethics and required to write code that considers societal processes and their interactions with context (Courtois and Timmermans 2018).

Our findings have practical implications for algorithmic literacy. We provide a framework to evaluate user literacy in a real-world application, which can generate guidelines for informed literacy practices that could be implemented in chatbot services. Issues of FATE have been urgent topics in AI, and users seek guarantees on such issues. Based on the FATE model, we know that trust is related to literacy issues, as it plays a key role in developing attributions of credibility and further facilitating interaction. Trust serves as a critical liaison, bridging users and AI systems and enabling a positive feedback loop. The results of this study provide guidelines on how to actualize and institutionalize FATE issues with trust. It is possible for industry to design innovative “users-in-the-loop” algorithmic systems that leverage people’s ability to cope with algorithmic decisions. Algorithmic literacy is best understood and practiced as a set of social practices: the ways people use algorithms in their everyday lives and the events mediated by users’ interactions with actual algorithmic services.

Another key implication regarding literacy and trust is that our findings provide answers to questions such as why an AI made a specific recommendation and why it did not do something else. One way to gain credibility in AI systems is to use algorithms that are inherently explainable and interpretable. Based on our findings, industries should address algorithmic experience in chatbots. Algorithm platforms can be considered experience technologies, in which use of an algorithmic platform affords users the ability to learn how a specific algorithm works (Cotter and Reisdorf 2020). The insights from user heuristics can be used to design heuristic algorithms. Developing effective user-centered algorithm services requires an understanding of users’ literacy processes together with the ability to reflect these processes in algorithm design.

8 Conclusions and future studies

What does it mean to be algorithmically literate and trusting, and how does this interactivity influence users’ credibility attributions and information seeking behaviors with respect to CJ? The results suggest that literacy refers, beyond technical knowledge of code, to the ability to understand and evaluate the FATE issues behind algorithms critically and contextually. Modeling literacy will be important for predicting users’ future intentions for the sake of better algorithmic performance. The user model in this study provides insights on how to integrate FATE with user literacy, usability features, and user behavioral intentions. As AI is further developed and implemented, industry must come up with ways to develop algorithms that are human-enabled and user-centric. The creation of understandable and explainable AI is critical in establishing trust and credibility by engaging human agency in the AI ecosystem. Facilitating the adoption of algorithms and enabling trust require a user perspective on the development of understandable AI, which affords users the opportunity to co-develop AI accordingly.

User algorithmic trust and heuristic literacy processes open new areas for research. As one of the empirical attempts at modeling literacy and trust, this study relied on a preliminary conceptualization and a basic operationalization of literacy informed by the known, basic factors that influence algorithmic curation in chatbot use. Future research can investigate in greater detail conceptual links between literacy, trust, and credibility and apply them to diverse emerging AI technologies. Other researchers can adapt our literacy processing framework to further evaluate algorithmic understanding across many other AI domains.