8.1 Introduction

Information and communication technologies (ICTs) have become central to today’s information-driven society. They encompass a variety of technologies with different benefits that greatly shape all sectors of society, but each comes with its own specific set of challenges. The present accepts the value of ICTs and reaps the rewards of their adoption, while the future will demand questioning and reflection on the problems and issues arising from their continued use.

A panoply of innovative technologies is revolutionizing society. Social networks, artificial intelligence (AI), the Internet of Things (IoT) and big data, and extended reality (XR) are not novel technologies, but they were selected for this chapter due to the innovations that they continuously introduce into society. Social networks continue to attract a large number of Internet users (Peng et al. 2018) and are valuable for communication and community building (Rosabel et al. 2022). AI continues to give greater capabilities to computers, enabling them to perform tasks in a human-like manner across numerous fields (Chen et al. 2020). IoT’s importance is notable in many fields (e.g., transportation, health, finance) (Nord et al. 2019), while the volume, velocity, and variety of big data enable the extraction of information that is valuable to businesses, government departments, and the like (Oussous et al. 2018). Finally, XR’s contribution is intrinsically connected with its enhancement of reality (Gong et al. 2021).

This chapter examines some of the most relevant and innovative technologies, although the list is not exhaustive. It investigates the roles that social networks, AI, IoT and big data, and XR play in various sectors and the challenges they pose. It begins by investigating the various contributions that social networks can make to education and business, as well as the problems they raise. It then examines the role of AI in education and health and addresses related issues and ethics. IoT and big data follow: their value to smart cities and e-government is discussed, along with the difficulties they present. Next, the benefits offered by XR to the health and business sectors are examined, as well as its associated challenges. The final section reflects on the future of ICTs in terms of sustainability and the digital divide.

8.2 Social Networks in the Information Age

Social networks have undergone significant development and are attracting an increasing number of users. They can be defined as networks that depict the various nodes and connections of a social structure. They comprise online social networks, which exist in online environments, and mobile social networks, which are based on mobile applications. Various other social networks have specific functions; among these are social news platforms and media-sharing applications (Peng et al. 2018). Social networks serve as valuable instruments in several sectors and numerous organizations, such as small and medium enterprises (Olvera-Lobo 2020) and non-profit organizations (Pífano et al. 2021). They are also used for advertising (Falcão and Isaías 2020) and for health (Alshaikh et al. 2014).

8.2.1 Implications for Business

Within online social networks, there are members who are considered to be “influencers.” These members can be very valuable for businesses due to their word-of-mouth power and their status as role models. They can reach their contacts more efficiently, enabling them to disseminate information swiftly and widely; because they are seen as role models, other members are more likely to imitate them. The identification of these key users has become a central issue for business, so much so that the strategies for that identification have recently become a significant research topic (Klein et al. 2015). Moreover, social networks are important sources of information for brands. The analysis and mining of data from these platforms can offer valuable insights into their users’ opinions about a specific product or service, mainly through the comments posted by users (Peng et al. 2018). Online social networks have an important role in the empowerment of clients. They are interactive platforms that allow users to generate content, search for information, and express their opinions about different products and brands. Internet users are sometimes called “digital evangelists” due to their influential role in social networks, which can cause a product to either succeed or fail. Also, because they offer suggestions regarding new products or services, they are often known as “prosumers” for their part in companies’ creative process (Gonzalez et al. 2015). Regardless of their advantages, the successful deployment of social network sites in the business arena should follow specific guidelines. Moreover, it is crucial to use measurable criteria to assess the actual effects that online social networks have on revenue (Isaías et al. 2012).
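To illustrate how such key users might be identified in practice, the following minimal sketch ranks the members of a toy follower graph by PageRank, a common proxy for word-of-mouth reach. The graph, the names, and the choice of PageRank are illustrative assumptions, not the method of any study cited above.

```python
# A minimal sketch (illustrative data) of key-user identification:
# ranking members of a toy follower graph by PageRank as a proxy
# for word-of-mouth reach.
import networkx as nx

# Hypothetical follower graph: an edge (a, b) means "a follows b".
G = nx.DiGraph([
    ("ana", "rui"), ("bea", "rui"), ("carl", "rui"),
    ("rui", "dina"), ("bea", "dina"), ("eva", "bea"),
])

# PageRank rewards users who are followed by other well-followed users.
scores = nx.pagerank(G)
influencers = sorted(scores, key=scores.get, reverse=True)
print(influencers[:3])  # the three most influential members first
```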

Users’ engagement with web-based social networks has repercussions for their business relationships. It is believed that individuals with an online social network presence have more opportunities to connect and strengthen their ties with other professionals. Despite being hosted online, web-based social networks facilitate offline relationships (Benson et al. 2014). As the most popular professional social network, LinkedIn has attracted much interest and has an impressive number of users worldwide. The use of LinkedIn is positively related to networking ability. Also, frequent users, rather than those with a high number of contacts, can expect to obtain career benefits from this social network (Davis et al. 2020). Social networks are excellent communication channels with vast audience reach and rapid information dissemination. When examining the dynamics of event organization, for example, the important role that social networks play in event promotion is very evident. Organizers can use social networks as vehicles of information. In the case of music festivals, a significant amount of information can be disseminated through social networks (performers, schedules, etc.) to those attending or wishing to attend the event. Additionally, the engagement of people in social networks is potentially beneficial in terms of building the attendees’ loyalty to the event and again in terms of marketing the event with personal statements (text, photographs, etc.) provided by the attendees (Hudson et al. 2015).

8.2.2 Social Networks in the Education Sector

Distance and mobile education are becoming pervasive with the assistance of various existing and emerging (Isaías 2018; Miranda et al. 2017) learning technologies (Isaias et al. 2017), including social networks (Shelomovska et al. 2017). Social networks can be used for both formal and informal education (Teoh et al. 2014) and across different levels of education, such as in secondary schools (Ng 2021) and in universities (Quimbita et al. 2020). Both teachers and students have expressed positive attitudes toward the use of social networks in higher education, regarding them as valuable tools for teaching and learning, and are willing to increase their use in educational settings (Shestak et al. 2021). The adoption of social networks by the higher education sector can strengthen students’ engagement and improve their motivation to learn. In addition, even their learning performance can be improved, increasing their level of achievement (Hortigüela-Alcalá et al. 2019). The constraints imposed by the safety measures adopted during the COVID-19 pandemic lockdown periods have led to an increase in the use of ICTs. In the current environment, social networks enable a large number of users to be connected and are efficient instruments of communication that help to create meaningful online learning communities (Rosabel et al. 2022).

Social networks are being used in the education sector, but education is also being used in the online social networks arena. The increase in the number of children and teenagers who use social networks has led to the design of several educational packages that promote more secure participation on these platforms (Vanderhoven et al. 2014). In the context of online learning, in particular within Massive Open Online Courses (MOOCs), students use tools such as social networks to engage in collaborative learning and to build learning communities. Also, on these platforms, they engage with tagged content and use the existing tagging features to maintain or establish new conversations to enhance their learning (Cruz-Benito et al. 2017). Microlearning, which promotes the delivery of learning via brief learning units, is one of the fields where social networks can be very effective. Social networks connect users with a diversity of interests and different levels of knowledge, enabling collaboration. Moreover, social networks enable the swift and widespread exchange of content and empower students to interact with each other based on the shared content (Giurgiu 2017). Social networks can be valuable instruments for students in the sense that they allow the creation of networks of users, and they provide features enabling the exchange of content. In addition, students do not need to be trained on how to use them, since they are very familiar with these platforms and use them frequently (Imran et al. 2016).

8.2.3 Social Networks’ Main Challenges

Despite their many advantages, the use of social networks presents several challenges (Persia and Auria 2017). The nature of social networks has given rise to two major issues: privacy and anonymity, and the dissemination of misinformation and disinformation. In regard to privacy and anonymity, some online social networks impose a real-name policy, which prevents their users from using an alias. This policy is intended as a strategy to improve content and service, to facilitate users’ search for contacts, and to enable accountability. Despite the benefits that social networks often cite to justify this policy, there is growing controversy associated with the use of the user’s real identity. By requiring their users to register with their real identity, these platforms have access to information that jeopardizes privacy and online freedom (Peddinti et al. 2014). Anonymity can become problematic, for example, in the use of location-based services, where the geographical position of social network users is used to provide services. Besides potentially exposing customers who wish to remain anonymous, these services can include information that poses a threat to the privacy of the users’ data (Buccafurri et al. 2021). The increasing number of social network users causes great volumes of varied information to be posted online, which increases the availability of datasets via the Internet and the uncertainty about whether that data is protected from de-anonymization. In light of this predicament, transparency is emerging as a new framework for information management. In addition to transparency, the right to be forgotten is vital in information management, in the sense that it would allow users to delete previous data when introducing new information (Kataoka et al. 2014).

Social networks, which enable information to “go viral,” have the capacity to influence decision making. There are insufficient mechanisms to protect social network users against misinformation and its consequences (Bastick 2021). Misinformation can have serious repercussions on many levels. In some cases, it can influence voting decisions, cause instability in financial markets, impact the decision of parents regarding vaccination for their children, and create panic and spread incorrect information about disease outbreaks, as is currently occurring with the COVID-19 pandemic (Amoruso et al. 2020).

There is increasing interest in the dissemination of misinformation within social networks. Bastick (2021) conducted an experiment with 233 undergraduate students and concluded that even when the students were exposed only briefly (less than five minutes) to fake news, their unconscious behavior was substantially modified. Social networks are among the most effective instruments for information dissemination. They are used not only as key communication channels among individuals for both personal and professional purposes, but they are also the main source of news for a substantial number of Internet users (Amoruso et al. 2020). The detection of fake news is often done via content, by verifying the accuracy of the news, with some websites specializing in this type of service. Nonetheless, this individual verification of news is a time-consuming and complex process, particularly when there is a high volume of content being shared swiftly and widely. Combining content analysis with methods that assess the credibility of the source and the author can make detection more effective (Sitaula et al. 2020). There are also methods for the detection of fake news that are style-based, in that the writing style is analyzed, and propagation-based, focusing on information offered by the dissemination of the news (Zhou et al. 2019).
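As a concrete illustration of the hybrid approach, the sketch below trains a classifier on content features (TF-IDF) concatenated with a per-source credibility score. The articles, source names, credibility values, and labels are hypothetical; production systems are considerably more sophisticated.

```python
# A minimal sketch (illustrative data) of hybrid fake-news detection:
# TF-IDF content features combined with a per-source credibility score.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical (title, source, label) triples; label 1 = fake.
articles = [
    ("Central bank confirms interest rate decision", "reuters.com", 0),
    ("Miracle cure hidden by doctors, share now!", "viral-news.biz", 1),
    ("City council approves new transit budget", "localpaper.com", 0),
    ("Celebrity reveals vaccines rewrite your DNA", "truth-exposed.net", 1),
]
titles = [title for title, _, _ in articles]
labels = [label for _, _, label in articles]

# Assumed prior credibility per source (0 = untrustworthy, 1 = trustworthy).
credibility = {"reuters.com": 0.95, "localpaper.com": 0.80,
               "viral-news.biz": 0.10, "truth-exposed.net": 0.05}
cred_column = csr_matrix([[credibility[src]] for _, src, _ in articles])

vectorizer = TfidfVectorizer()
content = vectorizer.fit_transform(titles)  # content signal
X = hstack([content, cred_column])          # plus source signal
clf = LogisticRegression().fit(X, labels)

# Score a new article from a known low-credibility source.
new_title = vectorizer.transform(["Doctors hide this one weird trick"])
x_new = hstack([new_title, csr_matrix([[credibility["viral-news.biz"]]])])
print(clf.predict_proba(x_new)[0][1])       # estimated probability of fake
```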

8.3 AI and the Barrier Between Humans and Machines

AI can be defined as a “system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Kaplan and Haenlein 2019, p. 15). With AI, computers become capable of performing human-like tasks. AI is used across several fields (Chen et al. 2020) including marketing (Huang and Rust 2021), industry (Ahmad et al. 2021), agriculture (Zhang et al. 2021), and entertainment (Lachman and Joffe 2021).

8.3.1 AI in Educational Settings

The possibilities offered by new formats of learning online, such as MOOCs, translate into an increase in the number of students, greater student participation, and consequently rising costs for institutions. In these scenarios, automated solutions seem to be a valuable instrument to address the challenges that emerge, such as the need to provide personalized support to large numbers of students. AI-based teaching assistants can be integrated into online and blended courses to assist with the delivery of content, to offer feedback, and to supervise students (Popenici and Kerr 2017). Despite its benefits, AI is still far from being adopted widely in education. In higher education, risk aversion, resistance to innovation, and insufficient funding are some of the obstacles to the incorporation of novel technology. In addition, educators often require convincing of the true value of technology before adopting it. AI can become more pervasive if educators become more aware of its benefits and shortcomings, if there is increased multidisciplinary collaboration between different types of educational experts and technology experts, and if AI applications become more aligned with contemporary educational theories (Bates et al. 2020). AI can be used with big data to create smart learning environments where teachers can analyze students’ progress, identify those who might be at risk of failing, and intervene in a timely manner. AI has the potential to improve both the teachers’ performance, by improving their pedagogical practices, and the students’ learning experience, by empowering them to be more in control of their learning process. The value of the information provided by AI translates into actionable insight obtained from concrete data about the students’ difficulties, performance, and progress (Yang et al. 2021). This is already happening with learning analytics and the valuable insight it offers education (Quadir et al. 2020; Clark et al. 2020).
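A minimal sketch of this learning-analytics use of AI is shown below: a classifier trained on engagement data flags students at risk of failing so that teachers can intervene early. The features, records, and threshold are illustrative assumptions, not drawn from the studies cited here.

```python
# A minimal sketch (illustrative data) of learning analytics with AI:
# flagging students at risk of failing from engagement features.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-student features from past cohorts:
# [logins/week, assignments submitted, average quiz score (%), forum posts]
X_train = [
    [8, 10, 85, 12],
    [1, 3, 40, 0],
    [5, 8, 70, 4],
    [0, 1, 25, 1],
]
y_train = [0, 1, 0, 1]  # 1 = failed the course

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Flag current students whose predicted risk exceeds a chosen threshold,
# so that teachers can intervene in a timely manner.
current_students = {"student_a": [2, 4, 55, 1], "student_b": [7, 9, 80, 6]}
for name, features in current_students.items():
    risk = model.predict_proba([features])[0][1]
    if risk > 0.5:
        print(f"{name}: at risk (p = {risk:.2f}), consider early support")
```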

The use of AI in education is known as artificial intelligence in education (AIEd). This term refers to the general application of AI in educational settings, namely through the deployment of various AI-based applications and systems already in place in many schools and higher education institutions (Sun et al. 2021). In education, AI has been associated with a series of benefits, evident, for example, at the level of personalization, grading, quality of teaching, and student learning experience. AI-based learning systems integrate several techniques for analysis, recommendation, and understanding, supported by a knowledge model, machine learning, and data mining. These systems have the ability to map the connections between students’ results and factors such as the learning resources and the teacher’s approach. The information they provide enables teachers to customize their teaching methods and strategies to improve the students’ learning outcomes and experience. Through automation, AI can assist teachers to grade students’ exams and offer feedback on assignments (Chen et al. 2020). AI can be used to offer personalized content to students, according to their profile and preferences, while also assisting teachers with course design. Moreover, one fundamental benefit of AI is that, by performing tasks previously done by teachers, it frees educators from time-consuming work and enables them to focus more on actual teaching. The total impact of AI on education remains unknown, although it is predicted that its influence in this sector will increase over the next 20 years (Zawacki-Richter et al. 2019).

8.3.2 The Health Sector Under AI Adoption

In the medical sector, AI is in growing demand. To be adopted in medicine, AI-based approaches need not only to perform exceptionally well, but also to be reliable, transparent, and easily explainable to and interpretable by their users (Holzinger et al. 2019). In the field of medicine, AI is proving to be a valuable ally. Deep learning algorithms are a core solution for managing the growing amounts of data derived from wearable devices, smartphones, and other technologies now commonplace in medicine. Additionally, AI has brought about a general improvement in the provision of health services by offering more intelligent management of patients’ electronic records, assistance with therapeutic compliance, improvement of predictive and preventive medicine, and monitoring of vital functions (Briganti and Le Moine 2020). There are also improvements in triage, with AI applications such as the Babylon app being used to assist health professionals to identify the patients who need to be examined. The use of AI in the triage of patients has the potential to reduce the burden on health systems and shift resources to those patients who genuinely require medical help (He et al. 2019).

AI can also be deployed for preventive purposes. A significant part of prevention is the promotion of healthy behavior, such as regular exercise. AI-based health coaches can be used in these scenarios to train individuals to perform an exercise correctly, help them with their progress, and offer emotional support when they are struggling to succeed. These intelligent coaches also have the advantage of providing personalized support, adapted to the particular needs and characteristics of the individual (Mohan et al. 2020).

In the area of mental health, AI is regarded as a valuable solution for some of the current challenges pertaining to treatment and service delivery. Mental health professionals are advocating the use of AI to assist with patient monitoring and to support the decision-making process (Carr 2020). AI-based virtual psychotherapy, for instance, is becoming popular, and it is being delivered by apps that assist individuals to identify patterns of behavior and the mental health conditions associated with them. They help the users to recognize their thoughts and feelings and assist them to develop the skills to address them. These apps, using chatbots, educate the users on relevant clinical terms, and they can provide advice to help the users cope with what they are experiencing. These AI applications are regarded as particularly beneficial for vulnerable and under-served populations (Fiske et al. 2019). Another key aspect of AI in health care is the increasing awareness of its value in low-income countries, as it can address many of the challenges emerging in their health systems. Assisted by the widespread uptake of smartphones and increasing investments in technological development, AI has the potential to improve healthcare delivery in regions that have fewer resources (Wahl et al. 2018).

8.3.3 The Issues and Ethics of AI

Despite abundant evidence of the value of AI to several areas of society, a constant dichotomy is present in all sectors: humans vs. machines. There is still fear that AI will replace humans, since it has the capacity to complete some tasks with more consistency, speed, and reproducibility than human beings, despite current opinions that it will assist, rather than replace, them (Briganti and Le Moine 2020). In medicine, for instance, with the automation of tasks that are theoretically simple, but “incredible labour- and time-intensive, healthcare providers may be freed to tackle more complex tasks, representing an improved use of human capital” (He et al. 2019, p. 30). Another important question pertains to the coexistence of humans and machines and whether AI systems and human beings can coexist effectively. Also, there is the question of which decisions should be made exclusively by humans, which should be delegated to AI, and which should be made collaboratively (Haenlein and Kaplan 2019).

In the field of medicine, for example, one of the difficulties associated with AI is the absence of a legal framework to establish liability when decisions are driven by algorithms (Briganti and Le Moine 2020). Moreover, the implementation of AI in mainstream medicine raises other issues such as data exchange and privacy, standardization of data, interoperability among different platforms, patient safety, and algorithm transparency (He et al. 2019). Algorithm transparency is equally concerning in other fields such as education. Institutions need to exercise caution when granting power to hidden algorithms that can have considerable impact on people’s lives but lack transparency. As Popenici and Kerr (2017, p. 4) argue, “this is presented casually as a normal state of facts, the natural arrangements of Internet era, but it translates to highly dangerous levels of unquestioned power.” Finally, a prominent issue that emerges in regard to AI is its ability to guarantee diversity and reduce bias (Bates et al. 2020). The question of bias emerges in the context of AI’s foundations, since both the data and the algorithms that are employed for training include the biases that exist in society. Hence, algorithms can be misused and result in breaches of human rights. Different types of biases, such as those related to gender or race, can cause inequalities, since they mimic societal stereotypes. The performance of AI algorithms depends on predetermined values, which are subjective and become imprinted in the AI training datasets (Yang et al. 2021).

8.4 IoT and Big Data

IoT can be understood as an “open and comprehensive network of intelligent objects that have the capacity to auto-organize, share information, data and resources, reacting and acting in face of situations and changes in the environment” (Madakam et al. 2015, p. 165). Despite still being in its early stages, IoT is evolving swiftly and becoming one of the most important technologies, with an increase in the number and influence of IoT devices. Its importance is evident in several areas such as transportation, health, smart environments, retail, information technology, and finance (Nord et al. 2019). Big data is different from conventional data due to its three core features: volume, velocity, and variety. Its complex nature demands “powerful technologies and advanced algorithms. So, the traditional static Business Intelligence tools can no longer be efficient” (Oussous et al. 2018, p. 433).

8.4.1 IoT for Smarter Cities

IoT is one of the technologies central to the development of smart cities. The rapid urbanization that is taking place worldwide comes with challenges related to mobility, energy, healthcare provision, and civil infrastructures. Smart cities are intended to be intelligent solutions that guarantee sustainability and efficient use of resources by embedding intelligent devices in various infrastructures and services. IoT is responsible for connecting the physical and virtual realms by means of widespread devices used in buildings, streets, vehicles, and different infrastructures. It can create integrated solutions for smart cities and offer different types of services to citizens (Qian et al. 2019). Energy is an integral part of modern cities, and an ongoing concern is the reduction of consumption. The deployment of IoT-enabled devices can assist smart homes to monitor their energy consumption. Mobile phone applications, for example, can be used to manage electrical devices remotely (Alavi et al. 2018). The application of IoT to waste management in smart cities is an important advancement in this crucial area. The use of sensors in trucks or garbage cans can provide essential information about the type and amount of garbage and the most appropriate time for collection. This information can also be used to identify the most appropriate locations for placing garbage cans without detrimental environmental impact, as well as the type of truck that should be used. Insight into these aspects of urban living can help waste management to become more efficient (Samih 2019).
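The following minimal sketch illustrates the kind of logic such a system might apply: fill-level readings from bin sensors determine which garbage cans are due for collection and which type of truck to dispatch. Device identifiers, thresholds, and readings are illustrative assumptions.

```python
# A minimal sketch (illustrative data) of IoT-assisted waste collection:
# sensor fill levels decide which bins need a pickup and which truck to send.
from dataclasses import dataclass

@dataclass
class BinReading:
    bin_id: str
    fill_level: float  # fraction full, e.g., from an ultrasonic sensor
    waste_type: str    # determines the truck to dispatch

FILL_THRESHOLD = 0.75  # collect before overflow; assumed policy value

readings = [
    BinReading("bin-001", 0.92, "organic"),
    BinReading("bin-002", 0.40, "recyclable"),
    BinReading("bin-003", 0.81, "recyclable"),
]

# Group due pickups by waste type so the appropriate truck is dispatched.
routes: dict[str, list[str]] = {}
for reading in readings:
    if reading.fill_level >= FILL_THRESHOLD:
        routes.setdefault(reading.waste_type, []).append(reading.bin_id)

for waste_type, bins in routes.items():
    print(f"Dispatch {waste_type} truck to: {', '.join(bins)}")
```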

The progress of smart cities is intrinsically connected to big data. “Data in smart cities are characterized by variety, velocity, volume, value, and veracity that are the well-known characteristics of big data.” (Karimi et al. 2021, p. 1). In cities, big data can be obtained from individuals and from objects. Data can be collected both from data infrastructures themselves and from energy-consumption behaviors. Moreover, it is possible to gather data on health, public transportation, criminal activities, and the environment (Lim et al. 2018). Big data is essential for supporting various sectors of smart cities; it facilitates traffic management, crime analysis, environmental monitoring, and planning. This big data can be derived from multiple sources including social media, IoT sensors, and different information systems; hence, many factors must be taken into account prior to and during the data analysis. It is important to focus on scalability, interoperability, and data integration and to consider the issues of privacy and security. In addition, it is important to consider streamed data, to enable services that are offered online and in real-time, and historic data for planning and decision-making (Osman 2019). Urban planning, in particular, can benefit significantly from the analysis of big data. Big data allows the mining of real-time data and the detection of high-frequency patterns on a wider scale. In urban planning, big data analytics can make sense of huge volumes of data to empower decisions that in some scenarios can, through automation, reduce the need for human intervention (Kandt and Batty 2021).

8.4.2 E-Government’s Adoption of IoT

E-government is another key sector where IoT is having an important impact. The deployment of IoT can assist administrative management, promote a more intelligent administration, and improve the efficiency of different departments, thereby providing better service to the community. Intelligent e-government platforms based on IoT can be used to support various services and address the challenges deriving from slow servers; moreover, they are more technically advanced than conventional platforms (Qi and Wang 2021). Using real-time data and supported by smart devices, IoT can facilitate knowledge management, information exchange, and collaboration not only between different departments and organizations, but also between citizens and the government. It can assist with decision-making concerning environmental issues and occurrences. For example, it can be used to detect fires in forests and remote areas, monitor the weather, and identify the possibility of extreme natural occurrences such as earthquakes and floods (Papadopoulou et al. 2020). Moreover, the data that results from the communication between all the different devices used by IoT enables local governments to access vital information, take timely action as required, and offer citizens more appropriate services and accurate information. IoT can empower governments to improve their public services and, at the same time, gives citizens the resources to become more participative (Velsberg et al. 2020).

Big data is essential to IoT in the government sector. Big data can be used to improve the transparency of governments, provide insight to support decision-making, offer information about citizens’ needs, and help to address pressing problems such as healthcare service delivery and more sustainable methods of energy production. Moreover, it has a transformational power over procedures and policymaking (Klievink et al. 2017). Big data is even more relevant in the context of e-government, where data is more voluminous and derives from increased electronic participation. Big data offers cost-effective solutions and facilitates the achievement of various e-government objectives, namely in terms of information processing and knowledge-based decision-making. Big data allows the government to have access to information that was previously unavailable, which offers insight into more aspects of its citizens’ lives and behaviors. This allows a more accurate prediction of what citizens require, enabling a more appropriate response from the government. Furthermore, big data enables a more effective allocation of overall resources (Al-Sai and Abualigah 2017). Besides addressing current issues in e-government, big data can also be used to forecast the future impact of policies. In the context of tax policy, for example, big data can be used to estimate the economic effects of any changes (Agbozo and Spassov 2018).

As Huang and Li (2021, p. 2) posit: “predictions and decisions brought about by big data will inevitably change the way people make decisions. The significance of big data to decision-makers lies in advance prediction, in-process perception, and after-the-fact feedback.”

8.4.3 Assessing IoT Challenges

The IoT is not without challenges, arising from issues such as the integration of innovative technology with existing technology, privacy concerns, security issues, and questions of trust (Nord et al. 2019). One of the main challenges of IoT is security, as cyberattacks are a risk to IoT devices and to entire systems, such as health care. Another aspect that needs to be considered is the environmental impact of the increasing uptake of devices and the amount of energy that is required for their operation. Finally, more effort should be made to design and develop software that is user-friendly (Nižetić et al. 2020). Within e-government, for example, at a technical level, the diversity of IoT devices can cause difficulties in terms of interoperability and communication between them, and problems of security, given that diverse devices are being used to access governmental services. Furthermore, there are other challenges related to the required infrastructure, such as sensors, digital tags, and storage and data management solutions to address the rising volume of data, and to the incorporation of AI, which may enable devices and systems to make decisions. At a non-technical level, it is important to: know the reasons for individuals’ and organizations’ reluctance to adopt IoT; consider the required organizational reform and the redesign of certain processes; allocate the necessary financial resources to enable such changes; establish an institutional and legal framework; reflect on security, privacy, and trust; and invest in collaborative partnerships with other entities, particularly in the private sector (Papadopoulou et al. 2020).

There are also various difficulties associated with big data, one of which concerns the management of data quality. Different sources of data can produce redundant or contradicting information, which reduces the accuracy of the insights that are obtained and, subsequently, can result in poor decisions. Standardization and the integration of data from a variety of sources are difficult tasks when dealing with big data and require expertise and a significant amount of time (Lim et al. 2018). Furthermore, big data deals with different types of data, some of which can be imprecise. This is the case with sentiment data, which is gathered via social media, is subjective, and relies on human judgment; it is therefore devoid of objectivity. This subjectivity becomes a challenge when it comes to, for example, machine learning, because it is difficult to learn from this type of data (L’heureux et al. 2017). Big data also poses challenges in terms of managing privacy and security, data governance, and the exchange of information and data. Also, at this management level, it is important to consider the intricacies related to data ownership and to the financial investment it requires. Even though big data can be a valuable business ally, it requires interpretation and analysis, which demand particular skills that are not always readily available (Sivarajah et al. 2017).
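To make the data-quality problem concrete, the sketch below reconciles contradicting readings about the same entity from three hypothetical feeds using a trust-weighted average; the records, trust scores, and the weighting rule are illustrative assumptions rather than a standard method.

```python
# A minimal sketch (illustrative data) of one data-quality step:
# reconciling contradicting readings about the same entity from
# several feeds with a trust-weighted average.
records = [
    {"source": "feed_a", "value": 21.4, "trust": 0.9},
    {"source": "feed_b", "value": 21.6, "trust": 0.8},
    {"source": "feed_c", "value": 35.0, "trust": 0.2},  # contradicts the rest
]

# Weighting by assumed source trust downplays low-credibility feeds
# instead of letting the outlier corrupt downstream analytics.
total_trust = sum(r["trust"] for r in records)
reconciled = sum(r["value"] * r["trust"] for r in records) / total_trust
print(f"Reconciled value: {reconciled:.1f}")  # ~22.9
```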

8.5 XR and the Boundaries Between Realities

XR can be understood as an “umbrella term to represent all computer-mediated reality technologies that merge the physical and virtual worlds for the enhanced experience” (Gong et al. 2021), “encapsulating augmented reality (AR), virtual reality (VR), and mixed reality (MR)” (Chuah 2018, p. 1).

8.5.1 XR in Healthcare

AR and VR seem to be most popular in the health sector, especially in the field of surgery, for training and assisting performance; in psychology, to assist patients with certain conditions such as phobias; and in rehabilitation, to help victims of stroke, for example (Muñoz-Saavedra et al. 2020). Surgery can certainly benefit from XR. A study of cardiothoracic surgery (Sadeghi et al. 2020) concluded that, although the use of XR for surgical purposes is still in its infancy, it can be a valuable means of improving preoperative planning using virtual reality; in the same study, intraoperative navigation supported by augmented reality guided the surgical procedure. Moreover, a study on the use of AR in orthopedic surgery reported that its use is increasing and attracting more interest; the benefits include better accuracy while performing surgery, less exposure to radiation, and reduction of surgery time (Casari et al. 2021).

In the mental health arena, virtual reality can be used to create outdoor experiences, especially for those without access to the natural environment, promoting health and emotional well-being (Browning et al. 2020). In addition, VR can be used to offer exposure-based techniques to address mental health conditions. Patients have the opportunity to safely confront their phobias within a controlled environment. This type of treatment has proven to be effective for numerous conditions. VR is an important tool for clinical assessment, where real-world experiences can be simulated to allow the clinician to observe patients in a situation that mimics what occurs in their daily lives (e.g., anxiety, paranoia, fear). The usefulness of VR in the mental health domain is not limited to treatment or clinical assessment; it is also a valuable research tool. It can, for example, be used to conduct research on dangerous or inaccessible scenarios and, in neuroimaging research, to explore brain activation within naturalistic contexts (Bell et al. 2020).

In rehabilitation, VR and AR technologies are showing their potential by offering novel experiences to patients during their rehabilitation sessions and, consequently, strengthening their engagement. This increased patient engagement can improve physical outcomes. Moreover, these technologies enable the creation of “remote physical therapy,” where AR and VR are implemented in an IoT ecosystem to collect, store, and analyze data on the patient’s performance, enabling the physiotherapist to evaluate progress and outcomes in order to design and implement tailored recovery plans (Postolache et al. 2021). In the context of post-stroke rehabilitation, the benefits of VR have been acknowledged by practitioners, as it can increase therapy time and offer supplementary sessions. In addition, generally speaking, VR can improve patient motivation in the sense that individuals are willing to practice rehabilitation exercises more frequently and/or with more intensity as a result of the engaging scenarios (Levin 2020).

8.5.2 The Business Affordances of XR

Immersive technologies offer consumers the opportunity to enhance their engagement and become co-creators with companies. XR makes it possible to improve the customer experience in many areas. When visiting an art gallery, for example, visitors can scan the brochure for more information using an AR application, or engage in a game with a historical avatar in a virtual environment. During the visit, it is equally possible to use AR to obtain more information about the displayed pieces of art and to take advantage of VR to be transported to a remote setting (Flavián et al. 2019).

XR technology has the potential to assist with a wide range of manufacturing activities, covering all processes from design to service. The traditional work routines in manufacturing companies can be greatly improved by XR systems (Gong et al. 2021). In the automotive industry, VR can be used in product design to improve vehicle safety. For example, it can be used to help assess drivers’ visibility and, in terms of ergonomics, to establish the best location for a door handle or to determine the reachability of the control panel. This assessment can be performed with VR, and several features can be rearranged in the virtual cabin (Berg and Vance 2017). Also, within the industry sector, there is growing potential for AR to improve technical manuals; previous research has shown AR-based manuals to be clearer than other formats (Gattullo et al. 2019).

AR is regarded as one of the most promising technologies for technical manuals in the context of Industry 4.0, although the implementation of AR documentation in industry remains challenging because specific standards and guidelines have yet to be established. Gattullo et al. (2019) propose a methodology for converting existing “traditional” documentation and for authoring new manuals in AR in compliance with Industry 4.0 principles. The methodology is based on the optimization of text usage with ASD Simplified Technical English, the conversion of text instructions into 2D graphic symbols, and the structuring of content through the combination of Darwin Information Typing Architecture (DITA) and Information Mapping (IM). The approach was tested with a case study of a maintenance manual for hydraulic brakes and validated with a user test and subjective feedback from 22 users, confirming that the manual produced by this methodology was clearer than other templates.

In real estate, there are AR applications that enable house hunters to scan neighborhoods for possible houses, providing them with information on sale prices, taxes, the actual size of the lot, and other details that allow prospective buyers to make informed decisions. When shopping for groceries, clients with dietary constraints can examine products with AR-enabled smart glasses and identify whether a product is safe or suited to their specific needs. VR can offer 360-degree tours of remote places, and it can be a powerful tool for businesses with online shopping, as it provides more accurate depictions of their products (Farshid et al. 2018). Additionally, in retail, AR allows consumers to see products and, in some cases, virtually try them prior to purchasing (Slater et al. 2020).

8.5.3 The Challenges of Bridging Realities

Despite the fact that AR and VR applications are becoming increasingly user-friendly and accessible, there are several difficulties in their creation. The creators of AR and VR applications face barriers such as the lack of specific design guidelines and examples, and the difficulty of designing story-driven experiences (Ashtari et al. 2020). There is also the issue of cost, which is high in terms of equipment, since high-end systems are required to run VR environments. The development of VR applications is financially demanding, as are their maintenance and associated devices (Garrett et al. 2018). In addition, XR systems face substantial challenges concerning data exchange between different systems. Data incompatibility affects the quality of visualization; this is an important issue for these systems, as they are subjected to heavy data interchange. Moreover, they present some issues in terms of interactive design (Gong et al. 2021). In the context of mixed-reality (MR) devices, certain scenarios could present significant challenges. For instance, people who use MR headsets while traveling could experience motion sickness, could hinder the effectiveness of safety mechanisms such as seat belts and airbags in the event of a car accident, and could display socially reprehensible behavior (McGill et al. 2020). In addition, since MR platforms offer interaction between reality and virtuality, they demand the deployment of precise tracking methods for both types of objects, which can be challenging. Another difficulty arising from MR platforms pertains to the need for suitable display technology capable of ensuring adequate resolution as well as contrast (Rokhsaritalemi et al. 2020).

In regard to AR, when used in education, despite various reported benefits (Isaias and Reis 2016), it has been associated with cognitive overload, an excessive amount of information, a high cost of implementation, and technical complexity (Alzahrani 2020). Furthermore, when users are presented with AR, they tend to concentrate excessively on the content that is being presented virtually, while being oblivious to their immediate physical environment. In situations where AR is used together with other formats, and information is being delivered by humans as well, it can distract users and cause them to miss important information, undermining the role of the human instructor (Syiem et al. 2020). Regardless of all the benefits associated with VR, it is essential to understand that VR technology is valuable not as a replacement for the real experience, but as a complementary tool that can assist individuals to engage in and benefit from real-world experiences (Flavián et al. 2019). Also, VR systems require users to have some technical expertise, users may experience cybersickness, headaches, or eye strain, and the head-mounted devices used for VR can be uncomfortable (Garrett et al. 2018). Finally, it is important to consider the implications of using systems that mimic reality so closely. The evolution of VR has improved the user experience in terms of immersion, with virtual worlds and characters becoming increasingly similar to reality. When immersed in these scenarios, which blur the boundary between reality and virtuality, users can become confused about what is real and what is not (Pan and Hamilton 2018).

8.6 Reflecting About the Future of ICTs

The aforementioned trends and technologies suggest a positive future where technologies, despite their shortcomings, can improve and advance society. Nonetheless, as technology progresses and introduces innovations that are applicable in all sectors, it is important to examine two fundamental aspects of increased digitalization and rapid technological development: the digital divide and sustainability.

8.6.1 Sustainability

Sustainability is a concept that is becoming increasingly important in an era of growing awareness of planet Earth’s urgent need for solutions to its ecological, economic, and social issues. Hence, sustainability encompasses a range of interconnected problems concerning these three dimensions: endangered ecosystems, poverty, and resource depletion. The pursuit of sustainability is an attempt to foster systems and processes with the ability to survive in the long-term future, ensuring a sustainable ecology, promoting economic opportunity, and fostering social inclusion (Robertson 2021).

At an environmental level, ICTs have the potential to reduce CO2 emissions, particularly since they play a pivotal role in conserving energy in smart cities, improving transportation and electrical grids, and developing smarter industries and energy-saving solutions. Paradoxically, ICTs also have a detrimental effect on the environment: they require a great deal of technological equipment and devices, they consume enormous amounts of energy, and they generate large volumes of electronic waste that must be recycled. For this relationship to become positive, ICTs need to reach, as is already happening in some developed countries, highly sophisticated levels of development (Añón Higón et al. 2017). In order to minimize the negative impact of digitalization, companies must focus on developing sustainable digital solutions that can advance their digitalization and sustainability efforts simultaneously (Lichtenthaler 2021). The term “digitainability” refers exactly to this relationship, as it concerns the “cross-fertilisation between the process of digitalisation and sustainable development” (Gupta et al. 2020, p. 9285).

Concerning the economy, ICTs have the potential to foster economic growth. However, at the same time, they can exclude certain populations, thereby exacerbating economic disparities; thus, it is important to ensure more equitable access to and use of ICTs. In terms of what ICTs can specifically do for economic prosperity, the possibilities include the use of big data to obtain information on those who live in poverty: the data mined from sensors and social media can inform interventions; increased access to mobile phones can assist new entrepreneurs to establish their businesses; and online learning platforms can facilitate training and knowledge transfer (International Telecommunication Union 2017). In agriculture, ICTs and big data can assist farmers with the timely identification of epidemics and plagues (Tjoa and Tjoa 2016).

ICTs can play an important role in the promotion of social equity and well-being. In the health sector, ICTs are revolutionizing the delivery of healthcare services by providing digital solutions, such as telemedicine at home, to alleviate the pressure on hospitals; offering online educational material to medical professionals worldwide; and deploying a variety of technologies that assist in the monitoring of health conditions. ICTs also promote social equality by improving communication through mobile phones and Internet connectivity (International Telecommunication Union 2017). Moreover, ICTs can be used in the education domain to give access to education to more people globally, through the development of online learning platforms following the MOOC format (Tjoa and Tjoa 2016).

8.6.2 Digital Divide

It is estimated that around 53% of the world population has access to the Internet; hence, a considerable number of people still have little or no possibility of accessing online platforms and reaping their benefits (United Nations Conference on Trade and Development 2020). In an era marked by rapidly evolving ICTs and increasing digitalization, those who have no access to technologies become increasingly distanced from those who have the means to access and master technology. Hence, it is important to reflect on all aspects of the digital divide.

The notion of the digital divide has evolved from a term focused on access to a more encompassing conceptualization that includes other inequalities regarding the use of the Internet. The digital divide has three levels: level one pertains to Internet access itself; level two refers to the disparities that emerge from individuals’ own motivations, the skills that they have, and the purpose(s) for which they use the Internet; and level three concerns the disparities related to the various social, economic, personal, and political benefits that individuals can obtain when accessing the Internet (Ragnedda and Ruiu 2017). The digital divide has a range of consequences that impact most areas of life and daily living. The increasing digitalization of society has created, in the majority of developed countries, an expectation that everyone has access to the Internet, for example, when communicating with the government, when applying for a job, when socializing with friends, and for entertainment. Hence, inaccessibility can exclude people economically, socially, and culturally. Moreover, globally speaking, the lack of access can have a detrimental effect on innovation, development, and economic prosperity (Van Dijk 2020). Overall, the digital divide results in limited opportunities for those who are already disadvantaged. The effects of the digital divide became even more apparent with the COVID-19 pandemic, with people having to use the Internet to work and/or continue their education, and for social support and information. The lack of access to the Internet has compromised the quality of people’s day-to-day lives during lockdown periods (Lai and Widmar 2020).

Addressing the digital divide requires international cooperation and investment in key areas. It is essential that ICT infrastructures be developed or improved, particularly in poorly served areas where access is very restricted. Apart from being accessible, the Internet needs to be efficient, fast, and reliable. In addition, individuals’ literacy, numeracy, and digital skills need to be improved, with particular consideration for marginalized populations, senior citizens, women, and individuals with disabilities. Hence, it is important that the development of skills be inclusive (United Nations Conference on Trade and Development 2021). One of the most significant difficulties of connecting under-served areas is the financial cost that it incurs, given these areas’ low income. To address these issues, 6G could be used to improve both the heterogeneity and the scale of the network, while also improving the performance of the network in general (Chaoub et al. 2021).

8.7 Conclusion

The increasing digitalization of society is welcomed in many sectors, which currently benefit from all the advantages it brings. Nonetheless, despite the increasing pervasiveness of technology, many organizations and individuals are reluctant to adopt technological innovations that are perceived to have disadvantages or shortcomings. This chapter examined several innovative technologies, focusing on their advantages as well as on the difficulties associated with them.

The role of social networks in education was examined, particularly in regard to their potential in building online learning communities and strengthening online interaction. In the business and industrial sectors, ICTs can streamline operations and procedures and provide opportunities for career advancement and insight into customers’ preferences. In regard to challenges, two essential themes were discussed: anonymity/privacy and the dissemination of misinformation/disinformation. The role of AI in education was investigated, including its benefits ranging from personalization to improved learning/teaching performance. In the healthcare sector, AI can offer intelligent coaches and automated solutions. Ethical questions and challenges were related to the possible replacement of humans and algorithm transparency. IoT and big data were explored in the context of smart cities for their support of data-driven decisions and intelligent solutions and in e-government for their role in providing vital information and improving service delivery. The issues that were identified pertained, for instance, to interoperability and data quality. The advantages of XR within the healthcare sector were presented, both for physical recovery and for mental health, and within the business sector, for an improved customer experience and more efficient industrial processes. The difficulties associated with XR relate to issues such as technical complexity and a blurring between reality and virtuality.

Looking forward, it is important to ensure the sustainability of these technologies and a more equal distribution of access and skills, which will help to bridge the digital divide. Future research could complement this conceptual account of these technologies by incorporating the views of experts in the different areas, using focus groups or semi-structured interviews.