
1 Introduction

The twenty-first century is witnessing unprecedented growth in the development and use of robots. With the upcoming Industry 4.0 revolution, we are approaching the era of robotics [39]. Robotic systems already play important roles, from performing medical procedures to serving as salespeople in shopping centers, and robots are even beginning to take the place of human companions. This remarkable growth, from simple machines to autonomous humanoid robots, has become possible because of advances in Artificial Intelligence, Natural Language Processing, sensor technology, and processing power.

Different types of robots are used to automate work, each designed to suit the specific nature of the task. They can be grouped into three general categories: Industrial Robots, Service Robots, and Specialized Robots [23]. Nowadays, these robots serve multiple purposes seamlessly alongside humans in industry as well as at home, handling heavy, mundane tasks effortlessly. They are also becoming reliable in specialized tasks such as healthcare assistance, surveillance, space exploration, and rescue missions, and they help as nurses or companions for older people. The vehicle industry is being revolutionized by the rise of autonomous vehicles. All these advancements illustrate how the gap between science fiction and reality is narrowing.

As we embrace the help of robots in our daily lives, it may not be long before these intelligent machines co-exist with us in every sector of society. Robotic help can undoubtedly simplify our lives, but it comes with potential privacy and security risks to our personal and social lives. It is therefore imperative to develop methods to prevent the different kinds of privacy and security threats that robots pose to humans. Existing robots are not free from such threats, and future versions are unlikely to be different. There are several questions concerning privacy and security that a robot must answer before we can consider it safe to release into society. If we cannot ensure that a robot's mechanisms answer these questions, we may have to reassess deploying robots among humans because of the inherent risks they pose to human life. In this paper, we explore a few of these questions.

In the following sections, we review the growth of robotics and several privacy- and security-related questions that need our attention.

2 Literature Review

The proliferation of robots in our daily lives is accelerating rapidly, and with it comes a rise in potential dangers. The first death caused by an industrial robot was recorded as early as 1979 [53], and several further deaths and injuries caused by robots have followed [25]. Even though robot R&D companies are trying to implement policies for secure interaction between humans and robots, new threats arise with the development of new robot technologies.

Today, robots serve in many roles, such as security guards, salespeople, helping hands at home, and nurses. In emergency situations, humans might not follow the instructions of robots acting as security guards [2]. An open question is: what would happen if people refused to take commands from robots? Will the robot force humans to comply or let them pass? Trust in robot services has not yet been fully established. People are concerned about their security; they are skeptical about letting unknown robots into their living spaces [8]. Trust also depends on the appearance of robots; in some cases, people may feel threatened by humanoid robots that perform better than they do at work [57].

Robots are vulnerable to various forms of cyberattack. Clark et al. present different attack scenarios [11], for example, buffer overflow attacks to take control of companion robots, attacks that push corrupted firmware updates to automated vehicles, and hardware backdoor attacks on military drones. Other researchers give a comprehensive view of cybersecurity issues such as malware, Trojans, replay attacks, fault injection, and tampering attacks [28, 54, 58].

Automated vehicles are another target for attackers, who may use jamming, high-brightness infrared LEDs, Digital Radio Frequency Memory (DRFM), and similar techniques to provide false navigation data [40]. Additionally, autonomous vehicles are generally connected to users' smartphones; Sugawara et al. [46] presented an audio injection attack on the voice-controlled smartphone systems connected to automated Tesla and Ford cars. The classification systems of autonomous vehicles are also at risk: the work in [15, 31] demonstrated that a simple perturbation of a traffic sign can make a CNN classification model misidentify it, an attack that poses significant security risks and could cause chaos on roadways. Unmanned Automated Vehicles (UAVs), such as drones and rovers, are likewise in danger. Dash et al. [13] demonstrated three attacks on UAVs protected by control invariants (CI) [10] and the extended Kalman filter (EKF) [9]: injecting small amounts of false data into the control system to change the vehicle's position and angular orientation, injecting time delays so the UAV receives commands late, and injecting malicious code to switch the UAV's mode. In [50], Tu et al. presented two attacks (i.e., Side Swing [22] and DoS [21]) on cyber-physical systems, manipulating two automatic self-balancing robots by spoofing their embedded Micro Electro Mechanical Systems (MEMS) inertial sensors.
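A minimal sketch of the residual-gating idea that such EKF-based protections build on is shown below: a measurement whose innovation is statistically too large is rejected as possible false-data injection. The 1-D constant-velocity model, noise values, and threshold are all illustrative and are not taken from the cited works.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter with an innovation (residual)
# gate: measurements whose residual is too large are flagged as possible
# false-data injection. All model parameters are illustrative.
def kalman_step(x, P, z, F, H, Q, R, gate=3.0):
    x_pred = F @ x                       # predict state
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    nis = float(y.T @ np.linalg.inv(S) @ y)   # normalized innovation squared
    anomalous = nis > gate ** 2
    if not anomalous:                    # skip the update if it looks injected
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_pred = x_pred + K @ y
        P_pred = (np.eye(len(x)) - K @ H) @ P_pred
    return x_pred, P_pred, anomalous

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
H = np.array([[1.0, 0.0]])               # measurement: position only
Q = np.eye(2) * 1e-4
R = np.array([[0.05]])                   # conservatively large noise covariance

rng = np.random.default_rng(0)
x, P = np.zeros((2, 1)), np.eye(2)
true_pos = 0.0
for t in range(50):
    true_pos += 1.0 * dt                            # vehicle moves at 1 m/s
    z = np.array([[true_pos + rng.normal(0, 0.05)]])
    if t == 30:                                     # attacker injects an offset
        z += 5.0
    x, P, flagged = kalman_step(x, P, z, F, H, Q, R)
    if flagged:
        print(f"step {t}: measurement rejected (possible injection)")
```

Stealthy attacks of the kind described above succeed precisely by keeping the injected error small enough to stay under such gates, which is why minor false-data injection remains hard to detect.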

Telerobots [38] come in handy in medical surgery, military operations, and rescue missions. In [5, 7], the authors show that telerobots are vulnerable to common cyber attacks such as viruses, worms, and malware, and they also mention security threats such as command manipulation, denial of service, and communication loss. Recently, several medical centers filed lawsuits against Intuitive Surgical, a surgical robot manufacturer, alleging that they were coerced into signing restrictive repair contracts forcing them to buy new parts from the company [42]; in one instance, an operation had to be postponed because of the use of third-party repairs. This incident adds another dimension to the challenges of surgical robots. Shah et al. [44] demonstrated a successful side-channel attack, Fingerprint, on surgical robots. Other potential side-channel attacks on robots include radio-frequency attacks [45] and cache-based attacks on automated vehicles [32].

Lutz et al. [33] observed robot usage from a different perspective, arguing that social robots might affect the psychological and social privacy of human beings. Van et al. [17] express concern about whether we are compromising privacy in exchange for robotic services. The Guardian reported [18] on wifi-enabled Barbie dolls that can be hacked and turned into surveillance devices to spy and collect information without anyone's knowledge. Robots are also becoming companions of humans, sometimes as caregivers, which raises ethical concerns; some authors fear, for example, that companion robots might create a hallucinatory reality for some people [6].

3 Future Evolution and Security Questions

Robots are evolving and becoming more intelligent, precise, and human-like. Understandably, people are apprehensive about whether robots will become a threat to our lives, as depicted in science fiction movies. Below, we elaborate on several areas of possible future advancement in robotics and the privacy and security questions that come with them.

  • Cyber Security: Like any other device, robots are now connected to wired and wireless networks for smooth data exchange and communication. However, robots suffer from many security issues, such as a lack of authorization, authentication, secure networking, tamper-resistant hardware, privacy, and integrity [54]. Robotic networks and computer networks differ in nature, so countermeasures that work on general computer networks may not work on robotic networks [52]. The Robot Operating System (ROS) is also becoming popular among developers; nevertheless, ROS is vulnerable to attacks such as DoS and DDoS, malware, buffer overflows, and malicious code injection [11]. A minimal message-authentication sketch appears after this item.

    Ransomware is another concern for robot users. In [34], Mayoral-Vilches et al. present Akerbeltz, a ransomware attack on industrial robots that locks the robot and encrypts its data, cutting it off from the vendor network. The attack can be carried out simply by connecting a USB device to the robot or by remotely accessing an adjacent network. Another ransomware attack was demonstrated on a SoftBank Robotics NAO humanoid robot [29].

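    As a purely conceptual illustration of the missing authentication and integrity noted above (it does not use any actual ROS API), the sketch below shows how a shared-key HMAC could let a robot reject forged or tampered command messages; the key and message format are hypothetical.

```python
import hmac, hashlib, json

# Conceptual sketch (not actual ROS API): authenticate command messages with
# an HMAC so a robot can reject forged or tampered commands. The shared key
# would need to be provisioned securely in practice.
SHARED_KEY = b"provisioned-out-of-band"   # hypothetical key

def sign_command(command: dict) -> dict:
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_command(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, message["tag"])

msg = sign_command({"joint": "arm_1", "velocity": 0.2})
assert verify_command(msg)

msg["payload"] = msg["payload"].replace("0.2", "2.0")   # attacker tampers
print("accepted" if verify_command(msg) else "rejected tampered command")
```

    In practice, secure key provisioning and replay protection (e.g., sequence numbers or timestamps) would also be needed; this is the kind of gap that security extensions for ROS aim to close.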
  • IoT Connections: Robots are becoming part of the IoT and interconnecting with other devices; in homes, industries, and offices, it is common to connect robots with home assistants, smartphones, and TVs. Consider a scenario where an industrial robot is integrated with other devices within a multi-purpose company. If an unauthorized user takes control of the robot, the whole system is compromised: the attacker can take over other devices and perform dangerous tasks, and the breach may lead to injury, financial damage, and data theft. It is therefore necessary to secure this additional, mobile attack surface: the robot itself. Another scenario is depicted by Amoozadeh et al. [4], where each vehicle receives beacon messages from the immediately preceding vehicle using the IEEE 802.11p protocol. The authors demonstrated security attacks (e.g., message falsification, spoofing, distributed DoS, and radio jamming), system-level attacks (e.g., hardware or software tampering), and privacy attacks (e.g., eavesdropping) on different layers of automated vehicle networks. A compromised vehicle network can endanger the passengers of all connected vehicles, and the attacker can violate privacy by leaking personal information such as vehicle identity, position, speed, and acceleration. A toy beacon plausibility check is sketched below.

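    The toy check below illustrates one defensive idea against beacon falsification: a follower rejects a beacon whose claimed motion is physically implausible given the previous one. The message fields, bounds, and thresholds are hypothetical and only meant to make the idea concrete; they are not taken from [4].

```python
from dataclasses import dataclass

# Toy misbehavior check for vehicle beacons: reject a beacon whose claimed
# motion is physically implausible given the previous beacon. Fields and
# thresholds are hypothetical, for illustration only.
MAX_ACCEL = 5.0   # m/s^2, generous bound for a passenger vehicle

@dataclass
class Beacon:
    timestamp: float   # seconds
    position: float    # metres along the lane
    speed: float       # m/s

def plausible(prev: Beacon, curr: Beacon) -> bool:
    dt = curr.timestamp - prev.timestamp
    if dt <= 0:
        return False                               # replayed or reordered message
    implied_accel = (curr.speed - prev.speed) / dt
    predicted_pos = prev.position + prev.speed * dt
    position_error = abs(curr.position - predicted_pos)
    return (abs(implied_accel) <= MAX_ACCEL
            and position_error <= 0.5 * MAX_ACCEL * dt ** 2 + 1.0)

prev = Beacon(timestamp=10.0, position=100.0, speed=20.0)
honest = Beacon(timestamp=10.1, position=102.0, speed=20.1)
forged = Beacon(timestamp=10.2, position=150.0, speed=35.0)   # falsified message

print(plausible(prev, honest))   # True
print(plausible(prev, forged))   # False
```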
  • Mutual Authentication: Authentication has become one of the main concerns in robotics; mutual authentication is necessary to establish secure communication between robots and humans. Several approaches have been proposed to authenticate users, such as face recognition, voice recognition [52], and behavior-based recognition [3]. However, as we employ an increasing number of robots in our work, the robots' identities need to be verified as well. Some delivery robots [26, 43, 48] use OTPs (One-Time Passwords) or mobile applications on users' smartphones for authentication, but these methods are insufficient because they are susceptible to attacks [36]. Adi et al. proposed an unclonable identity for robots based on the work in [1]; this identity would be unique, like human DNA. However, the process is complex, expensive, and not feasible for mass production. Later, Gavrilova et al. [16] presented the idea of using biometric principles (e.g., physical and behavioral characteristics) to recognize and authenticate virtual avatars. A minimal challenge-response sketch follows this item.

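    The sketch below shows one simple form mutual authentication could take for a delivery hand-off, assuming a pre-shared key established when the order is placed: each side challenges the other to prove knowledge of the key. This is a hypothetical illustration, not a description of how the cited delivery robots work, and a real protocol would also need replay protection and a secure transport.

```python
import hmac, hashlib, os

# Sketch of mutual challenge-response authentication between a delivery robot
# and a user's phone app. Both sides prove knowledge of a pre-shared key
# without ever sending it.
def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

key = os.urandom(32)          # shared out of band (e.g., at order placement)

# User verifies the robot.
user_challenge = os.urandom(16)
robot_response = respond(key, user_challenge)
assert hmac.compare_digest(robot_response, respond(key, user_challenge))

# Robot verifies the user.
robot_challenge = os.urandom(16)
user_response = respond(key, robot_challenge)
assert hmac.compare_digest(user_response, respond(key, robot_challenge))

print("mutual authentication succeeded")
```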
  • Autonomous Robot: The current generation of robots is not fully autonomous; robots still depend on pre-programmed commands. However, several initiatives are underway to push this boundary and give robots some degree of autonomy, e.g., unmanned vehicles and the Tesla bot [49].

    Military services are also trying to use autonomous robots for war, spying, bomb defusal, and other dangerous jobs. However, the use of robots in war is controversial, as it can violate international humanitarian law [47]. With war robots, the question arises: what happens when an order contradicts the robot's own rules? For example, a robot receives an order to attack a house, but its sensors detect that the house is full of children; the order contradicts the robot's directive to minimize civilian casualties. Should the robot be allowed to act on its awareness of such situations, or should the order override the robot's system [30]? The toy sketch below makes this arbitration question concrete.

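    The sketch referenced above is a hypothetical illustration only, not a real control or targeting system; it simply encodes the "constraint wins" answer, refusing the order and escalating to a human operator when civilians are detected.

```python
from dataclasses import dataclass

# Toy illustration of the order-versus-constraint conflict: a command filter
# that refuses any command whose target area is sensed to contain civilians.
# All names and fields are hypothetical.
@dataclass
class Command:
    action: str
    target_area: str

@dataclass
class SensorReport:
    area: str
    civilians_detected: bool

def arbitrate(cmd: Command, report: SensorReport) -> str:
    # Hard constraint: minimizing civilian harm overrides the incoming order.
    if report.area == cmd.target_area and report.civilians_detected:
        return "refuse and escalate to human operator"
    return "execute"

print(arbitrate(Command("engage", "house_12"),
                SensorReport("house_12", civilians_detected=True)))
```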
  • Robot Learning: Robot Learning [12] is a popular way to teach robots without explicitly programming every movement. Robots can learn from demonstrations, teleoperation, or observation [27], and learning methods can be supervised, unsupervised, transfer, or reinforcement learning [41]. Robots adapt their decisions based on what they perceive from the environment or the dataset. Attackers can intentionally manipulate data during the learning process, for example by injecting poisoned data into the training set, spoofing sensor data (e.g., camera or audio), or changing the learning conditions. As a result, robots may learn unintended behaviors that endanger their surroundings. For example, Yang et al. [55] demonstrated an adversarial attack on a reinforcement-learning-based robot learning system in which the attacker uses a pulse to generate random observations, degrading learning performance. A toy poisoning example is given below.

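    The toy example below illustrates data poisoning in learning from demonstration and a simple robust-statistics defense: a handful of injected demonstrations shifts a naively learned speed limit, while median-based outlier filtering largely restores it. The task, numbers, and defense are illustrative and unrelated to the specific attacks in [55].

```python
import numpy as np

# Toy poisoning example: a robot learns a speed limit as the mean of
# demonstrated speeds. A few poisoned demonstrations shift the learned limit
# upward; a robust estimate (median-based outlier filtering) resists the
# attack. Numbers are illustrative only.
rng = np.random.default_rng(0)

clean_demos = rng.normal(loc=0.5, scale=0.05, size=100)   # safe speeds, m/s
poison = np.full(10, 3.0)                                 # attacker-injected demos
demos = np.concatenate([clean_demos, poison])

naive_limit = demos.mean()

# Simple defense: drop demonstrations far from the median before averaging.
med = np.median(demos)
mad = np.median(np.abs(demos - med)) + 1e-9
filtered = demos[np.abs(demos - med) < 5 * mad]
robust_limit = filtered.mean()

print(f"clean limit   ~ {clean_demos.mean():.2f} m/s")
print(f"poisoned mean ~ {naive_limit:.2f} m/s")   # pulled up by the attack
print(f"robust limit  ~ {robust_limit:.2f} m/s")  # close to the clean value
```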
  • Integration with ChatGPT: Robots are expected to undergo revolutionary changes through integration with ChatGPT, especially when powered by GPT-4, and several frameworks have already been proposed [19, 51]. Vemprala et al. [51] suggest using ChatGPT prompts to write code automatically so that non-technical users can make a robot perform a certain task; in one scenario, the user asks the robot to cook an omelet and serve it to the user's grandfather. Recently, Google DeepMind introduced Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from web-scale datasets [56]. The model is built on the same technology as ChatGPT; it can interpret these data as plain-language instructions and execute them [14]. A guard-railed plan-validation sketch follows below.

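    Vemprala et al. [51] constrain the model to a predefined library of robot functions; the sketch below illustrates that general guard-railing idea with a hypothetical skill allowlist and plan format, and without calling any real LLM API. Every step of a model-generated plan is validated before the robot executes it.

```python
import json

# Sketch of guard-railing an LLM-generated plan before a robot executes it:
# the model may only emit steps drawn from an allowlisted skill library, and
# every step is validated before execution. Skill names, the plan format,
# and the example plan are hypothetical.
ALLOWED_SKILLS = {
    "move_to": {"location": str},
    "pick_up": {"object": str},
    "place": {"object": str, "location": str},
}

def validate_plan(plan_json: str) -> list:
    plan = json.loads(plan_json)
    for step in plan:
        skill = step.get("skill")
        if skill not in ALLOWED_SKILLS:
            raise ValueError(f"disallowed skill: {skill}")
        for arg, expected_type in ALLOWED_SKILLS[skill].items():
            if not isinstance(step.get(arg), expected_type):
                raise ValueError(f"bad argument {arg!r} for {skill}")
    return plan

# Imagine this JSON came back from the language model for "bring me the eggs".
llm_output = '[{"skill": "move_to", "location": "fridge"}, ' \
             '{"skill": "pick_up", "object": "eggs"}, ' \
             '{"skill": "place", "object": "eggs", "location": "counter"}]'

for step in validate_plan(llm_output):
    print("executing", step)   # a real robot would dispatch each skill here
```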
  • Access Control: Certain robots (e.g., service robots in our homes) continuously surveil us as part of their functions. These robots have access to our personal data; they can take pictures and videos and monitor our locations. If the vendor of such robots unethically retains access to the robot's system during manufacturing and exploits our confidential data, it poses significant privacy and security risks. For example, unauthorized users could collect passwords and credit card information simply by taking photos or videos while the user is entering the data. A minimal access-control sketch appears below.

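    A minimal sketch of the kind of access control this implies is shown below: every request for a sensitive capability is checked against the requester's role and logged, and the vendor role gets no default access to user data. Roles, capability names, and the audit format are hypothetical.

```python
# Minimal role-based access-control sketch for a home service robot: every
# request for a sensitive capability (camera, location history, stored media)
# is checked against the requester's role and recorded in an audit log.
PERMISSIONS = {
    "owner": {"camera.view", "camera.record", "location.read", "media.export"},
    "guest": {"camera.view"},
    "vendor": set(),   # the manufacturer gets no default data access
}

AUDIT_LOG = []

def authorize(role: str, capability: str) -> bool:
    allowed = capability in PERMISSIONS.get(role, set())
    AUDIT_LOG.append({"role": role, "capability": capability, "allowed": allowed})
    return allowed

print(authorize("owner", "camera.record"))   # True
print(authorize("vendor", "media.export"))   # False: blocked and logged
print(AUDIT_LOG[-1])
```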
  • Trolley Problem in Robotics: Imagine a person watching a runaway trolley head toward a track where five people are standing; if nothing is done, those people will certainly die. The observer can divert the trolley to another track, but a single person standing on that track will then be killed. Here arises the ethical dilemma of whether it is acceptable to kill one person to save five. As robots become more involved in society, they will inevitably encounter many such ethical dilemmas in their decision-making, so it is essential to address the trolley problem to mitigate the risks that a robot's actions may pose.


4 Conclusion

The widespread adoption of robots signals an imminent revolution in robotics technology. It may not be long before coexisting with robots becomes the norm, and we must be prepared for the accompanying privacy and security risks to embrace this transition fully. Robotic systems are made of different subsystems and subcomponents, and securing the subcomponents is necessary but not sufficient for protecting the whole system, because the components are integrated with one another and therefore exhibit complex and subtle dependencies and interactions [35]. We need to enforce a robotics framework and a universal policy for developing or modifying robots; such a comprehensive measure would ensure that robots and their manufacturers follow standard user-safety practices. The European Commission has created a voluntary code of ethics and standards for manufacturers and users of robotics technology [37], and IEEE has undertaken The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which aims to ensure that those involved prioritize ethical considerations and the benefit of humankind [20]. However, because these policies are not mandatory, the concerns remain.