1 Introduction

The late Mark Weiser’s [1] vision of ubiquitous computing is slowly becoming reality. The notion of intelligent environments, where computing devices are embedded in the environment and within the very objects that we use daily to accomplish our tasks, is currently being realised in contexts such as classrooms [2], meetings [3], and even smart homes [4]. The computational power may, for instance, free us from some routine tasks, provide us with access to more information related to the task at hand, and so forth.

Obviously, developing a highly sophisticated system such as a ubiquitous, intelligent environment is not without challenges. At a high level, these can be divided into two classes:

  • Technology-related challenges, and

  • Human-related challenges.

Some examples of the first class include studying sensors required in ubicomp (such as [5]), building system software for interoperability and integration (see e.g. [6]), and researching mobile ad hoc networking (such as [7]). For discussion on ubicomp challenges related to computer science in general, see also [8].

As for the latter class, some examples include studying smart home usability (see, for instance, [9]), and tools for performing such studies (see e.g. [10]).

We focus on the class of human-related challenges, and emphasize one of the biggest threats usually associated with ubicomp: privacy. We stress that the most critical issue when dealing with privacy in ubicomp is a thorough understanding not only of the technical aspects of ubicomp, but also of what privacy is from both a theoretical and the user’s point of view. As a consequence, in this paper we combine knowledge from both ubicomp research and privacy studies in the social sciences. Combining these two approaches yields a theoretical framework, as well as a common terminology, that facilitates the research and design of privacy-aware ubicomp systems.

In the present paper, we aim at understanding privacy in ubicomp via Altman’s existing framework of privacy in the social sciences [11]. Since Altman’s theory is well recognized and significant in the field of social sciences and, further, emphasizes the roles of social interaction and the environment as essential elements in privacy regulation, it provides feasible grounds for this work. We set out to find out whether, by applying the model and extending it when necessary, the same insight could be incorporated into technologically augmented environments.

The paper is structured as follows. First, we review related work in ubicomp privacy, followed by a brief introduction to Altman’s theory. We then continue by identifying the similarities between the framework and ubicomp, extending the model when necessary, and apply the result to typical ubicomp use cases. Finally, we discuss the success of the model and point out future directions.

2 Related work

Altman’s theory has been applied to computer science before. For instance, Palen and Dourish [12] develop a model of privacy, claiming that privacy management is a dynamic response to circumstance rather than a static enforcement of rules. They use Altman’s theory to analyze privacy in computing environments rather than trying to apply and extend the model itself.

Not surprisingly, privacy in ubicomp has been studied extensively. Many studies state that privacy, or the lack of it, is a real concern in ubiquitous computing environments [13, 14]. However, what is understood by privacy varies widely within the ubicomp research community, not to mention between ubicomp researchers and social scientists. As Westin [15] states: “no definition of privacy is possible, because those issues are fundamentally matters of values, interests and power”. Further, Langheinrich [16] argues that perfect privacy protection in ubicomp cannot be achieved, and proposes to build systems that help others respect our personal privacy, enable us to be aware of our privacy, and rely on social and legal norms to protect us from the few wrongdoers.

Regardless of Westin’s claim, some definitions of privacy exist. Westin himself proposes that privacy should be understood as the claim of individuals, groups, and institutions to determine for themselves when, how, and to what extent information about themselves is communicated to others.

Privacy can also be defined as a border between society and one’s personal affairs [14]. Marx [17] extends the concept of borders by introducing natural (governed by senses and physical boundaries), social (the expectation that information is shared within a social group), spatial or temporal (separate aspects of one’s life), and ephemeral borders (based on the assumption that information is not preserved longer than expected). He also discusses the difference between public and private, and argues that the terms are more ambiguous than is generally understood.

One of the earlier works regarding privacy in ubiquitous computing environments is the study by Bellotti and Sellen [18]. They discuss the privacy threats regarding ubiquitous computing in the working environment, and continue by presenting a framework for designing privacy in ubicomp environments. They classify the threats into two categories: (1) technological threats and (2) user interface design-related threats that are coupled with social behavior. However, they fail to refer to the works of Altman or Westin when discussing societal issues and privacy. They discuss two essential aspects of the design for privacy: control and feedback. The design framework proposes that a system should provide control and feedback for at least the following system and user behaviors: capture (information that is picked up), construction (how the information is processed), accessibility (who gets access to collected information), and purposes (what is the information used for).

Price et al. [19] propose a model for user control of privacy. They have identified four layers related to privacy in ubicomp: the regulatory regime a user is currently in; the type of ubicomp service required; the type of data being disclosed; and the personal privacy policy. The model balances the user’s privacy preferences with privacy regulations, and provides means—what they refer to as noise—to protect location privacy. They also discuss balancing the trade-off between compromising privacy and receiving ubicomp services.

Soppera and Burbridge [14] discuss privacy in terms of fair information practices, and technologies for supporting these practices. The work assumes the viewpoint of OECD towards privacy, addressing the following issues: personal data and identity, data collection, and data usage, storage and access. They conclude that anonymity (the users may use a service or resource without disclosing their identities) is a key enabler in maintaining privacy in ubicomp.

Adams and Sasse [20] and Adams [21] discuss privacy in networked and ubiquitous multimedia communications, and mechanisms and policies for protecting the user’s privacy in ubiquitous multimedia applications. Among other things, they have identified four aspects of privacy: information receiver, information sensitivity, information usage, and the context of disclosure. This is partly related to Bellotti and Sellen’s findings [18] of capture, construction, accessibility, and purposes.

Hong et al. [22] propose privacy risk models as a tool for identifying concrete privacy issues in ubicomp, and prioritizing them. They also discuss a reasonable level of privacy. A privacy risk model consists of an analysis part and a management part. The analysis considers both social and organizational issues (such as the users and the different kinds of information involved), as well as technology. Further, it considers the relationship between data sharers and observers. The management part helps in organizing and prioritizing the issues gathered in the analysis, and in identifying solutions.

Jiang et al. [23] have contributed to the discussion on privacy in ubicomp by proposing a principle of minimum asymmetry. Asymmetric information refers to situations in which some actor has private information that is relevant for everyone. By applying the principle of minimum asymmetry, privacy-aware ubicomp spaces and systems should decrease the asymmetry of information between data owners, data collectors, and data users.

Lederer et al. [24] present a conceptual model of what they call “everyday privacy”, referring to end-users’ exposure to and control over personal information collection. The model is constructed by synthesizing Lessig’s societal-scale model [25] with Adams’ perceptual model [26]. In the synthesized model, legal, market, normative, and architectural forces, combined with contextual factors, constrain the possible levels of privacy. Within the constrained range, the user’s subjective values determine the actual level of preferred privacy.

Field studies on ubicomp privacy have also been conducted. For instance, Beckwith [27] has studied privacy in an eldercare facility in a real-world experiment. In his study, he used a model loosely based on Adams and Sasse’s findings [20]. Many issues came up, such as the fact that people sometimes forgot they were being monitored.

Campbell et al. [28] discuss major challenges and requirements for security in ubiquitous computing. They discuss e.g. the extended computing boundary, context-awareness, interoperability, and scalability. They also mention the difference between physical security and digital security. This is a typical example of the relations between security and privacy, where the terms are more or less used interchangeably.

To conclude, as Soppera and Burbridge [14] state, “the examination of privacy in the area of pervasive computing is immature”. Our paper aims at better understanding this phenomenon from an interaction point of view.

3 A brief introduction to Altman’s theory of privacy

Altman [11] provides a social psychological analysis of privacy, emphasizing the role of the environment in the process of privacy regulation. Since much of ubicomp research concentrates on smart environments, his way of understanding privacy is chosen as a relevant basis for our work. In addition, Altman’s theory of privacy is widely referred to in various contexts (see e.g. Petronio [29], Palen and Dourish [12]). Even though Altman’s theory concerns non-technological circumstances, it is applicable in the ubicomp arena.

Altman [11] understands privacy as a dialectic and dynamic boundary control process, which regulates interaction with others. This means that an individual manages social interaction and privacy through different behavioral mechanisms, such as verbal and nonverbal behavior, personal spacing, and territorial responses. Depending on the circumstances, an individual uses these mechanisms differently; one mechanism may substitute for another from situation to situation. Altman sees the role of the environment both as a determinant of behavior and as a form or extension of behavior.

According to Altman, privacy is a central concept providing means to understand the relationship between environment and behavior. Privacy is an interpersonal boundary regulation process by which a person or a group adjusts interaction with others. People alter and adjust the degree of their openness to others and thereby make themselves more or less receptive to social interaction. Altman illustrates this process with the concepts of desired privacy, interpersonal control mechanisms, and achieved privacy.

Altman’s framework of privacy should be understood as a two-way process, in which the self and the others interact with each other. A person, the self, desires an ideal level of privacy with others at a particular time and in a particular social setting. The ideal level means an internal and personal state in which an individual (or a group) is interacting with other people as they wish. Next, we present the key concepts that form Altman’s privacy framework.

Inputs and outputs

Altman uses the terms inputs and outputs to describe people’s behavior in a social situation. For instance, listening to the radio or listening to others’ discussion represents input from the others. Participating actively in a discussion and presenting one’s own views on the subject matter represent output from the self to others. In conclusion, in a state of desired privacy the inputs and outputs are at a level that the self wishes.

Control mechanisms

To achieve the ideal state of privacy, a person has several interpersonal control mechanisms, including verbal and non-verbal behavior, personal space, and territorial behavior. Based on past experiences, immediate interaction possibilities, and personal style, an individual uses a series of control mechanisms in order to adjust the boundaries between themselves and others. Here Altman also brings up the temporal aspect of the boundary regulation process.

Personal space is the invisible separation between self and others. It is “attached” to the self and it is carried wherever one goes. Personal space is not only related to interpersonal distance but it also covers the angle of orientation from others; face to face, side to side, front to back and so forth.

Territory implies control over some particular geographical area or objects. It also refers to ownership of the place including rights to personalize it according to one’s own interests.

Crowding and social isolation

Achieved privacy represents the achieved amount of actual interaction with others. It may be more or less than the desired privacy, or it can match it. If the achieved privacy equals the desired privacy, an optimum level of privacy exists. Altman uses the concepts of crowding and its counterpart, social isolation, to describe breakdowns in the achievement of desired levels of privacy. Crowding refers to a state where achieved privacy is less than desired privacy, caused by the fact that boundary control mechanisms fail to prevent input from others. In practice, this means that the achieved level of social interaction is more than desired. Social isolation means that achieved privacy is greater than the desired level of privacy, which means that a person has been cut off from social interaction, causing, for example, loneliness or boredom.

Looking at Altman’s framework, we discover some notions that are interesting from a ubicomp point of view. For instance, understanding privacy as a two-way process implies that (at least) two parties are involved. At the time Altman wrote his work, the parties were understood as humans. However, in a ubicomp environment, the interaction may take place between humans, between a human and a computational device, or even between devices. All these interactions can potentially affect the level of achieved privacy for a person in the environment (or a person remotely present, for that matter). Next, we take a deeper look at Altman’s framework from a ubicomp point of view by first analyzing the characteristics of ubicomp that can potentially impact the level of achieved privacy, and then extending and refining Altman’s theory to match the identified features.

The analysis of Altman’s theory in ubicomp environments is carried out in two ways: first, by merging the previous knowledge of ubicomp interactions and privacy regulation into one theoretical framework, and second, by analyzing typical ubicomp scenarios. Two of the scenarios are presented later in this paper.

4 Analysis of Altman in ubiquitous computing environments

We understand ubicomp environments as a mixture of real and digital spaces. People, their computational devices (such as mobile phones), as well as the interactive environment, constitute the actors in ubicomp environments. Therefore, ubicomp should not be understood only as a space where information is transferred from one digital entity to another, but also as a social setting where individuals interact with each other and with the environment.

4.1 Mediated communication

One of the basic differences between interaction in ubicomp environments and interaction in a face-to-face setting is the possibility of mediated communication.

It changes the nature of interaction by extending its temporal and spatial dimensions. Palen and Dourish [12] pointed out that mediated communication can be recorded, which means that the mediated information can be retrieved and re-used after the information exchange has taken place. This also makes controlling personal information difficult or even impossible once it has been disclosed. One of the most typical examples of this is forwarding someone’s email without asking permission from the original sender. In summary, recordability increases the requirement for controlling mediated communication, and thus has direct implications for the control mechanisms used in ubicomp interactions.

The fact that interaction may take place between parties that do not share the same physical space extends the spatial dimension of mediated communication. Palen and Dourish [12] pointed out that users have poor means, or no means at all, to be aware of the audience of this kind of mediated communication. This also increases the requirement for controlling the mediated information.

Mediated communication also has two implications requiring extensions to Altman’s framework of privacy. First, the mediator needs to be added as an actor in the model. We use the term personal device to signify the mediator. Second, the informational content that is mediated should be considered as an object of control and communication. We call this informational content the digital self.

4.2 Personal device as an actor

As pointed out above, the personal device is considered an independent actor. It has four distinguishable roles in ubicomp interactions:

  1. The device works as a tool for mediated communication, such as accessing and using ubicomp services or communicating with other people.

  2. The personal device may communicate autonomously with other digital units in ubicomp environments.

  3. The personal device is used for managing the user’s personal information.

  4. The user interacts with the device.

First, the personal device provides access to ubicomp services. The user may use the ubicomp services manually when they are available. In this case, the role of the device is clearly that of a tool for utilizing the services and mediating information. The device also enables mediated communication with other people, such as conveying presence information to peers, proximity messaging (e.g. via Bluetooth), email, and instant messaging.

The second role implies automated functions that require communication between several devices. An important case is the situation where the user sets the context-aware device to react to the context. It may detect other interactive devices and initiate interaction with them automatically when the defined conditions are met.

The third role points out users’ needs to create, access, edit, use, publish, and share personal information in various contexts. These operations can be done by using the personal device, but also with other applicable means such as public computers.

The fourth role implies that the self can adjust interaction with the device when other people are present in the same space. For instance, one may prevent others from seeing or overhearing the information handled with the device by adjusting the interaction with it.

4.3 Digital self as an object of control and communication

The informational content of an individual in ubicomp environments can be called the digital self. It is a representation of the self in the digital world, and can be any digital data that describes the self and is traceable to the self. Typically, the digital self rarely provides a holistic description of the self but rather a particular aspect of it. For instance, personal web pages, blogs, emails, chats, dynamically updated presence information, and personal files such as music and videos can be considered parts of the digital self. However, if this information were combined, one could get a fairly descriptive idea of the self. The personal information that constructs the digital self may be stored in various storage spaces and can be accessed by multiple devices.

4.4 Inputs and outputs in ubicomp

Defining interaction in the ubicomp context is more complex than in a non-technological setting due to the number of actors and the characteristics of mediated communication. In addition, the inputs and outputs vary according to the capabilities of the computational units present. In order to examine the privacy regulation process in an orderly fashion, we need to examine the interaction, and thus the inputs and outputs, between the actors separately.

Interaction between self and other humans’ devices occurs when several people are in the same place within range of human perception, and at least one of them is using a personal device, such as a mobile phone. In this kind of setting, the input that the self can get is any aural, visual, or haptic output from the other humans’ devices. For instance, the self may see the display of someone’s personal device.

Outputs from the self to others’ devices are directly related to the capabilities of the devices. For instance, the others’ devices may be equipped with an audio UI, in which case the self may be able to influence or control the devices. In addition, the devices may record audio, video, or still images, in which case even the mere presence and typical behavior of the self can be considered output to the other humans’ devices.

Interaction between self and an interactive environment is similar to the interaction between self and other humans’ personal devices in terms of inputs and outputs. The self can hear, see, or feel input from the interactive elements in the environment; for instance, she may watch a public screen. Accordingly, the environment may detect the presence or activity of the self in the current space, for instance through surveillance cameras.

Interaction between personal device and other humans’ devices is either manually initiated mediated communication or automatic exchange of digital information between devices. Even though the interaction takes place between computational units, it is usually controlled by the users. In this particular case the inputs and outputs can be any digital data. The interaction between the devices can take place in proximity or remotely via a network.

Interaction between personal device and an interactive environment is similar to the interaction between individuals’ personal devices, but it is limited to the proximity context. Typical input from the environment to the personal device is information on the services available in the current space. Output from the personal device to the environment may be, for instance, personal information required to access the services.

The informational content related to the users in the latter two modes of interaction is what we call the digital self. The digital representation of the self can be basically any digital data that describes the user: text, images, videos, avatars, etc. These can also be considered as inputs and outputs in this form of interaction.

4.5 Control mechanisms

The control mechanisms defined by Altman are not sufficient to tackle privacy regulation in the ubicomp context. Some of the control mechanisms in Altman’s theory need to be extended and new control mechanisms need to be defined due to the more complex interaction possibilities in the ubicomp setting. The control mechanisms also vary according to the interaction parties; human–device interaction can be controlled differently than device–device interaction. Table 1 presents examples of control mechanisms available in each communication context.

Table 1 Examples of control mechanisms available in different forms of interaction

Personal space, one of the control mechanisms discussed by Altman, can be understood in ubicomp environments as a combination of the self, personal device(s), and the digital self. The personal device has a major role in providing control over, or access to, the services available in ubicomp spaces, and it is therefore also a significant factor defining the privacy control mechanisms in such environments. In practice, the self may control the output of the personal device to other humans, for instance by hiding the device, or by adjusting the loudspeaker volume or display brightness, so that the information mediated via the device is more or less observable by other people.

Territory in ubicomp environments does not refer only to ownership of a geographical area or objects, but also to ownership and control of devices embedded in a certain environment or services available in a certain space. This takes the role of territorial responses in the privacy regulation process to a new level in the ubicomp context.

Verbal and non-verbal behavior has a similar kind of role in mediated communication as in non-mediated communication. For instance, in a video conferencing situation this control mechanism can be used somewhat similarly to a face-to-face conversation. However, verbal and non-verbal behavior can also be understood as a control mechanism in the interaction between the self and computational devices, if the devices are equipped with audio or gesture user interfaces. This is radically different from its original meaning, but the same terminology can be used in the ubicomp context as well.

Context awareness is commonly used to signify the capability of a portable device to infer the context of the user based on contextual factors, such as physical, geographical, or social attributes [30]. Context awareness can also be used as a control mechanism in a ubicomp environment; the self may manage the information that her device communicates to the environment or to other people’s devices based on pre-defined context attributes.
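
As an illustration, consider the following sketch (in Python; the context attributes, the rule table, and the function are hypothetical and merely illustrate the idea of pre-defined, context-dependent disclosure rules, not any existing system):

    from dataclasses import dataclass

    @dataclass
    class Context:
        """Contextual attributes inferred by the device (names are illustrative)."""
        location: str        # e.g. "home", "work", "public"
        social_setting: str  # e.g. "alone", "family", "strangers"

    # Pre-defined rules: which pieces of personal information the device may
    # communicate to the environment or to other people's devices in a context.
    DISCLOSURE_RULES = {
        ("home", "family"):      {"presence", "location", "calendar"},
        ("work", "strangers"):   {"presence"},
        ("public", "strangers"): set(),          # disclose nothing by default
    }

    def allowed_disclosures(ctx):
        """Return the set of information items that may be communicated in this context."""
        return DISCLOSURE_RULES.get((ctx.location, ctx.social_setting), set())

    # In a public space among strangers, the device discloses nothing automatically.
    print(allowed_disclosures(Context(location="public", social_setting="strangers")))

In this spirit, the device would consult such a rule table before communicating anything on behalf of the self.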

Management of access rights is one of the basic control mechanisms in ubicomp interactions. Users are able to decide and select who, or what kinds of parties, have access to their personal data. Thus, users do not control the information that is conveyed to others as such, but rather the criteria determining who has access to their personal information.
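
A minimal sketch of such an access-rights check, with hypothetical group and data-category names, could look as follows:

    # The user defines which parties (here, by group name) may read which
    # categories of personal data; a request is granted only if the requester's
    # group is explicitly listed. All names are illustrative.
    ACCESS_RIGHTS = {
        "location":   {"family"},
        "presence":   {"family", "friends"},
        "work_files": {"colleagues"},
    }

    def may_access(requester_group, data_category):
        """Grant access only to groups listed for the given data category."""
        return requester_group in ACCESS_RIGHTS.get(data_category, set())

    print(may_access("friends", "location"))   # False: only family may see the location
    print(may_access("friends", "presence"))   # True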

Filtering as a control mechanism means setting certain rules to regulate incoming and outgoing information.

Management of visibility refers to cases where users control the publishing of their personal information to other users. Thus, users control the amount of information that is conveyed to others. For instance, users can select whether or not to publish their Bluetooth ID in proximity communications. Publishing presence information is another concrete example of visibility management; the users can, for instance, decide not to publish their presence information at all, or to publish only some pieces of it.
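
For example, publishing only selected fields of one’s presence information could be sketched as follows (the presence fields and their values are purely illustrative):

    # The user's full presence record, and the fields she has chosen to publish.
    PRESENCE = {
        "status":   "busy",
        "location": "downtown",
        "activity": "meeting friends",
    }
    VISIBLE_FIELDS = {"status"}   # publish only the status, hide the rest

    def published_presence(presence, visible):
        """Return only the presence fields that the user has made visible."""
        return {field: value for field, value in presence.items() if field in visible}

    print(published_presence(PRESENCE, VISIBLE_FIELDS))   # {'status': 'busy'}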

Control mechanisms related to digital self

There are different kinds of means to control the digital self, depending on the medium and the implementation of the communication tools. In many cases, users can express themselves anonymously or by using pseudonyms, such as nicknames, in order to avoid being identified. Therefore, the digital self may not correspond to the real self; it may be true, partly true, or false.

4.6 Actual levels of privacy in ubicomp environments

(Social) isolation as an undesired state of privacy means the inability to access the services or information available in ubicomp environments. This might be caused, for instance, by non-interoperable technologies, other technological limitations, limited access rights, or a lack of information or personal skills in accessing and utilizing the services. Since ubicomp services cover interaction between people, devices, and services, the lack of interaction could plainly be called isolation instead of social isolation, the term that Altman uses.

Crowding, as the counterpart of isolation in ubicomp, is a state in which the personal device cannot control the input from the computational units embedded in the environment or used by other people. Information overflow, the inability to find relevant bits of information in a huge amount of digital data, or receiving irrelevant and undesired information can be considered typical forms of crowding in a ubicomp environment.

Leaking

Altman does not discuss the problem of a state in which the amount of output is more than the desired level of privacy requires. Crowding refers to a state in which the amount of input from other people in the near surroundings is more than the self desires, and therefore it does not address this particular problem. Obviously, the reason why Altman does not consider the amount of output as a potential threat to privacy is the difference in control mechanisms between face-to-face and mediated communication: in a face-to-face setting an individual has clear and well-established means to control social interaction with others, whereas in ubicomp environments they do not necessarily even notice that information about themselves has been communicated to others. We call this undesired state of privacy, caused by unintended information disclosure, leaking.
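
The relationship between the undesired states of privacy can be summarized as a simple classification. The sketch below uses illustrative numeric levels for the desired and achieved amounts of input and output; it is only meant to make the distinction between crowding, isolation, and leaking explicit, not to suggest that these quantities are directly measurable:

    def privacy_state(desired_input, achieved_input, desired_output, achieved_output):
        """Classify the achieved state of privacy relative to the desired state.

        leaking   = more output about the self than desired,
        crowding  = more input from others than desired,
        isolation = less interaction than desired.
        """
        if achieved_output > desired_output:
            return "leaking"
        if achieved_input > desired_input:
            return "crowding"
        if achieved_input < desired_input:
            return "isolation"
        return "optimum"

    # Example: the self discloses more than intended, e.g. via an unnoticed sensor.
    print(privacy_state(desired_input=2, achieved_input=2,
                        desired_output=1, achieved_output=3))   # leaking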

5 Examination of privacy regulation by typical ubicomp scenarios

In this section, we present and analyze two typical ubicomp scenarios from privacy regulation point of view based on the theoretical framework described above. The scenarios are based on ubicomp research carried out in our laboratories since 1997. They present some features and functionalities that are already in a wider use today, as well as some emerging and more visionary features.

The analysis was carried out by first identifying the actors in the scenarios, and then examining both the interactions and the respective control mechanisms by placing the actors in the taxonomy described in Table 1. Finally, the states of achieved privacy in some particular interaction situations were identified and presented.

Scenario I. Kids and Sushi

Tom is a happy father of three. Before leaving for home from work, he opens BigBrother on his terminal and requests a manual update for the “Family” group. As usual, the teenage daughter refuses to reveal her location. However, Tom can see that his wife is already at home, and she has also visited the day-care center to pick up their infant daughter. Their son is still at school, waiting to be picked up. Tom acknowledges the automatically proposed route home via Junior’s school. When he leaves the office complex, the corporate security system records the event, complete with person ID and timestamp. In addition, a surveillance camera records Tom at the front door. On his way towards school, the car scans the wayside advertising for interesting offers for instant oil change jobs. Nothing pops up below the price limit Tom had set. After having picked up his son at the school gate and arriving home safely, Tom finds that his wife’s request to bring some sushi had never arrived, probably due to a glitch in his context filtering software.

5.1 Analysis

The above scenario presents some typical ubicomp interactions. Tom, his mobile terminal, Tom’s car, Tom’s family, presence service, corporate security system, and wayside advertising service are the parties who interact in the scenario.

Tom’s family uses a presence service, which reports the status (at least the location and some history information) of each family member. The family members’ mobile terminals update the status and report it to the others on request. From a privacy point of view this means that the family members are able to restrict who can see each other’s status information; thus, they control the access rights to this particular information.

When Tom requests the status of his family group, he gets the updated status of his family except for his daughter, who refuses to reveal her location. This refers to another control mechanism, management of visibility. Tom’s daughter belongs to the family group defined in the service; therefore, her status information is conveyed to the others by default when requested. However, the daughter has the possibility to control the information that she communicates to others. In this case she does not want others, at least not her father, to know where she is spending her time.

Tom also uses a route finder service that finds the best route based on given criteria: the locations of Tom’s workplace, Junior’s school, and their home. Tom has defined the criteria beforehand, and the device communicates them to the service. In this case, there are no explicit means to control the interaction with the service. The only way of controlling the interaction is not to convey the criteria, which naturally also means that the service cannot provide a relevant route for Tom. Typically, the privacy policy of such a service states that the criteria are processed anonymously without making inferences from the given location information. However, Tom does not have any control mechanisms over the use of this information, and he simply has to trust the service.

The next form of interaction in the scenario is automated proximity information exchange. When Tom leaves his workplace, his work ID is communicated via a short-range radio to the corporate security system. The system adds a time stamp to the work ID information, so that it can follow employees’ working time. The system may also use the information that the surveillance camera records at the front door. Tom may well know that the corporate security system follows and records his routines, but he may not know how the information is used, who has access to it, how long it will be stored, and so forth. Because Tom has no means to control the information exchange, this form of interaction is fairly close to the state of privacy that we call leaking.

In the use case, Tom also scans wayside advertising. He has set filters to get the desired ads and to avoid information overflow. The wayside receiver system in Tom’s car analyses the incoming ads and filters out those that do not meet the rules Tom has set. Filtering can be seen as a control mechanism, since it decreases the amount of received information and therefore prevents crowding as an undesired state of privacy.
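
The filtering rule in this part of the scenario could be sketched, for instance, as follows (the ad structure, field names, and the concrete price value are illustrative, not part of the scenario’s implementation):

    # Tom's filtering rule: keep only oil-change offers priced below his limit.
    PRICE_LIMIT = 40.0   # the limit Tom has set

    def passes_filter(ad):
        """Accept an ad only if it advertises the desired service below the limit."""
        return ad.get("service") == "oil_change" and ad.get("price", float("inf")) < PRICE_LIMIT

    incoming_ads = [
        {"service": "oil_change", "price": 59.0},
        {"service": "car_wash",   "price": 12.0},
    ]

    accepted = [ad for ad in incoming_ads if passes_filter(ad)]
    print(accepted)   # [] -- nothing pops up below the price limit Tom has set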

Still another interesting occasion in the use case is Tom’s wife’s undelivered request to bring some food for dinner. This refers to the commonly known problem of unreliable computational units, which in this case turns into isolation from desired information. The context filtering software aims at blocking unwanted information and granting access to information that meets the preset criteria, but here it has also blocked a request that meets the criteria. In the spirit of Altman, the malfunctioning of this control mechanism can be understood as a factor that causes isolation, an undesired state of privacy.

To summarize, the following aspects were identified in the scenario: control of access rights, management of visibility, poor means of controlling interaction requiring blind trust, leaking, filtering, and isolation.

Scenario II. Home party

Lizzie, a high school student, throws a party for her friends while the parents are away. She has granted the guests access to some of the audiovisual devices in their home, so that they can post images and videos to be shown on the big screen, and play five of their favorite tunes over the house music system directly from their mobile devices. Since some of the friends are in another party, a video share and a text chat line have been set up to pass some of the buzz between the two events. The chat later also spontaneously turns into a channel for semi-anonymous flirting. Lizzie invited everyone to post their picture of the day for later collective voting of the ugliest entry. All the resulting media items from the night are collected in her party archive, where the guests can later retrieve interesting items and potentially delete the material about some embarrassing situations. The parents have no access to the archive, except for the anonymous usage data collected by their home devices, including the robot bartender.

5.2 Analysis

The above use case takes place in a private space, which is shared with the host’s friends. In this kind of environment, the host has more control over the services available in the space than the rest of the users. In terms of privacy, this is the main difference between public and private spaces. The actors in this use case are Lizzie, her friends, their personal devices, the audiovisual devices at home, the party archive, and the robot bartender.

Lizzie’s home is equipped with home electronics capable of communicating with portable devices. Lizzie utilizes these capabilities to create a collective atmosphere at the party by granting the guests access to some of the devices. This involves two control mechanisms: territorial responses and management of access rights. Territorial responses imply that Lizzie, as the host, has control over the devices in the home. Thus, she has the right to decide who is able to use them and how. The management of access rights is the actualized territorial control of the devices in this particular case.

Lizzie also utilizes another way of practicing territorial control in what we call “homecomp” interactions; she limits to five the number of tunes that the guests can share from their mobile devices with the home audio system, and thus with everyone present at the party. This also means that the audio system needs to keep track of the devices that share material with it, in order to control the playback of tunes according to Lizzie’s wish. The guests may not be aware of this, or at least of the amount of information that the system gathers. This may cause leaking as an undesired (and unnoticed) state of privacy.

Sharing tunes and images with other partygoers by utilizing the home audiovisual system is one form of mediated communication. Direct face-to-face interaction is also present in the social event that takes place around the big screen and within hearing range of the audio devices. Therefore, the partygoers are dealing with a set of varying control mechanisms simultaneously. First, they interact with their mobile devices when selecting material to be shared with others. The users need to browse through their personal images, some of which may be too private to be revealed to anybody at the party, and thus they try to prevent others from observing their interaction with the device. The user can, for instance, turn away so that the others are not able to see the display of the device. This control mechanism is called personal spacing, which covers the position and orientation of both the user and their personal device. Second, the users interact with the home audiovisual system. The only way of controlling this interaction is to decide on the material that they want to share with the others via the system. This control mechanism is similar to verbal behavior in Altman’s theory, but in this case it includes expression of the self through digital content, such as images and tunes. Naturally, the users can also decide not to share anything and thus avoid interaction with the system. The third form of interaction is the normal social interaction with the other partygoers, in which all the control mechanisms described by Altman can be used.

The partygoers are able to communicate with people at the other party via a chat line. Lizzie’s friends do not necessarily know the audience at the other end, that is, who is actually able to see and participate in the chat. Because of this, they end up using nicknames and expressions that are known only to their close friends at the other party. Semi-anonymity of the chat is the control mechanism that they use in this particular form of mediated communication.

The party archive collects images and possibly other material about the shared event. It is obvious that the material contains personal information that the partygoers want to be able to manage afterwards. This is an example of the distributed nature of personal information, which implies the need for managing it in various places. The example also points out another interesting matter: the content of the archive belongs to many people and includes several people’s personal information. Thus, deleting an image from the archive has an impact on the description of the whole shared event. Yet another factor affecting privacy regulation in this case is the complex ownership; Lizzie can manage the party archive, her parents own the system maintaining the party archive and can thus decide what to do with it, and finally the archive includes information whose ownership belongs to a group of users. This means that there are several levels of territorial responses that define privacy regulation.

Home devices, such as the audiovisual system (including the party archive) and the robot bartender, collect anonymous usage data. This is one of the basic characteristics of interaction with computational units. In this case, the usage data does not describe anyone as a person but rather the nature of the party or the partygoers as a group. The systems probably record things like what music has been listened to, how much traffic has taken place between the home and the other party, how many drinks the robot bartender has served, what drinks have been ordered, and so on. The partygoers do not have any kind of control over this data, but it is accessible to the family members who own the devices. This is another aspect of territory in ubicomp interactions: people who have not interacted with the devices and services, but own them, also have access to the information the devices and services have gathered.

To summarize, the following aspects have been identified in this scenario: private sphere allowing more control mechanisms for the owner, management of access rights, personal spacing, selection of material to be shared, semi-anonymity as control mechanism, and several levels of territorial responses.

5.3 Summary

Interaction plays a significant role in Altman’s theory of privacy and many of his ideas regarding privacy regulation are applicable also in ubicomp environments. For example, notions of inputs and outputs, states of achieved privacy (crowding and isolation), and some of the control mechanisms, such as personal spacing, territorial responses, and verbal and non-verbal behavior are relevant in ubicomp interactions and privacy regulation as well.

However, some extensions to Altman’s theory are required in order to make it more applicable for analyzing ubicomp environments. The most profound extensions are the notion of mediated communication and the presence of various kinds of non-human actors, such as the personal device. As a result, the forms of interaction between the actors need to be identified and examined in order to further investigate the control mechanisms in the respective interaction situations. For instance, interaction between the self and an interactive environment or another person’s device involves different control mechanisms from interaction between two individuals. Due to this, some totally new control mechanisms exist in ubicomp environments, such as management of visibility and access rights, filtering, and context awareness. In addition, the control mechanisms that Altman identified and defined require further development when applied in ubicomp environments. For example, personal spacing in ubicomp can be seen as a control mechanism that includes interaction between the self and a personal device in a social context. Further, territorial responses are understood to cover also control over devices and data in a private sphere. The digital self as an object of control is also a notable addition to Altman’s work.

Altman’s theory of privacy, with the above-mentioned extensions, makes the analysis of privacy in ubicomp environments and the design of privacy-aware ubicomp services more systematic.

6 Discussion

Based on the above analyses, it seems feasible to apply Altman’s privacy framework to ubiquitous computing. The interactive approach fits well with the characteristics of a typical ubicomp environment, and allows analyzing them from the privacy point of view. It also highlights the complexity of ubicomp interactions and helps construct an understanding of the dynamics of such environments. With the identified extensions, such as non-human actors, information as an object, and new control mechanisms, we are able to model interactions, and thus privacy regulation, in ubicomp to a greater degree. Compared to others who have built on Altman’s theory, such as Palen and Dourish [12], the above-mentioned extensions bring extra value to understanding privacy in ubicomp.

Some differences between privacy regulation in ubicomp and face-to-face settings exist. The dialectical nature of privacy regulation seems to diminish in ubicomp due to mediated communication and recordability. Many ubicomp services follow predefined policies where users have very limited means to regulate their interaction with the service. Related to this, leaking was identified as an undesired state of privacy caused by unintentional information disclosure.

Instead of discussing interactions, previous privacy research in the field of ubicomp concentrates mostly on information—another important aspect of privacy. For example, Bellotti and Sellen [18] focus on information and the actions performed on it, such as information capture, processing, accessibility, and usage. In a similar manner, Soppera and Burbridge [14] discuss privacy in terms of fair information practices, and technologies for supporting these practices. They address the following issues, which partially overlap with Bellotti and Sellen’s list: personal data and identity, data collection, and data usage, storage and access. Further, Adams and Sasse [20] and Adams [21] discuss four aspects of privacy: information receiver, information sensitivity, information usage, and the context of disclosure. Many others have also studied privacy from the information point of view, including the actions performed on it; that is to say, interactions with information.

One could argue that the actions performed on the information, and the interactions that take place between different actors in a ubicomp environment, together form a rather holistic view of what affects ubicomp privacy and how. Indeed, it might be beneficial to try to extend the model proposed in this paper even further by deepening the analysis of information and the actions performed on it. Such a model might break the original interaction-based model, yet from a theoretical point of view it would probably be appealing. Nevertheless, it remains a topic for further study.

An obvious counterpart to recordability, which covers time, is telepresence, which covers place. Considering telepresence from privacy regulation point of view is another interesting direction for further research.

7 Conclusions

In order to better understand privacy in ubiquitous computing, we have examined whether Altman’s well-known social psychological theory of privacy could be applied to ubicomp, extending it when needed. Altman considers privacy a two-way interactive process, making the approach promising from a ubicomp point of view, where people, devices, and services interact with each other in a rich manner. Our analysis is based on considering the typical characteristics of a ubicomp environment that could potentially affect achieved privacy, and applying Altman’s theory by utilizing illustrative use cases and pointing out significant features and their equivalences to Altman’s terms. Based on the analysis, we argue that Altman’s theory of privacy can be applied to ubicomp environments, provided that it is extended when needed. However, we found that the dialectical nature of privacy regulation, as well as the ability to control interaction, diminishes in the mediated communication typical of ubicomp environments.

Overall, the framework presented in this paper points out essential aspects of privacy regulation from the point of view of ubicomp interactions. This facilitates the design, analysis, and evaluation of ubicomp systems by providing means to identify potential privacy threats and design challenges, as well as by proposing privacy regulation mechanisms that fit different ubicomp environments.