
1 Introduction

The penetration of computer software and hardware into our everyday lives has risen sharply over recent decades. How we, as humans, interact with these computer-based applications has changed in many ways and has been the focus of human-computer interaction research. In recent years, we have entered the third wave of computerization, called ubiquitous computing [34]. More and more, computer interfaces are being woven into the fabric of the everyday lives of users, resulting in interactions outside the traditional range of human-computer interaction. There has been a shift in objective: computers are no longer solely performance-oriented tools aimed at increasing the efficiency of certain tasks. Rather, computers are now also part of leisure, play, culture and art [27].

Concurrently, computer interfaces have been diversifying: while command line interfaces (CLIs) used to be the norm, new and alternative ways of interacting with computers have emerged, including wearable technologies attached to the human body. These new sensing possibilities have allowed input to interfaces to go beyond the traditional mouse and keyboard [20]. As a consequence, computer interfaces have become multimodal and embedded in our daily lives. This has resulted in a shift from manifest to latent interactions, where interactions between humans and computer interfaces are becoming less visible [27].

As a result, a complex interplay of interactions defines user experience [10]. In order to identify requirements for new digital products as well as to evaluate user experience, all relevant interactions should be taken into account when conducting user studies. While technology has changed rapidly, the focus and methods of HCI research have largely remained on investigating performance-oriented explicit interactions, often without considering context [33]. The efficiency impact of new system features is, for example, often evaluated in a lab environment, while other external factors that are also important are overlooked. While in some cases increased efficiency and/or effectiveness can directly lead to a positive user experience, this is certainly not always true. Several studies found, for instance, no correlation between efficiency, effectiveness and user satisfaction [33]. Besides assessing such impacts, it is thus also important to get a grasp on how interactions with a technology are experienced by the users themselves. For this purpose, it is important to understand what actually contributes to a worthwhile, valuable experience [16]. In order to achieve this deeper understanding, it is important to identify which interactions are present in the environments where the technology will be used and how these interactions are related to user experience.

Currently, there is no framework available that fully captures the complexity of the multidimensional, multimodal, often latent interactions with these constantly shifting interfaces. Such a framework is, however, necessary, as it would support user-centered design studies both in the pre-development stages and in the evaluation stages, which would ultimately help optimize the technologies. The present article takes a first step towards the development of such a conceptual framework that can serve as a guide for innovation research and is robust against future interfaces.

Such a framework is also required in order to develop a suitable methodological toolkit to fully capture interaction dimensions. As the traditional elements of interaction - the user, the task at hand and the technology - no longer suffice to describe the complex interplay of interactions intrinsic to ubiquitous computing [24], a number of methodological weaknesses can be identified among current HCI frameworks.

Firstly, the ideation stage of innovation development - in which points of pain or goals have yet to be identified - is often very opaque [17]. A more systematic, validated approach to grasping the ecosystem of a certain environment would allow for an adequate assessment of points of pain and potential solutions. The challenges associated with ideation are further compounded by the introduction of interfaces beyond the desktop [18].

Secondly, in the testing phases aimed at further optimizing the technology, researchers face the experimenter's dilemma, in which they have to choose between internal and external validity [25]. Generally, when conducting experimental research the objective is to find an effect or relationship between the dependent variable and the independent variable(s) and to generalize findings to a broader population or context [6]. Two types of validity contribute to this end goal. The first is internal validity, which refers to a controlled experimental design that warrants that changes in the dependent variable are strictly due to the manipulation of the independent variable and not due to extraneous variables (i.e., confounds). The second is external validity, which is the generalizability of study findings to other populations (population validity), settings (ecological validity), times and measures [5, 6, 14]. Whereas lab experimentation disentangles causal relationships from mere incidental occurrences, field experiments and living lab studies include contextual elements that cannot be simulated in the laboratory. While the former has high internal validity, because confounding variables are controlled, such experiments do not include all variables influencing the user experience, which ultimately results in low ecological validity. For instance, context is often not taken into account in lab experiments [3, 33]. Field studies, on the other hand, result in higher ecological validity, but in this case researchers cannot pinpoint what exactly contributed to the overall user experience [25]. While in the case of lab studies researchers know how internal validity can be increased (i.e., by reducing confounding variables), there are no recommendations as to how ecological validity could be increased in more controlled environments. It has, however, been acknowledged that when factors which influence user experience are taken into account in a laboratory setting, ecological validity rises [1, 25].

Hence, a framework that can help researchers and designers pinpoint those factors would allow them to implement a more holistic approach in all stages of a new product development process. This way, a more ecologically valid view of user experience is provided during all iteration stages of development and lab testing, resulting in an optimization of the technologies to be tested in real-life environments.

The remainder of this paper is structured as follows. The next section explores the changing nature of computing and, consequently, the changes in HCI theory and practice. Following this, we introduce our proposed framework, supported with several case studies that illustrate how the framework can be used. We end with concluding remarks.

1.1 Shift from Human-Computer Interaction (HCI) to Human-Computer-Context Interaction (HCCI)

Computing has undergone three waves of transformation. Whereas interaction first took place most prominently through command line interfaces (CLIs), the increase in computing power enabled the introduction of graphical user interfaces (GUIs) that could be manipulated with peripherals such as computer mice [21]. More recently, we have seen the emergence of natural user interfaces (NUIs), aligning with Ishii's [19] view on tangible bits, where objects in our surroundings become interactive and where users interact with objects using voice and gestures [8]. A specific example of a mainstream NUI is the Kinect, a gaming device that reacts to body movement [36]. This shift from CLI to NUI corresponds to the prediction that we are moving from a situation where one computer is used by many people to one where one person has many computing devices, embedded in their daily routine to automate their environment [35]. This theory is put into practice by, for instance, automatic coffee makers that sense the environment through alarm clock signals and, consequently, start brewing coffee for their owners when the alarm goes off [23, 34]. According to Weiser [34], ubiquitous computing is meant to be an invisible 'embodied virtuality' in which technologies disappear as they weave themselves into the fabric of the everyday lives of users. The end goal for these technologies, thus, is to become part of users' environments or contexts [34].

These three waves go together with a shift in the interaction focus of HCI. In the so-called "first wave of HCI" described by Bødker [4], researchers and practitioners focused on usability aspects. Interaction was considered a fairly simple concept: a single observed user interacting with one computer in one location, mostly in the context of work. The focus in this era was thus on interaction with an object or a system, in a controlled environment. In the "second wave", described by Bannon [2], the focus of research was communication and collaboration. This shows a shift towards human-to-human interaction, or interactivity as a process rather than interactivity as a product [11]. In the current "third wave of HCI", again described by Bødker [4], the use contexts and application types are broader and intermixed. In contrast to the first and second waves, which focused on the workplace, in the third wave technology has spread to our homes and become woven into our daily lives. This has resulted in a shift in research focus from technology and content to users and context, or a shift from Human-Computer Interaction (HCI) to Human-Computer-Context Interaction (HCCI).

1.2 Current Limitations of User Research Methods in the New Product Development Process

The needs of users are an important starting point for HCI research and development. Through an ideation process, user needs can subsequently be translated into products. A variety of methods exist to achieve this, ranging from traditional focus groups [22] to co-creation sessions with end users [29]. However, as noted by Hribernik [18], ideation (in collaboration with users) in these dynamic environments is challenging because of the increasing complexity associated with systems that are used beyond the desktop. This is complicated further by the differences between what users say when interviewed, what they do when observed and what they know or feel during a generative session [31].

By conducting an experiment, the effectiveness or efficacy of a new technology feature can be measured, or tipping points of certain technical parameters can be defined (i.e., finding an optimal balance between technological features and cost by pinpointing the QoS levels that result in a significant increase or decrease in user experience). In HCI, due to the shift in interfaces, we have also witnessed a shift in experimentation context. In the past, empirical work in HCI focused on the laboratory paradigm [24], where the controlled environment minimizes the risk of contamination by extraneous variables and internal validity is high, as explained in the introduction [6, 14, 26]. As the interaction between a user and a computer used to be more manifest, experiments could be confined to the controlled environment of a lab, where a subject was given a task, a computer and an interactive system (the interface) [24]. However, now that we have entered the ubiquitous stage of computing, where technology becomes an invisible 'embodied virtuality' and interactions between user and technology have become latent [34], experimenting in the field is increasingly recommended, as not all interaction levels with technology are reproducible in a controlled environment [24]. In turn, however, when testing in the field, researchers cannot fully control events. Nonetheless, field experiments or living lab tests should not be treated as 'black box' studies; researchers should try to get a grasp on which contextual factors can influence the user experience when interacting with a certain technology and try to include or assess them in the experimentation environment.

1.3 Defining the Interaction Context

While several HCI frameworks are available that provide insight into factors to take into account when developing and assessing user experience, they do not sufficiently provide researchers with concrete concepts that can be used throughout a user study and across a wide range of technologies, let alone for technologies that still have to be developed. For instance, in many frameworks 'context' is considered an important element, but

“… is often formulated very vaguely or used as a container concept for various intangible aspects of factors influencing product use. Furthermore, context is often analyzed post-hoc, where for measurement purposes we need an upfront view on the specific context” [13].

An interesting framework (see Fig. 1), providing a more holistic view of user experience and especially of context, is the integrated QoE framework by Geerts and colleagues [13]. In this framework, context is divided into three relevant categories which influence the user experience of a technology: the broader socio-cultural context, the situational context of use, and the interaction context. The socio-cultural context refers to context on a societal level (e.g., the social and cultural background of people). Situational context refers to (the interpretation of) situations and is a more local level of context (e.g., jointly watching a soccer match at home). The interaction context refers to the micro-level of context around the interaction between the user and the product (e.g., the interaction of the user with the television, with the other people watching the game and with the home environment).

Fig. 1. Integrated QoE framework. Adapted from Geerts et al. [13]

This framework is interesting for the further development of our HCCI framework, as it acknowledges that it is not only the relationship between the (technical aspects of an) ICT product and the user that impacts the user experience. The framework states that user experience is the result of an interplay of user characteristics, product characteristics and different contexts. For the purpose of our study, we are especially interested in the interaction context. More specifically, we aim to further specify concrete parameters that are part of this interaction context and that impact the user experience. For this purpose, we develop sensitizing concepts (i.e., concepts that provide direction for research and analysis) in a framework that connects theory and design practice, facilitates idea generation and solution development, and can be used by designers and researchers. The framework should fully capture the complexity of the multidimensional, often latent, multimodal interactions and should thus be applicable in:

  • The ideation stage serving as a guide/checklist to analyze the ecosystem of the space in which the technology is being implemented in order to detect points of pain and potential solutions

  • The conceptual development stage matching potential ideas against our framework in order to optimize design decisions

  • Testing stages (both lab and field tests) serving as a guide to design user studies that are high in both internal and external validity in order to optimize the technology before testing it in a real-world environment

For this purpose, we have conducted a post-hoc analysis of case studies in our research group, in which user research in the new product development process of ICT is the focus. More specifically, we looked into the issues that emerged during the testing phases of mockups, prototypes or finished products due to overlooking, from the beginning, relevant factors of the interaction context that influence the user experience. Note that the presented concepts and framework are by no means complete or absolute, but rather serve as a preliminary suggestion to be matched against other innovation development projects and to be further validated and extended.

2 Results

Five interaction levels were distinguished that are relevant to take into account in the new product development process (Fig. 2). First, a few remarks about the interactions defining the HCCI framework. In essence, each of the interactions can proceed in a digital or analogue way, or with a digital or nondigital object. The motivation to regard both types of objects in the same way stems from the recent shift to natural interfaces, but also anticipates possible future technologies. Since the line between digital and nondigital interfaces is slowly fading, it is important to include both in our framework. Further, an interaction is possible through any of the five senses and is always interpreted from the user's point of view: an interaction with a robot will be a user-user interaction if the user does not notice any difference from a human interaction. Finally, the interaction works both ways: in a user-object interaction, the user can have an impact on an object (e.g., the user sets the alarm on a smartphone), but the object can also have an impact on the user (e.g., the alarm wakes the user).
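To make these properties tangible, the sketch below shows one hypothetical way a researcher could encode observed interactions when annotating a user study; the class and field names are our own illustrative assumptions and are not part of the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    USER_OBJECT = "user-object"
    USER_USER = "user-user"
    USER_CONTENT = "user-content"
    USER_PLATFORM = "user-platform"
    USER_CONTEXT = "user-context"

class Sense(Enum):
    SIGHT = "sight"
    HEARING = "hearing"
    TOUCH = "touch"
    SMELL = "smell"
    TASTE = "taste"

@dataclass
class InteractionEvent:
    level: Level          # one of the five HCCI interaction levels
    digital: bool         # involves a digital (True) or nondigital (False) object/content
    sense: Sense          # sensory channel through which the user experiences the interaction
    user_initiated: bool  # the interaction works both ways: from user to object or vice versa
    note: str = ""

# The alarm example above, annotated in both directions:
events = [
    InteractionEvent(Level.USER_OBJECT, True, Sense.TOUCH, True,
                     "user sets the alarm on a smartphone"),
    InteractionEvent(Level.USER_OBJECT, True, Sense.HEARING, False,
                     "the alarm wakes the user"),
]
```

Such a coding scheme could, for example, be used to tally which interaction levels dominate in observation notes before prioritizing them in later research steps.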

User-Object.

Central to HCI research is the interaction between a user and a technology. In our HCCI framework this interaction is called the user-object interaction. Contrary to traditional HCI frameworks, which tend to focus on the interaction a user has with a digital technology, in our framework user-object interaction can occur with both digital and nondigital objects (e.g., swiping a screen or placing an RFID-tagged book on a table).

User-User.

A second interaction level addresses the interaction between one person, the user, and another nearby or computer-mediated communicator, resulting in a user-user interaction. The outcome of this interaction impacts the interaction with the technology when, for example, the user mutes an incoming call because of an already ongoing face-to-face conversation.

User-Content.

The next interaction, the user-content interaction, encompasses all the information and feedback the user perceives and processes. This information can be presented either in a digital format (e.g., a vibrating smartwatch that wakes a user) or in a nondigital format (e.g., reading the wrapping of a chocolate bar). Similar to the user-user interaction, the nondigital content can be relevant for the interaction with the technology. The user-content interaction will always be the outcome of a user-object interaction.

User-Platform.

Another interaction is the user-platform interaction. A digital object can interact with different types of platforms, e.g., a back end, server or cloud service. The user can interact with the platform either directly (e.g., a smartphone interface/OS) or indirectly (e.g., an automated cloud service). The user-platform interaction plays an important role in the HCCI framework, as it provides a service to the user that is the result of a user-object or user-content interaction. The user-platform interaction becomes apparent to the user when it fails to execute the service, or when technical parameters (e.g., latency, update rate, accuracy) impact the positive evaluation of the user-object or user-content interaction.

User-Context.

The last interaction is the user-context interaction, based on the interactional view of Dourish [9]. Context cannot be treated as static information, as it is the result of the user's internal (e.g., values, predispositions) and external characteristics (e.g., temporal, social context) [7]. From this point of view, the user-context interaction comprises all contextual elements that are not central to the interaction but moderate it to some extent. For instance, when a user arrives home after dark, the light switches on automatically; when the user arrives home before sunset, the light does not switch on. The context of arriving home after dark thus plays an important role in the outcome. In the next sections the HCCI framework is applied to three case studies, each with a different goal: developing a smart shopping cart, developing a festival bracelet, and improving air quality awareness.
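Before turning to the case studies, the light example can be made concrete with a minimal sketch; the function name and the fixed sunset time below are hypothetical and serve only to show how a contextual variable moderates the outcome of the same user action.

```python
from datetime import datetime, time

def light_switches_on(arrival: datetime, sunset: time = time(20, 15)) -> bool:
    """The same action (arriving home) leads to different outcomes
    depending on the user-context interaction (before or after dark)."""
    return arrival.time() >= sunset

print(light_switches_on(datetime(2023, 7, 1, 21, 30)))  # True: arriving after dark
print(light_switches_on(datetime(2023, 7, 1, 17, 0)))   # False: arriving before sunset
```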

Fig. 2. Interaction levels

3 Case Studies

3.1 Smart Shopping Assistant

A research project in the retail context was driven by a technological and a societal challenge. Whereas the technological challenge pointed towards the importance of developing an accurate and scalable solution for the real-time tracking of customers' shopping carts in retail markets, the societal challenge aimed at making the grocery shopping experience more efficient and enjoyable. Eventually, the research project resulted in a smart shopping cart with a touch screen built into the handle. The smart shopping cart guides customers through the supermarket based on the position of the cart and the customer's shopping list, and serves as an inspiration tool with contextual promotions, resulting in a more efficient and enjoyable shopping experience. The project followed a mixed-method approach, consisting of five research steps that constitute the three aforementioned innovation process stages. In the ideation stage, an online survey (1) was conducted to delineate four personas. These personas were central to the innovation process, since the needs and frustrations of each persona are associated with experience determinants. Thereafter, observations (2) were made in a real supermarket following the mystery shopper approach [30]. The conceptual development stage consisted of three co-creation workshops with customers and retail employees (3) that built on the insights of the observations and the sensitizing personas. Throughout these co-creation workshops we proceeded from an idea longlist, via a feature shortlist, to a visual concept (Fig. 3). The final testing stage proceeded in two steps. First, the 'smart' shopping cart was evaluated by end users using the Wizard of Oz methodology (4) (in this type of testing the researcher, or 'Wizard', simulates the user-platform to make the participants believe they are interacting with a working prototype [15]) and eye-tracking technology in a real supermarket (N = 18) (Fig. 4). After the concept had been improved based on the findings of the Wizard of Oz implementation, it was implemented and experimentally evaluated in a mockup retail store (N = 59) (5). In this field experiment two use cases of the smart shopping cart were evaluated: displaying, on the one hand, the location of the users and the products on their shopping list and, on the other hand, the contextual promotions (Fig. 5).

Due to, but also thanks to, the mixed-method approach followed, not all interaction levels emerged in all five research steps. However, as the project proceeded, it became clear that particular interaction levels were more important than others. In general, four relevant interaction levels guided the outcome of the project, as well as the constitution of the Human-Computer-Context Interaction framework. The most remarkable interaction level turned out to be the user-user interaction. Firstly, the observations (2) showed that the social context of grocery shopping could not be ignored; for example, a baby or partner often joined the grocery shopping trip. However, none of the co-creation participants (3) mentioned the importance of being accompanied by their significant others, nor the possibility of meeting people in the supermarket, due to recall bias. Secondly, in the lab testing step (5) a minority of participants brought a family member with them to co-experience the experiment, although people were personally invited to participate individually in the lab test. The researchers had opted for a quasi-experimental design because they wanted to control as many confounding factors as possible; as a result, everyone had to experience the shopping journey individually, without the co-presence of others who could impact their decision-making. Retrospectively, after having established the HCCI framework, the researchers and designers would have attached more importance to the user-user interaction throughout the entire project. From a user research stance, it would have been more beneficial if co-creation participants had reflected on the impact of the user-user interaction on the design, and alternative research questions should also have been addressed in the lab test. However, thanks to the triangulation of multiple methods, the user-user interaction surfaced multiple times. This has impacted future versions of the smart shopping cart: the screen will not be integrated in the child seat of the shopping cart, and other interaction types besides touch (such as sound or vibration) or a bolder, more explicit user interface will be explored, as people accompanied by others tend to look less at the shopping cart screen.

Another relevant interaction level in the multisensory journey of grocery shopping appeared to be the user-object interaction. This interaction level applies to the interplay between a user on the one hand and a tangible and/or digital object on the other. Firstly, the observations (2) showed that people keep interacting with physical objects all the time in a supermarket: smelling flowers or herbs, holding a shopping list in their mouth while using a smartphone, navigating a shopping cart or holding a basket, holding two products to compare information, hunting for the freshest vegetables, preparing loyalty cards, cash and bags when queuing at the checkout, etc. This impacted the design of the shopping assistant, resulting in the implementation of the screen in the shopping cart handle rather than on the shopper's smartphone, as having to carry another item in hand would negatively impact the user experience. Secondly, once the first proof of concept was developed, it was tested in a real supermarket following the Wizard of Oz approach (4). However, this concept appeared to be too intrusive: the main interaction during the grocery shopping trip was the touch interaction on the shopping cart screen, and afterwards people said they had missed out on interacting with the others in the supermarket. This shows that the designers wrongly assumed that the user-object interaction (i.e., touch screen interaction) could easily replace the user-user interaction. Interaction levels are never isolated, but often directly impact other interaction levels too. As a result, a ranking of the different interaction levels based on their importance in the overall experience should follow each research step and be reexamined afterwards.

Fig. 3. Participant cutting out the features defining the smart shopping cart in a co-creation workshop

A third interaction level, user-context, emerged from the triangulation of the personas delineated in the survey (1) on the one hand and the Wizard of Oz implementation (4) and lab test (5) on the other. One persona from the survey (1), Efficient Eden, is most disturbed by the checkout queue, hard-to-find products, and the other customers present in the store. Accordingly, during the Wizard of Oz test (4), Efficient Eden was less tolerant of crowdedness in the store or shelf reorganizations, as he could not find a specific product. In contrast, for the lab test (5), participants were invited to a demo supermarket, which is a controlled research setting yet still relevant for users to interact with [12]. In this demo supermarket, Efficient Eden (a persona within the study) was more tolerant of the research context of the field experiment than he was in his habitual supermarket. The difference between the demo supermarket and a real supermarket probably paved the way for this tolerance: the demo store has a smaller surface area than a real supermarket, the layout is different, and there are no employees.

Fig. 4. Wizard of Oz test with the participant wearing eye-tracking glasses and the Wizard (woman in black) coordinating the smart shopping cart screen

The fourth interaction level, user-platform, sheds light on the technical parameters underlying the concept infrastructure. This type of interaction only appeared in the testing phase. In the Wizard of Oz implementation (4), people were bothered by the delay between the "contextual" promos they received on the shopping cart screen and the actual position of the shopping cart, which was already five meters further along. This delay was the result of incorrect wizard behavior and system delay. The finding that people are intolerant of delay guided the scope of the lab test (5), which evaluated two latency levels in addition to features inherent to the concept.
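The impact of such a delay can be reasoned about with a simple back-of-the-envelope sketch; the walking speed and latency values below are illustrative assumptions, not figures measured in the study.

```python
def promo_offset_m(latency_s: float, walking_speed_m_s: float = 1.0) -> float:
    """How far the cart has moved by the time a 'contextual' promo appears."""
    return latency_s * walking_speed_m_s

# A ~5 m offset, as observed in the Wizard of Oz test, corresponds to roughly
# 5 s of combined wizard and system delay at a typical walking pace:
print(promo_offset_m(5.0))  # 5.0 m
# Two hypothetical latency levels one could contrast in a lab test:
print(promo_offset_m(0.5))  # 0.5 m: promo still matches the shelf in front of the user
print(promo_offset_m(3.0))  # 3.0 m: promo may refer to a shelf already passed
```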

Fig. 5. Screenshot of the smart shopping cart application with the shopping list products positioned on the supermarket map. When the shopping cart (blue dot) gets near a product, more information is provided on the price, number and shelf position of the product. (Color figure online)

3.2 Festival Bracelet of the Future

In a study that aimed to develop the festival bracelet of the future, research, technical and commercial partners worked towards a wearable that would enhance the whole experience during the festival, but also create a longer 'post-festival' experience. For this purpose, users were involved in focus groups (ideation) and co-creation sessions (conceptualization) to develop a first prototype, which was then tested during the festival. User evaluations of the bracelet and its features were gathered during the festival by means of interviews every two hours and experience sampling every hour. Six weeks after the festival, a follow-up debriefing session was organized. Based on this input, the bracelet was further optimized to be tested again at the festival the following year.

During the ideation phases with participants and the conceptualization phases with the project partners, the focus was mainly on user-user and user-object interaction. This resulted in two main features that were integrated in the first prototype tested at the festival: (a) a friending feature, connecting festival-goers with each other on social media (user-user) when they simultaneously click a button on their bracelets, and (b) cashless payments (user-object), made by scanning the bracelet against a payment scanner found at bars or top-up booths. When these features were developed in the conceptualization and prototyping phase, an important interaction level that was not taken into account was user-content. With regard to the friending feature, participants mentioned during the evaluations that they would like to get feedback from the bracelet when the befriending had succeeded, for instance by letting the LED light integrated in the bracelet turn green. With regard to the cashless payments, participants did not know how much money they were spending or what their balance was, leading to frustration. The only way of knowing their balance was to scan their bracelet against the scanners, which were only available at the bars or top-up booths. Therefore, in the next phase the friending feedback mechanism was integrated and an application was linked to the bracelet to check and top up the balance.
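A minimal, hypothetical sketch of the missing user-content feedback loop is shown below: the user-user friending action should produce immediate feedback on the bracelet itself. The class and function names are illustrative and not taken from the actual firmware.

```python
class Led:
    def set_color(self, color: str) -> None:
        print(f"LED -> {color}")

def on_friend_request_result(success: bool, led: Led) -> None:
    # Close the loop: translate the outcome of the user-user interaction
    # into user-content feedback the wearer can perceive directly.
    led.set_color("green" if success else "red")

on_friend_request_result(True, Led())   # LED -> green
on_friend_request_result(False, Led())  # LED -> red
```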

With the cashless payments feature, another significant interaction level was overlooked during the ideation and development phase: user-user. More specifically, participants found it very annoying that everyone now had to get their own drink, as it was not possible to transfer drink vouchers to each other (a common user-user interaction that was possible with cash or paper drink vouchers), resulting in long lines at the bar.

If we had used the HCCI framework during the ideation and conceptualization phase for every separate feature, these issues would have come to light beforehand and could have been tackled before implementing the first prototype at the actual festival, for instance by mapping all interaction levels involved in several situational contexts related to getting a drink at a festival, without the cashless feature (ideation phase) and with the cashless feature (conceptualization phase).

One feature that was already present in a previous version of the bracelet (before the project had started) was an integrated LED light. The LED was supposed to light up to the beat of the song being played. However, because a festival site is a chaotic environment (a large number of people, few network facilities and interference), this did not work very well, leaving users confused and not really understanding the purpose of the LED light. Hence, a major goal of this project was to define a latency level between the actual beat and the lighting up of the LED (user-platform) that would still allow for a good user experience of the feature. However, we encountered serious issues when trying to test this, as a result of not being able to include the user-context interaction level.

In the end, we had to conclude that it was impossible to test this, as neither a lab setting nor a field trial could be used. In a lab setting, none of the context variables influencing the experience could be made available, such as a large crowd wearing the same bracelets and thus a large number of LED lights turning on and off to the beat. In a living lab setting, it was impossible to manipulate latency, as the only possible field test was during the festival itself, and this was not something the festival organizers wanted to experiment with, risking 'ruining' the user experience by testing with unfavorable latency levels. Even if the festival organizers had allowed it, it would have been impossible to assess QoE at several levels of latency, as the DJ and song should preferably be kept the same while varying latency levels, since content would probably influence QoE. This shows that it is not only important to define the relevant interaction levels that can influence user experience; we also need insight into which ones are difficult to implement in user studies, in order to develop suitable methodologies. In this case, for instance, the solution could be to use virtual reality to capture the user-context interaction and thus define optimal latency levels for this user-platform interaction.

3.3 Air Quality Measurement

Efforts to make buildings more energy-efficient and airtight have had a negative effect on the air quality inside the house, as pollutants are trapped inside the building. To get a deeper understanding of the different elements that can improve users' knowledge of the air pollution in their homes, a multi-method study was set up. An air quality sensor and a smartphone app would be developed to inform the inhabitants of the air quality and of how they could improve it.

In the ideation phase, a co-creation session was held to identify the current and future needs around air quality measurement and possible new features. The focus of this workshop lay on the content and the platform of the new tool to inform the inhabitants (What information should they get? Which type of messages?). No attention was given to the interaction with the context (Where do people measure the air? At which moments do users not want to be disturbed?), with the object (Where will the air quality sensor be placed? What should it look like?), or with the other users (What would multiple users like to see? How will they interact?).

During the concept phase, a proxy technology assessment (PTA) with an existing air quality sensor took place in 11 homes over a period of 4 weeks. During this time the families could use an air quality sensor and report their findings in a diary. The study was organized in order to get insights into the interaction with the context (Where do people measure the air? When?) and the interaction with the object (Is the display clear to understand?). No special attention was given to the interaction with the platform or with other users. Qualitative interviews at the end of the test period revealed insights that would not have been obtained had this experiment been held in a lab context instead of in a real home situation.

We see that, due to the lack of a methodology to evaluate interactions, several interactions were not taken into account during the different development phases. A framework including these five interaction levels would have avoided overlooking them and would have led to a more user-friendly product. For instance, the user-platform interaction was overlooked, which resulted in a sensor that was limited by the availability of power sockets and Wi-Fi in the home. A lack of sockets in, for example, the cellar or bedrooms contributed to the lack of knowledge about the inferior air quality in these rooms. A final design of the air quality sensor with batteries or a built-in modem could solve this problem.

Another missed interaction was user-context. The app sent push notifications at night, which woke up the users, who were, as a result, dissatisfied with the product. Designing the app to take the nighttime context into account would have created a better product.
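A minimal sketch of how this nighttime context could be taken into account, assuming a fixed do-not-disturb window; the window boundaries and function names are illustrative and not part of the actual app.

```python
from datetime import datetime, time

QUIET_START, QUIET_END = time(22, 0), time(7, 0)  # assumed do-not-disturb window

def may_notify(now: datetime) -> bool:
    """Suppress non-urgent air quality notifications during the night."""
    t = now.time()
    in_quiet_hours = t >= QUIET_START or t < QUIET_END
    return not in_quiet_hours

def send_air_quality_alert(message: str, now: datetime) -> None:
    if may_notify(now):
        print(f"push: {message}")
    else:
        print(f"queued until morning: {message}")

send_air_quality_alert("CO2 level high in the bedroom", datetime(2023, 3, 1, 2, 30))
# -> queued until morning: CO2 level high in the bedroom
```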

4 Discussion

This article has presented a preliminary framework to define the interaction context of current and future technologies. More specifically, it has shown that five relevant interaction levels can be defined that can influence the user experience with a technology: user-user, user-content, user-object, user-platform and user-context. By implementing this framework in all stages of the new product development process, a more holistic approach to conducting user studies can be achieved.

The HCCI framework can be used in three ways. Firstly, it can be used to define the interaction context of existing objects that are the target of innovation. For instance, in order to define users' current points of pain, manufacturers who want to develop a smart kitchen can use the framework to get insight into which interaction levels importantly influence how a user interacts with a kitchen hood. Once these points of pain have been established, the framework can be used during idea generation by, for instance, looking for technology-enhanced solutions along all these interaction levels. Secondly, the framework can be used to optimize technology features and design in a conceptualization and prototyping phase, taking into account all relevant interaction levels when working out certain features: for instance, when designing the cashless payment feature of the festival bracelet, by mapping how it impacts other relevant interaction levels in the situational context of going to a festival with a group of people (e.g., the user-user interaction where users give each other money or drink vouchers so that one person can get drinks for the group). Thirdly, the framework can be used to design studies to test and optimize technology in a more controlled environment in order to increase ecological validity: for instance, in our shopping assistant case study, taking into account the impact of shopping together (user-user) on the effectiveness of a pop-up contextual advertisement.

The HCCI framework serves as a concrete tool for HCI researchers, designers, and developers to use in the new product development process. Our framework aims to be technology independent and future-proof. The decomposition into user-object, user-user, user-content, user-platform and user-context interactions can be explored, designed, and evaluated for different technologies (e.g., IoT, virtual and augmented reality, or even innovation triggers such as brain-computer interfaces) [32]. These technologies are expressions of a shift from manifest to latent interfaces and interactions. The HCCI framework helps designers and digital product engineers take the relevant interactions into account in order to reveal gaps for new interaction techniques, methods and platforms.

The case studies in the present manuscript have shown that in the ideation and concept development stages the focus was primarily on the user-content and/or user-object interaction. However, in the technology evaluation stage, it became apparent that the user-platform, user-user, and user-context interactions had been disregarded, yet were decisive for the user experience. Furthermore, the case studies showed that some methodological issues arose due to the difficulty of simulating all interactions in all innovation stages. As in the festival bracelet of the future research project, which failed to test the latency levels between the LED light in the bracelets and the beat of the music (i.e., user-platform), we expect that simulating user-user and user-context interactions (e.g., large crowds) will be challenging. Lastly, viewing the three case studies through our framework demonstrated that not all interactions are always relevant when using a technology.

5 Limitations and Further Research

Although we believe that this framework can serve as a valuable tool throughout the entire new product development process, it certainly has its limitations. Firstly, no systematic analysis of current HCI frameworks and HCI user studies has been conducted in order to initially define the levels of the interaction context with technological products. Traditionally, HCI frameworks seem to rely on compiled theoretical foundations or on lab-based research [28], but the recent shift in interfaces and interactions (e.g., embodied interactions, sensor-based interactions) complicates this. The interaction levels proposed in this manuscript have been defined bottom-up and inductively by several researchers involved in new product development user research, based on several case studies. The framework is, therefore, only a preliminary suggestion for defining the interaction context with new products and is definitely non-exhaustive. Further validation of this framework, through systematic implementation in future research projects, is required to provide evidence for the comprehensiveness of the interaction levels and to address possibly vague conceptualizations.

Future research should not only focus on further conceptual validation of the framework, but also search for appropriate methodologies for including the interaction levels in the different phases of the research process, for instance by investigating the opportunities of virtual reality to simulate relevant context variables, such as a large crowd at a festival stage, or by investigating the use of existing technologies such as Alexa in ideation stages to get insight into how user-platform interactions should be defined. In doing so, we aim to provide guidelines that enable researchers as well as designers to use the HCCI framework in the appropriate way.