1 Introduction

As part of the Ambient Intelligence (AmI) vision, the environment has to seamlessly support users in carrying out routine activities. These activities can range from home automation tasks to keeping up with friends on social networks. Such tasks can be understood as services provided by an entity, integrated into the home environment itself and accessed through some kind of User Interface (UI). For the purpose of the research presented here, they will be generalised as interactive services.

Watching TV is one of the activities that take up most of people’s leisure time [54], regardless of their technical skills and knowledge. Consequently, the TV has become a widespread electronic appliance with a prominent place at home, and its remote control has become a common interaction device for most people. The TV is rapidly evolving and converging with computers; in fact, a TV set is a computer with a particular interaction paradigm and a particular content consumption setting.

Therefore, the TV is a very sensible choice of controller UI for providing services adapted to each user in an AmI environment. In this context, delivering interactive services adapted to the user on the familiar device they use most frequently in their spare time is an interesting topic of research.

For many users, the integration of these services into a TV set provides the means to access online banking, e-health services, socialising via social networks or the services provided by their environment, which would otherwise be inaccessible to them due to a lack of computer skills. The current trend of moving physically provided services into the cloud, to save on personnel costs and to improve profit margins, sharpens the need to provide interactive services in a way that is accessible to all. Otherwise, these user groups could be at risk of exclusion from the digital society.

Up to now, the majority of research on inclusive TV has attempted either to provide techniques for the interactive television (iTV) service lifecycle (requirements gathering, design, implementation and evaluation), or to make the TV remote control accessible to groups of users with specific disabilities. Regarding the challenge of accessible interactive services on TV, there are specific implementations coming from the generic iTV research area, which are mostly electronic program guide (EPG) related. One consequence of providing specific solutions to specific user groups’ problems, instead of using an intermediary open user interface platform, is that the developed service adaptations are not reusable.

It is interesting that the greatest efforts to develop natural interfaces for the TV set come from the interactive television area. In recent years, the iTV research community has made progress on the integration of interactive services within the TV. There is also an industrial commitment to include such interactive functionalities in commercial products that have already been brought to market. However, most research and industrial developments have been targeted at the mainstream user, generally ignoring the accessibility barriers experienced by people with disabilities and elderly people. An approach that easily integrates interactive services with the TV, based on inclusive design, and that gives universal access to the different user groups is still missing.

In this sense, the main contribution of this paper is a new approach to integrating all kinds of interactive services (locally or remotely provided) with the TV set in a way that allows the UI to be personalised to the needs of each user group.

After this brief introduction, Section 2 surveys the current body of work related to the problem presented above. Section 3 introduces our approach to solving this problem. Section 4 gives the details of the approach’s implementation for elderly users. Section 5 explains the evaluation method followed in the user tests, and Section 6 presents the observed results. Section 7 summarises the paper and draws our conclusions. Finally, Section 8 outlines the future lines of work that will tackle the remaining weaknesses and opportunities of the approach.

2 TV accessibility and related work

The European Commission’s “Assessment of the Status of eAccessibility in Europe” report, which analyses both the accessibility of broadcast programmes and end-user TV equipment, shows that TV accessibility is still far from being implemented to its fullest extent. The report also highlights the accessibility opportunities and challenges that the introduction of digital TV brings. The main challenge of not excluding people from access to any digital TV service has been underlined by the iTV research community in [21].

Given the political pressure to confront the poor accessibility and usability of interactive TV services and client devices, research to enhance universal access has steadily improved the state of the art of this technology.

From our point of view, there are two main working areas directly related to making the TV interaction experience accessible: content accessibility and access to digital TV. The state of the art in these areas is presented in the following sections.

2.1 Content accessibility

This working area covers initiatives that seek to provide alternative content, synchronised with the original content, so that people with different disabilities can consume audio-visual content in an accessible way. This audio-visual content comprises both broadcast TV content and the media content of iTV applications.

The accessibility services defined for providing accessible broadcast content are: audio-description for the visually impaired, subtitles for the hearing impaired and sign language interpretation for the hearing impaired. To ensure the availability of these services, it is necessary that all stakeholders comply with the legislation and the standards of production, distribution and display of such content.

The main efforts in this area have been made by regulatory agencies standardising the creation of alternative content. AENOR provides a reference standard on the creation of audio-description [48] and another on the creation of teletext subtitles [47]. With the transition from analogue to digital television, ETSI has published a standard [17] on how subtitles should be managed in Digital Video Broadcasting (DVB) technology.

The ability of DVB technology to transmit content composed of different audio, video and data streams is remarkable, since it enables the distribution of audio-description, sign language interpretation or subtitles as additional channels. The user can thus select the combination of content streams to be rendered on their client device.

In the content accessibility research area, different approaches have tried to adapt content to different user groups by making use of the multimedia framework provided by the MPEG-21 standard. Yang et al. [52] propose an adaptation system based on the MPEG-21 standard, focused on visual disabilities (people with colour blindness). In contrast to specific efforts like this, Vlachogiannis et al. [50] propose a content adaptation framework, also based on the MPEG-21 standard, that is not limited to one specific disability and that can adapt pre-authored content to the specifics of each user.

As introduced in the previous section, the amount of accessible TV content produced is still well below 100% in traditional TV broadcasting. Due to legislative changes in many countries requiring a higher proportion of accessible content to be broadcast, several initiatives, such as creating sign language through virtual characters, automatic captioning or automatic translation of subtitles, have been launched to fully or partially automate the production of accessible content.

Ensuring content accessibility is also a major challenge in the current paradigm shift in TV content consumption, since users are moving from being mere consumers to producing content in an environment where production, distribution and rendering are no longer in the hands of a few parties and a fixed set of technologies, as they have been so far.

2.2 Access to the digital TV

Access to digital TV comprises the purchase, installation, setup and use stages. In this sense, a report on usability and accessible design of the TV [26] analysed the following usage stages:

  • Choosing and purchasing the Set-top box (STB).

  • Installing the STB.

  • Tuning the STB.

  • Setting the TV so that the DTV channel is convenient to access.

  • Finding out what is on and selecting the desired channel, either by using the interactive on-screen guide, or by random surfing.

  • Using subtitles, accessing additional settings, navigating the menu structure.

  • Accessing interactive content (e.g. Teletext, BBCi).

The focus of this paper is on the accessibility of the user’s interaction with the TV, so the information related to the product’s acquisition and installation will be left out. Therefore, the remainder of this section is divided into the accessible remote control of the TV and the accessible interactive services on TV.

The emphasis of the related work is on the accessible interactive services subsection, since this is the specific research area of our contribution.

2.2.1 Accessible remote control

The accessible remote control topic covers efforts to provide all user groups with a means of remotely controlling basic TV functionalities such as channel up/down, volume up/down or turning the TV on/off.

The difficulties that the different user groups experience with conventional handsets (unclear labels, insufficient tactile feedback, poor position and size of buttons) are well known. Usability advances for different user groups have led to changes in the appearance of TV remote controls. As introduced by Rice et al. [38], the usability improvement process must take care with the difficulties of simplifying the remote control and getting it right, since, as Carmichael notes in [10], fewer buttons may reduce demands on memory, but they can increase the workload when more functions have to be mapped onto fewer buttons.

The research community has also driven efforts to develop new remote control paradigms. These efforts are listed and classified by Cesar et al. [13]. Some of the work that could help make the TV remote control more accessible includes the use of objects such as pillows, gesture recognisers, speech interaction and dialogue systems [6], and the use of devices such as mobile phones and PDAs. A notable work [40] proposes using a PDA with a 3D-visualisation-based interaction paradigm, which enables the direct manipulation of physical environments, such as those containing TV sets, where the user can control these devices or even drag and drop a presentation from the PDA to a TV set. Furthermore, thanks to its UPnP-network-based architectural approach [41], it gives the user direct access to the physical environment they may want to control without having to know the IDs or IP addresses of specific devices. Another interesting alternative, introduced by Kim et al. in [25], is interaction through a gesture-based TV remote control. Nowadays it is becoming common to find commercially available high-end remotes with gyroscopes or microphones, as well as game console controllers like the Nintendo Wiimote or Microsoft Kinect, which suggests that these new remote control paradigms could simplify some users’ lives in the future. Researchers are already investigating the use of these types of controllers in pointer-based [49] or gesture-based interactions with the TV set. Figure 1 shows possible alternatives that could help ensure the accessibility of the TV remote control.

Fig. 1 Possible alternatives that could help improve the accessibility of TV sets’ remote controls

The TV experience is usually shared among different users, traditionally limited to watching broadcast programmes. With the digitalisation of TV technology and the introduction of iTV applications, a new challenge has appeared: enabling multi-user interaction with iTV services. Different studies [49, 51] have analysed multi-user interaction with TV sets and have provided advanced solutions allowing interaction with more than one remote control at the same time.

The presented initiatives either target specific use-case scenarios and specific TV models, so that the developments cannot easily be reused in future implementations, or do not provide user interface adaptation features for accessibility. In this sense, Epelde et al. [18] proposed an approach to make TV sets’ remote controls accessible by design and to provide the ability to plug in different UIs developed for different controller technologies, such as those described above.

2.2.2 Accessible interactive services on TV

In recent years, several research projects have focused on the integration of interactive services into TVs. These services can be categorised according to their level of interactivity [34]:

  • Broadcast-only services: Electronic program guides (EPG), Local games, Video on Demand, Personal Video Recording …

  • One-way interactive services: Advertisement direct response, opinion polling, voting …

  • Two-way interactive services: TV banking, interactive TV content, email, social networking …

Apart from research projects, manufacturers reacted to user interest in interactive services and began including iTV applications in commercial products. This grew gradually from set-top boxes and PC-based media center solutions to the TVs themselves. The level of interactivity of the implemented services has followed a similar path, with successively more complex services being introduced.

With advances in computing power, user interface technology has evolved from simple on-screen displays (OSD) to modern widget technology, improving usability and user acceptance. A good example of these developments is the commercial availability of TVs supporting advanced user interface technologies such as Yahoo Connected TV, Google TV, Philips Net TV or Sony Applicast.

This growing interest in providing interactive services through the TV can also be found in recently approved standards such as HbbTV [46], in which regular broadcasts are complemented by interactive services available online, with the aim of providing those services seamlessly on a TV set.

Having introduced a generic view of the evolution of interactive services on TV, the study now focuses on research related to their usability and accessibility. The initiatives are analysed from a product lifecycle point of view: first, research on requirement gathering for specific user groups is introduced; second, accessible iTV application design methodologies are analysed; next, implementation efforts to make iTV applications accessible are studied; and finally, iTV evaluation techniques that assess accessibility are discussed.

With regard to user requirement gathering methodologies and studies, Rice et al. presented an approach centred on elderly users [38], where possible iTV applications are presented in theatrical sessions that allow audiences to empathise with the characters and, by questioning their actions, to think about the role of this technology. Once the audience’s needs, feelings and requirements regarding the applications are gathered, the authors propose to follow up with paper prototyping and brainstorming sessions to understand how older people would perceive using the targeted applications.

A related study [19] introduces various qualitative research methods applied to the design and evaluation of iTV applications, suggesting that applying these methods to the adaptation of applications to specific user groups can help in understanding their accessibility problems while developing universally accessible interactive TV applications.

Following ethnographic methods, Obrist et al. [33] and Bernhaupt et al. [7] present studies that investigate media consumption with the TV as the focal point.

Continuing with design and development methodologies, Springett et al. [43] present a methodology for developing accessible iTV applications. This methodology breaks the application’s tasks down into simpler tasks and identifies the capacities needed to fulfill each one. To define the needed capacities, the authors evaluate each task with users, checking whether they have any problem carrying it out and annotating the capacities required. In a subsequent phase, they evaluate prototypes with users, inspiring creative thinking and gathering suggestions through checklists that challenge participants to think about questions and ideas they may not have considered.

A notable guide [10] offers detailed information on the appropriate design of digital services deployed on TV for older people, in relation to their sensory and cognitive abilities and the demands the system may place on them. For example, clear visual/spatial navigational support, such as suitable highlighting and lowlighting, is referenced as important to relieve working memory load, minimise errors, and help guide attention through on-screen operations.

Regarding the implementation of accessibility solutions for specific services, most of the work to date has been related to EPG applications. Some have extended the text-to-speech capabilities implemented for basic TV interaction to EPG applications. One initiative integrates a paper-based remote for interacting with the EPG application [5]. Carmichael et al. [11] developed a virtual assistant, providing a new means of interaction between viewers and the content and functions of the EPG.

With the explosion in the use of Internet-based services, the development of TV-based interactive services such as social networks, telemedicine or telerehabilitation has grown substantially, mostly focused on mainstream users; however, work is still missing on providing these services in an accessible way on TV sets to less favoured user groups.

Concerning the available iTV platform implementations, commercially available platforms like Yahoo Connected TV or Google TV are based on closed middleware implementations that cannot easily be extended to enable interaction through more accessible or more advanced user interface technologies.

With regard to adapting graphical user interfaces to fit elderly users’ needs, Rice and Alm proposed and evaluated four prototype layouts (carousel, flipper, transparency and standard iTV interfaces) and navigational strategies for communication systems [38]. The authors underline the need to develop user interfaces that follow a continuity concept, meaning that users can relate them to concepts from their real life.

In the context of iTV application evaluation, one research work [14] has proposed an affective UI evaluation methodology that takes into account usability and accessibility requirements as well as the specific characteristics of the TV consumption context. This is a step forward in providing accessible iTV UIs, ensuring that the resulting UI can compete with the established TV experience.

Rice [37] presented the difficulties that visually impaired users face while consuming iTV services. This work emphasises parameters such as screen size, font size and colour, icon identification and screen layout. The main conclusion of the study is that, due to diverging requirements, the best way to approach the problem is personalisation. A related study by Springett et al. [42] examines the applicability of the W3C web accessibility guidelines to interactive television, focusing on the visually impaired. Regarding elderly users, few studies to date have evaluated the design of iTV applications for older people beyond the usability evaluation of existing services [32].

From the literature review, we conclude that techniques exist for the iTV service lifecycle (requirements gathering, design, implementation and evaluation). Additionally, there are specific implementations coming from the generic iTV research area, mostly EPG related, that provide accessibility solutions. Also, the availability of commercial iTV platforms targeted at mainstream users shows the direction of the near-future iTV service usage scenario.

However, in our opinion it is necessary to define an approach that allows the integration of services with different TV sets to meet different user needs. In order to guarantee the openness and adaptability of such an evolved scenario, an abstract-user-interface-based approach is needed. Providing such an approach can foster the development of accessible interactive TV service solutions, ensure the reusability of the different modules and provide a means of fast prototyping of new services or new TV set configurations (with new interaction technologies), filling the existing gap in studies and solutions in the accessible interactive TV services research area. Figure 2 presents the evolution from the current scenario to an abstract-user-interface-based evolved scenario.

Fig. 2 Current iTV application development in contrast to an evolved scenario that allows the integration of services with different TV sets to meet different user needs

3 Universally accessible interactive services on TV

3.1 User interface personalisation technologies

The three main concepts regarding user interface personalisation are the possibility to choose the interaction device, the possibility to choose the interaction technology (multimodality) and the personalisation of the user interface presentation.

Traditionally, user interface personalisation has been achieved by developing a different application for each configuration need. In the Web world, style-sheet-based solutions (Cascading Style Sheets, CSS [8], and Extensible Stylesheet Language Formatting Objects, XSL-FO [4]) have been developed, which allow the presentation of the user interface to be modified for different users and different interaction devices.

The main benefits of style sheet technologies come from the separation of the presentation from the structure and content of the document. This technology allows more precise control (outside the markup) over the spacing between characters, text alignment, the position of objects on a web page, audio and voice output, font characteristics, etc.

Moreover, abstract user interface technologies such as the User Interface Markup Language (UIML) [1], the Extensible Interface Markup Language (XIML) [36], XForms [9] and the Universal Remote Console (URC) [24] allow the complete user interface to be changed to meet each person’s needs and preferences. These technologies fulfill the three main concepts of user interface personalisation introduced earlier in this section (the possibility to choose the interaction device, the multimodality options and the personalisation of the user interface presentation). Furthermore, they provide these properties without needing to develop a different application for each configuration need, without limiting the applications to the Web, and while enabling the implementation of multimodal interfaces.

A reference study [45] analyses four abstract user interface technologies (UIML, XIML, XForms, URC) for a scenario where abstract user interface technologies enable any user to access and control any compatible device or service in the environment, using any personal controller device. For the proposed scenario and for the abstract user interface representations being evaluated, the study defines the following properties as desirable: applicability to any target and any context of use; personalisation; flexibility; extensibility; and simplicity. These properties are specified as the following technical requirements that a language for abstract user interface representation must meet:

  • Separation of data from presentation.

  • Explicit machine interpretable representation of the target’s user interface elements, commands, dependencies, relations and semantics.

  • Flexibility in the inclusion of alternate resources and compatibility with concrete user interfaces.

  • Support for different interaction styles.

  • Support for remote control.

UIML and XIML are not well suited to the proposed scenario, particularly with respect to the separation of data from presentation and flexibility in resource substitution. XForms is a W3C recommendation with a narrower scope but powerful representation capabilities. It focuses on gathering input provided by the user. Some information display facilities are also included in XForms, but in general these are provided by a surrounding XHTML context or another host markup language in which the form is embedded (e.g. SVG, VoiceXML). In this sense, it requires the presence of a suitable host language and framework to fulfill the requirements of supporting different interaction styles and providing flexibility in the inclusion of alternate resources. URC has a broad scope and includes a framework for the delivery of context-of-use-specific and abstract user interface resources in addition to the abstract language.

The Guide project [22] is based on the PERSONA UI framework [44]. The PERSONA UI framework uses XForms technology for the dialogs that an iTV service presents to the user, together with content-specific adaptation parameters. These adaptation parameters are later enriched by a so-called Dialog Manager, which adds information related to the user and context profiles. Finally, the system chooses the input/output interaction handler (I/O handler) that best fits each user profile. Based on the modality- and layout-neutral representation given by XForms and on the adaptation parameters, the selected I/O handler performs the transformation needed to adapt the content to each user. The Guide project adds two innovative concepts to the PERSONA UI framework: user simulation at design time and UI adaptation at runtime through continuous evaluation of the context.

In contrast to the transformation approach followed in the PERSONA UI framework, the URC framework proposes a plug-in approach. This plug-in approach relies on a standardised interface definition, which allows pluggable user interfaces to be created, also by third parties. Compared to the transformation approach, developing a pluggable user interface is much simpler than developing an I/O handler: in the first case, concrete user interface development is targeted at UI designers and developers, while in the second case an expert in the PERSONA UI framework is needed. Additionally, the resource server concept included in the URC framework enables the deployment and update of UIs through repositories on the Internet.

Regarding user simulation at design time and UI adaptation at runtime, both are clearly applicable to the URC framework and would help enrich the generic user interfaces it generates, bearing in mind, however, that UIs developed specifically for particular users, with real users involved in the development process, will fit those users’ needs better.

Our proposed approach targets a scenario similar to the one studied in [45], where different TV set interaction paradigms enable any user to access different services, provided either locally or remotely. We have therefore decided to use the URC abstract user interface representation as the technological base of our solution. To aid the reader in understanding our proposed approach, the URC framework is introduced in the following section.

Next, Table 1 compares the eligible iTV frameworks found in the state of the art with regard to the desirable characteristics for providing universally accessible interactive services through TV sets.

Table 1 Comparison of available iTV frameworks regarding the desirable characteristics for the provision of universally accessible interactive services through TV sets

3.1.1 The URC framework

The Universal Remote Console (URC) framework [24] was published in 2008 as an international standard composed of 5 parts (ISO/IEC 24752). The standard’s main concept is the “user interface socket” (or “socket” for short), an interaction point between a pluggable user interface and the device or service to be remotely controlled. In the context of the URC framework, user interfaces are either generic or specific: generic user interfaces are generated automatically from the description of the socket, whereas specific user interfaces are developed based on the knowledge acquired from the detailed description of the socket.

The URC technology is an open user interface platform that allows third parties to create pluggable user interfaces and to use them with any device or service that exposes its functionality through a socket. The framework includes “resource servers” as global markets for all types of user interfaces and resources needed to interact with the applications and services made available to the user community.

In addition, the Universal Control Hub (UCH) is an implementation of the Universal Remote Console (URC) as a gateway, initially developed in the framework of the digital home [55]. The UCH architecture is shown in Fig. 3.

Fig. 3 The Universal Control Hub architecture

The main features of the UCH are:

  • It acts as a gateway between the target devices and services and the client controllers, each with its own form of communication and control protocol. Without a gateway, the different parts would not be able to talk to each other.

  • The user interface socket is based on standards: The UCH is based on the URC framework previously described.

  • It provides the option to use different user interface protocols: the UCH enables the implementation of different user interface protocols (HTTP through DHTML, Flash, etc.) and their use by the client controllers.

  • Globally available resource servers: the UCH can obtain resources, such as target adapters, target discovery modules and user interfaces, in a distributed manner.

In the UCH architecture, a User Interface Protocol Module (UIPM) is responsible for presenting the functionality of one or more sockets to the user through a user interface rendered on a controller. Controllers whose software is aware of this architecture may access the socket of the target device or service and its atomic resources directly, in order to create an appropriate user interface based on the socket elements and their values.

Finally, the URC-HTTP protocol is a user interface protocol that provides remote controllers with direct access to the sockets running in the UCH. This protocol defines the messaging over HTTP and the functions for a controller to access the sockets of a UCH. The implementation of this protocol in the UCH is optional, but once implemented it provides a standardised and powerful way for controllers to access the UCH.
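To make the role of a socket-level protocol such as URC-HTTP more concrete, the following minimal sketch shows how a controller-side client might read and update socket elements of a UCH over HTTP. It is illustrative only: the host address, URL paths and element names used here are placeholders of our own and do not reproduce the normative URC-HTTP message formats defined in the specification.

```python
# Minimal, illustrative controller-side client for a UCH exposing sockets over HTTP.
# NOTE: the address, URL paths and element names below are placeholders chosen for
# this example; the normative URC-HTTP messages are defined by the protocol
# specification and are not reproduced here.
import urllib.request

UCH_BASE = "http://192.168.1.10:8080"  # assumed address of the UCH gateway


def get_socket_element(socket_id: str, element: str) -> str:
    """Read the current value of one socket element (placeholder path)."""
    url = f"{UCH_BASE}/sockets/{socket_id}/elements/{element}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")


def set_socket_element(socket_id: str, element: str, value: str) -> int:
    """Request a change to a socket element's value (placeholder path and body)."""
    url = f"{UCH_BASE}/sockets/{socket_id}/elements/{element}"
    req = urllib.request.Request(url, data=value.encode("utf-8"), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # A TV-side controller could poll the socket and render its elements, e.g. the
    # state of a hypothetical videoconference target, and push user actions back.
    print(get_socket_element("videoconference", "callState"))
    set_socket_element("videoconference", "dialCommand", "contact-3")
```

In the actual UCH, such requests would be issued by an URC-HTTP-aware controller or UIPM; the sketch only illustrates the controller-to-socket round trip that the protocol standardises.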

3.2 Our approach

In this paper we present an approach that proposes the use of the URC framework, in the form of its gateway-oriented architecture, the UCH, to provide accessible interactive services on any TV set.

We adopted the URC framework because it makes a clear separation between the service we want to access and the UI we want to use to access it. This way, we are able to provide accessible interactive services on any TV set. The TV sets used in this approach have to fulfill the following requirements: they must implement a communication technology and a programmable user interface system. These TV sets can implement varying levels of accessibility features, depending on the user’s requirements, and they may have different form factors, ranging from a TV to a set-top box or a PC-based media center solution.

Regarding the services to be integrated, they can have different levels of openness: they may have proprietary access protocols, defined access APIs or web service specifications.

The integration of services into the UCH architecture is achieved by defining the required XML files (UI Socket, Target Description, Target Resource Sheets) and by implementing the corresponding code for the target adapter layer (Target Discovery Module and Target Adapter) for each interactive service.
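As an illustration of what the target adapter layer involves, the sketch below outlines a hypothetical Target Adapter and Target Discovery Module for a contact-list service, written in Python rather than in the UCH's own implementation language. The class and method names are our own illustrative assumptions and do not reproduce the UCH reference API; a real integration also requires the accompanying XML descriptions listed above.

```python
# Illustrative sketch of the target adapter layer for one interactive service.
# The class and method names are assumptions made for this example; they do not
# reproduce the UCH reference implementation's API.

class ContactListAdapter:
    """Hypothetical Target Adapter: maps socket elements onto a backend service."""

    def __init__(self, backend_url: str):
        self.backend_url = backend_url  # e.g. the contact database endpoint
        self.socket_state = {"contacts": [], "selectedContact": None}

    def read_element(self, name: str):
        """Return the current value of a socket element to the UCH."""
        return self.socket_state[name]

    def write_element(self, name: str, value):
        """Apply a value change requested through the socket to the backend."""
        self.socket_state[name] = value
        # ...here the adapter would call the service's proprietary API or web service.


class ContactListDiscovery:
    """Hypothetical Target Discovery Module: reports available targets to the UCH."""

    def discover(self):
        # In a real deployment this could use UPnP, mDNS or a static configuration.
        return [ContactListAdapter("http://localhost:9000/contacts")]
```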

Through the implementation of a UCH User Interface Protocol Module (UIPM) we can support any communication protocol compatible with a given TV set. Using UIPMs, we are able to plug the different pluggable UIs into TV sets.

After integrating the services with the UCH and implementing the required UIPM, we are able to create UIs for any service. The approach also allows the creation of aggregated UIs composed of different services. At the same time, the UCH can be connected to different resource servers on the Internet that offer UIs and UCH integration modules, which may be downloaded and used directly.

Figure 4 outlines our approach to providing accessible interactive services on TV sets. The figure shows different target services, integrated using their own protocols, being accessed from different TV sets. The resource server object reflects the option of using UIs and integration modules downloaded directly from the Internet.

Fig. 4 Provision of universally accessible interactive services on TV sets

The UCH implementation can be embedded into a consumer broadband router or into the TV itself, or, for a more powerful UCH with extended functionality, run on a dedicated PC.

Finally, since this is an open and standardised approach, it is worth noting that, in addition to the presented Internet-based or locally provided interactive services, it can also be used to interact with devices in different environments. In this sense, the i2home project [27] provided the URC ecosystem with adaptation modules for several devices in the home environment. In short, being based on a standard helps guarantee the validity of our approach over time.

4 Implementation

We focused the implementation of our approach on elderly users who are 60 years or older, with normal cognitive ageing and an active life. With this in mind, we integrated the services best suited to improving their quality of life, integrating them into mainstream society, and introducing them to new technologies through familiar interfaces.

The current elderly generations, as well as less technically inclined people, adopt the traditional interaction means of the TV more easily and with less rejection. However, new services deployed on the traditional TV do not correspond to elderly users’ mental model (which comes from the classic TV), which makes them get lost. It is necessary to develop advanced multimodal user interfaces [31] that nevertheless use classical interaction resources, so that the different user groups, and especially elderly users, are able to understand how to operate the provided applications.

The following services have been integrated into the UCH: videoconferencing as a social inclusion application, an information service as a personalised information provider, and audio books, educational content and peer-to-peer gaming (quizzes, chess) to enhance leisure time and cultural enjoyment.

Concerning the specific needs of seniors related to the iTV platform’s user interface, the requirements and guidelines defined for the development and extension of the implemented platform are introduced in condensed form in Section 4.1.

With regard to the targeted TV set’s UI, multimodal interaction has been developed together with a simple, easy-to-navigate graphical user interface. The multimodal interaction includes both simplified remote control and speech interaction modalities. The dialog system technology used in the implementation is explained in Section 4.2.

The TV set’s UI system is composed of a main menu with access to the different applications and the interfaces of the corresponding applications.

For the discussion of the evaluation and its results we will concentrate on the videoconference and the information service. Accordingly, these applications are explained in more detail in Sections 4.3 and 4.4.

From the implementer’s perspective, what is remarkable is the ease of integrating already existing modules, such as the dialog system or the introduced services, and the simplicity of developing user interface instances of the selected TV UI solution for new services or for devices from different environments.

4.1 Elderly user needs and followed design guidelines

4.1.1 Visual domain needs

Among the elderly, the impairment of visual functions is part of normal ageing. The ageing process implies a decline in visual functions [16], including a reduction in visual acuity, in contrast sensitivity (when trying to discriminate brightness differences) and in the ability to detect fine details.

4.1.2 Adopted visual domain design guidelines

As presented above, the changes in vision that occur with age can make it more difficult to read computer or TV screens. Considering the previously described characteristics of elderly people in the visual domain, the following recommendations were adopted for the design of user interfaces in the targeted platform:

Use of icons/graphics:

  • Concepts in application screens must be presented using an adequate combination of text, graphics and, when considered appropriate, audio.

  • Graphics should be relevant and not for decoration.

  • Icons should be simple and meaningful.

  • Animation should be kept to the minimum necessary, to minimise user distraction.

Use of text, font type and size:

  • Fonts in the application screens must be resizable.

  • Condensed letters should be avoided.

  • Take into account that text presentation on TV terminals is poorer than on PC monitors.

  • Upper-case-only fonts are preferred for application messages and text, as they are considered easier to read. Italics are not recommended, since italic text is easily degraded on TV terminals.

  • Boldface can be used to emphasise important texts or to increase the readability.

Use of colours:

  • Following different authors’ guidelines for interface design [28], background screens should not be pure white or change rapidly in brightness between screens. There should be high contrast between the foreground and the background.

  • Colour must be used to attract the attention of the users to the most important elements of an application screen (e.g. the currently selected element). Contrast is the most important factor to keep in mind when choosing the colours to be used in the applications.

  • Take into account that the colour performance of TV and PC terminals is completely different.

Screen titles:

  • Every application screen must have a text title identifying the purpose of the screen.

  • Every application screen must offer a help option to the user.

4.1.3 Auditory domain needs

Auditory decline in the elderly may include the presence of tinnitus, which is characterised by continuous sounds in the ear (buzzing, soft ringing or little bells) and can be caused by small abnormalities in the blood flow reaching the ear [39]. Presbycusis, the loss of the ability to discriminate between medium- and high-frequency sounds, can mask peripheral sounds and hinder the detection of a concomitant tinnitus.

Overall, the declines described above, together with excessive accumulations of ear wax, make it somewhat difficult for older people to deal with their daily tasks. Words are sometimes hard to understand, another person’s speech may sound slurred or mumbled (especially with background noise), certain sounds can be annoying or loud, and TV shows, concerts or parties may be less enjoyable because of hearing difficulties.

4.1.4 Adopted auditory domain design guidelines

Use of audio feedback:

  • Background noise should be minimised.

  • For each possible state of an application, the user should be informed of where they are in the interaction and which actions are possible at that point.

  • Audio feedback must be used with great care, since it can become annoying and frustrating when it is too insistent. It might also affect the privacy of the users, since other people in the same room may overhear private information.

  • Speech rates should be kept to 140 words per minute or less [20]. Artificial (synthesised) speech messages that do not closely imitate natural speech should be avoided.

  • For acoustic signals to attract attention, a frequency between 300 Hz and 3,000 Hz should be used [20]. Moreover, older adults miss attention-getting sounds with peaks over 2,500 Hz [23].

4.1.5 Cognitive domain needs

In normal ageing, some functions remain stable: automatic and overlearned responses, remote memory, semantic memory (memory for the meaning of words and concepts), and verbal reasoning and comprehension.

In contrast, other functions decline: information processing speed decreases, with slower reactions and slower reasoning and thinking capacity, and older people may experience a decline in working memory [23], defined as “the temporary storage and manipulation of information that is assumed to be necessary for a wide range of complex cognitive abilities” [3]; in other words, the capacity to maintain some information and mentally operate on it at the same time.

According to Zajicek et al. [53], 82% of their elderly users were unable to build useful conceptual models of the workings of the Web. Their confidence in making the decisions needed to construct conceptual models was low, and they became confused and frustrated. In this sense, information and UI navigation should be easy to master and should seek to increase users’ confidence, avoiding unexpected behaviours.

4.1.6 Adopted cognitive domain design guidelines

  • Language used should be simple and clear, avoiding irrelevant information on the screen.

  • Important information should be highlighted and concentrated mainly on the centre of the screen.

  • Screen layout, navigation and terminology used should be simple, clear and consistent.

  • Ample time should be provided to read information. Time-critical processes must be avoided. Feedback must also be adapted to the expected slower responsiveness of the users.

  • When a critical action has been selected by the user (e.g. exiting an application), the system must always clearly notify the user of this circumstance and must request their explicit confirmation.

  • Web-based interfaces must provide backward and forward navigation.

  • Whenever possible, processes must be sequential and one way.

  • The presence of parallel or simultaneous tasks must be avoided at all costs.

  • The use of complex navigation structures composed of multiple tree-like levels must also be avoided.

  • The demand on working memory should be reduced by supporting recognition rather than recall and providing fewer choices to the user.

  • Use new objects with new appearances for new interface behaviours, to avoid interference with the user’s previous knowledge.

  • The same actions must be implemented using exactly the same procedures in all applications.

4.1.7 Motor domain needs

With ageing, different joint diseases and bone fractures occur more frequently. These conditions significantly restrict joint movement and increase the pain involved in carrying out daily tasks. People with these pathologies make slower movements, and their lack of precision limits their dexterity in carrying out those daily tasks and in interacting with mainstream technology through the provided controls.

4.1.8 Adopted motor domain design guidelines

  • For users with different forms of motor dysfunctions, the graphical interface should be made less sensitive to erratic hand movements.

  • The slowness and lack of precision of movements associated with the pathologies described above could affect the use of scroll bars or image maps.

  • Enlarging the interface does not only have implications for the visual domain: by allowing users to adapt the size of user interface elements as much as they want, the need for fine motor coordination can be reduced.

  • The remote controller must be extremely simple and must have as few keys as possible.

  • The size of the remote must be adequate to its planned use by the target users.

  • The remote must have big, individual, well separated keys.

  • The design of the remote must be ergonomic, must adapt to the shape of the hand and must have volume.

  • Every key must have a drawing on its surface representing its function.

4.2 Dialog system

We conceive the multimodal dialog system as a scalable and modular unit that provides voice control over the applications integrated into the targeted platform. The core component of the multimodal dialog system is the Ontology-based Dialog Platform (ODP) framework. It provides an open architecture for building multimodal, task-oriented user interfaces that is in accordance with large parts of the W3C multimodal architecture proposal [30].

The ODP framework itself is built from building blocks (see Fig. 5) that constitute the basic dialogue system modules:

Fig. 5 The core building blocks of the ODP framework

  • Extended Typed Feature Structures (eTFS) is a data representation format that unifies the properties of RDF/RDFS (Resource Description Framework) [29] and typed feature structures [12]. The encoding of internal data as eTFS is in accordance with the general approach to ontology design, such that we use an <object> tag to denote a complex object of a certain type.

  • PATE [35] provides a framework that supports the development of applications for multimodal dialogue systems. PATE’s architecture is centred on the idea of three separate data storage facilities: (i) the goal stack, (ii) the working memory, and (iii) the long-term memory. The working memory holds the activated instances, so-called Working Memory Elements (WMEs), which are accessible for processing, i.e., for rule applications. The long-term memory provides persistent storage for all instances of the type hierarchy the system has in the background. The purpose of the goal stack is to represent the focus of attention within the system’s processing [2]. The placement of WMEs among the three data storage parts is governed by an activation value, which changes during the processing flow and as an effect of rule applications (a minimal sketch of this memory organisation is given after this list).

  • The Ontology-based Middleware Platform for Multimodal Dialogue Systems (OBM) is a middleware platform and application-programming framework for building multimodal dialogue systems. It is based on the eTFS API for system-wide data representation and PATE for implementing rule-based message routing. The OBM core functions as a server component that ties all modules and services together and maintains their interoperability. At run-time the server is responsible for managing the deployment of system modules.

To enable voice interaction for the applications, the technical challenge was, first, to integrate the ODP framework into the UCH and, second, to provide the TV with the abstract presentation encoding the graphical content. In order to maintain a consistent context within the dialog system with respect to what is happening on the target side (e.g. the Information Service target) and on the user interface side (the TV), the internal state of the dialog system has to keep track of and administrate the actions taking place at both ends. As can be seen in Fig. 6, the ODP docks onto the UIPM layer by means of two different services: one speaking the URC-HTTP protocol, i.e., receiving and posting socket modifications from/to the targets, and one speaking the protocol that can be interpreted by the TV client:

Fig. 6 The data flow in the dialogue system’s architecture demonstrates how the Information State Module together with its building blocks is synchronised via two channels dedicated to the input coming from the User Interface (TV) and the backend services

  • The Function Modeler Service supplies the ODP with the information returned by the targets, which is converted into eTFS objects and serves the Information State Module in updating its own state.

  • The Presentation Planner Service invokes event handling on the UIPM. The TV client exclusively processes rendering information, which is prepared by the UIPM. In addition, in case the context information is not fully covered by the information retrieved from the target, the dialog system uses the rendering information as complementary input.

Besides the services dealing with the communication with the TV and the backend services, the different speech processing tasks within the multimodal dialog platform are allocated across multiple modules (see Fig. 6):

  • The Interpretation Manager carries out natural language interpretation. For that purpose it processes the word lattice reflecting the user’s vocal utterance and produces a semantic interpretation of the utterance. In particular, the natural language understanding component interprets the recognised spoken input of the user and converts it into instances of the ontology.

  • The Information State stores and manages the ontology-based representation of all targets, which represent the corresponding services on the backend side. Additionally, the Information State administrates and makes available a coherent representation of the displayed graphical content. The synchronisation of the internal state of the dialog system with the backend components and with the content on the screen builds the basis for enabling access to multimodal interactive services. Depending on the state configuration, the Command Manager of the Information State Module retrieves the required command to manipulate the states of the backend services and/or the graphical user interface (GUI).

  • The Interaction Manager propagates these commands and invokes the adapters that speak the URC-HTTP protocol, and hence takes care of the information exchange with the target layer (backend services) and the presentation layer (TV). Typically, when the user utters a command specific to the presentation layer, e.g., “Go to the Main Menu”, an ontological concept SwitchApplication is first instantiated and the Interaction Manager then invokes the Presentation Planner to build and send the appropriate message to the UIPM (a simplified sketch of this routing is given after this list).
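The simplified sketch below follows the routing just described for a presentation-layer command such as "Go to the Main Menu": the interpretation step maps the utterance to an ontological concept, and the Interaction Manager then forwards the resulting command either to the UIPM (presentation layer) or to a backend target. The class names, the keyword-based "interpretation" and the dispatch logic are deliberately simplified stand-ins for the ODP components, not their actual code.

```python
# Simplified illustration of the dialog system's command routing. The classes and
# the keyword-based "interpretation" are stand-ins for the real ODP components.

class InterpretationManager:
    def interpret(self, utterance: str) -> dict:
        # A real system derives this from a word lattice and the ontology.
        if "main menu" in utterance.lower():
            return {"concept": "SwitchApplication", "target": "MainMenu"}
        return {"concept": "Unknown", "target": None}


class InteractionManager:
    def __init__(self, uipm_adapter, target_adapter):
        self.uipm_adapter = uipm_adapter      # speaks the TV client's protocol
        self.target_adapter = target_adapter  # speaks URC-HTTP towards the targets

    def dispatch(self, command: dict):
        if command["concept"] == "SwitchApplication":
            # Presentation-layer command: build and send a message to the UIPM.
            self.uipm_adapter.show_screen(command["target"])
        else:
            # Backend command: manipulate the corresponding socket on the UCH.
            self.target_adapter.invoke(command)


class FakeUipm:
    def show_screen(self, name):
        print(f"UIPM renders screen: {name}")


class FakeTarget:
    def invoke(self, cmd):
        print(f"Target socket updated: {cmd}")


if __name__ == "__main__":
    command = InterpretationManager().interpret("Go to the Main Menu")
    InteractionManager(FakeUipm(), FakeTarget()).dispatch(command)
    # -> UIPM renders screen: MainMenu
```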

In the tested prototype the user can alternate between two input modalities: voice interaction and the remote control. Possible user utterances are described by a grammar maintained in a W3C-standard-compliant format. However, not only predefined speech input is accepted: the framework allows new grammar entities to be loaded on the fly. This is useful for dynamic concept names (e.g. the title of a movie), which are created at runtime using information available from the web.

4.3 Videoconference service

The main motivation for integrating videoconferencing was that loneliness is one of the biggest problems the elderly suffer from. To overcome this problem, connectivity applications are thought to improve the quality of their ageing: using them, the elderly can keep in touch with their family or friends, making their life happier and more entertaining.

Even though there are mainstream solutions on the market that integrate videoconferencing on a TV set, such as Skype-based solutions, we felt that providing this functionality through a personalised and accessible UI could improve elderly users’ interest in, and adoption of, this service. The videoconference service has been provided by integrating the open source Ekiga software as a target service into the UCH.

The videoconference application’s user interface developed for our TV UI system is composed of three interfaces. The first, “Call Contacts”, displays the user’s contacts and allows a call to be established with one of them by pressing OK on the simplified remote. The second, “Incoming Call”, appears in the foreground when a new call arrives; it is composed of a picture of the calling contact and the options to take or reject the call. Users choose one of the options with the left and right arrow keys on the simplified remote and confirm their choice by pressing OK. The third, “Ongoing Call”, shows the video streams of the call and has a hang-up button that is activated by pressing OK on the simplified remote.

The videoconference service makes use of the platform’s contact management service. These contacts can be easily managed on a PC through a user interface developed for the contact database’s UI Socket.

Figure 7 shows screenshots of the main menu and the three interfaces developed for the videoconference service.

Fig. 7 Screenshots of the main menu and the three interfaces (call contacts, incoming call and ongoing call) developed for the videoconference service

4.4 Information service

An important concern we wanted to address was how to make the acquisition of information on the web easy to master. We believed that an application with such features, integrated into the platform, would be of special interest to the elderly. To this end, we decided to avoid the use of web browsers and direct access to search engines (instead, the information service issues queries on behalf of the user), and even to hide the fact that the user is accessing the Internet at all. To make the content transparent for the elderly, we encapsulate the knowledge about the web sites in the project’s ontology. With this approach, “web surfing” is replaced by browsing through an ontology tree.

Here, the ontology defines the conceptual relations in the domain. Furthermore, it assigns web pages to concepts and specifies the rules for extracting the documents. In a second step, the ontology provides a description of the content related to specific web pages. The preferences and interests of a specific user help to further restrict the space of concepts. Users’ interests are learnt by statistical evaluation of previous user behaviour. Combining a probabilistic approach and a vector space model, a personal recommendation service provides interesting documents, which are instances of the favoured concepts in the ontology tree (a minimal sketch of such a scoring scheme is given after this list). For instance, two instantiated concepts established by the users’ preferences are:

  • TV: A personalised guide to the daily TV programme. The user can browse through all programmes split into categories (e.g. movies, sports, series, music and more). This category is displayed as the sixth category in the information service’s menu interface in Fig. 8.

    Fig. 8 On the top, a screenshot of the start page of the Information Service is displayed. Preselected topic areas are distinguished by big icons and ordered by user preference, which may change with usage. Below, two screenshots of the navigation through the content are shown

  • Wellness: Information about a healthy life style, suggestions for staying in good shape, news about advanced techniques in medicine and so on; see the last category in the information service’s menu interface in Fig. 8.

Fig. 9 Photograph showing the technical set-up of the evaluations in an older people’s association
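Since the paper does not give the exact formulation of the recommendation service, the sketch below shows one plausible way to combine a probabilistic preference estimate with a vector space similarity, as described above: each concept's prior is estimated from how often the user has visited it, and documents are ranked by the cosine similarity of their terms to the user's term profile, weighted by that prior. The combination rule and the term representation are assumptions made for illustration only.

```python
# Illustrative ranking combining a probabilistic concept preference with a vector
# space (cosine) similarity. The combination rule is an assumption made for this
# example, not the exact algorithm used by the information service.
import math
from collections import Counter


def concept_prior(history):
    """Estimate P(concept) from how often the user visited each concept."""
    counts = Counter(history)
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}


def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def rank(documents, user_profile: Counter, history):
    """documents: list of (title, concept, term Counter). Returns best-first list."""
    prior = concept_prior(history)
    scored = [(prior.get(concept, 0.0) * cosine(user_profile, terms), title)
              for title, concept, terms in documents]
    return sorted(scored, reverse=True)


if __name__ == "__main__":
    docs = [("Healthy diet tips", "Wellness", Counter({"diet": 3, "health": 2})),
            ("Tonight's films", "TV", Counter({"film": 4, "evening": 1}))]
    profile = Counter({"health": 5, "diet": 1, "film": 1})
    print(rank(docs, profile, history=["Wellness", "Wellness", "TV"]))
```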

In the following sections, the validation of the introduced services with users is presented. First, the evaluation methodology is introduced, and then the results of the user tests are discussed.

5 Evaluation method

The platform was developed following a user-centred design approach: it was evaluated at the end of each iteration, and the results of each evaluation served as input for the developments of the next iteration.

For the initial user tests, the platform was tested in lab settings by 10 users in Spain and 30 users in the United Kingdom. The tests included interaction with the initial prototypes, together with personal interviews and focus groups, to capture the users' experience and to compile the input for the following development period.

For the final test of the platform, the system was tested at home by 16 users in the United Kingdom and by 4 users in Spain for a period of 3–4 weeks. The platform's final prototype was also tested in elderly associations in Spain with a meaningful number of users. The results reported in the following subsections concentrate on this final evaluation of the platform, carried out in different older people's associations in the north of Spain, where the system was shown to and discussed with a number of users large enough to enable subsequent quantitative statistical analyses.

5.1 Participants

The sample recruited for the final evaluation of the platform was composed of 83 participants. After revising the answered questionnaires and refining the data, the final sample from which the results were compiled was composed of 47 participants, 13 male and 34 female, aged 52 to 89 (mean = 71.11; sd = 7.12), from the town of Zarautz (n = 23) and the city of San Sebastian (n = 24), in the north of Spain. All of them attended elderly associations in their respective locations. They had been living in their current location for a mean of 46.58 years (sd = 19.04), which identifies them as stable members of their respective communities. 27.7% had no formal studies, 46.8% had completed primary studies, 8.5% secondary studies, 12.8% technical studies and 4.3% university studies. They had been active workers throughout their lives, with a mean working life of 36.08 years (sd = 12.22). 51.1% of the sample were married, while 38.3% were widowed.

5.2 Technical set up

Two laptops, a headset, a simplified remote control and the corresponding signal receivers were set up in a room where the demonstration would take place on a group basis, and the main computer running the platform was connected to a projector. At each testing site, one evaluator and one observer explained the whole procedure to the users, who were gathered as a group in the room. Participants were given a consent form to sign, confirming their agreement to participate in the evaluation session. Afterwards, they were given a questionnaire, which included the following sections: 1) sociodemographic data; 2) quality of family and social contacts; 3) leisure activities; 4) satisfaction with life; and 5) specific evaluation of the platform's services (here, questions about the system in general and about the individual applications in particular were asked).

Figure 9 shows the technical set up of the evaluation sessions.

Fig. 9 Photograph showing the technical set up of the evaluations in an elderly association

5.3 Test steps

After approximately 30 min to fill in sections 1 to 4 of the questionnaire, the main menu was presented and a brief explanation of its usage was given to the participants. Users were asked to give written answers to questions related to the main menu, as well as to provide verbal feedback on what they were seeing on the screen, consulting the staff at any time. The same procedure was followed for each application (from the videoconference to the information service), showing the interface and the functioning (via the simplified remote control) of each application. A demonstration of the voice interaction (in English) was also given. After all the services had been presented, users were asked to share aloud any additional comments or feedback. The questionnaires were then collected and the users were thanked for their participation.

6 Results

6.1 Overall impression about the system and main menu

Opinions about the system were divided among those without a clear position, those who thought it was a good application, and those who stated from the beginning that "this application may isolate people… it might isolate them in their homes". However, a majority of 68.2% thought it would help to improve their social relationships, 71.4% thought it would help them keep closer contact with their relatives, 73.7% thought it would help them get closer to their friends, and 84.2% were confident that it would improve their quality of life.

Regarding the main menu interface, a majority of the sample (53.3%) found it pleasant or very pleasant and not tiring for the eyes. All of them considered the interface readable, with an appropriate font size and colour. The voice control demonstration worked well and impressed the participants, although they expressed concerns regarding the use of headsets, the availability of the technology in other languages and the accuracy of the voice interaction with elderly users.

Table 2 summarises the results of the evaluation with users.

Table 2 Summary of evaluation results

6.2 Videoconference service

The layout of the videoconference service was described as pleasant by 48% of the sample (another 48% found it neutral, neither pleasant nor unpleasant). After seeing how it worked in the demonstration session, 81.3% thought it was a useful application and 50% would use it regularly (the others would rather continue with the regular phone); 54.5% would use it to talk to family, and 36.4% to both family and friends. It was a very well rated application ("it helps you keep in touch easier… it brings you closer to your relatives… in this way, I can see them").

Some of the participants were familiar with this form of communication through Skype on a PC. However, it was considered an advantage that the videoconference system allowed the TV set to be used for other purposes (watching a film, etc.) while the service ran in the background; the user could simply be watching TV and, if an incoming call arrived, it would pop up on the TV screen. This simplicity of use was highly appreciated by the users.

Moreover, some of the opinions stated by the users were related to their perception of the videoconference service. More specifically, there was a significant relationship between feeling happy with the frequency of family contacts and perceiving the videoconference as a service that could improve their quality of life (χ²(3) = 12.058, p < .01): 63.15% of the users who were already happy with how often they met their relatives thought that the videoconference would improve their quality of life. In addition, 66.66% of the users whose social relationships took place mainly outside the home were precisely the ones who described the videoconference as a service that could improve their social relationships (χ²(1) = 4.421, p < .05). These results suggest that a service like this, rather than improving the quality of life of those with fewer social relationships, is more likely to be accepted by those who perceive it as a complement to an already existing, successful social network.
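
For reference, relationships of this kind were assessed with chi-square tests of independence on contingency tables of the questionnaire answers. The following sketch only illustrates how such a test is computed; the table below contains purely hypothetical counts, not the study's data:

    # Hypothetical 2x4 contingency table (df = (2-1)*(4-1) = 3); NOT the study's data.
    # Rows: happy / not happy with the frequency of family contacts.
    # Columns: four response levels on "videoconference could improve my quality of life".
    from scipy.stats import chi2_contingency

    observed = [[2, 3, 8, 12],
                [6, 7, 5, 4]]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")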

6.3 Information service

50% of the sample described the layout as pleasant. After being exposed to its use, 87.5% considered it useful, but only 50% would use it regularly. They liked the fact that local content was available, but stressed that the content would have to be updated regularly in order for the service to be useful. Many stated that they would continue with the regular newspaper and that this kind of technology may be good "for younger people". Some complained that the layout contrast was not good and that the fonts should be bigger, but they were told that this could be adjusted when using the service on a TV set.

More details on the evaluation and the results of the applications developed to validate our approach can be found in [15].

7 Conclusions

In this paper we have presented a new approach to integrate all kinds of interactive services with the TV set in a way that allows personalising the UI to the needs of each user group. The proposed approach is based on the ISO/IEC 24752 Universal Remote Console (URC) standard and its implementation in the UCH architecture.

Our proposal for an "ordinary" TV user interface builds on the contributions of interactive TV research. This approach allows access to interactive services from common TV sets, through the provision of personalised plug-and-play user interfaces that are rendered on the TV set. Following this approach permits easy integration of new accessible services into our TV sets, including services provided locally by intelligent environments. At the same time, it allows these interactive services to be consumed on new TV set configurations with their corresponding interaction paradigms, including advanced UIs, natural language UIs, assistive technologies and multimodal UIs.

Moreover, having the required modules available on a resource server on the Internet allows us to deploy and update our systems easily and opens a new market for service integrators and UI developers.

The provision of an approach like this fosters the development of accessible interactive TV service solutions and provides a means for fast prototyping of new services or new TV set configurations (with new interaction technologies), thus filling the existing gap in studies and solutions on accessible interactive services in TV research. The approach is a valuable tool to be used in combination with the iTV application lifecycle methodologies introduced in Section 2.

The UCH implementation can be embedded into a consumer broadband router or into the TV itself, or deployed with extended functionality on a more powerful host such as a dedicated PC. Additionally, the ongoing initiatives to provide web services with a native URC interface, as well as to provide a URC interface directly on devices in the environment, can reduce the implementation complexity of the URC solution, making it simpler to embed in devices like the TV in the future.

An implementation of this approach, focused on the elderly, has been carried out. Services targeted at improving elderly people's quality of life were integrated. With regard to the targeted TV set's pluggable UI, multimodal interaction was developed together with a simple and easy-to-navigate graphical user interface.

The user tests showed that the developed UI was well accepted and that users thought the developed concept could improve their social relationships and their quality of life, especially those who already perceived themselves as having an adequate social network and close contact with their relatives.

From the implementers' perspective, what is remarkable is the ease of integrating existing modules, such as the dialogue system or the presented services, and the simplicity of developing user interface instances of the selected TV UI solution for new services or for devices from different environments.

8 Future work

There are aspects of the presented work that require further research: firstly, to enhance the concrete TV UI solutions developed following the presented approach; secondly, to evolve the approach itself and facilitate the work required to personalise the UIs to the different user groups; and thirdly, to simplify the task of selecting and installing new services and their corresponding UIs for non-technical end users.

The application of the presented approach to the development of UIs for interactive services on TV enables and simplifies the provision of universally accessible interactive services on TV. However, the mere application of the approach does not guarantee that the resulting TV UI configuration will be accessible to the target user group for which it has been designed. In this sense, we plan to analyse the available methodologies for developing and validating accessible UIs for TV sets, in order to provide an adapted methodology to be run in combination with the presented approach and to ensure that the developed UIs are usable and accessible by their target users.

Regarding the validation of the developed UIs through user tests, it is likely that individual home trials with a meaningful sample would have enriched the feedback already obtained from the group-based user trials; participants did not show any reluctance to openly express their opinion on the solution shown to them, but a deeper experience and a gain of skills could have been derived from a home trial with a bigger sample and a longer testing period. Still, the results and feedback provided by the participants proved very useful for future developments from a user-oriented perspective.

In the future, the way to proceed with user validation is to target statistically significant home trials in addition to group-based trials, since, beyond the specific feedback gathered from the group-based trials, this will make it possible to evaluate the evolution of the users' quality of life and a more realistic user experience in their daily environment. Home trials with non-mainstream users have to be carefully designed and carried out, considering budget restrictions and bearing in mind how economically and socially costly it is to involve non-biased users (people who have not previously tested similar prototypes) and to configure and maintain the user trials at home.

Regarding the implementation of the approach for the elderly presented in this paper, users expressed concerns about the use of headsets, the availability of the technology in other languages and the accuracy of the voice interaction with elderly users. To overcome these issues, we foresee researching other audio input paradigms (e.g., a remote control with an integrated microphone), adding other language options to the system and integrating the most successful approaches for voice recognition with elderly speakers.

Additionally, to facilitate the work required to develop and adapt the UIs defined for the interactive services, the study and evaluation of hybrid approaches combining pluggable UIs and UI generation is foreseen. The objective is to find a middle ground between fully personalised pluggable UIs and fully parameterised transformation modules. Hybrid approaches will be researched that define pluggable UIs for specific target user groups and, through adaptation parameters, allow fine-tuning the UI to the specifics of each user and their context.

Finally, with the increasing number of services available through the Internet, the need to guide users (especially those with technological difficulties) through the installation of new services will require integrating the presented approach with semantic annotation technologies applied to the services, providing end users with a simple way to search for and find services. At the same time, the number of downloadable UI and resource options is increasing rapidly. The use of semantic annotation technologies applied to UI resources, together with the development of matchmaking algorithms, will enable the identification and adaptation of the best-fitting UIs for the previously selected service or compound of services.