
Whoever wishes to foresee the future must consult the past—Machiavelli

1 Introduction

The development of Human Machine Interfaces (HMIs) is a complex, interdisciplinary challenge (Bader and Fallast 2012). Besides the technical aspects, development is also challenged by the need to adhere to cognitive principles, i.e., to choose interaction patterns that fit the mental model of the user. For example, electric windows can be used efficiently via a flip switch if the window is lowered when the bottom of the switch is pressed; the other way around would not be intuitive. The technical realization is responsible for the adequate implementation of the concept. Many decades ago, developing automotive HMIs was as simple as in this example. Comparing present and future developments with those of the past, the main differences lie in the aspects of information processing and entertainment. Hence, this article focuses primarily on the HMI of automotive infotainment systems, using the representative term “automotive HMI” or simply “HMI.”

The types and the complexity of automotive HMIs have rapidly changed in the last decades proportional to the development of computer systems: from rudimentary command line interfaces to a wide variety of graphical user interfaces, speech dialog systems, and gesture-based systems like touch interfaces.

The first automotive HMIs were primarily mechanical. Their main purpose and implemented functionalities aimed at providing the driver with relevant information about the car or about driving, such as speed, fuel level, or engine speed. Later on, displaying only this information was not sufficient anymore; drivers also wanted to be entertained while driving.

Therefore, entertainment functions like radios were progressively integrated into the car, leading to an increase in automotive HMI complexity. The HMI as well as the different functions together became an infotainment system, i.e., a system that combines the provision of information with entertainment functionalities (Bosshart and Hellmueller 2009).

In Fig. 1.1, an example of an early infotainment system is depicted. The picture shows the car dashboard of a Ford Taunus in 1958. Moreover, the type of information provided to the driver has also evolved and been enhanced over time. Besides status information about the car, information about the traffic or navigation has been integrated as well. Today, standard functionalities of HMIs encompass the display of vehicle-related information, advanced driver assistance functionalities, and entertainment components like radio, media player, etc. An example of today’s developments is shown in Fig. 1.2. It illustrates the driver’s view in a cockpit of the Ford S-Max, which was initially delivered in 2016.

Fig. 1.1

FORD Taunus 17M P2 (TL) deluxe two door 1958 steering wheel (Wikimedia Commons; User: Yeti.bigfoot 2009)

Fig. 1.2

Ford S-Max 2015 Interior (Wikimedia Commons; User: Ranger 1 2016)

Due to the increased complexity of the HMI, which consists of a variety of different input and output interfaces, its usability has become a very important quality factor (Ariza et al. 2009). Modern HMIs consist of a graphical user interface and a control unit as well as speech dialog systems and gesture-based systems like touch interfaces. The application of up-to-date hardware and software components enables a steadily rising number of use cases.

Modern automobiles provide complex functionalities and can be connected to different mobile devices. This complexity in functionality has a direct impact on the complexity of the HMIs because the driver has to manage the provided functions. The established HMIs of the past have to be improved and adapted to these requirements to make them more modern and innovative and to reduce complexity. Designs from the everyday interfaces of users could therefore serve as role models for HMIs in the automotive field. For example, users know the graphical user interfaces of their smartphones and are accustomed to using them.

Electrical interfaces have emerged and evolved rapidly and are continually replacing their mechanical counterparts due to their many advantages. For instance, replacing mechanical mirrors with cameras expands the field of vision and eliminates blind spots. The range of expectations is also widening, from simple driving support to high-quality entertainment. This variety of functionality is a competitive attribute for automobile manufacturers, but it also requires handling a wide range of user qualifications. One major challenge, for example, is to design infotainment systems in such a way that people with little technical background can use them easily. In this domain, configurability with respect to target groups and individuals is also an important issue when it comes to increasing usability.

Varying preconditions directly affect the development of HMI-based software. New tools and methods are necessary to handle the development of more and more complex functionalities. For example, in the future, tools have to support the design of graphical user interfaces based on 3D graphics, or manage the interface of multimodal systems as well.

In addition, the significance of certain development process phases that might have been neglected until now is increasing, such as software testing. With growing complexity, the cost of testing rises, too. More complex HMIs require more complex testing methods, and automated testing becomes even more essential due to the numerous test cases. Altogether, new concepts and tools are necessary throughout the entire HMI development process to handle the steadily growing complexity. The process requires new roles and responsibilities to be assigned, such as requirements and usability engineers. Besides that, education in new concepts is necessary. The concept of extendable HMIs, for example, could reduce the problem of keeping the HMI up to date: functionalities can be added after roll-out, which decouples the development cycle from the life cycle of a vehicle.

In the literature, articles can be found reporting on the history of user interfaces in general, like Myers (1998), Jørgensen and Myers (2008), or Myers et al. (2000). However, comparatively little information can be found on the facts and challenges in the history of HMI development within the automotive domain. Thus, the authors have summarized their experiences, their knowledge, and the results of literature studies in this article, which covers the history of HMI development from the past, starting in 1922, to the present, and also provides an outlook on upcoming trends for future automotive user interfaces.

The remainder of this paper is structured as follows: Sect. 1.2 describes the history of automotive HMI development, using infotainment systems from Audi and Daimler as representative examples. Section 1.3 describes early technology trends and development processes. Section 1.4 contains the state of the art; besides the driving factors and the input/output capabilities, the development process is one of the main topics. The HMI development of the future is part of Sect. 1.5, which tries to predict tomorrow’s HMI, e.g., by considering upcoming trends and user expectations. Past, present, and future HMI development is summarized in Sect. 1.6.

2 The Past of Automotive HMI

2.1 From 1915 till 1993

As outlined in the introduction, the first HMIs in vehicles were primarily mechanical. Their main purpose was to provide the driver with the information required for vehicle handling in the form of simple dials and digits. The HMI of the 1915 Mercedes-Benz 22/50 Open Tourer shown in Fig. 1.3 is an example of such a simple HMI: there are only a few knobs and mechanical displays. Despite this simplicity, the instruments were placed in the footwell, which seems untypical from today’s perspective because of the higher driver distraction it causes.

Fig. 1.3

Mercedes-Benz 22/50 Open Tourer (1915) (Meixner 2013b)

Besides the development of motor and chassis technologies, the improvement of comfort in cars became another important topic as well. One aspect of such improvements was the addition of entertainment, information, and telematics systems to vehicles.

2.1.1 Music

In 1922, the first car radio was introduced experimentally (Gesellschaft für Unterhaltungs- und Kommunikationselektronik 2010) on a Ford Model T (see Fig. 1.4). The American automobile manufacturer Packard introduced the Packard 120 in 1935. Figure 1.5 shows the cockpit of a Packard 120 built in 1936 and also indicates that comfort and quality became more and more important in these years: all instruments are placed next to the driver and are designed in a consistent, more refined look.

Fig. 1.4

First car radio (General Photographic Service 1922)

Fig. 1.5

Packard 120 (1936) (Meixner 2013d)

In 1954, radios were still accessories available only for some cars. Even in high-quality cars like the Mercedes-Benz 300 SL, radios were not standard equipment. Figure 1.6 shows a 1955 Mercedes-Benz 300 SL without a radio. But the significance of in-car entertainment grew steadily. In the 1960s, in-car radios became more and more popular, probably because there was not yet any way to listen to media other than radio broadcasts.

Fig. 1.6

Mercedes-Benz 300 SL (1955) (Meixner 2013c)

In 1956, an in-car record player called “Highway Hi-Fi,” built by CBS/Columbia, was offered in vehicles from Chrysler, Dodge, DeSoto, and Plymouth. It made it possible to listen to custom records in the vehicle. In 1968, Philips released the first in-car cassette player, which quickly enabled users to listen to tapes with their favorite music. An in-car cassette player by Blaupunkt can be found in the cockpit of the 1972 Maserati Indy America 4700 shown in Fig. 1.7. Shortly after inventing the compact disc, Philips also developed an in-car CD player in 1983. But only in the late 1990s did in-car CD players take over, owing to their ability to read CD-RW disks and MP3 files. Compared with cassette tapes, it was now possible to skip forward or back, which led to less driver distraction.

Fig. 1.7

Maserati Indy America 4700 (1972) (Meixner 2013a)

2.1.2 Navigation

With the increasing number of vehicles on the road, traffic information became more and more important. In the 1970s, the first traffic reports were broadcast via radio. For example, in Germany the Autofahrer-Rundfunk-Information was developed by Blaupunkt and provided on many German radio channels (Gesellschaft für Unterhaltungs- und Kommunikationselektronik 2010). In the 1990s, traveling by car was further facilitated by GPS navigation, at first using simple visualization on large LCD displays (Bellis 2011). With the introduction of navigation systems, the era of automotive infotainment started: data processing, technical components for handling GPS signals, and different sensors for locating the car were now required. The user interface thus had to be expanded with displays offering higher information density and resolution, as well as with new input devices such as rotary push buttons.

2.1.3 Telephone

In 1910, Lars Magnus Ericsson installed the first telephone in his car, which could be connected with electrical wires to telephone poles installed along the road (Wheen 2011). In the 1940s and 1950s, the development of cell towers enabled the further development of car telephones. An example of this technology is shown in Fig. 1.8. In the 1970s, a car phone service using the Autoradiopuhelin, a car radiophone service network, became popular.

Fig. 1.8

A trucker rolls with one of the first in-car phones, used in Chicago in 1946 (AT&T 2016)

After the first 1G systems were created in 1982 by Nordic Mobile Telephone, mobile telephone service became mainstream for automotive phone services. In the 1990s, car phones lost their popularity because personal cell phones became affordable to the public. From the year 2000, the wireless technology Bluetooth was used in cars for hands-free calling, as the first mobile phones with Bluetooth became available on the market. In 2001, the first hands-free car kit for mobile phones, as well as an enhanced version featuring speech recognition, was introduced by the Bluetooth Special Interest Group. Since 2002, advanced voice integration features have been available thanks to Bluetooth.

Constantly adding comfort functionality like entertainment or telematics systems to cars soon led to increasingly complex systems. As a result, the overall operating complexity rose, since every new functionality brought its own dials, switches, and displays.

Over time, in-vehicle HMIs became one of the most important components in the automotive industry. For the early automotive HMIs, there was an exact mapping between control unit and function; examples are steering wheels, pedals, and switches for turn signals and wipers. Driven by the rapid progress of microchip technology and computer science, mechanical devices were replaced by electronic counterparts. At the end of the 1990s, the large number of functions across a wide range of electronic devices required the development of a new system architecture concept for automotive HMIs. Automobile manufacturers started to aggregate functions within a single device in order to reduce complexity (Bellis 2011). The complete system could then be accessed via one graphical user interface with a hierarchically structured menu. At that point, premium car manufacturers like Audi, BMW, and Mercedes-Benz presented their first in-car infotainment systems combining informative and entertaining functionalities. To this day, the main subject areas are multimedia (e.g., radio, MP3, and television), car information (e.g., trip length, temperature), navigation, and telecommunication.
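The hierarchically structured menu described above can be sketched as a simple tree in which each leaf is a function of the aggregated system. The menu entries below are illustrative, not taken from any specific manufacturer's system:

```python
# Hedged sketch of a hierarchical infotainment menu: a single user
# interface exposing aggregated functions as a tree. Entries are
# illustrative only.
MENU = {
    "Multimedia": {"Radio": None, "MP3": None, "Television": None},
    "Car information": {"Trip length": None, "Temperature": None},
    "Navigation": {"Enter destination": None, "Points of interest": None},
    "Telecommunication": {"Dial number": None, "Phone book": None},
}

def navigate(menu, path):
    """Follow a sequence of user selections down the menu tree."""
    node = menu
    for selection in path:
        node = node[selection]
    return node

def leaf_functions(menu, prefix=()):
    """Enumerate every reachable function with its full menu path."""
    for name, sub in menu.items():
        if sub is None:
            yield prefix + (name,)
        else:
            yield from leaf_functions(sub, prefix + (name,))

print(list(leaf_functions(MENU))[:2])
# -> [('Multimedia', 'Radio'), ('Multimedia', 'MP3')]
```

The tree structure makes the trade-off visible: aggregating many functions behind one interface reduces the number of physical controls, but deep paths cost the driver more interaction steps.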

2.2 Infotainment Systems of the Last Decades

In the last 20 years, various technologies such as the Internet, computers, and smartphones have become more and more important in everyday life and have influenced the development of in-vehicle infotainment systems. Users expect more than simple entertainment and information functions in their car; they want to enjoy using these infotainment systems. Therefore, the HMIs of in-vehicle infotainment systems were developed further, especially with regard to better usability. In the following sections, infotainment examples from Mercedes-Benz and Audi from the last decades are described to show how the design changed. Figure 1.9 shows the history of HMI development between 1993 and 2016.

Fig. 1.9

History of HMIs from Mercedes and Audi (1993–2016) (Häcker 2016)

2.2.1 Mercedes-Benz

1993—CNS: In 1993, Mercedes-Benz introduced its first infotainment system, named Communication and Navigation System (CNS), which was one of the first fully integrated telematics systems in the automotive domain.

1998—COMAND 2.5: Since 1998, the infotainment system has been sold under the brand name COMAND, an acronym for Cockpit Management and Data System. The functionality of COMAND has been extended over the years based on the current state of the practice in development and vehicles. The main components of COMAND in version 2.5 comprised a simple radio and tape deck (see Fig. 1.10). Extras included a CD changer, phone, TV, and navigation system. The latter consisted of a 4:3 color screen located in the center console that displayed the current route, while the instrument panel showed arrows indicating the direction. The number 2.5 refers to the height of the display, which measures 2.5 DIN units.

Fig. 1.10

COMAND 2.5 (Wikimedia Commons; User: Guido Gybels 2008)

Optionally, the system provided dynamic route guidance based on current traffic information received by a mobile phone. On the faceplate of COMAND 2.5, the hard keys were located in the area of the display. Besides the classic push buttons, COMAND 2.5 had an assembly button group, which consisted of push and rotary buttons for several frequently used functions such as setting the volume and switching between different radio stations. The design concept of this assembly button group enabled the user to access frequently used functions more quickly and easily, leading to reduced distraction by menu operation.

2002/2003—COMAND 2.0 and COMAND APS: The two versions released in 2002/2003 comprised the components radio and CD drive. In contrast to the previous version, the navigation system consisted of a 16:9 color screen including the Auto Pilot System (APS), which calculated the routes based on information from Traffic Message Channel (TMC) services.

The navigation maps provided with COMAND APS NTG (new telematics generation), released in 2003, were displayed on a 6.5-in. color screen with an aspect ratio of 16:9. Maps on DVD covered Europe as well as points of interest like hotels, restaurants, gas stations, etc. Furthermore, an “Aux-In” port enabled the connection of external devices. In 2003, the Mercedes-Benz Portal provided services targeted at business people, such as calendar, email, and text messages, accessible from COMAND as well as on portable computers (personal digital assistants, PDAs) and PCs.

2004—COMAND APS NTG 2: In comparison with the first COMAND APS NTG, the main change was that the navigation processor was now located close to the display in the center console.

2005—COMAND APS NTG 3: With COMAND APS NTG 3, a radically new system was developed that was completely integrated into the interior of the vehicle for the first time. The 8-in. color display (16:9) was no longer located on the center console but higher, on the right-hand side of the instrument panel, putting it in the driver’s field of view. Interaction with the system’s functionalities was possible via a central control element (CCE) positioned in the central armrest, which enabled single-handed interaction. Additionally, seven keys were available: three of them dedicated to the quick selection of menu functions, the ON key, a key for individually selecting a favorite function, the return key (which enabled a quick return to the previous menu level), and the mute key. Furthermore, volume control was integrated. Navigation data was stored on a 2.5-in. hard disk, which enabled faster route calculation. Passengers in the back were provided with monitors integrated into the headrests of the front seats, enabling independent use of the entertainment program. To interact with this system, an additional CCE was integrated into the central armrest in the back.

2007—COMAND APS NTG 4: Since 2007, the COMAND APS NTG 4 has been offered as a special equipment package for the C-Class. In comparison to the previous COMAND systems, the design of the navigation map changed: a so-called bird’s-eye view allowed the map to be viewed at an angle instead of purely in 2D, and points of interest on the map were now rendered in 3D. To optimize route guidance, traffic information was received via TMC Pro. The display concept was also new: instead of a fixed display in the center console, a 7-in. display could be flipped out for use. The system was controlled with a rotary push button near the gear lever.

2008—COMAND APS NTG 2.5: COMAND in version APS NTG 2.5 was introduced in April 2008. The design of the keys and the layout of the menus changed, adapted from NTG 3. Furthermore, the CCE was no longer located in the central armrest but on the device itself. Innovations in the navigation system included a bird’s-eye view that enabled the driver to look at the map from an angle. Additionally, some points of interest could be viewed in 3D. Traffic jams could be detected automatically via TMC Pro. In some series, the mounted display was replaced by a 7-in. color display that could be folded out electrically.

2009—COMAND APS NTG 3.5: With the launch of COMAND APS NTG 3.5, the capacity of the hard disk was expanded to 7.2 GB. The speech dialog system “Linguatronic” became a standard component, allowing the user to control the telephone and navigation by voice. This system also introduced Splitview (cf. Sect. 4.2.2), a split screen for the driver and the front-seat passenger, for the first time.

2011—COMAND Online NTG 4.5: Since January 2011, the COMAND Online NTG 4.5 has been available in the Mercedes-Benz C-Class. It carries the name COMAND Online because it connects to Mercedes-Benz online Internet services via VPN through any Bluetooth-capable mobile phone. The list of available services includes weather forecasts and a Google search to find points of interest and send them to the navigation system. Although these services physically reside on a remote server, they look as if they were implemented in the local infotainment system, using the same HMI design. The services are designed to avoid driver distraction and can therefore be used while the car is moving. Furthermore, the infotainment system can be used to browse the Internet, which is only possible when the car is stationary. The connection to the Internet is implemented by pairing a mobile phone via the Bluetooth Dial-up Networking Profile (DUN).

The flip display was replaced by a fixed TFT color display on the right side of the instrument cluster, measuring 7 in. with a resolution of 800 × 400 pixels. Navigation map data is still stored on a hard disk. Additionally, the system supports a media interface with full iPod integration, digital radio, and digital TV. Furthermore, Facebook and other similar social media websites could be used with this generation of COMAND.

2011—COMAND Online NTG 4.7: This COMAND Online system is the second generation of the 4.5. It features changes in several hardware elements, and a Bluetooth PAN (Personal Area Network) profile is now used to allow online access with an iPhone.

2.2.2 Audi

Before 2001: Audi cars offered an in-car radio named Radio Chorus with basic radio functions such as FM and AM receivers. Radio Chorus was soon extended with a cassette or a CD player. The input modality of Radio Chorus consisted of physical control elements, such as rotary knobs for volume and tuning as well as some functional hard keys. It provided six hot keys that could be assigned to radio stations by the user. As the output modality, an eight-segment digital display was used to show the information from the radio, such as the station and the frequency.

2001—First-generation MMI: At the IAA in Frankfurt am Main, Germany, Audi presented its concept study Audi Avantissimo, the Avant version of the A8. It provided the first Audi MMI (Multimedia Interface), and its usability concept established the conceptual basis for all subsequent Audi infotainment systems. It consisted of a control unit and a graphical user interface displayed on a color screen (Audi 2001).

2002—Second-generation MMI 2G: The MMI 2G (see Fig. 1.11) was built into the Audi A8. Its main components were a 7-in. color display, a radio, and a CD player. Extras were a CD changer, a DVD-based navigation system, a simple speech dialog system with command input, an Internet connection, television reception, and a satellite radio for the USA. The control unit consisted of a central control element (CCE) with four control keys and eight function keys for accessing the four main menus media/entertainment, phone/communication, navigation/information, and car functions shown on the display. The ordering of the function keys was reflected in the graphical user interface (Elektronik Automotive 2002). Figure 1.11 also shows the dialing screen available in the telephone menu. A green color theme immediately indicated to the user that the phone/communication menu was active. Other menus had different colorings, namely orange for media/entertainment, blue for navigation/information, and red for car functions.

Fig. 1.11

Control unit and graphical user interface of the first MMI 2G, which already shows the typical control logic (Elektronik Automotive 2002)

2004—MMI 2G variants Basic and High: The new A6 offered two variants of the MMI 2G. The variant High corresponded to the MMI of the A8; the variant Basic had a smaller 6.5 in. monochromatic display (Wikipedia 2015). Introducing this kind of variants was an important step in infotainment system development. At first glance, these variants were only distinguished by different display sizes and slightly different hardware control elements. However, behind the scenes these variants were completely independent systems, even developed by different suppliers. Whereas they shared a common look and feel, they were based on different hardware platforms of the main unit as well as the attached control units. As a consequence, the software also had to be developed separately. This fact had a major impact on the complexity of the development. First, OEMs had to design a look and feel that suited both the High and the Basic variant regardless of functional differences. Then it had to be ensured that the common look and feel was correctly implemented by the suppliers.

2005—Audi Online Services: Audi and Google formed a partnership to provide online services within the infotainment system (Audi 2001). At that time, Google was already able to provide web-based services for route planning that made use of a variety of additional information. Whereas in-car systems relied on built-in map data and restricted traffic information through TMC, more detailed data was available online, e.g., traffic flow information. In a web service, it was also much easier to keep the map data up to date or to provide user-specific points of interest. By integrating a navigation component based on an online service, rich and up-to-date information could be made available in the automotive HMI. Other services, such as weather forecasts or Internet radio, soon followed.

Excursus: 2006—Touchpad: In 2006, a new input device was presented: the touchpad (White 2010). Touchpads are used more and more widely for in-vehicle infotainment in premium cars such as the Mercedes-Benz S-Class and the Audi A8. The touchpad familiar from laptops has a touch-sensitive surface built with a capacitive, resistive, or infrared array and may be integrated into a multi-functional steering wheel or the center stack. The input/output interface of touchpads supports key events, pointer movement, and image manipulation, as well as handwriting recognition software. Additionally, touchpads may have a backlight to display numbers or predefined soft keys.

2007—Third-generation MMI (3G): The MMI 3G offered many new functionalities, and the number of different models and country variants grew. For the first time, the new A4 and A5 provided an MMI with a navigation system and a 7-in. display. Bluetooth phone connectivity was an option for the A4 and the A5 with navigation system, as well as for the A6, Q7, and A8.

The MMI 3G was the first system in cars made by Audi to provide advanced driver assistance systems like cruise control and parking sensors (ATZ/MTZ 2007). The first Audi lane departure warning system was presented in the new Q7 and later also put into the new A4 (HELLA 2007). Besides the development of the MMI, the basic Radio Chorus was replaced by Radio Concert and Radio Symphony. Radio Concert allowed playing MP3 files, and Radio Symphony provided a CD changer with six slots and the possibility to store traffic information (TP memory).

2008—Third-generation plus MMI (3G+): The Audi Q5 provided the first MMI 3G+. It had a 7-in. display with a resolution of 800 × 480 pixels and a speech dialog system with full-word input. The map of the navigation system was 3D, and its data was stored offline on an integrated hard disk.

2009—MMI 3G+ Touch: The Audi A8 provided a touchpad with handwriting recognition to input phone numbers, addresses, and navigation destinations. It supported Latin as well as non-Latin characters like Chinese and Japanese. Additionally, for the first time in an Audi, rear-seat entertainment became available with two 10.2-in. displays (Audi 2009).

3 Early Technology Trends and Development Processes

OEMs always aimed at augmenting the comfort available in their cars. The requirement of personal customization and the diversity of the functionality to be realized enforced a modular design and thus an increasing number of electronic control units. Initially these devices were invented independently and there was no need for mutual interaction. Later it was recognized that even better results could be achieved if the devices shared some of their information that otherwise would be unavailable for certain devices. For example, information from wheel sensors can significantly improve position calculations in navigation systems, especially in situations where GPS is unavailable. Technical solutions were developed that enabled inter-device communication. Early, direct, proprietary device-to-device connections were soon replaced by standardized communication systems. The CAN (Controller Area Network) technology, a serial bus standard for distributed control systems, was introduced by Bosch in 1986 (Kurfess 2011). Each device connected to the bus can read relevant messages and use the contained information as well as put its own data on the bus. The bus features a communication protocol preventing message collisions. For interpreting messages and contained data there exists a system-wide database that is centrally maintained. Modern systems typically feature complex inter-device communication. To a certain extent, this is caused by the goal of system designers to provide a single-user interface rather than allowing each device to provide its own. This allows uniformly controlling all connected devices as though all the functionality were realized within a single device. The true complexity of the underlying distributed system remains hidden from the user.
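Two of the CAN mechanisms described above, identifier-based bus arbitration (the frame with the lowest identifier wins) and a centrally maintained message database for interpreting payloads, can be simulated in plain Python. This is a rough illustrative sketch, not a real CAN stack; all identifiers and signal definitions are hypothetical:

```python
# Illustrative sketch of two CAN concepts: the frame identifier doubles
# as a priority (lowest identifier wins arbitration), and a system-wide
# "database" maps identifiers to decoding rules. Not a real CAN stack;
# all identifiers and signals are hypothetical.
from dataclasses import dataclass

@dataclass
class CanFrame:
    can_id: int   # 11-bit identifier, also the priority
    data: bytes   # up to 8 payload bytes (classic CAN)

# Hypothetical centrally maintained message database: id -> (name, decoder)
MESSAGE_DB = {
    0x0C4: ("wheel_speed_kmh", lambda d: int.from_bytes(d[:2], "big") / 100),
    0x1A0: ("fuel_level_pct",  lambda d: d[0] / 2.55),
}

def arbitrate(frames):
    """Return the frame that wins bus arbitration (lowest identifier)."""
    return min(frames, key=lambda f: f.can_id)

def decode(frame):
    """Look the frame up in the central database and decode its payload."""
    name, decoder = MESSAGE_DB[frame.can_id]
    return name, decoder(frame.data)

if __name__ == "__main__":
    pending = [
        CanFrame(0x1A0, bytes([204])),              # fuel level sensor
        CanFrame(0x0C4, (7250).to_bytes(2, "big")), # wheel speed, 72.50 km/h
    ]
    winner = arbitrate(pending)  # 0x0C4 wins: lower id = higher priority
    print(decode(winner))        # -> ('wheel_speed_kmh', 72.5)
```

The sketch shows why the shared database matters: any device on the bus (e.g., the navigation system mentioned above) can decode wheel-speed frames without a direct connection to the wheel sensors.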

Automotive HMIs provide a variety of different kinds of input and output interfaces (I/O interfaces). There are three kinds of feedback available: visual, auditory, and haptic (Kern and Schmidt 2009). The steady change of controller types over time, from simple switches via graphical user interfaces to speech dialog systems, touchpads, and touchscreens aimed at simplifying access, decreasing distraction, and thereby increasing safety. Nowadays, some major functions can be controlled by switches near the steering wheel for faster access, e.g., radio or telephony.

Due to the increasing complexity of the HMI, the usability of the interfaces has become a very important quality factor. Since the 1980s, standards have been defined to develop user interfaces with high usability. One of the first general models was the so-called IFIP user interface reference model, which structures a user interface into the four parts input/output, dialogue, functional, and communication. Also in the 1980s, with the growing impact of software engineering, many software architecture models, such as MVC (Model–View–Controller), emerged. Soon it became apparent that special user interface standards had to be established for the automotive domain because automotive HMIs differ in major points from HMIs in other domains. One big difference is the focus of user attention. Whereas in many domains the main task of the user is to interact with the application, with automotive HMIs driving must remain the highest priority. As the functionalities of infotainment systems increase, the potential causes of driver distraction increase, too. In addition, the cognitive load for performing a task can grow immensely (Kern and Schmidt 2009). Over the years, it has become more and more important to ensure safety when developing automotive HMIs.
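The MVC pattern mentioned above separates state, presentation, and input handling. The following minimal sketch applies it to a hypothetical in-car volume setting; all class and method names are illustrative, not taken from any real HMI framework:

```python
# Minimal MVC sketch for a hypothetical in-car volume setting.
# All names are illustrative.
class VolumeModel:
    """Model: holds the state and notifies observers on change."""
    def __init__(self):
        self.level = 5
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def set_level(self, level):
        self.level = max(0, min(10, level))  # clamp to the valid range
        for cb in self._observers:
            cb(self.level)

class VolumeView:
    """View: renders the state; a real HMI would draw on a display."""
    def __init__(self):
        self.rendered = ""

    def render(self, level):
        self.rendered = "Volume: " + "#" * level

class VolumeController:
    """Controller: translates user input (e.g., a rotary knob) into model updates."""
    def __init__(self, model):
        self.model = model

    def knob_turned(self, ticks):
        self.model.set_level(self.model.level + ticks)

model, view = VolumeModel(), VolumeView()
model.subscribe(view.render)
controller = VolumeController(model)
controller.knob_turned(+3)
print(view.rendered)  # -> "Volume: ########"
```

The separation pays off in the automotive setting precisely because input devices vary (rotary knob, steering-wheel switch, voice command): each maps to the same controller call, while the view can be swapped per display variant.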

The European Statement of Principles on HMI was issued in 1998. It gives advice for developing automotive HMIs in such a way that the provided functions do not distract the driver from safely driving the car. The statement is updated from time to time due to the steady increase in functionality; the last update was done in 2013 (Commission of the European Communities 2013). Stevens et al. (2002) also discuss guidelines for ensuring more safety in cars in the face of a complex HMI. Another difference to other user interfaces is that the devices in cars are normally at fixed positions and the user can only interact with them within a limited radius. In this context, Kern and Schmidt (2009) discuss the proper use of the so-called design space, i.e., the proper ordering of the devices with respect to their functions within the interaction space.

These varying preconditions directly affect the development of HMI-based software. New tools and methods are necessary to handle the development of more and more complex systems. The importance of certain development process phases that might have been neglected until now also increases, such as the process of software testing. Increasing complexity raises the cost of testing. HMIs of higher complexity require new testing methods (e.g., automated testing) to handle the increasing number of test cases. In the early days of software testing, everything was tested manually. Unfortunately, manual test procedures were not feasible for broad verification of complex systems, which led to the invention of automated testing methods. In a first step, scripted test procedures were introduced. Current trends in testing point to more complex methods, e.g., model-based testing. At first, the test models were relatively simple flow charts. Later the Unified Modeling Language (UML) was established as a standard for specifying the behavior of HMIs in state chart diagrams (Reich 2005). There are many challenges to be solved in automated model-based HMI testing, e.g., where the machine-readable test models come from (Stolle et al. 2005). For automatically deriving test models from specifications, the existence of formal machine-readable specifications is essential. Various specification languages for different purposes are known in the literature (Hess et al. 2012a, b). For the automotive domain, one approach to a formal description of the HMI of infotainment systems is presented in Fleischmann (2007).
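As a minimal illustration of deriving test cases from a behavioral model, the sketch below traverses a small state machine and emits one input sequence per transition (the classic transition-coverage criterion). The dialog states and event names are invented for illustration and do not come from any real HMI specification.

```python
# Minimal model-based test generation sketch: an HMI dialog modeled as a
# state machine (states and labeled transitions, invented here), from
# which test sequences covering every transition once are derived.

from collections import deque

TRANSITIONS = {                     # hypothetical HMI dialog model
    "Main": [("press_nav", "Navigation"), ("press_media", "Media")],
    "Navigation": [("press_back", "Main")],
    "Media": [("press_back", "Main")],
}

def shortest_path(start: str, goal: str):
    """Breadth-first search for the shortest event sequence start -> goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for event, nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None

def transition_cover(start: str) -> list[list[str]]:
    """One input sequence per transition, each reaching the transition's
    source state from the start state via a shortest path."""
    tests = []
    for src, edges in TRANSITIONS.items():
        for event, _dst in edges:
            path = shortest_path(start, src)
            if path is not None:
                tests.append(path + [event])
    return tests

tests = transition_cover("Main")
# tests holds one sequence per transition, e.g. ["press_nav", "press_back"]
```

A real model-based testing tool would additionally attach expected screen states to each step, but the traversal idea is the same.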

4 The Present of Automotive HMI Development

As discussed in the previous chapter, the automotive HMI has rapidly changed since the invention of the automobile and continues to grow in complexity. Combined with more and more new technologies from other domains, infotainment systems increase mobility and comfort in modern cars (Amditis et al. 2010). Therefore, developing infotainment HMIs in the automotive context is closely tied to the experiences and expectations that car customers bring from other technologies.

Early computer systems and their interaction concepts used to be highly efficient but not easy to use or learn. Systems were primarily operated by technophiles using command line interfaces. In contrast, today it is possible to just use computers without knowing many details about the system architecture or the technical background. The potential of computers in daily life is clear. For almost every job, computer skills are required and knowing how to handle digital data with computers is a fundamental skill. In personal lives the importance of computers is also growing steadily. Today’s youth grows up with digital multimedia and social networking services. The upside for the development of automotive HMI is that users are not afraid of interacting with computers nor do they have to be motivated to do so. They know the basics of data processing and have acquired strategies for learning how to operate new computer systems. On the other hand, since people know about the potential of computers in other contexts, they have high expectations.

Especially the rise of so-called nomadic devices has increased this trend. Today, most people are used to being on call anywhere and anytime. Anderson (2015a, b) shows that in 2015, 68% of all adults in the USA had a smartphone. Smartphones provide similar functionality as mobile computers, especially regarding multimedia content or contact management. Additionally, basic functionality can easily be enhanced by apps.

Another driving factor for HMI development is the rise of the World Wide Web, as Internet access has become faster, more stable, and more comfortable to use. Technologies such as VDSL or fiber optic links have overcome earlier restrictions and allow transferring several hundred Mbit per second. The Internet has become mobile as well (Gow and Smith 2006). Fourth-generation mobile communication, such as Long Term Evolution (LTE), and the increasing spread of wireless LAN hotspots have made it possible to be online anywhere and anytime with a reasonable transfer rate. As one example, Internet adoption by adults in the USA rose from about 14% in 1995 to 84% in 2015 (Fig. 1.12) (Perrin and Duggan 2015; Pew Research Center 2014). According to Smith (2011), 59% of all adults in the USA went online wirelessly in 2011. By 2013, this number had grown to 63% (Pew Research Center 2016). With the perspective of reaching a growing number of users via the Internet, more and more online services are provided. According to Kellar (2007), information retrieval and information exchange via the Internet are part of everyday life. Especially in combination with smartphones, Internet-originated communication (e.g., instant messaging and email) is integrated with cell phone communication (e.g., calling and text messaging). Much personal information, such as contact information (telephone numbers, mail/email addresses) or birth dates, is stored in mobile devices. People tend to be highly dependent on the availability of this data, and not being able to access it in their cars would not be acceptable.

Fig. 1.12
figure 12

Internet adoption by adults in the USA from 2000 to 2015 (Perrin and Duggan 2015; Pew Research Center 2014)

4.1 Fields of Application

Applications of in-vehicle infotainment systems include navigation, media, TV, car configuration, data interfaces, telephone, and so on. The combination of previously independent functions is typical for today's automotive HMI. The navigation system helps the driver to arrive at his destination. Via TMC (traffic message channel), traffic information is visually presented on the navigation map. Also, higher safety is achieved through so-called driver assistance systems, e.g., collision avoidance systems. For relaxation during driving, the infotainment system provides several types of audio and/or visual entertainment, such as radio, music (mp3, music CD), and TV. The configuration functions, e.g., climate control, seat functions, and in-vehicle lighting, enable the driver to configure his car easily and comfortably. Additionally, the driver can access data on connected devices such as MP3 players, smartphones, USB devices, and flash drives. The driver can also make hands-free calls via his mobile phone that is connected to the car.

Nowadays, in more and more cars a WLAN router is available at an additional cost. Audi, Mercedes, Peugeot, and Citroën offer their users the possibility to, e.g., check emails, log into Facebook, and use Google via an Internet browser in their car. Because of the wide usage of smartphones, many users expect to have flexible access to more useful applications for infotainment, e.g., via smartphones or directly from app marketplaces via the Internet. To enable the use of external applications, several approaches have been developed, such as Android Auto or Apple CarPlay.

Moreover, besides smartphones, the personalization of in-car infotainment is becoming increasingly important for the automotive industry. Some car manufacturers allow the user to customize his infotainment system, e.g., by installing applications and storing personalized configurations. In this context, user identification is a current field of research. For example, in Feld and Müller (2011) a speaker classification system is presented that personalizes in-car services automatically via speech input.

Providing an automotive HMI with lots of features is no longer a distinguishing criterion for premium cars. Nowadays, infotainment systems are installed in almost every car and are part of the standard equipment. So, car manufacturers have to look for solutions that enhance the user experience. Keywords are user experience, connectivity, and multimodal solutions based on understanding the user and his expectations (IQPC 2016).

4.2 Input/Output Devices

In many non-automotive domains, interaction with computers or smartphones is the primary task. Users can spend their cognitive capacity completely on human–computer interaction and concentrate less on their surroundings. In a typical automotive setup, users drive their car at the same time and additionally have to stay in the driving lane, watch their speed, or react to the current traffic situation (cf. Sect. 1.3). Less attention can be given to selecting destinations or changing the radio station.

Examining the influence of human–computer interaction as a secondary task is a very active field of research, as summarized by Ersal et al. (2010) and von Zezschwitz et al. (2014). Tasks are termed secondary when they engage the driver voluntarily or involuntarily and “do not directly pertain to the primary task of safe vehicle operation” (Ersal et al. 2010). The Governors Highway Safety Association (2011) adds that such a task “uses the driver’s eyes, ears, or hands.”

Results show that activities such as talking on a cell phone or writing a text message on a mobile device while driving are distracting (Drews et al. 2008). According to the National Safety Council (2015), cell phone use (talking and texting) is estimated to be associated with a minimum of 27% of all accidents. As a consequence, in the automotive context it is crucial to design systems that aim at reducing distraction: Interaction concepts have to be obvious, plausible, and consistent; modalities have to be chosen appropriately for tasks and users; constraints and affordances must clarify valid and invalid input. The user must not have any doubts or questions about how to interact with the system.

With the rising complexity and diversity of functionality, these aspects are becoming even more critical. A modern in-vehicle infotainment system is an integrated set of input and output devices (I/O devices) which (a) communicate with each other using bus systems, such as CAN, and (b) enable interaction between the driver and the vehicle. I/O devices are a major hardware component of in-vehicle infotainment and offer the largest number of physical HMIs. Automotive suppliers do not develop the physical user interfaces as single hard keys, such as three push buttons or two rotary knobs. They are usually supplied as complete component assemblies of a set of hard keys. Additionally, these I/O components are controlled by separate Electronic Control Units (ECUs), which are connected to each other by the vehicle bus. These separate responsibilities are also reflected in the organizational structure of car manufacturers' and suppliers' development departments.

A vast number of solutions are available on the market that differ in appearance and functionality. Although their design and their integration into the vehicle's interior depend on the car manufacturer, they can be divided into a set of groups. This allows different combinations, which can be located in different areas of a car to build variants that can be easily distinguished by the users. For the above reasons, in the following sections the I/O devices, including the physical and speech HMIs, will be introduced based on current industrial categories.

4.2.1 Input Devices

Most of the conventional integrated infotainment interaction devices described in Sect. 1.2 are still common in the latest infotainment systems. Many of the input devices discussed in the following are shown in Fig. 1.13. Compared with the faceplates of previous head units, the infotainment faceplate nowadays uses displays that come with touchscreens or even force feedback. The number and layout of the buttons usually depend on functional considerations and/or the philosophy of the car manufacturers. The infotainment faceplate enables some input functions for navigation and other systems.

Fig. 1.13
figure 13

Overview of haptic input elements of automotive infotainment systems (Zhang 2015)

In addition to the infotainment faceplate, the climate control panel is used for setting air condition parameters, such as the fan speed and the temperature by means of turning wheels, push buttons, and sliders (Zhang 2015). There may be separate panels for setting the air condition parameters for front and rear passengers. In order to lower prices, some climate control panels are implemented without electronics. More complex climate control panels use electronic components, such as displays (Zhang 2015). Although the infotainment faceplate and the climate control panel are often located next to each other, they are controlled by separate ECUs and developed in different departments.

Some cars are equipped with a push button assembly that enables direct access for major infotainment functions or functions that are frequently used (Zhang 2015). Examples include driver assistance functions, Electronic Stability Control, door lock, or voice control. Some of these functions, such as the latter ones, are also provided by duplicate buttons, which can also be commonly found on the steering wheel. Such buttons may use indicator lights to show the current status of the operated system or can even provide miniaturized displays. They can also be implemented as capacitive surfaces or approximation sensors.

However, due to the limited space, the maximum number of such buttons in a car is limited, although the number of functions to be controlled is constantly increasing. For that reason, some manufacturers like BMW allow the user to assign functions to the available push buttons. Some manufacturers apply usage concepts that are based on center control elements (CCEs). These devices are multi-purpose controllers used to navigate complex menu structures (Zhang 2015). As described in Sect. 1.2.2, MMI (Audi), iDrive (BMW), and COMAND (Mercedes-Benz) are well-known examples of systems whose usage concepts are based on a CCE. These systems will be described in more detail in the following sections. Their CCE is a rotary controller with force feedback technology. Around that controller they provide buttons for switching between the major infotainment contexts or for quick access to common functions.
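The basic interaction pattern of such a CCE can be sketched as an event loop over a menu list. The event names and menu entries below are illustrative only, not taken from any real system.

```python
# Hedged sketch of rotary-controller list navigation: rotation moves a
# selection cursor, pressing activates the highlighted entry.

class MenuController:
    def __init__(self, entries):
        self.entries = entries
        self.index = 0          # currently highlighted entry

    def handle(self, event: str):
        if event == "rotate_cw":
            self.index = min(self.index + 1, len(self.entries) - 1)
        elif event == "rotate_ccw":
            self.index = max(self.index - 1, 0)
        elif event == "press":
            return self.entries[self.index]   # activate selection
        return None

menu = MenuController(["Radio", "Media", "Navigation", "Telephone"])
for ev in ["rotate_cw", "rotate_cw", "press"]:
    selected = menu.handle(ev)
# selected == "Navigation"
```

A real CCE additionally maps tilt directions and force-feedback detents onto such events, but the cursor-plus-activate model is the core of the usage concept.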

However, rotation and selection by means of a rotary controller is not convenient in some usage scenarios, such as navigation on a map. That is why, from the beginning, the rotary controllers applied by BMW allowed for pushing and pressing. Audi refrained from this degree of freedom in its first-generation MMI system but added a coolie hat to the top of their rotary controller later on. However, these systems are restricted to simple inputs. For this reason they have been replaced in modern systems by touchpads, which also allow for convenient character input (see Sect. 1.2.2). These touchpads are mounted next to or even as part of the CCE. They allow users to provide input such as pointing, clicking, gestures, and characters. By activating a backlight, some touchpads can display predefined symbols like numbers. They are used to mark that clicking a region is currently associated with a specific function.

Touchscreen displays in the head unit are equipped with resistive or capacitive surfaces enabling direct manipulation of interactive objects by means of touch. Just like the touchpads in the CCE, current touchscreens can also provide gesture operations such as scrolling and zooming, as well as handwriting recognition.

Common steering wheel controls are buttons, scroll wheels, or little touchpads. Their main advantage is reduced distraction of the driver compared to touchscreens in the middle of the car because the driver does not need to take his hands off the steering wheel. For the same reason, control levers were used in the past for controlling the turn signals and the windshield wipers. Modern infotainment systems also use these levers to control infotainment functions or driver assistance functions such as adaptive cruise control (ACC). The levers as well as the buttons and scroll wheels on the steering wheel can be assigned to a single function, such as to accept incoming calls, activate voice recognition, or change the audio volume. They can also be used as multi-purpose control for navigating in lists displayed in the cluster or head-up displays like in the digital cockpit of Audi.

Voice control enables the user to input commands in natural language without taking his hands off the wheel and his eyes off the road. An array of microphones, which may need to be activated explicitly, e.g., by pressing a button located on the steering wheel, records the user's commands as acoustic signals, which are then processed by the Speech Dialog System (SDS) (Lamel et al. 2000). In the SDS, the recorded acoustic signals are transformed into the most probable word sequence by a speech recognizer, which is often based on Hidden Markov Models (HMMs), a probabilistic approach to modeling the production of speech (Schuller et al. 2006).

Voice control was first built into a regular series model in 1996, when Daimler integrated Linguatronic into its S-Class Mercedes-Benz cars (Heisterkamp 2001). This system supported the telephone application.

Nowadays, the driver can access functions like music selection, destination input for the navigation system, or even climate control changes with the vehicle's embedded voice recognition system. Beyond the in-vehicle solution, smartphone-based assistants such as Apple's Siri, Google's Google Now, or Microsoft's Cortana, which let the user interact with music, social media, or phone contacts via voice control, are also common. These personal assistants are familiar to the driver and always up-to-date.

Ford in particular is one of the leading car manufacturers pushing smartphone integration combined with voice recognition. The so-called Ford SYNC is a voice-based communication system that is connectable via Bluetooth. Ford SYNC with AppLink goes one step further: with this refined system the user has access to his apps, which he can control via voice, steering wheel buttons, or the middle console (Ford 2016).

Camera controls are a recent addition to the list of input devices. They are used for monitoring the driver and for gesture recognition. It is possible to combine this with other functions that would require a camera, such as video telephony. The camera can be located on the dashboard or in the instrument cluster. Currently, camera controls are rarely utilized, except in some premium cars (e.g., the Lexus LS 460).

4.2.2 Output Devices

The optical channel is still the predominant output channel. Displays are used to provide the user with information about the current system state. Today, the common locations of the major displays are the head unit and the instrument cluster (IC). Depending on the car model, displays of different sizes and resolutions, both color and monochromatic, are used.

The primary display in modern cars is located in the head unit and shows the graphical user interface of the infotainment system. Some manufacturers apply usage concepts based on touchscreens, while the majority of premium cars combine a conventional display with a CCE. In addition, this central display can be enhanced with 3D ability. Because the display of the head unit is used by the driver as well as the front passenger, some recent systems use “Split View” displays. Depending on the viewing angle, they can show two different screens. This is done by applying a special optical filter on top of the display. It splits the image on the screen into two separate ones that are visible from different angles. Whereas one of these images consists of all odd pixel columns, the other one consists of all even pixel columns. Thus, the horizontal display resolution is halved compared to the nominal resolution of the display (Robert Bosch GmbH 2013).
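The column-interleaving principle behind such "Split View" displays can be sketched in a few lines (toy pixel data, not an actual display driver):

```python
# Sketch of "Split View" column interleaving: two source images share one
# panel by contributing alternating pixel columns, so each viewer's
# effective horizontal resolution is halved.

def interleave(driver_img, passenger_img):
    """Each image is a list of rows (lists of pixels) of equal size.
    Even columns come from the driver image, odd columns from the
    passenger image."""
    return [
        [d if x % 2 == 0 else p for x, (d, p) in enumerate(zip(drow, prow))]
        for drow, prow in zip(driver_img, passenger_img)
    ]

driver = [["D"] * 4]          # 1-row, 4-column toy images
passenger = [["P"] * 4]
combined = interleave(driver, passenger)   # [["D", "P", "D", "P"]]
```

The optical filter on the real panel then directs the even columns toward one viewing angle and the odd columns toward the other.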

Conventional instrument clusters consist of electro-mechanical tachometers, speedometers, odometers, oil gauges, etc. These are often complemented by a display used for showing information such as the on-board computer or ACC warnings. Whereas full-color, high-resolution displays are common in luxury class vehicles, simple seven-segment LC displays are still in use in lower-priced vehicles. However, there is a clear trend toward replacing the electro-mechanical components with virtual instruments shown on displays, which in turn are becoming larger. This trend leads to freely programmable instrument clusters (FPK) without the classical mechanical components. Figure 1.14 shows the Digital Cockpit of the Audi, which also has representations of the tachometer and speedometer but is reconfigurable in size. Depending on which information is important in a particular situation, the user can change the graphical representation. For example, the round instrument elements become smaller when the user needs a larger view of the navigation system.

Fig. 1.14
figure 14

Audi TT Digital Cockpit (Häcker 2015)

Head-up displays (HUD), which were used in primitive versions in airplanes, are a recent innovation in cars. A HUD consists of a projector, an optical lens, an information source, a reflecting element, and a combiner (Wood 1988), and provides good readability of the displayed information in daylight and darkness. Just like the FPK, the HUD can also show virtual instruments and navigation information such as maneuvers. Its image is shown on the windshield in the driver's line of sight. Using optical means, it is possible to have the HUD's image appear to be located at some distance from the driver, avoiding the need for the eyes to adapt when glancing at the HUD while driving. This is why using HUDs to display driving-related information promises to reduce distraction and thus increase safety.

Complementing the optical channel, the acoustic channel is also used in cars. There are two major fields of application. On the one hand, nonverbal sounds are played to signal changed vehicle states and to confirm keystrokes or the activation of a specific system. On the other hand, speech output is used to request follow-up information from the user, e.g., to say a name from the telephone book, or to explain which commands can be used in a specific situation. With the help of the Response Generator and Speech Synthesis (also called Text-to-Speech) (Lamel et al. 2000) in the SDS, the output information is converted into natural language and played to the user. Jensen et al. found that speech output has advantages over visual output. They tested the driving behavior of participants in a real traffic situation and observed that an audio instruction from the navigation system was better than a visual instruction on a screen. However, this depends on the quality of the auditory output and on how the system is integrated into the whole infotainment array.

To increase accuracy and usability when operating menus, force feedback has been widely used in CCEs to give the user haptic feedback. Force feedback can also be used in the steering wheel to support driver assistance systems, such as the lane departure warning system (LDW).

4.3 Current Infotainment Systems

COMAND Online NTG 5: The latest version of Mercedes' COMAND (COMAND Online NTG 5) is available in the new series 222, also known as the S-class. It consists of two 12.3-in. displays placed in the head unit and the instrument cluster. One of them replaces the classic instrument elements in front of the driver, whereas the other one is for infotainment and navigation. Although a small bridge separates the two 12.3-in. displays, they look like one unit. Each of them has a resolution of 1,440 × 540 pixels with a pixel density of 125 ppi. The central input device is the rotary push button in the center console. With this button the user can navigate through lists and menus. Around it there are six hard keys for quick access to the main areas of the infotainment system: seats, navigation, radio, multimedia, telephony, and vehicle functions. Furthermore, there are buttons for back, on/off, and volume. On the steering wheel the user can also interact with the system while driving. As a special feature, there is a touch element in the center console for entering letters or for using the mouse cursor. Additionally, speech input is available for the driver. Compared to previous COMAND systems, the complexity and number of infotainment elements have increased. There are many more interaction possibilities and therefore also more buttons.

Currently, depending on the car model and the country, Audi offers different MMI infotainment systems on the market. The MMI is offered in four different variants, which vary in display size and the infotainment features they support. The basic variant is called MMI Radio. It is equipped with a 6.5-in. display with 400 × 240 pixels and supports features such as phone connectivity, address book, CD player, and TP memory. In addition to that, the MMI Radio plus is equipped with two SD card readers, a Bluetooth phone, a speech dialog system, and the ability to play mp3 files. MMI Navigation extends the variant MMI Radio plus with a DVD-based navigation system, TMC traffic information, and a speech dialog system that allows controlling the address book and the phone and entering navigation destinations via voice commands. The variant MMI Navigation plus, which is standard in the A8 and A6 Avant, provides an 8-in. display with 800 × 480 pixels, a hard-disk-based navigation system with 3D map and Google Earth satellite view, a DVD drive, USB port, iPod interface, advanced driver assistance systems, and a speech dialog system with full-word input to control the navigation system, the phone, and the address book.

The latest Audi infotainment systems also extend the range of available mobile online services, which are called Audi Connect. These services include Google Search, Google Street View, traffic information, news, and a weather forecast. In the Audi A8 and A7, it is possible to connect up to eight mobile devices at the same time to a WLAN hotspot provided by the infotainment system.

In addition to those MMI variants, Audi offers a completely new display called virtual cockpit. It is a fully digital instrument cluster focused on the driver. In the 12.3-in. TFT display all functions of a standard instrument cluster and the middle MMI monitor are combined. Here, the driver is able to configure the information representation form. There are different view modes where the speedometer and the rev counter are more or less dominant (Audi 2015).

4.4 Development Process

After having a closer look at the current infotainment systems of Daimler and Audi it is important to know that the development process of automotive systems is mainly characterized by an intense interchange between OEMs and suppliers (Bock 2007). In many cases, the OEM specifies the requirements and hands them over to a supplier responsible for the development. After the development, the product is handed over to the OEM again, who tests the product. In the following, the development process will be explained for the steps specification and design, implementation, quality assurance, and post-implementation.

The OEM creates the specification containing requirements, functions, and design-related constraints; writes and translates texts for different languages; and perhaps also creates a model of the specification for model-based development. In rare cases, a formal specification is also created, but often this is done by the supplier. The documents are then given to the supplier, who can be put in charge of a special field of devices or software (see Sect. 1.4.1).

The supplier sometimes has to do additional work on the specification as refinement. However, as a first step, there is a technical review, which results in an offer for the OEM. Quality reviews of the specifications are seldom performed, because these documents can be very large, up to several thousand pages in extreme cases (Bock 2007). However, this can cause problems later on in development, when inconsistencies or similar issues are detected. After price negotiations, the development starts. The analysis of requirements is often done using Microsoft Office applications. Some OEMs also already develop prototypes, Flash animations, Photoshop files, etc.

Sometimes, there is a need to clarify specification-related questions. This is often done using ticket management or bug tracking systems, such as JIRA, which are used for communication between OEMs and suppliers. At some point in time, a relatively stable version of the requirements documents exists. Then a feature freeze is set, which means that changes to the requirements can only be made via a change request (at extra cost). For the OEM, this means that its testers can start developing test cases based on the requirements. For suppliers, this implies that development can now be based on a defined and stable set of requirements. For development, a feature roll-out plan is developed and agreed on between the OEM and the supplier. Some OEMs also want to have prototypes during development, e.g., when defined quality gates are reached.

During development, the suppliers often use V-model-like processes, but aligned to the feature roll-out plan and thus with an iterative component (Ganzhorn et al. 2011; Amditis et al. 2010). The whole process, from start to delivery of the final product, may take between 1.5 and 3 years. The development process is filled with the typical elements: architecture, design, development, and static and dynamic quality assurance.

This brief explanation serves to outline the development process. However, some elements are not as easy as they seem and thus require a deeper discussion in order to shed light on the situation as it is found today. For example, the traditional V-model, although adjusted to include iterative cycles, often cannot be processed in that way due to the large number of prototypes, which has a significant impact on further development.

Once the product has been received, the quality assurance at the OEM starts. Using the test specification created on the basis of the stable requirements, the test cases can be executed. If deviations from the specification are found, a ticket is entered into the ticket management system and transferred to the supplier for clearance and correction. Both the OEM and the supplier use quality assurance, but with a different scope. Suppliers normally use unit, integration, and software (system) tests. OEMs use testing on the system level. Model-based testing in particular is currently a trend. According to Duan (2012), a concept for model-based testing of HMIs (e.g., using UML state charts) ensures the quality and reduces testing costs.

Apart from the traditional development roles (requirements engineer, architect, designer, developer, and quality assurance experts), professionals from other disciplines are included, often due to the focus on the human actors, the users, and the market. The variety of different roles which take part in HMI development leads to several problems.

One of the main problems is communication. All of these specialists have their own vocabulary with special terms. Clear communication always requires definitions in terms of a glossary. Such a glossary, though important, cannot be found in all projects, which leads to the need for additional communication, for questions and explanations due to the different languages (i.e., vocabularies) being used.

Even if there is a unified language, the contents of the documents are often not unambiguous. This leaves room for interpretations, and since the different domains have different concepts, misunderstandings can occur. It is possible that these misunderstandings may be detected during quality assurance. Correcting these results in high costs, depending on when they are detected (Shull et al. 2002).

The broad range of people and disciplines involved in specification and development also leads to a variety of tools being used in this domain. Besides the technical tools traditionally used in software engineering, such as IDEs, compilers, testing tools, bug tracking systems, build environments, or tools for configuration and version management, non-technical roles involved in the development use their own tools. Overall, the variety of tools ranges from general tools suitable for nearly all users up to very complex and special tools usable only by domain experts. The tools themselves often consist of a mix of commercial and open-source tools. In many companies, the tool suites are complemented with self-developed tools.

4.4.1 Specification and Design

Today, requirements specification is done with the intensive inclusion of stakeholders, users, and external test persons for evaluations. Drivers for the requirements specification are often workshops with stakeholders, interviews, subject studies, or car clinics. Thus, a very wide range of topics is covered in the requirements. In recent years, the focus of development has shifted to the customers or users, which leads to new types of requirements, together with specially designed interaction concepts. These concepts are needed to make interaction with the new features easy and safe. Safety is an issue because driver distraction is something all companies are engaged in mitigating, as it is also demanded by various international standards and laws.

The parties involved in the specification include not only requirements experts (i.e., technical personnel), but also marketing and sales people, end users, and others. This makes the process of defining the requirements on the part of the OEMs a difficult task. The involvement of end users and the need to make complex interactions and usage of the HMI feel easy leads to a kind of cycle: Concepts are specified, developed, refined, assessed by users, and so on. This human-centered design approach is defined in ISO 9241-210. The intention here is to enhance human–system interaction through both hardware and software.

Before development starts, during preproduction, concepts and features to be included have to be selected and specified. Here, various different roles come into play. Psychologists and market researchers often conduct experiments or user studies, such as car clinics, and even management may have special requirements, for example due to the market situation.

The set of requirements finally given to the supplier is then prioritized according to criteria such as cost, attractiveness, match with the brand and the corporate identity, competitors, relevance for the end users, time needed, and so on. Finally, a plan for the development of the features is defined. This leads to a feature roll-out plan, which may be aligned not only with quality goals, but also with real prototypes delivered to the OEM.

The elements in the specification include golden rules, state charts, use cases, GUI widget catalogs, style guides, graphical files (example screens), formal models, and often simulations. However, most of the documents are not formal and not readable by machines. Thus, they can also not be processed by computers, which may lead to problems due to media breaks and the manual (and thus error-prone) effort needed to transform these data in the design phase.

Notations used for the specification include semi-formal notations, UML or similar notations, pseudo-programming languages, textual descriptions, schematic representations, or even flash prototypes. Even at this early point in time, various different tools are used, such as Office applications, DOORS, Flash, or Photoshop. This is due to the involvement of various types of actors, not only with purely technical skills, but also with “designing skills.” Once the concepts have been agreed on, the ways users interact with the system, its different features and its look have to be designed. During this stage, interaction designers and display/UI designers are involved. Graphic designers and speech dialog designers are responsible for optical and acoustic feedback and general appearance of the HMI. Depending on the requirements, this also needs to include specialists for haptic feedback.

In this phase, software programs such as Adobe Flash or Photoshop are used for designing prototypes or the visual design of the display. Finally, tools for animation and 3D content are also employed. Again, there is no tool chain in terms of a seamless transfer from one tool to another. This results not only in manual and thus error-prone work, but also in a probable loss of traceability from one phase to another.

In general, the tools used are aimed at the requirements and goals of the different roles/groups involved. They offer the best functionality for the respective target group, but one problem arising at this point is that it might not always be clear how tools from different disciplines and their contents relate to each other. The specification is spread across several different instruments with different foci, and the connections between the different parts captured in different notations have to be managed. It might not be easy to exchange content between these documents, as these tools often do not maintain interfaces for each other. For example, while content can be interchanged between Office documents, it might be difficult to perform imports or exports between these documents and a requirements database.

Nowadays, development is no longer concentrated on one specific spot; there are projects which are developed by teams around the world. The tools need to be usable across the web, and often more than one user wants to use a certain tool at the same time. However, not all tools are currently multi-user enabled. Besides that, some of the tools have to be able to maintain variants since such variants play a crucial role in the automotive domain.

So, the complexity of managing the interface between the different actors increases the more different tools are used. However, the complexity of modern HMI development can also be seen in the way requirements are managed: Each requirement is annotated with attributes for the series, the market (regions), line, release, and so on. The market, for example, does not only define the language used, but can result in different interactions and even changed features. The features may also change according to the series: The top model has the largest range of features, the cheapest model has only a few of them (e.g., no smartphone integration). Also, different versions may exist according to the equipment in the car (e.g., some HMIs contain a TV, some do not). Additionally, local habits (e.g., the way addresses are entered in the USA is different from that in Europe) as well as laws and standards (e.g., regional standards concerning driver distraction are ESoP (European Statement of Principles) in Europe, JAMA (Japan Automobile Manufacturers Association) in Japan, and AAM (Alliance of Automobile Manufacturers) in North America) have to be followed (Blessing et al. 2010). This means that there is not “one specification.” Whenever we talk about a specification for an HMI, we have to keep in mind that there are numerous variants in different versions.
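As a rough sketch of what such attributed requirements might look like in machine-readable form (all IDs, markets, and series names below are invented examples, not any OEM's actual scheme):

```python
# Illustrative sketch of variant-aware requirements management.
# Requirement IDs, markets, and series names are invented examples.
requirements = [
    {"id": "REQ-001", "text": "Address entry via house number first",
     "markets": {"USA"}, "series": {"top", "mid", "base"}},
    {"id": "REQ-002", "text": "Smartphone integration",
     "markets": {"USA", "EU", "JP"}, "series": {"top", "mid"}},
    {"id": "REQ-003", "text": "TV tuner menu",
     "markets": {"EU", "JP"}, "series": {"top"}},
]

def spec_for_variant(reqs, market, series):
    """Derive a variant-specific specification: keep only requirements
    whose attributes match the given market and series."""
    return [r["id"] for r in reqs
            if market in r["markets"] and series in r["series"]]
```

Deriving the requirement set for a concrete market/series combination in this way is essentially what "one specification" means for a single variant: a filtered view of a much larger, attributed requirement pool.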

Accordingly, there are different challenges and even problems: Informal specifications are not always clearly interpretable and possibly not machine-readable, which results in media breaks. This may result in a loss of traceability. Additionally, transfers of data between different tools are often only possible manually.

4.4.2 Implementation

As mentioned above, the software is developed by the supplier, sometimes with the help of subcontractors. The development proceeds in an iterative way, oriented on the feature roll-out plan. Additionally, the hardware on which the software will run is often developed by other parties. Both have to be integrated after development. Therefore, constraints have to be taken into account.

The software itself has to be developed so that the corporate branding of the OEM is integrated. However, sometimes the development of different HMIs for different vehicles of one company is split across several suppliers. Nevertheless, they have to adhere to the corporate branding. For this, style guides are developed, which have to be used by the suppliers.

Model-based development is being increasingly used by the developing companies. That way, the specification is modeled and brought to a formal level. The code is then generated from that. This also shows some of the problems in the domain. There is no tool chain, and thus the model has to be built on the basis of the specification documents. It is not possible to transform these documents, as they are from different applications and lack sufficient means for exporting them and for allowing easy import into other tools. The results of model building can be used during quality assurance; however, this is not possible in a fully automated way, either.
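The core idea of model-based development, behavior captured as a formal model from which executable artifacts are derived, can be sketched as follows. The model contents are invented, and a closure stands in for a real code generator, which would typically emit C or C++ source from the model:

```python
# Sketch of the model-driven idea: the HMI behavior is a data structure,
# and the runtime logic is derived from it rather than hand-written.
# States and events are invented for illustration.
MODEL = {
    "initial": "Home",
    "transitions": {
        ("Home", "MEDIA_KEY"): "MediaMenu",
        ("MediaMenu", "BACK_KEY"): "Home",
    },
}

def generate_handler(model):
    """'Generate' an event handler (here simply a closure) from the
    model; a real generator would emit source code instead."""
    def handler(state, event):
        # Unknown events leave the HMI in its current state.
        return model["transitions"].get((state, event), state)
    return handler

step = generate_handler(MODEL)
```

Because the same model can drive both code generation and test generation, defects caused by manually re-encoding the specification are avoided, which is exactly the benefit (and the missing tool-chain problem) described above.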

4.4.3 Quality Assurance

Quality assurance is used throughout the development. The suppliers use static and dynamic quality assurance. In particular, reviews, code analyses, and testing on the unit, integration, and (software) system level are used. Reviews are used for checking requirements specifications, test-related documents, and code.

The OEMs also employ testing, on the system level. They also face the largest number of challenges; several of the recent advancements in the domain pose a challenge and require special methods for testing. An emerging discipline in the field is model-based testing. However, even with model-based development, there are still some challenges.

One of the main challenges is multi-modality (see Sect. 1.4.3). As we have seen, modern HMIs allow several different types of interaction, ranging from traditional keystrokes to gestures, speech, and touch enabled for drawing, e.g., numbers or interacting directly on the screen. This means that all test cases have to be modified and executed several times in order to include the various possibilities for interaction. However, this is not a simple change in the test cases, as it requires special environments to enable the use of the respective interaction devices. Since this multiplies the testing effort needed, many OEMs have tried to automate testing. For this reason, robots are employed, which use the respective devices and capture the output of the system for test evaluation.
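The multiplication of test cases across modalities can be sketched as a simple cross product. The abstract test steps and modality names below are illustrative, not taken from any real test suite:

```python
# Sketch of multiplying abstract test cases over interaction modalities.
# Test steps and modality names are invented for illustration.
from itertools import product

ABSTRACT_TESTS = [
    ["open_media", "select_track"],
    ["open_navigation", "enter_destination"],
]
MODALITIES = ["hardkey", "touch", "speech", "gesture"]

def concrete_tests(abstract_tests, modalities):
    """Bind every abstract test to every modality, yielding the
    multiplied concrete test suite described in the text."""
    return [(m, steps) for m, steps in product(modalities, abstract_tests)]

suite = concrete_tests(ABSTRACT_TESTS, MODALITIES)
```

With only 2 abstract tests and 4 modalities, the concrete suite already contains 8 entries; each additionally needs the matching test environment (e.g., a speech channel or a touch robot), which is why the effort grows so quickly.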

There is not one single product to be tested; since there are many different variants of the same product (see above), tests have to be tailored to the correct version of the program and the correct variant. Again, this increases the effort needed for testing, as well as the complexity. There have to be annotations for specifying which functions are present in which variant, and which are not. The test cases have to be adapted to these, so that for each variant, there is a distinct set of test cases. Otherwise, all the tests targeted at features not existent in the actual variant will fail. Tracking all the differences of the variants and the different versions requires thorough configuration and variation management.

In the past, there was often a very simple interface, but now we have digital screens with graphical widgets, overlaying text, and more. Testing the correct positioning and size of the elements on the screens is very cumbersome. A taxonomy of failures in graphical user interfaces of modern In-Vehicle Infotainment Systems is published in Mauser et al. (2013b). Some companies have started using cameras to capture the screens of the HMI in different states. Afterwards, the elements have to be compared to the specification. This not only requires a very detailed specification; capturing the screens is also error-prone. For example, it is not advisable to simply film the displays of the head units because there are too many sources of error, such as changing or extreme lighting conditions, problems with the lens of the camera, or others. Finally, position, size, and color have to be compared to the specification, which itself has to be machine-readable. However, the length of the text depends on the language used. Again, different tests have to be employed for different languages, as the results change not only in terms of the length, but also in terms of the position of the texts, as some languages are written from right to left, or from top to bottom.
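The comparison of captured widget geometry against a machine-readable specification might look like the following sketch. The widget name, coordinates, and pixel tolerance (which absorbs small capture inaccuracies) are invented for illustration; a real check would also cover color and rendered text:

```python
# Sketch of comparing captured widget geometry against a machine-readable
# layout specification; widget names and pixel values are invented.
SPEC = {"title_label": {"x": 10, "y": 5, "w": 200, "h": 24}}

def check_layout(spec, captured, tolerance=2):
    """Return the widgets whose captured geometry is missing or deviates
    from the specification by more than `tolerance` pixels."""
    failures = []
    for widget, expected in spec.items():
        actual = captured.get(widget)
        if actual is None or any(abs(actual[k] - expected[k]) > tolerance
                                 for k in expected):
            failures.append(widget)
    return failures
```

Per-language specifications would provide different expected geometries for the same widget, reflecting that text length and position change with the language.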

Lately, increased attention has been given to the verification of animations. Modern systems no longer perform hard, visual transitions between screens. Rather, they are filled with smooth animations, even within single widgets, providing a comfortable user experience. On the other hand, it is a challenge for testers to cope with the increased complexity, and currently there is no fully automated solution.

To overcome the difficulties of these modern developments, some companies have started to educate their testers and demand a certificate of skills, e.g., certified tester by the International Software Testing Qualifications Board (ISTQB) (2013). In addition, ongoing research discusses whether particular testing goals can be covered by a partially or fully automated testing process (Mauser et al. 2013a; Duan 2012).

When the product is ready for release, management is responsible for the final approval. Even at this late stage, it is possible that changes to the product features may be requested. If approval is granted, the HMI may go to production.

4.4.4 Post-Implementation

Even after the start of production, many companies continue to work on the HMIs, to eliminate defects found in the field or to include new functions. For example, some companies have added functionalities for accessing external mobile devices through the HMI. These updates cannot be applied automatically (e.g., via download) but have to be deployed when the car is in the garage. Since there is no possibility for the customer to easily perform updates, this has to be done very carefully. This also means that the software (and the hardware) has to be tested extensively in order to prevent the introduction of new errors.

5 HMI Development Tomorrow

The current trends and developments in automotive HMI development are still in the process of establishing themselves, while new, future trends are already emerging. Some are a logical progression of current trends, such as connected technologies. Future HMIs could be permanently connected to mobile devices or to the Internet and make use of server- or cloud-based applications, which will not only foster the ability to change software easily even after roll-out, but also help to reduce the constantly growing hardware demands. However, HMIs of the future may also be dynamically adaptive, depending on the habits and demands of the drivers and the driving situation, and could also be personalizable. One very important domain will continue to be safety functions included in the car, as well as technologies concerning autonomous driving. Other developments will introduce new concepts in automobiles. As already known from TVs, 3D technologies will be examined for displays, and sooner or later, small touch displays will replace the traditional buttons. Hardware could be modularized and then replaced in small parts as development progresses, instead of having to change a whole HMI system.

5.1 Upcoming Trends and Changing User Expectations

In 2015, the average age of new car customers in Germany was nearly 53 years. One in three new car customers is currently older than 60 years. For the younger generation, i.e., adults between 18 and 25 years, the Internet and mobile phones are more important than a car, as the results of the study “The young generation and the connected car” by the Center for Automotive Management indicate (Bratzel 2011). The study also identified high expectations within the targeted younger generation concerning a connected car. Moreover, there is a willingness to pay for the respective value-added services in such a connected car. Thus, there is a good chance for automotive manufacturers to inspire the young generation with new products and features. Focusing on the needs of the older generation can also be reasonable. Holl et al. (2011) used a digital pen as an enabler for effective interaction between modern cars and elderly drivers.

These trends are a paradox for the automotive industry. On the one hand, the younger generation expects highly innovative infotainment in cars. They are interested in new technologies like Internet access, extensibility with application updates, and installation of new applications (apps) as well as the seamless integration of mobile devices in the car. On the other hand, this generation of car customers is not able to pay for premium segment cars, which used to be the car models where technical innovations are introduced first. A change of strategy is required from automotive manufacturers to stimulate the customers’ inducement to buy. It can be expected that innovations in the area of connected technologies will increasingly be introduced in lower priced segments (Dick 2011).

New challenges furthermore arise from the growing market of consumer electronics, such as smartphones and tablet PCs. The number of smartphone users in the United States increased from approximately 62.6 million in 2010 to 171 million in 2014 and is expected to rise to 236.8 million in 2019 (eMarketer 2016). In their cars, users now expect options similar to those they know from mobile contexts. 46% of Americans are even dependent on their phones, saying that they “couldn’t live without” their smartphone (Anderson 2015a, b). Especially so-called digital natives (Selwyn 2009) have grown up with computers and often strongly rely on them. The influence of smartphones and the integration of personal information is summarized in Bratzel (2011): Young adults aged between 18 and 25 years were asked what they would rather relinquish for one month: their car or their mobile phone. The results show that they would rather give up their car than be without their mobile phones. A global survey by frog design confirms this decision (Giacchetto and Gregorio 2015): one-third of the car owners in this survey would rather give up their vehicle than their smartphones. This indicates that the concept of mobility is being redefined and no longer refers only to spatial but also to virtual dimensions, including communication or information retrieval anytime, anywhere.

Modern smartphones provide powerful hardware with high-definition touchscreens and sensory input and output such as compass and GPS. Relying on permanent Internet availability, manifold functions and applications are possible at low cost. Furthermore, new apps can be installed easily. This leads to increased customer expectations, which carry over to in-car infotainment systems because customers compare their in-car infotainment systems with other devices of their everyday life, such as mobile phones (Meroth and Tolg 2008). Examples are spoken dialog systems, which were first introduced in cars to allow operation while driving without affecting visual attention and without the need to take the hands off the steering wheel (Tièschky et al. 2010). Modern smartphones also provide speech operation, although the context of use is usually not as safety-critical as in the car. Since Apple used speech operation as the most important selling point for the iPhone 4S, almost every smartphone now offers speech interaction. This will lead to ever-increasing expectations concerning in-car speech operation.

The number of infotainment functions that can be better performed by in-car systems than by consumer electronic devices will decrease. However, automotive manufacturers can profit from some determining factors such as the possibility to communicate with other components in the car and to design the operational concept and appearance of the Human Machine Interface in a way that perfectly fits a car’s interior design. Automotive manufacturers have to be aware of this advantage and need to make strategic use of it. Rather than implementing more and more functions that the customers would expect anyway, such functions can be brought into the car via consumer electronic devices or Internet services. Furthermore, automotive manufacturers should adopt certain concepts and functions from these areas and provide solutions for easy integration of such devices into the in-car environment. Some of the major challenges with regard to achieving this goal are the differences in the development and product life cycles for automotive products and consumer electronics.

5.2 Extendable HMIs

A survey by IHS Inc. with 4000 people from the US, the UK, Germany, and China in 2015 showed that nearly 45% of respondents would use in-car apps as part of the driving experience, and 75% of those surveyed would be willing to pay for updates of an app. This is an increase from 25% in a study by IMS Research with 2250 people from the US, the UK, and Germany in 2012. Because of the rapidly evolving consumer electronics, time to market for infotainment functions is becoming more important than before. At the same time, the complexity of infotainment functions is increasing. Closed, proprietary automotive infotainment systems cannot keep pace with such short innovation cycles. One possibility to improve this situation is the development of extendable systems where functions can be added after the roll-out (infotainment apps), thus separating the development cycle from the life cycle of a car.

Another possibility to integrate new functions into an infotainment system after its roll-out would be server-based applications where no new software needs to be installed on the in-car system. This enables easy deployment of new functions and allows for simple billing concepts (e.g., pay-per-use). Furthermore, OEMs could easily prevent the installation of unwanted applications (Schönfeld et al. 2011). The in-car infotainment system running these apps provides them with input and output devices, whereas the application logic is executed on a server. Thus, the HMIs have to be described in a way that they suit different input and output technologies as well as operation concepts found in different car models. In contrast to conventional telematics services, where the web service only provides machine–machine interfaces and no form of presentation, these apps require new forms of realization for HMIs similar to web technologies.

5.3 Hybrid HMIs with Mobile Devices

Traditionally, integration of mobile devices was limited to external data such as address book entries or music data, or to the use of certain functions like the actual phone call function where a respective HMI was already present in the in-car system. In the future, this will change, so that the mobile device will not only provide functions and data for the in-car system, but also use functions and data from the car. One example of such a function that is already implemented and used by mobile devices is the charging status of electric vehicles, which can be read by a corresponding smartphone app. These apps extend the in-car HMI in a certain manner. Thus, OEMs may want such apps to be designed to suit the respective brand and meet the respective quality targets.

Future technologies will also provide possibilities to extend the functional range of current infotainment systems by integrating new functions from external sources. There are different setups possible in which infotainment systems, external devices, and web services take over different roles. This requires technologies such as Mirror Link, Apple Car Play, or Android Auto, which enable remote operation of mobile phone applications (Bose et al. 2010). This is achieved by transferring the display content from the mobile phone to the in-car infotainment system and passing input signals from the infotainment system back to the phone. Another possibility for using mobile phone applications in the car is to run a web server on the phone that provides HTML pages. These pages are displayed in the in-car infotainment system, making it possible to operate the applications in the mobile phone. This requires deep browser integration into the HMI software (Müller 2011).

The most important challenge for the future will be to create added value for the customer by enabling continuous data and information flow between different domains (Sauerzapf et al. 2010). No matter what sources the data and functions may come from, the HMI has to provide a consistent look and feel, giving the impression that the data is coming from one single source. In order to bridge the life cycle gap between mobile devices and in-car infotainment systems, automotive and mobile device manufacturers have to cooperate. Exchange formats and interfaces have to be defined and flexible software architectures should be developed.

5.4 New Operation Concepts

With the increasing number and complexity of infotainment applications, new operation concepts have to be designed and improved to avoid driver distraction. New forms of speech operation allow more natural dialogs, similar to those currently promoted by Apple’s Siri. New technologies such as gesture recognition are being pushed continuously by the games industry and can be found in current products, e.g., Nintendo Wii or Microsoft Kinect. This can give rise to customer expectations regarding similar technologies in new cars. However, how such technologies can be applied to the automotive context has yet to be investigated in detail.

Another trend is the continuous increase in the size and number of displays in the car cabin replacing former buttons or knobs. Future display technologies such as 3D displays will enable new HMI concepts for communicating certain data to the driver or passenger. More powerful hardware will also make augmented reality applications possible at affordable costs.

5.5 Flexible Presentation Concepts

The increasing number and size of in-car displays lead to new possibilities for presenting information to the driver or passenger. In some cars, for example, freely configurable displays are used to replace former analog instrument cluster elements with virtually designed counterparts (e.g., speedo-/tachometer, see also Sect. 1.2.2 AUDI). The first car containing such a freely configurable display was the Toyota Crown Hybrid from 2008 (Burghardt 2009). This concept greatly simplifies the construction of displays, as only a (strong) GPU and a display are necessary; CPUs are already included in the ECUs for the standard displays (i.e., the analog instrument clusters) (Burghardt 2009). Such innovations are established in the premium car segment and in the future will trickle down to the medium and finally to the lower car segments to achieve more flexible presentation concepts. This will also enable the OEMs to achieve significant economies of scale because the same hardware platforms can be used to achieve a look and feel adapted for different brands, segments, and car models.

5.6 Adaptive HMIs and Personalization

The availability of different kinds of displays enables the creation of situation-dependent presentation sets or individually adaptable presentation forms the driver can choose from. For example, there might be a route guidance mode, an audio mode, and a night driving mode (Burghardt 2009). It might even be possible to create one’s own personal presentation profiles. This may include choosing preferred sounds or background images or adapting the layout of the presentation elements in the available display areas. The Cadillac Cue, for instance, allows the driver to choose between different arrangements for the digital instruments and to define the set of values that is displayed. However, there is also a drawback, in the form of possible driver distraction concerning some presentation concepts (Burghardt 2009), which has to be minimized as much as possible.

In the future, there will be more possibilities to personalize and adapt the in-car infotainment system. The users will be allowed to download, install, and update software for their infotainment system, ranging from simple stand-alone apps to new design styles that adjust the look and feel of the in-car environment. With the possibility to install third-party applications, the OEM has to ensure that these apps are presented in an appropriate manner within the existing input and output devices in the car. Furthermore, new apps have to comply with certain standards assuring minimal driver distraction, and they have to be seamlessly included into a dynamic HMI adaptation process.

With variably equipped car models, the quantity and functional range of available input and output devices may differ from car to car, thus leading to an increased need for flexibly designed HMIs that are able to adapt to the respective context of use. The behavior of the current HMI systems is statically predefined.
One example is the prioritization of warning messages and the definition of when and how these warnings are presented to the user. With the increased interconnection of different car components in modern cars and improved sensor systems, it is also possible to add more dynamically adaptive HMIs based on knowledge of the current driving situation or the current driver. Volvo presented such a system, called Intelligent Driver Information System (IDIS), with adaptive HMI technology as early as 2003 (Brostroem et al. 2006) and has thereby contributed to future developments in this area. The trend of HMI systems goes toward a personal assistant, which means that the driver will enter into a personal relationship with the system. The HMI system will learn the driver’s needs and preferences in order to offer the relevant information and functions at the right time. A driver can be supported better when the system has some knowledge about him or her. First steps are being made in this direction: BMW is working on making its BMW ConnectedDrive system more personal by including an emotional browser that presents information depending on person, position, and mood. The system learns which information is the right one in which kind of situation (BMW 2011). A project of the USC Mobile and Environmental Media Lab (MEML), also funded by BMW, explores how a relationship between the driver and an overall vehicle system could be realized. A user profile and system character parameters are held in cloud storage to allow access from everywhere. This enables the use of user-specific data in different vehicles. An interactive timeline represents the relationship between the driver and the system.

5.7 HMIs for New Mobility Concepts

The increasing popularity of car sharing communities like car2go leads to new requirements for HMIs in cars. Customers use a car only temporarily and share the same car with hundreds or thousands of other possible users. According to the consulting company Frost & Sullivan, the number of car sharing users worldwide increased from 0.35 million in 2006 to 4.94 million in 2014 (Frost and Sullivan 2016). Thus, one customer may use many different car models and would have to adapt to the respective in-car environment each time. This implies the goal of creating HMI concepts not only for one type of car, but for a whole brand or model range of vehicles the driver may use. This also includes corresponding concepts for smartphone apps, web pages, or portals that belong to the car sharing solution.

5.8 Future Challenges for Upcoming Infotainment Systems

The previously described innovations in the field of HMI systems will highly influence the hardware and software architectures of upcoming infotainment generations as well as the underlying development processes. In the following sections, future challenges for upcoming infotainment systems are discussed, split into hardware and software aspects.

5.8.1 Hardware

Integrating the latest infotainment functions and presenting them in an appealing manner in complex HMIs leads to an increased demand for processor performance and memory capacity while cost pressure remains intense. At the same time, new and improved forms of interaction require incorporating new hardware elements such as approximation sensors, control elements including display capabilities, multi-touch displays, or touchpads providing haptic feedback.

Further expansion of mobile broadband networks such as LTE raises both the bandwidth and availability of the mobile Internet. The global average broadband speed continues to grow and will more than double from 2014 to 2019, from 20.3 to 42.5 Mbps (Cisco 2015). In the future, increased bandwidth and availability of the mobile Internet will enable a permanent Internet connection for infotainment systems. Functions that nowadays are realized locally within the systems could then be implemented as cloud services. As a result, the steady rise in demand for hardware resources for infotainment systems could be mitigated, which in turn would help to keep the respective hardware costs per unit within limits. Furthermore, such cloud-based functionalities allow easy maintenance and modifications even after a car’s roll-out to the customer. Thus, car manufacturers may achieve further benefits by applying different development and deployment processes.
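Until connectivity is truly permanent, functions moved into the cloud still need an on-board fallback. The following minimal sketch (with placeholder functions; no real routing service or API is implied) shows the cloud-first pattern with graceful degradation to a local implementation.

```python
def cloud_route(origin, destination):
    """Placeholder for a cloud routing service; here it always fails,
    simulating a gap in network coverage."""
    raise ConnectionError("no mobile network coverage")

def local_route(origin, destination):
    """Simplified on-board fallback, e.g., with older map data."""
    return [origin, destination]

def compute_route(origin, destination, online=True):
    """Prefer the cloud service, fall back to the on-board implementation."""
    if online:
        try:
            return cloud_route(origin, destination)
        except ConnectionError:
            pass  # degrade gracefully instead of failing the function
    return local_route(origin, destination)

print(compute_route("Munich", "Berlin"))  # falls back to local routing
```

The same pattern applies to speech recognition, media search, or any other function a manufacturer might shift into the cloud to relieve the on-board hardware.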

Another strategy is the separation of short-lived hardware components, such as the graphics processing unit, main processor, or memory, from long-lived parts, such as the audio amplifier or CAN transceiver. Audi has already developed and adopted such a hardware architecture for its new infotainment generations, where the hardware originating from the consumer world is placed on a replaceable MMX board (multimedia extension) that is detachable from the hard-mounted RCC module (Radio Car Control) (Hudi 2010). The RCC module contains those functions that are stable during the whole lifecycle of a specific car, e.g., power management, tuner, and diagnostics. The exchangeable MMX module contains those functions that change over the lifecycle of a specific car, e.g., media, navigation, phone, or even the user interface. Breaking up the system into distinct modules enables OEMs to combine modules with different features and performance depending on the vehicle configuration. In doing so, model upgrading becomes much less complex, because innovations affect only single modules rather than the whole system and can be carried out shortly after new and improved hardware becomes available from the manufacturers. The development cycles for such systems could be reduced from 4 years to 2 years. Customers may also benefit from variable hardware architectures, as they will have the opportunity to upgrade their existing infotainment system by exchanging the MMX board. Such a concept was realized in the Audi A3 in 2012. In 2015 the second generation of the so-called MIB (“Modularer Infotainment Baukasten,” modular infotainment kit) was delivered by the Volkswagen Group. The Audi TT and the facelifted A6 and A7 were the first models equipped with the MIB2. With an Nvidia Tegra T30 processor, it has double the storage capacity, double the processing power, and double the graphics performance, and its flexibility is a great advantage in an intensely competitive field (Hudi 2014).
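The architectural idea behind the MMX/RCC split can be sketched in software terms: the long-lived part stays fixed, while the short-lived part is swapped behind a stable interface. All class and function names below are hypothetical illustrations, not an actual Audi API.

```python
class RCCBase:
    """Long-lived functions, stable over the car's whole lifecycle."""
    def power_management(self):
        return "power ok"
    def diagnostics(self):
        return "no faults"

class MMXv1:
    """Short-lived, exchangeable module (media, navigation, phone)."""
    version = 1
    def navigation(self):
        return "map data 2012"

class MMXv2(MMXv1):
    """Upgraded board with the same interface as its predecessor."""
    version = 2
    def navigation(self):
        return "map data 2015"

class Infotainment:
    def __init__(self, mmx):
        self.rcc = RCCBase()   # hard-mounted, fixed part
        self.mmx = mmx         # replaceable part

    def upgrade(self, new_mmx):
        """Swap only the MMX board; the RCC module stays untouched."""
        self.mmx = new_mmx

system = Infotainment(MMXv1())
system.upgrade(MMXv2())
print(system.mmx.navigation(), "| RCC:", system.rcc.diagnostics())
```

Because both boards honor the same interface, the rest of the system (and the customer's car) is unaffected by the exchange; this is exactly what makes the shorter development cycles mentioned above possible.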

The implementation of a growing number of connectivity services leads not only to a wider functional diversity of future infotainment systems, but also to an increased multiplicity of the necessary hardware variants. To cope with the latter, it is likely that car manufacturers will adopt such platform- or module-based development strategies for their infotainment systems, as these are well-known and approved solutions for similar problems at the vehicle level (e.g., the large and growing number of different car models). Hence, decoupling short- and long-lived hardware components can be seen as an initial step in an ongoing large-scale change process. In future development processes, the early identification of commonalities and differences between system variants will become a key task.

Facing high cost pressure in combination with increased complexity and condensed development cycles, automotive hardware will continue to align itself with consumer world devices. When adopting technology from different sectors, it must be considered that various automotive-specific regulations have to be satisfied, e.g., undervoltage tolerance, temperature range, or crash safety. In most cases, redesigning the hardware, or at least parts of it, becomes necessary in order to meet these higher requirements. Apart from hardware elements for direct user interaction such as switches or displays, there will be growing competition in the field of software engineering. With regard to the overall development costs, there will be a shift toward software elements, which will be responsible for a substantial percentage of the total time and effort. This trend has already begun and will continue in the future.

5.8.2 Software

The modularization and separation of MMX board and RCC module sketched in the section above has a counterpart in the software, which is also modularized and detached from the hardware (Hudi 2014). This flexible development allows the integration of several newly developed parts of the software, such as navigation or telephone modules, into the software system. Furthermore, there is an interface for MirrorLink, Android Auto, and CarPlay called App Connect, within which the user can access various smartphone functions such as SMS, speech recognition (e.g., Siri), or music services (e.g., Spotify or Audible).

At present it can be observed that partnerships between OEMs and suppliers are changing in the field of software engineering. In the past, software was developed almost exclusively by suppliers, whereas OEMs concentrated on conceptual design, specification, integration, and acceptance testing. Nowadays, more and more large car manufacturers undertake strategic insourcing of activities dedicated to the development of brand-identity-forming elements of a car, such as the HMI system (Hüttenrauch and Baum 2008). In practice, this is realized by shareholdings in supplier companies or by an OEM forming its own subsidiary companies that fulfill special-purpose tasks.Footnote 4 In addition, OEMs, suppliers, and service providers have recently begun launching common businesses that perform their software development activities.Footnote 5 The common strategy pursued by OEMs in all these approaches is to build up and keep software engineering expertise under their control. Such activities require large-scale investments that will only amortize if the OEM is willing to take responsibility for software development in the long term. Thus, it will become possible to individually customize and divide the assignment of development tasks between the OEM and its suppliers based on the particularities of the respective HMI project. As a consequence, it is likely that OEMs will realize a higher added value than they did before in development partnerships.

Industry standards like AUTOSAR and GENIVI allow development costs to be kept under control for both OEMs and suppliers, as there is no further need to develop new adaptation layers in each new partnership. At the same time, they make it easier for an OEM to change to another supplier. Suppliers, on the other hand, benefit by being less dependent on single OEMs. Since it becomes easier for both sides to create new partnerships as well as to end one, it is likely that future development partnerships will become more dynamic. This means that both a trend toward closer cooperation and a trend toward more dynamic partnerships are emerging.

The rapid evolution of consumer electronic devices forces OEMs to operate in increasingly shorter time-to-market cycles when developing their infotainment functions and HMIs. In order to keep up this pace while possibly varying partnerships and task assignments emerge, there is an urgent need for optimized development processes in HMI design and implementation. Being able to exchange data efficiently, without the commonly occurring format mismatches caused by the use of different tools on both sides, is a critical success factor of such a method. This can be achieved by standardizing exchange formats and appropriate integrated processes that are properly supported by development tools (Consulting et al. 2004). Current tools are used in isolation from each other, which causes issues in terms of the budget and quality of the developed systems (Ougier and Terrier 2010). A necessity to abandon established but company-specific special-purpose tools in software engineering is emerging. They have to be replaced by tool chains enabling the use of free software packages or available solutions originating from different industrial application domains such as mobile devices, web services, or desktop applications.
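The benefit of a standardized exchange format can be shown in miniature: once two tools agree on a neutral representation, no hand-written adaptation layer is needed between them. The screen description below is purely hypothetical; its field names are not taken from any real exchange standard.

```python
import json

# Hypothetical, tool-neutral description of one HMI screen.
screen = {
    "id": "media_home",
    "title": "Media",
    "widgets": [
        {"type": "button", "label": "Radio"},
        {"type": "button", "label": "Bluetooth Audio"},
    ],
}

# Tool A (e.g., a design tool) exports the model ...
exported = json.dumps(screen, indent=2)

# ... and tool B (e.g., an implementation tool) imports it without
# any format conversion in between.
imported = json.loads(exported)
assert imported == screen
print("round trip ok:", imported["id"])
```

JSON stands in here for whatever format a future standard might define; the point is the lossless round trip between tools, not the concrete syntax.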

Both software and hardware will change due to the upcoming trend of (semi-)autonomous driving. More and more assistance functions or pilots are available in modern cars, such as adaptive cruise control, parking systems, or lane change assistants. Hence, the vehicle is going to become a cooperative partner for the driver rather than merely a mobility device.

In this case, not only traditional automotive companies are going to be competitors of today’s automobile manufacturers. For example, Google has been working on a self-driving car since 2009. Google’s electric vehicles are equipped with numerous sensors to drive autonomously (more than 1.5 million miles to date). In contrast to other cars, their interior is designed not for driving but for riding. So the HMI of upcoming cars will have a different focus. It is more about cooperation between the driver and the car, to enhance trust in the new technology. For example, there will be time to do other things while the car is driving autonomously, and during this time there is no need for a steering wheel. Perhaps the interior of the vehicle will be transformed more and more into a living room or a workstation.

5.9 Ongoing Research

The issues mentioned above are addressed by the research project automotiveHMI (AutomotiveHMI 2014), which is funded by the German Federal Ministry of Economics and Technology. The project aims to improve the process of developing Human Machine Interfaces in the automotive sector. An integrating approach based on standardized languages (Meixner et al. 2013), models, and interfaces leads to an improvement in efficiency for all companies involved, from car manufacturers and component suppliers to the producers of the tools used (Hess et al. 2012a, b). The description based on abstract models also enables the convergence of new multimodal forms of interaction in completely new operating systems (AutomotiveHMI 2014). Designing and realizing HMIs in the automotive sector involves a multitude of highly diverging and concurrent development processes, each of them focusing on different aspects of the system or different development phases. Hence, one major objective is the creation of a standardized description language that can be used across workflow boundaries. This requires the language to comprehensively model every aspect relevant to one of the stakeholders and to provide views on the system from various perspectives, e.g., from the viewpoint of designers, engineers, or testers. With proper tool support, this technique will facilitate communication between the involved parties, resulting in faster overall development while at the same time reducing error-proneness. For a historical overview of model-based user interface development outside the automotive industry, we refer to Meixner et al. (2011).

6 Summary

Today’s infotainment systems are rather closed systems that come with a statically defined set of OEM-defined functions. Their interaction with external devices is currently limited to selected functionalities or even selected connectivity solutions like MirrorLink, Android Auto, or CarPlay. However, the rapid evolution of the consumer electronics sector and the broad acceptance and spread of its inventions among the general public drive customers’ expectations regarding automotive infotainment systems. This challenge can be mastered by bringing both worlds closer together, using the vitality of the consumer electronics market as a driving force and ramping up new innovations for automotive infotainment systems. Therefore, in future HMI systems one can expect better connectivity, which will enable new functionality and customer value with fewer boundaries. For example, smartphone integration will become seamless and much easier in the next decades.

A further challenge the OEMs have to deal with is the growing desire of consumers to personalize their vehicles. Consumers want to configure their vehicles according to their personal preferences and requirements, which may change from time to time. Taking into account the extended lifetime of vehicles and installed HMI systems—compared to the average lifetime of smartphones and other consumer devices—it is quite obvious that after-sale solutions for system upgrades and modifications are urgently needed and represent a non-negligible market for the industry. In terms of the HMI system, individually configurable and skinnable digital instrument clusters are first steps toward a higher degree of infotainment personalization.

However, personalization is not limited to pure appearance modifications but rather involves many other parts and aspects of an infotainment system. It is likely that downloadable content and functionalities (infotainment apps) will enable customers to individually extend the functional range of their HMI systems in the future. While the concept of app-based individualization has become a major factor in the market for consumer electronics, the automotive industry is still hesitant to adopt similar approaches. Nevertheless, it is obvious that OEMs will not have the resources to develop and offer a similar range of applications and add-ons on their own, thus paving the way for more and more third-party providers entering the market. In the future, one might even doubt the ability of a single OEM to develop, on its own, infotainment systems that meet the rising customer expectations in this area. This might force the OEMs to leave the concept of closed proprietary systems behind and move toward the disclosure of assorted signals and vehicle interfaces for third-party developers.

Implementing the technological foundations for downloadable content and appearance updates opens up ways of focusing on the primary function of the HMI: direct user interaction. For example, dialog design and control modalities may be kept up-to-date. Modern techniques in natural language recognition enable system designers to create more natural-feeling user–machine conversations and to mix different modalities. The human–machine interaction in a car could be pushed to the next level by successfully combining natural language speech commands and touchpad gestures with coherent audio-visual feedback from the system.

Once technical obstacles are overcome and the automotive industry succeeds in providing user-centered solutions for the individual configuration of in-car infotainment systems, one can expect that customers will start identifying themselves with their self-designed and personalized systems to a certain extent (e.g., similar to communities of users that exchange individually created desktop themes for their personal computers). This may lead to a new customer desire to transcend the physical boundaries of their cars and to extend the interaction with “their” infotainment system to occasions that go beyond the time actually spent in the car. Depending on the functional range of future infotainment systems, users may wish to share their configurations and to interact with the systems anywhere at any time. One example could be a user on vacation checking via his smartphone whether everything is all right with his car, which is waiting for him in the public parking lot at the airport. Faced with such future scenarios, it is likely that OEMs will want to leave behind the idea of isolated in-car infotainment systems and move toward the development of a whole infotainment universe. In addition to the in-car hardware, such an OEM-branded HMI universe would comprise corresponding smartphone applications, websites, and driving portals, thus offering future customers a holistic brand experience.

From a technical perspective, the future scenarios presented in this article already seem to be conceivable but one major issue has not been addressed yet. Whereas continuous research efforts are being undertaken to cope with technical and infrastructural hurdles, questions about legality and liability aspects and responsibilities concerning the interaction with infotainment systems are still pending issues. As long as clarification of these legal basics remains an issue and as long as vehicles cannot drive fully autonomously, the driver and his distraction remain the bottleneck to the integration of new functionalities into infotainment systems. Neither will OEMs disclose car interfaces to third-party developers in a legal gray area nor will customers spend time and money on HMI systems without knowing the actuarial implications of their usage. Hence, one can say that in addition to the necessary technical solutions, clear legal directives need to be established to define exactly which function of an infotainment system is made available to which vehicle occupant under which specific driving conditions.

7 Conclusion

From the history of automotive infotainment, it can be seen that user interaction in the car has always been influenced by upcoming new technologies customers first get used to in the world outside the car. This already began in the early twentieth century with the first car radio and with the installation of the first in-car phone. A car without these functions is hard to imagine today. Currently, Internet-based applications and social network applications are finding their way into in-car infotainment applications. The car usually is not the first device utilizing new technologies like these. However, there have always been applications uniquely designed for in-car use, such as navigation applications. With the increasing market of portable devices and smartphones, these are not restricted to built-in navigation devices anymore. Consumer electronics providers are big competitors for the automotive infotainment market. Last, but not least, there are some distinctive applications that are hard to replace with external applications and devices. Driver assistance applications and all sorts of in-car settings and comfort applications are major examples. Looking into the future, driver assistance applications may not be needed anymore for self-driving cars. The car will turn into a mobile living room, a mobile office, a mobile child’s room, and maybe also a mobile dining room. This means that even more external devices and applications will influence in-car user interaction. Office workstations or home-entertainment devices may find their way into the car when the actual driving task is more and more reduced.

For a seamless experience, interfaces between different types of external applications and devices have to be developed and maintained, and perfect integration into the in-car infotainment world has to be achieved. This requires accurate design, development, and testing. Perfect engineering tools and the development of standard engineering approaches adjusted to the automotive industry will help keep development cycles shorter and ensure future-proof automotive infotainment development and deployment.