
1 System Optimization and Vehicle Integration

The optimization of vehicle components contributes significantly to improving overall vehicle performance. In particular, lightweighting the vehicle has a major impact on increasing range and improving energy efficiency.

Thermal and battery management, together with the optimal use of power electronics and the electric and electronic architecture, will be among the key challenges and most important development steps of the future.

This chapter describes the most important possibilities for improving overall vehicle performance. It summarizes the outcome and the key messages of several Task 17 workshops, focusing on:

  • overview of different e-motors and their abilities,

  • management systems like battery management systems as well as

  • thermal management systems,

  • simulation tools,

  • functional and innovative lightweight components and materials and

  • power electronics and drive train technologies for future xEVs

2 Electric Motors

In this section, electric motors (e-motors) for use in EVs are described. The focus is put on alternatives to Permanent Magnet Synchronous Motors (PMSM), and their advantages and disadvantages are pointed out.

This section was hosted by Steven Boyd from the U.S. DoE (2012).

2.1 Introduction

Currently (FY 2013), permanent magnet (PM) motors with rare-earth (RE) permanent magnets are almost universally used for hybrid and EVs because of their superior properties, particularly power density. However, there is now a distinct possibility of limited supply or very high cost of RE PMs that could make these motors unavailable or too expensive. Since the development of e-motors for vehicle applications is of interest, it should be determined which motor options are most promising and what barriers should be addressed. Currently, the focus is limited to induction and switched reluctance (SR) motors, as these types do not contain PMs and are currently used in electric drive vehicles.

In considering alternatives, cost and power density are two important properties of motors for traction drives in EVs, along with consideration of efficiency and specific power.

For each technology, the following aspects can be considered:

  • current state-of-the-art performance and cost,

  • recent trends in the technology,

  • inherent characteristics of the motor, including ones that can limit the ability of the technology to meet performance targets,

  • what R&D would be needed to meet the targets,

  • potential for the technology to meet the targets

So far, alternatives are few. First, if high-strength Neodymium-Iron-Boron (NdFeB) RE magnets are not available, the following alternative PMs may be considered for use in PM motor designs: Samarium-Cobalt (SmCo) magnets have magnetic properties similar to NdFeB magnets and better high-temperature stability, but are very costly; Aluminum-Nickel-Cobalt (AlNiCo) magnets have somewhat lower cost but very low coercivity; iron or ceramic (ferrite) magnets are the least expensive but also the weakest. Alternatively, new PM materials could be created or discovered. Next, AC induction motors are seen as a viable alternative despite lower power density compared with interior PM (IPM) motors, and are a mature, robust technology that can be less expensive. SR motors are durable and low cost, and contain no magnets. Their peak efficiency is slightly lower than that of PM motors, but the flatter efficiency profile of SR motors can give higher efficiency over a wider range of operation. Significant concerns about SR motors are torque ripple and acoustic noise. Finally, there are other alternative motor designs that have not been completely researched or characterized at this time, and further insight into these designs and reports from ongoing R&D projects may prove useful.

2.2 PM Motors

A PM motor is a hybrid motor design that uses both reluctance torque and magnetic torque to improve its efficiency and torque. These motors are created by adding a small amount of magnets inside the barriers of a synchronous reluctance motor. They have excellent torque, efficiency, and low torque ripple. They have now become the motor of choice for most EV applications. PM motors have high power density and maintain high efficiency over their entire torque and speed range except in the field-weakening speed range. This translates into a challenge to increase the constant power speed range without a loss of efficiency. Other major issues are failure modes and the relative high cost of the motor due to the cost of the (currently favored) RE PMs and rotor fabrication. Major R&D challenges are to develop bonded magnets with high energy density capable of operating at elevated temperatures and motor designs with increased reluctance torque. These developments may result in reducing the magnet cost. Other challenges include thermal management and the temperature rating of the electrical insulation.

2.3 Induction Motors

The induction motor was invented by Nikola Tesla in 1882 and is the most widely used type of e-motor. Mostly because of its ability to run directly from an alternating current (AC) voltage source without an inverter, it has been widely accepted for constant-speed applications. For vehicle applications, recent developments in low-cost inverters have made variable-speed operation possible for traction drives. Induction motors have the advantages of high reliability, low maintenance, high starting torque, and a wide manufacturing base due to high acceptance by industry. These motors offer robust construction, low cost, and excellent peak torque capability. However, their power density is somewhat limited, and increasing speed is one of the few available paths to increase power density. So, many AC traction drives run at high speeds of 12,000–15,000 RPM at maximum vehicle speeds. This use of high motor speeds results in smaller, light-weight traction motors, but it requires a high-ratio gearbox that also has associated mass and losses.

There are a few recent improvements to the AC motor for vehicle applications. One of these is the use of a copper rotor instead of aluminum. Depending upon the size of the motor, the use of copper can increase the efficiency of an AC induction motor by one to three percentage points. Although this may seem like a small increase, it is significant in reducing the losses generated by the motor, easing thermal management. Depending on the vehicle, this could also add some distance to the all-electric driving range. Another improvement that has been studied is the effect of different pole numbers; several studies by Allen Bradley, Reliance Electric, General Electric, and Siemens have shown that the optimum pole number for AC induction motors below 1000 Nm is four poles. Increasing the pole number for smaller motors reduces the power factor, but the torque density in the same frame size is increased. For AC induction motors driven by inverters, the number of poles should be increased from 4 to 6 for motors above 1000 Nm torque.

As an alternative to PM motors, induction motors are not as efficient or power dense. Furthermore, because AC motors are considered a mature technology, the likelihood of achieving the required additional improvements in efficiency, cost, weight, and volume is low. However, if PM motors become infeasible for reasons of cost or availability, induction motors seem to be the consensus next choice, and therefore remain a relevant technology for EVs.

2.4 Switched Reluctance Motors

The SR motor uses a doubly salient structure with toothed poles on the rotor and stator. Each set of coils is energized to attract a rotor pole in sequence so it acts much like a stepper motor. With current technology, SR motors have inherently high torque ripple and the high radial forces generated can create excessive noise levels if not carefully designed. These motors are currently best suited in high-speed applications where torque ripple is not an issue. In comparison with mature motor technologies such as the AC induction motor and the relatively recent brushless PM synchronous motor, the SR motor offers a competitive alternative to its counterparts despite these aforementioned issues. The basic concept of the SR motor has been around for over 100 years, but recent advances in power electronics, digital control, and sensing technologies have opened up the possibilities of the SR motor and provide some novel design opportunities which can be suited for vehicle propulsion.

Unlike most other motor technologies, both the rotor and stator of the SR motor comprise salient teeth such that torque is produced by the tendency of the rotor to move to a position where the inductance of an excited stator winding is maximized and reluctance is minimized. This condition generally occurs when the corresponding stator tooth is fully aligned with a rotor tooth. The non-steady-state manner in which torque is produced in the SR motor introduces the requirement of a sophisticated control algorithm which, for optimal operation, requires current and position feedback. In addition to non-steady-state operation, the SR motor often operates with the rotor and stator iron in saturation, increasing the difficulty of optimal control and making the motor very difficult to model accurately without the aid of computer processing and modelling techniques. Therefore, since the SR motor is very technologically demanding in terms of design, modelling, and control, the evolution of SR motor technology was limited until these demands were adequately addressed. Furthermore, the progression of other motor technologies such as the induction motor and PM motor has not been as limited by the state of other technologies.

Since the torque of an SR motor is based on reluctance, no excitation is required from within the rotor, making it simpler, more mechanically resilient, and more cost effective than other motor technologies. The absence of PM material, copper, aluminum or other artefacts in the rotor reduces the mechanical retention needed to counteract centrifugal and tangential forces. This makes the SR motor especially well suited for rugged applications or high-speed applications wherein high power density is desired. As there are no conductors in the rotor, only a small amount of heat is generated therein; most of the heat is generated in the stator, which eases motor thermal management requirements. In addition to having low material costs, the simplicity of the SR motor facilitates low manufacturing costs as well.

In regards to electric drive vehicles, the primary issues with SR motors are the torque ripple and acoustic noise that is associated with the fundamental manner in which torque is produced. When current is supplied to the coil of a SR motor stator tooth with proper respect to rotor position, torque is created until the nearby rotor tooth is fully aligned with the stator tooth. Thereafter, torque is created in the opposite direction if the rotor continues to rotate and if current is still supplied to the coil. Therefore, it is typically desirable to reduce the current to zero prior to generating an undesirable torque. However, the inductive behavior of the coil and corresponding magnetic path prevent rapid evacuation of current in the coil, and thus a torque transient occurs and provokes the issue of acoustic noise and torque ripple. Perhaps the second most significant problem with the conventional SR motor is that it cannot be driven with a conventional three-phase power inverter. Nonetheless, a unipolar inverter for a three-phase SR motor contains three diodes and three switching elements, as is the case with a conventional three-phase power inverter, so similarities do exist.

Having lower material and manufacturing costs, the SR motor presents a competitive alternative to the PM motor. And while the power density and efficiency of the PM motor may not be surpassed, the SR motor can approach meeting similar metrics. Various comparison studies have shown that the efficiency and power density of the SR motor and AC induction motor with copper rotor bars are roughly equivalent, while the AC induction motor with aluminum rotor bars falls slightly behind these two types of motors.

2.5 Conclusions and Future Work

Due to the importance of electric traction drive motors to the future of EVs, motor R&D will continue to play an important role in the development of these vehicles. This is now further emphasized by the recent uncertainty in the future price and supply of RE magnets.

A proposal for alternatives and future work is to concentrate efforts in three distinct areas:

  • continued development of PM motors with RE magnets,

  • develop novel motor designs that use non-RE PMs or no magnets at all,

  • develop novel magnet materials that could be used in PM motor designs.

Continuing the development of PM motors using RE magnets could result in motor designs that use less RE magnet material, and maintain the technology development path in view of possible future material availability through expanded RE supply or increased recycling. Developing other novel motor designs could potentially use other existing PM materials, improve upon existing motor technologies to better rival PM motor performance, or even result in entirely new motor designs. Finally, PM material development could reduce the cost of existing RE PMs through reduced processing cost, reduce RE PM cost by reducing heavy RE content, or develop new magnets through the investigation of novel inter-metallic compounds.

3 Battery Management Systems in EVs

In this section the relevance of Battery Management Systems (BMS) (FY 2013) for system performance and costs is evaluated.

This section was partly provided by Mr. Andrea Vezzini and Ms. Irene Kunz from Bern University of Applied Sciences (BFH).

Content of this section:

  • general information and tasks of BMS,

  • battery requirements for different configurations including: HEV, PHEV, BEV,

  • battery type considered (Li-ion),

  • information from a workshop (technology trends and open points) and information provided by BFH and ANL

In general the main electrical components of EVs can be summarized as:

  • e-motor,

  • e-motor drive (inverter),

  • (Li-ion) battery cells,

  • regenerative brakes,

  • Vehicle Control Unit (VCU) or board computer,

  • access to standard vehicle electronics systems (ABS, ESP, etc.) and

  • Battery Management System (BMS)

A detailed look at the components which have been used for one of the first generation BEVs (Ford) is shown in Fig. 1.

Fig. 1 Components that will make up a BEV [1]

The VCU is an on-board program which reads the actual accelerator position and translates it into a torque demand for the motor. This value is sent to the inverter which drives the motor. Further, the VCU is responsible for monitoring the measured values which are relevant for the vehicle user (e.g. speed, battery SoC, etc.).

The VCU generally exchanges information with all the units in the vehicle. This means that the communication takes place with inverters, chargers, DC/DC converters, BMS and monitor devices over a Controller Area Network (CAN) Interface.

Figure 2 shows an overview of the different block modules in an EV. It is obvious that the BMS must have access to most of these modules.

Fig. 2 Scheme of block modules in an EV [2]

The new market segment of EVs has triggered an increasing demand for battery storage systems (see Fig. 3). To guarantee safe operation of these systems, batteries need to be protected from several malfunctions. This protection is realized with battery management systems.

Fig. 3 High voltage battery systems require advanced BMS (image courtesy of Bosch [3])

A battery management system (BMS) is any electronic device that manages a rechargeable battery pack. The BMS is required to ensure that the cells in a battery are operated within their safe and reliable operating range. The BMS monitors voltages and temperatures from the cell stack. From there, the BMS processes the inputs, making logic decisions to control the pack performance, and reporting input status and operating state through communication outputs. Concisely, a BMS turns a collection of “dumb” cells into an intelligent, safe and more efficient battery pack [4].

Definition of the terms “battery”, “module” and “cell”:

A “battery” is the complete assembled pack of singular cells. It may consist of several modules, wired in series. In a module, single cells are connected in series or parallel (Fig. 4).

Fig. 4 From powder to system integration (image courtesy of CEA [5])

It is possible to obtain higher currents if the cells or the modules are connected in parallel, whereas a series connection leads to higher voltages. Within a module, every cell is monitored (voltage, current and temperature) to guarantee proper function in the desired operating range. The control of these modules and battery packs is realized with a BMS. The process from the raw material to the battery prototype can be seen in Fig. 5.

Fig. 5 BMS-Tasks for high voltage batteries [6]
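The series/parallel arithmetic described above can be sketched in a few lines; the cell values (3.6 V, 2.3 Ah) and the 96s2p arrangement below are illustrative assumptions, not figures from the text.

```python
# Sketch of how series/parallel cell connections set pack voltage and
# capacity. Cell values and the 96s2p arrangement are illustrative only.

def pack_parameters(cell_voltage_v, cell_capacity_ah,
                    cells_in_series, strings_in_parallel):
    """Return (pack voltage, pack capacity) for an s/p cell arrangement."""
    pack_voltage = cell_voltage_v * cells_in_series          # series raises voltage
    pack_capacity = cell_capacity_ah * strings_in_parallel   # parallel raises capacity
    return pack_voltage, pack_capacity

# Example: a hypothetical 96s2p pack of 3.6 V / 2.3 Ah cells
voltage, capacity = pack_parameters(3.6, 2.3, 96, 2)  # 345.6 V, 4.6 Ah
```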

3.1 Description and Tasks of a BMS in an EV Application

An intelligent implementation of a BMS will extend the battery’s lifetime and the driving range of the vehicle. The operation of a BMS in an EV includes normal protection functions like charge control and cell protection against limit-exceedance (voltage, current, temperature). BMS used in EVs have to interact with several on-board systems. Further, there is the requirement to act as a real-time application. This means that the safety-relevant applications are handled as prioritized functions and thus should never be blocked due to other less relevant functions of the electric car (like air-conditioning control).

The main functions of a BMS (for high voltage batteries, focusing on Li-ion batteries) are depicted in Figs. 5 and 6 and can be summarized as follows:

  • battery monitoring: BMS monitors key operational parameters, such as voltages, currents, and temperatures, during charging and discharging of batteries. Based on these values, BMS estimates the state of the battery [e.g. state of charge (SoC) and the state of health (SoH)]. Information which is relevant for the car driver has to be sent over CAN-Interface to the VCU. Here the data will be forwarded to the dashboard where it can be monitored. This information includes the SoC, SoH, error messages and available driving range,

  • energy/power management and prediction: BMS prevents the battery from operating outside its safe operating area such as: over-current, overvoltage, over-temperature and under temperature. The main goal of the energy management is to guarantee a constant supply of energy for the important vehicle functions. The vehicle can fundamentally be in three states: drive, park or charge. For each of these three states the priority of the different functions change:

    • in drive state, all devices which control the vehicle are to be preferentially treated;

    • if the vehicle is parked, the management system enters a sleep mode: the BMS is turned off for a period of several minutes, after which it is repowered to check important battery parameters for safety reasons;

    • in charge state a system detecting mechanical collisions on each side has to be active. If such a collision takes place, the connection between battery and vehicle will be immediately interrupted,

  • battery’s performance optimization: in order to maximize battery capacity, and to prevent localized under-charging or overcharging, the BMS ensures that all cells that compose the battery are kept at the same state of charge, through balancing and

  • communication: BMS communicates the above data to users/external devices. In EVs, the BMS interfaces with other on-board systems such as engine management, climate control, communication and safety systems, responsible for communications with the world outside the battery pack. The BMS transmits the measured and calculated values to the VCU which processes the data and forwards it to the right target component in the car. The SoC of the battery as an example, is forwarded by the VCU to the dashboard monitor.

Fig. 6 Scheme of a BMS in an EV application [7]

These tasks are managed by the VCU, which can be seen as the ‘heart’ of the system.

It is important that the program of the BMS never reaches a deadlock. Therefore, the software should be implemented on a real-time operating system (RTOS). Moreover, sensitive tasks like overvoltage protection can be set to high priority in order to be processed first. On the other hand, the BMS also involves less important tasks like sending the actual SoH value to the dashboard monitor. For such tasks, the priority is set low.
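The priority idea above can be sketched with a simple priority queue; the task names and priority numbers are hypothetical, and a real RTOS would of course preempt rather than merely order tasks.

```python
# Sketch of priority-based task dispatch: safety-relevant BMS tasks
# (e.g. overvoltage protection) run before low-priority housekeeping
# tasks (e.g. updating the dashboard SoH display). Names are illustrative.
import heapq

def run_pending(tasks):
    """Execute tasks in ascending priority number (0 = most urgent)."""
    heap = [(priority, name) for name, priority in tasks.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)          # a real BMS would invoke the task here
    return order

pending = {"send_soh_to_dashboard": 9, "overvoltage_protection": 0, "log_history": 5}
print(run_pending(pending))  # overvoltage protection is dispatched first
```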

In high power applications, around ten to over one hundred high-capacity elementary cells are series connected to build up the required battery voltage. The overall cell string is usually segmented into modules consisting of 4–14 series connected cells. Thus, the battery can be composed by three layers: namely, the elementary cell, the module and the pack.

Within each module the cells are connected together to complete the electrical path for current flow. Modern BMS for EV applications are typically distributed electronic systems. In a standard distributed topology, the routing of wires to individual cells is minimized by breaking the BMS functions up into at least two categories: a pack management unit and module management units (as can be seen in Fig. 7). The monitoring of the temperature and voltage of individual cells is done by a BMS “sub-module” or “slave” circuit board, which is mounted directly on each battery module stack (in a normal passenger car there would be 10–15 modules). Higher-level functions such as computing state of charge, activating contactors, aggregating the data from the sub-modules and communicating with the engine control unit are done by the BMS “master” or “main module”. The sub-modules and main module communicate on an internal data bus such as CAN (controller area network). In an electric vehicle, the pack management unit is linked to the vehicle management system through the external CAN bus. Almost all electronic functions of the EV battery pack are controlled by the BMS, including battery pack voltage and current monitoring, individual cell voltage measurements, cell balancing routines, state of charge calculations, cell temperature and health monitoring, and communication with the vehicle management system, which together ensure overall pack safety and optimal performance [8].
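The master/slave split can be sketched as follows: each module "slave" reports its cell voltages and temperatures, and the pack "master" aggregates them. The module count and numeric values are illustrative, not from the text.

```python
# Minimal sketch of a distributed BMS: slave boards report per-cell data,
# the master aggregates it into pack-level quantities. Values illustrative.

def aggregate_pack(module_reports):
    """module_reports: list of dicts with 'voltages' (V) and 'temps' (degC)."""
    all_voltages = [v for m in module_reports for v in m["voltages"]]
    all_temps = [t for m in module_reports for t in m["temps"]]
    return {
        "pack_voltage": sum(all_voltages),   # series string: cell voltages add
        "min_cell_v": min(all_voltages),     # weakest cell matters for balancing
        "max_cell_v": max(all_voltages),
        "max_temp": max(all_temps),          # hottest spot drives protection
    }

modules = [
    {"voltages": [3.70, 3.68, 3.71], "temps": [25.0, 26.1]},
    {"voltages": [3.69, 3.72, 3.70], "temps": [24.8, 25.5]},
]
summary = aggregate_pack(modules)
```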

Fig. 7 BMS structure scheme [9]

In general, the BMS consists of several Module Control Units, which protect the battery cells, and a top-level control, which supervises the modules. It generally includes at least voltage, temperature and current measurements. These values are processed by an Electronic Control Unit (ECU) in order to protect the single cells against malfunctions.

This protection includes the following points:

  • charge control: limitation of the rate at which the electric current is fed into or drawn from each battery cell to hold the battery in the safe operating area. This includes additional protection of the battery cells against overcharge, overvoltage and deep discharging and

  • cell balancing: in a multi-cell battery (serial connection) small differences between the cells appear due to production tolerances or different operating conditions and tend to increase with each charging cycle. Weaker cells become overstressed during charging and this causes them to become even weaker, until they eventually fail. Without cell balancing, the individual cell voltages will drift apart over time. Cell balancing is a way of equalizing the charge of all cells in the chain.
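The cell balancing idea above can be sketched as a simple bleed-selection rule for passive balancing: cells whose voltage exceeds the weakest cell by more than a threshold get their bleed resistor switched in. The threshold and voltages are illustrative assumptions.

```python
# Sketch of passive balancing: pick cells to bypass (bleed) during
# charging because they sit noticeably above the weakest cell.
# The 20 mV threshold and the voltages are illustrative values only.

def select_bleed_cells(cell_voltages, threshold_v=0.02):
    """Return indices of cells whose bleed resistor should be enabled."""
    v_min = min(cell_voltages)
    return [i for i, v in enumerate(cell_voltages) if v - v_min > threshold_v]

voltages = [4.10, 4.15, 4.11, 4.18]
print(select_bleed_cells(voltages))  # indices of cells above the weakest one
```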

The overall control of the different modules is realized with a Battery Control Unit (BCU). This unit has the following functions:

  • precharge function: this function is needed when the battery is connected for the first time to a system with a high capacitance (e.g. the DC link of an EV). In this case, a very high inrush current flows for a short time. To reduce these current peaks, two switches in parallel are needed (compare Fig. 8): the upper switch includes a resistor, which attenuates the current peak (precharge relay); after a short while, the lower main switch can be closed to reduce the connection resistance to a minimal value and

    Fig. 8 Structure of a BMS

  • module protection: in the case of a battery parameter exceeding the allowable range, the unit will immediately open the main switch (compare Fig. 8),

  • SoC determination: the SoC indicates the proportion of the charge currently available in the battery compared to the fully charged battery pack (100 %). This function is comparable with a fuel gauge in a car. Sect. 3.2 describes different calculation methods to determine the SoC,

  • SoH determination: the SoH is a relative figure of merit. It provides the actual condition of the battery compared to a completely new battery (100 %), whereas 0 % corresponds to a completely worn-out battery pack. The SoH can be determined from the decrease in the capacity of the battery with increasing age. More information on the determination methods for the SoH is provided in Sect. 3.3,

  • demand management: this function provides an intelligent energy management system, which stores the energy amount in the battery as long as possible. The energy management system has to be adapted individually to each specific application,

  • communication with host: the BMS transmits data to a host computer or an external device. The data can then be stored or plotted in a Graphical User Interface and

  • history (log book function): monitoring and storing the data over an extended period of time is another possible function of the BMS. Parameters such as number of cycles, maximum or minimum voltage, temperature and maximum charging and discharging current can be tracked.
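The precharge function listed above can be sketched with the standard RC picture: the precharge resistor limits the inrush into the DC-link capacitance, which then decays exponentially until the main contactor may close. All component values (360 V pack, 50 Ω resistor, 5 mF capacitance, 2 A threshold) are illustrative assumptions.

```python
# Sketch of the precharge sequence: close the precharge relay (with
# series resistor) first, wait for the inrush current to decay, then
# close the main contactor. All numeric values are illustrative.
import math

def inrush_current(v_pack, r_precharge, capacitance, t):
    """Current into the DC-link capacitor t seconds after precharge closes."""
    return (v_pack / r_precharge) * math.exp(-t / (r_precharge * capacitance))

def safe_to_close_main(v_pack, r_precharge, capacitance, t, i_max=2.0):
    """Main contactor may close once the remaining inrush current is small."""
    return inrush_current(v_pack, r_precharge, capacitance, t) < i_max

# 360 V pack, 50 ohm precharge resistor, 5 mF DC-link capacitance:
# inrush is limited to 360/50 = 7.2 A and decays with tau = 0.25 s.
```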

BMS topology

BMS technology varies in complexity and performance:

  • active regulators intelligently switch a load on and off when appropriate to achieve balancing; in addition, a complete BMS reports the state of the battery to a display and protects the battery,

  • simple passive regulators achieve balancing across batteries or cells by bypassing charging current when the cell’s voltage reaches a certain level. The cell voltage is a poor indicator of the cell’s SoC, so making cell voltages equal using passive regulators does not balance SoC, which is the goal of a BMS. Therefore, such devices, while certainly beneficial, have severe limitations in their effectiveness.

BMS topologies fall into three categories:

  • centralized: a single controller is connected to the battery cells through a multitude of wires. They are most economical and least expandable,

  • distributed: a BMS board is installed at each cell, with just a single communication cable between the battery and a controller. These are the most expensive, simplest to install, and offer the cleanest assembly,

  • modular: a few controllers, each handling a certain number of cells, with communication between the controllers. Modular BMS offer a compromise of the features and problems of the other two topologies [10].

3.2 SoC Determination Algorithm

At the moment, there is no direct way of measuring the SoC. There are indirect ways of estimating it, but each has its limitations. This section deals with several indirect determination methods:

  • voltage based SoC estimation,

  • current based SoC estimation (coulomb counting),

  • combination of the current and voltage based methods and

  • SoC estimation from internal impedance measurement.

Voltage based SoC estimation

Several cell chemistries show a linear decrease of voltage with decreasing SoC. This characteristic is used to determine the actual SoC. It is possible to estimate the actual SoC by measuring the open circuit voltage of the battery. Figures 9 and 10 show the discharge voltage of two different battery chemistries. The characteristics are dependent on the temperature and the discharge rate.

Fig. 9 Open circuit voltage versus remaining capacity for a lead acid cell [11]

Fig. 10 Open circuit voltage versus remaining capacity for a Li-ion cell [12]

In Fig. 9, it can be seen that the voltage in a lead acid battery decreases significantly as it is discharged. By knowing this characteristic the battery voltage can be used to estimate the SoC. A drawback of this method is that the battery voltage is dependent on the temperature and the discharge current. These effects have to be compensated in order to increase the accuracy of the estimation.

In the case of Li-ion batteries (Fig. 10), the voltage changes only slightly over most of the range, so estimating the SoC from voltage alone is nearly impossible. However, the voltage of a Li-ion cell changes abruptly at both ends of the characteristic. This effect can be used to detect whether the battery cell is fully charged or depleted. In most systems, however, an earlier alert is required, because a complete discharge of a Li-ion cell will significantly reduce its life span.
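For chemistries such as lead acid, where the open circuit voltage falls noticeably with SoC, the voltage-based method amounts to a table lookup with interpolation. The OCV-vs-SoC table below is purely hypothetical, chosen only to illustrate the mechanism.

```python
# Sketch of voltage-based SoC estimation: linearly interpolate the measured
# open circuit voltage against an OCV-vs-SoC table, as suggested by the
# lead acid characteristic in Fig. 9. Table values are illustrative only.

# (SoC in %, OCV in V) pairs, sorted by increasing voltage
OCV_TABLE = [(0, 11.8), (25, 12.0), (50, 12.2), (75, 12.4), (100, 12.6)]

def soc_from_ocv(ocv_v):
    """Estimate SoC (percent) from an open circuit voltage reading."""
    if ocv_v <= OCV_TABLE[0][1]:
        return 0.0
    if ocv_v >= OCV_TABLE[-1][1]:
        return 100.0
    for (soc_lo, v_lo), (soc_hi, v_hi) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v_lo <= ocv_v <= v_hi:
            return soc_lo + (soc_hi - soc_lo) * (ocv_v - v_lo) / (v_hi - v_lo)

print(soc_from_ocv(12.3))  # halfway between the 50 % and 75 % table points
```

In practice the table would also be indexed by temperature and corrected for discharge current, as the text notes.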

Current based SoC estimation (coulomb counting)

With the coulomb counting method, the charge which flows into and out of the battery is measured. Since the charge cannot be measured directly, the current is measured instead and integrated over time.

The measurement has to start in a defined initial state, for example with the battery fully charged. The determined charge value is then measured relative to the fully charged battery cell. The drawback of this method is that even a very small current sensor offset causes the determined SoC to deviate from the real value. Because this error is integrated over time, it leads to significant estimation errors over longer periods. This effect is shown in Fig. 11.

Fig. 11 Drift of the SoC due to a current sensor offset [13]
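Coulomb counting, including the sensor-offset drift just described, can be sketched in a few lines; the battery capacity, sampling interval and offset values are illustrative assumptions.

```python
# Sketch of coulomb counting with a current sensor offset: the constant
# offset is integrated along with the true current, so the SoC estimate
# drifts away from reality over time. All values are illustrative.

def coulomb_count(soc0_pct, capacity_ah, current_samples_a, dt_s, offset_a=0.0):
    """Integrate the (measured) discharge current to update SoC in percent."""
    soc = soc0_pct
    for i_true in current_samples_a:
        i_measured = i_true + offset_a      # sensor offset corrupts each reading
        soc -= 100.0 * i_measured * dt_s / (capacity_ah * 3600.0)
    return soc

# Idle battery (zero true current) for one hour, sampled once per second:
samples = [0.0] * 3600
ideal = coulomb_count(80.0, 50.0, samples, 1.0, offset_a=0.0)   # stays at 80 %
drifted = coulomb_count(80.0, 50.0, samples, 1.0, offset_a=0.5) # drifts to 79 %
```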

Combination of the current and voltage based methods

A calibration of the integrator’s output can be realized if the coulomb counting variant and the voltage based method are combined:

  • the battery current is integrated to get the relative charge in and out of the battery,

  • the battery voltage is monitored to calibrate the SoC when the actual charge approaches either end.

This method is a more accurate and reliable way to estimate the SoC (compare Fig. 12).

Fig. 12 Combining current and voltage based SoC-algorithm for higher accuracy [14]
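One estimator step of the combined method can be sketched as follows: coulomb counting runs continuously, and the estimate is snapped back whenever the cell voltage reaches a known calibration point. The voltage thresholds and other values are illustrative Li-ion assumptions, not from the text.

```python
# Sketch of the combined SoC method: integrate current continuously,
# recalibrate whenever the cell voltage hits the full or empty threshold.
# Thresholds and sample values are illustrative assumptions.

V_FULL, V_EMPTY = 4.2, 3.0  # hypothetical calibration voltages

def update_soc(soc_pct, current_a, dt_s, capacity_ah, cell_voltage_v):
    """One estimator step: coulomb count, then calibrate at the ends."""
    soc_pct -= 100.0 * current_a * dt_s / (capacity_ah * 3600.0)
    if cell_voltage_v >= V_FULL:
        soc_pct = 100.0     # voltage says full: reset the integrator
    elif cell_voltage_v <= V_EMPTY:
        soc_pct = 0.0       # voltage says empty: reset the integrator
    return soc_pct

# A drifted estimate of 93 % is snapped back to 100 % when the charger
# drives the cell voltage up to the full threshold:
corrected = update_soc(93.0, -1.0, 1.0, 50.0, 4.2)  # -> 100.0
```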

A drawback of this method is that it cannot be applied in HEVs: the normal SoC range in an HEV is between 20 and 80 %, so the threshold voltage is never reached on either end.

SoC estimation from internal impedance measurement

In a discharging cycle the cell impedance varies with the SoC. Therefore by measuring the internal impedance it is possible to determine the SoC (compare Fig. 13).

Fig. 13 Internal resistance of a lead acid battery versus OCV—Open Circuit Voltage [15]

In all battery technologies the internal resistance will increase at the end of a discharging cycle. This effect can be used to determine the actual SoC. This method is rarely used because the battery has to be disconnected from the system to measure the internal resistance. Moreover the results are very temperature sensitive.

If changes of the voltage and current can be measured, then it is possible to calculate a dynamic approximation of the internal resistance with the following formula:

$$ r_{i,dyn} = \frac{{\Delta V}}{{\Delta I}} $$

With this variant the resistance can be estimated online, i.e. without disconnecting the battery.
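Applied to two measured operating points, the formula can be sketched as follows; the minimum current step is an illustrative guard against dividing by a near-zero change:

```python
def dynamic_internal_resistance(v1, i1, v2, i2, min_di=0.5):
    """Approximate r_i,dyn = dV/dI from two measured operating points.

    Returns None when the current step is too small for a meaningful
    estimate (min_di is an illustrative threshold in A).
    """
    di = i2 - i1
    if abs(di) < min_di:
        return None
    return abs((v2 - v1) / di)

# An assumed load step from 1 A / 3.95 V to 11 A / 3.75 V
# suggests an internal resistance of about 20 mOhm.
r = dynamic_internal_resistance(3.95, 1.0, 3.75, 11.0)
```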

A further effect observed in Li-ion batteries is that the internal resistance also changes near 100 % SoC (bathtub curve in Fig. 14). The largest changes of the internal impedance occur in the ranges from 0 to 30 % and from 80 to 100 % SoC.

Fig. 14

Internal resistance of a Li-ion battery versus cell capacity (DoD) [16]

3.3 SoH Determination Algorithm

There are several battery parameters which change significantly with increasing age:

  • increase of the resistance,

  • decrease of capacity.

The SoH can be determined from either of these values. Basically, the SoH describes the actual condition of the battery relative to the new battery pack, although in practice every BMS manufacturer defines the SoH differently. For a new battery it is necessary to fully charge and discharge it once in order to determine the initial capacity. This value has to be permanently stored in the BMS, because it defines 100 % SoH.

After an arbitrary number of cycles, e.g. 100 cycles, a correction of the SoH is required, based on a new reference measurement.

First the battery should be fully discharged and in the next charging cycle the capacity of the battery is determined. By comparing this value with the capacity of the completely new battery it is possible to determine the SoH of the battery.
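The capacity-based SoH determination described above reduces to a simple ratio; the cell values in the example are illustrative:

```python
def state_of_health(measured_capacity_ah, initial_capacity_ah):
    """SoH in percent: the currently measured full-charge capacity
    relative to the permanently stored capacity of the new battery
    (which defines 100 % SoH)."""
    return 100.0 * measured_capacity_ah / initial_capacity_ah

# A 2.3 Ah cell that only accepts 1.84 Ah after many cycles is at 80 % SoH.
soh = state_of_health(1.84, 2.3)
```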

A plot of the capacity versus the number of cycles is shown in Fig. 15. In this graph the Depth of Discharge (DoD) is 100 % (the battery was fully discharged in each cycle). A decrease of capacity with increasing number of cycles is observed.

Fig. 15

Cycle life time of Li-Ion Phosphate cell, 2.3 Ah [17]

3.4 Integration of BMS into the EV—State of the Art

In EVs, the battery system has to be developed as an integral part of the vehicle to avoid malfunctions (see Fig. 16). In the planning phase of the vehicle design, it is important to reserve space for all the battery systems and the BMS. The best solution is to design the system in modular blocks which could then be placed in different locations within the vehicle allowing an optimal use of the available space.

Fig. 16

Implementation of BMS [18]

These modular blocks consist of the BMS, charger unit and battery modules. The voltage of the battery pack can be determined from the number of battery modules connected in series.

Two additional points which have to be considered:

  • one critical point is the integration of a thermal management system in the vehicle. The battery can be cooled either by air or by water flow. An advantage of the water-cooled version is its higher cooling capacity. Moreover, the battery can also be heated through the cooling water if other heat sources are included in the same coolant loop (e.g. e-motor, power electronics). Liquid-cooled systems often use special heating elements, and water is a good medium for distributing that heat,

  • the second issue is to protect the battery in case of an accident. Attention has to be paid that the battery is not located in the crash zone, where it could get mechanically damaged (Fig. 17).

    Fig. 17

    Integration of a battery system in an EV [19]

The weight of the whole battery pack should be reduced as much as possible. Figure 18 shows that Li-ion technology is well suited for EVs due to its high energy density compared to other cell chemistries. In current EV models, a combination of Li-ion batteries and supercaps is often used. The supercap can provide high power for short periods, which is used to accelerate the vehicle.

Fig. 18

Energy density of different battery chemistries [21]

The lithium content in a high capacity lithium battery is actually quite small (typically less than 3 % by weight) [20]. Lithium batteries used in EVs and HEVs weigh about 7 kg (15.4 lb)/kWh, so their lithium content is about 0.2 kg (0.4 lb)/kWh. Current passenger EVs use batteries with capacities between 30 and 50 kWh, corresponding to a lithium content of about 6 kg (13 lb) to 10 kg (22 lb) per EV battery.

The capacity of HEV batteries is typically less than 10 % of that of an EV battery, and the weight of lithium used is correspondingly about 10 %.

Several BMS structures (compare Fig. 19) are currently offered on the market and differ in simplicity and price.

Fig. 19

State of the art of BMS

As it can be seen in Fig. 19, the different BMS structures can be divided into:

  • single board: this is the low cost variant; the BMS is assembled on one single printed circuit board (PCB). The battery modules do not require any special electronics besides voltage, current and temperature sensing devices. The BMS board consists of several application specific integrated circuits (ASICs) which control the battery modules. The top-level control is supervised by the BCU. This unit disconnects the battery from the application as soon as one battery parameter exceeds its predefined range. Advantage: low cost BMS; Disadvantage: each measurement point has to be wired individually to the BMS board,

  • smart modules: in this case, each battery module has its own ASIC, which protects the module directly. The ASICs are able to communicate with the battery control unit over a serial peripheral interface (SPI). All connections from the single board structure are reduced to one SPI bus common to all modules. Advantage: fewer wire connections than in the single board structure; Disadvantage: the module control unit can only send information when the BCU (master) requests it, which could lead to data loss,

  • light intelligent modules: in this case, the communication disadvantage of the smart modules has been resolved. The Module Control Unit (MCU) communicates with the BCU over a private CAN interface; in this way errors can be prevented. A microcontroller is needed on each module to handle this communication. In this configuration, the MCU performs the following tasks: measure and supervise the voltage, measure the temperature and balance the cells. The BCU handles the determination of SoC and SoH, the thermal management and the control of the precharge relay. A further task is to connect the BMS with the rest of the vehicle via CAN bus and

  • full intelligent modules: the only difference of this configuration compared with the light intelligent modules is that some functions of the BCU, for example the determination of SoC and SoH, are taken over by the MCU.
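The task split between module and battery control units in the light intelligent topology can be sketched as follows; the class names, limit values and report format are illustrative assumptions, not a real BMS interface:

```python
class ModuleControlUnit:
    """In the light intelligent topology the MCU only measures and
    balances; the values it reports here are illustrative."""
    def __init__(self, cell_voltages_v, temperatures_c):
        self.cell_voltages_v = cell_voltages_v
        self.temperatures_c = temperatures_c

    def report(self):
        # In a real system this would be sent over the private CAN bus
        return {"v": self.cell_voltages_v, "t": self.temperatures_c}


class BatteryControlUnit:
    """The BCU aggregates module reports and supervises safety limits."""
    def __init__(self, modules, v_max=4.2, t_max=50.0):
        self.modules = modules
        self.v_max = v_max      # illustrative cell voltage limit in V
        self.t_max = t_max      # illustrative temperature limit in deg C

    def safe(self):
        # Disconnect the pack as soon as any parameter leaves its range
        for m in self.modules:
            r = m.report()
            if max(r["v"]) > self.v_max or max(r["t"]) > self.t_max:
                return False
        return True
```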

3.5 Examples of Integrated BMS in EVs and HEVs

Daimler MILD-HYBRID S400 (Reporting Year 2009)

The S400 BlueHYBRID (see Fig. 20) was the first series-production model to be equipped with a Li-ion battery. Continental and Johnson Controls Saft (JCS) teamed up on the pack, with JCS providing the cells. The compact hybrid module is a disc-shaped e-motor that also acts as starter and generator. The hybrid module also has a start/stop function and supports regenerative braking.

Fig. 20

Daimler Mild-S400 BlueHYBRID with Li-ion battery pack [22]

The high torque of the e-motor at low speeds offsets the reduction in low-end torque resulting from applying the Atkinson cycle to the combustion engine.

Moving to a more powerful e-motor increased the weight of the hybrid system and decreased the fuel consumption. Furthermore, at a higher electrical to combustion power ratio, the e-motor operates increasingly in less favorable areas of the performance map as maximum requirements increase. Although relatively low in power, the e-motor delivers rated torque of 160 Nm (118 lb-ft), contributing to a combined system torque of 385 Nm (284 lb-ft).

The power electronics comprise a control unit which acts as the master of the E-drive system and a power unit that converts the direct current supplied by the battery. The power electronics can cope with continuous currents of 150 A and short-term currents as high as 310 A. Power is supplied to the e-motor by a bus bar.

The power electronics are situated in the engine compartment in the location of the conventional starter motor and are cooled by a separate circuit.

Li-ion battery pack: the compact Li-ion pack (compare Table 1), developed by Continental and JCS, comprises 35 cells and provides 19 kW of power with a capacity of 6.5 Ah. The battery is connected to the vehicle air conditioning circuit so it can be cooled independently of the engine. Cut-off valves integrated into the system allow the customer to switch off the air conditioning without interrupting battery cooling. When the engine is not running, the electric A/C compressor not only provides air conditioning but also guarantees that the battery’s operating temperature limits are not exceeded. Battery pack temperatures do not rise above 50 °C (122 °F) in any operating state, preventing serious damage.

Table 1 Key data of the battery system from Daimler

Operating strategy: the operating strategy of the S400 hybrid is based around start/stop, regenerative braking, boost and load point shifting. When providing support for load-point shifting, the operating strategy only allows shallow discharge cycles of the Li-ion battery in order to maintain its cycle life. Fuller deployment of the electrical support is only provided on the driver’s request, as indicated by the accelerator pedal position and a large pedal value gradient. Most SoC swings are in the range of 5 %; values of up to 10 % occur less frequently, while SoC cycles of more than 10 % are rarely observed. This contributes to the expected service life of 10 years.

GM Chevrolet Volt (2012)

At the heart of the Chevrolet Volt, a sophisticated battery-stack management system ensures the safety and reliability of the multi-cell Li-ion battery stack (see Fig. 21) that delivers power on demand to the Volt drive system. Within the management system, battery-monitoring boards use two key subsystems to reliably monitor cell health and deliver digital results to a host processor that orchestrates system operation. Separating those subsystems, a signal interface ensures isolation between high voltage battery-sensing circuitry and communications devices on the boards.

Fig. 21

GM Chevrolet Volt (view of battery) [23]

The Chevy Volt has been described as an extremely sophisticated vehicle. Over 100 microprocessors are used across the various subsystems to control each system. The majority of these controlling microprocessors are located in the battery pack (for the BMS) and in the inverter controlling the motor.

The Volt’s BMS (compare Fig. 22) runs more than 500 diagnostics 10 times per second, allowing it to keep track of the Volt’s battery pack in real time; 85 % of them ensure the battery pack is operating safely and 15 % monitor battery performance and life.

Fig. 22

The Chevy Volt BMS [24]

The battery installed in this vehicle has the following characteristics, as shown in Table 2.

Table 2 Chevrolet Volt (battery data)

In a PHEV, the battery is discharged at full vehicle power over longer periods (e.g. highway driving in pure electric mode). The battery therefore has to be sized accordingly. In the Chevrolet Volt, the pack is arranged in the tunnel and under the seat.

Nissan Leaf (2014)

Perhaps the best known and highest selling EV on the market is the Nissan Leaf. The all electric Nissan Leaf was the first affordable, mass produced, lithium battery EV. Key to the Leaf’s success was a battery design that balances safety, performance, cycle life, calendar life, energy density, power density, charge rate, discharge rate, weight, structural integrity, and thermal management.

The Nissan Leaf’s battery is made of 48 modules. Each module contains 4 large surface area laminate lithium manganese/lithium nickel cells. The module configuration is 2p2s, meaning that two cells are wired in parallel and this pair is then wired in series with the other pair. This results in a battery module with a nominal voltage of 7.4 V and a capacity of approximately 66 Ah (about 33 Ah per cell). The modules are encased in an aluminum enclosure. Together these 48 modules form a series string that produces between about 290 V (empty) and 400 V (full). The total energy storage capacity of this battery system is approximately 24 kWh.
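The pack arithmetic above can be checked in a few lines; the per-cell figures (3.7 V, 33.1 Ah nominal) are assumptions for illustration:

```python
# Sketch of the Leaf pack arithmetic; per-cell nominal values are assumed.
cells_per_module_series = 2     # the "2s" of the 2p2s configuration
cells_per_module_parallel = 2   # the "2p"
modules_in_series = 48

cell_v_nom = 3.7    # assumed nominal cell voltage in V
cell_ah = 33.1      # assumed nominal cell capacity in Ah

module_v = cells_per_module_series * cell_v_nom    # ~7.4 V per module
module_ah = cells_per_module_parallel * cell_ah    # ~66 Ah per module
pack_v = modules_in_series * module_v              # ~355 V nominal
pack_kwh = pack_v * module_ah / 1000.0             # ~24 kWh total
```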

Each module is essentially sealed, with no active thermal management system installed in the pack. Here, the thermal management is done via passive means, with the heat of the cells being transferred to the metal enclosure of the modules and then to the external pack enclosure.

The Leaf is using a centralized BMS, with a single control unit and a wiring harness that extends to each module (Fig. 23).

Fig. 23

Nissan Leaf BMS [25]

Renault ZOE (2015)

In early 2015, Renault announced that thanks to an improved BMS as well as an improved motor, it was able to increase the ZOE’s range. The new “more compact” motor (10 % less volume), including the improved BMS, will reportedly increase the range of the ZOE by around 8 %, roughly 20 km (12.5 mi.) on the generous European testing cycle. It also accelerates more quickly but consumes less power than the original motor. The ZOE uses a 22 kWh Li-ion battery (see Fig. 24).

Fig. 24

Battery module of Renault ZOE [26]

Renault has been granted 95 patents for its passenger car e-motor. Innovations include replacing the previous liquid cooling with air cooling (the power control unit is still liquid cooled) and reducing the size of the power control unit by 25 %. The ZOE now features a built-in Chameleon charger that can recharge at either 3 kW or 11 kW. The charger is now built into the power control unit and charging times have been reduced.

One aspect of EVs that is largely ignored by ordinary drivers is the BMS. Electric car batteries are composed of many individual cells. It is possible for some cells to become fully depleted sooner than others, or to be fully recharged sooner. The BMS constantly monitors the SoC of each individual cell to maximize power and to prevent overcharging [27].

The Renault engineers have substantially upgraded the BMS for the ZOE to better manage battery usage. Those changes play a major role in the car’s improved performance and longer range. In essence, the new software uses the stored electrical energy more efficiently.

This example in particular highlights the key role of a BMS.

3.6 Technology Trends of BMS

BMS have a significant potential for improvement regarding the determination of the SoC. As discussed in this chapter, there are several methods available to measure the actual charge, but none of them is fully accurate for Li-ion batteries.

One of the main problems is that the batteries will exhibit different characteristics depending on their history (temperature, charge/discharge cycles).

The current trend therefore is to implement an observer (Fig. 25), which compares the actual values of the battery with a state space model running in parallel. Using Kalman filter feedback, several internal parameters of the battery (SoC, SoH etc.) are tracked and corrected if necessary.
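A minimal scalar Kalman-style observer of this structure can be sketched as follows; the linear OCV–SoC relation and the noise variances are simplifying assumptions for illustration, not a production battery model:

```python
def kalman_soc_step(soc, p, current_a, dt_s, capacity_ah,
                    v_meas, ocv0=3.0, ocv_slope=1.2,
                    q=1e-7, r=1e-3):
    """One predict/correct step of a scalar Kalman-style SoC observer.

    State model:       soc' = soc - i*dt/C        (coulomb counting)
    Measurement model: v    = ocv0 + ocv_slope*soc (assumed linear OCV)
    q, r are illustrative process/measurement noise variances.
    Returns the corrected SoC and its error covariance.
    """
    # Predict: propagate the state-space model in parallel to the cell
    soc_pred = soc - current_a * dt_s / (capacity_ah * 3600.0)
    p_pred = p + q
    # Correct: compare predicted and measured voltage, feed back the error
    h = ocv_slope
    k = p_pred * h / (h * h * p_pred + r)   # Kalman gain
    soc_new = soc_pred + k * (v_meas - (ocv0 + h * soc_pred))
    p_new = (1.0 - k * h) * p_pred
    return soc_new, p_new
```

Fed with voltage measurements, the observer pulls a wrongly initialized SoC estimate towards the value implied by the measured OCV.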

Fig. 25

Implemented observer estimates the current SoC

An improvement of the SoC determination is mainly needed in HEVs, because there the battery operates only within a limited SoC range (20–80 %). As discussed above the coulomb counting method is not accurate enough in this SoC range over extended periods of time.

In order to accurately determine the SoC and therefore achieve an optimal performance of the system it is important to obtain an exact characterization of the battery. Therefore a detailed battery model is needed (compare Fig. 26).

Fig. 26

Implementation structure of a coupled electrical and thermal observer

Basically, the battery model can be split into two parts:

  • electrical battery model and

  • thermal battery model

With this determination method, the system is suitable for on-line calculation. The disadvantage in this case is the high complexity of the model, which also has to cover the complete operating temperature and dynamic range of the battery. The dynamic range takes into consideration the different chemical phenomena taking place within the cell over a wide frequency spectrum. The model should cover pulsating currents (pulses of several seconds) as well as constant currents over minutes. Moreover, the aging of the battery also has to be taken into account within the battery model.
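A heavily simplified coupled electrical/thermal model of this kind can be sketched as follows; all parameter values are illustrative assumptions:

```python
def battery_model_step(soc, temp_c, current_a, dt_s,
                       capacity_ah=2.3, r_int=0.02,
                       thermal_mass_j_per_k=80.0, h_w_per_k=0.5,
                       t_ambient_c=25.0):
    """One step of a coupled electrical/thermal battery model.

    Electrical part: coulomb counting with ohmic losses over r_int.
    Thermal part: a lumped thermal mass heated by I^2*R losses and
    cooled towards ambient.  All parameter values are illustrative.
    """
    # Electrical model: SoC from charge throughput
    soc -= current_a * dt_s / (capacity_ah * 3600.0)
    # Thermal model: heat generated by the internal resistance
    heat_w = current_a ** 2 * r_int
    cooling_w = h_w_per_k * (temp_c - t_ambient_c)
    temp_c += (heat_w - cooling_w) * dt_s / thermal_mass_j_per_k
    return soc, temp_c
```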

Bosch presented their development scenario and future trends for BMS at a Task 17 workshop (Geneva, 2011). Figure 27 highlights what future BMS will focus on. State-of-the-art BMS focus on safety, while next generation BMS will focus on optimized operation and, beyond that, on extended functionality.

Fig. 27

State of the art and development scenario of BMS [28]

3.7 BatPaC: A Li-Ion Battery Performance and Cost Model for Electric-Drive Vehicles

The information for this section has been provided by a revised final report [29], “Modeling the Cost and Performance of Lithium-Ion Batteries for Electric-Drive Vehicles”, and by a presentation on battery performance and cost by Kevin Gallagher from ANL at a Task 17 workshop in Chicago (2011).

The United States Vehicle Technology Office has supported work to develop models that help researchers design and calculate potential costs of batteries. One major model is the bottom-up Battery Performance and Cost Model (BatPaC) at ANL. This model was developed utilizing efficient simulation and design tools for Li-ion batteries in order to:

  • predict precise overall (and component) mass, dimensions, cost and performance characteristics,

  • understand how performance affects cost and

  • project battery pack values from bench-scale results.

The recent penetration of Li-ion batteries into the vehicle market has prompted interest in projecting and understanding the costs of this family of chemistries being used to electrify the automotive powertrain. The performance of the materials within the battery directly affects the final energy density and cost of the integrated battery pack. The development of a publicly available model that can project bench-scale results to real world battery pack values would be of great use.

This first version of the model, the battery performance and cost (BatPaC) model, represents the only public domain model that captures the interplay between design and cost of Li-ion batteries for transportation applications.

BatPaC makes more accurate predictions than previous models and allows vehicle manufacturers to choose the best and smallest battery for the application. Based on expert recommendations, the U.S. EPA used BatPaC to develop its most recent round of fuel economy standards. In addition, work at the National Renewable Energy Laboratory led to a multi-scale multi-dimensional framework for battery design that uses computer-aided engineering tools.

Approach to understanding cost and energy

BatPaC is built on a foundation of work by Paul Nelson at Argonne. It designs a Li-ion battery and the required manufacturing facility based on user-defined performance specifications for an assumed cell, module and pack format (power, energy, efficiency, cell chemistry, production volume). From this, it calculates the price to the OEM for a battery pack produced in the year 2020. It therefore does not model the cost of today’s batteries but of those produced by successful companies operating in 2020; some advances have been assumed, while most processes are similar to well-established high-volume manufacturing practices. BatPaC efficiently completes its calculations in fractions of a second [30].

Assumed battery format: cell

Various cell and battery design concepts are under development by battery manufacturers. ANL found that the exact design of the battery does not have an important effect on the cost for a given cell chemistry; the amounts of electrode materials and the number, capacity and electrode area of the cells are the determining cost factors. The most common cell designs for batteries nearing large scale production are cylindrical wound cells, flat wound cells, and prismatic cells with flat plates.

Some previous efforts were based on flat-wound and cylindrical cells. The format assumed by the ANL scientists is most likely not the best possible design; however, manufacturers successful in producing batteries in the year 2020 will reach similar energy densities and costs through other means.

To provide a specific design for the calculations, a prismatic cell in a stiff-pouch container was selected (compare Fig. 28).

Fig. 28

Prismatic cell in a stiff-pouch container, to enable calculations [31]

Battery Pack Design

The cells are placed on their sides in the module. The model designs the battery pack in sufficient detail to provide a good estimate of the total weight and volume of the pack and of the dimensions of the battery jacket, so that its cost can be estimated. The modules in a row are interconnected, negative to positive terminals, by copper connectors. The modules (Fig. 29) are supported by a tray that provides space for the heat transfer fluid (ethylene glycol-water solution) to flow against the top and bottom of each module.

Fig. 29

Assumed battery format: module and pack [32]

Modeling of battery design and performance

The design portion of the model calculates the physical properties of a battery based on user-defined performance requirements and minimal experimental data (compare Fig. 30). The user is asked to enter a number of design parameters such as the battery power, the number of cells and modules, etc. In addition, the user must enter one of the following three measures of energy: battery pack energy, cell capacity or vehicle electric range. The model of the battery cost calculations is shown in Fig. 31, a schematic diagram of the baseline Li-ion battery manufacturing plant. This baseline plant is designed to produce 100,000 NCA-Gr battery packs per year. The figure also highlights the adjustment of costs for varying production volumes.

Fig. 30

Summary flow of the design model [33]

Fig. 31

Battery cost calculations [34]

Illustrated results

For a set battery pack power, the number of cells in the pack has substantial effects on the price of the pack, the pack voltage and the maximum current. These effects are illustrated in Fig. 32 for NMC441-Gr PHEV25 batteries (providing 40 km (25 mi.) electric range) with 60 kW power at a V/U = 0.8. The price of the pack increases by 17 % in changing the number of series-connected cells in the pack from 32 to 96 and the entire pack integrated cost increases by 15.7 %. The integrated cost includes additions to the vehicle air-conditioning system to provide for battery cooling and the BMS with disconnects. The change of the maximum current, resulting from differing pack voltages, would also affect the cost of the motor and the electronic converter and controller, but in the opposite direction.

Fig. 32

Optimization of system costs [35]

Thus, BatPaC demonstrates how to reduce the cost of today’s batteries by lowering the cell count, moving to large cell formats (from 15 to 45 Ah) and increasing the maximum achievable electrode thickness. Further, it quantifies the benefits of future chemistries, as can be seen in Fig. 33.

Fig. 33

Path forward for Lithium battery research [36]

The developed BatPaC model may be used to study the effects of battery parameters on the performance and the manufactured cost of the designed battery packs.

BatPaC can be downloaded from the following link: http://www.cse.anl.gov/batpac/download.php.

3.8 Selection of BMS Suppliers and Manufacturers

When this Task started in 2010, e-mobility was not as familiar and common as it is today, so one of the key activities of this Task was the collection of different suppliers and manufacturers. During the last five years of reporting, many business fields have been changed, modified, extended or removed. Thus, quite a lot of the suppliers and manufacturers reported by the first Operating Agent in 2011 no longer exist, due to financial crises, a wrong business plan or the weak demand for e-mobility.

This section of the report presents a selection of common suppliers, in order to give an overview of different models and concepts.

AVL List GmbH

The Austrian company AVL is the world’s largest independent company for the development of powertrain systems with internal combustion engines as well as instrumentation and test systems.

AVL Software and Functions offers, among others, innovative and automotive-compliant solutions for the following core functions of different battery types: the determination of the loading and health condition (SoC and SoH), the provision of different functions (SoF), active and passive balancing and cell failure detection.

The BCU consists of both a low voltage and a high voltage part. The low voltage part includes a powerful 32 bit floating point CPU and several output drivers to control auxiliary components like HV contactors, water pumps, LV relays, fans or charge sockets. A variety of digital and analog input ports ensures enough flexibility for additional sensors and signals.

The high voltage part of the BCU includes high voltage measurement inputs and an integrated isolation guard, capable of up to 800 V total system voltage.

The vehicle interface is designed to be simple and easily understandable. The battery activation can be done by a discrete wake-up signal or a combination of wake-up signal and a single CAN request. The internal battery control (switching contactors, contactor weld diagnoses, balancing, isolation monitoring, SoC calculation, and much more) is handled by the BCU and does not need external algorithms. The BCU outputs the necessary CAN signals like pack and DC-link voltage, pack current flow (100 Hz), voltage and current limits, temperatures and SoC.

Hardware

The BCU features a 32-bit microcontroller including a wide variety of I/Os to manage and communicate with various sensors and actuators as well as to interface with the module control units. The BCU supports up to 3 CAN networks which are typically used for vehicle communication, internal CAN between BCU and MCUs and optional CAN for such items as instrumentation CAN or service CAN. There is also a redundant digital synchronization and fault circuitry for BCU/MCU network safety monitoring.

The MCU is an 8-bit controller that supports up to 12 cells in series. The MCU senses cell voltage (every cell) and temperature (up to 4 temperatures per module) and reports these values to the BCU. There are different MCU HW design sizes available, from minimal sized MCU (passive balancing, XX communication) to smart-MCUs for 48 V packs incorporating all necessary features for standalone operation.

Software functions

The in-house developed BMS software comprises basic and application layer software. Many functions are model based. Supported functions include: Battery Core Functions, BCU State Control, Contactor Control, Electrical Hazard Protection, Thermal Management, Battery Protection, Module Control, BCU Communication, Charge Control, Diagnostic Event Handling, Diagnostic Event Manager and Balancing Control: Control cell balancing [37] (Fig. 34).
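As an illustration of the balancing control function, a passive balancing decision can be sketched as follows; the 10 mV window and the function name are illustrative assumptions, not AVL parameters:

```python
def select_cells_to_balance(cell_voltages_v, window_v=0.01):
    """Passive balancing decision: select every cell whose voltage
    exceeds the weakest cell by more than a small window, so it can
    be bled down resistively.  The 10 mV window is illustrative."""
    v_min = min(cell_voltages_v)
    return [i for i, v in enumerate(cell_voltages_v)
            if v - v_min > window_v]

# Cells 0 and 3 sit noticeably above the weakest cell and would be bled down.
to_bleed = select_cells_to_balance([4.15, 4.10, 4.101, 4.16])
```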

Fig. 34

Overview of AVL module system [38]

A123 Systems

A123 Systems, LLC develops and manufactures advanced Nanophosphate® lithium iron phosphate batteries and energy storage systems that deliver high power, maximize usable energy, and provide long life, all with excellent safety performance.

A123’s system design takes advantage of the patented Nanophosphate® cell technology, which delivers high power, excellent safety, and long life. These high-performance cells are incorporated into battery modules which serve as the building blocks for advanced energy storage systems. All modules and systems are built with high-grade components, battery management systems, and thermal management for long battery life and retained capacity.

A123 has developed and validated an automotive-grade electronics and software set for battery management, designed to ensure the safe and reliable operation of large battery systems. The distributed system consists of a battery control module, a current sense module, monitor and balance electronics, and an electrical distribution module. Features of the BMS include industry standard CAN and diagnostic interfaces, SoC and SoH algorithms, charge management, and safety management. The BMS components can be reused across energy and power systems to enable rapid design and development of a cost effective system.

Figure 35 shows the Nanophosphate® Energy Core Pack (23 kWh) module, which is designed for PHEV and EV applications as a ready-to-use sample pack for rapid deployment into powertrains for testing and development purposes. Off-the-shelf energy core packs offer a finished design to facilitate early vehicle development with less lead time and no engineering charges. Each pack comes equipped with battery management electronics, thermal management, and a standard vehicle communication and control interface.

Fig. 35

Cell (left), module (middle) and system (right) [39]

AKASOL Engineering

The German company AKASOL develops and produces innovative Li-ion battery systems for the automobile and commercial vehicle industries, for the wind energy, hydropower and solar industries, as well as for the shipbuilding industry. The company is working on battery systems like AKASYSTEM (previously known as AIBAS), one of the world’s most powerful battery solutions for BEVs or (P)HEVs. The system is freely scalable, automotive-certified, standardized, ready to order and made in Germany. AKASYSTEM operates with passive and active thermal management using liquid cooling. Cell temperatures therefore always remain within the recommended range, even under heavy use. This promotes high performance values and prolongs service life.

The basis of the modular scalable Li-ion AKASYSTEM (compare Fig. 36) battery systems is formed by the highly integrated module AKAMODULE (see Fig. 37).

Fig. 36

AKASYSTEM is regarded as one of the most powerful battery solutions [40]

Fig. 37

Battery system and module by AKASOL [41]

One of the decisive advantages: despite the extremely high functional integration at module level, the AKAMODULE achieves an energy density of more than 140 Wh/kg. This enables long vehicle range combined with exceptional durability. Every AKAMODULE is cooled with a water-glycol fluid mix and provides an extremely compact and lightweight solution through an intelligent combination of housing and cooling structure. The technical data of the battery module are shown in Table 3.

Table 3 Key data for Akasol Engineering battery module

Bosch

Bosch Battery Systems develops, manufactures and markets battery systems for all kinds of xEVs (see Fig. 38). As well as providing individual components, Bosch Battery Systems also offers full battery systems, all from a single source. The areas of operation include battery systems for HEVs, PHEVs and BEVs, modules for systems, hardware and software for BMS, as well as thermal management systems. Thus, it covers all types of applications including Li-ion battery technology for automotive powertrain applications.

Fig. 38

Overview of different battery systems by Bosch [42]

Bosch is working on smart Li-ion BMS and is developing a system that sends data through the cell connectors rather than over a dedicated communications network. The system, designed for EVs and PHEVs, should boost performance, safety and battery service life as well as reduce weight in Li-ion packs. Bosch is researching this innovative system, which uses the path travelled by the electricity in the battery to carry data. The data is then sent to a central control unit, eliminating the need for costly data-transmission wiring (see Fig. 39).

Fig. 39
figure 39

Using a battery’s internal wiring to send data [43]

Bosch’s intention is to constantly monitor and to control each battery cell individually. This will allow optimum use of the battery’s energy. If a single cell in the battery is no longer operating efficiently, then only that single cell will have to be replaced, not the whole module. Migrating data transfer to the internal wiring would allow the battery pack’s energy use to be optimized, and would also help to reduce costs.

I+ME ACTIA

I+ME ACTIA researches new battery technologies based on NiMH and Lithium Polymer (LiPo) batteries and has been working on the development of BMS since 1995. The BMS are mainly designed for the use of NiMH or LiPo batteries in mobile applications.

The BMS of I+ME ACTIA consists of one master and up to 20 slaves (compare Fig. 40).

Fig. 40
figure 40

Master (left) and slave (right) unit of Actia [44]

The master functions include the voltage measurement for the whole battery and the communication with the slave modules. The master takes control of the protective relay if a limit is exceeded. Further, the SoC and SoH are determined and can be sent over a CAN interface to the vehicle controller. The master provides 8 digital inputs with different characteristics, 8 digital outputs for the activation of contactors or relays, a CAN bus for communication with other control units, an Ethernet interface for the representation of system data and status, and an RS232 interface for communication with a PC.

The slave module is responsible for measuring the cell voltages and also performs the cell balancing of the battery module, with an accuracy of ±5 mV.

The slave comprises the microprocessor-controlled part of the BMS; it measures and monitors 5–10 Li-ion cells, measures the voltage and temperature of the cells, controls the voltage balancing, and provides 10 differential analogue inputs and 3 analogue inputs for temperature measurement.

Moreover, the slave sends all measured data to the master unit for top-level control. The limit values of the BMS can be configured via software.
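The master/slave split described above can be sketched in a few lines of code. This is an illustrative outline only, not I+ME ACTIA's implementation; the class names, the 4.2 V limit and the balancing threshold (derived loosely from the ±5 mV accuracy mentioned above) are assumptions.

```python
BALANCE_THRESHOLD_V = 0.005  # comparable to the ±5 mV measurement accuracy

class Slave:
    """One slave board monitoring a group of 5-10 cells."""
    def __init__(self, cell_voltages, temperatures):
        self.cell_voltages = cell_voltages  # per-cell voltage [V]
        self.temperatures = temperatures    # temperature sensor readings [°C]

    def cells_to_balance(self):
        # Balance cells sitting above the lowest cell by more than the threshold
        v_min = min(self.cell_voltages)
        return [i for i, v in enumerate(self.cell_voltages)
                if v - v_min > BALANCE_THRESHOLD_V]

class Master:
    """Master unit: polls slaves and takes control of the protective relay."""
    def __init__(self, slaves, v_limit=4.2):
        self.slaves = slaves
        self.v_limit = v_limit
        self.relay_open = False

    def poll(self):
        # Open the protective relay as soon as any cell exceeds the limit
        for slave in self.slaves:
            if any(v > self.v_limit for v in slave.cell_voltages):
                self.relay_open = True
        return self.relay_open

pack = Master([Slave([3.70, 3.71, 3.69, 3.72, 3.70], [25.0, 26.1])])
print(pack.poll())                        # False: all cells within limits
print(pack.slaves[0].cells_to_balance())  # indices of cells above the minimum
```

In the real system the slave-to-master transfer runs over the internal bus and the master reports SoC/SoH over CAN; here both are reduced to direct method calls.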

The battery systems of this company are often used in hybrid vehicles, bicycles, forklifts, wheelchairs and robots. The technical details of the master and slave units are available online [45].

Johnson Controls

Johnson Controls supplies complete battery systems covering activities from design to manufacturing and is the leading independent supplier of hybrid battery systems for making vehicles more energy-efficient (see Fig. 41). It was the first company in the world to produce

Fig. 41
figure 41

Product portfolio of Johnson Control (batteries from left to right: advanced start-stop vehicles; micro HEVs; (P)HEVs and EVs) [46]

Li-ion batteries for mass-production HEVs. The BMS from Johnson Controls is well suited to applications in the automotive sector. Battery systems using the Johnson Controls BMS are able to interact with other automotive systems and adjust their performance to ever-changing conditions. This is realized with the help of microprocessors and hardware adaptations. Furthermore, this BMS includes thermal management. Johnson Controls thus has expertise in designing, developing, and integrating fully integrated air- and liquid-cooled battery thermal management systems.

Electronic management: Johnson Controls' system electronics integrate cell balancing for extended driving range and battery life. Diagnostics and voltage monitoring, including temperature controls, can be run on each individual cell. Johnson Controls has the capability to design and produce the cell supervision circuit and balancing electronics in its global Li-ion battery technology centers and facilities. The technical specifications are listed in Table 4.

Table 4 Technical specifications of Johnson Control batteries for HEVs and PHEVs/EVs [47]

The functionality of the Johnson Controls battery management system is shown in Fig. 42. The figure shows that the BMS receives information from the power electronics and the motor on the one hand, while on the other hand it controls the power output stage. Via the battery disconnect unit, the battery can be disconnected from the electronics; this is a safety measure in case a failure occurs. Further, the charger provides battery information to the BMS, which adapts the charging characteristics to the actual battery state. Thermal management is important because the temperature of a Li-ion battery must not exceed its limit; otherwise the life expectancy of the battery will decrease.

Fig. 42
figure 42

BMS functionality of the firm Johnson Control [48]

The BMS (compare Fig. 43) measures the battery voltage and current. With this information the system is able to ensure safe operation. Moreover, cell balancing and temperature are also supervised.

Fig. 43
figure 43

Schematic of the BMS [49]
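The supervision logic just described (measure pack voltage, current and temperature; open the battery disconnect unit when a limit is exceeded) can be sketched as follows. The limit values and function names are illustrative assumptions, not Johnson Controls specifications.

```python
# Illustrative operating windows (assumed values, not from a data sheet)
LIMITS = {
    "voltage_v":     (250.0, 400.0),   # allowed pack voltage window
    "current_a":     (-200.0, 200.0),  # charge (-) / discharge (+) window
    "temperature_c": (-20.0, 55.0),    # Li-ion operating window
}

def disconnect_required(voltage_v, current_a, temperature_c):
    """Return True if the battery disconnect unit should open."""
    readings = {"voltage_v": voltage_v, "current_a": current_a,
                "temperature_c": temperature_c}
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            return True  # any limit violation disconnects the pack
    return False

print(disconnect_required(360.0, 120.0, 30.0))  # False: normal operation
print(disconnect_required(360.0, 120.0, 61.0))  # True: over-temperature
```

A production BMS would additionally debounce transients and log the fault cause before opening the contactors; the sketch only shows the decision itself.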

4 Thermal Management

During the last years, strong attention has been paid to increasing the energy density of EV batteries and to improving the energy consumption of the electric powertrain as well as of the auxiliary components of the vehicle. For BEVs, heating and cooling the cabin is an issue that many people have been working on for years.

Thermal management has a high potential for improving fuel economy and reducing emissions of HEVs and BEVs. Thus, thermal management of batteries is essential for effective operation in all climates. During the last years, the optimization of thermal management of vehicles has become an important business segment [50].

Thermal management affects (compare Fig. 44):

  • fuel/energy consumption (e.g. friction losses, combustion process, recovery of energy losses, efficiency, etc.),

  • emissions (e.g. cat-light-off, EGR/SCR strategies, etc.),

  • engine performance (e.g. effective cooling, engine efficiency, reduction of friction losses, etc.),

  • comfort and safety (e.g. cabin conditioning, windscreen defrosting, etc.)

Fig. 44
figure 44

Thermal management and its impact

Compared to a conventional vehicle, an HEV contains additional heat sources such as e-motors, power electronics, batteries, etc., which have to be kept within a certain temperature range to achieve high efficiency and to protect components against overheating. Due to the interaction between different subsystems such as the combustion engine, e-motor/generator, energy storage and drive train, a comprehensive simulation model is necessary for hybrid systems.

Figure 45 gives an overview of different drive train configurations, including the ICE, micro HEV, mild HEV, full HEV and PHEV/BEV, and their requirements concerning new vehicle constraints, new thermal needs, and new systems and components. This picture demonstrates the need for further thermal management systems.

Fig. 45
figure 45

HEV/EV thermal management activities [51]

During the last years, many companies and R&D institutions have recognized the demand for thermal management solutions. Therefore, two Task 17 workshops were held in Chicago (2013) and Vienna (2014), focusing on Thermal Management Systems (TMS) and concepts for HEVs.

Companies and research institutes such as ANL, the Austrian Institute of Technology (AIT), Delphi, Fraunhofer, qpunkt and Valeo presented their results and concepts.

A selection of the most important ones is presented in this section.

Overview of ambient temperature impact and drive pattern on energy consumption for HEVs, PHEV and BEV

ANL highlighted the impact of ambient temperature and drive pattern on energy consumption. Thus, they compared a BEV (Nissan Leaf 2012), an HEV (Toyota Prius 2010) and a conventional vehicle (Ford Focus 2012).

The study was a comprehensive thermal investigation: 7 vehicles spanning conventional vehicles (CV) (gas and diesel), HEVs (mild to full), a PHEV and a BEV were tested on cold-start UDDS, hot-start UDDS, HWFET and US06 cycles at ambient temperatures of −7 °C (20 °F), 22 °C (72 °F) and 35 °C (95 °F) with 850 W/m2 of sun emulation (compare Fig. 46).

Fig. 46
figure 46

Wide technology spectrum of research vehicles [52]

The output of this study demonstrates the following facts:

  • −7 °C (20 °F) cold starts have the largest cold-start penalty due to high powertrain losses and friction. Once a powertrain has reached operating temperature, the energy consumption is close to the 22 °C (72 °F) results again (see Fig. 47),

    Fig. 47
    figure 47

    UDDS energy consumption for cold and hot start [53]

  • 35 °C (95 °F) environment requires a constant A/C compressor load which impacts the energy consumption across all vehicle types on hot and cold starts,

  • worst-case scenarios for the different vehicle types:

    • CV: 35 °C (95 °F) environment due to 4–5 kW of extra air conditioning load,

    • HEV: both −7 °C (20 °F) and 35 °C (95 °F) have a large range of increase due to a change in hybrid operation (fuel and electricity trade off),

    • PHEV: −7 °C (20 °F) where the PHEV uses both the engine and the electric heater to warm up the powertrain and the cabin,

    • BEV: −7 °C (20 °F) due to 4 kW of heater which can double the energy consumption on a UDDS,

  • Battery system resistance doubles from 35 °C (95 °F) to −7 °C (20 °F) for all battery chemistries in the study
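The last bullet has a direct efficiency consequence that a small calculation makes concrete: at the same current, ohmic losses scale linearly with internal resistance, so doubling the resistance doubles the heat lost inside the pack. The resistance and current values below are illustrative assumptions, not figures from the ANL study.

```python
def ohmic_loss_w(current_a, resistance_ohm):
    """Heat dissipated inside the pack: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

r_warm = 0.05        # assumed pack resistance at 35 °C [ohm]
r_cold = 2 * r_warm  # doubled resistance at −7 °C, per the study's finding
current = 100.0      # assumed load current [A]

print(ohmic_loss_w(current, r_warm))  # 500.0 W lost at 35 °C
print(ohmic_loss_w(current, r_cold))  # 1000.0 W lost at −7 °C
```

That extra kilowatt of internal loss is one reason cold-start energy consumption rises across all battery chemistries in the study.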

Looking in more detail into the study, as can be seen in Fig. 48, using the heater in an electric car may double the energy consumption in city-type driving. Figure 49 shows that driving at higher speeds and driving aggressively will increase the energy consumption of an electric car.

Fig. 48
figure 48

Using the heater in an electric car may double the energy consumption in city type driving [132]

Fig. 49
figure 49

Driving at higher speeds and aggressively will increase the energy consumption in an electric car [54]

Comparing the cold-start and hot-start energy consumption shows that the cold-start energy consumption is clearly larger (see Fig. 50).

Fig. 50
figure 50

Cold start energy consumption is larger than the hot start energy consumption [55]

Figures 51 and 52 show the largest energy consumption increases for a BEV and for a conventional vehicle. The largest energy consumption increase for an EV occurs at −7 °C (20 °F) and for a conventional vehicle at 35 °C (95 °F), while a conventional vehicle has the largest absolute energy consumption penalty on a cold start.

Fig. 51
figure 51

Largest energy consumption increase for an EV occurs at −7 °C (20 °F) and for a CV at 35 °C (95 °F) [56]

Fig. 52
figure 52

A conventional vehicle has the largest absolute energy consumption penalty on a cold start [57]

Figure 53 shows that increased speeds and accelerations generally translate to higher energy consumption, except for the conventional vehicle due to its low efficiency in the city.

Fig. 53
figure 53

Generally increased speeds and accelerations translate to higher energy consumption except for the CV due to low efficiency in the city [58]

For more information on this study, refer to the Journal of Automobile Engineering [59].

Driving at higher speeds and aggressively will increase the energy consumption in an EV.

Cold start energy consumption is larger than the hot start energy consumption.

Largest energy consumption increase for an EV occurs at −7 °C (20 °F) and for a conventional at 35 °C (95 °F).

A CV has the largest absolute energy consumption penalty on a cold start.

Generally increased speeds and accelerations translate to higher energy consumption except for the conventional due to low efficiency in the city.

4.1 Heating Technologies

In conventional vehicles (driven by an ICE), the heating system is usually fed by the waste heat of the combustion engine. The ICE has a relatively low efficiency of about 30–40 %, so a lot of waste heat has to be dissipated to the cooling system. One part of these heat losses is transferred to the vehicle cabin by a heat exchanger and forced convection from a fan; the other part is dissipated to the surrounding environment by a radiator. The use of this heat has no direct influence on the driving range of the vehicle because it is generated as part of the combustion process.

In BEVs, this waste heat is not available, so the heat for the passenger compartment has to be taken from other sources. Due to the high efficiency of the drive components, such as the electric machine, the power electronics and the battery, only little waste heat is generated, which is not enough to heat the cabin. Hence, the heating system has to be powered by another source. If heat is generated electrically from energy taken from the traction battery, the driving range of the vehicle may be reduced by up to 50 % (compare Figs. 54 and 55).
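The quoted range reduction of up to 50 % follows directly from an energy balance: if the heater draws about as much power as the average driving load, the battery empties twice as fast. The following back-of-the-envelope sketch uses assumed, illustrative values for battery capacity, driving power and speed.

```python
def range_km(battery_kwh, drive_kw, heater_kw, speed_kmh):
    """Range until the battery is empty, with a constant heater load."""
    hours = battery_kwh / (drive_kw + heater_kw)  # time until empty
    return hours * speed_kmh

battery = 24.0  # assumed usable battery energy [kWh]
drive = 6.0     # assumed average urban driving power [kW]
speed = 50.0    # assumed average speed [km/h]

print(range_km(battery, drive, heater_kw=0.0, speed_kmh=speed))  # 200.0 km
print(range_km(battery, drive, heater_kw=6.0, speed_kmh=speed))  # 100.0 km
```

With a 6 kW heater matching a 6 kW driving load, the range exactly halves; milder weather or a more efficient heater narrows the gap accordingly.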

Fig. 54
figure 54

Thermal resources of a CV (left) and of an EV (right) [60]

Fig. 55
figure 55

Influence of heating and cooling an EV on the range anxiety [61]

qpunkt, an Austrian company, evaluated the range losses due to vehicle heating (average for January), based on 1D Matlab/Simulink simulations with a given average driving power for urban and overland cycles, depending on the ambient temperature (compare Fig. 56).

Fig. 56
figure 56

Range losses due to vehicle heating (average for January)—bases on 1D Matlab Simulink Simulations with a given, average driving power for urban and overland cycles depending on the ambient temperature [62]

qpunkt is a specialist in the fields of thermal management, computational fluid dynamics (CFD) and acoustics, in automotive engineering and in other areas such as aviation, railway, research and development.

In BEVs heat has to be generated instead of using waste heat.

The easiest way is to use the electrical energy stored in the traction battery of the vehicle to generate heat with a positive temperature coefficient (PTC) element, at the cost of a reduced driving range. Another heating option is based on burning a CO2-neutral fuel such as bioethanol to generate heat; this requires a bioethanol tank, which has a negative influence on the vehicle weight.

Cold weather conditions affect the battery, notably in terms of:

  • capacity: a very cold battery doesn’t fully charge. It can be compared to a fuel tank that contracts in the cold. The result is, of course, reduced range,

  • power: a cold battery cannot provide all the power required by the motor, reflected in weaker acceleration. These batteries are highly sensitive to temperature changes and can be negatively impacted if improperly designed. Depending on where the climate control system is located, the battery life can be either positively or negatively impacted by the climate control design of the vehicle and

  • charge: an overly cold battery cannot be charged quickly, making for longer “quick” charging times. Cold, however, has no impact on standard charging via a wall-box.

One of the emerging technologies being used to solve cooling and heating problems for EVs and HEVs is the heat pump. This technology has been around for decades, but heat pumps are only now coming into their own in the automobile industry. They transfer heat between a working fluid (in most cases a refrigerant) and the air. As such, heat pumps can be used to both heat and cool the cabin of a vehicle and are estimated to increase battery range substantially.
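The benefit can be stated with the coefficient of performance (COP): for the same cabin heat demand, the electrical input drops by the COP compared with a purely resistive (PTC) heater, whose COP is 1. The COP value below is an illustrative assumption; real values vary strongly with ambient temperature.

```python
def electrical_input_kw(heat_demand_kw, cop):
    """Electrical power needed to deliver a given heating power."""
    return heat_demand_kw / cop

demand = 4.0  # assumed cabin heat demand [kW]

print(electrical_input_kw(demand, cop=1.0))  # 4.0 kW with a PTC heater
print(electrical_input_kw(demand, cop=2.5))  # 1.6 kW with a heat pump
```

Cutting the heater draw from 4 kW to 1.6 kW directly reduces the range penalty discussed above, which is why heat pumps are attractive for BEVs.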

With the Renault ZOE, Renault presented the first EV that uses a heat pump in addition to air conditioning. The heat pump fitted to the ZOE (compare Fig. 57) is a reversible air conditioning system.

Fig. 57
figure 57

Heating system of the Renault ZOE [63]

The following subsections present different concepts and technologies which were demonstrated during the Task 17 workshops mentioned above.

4.2 Automotive Thermal Comfort by Valeo

Valeo has implemented the thermal systems business group in its portfolio. This business group develops and manufactures systems, modules and components to ensure thermal energy management of the powertrain and comfort for each passenger during all phases of vehicle use. Their work therefore focuses on climate control, powertrain thermal systems, climate control compressors and the front-end module.

Valeo offers a wide range of different technologies for thermal management solutions.

Stop Stay Cool (SSC) evaporator: this evaporator contains an embedded phase change material (PCM) to store and recover energy when the engine is off. This provides thermal comfort at a stop (with the engine off), and no changes to the module architecture are required. The SSC enables a longer engine shut-off at high heat loads (see Fig. 58).

Fig. 58
figure 58

Performance and benefits of the SSC evaporator [64]

Front-end integration (active shutter): the front end has an impact on overall power efficiency (emissions and fuel economy). Active shutters improve heat pump de-icing and lower the drag coefficient. The front-end design and integration is key to leveraging system efficiency, compressor downsizing and operating range (see Fig. 59).

Fig. 59
figure 59

Front-end integration—active shutter (image courtesy by Valeo [65])

Cooling technologies: depending on the vehicle itself, different cooling techniques are available: passive air cooling, active air cooling, liquid cooling and direct cooling (Fig. 60). Passive air cooling systems are suitable for low-performance city vehicles with low mileage per day and no fast charging; compromises in vehicle performance have to be accepted. Active air cooling is used for average driving conditions with sufficient battery capacity; fast charging is one of its limitations. Liquid cooling is used in high-performance EVs, which depend on 100 % availability in all driving, charging and ambient conditions. Liquid cooling can also be used for easy heating. Direct cooling, also known as refrigerant direct cooling, is used in high-performance hybrids, which do not depend on 100 % availability under all ambient conditions (specifically winter conditions).

Fig. 60
figure 60

Different cooling techniques for xEVs [66]

Following current trends, EVs and PHEVs are gaining more and more popularity. Cooling methods, especially liquid cooling, can therefore be seen as the mainstream for EVs/PHEVs. This trend is shown in Fig. 61. Thus, refrigerant direct cooling and liquid cooling might be the future cooling methods.

Fig. 61
figure 61

Battery TMS—development trend forecast [67]

4.3 Development of Nanofluids for Cooling Power Electronics by Argonne

This project report is hosted by E. V. Timofeeva, D. Singh, W. Yu, D. France, from Argonne National Lab.

Two cooling systems are currently used in HEVs: a higher-temperature system for cooling the gasoline engine and a lower-temperature system for cooling the power electronics.

The DoE started a project with the goal of eliminating the lower-temperature cooling system, such that all cooling is done with a single higher-temperature cooling system. In order to reach this goal, the cooling fluid needs better heat transfer properties, and the DoE proposed to use nanofluids as coolants. Nanofluids have a proven ability to increase thermal conductivity and heat transfer and promise to reduce the size, weight, and number of heat exchangers for power electronics cooling.

Definition of Nanofluids

Nanofluids are fluids containing nanoparticles (nanometer-sized particles of metals, oxides, carbides, nitrides, or nanotubes). Nanofluids exhibit enhanced thermal properties, among them higher thermal conductivity and heat transfer coefficients compared to the base fluid [68].
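As a hedged point of reference, the conductivity enhancement of a dilute suspension of well-dispersed spherical particles can be estimated with the classical Maxwell effective-medium model. This is a textbook baseline, not the Argonne method, and it uses volume fraction where the results quoted later in this section are given in weight percent; the particle and fluid conductivities below are illustrative assumptions.

```python
def maxwell_k_ratio(k_particle, k_fluid, phi):
    """Maxwell model: effective conductivity ratio k_eff / k_fluid
    for spherical particles at volume fraction phi."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return num / den

# 5 vol% of a highly conductive particle in an ethylene glycol/water base
print(round(maxwell_k_ratio(k_particle=120.0, k_fluid=0.4, phi=0.05), 3))
# roughly 1.16, i.e. only ~16 % enhancement
```

The model saturates around 1 + 3·phi for highly conductive particles, which is why the much larger ratios reported later for graphitic nanofluids are attributed to percolating flake networks rather than isolated spheres.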

Simulations of the cooling system of a large truck engine indicate that replacing the conventional engine coolant (an ethylene glycol-water mixture) with a nanofluid would provide considerable benefits by removing more heat from the engine. Additionally, a calculation has shown that a graphite-based nanofluid developed by Argonne could eliminate one heat exchanger for cooling power electronics in an HEV. This would reduce weight and allow the power electronics to operate more efficiently.

To develop nanofluids for heat transfer (e.g., cooling), the team used a systems engineering approach. This method enables scientists to look at how nanofluid systems work by analyzing the behavior of the whole system, which is different from looking at each individual property of the system, such as nanoparticle material, concentration, shape and size.

Nanofluids improve performance of vehicle components

Researchers from ANL have been working with industrial partners to create and test nanofluids which improve the cooling of power electronics in HEVs. Fluids containing nanoparticles can lessen the need for heat exchangers by increasing the heat transfer efficiency. The result is smaller cooling systems and lighter vehicles.

During a Task 17 workshop, researchers from ANL presented a silicon carbide nanoparticle and ethylene glycol/water nanofluid that carries heat away 15 % more effectively than conventional fluids. Further, they developed a graphite-based nanofluid whose thermal conductivity is 50 % greater than that of the base fluid, which would, under specific conditions, eliminate the need for a second heat exchanger for cooling power electronics. To develop nanofluids for heat transfer, the Argonne team used a systems engineering approach (compare Fig. 62). Using this approach, they discovered that particle size and shape, in combination with particle concentration, were key to designing effective nanofluid coolants.

Fig. 62
figure 62

Approach for using nanofluids to replace heat exchangers [69]

The project concentrated on the further development of graphite-based nanofluids for hybrid electronics cooling that achieve higher heat transfer coefficients while keeping the viscosity low.

Figure 63 shows the thermal conductivity mechanisms in nanofluids. The project focused on the development of graphitic nanofluids. ANL scientists developed a surface modification of nanoparticles that uses electrostatic repulsion to achieve good dispersion, low viscosity and stability of the suspensions, while percolation of the nanoparticles provides high thermal conductivity.

Fig. 63
figure 63

Thermal conductivity mechanism [70]

Accomplishments: the study of particle shape effects of graphitic nanoparticles on the thermal conductivity and viscosity of suspensions, in particular the graphitic platelet diameter and thickness (Fig. 64), allowed the identification of promising graphitic additives (graphite nanoflakes or multilayered graphene), which are low-cost and commercially available at large scale. A surface modification procedure was developed to overcome the poor dispersibility of graphite in water and ethylene glycol-water mixtures.

Fig. 64
figure 64

Various grades nanoparticle selections GnP [71]

A thermal conductivity ratio between 1.5 and 2.3 at 5 wt% of nanoparticles (room temperature) enables dramatic improvements in power electronics cooling. Figure 65 shows the calculated heat flux for double-sided cooling, while Fig. 66 shows the heat flux for single-sided cooling.

Fig. 65
figure 65

Heat flux—double sided cooling [72]

Fig. 66
figure 66

Heat flux—single sided cooling [73]

Project in a summary:

  • the analysis of the power electronics cooling system allowed establishing criteria for an efficient nanofluid coolant, such as a thermal conductivity ratio of more than 1.5,

  • such enhancements are possible with graphitic nanoparticles that are commercially available at reasonable costs (20 % added cost to coolants),

  • graphitic nanofluids in 50/50 mixture of ethylene glycol and water showed:

    • morphology dependent thermal conductivity;

    • 50–130 % increases in thermal conductivity at 5 wt% (room temperature), offering possibilities for dramatic improvement in liquid cooling;

    • surface treatment provides better dispersion stability, lower viscosity, and higher thermal conductivity;

    • enhanced performance with temperature,

    • the optimized and scaled-up nanofluid was tested in a heat transfer loop; fouling and erosion tests further validated the commercial viability of the graphitic GnP nanofluid technology.

4.4 Eko-Lack: Simulation and Measurement of an Energy Efficient Infrared Radiation Heating of a Full EV by AIT and Qpunkt GmbH

This project report is hosted by Bäuml T., Dvorak D., Frohner A., Simic D. from AIT.

Together with other Austrian institutions and companies, the AIT (Austria’s largest non-university research institute and, among European research institutes, a specialist in the key infrastructure issues of the future) worked on a project focusing on the simulation and measurement of an energy-efficient infrared radiation heating system for a BEV.

As mentioned at the beginning of this chapter, heating up an EV leads to a loss of range. One possible conventional approach would be the reduction of the demanded heat, which would improve the total vehicle efficiency. On the other hand, this leads to a conflict of targets between efficiency and comfort.

High comfort within an EV strongly increases customer acceptance, while reducing the heating power to save energy decreases the customer’s perception of the vehicle.

Thus, the approach for a new heating system has been taken from civil engineering technology to increase the heating efficiency. The aim of the project was to heat up the interior of an EV through a special varnish based heating system which can be applied to different surfaces and components, and requires less energy than conventional heating concepts while increasing the range and the passenger comfort. This special heating varnish was subjected to various durability tests. A major challenge was to maintain a durable contact between the electrodes and the varnish layer. A 1D and 3D simulation of the varnish based heating system in the vehicle passenger compartment was implemented and linked with an intelligent control concept. Heating elements were manufactured and embedded in a test vehicle, which was evaluated during several test runs.

Infrared Heating System

Conventional heating systems of EVs are very inefficient. A cheap method is to use PTC heaters in EVs. There, the electric resistance of the heater rises with the temperature of the element and reaches a certain stable operating point. The air around this PTC element is heated up and then transported to the cabin by a fan. At low ambient temperatures this inefficient heating method uses up to 6 kW of electrical power for heating the cabin. Figure 67 shows this power consumption for a vehicle on a chassis dynamometer at an ambient temperature of −7 °C (19 °F) and a vehicle speed of about 50 km/h (31 mph). The power needed for heating up the cabin is about as much as the driving power of a small vehicle at a constant speed of 50 km/h (31 mph). The proposed infrared heating system has a much higher efficiency: only a small electrical power is needed, and thus the consumed heating energy can be reduced. It takes about 20 min to reach a comfortable climate in the cabin.

Fig. 67
figure 67

Comparison of drive and heating power (conventional convective heating) measured in a BEV test vehicle on a climatized vehicle dynamometer [74]

Figure 68 shows a comparison between a conventional heating system and an infrared heating system. The infrared system is independent, locally separated and highly efficient.

Fig. 68
figure 68

Conventional versus infrared heating system [75]

The infrared radiation system is based on an electrically conductive coating. When an electric current flows through the coating, heat is generated and dissipated directly as infrared radiation to the passenger. Because the heating elements are very close to the passenger, and infrared radiation is commonly reckoned as very comfortable heat, the air temperatures in the vehicle cabin can be kept lower while preserving or even increasing the thermal comfort of the passenger.

Infrared radiation provides a very quick thermal sensation and comfortable feeling. The heating system can be designed either as large-area coating for infrared heating or as contact heating. It can be applied to different carrier materials and structures in a vehicle environment like the dashboard, door coverings, A- and B-pillars, to the center console, the steering wheel and to many more objects.

For choosing the right carrier and covering materials, which have different properties, several tests have been carried out to optimize the heat dissipation and radiation capabilities. Insulating and reflecting carrier layer materials prevent heat dissipation to the back side and optimize heat transfer to the desired front side. Kevlar honeycomb layers as used in aviation, as well as fiberglass sheets covered with silicone and a reflective coating layer, have been used to manufacture samples for the heating elements. The behavior of the element covers (leather or textiles for automotive applications) is very important, especially in the case of partial damage to the heating coating. Small damages have a big influence on the electrical resistance but do not create dangerous hot spots on the heating coating. Different types of electrodes for contacting the heating elements have been tested for durability and conductivity. Aluminum electrodes turned out to offer the best compromise between electrical conductivity and mechanical stability. For analyzing the long-term behavior under automotive conditions (vibrations, UV radiation, heat, etc.), a special test box was set up where vibration tests could be performed. UV radiation tests and operational tests at different ambient temperatures were also performed to determine the long-term reliability of the heating coating itself, the carrier and covering layers, as well as the electric contact surfaces.

For this project, the heatable coating “radheat” from qpunkt was chosen. The basis of the product “radheat” is an electrically conductive coating layer consisting of a semiconductor material-polymer dispersion (w/o carbon nanotubes or ceramics).

Basic facts of “radheat”:

  • weight: ~200 g/m2,

  • typical film thickness: 0.5 up to 1 mm (0.02–0.04 in),

  • voltage: from 0 to 400 Volt (AC/DC),

  • maximum temperature: 400 °C (752 °F) and

  • maximum power: 40 kW/m2 (depending on coating thickness).
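The quoted maximum power density gives a simple sizing rule: the coating area needed for a target heating power is power divided by the chosen surface loading. The target power and loading below are illustrative assumptions within the data-sheet limits above.

```python
MAX_POWER_DENSITY_W_M2 = 40_000.0  # quoted maximum: 40 kW/m²

def coating_area_m2(target_power_w, power_density_w_m2):
    """Coating area required at a chosen surface power loading."""
    if power_density_w_m2 > MAX_POWER_DENSITY_W_M2:
        raise ValueError("exceeds the quoted maximum power density")
    return target_power_w / power_density_w_m2

# 200 W of infrared heating at a moderate, assumed loading of 400 W/m²
print(coating_area_m2(200.0, 400.0))  # 0.5 m² of coated surface
```

In practice the loading is chosen well below the maximum so that the surface temperature limit (here 70 °C in the simulations reported later) is respected.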

The areas of application for this heatable coating are shown in Figs. 69 and 70.

Fig. 69
figure 69

Application areas for “radheat” (blue colored) [76]

Fig. 70
figure 70

Application areas for “radheat” in detail [77]

Thermal Measurement and Simulation

Through a simulative design of the coating-based heating system, the energy saving potential can be estimated in advance. Therefore, a simplified 3D model of the vehicle cabin was created, including the in- and outflow ports of the cooling system (Fig. 71).

Fig. 71
figure 71

Simulation of the heatable surfaces [78]

The vehicle Computational Fluid Dynamics (CFD) model itself consists of around 1.1 million computation cells, with boundary layers resolved for all surfaces. A convection boundary condition was used for all cabin walls. The external heat transfer coefficients were defined as a function of the vehicle velocity. Material properties for the car body were defined in order to achieve a total thermal transmittance of 0.5 W/m2K. A Surface-to-Surface (STS) radiation model was used to calculate the radiation heat exchange between all surfaces of the model. For all flow inlets, velocity and temperature were defined. Turbulence was modelled using the k-omega SST model. On the driver’s seat, a human dummy model was seated to assess the skin temperatures and thermal comfort of the passenger. The internal heat gain of the dummy was taken into account using a heat flux boundary condition for the dummy surface with a total heat transfer rate of 100 W.
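The stated total thermal transmittance fixes the steady-state conduction loss through the cabin walls via Q = U·A·ΔT. The cabin surface area and the temperatures below are illustrative assumptions; only the U-value of 0.5 W/m²K comes from the simulation setup above.

```python
def cabin_heat_loss_w(u_w_m2k, area_m2, t_cabin_c, t_ambient_c):
    """Steady-state heat loss through the cabin walls: Q = U * A * dT [W]."""
    return u_w_m2k * area_m2 * (t_cabin_c - t_ambient_c)

# Assumed 12 m² of cabin surface, 22 °C inside, −7 °C outside
print(cabin_heat_loss_w(0.5, 12.0, 22.0, -7.0))  # 174.0 W
```

The wall loss alone is small; the kilowatt-scale heating demand measured on the dynamometer comes largely from warming up the thermal masses and the ventilation air, which this static formula does not capture.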

In a first step, a 3D CFD flow simulation of the conventional convective heating system was performed, to determine reference values for the energy consumption of the conventional system. For validating these simulations, measurements in the climatic wind tunnel at a temperature of −7 °C (19 °F) have been performed (see Fig. 72). Measurement data included electrical currents and voltages of the drive system and the auxiliary components as well as about 20 temperature values inside the vehicle cabin. The sensors were positioned according to VDA220.

Fig. 72
figure 72

Flow measurement of the test vehicle [79]

Fig. 73
figure 73

Simulation results driver’s comfort [80]

In a second step, the infrared heating elements were positioned in the vehicle simulation as depicted in Fig. 74 as purple (dark colored) elements. This CFD model was coupled to a 1D model of a controller for the infrared heating elements. The control concept takes into account the current ambient and cabin temperature as well as the vehicle velocity, and correction terms apply for each of these three variables. The higher the ambient or cabin temperature gets, the lower the temperature of the heating elements has to be to achieve the same thermal comfort in the cabin. Conversely, the higher the vehicle velocity gets, the higher the heat transfer coefficient to the ambient, and hence a higher temperature of the heating elements is demanded. Furthermore, maximum power and temperature values are defined for each heating element. A PI controller with anti-windup controls the desired temperature of the elements.
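
The controller concept described above can be sketched as follows. The gains, correction factors and base temperature are illustrative assumptions; only the structure (setpoint corrections for ambient temperature, cabin temperature and vehicle velocity, plus a PI loop with anti-windup and power/temperature limits) follows the text.

```python
# Minimal sketch of the heating-element controller described above:
# a PI controller with anti-windup whose temperature setpoint is
# corrected for ambient temperature, cabin temperature and vehicle
# velocity. All gains and correction factors are assumptions.

class HeaterPI:
    def __init__(self, kp=5.0, ki=0.5, p_max=200.0):
        self.kp, self.ki, self.p_max = kp, ki, p_max
        self.integral = 0.0

    def setpoint(self, t_ambient, t_cabin, v_kmh,
                 t_base=60.0, k_amb=0.5, k_cab=0.5, k_vel=0.05):
        # warmer ambient/cabin -> lower element temperature;
        # higher velocity -> more external loss -> higher setpoint
        t = t_base - k_amb * t_ambient - k_cab * t_cabin + k_vel * v_kmh
        return min(t, 70.0)        # surface temperature limit from the tests

    def update(self, t_set, t_element, dt):
        error = t_set - t_element
        p_unsat = self.kp * error + self.ki * (self.integral + error * dt)
        power = min(max(p_unsat, 0.0), self.p_max)
        if p_unsat == power:       # anti-windup: integrate only if unsaturated
            self.integral += error * dt
        return power

ctrl = HeaterPI()
t_set = ctrl.setpoint(t_ambient=-7.0, t_cabin=0.0, v_kmh=100.0)
power = ctrl.update(t_set, t_element=20.0, dt=1.0)
```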

Fig. 74
figure 74

Temperature distribution on the human dummy during heating with the infrared heating elements (indicated as purple elements) only [81]

The thermal comfort of the simulated driver dummy was determined and evaluated using comfort indicators gained by a human thermoregulation model.

In the first simulation, heating elements with a total maximum power of 200 W were installed and convective heating was deactivated. The power of the heating elements was increased until a surface temperature limit of 70 °C (158 °F) was reached. In Fig. 74 it can be seen that the skin temperature of the human dummy does not reach comfortable values between 25 °C (77 °F) and 36 °C (97 °F). The maximum value of the human dummy’s skin temperature is about 22 °C (71 °F), and a maximum cabin temperature of 2 °C (35 °F) was reached. This shows that infrared radiation heating alone is not enough to create a comfortable environment in the cabin, because the surrounding air in the cabin is not heated up sufficiently (compare Fig. 73).

In a second simulation, additional convective heating with a power of approximately 2 kW was switched on. An outlet temperature of about 55 °C (131 °F) was assumed. Figure 75 shows the skin temperature of the human dummy again, which is now closer to the comfort temperature. The cabin temperature is also higher, ranging between 10 °C (50 °F) in the foot space and 30 °C (86 °F) around the head (compare Fig. 76).

Fig. 75
figure 75

Temperature distribution on the human dummy and on a surface through the cabin during heating with the infrared heating elements and additional convective heating [82]

Fig. 76
figure 76

Simulation results driver’s comfort—optimized [83]

Validation Results

The simulation was validated by putting the EV, equipped with six infrared heating elements and numerous temperature, voltage and current sensors, in a climatic chamber. The ambient temperature in the climatic chamber was cooled down to a constant value of −7 °C (19 °F), and the vehicle was conditioned to this temperature to simulate warm-up of the vehicle during operation in cold weather conditions. During the first test phase, only the infrared heating elements with a total maximum power of 200 W were active, without additional convective heating. Test persons were asked to sit in the vehicle for an hour and assess their thermal comfort and sensation every 5 min. As the simulations had shown before, the body parts directly warmed by the heating elements were regarded as comfortable very quickly by all test persons. However, most of the test persons reported cold feet, because there are heating elements under the steering wheel pointing at the knees and thighs, but none near the feet. After about 30 min the cabin temperature had risen to only 2 °C (35 °F), because the infrared radiation of the heating elements warms up the human body directly, but not the surrounding air; this shows a good correlation with the simulated results. The heat-up of the cabin air happens mostly through the heat dissipated by the passenger. In a second test run, additional convective heating with a reduced power of around 2 kW was switched on. Here as well, body parts exposed directly to the heating elements were warmed up very quickly. Despite the additional convective heating, the feet needed nearly 30 min to be regarded as comfortable by most of the test persons.

The project team expects the heating system’s share of the range loss to be cut in half. The most important advantages are: the heating effect is available after 1 min; the system is quiet; the heat output can be focused locally; the vehicle weight is reduced; and the system can also be used in conventional and hybrid vehicles.

Thermal management systems pose a challenge:

Energy management has to be understood as a topic covering both vehicle and powertrain. A thermal management strategy has to be developed and implemented (ECU, VCU, BCU). Thermal management therefore cannot be treated in isolation, as it is a multi-disciplinary development task involving many departments. The increasing system complexity demands an early integration of simulation into the development process.

5 Simulation Tools—Overview of International Research Groups

Simulation has become an essential tool to engineers in the product development process. The challenge of reducing costs and time along the product development cycle creates a growing demand to replace physical prototypes with virtual prototypes applying frontloading.

The growing number of closely interacting powertrain components and control systems, together with the increasing complexity of these control functions, requires the analysis and testing of an extensive number of powertrain combinations.

In order to keep vehicle development costs low, while still being able to eliminate quality issues, for example due to overlooked test cases, test and calibration methods are being adjusted to incorporate simulation in all analysis, hardware test, function development and calibration tasks.

Ideally the same system and sub-system simulation models should be used for as long as possible in the whole vehicle development process (from concept to test), in order to guarantee consistency by generating comparable results in offline and real-time applications.

The following chapter includes several approaches to simulation by highlighting three examples of different simulation tools:

  • CRUISE by AVL List GmbH

  • Autonomie by Argonne National Laboratory

  • Dymola/Modelica by Austrian Institute of Technology

5.1 CRUISE—Vehicle System Simulation (by AVL)

This section was mainly provided by Mr. Engelbert Loibner from AVL List GmbH.

From concept studies to calibration and testing

AVL CRUISE [84] offers all required flexibility to build up a system model, whose fidelity may easily be adjusted to various application requirements through the entire powertrain development cycle. It supports everyday tasks in vehicle system and driveline analysis through all development phases, from concept planning and design in the office, to calibration and verification on hardware test systems. Starting with only a few input data in the early phases, the model maturity grows throughout the vehicle development process according to the continuously increasing simulation needs in xCU software and calibration development. CRUISE models improve the flexibility and the productivity of platforms that are implemented in Hardware in the Loop (HiL) procedures: AVL InMotion, IPG CarMaker, dSPACE, ETAS, National Instruments, Opal RT, as well as AVL PUMA Open testbeds. Model reuse in consecutive or iterative development approaches ensures consistent decision making processes and saves valuable engineering time, keeping the project focus on the target: optimizing vehicle fuel efficiency, emissions, driving performance and drivability.

Today’s vehicle powertrain concepts, electrified and conventional, are pushing the complexity of system simulation models to extremes. The highly adaptable system/sub-system structure of CRUISE allows easy changing of drive train concepts.

Vehicle hybridization and adapting the model to alternative application needs in different phases are carried out within minutes, saving time to focus on value adding engineering, calibration and testing tasks, without a need to look into mathematical equations or re-program simulation model code.

Solution oriented open concept

CRUISE is more than a vehicle simulation model. Streamlined workflows are realized for all kinds of parameter optimization, component matching and sub-system integration. CRUISE powertrain integration system simulation platform features a modular structure, with a wide range of interfaces to other simulation tools, like Matlab/Simulink, CarMaker, CarSim, etc., ready to use analysis tasks and data management capabilities.

AVL CRUISE Application Examples

Today software developers must save time and money while delivering increasingly complex controllers that function correctly in all applications. A key approach to address these challenges is so-called ‘frontloading’ or ‘model-based development’ (see Fig. 77). Here, to reduce software development effort and risk, it is necessary to start testing the software as soon as executable software parts are available, typically well before real hardware exists. Therefore realistic, computationally fast, flexible and affordable simulation plant models are required (see Fig. 78).

Fig. 77
figure 77

Model based development with Cruise [85]

Fig. 78
figure 78

Plant modeling for xCU software development [86]

AVL CRUISE has wide ranging capabilities and flexibility to support this advanced software development process. For example, CRUISE models can easily be compiled and used directly in the Matlab Simulink environment, a widely used software development environment. Furthermore, the modular architecture of a CRUISE powertrain model means that the CRUISE plant models can easily be adapted or improved to match the actual requirements of each software development status. For example, it is possible to simulate the whole powertrain, or just those sub-systems which are required to test the chosen software part.

For software testing, various test cases are defined, and it is therefore also necessary to re-parameterize parts of the AVL CRUISE vehicle model, for example alternative clutch characteristics to account for the aging and wear of a clutch. CRUISE offers a simple way to change predefined vehicle parameters without the need to recompile the model.

Depending on the software development process status, the testing environment is changed from Model in the Loop (MiL), to Software in the Loop (SiL) and to HiL. Essentially the same CRUISE powertrain model is used in all three environments; only two parts, the interface type and the compiler, need to be adapted to the chosen target environment.
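
The idea that the plant model stays identical across MiL, SiL and HiL, with only the coupling layer exchanged, can be illustrated with a small sketch. This is not AVL CRUISE code; all class names and the toy dynamics are assumptions for illustration.

```python
# Illustrative sketch: one plant model reused across test environments,
# with only the coupling/interface layer swapped out.

from abc import ABC, abstractmethod

class PlantModel:
    """Single powertrain plant model reused in every environment."""
    def __init__(self):
        self.speed = 0.0
    def step(self, torque: float, dt: float) -> float:
        self.speed += torque * dt * 0.1   # toy longitudinal dynamics
        return self.speed

class Coupling(ABC):
    """Only this layer changes between MiL, SiL and HiL."""
    @abstractmethod
    def exchange(self, plant: PlantModel, torque: float, dt: float) -> float: ...

class MilCoupling(Coupling):
    def exchange(self, plant, torque, dt):
        return plant.step(torque, dt)      # direct in-process call

class HilCoupling(Coupling):
    def exchange(self, plant, torque, dt):
        # a real HiL setup would route this through the real-time I/O bus
        return plant.step(torque, dt)

def run(coupling: Coupling, steps: int = 10) -> float:
    plant = PlantModel()
    speed = 0.0
    for _ in range(steps):
        speed = coupling.exchange(plant, torque=50.0, dt=0.01)
    return speed
```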

HEV powertrain modeling and validation

With AVL CRUISE it is possible to generate very detailed and advanced models of HEVs. If those models are also validated against measurements, the result is a very detailed reflection of the real vehicle behavior with respect to fuel consumption, CO2 emissions, internal energy flows and vehicle performance.

CRUISE simulation models allow very detailed analysis of each powertrain component. This approach is very useful to optimize the complete system by virtually optimizing single components. The effect of a modified or changed powertrain component (e.g. electrical machine) can be simulated in CRUISE without time consuming adaptations to the powertrain model: CRUISE offers the opportunity to investigate parameter, component or even system variations to provide a detailed sensitivity analysis.

Besides component optimization, a second major benefit of highly sophisticated simulation models of HEVs is the possibility to virtually generate detailed requirements and load profiles for each component in the system. Such an output is fundamental for successful and efficient component design, especially for novel and hence less familiar powertrains. By simulating various driving cycles (real world and legislated), the CRUISE model delivers detailed information regarding the maximum load (mechanical or electrical), from which the expected life-time of various components can be derived.

AVL CRUISE can be used as a standalone model, but it also provides the opportunity to be used as a development platform since it can easily be connected with various other tools. For example, using the direct connection between AVL CRUISE and AVL Concerto tools, it is readily possible to handle extremely large simulation result data, filtering out superfluous information to display only the relevant parts.

The architecture of advanced HEV models contains, besides the CRUISE powertrain and vehicle, highly complex xCU control logic, typically modeled in Matlab Simulink. Both parts are easily connected via a standard interface defined in AVL CRUISE. This control logic reflects exactly the control algorithm re-engineered from the measurements and used for CRUISE model validation. To allow further optimization of the model parameters and to better calibrate the control logic, well-known ‘Design of Experiments’ techniques can be applied virtually through a coupling with AVL Cameo.

5.2 Autonomie (By Argonne National Laboratory)

This section was mainly provided by Mr. Aymeric Rousseau from ANL.

Many of today’s automotive control-system simulation tools are suitable for simulation, but they can provide rather limited support for model building and management. Setting up a simulation model requires more than writing down state equations and running them on a computer. Detailed knowledge of vehicle performance, controls, architecture configuration, component technology, system integration, fuel economy benefits and cost is required.

With the introduction of electric-drive vehicles, the number of components that can populate a vehicle has increased considerably, and more components translate into more possible drive train configurations. In addition, building hardware is expensive. Traditional design paradigms in the automotive industry often delay control-system design until late in the process—in some cases requiring several costly hardware iterations. To reduce costs and improve time to market, it is imperative that greater emphasis has to be placed on modeling and simulation. This only becomes truer as time goes on because of the increasing complexity of vehicles and the greater number of vehicle configurations.

Because of the large number of possible advanced vehicle architectures and time and cost constraints, it is impossible to manually build every powertrain configuration model. As a result, processes have to be automated.

Autonomie (Argonne National Laboratory 2011a; Rousseau, n.d.) is a MATLAB©-based software environment and framework for automotive control-system design, simulation, and analysis. The tool is designed for rapid and easy integration of models with varying levels of detail (low to high fidelity) and abstraction (from subsystems to systems and entire architectures), as well as processes (calibration, validation, etc.). Developed by Argonne in collaboration with General Motors, Autonomie was designed to serve as a single tool that can be used to meet the requirements of automotive engineering throughout the development process from modeling to control. Autonomie was built to accomplish the following:

  • support many methods, from model-in-the-loop, software-in-the-loop, and hardware-in-the-loop to rapid-control-prototyping,

  • integrate math-based engineering activities through all stages of development, from feasibility studies to production release,

  • promote re-use and exchange of models industry-wide through its modeling architecture and framework,

  • support users’ customization of the entire software package, including system architecture, processes, and post-processing,

  • mix and match models of different levels of abstraction for execution efficiency with higher-fidelity models where analysis and high-detail understanding is critical,

  • link with commercial off-the-shelf software applications, including GT-Power©, AMESim©, and CarSim©, for detailed, physically-based models,

  • provide configuration and database management and

  • protect proprietary models and processes.

By building models automatically, Autonomie allows the simulation of a very large number of component technologies and powertrain configurations. Autonomie can:

  • simulate subsystems, systems, or entire vehicles,

  • predict and analyze fuel efficiency and performance,

  • perform analyses and tests for virtual calibration, verification, and validation of hardware models and algorithms,

  • support system hardware and software requirements,

  • link to optimization algorithms and

  • supply libraries of models for propulsion architectures of conventional powertrains as well as electric-drive vehicles.
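
The automated model building described above can be illustrated generically: powertrain configurations are enumerated from component choices instead of each model being assembled by hand. The component lists and the validity rule below are illustrative assumptions, not the Autonomie API.

```python
# Generic sketch of automated configuration enumeration for powertrain
# model building. Names and rules are illustrative assumptions.

from itertools import product

ARCHITECTURES = ["conventional", "parallel HEV", "series HEV", "power-split HEV"]
ENGINES = ["1.4L SI", "2.0L SI", "1.6L CI"]
MOTORS = [None, "30 kW PM", "80 kW PM"]

def valid(arch, engine, motor):
    # a conventional powertrain has no traction motor; hybrids need one
    return (motor is None) == (arch == "conventional")

configs = [(a, e, m) for a, e, m in product(ARCHITECTURES, ENGINES, MOTORS)
           if valid(a, e, m)]
```

Each valid tuple would then be handed to a model assembly step, so adding one more component variant multiplies the configuration space without any manual modeling effort.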

Autonomie is used to assess the performance, fuel consumption and cost of advanced powertrain technologies. Autonomie has been validated for several powertrain configurations and vehicle classes using Argonne’s Advanced Powertrain Research Facility (APRF) vehicle test data, among other data sources.

With more than 400 different pre-defined powertrain configurations, Autonomie is an ideal tool to analyze the advantages and drawbacks of the different options within each family, including conventional, parallel, series, and power-split (P)HEVs. Various approaches have been used in previous studies to compare options ranging from global optimization to rule-based control.

Autonomie also allows users to evaluate the impact of component sizing on fuel consumption for different powertrain technologies as well as to define the component requirements (e.g., power, energy) to maximize fuel displacement for a specific application. To properly evaluate any powertrain-configuration or component-sizing impact, the vehicle-level control is critical, especially for electric drives. Argonne has extensive expertise in developing vehicle-level controls based on different approaches, from global optimization to instantaneous optimization, rule-based optimization and heuristic optimization.

The ability to simulate a large number of powertrain configurations, component technologies, and vehicle-level controls over numerous drive cycles has been used to support many U.S. DoE as well as manufacturers’ studies, focusing on fuel efficiency, cost-benefit analysis, or greenhouse gases. All the development performed in simulation can then be implemented in hardware to take into account non-modeled parameters such as emissions and temperature.

5.3 Dymola/Modelica (By Austrian Institute of Technology—AIT)

This section was provided by Mr. Dragan Simic and Thomas Bäuml from AIT.

Simulation is state of the art in automotive development but up to now the simulation efforts mainly focused on the analysis and optimization of individual components. For realization of optimized and efficient vehicles the optimum tuning of the different powertrain components is essential, but especially the high complexity of alternative vehicle powertrains requires the balancing of different components, like internal combustion engine, electric machine, and electric energy storage systems. Based on that, new simulation and testing tools on systems level are necessary to analyze and optimize the complex system “vehicle” as a whole and to realize innovative and competitive vehicles.

This development environment on systems level has to meet different requirements, such as interdisciplinarity, flexibility and real-time capability. Especially in the context of alternative drive train vehicles the complexity of the powertrain increases due to the complex interaction of the ICE, the e-machine, the inverters, the energy storage system, electric auxiliaries, etc. For simulation and optimization of such complex systems the development tools have to provide the possibility to combine mechanical, electric, thermal, etc. models. For that purpose the AIT has set up an advanced simulation and development environment based on Dymola/Modelica. Several Modelica libraries have been developed covering the mechanical, electrical and thermal behavior of components. The libraries have been realized in such a way that the entire vehicle can be modeled in an easy and flexible manner and different vehicle concepts can be compared.

Furthermore, looking at the vehicle concepts of today a broad spectrum of different drive technologies, vehicle and fuel concepts is considered and developed. The efficient analysis, comparison and optimization of different vehicle concepts require flexible development tools that support modularity and fast exchangeability of parts of the vehicle models.

Finally today’s development environment has to cover the entire development process—from the concept and design phase up to the testing phase. The support of the entire development chain by just one simulation tool highly improves the usability and efficiency. For that purpose the efforts necessary to integrate the simulation and the test environment as well as the availability of real-time models define the applicability of a simulation tool.

In addition, to support not only the design phase but the entire development process, a Hardware in the Loop test environment has been established. The Modelica simulation environment can be coupled via in-house developed interfaces to a high performance energy storage system as well as to an electric drive test bed. In this way a virtual development environment has been established that allows an efficient design of EV components and their controls, as well as their testing and optimization, without time- and cost-intensive vehicle integration and tests.

At the AIT for modeling, the object oriented simulation language Modelica [87] in the modeling and simulation environment Dymola [88] is used. Modelica (see Fig. 79) is an object oriented, multi domain modeling language for component oriented modeling of complex physical systems. Systems in Modelica are modeled in a way an engineer builds a real system. Each component is physically described by algebraic and ordinary differential equations with respect to time. Systems containing mechanical, electrical, electronic, hydraulic, thermal, control, electric power or process-oriented subcomponents can be simulated simultaneously and interactions analyzed. In Modelica there is no predefined causality. The modeling and simulation environment Dymola decides by using complex algorithms which variables are known input or unknown output variables and how the equation system has to be solved in the most efficient way. The structure of the simulation model defines the set of equations and hence the variables to be solved for. Symbolical transformations and optimizations of the equation system drastically reduce simulation time. Each component in Modelica is handled as an object which can inherit equations and parameters from other classes. Each object representing a physical component has interfaces. With these interfaces, components are able to interact with other objects having the same type of interface.

Fig. 79
figure 79

Screenshot from the simulation tool ‘Modelica’ [89]
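
The acausal principle described above can be illustrated with a minimal sketch: the component equation is written once as a residual, and a generic solver (a Newton iteration here, standing in for Dymola’s symbolic machinery) solves it for whichever variable happens to be unknown. All values are illustrative.

```python
# Sketch of acausal (causality-free) modeling: the same residual
# equation is solved for different unknowns without being rewritten.

def newton(residual, x0, tol=1e-10):
    """Solve residual(x) = 0 with a simple Newton iteration."""
    x = x0
    for _ in range(100):
        r = residual(x)
        if abs(r) < tol:
            return x
        h = 1e-6
        dr = (residual(x + h) - r) / h   # numerical derivative
        x -= r / dr
    return x

R = 2.0                                   # ohm, illustrative value
ohm = lambda v, i: v - R * i              # acausal residual: v - R*i = 0

# Solve the SAME equation for the current given the voltage...
i = newton(lambda i: ohm(12.0, i), x0=1.0)    # -> 6.0 A

# ...or for the voltage given the current, without rewriting the model
v = newton(lambda v: ohm(v, 3.0), x0=1.0)     # -> 6.0 V
```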

In a typical vehicle simulation model all desired components can be integrated into the overall system. At the AIT, primarily subsystems and systems up to the entire vehicle are modeled. The main focus is on the design and sizing of drive components such as the e-machine, battery and power electronics. Other tasks include the prediction and investigation of driving range, efficiency and the thermal behavior of components. The libraries used in these simulations have all been developed at AIT. These include libraries for analyzing the electrical behavior of controlled e-machines as well as their thermal characteristics. Libraries for electric energy storage systems and power electronic devices are also used. These can be combined in vehicle system models for longitudinal and lateral dynamics. Modeling the cooling system of a vehicle connects all components thermally. For optimizing the air flow in air conditioning systems, aeroacoustic 1D flow models are available. Most components are modeled at more than one level of abstraction, each focusing on a different purpose. While the full physical model of an e-machine includes switching effects and electrical transients, the abstraction is a quasi-stationary machine without electrical transients, which is suitable for most issues regarding energy consumption simulations and driving range analyses. By choosing a model with the right level of detail for the right purpose, simulation time can be drastically reduced.
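
The two abstraction levels mentioned above for an e-machine can be sketched as a transient first-order electrical model versus a quasi-stationary evaluation. The machine parameters below are illustrative assumptions.

```python
# Two abstraction levels of a toy PM machine electrical model.
# Parameters are illustrative, not data from the text.

R, L, K = 0.05, 0.001, 0.1     # ohm, H, Nm/A

def transient_current(v, omega, i0, dt, steps):
    """Full model: integrate L*di/dt = v - R*i - K*omega explicitly."""
    i = i0
    for _ in range(steps):
        i += dt * (v - R * i - K * omega) / L
    return i

def quasi_stationary_current(v, omega):
    """Abstraction: drop the electrical transient (di/dt = 0)."""
    return (v - K * omega) / R

# After the electrical time constant (~L/R = 20 ms) has passed,
# both levels agree, but the map model is a single evaluation.
i_full = transient_current(v=12.0, omega=50.0, i0=0.0, dt=1e-5, steps=20000)
i_qs = quasi_stationary_current(v=12.0, omega=50.0)
```

For energy-consumption and driving-range studies the quasi-stationary form is usually sufficient, which is exactly why dropping the transient level of detail reduces simulation time so sharply.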

Simple control systems like basic operating strategies for EVs or HEVs can also be modeled in Modelica. For more detailed strategies or plant control systems MatLab is the tool of choice. Control systems developed in MatLab can be easily used in Dymola and vice versa. This makes it easy to couple the strengths of these tools and gain a maximum output.

6 Lightweight as Overall Method for Optimization

Lightweighting as an overall optimization method has the potential to increase energy efficiency, and it also has an impact on safety and on the end-of-life vehicle.

The lightweighting benefits on fuel/energy consumption depend on the driving type:

  • in city-type and aggressive driving, lightweighting any vehicle type will reduce the energy/fuel consumption,

  • in highway-type driving, lightweighting does not significantly reduce the energy/fuel consumption.

Lightweighting saves the most fuel/energy in a conventional vehicle, due to its comparatively low powertrain efficiency.

Lightweighting a car can be achieved by:

  • the intense use of simulation tools,

  • the use of advanced materials (e.g. sandwich materials),

  • the implementation of bionic concepts and by use of

  • functional integration.

The main way of achieving the reductions in the consumption and emission values of passenger cars is by improving the efficiency of these vehicles. As well as improving aerodynamics and the powertrain, increasing efficiency is possible through lightweight design, since a large proportion of fuel consumption is generated by the vehicle mass.

The share of consumption due to aerodynamic drag and idling is around 30–40 % of total consumption, whereas the mass-dependent consumption of a vehicle due to rolling resistance and acceleration accounts for up to 70 % of total consumption. These shares are largely irrespective of the total vehicle mass. It is therefore evident that by saving mass, a large proportion of consumption can be influenced. In vehicles with an alternative drive, a reduction in the mass of the body-in-white structure is of considerable importance, because electrical energy storage systems are heavy.
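
To make these shares concrete, the tractive force can be split into a mass-independent aerodynamic part and mass-dependent rolling and acceleration parts. All parameter values below are illustrative assumptions, not figures from the text.

```python
# Illustrative decomposition of the tractive force into mass-independent
# and mass-dependent parts. Vehicle parameters are assumptions.

G = 9.81            # m/s^2
RHO = 1.2           # kg/m^3, air density

def tractive_force(mass, v, accel, cd_a=0.7, c_rr=0.01):
    """Return (aerodynamic force, mass-dependent force) in N."""
    f_aero = 0.5 * RHO * cd_a * v ** 2        # independent of mass
    f_roll = c_rr * mass * G                  # proportional to mass
    f_acc = mass * accel                      # proportional to mass
    return f_aero, f_roll + f_acc

# Urban-style operating point: 40 km/h, moderate acceleration
f_aero, f_mass = tractive_force(mass=1500.0, v=40 / 3.6, accel=0.8)
mass_share = f_mass / (f_aero + f_mass)
```

At such low-speed, accelerating operating points the mass-dependent share dominates, which is why lightweighting pays off most in city-type driving; at constant highway speed the aerodynamic term grows with v² while the acceleration term vanishes.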

There are various lightweight design strategies for reducing vehicle mass: conditional, conceptual, material and form lightweight construction [90].

6.1 Vehicle Mass Impact on Efficiency and Fuel Economy

This section was mainly provided by Mr. Michael Duoba from ANL.

The results of a study of the “Vehicle Mass Impact on Efficiency and Fuel Economy”, conducted by ANL, have been presented during a Task 17 workshop in Vienna (2012).

It is widely accepted that increased vehicle mass adversely affects vehicle fuel economy. A heavier vehicle consumes additional energy to accelerate and experiences increased rolling drag; it therefore requires more energy to propel. From several literature references of U.S. EPA fuel economy labels of vehicles produced in the past 10 years, a clear trend can be seen showing that vehicle mass directly impacts overall fuel economy for light-duty vehicles. Despite the clear trend, the magnitude of this mass impact on fuel consumption varies significantly between references. The results of the previous studies show that a decrease in mass of 114 kg (250 lb) results in an improvement in fuel economy of 0.53–1.6 mpg for ICE technology.

Industry assumes that a 1.0 % change in vehicle mass is equal to a 1.0 % change in vehicle road load.

This study, initiated by the U.S. DoE’s Office of Energy Efficiency & Renewable Energy, conducted coastdown testing and chassis dynamometer testing of three vehicles, each at multiple test weights, in an effort to determine the impact of a vehicle’s mass on road load force and energy consumption. The testing and analysis also investigated the sensitivity of the vehicle’s powertrain architecture on the magnitude of the impact of vehicle mass.

Three vehicles were selected for testing. To accomplish the objectives, a BEV, an HEV and an ICE vehicle were chosen (Fig. 80):

  • Nissan Leaf (2011),

  • Ford Fusion Hybrid (2012),

  • Ford Fusion ICE V6 (2012)

Fig. 80
figure 80

Data from vehicles tested in the study [91]

Testing methodology

Testing included a collaborative study with two parts:

  • coastdown testing,

  • chassis dynamometer testing

Part 1: Coastdown study (test track): coastdown testing was conducted on a closed test track (Phoenix area) to determine the drag forces and road load at each test weight for each vehicle. The track consisted of a 3.2 km (2 mi.) straightaway. Many quality measures were used to ensure that only mass variations affect the road load measurements. For each vehicle, at each test weight, a minimum of 14 coastdown tests was conducted to reduce sensitivity to external variables: 7 tests in each direction to nullify any track grade variability.

Acceptable testing conditions for wind, ambient temperature, and humidity limits were strictly adhered to per the SAE J1263 standard. The test weights chosen for coastdown testing included weights heavier and lighter than the U.S. EPA certification test weight. The EPA certification weight was curb weight plus an additional 150 kg (332 lb), which included the driver and typical cargo or luggage.

To reduce testing variability, the vehicle was warmed up for 30 min prior to testing, the ride height was held to a small tolerance at the various vehicle test weights, and temperatures were monitored and recorded to ensure the vehicle was functioning at steady-state operating conditions. Table 5 shows the test weights used for the three vehicles for coastdown testing.

Table 5 Vehicle test weights utilized for coastdown testing

Part 2: Dynamometer testing for fuel and electric consumption at Argonne with road load from coast down testing.

Chassis dynamometer testing was conducted over standard drive cycles UDDS, HWFET and US06 on each vehicle at multiple test weights to determine the fuel consumption or electrical energy consumption impact caused by change in vehicle mass. A chassis dynamometer provides a very accurate and repeatable means of measuring energy consumption. To reduce testing variability prior to the on-dynamometer coastdown and vehicle loss determination, each vehicle was warmed up per dynamometer test procedures. For each vehicle, the same sensors and sensor positioning, but also the same temperatures used during the coastdown testing were also used in the dynamometer testing.

The road load measurements obtained from the coastdown testing were used to configure the chassis dynamometer. Chassis dynamometer testing also incorporated many quality controls to ensure accurate results.

Table 6 shows the test weights used for the chassis dynamometer testing.

Table 6 Vehicle test weights for dynamometer testing

Testing results and analysis

Coastdown testing: the results shown in Fig. 81 are the average of the 14 coastdown tests at each test weight for each vehicle. Note the progression of increasing coast time for increasing test weight. Two opposing factors were in effect.

Fig. 81
figure 81

Coastdown speeds for the 3 vehicles at each of the five test weights [92]

With increasing mass, the vehicle inertia increased, which increased the coastdown time; however, the rolling resistance forces also increased, which decreased the coastdown time. Because the overall coastdown times slightly increased, the vehicle's momentum had a larger impact on the coastdown time than the rolling resistance. Thus, decreasing vehicle mass results in a slightly nonlinear decrease in vehicle drag; slight differences in trends from vehicle to vehicle are likely due to tire technology, not powertrain technology.
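The two opposing effects can be illustrated with a small simulation that integrates the coastdown equation m·dv/dt = −(rolling + speed-dependent drag). The road-load coefficients below are generic assumptions, not the study's measured values:

```python
# Illustrative coastdown simulation: heavier vehicles coast longer because
# the aerodynamic share of the drag decelerates a larger mass more slowly,
# even though rolling resistance grows with mass.
def coastdown_time(mass_kg, v0_kmh=105.0, v1_kmh=20.0, dt=0.01):
    g, crr = 9.81, 0.009      # assumed rolling-resistance coefficient
    b, c = 1.0, 0.38          # assumed speed terms [N/(m/s)], [N/(m/s)^2]
    v = v0_kmh / 3.6
    t = 0.0
    while v > v1_kmh / 3.6:
        drag = crr * mass_kg * g + b * v + c * v * v   # total road load [N]
        v -= drag / mass_kg * dt                        # decel = force / mass
        t += dt
    return t

t_light = coastdown_time(1400.0)
t_heavy = coastdown_time(1700.0)
print(round(t_light, 1), round(t_heavy, 1))  # heavier vehicle coasts longer
```

The simulation reproduces the qualitative finding above: coastdown time increases with test weight even though the absolute drag force also increases.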

To summarize the coastdown testing: the impact of vehicle mass on vehicle road load and drag losses was determined. Coastdown testing was conducted on three vehicles (BEV, HEV, ICE) at five test weights each. The analysis of the coastdown data provided road load data to enable accurate chassis dynamometer testing.

The mass impact on vehicle road load showed a slightly nonlinear trend of decreasing vehicle drag with decreasing vehicle mass; slight differences in trends from vehicle to vehicle are likely due to tire technology, not powertrain technology.

Dynamometer testing: each vehicle was tested continuously at its different test weights on the same chassis dynamometer. The target coefficients (A, B, and C) utilized for the dynamometer testing were directly derived from the coastdown testing and analysis described above. This was accomplished by curve fitting a three-term equation to each of the five vehicle road load curves for each vehicle (as shown in Fig. 81).
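The three-term curve fit described above can be sketched as a least-squares fit of F = A + Bv + Cv². The sample points below are synthetic, for illustration only:

```python
# Sketch of deriving the target coefficients (A, B, C) from coastdown-derived
# road-load samples; the "measurements" are generated, not the study's data.
import numpy as np

v = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float) / 3.6   # speed [m/s]
f = 130.0 + 1.2 * v + 0.40 * v**2                               # road load [N]
f += np.array([2.0, -1.5, 1.0, -0.5, 0.8, -1.2, 0.3])           # noise [N]

# polyfit returns the highest-order coefficient first: [C, B, A]
C, B, A = np.polyfit(v, f, 2)
print(f"A={A:.1f} N  B={B:.2f} N/(m/s)  C={C:.3f} N/(m/s)^2")
```

The fitted A, B and C would then be entered as the dynamometer's target coefficients.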

One test weight category was tested per day per vehicle. Each test weight category was tested at least three times to establish a confidence interval in the fuel and energy consumption results from the chassis dynamometer testing. The test cycles used are U.S. certification cycles that represent different driving patterns. The UDDS represents city-type driving, the HWFET represents highway-type driving, and the US06 represents aggressive and higher speed driving (as shown in Fig. 82).

Fig. 82
figure 82

Average speed/acceleration distribution of different driving cycles [93]

The fuel was measured using a direct fuel flow meter in line with the vehicle fuel pump and the fuel rail at the engine. A Hioki power analyzer was used to measure the DC power and net DC energy in and out of the high voltage battery pack for the EV and the HEV. The power analyzer measurements on the HEV were used to verify that the tests were in charge-sustaining mode.

Raw chassis dynamometer test results: Figs. 83, 84 and 85 present the average fuel consumption as a function of vehicle test weight. Each average fuel consumption test result was framed by a 95 % confidence interval.
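The 95 % confidence intervals follow from the repeated tests (at least three per test weight). A minimal sketch with invented fuel-consumption repeats and a hard-coded Student-t critical value:

```python
# Illustrative 95 % confidence interval for n = 3 dynamometer repeats.
# The measurements below are made up; t_crit is the two-sided 95 % critical
# value for df = 2, hard-coded to avoid a scipy dependency.
import math

results_l_per_100km = [7.82, 7.91, 7.86]   # assumed repeat measurements
n = len(results_l_per_100km)
mean = sum(results_l_per_100km) / n
s = math.sqrt(sum((x - mean) ** 2 for x in results_l_per_100km) / (n - 1))
t_crit = 4.303                              # Student-t, two-sided 95 %, df = 2
half_width = t_crit * s / math.sqrt(n)
print(f"{mean:.2f} ± {half_width:.2f} l/100 km")
```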

Fig. 83
figure 83

Fuel consumption of the Ford Fusion V6 (ICE) [94]

Fig. 84
figure 84

Fuel consumption (Ford Fusion HEV) [95]

Fig. 85
figure 85

Fuel consumption of the Leaf (BEV) [96]

The data shows that for all vehicles, fuel consumption increased noticeably on the UDDS and US06 test cycles, which contained higher average accelerations as shown in Fig. 82. Fuel consumption on the HWFET seemed relatively unaffected by the weight change compared to the other drive cycles.

Energy consumption change in terms of mass change

To compare the results from the three vehicles, percent change in energy consumption over percent change in vehicle mass was chosen as the metric, because fuel consumption (l/100 km) and electrical energy consumption (Wh/mi) were not readily comparable. Additionally, the absolute energy consumption savings is represented in liter of gasoline equivalent, which is calculated for the EV based on the AC energy consumption and the energy content of gasoline.
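The metric and the gasoline-equivalent conversion might look like the following sketch; the consumption figures and the assumed 8.9 kWh/l energy content of gasoline are illustrative, not the study's values:

```python
# Percent change in energy consumption per percent change in vehicle mass:
# a dimensionless ratio, comparable across ICE, HEV and BEV powertrains.
KWH_PER_LITRE_GASOLINE = 8.9   # assumed lower-heating-value figure

def pct_change_ratio(e_base, e_new, m_base, m_new):
    """(%ΔE) / (%Δm), using the base vehicle as reference."""
    return ((e_new - e_base) / e_base) / ((m_new - m_base) / m_base)

# Hypothetical BEV: 160 Wh/km at 1525 kg, 155 Wh/km after a 10 % mass cut
ratio = pct_change_ratio(160.0, 155.0, 1525.0, 1372.5)

# AC energy converted to litres of gasoline equivalent per 100 km
litres_equiv_per_100km = 160.0 * 100 / 1000 / KWH_PER_LITRE_GASOLINE
print(round(ratio, 2), round(litres_equiv_per_100km, 2))
```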

Figures 86, 87 and 88 show the energy consumption rate of change and the absolute fuel savings as a function of rate of vehicle mass change.

Fig. 86
figure 86

Energy consumption rate of change and the absolute fuel savings as a function of rate of vehicle mass change—UDDS [97]

Fig. 87
figure 87

Energy consumption rate of change and the absolute fuel savings as a function of rate of vehicle mass change—HWY [98]

Fig. 88
figure 88

Mass impact on aggressive driving—US06 [99]

Figure 86 illustrates (note: linearizing loses some of the detail. Mass gain and mass reduction seem to have different slopes):

  • a lighter vehicle requires less power on average to complete a drive cycle; thus the load demand on the powertrain is lower, which translates into a lower powertrain efficiency, and

  • in city driving the average loads on the powertrain are relatively low; therefore an increase in load can significantly improve powertrain efficiency. In this graph it may appear that lightweighting a BEV yields the best return in energy consumption, but the absolute fuel savings as a function of mass change tell a different story (see Fig. 87).

Figure 87 illustrates (note: linearizing loses some of the detail. Mass gain and mass reduction seem to have different slopes):

  • the energy savings from lightweighting a vehicle on the highway cycle are relatively low. Reducing the road load through better tire technology and better aerodynamics would significantly reduce the energy consumption on the highway.

Figure 88 illustrates (note: linearizing loses some of the detail. Mass gain and mass reduction seem to have different slopes):

  • the largest proportional energy change occurred in the city and during aggressive-type driving. In these cycles, where the vehicle accelerated often, the vehicle mass had a direct impact on the inertia energy required to move the vehicle forward. Because the inertial force required to accelerate a vehicle is the product of mass and acceleration, any mass change has a direct and proportional impact on the energy required to accelerate the vehicle. This effect was displayed in the data for the cycles dominated by acceleration, and

  • the highway cycle energy required to move the vehicle was dominated by the road load because the vehicle was cruising at relatively steady speeds. In the energy consumption rate change graphs, all of the vehicles are clustered relatively closely; the EV might experience the largest range increase on a full battery per mass saved. In the absolute energy or fuel savings graphs, lightweighting conventional vehicles provides the largest fuel savings per mass saved, because the conventional vehicles have the lowest vehicle efficiency.
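The proportionality argument can be sketched numerically by summing the positive inertial power m·a·v over a speed trace. The stop-and-go trace below is invented; only the mass scaling matters:

```python
# Minimal sketch: inertial traction energy over a drive cycle scales
# linearly with vehicle mass, so a 10 % mass cut removes 10 % of it.
def inertia_energy_j(mass_kg, speeds_mps, dt=1.0):
    energy = 0.0
    for v0, v1 in zip(speeds_mps, speeds_mps[1:]):
        a = (v1 - v0) / dt
        p = mass_kg * a * (v0 + v1) / 2.0   # inertial power at mid-speed [W]
        if p > 0:                            # count traction only, no regen
            energy += p * dt
    return energy

trace = [0, 3, 7, 12, 15, 15, 12, 8, 3, 0]   # m/s, stop-and-go style
e1 = inertia_energy_j(1400.0, trace)
e2 = inertia_energy_j(1260.0, trace)          # 10 % lighter
print(round(1 - e2 / e1, 3))
```

On a highway cycle, by contrast, accelerations are small, so this mass-proportional term is dwarfed by the road-load term.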

Summary and conclusion:

This study investigated and quantified the impact of increased vehicle mass and vehicle lightweighting on the road load force and energy consumption of three vehicles of varying powertrain architecture: a Ford Fusion V6 (ICE), a Ford Fusion Hybrid (HEV), and a Nissan Leaf (BEV). Each vehicle was tested at multiple test weights lighter and heavier than the EPA certification test weight.

Coastdown testing and analysis was conducted to measure the impact of mass on vehicle road load. For all three vehicles, a slightly non-linear trend of decreasing road load with decreasing vehicle mass was measured. This trend appears to be consistent across vehicle powertrain architectures (conventional powertrain, HEV, or BEV).

Chassis dynamometer testing of fuel consumption or electrical energy consumption showed that in city-type driving and aggressive-type driving, a 10 % mass reduction can result in a 3–4 % energy consumption reduction for conventional ICE vehicles, HEVs, and BEVs.

The energy consumption benefit appeared to be linked to the reduction in inertia energy required to accelerate the vehicle. Vehicle mass change did not appear to have a large impact on energy consumption in highway-type driving. The largest absolute fuel savings can be achieved by mass reduction in a conventional vehicle, because its powertrain efficiency was the lowest of the three vehicles tested in this study and it therefore had the largest overall energy consumption impact.

Vehicle mass significantly impacted energy consumption during stop and go driving [such as city driving (compare Table 7)]. Conversely, highway driving proved to have little impact from vehicle mass on energy consumption. The results of this study were specific for the three vehicles tested (e.g., the 2012 Ford Fusion V6 ICE, 2012 Ford Fusion Hybrid HEV, and 2011 Nissan Leaf BEV). Though some general conclusions can be drawn from these results, they do not dictate the results for other makes and models of ICE vehicles, HEVs, and BEVs [100].

Table 7 Results of percent change in energy per percent change in vehicle mass


6.2 Functional and Innovative Lightweight Concepts and Materials for xEVs

According to a McKinsey study [101] (2014), the proportion of high-tensile steels, aluminum and CFRP in vehicles is set to increase from 30 % today to up to 70 % in 2030. High-tensile steel will remain the most important lightweight material (market share: 15–40 %), and CFRP is expected to experience annual growth of 20 %.

Following the worldwide trend of addressing the topic of lightweighting, Task 17 discussed and analyzed this topic in a workshop in 2014.

The workshop (2014, Schaffhausen, Switzerland) was hosted by Georg Fischer Automotive AG, a Swiss company specialized in lightweight design and well known in the automotive sector for its pioneering in-house materials, bionic design, and optimized manufacturing technologies. The roughly 30 participants included experts from industry and research institutes as well as representatives from government. The workshop enabled good talks and fruitful discussions between experts from different working fields and technologies.

The workshop focused on Functional and Innovative Lightweight Concepts and Materials for xEVs and covered four sessions:

  • lightweight activities in Switzerland (hosting country),

  • lightweight materials and components,

  • simulation and

  • functional and innovative concepts and solutions.

The aim of the workshop was an information exchange to identify potential improvements in lightweighting xEVs, giving an update on available lightweight materials and prognoses about future materials. Representatives for all kinds of materials were invited to share their views. The workshop pointed out that at the moment there is no single ultimate lightweight material. Rather, the future of lightweight materials will be a mixture of the best materials available on the market, combining their benefits: the right material in the right place.

Furthermore, the participants had the opportunity to join guided tours of:

  • Automotive- and R&D-center of Georg Fischer,

  • Iron Library: the library's books and periodicals offer an in-depth perspective like no other in the world. The collection comprises over 40,000 publications dealing with the topic of iron, including classics by masters such as Isaac Newton as well as specialized modern literature.

To convey the bandwidth of lightweight materials and solutions covered during the Task 17 workshop, the following section highlights the most important of them: bionic design, materials and functional integration.

Bionic

Nature has provided ideas for high-strength materials, dirt-repellent coatings and even Velcro fastenings. This has led to the development of bionic car components, or even full cars like the Mercedes-Benz bionic car. The Alfred Wegener Institute designs and constructs vehicle components with intelligent lightweight construction using ELiSE (Evolutionary Light Structure Engineering), a new, versatile tool for lightweight optimization. After the technical specification and objectives of a component are defined, the optimization proceeds in five steps: screening for biological archetypes within 90,000 structures → structure assessment (natural structures are analyzed according to technical boundary conditions) → abstraction and functional transfer of natural structures to a CAD model → application of parametric and evolutionary optimization (FEA optimization with focus on manufacturability and cost performance) → assembly.

Figure 89 shows the ELiSE process using the example of a b-pillar. A weight reduction of 34 %, from 8.0 kg (17.5 lb) to 5.3 kg (11.7 lb), can be achieved.

Fig. 89
figure 89

ELiSE process with b-pillar development [102]

Materials

Despite the above ground-breaking achievements in key areas, OEMs are still striving to develop strategies and technical solutions for cost-effectively integrating lightweight materials into multi-material vehicles and securing lightweight materials and components on a global platform. The combined use of various materials (also known as multi-material or sandwich construction) allows the generation of products displaying a broad spectrum of desired properties. By selecting the appropriate material, many mechanical characteristics can be influenced and optimized. Sandwich structures with three or more layers represent the base technology for lightweight parts. The Austrian company 4a manufacturing [103] offers the world's thinnest sandwich structure, used to reduce vehicle weight. In particular, materials can be produced with very high bending stiffness at low weight, a total thickness of 0.3 mm (0.01 in) or more, and a surface weight of 100 g/m2 or more. Further advantages are good formability and very good damping properties.
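The high stiffness-to-weight claim can be illustrated with classical thin-face sandwich-beam theory: two thin faces spaced by a core contribute a bending stiffness of roughly E·t_f·d²/2 per unit width, versus E·t³/12 for a solid sheet. The numbers below are generic steel-like assumptions, not 4a's data, and core mass and shear are neglected:

```python
# Rough sketch of the sandwich-stiffness advantage at (almost) equal weight.
E = 210e9       # Young's modulus of the face material [Pa], assumed
t_f = 0.1e-3    # thickness of one face [m]
core = 0.1e-3   # core thickness [m]; core stiffness and mass neglected
d = t_f + core  # distance between face mid-planes [m]

D_sandwich = E * t_f * d**2 / 2          # two thin faces, parallel-axis term
D_solid = E * (2 * t_f) ** 3 / 12        # solid sheet with same face mass
print(round(D_sandwich / D_solid, 1))    # stiffness gain factor
```

Because bending stiffness grows with the square of the face spacing, even a very thin core multiplies the stiffness at nearly unchanged weight, which is the principle behind structures like the 0.3 mm material described above.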

Figure 90 shows the schematic build-up of the sandwich material. Automotive application fields for ‘Cimera’ are: firewall (−45 % weight reduction compared to aluminum), rear panel (−30 % weight reduction compared to aluminum).

Fig. 90
figure 90

Material “Cimera” (left), schematic drawing (right) [104]

AIREX Composite Structures produces sandwich materials used for the roofs of buses and trains. A weight reduction of up to 160 kg (352.6 lb) (−20 %) can be achieved by using sandwich roofs (see Fig. 91). Beside the weight reduction, the sandwich roof is stiffer under longitudinal, vertical and torsional loading (5–22 % smaller displacements) in comparison to the steel roof, and the bonded joint is 50 % stronger than the welded steel one. Further benefits are savings in manufacturing costs (up to 10 %) and savings in material costs in the build-out through component integration.

Fig. 91
figure 91

Sandwich technology for buses [105]

Georg Fischer Automotive AG is specialized in lightweight design. They demonstrated the challenges and hurdles between carbon composites and casting materials such as iron, magnesium and aluminum, showing the differences in terms of recycling, energy consumption, corrosion resistance, repair, weight, material cycle, resistance and costs. Table 8 compares the abilities of casting materials and carbon composites (Source: Georg Fischer). Figure 92 points out that carbon fibers have excellent properties in terms of tensile strength, Young's modulus and weight reduction.

Table 8 Comparison of different materials
Fig. 92
figure 92

Comparison of weight reduction

Carbon fibers have a huge benefit in comparison to casting materials, namely their potential for weight reduction. Besides that, carbon fibers have a few disadvantages which make them unattractive at the moment:

  • manufacturing climate balance sheet:

    • Steel/AHSS (0–5 kg CO2/kg),

    • Aluminum (10–25 kg CO2/kg),

    • Carbon fiber composites (20–42 kg CO2/kg),

    • Magnesium (11–47 kg CO2/kg);

  • manufacturing energy consumption:

    • Steel/AHSS (0–25 MJ primary energy/kg),

    • Aluminum (60–200 MJ primary energy/kg),

    • Carbon fiber composites (280–380 MJ primary energy/kg),

    • Magnesium (310–390 MJ primary energy/kg);

  • emission over life-cycle/change of perspective: Break Even:

    • 0 km High Strength Steel,

    • 90,000 km Aluminum,

    • 120,000 km Magnesium,

    • 170,000 km Carbon fiber;

  • availability:

    • Magnesium: 8th most abundant element in the earth's crust;

    • Carbon fiber: uses polyacrylonitrile;

  • recycling:

    • Metals: fully recyclable, used pure or as a mixture in new alloys,

    • Carbon fiber: still not sure how to recycle, at least downcycling or scrap;

  • cost comparison: costs per part (in % of Steel)

    • Steel 100 %,

    • Aluminum 130 %,

    • Magnesium 155 %,

    • Carbon fiber 570 %. Thus, carbon fiber parts today cost several times more than steel. With new precursors and technological progress improving production processes, a decrease of almost 30 % in the global cost of composites is expected between 2015 and 2020;

  • production chain: Figs. 93 and 94 show the different production chains of carbon fiber and magnesium (based on the explanations and data from Georg Fischer Automotive). It can be seen that the production chain for carbon fiber is much longer and more complex than the one for magnesium.

    Fig. 93
    figure 93

    Carbon fiber production chain [106]

    Fig. 94
    figure 94

    Magnesium production chain [107]

Georg Fischer demonstrated that several materials for lightweight design are available, all of them with different abilities. Using new materials like carbon fiber can help to reduce the vehicle's weight drastically and thereby save fuel, but it is also necessary to consider the overall balance sheet.
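The break-even mileages listed above follow from a simple relation: the extra manufacturing CO2 of a lighter material is paid back by its per-km driving savings. A hedged sketch with assumed inputs (chosen merely to land near the quoted carbon-fiber figure, not Georg Fischer's actual data):

```python
# Lifecycle break-even: extra manufacturing CO2 [kg] divided by the
# per-km CO2 saving [g/km] gives the mileage at which the lighter
# material starts to pay off.  Inputs below are illustrative assumptions.
def break_even_km(extra_mfg_co2_kg, co2_saved_g_per_km):
    return extra_mfg_co2_kg * 1000.0 / co2_saved_g_per_km

# e.g. a CFRP part set adds ~500 kg CO2 in manufacturing but saves ~3 g CO2/km
print(round(break_even_km(500.0, 3.0)), "km")
```

The same relation explains why high-strength steel breaks even at 0 km: it adds essentially no manufacturing-CO2 penalty over baseline steel.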

Future vehicles will rely on multiple lightweight materials to reduce weight, and the increasing use of both High Strength Steel (HSS) and aluminum in new vehicles is likely to continue. There are several advantages of using HSS over aluminum: the energy demands for producing HSS are lower, as is the cost per kilogram of total weight savings at high production volumes. HSS-intensive concept vehicles have also demonstrated that it is possible to achieve a similar degree of weight savings as with aluminum. Aluminum, however, is preferred in cast components like the engine block and wheels, where HSS cannot compete.

This light metal will make some inroads in the body and chassis, but given its higher cost, aluminum content per vehicle is unlikely to overtake steel.

A comprehensive comparison of all materials can be seen in Table 9.

Table 9 Comparison of abilities of different materials

Further, Georg Fischer showed ways of how to reduce the weight of existing components:

  • substitution of sheet metal constructions: e.g. a seat frame in cast magnesium has up to 30 % less weight than a sheet-metal assembly; an inner door frame in cast aluminum has up to 40 % less weight than a sheet-metal assembly,

  • implementing bionic design (see Fig. 95): e.g. weight reduction of up to 35 % in an iron steering knuckle through bionic design,

    Fig. 95
    figure 95

    Left Front knuckle serial production, right Bionic front knuckle [108]

  • substitution of system parts: a cross member, completely cast in iron, has 17 % less weight than a combined assembly of steel sheet and iron.

Functional integration in the field of lightweight

Functional integration provides parts with several functions in order to reduce the final number of parts; for example, replacing plastic interior trim with suitably designed structural parts with laminable, visually attractive surfaces. Individual close-to-the-wheel drives open up new possibilities for vehicle dynamics control strategies, with an associated increase in driving safety and energy efficiency through the targeted distribution of power and recuperation. On the other hand, positioning the motors close to the wheel increases the unsprung mass. This challenge can only be met by consistent lightweight design of all chassis components. The “LEICHT” concept by the German Aerospace Centre (DLR) presents a novel, drive-integrated chassis concept which in terms of its design and construction offers a significant chassis weight reduction (see Fig. 96). By integrating the motor into the chassis in an intelligent way, easy modularization regarding drive power and steerability becomes possible. This enables adaptation to a variety of vehicle concepts and application as a front or rear suspension module, achieving about 30 % weight reduction in comparison to conventional reference structures [109].

Fig. 96
figure 96

Torque transmission (left) and wheel travel (right) [110]

Another innovative concept, called ESKAM, was presented by Groschopp AG. In 2014 there were no optimized drive axles for BEVs available on the market: existing ones are too heavy, too expensive and too big, measured against the available power. The aim of ESKAM (Elektrisch Skalierbares Achs-Modul, electrically scalable axle module) is to develop an optimized electric drive axle module for commercial vehicles, consisting of two motors, transmissions and power electronics (see Fig. 97). All components fit neatly and compactly into a shared housing, which is fitted in the vehicle using a special frame construction also developed by the project engineers.

Fig. 97
figure 97

Project ESKAM. Left Front axle, right rear axle [111]

This innovative concept integrates the drive into the axle module while limiting the weight of the drive module to a maximum of 100 kg (220.4 lb). For this purpose, it is necessary to couple rapidly rotating e-machines with corresponding gears and to integrate them in a common housing. Only e-motors are used that do not depend on permanent magnets or on increasingly expensive rare-earth elements such as neodymium and samarium. The electric drive consists of two identical electrically excited, electronically commutated motors and transmissions, which are housed together with the power electronics and fitted to a drive axle module. The power range of the motor is scalable between 20 and 50 kW. The axle module presents numerous advantages, such as a high power density and a very high torque; for drivers, this means very fast acceleration. Because the module is scalable, it can be used in everything from small vans and municipal vehicles to buses and trucks. With a wheel hub motor, that would not be possible: while wheel hub motors have definite advantages, they are not suitable for commercial vehicles, as they scarcely deliver more than 2000 RPM. While the speed of most e-motors is approximately 10,000 to 15,000 RPM, the ESKAM motor (from Groschopp) achieves speeds of 20,000 RPM, with a maximum torque of 45 Nm (33 lb-ft) and a power of 32 kW (43 hp).
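As a quick consistency check of the quoted ESKAM figures, one can assume a standard constant-torque/constant-power motor characteristic (P = T·ω below the base speed, rated power above it); the base speed computed here is derived from the figures, not given in the source:

```python
# Sketch: torque-limited region up to the base speed, power-limited above.
import math

T_MAX_NM, P_MAX_W, RPM_MAX = 45.0, 32e3, 20000.0   # quoted ESKAM figures

def shaft_power_w(rpm):
    omega = rpm * 2 * math.pi / 60.0               # shaft speed [rad/s]
    return min(T_MAX_NM * omega, P_MAX_W)          # constant T, then constant P

# Base speed: where maximum torque first delivers rated power (~6800 RPM)
base_rpm = P_MAX_W / T_MAX_NM * 60 / (2 * math.pi)
print(round(base_rpm), shaft_power_w(RPM_MAX))
```

This shows that the 45 Nm and 32 kW ratings apply in different speed regions: full torque is only available up to the base speed, while at 20,000 RPM the motor is power-limited.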

As well as designing the axle module, the project researchers and developers simultaneously developed the required series production technologies. For example, gearbox shafts are usually manufactured from expensive cylinders or by means of deep-hole drilling; in both cases, the excess material goes unused. By contrast, the researchers chose new, short process chains together with methods that allow greater material efficiency. One such method is spin extrusion, which was developed by the project partners.

Magna presented a vehicle called CULT [112] (Cars’ Ultra-Light Technologies vehicle), a modern lightweight vehicle fueled by natural gas which shows significantly reduced CO2 emissions (Fig. 98). Lead-managed by Magna Steyr, the Polymer Competence Center Leoben GmbH worked on the development of an ultralight vehicle with minimal CO2 emissions in the CULT project.

Fig. 98
figure 98

CULT, vehicle (left) and exploded view (right) [113]

Comparable benchmark cars in this segment normally have a curb weight of approximately 900 kg (1984 lb). Functional integration and elimination of parts should save 80 kg (176 lb); a further 100 kg (220 lb) should be achieved by a multi-material approach; downsizing and the use of secondary effects should cut the curb weight by another 120 kg (264 lb). Overall, this means a weight reduction of 300 kg (661 lb). Finally, a total weight of 672.5 kg (1481 lb) could be achieved by this holistic approach. The materials used can be seen in Fig. 99.

Fig. 99
figure 99

Materials concept for CULT [114]

Aluminum was used for several parts and represents nearly three quarters of the complete weight of the body-in-white structure. In addition, steel was used in the side construction to obtain the crash-relevant stiffness in this area. Organosheets, duroplasts as well as sandwich parts for the roof, doors and hood represent the rest of the material mix.

One of the methods employed to achieve this objective was exploitation of the properties of thermoplastic fiber composite materials relating specifically to weight in order to reduce the overall vehicle weight by using lighter components. For example, for the bumper beam, which consists of a crossbeam and shock absorbers, thermoplastic continuous-fiber-reinforced semi-finished composites were used.

The Fraunhofer Institute presented the requirements for a composite wheel with integrated hub motor: the development of a CFRP wheel with an integrated e-motor (see Fig. 100). The main focus of the development was to achieve the optimum lightweight potential considering structural durability.

Fig. 100
figure 100

Wheel with integrated hub motor [115]

During the realization, the technical challenges of multifunctional design were considered in the whole product life cycle. The CFRP lightweight wheel has a weight of approximately 3.5 kg (7.7 lb).

The motor housing is not directly connected to the rim, but to the inner area of the wheel axle. This prevents radial or lateral loads, especially shocks caused by rough road or curbstone crossing, from being transferred directly to the hub motor (4 kW). Another advantage of the separation of the load paths is that the rim can be more flexible than if it were directly connected to the hub motor. For increasing the flexural rigidity at a constant weight, foam cores were inserted into the spokes. A smaller, commercially available hub motor was used as the e-motor.

To align the fibers continuously with the flow of the forces and to avoid stress peaks caused by sharp edges or sudden variations in rigidity, material-appropriate radii and smooth transitions have been created in the component. The motor, consisting of a permanent-magnet outer rotor and a yoke ring with solenoids (stator), has a power of 4 kW and a control voltage of 2 × 24.5 V. Through the use of high-modulus fibers in fiber-reinforced plastics, an increased eigenfrequency with improved damping behavior is achievable compared to metal. Higher-modulus fibers would allow still further reduction of mass and noise emission.

7 Power Electronics and Drive Train Technologies as Overall Optimization Method

Nearly 40 years ago, the first piece of software was used in a vehicle to control the ignition of the engine [116].

The first software systems in vehicles were local and did not have any communication between different systems. Since then a lot has happened and today almost all new functionality involves advanced control of electronics and software.

The automotive industry is traditionally a mechanical based industry. Mechanics is still the foundation of the vehicle, however the amount of software and electronics is increasing rapidly. Thus, the automotive industry faces a challenge.

In 2004, 23 % of the overall cost of high-end cars was related to the Electrical/Electronic (E/E) system (Hardung et al. [117]). At the time, this figure was expected to increase to 35 % by 2010 [118].

Today up to 90 % of all new innovations in a car are realized with electronics and software [119].

In today’s commercial vehicles driven by an ICE, the proportion of electrical, electronic and IT components is between 20 and 35 % (dependent on the vehicles class). In xEVs, this share will increase to up to 70 %. This includes around 70 main control units with more than 13,000 electronic devices.

In the future, every second euro/dollar in production will be spent on electronics. Currently, the share of electronic components in manufacturing cost is around 30 %; by 2017 it will grow to 35 % and will increase further to 50 % in 2030.

7.1 Reasons for an Increasing Amount of Software and Electronics

One of the reasons for the large increase in software and electronics is the customers' demand for new safety and convenience functions such as adaptive cruise control, blind-spot detection, forward collision avoidance, lane departure warning and many other Advanced Driver Assistance Systems (ADAS); thus safety becomes one of the key challenges as the amount of software increases.

ADAS can be seen as a pre-stage of autonomous driving (compare the different pre-steps of autonomous/automated driving in Fig. 101). The current trend towards autonomous driving requires interconnectivity of modules and systems such as control units, sensors and actuators, which have to communicate in a timely, safe and reliable way. Therefore embedded software, including functional integration, is indispensable.

Fig. 101
figure 101

Levels of automated driving, including ADAS [120]

Functional integration plays an important role because electric systems and embedded software are increasingly taking it over. Today about 80 % of all vehicle innovations come from the field of electronics, which indicates that future cars will be highly connected but will also consist of complex systems.

Last but not least, to cope with new regulations on emissions the use of software and electronics is a necessity.

7.2 Electrified Drive Trains Leads to Increasing Complexity

Powertrains in EVs have to meet demanding requirements with respect to power density, lifetime and component costs. Thus systems are becoming increasingly complex, making the engineering of these software-intensive systems more and more difficult.

Especially the change towards electrified technologies, the advanced implementation of HEVs and BEVs, has led to an increased consumption of electrical energy in automotive wiring harnesses. As can be seen in Table 10, these vehicles require new or additional components such as the e-machine, power electronics unit and battery system, which include a massive amount of software and electronics. Such vehicles require a modification of the drive train components, which means a fundamental technology turnaround and leads to complex systems.

Table 10 List of components for e-vehicles: not needed/to be adapted/new ones

These complex systems require software in the powertrain, embedded systems, E/E-Architecture and intelligent control, which are increasingly taking over the functional integration role in vehicle development.

The increasing variety of architectures of current (H)EVs can be seen in Fig. 102, which compares the Nissan Leaf, Toyota Prius, Renault ZOE and Renault Fluence ZE.

Fig. 102
figure 102

Power architecture of current (H)EVs [121]

Furthermore, the comparison of the power architectures of different current (H)EVs in Figs. 103 and 104 demonstrates the use of different power and high-voltage levels. The E/E-Architecture is thus highly diverse and will stay diverse, due to different OEM strategies and different hybridization strategies.

Fig. 103
figure 103

Power net architecture [122]

Fig. 104
figure 104

Exploded view of the R240 [124]

Future power electronics will have to adopt modularization and downscaling in weight and volume. The optimal use of electronics and software in vehicles is therefore the key prerequisite for meeting all requirements of cooperative vehicle safety, adaptive vehicle management, electrification and automated driving, and especially for improving the vehicle's efficiency, which directly influences mileage and finally turns into a very real and visible benefit to the customer.

Reducing electrical energy consumption in the complex E/E-Architectures of modern passenger vehicles has increasingly been a topic for discussion over the last couple of years. The E/E-Architecture of EVs consists of at least two voltage domains, in which vehicle modules are placed according to their application.

As mentioned in this section, due to the increasing electrification of the vehicle and the resulting complexity, several questions have to be solved regarding the optimization of the E/E-Architecture, such as:

  • complex cooling systems (up to three different heat exchangers for one car: engine, power electronics, battery),

  • designing standard modules for various applications to reduce non-recurring engineering costs,

  • limited volume under the hood (especially for HEV and PHEV),

  • high electrical power for auxiliaries → 12 V and 48 V network,

  • connected car system demanding high auxiliary battery capacity (especially for car sharing),

  • maintenance: any repair shop must cope with continuously evolving technologies,

  • a 1 % energy loss/gain on a 10 kWh battery corresponds to a 43.3 EUR (48.5 USD) loss/gain (according to U.S. DoE) for the same EV mileage,

  • ensure reliable and safe operation throughout system lifetime.
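The cost figure in the list above can be made plausible with a back-of-envelope calculation. The electricity price and lifetime cycle count below are illustrative assumptions, not the inputs behind the quoted DoE estimate; they are merely chosen so the result lands in the same order of magnitude as the 43.3 EUR figure:

```python
# Back-of-envelope lifetime cost of a 1 % energy loss on a 10 kWh pack.
PACK_KWH = 10.0
LOSS_FRACTION = 0.01           # 1 % of each full charge is lost
PRICE_EUR_PER_KWH = 0.15       # assumed average electricity price
LIFETIME_FULL_CYCLES = 2900    # assumed full-charge equivalents over vehicle life

lost_kwh = PACK_KWH * LOSS_FRACTION * LIFETIME_FULL_CYCLES
lost_eur = lost_kwh * PRICE_EUR_PER_KWH
print(f"{lost_kwh:.0f} kWh lost over the vehicle life -> {lost_eur:.2f} EUR")
```

The point of the sketch is that even a 1 % efficiency difference, integrated over the vehicle life, becomes a customer-visible amount of money.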

7.3 Benefits Through Optimized Power Electronics and Drive Train Technologies

Task 17-workshop on Power Electronics and Drive Train Technologies

In April 2015, RENAULT announced that it had succeeded in extending the range of its BEV ZOE to 240 km (149 mi.), a boost of 31 km (19 mi.) or 14.6 %, in the New European Driving Cycle (NEDC) by using a new, lighter and more compact R240 e-motor and an optimized electronic management system. The R240 is a synchronous e-motor with a wound rotor, with a power output of 65 kW and a torque of 220 Nm (162 lb-ft). It also features a built-in Chameleon charger which allows faster charging at home (3 and 11 kW). The R240 (see Figs. 104 and 105) is an all-Renault motor [123].

Fig. 105
figure 105

Cutaway view of the power electronic controller [125]

Two main areas of focus in the development were improved electronic management to cut electric energy consumption on the move and the new charging system to reduce charging times at low power levels.

Since this example of an OEM successfully demonstrated that improving the power electronics unit has a massive impact on the overall vehicle performance, the members of the IEA Task 17 agreed on a workshop about “Power Electronics and Drive Train Technologies for future xEVs”.

The workshop was organized by the A3PS and held in Berlin (Germany) in April 2015, hosted by VDI/VDE-IT, a service provider in the field of innovation and technology for customers in Germany and all over the world. VDI/VDE-IT analyses, supports and organizes innovation and technology for clients with political, research, industry and finance backgrounds.

About 20 participants from industry, R&D and policy making followed the invitation and took part in this one-day workshop. Participants from several countries (Austria, Belgium, France, Germany, Switzerland and the USA) were present.

The aim of this workshop was to summarize and communicate on a global level:

  • the status and prospects of Power Electronics and Drive Train Technologies,

  • to give an introduction to E/E-Architecture and Intelligent Controls in order to enhance the overall vehicle performance and

  • to discuss the synergies with fully autonomous vehicles.

Results of the workshop

This workshop focused on methods to improve the energy efficiency and performance of xEVs. Thus, the following topics were discussed:

  • virtual design approaches in the development of powertrain concepts,

  • evaluation of future powertrain architectures and their benefits for efficiency,

  • new power electronic concepts for online energy management,

  • cloud data solutions to improve the intelligence of such vehicles,

  • possibilities of improving the e-motor by using advanced E/E-Architecture,

  • combined view of the grid and the vehicle together as a system,

  • methods to calculate the maximum junction temperature in a vehicle drive with a combined cooling system, which makes it possible to specify the cooling system with respect to a specific drive cycle, a maximum power rating, as well as a maximum acceleration from standstill,

  • benefits of modular drive train structures with a flexible drive train topology, such as better utilization of components, higher production numbers (in terms of economy of scale), advanced energy management, enhanced system lifetime and reliability, additional safety functions and system redundancy and

  • synergies between electric and automated driving, collecting new ideas for a follow-up Task.

Focus on power electronics and drive train technologies

Virtual Design Approaches in the Development of Electric Vehicle Powertrain Components

This section was provided by Mr. Johannes Gragger from AIT.

Power electronics, machines and batteries in EVs and HEVs have to meet exceptional requirements with respect to power density, resilience and component costs. In the design process, driving cycles are usually used to assess the performance of an electric powertrain configuration. Due to the typical load profile, it is highly important to consider the electrical, mechanical and thermal effects, including their interactions, in the individual system components.

Thermal EV drive simulations in particular face several challenges: real load cycles have to be covered although the largest (thermal) time constants can be several minutes long, high switching frequencies contrast with large mechanical and thermal time constants, and drive models that resolve individual switching events are slow.

The resulting question concerns the temperature of the inverter bridge: how hot does it get in the worst-case instance of a presumed drive cycle?

During the Task 17 workshop in Berlin (2015), AIT presented methods as well as benefits and challenges of virtual electric powertrain design and discussed their application in powertrain development projects (based on the data of the conference paper [126]: “An Efficient Approach to Specify the Cooling System in Electric Powertrains with Presumed Drive Cycles”).

The interactions of the cooling system with the electric drive components of a battery electric bus are assessed by multi-physical domain simulation. A vehicle model with real drive cycle data is applied for the calculation of the torque and speed of the induction machine. These results are then imported into an electro-thermal-mechanical drive model that is fast enough to calculate the thermal conditions of the electric drive throughout an entire drive cycle. With another drive model, the maximum values of the junction temperatures, which depend on the cooling system and the drive cycle, are calculated.

Thus, the AIT showed simulation models to calculate the maximum junction temperatures in a vehicle drive with a combined cooling system. It has been shown that by simulating the vehicle model, drive model A and drive model B one after another, it is possible to assess the influence of the combined cooling circuit on the worst-case junction temperatures in the inverter.
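The core idea of the drive-cycle-dependent junction temperature can be sketched with a much simpler first-order thermal model than the AIT tool chain. Everything below (the loss profile, the thermal resistance, the time constant and the coolant temperature) is an illustrative assumption, not data from the study:

```python
def junction_temps(losses_w, dt_s, t_coolant=65.0, r_th_k_per_w=0.05, tau_s=30.0):
    """Step a first-order thermal lag through an inverter loss profile.

    Returns the junction-temperature trace in degC."""
    t_j = t_coolant
    trace = []
    for p_w in losses_w:
        t_ss = t_coolant + p_w * r_th_k_per_w   # steady-state temp for this loss
        t_j += dt_s / tau_s * (t_ss - t_j)      # relax toward steady state
        trace.append(t_j)
    return trace

# Synthetic profile: cruise losses, a 10 s acceleration burst, cruise again (1 s steps).
profile_w = [500.0] * 60 + [2000.0] * 10 + [300.0] * 60
t_max = max(junction_temps(profile_w, dt_s=1.0))
```

Because the thermal time constant is far larger than the burst duration, the worst-case junction temperature stays well below the steady-state value for the peak loss, which is exactly why drive-cycle simulation can justify a smaller cooler than a static worst-case rating would.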

For the initial simulation the EV is modeled considering mechanical and electric effects while transient thermal effects (such as temperature changes in components due to heat dissipation) are disregarded.

In the simulation a virtual driver model controls the vehicle model using a prescribed reference drive cycle. The drive cycle data is found from test rides with a reference vehicle carrying a GPS logger. During these test rides on prescribed public traffic routes the instantaneous local position of the vehicle was recorded. Figure 106 shows the instantaneous vehicle speed and altitude profile corresponding to the public traffic route that is used as the reference for the presented virtual design approach.

Fig. 106
figure 106

Real drive cycle extracted from GPS data [127]
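Deriving a speed profile from logged GPS positions, as described above, can be sketched as follows; the fix format and the sample values in the test are hypothetical, not the recorded bus route:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_profile_kmh(fixes):
    """fixes: list of (t_seconds, lat_deg, lon_deg) -> speed per interval in km/h."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        d_m = haversine_m(la0, lo0, la1, lo1)
        speeds.append(d_m / (t1 - t0) * 3.6)
    return speeds
```

In practice the raw GPS-derived speeds would additionally be filtered, since position noise between closely spaced fixes produces spurious speed spikes.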

In the vehicle model the electric part of the powertrain is modeled with a battery model and a variable-speed electric drive model (comprising the behavior of the two-level inverter, the field-oriented control and the induction machine). By simulating the vehicle model, the instantaneous torque requirement and the angular speed of the induction machine are calculated. The respective simulation results are shown in Fig. 107.

Fig. 107
figure 107

Machine speed/torque requirements [128]
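The torque and speed requirements above follow from standard longitudinal vehicle dynamics, which can be sketched minimally as below. All vehicle parameters (mass, drag area, rolling resistance, gear ratio, wheel radius) are assumed placeholder values for a bus-sized vehicle, not data from the referenced study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wheel_torque_nm(v_mps, a_mps2, grade_rad,
                    mass_kg=13000.0, cd_a_m2=6.0, c_rr=0.008,
                    rho_air=1.2, wheel_r_m=0.48):
    """Tractive wheel torque from the standard driving-resistance terms."""
    f_aero = 0.5 * rho_air * cd_a_m2 * v_mps ** 2       # aerodynamic drag
    f_roll = c_rr * mass_kg * G * math.cos(grade_rad)   # rolling resistance
    f_grade = mass_kg * G * math.sin(grade_rad)         # grade resistance
    f_accel = mass_kg * a_mps2                          # inertial force
    return (f_aero + f_roll + f_grade + f_accel) * wheel_r_m

def machine_speed_rpm(v_mps, wheel_r_m=0.48, gear_ratio=10.0):
    """Induction machine speed assuming a single fixed-ratio gearbox."""
    return v_mps / wheel_r_m * gear_ratio * 60.0 / (2.0 * math.pi)
```

Evaluating these two functions over the speed/acceleration/altitude trace of the drive cycle yields exactly the kind of torque and speed profile shown in Fig. 107.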

Therefore, if applied in a virtual design approach, the three models can be utilized to find the cooling system specifications required for safe inverter operation in an electric powertrain with the described cooling circuit topology.

Three simulation models were used (Fig. 108):

  • a vehicle model,

  • a drive model for drive cycle simulation and

  • a refined drive model

Fig. 108
figure 108

Evaluation of the maximum junction temperature [129]

In the proposed approach the results of the vehicle model are used as input data to drive model A and the results of drive model A are used in drive model B. The proposed approach makes it possible to specify the cooling system with respect to a specific drive cycle, a maximum power rating, as well as a maximum acceleration from standstill (plus curb climb).

To keep the calculation effort as low as possible, the three models are consistently rid of physical relations that do not have a significant influence on the calculated results. This makes the presented approach very efficient in terms of computation time.

The models showed that the load in real drive cycles changes very often and is low most of the time. Further, a usual result is that the cooler can be smaller than expected [130].

Efficiency Improvement Potentials for Light-, Medium- and Heavy Duty Trucks via Hybridization and Electrification in Urban and Sub-Urban Traffic

This section was provided by Mr. Peter Prenninger from AVL.

By means of numerical simulations it was investigated how and to what extent future powertrain architectures for three classes of commercial vehicles (light-, medium- and heavy-duty trucks, see Table 11) can contribute to efficiency improvements. It was thus investigated how to reach the optimal match of truck powertrain systems for particular use cases by analyzing the typical load profiles of the vehicle classes mentioned above.

Table 11 Vehicle classes used for the study (light N1; medium N2, heavy N3 [131])

Particular transport routes in urban and suburban services were selected which are representative for each class of vehicle and its typical use profile. Various types of hybridization as well as different fuel options were taken into account. The results indicate that there is not a single “optimal” solution; in each vehicle class, the “best solution” always depends very much on the profile of the transport tour and the related load profile for the powertrain (compare Figs. 109, 110 and 111). The study also indicates that Start-Stop and Mild Hybrid offer a high CO2-reduction potential at relatively modest add-on costs. Furthermore, PHEVs are almost as good as pure EVs, with the advantage of a wider range and much lower costs (Fig. 110).

Fig. 109
figure 109

Simulation results: light/N1 (“Sprinter-Class”) [132]

Fig. 110
figure 110

Simulation results: medium/N2 (“Atego-Class”) [133]

Fig. 111
figure 111

Simulation results: heavy/N3 (“Actros-Class”) [134]

Advanced Reluctance Motors for Electric Vehicle Applications

Punch Powertrain is working on a project called ARMEVA, which aims to develop a new rare-earth-free generation of advanced reluctance motors (see Fig. 112), as rare-earth supply issues are expected within the next years. Alternatives exist; the challenge is to deliver alternatives that combine the best balance of efficiency, power density, safety, reliability, durability and cost.

Fig. 112
figure 112

Advanced reluctance motors

The goal of ARMEVA is to achieve similar power density and NVH performance (noise, vibration, and harshness) at lower costs compared to permanent magnet motors in real EV applications. The focus will be on Switched Reluctance Motors, Variable Reluctance Synchronous Motors and DC-excited flux-switching motors, each of which has been the topic of previous research by the consortium and offers promising potential.

The scientific objectives of the ARMEVA project are: (i) development of multiphysics simulation models for advanced reluctance motors; (ii) comparative assessment to select the optimal motor topology for future EVs; (iii) development of an integrated electric drive system based on advanced reluctance motor technology and customized power electronics [135].

The challenges are to overcome the inherent torque ripple and NVH while maintaining high efficiency, and to develop adequate power electronics for an independent control of each phase at affordable cost. Being a non-standardized component, it requires economy of scale.

The ARMEVA project addresses these challenges by CAE based optimization, advanced control methods, integral system design optimization and by further integration of drive train components.

The entire system, consisting of control software, power electronics and a physical e-motor (including high-voltage wiring, liquid cooling and a 100 kW SR motor) (Fig. 113), will be integrated and validated in a vehicle platform.

Fig. 113
figure 113

Electric drive system of “ARMEVA” [136]

ARMEVA follows a system-based approach, using multi-attribute techniques to improve the overall concepts and multi-application, multi-operation analysis to optimize vehicle-level efficiency in a wide range of realistic conditions.

Focus on E/E-Architecture and Intelligent Control

Examples of electric innovative architectures for xEVs

The growth of the EV market encourages car manufacturers to continuously improve the E/E-Architecture of xEVs, since the classical E/E-Architecture (see Fig. 114) has limitations and has to be challenged. CEA (France) presented two examples of how innovative architectures can enhance the service to customers. For example, smart battery modules with new integrated functions are one of the solutions for imagining, designing and defining new electric architectures.

Fig. 114
figure 114

Battery pack in the E/E-architecture of an EV [137]

In terms of requirements for new batteries and architectures, a few aspects should be considered: low system cost; high quantity/low part count; low development cost; re-use; low space requirements. Furthermore, the system should be reliable, safe, flexible and always available. All these requirements lead to the necessity of standard modules with integrated functions.

CEA presented two proposals. The first one deals with an innovative balancing solution supplying the 12 V auxiliary network, which permits:

  • a redundant 12 V supply implying no need for a 12 V battery,

  • the ability to have 48 V or 12 V or both with one standard module,

  • a better efficiency for the 12 V supply at low power,

  • endless energy on the 12 V network when vehicle is off and

  • a high power balancing which enables fast charging and compensation of capacity difference between modules.

The advantages of the presented balancing solution (supplying 12 V auxiliary network) are:

  • compensation of differences in capacities (compare Figs. 115 and 116). In the standard module solution, 2 kW are drawn in the drive and 300 W in the auxiliary network; with a 45 Ah and a 40 Ah module in series, the pack capacity is limited to 40 Ah, and the remaining energy in module 1 at the end of discharge amounts to 3000 Wh. The smart module solution draws the same 2 kW in the drive and 300 W in the auxiliary network, but splits the auxiliary load into 220 W from module 1 and 80 W from module 2. This increases the energy used at the end of discharge, so the energy of the pack is used completely,

    Fig. 115
    figure 115

    Standard module solution [138]

    Fig. 116
    figure 116

    Smart module solution [139]

  • possibility of removing the low voltage battery,

  • better efficiency with low power consumption and a

  • flexible configuration as it can be seen in Fig. 117.

    Fig. 117
    figure 117

    Flexible configuration as a big advantage [140]
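The capacity-compensation argument can be checked with a small sketch: in a series string without active balancing, the usable capacity is that of the weakest module, leaving charge stranded in the stronger ones. The module capacities follow the 45 Ah/40 Ah example above; the helper names are of course hypothetical:

```python
def usable_capacity_ah(capacities_ah):
    """Without active balancing, a series string delivers only the weakest
    module's capacity before the string has to stop discharging."""
    return min(capacities_ah)

def stranded_capacity_ah(capacities_ah):
    """Charge left unused in the stronger modules at end of discharge."""
    usable = usable_capacity_ah(capacities_ah)
    return sum(c - usable for c in capacities_ah)

modules_ah = [45.0, 40.0]  # the two series modules from the CEA example
print(usable_capacity_ah(modules_ah))    # 40.0 Ah usable
print(stranded_capacity_ah(modules_ah))  # 5.0 Ah stranded in the 45 Ah module
```

The smart module recovers exactly this stranded charge by routing a larger share of the auxiliary load through the stronger module during the trip.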

The second proposal deals with a solution of a switch module (see Fig. 118) which also permits:

  • a bypass of one module in case of fault/service continuity in case of fault,

  • an increase of the battery capacity (range) without modifying the DC bus voltage,

  • easier standardization and

  • safety improvements during manufacturing and in case of crash.

Fig. 118
figure 118

Designed prototype for a switch module [141]

Vehicle Cloud information

IMPROVE—Cloud Data Solution

The main approach of IMPROVE is to innovate the intelligence and connectedness of the integrated control systems of commercial EVs, delivering improved connectedness of the vehicles to on-board and off-board data as well as better energy efficiency and drive range while maintaining comfort and safety. IMPROVE focuses on in-vehicle information and communication technology innovations for commercial vehicles. Within this focus, IMPROVE leverages a set of hardware and software innovations that, in combination, target +20 % range for the same battery capacity, increase the life of the battery, reduce the cost of key components and use deeply integrated interconnections between subsystems inside the vehicle and between the vehicle (sub-)system and the outside world.

Thus, IMPROVE aims to increase efficiency and range predictability of CEVs (commercial electric vehicles) operated in fleets by:

  • employing cloud information for operation and control strategy;

  • reuse of waste energy in a holistic, predictive way;

  • learning from history (gaining information of several vehicles and using this info for a strategy);

  • establishing psychological efficiency incentives through gamification.

IMPROVE will drastically increase the intelligence of the vehicle in two ways: on the one hand through the interaction between an integrated control system and on-board and off-board data, and on the other hand by developing in-the-loop local modelling and scenario-check capabilities in the control subsystems to increase the intelligence of each component.

Algorithms developed in the project will thus model scenarios of the future before deciding what the best course of action might be. For commercial vehicles, the focus on operating economy and the influence of payload changes during the trip on energy consumption is especially important, and the IMPROVE approach allows these parameters to be taken into account.
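The "model scenarios before acting" idea can be sketched as a search over candidate control actions against a predicted-cost model. The actions, Wh/km figures and payload term below are invented placeholders, not IMPROVE algorithms:

```python
def predicted_energy_wh(action, payload_kg, distance_km):
    """Toy predictor: per-mode base consumption plus a payload term.

    The Wh/km figures and the 0.01 Wh/(km*kg) payload sensitivity are
    placeholder assumptions."""
    base_wh_per_km = {"eco": 140.0, "normal": 170.0, "boost": 210.0}[action]
    return (base_wh_per_km + 0.01 * payload_kg) * distance_km

def best_action(payload_kg, distance_km):
    """Simulate each candidate scenario and pick the cheapest in energy."""
    return min(("eco", "normal", "boost"),
               key=lambda a: predicted_energy_wh(a, payload_kg, distance_km))
```

A real implementation would weigh energy against trip time and comfort constraints rather than energy alone, and would refresh the payload estimate as it changes during the tour.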

This workshop demonstrated that improving the power electronics unit and the E/E-Architecture, introducing intelligent control and modifying the drive train technologies indeed help to improve the overall vehicle performance of xEVs.

Future generations of xEVs require a layered, flexible and scalable architecture addressing different system aspects such as uniform communication, scalable and flexible modules as well as hardware and software.

Future (P)HEVs and BEVs will, apart from some micro hybrids, require a high-voltage power net in addition to the conventional power net. This high-voltage power net includes at least an electrical energy storage and a single drive inverter.

The automotive future is hard to predict, but it is indeed promising for the power electronics and motor drives industry.