1.1 Introduction

The material that follows is fundamental to flight safety. Flight safety is a prerequisite for the acceptance of the airplane as a means of public transportation, as well as for the economic viability of a flight operation.

Legislators and regulators are cognizant of the public’s interest in safety and establish minimum standards, which must be complied with accordingly.

But minimum standards alone cannot sustain long-term flight operations, especially those employing a large number of aircraft. These operators are forced to strive for higher standards than those prescribed by law, because their exposure to accidents and incidents is correspondingly greater. An operator with too many mishaps will be eliminated by the competition. Yet no airline will invest more money in safety than makes economic sense.

The goal of every pilot is to achieve an accident rate of zero. In the routine of the daily flying profession, the pilot is the final authority for guaranteeing flight safety and preventing accidents. It is therefore essential that pilots possess comprehensive knowledge of both the causes and the prevention of accidents. This enables them to act in a preventive manner from the outset.

For this reason, it would be desirable for this material to be included for study and testing, even during initial training for license issuance.

To begin with, it will be beneficial to review some basic flight safety-related statistics. Then the question as to the circumstances under which accidents occur will be approached. This, in turn, will allow us to derive recommendations useful for daily air service. Several fundamental psychological principles that work to impede pilots from consistently implementing these recommendations will then be addressed. This, then, leads to the question of “human error”, with the subsequent chapters being dedicated to its prevention.

In the context of this chapter, the levels of accident prevention falling under the responsibility of the various authorities will not be addressed or, if so, then only where deemed necessary.

1.2 Accident Statistics

1.2.1 Trends in Accident Rates

In 2007, there were 20,700 jet-powered transport aircraft in operation around the world conducting 20.8 million flights. This corresponds to an average of about 1,000 flights per year per jet (see Fig. 1.1) (Boeing 2011).

Fig. 1.1 Fatal accident rates (Boeing 2011)

Accident rates in the USA and Canada are approximately the same as those in the part of Europe regulated by the European Aviation Safety Agency (EASA).

After the heavy losses experienced by the civil jet fleet in the early 1960s, accident rates had stabilized by around the end of the 1990s. From about the year 2000 on, the rate of fatal accidents in North America has dropped to virtually zero.

Several developments during this period correlate with one another. The technical reliability and equipment of aircraft have undergone continuous development. The operating environment, including weather forecasting, ATC and airport infrastructure, has matured. The process of pilot selection and training has been progressively refined.

Improvements in flight safety related to developments in aircraft technology are illustrated by statistics provided by Airbus (2009) (see Table 1.1):

Table 1.1 Accident rates based on aircraft generation (put into service—end of 2008)

The transition from 2nd generation aircraft (e.g. DC-10, Tri-Star) to 3rd generation aircraft (glass cockpit, FMS: e.g. B757/767, A300/310) reduced the number of total losses per 1 million flights by a factor of 3. The transition from 3rd to 4th generation aircraft (fly-by-wire with flight envelope protection: e.g. A318-321, A330/A340, B777) further reduced that number by a factor of 2 (see Table 1.2).

Table 1.2 Accident rates based on aircraft generation (1998 to end of 2008)

1.2.2 Accident Rates Based on Aircraft Type

This can also be seen in the accident rates per individual aircraft type (see Fig. 1.2).

Fig. 1.2 Accident rates based on individual aircraft type (Boeing 2011)

It can clearly be seen that the aircraft types commonly in use around the world today have only very low accident rates. Among today's modern types, however, the MD-11 stands out with an above-average rate.

Between 1959 and 2007, a total of 854 aircraft were written off as total losses (hull losses), while 565 fatal accidents were recorded with a combined 28,621 deaths. In all, 1,564 accidents were recorded during the timeframe covered by these Boeing statistics.

The statistics above, although appearing positive at first glance, are put into perspective when one considers that traffic volumes are continuously increasing. This means that, even if accident rates remain consistently low, the absolute number of accidents will continue to rise.

1.2.3 Distribution According to Traffic Region

This data from the International Air Transport Association (IATA) depicts total loss rates for the year 2008 (IATA 2009) (see Fig. 1.3).

Fig. 1.3 Total loss rates for “western-built jets” according to traffic region (IATA 2009)

Accident rates in the world's less economically developed regions are significantly higher. The reasons for this are manifold: older, poorly equipped aircraft fleets, infrastructure deficiencies and, not least, personnel selection and training that are oftentimes less than adequate.

1.2.4 Accident Distribution According to Type of Operation

It is conspicuous that, in the recent past, accident rates have been considerably higher during charter, freight, ferry, test and maintenance flights than during commercial passenger flight operations (see Fig. 1.4).

Fig. 1.4 Accident rates according to type of operation (Boeing 2009)

1.2.5 Accidents According to Phase of Flight

The most accident-prone phases of flight are takeoff and climb, in which 31 % of all accidents occur within 16 % of the average flight time, and, even more so, approach and landing, in which 43 % of the fatal accidents occur within another 16 % of the average flight time. Only 9 % of the accidents are attributed to the cruise phase (see Fig. 1.5).
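To make the exposure comparison concrete, each phase's share of accidents can be related to its share of flight time. The short Python sketch below uses only the percentages quoted above; since the source does not state the cruise phase's share of flight time, no ratio is computed for cruise.

```python
# Relative accident density per flight phase: share of accidents divided by
# share of average flight time (percentages as quoted above).
phases = {
    "takeoff and climb":    {"accident_share": 0.31, "time_share": 0.16},
    "approach and landing": {"accident_share": 0.43, "time_share": 0.16},
}

for name, p in phases.items():
    density = p["accident_share"] / p["time_share"]
    print(f"{name}: {density:.1f}x the share expected from flight-time exposure alone")
```

On these figures, approach and landing carry roughly 2.7 times, and takeoff and climb roughly 1.9 times, the accident share that their duration alone would suggest.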

Fig. 1.5 Accidents according to flight phase

Nevertheless, 12 % occurred during the taxi phase. The Flight Safety Foundation (FSF) refers to an estimated 27,000 ramp accidents and incidents per year resulting in 243,000 injuries. The related costs to the airlines amount to at least USD 10 billion (FSF 2009). Large airlines incur EUR 20–30 million worth of taxi- and ground-related damages annually.

1.2.6 Types of Accidents

Ninety fatal accidents occurred around the world between 1998 and 2007. According to CAST/ICAO, these are distributed among the following categories (see Fig. 1.6 and Table 1.3).

Fig. 1.6 Fatal accident count according to type of accident (Boeing 2009)

Table 1.3 Abbreviations of accident types

Some remarks regarding the individual accident types:

CFIT: To date, almost no aircraft equipped with an Enhanced Ground Proximity Warning System (EGPWS) has been lost in this way. It is important to emphasize the ENHANCED function at this point: CFIT accidents headed the accident statistics for a long time, even for aircraft outfitted with “normal” GPWS.

Loss of Control in Flight: Aircraft commonly in use today and possessing partial or complete flight envelope protection are only rarely affected by these types of accidents. Yet, they are in no way immune to them if the system is not operated properly (e.g. due to a training deficit) or is defective. Generally speaking, only improved “Upset Recovery Training” can help in this case.

Turbulence: This is the most common cause of injury, albeit not fatalities, in cruise flight.

For modern aircraft, the key accident-related factors continue to be all those associated with the runway: runway excursion, landing, runway incursion.

1.2.7 Accident Rates According to Different Aviation Sectors

Table 1.4 depicts the accident rates for the US aviation industry based on flight hours between 1992 and 1997.

Table 1.4 Accident rates according to aviation sector

In comparison, an automobile accident takes place in Germany almost every 4,000 driving hours. It must be noted, however, that the associated risk of severe injury or even death per single accident is lower than in an aircraft.

Interestingly: if one considers the relationship of “tons carried per km” with respect to all general means of transport, then the common building elevator proves to be the safest mode of conveyance.

1.2.8 Accidents in the German Air Transport Industry

The German Federal Bureau of Aircraft Accident Investigation (BFU) recorded the following accidents by German registered aircraft (with a maximum takeoff weight > 5.7 t) over foreign and domestic soil (see Table 1.5).

Table 1.5 Accidents in the German air transport industry (BFU 2008)

The overall accident rate during this period equated to approx. 1 accident per year per 200 registered aircraft. For the professional pilot with a career spanning approx. 30 years, this means that there is an appreciable risk of being involved in an accident during this period.

1.3 Basic Principles

Before presenting the findings from the accident investigations, it is essential to define some of the basic principles of flight safety.

1.3.1 Zero Accident Rate

The reliability of commercial aviation in developed countries has improved to an impressive level since around the year 2000. Yet, the increase in traffic density stands opposed to this positive trend, bringing with it the potential for an increased number of accidents in the future if the rate of accidents per flight is not improved upon even further. In all probability, it will never be possible to achieve a zero accident rate. Notwithstanding, this still remains the elemental goal of every transport pilot.

1.3.2 Safety Net

The share of accidents that could have been prevented by the flight crews is around 70 % (National Civil Aviation Review Commission 1998). When referring to these accidents, one speaks of “human error”, meaning mostly that of the pilots. In contrast, the number of cases in which pilots were able to prevent an accident remains uncounted and statistically unrecorded.

Yet human error, now more commonly discussed under the heading of human factors, is very rarely the sole cause of such accidents. James Reason's model is widely used to explain why this is so. The model assumes multiple preventive levels, where a failure at a single level does not necessarily result in an accident. Only when failures take place at multiple levels simultaneously can an accident be expected (see Fig. 1.7). This is referred to as a safety net or a safety chain (Reason 1991).
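The logic of the model can be illustrated with a small calculation. Purely as a sketch, assume the preventive levels act as independent barriers, each failing with some small probability; an accident then requires all of them to fail at once, so the combined probability shrinks multiplicatively. The probabilities below are invented for illustration and are not taken from Reason (1991) or from the accident statistics.

```python
# Illustrative "safety net": an accident requires a simultaneous failure at
# every preventive level. Per-level failure probabilities are hypothetical,
# and independence is assumed, which real operations do not guarantee.
from math import prod

failure_probability = {
    "legislator / authority": 0.01,
    "manufacturer":           0.01,
    "operator":               0.02,
    "flight crew":            0.05,
}

p_accident = prod(failure_probability.values())
print(f"Probability that all levels fail together: {p_accident:.0e}")
# Removing or weakening any single level raises this figure sharply.
```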

Fig. 1.7 Prevention levels for accident avoidance (according to Reason 1991)

Somewhat abridged: Legislators and authorities ensure the creation of and compliance with uniform standards applicable to aircraft manufacturing, crew training and infrastructure. Aircraft manufacturers build the aircraft and its systems. Aircraft operators ensure proper aircraft maintenance, personnel selection, training and compliance with desired operating standards. Pilots function in a preventive manner to ensure as risk-free an operation as possible and, in the event of a malfunction, to defuse the problem in a safe manner. Errors will occur at all of these levels, with the pilots being the last in a long chain of involved parties who could ultimately prevent an accident from happening. They are the final link in the safety chain and are, therefore, often mistakenly seen as the main cause.

This subject will be discussed in more detail in the chapter on “Human error”.

1.3.3 Economics and Flight Safety

If the goal were to achieve a theoretical level of perfect safety then, in order to limit the effects of an explosion in the cargo bay, for example, the bay would have to be divided into smaller compartments separated by special and very expensive synthetic materials, or perhaps even reinforced steel plating. The aircraft would then be “bomb-proof” in the truest sense of the word, but it would weigh 140 tons rather than 40 tons. This, of course, is utopian because it would be extremely uneconomical.

Consequently, flight safety will always be a compromise between hazard, risk and cost.

Statistics reveal that

  • airlines in developed countries are safer than those in lesser developed countries.

  • passenger-related flight operations are safer than freight-related flight operations.

  • large air carriers are safer than smaller air carriers.

The general aviation sector apparently provides a sufficient degree of safety. This branch of the industry experiences one accident every 10,000 h, yet no radical changes have been made in the technical requirements these aircraft are subject to or in the expertise required of the pilots who fly them. By comparison, a large airline with the same statistical level of safety and 650,000 flights per year would experience around 100 accidents a year, a third of which would involve fatal injuries. Such an airline would have no chance of being successful and would necessarily disappear from the marketplace.

Therefore, the legal demands placed on the commercial aviation industry are at an overall much higher level. But large variances can be found in the accident rates in this sector as well, which cannot be explained by technology or differences in the operating environments alone. In general (but of course not always), large air carriers offer a degree of flight safety that is higher than that of smaller air carriers. This also applies to the low-cost carriers, some of which operate more safely than the network carriers (Flouris 2006). Safety is primarily a question of fleet size and not of pricing policy or the airline's position in the marketplace.

The following example should help explain this:

According to Table 1.2, a small airline has an accident every 200,000 flights. Assume this fictitious company employs a fleet of 5 aircraft and each aircraft flies 1,000 flights per year. As such, the company produces 5,000 flights every year, meaning it would have an accident every 40 years. From an economic standpoint, it would be unwise for this small, fictitious company to invest more into safety than necessary to achieve this accident rate.
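The arithmetic behind this example, and behind the 500-aircraft comparison in the following paragraph, can be reproduced in a few lines. This sketch uses only the figures given in the text: one accident per 200,000 flights and 1,000 flights per aircraft per year.

```python
# Expected accident frequency as a function of fleet size, assuming the rate
# from Table 1.2 (1 accident per 200,000 flights) and 1,000 flights per
# aircraft per year, as stated in the text.
FLIGHTS_PER_ACCIDENT = 200_000
FLIGHTS_PER_AIRCRAFT_PER_YEAR = 1_000

def accidents_per_year(fleet_size: int) -> float:
    flights_per_year = fleet_size * FLIGHTS_PER_AIRCRAFT_PER_YEAR
    return flights_per_year / FLIGHTS_PER_ACCIDENT

for fleet in (5, 500):
    rate = accidents_per_year(fleet)
    print(f"{fleet:>3} aircraft: {rate:.2f} accidents per year "
          f"(one accident roughly every {1 / rate:.1f} years)")
```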

In comparison, a large airline with 500 aircraft produces 500,000 flights a year. Assuming the same level of safety as for the small company above, this company would experience two to three accidents each year, of which one to two would result in a total loss and one in fatalities. The following examples show just how threatening such an accident rate can be to the existence of an airline:

  • ValuJet almost disappeared from the market in the USA after an accident in Florida and subsequently changed its name to AirTran.

  • The Turkish company, Birgenair, met with a similar fate following the total loss of an aircraft in the Dominican Republic with German tourists on board.

  • Lauda-Air ran into deep trouble following a total loss in Thailand.

  • Crossair went through a severe crisis following a series of three total losses.

  • The Cypriot carrier, Helios, also went through a very difficult crisis following a spectacular total loss, which was traced back to deficits in their safety program.

One serious accident can threaten the very existence of an air carrier. Consequently, a level of safety must be achieved that ensures the carrier only infrequently becomes the focus of the public's attention. Looking into the future of the industry, this also applies to airline alliances or cooperations that share a common image. Additional investments in aircraft technology or in pilot selection and training are economically wise, because an accident would inevitably be accompanied by a loss of trust and subsequent revenue shortfalls, neither of which is covered by insurance.

An airline will become increasingly averse to risk as its fleet size increases. It will strive to ensure that fewer and fewer risks are taken in all relevant areas. It will invest more money in safety and accept a short-term economic disadvantage compared to the smaller carrier. This presents a problem in that, while the cost of a particular initiative taken to increase safety can be clearly measured, the anticipated effect of a reduced probability of accident cannot be easily quantified.

1.3.4 High-Profile Accidents

People always seem to be more interested in large, sensational catastrophes than in small, everyday accidents. “Small” accidents receive merely marginal attention. Even though German-registered aircraft experience 3–10 accidents a year, the public perceives only the exceptional events.

This selective perception of accidents with a large media profile has the effect of forcing air carriers to invest a great deal of time, effort and money into safety. Because of this, passengers on commercial airliners are transported at an objective level of safety that the automobile driver or even a private pilot wouldn’t deem necessary.

The safety image of an airline is of tremendous importance: if it is good, one accident could, under certain circumstances, be absorbed without great economic consequence (e.g. the Swissair accident near Halifax). If it is poor, on the other hand, mere speculation could be enough to cause the company serious difficulties. Public opinion does not wait years for the official accident report; it is very quick to issue a premature verdict (e.g. the accident of the U.S. airline TWA off Long Island).

The importance of an airline’s safety image is also affirmed by the concept of the public Perceived Safety Risk (PSR) (Simon and Mitchell 2009). This correlation is also referenced in the ICAO Safety Management Manual (see Fig. 1.8).

Fig. 1.8 Public perceived safety risk

An airline having achieved the highest PSR “Surplus” level has two options:

  • Quality leadership (create a premium market)

  • Cost leadership (reduce costs)

By seizing the second option, however, the PSR will drop one level down to “Acceptable”. At this level, competition is carried out through ticket price or service. In cases where the standing in public opinion is poor, safety investments alone can lead to a reassessment and ultimately improve the image.

1.3.5 Safety Management Through Flight Operations

In the past, a “jolt” could be seen going through an airline following a serious accident. All at once, the painful reality of gaps in the safety culture became evident, gaps that then had to be closed. Changes that had been impossible to implement were all of a sudden possible. One example of this is USAir, which, after a series of accidents from 1989 to 1994 (five total losses with a combined total of 211 passenger fatalities), began an in-house “revolution”. Since that time, it has faded from the safety discussion. A similar situation can be observed with Korean Air. Following a long series of accidents (five accidents from Aug. 97 to Dec. 99), drastic in-house measures were taken that successfully took the airline out of the limelight.

From a purely economic standpoint, it would be better to close any safety gaps through more effective prevention prior to a serious accident.

Within the framework of a Safety Management System, the costs of safety measures as well as the estimated costs that would be expected in the event of an accident are brought into relation with the probability of occurrence of the associated accident. In this manner, a point of reference is established for determining the cost-effectiveness of a measure.
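A minimal sketch of such a comparison, with entirely hypothetical numbers: the expected loss avoided by a measure is the estimated accident cost multiplied by the reduction in accident probability credited to that measure, and this is set against the annual cost of the measure itself. None of the figures below come from the text.

```python
# Hypothetical SMS-style cost-effectiveness check; all figures are invented
# for illustration only.
accident_cost = 500_000_000        # estimated total cost of one accident (EUR)
p_accident_baseline = 2e-7         # accidents per flight without the measure
p_accident_with_measure = 1e-7     # accidents per flight with the measure
flights_per_year = 500_000
measure_cost_per_year = 5_000_000  # annual cost of the safety measure (EUR)

expected_loss_avoided = (
    (p_accident_baseline - p_accident_with_measure) * flights_per_year * accident_cost
)

print(f"Expected loss avoided per year: EUR {expected_loss_avoided:,.0f}")
print(f"Cost of the measure per year:   EUR {measure_cost_per_year:,.0f}")
print("Cost-effective on these assumptions"
      if expected_loss_avoided > measure_cost_per_year
      else "Not cost-effective on these assumptions")
```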

A second option available to company management would be to observe the competition to ensure they are not found lacking with respect to safety-relevant initiatives.

This is because, in the event of an accident, the management will be questioned by the media and the victims as to whether it did everything in its power to prevent the accident.

It becomes problematic if management has consciously saved money by not installing a market-ready system or by not sufficiently implementing a recognized safety measure. One example is the TCAS collision alerting system: installation of TCAS in transport aircraft was not required by German law in 1997, even though systems available on the market were already mandatory in the USA. That year, 1997, a German Air Force Tupolev collided in midair with a US military transport aircraft off the coast of Namibia, costing 33 lives. Germany's Minister of Defence at the time came under considerable pressure from the public because the system was not installed in the Air Force's Tupolev aircraft.

1.3.6 Individually “Sufficient” Safety vs. Objectively Necessary Safety, Part I

When controlling an aircraft, the pilot unconsciously establishes a level of safety by “gut feeling” or instinct. Using the example of an automobile, safety is only one aspect out of many that interest the driver while at the controls.

  • Driving too fast, and thereby unsafely, can result from deadline pressures or perhaps just the fun of driving fast.

  • A small child in the car or a ringing mobile phone can distract the driver’s attention.

  • Violating minimum traffic separation distances may be accepted as a demonstration of dominance or power.

Inexperience, insufficient routine, negligence, time pressure, dominant behaviour and laziness are always along for the ride and increase the risk, yet they are accepted by society at the cost of avoidable traffic victims.

The integrity of life and limb, as one of the supreme human rights, is permanently disregarded in everyday life through thoughtlessness or carelessness. The fact that transportation ministers lack the political capacity to take action against this mechanism is demonstrated by the regularity with which sensible expert recommendations for improving road traffic safety come to nothing. For example, it is still possible to drive a car in Germany with significant concentrations of alcohol in the blood. Yet alcohol and flight duty are absolutely incompatible for the transport pilot.

Objectively, it would be necessary to defuse identifiable risks before an accident occurs and, in the case of an accident that has already occurred, to ensure that it is not allowed to happen again. This is precisely what is demanded of commercial aviation.

All accidents are investigated and evaluated by a national accident investigation body. Recommendations at the end of the accident report should impact all involved aviation stakeholders so that the mistakes identified are corrected.

If this method were applied to road traffic, it would mean the following: an accident due to excess speed alone would, as a minimum, result in a recommendation for stricter speed limits, and all motorists would adhere to the provision out of conviction. This is obviously an unrealistic scenario. In the following section on “Standard Operating Procedures” (SOPs), the difference between perceived, individually sufficient safety and objectively necessary safety will be dealt with in greater detail.

1.4 Origin and Prevention of Accidents

It is possible to derive recommendations for the prevention of accidents from the knowledge gained about their origin. While every single accident is carefully investigated and evaluated, there are only a few studies dealing with the commonalities between accidents. Three of these studies are noteworthy: one from the U.S. National Transportation Safety Board (NTSB), one from Lufthansa and one from Boeing.

1.4.1 The NTSB Study

The NTSB analysed 37 commercial aircraft accident reports it had issued during the period between 1978 and 1990. In all the accidents investigated, the pilots were named as the initiating or contributing factor (NTSB 1994). In these 37 accidents, the crews made 302 work-related errors (see Tables 1.6 and 1.7). The number of errors per accident lies between 3 and 19, with an average of 7.

Table 1.6 Accident-related crew errors
Table 1.7 Description of the error types

Figure 1.9 depicts the distribution of errors attributed to the respective pilot position. Errors attributed to the flight engineer are excluded.

Fig. 1.9 Pilot error distribution

  • Commonalities between the accidents investigated

  • In over 80 % of the accidents, the captain was the pilot flying and the first officer (FO) was the pilot not flying.

  • The primary failures attributed to the crews were their mistakes in the application of SOPs, incorrect tactical decisions and errors in monitoring (Monitoring/Challenging).

  • Errors in Monitoring/Challenging took place in over 80 % of the accidents. This failure was attributed exclusively to the FO.

  • In 40 % of the accidents, the captain made incorrect decisions that were not challenged by the FO. In most cases, the decisions in question had to do with a failure to follow a required course of action, such as a go-around.

  • 55 % of the accidents were on flights affected by a flight delay. The average rate of delay for the overall air transport industry at the time was about 25 %.

  • Crews allow themselves to be pressured by delays and make significantly more workload-related mistakes, especially on the ground during flight preparation and taxiing.

  • 73 % of all accidents happened on the first day of a joint tour by the captain and FO. A total of 44 % actually happened on the very first leg of the tour.

  • Half of the crews had been awake for longer than 12 h at the time of the accident (Time Since Awake, TSA). These fatigued crews made significant mistakes in the areas of SOPs and decision making. Overnight flights, especially, are more frequently prone to accidents.

  • 53 % of the FOs were in their first year with the company. The average flight time for the FOs on their respective aircraft type was 419 h.

As a consequence, the NTSB required:

A LOFT (Line Oriented Flight Training) component should be scheduled in the simulator for each type rating, which

  1. provides each pilot with the opportunity to exercise his Monitoring/Challenging function as pilot not flying,

  2. provides crews with the opportunity to exercise their tactical decision-making capabilities,

  3. allows crews to practice correct checklist reading procedures.

Instructors should receive better training from the airlines so that, during line training, they

  1. will place greater emphasis on Monitoring/Challenging, especially with FOs,

  2. are able to put the captain in a better position to accept criticism.

  • Further implications of the study

Accidents very rarely, if ever, occur because of a single mistake; they occur predominantly as the result of a chain of errors. Particularly inexperienced FOs have difficulties addressing the mistakes made by their captains. This is especially true when the two are still getting to know each other and factors such as time pressure and fatigue begin to aggravate the situation. The “novelty of the task” increases the probability of an error by a factor of 17 and “time pressure” by a factor of 11. Captains must know this so they do not overburden their (inexperienced) FOs and, in so doing, deprive themselves of their only source of feedback for recognizing and correcting their own mistakes.

FOs must receive training that puts them in a position to properly assume their role of monitoring the captain, beginning with the first leg alone with a captain during line training. They must be familiar with all safety-relevant SOPs, have the skills to safely handle the aircraft in every phase of flight, be knowledgeable of the aircraft's flight limitations and be prepared to openly address these at all times, even when their thoughts are not yet fully formed. They must intervene promptly when required and take over the controls as necessary.

  • Boeing and several airlines have taken this into account and have replaced the term PNF (Pilot Not Flying) with a more sensible PM (Pilot Monitoring).

  • Captains must be able to accept criticism from their FOs and beware of belittling what they deem to be improper or exaggerated criticism.

  • Captains should call for criticism anytime they suspect it is being withheld.

  • A captain should give his FO an opportunity to become accustomed to him on their first day together, and especially on the first leg of a joint tour. At the same time, risks of any kind should be avoided to the greatest extent possible. This may mean forgoing a voluntarily shortened approach, a visual approach or an “immediate T/O” in order to avoid overburdening the FO's monitoring function.

Even on the first leg, the FO must possess enough self-confidence in himself and his capabilities to be able to immediately and openly address disagreements and mistakes.

The calling for criticism by the captain and the offering of criticism by the FO are crucial to the successful prevention of accidents.

Only then can a hierarchical gradient exist that guarantees the safe working relationship between both pilots. By forcing an FO into a “passenger role”—consciously or unconsciously—the captain potentially deprives himself of an important source of competence, good ideas and problem-solving recommendations.

1.4.2 The Lufthansa Study

Drawing conclusions about the state of aviation safety from relatively few accidents is very difficult from a statistical perspective. In order to improve the statistical basis for operationally relevant conclusions, it makes sense not only to investigate accidents, but also to investigate close calls or safety-critical incidents, which occur much more frequently. For this reason, Lufthansa carried out an intensive study of incidents from 1997 to 1999, in which 2,070 pilots took part (Lufthansa 1999).

As it turned out, 99.9 % of the pilots had experienced at least one safety-critical incident in their careers. A surprising figure of approx. 3,000 incidents per year emerged; this equates to 8 incidents per day or, expressed otherwise, around one incident per Lufthansa pilot per year. In order to narrow down the nature and scope of the risks, the incidents were classified according to the following criteria and allocated to them individually or in combinations of up to four:

  • OPS: operational problems

  • HUM: human work-related errors

  • TEC: technical faults

  • SOC: social climate among the crew

Incidents attributed to only one group pose a small risk because a structured cockpit working environment will defuse individual errors. The combination OPS + HUM + SOC stands out conspicuously, comprising 37.8 % of all incidents (see Fig. 1.10). A possible scenario could look like this: an operational problem (OPS) causes an increased workload, from which a work-related error (HUM) results that is not corrected due to a stressed cockpit environment (SOC).

Fig. 1.10 Frequency of event configurations

It is apparent that a negative social climate acts like a “turbocharger” for accidents. This study showed for the first time that a quantitative correlation can be measured between social climate and flight safety.

The TEC (technical faults) and OPS (errors resulting from operational processes) categories are the least prevalent and can be influenced only to a limited extent by the pilots.

The OPS category deals primarily with bad weather and dangerously close encounters with other aircraft, or so-called “near misses”.

Technical faults are comprised mostly of engine and landing gear problems, as well as false indicator readouts on the flight guidance instruments.

Incidents that could not be influenced by the pilots (those involving only TEC, only OPS or the combination TEC + OPS) make up merely 13 % (1.2 % + 7.7 % + 4.1 %) of all incidents. Conversely, 87 % of all incidents could have potentially been defused by the flight crews.

It is evident from this chart that SOC-related problems played a role in 70 % of all incidents. In incidents where pilot error was involved, the proportion even rises to 80 %. From the “turbocharger” insight above, it follows that 80 % of all incidents in which human error played a role could have been prevented if an optimal cockpit climate had prevailed.

The following insight is particularly interesting: contrary to the common perception that pilots might be overwhelmed by new technologies, the study revealed that the combination of technical problems and human error (TEC + HUM) remained below 1 %, while each category by itself accounted for 7.7 % (TEC) and 4.9 % (HUM) respectively.

The analysis revealed the following key aspects:

Communication

Interpersonal communication problems occurred in 53 % of all incidents. Of these, approx. 30 % took place inside the cockpit while 70 % took place between the crew and outside parties. In this context, ATC communications were most often involved, playing a role in 27 % of all incidents. The main points of focus were:

  • Mandatory statements, such as callouts in the event of deviations, being omitted.

  • Concerns not being expressed.

  • Important statements being incomplete, unintelligible, missed or ignored.

Defence strategies to guard against these errors have long been known, yet crews evidently have difficulty applying them consistently. Considering that communication problems played a role in 53 % of all incidents, any further discussion as to the need for professional communications training should be unnecessary. Here is a short look ahead to the chapter on Communication:

  • When something is unclear (cockpit, ATC): Seek clarification from the message sender.

  • Use standard R/T consistently.

  • Address all deviations without ambiguity.

  • Be alert for non-verbal signals.

  • First dial in the value, then provide the readback via R/T from the FMA/MCP/COMM display.

  • Employ the “Sterile Cockpit” concept (80 % of the incidents in 7 % of the time).

A crew member going it alone

A “non-jointly coordinated action” was involved in 12 % of all safety-related incidents. The “lone warrior” syndrome is still a central problem in the cockpit. In most cases, it does not involve “ill will or even a decision made solely by one person”. It is more often target fixation under difficult operational conditions that turns a good team player into a solo pilot: a tight slot-time, an expiring hold-over time, the desire to get the passengers to their destination on time.

It lies in the nature of the industry that the problem of a crew member “going it alone” will usually be triggered by the captain. It is easier for the captain to stop an FO from acting alone, simply based on his hierarchical position and overall responsibility, as well as his age and experience. Co-pilots commonly try to excuse their own inaction in the incident report by noting: “The captain would have probably acted as he did, regardless”.

The study revealed that, in 918 out of a total 1,897 incidents, the FO did not express any criticism. In 210 cases, concerns were expressed, but these were disregarded by the pilot flying. Recommendations stemming from the study are:

  • Uneasiness, differing opinions, deviations and objections should be articulated loud and clear.

  • Avoid rushing; don’t allow yourself to be pushed; create some free space as a buffer for any unforeseen circumstances (fuel, descent, ground times, etc.).

It should be noted at this point:

It is crucial for pilots to maintain a good overview (so-called situational awareness). Thorough flight planning with a deliberate assessment of potential risk helps improve this overview and prevent subsequent problems from arising.

Specific risks should be addressed during the departure, takeoff and approach briefings.

Human work-related errors (HUM)

The effects of work-related errors can be avoided or minimized through a tightly woven safety net and a structured, uniform work routine.

Nevertheless, human error was a factor in 87 % of all incidents. Of these,

  • 90 % involved available facts that were not considered,

  • 79 % involved the cockpit crew from the outset, while

  • 77 % of the work-related errors were associated with rule violations.

It should also be noted at this point that errors are unavoidable; they can’t be prevented entirely. Errors are not necessarily safety-relevant as long as they are discovered and caught, such as through a checklist or through intervention and feedback from a colleague. It first becomes critical when errors go undiscovered and evolve into an error chain, which can lead to an incident or accident.

It is important for every transport pilot to analyse work-related errors when detected, either by himself or together with the crew, to avoid repeating them where possible. If it is not possible to address a situation directly when it arises, then a short discussion in the cockpit following landing may be sufficient. It does not have to be long and can be introduced with the questions: “Did you notice any mistakes?” or “Would you have done anything differently than I did?”

Standard Operating Procedures (SOPs)

The most logical starting point for improving flight safety is through disciplined compliance with the SOPs. This is generally well known and has been trained intensively for many years. But then, why is it so hard for the crews to comply with them?

According to the study, the importance of SOPs is generally not called into question by the crews. Nevertheless, they are breached over and over again, either knowingly or unknowingly. With over 2,000 flights a day in a large airline, tight limits are necessary for economic survival, yet these limits must also be padded with clear-cut buffers. The buffers must be available in the event of unforeseen circumstances.

By the same token, mutual monitoring according to a definite set of rules is essential. When a limit is transgressed, the barrier falls away for the person being monitored; a second limit does not exist. Moreover, once a rule violation is tolerated, the inhibition against further transgressions falls away with it. This encourages entry into the error chain.

Every SOP that is ignored can represent the last level of prevention prior to the accident.

  • Unstabilized approach

The unstabilized approach plays a role in 20 % of all incidents and, as such, makes up the lion’s share of SOP violations:

  • 58 % too high

  • 57 % too fast

  • 27 % due to lateral offset

  • 17 % too low

  • 21 % due to incorrect configuration

All these situations could have been elegantly and safely alleviated with a go-around procedure. Yet, the study revealed just how poorly developed the disposition towards this solution really is.

The study also revealed that the majority of unstabilized approaches were flown by the captain. When the prescribed callouts are not made as a limit is about to be transgressed, it is evidently the co-pilot's individual tolerance threshold that determines the size of the mesh in the last safety net. This means that the co-pilot is the last resort for containing the error when a captain transgresses a limit.

  • Deviation from ATC clearances

This error almost always occurs unconsciously, illustrating just how great the various pressures inside the cockpit normally are. This type of error is found in 19 % of all incidents. Of these:

  • 45 % is attributed to flying at an uncleared flight level

  • 22 % is attributed to course deviation

  • 21 % is attributed to deviation from a SID or STAR

  • 10 % is attributed to takeoff or landing without clearance

Strategies for error prevention:

  • Distractions, communication and unnecessary work must be avoided whenever possible during the critical phases of flight.

  • All pilots in the cockpit must hear a clearance.

  • Uncertainty regarding ATC clearances should be clarified with the controller, not in the cockpit.

  • First enter the value into the FCU/MCP, then readback the value from the display via R/T.

Basic flying

The study revealed that deficiencies existed in flying ability, as well: Problems with basic flying played a role in 25 % of all incidents.

  • 60 % occurred during correction of target parameters: too much, too little, too late, too slow

  • 33 % occurred during landing: too far, too hard, incorrect flare, deviation from the centerline

  • 21 % occurred while taxiing: too fast, using the wrong taxiway and taxiing over runway holding points

  • 10 % occurred during go-around: incorrect manoeuvre sequence, dropping below minimum speed and, in three incidents, even contact with the ground

The go-around in particular, because it occurs so infrequently, is over-represented among these incidents by a factor of at least 27. This reveals a discrepancy with respect to the simulator, where the go-around presents no problem. In practice, however, there are large emotional hurdles and even feelings of personal failure to deal with, which may significantly complicate the overall manoeuvre.

An important corrective measure would be to augment “stick-and-rudder training” in the simulator. In so doing, the desired degree of competency can be achieved while a sufficiently rapid “instrument scan” is acquired. The long-haul fleet is especially prone to this problem.

A note from the authors: If it is possible to do so without impairing safety (weather, traffic, ATC, fatigue), increased line operations should also be flown using a reduced degree of automation in an effort to supplement simulator training.

Taxi incidents are all too frequent. The seconds saved by taxiing fast can scarcely be measured; the number of related incidents, however, all the more.

Many FOs do not intervene with callouts, but assume a front-seat passenger mentality: “I don't like it either when someone interferes while I'm driving my car”. Personal discomfort in this context is a strong indicator that verbal intervention is called for.

Another note from the authors: Where technically possible, it is helpful for FOs to taxi from time-to-time in their role as monitors.

Equipment operation

The crew is well familiar with their airplane and its functions; instrument inputs are repeated daily, a hundred times over. A great deal of self-discipline is required to keep from becoming complacent in this regard. Only in this manner can operating errors, which account for 18 % of all incidents, be avoided:

  • 30 % is attributed to incorrect inputs

  • 20 % is attributed to mistakenly omitted component actuation

  • 14 % is attributed to actuating the wrong switch

  • 14 % is attributed to actuating the wrong mode

Countermeasures:

  • Deliberate verification of inputs into the FMA and compliance with FMA callouts

  • The other crew member should also check the result.

1.4.3 The Boeing Study

What can be done to prevent accidents? This question was posed by the American airplane manufacturer Boeing, which examined 232 accidents that took place around the world from 1982 to 1991 involving transport aircraft with maximum takeoff weights greater than 60,000 pounds. The objective was to determine, for each individual case, the prevention strategies that would have averted the accident at its origin. The number of possible strategies ranged from 1 (in 39 accidents) to 20 (in one accident), with an average of just under four strategies per accident (Boeing 1993).

Table 1.8 shows the proportion of the 232 accidents that could have been prevented by the respective strategy. The terms used are understood by flight crews around the world.

Table 1.8 Prevention strategies

The individual terms are explained in Table 1.9:

Table 1.9 Explanation of the prevention strategies

The Boeing study largely confirms the findings of both the NTSB and the Lufthansa studies referenced earlier.

1.4.4 Conclusion

Empirical findings from the studies identify three ways to approach more effective accident prevention:

  1. More stringent application of SOPs

  2. Improved CRM

  3. Improved basic flying

The requirement for more training of the obligatory “Abnormal Procedures” does not appear on this list. Rather, because the majority of accidents occurred during “Normal Operations”, significantly greater emphasis should be placed on these aspects of normal-operations training.

1.5 Consequences

1.5.1 Flight Operations

  • Pilot selection and training should be optimized to the desired level of flight operation safety.

  • With a change of employer, pilots should receive training as to how their individual work habits must be adapted to conform to the level of safety demanded by the new operating environment.

  • Airline management should be cognizant of the investment that must be made in safety so that the company's long-term economic basis is not destroyed by a short-term profit motive.

  • In corporate groups comprised of multiple airlines, similar and uniform selection and training standards should be pursued where possible for all branch flight operations. This will help produce a consistent level of safety.

  • SOPs are the main key to flight safety. They must be known and applied. “Need-to-know” content must be defined and thoroughly trained, both in theory and in practice. Flight operations management must ensure that there are no SOPs which are not, or are not adequately, being complied with. When such SOPs are identified, flight operations must become active: the SOPs in question must be modified or substantiated in detail and handled with greater emphasis during “Recurrent Training”. All multipliers, such as management pilots and trainers, should maintain a uniformly high standard when applying the SOPs. Grey areas should leave as little room for interpretation as possible and must be clarified at the highest level of the flight operations hierarchy. All pilots should be aware of the mechanisms described further below, which can weaken the disciplined application of SOPs.

  • Personalized feedback from Flight Data Monitoring (FDM) is sensible under very strict conditions (data protection outside the disciplinary hierarchy with operating partner veto rights).

  • Furthermore, pilots should have the possibility of submitting confidential safety reports to safety pilots without fear of disciplinary action or legal consequences. Moreover, a culture should be established within the flight operation in which these confidential reports are actually submitted in writing. According to well-founded estimates, only about 1 % of all safety-critical incidents occurring within a large German airline are reported in this manner.

  • New FOs, above all, are vulnerable to accidents. The initial training an FO receives at an airline that is new to him should be extensive enough that he is capable of recognizing and addressing, ideally, all of the captain's mistakes on their first flight alone together (i.e. without an additional FO).

  • CRM must be precisely defined and integrated into every training event by the flight operation. If CRM is assessed during training, then it can be more effectively developed by the individual. A prerequisite for the assessment would be the development of a flight operation-based CRM Assessment Policy that allows no room for arbitrary action on the part of individual trainers. This book provides guidance to this end in the sections that follow.

  • “Basic flying” must be improved upon. Initial training should include the safe mastery of all levels of automation, basic jet flying, exploration of aircraft performance limits and the training of monitoring skills. Especially for long-haul pilots, measures should be taken to effectively compensate for the low “stick time” common to this type of operation. An increased emphasis on basic flying training in the simulator can be implemented, as well as a requirement for more hands-on flying at differing levels of automation in daily flight operations under precisely defined conditions of fatigue, weather and traffic density. Northwest Airlines defined these conditions in its Flight Operations Handbook, thereby encouraging their pilots to fly at reduced levels of automation (Landry 2006).

1.5.2 Individually “Sufficient” Safety vs. Objectively Necessary Safety, Part II

Numerous individual consequences were listed above, particularly within the context of the Lufthansa study. For this reason, this section will remain general and concern itself for the most part with the difficulties encountered when trying to comply with SOPs as safety rules.

An individual pilot in a career encompassing around 20,000 flight hours will most likely never be involved in an accident.

Because nothing really serious happens to him month after month, it is possible that he will intuitively or unconsciously call these safety rules into question. A certain degree of looseness can develop over time that may not directly harm the individual, but can lead to a significant safety risk on the whole.

The development of this behaviour—analogous to road traffic—is normal for the individual, but nevertheless inappropriate. In the event of an accident, the resulting consequences could, under certain circumstances, be catastrophic for the flight operation and, with it, the entire pilot corps.

The objectively necessary level of safety conveyed during initial training with the airline tends to degenerate to an individually sufficient level of safety.

Pilots seem to be in a perpetual dilemma: on the one hand, they must painstakingly comply with the SOPs; on the other, they should flexibly call them into question if ignoring them would apparently result in a greater level of safety. Flexibility is called for especially when fuel is running short, for example, or a passenger requires urgent medical attention, or smoke or fire is detected, etc.

These are incidents where a deviation from a standard procedure could possibly increase safety. Such deviations should and can actually be limited to only a few specific cases.

Purely operational reasons do not justify deviations from the SOPs. If an approach is progressing too high or too fast, the SOPs are the lifeline that differentiates between an acceptable and an unacceptable risk. When an SOP is violated, the borderline between objectively necessary and individually sufficient safety is crossed.

In a further study conducted by Lufthansa, many pilots remarked that, in daily operations, they are oftentimes forced to deviate from an SOP due to ATC requirements. These deviations are seen as unavoidable in day-to-day flight operations, and the level of safety that remains is therefore considered to be sufficient. The study expressly points out that this is very critical: everyone who knowingly deviates from a rule does so for the most part in the assumed belief that they are acting safely. But risks cannot be minimized to the necessary degree in this manner (Lufthansa 2009).

The following example depicts the increased risk of a runway excursion (landing overrun) associated with SOP deviations during unstabilized approaches. To this end, the Dutch National Aerospace Laboratory (NLR) analysed this type of accident (Van Es 2005): 400 landing overruns were recorded between 1970 and 2005. With around 800 million landings during this period, the risk equated to 0.5 accidents per million landings. 53 % of the overruns took place on “slippery” or “contaminated” runways. The conditions listed below (see Table 1.10) increased the risk of a runway excursion by the factors shown:

Table 1.10 Landing overrun risk factors

Statistically, an accident rate of 0.5 accidents per million landings with a flight operation of 500,000 cycles per year would mean that a landing overrun will occur every two years: an objectively high risk.

An individual pilot flies about 10,000 cycles in his career. Statistically, only one out of 50 pilots will experience such an accident in their professional career: a potentially acceptable risk for the individual.

Several factors stand in the way of the disciplined adherence to the SOPs: it can be tedious and painstaking.

  • For instance, all private discussions and distractions should be avoided at any time below an altitude of 10,000 feet. Is this being strictly adhered to at all times?

  • Normal checklists are read many thousands of times over. It can easily happen that, if it becomes too routine, it will be read only superficially.

  • The call sign should be given first during R/T readbacks. This, too, proves to be difficult in practice.

Laziness, nonchalance, excess routine and complacency are an ever-present enticement to infringe against those procedures of seemingly lesser importance.

First and foremost, operational decisions must always be based on risk avoidance and error minimization. Economic considerations (e.g. delay, fuel) must play only a subordinate role. Private interests (e.g. proceeding, shuttling) should have absolutely no influence on safety-relevant decisions. In the interest of critical self-assessment, every pilot should pose the question as to how a particular flight would have been judged within the context of an aircraft accident investigation. This question should provide a benchmark, against which professionalism can be measured.

Flight operations must also pose self-critical questions, such as whether the SOPs published in the handbooks are practical, are being effectively put into practice and are being correctly taught. Otherwise, the impression may be conveyed that SOPs serve merely the legal self-protection of the aircraft manufacturers or operators, among others.

In addition, there are psychological findings that address the issue of a pilot's self-discipline. These are dealt with briefly in the following paragraphs.

1.5.3 Acquired Carelessness

While flying, as in many areas of private and professional life, it can frequently be observed that pilots ignore existing risks and disregard elementary rules of safety. This behaviour can be explained using the “Theory of acquired carelessness” (Frey and Schulz-Hardt 1998).

In a state of carelessness, it may be assumed, for example, that an error will not have any substantially negative consequences. This is referred to as “acquired carelessness” not because pilots are careless when they come “right out of flight school”, but because they acquire carelessness as a result of certain learning experiences. These experiences can be catalogued:

Individual experience

Carelessness arises when dangerous behaviour (e.g. SOP violations) is repeated and remains without negative consequence. The more frequently and intensively this happens, the more rapidly carelessness emerges. Because, especially in the flying profession, many SOPs were developed as a result of a single accident, violations against them may lead to a similar accident only after many years. Yet prevention of the improbable is precisely the objective.

One’s own positive experiences, such as those gained from poor weather approaches, can actually be dangerous. With each successful approach in convective weather, the positive outcome increases the probability that the same or even a greater risk will be taken the next time. For this reason, an experienced pilot may be inclined to underestimate an actual risk after the positive outcome of several high-risk approaches. This was identified as a contributing factor in the report following the Air France landing accident in Toronto in 2005 (TSB 2007).

Hedonism

Hedonism refers to the striving for, and the preserving of, a positive state of being, whereby greater significance is placed on the short-term rather than the long-term consequences. Carelessness can represent just such a positive state, because the exercise of care means an increase in near-term effort. Under certain circumstances, it may be more convenient to try to preserve an uncritical, uplifted disposition than to comply with an SOP.

80 % of all incidents took place at a time when at least one flight crew member interpreted the working environment as being disturbed. One-third of these cases involved this uplifted, excessively positive state (Lufthansa 1999). An effective means of combating this state is to comply with the Sterile Cockpit concept.

Imitation

Observing a person’s apparent success despite their careless behaviour often leads to the imitation of that behaviour. The captain’s (and each multiplier’s) example in this regard is particularly important in order to keep a “caution is cowardice” attitude from developing.

Control illusion

People are inclined to overestimate their own degree of influence. The illusion that “I have everything under control” facilitates risky behaviour—even when risks are perceived.

Unrealistic optimism

Although pilots are aware of the origin and principal importance of the SOPs with respect to hazard avoidance, they may be persuaded that they are not personally at risk: “It won't happen to me!”

Fatalism

A fatalistic attitude serves to impede one from changing his personal behaviour despite the threat of danger. It encompasses the mindset: “There are so many procedures, we can’t know them all anyway, let alone comply with them all the time—so what’s the use of learning them in the first place?”

1.6 CRM, Human Factors and Non-Technical Skills

Many pilots are sceptical when it comes to CRM, possibly having been influenced by a bad experience. This is comprehensible insofar as the physiological and psychological fundamentals of CRM, as well as the detailed safety-relevant behavioural patterns desired, are oftentimes insufficiently defined or disclosed. Therefore, it has been the aspiration of the Vereinigung Cockpit (VC/German Airline Pilots Association) to make CRM as practice-oriented and efficient as possible.

VC sees opportunities for improvement in systematic training, including time spent in simulators and aircraft outside the typical seminar environment.

The numbers referenced earlier speak clearly for improved CRM training:

  • 90 % of all incidents: available facts are not taken into consideration.

  • 80 % of all incidents take place in conjunction with a disturbed working environment.

  • 80 % of all accidents reveal deficiencies in the leadership of and collaboration between the crew.

  • 70 % of all accidents occur following incorrect decisions or a failure to make decisions.

  • 53 % of all incidents reveal communication problems.

  • 30 % of all accidents are based on inaccurate situational awareness.

  • 25 % of all accidents reveal symptoms of excess stress.

  • 16 % of all accidents can be prevented through the effective use of the captain’s authority.

  • 14 % of all accidents can be prevented through timely go-around decisions.

  • 12 % of all incidents: “the captain goes it alone”.

  • 4 to 7 % of all accidents happen to fatigued crews.

On the basis of these statistics, it is obvious that a large percentage of accidents and incidents could have been prevented through effective CRM. To accomplish this, an integrated training concept and its consistent implementation on the part of each pilot are essential.

A comprehensive CRM training concept, comprised of the following content, will be presented later in the text:

  • Basic principles of information processing

  • Fundamental approaches to dealing with errors

  • Communication

  • Stress management

  • Decision making

  • Leadership and team behaviour

  • Management of fatigue and attentiveness

  • Implementation in the training

  • Recommendations for an assessment policy

Only a very few employees at the highest levels of the hierarchy can cause as much damage to an airline as a pilot can.