1 Introduction

Artificial neural networks (ANNs) are used in many safety-related applications within industry. Typical applications in the aerospace industry include ANNs in flight control systems [1]; applications in medicine include ANNs for the diagnosis of certain diseases [2]. A wide-ranging review of applications of ANNs in safety-related industries can be found in a UK HSE report [3].

There are many reasons why industries find ANNs appealing. Most of these reasons relate to the functional benefits offered by ANNs. These benefits may include:

  • The ability to learn: This is useful for problems for which an intentionally complete algorithmic specification cannot be determined at the initial stages of development, or where there is little understanding of the relationship between inputs and outputs. Instead, the neural network uses learning algorithms and training sets to learn the features associated with the desired function.

  • Dealing with novel inputs: Providing generalisation to novel inputs using pre-learned samples for comparisons.

  • Operational performance: By exploiting the generalisation ability, the neural network can outperform other methods particularly in areas of pattern recognition and function approximation.

  • Computational efficiency: Neural networks can be faster and more memory-efficient than alternative methods.

Although neural networks are used in many safety-related applications, they share a common problem: ANNs are typically restricted to advisory roles. In other words, the ANN does not have the final decision in situations where there is a risk of severe consequences. Current safety standards have extremely limited recommendations for using artificial intelligence in safety critical systems. One example is IEC 61508-7 [4], where neural networks may be used as ‘safety bags’. A ‘safety bag’ is an external monitor that ensures the system does not enter an unsafe state; it may protect against residual specification and implementation faults which could adversely affect safety. However, the application of ANNs in safety critical systems is allowed only at the lowest safety integrity level (SIL 1).
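To make the ‘safety bag’ concept concrete, the following is a minimal Python sketch (all names, limits and the stand-in network are hypothetical illustrations, not taken from IEC 61508): an external monitor wraps an advisory ANN and vetoes any output that would leave a pre-defined safe envelope.

```python
# Minimal 'safety bag' sketch: an external monitor that vetoes any
# advisory ANN output falling outside a pre-defined safe envelope.
# All names and limits here are hypothetical illustrations.

def ann_advice(sensor_value: float) -> float:
    """Stand-in for the output of an advisory ANN (hypothetical)."""
    return 1.8 * sensor_value  # pretend this came from a trained network

class SafetyBag:
    def __init__(self, lower: float, upper: float, fallback: float):
        self.lower, self.upper = lower, upper
        self.fallback = fallback  # known-safe output used on veto

    def check(self, proposed: float) -> float:
        # Veto any command outside the safe envelope [lower, upper].
        if self.lower <= proposed <= self.upper:
            return proposed
        return self.fallback

bag = SafetyBag(lower=0.0, upper=10.0, fallback=0.0)
print(bag.check(ann_advice(3.0)))  # 5.4: within the envelope, passed on
print(bag.check(ann_advice(9.0)))  # 16.2: vetoed, fallback 0.0 used
```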

2 The problem

The principal reason why ANNs are restricted to advisory roles is the continued absence of safety assurance provided through compelling safety argumentation. Given the potential benefits of neural networks in many applications, it is desirable to utilise them in highly-dependable roles. A highly-dependable role may include situations where an ‘unsafe’ output from a neural network may lead to potential hazards.

There are many existing approaches for developing ANNs in safety critical systems. One particular approach involves using diverse neural networks [5, 6]. This technique attempts to overcome the difficulty of using a single ANN to cover some target function. The approach creates an ensemble of ANNs in which each member is developed differently but for the same problem. Each network is varied using a wide range of techniques, such as different training sets or learning algorithms. Although very low levels of error were achieved, these ensembles did not provide suitable safety arguments. One prime reason was that each ANN could only be analysed as a black box. This limitation is common to many other approaches, including verification and validation of ANNs [7, 8] reviewed in [9].

The problem involves generating white-box analytical arguments about the functional properties associated with learning and generalisation. The scope of this paper is to provide a summary of the work on the safety criteria [10, 11], a potentially suitable ANN model [12, 13] and the safety lifecycle [13].

3 Safety critical systems and safety argumentation

Safety critical systems are employed in many areas including the transport industries [1], medicine [7] and defence [14]. Systems need to be carefully developed if they are to have some direct influence on the safety of the user and the public. Safety critical systems are concerned with preventing incorrect operation that may lead to fatal or severe consequences. In other words, safety critical systems must not directly or indirectly contribute to the occurrence of a hazardous system state. A system level hazard is a condition that is potentially dangerous to people, society and the environment. Potentially hazardous states can be prevented through safety processes that aim to identify, analyse, control and mitigate hazards within the system. In general, no system can be described as totally safe [15]; this applies to humans as well as computers. However, systems have been shown to be acceptably safe for a given role [16]. Through analysis aided and assured by safety processes, failures can be detected or controlled, with the risk of failure assured to a tolerable level [16].

The software safety lifecycle specifies where certain safety processes should be performed throughout the development of software systems. Within the software context, a hazard is a software level condition that could give rise to a system level hazard. The following is an outline of some of the major processes performed during the software safety lifecycle:

  • Hazard identification: A major activity at the start of the software lifecycle. It requires in-depth knowledge of the system and inquisitively explores possible hazards. This may involve consulting a checklist of known hazards specific to the type of application (possibly from an initial hazard list).

  • Functional hazard analysis (FHA): This analyses the risk (the severity and probability) of potential accidents for each identified hazard. It is performed during the specification and design stages.

  • Preliminary system safety analysis (PSSA): The purpose of this phase is twofold: to ensure that the proposed design will adhere to the safety requirements, and to refine those requirements and help guide the design process.

  • System safety analysis (SSA): This process is performed during implementation, testing and other stages of development. Its main purpose is to gather evidence from these development stages to provide assurance that the safety requirements have been achieved.

  • Safety case: This final phase delivers a comprehensible and defensible argument that the software is acceptably safe to use in a given context. This is presented with the delivery and commissioning of the final software system.

The intentions of these safety processes are well established. They can be found in current safety standards such as ARP 4761 [17].

Current approaches for improving the safety of ANNs have been found to be inappropriate for safety critical systems [9]. They lack meaningful safety argumentation, particularly about the functional properties of ANNs, and many neglect typical safety concerns (such as hazard identification). The challenge is to devise a suitable ANN model that is both feasible and practical. In other words, the performance of the ANN (and its desirable features) must be maintained whilst providing an acceptable level of safety assurance. This can be interpreted as the safety versus performance trade-off. Generated safety arguments must then be organised and presented using the safety case.

4 Types of safety arguments

There are two main types of safety arguments currently used for software systems, known as process and product based arguments. Process based arguments are concerned with providing assurance purely on the basis that a certain process has been carried out. Previous work [18] attempted to devise process certification requirements for ANNs in safety critical systems. This is a good example of over-reliance on weak forms of evidence from process-based arguments. For example, many of the requirements dealt with issues such as team management and implementation (coding) issues (using formal methods). However, there was no clear reasoning on how the functional behaviour of the ANN had been made ‘safer’ (in terms of hazard mitigation and control). The role of analytical tools is highly important and will involve hazard analysis for identifying potential hazards. Typical monolithic neural networks illustrate the factors that prevent this type of analysis for ANNs: they distribute their function among the weights in a fashion that makes interpretation difficult, forcing pedagogical approaches to analysis [19] (black-box behavioural interpretations).

The ability to argue more about the functional behaviour can be provided using product-based arguments. These are evidence-based and can show how specific safety issues, such as identifying potential hazards, are tackled. Product-based arguments are required to deliver certification based upon analytical methods. Process-based arguments are generally considered weak forms of assurance [20], and current practices and standards are working towards excluding them. The types of arguments suitable for ANNs are therefore product-based, with process-based arguments acceptable only where a clear, defensible argument can be made about how they tackle specific safety concerns.

5 Safety criteria for artificial neural networks

Analytical product-based arguments about the functional behaviour of ANNs can be provided through the satisfaction of the safety criteria [10]. The safety criteria are a set of high-level goals devised by analysing aspects of current safety standards and the behaviour of ANNs (factors affecting safety). They define the minimum behavioural properties which must be enforced in safety-critical contexts. Most of the criteria require suitable white-box style arguments to be completely satisfied. The safety criteria are presented in Fig. 1 in goal structuring notation (GSN) [21], which is commonly used for composing safety case patterns. The boxes illustrate goals (claims) or sub-goals which need to be fulfilled. Rounded boxes denote the context in which the corresponding goal is stated. The rhomboid represents strategies to achieve goals. Diamonds underneath goals symbolise that further development is required, leading to supporting arguments and evidence. (An illustrative encoding of this goal structure appears at the end of this section.)

Fig. 1 Preliminary safety criteria for artificial neural networks

Figure 1 is best read using a top-down approach. The safety criteria consist of a top goal labelled G1. If G1 is achieved, then the neural network can be considered ‘safe’ to perform some specified function in safety critical contexts (particularly in highly-dependable roles). To make this goal less ambiguous, consider the following related contexts:

  • Context C1 defines the specific neural network model being used.

  • Context C2 requires ANNs to be used for a particular problem when conventional software methods are inappropriate (utilising the benefits of ANNs outlined in the introduction).

  • Context C3 requires that ‘acceptably safe’ is determined by the manner in which the criteria are satisfied. This is related to the types of arguments used (such as product-based, white-box, analytical styles).

The strategy S1 will attempt to generate safety arguments from the sub-goals (which form the criteria) to fulfil G1. The goals G2 to G5 represent the safety criteria and are outlined below:

  • Criterion G2: Input–output functions for the neural network have been safely mapped.

    • This criterion provides assurance that the neural network represents the desired function which may be considered as input–output mappings.

    • ‘Safely’ refers to mitigation or control of potential hazards associated with properties of the input–output mappings.

    • Context C4 indicates partial or complete satisfaction of the desired function.

      • This is a realistic condition if all hazards have been identified and mitigated for the partial function.

      • Suitable for applications when analysis is unable to determine whether total representation of the entire desired function has been achieved.

      • Previous work on refining partial specifications for neural networks [22] may apply.

    • Forms of sub-goals or strategies for arguing G2 may involve using analytical methods such as decomposition approaches [23]. This will help analyse the function performed by the ANN and help present a white-box view of the network.

  • Criterion G3: Observable behaviour of the neural network must be predictable and repeatable.

    • This criterion provides assurance that safety is maintained during ANN learning and training.

    • The ‘observable behaviour’ of the network means the input and output mappings that take place (and not the adaptable parameters in the ANN).

    • The term ‘repeatable’ provides assurance that any previously valid (or safe) mapping or output does not become flawed during learning. This is concerned with issues surrounding ‘forgetting’ of previously learnt samples.

    • Potential safety arguments may be concerned with providing evidence that learning is controlled (through behavioural constraints identified by hazard analysis).

  • Criterion G4: The neural network tolerates faults in its inputs.

    • This allows the neural network to provide safe outputs for all input conditions.

    • Possible flawed inputs may include samples that do not represent the desired function.

    • Satisfaction of this goal may depend on the application context (in case assurance can be provided that input will not be flawed).

    • Possible safety arguments for this goal may involve detecting and suppressing flawed inputs and providing assurance of input space coverage.

  • Criterion G5: The neural network does not create hazardous outputs.

    • This is similar to G2 but focuses more upon the network output.

    • Provides assurance that the output is not hazardous regardless of the integrity of the input.

    • Possible forms of arguments may involve black-box approaches.

    • Solutions may include output monitors or bounds (a sketch combining such an output bound with the input checks of G4 follows this list).
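As an illustration of criteria G4 and G5 working together, the sketch below (with hypothetical input/output ranges and a stand-in network) gates flawed inputs and bounds the output signal:

```python
# Sketch for criteria G4/G5: gate flawed inputs and bound the output.
# The ranges and the stand-in network are hypothetical illustrations.

SAFE_INPUT = (-1.0, 1.0)    # assumed valid region of the input space
SAFE_OUTPUT = (0.0, 100.0)  # assumed safe region for the output signal

def network(x: float) -> float:
    """Stand-in for the deployed ANN."""
    return 50.0 + 60.0 * x

def guarded_output(x: float, fallback: float = 0.0) -> float:
    # G4: suppress inputs outside the valid area of the data space.
    if not (SAFE_INPUT[0] <= x <= SAFE_INPUT[1]):
        return fallback
    y = network(x)
    # G5: regardless of input integrity, never emit a hazardous output.
    lo, hi = SAFE_OUTPUT
    return min(max(y, lo), hi)

print(guarded_output(0.5))  # 80.0: within both regions
print(guarded_output(0.9))  # 104.0 clamped to 100.0 (G5 bound)
print(guarded_output(5.0))  # flawed input suppressed (G4): 0.0
```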

The safety criteria are intended to be applicable to most types of neural networks. However, certain ANN models may be susceptible to new types of faults (potentially as a result of specific safety requirements on functional properties). In this case, appropriate criteria can be added to deal with these specific safety concerns. The flexibility of the safety criteria enables applicability to a wider range of applications and neural network models.
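Tying the section together, the goal structure of Fig. 1 might be encoded as simple data structures for tooling or documentation purposes (a minimal sketch with hypothetical class names; GSN itself is a graphical notation [21]):

```python
# Sketch: encoding the GSN elements of Fig. 1 as Python data structures.
# Class names are hypothetical; GSN is a graphical notation.
from dataclasses import dataclass, field

@dataclass
class Goal:
    ident: str
    claim: str
    contexts: list = field(default_factory=list)  # rounded boxes (C1, ...)
    developed: bool = False                       # diamond = undeveloped

@dataclass
class Strategy:
    ident: str
    description: str
    subgoals: list = field(default_factory=list)

criteria = [Goal("G2", "Input-output functions have been safely mapped"),
            Goal("G3", "Observable behaviour is predictable and repeatable"),
            Goal("G4", "Faults in the inputs are tolerated"),
            Goal("G5", "No hazardous outputs are created")]
s1 = Strategy("S1", "Argue satisfaction of the safety criteria", criteria)
g1 = Goal("G1", "The ANN is acceptably safe to perform its specified "
                "function in a safety critical context",
          contexts=["C1: neural network model used",
                    "C2: conventional software methods inappropriate",
                    "C3: 'acceptably safe' fixed by how criteria are met"])

print(g1.ident, "->", s1.ident, "->", [g.ident for g in s1.subgoals])
```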

6 Criteria ‘completeness’

A fault is a deficiency or imperfection in the system; failure modes are the effects by which failures are observed. The focus of the safety criteria is to encapsulate all possible types of faults leading to failures, along with the types of failures that can occur. The safety criteria can therefore be argued in terms of the different modes of failure that they tackle. In the following justification some non-functional properties are also highlighted (including spatial and temporal resource usage). A full list of functional and non-functional properties can be found in [11]; below is a summary of the main faults and failure modes tackled by each criterion:

  • Criterion G2 faults

    1. One or more tunable parameters in the neural network are faulty

       • A parameter in the network has a wrong value such that its contribution to the creation of the final output leads to a hazardous function.

    2. The topology of the network is faulty

       • The topology and structure (layers, number of neurons, arrangement and relations) do not allow learning or generalisation of the desired function.

    3. Activation functions are faulty

       • Activations occur when they should not, or vice versa.

    4. The connections are faulty

       • In terms of existence and placement.

  • Criterion G2 failure modes

    • Given an input pattern within some valid area of the data space, the network output is not the desired output and can potentially lead to a hazard (during deployment).

    • Given a series of inputs, the output is discontinuous. This is related to potentially hazardous fluctuations in output as described in [15].

  • Criterion G3 faults

    1. One or more weights have changed during learning such that the network output changes from a ‘safe’ to a potentially hazardous signal.

       • This fault is associated with problems of ‘repeatability’ during learning (while deployed).

    2. A neuron has been added and a memory limitation has been exceeded (non-functional).

    3. Internal properties have changed such that the response time has exceeded an acceptable limit (non-functional).

    4. Unsuitable learning algorithms and ANN structure result in training samples not being learnt.

  • Criterion G3 failure modes

    1. The network output is potentially hazardous for some input, given that the network output was ‘non-hazardous’ at a previous network state (result of fault 1).

    2. Memory overflow (result of fault 2)

       • Resource usage (space): the network has attempted architectural growth beyond its memory capacity and has caused critical failure or infringed all other criteria.

    3. Time-out problem given an input pattern (result of fault 3)

       • Resource usage (time): the network has exhibited a change δ in response time, from the input vector entering the network to the output vector exiting it, beyond an acceptable limit τ (i.e. δ > τ). A sketch of a runtime check for this appears after these listings.

    4. Time-out problem given training samples (result of fault 4)

       • Resource usage (time): the learning phase of the network has not been able to learn all required samples in the available time, for example in classification problems.

  • Criterion G4 faults

    • The input vector is not within some valid area of the data space (a flawed input sample).

  • Criterion G5 faults

    • Given any input pattern, the network has a fault (black-box approach).

  • Criterion G4 & G5 failure modes

    • The network output is potentially hazardous, as a result of a flaw in the inputs or some fault in the neural network.

    • Given an input vector, the output is missing or outside a safe region.
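The non-functional failure modes above lend themselves to simple runtime checks. The sketch below (hypothetical limits and a stand-in network) flags a response time δ exceeding the limit τ, and flags a discontinuous jump between consecutive outputs:

```python
import time

# Sketch of runtime checks for the failure modes above: a response time
# delta exceeding the limit tau, and discontinuity between outputs.
# The limits and the stand-in network are hypothetical.

TAU = 0.01      # assumed acceptable response time in seconds (tau)
MAX_JUMP = 5.0  # assumed largest non-hazardous change between outputs

def network(x: float) -> float:
    return 2.0 * x  # stand-in for the deployed ANN

def monitored_step(x: float, previous_output: float):
    start = time.perf_counter()
    y = network(x)
    delta = time.perf_counter() - start
    timed_out = delta > TAU                          # delta > tau check
    discontinuous = abs(y - previous_output) > MAX_JUMP
    return y, timed_out, discontinuous

y, timed_out, jumped = monitored_step(1.0, previous_output=0.0)
print(y, timed_out, jumped)  # 2.0 False False (jump of 2.0 <= 5.0)
```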

7 Characteristics of a suitable neural network model

The previous section described how the criteria attempt to deal with problems associated with typical ANNs. One of the key methodologies for satisfying the criteria is to avoid pedagogical (black-box) styles of analysis and to find ways of generating white-box style arguments. At the same time, it is essential to preserve the ability to learn and generalise (as outlined in the introduction).

One particular model that has the potential to generate desired safety arguments is the well-established ‘hybrid’ ANN [24, 25]. The term ‘hybrid’ in this case refers to representing symbolic information within a neural network structure.

Consider the example illustrated in Fig. 2. This diagram is divided into three columns:

  • Knowledge/Data: All references to symbolic data and training data are kept here. Symbolic knowledge may be in the form of logic or if-then rules (or even fuzzy rules which combine both qualitative and quantitative knowledge).

  • Process: This column encapsulates different processes that take place. This may involve translation algorithms which can convert rules into network neurons and parameters (and vice versa). It also encapsulates learning algorithms.

  • Neural network: This represents major states of the ANN. Typically, this may include ANNs with initial conditions or post-learning states.

The manner in which ‘hybrid’ ANNs are developed is described within the columns of Fig. 2. Suppose we want to use ANNs to solve some problem. Domain experts may attempt to gather knowledge and represent it in the form of rules. These rules may not be complete, and may be incorrect as a result of insufficient prior knowledge. This set of initial knowledge is then processed using a ‘rules-to-network’ algorithm, which creates a suitable network structure and inserts the rules into it. The structure of the ANN (such as topology, weights and connections) is determined by the insertion process. Once insertion is complete, the result is an initial neural network that represents all of the initial symbolic knowledge. The network then goes through a process of learning using training data. The aim is to adapt or refine the rules given variations within the data domain; the rules may grow in number or be refinements of existing ones. Once training is complete, the rules are extracted (using suitable rule extraction algorithms). The final set of rules may correspond to changes in the data, reflecting changes in the operating environment. (A small sketch of rule insertion follows Fig. 2.)

Fig. 2 Framework for combining symbolic and neural paradigms [26]
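To make the rules-to-network idea concrete, the sketch below encodes a single if-then rule as a neuron in the spirit of KBANN-style insertion (an illustration with assumed weight values, not the exact algorithm of [24]):

```python
import math

# Sketch of KBANN-style rule insertion (illustrative only): the rule
# "IF a AND b AND NOT c THEN fire" becomes a neuron whose weights and
# bias make it activate only when the rule holds.
W = 8.0  # weight magnitude chosen so the sigmoid saturates (assumed)

def insert_rule(positive, negated):
    weights = {a: W for a in positive}
    weights.update({a: -W for a in negated})
    # The bias puts the threshold just below the count of positive
    # antecedents: all of them, and no negated one, must be active.
    bias = -W * (len(positive) - 0.5)
    return weights, bias

def neuron(inputs, weights, bias):
    s = bias + sum(w * inputs.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

weights, bias = insert_rule(positive=["a", "b"], negated=["c"])
print(neuron({"a": 1, "b": 1, "c": 0}, weights, bias))  # ~0.98: rule holds
print(neuron({"a": 1, "b": 0, "c": 0}, weights, bias))  # ~0.02: b missing
print(neuron({"a": 1, "b": 1, "c": 1}, weights, bias))  # ~0.02: c present
```

Learning can then adjust these weights, and extraction maps saturated weights back to (possibly refined) rules.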

There are many advantages to this approach. The ‘hybrid’ ANN has been shown to outperform many all-symbolic techniques in terms of generalisation and the rules that it embodies [24]. In terms of safety, the ‘hybrid’ ANN offers the potential for analysis using a decompositional approach. This is facilitated by the structured manner in which rules are represented within the ANN. It also provides the potential for transparency, or white-box style analysis. This can be achieved through rule extraction algorithms, which are easier to implement for ‘hybrid’ ANNs than for conventional neural networks (and in some cases rule extraction is a computationally efficient process [25]). These advantages can result in potentially strong analytical arguments [12]. However, the learning process still needs to be controlled using appropriate mechanisms if it is to be used whilst the ANN is deployed. So far, we have defined several suitable ‘hybrid’ ANN models [12, 27, 28].

8 Safety lifecycle for ‘hybrid’ neural networks

The current software development lifecycle is inadequate for artificial neural networks. For example, the specification, design, implementation and testing phases do not support the extra activities involved in developing ANNs (such as data processing). Given the distinctive development lifecycle of ‘hybrid’ ANNs, it is not obvious where the required safety processes should be initiated. Moreover, existing safety processes may need to be adapted to deal with symbolic and neural representations. There are very few existing ANN development lifecycles for safety-related applications [29, 30]. One particular lifecycle [29] is directly intended for safety-critical applications; however, there are several problems with the approach. One problem is that it relies on determining a complete specification at the initial phase of development. This is highly idealistic, since the prime motivation for ANN learning is to determine behaviour given very limited initial data. Another problem is that it over-emphasises control over non-functional properties, such as the usage of spatial or temporal resources. Instead, the focus should be upon constraining functional properties such as learning, or the behaviour of the ANN. Documenting each development phase (a process-based argument) is generally regarded as a weak form of assurance for conventional software [20]; nevertheless, such arguments are used extensively throughout the ANN lifecycle described in [29]. Figure 3 illustrates the development and safety lifecycle we have devised for ‘hybrid’ ANNs, in the form of a ‘W’ model [13]. The diagram is divided into three main levels:

Fig. 3 Safety lifecycle for hybrid neural networks

  1. Symbolic level: This level is associated only with symbolic information. It is separated from the neural learning paradigm and deals with analysis in terms of symbolic knowledge. Typical examples include the gathering and processing of initial knowledge, and evaluating extracted knowledge post-learning.

  2. Translation level: This is where symbolic knowledge and neural architectures are combined or separated. This is achieved through processes such as rule insertion [26] or rule extraction algorithms [23]. All transitions from the symbolic level to the neural learning level involve rule insertion algorithms; all transitions from the neural learning level to the symbolic level involve rule extraction algorithms.

  3. Neural learning level: This level uses neural learning to modify and refine symbolic knowledge. Neural learning is performed using specific training samples along with suitable learning algorithms [26, 31, 32].

The following points describe the ‘W’ model and outline the major stages in the development lifecycle:

  • Determination of requirements: These requirements describe, in informal terms, the problems to be solved. Whereas requirements for conventional software are intentionally complete, the requirements for ANNs are intentionally incomplete. As is typical for real-world problems, this ‘incompleteness’ may be a result of insufficient data or knowledge prior to development.

  • Sub-initial knowledge: All known knowledge is translated into logical rules (by domain experts).

  • Initial knowledge: Sub-initial knowledge is converted into symbolic forms compatible for translation into network structure. A compatible symbolic language has been defined in [24] and [28].

  • Dynamic learning: Once the initial symbolic knowledge has been inserted (translated) into a suitable ANN, a two-tier learning process commences. This uses suitable learning algorithms [12] to refine the initial symbolic knowledge and add new rules. It uses training data sets and attempts to modify the network to reduce the output error. This may result in topological changes to the network (adding new hidden neurons to represent new rules).

  • Refined symbolic knowledge: Knowledge refined by dynamic learning is extracted using appropriate extraction algorithms. This will result in a new set of rules which can be analysed from a symbolic level.

  • Static learning: Refined symbolic knowledge may be modified by domain experts and re-inserted into the ANN structure. It then goes through a process of static learning. This further refines the knowledge in the ANN but does not allow topological or architectural changes; instead, the learning process adapts the knowledge within pre-defined constraints and bounds. This learning process can therefore be performed during deployment whilst adhering to the safety criteria [12] (see the sketch after this list).

  • Knowledge extraction: This can be performed at any time during static learning to obtain the adapted rule set. This can either be used to understand changes in environment [27] or for maintenance of the knowledge base [28].
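As an illustration of the static learning constraint (a sketch with hypothetical bounds and gradients, not the algorithm of [12]), each weight update can be clipped so that the weight never leaves an interval fixed by prior hazard analysis:

```python
# Sketch of static learning: gradient-style updates are clipped so each
# weight stays inside bounds fixed by hazard analysis before deployment.
# The bounds, learning rate and gradients are hypothetical illustrations.

bounds = {"w1": (0.5, 1.5), "w2": (-1.0, 0.0)}  # from hazard analysis
weights = {"w1": 1.0, "w2": -0.5}
learning_rate = 0.1

def static_update(gradients):
    for name, grad in gradients.items():
        proposed = weights[name] - learning_rate * grad
        lo, hi = bounds[name]
        # Adaptation is allowed, but the weight may never leave the
        # pre-analysed safe interval (and no topology change occurs).
        weights[name] = min(max(proposed, lo), hi)

static_update({"w1": -8.0, "w2": 2.0})
print(weights)  # w1 would reach 1.8 but is clipped to 1.5; w2 becomes -0.7
```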

Constructing ‘hybrid’ ANNs thus differs markedly from conventional software development. The aim of the safety processes for ANNs is to identify, analyse, control and mitigate potential hazards (and their associated faults). Although a detailed description of each process is beyond the scope of this paper, an outline is presented below:

Preliminary hazard identification (PHI) deals with the initial symbolic knowledge used to determine the initial conditions of the network. PHI is performed over a relatively meaningful representation (rules) and attempts to understand potential hazards in real-world terms. Whilst building an understanding of the problem, PHI identifies potential hazards and generates possible system level hazards (using a black-box approach). This may result in a set of rules partially fulfilling the desired function. It utilises knowledge gathered from domain experts, empirical data and other sources.

The next step is to perform functional hazard analysis (FHA) over the sub-initial symbolic knowledge. FHA performs a predictive, systematic, white-box style technique to understand how the symbolic knowledge can lead to hazards. FHA also builds an understanding of the criticality of certain rules and can facilitate ‘targeted’ training. For example, specific training sets may be devised to focus on certain important rules or areas of concern. FHA may also make rule assertions to prevent specific hazards. The result of FHA at this stage is the initial symbolic knowledge that is prepared for translation into a neural structure.

Preliminary system safety assessment (PSSA) is then performed during dynamic learning between training epochs. PSSA uses an evaluative approach to analyse rules modified by dynamic learning (through rule extraction). It determines the existence of potential hazards and will do the following:

  1. Manually modify the rule set: through manual assertions to mitigate or control identified hazards.

  2. Initiate further training: devise new training sets to reflect desired directions of learning. This can also be viewed as ‘guiding’ the design process to discover missing or unknown knowledge, and can be iterated until the desired performance is achieved (determined through analytical techniques). A sketch of this loop follows.
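In outline, the PSSA loop might look as follows (a sketch in which every helper is a hypothetical stand-in for the real extraction, hazard-analysis and training steps):

```python
# Outline of PSSA between training epochs. All helper functions are
# hypothetical stand-ins for real extraction, analysis and training.

def train_one_epoch(net, data):  # stand-in for dynamic learning
    return net

def extract_rules(net):          # stand-in for rule extraction [23]
    return ["IF a AND b THEN c"]

def hazardous(rule):             # stand-in for hazard analysis
    return False

def assert_safe_variant(rule):   # manual assertion mitigating a hazard
    return rule + " AND NOT hazard_condition"

def pssa_training(net, data, epochs):
    for _ in range(epochs):
        net = train_one_epoch(net, data)
        # PSSA: analyse the rules refined by dynamic learning between
        # epochs; mitigate hazards by assertion or further training.
        rules = [assert_safe_variant(r) if hazardous(r) else r
                 for r in extract_rules(net)]
        # (Re-insertion of the safe rule set would happen here.)
    return net

pssa_training(net=None, data=[], epochs=3)
```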

An important point is that the function (or rule set) represented by the ANN may only partially satisfy the desired function. This is a realistic assumption, since it may be infeasible to determine whether the totality of the desired function has been achieved (supporting the motivation for neural learning). Certification can be provided based upon this ‘partial-but-safe’ approach. Alternatively, assurance of providing a complete function may be produced for non-linear function approximation problems [28]. The approach so far deals with mitigating hazards associated with the rule set and improving performance; it must also control future learning.

FHA is performed again at the post-dynamic-learning stage. This uses an exploratory safety analysis technique to identify exhaustively how rules may be allowably refined during static learning. FHA is based upon the pre-consideration of possible adaptations of rules: it analyses the symbolic knowledge of the ANN and determines how potential faults may be introduced (if parts of rules are modified). There are two approaches, dependent on the type of neural model used:

  1. FHA analyses how the logical representation may change, such as adding, removing or inverting antecedents of rules. This approach is specific to the safety-critical ANN model based upon the knowledge based artificial neural network (KBANN) [27] (see the sketch after this list).

  2. FHA analyses how the quantitative representation may be allowably adapted. This is based upon recent work on a type of neuro-fuzzy model suitable for safety critical systems, as defined in [28].
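For the first, KBANN-based approach, the pre-consideration of rule adaptations can be pictured as enumerating single-step antecedent edits of each rule (add, remove, invert), each of which would then be assessed for potential faults. The rule representation below is a simplified, hypothetical one, not the analysis of [27]:

```python
# Sketch: enumerate single-step adaptations of a rule's antecedents
# (remove, invert, add) so each variant can be assessed for hazards.
# The rule representation is a hypothetical simplification.

def adaptations(antecedents, vocabulary):
    variants = []
    for i, a in enumerate(antecedents):
        variants.append(antecedents[:i] + antecedents[i + 1:])  # remove
        inverted = a[4:] if a.startswith("NOT ") else "NOT " + a
        variants.append(antecedents[:i] + [inverted]
                        + antecedents[i + 1:])                  # invert
    for v in vocabulary:
        if v not in antecedents:
            variants.append(antecedents + [v])                  # add
    return variants

for variant in adaptations(["a", "NOT b"], vocabulary=["a", "b", "c"]):
    print(variant)  # each variant is then checked for potential hazards
```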

Measures to mitigate potential faults are then incorporated into the network by exploiting knowledge-insertion algorithms to translate rule conditions into network parameters.

The safety lifecycle for ‘hybrid’ ANNs respects the development methodologies of the ‘hybrid’ ANN whilst addressing the safety criteria. The lifecycle uses adapted software safety processes to deal with the particular faults associated with the functional properties of ANNs. The important advantage is that this lifecycle contributes to generating the compelling, product-based, analytical safety arguments required for certification [13].

9 Potential safety arguments

A plethora of potential safety arguments can be generated using a suitable ‘hybrid’ ANN and safety lifecycle. Although a full assessment is beyond the scope of this paper, some properties of the ‘hybrid’ ANN model are as follows:

  • Translation algorithms and data representation greatly contribute to white-box style analysis of the ANN.

  • Analysing the rules represented by the network can contribute to satisfying criterion G2 (provision of function).

  • The two-tier learning model allows working towards desired function whilst enabling control over the learning process (criterion G3).

Some potential safety arguments contributed by the safety lifecycle are as follows [28]:

  • Faults within rules represented in the network may be identified using meaningful representations.

  • Hazard analysis can exploit the decompositional analytical approach for tackling all specific faults that could give rise to hazards before certification.

  • Hazard analysis (FHA) can also provide assurance for dealing with novel inputs and all allowable adaptations during learning (that takes place post-certification). This is performed by analysing knowledge in the ANN and incorporating required constraints.

  • Hazard analysis can identify and mitigate faults associated with input space coverage of the knowledge base. Assurance is provided through PSSA which guides the learning (design) process.

  • Hazard analysis can identify and prevent hazardous functional properties such as discontinuity of output.

  • Hazard analysis can provide assurance that the behaviour of the ANN is always within safe regions.

The main emphasis of these potential arguments is that black-box analytical methodologies do not need to be relied upon for certifying neural networks. The types or forms of safety arguments presented in this paper are comparable to those found for software safety assurance. Whilst providing suitable forms of safety assurance, the ‘hybrid’ ANN also maintains the ability to learn and generalise post-certification. Neural learning is one of the major motivations for adopting the ANN approach. By preserving the learning capability, desirable features of ANNs are maintained without compromising on safety.

10 Conclusion

The absence of analytical certification methods has typically restricted ANNs to advisory roles in safety-related systems. Many of the existing techniques for improving the safety of ANNs in safety critical systems do not provide the necessary forms of safety arguments. This work has attempted to overcome these typical problems by first outlining safety criteria for the functional behaviour of neural networks, devised by considering some of the major faults associated with ANNs. The paper then presented a potential model, the ‘hybrid’ ANN, which combines the symbolic and neural paradigms. The advantage of this approach is a more analysable and controllable representation. The ‘hybrid’ ANN also tackles the challenge of maintaining the advantages gained from learning and generalisation by incorporating the necessary controls and constraints.

The safety lifecycle provides a framework for developing ANNs for safety critical applications and focuses on identifying, analysing, controlling and mitigating hazards. The approach to utilising ‘hybrid’ ANNs is highly flexible. For example, other domain-specific base functions (such as image filters for image processing applications [33]) may be used instead of rules without the need to modify the safety lifecycle. Some of the main potential safety arguments resulting from these approaches demonstrate the potential for ‘hybrid’ ANNs to take on highly-dependable roles in safety critical systems.