1 Introduction

The petroleum industry requires a detailed, comprehensive framework for delivering high-reliability and high-availability systems. “Reliability is taken as the probability that a system will operate satisfactorily under specific operating conditions for a given time. System maintainability is defined as the ability of a system to be operable without failure for a given duration in the future, and the system can be restored easily if a breakdown occurs” (Rausand & Høyland, 2004). A system is considered ‘not available’ if it is shut down for unplanned or planned maintenance or component failures, since the outcome is the same. “The reliability analysis is used to judge a system’s maintainability. Reliability, in turn, is dependent on the system architecture, material selection as well as design details; and it is only achievable if the availability is at its highest level” (MIL-HDBK-217 Rev. F, 1995).
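
These definitions are commonly made concrete as follows (a standard formulation consistent with Rausand & Høyland, 2004, shown here for the simple case of a constant failure rate $\lambda$):

$$R(t) = P(T > t) = e^{-\lambda t}, \qquad \text{MTTF} = \frac{1}{\lambda}, \qquad A = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}}$$

where $R(t)$ is the probability of surviving to time $t$, MTTF is the mean time to failure, MTTR is the mean time to repair, and $A$ is the steady-state availability, which reflects downtime regardless of whether it is planned or unplanned.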

“Performing system reliability, availability, and maintainability (RAM) analyses early in the phases of a project development provides a metric for comparison of alternative architectural concepts” (Yasseri & Bahai, 2018). At the concept generation time (Yasseri, 2012), several functional architectures are considered; it is useful at this phase to model the functional components, as rough building blocks, without reference to their physical properties, for an early estimate of the reliability using historical data. Functional architecture is an idealized abstraction of a system, which identifies functional elements without a precise description of their physical properties or their implementation.

The client decides on the desired target reliability level of the system, which is used by the designers for the allocation of reliability requirements for every piece of equipment (it may also include software) as a target. Then the system engineer's objective is to demonstrate “by examination and provision of evidence that the hardware (as well as software) meets the Client’s specified requirements for the intended use” (DNVGL-RP-A203, 2019). For novel hardware, the failure data is likely to be non-existent or insufficient, thus tests may be essential to enhance confidence in the reliability of results.

Reliability can also be affected by an ill-defined specification or a mismatch between specification and design. Bad manufacturing processes, unsuitable materials, poor installation, inadequate or irrelevant tests, and incorrect use of the system will also influence the system’s performance. These cause the estimated performance, which is demonstrated by analyses, to differ from the actual performance of the as-built system. Other explanations are “emergent behavior, undetected faults, unanticipated operating conditions, unanticipated failure mechanisms & their causes, epistemic and aleatory uncertainties” (Pecht, 1993). Unforeseen and unexpected operational conditions arise from insufficient or incorrect specifications, user errors, or incorrect implementation changes due to inadequate change control management and lack of oversight. A scenario-based approach and what-if analyses can help to minimize the impact of any uncertainties. The “results of functional failure analyses and testing, are complemented by field experience obtained from observation of proven technologies as well as physics-based analyses” (Viola et al., 2012).

A reliability analyst who uses only generic historical data (e.g., OREDA, 2009) to determine the probability of mechanical failures cannot account for the impact of design errors and poor manufacturing on reliability. It may be incorrectly assumed that all errors will be detected and rectified during the development of the system. Thus, “the reliability predictions based only on historical data is not highly dependable, and hence must be augmented by other types of analyses and tests” (Feiler et al., 2012). It is not realistic to assume that modern fabrication methods and material qualities are the same as they were in the past.
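
To illustrate why purely historical estimates need augmentation, the minimal sketch below computes mission reliability from a generic, OREDA-style failure rate under the usual constant-rate (exponential) assumption. The rate value and equipment description are hypothetical placeholders, and, as the comments note, nothing in the calculation captures design or manufacturing errors.

```python
import math

# Hypothetical generic failure rate for a piece of subsea equipment, in
# failures per hour (an OREDA-style database tabulates such rates from
# historical field data).
LAMBDA = 2.5e-6

def mission_reliability(rate_per_hour: float, mission_hours: float) -> float:
    """Reliability over a mission, assuming a constant failure rate."""
    return math.exp(-rate_per_hour * mission_hours)

one_year = 8760.0
print(f"R(1 year) = {mission_reliability(LAMBDA, one_year):.4f}")
print(f"MTTF      = {1.0 / LAMBDA:,.0f} hours")
# Note: this estimate is blind to design errors, poor manufacturing, and
# differences between past and present fabrication methods and materials.
```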

The equipment reliability may even change from project to project. Components that are designed to perform a specific functionality by different manufacturers could have different failure modes and routes to failure. V&V and testing must be used to fill the knowledge gap.

A more dependable framework is needed for validating and qualifying a system, economically and quickly, rather than “test and test again until time and budgets are exhausted”. The objective is to outline a methodology for the detection of all types of errors early in the Development Phase and to furnish the system with good quality attributes, such as high performance, safety, sufficient reliability, resilience, robustness, and defensibility (adequate installation security) (Yasseri & Bahai, 2018). It is prudent to build resilience into the system at the design stage to counter unforeseen, undetected, and emergent behavior. It is also crucial to assure that unavoidable, undetected, and unanticipated failure modes are managed by a well-organized and robust risk management plan during the operational phase. This framework aims to identify failure modes at the architectural level; the approach can also deal with issues that are not easy, or even possible, to test until the whole system has been installed.

A framework for achieving a reliable SPS is described in this chapter. A parallel V is proposed which shadows the SE’s V-model (see Fig. 2.18). This ties reliability assurance to the system development effort and minimizes downtime by embedding robustness and resilience into the system. The framework enables the delivery of reliable systems while respecting all constraints and requirements. The subsea battery limit in this chapter is from the down-hole valve to the seabed production equipment, to the topside equipment (and possibly to an onshore receiving terminal), in their operational environment, employing the notion of “Fit-For-Purpose”.

2 System Thinking in SE

A system is an assembly of components and linkages, and linkages allow the system’s components to interact with each other (Fig. 2.1). How the components of a system are arranged, interact, and influence each other determines the properties of that system. A collection of components, without linkages and relationships, does not make a system.

Fig. 2.1

A system consists of three elements: components, linkages, and relationships

Accordingly, a system is a set of objects that are organized in a specific way, with certain relationships between the objects, that work together in some manner to perform a function (the purpose). Systems can accomplish tasks that would be impossible if the same elements were put together in random order, or if there were no logical relationship between them. Humanity benefits continually from various clever ways of putting together the resources that provide us with food, transportation, education, goods, and services.

The characteristics of a system are (Fig. 2.1):

  • Purpose: A system can only be envisioned when it has a clear purpose and provides a desired function. This purpose usually governs the arrangement of elements, their connectivities, the strength of their relationships, and the interactions between the system and its environment.

  • Boundaries: The boundary determines the extent of influence of a system. The boundary stops where the impact of the environment on the system becomes marginal, and vice versa. Judgments as to where the boundary lies are necessary constituents of Systems Thinking.

  • Coherence: (A sense of belonging). Every interaction within a system must be coherent.

  • Emergence: A characteristic of systems is that they cannot be identified solely by their parts. This wholeness causes behaviors to emerge that are known as emergent characteristics.

  • Hierarchy: Any system should consist of at least three levels of hierarchy: system of systems (SoS), systems, and sub-systems, which determine how changes at one level can influence other levels.

  • Sub-systems: These are the parts of the system that must interact, in balance, to achieve the purpose of the system. A sub-system or a component is itself a system to its vendor.

  • Environment: All things not included in the system that may affect its purpose. Some aspects of a system’s environment may be closely associated with the system, while other aspects are less relevant or unrelated.

The “systems thinking focus is on relationships between the system’s elements, (not on the elements as unrelated objects), objectives (not the structure), the whole (not its constituent parts), the context (rather than the contents) of a system, and patterns” (Royal Academy of Engineering, 2014). For a long time, engineers have taken a complex system (like a transportation system), separated it into its parts, and then tried to manage each part as best they could. In the context of transportation, parts could refer to different means of transport (road, rail, air, etc.), hardware, or people. Engineers believed that if that was done, the system would behave well. “Thinking in systems requires shifts in perception, which lead to diverse ways to perceive, and different ways of organizing a system” (Edson, 2008). “It is possible to improve the performance of many system's components and yet disable or destroy the system in its entirety” (Senge, 1990).

3 System Architecture

“System Architecture is an abstraction of the vision of how a system should hang together, which is an arrangement of its components and their relationships to each other and the environment” (Sillitto, 2014). The system architecture is used as a plan (blueprint) for the definition of subsystems and components, their design, manufacture, and integration with the system’s operational environment so that the elements of the installed system will work in unison to deliver the intended functionality.

A system architecture is presented at two levels of abstraction hierarchy, known as functional and physical. The first level is the functional architecture, also known as the “conceptual design”; it is still an abstract view of the system but may have more details (Yasseri & Bahai, 2018). In software engineering, another layer is added between these two, called the logical architecture. In this chapter, functional and logical architectures are used interchangeably.

The functional architecture is a representation of the system independent of suppliers, and equipment is named by its functions. Each piece of equipment is represented by a box and identified by its function. At this early stage of development, what a component must deliver is known but its physical properties (dimensions, sizes, footprint, material, weight, and so forth) are not known until more definitions are added by identifying suppliers and deciding which equipment to procure. This takes place in the next phase of the project development.

Equipment manufactured by two different suppliers may deliver the same function, but their physical properties can be quite different. Two pieces of equipment designed and manufactured by two suppliers will share many common functional characteristics, but they will have many different physical characteristics. A component in the functional architecture represents its function (what it does), and some of its properties and interfaces may be common to a range of products supplied by different vendors. The functional architecture remains static and independent of technologies and vendors and provides a relatively stable baseline from which to proceed to the system design, vendor selection, and fabrication.
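
A minimal sketch of this idea, keeping the functional architecture vendor-neutral while the physical allocation varies; all function labels, equipment names, and vendors are hypothetical:

```python
# Functional architecture: equipment identified only by what it does.
functional_architecture = {
    "F1": "Isolate well flow",
    "F2": "Combine flow from wells",
    "F3": "Control subsea equipment",
}

# Physical allocation: a one-to-one mapping of functions onto supplier
# equipment. Swapping a vendor changes only this table, not the stable
# functional baseline above.
physical_allocation = {
    "F1": {"equipment": "Subsea production tree", "vendor": "Vendor A"},
    "F2": {"equipment": "Production manifold",    "vendor": "Vendor B"},
    "F3": {"equipment": "Subsea control module",  "vendor": "Vendor C"},
}

# Check the one-to-one relationship between functions and physical items.
assert set(functional_architecture) == set(physical_allocation)
```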

The left-hand side of Fig. 2.2 shows a deepwater subsea system delivering certain functionality (extracting gas from six wells and sending it to shore to be prepared for sale), consisting of several sub-functions. At this level of the hierarchy, the function of all equipment, their relationships & connectivity, and how they should communicate are defined, mostly diagrammatically and descriptively. This is to make sure that all required components are present and the logic for their inclusion is well understood.

Fig. 2.2

A typical subsea system and its functional and physical architecture

Several concepts are developed and prioritized, and the front runner is taken forward for greater definition. The physical architecture gradually evolves into the middle section of Fig. 2.2, which is similar to the left-hand side but with more information, and “there is a one-to-one relationship between the functional components and their physical realization” (Yasseri, 2014a). All major components of the physical architecture (middle part of Fig. 2.2) are defined by their physical properties, suppliers, position in the system, and the relationships and communication between them. It must include all known data such as the concept of operation (how the system should operate), system configuration, supplier’s operating instructions, materials, and means of communication (flow of fluid, signals, and energy) & control. “All physical constraints or limitations are also identified, e.g., physical solution for interfaces, fluid flow requirements, size (geometric compatibility), footprint, weight, and installation barges & cranes requirements are also decided” (Yasseri, 2015b).

The functional architecture is a plan that enables each function of a system to be allocated to a physical component (Fig. 2.2). “The functional design will remain almost unchanged, but the physical design will change throughout the lifecycle” (Yasseri & Bahai, 2018). The choice of physical components is governed by the available suppliers and by the need to improve or modify the installation during its lifecycle; hence the physical system will be changed to suit new conditions. The physical architecture would also change with the introduction of new capabilities (e.g., debottlenecking or expansion), new technologies (e.g., new control systems), hardware innovations, software upgrades, the necessity of replacing obsolete equipment (e.g., no spare is available), or acquiring a piece of equipment from a different vendor.

Figures 2.3 and 2.4 show examples of the functional architecture of two types of deepwater fields.

Fig. 2.3

A typical functional architecture of a deepwater development

Fig. 2.4

A typical functional architecture of a satellite deepwater field. The produced oil and gas are transported via pipeline to an onshore terminal

4 Phase-Gated-Incremental Commitment

The development of a project is a sequential process that takes several years from its inception to its completion. The time from inception to decommissioning is known as the life cycle. Life cycle models vary according to the project’s nature, purpose, use, and the procedures of the Client’s organization. There are many forms of life cycle models; however, they all share a similar set of phases. The development life cycle is divided into several phases (Fig. 2.5), and sometimes each phase is broken down into several stages (some authors swap phases and stages around). “Each phase has a distinct and definite purpose and position in the life cycle and represents an identifiable period in the life cycle of a system” (ISO/IEC 15288, 2008). These phases also mark major milestones in the development process.

Fig. 2.5

A typical life cycle model with phase-gate and review milestones

The subsea project life cycle begins with exploration and scoping (initiation)—(Phase 0)—the concession to explore granted by the government is excluded here. Phase 1 (the Appraisal Phase) focuses on identifying the Client’s needs and objectives, and exploring diverse ways of extracting the hydrocarbon and transporting it to the shore. The focus of Phase 2 (the Select Phase) is identifying, refining, and verifying the system requirements, generating a few concepts with enough detail for decision-making, choosing a front-runner, and taking it to Phase 3 (the Define Phase) for more definition. Afterwards, the project progresses to Phase 4 (the Execution Phase), Phase 5 (the Operations Phase), and finally to Phase 6 (decommissioning or retiring). A life cycle model shows how early choices would impact what can be done further along a project’s life cycle, thus enabling sensible trade-offs that can beneficially influence its viability.

Commitment to the capital investment in any large complex project is incremental. The lifecycle approach enables one to commit to the project incrementally (incremental commitment). This is achieved by inserting gates between phases. The gates are milestones and key decision-making points of the project development. A gate enables sponsors to review progress, decide on the commercial and technical viability of the project, and determine whether it is logical to proceed to the next stage by committing more funds. Phase-gated processes (Fig. 2.5) allow the timely accrual of required information for the decision-makers.

Each phase is designed to collect specific information or meet specific goals (Table 2.1). There is a major review gate at the end of each phase, where the Client’s team gets the opportunity to assess whether the phase objectives are met and decide if and how the project should continue. Reviews are a formal means that allow project sponsors to control risks (commercial and technical) and monitor changes in the project scope. Based on the deliverables and decision criteria for the phase, sponsors can also validate the business case. This is an external review by the executive sponsor, stakeholders, and others who were not involved with the design.

Table 2.1 Activities in the design phases of a large capital project

At each gate, project managers and sponsors should review the following:

  • Whether risks in the phase have been identified and managed.

  • Whether the phase met its objectives.

  • Whether to approve any changes in scope or schedule since the last gate review.

  • Whether to abandon the project or proceed (with or without modification in scope).

The criteria for successful gating are:

  • Gates must control decisions, not activities. Deliverables, decision criteria, and decision-makers must be clearly defined.

  • Division of the project into a suitable number of phases that are structured, scalable, simple, and adaptable.

  • The gate must be for transitioning a project to the next phase and must logically be a milestone in the development process.
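
These criteria imply that each gate should be an explicit record of deliverables, decision criteria, and decision-makers. The sketch below is one hypothetical way to capture such a record; the gate name, deliverables, and roles are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """A decision point between phases: it controls decisions, not activities."""
    name: str
    deliverables: list = field(default_factory=list)   # evidence required at the gate
    decision_criteria: list = field(default_factory=list)
    decision_makers: list = field(default_factory=list)

    def ready_for_review(self, submitted: set) -> bool:
        """A gate review can proceed only when all deliverables are in hand."""
        return set(self.deliverables) <= submitted

gate_2 = Gate(
    name="Gate 2 (end of Select Phase)",
    deliverables=["Concept selection report", "Preliminary RAM analysis", "Cost estimate"],
    decision_criteria=["Phase objectives met", "Business case still valid"],
    decision_makers=["Executive sponsor", "Client project manager"],
)
print(gate_2.ready_for_review({"Concept selection report", "Cost estimate"}))  # False
```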

5 Fitness-For-Service

The notions of Fitness-For-Service (FFS) and Fit-For-Purpose (FFP) are promoted to assess a system’s or a product’s suitability for service, i.e., that it does the job. The two phrases FFS and FFP are used interchangeably in this chapter. They are used to mean that a system, based on rational reasoning, is suitable for a specified purpose. The proof can be qualitative as well as quantitative. If a system is poorly assembled, has sustained some damage, or is not suitable for its intended purpose, then it is considered not ‘Fit-For-Purpose’. That is, the system cannot reliably deliver what is expected of it; either it fails frequently, or it does not function as it is supposed to. For example, if a component is ‘bolted on’ (added as an afterthought) to a system to enhance it somehow (e.g., to become more reliable or safer), but the add-on component does almost nothing, then that component is not FFP.

Quality, in other words, is decided based on FFP (or FFS). This means that quality is not a system’s intrinsic property; it is assessed in the context of what the system must deliver (namely, whether it provides the required functionality). For instance, hardware cannot be judged to be a ‘high quality’ product in isolation, because quality is an attribute of the relationships among the system’s components and the purpose for which a component is inserted into a system. But a product can be judged as ‘low quality’ if it can be shown that it is not good for any use. The purpose of a system (its function), and the quality that is needed for delivering the stated function, must be well-defined to enable one to make a judgment on the system’s quality. It is difficult to imagine a purpose for a subsea control system that cannot perform an emergency shutdown in an orderly manner, as safely as reasonably practicable, and within an acceptable time. Some systems may have many functionalities (purposes), and over time, some original functionalities may no longer be needed. The system’s envisioned operating conditions (normal and abnormal) and the system’s intended purpose are coupled and must be completely identified and well-defined, leaving no room for interpretation.

With the notion of “Fitness-For-Purpose”, one can judge, as well as question, the completeness and relevance of the defined purposes to ensure compliance and possible enhancement. FFP equates quality with the fulfillment of a specification or stated objectives. The intention is to validate a system for its intended use, nothing less or more. There may be multiple purposes such as safety, reliability, availability, or some specific quality requirements (possibly all), which are requested by the client. Thus, FFP is a practical concept for assuring a system meets quality, measured against the client’s requirements. This definition gives the impression that ‘value for money’ is a synonym for Fitness-For-Purpose; however, neither affordability nor cost-effectiveness criteria are necessary elements of FFP.

The notion of Fitness-For-Purpose has emerged as a guide to direct efforts toward an installation with zero faults (i.e., no downtime). The ultimate measure of perfection is ‘zero faults’, which is an excellent goal but impractical to achieve. Perfection is meaningless if a system does not deliver the required service.

Proving a system is ‘Fit-For-Service’ requires gathering many kinds of evidence, which involves collecting data while the system development is in progress. Such evidence includes reviews, V&V of requirements and design, analytical methods, simulation, and particularly test results, to support justifiable confidence in the as-built system.

Generally, a ‘claim’ is made that a piece of equipment is FFS; the claim is then qualified by assembling relevant evidence (Yasseri, 2015a) that supports “the equipment would function within defined limits and with a sufficient confidence level” (Woody et al., 2014). Such confidence is assured by prototyping, simulation, physics-based analyses (analytical and numerical), reliability analysis methods (FMECA, RAM, etc.; see e.g., IAEA, 2001), risk assessments, visual inspection, and of course testing.

6 State of Practice

Classification societies have published recommended practices and guidance notes on the qualification of subsea production systems. For example, API-RP-17N (2023) and API-RP-17Q (2023), DNVGL-RP-A203 (2019), Bureau Veritas-NI525 (2020), ABS (2017), and Lloyds Register (2017). Figure 2.6 shows the DNVGL-RP-A203 (2019) procedure. API recommendations are similar with some variations (Fig. 2.7).

Fig. 2.6

DNVGL-RP-A203 (2019), Technology qualification process (TQP)

Fig. 2.7

API-RP-17N (2023) Technology qualification process flow chart

The primary target of these codes of practice and guidance notes is “New Technology”, but their definition of new technology is quite wide and includes almost everything if the site is greenfield, and even includes some brownfield sites. The term ‘technology’ in these codes refers to a piece of “equipment that uses a physical law to satisfy a purpose”. They recommend that both the underlying physics and the equipment be qualified. For example, if existing topside equipment is modified for subsea application (i.e., marinized), it must be qualified. Generally, if no new physics is involved, then it is only required that the equipment be qualified for its new working environment (effects of corrosion, marine fouling, etc.).

These codes require reliability analyses to be conducted in Phase 1 for the entire system, with a level of detail that is commensurate with the definition of the system at that phase. The stated purposes are to:

  • Identify possible design weaknesses.

  • Compare and contrast alternative designs, architecture, equipment, materials, etc.

  • Estimate costs at each phase of the lifecycle, with sufficient accuracy necessary for the decision-making.

  • Perform availability assessments and check if the architecture would meet the client’s target.

  • Define requirements, procedures, tooling, and required results for performing reliability testing.

  • Specify sparing requirements and sparing policy.

  • Identify probable intervention needs, their practicality, and intervention tooling & methods.

More than one reliability analysis method must be used to tease out all failure modes. Another tool to be used together with reliability methods to mitigate technical risks is the Technology Readiness Level (TRL).

DNVGL-RP-A203-2019 and API-RP-17N-2023 recommendations define “technology as ‘new’ when it is not used (i.e., its suitability is not proven) in a similar field under similar conditions”. Thus, if Commercial Off-The-Shelf (COTS) technology is used in an environment in which it has not been used before, it must be considered new, but not unknown. This implies that the TRL of every piece of subsea equipment for a new field is, at best, TRL = 5 (Yasseri, 2013).

7 Systems Engineering V-Model

Systems Engineering (SE) provides processes for developing a system that can satisfy the client’s requirements and needs against the background of conflicting constraints. “SE is an all-encompassing integrative activity, which encourages and coordinates the collaboration of several disciplines, to deliver a coherent operable system that is not dominated by the perspective of any single discipline” (NASA, 2007). INCOSE (2015) gives this definition: “SE is an iterative process of top-down sequential synthesis and development to produce a system that meets, (in a near-optimal manner), the full range of the client’s requirements”.

SE does not deal with the physics of the problem but provides processes, which can be employed to meet both the client’s business needs and the technical requirements in engineering the system. System Engineering processes have been successfully applied for many purposes such as (NASA, 2007):

  • Definition of systems of systems—identification of system(s) that satisfies the client’s needs.

  • Development of system requirements—development of conceptual architecture, concepts trade-offs, configuration management during development, and system integration.

  • Validation, verification—operability evaluation and acceptance tests, sparing policy, and planning for maintaining the system over the whole lifecycle, including interventions, expansion, and refurbishment.

The process starts at the left-hand side of the SE’s V-Model (Fig. 2.8) with the definition of the ConOps and the client’s operational needs, namely how the system is supposed to operate and function. Then the system is deconstructed (decomposed) into functional subsystems and components, for ease of managing its development. The aim of breaking down a system into its constituent components is to create a logical chain linking the operational needs to system requirements, to the specification of subsystems, then to the specifications for their integration, and then to acceptance testing. Moving along the left-hand side of the V, one can partition the system hierarchy into functional and physical collections of components which can be designed by the discipline experts and tracked to their logical conclusion. The use of conceptual models early in the project development is encouraged by SE processes to gain insights into the technical feasibility of a concept. A better understanding of the client’s requirements enhances the chance of delivering what the client asked for. The V-model also allows concurrent activities.

Fig. 2.8

Systems engineering V-model

The horizontal line in the middle of the “V” in Fig. 2.8 depicts the handover of the design activities to the discipline engineers, who specialize in specific engineering disciplines, to produce the physical system. The position of this dividing line determines the overlap between the discipline engineers and the systems engineers engaged in the integration processes. The horizontal line as drawn shows a modest overlap; a total separation is not implied. Interface management and some integration and qualification activities take place during the design.

The right-hand side of the V-model depicts the integration, Verification & Validation, and qualification activities. Integration involves the assembly of parts into components, the assembly of components into subsystems, the assembly of sub-subsystems into higher-level subsystems, and the assembly of subsystems into the final system. These parts, components, and assemblies must be qualified which could involve testing of the newly assembled sub-subsystems to check their compliance with the requirements; this process is known as verification (Grady, 2007). After verifying the system against the system requirements, the system must be validated.

The V diagram graphically shows how the design activities flow down from system requirements to functional design, and finally to the physical design, in an iterative loop of interrelated activities. Several factors, such as technology selection (Yasseri, 2012), degree of standardization, hardware interface requirements, as well as the choice of concept, would influence the nature and level of iteration and the possibility of concurrent engineering (Yasseri & Bahai, 2019).

8 Primary Loops of Development Process

The V-model requires that a system be decomposed into functional subsystems, which can be designed with fewer complications. This allows subsystems to be designed and fabricated in parallel (concurrent engineering) according to verified and validated system specifications developed in the previous phase. The SE processes allow the concurrent development of subsystems of a large system which accelerates the project development by involving many disciplines concurrently and encourages the engagement of vendors.

The V-model breaks down the system definition into three separate loops (Figs. 2.8 and 2.9), which are the main loops of the system development. The first loop is the design loop, which deals with components, assemblies, and subsystems. If the system is of a modular type, then the subsystem design and verification can be done in parallel. The installation (implementation) takes place at the bottom of the V, which is a collection of many ‘parallel Vs’, equal in number to the subsystems that are inserted into the system.

Fig. 2.9

Three loops of the development process-Left-hand side of the V-model (adapted from NASA, 2007)

The Systems Engineering process is iterative and is applied top-down sequentially by system engineers to decompose a complex system into manageable parts, for each of which an expert can be found. “The client’s goals, requirements, and needs are described in a set of top-level system requirements that are input for the next level of decomposition” (NASA, 2007). A complex system is decomposed sequentially through several levels. At each level, more definitions are added, and performance requirements are cascaded down. This process leads to nested loops (Fig. 2.9), indicating the repetitive nature of the process. “The loops are the requirements loop, design loop, verification loop, and control loop. It also includes input & output definitions” (NASA, 2007). “These loops link requirement analysis, functional analysis & functional allocation, and synthesis” (see NASA, 2007 for more details).

The inputs are the customer’s requirements, objectives, needs, and the list of all constraints. The design process starts with understanding the client’s needs and wants, the system operating environment, and the battery limit. Before searching for a concept, it must be determined whether the client’s inputs are primary requirements or nice-to-have features (wants). Separating needs from wants allows the system engineer to concentrate on needs as the primary objectives and define a system that satisfies the requirements, rather than an implementation directed by the client’s wants.

Requirements analysis is the elicitation and validation of the client’s requirements and needs, which is the basis of the system’s functional and performance requirements. The client’s requirements are translated into the system requirements, namely what and how the system must function. Development starts by translating the Client’s need into a set of agreed requirements, from which the system requirements are established. The system requirements are then flowed down to establish requirements for subsystems and equipment. “Parts, assemblies, and subsystems are successively qualified against their requirements” (Bahill & Henderson, 2005).

Functional Analysis & Allocation’s purpose is to allocate functions and performance requirements to lower-level subsystems, which defines the system successively to its lowest level. “High-level system requirements are flowed down for allocating them to subsystems and components. Defining allocated functions in adequate detail provides design specifications and verification criteria to support the development of the entire system” (INCOSE, 2015). Functional and performance requirements for lower-level subsystems must be tied to higher-level requirements. Functional analysis and allocation activity will ensure consistency of the requirements and may require another iteration of the requirement’s analysis. This is the Requirements Loop, which is iterative.

Synthesis defines the properties of the hardware that makes up the subsystems, which leads to the complete description of the physical architecture. Every piece of hardware (part) must support at least one of the functional requirements; however, several functions can be delivered by a single part.

The Design Loop is the process of inspecting and assuring that the functional architecture, leading to the physical system, can deliver the desired functionalities at the desired performance levels. The design loop allows revisiting how the system would function and whether it is desirable to optimize the system further.

The Verification Loop is for verifying that the solution satisfies the requirements. System requirements at each level of the hierarchy must be verified. During the functional analysis and allocation, baseline documents are developed which define how every requirement must be verified. As each component is integrated into the system, it is verified for compliance with all higher-level requirements. Visual inspection, demonstration, simulation, or tests are used for the verification. Verification strategies and plans are developed to support the requirements. Validation is a system-level activity in which the system performance is compared with the requirements.

9 Requirement Analysis

The quality of a product only has meaning if it fulfills the service provider’s needs. If a piece of equipment performs well the function for which it is inserted into a system, then it is considered a quality product (i.e., Fit-For-Service). Thus, “what the system must deliver, its performance and availability” must be defined (FAA, 2008). “Requirements are linked to ConOps via traceability matrix” (INCOSE, 2015) and “cascaded down into requirements for subsystems, sub-subsystems, and components” (Hull et al., 2002).

System requirements are defined for two operational conditions:

  • The capabilities under normal operational regimes, which specify the expected behavior and the desired performance.

  • The desired behavior during upset (abnormal) conditions, i.e., the required resilience and survivability (robustness), and how to control the system during an upset condition and return it to normal operation.

The first loop of product development (Fig. 2.9) is about the requirements that define what is required of a system and its purpose. How well a system fulfills its functions, or how well it suits its purpose, is an indication of how good the system is.

Requirements engineering is the systematic effort to collect, verify, specify, agree, validate, and manage the client’s needs and goals while considering the user’s interaction with the system, technical issues, and economic & business concerns. These activities envelop the whole lifecycle, involving dispersed teams of specialist engineers and several supply chains over many regions for a few years. Thus, producing requirements that are complete, verified, and stable is an important task of systems engineers, since all design activities are cascaded from the high-level requirements.

The following three concepts are helpful when dealing with large capital projects:

  • Abstraction: i.e., seeing the big picture, not details. The functional architecture is an abstraction of the system's functions without much detail.

  • Decomposition: i.e., breaking a system down into its subsystems and components, so that they can be studied in isolation by relevant engineers. A system is decomposed along the line of suppliers’ specialization. No decomposition achieves perfect decoupling between parts; however, decomposition enables the identification of specialist engineers and competent suppliers.

  • Projection: i.e., an understanding of how the system should work (a particular view of the system) and describing only the pertinent aspects. While constituent (decomposed) components are designed independently, they share a common mission (purpose) as members of one system.

Requirements analysts use these concepts to decide what requirements are necessary and sufficient and how to satisfy them. The system engineer by abstraction, decomposition, and projection reduces a complex problem to its simplest form and investigates if existing solutions or off-the-shelf items can be used. Ideally, the decomposition must be directed toward components (or solutions) that already exist, which can be used albeit with some modifications. However, adopting existing solutions and off-the-shelf items could require substantial work to integrate them into a system in a different context.

“Each requirement may impact many parts of a system and may need several test cases to verify it” (INCOSE, 2015). The integration of a system with its environment and user interaction are also requirements.

There are two types of requirements (NASA, 2007):

  • Functional requirements define the system's purposes, i.e., what services it provides and how.

  • Non-functional requirements address the practicalities: how the system must operate, and the regulations and standards that must be obeyed. They also cover attributes that cannot be expressed as functions—for example, installation security, reliability, maintainability, and availability.

Non-functional requirements may also include the following:

  • Can the system be expanded, or adapted to suit new conditions?

  • Can the system be fabricated in existing construction yards?

  • Can the system be broken down to suit road and sea transportation restrictions?

  • Can the system be installed using existing barges and lifting capabilities?

  • Can the system be shut down quickly in an emergency and started up with reasonable effort and time?

  • Are human–machine interfaces suitable and is user access acceptable?

  • Are suitable materials, skill sets, and manufacturers available? and

  • Any other constraints.

10 Concept of Operations (ConOps)

The Concept of Operations (ConOps) document (Fig. 2.10) describes how the system should operate (Frittman & Edson, 2010; GOES-R, 2020). According to IEEE (1362 and 1220), the ConOps is a “user-oriented document” that describes how a system will be used, and includes: who will use it; when they will use it; how they will use it; and for what purpose they will use it.

Fig. 2.10
Contents of the ConOps document, from operational and user perspectives: 1. scope, purpose, and system review; 2. references; 3. description of operations from the users’ perspective; 4. operational needs; 5. system overview; 6. operational environment; 7. support needs; and 8. scenarios.

Content of the ConOps document

The ConOps document defines the user’s needs and expectations for the system developer, the procurement team, and the other stakeholders. ConOps establishes a shared understanding among all stakeholders. The ConOps document (Fig. 2.10) is prepared at the beginning of the requirements analysis, describing what the system should do (not how it will do it) and its rationale (why). It should also identify any critical, top-level performance objectives and requirements. The human–machine interface must also be defined.

The primary considerations are (GPO, 2005):

  • The client’s team must be involved.

  • The ConOps must mature as the project moves through the project lifecycle.

  • Must allow performing “what if” analysis.

  • Should help to reach a consensus before the requirements process begins.

The principal function of the ConOps is to establish a collective understanding among all stakeholders regarding the expected functionality and the expected level of performance. It is also used to describe/define some of the high-level concepts in support of detailed engineering, installation, integration, verification, and validation processes.

11 Baselining

Paraphrasing Barry Boehm: “Constructing an installation from a specification is like walking on water—it is safer if it is frozen.”

A design, product, or procedure at the end of a phase is called a “baseline”, provided it has been reviewed and agreed upon, and then that level of progress is frozen. Any change thereafter can only take place through the project’s formal change control management. A design that is baselined becomes the basis for the next stage of improvement evolving toward the final stage of development. For example, a System Requirement Specification (SRS) is frozen (i.e., baselined) to move to the next phase of development as a basis for completion. Once a design (or product) is baselined then no change can take place haphazardly, thus providing a stable reference for further improvement.
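
A minimal sketch of the baselining idea: once a deliverable is frozen, it can only change through a recorded, approved change-control step. The class, field names, and example values are hypothetical illustrations, not a prescribed tool.

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """A frozen snapshot of a deliverable at a phase-end milestone."""
    name: str
    version: int = 1
    frozen: bool = False
    change_log: list = field(default_factory=list)

    def freeze(self) -> None:
        self.frozen = True

    def change(self, description: str, approved_by: str) -> None:
        """Changes to a frozen baseline require formal approval."""
        if self.frozen and not approved_by:
            raise PermissionError("Frozen baseline: approval required")
        self.version += 1
        self.change_log.append((self.version, description, approved_by))

srs = Baseline("System Requirement Specification")
srs.freeze()
srs.change("Relax flowline pressure requirement", approved_by="Change board")
print(srs.version, srs.change_log)
```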

Thus, a baseline is a frozen picture of the design evolution at a specific time (generally a project milestone) in the system development lifecycle, signaling the end of a phase. It becomes the basis for improvements under change control management in the following phase, and hence it needs to be a stable reference for design evolution. The objective is to mitigate the vulnerability of all key deliverables to haphazard, uncontrolled changes.

Figure 2.11 shows a typical baseline waterfall, which includes the requirements, functional design, physical design, system, operational, maintenance, and monitoring baselines.

Fig. 2.11

Specifying a baseline

12 Requirements Traceability

“The purpose of requirements traceability (Fig. 2.12) is to ensure every low-level requirement is linked to the higher-level requirements” (Dick, 2002, 2012); however, some high-level requirements may impact many low-level requirements. Everything should be traceable from requirement specifications to design documents, interface control documents, and down to test procedures for acceptance. “It is important to establish the link between requirements, design specifications, and supporting data for design (known as the design basis) since providing the original context in which a requirement was defined enables any future modification of the requirement to be checked to see if the originally defined constraints still control” (Königs et al., 2012).

Fig. 2.12

Requirements traceability (see also Fig. 2.8)

The traceability aims to create consistent links between test cases, user requirements, and project specifications. It should be possible to consistently cross-reference between components and system requirements, namely the functionality of any equipment is traceable to the client's requirements. For this purpose, the Client’s requirements are assigned a distinct identification number for designation, which enables referencing.

Each client’s requirement is tied to at least one system requirement, and vice versa. The system engineer enters these links in the tracing matrix to demonstrate that all client requirements have been considered. This is also used to show the completeness of the system specifications and the correspondence between the technical implementation and the requirements. The tracing matrix is also used for the compilation of the test plans to demonstrate that all requirements have been tested.
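
A minimal sketch of such a tracing matrix and the completeness checks it supports; all requirement IDs and test names are hypothetical.

```python
# Each client requirement is linked to system requirements, and each
# system requirement is linked to at least one verification test.
client_to_system = {
    "CR-001": ["SR-010", "SR-011"],
    "CR-002": ["SR-020"],
}
system_to_tests = {
    "SR-010": ["FAT-03"],
    "SR-011": ["SIT-07", "FAT-04"],
    "SR-020": [],            # gap: no test case defined yet
}

# Forward completeness: every client requirement maps to system requirements.
unallocated = [cr for cr, srs in client_to_system.items() if not srs]

# Test coverage: every system requirement must have at least one test case.
untested = [sr for sr, tests in system_to_tests.items() if not tests]

print("Unallocated client requirements:", unallocated)   # []
print("System requirements without tests:", untested)    # ['SR-020']
```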

The primary purpose is to establish links between V&V tests and system requirements (Fig. 2.12). After cascading down higher-level requirements to equipment, appropriate test plans are defined to verify whether the system will meet the requirements. The following three items must be considered:

  • Whether the Client’s requirements and needs are accounted for.

  • All components are necessary, and in combination are sufficient, for adequately meeting the client’s needs.

  • The test plans will unambiguously verify them.

The circular traceability links enable precisely assessing what will be impacted if a requirement changes, and if there is a choice to avoid the proposed change.

Figure 2.13 gives an example of the decomposition of a system requirement into many component requirements. The two essential sufficient and necessary conditions are:

Fig. 2.13
A decomposition example: the system requirement “during start-up, MEG must be injected into the production system; the injection system shall inject at a rate adequate to cope with XX cubic meters of gas” is decomposed, via a decomposition argument, into four component requirements.

Tracing requirements through a satisfaction relationship

  • Sufficient: are the compiled low-level requirements sufficient? and

  • Necessary: is every low-level requirement necessary?

13 Reliability Assessment for Assurance

The reliability of any modern equipment that is well designed and manufactured, with suitably chosen materials, and tested, is generally very good. However, in practice, several items are bundled together to make an assembly; hence the assembly’s reliability is lower, and the Mean Time To Failure (MTTF) of the assembly is shorter, because there is more equipment that may fail. The choice of architecture based on reliability can help to optimize the MTTF.
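
This effect can be quantified. For $n$ independent components in series, each with a constant failure rate $\lambda_i$, the assembly failure rate is the sum of the component rates, so the MTTF shrinks as more equipment is bundled (a standard result, consistent with Rausand & Høyland, 2004):

$$\lambda_{\text{assembly}} = \sum_{i=1}^{n} \lambda_i, \qquad \text{MTTF}_{\text{assembly}} = \frac{1}{\sum_{i=1}^{n} \lambda_i}$$

For example, bundling five components, each with an MTTF of 10 years, gives an assembly MTTF of only 2 years.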

Reliability, Availability, and Maintainability (RAM) analyses are used to obtain the functional architecture with the most advantages, considering all constraints. There is a multitude of measures and methods, such as MTBF (Mean Time Between Failures), MTTR (Mean Time To Repair), and the Reliability Block Diagram (RBD), that can be used to achieve the client’s goals within reason (Fig. 2.14). These techniques are used to determine the most promising functional architecture for a field. At the early phase of development, historical failure data (OREDA, 2009) are employed to estimate the availability of a system in pursuit of meeting the project’s target availability.
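
A minimal RBD-style sketch of how such early availability estimates are combined; the MTBF/MTTR values are hypothetical placeholders for historical (e.g., OREDA-type) data.

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability of a single block."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*blocks: float) -> float:
    """All blocks are needed: availabilities multiply."""
    a = 1.0
    for b in blocks:
        a *= b
    return a

def parallel(*blocks: float) -> float:
    """Any one block suffices: combine unavailabilities."""
    u = 1.0
    for b in blocks:
        u *= (1.0 - b)
    return 1.0 - u

tree     = availability(mtbf_h=60_000, mttr_h=360)       # hypothetical values
manifold = availability(mtbf_h=90_000, mttr_h=720)
scm_a = scm_b = availability(mtbf_h=40_000, mttr_h=240)  # redundant control modules

system = series(tree, manifold, parallel(scm_a, scm_b))
print(f"Estimated system availability: {system:.4f}")
```

Comparing such estimates across candidate architectures (e.g., with and without the redundant control module) is how the most promising functional architecture is identified at this early phase.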

Fig. 2.14

Components of reliability analysis

At the physical design phase, the supplier’s and the client’s failure databases become available, and they are added to the historical data for a more accurate estimation of system reliability. The primary tool to capture all probable failure modes, their effects, and their criticality is FMECA (Failure Modes, Effects, and Criticality Analysis). The Define Phase is a suitable time to perform the first FMECA, and it should be revisited in the Execution Phase when the physical architecture is almost complete. The aim is to identify the weaknesses and potential failure modes, rank them, assess their criticality, suggest design modifications to avoid them, and, if modification is not possible, mitigate their effect. The level of detail of the FMECA must be commensurate with the project development phase. It is challenging to undertake RAM analysis at an early phase of development since little is known about the physical system. Nevertheless, even a rough RAM analysis is useful in assessing if the target availability is achievable. Before performing the RAM analysis, a Systems Description Document must be prepared to enable a common understanding among system engineers and designers. The content of this document is a description of all components and their functions as well as their interfaces. This document also defines the expected level of performance for all components, which is used in the system’s RAM analysis at the Define Phase.

The primary objective is to identify all possible ways that a system can fail to perform. “A failure state results if one or more components malfunction (e.g., not performing well or exceeding their acceptable limit). The resulting state is called a fault or a failure mode” (Rausand & Høyland, 2004). “A component may have several failure modes and each failure mode may have many causes, mechanisms, and effects” (Rausand & Høyland, 2004).

Early in the Define Phase, only suppliers of long lead items have been selected and very few pieces of equipment are known in enough detail; it is therefore useful to perform a functional FMECA to identify potential failures for each function according to the hierarchy of functions, because a failure of a lower-level function leads to failure of a higher-level function. After Phase 4, when preferred suppliers are selected and the physical design has taken shape, an FMECA is performed for interfaces to verify compatibility across all interfaces of the system’s components. Then specifications for equipment are prepared (datasheets), and the preferred suppliers are invited to tender. Towards the end of the Define Phase, vendors are selected and possibly contracts to supply are placed.

When contracts are placed for all hardware (the Execution Phase), a System Breakdown Structure (SBS) is constructed showing the hierarchy of components and subsystems, which is like the function tree of Fig. 2.15. With the SBS as input, a detailed FMECA is performed to identify system failures based on the failure modes of lower-level components, moving step by step toward the higher levels in the functional components hierarchy. The FMECA is performed by posing the following questions (Rausand & Høyland, 2004):

Fig. 2.15

Failure hierarchy (adapted from IEEE 1220)

  • What are the credible failure modes of each part, component, and assembly?

  • What are the possible failure mechanisms of the identified failure modes, and their possible effects?

  • Is the failure on the safe or unsafe side? (The concept of a “fail-safe” system.)

  • How can a failure be detected?

  • What provisions are provided to stop the failure’s progression or mitigate its effect?
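
In worksheet form, each failure mode identified by these questions is typically scored for severity, occurrence, and detectability and ranked by a risk priority number (RPN = S × O × D). The sketch below is a hypothetical illustration of that ranking step, with made-up failure modes and scores, not a prescribed FMECA format.

```python
# Each entry: (failure mode, severity 1-10, occurrence 1-10, detection 1-10),
# where a high detection score means the failure is hard to detect.
fmeca_rows = [
    ("Choke valve fails to close",     9, 3, 4),
    ("SCM hydraulic supply leak",      6, 5, 3),
    ("Manifold pigging loop blockage", 5, 4, 7),
]

ranked = sorted(fmeca_rows, key=lambda row: row[1] * row[2] * row[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"RPN={s*o*d:4d}  {mode}")
# High-RPN items are candidates for design modification; if modification is
# not possible, mitigation measures are specified instead.
```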

RAM analysis is deployed for verification and validation of the system’s components at every level of the evolving development (using TRL as an indicator) and compared against the agreed client’s operational requirements. Complications in manufacturing and system integration could lessen the system availability. Therefore, to offset the influence of manufacturing errors on the system availability, designers deliberately aim at an availability above the agreed operational availability target, while addressing every manufacturing limitation.

14 Technology Readiness Level

Tests and simulations can only eliminate some of the uncertainties. The Technology Readiness Level (TRL) scale is another tool to manage technical risks. TRL is a useful tool for tracing the progress of technology toward readiness and maturity (API-RP-17N, 2023). However, “TRL is not a measure of the quality of technology to be inserted into a system” (API-RP-17N, 2023).

“The TRL for a piece of existing equipment that is inserted in a new system is assumed to be at TRL = 4, or at best at TRL = 5” (Yasseri, 2014b). The logic behind this decision is that a new subsea field is not the same as an old one; they are similar but not the same. Consequently, every piece of equipment must be qualified for use in the new environment and operational conditions.

The notion of TRL was advanced by NASA in the 1970s. Later, NASA refined the idea and published the metric as a 9-point scale. Many industries have adopted NASA’s 9-point TRL scale but modified it to suit their needs (Yasseri, 2013). Table 2.2 is adapted from API and shows API’s definitions alongside a NASA-type TRL. TRL = 1 on the NASA scale represents a technology as a basic idea, probably supported by basic science. The development is pushed along the TRL ladder until it reaches maturity, and then readiness, which is proven by working in its intended operating environment.

Table 2.2 API definitions of TRLs and its equivalent NASA’s scale (based on API-RP-17N, 2023)—(Yasseri, 2013)

Table 2.3 shows processes that are used to reduce uncertainties of the technology during its development phase.

Table 2.3 Uncertainty reduction at various levels of the TRL scale (adapted from Yasseri, 2015b)

15 Verification and Validation

Components are tested for acceptance at the factory, in what are known as Factory Acceptance Tests (FAT); some components may require extended factory acceptance tests (EFAT). Tested components are delivered to the fabrication yard to produce larger assemblies or modules for ease of transportation and installation. Modules are then transported to the quayside for integration and are tested there before being installed in their working environment and integrated with the previously installed modules. This continues until the integration and verification of the entire ‘as-built’ system is complete, at which point the whole system is tested and commissioned for handover to the client’s team. When the entire system is installed on the seabed and commissioning tests are complete, the responsibility for operation is gradually handed over to the client’s operations team. The handover includes providing support, devising a sparing policy, instructions for operation, operator training, and all other enabling items that assure the smooth running of the operation and the maintenance of the system in good working condition. During the handover period, acceptance tests are organized by the client’s team to confirm that the system complies with the client’s requirements. The handover and warranty periods are intended to allow a smooth transition of responsibilities from the primary contractor to the client’s operations team. The entire process is called Verification and Validation (V&V), which is carried out through a myriad of tests at every stage of system integration (Fig. 2.16).

Fig. 2.16
V&V life cycle: from agreed, validated user requirements and vendor assessment and selection, through a project-specific validation plan, functional and design specifications, reviews and risk assessment, and installation, operation, and performance qualification, to the validation summary report

Verification and Validation procedures are used to confirm that a product, service, or system meets its defined specifications and to judge whether it is FFS. “Verification is a quality control process that is used to evaluate whether a product, service, or system complies with regulations, specifications, or conditions requested by the client at the beginning of the development” (Babuska & Oden, 2004). “Validation is a quality assurance process for obtaining evidence that, with a high degree of confidence, proves a product, service, or system delivers the agreed specified requirements” (Plant & Gamble, 2003). The ISO 9000 (2015) definition is based on the general field of quality, and its focus is on providing “objective evidence” that all requirements have been satisfied. According to ISO 26262 (2011), “the validation is focused on providing proof that the system will meet its intended purpose.” ISO defines the verification process in broad terms.

Figure 2.17 shows a possible flow diagram for the V&V activities. The process begins with reliability analyses and ends with V&V by testing, prototyping, simulation, and analytical approaches. The approved system requirements are used to define the subsystems’ requirements and specifications, which are then validated to assure that they are feasible, necessary, and exhaustive, in light of the notion of a ‘necessary and sufficient’ condition.

Fig. 2.17
A possible flow chart for verification and validation: client requirements, system requirements specifications, hardware and software requirements specifications, field data, system boundaries, concept of operations, and the functional and physical architectures feed the requirements and architecture artifacts and the system reliability and availability analyses

Tools are qualified, whereas processes are validated; according to this definition, qualification is a subset of validation. All fasteners (rivets, nuts, and bolts), as well as welding, are considered tools for joining, but their FFS must still be evaluated. In this respect:

  • Fasteners, weldments, and materials, as well as the procedures for using them, are qualified as tools for system building. They are procured from trusted suppliers and may be accompanied by a certificate of FFS. However, basic verification, based on statistical sampling, should still be undertaken.

  • Fasteners and weldments in assembled equipment must be validated to assure that they are capable of allowing the equipment to fulfill its purpose. Results from the fasteners’ qualification tests are appended to the equipment’s validation report.

The Verification and Validation strategy is a set of actions consisting of tests, inspections, and trials. Each requirement may require several actions; a minimal sketch of such an action record follows the list below.

Each action must be:

  • Suitable for checking the requirement under consideration.

  • Timely; implementation at an early phase is preferable.

  • Explicit about the necessary testing tools.

  • Explicit about what constitutes a successful outcome.
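A minimal sketch of such an action record, assuming a simple in-house tracking scheme (the field names and example data are illustrative):

```python
# A minimal verification/validation action record; field names are
# illustrative assumptions, not taken from any standard.
from dataclasses import dataclass, field

@dataclass
class VVAction:
    requirement_id: str          # the requirement this action checks
    method: str                  # "test", "inspection", or "trial"
    phase: str                   # project phase in which it is performed (earlier is better)
    tools: list[str] = field(default_factory=list)   # necessary testing tools
    success_criterion: str = ""  # what a successful outcome looks like

# A requirement may need several actions (hypothetical examples):
actions = [
    VVAction("REQ-017", "test", "FAT", ["flow loop"], "leak rate below datasheet limit"),
    VVAction("REQ-017", "inspection", "fabrication", ["UT probe"], "no weld defects above code limit"),
]
```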

‘Verification and validation’ and ‘qualification’ are used interchangeably in some of the literature. For example, IEC 61508 (2010) defines the qualification process to encompass V&V.

Figure 2.18 shows the V&V activities in parallel with the development processes. Any requirement may give rise to several verification tests at every phase of the evolving project. If a requirement is fulfilled by chance, owing to the beneficial effect of emergent behavior, such a chance event must be confirmed by tests at the level at which the emergent behavior arises.

Fig. 2.18
Qualification activities shown in parallel with the development activities: from client requirements studies, through implementation, to commissioning and handover, accumulating reliability assurance evidence along the way

Evidence for quality assurance is collected throughout the development phase using a combination of testing and simulation. Validation based solely on tests, or solely on analytical methods, would leave some faults undetected. A balanced approach to confirming compliance has a better chance of controlling costs and enhances confidence in the system’s performance. Simulation is preferable on cost grounds, with testing best used to fill the knowledge gaps: simulation cannot, for example, detect manufacturing errors, and visual inspection alone may not be sufficient for judging fabrication defects.

16 Provision of Evidence

There are several methodologies for gathering evidence to support reliability assurance, and naturally some overlap between them should be expected. The chosen procedures (methods), and their depth and detail, are based on the “need to know”, or “necessary and sufficient”, principle, and hence the choice depends on the problem at hand. There must be a purpose to gathering information. Sometimes evidence is collected for the design activities and hence is indispensable, since the design effort, however exhaustive, cannot reasonably detect all probable failures and their causes. The concept is founded on the principle of ‘beyond a reasonable doubt’, which is quite rigorous, rather than on the balance of probabilities; however, reasonable doubt does not mean beyond all doubt. Sound engineering judgment is needed to avoid undertakings yielding little value. As a minimum, two different methods should be used to detect all probable faults. A particular procedure, e.g., testing, may be necessary but not sufficient (repeating the same test twice adds little), and thus it must be complemented with another method to make sure that all faults are detected. Generally, simulations and analytical methods are used to lower the cost of the testing needed for reliability assurance. Numerical approaches can replace the need for testing, for example when testing is nearly impossible or very expensive. A few approaches in use for managing V&V are listed below:

Trust-based means that hardware is sourced from a trusted supplier, or that a design is claimed by the contractor to be compliant with codes and standards and the claim is taken on trust. Generally, any claim involving analysis or simulation is verified by a trusted third-party verifier.

The certification approach means that a third party has witnessed the performance of the finished product during specified tests and has awarded a certificate of performance. The certification approach is commonly used for mass-produced items built to a standard or specification. The certificate qualifies the production facilities and assures that the prescribed standards (depending on the application area) are followed and that the product meets all stated requirements, i.e., that the product is FFP. This approach is also used to validate a manufacturer’s or fabricator’s claim that an item as sold is ‘fit for purpose’. Representatives of a verifying consultancy witness tests organized by the manufacturer and, if convinced, issue a certificate of compliance for a particular application. For example, firewalls are qualified using this approach.

The current certification approach follows the process prescribed by an applicable standard. For example, IEC 61508 is designed for industrial purposes, ISO 26262 (2011) covers the automotive industry, and DO-178B/C (2012) focuses on software for airborne systems.

Competence The cost of compliance with ever-increasing requirements is not trivial. The capability to check weld quality demands the management of personnel competence. ISO 3834:2008 defines the quality requirements for fusion welding, with an emphasis on the competence of welders and of inspection, supervision, and testing personnel (ISO 3834, 2008; AS/NZS ISO 3834, 2021). Thus, competence assurance (e.g., certified operators) is essential in delivering reliable systems, to assure that the delivered product is FFP and remains so for its design life. Personnel competence assurance is set out in ISO 9001, clause 6.2.1. Inspectors are also required to hold a certificate of competence issued by an authority.

17 Acceptance Testing

The purpose of acceptance testing is to validate the system, assuring that it will deliver the required functionalities, that is, that it is FFS. At the start of the project, all client requirements, the system purpose, key capabilities, use cases (ConOps and usage scenarios), the level of performance to be achieved, and the system’s acceptance criteria for validation are defined and documented. The System Validation Plan is produced and put under the change control process for monitoring, to ensure that the test procedure (the verification plan) is relevant, up to date, and not changed without the approved change processes. “The purpose of Test Plans is to demonstrate that a system satisfies the approved requirements, i.e., is FFS” (Engel, 2010) (Fig. 2.19).

Fig. 2.19
Test planning to demonstrate that a system meets requirements: identify test needs, devise a preliminary test plan, create and assign test requests, design and perform tests, capture and assess results, and draw conclusions; test results and recommendations feed back into concept and system design, detail design, and requirements management

The Test Plan documents the overall testing strategy, including the general test procedures, what results are to be documented, and the procedures for dealing with test failures. The Test Plan also covers the types of testing, the testing environments and tooling, the responsibility matrix, the test equipment to be used, and any other organizational procedures.

Test Protocols describe the specific testing requirements. A Test Protocol is a collection of Test Cases (use cases) that validate a specific element of the system. Each test case includes the goal of the test, the prerequisites, and the acceptance criteria, and is broken down into a series of steps. Each step includes detailed instructions, the result to expect, the actual result, and what to document. The test procedures must have sufficient detail so that a tester can perform the testing consistently without requiring interpretation; a minimal sketch of this structure follows.
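A minimal sketch of the Protocol/Case/Step hierarchy described above, assuming a simple in-house representation (names and fields are illustrative):

```python
# A minimal representation of test protocols, cases, and steps; the class and
# field names are illustrative assumptions, not from any standard.
from dataclasses import dataclass, field

@dataclass
class TestStep:
    instruction: str         # detailed instruction, needing no interpretation
    expected_result: str     # what result to expect
    actual_result: str = ""  # filled in by the tester
    record: str = ""         # what to document (readings, photos, etc.)

@dataclass
class TestCase:
    goal: str
    prerequisites: list[str]
    acceptance_criteria: list[str]
    steps: list[TestStep] = field(default_factory=list)

@dataclass
class TestProtocol:
    element_under_test: str  # the specific system element being validated
    cases: list[TestCase] = field(default_factory=list)
```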

The Client Acceptance Criteria are used to authorize the shipment of parts, equipment, or assemblies that have been tested and are ready to be delivered to the client’s site. That is, it is verified that the part, equipment, or assembly is constructed in the manner defined by the flow-down of the client requirements and fabricated in a manner that meets industry standards, good practice, and client standards. This is achieved by various procedures, such as using independent testers, witnessing the vendor’s tests, or on a trust basis.

User Acceptance Testing (UAT), for operational needs, describes testing to prove the fulfillment of what the user expects the system to deliver and how the system must function. UAT documents provide the pertinent information, data, operating environment, acceptable processes, and system functionality needed to make tests meaningful, applicable, and repeatable. These tests are completed during the FAT (Factory Acceptance Test) as well as the SAT (Site Acceptance Test) (Rahimi, 2013). If a piece of equipment is developed by the vendor’s subcontractor, then it must have a FAT and SAT plan associated with it and a certificate issued by the subcontractor.

FAT and UAT can be regarded as the partial commissioning and qualification of equipment and systems, which must be done before shipping products to the client’s site. The vendor tests the product using the client’s approved test plans and specifications to show that the system is mature and ready to be shipped to the site. For most equipment and assemblies, FAT is the focus of collecting evidence to support the verification and validation of the equipment or assembly.

FAT and EFAT (Extended Factory Acceptance Tests) are done by the manufacturer, possibly witnessed by the client’s representative, and the results are documented for use in linking test results to the requirements in the traceability matrix. The traceability matrix must show that test results are linked to one or more requirements, and hence that no requirement is forgotten. Validated components are then assembled into larger assemblies or modules, which are in turn tested to assure that they work together.
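A minimal sketch of such a completeness check, assuming the traceability matrix is held as a mapping from test ID to the requirement IDs it evidences (a simplification of a real matrix):

```python
# Check that every requirement is linked to at least one test result,
# so that no requirement is forgotten. The matrix layout is a simplifying
# assumption; IDs below are hypothetical.
def untested_requirements(requirements: set[str],
                          matrix: dict[str, set[str]]) -> set[str]:
    """Return the requirements with no linked test result."""
    covered = set().union(*matrix.values()) if matrix else set()
    return requirements - covered

reqs = {"REQ-001", "REQ-002", "REQ-017"}
matrix = {"FAT-12": {"REQ-001"}, "EFAT-03": {"REQ-001", "REQ-017"}}
print(untested_requirements(reqs, matrix))  # {'REQ-002'} has no evidence yet
```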

A complementary purpose of testing is to check whether all interfaces comply with specifications and whether all constraints have been accommodated. The integration plan, produced earlier in the project development, defines the order of component integration towards constructing the whole system. The functionality of every subsystem at every stage of integration is checked against the relevant approved requirements and must be verified following the ‘Subsystem Verification Plan’. Tests for the verification of component-level requirements are necessary because many system requirements are flowed down via several routes and levels of system decomposition (Yasseri, 2013). These efforts should ensure that the functionality of all parts of the system has been proven.

18 Insights and Implications for Practice

A practical framework has been described for delivering reliable subsea production systems based on systems engineering processes. The objective is the assurance of uninterrupted operation and the robustness & resilience of the SPS. Although subsea production systems are used as the vehicle to explain the process, the method is equally applicable to the reliability assurance of any capital project (Okaro, 2017).

Reliability assurance is a useful framework for building robustness and resilience into a system (e.g., against security threats; Yasseri, 2019). Reliability assurance also relies on mitigative policies, such as using appropriate materials, corrosion & erosion protection, and the prevention of accidents (e.g., dropped objects), other external hazards (e.g., boat impact, seismic events, storms, debris flow), geotechnical hazards (liquefaction, seabed movement), and so forth. The effectiveness of reliability assurance is judged by the availability of the system for continued operation when required.

The outlined method also supports the project’s risk control management and is aligned with the owner’s strategic objectives. The described framework aims to achieve the following objectives:

  • Meet the Client’s needs and goals.

  • Control the project cost and schedule.

The method starts with the client’s requirements (needs and objectives). It was stated that each requirement shall be (a minimal machine-checkable sketch follows the list):

  • Traceable—higher-level requirements are linked to one or more components’ requirements.

  • Unique—it should be associated with a paragraph in a document with an identifier.

  • Single—it should not concern more than one issue.

  • Verifiable—it can be verified using the project’s approved verification procedures.

  • Unambiguous—defined with an exact statement.

  • Correctly assigned—allocated to the applicable requirements, with an unambiguous paragraph identifier.
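A minimal, machine-checkable sketch of these attributes, with illustrative field names that are not drawn from any standard:

```python
# A minimal requirement record reflecting the attributes listed above;
# field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str               # unique identifier tied to a document paragraph
    text: str                 # a single, unambiguous, exact statement
    paragraph: str            # source paragraph identifier
    trace_to: list[str] = field(default_factory=list)  # linked lower-level requirement IDs
    verification: str = ""    # approved verification procedure for this requirement

def basic_checks(req: Requirement, id_counts: dict[str, int]) -> list[str]:
    """Flag obvious violations of the attributes above (not a substitute for review)."""
    problems = []
    if id_counts.get(req.req_id, 0) > 1:
        problems.append("identifier is not unique")
    if not req.trace_to:
        problems.append("not traced to any lower-level requirement")
    if not req.verification:
        problems.append("no approved verification procedure assigned")
    return problems
```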

A fundamental idea is that quality must be built into a system’s components and processes from the start of development. The system design specifications must support the quality needs of all processes so that they can be judged, or ‘deemed’, FFS. Reliability analysis will identify the ‘critical elements’ of a system architecture, which can then be used to moderate the amount of testing. The term ‘critical elements’ refers to mitigation control devices included during the design phase; these are hardware controls, not procedural ones. Risk analysis may also be used to identify critical elements.

A myriad of techniques is used for risk identification. HAZOP, HAZID, and FMECA are the most favored tools for hazard identification and assessment; all perform well in identifying failure modes. The original use of HAZID and HAZOP was to enhance system safety, but reliability analysis also employs them.

The framework also emphasizes the importance of tracing requirements to their physical implementations (design solutions). It was shown how to translate the results of requirements analysis into project-specific design requirements, from which technical specifications for equipment (datasheets) can be developed and used to prepare testing and acceptance criteria. Testing starts from the lowest-level component, then progresses to assemblies and modules, and finally the whole system is tested for compliance (Tehera et al., 2019): the validation process.

Reliability and safety analyses address separate issues, but a safety-related system must itself be reliable; thus risk analyses are used for risk reduction and for enhancing the reliability of the safety system. Note that there are two sets of safety systems, process safety and system safety; both perform the same function but are independent of each other.

Designers should never intentionally create requirements and designs that result in the system operating at the “limit”, i.e., with little or no margin. If a system is designed to meet its performance specifications with an adequate margin, then it should be rare for the system to fail rapidly when excursions beyond normal operating conditions are minor. A key objective in developing a high-reliability system is for the system to degrade gracefully, without sudden or frequent failures, and without OPEX and unplanned-intervention overruns.