1 Introduction

As humans, we design and employ systems to support our goals. The system user interface permits one to control the system in the face of environmental variability. The triad of the human (i.e., agent), system (i.e., artefact), and environment is important for understanding cognitive work (Woods and Roth 1988), as recognized in ecological interface design (Bennett and Flach 2019). Interaction additionally depends upon the situation of use (Vicente 1990). Consequently, the literature on adaptable user interfaces acknowledges the interplay among these four elements: user interface designers must consider the user, the target system, the environment, and the context of use, along with their interactions, simultaneously while designing user interfaces (Abrahão et al. 2021). The user interface design process is complicated because the interface moderates the interplay among these four elements.

To achieve increased capability and features, it is often necessary to increase system complexity—the number of interactions across the physical, functional, and behavioral boundaries of the system (Langford 2012). Because the user typically controls the physical or functional elements of the system to effect intended behavior, this increase in complexity further complicates the user interface design process. As systems become more capable, the decisions the human makes and implements with the user interface become ever more consequential, increasing the potential negative consequences of system and user interface flaws (Schutte 2015). It is therefore becoming increasingly difficult for the human designer, with limited cognitive resources, to consider these interactions and their impact on the user. Modern digital design must consider design aids for the selection and design of user interface elements.

Design aids for user interface design are primarily limited to user interface toolkits, design guidelines, and task description languages (Gulliksen et al. 2003; Paterno et al. 1997; Paternò 2001; Bass et al. 2011; Bindewald et al. 2014). However, recent research on adaptive or adaptable user interfaces has begun to discuss the use of ontologies to guide the manipulation of user interfaces in response to system, user, environment, or situation changes (López-Jaquero et al. 2008; Ezzell et al. 2011; Abrahão et al. 2021; Hou and Kobierski 2006). While the adaptations enabled through these approaches are limited, the use of ontologies permits the capture of knowledge about the relevant elements to enable reasoning about, and manipulation of, the user interface.

Over the past decade, systems engineering practices have evolved to include digital model-based systems engineering (MBSE) methods and languages that describe requirements, structure, behavior, and interactions among system components (Friedenthal et al. 2008). Importantly, these models permit structured trade space analysis (Parnell et al. 2021; Colombi et al. 2014), which can extend to human interaction design (Watson et al. 2017). These techniques allow the designer to explore multiple system or interface configurations algorithmically, provided the impact of design changes on the value of the system can be represented mathematically. Although these approaches have traditionally been used to establish trades and improve understanding of a fixed number of available options, optimization approaches, including genetic algorithms, have been developed to explore a defined trade space, permitting more potential system solutions to be explored and evaluated than is otherwise possible (Thompson et al. 2015). Further, the MBSE literature discusses the use of ontologies both to structure the information in these models and to provide structure across models using different system modeling languages (Herrington 2022).

This paper explores the intersection of ontologies, model-based systems engineering, and user interface design, specifically adaptive user interface design, to understand the synergies among these tools as they pertain to user interface specification and design. Opportunities are discussed to combine concepts from these three bodies of literature to enable intelligent decision support systems for user interface development. Specifically, this paper defines and summarizes the utility of these three bodies of literature, reviews the literature on the intersection of ontologies with adaptive user interfaces and of ontologies with MBSE, and discusses an example of how concepts from these three fields could be combined to aid the selection of a common off-the-shelf hearing protection device as part of a human-system interface.

2 Examining the components

Before discussing the synergies among the three areas, it is important to first define and discuss the objectives of each of these bodies of literature.

2.1 Ontology

“Ontology” is a term often mistakenly used interchangeably with the term “taxonomy,” as both are concerned with the classification of concepts. A taxonomy is a knowledge map used to represent concepts within the same domain and is limited in representing relationships with other domains. Laubheimer (2022) defines taxonomy as a classification tool used to identify hierarchical relationships within a category to describe and classify content. Lamb (2007) suggests seven types of taxonomies: lists, tree structures, hierarchies, polyhierarchies, matrices, facets, and system maps. One important aspect of taxonomies is that the hierarchical relationship between concepts can only be described using an “is a” relationship. Thus, this structure places generalized classes at the top of the hierarchy and specific examples of the generalized class toward the bottom of the hierarchy. Ontology, on the other hand, expands the defining relationships to include any description determined by the ontology designer; in addition to “is a,” an ontology can describe other relationships including “developed from,” “is part of,” “factors in,” and “calculated from.” The ability to generalize relationships beyond “is a” permits ontologies to capture knowledge across domains.

In 1993, Tom Gruber proposed the following definition of an ontology: “Ontology is a specification of a conceptualization” (Gruber 1993). This definition has been adapted and modified in subsequent literature. For instance, Yang et al. (2019) defined ontology as “a formal, explicit specification of a shared conceptualization.” Yang et al. (2019) and Ast et al. (2014) also defined ontology as “an explicit specification of the conceptualization of a domain of interest.” Both definitions add that the specification is “explicit,” highlighting that the purpose of an ontology is to limit ambiguity in the specified concepts. For robust ontology application, multiple subject matter experts (SMEs) must agree on proper and accurate terminology to describe a concept within the domain of knowledge. This agreement is described as an ontological commitment (Gruber 1995): an agreement to use a vocabulary consistently with respect to the theory specified by an ontology. The SMEs define each concept using as few vocabulary terms as possible from an otherwise unlimited set of possible specifications, thereby abstracting the element’s associations and specification. Commitment to this specification unifies the view of the domain across stakeholders.

The Object Management Group (OMG) describes ontology as a philosophical discipline that aims to develop “a system of general categories, the relationships between them and the rules that govern them which together form a theory of reality” (OMG Standards Development Organization 2013). This definition highlights two important notions: the philosophical aim of ontology and its role in governing a theory of reality. Ontology has a long history in philosophy, where it denotes “the subject of existence” or “the science of being.” The adaptation of the term “ontology” from philosophy to engineering, specifically artificial intelligence (AI), was introduced by Gruber and further cultivated to formally define the existence of a domain of knowledge (Gruber 1993, 1995). The governing aspect of the definition highlights the contribution of ontology toward regulating the relationships among concepts to ground them in the reality of the world. Ontology utilizes an “open world assumption,” which in an information system sense means that “what is not known to be true is simply unknown.” This viewpoint allows the connection of concepts from different domains and the expansion of knowledge. For an ontology to be effective, there should be a method to ground the knowledge in reality to avoid erroneous inference. Axiomatization serves this purpose: axioms apply logic to constrain the interpretation of concepts and their relationships (Hitzler et al. 2016).

Notably, an ontological commitment contributes to the unification of vocabulary, enabling interoperability and communication among stakeholders. The axioms support the reasoning and logic expressed, avoid ambiguity, and can be read by a human and interpreted by a computer. Ontologies in computer science serve as metadata schemas, providing a controlled vocabulary of concepts, each with explicitly defined and machine-processable semantics (Maedche and Staab 2001).

An ontology, or perhaps more explicitly, the knowledge graph representing an ontology, is therefore an advanced taxonomy that aims to share a common understanding of a domain of knowledge by defining concepts and relationships, assigning properties, and creating a set of axioms (rules) that regulate the descriptions and ground them in reality. The goal is to abstract knowledge in order to unify the stakeholders’ vocabulary. Succinctly, an ontology is a knowledge specification that abstracts knowledge into concepts and the relationships among them and that specifies real-world constraints on the concepts using axioms. An ontology block architecture is represented in Fig. 1, which illustrates the components of a knowledge graph.

Fig. 1

A block definition diagram showing the ontology defining components of “Concepts”, “Axioms,” and “Relationships” among the concepts

Computer scientists and engineers adopted ontology and evolved its properties and features to include axioms, data properties, and instances, expanding it into a “knowledge graph.” This expansion is enabled by a commonly used semantic web language developed for this purpose, the Web Ontology Language (OWL), which is supported by the World Wide Web Consortium (W3C) and implemented in many editors, from the commonly used Protégé to the NeOn Toolkit and SWOOP (W3C Wiki n.d.). The architectural building blocks of this language are represented in Fig. 2 and can be described as follows:

  1. Concepts (also known as classes) that provide the abstract terminology of a domain

  2. Object properties that describe relationships among concepts

  3. Data properties that connect instances to their datatype value (e.g., string, integer, and date)

  4. Individuals or instances of the concepts from the real world

  5. Axioms that govern the concepts and relationships described in the ontology as needed

Fig. 2

A SysML block definition diagram illustrating the relationships among components in an OWL knowledge graph

The instances in an ontology are attached to their corresponding concepts. Data properties assign a data type (e.g., integer, double, or string) to each instance value recorded in the specification. Figure 2 illustrates how an OWL ontology or knowledge graph is composed of concepts that are related by object properties. Data properties further refine the concepts, and each data property can include multiple instances of data for each concept. Axioms are then related to the concepts and data properties, constraining the use of these entities within the ontology.
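
To make these building blocks concrete, the following minimal sketch constructs a tiny knowledge graph in Python using the owlready2 library (one of several OWL toolkits). The ontology IRI, class names, and property names are illustrative assumptions rather than part of any published ontology:

    from owlready2 import Thing, ObjectProperty, DataProperty, FunctionalProperty, get_ontology

    # Hypothetical ontology IRI used only for illustration
    onto = get_ontology("http://example.org/hpd-demo.owl")

    with onto:
        # Concepts (classes) provide the abstract terminology of the domain
        class Device(Thing): pass
        class HearingProtectionDevice(Device): pass   # "is a" (subclass) relationship
        class Environment(Thing): pass

        # Object property: a relationship between concepts
        class usedInEnvironment(ObjectProperty):
            domain = [HearingProtectionDevice]
            range = [Environment]

        # Data property: connects an instance to a datatype value
        class noiseReductionRating(DataProperty, FunctionalProperty):
            domain = [HearingProtectionDevice]
            range = [float]

    # Individuals (instances) of the concepts, with object and data property values
    hangar = Environment("aircraft_hangar")
    earmuff = HearingProtectionDevice("example_earmuff")
    earmuff.usedInEnvironment = [hangar]
    earmuff.noiseReductionRating = 23.0   # dB, invented value

Axioms (e.g., class restrictions or disjointness declarations) would further constrain these entities, and a reasoner could then check consistency and infer new facts; saving the ontology produces an OWL file that Protégé or other editors can open.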

Ehrlinger (2016) explains that an ontology does not differ from a knowledge base and that ontologies are sometimes mistakenly likened to database schemas. The article identifies a knowledge graph as a very large ontology, articulating that the difference between knowledge graphs and ontologies is a matter of two factors: 1) size and 2) extended requirements, in which a reasoner is used to derive new knowledge. Paulheim (2016) defines knowledge graphs as follows: “A knowledge graph (i) mainly describes real world entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other, and (iv) covers various topical domains.” This definition closely mirrors the description of ontology concluded earlier; therefore, the term “ontology” will be used interchangeably with the term “knowledge graph” in this paper, as we are interested in the representation of human knowledge to support both human and computer-based reasoning.

Over the years, several ontology development methodologies have been proposed, including METHONTOLOGY (Fernández-López et al. 1997), On-To-Knowledge (Fensel et al. 2000), DILIGENT (Pinto et al. 2006), and the NeOn Methodology (Suárez-Figueroa et al. 2011; Gómez-Pérez et al. 2009). Each methodology describes an ontology development process; however, they differ in their design processes, lifecycle considerations, support for dynamic evolution, and ontology reuse guidelines.

For instance, while METHONTOLOGY identifies an ontology development process, the life cycle of the ontology, and techniques to manage the ontology, it lacks clear guidelines for ontology reuse (Gómez-Pérez and Suárez-Figueroa 2009; Corcho et al. 2003). On-To-Knowledge, or OTKM, considers the use of the ontology in knowledge management. The OTKM process steps are kick-off, refinement, evaluation, and ontology maintenance. OTKM recommends finding potentially reusable ontologies; however, it lacks criteria for selecting ontologies for reuse and guidelines for reusing them (Corcho et al. 2003). DILIGENT, or Distributed Engineering of Ontologies, is intended to “support experts in a distributed setting to engineer and evolve ontologies.” The DILIGENT process steps are building, local adaptation, analysis, revision, and local update (Gómez-Pérez and Suárez-Figueroa 2009). While it adopts a process that encourages local updates within the ontology to support its evolution, it also lacks guidelines on reuse. Finally, the NeOn Methodology, or network of excellence in ontology engineering, is a scenario-based methodology founded on METHONTOLOGY, On-To-Knowledge, and DILIGENT (Gómez-Pérez and Suárez-Figueroa 2009).

The ability to recycle or reuse ontologies has been beneficial and is an area of interest for many researchers. Katsumi and Grüninger (2018) explain that the value of ontologies includes the benefit of shareability. Some literature defines reuse as “the process in which available (ontological) knowledge is used as input to generate new ontologies.” Sowinski et al. (2022) suggest that an ontology’s quality is vital for its potential to be reused. However, there are many challenges with ontology reuse, including its level of granularity or detail. The ontological commitment itself can pose a challenge: it can be so robust that it does not allow flexibility, or so fluid that it can be misinterpreted (Shimizu et al. 2022). As a remedy, Shimizu et al. (2022) suggest a modular approach to ontology to facilitate reuse, proposing modular ontology engineering as a method that produces highly reusable knowledge graph schemas. A modular ontology is built from modules, where a module is a subset of the ontology (including its axioms) that captures a key notion together with its key attributes. A module has two key properties: 1) it is a technical entity, as it is a defined part of an ontology, and 2) it is a conceptual entity, as it encompasses concepts, relationships, and axioms that “naturally” belong together. Modular ontologies decompose and simplify larger ontologies that would otherwise be more difficult for a human to interpret or adapt to an alternate domain.

2.2 Model-based systems engineering applied in trade studies

Model-based systems engineering (MBSE) is a widely known and used systems engineering practice that supports and enables digital engineering. MBSE is defined as the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities, beginning in the conceptual design phase and continuing throughout development and later life cycle phases (INCOSE SE Vision 2020 2007). In response to increasing system complexity and the need for a shortened product lifecycle, digital engineering approaches are being adopted and applied to enable the delivery of fast but effective system solutions to the end user (Zimmerman 2019). In fact, the International Council on Systems Engineering (INCOSE) suggests that by the year 2035 “Systems Engineering will leverage the digital transformation in its tools and methods and will be largely model-based using integrated descriptive and analytical digital representations of the systems” (INCOSE SE vision 2035 2021). MBSE is an innovative approach that allows systems engineering practitioners to shift the record of authority from documents to digital models using tools and languages such as computer-aided design (CAD) to support the design of physical components, the unified modeling language (UML) to support software design, and the systems modeling language (SysML) to support system design (Hart 2015; Albers and Zingel 2013). Figure 3 shows the types of modeling elements used in SysML.

Fig. 3

A SysML diagram illustrating the types of SysML modeling elements. SysML diagrams can model system requirements, system structure, system behavior, and the relationships among them, and can support quantitative evaluations

MBSE allows systems engineers to manage the system and its complexity throughout its lifecycle, maintain consistency of its descriptions, and ensure traceability between requirements, design artifacts, and validation or verification data (Khandoker et al. 2023). A major contribution of MBSE is the sharing of a digital model of the system among all stakeholders and engineering functions, which improves communication among stakeholders, increases the ability to manage increasingly complex systems, improves product quality by providing an unambiguous and precise model, and enhances knowledge capture of the system (Magicdraw 2011). MBSE models can be integrated with simulation or parametric modeling techniques and optimization to automate portions of trade study and design analyses, providing automated design techniques (Parnell et al. 2021; Colombi et al. 2014).
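
As a simple illustration of the kind of automated trade analysis such integration supports, the following Python sketch scores hypothetical candidate designs against weighted criteria and filters out candidates that violate a threshold requirement; the candidates, weights, and threshold are invented for illustration and are not drawn from any particular MBSE model:

    # Hypothetical trade study: score candidate designs against weighted criteria
    # and filter out candidates that violate a threshold requirement.
    candidates = {
        "design_A": {"performance": 0.9, "cost": 0.4, "usability": 0.7},
        "design_B": {"performance": 0.7, "cost": 0.8, "usability": 0.6},
        "design_C": {"performance": 0.5, "cost": 0.9, "usability": 0.9},
    }
    weights = {"performance": 0.5, "cost": 0.2, "usability": 0.3}  # assumed stakeholder weights
    MIN_PERFORMANCE = 0.6  # illustrative requirement threshold

    def weighted_score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())

    feasible = {name: attrs for name, attrs in candidates.items()
                if attrs["performance"] >= MIN_PERFORMANCE}
    ranked = sorted(feasible, key=lambda name: weighted_score(feasible[name]), reverse=True)
    print(ranked)  # ['design_A', 'design_B']; design_C fails the performance requirement

In practice, the attribute values would come from parametric analyses tied to the system model, and the weights and thresholds would trace to stakeholder requirements.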

Although MBSE languages such as SysML provide precise definitions of systems engineering artifacts, including requirements, structural components, system behaviors, and test elements, these languages are system agnostic. This attribute supports the application of these modeling languages, tools, and methods to various types of systems. However, it also implies that two models of the same type of system can use system-specific terminology differently and can have significantly different structures. To overcome these issues, MBSE tools often support profiles that extend the language in support of domain-specific artifacts and diagrams. For example, Miller and colleagues define a SysML-based profile for modeling human-agent teams (Miller et al. 2020). An alternate solution is to define a reference architecture, a reusable logical or technical model intended to guide and constrain specific system designs within some domain. For example, Kaslow and colleagues (2021) define a reference architecture for a CubeSat, supporting the consistent and rapid definition and design of small satellites.

2.3 Adaptive user interfaces

One of the major challenges facing human–computer interaction (HCI) engineers is designing an interface that meets system specifications while also satisfying users’ needs and preferences. Users differ along many dimensions, including their demographic characteristics, cognitive abilities, education, training, personality, and preferences (Freitas 2022). In addition, the increasing complexity of systems and of the information displayed on interfaces increases the complexity of the interaction between the human and the system (Schölkopf et al. 2022), decreasing usability and negatively impacting the user’s workload and situation awareness. Adaptive user interfaces (AUIs) are intended to address this variation in user goals, needs, and expectations. AUIs change features and functionality automatically based on users’ needs (Freitas 2022) and are enabled by a software system that can modify itself in response to the context of use to meet system requirements as well as user needs, wishes, and preferences (Abrahão et al. 2021). According to Freitas and colleagues, interface adaptation is the process of adjusting a user interface based on the knowledge the system gains about the context of use (i.e., user, platform, and environment) (Freitas et al.). This adaptation increases the probability that the system will meet the user’s goals in a specific scenario by monitoring the user status, the system state, and the current segment of the mission (Rothrock 2002).

Flexible user interfaces can be classified into two categories: adaptable and adaptive. Adaptable user interfaces allow the user to customize the interface to their preferences; for example, the user may personalize the interface by changing the interface colors or font size (Freitas 2022; Gulla et al. 2015). Adaptive user interfaces self-adapt to meet user preferences through embedded software (Gulla et al. 2015; Abrahão et al. 2021). Interface adaptation is intended to enhance user interaction by improving the usability and accessibility of the system, thus making it more efficient, effective, and easier to use (Lavie and Meyer 2010; Abrahão et al. 2021). However, a poorly designed AUI may disorient users who are unable to understand or adapt to changes in the AUI. Design trade-offs are therefore required such that changes are predictable and accurate in support of user and system goals (Gajos et al. 2008). A correctly modeled context of use produces a system that is aware of its environment, able to detect changes, and able to trigger adaptation in response to changes in context. Such a system must correctly model the environment, the user, and the platform on which the AUI is deployed (Abrahão et al. 2021; Akiki et al. 2014). As such, an adaptive user interface framework requires a software system with a semantic core to capture the information and logic of the system, as well as an intelligent UI adaptor and external sources, as proposed by Abrahão et al. (2021).
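
To make the adaptation loop concrete, the toy Python sketch below maps a modeled context of use (user, platform, and environment) to presentation settings, as an AUI's intelligent adaptor might; the context attributes, thresholds, and settings are invented for illustration and are not taken from any published framework:

    from dataclasses import dataclass

    @dataclass
    class ContextOfUse:
        ambient_light_lux: float   # environment
        screen_width_px: int       # platform
        prefers_large_text: bool   # user preference

    def adapt_interface(ctx: ContextOfUse) -> dict:
        """Toy rule-based adaptation: derive presentation settings from the context of use."""
        return {
            "theme": "dark" if ctx.ambient_light_lux < 50 else "light",
            "font_size_pt": 16 if ctx.prefers_large_text else 12,
            "layout": "single_column" if ctx.screen_width_px < 800 else "two_column",
        }

    print(adapt_interface(ContextOfUse(ambient_light_lux=20, screen_width_px=600, prefers_large_text=True)))
    # {'theme': 'dark', 'font_size_pt': 16, 'layout': 'single_column'}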

3 Examining the interactions

Having reviewed ontologies, MBSE, and adaptive user interfaces, it is instructive to examine the synergies among these technologies as reported in the literature. Ontologies have been discussed within both the MBSE and the adaptive user interface literature, so these interactions are reviewed in turn.

3.1 Ontologies and MBSE

In complex systems and systems of systems, the interdisciplinary field of systems engineering is required to manage the complexity and interoperability of the systems. To accurately manage system information throughout the life cycle, MBSE is used to create a central repository that includes requirements, architecture, functions, and performance. During design, as stakeholders exchange system-related information, the terminology and concepts discussed can take on different meanings and interpretations depending on organizational goals, roles, and objectives or, more specifically, on an individual’s education, knowledge, and area of expertise. As a result, the lack of a common understanding and agreement on basic terminology among stakeholders and engineers causes ambiguity that impedes technical interchange, obstructs the development of the system, and hinders its performance. Systems engineering practitioners advocate for the use of ontology to support MBSE, as ontology enables precise modeling, standardizes the domain of knowledge, and enables a common understanding of the domain, allowing interoperability (Yang et al. 2019). For instance, Hou and Kobierski (2006) suggest unifying MBSE and ontology to formalize system models with a unified syntax and data structure to support development and integration with artificial intelligence and machine learning. This union of MBSE and ontology would be advantageous to design processes, as the ontology supports an integrated architectural representation across the system’s lifecycle, promotes model interoperability, and ideally synchronizes the use of terminology across stakeholders.

The descriptive link between ontologies and knowledge graphs highlights a key contribution of knowledge graphs: the ability to produce new knowledge or to assist the user in developing it. The OWL editor Protégé has multiple plug-ins that extend the capabilities of a developed ontology. One well-known capability is SPARQL, short for “SPARQL Protocol and RDF Query Language,” which enables users to query information from databases or any data source that can be mapped to the Resource Description Framework (RDF). We suggest that an ontology developed in OWL and used to structure and define the terminology within MBSE models would elevate both: the terminology in a system model would be clearly defined, new information could be produced, and individuals could query the model for understanding. This interaction could benefit both tools and elevate their contribution to the final product.
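
As a sketch of what such querying might look like, the following snippet uses the Python library rdflib to run a SPARQL query over a hypothetical RDF export of an ontology-structured system model; the file name, namespace, and property names are assumptions made for illustration:

    from rdflib import Graph

    g = Graph()
    # Hypothetical RDF/Turtle export of an ontology-structured system model
    g.parse("hpd_system_model.ttl", format="turtle")

    query = """
    PREFIX ex: <http://example.org/hpd-demo#>
    SELECT ?device ?nrr
    WHERE {
      ?device a ex:HearingProtectionDevice ;
              ex:noiseReductionRating ?nrr .
      FILTER (?nrr >= 20)
    }
    ORDER BY DESC(?nrr)
    """

    for row in g.query(query):
        print(row.device, row.nrr)   # devices meeting the illustrative 20 dB criterion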

3.2 Ontology and adaptive user interfaces

Ontologies are expressive and flexible representations of domain knowledge. Thus, they possess useful characteristics for designing intelligent information systems. AUIs are one of the domains in which ontology has made a debut. Ontology-driven approaches to AUI have four main benefits according to Silver et al. (2007): a) a unified terminology and vocabulary facilitates communication during the life cycle of the system; b) despite differences between systems, mechanisms, and software languages, the core knowledge can still be represented in a unified, agreed-upon language; c) ontologies are described in semantic web languages, which supports model and knowledge repository accessibility; and d) a standardized language supports inference and reuse of ontology components, reducing system development time.

Reuse assumes that once an ontology is mapped and defined, it can be used for different purposes and applications. This notion is heavily leveraged by biologists. For instance, the Gene Ontology Project, created to unify genetic biology vocabulary, has been very successful and is widely used among scientists. The foundational model of anatomy ontology (FMAO) defines anatomical concepts and has also been widely shared and reused, including for user interface design. Ezzell et al. (2011) took advantage of the FMAO and the discrete event and modeling ontology to create a user interface capable of constructing dynamic anatomical models and their corresponding 3D visualizations to enhance medical training and healthcare education. A user of the software builds an ontology by 1) creating nodes, 2) connecting the nodes using relationships, and 3) assigning attributes to the nodes. Once the ontology is built, a dynamic graph is generated and a simulation is executed to provide a plot of the simulation variables.

Freitas argues that ontologies provide a promising approach to support AUI due to their ability to capture, structure, and organize knowledge pertaining to the user and system. Thus, they can help identify the necessary adaptations and implementation mechanisms (Freitas 2022). Such a complex system requires the combination of multiple ontologies within different domains, referred to as ontology networks (Freitas 2022), meta-ontologies (Kultsova et al. 2016), or simply discussed as combined ontologies (Stefanidi et al. 2022). For example, Freitas discusses an adaptive interface for users with low vision or color blindness that utilizes five ontologies on human–computer interaction (HCI) subdomains. Specifically, these include the human–computer interaction ontology (HCIO), the user characterization ontology (UCO), UI types and elements ontology (UITandEO), the adaptive interface ontology (AIO), and the user profile ontology (UPO) (Freitas 2022). Similarly, Kultsova and colleagues discuss an adaptive user interface that includes ontologies for the user, disease, interface, and device.

Freitas et al. (2023) explain that complex domains such as human–computer interaction can be better organized using an ontology network (ON). The study describes the human–computer interaction ontology network (HCI-ON), consisting of three layers. The foundational layer is the unified foundational ontology (UFO), which grounds the knowledge. The core layer is the human–computer interaction ontology (HCIO). The domain layer consists of nine ontologies covering multiple aspects of HCI, such as the adaptive interface ontology (AIO) and the user characterization ontology (UCO). Freitas et al. (2023) explored whether and how an ontology can be extracted from an ontology network to develop an AUI system for a social network about scientific events.

Stefanidi et al. (2022) discuss dynamic adaptation of user interfaces to improve the situation awareness of law enforcement, using ontologies to capture information regarding the user, the activity or operational task at hand, the environment (i.e., operational scenario), and the interface device. This work was conducted as part of a European Union funded project known as DARLENE, for deep augmented reality law enforcement ecosystem. DARLENE was developed with the aim of enhancing human physical and mental processing capacity to increase situational awareness and support decision making in stressful scenarios (Concept n.d.).

Figure 4 shows the overall arrangement and components of the system discussed by Stefanidi and colleagues, illustrating the different modules and how they interact with one another. The context module contains user information, such as the position, experience, and stress level of the human, and feeds it to the decision maker. The knowledge base module provides the decision maker with descriptors of the situation the user must address. The visual module contains GUI design templates and feeds these potential templates to the user interface optimizer of the decision maker. Together, these three modules support the decision maker in deciding what information will be presented to the end user, and how and where it will be presented.

Fig. 4

The context of the system and descriptions, adapted from Stefanidi et al. (2022)

Importantly, these systems do not automatically customize the entire user interface; rather, they customize the display of existing interface elements to permit real-time interface configuration. It is also important to note that these systems rely on ontologies for the knowledge that informs the desired structure of the interface but require computational models beyond the ontology to support optimization or selection of interface configurations. This optimization or selection model is analogous to the trade study or design analyses often applied in MBSE.

Ontologies have been used in the human–computer interaction (HCI) field both as a conceptual model that provides a clear description of the elements (reference ontology) and as a computational repository (operational ontology). Costa et al. (2021) conducted a study on the application of ontology in the HCI field and found that HCI ontologies became a focus of research starting with publications from 2010. Their literature review found that, of a total of 899 publications considered, only 35 ontologies were identified that support HCI-related subjects, including user interface (UI), adaptive user interface (AUI), HCI design, HCI phenomenon, pervasive computing, user modeling/profile, and interaction experience. These ontologies were used particularly to form the structure and inference of the knowledge domain, support reasoning, serve as a foundation in system development and frameworks, and provide a description of the HCI domain. Of the 35 ontologies, 15 were used for knowledge management, 9 for reasoning, 6 as the foundation of a systems approach, 6 for conceptual description, 6 for conceptual communication, 5 for semantic mapping, and 5 for semantic annotation. The review identified areas where this interaction could use further exploration, such as user experience, usability, prototyping, and brain-computer interfaces. Moreover, the study found no current work addressing design, evaluation processes, documentation, evaluation methods, or physiological computing. In terms of reusability, there is generally a lack of reuse of existing ontologies, which is a universal problem for ontologies (Shimizu et al. 2022; Kamdar et al. 2017).

4 Examining the potential intersection

Although the existing literature advocates for the use of ontologies both to aid the integration of knowledge across different MBSE models and to aid adaptive user interface development, there appears to be no discussion that integrates these three technologies together. Ontologies are discussed as a method to integrate knowledge across different MBSE models; yet there is little discussion of using ontologies to inform or motivate the structure of those models, which might ensure that common terminology and structure are employed across multiple models of similar systems or components. Despite the increasing body of literature that uses ontologies and optimization technology to create AUIs, there is less literature on the use of these technologies to design user interfaces. However, recent research has appeared in this field (Oulasvirta et al. 2020), and one could imagine eventual integration of this work with system modeling tools to support user interface design.

For the human system integration (HSI) professional, the literature reviewed in this paper may be relevant to three important types of activities. These include the following:

  1. The selection of common off-the-shelf (COTS) or slightly modified off-the-shelf (SMOTS) interface devices, such as personal protective equipment (e.g., hearing or vision protection devices)

  2. Adding context sensitive user interface adaptation to interfaces that largely exist to improve user performance

  3. Automated decision support systems to support the creation of complex system interfaces where the human designer lacks the cognitive bandwidth to thoroughly consider all system interactions during interface design

These three activities are progressively more complex; however, they share many common elements, in particular meeting the operational requirements necessary to successfully complete a mission while keeping system usability and user workload at acceptable levels. To illustrate the application of these technologies within this space, we focus on the first activity: the application of ontology to select COTS or SMOTS products.

Many user interface technologies have multiple commercially available options, each with different attributes that provide advantages or disadvantages in different contexts. For example, systems for auditory support (i.e., noise suppression or communication) are readily available from many vendors and vary along many factors and dimensions. These technologies have been developed for different contexts of use, which can be highly challenging for the HSI professional who must make the selection: the choice of the device with the highest utility (e.g., performance per cost) depends upon the system into which it is to be employed, attributes of the user, and attributes of the operational environment. The typical HSI professional likely does not have all the knowledge required to consider all the factors and metrics needed to make the proper selection and may not even understand all the differentiating criteria among these devices. Usually, HSI professionals must rely on subject matter experts with focused knowledge and experience in the related domain. Unfortunately, differences in terminology and in the level of understanding of these devices among individuals can lead to confusion, reduce the efficiency of communication, and introduce assumptions that produce less than optimal results. Further, the categorization of the devices can vary based on an individual’s field of expertise.

Human system integration relies on metrics to inform design decisions and device selection; these metrics may involve both quantitative and qualitative data, which can be collected in a laboratory or on a representative platform (Parr et al. 2015; Miller et al. 2013). Thus, data points can be collected using various methods that provide insight into the contextualized performance of a device. For instance, hearing protection device (HPD) performance is assessed using various methods, where the best method(s) depend upon the intended use. For example, continuous ambient noise protection may be assessed using the noise reduction rating (NRR), the noise reduction statistic with A-weighting (NRSA), the graphical noise reduction statistic (NRSG), and the octave band method (Gallagher et al. 2015; Gong et al. 2022; Gauger and Berger 2004). Alternatively, impact noise protection, including protection from hearing loss due to gunfire, may be assessed using impulse peak insertion loss (IPIL) (Murphy et al. 2015; Murphy et al. 2011). Speech intelligibility may be measured using the modified rhyme test (MRT), articulation index (AI), speech transmission index (STI), or speech intelligibility index (SII) (Palmiero et al. 2016; Letowski et al. 2017). In short, multiple methods and metrics can be used to assess the performance of human systems, aiming to evaluate their integration with the human and to account for variability in human anatomy, perception, cognition, and so on; the importance of each metric depends upon the user’s characteristics and the context of use.

Although manufacturers and vendors are aware that many users procure HPDs to protect their hearing from impulse noise, manufacturers and vendors very rarely assess HPDs using IPIL or include this metric in device assessments. In fact, most vendors only measure and publish NRR data, which is among the least reliable measures of HPD performance because it overestimates the level of protection. As such, the Occupational Safety and Health Administration (OSHA) applies a 50% reduction factor to the manufacturer’s labeled NRR (OSHA 2021; OSHA Technical Manual (OTM)). Similarly, the National Institute for Occupational Safety and Health (NIOSH) applies a derating to HPDs based on a calculation that accounts for device type (NIOSH 1998). In fact, the U.S. Environmental Protection Agency (EPA) has proposed replacing the single-number NRR with a high-low range of estimated protection (Golembeski 2023; Cole-Parmer 2018). HSI professionals and other stakeholders may not be aware that the NRR is not an accurate measure of sound attenuation and protection from ambient noise, and certainly not an accurate indication of relative hearing protection from noise produced by impulse sources, such as gunfire. This lack of knowledge and data obstructs decision-making among stakeholders and contributes to inaccurate decisions, potentially resulting in more hearing loss than would occur if an accurate selection process were conducted.
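
To illustrate the derating calculations involved, the Python sketch below applies the OSHA field practice (subtract the 7 dB spectral correction from the labeled NRR, then take 50% of the remainder) and the NIOSH type-based derating factors (25% for earmuffs, 50% for formable earplugs, 70% for other earplugs); the example values are invented, and the sketch illustrates the published guidance rather than substituting for it:

    def osha_protected_exposure(twa_dba: float, labeled_nrr: float) -> float:
        """OSHA field practice: estimated A-weighted exposure under the HPD."""
        return twa_dba - (labeled_nrr - 7.0) * 0.5

    # NIOSH derating factors by HPD type (NIOSH 1998)
    NIOSH_DERATING = {"earmuff": 0.25, "formable_earplug": 0.50, "other_earplug": 0.70}

    def niosh_derated_nrr(labeled_nrr: float, hpd_type: str) -> float:
        """Labeled NRR reduced by the NIOSH type-specific derating factor."""
        return labeled_nrr * (1.0 - NIOSH_DERATING[hpd_type])

    # Example: a 95 dBA environment and an earmuff with a labeled NRR of 29 dB
    print(osha_protected_exposure(95.0, 29.0))   # 84.0 dBA estimated at the ear
    print(niosh_derated_nrr(29.0, "earmuff"))    # 21.75 dB effective rating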

As such, an ontology developed by domain experts on HPD selection that abstracts this knowledge and information, permitting it to be conceptualized and shared among stakeholders, would facilitate an educated discussion of the qualitative and quantitative aspects of the various devices during selection. This ontology could be used to standardize the domain of knowledge, clearly defining the concepts and attributes required by HSI professionals and users when selecting an HPD. Further, as discussed, ontology editors enable data properties and instances, which permit the visualization and navigation of a knowledge representation together with a data repository. Such a standard repository could compel manufacturers and vendors to publish computable information and data for the qualitative and quantitative concepts represented by data properties in the ontology, as required and requested by the domain experts and exhibited in the knowledge graph. This clear definition could compel manufacturers to gather the qualitative and quantitative data necessary to ensure data consistency across vendors, making device comparison easier. Ontologies also describe relationships between attributes, so such an ontology could capture the compatibility factors to be considered in cases of dual HPD; coupling with other personal protective equipment (PPE), such as helmets or visors; or coupling with the platform, including the physical plug connection and electrical impedance level.

Unfortunately, ontology tools do not inherently support complex calculation of sound pressure levels at the user’s ears or the calculation of compound metrics. However, specifications for equipment and systems that produce noise in the environment can be captured and modeled in MBSE tools to estimate sound pressure levels at the user’s ears. Further, if sound attenuation as a function of octave band or wavelength is known for each hearing protection device, the attenuation these devices provide can be modeled in MBSE tools to estimate their psycho-acoustic performance. Then, MBSE tools using SysML can automatically compare the simulated performance to system requirements. Therefore, by integrating an ontology with MBSE tools, one could develop a standard assessment of the impact of various system designs using various types of hearing protection.
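
A simplified sketch of such a calculation follows: it applies standard A-weighting corrections and a device's per-band attenuation to octave-band sound pressure levels and sums the result energetically. The band levels and attenuation values are invented, and the full octave-band method also adjusts mean attenuation by its standard deviation, which is omitted here:

    import math

    # Octave band centre frequencies (Hz) and the standard A-weighting corrections (dB)
    BANDS = [125, 250, 500, 1000, 2000, 4000, 8000]
    A_WEIGHT = [-16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]

    def protected_level_dba(band_levels_db, attenuation_db):
        """Estimated A-weighted level at the ear under an HPD (simplified octave-band method)."""
        total = sum(10 ** ((lvl + a - att) / 10.0)
                    for lvl, a, att in zip(band_levels_db, A_WEIGHT, attenuation_db))
        return 10.0 * math.log10(total)

    # Invented example: octave-band noise levels (dB) at the BANDS frequencies
    # and an earmuff's per-band attenuation (dB)
    hangar_levels = [92.0, 95.0, 97.0, 96.0, 94.0, 90.0, 85.0]
    earmuff_attenuation = [14.0, 18.0, 26.0, 32.0, 34.0, 38.0, 36.0]

    level_at_ear = protected_level_dba(hangar_levels, earmuff_attenuation)
    print(round(level_at_ear, 1), "dBA")   # compare against a requirement, e.g., 85 dBA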

As described, in addition to sound attenuation and related metrics, the HSI professional should consider the external elements and factors captured by the context of use. The description of all attributes may leverage modular ontology concepts, using different modules to capture the elements of the context of use, attributes of the device or system, and relevant attributes of the user. As such, ontological modules should be created to represent the user, the platform, the environment, and, of course, the target device. Depending on the scope of the ontology and the use case, each module would capture concepts that pertain to the use case or mission scenario. Figure 5 (developed in yEd GraphEditor) shows the proposed modules and the connections among them.

Fig. 5

The proposed ontology modules to capture the device and the context of use

For instance, the device module, shown in Fig. 6a, includes concepts that describe the types and functions of HPDs as well as compatibility factors. The user module, shown in Fig. 6b, describes any user of the device throughout the lifecycle of the system, from the role of the user (tester, trainer, operator, maintainer, or end user) to the user task concepts related to the device, and it could also expand to include user characteristics and capabilities. The acoustic environment module describes the physical environmental concepts that apply to the system/device module. In the HPD example, the environment module, shown in Fig. 6c, describes the noise environment types as well as the noise level metrics appropriate to each environment. When modeling other human systems, such as night vision goggles (NVGs) or oxygen systems, ontology engineers may modify this module to extend it to other domains. For NVGs, this module would capture the lighting and illumination conditions that require and affect the NVGs; for oxygen support systems, it would capture the altitude and cruising conditions that require and affect the O2 system. The platform module describes the mission and the vehicle in which the device is used. In the HPD example, the platform module, shown in Fig. 6d, captures mission-essential concepts related to the HPD, such as the interface with which the device must be coupled, as well as characteristics of the platform that must be considered when coupling, such as impedance level and plug type. These modules and the integrated ontology could help HSI practitioners view the HPD in context, as part of the whole use-case scenario, rather than as a standalone system. Figures 6a-d (developed in yEd GraphEditor) show an example of the ontology schema for each module; the white arrows denote subclass relationships and the black arrows denote object property relationships.

Fig. 6

a HPD ontology example module. b User ontology example module. c Acoustic environment ontology example module. d Platform ontology example module
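
As a minimal sketch of how such modules might be composed, the snippet below (again using owlready2 in Python, with invented IRIs and class names) defines the device and acoustic environment modules as separate ontologies and imports them into an integrated ontology that adds the cross-module relationship:

    from owlready2 import Thing, ObjectProperty, get_ontology

    # Hypothetical module IRIs; each module is a small, separately reusable ontology
    device_mod = get_ontology("http://example.org/modules/device.owl")
    env_mod = get_ontology("http://example.org/modules/acoustic_environment.owl")
    integrated = get_ontology("http://example.org/hpd-integrated.owl")

    with device_mod:
        class HearingProtectionDevice(Thing): pass

    with env_mod:
        class AcousticEnvironment(Thing): pass
        class ImpulseNoiseEnvironment(AcousticEnvironment): pass

    # The integrated ontology imports the modules and adds the cross-module relationship
    integrated.imported_ontologies.append(device_mod)
    integrated.imported_ontologies.append(env_mod)
    with integrated:
        class protectsIn(ObjectProperty):
            domain = [HearingProtectionDevice]
            range = [AcousticEnvironment]

Each module could then be reused independently, for example by swapping the acoustic environment module for a lighting module when the target device is an NVG.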

5 Discussion and conclusion

As described in the examination of the intersection, using ontology to describe a domain of knowledge is beneficial because it requires the participation of domain experts in the process, requires their review of and input on the model created, and further validates the model by examining the data and axioms using competency questions formed by the end user. It is beneficial for decision-makers to obtain a shareable data repository that describes key metrics and evaluators of the identified concepts; this repository allows decision-makers to make trade-offs and critical decisions regarding COTS and SMOTS products, represented here by the example of hearing protection devices. MBSE tools are currently being adopted within the systems engineering community to support the effective design and analysis of complex systems. However, the literature provides limited discussion of the integration of ontology and MBSE tools. If ontology tools and the associated data repository could be tied to MBSE tools, calculations and automatic requirements checking could be performed in those tools to aid the selection of COTS and SMOTS products. Future integration of these tools might additionally support the design of complex system interfaces, where the structure, behavior, and interfaces of the system are described in MBSE models, while the concepts, relations, and axioms that support decision-making during user interface design are provided within ontology tools and used to evaluate user interface concepts developed within the MBSE tools.

It is important for the HSI community to recognize the potential and benefits of ontology in computer science and systems engineering practices. Ontology permits the development of human-centered interfaces and systems by standardizing a common understanding of knowledge domains, enabling interoperability, and allowing knowledge to be expressed and reasoned over by machine-readable programs while remaining easily readable by humans. By using modular ontology to account for the context of use of the human system, HSI practitioners gain the ability to consider attributes external to the system that must be taken into account when making decisions about it. This approach could be applied at any stage of the system’s life cycle, from design and development to testing and validation of integration and performance. In this paper, we discuss the application of ontology to aid the selection of COTS and SMOTS products. Further, the suggested tool could support the development of state-of-the-art AUIs, shorten the system development life cycle, and prevent MBSE profiles from becoming obsolete. MBSE is a key enabler of effective digital engineering practices and is being adopted by systems engineering practitioners due to its many benefits for the systems engineering process and its management.

One limitation of this study is that systems engineering experts are not necessarily trained in the use and application of OWL or its editors, such as Protégé. As such, this application would require training systems engineers, human systems engineering experts, and other domain experts to understand ontologies and the associated tools, at least at an elementary level, together with the ontology development process. Another limitation is that human system integration is a challenging and inherently complex field due to its interdisciplinary, dynamic, and contextual nature; its ethical and moral considerations; and its challenging measurement and quantification practices. It is an evolving research landscape that must account for human variability and therefore requires the collaboration of multiple experts to model. To elicit and describe key notions and concepts, competency questions, object properties, and axioms, this research requires collaborating with and interviewing multiple subject matter experts, including scientists within the field of practice, engineers who understand design trades, logisticians who understand sourcing and supply issues, and HSI practitioners, to properly form the ontology. Experts in each of these fields have different and sometimes conflicting views, which must be reconciled through judgment. Finally, current ontology development processes rely on knowledge elicitation mostly through focus groups and interviews; more sophisticated knowledge elicitation techniques may need to be examined, especially for knowledge domains that describe the complex interactions between humans, systems, and interfaces that must apply across multiple mission types and environments.

Nonetheless, the intersection of MBSE and ontology could further enable interoperability, effectively and efficiently shorten the development phase of a system, and minimize uncertainty among stakeholders. These potential contributions support the premise that further research and exploration at this intersection could produce dynamic and effective user interfaces for complex systems with multiple goals and contexts, such as aircraft and spacecraft, with enhanced usability that accounts for the system’s context of use.