
1 Introduction

Reification is a notion known from different scientific areas. Literally it means representing something abstract as a material or concrete thing (Merriam-Webster dictionary), or making something abstract more concrete or real (Oxford dictionaries). Well-known examples in the linguistics, logic and knowledge representation domains are representing relations between objects as objects themselves (reified relations); this makes it possible to introduce variables and relations over these reified relations. In this way the expressivity of a language can be extended substantially. For example, in logic, statements can be represented by term expressions over which predicates can be defined. This idea of reification has been applied in particular to many modeling and programming languages, for example, logical, functional, and object-oriented languages; e.g., [5,6,7,8, 16, 17, 22]. Also in fundamental research the notion of reification plays an important role. For example, Gödel’s incompleteness theorems in Mathematical Logic depend on representing logical statements by natural numbers, over which predicates are used to express, for example, (non)provability of such statements; e.g., [14].

In this paper the notion of reification is applied to networks, and illustrated for a Network-Oriented Modeling approach based on temporal-causal networks [18, 20]. A network (the base network) is extended by adding explicit states representing the network structure. In a temporal-causal network the network structure is defined by three types of characteristics: connection weights, combination functions and speed factors. By reifying these characteristics of the base network as states in the extended network, and defining proper causal relations for them and with the other states, an extended, reified network is obtained which explicitly represents the structure of the base network, and how this network structure evolves over time. This makes it possible to model the dynamics of the base network by dynamics within the reified network. Thus an adaptive network is represented as a non-adaptive network. This substantially increases the expressiveness of the Network-Oriented Modeling approach.

Through the introduced concept of network reification, the Network-Oriented Modeling approach in particular becomes expressive enough to analyse network adaptation principles from an inherent network modeling perspective. Applying this, a unified framework is obtained to represent and compare network adaptation principles across different domains. To illustrate this, a number of known network adaptation principles are addressed, among which adaptation principles based on Hebbian learning for Mental Networks and on homophily for Social Networks.

In the paper, first in Sect. 2 the Network-Oriented Modeling approach based on temporal-causal networks is briefly summarized. Next, in Sect. 3 the idea of reifying the network structure characteristics by additional reification states representing them is introduced. It is shown how causal relations for these reification states can be defined, by which they affect the states in the base network. Section 4 shows how the obtained reification approach can be applied to analyse and unify network adaptation principles from a Network-Oriented Modeling perspective. As an illustration, this also includes an example simulation within a developed software environment for network reification, showing how an adaptive speed factor and an adaptive combination function can be used to model a scenario of a manager who adapts to an organisation. Section 5 is a discussion.

2 Temporal-Causal Networks: Structure and Dynamics

A network structure is often considered to be defined by nodes and connections between nodes. However, these only cover very general aspects of a network structure, in which no distinctions can be made, for example, between different strengths of connections, or between different ways in which multiple connections to the same node interact and work together. In this sense a plain graph structure in many cases underspecifies a network structure. Pearl [12] also pointed at this problem of underspecification in the context of causal networks, from the (deterministic) Structural Causal Model perspective, where functions fi for nodes Vi are used to specify how multiple impacts on the same node Vi should be combined; this concept is lacking in a plain graph representation: ‘Every causal model M can be associated with a directed graph, G(M) (…) This graph merely identifies the endogenous and background variables that have a direct influence on each Vi; it does not specify the functional form of fi.’ [12], p. 203. A conceptual representation of the network structure of a temporal-causal network model does involve representing, in a declarative manner, states and connections between them that represent (causal) impacts of states on each other. This part of a conceptual representation is often depicted in a conceptual picture by a graph with nodes and directed connections. However, a full conceptual representation of a temporal-causal network structure also includes a number of labels for such a graph. First, in reality not all connections are equally strong, so some notion of strength of a connection is used as a label for connections. Second, when more than one connection affects a given state, some way to aggregate multiple impacts on that state is needed. Third, a notion of speed of change of a state is used for the timing of the processes.
These three notions, called connection weight, combination function, and speed factor, make the graph of states and connections a labeled graph. This labeled graph forms the defining network structure of a temporal-causal network model in the form of a conceptual representation; see Table 1, first five rows (adopted from [20]).

Table 1. Concepts of conceptual and numerical representations of a temporal-causal network.

Combination functions can have different forms, as many different approaches are possible to address the issue of combining multiple impacts. To provide sufficient flexibility, the Network-Oriented Modelling approach based on temporal-causal networks incorporates a library in which a number of standard combination functions are available as options; own-defined functions can also be added. The last five rows of Table 1 show how a conceptual representation (based on states and connections enriched with labels for connection weights, combination functions and speed factors) can be transformed into a numerical representation defining the network’s intended dynamic semantics in a systematic (or even automated) manner; see [18], Ch. 2. The difference equations in the last row of Table 1 form the numerical representation of the dynamics of a temporal-causal network model; they can be used for simulation and mathematical analysis, and can also be written in differential equation format:

$$ \begin{array}{*{20}l} {Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}_{Y} [{\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right),\,\, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) \, - Y\left( t \right)]\Delta t} \hfill \\ {{\mathbf{d}}Y\left( t \right)/{\mathbf{d}}t = {\varvec{\upeta}}_{Y} [{\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right),\,\, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) \, - Y\left( t \right)]} \hfill \\ \end{array} $$

where the Xi are all states with outgoing connections to state Y. Often used examples of combination functions are the identity id(.) for states with impact from only one other state, the scaled sum ssumλ(..) with scaling factor λ, the minimum function min(..), and the advanced logistic sum combination function alogisticσ,τ(..) with steepness σ and threshold τ; see also [18], Chap. 2, Table 2.10:

$$ \begin{aligned} & {\mathbf{id}}(V) = V\,\,\,\,\,\,\,{\mathbf{ssum}}_{\uplambda } (V_{1} , \, \ldots , \, V_{k} ) = \, \left( {V_{1} + \, \ldots \, + V_{k} } \right)/\uplambda\,\,\,\,\,\,{\mathbf{min}}(V_{1} , \, \ldots , \, V_{k} ) \, = minimal\,\,\,V_{i} \\ & {\mathbf{alogistic}}_{{{\upsigma ,\uptau }}} (V_{1} , \, \ldots ,V_{k} ) = \, [(1/(1 + e^{{{-}\upsigma (V_{1} + \, \ldots + V_{{k\,}} -\uptau)}} )) \, {-} \, 1/(1 + e^{{\upsigma{\uptau} }} )] \, (1 + e^{{{-}\upsigma{\uptau} }} ) \\ \end{aligned} $$
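As an illustration, these four combination functions can be sketched in Python (a minimal sketch; the function names and calling conventions are choices made here, not part of the modeling environment):

```python
import math

def identity(v):
    # id(V) = V, for states with impact from only one other state
    return v

def ssum(lam, *vs):
    # scaled sum with scaling factor lambda
    return sum(vs) / lam

def cmin(*vs):
    # minimum of the impacts
    return min(vs)

def alogistic(sigma, tau, *vs):
    # advanced logistic sum with steepness sigma and threshold tau,
    # shifted and rescaled so that alogistic(sigma, tau) of all zeros is 0
    s = sum(vs)
    return ((1 / (1 + math.exp(-sigma * (s - tau))))
            - 1 / (1 + math.exp(sigma * tau))) * (1 + math.exp(-sigma * tau))
```

For instance, `ssum(2, 1, 1)` yields 1.0, and `alogistic(5, 0.5)` with no impacts yields 0 by construction.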

3 Network Reification

A network structure is described by certain parameters, such as connection weights. Usually the values of these parameters are considered static: they are assumed not to change over time. This stands in the way of addressing network evolution, where the values of these parameters do change. It means that network evolution has to be studied by considering a separate dynamic model for these parameters, for example specified in the numerical mathematical form of difference or differential equations, different from and outside the context of the Network-Oriented Modeling perspective on dynamics within the base network itself. In specific applications this dynamical model still has to interact with the internal network dynamics of the base network as well. For example, in studying the role of homophily in the bonding of two persons Xi and Y, at each point in time t the change of connection weight \( \omega_{{X_{i} ,Y}} \left( t \right) \) depends on the states Xi(t) and Y(t) of the two persons, usually modeled by the nodes within the base network (for example, strengthening the connection when the states differ less than some threshold value τ and weakening it when this difference is more than τ). That leaves us with one network model and one non-network model, with interactions between these two different types of models (upward for \( \omega_{{X_{i} ,Y}} \left( t \right),X_{i} (t) \) and \( Y\left( t \right) \), and downward for \( \omega_{{X_{i} ,Y}} (t + \Delta t) \)).

Network reification provides a way to address this in a more unified manner, staying more genuinely within the Network-Oriented Modeling perspective, by extending the base network with extra states that represent the parameters defining the network structure. In this way the whole is modeled by one network, a network extension of the base network: the modeling stays within the network context. The new additional states representing the parameter values for the network structure are called reification states for these parameters; the parameters are reified by these states. What is reified in temporal-causal networks in particular are the following parameters used to define the network structure: the labels for connection weights, combination functions, and speed factors. For connection weights \( \upomega_{{X_{i} ,Y}} \) and speed factors ηY, their reification states \( \Upomega_{{X_{i} ,Y}} \) and ΗY represent their values. The reification states are depicted in the upper plane in Fig. 1, whereas the states of the base network are in the lower plane.

Fig. 1. Network reification with downward causal connections from reification states in the upper (reification) plane to related base network states (lower plane)

For combination functions, from a theoretical perspective a coding of all options for such functions by numbers is needed; for example, assuming the set of all of them is countable, they are numbered by natural numbers \( n = \, 1, \, 2,\,\,\, \ldots ., \) and the reification state CY representing the combination function actually represents that number. This is the general idea for addressing reification of combination functions; however, below a more refined approach is shown that is easier to use in practice.

By having states for the base network structure characteristics within the extended network, causal relations can be added that make these characteristics dynamic and affect the network structure through them, thus making the structure adaptive by the internal network dynamics of the extended network. Causal connections are defined from the reification states to related base network states to effectuate this process. Thus dynamics of the base network is replaced by dynamics within the extended, reified network.

Derivable properties of the base network such as indegree, outdegree, connectedness of two states, and shortest path length between two states can now be defined within the reified network. For example, for indegree add a state IndegreeY and make connections with weight 1 from each \( {\mathbf{\Upomega }}_{{X_{i} ,Y}} \) to IndegreeY, using the sum function as combination function for IndegreeY. For outdegree it is similar, but then with \( {\mathbf{\Upomega }}_{{Y,X_{i} }} \). For connectivity, the property that there exists a path from state X to state Y can be represented by adding states EPX,Y and using transitivity: a connection from \( {\mathbf{\Upomega }}_{X,Y} \) to EPX,Y as base step with the identity combination function id(..), and connections from EPX,Y to EPX,Z and from \( {\mathbf{\Upomega }}_{Y,Z} \) to EPX,Z with the min(..) combination function as transitivity step extending EPX,Y to EPX,Z. For shortest path length, with slightly more work this can be done by adding states SPLX,Y representing the length of the shortest path from X to Y, adding suitable connections for them and using transitivity and the minimum and sum combination functions: a connection from \( {\mathbf{\Upomega }}_{X,Y} \) to SPLX,Y as base step with the identity combination function id(..), and connections from \( {\mathbf{SPL}}_{{X,Y_{i} }} \) to SPLX,Z and from \( {\mathbf{\Upomega }}_{{Y_{i} ,Z}} \) to SPLX,Z with the \( {\mathbf{min}}({\mathbf{sum}}({\mathbf{SPL}}_{{X,Y_{1} }} ,{\varvec{\Omega}}_{{Y_{1} ,Z}} ),\,\, \ldots ,\,\,{\mathbf{sum}}({\mathbf{SPL}}_{{X,Y_{k} }} ,{\varvec{\Omega}}_{{Y_{k} ,Z}} )) \) combination function as transitivity step extending \( {\mathbf{SPL}}_{{X,Y_{i} }} \) to SPLX,Z.
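To illustrate how such derivable properties follow from the reified connection-weight states, a minimal Python sketch (all names and the dict-based representation of the Ω states are assumptions made here) of indegree and the reachability states EPX,Y, with min(..) playing the role of the transitivity step:

```python
# Omega maps (X, Y) to the reified weight value; existence of a
# connection is coded as a weight of 1, absence as 0.

def indegree(Y, Omega):
    # sum combination function over all Omega_{Xi,Y}
    return sum(w for (x, y), w in Omega.items() if y == Y)

def reachable(Omega, states):
    # EP_{X,Y}: base step is the identity on Omega_{X,Y}; the transitivity
    # step uses min as logical AND, iterated until a fixed point is reached
    EP = {(x, y): Omega.get((x, y), 0) for x in states for y in states}
    changed = True
    while changed:
        changed = False
        for x in states:
            for y in states:
                for z in states:
                    step = min(EP[(x, y)], Omega.get((y, z), 0))
                    if step > EP[(x, z)]:
                        EP[(x, z)] = step
                        changed = True
    return EP
```

For example, with connections A→B and B→C, the fixed point gives EP[("A","C")] = 1.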

The added reification states are integrated to obtain a well-connected overall network. In the first place, connections from the reification states to the states in the base network are used to model how they have their effect on the dynamics in the base network. More specifically, it has to be defined how the reification states contribute to an aggregated impact on the related base network state. So, in addition to a downward connection, the combination function for the aggregated impact has to be defined as well. Both these downward causal relations and the combination functions are defined in a generic manner, related to the role of a specific parameter in the overall dynamics as part of the intended semantics of a temporal-causal network.

In addition, other connections of the reification states can be added in order to model specific network adaptation principles. These may concern upward connections from the states of the base network to the reification states, or horizontal mutual connections between reification states within the upper plane, or both, depending on the specific network adaptation principles addressed. These connections are not generic; they will be discussed and illustrated in Sect. 4.

For the downward connections the general pattern is that each of the reification states \( {\mathbf{\Upomega }}_{{X_{i} ,Y}} \), ΗY and CY for connection weights, speed factors and combination functions has a causal connection to state Y in the base network, as they all affect Y. These are the downward arrows from the reification plane to the base plane in Fig. 1. The different components C1,Y, C2,Y, … for CY will be explained below. All depicted (downward and horizontal) connections get weight 1. Note that this is also a way in which a weighted network can be transformed into an equivalent non-weighted network. In the extended network the speed factors of the base states are set to 1 as well. As the base states now have more incoming connections, new combination functions are needed for them. These can be expressed in a generic manner based on the original combination functions, but defining them requires some work.

As the overall approach is somewhat complex, to convey the idea the three types of parameters are first considered separately in (a) to (c) (illustrated in Box 1); for the overall process, see (d) and Box 2. (a) First, consider only connection weight reification. The original difference equation based on the original combination function cY(..) is

Box 1. Examples of combination functions in the reified network for base states Y

Box 2. Deriving the general combination functions in the reified network for base states Y

$$ Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}_{Y} [{\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} \left( t \right)X_{1} \left( t \right),\,\,\, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} \left( t \right)X_{k} (t)) \, - Y(t)]\Delta t $$

A requirement for the new combination function \( {\mathbf{c}}^{*} {}_{Y}\left( {..} \right) \) is

$$ Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}_{Y} [{\mathbf{c}}^{*} {}_{Y}(X_{1} \left( t \right), \ldots ,X_{k} \left( t \right),{\varvec{\Omega}}_{{X_{1} ,Y}} \left( t \right), \ldots ,{\varvec{\Omega}}_{{X_{k} ,Y}} \left( t \right)) \, - Y(t)]\Delta t $$

As these difference equations must have the same result, the requirement for \( {\mathbf{c}}^{*} {}_{Y}\left( {..} \right) \) is

$$ {\mathbf{c}}^{*} {}_{Y}(X_{1} \left( t \right), \ldots ,X_{k} \left( t \right),{\varvec{\Omega}}_{{X_{1} ,Y}} \left( t \right), \ldots ,{\varvec{\Omega}}_{{X_{k} ,Y}} \left( t \right)) \, = \, {\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} \left( t \right)X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} \left( t \right)X_{k} (t)) $$

Now define the new combination function by

$$ {\mathbf{c}}^{*} {}_{Y}(V_{1} , \, \ldots ,V_{k} ,W_{1} , \, \ldots ,W_{k} ) = {\mathbf{c}}_{Y} (W_{1} V_{1} , \, \ldots ,W_{k} V_{k} ) $$

where Vi stands for Xi(t), and Wi stands for \( {\varvec{\Omega}}_{{X_{i} ,Y}} \left( t \right) \). In Box 1(a) an example of this combination function relating to Fig. 1 is shown. Indeed the requirement is fulfilled when \( {\varvec{\Omega}}_{{\varvec{X}_{\varvec{i}} ,\varvec{Y}}} \left( t \right) \, = {\varvec{\upomega}}_{{\varvec{X}_{\varvec{i}} ,\varvec{Y}}} \left( t \right) \):

$$ {\mathbf{c}}^{*} {}_{Y}(X_{1} \left( t \right), \ldots ,X_{k} \left( t \right),{\varvec{\Omega}}_{{\varvec{X}_{{\mathbf{1}}} \varvec{,Y}}} \left( t \right), \ldots ,{\varvec{\Omega}}_{{\varvec{X}_{\varvec{k}} ,\varvec{Y}}} \left( t \right)) = {\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{\varvec{X}_{{\mathbf{1}}} \varvec{,Y}}} \left( t \right)X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{\varvec{X}_{\varvec{k}} ,\varvec{Y}}} \left( t \right)X_{k} (t)) $$
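A minimal Python sketch of this lifting (names assumed here; `c_Y` can be any original combination function):

```python
# c*_Y(V1,...,Vk, W1,...,Wk) = c_Y(W1*V1, ..., Wk*Vk):
# the reified connection weights W_i become ordinary arguments.

def c_star_weights(c_Y, Vs, Ws):
    return c_Y(*(w * v for v, w in zip(Vs, Ws)))

def ssum(lam, *vs):
    # scaled sum with scaling factor lambda
    return sum(vs) / lam

# Example: scaled sum with lambda = 1; when the reification states equal
# the original connection weights, c*_Y reproduces the value of c_Y.
value = c_star_weights(lambda *vs: ssum(1.0, *vs), [0.5, 0.8], [1.0, 0.25])
```

Here `value` equals 0.5·1.0 + 0.8·0.25 = 0.7, i.e. exactly the original weighted aggregation.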

(b) Second, reification of speed factors can be addressed separately; in the new situation

$$ Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}^{*} {}_{Y}[{\mathbf{c}}^{*} {}_{Y}({\mathbf{H}}_{\varvec{Y}} \left( t \right),{\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right), \, Y\left( t \right)) \, - Y\left( t \right)]\Delta t $$

Note that here also Y(t) is an argument of the combination function, as this is needed for the timing modeled by the speed factor. It is assumed that the new speed factor \( {\varvec{\upeta}}^{*} {}_{Y} \) is 1; then the requirement becomes:

$$ \begin{aligned} & {\mathbf{c}}^{*} {}_{Y}({\mathbf{H}}_{\varvec{Y}} \left( t \right),{\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right), \, Y\left( t \right)) \, - Y\left( t \right) = \\ & {\varvec{\upeta}}_{\varvec{Y}} \left( t \right)[{\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{\varvec{X}_{{\mathbf{1}}} ,\varvec{Y}}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{\varvec{X}_{\varvec{k}} ,\varvec{Y}}} X_{k} \left( t \right)) \, - Y\left( t \right)] \\ \end{aligned} $$

This can be rewritten into

$$ \begin{aligned} & {\mathbf{c}}^{*} {}_{Y}({\mathbf{H}}_{\varvec{Y}} \left( t \right),{\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right), \, Y\left( t \right)) \, \\ & = {\varvec{\upeta}}_{Y} \left( t \right){\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) \, + \, ({\mathbf{1}} - {\varvec{\upeta}}_{Y} \left( t \right))Y(t) \\ \end{aligned} $$

Now define

$$ {\mathbf{c}}^{*} {}_{Y}(S,V_{1} , \, \ldots ,V_{k} , \, W) \, = S\,{\mathbf{c}}_{Y} (V_{1} , \, \ldots ,V_{k} ) \, + (1 - S)W $$

where Vi stands for Xi(t), S stands for ΗY(t), and W stands for Y(t). This is a weighted average (with weights speed factor S and 1–S) of \( {\mathbf{c}}_{Y} (V_{1} , \, \ldots ,V_{k} ) \) and W. Again in Box 1(b) an example of this combination function relating to Fig. 1 is shown. Also here the requirement is fulfilled, when HY(t) = ηY(t).
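This construction can be sketched as follows (assumed names; the Euler step shows that with new speed factor 1 the original timing is recovered):

```python
# c*_Y(S, V1,...,Vk, W) = S * c_Y(V1,...,Vk) + (1 - S) * W:
# a weighted average of the aggregated impact and the current value W = Y(t),
# with the reified speed S = H_Y(t) as weight.

def c_star_speed(c_Y, S, Vs, W):
    return S * c_Y(*Vs) + (1 - S) * W

def euler_step(c_Y, S, Vs, W, dt):
    # one Euler step with the new speed factor set to 1:
    # Y(t+dt) = Y(t) + [c*_Y(...) - Y(t)] * dt
    return W + (c_star_speed(c_Y, S, Vs, W) - W) * dt
```

Substituting the definition shows the step equals Y(t) + S·[cY(..) − Y(t)]·dt, i.e. the original update with speed factor S.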

(c) For reification of combination functions, for practical reasons a countable number of basic combination functions bc1(..), bc2(..), … is assumed for the base network. From this sequence, for any m the finite subsequence bc1(..), …, bcm(..) can be chosen to be used in a specific application. For example, bc1(..) = id(..), bc2(..) = ssumλ(..) and bc3(..) = alogisticσ,τ(..). Note that when more than one argument is used in id(..), the outcome is the sum of these arguments (only one of them will be nonzero when Y has only one incoming connection).

In the base network, for each Y combination function weights are assumed: numbers cw1,Y, …, cwm,Y ≥ 0 that may change over time, such that the combination function cY(..) is expressed by:

$$ \begin{aligned} {\mathbf{c}}_{Y} (t,V_{1} , \, \ldots ,V_{k} ) \, = \left[ {\text{cw}_{1,Y} \left( t \right) \, \text{bc}_{1} \left( {V_{1} , \, \ldots ,V_{k} } \right) \, + \, \ldots \, + \, \text{cw}_{m,Y} \left( t \right) \, \text{bc}_{m} \left( {V_{1} , \, \ldots ,V_{k} } \right) \, } \right]/ \hfill \\ \left[ {\text{cw}_{1,Y} \left( t \right) \, + \, \ldots \, + \, \text{cw}_{m,Y} \left( t \right)} \right] \hfill \\ \end{aligned} $$

In this way it can be expressed that a weighted average of basic combination functions is used for Y (if more than one of the cwi,Y(t) has a nonzero value), or that just one basic combination function is selected for cY(..) (if exactly one of the cwi,Y(t) is nonzero). This approach makes it possible, for example, to smoothly switch to another combination function over time by decreasing the value of cwi,Y(t) for the earlier chosen basic combination function and increasing the value of cwj,Y(t) for the new choice. For each basic combination function weight cwi,Y(t) a different reification state Ci,Y is added. The value of that state represents the extent to which basic combination function bci(..) is applied for state Y. Now from

$$ Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}_{\varvec{Y}} [{\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{\varvec{X}_{{\mathbf{1}}} ,\varvec{Y}}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) \, - Y\left( t \right)]\Delta t $$
$$ Y(t + \Delta t) = Y(t) + {\varvec{\upeta}}_{\varvec{Y}} [{\mathbf{c}}^{*} {}_{Y}({\mathbf{C}}_{1,Y} (t), \, \ldots .{\mathbf{C}}_{m,Y} (t),{\varvec{\upomega}}_{{\varvec{X}_{{\mathbf{1}}} ,\varvec{Y}}} X_{1} \left( t \right), \, \ldots ,{\varvec{\upomega}}_{{\varvec{X}_{\varvec{k}} ,\varvec{Y}}} X_{k} \left( t \right), \, Y\left( t \right)) \, - Y\left( t \right)]\Delta t $$

the following requirement for the combination function \( {\mathbf{c}}^{*} {}_{Y}(C_{1} , \, \ldots .,C_{m} ,V_{1} , \, \ldots ,V_{k} ) \) is obtained:

$$ \begin{aligned} & {\mathbf{c}}^{*} {}_{Y}({\mathbf{C}}_{1,Y} (t), \ldots ,{\mathbf{C}}_{m,Y} (t),{\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) = {\mathbf{c}}_{Y} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) \\ & {\mathbf{c}}^{*} {}_{Y}({\mathbf{C}}_{1,Y} (t), \ldots ,{\mathbf{C}}_{m,Y} (t),{\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) = (\text{cw}_{1,Y} \left( t \right)\,\text{bc}_{1} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)) + \ldots \\ & \quad + \text{cw}_{m,Y} \left( t \right)\,\text{bc}_{m} ({\varvec{\upomega}}_{{X_{1} ,Y}} X_{1} \left( t \right), \ldots ,{\varvec{\upomega}}_{{X_{k} ,Y}} X_{k} \left( t \right)))/\left( {\text{cw}_{1,Y} \left( t \right) + \ldots + \text{cw}_{m,Y} \left( t \right)} \right) \\ \end{aligned} $$

Now define the combination function \( {\mathbf{c}}^{*} {}_{Y}{\mathbf{(}}C_{1} , \, \ldots .,C_{m} ,V_{1} , \, \ldots ,V_{k} {\mathbf{)}} \) by

$$ \begin{aligned} & {\mathbf{c}}^{*} {}_{Y}(C_{1} , \, \ldots .,C_{m} ,V_{1} , \, \ldots ,V_{k} ) = \\ & (C_{1} \text{bc}_{1} \left( {V_{1} , \, \ldots ,V_{k} } \right) \, + \, \ldots + C_{m} \text{bc}_{m} \left( {V_{1} , \, \ldots ,V_{k} } \right))/\left( {C_{1} + \, \ldots . + C_{m} } \right) \\ \end{aligned} $$

where Ci stands for the combination function weight reification Ci,Y(t), and Vi for the state value Xi(t) of base state Xi. Using this function, the requirement is indeed fulfilled when Ci,Y(t) = cwi,Y(t). Note that it has to be guaranteed that the case in which all Ci become 0 does not occur. For a given combination function adaptation principle this can easily be achieved by normalising the Ci at each adaptation step so that their sum always stays 1. In Box 1(c) an example of this is shown as well.
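A sketch of this weighted-average construction and the normalisation, under assumed names:

```python
# c*_Y(C1,...,Cm, V1,...,Vk): the reified states C_i weigh a fixed list of
# basic combination functions bc_i; normalisation keeps their sum positive.

def c_star_cf(basic_fns, Cs, Vs):
    total = sum(Cs)
    return sum(c * bc(*Vs) for c, bc in zip(Cs, basic_fns)) / total

def normalise(Cs):
    # keep the sum of the C_i at 1 after each adaptation step
    total = sum(Cs)
    return [c / total for c in Cs]

# Smoothly switching from bc1 = sum to bc2 = min over time amounts to
# moving weight from C1 to C2:
bcs = [lambda *vs: sum(vs), lambda *vs: min(vs)]
early = c_star_cf(bcs, [1.0, 0.0], [0.3, 0.4])   # pure sum
late = c_star_cf(bcs, [0.0, 1.0], [0.3, 0.4])    # pure min
```

With the impacts 0.3 and 0.4, `early` gives 0.7 (the sum) and `late` gives 0.3 (the minimum).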

Note that basic combination functions may contain some parameters, for example, for the scaled sum combination function the scaling factor λ, and for the advanced logistic function the steepness σ and the threshold τ. If desired, for these parameters also reification states can be added, with the possibility to make them adaptive as well.

(d) It has been discussed above how in the reified network the causal relations for the base network states can be defined separately for each of the three types of parameters. Combining these three, the following combination function does all at once:

$$ \begin{aligned} & {\mathbf{c}}^{*} {}_{Y}(S,C_{1} , \, \ldots .,C_{m} ,V_{1} , \, \ldots ,V_{k} ,W_{1} , \, \ldots ,W_{k} ,W) = \\ & S\left( {C_{1} \text{bc}_{1} \left( {W_{1} V_{1} , \ldots ,W_{k} V_{k} } \right) \, + \ldots + C_{m} \text{bc}_{m} \left( {W_{1} V_{1} , \ldots ,W_{k} V_{k} } \right)} \right)/\left( {C_{1} + \ldots + C_{m} } \right) + (1 - S)W \\ \end{aligned} $$

where S stands for the speed factor reification HY(t), Ci for the combination function weight reification Ci,Y(t), Vi for the state value Xi(t) of base state Xi, Wi for the connection weight reification \( {\varvec{\Omega}}_{{X_{i} ,Y}} \left( t \right) \), and W for the state value Y(t) of base state Y. See Box 1(d) for an example, and Box 2 for more explanation and a general derivation of this function. So this is what defines the dynamics of the base network states within the reified network.
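Combining the separate cases, the universal combination function can be sketched as follows (names assumed; a direct transcription of the displayed formula):

```python
# c*_Y(S, C1,...,Cm, V1,...,Vk, W1,...,Wk, W) handling speed factor S,
# combination function weights C_i and connection weights W_i at once.

def c_star(S, Cs, Vs, Ws, W, basic_fns):
    scaled = [w * v for v, w in zip(Vs, Ws)]
    inner = sum(c * bc(*scaled) for c, bc in zip(Cs, basic_fns)) / sum(Cs)
    return S * inner + (1 - S) * W

def step(S, Cs, Vs, Ws, W, basic_fns, dt):
    # Euler step of the base state with speed factor 1 in the reified network
    return W + (c_star(S, Cs, Vs, Ws, W, basic_fns) - W) * dt
```

With S = 1, a single basic function and weights W_i equal to the original connection weights, this reduces exactly to the original update.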

4 Reification for Unified Modeling of Adaptation Principles

The availability of the reification states for the base network structure as explicit states that can in principle change over time opens the possibility to define network adaptation principles in a Network-Oriented manner: by causal connections to and between the reification states and proper combination functions, rather than just by a separate set of difference or differential equations. This offers a framework to specify network adaptation principles in a unified and standardized Network-Oriented manner, and to compare them to each other. This is illustrated in the current section for a number of examples of network adaptation principles: Hebbian learning in Mental Networks and homophily in Social Networks, triadic closure in Mental and Social Networks, and preferential attachment in Mental and Social Networks. These examples focus on adaptive connection weights. In addition, it is shown how adaptive speed factors and adaptive combination functions can be applied as well. Table 2 shows an overview of these examples.

Table 2. Overview of known adaptation principles modeled by a reified network

Note that in Table 2 it is not indicated which combination functions may be used for the reification states. For example, for Hebbian learning (row 1) that may be

$${\mathbf{c}}_{{\Upomega}_{{{X_{i}} ,Y}}} (V_{1} ,\,V_{2} ,\,W) = V_{1} V_{2} (1 - W) + {\upmu} W $$

with μ a persistence parameter, where V1 stands for Xi(t), V2 for Y(t) and W for \( \Upomega_{{X_{i} ,Y}} \left( t \right) \); for the homophily principle (also row 1) from [4] it may be

$$ {\mathbf{c}}_{{\Upomega}_{{{X_{i}} ,Y}}} (V_{1} , \, V_{2} ,W) \, = W + \upalpha(\uptau - \, \left| {V_{1} - V_{2} } \right|)\left( {1 - W} \right)W $$

For triadic closure (row 2) and preferential attachment (row 3) a scaled or logistic sum function may be chosen. For controlled connection modulation it can be the following function: \( {\mathbf{c}}_{{\Upomega}_{{{X_{i}} ,Y}}} (V,W) \, = W +\upalpha\,\,V\,\,W\,\,\left( {1{-}W} \right) \) with α a modulation factor, which can be positive (amplification effect) or negative (suppressing effect), where V stands for Ci(t) and W for \( \Upomega_{{X_{i} ,Y}} \left( t \right) \).
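The Hebbian learning and homophily combination functions can be sketched directly from the formulas above (parameter names follow the text; everything else is an assumption):

```python
# Combination functions for the reification states Omega_{Xi,Y}.

def hebb(V1, V2, W, mu):
    # Hebbian learning with persistence parameter mu:
    # c(V1, V2, W) = V1*V2*(1 - W) + mu*W
    return V1 * V2 * (1 - W) + mu * W

def homophily(V1, V2, W, alpha, tau):
    # homophily with amplification factor alpha and threshold tau:
    # strengthen the connection when |V1 - V2| < tau, weaken it otherwise
    return W + alpha * (tau - abs(V1 - V2)) * (1 - W) * W
```

For instance, `homophily` increases W for two persons with equal states (difference below τ) and decreases it when their states differ by more than τ.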

The use of an adaptive speed factor and an adaptive combination function is illustrated by an example scenario, based on an implemented software environment for network reification. Consider within an organisation a manager of a group of 7 members with their opinions \( X_{1} ,\,\,\, \ldots ,X_{7} \), and how she adapts to the organisation over time. She wants to represent the opinions of the group members well within the organisation, and therefore initially uses a (normalised) scaled sum function. Later on, based on disappointing experiences within the organisation, she decides to use a threshold function: the alogistic function. Moreover, initially she is busy with other things; only later does she get more time to respond faster to the input from her group members. The left-hand side of Fig. 2 shows how the manager’s speed factor and combination function weights adapt over time; the right-hand side shows the other relevant states: the group member opinions, the manager’s opinion, and the manager’s change in available time and in disappointment.

Fig. 2. Adaptive speed factor and combination function for a manager and her group members. Time on the horizontal axis, state values on the vertical axis. (Color figure online)

It can be seen in Fig. 2 that after time point 40 the manager’s speed factor increases (blue line; the effect is a shorter response time), due to more availability (purple line). After time point 140 a switch is shown from a dominant weight for the scaled sum function (purple line) to a dominant weight for the alogistic function (red line), due to increasing disappointment (green line). Figure 2 also shows how the manager’s opinion is affected by the opinions of the group members: after time point 140 the manager’s opinion becomes much lower due to the switch of combination function, which results from the increase in disappointment (green line).

5 Discussion

In this paper it was shown how network structure can be reified in the network by adding explicit network states representing the parameters defining the characteristics of the network structure, in particular connection weights, combination functions and speed factors. Network reification can provide advantages similar to those found for reification in languages in other areas of AI and Computer Science, in particular, substantially enhanced expressiveness; e.g., [6,7,8, 16, 17, 22].

A reified network including an explicit representation of the network structure makes it possible to model dynamics of the original network by dynamics within the reified network. In this way an adaptive network can be represented by a non-adaptive network. It was shown how the approach provides a unified manner of modelling network adaptation principles, and allows comparison of such principles across different domains. This was illustrated for known adaptation principles for Mental Networks and for Social Networks. Note that this approach to model network adaptation principles can be applied successfully to any adaptation principle that is described by (first-order) difference or differential equations (as is usually their format), since in [19] it is shown how any difference or differential equation can be modeled in network format.

Network reification will increase complexity, but at most quadratically in the number of nodes N and linearly in the number of connections M of the original network. More specifically, if m is the number of basic combination functions considered, then the number of nodes in the reified network is at most N (original nodes) + N (speed factor states) + N² (connection weight states) + mN (combination function weight states), which is (2 + m + N)N. If not all possible connections are used but only a number M of them, the outcome is (2 + m)N + M; this is linear in the number of nodes and connections. The number of connections in the reified network is M (original connections) + N (speed factor states to their base states) + ΣY indegree(Y) = M (connection weight states to their base states) + mN (combination function weight states to their base states), which is (m + 1)N + 2M; again this is linear in the number of nodes and connections.
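These counts can be checked with a small sketch (the helper name is hypothetical):

```python
# Size of the reified network for a base network with N nodes, M used
# connections and m basic combination functions.

def reified_sizes(N, M, m):
    nodes = N + N + M + m * N          # base + speed + connection + cf-weight states
    connections = M + N + M + m * N    # base + downward from H, Omega and C states
    return nodes, connections

# Example: N = 10, M = 20, m = 3 gives (2 + m)*N + M = 70 nodes
# and (m + 1)*N + 2*M = 80 connections.
```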

Note that in this presentation the structure of the base network is reified, but not the structure of the reified network as a whole: in the reification process structures are added that are not themselves reified. One may wonder whether the structure of the reified network can also be reified within a second-order reification. In principle this can be done. It is an open question what practical benefits such a second-order reification offers. Structures in the first-order reified network that are not part of the base network are, for example, those used to model adaptation principles. In a second-order reified network they are explicitly represented by states. Future research will explore the use of such a second-order reification level.

It is also possible, for any n, to repeat the construction n times and obtain nth-order reification. Still, there will be structures introduced in the step from n−1 to n that have no reification. From a theoretical perspective the construction can even be repeated infinitely many times, for all natural numbers: ω-order reification, where ω is the ordinal for the natural numbers. Then an infinite network is obtained, which is theoretically well-defined; all structures in this network are reified within the network itself, but it is not clear whether it can be applied in practice, or be used for theoretical questions. This also might be a subject for future research.