
1 Introduction

Online Social Network (OSN) behavior has long been a topic of interest across social applications of artificial intelligence, including link detection, security threat identification, pattern recognition, recommendation, topic modeling and event prediction. Key relational behavior arises from dynamic communication patterns that evolve continuously over time. Recent research includes the use of directional dyads and signed reciprocity as a special representation of link “strength” [22].

Challenges. Many relational approaches surveyed in this study, however, lack depth and representative power [35]. The drawback of these techniques is that important correlational attributes shared between actors are ignored, resulting in shallow representations of relational states [35]. Throughout the literature, methods based on feature similarities have been shown to lack the representational efficacy to model real-life social structures effectively [2, 28]. More broadly, several critical questions in this field remain unanswered. In an unstructured social network with an evolving construct of dynamic relationships [35]: first, how can we accurately represent generalizations of evolutionary behavior within these social transactions? Second, how can we recognize dynamic relational profiles that correlate with different social communication patterns? Finally, how can we quantify the dynamic errors arising from social disruptions (outliers) in our representations?

Data Models. We address these questions with Fractal Neural Networks (FNNs). FNNs are used within the Relational Turbulence Model (RTM) framework to describe structures of chaos [25]. FNNs leverage the dynamic structure of fractals as the lowest-level principal decompositions of never-ending patterns. They are driven by a recursive process and are adaptable enough to describe highly dynamic system representations [21]. In our approach, we define Relational Turbulence through probabilistic measures of Relational Intensity \(P(\gamma _{rl})\), Relational Interference \(P(\vartheta _{rl})\) and Relational Uncertainty \(P(\varphi _{rl})\) [30]. RTM characterizes an artificial construct which predicts communication behaviors observed during relationship transitions in an environment of constant social disruptions [30]. We choose this model because alternative data models (e.g. node-based, neighbor-based, path-based and random walk-based measures [11]) compromise accuracy and performance for simplicity of representation. These representations capture relational structures from a time-static perspective and do not adapt to real-life dynamic evolutions of relational states [31]. In this work, we focus on discovering relational intelligence by identifying relational profiles on three major social platforms: the Twitter, Google and Enron email datasets.

Technical Model. In this paper, we introduce RFT to tackle the problem of misrepresenting a time-evolving flow of relational attributes. The model evolves into a multi-stage Deep Neural Network (DNN) from atomic fractal hybrid architectures [5]. The atomic structure is morphed from standard concatenations of Restricted Boltzmann Machines (RBMs) and Recursive Neural Networks (RNNs). RFT accepts as inputs key relational feature states \(f_{i}\) between actors \(a_{j}\) and global events \(E_{\epsilon }\) from past and present social transactions, and determines the likelihood of relational turbulence \(\tau _{ij}\) within an identified social flux \(F_{\epsilon }\). Turbulence broadly corresponds to disruptive social communication patterns within various topic and event contexts. For example, passive negative sentiments transacted through discussions of major topics like trade wars drive relational breakdowns in many aspects such as trust, influence and status. We develop a novel architecture from RTM to identify social disruptions by estimating relational turbulence profiles within a given social context describing the state of flux. We then evaluate and demonstrate that our methods outperform similarity-based feature and flat structural approaches in detecting social flux and turbulence.

Contributions. Our scientific contributions are presented as follows:

  1.

    Our method adaptively learns from real-time online streaming data to identify key turbulent relationships within a given OSN.

  2.

    An innovative RFT model is developed to capture key relational features, which are used to detect and profile social communication patterns of eventful states within a given OSN.

  3.

    Experimental results show that RFT offers a good modeling of relational ground truths, while the FNN efficiently and accurately represents evolving relational turbulence and flux profiles within a given OSN.

The remainder of the paper is organized as follows: Sect. 2 presents a brief overview of related works drawn from social theories and relational structures. Section 3 introduces key concepts, theories and preliminaries of our proposed model. Section 4 discusses the methods and models we have developed for profiling relational turbulence in OSNs. Section 5 introduces our experimental design, implementation and results, and presents our discussion. Section 6 concludes and outlines potential future directions.

2 Related Literature

Relational Turbulence. Relational turbulence was first studied in [13]. It is characterized as a resultant state of conflicting interests between two or more actors. Conflict correlates with both a stimulus for communication and detrimental event occurrences [13]. Relationship-altering events are therefore important discriminators for conflict detection and turbulence profiling. These events, if found to be strong negative violations of expectancies between the relational reciprocates of actors, can lead to instability in a relational flux [13]. The RTM [30] builds upon the core principles of relational state shifts and conflict management in an environment of continuous online social disruptions. The process of turbulent relationship development can be described as a continuous and communicative state of flux [30], defined by a consistent exchange of sentimental and affective information between the actors involved. Each transition to another state (e.g. from professional colleagues to friendship) carries some probability of causing friction (conflict), which may lead to a polarization of sentiments and affective communication flux in OSNs [30]. Two key features of the RTM are actor interference and relational uncertainty [30]; they enable effective detection and prediction of conflicting events in sentimental and affective computing.

Neural Network Architectures. In [18], the authors present a minimalist neural network architecture for reliably and accurately estimating emotional states from EEG-captured data. Their model, however, lacks representation for more deeply complex emotional states (e.g. in-betweenness in quantization across valence and arousal). Additionally, their reinforced gradient coefficient augments the errors calculated between expected-weighted and actual outputs, which are then used to update the layered weights of their shallow Artificial Neural Network (ANN) model. This approach alleviates diminishing gradients at the expense of performance. In the same vein, [26] addresses social role recognition through a Conditional Random Field (CRF) layered model architecture. However, for video image frames containing latent social role-based semantics, CRF architectures are ill-adapted to represent the depth of these roles in the identification process, leading to poor performance of their full model. Building on the principles of Role Theory, the authors in [20] propose a deeper hierarchical model for human activity recognition based on identified actor roles within an eventful context. Their model's performance suffers when scaling to larger event frameworks due to overfitting and error-gradient saddle points.

3 Preliminaries

Our RFT model leverages two key concepts. The logical aspect is derived from the Relational Turbulence Model, and the structural design is evolved from the Fractal Neural Network (FNN). The core idea of RFT is to iteratively adapt the structure of the neural network to the changing outputs (relational turbulence) of the design, in response to the changing complexity of data at the inputs. A detailed architecture of the FNN used in our design is given in Fig. 1a.

Fig. 1.
figure 1

The RFT logical architecture

Relational Turbulence. Following the RTM approach [29], we define Relational Intensity \(P(\gamma _{rl})\), Relational Interference \(P(\vartheta _{rl})\) and Relational Uncertainty \(P(\varphi _{rl})\) as the three key probabilistic outputs of the RFT model, which together represent the relational turbulence \(P({\tau _{rl}})\) of a given link in an OSN. The key element types we have identified as contributing features between the turning point and relationship development (an unstable/turbulent process) are the Confidence \(\rho _{ij}\), Salience \(\xi _{ij}\) and Sentiment \(\lambda _{ij}\) scores of the actor-actor relationship in the social transaction in question.

Expectancy Violation. It is noteworthy that the ground-truth reciprocities of these element types shared within a relational flux violate expectancies - \(E(\rho _{ij})\), \(E(\xi _{ij})\) and \(E(\lambda _{ij})\) respectively [29]. These violations are a contributing factor to temporal representations of relational turbulence - \(\gamma _{rl}\), \(\vartheta _{rl}\) and \(\varphi _{rl}\). Negative expectancy is defined as a polar mismatch between expected and actual reciprocates (e.g. actor i expecting a somewhat positive reciprocation of an egress sentiment stream but instead receiving a negative ingress sentiment stream from actor j). Positive expectancy is defined as a strong cosine-similar vector alignment between these reciprocates. Both expectancy violation (EV) extremes are characterized by sharp gradient changes in their weighted feature scores. This is given mathematically as:

$$\begin{aligned} \frac{\partial E_{rl}}{\partial \tau _{rl}}=\sum _{i,j=1}^{n}\prod _{\eta =\rho ,\xi ,\lambda }\frac{\partial E(\eta _{ji})}{\partial \eta _{ji}}\times \frac{\partial \eta _{ij}}{\partial \tau _{ij}} \end{aligned}$$
(1)

Where \(\tau _{ij}\) is the relational turbulence between node i and its surrounding neighbors j, and \(\eta _{ij}\), \(\eta _{ji}\) are the reciprocated sentiments from node i to j and from j to i respectively.
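The two EV extremes described above can be illustrated with a small sketch that scores reciprocated sentiment vectors by cosine alignment. This is a minimal illustration under our own assumptions, not the paper's implementation; the function names and the `pos`/`neg` thresholds are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sentiment feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expectancy_violation(expected, actual, pos=0.7, neg=-0.3):
    """Label an exchange by the alignment of expected vs. actual reciprocation.

    Strong alignment -> positive expectancy; polar mismatch -> negative EV.
    The thresholds `pos` and `neg` are illustrative, not from the paper.
    """
    c = cosine(expected, actual)
    if c >= pos:
        return "positive"
    if c <= neg:
        return "negative"
    return "neutral"
```

In this reading, a polar mismatch (cosine near -1) corresponds to a negative EV, while strong alignment (cosine near +1) corresponds to positive expectancy.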

Relational change or transition - also known as a turning point - defines some state-based critical threshold beyond which relational turbulence and negative communication are irrevocable [19]. This critical threshold is specific to actor-actor relationships and is learned by our model as a conflict-escalation minimization function [27]. Conflict escalation is defined as the gradual increase in negative flux \(\frac{-\nabla F_{\epsilon }}{\nabla t}\) over time within a classified context area \(L_{F_{\epsilon }}\) of interest [27]. The critical threshold parameter is then derived mathematically as:

(2)

Where \(T_{\epsilon }\) is the threshold of interest and m is the total number of training samples over the time window t. The equation simply states that the relational transition threshold decreases drastically for strong EVs and gradually for weak EVs.

Problem Formulation. The problem our work addresses can be summarised as follows: given an OSN in an environment of constant social shocks, we wish to minimize inaccuracies in representations of the time-evolving flow of relational attributes (time-realistic relationships) between actors. Furthermore, although DNNs are very powerful tools for both classification and recognition tasks, they are computationally expensive [14]. A drawback of a generative architectural approach is its use of stochastic gradient descent methods during training, which do not scale well to high dimensionalities [17, 23]. Still, generative Deep Belief Networks (DBNs) offer many benefits, such as supplying good initialization points and making efficient use of unlabeled data, which makes their use in deep network architectures indispensable [5].

The Model Solution. To tackle the problems of computational efficiency and learning scalability on large data sets, we adopt the Deep Stacking Network (DSN) model framework for our study. Central to this architecture is the use of stacking to learn complex distributions from simple core belief modules, functions and classifiers. Our approach leverages the temporal transitions of stages in the relational evolution between nodes of an OSN [34]. It determines profiles of relational turbulence and encodes knowledge dimensionality into a highly volatile shallow fractal ANN architecture, which either generates or collapses depth complexity during active learning in response to random “anytime-sequenced” fluctuating data.

4 Model and Methods

A high-level system architecture of RFT is given in Fig. 1b. In our design, data is fed into the model from two distinct sources. The first is batch-processed from a repository of social data (Google and Enron emails). The second is actively learned from live streaming tweet data (Twitter) pulled from multiple server sources using the Twitter firehose API. The data is then pushed through the model in stages. During pre-processing, it is first broken down into key relational features - Category confidence, Entity salience, Entity sentiment, Mentions sentiment and Context sentiment - using Google's NLP API. In the next stage, these features are accepted as inputs to our RFT model (Fig. 1b) to estimate the output relational turbulence profiles. The input features of our RFT model are concatenated with the truth values of relational turbulence calculated from (6), (7) and (8) and fed back synchronously and recursively into the intermediate confabulations of our FNN architecture (Fig. 1a). Errors in output expectations are backpropagated and corrected with inter-layer weight adjustments until they fall within pre-defined tolerance levels.

4.1 The Hybrid RFT Fractal Architecture

We begin with the definition of a soft kernel used to discover a Markovian structure, which we then encode into confabulations of fractal sub-structures. For a given set of data observables as inputs \(\chi \in X\) and outputs \(\mathfrak {I}\in \Xi \), we wish to loosely define a mapping from the source space \((X,\alpha )\) onto a target space \((\mathfrak {I},\omega )\). The conditional \(P(\chi \mid \omega )\) assigns a probability from each source input \(\chi \) to the final output space \(\omega \). Each posterior state-space between input and output is generated and sampled through a random walk process. The indicator function we have chosen to describe the state transition rule is:

$$\begin{aligned} \varTheta _{t+1}=\min {\left\{ \begin{array}{ll} 0\\ \prod _{c=1}^{n}\frac{\delta E_{t+1}^{c}}{\delta \chi _{t}^{c}} \end{array}\right. } \end{aligned}$$
(3)

Where \(\delta E_{t+1}^{c}\) is the error change from one hidden feature activity state \(h_{t}\in H\) onto higher posterior confabulations. The objective function at each transition seeks to minimize error gradients. For a general finite-state-space Markovian process, the Markov kernel is thus defined as:

$$\begin{aligned} Kern(M)= {\left\{ \begin{array}{ll} p:X\times \omega \rightarrow [0,1]\\ p(\chi |\omega )=\oint _{\omega }q(\chi ,\mathfrak {I})\nu (\delta \mathfrak {I}) \end{array}\right. } \end{aligned}$$
(4)

Once a unique Markovian neural network has been discovered, a Single Layer Convolutional Perceptron (SLCP) is proposed as the baseline structure for learning the fractal sub-network from pre-existing posterior confabulations. The SLCP baseline structure changes as discovered knowledge is progressively encoded during the learning process.
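The state-transition rule of Eq. (3) can be sketched directly: take the product of per-channel error-gradient ratios and clamp it at zero, so that only error-decreasing transitions register. This is a minimal sketch under our reading of the equation; the function name is ours.

```python
def transition_indicator(error_deltas, input_deltas):
    """State-transition indicator of Eq. (3): the product of per-channel
    error-change to input-change ratios, clamped at zero (the objective
    at each transition is to minimize error gradients).
    """
    prod = 1.0
    for de, dx in zip(error_deltas, input_deltas):
        prod *= de / dx  # delta E^c / delta chi^c for channel c
    return min(0.0, prod)
```

Under this reading, a transition whose aggregate error gradient is positive contributes nothing (the indicator is 0), while error-decreasing transitions yield a negative indicator proportional to the gradient product.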

4.2 The Fractal Neural Network

The model design we have chosen to address the dynamic profiling of relational turbulence is the Fractal Neural Network (FNN) [21]. The FNN adopts a hybrid architecture incorporating both generative and discriminative deep networks [5]. In our architecture, the generative DBN is used to initialize the DNN weights; fine-tuning through backpropagation is then carried out sequentially, layer by layer.

Generative Framework. In our learning model, the FNN generative framework is developed from a stack of Restricted Boltzmann Machine (RBM) [12] layers. A Boltzmann Machine is architecturally defined as a stochastically coupled pair of binary units, comprising a visible layer \(V\in \{0,1\}^{D}\) and a hidden layer vector \(H\in \{0,1\}^{P}\). The coupling between the visible and hidden layers VH is driven by an energy state of layered interactivity, expressed as:

$$\begin{aligned} E(V,H,\theta )=-\frac{1}{2}V^{T}LV-\frac{1}{2}H^{T}JH-V^{T}WH \end{aligned}$$
(5)

Where \(\theta =\{W,J,L\}\) are the Boltzmann Machine model weights: W couples the visible and hidden layers, J the hidden units to each other, and L the visible units to each other.
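The energy of a configuration in Eq. (5) can be computed directly. The sketch below is an illustrative pure-Python implementation (function name ours); in a *Restricted* Boltzmann Machine the intra-layer matrices J and L are zero, leaving only the visible-hidden coupling W.

```python
def boltzmann_energy(V, H, W, J, L):
    """Energy of a Boltzmann Machine configuration, Eq. (5):
    E(V, H) = -1/2 V^T L V - 1/2 H^T J H - V^T W H.
    V, H are binary unit lists; W (visible-hidden), J (hidden-hidden)
    and L (visible-visible) are weight matrices as nested lists.
    """
    e = 0.0
    for i, vi in enumerate(V):
        for k, vk in enumerate(V):
            e -= 0.5 * vi * L[i][k] * vk      # visible-visible term
    for a, ha in enumerate(H):
        for b, hb in enumerate(H):
            e -= 0.5 * ha * J[a][b] * hb      # hidden-hidden term
    for i, vi in enumerate(V):
        for a, ha in enumerate(H):
            e -= vi * W[i][a] * ha            # visible-hidden coupling
    return e
```

For example, with J and L zeroed (the RBM case), the energy reduces to the negated sum of active visible-hidden couplings.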

Discriminative Framework. The discriminative architecture of our FNN model is built from the Tensorized Deep Stacking Recursive Neural Network (TDSN-RNN) model framework. All deep architectures (from Contrastive Divergence or per-layer RBM training to the supervised backpropagation perceptron architecture) rely on a back-and-forth recursive process through three core stages of learning. Stage 1 is a forward pass which sequentially processes stacked training layers from input to output. Stage 2 backpropagates through this layer-wise sequence from output to input using gradient descent. Stage 3 adjusts the weights between layers to minimize output errors. This process is repeated in cycles until the final expectation is reached.
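The three-stage cycle can be illustrated with a toy single-unit sketch (an assumed minimal example, not the TDSN-RNN itself): a forward pass, an error-gradient backpropagation, and a weight update, repeated until the output error falls within tolerance.

```python
# Toy single-unit example of the three-stage cycle: learn y = 2x.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [2 * x for x in xs]

w, lr = 0.0, 0.1
for _ in range(200):
    outs = [w * x for x in xs]                                # Stage 1: forward pass
    grad = sum((o - y) * x                                    # Stage 2: backpropagate
               for o, y, x in zip(outs, ys, xs)) / len(xs)    #          the error gradient
    w -= lr * grad                                            # Stage 3: weight update
```

After the loop, `w` converges to the target slope of 2; a layered network repeats the same three stages across every stacked layer per cycle.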

4.3 The Relational Turbulence Model

In our model, the probabilistic relational turbulence \(P(\tau _{rl})\) of a given link in an OSN is determined by key features of the established relationship at any instant: the confidence \(\rho _{ij}\), salience \(\xi _{ij}\) and sentiment \(\lambda _{ij}\) scores of a dyadic link. We define relational intensity as the continuous integration of sentimental transactions \(F_{\epsilon }\) per context (event topic) area \(L_{F_{\epsilon }}\), relational uncertainty as the likelihood arising from opposing sentiment mentions, and relational interference as the probabilistic deviation in expectancies from predicted uncertainties and flux intensities. Mathematically, these are given as:

For Relational Intensity:

$$\begin{aligned} \gamma _{rl} = \sum _{i,j=1}^{n}\frac{\beta _{ij}|-\frac{\nabla F_{\epsilon j}}{\nabla t}|}{L_{F_{\epsilon }}} + \chi _{rl} + \dot{\theta }_{rl} \end{aligned}$$
(6)

Where \(\beta _{ij}\) is defined as the temporal derivative of the latent topic (context) oscillation phase \(\epsilon \), \(\chi _{rl}\) is the reciprocal bias and \(\dot{\theta }_{rl}\) is the gradient of social influence from one actor to another across a relational link.

For Relational Uncertainty:

$$\begin{aligned} \varphi _{rl}=\frac{\sum _{i,j=1}^{n}S_{i}S_{j}}{\sqrt{\sum _{i=1}^{n}S_{i}^{2}}\sqrt{\sum _{j=1}^{n}S_{j}^{2}}} \end{aligned}$$
(7)

Where \(S_{i}\) and \(S_{j}\) are sentiments transacted from nodes i to j and from nodes j to i respectively.
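Eq. (7) has the shape of a cosine similarity between the two directed sentiment streams. The sketch below assumes that reading (squared sums in the normalisation, as in standard cosine similarity); the function name is ours.

```python
import math

def relational_uncertainty(S_ij, S_ji):
    """Cosine-style agreement between the sentiment streams transacted in
    each direction of a dyadic link (cf. Eq. (7)). Values near 1 indicate
    reciprocated sentiment; values near -1 indicate opposing
    (uncertainty-inducing) exchanges.
    """
    num = sum(a * b for a, b in zip(S_ij, S_ji))
    den = (math.sqrt(sum(a * a for a in S_ij))
           * math.sqrt(sum(b * b for b in S_ji)))
    return num / den
```

Opposing sentiment streams between nodes i and j thus push the measure toward -1, consistent with uncertainty arising from opposing sentiment mentions.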

For Relational Interference:

(8)

Where,

$$\begin{aligned} F(\gamma _{rl},\varphi _{rl}:\mu _{\gamma \varphi },\omega _{\gamma \varphi }^2)=\frac{1}{\sqrt{2\pi }\omega }\int _{-\infty }^{\gamma _{rl},\varphi _{rl}}\exp \left( -\frac{(t-\mu )^2}{2\omega ^2}\right) dt \end{aligned}$$
(9)

Here, \(F(\gamma _{rl},\varphi _{rl}:\mu _{\gamma \varphi },\omega _{\gamma \varphi }^2)\) is the Cumulative Distribution Function (CDF), which can equivalently be expressed through the error function erf(x) of the predicted outcomes \(\gamma _{rl}\) and \(\varphi _{rl}\).
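The Gaussian CDF of Eq. (9) is commonly evaluated through the error function. A minimal sketch (function name ours):

```python
import math

def gaussian_cdf(x, mu=0.0, omega=1.0):
    """CDF of a normal distribution N(mu, omega^2), via the error function:
    F(x) = 1/2 * (1 + erf((x - mu) / (omega * sqrt(2)))).
    """
    return 0.5 * (1.0 + math.erf((x - mu) / (omega * math.sqrt(2.0))))
```

Evaluating this CDF at the predicted \(\gamma _{rl}\) and \(\varphi _{rl}\) then yields the probabilities entering the interference computation of Eq. (8).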

Finally, Relational Turbulence is calculated from the conditional posteriors of \(\gamma _{rl}\), \(\vartheta _{rl}\) and \(\varphi _{rl}\) as:

$$\begin{aligned} P(\tau _{rl})=\sum _{i=1}^{n}\frac{P(\gamma _{i}|\theta _{i})P(\vartheta _{i}|\varphi _{i})P(\varphi _{i}|\gamma _{i})}{N_{i}P(\gamma _{i})P(\vartheta _{i})P(\varphi _{i})} \end{aligned}$$
(10)

Here, \(N_{i}\) is the conditional scaling factor. The inputs were tested across the RFT dynamically stacked Fractal Neural Network (FNN) and the chosen baseline models.
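The posterior combination of Eq. (10) can be sketched as a direct sum over links. This is an illustrative reading (function and argument names ours); each link contributes the product of its conditional posteriors normalised by the scaling factor and the marginals.

```python
def relational_turbulence(cond, marg, N):
    """Relational turbulence, Eq. (10): combine conditional posteriors of
    intensity, interference and uncertainty, normalised by the conditional
    scaling factor N_i and the marginal probabilities.

    cond: list of (P(g|th), P(v|ph), P(ph|g)) triples, one per link i
    marg: list of (P(g), P(v), P(ph)) triples, one per link i
    N:    list of conditional scaling factors N_i
    """
    total = 0.0
    for (cg, cv, cp), (mg, mv, mp), n in zip(cond, marg, N):
        total += (cg * cv * cp) / (n * mg * mv * mp)
    return total
```

When the conditionals equal the marginals (no relational coupling) and \(N_{i}=1\), each link contributes exactly 1 to the sum.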

5 Experiments

5.1 Dataset

The experiments were conducted on three datasets using RFT and five different baseline algorithms. The datasets are: Twitter, Google and Enron emails. These three datasets were chosen because they are widely benchmarked in the academic community for studies in sentimental computing and are easily understood by the audience of this paper. They are detailed in Table 1.

Table 1. Statistics of datasets

5.2 Baselines

Several state-of-the-art methods were considered for comparison with the proposed RFT model. Since our model is the first of its kind for this type of adaptive online active learning, modified versions of similar methods were used alongside earlier-developed baselines for comparison. Note also that although many prediction models exist, not all share the same goal or data features as this study; we therefore consider only models which use similar data. Since not all methods can both predict relational turbulence and profile communication patterns, we compare only their relational turbulence output profiles. Descriptions of the competing methods are given in Table 2. The key difference between DCN and RFT is that in DCN the number of layers is fixed at 45, while in RFT the layers are allowed to grow and collapse as new feature-complexity representations are learned over time.

Table 2. Baseline models

Tuning Parameters. In the experiments, system model parameters were chosen based on the combined effect of several factors, including errors in observational data, choices of calibration methods and Design Of Experiment (DOE) criteria [10]. A hybrid of global and local Sensitivity Analysis (SA) approaches was used to determine the best-performing parameters for experimentation, based on a predefined behavior threshold for the model. The experiments were conducted on the training model with the learning rate set to 1.1, the sliding window set to 3, the error tolerance set to 0.1 (10%), the data outlier threshold set to 1.0, scaling set to 10, the vanishing gradient error threshold at 0 and the exploding gradient error threshold at 100. Finally, the trust region radius parameter was set to 5 and the softmax temperature regularization parameter to 1.2.

5.3 Performance Measurements

Kendall Coefficient. The Kendall (tau-b coefficient) was used to measure the strength of associations between predicted and expected outputs of the learning models. It is given as:

$$\begin{aligned} \tau _{b}=\frac{N_{c}-N_{d}}{\sqrt{(N_{0}-N_{x})(N_{0}-N_{y})}} \end{aligned}$$
(11)

Where \(N_{c}\), \(N_{d}\) are the numbers of concordant and discordant pairs respectively, \(N_{0}=n(n-1)/2\) is the total number of pairs, and \(N_{x}=\sum _{i}u_{i}(u_{i}-1)/2\) and \(N_{y}=\sum _{j}v_{j}(v_{j}-1)/2\) are the tie corrections, where \(u_{i}\) is the number of tied values in the \(i^{th}\) group of ties for the first quantity and \(v_{j}\) is the number of tied values in the \(j^{th}\) group of ties for the second quantity.
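A tie-corrected tau-b can be computed directly from Eq. (11). The sketch below is an illustrative pure-Python implementation (function name ours); counting pairs tied in x and y gives the \(N_{x}\) and \(N_{y}\) corrections without explicitly grouping ties.

```python
import math

def kendall_tau_b(x, y):
    """Kendall tau-b, Eq. (11): (Nc - Nd) / sqrt((N0 - Nx)(N0 - Ny))."""
    n = len(x)
    nc = nd = tx = ty = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                tx += 1                 # pair tied on x -> contributes to Nx
            if dy == 0:
                ty += 1                 # pair tied on y -> contributes to Ny
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    nc += 1             # concordant pair
                else:
                    nd += 1             # discordant pair
    n0 = n * (n - 1) // 2
    return (nc - nd) / math.sqrt((n0 - tx) * (n0 - ty))
```

Perfectly agreeing rankings yield +1, perfectly reversed rankings yield -1, and ties shrink the denominator accordingly.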

Spearman Coefficient. The Spearman (rho coefficient) was used to measure the monotonic relationship between the independent variables (Category confidence \(\varvec{\mathfrak {C}_{i}}\), Entity Salience \(\varvec{\mathcal {J}_{i}}\), Entity sentiments - magnitude and scores \((\varvec{\mathfrak {I}_{i}},\varvec{\beth _{i}})\), Mention sentiments - magnitude and scores \((\varvec{\mathcal {L}_{i}},\varvec{\gimel _{i}})\), Context sentiments - magnitude and scores) and the dependent variables (Relational Intensity \(\varvec{\gamma _{rl}}\), Relational Interference \(\varvec{\vartheta _{rl}}\) and Relational Uncertainty \(\varvec{\varphi _{rl}}\)). It is calculated as:

$$\begin{aligned} \varGamma _{S}=1-\frac{6\sum D_{i}^{2}}{N(N^{2}-1)} \end{aligned}$$
(12)

Where \(D_{i}=rank(X_{i})-rank(Y_{i})\) is the difference in ranks between the observed independent variable \(X_{i}\) and dependent variable \(Y_{i}\) and N is the number of predictions to input data sets for all three sources.
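Eq. (12) can be sketched directly once ranks are assigned. The implementation below is illustrative (function names ours); note the closed form of Eq. (12) is exact only in the absence of ties, so tied values are given their average rank here as a common convention.

```python
def rank(data):
    """Assign 1-based ranks; tied values share their average rank."""
    order = sorted(range(len(data)), key=lambda i: data[i])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and data[order[j + 1]] == data[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1           # average position of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rho, Eq. (12): 1 - 6 * sum(D_i^2) / (N (N^2 - 1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

As with Kendall's tau, identical rankings give +1 and fully reversed rankings give -1.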

K-Fold Validation. Finally, during experimentation, the full datasets obtained from the different sources (Twitter, Google and Enron) were partitioned into k subsamples. K-fold validation [33] was performed over all deep learning models using the Mean Absolute Percentage Error (MAPE) [4] of each run. Mathematically, MAPE can be expressed as:

$$\begin{aligned} \delta _{MAPE}=\frac{1}{N}\sum _{i=1}^{N}|\frac{E_{i}(x)-Y_{i}(t)}{E_{i}(x)}| \end{aligned}$$
(13)

Where \(E_{i}(x)\) is the expectation at the output of data input set i and \(Y_{i}(t)\) is the corresponding prediction over N total subsamples. \(\delta _{MAPE}\) is the average measure of errors in expectations at the output.
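Eq. (13) and the fold partitioning can be sketched as follows; this is an illustrative implementation (function names ours), with contiguous validation folds assumed for simplicity.

```python
def mape(expected, predicted):
    """Mean Absolute Percentage Error, Eq. (13):
    the mean of |(E_i - Y_i) / E_i| over all N subsamples.
    """
    n = len(expected)
    return sum(abs((e - p) / e) for e, p in zip(expected, predicted)) / n

def k_fold_splits(n, k):
    """Partition indices 0..n-1 into k contiguous (validation, training)
    index splits for k-fold validation.
    """
    fold = n // k
    return [(list(range(i * fold, (i + 1) * fold)),
             [j for j in range(n) if not i * fold <= j < (i + 1) * fold])
            for i in range(k)]
```

Each model is then trained on the training indices and scored with `mape` on the held-out fold, and the per-fold errors are averaged across the k runs.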

5.4 Results

The tests were run across the baselines and our RFT model. For clarity and simplicity of explanation, only every 10th running datum from a chosen output sample set is plotted and displayed for discussion. The line of best fit was used to graph the curve through the points. Additionally, because of space constraints, only the table of Kendall correlations on the chosen datasets is displayed. The results are shown in Table 3 and Fig. 2a-c.

Fig. 2.
figure 2

Graph of relational turbulence across three datasets

Table 3. Table of Spearman, Kendall and K-fold results

5.5 Investigation

As can be seen from the graphs, SLP models consistently underperform in ranking where prediction accuracy is concerned: the Kendall (tau-b) test shows a lower (positive) correlation between expected and predicted outputs across the test data set for SLP models, and a much higher (positive) association for the other baselines and RFT. Furthermore, from the Spearman (rho) test on the independent and dependent variables, Tables 3c-h show strongly positive monotonic correlations between the turbulence measures (\(\gamma _{rl}\), \(\vartheta _{rl}\) and \(\varphi _{rl}\)) and sentiment scores, and moderately positive correlations between the same turbulence measures and both category confidence and entity salience \((\mathfrak {C}_{i},\mathcal {J}_{i})\).

Additionally, from Table 3a, it is observed that across all models the strength of association between predicted and expected outputs tends to be weaker in specifically directed communications. This is observed in Enron's email datasets, as opposed to the Twitter and Google results. We attribute this to high relational interference scores, which correlate fairly well with entity salience scores. In this scenario, entity salience plays an important role in determining relational turbulence, as opposed to the contexts over which the sentiments were expressed. This means that an actor with a higher social status of influence may more readily interfere with other relationships in directed communications. Generally, however, Table 3b shows that as the number of sub-sample windows over the dataset increases, the MAPE of DCN, EnsemDT and RFT decreases, whereas the MAPE of SLP, IMPALA and MVVA fluctuates about a fixed error. This behavior is attributed to overfitting and gradient saddle points arising from poor initializations. RFT remains the clear winner across the measured baselines in all k-fold validation experiments.

6 Conclusion

In conclusion, we have shown that RFT is capable of predicting relational turbulence profiles between actors within a given OSN acquired from anytime data. The results show the superior accuracy and performance of the FNN model in comparison to well-known baseline models. The feasibility of the learning model has been demonstrated through implementation on three large-scale networks: Twitter, Google Plus and Enron emails. The study uncovers three pivotal long-term objectives from a relational perspective. First, relational features can strengthen medical, cyber security and social applications, where the trade-offs between detection, recommendation, prediction, data utility and privacy are continually being addressed. Second, in fintech applications, relational predicates (e.g. turbulence) are determinants of market movements, closely modeled as a system of constant shocks. Third, in artificial intelligence applications like computer cognition and robotics, learning relational features between social actors enables machines to recognize social behavior and evolve with it.