
1 Introduction

The notion of similarity has been present in scholarly discourse at least since the ideas of the ancient philosophers. In Plato’s The Republic, similarity was invoked in arguments about how the State functions and what its nature is. Aristotle made similarity one of the pillars of his theory of how human behavior is learned, and one of his laws stated that the experience or recall of an object (a situation) will evoke a recall of something similar to that object (situation) [2]. These views gave rise to a theory called associationism, which states that people perform complex psychological actions through the act of association between similar mental states they experienced in the past. The main proponents of this stance were members of the school of British Empiricism, namely philosophers such as David Hume, John Locke and John Stuart Mill. According to Hume, for example, similarity was one of the principles by which ideas are connected, alongside contiguity in time and space and cause and effect. Associationism also affected the first psychologists, like the pioneer of this field of study, William James, who saw similarity at the root of mental associations. On the other hand, there were thinkers who regarded similarity as a rather troublesome idea. Bertrand Russell held that if we accept it, we must also accept the existence of at least one universal – a mind-independent characteristic with which we may describe multiple things and which, he believed, does not exist. Quine went even further and called similarity logically repugnant, as it cannot be explicated in terms of more basic notions. Although the concept served to establish important philosophical dependencies and inflamed disputes across the years, it was given a formal definition only at the beginning of the \(20^{\text {th}}\) century, thanks to thinkers like Rudolf Carnap, Hans Wallach or Roger Shepard. We may divide these definitions into two groups: mathematical and non-mathematical.

Non-mathematical definitions stemmed mainly from psychology. In his classic article from 1923, Wertheimer formulated the Factor of Similarity, which gave the notion a descriptive specification. His law assumed that objects which are grouped together in the process of cognition are in fact similar. This was further developed by behaviorists like Pavlov, who viewed similarity between two stimuli as their relative distance on sensory dimensions. It was not until the beginning of the second half of the \(20^{\text {th}}\) century, however, that associationism began to be slowly discarded and new ideas came onto the scene. Wallach’s On Psychological Similarity marked a new era in thinking about the titular concept. In his work, the author juxtaposed older views on the topic with his conception of perceived similarity, in which people decide which features to select and which to ignore when judging the resemblance of two stimuli. He also presented experiments in which such decisions were based on the context in which the stimuli were presented, and he included external features, independent of the structure of a stimulus (such as a potential use), in similarity judgments.

Mathematical definitions were, for most of their history, based on a geometrical understanding of similarity. Carnap took all binary, reflexive and symmetric relations to be equivalent to the notion. This understanding was later adopted by psychologists (see [3] for more references), and similarity was treated as a metric defined on the set of objects being compared. The distance from one point to another defined the level of their difference. Thus, it was possible to state quantitatively that objects a and b resemble each other more than objects c and d, or that objects e and f are approximately identical since their dissimilarity does not exceed some threshold t.

Both of the aforementioned accounts, whether strictly mathematical or not, were later deemed inappropriate insofar as they drew on the geometrical understanding. This was mainly due to the criticism of Nelson Goodman who, like Quine, had a very low opinion of the concept of similarity and treated it as devoid of any explanatory power. His main argument was that for any three objects it is always possible to state that any two of them are more similar to each other than to the third one. Following Wallach, he used this observation to argue that there can be no context-independent similarity metric, thereby speaking out against the similarity models of his time. This critique was later partially backed by the works of Amos Tversky and, most notably, his famous paper Features of Similarity, which introduced a new formal view of similarity and provided psychological data against the geometrical stance [16]. Tversky showed how people’s judgments often violate each of the metric axioms, with symmetry being almost impossible to reconcile with our behavior. His model, which we shall discuss in detail further in the text, did not, however, address all the philosophical remarks made by Goodman or Quine – context was still overlooked. Gentner tried to account for the missing context and created a conception of relational similarity expressed in terms of unary predicates. Nevertheless, in this work we will be considering Tversky’s breakthrough formulation, as it is not only consistent with psychological evidence but also very robust in terms of its use in computer science.

The paper is organized as follows. In the second section we lay out basic definitions of the concepts that underlie the paper as a whole. The next section describes the similarity concept as well as selected methods of expressing it. Section four contains a detailed description of the recognition framework based on a similarity fuzzy relation and presents its implementation in the form of a network of comparators. The last section provides a summary and some comments on the methodology of the framework.

2 Preliminaries

The basic element that was under the scope of interest of ancient philosophers, just as it is now for modern researchers in artificial intelligence, is a compound object. The structure of a compound object is formulated by utilizing the notion of ontology, which comes from philosophy but is now also frequently found in the field of artificial intelligence (AI). The formal definition that we will use here was introduced in 2001 in [15]. It states that an ontology is a system denoted \(O=\{C,R,H_{c},rel,A,L\}\), which specifies the structure of concepts, the relationships between them, as well as a theory defined on the model. C is understood as the set of all concepts of the model, where a single concept is equated with a group of objects with common characteristics. Then, R is a set of named connections between concepts [1], \(H_{c}\) – a collection of taxonomic relationships between concepts, rel – defined, non-taxonomic relationships between concepts, A – a set of axioms, and L – a lexicon defining the meaning of concepts (including relations). L is a set of the form {\(L_{c}\), \(L_{r}\), F, G}, where \(L_{c}\) stands for the lexicon of definitions for concepts, \(L_{r}\) – the lexicon of definitions for the set of relationships, F – references to concepts, and G – references to relationships.
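
To make this definition more tangible, the following minimal sketch (Python, with hypothetical names chosen purely for illustration) shows one possible way of representing the system \(O=\{C,R,H_{c},rel,A,L\}\) as a data structure; it is a reading of the definition from [15], not an implementation used later in the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Lexicon:
    """L = {L_c, L_r, F, G}: definitions of concepts and relations plus references to them."""
    concept_defs: dict[str, str] = field(default_factory=dict)   # L_c
    relation_defs: dict[str, str] = field(default_factory=dict)  # L_r
    concept_refs: dict[str, str] = field(default_factory=dict)   # F
    relation_refs: dict[str, str] = field(default_factory=dict)  # G

@dataclass
class Ontology:
    """O = {C, R, H_c, rel, A, L} following the definition adopted from [15]."""
    concepts: set[str]                      # C: all concepts of the model
    relations: set[str]                     # R: named connections between concepts
    hierarchy: set[tuple[str, str]]         # H_c: taxonomic pairs (sub-concept, super-concept)
    rel: dict[str, set[tuple[str, str]]]    # non-taxonomic relationships, grouped by name
    axioms: set[str]                        # A
    lexicon: Lexicon                        # L

# A toy instance: "car" is a kind of "vehicle", and a car has a wheel as a part.
o = Ontology(
    concepts={"vehicle", "car", "wheel"},
    relations={"has_part"},
    hierarchy={("car", "vehicle")},
    rel={"has_part": {("car", "wheel")}},
    axioms=set(),
    lexicon=Lexicon(concept_defs={"car": "a road vehicle with four wheels"}),
)
```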

In the simplest sense, an ontology is a set of concepts connected with one another through named relationships. If we group specific concepts into more general entities, then we can make use of the resulting hierarchies in defining mereological relations – that is, descriptions of dependencies between parts of objects. The literature describes many other interesting applications of ontology in computer science, most notably in pattern recognition, image analysis and modeling situational awareness in AI systems. The main problem there is to understand the structure of an object and, on the basis of the results of perception, discover the similarities. There are also convergent approaches in the literature which deal with interactive granular computing [10]. In the context of this work, ontology is used as a set of concepts describing objects, the structure of this set, and its relations. It is used for designating feature reducts as well as for describing the features against which objects are compared, and hence becomes a necessary tool for recognition and identification processes.

In general, objects can be divided into two groups: compound objects (\(X_c\)) and simple objects (\(X_s\)). A simple object is any element of the real world that has a representation expressible in the adopted ontology (O). In addition, the following properties arise from their ontological representation:

  1. Objects always belong to a certain class or a fixed number of classes in ontology. A single object may belong to several classes.

  2. An object has a property within a class. Features may vary by class.

  3. An object may be in relation to other objects in the same ontology.

A compound object is composed of other objects defined by means of the ontology (it connects them) and creates a new entity. A compound object has its specification, which describes the structure, relations and connections between its sub-objects. Compound objects satisfy the following additional properties (a minimal code sketch follows the list below):

  1. We can extract from them a minimum of two objects that can be independent entities.

  2. Component objects are interrelated with ontology through the formal definition of relationship.
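
As a minimal illustration of the two groups of objects (hypothetical Python names, continuing the Ontology sketch above), a simple object carries classes and class-dependent attributes, while a compound object groups at least two sub-objects together with the named relationships that connect them:

```python
from dataclasses import dataclass

@dataclass
class SimpleObject:
    """An element of the real world expressible in the adopted ontology O."""
    name: str
    classes: set[str]                # ontology classes the object belongs to (property 1)
    attributes: dict[str, object]    # class-dependent features and their values (property 2)

@dataclass
class CompoundObject:
    """Groups at least two sub-objects and the relations connecting them."""
    name: str
    parts: list[SimpleObject]
    structure: set[tuple[str, str, str]]   # (relation name, part, part)

    def __post_init__(self):
        # First property of compound objects: a minimum of two independent sub-objects.
        assert len(self.parts) >= 2, "a compound object needs at least two parts"

car = CompoundObject(
    name="my_car",
    parts=[SimpleObject("body", {"car"}, {"colour": "red"}),
           SimpleObject("wheel_1", {"wheel"}, {"diameter_cm": 40})],
    structure={("has_part", "body", "wheel_1")},
)
```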

3 Similarity Concept

In some sense, similarity can be seen as a relationship that derives from identity. Identity is an intuitive equality of objects, with the intuition formalized as the equality of attributes of the entities being compared. It is thus the supreme form of similarity. Rules for determining the identity of objects were already proposed in the 17th century by Gottfried Wilhelm Leibniz, who called them the ‘identity of indiscernibles’. They are as follows:

$$\begin{aligned} \forall x \forall y[\forall P(Px\leftrightarrow Py)\rightarrow x=y]: x,y \in U \end{aligned}$$
(1)

and

$$\begin{aligned} \forall x \forall y[x\ne y\rightarrow \lnot \forall P(Px\leftrightarrow Py)]: x,y \in U, \end{aligned}$$
(2)

where x and y are objects and P is a property. Formula (1) means that for any objects x and y from the universe U, if they have exactly the same values of all properties, then these objects are identical in the universe in question. Similarly, formula (2) means that for any objects x and y, if x is not identical to y, then in the universe U there must exist at least one characteristic discriminating between the two.

Intuitively, similarity is a certain kind of incomplete identity. Two similar objects are those that are primarily comparable and for which a degree of similarity can be obtained. The latter is feasible only if these objects have common or distinguishing features, which we understand as descriptive attributes attaining different values. Thus, comparing similar objects’ attributes gives the possibility of determining the degree of their similarity. It is commonly understood that the statement a is similar to b means that one object resembles the other or is almost the same. Such statements are, of course, very imprecise, but it is certainly possible to capture them using appropriate modelling techniques (e.g. fuzzy sets [4]). Following this intuition, one can identify two objects that fail to fulfill the definition of identity, yet miss it only by a narrow margin. The first option then is to use the so-called quantitative approach. We are dealing with a set of attributes describing both objects, where most of the attributes of these objects are equal, although there is at least one attribute for which equality does not hold. Such objects would almost certainly be called identical in colloquial speech, but from the strict point of view they are only similar to a certain degree. The second approach is not limited to examining the attributes on which the objects agree; it focuses on the remaining attributes. These attributes do not meet the condition of identity, but one can try to determine a degree of similarity for them. This is called the qualitative approach. It may involve a situation in which no identities are found on any attribute, and yet the objects are judged similar to a certain degree.
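
The quantitative reading above can be illustrated with a small sketch (hypothetical Python code, not part of the framework): two objects described over the same attributes are compared attribute by attribute, and the fraction of matching values serves as a crude degree of similarity, with 1.0 corresponding to Leibnizian identity.

```python
def quantitative_similarity(a: dict, b: dict) -> float:
    """Fraction of shared attributes whose values are equal; 1.0 means identity."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    matches = sum(1 for attr in common if a[attr] == b[attr])
    return matches / len(common)

x = {"colour": "red", "doors": 5, "fuel": "petrol"}
y = {"colour": "red", "doors": 3, "fuel": "petrol"}
print(quantitative_similarity(x, y))  # 2/3 -- similar, but not identical
```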

The scale of similarity is most often the interval [0, 1], where 0 means a total lack of similarity and 1 is interpreted as indiscernibility of the given attributes, and thus, according to Leibniz’s principle, as absolute identity. Similarity and the very operation of comparison are indispensable elements of the world around us, and in many cases they are necessary to determine the state of an object. In practice, it is the weight, size, capacity, duration or some other characteristic of an object that is determined. Each of these elements requires knowledge of a certain reference concept by means of which a given object’s parameter can be specified, e.g. a kilogram, a liter, a second, etc. Owing to the introduction of such reference values, a feature of an object can be expressed quantitatively; at the same time, these reference points are common to all objects.

One can distinguish several types of approaches to defining similarities, and we shall discuss a selected few shortly.

3.1 Selected Methods of Expressing Similarities

In the literature, the problem of similarity is quite widespread, but it is usually not the main subject of research, merely a means to achieve other goals. In most cases similarity is equated with a distance in a certain feature space. In this case, the metric is considered in the form:

$$\begin{aligned} d:X \times X \rightarrow [0,+\infty ), \end{aligned}$$
(3)

which satisfies the following properties \(\forall x,y,z \in X\):

  1. \(d(x,y)=0 \Leftrightarrow x=y\)

  2. \(d(x,y)=d(y,x)\)

  3. \(d(x,y) \le d(x,z) + d(z,y)\)

There are various metrics that suit the type of space and the problem to be solved. This solution allows one to convert the problem of determining similarity between objects into the problem of measuring distance in a coordinate system determined by features. This is a relatively common approach, but not always sufficient for solving complex problems. It should be noted that there are very strong constraints associated with the metric. In the case of a generally understood similarity, the condition of symmetry often cannot be met, not to mention the condition of transitivity. Therefore, there is a need for other approaches as well. The common element of many solutions is the use of feature vectors. We will stress throughout this paper that the essence of the problem lies in how these vectors are constructed and how they can adapt to new situations.
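
As a minimal sketch of the distance-based view (hypothetical Python code), feature vectors are compared with a metric and the distance is mapped back onto the [0, 1] similarity scale; the mapping sim = 1/(1 + d) used below is one common convention, not the only one.

```python
import math

def euclidean(x: list[float], y: list[float]) -> float:
    """A metric d(x, y) on the feature space, satisfying the three axioms above."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def metric_similarity(x: list[float], y: list[float]) -> float:
    """Convert distance to a similarity in [0, 1]; identical vectors give 1.0."""
    return 1.0 / (1.0 + euclidean(x, y))

print(metric_similarity([1.0, 2.0], [1.0, 2.0]))  # 1.0
print(metric_similarity([1.0, 2.0], [4.0, 6.0]))  # ~0.167
```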

The next evolutionary step in methods of implementing similarity involves approaches based on ontological relationships between objects and concepts [17]. In this context, individual ontological concepts are treated as features that contribute to comparing objects. The set of these features constitutes an input to the process of determining the minimum set of essential features. This process comes down to designating a kind of feature reduction similar to the information reducts encountered in data mining [11], i.e. a minimal set of attributes that uniquely identifies or classifies a given object. There may be many reducts consisting of different features, and selecting the best reduct is based on domain knowledge about the problem, information about the implementation and many other factors. The ontology and the reduct ensure the proper design of a feature vector; however, they do not directly provide a method of calculating similarity. Therefore, after the selection of features, we either use the other methods described earlier or come up with dedicated methods based on the comparison of ontologies. These methods are very complicated and depend on the construction of a particular ontology.
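
A minimal sketch of the reduct idea (hypothetical Python code, brute force over a toy decision table): a reduct is taken here as a smallest attribute subset that still discerns every pair of objects with different labels, in the spirit of the information reducts from [11]; selecting among several such reducts would then rely on domain knowledge, as noted above.

```python
from itertools import combinations

def discerns(rows, labels, attrs) -> bool:
    """True if the attribute subset distinguishes every pair of differently labelled rows."""
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if labels[i] != labels[j] and all(rows[i][a] == rows[j][a] for a in attrs):
                return False
    return True

def find_reduct(rows, labels):
    """Smallest attribute subset preserving discernibility (exponential; toy data only)."""
    all_attrs = list(rows[0].keys())
    for k in range(1, len(all_attrs) + 1):
        for attrs in combinations(all_attrs, k):
            if discerns(rows, labels, attrs):
                return set(attrs)
    return set(all_attrs)

rows = [{"colour": "red", "size": "S"}, {"colour": "red", "size": "L"},
        {"colour": "blue", "size": "S"}]
print(find_reduct(rows, labels=["toy", "car", "toy"]))  # {'size'}
```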

Another approach, which replaced distance-based thinking, is the contrast model created by Amos Tversky on the basis of studies of how people perceive similarity [16]. In this model not only the common features but also the distinguishing features of objects play an important role. Consequently, the model also examines the aspects that reduce similarity between objects and determines their impact on the value of its degree. The general formula of the similarity function in the proposed contrast model is:

$$\begin{aligned} sim(x,y)=\theta f(X\cap Y) - \alpha f(X-Y) - \beta f(Y-X): \; \theta , \alpha , \beta \ge 0, \end{aligned}$$
(4)

where X and Y are the sets of features describing objects x and y respectively, \(X\cap Y\) denotes the features common to x and y, \(X-Y\) the features present in x and absent in y, and \(Y-X\) the features absent in x and present in y. Function f is a scale factor, while \(\theta ,\alpha \) and \(\beta \) are parameters of the model. It is easy to see that for \(\alpha =0\) and \(\beta =0\) the model is limited to the common features of objects. On the other hand, for parameters \(\theta =0\), \(\alpha =1\) and \(\beta =1\) we get:

$$\begin{aligned} -sim(x,y) = f(X-Y)+f(Y-X), \end{aligned}$$
(5)

which is a dissimilarity [7].
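
The contrast model (4) translates directly into a short sketch (hypothetical Python code); here the scale factor f is taken to be the simple count of features, which is one common choice.

```python
def tversky(x_feats: set, y_feats: set, theta=1.0, alpha=0.5, beta=0.5, f=len) -> float:
    """Contrast model, formula (4): theta*f(X∩Y) - alpha*f(X-Y) - beta*f(Y-X)."""
    return (theta * f(x_feats & y_feats)
            - alpha * f(x_feats - y_feats)
            - beta * f(y_feats - x_feats))

X = {"wings", "beak", "flies"}
Y = {"wings", "beak", "swims"}
print(tversky(X, Y))                              # 1*2 - 0.5*1 - 0.5*1 = 1.0
print(-tversky(X, Y, theta=0, alpha=1, beta=1))   # dissimilarity of formula (5): 2
```

Note that with \(\alpha \ne \beta \) the measure becomes asymmetric, which is precisely the property that the metric approach cannot express.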

From the point of view of modeling similarity, it is important to be able to deal with imprecision of the description and its effect on the result. Another method of representing object similarities involves fuzzy sets [6], as the fuzzy relation is an ideal tool for such purposes. It is defined on the Cartesian product of two crisp sets [4], which in this case contain the elements for which similarity is determined. There are many similarity measures based on fuzzy sets in the literature. The usual approach is based on the analysis of common features of objects, i.e. those in the intersection \(A \cap B\), or on the complement, in the form:

$$\begin{aligned} sim(x,y) = 1-\mu (x,y), \end{aligned}$$
(6)

where \(\mu (x,y)\) is the membership function of a relation designating the degree of difference between two objects. The same approach can be used to build similarity functions, which are then used to calculate degrees for individual features or to distil a full feature vector. An important aspect of this method is that the results are obtained in terms of fuzzy sets.
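
A minimal sketch of the fuzzy formulation (6) (hypothetical Python code): a membership function mu(x, y) grades the difference of two objects on a [0, 1]-normalised attribute, and the similarity is its complement.

```python
def mu_difference(x: float, y: float) -> float:
    """Membership degree of the 'x differs from y' relation on a normalised attribute."""
    return abs(x - y)

def fuzzy_similarity(x: float, y: float) -> float:
    """Formula (6): sim(x, y) = 1 - mu(x, y)."""
    return 1.0 - mu_difference(x, y)

print(fuzzy_similarity(0.8, 0.75))  # ~0.95, nearly indiscernible on this attribute
```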

Slightly different methods can be used when comparing objects’ structures or their topological relationships. In cases like these, apart from attributes and their values, constraints related to the location of an object in space or to its internal structure are imposed. This kind of similarity can also be expressed by means of the methods described above, but only on a case-by-case basis. This is why certain standardized methods that deal with such problems have been sought, cf. rough mereology or near sets [8, 9].

The main idea behind rough mereology is to examine the extent to which an object is a part of another object using a properly selected rough inclusion function. A typical example of an inclusion function, and at the same time an instance of an asymmetric measure of similarity based on the multiplicity of common components, is the following formula [9]:

$$\begin{aligned} sim(X,Y)=\mu (X,Y)=\ \frac{card(X \cap Y)}{card(X)},\; card(X)\ne 0, \end{aligned}$$
(7)

where X is the set of sub-objects included in the object x, and Y is the set of constituents of the object y. The rough inclusion function provides a method for comparing parts of objects, their quantities, types or other relationships in the ontological hierarchy. Therefore, it can be interpreted as a measure of similarity that takes into account structural dependencies of objects.
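
The rough inclusion measure (7) amounts to a one-liner; the sketch below (hypothetical Python code) also illustrates its asymmetry.

```python
def rough_inclusion(x_parts: set, y_parts: set) -> float:
    """Formula (7): card(X ∩ Y) / card(X), the degree to which x is a part of y."""
    if not x_parts:
        raise ValueError("card(X) must be non-zero")
    return len(x_parts & y_parts) / len(x_parts)

X = {"wheel", "engine", "door"}
Y = {"wheel", "engine", "door", "roof", "seat"}
print(rough_inclusion(X, Y))  # 1.0 -- every sub-object of x occurs in y
print(rough_inclusion(Y, X))  # 0.6 -- but not conversely: the measure is asymmetric
```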

In this paper, structural similarity is calculated on the basis of a sum of similarities between sub-components of an object with a fixed structure. The sub-components are extracted by means of decomposition. We treat their similarity values as additive and multiply them by respective weighting factors. Consequently, the resulting similarity function is based on knowledge of the composition of a given object and of the significance of each component. To define the relationship between an object and its parts, we use functions which state how to construct it from its underlying constituents. Then, these functions and the modelled dependencies are applied to the similarities, which in consequence allows the outcome to be interpreted as a similarity value referring to the main object. An example of a similarity function of this kind is as follows:

$$\begin{aligned} sim(x,y) = \frac{w_1sim(x_1,y_1)+w_2sim(x_2,y_2)+...+w_{n}sim(x_n,y_n)}{(w_1+w_2+...+w_n)} \end{aligned}$$
(8)

where \(x_i\) are sub-objects of x, and \(y_i\) are sub-objects of y for \(i=1,...,n\).
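
Formula (8) aggregates the sub-component similarities into a single weighted average; a minimal sketch (hypothetical Python code, with a toy per-component similarity) could look as follows.

```python
def structural_similarity(x_parts, y_parts, weights, sim) -> float:
    """Formula (8): weighted average of similarities of corresponding sub-objects."""
    assert len(x_parts) == len(y_parts) == len(weights)
    total = sum(w * sim(xi, yi) for w, xi, yi in zip(weights, x_parts, y_parts))
    return total / sum(weights)

# Toy per-component similarity on numeric attributes normalised to [0, 1].
component_sim = lambda a, b: 1.0 - abs(a - b)

print(structural_similarity([0.9, 0.4, 0.7], [0.8, 0.4, 0.1],
                            weights=[2.0, 1.0, 1.0], sim=component_sim))
# (2*0.9 + 1*1.0 + 1*0.4) / 4 ≈ 0.8
```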

To summarize, there is a handful of methods of defining and processing similarities. Many of them are tied to specific use cases that are subject to special considerations. It is worth pointing out that the methods listed here were chosen from among many other equally useful techniques (e.g. similarity and processing of graphs [12]). At the same time, the universal approach proposed in this paper combines the majority of the methods described in this section and makes the comparison of similarity results easier. In addition, it considers different possible cases and establishes proper methodologies and facilities for them.

4 Recognition Framework

There are many ways to implement object recognition solutions. The method considered in this paper is based on multi-similarity calculations: gathering many aspects of similarity between pairs of objects and synthesizing them to obtain a global similarity snapshot. Objects belonging to a multi-dimensional space are described by similarity values between input and reference objects measured on a given set of features. The result of a recognition is thus a similarity vector which represents the closeness between the input object and reference points in the domain space. Further in the text, units responsible for single-feature calculations will be called comparators, and networks that process input objects through layers of multiple comparators will be called comparator networks.

The compound object comparator (COC) is a construct denoted \(com^{ref}\) and can be expressed in the following form:

$$\begin{aligned} \mu _{com}^{ref}:X\times 2^{ref}\rightarrow [0,1]^{ref}, \end{aligned}$$
(9)

where \(X \subseteq \ U\) is the set of input objects to be compared and ref is the set of reference objects that we infer the similarity from. \([0,1]^{ref}\) denotes the space of vectors \({\varvec{v}}\) of dimension |ref|, where each i-th coordinate \(v[i]\in [0,1]\) corresponds to an element \(y_{i}\in ref\), \(ref=\{y_1,...,y_{|ref|}\}\). We will further call ref a reference set, while each \(Y \subseteq ref\) will be referred to as a reference subset. Additionally, a(x) will be the function that provides a representation of object \(x \in X\) with respect to an attribute a corresponding to some feature. This representation is then used by the comparator while processing x. Similarly, each reference object \(y \in Y\) is processed using its representation a(y) for a given attribute a. If we are given an ordering on the elements of the reference set ref, i.e., \(ref = \{y_1, \ldots ,y_{|ref|}\}\), we can represent the function corresponding to the COC as:

$$\begin{aligned} \mu _{com}^{ref}(x,Y)=Sh(F({\varvec{v}})). \end{aligned}$$
(10)

We shall now elaborate on the subsequent components of this expression. Let us start with \({\varvec{v}}\), which is the proximity vector defined as:

$$\begin{aligned} {\varvec{v}}[i]={\left\{ \begin{array}{ll} 0 &{} y_{i}\notin Y \\ sim(x,y_{i})&{} y_{i}\in Y\end{array}\right. } \end{aligned}$$
(11)

Note that when Y is a proper subset of ref, the positions in \({\varvec{v}}\) corresponding to \(y_i\notin Y\) are filled with zeros. Non-zero elements of \({\varvec{v}}\) determine the degree of similarity between the object x in question and each element of the reference subset Y. In general, the value of the similarity \(sim(x,y)\) is calculated by means of a fuzzy relation [4], but in practice it is a combination of three mechanisms expressed in the following formula:

$$\begin{aligned} sim(x,y_{i})= {\left\{ \begin{array}{ll}0: &{} Exc^{ref}_{Rules_{i}}(x)=1 \vee y_i\notin Y\ \ \\ t_{h}(\mu (x,y_{i})): &{} otherwise \end{array}\right. } \end{aligned}$$
(12)

The components of this formula are the following: Y is the reference subset; \(t_{h}\) is a threshold function given as

$$\begin{aligned} t_{h}(z)={\left\{ \begin{array}{ll} 0: &{} z<p \\ z: &{} z\ge p \end{array}\right. },\,p\in [0,1], \end{aligned}$$
(13)

with p corresponding to the lowest similarity accepted by a single comparator and set independently for each of them; \(\mu \) is the basic similarity function defined by means of a traditional fuzzy relation between the two objects x and \(y_{i}\); i is the index of the coordinate of the proximity vector for which the similarity is derived; and \(Exc^{ref}_{Rules_{i}}\), i.e.

$$\begin{aligned} Exc^{ref}_{Rules_{i}}(x)=max_{j=1}^{|Rules_i|}\{r_j(x)\},\,x\in X \end{aligned}$$
(14)

is a function associated with exception rules in the form of:

$$\begin{aligned} r_{j}:X\rightarrow \{0,1\}, \end{aligned}$$
(15)

where j is an index of a rule (its id number) in the set \(Rules_{i}\).

The second element of the COC, F, is a function responsible for filtering the result before the Sh function is applied. Typically, F is based on a combination of standard, idempotent functions such as min, max, top, or simply the identity. It introduces competitiveness between reference objects, which distinguishes this mechanism from the threshold function defined inside \(sim(x,y)\).

Finally, Sh, called a sharpening function is a mapping that satisfies three basic conditions:

$$\begin{aligned} \forall i \in \{1,...,|ref|\}: (v[i]=0) \Rightarrow (Sh({\varvec{v}})[i]=0), \end{aligned}$$
(16)

which ensures that zero values are kept, preventing artificially high results;

$$\begin{aligned} \forall i \in \{1,...,|ref|\}: (v[i]=max_{j=1}^{|ref|}(v[j])) \Rightarrow (Sh({\varvec{v}})[i]=v[i]), \end{aligned}$$
(17)

which ensures that the maximum value is kept, so that the best result retains its original value;

$$\begin{aligned} \forall i,j \in \{1,...,|ref|\}: (v[i]<v[j]) \Rightarrow (Sh({\varvec{v}})[i]<Sh({\varvec{v}})[j]), \end{aligned}$$
(18)

which ensures strict monotonicity, whose purpose is to increase the difference between the average and the best results.
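
One simple family of mappings satisfying conditions (16)-(18) is a power-law rescaling relative to the maximum coordinate; the sketch below (hypothetical Python code, one possible choice rather than the sharpening function of the framework) keeps zeros at zero, leaves the maximum unchanged, and is strictly increasing, so the gap between average and best results grows.

```python
def sharpen(v: list[float], k: float = 2.0) -> list[float]:
    """Sh(v)[i] = m * (v[i] / m) ** k with m = max(v); meets (16), (17), (18) for k >= 1."""
    m = max(v)
    if m == 0.0:
        return list(v)   # all-zero vector: nothing to sharpen
    return [m * (vi / m) ** k for vi in v]

v = [0.0, 0.5, 0.9]
print(sharpen(v))  # [0.0, ~0.28, 0.9] -- zero kept, maximum kept, order preserved
```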

Taking a wider look at the COC, we may notice that, when all the notions introduced above are composed, it can be expressed as:

$$\begin{aligned} \mu _{com}^{ref}(x,Y) = Sh(F(\langle sim(x,y_1),\ldots ,sim(x,y_{|ref|})\rangle )) \end{aligned}$$
(19)
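
Putting formulas (9)-(19) together, a single comparator can be sketched as follows (hypothetical Python code; the basic fuzzy similarity mu, the exception rules, the filter F and the sharpening Sh are placeholders supplied only for illustration).

```python
from typing import Callable, Sequence

def make_comparator(ref: Sequence,                  # ordered reference set {y_1, ..., y_|ref|}
                    mu: Callable,                   # basic fuzzy similarity mu(x, y) in [0, 1]
                    p: float = 0.3,                 # threshold of t_h, formula (13)
                    rules: Sequence[Callable] = (), # exception rules r_j: X -> {0, 1}
                    F: Callable = lambda v: v,      # filtering function (identity by default)
                    Sh: Callable = lambda v: v):    # sharpening function (identity by default)
    def sim(x, y_i, Y):
        if y_i not in Y or any(r(x) for r in rules):   # exception branch of formula (12)
            return 0.0
        z = mu(x, y_i)
        return z if z >= p else 0.0                    # threshold t_h, formula (13)

    def comparator(x, Y):
        v = [sim(x, y_i, Y) for y_i in ref]            # proximity vector, formula (11)
        return Sh(F(v))                                # formula (19)

    return comparator

# Toy usage: objects are numbers in [0, 1], mu is the complement of their difference.
ref = [0.1, 0.5, 0.9]
com = make_comparator(ref, mu=lambda x, y: 1.0 - abs(x - y), p=0.5)
print(com(0.45, Y={0.5, 0.9}))  # ~[0.0, 0.95, 0.55]; 0.1 is excluded since it is not in Y
```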

Networks of Comparators (NoCs) can play different roles depending on their settings. They can serve as multi-stage classifiers whose purpose is to limit the reference set of objects and to identify the most probable candidate for the final result. The processing scenario in such networks is to compute relatively simple features in the first layers and to narrow the reference objects down to only those that are the most promising from the final perspective. Particular comparators can also be specialized in recognizing different features based on the nature of the sub-objects. The idea is that the similarity of parts of objects can help in resolving the similarity of the whole objects. From the mathematical perspective, a comparator network can be interpreted as the calculation of a function:

$$\begin{aligned} \mu ^{ref_{out}}_{net}:X\rightarrow [0,1]^{|ref_{out}|}, \end{aligned}$$
(20)

which takes the input object \(x\in X\) as an argument, where \(ref_{out}\) is the reference set of the network’s output layer. The codomain of \(\mu ^{ref_{out}}_{net}\) is the space of proximity vectors. In this way we obtain the value of the network’s function:

$$\begin{aligned} \mu ^{ref}_{net}(x) = \langle SIM(x,y_1),...,SIM(x,y_{|ref|})\rangle , \end{aligned}$$
(21)

where \(SIM(x,y_{i})\) is the value of the global similarity established by the network for the input object x and the reference object \(y_i\). Global similarity depends on the partial (local) similarities calculated by the elements of the network (unit comparators). Through the application of aggregation (in the sense of consensus reaching [5]) and translation procedures in subsequent layers of the network, these local similarities ultimately lead to the global one. Particular constituents of the network have been described in detail in previous publications [14]. Figure 1 shows an example of a NoC with all possible elements, the interactions between them, and the signal granule arising around the input object x.

Fig. 1. General scheme of a comparator network in UML-like representation. Notation: \(com_{ji}\) – comparators, \(T_{j}\) – translators. Symbols: oval – comparator, thick vertical line – aggregator, rhombus – translator, encircled cross – projection module.
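
As an illustration only (hypothetical Python code, reusing ref and make_comparator from the COC sketch above), a two-layer network can be assembled by letting a first layer of comparators narrow the reference set and a second layer score the surviving candidates; the coordinate-wise averaging used here is a placeholder for the consensus-based aggregation of [5], not the method itself.

```python
def simple_noc(x, ref, layer1, layer2, keep: int = 2):
    """Two-layer NoC sketch: layer 1 filters candidates, layer 2 scores the survivors."""
    vectors = [com(x, set(ref)) for com in layer1]           # local proximity vectors
    avg = [sum(vs) / len(vs) for vs in zip(*vectors)]        # placeholder aggregation
    survivors = {y for _, y in sorted(zip(avg, ref), reverse=True)[:keep]}
    return layer2(x, survivors)                              # global proximity vector (21)

layer1 = [make_comparator(ref, mu=lambda x, y: 1.0 - abs(x - y), p=0.2),
          make_comparator(ref, mu=lambda x, y: 1.0 - (x - y) ** 2, p=0.2)]
layer2 = make_comparator(ref, mu=lambda x, y: 1.0 - abs(x - y), p=0.5)
print(simple_noc(0.45, ref, layer1, layer2))  # ~[0.65, 0.95, 0.0]; the weakest candidate dropped
```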

The models of the COC and the NoC are functions, and both of them return results which are vectors. Fortunately, there is a simple method for converting these proximity vectors into type I fuzzy sets [4], which allows the machinery of fuzzy sets to be used in their further processing and interpretation. Note that individual vector coordinates define the similarity of a particular pair of objects (x, y), where \(x \in X\) and \(y \in Y\subseteq ref\). Since these values reflect degrees of membership in a fuzzy set, we actually deal with a fuzzy relation, which is itself a fuzzy set. Consequently, the result described in functional terms can be converted to a fuzzy set notation in the following way:

$$\begin{aligned} R(x,y) = \{((x,y_i),{\varvec{v}}[i]):i =1,...,|ref|\}, \end{aligned}$$
(22)

where \({\varvec{v}}[i]\) is the i-th coordinate of the proximity vector, which simultaneously fulfills the condition of a fuzzy relation in the form:

$$\begin{aligned} \mu : X\times ref \rightarrow [0,1] \end{aligned}$$
(23)

This method is also consistent with the definition of the similarity function of the COC and of the global similarity in the NoC. The form of formula (23) is equivalent to Zadeh’s notation:

$$\begin{aligned} R = \frac{\varvec{{\varvec{v}}}[1]}{(x,y_1)}+\frac{\varvec{{\varvec{v}}}[2]}{(x,y_2)}+...+\frac{\varvec{{\varvec{v}}}[|ref|]}{(x,y_{|ref|})} \end{aligned}$$
(24)
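
The conversion of a proximity vector into the fuzzy relation of formulas (22)-(24) is simply a matter of pairing coordinates with reference objects; a minimal sketch (hypothetical Python code):

```python
def to_fuzzy_relation(x, ref, v):
    """Formula (22): R = {((x, y_i), v[i])}, a type I fuzzy set over X × ref."""
    return {(x, y_i): v_i for y_i, v_i in zip(ref, v)}

print(to_fuzzy_relation("x", ["y1", "y2", "y3"], [0.0, 0.95, 0.55]))
# {('x', 'y1'): 0.0, ('x', 'y2'): 0.95, ('x', 'y3'): 0.55}
```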

5 Summary

We analyzed how similarity has been perceived and understood over the centuries, and made a brief review of the philosophical currents in search of this notion and its use in formulating concepts. We gathered several approaches to representing similarity and showed methods of processing it. Finally, we described the NoC approach as one which introduces new standards into the field of computing similarity and represents one of the main frameworks for building similarity-based recognition systems. This framework provides the ability to build large and complex logical structures that use fuzzy sets as a communication language and express similarity between objects. It is worth noting that even though the method is based on established patterns, it is possible to perform a dynamic search of the object space and adequately approximate the optimal solution. By selecting an appropriate defuzzification method, it is also possible to obtain results from outside the reference set. A few practical applications have been described in previous publications [13].

Further research should focus on developing the defuzzification aspect of the NoC framework. In particular, it is worth considering an extension of the catalog of network components with a new element responsible for the defuzzification of the NoC. This could significantly broaden the range of targeted uses of the method. The second field of future research should encompass creating a framework for tuning aggregators and selecting the best one to use in a particular case. Either way, there is still much room to optimize NoCs further.