
1 Introduction

The implementation of plans to create “smart cities”, one of the most important areas of the digital economy, requires the priority development of transport infrastructure that ensures the movement of people and goods within the city and adjacent territories [1]. Safe operation and maximum throughput of the resulting cyber-physical system are possible provided that a diagnostic technology is created for transport infrastructure facilities, including video-based monitoring of road conditions [2]. The totality of such devices forms a heterogeneous distributed artificial intelligence system [3], which should be organized by a hierarchical structure that distributes tasks between devices and governs their interaction with the environment and the exchange of information in accordance with the overall goals and settings (Fig. 1). This implies the existence of protocols [4] for their joint work at the lower level of interaction, protocols for sharing data in data processing centers (DPC) at the mid-level, and, at the upper level, transfer to a single data center where management decisions are developed, thereby forming a global cyber-physical urban traffic management system within the concept of “smart cities”. Such a system deals with the enormous amount of structured, semi-structured, and unstructured data [5] generated at an exponential rate by high-performance applications in many domains: video data [6] and factor analysis, to mention a few. The difficulties grow tremendously for mobile cyber-physical applications [7] and the Internet services supporting them [8] with unmanned autonomous systems [9]; as a result, it is necessary to use the multi-core capabilities of modern processors to parallelize computations [10].

Fig. 1 Block diagram of a cyber-physical urban traffic control system

The apparatus of wavelets, introduced by Grossman and Morlet in the mid-1980s in connection with the analysis of the properties of seismic and acoustic signals [11], became the tool of choice for compressing non-stationary signals, providing a significant advantage over the Fourier transform, since the basic Fourier functions (sines and cosines), in contrast to wavelets, do not decay at infinity.

The basis for constructing wavelets is the presence of a set of nested approximating spaces \( \ldots V_{j-1}, V_j, V_{j+1} \ldots \) such that each basis function in \(V_{j-1}\) can be expressed as a linear combination of basis functions in \(V_j\). In particular, splines—smooth functions glued from pieces of polynomials of degree m on a nested sequence of grids—have this property. The defect of a spline is the difference between the degree m and the smoothness of gluing of adjacent pieces, and it equals the number of basis functions per node. For example, step functions, broken lines, and ordinary cubic splines of smoothness \(C^2\) have defect 1; Hermitian splines of odd degree m have defect (m+1)/2. The use of the wavelet basis at the processing stage reveals the spectral properties of the approximating spline.

From the classical literature on wavelet theory, it is known that spline wavelets of defect 1 can be built on the minimal support [0, 2m+1]—a rather large length. Boundary wavelets have even larger supports. The reduction of supports is achieved by constructing Hermitian spline multi-wavelets [12,13,14]. The advantage of multi-wavelets over scalar wavelets is that [15], under certain conditions, the wavelet expansion relations split into separate relations for the wavelet coefficients of zero order (for the function), first order (for the 1st derivative), second order (for the 2nd derivative, for multiwavelets of the fifth degree), third order (for the 3rd derivative, for multiwavelets of the seventh degree), and so on. In particular, for the wavelet transform of Hermite splines of the fifth degree, a splitting into three simultaneously solvable systems is achieved, of which one is three-diagonal with strict diagonal dominance and the other two are four-diagonal with dominant central diagonals [16]. A similar construction for multiwavelets of the 7th degree leads to the parallel solution of four five-diagonal systems with strict diagonal dominance [17]. That is, the degree of parallelism increases with the spline defect.

This work aims to build new types of multi-tiered multiwavelets based on splines of minimal defect and to justify the optimization and parallelization of computational wavelet transform algorithms for solving problems of numerical information processing. The relevance of the work stems from the fact that in most practical situations only point values of functions are given. In this case, when using wavelet transforms based on Hermitian spline multi-wavelets, one has to calculate the approximate values of the derivatives required for Hermitian interpolation of the table-defined functions, using, for example, regularized numerical differentiation schemes. The transition to multi-wavelets based on basic splines of minimal defect over several successive levels of thinning of the grid of measured function values can lead, if the idea of splitting is successfully realized, to a significant acceleration of calculations due to the parallel processing of the measured values by several filters instead of one. At the same time, the depth of each filter (the number of successive decimation levels involved in constructing the filters) is not of fundamental importance, which leads, in the limit, to the possibility of a one-step procedure for calculating all wavelet decomposition coefficients at all decimation levels simultaneously, instead of reusing the same filters at each successive level of thinning.

2 Zero Degree Spline Multiwavelets

2.1 Haar Wavelets and Parallel Computing

This applies, in particular, to the Haar wavelets proposed by the Hungarian mathematician Alfred Haar back in 1909. Haar wavelets are orthogonal, have compact support, and are well localized in space, but are not smooth, because the approximating functions for them are zero-order splines that are discontinuous at the nodes. The Haar transform is used to compress images, mainly color and black-and-white images with smooth transitions. This type of compression has been known for a long time and proceeds directly from the idea of exploiting the coherence of regions. The compression ratio is specified by the user and varies between 5 and 100. When a larger ratio is set, a “staircase effect” appears on sharp boundaries, especially those that run diagonally—steps of different brightness several pixels in size.

In the Haar basis at a given level j, there exist \(2^j\) scaling functions and \(2^j\) wavelets. The refinement (synthesis) matrices that describe how to obtain the coefficients of two scaling functions from \(V_j\) and two wavelets from \(W_j\) using four coefficients of scaling functions from \(V_{j+1}\) are of the form:

$$ A^{2} = \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{*{20}c} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ \end{array} } \right],\;\;B^{2} = \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{*{20}c} 1 & { - 1} & 0 & 0 \\ 0 & 0 & 1 & { - 1} \\ \end{array} } \right]. $$

Let there be a one-dimensional discrete input signal S. Each pair of neighboring elements is associated with two numbers:

$$ a_{i} = \frac{{S_{{2i}} + S_{{2i + 1}} }}{{\sqrt 2 }},\;\;b_{i} = \frac{{S_{{2i}} - S_{{2i + 1}} }}{{\sqrt 2 }}.$$

Repeating this operation for all elements of the original signal, we obtain two output signals, one of which is a coarsened version of the input signal, \(a_i\), while the second, \(b_i\), contains the detailed information necessary to restore the original signal. The Haar transform can then be applied to the received signal \(a_i\) in the same way, and so on.

Consider an example of the Haar transformation of a one-dimensional signal of length 16. Let the incoming signal be represented as a string of 16 pixel brightness values (S): (220, 211, 212, 218, 217, 214, 210, 202, 194, 185, 186, 192, 191, 188, 184, 176). After applying the Haar transformation, the following two sequences are obtained, \(a_i\): (304.763, 304.056, 304.763, 291.328, 267.993, 267.286, 267.993, 254.558) and \(b_i\): (6.364, −4.243, 2.121, 5.657, 6.364, −4.243, 2.121, 5.657). It is worth noting that the values \(b_i\) are quite close to 0. Repeating the operation on the sequence \(a_i\), we obtain \(a_i'\): (430.5, 421.5, 378.5, 369.5), \(b_i'\): (0.5, 9.5, 0.5, 9.5) and further: (602.455, 528.916), (6.364, 6.364), (800), (52).
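
As a minimal sketch (Python with NumPy, added here purely for illustration), the averaging and differencing step described above can be written as follows; repeating it on the coarsened signal reproduces the sequences listed in this example.

```python
import numpy as np

def haar_step(s):
    """One level of the Haar transform: averages a_i and details b_i."""
    s = np.asarray(s, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)
    b = (s[0::2] - s[1::2]) / np.sqrt(2)
    return a, b

S = [220, 211, 212, 218, 217, 214, 210, 202,
     194, 185, 186, 192, 191, 188, 184, 176]

a, b = haar_step(S)      # a ≈ (304.763, 304.056, ...), b ≈ (6.364, -4.243, ...)
while len(a) > 1:        # repeat on the coarsened signal
    a, b = haar_step(a)
# final level: a = (800.0,), b = (52.0,)
```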

The Haar transform illustrates the general structure of a discrete wavelet transform of a signal. At each step of the transform, the signal splits into two components: an approximation at a coarser resolution and detailed information. The inverse transformation is performed according to the formulas:

$$ S_{2i} = \frac{{a_{i} + b_{i} }}{\sqrt 2 },\;\;S_{2i + 1} = \frac{{a_{i} - b_{i} }}{\sqrt 2 }.\;\;\; $$

In this case, it is permissible to annul the wavelet coefficients that are small in absolute value, which introduces the smallest error in the least-squares sense. For example, zeroing all coefficients whose absolute value is below the threshold of 6 leaves 8 non-zero coefficients, and the graphical representation of the result is shown in Fig. 2a.
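
The following sketch (again Python/NumPy, an illustration rather than the authors' implementation) performs the full decomposition, zeroes the detail coefficients below the threshold of 6, and reconstructs the signal with the inverse formulas above.

```python
import numpy as np

def haar_decompose(s):
    """Full Haar decomposition: final average plus detail coefficients of every level."""
    s = np.asarray(s, dtype=float)
    details = []
    while len(s) > 1:
        details.append((s[0::2] - s[1::2]) / np.sqrt(2))
        s = (s[0::2] + s[1::2]) / np.sqrt(2)
    return s, details

def haar_reconstruct(a, details):
    """Inverse transform: S_2i = (a_i + b_i)/sqrt(2), S_2i+1 = (a_i - b_i)/sqrt(2)."""
    for b in reversed(details):
        s = np.empty(2 * len(a))
        s[0::2] = (a + b) / np.sqrt(2)
        s[1::2] = (a - b) / np.sqrt(2)
        a = s
    return a

S = [220, 211, 212, 218, 217, 214, 210, 202,
     194, 185, 186, 192, 191, 188, 184, 176]
a, details = haar_decompose(S)
details = [np.where(np.abs(b) < 6, 0.0, b) for b in details]  # annul small details
S_restored = haar_reconstruct(a, details)   # 8 non-zero coefficients remain in total
```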

Fig. 2 The solid line is the original one-dimensional signal of length 16; the crosses are the restored coefficients of the zero-degree spline after the transform (the number of nonzero coefficients is in parentheses): a Haar (8); b Walsh-Hadamard (7)

It is easy to see that the two components of the Haar transformation are calculated independently, so the computational process splits into two threads. In the case of a multi-wavelet interpretation of the Haar basis, the scaling functions relate to two consecutive grid nodes, and each tetrad of successive signal values is transformed by the refinement (synthesis) matrix

$$ \frac{1}{\sqrt 2 }\left[ {\begin{array}{*{20}c} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & { - 1} & 0 & 0 \\ 0 & 0 & 1 & { - 1} \\ \end{array} } \right]. $$

The corresponding decomposition (analysis) matrices, which are inverse to the matrix presented above, are simply the result of transposition (a consequence of the orthogonality of the basic functions) and, therefore, are also sparse:

$$ \frac{1}{\sqrt 2 }\left[ {\begin{array}{*{20}c} 1 & 0 & 1 & 0 \\ 1 & 0 & { - 1} & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & { - 1} \\ \end{array} } \right]. $$

The next step along the path of parallelizing computations is to switch to the recursive representation of the Haar transform [12]. Combining two successive steps of the Haar transformation yields, for each tetrad of neighboring elements, one average value and three detailing coefficients according to the refinement matrix:

$$ \frac{1}{2}\left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ 1 & 1 & { - 1} & { - 1} \\ {\sqrt 2 } & { - \sqrt 2 } & 0 & 0 \\ 0 & 0 & {\sqrt 2 } & { - \sqrt 2 } \\ \end{array} } \right]. $$

All four components of the resulting transformation are calculated independently, so the computational process splits into four threads. In this case, the total number of steps in the computational process decreases exactly two times. The three newly obtained detailing functions, corresponding to the three lower rows of the given matrix, are tied to one node and can therefore also be called multi-wavelets. In the same place [12], it was proposed as an exercise to extend the recursion to an 8-tuple multiwavelet transform and even to a 16-tuple one. Unfortunately, the prospects of this approach to parallelizing computations were not indicated, and therefore it has to be reinvented each time [18].
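
A short sketch (Python/NumPy, illustrative only) of this four-thread scheme: every tetrad of the length-16 signal from Sect. 2.1 is multiplied by the 4 × 4 matrix above, and the four outputs of each tetrad are computed independently.

```python
import numpy as np

# 4x4 refinement matrix combining two successive Haar steps
M4 = 0.5 * np.array([[1,           1,           1,           1],
                     [1,           1,          -1,          -1],
                     [np.sqrt(2), -np.sqrt(2),  0,           0],
                     [0,           0,           np.sqrt(2), -np.sqrt(2)]])

S = np.array([220, 211, 212, 218, 217, 214, 210, 202,
              194, 185, 186, 192, 191, 188, 184, 176], dtype=float)

coeffs = S.reshape(-1, 4) @ M4.T
# columns of coeffs: two-step average, second-level detail, two first-level details;
# coeffs[:, 0] reproduces a_i' = (430.5, 421.5, 378.5, 369.5)
```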

2.2 Haar Wavelet Packets and Walsh-Hadamard Transform

There is another way to construct a multiwavelet decomposition of step functions, associated with the definition of so-called wavelet packets [12]. In this case, the Haar transform is applied not only to the sequence \(a_i\) but also to the wavelet coefficients \(b_i\) themselves, yielding the Walsh-Hadamard transform as a result:

$$ \frac{1}{2}\left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ 1 & 1 & { - 1} & { - 1} \\ 1 & { - 1} & 1 & { - 1} \\ 1 & { - 1} & { - 1} & 1 \\ \end{array} } \right]. $$

Other unitary transformations [19], obtained by arbitrary rearrangement of rows and columns, can be proposed, but the Walsh-Hadamard transform looks best because it is symmetric. In any case, the coefficients found differ only by a permutation. In particular, after applying the Walsh-Hadamard transform to the previous signal of length 16, the same sequences are again obtained, \(a_i'\): (430.5, 421.5, 378.5, 369.5), \(b_i'\): (0.5, 9.5, 0.5, 9.5), and two more sequences: (1.5, 5.5, 1.5, 5.5), (7.5, −2.5, 7.5, −2.5). Repeating the operation on the sequence \(a_i'\), we obtain the numbers: (800), (52), (9), (0). For the same threshold value of 6, we obtain 7 non-zero coefficients in the remainder, and the graphical representation of the result is shown in Fig. 2b.
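
For comparison, a small sketch (Python/NumPy, for illustration) applying the 4 × 4 Walsh-Hadamard matrix to the tetrads of the same signal reproduces the sequences above.

```python
import numpy as np

W4 = 0.5 * np.array([[1,  1,  1,  1],
                     [1,  1, -1, -1],
                     [1, -1,  1, -1],
                     [1, -1, -1,  1]])

S = np.array([220, 211, 212, 218, 217, 214, 210, 202,
              194, 185, 186, 192, 191, 188, 184, 176], dtype=float)

packet = S.reshape(-1, 4) @ W4.T   # one row of four packet coefficients per tetrad
a1 = packet[:, 0]                  # (430.5, 421.5, 378.5, 369.5)
top = W4 @ a1                      # repeating on a_i' gives (800, 52, 9, 0)
```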

According to the authors of [12], the obtained approximation is good, but not excellent. Besides, the total number of computational operations is greater. In our opinion, all this is compensated by the absence of the square-root operation and by the uniform loading of computing cores in parallel.

2.3 Use of Orthogonal Wavelets with Compact Support and Scaling Factor N

Another way to construct multi-wavelets for splines of minimal defect is to use wavelets with a scaling factor of N = 4. According to the theory [20, 21], the refinement matrix describing how to obtain the coefficients of the scaling function from \(V_j\) and three wavelets from \(W_j\) using four coefficients of scaling functions from \(V_{j+2}\) has the form:

$$ \frac{1}{6}\left[ {\begin{array}{*{20}c} 3 & 3 & 3 & 3 \\ {3\sqrt 2 } & { - 3\sqrt 2 } & 0 & 0 \\ {\sqrt 6 } & {\sqrt 6 } & { - 2\sqrt 6 } & 0 \\ {\sqrt 3 } & {\sqrt 3 } & {\sqrt 3 } & { - 3\sqrt 3 } \\ \end{array} } \right]. $$

After applying the presented transformation to the signal of length 16, the same sequence \(a_i'\): (430.5, 421.5, 378.5, 369.5) is obtained, together with three more sequences: (6.364, 2.121, 6.364, 2.121), (2.858, 4.491, 2.858, 4.491), and (−3.175, 10.104, −3.175, 10.104). At the same time, the calculations are rather laborious due to the large number of square roots, and the savings in the number of computational operations are not as great as in the multiwavelet form of the Haar transform. Repeating the operation on the sequence \(a_i'\), we obtain the numbers: (800), (6.364), (38.784), (35.218). For the same threshold value of 6, we obtain 8 non-zero coefficients in the remainder, and the graphical representation of the result is shown in Fig. 3a.
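
For completeness, the same check for the matrix above (a Python/NumPy sketch, added only to make the numbers reproducible) confirms its orthonormality and the listed coefficients.

```python
import numpy as np

r2, r3, r6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
M = (1 / 6) * np.array([[3,       3,       3,       3],
                        [3 * r2, -3 * r2,  0,       0],
                        [r6,      r6,     -2 * r6,  0],
                        [r3,      r3,      r3,     -3 * r3]])

S = np.array([220, 211, 212, 218, 217, 214, 210, 202,
              194, 185, 186, 192, 191, 188, 184, 176], dtype=float)

coeffs = S.reshape(-1, 4) @ M.T          # coeffs[:, 0] reproduces a_i'
top = M @ coeffs[:, 0]                   # (800, 6.364, 38.784, 35.218)
print(np.allclose(M @ M.T, np.eye(4)))   # True: the rows are orthonormal
```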

Fig. 3 The solid line is the original one-dimensional signal of length 16; the crosses are the restored coefficients of the zero-degree spline after the transform (the number of nonzero coefficients is in parentheses): a the transform from [20] (8); b slant (7)

Thus, the last approach considered looks worse by all indicators. Nevertheless, since the matrix of transition to the lowest resolution level has 16 coefficients in the multi-wavelet version instead of 4 in the classical Haar version, formulating an optimization problem over the parameters of the multi-wavelet transformation does not seem hopeless.

3 Use of Orthogonality to Polynomials of Higher Degree

One interesting option for optimizing a multi-wavelet transform is the attempt to find Haar-like multi-wavelets based on the property of orthogonality to polynomials of degree higher than the spline degree. In the literature, orthogonality to polynomials is usually called the property of zero (vanishing) moments. Previously, only scalar wavelets were used, and properties such as compact support, orthogonality, symmetry, zero moments, and other characteristics important in signal processing cannot all be present in a scalar wavelet at the same time. A multi-wavelet-based system can have all of them at once. This means that multi-wavelets can provide perfect reconstruction (due to orthogonality), good behavior at the signal boundaries (due to symmetry), and a high approximation order (due to a large number of zero moments), so that in processing signals and fields they can perform better than scalar wavelets.

It is said that a wavelet \(\psi (x)\) is orthogonal to polynomials of order n if the integral \(\int {\psi (x)x^{k} dx}\) is identically zero for \(k = 0, \ldots ,\,n - 1\), but not for k = n. Wavelets orthogonal to polynomials of a higher degree are often desirable in applications where numerical approximations of smooth operators are required.

The requirement of orthogonality to polynomials can sometimes be an excessive restriction. There is not a single wavelet basis (except for the Haar basis) that simultaneously has compact support and is both orthogonal and symmetric. The drawback of the Haar basis, however, is that it is orthogonal only to polynomials of degree zero. Therefore, if we want to obtain wavelets with compact support and high accuracy, we need to sacrifice the orthogonality of the basis functions. However, such a sacrifice is not always “painful”: for example, in some cases we can build a multi-scale analysis in which wavelets at a given resolution are orthogonal to each other and, at the same time, are orthogonal to polynomials of higher degree.

Recall that for the case of splines of degree zero (step functions), the scaling function \(\phi (x)\) with the unit value of the integral

$$ \int\limits_{ - \infty }^{\infty } {\phi (x){\kern 1pt} \,dx} = 1, $$

which determines the rough approximation of a signal, is constant:

$$ \phi (x) = \left\{ \begin{gathered} 1,\quad 0 \le x < 1, \hfill \\ 0,\quad x \notin [0,1). \hfill \\ \end{gathered} \right. $$

The mother wavelet function \(\psi (x)\) [22], whose first three moments are zero,

$$ \int\limits_{ - \infty }^{\infty } {\psi (x){\kern 1pt} \,x^{k} dx} = 0,\;k = 0,1,2, $$

which determines signal details, is defined as follows:

$$ \psi (x) = \left\{ \begin{gathered} 1,\quad \quad 0 \le x < 1/4, \hfill \\ - 3,\quad 1/4 \le x < 1/2, \hfill \\ 3,\;\;\;\;\;\;1/2 \le x < 3/4, \hfill \\ - 1,\;\;\;\;3/4 \le x < 1, \hfill \\ 0,\quad \quad x \notin [0,1). \hfill \\ \end{gathered} \right. $$
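
As a quick check (an illustrative Python sketch, not part of the original derivation), the moments of this piecewise-constant wavelet can be computed exactly with rational arithmetic:

```python
from fractions import Fraction as F

# values of psi on the four quarters of [0, 1)
pieces = [(F(0), F(1, 4),  1), (F(1, 4), F(1, 2), -3),
          (F(1, 2), F(3, 4), 3), (F(3, 4), F(1),  -1)]

def moment(k):
    """Exact k-th moment: integral of psi(x) * x^k over [0, 1)."""
    return sum(v * (b ** (k + 1) - a ** (k + 1)) / (k + 1) for a, b, v in pieces)

print([moment(k) for k in range(4)])
# the moments for k = 0, 1, 2 vanish, while the k = 3 moment equals -3/128
```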

Unfortunately, the inverse of the matrix of the defining system of equations for this wavelet is dense. Therefore, in contrast to the classical case (orthogonal Haar wavelets), the transition to explicitly specified synthesis filter coefficients is not advisable. The process of dividing the coefficients \(a_i\) into a coarser version \(a_i'\) and refinement coefficients \(b_i'\) is more conveniently carried out by solving a system of linear equations of dimension \(2^{j} \times 2^{j}\) [23]. The only known way to simplify the numerical solution further is to split the system into even and odd nodes [14,15,16], which reduces it to a \(2^{j-1} \times 2^{j-1}\) matrix. But this trick only works for splines of odd degree and wavelets with shifted supports [24,25,26].

3.1 Slant Matrices

In the case of a multi-wavelet interpretation of the basis obtained, the scaling functions refer to four consecutive grid nodes; two four-point wavelets orthogonal to polynomials of degree zero and one orthogonal to polynomials of the first degree are determined, and each tetrad of successive signal values is transformed by the refinement matrix (the so-called slant matrix [27])

$$ \frac{1}{2}\left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ {\frac{3}{{\sqrt 5 }}} & {\frac{1}{{\sqrt 5 }}} & { - \frac{1}{{\sqrt 5 }}} & { - \frac{3}{{\sqrt 5 }}} \\ 1 & { - 1} & { - 1} & 1 \\ {\frac{1}{{\sqrt 5 }}} & { - \frac{3}{{\sqrt 5 }}} & {\frac{3}{{\sqrt 5 }}} & { - \frac{1}{{\sqrt 5 }}} \\ \end{array} } \right]. $$

The corresponding decomposition (analysis) matrices, which are inverse to the matrix presented above, as in the case of Haar matrices, are simply the result of its transposition (a consequence of the orthogonality of the basic functions).

After applying the slant transformation to the signal of length 16, the same numbers \(a_i'\) are obtained, together with the new sequences \(b_i'\): (1.118, 10.957, 1.118, 10.957), (7.5, −2.5, 7.5, −2.5), and (1.118, 0.671, 1.118, 0.671). Repeating the operation on the sequence \(a_i'\), we obtain the numbers: (800), (50.535), (0), (−15.205). For the same threshold value of 6, we obtain 7 non-zero coefficients in the remainder, and the graphical representation of the result is shown in Fig. 3b.
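
The slant transform of the same signal can be reproduced with the following sketch (Python/NumPy, illustrative only); the transposition check confirms that the analysis matrix is simply the transpose of the synthesis matrix.

```python
import numpy as np

s5 = np.sqrt(5)
SL = 0.5 * np.array([[1,       1,       1,       1],
                     [3 / s5,  1 / s5, -1 / s5, -3 / s5],
                     [1,      -1,      -1,       1],
                     [1 / s5, -3 / s5,  3 / s5, -1 / s5]])

S = np.array([220, 211, 212, 218, 217, 214, 210, 202,
              194, 185, 186, 192, 191, 188, 184, 176], dtype=float)

coeffs = S.reshape(-1, 4) @ SL.T          # columns: a_i', b_i', and the two remaining sequences
top = SL @ coeffs[:, 0]                   # (800, 50.535, 0, -15.205)
print(np.allclose(SL.T @ SL, np.eye(4)))  # True: decomposition matrix = transpose
```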

4 Use of Double Transformation

Now we apply the Walsh-Hadamard transformation not only to the sequence \(a_i'\) but also to all the multiwavelet coefficients of the Walsh-Hadamard decomposition obtained above. Remarkably, most of the coefficients turn out to be zero: (800, 10, 7, 5) and three more sequences: (52, 0, 0, 0), (9, −9, −4, 10), and (0, 0, 0, 0). Thus, there is a so-called “lossless” compression effect. If we additionally zero the two coefficients (−4, 5) that are less than 6 in absolute value, there are again 7 nonzero coefficients, and the reconstructed image is hardly distinguishable from the original (Fig. 4a).
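
One convenient way to organize this double transform (the arrangement of coefficients into a 4 × 4 array is our assumption, used here only for illustration) is a two-sided product with the Walsh-Hadamard matrix:

```python
import numpy as np

W4 = 0.5 * np.array([[1,  1,  1,  1],
                     [1,  1, -1, -1],
                     [1, -1,  1, -1],
                     [1, -1, -1,  1]])

S = np.array([220, 211, 212, 218, 217, 214, 210, 202,
              194, 185, 186, 192, 191, 188, 184, 176], dtype=float)

X = S.reshape(4, 4).T    # column t holds the t-th tetrad of the signal
Z = W4 @ X @ W4.T        # transform the tetrads, then each coefficient sequence
# columns of Z: (800, 10, 7, 5), (52, 0, 0, 0), (9, -9, -4, 10), (0, 0, 0, 0)
```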

Fig. 4 The solid line is the original one-dimensional signal of length 16; the crosses are the restored coefficients of the zero-degree spline after the double transform (the number of nonzero coefficients is in parentheses): a Walsh-Hadamard (7); b slant (6)

For the slant transformation, similar calculations lead to the values: (800, 50.535, 0, −15.205) and three more sequences: (12.075, −4.4, 0, −8.8), (5, 4.472, 0, 8.944), and (1.789, 0.2, 0, 0.4). If we zero all the coefficients that are less than 6 in absolute value, then 6 non-zero coefficients remain; however, in the region of the first signal oscillation a significant defect of the reconstructed image is observed (Fig. 4b). Note that with the multi-wavelet transformations of Haar and [20] such a “trick” does not work: the compression quality deteriorates (Figs. 5a, b, and 6a).

Fig. 5 The solid line is the original one-dimensional signal of length 16; the crosses are the restored coefficients of the zero-degree spline after the double Haar transform (the number of non-zero coefficients is in parentheses): a (8); b (10)

Fig. 6 The solid line is the original one-dimensional signal; the crosses are the restored coefficients of the zero-degree spline after the double transform (the number of non-zero coefficients and the signal length are in parentheses): a the transform from [20] (9, 16); b Haar (12, 64)

5 Experiments with a Scaling Factor of 8

For a scaling factor of N = 8, the calculation of the Haar transform splits into eight threads according to the matrix

$$\frac{1}{2\sqrt 2 }\begin{bmatrix}1 & 1& \sqrt{2}& 0 &2 &0 &0& 0\\1 & 1& \sqrt{2}& 0 &-2 &0 &0& 0\\ 1 & 1& -\sqrt{2}& 0 &0 &2 &0& 0\\1 & 1& -\sqrt{2}& 0 &0 &-2 &0& 0\\ 1 & -1& 0 & \sqrt{2}&0 &0&2& 0 \\1 & -1& 0 & \sqrt{2}&0 &0 &-2& 0\\ 1 & -1& 0 & -\sqrt{2}&0 &0& 0&2 \\1 & -1& 0 & -\sqrt{2}&0 &0& 0 &-2\end{bmatrix}.$$

Therefore, the total number of steps in the computational process decreases exactly four times. For illustration, we continuously extend the signal of length 16 given above according to the formulas:

$$ \begin{gathered} S_{i + 16} = S_{i} - 220 + 2S_{15} - S_{14} ,\;i = 0,1, \ldots ,15; \hfill \\ S_{i + 32} = S_{i} - 220 + 2S_{31} - S_{30} ,\;i = 0,1, \ldots ,31, \hfill \\ \end{gathered} $$

and, to make the effect more convincing, add a linear trend: \(S_{i} + i,\;i = 0,1, \ldots ,63\).
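
The extended signal can be generated as follows (a Python/NumPy sketch; the assumption that the linear trend is added after the continuation is ours):

```python
import numpy as np

S = [220, 211, 212, 218, 217, 214, 210, 202,
     194, 185, 186, 192, 191, 188, 184, 176]

# continuous continuation to length 64 according to the formulas above
for i in range(16):
    S.append(S[i] - 220 + 2 * S[15] - S[14])
for i in range(32):
    S.append(S[i] - 220 + 2 * S[31] - S[30])

S = np.array(S, dtype=float) + np.arange(64)   # add the linear trend S_i + i
```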

Here, the double Haar transformation turned out to be slightly better than the single transformation, both in the number of coefficients whose absolute value exceeds 6 (12 instead of 16) and in the quality of the graphical representation obtained after zeroing the remaining coefficients (Fig. 6b).

As for the wavelet packets, in this case the procedure for constructing them has to be repeated twice to obtain the Walsh-Hadamard transformation (which is essentially the Kronecker product of three 2 × 2 Haar matrices [28] and can be more profitable from a computational point of view):

$$ \frac{\sqrt 2 }{4}\left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & { - 1} & 1 & { - 1} & 1 & { - 1} & 1 & { - 1} \\ 1 & 1 & { - 1} & { - 1} & 1 & 1 & { - 1} & { - 1} \\ 1 & { - 1} & { - 1} & 1 & 1 & { - 1} & { - 1} & 1 \\ 1 & 1 & 1 & 1 & { - 1} & { - 1} & { - 1} & { - 1} \\ 1 & { - 1} & 1 & { - 1} & { - 1} & 1 & { - 1} & 1 \\ 1 & 1 & { - 1} & { - 1} & { - 1} & { - 1} & 1 & 1 \\ 1 & { - 1} & { - 1} & 1 & { - 1} & 1 & 1 & { - 1} \\ \end{array} } \right]. $$
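
The Kronecker-product construction can be verified directly (a Python/NumPy sketch, for illustration): the product of three 2 × 2 Haar matrices reproduces the 8 × 8 matrix above, including the factor √2/4.

```python
import numpy as np

H2 = np.array([[1,  1],
               [1, -1]]) / np.sqrt(2)

W8 = np.kron(np.kron(H2, H2), H2)        # Kronecker product of three 2x2 Haar matrices
print(np.allclose(W8 @ W8.T, np.eye(8))) # True: orthonormal, so analysis = transpose
```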

After the double Walsh-Hadamard transform, most of the obtained coefficients are strictly equal to zero:

$$ \left[ {\begin{array}{*{20}c} {1228} & {10} & {12} & {10} & 2 & { - 8} & { - 18} & {20} \\ {72} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ {144} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ {288} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} } \right]. $$

This shows that the double Walsh-Hadamard transformation is exact on linear functions. Figure 7a, b presents the results of similar calculations for a quadratic trend \(S_{i} + i^{2} ,\;i = 0,1, \ldots ,63\): namely, the errors of signal reconstruction using the double Haar and Walsh-Hadamard transforms with a given threshold (coefficients above 64 in absolute value retained). The number of retained non-zero coefficients is indicated in parentheses.

Fig. 7 The error of signal reconstruction using the double transforms: a Haar (20); b Walsh-Hadamard (16) (the numbers of nonzero coefficients are in parentheses)

6 Conclusion

There is an extensive literature on Walsh-Hadamard transforms [29] and their generalizations to higher-order orthogonal and biorthogonal spline wavelets [30]. On this basis, there is an urgent need to construct parallel algorithms for computing semi-orthogonal spline multiwavelets of minimal defect. In this chapter, the author’s vision of the problem of compressing “Big Data” in cyber-physical transport systems was presented. Prospects were explored for data compression algorithms based on the multiwavelet approach, which is proposed to be implemented as a program on a mobile device. The presented algorithms can be used in planning road repairs, analyzing road accidents, processing applications from road users, etc.