1 Introduction

With the development of technology, embedded systems have been integrated into our daily life. Medical electronics, smart home devices, logistics management, electric power control, and even electronic watches, cars, aircraft, satellites, and all devices with digital interfaces and program control bear the mark of the embedded system [1]. Resource scheduling on an embedded platform aims to regulate, measure, analyze, and use various resources reasonably and effectively, and an effective resource scheduling algorithm can improve operating speed and efficiency. Based on task completion time and resource load balance, literature [2] finds the optimal resource scheduling scheme through a particle swarm optimization algorithm. Literature [3] uses a grey prediction model to realize resource scheduling for a virtual machine system through network benchmark testing and polynomial modeling. Literature [4] constructs the problem on a logic model and solves the resource scheduling problem as a satisfiability problem. Literature [5] proposes a delay scheduling algorithm (RFD) based on resource prediction, which schedules jobs reasonably according to predicted resource availability. In this paper, we use the grey prediction model to predict CPU load, calculate weights according to the prediction results, rearrange the CPU queue, and use a weighted round robin algorithm to realize resource scheduling.

2 Research and Design for Resource Scheduling

2.1 Gray-Scale Prediction Model

The grey prediction model is a prediction method that describes the long-term development law of a system from a small amount of incomplete information by establishing a grey differential prediction model. The grey prediction values are obtained by inversely accumulating the predictions produced by the GM(1,1) model built on the generated (accumulated) data.

The GM(1,1) model assumes that the CPU utilization of a processing node at N consecutive time points can be denoted as,

$$ W^{0} = \left( {\omega_{1}^{0} ,\omega_{2}^{0} ,\omega_{3}^{0} \cdots ,\,\omega_{N}^{0} } \right) $$
(1)

The data are then accumulated sequentially as follows,

$$ \omega_{1}^{1} = \omega_{1}^{0} $$
(3)

$$ \omega_{2}^{1} = \omega_{1}^{0} + \omega_{2}^{0} $$
(4)

$$ \omega_{N}^{1} = \omega_{N - 1}^{1} + \omega_{N}^{0} $$
(5)

A new series is thus generated by cumulative addition and is denoted as follows,

$$ W^{1} = \left( {\omega_{1}^{1} ,\omega_{2}^{1} ,\omega_{3}^{1} \cdots ,\,\omega_{N}^{1} } \right) $$
(6)

It can also be written compactly as:

$$ \omega^{\left( 1 \right)} \left( i \right) = \left\{ {\mathop \sum \nolimits_{j = 1}^{i} \omega^{0} \left( j \right)|i = 1,2 \cdots N} \right\} $$
(7)
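
As a tiny illustration of the accumulation in Eq. (7), the following is a hypothetical NumPy sketch; the sample values are invented for illustration only.

```python
import numpy as np

# Hypothetical raw CPU-utilization samples w^0 (illustrative values only)
w0 = np.array([41.2, 43.5, 42.1, 44.0, 43.2])

# Accumulated generating operation: w^1(i) = sum_{j<=i} w^0(j), Eq. (7)
w1 = np.cumsum(w0)

# Differencing the accumulated series restores the raw data (this is Eq. (8) below)
assert np.allclose(np.diff(w1, prepend=0.0), w0)
```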

Accumulation weakens the oscillation and fluctuation of the raw data. Subtracting the previous item from the current item then gives,

$$ \Delta \omega^{1} \left( i \right) = \omega^{1} \left( i \right) - \omega^{1} \left( {i - 1} \right) = \omega^{0} \left( i \right) $$
(8)

where \( i = 1,2, \cdots ,N \) and \( \omega^{1} \left( 0 \right) = 0 \). \( \omega^{1} \) satisfies the first-order ordinary differential equation,

$$ \frac{{d\omega^{1} }}{dt} + a\omega^{1} = \mu $$
(9)

where \( a \) is the development coefficient and \( \upmu \) is the grey input, a constant input to the system. With the initial condition \( \upomega^{1} =\upomega^{1} \left( {{\text{t}}_{0} } \right) \) at \( {\text{t}} = {\text{t}}_{0} \), the solution is,

$$ \omega^{1} \left( {\text{t}} \right) = \left[ {\omega^{1} \left( {t_{0} } \right) - \frac{\mu }{a}} \right]e^{{ - a\left( {t - t_{0} } \right)}} + \frac{\mu }{a} $$
(10)

For samples taken at equal time intervals, the discrete form is,

$$ \omega^{1} \left( {k + 1} \right) = \left[ {\omega^{1} \left( 1 \right) - \frac{\mu }{a}} \right]e^{ - ak} + \frac{\mu }{a} $$
(11)

For the cumulative sequence \( {\text{W}}^{1} \), this paper estimates the constants \( {\text{a}} \) and \( \upmu \) by the least squares method. Since \( \upomega^{1} \left( 1 \right) \) is the initial value, \( \upomega^{1} \left( 2 \right),\upomega^{1} \left( 3 \right) \cdots \upomega^{1} \left( {\text{N}} \right) \) are substituted into the equation in turn, the derivative is replaced by the finite difference, and the samples are taken equidistantly with \( \Delta {\text{t}} = \left( {{\text{t}} + 1} \right) - {\text{t}} = 1 \). Then,

$$ \frac{{\Delta \omega^{1} (2)}}{\Delta t} = \Delta \omega^{1} \left( 2 \right) = \omega^{1} \left( 2 \right) - \omega^{1} \left( 1 \right) = \omega^{0} \left( 2 \right) $$
(12)

For the same reason,

$$ \frac{{\Delta \omega^{1} (N)}}{\Delta t} = \omega^{0} \left( N \right) $$
(13)

At the same time, it can be deduced that,

$$ \omega^{0} \left( 2 \right) + a\omega^{1} \left( 2 \right) = \mu $$
(14)

$$ \omega^{0} \left( 3 \right) + a\omega^{1} \left( 3 \right) = \mu $$
(15)

$$ \omega^{0} \left( N \right) + a\omega^{1} \left( N \right) = \mu $$
(16)

The term \( a\omega^{1} \left( i \right) \) can be moved to the right-hand side, and the system can be written in vector product form as follows,

$$ \left\{ {\begin{array}{*{20}c} {\omega^{0} \left( 2 \right) = \left[ { - \omega^{1} \left( 2 \right),1} \right]\left[ {\begin{array}{*{20}c} a \\ \mu \\ \end{array} } \right]} \\ \\ {\omega^{0} \left( 3 \right) = \left[ { - \omega^{1} \left( 3 \right),1} \right]\left[ {\begin{array}{*{20}c} a \\ \mu \\ \end{array} } \right]} \\ { \cdots \cdots } \\ {\omega^{0} \left( N \right) = \left[ { - \omega^{1} \left( N \right),1} \right]\left[ {\begin{array}{*{20}c} a \\ \mu \\ \end{array} } \right]} \\ \end{array} } \right. $$
(17)

Then the matrix expression is

$$ \left[ {\begin{array}{*{20}c} {\omega^{0} \left( 2 \right)} \\ {\omega^{0} \left( 3 \right)} \\ \vdots \\ {\omega^{0} \left( N \right)} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} { - \frac{1}{2}\left[ {\omega^{1} \left( 2 \right) + \omega^{1} \left( 1 \right)} \right] } & 1 \\ { - \frac{1}{2}\left[ {\omega^{1} \left( 3 \right) + \omega^{1} \left( 2 \right)} \right]} & 1 \\ \vdots & {} \\ { - \frac{1}{2}\left[ {\omega^{1} \left( N \right) + \omega^{1} \left( {N - 1} \right)} \right]} & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} a \\ \mu \\ \end{array} } \right] $$
(18)

The three matrices in the above equation are denoted W, B, and U respectively, giving

$$ {\text{W}} = {\text{BU}} $$
(19)

The least squares estimate is

$$ \hat{U} = \left[ {\begin{array}{*{20}c} {\hat{a}} \\ {\hat{\mu }} \\ \end{array} } \right] = \left( {B^{T} B} \right)^{ - 1} B^{T} W $$
(20)

The time response equation is obtained by substituting the estimates \( {\hat{\text{a}}} \) and \( {\hat{\upmu }} \) into the equation:

$$ \hat{\omega }^{1} \left( {k + 1} \right) = \left[ {\omega^{1} \left( 1 \right) - \frac{{\hat{\mu }}}{{\hat{a}}}} \right]e^{{ - \hat{a}k}} + \frac{{\hat{\mu }}}{{\hat{a}}} $$
(21)

When k = 1, 2, …, N − 1, the above equation gives fitted values; when k ≥ N, it gives predicted values. These are fitted values of the cumulative sequence, which are restored by inverse accumulation (successive subtraction) to obtain the predicted values of the original sequence.
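
As a rough sketch of the whole GM(1,1) procedure described above (accumulation, least-squares estimation of a and μ from Eqs. (18)-(20), the time response Eq. (21), and inverse accumulation), the following Python fragment is illustrative only; the function name gm11_predict and the sample CPU values are assumptions, not part of the original system.

```python
import numpy as np

def gm11_predict(w0, n_ahead=3):
    """Minimal GM(1,1) sketch: fit the raw series w0 and forecast n_ahead steps."""
    w0 = np.asarray(w0, dtype=float)
    n = len(w0)

    # Accumulated generating operation (AGO), Eq. (7)
    w1 = np.cumsum(w0)

    # Build B and W as in Eq. (18); the background value is the mean
    # of adjacent accumulated points
    z = 0.5 * (w1[1:] + w1[:-1])
    B = np.column_stack((-z, np.ones(n - 1)))
    W = w0[1:]

    # Least-squares estimate of a and mu, Eq. (20)
    a_hat, mu_hat = np.linalg.lstsq(B, W, rcond=None)[0]

    # Time response, Eq. (21): fitted/predicted accumulated values
    k = np.arange(n + n_ahead)
    w1_hat = (w0[0] - mu_hat / a_hat) * np.exp(-a_hat * k) + mu_hat / a_hat

    # Inverse accumulation restores the original scale, Eq. (8)
    w0_hat = np.diff(w1_hat, prepend=0.0)
    return a_hat, mu_hat, w0_hat  # w0_hat[:n] are fitted, w0_hat[n:] are forecasts

# Illustrative usage with made-up CPU-utilization samples
a, mu, pred = gm11_predict([41.2, 43.5, 42.1, 44.0, 43.2], n_ahead=3)
print(a, mu, pred[-3:])
```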

2.2 Weighted Round Robin Algorithm

Since the data processing nodes of the embedded platform are functionally homogeneous servers, the processing capacities of the components do not differ greatly from one another during data processing. However, real-time conditions such as CPU utilization, memory utilization, and network node congestion still need to be considered. Therefore, this paper uses the Weighted Round Robin algorithm. Because the processing capabilities of the nodes of the data processing platform are the same, Weighted Round Robin is preferred; however, with each dynamic assignment of a task, the real-time state of the processors changes.

First, the received task queue enters the centralized scheduling server, which in this paper is the resource management component. Second, weights are assigned to each server in advance according to the real-time status of the processors. Finally, tasks are assigned to the servers according to their weight values. A reasonable determination of the weights is therefore the key to resource allocation. By processing the historical data of the nodes, the CPU load of each node can be estimated, and the CPU weight queue is determined according to the CPU load rate.

The Weighted Round Robin algorithm uses a node weight to represent the processing performance of each node. The algorithm polls the processing nodes in order of weight, and nodes with a higher weight handle more task requests than nodes with a lower weight. The algorithm can be described as follows:

This paper denotes the processing nodes in the cluster as N = {N0, N1, … , Nn − 1}, where the weight of node Ni is denoted by W(Ni), i represents the ID of the last selected server, T(Ni) represents the number of tasks currently allocated to Ni, \( \sum {\text{T}}\left( {{\text{N}}_{\text{i}} } \right) \) represents the total number of tasks currently to be assigned, and \( \sum {\text{W}}\left( {{\text{N}}_{\text{i}} } \right) \) represents the sum of the node weights. Then:

$$ {{{\text{W}}\left( {{\text{N}}_{\text{i}} } \right)} \mathord{\left/ {\vphantom {{{\text{W}}\left( {{\text{N}}_{\text{i}} } \right)} {\sum {\text{W}}\left( {{\text{N}}_{\text{i}} } \right)}}} \right. \kern-0pt} {\sum {\text{W}}\left( {{\text{N}}_{\text{i}} } \right)}} = {{{\text{T}}\left( {{\text{N}}_{\text{i}} } \right)} \mathord{\left/ {\vphantom {{{\text{T}}\left( {{\text{N}}_{\text{i}} } \right)} {\sum {\text{T}}\left( {{\text{N}}_{\text{i}} } \right)}}} \right. \kern-0pt} {\sum {\text{T}}\left( {{\text{N}}_{\text{i}} } \right)}} $$
(22)
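
A minimal Python sketch of this selection rule is given below. The class and node names are assumptions made for illustration; the code follows the common "smooth" weighted round robin variant, under which, over one full cycle, node Ni is selected in proportion to W(Ni)/ΣW(Ni), which is exactly the relation in Eq. (22).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str         # processing node identifier, e.g. "N0"
    weight: int       # W(Ni): weight assigned to the node
    current: int = 0  # running counter used by the smooth WRR variant

def pick_node(nodes: List[Node]) -> Node:
    """Select the next node; over one full cycle node Ni is chosen
    W(Ni) times out of sum(W) picks, matching Eq. (22)."""
    total = sum(n.weight for n in nodes)
    for n in nodes:
        n.current += n.weight
    best = max(nodes, key=lambda n: n.current)
    best.current -= total
    return best

# Illustrative usage: with weights 5, 3 and 2, ten consecutive picks
# assign five tasks to N0, three to N1 and two to N2.
cluster = [Node("N0", 5), Node("N1", 3), Node("N2", 2)]
print([pick_node(cluster).name for _ in range(10)])
# ['N0', 'N1', 'N2', 'N0', 'N0', 'N1', 'N0', 'N2', 'N1', 'N0']
```

The smooth variant is used here only because it spreads the picks of a heavily weighted node across the cycle; a plain ordered polling of the weight queue would satisfy Eq. (22) equally well.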

Computing the weights properly is an important factor in achieving load balancing. When initializing the system, the operator needs to set an initial weight DW(Ni) for each node according to its condition.

The dynamic weights of the processing nodes are determined by various parameters of the running state, mainly CPU resources, memory resources, response time, and the number of running modules. For data processing, CPU utilization and memory utilization are relatively important. In order to express the importance of each influencing factor, a constant coefficient \( \upomega_{\text{i}} \) is introduced, with \( \sum\upomega_{\text{i}} = 1 \). Therefore, the load of each node Ni can be written as:

$$ {\text{LOAD}}\left( {{\text{N}}_{\text{i}} } \right) =\upomega_{1} \cdot {\text{Lcpu}}\left( {{\text{N}}_{\text{i}} } \right) +\upomega_{2} \cdot {\text{Lmemory}}\left( {{\text{N}}_{\text{i}} } \right) +\upomega_{3} \cdot {\text{Lprocess}}\left( {{\text{N}}_{\text{i}} } \right) $$
(23)

where Lf(Ni) represents the load of node Ni with respect to factor f, i.e., the CPU utilization, memory utilization, or number of processes in the above equation. The resource scheduling component periodically recomputes the dynamic weight from the running state, and the final weight of each node is then calculated as:

$$ {\text{W}}_{\text{i}} = {\text{A}} \cdot {\text{DW}}\left( {{\text{N}}_{\text{i}} } \right) + {\text{B}} \cdot \left( {{\text{LOAD}}\left( {{\text{N}}_{\text{i}} } \right) - {\text{DW}}\left( {{\text{N}}_{\text{i}} } \right)} \right) \cdot \frac{1}{3} $$
(24)

In this formula, if the dynamic weight is exactly equal to the initial weight, the final weight is unchanged and the load of the system is in the ideal state, equal to the initial state DW(Ni). When the number of nodes is fixed, a task is either not dispatched, or a low-priority task is unloaded first and the task is then dispatched.
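
A rough sketch of Eqs. (23) and (24) follows; the coefficient values, the constants A and B, and the load figures are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical importance coefficients for each factor, summing to 1 (Eq. 23)
COEF_CPU, COEF_MEM, COEF_PROC = 0.5, 0.3, 0.2
# Hypothetical mixing constants for the final weight (Eq. 24)
A, B = 0.7, 0.3

def load(cpu_util, mem_util, process_load):
    """LOAD(Ni) = w1*Lcpu(Ni) + w2*Lmemory(Ni) + w3*Lprocess(Ni), Eq. (23)."""
    return COEF_CPU * cpu_util + COEF_MEM * mem_util + COEF_PROC * process_load

def final_weight(dw, cpu_util, mem_util, process_load):
    """Wi = A*DW(Ni) + B*(LOAD(Ni) - DW(Ni))*1/3, Eq. (24); the correction
    term vanishes when the measured load equals the initial weight DW(Ni)."""
    return A * dw + B * (load(cpu_util, mem_util, process_load) - dw) / 3

# One node with initial weight 50, 60% CPU, 40% memory and a process load of 30
print(final_weight(50, 60, 40, 30))   # 34.8 with these assumed values
```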

3 Validation of Resource Scheduling System

After completing the design and implementation of the ship resource scheduling component, a comprehensive system test of the component is needed. Each module was unit tested during development. In this section, the black-box testing method is mainly used to verify the functional and non-functional requirements proposed in the requirements analysis.

3.1 Resource Allocation Test

Resource allocation is divided into two steps: load forecasting and resource generation. Load prediction verification: first, the normal CPU queue is used for load forecasting and weight calculation, and the CPU queue is rearranged by weight. Resource generation: the CPU queue is configured in turn according to the tasks, and the resulting resource scheduling plan is shown in Table 1. As a ship display and control terminal component, the resource scheduling component also retains the ability of manual configuration by the operator; in that case the configuration information is stored in the message queue.

Table 1. The fitting error at each time point.

3.2 Resource Distribution Test

When the operator right-clicks on a device, the resource scheme can be successfully sent to it. After receiving the loading message, the embedded data processing platform framework returns a feedback message to the resource scheduling component according to the load condition. At this point, the resource scheduling component prompts the operator, who chooses whether to continue the scheduling. If so, the resource scheduling component continues loading; if not, the queue is emptied and the loaded components are unloaded.

3.3 Test of Gray-Scale Prediction Module

The embedded data platform collects CPU state information in real time and predicts the data with the grey prediction model. The prediction curve and the CPU fitting error values are shown in the corresponding figures.

The estimated parameters are \( {\text{a}} = - 0.00917 \) and \( \upmu = 42.76613 \); the predicted values at times 6, 7, and 8 are shown in Table 2.

Table 2. Grey model predicted values and actual values.

In the table, the upper row gives the predicted values of the CPU load and the lower row gives the actually observed CPU values. It can be seen that the actual values differ little from the predicted values, so applying the grey prediction model to the resource scheduling system gives good prediction results.
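
With the reported parameters, the predicted values at future times follow directly from Eq. (21) and inverse accumulation. The short sketch below illustrates this; the initial sample w0_1 is an assumption, since the measured series is not reproduced here.

```python
import math

a, mu = -0.00917, 42.76613   # parameters reported above
w0_1 = 42.0                  # assumed initial sample w^0(1); not given in the text

def predict(k):
    """w^0(k+1) from the time response Eq. (21) followed by inverse accumulation."""
    w1 = lambda j: (w0_1 - mu / a) * math.exp(-a * j) + mu / a   # accumulated value w^1(j+1)
    return w1(k) - w1(k - 1)

for k in (5, 6, 7):          # predicted CPU load at times 6, 7 and 8
    print(k + 1, round(predict(k), 3))
```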

4 Conclusion

In this paper, the grey prediction model and the Weighted Round Robin algorithm are used to schedule resources on an embedded data processing platform. In the system, the grey prediction model predicts the load of the normal CPU queue and the weight values are calculated from the predictions. The weight values are used to rearrange the CPU queue, and the Weighted Round Robin scheduling algorithm then performs resource scheduling according to the load condition. Constant coefficients introduced in the Weighted Round Robin algorithm indicate the importance of the influencing factors, so that the system preferentially schedules important resources and its efficiency is improved. Through component tests of the resource scheduling system and the collection of CPU state information, the predicted values are compared with the actual values. The results show that the grey prediction model applied to the resource scheduling system yields good prediction results.