1 Introduction

In today's technological world, the demand for video keeps growing, driven by the ease with which a wide range of electronic devices can capture and play it. At the same time, the demand for bandwidth shows no sign of decreasing; it continues to rise with the volume of video watched over the web [23, 33]. Video content is generated every day on many electronic devices, yet storing and transmitting video signals in unprocessed form is infeasible because of the disproportionate resources this would require [15, 26]. Widely adopted video coding standards, including H.264 and MPEG-4, are therefore exploited to compress video signals before storage and transmission. Consequently, capable video coding plays a significant role in video communications [18, 25, 28].

High Efficiency Video Coding (HEVC), also termed MPEG-H Part 2 and H.265, is a video compression standard that succeeds the widely used Advanced Video Coding (AVC) standard, H.264/MPEG-4 Part 10. Owing to the widespread use of video, there is a need for video coding approaches with the greatest compression and low complexity that preserve image quality. Standards organizations such as ISO, IEC, and the VCEG of ITU-T, in cooperation with various companies, have driven the development of video coding standards to meet these requirements. The AVC/H.264 standard is considered the most extensively deployed video coding technique [11, 27, 31]. AVC is also regarded as one of the main standards used to compress video on Blu-ray devices, and it is widely exploited by video streaming services, video conferencing applications, TV broadcasting, and so on. At present, the most important development in this field is the deployment of the H.265/HEVC standard, whose main objective is to compress video more efficiently than other standards with respect to coding complexity and video quality [4, 6]. The standard targets an extensive range of platforms for receiving digital video. Personal computers, TVs, smartphones, and tablets have dissimilar display, computational, and connectivity capabilities, so video must be transformed to meet the stipulations of the target platform. This type of transformation is attained using video transcoding. In transcoding, an uncomplicated solution decodes the compressed video signal and then re-encodes it with the target compression system; computationally, however, the process is not so simple [10]. Particularly in real-time applications, the information already available in the compressed video bit-streams should be exploited to accelerate the transformation [12, 16].

To achieve a good compression rate with low complexity, an improved water wave optimization algorithm is introduced into the H.265/HEVC standard, allowing video encoders to achieve better compression efficiency. At the same time, the increased complexity calls for a design methodology able to face the challenges associated with ever higher spatio-temporal resolutions. The major contribution of this paper is an enhanced version of the standard entropy encoding model for HEVC. The enhancement introduces weighting factors for the coded bit rate, where the range of the weights depends on the perceptibility and resolution of the video content. A new optimization model, IPU-WWO, is therefore exploited. The main advantage of the improved water wave optimization is that it tunes the weighting factor of the encoding process while maintaining the quality of the multimedia sequences.

This paper proposes a new video compression model termed weighted holoentropy, in which a weight is applied to the holoentropy to enhance video quality. A new IPU-WWO algorithm is introduced for selecting the optimal weight, which makes the encoding process more effective without losing quality. The rest of the paper is arranged as follows: Section 2 reviews the background of the work. The proposed architecture of HEVC with weighted holoentropy encoding is discussed in Section 3, and Section 3.5 explains the weight optimization by the improved Iterative Propagation Update based Water Wave Optimization algorithm (IPU-WWO). The experimental results and discussion are described in Section 4, and the conclusion and future work are given in Section 5.

2 Literature review

Sunil et al. [1] have suggested high-efficiency video coding based on an optimized quantization matrix. Entropy encoding was applied with respect to the derived optimized quantization matrix, and an improvement in compression rate over enhanced entropy encoding was attained. Experiments on eight benchmark video sequences examined the PSNR while varying the data transmission rate, and the outcomes showed that this approach performs better than other classical entropy encoding approaches. Antony and Ganapathy [24] have established a highly effective video compression approach in which a SIP algorithm was formulated for lossless intra-mode coding in HEVC. The approach exploits an adaptive switching scheme that connects two intra prediction methods on a per-pixel basis, and MSAP was introduced as a modification to enhance the prediction accuracy of SAP. The developed SIP model is a near-lossless approach that chooses the best prediction for every pixel so as to minimize the residual energy, which increases coding efficiency. To avoid the large overhead of transmitting the selection from the encoder to the decoder, SIP piggybacks the choice on the LSB of the transmitted residual of each pixel. Experimental results showed that this approach attains better coding gain than other classical approaches. Sunil et al. [2] have developed an enhanced entropy encoding approach for HEVC, in which classical entropy encoding was enhanced with optimized weighting parameters to attain a higher compression rate; the optimization was performed with the firefly algorithm. Experiments on eight benchmark video sequences examined the PSNR value and showed that this approach preserves quality better than the other approaches. Diaz et al. [29] have implemented HEVC video compression techniques. To obtain a feasible production system, prior effort was devoted to optimizing and parallelizing the HEVC pipeline, leading to the creation of final dual-layer streams. The HDR-HEVC encoder was integrated with the users' encoder system according to the application infrastructure, preserving client robustness, testability, and quality requirements. The final streams were then examined in a correspondingly performing decoder for manual and automated testing and validation, and the automated decoder validation was employed to examine both the backward-compatible base layer and the high dynamic range enhancement layer. Experimental results showed that this approach achieves better testability than the other approaches. Yu et al. [5] have developed an HEVC-based encoder optimization, formulated for HEVC Main 10 Profile HDR video coding and exploiting the abnormality suppression effect. The human visual system (HVS) is considered more sensitive to visual elements with high regularity than to those affected by the abnormality suppression effect, so this effect was employed to optimize the HEVC encoder. First, the pixel orientation was estimated to obtain abnormality suppression in a confined area based on the orientation distribution. Next, visual regularity together with 10-bit luminance adaptation and spatial masking was employed to produce a 10-bit JND map. Finally, perceptual block merging was performed to produce the HEVC Main 10 profile output. The experimental outcomes showed that this approach achieves a considerable runtime improvement when creating HDR videos.

Heindel et al. [34] have formulated a low-complexity video compression approach. For compressing the enhancement layer, a low-complexity scheme called SELC was implemented, and enhancement layer compression based on JPEG-LS and on SHVC was also evaluated. The performance of the scalable coding approach with these three methods was compared to single-layer coding under the Intra Main RExt configuration. The investigational outcomes showed that this approach attains a better runtime than the other enhancement layer encoding approaches. In 2017, Zhu et al. [30] established binary and multi-class learning approaches for video compression. First, the recursive CU decision and PU selection of HEVC were modeled as hierarchical binary and multi-class classification problems. Then, on the basis of these two-stage classification structures, the CU decisions and PU selections were made with SVM classifiers. To attain enhanced prediction performance, a learning approach was proposed that integrates an off-line machine-learning mode, and optimal parameters were determined to allow flexible complexity under a rate-distortion constraint. Simulation outcomes showed that this approach achieves lower complexity in the low-delay main and random-access configurations than traditional compression approaches. Yuan et al. [8] have presented a motion-homogeneity based video compression approach by proposing a fast H.264/AVC to HEVC transcoding method. In the encoding process, a CU is treated as a motion-homogeneous block based on analysis of the decoded data from the H.264/AVC bitstream, and for motion-homogeneous blocks, fast decision strategies covering the CU depth together with PU-based execution were also proposed. Experimental results showed that this approach attains better video compression effectiveness than the standard approaches.

The literature offers several machine-learning techniques for video compression (Table 1). However, further enhancement is needed because several features are lacking in the compression process. The optimized quantization matrix [1] was used to enhance the subjective quality of the video at high SNR, but it requires high processing time and computational complexity. The SIP algorithm [24] was exploited to reduce the residual energy during compression; however, it involves a high occurrence of transmission errors. The optimized weighting parameters [2] include motion vectors used to predict block samples from the frames, but no importance was given to the key features in designing the core elements. The HEVC encoder of [29] supports a higher dynamic range based on the steady and consistent growth in resolution; however, the performance improvement was not significant for complex sequences. The visual regularity approach [5] is convenient for storage and transmission of video, but it requires high processing time. SELC [34] normally achieves a good compression bit rate, but its rate-distortion optimization is highly complex. SVM [30] permits an enhanced accuracy level and thus better prediction performance; however, it involves huge power consumption. Video transcoding [8] was exploited to determine motion-based homogeneity, but it pays insufficient attention to the dependency between pixels. These limitations strongly motivate the development of improved video compression models. Intelligent media-data transfer technologies [20] have been studied through various open-source tools, such as CCanalyzers and simulators. Challenges and synchronization techniques [13] have been proposed for synchronizing video and haptic data over the internet. The upcoming IoT network architecture, its security challenges, and the most important research on media security and privacy in wireless sensor networks (WSNs) have been surveyed in [14]. An efficient algorithm for advanced scalable media-based Smart Big Data (3D, Ultra HD) on intelligent cloud computing systems was proposed in [19].

Table 1 Features and challenges of conventional video compression approaches

3 Proposed architecture of HEVC with weighted Holoentropy encoding

The major motivation of a digital video coding standard is the optimization of coding efficiency. Coding efficiency is the capability of a coding standard to minimize the bit rate that represents video content at a required quality level; equivalently, it is the capability of the standard to increase video quality within a restricted bit rate. The first version of the High Efficiency Video Coding (HEVC) standard [21] was adopted as ISO/IEC 23008-2 and ITU-T H.265, and extensions were later developed to make it capable of serving a broad range of new applications. Although the earlier version of the HEVC standard has broad scope, it did not give much significance to the key features of core element design. The current work on extensions divides into three areas: the scalability extensions, the range extensions, and the 3D video extensions. Both the color sampling formats and the bit depth ranges are enlarged, targeting screen-content coding, high-quality coding, and lossless coding. The scalability extension permits the use of embedded bitstream subsets that serve as reduced-bit-rate representations. Both stereoscopic and multi-view representations are enhanced through the 3D video extensions in terms of depth maps and view-synthesis methodologies [22].

3.1 Encoding model

CABAC, in enhanced form, is the entropy coding model of HEVC, with distinctive features including context modeling, adaptive coefficient scanning, and coefficient coding. Context modeling is utilized to maximize the efficiency of CABAC, and the context modeling indices are obtained from the splitting depth of the transform tree. The indices cover syntax elements such as split_coding_unit_flag, skip_flag, split_transform_flag, cbf_luma, cbf_cb, and cbf_cr. The split_transform_flag refers to TB splitting, and split_coding_unit_flag indicates the additional splitting of a CB; both are coded depending on the neighboring data. A CB coded in inter-picture predictively skipped form is indicated by skip_flag, and cbf_cr, cbf_cb, and cbf_luma are coded on the basis of the transform tree's splitting depth. To maximize throughput, the amount of context-coded data is reduced with CABAC's bypass mode in HEVC.

For scanning, several coefficient scanning approaches are used, including horizontal, diagonal up-right, and vertical scans, and the scanning order is chosen depending on the directionality of the intra-picture predicted regions. The horizontal scan is used when the prediction direction is close to vertical, and the vertical scan is applied when the prediction direction is close to horizontal; for directions other than these, the diagonal up-right scan is used. This mode-dependent scanning is applied for 4 × 4 and 8 × 8 TB sizes, with scanning performed in 4 × 4 sub-blocks; for coding 16 × 16 or 32 × 32 intra-picture prediction, the 4 × 4 diagonal up-right scan is applied. The transmission of the transform coefficients covers the location of the last non-zero transform coefficient, significance maps, and so on. In particular, sign bits are coded depending on the number of coded coefficients, and additional compression gains can be obtained via the sign data hiding method: the sign bit of the first non-zero coefficient is inferred from the parity of the amplitude sum, and this is done only if the distance between the scanning positions of the first and last non-zero coefficients in a 4 × 4 sub-block with at least two non-zero coefficients is larger than 3. The architecture of CABAC is illustrated in Fig. 1. CABAC was first adopted as a normative part of the H.264/AVC standard, where it is one of two alternative entropy coding methods. The design of CABAC involves the key elements of binarization, context modelling, and binary arithmetic coding, and these elements are illustrated as the main algorithmic building blocks of the CABAC encoding block diagram.
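To make the scan-order selection concrete, the following Python sketch chooses a scan for a small luma TB from its intra prediction mode and lists the positions of a diagonal up-right scan. This is an illustrative sketch only, not the normative HEVC decision logic; the exact mode ranges around the horizontal and vertical directions are assumptions made for readability.

```python
# Illustrative sketch (not the normative HEVC logic): choosing a coefficient
# scan order from the intra prediction mode for small transform blocks.
# The mode thresholds around the horizontal (10) and vertical (26) angular
# modes are assumptions for illustration.

def select_scan_order(intra_mode: int, tb_size: int) -> str:
    """Return 'horizontal', 'vertical', or 'diag_up_right' for a luma TB."""
    if tb_size in (4, 8):                      # mode-dependent scan only for small TBs
        if 22 <= intra_mode <= 30:             # prediction close to vertical -> horizontal scan
            return "horizontal"
        if 6 <= intra_mode <= 14:              # prediction close to horizontal -> vertical scan
            return "vertical"
    return "diag_up_right"                     # 16x16 / 32x32 and remaining directions

def diag_up_right_positions(n: int):
    """Coefficient positions of an n x n sub-block in diagonal up-right order."""
    order = []
    for s in range(2 * n - 1):                 # anti-diagonals, bottom-left to top-right
        for y in range(n - 1, -1, -1):
            x = s - y
            if 0 <= x < n:
                order.append((y, x))
    return order

print(select_scan_order(intra_mode=26, tb_size=4))   # 'horizontal'
print(diag_up_right_positions(2))                    # [(0, 0), (1, 0), (0, 1), (1, 1)]
```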

Fig. 1
figure 1

Architecture of the CABAC encoder of HEVC with the proposed optimization concept

3.2 Weighted Holoentropy encoding for HEVC

Consider S as the set of V video sequences \( \left\{{s}_1,{s}_2,{s}_3,\ldots, {s}_V\right\} \) with distinct \( {s}_k \) for 1 ≤ k ≤ V, each with its unconditional attribute vector \( {\left[{p}_1,{p}_2,\ldots, {p}_N\right]}^T \), in which N refers to the number of attributes. Attribute \( {p}_j \) takes values from the domain \( \left[{p}_{1,j},{p}_{2,j},\ldots, {p}_{n_j,j}\right] \) for 1 ≤ j ≤ N, where \( {n}_j \) denotes the number of distinct values of attribute \( {p}_j \). Consider \( {p}_j \) as a random variable, and let the random vector \( {\left[{p}_1,{p}_2,\ldots, {p}_N\right]}^T \) be denoted by P. The attribute vector of \( {s}_i \) is likewise written as \( {\left[{s}_{i,1},{s}_{i,2},\ldots, {s}_{i,m}\right]}^T \). For every attribute, the entropy is weighted using a reverse sigmoid function.

$$ W{E}_i=1-\operatorname{logit}^{-1}\left({w}_i\,E{N}_i\right) $$
(1)
$$ W{E}_i=1-\frac{1}{1+{e}^{-{w}_iE{N}_i}} $$
(2)
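A minimal sketch of Eqs. (1) and (2) is given below: the plain entropy \( E{N}_i \) of an attribute is estimated from its value frequencies and then passed through the reverse sigmoid with weight \( {w}_i \). The variable names and the toy sample are illustrative assumptions, not part of the original formulation.

```python
import math
from collections import Counter

# A minimal sketch of the reverse-sigmoid weighting of Eqs. (1)-(2): the plain
# attribute entropy EN_i is computed from the value frequencies of an attribute,
# then damped through 1 - logit^{-1}(w_i * EN_i).

def attribute_entropy(values):
    """Shannon entropy EN_i of one attribute, estimated from observed values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def weighted_entropy(values, w_i: float) -> float:
    """WE_i = 1 - 1 / (1 + exp(-w_i * EN_i)), as in Eq. (2)."""
    en_i = attribute_entropy(values)
    return 1.0 - 1.0 / (1.0 + math.exp(-w_i * en_i))

pixels = [12, 12, 13, 200, 200, 200, 14, 12]   # toy attribute samples (illustrative)
print(weighted_entropy(pixels, w_i=0.8))
```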

The weighted entropy model aims to define the optimal \( {w}_i \) on the basis of the pixel-wise relationship between the original and the decoded video sequence. Thus, the optimal \( {w}_i \) is defined through the maximization problem given in Eq. (3).

$$ {w}^{\ast }=\underset{w_i}{\arg\max}\ \sum \limits_{l=1}^{N_s}\left(2\log {s}_l^{\max }-\left[\log \frac{1}{\left|{s}_l\right|}\sum \limits_u\sum \limits_v{\left({s}_l\left(u,v\right)-{\hat{s}}_l\left(u,v\right)\right)}^2\right]\right) $$
(3)

In Eq. (3), \( {s}_l\left(u,v\right) \) and \( {\hat{s}}_l\left(u,v\right) \) refer to the (u, v)th pixel element of a frame of the original video sequence l and of its decoded counterpart, respectively, \( {s}_l^{\max } \) is the peak pixel value, and \( \left|{s}_l\right| \) is the number of pixels per frame. The entropy weight w is optimally selected using the new improved water wave optimization algorithm, IPU-WWO, which performs the weight optimization described in Section 3.5.
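The following sketch shows how the objective of Eq. (3) could be evaluated for a candidate weight. Each term \( 2\log {s}_l^{\max } - \log(\mathrm{MSE}) \) is a PSNR-like quality score of the decoded sequence; the encode_decode callable is a hypothetical placeholder for the weighted-holoentropy encoder/decoder and is not defined in this paper.

```python
import numpy as np

# A sketch of the weight-selection objective in Eq. (3). The log base is
# immaterial to the arg max; base-10 is used here for readability.

def sequence_quality(original: np.ndarray, decoded: np.ndarray, s_max: float = 255.0) -> float:
    """2*log(s_max) - log(MSE) for one frame pair, as inside the sum of Eq. (3)."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return 2.0 * np.log10(s_max) - np.log10(mse + 1e-12)

def objective(weight: float, sequences, encode_decode) -> float:
    """Sum of per-frame quality scores for a candidate entropy weight.

    encode_decode(frame, weight) is a placeholder for the weighted-holoentropy
    encode/decode round trip; it is an assumption, not part of the paper.
    """
    return sum(sequence_quality(frame, encode_decode(frame, weight))
               for seq in sequences for frame in seq)
```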

3.3 Solution encoding

The key aspect of this proposed encoding model is the optimal selection of the weight in the weighted holoentropy. Hence, the solution to the proposed algorithm is the weight itself. The solution encoding is illustrated in Fig. 2, and the main objective of the proposed work is given in Eq. (4).

$$ OB=\max (PSNR) $$
(4)
Fig. 2
figure 2

Solution encoding

3.4 Conventional WWO algorithm

The WWO algorithm [32] is a renowned optimization model inspired by shallow water waves for solving optimization problems. The solution space W is taken as the seabed area, and the fitness of a point w ∈ W is measured inversely through the seabed depth: the smaller the distance to the still water level, the higher the fitness FI(w). The model maintains a population of solutions, each being a wave with a height HE and a wavelength λ. For each wave, HE is initialized to a constant HEmax and λ is initialized to 0.5. The model has three operations: (i) propagation, (ii) refraction, and (iii) breaking.

Propagation

Propagation generates a new wave w′ by shifting each dimension \( \overline{d} \) of the original wave w as given in Eq. (5), where ran[−1, 1] indicates a random number uniformly distributed within (−1, 1) and \( LN\left(\overline{d}\right) \) indicates the length of the \( \overline{d} \)th dimension of the search space.

$$ {w}^{\prime}\left(\overline{d}\right)=w\left(\overline{d}\right)+\mathit{\operatorname{ran}}\left[-1,1\right]\cdotp \lambda LN\left(\overline{d}\right) $$
(5)
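As an illustration of Eq. (5), a minimal propagation sketch is given below. Clipping out-of-range positions back into the search space is an assumption, since the boundary handling is not specified here.

```python
import random

# A minimal sketch of the propagation step in Eq. (5): every dimension of a wave
# is shifted by a uniform random amount scaled by the wavelength and the length
# of that dimension's search range.

def propagate(wave, wavelength, lower, upper):
    new_wave = []
    for d, x in enumerate(wave):
        length_d = upper[d] - lower[d]                        # LN(d): range length of dimension d
        x_new = x + random.uniform(-1.0, 1.0) * wavelength * length_d
        new_wave.append(min(max(x_new, lower[d]), upper[d]))  # keep inside the search space
    return new_wave
```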

After each generation, the wavelength of each wave w is updated as defined in Eq. (6), in which FImin and FImax indicate the minimum and maximum fitness values in the current population, α indicates the wavelength reduction coefficient, and ε is a small positive number that avoids division by zero.

$$ \lambda =\lambda \cdotp {\alpha}^{-\left( FI(w)-F{I}_{\mathrm{min}}+\varepsilon \right)/\left(F{I}_{\mathrm{max}}-F{I}_{\mathrm{min}}+\varepsilon \right)} $$
(6)

This update formulation ensures that waves with higher fitness have shorter wavelengths and therefore propagate within smaller ranges.
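A short sketch of the wavelength update of Eq. (6) follows; the default value α = 1.0026 is the setting commonly reported for WWO and is assumed here rather than taken from this paper.

```python
# A sketch of the wavelength update of Eq. (6). Higher fitness yields a more
# negative exponent and hence a shorter wavelength.

def update_wavelength(wavelength, fitness, fit_min, fit_max, alpha=1.0026, eps=1e-12):
    exponent = -(fitness - fit_min + eps) / (fit_max - fit_min + eps)
    return wavelength * (alpha ** exponent)
```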

Refraction

Refraction is performed on waves whose heights have decreased to zero. Eq. (7) shows a simple way of evaluating the position after refraction, where \( {w}^{\ast } \) indicates the best solution attained so far and GU(μ, σ) specifies a Gaussian random number with mean μ and standard deviation σ.

$$ {w}^{\prime}\left(\overline{d}\right)= GU\left(\frac{w^{\ast}\left(\overline{d}\right)+w\left(\overline{d}\right)}{2},\frac{\left|{w}^{\ast}\left(\overline{d}\right)-w\left(\overline{d}\right)\right|}{2}\right) $$
(7)

After the completion of refraction, the height of the wave w′ is reset to HEmax, and its wavelength is set as defined in Eq. (8).

$$ {\lambda}^{\prime }=\lambda \frac{FI(w)}{FI\left({w}^{\prime}\right)} $$
(8)
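The refraction step of Eqs. (7) and (8) can be sketched as follows, assuming positive fitness values so that the wavelength rescaling of Eq. (8) is well defined.

```python
import random

# A sketch of refraction, Eqs. (7)-(8): a stagnant wave is redrawn, dimension by
# dimension, from a Gaussian centred halfway between the wave and the best
# solution found so far, and its wavelength is rescaled by the fitness ratio.

def refract(wave, best_wave, wavelength, fitness_fn):
    new_wave = [random.gauss((b + x) / 2.0, abs(b - x) / 2.0)    # Eq. (7)
                for x, b in zip(wave, best_wave)]
    old_fit, new_fit = fitness_fn(wave), fitness_fn(new_wave)
    new_wavelength = wavelength * old_fit / new_fit              # Eq. (8), fitness assumed > 0
    return new_wave, new_wavelength
```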

Breaking

Breaking is the final operation, performed on a wave w that has become the new best solution \( {w}^{\ast } \); a local search is then carried out around \( {w}^{\ast } \) to simulate wave breaking. R dimensions are chosen at random, and at each chosen dimension \( \overline{d} \) a solitary wave is produced as given in Eq. (9), where β refers to the breaking coefficient.

$$ {w}^{\prime}\left(\overline{d}\right)=w\left(\overline{d}\right)+ GU\left(0,1\right)\cdotp \beta LN\left(\overline{d}\right) $$
(9)

If no solitary wave is better than \( {w}^{\ast } \), then \( {w}^{\ast } \) is retained; otherwise, \( {w}^{\ast } \) is replaced with the fittest solitary wave. Algorithm 1 shows the pseudo code of the conventional WWO.

figure a
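Complementing Algorithm 1, the breaking step of Eq. (9) together with its acceptance rule can be sketched as below; the number of solitary waves and the breaking coefficient β are illustrative defaults, not values taken from the paper.

```python
import random

# A sketch of the breaking step, Eq. (9): R randomly chosen dimensions of the
# best wave each spawn a solitary wave, and the best wave is replaced only if a
# fitter solitary wave is found.

def break_wave(best_wave, lower, upper, fitness_fn, n_solitary=3, beta=0.01):
    best_fit = fitness_fn(best_wave)
    dims = random.sample(range(len(best_wave)), k=min(n_solitary, len(best_wave)))
    for d in dims:
        solitary = list(best_wave)
        length_d = upper[d] - lower[d]
        solitary[d] += random.gauss(0.0, 1.0) * beta * length_d   # Eq. (9)
        solitary[d] = min(max(solitary[d], lower[d]), upper[d])   # keep inside the search space
        fit = fitness_fn(solitary)
        if fit > best_fit:                                        # accept only improvements
            best_wave, best_fit = solitary, fit
    return best_wave
```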

3.5 Proposed improved water wave optimization: IPU-WWO

Despite the attractive features of the WWO approach, such as its simple structure and easy realization, it also has drawbacks in terms of limited calculation precision and slow convergence. Thus, the conventional WWO algorithm is improved by modifying the propagation evaluation of Eq. (5) into the new evaluation given in Eq. (10), where η ∈ (0, 1), iter is the current iteration, maxiter is the maximum number of iterations, and rand[−1, 1] is a random number uniformly distributed within (−1, 1). Algorithm 2 gives the pseudo code of the developed IPU-WWO, and Fig. 3 illustrates the flowchart of the developed algorithm.

$$ {w}^{\prime}\left(\overline{d}\right)=w\left(\overline{d}\right)+\operatorname{rand}\left[-1,1\right]\cdot \lambda\, LN\left(\overline{d}\right){\left(\frac{iter}{\max iter}\right)}^{\eta } $$
(10)
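A sketch of the modified propagation of Eq. (10) is given below; the choice η = 0.5 is only an illustrative default within the stated range (0, 1).

```python
import random

# A sketch of the modified propagation of Eq. (10): the conventional propagation
# step is additionally scaled by (iter / max_iter)^eta, so the step size is
# modulated by the iteration count.

def ipu_propagate(wave, wavelength, lower, upper, iteration, max_iter, eta=0.5):
    scale = (iteration / max_iter) ** eta
    new_wave = []
    for d, x in enumerate(wave):
        length_d = upper[d] - lower[d]
        x_new = x + random.uniform(-1.0, 1.0) * wavelength * length_d * scale
        new_wave.append(min(max(x_new, lower[d]), upper[d]))
    return new_wave
```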
Fig. 3
figure 3

Flowchart of the proposed IPU-WWO

figure b

4 Results and discussions

The HEVC standard with weighted holoentropy coding was experimentally investigated on six standard video sequences in YUV file format, namely garden, football, tennis, mobile, foreman, and coastguard, with 125, 115, 140, 112, 300, and 300 frames, respectively. The garden, football, mobile, and tennis videos have a resolution of 352 × 240, and the resolution of coastguard and foreman is 352 × 288. The performance of the proposed principle was observed by examining the PSNR of the decoded video sequences for various counts of encoded bits. Comparisons were made between the proposed method and conventional methods, including ABC [9], FF [3], GA [7], and PSO [17]. PSNR analysis was carried out to analyze the performance of the IPU-WWO method. Although the overall computation time of the proposed method remains higher, its encoding performance is considerably better than that of the previous methods.
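For reference, the PSNR measure used throughout the comparisons can be computed as in the sketch below, assuming 8-bit samples (peak value 255), since the bit depth used is not stated explicitly.

```python
import numpy as np

# A minimal sketch of the PSNR measure used for the comparisons.

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```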

The performance of the proposed and conventional methods in terms of PSNR value and number of compressed bits for all the collected video streams is given in Figs. 4, 5, 6, 7, 8 and 9. Figure 4 illustrates the encoding performance in terms of PSNR and number of compressed bits for the first video, "Football". The graph clearly shows that the proposed model is very effective in HEVC encoding compared to the other methods. For block size 1, the proposed method is 5.69%, 10.57%, 11.20%, and 20.57% better than PSO, GA, FF, and ABC, respectively. For block size 2, the proposed method is 5.38%, 12.50%, 12.87%, and 17.04% better than PSO, GA, FF, and ABC, respectively. For block size 4, the proposed method is 3.87%, 7.17%, 8.16%, and 9.27% superior to PSO, GA, FF, and ABC, respectively. For block size 8, the proposed method is 5.87%, 7.64%, 8.69%, and 7.19% better than PSO, GA, FF, and ABC, respectively. Similarly, the number of compressed bits is lower than for the other conventional methods: for block size 1, the proposed method yields 87.54, whereas the remaining methods yield 88.15 for PSO, 88.45 for GA, 89.01 for FF, and 89.16 for ABC.

Fig. 4
figure 4

Encoding performance with respect to PSNR and Number of bits in compressed for Football video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

Fig. 5
figure 5

Encoding performance with respect to PSNR and number of bits in compressed for garden video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

Fig. 6
figure 6

Encoding performance with respect to PSNR and number of bits in compressed for mobile video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

Fig. 7
figure 7

Encoding performance with respect to PSNR and number of bits in compressed for tennis video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

Fig. 8
figure 8

Encoding performance with respect to PSNR and number of bits in compressed for coastguard video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

Fig. 9
figure 9

Encoding performance with respect to PSNR and Number of bits in compressed for Foreman video (a) ABC (b) FF (c) GA (d) PSO (e) IPU-WWO

For the second video, "Garden", the proposed method has a high PSNR value, which is evident from Fig. 5. For block size 1, the developed model is 7.01%, 12.41%, 14.08%, and 17.59% better than PSO, GA, FF, and ABC, respectively. For block size 2, the proposed method is 6.87%, 9.29%, 10.31%, and 12.64% better than PSO, GA, FF, and ABC, respectively. For block size 4, the proposed method has a high PSNR value, being 4.60%, 6.51%, 7.04%, and 7.27% better than PSO, GA, FF, and ABC, respectively. For block size 8, the developed method is 0.75%, 0.82%, 1.74%, and 2.93% superior to PSO, GA, FF, and ABC, respectively. Similarly, when analyzing the number of compressed bits, the proposed method has lower values than the other methods: 83.54, 83.46, 82.92, and 82.66 for block sizes 1, 2, 4, and 8, respectively.

The encoding performance regarding both PSNR and the number of compressed bits for the third video, "Mobile", is given in Fig. 6. The graphs in Fig. 6 show that for block size 1 the proposed method is 1.26%, 2.57%, 3.77%, and 5.14% better than PSO, GA, FF, and ABC, respectively. For block size 2, the proposed method is 2.06%, 4.05%, 4.44%, and 6.30% better than PSO, GA, FF, and ABC, respectively. For block size 4, the developed method is 1.96%, 2.33%, 3.58%, and 6.62% better than the conventional methods PSO, GA, FF, and ABC, respectively, with a high PSNR value. For block size 8, the PSNR of the developed model is high, being 1.71%, 2.56%, 3.38%, and 5.52% better than PSO, GA, FF, and ABC, respectively. In this case, the number of compressed bits has the minimum value for the proposed method compared to the other methods for all block sizes.

Similarly, the proposed and conventional methods are compared with respect to PSNR and the number of compressed bits, and the results are illustrated in Figs. 7, 8 and 9 for the Tennis, Coastguard, and Foreman videos. In Fig. 7, the PSNR of the proposed model is high: for block size 1, the proposed method is 1.27%, 3.16%, 3.81%, and 6.08% better than PSO, GA, FF, and ABC, respectively. For block size 2, the proposed method is 1.53%, 2.75%, 4.73%, and 5.76% better than PSO, GA, FF, and ABC, respectively. For block size 4, the proposed method is 3.03%, 4.98%, 6.82%, and 8.38% better than PSO, GA, FF, and ABC, respectively, with high PSNR. For block size 8, the proposed method has attained a high PSNR, being 5.36%, 5.77%, 7.13%, and 8.13% better than PSO, GA, FF, and ABC, respectively. For the number of compressed bits, the proposed method has the lowest value among the methods, as shown in all the graphs. Figure 8 shows the encoding performance of both the proposed and conventional methods for the Coastguard video. The graph shows that for block size 1 the proposed method is 0.78%, 1.79%, 2.34%, and 2.90% superior to PSO, GA, FF, and ABC, respectively, with a high PSNR value. For block size 2, the proposed method has high PSNR, being 1.52%, 3.07%, 3.35%, and 4.52% better than PSO, GA, FF, and ABC, respectively. A similar analysis of the number of compressed bits shows that the proposed method has the lowest value among the methods, and this holds for all block sizes. Based on the H.265/HEVC standard, the proposed improved water wave optimization significantly reduces the computational complexity compared with the conventional water wave optimization. The simulation results demonstrate that the proposed algorithm reduces the average encoding time by 69.16% under low block configurations.

Moreover, the time consumed by both the proposed and conventional methods for the encoding process was calculated. For the football video, the recorded encoding times were 6,125,492, 6,954,164, 8,565,415, and 8,854,611 for block sizes 1, 2, 4, and 8, respectively. Similarly, Table 2 shows the time consumed for the encoding process on the other videos (garden, mobile, coastguard, tennis, and foreman).

Table 2 Time consumed for encoding process

5 Conclusion

In this paper, an efficient weighted encoding model for HEVC has been proposed for multimedia applications. The selected standard video sequences have been experimentally examined, and the effectiveness of the IPU-WWO method has been evaluated. This research work provides a complete solution for HEVC encoding schemes. The performance of the proposed IPU-WWO was compared with existing algorithms such as ABC, FF, GA, and PSO. The proposed algorithm yields PSNR improvements of 1.27%, 3.16%, 3.81%, and 6.08% over PSO, GA, FF, and ABC for block size 1; 1.53%, 2.75%, 4.73%, and 5.76% for block size 2; and 3.03%, 4.98%, 6.82%, and 8.38% for block size 4. It is also observed that the compressed bit rate is reduced as the block size increases, and the approach also reduces the time consumed to perform the process. Cloud-based encoding schemes have become more popular in recent years; in future, this work will be extended to cloud-based encoding for live video streams in cloud environments.