
1 Introduction

Multi-View Video (MVV) consists of multiple video streams captured simultaneously by several cameras around a single scene. Compression is certainly needed since the increase in the amount of data is quite substantial. To reduce the data transmission and storage requirements, new compression techniques exploit not only the temporal correlation within a single video but also the inter-view correlation between adjacent videos. These techniques are being extensively studied in H.264/AVC Multi-View Coding (MVC). New applications such as 3DTV and Free-Viewpoint Television (FTV) services are now available based on MVC. MVC exploits the considerable inter-view redundancy between adjacent views for further compression [1]. In MVC, Motion Vectors (MVs) are generated from motion compensation between frames in the same view and Disparity Vectors (DVs) are generated from disparity compensation between frames of adjacent views, as shown in the MVC prediction structure in Fig. 1.

Fig. 1 Prediction structure of the multi-view video coding [1]

In this paper, a proposed algorithm exploits the inter-view and intra-view spatio-temporal correlation to conceal lost MBs and to achieve higher PSNR and higher subjective quality. The proposed algorithm initially conceals lost inter MBs. It dynamically changes its behavior according to the lost MB size. In addition, the algorithm adaptively selects candidate MBs according to the view currently used in the EC process. During execution of the proposed algorithm, new candidate MBs are generated according to the motion direction to obtain better matched MBs, instead of using fixed candidate MBs as in [2].

After initial concealment, two proposed enhancement methods are used depending on the lost MB size. For inter 16 × 16 MBs, WBMDC is applied using the best MB in each reference frame of the other views. These best MBs are combined with different weighting factors to produce the best matched MB. For the other inter MB types, Overlapped Block Motion Compensation (OBMC) is applied, which uses predefined weighting matrices [3]. The rest of this paper is organized as follows: Section 2 presents basic EC algorithms, Sect. 3 presents the proposed EC algorithm, Sect. 4 presents simulation results and Sect. 5 concludes the paper.

2 Basics of Error Concealment Algorithms

The Boundary Matching Algorithm (BMA) is the basic motion-compensated EC technique recommended in the H.264/AVC standard for temporal concealment. With MVC, BMA utilizes the inter-view disparity vectors as well as the motion vectors [4]. The selection between the motion- and disparity-compensated MB is based on the smallest Sum of Absolute Differences (SAD). Another EC technique is the Outer BMA (OBMA). OBMA is derived from BMA, but it computes the differences between the two-pixel-wide outer boundary of the replacing MB and the corresponding external boundary of the lost (corrupted) MB. This offers significantly better concealment performance than BMA with the same computational complexity [3].
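As an illustration, the following minimal sketch shows how an OBMA-style matching cost could be computed for one candidate motion/disparity vector. Frames are assumed to be grayscale NumPy arrays; the function and variable names (obma_cost, border) are illustrative and not taken from the reference software.

```python
import numpy as np

MB = 16  # macroblock size in H.264/AVC

def obma_cost(ref_frame, cur_frame, x, y, mv, border=2):
    """SAD between the outer boundary of the candidate (compensated) MB in
    ref_frame and the received outer boundary around the lost MB at (x, y)
    in cur_frame; mv is the candidate motion or disparity vector."""
    dx, dy = mv
    strips = [
        # top and bottom outer strips (border rows just outside the MB)
        (cur_frame[y - border:y, x:x + MB],
         ref_frame[y + dy - border:y + dy, x + dx:x + dx + MB]),
        (cur_frame[y + MB:y + MB + border, x:x + MB],
         ref_frame[y + dy + MB:y + dy + MB + border, x + dx:x + dx + MB]),
        # left and right outer strips (border columns just outside the MB)
        (cur_frame[y:y + MB, x - border:x],
         ref_frame[y + dy:y + dy + MB, x + dx - border:x + dx]),
        (cur_frame[y:y + MB, x + MB:x + MB + border],
         ref_frame[y + dy:y + dy + MB, x + dx + MB:x + dx + MB + border]),
    ]
    return sum(np.abs(c.astype(np.int32) - r.astype(np.int32)).sum()
               for c, r in strips)
```

The candidate vector with the smallest cost is then used to copy the corresponding MB into the lost position.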

The algorithm presented in [5] enhances the results of BMA and OBMA to obtain better results in stereoscopic video coding. In our proposed algorithm, the algorithm in [5] is applied to MVC, but with some modifications and additional computational complexity to enhance the initially concealed MBs. Due to the prediction structure of MVC, any MB loss results in error propagation to frames of adjacent views [4]. When a lost MB is to be concealed in MVC, reference MBs are searched in the temporally neighboring frames of the current view and in the neighboring frames of the camera views to the left and to the right of the considered view. The best reference, or a weighted average of two references, is then selected and copied into the missing area.

3 Proposed Error Concealment Algorithm

The block diagram of the proposed MVC concealment technique is shown in Fig. 2.

Fig. 2 The proposed algorithm

In the proposed algorithm, a damaged intra MB is concealed and enhanced using the SIV algorithm. In the SIV algorithm, the spatial EC method called Weighted Pixel Average (WPA) is applied first. The WPA weights are inversely proportional to the distance between the reference pixel and the interpolated pixel. After applying WPA, the best DVs are obtained using pixels inside the lost MB and pixels surrounding the lost MB position in other views, in order to enhance the initially concealed intra MB.
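A minimal sketch of how this inverse-distance weighting could be realized is given below; it assumes a grayscale frame stored as a NumPy array and fills the lost MB from the pixel row/column that borders it on each side. The function name and the choice of boundary pixels are illustrative assumptions.

```python
import numpy as np

def wpa_conceal(frame, x, y, mb=16):
    """Weighted Pixel Average: fill the lost MB with top-left corner (x, y)
    from the four bordering pixel rows/columns, with weights inversely
    proportional to the distance to each boundary pixel."""
    top    = frame[y - 1,    x:x + mb].astype(np.float64)   # row above the MB
    bottom = frame[y + mb,   x:x + mb].astype(np.float64)   # row below the MB
    left   = frame[y:y + mb, x - 1].astype(np.float64)      # column to the left
    right  = frame[y:y + mb, x + mb].astype(np.float64)     # column to the right
    for j in range(mb):          # row inside the lost MB
        for i in range(mb):      # column inside the lost MB
            d = np.array([j + 1, mb - j, i + 1, mb - i], dtype=np.float64)
            w = 1.0 / d                                   # inverse-distance weights
            p = np.array([top[i], bottom[i], left[j], right[j]])
            frame[y + j, x + i] = np.round((w * p).sum() / w.sum())
    return frame
```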

If the damaged MB is an inter MB, the algorithm first determines the reference frames in all views. These reference frames may be located in the same view or in other views. Then the proposed search engine is applied over all reference frames. The candidate MBs in the proposed algorithm are adaptively selected according to the view that contains the reference frame currently being processed, using the following rules (a sketch of these rules follows the list):

  • If the view of the current reference frame is the same as the view of the lost MB, the candidate MBs are selected from the horizontal, vertical and diagonal neighboring MBs.

  • If the view of the current reference frame is the view to the left of the lost MB's view, the candidate MBs are the surrounding neighboring MBs (4 neighbors, 4 corners) plus additional neighboring MBs to the right.

  • If the view of the current reference frame is the view to the right of the lost MB's view, the candidate MBs are the surrounding neighboring MBs (4 neighbors, 4 corners) plus additional neighboring MBs to the left.
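Under the assumption that views are numbered from left to right, these rules could be encoded as follows; the number of extra horizontal candidates (extra) and the coordinate convention are illustrative assumptions, not values from the paper.

```python
def candidate_positions(view_of_ref, view_of_lost, x, y, mb=16, extra=2):
    """Return candidate MB top-left positions around the lost MB at (x, y),
    adapted to the view containing the current reference frame."""
    # 4-neighbour and 4-corner MBs (also the full set for same-view references)
    cands = [(x + dx * mb, y + dy * mb)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if not (dx == 0 and dy == 0)]
    if view_of_ref < view_of_lost:
        # reference in the left view: best match tends to lie to the right,
        # so add extra candidates further to the right
        cands += [(x + k * mb, y) for k in range(2, 2 + extra)]
    elif view_of_ref > view_of_lost:
        # reference in the right view: best match tends to lie to the left,
        # so add extra candidates further to the left
        cands += [(x - k * mb, y) for k in range(2, 2 + extra)]
    return cands
```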

The reason for selecting additional left candidate MBs for the right view and additional right candidate MBs for the left view is explained in Fig. 3.

Fig. 3 Location of lost MB and best matched MB with respect to adjacent left and right views

The lost MB is assumed to be located in the intermediate view (the current view). For the left view, the best MB is most probably located to the right of the lost MB position. As a result, there is a higher probability of finding better matched MBs on the right side of the lost MB, so more MBs to the right are used as candidates in the proposed search algorithm.

For the right view, the best MB is most probably located to the left of the lost MB position. As a result, there is a higher probability of finding better matched MBs on the left side of the lost MB, so more MBs to the left are used as candidates in the proposed search algorithm. Then, the MB partition size is determined using the sizes of the surrounding neighboring MBs [3]. The determined MB mode will be one of the following four modes, shown in Fig. 4 (a sketch of the resulting candidate sets follows the list):

Fig. 4 MV candidates for the lost MB and the four EC modes

  • Mode 1 (16 × 16): The set of candidate MVs for block 0 is {MV1, MV2,…, MV8}, where MV1, MV2,…, MV8 are all the MVs located around the lost MB.

  • Mode 2 (16 × 8): The sets of candidate MVs for blocks 0 and 1 are {MV1, MV2, MV3, MV7} and {MV4, MV5, MV6, MV8}, respectively.

  • Mode 3 (8 × 16): The sets of candidate MVs for blocks 0 and 1 are {MV1, MV3, MV4, MV5} and {MV2, MV6, MV7, MV8}, respectively.

  • Mode 4 (8 × 8): The sets of candidate MVs for blocks 0, 1, 2 and 3 are {MV1, MV3}, {MV2, MV7}, {MV4, MV5} and {MV6, MV8}, respectively.
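These candidate sets can be written down directly as a lookup table. The dictionary below is only a restatement of the list above; the MV labels refer to the numbering of Fig. 4.

```python
# Candidate MV sets for each partition mode, indexed by sub-block number.
CANDIDATE_SETS = {
    "16x16": {0: ["MV1", "MV2", "MV3", "MV4", "MV5", "MV6", "MV7", "MV8"]},
    "16x8":  {0: ["MV1", "MV2", "MV3", "MV7"],
              1: ["MV4", "MV5", "MV6", "MV8"]},
    "8x16":  {0: ["MV1", "MV3", "MV4", "MV5"],
              1: ["MV2", "MV6", "MV7", "MV8"]},
    "8x8":   {0: ["MV1", "MV3"], 1: ["MV2", "MV7"],
              2: ["MV4", "MV5"], 3: ["MV6", "MV8"]},
}
```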

After selecting the most suitable partition type for the lost MB, each partition of the lost MB is concealed by applying the proposed method with the first-level candidate set of MBs. For better MB replacement, a self-generated candidate MB algorithm is proposed. This algorithm uses the direction of the best selected MB as a step to generate more candidate MBs along the same motion direction. At each step, if the OBMA value of the new candidate MB is lower than that of the previous one, the new candidate MB becomes the best matched MB, and so on until the best MB is found within the selected number of iterations N, where N is set to 5.
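A minimal sketch of this refinement loop is shown below. Here cost_fn stands for the OBMA cost of a candidate vector, and the unit-step construction is an assumption about how "the same motion direction" is followed.

```python
def refine_along_motion_direction(cost_fn, best_mv, n_steps=5):
    """Self-generated candidates: starting from the best vector found so far,
    keep stepping in the same motion direction as long as the OBMA cost
    decreases, for at most n_steps iterations (N = 5 in the paper)."""
    def sign(v):
        return (v > 0) - (v < 0)

    step = (sign(best_mv[0]), sign(best_mv[1]))   # unit step along the motion direction
    best_cost = cost_fn(best_mv)
    mv = best_mv
    for _ in range(n_steps):
        cand = (mv[0] + step[0], mv[1] + step[1])
        cand_cost = cost_fn(cand)
        if cand_cost < best_cost:                 # keep moving while the cost improves
            best_mv, best_cost, mv = cand, cand_cost, cand
        else:
            break
    return best_mv, best_cost
```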

To enhance the initially concealed MBs, OBMC is applied for all inter MB modes except the 16 × 16 mode. OBMC exploits the partitioning of the lost MB and is mainly used to avoid blocking artifacts after the initial concealment. In OBMC, the initially concealed MB is split into four 8 × 8 blocks, and each of these blocks is processed individually using predefined weighting matrices [3] and the pixels of the neighboring MBs to obtain a better match.
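The sketch below only illustrates the idea of blending an initially concealed 8 × 8 block with a prediction extended from a neighboring MB; the linear-ramp weights are purely illustrative stand-ins for the predefined weighting matrices of [3].

```python
import numpy as np

def obmc_blend_8x8(concealed_blk, neighbor_pred, w_self=None):
    """Blend an initially concealed 8x8 block with a prediction extended from
    a neighbouring MB.  Real OBMC uses the predefined weighting matrices of
    [3]; here a simple ramp (heavier neighbour weight near the shared edge)
    stands in for them."""
    if w_self is None:
        ramp = np.linspace(0.5, 1.0, 8)           # 0.5 at the shared edge, 1.0 far away
        w_self = np.tile(ramp.reshape(8, 1), (1, 8))
    return np.round(w_self * concealed_blk + (1.0 - w_self) * neighbor_pred)
```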

For the enhancement of 16 × 16 inter MBs, WBMDC is applied. In WBMDC, OBMA is first applied to find the best MB \( P_{best} \), i.e. the MB most similar to the lost MB over all reference frames. While applying OBMA, we also obtain the most similar MB in each reference frame of the lost MB in all views. The best MB \( P_{best} \) is then refined to be more similar to the lost MB by combining it with the similar MBs from the other references, multiplied by the weights \( \omega_{1} \), \( \omega_{2} \) and \( \omega_{3} \), according to Eq. (1):

$$ P_{lost}(i,j) = \frac{\omega_{1}\,P_{best}(i,j) + \omega_{2}\,P_{replac}^{1}(i,j) + \sum \omega_{3}\,P_{replac}^{2,3}(i,j)}{\omega_{1} + \omega_{2} + \sum \omega_{3}} $$
(1)

where

\( P_{replac}^{1} \) denotes the best MB pixels in the other temporal reference frame if \( P_{best} \) lies in the same view, or in the other disparity reference frame if \( P_{best} \) lies in another view.

\( P_{replac}^{2,3} \) denotes the best MB pixels in the other two disparity reference frames if \( P_{best} \) lies in the same view, or in the other temporal reference frames if \( P_{best} \) lies in another view.

The weights \( \omega_{1} \), \( \omega_{2} \) and \( \omega_{3} \) are set to 5, 4 and 3, respectively, since these values gave the best results.
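Eq. (1) amounts to a weighted average of the matched MBs. A minimal sketch with the weights quoted above is given below, assuming the MBs are passed as floating-point NumPy arrays.

```python
import numpy as np

W1, W2, W3 = 5.0, 4.0, 3.0   # weights from the paper

def wbmdc_combine(p_best, p_replac1, p_replac23):
    """Eq. (1): weighted combination of the best matched MB with the best MBs
    found in the other reference frames.  p_replac23 is a list holding the two
    remaining reference MBs, each weighted by W3."""
    num = W1 * p_best + W2 * p_replac1 + W3 * sum(p_replac23)
    den = W1 + W2 + W3 * len(p_replac23)
    return np.round(num / den)
```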

4 Simulation and Results

The proposed algorithm is implemented in the joint MVC reference software [6] and applied to 50 frames of size 640 × 480 of the Ballroom sequence at a frame rate of 30 Hz. An error mask is applied to the multi-view stream to obtain a Loss Rate (LR) of about 22 % of all MBs, as shown in Fig. 6a. The error mask ratio is then decreased to 15, 11 and 5 % to test the proposed algorithm, assuming the locations of the lost MBs are known.
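The masks used in the experiments are fixed test inputs; purely as an illustration, a random mask at a given loss rate could be generated as follows (function name and seeding are assumptions).

```python
import numpy as np

def make_error_mask(mb_rows, mb_cols, loss_rate, seed=0):
    """Randomly mark MBs as lost so that roughly `loss_rate` of all MBs are
    dropped (e.g. 0.22, 0.15, 0.11 or 0.05 as in the experiments)."""
    rng = np.random.default_rng(seed)
    return rng.random((mb_rows, mb_cols)) < loss_rate
```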

Table 1 shows the PSNR results for the Ballroom sequence in five cases: no errors, errors without applying EC, normal OBMA, the proposed algorithm without adaptive candidate MBs, and the full proposed algorithm (with adaptive candidate MBs).

Table 1 PSNR (dB) results for ballroom test sequence

By applying the full proposed algorithm, the PSNR of the Ballroom sequence is improved by up to 1.1 dB compared with normal OBMA at an LR of 22 %. As seen from Table 1, the PSNR improvement increases at higher loss rates.

Using adaptive candidate MBs according to the view of the current reference frame also saves processing time, because specific candidate MBs are used for each reference frame instead of a large collection of candidate neighboring MBs for all reference frames [2].

In Fig. 5, the PSNR of all frames concealed by the full proposed algorithm is higher than that of frames concealed by normal OBMA and slightly higher than that of the proposed algorithm with fixed candidate MBs; moreover, the visual quality of the full algorithm is noticeably better, as shown in Fig. 6. The yellow dashed arrows in Fig. 6d indicate the improvements in visual detail of the proposed algorithm compared with normal OBMA in Fig. 6b. The red arrows in Fig. 6d indicate the visual improvements obtained by using adaptive candidate MBs for each reference frame compared with the proposed OBMA without adaptive candidate MBs in Fig. 6c.

Fig. 5 PSNR of the Ballroom frames

Fig. 6 Subjective quality comparison for frame 26 of the Ballroom sequence. a Corrupted frame. b Concealed by normal OBMA. c Concealed by the proposed OBMA without adaptive candidate MBs. d Concealed by the full algorithm

These improvements are obtained because the lost MB is replaced by an MB of the same size and because better candidate MBs, with a higher probability of matching the lost MB, are used.

5 Conclusion

In this paper an EC algorithm is proposed to conceal lost MBs in MVC. The algorithm operates dynamically according to the lost MB size and adaptively selects the candidate MBs according to the view of the reference frame currently being processed. Two enhancement algorithms are then proposed, chosen according to the lost MB size, to increase the total PSNR. The proposed algorithm provides a considerably better gain in objective and subjective quality, with lower processing time, compared with the normal OBMA method in which all possible candidate MBs are used for all reference frames.