Abstract
This paper presents a novel real-time video streaming method for distributed wireless image-sensing platforms. It consists of (1) millimeter-wave (mmW)-based multi-hop routing optimization for real-time video streaming and (2) wireless image-sensing platforms that use high-efficiency video coding (HEVC). mmW communication is a promising technology for increasing capacity in next-generation wireless systems. However, because mmW signals neither travel long distances well nor survive in non-line-of-sight environments, mmW networks require multi-hop relaying. This paper therefore focuses on maximizing the video transmission quality of service (QoS), which makes the optimization problem different from conventional sum-rate maximization. Specifically, this paper develops an algorithm that optimizes the sum of the QoS of the individual wireless transmissions of different video streams, subject to per-stream constraints (i.e., minimum requirements for each stream). Experimental results show that the proposed approach performs 32 % better, with a 1.84 dB gain in Y-peak signal-to-noise ratio, than the widely used max–min flow routing that is generally considered in QoS-sensitive video streaming applications.
1 Introduction
Real-time image-sensing platforms using multiple cameras require very high-rate data transmission, such as mmW links, as well as video-level QoS adaptation.
mmW wireless data transmission (i.e., transmission at carrier frequencies between 30 and 300 GHz) has recently been considered one of the most important approaches for increasing network capacity in next-generation wireless networks [8, 17]. In particular, the 28, 38, and 60 GHz mmW channels are considered for designing next-generation wireless networking systems [1, 16, 23], and the feasibility of mmW wireless communications has recently been demonstrated [17]. The benefit of mmW wireless communication links is the ultra-wide bandwidth (UWB) available at these frequencies. This makes them particularly suitable for wireless UWB transmission of video streams, which is already the major source of wireless network traffic [4]. For the coming 5G networks, the mmW frequencies (e.g., 28, 38, and 60 GHz) are actively considered because they can provide high-bandwidth communication links that meet multi-gigabit-per-second (multi-Gbps) target bitrates. For example, Samsung Electronics and Intel Corporation are developing 5G systems using 28 and 38 GHz frequencies, respectively.
Nowadays, distributed image-sensing platforms using cameras have started to attract considerable attention [9], and the network forwards data from wireless camera/video platforms to a final destination in a multi-hop routing manner. For the multi-hop routing, mmW links are considered in this paper. However, mmW wireless propagation channels suffer from high attenuation, since (1) the free-space loss is high and (2) the additional path loss in non-line-of-sight (NLOS) situations is even higher. Thus, cameras cannot directly communicate with other cameras at longer distances. Consequently, multi-hop routing/relaying is required.
As illustrated in Fig. 1, if we use multi-hop-routing-capable mobile cameras for data forwarding from end-hop cameras (those performing visual data perception) to a final destination, we can address the two main issues, i.e., (1) extending the communication range with intermediate cameras and (2) finding a route through intermediate cameras to overcome NLOS. The key question is then how to find appropriate routes from the transmitters to the receivers. To increase the probability of finding a suitable route, we place dedicated relays in the network; such preplanned relays are common in cellular and mobile access networks [24].
Most existing routing techniques have concentrated on either maximizing the total network capacity or using max–min flow routing, i.e., maximizing the minimum capacity of the flow links [15]. However, max–min flow routing is not always appropriate for QoS-sensitive video streaming applications when differentiated quality metrics and weights are considered for the individual flows. In other words, the criterion does not reflect the quality criterion for video streams, where there is a monotonic but nonlinear and saturating relationship between the achieved data rates of the sessions and the perceived qualities. As we have shown for a special case (two-hop relaying), taking video quality as the criterion leads to significantly different routes and significantly improved overall quality [10–12]. Since max–min flow routing outperforms throughput-maximization algorithms in video-related applications [15], a quality optimization (as we propose) that outperforms max–min flow routing will obviously also outperform throughput maximization.
Our objective is as follows: when K cameras want to construct single-hop or multi-hop device-to-device (D2D) flows over mmW links (i.e., K source cameras and K destination cameras, forming K video sessions), find single-hop or multi-hop flow paths for the video sessions that maximize the sum of the differentiated video QoS values.
2 Related and previous work
This section reviews two main bodies of work: (1) our previous work on real-time video streaming over wireless sensor platforms and (2) the video coding standards HEVC and scalable HEVC (SHVC), with their parallel processing tools for real-time image processing.
2.1 Our previous work
In [15], the fundamental concepts and novel features of max–min flow routing are studied in depth. The algorithm proposed in [8] also addresses per-flow quality-maximum multi-hop routing, but it differs from the algorithm proposed in this paper in several ways:
-
The algorithm proposed in [8] does not consider 60 GHz radio technology, which is the most popular of the mmW radio access technologies. Once 60 GHz radios are considered, their own path loss models and oxygen and rain attenuation parameters must be taken into account. Therefore, many formulations must be updated to accommodate 60 GHz radios.
-
In the definition of the quality functions, [8] has no concept of a minimum flow amount \(f_{\rm min }\). The quality formulation in [8] therefore assumes that there is a quality gain even for a very small information flow. However, in real-time video streaming with scalable video coding (SVC) and SHVC over dynamic adaptive streaming over HTTP (DASH), a certain minimum bandwidth is required for base-layer transmission; otherwise, no bitstream can be sent at all. This baseline bandwidth for transmitting base layers is the \(f_{\rm min }\) in this paper.
-
This paper contains a performance evaluation based on real-world video streaming. Performance gains can be observed on actual video frames in this paper, whereas [8] does not contain any video- or image-based performance evaluation.
2.2 Video coding standard HEVC for real-time video processing
The proposed system uses the next-generation video coding standard HEVC and its scalable extension, which were standardized in 2013 and 2014, respectively. HEVC was developed with the goal of providing twice the compression efficiency of the previous standard, H.264/AVC (Advanced Video Coding) [20]. After successfully standardizing H.264/AVC, ISO/IEC MPEG and ITU-T VCEG jointly developed the next-generation video standard HEVC. This new standard targets next-generation HDTV displays and IPTV services, addressing the concern of error-resilient streaming in HEVC-based IPTV. As shown in Table 1, compared to H.264/AVC, HEVC includes new features such as extended prediction block sizes via the coding tree unit (CTU) (up to 64\(\times\)64), large transform block sizes (up to 32\(\times\)32), tile and slice picture segmentations for loss resilience and parallelism, sample adaptive offset (SAO), and so on [21] (Fig. 2).
The HEVC parallel processing tools support different picture partition strategies such as tiles and wavefront parallel processing (WPP). Tiles partition a picture along horizontal and vertical boundaries and provide better coding gains than multiple slices. In WPP, a slice is divided into rows of CTUs; the first row is decoded normally, but each additional row can only be decoded once certain decisions have been made in the previous row. WPP lets the entropy encoder use information from the preceding row of CTUs and enables a form of parallel processing that may achieve better compression than tiles [21].
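The WPP dependency determines which CTUs can be processed concurrently. A small sketch of the resulting wavefront schedule, assuming the standard HEVC two-CTU horizontal offset between consecutive rows:

```python
def wpp_schedule(rows, cols):
    """Wavefront parallel processing order: row r starts two CTUs behind
    row r-1, so CTU (r, c) becomes available at time step t = 2*r + c.
    Returns, per time step, the list of CTUs decodable in parallel."""
    steps = {}
    for r in range(rows):
        for c in range(cols):
            t = 2 * r + c                     # two-CTU lag per row
            steps.setdefault(t, []).append((r, c))
    return [steps[t] for t in sorted(steps)]

# For a 3x4 CTU grid, multiple rows decode concurrently once the wave rolls:
sched = wpp_schedule(3, 4)
# sched[0] holds only (0, 0); by step 4, CTUs from rows 1 and 2 run together
```

The schedule shows why WPP's parallelism ramps up and down: only one CTU is active at the start and end of the picture, while the middle of the wave keeps up to `rows` CTUs busy at once.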
SHVC, as an extension of HEVC, can support multiple resolutions, frame rates, video qualities, and coding standards within a single bitstream, because the bitstream consists of multiple layers. It is designed to have low complexity for the enhancement layers (ELs) by adding the reconstructed base-layer (BL) picture to the reference picture lists of the EL [22]. In addition, SHVC uses multi-loop decoding to keep the decoder chipset simple, whereas scalable video coding (SVC) uses single-loop decoding. SHVC also provides standards scalability by supporting an AVC BL with an HEVC EL. Thus, SHVC suits UHD TV services that must support legacy HDTVs as well as simple bitstream-level rate control (layer switching).
3 A reference system architecture
The network under consideration consists of \(\left| {V}_{s} \cup {V}_{d} \cup {V}_{r} \cup {V}_{i}\right|\) cameras and relays, where \({V}_{s}\), \({V}_{d}\), \({V}_{r}\), and \({V}_{i}\) stand for the sets of source cameras, destination cameras, relays, and intermediate cameras, respectively. Notice that
where V is a set of image-sensing cameras and relays in the given network.
The deployed cameras and relays are equipped with 28, 38, or 60 GHz antennas and RF front-ends, as shown in Fig. 3. The real-time image-sensing platform consists of an image sensor with an HEVC encoder (i.e., a camera), a millimeter-wave transmitter, and a QoS manager with a rate adaptation model and the proposed SQM scheme. Although video rate adaptation over WLAN has been studied in [18, 19], adaptation over mmW networks must consider additional limitations and options. Since the cameras have space limitations (especially mobile cell phones), suppose that they are equipped with a single mmW antenna; cameras are therefore capable of transmitting/receiving only one video stream at a time. The relays are less space-restricted, and the use of multiple mmW antennas is allowed, i.e., multiple video streams can be handled.
4 Millimeter-wave multi-hop routing with individual QoS consideration
4.1 Problem formulation
The objective function under consideration is as follows:
where \(f_{s_{k}\rightarrow v}^{k}\) is the amount of bits of outgoing flow from source camera \(s_{k}\) to its next-hop camera v which is attributed to the session \(\left( s_{k}, d_{k}\right)\), where \(s_{k}\in {V}_{s}, v \in {V}\), and \(d_{k}\) is the destination camera of the flow. Therefore, the sum of the qualities of all given flows can be maximized by (2). According to the flow constraint in Sect. 4.1.3, the amount of outgoing traffic from a source camera is eventually equal to the amount of incoming traffic at the destination camera. Moreover, \(q_{k}\left( \cdot \right)\) denotes the function which represents the QoS depending on the given flow amounts for the session \(\left( s_{k}, d_{k}\right)\). The details of this function \(q_{k}\left( \cdot \right)\) are explained in Sect. 4.3.
4.1.1 Formulation for cameras
The variable \({L}_{v_{i}\rightarrow v_{j}}\) represents the link connection between \(v_{i}\) and \(v_{j}\) where \(v_{i}, v_{j} \in {V}\), i.e.,
Each source camera has a video stream in order to transmit toward next-hop camera, and each destination camera has a video stream to receive from a preceding camera, i.e.,
Each intermediate camera is able to receive a video stream from one camera (because it has only one mmW antenna) and transmit data to one camera (because it also has only one mmW antenna):
where \(\forall v_{i}\in {V}\).
If an intermediate camera receives a video stream, it will transmit the video stream, and vice versa, i.e.,
where \(\forall p_{i}\in {V}_{i}\) and \(v_{j},v_{k}\in {V}\).
4.1.2 Formulation for relays
Suppose that each relay has \(N_{\text {antenna}}\) number of antennas. Similar to the formulations in (6) and (7), the numbers of incoming and outgoing flows in each relay are limited by \(N_{\text {antenna}}\), i.e.,
where \(r_{i}\in {V}_{r}\) and \(v_{j}, v_{k}\in {V}\).
4.1.3 Formulation for information flows
The amount of incoming traffic should be equal to the amount of outgoing traffic for each session in the given intermediate cameras and relays, i.e.,
where \(p_{i}\in {V}_{i}\), \(r_{i}\in {V}_{r}\), \(v_{l}\in {V}\), and \(s_{k}\in {V}_{s}\). Moreover, the flow amounts of each session are limited by its wireless channel capacity, i.e.,
where \(v_{i}, v_{j}\in {V}, s_{k}\in {V}_{s}\), and \({C}_{(v_{i},v_{j})}\) denotes the channel capacity of the wireless link between \(v_{i}\) and \(v_{j}\). The channel capacity can be computed based on the link budget. For the link budget calculation, Shannon's equation is used under the assumption of optimal coding and modulation schemes [14]:
where \(P_{\text {signal}}\) and \(P_{\text {noise}}\) stand for the signal power and noise power in a linear scale and BW stands for the wireless channel bandwidth which is 800 MHz at 38 GHz and 2160 MHz at 60 GHz [11, 16].
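The capacity computation above follows directly from Shannon's equation. A minimal sketch; the 20 dB SNR used in the example is an illustrative assumption, not a figure from the paper:

```python
import math

def shannon_capacity_bps(p_signal_w: float, p_noise_w: float, bw_hz: float) -> float:
    """Shannon capacity C = BW * log2(1 + SNR), with powers in linear scale."""
    return bw_hz * math.log2(1.0 + p_signal_w / p_noise_w)

# A 2160 MHz channel at 60 GHz with a 20 dB SNR (i.e., SNR = 100 in linear scale)
# already supports a multi-Gbps target bitrate:
c = shannon_capacity_bps(100.0, 1.0, 2160e6)
```

With the 800 MHz bandwidth of the 38 GHz band, the same SNR yields proportionally lower capacity, which is why the 60 GHz band is attractive despite its harsher propagation.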
The signal power, \(P_{\text {signal}}\), is obtained as:
where EIRP is limited to 47 dBm in 38 GHz and 43 dBm in 60 GHz bands [11, 16]. \(G_{\text {Rx}}\) stands for the receiver antenna gain, and the values are in Table 2. \(PL\left( d_{\left( v_{i},v_{j}\right) }\right)\) is path loss depending on the distance of the wireless link between \(v_{i}\) and \(v_{j}\); the models for mmW channels are obtained in [16] for 38 GHz and in [1] for 28 GHz as follows:
where \(d_{\left( v_{i},v_{j}\right) }\), \(d_{0}\), \(\lambda\), n, and \({X}_{\sigma }\) stand for the distance between \(v_{i}\) and \(v_{j}\), the close-in free-space reference distance (5.0 m in [1, 16]), the wavelength (7.78 mm at 38 GHz [16] and 10.71 mm at 28 GHz [1]), the average path loss coefficient over distance and all pointing angles, and the shadowing random variable, modeled as a Gaussian random variable with zero mean and standard deviation \(\sigma\) [14]. The n and \(\sigma\) values at 38 and 28 GHz are measured in [1, 16] and summarized in Table 2.
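A sketch of this close-in reference-distance model, assuming the standard form (free-space loss at \(d_0\), a \(10n\log_{10}\) distance term, and Gaussian shadowing); the n = 2.3 value in the example is purely illustrative, since Table 2 is not reproduced here:

```python
import math
import random

def path_loss_db(d_m, wavelength_m, n, sigma_db, d0_m=5.0, rng=None):
    """Close-in reference-distance path loss model: free-space loss at the
    reference distance d0, plus 10*n*log10(d/d0), plus a zero-mean Gaussian
    shadowing term X_sigma with standard deviation sigma_db."""
    fspl_d0 = 20.0 * math.log10(4.0 * math.pi * d0_m / wavelength_m)
    shadowing = (rng or random).gauss(0.0, sigma_db)
    return fspl_d0 + 10.0 * n * math.log10(d_m / d0_m) + shadowing

# 38 GHz (wavelength 7.78 mm): deterministic loss at 100 m with a
# hypothetical n = 2.3 and shadowing disabled (sigma = 0)
pl = path_loss_db(100.0, 7.78e-3, 2.3, 0.0)
```

Passing a seeded `random.Random` instance as `rng` makes the shadowing realizations reproducible across simulation runs.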
For 60 GHz path loss, a standardized 60 GHz IEEE 802.11ad path loss model is used as follows [13]:
in a dB scale, where \(A=32.5\) dB is a value specific to the selected antenna type and beamforming algorithm; it depends on the antenna beamwidth, but over the considered beam range from \(60^{\circ }\) to \(10^{\circ }\) the variation is very small (less than 0.1 dB). In (17), n denotes the path loss coefficient, set to 2, and f is the carrier frequency in GHz, set to 60. Note that the LOS path loss model includes no shadowing effect, as presented in [13].
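With the constants fixed as above, the 802.11ad LOS model reduces to a one-line computation:

```python
import math

def path_loss_11ad_los_db(d_m: float, f_ghz: float = 60.0,
                          a_db: float = 32.5, n: float = 2.0) -> float:
    """IEEE 802.11ad LOS path loss: PL(d) = A + 20*log10(f_GHz) + 10*n*log10(d_m),
    with A = 32.5 dB, n = 2, and no shadowing term."""
    return a_db + 20.0 * math.log10(f_ghz) + 10.0 * n * math.log10(d_m)

pl_10m = path_loss_11ad_los_db(10.0)   # 32.5 + 35.56 + 20.0 ≈ 88.06 dB
```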
In (15), \(O\left( d_{\left( v_{i},v_{j}\right) }\right)\) and \(R\left( d_{\left( v_{i},v_{j}\right) }\right)\) stand for the oxygen and rain attenuation factors, which apply only at 60 GHz (they have no effect in the 28 and 38 GHz mmW propagation channels).
In (15), \(O\left( d_{\left( v_{i},v_{j}\right) }\right)\) captures the oxygen absorption loss, which can be as high as 15 dB/km [7]. For an oxygen absorption of 15 dB/km,
In (15), \(R\left( d_{\left( v_{i},v_{j}\right) }\right)\) represents the rain attenuation, which varies with the rain rates of individual regions. According to the region segmentation by rain rate of the International Telecommunication Union (ITU), assume that Europe is region E and that the US west coast (i.e., California, Oregon, and Washington) is region D [6]. The rain rate is then about 6 mm/h in region E (Europe) for 0.1 % of the year (i.e., 99.9 % availability), and about 8 mm/h in region D (California, Oregon, and Washington) for 0.1 % of the year. Finally, as can be derived from [5], the rain attenuation factor is around 2.8 dB/km in Europe and around 3 dB/km in the USA. For a rain attenuation of 2.8 dB/km in Europe and 3 dB/km in the USA,
where \(\alpha _{r}=2.8\) in Europe and \(\alpha _{r}=3\) in USA.
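Both atmospheric terms scale linearly with link distance. A small sketch combining the per-kilometer factors quoted above for a 60 GHz link:

```python
def atmospheric_attenuation_db(d_m: float, oxygen_db_per_km: float = 15.0,
                               rain_db_per_km: float = 2.8) -> float:
    """Total 60 GHz atmospheric loss: oxygen absorption (15 dB/km) plus rain
    attenuation (2.8 dB/km in Europe, 3.0 dB/km on the US west coast),
    scaled by the link distance in kilometers."""
    d_km = d_m / 1000.0
    return (oxygen_db_per_km + rain_db_per_km) * d_km

# A 200 m 60 GHz link in Europe loses (15 + 2.8) * 0.2 = 3.56 dB
# to oxygen and rain combined:
loss = atmospheric_attenuation_db(200.0)
```

At typical multi-hop link distances of a few hundred meters, these terms add only a few dB, which is one reason multi-hop relaying with short hops is attractive at 60 GHz.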
The noise power, \(P_{\text {noise,dB}}\) is computed as:
where \(k_{B}T_{e}\) is noise power spectral density, i.e.,
as explained in [14] and \(F_{N}\) is the receiver noise figure, set to 6 dB.
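A sketch of the noise budget, assuming the usual thermal noise density \(k_{B}T_{e} \approx -174\) dBm/Hz at 290 K (the exact \(T_{e}\) from [14] is not restated here, so this value is an assumption):

```python
import math

def noise_power_dbm(bw_hz: float, noise_figure_db: float = 6.0,
                    noise_psd_dbm_per_hz: float = -174.0) -> float:
    """Receiver noise power: k_B*T_e density (assumed -174 dBm/Hz at 290 K)
    integrated over the bandwidth, plus the receiver noise figure F_N."""
    return noise_psd_dbm_per_hz + 10.0 * math.log10(bw_hz) + noise_figure_db

# 2160 MHz channel at 60 GHz: -174 + 93.34 + 6 ≈ -74.7 dBm
n = noise_power_dbm(2160e6)
```

The wider 60 GHz channel thus carries about 4.3 dB more noise power than the 800 MHz channel at 38 GHz, which partially offsets its bandwidth advantage in the Shannon capacity.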
4.2 Mathematical formulation
As discussed in Sect. 4.1, the objective function under consideration is (2), i.e.,
and the constraints under consideration are (3)–(13), i.e.,
Now, the set of \({L}_{v_{i}\rightarrow v_{j}}=\{0,1\}, \forall i, \forall j\) that optimizes (2) must be obtained. For this purpose, we first observe that this formulation is a mixed-integer disciplined convex program in which the integers are 0–1 binary; branch-and-bound is therefore used to obtain optimal solutions [3].
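The full mixed-integer disciplined convex program is beyond a short example, so the following is only a generic 0-1 branch-and-bound skeleton on a stand-in knapsack problem, illustrating the branch/prune mechanics used on binary variables like \({L}_{v_{i}\rightarrow v_{j}}\) (not the paper's actual formulation):

```python
def branch_and_bound_01(values, weights, capacity):
    """Minimal 0-1 branch-and-bound: branch on each binary variable and prune
    with an optimistic bound (sum of all remaining values), analogous to the
    convex relaxation bound used in mixed-integer convex solvers [3]."""
    n = len(values)
    best = 0
    def recurse(i, value, weight):
        nonlocal best
        if weight > capacity:
            return                              # infeasible branch: prune
        if value > best:
            best = value                        # new incumbent
        if i == n:
            return
        if value + sum(values[i:]) <= best:
            return                              # bound cannot beat incumbent
        recurse(i + 1, value + values[i], weight + weights[i])  # variable = 1
        recurse(i + 1, value, weight)                           # variable = 0
    recurse(0, 0, 0)
    return best
```

In the paper's setting the bound would come from solving the convex relaxation of (2) with the binary link variables relaxed to [0, 1], rather than from a simple value sum.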
4.3 Individual QoS support (IQS)
The QoS of each flow is formulated as a function of the amount of flow traffic. If the QoS level of the traffic in \(\left( s_{k},d_{k}\right)\) increases logarithmically and monotonically with the flow amount up to \(f_{\max }^{k}\), the corresponding quality function is defined as follows:
as represented in Fig. 4 and [11]. Each information flow has its own \(f_{\max }^{k}\), and the traffic in the flow attains full quality from this flow amount onward in (33). Moreover, when \(f_{v_{i}\rightarrow v_{j}}^{k} \ge f_{\max }^{k}\), the QoS function returns \(q_{k}\left( f_{\max }^{k}\right)\); allocating a larger flow to \(\left( s_{k},d_{k}\right)\) therefore does not increase the video quality any further. Last, \(w_{k}\) in (33) is the weight of session flow \(\left( s_{k},d_{k}\right)\), i.e., differentiated weights can be assigned to each individual flow.
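The exact functional form of \(q_{k}(\cdot)\) follows Fig. 4 and [11], which are not reproduced here; the following normalized logarithm is therefore only an illustrative stand-in with the stated properties (zero below \(f_{\rm min}^{k}\), logarithmic growth, saturation at \(f_{\max }^{k}\), weighting by \(w_{k}\)):

```python
import math

def quality(f_bps, w_k, f_min, f_max):
    """Illustrative saturating quality function: zero below f_min (no base
    layer deliverable), logarithmic and monotonic up to f_max, then flat
    at the full session weight w_k. Not the exact form from Fig. 4 / [11]."""
    if f_bps < f_min:
        return 0.0                       # below base-layer bandwidth: no quality
    f = min(f_bps, f_max)                # saturate: extra flow adds nothing
    return w_k * math.log(1.0 + f - f_min) / math.log(1.0 + f_max - f_min)
```

Any concave, monotonic, saturating curve with these boundary values would exhibit the same qualitative routing behavior: marginal quality per bit shrinks as a session approaches \(f_{\max }^{k}\), so the optimizer diverts capacity to under-served sessions.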
5 Performance evaluation
This section describes the experimental environments of the mmW-based multi-hop simulations as well as the HEVC video performance comparison.
5.1 Simulation results with numerical data
For the performance evaluation, a link failure probability \(p_i\) is defined, and this section reports our performance results as a function of \(p_i\) [8]. Since the additional path loss of mmW channels is higher than that of non-mmW channels, especially in NLOS and deep-fading situations, the link failure probability is introduced for a realistic performance evaluation [8]. As the evaluation metric, the average throughput as a function of the link failure probability is used. For this purpose, we compute the throughput for all possible combinations of active/inactive links, weighted by the probability of each combination occurring [8]. As a comparison benchmark, the average throughput of max–min flow (MaxMin) multi-hop routing is also computed by averaging over the various link failures.
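The averaging procedure enumerates every active/failed link combination and weights the metric by the probability of that combination. A minimal sketch with a toy metric (the real metric would re-run the routing optimization on each surviving topology):

```python
from itertools import product

def expected_metric(links, p_fail, metric):
    """Average a routing metric over all 2^|links| active/failed link
    combinations, each weighted by its probability under independent
    per-link failures with probability p_fail."""
    total = 0.0
    for states in product([True, False], repeat=len(links)):
        prob = 1.0
        for up in states:
            prob *= (1.0 - p_fail) if up else p_fail
        active = [link for link, up in zip(links, states) if up]
        total += prob * metric(active)
    return total

# Toy metric: throughput proportional to the number of surviving links.
# With 3 links and p_fail = 0.2, the expectation is 3 * 0.8 = 2.4.
ex = expected_metric(['a', 'b', 'c'], 0.2, lambda active: len(active))
```

Because the enumeration is exponential in the number of links, this exact averaging is only practical for the small topologies used in the evaluation; larger networks would require Monte Carlo sampling of link states instead.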
For the simulation-based performance evaluation, the number of cameras is set to 20, the number of relays to 4, the number of antennas per relay to \(N_{\text {antenna}} = 4\), and the number of sessions to 5; the receiver antenna gain is set to 25 dBi for relays and 13.3 dBi for cameras.
To evaluate our protocol under link failures, we set the same \(p_{k}\) value for all mmW wireless links. Our simulation observes the average throughput while \(p_{k}\) increases from 0.1 (a value of 0 means all links are fully connected) up to 0.8 (every link is broken when \(p_{k}=1.0\)) in steps of 0.1.
In both cases, the five given sessions have the following settings:
-
Session \(\left( s_{1},d_{1}\right)\): \(w_{1}=2\), nonlinear quality function in (33), illustrated in Fig. 4; \(f_{\rm min}^{s_{1}}= 0\) and \(f_{\max }^{s_{1}}=1.5\times 10^{3}\) bit/s.
-
Session \(\left( s_{2},d_{2}\right)\): \(w_{2}=6\), nonlinear quality function in (33), illustrated in Fig. 4; \(f_{\rm min }^{s_{2}}= 0\) and \(f_{\max }^{s_{2}}=1 \times 10^{9}\) bit/s.
-
Session \(\left( s_{3},d_{3}\right)\): \(w_{3}=2\), nonlinear quality function in (33), illustrated in Fig. 4; \(f_{\rm min }^{s_{3}}= 0\) and \(f_{\max }^{s_{3}}=1.5 \times 10^{6}\) bit/s.
-
Session \(\left( s_{4},d_{4}\right)\): \(w_{4}=6\), nonlinear quality function in (33), illustrated in Fig. 4; \(f_{\rm min }^{s_{4}}= 0\) and \(f_{\max }^{s_{4}}= 6 \times 10^{9}\) bit/s.
-
Session \(\left( s_{5},d_{5}\right)\): \(w_{5}=2\), nonlinear quality function in (33), illustrated in Fig. 4; \(f_{\rm min }^{s_{5}}= 0\) and \(f_{\max }^{s_{5}}= 3 \times 10^{9}\) bit/s.
Now, the simulation results are presented in Table 3 along with those of the benchmark, max–min flow routing (MaxMin). As shown in Table 3, our proposed IQS generally outperforms MaxMin. We also note that sessions 1 and 3 achieve full quality under both IQS and MaxMin when \(p_{k}\le 0.7\) because of their low \(f_{\max }^{s_{1}}\) and \(f_{\max }^{s_{3}}\) settings. Under IQS, sessions 2 and 4 achieve full quality when \(p_{k}\le 0.2\). However, MaxMin cannot achieve full quality for sessions 2 and 4 even when \(p_{k} = 0\). Last, the improvement in average achieved QoS value is 5.79875 out of \(2+6+2+6+2=18\), i.e., 32.22 %.
5.2 Visual quality improvement under the video coding standard common test condition
In a video streaming system, the numerical gain of the proposed sum quality maximization (SQM) scheme can influence the video quality in both objective and subjective quality metrics. To verify the benefit of the proposed scheme, this study evaluates the visual quality gain with the reference software (HM ver. 15.0) of the next-generation video coding standard HEVC [21].
Following the video coding standard common test conditions (CTC) [2], our encoding and decoding experiments are conducted on a subset of the CTC testing points on a multi-core 64-bit Windows workstation. The test sequence PeopleOnStreet (resolution 3840\(\times\)2160 pixels) from the joint collaborative team on video coding (JCT-VC) is used, and two video coding structures, random access (RA) and low-delay P (LDP), are used with four quantization parameters (22, 27, 32, 37), as shown in Table 4.
Figure 5 illustrates the coding structures of RA and LDP. In the example, RA has intra-pictures (I pictures) every 16 pictures, and the number in each rectangle is the picture order count (POC), which represents the frame number. The RA coding structure provides very high compression performance, and the decoder can begin decoding without decoding all preceding bitstreams; RA is therefore generally used for seek operations in video streaming systems. LDP has a first I picture followed by P pictures. It does not suffer from the picture-reordering issue within the group of pictures (GOP) that causes decoding delays, and it is therefore used in video conferencing systems.
Table 5 shows the experimental results, and the gains in objective video quality vary from 1.49 to 2.21 dB in Y-PSNR (maximum BD-rate gain is 4.2 %).
Figure 6 shows the subjective visual quality comparison with reconstructed and enlarged image sections of PeopleOnStreet (frame number 150). As shown in the figure, there are several noticeable visual differences between the proposed SQM and the reference max–min scheme, in the areas of the legs, shadows, text, and heads. Thus, the advantage of the proposed SQM scheme is verified in subjective and objective video quality metrics as well as in the QoS values measured in the previous section.
6 Conclusions
To cope with the short communication range and non-line-of-sight problems of mmW wireless propagation channels, a QoS-aware multi-hop camera-to-camera routing algorithm with intermediate cameras and relays is desirable when designing next-generation mmW cellular and mobile access networks. This paper proposes a novel real-time video streaming method for distributed wireless image-sensing platforms that uses a multi-hop routing protocol assisted by multi-antenna relays. Moreover, QoS awareness over mmW links is also considered to support real-time wireless video streaming applications.
Our proposed method (SQM) takes into account differentiated individual QoS metrics for the individual video stream flows, i.e., it can achieve better performance than max–min routing, which is generally used for QoS-sensitive streaming applications. The simulation-based performance evaluation achieved an improvement of around 32.22 % over max–min routing, and the channel gain guarantees an average video quality improvement of 1.84 dB in Y-PSNR.
References
Azar, Y., Wong, G.N., Wang, K., Mayzus, R., Schulz, J.K., Zhao, H., Gutierrez, F., Hwang, D., Rappaport, T.S.: 28 GHz propagation measurements for outdoor cellular communications using steerable beam antennas in New York City. In: 2013 IEEE International Conference on Communications (ICC), pp. 5143–5147 (2013)
Bossen, F.: Common test conditions and software reference configurations. Joint Collaborative Team on Video Coding (JCT-VC), JCTVC-H1100 (2012)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Cisco Systems, San Jose, CA: Cisco visual networking index: global mobile data traffic forecast update, 2012–2017. Cisco Public Information (2013)
Federal Communications Commission: Millimeter wave propagation: spectrum management implications. Bulletin 70, 1–24 (1997)
ITU-R Recommendation P.837-1: Characteristics of precipitation for propagation modeling (1994)
ITU-R Recommendation P.676-10: Attenuation by atmospheric gases (2013)
Kim, J., Molisch, A.F.: Quality-aware millimeter-wave device-to-device multi-hop routing for 5G cellular networks. In: 2014 IEEE International Conference on Communications (ICC), pp. 5251–5256 (2014)
Kim, J., Ryu, E.S.: Quality analysis of massive high-definition video streaming in two-tiered embedded camera-sensing systems. Int. J. Distrib. Sens. Netw. 2014 (2014)
Kim, J., Tian, Y., Molisch, A.F., Mangold, S.: Joint optimization of HD video coding rates and unicast flow control for IEEE 802.11ad relaying. In: 2011 IEEE 22nd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), pp. 1109–1113 (2011)
Kim, J., Tian, Y., Mangold, S., Molisch, A.F.: Joint scalable coding and routing for 60 GHz real-time live HD video streaming applications. IEEE Trans. Broadcast. 59(3), 500–512 (2013a)
Kim, J., Tian, Y., Mangold, S., Molisch, A.F.: Quality-aware coding and relaying for 60 GHz real-time wireless video broadcasting. In: 2013 IEEE International Conference on Communications (ICC), pp. 5148–5152 (2013b)
Maltsev, A., Perahia, E., Maslennikov, R., Lomayev, A., Khoryaev, A., Sevastyanov, A.: Path loss model development for TGad channel models. doc: IEEE 802-11 (2009)
Molisch, A.F.: Wireless Communications. Wiley, New York (2007)
Radunovic, B., Boudec, J.Y.L.: A unified framework for max–min and min–max fairness with applications. IEEE/ACM Trans. Netw. 15(5), 1073–1083 (2007)
Rappaport, T.S., Gutierrez, F., Ben-Dor, E., Murdock, J.N., Qiao, Y., Tamir, J.I.: Broadband millimeter-wave propagation measurements and models using adaptive-beam antennas for outdoor urban cellular communications. IEEE Trans. Antennas Propag. 61(4), 1850–1859 (2013)
Roh, W., Seol, J.Y., Park, J., Lee, B., Lee, J., Kim, Y., Cho, J., Cheun, K., Aryanfar, F.: Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results. IEEE Commun. Mag. 52(2), 106–113 (2014)
Ryu, E.S., Jayant, N.: Home gateway for three-screen TV using H.264 SVC and raptor FEC. IEEE Trans. Consum. Electron. 57(4), 1652–1660 (2011)
Ryu, E.S., Kim, J.: Error concealment mode signaling for robust mobile video transmission. AEU-Int. J. Electron. Commun. 69(7), 1070–1073 (2015)
Sullivan, G.J., Topiwala, P.N., Luthra, A.: The H.264/AVC advanced video coding standard: overview and introduction to the fidelity range extensions. In: The SPIE 49th Annual Meeting. International Society for Optics and Photonics, pp. 454–474 (2004)
Sullivan, G.J., Ohm, J.R., Han, W.J., Wiegand, T.: Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22(12), 1649–1668 (2012)
Ye, Y., Andrivon, P.: The scalable extensions of HEVC for ultra-high-definition video delivery. MultiMed. IEEE 21(3), 58–64 (2014)
Zhao, H., Mayzus, R., Sun, S., Samimi, M., Schulz, J.K., Azar, Y., Wang, K., Wong, G.N., Gutierrez, F., Rappaport, T.S.: 28 GHz millimeter wave cellular communication measurements for reflection and penetration loss in and around buildings in New York city. In: 2013 IEEE International Conference on Communications (ICC), pp. 5163–5167 (2013)
Zheng, K., Fan, B., Ma, Z., Liu, G., Shen, X., Wang, W.: Multihop cellular networks toward LTE-advanced. IEEE Veh. Technol. Mag. 4(3), 40–47 (2009)
Acknowledgments
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) and funded by the Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037743 and 2016R1C1B1015406).
Kim, J., Ryu, ES. QoS optimal real-time video streaming in distributed wireless image-sensing platforms. J Real-Time Image Proc 13, 547–556 (2017). https://doi.org/10.1007/s11554-016-0629-4