
1 Introduction

Cross-layer design and control in optical networks encompass a wide range of techniques, from the service layer down to the physical layer. Developing these interdisciplinary techniques requires significant cooperative research efforts involving users of research and educational networks, application developers, network operators, and network equipment suppliers. Once such techniques are developed, it is essential to build testbeds to evaluate the feasibility and effectiveness of networks employing them, as well as to identify issues that must be addressed before these networks can be deployed in the real world. This chapter mainly presents cross-layer network design and control testbeds developed by Nippon Telegraph and Telephone Corporation (NTT) in close cooperation with other research institutes over the last few years, covering techniques ranging from service-layer aware to physical-layer aware design and control. An exhaustive enumeration of testbeds in this area is beyond the scope of this chapter.

Before describing various testbeds, it may be useful to clarify the classification of cross-layer network design and control techniques used in this chapter. Figure 15.1 illustrates the network design and control model. At the top of this model is the application layer, which includes users and all types of applications such as scientific calculations, visualization, and a computer-supported collaborative work environment. The layer below is the network layer where Layer 2/3 and Layer 1 are explicitly illustrated. In this chapter, cross-layer network design and control testbeds using (a) service-layer aware, (b) multilayer traffic-driven, and (c) physical-layer aware network design and control techniques are presented.

Fig. 15.1 Network design and control model

Using service-layer aware network design and control techniques, users or applications can request network resources that meet their bandwidth and latency requirements through the management plane. Coordinated computer/network resource allocation demonstrations across several network domains are presented in Sect. 2. Traffic-driven network control techniques improve the utilization efficiency of network resources as a whole. Testbed demonstrations of server-layer path provisioning and of coordinated recovery from UNI link failures using the control plane are covered in Sect. 3. Physical-layer aware network design and control techniques overcome the limitations of transparent and translucent optical networks. Experimental demonstrations of impairment-aware management and control planes are introduced in Sect. 4.

In Sect. 5, we turn our attention to the medium-term future. Thanks to recent remarkable advances in digital coherent detection followed by sophisticated Digital Signal Processing (DSP), the spectral efficiency of cutting-edge 100 Gb/s Dense Wavelength Division Multiplexing (DWDM) systems employing dual-polarization quadrature phase-shift keying (QPSK) modulation is reaching a level as high as 2 b/s/Hz. Unfortunately, it is well known that bit loading beyond that of QPSK causes a rapid increase in the optical signal-to-noise ratio (OSNR) penalty, while further increasing the launched signal power results in serious impairment due to nonlinear effects in optical fibers. Therefore, it is becoming widely recognized that we are rapidly approaching the physical capacity limit of conventional optical fiber. Considering these challenges, there is growing anticipation toward the implementation of elastic optical path networks, where the right-sized spectral resource is adaptively allocated to an optical path according to the actual client-layer traffic volume and/or network physical conditions in the fully optical domain. Section 5 provides a brief overview of elastic optical path networks, together with testbed demonstrations of service-layer and physical-layer aware network design and control in elastic optical networks.

2 Service-Layer Aware Network Design and Control Testbeds

2.1 Network Management Models

In most current and future bandwidth-hungry applications, such as computer-supported collaborative work environments, large-scale scientific calculation, and high-definition video conferencing, computing resources are geographically distributed and need to be connected across multiple network domains. In general, each independently managed network domain is controlled using different technologies and operated under different policies. A network resource manager (NRM) manages the current and future available network resources in its domain and reserves those resources for future use. NRMs bridge network domains via a vertical or horizontal interface to harmonize provisioning of an end-to-end connection. There are two models for service-layer aware network management across multiple network domains, namely, the tree model (centralized management approach) and the chain model (distributed management approach), as shown in Fig. 15.2.

Fig. 15.2 Two models for application-driven network resource control across multiple domains. (a) Tree model (centralized management approach), (b) chain model (distributed management approach). RC resource coordinator, NRM network resource manager

In the tree model, a user or application issues a request to a resource coordinator (RC) for an end-to-end connection, and the RC communicates with each NRM to provide the required end-to-end connection. In the chain model, a user or application sends a connection request message to an NRM, and each NRM passes the message along to the adjacent NRM in sequence.

2.2 Tree Model Network Control Testbeds

2.2.1 G-Lambda/EnLIGHTened Application-Driven Network Control Testbed

The G-lambda/EnLIGHTened project is an ambitious joint testbed to demonstrate in-advance reservation of coordinated computing and network resources between Japan and the USA [1]. Japan’s G-lambda project is a joint collaboration among NTT, the National Institute of Advanced Industrial Science and Technology (AIST), KDDI R&D Laboratories (KDDI Labs), and the National Institute of Information and Communications Technology (NICT) that was started in 2004 [2]. The goal of the project is to establish a standard web services interface between an RC and an NRM. The USA’s EnLIGHTened computing project is an interdisciplinary effort among the Microelectronics Center of North Carolina (MCNC), Louisiana State University (LSU), North Carolina State University (NCSU), and several other organizations [3]. The project began in 2005 to design an architectural framework that allows e-science applications to dynamically request any type of grid resource, either in advance or on demand.

The joint testbed consists of multiple domains, with each domain managed by its own NRM, as shown in Fig. 15.3. On the Japan side, the testbed is built on the JGN 2 research network, which includes two Generalized Multi-protocol Label Switching (GMPLS) administrative domains managed by NTT and KDDI Labs, and the Tokyo–Chicago intermediate transmission line. Each domain consists of multiple optical crossconnects (OXCs) and Layer 2/3 devices controlled using GMPLS. Seven sites, each with a computing cluster, are managed by individual computing resource managers (CRMs). The NRMs and CRMs communicate with an RC on a one-by-one basis to reserve computing and optical path resources. On the USA side, the testbed is built on the National Lambda Rail (NLR) and local regional optical networks using four OXCs controlled using GMPLS. Since the RCs, NRMs, and CRMs were developed independently and had different interfaces in the two projects, a set of protocol wrappers was developed to bridge the different interfaces. A distributed numerical simulation and an experimental distributed visualization were conducted using the reserved computing and optical path resources.

Fig. 15.3 G-lambda/EnLIGHTened application-driven network control testbed (tree model). RC resource coordinator, NRM network resource manager, CRM computing resource manager, OXC optical crossconnect, R router/switch, C computer cluster. Adapted from [4], © 2010 IEICE

Figure 15.4a illustrates an example of the NRM architecture. For the interface between the RC and the NRMs, the G-lambda project defined the grid network service–web services interface version 2 (GNS–WSI2), a stateful web service protocol in which state is managed by an entity called an End Point (EP) [4]. The NRM consists of a GNS–WSI2 server module, a scheduler, a path database, and a network control and management module. GNS–WSI2 employs polling-based operations and a two-phase commit sequence, as shown in Fig. 15.4b, in order to enable a generic, non-blocking, and secure in-advance reservation process based on distributed transactions. Upon a reservation request from a user or application, the RC sends resource reservation requests containing site IDs, bandwidth, and the start and end times of service to the NRMs. Each NRM calculates the necessary network resources and confirms their availability. If network resources that meet the request are available during the service period, the NRM returns a “prepared” message. The RC/CRM interface also employs a web service-based two-phase commit protocol similar to that of GNS–WSI2. The RC sends the “commit reservation” message only after all the NRMs and CRMs have returned the “prepared” message.

Fig. 15.4 NRM architecture example and GNS–WSI2 reservation scheme. (a) NRM architecture developed by NTT, (b) GNS–WSI2 sequence diagram defined by the G-lambda project. Adapted from [4], © 2010 IEICE
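To make the two-phase commit sequence concrete, the following Python sketch mimics an RC reserving resources across two domains. It illustrates only the polling-based, prepare-then-commit flow; the class and method names are hypothetical and do not reproduce the actual GNS–WSI2 web service operations.

```python
# Illustrative sketch (not the actual GNS-WSI2 API): an RC performing a
# polling-based two-phase commit reservation against several NRM stubs.
from enum import Enum

class EPState(Enum):
    PREPARED = "prepared"
    FAILED = "failed"
    COMMITTED = "committed"
    ABORTED = "aborted"

class NRMStub:
    """Stands in for one domain's NRM; a real client would issue web-service calls."""
    def __init__(self, name):
        self.name = name
        self.state = None

    def reserve(self, src_site, dst_site, bandwidth_mbps, start, end):
        # The NRM would check future availability in its path database (omitted)
        # and move the End Point to "prepared" or "failed".
        self.state = EPState.PREPARED
        return self.state

    def poll(self):
        return self.state

    def commit(self):
        self.state = EPState.COMMITTED

    def abort(self):
        self.state = EPState.ABORTED


def reserve_end_to_end(nrms, request):
    """Phase 1: ask every NRM to prepare; phase 2: commit only if all prepared."""
    for nrm in nrms:
        nrm.reserve(**request)

    # Polling-based operation: the RC repeatedly polls each End Point state.
    if all(nrm.poll() == EPState.PREPARED for nrm in nrms):
        for nrm in nrms:
            nrm.commit()          # second phase: commit the reservation
        return "committed"
    for nrm in nrms:
        nrm.abort()               # any failure rolls back all domains
    return "aborted"


request = dict(src_site="X1", dst_site="U1", bandwidth_mbps=1000,
               start="2007-01-15T10:00Z", end="2007-01-15T12:00Z")
print(reserve_end_to_end([NRMStub("domain A"), NRMStub("domain B")], request))
```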

The major accomplishment of this testbed is the demonstration of in-advance reservation of globally coordinated computing and optical path resources across multiple administrative domains. One important finding obtained through the demonstration was that developing protocol wrappers to bridge different interfaces requires considerable effort, which could easily increase with the number of different interfaces [1]. In order to achieve interoperability among network service provisioning systems, standardization is under way to develop a common network provisioning service interface and a universal network service interface proxy in the Global Lambda Integrated Facility (GLIF) (http://code.google.com/p/fenius) and in the Network Service Interface working group (NSI-WG) of the Open Grid Forum (OGF) (http://forge.gridforum.org/sf/projects/nsi-wg). In late 2010, a collaborative team comprising KDDI Labs, NICT, and AIST demonstrated dynamic circuit provisioning using a web service interface proxy for lambda- and Ethernet-based administrative domains on JGN2plus and Internet2 [5].

Another important finding is that, since user/application traffic volumes range from hundreds of megabits per second to tens of gigabits per second, bandwidth stranding occurs when the traffic is not sufficient to fill the entire capacity of an optical path. One promising solution is to employ multilayer traffic control with the help of emerging QoS-guaranteed packet transport technology.

2.2.2 Multilayer Lambda Grid Testbed

Figure 15.5 shows the multilayer lambda grid testbed built by NTT and AIST, which has two network domains connected via the JGN 2 research network and GEMnet 2 (Global Enhanced Multifunctional Network for NTT R&D testbed) [6]. The testbed comprises OXCs for optical path provisioning, Domain Edge Routers (DERs) for sub-lambda packet path provisioning, an RC, an NRM for each domain, and a CRM for each site. When a user sends a request message that contains the number of CPUs, the bandwidth between computer sites, and the service duration, the RC reserves computers with the CRMs and a sub-lambda or lambda between the allocated computers with the NRM via GNS–WSI2. In order to ensure that the reserved sub-lambda packet path accommodates packets only from the allocated computers, the CRM advertises information about the allocated computers, such as their MAC or IP addresses, to the RC. The RC sends a message that contains the source and destination addresses of the packets to the NRM. At the set time, the NRM configures the DER filtering conditions, establishes a sub-lambda with the reserved bandwidth between the allocated computers, and discards best-effort traffic. In the demonstration, three sub-lambda packet paths, with bandwidths of 200 Mb/s for scientific calculation using the grid message passing interface (MPI), 500 Mb/s for super high-definition (SHD) video, and 100 Mb/s for FTP, were established over a 1 Gb/s optical path.

Fig. 15.5 Multilayer lambda grid testbed (tree model). RC resource coordinator, NRM network resource manager, CRM computing resource manager, OXC optical crossconnect, R router/switch, C computer cluster. Adapted from [6], © 2009 IEEE
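The following sketch, with hypothetical addresses and a simplified capacity check, illustrates the kind of admission and filtering step described above: sub-lambda packet paths are fitted into the 1 Gb/s optical path, and a filter entry is derived for each allocated flow so that the DER forwards only traffic between the allocated computers.

```python
# Hypothetical sketch of the admission and filtering step performed at the
# reserved start time. The addresses and record layout are illustrative only.
OPTICAL_PATH_CAPACITY_MBPS = 1000

def admit_sub_lambdas(requests):
    """requests: list of (name, bandwidth_mbps, src_addr, dst_addr)."""
    admitted, used = [], 0
    for name, bw, src, dst in requests:
        if used + bw > OPTICAL_PATH_CAPACITY_MBPS:
            raise RuntimeError(f"insufficient capacity for {name}")
        used += bw
        # Filter condition installed on the domain edge router (DER): only
        # packets between the allocated computers enter the reserved path;
        # other traffic is treated as best effort.
        admitted.append({"path": name, "rate_mbps": bw,
                         "match": {"src": src, "dst": dst}})
    return admitted

# The three sub-lambda paths from the demonstration (addresses are made up).
for entry in admit_sub_lambdas([("GridMPI", 200, "10.0.1.0/24", "10.0.2.0/24"),
                                ("SHD video", 500, "10.0.1.10", "10.0.2.10"),
                                ("FTP", 100, "10.0.1.20", "10.0.2.20")]):
    print(entry)
```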

2.3 Chain Model Network Control Testbeds

2.3.1 NTT–EVL Service-Layer Aware Network Control Testbed

NTT and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) built a joint testbed in the Chicago metropolitan area in order to demonstrate the establishment of a connection of skew-minimized parallel optical paths, possibly traversing multiple domains, for latency-deviation-sensitive parallel visualization applications [7]. The scalable adaptive graphic environment (SAGE) developed by the EVL is a middleware system for managing visualization and high-definition video streams for viewing on ultrahigh-definition tiled displays. The SAGE application accommodates 55 LCD displays driven by a 30-node cluster of PCs with a graphics rendering capacity approaching a terabit per second. The testbed has two administrative domains, each having two OXCs, whose NRMs are connected based on the chain model, as shown in Fig. 15.6. Optical virtual concatenation (OVC) is implemented in the OXCs in order to de-skew the parallel optical paths. The SAGE proxy sends an advance reservation request to NRM 1 with the node IDs of the SAGE transmitter and receiver systems and a bandwidth of 1.6 Gb/s. NRM 1 allocates one GbE optical path for its own domain and another GbE optical path for the other domain, and then sends a reservation request to NRM 2.

Fig. 15.6 NTT–EVL service-layer aware network control testbed (chain model). NRM network resource manager, SAGE scalable adaptive graphic environment, OXC optical crossconnect, OVC optical virtual concatenation

This testbed demonstrates the feasibility of coordinated advance reservation over multiple domains in the chain model and the de-skewing of parallel optical paths with OVC, which is required for high-end visualization applications.
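A minimal sketch of the general chain-model interaction is shown below; the domain names, record format, and segment parameters are illustrative assumptions, and the actual NRM interface is not reproduced. Each NRM reserves the segment inside its own domain and forwards the request to the next NRM in the chain.

```python
# Minimal chain-model reservation sketch: reservations are built up hop by hop
# as each NRM handles its own segment and forwards the request downstream.
def chain_reserve(nrms, request, index=0):
    if index == len(nrms):
        return []                              # end of the chain: nothing left to reserve
    segment = nrms[index].reserve_segment(request)   # e.g. parallel GbE paths, OVC de-skew
    return [segment] + chain_reserve(nrms, request, index + 1)

class SimpleNRM:
    def __init__(self, name):
        self.name = name
    def reserve_segment(self, request):
        return {"domain": self.name,
                "paths": request["parallel_paths"],
                "bandwidth_gbps": request["bandwidth_gbps"]}

print(chain_reserve([SimpleNRM("NRM 1"), SimpleNRM("NRM 2")],
                    {"bandwidth_gbps": 1.6, "parallel_paths": 2}))
```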

2.3.2 UvA-Nortel Heterogeneous Domains Testbed

The experiment demonstrated during the SC2004 conference by the Universiteit van Amsterdam, Nortel Networks, and other organizations is another example of chain-model service-layer aware network control [8]. NetherLight, StarLight, and OMNINet were used as three optical domains. Each domain was managed by Nortel’s Dynamic Resource Allocation Controller (DRAC), which was responsible for the setup of intra- and inter-domain connections and was implemented with a grid network service agent. Another key element in the service plane was the authentication, authorization, and accounting (AAA) subsystem.

3 Multilayer Traffic-Driven Network Control Testbeds

3.1 NTT–EVL Traffic-Driven Network Control Testbed

NTT and EVL also designed and built a testbed to demonstrate two scenarios of multilayer traffic-driven network control. The first scenario is IP-routing-stable link failure restoration achieved through the coordinated control of Layers 1 and 3 [9]. The testbed consists of two OXCs at EVL in Chicago and an OXC at the University of California at San Diego (UCSD), which are connected by 10 Gb/s links via the NLR and TeraGrid Wave facilities, as shown in Fig. 15.7. A logical interface and proprietary virtual interfaces were implemented in the edge Layer 3 switches to avoid reconfiguring the Layer 3 network for both UNI and NNI link failures. A switch controller periodically monitors the health and traffic volume of the links. If there is a UNI link failure, for example, the virtual interface sends a failure message to the switch controller but pauses the IP-routing restart process for a certain guard time. Meanwhile, the switch controller localizes the failure, determines a detour route, and requests the OXC controller to delete the optical path on the failed route and establish an optical path on the detour route. Subsequently, the switch controller detaches the logical IF from virtual IF (a) and attaches it to virtual IF (b). In the demonstration, a link failure between Switch 1 and OXC 1 was simulated by disconnecting the fiber, and successful Layer 1/Layer 3 harmonized restoration was confirmed within a few seconds without any IP-routing instability.

Fig. 15.7 NTT–EVL traffic-driven network control testbed. OXC optical crossconnect, OXC CRL OXC controller, R CRL router controller, SAGE scalable adaptive graphic environment, LIF logical interface, VIF virtual interface
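The guard-time mechanism can be sketched as follows; the controller classes, guard-time value, and node names are illustrative placeholders rather than the testbed's actual implementation.

```python
# Schematic sketch of the coordinated Layer 1/Layer 3 restoration sequence:
# on a UNI link failure, the IP-routing restart is paused for a guard time
# while the optical layer re-establishes connectivity on a detour route.
import threading

GUARD_TIME_S = 3.0   # assumed guard time; the testbed recovered within a few seconds

class OXCControllerStub:
    def compute_detour(self, failed_link):  return ("OXC 1", "OXC 3", "OXC 2")
    def delete_path(self, link):            print("deleted optical path on", link)
    def setup_path(self, route):            print("set up optical path via", route)

class VirtualIFStub:
    def __init__(self, name): self.name = name
    def detach(self, lif):    print(f"{lif} detached from {self.name}")
    def attach(self, lif):    print(f"{lif} attached to {self.name}")

def restart_ip_routing():
    print("guard time expired: falling back to Layer 3 rerouting")

def handle_uni_link_failure(oxc, vif_a, vif_b, logical_if, failed_link):
    # 1. Pause the IP-routing restart for the guard time.
    fallback = threading.Timer(GUARD_TIME_S, restart_ip_routing)
    fallback.start()
    # 2. Localize the failure and determine a detour route.
    detour = oxc.compute_detour(failed_link)
    # 3. Move the optical path to the detour route.
    oxc.delete_path(failed_link)
    oxc.setup_path(detour)
    # 4. Re-home the logical interface; Layer 3 never sees a topology change.
    vif_a.detach(logical_if)
    vif_b.attach(logical_if)
    fallback.cancel()   # restoration finished before the guard time expired

handle_uni_link_failure(OXCControllerStub(), VirtualIFStub("VIF a"),
                        VirtualIFStub("VIF b"), "LIF", ("Switch 1", "OXC 1"))
```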

The second scenario of multilayer traffic-driven network control is adaptive control of parallel optical paths according to the time-varying demands of high-end, bandwidth-hungry applications [10, 11]. A proprietary GMPLS extension for controlling each Layer 2 link, which is virtually bundled with other links using the IEEE 802.3ad link aggregation technique, was implemented in the switch controllers. In the demonstration, as the initial condition, a small streaming application on SAGE was operating over a 10 Gb/s Ethernet (10GE) connection established between the SAGE transmitter and receiver via optical path A traversing the NLR. Subsequently, two additional streams were launched. A router controller monitoring the traffic from the SAGE transmitter detected that the traffic volume exceeded the wavelength capacity and sent an optical path setup request to an OXC controller. After the second optical path was established, the switch controller activated another 10GE port and bundled the two 10GE connections into a 20 Gb/s virtual link using the link aggregation technique. This harmonized Layer 1 and Layer 2 network reconfiguration procedure enabled smooth bandwidth adjustment of the end-to-end Layer 2 connection over multiple nationwide optical paths.
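A schematic of this scaling loop is given below; the threshold logic, capacities, and controller interfaces are simplified assumptions, not the proprietary GMPLS extension itself.

```python
# Illustrative traffic-driven scaling loop: when monitored traffic exceeds the
# capacity of the currently bundled wavelengths, the router controller requests
# an additional optical path and the new 10GE member joins the aggregation group.
WAVELENGTH_CAPACITY_GBPS = 10.0

class LinkAggregationGroup:
    def __init__(self):
        self.members = 1                      # one 10GE member initially (path A)
    @property
    def capacity_gbps(self):
        return self.members * WAVELENGTH_CAPACITY_GBPS
    def add_member(self):
        self.members += 1                     # bundle another 10GE port (IEEE 802.3ad)

def monitor_and_scale(lag, traffic_gbps, setup_optical_path):
    """Called periodically by the router controller with the measured traffic."""
    while traffic_gbps > lag.capacity_gbps:
        setup_optical_path()                  # ask the OXC controller for a new path
        lag.add_member()                      # then activate and bundle the 10GE port
    return lag.capacity_gbps

lag = LinkAggregationGroup()
print(monitor_and_scale(lag, 14.5, lambda: print("optical path B established")))  # -> 20.0
```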

4 Physical-Layer Aware Network Design and Control Testbeds

4.1 DICONET Impairment-Aware Network Planning and Operation Testbed

In order to operationalize transparent/translucent optical networks, which use the minimum number of expensive optical–electrical–optical regenerators, monitoring of physical-layer impairments and optical performance, combined with impairment-aware routing algorithms, will be key. The goal of the DICONET (Dynamic Impairment Constraint Optical Networking) project, funded by the European Commission, is to design and develop an intelligent network planning and operation tool (NPOT) that considers the impact of physical-layer impairments in the planning and operation phases of optical networking.

Figure 15.8 illustrates the DICONET testbed located in Barcelona [12]. The testbed consists of a configurable signaling communications network (SCN) running over wavelength selective switch (WSS)-based OXC emulators. In the SCN, optical connection controllers (OCCs) are interconnected by 100 Mb/s Ethernet links, mirroring the physical topology of the emulated optical transport plane. Each OCC implements the full GMPLS protocol stack and is connected to its respective OXC. The NPOT consists of network description repositories, a physical-layer performance evaluator, impairment-aware routing and wavelength assignment engines, and component placement modules. The network management system (NMS) allows global supervision of the active optical path state of the network, the current configuration of each network node, and the requests for soft-permanent optical paths. Both centralized and distributed control plane integration schemes are implemented in the testbed. The experimental evaluation in terms of setup delay revealed that the distributed approach outperformed the centralized one, especially at high traffic loads.

Fig. 15.8 DICONET impairment-aware network planning and operation testbed. OXC optical crossconnect, SCN configurable signaling communications network, OCC optical connection controller, NPOT intelligent network planning and operation tool, NMS network management system. Adapted from [12], © 2010 IEEE
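As a flavor of what the physical-layer performance evaluator inside an NPOT-like tool has to decide, the sketch below accumulates noise over the spans of a candidate route and accepts the route only if the end-to-end OSNR stays above a receiver requirement. It is a deliberately simplified, hypothetical check: the numbers are placeholders, and the actual DICONET evaluator models many more impairments (filtering, crosstalk, nonlinear effects).

```python
# Simplified impairment-aware feasibility check: noise powers add, so the
# end-to-end OSNR follows 1/OSNR_total = sum(1/OSNR_i) over all spans.
import math

def to_linear(db):
    return 10 ** (db / 10)

def to_db(lin):
    return 10 * math.log10(lin)

def end_to_end_osnr_db(per_span_osnr_db):
    inv = sum(1 / to_linear(o) for o in per_span_osnr_db)
    return to_db(1 / inv)

def route_is_feasible(per_span_osnr_db, required_osnr_db=15.0):
    # Accept the route only if the accumulated OSNR meets the receiver requirement.
    return end_to_end_osnr_db(per_span_osnr_db) >= required_osnr_db

# A four-span candidate route with assumed per-span OSNR values (dB).
spans = [22.0, 21.5, 23.0, 22.5]
print(round(end_to_end_osnr_db(spans), 1), route_is_feasible(spans))
```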

4.2 Heterogeneous Translucent Optical Network Testbed

A multi-vendor interoperability experiment in a heterogeneous translucent optical network was demonstrated by a collaborative team comprising NEC Corporation, Mitsubishi Electric Corporation, NTT, and KDDI Labs [13]. The testbed incorporated transparent and translucent domains. The transparent domain consisted of three Reconfigurable Optical Add/Drop Multiplexers (ROADMs) and four Wavelength Crossconnects (WXCs), both with a colorless and directionless node architecture. The translucent domain consisted of two OXCs connected by DWDM links. In order to achieve optical path setup across the multi-vendor, multi-domain translucent optical networks, the testbed implemented GMPLS protocol extensions for wavelength availability distribution and lambda label exchange. The major accomplishment of the testbed was the demonstration of highly resilient recovery from multiple (second-order) failures in the multi-vendor translucent optical network.

5 Spectrally Efficient Elastic Optical Path Network Testbeds

5.1 Elastic Optical Path Network Overview

In the current optically routed network, optical channels are aligned on the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) G.694.1 frequency grid. In such traditional optical networks, client-layer traffic flows are aggregated and groomed in the electrical domain into a limited set of line rates, for example, approximately 2.5, 10, and 40 Gb/s. In the future 100 G era and beyond, we may face difficulties in increasing the capacity of electrical aggregation and grooming switches while keeping cost, footprint, and power consumption at an acceptable level. In the meantime, the pace of spectral efficiency improvement in WDM transmission systems is slowing down due to the vulnerability of higher-order multilevel modulation formats to OSNR degradation and nonlinear effects. One promising strategy to resolve the coming capacity crunch is to boost the capacity at the network level by introducing “elasticity” and cross-layer “adaptation” into the optical domain [14–16]. This is why elastic optical path networks, where the right-sized spectral resource is adaptively allocated to an optical path according to the actual client-layer traffic volume and/or network physical conditions in the fully optical domain, have attracted growing anticipation in the past few years. This section provides a brief overview of elastic optical path networks, together with testbed demonstrations of service-layer and physical-layer aware network design and control.

Figure 15.9 illustrates how elastic optical path networks provide spectrally efficient transport of 100 Gb/s services and beyond. Under the conventional design philosophy, every optical path is aligned on the ITU-T fixed grid (Fig. 15.9a) regardless of the path length, bit rate, or actual client traffic volume (Fig. 15.9b). By taking advantage of spectral-efficiency-conscious adaptive signal modulation and elastic channel spacing, elastic optical path networks yield significant spectral savings. For shorter optical paths, which suffer less OSNR degradation, a more spectrally efficient modulation format such as 16 Quadrature Amplitude Modulation (QAM) is employed instead of the more OSNR-tolerant but less spectrally efficient QPSK format. For client traffic that does not fill the entire capacity of a wavelength, the elastic optical path network provides a right-sized intermediate bandwidth, such as 200 Gb/s (Fig. 15.9c). Combined with elastic channel spacing based on the “frequency slot” concept (Fig. 15.9d), where only the required minimum guard band is assigned between channels, elastic optical path networks accommodate a wide range of traffic in a spectrally efficient manner without any intermediate electrical grooming switches (Fig. 15.9e).

Fig. 15.9 User-rate and distance-adaptive spectrum resource allocation in an elastic optical path network. (a) ITU-T G.694.1 frequency grid, (b) conventional channel plan, (c) optimized modulation considering actual user bandwidth and path distance, (d) frequency slot related to the current ITU-T grid, (e) spectrally efficient elastic channel plan
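The following toy sketch illustrates the rate- and distance-adaptive idea: pick the most spectrally efficient format whose reach covers the path, then derive the number of frequency slots the path needs. The reach limits, slot width, guard band, and baud-rate arithmetic are illustrative assumptions rather than values from the testbeds.

```python
# Toy distance- and rate-adaptive resource allocation: choose the modulation
# format by reach, then compute the required number of frequency slots.
import math

SLOT_WIDTH_GHZ = 12.5          # assumed frequency-slot granularity
GUARD_BAND_GHZ = 12.5          # assumed minimum guard band between channels

# (format, bits per symbol, assumed maximum reach in km), most efficient first
FORMATS = [("16QAM", 4, 600), ("8QAM", 3, 1200), ("QPSK", 2, 3000)]

def allocate(bit_rate_gbps, path_length_km, pol_mux=2):
    for name, bits, reach_km in FORMATS:
        if path_length_km <= reach_km:
            baud = bit_rate_gbps / (bits * pol_mux)     # required symbol rate (Gbaud)
            width_ghz = baud + GUARD_BAND_GHZ           # crude spectral-width estimate
            slots = math.ceil(width_ghz / SLOT_WIDTH_GHZ)
            return {"format": name, "gbaud": round(baud, 1), "slots": slots}
    raise ValueError("path too long for any available format")

print(allocate(400, 500))    # short path  -> 16QAM, fewer slots
print(allocate(400, 2500))   # long path   -> QPSK, more slots
```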

5.2 Service-Layer Aware Elastic Optical Path Network Testbed

5.2.1 SLICE Service-Layer Aware Network Control Testbed

The SLICE (spectrum-sliced elastic optical path network) testbed built by NTT is schematically depicted in Fig. 15.10. The key building blocks of the elastic optical network are a rate- and format-flexible optical transponder and a bandwidth-agnostic WXC [17]. The introduction of coherent detection followed by DSP yields new degrees of freedom in designing transponders. By optimizing three parameters, namely the symbol rate, the number of modulation levels, and the number of subcarriers, the required data rate and optical reach can be provided while minimizing the spectral width. For example, a flexible optical reach can be achieved by changing the number of bits per symbol with a high-speed digital-to-analog converter and an In-phase/Quadrature (IQ) modulator. Optical Orthogonal Frequency Division Multiplexing (OFDM) is a spectrally overlapped orthogonal subcarrier modulation scheme and enables a rate-flexible transmitter by customizing the number of subcarriers of the OFDM signal. Bandwidth-agnostic WXCs can be realized by using a continuously bandwidth-variable WSS based on, for example, liquid crystal on silicon (LCoS) technology. In a bandwidth-variable WSS, incoming optical signals with different optical bandwidths and center frequencies can be routed to any of the output fibers. These technologies make it possible to open just the required minimum spectrum window at every node along the optical path.

Fig. 15.10 SLICE service-layer aware network control testbed. (a) Testbed configuration with multiple-rate elastic optical paths, (b) bandwidth-agnostic wavelength crossconnect (BA-WXC) architecture and signal spectra at BA-WXC 3. WXC CRL wavelength crossconnect controller, BV-WSS bandwidth-variable wavelength selective switch. Adapted from [17], © 2010 OSA

The SLICE testbed incorporates six bandwidth-agnostic WXCs connected by 50 km spans of optical fiber in a mesh configuration (Fig. 15.10a). The bandwidth-agnostic WXCs employ a broadcast-and-select architecture, where 1 × N optical splitters and bandwidth-variable N × 1 WSSs are arranged at the input and output, respectively. This architecture enables forwarding of channels with arbitrary spectral widths to arbitrary output ports and provides add/drop as well as broadcast functionality. Optical OFDM signals with bit rates ranging from 40 to 440 Gb/s are generated according to a user/application request and demodulated using a multicarrier light source synchronized with the data signal, an optical multiplexer, an optical demultiplexer, and an optical gate as an optical equivalent of the electrical inverse Fast Fourier Transform (IFFT) and FFT method. An example of the optical channel assignment in terms of the optical spectrum is illustrated in Fig. 15.10b with the spectra of the optical channels at the input, output, and add/drop ports of bandwidth-agnostic WXC 3. In addition to the multiple-rate optical path setup and transmission according to user/application demand described so far, the SLICE testbed demonstrated bandwidth scaling of a single elastic optical path from 40 to 440 Gb/s in accordance with demand changes. In both cases, the Q-factor performance of the elastic optical paths was measured and confirmed to be above the Forward Error Correction (FEC) Q limit of 9.1 dB.
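To illustrate how the minimum spectrum window mentioned above could be opened along a route, the sketch below instructs each bandwidth-variable WSS on the path to apply the same center frequency and passband width derived from a slot assignment. The slot-to-frequency mapping, anchor frequency, and controller interface are assumptions made for the sake of the example.

```python
# Minimal sketch: translate a slot assignment into BV-WSS passband settings and
# apply them at every bandwidth-agnostic WXC along the route (assumed interface).
SLOT_WIDTH_GHZ = 12.5

def passband(center_slot, n_slots, anchor_thz=193.1):
    """Convert a slot assignment into a (center_THz, width_GHz) WSS passband."""
    center_thz = anchor_thz + center_slot * SLOT_WIDTH_GHZ / 1000.0
    return center_thz, n_slots * SLOT_WIDTH_GHZ

def configure_path(route_wxcs, center_slot, n_slots):
    center, width = passband(center_slot, n_slots)
    for wxc in route_wxcs:
        # In the broadcast-and-select architecture, the input splitter delivers all
        # channels to every output; only the selected BV-WSS passband is opened.
        print(f"{wxc}: open passband {width:.1f} GHz centered at {center:.4f} THz")

configure_path(["BA-WXC 1", "BA-WXC 3", "BA-WXC 5"], center_slot=8, n_slots=7)
```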

5.3 Physical-Layer Aware Elastic Optical Path Network Testbed

5.3.1 SLICE Physical-Layer Aware Network Design Testbed

Another important accomplishment of the SLICE testbed is the demonstration of adaptive spectrum allocation design with modulation format/baud rate and filter passband adjustment according to the physical conditions of the network. In the testbed, multiple WDM optical paths with QPSK and 16 amplitude phase-shift keying (16APSK) formats are transmitted over a bandwidth-agnostic WXC placed in a recirculating loop mimicking the characteristics of a ring network, as shown in Fig. 15.11 [18]. The 16APSK format employs four phase levels and four intensity levels in a cross-shaped constellation, which results in a constellation point spacing similar to that of 64-QAM. As a result, the 16APSK format provides half the spectral width at the expense of an approximately 8 dB OSNR penalty compared to the QPSK format. A multicarrier source generates a 50 GHz-spaced optical wavelength comb, and the following WSS directs the comb output into one of two modulator branches to produce 100 GHz-spaced 21.4 Gbaud, 42.7 Gb/s QPSK signals for the long-reach paths and 10.7 Gbaud, 42.7 Gb/s 16APSK signals for the short-reach paths. A separate, single 16APSK channel using a narrow-linewidth optical source is also generated to test the performance of the multilevel format. All branches are multiplexed in a WSS. Six QPSK and ten 16APSK signals are transmitted in a recirculating loop containing 40 km of single mode fiber (SMF), a dispersion compensation fiber (DCF), a bandwidth-agnostic WXC, and a gain equalizing filter. Performance analysis of both the QPSK and 16APSK signals after transmission through multiple WXCs and fiber spans revealed that the distance-adaptive spectrum allocation allows transmission of 16 instead of 11 channels, which corresponds to a 45% increase in spectral efficiency.

Fig. 15.11 SLICE physical-layer aware network design testbed. CW continuous wave, OA optical amplifier, WSS wavelength selective switch, RZ return to zero, DQPSK mod. differential quadrature phase-shift keying modulator, 16APSK 16 amplitude phase-shift keying modulator, OSW optical switch, SMF single mode fiber, DCF dispersion compensation fiber, BA-WXC bandwidth agnostic wavelength crossconnect, Rx receiver

The problem of calculating routes for optical paths in transparent optical networks is called the routing and wavelength assignment (RWA) problem, subject to the wavelength continuity constraint. Adaptive spectrum allocation in elastic optical path networks introduces a more severe constraint, spectrum continuity, leading to the routing and spectrum assignment (RSA) problem, which also takes into account linear and nonlinear impairment factors. The RSA algorithm can be divided into two stages, as shown in Fig. 15.12 [15]. The first stage is to create a route list with the necessary spectral width and modulation format. Given a network topology and physical parameters, an ordered list of fixed routes for each source-destination pair is created first. Subsequently, the necessary spectral width and modulation format for each route at a given bit rate are calculated, taking into account linear and nonlinear impairment factors. The second stage is to allocate a contiguous frequency slot to the route. When a connection request arrives, a route is selected in sequence from the ordered route list created in the first stage. The number of necessary slots is found from the list according to the physical parameters of the route. Then, contiguous slots that are available on every link along the route are searched starting from the lowest-numbered slot, and the lowest-numbered available contiguous slots are selected. If no available contiguous slots are found on the route, an alternate route is selected from the route list.

Fig. 15.12 Rate- and distance-adaptive elastic bandwidth allocation sequence diagrams. (a) Necessary spectrum resource calculation sequence, (b) routing and spectrum allocation algorithm
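A compact illustration of the second stage is given below: the route list produced in the first stage is walked in order, and a first-fit search finds the lowest-numbered block of contiguous slots that is free on every link of the route. The data structures (route list entries, per-link slot occupancy arrays) are assumptions made for the sake of the example.

```python
# Illustrative second RSA stage: first-fit search for contiguous slots that are
# free on every link of a candidate route, enforcing spectrum continuity.
TOTAL_SLOTS = 80

def free_on_all_links(link_usage, links, start, n_slots):
    return all(not link_usage[l][s] for l in links for s in range(start, start + n_slots))

def assign_spectrum(route_list, link_usage):
    """route_list: ordered [(links, n_slots), ...] produced by the first RSA stage."""
    for links, n_slots in route_list:
        for start in range(TOTAL_SLOTS - n_slots + 1):        # first fit, lowest slot first
            if free_on_all_links(link_usage, links, start, n_slots):
                for l in links:                                # reserve the same slots on
                    for s in range(start, start + n_slots):    # every link (continuity)
                        link_usage[l][s] = True
                return links, start, n_slots
    return None                                                # block the request

# Two candidate routes for one source-destination pair, needing 4 and 6 slots.
usage = {link: [False] * TOTAL_SLOTS for link in ("A-B", "B-C", "A-D", "D-C")}
print(assign_spectrum([(("A-B", "B-C"), 4), (("A-D", "D-C"), 6)], usage))
```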

6 Summary

In this chapter, we described a wide variety of research testbeds for cross-layer network design and control in optical networks in terms of service-layer aware, multilayer traffic-driven, and physical-layer aware techniques. Table 15.1 summarizes the described testbeds and their respective objectives. These testbeds demonstrated the feasibility and effectiveness of cross-layer design in optical networks. Many issues that need to be solved before these networks can be commercialized were recognized during the planning and operation of the testbeds.

Table 15.1 Network design and control testbeds

Fortunately, such feedback has given rise to insights for subsequent work. For example, the interoperability issue arising when a user/application requires optical paths across multiple administrative domains is being investigated, and a common network provisioning service interface and a universal network service interface proxy are under standardization in the OGF, as described in Sect. 2.2. The first interoperability demonstration was successfully conducted in late 2010. The granularity mismatch between an optical path and user/application traffic was first addressed by introducing a coordinated packet and optical path architecture. In the medium term, this issue should be addressed by introducing elasticity and adaptation into the optical domain. The results achieved with the SLICE testbed confirmed that elastic optical path networks have the potential to give optics a more prominent role, providing a more efficient and scalable optical layer as a mission-critical infrastructure supporting the future Internet and its services.