Article

Low-Latency Optical Wireless Data-Center Networks Using Nanoseconds Semiconductor-Based Wavelength Selectors and Arrayed Waveguide Grating Router

Electro-Optical Communications (ECO), Eindhoven University of Technology, 5600 AZ Eindhoven, The Netherlands
* Author to whom correspondence should be addressed.
Photonics 2022, 9(3), 203; https://doi.org/10.3390/photonics9030203
Submission received: 25 January 2022 / Revised: 4 March 2022 / Accepted: 17 March 2022 / Published: 21 March 2022
(This article belongs to the Special Issue Optical Data Center Networks)

Abstract

In order to meet the massively increasing requirements of big-data applications, data centers (DCs) are key infrastructures that must cope with the associated demands, such as high performance, easy scalability, low cabling complexity, and low power consumption. Many research efforts have been dedicated to traditional wired data-center networks (DCNs). However, the static and rigid topology of wired DCNs based on optical cables significantly limits their flexibility, scalability, and reconfigurability. These limitations can be addressed with optical wireless technology, which avoids the cabling-complexity problem while allowing dynamic adaptation and fast reconfiguration. Here, we propose and investigate a novel optical wireless data-center network (OW-DCN) architecture based on nanosecond semiconductor optical amplifier (SOA)-based wavelength selectors and an arrayed waveguide grating router (AWGR), controlled by fast field-programmable gate array (FPGA)-based switch schedulers. The full architecture, including the design, packet-switching strategy, contention-solving methodology, and reconfiguration capability, is presented and demonstrated. Dynamic switch scheduling with an FPGA-based switch scheduler processing optical labels, as well as software-defined network (SDN)-based reconfiguration, was experimentally confirmed. The proposed OW-DCN achieved a power penalty of less than 2 dB at BER < 1 × 10−9 for 50 Gb/s OOK continuous and packet-switched transmission.

1. Introduction

Driven by the ever-growing field of big-data applications and cloud-computing paradigms, data-center traffic is growing at a very steep rate. Up to 75% of the emerging traffic load within data centers is transferred between servers and between racks [1,2]. To cope with the unprecedented traffic explosion and unbalanced traffic distribution, data center network (DCN) architecture design has become a major research priority, since DCs host hundreds of thousands of servers for data storage and processing [3]. The design of a DCN architecture needs to satisfy several requirements, such as high bandwidth, flexibility, fast reconfiguration, scalability, dynamic adaptation, and low cabling complexity [4,5]. The conventional wired DCN architecture [6,7] has so far been the dominant architecture in research. However, this wired architecture, built from millions of meters of copper and optical fibers, cannot efficiently support the aforementioned requirements due to the fundamental limitations of wired connections, which lead to issues such as wire ducting, heat dissipation, space utilization, and energy efficiency [8]. Furthermore, scaling and upgrading the DCN to support growing services becomes extremely complicated, which incurs extra maintenance costs in current DC infrastructures. At the same time, the traditional wired hierarchical tree-based DCN architecture is either oversubscribed, failing to adapt to dynamic and unpredictable traffic bursts, or overprovisioned, which is extremely costly and inefficient [9].
Therefore, introducing optical wireless technology into the DCN is a promising solution to address the inherent restrictions of wired DCNs [10,11]. Firstly, once the cabling-complexity problem is removed, more flexible architectures can be explored for better space utilization and reduced power consumption. Secondly, an OW-DCN architecture allows simple implementation and scaling, easy relocation, and fast reconfiguration by adding or redirecting plug-and-play wireless modules on top of each rack. Moreover, benefiting from the inherent advantages of optical wireless technology, ultra-high data rates, low latency, and high capacity can be obtained with low transmission power, since it exhibits negligible waveguide dispersion and almost zero attenuation over a wide unlicensed spectrum range [12,13,14]. Additionally, wireless on-demand links can be implemented to dynamically adapt to bursty and changing traffic.
So far, only a small number of studies on OW-DCNs have been conducted. Most existing approaches either focus on theoretical model simulations without sufficient experimental support [15,16,17,18], or only show preliminary experimental demonstrations at low data rates (10 Gb/s), without considering the full network architecture deployment [19,20,21]. Optical wireless technologies including photonic integrated circuits [16], MEMS [10,20], digital micro-mirror devices [21], switchable mirrors [22], and pedestal-mounted transceiver modules with height and rotation control [18] have been studied for the development of OW-DCNs. However, these approaches have drawbacks, such as slow speed (millisecond reconfiguration times), complex control methodologies, and small steering angles, which result in limited throughput and scalability. These deficiencies necessitate innovation in OW-DCNs in terms of full architecture interconnection, switching-system design, and experimental verification.
In this paper, a practical OW-DCN architecture based on a distributed arrayed waveguide grating router (AWGR), a semiconductor optical amplifier (SOA)-based wavelength selector (SWS), and a field-programmable gate array (FPGA)-based fast switch scheduler is proposed. The full architecture design and the optical packet-switching system are experimentally demonstrated. The proposed OW-DCN combines the high data rate and signal-format transparency of optical switching with optical wireless technology to enable fast optical packet-switching operation, to avoid the power-hungry O/E/O conversions of traditional electrical-switch-based data centers [23], and to benefit from all the above-mentioned wireless properties. For this OW-DCN, a detailed packet-switching strategy, contention-solving methodology, and reconfiguration capability are elaborated. Specifically, an SWS placed at the top-of-the-rack (ToR) switch controls a group of SOAs to select/tune the transmitted wavelength of the transceiver within a few nanoseconds. A passive AWGR and an FPGA-based switch scheduler form the intra- or inter-cluster optical packet switch for the intra- and inter-cluster interconnection between racks. By appropriately changing the central wavelength of the SWS at the ToR switch, the data can be rapidly switched from any input port of the AWGR to any output port, thus reaching any target ToR. Meanwhile, the FPGA-based switch scheduler resolves possible contention to ensure fast packet scheduling, while the SDN-based control plane is employed for look-up table deployment and for monitoring, optimizing, and reconfiguring the network topology (via the distribution of new look-up tables to the ToRs and schedulers) to support dynamic traffic demands. The functional plane of the FPGA-based switch scheduler with the label system for priority assignment, label processing, contention solving, acknowledgment, and retransmission was experimentally verified. Next, the reconfigurability of this architecture was confirmed with an SDN control plane implemented on the OpenDaylight (ODL) and OpenStack platforms. For the data plane, the 50 Gb/s transmission performance of traffic between ToRs was evaluated via bit-error-rate measurements for all the validated combinations. Moreover, ToR-to-ToR switching of true packetized payloads at 50 Gb/s was also experimentally demonstrated, which validates the architecture's packet-switching and delivery capability.
This paper is organized as follows. Section 2 describes the novel OW-DCN architecture and the system operation of the SWS-based ToR switch, AWGR-based optical switch, and SDN-based reconfiguration. The validation and assessment of contention solving, reconfiguration via priority assignment, and performance of data transmission and switching are reported in Section 3. Section 4 concludes the paper by summarizing the main results.

2. OW-DCN Architecture

The proposed OW-DCN architecture comprising the SWS, the FPGA-based switch scheduler, and N × N-port AWGRs is shown in Figure 1. It is divided into N clusters, and each cluster groups N racks. The K servers at each rack are interconnected by one optical ToR switch. As illustrated in Figure 1, an intra-cluster AWGR-based switch (IAS) connects the N ToRs within one cluster through the intra-cluster optical wireless links, while an inter-cluster AWGR-based switch (EAS) is dedicated to the traffic between the i-th ToRs of the different clusters (1 ≤ i ≤ N) through the inter-cluster optical wireless links. These two bi-directional optical wireless links are established via two pairs of collimators placed on each rack and on each optical switch (EAS and IAS). Table 1 shows the wavelength mapping between each port of the N × N-port AWGR and each ToR. Furthermore, an SDN control plane was implemented with the ODL and OpenStack platforms for monitoring, optimizing, and reconfiguring the OW-DCN, and the OpenFlow protocol was deployed at each ToR for interaction with the SDN controller. With the help of SDN, the transmission links can be dynamically reconfigured based upon the statistical collection of real-time network utilization and application requirements.
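To make the two interconnection planes concrete, the following minimal Python sketch routes traffic between ToRs addressed as (cluster, rack) pairs, following the single-hop/two-hop rule of this architecture (also discussed at the end of this section). The addressing scheme, the helper name route_hops, and the choice of traversing the EAS before the destination cluster's IAS on the two-hop path are illustrative assumptions, not details taken from the paper.

```python
# A minimal routing sketch for the OW-DCN topology described above.
# Assumption: ToRs are addressed as (cluster, rack) with 0 <= cluster, rack < N.

def route_hops(src: tuple[int, int], dst: tuple[int, int]) -> list[str]:
    """Return the sequence of AWGR-based switches a packet traverses."""
    s_cluster, s_rack = src
    d_cluster, d_rack = dst
    if src == dst:
        return []                          # same rack: handled inside the ToR switch
    if s_cluster == d_cluster:
        return [f"IAS-{s_cluster}"]        # one hop via the cluster's IAS
    if s_rack == d_rack:
        return [f"EAS-{s_rack}"]           # one hop via the rack-index EAS
    # Different cluster and different rack index: at most two hops, e.g. first
    # reach the destination cluster via the EAS, then its IAS (assumed order).
    return [f"EAS-{s_rack}", f"IAS-{d_cluster}"]

# Example: ToR (0, 2) -> ToR (3, 1) traverses ["EAS-2", "IAS-3"]
print(route_hops((0, 2), (3, 1)))
```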
The schematic of the functional blocks of the FPGA-based ToR switch is presented in Figure 2. Each ToR switch has a server interface and a network interface. The traffic arriving at these two interfaces is exchanged between servers in the same rack (intra-rack traffic), servers in other racks of the same cluster (intra-cluster traffic), and servers in different clusters (inter-cluster traffic). When a packet arrives at the ToR switch, the head processor first checks the destination of the packet. Packets destined for servers in the same rack are directly forwarded to the K intra-ToR buffers to reach the K servers. Packets destined for a server in a different rack are forwarded to the intra-buffer or inter-buffer to wait for intra-cluster or inter-cluster transmission, respectively. As shown in Figure 3, for intra-cluster or inter-cluster transmission, contention may occur if packets need to be switched at the same optical switch (IAS or EAS) to the same destination ToR in the same time slot, since a passive N × N-port AWGR is used for the packet switching between the ToRs. Therefore, a contention-solving procedure is implemented with an FPGA-based switch scheduler and a label channel between the ToRs and the optical switch node. For each packet, an optical label that contains the destination ToR and the priority information is first generated and forwarded to the IAS or EAS in advance. The switch scheduler at the IAS or EAS side checks the label information coming from the different intra- (or inter-) cluster ToRs for possible contention and then notifies each ToR of its delivery-request status. If there is no contention, the switch scheduler sends a positive acknowledgment (ACK) back to each ToR, indicating a granted request. In the event of contention, the priorities are compared at the switch scheduler: an ACK is sent to the ToR whose label carries the higher priority, while negative acknowledgments (NACKs) are sent back to the ToRs whose labels carry lower priority. According to the received ACK or NACK signal, the data packet at the ToR side is either forwarded or held for retransmission in the next time slot, respectively.
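As an illustration of the contention-solving procedure just described, the sketch below resolves one time slot of label requests at the scheduler: requests targeting the same AWGR output are compared by priority, the winner receives an ACK, and the others receive NACKs. The Label fields, the numeric priority encoding (lower value means higher priority, as in Section 3.1), and the function name schedule_slot are assumptions for illustration only, not the FPGA implementation.

```python
# A minimal sketch of per-time-slot contention resolution at the switch scheduler.
from dataclasses import dataclass

@dataclass
class Label:
    src_tor: int    # requesting ToR
    dst_tor: int    # destination ToR (AWGR output port)
    priority: int   # lower value = higher priority (cf. '1 > 2 > 3' in Section 3.1)

def schedule_slot(labels: list[Label]) -> dict[int, str]:
    """Return ACK/NACK per requesting ToR for one time slot."""
    response: dict[int, str] = {}
    by_destination: dict[int, list[Label]] = {}
    for lbl in labels:
        by_destination.setdefault(lbl.dst_tor, []).append(lbl)
    for requests in by_destination.values():
        # Contention: more than one ToR targets the same AWGR output in this slot.
        winner = min(requests, key=lambda l: l.priority)
        for lbl in requests:
            response[lbl.src_tor] = "ACK" if lbl is winner else "NACK"
    return response

# Example: ToR1 and ToR2 both request ToR3; ToR1 holds the higher priority.
print(schedule_slot([Label(1, 3, 1), Label(2, 3, 2), Label(4, 2, 3)]))
# -> {1: 'ACK', 2: 'NACK', 4: 'ACK'}
```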
For the ToRs that are allowed to send their data packets, the central wavelength of the optical transmitter is selected by the fast SWS, which is controlled by the FPGA-based controller of the ToR. The fast SWS comprises an array of lasers (or a comb laser) centered at different wavelengths matching the routing wavelengths of the AWGR (shown in Table 1), an array of SOA gates for selecting the laser (or lasers, in the event of multicasting), and an AWG for multiplexing. After the laser wavelength is selected, the electrical data packets are modulated onto the optical carrier via an optical modulator and sent to the destination ToRs via the AWGR at the IAS or EAS. The nanosecond switching time of the SOAs guarantees nanosecond operation of the SWS and, thus, of the tunable transmitter and the switching. Moreover, the SOAs provide amplification to guarantee the power budget of the interconnect links. The SWS and the optical modulator can be photonically integrated to decrease the cost, footprint, and power consumption. Furthermore, instead of using discrete laser modules, chip-based light generation, such as an optical comb, can be employed as an efficient multi-wavelength source [24]. Moreover, due to the cyclic routing characteristic of the AWGR, all the N ToRs are connected by wavelengths from λ1 to λN, as shown in Table 1.
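The following sketch shows how a ToR controller could map a destination ToR to the SOA gate (and hence the laser wavelength) to open in the SWS. It assumes the cyclic routing rule suggested by Table 1, with 0-indexed ports and wavelength index k = (input + output) mod N; the exact index convention of a real AWGR may differ, so this is a hedged illustration rather than the device's actual mapping.

```python
# A minimal sketch of the SWS gate selection, assuming a cyclic AWGR routing rule.

def soa_gate_for(input_port: int, output_port: int, n_ports: int) -> int:
    """Index of the laser/SOA gate to enable so the AWGR routes to output_port."""
    return (input_port + output_port) % n_ports

# Example for the 4 x 4-port AWGR prototype: the ToR on input port 0 opens a
# different gate (i.e. selects a different fixed laser) for each destination ToR.
for out in range(4):
    print(f"input 0 -> output {out}: enable SOA gate {soa_gate_for(0, out, 4)}")
```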
Apart from the label generation and wavelength selection for the data packets, the ToR switch also collects information on the granted and rejected transmission requests (ACK/NACK signals) and reports it to the SDN controller for priority reconfiguration, to further enhance the performance in terms of packet loss and latency. At the physical layer, even though the switch scheduler prevents packet contention, packet losses may still occur at the ToR buffers, and high transmission latencies may arise from unbalanced, heavy network traffic loads. Furthermore, packet losses caused by buffer overflow might also occur under high traffic load, due to the higher likelihood of burst traffic arriving at the electronic buffer block within a certain time. Therefore, the traffic statistics of the underlying data plane are monitored and reported to the SDN controller via the OpenFlow agent. Based on this information, and with the help of an orchestrator, the SDN controller can balance the network traffic load by pushing new look-up tables to the FPGA-based ToRs and schedulers in real time. Consequently, the network performance in terms of latency and packet loss is improved, and better network utilization is achieved for supporting diverse types of traffic.
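A possible shape of such a load-balancing step is sketched below: the controller ranks ToR-to-ToR links by the contention/NACK counts reported over OpenFlow and assigns the highest priority class to the busiest links before pushing the new look-up table. The specific policy, the thresholds, and the function name rebalance_priorities are illustrative assumptions; the paper does not specify the controller's algorithm.

```python
# A minimal sketch of a priority-rebalancing policy at the SDN controller.

def rebalance_priorities(contention_counts: dict[tuple[int, int], int]) -> dict[tuple[int, int], int]:
    """Map (src ToR, dst ToR) links to new priority classes (1 = highest)."""
    ranked = sorted(contention_counts, key=contention_counts.get, reverse=True)
    new_lut = {}
    for rank, link in enumerate(ranked):
        # Assumed thresholds: most contended links get class 1, then 2, then 3.
        new_lut[link] = 1 if rank < 3 else (2 if rank < 4 else 3)
    return new_lut

# Example: the links ToR2->ToR1, ToR4->ToR2 and ToR4->ToR3 saw the most contention
# and receive priority class 1, loosely mirroring the reconfiguration of Figure 5b.
stats = {(1, 2): 3, (2, 1): 17, (4, 2): 12, (4, 3): 9, (3, 4): 1}
print(rebalance_priorities(stats))
```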
It may be noted that the OW-DCN enables a flat architecture that allows different interconnection paths between the ToRs. Single-hop communication suffices to forward traffic to ToRs residing in the same cluster via the IAS, and at most two hops are needed for the interconnection between ToRs located in different clusters. This flexible optical network connection offers the advantage of supporting different load-balancing algorithms and ensuring fault protection. Moreover, by exploiting an AWGR with more ports, a more scalable architecture interconnecting more ToRs can be achieved. A 90 × 90-port AWGR with 50 GHz bandwidth was demonstrated in [25]; it can be inferred that an OW-DCN with up to 324,000 servers (if each ToR groups 40 servers) is, in principle, possible. Moreover, with the help of the periodic free-spectral-range (FSR) feature of the AWGR, the capacity of each link could be dynamically increased without changing the infrastructure.
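As a quick sanity check of the scalability figure quoted above, the short calculation below assumes the architecture scales as N clusters of N racks, each rack hosting 40 servers, when an N = 90-port AWGR is used.

```python
# Scalability check under the stated assumption: N x N racks, 40 servers per ToR.
n_ports = 90            # 90 x 90-port AWGR reported in [25]
servers_per_tor = 40
total_servers = n_ports * n_ports * servers_per_tor
print(total_servers)    # 324000, matching the figure given in the text
```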

3. Experimental Validation

To demonstrate the working principle of the proposed OW-DCN and to evaluate its transmission performance, a fully functional 4 × 4-rack DCN prototype with a control plane and a data plane was built. First, the FPGA-based switch scheduler with the label system was validated on a testbed comprising one FPGA-based switch scheduler and four FPGA-based ToR switches, to demonstrate the system's switch-scheduling ability for contention solving. Furthermore, the SDN control plane based on ODL and OpenStack was implemented to reconfigure the network to support dynamic traffic patterns. The data-plane network, based on a 4 × 4-port AWGR and an SWS at each ToR, was assessed to validate the transmission and switching performance for all the link combinations. Moreover, packet-switching operation with ToR-to-ToR true packetized payloads was experimentally assessed.

3.1. Switch Schedule and Reconfiguration of the OW-DCN

The experimental setup used to investigate the full dynamic operation, including the switch scheduling and the SDN-based reconfiguration of a 4 × 4-rack DCN, is shown in Figure 4a. It consisted of four ToR switches and one optical switch scheduler. The four ToR switches were implemented with four Xilinx UltraScale FPGAs equipped with OpenFlow agents. At the optical switch side (IAS or EAS), another Xilinx Virtex FPGA acted as the switch scheduler. For each of these FPGAs, 10 Gb/s SFP transceivers were used for the communication between the ToRs and the switch scheduler through the OW link. Furthermore, the ODL- and OpenStack-based SDN controller was applied for monitoring, load balancing, and priority assignment, based on the statistical traffic collected from the OpenFlow agent of each ToR.
To evaluate the performance of the OW-DCN, Ethernet frames were randomly generated by the FPGA-based ToRs to emulate the aggregated server traffic. The frames destined for servers in the other racks were sent to the data-packet processor to generate optical data packets. For each packet, a corresponding optical label carrying the destination information and the transmission priority class was generated. The transmission priority was assigned by the ODL controller via the OpenFlow agent according to the application requirements. This 4 × 4-rack DC featured three priority classes, ordered as '1 > 2 > 3', meaning that a label with priority '1' had the highest priority. The look-up table with the priority information was initialized by the SDN control plane and stored at the ToR switches and the switch scheduler. In each time slot, the ToR switch sent the optical label associated with its data packet to the switch scheduler via the OW link. Based on the received label requests, the switch scheduler processed all the labels, resolved packet contention by comparing the priorities of the contending ToRs, and then sent the scheduling response (ACK/NACK) back to the corresponding ToRs. For the data packets granted for transmission (receiving an ACK signal), the average label-processing latency across the switch network was measured using the traffic analyzer and the logic analyzer inside the FPGA. This latency includes the transmission delay and the data-processing delay across the FPGA-based ToR and the switch scheduler; the contribution of each processing step is summarized in Table 2, giving a total latency of 677.6 ns. This latency accumulates with each retransmission. Therefore, retransmissions need to be prevented in order to decrease the network latency and, in turn, release the buffers to reduce packet loss. Since the look-up table is the key element of the transmission system that decides the transmission priority and, hence, the number of retransmissions, the SDN control system was applied to dynamically update the look-up table according to real-time traffic statistics.
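The latency budget of Table 2 and its accumulation over retransmissions can be summarized with the short sketch below. The linear accumulation model (one full label round trip per retransmission attempt) is an assumption for illustration; queueing time in the ToR buffers is not included.

```python
# A minimal sketch reproducing the label-system latency budget of Table 2.
LABEL_LATENCY_NS = {
    "label generation (ToR)": 25.6,
    "label packet received (scheduler)": 25.6,
    "ACK/NACK generation (scheduler)": 19.2,
    "ACK/NACK received and processed (ToR)": 73.5,
    "optical wireless transmission (4 m)": 13.3,
    "10G GTH ToR TX path": 79.2,
    "10G GTH ToR RX path": 87.8,
    "10G GTH scheduler TX path": 145.6,
    "10G GTH scheduler RX path": 207.8,
}

per_attempt_ns = sum(LABEL_LATENCY_NS.values())
print(round(per_attempt_ns, 1))             # 677.6 ns, as measured

def scheduling_latency(retransmissions: int) -> float:
    """Approximate scheduling latency after a given number of retransmissions
    (assumed linear accumulation, ignoring buffering)."""
    return (retransmissions + 1) * per_attempt_ns

print(round(scheduling_latency(2), 1))      # ~2032.8 ns after two NACKs
```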
The initial configuration of the look-up table is shown in Figure 5a. The data packets from ToR1 had the highest priority, those from ToR2 the second-highest, followed by ToR3 and, finally, ToR4. Based on this look-up table, Figure 5c illustrates the time traces of the processed label signals (destination and priority) and the contention solving with the corresponding ACK/NACK signals at the FPGA-based switch scheduler. A granted request is confirmed by a returned value equal to the requested destination ToR number (ACK signal); otherwise, the ToR receives a value different from the requested destination ToR number (NACK signal). As shown in Figure 5c, contentions between ToR1 and ToR2, ToR1 and ToR4, and ToR3 and ToR4 occurred in time slots N, N + 1, and N + 2, respectively. According to the priorities, the switch scheduler sent positive acknowledgments (ACKs) to the ToRs with higher-priority requests (ToR1, ToR3, and ToR4 in time slot N, and ToR1, ToR2, and ToR3 in time slots N + 1 and N + 2) and negative acknowledgments to the ToRs with lower-priority requests (ToR2 in time slot N, ToR4 in time slots N + 1 and N + 2). The scheduler sent ACKs back to all the ToRs in time slots N + 3 and N + 4, as there were no contentions. The ToRs that received a NACK signal retransmitted by sending the label again in the next time slot (ToR2 in time slot N + 1, ToR4 in time slots N + 2 and N + 3).
In the meantime, the four FPGA-based ToR switches also collected and reported the traffic information to the SDN-based centralized controller via the OpenFlow link. At the SDN controller side, the ODL is in charge of monitoring the transmission status. Figure 5e shows the number of contentions and lost packets collected in real time. It should be noted that the packet loss was mainly due to the unbalanced network traffic load, which introduces more packet retransmissions, fills the electronic buffers, and leads to buffer overflow. Therefore, based on the monitored information, a load-balancing algorithm in the SDN control plane was applied to prevent this situation by assigning higher priority to the transmission links carrying more traffic. Specifically, the SDN control plane dynamically optimized the configured network and pushed the new look-up tables and priorities to the ToRs via the OpenFlow protocol, improving the network efficiency and decreasing the transmission latency. Figure 4b provides insight into the priority configuration from the SDN-based controller, and Figure 5b shows the updated look-up table with the new priority allocation. In this new configuration, the packets from ToR1 no longer had the highest priority; instead, the packets from ToR2 to ToR1, ToR4 to ToR2, and ToR4 to ToR3 had the highest priority. Thus, when contention occurred, the switching schedule and the retransmissions changed due to the priority changes of each transmission link, as shown in Figure 5d. The improved performance of the proposed OW-DCN is also demonstrated by the reduced number of contentions and the zero packet loss shown in Figure 5f. Moreover, it should be noted that this reconfiguration operates automatically under the management of the SDN control plane, without any manual operation.

3.2. Data Plane Transmission Performance Evaluation

The transmission performance is further evaluated in this section. Our evaluation follows a stepwise approach, starting with 50 Gb/s traffic transmission for all the validated combinations of ToRs, followed by a demonstration of end-to-end 50 Gb/s packet switching.
Firstly, an experiment was set up with a one-cluster 4 × 4-rack DCN prototype, implemented with the SWS and a 4 × 4-port AWGR with 200 GHz channel spacing, as shown in Figure 6. Four lasers and four SOAs were employed to implement the SWS prototype. The central wavelength of each laser output was set to match the wavelength-routing map of the AWGR. For each transmission evaluation, the SOA gates of the SWS were switched on/off to set the central wavelength of the transmitter and thereby assess each of the transmission paths between the source and destination ToRs of the DCN. Since the optical modulator and the AWGR were polarization-dependent, polarization controllers were used to align the polarization state of the laser beam with the Mach–Zehnder modulator (MZM) and the AWGR separately. The 50 Gb/s PRBS-31 NRZ-OOK signals were then coupled into a triplet lens collimator (Thorlabs TC18FC-1550) to reach the AWGR through a 2 m optical wireless path. Since the AWGR forwards the light between its input and output ports according to a fixed routing map, the optical signal from the transmitting ToR was switched to each destination ToR by switching the SOA gates of the SWS. At the destination ToR, the transmission performance was evaluated with a BER tester measuring the bit error rate of each transmission link. The back-to-back (BtB) measurement was carried out by connecting the output of the MZM directly to the BER tester. Figure 7 shows the BER curves measured at 50 Gb/s for all 12 links. The results confirm nearly identical, error-free performance with a power penalty of less than 2 dB at BER < 1 × 10−9 with respect to the BtB transmission. This 2 dB penalty is mainly due to the decrease in SNR, in addition to the spectral filtering introduced by the AWGR.
In the second experimental setup, shown in Figure 8, an optical packet-switched proof-of-concept testbed of a 4 × 4-rack OW-DCN operating at 50 Gb/s was implemented. The central wavelength of the optical packets from ToR1 and ToR2 was rapidly switched by the FPGA-controlled SWS to transmit the 50 Gb/s packetized signal to ToR3 or ToR4. In order to achieve fast and accurate packet switching at 50 Gb/s, the FPGA that controlled the SWSs of ToR1 and ToR2 was time-synchronized with the 50 Gb/s pattern generator driving the MZM. Four SOAs were employed to demonstrate fully dynamic, fast packet-switched transmission from ToR1 and ToR2 to ToR3 or ToR4. Specifically, 50 Gb/s OOK optical packets with a 7 ns (350-bit) payload time and a 1 ns (50-bit) guard time were generated, as shown in Figure 9a. Figure 9b shows one of the FPGA's control signals for an SOA. The duration of this control signal was chosen to match the packet duration of 8 ns. When the optical packets had to be transmitted to ToR3 or ToR4, a trigger signal from the pattern generator was sent to the FPGA to trigger the generation of the control signals simultaneously. This guaranteed accurate time synchronization, so that the wavelength switching occurred right in the middle of the gap between two consecutive packets. Moreover, the 1 ns guard time was designed to match the fast switching time of the SOAs.
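For reference, the packet timing follows directly from the line rate: at 50 Gb/s, a 350-bit payload occupies 7 ns and a 50-bit guard band occupies 1 ns, giving the 8 ns slot matched by the FPGA control signal, as the short check below confirms.

```python
# Sanity check of the packet timing used in the 50 Gb/s packet-switching experiment.
line_rate_gbps = 50
payload_bits, guard_bits = 350, 50
payload_ns = payload_bits / line_rate_gbps   # 7.0 ns payload
guard_ns = guard_bits / line_rate_gbps       # 1.0 ns guard time
print(payload_ns, guard_ns, payload_ns + guard_ns)   # 7.0 1.0 8.0
```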
The central wavelengths of the two lasers at ToR1 and ToR2 matched the wavelength-routing map of the AWGR, as shown in Table 3. The SWS of ToR1 selected the laser at 1560.70 nm to send the optical packets to ToR3, or the laser at 1559.09 nm to send the optical packets to ToR4. At ToR2, the SWS selected the laser at 1562.26 nm or at 1560.64 nm to send the optical packets to ToR3 or ToR4, respectively. Figure 9c,d shows the optical packets switched from ToR1 and ToR2 to ToR3 or ToR4, respectively, according to the control signals provided to the SWSs of ToR1 and ToR2. The optical packets were amplified by an EDFA to compensate for the MZM loss, launched via a triplet lens collimator (Thorlabs TC18FC-1550) into the free-space link, and then collected by another collimator at the AWGR side. Next, an optical splitter divided the optical power of the packets between input ports 1 and 2 of the AWGR for switching to the destination ToR3 (output port 3) or ToR4 (output port 4). A fiber delay line and an inline attenuator were used to ensure that the output signals from the AWGR were synchronized and had the same power level. The outputs of the AWGR were then transmitted to ToR3 or ToR4 via another pair of triplet lens collimators. A BER tester was used to evaluate the quality of the received 50 Gb/s optical packets.
Figure 10 shows the BER curves of the optical packets evaluated at the receiver sides of ToR3 and ToR4. The back-to-back BER of the optical packets is reported for reference. The results indicate that the fast switching has a very limited impact on the signal quality, with only a 0.6 dB power penalty. Therefore, these experimental results show that it is feasible to realize high-data-rate (50 Gb/s) fast optical packet switching using our proposed fast SWS and AWGR-based OW-DCN switching system.

4. Discussion and Conclusions

We presented and assessed a novel flat OW-DCN architecture based on SOA-based SWSs, FPGA-based switch schedulers, N × N-port AWGRs, and SDN-enabled reconfiguration. The introduced optical wireless technology avoids cabling complexity and yields a flexible, low-power-consuming DCN while enabling fast and reconfigurable all-optical packet switching.
Table 4 compares the proposed architecture with the existing research on OW-DCNs. Different switching technologies have been applied to provide wireless interconnections. However, most of them, e.g., the MEMS-based ones, have been demonstrated with millisecond switching timescales and complex control methodologies, which limits the resulting networks in terms of throughput and scalability. Furthermore, only theoretical model simulations or simple single-link experimental demonstrations are reported in these studies. Moreover, no OW-DCN has previously been proposed with both a complete network switching system and a full architecture design.
For the proposed architecture, a nanosecond-fast optical switch with a complete architectural interconnection design and switching system was proposed. A fully functional control plane based on label processing, contention solving, retransmission, and fast reconfiguration for supporting dynamic traffic patterns and facilitating network provisioning was experimentally verified. The transmission performance with the functional SWS block was evaluated in a 4 × 4-rack OW-DCN based on a 4 × 4-port AWGR, showing 50 Gb/s error-free (BER < 10−9) transmission for all the validated links between the ToRs. The fast (50 Gb/s) optical packet switching showed good performance, with a power penalty of around 0.6 dB with respect to the back-to-back packetized traffic. This confirms the ability of the proposed OW-DCN switching system to support fast and accurate packet-level switching.
It should be noted that the scalability of our architecture relies only on the AWGR and the SWS system. Therefore, we believe that, by exploiting an AWGR with more ports [25] and introducing photonic integration technology to integrate the SWS, a scalable OW-DCN with high throughput, small footprint, low cost, and low power consumption is achievable. In addition, benefiting from the format-transparent all-optical switching technology and the periodic FSR of the AWGR, the throughput of this OW-DCN can be further increased by employing multi-level modulation formats and wavelength multiplexing without changing the infrastructure.

Author Contributions

Conceptualization, S.Z.; validation, S.Z. and X.X.; writing, S.Z.; supervision, E.T. and N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meeker, M. 2018 Internet Trends; Kleiner Perkins: 2018. Available online: https://www.kleinerperkins.com/perspectives/internet-trends-report-2018/ (accessed on 15 December 2021).
  2. Cisco Global Cloud Index: Forecast and Methodology, 2016–2021. Available online: https://virtualization.network/Resources/Whitepapers/0b75cf2e-0c53-4891-918e-b542a5d364c5_white-paper-c11-738085.pdf (accessed on 15 December 2021).
  3. Xia, W.; Zhao, P.; Wen, Y.; Xie, H. A Survey on Data Center Networking (DCN): Infrastructure and Operations. IEEE Commun. Surv. Tutor. 2017, 19, 640–656. [Google Scholar] [CrossRef]
  4. Imran, M.; Haleem, S. Optical interconnects for cloud computing data centers: Recent advances and future challenges. In Proceedings of the International Symposium on Grids and Clouds, Taipei, Taiwan, 16–23 March 2018. [Google Scholar]
  5. Quttoum, A.N. Interconnection Structures, Management and Routing Challenges in Cloud-Service Data Center Networks: A Survey. Int. J. Interact. Mob. Technol. 2018, 12, 36–60. [Google Scholar] [CrossRef] [Green Version]
  6. Yan, F.; Xue, X.; Calabretta, N. HiFOST: A Scalable and Low-Latency Hybrid Data Center Network Architecture Based on Flow-Controlled Fast Optical Switches. J. Opt. Commun. Netw. 2018, 10, B1–B14. [Google Scholar] [CrossRef]
  7. Xue, X.; Nakamura, F.; Prifti, K.; Pan, B.; Yan, F.; Wang, F.; Guo, X.; Tsuda, H.; Calabretta, N. SDN enabled flexible optical data center network with dynamic bandwidth allocation based on photonic integrated wavelength selective switch. Opt. Express 2020, 28, 8949–8958. [Google Scholar] [CrossRef] [PubMed]
  8. Popoola, O.; Pranggono, B. On energy consumption of switch-centric data center networks. J. Supercomput. 2018, 74, 334–369. [Google Scholar] [CrossRef] [Green Version]
  9. Bilal, K.; Khan, S.U.; Zhang, L.; Li, H.; Hayat, K.; Madani, S.A.; Min-Allah, N.; Wang, L.; Chen, D.; Iqbal, M.; et al. Quantitative comparisons of the state-of-the-art data center architectures. Concurr. Comput. Pract. Exp. 2013, 25, 1771–1783. [Google Scholar] [CrossRef]
  10. Arnon, S. Optical wireless communication in data centers. Broadband Access Commun. Technol. XII 2018, 10559, 105590. [Google Scholar]
  11. Hamza, A.S.; Deogun, J.S.; Alexander, D.R. Wireless Communication in Data Centers: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 1572–1595. [Google Scholar] [CrossRef] [Green Version]
  12. Koonen, T.; Mekonnen, K.; Cao, Z.; Huijskens, F.; Pham, N.Q.; Tangdiongga, E. Ultra-high-capacity wireless communication by means of steered narrow optical beams. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 2020, 378, 20190192. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Ghassemlooy, Z.; Member, S.; Arnon, S.; Member, S.; Uysal, M.; Member, S.; Xu, Z.; Member, S.; Cheng, J.; Member, S. Emerging Optical Wireless Communications-Advances and Challenges. IEEE J. Sel. Areas Commun. 2015, 33, 1738–1749. [Google Scholar] [CrossRef]
  14. Koonen, T.; Mekonnen, K.A.; Cao, Z.; Huijskens, F.; Pham, N.Q.; Tangdiongga, E. Beam-Steered Optical Wireless Communication for Industry 4.0. IEEE J. Sel. Top. Quantum Electron. 2021, 27, 1–10. [Google Scholar] [CrossRef]
  15. Hamza, A.S.; Yadav, S.; Ketan, S.; Deogun, J.S.; Alexander, D.R. OWCell: Optical wireless cellular data center network architecture. In Proceedings of the International Conference on Communications, Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar]
  16. Chaintoutis, C.; Shariati, B.; Bogris, A.; Dijk, P.V.; Roeloffzen, C.G.H.; Bourderionnet, J.; Tomkos, I.; Syvridis, D. Free Space Intra-Datacenter Interconnects Based on 2D Optical Beam Steering Enabled by Photonic Integrated Circuits. Photonics 2018, 5, 21. [Google Scholar] [CrossRef] [Green Version]
  17. Alteri, A.S.; Alsulami, O.Z.; El-Gorashi, T.E.H.; Alresheedi, M.T.; Elmirghani, J.M.H. Data Center Top of Rack Switch to Multiple Spine Switches Optical Wireless Uplinks. In Proceedings of the International Conference on Transparent Optical Networks, Bari, Italy, 19–23 July 2020. [Google Scholar] [CrossRef]
  18. Riza, N.A. The camceiver: Empowering robust agile indoor optical wireless for massive data centres. In Proceedings of the 42nd International Conference on Telecommunications and Signal Processing, Budapest, Hungary, 1–3 July 2019; pp. 445–448. [Google Scholar]
  19. Ali, W.; Cossu, G.; Gilli, L.; Ertunc, E.; Messa, A.; Sturniolo, A.; Ciaramella, E. 10 Gbit/s OWC System for Intra-Data Centers Links. IEEE Photon. Technol. Lett. 2019, 31, 805–808. [Google Scholar] [CrossRef]
  20. Deng, P.; Kane, T.; Alharbi, O. Reconfigurable free space optical data center network using gimbal-less MEMS retroreflective acquisition and tracking. In Proceedings of the Free-Space Laser Communication and Atmospheric Propagation XXX, San Francisco, CA, USA, 29–30 January 2018. [Google Scholar] [CrossRef]
  21. Ghobadi, M.; Mahajan, R.; Phanishayee, A.; Devanur, N.; Kulkarni, J.; Ranade, G.; Blanche, P.A.; Rastegarfar, H.; Glick, M.; Kilper, D. ProjecToR: Agile reconfigurable data center interconnect. In Proceedings of the PACM SIGCOMM Conference, Florianopolis, Brazil, 22–26 August 2016; pp. 216–229. [Google Scholar]
  22. Hamedazimi, N.; Qazi, Z.; Gupta, H.; Sekar, V.; Das, S.R.; Longtin, J.P.; Shah, H.; Tanwery, A. FireFly: A reconfigurable wireless data center fabric using free-space optics. In Proceedings of the ACM Conference on SIGCOMM, Chicago, IL, USA, 18 August 2014; pp. 319–330. [Google Scholar]
  23. Calabretta, N.; Prifti, K.; Xue, X.; Yan, F.; Pan, B.; Guo, X. Nanoseconds photonic integrated switches for optical data center interconnect systems. In Proceedings of the Optical Interconnects XX, San Francisco, CA, USA, 1–6 February 2020; Volume 11286, p. 1128605. [Google Scholar]
  24. Hu, H.; Oxenløwe, L.K. Chip-based optical frequency combs for high-capacity optical communications. Nanophotonics 2021, 10, 1367–1385. [Google Scholar] [CrossRef]
  25. Yoo, S.; Lee, J.K.; Kim, K. Suppression of thermal wavelength drift in widely tunable DS-DBR laser for fast channel-to-channel switching. Opt. Express 2017, 25, 30406–30417. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A schematic diagram of the OW-DCN architecture; FSO: free-space optical; IAS: intra-cluster AWGR-based switch; EAS: inter-cluster AWGR-based switch; ToR: top of rack.
Figure 2. The functional blocks of FPGA-based ToR switch.
Figure 3. The functional blocks of ToR switch.
Figure 4. (a) The experiment setup for switch scheduling and reconfiguration; (b) insights from the priority configuration from the SDN-based controller.
Figure 5. (a) The initial look-up table of priority information; (b) The updated look-up table of priority information; (c) The time traces at the FPGA-based switch scheduler before reconfiguration; (d) the time traces at the FPGA-based switch scheduler after reconfiguration; (e) number of lost packets and contentions before reconfiguration; (f) number of lost packets and contentions after reconfiguration.
Figure 6. (a) Experimental setup of one cluster of the 4 × 4-rack OW-DCN; BERT: bit error-rate tester; (b) the actual setup.
Figure 7. BER curves versus received power at 50 Gb/s for the links between ToRs.
Figure 8. (a) Experimental setup for 50 Gb/s end-to-end optical packet-switch transmission; (b) the actual setup.
Figure 9. (a) The electrical signal generated by the pattern generator; (b) the control signal from the FPGA; (c) the switched optical packetized signals from ToR1 to ToR3 and ToR2 to ToR3; (d) the switched optical packetized signals from ToR1 to ToR4 and ToR2 to ToR4.
Figure 10. BER curves for the 50 Gb/s packet switch transmission.
Table 1. The wavelength mapping of an N × N-port AWGR.

| AWGR | O1 (ToR1) | O2 (ToR2) | O3 (ToR3) | … | ON (ToRN) |
|---|---|---|---|---|---|
| I1 (ToR1) | λ0 | λ1 | λ2 | … | λN |
| I2 (ToR2) | λ1 | λ2 | λ3 | … | λ0 |
| … | … | … | … | … | … |
| IN (ToRN) | λN | λ0 | λ1 | … | λ(N − 1) |
Table 2. The label-processing latency.

| Processing Block | Latency (ns) |
|---|---|
| Label generation (ToR) | 25.6 |
| Label packet received (switch scheduler) | 25.6 |
| ACK/NACK generation (switch scheduler) | 19.2 |
| ACK/NACK received and processed (ToR) | 73.5 |
| Optical wireless transmission (4 m) | 13.3 |
| 10G GTH (IP from Xilinx UltraScale XCVU095) (ToR), TX path | 79.2 |
| 10G GTH (IP from Xilinx UltraScale XCVU095) (ToR), RX path | 87.8 |
| 10G GTH (IP from Xilinx Virtex VC709) (switch scheduler), TX path | 145.6 |
| 10G GTH (IP from Xilinx Virtex VC709) (switch scheduler), RX path | 207.8 |
| Total | 677.6 |
Table 3. Wavelength routing map of the AWGR.

| Input | Output: ToR3 | Output: ToR4 |
|---|---|---|
| ToR1 | 1560.70 nm | 1559.08 nm |
| ToR2 | 1562.26 nm | 1560.64 nm |
Table 4. Comparison of the research on optical wireless DCNs.

| Authors | Enabled Tech | Switching Time | Switching System | Switching Complexity | Full Architecture | Experiment (Single Link/Network) |
|---|---|---|---|---|---|---|
| Arnon, S. [10] | MEMS or optical phased array | ms or μs | × | NA | × | |
| Hamza, A.S. [15] | Multipoint system | NA | ✓ | Complex | × | × |
| Chaintoutis, C. [16] | Photonic-chip-based 2D beam steering | ns | ✓ | Complex | × | |
| Alhazmi, A.S. [17] | Angle diversity transmitter | NA | × | NA | × | |
| Hamedazimi, N. [22] | Switchable mirror | ms | ✓ | Complex | ✓ | Single link (10 Gb/s) |
| Riza, N.A. [18] | Mechanically steerable links | NA | × | NA | × | |
| Ali, W. [19] | VCSEL and lens | NA | × | NA | × | Single link (10 Gb/s) |
| Deng, P. [20] | MEMS | ms | × | NA | × | Single link (10 Gb/s) |
| Ghobadi, M. [21] | Digital micro-mirror device | μs | × | NA | × | Three links (9.3 Gb/s) |
| This work | SOA and AWGR | ns | ✓ | Simple | ✓ | Network (50 Gb/s) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
