Low-Latency Optical Wireless Data-Center Networks Using Nanoseconds Semiconductor-Based Wavelength Selectors and Arrayed Waveguide Grating Router

Data centers (DCs) are the key infrastructure for meeting the massively increasing requirements of big-data applications, such as high performance, easy scalability, low cabling complexity, and low power consumption. Many research efforts have been dedicated to traditional wired data center networks (DCNs). However, the static and rigid topology of DCNs based on optical cables significantly limits their flexibility, scalability, and reconfigurability. The limitations of wired connections can be addressed with optical wireless technology, which avoids cabling complexity problems while allowing dynamic adaptation and fast reconfiguration. Here, we propose and investigate a novel optical wireless data-center network (OW-DCN) architecture based on nanosecond semiconductor optical amplifier (SOA)-based wavelength selectors and an arrayed waveguide grating router (AWGR) controlled by fast field-programmable gate array (FPGA)-based switch schedulers. The full architecture, including the design, packet-switching strategy, contention-solving methodology, and reconfiguration capability, is presented and demonstrated. Dynamic switch scheduling, with an FPGA-based switch scheduler processing optical labels, and software-defined network (SDN)-based reconfiguration were experimentally confirmed. The proposed OW-DCN also achieved a power penalty of less than 2 dB at BER < 1 × 10−9 for 50 Gb/s OOK continuous and packet-switched transmission.


Introduction
Driven by the ever-growing field of big-data applications and cloud computing paradigms, data-center traffic is increasing at a very steep rate. Up to 75% of the emerging traffic load within data centers is transferred between servers and between racks [1,2]. To cope with the unprecedented traffic explosions and unbalanced traffic distributions, data center network (DCN) architecture design has become a major research priority, since DCs host hundreds of thousands of servers for data storage and processing [3]. The design of a DCN architecture needs to satisfy several requirements, such as high bandwidth, flexibility, fast reconfiguration, scalability, dynamic adaptation, and low cabling complexity [4,5]. The conventional wired DCN architecture [6,7] has so far been the dominant architecture in research. However, this wired architecture, constructed with millions of meters of copper and optical fiber, cannot efficiently support the aforementioned requirements due to the fundamental limitations of wired connections, which lead to issues such as wire ducting, heat dissipation, space utilization, and energy efficiency [8]. Furthermore, scaling and upgrading the DCN to support increasing services becomes extremely complicated, which incurs extra maintenance costs in current DC infrastructures. At the same time, the traditional wired hierarchical tree-based DCN architecture is either oversubscribed, failing to adapt to dynamic and unpredictable traffic bursts, or overprovisioned, which is extremely costly and inefficient [9].

Moreover, the switching performance with true ToR-to-ToR packet switching operating at 50 Gb/s was also experimentally demonstrated, which validates the architecture's packet-switching and delivery credentials. This paper is organized as follows. Section 2 describes the novel OW-DCN architecture and the system operation of the SWS-based ToR switch, AWGR-based optical switch, and SDN-based reconfiguration. The validation and assessment of contention solving, reconfiguration via priority assignment, and performance of data transmission and switching are reported in Section 3. Section 4 concludes the paper by summarizing the main results.

OW-DCN Architecture
The proposed OW-DCN architecture, comprising the SWS, the FPGA-based switch scheduler, and the N × N-port AWGR, is shown in Figure 1. It is divided into N clusters, and each cluster groups N racks. The K servers are interconnected by one optical ToR switch at each rack. As illustrated in Figure 1, an inter-cluster AWGR-based switch (EAS) is used to connect the N ToRs within one cluster through the inter-cluster optical wireless links, while an intra-cluster AWGR-based switch (IAS) is dedicated to the traffic transmission between the i-th ToR of each cluster (1 ≤ i ≤ N) through the intra-cluster optical wireless links. These two bi-directional optical wireless links are established via two pairs of collimators placed on each rack and each optical switch (EAS and IAS). Table 1 shows the wavelength mapping between each port of the N × N-port AWGR and each ToR. Furthermore, an SDN control plane was also implemented with the ODL and Openstack platform for monitoring, optimizing, and reconfiguring the OW-DCN. Meanwhile, the OpenFlow protocol was deployed at each ToR for the interaction with the SDN controllers. With the help of SDN, the transmission link can be dynamically reconfigured based upon the statistical collection of real-time network utilization and application requirements.

The schematic showing the functional blocks of the FPGA-based ToR switch is presented in Figure 2. Each ToR switch has a server interface and a network interface. The traffic from these two interfaces is exchanged between the intra-rack servers (intra-rack traffic), the servers in other racks of the same cluster (intra-cluster traffic), and the servers in different clusters (inter-cluster traffic). When a packet arrives at the ToR switch, the head processor first checks the destination of each packet. For packets destined to servers in the same rack, the ToR directly forwards the packet to the K intra-ToR buffers to reach the K servers. Likewise, packets destined to a server in a different rack are forwarded to the intra-buffer or inter-buffer to wait for intra-cluster or inter-cluster transmission, respectively. As shown in Figure 3, for the intra-cluster or inter-cluster transmission, if packets need to be switched at the same optical switch (IAS or EAS) to the same destination ToR in the same time slot, contention may occur, since a passive N × N-port AWGR is used for the packet switching between the ToRs. Therefore, a contention-solving procedure is implemented with an FPGA-based switch scheduler and a label channel between the ToRs and the optical switch node. For each packet, an optical label signal that contains the destination ToR and the priority information is first generated and forwarded to the IAS or EAS in advance.

The switch scheduler at the IAS or EAS side checks the label information coming from the different intra- (or inter-) cluster ToRs for possible contentions and then notifies each ToR of its delivery request status. If there is no contention, the switch scheduler sends acknowledgment (ACK) signals back to each ToR, indicating the granted requests. In the event of contention, the priorities are compared at the switch scheduler. An ACK signal is forwarded to the ToRs whose packet labels carry the higher priority, while negative acknowledgment (NACK) signals are sent back to the ToR switches whose labels carry the lower priority. According to the received ACK or NACK signal, the data packet at the ToR side is either forwarded or held for retransmission in the next time slot.

For ToRs that are allowed to send their data packets, the central wavelength of the optical transmitter is selected by the fast SWS, which is controlled by the FPGA-based controller of the ToR. The fast SWS comprises an array of lasers (or a comb laser) centered at different wavelengths that match the routing wavelengths of the AWGR (shown in Table 1), an array of SOA gates for selecting the laser (or lasers, in the event of multicasting), and an AWG for multiplexing. After the laser wavelength is selected, the electrical data packets are modulated onto an optical signal via an optical modulator and sent to the destination ToRs via the AWGR at the IAS or EAS. The nanosecond switching time of the SOAs guarantees nanosecond operation of the SWS and, thus, of the tunable transmitter and the switching. Moreover, the SOAs provide amplification to guarantee the power budget of the interconnect links. The SWS and the optical modulator can be photonically integrated to decrease the cost, footprint, and power consumption. Furthermore, instead of discrete laser modules, chip-based optical generation, such as an optical comb, can be employed as an efficient multi-wavelength laser source [24]. Finally, due to the cyclic routing characteristic of the AWGR, all N ToRs are connected by wavelengths λ1 to λN, as shown in Table 1.
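The per-time-slot contention resolution described above can be sketched as follows. This is a minimal illustration, not the FPGA implementation; the function names and the priority encoding (lower number = higher priority, ties going to the first request) are assumptions.

```python
# Sketch of the switch scheduler: each ToR submits a label (destination ToR,
# priority class); for each destination, the highest-priority request is
# granted an ACK and all other requests for that destination receive a NACK.
def schedule(labels):
    """labels: dict ToR -> (destination ToR, priority class)."""
    winners = {}  # destination ToR -> winning source ToR
    for tor, (dest, prio) in labels.items():
        # Keep the highest-priority (lowest class number) request per destination;
        # on a tie, the first request seen wins (a simplification).
        if dest not in winners or prio < labels[winners[dest]][1]:
            winners[dest] = tor
    return {tor: "ACK" if winners[labels[tor][0]] == tor else "NACK"
            for tor in labels}

# Example slot: ToR1 and ToR2 both request ToR3; ToR1 carries higher priority.
responses = schedule({1: (3, 1), 2: (3, 2), 3: (4, 2), 4: (2, 3)})
```

A ToR receiving "NACK" would, as in the text, hold its packet and resubmit the label in the next time slot.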
Apart from the label generation and wavelength selection of the data packets, the ToR switch also collects the information on granted and rejected transmission requests (ACK/NACK signals) and reports it to the SDN controller for priority reconfiguration, to further enhance the performance in terms of packet loss and latency. At the physical layer, even if the switch scheduler can prevent packet contention, there may still be packet losses at the ToR buffers and high transmission latencies caused by an unbalanced, heavy network traffic load. Furthermore, packet losses caused by buffer overflow may also occur when the system is under a high traffic load, due to the higher likelihood of bursty traffic arriving at the electronic buffers within a short time. Therefore, the traffic statistics of the underlying data plane are monitored and reported to the SDN controller via the OpenFlow agent. Based on this information, with the help of an orchestrator, the SDN controller can balance the network traffic load by pushing updated look-up tables to the FPGA-based ToRs and schedulers in real time. Consequently, the performance of the network in terms of latency and packet loss can be improved, and better network utilization is achieved for supporting diverse types of traffic.
It may be noted that the OW-DCN enables a flat architecture that allows different path interconnections between the ToRs. Single-hop communication suffices to forward the traffic to ToRs residing in the same cluster via the IAS, and, at most, two-hop communication is needed for the interconnection between ToRs located in different clusters. This flexible optical network connection offers the advantage of supporting different load-balancing algorithms and ensuring fault protection. Moreover, by exploiting an AWGR with more ports, a more scalable architecture interconnecting more ToRs can be achieved. A 90 × 90-port AWGR with 50 GHz bandwidth was demonstrated in [25]; it can be inferred that an OW-DCN with up to 324,000 servers (if each ToR groups 40 servers) is, in principle, possible. Moreover, with the help of the periodic free spectral range (FSR) feature of the AWGR, the capacity of each link could be dynamically increased without changing the infrastructure.
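The cyclic routing property and the scalability estimate above can be illustrated with a short sketch. The modulo mapping below is a common AWGR convention assumed for illustration; the exact port-to-wavelength assignment of Table 1 may differ.

```python
# Cyclic AWGR routing: input port i reaches output port j on wavelength
# lambda_k with k = (i + j) mod N (an assumed, commonly used convention).
def awgr_wavelength(i, j, n):
    return (i + j) % n

# Any single input port reaches all N outputs using N distinct wavelengths,
# which is why a passive AWGR plus a fast wavelength selector suffices.
n = 4
assert {awgr_wavelength(0, j, n) for j in range(n)} == set(range(n))

# Scalability estimate from the text: a 90 x 90-port AWGR gives 90 clusters
# of 90 racks; with 40 servers per ToR -> 90 * 90 * 40 = 324,000 servers.
servers = 90 * 90 * 40
```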

Experimental Validation
To demonstrate the working principle of the proposed OW-DCN and to evaluate the transmission performance, a fully functional 4 × 4-rack DCN prototype with a control plane and data plane was built. First, the FPGA-based switch scheduler with the label system was validated with a testbed comprising an FPGA-based switch scheduler and four FPGA-based ToR switches to demonstrate the system's switch-scheduling ability for contention solving. Furthermore, the SDN control plane based on ODL and Openstack was implemented to reconfigure the network to support dynamic transmission traffic patterns. The data plane network, based on a 4 × 4-port AWGR and an SWS at each ToR, was assessed to validate the transmission and switching performance for all the combinations of links. Moreover, packet-switching operation with true ToR-to-ToR packetized payloads was experimentally assessed.

Switch Schedule and Reconfiguration of the OW-DCN
The experimental setup to investigate the full dynamic operation, including the switch scheduling and SDN-based reconfiguration of a 4 × 4-rack DCN, is shown in Figure 4a. It consisted of four ToR switches and one optical switch scheduler. The four ToR switches were constructed with four Xilinx UltraScale FPGA-based ToRs equipped with OpenFlow agents. At the optical switch side (IAS or EAS), another Xilinx Virtex FPGA was implemented as the switch scheduler. For each of these FPGAs, 10 Gb/s SFP transceivers were used for the communication between the ToRs and the switch scheduler through the OW link. Furthermore, the ODL- and Openstack-based SDN controller was also applied for monitoring, load balancing, and priority assignment, based on the statistical traffic collection from the OpenFlow agent of each ToR.
To evaluate the performance of the OW-DCN, Ethernet frames were randomly generated by the FPGA-based ToRs to emulate the aggregated server traffic. The frames destined to the servers between these four racks were sent to the data packet processor for generating optical data packets. For each packet, a corresponding optical label carrying the destination information and the transmission priority class was generated. The transmission priority was assigned by the ODL controller via the OpenFlow agent according to the application requirements. This 4 × 4-rack DC featured three classes of priority. The priority was set as '1 > 2 > 3', which means the label with priority '1' had the highest priority. The look-up table with the priority information was initialized by the SDN control plane and stored at the ToR switch and the switch scheduler. At each time slot, the ToR switch sent the optical label associated with the data packet to the switch scheduler via the OW link. Based on the received optical label requests, the switch scheduler processed all the labels, resolved the packet contention, compared the priorities of the contending ToRs, and then generated the scheduling response signals (ACK/NACK) back to the corresponding ToRs.
For the data packets that were granted for transmission (receiving an ACK signal), the average label-processing latency across the switch network was measured by using the traffic analyzer and the logic analyzer inside the FPGA. The average latency introduced by this label system included the transmission delay and the data-processing delay across the FPGA-based ToR and the switch scheduler. The value of each process is summarized in Table 2. The total latency was 677.6 ns. For each retransmission, this latency accumulates. Therefore, retransmissions need to be prevented in order to decrease the network latency and, in turn, release the buffers and reduce the packet loss. Since the look-up table was the key element of the transmission system that decided the transmission priority and, hence, the number of retransmissions, the SDN control system was applied to dynamically update the look-up table according to statistical real-time traffic.
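Because the 677.6 ns label-system latency is paid again on every retransmission, the expected scheduling latency grows with the NACK probability. A small sketch of this accumulation follows; the per-slot latency is the total from Table 2, while the independent-retry (geometric) model is an assumption for illustration only.

```python
# Expected label-system latency when each transmission attempt independently
# fails with probability p_nack and is retried in the next time slot: the
# per-attempt cost accumulates over the expected number of attempts 1/(1 - p).
SLOT_LATENCY_NS = 677.6  # total label-processing latency from Table 2

def expected_latency_ns(p_nack):
    attempts = 1.0 / (1.0 - p_nack)  # mean of a geometric retry process
    return SLOT_LATENCY_NS * attempts

# With no contention the latency is exactly one slot; at a 20% NACK rate the
# expected latency grows by 25%.
base = expected_latency_ns(0.0)
loaded = expected_latency_ns(0.2)
```

This is why the SDN-driven priority updates, which reduce the retransmission count, directly reduce the average latency.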
The initial configuration of the look-up table is shown in Figure 5a. The data packet from ToR1 has the highest priority, ToR2 the second highest, followed by ToR3 and, finally, ToR4. According to this look-up table, Figure 5c illustrates the time traces of the processed label signals (optical destination and priority) and the contention solving with the corresponding ACK/NACK signals at the FPGA-based switch scheduler side. A granted request is confirmed by receiving a signal equal to the requested destination ToR number (ACK signal); otherwise, the ToR receives a signal different from the requested destination ToR number (NACK signal). As shown in Figure 5c, contentions between ToR1 and ToR2, ToR1 and ToR4, and ToR3 and ToR4 happened at time slots N, N + 1, and N + 2, respectively. According to the priorities, the switch scheduler sent positive acknowledgments (ACK) to the ToRs with higher-priority requests (ToR1, ToR3, and ToR4 in time slot N, and ToR1, ToR2, and ToR3 in time slots N + 1 and N + 2) and negative acknowledgments back to the ToRs with lower-priority requests (ToR2 in time slot N, ToR4 in time slots N + 1 and N + 2). The scheduler sent ACKs back to all the ToRs in time slots N + 3 and N + 4, as there were no contentions. The ToRs that received a NACK signal retransmitted by sending the label again in the next time slot (ToR2 in time slot N + 1, ToR4 in time slots N + 2 and N + 3).

In the meantime, the four FPGA-based ToR switches also collected and reported the traffic information to the SDN-based centralized controller via the OpenFlow link. At the SDN controller side, the ODL is in charge of monitoring the transmission status. Figure 5e shows the real-time collected numbers of contentions and lost packets. It should be noted that the packet loss was mainly due to the unbalanced network traffic load, which introduced more packet retransmissions, filling the electronic buffers and leading to overflow. Therefore, based on the monitored information, a load-balancing algorithm in the SDN control plane was applied to prevent this situation by assigning higher priority to the transmission links with higher traffic load. Specifically, the SDN control plane dynamically optimized the network configuration and pushed the new look-up tables and priorities to the ToRs via the OpenFlow protocol, improving the network efficiency and decreasing the transmission latency. Figure 4b provides insights into the priority configuration from the SDN-based controller, and Figure 5b shows the updated look-up table with the new priority allocation. In this new configuration, the packets from ToR1 no longer had the highest priority, while the packets from ToR2 to ToR1, ToR4 to ToR2, and ToR4 to ToR3 had the highest priority. Thus, when contention occurred, the switch schedule and the retransmissions changed due to the priority changes of each transmission link, as shown in Figure 5d. The improved performance of the proposed OW-DCN is also demonstrated by the reduced number of contentions and zero packet loss (shown in Figure 5f). Moreover, this reconfiguration operates automatically under the management of the SDN control plane, without any manual operation.
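The load-balancing policy can be sketched as: rank the links by their monitored traffic load and assign higher priority (a lower class number) to the busier links, then push the resulting table to the ToRs. The code below is a simplified illustration of that policy; the controller's actual algorithm is not specified at this granularity, and the function names and traffic figures are assumptions.

```python
# Sketch of priority reassignment from monitored traffic statistics: busier
# source->destination links receive a higher priority (lower class number),
# in the spirit of the reconfiguration shown in Figure 5b.
def rebalance_priorities(traffic, num_classes=3):
    """traffic: dict (src, dst) -> monitored load; returns link -> class."""
    ranked = sorted(traffic, key=traffic.get, reverse=True)  # busiest first
    bucket = max(1, len(ranked) // num_classes)  # links per priority class
    return {link: min(rank // bucket + 1, num_classes)
            for rank, link in enumerate(ranked)}

# The busiest link (ToR2 -> ToR1) gets priority class 1 (highest).
prio = rebalance_priorities({(2, 1): 90, (4, 2): 80, (1, 3): 10, (3, 4): 5})
```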

Data Plane Transmission Performance Evaluation
The transmission performance is further evaluated in this section. Our evaluation follows a stepwise approach, starting with 50 Gb/s traffic transmission for all the validated combinations of ToRs. This is followed by a demonstration of the end-to-end 50 Gb/s packet-switching experiment.
Firstly, an experiment was set up with a one-cluster 4 × 4-rack DCN prototype, implemented with the SWS and a 4 × 4 AWGR with 200 GHz channel spacing, as shown in Figure 6. Four lasers and four SOAs were employed to implement the SWS prototype. The central wavelength of each laser output was set to match the wavelength routing map of the AWGR. For each transmission evaluation, the SOA gates of the SWS were turned on/off in order to set the central wavelength of the transmitter and assess each of the transmission paths between the source and destination ToRs of the DCN. Since the optical modulator and the AWGR were polarization-dependent, polarization controllers were used to align the polarization state of the laser beam with the Mach-Zehnder modulator (MZM) and the AWGR separately. The 50 Gb/s PRBS-31 NRZ-OOK signals were then coupled into a triplet lens collimator (Thorlabs TC18FC-1550) to reach the AWGR through an optical wireless path of 2 m. Since the AWGR forwarded the light signal between the input and output ports with a fixed routing map, the optical signal from the transmitting ToR was switched to each destination ToR by switching the SOA gates of the SWS. At the destination ToR, the transmission performance was evaluated with a BER tester measuring the bit error rate of each transmission link. The back-to-back (BtB) measurement was carried out by connecting the output of the MZM directly to the BER tester. Figure 7 shows the BER curves measured at 50 Gb/s for all 12 links. The results confirm nearly identical, error-free system performance with a power penalty of less than 2 dB at BER < 1 × 10−9 with respect to the BtB transmission. This 2 dB penalty was mainly due to the decrease in the SNR values in addition to spectral filtering by the AWGR.
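For completeness, the 12 measured links are simply all ordered source–destination combinations of the 4 ToRs, excluding loopback, i.e. N(N − 1) = 12. A trivial enumeration sketch (the physical setup is as described above):

```python
# Enumerate the 12 ToR-to-ToR transmission paths of a 4-rack cluster:
# every ordered (source, destination) pair with source != destination.
from itertools import permutations

links = list(permutations(range(1, 5), 2))
```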
In the second experimental setup, shown in Figure 8, an optical packet-switched proof-of-concept testbed of a 4 × 4-rack OW-DCN operating at 50 Gb/s was implemented. The central wavelength of ToR1's and ToR2's optical packets was fast-switched by the FPGA-controlled SWS to transmit the 50 Gb/s packetized signal to ToR3 or ToR4. To achieve fast and accurate packet switching at 50 Gb/s, the FPGA controlling the SWSs of ToR1 and ToR2 was time-synchronized with the 50 Gb/s pattern generator, which drove the 50 Gb/s MZM. Four SOAs were employed to demonstrate fully dynamic, fast packet-switched transmission from ToR1 and ToR2 to ToR3 or ToR4. Specifically, optical packets at 50 Gb/s OOK with a 7 ns (350-bit) payload time and a 1 ns (50-bit) guard time were generated, as shown in Figure 9a. Figure 9b shows one of the FPGA control signals for an SOA. The duration of this control signal was chosen in accordance with the packet duration of 8 ns. When the optical packets had to be transmitted to ToR3 or ToR4, a trigger signal from the pattern generator was sent to the FPGA to trigger the generation of the control signals simultaneously. This guaranteed accurate time synchronization so that the wavelength switching occurred right in the middle of the gap between two consecutive packets. Moreover, the 1 ns guard time was designed to match the fast switching time of the SOAs. The central wavelengths of the two lasers at ToR1 and ToR2 matched the wavelength routing map of the AWGR, as shown in Table 3. The SWS of ToR1 was controlled to select the laser at 1560.7 nm to send the optical packets to ToR3; otherwise, the laser at 1559.09 nm was selected to send the optical packets to ToR4. At ToR2, the SWS selected the laser at 1562.26 nm or at 1560.64 nm to send the optical packets to ToR3 or ToR4, respectively.
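The wavelength-selection logic described above reduces to a small lookup: for each (source, destination) pair the scheduler opens the SOA gate of the laser whose wavelength the AWGR routes to the destination, and the packet bit counts follow directly from the 50 Gb/s line rate. A sketch restating the Table 3 wavelengths (function and constant names are illustrative, not from the paper):

```python
# (source ToR, destination ToR) -> laser wavelength in nm, restating Table 3.
ROUTING_MAP_NM = {
    ("ToR1", "ToR3"): 1560.70,
    ("ToR1", "ToR4"): 1559.09,
    ("ToR2", "ToR3"): 1562.26,
    ("ToR2", "ToR4"): 1560.64,
}

LINE_RATE_BPS = 50e9  # 50 Gb/s OOK line rate
PAYLOAD_S = 7e-9      # 7 ns payload time
GUARD_S = 1e-9        # 1 ns guard time matched to the SOA switching speed

def select_wavelength(src: str, dst: str) -> float:
    """Wavelength whose SOA gate the FPGA scheduler opens for this packet."""
    return ROUTING_MAP_NM[(src, dst)]

payload_bits = round(LINE_RATE_BPS * PAYLOAD_S)       # 350 bits per payload
guard_bits = round(LINE_RATE_BPS * GUARD_S)           # 50 bits of guard time
control_signal_ns = (PAYLOAD_S + GUARD_S) * 1e9       # 8 ns SOA control duration
```

The 8 ns control-signal duration matches the packet period, so each gate toggle falls inside a guard interval.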
Figure 9c,d shows the switched optical packets from ToR1 and ToR2 to ToR3 or ToR4, respectively, according to the control signals provided to the SWSs of ToR1 and ToR2. The optical packets were amplified by an EDFA to compensate for the MZM loss and launched via a triplet lens collimator (Thorlabs TC18FC-1550) into the free-space link, then collected by another collimator at the AWGR side. Next, an optical splitter divided the optical power of the packets between input port 1 and port 2 of the AWGR for switching to the destination ToR3 (output port 3) or ToR4 (output port 4).
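The fixed routing map that makes this possible comes from the cyclic wavelength routing property of an N × N AWGR: a signal on wavelength channel w entering input port i exits output port (i + w) mod N, so every channel defines a contention-free permutation of the ports. A generic model of this behavior (port and channel indexing is illustrative; the actual channel assignment of the device follows its routing table):

```python
def awgr_output_port(input_port: int, channel: int, n_ports: int = 4) -> int:
    """Generic cyclic wavelength routing of an N x N AWGR:
    each wavelength channel maps the input ports onto the output
    ports as a fixed permutation."""
    return (input_port + channel) % n_ports

# For every wavelength channel the mapping is a permutation of the ports,
# so simultaneous inputs on different ports never collide at an output.
for ch in range(4):
    outputs = {awgr_output_port(p, ch) for p in range(4)}
    assert outputs == set(range(4))
```

This permutation property is what lets the SWS steer packets purely by wavelength, with no active switching element inside the AWGR itself.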
A fiber delay line and an inline attenuator were used to ensure that the output signals from the AWGR were synchronized and had the same power level. The outputs of the AWGR were then transmitted to ToR3 or ToR4 via another pair of triplet lens collimators. A BER tester was used to evaluate the quality of the received 50 Gb/s optical packets. Figure 10 shows the BER curves of the optical packets evaluated at the receiver side of ToR3 and ToR4. The back-to-back BER of the optical packets is reported for reference. The results indicate that the fast switching has a very limited impact on signal quality, with only a 0.6 dB power penalty. These experimental results therefore prove the feasibility of high-data-rate (50 Gb/s) fast optical packet switching with the proposed fast SWS and AWGR OW-DCN switching system.
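Power penalties like the 0.6 dB reported here are read off the measured BER curves as the horizontal offset between the switched-link curve and the back-to-back curve at the 1 × 10−9 reference. A minimal sketch of that extraction, interpolating log10(BER) against received power (the sample curves below are synthetic illustrations, not the measured data):

```python
import math

def power_at_ber(powers_dbm, bers, target_ber=1e-9):
    """Linearly interpolate log10(BER) vs received power (dBm) to find
    the power at which a measured curve crosses the target BER."""
    logs = [math.log10(b) for b in bers]
    target = math.log10(target_ber)
    for (p0, l0), (p1, l1) in zip(zip(powers_dbm, logs),
                                  zip(powers_dbm[1:], logs[1:])):
        if min(l0, l1) <= target <= max(l0, l1):
            return p0 + (target - l0) * (p1 - p0) / (l1 - l0)
    raise ValueError("target BER not bracketed by the measured points")

# Synthetic back-to-back and switched-link curves offset by 1 dB:
btb = power_at_ber([-14, -13, -12, -11], [1e-5, 1e-7, 1e-9, 1e-11])
link = power_at_ber([-13, -12, -11, -10], [1e-5, 1e-7, 1e-9, 1e-11])
penalty_db = link - btb  # 1.0 dB for these synthetic curves
```

Interpolating in log10(BER) is the conventional choice because BER curves are close to straight lines on a log scale over the waterfall region.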

Discussion and Conclusions
We presented and assessed a novel flat OW-DCN architecture based on SOA-based SWSs, FPGA-based switch schedulers, N × N-port AWGRs, and SDN-enabled reconfiguration. The introduced optical wireless technology avoids cabling complexity and yields a flexible, low-power-consumption DCN while enabling fast and reconfigurable all-optical packet switching. Table 4 compares the proposed architecture with existing research on OW-DCNs. Different switching technologies have been applied to provide wireless interconnections. However, most of them, e.g., MEMS, have been demonstrated with millisecond switching timescales and complex control methodologies, which limit network throughput and scalability. Furthermore, only theoretical model simulations or simple-link experimental demonstrations are reported in these studies. Moreover, no OW-DCN has been proposed with both a complete network switching system and a full architecture design.

For the proposed architecture, a nanosecond fast optical switch with a complete architectural interconnection and switching system design was presented. A fully functional control plane based on label processing, contention resolution, retransmission, and fast reconfiguration for supporting dynamic traffic patterns and facilitating network provisioning was experimentally verified. The transmission performance with the functional SWS block was evaluated with a 4 × 4-rack OW-DCN based on a 4 × 4-port AWGR. It was shown that 50 Gb/s error-free (BER < 10−9) transmission was achieved for all the validated links between the ToRs. The results of the fast (50 Gb/s) optical packet switching showed good performance, with a power penalty of around 0.6 dB with respect to the back-to-back packetized traffic. This confirms the ability of the proposed OW-DCN switching system to support fast and accurate packet-level switching.