A Low-Complexity Receiver-Side Lookup Table Equalization Method for High-Speed Short-Reach IM/DD Transmission Systems
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper demonstrates a receiver-side LUT (Rx-side LUT) equalization method for high-speed short-reach intensity modulation and direct detection (IM/DD) transmission systems, aimed at relaxing the computational complexity of neural network-based equalization algorithms. Results span FCNN, GRU, and Volterra cases, including robustness tests across different datasets and baud rates up to 112 GBaud. The paper is well structured and provides a detailed analysis of various baseline equalizers; it carefully evaluates BER, execution time, neuron/tap counts, and training epochs, giving a well-rounded performance picture. After some minor revisions, I recommend publishing the work.
1. A FLOPs analysis is provided; a hardware-level FPGA/ASIC feasibility analysis would add weight to the claim of "low complexity".
2. Why did the authors choose the GRU as the RNN-type equalizer? What advantage does it offer over other equalizers, and what is the cause of that advantage?
3. The study mentions that the LUT size is affected by the number of hidden-layer neurons in the baseline neural network. How does the LUT size itself influence storage overhead and lookup efficiency? A discussion of this would be preferable.
4. The results presented in Figure 10(c–d) can be more clearly interpreted by formulating them in terms of an explicit equation.
5. The figure descriptions of Fig. 10 are dense; simplifying them would improve accessibility for readers.
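As a rough illustration of the storage/lookup trade-off raised in point 3, the following back-of-envelope sketch relates LUT size to memory footprint and per-symbol lookup cost. The float32 sample width and the brute-force nearest-neighbour search are illustrative assumptions, not the paper's actual design:

```python
def lut_cost(n_entries: int, win: int, bytes_per_sample: int = 4):
    """Rough cost model for an Rx-side LUT of n_entries windows,
    each win samples long (float32 samples assumed).

    Storage grows linearly with both the entry count and the window
    length; a naive nearest-neighbour lookup must touch every entry,
    so the per-symbol cost also scales as n_entries * win."""
    storage_bytes = n_entries * win * bytes_per_sample
    # Per LUT entry and per tap: one subtraction, one squaring, one add.
    flops_per_symbol = n_entries * 3 * win
    return storage_bytes, flops_per_symbol
```

Under this naive model, a 10,000-entry LUT with 5-tap windows would need about 200 kB of storage and about 150 kFLOPs per equalized symbol, which is why bounding the LUT size (or using a structured search) matters for lookup efficiency.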
Author Response
Please see the attachment.
Author Response File:
Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
This study presents a low-complexity receiver-side lookup table (Rx-side LUT) equalization method, aimed at reducing the computational complexity for high-speed short-reach intensity modulation and direct detection (IM/DD) transmission systems. Unlike traditional transmitter-side pre-equalization techniques, the Rx-side LUT method relies entirely on receiver-side data, utilizing a nearest-neighbor algorithm to find the closest match in the lookup table for signal equalization. Experimental results show that the proposed method significantly reduces algorithm execution time while maintaining performance comparable to traditional fully connected neural network (FCNN) and gated recurrent unit (GRU) equalization methods, thus lowering computational complexity. Some issues need to be resolved before publication.
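For reference, the nearest-neighbour LUT equalization summarized above can be sketched as follows. The window length, float32 keys, and brute-force 1-NN search are illustrative assumptions for a minimal toy version, not the authors' exact implementation:

```python
import numpy as np

def build_lut(rx_train, tx_train, win=5):
    """Build a toy Rx-side LUT: each entry maps a window of received
    samples to the corresponding transmitted (target) symbol."""
    half = win // 2
    keys, vals = [], []
    for i in range(half, len(rx_train) - half):
        keys.append(rx_train[i - half:i + half + 1])
        vals.append(tx_train[i])
    return np.array(keys), np.array(vals)

def lut_equalize(rx, keys, vals, win=5):
    """Equalize each received window by brute-force nearest-neighbour
    search over the LUT keys, outputting the stored target symbol."""
    half = win // 2
    out = np.empty(len(rx) - 2 * half)
    for j, i in enumerate(range(half, len(rx) - half)):
        window = rx[i - half:i + half + 1]
        # Euclidean distance to every LUT key; take the closest entry.
        idx = np.argmin(np.linalg.norm(keys - window, axis=1))
        out[j] = vals[idx]
    return out
```

The per-symbol cost is a distance computation against every LUT entry, with no multiply-accumulate chains through hidden layers, which is the source of the execution-time savings relative to FCNN/GRU inference.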
- The paper primarily evaluates the equalization performance for PAM4 signals, but does not consider other modulation formats (e.g., QPSK, 16-QAM). This limits the general applicability of the method. It is recommended to include validation for other modulation formats to demonstrate the versatility of the approach.
- While the article compares the Rx-side LUT with FCNN and GRU, it does not consider other traditional equalization methods, such as adaptive filters and decision feedback equalizers (DFE). Including these traditional algorithms in the comparison would provide a more comprehensive evaluation of the Rx-side LUT's advantages.
- The experiments are conducted using an idealized channel model, and there is no detailed discussion of how channel impairments (e.g., multipath effects, noise) affect the performance of the Rx-side LUT. It is recommended to consider more complex channel models and assess the method's performance under non-ideal conditions.
- Although the paper proposes a low-complexity computation method, it does not discuss the feasibility and efficiency of implementing the method in actual hardware, particularly on platforms such as FPGA or ASIC. A detailed discussion of hardware implementation and resource consumption would be beneficial.
- While the paper mentions the impact of LUT size on performance, it does not provide an in-depth analysis of how LUT optimization and size selection balance computational complexity against performance across different data lengths and training epochs. Further analysis of how to optimize the LUT size would strengthen the paper.
- The paper compares the Rx-side LUT with FCNN and GRU, but does not consider other modern neural network architectures (e.g., convolutional neural networks (CNN), Transformers). Including comparisons with these architectures would further highlight the advantages of the proposed method and its potential for broader applicability.
Author Response
Please see the attachment.
Author Response File:
Author Response.docx
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript can be published in its current version.
