Optics for AI and AI for Optics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (30 November 2019) | Viewed by 43869

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Jinlong Wei
Peng Cheng Laboratory, Shenzhen 518055, China
Interests: optical communications; signal processing; modulation/coding; machine learning

Prof. Dr. Alan Pak Tao Lau
Department of Electrical Engineering, Hong Kong Polytechnic University, Hong Kong, China
Interests: DSP; machine learning; optical communications and networks

Prof. Dr. Lilin Yi
Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: smart fiber lasers; intelligent optical communications

Dr. Elias Giacoumidis
Dublin City University, School of Electronic Engineering, Radio and Optical Communications Lab, Dublin 9, Ireland
Interests: optical communications; DSP; machine learning

Dr. Qixiang Cheng
Columbia University, New York, USA
Interests: optical integrated circuits based on III/V and silicon platforms; machine learning

Special Issue Information

Dear Colleagues,               

We now live in an era of information explosion and digital revolution that has resulted in rapid technological developments in many aspects of life. Artificial intelligence (AI) is playing a key role in this digital transformation. New AI applications require edge cloud computing with low-latency connections, and a significant challenge is the large amount of computing power they demand. It is envisaged that optics could tackle this power issue efficiently by interconnecting AI compute clusters and, in addition, by simplifying the underlying computing architecture, thus providing significant power consumption savings.

On the other hand, optical networking is becoming increasingly complex, driven by ever more data and more connections. Generating, transmitting, and recovering such high volumes of data requires advanced signal processing and networking technologies that combine high performance with cost and power efficiency. AI is especially useful for optimization and performance prediction in systems that exhibit complex behaviors, where traditional signal processing algorithms may not be as efficient as AI algorithms. As a typical subcategory of AI, machine learning (ML) methods have recently entered the field of optics, ranging from quantum mechanics to nanophotonics, optical communication, and optical networks.

A few rudimentary ML algorithms have already been introduced in the optical domain with promising results. For instance, improvements in transmission capacity have been observed without modifications to the network infrastructure, as a logical add-on to state-of-the-art transmission products. It is envisaged that ML tools will enable gains significantly beyond conventional systems for meshed network applications, fiber plant diversity, and flexible networks. Since it has been reported that ML can recover signals in the presence of random noise, this can potentially benefit other applications, such as 5G mobile networks, visible-light communications, satellite communication systems, and even optical sensing. AI can also be harnessed to capture laser dynamics and parameters that are difficult to model by standard approaches. This Special Issue encourages proposals of optical implementations of AI and ML algorithms.

Potential topics covering the two aspects described above include, but are not limited to, the following:

  • Analysis of the computing complexity and power consumption of AI applications;
  • Introduction and implementation of optical approaches, such as all-optical signal processing and integrated-photonic implementations of reservoir computing neural networks, that can fundamentally improve the computing and power efficiency of AI applications;
  • Novel photonic devices that facilitate optical computing for ML applications, such as optical matrix multipliers using free-space optics and integrated photonic platforms;
  • Photonic integrated circuits for deep neural networks;
  • AI for optical communications incorporating digital signal processing in the physical layer, such as nonlinearity mitigation, performance optimization, optical performance monitoring, and intelligent test and measurement;
  • AI for the optical communications network layer, including network operation and management, routing, resource assignment, and cognitive transport networks;
  • AI for highly sensitive laser noise characterization, which might include inference of laser dynamics or of chaotic oscillatory dynamics in semiconductor lasers;
  • AI for optical sensing, which might include polarization sensing, OSNR sensing, etc.;
  • AI for quantum computation, theory, and communications, including, for example, quantum cryptography (quantum key distribution);
  • Review articles describing the current state of the art in AI and optics.

Dr. Jinlong Wei
Prof. Dr. Alan Pak Tao Lau
Prof. Dr. Lilin Yi
Dr. Elias Giacoumidis
Dr. Qixiang Cheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • edge cloud computing
  • optical communication
  • optical networking
  • nanophotonics
  • semiconductor laser dynamics
  • performance monitoring
  • optical sensing
  • quantum communications

Published Papers (13 papers)


Editorial


3 pages, 174 KiB  
Editorial
Special Issue on “Optics for AI and AI for Optics”
by Jinlong Wei, Lilin Yi, Elias Giacoumidis, Qixiang Cheng and Alan Pak Tao Lau
Appl. Sci. 2020, 10(9), 3262; https://doi.org/10.3390/app10093262 - 08 May 2020
Cited by 2 | Viewed by 1650
Abstract
We live in an era of information explosion and digital revolution that has resulted in rapid technological developments in different aspects of life [...] Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)

Research


15 pages, 3692 KiB  
Article
Numerical Simulation of an InP Photonic Integrated Cross-Connect for Deep Neural Networks on Chip
by Bin Shi, Nicola Calabretta and Ripalta Stabile
Appl. Sci. 2020, 10(2), 474; https://doi.org/10.3390/app10020474 - 09 Jan 2020
Cited by 4 | Viewed by 3681
Abstract
We propose a novel photonic accelerator architecture based on a broadcast-and-weight approach for a deep neural network through a photonic integrated cross-connect. The single neuron and the complete neural network operation are numerically simulated. The weight calibration and weighted addition are reproduced and demonstrated to behave as in the experimental measurements. A dynamic range higher than 25 dB is predicted, in line with the measurements. The weighted addition operation is also simulated and analyzed as a function of the optical crosstalk and the number of input colors involved. In particular, while an increase in optical crosstalk negatively influences the simulated error, a greater number of channels results in better performance. The iris flower classification problem is solved by implementing the weight matrix of a trained three-layer deep neural network. The performance of the corresponding photonic implementation is numerically investigated by tuning the optical crosstalk and waveguide loss, in order to anticipate energy consumption per operation. The analysis of the prediction error as a function of the optical crosstalk per layer suggests that the first layer is essential to the final accuracy. The ultimate accuracy shows a quasi-linear dependence between the prediction accuracy and the errors per layer for a normalized root mean square error lower than 0.09, suggesting that there is a maximum level of error permitted at the first layer for guaranteeing a final accuracy higher than 89%. However, it is still possible to find good local minima even for an error higher than 0.09, due to the stochastic nature of the network we are analyzing. Lower levels of path losses allow for half the power consumption at the matrix multiplication unit, for the same error level, offering opportunities for further improved performance. The good agreement between the simulations and the experiments offers a solid base for studying the scalability of this kind of network. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)

14 pages, 1290 KiB  
Article
Histogram Based Clustering for Nonlinear Compensation in Long Reach Coherent Passive Optical Networks
by Ivan Aldaya, Elias Giacoumidis, Geraldo de Oliveira, Jinlong Wei, Julián Leonel Pita, Jorge Diego Marconi, Eric Alberto Mello Fagotto, Liam Barry and Marcelo Luis Francisco Abbade
Appl. Sci. 2020, 10(1), 152; https://doi.org/10.3390/app10010152 - 23 Dec 2019
Cited by 11 | Viewed by 3158
Abstract
In order to meet the increasing capacity requirements, network operators are extending their optical infrastructure closer to the end-user while making more efficient use of the resources. In this context, long reach passive optical networks (LR-PONs) are attracting increasing attention. Coherent LR-PONs based on high-speed digital signal processors represent a high-potential alternative because, alongside the inherent mixing gain and the possibility of amplitude and phase diversity formats, they pave the way to compensating linear impairments more efficiently than in traditional direct detection systems. The performance of coherent LR-PONs is then limited by the combined effect of noise and nonlinear distortion. The noise is particularly critical in single-channel systems where, in addition to the elevated fibre loss, the splitting losses should be considered. In such systems, Kerr-induced self-phase modulation emerges as the main limitation to the maximum capacity. In this work, we propose a novel clustering algorithm, denominated histogram-based clustering (HBC), that employs the spatial density of the points of a 2D histogram to identify the borders of high-density areas in order to classify nonlinearly distorted noisy constellations. Simulation results reveal that for a 100 km long LR-PON with a 1:64 splitting ratio, at optimum power levels, HBC presents a Q-factor 0.57 dB higher than maximum likelihood and 0.21 dB higher than k-means. In terms of nonlinear tolerance, at a BER of 2×10⁻³, our method achieves a gain of ∼2.5 dB and ∼1.25 dB over maximum likelihood and k-means, respectively. Numerical results also show that the proposed method can operate over blocks as small as 2500 symbols. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
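
The k-means baseline that HBC is benchmarked against can be illustrated with a minimal sketch. The Python snippet below is a toy under assumed constellation, distortion, and noise parameters, not the authors' HBC algorithm or simulation setup: it blindly clusters a nonlinearly rotated 16-QAM constellation with scikit-learn's KMeans, the kind of decision-region formation that HBC refines using 2D-histogram density information.

```python
# Toy illustration of k-means constellation clustering (the baseline compared
# against HBC in the paper above); constellation size, noise level and sample
# count are illustrative assumptions, not the paper's experimental parameters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Ideal 16-QAM constellation (toy values, unit average power not enforced).
levels = np.array([-3, -1, 1, 3])
ideal = np.array([complex(i, q) for i in levels for q in levels])

# Simulated received symbols: random 16-QAM data plus AWGN and a mild
# amplitude-dependent phase rotation to mimic self-phase modulation.
tx = rng.choice(ideal, size=20000)
rx = tx * np.exp(1j * 0.02 * np.abs(tx) ** 2)
rx += rng.normal(0, 0.3, tx.size) + 1j * rng.normal(0, 0.3, tx.size)

# Blind clustering into 16 classes; the cluster centres track the distorted
# constellation points, so decisions follow the nonlinearly rotated grid.
points = np.column_stack([rx.real, rx.imag])
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(points)

labels = km.labels_            # cluster index per received symbol
centres = km.cluster_centers_  # distorted constellation centres
print(centres.round(2))
```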

14 pages, 3865 KiB  
Article
Deep Neural Network Equalization for Optical Short Reach Communication
by Maximilian Schaedler, Christian Bluemm, Maxim Kuschnerov, Fabio Pittalà, Stefano Calabrò and Stephan Pachnicke
Appl. Sci. 2019, 9(21), 4675; https://doi.org/10.3390/app9214675 - 02 Nov 2019
Cited by 25 | Viewed by 4125
Abstract
Nonlinear distortion has always been a challenge for optical communication due to the nonlinear transfer characteristics of the fiber itself. The next frontier for optical communication is a second type of nonlinearities, which results from optical and electrical components. They become the dominant nonlinearity for shorter reaches. The highest data rates cannot be achieved without effective compensation. A classical countermeasure is receiver-side equalization of nonlinear impairments and memory effects using Volterra series. However, such Volterra equalizers are architecturally complex and their parametrization can be numerically unstable. This contribution proposes an alternative nonlinear equalizer architecture based on machine learning. Its performance is evaluated experimentally on coherent 88 Gbaud dual polarization 16QAM 600 Gb/s back-to-back measurements. The proposed equalizers outperform Volterra and memory polynomial Volterra equalizers up to 6th order at a target bit-error rate (BER) of 10⁻² by 0.5 dB and 0.8 dB in optical signal-to-noise ratio (OSNR), respectively. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
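
A generic receiver-side neural-network equalizer of the kind compared against Volterra above can be sketched compactly. The snippet below is an illustrative toy, with an assumed PAM-4 signal, a toy nonlinear channel, and scikit-learn's MLPRegressor standing in for the paper's deep network and 16QAM experiment: it regresses each transmitted symbol from a sliding window of received samples.

```python
# Minimal sketch of a receiver-side neural-network equalizer: predict the
# current transmitted symbol from a window of received samples. Window size,
# layer widths and the toy channel are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n, taps = 20000, 7                      # symbols, equalizer memory (odd)

tx = rng.choice([-3.0, -1.0, 1.0, 3.0], size=n)   # PAM-4 toy signal
# Toy nonlinear channel: linear ISI + memoryless cubic term + noise.
h = np.array([0.05, 0.2, 1.0, 0.2, 0.05])
rx = np.convolve(tx, h, mode="same")
rx = rx + 0.05 * rx ** 3 + rng.normal(0, 0.1, n)

# Build sliding windows of received samples centred on each symbol.
half = taps // 2
X = np.array([rx[i - half:i + half + 1] for i in range(half, n - half)])
y = tx[half:n - half]

nn = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                  max_iter=300, random_state=0)
nn.fit(X[:15000], y[:15000])            # train on the first part
y_hat = nn.predict(X[15000:])           # equalize held-out symbols

# Hard decisions to the nearest PAM-4 level and symbol error rate.
grid = np.array([-3.0, -1.0, 1.0, 3.0])
dec = grid[np.abs(y_hat[:, None] - grid).argmin(axis=1)]
print("symbol error rate:", np.mean(dec != y[15000:]))
```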

15 pages, 8863 KiB  
Article
Photon Enhanced Interaction and Entanglement in Semiconductor Position-Based Qubits
by Panagiotis Giounanlis, Elena Blokhina, Dirk Leipold and Robert Bogdan Staszewski
Appl. Sci. 2019, 9(21), 4534; https://doi.org/10.3390/app9214534 - 25 Oct 2019
Cited by 10 | Viewed by 3589
Abstract
CMOS technologies facilitate the possibility of implementing quantum logic in silicon. In this work, we discuss a minimalistic modelling of entangled photon communication in semiconductor qubits. We demonstrate that electrostatic actuation is sufficient to construct and control desired potential energy profiles along a Si quantum dot (QD) structure allowing the formation of position-based qubits. We further discuss a basic mathematical formalism to define the position-based qubits and their evolution under the presence of external driving fields. Then, based on Jaynes–Cummings–Hubbard formalism, we expand the model to include the description of the position-based qubits involving four energy states coupled with a cavity. We proceed with showing an anti-correlation between the various quantum states. Moreover, we simulate an example of a quantum trajectory as a result of transitions between the quantum states and we plot the emitted/absorbed photons in the system over time. Lastly, we examine the system of two coupled position-based qubits via a waveguide. We demonstrate a mechanism to achieve a dynamic interchange of information between these qubits over larger distances, exploiting both an electrostatic actuation/control of qubits and their photon communication. We define the entanglement entropy between two qubits and we find that their quantum states are in principle entangled. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)

8 pages, 2440 KiB  
Article
A Blind Nonlinearity Compensator Using DBSCAN Clustering for Coherent Optical Transmission Systems
by Elias Giacoumidis, Yi Lin, Mutsam Jarajreh, Sean O’Duill, Kevin McGuinness, Paul F. Whelan and Liam P. Barry
Appl. Sci. 2019, 9(20), 4398; https://doi.org/10.3390/app9204398 - 17 Oct 2019
Cited by 17 | Viewed by 4531
Abstract
Coherent fiber-optic communication systems are limited by the Kerr-induced nonlinearity. Benchmark optical and digital nonlinearity compensation techniques are typically complex and tackle deterministic-induced nonlinearities. However, these techniques ignore the impact of stochastic nonlinear distortions in the network, such as the interaction of fiber nonlinearity with amplified spontaneous emission from optical amplification. Unsupervised machine learning clustering (e.g., K-means) has recently been proposed as a practical approach to the blind compensation of stochastic and deterministic nonlinear distortions. In this work, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is employed, for the first time, for blind nonlinearity compensation. DBSCAN is tested experimentally in a 40 Gb/s 16 quadrature amplitude-modulated system at 50 km of standard single-mode fiber transmission. It is shown that at high launched optical powers, DBSCAN can offer up to 0.83 and 8.84 dB enhancement in Q-factor when compared to conventional K-means clustering and linear equalisation, respectively. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
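
For readers unfamiliar with the algorithm, density-based clustering of a distorted constellation can be sketched in a few lines. The snippet below uses scikit-learn's DBSCAN on a simulated, nonlinearly rotated 16-QAM constellation; the eps and min_samples values and the toy distortion model are illustrative assumptions rather than the experimental settings of the paper.

```python
# Toy illustration of DBSCAN on a distorted 16-QAM constellation. The eps and
# min_samples values are illustrative assumptions; in practice they would be
# tuned to the density of the received constellation.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
levels = np.array([-3, -1, 1, 3])
ideal = np.array([complex(i, q) for i in levels for q in levels])

tx = rng.choice(ideal, size=20000)
# Amplitude-dependent phase rotation mimics Kerr-induced nonlinear distortion.
rx = tx * np.exp(1j * 0.03 * np.abs(tx) ** 2)
rx += rng.normal(0, 0.25, tx.size) + 1j * rng.normal(0, 0.25, tx.size)

points = np.column_stack([rx.real, rx.imag])
db = DBSCAN(eps=0.2, min_samples=20).fit(points)

labels = db.labels_                   # -1 marks low-density "noise" points
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters,
      "| points flagged as noise:", int(np.sum(labels == -1)))

# Cluster centroids (excluding noise) can then be mapped to the nearest ideal
# symbols to form nonlinear decision regions.
for k in range(n_clusters):
    c = points[labels == k].mean(axis=0)
    print(f"cluster {k}: centre = ({c[0]:+.2f}, {c[1]:+.2f})")
```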

10 pages, 1267 KiB  
Article
Tunable Optoelectronic Chromatic Dispersion Compensation Based on Machine Learning for Short-Reach Transmission
by Stenio M. Ranzini, Francesco Da Ros, Henning Bülow and Darko Zibar
Appl. Sci. 2019, 9(20), 4332; https://doi.org/10.3390/app9204332 - 15 Oct 2019
Cited by 15 | Viewed by 2812
Abstract
In this paper, a machine learning-based tunable optical-digital signal processor is demonstrated for a short-reach optical communication system. The effect of fiber chromatic dispersion after square-law detection is mitigated using a hybrid structure, which shares the complexity between the optical and the digital domain. The optical part mitigates the chromatic dispersion by slicing the signal into small sub-bands and delaying them accordingly, before regrouping the signal again. The optimal delay is calculated in each scenario to minimize the bit error rate. The digital part is a nonlinear equalizer based on a neural network. The results are analyzed in terms of signal-to-noise penalty at the KP4 forward error correction threshold. The penalty is calculated with respect to a back-to-back transmission without equalization. Considering 32 GBd transmission and 0 dB penalty, the proposed hybrid solution shows chromatic dispersion mitigation up to 200 ps/nm (12 km of equivalent standard single-mode fiber length) for stage 1 of the hybrid module and roughly double for the second stage. A simplified version of the optical module is demonstrated with an approximated 1.5 dB penalty compared to the complete two-stage hybrid module. Chromatic dispersion tolerance for a fixed optical structure and a simpler configuration of the nonlinear equalizer is also investigated. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
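
The sub-band slicing-and-delay idea behind the optical stage can be mimicked digitally for intuition. The sketch below is a simplified digital analogue under assumed sampling rate, band count, and delays, not the paper's optical implementation: it splits a waveform's spectrum into contiguous sub-bands, applies a distinct delay to each, and recombines them.

```python
# Digital analogue of the sub-band slicing idea: split the spectrum into a few
# sub-bands, delay each one, and recombine. The number of bands and the delay
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, n_bands = 4096, 4
fs = 32e9                                    # assumed sampling rate, Hz

signal = rng.normal(size=n)                  # stand-in for the received waveform
delays = np.array([0.0, 10e-12, 20e-12, 30e-12])   # one delay per sub-band, s

spectrum = np.fft.fft(signal)
freqs = np.fft.fftfreq(n, d=1 / fs)

# Assign every frequency bin to one of n_bands contiguous sub-bands and apply
# the corresponding linear phase (i.e., a pure time delay) to that band.
edges = np.linspace(freqs.min(), freqs.max(), n_bands + 1)
band_index = np.clip(np.digitize(freqs, edges) - 1, 0, n_bands - 1)
phase = np.exp(-2j * np.pi * freqs * delays[band_index])

recombined = np.fft.ifft(spectrum * phase).real
print("output power:", np.round(np.mean(recombined ** 2), 4))
```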

11 pages, 1935 KiB  
Article
Reduced-Complexity Artificial Neural Network Equalization for Ultra-High-Spectral-Efficient Optical Fast-OFDM Signals
by Mutsam A. Jarajreh
Appl. Sci. 2019, 9(19), 4038; https://doi.org/10.3390/app9194038 - 27 Sep 2019
Cited by 4 | Viewed by 2172
Abstract
Digital-based artificial neural network (ANN) machine learning is harnessed to reduce fiber nonlinearities, for the first time in ultra-spectrally-efficient optical fast orthogonal frequency division multiplexed (Fast-OFDM) signals. The proposed ANN design is of low computational load and is compared to the benchmark inverse Volterra-series transfer function (IVSTF)-based nonlinearity compensator. The two aforementioned schemes are compared for long-haul single-mode-fiber-based links carrying 9.69 Gb/s direct-detected optical Fast-OFDM signals. It is shown that an 80 km extension in transmission reach is feasible when using the ANN compared to the IVSTF. This occurs because the ANN can tackle stochastic nonlinear impairments, such as parametric noise amplification. Using the ANN, the dynamic parameter requirements of the sub-ranging quantizers can also be relaxed: the optimum clipping ratio and the number of quantization bits are reduced by 2 dB and 2 bits, respectively, compared to linear equalization, and by 2 dB and 2 bits compared to the IVSTF equalizer. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)

10 pages, 1032 KiB  
Article
Optimization Algorithms of Neural Networks for Traditional Time-Domain Equalizer in Optical Communications
by Haide Wang, Ji Zhou, Yizhao Wang, Jinlong Wei, Weiping Liu, Changyuan Yu and Zhaohui Li
Appl. Sci. 2019, 9(18), 3907; https://doi.org/10.3390/app9183907 - 18 Sep 2019
Cited by 10 | Viewed by 2302
Abstract
Neural networks (NNs) have been successfully applied to channel equalization for optical communications. In optical fiber communications, linear and nonlinear equalizers with traditional structures might be more appropriate than NNs for performing real-time digital signal processing, owing to their much lower computational complexity. However, the optimization algorithms of NNs are useful in many optimization problems. In this paper, we propose and evaluate tap estimation schemes for equalizers with traditional structures in optical fiber communications, using the optimization algorithms commonly employed for NNs. The experimental results show that the adaptive moment estimation algorithm and the batch gradient descent method perform well in the tap estimation of the equalizer. In conclusion, the optimization algorithms of NNs are useful for the tap estimation of equalizers with traditional structures in optical communications. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
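
As a minimal numerical illustration of the idea, the sketch below applies the Adam update rule, an optimizer commonly used for NNs, to estimate the taps of a traditional linear feed-forward equalizer by minimizing the mean squared error against known training symbols. The toy channel, tap count, and hyper-parameters are illustrative assumptions, not the paper's experimental configuration.

```python
# Adam-based estimation of linear equalizer taps against a known training
# sequence; the channel, tap count and hyper-parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, taps = 20000, 9

tx = rng.choice([-1.0, 1.0], size=n)                 # known training symbols
h = np.array([0.1, 0.3, 1.0, 0.3, 0.1])              # toy ISI channel
rx = np.convolve(tx, h, mode="same") + rng.normal(0, 0.1, n)

half = taps // 2
X = np.array([rx[i - half:i + half + 1] for i in range(half, n - half)])
y = tx[half:n - half]

w = np.zeros(taps)                                   # equalizer taps
m = np.zeros(taps)                                   # Adam first-moment estimate
v = np.zeros(taps)                                   # Adam second-moment estimate
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8

for t in range(1, 2001):                             # mini-batch Adam updates
    idx = rng.integers(0, X.shape[0], size=256)
    err = X[idx] @ w - y[idx]                        # batch output error
    grad = 2 * X[idx].T @ err / idx.size             # MSE gradient w.r.t. taps
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)                        # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("estimated taps:", w.round(3))
print("final MSE:", np.round(np.mean((X @ w - y) ** 2), 4))
```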

12 pages, 5396 KiB  
Article
A Simple Joint Modulation Format Identification and OSNR Monitoring Scheme for IMDD OOFDM Transceivers Using K-Nearest Neighbor Algorithm
by Qianwu Zhang, Hai Zhou, Yuntong Jiang, Bingyao Cao, Yingchun Li, Yingxiong Song, Jian Chen, Junjie Zhang and Min Wang
Appl. Sci. 2019, 9(18), 3892; https://doi.org/10.3390/app9183892 - 17 Sep 2019
Cited by 10 | Viewed by 2422
Abstract
In this study, a joint modulation format identification and optical signal-to-noise ratio (OSNR) monitoring algorithm is proposed and experimentally demonstrated using the k-nearest neighbor algorithm for intensity modulation and direct detection (IMDD) orthogonal frequency division multiplexing (OFDM) systems. A modified amplitude histogram of the received signal is employed as the classification feature to simplify the computation. Experimental results show that five common quadrature amplitude modulation (QAM) formats, including 4-QAM, 16-QAM, 32-QAM, 64-QAM and 128-QAM, can be identified with 100% accuracy at a received optical power of −11 dBm. Robustness of the proposed scheme to constellation rotation is also experimentally assessed. At the same time, system OSNR monitoring can also be achieved, and the average prediction mean square error (MSE) is 0.69 dB², which is similar to that obtained using an artificial neural network. A computational complexity assessment demonstrated that similar performance but lower computing resource consumption can be achieved by using the proposed scheme rather than the artificial neural network-based scheme. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
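
The classification step can be illustrated with a minimal k-nearest-neighbor sketch. In the snippet below, normalized amplitude histograms of simulated noisy QAM bursts are used as features for scikit-learn's KNeighborsClassifier; the simulated formats (square QAM only), bin count, noise level, and k are assumptions for illustration rather than the paper's modified-histogram feature design.

```python
# Toy modulation format identification: amplitude-histogram features fed to a
# k-NN classifier. Formats, noise level, bin count and k are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def qam_constellation(order):
    m = int(np.sqrt(order))
    levels = np.arange(-(m - 1), m, 2)
    pts = np.array([complex(i, q) for i in levels for q in levels])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))    # unit average power

def amplitude_histogram(order, n_sym=2000, noise=0.08, bins=40):
    """One normalized amplitude histogram of a noisy burst of symbols."""
    const = qam_constellation(order)
    rx = rng.choice(const, size=n_sym)
    rx += rng.normal(0, noise, n_sym) + 1j * rng.normal(0, noise, n_sym)
    hist, _ = np.histogram(np.abs(rx), bins=bins, range=(0.0, 2.0))
    return hist / n_sym

formats = [4, 16, 64]                                   # square-QAM orders only
X = np.array([amplitude_histogram(f) for f in formats for _ in range(300)])
y = np.repeat(formats, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("format identification accuracy:", knn.score(X_te, y_te))
```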

10 pages, 1434 KiB  
Article
Mitigation of Nonlinear Impairments by Using Support Vector Machine and Nonlinear Volterra Equalizer
by Rebekka Weixer, Jonas Koch, Patrick Plany, Simon Ohlendorf and Stephan Pachnicke
Appl. Sci. 2019, 9(18), 3800; https://doi.org/10.3390/app9183800 - 11 Sep 2019
Cited by 7 | Viewed by 2163
Abstract
A support vector machine (SVM) based detection is applied to different equalization schemes for a data center interconnect link using coherent 64 GBd 64-QAM over 100 km standard single mode fiber (SSMF). Without any prior knowledge or heuristic assumptions, the SVM is able to learn and capture the transmission characteristics from only a short training data set. We show that, with the use of suitable kernel functions, the SVM can create nonlinear decision thresholds and reduce the errors caused by nonlinear phase noise (NLPN), laser phase noise, I/Q imbalances and so forth. In order to apply the SVM to 64-QAM we introduce a binary coding SVM, which provides a binary multiclass classification with reduced complexity. We investigate the performance of this SVM and show how it can improve the bit-error rate (BER) of the entire system. After 100 km the fiber-induced nonlinear penalty is reduced by 2 dB at a BER of 3.7 × 10⁻³. Furthermore, we apply a nonlinear Volterra equalizer (NLVE), which is based on the nonlinear Volterra theory, as another method for mitigating nonlinear effects. The combination of SVM and NLVE reduces the large computational complexity of the NLVE and allows more accurate compensation of nonlinear transmission impairments. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
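
The binary-coding idea, one binary classifier per bit of the symbol label instead of a single large multiclass machine, can be sketched as follows. The toy below uses 16-QAM instead of 64-QAM to stay small, and its distortion model, kernel, and regularization settings are illustrative assumptions, not the paper's experimental configuration.

```python
# Sketch of binary-coded SVM detection: each bit of the symbol index gets its
# own binary RBF-SVM, so M-QAM detection needs log2(M) binary classifiers.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
levels = np.array([-3, -1, 1, 3])
ideal = np.array([complex(i, q) for i in levels for q in levels])
n_bits = 4                                       # 16-QAM -> 4 bits per symbol

sym_idx = rng.integers(0, 16, size=8000)         # transmitted symbol indices
tx = ideal[sym_idx]
rx = tx * np.exp(1j * 0.03 * np.abs(tx) ** 2)    # toy nonlinear phase rotation
rx += rng.normal(0, 0.25, tx.size) + 1j * rng.normal(0, 0.25, tx.size)

X = np.column_stack([rx.real, rx.imag])
bits = (sym_idx[:, None] >> np.arange(n_bits)) & 1   # binary label per bit

train, test = slice(0, 6000), slice(6000, None)
svms = [SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train], bits[train, b])
        for b in range(n_bits)]                  # one binary SVM per bit

bits_hat = np.column_stack([svm.predict(X[test]) for svm in svms])
ber = np.mean(bits_hat != bits[test])
print("bit error rate on the toy link:", ber)
```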

12 pages, 1978 KiB  
Article
LED Nonlinearity Estimation and Compensation in VLC Systems Using Probabilistic Bayesian Learning
by Chen Chen, Xiong Deng, Yanbing Yang, Pengfei Du, Helin Yang and Lifan Zhao
Appl. Sci. 2019, 9(13), 2711; https://doi.org/10.3390/app9132711 - 03 Jul 2019
Cited by 16 | Viewed by 2679
Abstract
In this paper, we propose and evaluate a novel light-emitting diode (LED) nonlinearity estimation and compensation scheme using probabilistic Bayesian learning (PBL) for spectral-efficient visible light communication (VLC) systems. The nonlinear power-current curve of the LED transmitter can be accurately estimated by exploiting PBL regression, and hence the adverse effect of LED nonlinearity can be efficiently compensated. Simulation results show that, in an 80-Mbit/s orthogonal frequency division multiplexing (OFDM)-based nonlinear VLC system, comparable bit-error rate (BER) performance can be achieved by the conventional time domain averaging (TDA)-based LED nonlinearity mitigation scheme with a total of 20 training symbols (TSs) and by the proposed PBL-based scheme with only a single TS. Therefore, compared with the conventional TDA scheme, the proposed PBL-based scheme can substantially reduce the required training overhead and hence greatly improve the overall spectral efficiency of bandlimited VLC systems. It is also shown that the PBL-based LED nonlinearity estimation and compensation scheme is computationally efficient for implementation in practical VLC systems. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
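
A minimal Bayesian-regression sketch of the estimation step is given below, with scikit-learn's BayesianRidge on a polynomial basis standing in for the paper's specific PBL formulation; the assumed LED power-current characteristic and sample counts are illustrative only.

```python
# Sketch of estimating a nonlinear LED power-current curve from a few noisy
# training points via Bayesian linear regression on a polynomial basis. The
# LED model and sample counts are illustrative assumptions.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)

def led_power(i_drive):
    """Assumed soft-saturating power-current characteristic (arbitrary units)."""
    return 1.2 * np.tanh(1.5 * (i_drive - 0.1)) * (i_drive > 0.1)

# A single short training burst: a handful of noisy (current, power) samples.
i_train = rng.uniform(0.1, 1.0, size=30)
p_train = led_power(i_train) + rng.normal(0, 0.02, i_train.size)

poly = PolynomialFeatures(degree=5, include_bias=True)
X_train = poly.fit_transform(i_train[:, None])

model = BayesianRidge().fit(X_train, p_train)

# Estimated curve (with predictive uncertainty) over the operating range; the
# inverse of this curve can then be used to pre/post-compensate the OFDM signal.
i_grid = np.linspace(0.1, 1.0, 10)
p_hat, p_std = model.predict(poly.transform(i_grid[:, None]), return_std=True)
for i, p, s in zip(i_grid, p_hat, p_std):
    print(f"I = {i:.2f} -> P_hat = {p:.3f} +/- {s:.3f}")
```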

Review


18 pages, 1613 KiB  
Review
AI-Based Modeling and Monitoring Techniques for Future Intelligent Elastic Optical Networks
by Xiaomin Liu, Huazhi Lun, Mengfan Fu, Yunyun Fan, Lilin Yi, Weisheng Hu and Qunbi Zhuge
Appl. Sci. 2020, 10(1), 363; https://doi.org/10.3390/app10010363 - 03 Jan 2020
Cited by 38 | Viewed by 5913
Abstract
With the development of 5G technology, high-definition video and the internet of things, the capacity demand for optical networks has been increasing dramatically. To fulfill this capacity demand, low-margin optical networks are attracting attention. Therefore, planning tools with higher accuracy are needed, and accurate models for quality of transmission (QoT) and impairments are the key elements to achieve this. Moreover, since the margin is low, maintaining the reliability of the optical network is also essential, and optical performance monitoring (OPM) is desired. With OPM, controllers can adapt the configuration of the physical layer and detect anomalies. However, considering the heterogeneity of the modern optical network, it is difficult to build such accurate modeling and monitoring tools using traditional analytical methods. Fortunately, data-driven artificial intelligence (AI) provides a promising path. In this paper, we first discuss the requirements for adopting AI approaches in optical networks. Then, we review recent progress in AI-based QoT/impairment modeling and monitoring schemes. We categorize these proposed methods by their functions and summarize the advantages and challenges of adopting AI methods for these tasks. We discuss the problems that remain for deploying AI-based methods in practical systems and present some possible directions for future investigation. Full article
(This article belongs to the Special Issue Optics for AI and AI for Optics)
