Topic Editors

Dr. Mi Yang
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
School of Information Communication Engineering, Beijing Information Science and Technology University, Beijing 102206, China

AI-Driven Wireless Channel Modeling and Signal Processing

Abstract submission deadline: 31 January 2027
Manuscript submission deadline: 31 March 2027

Topic Information

Dear Colleagues,

Wireless communication is a foundational technology of modern society, supporting economic development and everyday convenience. Within this technical architecture, wireless channels and signal processing are the basic components of a communication system; at the levels of radio-wave propagation and information theory, they fundamentally determine system performance. Over recent decades, researchers have made outstanding contributions to this field, including basic algorithms, modeling methods, measurement databases, and methodologies. These achievements have supported the development and commercialization of several generations of mobile communication systems. Commercial systems continue to evolve, but with a difference: the rapid progress of artificial intelligence in recent years has provided new solutions for many traditional fields, including some of the basic technologies of wireless communication. Artificial intelligence offers a new research paradigm for channel modeling, environmental sensing, channel simulation, channel estimation, waveform design, baseband signal processing, coding and decoding, and more. For example, real-time RSSI prediction for a site-specific link can be realized with an online-deployed neural network, and adaptive, intelligent adjustment of the waveform and MCS can be achieved by monitoring interference and signals. Deep learning can also be used to establish the mapping between the propagation environment and channel characteristics or communication performance, thereby assisting intelligent communication transceivers.
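As an illustrative sketch of the RSSI-prediction example above, a lightweight regressor can learn site-specific RSSI from simple link features. Everything below (the features, the log-distance ground truth, and the network size) is an invented toy setup, not a result from any paper in this Topic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented link features: Tx-Rx distance (m) and carrier frequency (GHz).
n = 2000
dist = rng.uniform(10, 500, n)
freq = rng.uniform(2.4, 6.0, n)
# Toy ground truth: log-distance path loss with lognormal shadow fading (dB).
rssi = -30 - 27 * np.log10(dist) - 20 * np.log10(freq) + rng.normal(0, 2, n)

X = np.column_stack([dist, freq])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(X[:1500], rssi[:1500])

pred = model.predict(X[1500:])
rmse = float(np.sqrt(np.mean((pred - rssi[1500:]) ** 2)))
print(f"held-out RSSI prediction RMSE: {rmse:.2f} dB")
```

An online-deployed version of such a model would simply be retrained or fine-tuned as new link measurements arrive; the sketch only shows the offline fit.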

This Topic aims to bring together interdisciplinary approaches focusing on innovative applications of artificial intelligence in the communication field. We invite researchers in related fields to develop new artificial intelligence models, tools, and application schemes, or to improve existing ones, in order to effectively solve wireless communication problems in fundamental areas such as channel modeling and signal processing. This Topic is therefore open to anyone who wishes to submit a relevant research manuscript.

Dr. Mi Yang
Dr. Yi Gong
Topic Editors

Keywords

  • AI-driven channel modeling
  • data processing
  • channel measurements
  • intelligent waveform design
  • ISAC
  • OTFS
  • intelligent interference monitoring
  • AI-assisted interference cancellation
  • intelligent software defined radio

Participating Journals

Journal Name   Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
AI             5.0             6.9         2020            19.2 Days                 CHF 1800
Electronics    2.6             6.1         2012            16.4 Days                 CHF 2400
Sensors        3.5             8.2         2001            17.8 Days                 CHF 2600
Signals        2.6             4.6         2020            21.8 Days                 CHF 1200
Telecom        2.4             5.4         2020            23 Days                   CHF 1400

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (6 papers)

19 pages, 1843 KB  
Article
Expert Knowledge-Infused Learning for Indoor Radio Propagation Environment Digital Twins
by Haotian Wang, Lili Xu, Yu Zhang, Tao Peng and Wenbo Wang
Sensors 2026, 26(7), 2199; https://doi.org/10.3390/s26072199 - 2 Apr 2026
Abstract
Digital Twin (DT) technology, which enables the simulation, evaluation, and optimization of physical entities through synchronized digital replicas, has attracted increasing attention in the context of wireless networks. Among the various components involved, the radio propagation environment is fundamental to communication performance, making its accurate digital replication a critical challenge. This paper focuses on constructing a high-precision radio propagation environment DT using deep learning (DL) methods. While data-driven DL has become a mainstream solution for signal propagation prediction in DTs, its performance depends heavily on the model’s ability to learn intrinsic propagation patterns from data. Owing to the complex interactions between wireless signals and environmental obstacles, conventional DL models often struggle to efficiently capture implicit propagation laws solely from raw data. To address this issue, we propose a general methodology for incorporating expert knowledge of radio propagation into DL frameworks. Building upon the widely adopted encoder–decoder architecture, the proposed approach explicitly integrates theoretical propagation knowledge to enhance learning efficiency and prediction accuracy. Ablation experiments demonstrate that the inclusion of expert knowledge significantly improves the performance of DL-based radio environment DTs. This work highlights the potential of knowledge–data dual-driven DL as a promising direction for advancing radio propagation environment DTs.
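The knowledge-infusion idea in this abstract can be illustrated in a generic form: hand the network a theory-derived feature (here, Friis free-space path loss) alongside raw inputs, rather than requiring it to rediscover the propagation law from data. The link features, wall-penetration ground truth, and tiny network below are invented for illustration and are not the authors' model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Invented indoor links: Tx-Rx distance (m) and number of intervening walls.
n = 1500
dist = rng.uniform(1, 50, n)
walls = rng.integers(0, 5, n).astype(float)
# Toy ground truth: log-distance loss plus a per-wall penetration penalty (dB).
loss = 40 + 20 * np.log10(dist) + 6 * walls + rng.normal(0, 1.5, n)

# Expert-knowledge feature: free-space path loss at 2.4 GHz (Friis formula,
# d in metres, f in Hz), supplied directly instead of being learned.
fspl = 20 * np.log10(dist) + 20 * np.log10(2.4e9) - 147.55

X = np.column_stack([dist, walls, fspl])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=3),
)
model.fit(X[:1200], loss[:1200])
pred = model.predict(X[1200:])
rmse = float(np.sqrt(np.mean((pred - loss[1200:]) ** 2)))
print(f"test RMSE with the expert feature: {rmse:.2f} dB")
```

With the log-distance structure already present in the expert feature, the network only has to learn a near-linear correction, which is the efficiency gain the abstract describes.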
(This article belongs to the Topic AI-Driven Wireless Channel Modeling and Signal Processing)

30 pages, 1848 KB  
Article
Causal Representation Learning for Joint Modeling and Mitigation of Coupled RF Impairments in MIMO Systems
by Mohammed Waleed Majeed Al-Dulaimi and Osman Nuri Ucan
Electronics 2026, 15(6), 1289; https://doi.org/10.3390/electronics15061289 - 19 Mar 2026
Abstract
Radio-frequency (RF) impairments such as thermal noise, phase noise, and nonlinear distortion are inherently coupled in practical multiple-input multiple-output (MIMO) transceivers, yet most existing mitigation techniques treat them independently or rely on correlation-based black-box learning models. These approaches often fail to generalize under varying operating conditions because they do not capture the underlying causal relationships among hardware impairments. This paper proposes a causal representation learning framework that jointly models and mitigates coupled RF impairments by learning disentangled latent variables aligned with their physical causal structure. A causal variational autoencoder with a structured physics-informed prior and causal regularization is developed to recover impairment-specific representations and enable targeted compensation under diverse channel conditions. The framework is evaluated in a controlled MIMO simulation environment to systematically analyze impairment interactions and mitigation performance. Experimental results show that the proposed method significantly outperforms both classical receivers and conventional learning-based approaches. In particular, the framework achieves an average BER reduction of approximately 57% compared with the classical model-based receiver and about 30% relative to correlation-based deep learning models, while also outperforming recent variational autoencoder-based MIMO detectors in robustness under unseen operating conditions. The output signal-to-noise ratio improves by up to 2.2 dB across the evaluated SNR range. Furthermore, latent representation analysis shows a substantial reduction in cross-covariance, with the disentanglement score decreasing from above 0.48 in standard variational models to approximately 0.12 using the proposed causal approach. Under unseen combinations of SNR and impairment severity, the proposed model achieves the lowest BER degradation and a robustness score of 0.86, confirming improved generalization beyond the training distribution. These results demonstrate that causal representation learning provides a principled and effective solution for modeling and mitigating coupled RF impairments in MIMO communication systems.

16 pages, 3604 KB  
Article
Research on Channel Modeling for Underground Mine Tunnel with Nonlinear Electromagnetic Propagation Using Support Vector Machine—Adaboost
by Lian Shi, Yong-Qiang Chai, Ruo-Qi Li, Fu-Gang Wang, Mi Liu and Meng-Xia Liu
Electronics 2026, 15(5), 1087; https://doi.org/10.3390/electronics15051087 - 5 Mar 2026
Abstract
A support vector machine based on AdaBoost algorithm (SVM-AB) is proposed for complicated underground mine tunnel modeling. This method accurately predicts the nonlinear propagation characteristics of electromagnetic waves in complex environments in the case of small samples. Firstly, an electromagnetic wave propagation loss model is established by analyzing complex factors including tunnel geometry, wall roughness, tilt, dielectric properties, and multipath effects. Secondly, the complex factors and measured signal strength serve as inputs of the SVM model to establish a nonlinear mapping for preliminary prediction. Furthermore, the AdaBoost algorithm is applied to dynamically correct the SVM prediction errors, further enhancing accuracy. Finally, the measured experiments are carried out in complex underground mine tunnels to verify the proposed theoretical model. The experimental results demonstrate that the proposed SVM-AB model achieves a fitting accuracy of over 99.92%. In addition, compared with the traditional support vector machine, its Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are reduced by about 84.76% and 92.61%, respectively. The proposed tunnel model has important application value for optimizing the layout of communication system of underground mine tunnel.
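The two-stage structure this abstract describes, an SVM fit followed by AdaBoost-based correction of its errors, can be sketched in a generic form. The tunnel features and the loss curve below are invented stand-ins, not the authors' measured data or exact pipeline:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(1)

# Invented tunnel features: distance along the tunnel (m), wall-roughness factor.
n = 1000
dist = rng.uniform(1, 300, n)
rough = rng.uniform(0, 1, n)
X = np.column_stack([dist, rough])
# Toy stand-in for measured signal strength: log-distance loss plus a
# roughness-dependent multipath ripple and measurement noise (dB).
y = -20 * np.log10(dist) - 15 * rough * np.sin(dist / 40) + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

# Stage 1: an SVM regressor produces the preliminary prediction.
svm = SVR(C=10.0, gamma="scale").fit(X_tr, y_tr)

# Stage 2: AdaBoost learns to correct the SVM's residual errors.
residual = y_tr - svm.predict(X_tr)
booster = AdaBoostRegressor(n_estimators=100, random_state=1).fit(X_tr, residual)

pred_svm = svm.predict(X_te)
pred_sab = pred_svm + booster.predict(X_te)   # SVM-AB: base prediction + correction

rmse_svm = float(np.sqrt(np.mean((pred_svm - y_te) ** 2)))
rmse_sab = float(np.sqrt(np.mean((pred_sab - y_te) ** 2)))
print(f"RMSE, SVM alone: {rmse_svm:.2f}; SVM-AB: {rmse_sab:.2f}")
```

The booster only sees what the SVM got wrong, so it concentrates capacity on the structure the kernel model missed; this residual-correction pattern is the general idea behind the SVM-AB combination.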

22 pages, 4946 KB  
Article
Incremental Coding Testing and LT-Net Bit Error Prediction for Aircraft Pod LVDS Links
by Ting Wang, Peilei Xiao, Yong Tang and Ao Pang
Electronics 2026, 15(2), 339; https://doi.org/10.3390/electronics15020339 - 12 Jan 2026
Abstract
Aircraft pod Low-Voltage Differential Signalling (LVDS) links frequently suffer from transmission errors in adverse environments, compromising reliability. We propose a comprehensive ‘real-time detection—precise prediction—dynamic adaptation’ solution. Firstly, a testing system based on the Xilinx Artix-7 Field Programmable Gate Array (FPGA) was developed using incremental coding, verified across diverse hardware with quantitative physical parameters. Secondly, a Long Short-Term Memory (LSTM)-Transformer fusion network (LT-Net) with weighted loss and dynamic regularization was designed to optimize prediction in critical high Bit Error Rate (BER) regimes. To address distribution drift, an online adaptive mechanism utilizing Elastic Weight Consolidation (EWC) was integrated. Results show LT-Net reduces Mean Squared Error (MSE) by 41.7% and maintains superior Mean Absolute Error (MAE) compared to baseline Transformers, with drift-induced degradation kept within 8%. With an inference latency under 0.28 s, the system meets hard real-time requirements for aircraft pod reliability in complex scenarios.

20 pages, 3599 KB  
Article
An Adaptative Wavelet Time–Frequency Transform with Mamba Network for OFDM Automatic Modulation Classification
by Hongji Xing, Xiaogang Tang, Lu Wang, Binquan Zhang and Yuepeng Li
AI 2025, 6(12), 323; https://doi.org/10.3390/ai6120323 - 9 Dec 2025
Abstract
Background: With the development of wireless communication technologies, the rapid advancement of 5G and 6G communication systems has spawned an urgent demand for low latency and high data rates. Orthogonal Frequency Division Multiplexing (OFDM) communication using high-order digital modulation has become a key technology due to its characteristics, such as high reliability, high data rate, and low latency, and has been widely applied in various fields. As a component of cognitive radios, automatic modulation classification (AMC) plays an important role in remote sensing and electromagnetic spectrum sensing. However, under current complex channel conditions, there are issues such as low signal-to-noise ratio (SNR), Doppler frequency shift, and multipath propagation. Methods: Coupled with the inherent problem of indistinct characteristics in high-order modulation, these currently make it difficult for AMC to focus on OFDM and high-order digital modulation. Existing methods are mainly based on a single model-driven approach or data-driven approach. The Adaptive Wavelet Mamba Network (AWMN) proposed in this paper attempts to combine model-driven adaptive wavelet transform feature extraction with the Mamba deep learning architecture. A module based on the lifting wavelet scheme effectively captures discriminative time–frequency features using learnable operations. Meanwhile, a Mamba network constructed based on the State Space Model (SSM) can capture long-term temporal dependencies. This network realizes a combination of model-driven and data-driven methods. Results: Tests conducted on public datasets and a custom-built real-time received OFDM dataset show that the proposed AWMN achieves a performance reaching higher accuracies of 62.39%, 64.50%, and 74.95% on the public Rml2016(a) and Rml2016(b) datasets and our formulated EVAS dataset, while maintaining a compact parameter size of 0.44 M. Conclusions: These results highlight its potential for improving the automatic modulation classification of high-order OFDM modulation in 5G/6G systems.
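The lifting wavelet scheme this abstract builds on splits a signal into even and odd samples, predicts each odd sample from its even neighbour, and then updates the even branch to preserve the running average. In AWMN the predict/update operators are learnable; the sketch below uses the fixed Haar steps purely to illustrate the mechanism:

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict: each even sample predicts its odd neighbour
    approx = even + detail / 2   # update: the approximation keeps the pairwise average
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order (undo update, then predict)."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

# A toy sample sequence: the detail band highlights the fast transitions.
sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_lifting_forward(sig)
print("approx:", a)   # pairwise averages
print("detail:", d)   # pairwise differences
print("perfect reconstruction:", np.allclose(haar_lifting_inverse(a, d), sig))
```

Because each lifting step is trivially invertible, replacing the fixed predict/update operators with small learnable ones (as the paper does) keeps the transform invertible while letting the filters adapt to the signal.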

20 pages, 4655 KB  
Article
Low-Latency Marine-Based OTFS Echo Parameter Estimation Enabled by AI
by Khurshid Hussain and Jeseon Yoo
Sensors 2025, 25(23), 7104; https://doi.org/10.3390/s25237104 - 21 Nov 2025
Abstract
We propose an end-to-end pipeline for Orthogonal Time–Frequency Space (OTFS) sensing that integrates deterministic signal processing with a Machine-Learning (ML) inference stage. The pipeline first generates a complex delay–Doppler grid via standard Symplectic Fast Fourier Transform (SFFT)-based OTFS reception. We then employ an ’oracle’ Ground-Truth (GT) association process to deterministically label signal peaks, extracting their complex gain (α) and absolute indices (m,n) to deduce physical targets (range, radial velocity). These oracle-aligned labels are used to train a Random-Forest (RF) classifier. The RF model learns to map normalized 33×33 complex patches, centered on signal peaks, to their corresponding target parameters. On an 80/20 split of 10,000 samples, the classifier achieved a 0.966 accuracy, 0.965 macro-F1 score, and 0.998 macro Receiver Operating Characteristic–Area Under the Curve (ROC–AUC). Notably, when tested on held-out scenes, the model’s derived range and velocity predictions achieved 100% coincidence with the GT, while amplitude and phase corresponded in 89% of instances. This hybrid oracle-and-ML approach demonstrates a highly effective and robust method for precise target extraction in OTFS-based sensing systems.
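The patch-classification step described above, mapping complex delay–Doppler patches around detected peaks to target labels with a Random Forest, can be sketched generically. The synthetic patches, class structure, and real/imaginary feature encoding below are illustrative assumptions, not the paper's data or exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

PATCH = 9               # patch side length (the paper uses 33; smaller here for speed)
N_CLASSES, N = 3, 600

def make_patch(cls):
    """Synthetic complex delay-Doppler patch: a centered peak whose spread
    and phase depend on the (hypothetical) target class."""
    c = PATCH // 2
    yy, xx = np.mgrid[0:PATCH, 0:PATCH]
    width = 0.5 + cls
    peak = np.exp(-((yy - c) ** 2 + (xx - c) ** 2) / (2 * width ** 2)) \
        * np.exp(1j * cls * np.pi / 4)
    noise = 0.05 * (rng.standard_normal(peak.shape) + 1j * rng.standard_normal(peak.shape))
    return peak + noise

labels = rng.integers(0, N_CLASSES, N)
# Feature encoding: stack the real and imaginary parts of each flattened patch.
feats = np.array([np.concatenate([p.real.ravel(), p.imag.ravel()])
                  for p in (make_patch(c) for c in labels)])

X_tr, X_te = feats[:480], feats[480:]
y_tr, y_te = labels[:480], labels[480:]
clf = RandomForestClassifier(n_estimators=200, random_state=2).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out patch classification accuracy: {acc:.3f}")
```

Splitting each complex patch into real and imaginary channels keeps both amplitude and phase information available to the forest, which is what lets a tree ensemble handle complex-valued grids without any complex arithmetic of its own.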
