Reinforcement Learning: Emerging Techniques and Future Prospects

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 February 2026

Special Issue Editors


Guest Editor
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
Interests: large-scale wireless network optimization; deep reinforcement learning; multi-agent reinforcement learning; digital twin for wireless networks

Guest Editor
Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
Interests: theory and applications of deep reinforcement learning and multi-agent reinforcement learning

Guest Editor
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing 100084, China
Interests: deep reinforcement learning; federated learning and privacy preservation

Special Issue Information

Dear Colleagues,

Reinforcement learning (RL) has emerged as a powerful paradigm for sequential decision-making, enabling agents to learn optimal behaviors through interaction with dynamic and uncertain environments. In recent years, RL has made remarkable progress and found widespread applications across various domains such as robotics, wireless communications, autonomous systems, intelligent control, and industrial optimization. However, many challenges remain, including sample inefficiency, exploration–exploitation trade-offs, scalability to large state-action spaces, safety guarantees, and coordination among multiple agents.
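
As a minimal illustration of the interaction loop and the exploration–exploitation trade-off mentioned above, the following Python sketch runs tabular Q-learning with an epsilon-greedy policy on a toy chain environment. The environment, hyperparameters, and variable names are illustrative assumptions for this call, not drawn from any submission.

```python
import random

# Minimal sketch: tabular Q-learning with epsilon-greedy exploration on a toy
# 1-D chain environment (all names and values here are illustrative only).

N_STATES = 6          # states 0..5; reaching state 5 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action):
    """Toy transition: move along the chain; reward 1 only at the right end."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state, done = 0, False
    while not done:
        # Exploration-exploitation trade-off: random action with probability EPS
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # One-step temporal-difference (Q-learning) update
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

print("Greedy policy:", [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

The same loop structure (observe, act, receive reward, update) underlies the deep, multi-agent, and federated variants solicited below; the open challenges listed above concern how to make it scale safely and efficiently.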

This Special Issue aims to provide a platform for researchers and practitioners to present the latest advancements in reinforcement learning, covering both theoretical foundations and practical applications. We especially welcome contributions that explore novel algorithms, frameworks, and system designs that address key limitations in current RL approaches, as well as emerging trends such as offline RL, safe RL, multi-agent RL, and federated or privacy-preserving RL. Interdisciplinary works that integrate RL with areas such as digital twins, network optimization, edge computing, and intelligent sensing are particularly encouraged.

Topics of interest include, but are not limited to, the following:

  • Deep reinforcement learning and its theoretical analysis;
  • Model-based and model-free RL algorithms;
  • Multi-agent reinforcement learning and coordination;
  • RL in wireless networks, edge/cloud systems, and IoT;
  • Sample-efficient, robust, or safe RL approaches;
  • Federated RL and privacy-preserving learning;
  • RL applications in robotics, smart manufacturing, and control systems.

Dr. Haoqiang Liu
Dr. Wenzhen Huang
Dr. Huiming Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • reinforcement learning
  • multi-agent systems
  • safe reinforcement learning
  • intelligent control
  • wireless network optimization
  • digital twin
  • edge intelligence
  • multi-agent reinforcement learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

15 pages, 3102 KB  
Article
Physics-Informed Reinforcement Learning for Multi-Band Octagonal Fractal Frequency-Selective Surface Optimization
by Gaoya Dong, Ming Liu and Xin He
Electronics 2025, 14(23), 4656; https://doi.org/10.3390/electronics14234656 - 26 Nov 2025
Abstract
Diverse application scenarios demand frequency-selective surfaces (FSSs) with tailored center frequencies and bandwidths. However, their design traditionally relies on iterative full-wave simulations using tools such as the High-Frequency Structure Simulator (HFSS) and Computer Simulation Technology (CST), which are time-consuming and labor-intensive. To overcome these limitations, this work proposes an octagonal fractal frequency-selective surface (OF-FSS) composed of a square ring resonator and an octagonal fractal geometry, where the fractal configuration supports single-band and multi-band resonance. A physics-informed reinforcement learning (PIRL) algorithm is developed, enabling the RL agent to directly interact with CST and autonomously optimize key structural parameters. Using the proposed PIRL framework, the OF-FSS achieves both single-band and dual-band operation with the desired frequency responses. Full-wave simulations validate that the integration of OF-FSS and PIRL provides an efficient and physically interpretable strategy for designing advanced multi-band FSSs.
(This article belongs to the Special Issue Reinforcement Learning: Emerging Techniques and Future Prospects)
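
To make the parameter-optimization loop described in the abstract concrete, here is a heavily simplified, hypothetical Python sketch: a bandit-style epsilon-greedy agent perturbs two illustrative structural parameters and is rewarded when a surrogate response moves toward a target frequency. The surrogate_response() function, the parameter names, and all numeric values are assumptions standing in for the paper's CST-coupled, physics-informed agent, not the authors' actual implementation.

```python
import random

# Hypothetical sketch of RL-driven FSS parameter tuning. surrogate_response() is a
# toy stand-in for a full-wave solver query (the actual PIRL framework couples the
# agent to CST); parameter names and all numbers are illustrative assumptions.

TARGET_FREQ_GHZ = 10.0                                   # assumed target resonance
params = {"ring_width_mm": 0.5, "fractal_scale": 0.6}    # illustrative parameters

def surrogate_response(p):
    """Toy analytic model mapping structural parameters to a resonance (GHz)."""
    return 14.0 - 4.0 * p["ring_width_mm"] - 3.0 * p["fractal_scale"]

def reward(p):
    """Higher reward when the surrogate resonance is closer to the target."""
    return -abs(surrogate_response(p) - TARGET_FREQ_GHZ)

# Action space: perturb one parameter up or down by a small step.
ACTIONS = [(name, delta) for name in params for delta in (-0.02, 0.02)]
q_values = {a: 0.0 for a in ACTIONS}
EPS, ALPHA = 0.2, 0.3

best = dict(params)
for _ in range(300):
    # Epsilon-greedy choice over perturbation actions (bandit-style agent).
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=q_values.get)
    name, delta = action
    before = reward(params)
    params[name] = min(1.0, max(0.05, params[name] + delta))
    improvement = reward(params) - before                # reward signal = improvement
    q_values[action] += ALPHA * (improvement - q_values[action])
    if reward(params) > reward(best):
        best = dict(params)

print("Best parameters:", best)
print("Surrogate resonance (GHz):", round(surrogate_response(best), 2))
```

In the published work, the role of the surrogate is played by scripted full-wave simulations, which is what makes the learned optimization physically interpretable.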
