Deep Reinforcement Learning and Its Latest Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 October 2024

Special Issue Editor


Prof. Dr. Andrea Asperti
Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
Interests: deep learning; generative models; diffusion models; deep reinforcement learning

Special Issue Information

Dear Colleagues,

Despite a few remarkable achievements, reinforcement learning (RL) is a field within artificial intelligence in which deep learning techniques have so far had a relatively limited impact. While neural networks have helped overcome some of the scalability issues associated with traditional methods, the fundamental methodologies have remained largely unchanged, leaving many long-standing problems unresolved. Several challenges still need to be addressed or better understood in RL: sample efficiency and moving beyond the current “tabula rasa” approach, the exploration vs. exploitation dilemma, limited generalization and difficult adaptation to different scenarios, intrinsic vs. extrinsic rewarding systems, and intelligent transfer learning, among many others.
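To make two of these challenges concrete, the short sketch below illustrates the exploration vs. exploitation trade-off with an epsilon-greedy rule and the mixing of intrinsic and extrinsic rewards with a simple count-based novelty bonus. It is a minimal illustration only; all names and values are placeholders, not a prescription for submissions.

```python
# Minimal illustration of two of the challenges above (all names/values are placeholders):
# epsilon-greedy exploration and a count-based intrinsic bonus added to the extrinsic reward.
import random
from collections import defaultdict

def epsilon_greedy(q_values, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                     # exploration
    return max(range(len(q_values)), key=q_values.__getitem__)     # exploitation

visit_counts = defaultdict(int)

def shaped_reward(state, extrinsic_reward, beta=0.05):
    """Add a novelty bonus that decays as a state is revisited."""
    visit_counts[state] += 1
    intrinsic = beta / visit_counts[state] ** 0.5                  # count-based bonus
    return extrinsic_reward + intrinsic
```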

There is a widely held belief that solving the aforementioned challenges, which are interconnected to some extent, requires a significant paradigm shift in the field of RL. This shift is likely to be centered around a more comprehensive and extensive utilization of deep learning techniques. In light of this perspective, we strongly encourage the submission of innovative and visionary works that align with this vision. We welcome contributions that present mature applications addressing tangible problems, as well as well-crafted proof-of-concept articles that showcase and promote pioneering approaches.

Prof. Dr. Andrea Asperti
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sample efficiency
  • multi-agent DRL
  • transfer learning
  • generalization
  • intrinsic rewarding systems
  • hierarchical DRL
  • sparse reward environments
  • multi-objective RL
  • partial observability
  • multi-task and meta RL

Published Papers (1 paper)


Research

12 pages, 1357 KiB  
Article
Energy Efficient Power Allocation in Massive MIMO Based on Parameterized Deep DQN
by Shruti Sharma and Wonsik Yoon
Electronics 2023, 12(21), 4517; https://doi.org/10.3390/electronics12214517 - 2 Nov 2023
Cited by 2
Abstract
Machine learning offers advanced tools for efficient management of radio resources in modern wireless networks. In this study, we leverage a multi-agent deep reinforcement learning (DRL) approach, specifically the Parameterized Deep Q-Network (DQN), to address the challenging problem of power allocation and user association in massive multiple-input multiple-output (M-MIMO) communication networks. Our approach tackles a multi-objective optimization problem aiming to maximize network utility while meeting stringent quality of service requirements in M-MIMO networks. To address the non-convex and nonlinear nature of this problem, we introduce a novel multi-agent DQN framework. This framework defines a large action space, state space, and reward functions, enabling us to learn a near-optimal policy. Simulation results demonstrate the superiority of our Parameterized Deep DQN (PD-DQN) approach when compared to traditional DQN and RL methods. Specifically, we show that our approach outperforms traditional DQN methods in terms of convergence speed and final performance. Additionally, our approach shows 72.2% and 108.5% improvement over DQN methods and the RL method, respectively, in handling large-scale multi-agent problems in M-MIMO networks.
(This article belongs to the Special Issue Deep Reinforcement Learning and Its Latest Applications)
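As a rough illustration of the parameterized action setting described in the abstract, the sketch below pairs a discrete action choice with a continuous parameter head in a generic P-DQN-style agent. It is not the authors' implementation; network sizes, names, and dimensions are illustrative assumptions.

```python
# Generic P-DQN-style sketch (illustrative only): a parameter network proposes a
# continuous parameter for every discrete action, and a Q-network scores each
# (discrete action, parameter) pair. All dimensions and names are placeholders.
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    """Maps a state to one continuous parameter vector per discrete action."""
    def __init__(self, state_dim, n_actions, param_dim):
        super().__init__()
        self.n_actions, self.param_dim = n_actions, param_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions * param_dim), nn.Tanh(),  # bounded parameters
        )

    def forward(self, state):
        return self.net(state).view(-1, self.n_actions, self.param_dim)

class QNet(nn.Module):
    """Scores Q(s, k, x_k) for every discrete action k given its parameters."""
    def __init__(self, state_dim, n_actions, param_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions * param_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state, params):
        flat = params.view(params.size(0), -1)
        return self.net(torch.cat([state, flat], dim=-1))

# Greedy action selection: pick the discrete action with the highest Q-value
# and keep the continuous parameter proposed for that action.
state_dim, n_actions, param_dim = 8, 4, 2   # placeholder sizes
param_net = ParamNet(state_dim, n_actions, param_dim)
q_net = QNet(state_dim, n_actions, param_dim)

state = torch.randn(1, state_dim)
with torch.no_grad():
    params = param_net(state)               # (1, n_actions, param_dim)
    q_values = q_net(state, params)         # (1, n_actions)
    k = q_values.argmax(dim=-1).item()      # chosen discrete action
    x_k = params[0, k]                      # its continuous parameter
```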
