Machine Learning for Edge Computing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 5075

Special Issue Editors


Dr. Sihai Tang
Guest Editor
Department of Computer Science, Schreiner University, Kerrville, TX 78028, USA
Interests: artificial intelligence; edge computing; connected autonomous vehicles; large language models (LLMs); cybersecurity

Prof. Dr. Song Fu
Guest Editor
Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, USA
Interests: connected and autonomous vehicles; edge and cloud computing; cyberinfrastructures; cybersecurity; distributed and IoT systems; intelligent systems; machine learning; high-performance computing

Special Issue Information

Dear Colleagues,

We are delighted to invite you to submit your latest research to this Special Issue, titled "Machine Learning for Edge Computing". The integration of machine learning with edge computing marks a pivotal shift, bringing computational intelligence closer to data sources, significantly reducing latency, and enhancing privacy. We aim to explore innovative research and advancements in deploying machine learning algorithms directly on edge devices, which face stringent power and computational constraints.

These devices present unique challenges that demand efficient and robust ML solutions capable of operating under limited resources while still supporting rapid data processing and decision-making. We seek high-quality papers addressing both theoretical advancements in algorithm design and practical implementations, showcasing novel algorithmic adaptations and system designs that enable sophisticated ML tasks to be performed efficiently on edge devices.

Potential topics include, but are not limited to, lightweight neural networks, federated learning, real-time data processing, energy-efficient ML architectures, and ML applications at the edge. We particularly welcome submissions demonstrating innovative approaches to adapting algorithms for reduced power consumption, efficient computation, and the trade-offs between computational complexity and performance in edge scenarios.

Contributions may range from exploring the balance between accuracy and computational demand in applications such as connected vehicles, smart cities, IoT systems, and the edge-cloud continuum to investigating the impact of machine learning on the privacy and security of edge computing systems. This Special Issue provides a platform for researchers and practitioners from academia and industry to share their insights and findings, helping us to push the boundaries of what is possible in edge computing with machine learning.

We invite you to contribute to this cutting-edge discussion by submitting your research, reviews, or communication articles to this timely Special Issue.

Dr. Sihai Tang
Prof. Dr. Song Fu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • machine learning
  • federated learning
  • real-time data processing
  • adaptive algorithms
  • energy-efficient machine learning
  • privacy and security in edge computing
  • autonomous decision making

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

9 pages, 880 KiB  
Article
Machine Learning Models to Predict Google Stock Prices
by Cosmina Elena Bucura and Paolo Giudici
Algorithms 2025, 18(2), 81; https://doi.org/10.3390/a18020081 - 3 Feb 2025
Viewed by 787
Abstract
The aim of this paper is to predict Google stock price using different datasets and machine learning models, and understand which models perform better. The novelty of our approach is that we compare models not only by predictive accuracy but also by explainability and robustness. Our findings show that the choice of the best model to employ to predict Google stock prices depends on the desired objective. If the goal is accuracy, the recurrent neural network is the best model, while, for robustness, the Ridge regression model is the most resilient to changes and, for explainability, the Gradient Boosting model is the best choice.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
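To illustrate the kind of comparison the abstract describes, the short Python sketch below (not taken from the paper; it assumes NumPy and scikit-learn are available and uses a synthetic price series in place of the Google data) fits a Ridge regression and a Gradient Boosting model on lagged values and reports their test error, i.e., the accuracy dimension of the comparison.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 600))      # synthetic stand-in for a price series

def make_lagged(series, n_lags=5):
    """Build a supervised dataset where each target is predicted from its n_lags predecessors."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged(prices)
split = int(0.8 * len(X))                            # chronological train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("Ridge", Ridge(alpha=1.0)),
                    ("GradientBoosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.3f}")

Robustness and explainability, the paper's other two criteria, would require additional steps (e.g., perturbing the inputs and inspecting feature attributions) that are omitted from this sketch.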

19 pages, 421 KiB  
Article
Characterizing Perception Deep Learning Algorithms and Applications for Vehicular Edge Computing
by Wang Feng, Sihai Tang, Shengze Wang, Ying He, Donger Chen, Qing Yang and Song Fu
Algorithms 2025, 18(1), 31; https://doi.org/10.3390/a18010031 - 8 Jan 2025
Viewed by 903
Abstract
Vehicular edge computing relies on the computational capabilities of interconnected edge devices to manage incoming requests from vehicles. This offloading process enhances the speed and efficiency of data handling, ultimately boosting the safety, performance, and reliability of connected vehicles. While previous studies have concentrated on processor characteristics, they often overlook the significance of the connecting components. Limited memory and storage resources on edge devices pose challenges, particularly in the context of deep learning, where these limitations can significantly affect performance. The impact of memory contention has not been thoroughly explored, especially regarding perception-based tasks. In our analysis, we identified three distinct behaviors of memory contention, each interacting differently with other resources. Additionally, our investigation of Deep Neural Network (DNN) layers revealed that certain convolutional layers experienced computation time increases exceeding 2849%, while activation layers showed a rise of 1173.34%. Through our characterization efforts, we can model workload behavior on edge devices according to their configuration and the demands of the tasks. This allows us to quantify the effects of memory contention. To our knowledge, this study is the first to characterize the influence of memory on vehicular edge computational workloads, with a strong emphasis on memory dynamics and DNN layers.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
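A minimal sketch of the per-layer timing that such a characterization relies on is given below. It is not the authors' measurement harness; it assumes PyTorch is available and simply times the forward pass of individual convolutional and activation layers, the quantities whose contention-induced slowdowns the paper reports.

import time
import torch
import torch.nn as nn

def time_layer(layer, x, repeats=50):
    """Return mean forward-pass time in milliseconds for a single layer."""
    with torch.no_grad():
        layer(x)                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(repeats):
            layer(x)
        return (time.perf_counter() - start) / repeats * 1000.0

x = torch.randn(1, 64, 56, 56)                    # a mid-network feature-map size
layers = {
    "conv3x3": nn.Conv2d(64, 64, kernel_size=3, padding=1),
    "conv1x1": nn.Conv2d(64, 64, kernel_size=1),
    "relu": nn.ReLU(),
}
for name, layer in layers.items():
    print(f"{name}: {time_layer(layer, x):.2f} ms")
# Rerunning this loop while a separate process stresses memory (e.g., allocating and
# repeatedly touching large buffers) would expose the contention-induced slowdowns
# the abstract describes.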

22 pages, 638 KiB  
Article
Unfolded Algorithms for Deep Phase Retrieval
by Naveed Naimipour, Shahin Khobahi, Mojtaba Soltanalian, Haleh Safavi and Harry C. Shaw
Algorithms 2024, 17(12), 587; https://doi.org/10.3390/a17120587 - 20 Dec 2024
Viewed by 830
Abstract
Exploring the idea of phase retrieval has been intriguing researchers for decades due to its appearance in a wide range of applications. The task of a phase retrieval algorithm is typically to recover a signal from linear phase-less measurements. In this paper, we approach the problem by proposing a hybrid model-based, data-driven deep architecture referred to as Unfolded Phase Retrieval (UPR), which exhibits significant potential in improving the performance of state-of-the-art data-driven and model-based phase retrieval algorithms. The proposed method benefits from the versatility and interpretability of well-established model-based algorithms while simultaneously benefiting from the expressive power of deep neural networks. In particular, our proposed model-based deep architecture is applied to the conventional phase retrieval problem (via the incremental reshaped Wirtinger flow algorithm) and the sparse phase retrieval problem (via the sparse truncated amplitude flow algorithm), showing immense promise in both cases. Furthermore, we consider a joint design of the sensing matrix and the signal processing algorithm and utilize the deep unfolding technique in the process. Our numerical results illustrate the effectiveness of such hybrid model-based and data-driven frameworks and showcase the untapped potential of data-aided methodologies to enhance existing phase retrieval algorithms.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
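The deep unfolding idea behind UPR can be sketched in a few lines. The example below illustrates unfolding a real-valued Wirtinger-flow-style gradient iteration with one learnable step size per layer; it is not the UPR architecture itself and assumes PyTorch is available.

import torch
import torch.nn as nn

class UnfoldedWF(nn.Module):
    def __init__(self, n_layers=10):
        super().__init__()
        # One trainable step size per unfolded iteration.
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, A, y, z0):
        z = z0
        m = y.shape[0]
        for mu in self.steps:
            Az = A @ z
            grad = A.T @ ((Az ** 2 - y) * Az) / m   # gradient of the intensity loss
            z = z - mu * grad
        return z

torch.manual_seed(0)
n, m = 32, 128
A = torch.randn(m, n)
x_true = torch.randn(n)
y = (A @ x_true) ** 2                               # phaseless (intensity) measurements
model = UnfoldedWF()
x_hat = model(A, y, torch.randn(n) * 0.1)
# In a data-driven setting the step sizes would be trained over many (A, y, x) triples
# by backpropagating a reconstruction loss such as min(||x_hat - x||, ||x_hat + x||).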

26 pages, 664 KiB  
Article
Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies
by Mauro Femminella and Gianluca Reali
Algorithms 2024, 17(8), 320; https://doi.org/10.3390/a17080320 - 23 Jul 2024
Cited by 1 | Viewed by 1870
Abstract
Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically to be paid in order to benefit from the offered opportunities includes the need to use a reduced amount of resources compared to the traditional cloud environment. Indeed, it may happen that only one computing node can be used. In these situations, it is essential to introduce computing and memory resource management techniques that allow resources to be optimized while still guaranteeing acceptable performance, in terms of latency and probability of rejection. For this reason, the use of serverless technologies, managed by reinforcement learning algorithms, is an active area of research. In this paper, we explore and compare the performance of some machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we make use of open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained allow both the understanding of some basic mechanisms typical of edge computing systems and related technologies that determine system performance and the guiding of configuration choices for systems in operation.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
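As a sketch of reinforcement-learning-driven autoscaling, the snippet below implements tabular Q-learning against a toy queueing simulator standing in for a serverless cluster; the state, action, and reward definitions are illustrative assumptions and do not reproduce the paper's Kubernetes-based experimental setup.

import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)            # remove a replica, hold, add a replica
MAX_REPLICAS = 10

def step(replicas, action):
    """Toy environment: random arrivals; each replica serves 5 requests per step."""
    replicas = min(MAX_REPLICAS, max(1, replicas + action))
    arrivals = random.randint(0, 40)
    backlog = max(0, arrivals - 5 * replicas)        # unserved requests this step
    reward = -(backlog + 0.5 * replicas)             # penalize latency proxy and resource cost
    state = (min(backlog // 10, 3), replicas)        # coarse backlog bucket + replica count
    return state, reward, replicas

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state, replicas = (0, 1), 1
for _ in range(20000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)              # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit current estimates
    next_state, reward, replicas = step(replicas, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print("learned greedy action per backlog bucket at 1 replica:",
      {b: max(ACTIONS, key=lambda a: Q[((b, 1), a)]) for b in range(4)})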
