Editorial

Editorial: Deep Learning and Edge Computing for Internet of Things

Shaohua Wan and Yirui Wu
1 National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, Shenzhen 518060, China
2 Key Laboratory of AI and Information Processing, Hechi University, Hechi 546300, China
3 Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China
4 College of Computer Science and Software Engineering, Hohai University, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(23), 11063; https://doi.org/10.3390/app142311063
Submission received: 20 November 2024 / Accepted: 26 November 2024 / Published: 28 November 2024
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
The evolution of 5G and Internet of Things (IoT) technologies is leading to ubiquitous connections between humans and their environment, enabling applications such as autonomous transportation, mobile e-commerce, unmanned vehicles, and healthcare, and bringing revolutionary changes to our daily lives. Moreover, the computing environment must support an increasing range of functionality, including multi-sensory data processing and analysis, control strategies for complex systems, and, ultimately, artificial intelligence. After several years of development, edge computing for deep learning has shown substantial practical value in the IoT environment. Pushing computing resources to the edge, closer to devices, enables low-latency service delivery for both safety-critical and general applications. However, edge computing still has abundant untapped potential for deep learning: systems should leverage awareness of the surrounding environment and place greater emphasis on edge–edge intelligence collaboration and edge–cloud communication. This Special Issue aims to explore recent advances in edge computing technologies.
This issue includes eleven peer-reviewed papers that focus on deep learning and edge computing for the Internet of Things. “Techniques for Detecting the Start and End Points of Sign Language Utterances to Enhance Recognition Performance in Mobile Environments”, written by Kim et al. [1], proposes a technique that dynamically adjusts the sampling rate based on the number of frames extracted in real time during sign language utterances in mobile environments, so that the start and end points of an utterance can be detected accurately (a hedged code sketch of this idea follows below). Since partitioning and offloading are important for delay-sensitive CNN inference in edge computing, Zha et al. [2] wrote “Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing”, which proposes a parallel partitioning method based on matrix convolution to partition foundation model inference tasks. Inspired by the widespread use of mobile and IoT devices, Sanabria et al. [3] wrote “Connection-Aware Heuristics for Scheduling and Distributing Jobs under Dynamic Dew Computing Environments”, which integrates the concept of “reliability” into state-of-the-art human-designed job distribution heuristics, adapting to the ever-changing network conditions caused by node mobility.
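As a purely illustrative aside, the short Python sketch below shows one way a sampling rate could be adapted to the number of frames extracted in real time, in the spirit of the technique by Kim et al. [1]; the target frame count, the bounds, and the proportional update rule are our assumptions, not the authors' published method.

# Illustrative sketch only: adapt a frame sampling rate to the number of frames
# captured in real time, loosely in the spirit of Kim et al. [1]. The target
# length, bounds, and update rule are assumptions, not the authors' method.
def adjust_sampling_rate(frames_extracted: int,
                         target_frames: int = 60,
                         current_rate: float = 1.0,
                         min_rate: float = 0.25,
                         max_rate: float = 4.0) -> float:
    """Steer the extracted frame count toward a fixed target so that utterances
    of different speeds yield comparably sized inputs for start/end-point detection."""
    if frames_extracted == 0:
        return current_rate
    # Scale the rate by the ratio of target to observed frames, then clamp it.
    new_rate = current_rate * (target_frames / frames_extracted)
    return max(min_rate, min(max_rate, new_rate))

# Example: twice as many frames as targeted were captured, so the rate is halved.
print(adjust_sampling_rate(frames_extracted=120))  # -> 0.5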
Deoxyribonucleic acid (DNA) computing has demonstrated great potential in data encryption owing to its capability for parallel computation, minimal storage requirements, and strong cryptographic properties. Li et al. [4] wrote “Hash Based DNA Computing Algorithm for Image Encryption”, in which the experimental results suggest that the proposed method outperforms most comparative methods in terms of key space, histogram analysis, pixel correlation, information entropy, and sensitivity measurements (a simplified sketch of the general hash-based idea follows below). Basori et al. [5] wrote “Hybrid Deep Convolutional Generative Adversarial Network (DCGAN) and Xtreme Gradient Boost for X-ray Image Augmentation and Detection”, which provides a technique for the automated analysis of X-ray images using server-side processing with a deep convolutional generative adversarial network (DCGAN), thus improving the overall quality of X-ray scans. Wang et al. [6] wrote “A Combined Multi-Classification Network Intrusion Detection System Based on Feature Selection and Neural Network Improvement”, which uses 23 subframes and a mixer for multi-classification, improving the parallelism of the NIDS and making it more adaptable to edge networks.
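To make the hash-based flavor of such encryption schemes more concrete, the following minimal Python sketch derives a keystream by iterated SHA-256 hashing, XORs it with the pixel bytes, and encodes the result as DNA bases; the base mapping and keystream construction are illustrative assumptions and not the algorithm of Li et al. [4].

import hashlib

# 2-bit-to-base mapping (an assumed convention, not the paper's rule table).
DNA_MAP = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def dna_encode(data: bytes) -> str:
    """Encode each byte as four DNA bases, two bits per base, MSB first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(DNA_MAP[(byte >> shift) & 0b11])
    return "".join(bases)

def hash_keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by repeatedly hashing the key."""
    stream, block = b"", key
    while len(stream) < length:
        block = hashlib.sha256(block).digest()
        stream += block
    return stream[:length]

def encrypt(pixels: bytes, key: bytes) -> str:
    """XOR the pixel bytes with a hash-derived keystream, then DNA-encode them."""
    stream = hash_keystream(key, len(pixels))
    cipher = bytes(p ^ s for p, s in zip(pixels, stream))
    return dna_encode(cipher)

print(encrypt(b"\x10\x80\xff", b"secret-key"))  # e.g. a 12-base DNA string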
Furthermore, to improve the detection and recognition accuracy of small, occluded, or truncated objects in complex scenes, Sheng et al. [7] wrote “Faster RCNN Target Detection Algorithm Integrating CBAM and FPN”, which incorporates the convolutional block attention module (CBAM) into the feature extraction network, linking high- and low-level feature maps to obtain high-resolution, semantically strong features. Yu et al. [8] wrote “Rainfall Similarity Search Based on Deep Learning by Using Precipitation Images”; they propose a deep learning-based rainfall similarity search method that uses precipitation images to discover similar rainfall processes, providing new ideas for hydrological forecasting. To reduce the cost of data annotation and improve segmentation accuracy, Fu et al. [9] wrote “Attention-Based Active Learning Framework for Segmentation of Breast Cancer in Mammograms”, which consists of a basic breast cancer segmentation model, an attention-based sampling scheme, and an active learning strategy for labeling (see the sketch after this paragraph). Wang et al. [10] wrote “An Adaptive Dynamic Channel Allocation Algorithm Based on a Temporal–Spatial Correlation Analysis for LEO Satellite Networks”, in which they propose an adaptive dynamic channel allocation algorithm based on temporal–spatial correlation analysis for LEO satellite networks, reducing the rejection probability of handoff calls and thereby improving the overall performance of the LEO satellite network.
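As a small worked example of the uncertainty-driven sampling that an attention-based active learning framework such as that of Fu et al. [9] might perform, the Python sketch below scores unlabeled predictions by mean pixel-wise entropy and selects the most uncertain images for annotation; the entropy criterion and labeling budget are assumptions rather than the authors' exact scheme.

import numpy as np

def pixelwise_entropy(prob_maps: np.ndarray) -> np.ndarray:
    """prob_maps: (N, H, W) predicted foreground probabilities in [0, 1].
    Returns one mean binary-entropy score per image (higher = more uncertain)."""
    p = np.clip(prob_maps, 1e-7, 1 - 1e-7)
    ent = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return ent.mean(axis=(1, 2))

def select_for_labeling(prob_maps: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most uncertain images to send for manual annotation."""
    scores = pixelwise_entropy(prob_maps)
    return np.argsort(scores)[::-1][:budget]

rng = np.random.default_rng(0)
preds = rng.uniform(size=(10, 64, 64))       # mock segmentation-model outputs
print(select_for_labeling(preds, budget=3))  # indices of images to annotate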
Finally, Yan et al. [11] wrote the review “UAV Detection and Tracking in Urban Environments Using Passive Sensors: A Survey”, in which they provide an overview of existing military and commercial anti-UAV systems and offer several suggestions for developing general-purpose UAV-monitoring systems tailored to urban environments.
Overall, this Special Issue presents eleven excellent papers that show promising developments in deep learning and edge computing for the Internet of Things. However, many open challenges remain. For example, a complete architecture for heterogeneous networks has not yet been well established. Moreover, the underlying optimization problems are further complicated by dynamic changes in the network environment, and although the edge computing architecture brings benefits to network applications, it also raises unforeseen security and privacy issues. We hope that researchers and practitioners from academia and related industries will continue to investigate this topic and drive further developments. Finally, we would like to thank the authors of the papers published in this Special Issue and the editorial team of Applied Sciences.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, T.; Kim, B. Techniques for Detecting the Start and End Points of Sign Language Utterances to Enhance Recognition Performance in Mobile Environments. Appl. Sci. 2024, 14, 9199.
  2. Zha, Z.; Yang, Y.; Xia, Y.; Wang, Z.; Luo, B.; Li, K.; Ye, C.; Xu, B.; Peng, K. Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing. Appl. Sci. 2024, 14, 8656.
  3. Sanabria, P.; Montoya, S.; Neyem, A.; Toro Icarte, R.; Hirsch, M.; Mateos, C. Connection-Aware Heuristics for Scheduling and Distributing Jobs under Dynamic Dew Computing Environments. Appl. Sci. 2024, 14, 3206.
  4. Li, H.; Zhang, L.; Cao, H.; Wu, Y. Hash Based DNA Computing Algorithm for Image Encryption. Appl. Sci. 2023, 13, 8509.
  5. Basori, A.H.; Malebary, S.J.; Alesawi, S. Hybrid Deep Convolutional Generative Adversarial Network (DCGAN) and Xtreme Gradient Boost for X-ray Image Augmentation and Detection. Appl. Sci. 2023, 13, 12725.
  6. Wang, Y.; Liu, Z.; Zheng, W.; Wang, J.; Shi, H.; Gu, M. A Combined Multi-Classification Network Intrusion Detection System Based on Feature Selection and Neural Network Improvement. Appl. Sci. 2023, 13, 8307.
  7. Sheng, W.; Yu, X.; Lin, J.; Chen, X. Faster RCNN Target Detection Algorithm Integrating CBAM and FPN. Appl. Sci. 2023, 13, 6913.
  8. Yu, Y.; He, X.; Zhu, Y.; Wan, D. Rainfall Similarity Search Based on Deep Learning by Using Precipitation Images. Appl. Sci. 2023, 13, 4883.
  9. Fu, X.; Cao, H.; Hu, H.; Lian, B.; Wang, Y.; Huang, Q.; Wu, Y. Attention-Based Active Learning Framework for Segmentation of Breast Cancer in Mammograms. Appl. Sci. 2023, 13, 852.
  10. Wang, J.; Sun, L.; Zhou, J.; Han, C. An Adaptive Dynamic Channel Allocation Algorithm Based on a Temporal–Spatial Correlation Analysis for LEO Satellite Networks. Appl. Sci. 2022, 12, 10939.
  11. Yan, X.; Fu, T.; Lin, H.; Xuan, F.; Huang, Y.; Cao, Y.; Hu, H.; Liu, P. UAV Detection and Tracking in Urban Environments Using Passive Sensors: A Survey. Appl. Sci. 2023, 13, 11320.
