Topic Editors

Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
Department of Computer Information Systems, The University of Malta, Msida, Malta
Faculty of Informatics and Computing, Singidunum University, 11010 Belgrade, Serbia
Institute of Mechanical Engineering, University of Zielona Góra, Zielona Góra, Poland

AI-Enabled Sustainable Computing for Digital Infrastructures: Challenges and Innovations

Abstract submission deadline
closed (15 October 2023)
Manuscript submission deadline
closed (15 December 2023)

Topic Information

Dear Colleagues,

The Internet of Things (IoT) has revolutionized various aspects of our lives, with the integration of semiconductor technology and artificial intelligence (AI) playing a crucial role. However, the intensive computational requirements of AI and blockchain technologies have created significant challenges for energy-constrained IoT devices. The rapid advancement of AI technologies, such as deep learning, offers exciting opportunities for extracting reliable information from large amounts of raw sensor data in IoT applications. Blockchain, on the other hand, is gaining traction in IoT development to address security and privacy concerns due to its immutable and decentralized nature.

This Topic focuses on the latest advances and research findings in sustainable computing for IoT applications driven by AI and blockchain. It aims to offer a platform for academics and practitioners worldwide to develop innovative solutions to current challenges. Topics of interest include, but are not limited to, lightweight deep learning models with blockchain-based architectures, the fusion of AI and blockchain for sustainable IoT, new computing architectures for sustainable IoT systems, cyber-physical systems, energy-efficient communication protocols, and security and privacy issues in sustainable computing for IoT applications.

In summary, this Topic aims to explore the interplay between AI, blockchain, and sustainable computing in the context of IoT applications and to provide insights and practical solutions for addressing the challenges of sustainable computing in digital infrastructures.

Prof. Dr. Robertas Damaševičius
Dr. Lalit Garg
Dr. Nebojsa Bacanin
Prof. Dr. Justyna Patalas-Maliszewska
Topic Editors

Keywords

  • Internet of Things (IoT)
  • artificial intelligence (AI)
  • deep learning
  • blockchain
  • sustainable computing
  • edge computing
  • cyber-physical systems

Participating Journals

Journal            Abbreviation     Impact Factor  CiteScore  Launched  First Decision (median)  APC
Applied Sciences   applsci          2.7            5.3        2011      16.9 days                CHF 2400
Digital            digital          -              3.1        2021      22.7 days                CHF 1000
Electronics        electronics     2.9            5.3        2012      15.6 days                CHF 2400
Infrastructures    infrastructures  2.6            5.2        2016      16.9 days                CHF 1800
Machines           machines         2.6            3.0        2013      15.6 days                CHF 2400
Sensors            sensors          3.9            7.3        2001      17 days                  CHF 2600
Systems            systems          1.9            2.8        2013      16.8 days                CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint that establishes precedence;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (10 papers)

20 pages, 13804 KiB  
Article
Manhole Cover Classification Based on Super-Resolution Reconstruction of Unmanned Aerial Vehicle Aerial Imagery
by Dejiang Wang and Yuping Huang
Appl. Sci. 2024, 14(7), 2769; https://doi.org/10.3390/app14072769 - 26 Mar 2024
Abstract
Urban underground pipeline networks are a key component of urban infrastructure, yet many older urban areas lack information about their underground pipelines, and survey methods for underground pipelines are often time-consuming and labor-intensive. Since the manhole cover serves as the hub connecting the underground pipe network with the ground, a map of the pipe network can be generated by obtaining the location and category of each manhole cover. Therefore, this paper proposes a manhole cover detection method based on UAV aerial photography, combining image super-resolution reconstruction with image positioning and classification. Firstly, urban images were obtained by UAV aerial photography, and the YOLOv8 object detection model was used to accurately locate the manhole covers. Next, the SRGAN network performed super-resolution processing on the manhole cover text to improve the clarity of the recognition image. Finally, the clear manhole cover text image was input into the VGG16_BN network to classify the cover. The experimental results showed that the classification accuracy of the proposed method reached 97.62%, verifying its effectiveness in manhole cover detection. The method significantly reduces time and labor costs and provides a new way to acquire manhole cover information. Full article
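The three-stage detect, super-resolve, classify pipeline this abstract describes can be illustrated with a toy sketch. Everything here is a stand-in: a thresholded bounding box replaces YOLOv8, nearest-neighbour upscaling replaces SRGAN, a mean-intensity rule replaces VGG16_BN, and the cover category names are hypothetical:

```python
def locate(image, threshold=128):
    """Toy detector: bounding box of bright pixels (stand-in for YOLOv8).
    Assumes at least one pixel is above the threshold."""
    hits = [(y, x) for y, row in enumerate(image)
            for x, v in enumerate(row) if v >= threshold]
    ys, xs = [y for y, _ in hits], [x for _, x in hits]
    return min(ys), min(xs), max(ys), max(xs)

def upscale(patch, factor=2):
    """Toy super-resolution: nearest-neighbour upscaling (stand-in for SRGAN)."""
    out = []
    for row in patch:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def classify(patch):
    """Toy classifier on mean intensity (stand-in for VGG16_BN);
    the category names are invented for illustration."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return "rainwater" if mean >= 200 else "sewage"

def pipeline(image):
    y0, x0, y1, x1 = locate(image)                        # 1. locate the cover
    patch = [row[x0:x1 + 1] for row in image[y0:y1 + 1]]  # 2. crop it
    return classify(upscale(patch))                       # 3. super-resolve, classify
```

The value of the staged design is that each component can be swapped for its real counterpart without touching the others.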

20 pages, 3594 KiB  
Article
Reinforcement Learning-Driven Bit-Width Optimization for the High-Level Synthesis of Transformer Designs on Field-Programmable Gate Arrays
by Seojin Jang and Yongbeom Cho
Electronics 2024, 13(3), 552; https://doi.org/10.3390/electronics13030552 - 30 Jan 2024
Abstract
With the rapid development of deep-learning models, especially the widespread adoption of transformer architectures, the demand for efficient hardware accelerators with field-programmable gate arrays (FPGAs) has increased owing to their flexibility and performance advantages. Although high-level synthesis can shorten the hardware design cycle, determining the optimal bit-width for various transformer designs remains challenging. Therefore, this paper proposes a novel technique based on a predesigned transformer hardware architecture tailored for various types of FPGAs. The proposed method leverages a reinforcement learning-driven mechanism to automatically adapt and optimize bit-width settings based on user-provided transformer variants during inference on an FPGA, significantly alleviating the challenges related to bit-width optimization. The effect of bit-width settings on resource utilization and performance across different FPGA types was analyzed. The efficacy of the proposed method was demonstrated by optimizing the bit-width settings for users’ transformer-based model inferences on an FPGA. The use of the predesigned hardware architecture significantly enhanced the performance. Overall, the proposed method enables effective and optimized implementations of user-provided transformer-based models on an FPGA, paving the way for edge FPGA-based deep-learning accelerators while reducing the time and effort typically required in fine-tuning bit-width settings. Full article
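The RL-driven bit-width search the abstract describes can be caricatured as a bandit over candidate settings. This sketch assumes an invented accuracy/resource trade-off; a real flow would measure model accuracy and FPGA utilization for each bit-width rather than compute them from a formula:

```python
import random

CANDIDATES = [4, 8, 12, 16]  # hypothetical bit-width settings

def reward(bits):
    # Invented trade-off: wider datapaths improve accuracy but cost resources.
    accuracy = 1.0 - 2.0 / bits
    resource = 0.03 * bits
    return accuracy - resource

def pick_bit_width(episodes=200, eps=0.2, seed=0):
    """Epsilon-greedy bandit over bit-width settings (minimal RL stand-in)."""
    rng = random.Random(seed)
    q = {b: reward(b) for b in CANDIDATES}  # pull every arm once
    n = {b: 1 for b in CANDIDATES}
    for _ in range(episodes):
        b = rng.choice(CANDIDATES) if rng.random() < eps else max(q, key=q.get)
        n[b] += 1
        q[b] += (reward(b) - q[b]) / n[b]   # incremental mean update
    return max(q, key=q.get)
```

With noisy measured rewards the averaging in `q` does real work; here the rewards are deterministic, so the sketch only shows the shape of the search loop.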

19 pages, 13040 KiB  
Article
A Framework for Determining the Optimal Vibratory Frequency of Graded Gravel Fillers Using Hammering Modal Approach and ANN
by Xianpu Xiao, Taifeng Li, Feng Lin, Xinzhi Li, Zherui Hao and Jiashen Li
Sensors 2024, 24(2), 689; https://doi.org/10.3390/s24020689 - 22 Jan 2024
Abstract
To address the uncertainty of the optimal vibratory frequency f_ov of high-speed railway graded gravel (HRGG) and achieve high-precision prediction of f_ov, the following research was conducted. Firstly, commencing with vibratory compaction experiments and the hammering modal analysis method, the resonance frequency f_0 of HRGG fillers, varying in compactness K, was initially determined. The correlation between f_0 and f_ov was revealed through vibratory compaction experiments conducted at different vibratory frequencies. This correlation was established based on the compaction physical–mechanical properties of HRGG fillers, encompassing maximum dry density ρ_dmax, stiffness K_rd, and bearing capacity coefficient K_20. Secondly, the gray relational analysis algorithm was used to determine the key features influencing f_ov based on the quantified relationship between the filler features and f_ov. Finally, the key features influencing f_ov were used as input parameters to establish the artificial neural network prediction model (ANN-PM) for f_ov. The predictive performance of ANN-PM was evaluated from the ablation study, prediction accuracy, and prediction error. The results showed that ρ_dmax, K_rd, and K_20 all reached optimal states when f_ov was set to f_0 for different-gradation HRGG fillers. Furthermore, the key features influencing f_ov were determined to be the maximum particle diameter d_max, gradation parameters b and m, the content of flat and elongated particles in the coarse aggregate Q_e, and the Los Angeles abrasion of the coarse aggregate LAA. Among them, the influence of d_max on the ANN-PM predictive performance was the most significant. On the training and testing sets, the goodness-of-fit R² of ANN-PM exceeded 0.95, and the prediction errors were small, indicating that the accuracy of ANN-PM predictions was relatively high. In addition, the ANN-PM exhibited excellent robustness. The research results provide a novel method for determining the f_ov of subgrade fillers and offer theoretical guidance for the intelligent construction of high-speed railway subgrades. Full article
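The gray relational analysis step used above to screen key features can be sketched in a few lines. The sequences and the distinguishing coefficient ρ = 0.5 below are illustrative, not the paper's data; the output grade ranks how strongly each candidate feature tracks the reference quantity:

```python
def _minmax(seq):
    lo, hi = min(seq), max(seq)
    return [(x - lo) / (hi - lo) for x in seq]  # assumes a non-constant sequence

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Grey relational grade of each comparison sequence vs. the reference;
    a higher grade means a stronger influence. rho is the distinguishing
    coefficient (0.5 is the conventional default)."""
    ref = _minmax(reference)
    norm = [_minmax(c) for c in comparisons]
    deltas = [[abs(r - x) for r, x in zip(ref, c)] for c in norm]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)           # assumes dmax > 0
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A sequence that rises and falls exactly with the reference scores 1.0; an anti-correlated one scores much lower, which is what makes the grades usable as a feature-screening criterion.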

21 pages, 7565 KiB  
Article
Early Fire Detection Using Long Short-Term Memory-Based Instance Segmentation and Internet of Things for Disaster Management
by Sharaf J. Malebary
Sensors 2023, 23(22), 9043; https://doi.org/10.3390/s23229043 - 8 Nov 2023
Abstract
Fire outbreaks continue to cause damage despite the improvements in fire-detection tools and algorithms. As the human population and global warming continue to rise, fires have emerged as a significant worldwide issue. These factors may contribute to the greenhouse effect and climatic changes, among other detrimental consequences. It is still challenging to implement a well-performing and optimized approach, which is sufficiently accurate, and has tractable complexity and a low false alarm rate. A small fire and the identification of a fire from a long distance are also challenges in previously proposed techniques. In this study, we propose a novel hybrid model, called IS-CNN-LSTM, based on convolutional neural networks (CNN) to detect and analyze fire intensity. A total of 21 convolutional layers, 24 rectified linear unit (ReLU) layers, 6 pooling layers, 3 fully connected layers, 2 dropout layers, and a softmax layer are included in the proposed 57-layer CNN model. Our proposed model performs instance segmentation to distinguish between fire and non-fire events. To reduce the intricacy of the proposed model, we also propose a key-frame extraction algorithm. The proposed model uses Internet of Things (IoT) devices to alert the relevant person by calculating the severity of the fire. Our proposed model is tested on a publicly available dataset having fire and normal videos. The achievement of 95.25% classification accuracy, 0.09% false positive rate (FPR), 0.65% false negative rate (FNR), and a prediction time of 0.08 s validates the proposed system. Full article
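The key-frame extraction idea mentioned above, dropping frames that add no new information before they reach the heavy model, can be sketched as a frame-difference filter. The paper's actual algorithm is not specified here, so this is only an illustrative stand-in (frames as flat intensity lists, threshold chosen arbitrarily):

```python
def extract_key_frames(frames, threshold=10.0):
    """Keep a frame only if its mean absolute pixel difference from the
    last kept frame reaches the threshold (frames = flat intensity lists)."""
    if not frames:
        return []
    keys, last = [0], frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff >= threshold:
            keys.append(i)
            last = frame
    return keys
```

On near-static surveillance footage this kind of filter discards most frames, which is how key-frame selection reduces the intricacy of the downstream segmentation model.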

15 pages, 4140 KiB  
Article
Personalized Federated Learning Algorithm with Adaptive Clustering for Non-IID IoT Data Incorporating Multi-Task Learning and Neural Network Model Characteristics
by Hua-Yang Hsu, Kay Hooi Keoy, Jun-Ru Chen, Han-Chieh Chao and Chin-Feng Lai
Sensors 2023, 23(22), 9016; https://doi.org/10.3390/s23229016 - 7 Nov 2023
Abstract
The proliferation of IoT devices has led to an unprecedented integration of machine learning techniques, raising concerns about data privacy. To address these concerns, federated learning has been introduced. However, practical implementations face challenges, including communication costs, data and device heterogeneity, and privacy security. This paper proposes an innovative approach within the context of federated learning, introducing a personalized joint learning algorithm for Non-IID IoT data. This algorithm incorporates multi-task learning principles and leverages neural network model characteristics. To overcome data heterogeneity, we present a novel clustering algorithm designed specifically for federated learning. Unlike conventional methods that require a predetermined number of clusters, our approach utilizes automatic clustering, eliminating the need for fixed cluster specifications. Extensive experimentation demonstrates the exceptional performance of the proposed algorithm, particularly in scenarios with specific client distributions. By significantly improving the accuracy of trained models, our approach not only addresses data heterogeneity but also strengthens privacy preservation in federated learning. In conclusion, we offer a robust solution to the practical challenges of federated learning in IoT environments. By combining personalized joint learning, automatic clustering, and neural network model characteristics, we facilitate more effective and privacy-conscious machine learning in Non-IID IoT data settings. Full article
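Clustering without a predetermined cluster count, as the abstract describes, can be illustrated with a simple leader-clustering pass over client model-weight vectors. This is a generic stand-in, not the paper's algorithm; `tau` is a hypothetical distance threshold that controls granularity instead of a fixed k:

```python
import math

def auto_cluster(vectors, tau):
    """Leader clustering: each vector joins the nearest centroid within
    distance tau, or founds a new cluster; no preset cluster count."""
    centroids, members = [], []
    for i, v in enumerate(vectors):
        best, best_d = None, tau
        for j, c in enumerate(centroids):
            d = math.dist(v, c)
            if d < best_d:
                best, best_d = j, d
        if best is None:
            centroids.append(list(v))        # found a new cluster
            members.append([i])
        else:
            members[best].append(i)          # join; update the running mean
            n = len(members[best])
            centroids[best] = [(c * (n - 1) + x) / n
                               for c, x in zip(centroids[best], v)]
    return members
```

In a federated setting the vectors would be (compressed) model updates from Non-IID clients, and each resulting cluster would train its own personalized model.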

24 pages, 3475 KiB  
Article
A Knowledge-Graph-Driven Method for Intelligent Decision Making on Power Communication Equipment Faults
by Huiying Qu, Yiying Zhang, Kun Liang, Siwei Li and Xianxu Huo
Electronics 2023, 12(18), 3939; https://doi.org/10.3390/electronics12183939 - 18 Sep 2023
Abstract
The grid terminal deploys numerous types of communication equipment for the digital construction of the smart grid, and a failure of this communication equipment can jeopardize the safety of the power grid. The massive amount of communication equipment leads to a dramatic increase in fault diagnosis data, making it difficult to locate fault information during equipment maintenance. Therefore, this paper designs a knowledge-graph-driven method for intelligent decision making on power communication equipment faults. The method consists of two parts: power knowledge extraction and user-intent multi-feature learning recommendation. The power knowledge extraction model utilizes a multi-layer bidirectional encoder to capture the global features of a sentence and then characterizes its deep local semantics through a convolutional pooling layer, achieving joint extraction and visual display of fault entity relations. The user-intent multi-feature learning recommendation model uses a graph convolutional neural network to aggregate higher-order neighborhood information of faulty entities and a cross-compression matrix to model the feature interactions between the user and the graph, achieving accurate prediction of fault retrieval. The experimental results show that, in knowledge extraction, the method outperforms classical models such as BERT-CRF, reaching an F1 value of 81.7% and effectively extracting fault knowledge. The user-intent multi-feature learning recommendation also performs best, with an F1 value of 87%, an improvement of 5–11% over classical models such as CKAN and KGCN, effectively addressing insufficient mining of user retrieval intent. This method realizes accurate retrieval and personalized recommendation of fault information for electric power communication equipment. Full article

18 pages, 952 KiB  
Article
Risk-Sensitive Markov Decision Processes of USV Trajectory Planning with Time-Limited Budget
by Yi Ding and Hongyang Zhu
Sensors 2023, 23(18), 7846; https://doi.org/10.3390/s23187846 - 13 Sep 2023
Abstract
Trajectory planning plays a crucial role in ensuring the safe navigation of ships, as it involves complex decision making influenced by various factors. This paper presents a heuristic algorithm, named the Markov decision process Heuristic Algorithm (MHA), for time-optimized avoidance of Unmanned Surface Vehicles (USVs) based on a Risk-Sensitive Markov decision process model. The proposed method utilizes the Risk-Sensitive Markov decision process model to generate a set of states within the USV collision avoidance search space. These states are determined based on the reachable locations and directions considering the time cost associated with the set of actions. By incorporating an enhanced reward function and a constraint time-dependent cost function, the USV can effectively plan practical motion paths that align with its actual time constraints. Experimental results demonstrate that the MHA algorithm enables decision makers to evaluate the trade-off between the budget and the probability of achieving the goal within the given budget. Moreover, the local stochastic optimization criterion assists the agent in selecting collision avoidance paths without significantly increasing the risk of collision. Full article
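The budget-versus-goal-probability trade-off the abstract evaluates can be made concrete with a tiny dynamic program over (state, remaining budget) pairs. The chain world, the "safe"/"risky" actions, and their time costs below are invented for illustration and are not the paper's USV model:

```python
from functools import lru_cache

GOAL = 3  # hypothetical 1-D waypoint chain, not the paper's state space
ACTIONS = {            # name: (time cost, [(probability, position change), ...])
    "safe":  (2, [(1.0, 1)]),
    "risky": (1, [(0.7, 2), (0.3, 0)]),
}

@lru_cache(maxsize=None)
def reach_prob(state, budget):
    """Maximum probability of reaching GOAL within the remaining time budget."""
    if state >= GOAL:
        return 1.0
    best = 0.0
    for cost, outcomes in ACTIONS.values():
        if cost > budget:          # action no longer affordable
            continue
        p = sum(pr * reach_prob(min(state + d, GOAL), budget - cost)
                for pr, d in outcomes)
        best = max(best, p)
    return best
```

In this toy model a generous budget makes the safe route certain, while a tight budget forces the risky action and caps the success probability below 1, which is exactly the kind of trade-off curve a decision maker would inspect.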

25 pages, 38224 KiB  
Article
Non-Standard Map Robot Path Planning Approach Based on Ant Colony Algorithms
by Feng Li, Young-Chul Kim and Boyin Xu
Sensors 2023, 23(17), 7502; https://doi.org/10.3390/s23177502 - 29 Aug 2023
Abstract
Robot path planning is an important component of ensuring that robots complete work tasks effectively. Nowadays, most maps used for robot path planning obtain coordinate information through sensor measurement, build a map model from those coordinates, and then plan the robot's path, which is time-consuming and labor-intensive. To solve this problem, this paper studies a robot path planning method based on ant colony algorithms applied after a standardized design of non-standard map grids, such as photographs. The method combines robot grid map modeling with image processing and introduces calibration objects. By converting non-standard actual environment maps into standard grid maps, the method is made suitable for robot motion path planning on non-standard maps of different types and sizes. After the planned path and pose are obtained, they are combined with the non-standard real environment map to produce the robot motion planning map. The experimental results showed that this method adapts well to motion planning on non-standard maps, can realize robot path planning under non-standard real environment maps, and makes the resulting robot motion path more intuitive and convenient to display. Full article
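The map-standardization idea, turning a photograph into a grid map a planner can use, can be sketched by block-averaging pixel intensities into an occupancy grid. For brevity, plain BFS stands in here for the ant colony search; the image, threshold, and grid size are illustrative:

```python
from collections import deque

def image_to_grid(pixels, threshold=128, cells=(4, 4)):
    """Block-average a grayscale image into an occupancy grid: a cell is
    an obstacle (1) if its mean intensity is below the threshold."""
    rows, cols = cells
    ch, cw = len(pixels) // rows, len(pixels[0]) // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [pixels[y][x]
                     for y in range(r * ch, (r + 1) * ch)
                     for x in range(c * cw, (c + 1) * cw)]
            row.append(1 if sum(block) / len(block) < threshold else 0)
        grid.append(row)
    return grid

def shortest_path(grid, start, goal):
    """BFS over free cells (a stand-in for the ant colony search)."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None
```

A calibration object of known real-world size would fix the metres-per-cell scale; once the photo is a standard grid, any grid planner (ACO included) can run on it unchanged.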

26 pages, 7015 KiB  
Article
Smart Preventive Maintenance of Hybrid Networks and IoT Systems Using Software Sensing and Future State Prediction
by Marius Minea, Viviana Laetitia Minea and Augustin Semenescu
Sensors 2023, 23(13), 6012; https://doi.org/10.3390/s23136012 - 28 Jun 2023
Abstract
At present, IoT and intelligent applications are developed on a large scale. However, these new applications require stable wireless connectivity with sensors, based on several communication standards such as ZigBee, LoRa, nRF, Bluetooth, or cellular (LTE, 5G, etc.). The continuous expansion of these networks and services also brings the requirement of a stable level of service, which makes the task of maintenance operators more difficult. Therefore, this research proposes an integrated solution for the management of preventive maintenance, employing software-defined sensing for hardware components, applications, and client satisfaction. A specific algorithm for monitoring service levels was developed, and an integrated instrument to assist the management of preventive maintenance was proposed, both based on prediction of the network's future states. A smart city case study was also investigated to verify the expandability and flexibility of the approach. The purpose of this research is to improve the efficiency and response time of preventive maintenance, helping to rapidly recover the required service levels and thus increasing the resilience of complex systems. Full article

20 pages, 4859 KiB  
Article
Tendon Stress Estimation from Strain Data of a Bridge Girder Using Machine Learning-Based Surrogate Model
by Sadia Umer Khayam, Ammar Ajmal, Junyoung Park, In-Ho Kim and Jong-Woong Park
Sensors 2023, 23(11), 5040; https://doi.org/10.3390/s23115040 - 24 May 2023
Abstract
Prestressed girders reduce cracking and allow for long spans, but their construction requires complex equipment and strict quality control. Their accurate design depends on a precise knowledge of tensioning force and stresses, as well as monitoring the tendon force to prevent excessive creep. Estimating tendon stress is challenging due to limited access to prestressing tendons. This study utilizes a strain-based machine learning method to estimate real-time applied tendon stress. A dataset was generated using finite element method (FEM) analysis, varying the tendon stress in a 45 m girder. Network models were trained and tested on various tendon force scenarios, with prediction errors of less than 10%. The model with the lowest RMSE was chosen for stress prediction, accurately estimating the tendon stress, and providing real-time tensioning force adjustment. The research offers insights into optimizing girder locations and strain numbers. The results demonstrate the feasibility of using machine learning with strain data for instant tendon force estimation. Full article
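The surrogate-model workflow, fitting a cheap function from measured strain to tendon stress on FEM-generated data, can be illustrated with the simplest possible surrogate: a one-gauge least-squares line. The paper's model is a neural network over many gauges and the training data below are invented, so this only shows the shape of the workflow:

```python
def fit_linear_surrogate(strains, stresses):
    """Least-squares line mapping a single strain gauge to tendon stress."""
    n = len(strains)
    mx = sum(strains) / n
    my = sum(stresses) / n
    sxx = sum((x - mx) ** 2 for x in strains)
    sxy = sum((x - mx) * (y - my) for x, y in zip(strains, stresses))
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda strain: slope * strain + intercept

# "FEM-like" training data, invented for illustration: stress = 2*strain + 5.
strains = [100.0, 200.0, 300.0, 400.0]
stresses = [2.0 * s + 5.0 for s in strains]
predict = fit_linear_surrogate(strains, stresses)
```

At runtime the surrogate replaces a fresh FEM solve: each new strain reading is mapped to a stress estimate instantly, which is what enables the real-time tensioning-force adjustment the abstract describes.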
