New Advances in Distributed Computing and Its Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 February 2025 | Viewed by 21464

Special Issue Editors

Guest Editor
1. Haihe Laboratory of Information Technology Application Innovation, Tianjin 300350, China
2. Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Interests: artificial intelligence; big data; edge computing; Internet of Things; computer network security

Guest Editor
School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen 518055, China
Interests: network performance; future Internet architecture; Internet measurement

Guest Editor
Haihe Laboratory of Information Technology Application Innovation, Tianjin 300350, China
Interests: IoT security; data-driven anomaly detection; blockchain

Guest Editor
School of Computer Science and Technology, Xidian University, Xi'an 710071, China
Interests: hardware security; hardware Trojan detection; physically unclonable function (PUF); secure architecture; bus security for System-on-Chips (SoCs)

Special Issue Information

Dear Colleagues,

Distributed computing, especially the execution of computing functions within the network, enables systems to be designed so that computing and networking resources are optimized jointly, tightening the integration between the two. New computing paradigms such as container-based microservices provide scalable computing capacity and elastic performance, significantly improving resilience, especially under dynamic, unpredictable workload changes. At the same time, the proliferation of artificial intelligence and the rise of emerging applications have unleashed the potential of distributed computing. Advanced distributed computing frameworks offer an efficient and robust way to support large AI models and to deploy them, or their components, in either data-center or edge computing environments. This presents exciting opportunities for innovation in areas such as autonomous driving, smart healthcare, and other intelligent applications.

This Special Issue aims to identify and promote current advances, techniques, innovations, and real-world solutions in distributed computing and its applications. It seeks to gather a comprehensive range of quantitative and qualitative research contributions from individual, academic, organizational, and industry practitioners working in this evolving field. By exploring the latest advances in distributed computing, the Special Issue intends to provide valuable insights and innovative approaches for tackling the challenges posed by this rapidly expanding domain.

Topics of interest for the Special Issue include, but are not limited to, the following:

  1. Advanced distributed computing architectures, such as edge computing, container-based microservices, and blockchain;
  2. New optimizations of distributed computing in big data processing and artificial intelligence;
  3. Innovations and challenges in the synergy between edge computing and cloud computing for intelligent applications;
  4. Innovative applications and optimizations of distributed systems in blockchain technology;
  5. Applications of distributed computing in domains like autonomous driving, smart healthcare, smart cities, and intelligent transportation;
  6. Research on security, privacy, and trust mechanisms in distributed computing;
  7. Other technologies and applications that advance distributed computing.

Dr. Zhiwei Xu
Dr. Jianer Zhou
Dr. Xueshuo Xie
Dr. Zhao Huang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • big data processing
  • artificial intelligence
  • cloud computing
  • blockchain technology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (17 papers)

Research

17 pages, 2819 KiB  
Article
DGA-Based Fault Diagnosis Using Self-Organizing Neural Networks with Incremental Learning
by Siqi Liu, Zhiyuan Xie and Zhengwei Hu
Electronics 2025, 14(3), 424; https://doi.org/10.3390/electronics14030424 - 22 Jan 2025
Viewed by 309
Abstract
Power transformers are vital components of electrical power systems, ensuring reliable and efficient energy transfer between high-voltage transmission and low-voltage distribution networks. However, they are prone to various faults, such as insulation breakdowns, winding deformations, partial discharges, and short circuits, which can disrupt electrical service, incur significant economic losses, and pose safety risks. Traditional fault diagnosis methods, including visual inspection, dissolved gas analysis (DGA), and thermal imaging, face challenges such as subjectivity, intermittent data collection, and reliance on expert interpretation. To address these limitations, this paper proposes a novel distributed approach for multi-fault diagnosis of power transformers based on a self-organizing neural network combined with data augmentation and incremental learning techniques. The proposed framework addresses critical challenges, including data quality issues, computational complexity, and the need for real-time adaptability. Data cleaning and preprocessing techniques improve the reliability of input data, while data augmentation generates synthetic samples to mitigate data imbalance and enhance the recognition of rare fault patterns. A two-stage classification model integrates unsupervised and supervised learning, with k-means clustering applied in the first stage for initial fault categorization, followed by a self-organizing neural network in the second stage for refined fault diagnosis. The self-organizing neural network dynamically suppresses inactive nodes and optimizes its training parameter set, reducing computational complexity without sacrificing accuracy. Additionally, incremental learning enables the model to continuously adapt to new fault scenarios without modifying its architecture, ensuring real-time performance and adaptability across diverse operational conditions. Experimental validation demonstrates the effectiveness of the proposed method in achieving accurate, efficient, and adaptive fault diagnosis for power transformers, outperforming traditional and conventional machine learning approaches. This work provides a robust framework for integrating advanced machine learning techniques into power system monitoring, paving the way for automated, real-time, and reliable transformer fault diagnosis systems. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
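
For readers who want a concrete picture of the two-stage idea, the sketch below pairs k-means grouping with a simple competitive-learning (self-organizing-style) refinement. The random features, labels, and cluster counts are placeholders standing in for DGA gas-ratio data; this illustrates the shape of the pipeline, not the paper's model.

```python
# Hedged sketch: coarse k-means grouping followed by per-cluster prototype
# refinement via a competitive-learning rule. All data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))          # stand-in for DGA gas-ratio features
y = rng.integers(0, 3, size=300)       # stand-in fault labels (3 classes)

# Stage 1: unsupervised coarse grouping of samples.
coarse = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: one prototype per (cluster, class), refined by pulling the winning
# prototype toward each sample (a minimal stand-in for the self-organizing net).
protos = {}
for c in range(3):
    idx = coarse.labels_ == c
    for cls in np.unique(y[idx]):
        protos[(c, cls)] = X[idx][y[idx] == cls].mean(axis=0)

def refine(protos, X, y, labels, lr=0.05, epochs=5):
    for _ in range(epochs):
        for xi, yi, ci in zip(X, y, labels):
            key = (ci, yi)
            if key in protos:
                protos[key] += lr * (xi - protos[key])
    return protos

protos = refine(protos, X, y, coarse.labels_)

def predict(x):
    c = coarse.predict(x[None, :])[0]
    cands = {k: v for k, v in protos.items() if k[0] == c}
    return min(cands, key=lambda k: np.linalg.norm(x - cands[k]))[1]

print(predict(X[0]), y[0])
```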

19 pages, 1319 KiB  
Article
Towards Failure-Aware Inference in Harsh Operating Conditions: Robust Mobile Offloading of Pre-Trained Neural Networks
by Wenjing Liu, Zhongmin Chen and Yunzhan Gong
Electronics 2025, 14(2), 381; https://doi.org/10.3390/electronics14020381 - 19 Jan 2025
Viewed by 304
Abstract
Pre-trained neural networks like GPT-4 and Llama2 have revolutionized intelligent information processing, but their deployment in industrial applications faces challenges, particularly in harsh environments. To address these related issues, model offloading, which involves distributing the computational load of pre-trained models across edge devices, has emerged as a promising solution. While this approach enables the utilization of more powerful models, it faces significant challenges in harsh environments, where reliability, connectivity, and resilience are critical. This paper introduces failure-resilient inference in mobile networks (FRIM), a framework that ensures robust offloading and inference without the need for model retraining or reconstruction. FRIM leverages graph theory to optimize partition redundancy and incorporates an adaptive failure detection mechanism for mobile inference with efficient fault tolerance. Experimental results on DNN models (AlexNet, ResNet, VGG-16) show that FRIM improves inference performance and resilience, enabling more reliable mobile applications in harsh operating environments. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
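
A minimal sketch of the redundant-partition idea behind failure-resilient offloading, assuming a sequential model split into stages with one primary and one backup device per stage. Device names, the toy failure detector, and the assignment rule are illustrative, not FRIM's graph-theoretic optimization.

```python
# Hedged sketch: each model partition gets a primary and a backup device, and
# inference falls back to the backup when the primary is unreachable.
import random

partitions = ["block1", "block2", "block3", "block4"]   # pre-trained model split into stages
devices = ["edge-A", "edge-B", "edge-C"]

# Round-robin primary assignment with a distinct backup per partition.
plan = {}
for i, p in enumerate(partitions):
    primary = devices[i % len(devices)]
    backup = devices[(i + 1) % len(devices)]
    plan[p] = (primary, backup)

def device_alive(dev, p_fail=0.3):
    """Toy failure detector; a real system would probe the device."""
    return random.random() > p_fail

def run_inference(plan):
    route = []
    for part, (primary, backup) in plan.items():
        if device_alive(primary):
            route.append((part, primary))
        elif device_alive(backup):
            route.append((part, backup))       # fail over to the redundant replica
        else:
            raise RuntimeError(f"no live device for {part}")
    return route

print(run_inference(plan))
```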

14 pages, 492 KiB  
Article
Towards Intelligent Edge Computing: A Resource- and Reliability-Aware Hybrid Scheduling Method on Multi-FPGA Systems
by Zeyu Li, Yuchen Hao, Hongxu Gao and Jia Zhou
Electronics 2025, 14(1), 82; https://doi.org/10.3390/electronics14010082 - 27 Dec 2024
Viewed by 327
Abstract
Multi-FPGA systems can form larger and more powerful computing units through high-speed interconnects between chips, and are beginning to be widely used by various computing service providers, especially in edge computing. However, the new computing architecture brings new challenges to efficient and reliable task scheduling. In this context, we propose a resource- and reliability-aware hybrid scheduling method on Multi-FPGA systems. First, a set of models is established based on the resource/time requirements, communication overhead, and state conversion process of tasks to further analyze the constraints of system scheduling. On this basis, the large task is divided into subtasks based on the data dependency matrix, and the Maintenance Multiple Sequence (MMS) algorithm is used to generate execution sequences for each subtask to the Multi-FPGA systems to fully exploit resources and ensure reliable operation. Compared with state-of-the-art scheduling methods, the proposed method can achieve an average increase in resource utilization of 7%; in terms of reliability, it achieves good execution gains, with an average task completion rate of 98.3% and a mean time to failure of 15.7 years. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
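
The dependency-driven splitting step can be illustrated as follows: a topological order is derived from a data-dependency matrix and each subtask is placed greedily on the FPGA with the most free resources. The matrix, costs, and placement rule are invented for illustration; the paper's MMS sequence generation and reliability model are not reproduced.

```python
# Hedged sketch: schedule subtasks from a dependency matrix onto two FPGAs.
import numpy as np

# dep[i, j] = 1 means subtask j depends on subtask i.
dep = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
cost = [30, 20, 25, 10]                 # resource demand of each subtask (illustrative units)
free = {"fpga0": 60, "fpga1": 50}       # free resources per FPGA

# Kahn's algorithm gives a topological execution order.
indeg = dep.sum(axis=0).tolist()
ready = [i for i, d in enumerate(indeg) if d == 0]
order, placement = [], {}
while ready:
    t = ready.pop(0)
    order.append(t)
    # place on the FPGA with the most remaining resources that can fit the subtask
    cand = max(free, key=free.get)
    if free[cand] < cost[t]:
        raise RuntimeError("insufficient resources")
    free[cand] -= cost[t]
    placement[t] = cand
    for j in np.flatnonzero(dep[t]):
        indeg[j] -= 1
        if indeg[j] == 0:
            ready.append(j)

print(order, placement)
```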

29 pages, 5282 KiB  
Article
Dynamic Artificial Bee Colony Algorithm Based on Permutation Solution
by Yongkang Gong, Donglin Zhu, Chengtian Ouyang, Hongjie Guo and Changjun Zhou
Electronics 2024, 13(24), 4934; https://doi.org/10.3390/electronics13244934 - 13 Dec 2024
Viewed by 716
Abstract
The artificial bee colony algorithm (ABC), as a classic swarm intelligence algorithm, has advantages such as fewer parameters and clear logic. However, ABC cannot balance the exploration and development stages well in the iterative process, and is easily affected by local optimal solutions in the final optimization stage, which affects the final optimal solution. To effectively compensate for the shortcomings of the algorithm, a neighbor learning artificial bee colony algorithm based on permutation solutions (CNABC) is proposed. In CNABC, a dynamic neighbor learning strategy is proposed to improve the search ability and optimal selection ability of the algorithm in the exploration phase. To solve the problem of lack of balance between exploration and development, the local optimal solution is used to guide the update of the surrounding difference. After the three stages of the algorithm are completed, a substitution mechanism is introduced, which replaces the worst solution by introducing external candidate solutions as feasible solutions, thereby improving the algorithm’s ability to escape from local optima. Finally, comparative algorithm experiments are conducted on the CEC2022 test set, and compared with the TOP algorithm in the CEC competition on the CEC2022 test set. According to the experimental results, CNABC has good competitiveness in the comparative algorithm, which verifies the novelty and optimization ability of CNABC. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
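
To make the phases that CNABC modifies concrete, here is a standard artificial bee colony loop (employed, onlooker, and scout bees) on a toy sphere function. The neighbor-learning and substitution mechanisms described in the abstract are not implemented; this is the baseline ABC skeleton only.

```python
# Hedged sketch: baseline ABC on a toy objective (minimize the sphere function).
import numpy as np

rng = np.random.default_rng(1)
dim, n_food, limit, iters = 5, 10, 20, 200
lo, hi = -5.0, 5.0
f = lambda x: np.sum(x ** 2)

foods = rng.uniform(lo, hi, (n_food, dim))
fit = np.array([f(x) for x in foods])
trials = np.zeros(n_food, dtype=int)

def neighbor(i):
    k = rng.integers(n_food)
    while k == i:
        k = rng.integers(n_food)
    j = rng.integers(dim)
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
    return np.clip(cand, lo, hi)

for _ in range(iters):
    # employed bees: local search around each food source
    for i in range(n_food):
        cand = neighbor(i)
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1
    # onlooker bees: prefer better sources (roulette selection)
    probs = fit.max() - fit + 1e-12
    probs /= probs.sum()
    for i in rng.choice(n_food, n_food, p=probs):
        cand = neighbor(i)
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
    # scout bees: abandon stagnant sources
    for i in np.flatnonzero(trials > limit):
        foods[i] = rng.uniform(lo, hi, dim)
        fit[i], trials[i] = f(foods[i]), 0

print("best value:", fit.min())
```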

14 pages, 492 KiB  
Article
Enhanced In-Network Caching for Deep Learning in Edge Networks
by Jiaqi Zhang, Wenjing Liu, Li Zhang and Jie Tian
Electronics 2024, 13(23), 4632; https://doi.org/10.3390/electronics13234632 - 24 Nov 2024
Viewed by 485
Abstract
With the deep integration of communication technology and Internet of Things technology, the edge network structure is becoming increasingly dense and heterogeneous. At the same time, in the edge network environment, characteristics such as wide-area differentiated services, decentralized deployment of computing and network resources, and highly dynamic network environment lead to the deployment of redundant or insufficient edge cache nodes, which restricts the efficiency of network service caching and resource allocation. In response to the above problems, research on the joint optimization of service caching and resources in the decentralized edge network scenario is carried out. Therefore, we have conducted research on the collaborative caching of training data among multiple edge nodes and optimized the number of collaborative caching nodes. Firstly, we use a multi-queue model to model the collaborative caching process. This model can be used to simulate the in-network cache replacement process on collaborative caching nodes. In this way, we can describe the data flow and storage changes during the caching process more clearly. Secondly, considering the limitation of storage space of edge nodes and the demand for training data within a training epoch, we propose a stochastic gradient descent algorithm to obtain the optimal number of caching nodes. This algorithm entirely takes into account the resource constraints in practical applications and provides an effective way to optimize the number of caching nodes. Finally, the simulation results clearly show that the optimized number of caching nodes can significantly improve the adequacy rate and hit rate of the training data, with the adequacy rate reaching 84% and the hit rate reaching 100%. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
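
A rough sketch of choosing the number of collaborative caching nodes by gradient descent on a surrogate cost. The cost model (a miss penalty that shrinks with total capacity plus a per-node cost) is an assumption for illustration, not the paper's multi-queue model.

```python
# Hedged sketch: gradient descent on an illustrative caching-node cost.
import numpy as np

demand = 500.0            # training-data volume needed per epoch (illustrative units)
per_node = 80.0           # cache capacity contributed by each node
node_cost = 1.0           # relative cost of operating one caching node

def cost(n):
    capacity = n * per_node
    miss = np.maximum(demand - capacity, 0.0) ** 2 / demand   # penalty for insufficient cache
    return miss + node_cost * n

n = 1.0                   # relax the node count to a real number for the descent
lr = 0.05
for _ in range(200):
    grad = (cost(n + 1e-3) - cost(n - 1e-3)) / 2e-3           # numerical gradient
    n = max(1.0, n - lr * grad)

print("suggested caching nodes:", round(n), "cost:", cost(round(n)))
```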

20 pages, 8132 KiB  
Article
A Blockchain and Zero Knowledge Proof Based Data Security Transaction Method in Distributed Computing
by Bowei Zhang, Heng Pan, Kunyang Li, Ying Xing, Jiaxiang Wang, Dongdong Fan and Wenjie Zhang
Electronics 2024, 13(21), 4260; https://doi.org/10.3390/electronics13214260 - 30 Oct 2024
Viewed by 1817
Abstract
In distributed computing, data trading mechanisms are essential for ensuring the sharing of data across multiple computing nodes. Nevertheless, they currently encounter considerable obstacles, including low accuracy in matching trading parties, ensuring fairness in transactions, and safeguarding data privacy throughout the trading process. In order to address these issues, we put forward a data trading security scheme based on zero-knowledge proofs and smart contracts. In the phase of preparing the security parameters, the objective is to reduce the complexity of generating non-interactive zero-knowledge proofs and to enhance the efficiency of data trading. In the pre-trading phase, we devise attribute atomic matching smart contracts based on precise data property alignment, with the objective of achieving fine-grained matching of data attributes between trading parties. In the trading execution phase, lightweight cryptographic algorithms based on elliptic curve cryptography (ECC) and non-interactive zero-knowledge proofs are employed for the dual encryption of trading data and the generation of attribute proof contracts, thus ensuring the security and privacy of the data. The results of experiments conducted on the Ethereum platform in an industrial IoT scenario demonstrate that our scheme maintains stable and low-cost consumption while ensuring accuracy in matching and privacy protection. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
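
The pre-trading attribute-matching step can be pictured with a small sketch: a buyer's requirements are checked attribute by attribute against a seller's advertised data properties, and the listing is bound to a hash commitment. The zero-knowledge proofs, ECC encryption, and smart contracts themselves are out of scope here; all attribute names are illustrative.

```python
# Hedged sketch: fine-grained attribute matching plus a hash commitment.
import hashlib, json

seller_attrs = {"domain": "industrial-iot", "freq_hz": 50, "anonymized": True, "rows": 120_000}
buyer_req    = {"domain": "industrial-iot", "freq_hz": (">=", 10), "anonymized": True}

def satisfies(value, requirement):
    if isinstance(requirement, tuple):          # ("op", threshold) style constraint
        op, threshold = requirement
        if op == ">=":
            return value >= threshold
        if op == "<=":
            return value <= threshold
        return False
    return value == requirement                 # exact match otherwise

def atomic_match(attrs, req):
    """All required attributes must be satisfied (fine-grained, per-attribute check)."""
    return all(k in attrs and satisfies(attrs[k], v) for k, v in req.items())

def commitment(attrs):
    """Hash commitment to the advertised attributes (stand-in for on-chain binding)."""
    return hashlib.sha256(json.dumps(attrs, sort_keys=True).encode()).hexdigest()

print(atomic_match(seller_attrs, buyer_req), commitment(seller_attrs)[:16])
```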

22 pages, 4062 KiB  
Article
A Distributed Non-Intrusive Load Monitoring Method Using Karhunen–Loeve Feature Extraction and an Improved Deep Dictionary
by Siqi Liu, Zhiyuan Xie and Zhengwei Hu
Electronics 2024, 13(19), 3970; https://doi.org/10.3390/electronics13193970 - 9 Oct 2024
Viewed by 954
Abstract
In recent years, the non-invasive load monitoring (NILM) method based on sparse coding has shown promising research prospects. This type of method learns a sparse dictionary for each monitoring target device, and it expresses load decomposition as a problem of signal reconstruction using dictionaries and sparse vectors. The existing NILM methods based on sparse coding have problems such as inability to be applied to multi-state and time-varying devices, single-load characteristics, and poor recognition ability for similar devices in distributed manners. Using the analysis above, this paper focuses on devices with similar features in households and proposes a distributed non-invasive load monitoring method using Karhunen–Loeve (KL) feature extraction and an improved deep dictionary. Firstly, Karhunen–Loeve expansion (KLE) is used to perform subspace expansion on the power waveform of the target device, and a new load feature is extracted by combining singular value decomposition (SVD) dimensionality reduction. Afterwards, the states of all the target devices are modeled as super states, and an improved deep dictionary based on the distance separability measure function (DSM-DDL) is learned for each super state. Among them, the state transition probability matrix and observation probability matrix in the hidden Markov model (HMM) are introduced as the basis for selecting the dictionary order during load decomposition. The KL feature matrix of power observation values and improved depth dictionary are used to discriminate the current super state based on the minimum reconstruction error criterion. The test results based on the UK-DALE dataset show that the KL feature matrix can effectively reduce the load similarity of devices. Combined with DSM-DDL, KL has a certain information acquisition ability and acceptable computational complexity, which can effectively improve the load decomposition accuracy of similar devices, quickly and accurately estimating the working status and power demand of household appliances. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
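
A compact sketch of Karhunen–Loeve feature extraction with SVD truncation on synthetic power windows: centered windows are projected onto the leading singular vectors to obtain a low-dimensional load feature. The signal, window length, and component count are placeholders, not UK-DALE data or the paper's DSM-DDL stage.

```python
# Hedged sketch: KL expansion via SVD on toy appliance power windows.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(256)
# toy appliance power windows: a noisy square-ish cycle with random phase
windows = np.stack([np.where((t + rng.integers(256)) % 64 < 32, 100.0, 5.0)
                    + rng.normal(0, 2, t.size) for _ in range(40)])

mean = windows.mean(axis=0)
centered = windows - mean                         # KL expansion is defined on the centered process

# SVD of the centered data gives the KL basis (right singular vectors).
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 4                                             # keep the k dominant components
features = centered @ Vt[:k].T                    # low-dimensional KL feature per window

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(features.shape, f"variance captured: {explained:.2%}")
```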

24 pages, 693 KiB  
Article
Traffic-Aware Intelligent Association and Task Offloading for Multi-Access Edge Computing
by Avilia Kusumaputeri Nugroho and Taewoon Kim
Electronics 2024, 13(16), 3130; https://doi.org/10.3390/electronics13163130 - 7 Aug 2024
Viewed by 1300
Abstract
Edge computing is a promising technology, especially for offloading users’ computationally heavy tasks. The close proximity of edge computing units to users minimizes network latency, thereby enabling delay-sensitive applications. Although optimal resource provisioning and task offloading in edge computing are widely studied in the literature, there are still some critical research gaps. In this study, we propose a traffic-aware optimal association and task-offloading approach. The proposed method does not rely solely on the average rate of offloading requests, which can differ from actual values in real time. Instead, it uses an intelligent, high-precision prediction model to forecast future offloading requests, allowing resource provisioning to be based on future sequences of requests rather than average values. Additionally, we propose an optimization-based approach that can meet task deadlines, which is crucial for mission-critical applications. Finally, the proposed approach distributes the computing load over multiple time steps, ensuring future resource scheduling and task-offloading decisions can be made with a certain level of flexibility. The proposed approach is extensively evaluated under various scenarios and configurations to validate its effectiveness. As a result, the proposed deep learning model has resulted in a request prediction error of 0.0338 (RMSE). In addition, compared to the greedy approach, the proposed approach has reduced the use of local and cloud computing from 0.02 and 18.26 to 0.00 and 0.62, respectively, while increasing edge computing usage from 1.31 to 16.98, which can effectively prolong the lifetime of user devices and reduce network latency. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
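
The deadline-aware flavour of the approach can be sketched as follows, with a simple moving-average forecaster standing in for the deep request-prediction model and invented latency and capacity figures.

```python
# Hedged sketch: forecast offloading requests, then split them across tiers
# so that the deadline is still met. All numbers are illustrative.
import numpy as np

history = np.array([12, 15, 14, 18, 22, 21, 25])    # past offloading requests per slot

def forecast(hist, window=3):
    """Stand-in predictor: average of the last `window` slots."""
    return float(np.mean(hist[-window:]))

predicted = forecast(history)

edge_capacity = 20          # tasks the edge server can absorb next slot
edge_latency, cloud_latency, local_latency = 10.0, 60.0, 30.0   # ms, illustrative

def plan_offloading(n_tasks, deadline_ms):
    """Fill the edge first; push the rest to whichever tier still meets the deadline."""
    n_tasks = int(round(n_tasks))
    edge = min(n_tasks, edge_capacity)
    rest = n_tasks - edge
    if cloud_latency <= deadline_ms:
        return {"edge": edge, "cloud": rest, "local": 0}
    if local_latency <= deadline_ms:
        return {"edge": edge, "cloud": 0, "local": rest}
    return {"edge": edge, "cloud": 0, "local": 0, "dropped": rest}

print("predicted requests:", predicted, "->", plan_offloading(predicted, deadline_ms=50))
```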

12 pages, 371 KiB  
Article
A Hardware Trojan Diagnosis Method for Gate-Level Netlists Based on Graph Theory
by Hongxu Gao, Guangxi Zhai, Zeyu Li, Jia Zhou, Xiang Li and Quan Wang
Electronics 2024, 13(12), 2400; https://doi.org/10.3390/electronics13122400 - 19 Jun 2024
Viewed by 830
Abstract
With the increasing complexity of integrated circuit design, the threat of a hardware Trojan (HT) is becoming more and more prominent. At present, research mainly focuses on the detection of HTs, and research on the diagnosis of HTs remains scarce. Existing HT diagnosis methods generally work by detecting the HT nodes in the netlist; because they do not consider the integrity of the HT, their diagnosis accuracy is low. For this reason, this paper proposes two implanted node search algorithms named layer-by-layer difference search (LDS) and layer-by-layer grouping difference search (LGDS). The LDS algorithm can greatly reduce the search time, and the LGDS algorithm can solve the problem of input node disorder. The two methods greatly reduce the number of node sorting and comparison operations, and therefore have lower time complexity. Moreover, the relevance between implanted nodes is taken into account to improve the diagnosis rate. We conducted HT diagnosis experiments on HT implantation examples from Trust-Hub. The experimental results are as follows: (1) The average true positive rate (TPR) of the diagnosis using KNN, RF, or SVM with the LDS or LGDS algorithm is more than 93%, and the average true negative rate (TNR) is 100%. (2) The average proportion of implanted nodes obtained by the LDS or LGDS algorithm is more than 97%. The proposed method has a lower time complexity compared with other existing diagnosis methods, and the diagnosis time is shortened by nearly 75%. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
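
A simplified illustration of diffing netlists layer by layer: gates are grouped by logic depth from the primary inputs, and each layer of a suspect netlist is compared against the golden one to flag candidate implanted nodes. The toy netlists are invented, and the paper's LDS/LGDS optimizations are not reproduced.

```python
# Hedged sketch: layer gates by depth and diff a suspect netlist against a golden one.
from collections import defaultdict, deque

def layers(netlist, primary_inputs):
    """netlist: gate -> list of fan-in nodes. Returns a mapping depth -> set of nodes."""
    fanout = defaultdict(list)
    for gate, fanin in netlist.items():
        for src in fanin:
            fanout[src].append(gate)
    depth = {n: 0 for n in primary_inputs}
    q = deque(primary_inputs)
    while q:
        n = q.popleft()
        for g in fanout[n]:
            if g not in depth or depth[n] + 1 > depth[g]:
                depth[g] = depth[n] + 1
                q.append(g)
    byl = defaultdict(set)
    for n, d in depth.items():
        byl[d].add(n)
    return byl

golden  = {"g1": ["a", "b"], "g2": ["g1", "c"]}
suspect = {"g1": ["a", "b"], "g2": ["g1", "c"], "t1": ["g1"], "t2": ["t1", "c"]}  # t1/t2 implanted

gl, sl = layers(golden, ["a", "b", "c"]), layers(suspect, ["a", "b", "c"])
for d in sorted(sl):
    extra = sl[d] - gl.get(d, set())
    if extra:
        print(f"layer {d}: candidate implanted nodes {sorted(extra)}")
```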

15 pages, 2500 KiB  
Article
GLRM: Geometric Layout-Based Resource Management Method on Multiple Field Programmable Gate Array Systems
by Hongxu Gao, Zeyu Li, Lirong Zhou, Xiang Li and Quan Wang
Electronics 2024, 13(10), 1821; https://doi.org/10.3390/electronics13101821 - 8 May 2024
Viewed by 880
Abstract
Multiple field programmable gate array (Multi-FPGA) systems are capable of forming larger and more powerful computing units through high-speed interconnections between chips and are beginning to be widely used by various computing service providers. However, the new computing architecture brings new challenges to the system’s task resource management. Existing resource management methods do not fully exploit resources in Multi-FPGA systems, and it is difficult to support fast resource request and release. In this regard, we propose a geometric layout-based resource management (GLRM) method for Multi-FPGA systems. First, a geometric layout-based task combination algorithm (TCA) was proposed to ensure that the final system can use the available FPGA resources more efficiently. Then, we optimised two resource management algorithms using TCA. Compared with state-of-the-art resource management methods, TCA increases resource flexibility by an average of 6% and resource utilisation by an average of 7%, and the two optimised resource management methods are effective in improving resource management performance. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
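
The geometric intuition can be shown with a shelf-packing sketch that places rectangular task footprints on a reconfigurable region and reports utilisation. The region and task sizes are made up, and the paper's TCA combination rules and the two optimised managers are not implemented.

```python
# Hedged sketch: shelf-pack rectangular task footprints onto an FPGA region.
FPGA_W, FPGA_H = 100, 80                    # reconfigurable region (columns x rows)
tasks = [(40, 30), (35, 25), (20, 30), (60, 20), (30, 25)]   # (width, height) per task

def shelf_pack(tasks, W, H):
    placements, shelf_y, shelf_h, x = [], 0, 0, 0
    for i, (w, h) in enumerate(sorted(tasks, key=lambda t: -t[1])):   # tallest first
        if x + w > W:                       # start a new shelf when the row is full
            shelf_y += shelf_h
            x, shelf_h = 0, 0
        if shelf_y + h > H:
            break                           # region exhausted; remaining tasks wait
        placements.append((i, x, shelf_y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

placed = shelf_pack(tasks, FPGA_W, FPGA_H)
used = sum(w * h for _, _, _, w, h in placed)
print(f"placed {len(placed)}/{len(tasks)} tasks, utilisation {used / (FPGA_W * FPGA_H):.0%}")
```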

18 pages, 500 KiB  
Article
Lightweight Privacy Protection via Adversarial Sample
by Guangxu Xie, Gaopan Hou, Qingqi Pei and Haibo Huang
Electronics 2024, 13(7), 1230; https://doi.org/10.3390/electronics13071230 - 26 Mar 2024
Cited by 21 | Viewed by 1082
Abstract
Adversarial sample-based privacy protection has its own advantages compared to traditional privacy protections. Previous adversarial sample privacy protections have mostly been centralized or have not considered the issue of hardware device limitations when conducting privacy protection, especially on the user’s local device. This work attempts to reduce the requirements of adversarial sample privacy protections on devices, making the privacy protection more locally friendly. Adversarial sample-based privacy protections rely on deep learning models, which generally have a large number of parameters, posing challenges for deployment. Fortunately, the model structural pruning technique has been proposed, which can be employed to reduce the parameter count of deep learning models. Based on the model pruning technique Depgraph and existing adversarial sample privacy protections AttriGuard and MemGuard, we design two structural pruning-based adversarial sample privacy protections, in which the user obtains the perturbed data through the pruned deep learning model. Extensive experiments are conducted on four datasets, and the results demonstrate the effectiveness of our adversarial sample privacy protection based on structural pruning. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
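
A toy-scale sketch of the two ingredients combined: magnitude pruning of a small model (standing in for DepGraph structural pruning) and an FGSM-style perturbation computed on the pruned model (standing in for the adversarial-sample privacy mechanism). Everything here, including the logistic "attribute classifier", is illustrative rather than the paper's AttriGuard/MemGuard pipelines.

```python
# Hedged sketch: prune a tiny model, then perturb the input against it.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w = rng.normal(size=d)                      # weights of a toy logistic "attribute classifier"
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prune(weights, ratio=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * ratio)
    thresh = np.sort(np.abs(weights))[k]
    return np.where(np.abs(weights) < thresh, 0.0, weights)

w_pruned = prune(w, ratio=0.5)              # smaller model to run on the user's device

x = rng.normal(size=d)                      # user's feature vector
y_true = 1.0                                # attribute the user wants to hide

# FGSM-style perturbation: move x along the sign of the loss gradient so the
# pruned classifier becomes less confident about the true attribute.
p = sigmoid(w_pruned @ x + b)
grad_x = (p - y_true) * w_pruned            # gradient of cross-entropy loss w.r.t. x
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print("confidence before:", round(float(p), 3),
      "after:", round(float(sigmoid(w_pruned @ x_adv + b)), 3))
```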

15 pages, 567 KiB  
Article
Oversea Cross-Lingual Summarization Service in Multilanguage Pre-Trained Model through Knowledge Distillation
by Xiwei Yang, Jing Yun, Bofei Zheng, Limin Liu and Qi Ban
Electronics 2023, 12(24), 5001; https://doi.org/10.3390/electronics12245001 - 14 Dec 2023
Cited by 2 | Viewed by 1062
Abstract
Cross-lingual text summarization is a highly desired service for overseas report editing tasks and is formulated in a distributed application to facilitate the cooperation of editors. The multilanguage pre-trained language model (MPLM) can generate high-quality cross-lingual text summaries with simple fine-tuning. However, the MPLM does not adapt to complex variations, like the word order and tense in different languages. When the model performs on these languages with separate syntactic structures and vocabulary morphologies, it will lead to the low-level quality of the cross-lingual summary. The matter worsens when the cross-lingual summarization datasets are low-resource. We use a knowledge distillation framework for the cross-lingual summarization task to address the above issues. By learning the monolingual teacher model, the cross-lingual student model can effectively capture the differences between languages. Since the teacher and student models generate summaries in two languages, their representations lie on different vector spaces. In order to construct representation relationships across languages, we further propose a similarity metric, which is based on bidirectional semantic alignment, to map different language representations to the same space. In order to improve the quality of cross-lingual summaries further, we use contrastive learning to make the student model focus on the differentials among languages. Contrastive learning can enhance the ability of the similarity metric for bidirectional semantic alignment. Our experiments show that our approach is competitive in low-resource scenarios on cross-language summarization datasets in pairs of distant languages. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
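
One way to picture the bidirectional semantic alignment metric is the sketch below: each side's sentence embeddings are matched to their best counterparts on the other side and the two directions are averaged. The embeddings are random placeholders, and the shared-space projection and contrastive training from the paper are not shown.

```python
# Hedged sketch: bidirectional alignment score between two embedding sets.
import numpy as np

rng = np.random.default_rng(0)
teacher = rng.normal(size=(6, 64))    # monolingual teacher summary representations
student = rng.normal(size=(6, 64))    # cross-lingual student representations

def l2norm(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def bidirectional_alignment(a, b):
    sim = l2norm(a) @ l2norm(b).T                     # cosine similarity matrix
    forward = sim.max(axis=1).mean()                  # each teacher vector -> best student match
    backward = sim.max(axis=0).mean()                 # each student vector -> best teacher match
    return 0.5 * (forward + backward)

print("alignment score:", round(float(bidirectional_alignment(teacher, student)), 3))
```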

20 pages, 631 KiB  
Article
Throughput Optimization for Blockchain System with Dynamic Sharding
by Chuyi Liu, Jianxiong Wan, Leixiao Li and Bingbing Yao
Electronics 2023, 12(24), 4915; https://doi.org/10.3390/electronics12244915 - 6 Dec 2023
Cited by 3 | Viewed by 2125
Abstract
Sharding technology, which divides a network into multiple disjoint groups so that transactions can be processed in parallel, is applied to blockchain systems as a promising solution to improve Transactions Per Second (TPS). This paper considers the Optimal Blockchain Sharding (OBCS) problem as a Markov Decision Process (MDP) where the decision variables are the number of shards, block size and block interval. Previous works solved the OBCS problem via Deep Reinforcement Learning (DRL)-based methods, where the action space must be discretized to increase processability. However, the discretization degrades the quality of the solution since the optimal solution usually lies between discrete values. In this paper, we treat the block size and block interval as continuous decision variables and provide dynamic sharding strategies based on them. The Branching Dueling Q-Network Blockchain Sharding (BDQBS) algorithm is designed for discrete action spaces. Compared with traditional DRL algorithms, the BDQBS overcomes the drawbacks of high action space dimensions and difficulty in training neural networks. And it improves the performance of the blockchain system by 1.25 times. We also propose a sharding control algorithm based on the Parameterized Deep Q-Networks (P-DQN) algorithm, i.e., the Parameterized Deep Q-Networks Blockchain Sharding (P-DQNBS) algorithm, to efficiently handle the discrete–continuous hybrid action space without the scalability issues. Also, the method can effectively improve the TPS by up to 28%. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
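
A hedged sketch of the quantity being optimised and of a hybrid discrete-continuous action: the shard count is discrete while block size and block interval are continuous. The TPS formula and cross-shard discount are illustrative modelling assumptions, not the paper's exact MDP reward.

```python
# Hedged sketch: throughput of a sharded chain as a function of the decision variables.
from dataclasses import dataclass

@dataclass
class ShardingAction:
    n_shards: int            # discrete decision variable
    block_size_mb: float     # continuous decision variable
    block_interval_s: float  # continuous decision variable

def throughput_tps(a: ShardingAction, tx_size_kb: float = 0.5, cross_shard_ratio: float = 0.1):
    txs_per_block = (a.block_size_mb * 1024) / tx_size_kb
    raw = a.n_shards * txs_per_block / a.block_interval_s
    # cross-shard transactions need extra coordination, so discount them
    return raw * (1.0 - cross_shard_ratio * (a.n_shards - 1) / a.n_shards)

for act in (ShardingAction(4, 2.0, 10.0), ShardingAction(8, 2.0, 10.0), ShardingAction(8, 1.0, 5.0)):
    print(act, "->", round(throughput_tps(act), 1), "TPS")
```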

17 pages, 14394 KiB  
Article
An Efficient Path Planning Method for the Unmanned Aerial Vehicle in Highway Inspection Scenarios
by Yuanlong Li, Shang Gao, Xuewen Liu, Peiliang Zuo and Haoliang Li
Electronics 2023, 12(20), 4200; https://doi.org/10.3390/electronics12204200 - 10 Oct 2023
Cited by 2 | Viewed by 1801
Abstract
Unmanned aerial vehicles (UAVs) have received widespread attention due to their flexible deployment characteristics. Automated airports equipped with UAVs are expected to become important equipment for improving quality and reducing costs in many inspection scenarios. This paper focuses on the automated inspection business of UAVs dispatched by automated airports in highway scenarios. On the basis of considering the shape of highway curves, inspection targets, and the energy consumption characteristics of UAVs, planning the flight parameters of UAVs is of great significance for ensuring the effectiveness of the inspection process. This paper first sets the inspection path points for the UAV based on highway curves, and then proposes an efficient heuristic method for the nonlinear non-convex parameter optimization problem, through which the parameters of the UAV’s inspection altitude, hovering altitude, and flight speed are planned. Simulation and analysis show that the proposed method possesses good parameter planning efficiency. By combining several existing trajectory planning methods, e.g., the traversal method, the deep Q-network based method, and the genetic method, it can be concluded that the proposed method in this paper has better overall planning performance including planning efficiency and inspection effectiveness. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
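
The parameter-planning step can be pictured as a sweep over candidate speeds and altitudes that keeps the pair minimising a toy energy-per-kilometre model while still covering the roadway. The energy model, camera field of view, and constants are assumptions for illustration, not the paper's heuristic.

```python
# Hedged sketch: brute-force sweep of flight speed and inspection altitude.
import numpy as np

road_width_m = 30.0
fov_deg = 60.0                                    # camera field of view (assumed)

def coverage_width(altitude_m):
    return 2.0 * altitude_m * np.tan(np.radians(fov_deg / 2.0))

def energy_per_km(speed_mps, altitude_m):
    hover_like = 400.0 / max(speed_mps, 1e-6)     # slower flight -> more time aloft per km
    drag_like = 0.6 * speed_mps ** 2              # faster flight -> more aerodynamic cost
    climb_like = 0.8 * altitude_m                 # higher altitude -> more climb energy amortised
    return hover_like + drag_like + climb_like

best = None
for v in np.arange(4.0, 20.1, 0.5):              # candidate speeds (m/s)
    for h in np.arange(20.0, 120.1, 5.0):        # candidate altitudes (m)
        if coverage_width(h) < road_width_m:     # must see the whole roadway
            continue
        e = energy_per_km(v, h)
        if best is None or e < best[0]:
            best = (e, v, h)

print(f"energy {best[0]:.1f} (arb. units/km) at speed {best[1]} m/s, altitude {best[2]} m")
```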

18 pages, 2491 KiB  
Article
Access Control Strategy for the Internet of Vehicles Based on Blockchain and Edge Computing
by Leixiao Li, Jianxiong Wan and Chuyi Liu
Electronics 2023, 12(19), 4057; https://doi.org/10.3390/electronics12194057 - 27 Sep 2023
Cited by 3 | Viewed by 1221
Abstract
Data stored in the Internet of Vehicles (IoV) face problems with ease of tampering, easy disclosure and single access control. Based on this problem, we propose an access control scheme for the IoV based on blockchain, trust values and weighted attribute-based encryption, called the Blockchain Trust and Weighted Attribute-Based Access Control Strategy (BTWACS). First, we utilize both local and global blockchains to jointly maintain the generation, verification and storage of blocks, achieving distributed data storage and ensuring that data cannot arbitrarily be tampered with. Local blockchain mainly uses Road Side Unit (RSU) technology to calculate trust values, while global blockchain is mainly responsible for data storage and access policy selection. Secondly, we design a blockchain-based trust evaluation scheme called Blockchain-Based Trust Evaluation (BBTE). In this evaluation scheme, the trust value of the vehicle node is based on four factors: initial trust, historical experience trust, recommendation trust and RSU observation trust. CRITIC is used to determine the optimal weights of four factors to obtain the trust value. Then, we use the Network Simulator version 3 (NS3) to verify the security and accuracy of BBTE, improving the recognition accuracy and detection rate of malicious vehicle nodes. Finally, by mining the association relationships between attribute permissions among various roles, we construct a hierarchical access control strategy based on weight and trust, and further optimize the access strategy through pruning techniques. The experiment results indicate that this scheme can effectively respond to gray hole attacks, defamation attacks and collusion attacks from other vehicle nodes. This method can effectively reduce the computing and transmission costs of vehicles and meet the access requirements of multiple entities and roles in the IoV. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
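
The CRITIC weighting used to combine the four trust factors can be sketched directly; only the weighting scheme (contrast intensity times conflict between criteria) follows the standard CRITIC definition, while the sample trust matrix below is made up.

```python
# Hedged sketch: CRITIC weights for four trust factors, then a weighted trust score.
import numpy as np

# rows: vehicle nodes, columns: initial / historical / recommendation / RSU-observed trust
T = np.array([[0.80, 0.65, 0.70, 0.75],
              [0.40, 0.55, 0.35, 0.45],
              [0.90, 0.85, 0.80, 0.88],
              [0.20, 0.30, 0.40, 0.25],
              [0.60, 0.70, 0.65, 0.58]])

# normalise each criterion to [0, 1]
norm = (T - T.min(axis=0)) / (T.max(axis=0) - T.min(axis=0))

sigma = norm.std(axis=0, ddof=1)                  # contrast intensity of each criterion
corr = np.corrcoef(norm, rowvar=False)            # correlation between criteria
conflict = (1.0 - corr).sum(axis=0)               # disagreement with the other criteria
C = sigma * conflict                              # information carried by each criterion
weights = C / C.sum()

trust_values = T @ weights                        # weighted trust score per vehicle
print("CRITIC weights:", np.round(weights, 3))
print("trust values:  ", np.round(trust_values, 3))
```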

27 pages, 2981 KiB  
Article
Analyzing Miners’ Dynamic Equilibrium in Blockchain Networks under DDoS Attacks
by Xiao Liu, Zhao Huang, Quan Wang, Xiaohong Jiang, Yin Chen and Bo Wan
Electronics 2023, 12(18), 3903; https://doi.org/10.3390/electronics12183903 - 15 Sep 2023
Cited by 2 | Viewed by 1418
Abstract
Proof of work (PoW) is one of the most widely used consensus algorithms in blockchain networks. It mainly uses the competition between mining nodes to obtain block rewards. However, this competition for computational power will allow malicious nodes to obtain illegal profits, bringing potential security threats to blockchain systems. A distributed denial of service (DDoS) attack is a major threat to the PoW algorithm. It utilizes multiple nodes in the blockchain network to attack honest miners to obtain illegal rewards. To solve this problem, academia has proposed a DDoS attack detection mechanism based on reinforcement learning methods and static game modeling methods based on mining pools. However, these methods cannot effectively make miners choose the strategy with the best profit over time when facing DDoS attacks. Therefore, this paper proposes a dynamic evolutionary game model for miners facing DDoS attacks under blockchain networks to solve the above problems for the first time. We solve the model using the replicator dynamic equation to obtain a stable solution. According to the Lyapunov theorem, we also obtain the unique stable strategy for miners facing DDoS attacks. The experimental results show that compared with the static method, the dynamic method can affect game playing and game evolution over time. Moreover, miners’ strategy to face DDoS attacks gradually shifts from honest mining to launching DDoS attacks against each other as the blockchain network improves. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
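
A minimal numerical sketch of a two-strategy replicator dynamic (honest mining versus launching a DDoS attack), integrated with Euler steps. The payoff parameters are placeholders, not the paper's calibrated model; with these numbers the population drifts toward attacking, echoing the trend described above.

```python
# Hedged sketch: replicator dynamics for the honest-vs-attack mining game.
import numpy as np

R = 10.0          # block reward for honest mining
C = 3.0           # cost of launching a DDoS attack
D = 6.0           # reward lost by an honest miner who gets attacked

def payoffs(x):
    """x = fraction of miners that mine honestly."""
    honest = R - (1.0 - x) * D            # honest miners suffer when attackers are common
    attack = R - C + x * D                # attackers gain from the honest population
    return honest, attack

x, dt = 0.9, 0.01
trajectory = [x]
for _ in range(2000):
    u_h, u_a = payoffs(x)
    avg = x * u_h + (1.0 - x) * u_a
    x += dt * x * (u_h - avg)             # replicator equation: dx/dt = x (u_h - u_bar)
    x = float(np.clip(x, 0.0, 1.0))
    trajectory.append(x)

print("long-run share of honest miners:", round(trajectory[-1], 3))
```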

18 pages, 598 KiB  
Article
Hierarchical Decentralized Federated Learning Framework with Adaptive Clustering: Bloom-Filter-Based Companions Choice for Learning Non-IID Data in IoV
by Siyuan Liu, Zhiqiang Liu, Zhiwei Xu, Wenjing Liu and Jie Tian
Electronics 2023, 12(18), 3811; https://doi.org/10.3390/electronics12183811 - 8 Sep 2023
Cited by 8 | Viewed by 1882
Abstract
The accelerating progress of the Internet of Vehicles (IoV) has put forward a higher demand for distributed model training and data sharing in vehicular networks. Traditional centralized approaches are no longer applicable in the face of drivers’ concerns about data privacy, while Decentralized Federated Learning (DFL) provides new possibilities to address this issue. However, DFL still faces challenges regarding the non-IID data of passing vehicles. To tackle this challenge, a novel DFL framework, Hierarchical Decentralized Federated Learning (H-DFL), is proposed to achieve qualified distributed training among vehicles by considering data complementarity. We include vehicles, base stations, and data center servers in this framework. Firstly, a novel vehicle-clustering paradigm is designed to group passing vehicles based on the Bloom-filter-based compact representation of data complementarity. In this way, vehicles train their models based on local data, exchange model parameters in each group, and achieve a qualified local model without the interference of imbalanced data. On a higher level, a local model trained by each group is submitted to the data center to obtain a model covering global features. Base stations maintain the local models of different groups and judge whether the local models need to be updated according to the global model. The experimental results based on real-world data demonstrate that H-DFL not only reduces communication latency with different participants but also addresses the challenges of non-IID data in vehicles. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
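
The Bloom-filter-based companion choice can be illustrated with a small sketch: each vehicle summarises its local data in a Bloom filter, and complementarity is estimated from the bit overlap. The filter size, hashing, and greedy pairing rule are illustrative; the full H-DFL hierarchy is not reproduced.

```python
# Hedged sketch: Bloom-filter summaries and a bit-overlap complementarity score.
import hashlib

M, K = 256, 3                                  # bits per filter, hash functions per item

def bloom(items):
    bits = 0
    for it in items:
        for k in range(K):
            h = int(hashlib.sha256(f"{k}:{it}".encode()).hexdigest(), 16) % M
            bits |= 1 << h
    return bits

def complementarity(a, b):
    """Fraction of set bits the two filters do NOT share (higher = more complementary)."""
    union = bin(a | b).count("1")
    inter = bin(a & b).count("1")
    return 1.0 - inter / union if union else 0.0

vehicle_data = {
    "car1": ["cat", "dog", "truck"],
    "car2": ["truck", "bus"],
    "car3": ["pedestrian", "bicycle"],
}
filters = {v: bloom(d) for v, d in vehicle_data.items()}

# pair each vehicle with its most complementary peer (greedy illustration)
for v in vehicle_data:
    peer = max((u for u in vehicle_data if u != v),
               key=lambda u: complementarity(filters[v], filters[u]))
    print(v, "-> best companion:", peer)
```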
