Search Results (1)

Search Parameters: Keywords = incremental federated meta-learning

22 pages, 1213 KiB  
Article
ICMFed: An Incremental and Cost-Efficient Mechanism of Federated Meta-Learning for Driver Distraction Detection
by Zihan Guo, Linlin You, Sheng Liu, Junshu He and Bingran Zuo
Mathematics 2023, 11(8), 1867; https://doi.org/10.3390/math11081867 - 14 Apr 2023
Cited by 7 | Viewed by 2210
Abstract
Driver distraction detection (3D) is essential in improving the efficiency and safety of transportation systems. Considering the requirements for user privacy and the phenomenon of data growth in real-world scenarios, existing methods are insufficient to address four emerging challenges, i.e., data accumulation, communication optimization, data heterogeneity, and device heterogeneity. This paper presents an incremental and cost-efficient mechanism based on federated meta-learning, called ICMFed, to support the tasks of 3D by addressing the four challenges. In particular, it designs a temporal factor associated with local training batches to stabilize the local model training, introduces gradient filters for each model layer to optimize the client–server interaction, implements a normalized weight vector to enhance the global model aggregation process, and supports rapid personalization for each user by adapting the learned global meta-model. According to the evaluation made on the standard dataset, ICMFed outperforms three baselines in training two common models (i.e., DenseNet and EfficientNet), with average accuracy improved by about 141.42%, training time saved by about 54.80%, communication cost reduced by about 54.94%, and service quality improved by about 96.86%.
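To make the four mechanisms named in the abstract concrete, below is a minimal sketch of one federated meta-learning round. It is an illustration only, not the paper's implementation: the function names, the exponential-decay temporal factor, the norm-based filtering rule, the score-based weighting, and the MAML-style adaptation loop are all assumptions, since the abstract does not give ICMFed's exact formulas.

```python
import numpy as np

# Illustrative sketch only: every rule below is an assumption standing in
# for the corresponding mechanism described in the ICMFed abstract.

def temporal_factor(batch_index, gamma=0.9):
    # Temporal factor tied to local training batches: an assumed
    # exponential decay that down-weights earlier batches to stabilize
    # local model training.
    return gamma ** batch_index

def filter_gradients(layer_updates, threshold=1e-3):
    # Layer-wise gradient filter: transmit only layers whose update has a
    # non-negligible L2 norm, reducing client-server communication cost.
    return {name: g for name, g in layer_updates.items()
            if np.linalg.norm(g) >= threshold}

def aggregate(client_updates, client_scores):
    # Global aggregation with a normalized weight vector: per-client
    # scores are scaled to sum to 1, then updates are averaged with
    # those weights.
    scores = np.asarray(client_scores, dtype=float)
    weights = scores / scores.sum()
    agg = {}
    for w, update in zip(weights, client_updates):
        for name, g in update.items():
            agg[name] = agg.get(name, 0.0) + w * g
    return agg

def personalize(meta_params, grad_fn, steps=1, lr=0.01):
    # Rapid personalization: a few gradient steps on a user's local data
    # adapt the learned global meta-model (a MAML-style inner loop).
    # grad_fn maps current parameters to their gradients on a local batch.
    params = dict(meta_params)
    for _ in range(steps):
        grads = grad_fn(params)
        for name, g in grads.items():
            params[name] = params[name] - lr * g
    return params
```

In this reading, each client scales its batch losses by the temporal factor, filters its per-layer updates before upload, the server aggregates with the normalized weight vector, and each user finally adapts the broadcast meta-model locally via a few personalization steps.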