Search Results (405)

Search Parameters:
Keywords = ubiquitous learning

37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The literature therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
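
As a concrete illustration of the reinforcement-learning control loop this review surveys, the sketch below trains a tabular Q-learning agent on a toy one-zone thermostat. The dynamics, comfort band, and reward weights are invented for illustration and are not drawn from any reviewed study.

```python
# Toy sketch (not from the review): tabular Q-learning for a one-zone
# thermostat. States: discretized indoor temperature; actions: heat / off.
import random

TEMPS = range(15, 31)            # discretized indoor temperature, deg C
ACTIONS = [0, 1]                 # 0 = HVAC off, 1 = heat
COMFORT = (20, 24)               # assumed comfort band
Q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}

def step(temp, action):
    """Assumed toy dynamics: heating warms the room, otherwise it cools."""
    nxt = min(max(temp + (1 if action else -1), 15), 30)
    comfort_pen = 0.0 if COMFORT[0] <= nxt <= COMFORT[1] else -1.0
    energy_pen = -0.3 * action   # energy cost of heating
    return nxt, comfort_pen + energy_pen

alpha, gamma, eps = 0.1, 0.95, 0.1
temp = 18
for _ in range(20000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(temp, x)])
    nxt, r = step(temp, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(temp, a)] += alpha * (r + gamma * best_next - Q[(temp, a)])
    temp = nxt

policy = {t: max(ACTIONS, key=lambda a: Q[(t, a)]) for t in TEMPS}
print(policy)  # learned on/off decision per temperature bin
```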

18 pages, 591 KiB  
Article
Active Learning for Medical Article Classification with Bag of Words and Bag of Concepts Embeddings
by Radosław Pytlak, Paweł Cichosz, Bartłomiej Fajdek and Bogdan Jastrzębski
Appl. Sci. 2025, 15(14), 7955; https://doi.org/10.3390/app15147955 - 17 Jul 2025
Abstract
Systems supporting systematic literature reviews often use machine learning algorithms to create classification models to assess the relevance of articles to study topics. The proper choice of text representation for such algorithms may have a significant impact on their predictive performance. This article presents an in-depth investigation of the utility of the bag of concepts representation for this purpose, which can be considered an enhanced form of the ubiquitous bag of words representation, with features corresponding to ontology concepts rather than words. Its utility is evaluated in the active learning setting, in which a sequence of classification models is created, with training data iteratively expanded by adding articles selected for human screening. Different versions of the bag of concepts are compared with bag of words, as well as with combined representations, including both word-based and concept-based features. The evaluation uses the support vector machine, naive Bayes, and random forest algorithms and is performed on datasets from 15 systematic medical literature review studies. The results show that concept-based features may have additional predictive value in comparison to standard word-based features and that the combined bag of concepts and bag of words representation is the most useful overall. Full article
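
For readers unfamiliar with the active learning setting described here, the sketch below runs a pool-based uncertainty-sampling loop over a bag-of-words (TF-IDF) representation with a linear SVM; the corpus, labels, and seed set are synthetic stand-ins for the systematic-review datasets.

```python
# Minimal pool-based active learning sketch: at each round, the article
# closest to the SVM decision boundary is sent for "human" screening.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

docs = ["randomized trial of drug a", "drug a cohort outcomes",
        "surgical technique report", "imaging protocol study",
        "drug a adverse events", "unrelated animal model work"] * 10
labels = np.array([1, 1, 0, 0, 1, 0] * 10)    # 1 = relevant to the review

X = TfidfVectorizer().fit_transform(docs)     # bag-of-words features
labeled = [0, 2]                              # tiny seed set, one per class
pool = [i for i in range(len(docs)) if i not in labeled]

for _ in range(5):                            # five screening rounds
    clf = SVC(kernel="linear").fit(X[labeled], labels[labeled])
    margin = np.abs(clf.decision_function(X[pool]))
    query = pool[int(np.argmin(margin))]      # most uncertain article
    labeled.append(query)                     # human screens it; label revealed
    pool.remove(query)

print(f"labeled after active learning: {len(labeled)} articles")
```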

25 pages, 732 KiB  
Article
Accuracy-Aware MLLM Task Offloading and Resource Allocation in UAV-Assisted Satellite Edge Computing
by Huabing Yan, Hualong Huang, Zijia Zhao, Zhi Wang and Zitian Zhao
Drones 2025, 9(7), 500; https://doi.org/10.3390/drones9070500 - 16 Jul 2025
Abstract
This paper presents a novel framework for optimizing multimodal large language model (MLLM) inference through task offloading and resource allocation in UAV-assisted satellite edge computing (SEC) networks. MLLMs leverage transformer architectures to integrate heterogeneous data modalities for IoT applications, particularly real-time monitoring in remote areas. However, cloud computing dependency introduces latency, bandwidth, and privacy challenges, while IoT device limitations require efficient distributed computing solutions. SEC, utilizing low-earth orbit (LEO) satellites and unmanned aerial vehicles (UAVs), extends mobile edge computing to provide ubiquitous computational resources for remote IoT devices. We formulate the joint optimization of MLLM task offloading and resource allocation as a mixed-integer nonlinear programming (MINLP) problem, minimizing latency and energy consumption while optimizing offloading decisions, power allocation, and UAV trajectories. To address the dynamic SEC environment characterized by satellite mobility, we propose an action-decoupled soft actor–critic (AD-SAC) algorithm with discrete–continuous hybrid action spaces. The simulation results demonstrate that our approach significantly outperforms conventional deep reinforcement learning baselines in both convergence and system cost reduction. Full article
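
A minimal sketch of the decoupled hybrid action structure behind an agent like AD-SAC: one head samples the discrete offloading target while another samples continuous transmit power. The linear "policy heads", dimensions, and target set are illustrative assumptions, not the paper's architecture.

```python
# Hybrid discrete-continuous action sketch: the two heads share one state
# but are sampled independently, which is what "action-decoupled" refers to.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_TARGETS = 8, 3        # assumed targets: local, UAV, LEO satellite
W_disc = rng.normal(size=(STATE_DIM, N_TARGETS))   # discrete policy head
W_cont = rng.normal(size=(STATE_DIM, 2))           # mean / log-std head

def act(state):
    logits = state @ W_disc
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    target = rng.choice(N_TARGETS, p=probs)        # discrete: where to offload
    mu, log_std = state @ W_cont
    raw = rng.normal(mu, np.exp(log_std))
    power = np.tanh(raw) * 0.5 + 0.5               # continuous: power in [0, 1]
    return target, power

print(act(rng.normal(size=STATE_DIM)))
```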

19 pages, 1635 KiB  
Article
Integrating AI-Driven Wearable Metaverse Technologies into Ubiquitous Blended Learning: A Framework Based on Embodied Interaction and Multi-Agent Collaboration
by Jiaqi Xu, Xuesong Zhai, Nian-Shing Chen, Usman Ghani, Andreja Istenic and Junyi Xin
Educ. Sci. 2025, 15(7), 900; https://doi.org/10.3390/educsci15070900 - 15 Jul 2025
Abstract
Ubiquitous blended learning, leveraging mobile devices, has democratized education by enabling autonomous and readily accessible knowledge acquisition. However, its reliance on traditional interfaces often limits learner immersion and meaningful interaction. The emergence of the wearable metaverse offers a compelling solution, promising enhanced multisensory experiences and adaptable learning environments that transcend the constraints of conventional ubiquitous learning. This research proposes a novel framework for ubiquitous blended learning in the wearable metaverse, aiming to address critical challenges, such as multi-source data fusion, effective human–computer collaboration, and efficient rendering on resource-constrained wearable devices, through the integration of embodied interaction and multi-agent collaboration. This framework leverages a real-time multi-modal data analysis architecture, powered by the MobileNetV4 and xLSTM neural networks, to facilitate the dynamic understanding of the learner’s context and environment. Furthermore, we introduced a multi-agent interaction model, utilizing CrewAI and spatio-temporal graph neural networks, to orchestrate collaborative learning experiences and provide personalized guidance. Finally, we incorporated lightweight SLAM algorithms, augmented using visual perception techniques, to enable accurate spatial awareness and seamless navigation within the metaverse environment. This innovative framework aims to create immersive, scalable, and cost-effective learning spaces within the wearable metaverse. Full article
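
A framework-agnostic sketch of the multi-source fusion step in such an architecture: embeddings from a vision backbone and a wearable-sensor sequence model are concatenated before a context classifier. All shapes, weights, and the random "embeddings" are placeholders standing in for MobileNetV4 and xLSTM outputs; none of this is the paper's implementation.

```python
# Late-fusion sketch: concatenate modality embeddings, then classify the
# learner's context with a linear layer (weights random for illustration).
import numpy as np

rng = np.random.default_rng(1)
vision_emb = rng.normal(size=(1, 256))    # stand-in for MobileNetV4 features
sensor_emb = rng.normal(size=(1, 128))    # stand-in for xLSTM sequence state

fused = np.concatenate([vision_emb, sensor_emb], axis=1)   # shape (1, 384)
W, b = rng.normal(size=(384, 4)), np.zeros(4)              # 4 assumed contexts
logits = fused @ W + b
print("predicted learner context id:", int(np.argmax(logits)))
```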

15 pages, 3425 KiB  
Article
Designing Cross-Domain Sustainability Instruction in Higher Education: A Mixed-Methods Study Using AHP and Transformative Pedagogy
by Wan-Ting Xie, Shang-Tse Ho and Han-Chien Lin
Sustainability 2025, 17(14), 6380; https://doi.org/10.3390/su17146380 - 11 Jul 2025
Abstract
This study proposes an interdisciplinary instructional model tailored for Functional Ecological Carbon (FEC) education, combining Electronic, Mobilize, and Ubiquitous (E/M/U) learning principles with the Practical Transformational Teaching Method (PTtM). The research adopts a mixed-methods framework, utilizing the Analytic Hierarchy Process (AHP) to prioritize teaching objectives and interpret student evaluations, alongside qualitative insights from reflective journals, open-ended surveys, and focus group discussions. The results indicate that hands-on experience, interdisciplinary collaboration, and context-aware applications play a critical role in fostering ecological awareness and responsibility among students. Notably, modules such as biosafety testing and water purification prompted transformative engagement with sustainability issues. The study contributes to sustainability education by integrating a decision-analytic structure with reflective learning and intelligent instructional strategies. The proposed model provides valuable implications for educators and policymakers designing interdisciplinary sustainability curricula in smart learning environments. Full article
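
Since AHP is central to the design, the sketch below shows the standard computation: priority weights from the principal eigenvector of a pairwise-comparison matrix, plus Saaty's consistency ratio. The 3×3 matrix is an invented example, not the study's survey data.

```python
# Standard AHP: A[i, j] = how much more important objective i is than j.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # priority weights, sum to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("weights:", w.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 acceptable
```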

21 pages, 2469 KiB  
Article
Robust Low-Overlap Point Cloud Registration via Displacement-Corrected Geometric Consistency for Enhanced 3D Sensing
by Xin Wang and Qingguang Li
Sensors 2025, 25(14), 4332; https://doi.org/10.3390/s25144332 - 11 Jul 2025
Abstract
Accurate alignment of 3D point clouds captured by ubiquitous sensors such as LiDAR and depth cameras is critical for enhancing perception capabilities in robotics, autonomous navigation, and environmental reconstruction. However, low-overlap scenarios—common due to limited sensor field-of-view or occlusions—severely degrade registration robustness and sensing reliability. To address this challenge, this paper proposes a novel geometric consistency optimization and rectification deep learning network named GeoCORNet. By synergistically designing a geometric consistency enhancement module, a bidirectional cross-attention mechanism, a predictive displacement rectification strategy, and joint optimization of overlap loss with displacement loss, GeoCORNet significantly improves registration accuracy and robustness in complex scenarios. The Attentive Cross-Consistency module of GeoCORNet integrates distance and angular consistency constraints with bidirectional cross-attention to significantly suppress noise from non-overlapping regions while reinforcing geometric coherence in overlapping areas. The predictive displacement rectification strategy dynamically rectifies erroneous correspondences through predicted 3D displacements instead of discarding them, maximizing the utility of sparse sensor data. Furthermore, a novel displacement loss function is developed to effectively constrain the geometric distribution of corrected point-pairs. Experimental results demonstrate that our method outperforms existing approaches in registration recall, rotation error, and robustness under low-overlap conditions. These advances establish a new paradigm for robust 3D sensing in real-world applications where partial sensor data is prevalent. Full article
(This article belongs to the Section Sensing and Imaging)
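
For context, the sketch below shows the classical SVD-based (Kabsch) step that turns point correspondences into a rigid transform; GeoCORNet's contribution lies in producing and rectifying the correspondences, which this snippet simply assumes are given and clean.

```python
# Kabsch alignment: best-fit rotation R and translation t with q ~ p @ R.T + t.
import numpy as np

rng = np.random.default_rng(2)
src = rng.normal(size=(100, 3))                      # source points
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])    # rotated + translated copy

def kabsch(p, q):
    pc, qc = p - p.mean(0), q - q.mean(0)            # center both clouds
    U, _, Vt = np.linalg.svd(pc.T @ qc)              # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = q.mean(0) - p.mean(0) @ R.T
    return R, t

R, t = kabsch(src, dst)
print("rotation error:", np.abs(R - R_true).max())   # ~0 on clean data
```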

19 pages, 2783 KiB  
Article
Cross-Project Multiclass Classification of EARS-Based Functional Requirements Utilizing Natural Language Processing, Machine Learning, and Deep Learning
by Touseef Tahir, Hamid Jahankhani, Kinza Tasleem and Bilal Hassan
Systems 2025, 13(7), 567; https://doi.org/10.3390/systems13070567 - 10 Jul 2025
Abstract
Software requirements are primarily classified into functional and non-functional requirements. While research has explored automated multiclass classification of non-functional requirements, functional requirements remain largely unexplored. This study addresses that gap by introducing a comprehensive dataset comprising 9529 functional requirements from 315 diverse projects. The requirements are classified into five categories: ubiquitous, event-driven, state-driven, unwanted behavior, and optional capabilities. Natural Language Processing (NLP), machine learning (ML), and deep learning (DL) techniques are employed to enable automated classification. All requirements underwent several preprocessing steps, including normalization and feature extraction techniques such as TF-IDF. A series of ML and DL experiments was conducted to classify subcategories of functional requirements. Among the trained models, the convolutional neural network achieved the highest performance, with an accuracy of 93%, followed by the long short-term memory network with an accuracy of 92%, outperforming traditional decision-tree-based methods. This work offers a foundation for precise requirement classification tools by providing both the dataset and an automated classification approach. Full article
(This article belongs to the Special Issue Decision Making in Software Project Management)
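
A minimal sketch of the TF-IDF-plus-classifier pipeline described above, with a few invented EARS-style requirements standing in for the 9529-item dataset and a linear SVM standing in for the paper's CNN.

```python
# Five EARS categories, each signalled by a characteristic keyword pattern
# ("When...", "While...", "If...then...", "Where...").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reqs = [
    "The system shall log every transaction.",                      # ubiquitous
    "When the user clicks save, the system shall store the file.",  # event-driven
    "While in standby mode, the system shall dim the display.",     # state-driven
    "If the sensor fails, then the system shall raise an alarm.",   # unwanted behavior
    "Where GPS is fitted, the system shall record location.",       # optional
] * 4
labels = ["ubiquitous", "event", "state", "unwanted", "optional"] * 4

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(reqs, labels)
print(clf.predict(["When the battery is low, the system shall notify the user."]))
```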

16 pages, 1037 KiB  
Article
Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes
by Xinhai Li, Chenxu Meng, Heng Zhou, Yi Guo, Bowen Xue, Tianzuo Yu and Yunan Lu
Electronics 2025, 14(13), 2736; https://doi.org/10.3390/electronics14132736 - 7 Jul 2025
Abstract
Label Distribution Learning (LDL) has emerged as a powerful paradigm for addressing label ambiguity, offering a more nuanced quantification of the instance–label relationship compared to traditional single-label and multi-label learning approaches. This paper focuses on the challenge of noisy label distributions, which is ubiquitous in real-world applications due to the annotator subjectivity, algorithmic biases, and experimental errors. Existing related LDL algorithms often assume a linear combination of true and random label distributions when modeling the noisy label distributions, an oversimplification that fails to capture the practical generation processes of noisy label distributions. Therefore, this paper introduces an assumption that the noise in label distributions primarily arises from the semantic confusion between labels and proposes a novel generative label distribution learning algorithm to model the confusion-based generation process of both the feature data and the noisy label distribution data. The proposed model is inferred using variational methods and its effectiveness is demonstrated through extensive experiments across various real-world datasets, showcasing its superiority in handling noisy label distributions. Full article
(This article belongs to the Special Issue Neural Networks: From Software to Hardware)
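
The semantic-confusion assumption can be made concrete in a few lines: a noisy label distribution arises by pushing the true distribution through a row-stochastic confusion matrix. The values below are invented for illustration.

```python
# Noisy label distribution = true distribution pushed through a
# semantic-confusion matrix (rows: true labels, columns: observed labels).
import numpy as np

true_dist = np.array([0.7, 0.2, 0.1])      # true label distribution, sums to 1

# C[i, j]: fraction of description mass on label i reported as label j;
# labels 0 and 1 are semantically close, so they leak into each other.
C = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.7, 0.0],
              [0.0, 0.1, 0.9]])

noisy_dist = true_dist @ C                  # observed distribution, still sums to 1
print(noisy_dist)                           # [0.62 0.29 0.09]
```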

32 pages, 2945 KiB  
Article
SelfLoc: Robust Self-Supervised Indoor Localization with IEEE 802.11az Wi-Fi for Smart Environments
by Hamada Rizk and Ahmed Elmogy
Electronics 2025, 14(13), 2675; https://doi.org/10.3390/electronics14132675 - 2 Jul 2025
Abstract
Accurate and scalable indoor localization is a key enabler of intelligent automation in smart environments and industrial systems. In this paper, we present SelfLoc, a self-supervised indoor localization system that combines IEEE 802.11az Round Trip Time (RTT) and Received Signal Strength Indicator (RSSI) data to achieve fine-grained positioning using commodity Wi-Fi infrastructure. Unlike conventional methods that depend heavily on labeled data, SelfLoc adopts a contrastive learning framework to extract spatially discriminative and temporally consistent representations from unlabeled wireless measurements. The system integrates a dual-contrastive strategy: temporal contrasting captures sequential signal dynamics essential for tracking mobile agents, while contextual contrasting promotes spatial separability by ensuring that signal representations from distinct locations remain well-differentiated, even under similar signal conditions or environmental symmetry. To this end, we design signal-specific augmentation techniques for the physical properties of RTT and RSSI, enabling the model to generalize across environments. SelfLoc also adapts effectively to new deployment scenarios with minimal labeled data, making it suitable for dynamic and collaborative industrial applications. We validate the effectiveness of SelfLoc through experiments conducted in two realistic indoor testbeds using commercial Android devices and seven Wi-Fi access points. The results demonstrate that SelfLoc achieves high localization precision, with a median error of only 0.55 m, and surpasses state-of-the-art baselines by at least 63.3% with limited supervision. These findings affirm the potential of SelfLoc to support spatial intelligence and collaborative automation, aligning with the goals of Industry 4.0 and Society 5.0, where seamless human–machine interactions and intelligent infrastructure are key enablers of next-generation smart environments. Full article
(This article belongs to the Special Issue Collaborative Intelligent Automation System for Smart Industry)
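
A sketch of the kind of contrastive objective underlying such dual-contrastive training: an NT-Xent/InfoNCE loss in which two augmented views of the same RTT/RSSI measurement are positives and all other batch samples are negatives. The random embeddings stand in for the encoder output; this is not claimed to be SelfLoc's exact loss.

```python
# NT-Xent over a batch: z1[i] and z2[i] are two augmented views of sample i.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)          # a sample is not its own negative
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(3)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print("NT-Xent loss:", round(nt_xent(z1, z2), 3))
```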

18 pages, 9529 KiB  
Article
Adaptive Temporal Action Localization in Video
by Zhiyu Xu, Zhuqiang Lu, Yong Ding, Liwei Tian and Suping Liu
Electronics 2025, 14(13), 2645; https://doi.org/10.3390/electronics14132645 - 30 Jun 2025
Abstract
Temporal action localization aims to identify the boundaries of the action of interest in a video. Most existing methods take a two-stage approach: first, identify a set of action proposals; then, based on this set, determine the accurate temporal locations of the action of interest. However, the diversely distributed semantics of a video over time have not been well considered, which can compromise localization performance, especially for ubiquitous short actions or events (e.g., a fall in healthcare and a traffic violation in surveillance). To address this problem, we propose a novel deep learning architecture, namely an adaptive template-guided self-attention network, to characterize the proposals adaptively with their relevant frames. An input video is segmented into temporal frames, within which the spatio-temporal patterns are formulated by a global–local Transformer-based encoder. Each frame serves as the starting frame for a number of proposals of different lengths. Learnable templates for proposals of different lengths are introduced, and each template guides the sampling for proposals with a specific length: it formulates the probabilities with which a proposal assembles its representation of spatio-temporal patterns from its relevant temporal frames. Therefore, the semantics of a proposal can be formulated in an adaptive manner, and a feature map of all proposals can be appropriately characterized. To estimate the IoU of these proposals with ground truth actions, a two-level scheme is introduced. A shortcut connection is also utilized to refine the predictions by using the convolutions of the feature map from coarse to fine. Comprehensive experiments on two benchmark datasets demonstrate the state-of-the-art performance of our proposed method: 32.6% mAP@IoU 0.7 on THUMOS-14 and 9.35% mAP@IoU 0.95 on ActivityNet-1.3. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Image and Video Processing)
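
The IoU estimation step rests on the standard temporal IoU between proposals and ground-truth segments, sketched below with invented intervals.

```python
# Temporal IoU: intersection over union of 1D time intervals, vectorized.
import numpy as np

def temporal_iou(proposals, gt):
    """proposals: (N, 2) start/end; gt: (M, 2). Returns (N, M) IoU matrix."""
    p_start, p_end = proposals[:, 0, None], proposals[:, 1, None]
    g_start, g_end = gt[None, :, 0], gt[None, :, 1]
    inter = np.clip(np.minimum(p_end, g_end) - np.maximum(p_start, g_start), 0, None)
    union = (p_end - p_start) + (g_end - g_start) - inter
    return inter / union

proposals = np.array([[1.0, 4.0], [2.0, 3.0], [10.0, 12.0]])   # seconds
gt = np.array([[2.0, 4.5]])
print(temporal_iou(proposals, gt))   # short proposal [2, 3] scores 1/2.5 = 0.4
```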

21 pages, 1761 KiB  
Article
Protecting IoT Networks Through AI-Based Solutions and Fractional Tchebichef Moments
by Islam S. Fathi, Hanin Ardah, Gaber Hassan and Mohammed Aly
Fractal Fract. 2025, 9(7), 427; https://doi.org/10.3390/fractalfract9070427 - 29 Jun 2025
Abstract
Advancements in Internet of Things (IoT) technologies have had a profound impact on interconnected devices, leading to exponentially growing networks of billions of intelligent devices. However, this growth has exposed Internet of Things (IoT) systems to cybersecurity vulnerabilities. These vulnerabilities are primarily caused by the inherent limitations of these devices, such as finite battery resources and the requirement for ubiquitous connectivity. The rapid evolution of deep learning (DL) technologies has led to their widespread use in critical application domains, thereby highlighting the need to integrate DL methodologies to improve IoT security systems beyond the basic secure communication protocols. This is essential for creating intelligent security frameworks that can effectively address the increasingly complex cybersecurity threats faced by IoT networks. This study proposes a hybrid methodology that combines fractional discrete Tchebichef moment analysis with deep learning for the prevention of IoT attacks. The effectiveness of our proposed technique for detecting IoT threats was evaluated using the UNSW-NB15 and Bot-IoT datasets, featuring illustrative cases of common IoT attack scenarios, such as DDoS, identity spoofing, network reconnaissance, and unauthorized data access. The empirical results validate the superior classification capabilities of the proposed methodology in IoT cybersecurity threat assessments compared with existing solutions. This study leveraged the synergistic integration of discrete Tchebichef moments and deep convolutional networks to facilitate comprehensive attack detection and prevention in IoT ecosystems. Full article
(This article belongs to the Section Optimization, Big Data, and AI/ML)
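
To illustrate the moment idea, the sketch below builds a discrete orthogonal polynomial basis on a uniform grid (Gram–Schmidt over monomials, which yields the discrete Tchebichef/Gram family up to normalization) and projects a signal onto it. The paper's pipeline feeds such moment features, in fractional form, to a deep network; this snippet covers only the classical moment computation.

```python
# Discrete orthogonal moments: project a signal onto an orthonormal
# polynomial basis over grid points 0..N-1 and reconstruct from it.
import numpy as np

def discrete_orthopoly_basis(N, order):
    V = np.vander(np.arange(N, dtype=float), order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)          # orthonormal columns over the grid
    return Q                        # Q[:, n] ~ degree-n discrete polynomial

N = 64
t = np.arange(N)
signal = np.sin(2 * np.pi * t / N) + 0.1 * np.random.default_rng(4).normal(size=N)

B = discrete_orthopoly_basis(N, order=8)
moments = B.T @ signal              # low-order moment features
recon = B @ moments                 # smooth reconstruction from 9 moments
print("compression error:", round(float(np.linalg.norm(signal - recon)), 3))
```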

32 pages, 5675 KiB  
Article
Reducing Label Dependency in Human Activity Recognition with Wearables: From Supervised Learning to Novel Weakly Self-Supervised Approaches
by Taoran Sheng and Manfred Huber
Sensors 2025, 25(13), 4032; https://doi.org/10.3390/s25134032 - 28 Jun 2025
Abstract
Human activity recognition (HAR) using wearable sensors has advanced through various machine learning paradigms, each with inherent trade-offs between performance and labeling requirements. While fully supervised techniques achieve high accuracy, they demand extensive labeled datasets that are costly to obtain. Conversely, unsupervised methods eliminate labeling needs but often deliver suboptimal performance. This paper presents a comprehensive investigation across the supervision spectrum for wearable-based HAR, with particular focus on novel approaches that minimize labeling requirements while maintaining competitive accuracy. We develop and empirically compare: (1) traditional fully supervised learning, (2) basic unsupervised learning, (3) a weakly supervised learning approach with constraints, (4) a multi-task learning approach with knowledge sharing, (5) a self-supervised approach based on domain expertise, and (6) a novel weakly self-supervised learning framework that leverages domain knowledge and minimal labeled data. Experiments across benchmark datasets demonstrate that: (i) our weakly supervised methods achieve performance comparable to fully supervised approaches while significantly reducing supervision requirements; (ii) the proposed multi-task framework enhances performance through knowledge sharing between related tasks; (iii) our weakly self-supervised approach demonstrates remarkable efficiency with just 10% of labeled data. These results not only highlight the complementary strengths of different learning paradigms, offering insights into tailoring HAR solutions based on the availability of labeled data, but also establish that our novel weakly self-supervised framework offers a promising solution for practical HAR applications where labeled data are limited. Full article
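
One common way to realize a self-supervised component on wearable data is a transformation-recognition pretext task, sketched below on synthetic windows: the model learns from unlabeled signals by predicting which transformation was applied. A real pipeline would train a deep encoder on this pretext and then fine-tune it with the small labeled fraction; the transformations and linear probe here are illustrative assumptions, not the paper's method.

```python
# Pretext task sketch: pretext label = which transformation was applied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
windows = rng.normal(size=(200, 50))              # unlabeled 50-sample windows

def transform(w, kind):
    if kind == 0: return w                        # identity
    if kind == 1: return w[::-1]                  # time reversal
    return w * rng.uniform(0.5, 2.0)              # random amplitude scaling

X, y = [], []
for w in windows:
    k = int(rng.integers(0, 3))
    X.append(transform(w, k)); y.append(k)        # self-generated labels, no humans

probe = LogisticRegression(max_iter=1000).fit(np.array(X), y)
print("pretext training accuracy:", round(probe.score(np.array(X), y), 2))
```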

21 pages, 3197 KiB  
Review
Deploying AI on Edge: Advancement and Challenges in Edge Intelligence
by Tianyu Wang, Jinyang Guo, Bowen Zhang, Ge Yang and Dong Li
Mathematics 2025, 13(11), 1878; https://doi.org/10.3390/math13111878 - 4 Jun 2025
Abstract
In recent years, artificial intelligence (AI) has achieved significant progress and remarkable advancements across various disciplines, including biology, computer science, and industry. However, the increasing complexity of AI network structures and the vast number of associated parameters impose substantial computational and storage demands, severely limiting the practical deployment of these models on resource-constrained edge devices. Although edge intelligence methods have been proposed to alleviate the computational and storage burdens, they still face multiple persistent challenges, such as large-scale model deployment, poor interpretability, privacy and security vulnerabilities, and energy efficiency constraints. This article systematically reviews the current advancements in edge intelligence technologies, highlights key enabling techniques including model sparsity, quantization, knowledge distillation, neural architecture search, and federated learning, and explores their applications in industrial, automotive, healthcare, and consumer domains. Furthermore, this paper presents a comparative analysis of these techniques, summarizes major trade-offs, and proposes decision frameworks to guide deployment strategies under different scenarios. Finally, it discusses future research directions to address the remaining technical bottlenecks and promote the practical and sustainable development of edge intelligence. Standing at the threshold of an exciting new era, we believe edge intelligence will play an increasingly critical role in transforming industries and enabling ubiquitous intelligent services. Full article
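
As a concrete instance of one enabling technique named above, the sketch below applies symmetric post-training int8 quantization to a layer's weights; the weight tensor and per-tensor scale scheme are illustrative.

```python
# Post-training quantization: map float32 weights to int8 with one scale,
# trading a small accuracy loss for a 4x smaller, faster edge deployment.
import numpy as np

rng = np.random.default_rng(6)
w = rng.normal(scale=0.2, size=(256, 256)).astype(np.float32)  # a layer's weights

scale = np.abs(w).max() / 127.0            # symmetric per-tensor scale
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale  # dequantized to measure error

err = np.abs(w - w_deq).max()
print(f"4x smaller, max abs error: {err:.5f} (scale={scale:.5f})")
```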

21 pages, 4114 KiB  
Article
Noise Impact Analysis of School Environments Based on the Deployment of IoT Sensor Nodes
by Georgios Dimitriou and Fotios Gioulekas
Signals 2025, 6(2), 27; https://doi.org/10.3390/signals6020027 - 3 Jun 2025
Abstract
This work presents an on-field noise analysis during class breaks in Greek school units (a high school and a senior high school) based on the design and deployment of low-cost IoT sensor nodes and IoT platforms. Class breaks make up 20% of a regular school day and are typically marked by intense movement and high noise levels. Indoor noise levels, along with environmental conditions, have been measured through a wireless network that comprises IoT nodes integrating humidity, temperature, and acoustic level sensors. PM10 and PM2.5 values have also been acquired through sensors located near the school complex. In contrast to similar studies, the selected school buildings had recently been renovated to minimize their energy footprint and CO2 emissions. The data are collected, shipped, and stored in a time-series database in cloud facilities, where an IoT platform has been developed for processing and analysis. The findings show that low-cost sensors can efficiently monitor noise levels after proper adjustments. Additionally, the statistical evaluation of the received sensor measurements indicates that ubiquitous high noise levels during class breaks potentially affect teachers' leisure time, despite the thermal insulation of the facilities. Within this context, we show that the proposed IoT sensor network can serve as a tool for monitoring school infrastructure and prompting improvements to building facilities. Several guidelines for further mitigating noise and achieving high acoustic quality in learning institutions are also described. Full article
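
For reference, noise nodes typically summarize an interval with the equivalent continuous sound level (Leq), which averages acoustic energy rather than decibels; the sketch below uses invented per-second dB(A) readings in place of the sensor stream.

```python
# Leq: convert dB readings to relative power, average, convert back.
import numpy as np

readings_db = np.array([62, 65, 71, 80, 77, 68, 64, 83, 75, 70], dtype=float)

leq = 10 * np.log10(np.mean(10 ** (readings_db / 10)))
print(f"Leq over the interval: {leq:.1f} dB(A)")   # dominated by the loud peaks
```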

19 pages, 3119 KiB  
Article
Distress-Based Pavement Condition Assessment Using Artificial Intelligence: A Case Study of Egyptian Roads
by Mostafa M. Radwan, Sundus A. Faris, Ahmed Y. Barakat and Ahmad Mousa
Eng 2025, 6(6), 114; https://doi.org/10.3390/eng6060114 - 28 May 2025
Abstract
Pavement is a complex structure subject to a range of environmental and loading conditions. Transportation organizations use pavement management systems (PMSs) to maintain satisfactory pavement performance. The pavement condition index (PCI) is a commonly used performance indicator, yet PCI evaluation is costly and time-consuming. Machine and deep learning algorithms have recently become instrumental in forecasting pavement conditions. This research uses AI tools to develop a correlation between PCI and collected distress data in urban road networks. The distresses for 15,000 pavement segments in Egypt were investigated through a desk study and field data collection. To this end, several machine learning (ML) and deep learning approaches were developed. The ML techniques include random forest (RF), support vector machine (SVM), and decision tree (DT), while the deep learning approach entails artificial neural networks (ANNs). The proposed techniques provide precise PCI estimates and can be seamlessly integrated with PMSs using ubiquitous spreadsheet programs. The results show excellent predictive performance for the ANN model, as demonstrated by the high coefficient of determination (R² = 0.939), low root mean squared error (RMSE = 7.20), and low mean absolute error (MAE = 2.94). This study sets out to provide a reliable and affordable alternative to specialized tools like MicroPAVER. The ANN model exhibited greater prediction accuracy than the other developed models and can reliably forecast PCI values using only measured distress data. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
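
A hedged sketch of the distress-to-PCI regression workflow: an sklearn MLPRegressor (standing in for the paper's ANN) fit on synthetic distress features and reporting the same metrics. The feature set, generating relation, and resulting numbers are invented, not the study's data.

```python
# Distress features -> PCI regression with an ANN, evaluated by R2/RMSE/MAE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(0, 100, size=(2000, 4))   # e.g. cracking, rutting, potholes, patching
pci = np.clip(100 - 0.4 * X @ np.array([0.5, 0.3, 0.9, 0.3])
              + rng.normal(0, 3, 2000), 0, 100)    # assumed ground-truth relation

X_tr, X_te, y_tr, y_te = train_test_split(X, pci, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  "
      f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.2f}  "
      f"MAE={mean_absolute_error(y_te, pred):.2f}")
```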
