Search Results (103)

Search Parameters:
Keywords = IoT taxonomies

30 pages, 851 KB  
Review
Autoencoder-Based Self-Supervised Anomaly Detection in Wireless Sensor Networks: A Taxonomy-Driven Meta-Synthesis
by Rana Muhammad Subhan, Young-Doo Lee and Insoo Koo
Appl. Sci. 2026, 16(3), 1448; https://doi.org/10.3390/app16031448 (registering DOI) - 31 Jan 2026
Abstract
Wireless Sensor Networks (WSNs) are widely deployed for long-term monitoring in environments characterized by nonstationary sensing dynamics, intermittent connectivity and continuously evolving network topologies, while reliable, fine-grained labeled data capturing faults and adversarial behaviors remain scarce. This survey systematically reviews and synthesizes recent research that integrates autoencoder-based representation learning with self-supervised learning (SSL) objectives to enhance anomaly detection under these practical constraints. We structure the existing literature through a unified taxonomy encompassing autoencoder variants, self-supervised pretext tasks, spatio-temporal encoding mechanisms and the increasing use of graph-structured autoencoders for topology-aware modeling. Across distinct methodological categories, SSL-augmented frameworks consistently demonstrate improved robustness and stability compared to purely reconstruction-driven baselines, particularly in heterogeneous, dynamic and temporally drifting WSN environments. Nevertheless, this review also highlights several unresolved challenges that hinder real-world adoption, including uncertain scalability to large-scale networks, limited model interpretability, nontrivial energy and memory overheads on resource-constrained sensor nodes and a lack of standardized evaluation protocols and reporting practices. By consolidating publicly available datasets, experimental configurations and comparative performance trends, we derive concrete design requirements for robust and resource-aware anomaly detection in operational WSNs and outline promising future research directions, emphasizing lightweight model architectures, explainable learning mechanisms and federated AE–SSL paradigms to enable adaptive, privacy-preserving monitoring in next-generation IoT sensing systems. Full article
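The reconstruction-error principle behind the autoencoder detectors surveyed here can be sketched in a few lines: train an autoencoder on normal sensor windows only, then flag windows whose reconstruction error exceeds a threshold. The architecture, synthetic data, and 3-sigma threshold below are illustrative assumptions rather than any reviewed scheme; a production pipeline would add the self-supervised pretext tasks and drift handling the survey covers.

```python
# Minimal reconstruction-error anomaly detector for sensor windows (illustrative only).
# Data, architecture, and threshold are assumptions, not taken from the surveyed papers.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "normal" readings: 1000 windows of 16 samples from a smooth sinusoid + noise.
t = torch.linspace(0, 2 * math.pi, 16)
normal = torch.sin(t) + 0.05 * torch.randn(1000, 16)

ae = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4),
                   nn.ReLU(), nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 16))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

for _ in range(200):                              # train to reconstruct normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    errs = ((ae(normal) - normal) ** 2).mean(dim=1)
    threshold = errs.mean() + 3 * errs.std()      # simple 3-sigma threshold on training error

    faulty = normal[:5].clone()
    faulty[:, 8:] += 2.0                          # inject a step fault into half of each window
    scores = ((ae(faulty) - faulty) ** 2).mean(dim=1)
    print("anomaly flags:", (scores > threshold).tolist())
```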

36 pages, 462 KB  
Review
Trustworthiness in Resource-Constrained IoT: Review and Taxonomy of Privacy-Enhancing Technologies and Anomaly Detection
by Madalin Neagu, Codruta Maria Serban, Anca Hangan and Gheorghe Sebestyen
Telecom 2026, 7(1), 10; https://doi.org/10.3390/telecom7010010 - 16 Jan 2026
Viewed by 320
Abstract
Resource-constrained Internet of Things (IoT) devices are increasingly deployed in critical domains but remain vulnerable to stealthy attacks that can bypass conventional defenses. At the same time, privacy constraints limit centralized data collection and processing, complicating anomaly detection. This systematic review surveys methods for privacy-preserving anomaly detection in resource-constrained IoT and introduces a five-dimension taxonomy covering deployment paradigms, resource constraints, real-time requirements, protection techniques, and communication constraints. We review how the literature measures and reports resource and privacy costs and identify three major gaps: (1) a shortage of co-designed detector-plus-privacy solutions tailored to constrained hardware, (2) inconsistent reporting of resource and privacy trade-offs, and (3) limited robustness against adaptive attackers and realistic deployment noise. We conclude with actionable recommendations and a prioritized research roadmap. Furthermore, the multi-dimensional taxonomy we introduce provides a structured framework to guide design choices and systematically improve the comparability, deployability, and overall trustworthiness of anomaly detection systems for constrained IoT. Full article
(This article belongs to the Special Issue Advances in Wireless Sensor Networks and Applications)

29 pages, 1277 KB  
Review
A Survey on Acoustic Side-Channel Attacks: An Artificial Intelligence Perspective
by Benjamin Quattrone and Youakim Badr
J. Cybersecur. Priv. 2026, 6(1), 6; https://doi.org/10.3390/jcp6010006 - 29 Dec 2025
Viewed by 771
Abstract
Acoustic Side-Channel Attacks (ASCAs) exploit the sound produced by keyboards and other devices to infer sensitive information without breaching software or network defenses. Recent advances in deep learning, large language models, and signal processing have greatly expanded the feasibility and accuracy of these attacks. To clarify the evolving threat landscape, this survey systematically reviews ASCA research published between January 2020 and February 2025. We categorize modern ASCA methods into three levels of text reconstruction—individual keystrokes, short text (words/phrases), and long-text regeneration—and analyze the signal processing, machine learning, and language-model decoding techniques that enable them. We also evaluate how environmental factors such as microphone placement, ambient noise, and keyboard design influence attack performance, and we examine the challenges of generalizing laboratory-trained models to real-world settings. This survey makes three primary contributions: (1) it provides the first structured taxonomy of ASCAs based on text generation granularity and decoding methodology; (2) it synthesizes cross-study evidence on environmental and hardware factors that fundamentally shape ASCA performance; and (3) it consolidates emerging countermeasures, including Generative Adversarial Network-based noise masking, cryptographic defenses, and environmental mitigation, while identifying open research gaps and future threats posed by voice-enabled IoT and prospective quantum side-channels. Together, these insights underscore the need for interdisciplinary, multi-layered defenses against rapidly advancing ASCA techniques. Full article
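For a concrete feel of keystroke-level inference, the toy sketch below fingerprints simulated key "clicks" by their frequency spectra and matches them against enrolled templates. The key resonances, sample rate, and nearest-neighbour matcher are invented for illustration; the attacks surveyed above rely on mel-spectrogram features, CNNs, and language-model decoding, and must additionally cope with microphone placement and ambient noise.

```python
# Toy keystroke-level ASCA: classify synthetic key "clicks" by spectral fingerprint.
# All signals and key resonances are invented; real attacks use learned acoustic models.
import numpy as np

rng = np.random.default_rng(0)
FS = 16_000                                    # sample rate in Hz (assumed)
N = 1024                                       # samples per keystroke clip

def synth_keystroke(freq: float) -> np.ndarray:
    """A decaying burst at a key-specific resonance plus background noise."""
    t = np.arange(N) / FS
    return np.exp(-40 * t) * np.sin(2 * np.pi * freq * t) + 0.02 * rng.standard_normal(N)

KEYS = {"A": 900.0, "B": 1400.0, "C": 2100.0}  # pretend each key rings at a distinct frequency

def features(clip: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum as a crude acoustic fingerprint."""
    return np.log1p(np.abs(np.fft.rfft(clip)))

# "Enroll" one reference clip per key, then classify fresh recordings by nearest template.
templates = {k: features(synth_keystroke(f)) for k, f in KEYS.items()}
for true_key, freq in KEYS.items():
    probe = features(synth_keystroke(freq))
    guess = min(templates, key=lambda k: np.linalg.norm(templates[k] - probe))
    print(f"true={true_key} guessed={guess}")
```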

46 pages, 1279 KB  
Article
Privacy-Preserving Machine Learning Techniques: Cryptographic Approaches, Challenges, and Future Directions
by Elif Nur Kucur, Tolga Buyuktanir, Muharrem Ugurelli and Kazim Yildiz
Appl. Sci. 2026, 16(1), 277; https://doi.org/10.3390/app16010277 - 26 Dec 2025
Viewed by 809
Abstract
Privacy-preserving machine learning (PPML) constitutes a core element of responsible AI by supporting model training and inference without exposing sensitive information. This survey presents a comprehensive examination of the major cryptographic PPML techniques and introduces a unified taxonomy covering technical models, verification criteria, and evaluation dimensions. The study consolidates findings from both survey and experimental works using structured comparison tables and emphasizes that recent research increasingly adopts hybrid and verifiable PPML designs. In addition, we map PPML applications across domains such as healthcare, finance, Internet of Things (IoT), and edge systems, indicating that cryptographic approaches are progressively transitioning from theoretical constructs to deployable solutions. Finally, the survey outlines emerging trends—including the growth of zero-knowledge proofs (ZKPs)-based verification and domain-specific hybrid architectures—and identifies practical considerations that shape PPML adoption in real systems. Full article
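One cryptographic building block that recurs across PPML designs of this kind is secure aggregation via additive secret sharing, where two non-colluding servers jointly compute a sum of private inputs without either seeing any individual value. The two-server layout and prime field below are assumptions for a generic sketch, not a construction from this paper.

```python
# Additive secret sharing over a prime field: neither server alone learns any input,
# yet their partial sums combine to the true total. Generic MPC sketch, assumed parameters.
import secrets

P = 2**61 - 1  # prime modulus (field size chosen for illustration)

def share(x: int) -> tuple[int, int]:
    """Split x into two shares that are individually uniformly random."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

values = [12, 7, 30]                      # private per-client values (e.g., local statistics)
server0, server1 = [], []
for v in values:
    s0, s1 = share(v)
    server0.append(s0)
    server1.append(s1)

# Each server aggregates locally; only the combined result reveals the total.
total = reconstruct(sum(server0) % P, sum(server1) % P)
print(total)  # 49 == sum(values), with no single server seeing any individual input
```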

36 pages, 3105 KB  
Review
Reinforcement Learning for Industrial Automation: A Comprehensive Review of Adaptive Control and Decision-Making in Smart Factories
by Yasser M. Alginahi, Omar Sabri and Wael Said
Machines 2025, 13(12), 1140; https://doi.org/10.3390/machines13121140 - 15 Dec 2025
Viewed by 1672
Abstract
The accelerating integration of Artificial Intelligence (AI) in Industrial Automation has established Reinforcement Learning (RL) as a transformative paradigm for adaptive control, intelligent optimization, and autonomous decision-making in smart factories. Despite the growing literature, existing reviews often emphasize algorithmic performance or domain-specific applications, neglecting broader links between methodological evolution, technological maturity, and industrial readiness. To address this gap, this study presents a bibliometric review mapping the development of RL and Deep Reinforcement Learning (DRL) research in Industrial Automation and robotics. Following the PRISMA 2020 protocol to guide the data collection procedures and inclusion criteria, 672 peer-reviewed journal articles published between 2017 and 2026 were retrieved from Scopus, ensuring high-quality, interdisciplinary coverage. Quantitative bibliometric analyses were conducted in R using Bibliometrix and Biblioshiny, including co-authorship, co-citation, keyword co-occurrence, and thematic network analyses, to reveal collaboration patterns, influential works, and emerging research trends. Results indicate that 42% of studies employed DRL, 27% focused on Multi-Agent RL (MARL), and 31% relied on classical RL, with applications concentrated in robotic control (33%), process optimization (28%), and predictive maintenance (19%). However, only 22% of the studies reported real-world or pilot implementations, highlighting persistent challenges in scalability, safety validation, interpretability, and deployment readiness. By integrating a review with bibliometric mapping, this study provides a comprehensive taxonomy and a strategic roadmap linking theoretical RL research with practical industrial applications. This roadmap is structured across four critical dimensions: (1) Algorithmic Development (e.g., safe, explainable, and data-efficient RL), (2) Integration Technologies (e.g., digital twins and IoT), (3) Validation Maturity (from simulation to real-world pilots), and (4) Human-Centricity (addressing trust, collaboration, and workforce transition). These insights can guide researchers, engineers, and policymakers in developing scalable, safe, and human-centric RL solutions, prioritizing research directions, and informing the implementation of Industry 5.0–aligned intelligent automation systems emphasizing transparency, sustainability, and operational resilience. Full article

50 pages, 3678 KB  
Article
Artificial Intelligence for 5G and 6G Networks: A Taxonomy-Based Survey of Applications, Trends, and Challenges
by Nouri Omheni, Hend Koubaa and Faouzi Zarai
Technologies 2025, 13(12), 559; https://doi.org/10.3390/technologies13120559 - 1 Dec 2025
Cited by 1 | Viewed by 3756
Abstract
The mobile network ecosystem is undergoing profound change driven by Artificial Intelligence (AI), Network Function Virtualization (NFV), and Software-Defined Networking (SDN). These technologies are well positioned to enable the essential transformation of next-generation networks, delivering significant improvements in efficiency, flexibility, and sustainability. AI is expected to impact the entire lifecycle of mobile networks, including design, deployment, service implementation, and long-term management. This article reviews the key characteristics of 5G and the anticipated technology enablers of 6G, focusing on the integration of AI within mobile networks. This study addresses several perspectives, including network optimization, predictive analytics, and security enhancement. A taxonomy is proposed to classify AI applications in 5G and 6G according to their role in network operations and their impact across vertical domains such as the Internet of Things (IoT), healthcare, and transportation. Furthermore, emerging trends are discussed, including federated learning, advanced AI models, and explainable AI, along with major challenges related to data privacy, adaptability, and interoperability. This paper concludes with future research directions, emphasizing the importance of ethical AI policies and cross-sector collaborations to ensure effective and sustainable AI-enabled mobile networks. Full article
(This article belongs to the Section Information and Communication Technologies)

29 pages, 9817 KB  
Review
Multimedia Transmission over LoRa Networks for IoT Applications: A Survey of Strategies, Deployments, and Open Challenges
by Soumadeep De, Harikrishnan Muraleedharan Jalajamony, Santhosh Adhinarayanan, Santosh Joshi, Himanshu Upadhyay and Renny Fernandez
Sensors 2025, 25(23), 7128; https://doi.org/10.3390/s25237128 - 21 Nov 2025
Viewed by 1463
Abstract
LoRa has emerged as a cornerstone of low-power, long-range IoT communication. While highly effective for scalar sensing, its extension to multimedia remains constrained by limited bitrate, payload size, and duty-cycle regulations. This survey reviews research on multimedia transmission over LoRa, revealing that most current efforts are image-centric, with only a few preliminary studies addressing video or audio. We propose a structured taxonomy encompassing compression and fragmentation methods, cooperative and multi-hop architectures, MAC and cross-layer optimizations, and hybrid network designs. These strategies are analyzed in the context of IoT domains such as agriculture, surveillance, and environmental monitoring. Open challenges are highlighted in extending beyond static images, ensuring energy-efficient delivery, and developing spectrum- and ML-aware protocols. The survey provides IoT researchers with both a consolidated reference and a roadmap toward practical and scalable multimedia systems over LoRa. Full article
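The compression-and-fragmentation strategies in the taxonomy come down to shrinking a payload, splitting it across many small LoRa frames, and reassembling it at the receiver. The sketch below assumes a 51-byte application payload and a 3-byte fragment header; both numbers are illustrative rather than a specific LoRaWAN regional profile, and real deployments also need loss recovery and duty-cycle scheduling.

```python
# Fragmenting a compressed payload into LoRa-sized frames (illustrative only).
# The 51-byte budget and 3-byte header layout are assumptions, not a standard profile.
import math
import zlib

MAX_PAYLOAD = 51          # bytes available to the application at a low data rate (assumed)
HEADER = 3                # 1-byte message id, 1-byte fragment index, 1-byte fragment count
CHUNK = MAX_PAYLOAD - HEADER

def fragment(msg_id: int, data: bytes) -> list[bytes]:
    total = math.ceil(len(data) / CHUNK)
    return [bytes([msg_id, i, total]) + data[i * CHUNK:(i + 1) * CHUNK]
            for i in range(total)]

def reassemble(frames: list[bytes]) -> bytes:
    frames = sorted(frames, key=lambda f: f[1])          # order by fragment index
    assert len(frames) == frames[0][2], "missing fragments"
    return b"".join(f[HEADER:] for f in frames)

image = bytes(range(256)) * 40                           # stand-in for raw sensor imagery
compressed = zlib.compress(image, level=9)               # compress before spending air time
frames = fragment(msg_id=7, data=compressed)
print(f"{len(image)} raw bytes -> {len(compressed)} compressed bytes in {len(frames)} frames")
assert zlib.decompress(reassemble(frames)) == image
```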
(This article belongs to the Section Internet of Things)

32 pages, 2954 KB  
Review
From Traditional Machine Learning to Fine-Tuning Large Language Models: A Review for Sensors-Based Soil Moisture Forecasting
by Md Babul Islam, Antonio Guerrieri, Raffaele Gravina, Declan T. Delaney and Giancarlo Fortino
Sensors 2025, 25(22), 6903; https://doi.org/10.3390/s25226903 - 12 Nov 2025
Viewed by 1516
Abstract
Smart Agriculture (SA) combines cutting edge technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and real-time sensing systems with traditional farming practices to enhance productivity, optimize resource use, and support environmental sustainability. A key aspect of SA is the continuous monitoring of field conditions, particularly Soil Moisture (SM), which plays a crucial role in crop growth and water management. Accurate forecasting of SM allows farmers to make timely irrigation decisions, improve field management, and conserve water. To support this, recent studies have increasingly adopted soil sensors, local weather data, and AI-based data-driven models for SM forecasting. In the literature, most existing review articles lack a structured framework and often overlook recent advancements, including privacy-preserving Federated Learning (FL), Transfer Learning (TL), and the integration of Large Language Models (LLMs). To address this gap, this paper proposes a novel taxonomy for SM forecasting and presents a comprehensive review of existing approaches, including traditional machine learning, deep learning, and hybrid models. Using the PRISMA methodology, we reviewed over 189 papers and selected 68 peer-reviewed studies published between 2017 and 2025. These studies are analyzed based on sensor types, input features, AI techniques, data durations, and evaluation metrics. Six guiding research questions were developed to shape the review and inform the taxonomy. Finally, this work identifies promising research directions, such as the application of TinyML for edge deployment, explainable AI for improved transparency, and privacy-aware model training. This review aims to provide researchers and practitioners with valuable insights for building accurate, scalable, and trustworthy SM forecasting systems to advance SA. Full article
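As a rough baseline for the data-driven forecasters compared in such reviews, the sketch below predicts next-day soil moisture from the previous week of readings with ordinary least squares on synthetic data. The seasonal signal, lag length, and train/test split are assumptions; the surveyed systems replace this with LSTMs, hybrid models, or fine-tuned LLMs over real sensor and weather features.

```python
# One-step-ahead soil-moisture forecast from lagged readings via least squares.
# Synthetic data and parameters; a stand-in for the ML/DL models discussed in the review.
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(365)
# Synthetic volumetric water content: seasonal cycle plus irrigation/measurement noise.
sm = 0.25 + 0.05 * np.sin(2 * np.pi * days / 365) + 0.01 * rng.standard_normal(365)

LAGS = 7                                   # use the previous week of readings as features
X = np.stack([sm[i:i + LAGS] for i in range(len(sm) - LAGS)])
y = sm[LAGS:]

split = 300                                # train on the first 300 days, test on the rest
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

A = np.hstack([Xtr, np.ones((len(Xtr), 1))])          # least-squares fit with a bias term
w, *_ = np.linalg.lstsq(A, ytr, rcond=None)

pred = np.hstack([Xte, np.ones((len(Xte), 1))]) @ w
rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))
print(f"test RMSE: {rmse:.4f} m3/m3")
```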
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)

38 pages, 5872 KB  
Review
Faults, Failures, Reliability, and Predictive Maintenance of Grid-Connected Solar Systems: A Comprehensive Review
by Karl Kull, Bilal Asad, Muhammad Amir Khan, Muhammad Usman Naseer, Ants Kallaste and Toomas Vaimann
Appl. Sci. 2025, 15(21), 11461; https://doi.org/10.3390/app152111461 - 27 Oct 2025
Cited by 3 | Viewed by 4253
Abstract
This paper reviews recent progress in fault detection, reliability analysis, and predictive maintenance methods for grid-connected solar photovoltaic (PV) systems. With the rising adoption of solar power globally, maintaining system reliability and performance is vital for a sustainable energy supply. Common faults discussed include panel degradation, electrical issues, inverter failures, and grid disturbances, all of which affect system efficiency and safety. While traditional diagnostics like thermal imaging and V-I curve analysis offer valuable insights, they mostly detect issues reactively. New approaches using Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) enable real-time monitoring and predictive diagnostics, significantly enhancing accuracy and reliability. This study introduces a consolidated decision framework and taxonomy that systematically integrates and evaluates the fault types, symptoms, signals, diagnostics, and field-readiness across both plant types and voltage levels. Moreover, this study provides quantitative benchmarks of performance metrics, energy losses, and diagnostic accuracies with 95% confidence intervals. Adopting these advanced techniques promotes proactive management, reducing operational risks and downtime, thus reinforcing the resilience and sustainability of solar power infrastructure. Full article
(This article belongs to the Special Issue Feature Review Papers in Energy Science and Technology)

30 pages, 1166 KB  
Article
Case-Based Data Quality Management for IoT Logs: A Case Study Focusing on Detection of Data Quality Issues
by Alexander Schultheis, Yannis Bertrand, Joscha Grüger, Lukas Malburg, Ralph Bergmann and Estefanía Serral Asensio
IoT 2025, 6(4), 63; https://doi.org/10.3390/iot6040063 - 23 Oct 2025
Viewed by 961
Abstract
Smart manufacturing applications increasingly rely on time-series data from Industrial IoT sensors, yet these data streams often contain data quality issues (DQIs) that affect analysis and disrupt production. While traditional Machine Learning methods are difficult to apply due to the small amount of data available, the knowledge-based approach of Case-Based Reasoning (CBR) offers a way to reuse previously gained experience. We introduce the first end-to-end Case-Based Reasoning (CBR) framework that both detects and remedies DQIs in near real time, even when only a handful of annotated fault instances are available. Our solution encodes expert experience in the four CBR knowledge containers: (i) a vocabulary that represents sensor streams and their context in the DataStream format; (ii) a case base populated with fault-annotated event logs; (iii) tailored similarity measures—including a weighted Dynamic Time Warping variant and structure-aware list mapping—that isolate the signatures of missing-value, missing-sensor, and time-shift errors; and (iv) lightweight adaptation rules that recommend concrete repair actions or, where appropriate, invoke automated imputation and alignment routines. A case study is used to examine and present the suitability of the approach for a specific application domain. Although the case study demonstrates only limited capabilities in identifying Data Quality Issues (DQIs), we aim to support transparent evaluation and future research by publishing (1) a prototype of the Case-Based Reasoning (CBR) system and (2) a publicly accessible, meticulously annotated sensor-log benchmark. Together, these resources provide a reproducible baseline and a modular foundation for advancing similarity metrics, expanding the DQI taxonomy, and enabling knowledge-intensive reasoning in IoT data quality management. Full article
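Dynamic time warping, which underlies the framework's similarity measures, aligns two traces so that a time-shifted but otherwise similar signal still scores as close. The unweighted implementation below on synthetic traces shows only the core alignment idea; the paper's weighted DTW variant and structure-aware list mapping are not reproduced here.

```python
# Plain dynamic time warping between two sensor traces (the paper uses a weighted variant).
# Synthetic signals; a time-shifted trace stays close while a noisy one scores farther away.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cost of the cheapest monotone alignment between series a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0, 2 * np.pi, 50)
reference = np.sin(t)                     # stored case: a healthy sensor trace
shifted = np.sin(t - 0.6)                 # query with a time shift (a typical DQI symptom)
noisy = np.sin(t) + 0.8 * np.random.default_rng(0).standard_normal(50)

print("shifted vs reference:", round(dtw_distance(reference, shifted), 2))
print("noisy   vs reference:", round(dtw_distance(reference, noisy), 2))
```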

20 pages, 448 KB  
Article
Toward Scalable and Sustainable Detection Systems: A Behavioural Taxonomy and Utility-Based Framework for Security Detection in IoT and IIoT
by Ali Jaddoa, Hasanein Alharbi, Abbas Hommadi and Hussein A. Ismael
IoT 2025, 6(4), 62; https://doi.org/10.3390/iot6040062 - 21 Oct 2025
Viewed by 793
Abstract
Resource-constrained IoT and IIoT systems require detection architectures that balance accuracy with energy efficiency, scalability, and contextual awareness. This paper presents a conceptual framework informed by a systematic review of energy-aware detection systems (XDS), unifying intrusion and anomaly detection systems (IDS and ADS) within a single framework. The proposed taxonomy captures six key dimensions: energy-awareness, adaptivity, modularity, offloading support, domain scope, and attack coverage. Applying this framework to the recent literature reveals recurring limitations, including static architectures, limited runtime coordination, and narrow evaluation settings. To address these challenges, we introduce a utility-based decision model for multi-layer task placement, guided by operational metrics such as energy cost, latency, and detection complexity. Unlike review-only studies, this work contributes both a synthesis of current limitations and the design of a novel six-dimensional taxonomy and utility-based layered architecture. The study concludes with future directions that support the development of adaptable, sustainable, and context-aware XDS architectures for heterogeneous environments. Full article
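A utility-based placement decision of the kind proposed can be sketched as scoring each candidate layer (device, edge, cloud) on energy, latency, and detection capability and choosing the maximum. The weights, per-layer costs, and feasibility rule below are invented for illustration and are not the paper's model.

```python
# Toy utility-based task placement across device/edge/cloud layers (illustrative numbers).
# Weights and costs are assumptions; the paper defines its own operational metrics.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    energy_mj: float       # energy drawn from the constrained node for this placement
    latency_ms: float      # end-to-end detection latency
    capability: float      # 0..1, how complex a detector this layer can host

LAYERS = [
    Layer("device", energy_mj=5.0, latency_ms=10.0,  capability=0.4),
    Layer("edge",   energy_mj=2.0, latency_ms=40.0,  capability=0.8),
    Layer("cloud",  energy_mj=1.0, latency_ms=180.0, capability=1.0),
]

W_ENERGY, W_LATENCY, W_CAPABILITY = 0.4, 0.3, 0.3     # assumed operator priorities

def utility(layer: Layer, required_capability: float) -> float:
    if layer.capability < required_capability:
        return float("-inf")                          # this layer cannot host the detector
    energy_score = 1.0 - layer.energy_mj / 5.0        # normalise against the worst option
    latency_score = 1.0 - layer.latency_ms / 200.0
    return (W_ENERGY * energy_score + W_LATENCY * latency_score
            + W_CAPABILITY * layer.capability)

for need in (0.3, 0.9):
    best = max(LAYERS, key=lambda l: utility(l, need))
    print(f"required capability {need}: place detection on {best.name}")
```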

24 pages, 1287 KB  
Article
Technological Innovation in Cultural Organizations: A Review and Conceptual Mapping Framework
by Zornitsa Yordanova and Zlatina Todorova
Digital 2025, 5(4), 54; https://doi.org/10.3390/digital5040054 - 9 Oct 2025
Cited by 2 | Viewed by 2020
Abstract
Cultural organizations have traditionally been viewed as resistant to change, often bound by legacy structures, public dependency, and non-commercial missions. However, recent advances in digital technologies—ranging from AI and VR to IoT and big data—are reshaping the operational and strategic landscape of these institutions. Despite this shift, academic literature has yet to comprehensively map how technological innovation transforms cultural organizations in practice. This paper addresses this gap by introducing the concept of the Cultural Organizational System (COS)—a holistic framework that captures the multi-component structure of cultural entities, including space, tools, performance, management, and networks. Using a PRISMA-based scoping review methodology, we analyze over 90 sources to identify the types, functions, and strategic roles of technological innovations across COS components. The findings reveal a taxonomy of innovation use cases, a mapping to Oslo innovation categories, and a quadrant model of enablers and barriers unique to the cultural sector. By offering an integrated view of digital transformation in cultural settings, this study advances innovation theory and provides practical guidance for cultural leaders and policymakers seeking to balance mission-driven goals with sustainability and modernization imperatives. Full article

34 pages, 3231 KB  
Review
A Review of Smart Crop Technologies for Resource Constrained Environments: Leveraging Multimodal Data Fusion, Edge-to-Cloud Computing, and IoT Virtualization
by Damilola D. Olatinwo, Herman C. Myburgh, Allan De Freitas and Adnan M. Abu-Mahfouz
J. Sens. Actuator Netw. 2025, 14(5), 99; https://doi.org/10.3390/jsan14050099 - 9 Oct 2025
Viewed by 2550
Abstract
Smart crop technologies offer promising solutions for enhancing agricultural productivity and sustainability, particularly in the face of global challenges such as resource scarcity and climate variability. However, their deployment in infrastructure-limited regions, especially across Africa, faces persistent barriers, including unreliable power supply, intermittent internet connectivity, and limited access to technical expertise. This study presents a PRISMA-guided systematic review of literature published between 2015 and 2025, sourced from the Scopus database including indexed content from ScienceDirect and IEEE Xplore. It focuses on key technological components including multimodal sensing, data fusion, IoT resource management, edge-cloud integration, and adaptive network design. The analysis of these references reveals a clear trend of increasing research volume and a major shift in focus from foundational unimodal sensing and cloud computing to more complex solutions involving machine learning post-2019. This review identifies critical gaps in existing research, particularly the lack of integrated frameworks for effective multimodal sensing, data fusion, and real-time decision support in low-resource agricultural contexts. To address this, we categorize multimodal sensing approaches and then provide a structured taxonomy of multimodal data fusion approaches for real-time monitoring and decision support. The review also evaluates the role of IoT virtualization as a pathway to scalable, adaptive sensing systems, and analyzes strategies for overcoming infrastructure constraints. This study contributes a comprehensive overview of smart crop technologies suited to infrastructure-limited agricultural contexts and offers strategic recommendations for deploying resilient smart agriculture solutions under connectivity and power constraints. These findings provide actionable insights for researchers, technologists, and policymakers aiming to develop sustainable and context-aware agricultural innovations in underserved regions. Full article
(This article belongs to the Special Issue Remote Sensing and IoT Application for Smart Agriculture)

52 pages, 3501 KB  
Review
The Role of Artificial Intelligence and Machine Learning in Advancing Civil Engineering: A Comprehensive Review
by Ali Bahadori-Jahromi, Shah Room, Chia Paknahad, Marwah Altekreeti, Zeeshan Tariq and Hooman Tahayori
Appl. Sci. 2025, 15(19), 10499; https://doi.org/10.3390/app151910499 - 28 Sep 2025
Cited by 6 | Viewed by 6612
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has revolutionised civil engineering, enhancing predictive accuracy, decision-making, and sustainability across domains such as structural health monitoring, geotechnical analysis, transportation systems, water management, and sustainable construction. This paper presents a detailed review of peer-reviewed publications from the past decade, employing bibliometric mapping and critical evaluation to analyse methodological advances, practical applications, and limitations. A novel taxonomy is introduced, classifying AI/ML approaches by civil engineering domain, learning paradigm, and adoption maturity to guide future development. Key applications include pavement condition assessment, slope stability prediction, traffic flow forecasting, smart water management, and flood forecasting, leveraging techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Support Vector Machines (SVMs), and hybrid physics-informed neural networks (PINNs). The review highlights challenges, including limited high-quality datasets, absence of AI provisions in design codes, integration barriers with IoT-based infrastructure, and computational complexity. While explainable AI tools like SHAP and LIME improve interpretability, their practical feasibility in safety-critical contexts remains constrained. Ethical considerations, including bias in training datasets and regulatory compliance, are also addressed. Promising directions include federated learning for data privacy, transfer learning for data-scarce regions, digital twins, and adherence to FAIR data principles. This study underscores AI as a complementary tool, not a replacement, for traditional methods, fostering a data-driven, resilient, and sustainable built environment through interdisciplinary collaboration and transparent, explainable systems. Full article
(This article belongs to the Section Civil Engineering)

23 pages, 4487 KB  
Article
Lightweight Anonymous Authentication for IoT: A Taxonomy and Survey of Security Frameworks
by Jian Zhong, Sheng He, Zhicai Liu and Ling Xiong
Sensors 2025, 25(17), 5594; https://doi.org/10.3390/s25175594 - 8 Sep 2025
Viewed by 1529
Abstract
The resource-constrained nature of Internet of Things (IoT) devices necessitates authentication mechanisms built upon lightweight cryptographic primitives, such as symmetric key algorithms and hash functions. In response to demands for user anonymity and forward secrecy, numerous innovative authentication schemes have emerged. This work presents a systematic review of these state-of-the-art approaches. We introduce a structured classification by synthesizing the field into nine distinct sub-frameworks, each focused on either user anonymity or forward secrecy. These are then integrated into two general frameworks that provide both properties. Our analysis illuminates the design principles, security guarantees, and performance trade-offs inherent to each framework. Building on this classification, we comparatively evaluate the security features and performance metrics of 45 representative schemes. Ultimately, this work seeks to enhance the understanding of current challenges and foster further advancement in IoT security. Full article
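The symmetric-primitive style of scheme covered by the frameworks above typically combines keyed-hash challenge-response with pseudonym rotation, so a device authenticates without revealing or linking its identity. The sketch below is a generic example of that pattern, not any specific scheme from this survey; the message layout and key-derivation labels are assumptions.

```python
# Generic hash-based authentication with pseudonym refresh (symmetric primitives only).
# Not a scheme from the surveyed literature; layout and labels are invented for illustration.
import hashlib
import hmac
import secrets

def prf(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Provisioning: device and server share a key and an initial pseudonym, so the
# device's real identity is never sent over the air.
shared_key = secrets.token_bytes(32)
pseudonym = secrets.token_bytes(16)
server_db = {pseudonym: shared_key}

# --- Device: build an authentication request ---
nonce_d = secrets.token_bytes(16)
request = (pseudonym, nonce_d, prf(shared_key, pseudonym, nonce_d))

# --- Server: verify, then rotate the pseudonym for unlinkability ---
pid, nd, tag = request
key = server_db.pop(pid, None)
assert key is not None and hmac.compare_digest(tag, prf(key, pid, nd)), "reject"
nonce_s = secrets.token_bytes(16)
new_pid = prf(key, b"pseudonym", pid, nd, nonce_s)[:16]
server_db[new_pid] = key
session_key = prf(key, b"session", nd, nonce_s)

# --- Device: derive the same session key and next pseudonym from the server's nonce ---
dev_session_key = prf(shared_key, b"session", nonce_d, nonce_s)
dev_new_pid = prf(shared_key, b"pseudonym", pseudonym, nonce_d, nonce_s)[:16]
assert dev_session_key == session_key and dev_new_pid == new_pid
print("authenticated; pseudonym rotated and session key agreed")
```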
(This article belongs to the Section Internet of Things)
