Search Results (75)

Search Parameters:
Keywords = Artificial Intelligence of Things (AIoT)

24 pages, 9450 KiB  
Article
Industrial-AdaVAD: Adaptive Industrial Video Anomaly Detection Empowered by Edge Intelligence
by Jie Xiao, Haocheng Shen, Yasan Ding and Bin Guo
Mathematics 2025, 13(17), 2711; https://doi.org/10.3390/math13172711 - 22 Aug 2025
Abstract
The rapid advancement of the Artificial Intelligence of Things (AIoT) has driven an urgent demand for intelligent video anomaly detection (VAD) to ensure industrial safety. However, traditional approaches struggle to detect unknown anomalies in complex and dynamic environments due to the scarcity of abnormal samples and limited generalization capabilities. To address these challenges, this paper presents an adaptive VAD framework powered by edge intelligence and tailored for resource-constrained industrial settings. Specifically, a lightweight feature extractor is developed by integrating residual networks with channel attention mechanisms, achieving a 58% reduction in model parameters through dense connectivity and output pruning. A multidimensional evaluation strategy is introduced to dynamically select optimal models for deployment on heterogeneous edge devices. To enhance cross-scene adaptability, we propose a multilayer adversarial domain adaptation mechanism that effectively aligns feature distributions across diverse industrial environments. Extensive experiments on a real-world coal mine surveillance dataset demonstrate that the proposed framework achieves an accuracy of 86.7% with an inference latency of 23 ms per frame on edge hardware, improving both detection efficiency and transferability.
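The lightweight extractor described above pairs residual connections with channel attention. As a rough sketch of that building block (not the authors' code; the channel count, reduction ratio, and layer layout are assumptions), a squeeze-and-excitation-style residual block in PyTorch could look like this:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (reduction ratio assumed)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight feature channels

class AttentiveResidualBlock(nn.Module):
    """Residual block with channel attention, as in lightweight VAD extractors."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        return torch.relu(x + self.attn(self.body(x)))

feats = AttentiveResidualBlock(64)(torch.randn(1, 64, 56, 56))  # smoke test
```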
23 pages, 2426 KiB  
Article
SUQ-3: A Three-Stage Coarse-to-Fine Compression Framework for Sustainable Edge AI in Smart Farming
by Thavavel Vaiyapuri and Huda Aldosari
Sustainability 2025, 17(12), 5230; https://doi.org/10.3390/su17125230 - 6 Jun 2025
Viewed by 613
Abstract
Artificial intelligence of things (AIoT) has become a pivotal enabler of precision agriculture by supporting real-time, data-driven decision-making at the edge. Deep learning (DL) models are central to this paradigm, offering powerful capabilities for analyzing environmental and climatic data in a range of agricultural applications. However, deploying these models on edge devices remains challenging due to constraints in memory, computation, and energy. Existing model compression techniques predominantly target large-scale 2D architectures, with limited attention to one-dimensional (1D) models such as gated recurrent units (GRUs), which are commonly employed for processing sequential sensor data. To address this gap, we propose a novel three-stage coarse-to-fine compression framework, termed SUQ-3 (Structured, Unstructured Pruning, and Quantization), designed to optimize 1D DL models for efficient edge deployment in AIoT applications. The SUQ-3 framework sequentially integrates (1) structured pruning with an M×N sparsity pattern to induce hardware-friendly, coarse-grained sparsity; (2) unstructured pruning to eliminate low-magnitude weights for fine-grained compression; and (3) quantization applied after quantization-aware training (QAT) to support low-precision inference with minimal accuracy loss. We validate the proposed SUQ-3 by compressing a GRU-based crop recommendation model trained on environmental and climatic data from an agricultural dataset. Experimental results show a model size reduction of approximately 85% and an 80% improvement in inference latency while preserving high predictive accuracy (F1 score: 0.97 vs. baseline: 0.9837). Notably, when deployed on a mobile edge device using TensorFlow Lite, the SUQ-3 model achieved an estimated energy consumption of 1.18 μJ per inference, representing a 74.4% reduction compared with the baseline and demonstrating its potential for sustainable low-power AI deployment in agricultural environments. Although demonstrated in an agricultural AIoT use case, the generality and modularity of SUQ-3 make it applicable to a broad range of DL models across domains requiring efficient edge intelligence.
(This article belongs to the Collection Sustainability in Agricultural Systems and Ecosystem Services)
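The three SUQ-3 stages compose naturally on a single weight tensor. Below is a minimal NumPy sketch of the idea (the 2:4 pattern, pruning quantile, and symmetric int8 scheme are all assumptions; the paper applies the stages to a full GRU model with QAT rather than to one matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128)).astype(np.float32)   # stand-in for a GRU weight matrix

# Stage 1: structured M:N pruning -- keep the M largest of every N consecutive weights.
M, N = 2, 4                                           # 2:4 sparsity pattern (assumed)
flat = W.reshape(-1, N)
keep = np.argsort(np.abs(flat), axis=1)[:, -M:]       # indices of the M largest per group
mask = np.zeros_like(flat, dtype=bool)
np.put_along_axis(mask, keep, True, axis=1)
W = (flat * mask).reshape(W.shape)

# Stage 2: unstructured pruning -- zero remaining low-magnitude weights.
thresh = np.quantile(np.abs(W[W != 0]), 0.5)          # prune half of the survivors (assumed)
W[np.abs(W) < thresh] = 0.0

# Stage 3: post-QAT quantization (symmetric, per-tensor int8).
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)             # stored weights
W_deq = W_q.astype(np.float32) * scale                # values used at inference

print(f"sparsity: {(W_q == 0).mean():.2%}, max dequant error: {np.abs(W - W_deq).max():.4f}")
```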

57 pages, 1428 KiB  
Review
Artificial Intelligence of Things for Solar Energy Monitoring and Control
by Omayma Hadil Boucif, Abla Malak Lahouaou, Djallel Eddine Boubiche and Homero Toral-Cruz
Appl. Sci. 2025, 15(11), 6019; https://doi.org/10.3390/app15116019 - 27 May 2025
Cited by 1 | Viewed by 4481
Abstract
In the rapidly evolving field of renewable energy, integrating Artificial Intelligence (AI) and the Internet of Things (IoT) has become a transformative strategy for improving solar energy monitoring and control. This paper provides a comprehensive survey of Artificial Intelligence of Things (AIoT) applications in solar energy, illustrating how IoT technologies enable real-time monitoring and system optimization through techniques such as Maximum Power Point Tracking (MPPT), solar tracking, and automated cleaning. Simultaneously, AI boosts these capabilities through energy forecasting, optimization, predictive maintenance, and fault detection, significantly enhancing system performance and reliability. This review highlights key advancements, challenges, and practical applications of AIoT in the solar energy sector, emphasizing its role in advancing energy efficiency and sustainability.
(This article belongs to the Special Issue IoT for Solar Monitoring and Photovoltaic Sensing)
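Among the control techniques the survey covers, MPPT is the most algorithmic. A hedged sketch of the classic perturb-and-observe variant (the step size and the simulated PV curve are illustrative assumptions, not from the paper):

```python
def perturb_and_observe(measure_pv, v_start=30.0, step=0.5, iters=100):
    """Classic P&O MPPT: nudge the operating voltage and keep moving in
    whichever direction increased the measured power."""
    v, direction = v_start, 1.0
    p_prev = measure_pv(v)
    for _ in range(iters):
        v += direction * step
        p = measure_pv(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with a maximum near 35 V (purely illustrative).
pv_power = lambda v: max(0.0, 200.0 - 0.8 * (v - 35.0) ** 2)
print(f"MPP voltage ~ {perturb_and_observe(pv_power):.1f} V")
```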

35 pages, 1503 KiB  
Systematic Review
Integrating AIoT Technologies in Aquaculture: A Systematic Review
by Fahmida Wazed Tina, Nasrin Afsarimanesh, Anindya Nag and Md Eshrat E. Alahi
Future Internet 2025, 17(5), 199; https://doi.org/10.3390/fi17050199 - 30 Apr 2025
Cited by 3 | Viewed by 3655
Abstract
The increasing global demand for seafood underscores the necessity for sustainable aquaculture practices. However, several challenges, including rising operational costs, variable environmental conditions, and the threat of disease outbreaks, impede progress in this field. This review explores the transformative role of the Artificial Intelligence of Things (AIoT) in mitigating these challenges. We analyse current research on AIoT applications in aquaculture, with a strong emphasis on the use of IoT sensors for real-time data collection and AI algorithms for effective data analysis. Our focus areas include monitoring water quality, implementing smart feeding strategies, detecting diseases, analysing fish behaviour, and employing automated counting techniques. Nevertheless, several research gaps remain, particularly regarding the integration of AI in broodstock management, the development of multimodal AI systems, and challenges regarding model generalization. Future advancements in AIoT should prioritise real-time adaptability, cost-effectiveness, and sustainability while emphasizing the importance of multimodal systems, advanced biosensing capabilities, and digital twin technologies. In conclusion, while AIoT presents substantial opportunities for enhancing aquaculture practices, successful implementation will depend on overcoming challenges related to scalability, cost, and technical expertise, improving models’ adaptability, and ensuring environmental sustainability.
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)

24 pages, 985 KiB  
Article
Secure Hierarchical Federated Learning for Large-Scale AI Models: Poisoning Attack Defense and Privacy Preservation in AIoT
by Chengzhuo Han, Tingting Yang, Xin Sun and Zhengqi Cui
Electronics 2025, 14(8), 1611; https://doi.org/10.3390/electronics14081611 - 16 Apr 2025
Cited by 1 | Viewed by 972
Abstract
The rapid integration of large-scale AI models into distributed systems, such as the Artificial Intelligence of Things (AIoT), has introduced critical security and privacy challenges. While configurable models enhance resource efficiency, their deployment in heterogeneous edge environments remains vulnerable to poisoning attacks, data leakage, and adversarial interference, threatening the integrity of collaborative learning and responsible AI deployment. To address these issues, this paper proposes a Hierarchical Federated Cross-domain Retrieval (FHCR) framework tailored for secure and privacy-preserving AIoT systems. By decoupling models into a shared retrieval layer (globally optimized via federated learning) and device-specific layers (locally personalized), FHCR minimizes communication overhead while enabling dynamic module selection. Crucially, we integrate a retrieval-layer mean inspection (RLMI) mechanism to detect and filter malicious gradient updates, effectively mitigating poisoning attacks and reducing attack success rates by 20% compared to conventional methods. Extensive evaluation on General-QA and IoT-Native datasets demonstrates the robustness of FHCR against adversarial threats, with FHCR maintaining global accuracy at or above baseline levels while reducing communication costs by 14%.
(This article belongs to the Special Issue Security and Privacy for AI)
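The RLMI mechanism is described only at a high level; the sketch below illustrates the general family it belongs to, filtering client updates that sit far from the mean before averaging (the distance metric, z-score cut-off, and toy data are assumptions, not the paper's algorithm):

```python
import numpy as np

def filtered_aggregate(updates, z_cut=2.0):
    """Mean-inspection aggregation: drop client updates whose distance from
    the mean update is an outlier, then average the rest (RLMI-like idea)."""
    U = np.stack(updates)                       # (clients, params) retrieval-layer grads
    dists = np.linalg.norm(U - U.mean(axis=0), axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    honest = z < z_cut                          # flag far-away updates as poisoned
    return U[honest].mean(axis=0), np.where(~honest)[0]

rng = np.random.default_rng(1)
clients = [rng.normal(0.1, 0.05, 1000) for _ in range(9)]
clients.append(rng.normal(5.0, 0.05, 1000))    # one poisoned client
agg, flagged = filtered_aggregate(clients)
print("flagged clients:", flagged)             # expect [9]
```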

40 pages, 470 KiB  
Systematic Review
A Systematic Review on the Combination of VR, IoT and AI Technologies, and Their Integration in Applications
by Dimitris Kostadimas, Vlasios Kasapakis and Konstantinos Kotis
Future Internet 2025, 17(4), 163; https://doi.org/10.3390/fi17040163 - 7 Apr 2025
Cited by 5 | Viewed by 2971
Abstract
The convergence of Virtual Reality (VR), Artificial Intelligence (AI), and the Internet of Things (IoT) offers transformative potential across numerous sectors. However, existing studies often examine these technologies independently or in limited pairings, which overlooks the synergistic possibilities of their combined usage. This systematic review adheres to the PRISMA guidelines in order to critically analyze peer-reviewed literature from highly recognized academic databases related to the intersection of VR, AI, and IoT, and identify application domains, methodologies, tools, and key challenges. By focusing on real-life implementations and working prototypes, this review highlights state-of-the-art advancements and uncovers gaps that hinder practical adoption, such as data collection issues, interoperability barriers, and user experience challenges. The findings reveal that digital twins (DTs), AIoT systems, and immersive XR environments are promising as emerging technologies (ET) but require further development to achieve scalability and real-world impact, while certain fields have seen only limited research to date. This review bridges theory and practice, providing a targeted foundation for future interdisciplinary research aimed at advancing practical, scalable solutions across domains such as healthcare, smart cities, industry, education, cultural heritage, and beyond. The study found that the integration of VR, AI, and IoT holds significant potential across various domains, with DTs, IoT systems, and immersive XR environments showing promising applications; however, challenges such as data interoperability, user experience limitations, and scalability barriers still hinder widespread adoption.
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)

17 pages, 1790 KiB  
Article
Advancing Artificial Intelligence of Things Security: Integrating Feature Selection and Deep Learning for Real-Time Intrusion Detection
by Faisal Albalwy and Muhannad Almohaimeed
Systems 2025, 13(4), 231; https://doi.org/10.3390/systems13040231 - 28 Mar 2025
Cited by 1 | Viewed by 1595
Abstract
The size of data transmitted through various communication systems has recently increased due to technological advancements in the Artificial Intelligence of Things (AIoT) and the industrial Internet of Things (IoT). IoT communications rely on intrusion detection systems (IDS) to ensure secure and reliable data transmission, as traditional security mechanisms, such as firewalls and encryption, remain susceptible to attacks. An effective IDS is crucial as evolving threats continue to expose new security vulnerabilities. This study proposes an integrated approach combining feature selection methods and principal component analysis (PCA) with advanced deep learning (DL) models for real-time intrusion detection, significantly improving both computational efficiency and accuracy compared to previous methods. Specifically, five feature selection methods (correlation-based feature subset selection (CFS), Pearson analysis, gain ratio (GR), information gain (IG) and symmetrical uncertainty (SU)) were integrated with PCA to optimise feature dimensionality and enhance predictive performance. Three classifiers were evaluated on the RT-IoT2022 dataset: artificial neural networks (ANNs), deep neural networks (DNNs), and TabNet. The ANN classifier combined with Pearson analysis and PCA achieved the highest intrusion detection accuracy of 99.7%, demonstrating substantial performance improvements over ANN alone (92%) and TabNet (94%) without feature selection. Key features identified by Pearson analysis included id.resp_p, service, fwd_init_window_size and flow_SYN_flag_count, which significantly contributed to the performance gains. These results indicate that combining Pearson analysis with PCA consistently improves classification performance across multiple models. Furthermore, deploying the classifiers directly on the original feature set decreased accuracy, emphasising the importance of feature selection in enhancing AIoT and IoT security. This predictive model strengthens IDS capabilities, enabling early threat detection and proactive mitigation strategies against cyberattacks in real-time AIoT environments.
(This article belongs to the Special Issue Integration of Cybersecurity, AI, and IoT Technologies)
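The reported pipeline (Pearson-based selection, then PCA, then an ANN) maps directly onto scikit-learn. A hedged sketch with synthetic data standing in for RT-IoT2022 (the feature count k, PCA component count, and layer sizes are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for RT-IoT2022 (not bundled with sklearn): binary flows, 30 features.
X, y = make_classification(n_samples=4000, n_features=30, n_informative=8, random_state=0)

# Pearson analysis: keep the k features most correlated (in magnitude) with the label.
k = 12                                                   # assumed
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(np.abs(r))[-k:]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),                                 # assumed component count
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```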

9 pages, 520 KiB  
Article
Research on Approximate Computation of Signal Processing Algorithms for AIoT Processors Based on Deep Learning
by Yingzhe Liu, Fangfa Fu and Xuejian Sun
Electronics 2025, 14(6), 1064; https://doi.org/10.3390/electronics14061064 - 7 Mar 2025
Cited by 1 | Viewed by 1004
Abstract
In the post-Moore era, the ever-growing volume of data poses great challenges to the performance of computing systems. To cope with these challenges, approximate computation has developed rapidly; it enhances system performance at the cost of a minor degradation in accuracy. In this paper, we investigate the utilization of an Artificial Intelligence of Things (AIoT) processor for approximate computing. First, we employed neural architecture search (NAS) to obtain a neural network structure that approximates the FFT, DCT, FIR, and IIR functions. Subsequently, based on this structure, we quantized and trained a neural network implemented on the AI accelerator of the MAX78000 development board. To evaluate the performance, we implemented the same functions using the CMSIS-DSP library. The results demonstrate that the computational efficiency of the approximate computation on the AI accelerator is significantly higher than that of traditional DSP implementations. Therefore, approximate computation based on AIoT devices can be effectively utilized in real-time applications.
(This article belongs to the Special Issue The Progress in Application-Specific Integrated Circuit Design)
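To make the "neural network as an approximate DSP kernel" idea concrete, the toy sketch below trains a small dense network to imitate a FIR filter and measures its error against the exact result (the taps, window length, and architecture are assumptions; the paper uses NAS-derived structures on the MAX78000 accelerator rather than an MLP on a PC):

```python
import numpy as np
from scipy.signal import lfilter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
taps = np.array([0.05, 0.2, 0.5, 0.2, 0.05])          # example low-pass FIR taps
WIN = 32                                               # samples seen per prediction

# Training data: random windows in, exact FIR output of the last sample out.
X = rng.normal(size=(8000, WIN)).astype(np.float32)
y = np.stack([lfilter(taps, 1.0, row)[-1] for row in X])

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=200, random_state=0)
net.fit(X, y)                                          # the "approximate FIR"

X_test = rng.normal(size=(1000, WIN)).astype(np.float32)
y_ref = np.stack([lfilter(taps, 1.0, row)[-1] for row in X_test])
err = np.abs(net.predict(X_test) - y_ref)
print(f"mean abs error vs exact FIR: {err.mean():.4f}")
```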

50 pages, 1217 KiB  
Review
Smart Lighting Systems: State-of-the-Art in the Adoption of the EdgeML Computing Paradigm
by Gaetanino Paolone, Romolo Paesani, Francesco Pilotti, Jacopo Camplone, Andrea Piazza and Paolino Di Felice
Future Internet 2025, 17(2), 90; https://doi.org/10.3390/fi17020090 - 14 Feb 2025
Viewed by 2245
Abstract
Lighting Systems (LSs) play a fundamental role in almost every aspect of human activities. Since the advent of electric lighting, both academia and industry have been engaged in raising the quality of the service offered by these systems. The advent of Light Emitting Diode (LED) lighting represented a giant step forward for such systems in terms of light quality and energy saving. To further raise the quality and range of the services offered by LSs while consolidating their reliability and security, we see the need to explore the contribution of the emerging Artificial Intelligence of Things (AIoT) technology. This paper systematically reviews and compares the state-of-the-art with regard to the impact of the AIoT in the smart LS domain. The study reveals that the field is relatively new; the first works date back only to 2019. The review also delves into recent research works focusing on the usage of Machine Learning (ML) algorithms in an edge Cloud-based computing architecture. Our findings reveal that this topic is almost unexplored. Finally, the survey sheds light on future research opportunities that can overcome the current gaps, with the final aim of guiding scholars and practitioners in advancing the field of smart LSs. The study is reported in full detail, so it can be replicated.
(This article belongs to the Section Internet of Things)

47 pages, 1743 KiB  
Review
Artificial Intelligence of Things (AIoT) Advances in Aquaculture: A Review
by Yo-Ping Huang and Simon Peter Khabusi
Processes 2025, 13(1), 73; https://doi.org/10.3390/pr13010073 - 1 Jan 2025
Cited by 14 | Viewed by 9116
Abstract
The integration of artificial intelligence (AI) and the internet of things (IoT), known as the artificial intelligence of things (AIoT), is driving significant advancements in the aquaculture industry, offering solutions to longstanding challenges related to operational efficiency, sustainability, and productivity. This review explores the latest research studies in AIoT within the aquaculture industry, focusing on real-time environmental monitoring, data-driven decision-making, and automation. IoT sensors deployed across aquaculture systems continuously track critical parameters such as temperature, pH, dissolved oxygen, salinity, and fish behavior. AI algorithms process these data streams to provide predictive insights into water quality management, disease detection, species identification, biomass estimation, and optimized feeding strategies, among others. Although AIoT adoption in aquaculture is advantageous on various fronts, numerous challenges remain, including high implementation costs, data privacy concerns, and the need for scalable and adaptable AI models across diverse aquaculture environments. This review also highlights future directions for AIoT in aquaculture, emphasizing the potential for hybrid AI models, improved scalability for large-scale operations, and sustainable resource management.
(This article belongs to the Special Issue Transfer Learning Methods in Equipment Reliability Management)

22 pages, 5891 KiB  
Article
Optimizing Cold Chain Logistics with Artificial Intelligence of Things (AIoT): A Model for Reducing Operational and Transportation Costs
by Hamed Nozari, Maryam Rahmaty, Parvaneh Zeraati Foukolaei, Hossien Movahed and Mahmonir Bayanati
Future Transp. 2025, 5(1), 1; https://doi.org/10.3390/futuretransp5010001 - 1 Jan 2025
Cited by 1 | Viewed by 4641
Abstract
This paper discusses the modeling and solution of a cold chain logistics (CCL) problem using the artificial intelligence of things (AIoT). The presented model aims to reduce the costs of the entire CCL network while maintaining the minimum quality of cold products distributed to customers. This study considers equipping distribution centers and trucks with IoT tools and examines the advantages of using these tools to reduce logistics costs. Four artificial intelligence (AI)-based algorithms, the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Gray Wolf Optimizer (GWO), and the Emperor Penguin Optimizer (EPO), are used to solve the mathematical model. The analysis shows that equipping trucks and distribution centers with IoT tools increases equipment costs by 15% compared to the baseline, but yields a 26% reduction in operating costs and a 60% reduction in transportation costs, for a net reduction of 2.78% in total costs. Furthermore, the AI algorithms proved fast without sacrificing the accuracy of the results: EPO reached the best objective value while cutting solution time by 70%, and further analyses confirm its effectiveness on the average objective function, average RPD error, and solution time. These results help managers understand the need for IoT infrastructure in the distribution of cold products to customers, since implementing IoT devices can offset a large portion of transportation and energy costs; the paper closes with management solutions and insights, and notes the need to deploy IoT tools in other parts of the mathematical model and its applications.
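The four solvers compared are population-based metaheuristics that share one pattern: evaluate a population, track the best solutions, and move toward them. A minimal PSO over a toy logistics-style cost function illustrates that pattern (the objective, bounds, and hyperparameters are placeholders, not the paper's model):

```python
import numpy as np

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization: each particle tracks its own best
    position, the swarm tracks the global best, velocities blend the two."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in: quadratic transport-cost surface with its optimum at 3 per lane.
cost = lambda q: float(np.sum(2.0 * (q - 3.0) ** 2) + 10.0)
best, best_f = pso(cost, dim=5, bounds=(0.0, 10.0))
print(best.round(2), round(best_f, 2))
```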

36 pages, 448 KiB  
Review
A Comprehensive Survey on Generative AI Solutions in IoT Security
by Juan Luis López Delgado and Juan Antonio López Ramos
Electronics 2024, 13(24), 4965; https://doi.org/10.3390/electronics13244965 - 17 Dec 2024
Cited by 3 | Viewed by 6639
Abstract
Artificial Intelligence is becoming increasingly influential in our society: it makes it possible to analyse the large amounts of data that a growing number of interconnected devices capture and send, and to make autonomous, instant decisions from the information that machines are now able to extract, saving time and effort in certain tasks, especially in cyberspace. A key issue is the security of this machine-controlled cyberspace, without which the system cannot run properly. The IoT is a particular case, given the heterogeneous and special nature of the environment. The limited resources of some components in such a network and the distributed nature of the topology make these types of environments vulnerable to many different attacks and information leakages. The capability of Generative Artificial Intelligence to generate content and to autonomously learn and predict situations can be very useful for making decisions automatically and instantly, significantly enhancing the security of IoT systems. Our aim in this work is to provide an overview of existing Generative Artificial Intelligence-based solutions for the very diverse set of security issues in IoT environments and to anticipate future research lines in the field that merit deeper study.

32 pages, 2250 KiB  
Article
Integration of Foundation Models and Federated Learning in AIoT-Based Aircraft Health Monitoring Systems
by Igor Kabashkin
Mathematics 2024, 12(21), 3428; https://doi.org/10.3390/math12213428 - 31 Oct 2024
Cited by 3 | Viewed by 2008
Abstract
The study presents a comprehensive framework for integrating foundation models (FMs), federated learning (FL), and Artificial Intelligence of Things (AIoT) technologies to enhance aircraft health monitoring systems (AHMSs). The proposed architecture uses the strengths of both centralized and decentralized learning approaches, combining the broad knowledge capture of foundation models with the privacy-preserving and adaptive nature of federated learning. Through extensive simulations on a representative aircraft fleet, the integrated FM + FL approach demonstrated consistently superior performance compared to standalone implementations across multiple key metrics, including prediction accuracy, model size efficiency, and convergence speed. The framework establishes a robust digital twin ecosystem for real-time monitoring, predictive maintenance, and fleet-wide optimization. Comparative analysis reveals significant improvements in anomaly detection capabilities and reduced false alarm rates compared to traditional methods. The study conducts a systematic evaluation of the benefits and limitations of FM, FL, and integrated approaches in AHMS, examining their implications for system robustness, scalability, and security. Statistical analysis confirms that the integrated approach substantially enhances precision and recall in identifying potential failures while optimizing computational resources and training time. This paper outlines a detailed aviation ecosystem architecture integrating these advanced AI technologies across centralized processing, client, and communication domains. Future research directions are identified, focusing on improving model efficiency, ensuring generalization across diverse operational conditions, and addressing regulatory and ethical considerations.
(This article belongs to the Special Issue Statistical Modeling and Data-Driven Methods in Aviation Systems)
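The FL side of such a framework rests on weighted parameter averaging across the fleet. A minimal FedAvg round in plain NumPy (weighting by local sample count is the standard FedAvg convention; the per-aircraft "models" here are placeholders, not the paper's architecture):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: average client model parameters, weighted by the
    number of local samples each aircraft's monitoring node contributed."""
    total = sum(client_sizes)
    agg = {}
    for name in client_weights[0]:
        agg[name] = sum(w[name] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
    return agg

# Three aircraft nodes, each holding a tiny two-parameter "model".
rng = np.random.default_rng(0)
clients = [{"w": rng.normal(size=4), "b": rng.normal(size=1)} for _ in range(3)]
global_model = fedavg(clients, client_sizes=[1200, 400, 2400])
print(global_model["w"], global_model["b"])
```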

20 pages, 5052 KiB  
Article
AIoT-Based Visual Anomaly Detection in Photovoltaic Sequence Data via Sequence Learning
by Qian Wei, Hongjun Sun, Jingjing Fan, Guojun Li and Zhiguang Zhou
Energies 2024, 17(21), 5369; https://doi.org/10.3390/en17215369 - 29 Oct 2024
Cited by 1 | Viewed by 1792
Abstract
Anomaly detection is a common analytical task aimed at identifying rare cases that differ from the majority of typical cases in a dataset. In the management of photovoltaic (PV) power generation systems, it is essential for electric power companies to effectively detect anomalies in PV sequence data, as this helps operators and experts understand and interpret anomalies within PV arrays when making response decisions. However, traditional methods that rely on manual labor and periodic data collection are hard to apply in real time, resulting in delays in fault detection and localization, and traditional machine learning algorithms are slow and cumbersome in processing the data, which affects the operational safety of PV plants. In this paper, we propose a visual analytic approach for detecting and exploring anomalous sequences in a PV sequence dataset via sequence learning. We first identify anomalies by comparing sequences with their reconstructions from an unsupervised detection algorithm based on Long Short-Term Memory (LSTM) autoencoders. To further enhance the accuracy of anomaly detection, we integrate artificial intelligence of things (AIoT) technology with strict time-synchronized data collection and a real-time processing algorithm, ensuring that data from multiple sensors are synchronized and processed in real time. Then, we analyze the characteristics of the anomalies based on a visual comparison of different PV sequences and explore potential correlating factors to analyze the possible causes of the anomalies. Case studies based on authentic enterprise datasets demonstrate the effectiveness of our method in the anomaly detection and exploration of PV sequence data.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
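An LSTM-autoencoder detector of the kind described flags windows whose reconstruction error exceeds a threshold learned from normal data. A compact PyTorch sketch (the window length, hidden size, training budget, and percentile cut-off are assumptions, and the sine wave stands in for real PV sequences):

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a PV sequence window into a latent vector, decode it back;
    windows that reconstruct poorly are flagged as anomalous."""
    def __init__(self, n_features=1, hidden=32, window=48):
        super().__init__()
        self.window = window
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, window, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, self.window, 1)  # repeat latent per step
        out, _ = self.decoder(z)
        return self.head(out)

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.sin(torch.linspace(0, 50, 48 * 64)).view(64, 48, 1)  # toy PV curves
for _ in range(50):                             # train on normal sequences only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=(1, 2))
    threshold = errors.quantile(0.99)           # flag windows above the cut-off
```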

24 pages, 35874 KiB  
Article
Implementation of Smart Farm Systems Based on Fog Computing in Artificial Intelligence of Things Environments
by Sukjun Hong, Seongchan Park, Heejun Youn, Jongyong Lee and Soonchul Kwon
Sensors 2024, 24(20), 6689; https://doi.org/10.3390/s24206689 - 17 Oct 2024
Cited by 4 | Viewed by 2914
Abstract
Cloud computing has recently gained widespread attention owing to its use in applications involving the Internet of Things (IoT). However, the transmission of massive volumes of data to a cloud server often results in overhead. Fog computing has emerged as a viable solution to address this issue. This study implements an Artificial Intelligence of Things (AIoT) system based on fog computing on a smart farm. Three experiments are conducted to evaluate the performance of the AIoT system. First, network traffic volumes between systems employing and not employing fog computing are compared. Second, the performance of three communication protocols commonly used in IoT applications is assessed: hypertext transfer protocol (HTTP), message queuing telemetry transport (MQTT), and constrained application protocol (CoAP). Finally, a convolutional neural network-based algorithm is introduced to determine the maturity level of coffee tree images. Experimental data are collected over ten days from a coffee tree farm in the Republic of Korea. Notably, the fog computing system demonstrates a 26% reduction in the cumulative data volume compared with a non-fog system. MQTT exhibits stable results in terms of the data volume and loss rate. Additionally, the maturity level determination algorithm performed on coffee fruits provides reliable results.
(This article belongs to the Section Sensor Networks)
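MQTT's stability in these tests reflects its lightweight publish/subscribe design. A minimal sensor-side publisher sketch using the paho-mqtt 1.x client (the broker address, topic layout, payload schema, and sampling rate are placeholders, not details from the paper):

```python
import json, time, random
import paho.mqtt.client as mqtt   # pip install "paho-mqtt<2" (1.x callback API assumed)

BROKER = "192.168.0.10"           # fog-node broker address (placeholder)
TOPIC = "farm/coffee/sensor01"    # topic layout is an assumption

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()               # background network thread handles reconnects

while True:                       # runs as a long-lived sensor daemon
    reading = {
        "ts": time.time(),
        "temperature_c": round(random.uniform(18, 30), 2),  # stand-in sensor values
        "humidity_pct": round(random.uniform(40, 90), 2),
    }
    # QoS 1: at-least-once delivery, a common trade-off for telemetry.
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(10)                # 0.1 Hz sampling (assumed)
```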
