Search Results (292)

Search Parameters:
Keywords = edge AI devices

31 pages, 1370 KiB  
Article
AIM-Net: A Resource-Efficient Self-Supervised Learning Model for Automated Red Spider Mite Severity Classification in Tea Cultivation
by Malathi Kanagarajan, Mohanasundaram Natarajan, Santhosh Rajendran, Parthasarathy Velusamy, Saravana Kumar Ganesan, Manikandan Bose, Ranjithkumar Sakthivel and Baskaran Stephen Inbaraj
AgriEngineering 2025, 7(8), 247; https://doi.org/10.3390/agriengineering7080247 (registering DOI) - 1 Aug 2025
Abstract
Tea cultivation faces significant threats from red spider mite (RSM: Oligonychus coffeae) infestations, which reduce yields and economic viability in major tea-producing regions. Current automated detection methods rely on supervised deep learning models requiring extensive labeled data, limiting scalability for smallholder farmers. This article proposes AIM-Net (AI-based Infestation Mapping Network) by evaluating SwAV (Swapping Assignments between Views), a self-supervised learning framework, for classifying RSM infestation severity (Mild, Moderate, Severe) using a geo-referenced, field-acquired dataset of RSM infested tea-leaves, Cam-RSM. The methodology combines SwAV pre-training on unlabeled data with fine-tuning on labeled subsets, employing multi-crop augmentation and online clustering to learn discriminative features without full supervision. Comparative analysis against a fully supervised ResNet-50 baseline utilized 5-fold cross-validation, assessing accuracy, F1-scores, and computational efficiency. Results demonstrate SwAV’s superiority, achieving 98.7% overall accuracy (vs. 92.1% for ResNet-50) and macro-average F1-scores of 98.3% across classes, with a 62% reduction in labeled data requirements. The model showed particular strength in Mild_RSM-class detection (F1-score: 98.5%) and computational efficiency, enabling deployment on edge devices. Statistical validation confirmed significant improvements (p < 0.001) over baseline approaches. These findings establish self-supervised learning as a transformative tool for precision pest management, offering resource-efficient solutions for early infestation detection while maintaining high accuracy. Full article
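As a rough illustration of the two-stage workflow this abstract describes (SwAV-style self-supervised pre-training followed by fine-tuning on a small labeled subset), the sketch below shows only the fine-tuning stage in PyTorch. The checkpoint path, dataset folder, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: fine-tuning stage of a SwAV-style pipeline for 3-class RSM
# severity (Mild / Moderate / Severe). Paths and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CLASSES = 3  # Mild, Moderate, Severe

# Backbone pre-trained with self-supervised (SwAV-style) learning on unlabeled leaf images.
backbone = models.resnet50(weights=None)
state = torch.load("swav_pretrained_backbone.pth", map_location="cpu")  # hypothetical checkpoint
backbone.load_state_dict(state, strict=False)                  # projection-head keys are ignored
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)  # new severity head

# Fine-tune on the small labeled subset (the abstract reports a 62% reduction in label needs).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("cam_rsm/labeled/train", transform=train_tf)  # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

backbone.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
```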
37 pages, 6916 KiB  
Review
The Role of IoT in Enhancing Sports Analytics: A Bibliometric Perspective
by Yuvanshankar Azhagumurugan, Jawahar Sundaram, Zenith Dewamuni, Pritika, Yakub Sebastian and Bharanidharan Shanmugam
IoT 2025, 6(3), 43; https://doi.org/10.3390/iot6030043 (registering DOI) - 31 Jul 2025
Abstract
The use of Internet of Things (IoT) for sports innovation has transformed the way athletes train, compete, and recover in any sports activity. This study performs a bibliometric analysis to examine research trends, collaborations, and publications in the realm of IoT and Sports. Our analysis included 780 Scopus articles and 150 WoS articles published during 2012–2025, and duplicates were removed. We analyzed and visualized the bibliometric data using R version 3.6.1, VOSviewer version 1.6.20, and the bibliometrix library. The study provides insights from a bibliometric analysis, showcasing the allocation of topics, scientific contributions, patterns of co-authorship, prominent authors and their productivity over time, notable terms, key sources, publications with citations, analysis of citations, source-specific citation analysis, yearly publication patterns, and the distribution of research papers. The results indicate that China and India have the leading scientific production in the development of IoT and Sports research, with prominent authors like Anton Umek, Anton Kos, and Emiliano Schena making significant contributions. Wearable technology and wearable sensors are the most trending topics in IoT and Sports, followed by medical sciences and artificial intelligence paradigms. The analysis also emphasizes the importance of open-access journals like ‘Journal of Physics: Conference Series’ and ‘IEEE Access’ for their contributions to IoT and Sports research. Future research directions focus on enhancing effective, lightweight, and efficient wearable devices while implementing technologies like edge computing and lightweight AI in wearable technologies. Full article

17 pages, 3604 KiB  
Article
Binary-Weighted Neural Networks Using FeRAM Array for Low-Power AI Computing
by Seung-Myeong Cho, Jaesung Lee, Hyejin Jo, Dai Yun, Jihwan Moon and Kyeong-Sik Min
Nanomaterials 2025, 15(15), 1166; https://doi.org/10.3390/nano15151166 - 28 Jul 2025
Viewed by 123
Abstract
Artificial intelligence (AI) has become ubiquitous in modern computing systems, from high-performance data centers to resource-constrained edge devices. As AI applications continue to expand into mobile and IoT domains, the need for energy-efficient neural network implementations has become increasingly critical. To meet this requirement, this work presents a BWNN (binary-weighted neural network) architecture implemented using FeRAM (Ferroelectric RAM)-based synaptic arrays. By leveraging the non-volatile nature and low-power computing of FeRAM-based CIM (computing in memory), the proposed CIM architecture achieves significant reductions in both dynamic and standby power consumption. Simulation results in this paper demonstrate that scaling the ferroelectric capacitor size can reduce dynamic power by up to 6.5%, while eliminating DRAM-like refresh cycles allows standby power to drop by over 258× under typical conditions. Furthermore, the combination of binary weight quantization and in-memory computing enables energy-efficient inference without significant loss in recognition accuracy, as validated on the MNIST dataset. Compared to prior SRAM-CIM, DRAM-CIM, and STT-MRAM-CIM architectures, the proposed FeRAM-CIM exhibits superior energy efficiency, achieving 230–580 TOPS/W in a 45 nm process. These results highlight the potential of FeRAM-based BWNNs as a compelling solution for edge-AI and IoT applications where energy constraints are critical. Full article
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)
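The binary weight quantization this abstract relies on can be pictured with a small NumPy sketch: weights collapse to sign bits (what a FeRAM synaptic cell would store), and the multiply-accumulate reduces to signed additions with an optional per-row scale. The shapes and the scaling trick are illustrative assumptions, not the paper's circuit.

```python
# Illustrative sketch (not the paper's hardware): binary-weight inference where each
# weight is stored as a single sign bit and the MAC becomes signed additions.
import numpy as np

rng = np.random.default_rng(0)
full_precision_w = rng.normal(size=(10, 784))   # trained FP weights (assumed shape)
x = rng.random(784)                             # one input activation vector

# Binarize weights to {-1, +1}; only the sign bit would be written to the FeRAM array.
w_bin = np.where(full_precision_w >= 0, 1.0, -1.0)

# A per-row scale (mean |w|) is a common trick to recover accuracy lost to binarization.
alpha = np.abs(full_precision_w).mean(axis=1)

# MAC with binary weights: adds/subtracts of x, then one scalar multiply per output row.
logits = alpha * (w_bin @ x)
print(logits.argmax())  # predicted class for this toy example
```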

37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Viewed by 455
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The reviewed research therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

14 pages, 1295 KiB  
Article
Edge-FLGuard+: A Federated and Lightweight Anomaly Detection Framework for Securing 5G-Enabled IoT in Smart Homes
by Manuel J. C. S. Reis
Future Internet 2025, 17(8), 329; https://doi.org/10.3390/fi17080329 - 24 Jul 2025
Viewed by 168
Abstract
The rapid expansion of 5G-enabled Internet of Things (IoT) devices in smart homes has heightened the need for robust, privacy-preserving, and real-time cybersecurity mechanisms. Traditional cloud-based security systems often face latency and privacy bottlenecks, making them unsuitable for edge-constrained environments. In this work, we propose Edge-FLGuard+, a federated and lightweight anomaly detection framework specifically designed for 5G-enabled smart home ecosystems. The framework integrates edge AI with federated learning to detect network and device anomalies while preserving user privacy and reducing cloud dependency. A lightweight autoencoder-based model is trained across distributed edge nodes using privacy-preserving federated averaging. We evaluate our framework using the TON_IoT and CIC-IDS2018 datasets under realistic smart home attack scenarios. Experimental results show that Edge-FLGuard+ achieves high detection accuracy (≥95%) with minimal communication and computational overhead, outperforming traditional centralized and local-only baselines. Our results demonstrate the viability of federated AI models for real-time security in next-generation smart home networks. Full article
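A minimal sketch of the federated step this abstract presupposes, assuming plain FedAvg with equal client weights and a toy autoencoder: each edge node trains locally, and only model weights are averaged centrally, so raw traffic never leaves the home. Layer widths, the anomaly threshold, and the client count are illustrative assumptions.

```python
# Minimal FedAvg sketch for lightweight autoencoder anomaly detection (illustrative only).
import copy
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_features=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU())
        self.decoder = nn.Linear(8, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def federated_average(local_models):
    """Element-wise average of the clients' state dicts (plain FedAvg, equal weights)."""
    avg_state = copy.deepcopy(local_models[0].state_dict())
    for key in avg_state:
        stacked = torch.stack([m.state_dict()[key].float() for m in local_models])
        avg_state[key] = stacked.mean(dim=0)
    global_model = TinyAutoencoder()
    global_model.load_state_dict(avg_state)
    return global_model

def is_anomalous(model, sample, threshold=0.1):
    """Reconstruction error above a threshold flags the traffic window as anomalous."""
    with torch.no_grad():
        return nn.functional.mse_loss(model(sample), sample).item() > threshold

clients = [TinyAutoencoder() for _ in range(3)]   # stand-ins for locally trained edge models
global_model = federated_average(clients)
```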

34 pages, 2648 KiB  
Review
Microfluidic Sensors for Micropollutant Detection in Environmental Matrices: Recent Advances and Prospects
by Mohamed A. A. Abdelhamid, Mi-Ran Ki, Hyo Jik Yoon and Seung Pil Pack
Biosensors 2025, 15(8), 474; https://doi.org/10.3390/bios15080474 - 22 Jul 2025
Viewed by 341
Abstract
The widespread and persistent occurrence of micropollutants—such as pesticides, pharmaceuticals, heavy metals, personal care products, microplastics, and per- and polyfluoroalkyl substances (PFAS)—has emerged as a critical environmental and public health concern, necessitating the development of highly sensitive, selective, and field-deployable detection technologies. Microfluidic sensors, including biosensors, have gained prominence as versatile and transformative tools for real-time environmental monitoring, enabling precise and rapid detection of trace-level contaminants in complex environmental matrices. Their miniaturized design, low reagent consumption, and compatibility with portable and smartphone-assisted platforms make them particularly suited for on-site applications. Recent breakthroughs in nanomaterials, synthetic recognition elements (e.g., aptamers and molecularly imprinted polymers), and enzyme-free detection strategies have significantly enhanced the performance of these biosensors in terms of sensitivity, specificity, and multiplexing capabilities. Moreover, the integration of artificial intelligence (AI) and machine learning algorithms into microfluidic platforms has opened new frontiers in data analysis, enabling automated signal processing, anomaly detection, and adaptive calibration for improved diagnostic accuracy and reliability. This review presents a comprehensive overview of cutting-edge microfluidic sensor technologies for micropollutant detection, emphasizing fabrication strategies, sensing mechanisms, and their application across diverse pollutant categories. We also address current challenges, such as device robustness, scalability, and potential signal interference, while highlighting emerging solutions including biodegradable substrates, modular integration, and AI-driven interpretive frameworks. Collectively, these innovations underscore the potential of microfluidic sensors to redefine environmental diagnostics and advance sustainable pollution monitoring and management strategies. Full article
(This article belongs to the Special Issue Biosensors Based on Microfluidic Devices—2nd Edition)

17 pages, 6432 KiB  
Article
Intelligent Battery-Designed System for Edge-Computing-Based Farmland Pest Monitoring System
by Chung-Wen Hung, Chun-Chieh Wang, Zheng-Jie Liao, Yu-Hsing Su and Chun-Liang Liu
Electronics 2025, 14(15), 2927; https://doi.org/10.3390/electronics14152927 - 22 Jul 2025
Viewed by 204
Abstract
Cruciferous vegetables are popular in Asian dishes. However, striped flea beetles prefer to feed on leaves, which can damage the appearance of crops and reduce their economic value. Due to the lack of pest monitoring, the occurrence of pests is often irregular and unpredictable. Regular and quantitative spraying of pesticides for pest control is an alternative method. Nevertheless, this requires manual execution and is inefficient. This paper presents a system powered by solar energy, utilizing batteries and supercapacitors for energy storage to support the implementation of edge AI devices in outdoor environments. Raspberry Pi is utilized for artificial intelligence image recognition and the Internet of Things (IoT). YOLOv5 is implemented on the edge device, Raspberry Pi, for detecting striped flea beetles, and StyleGAN3 is also utilized for data augmentation in the proposed system. The recognition accuracy reaches 85.4%, and the results are transmitted to the server through a 4G network. The experimental results indicate that the system can operate effectively for an extended period. This system enhances sustainability and reliability and greatly improves the practicality of deploying smart pest detection technology in remote or resource-limited agricultural areas. In subsequent applications, drones can plan routes for pesticide spraying based on the distribution of pests. Full article
(This article belongs to the Special Issue Battery Health Management for Cyber-Physical Energy Storage Systems)
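The detection-and-report loop on the Raspberry Pi might look roughly like the sketch below, using the public YOLOv5 hub interface; the fine-tuned weight file and the upload endpoint are placeholders, since the paper's own artifacts are not reproduced here.

```python
# Rough sketch of the edge detection step: run a YOLOv5 model on a trap image and
# report striped-flea-beetle counts over the 4G link. Weights and URL are placeholders.
import torch
import requests

# Load a YOLOv5 model; 'custom' with a locally fine-tuned weight file is the usual pattern.
model = torch.hub.load("ultralytics/yolov5", "custom", path="flea_beetle_yolov5s.pt")  # hypothetical weights
model.conf = 0.5  # confidence threshold

results = model("trap_photo.jpg")        # image captured by the field camera
detections = results.pandas().xyxy[0]    # bounding boxes as a DataFrame
count = len(detections)

# Send the count to the farm server (URL is a placeholder).
requests.post("https://farm-server.example/api/pests", json={"beetle_count": count}, timeout=10)
```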

30 pages, 10173 KiB  
Article
Integrated Robust Optimization for Lightweight Transformer Models in Low-Resource Scenarios
by Hui Huang, Hengyu Zhang, Yusen Wang, Haibin Liu, Xiaojie Chen, Yiling Chen and Yuan Liang
Symmetry 2025, 17(7), 1162; https://doi.org/10.3390/sym17071162 - 21 Jul 2025
Viewed by 348
Abstract
With the rapid proliferation of artificial intelligence (AI) applications, an increasing number of edge devices—such as smartphones, cameras, and embedded controllers—are being tasked with performing AI-based inference. Due to constraints in storage capacity, computational power, and network connectivity, these devices are often categorized as operating in resource-constrained environments. In such scenarios, deploying powerful Transformer-based models like ChatGPT and Vision Transformers is highly impractical because of their large parameter sizes and intensive computational requirements. While lightweight Transformer models, such as MobileViT, offer a promising solution to meet storage and computational limitations, their robustness remains insufficient. This poses a significant security risk for AI applications, particularly in critical edge environments. To address this challenge, our research focuses on enhancing the robustness of lightweight Transformer models under resource-constrained conditions. First, we propose a comprehensive robustness evaluation framework tailored for lightweight Transformer inference. This framework assesses model robustness across three key dimensions: noise robustness, distributional robustness, and adversarial robustness. It further investigates how model size and hardware limitations affect robustness, thereby providing valuable insights for robustness-aware model design. Second, we introduce a novel adversarial robustness enhancement strategy that integrates lightweight modeling techniques. This approach leverages methods such as gradient clipping and layer-wise unfreezing, as well as decision boundary optimization techniques like TRADES and SMART. Together, these strategies effectively address challenges related to training instability and decision boundary smoothness, significantly improving model robustness. Finally, we deploy the robust lightweight Transformer models in real-world resource-constrained environments and empirically validate their inference robustness. The results confirm the effectiveness of our proposed methods in enhancing the robustness and reliability of lightweight Transformers for edge AI applications. Full article
(This article belongs to the Section Mathematics)
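Two of the training-stability techniques named in this abstract, gradient clipping and layer-wise unfreezing, are easy to sketch; the clip norm and the unfreezing schedule below are illustrative choices rather than the paper's exact recipe.

```python
# Sketch of gradient clipping + layer-wise unfreezing for robust fine-tuning
# of a generic lightweight backbone (illustrative schedule and clip norm).
import torch
import torch.nn as nn

def set_trainable(model, num_unfrozen_blocks):
    """Freeze everything, then unfreeze only the last `num_unfrozen_blocks` child modules."""
    blocks = list(model.children())
    for p in model.parameters():
        p.requires_grad = False
    for block in blocks[-num_unfrozen_blocks:]:
        for p in block.parameters():
            p.requires_grad = True

def train_one_epoch(model, loader, optimizer, criterion, clip_norm=1.0):
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        # Gradient clipping keeps robust/adversarial fine-tuning from diverging.
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
        optimizer.step()

# Progressive unfreezing: start with the head only, then expose deeper blocks each stage, e.g.
# for n_blocks in (1, 2, 4):
#     set_trainable(model, n_blocks)
#     train_one_epoch(model, loader, optimizer, criterion)
```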

33 pages, 2299 KiB  
Review
Edge Intelligence in Urban Landscapes: Reviewing TinyML Applications for Connected and Sustainable Smart Cities
by Athanasios Trigkas, Dimitrios Piromalis and Panagiotis Papageorgas
Electronics 2025, 14(14), 2890; https://doi.org/10.3390/electronics14142890 - 19 Jul 2025
Viewed by 448
Abstract
Tiny Machine Learning (TinyML) extends edge AI capabilities to resource-constrained devices, offering a promising solution for real-time, low-power intelligence in smart cities. This review systematically analyzes 66 peer-reviewed studies from 2019 to 2024, covering applications across urban mobility, environmental monitoring, public safety, waste management, and infrastructure health. We examine hardware platforms and machine learning models, with particular attention to power-efficient deployment and data privacy. We review the approaches employed in published studies for deploying machine learning models on resource-constrained hardware, emphasizing the most commonly used communication technologies—while noting the limited uptake of low-power options such as Low Power Wide Area Networks (LPWANs). We also discuss hardware–software co-design strategies that enable sustainable operation. Furthermore, we evaluate the alignment of these deployments with the United Nations Sustainable Development Goals (SDGs), highlighting both their contributions and existing gaps in current practices. This review identifies recurring technical patterns, methodological challenges, and underexplored opportunities, particularly in the areas of hardware provisioning, usage of inherent privacy benefits in relevant applications, communication technologies, and dataset practices, offering a roadmap for future TinyML research and deployment in smart urban systems. Among the 66 studies examined, 29 focused on mobility and transportation, 17 on public safety, 10 on environmental sensing, 6 on waste management, and 4 on infrastructure monitoring. TinyML was deployed on constrained microcontrollers in 32 studies, while 36 used optimized models for resource-limited environments. Energy harvesting, primarily solar, was featured in 6 studies, and low-power communication networks were used in 5. Public datasets were used in 27 studies, custom datasets in 24, and the remainder relied on hybrid or simulated data. Only one study explicitly referenced SDGs, and 13 studies considered privacy in their system design. Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)

10 pages, 915 KiB  
Article
Power Estimation and Energy Efficiency of AI Accelerators on Embedded Systems
by Minseon Kang and Moonju Park
Energies 2025, 18(14), 3840; https://doi.org/10.3390/en18143840 - 19 Jul 2025
Viewed by 349
Abstract
The rapid expansion of IoT devices poses new challenges for AI-driven services, particularly in terms of energy consumption. Although cloud-based AI processing has been the dominant approach, its high energy consumption calls for more energy-efficient alternatives. Edge computing offers an approach for reducing both latency and energy consumption. In this paper, we propose a methodology for estimating the power consumption of AI accelerators on an embedded edge device. Through experimental evaluations involving GPU- and Edge TPU-based platforms, the proposed method demonstrated estimation errors below 8%. The estimation errors were partly due to unaccounted power consumption from main memory and storage access. The proposed approach provides a foundation for more reliable energy management in AI-powered edge computing systems. Full article
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 4th Edition)
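The quantities being estimated reduce to simple arithmetic: integrate sampled power over an inference to get energy, then invert it for inferences per joule. The sample values in the sketch below are invented; the paper's estimator additionally attributes power to accelerator activity rather than just summing board measurements.

```python
# Back-of-the-envelope sketch: energy per inference from sampled power, and inferences/J.
power_samples_w = [3.1, 4.8, 5.2, 5.0, 3.2]   # measured board power (W), assumed values
sample_period_s = 0.01                        # 10 ms sampling interval (assumed)

energy_j = sum(p * sample_period_s for p in power_samples_w)   # E = sum(P * dt)
inferences_per_joule = 1.0 / energy_j

print(f"energy per inference: {energy_j:.3f} J, efficiency: {inferences_per_joule:.2f} inf/J")
```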

17 pages, 2533 KiB  
Article
Oscillator-Based Processing Unit for Formant Recognition
by Tamás Rudner-Halász, Wolfgang Porod and Gyorgy Csaba
Information 2025, 16(7), 611; https://doi.org/10.3390/info16070611 - 16 Jul 2025
Viewed by 183
Abstract
Oscillatory neural networks have so far been successfully applied to a number of computing problems, such as associative memories, or to handle computationally hard tasks. In this paper, we show how to use oscillators to process time-dependent waveforms with minimal or no preprocessing. Since preprocessing and first-layer processing are often the most power-hungry steps in neural networks, our findings may open new doors to simple and power-efficient edge-AI devices. Full article
(This article belongs to the Special Issue Neuromorphic Engineering and Machine Learning)

20 pages, 1202 KiB  
Article
Enhanced Collaborative Edge Intelligence for Explainable and Transferable Image Recognition in 6G-Aided IIoT
by Chen Chen, Ze Sun, Jiale Zhang, Junwei Dong, Peng Zhang and Jie Guo
Sensors 2025, 25(14), 4365; https://doi.org/10.3390/s25144365 - 12 Jul 2025
Viewed by 283
Abstract
The Industrial Internet of Things (IIoT) has revolutionized industry through interconnected devices and intelligent applications. Leveraging the advancements in sixth-generation cellular networks (6G), the 6G-aided IIoT has demonstrated a superior performance across applications requiring low latency and high reliability, with image recognition being among the most pivotal. However, the existing algorithms often neglect the explainability of image recognition processes and fail to address the collaborative potential between edge computing servers. This paper proposes a novel method, IRCE (Intelligent Recognition with Collaborative Edges), designed to enhance the explainability and transferability in 6G-aided IIoT image recognition. By incorporating an explainable layer into the feature extraction network, IRCE provides visual prototypes that elucidate decision-making processes, fostering greater transparency and trust in the system. Furthermore, the integration of the local maximum mean discrepancy (LMMD) loss facilitates seamless transfer learning across geographically distributed edge servers, enabling effective domain adaptation and collaborative intelligence. IRCE leverages edge intelligence to optimize real-time performance while reducing computational costs and enhancing scalability. Extensive simulations demonstrate the superior accuracy, explainability, and adaptability of IRCE compared to those of the traditional methods. Moreover, its ability to operate efficiently in diverse environments highlights its potential for critical industrial applications such as smart manufacturing, remote diagnostics, and intelligent transportation systems. The proposed approach represents a significant step forward in achieving scalable, explainable, and transferable AI solutions for IIoT ecosystems. Full article
(This article belongs to the Section Internet of Things)
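The domain-adaptation term can be approximated by the sketch below, which computes a plain Gaussian-kernel MMD between source and target feature batches; the paper's LMMD additionally weights the kernels by (pseudo-)class membership, which is omitted here, and the feature dimensions are illustrative.

```python
# Simplified MMD sketch (LMMD's class weighting omitted) for edge-to-edge transfer learning.
import torch

def gaussian_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b) ** 2              # pairwise squared distances
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st            # small when the two feature distributions match

src = torch.randn(32, 128)   # features from the source edge server (illustrative)
tgt = torch.randn(32, 128)   # features from a geographically distant edge server
print(mmd_loss(src, tgt).item())  # would be added to the classification loss during transfer
```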

21 pages, 2063 KiB  
Article
Designing a Generalist Education AI Framework for Multimodal Learning and Ethical Data Governance
by Yuyang Yan, Hui Liu, Helen Zhang, Toby Chau and Jiahui Li
Appl. Sci. 2025, 15(14), 7758; https://doi.org/10.3390/app15147758 - 10 Jul 2025
Viewed by 516
Abstract
The integration of artificial intelligence (AI) into education requires frameworks that are not only technically robust but also ethically and pedagogically grounded. This paper proposes the Generalist Education Artificial Intelligence (GEAI) framework—a conceptual blueprint designed to enable privacy-preserving, personalized, and multimodal AI-supported learning in educational contexts. GEAI features a Trusted Domain architecture that supports secure, voluntary multimodal data collection via multimedia registration devices (MM Devices), edge-based AI inference, and institutional data sovereignty. Drawing on principles from constructivist pedagogy and regulatory standards such as GDPR and FERPA, GEAI supports adaptive feedback, engagement monitoring, and learner-centered interaction while addressing key challenges in ethical data governance, transparency, and accountability. To bridge theory and application, we outline a staged validation roadmap informed by technical feasibility assessments and stakeholder input. This roadmap lays the foundation for future prototyping and responsible deployment in real-world educational settings, positioning GEAI as a forward-looking contribution to both AI system design and education policy alignment. Full article
(This article belongs to the Special Issue ICT in Education, 2nd Edition)

17 pages, 7292 KiB  
Article
QP-Adaptive Dual-Path Residual Integrated Frequency Transformer for Data-Driven In-Loop Filter in VVC
by Cheng-Hsuan Yeh, Chi-Ting Ni, Kuan-Yu Huang, Zheng-Wei Wu, Cheng-Pin Peng and Pei-Yin Chen
Sensors 2025, 25(13), 4234; https://doi.org/10.3390/s25134234 - 7 Jul 2025
Viewed by 365
Abstract
As AI-enabled embedded systems such as smart TVs and edge devices demand efficient video processing, Versatile Video Coding (VVC/H.266) becomes essential for bandwidth-constrained Multimedia Internet of Things (M-IoT) applications. However, its block-based coding often introduces compression artifacts. While CNN-based methods effectively reduce these artifacts, maintaining robust performance across varying quantization parameters (QPs) remains challenging. Recent QP-adaptive designs like QA-Filter show promise but are still limited. This paper proposes DRIFT, a QP-adaptive in-loop filtering network for VVC. DRIFT combines a lightweight frequency fusion CNN (LFFCNN) for local enhancement and a Swin Transformer-based global skip connection for capturing long-range dependencies. LFFCNN leverages octave convolution and introduces a novel residual block (FFRB) that integrates multiscale extraction, QP adaptivity, frequency fusion, and spatial-channel attention. A QP estimator (QPE) is further introduced to mitigate double enhancement in inter-coded frames. Experimental results demonstrate that DRIFT achieves BD rate reductions of 6.56% (intra) and 4.83% (inter), with an up to 10.90% gain on the BasketballDrill sequence. Additionally, LFFCNN reduces the model size by 32% while slightly improving the coding performance over QA-Filter. Full article
(This article belongs to the Special Issue Multimodal Sensing Technologies for IoT and AI-Enabled Systems)

19 pages, 1891 KiB  
Article
Comparative Study on Energy Consumption of Neural Networks by Scaling of Weight-Memory Energy Versus Computing Energy for Implementing Low-Power Edge Intelligence
by Ilpyung Yoon, Jihwan Mun and Kyeong-Sik Min
Electronics 2025, 14(13), 2718; https://doi.org/10.3390/electronics14132718 - 5 Jul 2025
Cited by 1 | Viewed by 578
Abstract
Energy consumption has emerged as a critical design constraint in deploying high-performance neural networks, especially on edge devices with limited power resources. In this paper, a comparative study is conducted for two prevalent deep learning paradigms—convolutional neural networks (CNNs), exemplified by ResNet18, and transformer-based large language models (LLMs), represented by GPT3-small, Llama-7B, and GPT3-175B. By analyzing how the scaling of memory energy versus computing energy affects the energy consumption of neural networks with different batch sizes (1, 4, 8, 16), it is shown that ResNet18 transitions from a memory energy-limited regime at low batch sizes to a computing energy-limited regime at higher batch sizes due to its extensive convolution operations. On the other hand, GPT-like models remain predominantly memory-bound, with large parameter tensors and frequent key–value (KV) cache lookups accounting for most of the total energy usage. Our results reveal that reducing weight-memory energy is particularly effective in transformer architectures, while improving multiply–accumulate (MAC) efficiency significantly benefits CNNs at higher workloads. We further highlight near-memory and in-memory computing approaches as promising strategies to lower data-transfer costs and enhance power efficiency in large-scale deployments. These findings offer actionable insights for architects and system designers aiming to optimize artificial intelligence (AI) performance under stringent energy budgets on battery-powered edge devices. Full article
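The memory-bound versus compute-bound transition described here follows from a simple accounting: weight-memory energy is paid roughly once per batch, while MAC energy scales with the number of samples. The toy model below uses invented energy numbers purely to show how the per-sample cost shifts regimes as the batch grows.

```python
# Toy accounting: per-batch energy = weight-memory reads (once per batch) + MAC energy (per sample).
WEIGHT_READ_ENERGY_J = 2.0e-3     # energy to stream the weights from memory once (assumed)
MAC_ENERGY_PER_SAMPLE_J = 0.4e-3  # compute energy for one sample's forward pass (assumed)

for batch in (1, 4, 8, 16):
    memory_j = WEIGHT_READ_ENERGY_J                  # weights are reused across the batch
    compute_j = MAC_ENERGY_PER_SAMPLE_J * batch      # MACs scale with the number of samples
    per_sample = (memory_j + compute_j) / batch
    regime = "memory-bound" if memory_j > compute_j else "compute-bound"
    print(f"batch={batch:2d}  {per_sample * 1e3:.3f} mJ/sample  ({regime})")
```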
