Search Results (1,870)

Search Parameters:
Keywords = edge intelligence

24 pages, 17190 KiB  
Review
Empowering Smart Soybean Farming with Deep Learning: Progress, Challenges, and Future Perspectives
by Huihui Sun, Hao-Qi Chu, Yi-Ming Qin, Pingfan Hu and Rui-Feng Wang
Agronomy 2025, 15(8), 1831; https://doi.org/10.3390/agronomy15081831 - 28 Jul 2025
Abstract
This review comprehensively examines the application of deep learning technologies across the entire soybean production chain, encompassing areas such as disease and pest identification, weed detection, crop phenotype recognition, yield prediction, and intelligent operations. By systematically analyzing mainstream deep learning models, optimization strategies (e.g., model lightweighting, transfer learning), and sensor data fusion techniques, the review identifies their roles and performances in complex agricultural environments. It also highlights key challenges including data quality limitations, difficulties in real-world deployment, and the lack of standardized evaluation benchmarks. In response, promising directions such as reinforcement learning, self-supervised learning, interpretable AI, and multi-source data fusion are proposed. Specifically for soybean automation, future advancements are expected in areas such as high-precision disease and weed localization, real-time decision-making for variable-rate spraying and harvesting, and the integration of deep learning with robotics and edge computing to enable autonomous field operations. This review provides valuable insights and future prospects for promoting intelligent, efficient, and sustainable development in soybean production through deep learning. Full article
(This article belongs to the Section Precision and Digital Agriculture)
17 pages, 1009 KiB  
Article
Binary-Weighted Neural Networks Using FeRAM Array for Low-Power AI Computing
by Seung-Myeong Cho, Jaesung Lee, Hyejin Jo, Dai Yun, Jihwan Moon and Kyeong-Sik Min
Nanomaterials 2025, 15(15), 1166; https://doi.org/10.3390/nano15151166 - 28 Jul 2025
Abstract
Artificial intelligence (AI) has become ubiquitous in modern computing systems, from high-performance data centers to resource-constrained edge devices. As AI applications continue to expand into mobile and IoT domains, the need for energy-efficient neural network implementations has become increasingly critical. To meet this requirement of energy-efficient computing, this work presents a BWNN (binary-weighted neural network) architecture implemented using FeRAM (Ferroelectric RAM)-based synaptic arrays. By leveraging the non-volatile nature and low-power computing of FeRAM-based CIM (computing in memory), the proposed CIM architecture indicates significant reductions in both dynamic and standby power consumption. Simulation results in this paper demonstrate that scaling the ferroelectric capacitor size can reduce dynamic power by up to 6.5%, while eliminating DRAM-like refresh cycles allows standby power to drop by over 258× under typical conditions. Furthermore, the combination of binary weight quantization and in-memory computing enables energy-efficient inference without significant loss in recognition accuracy, as validated using MNIST datasets. Compared to prior CIM architectures of SRAM-CIM, DRAM-CIM, and STT-MRAM-CIM, the proposed FeRAM-CIM exhibits superior energy efficiency, achieving 230–580 TOPS/W in a 45 nm process. These results highlight the potential of FeRAM-based BWNNs as a compelling solution for edge-AI and IoT applications where energy constraints are critical. Full article
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)
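As a rough illustration of the binary-weight idea described in this abstract (not the paper's FeRAM circuit or simulation setup), the sketch below quantizes a real-valued weight matrix to ±1 with a per-layer scale and runs one inference layer in NumPy; the layer shape and scaling rule are assumptions chosen only for illustration.

```python
import numpy as np

def binarize(w):
    # Quantize real-valued weights to {-1, +1}; keep a per-layer scale (mean |w|)
    # so the binary layer roughly approximates the full-precision one.
    scale = np.abs(w).mean()
    return np.where(w >= 0, 1.0, -1.0), scale

def bwnn_layer(x, w_real):
    w_bin, alpha = binarize(w_real)
    # With +/-1 weights the multiply-accumulate reduces to signed additions,
    # the kind of operation an in-memory crossbar can evaluate in place.
    return alpha * (x @ w_bin)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 784))           # e.g., one flattened MNIST digit
w = rng.standard_normal((784, 10)) * 0.1    # assumed layer shape for illustration
print(bwnn_layer(x, w).shape)               # -> (1, 10) class scores
```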
25 pages, 1339 KiB  
Article
Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control
by Daniel Poul Mtowe, Lika Long and Dong Min Kim
Sensors 2025, 25(15), 4666; https://doi.org/10.3390/s25154666 - 28 Jul 2025
Abstract
This paper proposes a low-latency and scalable architecture for Edge-Enabled Digital Twin networked control systems (E-DTNCS) aimed at multi-robot collision avoidance and remote control in dynamic and latency-sensitive environments. Traditional approaches, which rely on centralized cloud processing or direct sensor-to-controller communication, are inherently limited by excessive network latency, bandwidth bottlenecks, and a lack of predictive decision-making, thus constraining their effectiveness in real-time multi-agent systems. To overcome these limitations, we propose a novel framework that seamlessly integrates edge computing with digital twin (DT) technology. By performing localized preprocessing at the edge, the system extracts semantically rich features from raw sensor data streams, reducing the transmission overhead of the original data. This shift from raw data to feature-based communication significantly alleviates network congestion and enhances system responsiveness. The DT layer leverages these extracted features to maintain high-fidelity synchronization with physical robots and to execute predictive models for proactive collision avoidance. To empirically validate the framework, a real-world testbed was developed, and extensive experiments were conducted with multiple mobile robots. The results revealed a substantial reduction in collision rates when DT was deployed, and further improvements were observed with E-DTNCS integration due to significantly reduced latency. These findings confirm the system’s enhanced responsiveness and its effectiveness in handling real-time control tasks. The proposed framework demonstrates the potential of combining edge intelligence with DT-driven control in advancing the reliability, scalability, and real-time performance of multi-robot systems for industrial automation and mission-critical cyber-physical applications. Full article
(This article belongs to the Section Internet of Things)
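To make the raw-data-versus-feature trade-off mentioned above concrete, here is a minimal, hypothetical sketch (not the paper's pipeline): an edge node reduces a camera frame to a few blob features and serializes only those, which is orders of magnitude smaller than transmitting the raw frame to the digital twin.

```python
import json
import numpy as np

def extract_features(frame, thresh=200):
    # Edge-side preprocessing: collapse a raw frame into a compact feature record
    # (centroid and area of bright pixels standing in for a detected robot).
    ys, xs = np.nonzero(frame > thresh)
    if xs.size == 0:
        return {"detections": 0}
    return {"detections": 1, "cx": float(xs.mean()), "cy": float(ys.mean()), "area": int(xs.size)}

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:140, 200:260] = 255                        # synthetic bright blob
payload = json.dumps(extract_features(frame)).encode()
print(frame.nbytes, "bytes raw vs", len(payload), "bytes of features sent to the digital twin")
```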
33 pages, 941 KiB  
Article
Scratching the Surface of Responsible AI in Financial Services: A Qualitative Study on Non-Technical Challenges and the Role of Corporate Digital Responsibility
by Antonis Skouloudis and Archana Venkatraman
AI 2025, 6(8), 169; https://doi.org/10.3390/ai6080169 - 28 Jul 2025
Abstract
Artificial Intelligence (AI) and Generative AI are transformative yet double-edged technologies with evolving risks. While research emphasises trustworthy, fair, and responsible AI by focusing on its “what” and “why,” it overlooks practical “how.” To bridge this gap in financial services, an industry at the forefront of AI adoption, this study employs a qualitative approach grounded in existing Responsible AI and Corporate Digital Responsibility (CDR) frameworks. Through thematic analysis of 15 semi-structured interviews conducted with professionals working in finance, we illuminate nine non-technical barriers that practitioners face, such as sustainability challenges, trade-off balancing, stakeholder management, and human interaction, noting that GenAI concerns now eclipse general AI issues. CDR practitioners adopt a more human-centric stance, emphasising consensus-building and “no margin for error.” Our findings offer actionable guidance for more responsible AI strategies and enrich academic debates on Responsible AI and AI-CDR symbiosis. Full article
21 pages, 4738 KiB  
Article
Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space–Air–Marine Network
by Haixiang Gao
Entropy 2025, 27(8), 803; https://doi.org/10.3390/e27080803 - 28 Jul 2025
Abstract
This paper investigates the problem of computation offloading and resource allocation in an integrated space–air–sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud–edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15–60% specifically. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
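The centralized-training/decentralized-execution structure mentioned in the abstract can be sketched as follows: a generic MADDPG skeleton in PyTorch, not the authors' implementation, with placeholder network sizes and dimensions. Each agent's actor sees only its own observation, while a shared critic scores the joint observation-action vector during training.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    # Decentralized execution: each agent maps only its own observation to an action.
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    # Centralized training: the critic scores the joint observations and actions of all agents.
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        joint = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

n_agents, obs_dim, act_dim = 3, 8, 2               # placeholder sizes
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)
obs = torch.randn(n_agents, obs_dim)
acts = torch.stack([a(o) for a, o in zip(actors, obs)])
print(critic(obs.flatten().unsqueeze(0), acts.flatten().unsqueeze(0)))   # joint Q-value
```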
51 pages, 4494 KiB  
Review
A Survey of Loss Functions in Deep Learning
by Caiyi Li, Kaishuai Liu and Shuai Liu
Mathematics 2025, 13(15), 2417; https://doi.org/10.3390/math13152417 - 27 Jul 2025
Abstract
Deep learning (DL), as a cutting-edge technology in artificial intelligence, has significantly impacted fields such as computer vision and natural language processing. Loss function determines the convergence speed and accuracy of the DL model and has a crucial impact on algorithm quality and model performance. However, most of the existing studies focus on the improvement of specific problems of loss function, which lack a systematic summary and comparison, especially in computer vision and natural language processing tasks. Therefore, this paper reclassifies and summarizes the loss functions in DL and proposes a new category of metric loss. Furthermore, this paper conducts a fine-grained division of regression loss, classification loss, and metric loss, elaborating on the existing problems and improvements. Finally, the new trend of compound loss and generative loss is anticipated. The proposed paper provides a new perspective for loss function division and a systematic reference for researchers in the DL field. Full article
(This article belongs to the Special Issue Advances in Applied Mathematics in Computer Vision)
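For readers who want the survey's three headline categories side by side, here is a minimal NumPy sketch of one representative of each: a regression loss (MSE), a classification loss (cross-entropy), and a metric loss (triplet). The survey covers many refinements beyond these baseline forms.

```python
import numpy as np

def mse(y_true, y_pred):
    # Regression loss: mean squared error.
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(one_hot, probs, eps=1e-12):
    # Classification loss: averaged over a batch of one-hot targets and predicted probabilities.
    return -np.mean(np.sum(one_hot * np.log(probs + eps), axis=-1))

def triplet(anchor, positive, negative, margin=1.0):
    # Metric loss: pull the positive closer than the negative by at least `margin`.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

print(mse(np.array([1.0, 2.0]), np.array([1.1, 1.8])))
print(cross_entropy(np.eye(3)[[0, 2]], np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])))
print(triplet(np.zeros(4), np.full(4, 0.1), np.full(4, 2.0)))
```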
21 pages, 3448 KiB  
Article
A Welding Defect Detection Model Based on Hybrid-Enhanced Multi-Granularity Spatiotemporal Representation Learning
by Chenbo Shi, Shaojia Yan, Lei Wang, Changsheng Zhu, Yue Yu, Xiangteng Zang, Aiping Liu, Chun Zhang and Xiaobing Feng
Sensors 2025, 25(15), 4656; https://doi.org/10.3390/s25154656 - 27 Jul 2025
Abstract
Real-time quality monitoring using molten pool images is a critical focus in researching high-quality, intelligent automated welding. To address interference problems in molten pool images under complex welding scenarios (e.g., reflected laser spots from spatter misclassified as porosity defects) and the limited interpretability of deep learning models, this paper proposes a multi-granularity spatiotemporal representation learning algorithm based on the hybrid enhancement of handcrafted and deep learning features. A MobileNetV2 backbone network integrated with a Temporal Shift Module (TSM) is designed to progressively capture the short-term dynamic features of the molten pool and integrate temporal information across both low-level and high-level features. A multi-granularity attention-based feature aggregation module is developed to select key interference-free frames using cross-frame attention, generate multi-granularity features via grouped pooling, and apply the Convolutional Block Attention Module (CBAM) at each granularity level. Finally, these multi-granularity spatiotemporal features are adaptively fused. Meanwhile, an independent branch utilizes the Histogram of Oriented Gradient (HOG) and Scale-Invariant Feature Transform (SIFT) features to extract long-term spatial structural information from historical edge images, enhancing the model’s interpretability. The proposed method achieves an accuracy of 99.187% on a self-constructed dataset. Additionally, it attains a real-time inference speed of 20.983 ms per sample on a hardware platform equipped with an Intel i9-12900H CPU and an RTX 3060 GPU, thus effectively balancing accuracy, speed, and interpretability. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
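The Temporal Shift Module mentioned in this abstract has a very compact standard form; the PyTorch sketch below follows the usual TSM formulation (shift one-eighth of the channels forward and one-eighth backward along the time axis), which may differ in detail from the variant integrated into the authors' MobileNetV2 backbone.

```python
import torch

def temporal_shift(x, n_segments, fold_div=8):
    # x: (N*T, C, H, W) frame features; shift a fraction of channels across neighbouring
    # frames so a 2D backbone can mix short-term temporal information at zero extra FLOPs.
    nt, c, h, w = x.shape
    x = x.view(nt // n_segments, n_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # leave remaining channels untouched
    return out.view(nt, c, h, w)

frames = torch.randn(2 * 8, 32, 28, 28)   # 2 clips x 8 molten-pool frames (placeholder sizes)
print(temporal_shift(frames, n_segments=8).shape)
```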
17 pages, 1850 KiB  
Article
Cloud–Edge Collaborative Model Adaptation Based on Deep Q-Network and Transfer Feature Extraction
by Jue Chen, Xin Cheng, Yanjie Jia and Shuai Tan
Appl. Sci. 2025, 15(15), 8335; https://doi.org/10.3390/app15158335 - 26 Jul 2025
Abstract
With the rapid development of smart devices and the Internet of Things (IoT), the explosive growth of data has placed increasingly higher demands on real-time processing and intelligent decision making. Cloud-edge collaborative computing has emerged as a mainstream architecture to address these challenges. However, in sky-ground integrated systems, the limited computing capacity of edge devices and the inconsistency between cloud-side fusion results and edge-side detection outputs significantly undermine the reliability of edge inference. To overcome these issues, this paper proposes a cloud-edge collaborative model adaptation framework that integrates deep reinforcement learning via Deep Q-Networks (DQN) with local feature transfer. The framework enables category-level dynamic decision making, allowing for selective migration of classification head parameters to achieve on-demand adaptive optimization of the edge model and enhance consistency between cloud and edge results. Extensive experiments conducted on a large-scale multi-view remote sensing aircraft detection dataset demonstrate that the proposed method significantly improves cloud-edge consistency. The detection consistency rate reaches 90%, with some scenarios approaching 100%. Ablation studies further validate the necessity of the DQN-based decision strategy, which clearly outperforms static heuristics. In the model adaptation comparison, the proposed method improves the detection precision of the A321 category from 70.30% to 71.00% and the average precision (AP) from 53.66% to 53.71%. For the A330 category, the precision increases from 32.26% to 39.62%, indicating strong adaptability across different target types. This study offers a novel and effective solution for cloud-edge model adaptation under resource-constrained conditions, enhancing both the consistency of cloud-edge fusion and the robustness of edge-side intelligent inference. Full article
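As an illustration of the category-level decision step, here is a generic DQN sketch in PyTorch with made-up state features and action meanings (not the paper's network): the agent observes per-category statistics and chooses, epsilon-greedily, whether to migrate that category's classification-head parameters from the cloud model to the edge model.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    # Q-values over two actions per category: 0 = keep the edge head, 1 = migrate the cloud head.
    def __init__(self, state_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
    def forward(self, state):
        return self.net(state)

def select_action(qnet, state, eps=0.1):
    # Epsilon-greedy exploration, the standard action-selection rule while training a DQN.
    if torch.rand(1).item() < eps:
        return torch.randint(0, 2, (1,)).item()
    with torch.no_grad():
        return int(qnet(state).argmax(dim=-1).item())

qnet = QNet()
state = torch.tensor([[0.70, 0.54, 0.85, 0.30]])  # e.g., precision, AP, cloud-edge consistency, edge load
action = select_action(qnet, state)
print("migrate classification head" if action == 1 else "keep edge head")
```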
21 pages, 2514 KiB  
Article
Investigations into Picture Defogging Techniques Based on Dark Channel Prior and Retinex Theory
by Lihong Yang, Zhi Zeng, Hang Ge, Yao Li, Shurui Ge and Kai Hu
Appl. Sci. 2025, 15(15), 8319; https://doi.org/10.3390/app15158319 - 26 Jul 2025
Abstract
To address the concerns of contrast deterioration, detail loss, and color distortion in images produced under haze conditions in scenarios such as intelligent driving and remote sensing detection, an algorithm for image defogging that combines Retinex theory and the dark channel prior is proposed in this paper. The method involves building a two-stage optimization framework: in the first stage, global contrast enhancement is achieved by Retinex preprocessing, which effectively improves the detail information regarding the dark area and the accuracy of the transmittance map and atmospheric light intensity estimation; in the second stage, an a priori compensation model for the dark channel is constructed, and a depth-map-guided transmittance correction mechanism is introduced to obtain a refined transmittance map. At the same time, the atmospheric light intensity is accurately calculated by the Otsu algorithm and edge constraints, which effectively suppresses the halo artifacts and color deviation of the sky region in the dark channel a priori defogging algorithm. The experiments based on self-collected data and public datasets show that the algorithm in this paper presents better detail preservation ability (the visible edge ratio is minimally improved by 0.1305) and color reproduction (the saturated pixel ratio is reduced to about 0) in the subjective evaluation, and the average gradient ratio of the objective indexes reaches a maximum value of 3.8009, which is improved by 36–56% compared with the classical DCP and Tarel algorithms. The method provides a robust image defogging solution for computer vision systems under complex meteorological conditions. Full article
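The dark-channel-prior stage of such a pipeline is compact enough to sketch. The NumPy/SciPy code below is the textbook single-image formulation (patch-minimum dark channel, brightest-pixel atmospheric light, transmission t = 1 − ω·dark, radiance recovery), without the paper's Retinex preprocessing, Otsu-based light estimation, or depth-guided transmittance refinement.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Minimum over colour channels followed by a local minimum filter.
    return minimum_filter(img.min(axis=2), size=patch)

def defog(img, omega=0.95, t0=0.1, patch=15):
    dark = dark_channel(img, patch)
    # Atmospheric light: average colour of the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A, patch)          # transmission estimate
    return (img - A) / np.maximum(t, t0)[..., None] + A     # recovered scene radiance

hazy = np.random.rand(120, 160, 3)   # stand-in for a hazy RGB frame scaled to [0, 1]
print(defog(hazy).shape)
```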
19 pages, 290 KiB  
Article
Artificial Intelligence in Primary Care: Support or Additional Burden on Physicians’ Healthcare Work?—A Qualitative Study
by Stefanie Mache, Monika Bernburg, Annika Würtenberger and David A. Groneberg
Clin. Pract. 2025, 15(8), 138; https://doi.org/10.3390/clinpract15080138 - 25 Jul 2025
Abstract
Background: Artificial intelligence (AI) is being increasingly promoted as a means to enhance diagnostic accuracy, to streamline workflows, and to improve overall care quality in primary care. However, empirical evidence on how primary care physicians (PCPs) perceive, engage with, and emotionally respond to AI technologies in everyday clinical settings remains limited. Concerns persist regarding AI’s usability, transparency, and potential impact on professional identity, workload, and the physician–patient relationship. Methods: This qualitative study investigated the lived experiences and perceptions of 28 PCPs practicing in diverse outpatient settings across Germany. Participants were purposively sampled to ensure variation in age, practice characteristics, and digital proficiency. Data were collected through in-depth, semi-structured interviews, which were audio-recorded, transcribed verbatim, and subjected to rigorous thematic analysis employing Mayring’s qualitative content analysis framework. Results: Participants demonstrated a fundamentally ambivalent stance toward AI integration in primary care. Perceived advantages included enhanced diagnostic support, relief from administrative burdens, and facilitation of preventive care. Conversely, physicians reported concerns about workflow disruption due to excessive system prompts, lack of algorithmic transparency, increased cognitive and emotional strain, and perceived threats to clinical autonomy and accountability. The implications for the physician–patient relationship were seen as double-edged: while some believed AI could foster trust through transparent use, others feared depersonalization of care. Crucial prerequisites for successful implementation included transparent and explainable systems, structured training opportunities, clinician involvement in design processes, and seamless integration into clinical routines. Conclusions: Primary care physicians’ engagement with AI is marked by cautious optimism, shaped by both perceived utility and significant concerns. Effective and ethically sound implementation requires co-design approaches that embed clinical expertise, ensure algorithmic transparency, and align AI applications with the realities of primary care workflows. Moreover, foundational AI literacy should be incorporated into undergraduate health professional curricula to equip future clinicians with the competencies necessary for responsible and confident use. These strategies are essential to safeguard professional integrity, support clinician well-being, and maintain the humanistic core of primary care. Full article
20 pages, 766 KiB  
Article
Accelerating Deep Learning Inference: A Comparative Analysis of Modern Acceleration Frameworks
by Ishrak Jahan Ratul, Yuxiao Zhou and Kecheng Yang
Electronics 2025, 14(15), 2977; https://doi.org/10.3390/electronics14152977 - 25 Jul 2025
Abstract
Deep learning (DL) continues to play a pivotal role in a wide range of intelligent systems, including autonomous machines, smart surveillance, industrial automation, and portable healthcare technologies. These applications often demand low-latency inference and efficient resource utilization, especially when deployed on embedded or edge devices with limited computational capacity. As DL models become increasingly complex, selecting the right inference framework is essential to meeting performance and deployment goals. In this work, we conduct a comprehensive comparison of five widely adopted inference frameworks: PyTorch, ONNX Runtime, TensorRT, Apache TVM, and JAX. All experiments are performed on the NVIDIA Jetson AGX Orin platform, a high-performance computing solution tailored for edge artificial intelligence workloads. The evaluation considers several key performance metrics, including inference accuracy, inference time, throughput, memory usage, and power consumption. Each framework is tested using a wide range of convolutional and transformer models and analyzed in terms of deployment complexity, runtime efficiency, and hardware utilization. Our results show that certain frameworks offer superior inference speed and throughput, while others provide advantages in flexibility, portability, or ease of integration. We also observe meaningful differences in how each framework manages system memory and power under various load conditions. This study offers practical insights into the trade-offs associated with deploying DL inference on resource-constrained hardware. Full article
(This article belongs to the Special Issue Hardware Acceleration for Machine Learning)
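A stripped-down version of this kind of comparison can be reproduced in a few lines. The sketch below times one model (a torchvision ResNet-18, chosen purely for illustration) in eager PyTorch and in ONNX Runtime on whatever hardware is available; it is a measurement harness under assumed settings, not the paper's full benchmark suite.

```python
import time
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

# Export once, then load the same graph with ONNX Runtime.
torch.onnx.export(model, x, "resnet18.onnx", input_names=["input"], output_names=["output"])
sess = ort.InferenceSession("resnet18.onnx")

def bench(fn, n=50):
    fn()                                            # warm-up run
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1e3     # mean latency in milliseconds

with torch.no_grad():
    print(f"PyTorch eager: {bench(lambda: model(x)):.2f} ms")
print(f"ONNX Runtime : {bench(lambda: sess.run(None, {'input': x.numpy()})):.2f} ms")
```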
37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The reviewed studies therefore converge on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
10 pages, 460 KiB  
Article
Industry 5.0 and Digital Twins in the Chemical Industry: An Approach to the Golden Batch Concept
by Andrés Redchuk and Federico Walas Mateo
ChemEngineering 2025, 9(4), 78; https://doi.org/10.3390/chemengineering9040078 - 25 Jul 2025
Abstract
In the context of industrial digitalization, the Industry 5.0 paradigm introduces digital twins as a cutting-edge solution. This study explores the concept of digital twins and their integration with the Industrial Internet of Things (IIoT), offering insights into how these technologies bring intelligence to industrial settings to drive both process optimization and sustainability. Industrial digitalization connects products and processes, boosting the productivity and efficiency of people, facilities, and equipment. These advancements are expected to yield broad economic and environmental benefits. As connected systems continuously generate data, this information becomes a vital asset, but also introduces new challenges for industrial operations. The work presented in this article aims to demonstrate the possibility of generating advanced tools for process optimization. This, which ultimately impacts the environment and empowers people in the processes, is achieved through data integration and the development of a digital twin using open tools such as NodeRed v4.0.9 and Python 3.13.5 frameworks, among others. The article begins with a conceptual analysis of IIoT and digital twin integration and then presents a case study to demonstrate how these technologies support the principles of the Industry 5.0 framework. Specifically, it examines the requirements for applying the golden batch concept within a biological production environment. The goal is to illustrate how digital twins can facilitate the achievement of quality standards while fostering a more sustainable production process. The results from the case study show that biomaterial concentration was optimized by approximately 10%, reducing excess in an initially overdesigned process. In doing so, this paper highlights the potential of digital twins as key enablers of Industry 5.0—enhancing sustainability, empowering operators, and building resilience throughout the value chain. Full article
18 pages, 3717 KiB  
Article
A Hybrid LMD–ARIMA–Machine Learning Framework for Enhanced Forecasting of Financial Time Series: Evidence from the NASDAQ Composite Index
by Jawaria Nasir, Hasnain Iftikhar, Muhammad Aamir, Paulo Canas Rodrigues and Mohd Ziaur Rehman
Mathematics 2025, 13(15), 2389; https://doi.org/10.3390/math13152389 - 25 Jul 2025
Abstract
This study proposes a novel hybrid forecasting approach designed explicitly for long-horizon financial time series. It incorporates LMD (Local Mean Decomposition), SD (Signal Decomposition), and sophisticated machine learning methods. The framework for the NASDAQ Composite Index begins by decomposing the original time series into stochastic and deterministic components using the LMD approach. This method effectively separates linear and nonlinear signal structures. The stochastic components are modeled using ARIMA to represent linear temporal dynamics, while the deterministic components are projected using cutting-edge machine learning methods, including XGBoost, Random Forest (RF), Artificial Neural Networks (ANNs), and Support Vector Machines (SVMs). This study employs various statistical metrics to evaluate the predictive ability across both short-term noise and long-term trends, including Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Directional Statistic (DS). Furthermore, the Diebold–Mariano test is used to determine the statistical significance of any forecast improvements. Empirical results demonstrate that the hybrid LMD–ARIMA–SD–XGBoost model consistently outperforms alternative configurations in terms of prediction accuracy and directional consistency. These findings demonstrate the advantages of integrating decomposition-based signal filtering with ensemble machine learning to improve the robustness and generalizability of long-term forecasting. This study presents a scalable and adaptive approach for modeling complex, nonlinear, and high-dimensional time series, thereby contributing to the enhancement of intelligent forecasting systems in the economic and financial sectors. As far as the authors are aware, this is the first study to combine XGBoost and LMD in a hybrid decomposition framework for forecasting long-horizon stock indexes. Full article
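The decompose-then-forecast recipe can be sketched end to end. Since no drop-in LMD routine is assumed here, the example below substitutes a simple moving-average split (smooth part versus residual) as a stand-in for the LMD product functions, models the residual with ARIMA (statsmodels) and the smooth part with XGBoost on lagged values, and recombines the two forecasts; the data are synthetic, not the NASDAQ series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
y = pd.Series(np.cumsum(rng.standard_normal(500)) + 0.05 * np.arange(500))  # synthetic index

# Stand-in decomposition (the paper uses LMD): smooth/deterministic part vs. stochastic residual.
smooth = y.rolling(20, min_periods=1).mean()
resid = y - smooth

train = 450
horizon = len(y) - train

# Linear component: ARIMA on the stochastic residual.
arima_fc = ARIMA(resid.to_numpy()[:train], order=(2, 0, 1)).fit().forecast(steps=horizon)

# Nonlinear component: XGBoost regression on lagged values of the smooth part.
lags = np.column_stack([smooth.shift(k) for k in range(1, 6)])[5:]
target = smooth.to_numpy()[5:]
xgb = XGBRegressor(n_estimators=200, max_depth=3)
xgb.fit(lags[: train - 5], target[: train - 5])
xgb_fc = xgb.predict(lags[train - 5:])

hybrid = arima_fc + xgb_fc
print("test RMSE:", np.sqrt(np.mean((hybrid - y.to_numpy()[train:]) ** 2)))
```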
25 pages, 9119 KiB  
Article
An Improved YOLOv8n-Based Method for Detecting Rice Shelling Rate and Brown Rice Breakage Rate
by Zhaoyun Wu, Yehao Zhang, Zhongwei Zhang, Fasheng Shen, Li Li, Xuewu He, Hongyu Zhong and Yufei Zhou
Agriculture 2025, 15(15), 1595; https://doi.org/10.3390/agriculture15151595 - 24 Jul 2025
Abstract
Accurate and real-time detection of rice shelling rate (SR) and brown rice breakage rate (BR) is crucial for intelligent hulling sorting but remains challenging because of small grain size, dense adhesion, and uneven illumination causing missed detections and blurred boundaries in traditional YOLOv8n. This paper proposes a high-precision, lightweight solution based on an enhanced YOLOv8n with improvements in network architecture, feature fusion, and attention mechanism. The backbone’s C2f module is replaced with C2f-Faster-CGLU, integrating partial convolution (PConv) local convolution and convolutional gated linear unit (CGLU) gating to reduce computational redundancy via sparse interaction and enhance small-target feature extraction. A bidirectional feature pyramid network (BiFPN) weights multiscale feature fusion to improve edge positioning accuracy of dense grains. Attention mechanism for fine-grained classification (AFGC) is embedded to focus on texture and damage details, enhancing adaptability to light fluctuations. The Detect_Rice lightweight head compresses parameters via group normalization and dynamic convolution sharing, optimizing small-target response. The improved model achieved 96.8% precision and 96.2% mAP. Combined with a quantity–mass model, SR/BR detection errors reduced to 1.11% and 1.24%, meeting national standard (GB/T 29898-2013) requirements, providing an effective real-time solution for intelligent hulling sorting. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
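The final conversion from detections to the two quality indices is straightforward arithmetic. The sketch below uses hypothetical class names, counts, and average grain masses (the paper's quantity–mass model is calibrated, not assumed like this) purely to show how SR and BR fall out of the per-class counts.

```python
# Hypothetical per-class detection counts from one monitoring image.
counts = {"brown_rice_whole": 850, "brown_rice_broken": 60, "unshelled_paddy": 90}

# Assumed average single-grain masses in grams (a calibrated quantity-mass model would supply these).
grain_mass = {"brown_rice_whole": 0.021, "brown_rice_broken": 0.010, "unshelled_paddy": 0.027}

mass = {cls: counts[cls] * grain_mass[cls] for cls in counts}
shelled = mass["brown_rice_whole"] + mass["brown_rice_broken"]

shelling_rate = shelled / (shelled + mass["unshelled_paddy"])   # SR: shelled share of total grain mass
breakage_rate = mass["brown_rice_broken"] / shelled             # BR: broken share of the brown rice
print(f"SR = {shelling_rate:.1%}, BR = {breakage_rate:.1%}")
```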