Search Results (219)

Search Parameters:
Keywords = e-learning resource optimization

36 pages, 3174 KB  
Review
A Bibliometric-Systematic Literature Review (B-SLR) of Machine Learning-Based Water Quality Prediction: Trends, Gaps, and Future Directions
by Jeimmy Adriana Muñoz-Alegría, Jorge Núñez, Ricardo Oyarzún, Cristian Alfredo Chávez, José Luis Arumí and Lien Rodríguez-López
Water 2025, 17(20), 2994; https://doi.org/10.3390/w17202994 - 17 Oct 2025
Viewed by 130
Abstract
Predicting the quality of freshwater, both surface and groundwater, is essential for the sustainable management of water resources. This study collected 1822 articles from the Scopus database (2000–2024) and filtered them using Topic Modeling to create the study corpus. The B-SLR analysis identified exponential growth in scientific publications since 2020, indicating that this field has reached a stage of maturity. The results showed that the predominant techniques for predicting water quality, both for surface and groundwater, fall into three main categories: (i) ensemble models, with Bagging and Boosting representing 43.07% and 25.91%, respectively, particularly random forest (RF), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGB), along with their optimized variants; (ii) deep neural networks such as long short-term memory (LSTM) and convolutional neural network (CNN), which excel at modeling complex temporal dynamics; and (iii) traditional algorithms like artificial neural network (ANN), support vector machines (SVMs), and decision tree (DT), which remain widely used. Current trends point towards the use of hybrid and explainable architectures, with increased application of interpretability techniques. Emerging approaches such as Generative Adversarial Network (GAN) and Group Method of Data Handling (GMDH) for data-scarce contexts, Transfer Learning for knowledge reuse, and Transformer architectures that outperform LSTM in time series prediction tasks were also identified. Furthermore, the most studied water bodies (e.g., rivers, aquifers) and the most commonly used water quality indicators (e.g., WQI, EWQI, dissolved oxygen, nitrates) were identified. The B-SLR and Topic Modeling methodology provided a more robust, reproducible, and comprehensive overview of AI/ML/DL models for freshwater quality prediction, facilitating the identification of thematic patterns and research opportunities. Full article
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)

19 pages, 1928 KB  
Article
Assessment of Frozen Stored Silver Carp Surimi Gel Quality Using Synthetic Data-Driven Machine Learning (SDDML) Model
by Jingyi Yang, Shuairan Chen, Tianjian Tong and Chenxu Yu
Gels 2025, 11(10), 810; https://doi.org/10.3390/gels11100810 - 9 Oct 2025
Viewed by 168
Abstract
The invasive Silver Carp (Hypophthalmichthys molitrix) in North America represents a promising resource for surimi production; however, its gel formability deteriorates significantly during frozen storage. This study investigated the deterioration of gel properties in Silver Carp surimi over six months of frozen storage, and showed that short-term frozen storage (<2 months) was beneficial for surimi gel-forming ability, while extended frozen storage (>2 months) tended to have detrimental effects. The adverse effect of long-term frozen storage could be mitigated by using food additives (e.g., manufactured microfiber, transglutaminase, and chicken skin collagen), among which transglutaminase was the most effective. Transglutaminase at a relatively low level (0.1 wt%) could effectively negate frozen storage’s effects, and produced surimi gel with quality attributes (e.g., gel strength, hardness, and chewiness) at levels comparable to those from fresh fish samples. To assess the effects of the addition of various food additives for quality improvement, a synthetic data-driven machine learning (SDDML) approach was developed. After testing multiple algorithms, the random forest model was shown to yield synthetic data points that represented experimental data characteristics the best (R2 values of 0.871–0.889). It also produced improved predictions for gel quality attributes from control variables (i.e., additive levels) compared to using experimental data alone, showing the potential to overcome data scarcity issues when only limited experimental data are available for ML models. A synthetic dataset of 240 data points was shown to supplement the experimental dataset (60 points) well for assessment of the Frozen Silver Carp (FSC) surimi gel quality attributes. The SDDML method could be used to find optimal recipes for generating additive profiles to counteract the adverse effects of frozen storage and to improve surimi gel quality, upgrading underutilized invasive species to value-added food products. Full article
(This article belongs to the Special Issue Application of Composite Gel in Food Processing and Engineering)
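The abstract's idea of supplementing 60 experimental points with 240 synthetic ones can be illustrated with a minimal sketch. Note this is not the paper's SDDML pipeline (which generates synthetic points with a fitted random forest); it is a simplified jitter-based stand-in, and everything except the 60/240 split is an assumption.

```python
import numpy as np

def augment_dataset(X, y, n_synthetic=240, noise_frac=0.05, seed=0):
    """Expand a small experimental dataset with jittered synthetic points.

    Each synthetic sample is an existing (X, y) row plus Gaussian noise
    scaled to a fraction of each column's standard deviation -- a
    simplified stand-in for model-driven synthetic data generation.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_synthetic)
    X_noise = rng.normal(0.0, noise_frac * X.std(axis=0),
                         size=(n_synthetic, X.shape[1]))
    y_noise = rng.normal(0.0, noise_frac * y.std(), size=n_synthetic)
    return X[idx] + X_noise, y[idx] + y_noise

# hypothetical 60-point experiment: additive level -> gel strength
X_exp = np.linspace(0.0, 0.5, 60).reshape(-1, 1)
y_exp = 100 + 80 * X_exp.ravel()
X_syn, y_syn = augment_dataset(X_exp, y_exp, n_synthetic=240)
print(X_syn.shape, y_syn.shape)  # (240, 1) (240,)
```

A downstream model would then be trained on the concatenation of the experimental and synthetic sets.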

15 pages, 2364 KB  
Article
Optimized Lung Nodule Classification Using CLAHE-Enhanced CT Imaging and Swin Transformer-Based Deep Feature Extraction
by Dorsaf Hrizi, Khaoula Tbarki and Sadok Elasmi
J. Imaging 2025, 11(10), 346; https://doi.org/10.3390/jimaging11100346 - 4 Oct 2025
Viewed by 233
Abstract
Lung cancer remains one of the most lethal cancers globally. Its early detection is vital to improving survival rates. In this work, we propose a hybrid computer-aided diagnosis (CAD) pipeline for lung cancer classification using Computed Tomography (CT) scan images. The proposed CAD pipeline integrates ten image preprocessing techniques and ten pretrained deep learning models for feature extraction including convolutional neural networks and transformer-based architectures, and four classical machine learning classifiers. Unlike traditional end-to-end deep learning systems, our approach decouples feature extraction from classification, enhancing interpretability and reducing the risk of overfitting. A total of 400 model configurations were evaluated to identify the optimal combination. The proposed approach was evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset, which comprises 1018 thoracic CT scans annotated by four thoracic radiologists. For the classification task, the dataset included a total of 6568 images labeled as malignant and 4849 images labeled as benign. Experimental results show that the best performing pipeline, combining Contrast Limited Adaptive Histogram Equalization, Swin Transformer feature extraction, and eXtreme Gradient Boosting, achieved an accuracy of 95.8%. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)

27 pages, 1547 KB  
Article
Does Data Asset Information Disclosure Mitigate Supply Chain Risk? Causal Evidence from Double-Debiased Machine Learning
by Huiyi Shi, Yufei Xia, Zihe Zong, Yifan Hua, Jikang Sun and Xiangyu Chen
Systems 2025, 13(10), 844; https://doi.org/10.3390/systems13100844 - 25 Sep 2025
Viewed by 428
Abstract
As a vital driver of supply chain management, data has evolved into both a foundational resource and a critical production factor for optimizing supply chains and mitigating risk. This study adopts a four-dimensional framework (i.e., visibility, coordination, flexibility, and redundancy) to investigate how data asset information disclosure (DAID) shapes supply chain risk (SCR). Relative to the existing literature, this paper contributes by examining the determinants of supply chain risk from the perspective of data asset information disclosure and by conducting empirical analyses using double debiased machine learning and causal mediation analysis. The results show that DAID significantly lowers SCR, with results robust to multiple sensitivity checks. Economically, a one-standard-deviation increase in DAID leads to an average decline in SCR of 0.63%. Causal mediation analysis, aligned with the theoretical dimensions, reveals that DAID mitigates SCR through four channels: enhancing information transparency, improving visibility, strengthening agile responsiveness, and increasing supply chain concentration. Heterogeneity tests reveal stronger effects among firms facing fewer financing constraints, operating in more marketized environments, and designated as chain master firms. Further evidence suggests that reduced SCR promotes a greater capacity for coordinated innovation within the supply chain. Full article
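The double-debiased machine learning estimator mentioned above can be shown in miniature. The sketch below keeps only the partialling-out (residual-on-residual) core of DML, using plain OLS for both nuisance models and no cross-fitting; the variable names and simulated data are hypothetical, with the true effect set to the abstract's −0.63 purely for illustration.

```python
import numpy as np

def partial_out_effect(Y, D, X):
    """Estimate the effect of treatment D on outcome Y controlling for X
    via residual-on-residual regression -- the partialling-out idea behind
    double/debiased ML, here with OLS nuisance fits instead of ML models."""
    Xc = np.column_stack([np.ones(len(X)), X])
    beta_y, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
    beta_d, *_ = np.linalg.lstsq(Xc, D, rcond=None)
    ry = Y - Xc @ beta_y   # outcome residuals
    rd = D - Xc @ beta_d   # treatment residuals
    return float((rd @ ry) / (rd @ rd))

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                      # firm-level controls
D = X @ [0.5, -0.2, 0.1] + rng.normal(size=2000)    # disclosure score
Y = -0.63 * D + X @ [1.0, 0.3, -0.5] + rng.normal(size=2000)  # supply chain risk
print(partial_out_effect(Y, D, X))  # close to the true effect of -0.63
```

The paper's estimator additionally uses flexible ML nuisance models and cross-fitting to remain valid with high-dimensional controls.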

25 pages, 9694 KB  
Article
Short- and Medium-Term Predictions of Spatiotemporal Distribution of Marine Fishing Efforts Using Deep Learning
by Shenglong Yang, Wei Wang, Tianfei Cheng, Shengmao Zhang, Yang Dai, Fei Wang, Heng Zhang, Yongchuang Shi, Weifeng Zhou and Wei Fan
Fishes 2025, 10(10), 479; https://doi.org/10.3390/fishes10100479 - 25 Sep 2025
Viewed by 378
Abstract
High-resolution spatiotemporal prediction information on fishing vessel activities is essential for formulating and effectively implementing fisheries policies that ensure the sustainability of marine resources and fishing practices. This study focused on the tuna longline fishery in the Western and Central Pacific Ocean (130° E–150° W, 20° S–20° N) and constructed a CLA U-Net deep learning model to predict fishing effort (FE) distribution based on 2017–2023 FE records and environmental variables. Two modeling schemes were designed: Scheme 1 incorporated both historical FE and environmental data, while Scheme 2 used only environmental variables. The model predicts not only the binary outcome (presence or absence of fishing effort) but also the magnitude of FE. Results show that in short-term predictions, Scheme 1 achieved F1 scores of 0.654 at the 0.5°-1-day scale and 0.763 at the 1°-1-day scale, indicating substantial improvement from including historical FE data. In medium-term predictions, Scheme 1 and Scheme 2 reached maximum F1 scores of 0.77 and 0.72, respectively, at the optimal spatiotemporal scale of 1°-30 days. The analysis also quantified the relative importance of environmental variables, with sea surface temperature (SST) and chlorophyll-a (Chl-a) identified as the most influential. These findings provide methodological insights for spatiotemporal prediction of fishing effort and support the refinement of fisheries management and sustainability strategies. Full article
(This article belongs to the Section Fishery Economics, Policy, and Management)
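The F1 scores reported above evaluate binary presence/absence of fishing effort per grid cell. A minimal sketch of that metric, with hypothetical toy grid values:

```python
def f1_score(y_true, y_pred):
    """F1 for binary presence/absence of fishing effort on grid cells."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy 1-degree grid flattened to a list: 1 = fishing effort present
truth = [1, 1, 0, 0, 1, 0, 1, 0]
pred  = [1, 0, 0, 1, 1, 0, 1, 0]
print(f1_score(truth, pred))  # → 0.75
```

Because effort is sparse over the ocean grid, F1 is a more informative target than raw accuracy, which a model could inflate by predicting "absent" everywhere.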

32 pages, 684 KB  
Article
Screening Smarter, Not Harder: Budget Allocation Strategies for Technology-Assisted Reviews (TARs) in Empirical Medicine
by Giorgio Maria Di Nunzio
Mach. Learn. Knowl. Extr. 2025, 7(3), 104; https://doi.org/10.3390/make7030104 - 20 Sep 2025
Viewed by 590
Abstract
In the technology-assisted review (TAR) area, most research has focused on ranking effectiveness and active learning strategies within individual topics, often assuming unconstrained review effort. However, real-world applications such as legal discovery or medical systematic reviews are frequently subject to global screening budgets. In this paper, we revisit the CLEF eHealth TAR shared tasks (2017–2019) through the lens of budget-aware evaluation. We first reproduce and verify the official participant results, organizing them into a unified dataset for comparative analysis. Then, we introduce and assess four intuitive budget allocation strategies—even, proportional, inverse proportional, and threshold-capped greedy—to explore how review effort can be efficiently distributed across topics. To evaluate systems under resource constraints, we propose two cost-aware metrics: relevant found per cost unit (RFCU) and utility gain at budget (UG@B). These complement traditional recall by explicitly modeling efficiency and trade-offs between true and false positives. Our results show that different allocation strategies optimize different metrics: even and inverse proportional allocation favor recall, while proportional and capped strategies better maximize RFCU. UG@B remains relatively stable across strategies, reflecting its balanced formulation. A correlation analysis reveals that RFCU and UG@B offer distinct perspectives from recall, with varying alignment across years. Together, these findings underscore the importance of aligning evaluation metrics and allocation strategies with screening goals. We release all data and code to support reproducibility and future research on cost-sensitive TAR. Full article
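Two of the four allocation strategies named above can be sketched as follows; the function name and the floor-division rounding are assumptions, not the paper's exact formulation.

```python
def allocate_budget(total_budget, topic_sizes, strategy="even"):
    """Split a global screening budget across review topics.

    Simplified versions of two of the paper's four strategies:
    'even' gives each topic the same share; 'proportional' splits the
    budget by candidate-pool size. Shares are floored to whole documents,
    and a topic never receives more budget than it has documents.
    """
    n = len(topic_sizes)
    if strategy == "even":
        shares = [total_budget // n] * n
    elif strategy == "proportional":
        total = sum(topic_sizes)
        shares = [total_budget * s // total for s in topic_sizes]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [min(b, s) for b, s in zip(shares, topic_sizes)]

sizes = [100, 400, 500]                             # documents per topic
print(allocate_budget(300, sizes, "even"))          # → [100, 100, 100]
print(allocate_budget(300, sizes, "proportional"))  # → [30, 120, 150]
```

The contrast in the output shows why the strategies optimize different metrics: even allocation over-serves small topics (helping recall there), while proportional allocation concentrates effort where most candidates, and typically most relevant documents, sit.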

20 pages, 596 KB  
Review
A Survey on Digital Solutions for Health Services Management: Features and Use Cases from Brazilian National Literature
by Ericles Andrei Bellei, Cleide Fátima Moretto, Carla Maria Dal Sasso Freitas and Ana Carolina Bertoletti De Marchi
Healthcare 2025, 13(18), 2348; https://doi.org/10.3390/healthcare13182348 - 18 Sep 2025
Viewed by 609
Abstract
Background and Objective: Health services management faces increasing complexity, particularly in developing countries such as Brazil. Digital tools play a central role in optimizing health service operations, yet synthesized evidence on manager-focused applications remains limited. This study aimed to survey digital innovations for management within the Brazilian context. Methods: We systematically reviewed the complete proceedings of the Brazilian Symposium on Computing Applied to Health (SBCAS) from 2001 to 2024, identifying 26 studies that met eligibility criteria based on managerial relevance. Results: Applications identified predominantly addressed hospital management (e.g., resource scheduling and process optimization) and public health surveillance (e.g., disease prediction and monitoring), employing technologies such as machine learning and simulation. These tools primarily leveraged structured administrative data from national health information systems, reflecting existing data infrastructure capabilities. The reported implications suggest improvements in decision-making through optimized resource allocation (e.g., ICU beds and staffing), streamlined operational processes (e.g., bottleneck identification), enhanced planning and monitoring capabilities (e.g., endemic disease control and telemonitoring programs), and more timely, targeted public health surveillance (e.g., georeferenced analysis). Conclusions: The identified research aligns with global digital health trends but is also tailored to the complex realities of the healthcare system. Despite significant technical advancements, these digital solutions predominantly remain at the prototype stage, highlighting a gap between academic innovation and real-world deployment. 
Realizing the benefits of these tools will require a concerted effort to move beyond technical validation, focusing on implementation science, supportive policies, and strategic partnerships to integrate these solutions into managerial practice. Full article

26 pages, 4529 KB  
Article
AgriMicro—A Microservices-Based Platform for Optimization of Farm Decisions
by Cătălin Negulescu, Theodor Borangiu, Silviu Răileanu and Victor Valentin Anghel
AgriEngineering 2025, 7(9), 299; https://doi.org/10.3390/agriengineering7090299 - 16 Sep 2025
Viewed by 655
Abstract
The paper presents AgriMicro, a modern Farm Management Information System (FMIS) designed to help farmers monitor and optimize corn crops from sowing to harvest, by leveraging cloud technologies and machine learning algorithms. The platform is built on a modular architecture composed of multiple components implemented through microservices such as the weather and soil service, recommendation and alert engine, field service, and crop service—which continuously communicate to centralize field data and provide real-time insights. Through the ongoing exchange of data between these services, different information pieces about soil conditions, crop health, and agricultural operations are processed and analyzed, resulting in predictions of crop evolution and practical recommendations for future interventions (e.g., fertilization or irrigation). This integrated FMIS transforms collected data into concrete actions, supporting farmers and agricultural consultants in making informed decisions, improving field productivity, and ensuring more efficient resource use. Its microservice-based architecture provides scalability, modularity, and straightforward integration with other information systems. The objectives of this study are threefold. First, to specify and design a modular FMIS architecture based on microservices and cloud computing, ensuring scalability, interoperability and adaptability to different farm contexts. Second, to prototype and integrate initial components and Internet of Things (IoT)-based data collection with machine learning models, specifically Random Forest and XGBoost, to provide maize yield forecasting as a proof of concept. 
Model performance was evaluated using standard predictive accuracy metrics, including the coefficient of determination (R2) and the root mean square error (RMSE), and validated against official harvest data (average maize yield) from the Romanian National Institute of Statistics (INS) for 2024. These results confirm the reliability of the forecasting pipeline under controlled conditions; however, in real-world practice, broader regional and inter-annual variability typically results in considerably higher errors, often on the order of 10–20%. Third, to present a Romania-based case study which illustrates the end-to-end workflow and outlines an implementation roadmap toward full deployment. As this is a design-oriented study currently under development, several services remain at the planning or early prototyping stage, and comprehensive system-level benchmarks are deferred to future work. Full article
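The two evaluation metrics named above, RMSE and R2, can be computed as follows; the yield figures are hypothetical.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error: typical deviation in the target's units."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: variance explained relative to the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# hypothetical maize yields in t/ha: observed vs. forecast
obs  = [7.2, 6.8, 8.1, 7.5]
pred = [7.0, 7.0, 8.0, 7.6]
print(round(rmse(obs, pred), 3), round(r2(obs, pred), 3))  # 0.158 0.889
```

RMSE is in the same units as the yield, so it is directly interpretable by agronomists, while R2 is scale-free and comparable across fields and seasons.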

25 pages, 3637 KB  
Review
Application Research Progress of Solid Waste Concrete Based on Machine Learning Technology
by Fan Zhang, Bo Wen and Ditao Niu
Buildings 2025, 15(18), 3333; https://doi.org/10.3390/buildings15183333 - 15 Sep 2025
Viewed by 536
Abstract
With accelerating urbanization, China generates large volumes of construction waste annually, much of which is landfilled, causing environmental damage and wasting recyclable resources. Traditional recycling technologies struggle with complex material compositions and performance degradation, limiting large-scale reuse. This highlights the need for intelligent, data-driven solutions to enhance solid waste recycling efficiency. Recent developments in machine learning (ML) have enabled more accurate performance prediction and optimization of waste concrete, offering new pathways for material regeneration. ML’s nonlinear modeling and pattern recognition capabilities are particularly suited to capturing the complex behaviors of recycled materials. However, systematic reviews on ML applications across various waste concretes are still lacking, and future research directions remain unclear. This study conducts a scientometric analysis of 1762 publications (2011–2024) from the Web of Science Core Collection (WOSCC) and CNKI, aiming to map current trends and guide future research on ML applications in waste concrete recycling. The findings show a clear evolution: research has expanded from recycled concrete to a wider range of modified solid waste concretes. ML applications have advanced from compressive strength prediction to multi-objective optimization involving durability, cost, and mechanical performance. Algorithm development and model accuracy have improved steadily. Key challenges persist, including limited data quality and scale, which constrain complex model training. Research on specific properties such as salt-frost resistance is scarce due to high testing costs. Models often lack generalizability in coupled conditions (e.g., salt-frost–carbonation) and suffer from poor physical interpretability, hindering understanding of durability mechanisms. 
Nevertheless, unresolved issues remain, including the scarcity of standardized datasets, the limited integration of domain knowledge with ML methods, and the lack of comprehensive evaluations across multiple degradation mechanisms, all of which represent critical research gaps for future studies. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)

20 pages, 1943 KB  
Article
Spatial–Temporal Physics-Constrained Multilayer Perceptron for Aircraft Trajectory Prediction
by Zhongnan Zhang, Jianwei Zhang, Yi Lin, Kun Zhang, Xuemei Zheng and Dengmei Xiang
Appl. Sci. 2025, 15(18), 9895; https://doi.org/10.3390/app15189895 - 10 Sep 2025
Viewed by 515
Abstract
Aircraft trajectory prediction (ATP) is a critical technology for air traffic control (ATC), safeguarding aviation safety and airspace resource management. To address the limitations of existing methods—kinetic models’ susceptibility to environmental disturbances and machine learning’s lack of physical interpretability—this paper proposes a Spatial–Temporal Physics-Constrained Multilayer Perceptron (STPC-MLP) model. The model employs a spatiotemporal attention encoder to decouple timestamps and spatial coordinates (longitude, latitude, altitude), eliminating feature ambiguity caused by mixed representations. By fusing temporal and spatial attention features, it effectively extracts trajectory degradation patterns. Furthermore, a Hidden Physics-Constrained Multilayer Perceptron (HPC-MLP) integrates kinematic equations (e.g., maximum acceleration and minimum turning radius constraints) as physical regularization terms in the loss function, ensuring predictions strictly adhere to aircraft maneuvering principles. Experiments demonstrate that STPC-MLP reduces the trajectory point prediction error (RMSE) by 7.13% compared to a conventional optimal Informer model. In ablation studies, the absence of the HPC-MLP module, attention mechanism, and physical constraint loss terms significantly increased prediction errors, unequivocally validating the efficacy of the STPC-MLP architecture for trajectory prediction. Full article
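The idea of encoding kinematic limits as a regularization term in the loss can be sketched as follows. This is a simplified 1-D stand-in, not the STPC-MLP loss itself, and the values of a_max, lam, and dt are illustrative.

```python
import numpy as np

def physics_constrained_loss(pred, target, dt=1.0, a_max=5.0, lam=0.1):
    """MSE plus a kinematic penalty, in the spirit of a physics-regularized
    loss: accelerations implied by consecutive predicted positions must not
    exceed a_max, and violations are penalized quadratically."""
    mse = np.mean((pred - target) ** 2)
    vel = np.diff(pred) / dt          # finite-difference velocity
    acc = np.diff(vel) / dt           # finite-difference acceleration
    violation = np.clip(np.abs(acc) - a_max, 0.0, None)
    return mse + lam * np.mean(violation ** 2)

# toy 1-D trajectory: predicted positions vs. ground truth
target = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
smooth = np.array([0.0, 1.1, 2.0, 3.1, 4.0])   # physically plausible
jumpy  = np.array([0.0, 4.0, 0.0, 4.0, 0.0])   # violates the constraint
print(physics_constrained_loss(smooth, target) <
      physics_constrained_loss(jumpy, target))  # True
```

Because the penalty is differentiable, it can be added to any neural predictor's training loss, steering it toward trajectories a real aircraft could fly.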

27 pages, 2027 KB  
Article
Comparative Analysis of SDN and Blockchain Integration in P2P Streaming Networks for Secure and Reliable Communication
by Aisha Mohmmed Alshiky, Maher Ali Khemakhem, Fathy Eassa and Ahmed Alzahrani
Electronics 2025, 14(17), 3558; https://doi.org/10.3390/electronics14173558 - 7 Sep 2025
Viewed by 669
Abstract
Rapid advancements in peer-to-peer (P2P) streaming technologies have significantly impacted digital communication, enabling scalable, decentralized, and real-time content distribution. Despite these advancements, challenges persist, including dynamic topology management, high latency, security vulnerabilities, and unfair resource sharing (e.g., free rider). While software-defined networking (SDN) and blockchain individually address aspects of these limitations, their combined potential for comprehensive optimization remains underexplored. This study proposes a distributed SDN (DSDN) architecture enhanced with blockchain support to provide secure, scalable, and reliable P2P video streaming. We identified research gaps through critical analysis of the literature. We systematically compared traditional P2P, SDN-enhanced, and hybrid architectures across six performance metrics: latency, throughput, packet loss, authentication accuracy, packet delivery ratio, and control overhead. Simulations with 200 peers demonstrate that the proposed hybrid SDN–blockchain framework achieves a latency of 140 ms, a throughput of 340 Mbps, an authentication accuracy of 98%, a packet delivery ratio of 97.8%, a packet loss ratio of 2.2%, and a control overhead of 9.3%, outperforming state-of-the-art solutions such as NodeMaps, the reinforcement learning-based routing framework (RL-RF), and content delivery networks-P2P networks (CDN-P2P). This work establishes a scalable and attack-resilient foundation for next-generation P2P streaming. Full article
(This article belongs to the Section Computer Science & Engineering)

20 pages, 4585 KB  
Article
MMamba: An Efficient Multimodal Framework for Real-Time Ocean Surface Wind Speed Inpainting Using Mutual Information and Attention-Mamba-2
by Xinjie Shi, Weicheng Ni, Boheng Duan, Qingguo Su, Lechao Liu and Kaijun Ren
Remote Sens. 2025, 17(17), 3091; https://doi.org/10.3390/rs17173091 - 4 Sep 2025
Viewed by 983
Abstract
Accurate observations of Ocean Surface Wind Speed (OSWS) are vital for predicting extreme weather and understanding ocean–atmosphere interactions. However, spaceborne sensors (e.g., ASCAT, SMAP) often experience data loss due to harsh weather and instrument malfunctions. Existing inpainting methods often rely on reanalysis data that is released with delays, which restricts their real-time capability. Additionally, deep-learning-based methods, such as Transformers, face challenges due to their high computational complexity. To address these challenges, we present the Multimodal Wind Speed Inpainting Dataset (MWSID), which integrates 12 auxiliary forecasting variables to support real-time OSWS inpainting. Based on MWSID, we propose the MMamba framework, combining the Multimodal Feature Extraction module, which uses mutual information (MI) theory to optimize feature selection, and the OSWS Reconstruction module, which employs Attention-Mamba-2 within a Residual-in-Residual-Dense architecture for efficient OSWS inpainting. Experiments show that MMamba outperforms MambaIR (state-of-the-art) with an RMSE of 0.5481 m/s and an SSIM of 0.9820, significantly reducing RMSE by 21.10% over Kriging and 8.22% over MambaIR in high-winds (>15 m/s). We further introduce MMamba-L, a lightweight 0.22M-parameter variant suitable for resource-limited devices. These contributions make MMamba and MWSID powerful tools for OSWS inpainting, benefiting extreme weather prediction and oceanographic research. Full article
(This article belongs to the Section AI Remote Sensing)

20 pages, 2354 KB  
Article
MineVisual: A Battery-Free Visual Perception Scheme in Coal Mine
by Ming Li, Zhongxu Bao, Shuting Li, Xu Yang, Qiang Niu, Muyu Yang and Shaolong Chen
Sensors 2025, 25(17), 5486; https://doi.org/10.3390/s25175486 - 3 Sep 2025
Viewed by 743
Abstract
The demand for robust safety monitoring in underground coal mines is increasing, yet traditional methods face limitations in long-term stability due to inadequate energy supply and high maintenance requirements. To address the critical challenges of high computational demand and energy constraints in this resource-limited environment, this paper proposes MineVisual, a battery-free visual sensing scheme specifically designed for underground coal mines. The core of MineVisual is an optimized lightweight deep neural network employing depthwise separable convolution modules to enhance computational efficiency and reduce energy consumption. Crucially, we introduce an energy-aware dynamic pruning network (EADP-Net) ensuring a sustained inference accuracy and energy efficiency across fluctuating power conditions. The system integrates supercapacitor buffering and voltage regulation for stable operation under wind intermittency. Experimental validation demonstrates that MineVisual achieves high accuracy (e.g., 91.5% Top-1 on mine-specific tasks under high power) while significantly enhancing the energy efficiency (reducing inference energy to 6.89 mJ under low power) and robustness under varying wind speeds. This work provides an effective technical pathway for intelligent safety monitoring in complex underground environments and conclusively proves the feasibility of battery-free deep learning inference in extreme settings like coal mines. Full article
(This article belongs to the Section Electronic Sensors)
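The efficiency gain from the depthwise separable convolutions mentioned in the abstract can be illustrated with a back-of-envelope parameter count. This is a minimal sketch with hypothetical layer shapes (64 input channels, 128 output channels, 3x3 kernels), not figures from the paper:

```python
# Compare parameter counts of a standard convolution vs. a depthwise
# separable convolution (depthwise k x k filters + 1x1 pointwise mixing).

def standard_conv_params(c_in, c_out, k):
    # One k x k filter per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)          # 73728
sep = depthwise_separable_params(c_in, c_out, k)    # 576 + 8192 = 8768
print(std, sep, round(std / sep, 2))                # roughly 8x fewer params
```

The same ratio applies to multiply-accumulate operations per pixel, which is why such modules are a common choice on energy-constrained hardware.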
22 pages, 763 KB  
Article
Optimizing TSCH Scheduling for IIoT Networks Using Reinforcement Learning
by Sahar Ben Yaala, Sirine Ben Yaala and Ridha Bouallegue
Technologies 2025, 13(9), 400; https://doi.org/10.3390/technologies13090400 - 3 Sep 2025
Abstract
In industrial applications, ensuring reliable medium access control is a fundamental challenge. Industrial IoT devices are resource-constrained and must guarantee reliable communication while reducing energy consumption. The IEEE 802.15.4e standard introduced time-slotted channel hopping (TSCH) to meet the requirements of the Industrial Internet of Things. TSCH relies on time synchronization and channel hopping to improve performance and reduce energy consumption. Despite these characteristics, configuring an efficient schedule under varying traffic conditions and interference scenarios remains challenging. Reinforcement learning (RL) offers a promising approach to this problem: it enables TSCH to adapt its scheduling dynamically to real-time network conditions, making decisions that optimize key performance criteria such as energy efficiency, reliability, and latency. By learning from the environment, an RL agent can reconfigure schedules to mitigate interference and meet traffic demands. In this work, we compare several RL algorithms in the TSCH environment, evaluating the deep Q-network (DQN), double deep Q-network (DDQN), and DQN with prioritized experience replay (PER-DQN), with a focus on their convergence speed and capacity to adapt the schedule. Our results show that PER-DQN improves the packet delivery ratio and converges faster than DQN and DDQN, demonstrating its effectiveness for dynamic TSCH scheduling in Industrial IoT environments. These quantifiable improvements highlight the potential of prioritized experience replay to enhance reliability and efficiency under varying network conditions.
(This article belongs to the Section Information and Communication Technologies)
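The RL-driven scheduling idea can be sketched with tabular Q-learning on a toy TSCH slotframe. The paper evaluates deep variants (DQN, DDQN, PER-DQN), which replace the Q-table below with a neural network; the 4-slot, 2-channel schedule and the interference model here are hypothetical:

```python
import random

# Toy Q-learning for TSCH cell selection: state = timeslot, action = channel.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
n_slots, n_channels = 4, 2
Q = {(s, c): 0.0 for s in range(n_slots) for c in range(n_channels)}

def choose_channel(slot):
    if random.random() < EPS:                      # explore
        return random.randrange(n_channels)
    return max(range(n_channels), key=lambda c: Q[(slot, c)])  # exploit

def update(slot, ch, reward, next_slot):
    best_next = max(Q[(next_slot, c)] for c in range(n_channels))
    Q[(slot, ch)] += ALPHA * (reward + GAMMA * best_next - Q[(slot, ch)])

# Simulated environment: channel 1 suffers interference on every slot,
# so transmissions there fail (reward -1) while channel 0 succeeds (+1).
random.seed(0)
for episode in range(500):
    for slot in range(n_slots):
        ch = choose_channel(slot)
        reward = 1.0 if ch == 0 else -1.0
        update(slot, ch, reward, (slot + 1) % n_slots)

# After training, the learned schedule avoids the interfered channel.
print(all(Q[(s, 0)] > Q[(s, 1)] for s in range(n_slots)))
```

Prioritized experience replay extends this by replaying stored transitions with probability proportional to their temporal-difference error, which is what the paper credits for PER-DQN's faster convergence.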
21 pages, 3363 KB  
Article
A Hybrid CNN-GCN Architecture with Sparsity and Dataflow Optimization for Mobile AR
by Jiazhong Chen and Ziwei Chen
Appl. Sci. 2025, 15(17), 9356; https://doi.org/10.3390/app15179356 - 26 Aug 2025
Abstract
Mobile augmented reality (AR) applications require high-performance, energy-efficient deep learning solutions to deliver immersive experiences on resource-constrained devices. We propose SAHA-WS, a Sparsity-Aware Hybrid Architecture with Weight-Stationary Dataflow, combining Convolutional Neural Networks (CNNs) and Graph Convolutional Networks (GCNs) to efficiently process grid-like (e.g., images) and graph-structured (e.g., human skeletons) data. SAHA-WS leverages channel-wise sparsity in CNNs and adjacency-matrix sparsity in GCNs, paired with weight-stationary dataflow, to minimize computation and memory access. Evaluations on the ImageNet, COCO, and NTU RGB+D datasets show that SAHA-WS achieves 87.5% top-1 accuracy, 75.8% mAP, and 92.5% action recognition accuracy at 0% sparsity, with 40 ms latency and 42 mJ energy consumption at 60% sparsity, outperforming a baseline by 10–20% in efficiency. Ablation studies confirm the contributions of the sparsity and dataflow optimizations. SAHA-WS enables complex AR applications to run smoothly on mobile devices, enhancing immersive and engaging experiences.
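The effect of channel-wise sparsity on a convolutional layer's workload can be estimated by counting multiply-accumulate operations (MACs) with pruned output channels skipped. This is a rough sketch with hypothetical shapes (a 3x3 conv on 56x56 feature maps), not the SAHA-WS cost model:

```python
# Estimate MACs for a conv layer, skipping output channels pruned away
# by channel-wise sparsity (the savings SAHA-WS exploits at 60% sparsity).

def conv_macs(c_in, c_out, k, h, w, channel_sparsity=0.0):
    active_out = int(c_out * (1.0 - channel_sparsity))  # surviving channels
    return c_in * active_out * k * k * h * w

dense = conv_macs(64, 128, 3, 56, 56)                       # 231,211,008 MACs
sparse = conv_macs(64, 128, 3, 56, 56, channel_sparsity=0.6)
print(dense, sparse, round(sparse / dense, 2))              # ~40% of dense cost
```

Weight-stationary dataflow complements this by keeping each surviving filter's weights pinned in on-chip buffers while input activations stream past, so the pruned channels cost neither compute nor weight traffic.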