Search Results (846)

Search Parameters:
Keywords = E-learning and M-learning

35 pages, 6888 KiB  
Article
AirTrace-SA: Air Pollution Tracing for Source Attribution
by Wenchuan Zhao, Qi Zhang, Ting Shu and Xia Du
Information 2025, 16(7), 603; https://doi.org/10.3390/info16070603 - 13 Jul 2025
Abstract
Air pollution source tracing is vital for effective pollution prevention and control, yet traditional methods often require large amounts of manual data, have limited cross-regional generalizability, and present challenges in capturing complex pollutant interactions. This study introduces AirTrace-SA (Air Pollution Tracing for Source Attribution), a novel hybrid deep learning model designed for the accurate identification and quantification of air pollution sources. AirTrace-SA comprises three main components: a hierarchical feature extractor (HFE) that extracts multi-scale features from chemical components, a source association bridge (SAB) that links chemical features to pollution sources through a multi-step decision mechanism, and a source contribution quantifier (SCQ) based on the TabNet regressor for the precise prediction of source contributions. Evaluated on real air quality datasets from five cities (Lanzhou, Luoyang, Haikou, Urumqi, and Hangzhou), AirTrace-SA achieves an average R2 of 0.88 (ranging from 0.84 to 0.94 across 10-fold cross-validation), an average mean absolute error (MAE) of 0.60 (ranging from 0.46 to 0.78 across five cities), and an average root mean square error (RMSE) of 1.06 (ranging from 0.51 to 1.62 across ten pollution sources). The model outperforms baseline models such as 1D CNN and LightGBM in terms of stability, accuracy, and cross-city generalization. Feature importance analysis identifies the main contributions of source categories, further improving interpretability. By reducing the reliance on labor-intensive data collection and providing scalable, high-precision source tracing, AirTrace-SA offers a powerful tool for environmental management that supports targeted emission reduction strategies and sustainable development.
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
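The three scores reported in this abstract (R2, MAE, RMSE) are standard regression metrics. A minimal sketch of how they are computed, using hypothetical source-contribution values rather than the paper's datasets:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, and RMSE -- the three scores the abstract reports."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

# Hypothetical source-contribution values (not from the paper's five cities)
y_true = [3.0, 5.5, 2.0, 8.0, 4.5]
y_pred = [2.8, 5.9, 2.3, 7.6, 4.4]
r2, mae, rmse = regression_metrics(y_true, y_pred)
print(round(r2, 3), round(mae, 3), round(rmse, 3))
```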

20 pages, 3147 KiB  
Article
Crossed Wavelet Convolution Network for Few-Shot Defect Detection of Industrial Chips
by Zonghai Sun, Yiyu Lin, Yan Li and Zihan Lin
Sensors 2025, 25(14), 4377; https://doi.org/10.3390/s25144377 - 13 Jul 2025
Abstract
In resistive polymer humidity sensors, the quality of the resistor chips directly affects the performance. Detecting chip defects remains challenging due to the scarcity of defective samples, which limits traditional supervised-learning methods requiring abundant labeled data. While few-shot learning (FSL) shows promise for industrial defect detection, existing approaches struggle with mixed-scene conditions (e.g., daytime and night-vision scenes). In this work, we propose a crossed wavelet convolution network (CWCN), including a dual-pipeline crossed wavelet convolution training framework (DPCWC) and a loss value calculation module named ProSL. Our method innovatively applies wavelet transform convolution and prototype learning to industrial defect detection, which effectively fuses feature information from multiple scenarios and improves the detection performance. Experiments across various few-shot tasks on chip datasets illustrate the better detection quality of CWCN, with an improvement in mAP ranging from 2.76% to 16.43% over other FSL methods. In addition, experiments on the open-source dataset NEU-DET further validate our proposed method.
(This article belongs to the Section Sensing and Imaging)
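The prototype learning the abstract mentions is typically the prototypical-network step: average the support embeddings of each class, then assign a query to the nearest prototype. A toy sketch under that assumption (the 2-D embeddings and class names are invented, not CWCN's actual features):

```python
def prototype_classify(support, query):
    """Nearest-prototype classification: each class prototype is the mean of its
    support embeddings; a query gets the label of the closest prototype."""
    prototypes = {}
    for label, embeddings in support.items():
        dim = len(embeddings[0])
        prototypes[label] = [sum(e[d] for e in embeddings) / len(embeddings)
                             for d in range(dim)]

    def dist2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(prototypes, key=lambda label: dist2(prototypes[label], query))

# Hypothetical 2-D embeddings for "normal" vs "defect" chips
support = {
    "normal": [[0.0, 0.1], [0.2, -0.1]],
    "defect": [[1.0, 1.1], [0.8, 0.9]],
}
print(prototype_classify(support, [0.9, 1.0]))  # closest to the "defect" prototype
```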

15 pages, 3425 KiB  
Article
Designing Cross-Domain Sustainability Instruction in Higher Education: A Mixed-Methods Study Using AHP and Transformative Pedagogy
by Wan-Ting Xie, Shang-Tse Ho and Han-Chien Lin
Sustainability 2025, 17(14), 6380; https://doi.org/10.3390/su17146380 - 11 Jul 2025
Abstract
This study proposes an interdisciplinary instructional model tailored for Functional Ecological Carbon (FEC) education, combining Electronic, Mobilize, and Ubiquitous (E/M/U) learning principles with the Practical Transformational Teaching Method (PTtM). The research adopts a mixed-methods framework, utilizing the Analytic Hierarchy Process (AHP) to prioritize teaching objectives and interpret student evaluations, alongside qualitative insights from reflective journals, open-ended surveys, and focus group discussions. The results indicate that hands-on experience, interdisciplinary collaboration, and context-aware applications play a critical role in fostering ecological awareness and responsibility among students. Notably, modules such as biosafety testing and water purification prompted transformative engagement with sustainability issues. The study contributes to sustainability education by integrating a decision-analytic structure with reflective learning and intelligent instructional strategies. The proposed model provides valuable implications for educators and policymakers designing interdisciplinary sustainability curricula in smart learning environments.
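The AHP step described above derives priority weights from a pairwise comparison matrix; one common approximation is the row geometric-mean method. A sketch with a hypothetical 3x3 comparison of teaching objectives on Saaty's 1-9 scale (not the study's actual matrix):

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]  # geometric mean per row
    total = sum(gm)
    return [g / total for g in gm]  # normalize so the weights sum to 1

# Hypothetical pairwise comparison of three teaching objectives
pairwise = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # objective 1 dominates
```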

20 pages, 2750 KiB  
Article
E-InMeMo: Enhanced Prompting for Visual In-Context Learning
by Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima and Hajime Nagahara
J. Imaging 2025, 11(7), 232; https://doi.org/10.3390/jimaging11070232 - 11 Jul 2025
Abstract
Large-scale models trained on extensive datasets have become the standard due to their strong generalizability across diverse tasks. In-context learning (ICL), widely used in natural language processing, leverages these models by providing task-specific prompts without modifying their parameters. This paradigm is increasingly being adapted for computer vision, where models receive an input–output image pair, known as an in-context pair, alongside a query image to illustrate the desired output. However, the success of visual ICL largely hinges on the quality of these prompts. To address this, we propose Enhanced Instruct Me More (E-InMeMo), a novel approach that incorporates learnable perturbations into in-context pairs to optimize prompting. Through extensive experiments on standard vision tasks, E-InMeMo demonstrates superior performance over existing state-of-the-art methods. Notably, it improves mIoU scores by 7.99 for foreground segmentation and by 17.04 for single object detection when compared to the baseline without learnable prompts. These results highlight E-InMeMo as a lightweight yet effective strategy for enhancing visual ICL.
(This article belongs to the Section Computer Vision and Pattern Recognition)

19 pages, 14033 KiB  
Article
SCCA-YOLO: Spatial Channel Fusion and Context-Aware YOLO for Lunar Crater Detection
by Jiahao Tang, Boyuan Gu, Tianyou Li and Ying-Bo Lu
Remote Sens. 2025, 17(14), 2380; https://doi.org/10.3390/rs17142380 - 10 Jul 2025
Abstract
Lunar crater detection plays a crucial role in geological analysis and the advancement of lunar exploration. Accurate identification of craters is also essential for constructing high-resolution topographic maps and supporting mission planning in future lunar exploration efforts. However, lunar craters often suffer from insufficient feature representation due to their small size and blurred boundaries. In addition, the visual similarity between craters and surrounding terrain further exacerbates background confusion. These challenges significantly hinder detection performance in remote sensing imagery and underscore the necessity of enhancing both local feature representation and global semantic reasoning. In this paper, we propose a novel Spatial Channel Fusion and Context-Aware YOLO (SCCA-YOLO) model built upon the YOLO11 framework. Specifically, the Context-Aware Module (CAM) employs a multi-branch dilated convolutional structure to enhance feature richness and expand the local receptive field, thereby strengthening the feature extraction capability. The Joint Spatial and Channel Fusion Module (SCFM) is utilized to fuse spatial and channel information to model the global relationships between craters and the background, effectively suppressing background noise and reinforcing feature discrimination. In addition, the improved Channel Attention Concatenation (CAC) strategy adaptively learns channel-wise importance weights during feature concatenation, further optimizing multi-scale semantic feature fusion and enhancing the model’s sensitivity to critical crater features. The proposed method is validated on a self-constructed Chang’e 6 dataset, covering the landing site and its surrounding areas. Experimental results demonstrate that our model achieves an mAP@0.5 of 96.5% and an mAP@0.5:0.95 of 81.5%, outperforming other mainstream detection models, including the YOLO family of algorithms. These findings highlight the potential of SCCA-YOLO for high-precision lunar crater detection and provide valuable insights into future lunar surface analysis.
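The mAP@0.5 and mAP@0.5:0.95 figures rest on the intersection-over-union between predicted and ground-truth boxes: a prediction counts as correct at mAP@0.5 only if its IoU reaches 0.5. A minimal IoU helper (generic, not tied to SCCA-YOLO's implementation):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```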

21 pages, 4829 KiB  
Article
Quantification of MODIS Land Surface Temperature Downscaled by Machine Learning Algorithms
by Qi Su, Xiangchen Meng, Lin Sun and Zhongqiang Guo
Remote Sens. 2025, 17(14), 2350; https://doi.org/10.3390/rs17142350 - 9 Jul 2025
Abstract
Land Surface Temperature (LST) is essential for understanding the interactions between the land surface and the atmosphere. This study presents a comprehensive evaluation of machine learning (ML)-based downscaling algorithms to enhance the spatial resolution of MODIS LST data from 960 m to 30 m, leveraging auxiliary variables including vegetation indices, terrain parameters, and land surface reflectance. By establishing non-linear relationships between LST and predictive variables through eXtreme Gradient Boosting (XGBoost) and Random Forest (RF) algorithms, the proposed framework was rigorously validated using in situ measurements across China’s Heihe River Basin. Comparative analyses demonstrated that integrating multiple vegetation indices (e.g., NDVI, SAVI) with terrain factors yielded superior accuracy compared to configurations utilizing land surface reflectance or excessive variable combinations. While slope and aspect parameters marginally improved accuracy in mountainous regions, including them degraded performance in flat terrain. Notably, land surface reflectance proved to be ineffective in snow/ice-covered areas, highlighting the need for specialized treatment in cryospheric environments. This work provides a reference for LST downscaling, with significant implications for environmental monitoring and urban heat island investigations.
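Statistical LST downscaling of this kind usually follows one pattern: fit a model from predictors at the coarse scale, apply it at the fine scale, and add back the coarse-scale residuals. A sketch of that pattern with an ordinary least-squares stand-in for the XGBoost/RF regressors and hypothetical NDVI/LST values (not Heihe River Basin data):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b (a stand-in for XGBoost/RF)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Coarse-scale training: LST (K) against a vegetation index, hypothetical values
ndvi_coarse = [0.2, 0.4, 0.6, 0.8]
lst_coarse = [310.0, 306.0, 302.0, 298.0]
a, b = fit_line(ndvi_coarse, lst_coarse)

# Residual correction: each coarse cell's residual is added back at the fine scale
residuals = [t - (a * v + b) for v, t in zip(ndvi_coarse, lst_coarse)]
ndvi_fine = [0.3, 0.7]  # fine-resolution pixels inside the first coarse cell
lst_fine = [a * v + b + residuals[0] for v in ndvi_fine]
print([round(t, 1) for t in lst_fine])
```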

14 pages, 6120 KiB  
Article
Drones and Deep Learning for Detecting Fish Carcasses During Fish Kills
by Edna G. Fernandez-Figueroa, Stephanie R. Rogers and Dinesh Neupane
Drones 2025, 9(7), 482; https://doi.org/10.3390/drones9070482 - 8 Jul 2025
Abstract
Fish kills are sudden mass mortalities that occur in freshwater and marine systems worldwide. Fish kill surveys are essential for assessing the ecological and economic impacts of fish kill events, but are often labor-intensive, time-consuming, and spatially limited. This study aims to address these challenges by exploring the application of unoccupied aerial systems (or drones) and deep learning techniques for coastal fish carcass detection. Seven flights were conducted using a DJI Phantom 4 RGB quadcopter to monitor three sites with different substrates (i.e., sand, rock, shored Sargassum). Orthomosaics generated from drone imagery were useful for detecting carcasses washed ashore, but not floating or submerged carcasses. Single shot multibox detection (SSD) with a ResNet50-based model demonstrated high detection accuracy, with a mean average precision (mAP) of 0.77 and a mean average recall (mAR) of 0.81. The model had slightly higher average precision (AP) when detecting large objects (>42.24 cm long, AP = 0.90) compared to small objects (≤14.08 cm long, AP = 0.77) because smaller objects are harder to recognize and require more contextual reasoning. The results suggest a strong potential future application of these tools for rapid fish kill response and automatic enumeration and characterization of fish carcasses.

32 pages, 1126 KiB  
Review
Exploring the Role of Artificial Intelligence in Smart Healthcare: A Capability and Function-Oriented Review
by Syed Raza Abbas, Huiseung Seol, Zeeshan Abbas and Seung Won Lee
Healthcare 2025, 13(14), 1642; https://doi.org/10.3390/healthcare13141642 - 8 Jul 2025
Abstract
Artificial Intelligence (AI) is transforming smart healthcare by enhancing diagnostic precision, automating clinical workflows, and enabling personalized treatment strategies. This review explores the current landscape of AI in healthcare from two key perspectives: capability types (e.g., Narrow AI and AGI) and functional architectures (e.g., Limited Memory and Theory of Mind). Based on capabilities, most AI systems today are categorized as Narrow AI, performing specific tasks such as medical image analysis and risk prediction with high accuracy. More advanced forms like General Artificial Intelligence (AGI) and Superintelligent AI remain theoretical but hold transformative potential. From a functional standpoint, Limited Memory AI dominates clinical applications by learning from historical patient data to inform decision-making. Reactive systems are used in rule-based alerts, while Theory of Mind (ToM) and Self-Aware AI remain conceptual stages for future development. This dual perspective provides a comprehensive framework to assess the maturity, impact, and future direction of AI in healthcare. It also highlights the need for ethical design, transparency, and regulation as AI systems grow more complex and autonomous, by incorporating cross-domain AI insights. Moreover, we evaluate the viability of developing AGI in regionally specific legal and regulatory frameworks, using South Korea as a case study to emphasize the limitations imposed by infrastructural preparedness and medical data governance regulations.
(This article belongs to the Special Issue The Role of AI in Predictive and Prescriptive Healthcare)

27 pages, 13752 KiB  
Article
Robust Watermarking of Tiny Neural Networks by Fine-Tuning and Post-Training Approaches
by Riccardo Adorante, Alessandro Carra, Marco Lattuada and Danilo Pietro Pau
Symmetry 2025, 17(7), 1094; https://doi.org/10.3390/sym17071094 - 8 Jul 2025
Abstract
Because neural networks pervade many industrial domains and are increasingly complex and accurate, the trained models themselves have become valuable intellectual properties. Developing highly accurate models demands increasingly higher investments of time, capital, and expertise. Many of these models are commonly deployed in cloud services and on resource-constrained edge devices. Consequently, safeguarding them is critically important. Neural network watermarking offers a practical solution to address this need by embedding a unique signature, either as a hidden bit-string or as a distinctive response to specially crafted “trigger” inputs. This allows owners to subsequently prove model ownership even if an adversary attempts to remove the watermark through attacks. In this manuscript, we adapt three state-of-the-art watermarking methods to “tiny” neural networks deployed on edge platforms by exploiting symmetry-related properties that ensure robustness and efficiency. In the context of machine learning, “tiny” is broadly used as a term referring to artificial intelligence techniques deployed in low-energy systems in the mW range and below, e.g., sensors and microcontrollers. We evaluate the robustness of the selected techniques by simulating attacks aimed at erasing the watermark while preserving the model’s original performances. The results before and after attacks demonstrate the effectiveness of these watermarking schemes in protecting neural network intellectual property without degrading the original accuracy.
(This article belongs to the Section Computer)
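Hidden bit-string watermarking can be illustrated with a deliberately simple scheme: encode each signature bit in the sign of a secretly selected weight, and recover the signature later with the same secret key. This toy is not one of the three state-of-the-art methods the paper adapts; it only shows the embed/extract idea:

```python
import random

def embed_watermark(weights, bits, key):
    """Toy white-box watermark: force the sign of secretly chosen weights
    (positive = bit 1, negative = bit 0). Illustrative only."""
    rng = random.Random(key)                      # secret key seeds index choice
    idx = rng.sample(range(len(weights)), len(bits))
    marked = list(weights)
    for i, bit in zip(idx, bits):
        mag = abs(marked[i])
        marked[i] = mag if bit else -mag
    return marked

def extract_watermark(weights, n_bits, key):
    rng = random.Random(key)                      # same key -> same indices
    idx = rng.sample(range(len(weights)), n_bits)
    return [1 if weights[i] > 0 else 0 for i in idx]

w = [0.5, -0.2, 0.1, -0.7, 0.3, 0.9, -0.4, 0.6]  # hypothetical layer weights
signature = [1, 0, 1, 1]
w_marked = embed_watermark(w, signature, key=42)
print(extract_watermark(w_marked, 4, key=42))  # recovers the signature
```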

25 pages, 1312 KiB  
Article
The Role of Exchange Energy in Modeling Core-Electron Binding Energies of Strongly Polar Bonds
by Feng Wang and Delano P. Chong
Molecules 2025, 30(13), 2887; https://doi.org/10.3390/molecules30132887 - 7 Jul 2025
Abstract
Accurate determination of carbon core-electron binding energies (C1s CEBEs) is crucial for X-ray photoelectron spectroscopy (XPS) assignments and predictive computational modeling. This study evaluates density functional theory (DFT)-based methods for calculating C1s core-electron binding energies (CEBEs), comparing three functionals—PW86x-PW91c (DFTpw), mPW1PW, and PBE50—across 68 C1s cases in small hydrocarbons and halogenated molecules (alkyl halides), using the delta self-consistent field ΔSCF (or ΔDFT) method developed by one of the authors over the past decade. The PW86x-PW91c functional achieves a root mean square deviation (RMSD) of 0.1735 eV, with improved accuracy for polar C-X bonds (X=O, F) using mPW1PW and PBE50, reducing the average absolute deviation (AAD) to ~0.132 eV. The study emphasizes the role of Hartree–Fock (HF) exchange in refining CEBE predictions and highlights the synergy between theoretical and experimental approaches. These insights lay the groundwork for machine learning (ML)-driven spectral analysis, advancing materials characterization, and catalysis through more reliable automated XPS assignments.

26 pages, 1804 KiB  
Article
Dependency-Aware Entity–Attribute Relationship Learning for Text-Based Person Search
by Wei Xia, Wenguang Gan and Xinpan Yuan
Big Data Cogn. Comput. 2025, 9(7), 182; https://doi.org/10.3390/bdcc9070182 - 7 Jul 2025
Abstract
Text-based person search (TPS), a critical technology for security and surveillance, aims to retrieve target individuals from image galleries using textual descriptions. The existing methods face two challenges: (1) ambiguous attribute–noun association (AANA), where syntactic ambiguities lead to incorrect associations between attributes and the intended nouns; and (2) textual noise and relevance imbalance (TNRI), where irrelevant or non-discriminative tokens (e.g., ‘wearing’) reduce the saliency of critical visual attributes in the textual description. To address these aspects, we propose the dependency-aware entity–attribute alignment network (DEAAN), a novel framework that explicitly tackles AANA through dependency-guided attention and TNRI via adaptive token filtering. The DEAAN introduces two modules: (1) dependency-assisted implicit reasoning (DAIR) to resolve AANA through syntactic parsing, and (2) relevance-adaptive token selection (RATS) to suppress TNRI by learning token saliency. Experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid demonstrate state-of-the-art performance, with the DEAAN achieving a Rank-1 accuracy of 76.71% and an mAP of 69.07% on CUHK-PEDES, surpassing RDE by 0.77% in Rank-1 and 1.51% in mAP. Ablation studies reveal that DAIR and RATS individually improve Rank-1 by 2.54% and 3.42%, while their combination elevates the performance by 6.35%, validating their synergy. This work bridges structured linguistic analysis with adaptive feature selection, demonstrating practical robustness in surveillance-oriented TPS scenarios.

16 pages, 3606 KiB  
Article
Comparative Study on Rail Damage Recognition Methods Based on Machine Vision
by Wanlin Gao, Riqin Geng and Hao Wu
Infrastructures 2025, 10(7), 171; https://doi.org/10.3390/infrastructures10070171 - 4 Jul 2025
Abstract
With the rapid expansion of railway networks and increasing operational complexity, intelligent rail damage detection has become crucial for ensuring safety and improving maintenance efficiency. Traditional physical inspection methods (e.g., ultrasonic testing, magnetic flux leakage) are limited in terms of efficiency and environmental adaptability. This study proposes a machine vision-based approach leveraging deep learning to identify four primary types of rail damages: corrugations, spalls, cracks, and scratches. A self-developed acquisition device collected 298 field images from the Chongqing Metro system, which were expanded into 1556 samples through data augmentation techniques (including rotation, translation, shearing, and mirroring). This study systematically evaluated three object detection models—YOLOv8, SSD, and Faster R-CNN—in terms of detection accuracy (mAP), missed detection rate (mAR), and training efficiency. The results indicate that YOLOv8 outperformed the other models, achieving an mAP of 0.79, an mAR of 0.69, and the shortest training time of 0.28 h. To further enhance performance, this study integrated the Multi-Head Self-Attention (MHSA) module into YOLOv8, creating MHSA-YOLOv8. The optimized model achieved a significant improvement in mAP by 10% (to 0.89), increased mAR by 20%, and reduced training time by 50% (to 0.14 h). These findings demonstrate the effectiveness of MHSA-YOLOv8 for accurate and efficient rail damage detection in complex environments, offering a robust solution for intelligent railway maintenance.

23 pages, 4607 KiB  
Article
Threshold Soil Moisture Levels Influence Soil CO2 Emissions: A Machine Learning Approach to Predict Short-Term Soil CO2 Emissions from Climate-Smart Fields
by Anoop Valiya Veettil, Atikur Rahman, Ripendra Awal, Ali Fares, Timothy R. Green, Binita Thapa and Almoutaz Elhassan
Sustainability 2025, 17(13), 6101; https://doi.org/10.3390/su17136101 - 3 Jul 2025
Abstract
Machine learning (ML) models are widely used to analyze the spatiotemporal impacts of agricultural practices on environmental sustainability, including the contribution to global greenhouse gas (GHG) emissions. Management practices, such as organic amendment applications, are critical pillars of climate-smart agriculture (CSA) strategies that mitigate GHG emissions while maintaining adequate crop yields. This study investigated the critical threshold of soil moisture level associated with soil CO2 emissions from organically amended plots using the classification and regression tree (CART) algorithm. Also, the study predicted the short-term soil CO2 emissions from organically amended systems using soil moisture and weather variables (i.e., air temperature, relative humidity, and solar radiation) using multilinear regression (MLR) and generalized additive models (GAMs). The different organic amendments considered in this study are biochar (2268 and 4536 kg ha−1) and chicken and dairy manure (0, 224, and 448 kg N/ha) under a sweet corn crop in the greater Houston area, Texas. The results of the CART analysis indicated a direct link between soil moisture level and the magnitude of CO2 flux emission from the amended plots. A threshold of 0.103 m3m−3 was calculated for treatment amended by biochar level I (2268 kg ha−1) and chicken manure at the N recommended rate (CXBX), indicating that if the soil moisture is less than the 0.103 m3m−3 threshold, then the median soil CO2 emission is 142 kg ha−1 d−1. Furthermore, applying biochar at a rate of 4536 kg ha−1 reduced the soil CO2 emissions by 14.5% compared to the control plots. Additionally, the results demonstrate that GAMs outperformed MLR, exhibiting the highest performance under the combined effect of chicken manure and biochar. We conclude that quantifying soil moisture thresholds will provide valuable information for the sustainable mitigation of soil CO2 emissions.
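The CART threshold search amounts to finding the split on soil moisture that minimizes the weighted variance of the two resulting groups of CO2 readings. A sketch on hypothetical moisture/CO2 pairs (the 0.103 m3m−3 threshold above comes from the paper's field data, not from this toy example):

```python
def best_split(x, y):
    """Exhaustive search for the single-variable split minimizing the
    weighted variance of the two child nodes (the regression-tree criterion)."""
    def var(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    order = sorted(set(x))
    candidates = [(a + b) / 2 for a, b in zip(order, order[1:])]  # midpoints

    def cost(t):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        return (len(left) * var(left) + len(right) * var(right)) / len(y)

    return min(candidates, key=cost)

# Hypothetical soil moisture (m3/m3) vs CO2 flux (kg/ha/d) with a jump near 0.10
moisture = [0.06, 0.08, 0.09, 0.12, 0.14, 0.16]
co2 = [140.0, 145.0, 141.0, 260.0, 270.0, 255.0]
print(round(best_split(moisture, co2), 3))  # splits between the two regimes
```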

26 pages, 23383 KiB  
Article
Multi-Focus Image Fusion Based on Dual-Channel Rybak Neural Network and Consistency Verification in NSCT Domain
by Ming Lv, Sensen Song, Zhenhong Jia, Liangliang Li and Hongbing Ma
Fractal Fract. 2025, 9(7), 432; https://doi.org/10.3390/fractalfract9070432 - 30 Jun 2025
Abstract
In multi-focus image fusion, accurately detecting and extracting focused regions remains a key challenge. Some existing methods suffer from misjudgment of focus areas, resulting in incorrect focus information or the unintended retention of blurred regions in the fused image. To address these issues, this paper proposes a novel multi-focus image fusion method that leverages a dual-channel Rybak neural network combined with consistency verification in the nonsubsampled contourlet transform (NSCT) domain. Specifically, the high-frequency sub-bands produced by NSCT decomposition are processed using the dual-channel Rybak neural network and a consistency verification strategy, allowing for more accurate extraction and integration of salient details. Meanwhile, the low-frequency sub-bands are fused using a simple averaging approach to preserve the overall structure and brightness information. The effectiveness of the proposed method has been thoroughly evaluated through comprehensive qualitative and quantitative experiments conducted on three widely used public datasets: Lytro, MFFW, and MFI-WHU. Experimental results show that our method consistently outperforms several state-of-the-art image fusion techniques, including both traditional algorithms and deep learning-based approaches, in terms of visual quality and objective performance metrics (QAB/F, QCB, QE, QFMI, QMI, QMSE, QNCIE, QNMI, QP, and QPSNR). These results clearly demonstrate the robustness and superiority of the proposed fusion framework in handling multi-focus image fusion tasks.

18 pages, 9529 KiB  
Article
Adaptive Temporal Action Localization in Video
by Zhiyu Xu, Zhuqiang Lu, Yong Ding, Liwei Tian and Suping Liu
Electronics 2025, 14(13), 2645; https://doi.org/10.3390/electronics14132645 - 30 Jun 2025
Abstract
Temporal action localization aims to identify the boundaries of the action of interest in a video. Most existing methods take a two-stage approach: first, identify a set of action proposals; then, based on this set, determine the accurate temporal locations of the action of interest. However, the diversely distributed semantics of a video over time have not been well considered, which could compromise the localization performance, especially for ubiquitous short actions or events (e.g., a fall in healthcare and a traffic violation in surveillance). To address this problem, we propose a novel deep learning architecture, namely an adaptive template-guided self-attention network, to characterize the proposals adaptively with their relevant frames. An input video is segmented into temporal frames, within which the spatio-temporal patterns are formulated by a global–local Transformer-based encoder. Each frame serves as the starting frame for a number of proposals of different lengths. Learnable templates for proposals of different lengths are introduced, and each template guides the sampling for proposals with a specific length. It formulates the probabilities for a proposal to form the representation of certain spatio-temporal patterns from its relevant temporal frames. Therefore, the semantics of a proposal can be formulated in an adaptive manner, and a feature map of all proposals can be appropriately characterized. To estimate the IoU of these proposals with ground truth actions, a two-level scheme is introduced. A shortcut connection is also utilized to refine the predictions by using the convolutions of the feature map from coarse to fine. Comprehensive experiments on two benchmark datasets demonstrate the state-of-the-art performance of our proposed method: 32.6% mAP@IoU 0.7 on THUMOS-14 and 9.35% mAP@IoU 0.95 on ActivityNet-1.3.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Image and Video Processing)
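The mAP@IoU figures above are computed from the temporal IoU between proposed and ground-truth segments, the one-dimensional analogue of box IoU. A minimal helper (generic, not the paper's code):

```python
def temporal_iou(a, b):
    """IoU of two temporal segments (start, end), in seconds or frames."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# A proposal counts as a true positive at mAP@IoU 0.7 only if it overlaps
# a ground-truth action this tightly:
print(temporal_iou((2.0, 9.0), (3.0, 10.0)))  # 6 / 8 = 0.75
```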
