Search Results (7,639)

Search Parameters:
Keywords = adaptive imaging

14 pages, 1612 KiB  
Article
Multi-Label Conditioned Diffusion for Cardiac MR Image Augmentation and Segmentation
by Jianyang Li, Xin Ma and Yonghong Shi
Bioengineering 2025, 12(8), 812; https://doi.org/10.3390/bioengineering12080812 - 28 Jul 2025
Abstract
Accurate segmentation of cardiac MR images using deep neural networks is crucial for cardiac disease diagnosis and treatment planning, as it provides quantitative insights into heart anatomy and function. However, achieving high segmentation accuracy relies heavily on extensive, precisely annotated datasets, which are costly and time-consuming to obtain. This study addresses this challenge by proposing a novel data augmentation framework based on a condition-guided diffusion generative model, controlled by multiple cardiac labels. The framework aims to expand annotated cardiac MR datasets and significantly improve the performance of downstream cardiac segmentation tasks. The proposed generative data augmentation framework operates in two stages. First, a Label Diffusion Module is trained to unconditionally generate realistic multi-category spatial masks (encompassing regions such as the left ventricle, interventricular septum, and right ventricle) conforming to anatomical prior probabilities derived from noise. Second, cardiac MR images are generated conditioned on these semantic masks, ensuring a precise one-to-one mapping between synthetic labels and images through the integration of a spatially-adaptive normalization (SPADE) module for structural constraint during conditional model training. The effectiveness of this augmentation strategy is demonstrated using the U-Net model for segmentation on the enhanced 2D cardiac image dataset derived from the M&M Challenge. Results indicate that the proposed method effectively increases dataset sample numbers and significantly improves cardiac segmentation accuracy, achieving a 5% to 10% higher Dice Similarity Coefficient (DSC) compared to traditional data augmentation methods. Experiments further reveal a strong correlation between image generation quality and augmentation effectiveness. This framework offers a robust solution for data scarcity in cardiac image analysis, directly benefiting clinical applications. 
Full article
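The Dice Similarity Coefficient (DSC) quoted in this abstract is the standard overlap metric for comparing a predicted segmentation mask against a ground-truth annotation. As a minimal illustrative sketch only (the function name and flat binary-mask representation are assumptions here, not taken from the paper):

```python
def dice_coefficient(pred, target):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|).

    `pred` and `target` are equal-length sequences of 0/1 labels
    (flattened binary masks); the segmentation model itself is not
    reproduced here.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as perfect agreement
    return 2.0 * intersection / total
```

On this scale, the reported 5% to 10% DSC improvement means, for example, moving from 0.80 toward 0.85 or 0.90 overlap with the annotation.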
24 pages, 6890 KiB  
Article
Multi-Level Transcriptomic and Physiological Responses of Aconitum kusnezoffii to Different Light Intensities Reveal a Moderate-Light Adaptation Strategy
by Kefan Cao, Yingtong Mu and Xiaoming Zhang
Genes 2025, 16(8), 898; https://doi.org/10.3390/genes16080898 - 28 Jul 2025
Abstract
Objectives: Light intensity is a critical environmental factor regulating plant growth, development, and stress adaptation. However, the physiological and molecular mechanisms underlying light responses in Aconitum kusnezoffii, a valuable alpine medicinal plant, remain poorly understood. This study aimed to elucidate the adaptive strategies of A. kusnezoffii under different light intensities through integrated physiological and transcriptomic analyses. Methods: Two-year-old A. kusnezoffii plants were exposed to three controlled light regimes (790, 620, and 450 lx). Leaf anatomical traits were assessed via histological sectioning and microscopic imaging. Antioxidant enzyme activities (CAT, POD, and SOD), membrane lipid peroxidation (MDA content), osmoregulatory substances, and carbon metabolites were quantified using standard biochemical assays. Transcriptomic profiling was conducted using Illumina RNA-seq, with differentially expressed genes (DEGs) identified through DESeq2 and functionally annotated via GO and KEGG enrichment analyses. Results: Moderate light (620 lx) promoted optimal leaf structure by enhancing palisade tissue development and epidermal thickening, while reducing membrane lipid peroxidation. Antioxidant defense capacity was elevated through higher CAT, POD, and SOD activities, alongside increased accumulation of soluble proteins, sugars, and starch. Transcriptomic analysis revealed DEGs enriched in photosynthesis, monoterpenoid biosynthesis, hormone signaling, and glutathione metabolism pathways. Key positive regulators (PHY and HY5) were upregulated, whereas negative regulators (COP1 and PIFs) were suppressed, collectively facilitating chloroplast development and photomorphogenesis. Trend analysis indicated a “down–up” gene expression pattern, with early suppression of stress-responsive genes followed by activation of photosynthetic and metabolic processes. Conclusions: A. 
kusnezoffii employs a coordinated, multi-level adaptation strategy under moderate light (620 lx), integrating leaf structural optimization, enhanced antioxidant defense, and dynamic transcriptomic reprogramming to maintain energy balance, redox homeostasis, and photomorphogenic flexibility. These findings provide a theoretical foundation for optimizing artificial cultivation and light management of alpine medicinal plants. Full article
(This article belongs to the Section Plant Genetics and Genomics)
30 pages, 2578 KiB  
Article
Real-Time Functional Stratification of Tumor Cell Lines Using a Non-Cytotoxic Phospholipoproteomic Platform: A Label-Free Ex Vivo Model
by Ramón Gutiérrez-Sandoval, Francisco Gutiérrez-Castro, Natalia Muñoz-Godoy, Ider Rivadeneira, Adolay Sobarzo, Jordan Iturra, Ignacio Muñoz, Cristián Peña-Vargas, Matías Vidal and Francisco Krakowiak
Biology 2025, 14(8), 953; https://doi.org/10.3390/biology14080953 - 28 Jul 2025
Abstract
The development of scalable, non-invasive tools to assess tumor responsiveness to structurally active immunoformulations remains a critical unmet need in solid tumor immunotherapy. Here, we introduce a real-time, ex vivo functional system to classify tumor cell lines exposed to a phospholipoproteomic platform, without relying on cytotoxicity, co-culture systems, or molecular profiling. Tumor cells were monitored using IncuCyte® S3 (Sartorius) real-time imaging under ex vivo neutral conditions. No dendritic cell components or immune co-cultures were used in this mode. All results are derived from direct tumor cell responses to structurally active formulations. Using eight human tumor lines, we captured proliferative behavior, cell death rates, and secretomic profiles to assign each case into stimulatory, inhibitory, or neutral categories. A structured decision-tree logic supported the classification, and a Functional Stratification Index (FSI) was computed to quantify the response magnitude. Inhibitory lines showed early divergence and high IFN-γ/IL-10 ratios; stimulatory ones exhibited a proliferative gain under balanced immune signaling. The results were reproducible across independent batches. This system enables quantitative phenotypic screening under standardized, marker-free conditions and offers an adaptable platform for functional evaluation in immuno-oncology pipelines where traditional cytotoxic endpoints are insufficient. This approach has been codified into the STIP (Structured Traceability and Immunophenotypic Platform), supporting reproducible documentation across tumor models. This platform contributes to upstream validation logic in immuno-oncology workflows and supports early-stage regulatory documentation. Full article
(This article belongs to the Section Cancer Biology)
34 pages, 9273 KiB  
Review
Multi-Task Deep Learning for Lung Nodule Detection and Segmentation in CT Scans: A Review
by Runhan Li and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 3009; https://doi.org/10.3390/electronics14153009 - 28 Jul 2025
Abstract
Lung nodule detection and segmentation are essential tasks in computer-aided diagnosis (CAD) systems for early lung cancer screening. With the growing availability of CT data and deep learning models, researchers have explored various strategies to improve the performance of these tasks. This review focuses on Multi-Task Learning (MTL) approaches, which unify or cooperatively integrate detection and segmentation by leveraging shared representations. We first provide an overview of traditional and deep learning methods for each task individually, then examine how MTL has been adapted for medical image analysis, with a particular focus on lung CT studies. Key aspects such as network architectures and evaluation metrics are also discussed. The review highlights recent trends, identifies current challenges, and outlines promising directions toward more accurate, efficient, and clinically applicable CAD solutions. The review demonstrates that MTL frameworks significantly enhance efficiency and accuracy in lung nodule analysis by leveraging shared representations, while also identifying critical challenges such as task imbalance and computational demands that warrant further research for clinical adoption. Full article
18 pages, 3347 KiB  
Article
Assessment of Machine Learning-Driven Retrievals of Arctic Sea Ice Thickness from L-Band Radiometry Remote Sensing
by Ferran Hernández-Macià, Gemma Sanjuan Gomez, Carolina Gabarró and Maria José Escorihuela
Computers 2025, 14(8), 305; https://doi.org/10.3390/computers14080305 - 28 Jul 2025
Abstract
This study evaluates machine learning-based methods for retrieving thin Arctic sea ice thickness (SIT) from L-band radiometry, using data from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. In addition to the operational ESA product, three alternative approaches are assessed: a Random Forest (RF) algorithm, a Convolutional Neural Network (CNN) that incorporates spatial coherence, and a Long Short-Term Memory (LSTM) neural network designed to capture temporal coherence. Validation against in situ data from the Beaufort Gyre Exploration Project (BGEP) moorings and the ESA SMOSice campaign demonstrates that the RF algorithm achieves robust performance comparable to the ESA product, despite its simplicity and lack of explicit spatial or temporal modeling. The CNN exhibits a tendency to overestimate SIT and shows higher dispersion, suggesting limited added value when spatial coherence is already present in the input data. The LSTM approach does not improve retrieval accuracy, likely due to the mismatch between satellite resolution and the temporal variability of sea ice conditions. These results highlight the importance of L-band sea ice emission modeling over increasing algorithm complexity and suggest that simpler, adaptable methods such as RF offer a promising foundation for future SIT retrieval efforts. The findings are relevant for refining current methods used with SMOS and for developing upcoming satellite missions, such as ESA’s Copernicus Imaging Microwave Radiometer (CIMR). Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
19 pages, 88349 KiB  
Article
Dynamic Assessment of Street Environmental Quality Using Time-Series Street View Imagery Within Daily Intervals
by Puxuan Zhang, Yichen Liu and Yihua Huang
Land 2025, 14(8), 1544; https://doi.org/10.3390/land14081544 - 27 Jul 2025
Abstract
Rapid urbanization has intensified global settlement density, significantly increasing the importance of urban street environmental quality, which profoundly affects residents’ physical and psychological well-being. Traditional methods for evaluating urban environmental quality have largely overlooked dynamic perceptual changes occurring throughout the day, resulting in incomplete assessments. To bridge this methodological gap, this study presents an innovative approach combining advanced deep learning techniques with time-series street view imagery (SVI) analysis to systematically quantify spatio-temporal variations in the perceived environmental quality of pedestrian-oriented streets. It further addresses two central questions: how perceived environmental quality varies spatially across sections of a pedestrian-oriented street and how these perceptions fluctuate temporally throughout the day. Utilizing Golden Street, a representative living street in Shanghai’s Changning District, as the empirical setting, street view images were manually collected at 96 sampling points across multiple time intervals within a single day. The collected images underwent semantic segmentation using the DeepLabv3+ model, and emotional scores were quantified through the validated MIT Place Pulse 2.0 dataset across six subjective indicators: “Safe,” “Lively,” “Wealthy,” “Beautiful,” “Depressing,” and “Boring.” Spatial and temporal patterns of these indicators were subsequently analyzed to elucidate their relationships with environmental attributes. This study demonstrates the effectiveness of integrating deep learning models with time-series SVI for assessing urban environmental perceptions, providing robust empirical insights for urban planners and policymakers. 
The results emphasize the necessity of context-sensitive, temporally adaptive urban design strategies to enhance urban livability and psychological well-being, ultimately contributing to more vibrant, secure, and sustainable pedestrian-oriented urban environments. Full article
(This article belongs to the Special Issue Planning for Sustainable Urban and Land Development, Second Edition)
26 pages, 3125 KiB  
Article
Tomato Leaf Disease Identification Framework FCMNet Based on Multimodal Fusion
by Siming Deng, Jiale Zhu, Yang Hu, Mingfang He and Yonglin Xia
Plants 2025, 14(15), 2329; https://doi.org/10.3390/plants14152329 - 27 Jul 2025
Abstract
Precisely recognizing diseases in tomato leaves plays a crucial role in enhancing the health, productivity, and quality of tomato crops. However, disease identification methods that rely on single-mode information often face the problems of insufficient accuracy and weak generalization ability. Therefore, this paper proposes a tomato leaf disease recognition framework FCMNet based on multimodal fusion, which combines tomato leaf disease image and text description to enhance the ability to capture disease characteristics. In this paper, the Fourier-guided Attention Mechanism (FGAM) is designed, which systematically embeds the Fourier frequency-domain information into the spatial-channel attention structure for the first time, enhances the stability and noise resistance of feature expression through spectral transform, and realizes more accurate lesion location by means of multi-scale fusion of local and global features. In order to realize the deep semantic interaction between image and text modality, a Cross Vision–Language Alignment module (CVLA) is further proposed. This module generates visual representations compatible with Bert embeddings by utilizing block segmentation and feature mapping techniques. Additionally, it incorporates a probability-based weighting mechanism to achieve enhanced multimodal fusion, significantly strengthening the model’s comprehension of semantic relationships across different modalities. Furthermore, to enhance both training efficiency and parameter optimization capabilities of the model, we introduce a Multi-strategy Improved Coati Optimization Algorithm (MSCOA). This algorithm integrates Good Point Set initialization with a Golden Sine search strategy, thereby boosting global exploration, accelerating convergence, and effectively preventing entrapment in local optima. Consequently, it exhibits robust adaptability and stable performance within high-dimensional search spaces. 
The experimental results show that the FCMNet model has increased the accuracy and precision by 2.61% and 2.85%, respectively, compared with the baseline model on the self-built dataset of tomato leaf diseases, and the recall and F1 score have increased by 3.03% and 3.06%, respectively, which is significantly superior to the existing methods. This research provides a new solution for the identification of tomato leaf diseases and has broad potential for agricultural applications. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
19 pages, 2106 KiB  
Article
Rethinking Infrared and Visible Image Fusion from a Heterogeneous Content Synergistic Perception Perspective
by Minxian Shen, Gongrui Huang, Mingye Ju and Kaikuang Ma
Sensors 2025, 25(15), 4658; https://doi.org/10.3390/s25154658 - 27 Jul 2025
Abstract
Infrared and visible image fusion (IVIF) endeavors to amalgamate the thermal radiation characteristics from infrared images with the fine-grained texture details from visible images, aiming to produce fused outputs that are more robust and information-rich. Among the existing methodologies, those based on generative adversarial networks (GANs) have demonstrated considerable promise. However, such approaches are frequently constrained by their reliance on homogeneous discriminators possessing identical architectures, a limitation that can precipitate the emergence of undesirable artifacts in the resultant fused images. To surmount this challenge, this paper introduces HCSPNet, a novel GAN-based framework. HCSPNet distinctively incorporates heterogeneous dual discriminators, meticulously engineered for the fusion of disparate source images inherent in the IVIF task. This architectural design ensures the steadfast preservation of critical information from the source inputs, even when faced with scenarios of image degradation. Specifically, the two structurally distinct discriminators within HCSPNet are augmented with adaptive salient information distillation (ASID) modules, each uniquely structured to align with the intrinsic properties of infrared and visible images. This mechanism impels the discriminators to concentrate on pivotal components during their assessment of whether the fused image has proficiently inherited significant information from the source modalities—namely, the salient thermal signatures from infrared imagery and the detailed textural content from visible imagery—thereby markedly diminishing the occurrence of unwanted artifacts. Comprehensive experimentation conducted across multiple publicly available datasets substantiates the preeminence and generalization capabilities of HCSPNet, underscoring its significant potential for practical deployment. 
Additionally, we also prove that our proposed heterogeneous dual discriminators can serve as a plug-and-play structure to improve the performance of existing GAN-based methods. Full article
(This article belongs to the Section Sensing and Imaging)
21 pages, 3448 KiB  
Article
A Welding Defect Detection Model Based on Hybrid-Enhanced Multi-Granularity Spatiotemporal Representation Learning
by Chenbo Shi, Shaojia Yan, Lei Wang, Changsheng Zhu, Yue Yu, Xiangteng Zang, Aiping Liu, Chun Zhang and Xiaobing Feng
Sensors 2025, 25(15), 4656; https://doi.org/10.3390/s25154656 - 27 Jul 2025
Abstract
Real-time quality monitoring using molten pool images is a critical focus in researching high-quality, intelligent automated welding. To address interference problems in molten pool images under complex welding scenarios (e.g., reflected laser spots from spatter misclassified as porosity defects) and the limited interpretability of deep learning models, this paper proposes a multi-granularity spatiotemporal representation learning algorithm based on the hybrid enhancement of handcrafted and deep learning features. A MobileNetV2 backbone network integrated with a Temporal Shift Module (TSM) is designed to progressively capture the short-term dynamic features of the molten pool and integrate temporal information across both low-level and high-level features. A multi-granularity attention-based feature aggregation module is developed to select key interference-free frames using cross-frame attention, generate multi-granularity features via grouped pooling, and apply the Convolutional Block Attention Module (CBAM) at each granularity level. Finally, these multi-granularity spatiotemporal features are adaptively fused. Meanwhile, an independent branch utilizes the Histogram of Oriented Gradient (HOG) and Scale-Invariant Feature Transform (SIFT) features to extract long-term spatial structural information from historical edge images, enhancing the model’s interpretability. The proposed method achieves an accuracy of 99.187% on a self-constructed dataset. Additionally, it attains a real-time inference speed of 20.983 ms per sample on a hardware platform equipped with an Intel i9-12900H CPU and an RTX 3060 GPU, thus effectively balancing accuracy, speed, and interpretability. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
28 pages, 6143 KiB  
Article
Optical Character Recognition Method Based on YOLO Positioning and Intersection Ratio Filtering
by Kai Cui, Qingpo Xu, Yabin Ding, Jiangping Mei, Ying He and Haitao Liu
Symmetry 2025, 17(8), 1198; https://doi.org/10.3390/sym17081198 - 27 Jul 2025
Abstract
Driven by the rapid development of e-commerce and intelligent logistics, the volume of express delivery services has surged, making the efficient and accurate identification of shipping information a core requirement for automatic sorting systems. However, traditional Optical Character Recognition (OCR) technology struggles to meet the accuracy and real-time demands of complex logistics scenarios due to challenges such as image distortion, uneven illumination, and field overlap. This paper proposes a three-level collaborative recognition method based on deep learning that facilitates structured information extraction through regional normalization, dual-path parallel extraction, and a dynamic matching mechanism. First, the geometric distortion associated with contour detection and the lightweight direction classification model has been improved. Second, by integrating the enhanced YOLOv5s for key area localization with the upgraded PaddleOCR for full-text character extraction, a dual-path parallel architecture for positioning and recognition has been constructed. Finally, a dynamic space–semantic joint matching module has been designed that incorporates anti-offset IoU metrics and hierarchical semantic regularization constraints, thereby enhancing matching robustness through density-adaptive weight adjustment. Experimental results indicate that the accuracy of this method on a self-constructed dataset is 89.5%, with an F1 score of 90.1%, representing a 24.2% improvement over traditional OCR methods. The dynamic matching mechanism elevates the average accuracy of YOLOv5s from 78.5% to 89.7%, surpassing the Faster R-CNN benchmark model while maintaining a real-time processing efficiency of 76 FPS. This study offers a lightweight and highly robust solution for the efficient extraction of order information in complex logistics scenarios, significantly advancing the intelligent upgrading of sorting systems. Full article
(This article belongs to the Section Physics)
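The anti-offset IoU metric mentioned in this abstract builds on plain Intersection-over-Union between a detected region and a reference region. A minimal sketch of standard IoU for axis-aligned boxes (the `(x1, y1, x2, y2)` corner convention and function name are assumptions; the paper's anti-offset variant is not reproduced):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

IoU ranges from 0 (disjoint) to 1 (identical), which is what makes it usable as a matching score between located key areas and recognized text fields.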
27 pages, 4677 KiB  
Article
DERIENet: A Deep Ensemble Learning Approach for High-Performance Detection of Jute Leaf Diseases
by Mst. Tanbin Yasmin Tanny, Tangina Sultana, Md. Emran Biswas, Chanchol Kumar Modok, Arjina Akter, Mohammad Shorif Uddin and Md. Delowar Hossain
Information 2025, 16(8), 638; https://doi.org/10.3390/info16080638 - 27 Jul 2025
Abstract
Jute, a vital lignocellulosic fiber crop with substantial industrial and ecological relevance, continues to suffer considerable yield and quality degradation due to pervasive foliar pathologies. Traditional diagnostic modalities reliant on manual field inspections are inherently constrained by subjectivity, diagnostic latency, and inadequate scalability across geographically distributed agrarian systems. To transcend these limitations, we propose DERIENet, a robust and scalable classification approach within a deep ensemble learning framework. It is meticulously engineered by integrating three high-performing convolutional neural networks—ResNet50, InceptionV3, and EfficientNetB0—along with regularization, batch normalization, and dropout strategies, to accurately classify jute leaf diseases such as Cercospora Leaf Spot, Golden Mosaic Virus, and healthy leaves. A key methodological contribution is the design of a novel augmentation pipeline, termed Geometric Localized Occlusion and Adaptive Rescaling (GLOAR), which dynamically modulates photometric and geometric distortions based on image entropy and luminance to synthetically upscale a limited dataset (920 images) into a significantly enriched and diverse dataset of 7800 samples, thereby mitigating overfitting and enhancing domain generalizability. Empirical evaluation, utilizing a comprehensive set of performance metrics—accuracy, precision, recall, F1-score, confusion matrices, and ROC curves—demonstrates that DERIENet achieves a state-of-the-art classification accuracy of 99.89%, with macro-averaged and weighted average precision, recall, and F1-score uniformly at 99.89%, and an AUC of 1.0 across all disease categories. The reliability of the model is validated by the confusion matrix, which shows that 899 out of 900 test images were correctly identified and that there was only one misclassification. 
Comparative evaluations of the various ensemble baselines, such as DenseNet201, MobileNetV2, and VGG16, and individual base learners demonstrate that DERIENet performs noticeably superior to all baseline models. It provides a highly interpretable, deployment-ready, and computationally efficient architecture that is ideal for integrating into edge or mobile platforms to facilitate in situ, real-time disease diagnostics in precision agriculture. Full article
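The precision, recall, and F1-score figures quoted in this abstract follow their standard definitions from the confusion matrix; a minimal per-class sketch (function name and label encoding are assumptions, not from the paper):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Per-class precision, recall, and F1 from parallel label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Macro-averaging, as reported for DERIENet, simply takes the unweighted mean of these per-class values across the disease categories.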
21 pages, 5205 KiB  
Article
SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation
by Haijiao Yun, Qingyu Du, Ziqing Han, Mingjing Li, Le Yang, Xinyang Liu, Chao Wang and Weitian Ma
Sensors 2025, 25(15), 4652; https://doi.org/10.3390/s25154652 - 27 Jul 2025
Abstract
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts, such as hair interference. Conventional deep learning methods, typically based on UNet or Transformer architectures, often face limitations in regard to fully exploiting lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network, integrating a hybrid CNN–Mamba framework for robust skin lesion segmentation. The SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments based on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet’s superior performance, with average improvements of 3.30% in terms of the mean Intersection over Union (mIoU) value and 1.77% in regard to the Dice Similarity Coefficient (DSC) compared to state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet’s exceptional accuracy and robust generalization for computer-aided dermatological diagnosis. Full article
(This article belongs to the Section Biomedical Sensors)
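The mIoU and DSC figures reported above are standard overlap metrics for binary segmentation masks. As a brief illustration of the standard definitions (not code from the paper), both can be computed directly from boolean prediction and ground-truth arrays:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # DSC = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|; mIoU averages this over classes
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For multi-class segmentation, mIoU is the mean of per-class IoU values over all classes present in the ground truth.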
18 pages, 16066 KiB  
Article
DGMN-MISABO: A Physics-Informed Degradation and Optimization Framework for Realistic Synthetic Droplet Image Generation in Inkjet Printing
by Jiacheng Cai, Jiankui Chen, Wei Tang, Jinliang Wu, Jingcheng Ruan and Zhouping Yin
Machines 2025, 13(8), 657; https://doi.org/10.3390/machines13080657 - 27 Jul 2025
Abstract
The Online Droplet Inspection system plays a vital role in closed-loop control for OLED inkjet printing. However, generating realistic synthetic droplet images for reliable restoration and precise measurement of droplet parameters remains challenging due to the complex, multi-factor degradation inherent to microscale droplet imaging. To address this, we propose a physics-informed degradation model, Diffraction–Gaussian–Motion–Noise (DGMN), that integrates Fraunhofer diffraction, defocus blur, motion blur, and adaptive noise to replicate real-world degradation in droplet images. To optimize the multi-parameter configuration of DGMN, we introduce the MISABO (Multi-strategy Improved Subtraction-Average-Based Optimizer), which incorporates Sobol sequence initialization for search diversity, lens opposition-based learning (LensOBL) for enhanced accuracy, and dimension learning-based hunting (DLH) for balanced global–local optimization. Benchmark function evaluations demonstrate that MISABO achieves superior convergence speed and accuracy. When applied to generate synthetic droplet images based on real droplet images captured from a self-developed OLED inkjet printer, the proposed MISABO-optimized DGMN framework significantly improves realism, enhancing synthesis quality by 37.7% over traditional manually configured models. This work lays a solid foundation for generating high-quality synthetic data to support droplet image restoration and downstream inkjet printing processes.
(This article belongs to the Section Advanced Manufacturing)
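The layered degradation idea behind DGMN (minus the Fraunhofer diffraction term) can be sketched with plain numpy: a clean image is passed through defocus blur, motion blur, and additive noise in sequence. This is a hedged illustration only; the function and parameter names are hypothetical and the paper's model and optimized parameter values are more involved:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # 1-D normalized Gaussian kernel for separable defocus blur
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def separable_blur(img, k):
    # convolve rows, then columns, with a 1-D kernel
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def degrade(img, defocus_sigma=1.5, motion_len=5, noise_std=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    out = separable_blur(img, gaussian_kernel(9, defocus_sigma))      # defocus blur
    motion = np.ones(motion_len) / motion_len                         # horizontal motion blur
    out = np.apply_along_axis(lambda r: np.convolve(r, motion, mode="same"), 1, out)
    out = out + rng.normal(0.0, noise_std, out.shape)                 # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)
```

In the paper, the analogous degradation parameters are not hand-tuned but searched by the MISABO optimizer against real captured droplet images.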
29 pages, 17807 KiB  
Article
Low-Cost Microalgae Cell Concentration Estimation in Hydrochemistry Applications Using Computer Vision
by Julia Borisova, Ivan V. Morshchinin, Veronika I. Nazarova, Nelli Molodkina and Nikolay O. Nikitin
Sensors 2025, 25(15), 4651; https://doi.org/10.3390/s25154651 - 27 Jul 2025
Abstract
Accurate and efficient estimation of microalgae cell concentration is critical for applications in hydrochemical monitoring, biofuel production, pharmaceuticals, and ecological studies. Traditional methods, such as manual counting with a hemocytometer, are time-consuming and prone to human error, while automated systems are often costly and require extensive training data. This paper presents a low-cost, automated approach for estimating cell concentration in Chlorella vulgaris suspensions using classical computer vision techniques. The proposed method eliminates the need for deep learning by leveraging the Hough circle transform to detect and count cells in microscope images, combined with a conversion factor to translate pixel measurements into metric units for direct concentration calculation (cells/mL). Validation against manual hemocytometer counts demonstrated strong agreement, with a Pearson correlation coefficient of 0.96 and a mean percentage difference of 17.96%. The system achieves rapid processing (under 30 s per image) and offers interpretability, allowing specialists to verify results visually. Key advantages include affordability, minimal hardware requirements, and adaptability to other microbiological applications. Limitations, such as sensitivity to cell clumping and impurities, are discussed. This work provides a practical, accessible solution for laboratories lacking expensive automated equipment, bridging the gap between manual methods and high-end technologies.
(This article belongs to the Section Environmental Sensing)
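The pixel-to-concentration step described above is straightforward once cells have been counted (e.g. via OpenCV's `cv2.HoughCircles`): the imaged field of view, at known magnification and chamber depth, defines a sampled volume, and concentration is the count divided by that volume. A hedged sketch with hypothetical parameter names, not the paper's exact conversion factor:

```python
def cells_per_ml(n_cells, frame_px_w, frame_px_h, um_per_px, depth_um):
    """Convert a per-image cell count to a concentration in cells/mL.

    um_per_px: microscope calibration (µm per pixel); depth_um: chamber depth.
    """
    # field-of-view area in µm² times chamber depth gives sampled volume in µm³
    vol_um3 = (frame_px_w * um_per_px) * (frame_px_h * um_per_px) * depth_um
    vol_ml = vol_um3 * 1e-12  # 1 mL = 1 cm³ = 1e12 µm³
    return n_cells / vol_ml
```

For example, 100 detected cells in a 1000 × 1000 px field at 1 µm/px over a 100 µm-deep chamber corresponds to 10⁶ cells/mL.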
24 pages, 2508 KiB  
Article
Class-Discrepancy Dynamic Weighting for Cross-Domain Few-Shot Hyperspectral Image Classification
by Chen Ding, Jiahao Yue, Sirui Zheng, Yizhuo Dong, Wenqiang Hua, Xueling Chen, Yu Xie, Song Yan, Wei Wei and Lei Zhang
Remote Sens. 2025, 17(15), 2605; https://doi.org/10.3390/rs17152605 - 27 Jul 2025
Abstract
In recent years, cross-domain few-shot learning (CDFSL) has demonstrated remarkable performance in hyperspectral image classification (HSIC), partially alleviating the distribution shift problem. However, most domain adaptation methods rely on similarity metrics to establish cross-domain class matching, making it difficult to simultaneously account for intra-class sample size variations and inherent inter-class differences. To address this problem, existing studies have introduced a class weighting mechanism within the prototype network framework, determining class weights by calculating inter-sample similarity through distance metrics. However, this method suffers from a dual limitation: susceptibility to noise interference and insufficient capacity to capture global class variations, which may lead to distorted weight allocation and consequently result in alignment bias. To solve these issues, we propose a novel class-discrepancy dynamic weighting-based cross-domain FSL (CDDW-CFSL) framework. It integrates three key components: (1) the class-weighted domain adaptation (CWDA) method dynamically measures cross-domain distribution shifts using global class mean discrepancies. It employs discrepancy-sensitive weighting to strengthen the alignment of critical categories, enabling accurate domain adaptation while maintaining feature topology; (2) the class mean refinement (CMR) method incorporates class covariance distance to compute distribution discrepancies between support set samples and class prototypes, enabling the precise capture of cross-domain feature internal structures; (3) a novel multi-dimensional feature extractor captures both local spatial details and continuous spectral characteristics simultaneously, facilitating deep cross-dimensional feature fusion. Results on three publicly available HSIC datasets demonstrate the effectiveness of CDDW-CFSL.
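The discrepancy-sensitive weighting idea in CWDA — weighting each class by how far its source-domain and target-domain feature means have drifted apart — can be sketched in a few lines of numpy. This is an illustrative simplification under assumed names, not the paper's exact formulation:

```python
import numpy as np

def class_discrepancy_weights(src_feats, src_labels, tgt_feats, tgt_labels, n_classes):
    """Weight each class by the distance between its source and target mean embeddings."""
    d = np.zeros(n_classes)
    for c in range(n_classes):
        mu_src = src_feats[src_labels == c].mean(axis=0)
        mu_tgt = tgt_feats[tgt_labels == c].mean(axis=0)
        d[c] = np.linalg.norm(mu_src - mu_tgt)  # larger shift -> larger discrepancy
    return d / (d.sum() + 1e-12)                # normalized, discrepancy-sensitive weights
```

Classes whose distributions have shifted most across domains receive the largest weights, so an alignment loss scaled by these weights concentrates on the categories that need it.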