Search Results (5,126)

Search Parameters:
Keywords = image transfer

15 pages, 4409 KiB  
Article
Performance of Dual-Layer Flat-Panel Detectors
by Dong Sik Kim and Dayeon Lee
Diagnostics 2025, 15(15), 1889; https://doi.org/10.3390/diagnostics15151889 - 28 Jul 2025
Abstract
Background/Objectives: In digital radiography imaging, dual-layer flat-panel detectors (DFDs), in which two flat-panel detector layers are stacked with a minimal distance between the layers and appropriate alignment, are commonly used for material decomposition in dual-energy applications with a single x-ray exposure. DFDs also enable more efficient use of incident photons, yielding x-ray images with improved noise power spectrum (NPS) and detective quantum efficiency (DQE) performance in single-energy applications. Purpose: Although the development of DFD systems for material decomposition applications is actively underway, there is a lack of research on whether single-energy applications of DFDs can achieve better performance than the single-layer case. In this paper, we experimentally observe and discuss DFD performance in terms of the modulation transfer function (MTF), NPS, and DQE. Methods: Using DFD prototypes, we experimentally measure the MTF, NPS, and DQE of the convex combination of the images acquired from the upper and lower detector layers. To optimize DFD performance, a two-step image registration is performed, in which subpixel registration based on the maximum amplitude response to a transform derived from the Fourier shift theorem and an affine transformation using cubic interpolation are adopted. The DFD performance is analyzed and discussed through extensive experiments for various scintillator thicknesses, x-ray beam conditions, and incident doses. Results: Under RQA 9 beam conditions at a 2.7 μGy dose, a DFD with upper and lower scintillator thicknesses of 0.5 mm achieved a zero-frequency DQE of 75%, compared with 56% for a single-layer detector. This implies that a DFD using 75% of the incident dose required by a single-layer detector can provide the same signal-to-noise ratio.
Conclusions: In single-energy radiography imaging, DFD can provide better NPS and DQE performances than the case of the single-layer detector, especially at relatively high x-ray energies, which enables low-dose imaging. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
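The layer-combination step described in the abstract above can be sketched numerically. This is an illustrative stand-in with synthetic images and assumed noise levels, not the paper's detector data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: both layers see the same signal with independent noise.
signal = np.full((64, 64), 100.0)
sigma_u, sigma_l = 4.0, 7.0  # assumed noise levels (upper, lower layer)
upper = signal + rng.normal(0.0, sigma_u, signal.shape)
lower = signal + rng.normal(0.0, sigma_l, signal.shape)

def convex_combine(img_u, img_l, var_u, var_l):
    """Noise-minimizing convex combination w*U + (1 - w)*L.

    For independent zero-mean noise, Var = w^2*var_u + (1-w)^2*var_l,
    which is minimized at w = var_l / (var_u + var_l).
    """
    w = var_l / (var_u + var_l)
    return w * img_u + (1.0 - w) * img_l, w

combined, w = convex_combine(upper, lower, sigma_u**2, sigma_l**2)
```

The combined image's noise is lower than either layer's alone, which is the mechanism behind the improved NPS/DQE in single-energy use.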

24 pages, 2159 KiB  
Article
Cross-Domain Transfer Learning Architecture for Microcalcification Cluster Detection Using the MEXBreast Multiresolution Mammography Dataset
by Ricardo Salvador Luna Lozoya, Humberto de Jesús Ochoa Domínguez, Juan Humberto Sossa Azuela, Vianey Guadalupe Cruz Sánchez, Osslan Osiris Vergara Villegas and Karina Núñez Barragán
Mathematics 2025, 13(15), 2422; https://doi.org/10.3390/math13152422 - 28 Jul 2025
Abstract
Microcalcification clusters (MCCs) are key indicators of breast cancer, with studies showing that approximately 50% of mammograms with MCCs confirm a cancer diagnosis. Early detection is critical, as it ensures a five-year survival rate of up to 99%. However, MCC detection remains challenging due to their features, such as small size, texture, shape, and impalpability. Convolutional neural networks (CNNs) offer a solution for MCC detection. Nevertheless, CNNs are typically trained on single-resolution images, limiting their generalizability across different image resolutions. We propose a CNN trained on digital mammograms with three common resolutions: 50, 70, and 100 μm. The architecture processes individual 1 cm² patches extracted from the mammograms as input samples and includes a MobileNetV2 backbone, followed by a flattening layer, a dense layer, and a sigmoid activation function. This architecture was trained to detect MCCs using patches extracted from the INbreast database, which has a resolution of 70 μm, and achieved an accuracy of 99.84%. We applied transfer learning (TL) and trained on 50, 70, and 100 μm resolution patches from the MEXBreast database, achieving accuracies of 98.32%, 99.27%, and 89.17%, respectively. For comparison purposes, models trained from scratch, without leveraging knowledge from the pretrained model, achieved 96.07%, 99.20%, and 83.59% accuracy for 50, 70, and 100 μm, respectively. Results demonstrate that TL improves MCC detection across resolutions by reusing pretrained knowledge. Full article
(This article belongs to the Special Issue Mathematical Methods in Artificial Intelligence for Image Processing)
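The three resolutions above imply different pixel counts for a 1 cm² patch. A small bookkeeping sketch (function names are illustrative, not from the paper):

```python
import numpy as np

PATCH_UM = 10_000  # a 1 cm^2 patch is 10,000 um per side

def patch_size_px(pixel_pitch_um):
    """Side length in pixels of a 1 cm^2 patch at a given pixel pitch (um)."""
    return int(round(PATCH_UM / pixel_pitch_um))

def extract_patch(mammogram, center_rc, pixel_pitch_um):
    """Crop a square 1 cm^2 patch centred on a candidate MCC location."""
    n = patch_size_px(pixel_pitch_um)
    r0 = center_rc[0] - n // 2
    c0 = center_rc[1] - n // 2
    return mammogram[r0:r0 + n, c0:c0 + n]
```

At 50, 70, and 100 μm this gives 200-, 143-, and 100-pixel patches, which is why a single fixed-input CNN needs cross-resolution transfer in the first place.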

18 pages, 5229 KiB  
Article
Exploring the Spectral Variability of Estonian Lakes Using Spaceborne Imaging Spectroscopy
by Alice Fabbretto, Mariano Bresciani, Andrea Pellegrino, Kersti Kangro, Anna Joelle Greife, Lodovica Panizza, François Steinmetz, Joel Kuusk, Claudia Giardino and Krista Alikas
Appl. Sci. 2025, 15(15), 8357; https://doi.org/10.3390/app15158357 - 27 Jul 2025
Abstract
This study investigates the potential of spaceborne imaging spectroscopy to support the analysis of the status of two major Estonian lakes, Lake Peipsi and Lake Võrtsjärv, using data from the PRISMA and EnMAP missions. The study encompasses nine specific applications across 12 satellite scenes, including the validation of remote sensing reflectance (Rrs), optical water type classification, estimation of phycocyanin concentration, detection of macrophytes, and characterization of reflectance for lake ice/snow coverage. Rrs validation, performed using in situ measurements and Sentinel-2 and Sentinel-3 as references, showed a level of agreement with Spectral Angle < 16°. Hyperspectral imagery successfully captured fine-scale spatial and spectral features not detectable by multispectral sensors; in particular, it was possible to identify cyanobacterial pigments and optical variations driven by seasonal and meteorological dynamics. Combined with in situ observations, the study can serve as a starting point for the use of hyperspectral data in northern freshwater systems, offering new insights into ecological processes. Given the increasing global concern over freshwater ecosystem health, this work provides a transferable framework for leveraging new-generation hyperspectral missions to enhance water quality monitoring on a global scale. Full article
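The Spectral Angle agreement metric cited above is straightforward to compute; a minimal sketch:

```python
import numpy as np

def spectral_angle_deg(r1, r2):
    """Spectral angle between two reflectance spectra, in degrees.

    The angle between the spectra viewed as vectors: insensitive to overall
    brightness scaling, sensitive to spectral shape differences.
    """
    cos = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

A scaled copy of a spectrum scores 0°, so "Spectral Angle < 16°" measures shape agreement between satellite and reference Rrs, not absolute magnitude.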

21 pages, 977 KiB  
Article
Fall Detection Using Federated Lightweight CNN Models: A Comparison of Decentralized vs. Centralized Learning
by Qasim Mahdi Haref, Jun Long and Zhan Yang
Appl. Sci. 2025, 15(15), 8315; https://doi.org/10.3390/app15158315 - 25 Jul 2025
Abstract
Fall detection is a critical task in healthcare monitoring systems, especially for elderly populations, for whom timely intervention can significantly reduce morbidity and mortality. This study proposes a privacy-preserving and scalable fall-detection framework that integrates federated learning (FL) with transfer learning (TL) to train deep learning models across decentralized data sources without compromising user privacy. The pipeline begins with data acquisition, in which annotated video-based fall-detection datasets formatted in YOLO are used to extract image crops of human subjects. These images are then preprocessed, resized, normalized, and relabeled into binary classes (fall vs. non-fall). A stratified 80/10/10 split ensures balanced training, validation, and testing. To simulate real-world federated environments, the training data is partitioned across multiple clients, each performing local training using pretrained CNN models including MobileNetV2, VGG16, EfficientNetB0, and ResNet50. Two FL topologies are implemented: a centralized server-coordinated scheme and a ring-based decentralized topology. During each round, only model weights are shared, and federated averaging (FedAvg) is applied for global aggregation. The models were trained using three random seeds to ensure result robustness and stability across varying data partitions. Among all configurations, decentralized MobileNetV2 achieved the best results, with a mean test accuracy of 0.9927, F1-score of 0.9917, and average training time of 111.17 s per round. These findings highlight the model’s strong generalization, low computational burden, and suitability for edge deployment. Future work will extend evaluation to external datasets and address issues such as client drift and adversarial robustness in federated environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
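The federated averaging (FedAvg) aggregation used in this pipeline can be sketched as a dataset-size-weighted mean of per-layer weights; the clients and weights below are toy values:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: dataset-size-weighted average of per-layer model weights.

    client_weights: one entry per client, each a list of numpy arrays
    (one array per layer). Only these weights cross the network; the raw
    client data never leaves the device, which is the privacy argument.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum((n / total) * model[i] for model, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Toy round: two clients, one-layer "models", client 2 holds 3x the data.
global_layer = fedavg(
    [[np.array([0.0, 2.0])], [np.array([4.0, 6.0])]],
    client_sizes=[1, 3],
)[0]
```

In the ring-based decentralized variant the same average is formed without a coordinating server, by passing and accumulating weights around the ring.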
19 pages, 2644 KiB  
Article
Multispectral and Chlorophyll Fluorescence Imaging Fusion Using 2D-CNN and Transfer Learning for Cross-Cultivar Early Detection of Verticillium Wilt in Eggplants
by Dongfang Zhang, Shuangxia Luo, Jun Zhang, Mingxuan Li, Xiaofei Fan, Xueping Chen and Shuxing Shen
Agronomy 2025, 15(8), 1799; https://doi.org/10.3390/agronomy15081799 - 25 Jul 2025
Abstract
Verticillium wilt is characterized by chlorosis in leaves and is a devastating disease in eggplant. Early diagnosis, prior to the manifestation of symptoms, enables targeted management of the disease. In this study, we aim to detect early leaf wilt in eggplant leaves caused by Verticillium dahliae by integrating multispectral imaging with machine learning and deep learning techniques. Multispectral and chlorophyll fluorescence images were collected from leaves of the inbred eggplant line 11-435, including data on image texture, spectral reflectance, and chlorophyll fluorescence. Subsequently, we established a multispectral data model, fusion information model, and multispectral image–information fusion model. The multispectral image–information fusion model, integrated with a two-dimensional convolutional neural network (2D-CNN), demonstrated optimal performance in classifying early-stage Verticillium wilt infection, achieving a test accuracy of 99.37%. Additionally, transfer learning enabled us to diagnose early leaf wilt in another eggplant variety, the inbred line 14-345, with an accuracy of 84.54 ± 1.82%. Compared to traditional methods that rely on visible symptom observation and typically require about 10 days to confirm infection, this study achieved early detection of Verticillium wilt as soon as the third day post-inoculation. These findings underscore the potential of the fusion model as a valuable tool for the early detection of pre-symptomatic states in infected plants, thereby offering theoretical support for in-field detection of eggplant health. Full article

19 pages, 28897 KiB  
Article
MetaRes-DMT-AS: A Meta-Learning Approach for Few-Shot Fault Diagnosis in Elevator Systems
by Hongming Hu, Shengying Yang, Yulai Zhang, Jianfeng Wu, Liang He and Jingsheng Lei
Sensors 2025, 25(15), 4611; https://doi.org/10.3390/s25154611 - 25 Jul 2025
Abstract
Recent advancements in deep learning have spurred significant research interest in fault diagnosis for elevator systems. However, conventional approaches typically require substantial labeled datasets that are often impractical to obtain in real-world industrial environments. This limitation poses a fundamental challenge for developing robust diagnostic models capable of performing reliably under data-scarce conditions. To address this critical gap, we propose MetaRes-DMT-AS (Meta-ResNet with Dynamic Meta-Training and Adaptive Scheduling), a novel meta-learning framework for few-shot fault diagnosis. Our methodology employs Gramian Angular Fields to transform 1D raw sensor data into 2D image representations, followed by episodic task construction through stochastic sampling. During meta-training, the system acquires transferable prior knowledge through optimized parameter initialization, while an adaptive scheduling module dynamically configures support/query sets. Subsequent regularization via prototype networks ensures stable feature extraction. Comprehensive validation using the Case Western Reserve University bearing dataset and proprietary elevator acceleration data demonstrates the framework’s superiority: MetaRes-DMT-AS achieves state-of-the-art few-shot classification performance, surpassing benchmark models by 0.94–1.78% in overall accuracy. For critical few-shot fault categories—particularly emergency stops and severe vibrations—the method delivers significant accuracy improvements of 3–16% and 17–29%, respectively. Full article
(This article belongs to the Special Issue Signal Processing and Sensing Technologies for Fault Diagnosis)
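The Gramian Angular Field transform used above to turn 1D sensor signals into 2D images can be sketched as follows (summation variant, illustrative input):

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian angular summation field (GASF) of a 1D signal.

    Rescale the series to [-1, 1], encode each value as an angle
    phi = arccos(x), then form GASF[i, j] = cos(phi_i + phi_j), giving a
    symmetric 2D image that preserves temporal correlations.
    """
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

gaf = gramian_angular_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

The resulting n×n image can then be fed to an image backbone such as the ResNet used in MetaRes-DMT-AS.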

26 pages, 3625 KiB  
Article
Deep-CNN-Based Layout-to-SEM Image Reconstruction with Conformal Uncertainty Calibration for Nanoimprint Lithography in Semiconductor Manufacturing
by Jean Chien and Eric Lee
Electronics 2025, 14(15), 2973; https://doi.org/10.3390/electronics14152973 - 25 Jul 2025
Abstract
Nanoimprint lithography (NIL) has emerged as a promising low-cost route to sub-10 nm patterning; yet, robust process control remains difficult because physics-based simulators are time-consuming and labeled SEM data are scarce. We propose a data-efficient, two-stage deep-learning framework that directly reconstructs post-imprint SEM images from binary design layouts and simultaneously delivers calibrated pixel-by-pixel uncertainty. First, a shallow U-Net is trained with conformalized quantile regression (CQR) to output 90% prediction intervals with statistically guaranteed coverage. Per-level errors on a small calibration dataset then drive an outlier-weighted, encoder-frozen transfer fine-tuning phase that refines only the decoder, with its capacity explicitly focused on regions of spatial uncertainty. On independent test layouts, the fine-tuned model significantly reduces the mean absolute error (MAE) from 0.0365 to 0.0255 and raises coverage from 0.904 to 0.926, while cutting labeled data and GPU time by 80% and 72%, respectively. The resulting uncertainty maps highlight spatial regions associated with error hotspots and support defect-aware optical proximity correction (OPC) with fewer guard-band iterations. Beyond OPC, the model-agnostic and modular design of the pipeline allows flexible integration into other critical stages of the semiconductor manufacturing workflow, such as imprinting, etching, and inspection, where such predictions are critical for achieving higher precision, efficiency, and overall process robustness, which is the ultimate motivation of this study. Full article
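The CQR calibration step described above can be sketched as a split-conformal width adjustment. The zero-width raw intervals below are a toy case, not the paper's U-Net outputs:

```python
import numpy as np

def cqr_calibrate(cal_lo, cal_hi, cal_y, alpha=0.1):
    """Conformalized quantile regression width adjustment (sketch).

    The conformity score measures how far the truth falls outside the raw
    quantile interval [cal_lo, cal_hi]. Widening both ends of future
    intervals by the finite-sample (1 - alpha) quantile of these scores
    yields the coverage guarantee.
    """
    scores = np.maximum(cal_lo - cal_y, cal_y - cal_hi)
    n = len(cal_y)
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

# Toy calibration set with degenerate zero-width raw intervals.
y = np.arange(1, 101) / 100.0
q = cqr_calibrate(np.zeros(100), np.zeros(100), y)
```

A predicted interval `[lo - q, hi + q]` then covers new observations at roughly the 90% level by construction.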

18 pages, 2885 KiB  
Article
Research on Microseismic Magnitude Prediction Method Based on Improved Residual Network and Transfer Learning
by Huaixiu Wang and Haomiao Wang
Appl. Sci. 2025, 15(15), 8246; https://doi.org/10.3390/app15158246 - 24 Jul 2025
Abstract
To achieve more precise and effective microseismic magnitude estimation, a classification model based on transfer learning with an improved deep residual network is proposed for predicting microseismic magnitudes. Initially, microseismic waveform images are preprocessed through cropping and blurring before being used as inputs to the model. Subsequently, the microseismic waveform image dataset is divided into training, testing, and validation sets. By leveraging the pretrained ResNet18 model weights from ImageNet, a transfer learning strategy is implemented in which all layers are retrained. Following this, the Convolutional Block Attention Module (CBAM) is introduced for model optimization, resulting in a new network model. Finally, this model is applied to seismic magnitude classification to enable microseismic magnitude prediction, and it is validated and compared with other commonly used neural network models. The experiment uses microseismic waveform data and images of magnitudes 0–3 from the Stanford Earthquake Dataset (STEAD) as training samples. The results indicate that the model achieves an accuracy of 87% within an error range of ±0.2 and 94.7% within an error range of ±0.3. The model demonstrates enhanced stability and reliability, effectively addressing the issue of missing data labels, and validates that ResNet transfer learning combined with an attention mechanism yields higher accuracy in microseismic magnitude prediction, as well as confirming the effectiveness of the CBAM. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
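The channel-attention half of CBAM can be sketched as follows; the MLP weights here are random stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam_channel_attention(fmap, w1, w2):
    """Channel-attention half of CBAM (sketch with supplied MLP weights).

    A shared two-layer MLP is applied to the average- and max-pooled
    channel descriptors; the summed outputs are squashed into a per-channel
    gate in (0, 1) that rescales the feature map. The spatial-attention
    half (a 7x7 conv over pooled channel maps) is omitted here.
    """
    avg = fmap.mean(axis=(1, 2))  # (C,) average-pooled descriptor
    mx = fmap.max(axis=(1, 2))    # (C,) max-pooled descriptor
    gate = sigmoid(w2 @ np.maximum(0.0, w1 @ avg) + w2 @ np.maximum(0.0, w1 @ mx))
    return fmap * gate[:, None, None]

rng = np.random.default_rng(1)
fmap = rng.normal(size=(8, 4, 4))
out = cbam_channel_attention(fmap, rng.normal(size=(2, 8)), rng.normal(size=(8, 2)))
```

Because the gate lies in (0, 1), the module can only attenuate channels, re-weighting the feature map rather than amplifying it.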

23 pages, 9603 KiB  
Article
Label-Efficient Fine-Tuning for Remote Sensing Imagery Segmentation with Diffusion Models
by Yiyun Luo, Jinnian Wang, Jean Sequeira, Xiankun Yang, Dakang Wang, Jiabin Liu, Grekou Yao and Sébastien Mavromatis
Remote Sens. 2025, 17(15), 2579; https://doi.org/10.3390/rs17152579 - 24 Jul 2025
Abstract
High-resolution remote sensing imagery plays an essential role in urban management and environmental monitoring, providing detailed insights for applications ranging from land cover mapping to disaster response. Semantic segmentation methods are among the most effective techniques for comprehensive land cover mapping, and they commonly rely on ImageNet pre-trained semantic features. However, traditional fine-tuning processes exhibit poor transferability across different downstream tasks and require large amounts of labeled data. To address these challenges, we introduce Denoising Diffusion Probabilistic Models (DDPMs) as a generative pre-training approach for semantic feature extraction in remote sensing imagery. We pre-trained a DDPM on extensive unlabeled imagery, obtaining features at multiple noise levels and resolutions. To integrate and optimize these features efficiently, we designed a multi-layer perceptron module with residual connections that performs channel-wise optimization to suppress feature redundancy and refine representations. Additionally, we froze the feature extractor during fine-tuning; this strategy significantly reduces computational consumption and facilitates fast transfer and deployment across various interpretation tasks on homogeneous imagery. Our comprehensive evaluation on the sparsely labeled MiniFrance-S dataset and the fully labeled Gaofen Image Dataset achieved mean intersection over union scores of 42.7% and 66.5%, respectively, outperforming previous works. This demonstrates that our approach effectively reduces reliance on labeled imagery and increases transferability to downstream remote sensing tasks. Full article
(This article belongs to the Special Issue AI-Driven Mapping Using Remote Sensing Data)
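The residual MLP refinement described above can be sketched per pixel; shapes and weights here are illustrative, not the paper's configuration:

```python
import numpy as np

def residual_channel_refine(feats, w1, b1, w2, b2):
    """Channel-wise refinement with a residual connection (illustrative).

    feats: (C, H, W) stacked diffusion features. A small per-pixel MLP acts
    on each channel vector and its output is added back (x + f(x)), so the
    frozen backbone features pass through unchanged when the refinement
    branch contributes nothing.
    """
    C, H, W = feats.shape
    x = feats.reshape(C, -1).T            # (H*W, C) pixel-wise channel vectors
    h = np.maximum(0.0, x @ w1 + b1)      # hidden layer, ReLU
    out = x + h @ w2 + b2                 # residual connection
    return out.T.reshape(C, H, W)

rng = np.random.default_rng(2)
feats = rng.normal(size=(6, 5, 5))
# With a zeroed output layer the residual path is the identity.
identity = residual_channel_refine(
    feats, rng.normal(size=(6, 4)), np.zeros(4), np.zeros((4, 6)), np.zeros(6)
)
```

The residual form is what lets the module suppress redundant channels without having to relearn the frozen representation itself.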

23 pages, 10648 KiB  
Article
Meta-Learning-Integrated Neural Architecture Search for Few-Shot Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Haisong Chen and Minhui Wang
Electronics 2025, 14(15), 2952; https://doi.org/10.3390/electronics14152952 - 24 Jul 2025
Abstract
In order to address the limited number of labeled samples in practical classification scenarios, and the overfitting and insufficient generalization caused by Few-Shot Learning (FSL) in hyperspectral image classification (HSIC), this paper designs and implements a neural architecture search (NAS) method for few-shot HSI classification that incorporates meta-learning. Firstly, a multi-source domain learning framework was constructed to integrate heterogeneous natural images and homogeneous remote sensing images, broadening the information available for few-sample learning and enabling the final network to enhance its generalization ability under limited labeled samples by learning the similarity between different data sources. Secondly, by constructing precise and robust search spaces and deploying different units at different locations, the classification accuracy and transfer robustness of the final network are improved. The method fully utilizes the spatial texture information and rich category information of multi-source data, and transfers the learned meta-knowledge to the optimal architecture for HSIC through the precise and robust search space design, achieving HSIC with limited samples. Experimental results show that the proposed method achieved an overall accuracy (OA) of 98.57%, 78.39%, and 98.74% on the Pavia Center, Indian Pines, and WHU-Hi-LongKou datasets, respectively, confirming that the learned meta-knowledge transfers effectively to the optimal architecture under few-shot conditions. Full article

15 pages, 2123 KiB  
Article
Multi-Class Visual Cyberbullying Detection Using Deep Neural Networks and the CVID Dataset
by Muhammad Asad Arshed, Zunera Samreen, Arslan Ahmad, Laiba Amjad, Hasnain Muavia, Christine Dewi and Muhammad Kabir
Information 2025, 16(8), 630; https://doi.org/10.3390/info16080630 - 24 Jul 2025
Abstract
In an era where online interactions increasingly shape social dynamics, the pervasive issue of cyberbullying poses a significant threat to the well-being of individuals, particularly among vulnerable groups. Despite extensive research on text-based cyberbullying detection, the rise of visual content on social media platforms necessitates new approaches to address cyberbullying using images, a domain that has been largely overlooked. In this paper, we present a novel dataset specifically designed for the detection of visual cyberbullying, encompassing four distinct classes: abuse, curse, discourage, and threat. The initial dataset, the cyberbullying visual indicators dataset (CVID), comprised 664 samples for training and validation, expanded through data augmentation techniques to ensure balanced and accurate results across all classes. We analyzed this dataset using several advanced deep learning models, including VGG16, VGG19, MobileNetV2, and Vision Transformer. The proposed model, based on DenseNet201, achieved the highest test accuracy of 99%, demonstrating its efficacy in identifying the visual cues associated with cyberbullying. To assess the proposed model's generalizability, stratified 5-fold cross-validation was also performed, and the model achieved an average test accuracy of 99%. This work introduces a dataset and highlights the potential of leveraging deep learning models to address the multifaceted challenges of detecting cyberbullying in visual content. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
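The stratified 5-fold protocol mentioned above can be sketched in pure numpy (round-robin dealing per class; the labels are illustrative):

```python
import numpy as np

def stratified_kfold(labels, k=5, seed=0):
    """Index folds for stratified k-fold cross-validation (sketch).

    Each class's indices are shuffled and dealt round-robin into k folds,
    so every fold preserves the overall class proportions.
    """
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for c in np.unique(labels):
        for i, j in enumerate(rng.permutation(np.flatnonzero(labels == c))):
            folds[i % k].append(int(j))
    return folds

labels = np.array([0] * 10 + [1] * 10 + [2] * 10 + [3] * 10)
folds = stratified_kfold(labels, k=5)
```

Each fold then serves once as the test split while the rest train, which is what makes the reported 99% an average over five held-out partitions.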

16 pages, 8859 KiB  
Article
Effect of Systematic Errors on Building Component Sound Insulation Measurements Using Near-Field Acoustic Holography
by Wei Xiong, Wuying Chen, Zhixin Li, Heyu Zhu and Xueqiang Wang
Buildings 2025, 15(15), 2619; https://doi.org/10.3390/buildings15152619 - 24 Jul 2025
Abstract
Near-field acoustic holography (NAH) provides an effective way to achieve wide-band, high-resolution visualization measurement of the sound insulation performance of building components. However, the microphone array's inherent amplitude and phase mismatch errors are exponentially amplified in the Green's-function-based sound field inversion process, significantly reducing the measurement accuracy. To systematically evaluate this problem, this study combines numerical simulation with actual measurements in a soundproof room that complies with the ISO 10140 standard, quantitatively analyzes the influence of array system errors on NAH-reconstructed sound insulation and acoustic images, and proposes an error correction strategy based on channel transfer function normalization. The results show that when the mean array amplitude and phase mismatches are controlled within 5% and 5°, respectively, the deviation of the weighted sound insulation measured by NAH can be kept within 1 dB, and the error in the key frequency band for building sound insulation (200 Hz–1.6 kHz) does not exceed 1.5 dB; when the mean mismatches increase to 10% and 10°, the deviation of the weighted sound insulation can reach 2 dB, the error in the high-frequency band (≥1.6 kHz) increases to more than 2.0 dB, and the sound image shows noticeable spatial distortion above 250 Hz. After applying the proposed correction method, the NAH measurements from a domestic microphone array are highly consistent with the weighted sound insulation measured by the standard method, with a measurement difference of less than 1.0 dB in the key frequency band, which significantly improves the reliability and applicability of low-cost equipment in engineering applications. In addition, the study reveals the mechanism by which system errors are differentially amplified in the propagating-wave and evanescent-wave channels, and provides quantitative thresholds and operational guidance for instrument selection, array calibration, and error compensation in NAH-based building sound insulation measurement. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
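The channel transfer function normalization proposed above can be sketched as referring every channel's spectrum to a reference channel; the transfer functions below are toy values:

```python
import numpy as np

def normalize_channels(spectra, channel_tf, ref=0):
    """Refer every microphone channel's spectrum to a reference channel.

    spectra: (n_ch, n_freq) complex measured spectra; channel_tf:
    (n_ch, n_freq) complex per-channel transfer functions obtained in a
    calibration step. Dividing out each channel's own response (and
    re-applying the reference's) removes amplitude and phase mismatch
    between channels before the hologram is inverted.
    """
    H = np.asarray(channel_tf)
    return spectra * (H[ref] / H)

# Toy check: one true spectrum seen through mismatched channels.
true_spec = np.array([1.0 + 0.0j, 0.5 - 0.5j, 0.2 + 0.1j])
H = np.array([[1.0, 1.0, 1.0],
              [1.1 * np.exp(1j * 0.1)] * 3,
              [0.9 * np.exp(-1j * 0.2)] * 3])
measured = true_spec * H
corrected = normalize_channels(measured, H)
```

After correction all channels report the same spectrum, which is why the residual mismatch no longer feeds the exponential amplification in the inversion.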

23 pages, 3301 KiB  
Article
An Image-Based Water Turbidity Classification Scheme Using a Convolutional Neural Network
by Itzel Luviano Soto, Yajaira Concha-Sánchez and Alfredo Raya
Computation 2025, 13(8), 178; https://doi.org/10.3390/computation13080178 - 23 Jul 2025
Abstract
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations and generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems. Full article
(This article belongs to the Section Computational Engineering)
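Range-based class definitions like the five turbidity categories above reduce to binning a measured NTU value against ordered thresholds. A minimal sketch; the boundary values and labels below are placeholders for illustration, since the paper's actual regulation-inspired limits are not reproduced here:

```python
from bisect import bisect_right

# Hypothetical NTU class boundaries, for illustration only.
NTU_BOUNDS = [5.0, 50.0, 150.0, 400.0]
LABELS = ["very low", "low", "medium", "high", "very high"]

def turbidity_class(ntu: float) -> str:
    """Map a turbidity reading (NTU) to one of five ordered classes."""
    return LABELS[bisect_right(NTU_BOUNDS, ntu)]
```

These labels would then serve as the ground-truth targets that the CNN learns to predict from the RGB images.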

31 pages, 11068 KiB  
Article
Airport-FOD3S: A Three-Stage Detection-Driven Framework for Realistic Foreign Object Debris Synthesis
by Hanglin Cheng, Yihao Li, Ruiheng Zhang and Weiguang Zhang
Sensors 2025, 25(15), 4565; https://doi.org/10.3390/s25154565 - 23 Jul 2025
Abstract
Traditional Foreign Object Debris (FOD) detection methods face challenges such as the difficulty of acquiring large-scale data and the limited practical effectiveness of high-accuracy detection algorithms. In this paper, image data augmentation was performed using generative adversarial networks and diffusion models, generating images of monitoring areas under different environmental conditions and FOD images of varied types. Additionally, a three-stage image blending method combining size transformation, seamless blending, and style transfer was proposed. The image quality of different blending methods was quantitatively evaluated using metrics such as the structural similarity index and peak signal-to-noise ratio, as well as Depth Anything. Finally, object detection models with a similarity distance strategy (SimD), including Faster R-CNN, YOLOv8, and YOLOv11, were tested on the dataset. The experimental results demonstrated that realistic FOD data were effectively generated. The Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) of images synthesized by the proposed three-stage blending method outperformed the other methods, reaching 0.99 and 45 dB, respectively. YOLOv11 with SimD trained on the augmented dataset achieved an mAP of 86.95%. Based on these results, it can be concluded that both data augmentation and SimD significantly improved the accuracy of FOD detection.
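PSNR, one of the synthesis-quality metrics used above, follows directly from the mean squared error between the reference and synthesized images. A small self-contained numpy sketch of the standard formula (not the authors' evaluation code):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    max_val is the maximum possible pixel value (255 for 8-bit images).
    Returns infinity for identical images, since the MSE is zero.
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values indicate a synthesized image closer to the reference; the 45 dB reported above corresponds to a very small per-pixel error.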

25 pages, 5142 KiB  
Article
Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model
by Meilin Li, Yufeng Guo, Wei Guo, Hongbo Qiao, Lei Shi, Yang Liu, Guang Zheng, Hui Zhang and Qiang Wang
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580 - 23 Jul 2025
Abstract
Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew, caused by fungal infection, poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GB/T 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1), replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting the padding to 1, to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model's decision-making process. Experimental validation demonstrated that QY-SE-MResNet34 achieved 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11 percentage points. This study delivers a high-performance solution for single-leaf assessment of wheat powdery mildew severity, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture.
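The Squeeze-and-Excitation block embedded above pools each feature-map channel to a scalar, passes the pooled vector through a small two-layer MLP, and rescales the channels by the resulting sigmoid gates. A minimal numpy sketch of that mechanism; the shapes and weight names are illustrative, not the paper's implementation:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W).

    w1: (C//r, C) and w2: (C, C//r) are the two FC layers of the
    excitation MLP, where r is the channel reduction ratio.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z + b1, 0.0)           # excitation layer 1 with ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid gates in (0, 1), one per channel
    return x * s[:, None, None]                # channel-wise rescaling of the input
```

Because the gates depend on the pooled channel statistics, the block lets the network emphasize channels that are informative for the current input at negligible extra cost.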
