Search Results (26,385)

Search Parameters:
Keywords = convolutions

24 pages, 1681 KiB  
Article
A Hybrid Quantum–Classical Architecture with Data Re-Uploading and Genetic Algorithm Optimization for Enhanced Image Classification
by Aksultan Mukhanbet and Beimbet Daribayev
Computation 2025, 13(8), 185; https://doi.org/10.3390/computation13080185 (registering DOI) - 1 Aug 2025
Abstract
Quantum machine learning (QML) has emerged as a promising approach for enhancing image classification by exploiting quantum computational principles such as superposition and entanglement. However, practical applications on complex datasets like CIFAR-100 remain limited due to the low expressivity of shallow circuits and challenges in circuit optimization. In this study, we propose HQCNN–REGA—a novel hybrid quantum–classical convolutional neural network architecture that integrates data re-uploading and genetic algorithm optimization for improved performance. The data re-uploading mechanism allows classical inputs to be encoded multiple times into quantum states, enhancing the model’s capacity to learn complex visual features. In parallel, a genetic algorithm is employed to evolve the quantum circuit architecture by optimizing gate sequences, entanglement patterns, and layer configurations. This combination enables automatic discovery of efficient parameterized quantum circuits without manual tuning. Experiments on the MNIST and CIFAR-100 datasets demonstrate state-of-the-art performance for quantum models, with HQCNN–REGA outperforming existing quantum neural networks and approaching the accuracy of advanced classical architectures. In particular, we compare our model with classical convolutional baselines such as ResNet-18 to validate its effectiveness in real-world image classification tasks. Our results demonstrate the feasibility of scalable, high-performing quantum–classical systems and offer a viable path toward practical deployment of QML in computer vision applications, especially on noisy intermediate-scale quantum (NISQ) hardware. Full article
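
Below is a minimal numpy sketch of the data re-uploading idea the abstract describes: the classical feature vector is encoded into single-qubit rotations once per layer, interleaved with trainable parameters, and the Pauli-Z expectation serves as the output. The layer count, feature dimension, and weight layout are illustrative assumptions; the full HQCNN–REGA circuit and its genetic-algorithm search are not reproduced here.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]], dtype=complex)

def reuploading_expectation(x, weights):
    """Re-upload the feature vector once per layer through trainable rotations
    and return the Pauli-Z expectation used as the classifier output."""
    state = np.array([1.0, 0.0], dtype=complex)            # |0>
    for layer in weights:                                   # shape (n_features, 3)
        for xi, (a, b, c) in zip(x, layer):
            state = rz(c) @ ry(a * xi + b) @ state          # data-dependent encoding
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(np.real(np.conj(state) @ z @ state))

# Example: 3 re-uploading layers over a 4-dimensional feature vector.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=4)
weights = rng.uniform(-np.pi, np.pi, size=(3, 4, 3))
print(reuploading_expectation(x, weights))
```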

22 pages, 4300 KiB  
Article
Optimised DNN-Based Agricultural Land Cover Mapping Using Sentinel-2 and Landsat-8 with Google Earth Engine
by Nisha Sharma, Sartajvir Singh and Kawaljit Kaur
Land 2025, 14(8), 1578; https://doi.org/10.3390/land14081578 (registering DOI) - 1 Aug 2025
Abstract
Agriculture is the backbone of Punjab’s economy, and with much of India’s population dependent on agriculture, the requirement for accurate and timely monitoring of land has become even more crucial. Blending remote sensing with state-of-the-art machine learning algorithms enables the detailed classification of agricultural lands through thematic mapping, which is critical for crop monitoring, land management, and sustainable development. Here, a Hyper-tuned Deep Neural Network (Hy-DNN) model was created and used for land use and land cover (LULC) classification into four classes: agricultural land, vegetation, water bodies, and built-up areas. The technique made use of multispectral data from Sentinel-2 and Landsat-8, processed on the Google Earth Engine (GEE) platform. To measure classification performance, Hy-DNN was contrasted with traditional classifiers—Convolutional Neural Network (CNN), Random Forest (RF), Classification and Regression Tree (CART), Minimum Distance Classifier (MDC), and Naive Bayes (NB)—using performance metrics including producer’s and consumer’s accuracy, Kappa coefficient, and overall accuracy. Hy-DNN performed the best, with overall accuracy being 97.60% using Sentinel-2 and 91.10% using Landsat-8, outperforming all base models. These results further highlight the superiority of the optimised Hy-DNN in agricultural land mapping and its potential use in crop health monitoring, disease diagnosis, and strategic agricultural planning. Full article
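
As a rough illustration of the kind of model the abstract names, here is a minimal Keras sketch of a dense classifier over per-pixel spectral band vectors with the four LULC classes; the band count, layer sizes, and dropout rate are assumptions, and the Google Earth Engine export and hyper-tuning steps are omitted.

```python
import numpy as np
import tensorflow as tf

NUM_BANDS = 10    # spectral bands exported per pixel (assumed)
NUM_CLASSES = 4   # agricultural land, vegetation, water bodies, built-up areas

def build_dnn(hidden=(128, 64, 32), dropout=0.3):
    """Fully connected classifier over per-pixel spectral vectors."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(hidden[0], activation="relu",
                                    input_shape=(NUM_BANDS,)))
    model.add(tf.keras.layers.Dropout(dropout))
    for units in hidden[1:]:
        model.add(tf.keras.layers.Dense(units, activation="relu"))
        model.add(tf.keras.layers.Dropout(dropout))
    model.add(tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Synthetic stand-in for labelled pixels sampled from the GEE composites.
X = np.random.rand(1000, NUM_BANDS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model = build_dnn()
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```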

17 pages, 1340 KiB  
Article
Enhanced Respiratory Sound Classification Using Deep Learning and Multi-Channel Auscultation
by Yeonkyeong Kim, Kyu Bom Kim, Ah Young Leem, Kyuseok Kim and Su Hwan Lee
J. Clin. Med. 2025, 14(15), 5437; https://doi.org/10.3390/jcm14155437 (registering DOI) - 1 Aug 2025
Abstract
 Background/Objectives: Identifying and classifying abnormal lung sounds is essential for diagnosing patients with respiratory disorders. In particular, the simultaneous recording of auscultation signals from multiple clinically relevant positions offers greater diagnostic potential compared to traditional single-channel measurements. This study aims to improve the accuracy of respiratory sound classification by leveraging multichannel signals and capturing positional characteristics from multiple sites in the same patient. Methods: We evaluated the performance of respiratory sound classification using multichannel lung sound data with a deep learning model that combines a convolutional neural network (CNN) and long short-term memory (LSTM), based on mel-frequency cepstral coefficients (MFCCs). We analyzed the impact of the number and placement of channels on classification performance. Results: The results demonstrated that using four-channel recordings improved accuracy, sensitivity, specificity, precision, and F1-score by approximately 1.11, 1.15, 1.05, 1.08, and 1.13 times, respectively, compared to using three, two, or single-channel recordings. Conclusion: This study confirms that multichannel data capture a richer set of features corresponding to various respiratory sound characteristics, leading to significantly improved classification performance. The proposed method holds promise for enhancing sound classification accuracy not only in clinical applications but also in broader domains such as speech and audio processing.  Full article
(This article belongs to the Section Respiratory Medicine)
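
A compact sketch of a CNN + LSTM classifier over stacked multichannel MFCC maps, in the spirit of the pipeline the abstract outlines; the sampling rate, MFCC count, channel number, and class count are placeholder assumptions rather than the study's actual settings.

```python
import librosa
import tensorflow as tf

N_MFCC, N_FRAMES, N_CHANNELS, N_CLASSES = 20, 128, 4, 4   # assumed sizes

def mfcc_features(path, sr=4000):
    """MFCC matrix for one auscultation channel, padded/cropped to N_FRAMES."""
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    return librosa.util.fix_length(m, size=N_FRAMES, axis=1)   # (N_MFCC, N_FRAMES)

def build_cnn_lstm():
    """2D CNN over stacked multichannel MFCC maps, followed by an LSTM."""
    inp = tf.keras.Input(shape=(N_MFCC, N_FRAMES, N_CHANNELS))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Permute((2, 1, 3))(x)                  # time axis first
    x = tf.keras.layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
    x = tf.keras.layers.LSTM(64)(x)
    out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_cnn_lstm().summary()
```
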
18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 (registering DOI) - 1 Aug 2025
Abstract
The accuracy of classification and localization of plants in images obtained on board an unmanned aerial vehicle (UAV) is of great importance when implementing precision farming technologies. It allows for the effective application of variable rate technologies, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms, and the YOLO family in particular, are widely used for simultaneous identification, localization, and classification of plants. However, the quality of such algorithms depends significantly on the training set. The aim of this study is to detect not only the cultivated plant (soybean) but also the weeds growing in the field. The dataset developed in the course of the research addresses this by covering soybean together with seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of soybean field images using preliminary thresholding and bounding box (Bbox) segmentation of annotated images, which improves the quality of plant classification and localization. The conducted research and computational experiments determined that Bbox segmentation gives the best results: the quality of classification and localization increased significantly (F1-score from 0.64 to 0.959, mAP50 from 0.72 to 0.979), and for the cultivated plant (soybean) the best classification results known to date were achieved with YOLOv8x on UAV images, with an F1-score of 0.984. At the same time, the plant detection rate increased 13-fold compared to the model proposed earlier in the literature. Full article
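
A small OpenCV sketch of the kind of preliminary thresholding and bounding-box extraction described above for preparing training images; the HSV green range and minimum area are illustrative assumptions, and the subsequent YOLOv8x training on the resulting labels is not shown.

```python
import cv2
import numpy as np

def plant_bboxes(image_bgr, min_area=200):
    """Threshold green vegetation in HSV space and return candidate boxes."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([30, 40, 40]), np.array([90, 255, 255])  # assumed green range
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]   # (x, y, w, h) to refine into labels

# Synthetic frame with one green blob standing in for a plant on bare soil.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 40, (40, 180, 60), -1)
print(plant_bboxes(frame))
```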

20 pages, 2774 KiB  
Article
Complex Network Analytics for Structural–Functional Decoding of Neural Networks
by Jiarui Zhang, Dongxiao Zhang, Hu Lou, Yueer Li, Taijiao Du and Yinjun Gao
Appl. Sci. 2025, 15(15), 8576; https://doi.org/10.3390/app15158576 (registering DOI) - 1 Aug 2025
Abstract
Neural networks (NNs) achieve breakthroughs in computer vision and natural language processing, yet their “black box” nature persists. Traditional methods prioritise parameter optimisation and loss design, overlooking NNs’ fundamental structure as topologically organised nonlinear computational systems. This work proposes a complex-network-theory framework for decoding structure–function coupling by mapping convolutional layers, fully connected layers, and Dropout modules into graph representations. To overcome limitations of heuristic compression techniques, we develop a topology-sensitive adaptive pruning algorithm that evaluates critical paths via node strength centrality, preserving structural–functional integrity. On CIFAR-10, our method achieves 55.5% parameter reduction with only 7.8% accuracy degradation—significantly outperforming traditional approaches. Crucially, retrained pruned networks exceed original model accuracy by up to 2.63%, demonstrating that topology optimisation unlocks latent model potential. This research establishes a paradigm shift from empirical to topologically rationalised neural architecture design, providing theoretical foundations for deep learning optimisation dynamics. Full article
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks (2nd Edition))
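
A toy numpy sketch of the node-strength idea named in the abstract: each hidden unit's strength is the sum of its absolute incoming and outgoing weights, and the weakest units are pruned. This uses a two-layer MLP for illustration rather than the paper's CNN-to-graph mapping, and the retraining step is omitted.

```python
import numpy as np

def node_strength(w_in, w_out):
    """Strength of each hidden unit: total absolute incoming + outgoing weight."""
    return np.abs(w_in).sum(axis=0) + np.abs(w_out).sum(axis=1)

def prune_hidden_units(w_in, w_out, keep_ratio=0.5):
    """Structured pruning: keep only the strongest hidden units."""
    s = node_strength(w_in, w_out)
    k = max(1, int(round(keep_ratio * s.size)))
    keep = np.sort(np.argsort(s)[-k:])           # indices of the strongest nodes
    return w_in[:, keep], w_out[keep, :], keep

# Toy two-layer MLP: 64 inputs -> 256 hidden -> 10 outputs.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(64, 256))
w2 = rng.normal(size=(256, 10))
w1p, w2p, kept = prune_hidden_units(w1, w2)
print(w1.size + w2.size, "->", w1p.size + w2p.size, "parameters")
```
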
23 pages, 3427 KiB  
Article
Visual Narratives and Digital Engagement: Decoding Seoul and Tokyo’s Tourism Identity Through Instagram Analytics
by Seung Chul Yoo and Seung Mi Kang
Tour. Hosp. 2025, 6(3), 149; https://doi.org/10.3390/tourhosp6030149 (registering DOI) - 1 Aug 2025
Abstract
Social media platforms like Instagram significantly shape destination images and influence tourist behavior. Understanding how different cities are represented and perceived on these platforms is crucial for effective tourism marketing. This study provides a comparative analysis of Instagram content and engagement patterns in Seoul and Tokyo, two major Asian metropolises, to derive actionable marketing insights. We collected and analyzed 59,944 public Instagram posts geotagged or location-tagged within Seoul (n = 29,985) and Tokyo (n = 29,959). We employed a mixed-methods approach involving content categorization using a fine-tuned convolutional neural network (CNN) model, engagement metric analysis (likes, comments), Valence Aware Dictionary and sEntiment Reasoner (VADER) sentiment analysis and thematic classification of comments, geospatial analysis (Kernel Density Estimation [KDE], Moran’s I), and predictive modeling (Gradient Boosting with SHapley Additive exPlanations [SHAP] value analysis). A validation analysis using balanced samples (n = 2000 each) was conducted to address Tokyo’s lower geotagged data proportion. While both cities showed ‘Person’ as the dominant content category, notable differences emerged. Tokyo exhibited higher like-based engagement across categories, particularly for ‘Animal’ and ‘Food’ content, while Seoul generated slightly more comments, often expressing stronger sentiment. Qualitative comment analysis revealed Seoul comments focused more on emotional reactions, whereas Tokyo comments were often shorter, appreciative remarks. Geospatial analysis identified distinct hotspots. The validation analysis confirmed these spatial patterns despite Tokyo’s data limitations. Predictive modeling highlighted hashtag counts as the key engagement driver in Seoul and the presence of people in Tokyo. Seoul and Tokyo project distinct visual narratives and elicit different engagement patterns on Instagram. These findings offer practical implications for destination marketers, suggesting tailored content strategies and location-based campaigns targeting identified hotspots and specific content themes. This study underscores the value of integrating quantitative and qualitative analyses of social media data for nuanced destination marketing insights. Full article
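
A hedged sketch of the predictive-modelling step (gradient boosting with SHAP attribution) on synthetic post features; the feature set and target here are invented for illustration and do not reflect the study's engineered variables.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Invented post-level features and like counts standing in for the real data.
rng = np.random.default_rng(0)
n = 2000
posts = pd.DataFrame({
    "hashtag_count": rng.integers(0, 30, n),
    "has_person": rng.integers(0, 2, n),
    "is_food": rng.integers(0, 2, n),
    "caption_length": rng.integers(0, 300, n),
})
likes = (5 * posts["hashtag_count"] + 40 * posts["has_person"]
         + rng.normal(0, 20, n)).clip(lower=0)

model = GradientBoostingRegressor(random_state=0).fit(posts, likes)

# SHAP values attribute each prediction to the input features.
shap_values = shap.TreeExplainer(model).shap_values(posts)
print(dict(zip(posts.columns, np.abs(shap_values).mean(axis=0).round(2))))
```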

24 pages, 29785 KiB  
Article
Multi-Scale Feature Extraction with 3D Complex-Valued Network for PolSAR Image Classification
by Nana Jiang, Wenbo Zhao, Jiao Guo, Qiang Zhao and Jubo Zhu
Remote Sens. 2025, 17(15), 2663; https://doi.org/10.3390/rs17152663 (registering DOI) - 1 Aug 2025
Abstract
Compared to traditional real-valued neural networks, which process only amplitude information, complex-valued neural networks handle both amplitude and phase information, leading to superior performance in polarimetric synthetic aperture radar (PolSAR) image classification tasks. This paper proposes a multi-scale feature extraction (MSFE) method based on a 3D complex-valued network to improve classification accuracy by fully leveraging multi-scale features, including phase information. We first designed a complex-valued three-dimensional network framework combining complex-valued 3D convolution (CV-3DConv) with complex-valued squeeze-and-excitation (CV-SE) modules. This framework is capable of simultaneously capturing spatial and polarimetric features, including both amplitude and phase information, from PolSAR images. Furthermore, to address robustness degradation from limited labeled samples, we introduced a multi-scale learning strategy that jointly models global and local features. Specifically, global features extract overall semantic information, while local features help the network capture region-specific semantics. This strategy enhances information utilization by integrating multi-scale receptive fields, complementing feature advantages. Extensive experiments on four benchmark datasets demonstrated that the proposed method outperforms various comparison methods, maintaining high classification accuracy across different sampling rates, thus validating its effectiveness and robustness. Full article
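
A minimal PyTorch sketch of a complex-valued convolution implemented with two real Conv3d kernels over the real and imaginary parts, in the spirit of the CV-3DConv block described above; channel and patch sizes are illustrative, and the CV-SE module and multi-scale branches are omitted.

```python
import torch
import torch.nn as nn

class ComplexConv3d(nn.Module):
    """(Wr + iWi) * (xr + ixi) = (Wr*xr - Wi*xi) + i(Wr*xi + Wi*xr)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv3d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv3d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_r, x_i):
        out_r = self.conv_r(x_r) - self.conv_i(x_i)
        out_i = self.conv_r(x_i) + self.conv_i(x_r)
        return out_r, out_i

# Toy PolSAR-like input: batch of 2, 6 complex channels, 9x32x32 patches.
x_r = torch.randn(2, 6, 9, 32, 32)
x_i = torch.randn(2, 6, 9, 32, 32)
layer = ComplexConv3d(6, 16)
y_r, y_i = layer(x_r, x_i)
print(y_r.shape, y_i.shape)   # both torch.Size([2, 16, 9, 32, 32])
```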

14 pages, 2795 KiB  
Article
Obtaining Rotational Stiffness of Wind Turbine Foundation from Acceleration and Wind Speed SCADA Data
by Jiazhi Dai, Mario Rotea and Nasser Kehtarnavaz
Sensors 2025, 25(15), 4756; https://doi.org/10.3390/s25154756 (registering DOI) - 1 Aug 2025
Abstract
Monitoring the health of wind turbine foundations is essential for ensuring their operational safety. This paper presents a cost-effective approach to obtaining the rotational stiffness of wind turbine foundations by using only the acceleration and wind speed data that are already part of SCADA data, thus reducing reliance on the moment and tilt sensors currently used to obtain foundation stiffness. First, a convolutional neural network model is applied to map acceleration and wind speed data within a moving window to corresponding moment and tilt values. Rotational stiffness of the foundation is then estimated by fitting a line in the moment-tilt plane. The results indicate that such a mapping model can provide stiffness values that are within 7% of ground truth stiffness values on average. Second, the developed mapping model is re-trained using synthetic acceleration and wind speed data generated by an autoencoder generative AI network. The results indicate that although the exact amount of a stiffness drop cannot be determined, the drops themselves can be detected. This mapping model can be used not only to lower the cost associated with obtaining foundation rotational stiffness but also to sound an alarm when a foundation starts deteriorating. Full article
(This article belongs to the Special Issue Sensors Technology Applied in Power Systems and Energy Management)
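
A short numpy sketch of the final step described above: once moment and tilt values are available, rotational stiffness is the slope of a least-squares line in the moment-tilt plane. The numbers are synthetic, and the CNN mapping from acceleration and wind speed is not reproduced.

```python
import numpy as np

def rotational_stiffness(tilt_rad, moment_nm):
    """Slope of the least-squares line through (tilt, moment) pairs."""
    slope, intercept = np.polyfit(tilt_rad, moment_nm, deg=1)
    return slope  # N*m per radian

# Synthetic data: true stiffness 2.0e9 N*m/rad plus measurement noise.
rng = np.random.default_rng(0)
tilt = rng.uniform(0, 2e-3, size=500)                 # radians
moment = 2.0e9 * tilt + rng.normal(0, 2e4, size=500)  # N*m
print(f"estimated stiffness: {rotational_stiffness(tilt, moment):.3e} N*m/rad")
```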

43 pages, 2466 KiB  
Article
Adaptive Ensemble Learning for Financial Time-Series Forecasting: A Hypernetwork-Enhanced Reservoir Computing Framework with Multi-Scale Temporal Modeling
by Yinuo Sun, Zhaoen Qu, Tingwei Zhang and Xiangyu Li
Axioms 2025, 14(8), 597; https://doi.org/10.3390/axioms14080597 (registering DOI) - 1 Aug 2025
Abstract
Financial market forecasting remains challenging due to complex nonlinear dynamics and regime-dependent behaviors that traditional models struggle to capture effectively. This research introduces the Adaptive Financial Reservoir Network with Hypernetwork Flow (AFRN–HyperFlow) framework, a novel ensemble architecture integrating Echo State Networks, temporal convolutional networks, mixture density networks, adaptive Hypernetworks, and deep state-space models for enhanced financial time-series prediction. Through comprehensive feature engineering incorporating technical indicators, spectral decomposition, reservoir-based representations, and flow dynamics characteristics, the framework achieves superior forecasting performance across diverse market conditions. Experimental validation on 26,817 balanced samples demonstrates exceptional results with an F1-score of 0.8947, representing a 12.3% improvement over State-of-the-Art baseline methods, while maintaining robust performance across asset classes from equities to cryptocurrencies. The adaptive Hypernetwork mechanism enables real-time regime-change detection with 2.3 days average lag and 95% accuracy, while systematic SHAP analysis provides comprehensive interpretability essential for regulatory compliance. Ablation studies reveal Echo State Networks contribute 9.47% performance improvement, validating the architectural design. The AFRN–HyperFlow framework addresses critical limitations in uncertainty quantification, regime adaptability, and interpretability, offering promising directions for next-generation financial forecasting systems incorporating quantum computing and federated learning approaches. Full article
(This article belongs to the Special Issue Financial Mathematics and Econophysics)
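
A compact numpy sketch of the Echo State Network component named in the abstract: a fixed random reservoir driven by the input series, with only the linear readout trained by ridge regression. Reservoir size, spectral radius, and ridge penalty are illustrative, and the Hypernetwork and other ensemble members are not shown.

```python
import numpy as np

class EchoStateNetwork:
    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        w = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # echo-state scaling
        self.w = w
        self.ridge = ridge

    def _states(self, u):
        x = np.zeros(self.w.shape[0])
        states = []
        for u_t in u:
            x = np.tanh(self.w_in @ u_t + self.w @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, u, y):
        s = self._states(u)
        # Ridge readout: W = (S^T S + lambda I)^(-1) S^T Y
        self.w_out = np.linalg.solve(s.T @ s + self.ridge * np.eye(s.shape[1]),
                                     s.T @ y).T
        return self

    def predict(self, u):
        return self._states(u) @ self.w_out.T

# One-step-ahead forecast of a noisy sine standing in for a return series.
t = np.arange(2000)
series = np.sin(0.05 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
u, y = series[:-1, None], series[1:, None]
esn = EchoStateNetwork(n_inputs=1).fit(u[:1500], y[:1500])
pred = esn.predict(u[1500:])
print("test RMSE:", float(np.sqrt(np.mean((pred - y[1500:]) ** 2))))
```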

15 pages, 4258 KiB  
Article
Complex-Scene SAR Aircraft Recognition Combining Attention Mechanism and Inner Convolution Operator
by Wansi Liu, Huan Wang, Jiapeng Duan, Lixiang Cao, Teng Feng and Xiaomin Tian
Sensors 2025, 25(15), 4749; https://doi.org/10.3390/s25154749 (registering DOI) - 1 Aug 2025
Abstract
Synthetic aperture radar (SAR), as an active microwave imaging system, has the capability of all-weather and all-time observation. In response to the challenges of aircraft detection in SAR images due to the complex background interference caused by the continuous scattering of airport buildings and the demand for real-time processing, this paper proposes a YOLOv7-MTI recognition model that combines the attention mechanism and involution. By integrating the MTCN module and involution, performance is enhanced. The Multi-TASP-Conv network (MTCN) module aims to effectively extract low-level semantic and spatial information using a shared lightweight attention gate structure to achieve cross-dimensional interaction between “channels and space” with very few parameters, capturing the dependencies among multiple dimensions and improving feature representation ability. Involution helps the model adaptively adjust the weights of spatial positions through dynamic parameterized convolution kernels, strengthening the discrete strong scattering points specific to aircraft and suppressing the continuous scattering of the background, thereby alleviating the interference of complex backgrounds. Experiments on the SAR-AIRcraft-1.0 dataset, which includes seven categories such as A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and others, show that the mAP and mRecall of YOLOv7-MTI reach 93.51% and 96.45%, respectively, outperforming Faster R-CNN, SSD, YOLOv5, YOLOv7, and YOLOv8. Compared with the basic YOLOv7, mAP is improved by 1.47%, mRecall by 1.64%, and FPS by 8.27%, achieving an effective balance between accuracy and speed, providing research ideas for SAR aircraft recognition. Full article
(This article belongs to the Section Radar Sensors)
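
A minimal PyTorch sketch of the involution operator the abstract pairs with YOLOv7: position-specific kernels are generated from the feature map itself and applied over unfolded neighbourhoods. Kernel size, groups, and reduction ratio are illustrative, and the MTCN attention module is not shown.

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Spatially varying kernels generated from the input itself."""
    def __init__(self, channels, kernel_size=7, groups=4, reduction=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, groups * kernel_size ** 2, 1),
        )
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.kernel_gen(x).view(b, self.g, 1, self.k ** 2, h, w)
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
        out = (kernels * patches).sum(dim=3)     # weight each neighbour, then sum
        return out.view(b, c, h, w)

x = torch.randn(1, 64, 40, 40)     # a toy SAR feature map
print(Involution2d(64)(x).shape)   # torch.Size([1, 64, 40, 40])
```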

19 pages, 1889 KiB  
Article
Infrared Thermographic Signal Analysis of Bioactive Edible Oils Using CNNs for Quality Assessment
by Danilo Pratticò and Filippo Laganà
Signals 2025, 6(3), 38; https://doi.org/10.3390/signals6030038 (registering DOI) - 1 Aug 2025
Abstract
Nutrition plays a fundamental role in promoting health and preventing chronic diseases, with bioactive food components offering a therapeutic potential in biomedical applications. Among these, edible oils are recognised for their functional properties, which contribute to disease prevention and metabolic regulation. The proposed study aims to evaluate the quality of four bioactive oils (olive oil, sunflower oil, tomato seed oil, and pumpkin seed oil) by analysing their thermal behaviour through infrared (IR) imaging. The study designed a customised electronic system to acquire thermographic signals under controlled temperature and humidity conditions. The acquisition system was used to extract thermal data. Analysis of the acquired thermal signals revealed characteristic heat absorption profiles used to infer differences in oil properties related to stability and degradation potential. A hybrid deep learning model that integrates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) units was used to classify and differentiate the oils based on stability, thermal reactivity, and potential health benefits. A signal analysis showed that the AI-based method improves both the accuracy (achieving an F1-score of 93.66%) and the repeatability of quality assessments, providing a non-invasive and intelligent framework for the validation and traceability of nutritional compounds. Full article
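
One plausible reading of the thermal-signal step above, sketched in numpy: each IR frame is reduced to a mean region-of-interest temperature, so every oil sample becomes a heating curve that a CNN-LSTM classifier could consume. The frame stack, ROI, and temperatures are synthetic placeholders, not data from the study.

```python
import numpy as np

def heat_absorption_profile(frames, roi):
    """Mean ROI temperature per IR frame -> 1D heating curve for one sample."""
    r0, r1, c0, c1 = roi
    return frames[:, r0:r1, c0:c1].mean(axis=(1, 2))

# Synthetic stack of IR frames (time, rows, cols) for one oil sample:
# temperature rises towards an asymptote, plus sensor noise.
t = np.arange(300)[:, None, None]                       # 300 frames
frames = (25 + 20 * (1 - np.exp(-t / 80.0))
          + np.random.default_rng(0).normal(0, 0.2, (300, 60, 80)))
curve = heat_absorption_profile(frames, roi=(20, 40, 30, 50))
print(curve.shape, float(curve[0]), float(curve[-1]))
```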

19 pages, 1408 KiB  
Article
Self-Supervised Learning of End-to-End 3D LiDAR Odometry for Urban Scene Modeling
by Shuting Chen, Zhiyong Wang, Chengxi Hong, Yanwen Sun, Hong Jia and Weiquan Liu
Remote Sens. 2025, 17(15), 2661; https://doi.org/10.3390/rs17152661 (registering DOI) - 1 Aug 2025
Abstract
Accurate and robust spatial perception is fundamental for dynamic 3D city modeling and urban environmental sensing. High-resolution remote sensing data, particularly LiDAR point clouds, are pivotal for these tasks due to their lighting invariance and precise geometric information. However, processing and aligning sequential LiDAR point clouds in complex urban environments presents significant challenges: traditional point-based or feature-matching methods are often sensitive to urban dynamics (e.g., moving vehicles and pedestrians) and struggle to establish reliable correspondences. While deep learning offers solutions, current approaches for point cloud alignment exhibit key limitations: self-supervised losses often neglect inherent alignment uncertainties, and supervised methods require costly pixel-level correspondence annotations. To address these challenges, we propose UnMinkLO-Net, an end-to-end self-supervised LiDAR odometry framework. Our method is as follows: (1) we efficiently encode 3D point cloud structures using voxel-based sparse convolution, and (2) we model inherent alignment uncertainty via covariance matrices, enabling novel self-supervised loss based on uncertainty modeling. Extensive evaluations on the KITTI urban dataset demonstrate UnMinkLO-Net’s effectiveness in achieving highly accurate point cloud registration. Our self-supervised approach, eliminating the need for manual annotations, provides a powerful foundation for processing and analyzing LiDAR data within multi-sensor urban sensing frameworks. Full article
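
A minimal PyTorch sketch of a covariance-weighted alignment loss of the kind summarised above: residuals are scaled by a predicted per-point (diagonal) covariance, with a log-determinant term that keeps the network from inflating uncertainty everywhere. This is a generic Mahalanobis-style formulation, not the exact UnMinkLO-Net loss.

```python
import torch

def uncertainty_weighted_loss(residuals, log_var):
    """residuals: (N, 3) alignment errors after applying the predicted pose.
    log_var:   (N, 3) predicted log-variances (diagonal covariance).
    Loss = 0.5 * (r^T Sigma^{-1} r + log det Sigma) averaged over points."""
    inv_var = torch.exp(-log_var)
    mahalanobis = (residuals ** 2 * inv_var).sum(dim=1)
    log_det = log_var.sum(dim=1)
    return 0.5 * (mahalanobis + log_det).mean()

# Toy example: 1000 residuals with per-point predicted uncertainty.
residuals = 0.05 * torch.randn(1000, 3)
log_var = torch.zeros(1000, 3, requires_grad=True)    # learned in practice
loss = uncertainty_weighted_loss(residuals, log_var)
loss.backward()
print(float(loss), log_var.grad.shape)
```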

27 pages, 15404 KiB  
Article
Machine-Learning Models for Surface Ozone Forecast in Mexico City
by Mateen Ahmad, Bernhard Rappenglück, Olabosipo O. Osibanjo and Armando Retama
Atmosphere 2025, 16(8), 931; https://doi.org/10.3390/atmos16080931 (registering DOI) - 1 Aug 2025
Abstract
Mexico City frequently experiences high near-surface ozone concentrations, and exposure to elevated near-surface ozone harms both the city's inhabitants and its environment. This necessitates developing models for Mexico City that predict near-surface ozone levels in advance. Such models are crucial for regulatory procedures and, by serving as early warning systems, can mitigate many of the detrimental effects of near-surface ozone. We utilize three machine-learning models, trained on seven years of data (2015–2021) and tested on one year of data (2022), to forecast near-surface ozone concentrations. The trained models predict the next day’s 24-h near-surface ozone concentrations for up to one month; before forecasting the following month, the models are retrained and updated. Based on the prediction results, the convolutional neural network outperforms the other models on a yearly scale, with an index of agreement of 0.93 for three stations, 0.92 for nine stations, and 0.91 for one station. Full article
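
A short numpy sketch of Willmott's index of agreement, assuming that is the d-statistic the abstract reports; the hourly ozone values in the example are synthetic.

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's d: 1 = perfect agreement, 0 = no agreement."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    o_bar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2)
    return 1.0 - num / den

# Hourly ozone observations vs. a day-ahead forecast (synthetic example, ppb).
rng = np.random.default_rng(0)
obs = 60 + 30 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 5, 24)
pred = obs + rng.normal(0, 8, 24)
print(f"index of agreement: {index_of_agreement(obs, pred):.2f}")
```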

20 pages, 5369 KiB  
Article
Smart Postharvest Management of Strawberries: YOLOv8-Driven Detection of Defects, Diseases, and Maturity
by Luana dos Santos Cordeiro, Irenilza de Alencar Nääs and Marcelo Tsuguio Okano
AgriEngineering 2025, 7(8), 246; https://doi.org/10.3390/agriengineering7080246 - 1 Aug 2025
Abstract
Strawberries are highly perishable fruits prone to postharvest losses due to defects, diseases, and uneven ripening. This study proposes a deep learning-based approach for automated quality assessment using the YOLOv8n object detection model. A custom dataset of 5663 annotated strawberry images was compiled, covering eight quality categories, including anthracnose, gray mold, powdery mildew, uneven ripening, and physical defects. Data augmentation techniques, such as rotation and Gaussian blur, were applied to enhance model generalization and robustness. The model was trained over 100 and 200 epochs, and its performance was evaluated using standard metrics: Precision, Recall, and mean Average Precision (mAP). The 200-epoch model achieved the best results, with a mAP50 of 0.79 and an inference time of 1 ms per image, demonstrating suitability for real-time applications. Classes with distinct visual features, such as anthracnose and gray mold, were accurately classified. In contrast, visually similar categories, such as ‘Good Quality’ and ‘Unripe’ strawberries, presented classification challenges. Full article
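
A small OpenCV sketch of the two augmentations named above, rotation and Gaussian blur; the angle range, kernel size, and probability are assumptions, and rotating the bounding-box annotations along with the pixels is left out.

```python
import cv2
import numpy as np

def rotate(image, angle_deg):
    """Rotate around the image centre, keeping the original canvas size."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REFLECT)

def gaussian_blur(image, ksize=5):
    return cv2.GaussianBlur(image, (ksize, ksize), 0)

def augment(image, rng):
    """One random augmented copy: small rotation plus optional blur."""
    out = rotate(image, float(rng.uniform(-25, 25)))
    if rng.random() < 0.5:
        out = gaussian_blur(out)
    return out

rng = np.random.default_rng(0)
img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)  # stand-in image
augmented = [augment(img, rng) for _ in range(4)]
print(len(augmented), augmented[0].shape)
```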

15 pages, 1767 KiB  
Article
A Contrastive Representation Learning Method for Event Classification in Φ-OTDR Systems
by Tong Zhang, Xinjie Peng, Yifan Liu, Kaiyang Yin and Pengfei Li
Sensors 2025, 25(15), 4744; https://doi.org/10.3390/s25154744 (registering DOI) - 1 Aug 2025
Abstract
The phase-sensitive optical time-domain reflectometry (Φ-OTDR) system has shown substantial potential in distributed acoustic sensing applications. Accurate event classification is crucial for effective deployment of Φ-OTDR systems, and various methods have been proposed for event classification in Φ-OTDR systems. However, most existing methods typically rely on sufficient labeled signal data for model training, which poses a major bottleneck in applying these methods due to the expensive and laborious process of labeling extensive data. To address this limitation, we propose CLWTNet, a novel contrastive representation learning method enhanced with wavelet transform convolution for event classification in Φ-OTDR systems. CLWTNet learns robust and discriminative representations directly from unlabeled signal data by transforming time-domain signals into STFT images and employing contrastive learning to maximize inter-class separation while preserving intra-class similarity. Furthermore, CLWTNet incorporates wavelet transform convolution to enhance its capacity to capture intricate features of event signals. The experimental results demonstrate that CLWTNet achieves competitive performance with the supervised representation learning methods and superior performance to unsupervised representation learning methods, even when training with unlabeled signal data. These findings highlight the effectiveness of CLWTNet in extracting discriminative representations without relying on labeled data, thereby enhancing data efficiency and reducing the costs and effort involved in extensive data labeling in practical Φ-OTDR system applications. Full article
(This article belongs to the Topic Distributed Optical Fiber Sensors)
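
A compact PyTorch sketch of the contrastive objective behind methods like CLWTNet: two augmented views of each unlabeled segment are embedded and pulled together with the NT-Xent loss. The toy encoder and noise augmentation stand in for the STFT front end and wavelet-transform convolutions, which are not reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent: matching views are positives; everything else in the batch is negative."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, D)
    sim = z @ z.t() / temperature                               # cosine similarities
    n = z.shape[0]
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                  # exclude self-pairs
    targets = torch.arange(n, device=z.device).roll(n // 2)     # i <-> i + B
    return F.cross_entropy(sim, targets)

# Toy embeddings of two augmented views of 32 unlabeled signal segments.
encoder = torch.nn.Sequential(torch.nn.Linear(256, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 64))
x = torch.randn(32, 256)                   # stand-in for flattened STFT images
z1 = encoder(x + 0.1 * torch.randn_like(x))
z2 = encoder(x + 0.1 * torch.randn_like(x))
print(float(nt_xent_loss(z1, z2)))
```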