Search Results (1,157)

Search Parameters:
Keywords = product image design

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Viewed by 116
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in the case of large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices—Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI)—based on a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm was applied to divide the field into five vegetation vigor classes. Within each class, 25% of the pixels with the lowest average index values were preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to adjust index thresholds interactively, based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the “moderate” and “low” vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
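As a concrete illustration of the index-and-thresholding step described in the abstract, here is a minimal sketch assuming an RGB orthomosaic already loaded as a NumPy array; the index formulas are the standard ExG, GLI, and MGRVI definitions, and the function names are illustrative, not taken from the paper's QGIS plugin.

```python
# Minimal sketch of the index + clustering stage (not the paper's QGIS plugin).
# Assumes `rgb` is an (H, W, 3) float array from the UAV orthomosaic with values in [0, 1].
import numpy as np
from sklearn.cluster import KMeans

def rgb_to_indices(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    exg = 2 * g - r - b                              # Excess Green
    gli = (2 * g - r - b) / (2 * g + r + b + eps)    # Green Leaf Index
    mgrvi = (g**2 - r**2) / (g**2 + r**2 + eps)      # Modified Green-Red Vegetation Index
    return np.stack([exg, gli, mgrvi], axis=-1)

def vigor_classes_and_damage(rgb, n_classes=5, damaged_fraction=0.25):
    idx = rgb_to_indices(rgb)
    flat = idx.reshape(-1, 3)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(flat)
    mean_idx = flat.mean(axis=1)
    damaged = np.zeros(flat.shape[0], dtype=bool)
    for c in range(n_classes):
        members = np.where(labels == c)[0]
        threshold = np.quantile(mean_idx[members], damaged_fraction)
        damaged[members[mean_idx[members] <= threshold]] = True   # lowest 25% per vigor class
    return labels.reshape(rgb.shape[:2]), damaged.reshape(rgb.shape[:2])
```

The per-class quantile threshold mirrors the "25% of pixels with the lowest average index values" rule; in the paper this threshold is then adjusted interactively by the analyst.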

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Viewed by 252
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt point generation to produce cloud masks that guide cross-modal feature-matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel accuracy at manual check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.

18 pages, 3347 KiB  
Article
Assessment of Machine Learning-Driven Retrievals of Arctic Sea Ice Thickness from L-Band Radiometry Remote Sensing
by Ferran Hernández-Macià, Gemma Sanjuan Gomez, Carolina Gabarró and Maria José Escorihuela
Computers 2025, 14(8), 305; https://doi.org/10.3390/computers14080305 - 28 Jul 2025
Viewed by 204
Abstract
This study evaluates machine learning-based methods for retrieving thin Arctic sea ice thickness (SIT) from L-band radiometry, using data from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. In addition to the operational ESA product, three alternative approaches are assessed: a Random Forest (RF) algorithm, a Convolutional Neural Network (CNN) that incorporates spatial coherence, and a Long Short-Term Memory (LSTM) neural network designed to capture temporal coherence. Validation against in situ data from the Beaufort Gyre Exploration Project (BGEP) moorings and the ESA SMOSice campaign demonstrates that the RF algorithm achieves robust performance comparable to the ESA product, despite its simplicity and lack of explicit spatial or temporal modeling. The CNN exhibits a tendency to overestimate SIT and shows higher dispersion, suggesting limited added value when spatial coherence is already present in the input data. The LSTM approach does not improve retrieval accuracy, likely due to the mismatch between satellite resolution and the temporal variability of sea ice conditions. These results highlight the importance of L-band sea ice emission modeling over increasing algorithm complexity and suggest that simpler, adaptable methods such as RF offer a promising foundation for future SIT retrieval efforts. The findings are relevant for refining current methods used with SMOS and for developing upcoming satellite missions, such as ESA’s Copernicus Imaging Microwave Radiometer (CIMR).
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
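The Random Forest baseline the abstract highlights can be sketched in a few lines; the feature set, placeholder data, and hyperparameters below are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative Random Forest baseline for thin-ice SIT retrieval (not the authors' exact setup).
# Assumes X holds per-gridcell SMOS features (e.g., H/V brightness temperatures, incidence angle)
# and y holds reference sea ice thickness in metres from moorings or campaign data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                     # placeholder features: TB_H, TB_V, incidence angle
y = np.clip(rng.gamma(2.0, 0.2, 5000), 0.0, 1.5)   # placeholder SIT values (m)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X_train, y_train)
print("MAE (m):", mean_absolute_error(y_test, rf.predict(X_test)))
```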

29 pages, 3125 KiB  
Article
Tomato Leaf Disease Identification Framework FCMNet Based on Multimodal Fusion
by Siming Deng, Jiale Zhu, Yang Hu, Mingfang He and Yonglin Xia
Plants 2025, 14(15), 2329; https://doi.org/10.3390/plants14152329 - 27 Jul 2025
Viewed by 438
Abstract
Precisely recognizing diseases in tomato leaves plays a crucial role in enhancing the health, productivity, and quality of tomato crops. However, disease identification methods that rely on single-modality information often face the problems of insufficient accuracy and weak generalization ability. Therefore, this paper proposes a tomato leaf disease recognition framework, FCMNet, based on multimodal fusion, which combines tomato leaf disease images with text descriptions to enhance the ability to capture disease characteristics. In this paper, the Fourier-guided Attention Mechanism (FGAM) is designed, which systematically embeds Fourier frequency-domain information into the spatial-channel attention structure for the first time, enhances the stability and noise resistance of feature expression through spectral transform, and realizes more accurate lesion localization by means of multi-scale fusion of local and global features. In order to realize deep semantic interaction between the image and text modalities, a Cross Vision–Language Alignment module (CVLA) is further proposed. This module generates visual representations compatible with BERT embeddings by utilizing block segmentation and feature mapping techniques. Additionally, it incorporates a probability-based weighting mechanism to achieve enhanced multimodal fusion, significantly strengthening the model’s comprehension of semantic relationships across different modalities. Furthermore, to enhance both training efficiency and parameter optimization capabilities of the model, we introduce a Multi-strategy Improved Coati Optimization Algorithm (MSCOA). This algorithm integrates Good Point Set initialization with a Golden Sine search strategy, thereby boosting global exploration, accelerating convergence, and effectively preventing entrapment in local optima. Consequently, it exhibits robust adaptability and stable performance within high-dimensional search spaces. The experimental results show that the FCMNet model has increased accuracy and precision by 2.61% and 2.85%, respectively, compared with the baseline model on the self-built dataset of tomato leaf diseases, and the recall and F1 score have increased by 3.03% and 3.06%, respectively, which is significantly superior to existing methods. This research provides a new solution for the identification of tomato leaf diseases and has broad potential for agricultural applications.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

17 pages, 4139 KiB  
Article
Design and Development of an Intelligent Chlorophyll Content Detection System for Cotton Leaves
by Wu Wei, Lixin Zhang, Xue Hu and Siyao Yu
Processes 2025, 13(8), 2329; https://doi.org/10.3390/pr13082329 - 22 Jul 2025
Viewed by 219
Abstract
In order to meet the needs for the rapid detection of crop growth and support variable management in farmland, an intelligent detection system for chlorophyll content in cotton leaves (CCC) based on hyperspectral imaging (HSI) technology was designed and developed. The system includes a near-infrared (NIR) hyperspectral image acquisition module, a spectral extraction module, a main control processor module, a model acceleration module, a display module, and a power module, which are used to achieve rapid and non-destructive detection of chlorophyll content. Firstly, spectral images of cotton canopy leaves during the seedling, budding, and flowering-boll stages were collected, and the dataset was optimized using the first-order differential algorithm (1D) and Savitzky–Golay five-term quadratic smoothing (SG) algorithm. The results showed that SG had better processing performance. Secondly, the sparrow search algorithm-optimized backpropagation neural network (SSA-BPNN) and one-dimensional convolutional neural network (1DCNN) algorithms were selected to establish a chlorophyll content detection model. The results showed that the determination coefficients Rp² of the chlorophyll SG-1DCNN detection model during the seedling, budding, and flowering-boll stages were 0.92, 0.97, and 0.95, respectively, and the model performance was superior to SG-SSA-BPNN. Therefore, the SG-1DCNN model was embedded into the detection system. Finally, a CCC intelligent detection system was developed using Python 3.12.3, MATLAB 2020b, and ENVI, and the system was subjected to application testing. The results showed that the average detection accuracy of the CCC intelligent detection system in the three stages was 98.522%, 99.132%, and 97.449%, respectively, while the average detection time per sample was only 20.12 s. The system can effectively detect the nutritional status of cotton in the field, meet real-time detection needs, and provide solutions and technical support for the intelligent perception of crop production.
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)
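The "five-term quadratic" Savitzky–Golay smoothing step maps directly onto a window of 5 and polynomial order 2; the sketch below shows that preprocessing plus a minimal 1D-CNN regressor, with placeholder data and a network that is only indicative of the general architecture, not the paper's tuned SG-1DCNN model.

```python
# Sketch of SG smoothing + a small 1D-CNN regressor, assuming `spectra` is an
# (n_samples, n_bands) array of NIR reflectance and `chl` the measured chlorophyll values.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import savgol_filter

spectra = np.random.rand(200, 256).astype(np.float32)   # placeholder spectra
chl = np.random.rand(200, 1).astype(np.float32)          # placeholder chlorophyll targets

# "Five-term quadratic smoothing": window_length=5, polyorder=2 along the band axis.
smoothed = savgol_filter(spectra, window_length=5, polyorder=2, axis=-1).astype(np.float32)

model = nn.Sequential(                                    # minimal 1D-CNN, not the paper's exact net
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(32, 1),
)
x = torch.from_numpy(smoothed).unsqueeze(1)               # (N, 1, bands)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                        # a few illustrative training steps
    optim.zero_grad()
    loss = nn.functional.mse_loss(model(x), torch.from_numpy(chl))
    loss.backward()
    optim.step()
```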

27 pages, 2034 KiB  
Article
LCFC-Laptop: A Benchmark Dataset for Detecting Surface Defects in Consumer Electronics
by Hua-Feng Dai, Jyun-Rong Wang, Quan Zhong, Dong Qin, Hao Liu and Fei Guo
Sensors 2025, 25(15), 4535; https://doi.org/10.3390/s25154535 - 22 Jul 2025
Viewed by 307
Abstract
As a high-market-value sector, the consumer electronics industry is particularly vulnerable to reputational damage from surface defects in shipped products. However, the high level of automation and the short product life cycles in this industry make defect sample collection both difficult and inefficient. This challenge has led to a severe shortage of publicly available, comprehensive datasets dedicated to surface defect detection, limiting the development of targeted methodologies in the academic community. Most existing datasets focus on general-purpose object categories, such as those in the COCO and PASCAL VOC datasets, or on industrial surfaces, such as those in the MvTec AD and ZJU-Leaper datasets. However, these datasets differ significantly in structure, defect types, and imaging conditions from those specific to consumer electronics. As a result, models trained on them often perform poorly when applied to surface defect detection tasks in this domain. To address this issue, the present study introduces a specialized optical sampling system with six distinct lighting configurations, each designed to highlight different surface defect types. These lighting conditions were calibrated by experienced optical engineers to maximize defect visibility and detectability. Using this system, 14,478 high-resolution defect images were collected from actual production environments. These images cover more than six defect types, such as scratches, plain particles, edge particles, dirt, collisions, and unknown defects. After data acquisition, senior quality control inspectors and manufacturing engineers established standardized annotation criteria based on real-world industrial acceptance standards. Annotations were then applied using bounding boxes for object detection and pixelwise masks for semantic segmentation. In addition to the dataset construction scheme, commonly used semantic segmentation methods were benchmarked using the provided mask annotations. The resulting dataset has been made publicly available to support the research community in developing, testing, and refining advanced surface defect detection algorithms under realistic conditions. To the best of our knowledge, this is the first comprehensive, multiclass, multi-defect dataset for surface defect detection in the consumer electronics domain that provides pixel-level ground-truth annotations and is explicitly designed for real-world applications.
(This article belongs to the Section Electronic Sensors)
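Benchmarking segmentation models against pixel-wise mask annotations typically reduces to mask-overlap metrics; a minimal intersection-over-union helper is sketched below, with array names chosen for illustration only.

```python
# Minimal IoU evaluation for binary defect masks, the kind of metric used when benchmarking
# segmentation models on pixel-wise annotations; array names here are illustrative.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                      # both masks empty: count as a perfect match
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), dtype=bool); gt[12:32, 12:32] = True
print(f"IoU = {mask_iou(pred, gt):.3f}")
```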

21 pages, 16254 KiB  
Article
Prediction of Winter Wheat Yield and Interpretable Accuracy Under Different Water and Nitrogen Treatments Based on CNNResNet-50
by Donglin Wang, Yuhan Cheng, Longfei Shi, Huiqing Yin, Guangguang Yang, Shaobo Liu, Qinge Dong and Jiankun Ge
Agronomy 2025, 15(7), 1755; https://doi.org/10.3390/agronomy15071755 - 21 Jul 2025
Viewed by 410
Abstract
Winter wheat yield prediction is critical for optimizing field management plans and guiding agricultural production. To address the limitations of conventional manual yield estimation methods, including low efficiency and poor interpretability, this study innovatively proposes an intelligent yield estimation method based on a convolutional neural network (CNN). A comprehensive two-factor (fertilization × irrigation) controlled field experiment was designed to thoroughly validate the applicability and effectiveness of this method. The experimental design comprised two irrigation treatments, sufficient irrigation (C) at 750 m³ ha⁻¹ and deficit irrigation (M) at 450 m³ ha⁻¹, along with five fertilization treatments (at a rate of 180 kg N ha⁻¹): (1) organic fertilizer alone, (2) organic–inorganic fertilizer blend at a 7:3 ratio, (3) organic–inorganic fertilizer blend at a 3:7 ratio, (4) inorganic fertilizer alone, and (5) no fertilizer control. The experimental protocol employed a DJI M300 RTK unmanned aerial vehicle (UAV) equipped with a multispectral sensor to systematically acquire high-resolution growth imagery of winter wheat across critical phenological stages, from heading to maturity. The acquired multispectral imagery was meticulously annotated using the Labelme professional annotation tool to construct a comprehensive experimental dataset comprising over 2000 labeled images. These annotated data were subsequently employed to train an enhanced CNN model based on the ResNet50 architecture, which achieved automated generation of panicle density maps and precise panicle counting, thereby realizing yield prediction. Field experimental results demonstrated significant yield variations among fertilization treatments under sufficient irrigation, with the 3:7 organic–inorganic blend achieving the highest actual yield (9363.38 ± 468.17 kg ha⁻¹), significantly outperforming other treatments (p < 0.05) and confirming the synergistic effects of optimized nitrogen and water management. The enhanced CNN model exhibited superior performance, with an average accuracy of 89.0–92.1%, representing a 3.0% improvement over YOLOv8. Notably, model accuracy showed significant correlation with yield levels (p < 0.05), suggesting more distinct panicle morphological features in high-yield plots that facilitated model identification. The CNN’s yield predictions demonstrated strong agreement with the measured values, maintaining mean relative errors below 10%. Particularly outstanding performance was observed for the organic fertilizer with full irrigation (5.5% error) and the 7:3 organic–inorganic blend with sufficient irrigation (8.0% error), indicating that the CNN is more suitable for these management regimes. These findings provide a robust technical foundation for precision farming applications in winter wheat production. Future research will focus on integrating this technology into smart agricultural management systems to enable real-time, data-driven decision making at the farm scale.
(This article belongs to the Section Precision and Digital Agriculture)
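For orientation, a simplified counting head on a ResNet-50 backbone is sketched below; it regresses a single panicle count per image rather than reproducing the authors' enhanced architecture or their density-map decoder, and all names and input sizes are illustrative assumptions.

```python
# Simplified counting head on a ResNet-50 backbone (an illustration of the general idea,
# not the authors' enhanced model or its density-map output).
import torch
import torch.nn as nn
from torchvision import models

class PanicleCounter(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)          # or ImageNet-pretrained weights
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # keep up to global pooling
        self.head = nn.Linear(2048, 1)                    # regress a single panicle count

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

model = PanicleCounter()
dummy = torch.randn(2, 3, 224, 224)                       # two image crops as placeholder input
print(model(dummy).shape)                                  # torch.Size([2, 1])
```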

16 pages, 2914 KiB  
Article
Smart Dairy Farming: A Mobile Application for Milk Yield Classification Tasks
by Allan Hall-Solorio, Graciela Ramirez-Alonso, Alfonso Juventino Chay-Canul, Héctor A. Lee-Rangel, Einar Vargas-Bello-Pérez and David R. Lopez-Flores
Animals 2025, 15(14), 2146; https://doi.org/10.3390/ani15142146 - 21 Jul 2025
Viewed by 363
Abstract
This study analyzes the use of a lightweight image-based deep learning model to classify dairy cows into low-, medium-, and high-milk-yield categories by automatically detecting the udder region of the cow. The implemented model was based on the YOLOv11 architecture, which enables efficient object detection and classification with real-time performance. The model is trained on a public dataset of cow images labeled with 305-day milk yield records. Thresholds were established to define the three yield classes, and a balanced subset of labeled images was selected for training, validation, and testing purposes. To assess the robustness and consistency of the proposed approach, the model was trained 30 times following the same experimental protocol. The system achieves precision, recall, and mean Average Precision (mAP@50) of 0.408 ± 0.044, 0.739 ± 0.095, and 0.492 ± 0.031, respectively, across all classes. The highest precision (0.445 ± 0.055), recall (0.766 ± 0.107), and mAP@50 (0.558 ± 0.036) were observed in the low-yield class. Qualitative analysis revealed that misclassifications mainly occurred near class boundaries, emphasizing the importance of consistent image acquisition conditions. The resulting model was deployed in a mobile application designed to support field-level assessment by non-specialist users. These findings demonstrate the practical feasibility of applying vision-based models to support decision-making in dairy production systems, particularly in settings where traditional data collection methods are unavailable or impractical.

28 pages, 25758 KiB  
Article
Cam Design and Pin Defect Detection of Cam Pin Insertion Machine in IGBT Packaging
by Wenchao Tian, Pengchao Zhang, Mingfang Tian, Si Chen, Haoyue Ji and Bingxu Ma
Micromachines 2025, 16(7), 829; https://doi.org/10.3390/mi16070829 - 20 Jul 2025
Viewed by 287
Abstract
Packaging equipment plays a crucial role in the semiconductor industry by enhancing product quality and reducing labor costs through automation. Research was conducted on insulated gate bipolar transistor (IGBT) module packaging equipment (an automatic pin insertion machine) during the pin assembly process to improve productivity and product quality. First, the manual pin assembly process was divided into four stages: feeding, stabilizing, clamping, and inserting. Each stage was completed by a separate cam, and corresponding step timing diagrams were drawn. The profiles of the four cams were designed and verified through theoretical calculations and kinematic simulations using a seventh-degree polynomial curve fitting method. Then, image algorithms were developed to detect pin tilt and pin tip defects and to provide visual guidance for pin insertion. Finally, a pin insertion machine and its human–machine interaction interface were constructed. On-machine results show that the pin cutting pass rate reached 97%, the average insertion time for one pin was 2.84 s, the pass rate for pin insertion reached 99.75%, and the pin image guidance accuracy was 0.02 mm. Therefore, the designed pin assembly machine can reliably and consistently perform the pin insertion task, providing theoretical and experimental insights for the automated production of IGBT modules.
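A seventh-degree displacement polynomial with zero velocity, acceleration, and jerk at both ends of the rise is the textbook 4-5-6-7 motion law; the sketch below solves for those coefficients numerically as an illustration of the idea, under an assumed rise value, and does not claim to reproduce the paper's actual cam constraints.

```python
# Illustrative seventh-degree (4-5-6-7) cam displacement law: solve for polynomial coefficients
# with zero velocity, acceleration, and jerk at both ends of the rise. Textbook construction,
# not necessarily the constraint set used in the paper.
import numpy as np
from math import factorial

h = 10.0                                   # total follower rise over the segment (assumed, mm)
A = np.zeros((8, 8))
b = np.zeros(8)
for d in range(4):                         # displacement, velocity, acceleration, jerk
    A[d, d] = factorial(d)                 # d-th derivative at t = 0 (only the t^d term survives)
    for k in range(d, 8):
        A[4 + d, k] = factorial(k) / factorial(k - d)   # d-th derivative at t = 1
b[4] = h                                   # s(1) = h; all other boundary values are zero
c = np.linalg.solve(A, b)                  # coefficients c_0 .. c_7

t = np.linspace(0.0, 1.0, 201)
s = np.polyval(c[::-1], t)                 # displacement curve; derivatives follow via np.polyder
print(np.round(c, 3))                      # expected form: h*(35 t^4 - 84 t^5 + 70 t^6 - 20 t^7)
```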

20 pages, 5404 KiB  
Article
Flying Steel Detection in Wire Rod Production Based on Improved You Only Look Once v8
by Yifan Lu, Fei Zhang, Xiaozhan Li, Jian Zhang, Xiong Xiao, Lijun Wang and Xiaofei Xiang
Processes 2025, 13(7), 2297; https://doi.org/10.3390/pr13072297 - 18 Jul 2025
Viewed by 360
Abstract
In high-speed wire rod production, flying steel accidents may occur for various reasons. Current detection methods rely on hardware sensors, which are complicated to debug and limit both the real-time performance and the accuracy of detection. Therefore, this paper proposes a flying steel detection method based on an improved You Only Look Once v8 (YOLOv8) model, which achieves high-precision, machine-vision-based detection of flying steel from production-site monitoring video. Firstly, Omni-dimensional Dynamic Convolution (ODConv) is added to the backbone network to improve feature extraction from the input image. Then, a lightweight C2f-PCCA_RVB module is integrated into the neck network to reduce its complexity. Finally, the Efficient Multi-Scale Attention (EMA) module is added to the neck network to fuse contextual information across scales and further improve feature extraction. The experimental results show that the improved YOLOv8 achieves a mean average precision (mAP@0.5) of 99.1% with latency reduced to 2.5 ms, enabling real-time, accurate detection of flying steel.
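A baseline YOLOv8 fine-tuning loop with the standard Ultralytics API is sketched below as a starting point; the paper's custom modules (ODConv, C2f-PCCA_RVB, EMA) require editing the model definition and are not reproduced here, and the dataset name and hyperparameters are illustrative assumptions.

```python
# Baseline YOLOv8 fine-tuning with the stock Ultralytics API (not the paper's modified network).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                            # stock nano model as a starting point
model.train(data="flying_steel.yaml", epochs=100,     # hypothetical dataset config
            imgsz=640, batch=16)
metrics = model.val()                                  # reports mAP@0.5 among other metrics
results = model("mill_camera_frame.jpg", conf=0.25)    # inference on a monitoring-video frame
```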

17 pages, 2829 KiB  
Article
Apparatus and Experiments Towards Fully Automated Medical Isotope Production Using an Ion Beam Accelerator
by Abdulaziz Yahya M. Hussain, Aliaksandr Baidak, Ananya Choudhury, Andy Smith, Carl Andrews, Eliza Wojcik, Liam Brown, Matthew Nancekievill, Samir De Moraes Shubeita, Tim A. D. Smith, Volkan Yasakci and Frederick Currell
Instruments 2025, 9(3), 18; https://doi.org/10.3390/instruments9030018 - 18 Jul 2025
Viewed by 236
Abstract
Zirconium-89 (⁸⁹Zr) is a widely used radionuclide in immuno-PET imaging due to its physical decay characteristics. Despite its importance, the production of ⁸⁹Zr radiopharmaceuticals remains largely manual, with limited cost-effective automation solutions available. To address this, we developed an automated system for the agile and reliable production of radiopharmaceuticals. The system performs transmutations, dissolution, and separation for a range of radioisotopes. Steps in the production of ⁸⁹Zr-oxalate are used as an exemplar to illustrate its use. Three-dimensional (3D) printing was exploited to design and manufacture a target holder able to include solid targets, in this case an ⁸⁹Y foil. Spot welding was used to attach ⁸⁹Y to a refractory tantalum (Ta) substrate. A commercially available CPU chiller was repurposed to efficiently cool the metal target. Furthermore, a commercial resin (ZR Resin) and compact peristaltic pumps were employed in a compact (10 × 10 × 10 cm³) chemical separation unit that operates automatically via computer-controlled software. Additionally, a standalone 3D-printed unit was designed with three automated functionalities: photolabelling, vortex mixing, and controlled heating. All components of the assembly, except for the target holder, are housed inside a commercially available hot cell, ensuring safe and efficient operation in a controlled environment. This paper details the design, construction, and modelling of the entire assembly, emphasising its innovative integration and operational efficiency for widespread radiopharmaceutical automation.

19 pages, 15854 KiB  
Article
Failure Analysis of Fire in Lithium-Ion Battery-Powered Heating Insoles: Case Study
by Rong Yuan, Sylvia Jin and Glen Stevick
Batteries 2025, 11(7), 271; https://doi.org/10.3390/batteries11070271 - 17 Jul 2025
Viewed by 387
Abstract
This study investigates a lithium-ion battery failure in heating insoles that ignited during normal walking while powered off. Through comprehensive material characterization, electrical testing, thermal analysis, and mechanical gait simulation, we systematically excluded electrical or thermal abuse as failure causes. X-ray/CT imaging localized the ignition source to the lateral heel edge of the pouch cell, correlating precisely with peak mechanical stress identified through gait analysis. Remarkably, the cyclic load was less than 10% of the single crush load threshold specified in safety standards. Key findings reveal multiple contributing factors as follows: the uncoated polyethylene separator’s inability to prevent stress-induced internal short circuits, the circuit design’s lack of battery health monitoring functionality that permitted undetected degradation, and the hazardous placement inside clothing that exacerbated burn injuries. These findings necessitate a multi-level safety framework for lithium-ion battery products, encompassing enhanced cell design to prevent internal short circuit, improved circuit protection with health monitoring capabilities, optimized product integration to mitigate mechanical and environmental impact, and effective post-failure containment measures. This case study exposes a critical need for product-specific safety standards that address the unique demands of wearable lithium-ion batteries, where existing certification requirements fail to prevent real-use failure scenarios.
(This article belongs to the Section Battery Performance, Ageing, Reliability and Safety)

29 pages, 4633 KiB  
Article
Failure Detection of Laser Welding Seam for Electric Automotive Brake Joints Based on Image Feature Extraction
by Diqing Fan, Chenjiang Yu, Ling Sha, Haifeng Zhang and Xintian Liu
Machines 2025, 13(7), 616; https://doi.org/10.3390/machines13070616 - 17 Jul 2025
Viewed by 247
Abstract
As a key component in the hydraulic brake system of automobiles, the brake joint directly affects the braking performance and driving safety of the vehicle. Therefore, improving the quality of brake joints is crucial. During processing, due to the complexity of the welding material and welding process, the weld seam is prone to various defects such as cracks, pores, undercutting, and incomplete fusion, which can weaken the joint and even lead to product failure. Traditional weld seam detection methods include destructive testing and non-destructive testing; however, destructive testing has high costs and long cycles, and non-destructive testing, such as radiographic testing and ultrasonic testing, also has problems such as high consumable costs, slow detection speed, or high requirements for operator experience. In response to these challenges, this article proposes a defect detection and classification method for laser welding seams of automotive brake joints based on machine vision inspection technology. Laser-welded automotive brake joints are subjected to weld defect detection and classification, and image processing algorithms are optimized to improve the accuracy of detection and failure analysis by exploiting the efficiency, low cost, flexibility, and automation advantages of machine vision technology. This article first analyzes the common types of weld defects in laser welding of automotive brake joints, including craters, holes, and nibbling, and explores the causes and characteristics of these defects. Then, an image processing algorithm suitable for laser welding of automotive brake joints was studied, including pre-processing steps such as image smoothing, image enhancement, threshold segmentation, and morphological processing, to extract feature parameters of weld defects. On this basis, a welding seam defect detection and classification system based on the cascade classifier and AdaBoost algorithm was designed, and efficient recognition and classification of welding seam defects were achieved by training the cascade classifier. The results show that the system can accurately identify and distinguish pits, holes, and undercutting defects in welds, with an average classification accuracy of over 90%. The detection and recognition rate of pit defects reaches 100%, the detection accuracy of undercutting defects is 92.6%, and the overall missed detection rate is less than 3%, with both the missed detection rate and false detection rate for pit defects being 0%. The average detection time for each image is 0.24 s, meeting the real-time requirements of industrial automation. Compared with infrared and ultrasonic detection methods, the proposed machine-vision-based detection system has significant advantages in detection speed, surface defect recognition accuracy, and industrial adaptability. This provides an efficient and accurate solution for laser welding defect detection of automotive brake joints.
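The pre-processing chain named in the abstract (smoothing, enhancement, threshold segmentation, morphology, feature extraction) can be prototyped with OpenCV as shown below; parameter values, the input path, and the extracted shape features are illustrative assumptions rather than the tuned settings from the paper.

```python
# Sketch of the described pre-processing chain using OpenCV; parameters are illustrative.
import cv2

img = cv2.imread("weld_seam.png", cv2.IMREAD_GRAYSCALE)      # assumed input image path
assert img is not None, "load a weld-seam image here"

blurred = cv2.GaussianBlur(img, (5, 5), 0)                    # image smoothing
enhanced = cv2.equalizeHist(blurred)                          # contrast enhancement
_, binary = cv2.threshold(enhanced, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # threshold segmentation
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # morphological clean-up

# Extract candidate defect regions and simple shape features for the downstream classifier.
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    aspect = w / max(h, 1)
    # (area, aspect, ...) would feed the cascade/AdaBoost classification stage
```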

24 pages, 911 KiB  
Article
Integrated Process-Oriented Approach for Digital Authentication of Honey in Food Quality and Safety Systems—A Case Study from a Research and Development Project
by Joanna Katarzyna Banach, Przemysław Rujna and Bartosz Lewandowski
Appl. Sci. 2025, 15(14), 7850; https://doi.org/10.3390/app15147850 - 14 Jul 2025
Viewed by 323
Abstract
The increasing scale of honey adulteration poses a significant challenge for modern food quality and safety management systems. Honey authenticity, defined as the conformity of products with their declared botanical and geographical origin, is challenging to verify solely through documentation and conventional physicochemical analyses. This study presents an integrated, process-oriented approach for digital honey authentication, building on initial findings from an interdisciplinary research and development project. The approach includes the creation of a comprehensive digital pollen database and the application of AI-driven image segmentation and classification methods. The developed system is designed to support decision-making processes in quality assessment and VACCP (Vulnerability Assessment and Critical Control Points) risk evaluation, enhancing the operational resilience of honey supply chains against fraudulent practices. This study aligns with current trends in the digitization of food quality management and the use of Industry 4.0 technologies in the agri-food sector, demonstrating the practical feasibility of integrating AI-supported palynological analysis into industrial workflows. The results indicate that the proposed approach can significantly improve the accuracy and efficiency of honey authenticity assessments, supporting the integrity and transparency of global honey markets.
(This article belongs to the Special Issue Advances in Safety Detection and Quality Control of Food)

23 pages, 8911 KiB  
Article
Porosity Analysis and Thermal Conductivity Prediction of Non-Autoclaved Aerated Concrete Using Convolutional Neural Network and Numerical Modeling
by Alexey N. Beskopylny, Evgenii M. Shcherban’, Sergey A. Stel’makh, Diana Elshaeva, Andrei Chernil’nik, Irina Razveeva, Ivan Panfilov, Alexey Kozhakin, Emrah Madenci, Ceyhun Aksoylu and Yasin Onuralp Özkılıç
Buildings 2025, 15(14), 2442; https://doi.org/10.3390/buildings15142442 - 11 Jul 2025
Viewed by 289
Abstract
Currently, the visual study of the structure of building materials and products is gradually supplemented by intelligent algorithms based on computer vision technologies. These algorithms are powerful tools for the visual diagnostic analysis of materials and are of great importance in analyzing the quality of production processes and predicting their mechanical properties. This paper considers the process of analyzing the visual structure of non-autoclaved aerated concrete products, namely their porosity, using the YOLOv11 convolutional neural network, with a subsequent prediction of one of the most important properties—thermal conductivity. The object of this study is a database of images of aerated concrete samples obtained under laboratory conditions and under the same photography conditions, supplemented using the authors’ augmentation algorithm (up to 100 photographs). The results of the porosity analysis, obtained in the form of a log-normal distribution of pore sizes, show that the developed computer vision model achieves high accuracy in analyzing the porous structure of the material under study: precision = 0.86 and recall = 0.88 for detection; precision = 0.86 and recall = 0.91 for segmentation. The Hellinger and Kolmogorov–Smirnov statistical criteria, used to test whether the real pore-size distribution and the one obtained by the intelligent algorithm belong to the same general population, show high significance. Subsequent modeling of the material using the ANSYS 2024 R2 Material Designer module, taking into account the stochastic nature of the pore size, allowed us to predict the main characteristics—thermal conductivity and density. Comparison of the predicted results with real data showed an error of less than 7%.
(This article belongs to the Section Building Materials, and Repair & Renovation)
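The distributional comparison mentioned in the abstract can be illustrated with a log-normal fit and a Kolmogorov–Smirnov test; the pore-diameter data below are synthetic placeholders standing in for measurements extracted from the segmentation masks.

```python
# Illustrative pore-size check: fit a log-normal to detected pore diameters and test the fit
# with the Kolmogorov–Smirnov statistic. `pore_diameters_mm` is assumed to come from the masks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pore_diameters_mm = rng.lognormal(mean=-0.5, sigma=0.4, size=500)    # placeholder measurements

shape, loc, scale = stats.lognorm.fit(pore_diameters_mm, floc=0)     # fit log-normal (loc fixed at 0)
ks_stat, p_value = stats.kstest(pore_diameters_mm, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")             # high p -> consistent fit
```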
