Search Results (100)

Search Parameters:
Keywords = line-scan cameras

33 pages, 8582 KiB  
Article
Mobile Tunnel Lining Measurable Image Scanning Assisted by Collimated Lasers
by Xueqin Wu, Jian Ma, Jianfeng Wang, Hongxun Song and Jiyang Xu
Sensors 2025, 25(13), 4177; https://doi.org/10.3390/s25134177 - 4 Jul 2025
Viewed by 202
Abstract
The health of road tunnel linings directly impacts traffic safety and requires regular inspection. Appearance defects on tunnel linings can be measured through images scanned by cameras mounted on a car to avoid disrupting traffic. Existing tunnel lining mobile scanning methods often fail in image stitching due to the lack of corresponding feature points in the lining images, or require complex, time-consuming algorithms to eliminate stitching seams caused by the same issue. To address these problems, this paper proposes a mobile scanning method aided by collimated lasers, which uses the lasers as corresponding points to assist with image stitching. Additionally, the lasers serve as structured light, enabling the measurement of image projection relationships. An inspection car was developed based on this method for the experiment. To ensure operational flexibility, a single checkerboard was used to calibrate the system, including estimating the poses of lasers and cameras, and a Laplace kernel-based algorithm was developed to guarantee calibration accuracy. Experiments show that the performance of this algorithm exceeds that of other benchmark algorithms, and the proposed method produces nearly seamless, measurable tunnel lining images, demonstrating its feasibility. Full article
(This article belongs to the Section Remote Sensors)
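Since the abstract above hinges on using laser spots as known corresponding points for stitching, here is a minimal, generic OpenCV sketch of that idea; the image files, spot coordinates, and canvas size are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: stitch two images when corresponding points are already known
# (here, hypothetical pixel coordinates of laser spots detected in both frames).
# Generic OpenCV homography example, not the authors' method.
import cv2
import numpy as np

img_a = cv2.imread("frame_a.png")          # hypothetical adjacent lining frames
img_b = cv2.imread("frame_b.png")

# Hypothetical laser-spot centroids (x, y) found in each frame, in matching order.
pts_a = np.array([[120, 40], [118, 520], [640, 45], [636, 515]], dtype=np.float32)
pts_b = np.array([[15, 38], [12, 518], [534, 43], [530, 512]], dtype=np.float32)

# Estimate the homography mapping frame B into frame A's coordinates.
H, _ = cv2.findHomography(pts_b, pts_a)

# Warp frame B onto a canvas wide enough for both, then paste frame A over it.
h, w = img_a.shape[:2]
canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
canvas[:h, :w] = img_a
cv2.imwrite("stitched.png", canvas)
```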

14 pages, 3205 KiB  
Article
A 209 ps Shutter-Time CMOS Image Sensor for Ultra-Fast Diagnosis
by Houzhi Cai, Zhaoyang Xie, Youlin Ma and Lijuan Xiang
Sensors 2025, 25(12), 3835; https://doi.org/10.3390/s25123835 - 19 Jun 2025
Viewed by 385
Abstract
A conventional microchannel plate framing camera is typically utilized for inertial confinement fusion diagnosis. However, as a vacuum electronic device, it has inherent limitations, such as a complex structure and the inability to achieve single-line-of-sight imaging. To address these challenges, a CMOS image sensor that can be seamlessly integrated with an electronic pulse broadening system can provide a viable alternative to the microchannel plate detector. This paper introduces the design of an 8 × 8 pixel-array ultrashort shutter-time single-framing CMOS image sensor, which leverages silicon epitaxial processing and a 0.18 μm standard CMOS process. The focus of this study is on the photodiode and the readout pixel-array circuit. The photodiode, designed using the silicon epitaxial process, achieves a quantum efficiency exceeding 30% in the visible light band at a bias voltage of 1.8 V, with a temporal resolution greater than 200 ps for visible light. The readout pixel-array circuit, which is based on the 0.18 μm standard CMOS process, incorporates 5T structure pixel units, voltage-controlled delayers, clock trees, and row-column decoding and scanning circuits. Simulations of the pixel circuit demonstrate an optimal temporal resolution of 60 ps. Under the shutter condition with the best temporal resolution, the maximum output swing of the pixel circuit is 448 mV, and the output noise is 77.47 μV, resulting in a dynamic range of 75.2 dB for the pixel circuit; the small-signal responsivity is 1.93 × 10⁻⁷ V/e, and the full-well capacity is 2.3 Me. The maximum power consumption of the 8 × 8 pixel-array and its control circuits is 0.35 mW. Considering both the photodiode and the pixel circuit, the proposed CMOS image sensor achieves a temporal resolution better than 209 ps. Full article
(This article belongs to the Special Issue Ultrafast Optoelectronic Sensing and Imaging)
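The dynamic range quoted above follows directly from the reported output swing and noise; a quick check of that arithmetic, using only the numbers given in the abstract:

```python
# Quick check of the reported pixel-circuit dynamic range:
# DR = 20 * log10(maximum output swing / output noise).
import math

swing_v = 448e-3      # maximum output swing, 448 mV
noise_v = 77.47e-6    # output noise, 77.47 uV

dr_db = 20 * math.log10(swing_v / noise_v)
print(f"dynamic range ~ {dr_db:.1f} dB")   # ~75.2 dB, matching the abstract
```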

39 pages, 13529 KiB  
Article
Intelligent Monitoring of BECS Conveyors via Vision and the IoT for Safety and Separation Efficiency
by Shohreh Kia and Benjamin Leiding
Appl. Sci. 2025, 15(11), 5891; https://doi.org/10.3390/app15115891 - 23 May 2025
Viewed by 652
Abstract
Conveyor belts are critical in various industries, particularly in the barrier eddy current separator systems used in recycling processes. However, hidden issues, such as belt misalignment, excessive heat that can lead to fire hazards, and the presence of sharp or irregularly shaped materials, reduce operational efficiency and pose serious threats to the health and safety of personnel on the production floor. This study presents an intelligent monitoring and protection system for barrier eddy current separator conveyor belts designed to safeguard machinery and human workers simultaneously. In this system, a thermal camera continuously monitors the surface temperature of the conveyor belt, especially in the area above the magnetic drum—where unwanted ferromagnetic materials can lead to abnormal heating and potential fire risks. The system detects temperature anomalies in this critical zone. The early detection of these risks triggers audio–visual alerts and IoT-based warning messages that are sent to technicians, which is vital in preventing fire-related injuries and minimizing emergency response time. Simultaneously, a machine vision module autonomously detects and corrects belt misalignment, eliminating the need for manual intervention and reducing the risk of worker exposure to moving mechanical parts. Additionally, a line-scan camera integrated with the YOLOv11 AI model analyses the shape of materials on the conveyor belt, distinguishing between rounded and sharp-edged objects. This system enhances the accuracy of material separation and reduces the likelihood of injuries caused by the impact or ejection of sharp fragments during maintenance or handling. The YOLOv11n-seg model implemented in this system achieved a segmentation mask precision of 84.8 percent and a recall of 84.5 percent in industry evaluations. Based on this high segmentation accuracy and consistent detection of sharp particles, the system is expected to substantially reduce the frequency of sharp object collisions with the BECS conveyor belt, thereby minimizing mechanical wear and potential safety hazards. By integrating these intelligent capabilities into a compact, cost-effective solution suitable for real-world recycling environments, the proposed system contributes significantly to improving workplace safety and equipment longevity. This project demonstrates how digital transformation and artificial intelligence can play a pivotal role in advancing occupational health and safety in modern industrial production. Full article
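As a rough illustration of the segmentation step described above, the sketch below runs an off-the-shelf YOLO segmentation model via the ultralytics package; the stock yolo11n-seg.pt weights and the frame file name are stand-in assumptions for the authors' custom-trained sharp/rounded-object model.

```python
# Minimal sketch of YOLO segmentation on a single conveyor-belt frame using the
# ultralytics API; weights and file name are illustrative stand-ins.
from ultralytics import YOLO
import cv2

model = YOLO("yolo11n-seg.pt")            # hypothetical stand-in weights
frame = cv2.imread("belt_frame.png")      # hypothetical line-scan frame

results = model(frame)                    # run inference
for r in results:
    if r.masks is not None:
        print(f"{r.masks.data.shape[0]} object masks detected")
```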

14 pages, 2577 KiB  
Article
Dual-Branch Cross-Fusion Normalizing Flow for RGB-D Track Anomaly Detection
by Xiaorong Gao, Pengxu Wen, Jinlong Li and Lin Luo
Sensors 2025, 25(8), 2631; https://doi.org/10.3390/s25082631 - 21 Apr 2025
Viewed by 529
Abstract
With the ease of acquiring RGB-D images from line-scan 3D cameras and the development of computer vision, anomaly detection is now widely applied to railway inspection. As 2D anomaly detection is susceptible to capturing conditions, a combination with depth maps is now being explored in industrial inspection to reduce these interferences. Accordingly, this paper proposes a novel approach for RGB-D anomaly detection called Dual-Branch Cross-Fusion Normalizing Flow (DCNF). In this work, we aim to exploit the fusion strategy for a dual-branch normalizing flow with multi-modal inputs to be applied in the field of track detection. On the one hand, we introduce the mutual perception module to acquire cross-complementary prior knowledge in the early stage. On the other hand, we exploit the effectiveness of the fusion flow to fuse the dual branches of RGB-D inputs. We experiment on the real-world Track Anomaly (TA) dataset. The performance evaluation of DCNF on the TA dataset achieves an impressive AUROC score of 98.49%, which is 3.74% higher than the second-best method. Full article
(This article belongs to the Section Sensing and Imaging)
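The AUROC figure quoted above is a standard ranking metric; a minimal sketch of how such a score is computed, assuming anomaly scores come from a normalizing flow's negative log-likelihood (the arrays here are synthetic placeholders, not the TA dataset):

```python
# Generic AUROC evaluation sketch: higher score = more anomalous.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical per-image scores for normal track images vs. anomalous ones.
scores_normal = rng.normal(loc=0.0, scale=1.0, size=500)
scores_anomal = rng.normal(loc=3.0, scale=1.0, size=50)

y_true = np.concatenate([np.zeros(500), np.ones(50)])
y_score = np.concatenate([scores_normal, scores_anomal])

print(f"AUROC = {roc_auc_score(y_true, y_score):.4f}")
```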

10 pages, 1419 KiB  
Article
Time-Domain Full-Field Confocal Optical Coherence Tomography with Digital Scanning
by Danielis Rutkauskas, Karolis Adomavičius and Egidijus Auksorius
Photonics 2025, 12(4), 304; https://doi.org/10.3390/photonics12040304 - 26 Mar 2025
Viewed by 468
Abstract
Full-field optical coherence tomography (FF-OCT) is a fast, en face interferometric technique that allows imaging inside a scattering tissue with high spatial resolution. However, camera-based detection, which lacks confocal gating, results in a suboptimal signal-to-noise ratio (SNR). To address this, we implemented a time-domain FF-OCT system that uses a digital micromirror device (DMD). The DMD allows us to scan multiple illumination spots across the sample and simultaneously realize confocal detection with multiple pinholes. Confocal imaging can also be demonstrated with line illumination and detection. Using a USAF target mounted behind a scattering layer, we demonstrate an order-of-magnitude improvement in SNR. Full article

18 pages, 5569 KiB  
Article
Supervised Hyperspectral Band Selection Using Texture Features for Classification of Citrus Leaf Diseases with YOLOv8
by Quentin Frederick, Thomas Burks, Jonathan Adam Watson, Pappu Kumar Yadav, Jianwei Qin, Moon Kim and Megan M. Dewdney
Sensors 2025, 25(4), 1034; https://doi.org/10.3390/s25041034 - 9 Feb 2025
Cited by 2 | Viewed by 1050
Abstract
Citrus greening disease (HLB) and citrus canker cause financial losses in Florida citrus groves via smaller fruits, blemishes, premature fruit drop, and/or eventual tree death. Management of these two diseases requires early detection and distinction from other leaf defects and infections. Automated leaf inspection with hyperspectral imagery (HSI) is tested in this study. Citrus leaves bearing visible symptoms of HLB, canker, scab, melanose, greasy spot, zinc deficiency, and a control class were collected, and images were taken with a line-scan HSI camera. YOLOv8 was trained to classify multispectral images from this image dataset, created by selecting bands with a novel variance-based method. The ‘small’ network using an intensity-based band combination yielded an overall weighted F1 score of 0.8959, classifying HLB and canker with F1 scores of 0.788 and 0.941, respectively. The network size appeared to exert greater influence on performance than the HSI bands selected. These findings suggest that YOLOv8 relies more heavily on intensity differences than on the texture properties of citrus leaves and is less sensitive to the choice of wavelengths than traditional machine vision classifiers. Full article
(This article belongs to the Section Intelligent Sensors)
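As a rough illustration of variance-driven band selection (a generic top-k ranking, not the paper's exact novel method), the sketch below picks the most variable bands from a hypothetical hyperspectral cube:

```python
# Generic sketch: rank hyperspectral bands by per-band variance and keep the top k.
# Cube shape and k are illustrative assumptions.
import numpy as np

cube = np.random.rand(256, 256, 200)    # hypothetical HSI cube: H x W x bands
k = 3                                    # number of bands to keep

band_var = cube.reshape(-1, cube.shape[-1]).var(axis=0)   # variance per band
selected = np.argsort(band_var)[::-1][:k]                  # top-k most variable
multispectral = cube[:, :, np.sort(selected)]              # reduced image for a classifier
print("selected band indices:", np.sort(selected))
```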

22 pages, 4649 KiB  
Article
Deep Learning Model Compression and Hardware Acceleration for High-Performance Foreign Material Detection on Poultry Meat Using NIR Hyperspectral Imaging
by Zirak Khan, Seung-Chul Yoon and Suchendra M. Bhandarkar
Sensors 2025, 25(3), 970; https://doi.org/10.3390/s25030970 - 6 Feb 2025
Viewed by 1269
Abstract
Ensuring the safety and quality of poultry products requires efficient detection and removal of foreign materials during processing. Hyperspectral imaging (HSI) offers a non-invasive mechanism to capture detailed spatial and spectral information, enabling the discrimination of different types of contaminants from poultry muscle and non-muscle external tissues. When integrated with advanced deep learning (DL) models, HSI systems can achieve high accuracy in detecting foreign materials. However, the high dimensionality of HSI data, the computational complexity of DL models, and the high-paced nature of poultry processing environments pose challenges for real-time implementation in industrial settings, where the speed of imaging and decision-making is critical. In this study, we address these challenges by optimizing DL inference for HSI-based foreign material detection through a combination of post-training quantization and hardware acceleration techniques. We leveraged hardware acceleration utilizing the TensorRT module for NVIDIA GPU to enhance inference speed. Additionally, we applied half-precision (called FP16) post-training quantization to reduce the precision of model parameters, decreasing memory usage and computational requirements without any loss in model accuracy. We conducted simulations using two hypothetical hyperspectral line-scan cameras to evaluate the feasibility of real-time detection in industrial conditions. The simulation results demonstrated that our optimized models could achieve inference times compatible with the line speeds of poultry processing lines between 140 and 250 birds per minute, indicating the potential for real-time deployment. Specifically, the proposed inference method, optimized through hardware acceleration and model compression, achieved reductions in inference time of up to five times compared to unoptimized, traditional GPU-based inference. In addition, it resulted in a 50% decrease in model size while maintaining high detection accuracy that was also comparable to the original model. Our findings suggest that the integration of post-training quantization and hardware acceleration is an effective strategy for overcoming the computational bottlenecks associated with DL inference on HSI data. Full article
(This article belongs to the Special Issue Spectral Detection Technology, Sensors and Instruments, 2nd Edition)
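The real-time claim above can be sanity-checked with simple arithmetic: at N birds per minute, the per-bird processing budget is 60/N seconds, and the optimized inference time must fit inside it. A small sketch follows; the measured inference time is a hypothetical placeholder, not a figure from the paper.

```python
# Per-bird time budget vs. a hypothetical optimized (FP16 + TensorRT) inference time.
for birds_per_min in (140, 250):
    budget_s = 60.0 / birds_per_min
    measured_s = 0.12     # hypothetical per-bird inference time, for illustration only
    ok = "meets" if measured_s <= budget_s else "misses"
    print(f"{birds_per_min} birds/min -> budget {budget_s:.3f} s per bird, {ok} budget")
```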

23 pages, 44092 KiB  
Article
A Global-Scale Overlapping Pixels Calculation Method for Whisk-Broom Payloads with Multi-Module-Staggered Long-Linear-Array Detectors
by Xinwang Du, Chao Wu, Quan Liang, Lixing Zhao, Yixuan Xu, Junhong Guo, Xiaoyan Li and Fansheng Chen
Remote Sens. 2025, 17(3), 433; https://doi.org/10.3390/rs17030433 - 27 Jan 2025
Viewed by 1054
Abstract
A multi-module staggered (MMS) long-linear-array (LLA) detector is presently recognized as an effective and widely adopted means of improving the field of view (FOV) of in-orbit optical line-array cameras. In particular, for low-orbit whisk-broom payloads, the MMS LLA detector combined with a one-dimensional scanning mirror is capable of achieving both large-swath and high-resolution imaging. However, because of the complexity of the instantaneous relative motion model (IRMM) of the whisk-broom imaging mechanism, it is difficult to determine and verify the actual numbers of overlapping pixels of adjacent detector sub-module images and of consecutive images in the same and opposite scanning directions, which are crucial to the instrument design pre-launch as well as to the in-orbit geometric quantitative processing and application post-launch. Therefore, to address the problems above, this paper proposes a global-scale overlapping pixels calculation method based on the IRMM and the rigorous geometric positioning model (RGPM) of whisk-broom payloads with an MMS LLA detector. First, in accordance with the imaging theory and the specific optical–mechanical structure, the RGPM of the whisk-broom payload is constructed and introduced in detail. Then, we qualitatively analyze the variation tendency of the overlapping pixels of adjacent detector sub-module images with the IRMM of the imaging targets, and establish the associated overlapping pixels calculation model based on the RGPM. Subsequently, the global-scale overlapping pixels calculation models for consecutive images in the same and opposite scanning directions of the whisk-broom payload are also built. Finally, the corresponding verification method is presented in detail. The proposed method is validated using both simulation data and in-orbit payload data from the Thermal Infrared Spectrometer (TIS) of the Sustainable Development Goals Satellite-1 (SDGSAT-1), launched on 5 November 2021, demonstrating its effectiveness and accuracy with overlapping pixel errors of less than 0.3 pixels between sub-modules and less than 0.5 pixels between consecutive scanning images. This method is also applicable to other scanning cameras with an MMS LLA detector because of the similarity of the imaging mechanism. Full article
(This article belongs to the Special Issue Optical Remote Sensing Payloads, from Design to Flight Test)

19 pages, 5153 KiB  
Article
Aluminum Reservoir Welding Surface Defect Detection Method Based on Three-Dimensional Vision
by Hanjie Huang, Bin Zhou, Songxiao Cao, Tao Song, Zhipeng Xu and Qing Jiang
Sensors 2025, 25(3), 664; https://doi.org/10.3390/s25030664 - 23 Jan 2025
Cited by 1 | Viewed by 1086
Abstract
Welding is an important process in the production of aluminum reservoirs for motor vehicles. The welding quality affects product performance. However, rapid and accurate detection of weld surface defects remains a huge challenge in the field of industrial automation. To address this problem, we propose a 3D vision-based aluminum reservoir welding surface defect detection method. First, a scanning system based on a laser line-scan camera was constructed to acquire the point cloud data of weld seams on the aluminum reservoir surface. Next, a planar correction algorithm was used to adjust the slope of the contour line in order to minimize the effect of systematic disturbances when acquiring weld data. Then, the surface features of the weld, including curvature and normal vector direction, were extracted to identify holes, craters, and undercut defects. For better extraction of defects, a double-aligned template matching method was used to ensure comprehensive extraction and measurement of defect areas. Finally, the detected defects were categorized according to their morphology. Experimental results show that the proposed method using 3D laser scanning data can detect and classify typical welding defects with an accuracy of more than 97.1%. Furthermore, different types of defects, including holes, undercuts, and craters, can also be accurately detected with a precision of 98.9%. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
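As one ingredient of the curvature/normal-based feature extraction described above, the sketch below estimates point-cloud normals with Open3D and flags points whose normals deviate strongly from the average direction; the file name, neighborhood radius, and threshold are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch: estimate surface normals of a scanned weld point cloud and use a
# crude deviation-from-mean-normal cue; not the paper's full defect pipeline.
import open3d as o3d
import numpy as np

pcd = o3d.io.read_point_cloud("weld_seam.ply")        # hypothetical scan export
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

normals = np.asarray(pcd.normals)
mean_n = normals.mean(axis=0)
mean_n /= np.linalg.norm(mean_n)
deviation = np.degrees(np.arccos(np.clip(normals @ mean_n, -1.0, 1.0)))
print(f"{(deviation > 30).sum()} points deviate more than 30 degrees from the mean normal")
```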

19 pages, 8495 KiB  
Article
Design and Development of a Precision Defect Detection System Based on a Line Scan Camera Using Deep Learning
by Byungcheol Kim, Moonsun Shin and Seonmin Hwang
Appl. Sci. 2024, 14(24), 12054; https://doi.org/10.3390/app142412054 - 23 Dec 2024
Cited by 1 | Viewed by 3322
Abstract
The manufacturing industry environment is rapidly evolving into smart manufacturing. It prioritizes digital innovations such as AI and digital transformation (DX) to increase productivity and create value through automation and intelligence. Vision systems for defect detection and quality control are being implemented across industries, including electronics, semiconductors, printing, metal, food, and packaging. Small and medium-sized manufacturing companies are increasingly demanding smart factory solutions for quality control to create added value and enhance competitiveness. In this paper, we design and develop a high-speed defect detection system based on a line-scan camera using deep learning. The camera is positioned for side-view imaging, allowing for detailed inspection of the component mounting and soldering quality on PCBs. To detect defects on PCBs, the system gathers extensive images of both flawless and defective products to train a deep learning model. An AI engine generated through this deep learning process is then applied to conduct defect inspections. The developed high-speed defect detection system achieved an accuracy of 99.5% in the experiment. This will be highly beneficial for precision quality management in small- and medium-sized enterprises. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)

29 pages, 50680 KiB  
Article
Relative Radiometric Correction Method Based on Temperature Normalization for Jilin1-KF02
by Shuai Huang, Song Yang, Yang Bai, Yingshan Sun, Bo Zou, Hongyu Wu, Lei Zhang, Jiangpeng Li and Xiaojie Yang
Remote Sens. 2024, 16(21), 4096; https://doi.org/10.3390/rs16214096 - 2 Nov 2024
Viewed by 1322
Abstract
The optical remote sensors carried by the Jilin-1 KF02 series satellites have an imaging resolution better than 0.5 m and a swath width of 150 km. There are radiometric problems, such as stripe noise, vignetting, and inter-slice chromatic aberration, in their raw images. In this paper, a relative radiometric correction method based on temperature normalization is proposed for the response characteristics of the sensors and the structural characteristics of the optical splicing of the Jilin-1 KF02 series satellite cameras. Firstly, a model of the temperature effect on sensor output is established to correct the variation of the sensor response output digital number (DN) caused by temperature variation during the imaging process, and the image is normalized to a uniform temperature reference. Then, the horizontal stripe noise of the image is eliminated by using the sensor scan line and dark pixel information, and the vertical stripe noise of the image is eliminated by using on-orbit histogram statistics. Finally, superposition compensation is used to correct the vignetting area at the edge of the image caused by the lack of energy information received by the sensor, so as to ensure the consistency of the image in color and image quality. The proposed method is verified on Jilin-1 KF02A on-orbit images. Experimental results show that the image response is uniform, the color is consistent, the average Streak Metrics (SM) is better than 0.1%, the Root-Mean-Square Deviation of the Mean Line (RA) and Generalized Noise (GN) are better than 2%, the Relative Average Spectral Error (RASE) and the Relative Dimensionless Global Error in Synthesis (ERGAS) are greatly improved, being better than 5% and 13, respectively, and the relative radiometric quality is clearly improved after relative radiometric correction. Full article
(This article belongs to the Special Issue Optical Remote Sensing Payloads, from Design to Flight Test)
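As a rough stand-in for the vertical-stripe removal step described above (the paper uses on-orbit histogram statistics; this sketch uses simple column-wise moment matching on synthetic data):

```python
# Generic destriping sketch: normalize each image column to the global mean/std to
# suppress per-column gain differences (vertical stripes). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(1000, 50, size=(512, 512))
img *= rng.normal(1.0, 0.03, size=(1, 512))      # synthetic per-column gain (stripes)

col_mean = img.mean(axis=0)
col_std = img.std(axis=0)
target_mean, target_std = img.mean(), img.std()

corrected = (img - col_mean) / col_std * target_std + target_mean
print("column-mean spread before/after:",
      round(col_mean.std(), 2), round(corrected.mean(axis=0).std(), 2))
```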

14 pages, 3987 KiB  
Article
Research on an Intelligent Seed-Sorting Method and Sorter Based on Machine Vision and Lightweight YOLOv5n
by Yubo Feng, Xiaoshun Zhao, Ruitao Tian, Chenyang Liang, Jingyan Liu and Xiaofei Fan
Agronomy 2024, 14(9), 1953; https://doi.org/10.3390/agronomy14091953 - 29 Aug 2024
Cited by 3 | Viewed by 2091
Abstract
To address the current issues of low intelligence and accuracy in seed-sorting devices, an intelligent seed sorter was developed in this study using machine-vision technology and the lightweight YOLOv5n. The machine consisted of a transmission system, feeding system, image acquisition system, and seed screening system. A lightweight YOLOv5n model, FS-YOLOv5n, was trained using 4756 images, incorporating FasterNet, Local Convolution (PConv), and a squeeze-and-excitation (SE) attention mechanism to improve feature extraction efficiency and detection accuracy and to reduce redundancy. Taking ‘Zhengdan 958’ corn seeds as the research object, a quality identification and seed sorting test was conducted on six test groups (each consisting of 1000 seeds) using the FS-YOLOv5n model. Following the lightweight improvements, the model showed an 81% reduction in parameters and floating-point operations compared to baseline models. The intelligent seed sorter achieved an average sorting rate of 90.76%, effectively satisfying the seed-sorting requirements. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

20 pages, 4918 KiB  
Article
Influence of Extrusion Parameters on the Mechanical Properties of Slow Crystallizing Carbon Fiber-Reinforced PAEK in Large Format Additive Manufacturing
by Patrick Consul, Matthias Feuchtgruber, Bernhard Bauer and Klaus Drechsler
Polymers 2024, 16(16), 2364; https://doi.org/10.3390/polym16162364 - 21 Aug 2024
Cited by 3 | Viewed by 1469
Abstract
Additive Manufacturing (AM) enables the automated production of complex geometries with low waste and lead time, notably through Material Extrusion (MEX). This study explores Large Format Additive Manufacturing (LFAM) with carbon fiber-reinforced polyaryletherketones (PAEK), particularly a slow crystallizing grade by Victrex. The research investigates how extrusion parameters affect the mechanical properties of the printed parts. Key parameters include line width, layer height, layer time, and extrusion temperature, analyzed through a series of controlled experiments. Thermal history during printing, including cooling rates and substrate temperatures, was monitored using thermocouples and infrared cameras. The crystallization behavior of PAEK was replicated in a Differential Scanning Calorimetry (DSC) setup. Mechanical properties were evaluated using three-point bending tests to analyze the impact of thermal conditions at the deposition interface on interlayer bonding and overall part strength. The study suggests two aggregated metrics, the enthalpy deposition rate and the shear rate under the nozzle, which should be maximized to enhance mechanical performance. The findings show that the common practice of setting fixed layer times falls short of ensuring repeatable part quality. Full article
(This article belongs to the Topic Advanced Composites Manufacturing and Plastics Processing)

14 pages, 7097 KiB  
Article
Residual Mulching Film Detection in Seed Cotton Using Line Laser Imaging
by Sanhui Wang, Mengyun Zhang, Zhiyu Wen, Zhenxuan Zhao and Ruoyu Zhang
Agronomy 2024, 14(7), 1481; https://doi.org/10.3390/agronomy14071481 - 9 Jul 2024
Cited by 2 | Viewed by 1133
Abstract
Due to the widespread use of mulching film in cotton planting in China, residual mulching film mixed with machine-picked cotton poses a significant hazard to cotton processing. Detecting residual mulching film in seed cotton has become particularly challenging due to the film’s semi-transparent nature. This study constructed an imaging system combining an area array camera and a line scan camera. A detection scheme was proposed that utilized features from both image types. To simulate online detection, samples were placed on a conveyor belt moving at 0.2 m/s, with line lasers at a wavelength of 650 nm as light sources. For area array images, feature extraction was performed to establish a partial least squares discriminant analysis (PLS-DA) model. For line scan images, texture feature analysis was used to build a support vector machine (SVM) classification model. Subsequently, image features from both cameras were merged to construct an SVM model. Experimental results indicated that detection methods based on area array and line scan images had accuracies of 75% and 79%, respectively, while the feature fusion method achieved an accuracy of 83%. This study demonstrated that the proposed method could effectively improve the accuracy of residual mulching film detection in seed cotton, providing a basis for reducing residual mulching film content during processing. Full article
(This article belongs to the Section Precision and Digital Agriculture)
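A minimal sketch of the feature-level fusion idea described above: concatenate features from the two camera types and train an SVM classifier; the feature arrays and labels are synthetic placeholders, not the study's data.

```python
# Generic feature-fusion + SVM sketch for two-sensor classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
area_feats = rng.normal(size=(200, 12))     # hypothetical area-array image features
line_feats = rng.normal(size=(200, 8))      # hypothetical line-scan texture features
labels = rng.integers(0, 2, size=200)       # 1 = residual film present, 0 = clean cotton

X = np.hstack([area_feats, line_feats])     # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```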

13 pages, 7339 KiB  
Article
Improving the Two-Color Temperature Sensing Using Machine Learning Approach: GdVO₄:Sm³⁺ Prepared by Solution Combustion Synthesis (SCS)
by Jovana Z. Jelic, Aleksa Dencevski, Mihailo D. Rabasovic, Janez Krizan, Svetlana Savic-Sevic, Marko G. Nikolic, Myriam H. Aguirre, Dragutin Sevic and Maja S. Rabasovic
Photonics 2024, 11(7), 642; https://doi.org/10.3390/photonics11070642 - 6 Jul 2024
Cited by 2 | Viewed by 1287
Abstract
The gadolinium vanadate doped with samarium (GdVO₄:Sm³⁺) nanopowder was prepared by the solution combustion synthesis (SCS) method. After synthesis, in order to achieve full crystallinity, the material was annealed in an air atmosphere at 900 °C. Phase identification in the post-annealed powder samples was performed by X-ray diffraction, and morphology was investigated by high-resolution scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Photoluminescence characterization of the emission spectrum and time-resolved analysis was performed using tunable laser optical parametric oscillator excitation and a streak camera. In addition to samarium emission bands, a weak broad luminescence emission band of the VO₄³⁻ host was also observed by the detection system. In our earlier work, we analyzed the possibility of using the host luminescence for two-color temperature sensing, improving the method by introducing the temporal dependence in line intensity ratio measurements. Here, we show that further improvements are possible by using a machine learning approach. To facilitate the initial data assessment, we incorporated Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP) clustering of GdVO₄:Sm³⁺ spectra at various temperatures. Good predictions of temperature were obtained using deep neural networks. Performance of the deep learning network was enhanced by a data augmentation technique. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: Photonics Sensors)
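As an illustration of the exploratory PCA step mentioned above, the sketch below projects a set of synthetic, placeholder luminescence spectra onto two principal components; the spectrum length and temperature grid are assumptions, not the paper's data.

```python
# Generic PCA sketch: embed temperature-dependent spectra into two components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
temperatures = np.repeat(np.arange(300, 501, 50), 20)       # hypothetical labels (K)
# Hypothetical spectra: one 1024-point emission spectrum per measurement.
spectra = rng.normal(size=(temperatures.size, 1024)) + temperatures[:, None] / 500.0

pca = PCA(n_components=2)
embedded = pca.fit_transform(spectra)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```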
