Search Results (40)

Search Parameters:
Keywords = near color background

38 pages, 12524 KiB  
Article
Therapeutic Efficacy of Plant-Derived Exosomes for Advanced Scar Treatment: Quantitative Analysis Using Standardized Assessment Scales
by Lidia Majewska, Agnieszka Kondraciuk, Iwona Paciepnik, Agnieszka Budzyńska and Karolina Dorosz
Pharmaceuticals 2025, 18(8), 1103; https://doi.org/10.3390/ph18081103 - 25 Jul 2025
Viewed by 583
Abstract
Background: Wound healing and scar management remain significant challenges in dermatology and aesthetic medicine. Recent advances in regenerative medicine have introduced plant-derived exosome-like nanoparticles (PDENs) as potential therapeutic agents due to their bioactive properties. This study examines the clinical application of rose stem cell exosomes (RSCEs) in combination with established treatments for managing different types of scars. Methods: A case series of four patients with different scar etiologies (dog bite, hot oil burn, forehead trauma, and facial laser treatment complications) was treated with RSCEs in combination with microneedling (Dermapen 4.0, 0.2–0.4 mm depth) and/or thulium laser therapy (Lutronic Ultra MD, 8–14 J), or as a standalone topical treatment. All cases underwent sequential treatments over periods ranging from two to four months, with comprehensive photographic documentation of the progression. The efficacy was assessed through clinical photography and objective evaluation using the modified Vancouver Scar Scale (mVSS) and the Patient and Observer Scar Assessment Scale (POSAS), along with assessment of scar appearance, texture, and coloration. Results: All cases demonstrated progressive improvement throughout the treatment course. The dog bite scar showed significant objective improvement, with a 71% reduction in mVSS score (from 7/13 to 2/13) and a 61% improvement in POSAS scores after four combined treatments. The forehead trauma case exhibited similar outcomes, with a 71% improvement in mVSS score and a 55–57% improvement in POSAS scores. The hot oil burn case displayed the most dramatic improvement, with a 78% reduction in mVSS score and over 70% improvement in POSAS scores, resulting in near-complete resolution without visible scarring. The facial laser complication case showed a 75% reduction in mVSS score and ~70% improvement in POSAS scores using only topical exosome application without device-based treatments. Clinical improvements across all cases included reduction in elevation, improved texture, decreased erythema, and better integration with surrounding skin. No adverse effects were reported in any of the cases. Conclusions: This preliminary case series suggests that plant-derived exosome-like nanoparticles, specifically rose stem cell exosomes (RSCEs), may enhance scar treatment outcomes when combined with microneedling and laser therapy, or even as a standalone topical treatment. The documented objective improvements, measured by standardized scar assessment scales, along with clinical enhancements in scar appearance, texture, and coloration across different scar etiologies (dog bite, burn, traumatic injury, and iatrogenic laser damage), suggest that this approach may offer a valuable addition to the current armamentarium of scar management strategies. Notably, the successful treatment of laser-induced complications using only topical exosome application demonstrates the versatility and potential of this therapeutic modality. Full article
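The percentage reductions quoted in the abstract follow directly from the raw scale scores; for example, the dog-bite case's mVSS drop from 7/13 to 2/13. A minimal check (the helper name is ours; the 7 and 2 are the reported scores):

```python
# Percent reduction between two scar-scale scores; the 7 -> 2 mVSS values
# come from the abstract above.
def percent_reduction(before: float, after: float) -> float:
    return 100.0 * (before - after) / before

print(round(percent_reduction(7, 2)))  # 71, matching the reported 71% reduction
```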

33 pages, 14577 KiB  
Article
A Color-Based Multispectral Imaging Approach for a Human Detection Camera
by Shuji Ono
J. Imaging 2025, 11(4), 93; https://doi.org/10.3390/jimaging11040093 - 21 Mar 2025
Viewed by 1932
Abstract
In this study, we propose a color-based multispectral approach using four selected wavelengths (453, 556, 668, and 708 nm) from the visible to near-infrared range to separate clothing from the background. Our goal is to develop a human detection camera that supports real-time processing, particularly under daytime conditions and for common fabrics. While conventional deep learning methods can detect humans accurately, they often require large computational resources and struggle with partially occluded objects. In contrast, we treat clothing detection as a proxy for human detection and construct a lightweight machine learning model (multi-layer perceptron) based on these four wavelengths. Without relying on full spectral data, this method achieves an accuracy of 0.95, precision of 0.97, recall of 0.93, and an F1-score of 0.95. Because our color-driven detection relies on pixel-wise spectral reflectance rather than spatial patterns, it remains computationally efficient. A simple four-band camera configuration could thus facilitate real-time human detection. Potential applications include pedestrian detection in autonomous driving, security surveillance, and disaster victim searches. Full article
(This article belongs to the Special Issue Color in Image Processing and Computer Vision)
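The pixel-wise, four-band classification idea above can be sketched with a tiny multi-layer perceptron. Everything in this sketch is an illustrative assumption, not the paper's setup: the spectral means, noise level, network width, and training schedule are invented, and only the choice of four input bands mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pixel-wise reflectance at the paper's four bands
# (453, 556, 668, 708 nm). Band means are made up: "clothing" pixels
# get a different spectral shape than "background" pixels.
n = 2000
background = rng.normal([0.30, 0.35, 0.40, 0.45], 0.05, size=(n, 4))
clothing = rng.normal([0.45, 0.40, 0.30, 0.25], 0.05, size=(n, 4))
X = np.vstack([background, clothing])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Minimal one-hidden-layer MLP trained with full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)       # ReLU hidden layer
    z = np.clip(h @ W2 + b2, -30, 30)      # clip logits to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))           # sigmoid output
    return h, p.ravel()

lr = 0.5
for _ in range(300):
    h, p = forward(X)
    g = (p - y)[:, None] / len(y)          # grad of mean log-loss w.r.t. logits
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (h > 0)              # backprop through the ReLU
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

accuracy = ((forward(X)[1] > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated synthetic spectra like these, the tiny network separates the two classes almost perfectly, which is the point the abstract makes: per-pixel spectra alone, with no spatial context, can carry enough signal for a lightweight model.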

16 pages, 9116 KiB  
Article
Cross-Modal Feature Fusion for Field Weed Mapping Using RGB and Near-Infrared Imagery
by Xijian Fan, Chunlei Ge, Xubing Yang and Weice Wang
Agriculture 2024, 14(12), 2331; https://doi.org/10.3390/agriculture14122331 - 19 Dec 2024
Cited by 2 | Viewed by 1113
Abstract
The accurate mapping of weeds in agricultural fields is essential for effective weed control and enhanced crop productivity. Moving beyond the limitations of RGB imagery alone, this study presents a cross-modal feature fusion network (CMFNet) designed for precise weed mapping by integrating RGB and near-infrared (NIR) imagery. CMFNet first applies color space enhancement and adaptive histogram equalization to improve the image brightness and contrast in both RGB and NIR images. Building on a Transformer-based segmentation framework, a cross-modal multi-scale feature enhancement module is then introduced, featuring spatial and channel feature interaction to automatically capture complementary information across the two modalities. The enhanced features are further fused and refined by integrating an attention mechanism, which reduces background interference and enhances the segmentation accuracy. Extensive experiments conducted on two public datasets, the Sugar Beets 2016 and Sunflower datasets, demonstrate that CMFNet significantly outperforms CNN-based segmentation models in the task of weed and crop segmentation. The model achieved Intersection over Union (IoU) scores of 90.86% and 90.77%, along with a Mean Accuracy (mAcc) of 93.8% and 94.35%, respectively. Ablation studies further validate that the proposed cross-modal fusion method provides substantial improvements over basic feature fusion methods, effectively localizing weed and crop regions across diverse field conditions. These findings underscore its potential as a robust solution for precise and adaptive weed mapping in complex agricultural landscapes. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
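The channel-interaction part of the fusion can be illustrated with a toy gate: each modality's feature map is reweighted by a sigmoid of its global-average-pooled channel descriptor, then the two maps are summed. This is a generic stand-in for the abstract's "channel feature interaction", not CMFNet's actual module; shapes and the gating rule are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_attention_fuse(feat_rgb, feat_nir):
    """Fuse two (C, H, W) feature maps with a simple per-channel gate.

    Toy stand-in for cross-modal channel interaction: each modality is
    reweighted by a sigmoid gate from its global-average-pooled channel
    descriptor, then the two reweighted maps are summed.
    """
    def gate(feat):
        desc = feat.mean(axis=(1, 2))        # global average pool -> (C,)
        return 1.0 / (1.0 + np.exp(-desc))   # sigmoid gate per channel
    g_rgb, g_nir = gate(feat_rgb), gate(feat_nir)
    return g_rgb[:, None, None] * feat_rgb + g_nir[:, None, None] * feat_nir

feat_rgb = rng.normal(size=(16, 8, 8))   # pretend RGB-branch features
feat_nir = rng.normal(size=(16, 8, 8))   # pretend NIR-branch features
fused = channel_attention_fuse(feat_rgb, feat_nir)
print(fused.shape)  # (16, 8, 8)
```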

17 pages, 2899 KiB  
Article
Mangrove Extraction Algorithm Based on Orthogonal Matching Filter-Weighted Least Squares
by Yongze Li, Jin Ma, Dongyang Fu, Jiajun Yuan and Dazhao Liu
Sensors 2024, 24(22), 7224; https://doi.org/10.3390/s24227224 - 12 Nov 2024
Viewed by 925
Abstract
High-precision extraction of mangrove areas is a crucial prerequisite for estimating mangrove area as well as for regional planning and ecological protection. However, mangroves typically grow in coastal and near-shore areas with complex water colors, where traditional mangrove extraction algorithms face challenges such as unclear region segmentation and insufficient accuracy. To address this issue, in this paper we propose a new algorithm for mangrove identification and extraction based on Orthogonal Matching Filter–Weighted Least Squares (OMF-WLS) target spectral information. This method first selects GF-6 remote sensing images with less cloud cover, then enhances mangrove feature information through preprocessing and band extension, combining whitened orthogonal subspace projection with the whitened matching filter algorithm. Notably, this paper introduces Weighted Least Squares (WLS) filtering technology. WLS filtering precisely processes high-frequency noise and edge details in images using an adaptive weighting matrix, significantly improving the edge clarity and overall quality of mangrove images. This approach overcomes the bottleneck of traditional methods in effectively extracting edge information against complex water color backgrounds. Finally, Otsu’s method is used for adaptive threshold segmentation of GF-6 remote sensing images to achieve target extraction of mangrove areas. Our experimental results show that OMF-WLS improves extraction accuracy compared to traditional methods, with overall precision increasing from 0.95702 to 0.99366 and the Kappa coefficient rising from 0.88436 to 0.98233. In addition, our proposed method provides significant improvements in other metrics, demonstrating better overall performance. These findings can provide more reliable technical support for the monitoring and protection of mangrove resources. Full article
(This article belongs to the Section Sensing and Imaging)
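The final segmentation step named in the abstract, Otsu's adaptive threshold, is compact enough to sketch. The bimodal toy data below stands in for a matched-filter output over water vs. mangrove pixels; the distributions are invented for illustration.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 (below threshold) weight
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]                           # global mean
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

rng = np.random.default_rng(2)
# Bimodal toy "index image": darker water background vs brighter mangrove.
img = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.7, 0.05, 5000)])
t = otsu_threshold(img)
print(f"Otsu threshold ~ {t:.2f}")  # lands between the two modes
```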

32 pages, 7304 KiB  
Article
Estimation of Fractal Dimension and Segmentation of Body Regions for Deep Learning-Based Gender Recognition
by Dong Chan Lee, Min Su Jeong, Seong In Jeong, Seung Yong Jung and Kang Ryoung Park
Fractal Fract. 2024, 8(10), 551; https://doi.org/10.3390/fractalfract8100551 - 24 Sep 2024
Cited by 3 | Viewed by 1549
Abstract
There are few studies utilizing only IR cameras for long-distance gender recognition, and they have shown low recognition performance due to the lack of color and texture information in IR images with complex backgrounds. Therefore, a rough body segmentation-based gender recognition network (RBSG-Net) is proposed, with enhanced gender recognition performance achieved by emphasizing the silhouette of a person through a body segmentation network. Anthropometric loss for the segmentation network and an adaptive body attention module are also proposed, which effectively integrate the segmentation and classification networks. To enhance the analytic capabilities of the proposed framework, fractal dimension estimation was introduced into the system to gain insights into the complexity and irregularity of the body region, thereby predicting the accuracy of body segmentation. For experiments, near-infrared images from the Sun Yat-sen University multiple modality re-identification version 1 (SYSU-MM01) dataset and thermal images from the Dongguk body-based gender version 2 (DBGender-DB2) database were used. The equal error rates of gender recognition by the proposed model were 4.320% and 8.303% for these two databases, respectively, surpassing state-of-the-art methods. Full article
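Fractal dimension of a binary body mask is commonly estimated by box counting: cover the mask with boxes of decreasing size and fit the slope of log(occupied boxes) against log(1/size). This generic sketch (box sizes and the test mask are our choices, not the paper's) recovers dimension 2 for a filled square:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a square binary mask by box counting."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        view = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(view.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A completely filled region is 2-dimensional, so the estimate should be ~2.
mask = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(mask)
print(round(d, 2))  # 2.0
```

A ragged or fragmentary segmentation mask yields a lower, non-integer estimate, which is the kind of complexity signal the abstract uses to predict segmentation accuracy.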

22 pages, 7252 KiB  
Article
Research on Detection Algorithm of Green Walnut in Complex Environment
by Chenggui Yang, Zhengda Cai, Mingjie Wu, Lijun Yun, Zaiqing Chen and Yuelong Xia
Agriculture 2024, 14(9), 1441; https://doi.org/10.3390/agriculture14091441 - 24 Aug 2024
Cited by 2 | Viewed by 1327
Abstract
The growth environment of green walnuts is complex. In the actual picking and identification process, interference from near-background colors, occlusion by branches and leaves, and excessive model complexity pose higher demands on the performance of walnut detection algorithms. Therefore, a lightweight walnut detection algorithm suitable for complex environments is proposed based on YOLOv5s. First, the backbone network is reconstructed using the lightweight GhostNet network, laying the foundation for a lightweight model architecture. Next, the C3 structure in the feature fusion layer is optimized by proposing a lightweight C3 structure to enhance the model’s focus on important walnut features. Finally, the loss function is improved to address the problems of target loss and gradient adaptability during training. To further reduce model complexity, the improved algorithm undergoes pruning and knowledge distillation operations, and is then deployed and tested on small edge devices. Experimental results show that compared to the original YOLOv5s model, the improved algorithm reduces the number of parameters by 72.9% and the amount of computation by 84.1%, while mAP0.5 increases by 1.1%, precision by 0.7%, and recall by 0.3%, and the FPS reaches 179.6% of the original model’s, meeting the real-time detection needs for walnut recognition and providing a reference for walnut harvesting identification. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
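The pruning step mentioned above is often done by filter magnitude. This generic sketch (not necessarily the paper's exact scheme) ranks conv filters by L1 norm and keeps the largest ones; the tensor shape and keep ratio are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def prune_filters(weights, keep_ratio=0.5):
    """Magnitude-based filter pruning for a conv layer.

    `weights` has shape (out_channels, in_channels, k, k); the filters
    with the largest L1 norms are kept, the rest are dropped.
    """
    norms = np.abs(weights).sum(axis=(1, 2, 3))   # L1 norm per output filter
    keep = max(1, int(len(norms) * keep_ratio))
    idx = np.argsort(norms)[-keep:]               # indices of filters to keep
    return weights[np.sort(idx)]                  # preserve original ordering

conv = rng.normal(size=(32, 16, 3, 3))            # a pretend 3x3 conv layer
pruned = prune_filters(conv, keep_ratio=0.25)
print(pruned.shape)  # (8, 16, 3, 3)
```

In a real network the next layer's input channels must be pruned to match, and a fine-tuning (or distillation) pass recovers the accuracy lost to pruning.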

21 pages, 13401 KiB  
Article
Virtual Restoration of Ancient Mold-Damaged Painting Based on 3D Convolutional Neural Network for Hyperspectral Image
by Sa Wang, Yi Cen, Liang Qu, Guanghua Li, Yao Chen and Lifu Zhang
Remote Sens. 2024, 16(16), 2882; https://doi.org/10.3390/rs16162882 - 7 Aug 2024
Cited by 5 | Viewed by 2702
Abstract
Painted cultural relics hold significant historical value and are crucial in transmitting human culture. However, mold is a common issue for paper or silk-based relics, which not only affects their preservation and longevity but also conceals the texture, patterns, and color information, hindering the transmission of their cultural value and heritage. Currently, the virtual restoration of painting relics primarily involves filling in RGB values based on neighborhood information, which might cause color distortion and other problems. Another approach considers mold as noise and employs maximum noise separation for its removal; however, eliminating the mold components and implementing the inverse transformation often leads to more loss of information. To achieve effective virtual mold removal from ancient paintings, the spectral characteristics of mold were analyzed. Based on the spectral features of mold and the cultural relic restoration philosophy of maintaining originality, a 3D CNN artifact restoration network was proposed. This network is capable of learning features in the near-infrared spectrum (NIR) and spatial dimensions to reconstruct the reflectance of the visible spectrum, achieving virtual mold removal for calligraphic and art relics. Using an ancient painting from the Qing Dynasty as a test subject, the proposed method was compared with the Inpainting, Criminisi, and inverse MNF transformation methods across three regions. Visual analysis, quantitative evaluation (root mean squared error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE)), and a classification application were used to assess the restoration accuracy. The visual results and quantitative analyses demonstrated that the proposed 3D CNN method effectively removes or mitigates mold while restoring the artwork to its authentic color in various backgrounds. Furthermore, the color classification results indicated that the images restored with 3D CNN had the highest classification accuracy, with overall accuracies of 89.51%, 92.24%, and 93.63%, and Kappa coefficients of 0.88, 0.91, and 0.93, respectively. This research provides technological support for the digitalization and restoration of cultural artifacts, thereby contributing to the preservation and transmission of cultural heritage. Full article
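The core premise, that NIR bands can predict visible-band reflectance where mold obscures it, can be illustrated far more simply than with the paper's 3D CNN. The sketch below fits a plain least-squares map from NIR to visible on clean pixels and applies it to "moldy" ones; the band counts, the linearity, and the assumption that NIR sees through the mold are all ours, standing in for what the network learns:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: 4 NIR bands per pixel, with a hidden linear NIR -> RGB mapping.
n_clean = 500
nir = rng.uniform(0.1, 0.9, size=(n_clean, 4))
true_map = rng.normal(size=(4, 3))                       # unknown mapping
vis = nir @ true_map + rng.normal(scale=0.01, size=(n_clean, 3))

# Fit the mapping on clean (mold-free) pixels by ordinary least squares.
M, *_ = np.linalg.lstsq(nir, vis, rcond=None)

# "Restore" visible reflectance of moldy pixels from their NIR bands alone.
nir_moldy = rng.uniform(0.1, 0.9, size=(10, 4))
vis_restored = nir_moldy @ M
err = np.abs(vis_restored - nir_moldy @ true_map).max()
print(err < 0.05)  # reconstruction tracks the underlying mapping closely
```

The 3D CNN replaces this global linear map with a learned nonlinear spatial-spectral one, but the input/output roles are the same.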

17 pages, 32322 KiB  
Article
Automatic Detection of Floating Ulva prolifera Bloom from Optical Satellite Imagery
by Hailong Zhang, Quan Qin, Deyong Sun, Xiaomin Ye, Shengqiang Wang and Zhixin Zong
J. Mar. Sci. Eng. 2024, 12(4), 680; https://doi.org/10.3390/jmse12040680 - 19 Apr 2024
Cited by 3 | Viewed by 1971
Abstract
Annual outbreaks of floating Ulva prolifera blooms in the Yellow Sea have caused serious local environmental and economic problems. Rapid and effective monitoring of Ulva blooms from satellite observations with wide spatial-temporal coverage can greatly enhance disaster response efforts. Various satellite sensors and remote sensing methods have been employed for Ulva detection, yet automatic and rapid Ulva detection remains challenging mainly due to complex observation scenarios present in different satellite images, and even within a single satellite image. Here, a reliable and fully automatic method was proposed for the rapid extraction of Ulva features using the Tasseled-Cap Greenness (TCG) index from satellite top-of-atmosphere reflectance (RTOA) data. Based on the TCG characteristics of Ulva and Ulva-free targets, a local adaptive threshold (LAT) approach was utilized to automatically select a TCG threshold for moving pixel windows. When tested on HY1C/D-Coastal Zone Imager (CZI) images, the proposed method, termed the TCG-LAT method, achieved over 95% Ulva detection accuracy through cross-comparison with the TCG and VBFAH indexes with a visually determined threshold. It exhibited robust performance even against complex water backgrounds and under non-optimal observing conditions with sun glint and cloud cover. The TCG-LAT method was further applied to multiple HY1C/D-CZI images for automatic Ulva bloom monitoring in the Yellow Sea in 2023. Moreover, promising results were obtained by applying the TCG-LAT method to multiple optical satellite sensors, including GF-Wide Field View Camera (GF-WFV), HJ-Charge Coupled Device (HJ-CCD), Sentinel2B-Multispectral Imager (S2B-MSI), and the Geostationary Ocean Color Imager (GOCI-II). The TCG-LAT method is poised for integration into operational systems for disaster monitoring to enable the rapid monitoring of Ulva blooms in nearshore waters, facilitated by the availability of near-real-time satellite images. Full article
(This article belongs to the Special Issue New Advances in Marine Remote Sensing Applications)
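The moving-window threshold idea can be sketched generically: each window of an index image gets its own threshold from local statistics, so a bright bloom patch stands out against varying water backgrounds. The selection rule here (local mean plus a fixed offset) is a simplification of the paper's per-window TCG threshold, and all numbers are synthetic:

```python
import numpy as np

def local_adaptive_threshold(index_img, win=8, offset=0.1):
    """Threshold each non-overlapping window against its own local mean.

    A generic moving-window adaptive threshold illustrating the LAT idea;
    the per-window rule is simplified to 'local mean + offset'.
    """
    h, w = index_img.shape
    out = np.zeros_like(index_img, dtype=bool)
    for i in range(0, h, win):
        for j in range(0, w, win):
            block = index_img[i:i + win, j:j + win]
            out[i:i + win, j:j + win] = block > block.mean() + offset
    return out

rng = np.random.default_rng(5)
img = rng.normal(0.1, 0.02, size=(32, 32))   # noisy "water" greenness index
img[8:12, 8:12] += 0.3                       # a bright 4x4 "Ulva" patch
mask = local_adaptive_threshold(img, win=8, offset=0.1)
print(mask.sum())  # only the 16 bright patch pixels exceed their local threshold
```

A global threshold would need retuning per scene; the local rule is what lets the method run unattended across images with different water backgrounds.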

25 pages, 6206 KiB  
Article
A Novel Multimodal Fusion Framework Based on Point Cloud Registration for Near-Field 3D SAR Perception
by Tianjiao Zeng, Wensi Zhang, Xu Zhan, Xiaowo Xu, Ziyang Liu, Baoyou Wang and Xiaoling Zhang
Remote Sens. 2024, 16(6), 952; https://doi.org/10.3390/rs16060952 - 8 Mar 2024
Cited by 3 | Viewed by 3166
Abstract
This study introduces a pioneering multimodal fusion framework to enhance near-field 3D Synthetic Aperture Radar (SAR) imaging, crucial for applications like radar cross-section measurement and concealed object detection. Traditional near-field 3D SAR imaging struggles with issues like target–background confusion due to clutter and multipath interference, shape distortion from high sidelobes, and lack of color and texture information, all of which impede effective target recognition and scattering diagnosis. The proposed approach presents the first known application of multimodal fusion in near-field 3D SAR imaging, integrating LiDAR and optical camera data to overcome its inherent limitations. The framework comprises data preprocessing, point cloud registration, and data fusion, where registration between multi-sensor data is the core of effective integration. Recognizing the inadequacy of traditional registration methods in handling varying data formats, noise, and resolution differences, particularly between near-field 3D SAR and other sensors, this work introduces a novel three-stage registration process to effectively address these challenges. First, the approach designs a structure–intensity-constrained centroid distance detector, enabling key point extraction that reduces heterogeneity and accelerates the process. Second, a sample consensus initial alignment algorithm with SHOT features and geometric relationship constraints is proposed for enhanced coarse registration. Finally, the fine registration phase employs adaptive thresholding in the iterative closest point algorithm for precise and efficient data alignment. Both visual and quantitative analyses of measured data demonstrate the effectiveness of our method. The experimental results show significant improvements in registration accuracy and efficiency, laying the groundwork for future multimodal fusion advancements in near-field 3D SAR imaging. Full article
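The closed-form alignment step inside each iterative-closest-point iteration (the fine-registration stage mentioned above) is the standard SVD solution for a rigid transform. This is the textbook Kabsch step, not the paper's full three-stage pipeline; the point cloud and transform below are synthetic:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with R @ p + t ~ q for
    corresponding rows of P and Q (the closed-form step inside ICP)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(6)
P = rng.normal(size=(100, 3))                     # source cloud
theta = 0.3                                       # known rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])     # transformed target cloud

R, t = kabsch(P, Q)
residual = np.abs(P @ R.T + t - Q).max()
print(residual < 1e-8)  # exact correspondences -> near-zero residual
```

Full ICP alternates this step with nearest-neighbor correspondence search; the paper's contribution is in making that correspondence search robust across SAR, LiDAR, and camera data.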

13 pages, 4181 KiB  
Article
Near-Perfect Infrared Transmission Based on Metallic Hole and Disk Coupling Array for Mid-Infrared Refractive Index Sensing
by Lingyi Xu, Jianjun Lai, Qinghua Meng, Changhong Chen and Yihua Gao
Chemosensors 2024, 12(1), 3; https://doi.org/10.3390/chemosensors12010003 - 26 Dec 2023
Viewed by 2373
Abstract
Nanostructured color filters, particularly those generated by the extraordinary optical transmission (EOT) resonance of metal–dielectric nanostructures, have been intensively studied over the past few decades. In this work, we propose a hybrid array composed of a hole array and a disk array with the same working period within the 3–14 μm mid-infrared band. Through numerical simulations, near-perfect transmission (more than 99%) and a narrower linewidth at some resonance wavelengths were achieved, which is vital for highly sensitive sensing applications. This superior performance is attributed to the surface plasmon coupling resonance between the hole and disk arrays. High tunability of the near-perfect transmission peak with varying structural parameters, sensitivity to the background refractive index, and angle independence were observed. We expect that this metallic hole and disk coupling array is promising for various applications, such as plasmon biosensors for the high-sensitivity detection of biochemical substances. Full article
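Refractive-index sensing performance of such resonant structures is conventionally quoted as a bulk sensitivity S = Δλ/Δn and a figure of merit FOM = S/FWHM. The resonance shift and linewidth below are hypothetical numbers for the mid-infrared band, not values from the paper:

```python
# Standard plasmonic-sensor metrics; all numbers here are hypothetical.
def sensitivity_nm_per_riu(peak_nm_1, peak_nm_2, n1, n2):
    """Bulk sensitivity: resonance shift per refractive-index unit."""
    return (peak_nm_2 - peak_nm_1) / (n2 - n1)

S = sensitivity_nm_per_riu(5200.0, 5500.0, 1.00, 1.10)  # 300 nm shift per 0.1 RIU
fom = S / 50.0                                          # assume FWHM = 50 nm
print(round(S, 1), round(fom, 1))  # 3000.0 60.0
```

The abstract's emphasis on a narrow linewidth maps directly onto this arithmetic: halving the FWHM doubles the figure of merit at the same sensitivity.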

11 pages, 1042 KiB  
Systematic Review
Fluorescence and Near-Infrared Light for Detection of Secondary Caries: A Systematic Review
by Dimitrios Spagopoulos, Stavroula Michou, Sotiria Gizani, Eftychia Pappa and Christos Rahiotis
Dent. J. 2023, 11(12), 271; https://doi.org/10.3390/dj11120271 - 28 Nov 2023
Cited by 1 | Viewed by 3130
Abstract
Background: Early detection of secondary caries near dental restorations is essential to prevent further complications. This systematic review seeks to evaluate the sensitivity of fluorescence and near-infrared (NIR) imaging techniques for detecting secondary caries and to provide insight into their clinical utility. Methods: A comprehensive search strategy was used to select studies from seven databases, emphasizing diagnostic accuracy studies of secondary caries detection using fluorescence and NIR imaging techniques. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) instrument assessed bias risk and practicality. Two evaluators performed data extraction, screening, and quality assessment independently. Results: From 3110 initial records, nine studies were selected for full-text analysis. Wide variations in sensitivity (SE) and specificity (SP) values were reported across the studies, and the findings highlighted the importance of method selection based on clinical context. This systematic review underlines the potential of fluorescence and NIR imaging to detect secondary caries. However, results from different studies vary, indicating the need to consider additional variables such as restoration materials. Conclusions: Although these technologies exhibit potential for detecting caries, our research underscores the complexity of identifying secondary caries lesions. Continued progress in dental diagnostics is needed to promptly identify secondary caries lesions, particularly those adjacent to tooth-colored restorations. Full article
(This article belongs to the Special Issue Feature Review Papers in Dentistry)
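The SE and SP values compared across the reviewed studies come from standard confusion-matrix arithmetic; the counts below are hypothetical, just to show the definitions:

```python
# Sensitivity (SE) and specificity (SP) from diagnostic test counts.
# tp/fn/tn/fp values here are made up for illustration.
def se_sp(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among healthy
    return sensitivity, specificity

se, sp = se_sp(tp=45, fn=5, tn=90, fp=10)
print(se, sp)  # 0.9 0.9
```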

16 pages, 17129 KiB  
Article
Cucumber Picking Recognition in Near-Color Background Based on Improved YOLOv5
by Liyang Su, Haixia Sun, Shujuan Zhang, Xinyuan Lu, Runrun Wang, Linjie Wang and Ning Wang
Agronomy 2023, 13(8), 2062; https://doi.org/10.3390/agronomy13082062 - 4 Aug 2023
Cited by 5 | Viewed by 2136
Abstract
Rapid and precise detection of cucumbers is a key element in enhancing the capability of intelligent harvesting robots. Problems such as near-color background interference, branch and leaf occlusion of fruits, and target scale diversity in greenhouse environments pose higher requirements for cucumber target detection algorithms. Therefore, a lightweight YOLOv5s-Super model was proposed based on the YOLOv5s model. First, in this study, the bidirectional feature pyramid network (BiFPN) and C3CA module were added to the YOLOv5s-Super model with the goal of capturing long-distance dependencies of cucumber shoulder features and dynamically fusing multi-scale features in the near-color background. Second, the Ghost module was added to the YOLOv5s-Super model to speed up the inference time and floating-point computation speed of the model. Finally, this study visualized different feature fusion methods for the BiFPN module and independently designed a C3SimAM module to compare parametric and non-parametric attention mechanisms. The results showed that the YOLOv5s-Super model achieved a mAP of 87.5%, which was 4.2% higher than YOLOv7-tiny and 1.9% higher than the YOLOv8s model. The improved model could more accurately and robustly complete the detection of multi-scale features in complex near-color backgrounds while meeting the requirement of being lightweight. These results could provide technical support for the implementation of intelligent cucumber picking. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
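The multi-scale fusion the BiFPN performs can be illustrated by its characteristic "fast normalized fusion": a weighted sum of feature maps with non-negative weights normalized to sum to one. The weights and map sizes below are illustrative; in the real network the weights are learned per fusion node:

```python
import numpy as np

def fast_normalized_fusion(feats, weights, eps=1e-4):
    """BiFPN-style fusion: ReLU the weights, normalize them to sum to one,
    then take the weighted sum of same-shaped feature maps."""
    w = np.maximum(weights, 0.0)            # keep weights non-negative
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, feats))

rng = np.random.default_rng(7)
f1, f2, f3 = (rng.normal(size=(8, 8)) for _ in range(3))  # resized feature maps
fused = fast_normalized_fusion([f1, f2, f3], np.array([1.0, 0.5, 2.0]))
print(fused.shape)  # (8, 8)
```

Compared with plain addition or concatenation, the normalized weights let the network express how much each scale should contribute at each fusion node, which is the property the abstract's visualization compares.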

13 pages, 3326 KiB  
Article
Thin Luminous Tracks of Particles Released from Electrodes with a Small Radius of Curvature in Pulsed Nanosecond Discharges in Air and Argon
by Victor F. Tarasenko, Dmitry V. Beloplotov, Alexei N. Panchenko and Dmitry A. Sorokin
Surfaces 2023, 6(2), 214-226; https://doi.org/10.3390/surfaces6020014 - 14 Jun 2023
Cited by 6 | Viewed by 2160
Abstract
Features of the nanosecond discharge development in a non-uniform electric field are studied experimentally. High spatial resolution imaging showed that thin luminous tracks of great length with a cross-section of a few microns are observed against the background of discharge glow in air and argon. It has been established that the detected tracks are adjacent to brightly luminous white spots on the electrodes or in the vicinity of these spots, and are associated with the flight of small particles. It is shown that the tracks have various shapes and change from pulse to pulse. The particle tracks may look like curvy or straight lines. In some photos, they can reverse their direction of movement. It was found that the particle’s track abruptly breaks and a bright flash is visible at the break point. The color of the tracks differs from that of the spark leaders, while the bands of the second positive nitrogen system dominate in the plasma emission spectra during the existence of a diffuse discharge. Areas of blue light are visible near the electrodes as well. The development of glow and thin luminous tracks in the gap during its breakdown is revealed using an ICCD camera. Physical reasons for the observed phenomena are discussed. Full article
(This article belongs to the Collection Featured Articles for Surfaces)

20 pages, 6890 KiB  
Article
YOLOv7-Peach: An Algorithm for Immature Small Yellow Peaches Detection in Complex Natural Environments
by Pingzhu Liu and Hua Yin
Sensors 2023, 23(11), 5096; https://doi.org/10.3390/s23115096 - 26 May 2023
Cited by 24 | Viewed by 3033
Abstract
Using object detection techniques on immature fruits to determine their quantity and position is a crucial step in intelligent orchard management. A yellow peach detection model (YOLOv7-Peach), based on an improved YOLOv7, was proposed to address the low detection accuracy caused by immature yellow peach fruits in natural scenes, which are similar in color to the leaves, small in size, and easily occluded. First, the anchor box information of the original YOLOv7 model was updated with the K-means clustering algorithm to generate anchor box sizes and aspect ratios suited to the yellow peach dataset. Second, a CA (coordinate attention) module was embedded into the YOLOv7 backbone to strengthen feature extraction for yellow peaches and improve detection accuracy. Third, the bounding-box regression loss was replaced with EIoU to accelerate convergence of the predicted boxes. Finally, a P2 module for shallow downsampling was added to the YOLOv7 head and the P5 module for deep downsampling was removed, effectively improving small-target detection. Experiments showed that YOLOv7-Peach improved mAP (mean average precision) by 3.5% over the original model, well above SSD, ObjectBox, and other detectors in the YOLO series; it performed well under different weather conditions and reached a detection speed of 21 fps, suitable for real-time detection of yellow peaches. This method can provide technical support for yield estimation in the intelligent management of yellow peach orchards, as well as ideas for the real-time, accurate detection of small fruits whose color is close to the background.
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
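The first step described above, clustering ground-truth box dimensions with K-means to obtain dataset-specific anchors, can be sketched as follows. This is a minimal illustration of the general technique used throughout the YOLO family, not the authors' implementation; the 1 − IoU distance metric, the number of anchors `k`, and the example box list are assumptions for the sketch.

```python
import random

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (width, height) box dimensions into k anchor sizes.

    The distance between a box and a cluster center is 1 - IoU, computed
    as if both boxes shared a corner, so clustering favors anchors that
    overlap well with ground-truth boxes regardless of absolute scale.
    """
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)  # initialize centers from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            # 1 - IoU distance to each current center
            dists = []
            for cw, ch in centers:
                inter = min(w, cw) * min(h, ch)
                union = w * h + cw * ch - inter
                dists.append(1 - inter / union)
            clusters[dists.index(min(dists))].append((w, h))
        # move each center to the mean (w, h) of its cluster
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(w for w, _ in cl) / len(cl),
                                    sum(h for _, h in cl) / len(cl)))
            else:
                new_centers.append(centers[i])  # keep empty cluster's center
        if new_centers == centers:
            break  # converged
        centers = new_centers
    # anchors sorted by area, small to large, as detection heads expect
    return sorted(centers, key=lambda wh: wh[0] * wh[1])
```

The resulting anchors would replace the defaults in the model configuration; small fruits like immature yellow peaches typically yield much smaller anchors than the COCO-derived defaults.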

16 pages, 2828 KiB  
Review
Plasmon Modulated Upconversion Biosensors
by Anara Molkenova, Hye Eun Choi, Jeong Min Park, Jin-Ho Lee and Ki Su Kim
Biosensors 2023, 13(3), 306; https://doi.org/10.3390/bios13030306 - 22 Feb 2023
Cited by 5 | Viewed by 3716
Abstract
Over the past two decades, lanthanide-based upconversion nanoparticles (UCNPs) have fascinated scientists with their unprecedented ability to upconvert tissue-penetrating near-infrared (NIR) light into color-tailorable optical illumination inside biological matter. In particular, the luminescent behavior of UCNPs has been widely exploited for background-free biorecognition and biosensing. A paramount challenge remains in maximizing NIR light harvesting and upconversion efficiency, so as to achieve faster response and better sensitivity without damaging biological tissue during laser-assisted photoactivation. This review offers the reader an overview of recent achievements and open challenges in the development of plasmon-modulated upconversion nanoformulations for biosensing applications.
(This article belongs to the Special Issue Nano/Micro Biosensors for Biomedical Applications)
