Search Results (7)

Search Parameters:
Keywords = YIQ color space

19 pages, 5648 KiB  
Article
An Object Feature-Based Recognition and Localization Method for Wolfberry
by Renwei Wang, Dingzhong Tan, Xuerui Ju and Jianing Wang
Sensors 2025, 25(11), 3365; https://doi.org/10.3390/s25113365 - 27 May 2025
Viewed by 365
Abstract
To improve the object recognition and localization capabilities of wolfberry harvesting robots, this study introduces an object feature-based image segmentation algorithm designed for the segmentation and localization of wolfberry fruits and branches in unstructured lighting environments. Firstly, based on the a-channel of the Lab color space and the I-channel of the YIQ color space, a feature fusion algorithm combined with wavelet transformation is proposed to achieve pixel-level fusion of the two feature images, significantly enhancing the image segmentation effect. Experimental results show that this method achieved a 78% segmentation accuracy for wolfberry fruits in 500 test image samples under complex lighting and occlusion conditions, demonstrating good robustness. Secondly, addressing the issue of branch colors being similar to the background, a K-means clustering segmentation algorithm based on the Lab color space is proposed, combined with morphological processing and length filtering strategies, effectively achieving precise segmentation of branches and localization of gripping point coordinates. Experiments validated the high accuracy of the improved algorithm in branch localization. The results indicate that the algorithm proposed in this paper can effectively address illumination changes and occlusion issues in complex harvesting environments. Compared with traditional segmentation methods, it significantly improves the segmentation accuracy of wolfberry fruits and the localization accuracy of branches, providing technical support for the vision system of field-based wolfberry harvesting robots and offering a theoretical basis and a practical reference for research on automated agricultural harvesting operations. Full article
(This article belongs to the Section Sensors and Robotics)
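
As a rough illustration of the kind of pixel-level feature fusion this abstract describes, the sketch below extracts the Lab a-channel and the YIQ I-channel as feature images and fuses them with a wavelet transform before Otsu thresholding. It assumes OpenCV, NumPy, and PyWavelets are available; the fusion rule (averaging approximation coefficients, keeping the larger-magnitude detail coefficients), the normalization steps, and the file name wolfberry.jpg are illustrative assumptions, not the paper's exact method.

```python
import cv2
import numpy as np
import pywt

# NTSC RGB -> YIQ conversion matrix (OpenCV has no built-in YIQ conversion).
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def feature_images(bgr):
    """Extract the Lab a-channel and the YIQ I-channel as float feature images in [0, 1]."""
    a = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)[:, :, 1].astype(np.float32) / 255.0
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    i = (rgb @ RGB2YIQ.T)[:, :, 1]
    i = (i - i.min()) / (i.max() - i.min() + 1e-8)           # rescale I to [0, 1]
    return a, i

def wavelet_fuse(img1, img2, wavelet="db2", level=2):
    """Pixel-level fusion: average the approximation coefficients and keep the
    larger-magnitude detail coefficients of the two feature images."""
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)

bgr = cv2.imread("wolfberry.jpg")                            # hypothetical test image
a_img, i_img = feature_images(bgr)
fused = wavelet_fuse(a_img, i_img)
fused = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```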

19 pages, 8831 KiB  
Article
Tongue Disease Prediction Based on Machine Learning Algorithms
by Ali Raad Hassoon, Ali Al-Naji, Ghaidaa A. Khalid and Javaan Chahl
Technologies 2024, 12(7), 97; https://doi.org/10.3390/technologies12070097 - 28 Jun 2024
Cited by 8 | Viewed by 19683
Abstract
The diagnosis of tongue disease is based on the observation of various tongue characteristics, including color, shape, texture, and moisture, which indicate the patient’s health status. Tongue color is one such characteristic that plays a vital role in identifying diseases and the level of progression of the ailment. With the development of computer vision systems, especially in the field of artificial intelligence, there has been significant progress in acquiring, processing, and classifying tongue images. This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YCbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified into seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The results obtained from the machine learning algorithms showed that XGBoost had the highest accuracy, at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%. Based on these results, the XGBoost algorithm was chosen as the classifier of the proposed imaging system and linked with a graphical user interface to predict tongue color and its related diseases in real time. Thus, this proposed imaging system opens the door for expanded tongue diagnosis within future point-of-care health systems. Full article
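
A minimal sketch of the classification stage described above: hand-crafted color features drawn from several color spaces (the standard-library colorsys conversion is used for YIQ) feed an XGBoost classifier through its scikit-learn API. The 15-dimensional mean-color feature design, the hyperparameters, and the tongue_features.npy / tongue_labels.npy files are illustrative assumptions, not the authors' pipeline.

```python
import colorsys

import cv2
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def color_features(bgr):
    """Mean channel values in RGB, HSV, Lab, YCbCr, and YIQ (a 15-dimensional vector)."""
    means = [cv2.cvtColor(bgr, code).reshape(-1, 3).mean(axis=0)
             for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV,
                          cv2.COLOR_BGR2Lab, cv2.COLOR_BGR2YCrCb)]
    r, g, b = means[0] / 255.0                               # mean RGB, scaled to [0, 1]
    means.append(np.array(colorsys.rgb_to_yiq(r, g, b)))     # stdlib YIQ conversion
    return np.concatenate(means)

# Hypothetical pre-extracted dataset: one feature vector per tongue image plus an
# integer label for each of the seven color classes (red, yellow, ..., pink).
X, y = np.load("tongue_features.npy"), np.load("tongue_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```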

12 pages, 2625 KiB  
Article
Image Perceptual Similarity Metrics for the Assessment of Basal Cell Carcinoma
by Panagiota Spyridonos, Georgios Gaitanis, Aristidis Likas, Konstantinos Seretis, Vasileios Moschovos, Laurence Feldmeyer, Kristine Heidemeyer, Athanasia Zampeta and Ioannis D. Bassukas
Cancers 2023, 15(14), 3539; https://doi.org/10.3390/cancers15143539 - 8 Jul 2023
Cited by 1 | Viewed by 1660
Abstract
Efficient management of basal cell carcinomas (BCC) requires reliable assessments of both tumors and post-treatment scars. We aimed to estimate image similarity metrics that account for BCC’s perceptual color and texture deviation from perilesional skin. In total, 176 clinical photographs of BCC were assessed by six physicians using a visual deviation scale. Internal consistency and inter-rater agreement were estimated using Cronbach’s α, weighted Gwet’s AC2, and quadratic Cohen’s kappa. The mean visual scores were used to validate a range of similarity metrics employing different color spaces, distances, and image embeddings from a pre-trained VGG16 neural network. The calculated similarities were transformed into discrete values using ordinal logistic regression models. The Bray–Curtis distance in the YIQ color model and rectified embeddings from the ‘fc6’ layer minimized the mean squared error and demonstrated strong performance in representing perceptual similarities. Box plot analysis and the Wilcoxon rank-sum test were used to visualize and compare the levels of agreement, conducted on a random validation round between the two groups: ‘Human–System’ and ‘Human–Human.’ The proposed metrics were comparable in terms of internal consistency and agreement with human raters. The findings suggest that the proposed metrics offer a robust and cost-effective approach to monitoring BCC treatment outcomes in clinical settings. Full article
(This article belongs to the Special Issue Skin Cancers as a Paradigm Shift: From Pathobiology to Treatment)
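
A hedged sketch of the best-performing color metric mentioned in the abstract, the Bray–Curtis distance computed in YIQ space: lesion and perilesional pixels are converted to YIQ, summarized as per-channel histograms, and compared with scipy.spatial.distance.braycurtis. The histogram binning and channel ranges are assumptions for illustration; the paper's exact feature representation is not specified here.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# NTSC RGB -> YIQ conversion matrix.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def yiq_histogram(rgb_pixels, bins=16):
    """Concatenated per-channel histograms of an (N, 3) uint8 RGB pixel array in YIQ space."""
    yiq = (rgb_pixels.astype(np.float32) / 255.0) @ RGB2YIQ.T
    ranges = [(0.0, 1.0), (-0.6, 0.6), (-0.53, 0.53)]        # approximate YIQ channel ranges
    hists = [np.histogram(yiq[:, c], bins=bins, range=r, density=True)[0]
             for c, r in enumerate(ranges)]
    return np.concatenate(hists)

def perceptual_distance(lesion_pixels, skin_pixels):
    """Bray-Curtis distance between the YIQ histograms of lesion and perilesional skin pixels."""
    return braycurtis(yiq_histogram(lesion_pixels), yiq_histogram(skin_pixels))
```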

17 pages, 40959 KiB  
Article
Joint Dedusting and Enhancement of Top-Coal Caving Face via Single-Channel Retinex-Based Method with Frequency Domain Prior Information
by Chengcai Fu, Fengli Lu, Xiaoxiao Zhang and Guoying Zhang
Symmetry 2021, 13(11), 2097; https://doi.org/10.3390/sym13112097 - 5 Nov 2021
Cited by 5 | Viewed by 2097
Abstract
Affected by the uneven concentration of coal dust and low illumination, most images captured at the top-coal caving face have low definition, heavy haze, and severe noise. In order to improve the visual quality of underground images captured at the top-coal caving face, a novel single-channel Retinex dedusting algorithm with frequency domain prior information is proposed to solve the problem that the Retinex defogging algorithm cannot defog and denoise simultaneously while preserving image details. Our work is inspired by the simple and intuitive observation that the low-frequency component of a dust-free image is amplified in the symmetrical spectrum after dust is added. A single-channel multiscale Retinex algorithm with color restoration (MSRCR) in YIQ space is proposed to restore the foggy approximation component in the wavelet domain. After that, multiscale convolution enhancement and a fast non-local means (FNLM) filter are used to minimize the noise of the detail components while retaining sufficient details. Finally, the dust-free image is reconstructed in the spatial domain and its color is restored by white balance. In comparisons with state-of-the-art image dedusting and defogging algorithms, the experimental results show that the proposed algorithm achieves higher contrast and visibility in both subjective and objective analyses while retaining sufficient details. Full article
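
The snippet below sketches only the luminance-channel Retinex idea underlying this method: the image is converted to YIQ, a multiscale Retinex is applied to the Y channel, and the result is converted back to RGB. It omits the paper's wavelet-domain restoration, multiscale convolution enhancement, FNLM filtering, and white balance; the Gaussian scales, normalization, and file name are illustrative assumptions.

```python
import cv2
import numpy as np

# NTSC RGB <-> YIQ conversion matrices.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def multiscale_retinex(channel, sigmas=(15, 80, 250)):
    """Multiscale Retinex on one channel: log(image) minus log of Gaussian-smoothed image."""
    channel = channel.astype(np.float32) + 1.0               # avoid log(0)
    msr = np.zeros_like(channel)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(channel, (0, 0), sigma)
        msr += np.log(channel) - np.log(blur)
    return msr / len(sigmas)

bgr = cv2.imread("coal_face.jpg")                            # hypothetical dusty input image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
yiq = rgb @ RGB2YIQ.T                                        # convert to YIQ
y_enh = multiscale_retinex(yiq[:, :, 0] * 255.0)             # enhance the luminance channel only
yiq[:, :, 0] = cv2.normalize(y_enh, None, 0.0, 1.0, cv2.NORM_MINMAX)
out = np.clip(yiq @ YIQ2RGB.T, 0.0, 1.0)                     # back to RGB
out = cv2.cvtColor((out * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)
```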

36 pages, 11726 KiB  
Article
Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space
by Kai Guo, Xiongfei Li, Hongrui Zang and Tiehu Fan
Entropy 2020, 22(12), 1423; https://doi.org/10.3390/e22121423 - 17 Dec 2020
Cited by 8 | Viewed by 3747
Abstract
In order to obtain the physiological information and key features of the source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source image. In the fusion layer, we calculate the weight of each feature map in the required fusion coefficient according to the trajectory of the feature map. Finally, the filtered medical information is spliced and decoded to reproduce the required fused image. In the encoding and image reconstruction networks, a mixed loss function of cross-entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on medical images of different grayscales and colors. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity compared with other algorithms. Full article
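
As context for "fusion in YIQ color space" in the title, here is a minimal stand-in for the color-handling step: the color modality is converted to YIQ, only the luminance channel is fused with the grayscale modality, and the chrominance channels are kept. The weighted-average fusion rule and the file names are placeholders for the paper's learned FusionNet, not a reproduction of it.

```python
import cv2
import numpy as np

# NTSC RGB <-> YIQ conversion matrices.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def fuse_color_gray(color_bgr, gray, alpha=0.5):
    """Fuse a color modality with a grayscale modality on the Y channel only,
    keeping the I and Q chrominance channels of the color image unchanged."""
    rgb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    yiq = rgb @ RGB2YIQ.T
    g = gray.astype(np.float32) / 255.0
    # Placeholder fusion rule: a weighted average stands in for the learned network.
    yiq[:, :, 0] = alpha * yiq[:, :, 0] + (1.0 - alpha) * g
    fused = np.clip(yiq @ YIQ2RGB.T, 0.0, 1.0)
    return cv2.cvtColor((fused * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)

# Hypothetical co-registered inputs, e.g. a pseudo-color functional image and an MRI slice.
pet = cv2.imread("pet_slice.png")
mri = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
fused = fuse_color_gray(pet, mri)
```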

25 pages, 12152 KiB  
Article
Automatic Shadow Detection for Multispectral Satellite Remote Sensing Images in Invariant Color Spaces
by Hongyin Han, Chengshan Han, Taiji Lan, Liang Huang, Changhong Hu and Xucheng Xue
Appl. Sci. 2020, 10(18), 6467; https://doi.org/10.3390/app10186467 - 17 Sep 2020
Cited by 22 | Viewed by 4161
Abstract
Shadow often causes difficulties for subsequent applications of multispectral satellite remote sensing images, such as object recognition and change detection. With continuous improvement in both the spatial and spectral resolutions of satellite remote sensing images, the existence of shadow has an increasingly serious impact on satellite remote sensing image interpretation. Though various shadow detection methods have been developed, problems of both shadow omission and nonshadow misclassification still exist for detecting shadow well in high-resolution multispectral satellite remote sensing images. These problems mainly include a high rate of small-shadow omission and typical nonshadow misclassification (such as the misclassification of bluish and greenish nonshadow regions and of large dark nonshadow regions). To further resolve these problems, a new shadow index is developed based on an analysis of the property differences between shadow and the corresponding nonshadow using several multispectral band components (i.e., the near-infrared, red, green and blue components) and the hue and intensity components in various invariant color spaces (i.e., HSI, HSV, CIELCh, YCbCr and YIQ). The shadow mask is then acquired by applying an optimal threshold, determined automatically, to the shadow index image. The final shadow image is further refined with morphological opening and closing operations. The proposed algorithm is verified with many images from WorldView-3 and WorldView-2 acquired at different times and sites. Its performance is evaluated in particular by qualitative visual comparison and quantitative assessment of the shadow detection results in comparative experiments with two WorldView-3 test images of Tripoli, Libya. Both the better visual quality and the higher overall accuracy (over 92% for the test image Tripoli-1 and approximately 91% for the test image Tripoli-2) of the experimental results demonstrate the excellent performance and robustness of the proposed shadow detection approach for high-resolution multispectral satellite remote sensing images. The proposed approach promises to further alleviate the typical problems of small-shadow omission and nonshadow misclassification for high-resolution multispectral satellite remote sensing images. Full article
(This article belongs to the Collection Optical Design and Engineering)
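
A small sketch of the post-processing stage this abstract describes: automatic thresholding of a shadow-index image followed by morphological opening and closing. Otsu's method is assumed here for the automatic threshold and the kernel size is arbitrary; the construction of the shadow index itself is not reproduced.

```python
import cv2
import numpy as np

def shadow_mask(index_img, kernel_size=5):
    """Threshold a shadow-index image automatically and clean the resulting mask
    with morphological opening and closing."""
    idx = cv2.normalize(index_img.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    _, mask = cv2.threshold(idx.astype(np.uint8), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove isolated false detections
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes inside shadows
    return mask
```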

12 pages, 1900 KiB  
Article
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
by Yuanshen Zhao, Liang Gong, Yixiang Huang and Chengliang Liu
Sensors 2016, 16(2), 173; https://doi.org/10.3390/s16020173 - 29 Jan 2016
Cited by 99 | Viewed by 9025
Abstract
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck for robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphological operations to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among the 200 samples. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. Full article
(This article belongs to the Section Physical Sensors)
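
To complement the segmentation pipeline described above (which mirrors the a*/I-channel wavelet fusion sketched for the wolfberry entry), the snippet below shows one way the resulting binary mask could be turned into per-fruit detections using connected-component analysis in OpenCV. The minimum-area filter is an illustrative assumption, not the paper's procedure.

```python
import cv2

def locate_fruits(mask, min_area=200):
    """Return centroids of sufficiently large connected components in a binary fruit mask."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    fruits = []
    for i in range(1, n):                                    # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:           # discard small noise blobs
            fruits.append(tuple(centroids[i]))
    return fruits
```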