Search Results (44)

Search Parameters:
Keywords = YCbCr

28 pages, 3794 KiB  
Article
A Robust System for Super-Resolution Imaging in Remote Sensing via Attention-Based Residual Learning
by Rogelio Reyes-Reyes, Yeredith G. Mora-Martinez, Beatriz P. Garcia-Salgado, Volodymyr Ponomaryov, Jose A. Almaraz-Damian, Clara Cruz-Ramos and Sergiy Sadovnychiy
Mathematics 2025, 13(15), 2400; https://doi.org/10.3390/math13152400 - 25 Jul 2025
Viewed by 203
Abstract
Deep learning-based super-resolution (SR) frameworks are widely used in remote sensing applications. However, existing SR models still face limitations, particularly in recovering contours, fine features, and textures, as well as in effectively integrating channel information. To address these challenges, this study introduces a novel residual model named OARN (Optimized Attention Residual Network) specifically designed to enhance the visual quality of low-resolution images. The network operates on the Y channel of the YCbCr color space and integrates LKA (Large Kernel Attention) and OCM (Optimized Convolutional Module) blocks. These components can restore large-scale spatial relationships and refine textures and contours, improving feature reconstruction without significantly increasing computational complexity. The performance of OARN was evaluated using satellite images from WorldView-2, GaoFen-2, and Microsoft Virtual Earth. Evaluation was conducted using objective quality metrics, such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Edge Preservation Index (EPI), and Learned Perceptual Image Patch Similarity (LPIPS), demonstrating superior results compared to state-of-the-art methods in both objective measurements and subjective visual perception. Moreover, OARN achieves this performance while maintaining computational efficiency, offering a balanced trade-off between processing time and reconstruction quality. Full article
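
As a rough illustration of the Y-channel workflow described in this abstract (a sketch only; it does not reproduce the OARN architecture, and `super_resolve_y` is a hypothetical stand-in for the trained network), the following Python snippet super-resolves the luma channel and upsamples the chroma channels with bicubic interpolation:

```python
# Sketch of Y-channel super-resolution in YCbCr; `super_resolve_y` is a placeholder
# for a trained model such as OARN (not included here).
import cv2
import numpy as np

def upscale_via_y(bgr_lr: np.ndarray, super_resolve_y, scale: int = 4) -> np.ndarray:
    """Super-resolve only the luma (Y) channel; upsample chroma with bicubic."""
    ycrcb = cv2.cvtColor(bgr_lr, cv2.COLOR_BGR2YCrCb)       # note: OpenCV orders Y, Cr, Cb
    y, cr, cb = cv2.split(ycrcb)
    y_sr = super_resolve_y(y)                               # network applied to Y only
    h, w = y_sr.shape[:2]
    cr_up = cv2.resize(cr, (w, h), interpolation=cv2.INTER_CUBIC)
    cb_up = cv2.resize(cb, (w, h), interpolation=cv2.INTER_CUBIC)
    return cv2.cvtColor(cv2.merge([y_sr, cr_up, cb_up]), cv2.COLOR_YCrCb2BGR)

# Example with bicubic standing in for the network:
# sr = upscale_via_y(cv2.imread("lr.png"),
#                    lambda y: cv2.resize(y, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC))
```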

21 pages, 9571 KiB  
Article
Performance Evaluation of Real-Time Image-Based Heat Release Rate Prediction Model Using Deep Learning and Image Processing Methods
by Joohyung Roh, Sehong Min and Minsuk Kong
Fire 2025, 8(7), 283; https://doi.org/10.3390/fire8070283 - 18 Jul 2025
Viewed by 506
Abstract
Heat release rate (HRR) is a key indicator for characterizing fire behavior, and it is conventionally measured under laboratory conditions. However, this measurement is limited in its widespread application to various fire conditions, due to its high cost, operational complexity, and lack of real-time predictive capability. Therefore, this study proposes an image-based HRR prediction model that uses deep learning and image processing techniques. The flame region in a fire video was segmented using the YOLO-YCbCr model, which integrates YCbCr color-space-based segmentation with YOLO object detection. For comparative analysis, the YOLO segmentation model was used. Furthermore, the fire diameter and flame height were determined from the spatial information of the segmented flame, and the HRR was predicted based on the correlation between flame size and HRR. The proposed models were applied to various experimental fire videos, and their prediction performances were quantitatively assessed. The results indicated that the proposed models accurately captured the HRR variations over time, and applying the average flame height calculation enhanced the prediction performance by reducing fluctuations in the predicted HRR. These findings demonstrate that the image-based HRR prediction model can be used to estimate real-time HRR values in diverse fire environments. Full article
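
As a minimal sketch of the color-rule component only (the YOLO detection stage and the flame-size-to-HRR correlation are not reproduced, and the thresholds below are illustrative assumptions), flame-like pixels can be selected in YCbCr roughly as follows:

```python
# Crude YCbCr color-rule flame mask (illustrative thresholds, not the YOLO-YCbCr pipeline).
import cv2
import numpy as np

def flame_mask_ycbcr(bgr: np.ndarray) -> np.ndarray:
    """Flame-like pixels tend to be bright with strong red chroma: Y > Cb and Cr > Cb."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    mask = (y > cb) & (cr > cb) & (y > y.mean()) & (cr > cr.mean())
    return mask.astype(np.uint8) * 255
```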

32 pages, 7929 KiB  
Article
Enhancing Security in Augmented Reality Through Hash-Based Data Hiding and Hierarchical Authentication Techniques
by Chia-Chen Lin, Aristophane Nshimiyimana, Chih-Cheng Chen and Shu-Han Liao
Symmetry 2025, 17(7), 1027; https://doi.org/10.3390/sym17071027 - 30 Jun 2025
Viewed by 249
Abstract
With the increasing integration of augmented reality (AR) in various applications, ensuring secure access and content authenticity has become a critical challenge. This paper proposes an innovative and robust authentication framework for protecting AR multimedia content through a hash-based data-hiding technique. Leveraging the Discrete Wavelet Transform (DWT) in the YCbCr color space, the method embeds multiple cryptographic hash signatures directly into the AR visual data. This design not only utilizes the symmetric property between two consecutive AR contents but also allows users to verify the connectivity between two AR digital contents by checking the embedded hash values. These embedded signatures support hierarchical, multi-level authentication, verifying not only the integrity and authenticity of individual AR objects but also their contextual relationships within the AR environment. The proposed system exhibits exceptional resilience to tampering, effectively identifying whether two consecutive e-pages in the AR content have been altered, while preserving high perceptual quality with PSNR values above 45 dB and SSIM scores consistently exceeding 0.98. This work presents a practical, real-time solution for enhancing AR content security, contributing significantly to the advancement of secure multimedia systems in next-generation interactive platforms. Full article
(This article belongs to the Section Computer)
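
A minimal sketch of embedding a hash signature in the DWT domain of the Y channel is shown below; the Haar wavelet, parity quantization, and quantization step are assumptions for illustration and do not reproduce the paper's hierarchical, multi-level scheme:

```python
# Hedged sketch: embed the SHA-256 hash of `payload` into DWT(Y) coefficients by
# parity quantization (illustrative parameters, not the paper's exact method).
import hashlib
import cv2
import numpy as np
import pywt

def embed_hash(bgr: np.ndarray, payload: bytes, step: float = 16.0) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(hashlib.sha256(payload).digest(), dtype=np.uint8))
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[..., 0]
    LL, detail = pywt.dwt2(y, "haar")
    for i, b in enumerate(bits):                       # 256 bits into the first 256 LL coeffs
        q = np.round(LL.flat[i] / step)
        if int(q) % 2 != int(b):
            q += 1                                     # force coefficient parity to match the bit
        LL.flat[i] = q * step
    y_marked = pywt.idwt2((LL, detail), "haar")[: y.shape[0], : y.shape[1]]
    ycrcb[..., 0] = np.clip(y_marked, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```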

23 pages, 18399 KiB  
Article
Channel Attention for Fire and Smoke Detection: Impact of Augmentation, Color Spaces, and Adversarial Attacks
by Usama Ejaz, Muhammad Ali Hamza and Hyun-chul Kim
Sensors 2025, 25(4), 1140; https://doi.org/10.3390/s25041140 - 13 Feb 2025
Cited by 1 | Viewed by 1405
Abstract
The prevalence of wildfires presents significant challenges for fire detection systems, particularly in differentiating fire from complex backgrounds and maintaining detection reliability under diverse environmental conditions. Addressing these challenges is crucial for developing sustainable and effective fire detection systems. In this paper, we (i) introduce a channel-wise attention-based architecture that achieves 95% accuracy and demonstrates an improved focus on flame-specific features critical for distinguishing fire in complex backgrounds; through ablation studies, we show that our channel-wise attention mechanism provides a significant 3–5% improvement in accuracy over baseline and state-of-the-art fire detection models; (ii) evaluate the impact of augmentation on fire detection, demonstrating improved performance across varied environmental conditions; (iii) conduct a comprehensive evaluation across color spaces, including RGB, Grayscale, HSV, and YCbCr, to analyze detection reliability; and (iv) assess model vulnerabilities, showing that Fast Gradient Sign Method (FGSM) perturbations significantly impact performance, reducing accuracy to 41%. Using Local Interpretable Model-Agnostic Explanations (LIME) visualization techniques, we provide insights into model decision-making processes under both standard and adversarial conditions, highlighting important considerations for fire detection applications. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
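
Channel-wise attention is commonly realized as a squeeze-and-excitation block; the PyTorch sketch below shows that generic mechanism under assumed layer sizes, not the authors' exact architecture:

```python
# Generic squeeze-and-excitation style channel attention (illustrative sizes).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excite: re-weight feature channels
```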

20 pages, 10618 KiB  
Article
Combining UAV Multi-Source Remote Sensing Data with CPO-SVR to Estimate Seedling Emergence in Breeding Sunflowers
by Shuailing Zhang, Hailin Yu, Bingquan Tian, Xiaoli Wang, Wenhao Cui, Lei Yang, Jingqian Li, Huihui Gong, Junsheng Zhao, Liqun Lu, Jing Zhao and Yubin Lan
Agronomy 2024, 14(10), 2205; https://doi.org/10.3390/agronomy14102205 - 25 Sep 2024
Viewed by 1251
Abstract
To accurately obtain the seedling emergence rate of breeding sunflowers and to assess both sowing quality and the merit of sunflower varieties, a method for extracting the sunflower seedling emergence rate from multi-source unmanned aerial vehicle (UAV) remote sensing data is proposed. Visible and multispectral images of sunflower seedlings were acquired with a UAV. A thresholding method was used to segment the excess green image derived from the visible image into vegetation and non-vegetation, to obtain the center point of each vegetation region for buffer generation, and to mask the visible image for weed removal. The channels of several color models, namely hue–saturation–value (HSV), luma–chrominance (YCbCr), cyan–magenta–yellow–black (CMYK), and CIELAB (L*A*B), were compared and analyzed. The A channel of the L*A*B model was preferred, and a genetic algorithm was used to optimize K-means clustering for segmenting sunflower seedlings from mulch, improving segmentation accuracy by 4.6% over the standard K-means clustering algorithm. In total, 10 geometric features of sunflower seedlings were extracted from the segmented images, and 10 vegetation indices and 48 texture features were calculated from the multispectral images. Pearson's correlation coefficient was used to filter the three types of features, and a geometric feature set, a vegetation index set, a texture feature set, and a preferred feature set were constructed. A sunflower plant number estimation model based on the crested porcupine optimizer–support vector regression (CPO-SVR) is proposed and compared with models built on decision tree regression, a BP neural network, and support vector regression (SVR). The results show that the model based on the preferred feature set is more accurate than those based on the other three feature sets, indicating that feature screening can improve model accuracy and stability; with the CPO-SVR model, the preferred feature set achieved the highest accuracy, with an R² of 0.94, an RMSE of 5.16, and an MAE of 3.03. Compared to the SVR model, R² improved by 3.3%, the RMSE decreased by 18.3%, and the MAE decreased by 18.1%. The study offers a cost-effective, accurate, and reliable way to obtain the seedling emergence rate in sunflower field breeding. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture—2nd Edition)
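
The excess-green thresholding step mentioned above is a standard operation; a brief sketch follows (Otsu thresholding stands in for the paper's threshold choice, and the GA-optimized K-means refinement on the L*A*B A channel is omitted):

```python
# Excess-green (ExG = 2G - R - B) vegetation mask with an Otsu threshold (illustrative).
import cv2
import numpy as np

def excess_green_mask(bgr: np.ndarray) -> np.ndarray:
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b                                   # excess green index per pixel
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask                                           # 255 = vegetation, 0 = background
```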

19 pages, 8831 KiB  
Article
Tongue Disease Prediction Based on Machine Learning Algorithms
by Ali Raad Hassoon, Ali Al-Naji, Ghaidaa A. Khalid and Javaan Chahl
Technologies 2024, 12(7), 97; https://doi.org/10.3390/technologies12070097 - 28 Jun 2024
Cited by 8 | Viewed by 19671
Abstract
The diagnosis of tongue disease is based on the observation of various tongue characteristics, including color, shape, texture, and moisture, which indicate the patient’s health status. Tongue color is one such characteristic that plays a vital role in identifying diseases and the level of progression of an ailment. With the development of computer vision systems, especially in the field of artificial intelligence, there has been important progress in acquiring, processing, and classifying tongue images. This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YCbCr, HSV, LAB, and YIQ). The proposed imaging system was trained on 5260 images classified into seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boosting (XGBoost) methods, to predict tongue color under any lighting conditions. The results obtained from the machine learning algorithms showed that XGBoost had the highest accuracy at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%. Based on these results, the XGBoost algorithm was chosen as the classifier of the proposed imaging system and linked with a graphical user interface to predict tongue color and its related diseases in real time. Thus, the proposed imaging system opens the door for expanded tongue diagnosis within future point-of-care health systems. Full article
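
A simple sketch of pooling color features across several color spaces is shown below; the per-channel mean/standard-deviation features and the omission of YIQ (which has no built-in OpenCV conversion code) are illustrative assumptions rather than the paper's exact feature set:

```python
# Per-image color features pooled from several color spaces (illustrative feature set).
import cv2
import numpy as np

def color_space_features(bgr: np.ndarray) -> np.ndarray:
    spaces = [bgr,
              cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb),
              cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
              cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)]       # YIQ omitted (no OpenCV code)
    feats = []
    for img in spaces:
        pixels = img.reshape(-1, 3).astype(np.float32)
        feats.extend(pixels.mean(axis=0))                  # per-channel mean
        feats.extend(pixels.std(axis=0))                   # per-channel spread
    return np.asarray(feats, dtype=np.float32)             # 4 spaces x 6 stats = 24-D vector
```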

18 pages, 28354 KiB  
Article
A Hybrid Domain Color Image Watermarking Scheme Based on Hyperchaotic Mapping
by Yumin Dong, Rui Yan, Qiong Zhang and Xuesong Wu
Mathematics 2024, 12(12), 1859; https://doi.org/10.3390/math12121859 - 14 Jun 2024
Cited by 2 | Viewed by 1326
Abstract
In image watermarking, it is very important to balance imperceptibility, robustness, and embedding capacity. To address this key problem, this paper proposes a new adaptive color image watermarking scheme based on the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD). To improve the security of the watermark, we use a Lorenz hyperchaotic map to encrypt the watermark image. We adaptively determine the embedding factor by calculating the Bhattacharyya distance between the cover image and the watermark image, and use alpha blending to embed the watermark image into the Y component of the YCbCr color space, enhancing the imperceptibility of the algorithm. The experimental results show that the average PSNR of our scheme is 45.9382 dB and the SSIM is 0.9986. Extensive experiments and comparative analysis show that the scheme has good imperceptibility and robustness, indicating that a good balance between imperceptibility, robustness, and embedding capacity has been achieved. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
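
As a hedged illustration of the chaotic-encryption idea, the sketch below integrates a classic three-variable Lorenz system and binarizes its orbit into a keystream for XOR scrambling of watermark bits; the paper uses a hyperchaotic Lorenz variant with an additional state and a different keying scheme, which is not reproduced here:

```python
# Chaotic keystream sketch: classic Lorenz system, Euler-integrated and crudely binarized
# (textbook parameters; illustrative only, not the paper's hyperchaotic construction).
import numpy as np

def lorenz_keystream(n_bits: int, x0=(0.1, 0.0, 0.0), dt: float = 0.01) -> np.ndarray:
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = x0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        bits[i] = 1 if x > 0 else 0                     # binarize the chaotic orbit
    return bits

def xor_scramble(wm_bits: np.ndarray, key_bits: np.ndarray) -> np.ndarray:
    """Reversible scrambling: applying the same keystream again restores the bits."""
    return np.bitwise_xor(wm_bits, key_bits[: wm_bits.size])
```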

40 pages, 23599 KiB  
Article
Bio-Inspired Watermarking Method for Authentication of Fundus Images in Computer-Aided Diagnosis of Retinopathy
by Ernesto Moya-Albor, Sandra L. Gomez-Coronel, Jorge Brieva and Alberto Lopez-Figueroa
Mathematics 2024, 12(5), 734; https://doi.org/10.3390/math12050734 - 29 Feb 2024
Cited by 5 | Viewed by 2059
Abstract
Nowadays, medical imaging has become an indispensable tool for the diagnosis of some pathologies and a health prevention instrument. In addition, medical images are transmitted over all types of computer networks, many of them insecure or susceptible to intervention, making sensitive patient information vulnerable. Thus, image watermarking is a popular approach for embedding copyright protection, an Electronic Patient Record (EPR), institution information, or other digital data into medical images. However, in the medical field, the watermark must preserve the quality of the image for diagnosis purposes. In addition, the inserted watermark must be robust to both intentional and unintentional attacks, which try to delete or weaken it. This work presents a bio-inspired watermarking algorithm applied to retinal fundus images used in computer-aided retinopathy diagnosis. The proposed system uses the Steered Hermite Transform (SHT), an image model inspired by the Human Visual System (HVS), as a spread spectrum watermarking technique, leveraging its bio-inspired nature to give imperceptibility to the watermark. In addition, the Singular Value Decomposition (SVD) is used to provide robustness of the watermark against attacks. Moreover, the watermark is embedded into the RGB fundus images through the blood vessel patterns extracted by the SHT and using the luma band of the Y’CbCr color model. The watermark was also encrypted using the Jigsaw Transform (JST) to incorporate an extra level of security. The proposed approach was tested using the public image dataset MESSIDOR-2, which contains 1748 8-bit color images of different sizes presenting different grades of Diabetic Retinopathy (DR). On the one hand, we evaluated the proposed bio-inspired watermarking method over the entire MESSIDOR-2 dataset, showing that the embedding process does not affect the quality of the fundus images or the extracted watermark, obtaining average Peak Signal-to-Noise Ratio (PSNR) values higher than 53 dB for the watermarked images and average PSNR values higher than 32 dB for the extracted watermark over the entire dataset. We also tested the method against image processing and geometric attacks, successfully extracting the watermark. A comparison of the proposed method against the state of the art was performed, obtaining competitive results. On the other hand, we classified the DR grade of the fundus image dataset using four trained deep learning models (VGG16, ResNet50, InceptionV3, and YOLOv8) to compare inference results on the original and marked images. The results show that DR grading remains consistent for both the non-marked and marked images. Full article
(This article belongs to the Special Issue Data Hiding, Steganography and Its Application)
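
A keyed block-permutation ("jigsaw") scramble in the spirit of the Jigsaw Transform is sketched below; the block size and seed-based keying are assumptions, and descrambling applies the inverse permutation:

```python
# Keyed jigsaw-style scramble of non-overlapping blocks (illustrative key and block size).
import numpy as np

def jigsaw_scramble(img: np.ndarray, block: int = 8, seed: int = 42) -> np.ndarray:
    h, w = img.shape[:2]
    hb, wb = h // block, w // block
    tiles = [img[i * block:(i + 1) * block, j * block:(j + 1) * block].copy()
             for i in range(hb) for j in range(wb)]
    order = np.random.default_rng(seed).permutation(len(tiles))   # the key is the RNG seed
    out = img.copy()
    for k, idx in enumerate(order):
        i, j = divmod(k, wb)
        out[i * block:(i + 1) * block, j * block:(j + 1) * block] = tiles[idx]
    return out                                                    # invert with the inverse permutation
```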

15 pages, 4739 KiB  
Article
Nectarine Disease Identification Based on Color Features and Label Sparse Dictionary Learning with Hyperspectral Images
by Ronghui Miao, Jinlong Wu, Hua Yang and Fenghua Huang
Appl. Sci. 2023, 13(21), 11904; https://doi.org/10.3390/app132111904 - 31 Oct 2023
Cited by 1 | Viewed by 1241
Abstract
Fruit cracking and rust spots are common diseases of nectarines that seriously affect their yield and quality. Therefore, it is essential to construct fast and accurate disease-identification models for agricultural products. In this paper, a sparse dictionary learning method was proposed to realize the rapid and nondestructive identification of nectarine disease based on multiple color features combined with improved LK-SVD (Label K-Singular Value Decomposition). According to the color characteristics of the nectarine itself and the significant color differences existing in the three categories of nectarine (diseased, normal, and background parts), multiple color spaces of RGB, HSV, Lab, and YCbCr were studied. It was concluded that the G channel in RGB space, Y channel in YCbCr space, and L channel in Lab space can better distinguish the diseased part from the other parts. At the model-training stage, pixels of the diseased, normal, and background parts in the nectarine image were randomly selected as the initial training sets, and then, the neighboring image blocks of the pixels were selected to construct the feature vectors based on the above color space channels. An improved LK-SVD dictionary learning algorithm was proposed that integrated the category label into the process of dictionary learning, and thus, an over-complete feature dictionary with significant discrimination was obtained. At the model-testing stage, the orthogonal matching pursuit (OMP) algorithm was used for sparse reconstruction of the original data, which can obtain the classification categories based on the optimized feature dictionary. The experimental results show that the sparse dictionary learning method based on multi-color features combined with improved LK-SVD can identify fruit cracking and rust spot diseases of nectarines quickly and accurately, and the average overall classification accuracies were 92.06% and 88.98%, respectively, which were better than those of k-nearest neighbor (KNN), support vector machine (SVM), DeepLabV3+, and Unet++; the identification results of DeepLabV3+ and Unet++ were also relatively high, but their average time costs were much higher, requiring 126.46~265.65 s. It is demonstrated that this study can provide technical support for disease identification in agricultural products. Full article
(This article belongs to the Section Agricultural Science and Technology)
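
The sparse-reconstruction classification step can be sketched with scikit-learn's OMP as below, assigning a sample to the class whose sub-dictionary yields the smallest residual; the dictionaries are assumed to be already learned, and the improved LK-SVD training itself is not reproduced:

```python
# Classify by smallest OMP reconstruction residual over per-class dictionaries (sketch).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_by_residual(x: np.ndarray, class_dicts: list, k: int = 5) -> int:
    """x: feature vector of length m; each dictionary D has shape (m, n_atoms)."""
    residuals = []
    for D in class_dicts:
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(D, x)                                   # sparse code of x over D
        residuals.append(np.linalg.norm(x - D @ omp.coef_))
    return int(np.argmin(residuals))                    # class with the best reconstruction
```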

19 pages, 3035 KiB  
Article
A Comparative Analysis of Skin Cancer Detection Applications Using Histogram-Based Local Descriptors
by Yildiz Aydin
Diagnostics 2023, 13(19), 3142; https://doi.org/10.3390/diagnostics13193142 - 6 Oct 2023
Cited by 12 | Viewed by 3342
Abstract
Among the most serious types of cancer is skin cancer. Despite the risk of death, when caught early, the rate of survival is greater than 95%. This inspires researchers to explore methods that allow for the early detection of skin cancer that could save millions of lives. The ability to detect the early signs of skin cancer has become more urgent in light of the rising number of illnesses, the high death rate, and costly healthcare treatments. Given the gravity of these issues, experts have created a number of existing approaches for detecting skin cancer. Identifying skin cancer and whether it is benign or malignant involves detecting features of the lesions such as size, form, symmetry, color, etc. The aim of this study is to determine the most successful skin cancer detection methods by comparing the outcomes and effectiveness of the various applications that categorize benign and malignant forms of skin cancer. Descriptors such as the Local Binary Pattern (LBP), the Local Directional Number Pattern (LDN), the Pyramid of Histogram of Oriented Gradients (PHOG), the Local Directional Pattern (LDiP), and Monogenic Binary Coding (MBC) are used to extract the necessary features. Support vector machines (SVM) and XGBoost are used in the classification process. In addition, this study uses colored histogram-based features to classify the various characteristics obtained from the color images. In the experimental results, the applications implemented with the proposed color histogram-based features were observed to be more successful. Under the proposed method (the colored LDN feature obtained using the YCbCr color space with the XGBoost classifier), a 90% accuracy rate was achieved on Dataset 1, which was obtained from the Kaggle website. For the HAM10000 data set, an accuracy rate of 96.50% was achieved under a similar proposed method (the colored MBC feature obtained using the HSV color space with the XGBoost classifier). Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
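
A sketch of one "colored" histogram-based descriptor follows: a uniform LBP computed per YCbCr channel with the per-channel histograms concatenated; the descriptor parameters and channel choice are illustrative assumptions:

```python
# Colored LBP histogram: uniform LBP per YCbCr channel, histograms concatenated (sketch).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def colored_lbp_histogram(bgr: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    feats = []
    for c in range(3):
        lbp = local_binary_pattern(ycrcb[..., c], P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)                               # P + 2 uniform-LBP bins per channel
    return np.concatenate(feats)
```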

14 pages, 2403 KiB  
Article
DBENet: Dual-Branch Brightness Enhancement Fusion Network for Low-Light Image Enhancement
by Yongqiang Chen, Chenglin Wen, Weifeng Liu and Wei He
Electronics 2023, 12(18), 3907; https://doi.org/10.3390/electronics12183907 - 16 Sep 2023
Cited by 2 | Viewed by 1881
Abstract
In this paper, we propose an end-to-end low-light image enhancement network based on the YCbCr color space to address the issues encountered by existing algorithms when dealing with brightness distortion and noise in the RGB color space. Traditional methods typically enhance the image first and then denoise, but this amplifies the noise hidden in the dark regions, leading to suboptimal enhancement results. To overcome these problems, we utilize the characteristics of the YCbCr color space to convert the low-light image from RGB to YCbCr and design a dual-branch enhancement network. The network consists of a CNN branch and a U-net branch, which are used to enhance the contrast of luminance and chrominance information, respectively. Additionally, a fusion module is introduced for feature extraction and information measurement. It automatically estimates the importance of corresponding feature maps and employs adaptive information preservation to enhance contrast and eliminate noise. Finally, through testing on multiple publicly available low-light image datasets and comparing with classical algorithms, the experimental results demonstrate that the proposed method generates enhanced images with richer details, more realistic colors, and less noise. Full article
(This article belongs to the Special Issue Applications of Computer Vision, 2nd Edition)
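
A skeleton of the dual-branch split is sketched below, with one small branch enhancing the luma (Y) channel and another the chroma (CbCr) channels before the outputs are recombined; the layer sizes are placeholders, and the paper's U-net branch and fusion module are not reproduced:

```python
# Dual-branch skeleton: separate processing of luma and chroma in YCbCr (placeholder layers).
import torch
import torch.nn as nn

class DualBranchEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.luma = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 3, padding=1))
        self.chroma = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, ycbcr: torch.Tensor) -> torch.Tensor:   # (B, 3, H, W), Y channel first
        y = self.luma(ycbcr[:, :1])
        cbcr = self.chroma(ycbcr[:, 1:])
        return torch.cat([y, cbcr], dim=1)                    # recombined YCbCr output
```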

15 pages, 13788 KiB  
Article
Sub-Pixel Convolutional Neural Network for Image Super-Resolution Reconstruction
by Guifang Shao, Qiao Sun, Yunlong Gao, Qingyuan Zhu, Fengqiang Gao and Junfa Zhang
Electronics 2023, 12(17), 3572; https://doi.org/10.3390/electronics12173572 - 24 Aug 2023
Cited by 11 | Viewed by 4180
Abstract
Image super-resolution (SR) reconstruction technology can improve the quality of low-resolution (LR) images. Many deep learning networks, unlike traditional machine learning algorithms, are available for this task; however, they are usually prone to heavy computation, vanishing gradients, and loss of useful information. In this work, we propose a sub-pixel convolutional neural network (SPCNN) for image SR reconstruction. First, to reduce the strong inter-channel correlation, the image was converted from RGB to YCbCr, and the Y channel data were chosen as the input LR image. The LR image itself, rather than an interpolation-reconstructed image as used in the super-resolution convolutional neural network (SRCNN), was fed to the network to reduce computation. Then, two convolution layers were built to obtain more features, and four non-linear mapping layers were used to obtain features at different levels. Furthermore, residual connections were introduced to transfer feature information from lower to higher layers and to avoid exploding or vanishing gradients. Finally, a sub-pixel convolution layer was designed for up-sampling to reduce the reconstruction time. Experiments on three different data sets showed that the proposed SPCNN outperforms the bicubic, sparsity constraint super-resolution (SCSR), anchored neighborhood regression (ANR), and SRCNN methods in reconstruction precision and time consumption. Full article
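
The sub-pixel up-sampling step can be sketched in a few lines of PyTorch, as shown below; the layer widths are illustrative and do not reproduce the exact SPCNN configuration:

```python
# ESPCN-style sub-pixel upsampling of the Y channel: conv features -> r^2 channels -> PixelShuffle.
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    def __init__(self, in_ch: int = 1, scale: int = 3):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(in_ch, 64, 5, padding=2), nn.ReLU(),
                                      nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.to_subpixel = nn.Conv2d(32, in_ch * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)           # (B, r^2, H, W) -> (B, 1, rH, rW)

    def forward(self, y_lr: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.to_subpixel(self.features(y_lr)))
```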

20 pages, 22254 KiB  
Article
Image-Based Corrosion Detection in Ancillary Structures
by Amrita Das, Eberechi Ichi and Sattar Dorafshan
Infrastructures 2023, 8(4), 66; https://doi.org/10.3390/infrastructures8040066 - 28 Mar 2023
Cited by 12 | Viewed by 4224
Abstract
Ancillary structures are essential for the safe operation of highways but are particularly prone to environmental corrosion. The traditional way of inspecting ancillary structures is manned inspection, which is laborious, time-consuming, and unsafe for inspectors. In this paper, a novel image processing technique was developed for autonomous corrosion detection of in-service ancillary structures. The authors successfully leveraged corrosion features in the YCbCr color space as an alternative to the conventional red–green–blue (RGB) color space. The proposed method included a preprocessing stage comprising contrast adjustment, histogram equalization, adaptive histogram equalization, and determination of the optimum brightness value. The effect of preprocessing was evaluated against a semantically segmented ground truth consisting of pixel-level annotated images. The false detection rate was higher with Otsu's method than with the global threshold method; therefore, the preprocessed images were converted to binary using the global threshold value. Finally, an average accuracy and true positive rate of 90% and 70%, respectively, were achieved for corrosion prediction in the YCbCr color space. Full article
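
A crude preprocess-then-threshold sketch in YCbCr is shown below; the CLAHE parameters, the use of the Cr channel, and the global threshold value are assumptions for illustration rather than the paper's tuned pipeline:

```python
# Preprocess the luma channel, then binarize with a global threshold on Cr (illustrative).
import cv2
import numpy as np

def corrosion_mask(bgr: np.ndarray, cr_thresh: int = 140) -> np.ndarray:
    y, cr, _ = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb))
    y_eq = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(y)  # adaptive hist. equalization
    _, mask = cv2.threshold(cr, cr_thresh, 255, cv2.THRESH_BINARY)       # rust-like pixels: high Cr
    mask[(y_eq < 30) | (y_eq > 220)] = 0                                 # drop extreme-luma regions
    return mask
```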

15 pages, 5975 KiB  
Article
A Novel Approach to Quantifying the Failure Modes of Concrete-Epoxy Interface
by Abubakar Sodiq Ishaq, Yoonju Jang, Donghyeok An, Yoseok Jeong and Ilro Youn
Materials 2023, 16(6), 2376; https://doi.org/10.3390/ma16062376 - 16 Mar 2023
Cited by 1 | Viewed by 1426
Abstract
The failure or debonding of CEIs (Concrete-Epoxy Interfaces) in fiber-reinforced polymer (FRP)–concrete systems occurs in one or a combination of three modes: CC (Cohesive failure in Concrete), CE (Cohesive failure in Epoxy), and IF (Interfacial Failure). These failure modes are usually identified, and their relationships established, by human intuition, which is prone to subjectivity. This study proposes a novel method based on image processing techniques to analyze CEI fracture surfaces and evaluate their failure modes. The failure modes of CEI fracture surfaces of specimens from a 3PB (Three-Point Bending) experiment were assessed using an image segmentation technique based on the HSV, CIE L*a*b*, YCbCr, or RGB color space, applied to preprocessed images of the CEI failure sides. A manual approach was adopted to validate the accuracy of the proposed method. Comparing the failure mode (CE) obtained using the manual and the proposed methodologies, an RMSE (Root Mean Square Error) of 0.19, 0.10, 0.23, and 0.26 was obtained for the HSV, CIE L*a*b*, YCbCr, and RGB color spaces, respectively. The epoxy area segmented in the CIE L*a*b* color space produced the most accurate evaluation of the failure modes. This study provides an accurate method of quantifying the failure modes of CEI fracture surfaces. The methodology proposed in this study is recommended for forensic investigations to better understand the possible causes of failure in externally bonded fiber-reinforced polymers. Full article
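
The color-space segmentation and area-quantification idea can be sketched as follows, where K-means in CIE L*a*b* groups fracture-surface pixels and per-cluster area fractions approximate failure-mode proportions; the cluster count and the mapping of clusters to CC/CE/IF are assumptions that would require manual assignment:

```python
# K-means in CIE L*a*b*; cluster area fractions as a proxy for failure-mode proportions (sketch).
import cv2
import numpy as np

def area_fractions_lab(bgr: np.ndarray, k: int = 3) -> np.ndarray:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(lab, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return counts / counts.sum()                         # fraction of surface pixels per cluster
```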

22 pages, 2962 KiB  
Article
A Hybrid Framework for Detection and Analysis of Leaf Blight Using Guava Leaves Imaging
by Sidrah Mumtaz, Mudassar Raza, Ofonime Dominic Okon, Saeed Ur Rehman, Adham E. Ragab and Hafiz Tayyab Rauf
Agriculture 2023, 13(3), 667; https://doi.org/10.3390/agriculture13030667 - 13 Mar 2023
Cited by 8 | Viewed by 2797 | Correction
Abstract
Fruit is an essential element of human life and a significant gain for the agriculture sector. Guava is a common fruit found in different countries. It is considered the fourth primary fruit in Pakistan. Several bacterial and fungal diseases found in guava fruit decrease production daily. Leaf Blight is a common disease found in guava fruit that affects the growth and production of fruit. Automatic detection of leaf blight disease in guava fruit can help avoid decreases in its production. In this research, we proposed a CNN-based deep model named SidNet. The proposed model contains thirty-three layers. We used a guava dataset for early recognition of leaf blight, which consists of two classes. Initially, the YCbCr color space was employed as a preprocessing step in detecting leaf blight. As the original dataset was small, data augmentation was performed. DarkNet-53, AlexNet, and the proposed SidNet were used for feature acquisition. The features were fused to get the best-desired results. Binary Gray Wolf Optimization (BGWO) was used on the fused features for feature selection. The optimized features were given to the variants of SVM and KNN classifiers for classification. The experiments were performed on 5- and 10-fold cross validation. The highest achievable outcomes were 98.9% with 5-fold and 99.2% with 10-fold cross validation, confirming the evidence that the identification of Leaf Blight is accurate, successful, and efficient. Full article
(This article belongs to the Special Issue Big Data Analytics and Machine Learning for Smart Agriculture)
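
A compact sketch of the fuse-then-classify step is given below, assuming per-image deep feature vectors from two backbones have already been extracted; the BGWO feature-selection stage and the specific classifier variants are omitted:

```python
# Serial (concatenation) feature fusion followed by an SVM with 5-fold cross validation (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fuse_and_score(feats_a: np.ndarray, feats_b: np.ndarray, labels: np.ndarray) -> float:
    fused = np.concatenate([feats_a, feats_b], axis=1)      # serial fusion of the two feature sets
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, fused, labels, cv=5).mean() # mean 5-fold accuracy
```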
