Detection of Hydroxychloroquine Retinopathy via Hyperspectral and Deep Learning through Ophthalmoscope Images

Hydroxychloroquine (HCQ), a derivative of chloroquine, is primarily utilized to manage various autoimmune diseases, such as systemic lupus erythematosus, rheumatoid arthritis, and Sjogren's syndrome. However, this drug has side effects, including diarrhea, blurred vision, headache, skin itching, poor appetite, and gastrointestinal discomfort. Blurred vision is caused by irreversible retinal damage and can only be mitigated by reducing the hydroxychloroquine dosage or discontinuing the drug under a physician's supervision. In this study, color fundus images were utilized to identify differences in lesions caused by hydroxychloroquine. A total of 176 color fundus images were captured from a cohort of 91 participants, comprising 25 patients diagnosed with hydroxychloroquine retinopathy and 66 individuals without any retinopathy. The mean age of the participants was 75.67 ± 7.76 years. Following the selection of a specific region of interest within each image, hyperspectral conversion technology was employed to obtain the spectrum of the sampled image. Spectral analysis was then conducted to discern differences between normal and hydroxychloroquine-induced lesions that are imperceptible to the human eye on color fundus images. We implemented deep learning models to detect lesions, leveraging four artificial neural networks (ResNet50, Inception_v3, GoogLeNet, and EfficientNet). The overall accuracy of ResNet50 reached 93% for the original images (ORIs) and 96% for the hyperspectral images (HSIs). The overall accuracy of Inception_v3 was 87% for ORIs and 91% for HSIs, and that of GoogLeNet was 88% for ORIs and 91% for HSIs. Finally, EfficientNet achieved an overall accuracy of 94% for ORIs and 97% for HSIs.


Introduction
The retina is situated within the inner layer of the eyeball wall and is a delicate and intricate structure. The average thickness of the human retina is approximately 250 µm [1,2]. Histologically, it consists of 10 layers extending from the retinal pigment epithelium (RPE) to the inner limiting membrane. The retina is similar to the negative film of a camera because it is responsible for photosensitive imaging. A variety of ocular diseases can arise in the retina, with some of the most common ones being age-related macular degeneration (AMD), diabetic retinopathy (DR), retinitis pigmentosa, and glaucoma [3][4][5][6][7][8][9][10].

Data Collection
The overall research process is illustrated in Figure 1. In data collection, retinal images were obtained using an ophthalmoscope and cropped to specific areas. Cropping was conducted with reference to the research results of Hadoux et al. [24]. The instruments used were a Nonmyd 7 retinal camera (Kowa American Corporation) and a D7200 camera (Nikon Taiwan). Foveal locations (F1, F2) from [24] were merged into one area (F), as shown in Figure 2a,b. To ensure a correlation between spectral variables and avoid selection bias, regions with high color contrast were carefully chosen. These areas were selected based on the presence of vascular organization and important biological structures of the eye, such as the fovea and nerve fibers. By focusing on these regions, the spectral bands are enhanced, contributing to a more effective analysis [24].

A hyperspectral conversion algorithm was employed to extract spectral features from five positions serving as regions of interest (ROIs) in the augmented dataset. Spectral analysis was then applied to the ROIs to identify the most prominent spectral regions. Bands in the 500-600 nm range were selected to perform color reproduction from the hyperspectral bands to RGB images. The HS images were transformed from HS cubes (512 × 512 × 401) to fit the input of the deep learning models (512 × 512 × 3). Eventually, the HS images were utilized to train the deep learning models.
Figure 1. Experimental flow chart.
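The band selection and cube-reduction steps described above can be sketched in NumPy. This is a minimal illustration, not the paper's exact color-reproduction pipeline (which is detailed in Section S2 of the Supplementary Materials): splitting the 500-600 nm window into three averaged channels is an assumption, and a small 64 × 64 cube stands in for the real 512 × 512 × 401 data.

```python
import numpy as np

def cube_to_three_channels(cube, wavelengths, lo=500, hi=600):
    """Collapse a hyperspectral cube (H, W, B) to a 3-channel image by
    averaging three equal sub-ranges of the selected band window.
    The three-way split is an illustrative choice, not the paper's method."""
    sel = (wavelengths >= lo) & (wavelengths <= hi)
    band = cube[:, :, sel]                              # keep only 500-600 nm bands
    thirds = np.array_split(np.arange(band.shape[2]), 3)
    chans = [band[:, :, idx].mean(axis=2) for idx in thirds]
    return np.stack(chans, axis=2)                      # (H, W, 3)

wavelengths = np.linspace(380, 780, 401)                # 401 bands, 1 nm spacing
cube = np.random.rand(64, 64, 401).astype(np.float32)   # toy stand-in for a 512x512x401 cube
img = cube_to_three_channels(cube, wavelengths)
print(img.shape)  # → (64, 64, 3)
```

The resulting three-channel array matches the (512 × 512 × 3) input shape expected by the deep learning models when applied to a full-size cube.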
In this study, we recruited 25 patients treated with HCQ and 66 people with normal vision. According to the findings presented in Table 1, no statistically significant differences were observed between the HCQ group and the normal group in terms of age and sex. Furthermore, there were no significant differences detected between the two groups regarding the prevalence of high blood pressure, glaucoma, AMD, or DR.
We obtained 176 color fundus images after excluding images that were blurred or affected by the light source. Patients treated with HCQ were included in the study. The dose used was 200 mg per day, and retinopathy can develop after 5 years of continuous treatment; such long-term continuous treatment makes patient cases difficult to collect. Patients with dementia, DR, AMD, or glaucoma were excluded, and the presence of these conditions was tested for. The dataset comprised 66 normal color fundus images and 110 color fundus images of patients who were administered HCQ. Normal fundus images were defined as those from people without HCQ treatment, dementia, DR, AMD, glaucoma, or poor vision. A spectral-domain OCT image of the left eye of a normal individual with a normal foveal profile was used for comparison (Figure 2c). Optical coherence tomography (OCT) was used as the basis for diagnosis to accurately discern the presence of retinopathy caused by HCQ, given the difficulty of identifying HCQ retinopathy on color fundus images. In Figure 2d, HCQ leads to the loss of the external limiting membrane and ellipsoid zone and the thinning of the outer nuclear layer on both sides of the fovea (green box). The outer nuclear layer, external limiting membrane, and ellipsoid zone remain unaffected below the fovea, forming a saucer-like appearance (red box). The fundus images collected in this study were sampled in specific areas, namely, above the temporal vascular arcade (S1, S2), the fovea (F), and below the temporal vascular arcade (I1, I2), with an area size of 240 × 240 pixels (Figure 3).
1 Interval variables are expressed as mean ± standard deviation. An unpaired two-tailed t-test was applied to compare the means of the two independent HCQ and normal groups; the effect size and 95% CI are the difference between means. 2 Categorical variables are expressed as the number of participants. The chi-square test evaluates whether there is a significant association between the categories of the two variables; the effect size and 95% CI are the odds ratio. 3 All participants in the 60s, 70s, and 80s age ranges were analyzed with a one-way analysis of variance (ANOVA) test.
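The three tests described in the table footnotes (unpaired two-tailed t-test, chi-square test, one-way ANOVA) can be reproduced with SciPy. The sample values below are hypothetical stand-ins, since the participants' raw data are not published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant values; the study's raw data are not public.
hcq_ages = rng.normal(75.5, 7.8, 25)      # 25 HCQ patients
normal_ages = rng.normal(75.8, 7.7, 66)   # 66 normal participants

# 1) Unpaired two-tailed t-test on the group mean ages
t_stat, p_age = stats.ttest_ind(hcq_ages, normal_ages)

# 2) Chi-square test on a hypothetical 2x2 sex-by-group contingency table
table = np.array([[10, 15],    # HCQ group: male, female
                  [30, 36]])   # normal group: male, female
chi2, p_sex, dof, expected = stats.chi2_contingency(table)

# 3) One-way ANOVA across the 60s/70s/80s age bins
groups = [rng.normal(m, 5.0, 20) for m in (65.0, 75.0, 85.0)]
f_stat, p_anova = stats.f_oneway(*groups)

print(f"age t-test p={p_age:.3f}, sex chi-square p={p_sex:.3f}, ANOVA p={p_anova:.2e}")
```

A p-value above 0.05 in the first two tests corresponds to the paper's conclusion that the groups are balanced in age and sex.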

Data Preprocessing and Training Deep Learning Model
In data preprocessing, the images were first purified: blurred images or those affected by the light source were removed. Data amplification then included flipping, rotation, contrast-limited adaptive histogram equalization, Gaussian blur, green-channel extraction, and hyperspectral image conversion.

The hyperspectral imaging algorithm for ophthalmoscopic images is described in Section S2 of the Supplementary Materials and in our previous studies [4,30]. The images were converted into 401 spectral bands spanning the visible light spectrum (380 nm to 780 nm) by using hyperspectral conversion technology. By analyzing spectral differences between normal and HCQ images at different sampling positions, we aimed to identify potential biomarkers of these conditions. Based on spectral analysis, the difference between color fundus images of patients with HCQ and those of normal individuals is most significant in the range of 500-600 nm. Therefore, spectral information within this range was selected for conversion into HSIs. Subsequently, the ORIs and the obtained HSIs were tested using four artificial neural network models: ResNet50, Inception_v3, GoogLeNet, and EfficientNet. This study aimed to compare the performance of these deep learning models in diagnosing HCQ retinopathy by using two distinct types of datasets, namely, ORIs and HSIs. A comprehensive overview of the distribution of data in the train and test sets is provided in Table 2. The deep learning framework PyTorch was utilized to implement transfer learning and enhance the accuracy of HCQ retinopathy detection. The cross-entropy loss function was employed as the optimization objective, gradually decreasing after each epoch as the model weights were fine-tuned. A batch size of 16 was set, and the training procedure consisted of 50 epochs. The initial learning rate was configured as 0.001 and was reduced to 0.1 times its value every seven epochs to aid convergence and optimize training progress.

OCT System-Type B Ultrasonic Scanner (Nidek RS-3000)
The Type B ultrasonic scanner (Nidek RS-3000, NIDEK Co., Ltd., Gamagori, Japan) incorporates a choroidal mode, which offers a comprehensive assessment of the choroid and retina as well as glaucoma analysis. The advanced mode of the RS-3000 allows for ultra-low sensitivity measurements, depending on the specific pathology being evaluated. With its 9 mm × 9 mm wide-area scan, it provides excellent coverage of the entire retinal structure. The unique Eye Tracer technology utilizes fundus information obtained from high-definition images to ensure precise measurements. By combining positioning, tracking, and automatic shooting functions, the Eye Tracer technology enables convenient and rapid measurements.
During macular line scans, the "Tracking HD" function compensates for involuntary eye movements. This compensation ensures that up to 120 macular scan images are aligned, enhancing image averaging. Subsequent images are precisely aligned with the baseline data, resulting in high reproducibility. The automatic registration function compensates for any adjustments made during image acquisition, thereby improving the quality of the subsequent data.

Results
Given that there were no significant differences observed between the HCQ and normal groups concerning age (p = 0.75, 95% CI: −5.51 to 4.86, unpaired two-tailed t-test, Table 1) and sex (p = 0.10, 95% CI: −0.02 to 0.05, chi-square test, Table 1), a sample size of n = 91 was used to examine the spectral differences more comprehensively. Figure 4 presents the analysis of spectral differences in color fundus images between the HCQ group (n = 25) and the normal group (n = 66) at specific sampling locations, namely (a) F, (b) I1, (c) I2, (d) S1, and (e) S2. Remarkable variations were observed in the wavelength range of 500-600 nm within the I1, I2, S1, and S2 regions. Furthermore, a minor difference was found in the HCQ spectrum between the short and long wavelength ranges. The morphological and spectral characteristics of purified populations of melanosomes and lipofuscin granules from the human retinal pigment epithelium (RPE) change with HCQ treatment. HCQ disrupts RPE metabolism, and the resulting lysosomal damage alters the RPE, including melanin bleaching [30][31][32][33]. The evident disparity observed in the wavelength range of 500-600 nm may therefore be attributed to retinopathy induced by HCQ.
This study also utilized normal fundus images (i.e., those without lesions) and compared the spectra across age groups. The reflection intensity of the spectrum between 380 and 530 nm was higher for individuals aged 80-89 years compared with those aged 60-69 and 70-79 years. By contrast, the spectral reflection intensity decreased significantly between 530 and 780 nm for individuals aged 80-89 years, whereas the decrease was less prominent for those aged 70-79 years. Figure 5 provides a visualization of this phenomenon in the S1 region. One possible explanation is that as people age, the arteries and veins in the eye may shrink and the lutein content in the eye may decrease, reducing the reflection intensity in the long-wavelength part of the spectrum. Thus, the ophthalmoscope image spectrum may exhibit different reflection intensities across age groups.
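The group-level comparison of mean spectra can be illustrated with synthetic data. The spectra below are simulated: a Gaussian dip centered at 550 nm mimics the reported HCQ-vs-normal difference in the 500-600 nm window and is not derived from the study's measurements.

```python
import numpy as np

wavelengths = np.arange(380, 781)                  # 401 bands, 1 nm apart
rng = np.random.default_rng(1)

# Synthetic group spectra: a Gaussian dip at 550 nm stands in for the
# HCQ-induced reflectance change; noise models measurement variation.
base = 0.4 + 0.1 * np.sin(wavelengths / 60.0)
normal = base + rng.normal(0.0, 0.005, (66, 401))      # 66 normal spectra
dip = 0.05 * np.exp(-((wavelengths - 550.0) ** 2) / (2 * 30.0 ** 2))
hcq = base - dip + rng.normal(0.0, 0.005, (25, 401))   # 25 HCQ spectra

# The band where the group means differ most falls inside 500-600 nm.
diff = np.abs(normal.mean(axis=0) - hcq.mean(axis=0))
peak_nm = int(wavelengths[diff.argmax()])
print(500 <= peak_nm <= 600)  # → True
```

The same mean-difference computation applied to the real ROI spectra is what localizes the discriminative 500-600 nm window used for HSI conversion.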
To exclude potential analytical deviations due to diabetes, this study conducted an additional analysis of the spectra of diabetic color fundus images. The oxygen concentration in the retinal blood vessels of patients with diabetes was primarily analyzed as the staging basis. The diabetic fundus color image spectra were categorized by stage into normal, background diabetic retinopathy (BDR), preproliferative diabetic retinopathy (PPDR), and proliferative diabetic retinopathy (PDR). In the five regions of Figure 6, no significant spectral difference was observed with diabetes, indicating that the HCQ fundus color image spectra did not vary with or without diabetes (n = 25, p = 0.56, chi-square test).
This study compared the four deep learning models by using two types of datasets, namely, ORIs and HSIs, for HCQ diagnosis. The deep learning models are described in Section S3 of the Supplementary Materials. As shown in Table 3, the overall accuracy of the ResNet50 testing model was 93% for the ORIs and 96% for the HSIs. Similarly, the Inception_v3 model had an overall accuracy of 87% for the ORIs and 91% for the HSIs, whereas the GoogLeNet model showed an overall accuracy of 88% for the ORIs and 91% for the HSIs. The overall accuracy of the EfficientNet model reached 94% for the ORIs and 97% for the HSIs. The accuracies of ResNet50, Inception_v3, GoogLeNet, and EfficientNet thus increased by 3% (from 93% to 96%), 4% (from 87% to 91%), 3% (from 88% to 91%), and 3% (from 94% to 97%), respectively, when HSIs were used. These results demonstrate that HSIs, which provide spectral features, can improve accuracy compared with ORIs, with gains ranging from 3% to 4% depending on the neural network model. Moreover, the learning capability of different models varied with different image data, so multiple models can be utilized for comparison in deep learning applications. However, accuracy alone does not provide a complete picture of a model's performance across datasets. Figure 7a and Table 3 show the prediction results of ResNet50 on the HSI and ORI sets. The superiority of ResNet50 on the HSI set was evident, where the accuracy reached 96% compared with 93% on the ORI set. Additionally, the precision, recall, specificity, and F1-score in the HSI set were higher than those in the ORI set: 96%, 96%, 95%, and 96%, respectively, versus 95%, 95%, 92%, and 95% in the ORI set. In the case of Inception_v3 (Figure 7b), the recall obtained with the HSIs was 85%, which is significantly higher than that achieved with the ORIs (75%).
Consequently, the F1-scores, which reflect the harmonic mean of precision and recall, were also higher in the HSIs than in the ORIs, with values of 88% and 78%, respectively. Given that the network architecture of GoogLeNet is similar to that of Inception_v3, their results were not significantly different. As shown in Figure 7c, the recall obtained in the HSIs was 77%, which is 10% higher than that achieved in the ORIs (67%). Similarly, the F1-score was 7% higher in the HSIs than in the ORIs, with values of 87% and 80%, respectively. Figure 7d shows the prediction results of EfficientNet_B0. The findings demonstrate the superiority of EfficientNet_B0 in the HSI set, where the accuracy reached 97% compared with 94% in the ORI set. Furthermore, the precision, recall, specificity, and F1-score in the HSI set were higher than those in the ORI set: 99%, 92%, 99%, and 96%, respectively, in the HSI set versus 94%, 91%, 91%, and 94%, respectively, in the ORI set.
Recall is an important indicator for evaluating the performance of a model in detecting diseases. A higher recall rate indicates that fewer sick patients are identified as healthy, which can prevent the disease from worsening. The recall rates of the four models on the ORIs were as follows: ResNet50, 95%; Inception_v3, 75%; GoogLeNet, 67%; and EfficientNet, 91%. For HCQ hyperspectral image detection, the recall rates were 96% for ResNet50, 85% for Inception_v3, 77% for GoogLeNet, and 92% for EfficientNet. These results indicate that the addition of hyperspectral images significantly improves HCQ detection ability. The comparison of different models confirms that HCQ retinopathy can be effectively predicted from color fundus images by deep learning. The accuracy of the HSI results exceeded 90% in all tested models, so the approach can serve as a tool to assist doctors in identifying HCQ retinopathy. More HCQ color fundus images should be collected in the future to improve the accuracy of the results and promote their use for evaluation.
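The reported accuracy, precision, recall (sensitivity), specificity, and F1-score all follow from the confusion-matrix counts of a binary HCQ-vs-normal classifier. The counts below are hypothetical but chosen so the derived metrics roughly match the ResNet50 HSI figures (96%/96%/95%/96%).

```python
# Hypothetical confusion-matrix counts for a binary HCQ-vs-normal test set,
# chosen to roughly reproduce the ResNet50 HSI metrics reported in Table 3.
tp, fn = 48, 2   # HCQ cases: correctly flagged / missed
tn, fp = 38, 2   # normals: correctly cleared / falsely flagged

precision = tp / (tp + fp)
recall = tp / (tp + fn)              # sensitivity: fraction of HCQ cases caught
specificity = tn / (tn + fp)         # fraction of normals correctly cleared
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(round(precision, 3), round(recall, 3), round(specificity, 3),
      round(f1, 3), round(accuracy, 3))  # → 0.96 0.96 0.95 0.96 0.956
```

Because recall counts only the missed HCQ cases, it is the metric most sensitive to the clinical cost emphasized above.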

Effects of Aging on Hydroxychloroquine Retinopathy
Hydroxychloroquine retinopathy is known to result in the destruction of rods and cones while leaving the foveal cones relatively intact. This characteristic pattern often manifests as a bull's-eye appearance. As a consequence of the damage to photoreceptors, the RPE may migrate into the affected regions, leading to the detection of pigment-filled cells within the outer nuclear layer and outer plexiform layer.
Age has been identified as a significant risk factor for HCQ retinopathy, as stated in a report by the American Academy of Ophthalmology [31]. Numerous studies have documented cases of HCQ retinopathy specifically occurring in elderly individuals. One study, in particular, demonstrated that electroretinography was capable of detecting changes in elderly patients (over 65 years of age) undergoing HCQ treatment, whereas such changes were not observed in younger patients [11]. This suggests that the likelihood of HCQ retinopathy may increase with advancing age. It is plausible that HCQ induces abnormalities in the organization of the eye, leading to the destruction of rods and cones. Furthermore, it may affect structures with complex surface profiles, thereby contributing to the development of retinopathy associated with HCQ usage. Figure 2c,d illustrate OCT images of patients affected by HCQ retinopathy and of normal individuals. In all four eyes, OCT imaging revealed abnormalities in the external retina corresponding to areas identified as hydroxychloroquine-associated retinopathy through ophthalmoscopy. These abnormalities involve the complete loss of the photoreceptor inner/outer segment junction, while the RPE and outer limiting membrane exhibit relative preservation. Furthermore, the surface profile of HCQ patients appears rough, exhibiting more folds compared with normal cases. This rough surface profile contributes to a scattering phenomenon when observed using hyperspectral imaging. Figure 5 demonstrates these differences, particularly in the long-wavelength region, at the five surveyed locations. The most pronounced difference is observed at the foveal location (F), as well as the superior (S1) and inferior (I1) locations. In our study, we selected three patients from three different age groups (60s, 70s, and 80s). This suggests that age can be considered a non-independent factor contributing to HCQ retinopathy.

The Relationship of HCQ Retinopathy and Diabetes Remains Uncertain
The relationship between HCQ retinopathy and diabetes has not been extensively investigated. A study examining the progression of diabetic retinopathy (DR) from mild to moderate and severe stages has suggested that HCQ may have an impact on the development of DR [32]. However, it is important to note that these findings are based on clinical trials with a limited sample size. In Figure 6, the spectral regions do not exhibit significant differences among the four stages of diabetes. This indicates that the stages of diabetes primarily affect the vascular system, including the arteries and veins within the eye. On the other hand, the structural abnormalities associated with HCQ retinopathy are relatively small and occur within the retina, often with minimal external manifestations.

A Novel Screening Technique for HCQ Retinopathy
To gain insights into the deep learning models and visually identify regions with prominent features, the Grad-CAM method [33] is employed to assess layer weights by generating heatmap distributions of feature layers. Gradient computation is performed at the final layer of the feature module in each deep learning model. Figure 8 illustrates the feature maps obtained for the four deep learning models using HSIs of HCQ cases. The feature maps demonstrate that the heatmap scores are predominantly concentrated around the foveal location (F). This observation aligns with previous studies on HCQ retinopathy that utilized OCT spectral-domain analysis [34][35][36][37][38][39]. These studies have indicated that the concentration and presence of HCQ active ingredients significantly impact the retinal thickness in the corresponding area. This provides a solid foundation for diagnosing HCQ retinopathy through screening using ophthalmoscope images and spectral analysis alongside OCT images. Moreover, the hyperspectral conversion algorithm enhances the detection of anomalies in the spectral-domain information, enabling computer-aided diagnosis models to accurately detect HCQ retinopathy.
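The Grad-CAM computation described above, global-average-pooling the gradients of a class score with respect to the final feature maps and then forming a ReLU-weighted sum, can be sketched with a toy network; the study applies the same procedure to the final feature layer of each of its four trained models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy stand-in for the study's backbones (ResNet50, Inception_v3, ...)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 2)  # HCQ vs. normal

    def forward(self, x):
        fmap = self.features(x)                     # final feature maps
        logits = self.head(fmap.mean(dim=(2, 3)))   # global-average-pooled head
        return logits, fmap

model = TinyNet().eval()
x = torch.randn(1, 3, 64, 64)                       # dummy HSI-derived input
logits, fmap = model(x)
fmap.retain_grad()                                  # keep gradients of the non-leaf feature maps
logits[0, 1].backward()                             # gradient of the "HCQ" class score

weights = fmap.grad.mean(dim=(2, 3), keepdim=True)      # channel importance weights
cam = F.relu((weights * fmap).sum(dim=1)).squeeze(0)    # Grad-CAM heatmap
cam = cam / (cam.max() + 1e-8)                          # normalize to [0, 1]
print(tuple(cam.shape))  # → (64, 64)
```

Upsampled to the input resolution and overlaid on the fundus image, this normalized map produces the foveally concentrated heatmaps shown in Figure 8.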


Conclusions
This study analyzed the spectral differences caused by age in normal color fundus images. Different ages cause spectral changes owing to the atrophy of blood vessels in the fundus. The color fundus images of patients taking HCQ were analyzed and found to differ significantly in the 500-600 nm wavelength range, which could be due to retinopathy caused by HCQ. An additional study of diabetic color fundus images was conducted, and the results showed that diabetes did not cause changes in the spectrum.
Diseases related to the retina were previously thought to occur only due to aging. However, the age of onset is gradually decreasing, and continuous innovation can provide better care for detecting and treating patients. In this study, hyperspectral conversion technology was used to analyze the spectra of color fundus images. The technology can effectively detect and recognize differences in the image spectrum after HCQ treatment.
The development of AI in medical imaging has been ongoing for a long time, and various models have produced different prediction results for the same data. This study employed a hyperspectral conversion algorithm to analyze the spectra of color fundus images and obtained HSIs to capture additional features. Using four deep learning models, namely, ResNet50, Inception_v3, GoogLeNet, and EfficientNet, we trained and evaluated classifiers on HCQ ophthalmoscope images. HSIs effectively improved the accuracy and other indicators across the different models, indicating that hyperspectral conversion technology is beneficial for the analysis of color fundus images.
The application of retinal imaging and AI is widespread in the diagnosis of diseases such as glaucoma, DR, and other related conditions. However, each disease requires specialized treatment for specific lesions. Most research using hyperspectral imaging involves obtaining spectra by using spectrometers and analyzing the data to identify the presence or absence of lesions. In the future, hyperspectral retinal images could lead to the development of a system that can detect various eye diseases. Analyzing color fundus images for eye diseases can assist doctors in diagnosis and extend the deployment of telemedicine systems to bridge the gap between urban and rural areas, which often lack medical resources.
One limitation of our research is the underutilization of the spectral information inherent in hyperspectral images. Currently, we have only converted the hyperspectral data into images with three channels, thereby losing a significant amount of spectral detail. However, our objective is to explore advanced processing methods, specifically convolutional neural networks (CNNs), to extract and fully leverage the wealth of information present in hyperspectral data.
Our aim is to design a neural network architecture that is specifically tailored to effectively represent and exploit the rich spectral information provided by hyperspectral data. By developing such an architecture, we anticipate enhancing the effectiveness and efficiency of our analysis and obtaining more comprehensive insights from the hyperspectral datasets. This will allow us to maximize the potential of hyperspectral imaging in our research and unlock new opportunities for improved analysis and interpretation.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics13142373/s1. Figure S1: An image of the Zeiss fundus camera. Xe is the xenon flash, L1 is the concentrator, S1 is the observation system light source, L2-L7 are the photographic system optics, BS is the beam splitter, f is the filter, M1-M4 are mirrors, F is the camera negative, R is the targeting, and the dotted and solid lines are the light source input and observation paths, respectively [40]; Figure S2: The hyperspectral conversion construction flow chart uses the standard 24 color blocks (X-Rite Classic, 24 Color Checkers) as the common target for the conversion between the ophthalmoscope and the spectrometer, and converts the ophthalmoscope image into 401 bands of visible-spectrum information; Figure S3: XYZ color matching function; Figure S4: Polynomial regression graph of XYZ_Funduscopy and XYZ_Spectrum; Figure S5: Comparison of chromatic aberration between the ophthalmoscope before and after correction and the spectrometer; Figure S6: 12 principal components of R_Spectrum; Figure S7: Root mean square error of S_Spectrum and R_Spectrum; Figure S8: Chromatic difference diagram of the measured and simulated spectra of the 24 color blocks; Figure S9: Comparison of normal retinal spectra by age interval (a) F, (b) I1, (c) I2, (d) S1, and (e) S2; Figure S10: Comparison of diabetic retina spectra by area (a) F, (b) I1, (c) I2, (d) S1, and (e) S2; Figure S11: Loss and accuracy of ResNet50; Figure S12: Loss and accuracy of GoogLeNet; Figure S13: Loss and accuracy of Inception_v3; Figure S14: Loss and accuracy of EfficientNet_B0. Reference [40] is cited in the Supplementary Materials.
Informed Consent Statement: Written informed consent was waived in this study because of the retrospective anonymized nature of the study design.