Article

An Efficient and Portable LED Multispectral Imaging System and Its Application to Human Tongue Detection

1 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2 Shenzhen Key Laboratory of Precision Engineering, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(7), 3552; https://doi.org/10.3390/app12073552
Submission received: 26 January 2022 / Revised: 25 February 2022 / Accepted: 29 March 2022 / Published: 31 March 2022
(This article belongs to the Section Optics and Lasers)

Abstract

LED illumination-based multispectral imaging has a fast switching ability, high cost-effectiveness, and a simple structure. It has been used in some applications, especially color recognition. In this paper, we introduce an efficient and portable LED multispectral imaging system for human tongue detection. The spatial pixels are categorized based on cosine similarity to reduce the optimization calculation times. Further, segment linear calibration is used to improve the recovery quality. Simulation results show that this method greatly improves the reconstruction speed and that recovered images maintain a high spatial and spectral quality. This LED multispectral imaging system captures images quickly and obtains multispectral images in a timely fashion. We also built a small prototype for human tongue detection in traditional Chinese medicine. The recovered spectra were used to calculate the tongue body color and fur color. By combining these with the spatial information, the fur distribution and fur thickness were analyzed. The results of this study verified the effectiveness of this LED multispectral imaging system. Further experiments will be undertaken for the quantitative analysis of tongue features. The study was approved by the Institutional Review Board (or Ethics Committee) of Shenzhen Institute of Advanced Technology Chinese Academy of Sciences.

1. Introduction

Multispectral imaging systems obtain the spatial and spectral information of objects. They are widely used in material identification, object recognition, and other fields. Many different types of multispectral imaging techniques already exist [1]. Conventional systems use optical narrowband filters or gratings to obtain different wavelengths. Due to their system complexity and high cost, most multispectral imagers are operated by researchers in laboratories. LED illumination-based multispectral imaging is a more practical option for the common consumer. Thanks to advances in LED manufacturing technology, it offers fast switching and cost-effectiveness. Additionally, its structure is simple and has no moving parts. It has been used in cultural heritage conservation [2], color reproduction [3], blood and melanin estimation [4], underwater surveys [5], etc.
According to their illumination method, LED multispectral imaging systems can be classified into two major types. One typical type is the LED illumination in sequence system. In this type, different LEDs are illuminated in sequence and a camera captures images under every illumination. Further, either monochrome or RGB cameras can be used, and an optimal number of LEDs should be selected to reduce capturing times [6]. The other type is the multiplexed LED illumination system. Normally, several different LEDs are lit simultaneously, and an RGB camera is used to capture three-channel images in one shot [7,8]. This usually only needs two or three shots. In addition to these types, there are some other multiplexed LED illumination methods. For example, LEDs have been designed by using mutually orthogonal spectral functions as illuminations [9,10]. This multiplexed method can easily obtain the reflection spectrum from captured images without optimal reconstruction, but it needs a higher number of LEDs and shots to obtain an accurate measurement. Besides being time-multiplexed, the illumination can be modulated by carrier frequency and transformed to the frequency domain [11]. This method can obtain high-quality multispectral images; however, it requires a signal generator to modulate the illumination, and requires many image sequences to demodulate the spectral images. The LED illumination in sequence system and the ordinary time-multiplexed method have simpler structures when compared to the orthogonal function and frequency modulation methods. However, spectral reconstruction is essential to recover spectral reflectance from the several obtained band images. Generally, the dimension of spectral reflectance is higher than the detected bands. The reconstruction process is an ill-posed inverse problem. 
Recently, many reconstruction methods have been introduced [12,13], such as the Wiener [14], PCA [15], PLS [16], KPLS [17], NNMF [18,19,20], the machine learning algorithm [21,22], and compressive sensing [23]. These methods are all effective for spectral reconstruction and continue to improve. The main purpose of improving these methods is to increase the accuracy of reconstruction.
In this work, we focus on the computation time of the reconstruction and introduce an efficient LED multispectral imaging system. The main contribution of this paper is reducing the time cost of reconstruction while maintaining high accuracy. The following section introduces the principle of our efficient multispectral imaging system. It has a typical LED illumination structure with an LED ring and an RGB camera. Further, an improved recovery method is presented to quickly reconstruct the multispectral images. In Section 3, simulation results are analyzed to verify the reconstruction speed and the spectral and spatial quality. This LED multispectral imaging system captures images quickly and obtains multispectral images in a timely fashion. Such a system is very suitable for human tongue detection in traditional Chinese medicine diagnosis. The spatial and spectral information is helpful for analyzing the tongue shape, tongue color, and fur features. Further, patients do not need to keep their tongues static for a long time and can receive the detection results quickly. Therefore, we built a system prototype for human tongue detection. The experiments and analyses are presented in Section 4.

2. Principle

This paper presents an efficient and portable LED multispectral imaging system. Its efficiency is embodied in two aspects. First, only two frames are captured, and the detection process is fast. Second, the computation time of spectral reconstruction is short. The multispectral images can therefore be obtained quickly.

2.1. Hardware

Figure 1 shows the hardware structure of the LED multispectral imaging system. It has a simple structure, mainly comprising an LED ring for illumination and an RGB camera for capturing images. A diffuser is placed in front of the LEDs to produce diffuse, uniform illumination. The controller has a USB connection for PCs. To avoid the effect of specular reflection, a polarizing coating can be applied to the diffuser and imaging lens.
The choice of LEDs is important for LED multispectral imaging systems. An optimal LED combination can reduce noise and improve the reconstruction quality [24]. Generally, five to six channels are enough for spectral reflectance recovery [7,17,21]. In our system, the LED ring contains five types of LEDs, and each type has three LEDs distributed evenly around the ring. Considering the device cost and detection target, the spectral centers of the five LED types are 460, 530, 580, 630, and 650 nm, respectively. The LED spectral centers cover more bands in the yellow and red range for the subsequent tongue color detection. The RGB camera captures two frames during detection. One frame is captured under the 460, 530, and 630 nm LEDs; three images are obtained from the R, G, and B channels. The other frame is captured under the 580 and 650 nm LEDs, and two images are obtained from the R and G channels. Therefore, five images are obtained from two frames. After the images are captured, an optimization algorithm is used to reconstruct multispectral images from the five gray images. The reconstruction process, shown in Figure 2, is an inverse algorithm of Equation (1):
$$I_n = \sum_{\lambda=1}^{L} S_\lambda C_{n\lambda} P_\lambda = \sum_{\lambda=1}^{L} S_\lambda R_{n\lambda} \qquad (1)$$
where $I_n$ is the detected gray image, $n = 1, 2, \ldots, 5$; $S_\lambda$ is the spectral reflectance of the object; $\lambda = 1, 2, \ldots, L$ indexes the wavebands; $C_{n\lambda}$ is the spectral response of the detector; and $P_\lambda$ is the emitted spectrum of the LEDs. In our system, the combined spectral response of the LEDs and detector ($R_{n\lambda} = C_{n\lambda} P_\lambda$) is known information, measured and calibrated using a spectrometer (AvaSpec-ULS2048CL-EVO) and a white board. The objective of this system is to recover the spectrum $S_\lambda$ from Equation (1).
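As a concrete illustration, the forward model of Equation (1) can be sketched in a few lines of Python. The Gaussian LED/detector responses and the flat test reflectance below are illustrative assumptions, not the measured calibration data:

```python
import numpy as np

# Forward model of Equation (1): detected gray values from spectral reflectance.
# All spectra are sampled on 31 wavebands (400-700 nm, 10 nm steps); R holds the
# combined LED/detector response R_{n,lambda}. The Gaussian shapes below are
# stand-ins for the spectrometer-calibrated responses.

L = 31                                   # number of wavebands
wavelengths = np.linspace(400, 700, L)   # nm
led_centers = [460, 530, 580, 630, 650]  # nm, spectral centers of the five LEDs

# Assumed combined response R[n, lam] (Gaussian sketch, unit peak).
R = np.stack([np.exp(-0.5 * ((wavelengths - c) / 15.0) ** 2) for c in led_centers])

def detect(S, R):
    """Equation (1): I_n = sum_lambda S_lambda * R_{n,lambda}."""
    return R @ S

S = np.full(L, 0.5)       # a flat 50% reflectance test target
I = detect(S, R)          # five gray values, one per LED/channel pair
print(I.shape)            # (5,)
```

The recovery problem of the paper is the inverse of this mapping: given the five values in `I` and the known `R`, estimate the 31-band `S`.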

2.2. Multispectral Image Reconstruction Method

In this system, 31 multispectral images between 400 and 700 nm (31 wavebands at 10 nm intervals) are recovered from the 5 gray images. As the input dimension is significantly lower than the output dimension, this reconstruction problem is highly ill-posed: slight changes in the input can lead to severe alterations in the output. Generally, several steps should be taken to decrease this impact. Here, normalization is used to compensate for the intensity differences of the LEDs. Further, a positivity constraint and a smoothness constraint on the spectral reflectance are added to the objective function to reduce the biases.

2.2.1. Traditional Point-by-Point Optimization

Owing to the smoothness of the spectral reflectance of most real-world surfaces, the spectral reflectance can be compressed into several feature vectors using PCA (principal component analysis). Parkkinen et al. [25] proposed a linear model using a set of orthogonal basis functions, $b_k(\lambda)$. The spectral reflectance can be written as:
$$S_\lambda = \sum_{k=1}^{K} \sigma_k b_k(\lambda) \qquad (2)$$
where $\sigma_k$ are scalar coefficients, $k = 1, 2, \ldots, K$ indexes the coefficients, and $b_k(\lambda)$ are the orthogonal basis functions, i.e., eigenvectors derived from the spectral reflectance of Munsell color chips. The scalar coefficients can be solved for using optimization:
$$\arg\min_{\sigma} \| F\sigma - I \|_2^2 \qquad (3)$$
where $F$ combines the known quantities $b_k(\lambda)$, $C_{n\lambda}$, and $P_\lambda$, and $\sigma$ and $I$ are the matrix forms of the coefficients and detected values. Imposing the positivity and smoothness constraints, Equation (3) becomes:
$$\arg\min_{\sigma} \left[ \| F\sigma - I \|_2^2 + \alpha \| P\sigma \|^2 \right], \quad A\sigma > 0 \qquad (4)$$
where $A$ encodes the positivity constraint, $P$ corresponds to the second derivative of the spectral reflectance, and $\alpha$ is the regularization parameter. This regularized minimization can be solved using quadratic programming. After obtaining the optimal values of the coefficients, the spectral reflectance can be calculated using Equation (2).
The above calculation obtains the reflectance of one pixel; this is traditional point-by-point optimization. Solving a general quadratic program is NP-hard: it can take more than polynomial time. If the image has a large number of pixels, obtaining the spectral reflectance is very time-consuming. For instance, for 1024 × 768 pixels, it takes about 2 h to calculate the spectral reflectance [26]. Although better computers or parallel computation can reduce the computation time, an improved algorithm with lower computational cost would be better.
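The per-pixel optimization of Equation (4) can be sketched with a general-purpose solver. The random basis, responses, regularization weight, and test spectrum below are illustrative assumptions; the paper's implementation and parameters may differ:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the regularized quadratic program in Equation (4). F maps the
# basis coefficients sigma to the five detected gray values; P is a
# second-difference (smoothness) operator; A*sigma >= 0 enforces non-negative
# reflectance (the basis matrix B plays the role of A here).

rng = np.random.default_rng(0)
L_bands, K, N = 31, 8, 5
B = np.linalg.qr(rng.standard_normal((L_bands, K)))[0]   # orthonormal basis b_k
R = np.abs(rng.standard_normal((N, L_bands)))            # assumed LED/detector responses
F = R @ B                                                # coefficients -> gray values

S_true = 0.5 + 0.3 * np.sin(np.linspace(0, np.pi, L_bands))
I_det = R @ S_true                                       # simulated detection

P = np.diff(np.eye(L_bands), n=2, axis=0) @ B            # smoothness operator on sigma
alpha = 1e-3                                             # assumed regularization weight

def cost(sigma):
    return np.sum((F @ sigma - I_det) ** 2) + alpha * np.sum((P @ sigma) ** 2)

cons = {"type": "ineq", "fun": lambda sigma: B @ sigma}  # reflectance >= 0
res = minimize(cost, np.zeros(K), constraints=cons, method="SLSQP")
S_rec = B @ res.x                                        # Equation (2)
```

Running this once per pixel is exactly the cost the improved method below avoids: for a megapixel image the solver would be invoked about a million times.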

2.2.2. Improved Method to Reduce Computation Time

The time-consuming nature of the reconstruction is mainly caused by the spatial point-by-point optimization: in the above method, each pixel is processed independently. Here, we exploit spatial similarity to reduce the number of optimization calculations. Considering the change trend of the spectral values, cosine angles are used to indicate spatial similarity. Then, segment linear calibration is used to correct the recovery bias. The procedure mainly includes the following steps.
Step 1: Categorize the spatial region using cosine similarity. Cosine similarity is a metric used to measure the similarity of two vectors [27]; it is particularly concerned with orientation. Assume the known detected images are $I_1, I_2, \ldots, I_N$, where $N$ is the number of LED wavebands. Every pixel can be expressed as a vector of $N$ values. If two pixels have similar spectra, the cosine angle between their vectors will be zero or small. Select the first pixel ($x = 1$, $y = 1$) as a reference, with intensity $I_{ref}(x, y) = [I_1(x, y), I_2(x, y), \ldots, I_N(x, y)]$. Then calculate the similarity between the reference pixel and every other pixel $I_{pixel}(x', y') = [I_1(x', y'), I_2(x', y'), \ldots, I_N(x', y')]$:
$$sim = \arccos\left( \frac{I_{ref} \, I_{pixel}^{T}}{\| I_{ref} \| \, \| I_{pixel} \|} \right) \qquad (5)$$
If $sim$ is smaller than the threshold value $th$, the pixel ($x'$, $y'$) is assigned to the same category as the reference pixel ($x$, $y$). The threshold value $th$ is an important parameter that will be discussed in the Simulation section.
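Step 1 can be sketched as follows. The brute-force scan, toy image, and threshold are illustrative assumptions; a practical implementation would vectorize the inner loop:

```python
import numpy as np

# Group pixels by the cosine angle of Equation (5). The image cube I has shape
# (N, H, W), one gray image per LED band; th is the angle threshold in radians.

def cosine_angle(ref, pix):
    """sim = arccos( <ref, pix> / (||ref|| * ||pix||) ), Equation (5)."""
    cos = ref @ pix / (np.linalg.norm(ref) * np.linalg.norm(pix))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def categorize(I, th):
    """Label each pixel with a category index; -1 marks pixels not yet assigned."""
    N, H, W = I.shape
    vecs = I.reshape(N, -1).T                  # (H*W, N) pixel vectors
    labels = np.full(H * W, -1, dtype=int)
    category = 0
    for p in range(H * W):
        if labels[p] != -1:
            continue
        ref = vecs[p]                          # next unassigned pixel becomes the reference
        for q in range(p, H * W):
            if labels[q] == -1 and cosine_angle(ref, vecs[q]) < th:
                labels[q] = category
        category += 1
    return labels.reshape(H, W), category

# Two-region toy image: left half one spectrum, right half another.
I = np.zeros((5, 4, 4))
I[:, :, :2] = np.array([0.1, 0.2, 0.4, 0.9, 1.0])[:, None, None]
I[:, :, 2:] = np.array([1.0, 0.8, 0.3, 0.1, 0.1])[:, None, None]
labels, n_cat = categorize(I, th=0.1)
print(n_cat)  # 2
```

The repeated selection of a new reference among unassigned pixels corresponds to Step 4 of the procedure.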
Step 2: Optimization over the similar region. Calculate the mean intensity of the pixels in the same category. Then, solve the minimization problem of Equation (4) to obtain the optimal coefficients $\sigma_{opt}$, from which the spectral reflectance $S'$ is obtained.
$$\bar{I} = \left( I_{ref} + \cdots + I_{num} \right) / num \qquad (6)$$
$$S'_\lambda = \sum_{k=1}^{K} \sigma_{opt,k} \, b_k(\lambda) \qquad (7)$$
where $I_{ref}, \ldots, I_{num}$ are the pixels in the same category as the reference pixel, $num$ is the number of pixels in this category, and $S'$ is the recovered spectral reflectance.
Step 3: Segment linear calibration. For each pixel in the category, calculate the intensity $I'_{pixel}$ using $S'$ and Equation (1). The error between the calculated intensity $I'_{pixel}$ and the actually detected intensity $I_{pixel}$ is then known, and the spectral reflectance can be adjusted in segments using the scale value $scale(t)$. Here, $N = 5$ corresponds to the five types of LEDs, and the wavelength range is divided into five segments for calibration.
$$scale(t) = \frac{I_{pixel}(t)}{I'_{pixel}(t)}, \quad t = 1, \ldots, N \qquad (8)$$
Substituting Equation (8) into Equation (1) yields Equation (9), where $S_\lambda$ is the calibrated spectral reflectance:
$$\sum_{\lambda} S_\lambda R_\lambda(t) = \sum_{\lambda} S'_\lambda R_\lambda(t) \, scale(t) \qquad (9)$$
$$S_\lambda = S'_\lambda \, scale(t), \quad \begin{cases} \lambda = 1, \ldots, L_1, & t = 1 \\ \lambda = L_1 + 1, \ldots, L_2, & t = 2 \\ \qquad \vdots \\ \lambda = L_{N-1} + 1, \ldots, L, & t = N \end{cases} \qquad (10)$$
The segment nodes $L_1, L_2, \ldots$ are based on the spectral responses of the LEDs and detector. According to the spectral curves in Figure 2, the segment nodes can be set at 490, 570, 600, and 640 nm; they can also be adjusted according to the actual situation. After the segment calibration, a simple moving-window smoothing is applied to obtain a smooth spectral curve, especially around the segment nodes.
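The calibration of Equation (8) amounts to a per-segment rescaling, which can be sketched as follows. The toy reflectance and intensities are assumptions; the segment nodes follow the 490/570/600/640 nm values quoted above:

```python
import numpy as np

# Step 3 sketch: segment linear calibration. S_prime is the reflectance recovered
# for the whole category, I_pixel the actual detection of one pixel, and I_prime
# the intensity predicted from S_prime via Equation (1).

wavelengths = np.arange(400, 701, 10)                # 31 bands
nodes = [490, 570, 600, 640]                         # segment nodes from the text
# Band index ranges for t = 1..N (0-indexed here): [0, L1), [L1, L2), ...
edges = np.searchsorted(wavelengths, nodes + [700], side="right")
starts = np.concatenate(([0], edges[:-1]))

def calibrate(S_prime, I_pixel, I_prime):
    """S_lambda = S'_lambda * scale(t), with scale(t) = I_pixel(t) / I'_pixel(t)."""
    S = S_prime.copy()
    for t, (a, b) in enumerate(zip(starts, edges)):
        S[a:b] *= I_pixel[t] / I_prime[t]            # Equation (8) applied per segment
    return S

S_prime = np.full(31, 0.5)                           # category-level reflectance
I_prime = np.ones(5)                                 # predicted intensities (assumed)
I_pixel = np.array([1.1, 0.9, 1.0, 1.2, 0.8])        # actually detected values (assumed)
S_cal = calibrate(S_prime, I_pixel, I_prime)
```

Because this is a handful of multiplications per pixel, it is far cheaper than re-running the quadratic program, which is the source of the speed-up claimed above.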
Step 4: Process the other similar spatial regions using the same procedure as Steps 1 to 3. The reference pixel in Step 1 is changed to a pixel that does not yet belong to any processed category.
In our method, the quadratic program is run only once per similar spatial region, which greatly decreases the running time. Although each pixel requires the follow-up calibration, this is a linear computation with much lower complexity than the quadratic program. Therefore, this reconstruction method using spatial similarity greatly reduces the computation time. Further, the segment linear calibration compensates for the recovery bias and ensures that the recovered spectral images are of high quality.

3. Simulation

The CAVE multispectral image dataset [28] was used for simulation. It consists of 31 images from 400 to 700 nm at 10 nm intervals. First, the ideal detected images were calculated using Equation (1). Then, the recovered multispectral images were compared with the reference images. In this study, we compared the traditional point-by-point optimization method and our improved method with different thresholds. Figure 3 shows some reference multispectral images (Figure 3b) and recovered images at 450, 500, 550, 600, 650, and 700 nm. The reconstructed images show some errors relative to the reference images at 500 and 700 nm, but there are no obvious differences between our method (Figure 3d,e) and the point-by-point optimization method (Figure 3c).
To see the differences more clearly, we selected one pixel in the 'F' region of Figure 3a and compared the recovered spectra. In the recovery process, we first used the mean value for optimization and then adjusted the spectra by segment linear calibration. To examine the effect of the linear calibration, three conditions were compared: (1) direct point-by-point optimization, (2) optimization using the mean value, and (3) optimization using the mean value plus the linear calibration. In our method, the mean value depends on the threshold value, which affects the calculation speed and accuracy. Figure 4 displays the spectral differences from the reference under different thresholds. A smaller cosine value means more similar spectra; that is, the reconstructed spectrum is better when the cosine value is small. Traditional point-by-point optimization is unaffected by the threshold, so its values are fixed. When the threshold value was small, the spectra reconstructed using mean values were close to those of the point-by-point optimization method. The spectra were reconstructed best when using mean values with the added calibration. As the threshold increased, the cosine values increased for reconstruction that used mean values only; when the threshold was 0.8, the difference was much greater. Fortunately, by adding the linear calibration, these differences or biases were compensated for. Therefore, linear calibration is necessary for maintaining a high recovery quality. Figure 5 shows the recovered spectra for the different regions ('A'-'F') in Figure 3a. Although there were some biases at peak regions, especially at narrow peaks due to the smoothness constraint, the recovered spectra generally displayed trends similar to the reference spectra.
Besides comparing single pixels, we also compared the full images. The simulated image had 512 × 512 pixels, and a Dell notebook with an Intel Core i5-6200 CPU at 2.3 GHz was used for computation. Table 1 shows the computation time, PSNR (peak signal-to-noise ratio), SAM (spectral angle mapping), and SMAPE (symmetric mean absolute percentage error) [20]. As the threshold increased, the computation time greatly decreased; it was only 5.5% of that of the point-by-point optimization method when the threshold was 0.5. Here, MATLAB R2014a (32 bit) was used for computation; if C/C++ code were used, the computation time could be reduced further. PSNR indicates the spatial quality of the recovered images. The mean PSNR over the 31 wavebands was above 31 dB, which means that the images had a good spatial quality and were similar to the reference images. SAM and SMAPE evaluate the spectral differences between the recovered and reference spectra. SAM has the same expression as Equation (5); the mean SAM over the 512 × 512 pixels was used for comparison, and smaller values are better. The SAM metric is concerned with the spectral orientation, while the SMAPE metric measures the absolute differences. SMAPE is expressed as a percentage, and a small value denotes a good match. From the PSNR, SAM, and SMAPE values in Table 1, it can be seen that these values tended to become worse as the threshold increased. When th = 0.5, the PSNR and SAM results were still better than those of the point-by-point optimization method, while the SMAPE result was slightly worse. Therefore, it is best that the threshold value does not exceed 0.5; otherwise, the reconstructed spectral difference will continue to increase.
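Under their common textbook definitions (the paper's exact normalizations may differ), the three metrics can be sketched as:

```python
import numpy as np

# PSNR, SAM, and SMAPE as used for Table 1-style comparisons.

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB for one waveband image."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def sam(ref, rec):
    """Spectral angle (radians) between two spectra; same form as Equation (5)."""
    cos = ref @ rec / (np.linalg.norm(ref) * np.linalg.norm(rec))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def smape(ref, rec):
    """Symmetric mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs(ref - rec) / ((np.abs(ref) + np.abs(rec)) / 2))

ref = np.linspace(0.2, 0.8, 31)   # toy reference spectrum
rec = ref + 0.01                  # toy recovered spectrum with a small bias
print(round(sam(ref, rec), 4), round(smape(ref, rec), 2))
```

Note the complementary behavior visible even in this toy case: SAM stays near zero for a pure intensity offset with a similar shape, while SMAPE registers the absolute deviation.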

4. Experiment on Human Tongue Detection

This system is very suitable for human tongue detection due to its fast detection and reconstruction speed, small size, and portability. We built an LED multispectral imaging prototype to detect the human tongue. The human tongue is soft and flexible and a key region of interest in traditional Chinese medicine diagnosis. The tongue shape, tongue color, fur color, and fur thickness are important indicators of health status and physical condition. Besides the subjective analysis of doctors, several computer-aided tongue diagnosis systems have been proposed, such as RGB camera or smartphone systems [29] and scanning spectral imagers [30,31]. An RGB camera can obtain the tongue shape and color information directly, but it is easily influenced by ambient light. A scanning spectral imager has a high spectral resolution and can obtain the tongue color more accurately than an RGB camera, but it usually requires scanning to obtain the spectral images. To obtain accurate and clear images, the tongue must remain static during scanning; however, keeping still for a long time can be uncomfortable and can even change the tongue color due to muscle contraction. In comparison to the above-mentioned systems, our system is more user-friendly and obtains the spectral and spatial information in a timely fashion. This study was approved by the Institutional Review Board (or Ethics Committee) of the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences.

4.1. Multispectral Images for the Human Tongue

Figure 6 shows the system prototype. The LED ring includes five different types of LEDs. The RGB camera is a small module using a SONY IMX291 sensor with 1080p resolution at 50 frames/s. The prototype is installed in a red shell with an operating handle. Users can capture images with the button on the handle or through the software. The black part in Figure 6 is a disposable cup that plugs easily into and out of the red shell. It avoids ambient interference and also protects personal health, as the cup can be changed after each use.
Using the two captured frames and the recovery method, we obtained the multispectral images of the tongue. Through some pre-processing, we cut out the surrounding region and only preserved the interest region of the tongue. Figure 7a shows the two frames captured and Figure 7b shows some of the tongue images with different wavebands.

4.2. Quantitative Analysis of Tongue Color and Tongue Fur

Different human eyes may perceive the same color differently; quantitative analysis of tongue color is therefore necessary to avoid subjective judgment. The color of the tongue mainly includes the body color and fur color. The fur is a coating film on the tongue, and it is not easy to distinguish it exactly. Here, we applied a simple five-part method which divides the tongue into the tip, middle, rear, and two sides, as seen in Figure 8 [32]. The fur is mainly located in the middle part. The sides and the rear part may feature light shadows during imaging, so the tip and middle parts are used in our system. In the experiment, the average spectrum of the tip region was used to calculate the body color, and the average spectrum of the middle region was used for the fur color. Knowing the spectral reflectance, the color coordinates (CIE-XYZ) can be calculated using the CIE 1931 spectral tristimulus values and the spectral distribution of the D50 illuminant. These can then easily be converted to CIE-LAB color.
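The colorimetric pipeline can be sketched as below. The Gaussian color-matching functions and the flat illuminant are crude stand-ins for the CIE 1931 tables and the D50 spectral power distribution; a real implementation should load the tabulated CIE data:

```python
import numpy as np

# Sketch: spectral reflectance -> CIE XYZ -> CIE-LAB.
lam = np.arange(400, 701, 10.0)
x_bar = 1.06 * np.exp(-0.5 * ((lam - 600) / 38) ** 2) \
      + 0.36 * np.exp(-0.5 * ((lam - 445) / 20) ** 2)   # crude x-bar approximation
y_bar = np.exp(-0.5 * ((lam - 555) / 45) ** 2)           # crude y-bar approximation
z_bar = 1.78 * np.exp(-0.5 * ((lam - 450) / 25) ** 2)    # crude z-bar approximation
E = np.ones_like(lam)                                    # flat illuminant stand-in for D50

def spectrum_to_lab(S):
    k = 100.0 / np.sum(E * y_bar)                        # normalize so the white point has Y = 100
    X = k * np.sum(S * E * x_bar)
    Y = k * np.sum(S * E * y_bar)
    Z = k * np.sum(S * E * z_bar)
    Xn = k * np.sum(E * x_bar); Yn = 100.0; Zn = k * np.sum(E * z_bar)
    def f(t):                                            # standard CIELAB companding
        d = 6 / 29
        return np.cbrt(t) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29
    fX, fY, fZ = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116 * fY - 16, 500 * (fX - fY), 200 * (fY - fZ)

L_star, a_star, b_star = spectrum_to_lab(np.full(31, 1.0))
print(round(L_star, 1), round(a_star, 1), round(b_star, 1))  # white: 100.0 0.0 0.0
```

A perfect white reflector maps to L* = 100, a* = b* = 0 regardless of the assumed curves, which serves as a quick sanity check of the conversion.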
We detected two samples. After obtaining the CIE-LAB values from the spectra, we searched for the standard color corresponding to each LAB value. The spectral curves, CIE-LAB color coordinates, and standard colors are listed in Table 2. It can be seen that Sample 1 had similar spectra in the tip and middle regions; that is, the fur color was close to that of the tongue body. Conversely, in Sample 2, the body color was clearly different from the fur color: the fur color was light white, which is obviously different from the light pink body color. Further, the standard color was close to the detected image color. The CIE-LAB values can quantitatively indicate the tongue color and fur color.
Fur thickness is another important feature in traditional Chinese medicine. Traditionally, it is divided into thin fur, thick fur, and no or little fur, without a quantitative index. The thickness is related to the fur density: if the tongue body is covered prominently by fur, the fur is regarded as thick. Here, we used the spectral and spatial information to calculate the fur density, defined as the distribution ratio of fur and body in the middle part. For Sample 1, the fur and body had similar colors; the SAM (spectral angle mapping) between the fur and body was 0.006 radians, making them difficult to distinguish. The distribution ratio was regarded as 1, which indicates no or little fur. For Sample 2, we calculated the 2D SAM image for every point in the middle fur region. Figure 9b shows the SAM image computed against the tip spectrum. Areas with small values were regarded as tongue body, and areas with large values as tongue fur; this can be used to identify the accurate tongue fur distribution. To see the fur distribution clearly, the image was classified into four parts according to the SAM values. As seen in Figure 9c, the red region is smaller than 0.05 radians, the green region is between 0.05 and 0.1 radians, the blue region is between 0.1 and 0.15 radians, and the white region is larger than 0.15 radians. These interval values should be optimized with more experiments in further studies. From Figure 9c, it can be seen that the white region covered about 67.9% of the middle tongue area. This area was large and dense, which means that the tongue fur was thick. The ratio between the white and red regions was about 129, and this ratio is proportional to the fur thickness. Therefore, the fur distribution ratio can be used as one indicator of fur density or thickness. Additionally, the degree of intersection or encirclement between the tongue body and fur could be considered further.
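The 2D SAM map and the four-way bucketing at 0.05/0.10/0.15 radians can be sketched as follows; the toy image cube and body spectrum are assumptions:

```python
import numpy as np

# Per-pixel spectral angle against the tip (body) spectrum, then bucketing
# into the four regions of Figure 9c: 0=red(<0.05), 1=green, 2=blue,
# 3=white(>0.15 rad, treated as fur).

def sam_map(cube, ref):
    """Spectral angle between each pixel of cube (L, H, W) and reference (L,)."""
    flat = cube.reshape(cube.shape[0], -1)
    cos = (ref @ flat) / (np.linalg.norm(ref) * np.linalg.norm(flat, axis=0))
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[1:])

def bucket(angles):
    """Classify SAM values at the 0.05 / 0.10 / 0.15 rad thresholds."""
    return np.digitize(angles, [0.05, 0.10, 0.15])

body = np.linspace(0.3, 0.8, 31)                 # assumed tip (body) spectrum
cube = np.tile(body[:, None, None], (1, 8, 8))   # toy middle-region cube
cube[:, :, 4:] = 0.5                             # flat "fur-like" right half
classes = bucket(sam_map(cube, body))
white_ratio = np.mean(classes == 3)              # fraction labeled as thick fur
```

In this toy cube, the half that repeats the body spectrum lands in the red class and the spectrally flat half in the white class, so `white_ratio` plays the role of the fur distribution ratio described above.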
The use of multispectral images of the human tongue is effective for the objective and quantitative analysis of tongue features. In the future, more tongue samples and experiments are needed to compile tongue information and to determine better quantitative thresholds for distinguishing fur density, tongue color, and other features more exactly.

5. Conclusions

Compared to traditional scanning spectral imaging systems, the benefits of the LED-based multispectral imaging system are its cost-effectiveness, fast switching, and easy implementation. It is mostly used for color identification. In this paper, we introduced an efficient LED multispectral imaging system for human tongue detection. It captures only 2 frames to reconstruct 31 multispectral images. A simulation and an experiment verified its effectiveness.
(1)
Spatial similarity and segment linear calibration were used to improve the reconstruction process. The spatial pixels were merged based on cosine similarity, which greatly decreased the computation time; a subsequent linear calibration was used to improve the reconstruction accuracy. Our simulation results showed that the reconstruction time was only 5.5% of that of the point-by-point optimization method when the threshold value was 0.5. Further, the mean PSNR and SAM of the reconstructed images were better than those of the traditional point-by-point method.
(2)
A portable prototype was built to detect the human tongue. As the system is not affected by environmental light, it can obtain an accurate tongue color. Further, its fast detection and reconstruction speed provides a user-friendly experience. The experiment showed that multispectral images are useful for quantitatively analyzing tongue color and fur thickness.
Although the proposed method can effectively reconstruct spectral images, it still exhibits some errors when compared with the reference images. In the future, more attention should be paid to developing better reconstruction algorithms and optimal LED combinations. Tongue feature detection is a complex problem studied by many researchers, and more samples are necessary to determine exact quantitative values.

Author Contributions

C.M. contributed to the methodology, simulation, and writing of the original draft preparation; M.Y. helped to build the prototype, software, and validation; F.C. helped to design the prototype and experiment; H.L. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shenzhen Fundamental Research Program, grant number: JCYJ20170818163928953, and the National Natural Science Foundation of China, grant number: U1713210.

Institutional Review Board Statement

The study was approved by the Institutional Review Board (or Ethics Committee) of Shenzhen Institute of Advanced Technology Chinese Academy of Sciences (protocol code SIAT-IRB-220215-H0588, 29 March 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Garini, Y.; Young, I.T.; McNamara, G. Spectral imaging: Principles and applications. Cytometry 2006, 69, 735–747.
  2. Marengo, E.; Manfredi, M.; Zerbinati, O.O.; Robotti, E. Technique based on LED multispectral imaging and multivariate analysis for monitoring the conservation state of the Dead Sea scrolls. Anal. Chem. 2011, 83, 6609–6618.
  3. Yamamoto, S.; Tsumura, N. Development of a multispectral scanner using LED array for digital color proof. J. Imaging Sci. Technol. 2007, 51, 61–69.
  4. Setiadi, I.C.; Nasution, A.M.T.; Chandra, T.G. A new LED-based multispectral imaging system for blood and melanin content estimation: The validation. AIP Conf. Proc. 2019, 2193, 050017.
  5. Liu, H.; Sticklus, J.; Köser, K.; Hoving, H.T.; Song, H.; Chen, Y.; Greinert, J.; Schoening, T. TuLUMIS—A tunable LED-based underwater multispectral imaging system. Opt. Express 2018, 26, 7811–7828.
  6. Shrestha, R.; Hardeberg, J.Y.; Boust, C. LED based multispectral film scanner for accurate color imaging. In Proceedings of the 8th International Conference on Signal Image Technology and Internet Based Systems, Sorrento, Italy, 25–29 November 2012; pp. 811–817.
  7. Park, J.; Lee, M.H.; Grossberg, M.D.; Nayar, S.K. Multispectral imaging using multiplexed illumination. In Proceedings of the IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007.
  8. Shrestha, R.; Hardeberg, J.Y. Multispectral imaging using LED illumination and an RGB camera. In Proceedings of the 21st Color and Imaging Conference on Color Science and Engineering Systems, Technologies and Applications, Albuquerque, New Mexico, USA, 4–8 November 2013; pp. 8–13.
  9. Kamshilin, A.A.; Nippolainen, E. Chromatic discrimination by use of computer controlled set of light-emitting diodes. Opt. Express 2007, 15, 15093–15100.
  10. Fauch, L.; Nippolainen, E.; Teplov, V.; Kamshilin, A.A. Recovery of reflection spectra in a multispectral imaging system with light emitting diodes. Opt. Express 2010, 18, 23394–23405.
  11. Li, H.; Li, G.; Ye, Y.; Lin, L. A high-efficiency acquisition method of LED multispectral images based on frequency-division modulation and RGB camera. Opt. Commun. 2021, 480, 126492.
  12. Haneishi, H.; Hasegawa, T.; Hosoi, A.; Yokoyama, Y.; Tsumura, N.; Miyake, Y. System design for accurately estimating the spectral reflectance of art paintings. Appl. Opt. 2000, 39, 6621–6632.
  13. Shimano, N.; Terai, K.; Hironaga, M. Recovery of spectral reflectances of objects being imaged by multispectral cameras. J. Opt. Soc. Am. A 2007, 24, 3211–3219.
  14. Shimano, N. Recovery of spectral reflectances of objects being imaged without prior knowledge. IEEE Trans. Image Process. 2006, 15, 1848–1856.
  15. Agahian, F.; Amirshahi, S.A.; Amirshahi, S.H. Reconstruction of reflectance spectra using weighted principal component analysis. Color Res. Appl. 2008, 33, 360–371.
  16. Shen, H.L.; Wan, H.J.; Zhang, Z.C. Estimating reflectance from multispectral camera responses based on partial least-squares regression. J. Electron. Imaging 2010, 19, 020501.
  17. Xiao, G.S.; Wan, X.X.; Wang, L.X.; Liu, S.W. Reflectance spectra reconstruction from trichromatic camera based on kernel partial least square method. Opt. Express 2019, 27, 34921–34936.
  18. Lopez, M.; Hernandez, J.; Valero, E.; Romero, J. Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight. J. Opt. Soc. Am. A 2007, 24, 942–956.
  19. Arias, L.; Sbarbaro, D.; Torres, S. Removing baseline flame's spectrum by using advanced recovering spectrum techniques. Appl. Opt. 2012, 51, 6111–6116.
  20. Toro, C.; Arias, L.; Torres, S.; Sbarbaro, D. Flame spectra-temperature estimation based on a color imaging camera and a spectral reconstruction technique. Appl. Opt. 2014, 53, 6351–6361.
  21. Tschannerl, J.; Ren, J.C.; Zhao, H.M.; Kao, F.J.; Marshall, S.; Yuen, P. Hyperspectral image reconstruction using multi-color and time-multiplexed LED illumination. Opt. Lasers Eng. 2019, 121, 352–357.
  21. Tschannerl, J.; Ren, J.C.; Zhao, H.M.; Kao, F.J.; Marshall, S.; Yuen, P. Hyperspectral image reconstruction using multi-color and time-multiplexed LED illumination. Opt. Lasers Eng. 2019, 121, 352–357. [Google Scholar] [CrossRef]
  22. Fu, Y.; Zheng, Y.R.; Zhang, L.; Huang, H. Spectral reflectance recovery from a single RGB image. IEEE Trans. Comput. Imaging 2018, 4, 382–394. [Google Scholar] [CrossRef]
  23. Wu, G.Y.; Xiong, Y.F.; Li, X.Z. Spectral sparse recovery from a single RGB image. Laser Phys. Lett. 2021, 18, 095201. [Google Scholar] [CrossRef]
  24. Paray, J.N. LED Selection for Spectral (Multispectral) Imaging. Master’s Thesis, Rochester Institute of Technology, Rochester, NY, USA, May 2020. [Google Scholar]
  25. Parkkinen, J.P.S.; Hallikainen, J.; Jaaskelainen, T. Characteristic spectra of Munsell colors. J. Opt. Soc. Am. A. 1989, 6, 318–322. [Google Scholar] [CrossRef]
  26. Han, S.; Sato, I.; Okabe, T.; Sato, Y. Fast spectral reflectance recovery using DLP projector. Int. J. Comput. Vis. 2014, 110, 172–184. [Google Scholar] [CrossRef]
  27. Lahitani, A.R.; Permanasari, A.E.; Setiawan, N.A. Cosine similarity to determine similarity measure: Study case in online essay assessment. In Proceedings of the 2016 4th International Conference on Cyber and IT Service Management, Bandung, Indonesia, 26–27 April 2016; pp. 1–6. [Google Scholar] [CrossRef]
  28. Database [DB/OL]. Available online: http://www.cs.columbia.edu/CAVE/databases/multispectral/ (accessed on 20 January 2022).
  29. Hu, M.C.; Lan, K.C.; Fang, W.C.; Huang, Y.C. Automated tongue diagnosis on the smartphone and its applications. Comput. Methods Programs Biomed. 2019, 174, 51–64. [Google Scholar] [CrossRef]
  30. Liu, Z.; Wang, H.J.; Li, Q.L. Tongue tumor detection in medical hyperspectral images. Sensors 2012, 12, 162–174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Zhang, D.; Zhang, J.H.; Wang, Z.; Sun, M.J. Tongue colour and coating prediction in traditional Chinese medicine based on visible hyperspectral imaging. IET Image Process 2019, 13, 2265–2270. [Google Scholar] [CrossRef]
  32. Xu, J.T. Clinical Illustration of Tongue Diagnosis of Traditional Chinese Medicine; Chemical Industry Press: Beijing, China, 2017; p. 3. [Google Scholar]
Figure 1. Hardware structure: (1) diffuser, (2) LED ring, (3) RGB camera, (4) control module.
Figure 2. Reconstruction process from detected images to multispectral images.
Figure 3. (a) Initial image for simulation. (b–e) Spectral images at 450, 500, 550, 600, and 650 nm: (b) reference spectral images; (c) images recovered by point-by-point optimization; (d) images recovered by our method with th = 0.1; (e) images recovered by our method with th = 0.5.
Figure 4. Spectral similarity between reference spectra and recovered spectra using different thresholds during reconstruction.
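The threshold th in Figure 4 and Table 1 controls how aggressively spatial pixels are merged before the spectral optimization runs. The idea can be sketched as a greedy cosine-similarity grouping; this is an illustration under our own naming, not the authors' exact implementation:

```python
import numpy as np

def group_pixels(responses, th):
    """Greedily group pixel response vectors whose cosine distance to a
    group representative is below th, so the spectral optimization runs
    once per group instead of once per pixel."""
    reps = []  # normalized representative vector of each group
    labels = np.empty(len(responses), dtype=int)
    for i, v in enumerate(responses):
        v = v / np.linalg.norm(v)
        for g, r in enumerate(reps):
            # cosine distance = 1 - cosine similarity
            if 1.0 - float(v @ r) < th:
                labels[i] = g
                break
        else:
            # no group is close enough: open a new one
            reps.append(v)
            labels[i] = len(reps) - 1
    return labels, len(reps)
```

A smaller th produces more groups, so the result approaches direct point-by-point optimization at the cost of more optimization calls, which matches the trend in Table 1.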
Figure 5. Spectra comparison for different spatial regions and colors (reference spectra, direct point-by-point optimization, and our method with th = 0.8). Curves (a–f) correspond to spatial regions A–F in Figure 3a.
Figure 6. System prototype and the experiment for human tongue detection.
Figure 7. Detected images and selected multispectral images. (a) The two detected frame images. (b) Reconstructed multispectral images of the tongue region.
Figure 8. Five divisions of the human tongue.
Figure 9. SAM images of the fur region (the middle area of the tongue, within the red lines). (a) Tongue image of Sample 2. (b) SAM image of the middle region compared with the tongue body spectra. (c) SAM image divided into four parts by radian value; the white region has larger SAM values and can be regarded as tongue fur.
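The fur segmentation in Figure 9b,c amounts to a per-pixel spectral angle map against a reference tongue-body spectrum, quantized into a few levels. A minimal sketch (the threshold values are illustrative, not the paper's):

```python
import numpy as np

def sam_map(cube, ref):
    """Spectral angle (radians) of every pixel spectrum in an H x W x B
    cube against a reference tongue-body spectrum of length B."""
    flat = cube.reshape(-1, cube.shape[-1])
    cos = flat @ ref / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

def fur_mask(angles, bins=(0.05, 0.10, 0.15)):
    """Quantize the SAM image into len(bins)+1 levels; the highest level
    (largest angles, i.e., most unlike the tongue body) is taken as fur."""
    return np.digitize(angles, bins)
```

The fraction and spatial distribution of highest-level pixels then give the fur distribution and an indication of fur thickness.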
Table 1. Performance comparison for different thresholds.

| Method | Computation Time | PSNR (mean over all wavebands) | SAM (mean over all spatial points) | SMAPE (mean over all spatial points) |
|---|---|---|---|---|
| Point-by-point optimization | 1162.2 s | 30.38 dB | 0.1545 | 15.84% |
| Our method (th = 0.1) | 342.0 s (29.4%) | 31.56 dB | 0.1407 | 14.95% |
| Our method (th = 0.3) | 74.5 s (6.4%) | 31.35 dB | 0.1481 | 15.28% |
| Our method (th = 0.5) | 64.0 s (5.5%) | 31.09 dB | 0.1500 | 15.99% |
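The three quality metrics in Table 1 follow standard definitions; a sketch with illustrative helper names is below (PSNR assumes images scaled to a known peak, and one common SMAPE convention is assumed, as the paper does not state its exact formula):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """PSNR (dB) of one spectral band, with images scaled to [0, peak]."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, rec):
    """Spectral angle (radians) between two spectra."""
    c = np.dot(ref, rec) / (np.linalg.norm(ref) * np.linalg.norm(rec))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def smape(ref, rec):
    """Symmetric mean absolute percentage error between two spectra."""
    return float(np.mean(2.0 * np.abs(rec - ref) / (np.abs(ref) + np.abs(rec))))
```

Averaging psnr over all wavebands and sam/smape over all spatial points yields the columns of Table 1.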
Table 2. Tongue body color and fur color of two samples. Color values were computed under the CIE 1964 10° observer and D50 illumination; the average-spectra plots and standard CIE-LAB color patches are omitted here.

| Sample | Region | CIE-LAB |
|---|---|---|
| Sample 1 | Tip part (tongue body) | (77.09, 28.33, −8.17) |
| Sample 1 | Middle part (tongue fur) | (76.13, 28.13, −8.55) |
| Sample 2 | Tip part (tongue body) | (77.98, 18.40, 6.29) |
| Sample 2 | Middle part (tongue fur) | (91.02, 4.77, 2.27) |
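The CIE-LAB values in Table 2 come from the recovered spectra via tristimulus integration followed by the standard XYZ→LAB transform. The latter step, relative to an approximate D50 white point for the CIE 1964 10° observer, can be sketched as:

```python
# Approximate D50 white point (Xn, Yn, Zn) for the CIE 1964 10-degree observer
D50_10 = (96.720, 100.0, 81.427)

def xyz_to_lab(X, Y, Z, white=D50_10):
    """Convert CIE XYZ tristimulus values to CIE-LAB coordinates."""
    def f(t):
        # piecewise cube-root function of the CIELAB definition
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3.0 * d * d) + 4.0 / 29.0
    fx, fy, fz = (f(v / n) for v, n in zip((X, Y, Z), white))
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b
```

By construction, the white point itself maps to (100, 0, 0); the positive a* values in Table 2 reflect the reddish tongue body, while Sample 2's fur is much lighter and nearly neutral.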
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ma, C.; Yu, M.; Chen, F.; Lin, H. An Efficient and Portable LED Multispectral Imaging System and Its Application to Human Tongue Detection. Appl. Sci. 2022, 12, 3552. https://doi.org/10.3390/app12073552