Super-Resolution Reconstruction of Cell Pseudo-Color Image Based on Raman Technology

Raman spectroscopy visualization is a challenging task due to the interference of complex background noise and the limited number of selected measurement points. In this paper, a super-resolution image reconstruction algorithm for Raman spectroscopy is studied to convert raw Raman data into pseudo-color super-resolution images. Firstly, the Raman spectrum of each measurement point is measured multiple times and averaged to remove random background noise, and the Retinex algorithm and the median filtering algorithm are innovatively introduced to improve the signal-to-noise ratio. A deep neural network then performs super-resolution reconstruction on the gray image. An adaptive guided filter that automatically adjusts the filter radius and penalty factor is proposed to highlight the contour of the cell, realizing super-resolution reconstruction of the pseudo-color Raman image. The average signal-to-noise ratio of the reconstructed pseudo-color image sub-band reaches 14.29 dB, and the average information entropy reaches 4.30. The results show that the Raman-based cell pseudo-color image super-resolution reconstruction algorithm is an effective tool for noise removal and high-resolution visualization. Contrast experiments show that the pseudo-color image obtained by the method has a small Kullback–Leibler (KL) entropy, obvious boundaries, and little noise, which provides technical support for the development of sophisticated single-cell Raman spectroscopy imaging instruments.


Introduction
The Raman spectrum is a scattering spectrum obtained through the Raman scattering effect. Based on strong molecular specificity [1], Raman spectroscopy has the advantages of being non-invasive, highly specific, and highly sensitive [2,3]. It has a wide range of applications in geology, medicine, archaeology, and chemistry [4–11]. For example, Li et al. [12] studied Sudan Red I in duck feed by analyzing the R, G, and B color channels of Raman spectral pseudo-color images together with their binarization. Chao et al. [13] developed a Raman spectral imaging system for food safety and quality assessment that was capable of hyperspectral Raman imaging; however, the two-dimensional Raman images contained some noise. Qin et al. [14] developed a line-scan Raman spectroscopic imaging platform that could evaluate food safety and internal quality. The platform increased the image resolution by increasing the number of scanning points, but when the number of scanning points was small and the object size was close to the scanning limit of 0.07 mm, the imaging was blurred. Anna et al. [15] studied Raman imaging of brain tumors by further processing the obtained wavelength information and combining it with pseudo-color information.

Materials and Methods
In this experiment, cells immobilized with alcohol on glass sections (including Escherichia coli (DH5α strain), yeast cells, and human colon cancer cells (the HCT116 cell line)) were obtained from Hooke Instruments, Changchun, China and used as experimental samples. The three kinds of cells are representative: Escherichia coli cells are small, yeast cells tend to aggregate, and human colon cancer cells have a representatively large volume. The test samples were placed under a microscope at ambient room temperature for observation. In this paper, the WITec Alpha 300R instrument is used for collecting Raman spectra. The instrument consists of a digital controller, a laser spectrometer, and a charge-coupled device (CCD) camera. The measured Raman spectral light intensity is saved using the Control FIVE software provided by WITec Instruments. Digital optical microscopy imaging and Raman spectroscopy pseudo-color imaging, shown in Figure 2, were performed by the Control FIVE software. The device is used with the CCD temperature lowered to −60 °C for maximum sensitivity. All of the images cover a field of view equal to or less than 20 × 20 µm. The collected samples were scanned using a 534 nm laser. All of the samples were acquired at 3–5 cm⁻¹ resolution across the spectral range of 155–3926 cm⁻¹. The integration time ranged from 2 s to 10 s and the laser power from 1.5 mW to 11 mW, depending on the sample. Single-point spectral scanning was used with the number of test points set to ≤400, and two scans were acquired for each spectrum.
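The two scans acquired per spectrum can be averaged to suppress random noise; a toy sketch with a synthetic spectrum (the spectrum shape and noise level here are assumptions, not measured values):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "true" spectrum for one measurement point (hypothetical stand-in).
true_spectrum = np.sin(np.linspace(0, 6, 512)) ** 2
# Two scans per point, as in the acquisition protocol above, with Gaussian noise.
scans = true_spectrum + rng.normal(0, 0.2, (2, 512))
mean_spectrum = scans.mean(axis=0)  # averaging suppresses random noise

err_single = np.abs(scans[0] - true_spectrum).mean()
err_mean = np.abs(mean_spectrum - true_spectrum).mean()
print(err_mean < err_single)
```

Averaging N independent scans reduces the random-noise standard deviation by a factor of √N, which is why even two scans per point already help.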


Retinex Image Enhancement Technology
Extracting useful information in a short period of time is a very important task because the data measured by the WITec instrument are superimposed with machine noise, fluorescence noise, and phosphorescence noise. Averaging repeated measurements can effectively suppress machine noise. For fluorescence and phosphorescence noise, it is difficult to form a unified removal method owing to the instability of the ambient temperature and of the substrate enhancement properties. The specific spectral denoising process is another area of Raman spectroscopy research and is not described here. The aim of this paper is to achieve clear Raman spectroscopic imaging under the premise of minimal pretreatment, which provides a guarantee for applying Raman spectroscopy in other fields. This paper therefore does not denoise the Raman spectral data from the perspective of data processing, but innovatively denoises them in the image processing domain. As also tested in Figure 3, when the symmetric N (N = 4, 8, 12) points near the peak are averaged, the image contrast is found to be high, but the noise is also increased, and a "void" phenomenon is observed inside the cell. Filtering this salt-and-pepper noise under strong contrast increases the difficulty of the subsequent work.
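Salt-and-pepper (impulse) noise of this kind is what the median filtering step mentioned in the abstract targets; a minimal sketch on synthetic data (the image values and 5% noise density are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(6)
img = np.full((32, 32), 100.0)          # flat synthetic gray image
# Inject salt-and-pepper noise at roughly 5% of the pixels.
mask = rng.random((32, 32)) < 0.05
img[mask] = rng.choice([0.0, 255.0], size=mask.sum())

clean = median_filter(img, size=3)      # 3x3 median suppresses impulse noise
print(np.abs(clean - 100).mean() < np.abs(img - 100).mean())
```

A 3×3 median is only corrupted when a majority of the window is noisy, so isolated salt-and-pepper pixels are removed while step edges are preserved, unlike with a mean filter.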
Therefore, the initial image is obtained by the method of multi-measurement peak averaging, and it is necessary first to improve the contrast of the image, using the Retinex theory for contrast processing [26].
The effect of the Retinex algorithm is shown below. In the original picture, the overall brightness of the image is low and the details in the dark areas cannot be seen clearly. Both histogram equalization and the Retinex algorithm enhance the contrast of the image and bring out the dark-area details, but as Figure 4b,c shows, the histogram-equalized image has halos and artifacts, and its color distortion is more serious; the Retinex algorithm performs better in these respects. Therefore, the Retinex algorithm is used to perform contrast stretching on the obtained Raman gray image. Figure 4 shows the superiority of the Retinex algorithm for image enhancement.
Retinex theory holds that the color of an object is related only to the object's ability to reflect long, medium, and short waves, and is independent of the intensity of the incident light, the non-uniformity of the illumination, and the absolute intensity of the reflected light. Therefore, in the Retinex image enhancement algorithm, the image to be enhanced is decomposed into two parts, an incident component and a reflection component; the difference in brightness between pixels (i.e., the difference in gray value) is compared to obtain the incident component, the reflection component is then obtained by stretching or similar operations, and the effect of image enhancement is finally achieved.
When calculating the relative shading relationship, assume that the size of the image is m × n and that the light and dark values of each pixel are the same at the beginning, as in Equation (1). Here, the gray value at position (i, j) of the image takes a log function, and Constant is a constant matrix, i.e., Constant = log10(P). First, the relative light and dark relationship between any two pixels at horizontal distance h = m/2 is calculated (Equation (2)) and compared with the Constant value (Equation (3)); then the relative brightness relationship between any two pixels in the vertical direction at distance ϑ = n/2 is calculated (Equation (4)), which is also compared with Constant in the same way as in Equation (3). After the relative shading value of each pixel has been calculated, the distance in the horizontal direction is reduced to h = m/4 and the distance in the vertical direction to ϑ = n/4, and the calculation is iterated until the distance in both directions is 1. Finally, uniform stretching is performed according to the gray maximum and minimum values in the processed image [27].
The image is processed and the result is shown in Figure 5.
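The pairwise relative-brightness scheme above is one Retinex variant; as an illustrative stand-in, a standard single-scale Retinex (log image minus a Gaussian-blurred illumination estimate, then uniform stretching) can be sketched as follows. The Gaussian sigma and the synthetic image are assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=15.0):
    """Estimate illumination with a Gaussian blur, remove it in log space,
    then uniformly stretch the reflectance to the full gray range."""
    img = img.astype(float) + 1.0                  # avoid log(0)
    illumination = gaussian_filter(img, sigma)     # smooth incident component
    r = np.log10(img) - np.log10(illumination)     # reflectance component
    r = (r - r.min()) / (r.max() - r.min())        # uniform stretching
    return (255 * r).round().astype(np.uint8)

# Dim synthetic gray image standing in for a low-contrast Raman gray image.
gray = np.random.default_rng(5).integers(0, 60, (64, 64)).astype(np.uint8)
out = single_scale_retinex(gray)
```

The final stretch step corresponds to the paper's uniform stretching by the gray maximum and minimum values.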

Image Super-Resolution Reconstruction
Traditional super-resolution methods include bilinear interpolation, bicubic interpolation, sparse-coding-based methods, and anchored neighborhood regression, while deep learning methods outperform these algorithms [28].
It can be seen from [28] that the Super-Resolution Convolutional Neural Network (hereinafter SRCNN) achieves a better super-resolution effect. The test uses the Set14 image data set, and SRCNN is superior in most of the evaluation indicators; therefore, this paper uses the SRCNN network for super-resolution processing. SRCNN is a three-layer deep neural network, mapping the input to the high-resolution output through patch extraction and representation, non-linear mapping, and reconstruction, as shown in Figure 6. Formally, the first layer is represented as operation F1, where W1 is a convolution kernel with size 64 × 9 × 9 and the size of B1 is 64 × 1. The second layer is represented as operation F2, where W2 contains 32 convolution kernels with size 64 × 5 × 5 and the size of B2 is 32 × 1. The third layer is represented as operation F3, where W3 is a convolution kernel with size 32 × 5 × 5 and the size of B3 is 1 × 1. In the SRCNN network, each neuron uses the Rectified Linear Unit (ReLU) as the activation function, and the symmetric extension method is used to perform the convolution at the image boundary. The network is pre-trained on the ImageNet dataset, and the trained network is applied directly to the low-resolution images in this paper, as shown in Figure 7. Through the two super-resolution deep neural network operations, it is found that the network enhances the contour edges of the image, but also enhances some noise-point information.
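A minimal NumPy sketch of the three-layer SRCNN forward pass with the kernel sizes given above; the weights here are random stand-ins for the pre-trained ones, and reflect padding plays the role of the symmetric extension at the image boundary:

```python
import numpy as np

def conv2d_reflect(x, W, B):
    """'Same' convolution with reflect (symmetric-extension) padding.
    x: (C_in, H, W); W: (C_out, C_in, k, k); B: (C_out,)."""
    c_out, c_in, k, _ = W.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="reflect")
    H, Wd = x.shape[1], x.shape[2]
    out = np.empty((c_out, H, Wd))
    for o in range(c_out):
        acc = np.zeros((H, Wd))
        for c in range(c_in):
            for i in range(k):
                for j in range(k):
                    acc += W[o, c, i, j] * xp[c, i:i + H, j:j + Wd]
        out[o] = acc + B[o]
    return out

rng = np.random.default_rng(0)
# Random weights standing in for the pre-trained SRCNN parameters.
W1, B1 = rng.normal(0, 0.01, (64, 1, 9, 9)), np.zeros(64)   # patch extraction
W2, B2 = rng.normal(0, 0.01, (32, 64, 5, 5)), np.zeros(32)  # non-linear mapping
W3, B3 = rng.normal(0, 0.01, (1, 32, 5, 5)), np.zeros(1)    # reconstruction

y = rng.random((1, 24, 24))                       # upscaled low-res gray patch
f1 = np.maximum(conv2d_reflect(y, W1, B1), 0)     # F1 with ReLU
f2 = np.maximum(conv2d_reflect(f1, W2, B2), 0)    # F2 with ReLU
f3 = conv2d_reflect(f2, W3, B3)                   # F3, no activation
```

The output keeps the spatial size of the input because every layer uses "same" convolution, matching the symmetric-extension boundary handling described above.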


Traditional Guided Filter
Since the images processed by the SRCNN network are inconsistent in terms of details, the image is further processed by filtering. There are three main traditional image smoothing methods: weighted least squares (WLS) filtering [29], bilateral filtering [30], and guided filtering [31]. WLS filtering needs to compute the inverse of high-dimensional matrices, which is difficult to implement in practical engineering, and its calculation speed is slow. Bilateral filtering also suffers from a long running time and has edge-inversion characteristics [32]. Guided filtering considers the intrinsic relationship between the pixels of the image, its ridge-regression model smooths the image well, and the algorithm runs faster. Therefore, this paper uses guided filtering to smooth the image. Referring to [31], the guided filtering parameters are set to radius r = 2 and eps = 0.1², and the bilateral filtering parameters are set to radius r = 2, sigma_s = 2, and sigma_r = 0.1. When filtering the image, the R, G, and B color channels are filtered separately. The WLS filter loses much of the image detail, such as the texture and edge information. Bilateral filtering preserves the texture information better, but a white-edge phenomenon can be seen at the edges. Guided filtering preserves both the texture and the edge information, with no white-edge phenomenon. Figure 8 shows the effects of guided filtering, bilateral filtering, and WLS filtering; therefore, the super-resolution image is processed by the guided filter.

The guided filtering algorithm assumes that there is a local linear relationship between the guide image I and the output image q. Assuming p is the input image, q is a local linear transformation of I in the sliding window ω_k centered on pixel k:
q_i = a_k I_i + b_k, ∀i ∈ ω_k. (8)
The coefficients a_k and b_k are solved by minimizing the cost function
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²), (9)
where ε is a penalty-term coefficient for a_k. According to the linear ridge regression model,
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε), (10)
b_k = p̄_k − a_k μ_k, (11)
where μ_k and σ_k² are the mean and variance of I in the sliding window ω_k, |ω| is the number of pixels contained in ω_k, and p̄_k = (1/|ω|) Σ_{i∈ω_k} p_i is the mean of the input image p in ω_k. Since the values obtained from Equation (8) differ across the overlapping windows covering position i, the q_i values are averaged over all sliding windows ω_k that contain i:
q_i = (1/|ω|) Σ_{k|i∈ω_k} (a_k I_i + b_k). (12)
According to the symmetry of the box filter, Σ_{k|i∈ω_k} a_k = Σ_{k∈ω_i} a_k, so Equation (12) is finally rewritten as
q_i = ā_i I_i + b̄_i, (13)
where ā_i = (1/|ω|) Σ_{k∈ω_i} a_k and b̄_i = (1/|ω|) Σ_{k∈ω_i} b_k. The guided filter is fast, edge-preserving, and free of edge inversion, and is essentially an implicit filter. In this paper, the guide image I is the same as the input image p, so a_k and b_k simplify to a_k = σ_k²/(σ_k² + ε) and b_k = (1 − a_k) μ_k.
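Equations (8)–(13) can be sketched with box (mean) filters; this is a minimal self-contained version using `scipy.ndimage.uniform_filter` as the box filter (the border mode and the test image are assumptions of this sketch):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=2, eps=0.1**2):
    """Guided filter per Equations (8)-(13); with I == p (self-guided),
    a_k reduces to sigma_k^2 / (sigma_k^2 + eps)."""
    size = 2 * r + 1                              # window side length
    mean_I = uniform_filter(I, size)              # mu_k
    mean_p = uniform_filter(p, size)              # p-bar_k
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I ** 2                 # sigma_k^2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                    # Eq. (10)
    b = mean_p - a * mean_I                       # Eq. (11)
    mean_a = uniform_filter(a, size)              # average overlapping windows
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b                    # Eq. (13)

img = np.random.default_rng(1).random((32, 32))
q = guided_filter(img, img)   # guide equals input, as in this paper
```

On a constant image the variance term vanishes, so a → 0 and b → the mean, and the input passes through unchanged, which is a quick sanity check of the implementation.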

Adaptive Guided Filter
This paper innovatively proposes an adaptive guided filtering method. When using traditional guided filtering, the filter radius and regularization coefficient must be set manually, which makes it difficult to process different cell images. Therefore, this paper proposes a method for setting the filter radius and regularization coefficient adaptively. Using prior images together with their corresponding filter radius and regularization coefficient, a mapping that also incorporates the total number of Raman spectrum sample points and the sub-band spectrum variance is fitted as a linear relationship. The final linear expression is obtained by the least-squares method to achieve an optimal guided filtering effect, giving a balanced result between boundary preservation and image smoothing.
After analyzing the variance of the sub-band spectrum, it is observed that when the variance threshold is set to 1000, the binarized difference map fits the cell boundary better, as shown in Figure 9c. The reason is that the Raman spectrum intensity changes more dramatically where a cell is detected. Therefore, a least-squares fit of r and eps is performed on a large amount of experimental data. The equation for the filter radius is |ω| = a₁N₁ + b₁N₂ + c₁σ_k² + d₁, and the equation for eps is eps = a₂N₁ + b₂N₂ + c₂σ_k² + d₂. Here, N₁ represents the total number of scanned points, N₂ represents the number of scanning points in the image whose variance exceeds 1000, i.e., N₂ = |{i | i ∈ N and σ_i² > 1000}|, and σ_k² represents the intensity variance of a single scan point over the sub-band. The overall fit function is shown below.

Writing the fit as a loss minimization, the problem is transformed into the matrix form Ab = Y, where each row of A collects the features (N₁, N₂, σ_k², 1) of one calibration sample, b collects the coefficients to be fitted, and Y collects the corresponding hand-tuned parameter values; b is then solved by least squares. In the intuitive sense, when the number of sampling points is small, the sampled data contain large noise, so the image quality is poor and the image is blurred, and the filter radius should be reduced. Likewise, when the variance of the corresponding band at a certain point is large, the image varies greatly in that region and r should be reduced, which is consistent with the obtained linear system; this in turn verifies the reliability of the fitted linear function.
By using the adaptive guided filter, Figure 10b is obtained. The contours of the cells in the image are highlighted, and some of the noise regions are filtered out, which benefits the pseudo-color processing of the image in the next step.

Raman Spectral Pseudo-Color Imaging System
Raman data has strong noise and a large data volume. Therefore, when performing Raman spectrum pseudo-color imaging, extracting useful information in a short time and converting it into a high-resolution image is a difficulty. The measured Raman spectrum usually contains noise such as fluorescent background noise, Gaussian noise, and shot noise, and the characteristic peak information of the substance to be tested is aliased with the noise. Figure 11 shows the Raman spectrum information of the measured single observation points.

Raman spectroscopy yields intensity data derived from the energy released by the energy-level transitions of electrons. Therefore, the stronger the amplitude of the obtained signal, the greater the possible variety of elements.
According to [2], some of the smaller spectral peaks are likely to be mixed with noise, and selecting the highest peak information better preserves the intensity information. Besides, according to [33], Raman spectral data measured at the same time show similarity, so the peak ratio better reflects the difference between different measurement points. Since the purpose of this paper is to obtain a better pseudo-color imaging effect without processing the Raman data, the Raman spectral data is measured twice and averaged. Then, the maximum value is selected for each measured point, and a grayscale image of size a × b (in pixels, where a is the length and b is the width) is arranged according to the positions of the observation points, as shown in Figure 12. Since there are very few observation points, usually only a few hundred, converting them into a visible high-resolution image is a difficulty.
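The peak-extraction step can be sketched as follows; the array layout and a row-major scan order are assumptions, and the 0–255 normalization is added for display.

```python
import numpy as np

def spectra_to_grayscale(spectra, a, b):
    """Take the maximum Raman intensity of each measured point (each a
    1024-dim vector in the paper) and arrange the values into an a x b
    grayscale image following the scan order (row-major assumed)."""
    peaks = spectra.max(axis=1)              # strongest peak per scan point
    img = peaks.reshape(a, b).astype(float)
    # normalize to 0..255 for display
    img = (img - img.min()) / (np.ptp(img) + 1e-12) * 255.0
    return np.rint(img).astype(np.uint8)
```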
After obtaining the grayscale image of a × b size, as shown in Figure 13a, the image contrast is too small, and it is difficult to distinguish the objects in the image. Therefore, the Retinex method is used to improve the contrast of the image, as shown in Figure 13b. The noise in the image is prominent, similar to salt noise. Because the image size is small, the image is processed with a median filter. The processing result is shown in Figure 13c: the cell outline is obvious, and the salt noise is well filtered.
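A minimal sketch of the contrast-enhancement and denoising steps follows. The paper does not specify which Retinex formulation it uses, so a single-scale Retinex is shown here as one common variant, together with a 3×3 median filter suited to very small images.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur using numpy only, with replicated edges."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    pad = np.pad(img, ((0, 0), (radius, radius)), mode='edge')
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, pad)
    pad = np.pad(img, ((radius, radius), (0, 0)), mode='edge')
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, pad)

def single_scale_retinex(img, sigma=3.0):
    """R = log(I) - log(I * G): removes slowly varying illumination and
    stretches local contrast (one Retinex variant; assumed here)."""
    img = img.astype(float) + 1.0            # avoid log(0)
    r = np.log(img) - np.log(_gaussian_blur(img, sigma))
    return (r - r.min()) / (np.ptp(r) + 1e-12)   # rescale to [0, 1]

def median_filter3(img):
    """3x3 median filter to suppress salt-like noise on small images."""
    pad = np.pad(img, 1, mode='edge')
    stacked = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0)
```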
The image is then processed by super-resolution. Considering the imaging effect and the algorithm complexity, this paper selects the SRCNN deep neural network, which has few layers and has been pre-trained on the ImageNet database. Two SRCNN operations are performed, finally producing a visually larger grayscale image, as shown in Figure 14a. Because the image is still blurry, the adaptive guided filter is used for image enhancement.
The parameters |ω| and eps are calculated by the linear equations given above. The full-band data is imaged and pseudo-color processed using the Jet pseudo-color index sequence in MATLAB, as shown in Figure 14b. The Raman spectral pseudo-color imaging task is thus completed. The architecture of the algorithm is shown in Figure 15 and Algorithm 1: Raman spectral scanning of the artificially designated area T obtains the two-dimensional data.
Extract the maximum value of each of the 1024-dimensional vectors, and fill the extracted data into the matrix.
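The final step maps the enhanced grayscale image through a Jet index sequence. The paper uses MATLAB's built-in `jet` map; the following numpy-only piecewise-linear version only approximates it and is shown purely as a sketch.

```python
import numpy as np

def jet_colormap(gray):
    """Map a grayscale image in [0, 1] to RGB with a piecewise-linear
    approximation of MATLAB's Jet index sequence (approximation only)."""
    x = np.clip(gray.astype(float), 0.0, 1.0)
    r = np.clip(1.5 - np.abs(4 * x - 3), 0, 1)   # red ramps up in the upper range
    g = np.clip(1.5 - np.abs(4 * x - 2), 0, 1)   # green peaks in the middle
    b = np.clip(1.5 - np.abs(4 * x - 1), 0, 1)   # blue dominates the lower range
    return np.stack([r, g, b], axis=-1)
```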

Results and Discussion
The data of Figure 16b,c below is acquired under the following conditions: a 20 × 20 measurement dot matrix was selected, and a Raman scattering point scan was performed on Escherichia coli (DH5α strain) at 25 °C to obtain the following image. The integral power is 1.5 mW and the integration time is 2 s. The lens parameter is 600 g/mm, and the same scan point is scanned twice. The data of Figure 17e,f below is acquired under the following conditions: a 10 × 10 measurement dot matrix was selected, and a Raman scattering point scan was performed on Escherichia coli (DH5α strain) at 25 °C. The integral power is 1.5 mW, and the integration time is 2 s. The lens parameter is 600 g/mm, and the same scan point is scanned twice. Since the image size of 10 × 10 is too small, Retinex enhancement is performed on the 10 × 10 images. Then, bilinear interpolation unifies the image to the size of 20 × 20, and median filtering is used to suppress the noise. Since no papers on algorithms for pseudo-color imaging of Raman cells have been found, this paper mainly discusses two aspects: software vs. software, and algorithm vs. algorithm.

Comparison with a Digital Optical Microscope

The full-band data is processed by the method proposed in this paper, and the results in Figure 16b,c,e,f are obtained. The image under the digital optical microscope is taken as a reference image. By observing the pseudo-color image, the contour of the cell can be clearly observed, and it is similar to the cell size under a digital optical microscope. The pseudo-color imaging has significant contrast. The disadvantage, however, is that the image contains significant noise, and Figure 16e only roughly depicts the cell outline, with less information about the inside of the cell.

Imaging Contrast and Analysis for Different Bands
In view of the Raman waveform diagram in Figure 9a, three bands were selected for analysis: 50–2750, 2750–3050, and 3050–3950 cm⁻¹. Two sets of experimental data were subjected to Raman pseudo-color image processing in the three bands.
The first set of data is processed in the bands 50–2750, 2750–3050, and 3050–3950 cm⁻¹ to obtain three sets of images, as shown in Figure 17.
The peak signal-to-noise ratio (PSNR) is used to evaluate the performance of the algorithm. The PSNR evaluates the similarity of two images through the mean square error (MSE). For two images I and K with length a and width b, the mean square error is

MSE = (1/(a·b)) · Σᵢ Σⱼ [I(i,j) − K(i,j)]²,   (19)

and the calculation equation of the peak signal-to-noise ratio is

PSNR = 10 · log₁₀(MAX_I² / MSE),

where MAX_I is the maximum possible pixel value of the image (255 for 8-bit images). Cell contours could be seen in all three bands. The pairwise PSNR values of the three grayscale images are shown in Table 1. Observing the table, it is found that the PSNR value between Figure 17a and Figure 17c is larger, indicating that Figure 17a is closer to Figure 17c. The key information in this spectrum is in the second band; therefore, both the first band and the third band are missing certain information, so their images look more blurred. The fact that the cell contour is visible in all three images indirectly demonstrates the robustness of the system.
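The MSE/PSNR computation above can be sketched directly; `max_val` defaults to 255 under the assumption of 8-bit images.

```python
import numpy as np

def psnr(I, K, max_val=255.0):
    """PSNR between two images of the same a x b size, via the MSE of
    equation (19) in the text."""
    I = np.asarray(I, dtype=float)
    K = np.asarray(K, dtype=float)
    mse = np.mean((I - K) ** 2)
    if mse == 0:
        return float('inf')                      # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```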
The second set of data is processed in the bands 50-2750, 2750-3050, 3050-3950 (unit: cm −1 ) to obtain three sets of images, as shown in the following Figure 18.
It can be seen that the shape of the cells is visible in the imaging regardless of the band. As shown in Table 2, the pairwise PSNR values of the three grayscale images are obtained. Similar to the previous experiment, it is found that Figure 18a,e have larger PSNR values, while Figure 18c has lower similarity with the other two images. The reason is that the key information of the spectrum is in the second band; the first and third bands are missing important information, so their images look fuzzier and more similar to each other. The imaging effect of this experiment is inferior to that of the previous experiment because fewer measurement points were selected, so the measured Raman information contains more noise. However, the pseudo-color imaging of Band 1 and Band 3 still shows the cell outline and position.


Pseudo-color Super-resolution Algorithm Comparison
Since the pseudo-color images obtained by this paper's algorithm have no reference image, it is difficult to make a fair comparative evaluation of algorithm superiority. In this paper, we use cell images that are interpolated and blurred, and compare different algorithms: Yang et al. [34], Zeyde et al. [35], GR [36], ANR [36], NE + LLE [37], NE + NNLS [38], A+ [36], and SRCNN. The PSNR, SSIM, NQM, GSM, and MSSIM evaluations are performed on the cell pseudo-color dataset (built by our laboratory). The evaluation results are shown in Table 3. The original images are images that have not been super-resolution processed. When evaluating the image quality, the larger the difference between the original image and the processed image, the better the super-resolution effect. Therefore, the bold numbers in the table indicate that SRCNN super-resolution is superior to the other super-resolution algorithms.

Algorithm Sharpness Comparison
Since the images obtained in this paper are no-reference images, we refer to the method used in [39]: convert the image space from RGB to CIELAB, and use the Kullback–Leibler (KL) divergence to quantify the difference between the probability densities of two random variables (i.e., the two images compared). This method assesses the degree of visual clarity through an information-theoretic measure.
Suppose p and q represent the probability mass functions of the two images to be compared in CIELAB space, respectively, and define the comparison formula as

D(p‖q) = Σ p(L*, C*, h*) · log [ p(L*, C*, h*) / q(L*, C*, h*) ],

where the sum runs over the (L*, C*, h*) bins. If p(L*, C*, h*) is close to q(L*, C*, h*), then D(p‖q) will be close to 0, which means that the visual clarity of the two is relatively close. Comparing the bold values in Table 4, we can conclude that the image clarity of the proposed algorithm is much higher than that of the image generated by the Witec instrument. The comparison is made against the digital optical microscope image because the resolution of the digital optical microscope is far higher than that of the Raman spectrum.
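The KL comparison can be sketched as follows, assuming the (L*, C*, h*) histograms have already been computed (the binning itself and the RGB-to-CIELAB conversion are outside this sketch).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)) over histogram bins.
    p and q are the (already binned) probability mass functions of the
    two images; eps guards against empty bins."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    p = p / p.sum()                          # normalize to valid pmfs
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```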

Imaging Comparison of Witec Instruments
The image of Band 2 imaged by the Witec instrument is compared with the results of the method used in this paper. The pseudo-color sequence selected by the Witec instrument and the pseudo-color sequence used in this paper are shown in Figure 19.

The Witec instrument is used for detection, dividing the range of 2750–3050 cm⁻¹ into 90 observation wavelengths. The pseudo-color images measured by Witec are shown as (a,c) in Figure 20, and the images obtained by this paper's algorithm as (b,d). It can be seen that, compared with (c), where the image is ambiguous and the detailed information cannot be recognized, the algorithm in this paper better highlights the contour of the cell, although some noise remains. Compared with the results obtained by Witec, the pseudo-color index sequence used in this paper enhances the contrast of the image and reflects the cell contour. The two experiments indicate that the red region in the image was also verified as a cell.
By analyzing the image information entropy, the amount of information contained in the pseudo-color image is evaluated. The image information entropy equation is

H = −Σᵢ₌₀²⁵⁵ pᵢ · log₂ pᵢ, where pᵢ = f(i)/(a×b),

f(i) represents the number of pixels in the image with a gray value of i, 0 ≤ i ≤ 255 and i ∈ N, a is the image length, and b is the image width.
For the information entropy of the pseudo-color image, the equation is

H = −Σᵢ Σⱼ Σₖ p_ijk · log₂ p_ijk, where p_ijk = f(i,j,k)/(a×b×3),

with 0 ≤ i, j, k ≤ 255 and i, j, k ∈ N. The information entropy of the pseudo-color images generated by the Witec instrument and by this paper is calculated, and the red, green, and blue channel information entropies of the images are shown in Table 5. It is observed from Table 5 that the red channel information entropy and the green channel information entropy of Figure 20a,c generated by the Witec instrument are both high, which may be because the selected pseudo-color sequence is biased toward black, red, and yellow. Comparing the color image information entropy, it is found that the value obtained in this paper is smaller than that obtained by the Witec instrument, which indicates that the method used in this paper segments cells in the images better. Meanwhile, the difference between the entropy value obtained in this paper and that by the Witec instrument is small, which indicates that the method used in this paper is neither over-segmented nor under-segmented. Compared with the Witec instrument imaging using this paper's color map, the image information entropy and sub-channel information entropy are small, indicating that the image generated by the Witec instrument is relatively smooth. However, from a visual perspective, the resulting boundaries are blurred and, most importantly, the size of the cells has been severely distorted.
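The entropy computations above can be sketched as follows; the pooled-channel reading of the p_ijk normalizer (a×b×3) is an assumed interpretation.

```python
import numpy as np

def gray_entropy(img):
    """H = -sum_i p_i * log2(p_i), p_i = f(i)/(a*b), 0 <= i <= 255,
    where f(i) counts pixels with gray value i."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # convention: 0 * log2(0) = 0
    return float(-(p * np.log2(p)).sum())

def color_entropy(img_rgb):
    """Pseudo-color entropy with normalizer a*b*3: the three channel
    histograms are pooled (assumed interpretation of p_ijk)."""
    return gray_entropy(img_rgb.reshape(-1))
```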


Conclusions
In this paper, we propose a novel visualization method for cell Raman spectroscopy, which can be widely used in microscopic cell research. From the perspective of image processing, the peaks are extracted without denoising the Raman spectral data, and a deep super-resolution network is studied. A method of adaptively selecting the filter radius and penalty coefficient is proposed to generate the cell image. The method has the following advantages: (1) the image is clear, the edges are obvious, and the contour is consistent with the photograph under a digital optical microscope; (2) the universality is strong, and the images generated from Raman spectral data with either strong or weak noise are relatively clear. More experimental results are shown in Figure 21, and the corresponding color image entropy and image sub-channel entropy are shown in Table 6.
As for future work, there are a few interesting topics worth exploring, such as how to detect the elemental substances contained in cells and how to perform pseudo-color imaging of Raman spectra of clinical cells.
Table 6. The entropy of the Figure 21 Witec images and images generated by this paper's algorithm.