Smartphone Biosensor System with Multi-Testing Unit Based on Localized Surface Plasmon Resonance Integrated with Microfluidics Chip

Detecting biomarkers is an efficient method to diagnose and monitor patients' disease stages. For more accurate diagnosis, continuous detection and monitoring of multiple biomarkers are needed. To achieve point-of-care testing (POCT) of multiple biomarkers, a smartphone biosensor system with a multi-testing unit (SBSM), based on localized surface plasmon resonance (LSPR) integrated with a multi-channel microfluidic chip, is presented. The SBSM can simultaneously record nine sensor units to detect multiple biomarkers; an additional 72 sensor units were fabricated for further verification. The well-designed modular attachments consist of a light source, lenses, a grating, a case, and a smartphone shell, and can be readily assembled and attached to a smartphone. The sensitivity of the SBSM was 161.0 nm/RIU, and the limit of detection (LoD) reached 4.2 U/mL for CA125 and 0.87 U/mL for CA15-3 in tests of several clinical serum specimens on the SBSM. The test results indicated that the SBSM is a useful tool for detecting multiple biomarkers. Compared with the enzyme-linked immunosorbent assay (ELISA) results, the results from the SBSM were well correlated and reliable. Meanwhile, the SBSM is convenient to operate without much professional skill. Therefore, the SBSM could become useful equipment for point-of-care testing due to its small size, multi-testing unit, usability, and customizable design.


The Workflow of the Python Program
The Python program was written and run in PyCharm (version 2019.1.3) with Python 3.7.3. The packages used in the program include OpenCV (version 3.4.2), NumPy (version 1.16.4), Matplotlib (version 3.1.0), and SciPy (version 1.3.0).
Original images were captured by the smartphone during the test. To avoid systematic errors, at least five images were captured each time. Reading out and analyzing such a large number of images by hand is repetitive and laborious, so a Python program was written to perform the analysis. For each image, the workflow of the Python program is as follows.
The border region of the image is totally black and therefore useless for sensing, yet it would increase the calculation time. Therefore, each image is first cropped to the sensor region, as shown in Figure S1a.
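The cropping step above can be sketched as a simple array slice. The crop bounds and image size below are hypothetical placeholders; the real bounds depend on the optical alignment of the attachment.

```python
import numpy as np

def crop_sensor_region(image, row_range, col_range):
    """Crop away the black border so later column/row sums cover only the
    sensor region. `row_range` and `col_range` are (start, stop) pixel
    bounds; the values used here are illustrative, not the real ones."""
    r0, r1 = row_range
    c0, c1 = col_range
    return image[r0:r1, c0:c1]

# Synthetic 100x100 "photo": black border, bright sensor region in the center.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[20:80, 30:70] = 200
cropped = crop_sensor_region(img, (20, 80), (30, 70))
```

Slicing returns a view, so the crop itself costs no copy; only the reduced region is processed afterwards.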
The color image in Figure S1a was converted into the grayscale image in Figure S1b. The conversion function (the standard luma formula, as used by OpenCV's grayscale conversion) was

Gray = 0.299R + 0.587G + 0.114B

where Gray represents the grayscale value of one pixel, and R, G, and B represent the red, green, and blue values (0–255) of the pixel recorded by the CMOS (complementary metal-oxide-semiconductor) sensor. The image can be regarded as a matrix of pixels. The x axis was defined as the direction perpendicular to the rainbow bars, and the y axis as the direction parallel to them. The sum of each column is presented in Figure S1c; each peak in Figure S1c indicates the center of one rainbow bar.
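The grayscale conversion and column-sum peak search can be sketched as follows. The synthetic image, bar positions, and the `height`/`distance` peak-finding thresholds are assumptions for illustration; the real program works on the cropped smartphone photos.

```python
import numpy as np
from scipy.signal import find_peaks

def to_gray(rgb):
    """Standard luma weights (as in OpenCV's RGB-to-gray conversion)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

# Synthetic image: three vertical bright "rainbow bars" at known x positions.
h, w = 50, 120
img = np.zeros((h, w, 3), dtype=float)
for cx in (20, 60, 100):
    img[:, cx - 3:cx + 4] = 220.0

gray = to_gray(img)
col_sums = gray.sum(axis=0)  # one value per column (the x axis of Figure S1c)
# Each peak of the column sums marks the center of one rainbow bar; for a
# flat-topped bar, find_peaks returns the middle sample of the plateau.
centers, _ = find_peaks(col_sums, height=col_sums.max() * 0.5, distance=10)
```

On real images the peaks are nine (one per channel) and less ideal, so the `height` and `distance` thresholds would need tuning.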
The sensor regions were then redefined based on the center of each rainbow bar. The new sensor regions were nine rectangles, each containing the center of one channel. In Figure S1d, the new sensor regions are outlined by white lines.
For each sensor region, the sum of each row corresponds to the intensity at a different wavelength. Figure S1e presents a typical intensity versus pixel-position curve.
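The per-region spectrum extraction described above can be sketched as row sums over a rectangle around each detected channel center. The region half-width and the synthetic test frame are illustrative assumptions.

```python
import numpy as np

def region_spectrum(gray, center_col, half_width):
    """Sum each row of a rectangular region around one channel's center.

    Returns the intensity-vs-pixel-position curve (one value per row,
    i.e. per position along the dispersion axis). `half_width` is a
    hypothetical region size."""
    region = gray[:, center_col - half_width:center_col + half_width + 1]
    return region.sum(axis=1)

# Synthetic grayscale frame: one bright horizontal band (intensity at a
# single "wavelength") inside a channel centered at column 30.
gray = np.zeros((80, 60))
gray[40:43, 20:41] = 150.0
curve = region_spectrum(gray, center_col=30, half_width=10)
peak_row = int(np.argmax(curve))  # pixel position of the intensity peak
```

The resulting `curve` corresponds to the intensity-pixel-position curve of Figure S1e; its peak position is what the calibration later maps to a wavelength.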

The Calibration Result of Each Channel
Due to differences in physical structure and optical deviations, the calibration results differed between positions. Table S2 lists the spectral calibration results of all channels. The intercepts of the linear regressions ranged from 353.6037 to 356.9689, and the slopes ranged from 0.18897 to 0.19322. The differences in intercept and slope between channels could not be ignored; therefore, a separate intercept and slope were applied to calculate the spectral data for each channel. Nevertheless, the adjusted coefficient of determination (Adj. R-square) approached 1 for the spectral calibration of every channel, which means the system was highly precise anywhere in the sensor region.
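The per-channel calibration amounts to a linear pixel-to-wavelength map, wavelength (nm) = intercept + slope × pixel. A minimal sketch, where the intercept/slope pairs are illustrative values within the ranges quoted above rather than the actual Table S2 entries:

```python
# Per-channel linear calibration: wavelength (nm) = intercept + slope * pixel.
# These two entries are illustrative; the real values come from Table S2.
calibration = {
    0: (353.6037, 0.18897),
    1: (356.9689, 0.19322),
}

def pixel_to_wavelength(channel, pixel):
    """Convert a pixel position along the dispersion axis to a wavelength,
    using the regression coefficients of the given channel."""
    intercept, slope = calibration[channel]
    return intercept + slope * pixel

wl = pixel_to_wavelength(0, 1000)  # pixel position 1000 in channel 0
```

Keeping one (intercept, slope) pair per channel is what compensates for the between-channel differences noted above.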

The Camera Control Software in Smartphones
To obtain uniform and reliable data, the camera must be strictly controlled when taking photos. The main factors are exposure time, photosensitivity (ISO), and focal length. Most smartphones provide software to control their cameras. The smartphone used in this study has a built-in camera-control function named master mode, which can adjust the exposure time, ISO, and focal length accurately. Figure S2 shows the use of master mode: the slide bars on the interface can be dragged to adjust the numerical values of these factors.

Overexposure
The intensity recorded by the sensor can saturate if a long exposure time is applied; a high ISO may also produce an overexposed image. Figure S3 presents an example of overexposure. The white dashed line in Figure S3a indicates the center of the fifth channel, whose RGB and grayscale intensity data are presented in Figure S3b,c. In Figure S3b, the spectra of the green and blue channels show flat tops where the sensor was saturated, and the grayscale curve in Figure S3c also shows a flat top. These flat tops would lead to incorrect results. Therefore, the exposure time and ISO should not be set too high; they should be adjusted according to the size of the micro-hole and the power of the light source before testing, and kept the same throughout a series of tests.
Figure S3. An example of overexposure. (a) Overexposed picture; (b) spectra of the three color channels (red, green, and blue); (c) spectrum of grayscale.
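A flat-top check like the one described above could be automated by flagging runs of samples pinned at the sensor's saturation value. This is a sketch under assumptions: the saturation level (255 for 8-bit data) and the minimum run length are hypothetical thresholds, not values from the paper.

```python
import numpy as np

def is_overexposed(channel_data, saturation=255, min_run=3):
    """Flag a spectrum as overexposed if it contains a flat top: a run of
    `min_run` or more consecutive samples at the saturation value.
    Both thresholds are illustrative assumptions."""
    saturated = channel_data >= saturation
    run = 0
    for s in saturated:
        run = run + 1 if s else 0
        if run >= min_run:
            return True
    return False

good = np.array([10, 80, 200, 240, 180, 60], dtype=float)
clipped = np.array([10, 120, 255, 255, 255, 255, 90], dtype=float)
```

Running such a check on each color channel before analysis would catch images whose exposure time or ISO was set too high.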