Article

Analytical Device and Prediction Method for Urine Component Concentrations

1 School of Life Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
2 School of Optoelectronics Engineering, Changchun University of Science and Technology, Changchun 130022, China
3 Jilin Institute of Metrology and Science, Changchun 130022, China
4 Changchun Chunqiu Technology Development Co., Ltd., Changchun 130022, China
* Author to whom correspondence should be addressed.
Micromachines 2025, 16(7), 789; https://doi.org/10.3390/mi16070789
Submission received: 13 May 2025 / Revised: 28 June 2025 / Accepted: 29 June 2025 / Published: 2 July 2025
(This article belongs to the Section B: Biology and Biomedicine)

Abstract

To address the low accuracy of real-time urine component concentration analysis, a fully automated urine dry chemistry dipstick analysis device was designed, and a prediction method combining an image acquisition system with a whale optimization algorithm (WOA)-optimized BP neural network was proposed. The image acquisition system, which comprised an ESP32S3 chip and a GC2145 camera, was used to collect urine test strip images, and the color data were then calibrated by image processing and color correction on the host computer. The correlations between reflected light and concentrations were established following the Kubelka–Munk theory and the Beer–Lambert law. A mathematical model relating urine colorimetric values to concentrations was constructed based on the least squares method. The WOA was applied to optimize the weights and thresholds of the BP neural network, and a substantial dataset was used to train the network and perform comparative analysis. The experimental results show that the MAE, RMSE and R² of predicted versus actual urine protein values were, respectively, 3.1415, 4.328 and approximately 1. The WOA-BP neural network model exhibited high precision and accuracy in predicting urine component concentrations.

1. Introduction

Although intelligent technologies can efficiently predict and analyze key parameters in numerous areas, qualitative detection remains the mainstream technique for dry chemistry dipstick testing of urine component concentrations. To be specific, the existing urine detection methods primarily compare the colors of urine test strips with a standard colorimetric card. The principle is to calculate the Euclidean distances from the color values of each test strip to the color blocks with different concentration gradients on the colorimetric card, thereby determining the similarity between the two. When the Euclidean distance is at its minimum, the corresponding colorimetric concentration is identified as the concentration value of the detection item. Although such a method is simple and easy to operate, it still requires improvement in terms of detection range and accuracy [1,2].
In view of this, the present study proposes a fully automated detection device and method for urine test strips, with a view to inferring the specific values of various indices by recognizing the test strip colors. The device obtains images of the reacted reagent blocks via the image acquisition system, extracts the RGB value of each reagent block on the test strip through a series of image processing operations, and then performs color correction. The corrected RGB values are input into the pre-trained WOA-BP neural network, whose calculations yield the predicted concentration values, leading to more accurate diagnoses [3,4]. Before the experiment begins, sterile and dry containers should be prepared to avoid the presence of cleaning agents or disinfectants; midstream urine should be collected to minimize contamination from bacteria or impurities at the urethral opening. It is generally recommended to collect morning urine (which is concentrated and stable) or to collect at specific times (such as post-meal urine or random urine). The collected urine should be processed immediately after collection. If sediment components (such as cells or casts) need to be detected, the urine sample should be centrifuged at 3000 rpm for 5–10 min.
If the supernatant needs to be tested, the solution should be well mixed. If the sample is cloudy or has high protein concentration, it should be filtered through filter paper or diluted with physiological saline to avoid interference with the testing equipment or affecting the accuracy of the results. For certain special tests, buffer solutions should be used to adjust the urine pH to an appropriate range to ensure the stability of the test reagents. After completing these steps, the corresponding tests can be conducted.

Device Design

Figure 1 illustrates the overall structure of the urine test strip color recognition device, which has a modular design. The functional connection between various parts is realized according to their respective installation and coordination relationships. The device is composed of a test strip feeding module, a test strip transport module, a sample transport module, a sample addition module, a syringe pump, a cleaning tank and an image acquisition module, which can fully automatically complete the test strip placement and transport, sample instillment, image acquisition, image processing and final data analysis.
Figure 2 displays the specific structure of the test strip feeding module, where the test strip chamber, pickup roller, roller wheel and motor support base are all made by 3D printing. The stepper motor provides and controls power for the module; the coupler connects the motor shaft with the pickup roller shaft; and the synchronous belt and pulley transfer power to the roller wheel shaft. The gear ratio of the synchronous belt and pulley is 2:1, so that the pickup roller rotates once and the roller wheel rotates twice. There is a rectangular slot at the rear of the test strip chamber, which facilitates the strip placement. A rubber layer is arranged on the convex part of the pickup roller to increase friction. The test strip chamber is provided with a fan-shaped slot under the pickup roller, which enables the pickup roller to press the test strip tightly; the front end is designed with a slope structure to enable the test strip separation. The roller wheel transports a test strip onto the rubber band platform. After the motor shaft rotates counterclockwise to complete the output of a single test strip, it needs to rotate clockwise to return the underlying test strip to the origin before proceeding with the next test strip output.
Figure 3 depicts the specific structure of the test strip transport module. The stepper motor provides and controls power for the module; the coupler connects the motor shaft with the drive shafts; and the synchronous belt and pulley transfer power to the roller wheel shaft. The bearing seats offer a fixed support for the drive shaft. The other end of the drive shaft is fixed by two bearing seats, and the fixation of all the bearing seats is achieved by connecting the independently designed 3D printing parts with the bed bolts. The two drive shafts are connected by a rubber band. After the test strip is placed on the rubber band, the two underlying steering gears lift the platform that has a buckle structure. The platform is fixed on the rack by the spring middleware, and the test strip moves on the rubber band platform until it is stopped by the buckle, thereby achieving the strip localization. The steering gear arms lower the platform to allow the passage of the test strip. After the test strip moves for a certain distance, the steering gear arms lift again to ensure a certain inter-strip spacing, which also guarantees that the test strip can stay underneath the sample adding needle and image acquisition module.
The liquid path system, the core component of the sample addition module, is responsible for ensuring the flow and circulation of urine samples and system liquid during the sample adding process, as well as the disposal of post-reaction waste liquid. Its major functions include multiple important operations such as sample suction and discharge, system liquid delivery and sample-adding-needle cleaning [5]. In the actual sample adding operation, the design and control of the liquid path system are crucial in ensuring the accuracy and efficiency of sample processing. Here, the sample adding needle is required to discharge 20 μL at a time, 12 times in succession, so its capacity needs to be above 240 μL. Moreover, given the necessity of an air column and a system liquid column, we chose a 400-μL three-layer sample adding needle to effectively isolate the interferences from internal fluid and external space. For liquid level detection, the PCS0902 capacitive level sensor, which has good anti-jamming ability, was selected as the detecting unit. Figure 4 depicts the waveform of the liquid detected under an empty needle condition. MSP1-D1 was adopted as the syringe pump, while a rigid PTFE tube was used for the liquid contact pipeline [6].

2. Principle Analyses

2.1. Principle of Color Recognition

During the test strip detection, color changes are generated by chemical reactions. Different color space models have been used to recognize and analyze these color changes. Common color spaces include the RGB (red, green, blue) and CIELab spaces. RGB is a device-related color model, and the data acquisition system used in this study was a typical RGB input device. As an addition-based color model, RGB represents various colors by combining the intensity values of three primary colors (R, G, B). The intensity of each primary color is usually expressed as an integer value from 0 to 255. Colors range from completely black (0, 0, 0) to completely white (255, 255, 255), between which different intensities of red, green and blue colors are mixed to form other colors. For the RGB color model displayed in Figure 5, the color range spans from black (0, 0, 0) to white (1, 1, 1) after normalization [7].
The RGB color space may not accurately represent all colors perceived by the human eye, while the CIELab color space is more in line with the understanding of the human visual system, which makes it easier to operate and facilitates color analysis. As a human visual perception-based color space developed by the International Commission on Illumination, CIELab divides colors into three parts, as shown in Figure 6: L* (brightness) stands for the brightness of colors, ranging from 0 (black) to 100 (white); a* (green–red) represents the color distribution from green to red, with negative values indicating green and positive values indicating red; and b* (blue–yellow) represents the color distribution from blue to yellow, with negative values indicating blue and positive values indicating yellow. In the CIELab color space, the asterisk (*) serves as a specific identifier for the parameters of this color model. CIELab is particularly suitable for detecting small changes in color. For example, when the color of the test strip changes from light to dark pink, CIELab can accurately reflect the brightness and tonal difference of such change, thereby facilitating more accurate analysis [8].
The formula for converting a color from RGB to CIELab color spaces is as follows:
Initially, the [R, G, B] value needs to be normalized within the range of [0, 1].
Then, gamma correction is applied to convert this value from nonlinear to linear. For every color channel, the following formula is used:
r = \begin{cases} r / 12.92, & r \le 0.04045 \\ \left( \dfrac{r + 0.055}{1.055} \right)^{2.4}, & r > 0.04045 \end{cases}
The same formula applies to g and b .
The RGB-to-XYZ conversion matrix is employed as:
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124564 & 0.3575761 & 0.1804375 \\ 0.2126729 & 0.7151522 & 0.0721750 \\ 0.0193339 & 0.1191920 & 0.9503041 \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix}
Subsequently, the XYZ color coordinates are transformed into the CIELab color space.
The normalized X, Y, Z are calculated as:
x = \frac{X}{X_n}, \quad y = \frac{Y}{Y_n}, \quad z = \frac{Z}{Z_n}
where Xn, Yn and Zn denote the tristimulus values of standard illuminant. In this experiment, Xn = 0.95047, Yn = 1.0, and Zn = 1.08883.
The normalized values x, y and z are transformed into the CIELab color space using the following function f(t):
f(t) = \begin{cases} t^{1/3}, & t > 0.008856 \\ \dfrac{903.3\,t + 16}{116}, & t \le 0.008856 \end{cases}
The CIELab L*, a* and b* coordinates are calculated as:
L^* = 116 f(y) - 16, \quad a^* = 500 \left( f(x) - f(y) \right), \quad b^* = 200 \left( f(y) - f(z) \right)
With these formulas, the RGB values can be converted into CIELab coordinates.
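The conversion steps above can be collected into one short routine. The sketch below is an illustrative implementation (the function name rgb_to_lab is ours, not the authors' code); it follows the gamma correction, RGB-to-XYZ matrix and f(t) formulas exactly, using the D65 white point Xn = 0.95047, Yn = 1.0, Zn = 1.08883 stated above:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert one sRGB triple (0-255 integers) to CIELab (D65 white point)."""
    # Normalize to [0, 1]
    c = np.asarray(rgb, dtype=float) / 255.0
    # Inverse gamma correction: nonlinear sRGB -> linear RGB
    lin = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    # Linear RGB -> XYZ via the conversion matrix
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin
    # Normalize by the reference-white tristimulus values
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # Piecewise f(t)
    f = np.where(xyz > 0.008856, np.cbrt(xyz), (903.3 * xyz + 16.0) / 116.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return L, a, b
```

As a sanity check, pure white (255, 255, 255) maps to approximately L* = 100, a* = 0, b* = 0, and pure black to L* = 0.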

2.2. Principle of Concentration Calculation

Figure 7 illustrates the structure of reagent blocks. The block surface is covered with a nylon film, which can effectively block the macromolecular entry of the reagent layer, thus protecting the reagents from contamination. At the bottom of the reagent blocks, an absorber layer is designed, whose function is to absorb excess urine and prevent incident light from penetrating the reagent layer. Upon contact of the reagent layer with urine, substantial diffuse reflectors would be formed. It can be observed from Figure 7 that specular reflection is produced on the surfaces of the detection blocks, while part of the light enters the diffuse reflectors and eventually forms a diffuse reflection after a series of optical processes such as reflection, refraction and diffraction. When the detection zone of the reagent blocks is sufficiently thick, the influence of transmitted light is negligible. By collecting and analyzing the reflected light, concentration-related detection information can be extracted.
According to the Kubelka–Munk theory, the reflectivity of incident light is specifically correlated with the optical absorption coefficient, the scattering coefficient and the degree of diffuse reflected light absorption in the test strip reaction zone. Such correlation can be formulated as:
R = 1 + R_d - \sqrt{R_d^2 + 2 R_d}
R_d = \frac{K}{S}
In the above formulas, R signifies the reflectivity; Rd represents the diffuse reflectance when the test sample thickness is greater than the transmission depth; K denotes the absorption coefficient of the reagent blocks; and S represents the scattering coefficient. Combining Formulas (6) and (7), we obtain
\frac{K}{S} = \frac{(1 - R)^2}{2R}
The scattering coefficient depends mainly on the object material properties. Thus, when the thickness of the reaction zone and the scattering coefficient remain constant, the reflectivity is only correlated with the absorption coefficient. Since the absorption coefficient K and the substance concentration C follow the Beer–Lambert law,
K = \varepsilon C
where ε represents the molar absorptivity. The absorption coefficient K is thus linearly proportional to the concentration C of the test sample, so once the reflectivity of the detection reagent blocks is measured, the concentrations of the corresponding substances in urine can be calculated.
Combining the Kubelka–Munk theory with the Beer–Lambert law, we can derive that the Kubelka–Munk function of the reflectivity, (1 − R)²/(2R), is directly proportional to the concentration. In the ideal state, if a surface is completely diffuse (i.e., the surface reflects all incident light evenly in all directions), the color value of the surface can be regarded as a direct reflection of reflectivity. However, in practical applications, since cameras and sensors are affected by various interfering factors such as lighting conditions and object surface glossiness, the color values cannot directly reflect the reflectivity. Therefore, a direct relationship between color values and concentrations needs to be established through mathematical modeling.
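The proportionality between the Kubelka–Munk function and concentration can be expressed as a minimal numeric sketch. The names km_function and the calibration constant eps_over_S (standing for ε/S) are hypothetical, introduced here only for illustration:

```python
def km_function(R):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R).
    By the formulas above, F(R) = K/S = (eps * C) / S, i.e. linear in C."""
    return (1.0 - R) ** 2 / (2.0 * R)

def concentration_from_reflectivity(R, eps_over_S):
    # C = F(R) / (eps / S); eps_over_S is a hypothetical calibration constant
    # obtained by fitting F(R) against known concentrations.
    return km_function(R) / eps_over_S
```

For example, a perfectly reflecting block (R = 1) gives F(R) = 0, i.e. zero absorber concentration.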

2.3. Image Processing

A urine test strip image was collected from the image acquisition system, functions from the OpenCV library were called in the PyCharm 2025.1.1.1 integrated development environment and corresponding program code was written for image processing. Figure 8 schematizes the processing flow, which includes Gaussian filtering, highlight removal by weighted superposition, Otsu's image thresholding, morphological open-close operation, Canny operator edge extraction, image extraction, color value extraction and color correction. Considering that during the image acquisition, the camera would introduce some noise (predominantly Gaussian) due to device components and various other factors, the Gaussian smoothing filter was used to accomplish image filtering. The highlight removal reduces the influence of highlight zones through linear weighting of the original image with its smoothed version (blurred image). During threshold segmentation, Otsu's method was employed to automatically obtain the image threshold: the optimal threshold is calculated automatically based on the image gray distribution, thereby separating the background region from the foreground region. The core idea of Otsu's method is to maximize the variance between classes and find a gray-level threshold T that maximizes the separation between foreground and background pixels. The first step involves calculating the histogram and probability distribution. Assuming the gray level range is from 0 to L − 1, the probability of the i-th gray level is:
P_i = \frac{\text{number of pixels with gray level } i}{\text{total number of pixels}}
For a given threshold T, the Otsu method divides the image into two categories: background pixels (gray levels [0, T − 1]), with probability ω0 and average gray level μ0; and foreground pixels (gray levels [T, L − 1]), with probability ω1 and average gray level μ1. The inter-class variance is defined as:
\sigma_b^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2
\omega_0 = \sum_{i=0}^{T-1} P(i), \quad \omega_1 = \sum_{i=T}^{L-1} P(i), \quad \mu_0 = \frac{1}{\omega_0} \sum_{i=0}^{T-1} i\,P(i), \quad \mu_1 = \frac{1}{\omega_1} \sum_{i=T}^{L-1} i\,P(i)
By traversing all possible values of the threshold T, Otsu's method finds the T that maximizes σb². This T value is the optimal segmentation threshold.
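The exhaustive search over T can be sketched directly from the formulas above (an illustrative implementation; in practice OpenCV's cv2.threshold with the THRESH_OTSU flag performs an equivalent computation):

```python
import numpy as np

def otsu_threshold(gray):
    """Search the threshold T maximizing the inter-class variance
    sigma_b^2 = w0 * w1 * (mu0 - mu1)^2 for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities P(i)
    i = np.arange(256)
    best_T, best_var = 0, -1.0
    for T in range(1, 256):
        w0, w1 = p[:T].sum(), p[T:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue                           # one class empty: skip
        mu0 = (i[:T] * p[:T]).sum() / w0       # background mean gray level
        mu1 = (i[T:] * p[T:]).sum() / w1       # foreground mean gray level
        var = w0 * w1 * (mu0 - mu1) ** 2       # inter-class variance
        if var > best_var:
            best_var, best_T = var, T
    return best_T
```

On a clearly bimodal image the returned T falls between the two gray-level modes.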
For morphological processing, the opening operation was performed on the image first to eliminate some small-pixel interfering color blocks. Then, closing operation was performed to fill the small holes in the target color block zones. The primary purpose of morphological processing is to segment the independent elements of the image and reduce the interferences in small and medium-sized regions therein. After the above image operations, the edges of each urine dry chemistry test strip image were sharp and easy to locate. Thus, the Canny operator was directly applied to extract the image edges, obtaining the edge positions of various reagent blocks on the test strip. The image center was determined based on the edges, and by extracting rectangular images with a certain pixel size from the central position, we could obtain respective images of each detection item. Finally, the average RGB value of pixels in the region was calculated, thereby acquiring the representative color information of each reagent block.
In addition to the conventional image processing methods, the YOLOv5 model was also used to train on and detect the test strip images. As an advanced object detection model, YOLOv5 has been widely applied in image recognition and localization tasks, and is capable of quickly and accurately identifying objects and their positions in images. Substantial urine test strip images were collected and each detection item color block in the images was accurately annotated with Makesense. These annotated data were utilized to train the YOLOv5 model, allowing it to accurately identify the color block zones of all detection items in the test strip image. Figure 9 shows the detection results of the trained model. Regardless of the type of test strip, the model exhibits good detection performance and can accurately identify the location of each reagent block. During the detection process, the model assigns a confidence value to each detected target zone, which is used for measuring the reliability of detection results. In this study, a detection region with a confidence level of 0.78 or above is regarded as an effective target region and is labelled a “reagent block”. Further processing was carried out on these target regions. Initially, the coordinates of the center point of the detection box were extracted. Then, an image area with a pixel size of 15 × 15 was extracted centering on this central point, which served as the pure color block image of the corresponding detection item [9,10].
Given the characteristics of the image acquisition system and the influence of environmental factors, the directly extracted color values may have certain deviations. Thus, a final color correction process is required. Through color correction, the extracted color values can be adjusted and corrected, thereby obtaining more accurate color values. The color correction here is specifically the device color correction. By fitting the mapping relationship between the color values captured by the image acquisition system and the known color values on a standard colorimetric card, a polynomial regression model was constructed to correct the color deviations from the device. Initially, it is necessary to photograph the international standard colorimetric card in a fixed lighting environment. All the obtained data were converted from the RGB color space to the CIELab color space with Formulas (11) and (12). The Lab values of the color blocks collected by the system, the known Lab values of standard color blocks, and the corresponding color block images before and after polynomial nonlinear correction are presented in Figure 10. For every color channel (L, a, b), a polynomial fitting model was built as:
x_s = a_0 + a_1 x_m + a_2 x_m^2 + \cdots + a_n x_m^n
where xs signifies the standard value of the color channel; a0, a1, …, an represent the fitted polynomial coefficients; xm denotes the measured value of the color channel; and n is the polynomial order. Given the standard colorimetric card dataset, the polynomial coefficients ai of the various channels were fitted by the least squares method as follows:
\min_{a_0, a_1, \ldots, a_n} \sum_{i=1}^{m} \left( x_{s,i} - \left( a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_n x_i^n \right) \right)^2
where xi represents the i-th device-measured value and m denotes the number of samples from the standard colorimetric card.
The specific implementation process in the program code is as follows: the NumPy library was used for array operations. The focus was on constructing polynomial features by calling the preprocessing module of the scikit-learn library, thereby extending the input data to polynomial features. For example, when degree = 2, the input data x would be extended to [1, x, x²]. The LinearRegression class from the sklearn.linear_model module was applied to fit the polynomial regression equation, and the regression model was trained using the measured extended polynomial feature matrix X_poly and the reference target values, thereby obtaining the weight coefficient and intercept of each polynomial feature. The polynomial fitting code for single-channel data is as follows:
# Polynomial fitting of single-channel data
# Importing the corresponding function libraries
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Single-channel fitting function
def fit_polynomial(measured, reference, degree=2):
    # Constructing polynomial features; e.g., degree = 2 expands x to [1, x, x^2]
    poly = PolynomialFeatures(degree)
    X_poly = poly.fit_transform(measured.reshape(-1, 1))
    # Fitting the polynomial regression model
    model = LinearRegression().fit(X_poly, reference)
    return model
It is necessary to separately calculate the corresponding polynomial models for the three color channels L, a and b. Substituting the Lab values of the color block images into the model yields corrected Lab color values that are close to the standard. Figure 10, from left to right, shows the schematic diagram of device color block acquisition, the original image of the standard color chart and the effect diagram of polynomial nonlinear correction.
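Assuming three such per-channel models have been fitted, applying the correction to one measured Lab triple might be sketched as follows; fit_channel and correct_lab are hypothetical helper names introduced here for illustration:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_channel(measured, reference, degree=2):
    # Fit one color channel's correction polynomial, as in the text
    poly = PolynomialFeatures(degree)
    X_poly = poly.fit_transform(np.asarray(measured, dtype=float).reshape(-1, 1))
    model = LinearRegression().fit(X_poly, reference)
    return poly, model

def correct_lab(models, lab):
    # Apply the three (L, a, b) channel models to one measured Lab triple
    return [m.predict(p.transform([[float(v)]]))[0]
            for (p, m), v in zip(models, lab)]
```

Each channel is corrected independently, so the three models can be fitted and applied in any order.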
The R1G1B1, R2G2B2, R3G3B3, …, RiGiBi values corresponding to the various concentration levels (−, −+, +, ++, +++, ++++) of the corresponding items on the standard colorimetric card of the urine test strip were separately calculated. The computational formula for the Euclidean distance ΔEab is:
\Delta E_{ab} = \sqrt{ (L^* - L_i^*)^2 + (a^* - a_i^*)^2 + (b^* - b_i^*)^2 }
Through comparative calculation, all the CIELab distance values ΔEab were obtained. The concentration level on the standard colorimetric card corresponding to the smallest ΔEab was precisely the concentration level of the urine test strip detection item.
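The nearest-block decision can be sketched as below; classify_concentration and the card dictionary are illustrative names, with card holding the reference Lab triples of the colorimetric-card concentration levels:

```python
import numpy as np

def classify_concentration(lab, card):
    """Pick the colorimetric-card level whose Lab value has the smallest
    Euclidean distance (Delta E_ab) to the measured Lab value.
    `card` maps level labels to (L*, a*, b*) reference triples."""
    lab = np.asarray(lab, dtype=float)
    dists = {level: np.linalg.norm(lab - np.asarray(ref, dtype=float))
             for level, ref in card.items()}
    return min(dists, key=dists.get)
```

For instance, a measured triple close to the “+” reference block is classified as the “+” level.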

2.4. Regression Analysis

The image acquisition system collects RGB color values from the protein and leukocyte items of the urine test strip colorimetric card and uses them as the input features after color correction. In OriginPro2018, the nonlinear fitting relationships between the R, G and B values corresponding to each collected protein concentration and the concentration value were separately constructed, as described in Figure 11 and Figure 12. The fitting equations of the two are presented in Table 1 and Table 2, where C denotes the concentration value. The R² values of the three equations are all above 0.95, indicating a good fit. Since this model only roughly describes the variation trend of the concentration value with the color values, a more accurate mathematical model is required to predict the corresponding concentration value from the three color values.
For the sample acquisition, a concentration value was input into the above three regression equations to obtain a set of data comprising an RGB value and a concentration value C. After multiple calculations, the obtained data were divided into a training set (500 samples) and a test set (70 samples).

2.5. Whale Optimization Algorithm (WOA)

WOA, first proposed by Seyedali Mirjalili et al. in 2016, is an optimization algorithm that simulates whale behavior. The core idea stems from humpback whales’ unique bubble-net feeding strategy. When humpback whales hunt, they blow spiraling circles of bubbles to create a net around their prey, gathering the prey up for easy capture. By simulating this process, WOA searches for the optimal solution in the solution space. The location of each whale corresponds to a potential solution, and the global optimal solution is gradually approached by constantly updating the whale locations. This predation process consists of three stages: the prey encirclement, the bubble-net assaulting and the prey search.
The behavior of encircling prey is modeled by calculating the distance between the whale and the prey (current optimal location) and adjusting the whale location according to this distance [11,12,13]. The specific location update formula is:
The distance vector D between the current whale location and the optimal location is determined as:
D = \left| C \cdot X^*(t) - X(t) \right|
C = 2 r_2
where C stands for a coefficient vector that adjusts the search range, with r2 being a random number between [0, 1]. t denotes the number of iterations; X*(t) represents the current global optimal location; and X(t) represents the current whale location.
The adjustment vector A is calculated to determine the offset of the whale location relative to the optimal location:
A = 2 a \cdot r - a
a = 2 - \frac{2t}{T_{\max}}
a is a coefficient that decreases linearly from 2 to 0 to control the search convergence process; r is a random number between [0, 1]; and Tmax denotes the maximum number of iterations.
Using the distance vector D and the adjustment vector A, the whale location X(t + 1) is updated as:
X(t+1) = X^*(t) - A \cdot D
During the bubble-net assaulting behavior, the whale updates its location by calculating its distance from the optimal location and by gradually approaching the optimal location along the spiral path. The specific location update formula is:
The distance vector D between the current whale location and the optimal location is calculated as follows:
D = \left| X^*(t) - X(t) \right|
Using the distance vector D and a spiral path, the whale location X(t + 1) is updated as follows:
X(t+1) = D \cdot e^{bl} \cos(2 \pi l) + X^*(t)
where ebl represents an l-dependent exponential function that simulates the bubble-net contraction or expansion; cos(2πl) generates an l-dependent cosine function to simulate the spiral motion; b is a constant that determines the spiral shape; and l is a random number between [−1, 1].
When |A| < 1, the probability parameter p is compared with the preset threshold to identify which of the above behaviors a whale specifically chooses for location updating.
p is a random number between [0, 1]. If p < 0.5, the whale chooses to encircle the prey, which is suitable for the initial stage when a large-area search is required. If p ≥ 0.5, bubble-net assaulting behavior is chosen, which is more suitable for the later stage of the algorithm and enables more accurate approximation when approaching the optimal solution.
When |A| ≥ 1, under the prey search behavior, the whale abandons the current optimum and instead searches around a randomly selected member of the population, which strengthens the global exploration ability. The specific location update formulas are:
D = \left| C \cdot X_{rand}(t) - X(t) \right|, \qquad X(t+1) = X_{rand}(t) - A \cdot D
where Xrand(t) is the location of a whale randomly selected from the whale population.
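Putting the three update rules together, a minimal WOA loop for minimizing a fitness function might be sketched as follows. This is an illustrative implementation, not the authors' code: the population size, bounds, spiral constant b = 1 and random seed are all assumptions:

```python
import numpy as np

def woa(fitness, dim, n_whales=20, t_max=100, lb=-1.0, ub=1.0, seed=0):
    """Minimal whale optimization loop following the update rules above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))      # initial whale locations
    best = min(X, key=fitness).copy()             # current global optimum X*
    for t in range(t_max):
        a = 2.0 - 2.0 * t / t_max                 # a decreases linearly 2 -> 0
        for i in range(n_whales):
            p = rng.random()                      # behavior selector
            A = 2.0 * a * rng.random() - a        # adjustment scalar A = 2ar - a
            C = 2.0 * rng.random()                # search-range coefficient C = 2r
            if p < 0.5:
                if abs(A) < 1:                    # encircle the current best
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                             # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    D = np.abs(C * Xr - X[i])
                    X[i] = Xr - A * D
            else:                                 # spiral bubble-net update (b = 1)
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best
```

On a simple convex test function such as the sphere function, the loop drives the population toward the minimum at the origin.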

3. Experimental Results and Discussions

3.1. Network Construction

The specific process of optimizing a BP neural network using the WOA is described in Figure 13. The relevant steps are as follows: (1) data were subjected to normalization pre-processing; (2) the optimal network topology was determined by an exhaustive search; (3) the BP neural network was initialized, including determination of the network input and output structures, initial connection weights and thresholds; (4) the WOA population was randomly initialized, where each individual represented the weights and thresholds of a BP neural network; (5) WOA calculation was performed by taking the training error of the BP neural network as the fitness value and the network weights and thresholds as the population individual; (6) the above optimization and updating process was repeated to gradually approach the optimal population location and the minimum fitness value (iterative updating); (7) the optimal network weights and thresholds were obtained through the optimization process; (8) the network was trained using the optimized network weights and thresholds; and (9) after completion of the training, the optimized neural network was used to output the prediction results.
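Step (5) hinges on a fitness function that decodes a candidate vector into network weights and thresholds and returns the training error. A minimal sketch for a one-hidden-layer network is given below; the tanh hidden activation, the layer sizes and the name make_fitness are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

def make_fitness(X, y, n_in, n_hidden, n_out):
    """Build a WOA fitness function: decode a flat parameter vector of length
    n_in*n_hidden + n_hidden + n_hidden*n_out + n_out into the weights and
    thresholds of a one-hidden-layer BP network and return its training MSE."""
    n_w1 = n_in * n_hidden
    def fitness(v):
        W1 = v[:n_w1].reshape(n_in, n_hidden)          # input-to-hidden weights
        b1 = v[n_w1:n_w1 + n_hidden]                   # hidden thresholds
        off = n_w1 + n_hidden
        W2 = v[off:off + n_hidden * n_out].reshape(n_hidden, n_out)
        b2 = v[off + n_hidden * n_out:]                # output thresholds
        h = np.tanh(X @ W1 + b1)                       # hidden layer
        pred = h @ W2 + b2                             # linear output layer
        return float(((pred - y) ** 2).mean())         # training MSE as fitness
    return fitness
```

The vector returned by the optimizer is then decoded the same way to initialize the BP network before its final training.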

3.2. Network Training

The WOA-BP neural network was trained according to the above parameter settings. Meanwhile, the particle swarm optimization (PSO) algorithm and the genetic algorithm (GA) were employed to optimize the BP neural network, and the training effect was compared with that of the WOA-BP neural network [14,15]. The fitness iteration curves of protein and leukocyte items are presented in Figure 13. Clearly, the three optimization algorithms can all significantly enhance the searching fitness and improve the model performance, especially the WOA-BP algorithm, which exhibits the fastest convergence and the lowest final fitness during the iterative process. According to Table 3, the MAEs and RMSEs of BP neural networks optimized by the three algorithms all decreased significantly on the test set. Compared with the PSO and GA, the WOA performs better in terms of MAE, RMSE and fitness, suggesting that it has the fastest convergence during optimization, the highest prediction accuracy and the best prediction effect.
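The comparison metrics named above follow their standard definitions and can be computed directly (a small sketch; function names are ours):

```python
import numpy as np

def mae(y, p):
    # Mean absolute error
    return float(np.abs(y - p).mean())

def rmse(y, p):
    # Root mean squared error
    return float(np.sqrt(((y - p) ** 2).mean()))

def r2(y, p):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = ((y - p) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return float(1.0 - ss_res / ss_tot)
```

A perfect predictor gives MAE = RMSE = 0 and R² = 1; predicting the mean of the targets gives R² = 0.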

3.3. Results Analysis

The prediction results for protein and leukocyte concentrations in urine obtained with the various algorithms are presented in Figure 14. The predicted values of the four models are all close to the actual values. In particular, for the three optimized models (PSO-BP, GA-BP and WOA-BP), the predicted values almost coincide with the actual values, indicating excellent prediction accuracy. By contrast, although the predictions of the standard BP algorithm are also close to the actual values, deviations exist on some samples. Since the two devices perform detection simultaneously and in parallel, the total response time is taken as the larger of the two values, 52.0 ± 2.5 s, which is below 60 s. Figure 15 displays the prediction errors of the various algorithms. Combining the two figures, the prediction errors of PSO-BP are small and stable, with relative errors mostly between −10% and 10%. GA-BP also exhibits small prediction errors on most samples, but errors of nearly −60% on individual samples (e.g., protein sample 7). The overall prediction error of WOA-BP is the smallest and most stable, with relative errors mostly concentrated between −5% and 5%.
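The relative errors quoted here are signed percentage deviations of predicted from actual values. A minimal check, using the first three protein samples from Table 4 as example data:

```python
import numpy as np

# First three protein samples from Table 4 (actual vs. predicted, mg/dL).
actual = np.array([500.0, 450.0, 400.0])
predicted = np.array([500.0475, 457.5366, 404.6016])

rel_err = (predicted - actual) / actual * 100  # signed relative error, %
print(np.round(rel_err, 2))                    # [0.01 1.67 1.15]
```

All three samples fall well inside the ±5% band reported for WOA-BP.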
The predicted protein results were verified against manual detection results. To prepare a 500 mg/dL bovine serum albumin (BSA) solution, 0.5 g of BSA powder was first weighed with a precision electronic balance and placed into a sterile 50 mL centrifuge tube. After 30 mL of sterile 1× PBS buffer was added slowly, the tube was shaken gently to allow preliminary dissolution of the powder and then inverted slowly to mix the solution until the powder was fully dissolved; no bubbles should be generated during the whole process. The fully dissolved solution was transferred into a 100 mL volumetric flask, and sterile 1× PBS buffer was added to the mark. Finally, the volumetric flask was inverted slowly several times to mix the solution uniformly [16]. The prepared solution was stored at 4 °C. Protein solutions for the remaining concentration gradients were prepared in the same way. The urine protein detection was based on the principle of the "protein error of pH indicators". The reagent blocks used for protein detection contained the tetrabromophenol blue indicator and a buffer. Positively charged groups on the proteins bind the tetrabromophenol blue anions, shifting the indicator response and producing a color change from yellowish green to bluish green. At low concentrations the test strip appears yellow or light green, and as the concentration rises the color gradually deepens to bluish green. For each gradient, 20 μL was instilled onto the reagent blocks. After reacting for 30 s, the reagent blocks were placed in the data acquisition system to read the corresponding RGB values; the data for each gradient were averaged over ten acquisitions. These RGB values were corrected and fed into the neural network as input features, and the corresponding concentration results were obtained by prediction.
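A quick arithmetic check of the stock preparation: 0.5 g of BSA made up to a 100 mL final volume corresponds to the 500 mg/dL target. The gradient list below is illustrative only, since the exact gradient recipe is not specified here.

```python
# Sanity check: 0.5 g BSA brought to 100 mL with PBS gives the target stock.
mass_mg = 0.5 * 1000           # 0.5 g of BSA powder, in mg
volume_dl = 100 / 100.0        # 100 mL final volume = 1 dL
stock_conc = mass_mg / volume_dl
print(stock_conc)              # 500.0 mg/dL

# Masses (g) needed for other gradients at the same 100 mL final volume
# (illustrative gradient values, not the paper's exact recipe).
gradients = [450, 400, 300, 200, 100, 50]   # mg/dL
masses_g = [c * volume_dl / 1000 for c in gradients]
print(masses_g)                # [0.45, 0.4, 0.3, 0.2, 0.1, 0.05]
```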
Table 4 details the prediction results of test samples.
Figure 16 displays the goodness-of-fit plot drawn from the data in the above table; the dashed diagonal line represents perfect agreement between actual and predicted values, and each scatter point pairs an actual value with its corresponding predicted value. The scatter distribution lies very close to the diagonal, indicating good agreement between the model predictions and the actual values. The measured results were also analyzed quantitatively: the MAE, RMSE and R2 of the predicted versus actual values were 3.1415, 4.328 and 0.99931, respectively. The R2 value was superior to that of the direct regression of concentration against RGB values, suggesting that the average deviation between the model predictions and the actual values was small and that the model explains the real data well.
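The reported metrics can be reproduced directly from the protein data in Table 4, for example with a short Python check:

```python
import numpy as np

# Actual and predicted protein concentrations (mg/dL) from Table 4.
actual = np.array([500, 450, 400, 350, 300, 250, 200,
                   150, 100, 75, 50, 30, 15, 0], dtype=float)
predicted = np.array([500.0475, 457.5366, 404.6016, 358.8658, 305.5325,
                      256.276, 203.2454, 145.2632, 99.3458, 75.0586,
                      48.9074, 29.0233, 15.0114, 0.3451])

err = predicted - actual
mae = np.mean(np.abs(err))                 # mean absolute error
rmse = np.sqrt(np.mean(err ** 2))          # root-mean-square error
r2 = 1 - np.sum(err ** 2) / np.sum((actual - actual.mean()) ** 2)

print(round(float(mae), 4), round(float(rmse), 3), round(float(r2), 5))
# → 3.1415 4.328 0.99931
```

These match the MAE, RMSE and R2 quoted in the text, confirming they are computed over the fourteen test samples.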

4. Conclusions

In this study, a fully automated detection device for urine test strips was designed, and a urine component concentration analysis system based on an image acquisition module and a WOA-BP neural network was constructed. The device automatically drops samples onto the test strip, the image acquisition module collects the color information of the strip, and the WOA-BP neural network performs quantitative regression prediction, thereby achieving accurate prediction of urinary protein and leukocyte concentrations. The experimental results show that the analysis system predicts the concentrations of urine samples with high accuracy and reliability: the MAE and RMSE between predicted and actual values are small, and the R2 coefficient is approximately 1, indicating strong predictive ability. The proposed detection method analyzes only the protein and leukocyte items on the test strips; different detection items on different test strips must be analyzed separately, each requiring its own trained network, which incurs a considerable time cost. The present study provides a new idea and method for the development of urine dipstick detection, with both theoretical and application value. In the future, the hardware performance and the optimization algorithm can be further improved, and the sample data and the diversity of test strip detection items should be expanded to enhance the model's applicability and meet broader clinical needs.

Author Contributions

Conceptualization, Z.W. and J.H.; methodology, Q.C.; software, Y.Y.; validation, X.Y.; formal analysis, Y.W.; investigation, C.S.; resources, Q.C.; data curation, Y.Y.; writing—original draft preparation, Z.W. and J.H.; writing—review and editing, Y.Z.; visualization, Z.Z.; supervision, D.T.; project administration, Z.W.; funding acquisition, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study is funded by the Jilin Province Science and Technology Development Plan Project (No. 20240404062ZP).

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study and due to technical limitations. Requests to access the datasets should be directed to 2024101566@mails.cust.edu.cn.

Conflicts of Interest

Author Dachun Tang was employed by Changchun Chunqiu Technology Development Co., Ltd. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Kapoor, A.; Raghunathan, M.; Kumar, P.; Tripathi, S.C.; Haque, S.; Pal, D.B. Molecularly Imprinted Polymers Coupled with Cellulosic Paper-Based Analytical Devices for Biosensing Applications. Indian J. Microbiol. 2024, 65, 69–91.
  2. Roy, J.S.; Nesakumar, N.; Kulandaisamy, A.J.; Rayappan, J.B.B. Technological Advances in Biosensors for the Detection of Health Biomarkers. In Protein Biomarkers: Discovery and Applications in Clinical Diagnostics; Singh, S.K., Chandra, P., Eds.; Springer: Singapore, 2024.
  3. Mokni, M.; Tlili, A.; Khalij, Y.; Attia, G.; Zerrouki, C.; Hmida, W.; Othmane, A.; Bouslama, A.; Omezzine, A.; Fourati, N. Designing a Simple Electrochemical Genosensor for the Detection of Urinary PCA3, a Prostate Cancer Biomarker. Micromachines 2024, 15, 602.
  4. Almawgani, A.H.M.; Sorathiya, V.; Soni, U.; Golani, J.; Abdelrahman Ali, Y.A. Multi-layered MXene and GST Material–Based Reflective Index Sensor: Numerical Study and Predication of Behaviour Using Machine Learning. Plasmonics 2024, 20, 3505–3522.
  5. Whelan, A.; Elsayed, R.; Bellofiore, A.; Anastasiu, D.C. Selective Partitioned Regression for Accurate Kidney Health Monitoring. Ann. Biomed. Eng. 2024, 52, 1448–1462.
  6. Song, Y.; Liu, M.; Wang, F.; Zhu, J.; Hu, A.; Sun, N. Gesture Recognition Based on a Convolutional Neural Network–Bidirectional Long Short-Term Memory Network for a Wearable Wrist Sensor with Multi-Walled Carbon Nanotube/Cotton Fabric Material. Micromachines 2024, 15, 185.
  7. Ji, X.; Wang, B.; Zhang, Z.; Xiang, Y.; Yang, H.; Pan, R.; Li, J. One-Step Dry-Etching Fabrication of Tunable Two-Hierarchical Nanostructures. Micromachines 2024, 15, 1160.
  8. Garlan, B.; Rabehi, A.; Ngo, K.; Neveu, S.; Askari Moghadam, R.; Kokabi, H. Miniaturized Pathogen Detection System Using Magnetic Nanoparticles and Microfluidics Technology. Micromachines 2024, 15, 1272.
  9. Gu, Y.; Wang, J.; Luo, Z.; Luo, X.; Lin, L.L.; Ni, S.; Wang, C.; Chen, H.; Su, Z.; Lu, Y.; et al. Multiwavelength Surface-Enhanced Raman Scattering Fingerprints of Human Urine for Cancer Diagnosis. ACS Sens. 2024, 9, 5999–6010.
  10. Bhaiyya, M.; Panigrahi, D.; Rewatkar, P.; Haick, H. Role of Machine Learning Assisted Biosensors in Point-of-Care-Testing For Clinical Decisions. ACS Sens. 2024, 9, 4495–4519.
  11. Ahmed, K.; Bui, F.M.; Wu, F.-X. PreOBP_ML: Machine Learning Algorithms for Prediction of Optical Biosensor Parameters. Micromachines 2023, 14, 1174.
  12. Arshad, S.; Yaseen, S.; Nawaz, H.; Majeed, M.I.; Rashid, N.; Ali, A.; Shahzadi, A.; Shafique, H.; Rehman, A.; Maryam, A.; et al. Biochemical Profiling of Iron Deficiency Anemia by Using SERS and Multivariate Analysis of Low Molecular Weight Fractions of Serum. Plasmonics 2025.
  13. Fang, W.; Wu, J.; Cheng, M.; Zhu, X.; Du, M.; Chen, C.; Liao, W.; Zhi, K.; Pan, W. Diagnosis of invasive fungal infections: Challenges and recent developments. J. Biomed. Sci. 2023, 30, 42.
  14. Zea, M.; Ben Halima, H.; Villa, R.; Nemeir, I.A.; Zine, N.; Errachid, A.; Gabriel, G. Salivary Cortisol Detection with a Fully Inkjet-Printed Paper-Based Electrochemical Sensor. Micromachines 2024, 15, 1252.
  15. Shi, L.; Gong, P.; Li, M.; Song, D.; Zhang, H.; Wang, Z.; Feng, X. Chronic lymphocytic leukemia (CLL) screening and abnormality detection based on multi-layer fluorescence imaging signal enhancement and compensation. J. Cancer Res. Clin. Oncol. 2025, 151, 106.
  16. Kim, S.-K. Contact Hole Shrinkage: Simulation Study of Resist Flow Process and Its Application to Block Copolymers. Micromachines 2024, 15, 1151.
Figure 1. Test strip color recognition device.
Figure 2. Specific structure of the test strip feeding module.
Figure 3. Test strip transport module.
Figure 4. Waveform of liquid detected under an empty-needle condition.
Figure 5. RGB color model.
Figure 6. Visual perception-based color space.
Figure 7. Schematic diagram of reagent block reflection.
Figure 8. Processing flow of urine test strip image.
Figure 9. The detection effect of the trained model.
Figure 10. Color block images collected by the device.
Figure 11. Protein concentration–RGB value fitting relationship.
Figure 12. Leukocyte concentration–RGB value fitting relationship.
Figure 13. Fitness iteration curves of networks optimized by different algorithms. (A) Proteins. (B) Leukocytes.
Figure 14. Predicted value comparison among networks optimized by different algorithms. (A) Proteins. (B) Leukocytes.
Figure 15. Prediction error comparison among networks optimized by different algorithms. (A) Proteins. (B) Leukocytes.
Figure 16. Goodness of fit curves.
Table 1. Regression equation of protein concentration–RGB value fitting relationship.

| Category | Regression Equation | R2 |
|---|---|---|
| C-R | y = 45.17 × exp(−x/15.67) + 170.50 × exp(−x/345.99) + 12.55 | 0.9983 |
| C-G | y = 27.86 × exp(−x/24.47) + 53.51 × exp(−x/484.70) + 158.29 | 0.9919 |
| C-B | y = −23.29 × exp(−x/23.61) − 707,813.13 × exp(−x/1.39) + 707,957.78 | 0.9578 |
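As a consistency check, the C-R regression from the table can be evaluated at the concentration extremes and compared with the measured R values listed later in Table 4 (a sketch; the coefficients are taken verbatim from the table):

```python
import math

def r_channel(x):
    # C-R regression from Table 1: R value as a function of
    # protein concentration x (mg/dL).
    return (45.17 * math.exp(-x / 15.67)
            + 170.50 * math.exp(-x / 345.99)
            + 12.55)

print(round(r_channel(0), 2))    # 228.22 (Table 4 measures R = 227.63 at 0 mg/dL)
print(round(r_channel(500), 2))  # 52.74  (Table 4 measures R = 53.35 at 500 mg/dL)
```

Both fitted endpoints agree with the measured channel values to within about one RGB unit.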
Table 2. Regression equation of leukocyte concentration–RGB value fitting relationship.

| Category | Regression Equation | R2 |
|---|---|---|
| C-R | y = 180.69 + 74.17/(1 + exp((x − 112.27)/38.33)) | 0.9900 |
| C-G | y = 77.88 + 174.77/(1 + (x/83.35)^0.77) | 0.9991 |
| C-B | y = 143.66 + 87.49/(1 + (x/105.65)^1.02) | 0.9904 |
Table 3. Parameter comparison among networks optimized by different algorithms.

| Network | MAE (Protein) | RMSE (Protein) | Fitness (Protein) | MAE (Leukocyte) | RMSE (Leukocyte) | Fitness (Leukocyte) |
|---|---|---|---|---|---|---|
| Standard BP | 4.4254 | 6.0815 | / | 2.1294 | 3.0597 | / |
| PSO-BP | 3.4683 | 4.2069 | 37.2043 | 1.0384 | 2.4665 | 31.3348 |
| GA-BP | 3.3687 | 4.1220 | 15.7919 | 0.7124 | 2.311 | 8.3059 |
| WOA-BP | 3.1503 | 3.8618 | 1.3161 | 0.1836 | 1.8119 | 1.535 |
Table 4. Prediction results of test samples.

| Serial Number | True Concentration (mg/dL) | Input R | Input G | Input B | Predicted Concentration (mg/dL) |
|---|---|---|---|---|---|
| 1 | 500 | 53.35 | 177.52 | 171.15 | 500.0475 |
| 2 | 450 | 58.99 | 179.44 | 167.47 | 457.5366 |
| 3 | 400 | 66.21 | 181.74 | 164.93 | 404.6016 |
| 4 | 350 | 74.55 | 184.28 | 162.4 | 358.8658 |
| 5 | 300 | 82.64 | 186.66 | 157.8 | 305.5325 |
| 6 | 250 | 95.33 | 190.24 | 157.33 | 256.276 |
| 7 | 200 | 108.2 | 193.72 | 154.78 | 203.2454 |
| 8 | 150 | 123.08 | 197.62 | 152.21 | 145.2632 |
| 9 | 100 | 142.94 | 203.39 | 149.06 | 99.3458 |
| 10 | 75 | 150.2 | 205.43 | 147.48 | 75.0586 |
| 11 | 50 | 161.97 | 210.17 | 144.37 | 48.9074 |
| 12 | 30 | 171.31 | 213.58 | 145.07 | 29.0233 |
| 13 | 15 | 196.36 | 228.5 | 127.35 | 15.0114 |
| 14 | 0 | 227.63 | 238.84 | 122.85 | 0.3451 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
