Robotic System for Blood Serum Aliquoting Based on a Neural Network Model of Machine Vision
Abstract
1. Introduction
2. The Structure of the Control System for the Process of Aliquoting
- The manipulator controller, in accordance with the task, moves the gripper to the test tube to grasp it and transport it to the workspace.
- The controller of the manipulator, having received information about the successful completion of the transportation, transfers it to the computer.
- The video camera, on command from the computer, takes an image of the test tube and transfers the image to the computer.
- The computer segments the image and determines the levels separating the fractions in the test tube.
- The levels are recalculated into the delta robot coordinate system, and the coordinates are transferred to the delta robot controller.
- The delta robot controller moves the dosing head towards the tube.
- The controller of the delta robot, having received information about the successful completion of the movement, transfers it to the computer.
- The dosing head, at the command of the computer, draws liquid from the test tube.
- In a loop whose number of iterations is determined by the calculated number of aliquots:
- (a) the computer sends the coordinates of the next mini-tube to the delta robot controller;
- (b) the delta robot controller moves the dosing head towards the mini-tube;
- (c) the delta robot controller, having received information about the successful completion of the movement, transmits it to the computer;
- (d) the dosing head, at the command of the computer, dispenses the liquid into the mini-tube.
- The computer issues a command to the manipulator controller to unload the test tube.
- The manipulator controller transports the test tube out of the workspace.
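The sequence above can be sketched as a simple orchestration loop. The `Recorder` class and step strings below are placeholders for the real controller interfaces, not the system's actual API:

```python
# Illustrative sketch of the aliquoting control sequence; the device class
# and step names are assumptions, not the real controller interfaces.

class Recorder:
    """Stand-in for the hardware controllers: just logs each step."""
    def __init__(self):
        self.log = []

    def do(self, step):
        self.log.append(step)

def run_aliquoting_cycle(dev, mini_tube_coords):
    dev.do("manipulator: load tube into workspace")
    dev.do("camera: capture tube image")
    dev.do("computer: segment image and find fraction levels")
    dev.do("delta robot: move dosing head to tube")
    dev.do("dosing head: aspirate serum")
    # One iteration per aliquot, as determined by the level-detection step.
    for xyz in mini_tube_coords:
        dev.do(f"delta robot: move dosing head to mini-tube at {xyz}")
        dev.do("dosing head: dispense one aliquot")
    dev.do("manipulator: unload tube from workspace")

dev = Recorder()
run_aliquoting_cycle(dev, [(0, 0), (30, 0), (60, 0)])
```

Each move command is acknowledged before the next step in the real system; the sketch collapses those handshakes into single log entries.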
3. Vision Algorithm Using the HSV Color Model
3.1. Algorithm Synthesis
Algorithm 1: Liquid Level Detection Using Training Dataset. Input: image I, training dataset D, N, r, p, and the HSV boundary values HB, SB, VB. [Pseudocode listing: nested loops over the image pixels (HeightI × WidthI) and the dataset (WidthD × HeightD), with early termination via a Finish flag.]
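A minimal sketch of the HSV-based level detection idea, assuming the input image is already in HSV and replacing Algorithm 1's training-dataset scan with fixed per-channel bounds (HB, SB, VB); the function name and the row-fraction parameter are illustrative:

```python
import numpy as np

# Sketch only: fixed HSV bounds stand in for the training-dataset match
# of Algorithm 1, and min_row_fraction is an assumed tuning parameter.

def detect_liquid_rows(hsv_img, h_bounds, s_bounds, v_bounds,
                       min_row_fraction=0.3):
    """Return (top_row, bottom_row) of the serum-colored region, or None."""
    h, s, v = hsv_img[..., 0], hsv_img[..., 1], hsv_img[..., 2]
    mask = ((h_bounds[0] <= h) & (h <= h_bounds[1]) &
            (s_bounds[0] <= s) & (s <= s_bounds[1]) &
            (v_bounds[0] <= v) & (v <= v_bounds[1]))
    # A row belongs to the liquid if enough of its pixels match the color.
    row_hits = mask.mean(axis=1) >= min_row_fraction
    rows = np.flatnonzero(row_hits)
    if rows.size == 0:
        return None
    return int(rows[0]), int(rows[-1])
```

The first and last matching rows give the pixel levels that are later converted into the delta robot coordinate system.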
3.2. Simulation Results
3.2.1. First Scenario
3.2.2. Second Scenario
4. Vision Algorithm Using Convolutional Neural Network
4.1. Image Segmentation Methods
- With manual segmentation, the region of interest is selected by hand. This is very time-consuming and not applicable to automated systems, but it is actively used to create the labeled training samples required by neural-network-based intelligent segmentation methods.
- Segmentation methods based on pixel intensity are very simple and sometimes give good results, but they do not take spatial information into account and are sensitive to noise and inhomogeneities in intensity. Intensity-based segmentation methods include:
- 2.1. Threshold methods [25], which divide images into two or more parts based on intensity thresholds. A threshold is a value on the image histogram that divides it into two parts: the first part is all pixels whose intensity is, for example, greater than or equal to the threshold, and the second part is all remaining pixels. Multiple thresholds are used to select multiple objects.
- 2.2. Region-growing methods [26], which are interactive methods requiring some seed points to be set, after which the image is divided into regions according to a predetermined intensity-based rule. The disadvantage of such methods is the need to choose the seed points and the resulting dependence of the algorithm's output on the human factor.
- Clustering methods are conveniently illustrated by the most widely used one, k-means. K-means iteratively recalculates the mean intensity of each class and segments the image by assigning each pixel to the class with the closest mean. Although clustering methods do not require labeled data, their results depend on the choice of input parameters. They generalize to different data sets and are usually computationally cheap, but they are sensitive to noise and therefore do not always give the desired result [27].
- Neural network methods are currently successfully used to solve many problems related to image processing. Such methods are resistant to noise and take into account spatial information. In most cases, convolutional neural networks (CNN) are used to solve segmentation problems.
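The threshold and k-means approaches above can be sketched as follows. Both functions and their parameter defaults are illustrative; the k-means variant clusters one-dimensional pixel intensities and uses deterministic quantile initialization instead of random seeding:

```python
import numpy as np

def threshold_segment(img, t):
    """Binary threshold: pixels with intensity >= t form the object."""
    return np.asarray(img) >= t

def kmeans_intensity(img, k=2, iters=20):
    """Segment a grayscale image by clustering its pixel intensities.

    Quantile initialization is an assumed choice, made here so the
    sketch is deterministic; standard k-means seeds centers randomly.
    """
    pixels = img.reshape(-1).astype(float)
    centers = np.quantile(pixels, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each pixel to the class with the closest mean intensity.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Recompute each class mean; keep the old center for empty classes.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(img.shape), np.sort(centers)
```

As the text notes, neither method uses spatial information: two pixels with the same intensity always land in the same class regardless of where they are in the image.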
4.2. Description of U-Net
4.3. Formation of the Training Sample
- Serum: the upper fraction of the contents of the test tube. It is important to define the top and bottom boundaries of the object.
- Fibrin threads: primarily, the fact of their presence in the image is important, since it determines the required distance from the end-effector of the pipette to the lower boundary of the upper fraction.
- Clot: the lower fraction of the contents of the test tube.
- Upper empty (air-filled) part of the tube.
4.4. Neural Network Training and Results
4.5. Algorithm for Determining the Boundary Level between Blood Phases
- The height c of the visible part of the tube is calculated (in pixels).
- Using the known height h of the visible part of the tube in mm, the scale of the image is determined as m = h/c (this establishes the ratio between the linear dimensions of objects in mm and their dimensions in pixels).
- The lowest point A of the empty part of the tube and the vertical distance a from it to point D (in pixels) are determined.
- The initial immersion depth of the pipette is calculated as l1 = l − h + a · m + e1, where l is the length of the test tube and e1 is the margin at the upper boundary.
- Depending on the presence of fibrin threads in the image, the margin e2 at the lower boundary is set.
- The highest point B of the clot or fibrin threads and the vertical distance b from it to point D (in pixels) are determined.
- The final immersion depth of the pipette is calculated as l2 = l − h + b · m − e2.
- The number of aliquots is determined as n = [(l2 − l1)S/V0], where S is the area of the inner cross-section of the test tube, V0 is the volume of one aliquot, and the square brackets denote taking the integer part (rounding down to the nearest integer).
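The final steps above can be written out numerically as a sketch. All values in the usage example are illustrative, and the distance a is assumed to enter the first formula the same way b enters the second (scaled by m):

```python
import math

# Numeric sketch of the immersion-depth and aliquot-count formulas.
# All lengths in mm, distances a and b in pixels, scale m in mm/pixel.

def immersion_depths(l, h, a, b, m, e1, e2):
    """Initial (l1) and final (l2) pipette immersion depths."""
    l1 = l - h + a * m + e1   # just below the upper serum boundary
    l2 = l - h + b * m - e2   # safety margin above the clot/fibrin
    return l1, l2

def aliquot_count(l1, l2, S, V0):
    """n = [(l2 - l1) * S / V0], rounded down to the nearest integer."""
    return math.floor((l2 - l1) * S / V0)

# Illustrative values: 100 mm tube, 60 mm visible, 0.2 mm/pixel scale,
# S = 100 mm^2 inner cross-section, V0 = 500 mm^3 (0.5 mL) aliquots.
l1, l2 = immersion_depths(l=100, h=60, a=50, b=200, m=0.2, e1=2, e2=3)
n = aliquot_count(l1, l2, S=100, V0=500)
```

Rounding down guarantees that the pipette never aspirates past the final depth l2, i.e. only full aliquots are dispensed.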
5. Comparative Analysis of Algorithms
6. Experimental Results and Analysis
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Conflicts of Interest
References
- Malm, J.; Fehniger, T.E.; Danmyr, P.; Végvári, A.; Welinder, C.; Lindberg, H.; Appelqvist, R.; Sjödin, K.; Wieslander, E.; Laurell, T.; et al. Developments in biobanking workflow standardization providing sample integrity and stability. J. Proteomics 2013, 95, 38–45.
- Plebani, M.; Carraro, P. Mistakes in a stat laboratory: Types and frequency. Clin. Chem. 1997, 43, 1348–1351.
- Plebani, M. Laboratory errors: How to improve pre- and post-analytical phases? Biochem. Med. 2007, 17, 5–9.
- Ross, J.; Boone, D. Assessing the effect of mistakes in the total testing process on the quality of patient care. In 1989 Institute of Critical Issues in Health Laboratory Practice; Martin, L., Wagner, W., Essien, J.D.K., Eds.; DuPont Press: Minneapolis, MN, USA, 1991.
- Sivakova, O.V.; Pokrovskaya, M.S.; Efimova, I.A.; Meshkov, A.N.; Metelskaya, V.A.; Drapkina, O.M. Quality control of serum and plasma samples for scientific research. Profil. Med. 2019, 22, 91–97.
- Malm, J.; Végvári, A.; Rezeli, M.; Upton, P.; Danmyr, P.; Nilsson, R.; Steinfelder, E.; Marko-Varga, G.J. Large scale biobanking of blood—The importance of high density sample processing procedures. J. Proteom. 2012, 76, 116–124.
- Zhang, C.; Huang, Y.; Fang, Y.; Liao, P.; Wu, Y.; Chen, H.; Chen, Z.; Deng, Y.; Li, S.; Liu, H.; et al. The Liquid Level Detection System Based on Pressure Sensor. J. Nanosci. Nanotechnol. 2019, 19, 2049–2053.
- Fleischer, H.; Baumann, D.; Joshi, S.; Chu, X.; Roddelkopf, T.; Klos, M.; Thurow, K. Analytical Measurements and Efficient Process Generation Using a Dual-Arm Robot Equipped with Electronic Pipettes. Energies 2018, 11, 2567.
- Fleischer, H.; Drews, R.R.; Janson, J.; Chinna Patlolla, B.R.; Chu, X.; Klos, M.; Thurow, K. Application of a Dual-Arm Robot in Complex Sample Preparation and Measurement Processes. J. Assoc. Lab. Autom. 2016, 21, 671–681.
- Preda, N.; Ferraguti, F.; De Rossi, G.; Secchi, C.; Muradore, R.; Fiorini, P.; Bonfé, M. A Cognitive Robot Control Architecture for Autonomous Execution of Surgical Tasks. J. Med. Robot. Res. 2016, 01, 1650008.
- Sánchez-Brizuela, G.; Santos-Criado, F.-J.; Sanz-Gobernado, D.; Fuente-Lopez, E.; Fraile, J.-C.; Pérez-Turiel, J.; Cisnal, A. Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks. Sensors 2022, 22, 5180.
- Eppel, S.; Xu, H.; Wang, Y.; Aspuru-Guzik, A. Predicting 3D shapes, masks, and properties of materials, liquids, and objects inside transparent containers, using the TransProteus CGI dataset. arXiv 2021, arXiv:2109.07577.
- Eppel, S. Computer vision for liquid samples in hospitals and medical labs using hierarchical image segmentation and relations prediction. arXiv 2021, arXiv:2105.01456.
- Suzuki, S.; Abe, K. Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 1985, 30, 32–46.
- Prabhu, C.A.; Chandrasekar, A. An Automatic Threshold Segmentation and Mining Optimum Credential Features by Using HSV Model. 3D Res. 2019, 10, 18.
- Kolkur, S.; Kalbande, D.; Shimpi, P.; Bapat, C.; Jatakia, J. Human Skin Detection Using RGB, HSV and YCbCr Color Models. In Proceedings of the International Conference on Communication and Signal Processing, ICCASP, Shanghai, China, 20–25 March 2016.
- Novozamsky, A.; Flusser, J.; Tacheci, I.; Sulik, L.; Krejcar, O. Automatic blood detection in capsule endoscopy video. J. Biomed. Opt. 2016, 21, 126007.
- Joy, D.T.; Kaur, G.; Chugh, A.; Bajaj, S.B. Computer Vision for Color Detection. Int. J. Innov. Res. Comput. Sci. Technol. 2021, 9, 53–59.
- Noreen, U.; Jamil, M.; Ahmad, N. Hand Detection Using HSV Model. Int. J. Sci. Technol. Res. 2016, 5, 195–197.
- Cai, Z.; Luo, W.; Ren, Z.; Huang, H. Color Recognition of Video Object Based on HSV Model. Appl. Mech. Mater. 2011, 143–144, 721–725.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015.
- Malyshev, D.; Rybak, L.; Carbone, G.; Semenenko, T.; Nozdracheva, A. Optimal design of a parallel manipulator for aliquoting of biomaterials considering workspace and singularity zones. Appl. Sci. 2022, 12, 2070.
- Voloshkin, A.; Rybak, L.; Cherkasov, V.; Carbone, G. Design of gripping devices based on a globoid transmission for a robotic biomaterial aliquoting system. Robotica 2022, 40, 4570–4585.
- Voloshkin, A.; Rybak, L.; Carbone, G.; Cherkasov, V. Novel Gripper Design for Transporting of Biosample Tubes. In ROMANSY 24-Robot Design, Dynamics and Control: Proceedings of the 24th CISM IFToMM Symposium, Udine, Italy, 4–7 July 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 255–262.
- Niu, Z.; Li, H. Research and analysis of threshold segmentation algorithms in image processing. J. Phys. Conf. Ser. 2019, 1237, 022122.
- Angelina, S.; Suresh, L.; Veni, S. Image segmentation based on genetic algorithm for region growth and region merging. In Proceedings of the 2012 International Conference on Computing, Electronics and Electrical Technologies, Nagercoil, India, 21–22 March 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 970–974.
- Sarmah, S. A grid-density based technique for finding clusters in satellite image. Pattern Recognit. Lett. 2012, 33, 589–604.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Khalapyan, S.; Rybak, L.; Nebolsin, V.; Malyshev, D.; Nozdracheva, A.; Semenenko, T.; Gavrilov, D. Robotic System for Blood Serum Aliquoting Based on a Neural Network Model of Machine Vision. Machines 2023, 11, 349. https://doi.org/10.3390/machines11030349