Graphical Image Region Extraction with K-Means Clustering and Watershed
Abstract
1. Introduction
2. Related Work
2.1. K-Means Segmentation
2.2. Watershed Segmentation
2.3. Hybrid Segmentation
3. Methods
3.1. System Description
- Step 1:
- Preprocessing steps resize the input image to 224 × 224 and reshape the image data into a three-dimensional array. When the image is first read, it is stored as a one-dimensional numerical array containing information for every pixel. Because the system works with RGB images, reshaping this information into three dimensions ensures that one channel is used for each of the red, green and blue color components (an end-to-end sketch of all six steps is given after this list).
- Step 2:
- A K-Means function with random initial centroids and a user-defined value for k is executed. This step performs an initial segmentation of the input image and removes noise caused by inferior image quality. In typical implementations, using randomized cluster centroids may negatively impact the clustering result. However, in this scenario, k = 2 is persistently used as a default value, given that the main objective of this algorithm is to separate foreground from background, where randomized cluster centroids should not produce adverse results.
- Step 3:
- A gray-scale version of the K-Means segmented image is generated to be fed into a threshold function. Thresholding is a method in which each pixel of an image is replaced by a black pixel if its intensity is lower than a user-defined fixed constant, T, or by a white pixel if its intensity is higher than that constant [79]. For the proposed system, a constant T value of 225 was used, because the K-Means step already performs a basic segmentation of image components by creating two distinct color labels, thus eliminating the need for a lower T value (generally 128). The goal of this thresholding is to generate a binary mask containing the information necessary for distance mapping.
- Step 4:
- The binary mask produced in step 3 contains information regarding foreground- and background-labeled pixels and is then used to generate a distance map calculated through the exact distance transform formula. The exact distance transform computes the distance from non-zero (foreground) points to the nearest zero (background) points and accepts binary input [80]. The distance map is a necessary input for the Watershed function, as it labels each pixel with the distance to the nearest obstacle pixel (in this case, another object boundary). Connected Component Analysis is then performed over the distance map, labelling regions based on 8-way pixel connectivity and generating markers that ensure all regions are correctly found.
- Step 5:
- The Watershed function is performed over the distance map with the mask generated in step 3, using the markers defined by the CCA function, as described in the scikit-image documentation [81]. This step unifies the processes performed in earlier steps and aims to separate any overlapping objects. Finally, individual objects are labelled for extraction via the contours method. The minimum distance to consider for each local maximum can be set by a user-defined parameter.
- Step 6:
- Because the output of the previous Watershed step is a binary image, an edge detection system such as Canny is not necessary for contour detection. The proposed method uses the border-following algorithm proposed by Suzuki and Abe [82] to find extreme outer contours through the labels created in the previous step, outlining each separate object and extracting it into a separate file. Each of the outputs is presented as a binary mask.
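The six steps map naturally onto standard library calls. The sketch below is a minimal, illustrative implementation assuming a Python stack with OpenCV, SciPy and scikit-image (the libraries cited in Sections 3.1 and 3.2); the function name `extract_regions`, its default parameters, and the choice of threshold polarity are our assumptions, not the authors' exact code.

```python
import cv2
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def extract_regions(path, k=2, T=225, min_dist=20):
    # Step 1: read the image and resize it to 224 x 224, keeping the three colour channels.
    img = cv2.resize(cv2.imread(path), (224, 224))

    # Step 2: K-Means with random initial centroids over the pixel colours.
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    quantised = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)

    # Step 3: grayscale conversion + fixed threshold T to obtain a binary mask.
    gray = cv2.cvtColor(quantised, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY follows the description in step 3 (dark -> 0, bright -> 255);
    # THRESH_BINARY_INV may be needed when the graphic is darker than its background.
    _, mask = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)

    # Step 4: exact Euclidean distance transform, then 8-connected component markers.
    dist = ndimage.distance_transform_edt(mask)
    # The user-defined minimum distance between local maxima mentioned in step 5.
    coords = peak_local_max(dist, min_distance=min_dist, labels=mask)
    peaks = np.zeros(dist.shape, dtype=bool)
    peaks[tuple(coords.T)] = True
    markers, _ = ndimage.label(peaks, structure=np.ones((3, 3)))  # 8-way CCA

    # Step 5: watershed over the (negated) distance map, restricted to the mask.
    regions = watershed(-dist, markers, mask=mask > 0)

    # Step 6: one binary mask per label; extreme outer contours via Suzuki's algorithm.
    outputs = []
    for region_id in np.unique(regions):
        if region_id == 0:  # 0 is background
            continue
        obj = np.where(regions == region_id, 255, 0).astype(np.uint8)
        contours, _ = cv2.findContours(obj, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        outputs.append((obj, contours))
    return outputs
```

Calling, for instance, `extract_regions("logo.png", k=2, T=225, min_dist=20)` mirrors the default parameter choices discussed in steps 2, 3 and 5; each returned tuple holds one object's binary mask and its outer contour, ready to be written to a separate file.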
3.2. Time Complexity of the Proposed Approach
- Preprocessing: The resizing of the image to be processed and its representation in a three-dimensional RGB array require approximately n operations each, where n is the number of pixels in the image.
- K-Means: Considering the sequential implementation of the K-Means algorithm, the amount of computation within each K-Means iteration is constant. Each iteration consists of distance calculations and centroid updates. For $n$ data points of dimension $d$ and $k$ clusters, distance calculations require roughly $3nkd + nk + n$ operations, where $3nkd$ is the number of operations needed to compute the squared Euclidean distances, $nk$ is the number of operations needed to find the closest centroid for each data point, and $n$ is the number of operations needed for the reassignment of each data point to the cluster whose centroid is closest to it. Centroid updates require approximately $nd + kd$ operations. Hence, over $i$ iterations, the estimated number of operations performed by the sequential implementation of the K-Means algorithm is $\mathcal{O}(nkdi)$.
- Grayscale function and Threshold: Each block requires n operations.
- Exact Distance Transform: The proposed approach calculates the distance map using the function provided by the OpenCV library, which implements the algorithm presented by [84], whose number of operations is $\mathcal{O}(n)$, since the image is processed in a fixed number of raster scans.
- Watershed: The Watershed function is performed over the previously generated distance map and using the markers defined by the CCA function, according to the algorithm proposed by Beucher and Meyer [88]. Considering a $c$-connectivity, where $c = 8$, and the implementation proposed by Bieniek and Moga [89], the estimated number of operations required for the execution of the Watershed algorithm is $c \cdot n$, i.e., linear in the number of pixels.
- Contour Detection: Performed through the implementation of the algorithm proposed by Suzuki and Abe [82], it requires $c \cdot n$ operations, with $c = 8$, for an 8-connectivity neighbourhood.
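Putting these per-stage estimates together gives a rough overall operation count. The display below is a sketch under the assumptions stated above, with $n$ the number of pixels, $k$ the number of clusters, $d$ the number of colour channels, $i$ the number of K-Means iterations and $c$ the connectivity used by the Watershed and contour-following steps:

```latex
\underbrace{2n}_{\text{preprocessing}}
+ \underbrace{\mathcal{O}(nkdi)}_{\text{K-Means}}
+ \underbrace{2n}_{\text{grayscale + threshold}}
+ \underbrace{\mathcal{O}(n)}_{\text{distance transform}}
+ \underbrace{cn}_{\text{Watershed}}
+ \underbrace{cn}_{\text{contours}}
\;=\; \mathcal{O}(nkdi)
```

For the default configuration ($k = 2$, $d = 3$, a bounded number of iterations and $c = 8$), the whole pipeline therefore scales linearly with the number of pixels.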
3.3. K-Means Clustering
3.4. Watershed Algorithm
3.5. Connected Component Analysis
3.6. Contour Detection
4. Results
4.1. Dataset
4.2. Experimental Results
4.2.1. Experiment A
4.2.2. Experiment B
4.2.3. Experiment C
4.2.4. Experiment D
4.2.5. Experiment E
4.2.6. Experiment F
4.2.7. Experiment G
4.3. System Variables Sensitivity
5. Comparison of the Proposed Solution with Other Approaches
5.1. Comparison with Colour-Texture Segmentation Algorithms
5.2. Comparison with Deep Learning Approaches
- Typical deep learning approaches require exhaustive training phases so that the model can correctly identify objects in an image. The results produced by the proposed system are achieved with only image processing techniques and require no training data.
- Deep Learning mechanisms are usually very limited regarding what they can identify in images, typically being capable of correctly identifying a small number of very general classes.
- Deep Learning mechanisms work much better with natural images and typically achieve very high accuracy on real-life datasets. Building and training a network for graphical images (trademark images) requires a huge amount of data and would most likely never reach the necessary accuracy, given the extreme variety of color composition, graphical styles, calligraphy and typography used in such images.
- The system proposed in this work can be manually adjusted to each image’s complexity and graphical density, whereas deep learning models would require significant amounts of fine-tuning and retraining to achieve the same level of versatility.
- Unlabeled Detection: The proposed system is not prepared to identify a series of predetermined objects in images. Regions are proposed with regard to pixel blobs and distance measuring, meaning it is practically unlimited in regard to what it can extract.
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
CCA | Connected Component Analysis |
CBIR | Content Based Image Retrieval |
CNN | Convolutional Neural Network |
R-CNN | Region-Based Convolutional Neural Network |
OCR | Optical Character Recognition |
References
- Meng, B.C.C.; Damanhuri, N.S.; Othman, N.A. Smart traffic light control system using image processing. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1088, 012021. [Google Scholar] [CrossRef]
- Padmapriya, B.; Kesavamurthi, T.; Ferose, H.W. Edge Based Image Segmentation Technique for Detection and Estimation of the Bladder Wall Thickness. Int. Conf. Commun. Technol. Syst. Des. Procedia Eng. 2012, 30, 828–835. [Google Scholar] [CrossRef] [Green Version]
- Al-amri, S.S.; Kalyankar, N.V.; Khamitkar, S.D. Image Segmentation by Using Edge Detection. Int. J. Comput. Sci. Eng. 2010, 2, 804–807. [Google Scholar]
- Shih, F.Y.; Cheng, S. Automatic seeded region growing for color image segmentation. Image Vis. Comput. 2005, 23, 877–886. [Google Scholar] [CrossRef]
- Zhou, D.; Shao, Y. Region growing for image segmentation using an extended PCNN model. IET Image Process. 2018, 12, 729–737. [Google Scholar] [CrossRef]
- Mondal, S.; Bours, P. A study on continuous authentication using a combination of keystroke and mouse biometrics. Neurocomputing 2017, 230, 1–22. [Google Scholar] [CrossRef]
- Shukla, A.; Kanungo, S. An efficient clustering-based segmentation approach for biometric image. Recent Pat. Comput. Sci. 2021, 4, 803–819. [Google Scholar] [CrossRef]
- Selvathi, D.; Chandralekha, R. Fetal biometric based abnormality detection during prenatal development using deep learning techniques. Multidimens. Syst. Signal Process. 2022, 33, 1–15. [Google Scholar] [CrossRef]
- Müller, D.; Kramer, F. MIScnn: A framework for medical image segmentation with convolutional neural networks and deep learning. BMC Med. Imaging 2021, 21, 12. [Google Scholar] [CrossRef]
- You, H.; Yu, L.; Tian, S.; Cai, W. DR-Net: Dual-rotation network with feature map enhancement for medical image segmentation. Complex Intell. Syst. 2022, 8, 611–623. [Google Scholar] [CrossRef]
- Wang, R.; Chen, S.; Ji, C.; Fan, J.; Li, Y. Boundary-aware context neural network for medical image segmentation. J. Med. Image Anal. 2022, 78, 102395. [Google Scholar] [CrossRef] [PubMed]
- Jaware, T.H.; Badgujar, R.D.; Patil, P.G. Crop disease detection using image segmentation. World J. Sci. Technol. 2012, 2, 190–194. [Google Scholar]
- Febrinanto, F.G.; Dewi, C.; Triwiratno, A. The Implementation of K-Means Algorithm as Image Segmenting Method in Identifying the Citrus Leaves Disease. IOP Conf. Ser. Earth Environ. Sci. 2019, 243, 1–11. [Google Scholar] [CrossRef]
- Hemamalini, V.; Rajarajeswari, S.; Nachiyappan, S.; Sambath, M.; Devi, T.; Singh, B.K.; Raghuvanshi, A. Food Quality Inspection and Grading Using Efficient Image Segmentation and Machine Learning-Based System. J. Food Qual. 2022, 2022, 5262294. [Google Scholar] [CrossRef]
- Lilhore, U.K.; Imoize, A.L.; Lee, C.-C.; Simaiya, S.; Pani, S.K.; Goyal, N.; Kumar, A.; Li, C.-T. Enhanced Convolutional Neural Network Model for Cassava Leaf Disease Identification and Classification. Mathematics 2022, 10, 580. [Google Scholar] [CrossRef]
- Kurmi, Y.; Saxena, P.; Kirar, B.S.; Gangwar, S.; Chaurasia, V.; Goel, A. Deep CNN model for crops’ diseases detection using leaf images. Multidimens. Syst. Signal Process. 2022, 4, 1–20. [Google Scholar] [CrossRef]
- Akoum, A.H. Automatic Traffic Using Image Processing. J. Softw. Eng. Appl. 2017, 10, 8. [Google Scholar] [CrossRef] [Green Version]
- Sharma, A.; Chaturvedi, R.; Bhargava, A. A novel opposition based improved firefly algorithm for multilevel image segmentation. Multimed. Tools Appl. 2022, 81, 15521–15544. [Google Scholar] [CrossRef]
- Kheradmandi, N.; Mehranfar, V. A critical review and comparative study on image segmentation-based techniques for pavement crack detection. J. Constr. Build. Mater. 2022, 321, 126162. [Google Scholar] [CrossRef]
- Farooq, M.U.; Ahmed, A.; Khan, S.M.; Nawaz, M.B. Estimation of Traffic Occupancy using Image Segmentation. Int. J. Eng. Technol. Appl. Sci. Res. 2021, 11, 7291–7295. [Google Scholar] [CrossRef]
- Kaymak, Ç.; Uçar, A. Semantic Image Segmentation for Autonomous Driving Using Fully Convolutional Networks. In Proceedings of the 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 21–22 September 2019; pp. 1–8. [Google Scholar]
- Hofmarcher, M.; Unterthiner, T.; Antonio, T. Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning Lecture Notes in Computer Science; Springer: Berlin, Germany, 2019; Volume 11700, pp. 285–296. [Google Scholar]
- Sagar, A.; Soundrapandiyan, R. Semantic Segmentation with Multi Scale Spatial Attention for Self Driving Cars. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Virtual, 11–17 October 2021; pp. 2650–2656. [Google Scholar]
- Sellat, Q.; Bisoy, S.; Priyadarshini, R.; Vidyarthi, A.; Kautish, S.; Barik, R.K. Intelligent Semantic Segmentation for Self-Driving Vehicles Using Deep Learning. Comput. Intell. Neurosci. 2022, 2022, 6390260. [Google Scholar] [CrossRef] [PubMed]
- Avenash, R.; Viswanath, P. Semantic Segmentation of Satellite Images using a Modified CNN with Hard-Swish Activation Function. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 413–420. [Google Scholar]
- Manisha, P.; Jayadevan, R.; Sheeba, V.S. Content-based image retrieval through semantic image segmentation. AIP Conf. Proc. 2020, 2222, 030008. [Google Scholar]
- Ouhda, M.; El Asnaoui, K.; Ouanan, M.; Aksasse, B. Using Image Segmentation in Content Based Image Retrieval Method. In Advanced Information Technology, Services and Systems Lecture Notes in Networks and Systems; Springer: Berlin, Germany, 2018; Volume 25. [Google Scholar]
- Kurmi, Y.; Chaurasia, V. Content-based image retrieval algorithm for nuclei segmentation in histopathology images. Multimed. Tools Appl. 2021, 80, 3017–3037. [Google Scholar] [CrossRef]
- Kugunavar, S.; Prabhakar, C.J. Content-Based Medical Image Retrieval Using Delaunay Triangulation Segmentation Technique. J. Inf. Technol. Res. 2021, 14, 48–66. [Google Scholar] [CrossRef]
- Singh, T.R.; Roy, S.; Singh, O.I.; Sinam, T.; Singh, K.M. A New Local Adaptive Thresholding Technique in Binarization. Int. J. Comput. Sci. Issues 2011, 8, 271–277. [Google Scholar]
- Bhargavi, K.; Jyothi, S. A Survey on Threshold Based Segmentation Technique in Image Processing. Int. J. Innov. Res. Dev. 2014, 3, 234–239. [Google Scholar]
- Abdel-Basset, M.; Chang, V.; Mohamed, R. A novel equilibrium optimization algorithm for multi-thresholding image segmentation problems. Neural Comput. Appl. 2021, 33, 10685–10718. [Google Scholar] [CrossRef]
- Houssein, E.H.; Helmy, B.E.; Oliva, D.; Elngar, A.A.; Shaban, H. A novel Black Widow Optimization algorithm for multilevel thresholding image segmentation. Expert Syst. Appl. 2021, 167, 114159. [Google Scholar] [CrossRef]
- Gupta, D.; Anand, R.S. A hybrid edge-based segmentation approach for ultrasound medical images. Int. J. Biomed. Signal Process. Control 2017, 31, 116–126. [Google Scholar] [CrossRef]
- Iannizzotto, G.; Vita, L. Fast and accurate edge-based segmentation with no contour smoothing in 2D real images. IEEE Trans. Image Process. 2020, 9, 1232–1237. [Google Scholar] [CrossRef]
- Gould, S.; Gao, T.; Koller, D. Region-based Segmentation and Object Detection. Adv. Neural Inf. Process. Syst. 2009, 22, 1–9. [Google Scholar]
- Wang, Z.; Jensen, J.R.; Im, J. An automatic region-based image segmentation algorithm for remote sensing applications. J. Environ. Model. Softw. 2010, 25, 1149–1165. [Google Scholar] [CrossRef]
- Mazouzi, S.; Guessoum, Z. A fast and fully distributed method for region-based image segmentation. J. Real Time Image Process. 2021, 18, 793–806. [Google Scholar] [CrossRef]
- Vlaminck, M.; Heidbuchel, R.; Philips, W.; Luong, H. Region-Based CNN for Anomaly Detection in PV Power Plants Using Aerial Imagery. Sensors 2022, 22, 1244. [Google Scholar] [CrossRef] [PubMed]
- Zheng, X.; Lei, Q.; Yao, R.; Gong, Y.; Yin, Q. Image segmentation based on adaptive K-means algorithm. J. Image Video Process. 2018, 2018, 68. [Google Scholar] [CrossRef]
- Yang, Z.; Chung, F.; Shitong, W. Robust fuzzy clustering-based image segmentation. Int. J. Appl. Soft Comput. 2009, 9, 80–84. [Google Scholar] [CrossRef]
- Hooda, H.; Verma, O.P. Fuzzy clustering using gravitational search algorithm for brain image segmentation. Multimed. Tools Appl. 2022, 4, 1–20. [Google Scholar] [CrossRef]
- Khrissi, L.; El Akkad, N.; Satori, H.; Satori, K. Clustering method and sine cosine algorithm for image segmentation. Evol. Intell. 2022, 15, 669–682. [Google Scholar] [CrossRef]
- Oskouei, A.G.; Hashemzadeh, M. CGFFCM: A color image segmentation method based on cluster-weight and feature-weight learning. Softw. Impacts 2022, 11, 100228. [Google Scholar] [CrossRef]
- Kucharski, A.; Fabijańska, A. CNN-watershed: A watershed transform with predicted markers for corneal endothelium image segmentation. Biomed. Signal Process. Control 2021, 68, 102805. [Google Scholar] [CrossRef]
- Tian, X.; Zhang, C.; Li, J.; Fan, S.; Yang, Y.; Huang, W. Detection of early decay on citrus using LW-NIR hyperspectral reflectance imaging coupled with two-band ratio and improved watershed segmentation algorithm. Food Chem. 2021, 360, 130077. [Google Scholar] [CrossRef] [PubMed]
- Jia, F.; Tao, Z.; Wang, F. Wooden pallet image segmentation based on Otsu and marker watershed. J. Phys. Conf. Ser. 2021, 1976, 012005. [Google Scholar] [CrossRef]
- Kornilov, A.; Safonov, I.; Yakimchuk, I. A Review of Watershed Implementations for Segmentation of Volumetric Images. J. Imaging 2022, 8, 127. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.; Yan, S.; Lu, N.; Yang, D.; Fan, C.; Lv, H.; Wang, S.; Zhu, X.; Zhao, Y.; Wang, Y.; et al. Automatic segmentation of foveal avascular zone based on adaptive watershed algorithm in retinal optical coherence tomography angiography images. J. Innov. Opt. Health Sci. 2022, 15, 2242001. [Google Scholar] [CrossRef]
- Michailovich, O.; Rathi, Y.; Tannenbaum, A. Image Segmentation Using Active Contours Driven by the Bhattacharyya Gradient Flow. IEEE Trans. Image Process. 2007, 16, 2787–2801. [Google Scholar] [CrossRef] [Green Version]
- Hemalatha, R.J.; Thamizhvani, T.R.; Dhivya, A.J.; Joseph, J.E.; Babu, B.; Chandrasekaran, R. Active Contour Based Segmentation Techniques for Medical Image Analysis. Med. Biol. Image Anal. 2018, 7, 17–34. [Google Scholar]
- Dong, B.; Weng, G.; Jin, R. Active contour model driven by Self Organizing Maps for image segmentation. Expert Syst. Appl. 2021, 177, 114948. [Google Scholar] [CrossRef]
- Yang, Y.; Hou, X.; Ren, H. Efficient active contour model for medical image segmentation and correction based on edge and region information. Expert Syst. Appl. 2022, 194, 116436. [Google Scholar] [CrossRef]
- Boykov, Y.; Funka-Lea, G. Graph Cuts and Efficient N-D Image Segmentation. Int. J. Comput. Vis. 2006, 70, 109–131. [Google Scholar] [CrossRef] [Green Version]
- Chen, X.; Udupa, J.K.; Bağcı, U.; Zhuge, Y.; Yao, J. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models. IEEE Trans. Image Process. 2012, 21, 2035–2046. [Google Scholar] [CrossRef] [Green Version]
- Devi, M.A.; Sheeba, J.I.; Joseph, K.S. Neutrosophic graph cut-based segmentation scheme for efficient cervical cancer detection. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 1352–1360. [Google Scholar]
- Hajdowska, K.; Student, S.; Borys, D. Graph based method for cell segmentation and detection in live-cell fluorescence microscope imaging. Biomed. Signal Process. Control 2022, 71, 103071. [Google Scholar] [CrossRef]
- Kato, Z.; Pong, T.C. A Markov random field image segmentation model for color textured images. J. Image Vis. Comput. 2006, 24, 1103–1114. [Google Scholar] [CrossRef] [Green Version]
- Venmathi, A.R.; Ganesh, E.N.; Kumaratharan, N. Image Segmentation based on Markov Random Field Probabilistic Approach. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019. [Google Scholar]
- Sasmal, P.; Bhuyan, M.K.; Dutta, S.; Iwahori, Y. An unsupervised approach of colonic polyp segmentation using adaptive markov random fields. Pattern Recognit. Lett. 2022, 154, 7–15. [Google Scholar] [CrossRef]
- Song, J.; Yuan, L. Brain tissue segmentation via non-local fuzzy c-means clustering combined with Markov random field. J. Math. Biosci. Eng. 2021, 19, 1891–1908. [Google Scholar] [CrossRef]
- Meena, S.; Palaniappan, K.; Seetharaman, G. User driven sparse point-based image segmentation. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar]
- Huang, J. Efficient Image Segmentation Method Based on Sparse Subspace Clustering. In Proceedings of the International Conference on Communications and Signal Processing, Melmaruvathur, Tamilnadu, India, 6–8 April 2016. [Google Scholar]
- Zhai, H.; Zhang, H.; Zhang, L.; Li, P. Sparsity-Based Clustering for Large Hyperspectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10410–10424. [Google Scholar] [CrossRef]
- Tejas, P.; Tejas, P.; Padma, S.K. A Hybrid Segmentation Technique for Brain Tumor Detection in MRI Images. Lect. Notes Netw. Syst. 2022, 300, 334–342. [Google Scholar]
- Desai, U.; Kamath, S.; Shetty, A.D.; Prabhu, M.S. Computer-Aided Detection for Early Detection of Lung Cancer Using CT Images. Lect. Notes Netw. Syst. 2022, 213, 287–301. [Google Scholar]
- Ng, H.P.; Ong, S.H.; Foong, K.W.C.; Goh, P.S.; Nowinski, W.L. Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm. In Proceedings of the 2006 IEEE Southwest Symposium on Image Analysis and Interpretation, Denver, CO, USA, 26–28 March 2006. [Google Scholar]
- Zhou, J.; Yang, M. Bone Region Segmentation in Medical Images Based on Improved Watershed Algorithm. J. Comput. Intell. Neurosci. 2022, 2022, 3975853. [Google Scholar] [CrossRef]
- Malik, J.; Belongie, S.; Leung, T.; Shi, J. Contour and texture analysis for image segmentation. Int. J. Comput. Vis. 2001, 43, 7–27. [Google Scholar] [CrossRef]
- Risheh, A.; Tavakolian, P.; Melinkov, A.; Mandelis, A. Infrared computer vision in non-destructive imaging: Sharp delineation of subsurface defect boundaries in enhanced truncated correlation photothermal coherence tomography images using K-means clustering. NDT Int. J. 2022, 125, 102568. [Google Scholar] [CrossRef]
- Lian, J.; Li, H.; Li, N.; Cai, Q. An Adaptive Mesh Segmentation via Iterative K-Means Clustering. Lect. Notes Electr. Eng. 2022, 805, 193–201. [Google Scholar]
- Nasor, M.; Obaid, W. Mesenteric cyst detection and segmentation by multiple K-means clustering and iterative Gaussian filtering. Int. J. Electr. Comput. Eng. 2021, 11, 4932–4941. [Google Scholar] [CrossRef]
- Patil, S.; Naik, A.; Sequeira, M.; Naik, G. An Algorithm for Pre-processing of Areca Nut for Quality Classification. Lect. Notes Netw. Syst. 2022, 300, 79–93. [Google Scholar]
- Hall, M.E.; Black, M.S.; Gold, G.E.; Levenston, M.E. Validation of watershed-based segmentation of the cartilage surface from sequential CT arthrography scans. Quant. Imaging Med. Surg. 2022, 12, 1–14. [Google Scholar] [CrossRef]
- Banerjee, A.; Dutta, H.S. A Reliable and Fast Detection Technique for Lung Cancer Using Digital Image Processing. Lect. Notes Netw. Syst. 2022, 292, 58–64. [Google Scholar]
- Dixit, A.; Bag, S. Adaptive clustering-based approach for forgery detection in images containing similar appearing but authentic objects. Appl. Soft Comput. 2021, 113, 107893. [Google Scholar] [CrossRef]
- Shen, X.; Ma, H.; Liu, R.; Li, H.; He, J.; Wu, X. Lesion segmentation in breast ultrasound images using the optimized marked watershed method. Biomed. Eng. Online 2021, 20, 112. [Google Scholar] [CrossRef]
- Hu, P.; Wang, W.; Li, Q.; Wang, T. Touching text line segmentation combined local baseline and connected component for Uchen Tibetan historical documents. Inf. Process. Manag. 2021, 58, 102689. [Google Scholar] [CrossRef]
- Gonzalez, R.; Woods, E.R. Thresholding. In Digital Image Processing; Pearson Education: London, UK, 2002; pp. 595–611. [Google Scholar]
- Scipy. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.distance_transform_edt.html (accessed on 4 February 2022).
- Scikit-Image. Available online: https://scikit-image.org/docs/stable/api/skimage.segmentation.html?highlight=watershed#skimage.segmentation.watershed (accessed on 1 February 2022).
- Suzuki, S.; Abe, K. Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 1985, 30, 32–46. [Google Scholar] [CrossRef]
- Mittal, H.; Pandey, A.C.; Saraswat, M.; Kumar, S.; Pal, R.; Modwel, G. A comprehensive survey of image segmentation: Clustering methods, performance parameters, and benchmark datasets. Multimed. Tools Appl. 2021, 1174, 1–26. [Google Scholar] [CrossRef] [PubMed]
- Borgefors, G. Distance Transformations in Digital Images. Comput. Vis. Graph. Image Process. 1986, 34, 344–371. [Google Scholar] [CrossRef]
- Soille, P. Morphological Image Analysis: Principles and Applications; Springer: Berlin, Germany, 1998. [Google Scholar]
- Rosenfeld, A.; Pfaltz, J.L. Sequential operations in digital picture processing. J. ACM 1966, 13, 471–494. [Google Scholar] [CrossRef]
- Kornilov, A.S.; Safonov, I.V. An Overview of Watershed Algorithm Implementations in Open Source Libraries. J. Imaging 2018, 4, 123. [Google Scholar] [CrossRef] [Green Version]
- Beucher, S.; Meyer, F. The morphological approach to segmentation: The watershed transformation. In Mathematical Morphology in Image Processing; CRC Press: Boca Raton, FL, USA, 1993; pp. 433–481. [Google Scholar]
- Bieniek, A.; Moga, A. An efficient watershed algorithm based on connected components. Pattern Recognit. 2000, 33, 907–916. [Google Scholar] [CrossRef]
- Kriegel, H.P.; Schubert, E.; Zimek, A. The (black) art of runtime evaluation: Are we comparing algorithms or implementations? Knowl. Inf. Syst. 2017, 52, 341–378. [Google Scholar] [CrossRef]
- Scikit-Image. Available online: https://scikit-image.org/docs/dev/ (accessed on 21 September 2021).
- Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 585–598. [Google Scholar] [CrossRef] [Green Version]
- Smith, K. Precalculus: A Functional Approach to Graphing and Problem Solving; Jones and Bartlett Publishers: Burlington, MA, USA, 2013; Volume 13, p. 8. [Google Scholar]
- Connected Component Labelling. Available online: https://homepages.inf.ed.ac.uk/rbf/HIPR2/label.htm (accessed on 22 September 2021).
- Zhang, C.; Hu, Y.; Zhang, T.; An, H.; Xu, W. The Application of Wavelet in Face Image Pre-Processing. In Proceedings of the 2010 4th International Conference on Bioinformatics and Biomedical Engineering, Chengdu, China, 18–20 June 2010; pp. 1–4. [Google Scholar] [CrossRef]
- Khalsa, N.N.; Ingole, V.T. Optimal Image Compression Technique based on Wavelet Transforms. Int. J. Adv. Res. Eng. Technol. 2014, 5, 341–378. [Google Scholar]
- Ilea, D.E.; Whelan, P.F. Image segmentation based on the integration of colour–texture descriptors—A review. Int. J. Pattern Recognit. 2011, 44, 2479–2501. [Google Scholar] [CrossRef]
- Hoang, M.A.; Geusebroek, J.M.; Smeulders, A.W. Colour texture measurement and segmentation. Int. J. Signal Process. 2005, 85, 265–275. [Google Scholar] [CrossRef]
- Deng, Y.; Manjunath, B.S. Unsupervised segmentation of colour–Texture regions in images and video. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 800–810. [Google Scholar] [CrossRef] [Green Version]
- Yang, A.Y.; Wright, J.; Ma, Y.; Sastry, S. Unsupervised segmentation of natural images via lossy data compression. Comput. Vis. Image Underst. 2008, 110, 212–225. [Google Scholar] [CrossRef]
- Chen, J.; Pappas, T.N.; Mojsilovic, A.; Rogowitz, B.E. Adaptive perceptual colour–Texture image segmentation. IEEE Trans. Image Process. 2005, 14, 1524–1536. [Google Scholar] [CrossRef] [PubMed]
- Han, S.; Tao, W.; Wang, D.; Tai, X.C.; Wu, X. Image segmentation based on GrabCut framework integrating multiscale non linear structure tensor. IEEE Trans. Image Process. 2009, 18, 2289–2302. [Google Scholar] [PubMed]
- Rother, C.; Kolmogorov, V.; Blake, A. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 2004, 23, 309–314. [Google Scholar] [CrossRef]
- Carson, C.; Belongie, S.; Greenspan, H.; Malik, J. Blobworld: Image segmentation using expectation-maximization and its application to image querying. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1026–1038. [Google Scholar] [CrossRef] [Green Version]
- Ilea, D.E.; Whelan, P.F. CTex—An adaptive unsupervised segmentation algorithm based on colour–texture coherence. IEEE Trans. Image Process. 2008, 17, 1926–1939. [Google Scholar] [CrossRef] [Green Version]
Dataset | Sample Size | Elapsed Time |
---|---|---|
National jurisdiction trademark images | 19,102 | 2865 s |
National jurisdiction trademark images | 1 | 0.1538 s |
International jurisdiction trademark images | 45,690 | 6542 s |
International jurisdiction trademark images | 1 | 0.1542 s |
Colour-Texture Segmentation Algorithm | Summary |
---|---|
Hoang et al. [98] | The colour and texture information is included in the segmentation process. The RGB image is converted into a Gaussian colour model. Primary colour–texture features are extracted from each colour channel using a set of Gabor filters. Feature vectors, whose dimensionality is reduced by applying Principal Component Analysis, are used as inputs for a K-Means algorithm, providing an initial segmentation that is refined by a region-merging procedure. |
JSEG-Deng and Manjunath [99] | Consisting of two independent steps: color quantization and spatial segmentation. In the first step, image colors are quantized in different classes, which are used to create an image class map. The image segmentation results from the application of a region growing method to the set of multiscale images, formed through the application of the class map based segmentation evaluation criterion. |
CTM-Yang et al. [100] | Colour–texture features at pixel level are extracted simultaneously by stacking the intensity values within a 7x7 window for each band of the CIE Lab converted image. Segmentation is formulated as a data clustering process. To reduce the dimensionality of the colour–texture vectors, Principal Component Analysis is used. To overcome the difficulty related to the fact that often the colour–texture information cannot be described with normal distributions, a coding-based clustering algorithm is employed that is able to accommodate input data defined by degenerate Gaussian mixtures. |
Chen et al. [101] | Segmentation of natural images into perceptually distinct regions with application to content-based image retrieval. Local colour features are extracted using a spatially Adaptive Clustering Algorithm. Texture features are computed through a multi-scale frequency decomposition procedure. Colour and texture features are integrated using a region growing algorithm that generates a primary segmentation that is improved through a post-processing step that implements a border refinement procedure. |
Han et al. [102] | A segmentation framework developed to identify the foreground object in natural colour images. Colour features are extracted from the CIE Lab converted colour image. Texture features are computed from the luminance component of the input image using the multi-scale nonlinear structure tensor. To reduce the dimensionality of the colour–texture feature space, the colour information is clustered using a binary tree quantisation procedure and the features in the texture domain are clustered using a K-Means algorithm. The resulting colour and texture features are modelled by Gaussian Mixture Models and integrated into a framework based on the GrabCut algorithm. The accuracy of the algorithm is improved by an adaptive feature integration strategy that consists of adjusting a weighting factor for colour and texture in the segmentation process. |
GrabCut-Rother et al. [103] | A graph-cut approach extension, with a simpler user interaction and an iterative version of the optimization method. An algorithm for “border matting” is used to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. |
Blobworld-Carson et al. [104] | The goal of the proposal is to partition the input image into perceptually coherent regions. It includes anisotropy, polarity and contrast features in a multi-scale texture model. Colour features are extracted on an independent channel from the CIE Lab converted image previously filtered with a Gaussian operator. For automatic colour–texture image segmentation, the distribution of the colour, texture and position features are jointly modeled using Gaussian Mixture Models. The Blobworld algorithm is able to segment the image into compact regions, making it suitable for integration into a content-based image retrieval system. |
CTex-Ilea and Whelan [105] | Colour and texture are treated on separate channels. Colour segmentation involves the statistical analysis of data using multi-space colour representations. After filtering the input data using a Gradient-Boosted Forward and Backward anisotropic diffusion algorithm, the colour segmentation algorithm extracts the dominant colours and identifies the optimal number of clusters using an unsupervised procedure based on a Self Organising Map network. After, the image is analysed in a complementary colour space where the number of clusters previously calculated performs the synchronisation between the two computational streams of the algorithm. Finally, clustered results obtained for each colour space form the input for a multi-space clustering process that outputs the final colour segmented image. The extraction of the texture features from the luminance component of the original image uses a multi-channel texture decomposition technique based on Gabor filters. The colour and texture features are integrated in an Adaptive Spatial K-Means framework that partitions the data mapped into the colour-texture space by adaptively sampling the local texture continuity and the local colour smoothness in the image. |
Malik et al. [69] | An algorithm for partitioning grayscale images into disjoint regions of coherent brightness and texture, where cues of colors and texture differences of natural images are exploited simultaneously. Contours are treated in the intervening contour framework, while texture is analysed using textons. Given the different domain of applicability of each cue, a gating operator is introduced based on the texturedness of the neighbourhood at a pixel. Given a local measure of the similarity between nearby pixels, the spectral graph theoretic framework of normalized cuts is used to find partitions of the image in regions of coherent texture and brightness. |