Proceeding Paper

An Automated Medical Diagnosis System for Neoplasm Medical Image Classification Using Supervised and Unsupervised Techniques †

by Sreedhar Kumar Seetharaman 1,2,*, Basant Kumar 3, Manjunath Chikkanjinappa Rajanna 4 and Syed Thouheed Ahmed 5

1 Lincoln Global Postdoctoral Research (LGPR) Program, Lincoln University College, Petaling Jaya 47301, Selangor Darul Ehsan, Malaysia
2 Department of Information Science and Engineering, Sir. M. Visvesvaraya Institute of Technology, Bengaluru 562157, Karnataka, India
3 Department of Mathematics and Computer Science, Modern College of Business and Science, Muscat 133, Oman
4 Department of Computer Science and Engineering, Global Academic Technology, Bengaluru 560098, Karnataka, India
5 School of Computer Science and Engineering, REVA University, Bengaluru 560064, Karnataka, India
* Author to whom correspondence should be addressed.
Presented at the 6th International Electronic Conference on Applied Sciences, 9–11 December 2025; Available online: https://sciforum.net/event/ASEC2025.
Eng. Proc. 2026, 124(1), 49; https://doi.org/10.3390/engproc2026124049
Published: 11 February 2026
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

Abstract

In this research, an improved automated medical prediction system, the Neoplasm Medical Image Classification System (NMICS), is proposed. The NMICS aims to automatically identify whether a given test magnetic resonance image (MRI) belongs to the tumor group or the non-tumor group using machine learning techniques. The system operates in two stages: the Train Medical Image Model (TMIM) stage and the Medical Image Prediction Stage (MIPS). In the TMIM stage, the NMICS performs several distinct operations: improving the quality and consistency of the input medical image data set through standard arithmetic operations; extracting edge features from every medical image in the input set using a CNN; and separating the resulting feature vector set into two distinct clusters, tumor and non-tumor, using the unsupervised k-means clustering technique. In the MIPS stage, the NMICS performs the same preprocessing and feature extraction operations on the test medical image samples. It then maps and classifies the feature vector of each test sample against the trained medical image data set clusters using a KNN classifier. The investigation results show that the NMICS is well suited to diagnosing whether a given medical image belongs to the neoplasm or non-neoplasm group.

1. Introduction

Brain tumor detection is a crucial medical process that aims to identify abnormal growths in the brain [1,2,3,4,5]. These growths, known as tumors, can be either benign or malignant and may impact normal brain functions. Detecting brain tumors early is essential for effective treatment. Various imaging techniques, such as MRI scans, play a vital role in identifying the presence, size, and location of tumors. Advanced technologies, including machine learning algorithms, are increasingly employed to enhance the accuracy and speed of detection. Symptoms of brain tumors can vary, ranging from headaches and nausea to changes in cognitive function. Early detection allows healthcare professionals to act quickly, which increases the chances of successful treatment and lowers the risk of complications [6,7,8,9].
Existing systems for brain tumor detection employ machine learning techniques to analyze medical imaging data such as MRI scans. These algorithms are trained to identify abnormal patterns and anomalies within the brain, helping radiologists and clinicians diagnose tumors with greater accuracy and efficiency. By leveraging features like image segmentation and classification, this technology aids in early detection, ultimately improving patient outcomes and reducing the burden on healthcare professionals. In this paper, a combined approach enables robust analysis of brain MRI for tumor detection: CNNs efficiently capture complex image features, k-means clustering aids in segmentation, and KNN facilitates precise pattern matching for accurate classification. Integrating these techniques provides a holistic solution that leverages the strengths of each algorithm to enhance the accuracy and reliability of brain tumor detection in MRI.

2. Proposed NMICS

The proposed NMICS is designed to partition brain MRIs into two clusters through the dynamic identification of centroids using the K-means clustering algorithm. Additionally, it performs the classification of brain MRIs into tumor and non-tumor categories. This process involves multiple stages, including image preprocessing, edge feature extraction, clustering, and classification. The functional stages of the NMICS are visually depicted in Figure 1.

2.1. Data Collection Stage

A set of brain MRIs deployed in the NMICS is represented as $X = \{X_i\}$ for $i = 1, 2, \dots, n$, where each image $X_i = \{x_{ijr}\}$ for $j = 1, 2, \dots, h$ and $r = 1, 2, \dots, w$. Here, $X_i$ indicates the $i$th input medical image in the MRI set $X$ with $n$ MRIs, $x_{ijr}$ denotes the pixel element at row $j$ and column $r$ of the $i$th MRI, $n$ is the size of the input medical image set, $w$ is the width of each image, and $h$ is the height of each image in the input image repository.
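To make the notation concrete, the image set can be sketched as a list of 2-D arrays. This is a minimal illustration only; the dimensions, pixel values, and variable names below are placeholders, not the actual data set:

```python
import numpy as np

# A toy stand-in for the medical image set X = {X_i}: n grayscale images,
# each with h rows and w columns (sizes and pixel values are placeholders).
n, h, w = 4, 8, 8
rng = np.random.default_rng(0)
X = [rng.integers(0, 256, size=(h, w)) for _ in range(n)]

# X[i][j, r] is the pixel element at row j, column r of the i-th image.
print(len(X), X[0].shape)  # 4 (8, 8)
```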

2.2. Data Preprocessing Stage

Brain MRI preprocessing involves several steps to enhance the quality of the images. Resizing adjusts the dimensions of each image while keeping the aspect ratio constant, which prevents distortion and ensures that structures in the brain are accurately represented. MRIs may also contain some level of random noise that can interfere with the interpretation of important features; Gaussian blur is a smoothing technique that helps reduce this noise.

2.2.1. Resizing

The MRIs undergo re-sizing using an aspect ratio preservation technique, ensuring that the original proportions of the images are maintained throughout the process and that the image dimensions are uniform. To represent the re-sizing process mathematically, let $w_i$ (width) and $h_i$ (height) denote the original dimensions of the $i$th brain MRI in the medical image set $X = \{X_i\}$. The aspect ratios of the width and height are given in Equations (1) and (2):

$AR(w_i) = \dfrac{w_{target}}{w_i}$   (1)

$AR(h_i) = \dfrac{h_{target}}{h_i}$   (2)

where $AR(w_i)$ is the aspect ratio of the $i$th image width, $AR(h_i)$ is the aspect ratio of the $i$th image height, $w_{target}$ is the target width of the image, and $h_{target}$ is the target height of the image. The width and height are then resized with the following equations:

$w_{ri} = w_i \times AR(w_i)$   (3)

$h_{ri} = h_i \times AR(h_i)$   (4)

where $w_{ri}$ is the resized width of the $i$th image and $h_{ri}$ is the resized height of the $i$th image.
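The resizing step can be sketched as follows. This is a minimal illustration under stated assumptions: the paper does not specify an interpolation method, so nearest-neighbour sampling is used here, and the function name and toy image are invented for the example:

```python
import numpy as np

def resize_preserve_aspect(img, target_w, target_h):
    """Nearest-neighbour resize of a 2-D image array to (target_h, target_w).

    Following Equations (1)-(4), the per-axis ratios AR(w_i) and AR(h_i)
    scale the original width w_i and height h_i to the target dimensions.
    """
    h_i, w_i = img.shape
    ar_w = target_w / w_i          # Equation (1)
    ar_h = target_h / h_i          # Equation (2)
    w_r = int(round(w_i * ar_w))   # Equation (3)
    h_r = int(round(h_i * ar_h))   # Equation (4)
    # Nearest-neighbour index maps from the output grid back to the input grid.
    rows = (np.arange(h_r) / ar_h).astype(int).clip(0, h_i - 1)
    cols = (np.arange(w_r) / ar_w).astype(int).clip(0, w_i - 1)
    return img[np.ix_(rows, cols)]

img = np.arange(12.0).reshape(3, 4)        # toy 3x4 "MRI"
out = resize_preserve_aspect(img, 8, 6)    # resize to 6 rows x 8 columns
print(out.shape)                           # (6, 8)
```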

2.2.2. Noise Reduction

Gaussian blur is a common technique employed in preprocessing brain MRIs to reduce noise and enhance the clarity of structural information. Mathematically, the Gaussian blur operation involves convolving the image with a Gaussian kernel. The 2D Gaussian kernel $G$ is given by

$G(i, j) = \dfrac{1}{2\pi\sigma^2} \, e^{-\frac{i^2 + j^2}{2\sigma^2}}$   (5)

where $G(i, j)$ is the value of the Gaussian kernel at the position with row index $i$ and column index $j$, and $\sigma$ is the standard deviation of the Gaussian distribution, which controls the amount of blur. Applying this kernel to the re-sized MRI $\hat{X}_i$ yields the pre-processed MRI $X'_i$:

$X'_i = \hat{X}_i \ast G$   (6)

where $\ast$ denotes 2D convolution, $X'_i$ is the resulting blurred (pre-processed) brain MRI, and $\hat{X} = \{\hat{X}_i\}$, $i = 1, 2, \dots, n$, is the set of re-sized MRIs.
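Equations (5) and (6) can be sketched directly in code. This is an illustrative implementation, not the authors' code; the kernel size, edge-padding mode, and function names are assumptions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G(i, j) per Equation (5)."""
    half = size // 2
    i, j = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(i**2 + j**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so the blur preserves overall intensity

def gaussian_blur(img, size=5, sigma=1.0):
    """Convolve the resized image with G (Equation (6)), edge-padding borders."""
    g = gaussian_kernel(size, sigma)
    half = size // 2
    padded = np.pad(img, half, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + size, c:c + size] * g)
    return out

noisy = np.random.default_rng(0).normal(0.5, 0.1, (32, 32))  # toy noisy "MRI"
smoothed = gaussian_blur(noisy, size=5, sigma=1.0)
print(smoothed.shape)  # (32, 32)
```

Smoothing independent pixel noise with a normalized kernel reduces the image variance while leaving the overall intensity level unchanged, which is the intended denoising effect.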

2.3. Feature Extraction Stage

In this stage, the proposed system extracts edge features from brain MRIs with the goal of identifying unique patterns that distinguish tumor regions from non-tumor areas using a convolutional neural network (CNN). In brain tumor detection using CNNs, feature extraction is a critical step: the network automatically learns relevant features from input images. In this context, the CNN layers identify distinctive patterns or structures in MRIs that are indicative of tumors, such as shapes, textures, or variations in intensity. The network then uses these learned features to classify regions of the image as tumor or non-tumor, making it a powerful tool for automating the detection process and improving diagnostic accuracy. Assuming a single convolutional layer with a filter (kernel) $F$ and bias term $b$, the convolution operation for the $i$th image $X'_i$ can be represented as $Z_i$:

$Z_i = \sum_{m=1}^{M} \sum_{n=1}^{N} X'_i(m, n) \times F(m, n) + b$   (7)

where $Z_i$ is the result of the convolution operation, $X'_i$ denotes the $i$th pre-processed MRI, $F(m, n)$ is the filter weight at position $(m, n)$, and $M$ and $N$ are the dimensions of the filter. Afterward, the ReLU activation function is applied to the convolution result $Z_i$, as defined in Equation (8):

$a_i = \max(0, Z_i)$   (8)

In the next step, assuming max pooling with a pooling window of size $P \times Q$, the pooling operation is applied to the $i$th image to extract the feature map from the brain MRI, as in Equation (9):

$\bar{X}_i(u, v) = \max_{(m, n) \in \text{pooling window}} a_i(m, n)$   (9)

where $\bar{X}_i(u, v)$ is the result of the max pooling operation at position $(u, v)$, $a_i$ is the ReLU-activated map, and the max operation is taken over the pooling window. The set of MRIs after feature extraction and pooling can be represented as $\bar{X} = \{\bar{X}_1, \bar{X}_2, \dots, \bar{X}_n\}$. This notation signifies that $\bar{X}$ consists of images obtained after applying the convolutional layer, ReLU activation, and max pooling to each image in $X'$, i.e., a set of images with extracted features and reduced spatial dimensions. The step-by-step process of the feature extraction operation is presented in Algorithm 1.
Algorithm 1: Feature Extraction
Input: Pre-processed brain MRI set $X' = \{X'_1, X'_2, \dots, X'_n\}$
Output: The set of images $\bar{X} = \{\bar{X}_1, \bar{X}_2, \dots, \bar{X}_n\}$ with extracted features.
Begin
  • Apply the convolution operation to the input image $X'_i$ with a filter $F$ using Equation (7).
  • Apply the ReLU activation function $a_i$ to introduce non-linearity using Equation (8).
  • Apply max pooling to downsample the spatial dimensions of the activation map $a_i$ using Equation (9).
  • Repeat the above steps with multiple filters and multiple layers to extract hierarchical features $\bar{X} = \{\bar{X}_1, \bar{X}_2, \dots, \bar{X}_n\}$ from the input set $X' = \{X'_1, X'_2, \dots, X'_n\}$.
End
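The convolution, ReLU, and max-pooling steps of Algorithm 1 can be sketched as follows. Note that a trained CNN would learn the filter weights $F$; here a fixed Sobel kernel stands in as an edge-detecting filter, and all names and sizes are illustrative:

```python
import numpy as np

def conv2d(img, f, b=0.0):
    """Valid 2-D convolution Z = sum_m sum_n X'(u+m, v+n) F(m, n) + b (Equation (7))."""
    M, N = f.shape
    H, W = img.shape
    out = np.zeros((H - M + 1, W - N + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(img[u:u + M, v:v + N] * f) + b
    return out

def relu(z):
    """ReLU activation a_i = max(0, Z_i) (Equation (8))."""
    return np.maximum(0.0, z)

def max_pool(a, p=2, q=2):
    """Non-overlapping P x Q max pooling (Equation (9))."""
    H, W = a.shape
    H2, W2 = H // p, W // q
    return a[:H2 * p, :W2 * q].reshape(H2, p, W2, q).max(axis=(1, 3))

img = np.random.default_rng(1).random((16, 16))  # toy pre-processed "MRI"
# A trained CNN learns its filters; a fixed Sobel kernel stands in here
# as an edge-detecting filter for illustration.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
features = max_pool(relu(conv2d(img, sobel_x)))
print(features.shape)  # (7, 7): 16x16 -> 14x14 after conv -> 7x7 after pooling
```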

2.4. Training Stage

In this stage, the proposed NMICS iteratively separates the brain MRI set into two clusters, $C_1$ (normal) and $C_2$ (tumor), using the k-means technique [10,11,12], based on the feature set of the MRI set produced in the previous stage through the CNN approach. Each image $\bar{X}_i$ in $\bar{X}$ is represented as a feature vector $v_i = (v_{i1}, v_{i2}, \dots, v_{im})$, where $m$ is the number of features. The first step is to compute the Euclidean distances $d_{i1}$ and $d_{i2}$ between each image's feature vector $v_i$ and the cluster centroids $\bar{C}_1$ (normal) and $\bar{C}_2$ (tumor):

$d_{i1} = \sqrt{\sum_{j=1}^{m} \left( v_{ij} - \bar{C}_{1j} \right)^2}$   (10)

$d_{i2} = \sqrt{\sum_{j=1}^{m} \left( v_{ij} - \bar{C}_{2j} \right)^2}$   (11)

where $d_{i1}$ and $d_{i2}$ are the Euclidean distances between the data point $v_i$ and the centroids $\bar{C}_1$ (normal) and $\bar{C}_2$ (tumor), respectively. During the training process, the image feature set $\bar{X} = \{\bar{X}_i\}$ of the input image set is divided into the two distinct clusters $C_1$ (normal) and $C_2$ (tumor). The assignment of a data point $v_i$ to a cluster is determined by comparing the distances $d_{i1}$ and $d_{i2}$: if the distance to $\bar{C}_1$ is less than or equal to the distance to $\bar{C}_2$, the data point is assigned to cluster $C_1$; otherwise, it is assigned to cluster $C_2$, as in Equation (12). Each centroid is then updated as the mean of its cluster members, as in Equations (13) and (14). The steps involved in the training operation are described in Algorithm 2.

$C(v_i) = \begin{cases} C_1 & \text{if } d_{i1} \le d_{i2} \\ C_2 & \text{otherwise} \end{cases}$   (12)

$\bar{C}_1 = \dfrac{1}{|C_1|} \sum_{v \in C_1} v$   (13)

$\bar{C}_2 = \dfrac{1}{|C_2|} \sum_{v \in C_2} v$   (14)
Algorithm 2: Training Data Model using Clustering
Input: The set of images $\bar{X} = \{\bar{X}_1, \bar{X}_2, \dots, \bar{X}_n\}$ with extracted features.
Output: The two distinct clusters $C_1$ (normal) and $C_2$ (tumor).
Begin
  • Set the pre-determined number of clusters to $k = 2$.
  • Randomly select $k$ feature vectors from the MRI feature set $\bar{X}$ of the input image set $X$ and take them as the initial cluster centroids $\bar{C}_1$ and $\bar{C}_2$.
  • Calculate the Euclidean distance between each image feature vector in $\bar{X} = \{\bar{X}_i\}$, $i = 1, 2, \dots, n$, and the cluster centroids $\bar{C}_1$ and $\bar{C}_2$ using Equations (10) and (11).
  • Iteratively assign each feature vector to its closest cluster, either $C_1$ or $C_2$, using Equation (12).
  • Update the cluster centroids $\bar{C}_1$ and $\bar{C}_2$ from the data elements of the clusters $C_1$ and $C_2$, respectively, using Equations (13) and (14).
  • Repeat steps 3–5 until the centroids from the previous iteration are the same as those of the current iteration.
End
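Algorithm 2 can be sketched as a small k-means routine with $k = 2$. The synthetic, well-separated feature vectors stand in for the CNN features; the function name and convergence check are illustrative assumptions:

```python
import numpy as np

def kmeans_two_clusters(V, max_iter=100, seed=0):
    """Partition feature vectors V (n x m) into two clusters per Algorithm 2.

    Returns labels (0 for C1, 1 for C2) and the two centroids.
    """
    rng = np.random.default_rng(seed)
    centroids = V[rng.choice(len(V), size=2, replace=False)]  # step 2: random init
    for _ in range(max_iter):
        # Euclidean distances to both centroids (Equations (10) and (11)).
        d = np.linalg.norm(V[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # assignment rule (Equation (12))
        new = np.array([V[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new, centroids):           # stop when centroids stabilize
            break
        centroids = new                           # update (Equations (13) and (14))
    return labels, centroids

# Two well-separated synthetic feature groups stand in for CNN feature vectors.
rng = np.random.default_rng(2)
V = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(5.0, 0.1, (20, 4))])
labels, centroids = kmeans_two_clusters(V)
print(sorted(np.bincount(labels, minlength=2).tolist()))
```

With a separation this large, the routine recovers the two underlying groups exactly, regardless of which two points are drawn as initial centroids.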

2.5. Testing and Classification Stage

Classification of brain MRIs as tumor or non-tumor typically involves machine learning algorithms. In this context, the clusters $C_1$ (normal) and $C_2$ (tumor) capture the distinct features or patterns associated with the two classes. The algorithm analyzes patterns in the MRIs, using features from the $C_1$ and $C_2$ clusters to distinguish between tumor and non-tumor cases, which aids accurate classification based on the identified patterns in the images. KNN is used for classification, considering the Euclidean distances between the test sample's feature vector and the members of the normal and tumor clusters. Here, $\bar{T}$ is the feature vector obtained from the CNN output for the test sample, representing the high-level features extracted from the brain MRI; $d_{notum}$ is the Euclidean distance between $\bar{T}$ and a member of the normal cluster $C_1$ in Equation (15), and $d_{tum}$ is the Euclidean distance between $\bar{T}$ and a member of the tumor cluster $C_2$ in Equation (16). Equations (17) and (18) count the cluster members within a distance threshold $t$ of the test sample, and Equation (19) assigns the majority class. The steps involved in the KNN classifier are illustrated in Algorithm 3.

$d_{notum}(\bar{T}, C_{1k}) = \sqrt{\sum_{j=1}^{m} \left( \bar{T}_j - C_{1kj} \right)^2}, \quad C_{1k} \in C_1$   (15)

$d_{tum}(\bar{T}, C_{2k}) = \sqrt{\sum_{j=1}^{m} \left( \bar{T}_j - C_{2kj} \right)^2}, \quad C_{2k} \in C_2$   (16)

$\bar{c}_1 = \sum_{C_{1k} \in C_1} \begin{cases} 1 & \text{if } d_{notum}(\bar{T}, C_{1k}) \le t \\ 0 & \text{otherwise} \end{cases}$   (17)

$\bar{c}_2 = \sum_{C_{2k} \in C_2} \begin{cases} 1 & \text{if } d_{tum}(\bar{T}, C_{2k}) \le t \\ 0 & \text{otherwise} \end{cases}$   (18)

$Cl(\bar{T}) = \text{majority}(\bar{c}_1, \bar{c}_2)$   (19)
Algorithm 3: Classification
Input: Trained data set with two clusters $C = \{C_1, C_2\}$ and the feature vector $\bar{T}$ of the test sample with $m$ features.
Output: KNN classification result: either tumor or no tumor.
Begin
  • Determine the number of neighbors $K$ to consider during classification.
  • Iteratively compute the distance between the feature vector of the test sample and the data elements of each individual cluster using Equations (15) and (16).
  • Measure the frequency of closeness between the test sample $\bar{T}$ and the data elements in the clusters using Equations (17) and (18).
  • Assign the class label of the test sample $\bar{T}$, either tumor or no tumor, by majority vote using Equation (19).
End
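Algorithm 3 can be sketched as follows. One reading of Equations (17)–(19) is adopted here: the threshold $t$ is taken as the distance of the $K$-th nearest training vector overall, so counting cluster members within $t$ counts the $K$ nearest neighbours. The function name and synthetic clusters are illustrative:

```python
import numpy as np

def knn_classify(T, C1, C2, k=5):
    """KNN classification of a test feature vector T per Algorithm 3.

    C1 and C2 hold the training feature vectors of the normal and tumor
    clusters.  Members within the k-th-nearest distance t vote for their
    cluster; the majority label is returned (Equation (19)).
    """
    d1 = np.linalg.norm(C1 - T, axis=1)  # distances to normal members (Eq. (15))
    d2 = np.linalg.norm(C2 - T, axis=1)  # distances to tumor members (Eq. (16))
    t = np.sort(np.concatenate([d1, d2]))[k - 1]  # k-th nearest distance overall
    c1 = int(np.sum(d1 <= t))            # votes from the normal cluster (Eq. (17))
    c2 = int(np.sum(d2 <= t))            # votes from the tumor cluster (Eq. (18))
    return "normal" if c1 > c2 else "tumor"

rng = np.random.default_rng(3)
C1 = rng.normal(0.0, 0.3, (30, 4))       # synthetic normal cluster
C2 = rng.normal(3.0, 0.3, (30, 4))       # synthetic tumor cluster
print(knn_classify(np.full(4, 2.9), C1, C2))  # test point near the tumor cluster
print(knn_classify(np.full(4, 0.1), C1, C2))  # test point near the normal cluster
```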

3. Results and Discussion

For demonstration purposes, a few images were randomly selected from the Kaggle MRI data set [13]; these sample medical images are presented in Figure 2. Figure 3 shows the preprocessing result for the input sample set in Figure 2, produced by the preprocessing methods described in Section 2.2. The feature extraction results for the pre-processed input MRI set are presented in Figure 4: the proposed NMICS extracts edge features from all MRI samples through the CNN method discussed in Algorithm 1 in Section 2.3. Subsequently, in the training stage, the NMICS iteratively separates the MRI sample set into two (K = 2) distinct clusters, $C_1$ (normal) and $C_2$ (tumor), based on the feature set in Figure 4, through the k-means clustering technique; the results are presented in Figure 5. The classification results of the NMICS with different test MRI samples, produced by Algorithm 3 in Section 2.5, are presented in Table 1.

4. Conclusions

In this paper, an improved automated medical prediction system, the Neoplasm Medical Image Classification System (NMICS), is presented. It is intended to automatically classify a test medical image into either the tumor group or the non-tumor (normal) group through machine learning techniques. The NMICS is divided into two stages: the Train Medical Image Model (TMIM) and the Medical Image Prediction Stage (MIPS). In the TMIM stage, the NMICS performs the essential preprocessing and feature extraction operations, and then separates the input MRI set into two distinct clusters based on the extracted feature set through the k-means technique. In the MIPS, the NMICS performs the same preprocessing and feature extraction operations over the test medical image samples, excluding the training operation. Finally, the NMICS classifies the feature vector of the test medical image sample against the trained medical image data set clusters using a KNN classifier. The investigation outcomes demonstrate that the NMICS is fit to categorize a given test MRI sample into the neoplasm or non-neoplasm group.

Author Contributions

Conceptualization, S.K.S.; methodology, S.K.S.; software, S.K.S.; formal analysis, S.K.S. and B.K.; investigation, S.K.S.; resources, M.C.R. and S.T.A.; writing—original draft preparation, S.K.S.; writing—review and editing, S.K.S. and S.T.A.; visualization, S.K.S.; supervision, B.K.; project administration, S.K.S. and B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data set was accessed on 10 August 2025 and can be found here: https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset?select=Testing.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Guy-Fernand, K.N.; Zhao, J.J.; Sabuni, F.M.; Wang, J. Classification of Brain Tumor Leveraging Goal-Driven Visual Attention with the Support of Transfer Learning. In Proceedings of the Information Communication Technologies Conference, Kigali, Rwanda, 6–8 May 2020; pp. 328–332. [Google Scholar]
  2. Jayade, S.; Ingole, D.T.; Ingole, M.D. Review of Brain Tumor Detection Concept using MRI Images. In Proceedings of the International Conference on Innovative Trends and Advances in Engineering and Technology, Shegoaon, India, 27–28 December 2019; pp. 206–209. [Google Scholar]
  3. Patil, D.O.; Hamde, S.T. Monogenic Wavelet Phase Encoded Descriptors for Brain Tumor Image Detection. In Proceedings of the IEEE Third International Conference on Multimedia Processing, Communication & Information Technology, Shivamogga, India, 11–12 December 2020; pp. 40–44. [Google Scholar]
  4. Gouskir, M.; Zyad, M.A.; Boutalline, M. Automatic Analysis of Brain Tumor from Magnetic Resonance Images based on Geometric Median Shift. In Proceedings of the 2nd International Conference on Knowledge-Based Engineering and Innovation, Beni Mellal, Morocco, 20–21 April 2020; Volume 8, pp. 11–17. [Google Scholar]
  5. Grampurohit, S.; Shalavadi, V.; Dhotargavi, V.R.; Kudari, M. Brain Tumor Detection Using Deep Learning Models. In Proceedings of the IEEE India Council International Subsections Conference, Visakhapatnam, India, 3–4 October 2020; pp. 129–134. [Google Scholar]
  6. Kabir, A.; Gyorfi, A.; Kovacs, L.; Szilagyi, L.O. Early-Stage Brain Tumor Detection on MRI Image Using a Hybrid Technique. In Proceedings of the IEEE Region 10 Symposium, Dhaka, Bangladesh, 5–7 June 2020; pp. 1828–1831. [Google Scholar]
  7. Gyorfi, A.; Sudirman, R.; Wei, S.T.C. A feature ranking and selection algorithm for brain tumor segmentation in multi spectral magnetic resonance image data. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 804–807. [Google Scholar]
  8. Chew, K.M.; Yong, C.Y.; Sudirman, R.; Wei, S.T.C. Human Brain Modeling Tumor Detection in 2D and 3D Representation Using Microwave Signal Analysis. In Proceedings of the 2018 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 28–29 April 2018; pp. 310–316. [Google Scholar]
  9. Polly, F.P.; Aziz, A.I.; Rana, M.; Islam, A.; Inum, R. Detection and Classification of HGG and LGG Brain Tumor Using Machine Learning, Expert Systems with Applications. In Proceedings of the 2018 International Conference on Information Networking (ICOIN), Chiang Mai, Thailand, 10–12 January 2018; pp. 813–817. [Google Scholar]
  10. Kumar, S.; Ahmed, S.T.; Sandeep, S.; Madheswaran, M.; Basha, S.M. Unstructured Oncological Image Cluster Identification Using Improved Unsupervised Clustering Techniques. Comput. Mater. Contin. 2022, 72, 281–298. [Google Scholar] [CrossRef]
  11. Sreedhar Kumar, S.; Ahmed, S.T.; Fathima, A.S.; Mathivanan, S.K.; Jayagopal, P.; Saif, A.; Gupta, S.K.; Sinha, G. iLIAC: An approach of identifying dissimilar groups on unstructured numerical image dataset using improved agglomerative clustering technique. Multimed. Tools Appl. 2024, 83, 86359–86381. [Google Scholar]
  12. Ahmed, S.T.; Kumar, S.; Guptha, N.S.; Basha, S.M.; Fathima, A.S. Improving medical image pixel quality using micq unsupervised machine learning technique. Malays. J. Comput. Sci. 2022, 2022, 53–64. [Google Scholar] [CrossRef]
  13. Nickparvar, M. Brain Tumor MRI Dataset. Available online: https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset (accessed on 10 August 2025).
Figure 1. Proposed NMICS System Architecture.
Figure 2. Input MRI samples with different sizes.
Figure 3. Result of pre-processed stage over the input MRI samples in Figure 2.
Figure 4. Result of feature extraction (edge detection) operation conducted on the improved MRI set in Figure 3.
Figure 5. Training result of the input MRI set in Figure 3.
Table 1. Classification results of sample MRI tested by proposed NMICS System.
Test MRI Image Sample | Improved MRI Sample | Feature of MRI Sample | Classification Result (Cluster 1: Normal / Cluster 2: Tumor)
[Five rows of image thumbnails, one per test sample, showing the original test MRI, the pre-processed image, the extracted edge features, and the cluster assignment.]

