Article

Tabular-to-Image Encoding Methods for Melanoma Detection: A Proof-of-Concept

by Vanesa Gómez-Martínez *, David Chushig-Muzo and Cristina Soguero-Ruiz *
Department of Signal Theory and Communications, Rey Juan Carlos University, 28943 Madrid, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2026, 16(5), 2459; https://doi.org/10.3390/app16052459
Submission received: 28 January 2026 / Revised: 24 February 2026 / Accepted: 2 March 2026 / Published: 3 March 2026
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing, 2nd Edition)

Abstract

Deep learning (DL) models have demonstrated strong performance in dermatological applications, particularly when trained on dermoscopic images. In contrast, tabular clinical data—such as patient metadata and lesion-level descriptors—are difficult to integrate into DL-based pipelines due to their heterogeneous, non-spatial, and often low-dimensional nature. As a result, these data are commonly handled using separate classical machine learning (ML) models. In this work, we present a proof-of-concept study that investigates whether dermatological tabular data can be transformed into two-dimensional image representations to enable convolutional neural network (CNN)-based learning. To this end, we employ the Low Mixed-Image Generator for Tabular Data (LM-IGTD), a framework designed to transform low-dimensional and heterogeneous tabular data into two-dimensional image representations, through type-aware encoding and controlled feature augmentation. Using this approach, we encode low-dimensional clinical metadata, high-dimensional lesion-level statistical features extracted from dermoscopic images, as well as their feature-level fusion, into grayscale image representations. The resulting image representations serve as input to CNNs, and the performance is compared with ML models trained on tabular data. Experiments conducted on the Derm7pt and PH2 datasets show that traditional ML models generally achieve the highest Area Under the Curve values, while LM-IGTD-based representations provide comparable performance and enable the use of CNNs on tabular clinical data used in dermatology.

1. Introduction

Skin cancer remains one of the most prevalent malignancies worldwide, with both melanoma and keratinocyte skin cancers showing a sustained increase, particularly among fair-skinned populations [1,2]. Early detection is essential, as timely diagnosis is strongly associated with improved treatment outcomes and prognosis, especially in melanoma [1,2].
Dermoscopy, a noninvasive imaging technique, has been extensively used to enhance the visual assessment of skin lesions and support clinical decision-making [3]. The availability of large, publicly accessible dermoscopic image datasets has further accelerated research in automated skin lesion analysis [4,5]. This has enabled the development of data-driven models with high predictive performance [6,7]. Convolutional neural networks (CNNs) play a central role by leveraging the spatial structure of dermoscopic images for predictive tasks [8]. As a result, most recent advances in automated skin lesion assessment have primarily focused on image-based deep learning (DL) pipelines, often relying almost exclusively on dermoscopic images [9].
However, clinical decision-making in dermatology does not rely exclusively on images. In addition to dermoscopic patterns, clinicians consider patient-level clinical metadata, such as age, sex, and anatomical location, as well as lesion-level descriptors derived from image analysis, including asymmetry, border irregularity, color distribution, and texture-related characteristics [10]. While clinical metadata typically give rise to low-dimensional tabular representations, lesion-level descriptors extracted from dermoscopic images are often summarized through high-dimensional handcrafted or statistically derived feature sets [11,12]. Together, these complementary sources of tabular information capture clinically meaningful aspects of skin lesions. Although clinically relevant, such tabular information remains underutilized in modern DL-based dermatological pipelines [9,13]. This is mainly due to the challenges of tabular dermatological data for DL models, including their heterogeneity, absence of spatial structure, and low dimensionality. As a result, standard DL architectures—originally designed to model spatial or temporal correlations—often struggle to effectively learn from tabular representations [14,15,16].
Recent advances in medical image analysis have also emphasized robustness and adaptability to heterogeneous and ambiguous data sources. Beyond purely discriminative CNN pipelines, generative and multimodal learning frameworks have been proposed to better handle uncertainty and complementary information across modalities [17]. These developments highlight the importance of adaptive representation learning for structured and heterogeneous biomedical data. In this context, transforming tabular clinical features into spatially organized representations can be viewed as an alternative strategy to bridge tabular data with convolutional architectures while preserving clinically meaningful relationships.
Motivated by this need, recent work has explored transforming tabular data into image-like representations, thereby enabling the application of CNN-based models to non-visual data [18,19,20,21,22,23]. These approaches aim to impose a spatial structure on tabular features, allowing convolutional architectures to exploit local patterns and relationships. However, such methods have rarely been investigated in the context of dermatological data [9], where heterogeneous clinical metadata and lesion-level descriptors are routinely available and clinically meaningful.
Clinical metadata are frequently combined with dermoscopic images in AI-based dermatology studies using multimodal learning strategies such as late fusion, feature concatenation, or attention-based modules [7,13,24,25]. However, in most existing approaches, tabular information is processed separately from image data, typically using multilayer perceptrons or other dedicated tabular-processing modules, without an explicit spatial organization of structured variables [24,26,27]. This limitation reflects a fundamental representation challenge: although tabular clinical features encode clinically meaningful relationships, their non-spatial format limits how convolutional neural architectures capture spatial dependencies. Tabular-to-image transformation addresses this gap by projecting structured clinical data into a domain more suitable for convolutional processing. While this paradigm has gained increasing attention in other biomedical domains, including omics, radiomics, and electronic health records [20,22,28], its application to dermatological tabular data remains largely unexplored.
Within this line of research, the Low Mixed-Image Generator for Tabular Data (LM-IGTD) framework [29] introduced a permutation-based strategy to map numerical and categorical features onto two-dimensional images according to their statistical relationships. In addition, LM-IGTD incorporates a noise-based feature augmentation mechanism designed to improve representations in low-dimensional settings. This makes it particularly suitable for clinical data scenarios commonly encountered in biomedical applications.
To the best of our knowledge, the application of tabular-to-image transformation frameworks to dermatological data has not yet been investigated. In this context, the present work is conceived as a proof-of-concept (PoC) study exploring the feasibility of LM-IGTD-based tabular-to-image representations for dermatological clinical data. Specifically, this study considers different sources of tabular information derived from dermoscopic datasets, including patient-level clinical metadata, lesion-level statistical features extracted from images, and their joint representation, in order to assess their suitability for CNN-based melanoma classification.
The remainder of this paper is organized as follows. Section 2 presents the publicly available PH2 and Derm7pt datasets, which provide dermoscopic images together with clinical metadata, as well as the feature extraction process and the LM-IGTD transformation framework. Experimental results are reported in Section 3, followed by a discussion in Section 4 and concluding remarks in Section 5.

2. Materials and Methods

This section describes the datasets, feature extraction procedures, and tabular-to-image transformation framework employed in this study. An overview of the complete methodology is presented in Figure 1, summarizing the main processing stages of the proposed pipeline, from tabular data construction to CNN-based melanoma classification.
First, patient-level clinical metadata and lesion-level statistical features are extracted from the PH2 and Derm7pt datasets and represented as tabular data. These tabular representations are subsequently analyzed either independently or jointly, depending on the experimental setting. Next, the tabular representations are transformed into two-dimensional grayscale images using the LM-IGTD framework. This process incorporates feature ranking, pixel arrangement optimization, and noise-based feature augmentation when required. Finally, the generated images are used as input to CNN-based models for melanoma classification.

2.1. Dataset Description

This study uses two publicly available dermoscopy datasets, PH2 [30] and Derm7pt [31], both of which provide dermoscopic images together with clinical metadata.
The PH2 dataset consists of 200 dermoscopic images acquired at the Dermatology Service of Hospital Pedro Hispano (Matosinhos, Portugal), including 160 benign lesions—80 common nevi and 80 atypical nevi—and 40 melanomas [30]. In this study, common and atypical nevi are grouped into a single not melanoma class, while all melanoma cases are assigned to the melanoma class, resulting in a binary classification scenario. PH2 provides metadata associated with dermoscopic criteria, including lesion colors, asymmetry, pigment network, dots and globules, streaks, regression areas, and blue-whitish veil.
The Derm7pt dataset contains over 2000 dermoscopic and macroscopic images, accompanied by metadata derived from the seven-point checklist and additional clinical information [31]. This study focuses only on the 1011 dermoscopic images available in the dataset. Lesions are grouped into a binary classification setting. Melanoma-related categories are merged into a single melanoma class, while the remaining lesion types are assigned to the not melanoma class, resulting in 252 melanoma and 759 not melanoma samples. The associated metadata include dermoscopic criteria (pigment network, streaks, pigmentation, regression structures, dots and globules, blue-whitish veil, and vascular structures), as well as clinical and lesion-related features such as diagnostic difficulty, elevation, anatomical location, sex, and management.
Table 1 summarizes the main characteristics of the tabular metadata used in this study, which consist of low-dimensional, clinically interpretable features.

2.2. Image Feature Extraction

To obtain clinically and statistically meaningful descriptors, lesion-level statistical features were extracted from dermoscopic images. These features aim to capture complementary aspects of lesion morphology, color distribution, and texture patterns that are routinely assessed by dermatologists during visual examination [32]. Following established practice in dermoscopic image analysis, we considered geometric features, color features, and a diverse set of local and global texture descriptors [33,34].
Geometric features were derived using the ABCD rule [35], which encodes asymmetry, border irregularity, color variation, and lesion diameter—criteria closely related to melanoma risk assessment. Color information was characterized using multiple color spaces, including RGB, HSV, CIE L*a*b, CIE L*u*v, and YCrCb. For each color channel, first-order statistical moments were computed to describe the distribution and variability of pigmentation within the lesion.
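As an illustration of the color descriptors, the first-order statistical moments per channel could be computed along the following lines (a minimal numpy sketch; function and variable names are illustrative, and the paper's actual feature set spans several color spaces and additional moments):

```python
import numpy as np

def color_moments(image):
    """First-order statistical moments (mean, std, skewness) per channel.

    `image` is an H x W x C array; the channels may come from any color
    space (RGB, HSV, CIE L*a*b, etc.). Returns a flat vector of 3*C values.
    """
    feats = []
    for c in range(image.shape[-1]):
        ch = image[..., c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        # Skewness: third standardized moment (guarded against flat channels).
        skew = ((ch - mu) ** 3).mean() / sigma ** 3 if sigma > 0 else 0.0
        feats.extend([mu, sigma, skew])
    return np.array(feats)

# Example on a synthetic 8x8 RGB lesion patch.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(8, 8, 3))
print(color_moments(patch).shape)  # (9,)
```

Concatenating such per-channel moments across the five color spaces mentioned above yields the color portion of the tabular feature vector.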
Texture features were extracted to characterize spatial variations and structural patterns within the lesion region. A broad range of statistical and signal-processing approaches was employed to characterize texture. These included first- and second-order statistics, run-length and size-zone matrices, wavelet-based representations, and local pattern descriptors [36]. These methods quantify heterogeneity, regularity, and spatial organization of pixel intensities, properties shown to be informative for melanoma detection.
Table 2 provides an overview of the extracted feature categories, the corresponding techniques, and their dimensionality, together with representative studies illustrating the use of these descriptors in dermoscopic image analysis and melanoma detection.
Overall, the resulting feature set constitutes a high-dimensional tabular representation capturing complementary low-level characteristics of skin lesions. This representation is suitable for tabular-to-image transformation using the LM-IGTD framework, which supports both high- and low-dimensional heterogeneous data. A more detailed description of the feature extraction procedures and associated descriptors can be found in our previous work [46].

2.3. Tabular-to-Image Transformation Using LM-IGTD

LM-IGTD [29] is a tabular-to-image transformation framework that converts tabular data into two-dimensional grayscale image representations. It builds upon the original IGTD [22] method and extends it to low-dimensional and mixed-type datasets frequently encountered in clinical and biomedical domains.
IGTD permutes features by assigning them to pixel positions so that statistically similar features are placed close to each other. LM-IGTD further enhances this idea by incorporating additional mechanisms, including type-aware similarity measures for mixed data and noise-based feature augmentation. These extensions address challenges related to heterogeneous feature types and the limited dimensionality of clinical datasets.
A central limitation when applying tabular-to-image transformations to clinical metadata lies in the low dimensionality of many real-world datasets. When the number of available features is small, the resulting image representations present low resolution. This limits the ability of CNN-based models to exploit spatial patterns [14,61]. LM-IGTD explicitly addresses this issue by incorporating a stochastic noise-based feature augmentation mechanism, which increases the dimensionality of the tabular input prior to image generation while preserving its underlying statistical structure.
The augmentation mechanism generates a feature matrix $X \in \mathbb{R}^{n \times (d + d_a)}$, where $n$ represents the number of samples, $d$ is the number of original features, and $d_a$ denotes the total number of noisy features added. For each continuous feature $x_{ij}$ (where $i$ denotes the sample index and $j$ the feature index), $m$ noisy features $\tilde{x}_{ij}^{(m)}$ are generated through the Gaussian noise mechanism:

$$\tilde{x}_{ij}^{(m)} = x_{ij} + \alpha_{ij}^{(m)} \cdot \epsilon_{ij}^{(m)}, \qquad \epsilon_{ij}^{(m)} \sim \mathcal{N}(0, \sigma_j^2)$$

where $\tilde{x}_{ij}^{(m)}$ is the $m$-th noisy feature created for sample $i$ and feature $j$, and $\epsilon_{ij}^{(m)}$ represents Gaussian noise sampled from a normal distribution with zero mean and variance $\sigma_j^2$, corresponding to the empirical variance of the original feature $j$. The scaling factor $\alpha_{ij}^{(m)} \sim \mathcal{U}(0.1, 0.9)$ controls the noise power and determines the magnitude of the perturbation applied to $x_{ij}$.
Similarly, for categorical features, the $m$-th noisy feature $\tilde{x}_{ij}^{(m)}$ for a specific sample $i$ is defined via a swap-noise mechanism:

$$\tilde{x}_{ij}^{(m)} = \begin{cases} x_{kj} & \text{with probability } p_s \\ x_{ij} & \text{with probability } 1 - p_s \end{cases}$$

where $x_{ij}$ is the original category of sample $i$ in feature $j$, and $x_{kj}$ is a value randomly selected from a different sample $k$ ($k \neq i$) within the same column $j$. This swap mechanism preserves the original category frequencies. The swap probability $p_s$ is drawn from the same noise-power range $\alpha \in [0.1, 0.9]$.
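A minimal numpy sketch of these two augmentation mechanisms follows (the sampling ranges match the equations above; function names, seeds, and the example feature columns are illustrative, not LM-IGTD's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_noise_features(x, m):
    """Generate m noisy copies of a continuous feature column x.

    The noise variance is the empirical variance of x, and the per-entry
    scaling factor alpha is drawn from U(0.1, 0.9), as in the text.
    """
    sigma = x.std()
    cols = []
    for _ in range(m):
        alpha = rng.uniform(0.1, 0.9, size=x.shape)
        eps = rng.normal(0.0, sigma, size=x.shape)
        cols.append(x + alpha * eps)
    return np.column_stack(cols)

def swap_noise_features(x, m):
    """Generate m noisy copies of a categorical column x via swap noise.

    Each entry is replaced with the value from a randomly chosen row of the
    same column with probability p_s, drawn from the same [0.1, 0.9] range.
    """
    n = x.shape[0]
    cols = []
    for _ in range(m):
        p_s = rng.uniform(0.1, 0.9)
        swap = rng.random(n) < p_s
        donors = rng.integers(0, n, size=n)
        cols.append(np.where(swap, x[donors], x))
    return np.column_stack(cols)

# Illustrative clinical columns: one continuous, one categorical (codes).
age = rng.normal(55.0, 12.0, size=200)
location = rng.integers(0, 4, size=200)
aug = np.column_stack([gaussian_noise_features(age, 3),
                       swap_noise_features(location, 3)])
print(aug.shape)  # (200, 6)
```

Concatenating `aug` to the original columns produces the augmented matrix of width $d + d_a$ described above.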
LM-IGTD supports both Homogeneous Noise Generation (HoNG) and Heterogeneous Noise Generation (HeNG). Let $d$ denote the number of original features and $m$ the augmentation factor (the number of noisy features generated per original feature). In HoNG, the noise generator produces exactly $d_a = d \times m$ noisy features, which are directly concatenated to the original feature matrix. In contrast, HeNG introduces variability in the augmented representation. A candidate pool of noisy features (up to $d_a$) is first generated. Subsequently, the number of noisy features is randomly sampled from a set of feasible sizes not exceeding $d_a$, and only the corresponding subset is concatenated to the original feature matrix. In our experimental evaluation, different augmentation levels were explored for low-dimensional clinical metadata by generating $m \in \{3, 5, 7\}$ noisy features per original feature, in order to control the effective dimensionality of the resulting tabular representations.
To identify the optimal noisy feature set among $N = 100$ stochastically generated candidate datasets, denoted as $\{Z_j\}_{j=1}^{N}$ (where each $Z_j \in \mathbb{R}^{n \times (d + d_a)}$ represents a candidate augmented matrix obtained through stochastic noise generation), we implemented the unsupervised selection mechanism detailed in [29]. The selected optimal candidate is subsequently denoted as $X^{*} = Z_{j^{*}}$ and used for image generation. This procedure ensures that the noisy features preserve the intrinsic structure of the original clinical dataset through the following steps:
  • Mixed-type metric computation: We first computed the pairwise dissimilarities between samples for each candidate dataset $Z_j$. We utilized the Gower distance [62], as it ensures consistent handling of the mixed numerical and categorical features present in the clinical metadata.
  • Spectral structure analysis: To capture the underlying data manifold, we applied spectral clustering [63]. Specifically, we constructed an affinity matrix $S$, where each element $S_{ij}$ measures the similarity between samples $z_i$ and $z_j$ from the candidate dataset. The affinity was computed using a Gaussian kernel $S_{ij} = \exp\left(-\lVert z_i - z_j \rVert^2 / 2\sigma^2\right)$, where $\lVert z_i - z_j \rVert$ denotes the Euclidean distance between samples $i$ and $j$, and $\sigma$ is the kernel bandwidth controlling the neighborhood scale. Let $D$ denote the diagonal degree matrix with entries $D_{ii} = \sum_j S_{ij}$, and let $I$ be the identity matrix. The normalized Laplacian is then defined as $L_{\mathrm{norm}} = I - D^{-1/2} S D^{-1/2}$. The eigenvectors associated with the $k$ smallest eigenvalues of $L_{\mathrm{norm}}$ form a low-dimensional embedding where $k$-means is applied to identify clusters.
  • Selection criterion based on cluster validity: The quality of the resulting partitions was quantified using the Silhouette Coefficient (SC) [64]. To ensure that the augmentation does not distort the original data patterns, the number of clusters $k$ was fixed based on the original data $X$ (selecting the $k \in [2, 10]$ that maximized the SC for $X$). This fixed $k$ was then used to evaluate all candidates $\{Z_j\}$, and the dataset $Z_{j^{*}}$ with the highest SC was selected for image generation.
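The Gower distance used in the first step can be sketched in plain numpy as follows (a minimal implementation assuming categorical columns are integer-coded; real libraries handle missing values and weights as well):

```python
import numpy as np

def gower_matrix(X, is_categorical):
    """Pairwise Gower dissimilarity for a mixed-type matrix X (n x d).

    Numeric columns contribute range-normalized absolute differences;
    categorical columns (integer codes) contribute 0/1 mismatches.
    The contributions are averaged over the d features, so D is in [0, 1].
    """
    n, d = X.shape
    D = np.zeros((n, n))
    for j in range(d):
        col = X[:, j]
        if is_categorical[j]:
            part = (col[:, None] != col[None, :]).astype(float)
        else:
            span = col.max() - col.min()
            span = span if span > 0 else 1.0  # guard constant columns
            part = np.abs(col[:, None] - col[None, :]) / span
        D += part
    return D / d

# Illustrative mixed data: one continuous and one categorical column.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(size=50), rng.integers(0, 3, size=50)])
D = gower_matrix(X, is_categorical=[False, True])
print(D.shape)  # (50, 50)
```

The resulting matrix can then feed the Gaussian-kernel affinity and spectral clustering stages described in the second step.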
In the present PoC study, noise-based feature augmentation is applied exclusively to low-dimensional clinical metadata. In contrast, statistical features extracted from dermoscopic images are inherently high-dimensional and naturally yield image representations with an adequate number of pixels. Consequently, these are transformed directly without the introduction of additional synthetic features.
Once the augmented tabular representation is obtained (when applicable), the image generation process proceeds as follows. In LM-IGTD, each tabular sample is mapped to a grayscale image in which each pixel corresponds to a single input feature. The pixel intensity represents the normalized feature value for that sample. The spatial arrangement of features within the image is determined through an optimization process that seeks to preserve intrinsic relationships among features.
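As a sketch, the per-sample feature-to-pixel mapping (before the arrangement optimization) might look like the following; the function name, padding choice, and identity feature order are illustrative assumptions, since LM-IGTD places features according to its optimized permutation:

```python
import numpy as np

def features_to_image(X, side):
    """Map each tabular sample of X (n x d) to a side x side grayscale image.

    Features are min-max normalized column-wise to [0, 1]; each pixel holds
    one feature value. Unused pixels are zero-padded. Feature order here is
    the identity, standing in for LM-IGTD's optimized arrangement.
    """
    n, d = X.shape
    assert d <= side * side, "image grid too small for the feature count"
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)  # guard constant columns
    Xn = (X - mins) / span
    imgs = np.zeros((n, side * side))
    imgs[:, :d] = Xn
    return imgs.reshape(n, side, side)

# 60 features fit on an 8x8 grid (64 pixels, 4 zero-padded).
rng = np.random.default_rng(7)
X = rng.normal(size=(10, 60))
imgs = features_to_image(X, side=8)
print(imgs.shape)  # (10, 8, 8)
```

These grayscale arrays are the kind of input tensors the CNNs in Section 3 consume.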
To this end, LM-IGTD computes two ranking matrices. First, a feature ranking matrix ($R_f$) is constructed by calculating pairwise dissimilarities between features. Spearman [65] correlation ($\rho$) is used for numerical–numerical pairs, point-biserial [66] correlation ($r_{pb}$) for numerical–binary pairs, and Phik [67] correlation ($\phi_k$) for pairs involving categorical features. All pairwise correlation coefficients $C$ (whether $\rho$, $r_{pb}$, or $\phi_k$) are uniformly converted to dissimilarities via $1 - |C|$ and subsequently ranked to yield $R_f$.
In parallel, a pixel ranking matrix ($R_p$) is defined based on the pairwise distances between pixel locations on the two-dimensional image grid, computed using distance measures (Euclidean for numerical data or Gower [68] for mixed types) and then ranked.
The optimization procedure iteratively permutes the feature assignment to minimize the structural difference between the feature and pixel ranking matrices. Formally, the algorithm seeks to minimize the squared error function E:
$$E = \sum_{i=2}^{d} \sum_{j=1}^{i-1} \left[ R_f(i,j) - R_p(i,j) \right]^2$$

where $d$ denotes the total number of features. Here, $R_f(i,j)$ represents the rank of the dissimilarity between the $i$-th and $j$-th features, while $R_p(i,j)$ denotes the rank of the spatial distance between the pixel locations assigned to those features. This alignment minimizes the discrepancy between statistical and spatial structures, ensuring that correlated features are mapped to neighboring pixels.
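A toy numpy sketch of the two ranking matrices and the objective $E$ follows (using Pearson correlation as a stand-in for the type-aware measures, an identity feature-to-pixel assignment, and arbitrary tie-breaking in the ranks; the actual algorithm iteratively permutes the assignment to reduce $E$):

```python
import numpy as np

def rank_matrix(D):
    """Rank the lower-triangle entries of a symmetric dissimilarity matrix."""
    n = D.shape[0]
    lo = np.tril_indices(n, k=-1)
    vals = D[lo]
    ranks = np.empty_like(vals)
    ranks[np.argsort(vals)] = np.arange(1, vals.size + 1)  # ties broken arbitrarily
    R = np.zeros_like(D)
    R[lo] = ranks
    return R + R.T  # symmetric, zero diagonal

def igtd_error(Rf, Rp):
    """Squared-error objective E between feature and pixel rank matrices."""
    lo = np.tril_indices(Rf.shape[0], k=-1)
    return float(((Rf[lo] - Rp[lo]) ** 2).sum())

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 9))              # 9 features -> a 3x3 image grid
C = np.corrcoef(X, rowvar=False)           # stand-in for Spearman/point-biserial/Phik
Rf = rank_matrix(1.0 - np.abs(C))          # dissimilarity via 1 - |C|, then ranked
coords = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
Rp = rank_matrix(np.linalg.norm(coords[:, None] - coords[None, :], axis=-1))
E = igtd_error(Rf, Rp)
print(E >= 0.0)  # True
```

The optimization loop (not shown) would swap feature positions and keep swaps that decrease `E`.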

3. Results

This section presents the experimental results organized by data modality (clinical metadata, statistical features, and their fusion). For each modality, we compare the performance of traditional machine learning (ML) models trained directly on tabular data with CNN-based models trained on LM-IGTD-generated image representations. Results are reported for both the PH2 and Derm7pt datasets.

3.1. Experimental Setting

For each dataset, samples were randomly split into training (80%) and test (20%) sets, with 15% of the training data further reserved for validation and used for hyperparameter tuning and early stopping. All experiments were repeated five times using different random seeds for the training–test split, and results are reported as averages across runs. Given the class imbalance present in both datasets, random undersampling was applied to the training data only prior to model fitting. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity [69].
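The split, majority-class undersampling, and AUC computation described above can be sketched in plain numpy (the synthetic labels and the rank-based AUC formula are illustrative stand-ins; the actual pipeline uses the real datasets and standard library metrics):

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample(X, y):
    """Randomly undersample the majority class to balance a training set."""
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    k = min(idx0.size, idx1.size)
    keep = np.concatenate([rng.choice(idx0, k, replace=False),
                           rng.choice(idx1, k, replace=False)])
    return X[keep], y[keep]

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney U statistic (rank formulation)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y_true.sum()
    n0 = len(y_true) - n1
    return (ranks[y_true == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Synthetic imbalanced data, roughly mirroring Derm7pt's 1011 samples.
X = rng.normal(size=(1011, 5))
y = (rng.random(1011) < 0.25).astype(int)
perm = rng.permutation(1011)
train, test = perm[:809], perm[809:]            # ~80/20 split
Xtr, ytr = undersample(X[train], y[train])      # balance training data only
print(np.bincount(ytr))                         # equal class counts
```

Sensitivity and specificity then follow from the confusion matrix at a chosen operating threshold, while the AUC summarizes performance across thresholds.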
For tabular baselines, the evaluated ML models included Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Least Absolute Shrinkage and Selection Operator (LASSO) [69,70]. Hyperparameter tuning for these models was performed using 5-fold cross-validation on the training set. The hyperparameter ranges are summarized in Table 3.
The LM-IGTD augmentation factor $m$ (the number of synthetic noise features generated per original feature) was explored within the search space $\{3, 5, 7\}$ as part of an ablation study. This range was defined based on prior LM-IGTD-based literature [29,71] and to systematically evaluate the trade-off between increasing spatial resolution and preserving the relative contribution of the original clinical features.
The CNN architecture and training process were optimized for each dataset and modality using Bayesian optimization [72] combined with HyperBand [73]. This strategy explores the search space by combining probabilistic sampling with multi-fidelity scheduling. Hyperparameters such as convolutional filters, kernel and pooling sizes, dense units, dropout rates, learning rate, optimizer, and image resolution were tuned (see Table 3). Each configuration underwent 50 trials to select the best model based on validation loss. Final architectures and hyperparameters for all datasets and modalities are listed in the Supplementary Material (Table S1).

3.2. Results Using Clinical Metadata

This subsection reports the results obtained using clinical metadata features described in Section 2.1 (Table 1). Due to the low dimensionality of metadata in both datasets, LM-IGTD was combined with noise-based feature augmentation to generate image representations suitable for CNN-based learning. Both HoNG and HeNG noise generation strategies were evaluated using different augmentation levels, considering the addition of 3, 5, and 7 synthetic features per original feature. Table 4 summarizes the classification performance obtained on the PH2 and Derm7pt datasets using clinical metadata.
Results on the PH2 dataset show increased variability across noise configurations, which can be attributed to the smaller dataset size and the limited number of available metadata features. In this setting, CNN-based models display a noticeable trade-off between sensitivity and specificity, especially for HeNG, which often favors higher specificity at the expense of sensitivity. Nevertheless, the overall performance trends remain coherent across runs, indicating that LM-IGTD can be applied to low-dimensional clinical metadata even in small-sample scenarios.
For the Derm7pt dataset, CNNs trained on LM-IGTD image representations show stable performance across configurations. Although AUC values remain below those of tabular baselines, models with HeNG noise augmentation are more consistent than those without. This suggests that controlled augmentation helps mitigate limitations associated with low-dimensional metadata.
Overall, these results support the feasibility of transforming low-dimensional dermatological metadata into image representations using LM-IGTD, enabling the use of CNN-based models in settings where traditional tabular learning approaches remain dominant.

3.3. Results Using Statistical Features

This subsection reports the results obtained using statistical features extracted from dermoscopic images. In contrast to clinical metadata, these features form a high-dimensional tabular representation composed of numerous handcrafted descriptors capturing geometric, color, and texture-related characteristics of skin lesions. As the original feature dimensionality is sufficient to generate image representations with adequate spatial resolution, LM-IGTD was applied without noise-based feature augmentation.
Table 5 summarizes the classification performance obtained on the PH2 and Derm7pt datasets using statistical features. For both datasets, traditional ML models trained directly on tabular data provide strong baseline performance, particularly in terms of sensitivity. In contrast, CNN-based models trained on LM-IGTD-generated image representations consistently achieve higher specificity. Although AUC values are comparable across approaches, the two modeling paradigms exhibit a systematic trade-off. Tabular models tend to achieve higher sensitivity, whereas CNN models favor higher specificity.
On the PH2 dataset, CNN-based models achieve comparable or improved performance in terms of AUC and specificity relative to tabular baselines, despite the limited sample size. Notably, the increase in specificity is more pronounced than in Derm7pt, indicating that LM-IGTD-based representations may be particularly effective in small-sample settings when the original feature space is high-dimensional.
For the Derm7pt dataset, CNN-based models trained on LM-IGTD-generated image representations exhibit an increase in specificity compared to tabular baselines, while traditional ML models achieve higher sensitivity. This behavior suggests that the spatial encoding induced by LM-IGTD leads CNNs to learn more conservative decision boundaries, favoring the reduction of false positive predictions when learning from high-dimensional statistical descriptors. Overall, these results demonstrate that LM-IGTD can effectively transform high-dimensional dermatological statistical features into image representations suitable for CNN-based learning. This supports its applicability beyond low-dimensional clinical metadata and motivates its use in subsequent multimodal fusion experiments.

3.4. Results Using Fused Clinical Metadata and Statistical Features

This subsection reports the results obtained using the fusion of clinical metadata and statistical features. Feature fusion was performed by concatenating clinical metadata and statistical features at the tabular level, followed by either direct learning using traditional ML models or transformation into image representations using LM-IGTD for CNN-based classification. Given the high dimensionality of the fused feature space, noise-based feature augmentation was not applied. Table 6 summarizes the classification performance obtained on the PH2 and Derm7pt datasets using fused features. For both datasets, traditional ML models trained on the fused tabular representation provide strong baseline performance, achieving improved or comparable results relative to models trained on individual feature modalities.
On the Derm7pt dataset, tabular fusion leads to improved performance compared to using statistical features alone, while achieving performance comparable to that obtained using clinical metadata, particularly for LASSO-based models. CNN-based models trained on fused image representations achieve stable but lower performance, indicating that, in large datasets with rich tabular information, direct learning on fused tabular features remains more effective than image-based representations in this setting.
In contrast, results on the PH2 dataset highlight the benefit of feature fusion in small-sample settings. Tabular models trained on fused features achieve the highest AUC and sensitivity among all evaluated configurations. CNN-based models trained on fused image representations exhibit higher specificity but increased variability in sensitivity, reflecting the combined effect of limited sample size and high feature dimensionality.
Overall, these results indicate that feature fusion enhances classification performance when complementary sources of tabular information are combined. While traditional ML models remain strong baselines for fused tabular data, LM-IGTD-based image representations provide an alternative representation for integrating heterogeneous dermatological features within an image-based learning framework.
Figure 2 provides an overview of the comparative performance across data modalities, including clinical metadata, statistical features, and their fusion. For each modality, the figure reports the AUC achieved by the best-performing tabular ML models and the best-performing LM-IGTD-based CNN models for the Derm7pt and PH2 datasets. Overall, the figure illustrates how classification performance varies across data modalities and modeling approaches. While traditional tabular models generally achieve strong AUC values, LM-IGTD-based CNN models provide competitive results in several configurations, highlighting their feasibility as an alternative representation strategy depending on the feature dimensionality and dataset size.
To assess statistical significance, we conducted two-sided Wilcoxon signed-rank tests [74] comparing the best LM-IGTD CNN models with the strongest tabular ML baselines across datasets and modalities. Tests were performed for AUC, sensitivity, and specificity. All resulting p-values were above α = 0.05, indicating no statistically significant differences. To complement hypothesis testing, we estimated Cohen’s d effect sizes [75] and 95% confidence intervals for the mean paired differences using a Student’s t-distribution [76]. The results suggest modest, dataset-dependent trends that mainly reflect a sensitivity–specificity trade-off rather than a consistent advantage of either approach. Detailed values are reported in Supplementary Material Table S2.
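As an illustration, the statistical comparison described above can be sketched as follows. This is a minimal sketch of the procedure, not the study’s code; the per-run AUC values below are hypothetical placeholders, not the values reported in Table S2.

```python
import numpy as np
from scipy import stats

def paired_comparison(scores_a, scores_b, alpha=0.05):
    """Wilcoxon signed-rank test, paired Cohen's d, and CI for paired scores."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    diff = a - b
    # Two-sided Wilcoxon signed-rank test on the paired observations.
    _, p_value = stats.wilcoxon(a, b, alternative="two-sided")
    # Paired Cohen's d: mean difference over the SD of the differences.
    d = diff.mean() / diff.std(ddof=1)
    # 95% CI for the mean paired difference via the Student's t-distribution.
    n = len(diff)
    sem = diff.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    ci = (diff.mean() - t_crit * sem, diff.mean() + t_crit * sem)
    return p_value, d, ci

# Hypothetical per-run AUCs for the two approaches (placeholders only).
auc_cnn = [0.81, 0.79, 0.83, 0.80, 0.82]
auc_ml = [0.82, 0.78, 0.85, 0.81, 0.84]
p, d, ci = paired_comparison(auc_cnn, auc_ml)
```

A p-value above α then indicates no significant difference, while the sign and magnitude of d and the CI describe the direction and size of the (non-significant) trend.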
Furthermore, to complement the scalar performance metrics, we included Receiver Operating Characteristic (ROC) curve visualizations (see Figure 3), which compare the best CNN-based LM-IGTD models with the strongest tabular ML baselines across datasets and modalities. The figure is organized into six panels: the top row corresponds to the PH2 dataset (metadata, statistical features, and fusion), and the bottom row to the Derm7pt dataset. This layout enables a direct visual comparison of discriminative behavior across modalities and datasets.
For the PH2 dataset, the ROC curves reveal similar discriminative behavior across modalities, with neither approach consistently outperforming the other. In some modalities—particularly statistical features—both curves overlap substantially, while in the fusion setting the curves intersect, reflecting different sensitivity–specificity trade-offs rather than a consistent dominance of either approach. For the Derm7pt dataset, tabular ML models tend to achieve higher true-positive rates at low false-positive rates, especially in the metadata and statistical feature modalities. Nonetheless, LM-IGTD CNN models remain competitive and converge toward similar performance levels at higher thresholds.
Overall, the ROC analysis confirms that LM-IGTD preserves the discriminative structure of the original tabular data and does not introduce systematic performance degradation, supporting its feasibility as an alternative representation for CNN-based learning.
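A minimal sketch of how such per-model ROC curves and AUCs can be produced is shown below; the labels and scores are synthetic placeholders generated for illustration, not the study’s predictions.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Synthetic placeholder labels and predicted melanoma probabilities
# for two models evaluated on the same test set.
y_true = rng.integers(0, 2, size=200)
model_scores = {
    "LM-IGTD CNN": np.clip(y_true * 0.30 + rng.normal(0.40, 0.20, 200), 0, 1),
    "tabular ML": np.clip(y_true * 0.35 + rng.normal(0.38, 0.20, 200), 0, 1),
}

roc_points, aucs = {}, {}
for name, scores in model_scores.items():
    fpr, tpr, _ = roc_curve(y_true, scores)  # one ROC curve per model
    roc_points[name] = (fpr, tpr)
    aucs[name] = auc(fpr, tpr)               # scalar summary of the curve
```

Plotting the stored (fpr, tpr) pairs side by side, one panel per dataset–modality combination, yields figures of the kind described above.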

3.5. Visual Examples of LM-IGTD Image Representations

To enhance interpretability, Figure 4 presents LM-IGTD-generated images for non-melanoma and melanoma cases across both datasets and modalities. Although these images do not correspond to natural visual structures, distinct spatial organization patterns can be observed between classes.
For clinical metadata (panels a and d), the representations exhibit structured block-like patterns resulting from the deterministic mapping of heterogeneous features. Differences between non-melanoma and melanoma cases manifest as variations in the location and intensity of activated regions, reflecting underlying feature differences. For statistical features (panels b and e), which involve higher-dimensional descriptors, the generated images display more granular and texture-like patterns. Melanoma cases tend to exhibit more heterogeneous intensity distributions compared to non-melanoma cases, suggesting richer inter-feature interactions in the encoded space.
In the fusion representations (panels c and f), characteristics from both modalities are jointly embedded. The resulting spatial patterns combine block-like and fine-grained textures, indicating that LM-IGTD preserves complementary information when integrating clinical metadata and statistical features. Overall, these qualitative examples illustrate how LM-IGTD transforms tabular data into spatially organized representations in which class-dependent differences remain observable, supporting convolutional processing.

4. Discussion

This work presents a PoC study investigating the feasibility of representing clinically meaningful dermatological tabular data as two-dimensional image-like structures using the LM-IGTD framework, and subsequently applying CNNs for melanoma classification. Rather than aiming to outperform established image-based or tabular learning frameworks, the primary objective of this study was to assess whether tabular-to-image transformations constitute a valid and coherent representation strategy for heterogeneous dermatological data sources.
The experimental results demonstrate that LM-IGTD can effectively encode both low-dimensional clinical metadata and high-dimensional statistical features into image representations suitable for CNN-based learning. Across both the Derm7pt and PH2 datasets, CNN models trained on LM-IGTD-generated images exhibited stable and consistent performance trends, supporting the hypothesis that tabular dermatological data can be mapped to a spatial domain while preserving discriminative information. This observation is particularly relevant in dermatology, where tabular clinical descriptors and derived lesion features are commonly used alongside visual assessment.
For clinical metadata, which typically consist of a limited number of heterogeneous features, the incorporation of noise-based feature augmentation proved useful in stabilizing the generated image representations and facilitating CNN learning. In particular, heterogeneous noise generation led to more consistent performance in the larger Derm7pt dataset, suggesting that controlled stochastic augmentation can mitigate the limitations imposed by low feature dimensionality. These findings align with the design principles of LM-IGTD and support its applicability to clinical metadata commonly encountered in dermatological practice.
While noise-based augmentation is useful for low-dimensional metadata, it may introduce spurious patterns or artificial correlations if synthetic features are not carefully controlled. However, the augmentation strategy implemented in LM-IGTD is not based on arbitrary noise injection. Instead, it is type-aware and structure-preserving: Gaussian perturbations are applied to numerical features, while swap-based noise is used for categorical attributes, ensuring that the statistical nature of each feature is maintained. Importantly, this process is fully unsupervised and does not rely on class labels. Furthermore, augmented candidate datasets are not accepted indiscriminately. Multiple stochastic candidates are generated and evaluated using a structure-preserving selection pipeline based on cluster validity criteria (Silhouette Coefficient). Only the candidate that best preserves the intrinsic geometry of the original data is selected for image generation. This step mitigates the risk of retaining destructive or misleading synthetic patterns. Finally, the ablation study exploring augmentation factors (3, 5, and 7 synthetic features per feature) empirically demonstrates that excessive augmentation leads to performance degradation rather than artificial improvement. This observed signal dilution effect further supports that LM-IGTD does not benefit from uncontrolled dimensional inflation, reinforcing the controlled nature of the augmentation mechanism.
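The augmentation-and-selection loop described above can be sketched as follows. This is a simplified illustration of the idea rather than the LM-IGTD implementation: the noise scales, the candidate count, the two-cluster reference partition, and the use of a plain silhouette criterion are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def augment_type_aware(X, numeric_cols, factor=3, n_candidates=5, seed=0):
    """Type-aware feature augmentation with structure-preserving selection.

    For each original feature, `factor` synthetic copies are generated:
    Gaussian perturbations for numeric features and value swaps for the
    remaining (categorical-like) features. Candidate datasets are scored by
    how well they preserve a reference clustering of the original data,
    using the Silhouette Coefficient, and the best candidate is kept."""
    rng = np.random.default_rng(seed)
    ref_labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        cols = [X]
        for j in range(X.shape[1]):
            col = X[:, j]
            for _ in range(factor):
                if j in numeric_cols:
                    # Gaussian perturbation scaled to the feature's spread.
                    new = col + rng.normal(0, 0.1 * col.std() + 1e-12, col.shape)
                else:
                    # Swap-based noise: permute a small subset of the values.
                    new = col.copy()
                    idx = rng.choice(len(col), size=max(1, len(col) // 10),
                                     replace=False)
                    new[idx] = rng.permutation(new[idx])
                cols.append(new[:, None])
        cand = np.hstack(cols)
        score = silhouette_score(cand, ref_labels)
        if score > best_score:
            best, best_score = cand, score
    return best

X = np.random.default_rng(1).normal(size=(60, 4))
# 4 original features + 3 synthetic copies per feature -> 16 columns.
X_aug = augment_type_aware(X, numeric_cols={0, 1, 2}, factor=3)
```

The original columns are retained unchanged, and only the candidate whose augmented geometry best matches the reference partition survives, mirroring the selection step that discards destructive synthetic patterns.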
In contrast, statistical features extracted from dermoscopic images are inherently high-dimensional and naturally support the construction of image representations without additional augmentation. In this context, LM-IGTD-based CNN models achieved performance levels comparable to traditional tabular baselines, while exhibiting a systematic increase in specificity in several experimental configurations. This behavior suggests that the spatial organization induced by LM-IGTD enables CNNs to learn more conservative decision boundaries, potentially reducing false positive predictions.
Consistent with prior work [29,71], traditional ML models trained directly on tabular data remain strong baselines across all evaluated scenarios, particularly for low-dimensional clinical metadata. Importantly, the goal of this study is not to replace such models, but to explore an alternative representation paradigm that enables the use of DL architectures in contexts dominated by tabular data. The fact that LM-IGTD-based CNN models achieve competitive and coherent performance—despite the additional representation step—constitutes an important observation, demonstrating that the tabular-to-image transformation preserves relevant information and supports stable learning behavior.
From a methodological perspective, an important question concerns the mechanism through which LM-IGTD enables CNN-based learning on tabular data. Its effectiveness can be interpreted in terms of inductive bias alignment. CNNs exploit local spatial correlations through weight sharing and localized receptive fields. By reorganizing tabular features according to their statistical relationships and placing correlated features in spatial proximity, LM-IGTD converts abstract feature dependencies into local neighborhoods. This spatial encoding allows convolutional filters to model inter-feature interactions that would otherwise require high-capacity fully connected layers, introducing a form of structural regularization that constrains the hypothesis space. In low-dimensional settings, the feature-based augmentation further enriches the representation space, mitigating sparsity and supporting more stable learning behavior. Thus, LM-IGTD does not simply reshape tabular data; it embeds statistical dependencies and controlled variations into spatial structures, enabling CNN architectures to leverage their inherent inductive biases in a coherent manner.
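This inductive-bias argument can be made concrete with a simplified sketch. IGTD-style methods optimize the full 2D pixel assignment; here, as an illustrative approximation only, features are ordered by hierarchical clustering on correlation distance and written row-wise into a square grid, so that strongly correlated features tend to land in nearby pixels.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def features_to_image(X, side):
    """Arrange correlated features into nearby pixels (simplified sketch)."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)  # correlation distance between features
    np.fill_diagonal(dist, 0.0)
    # Hierarchical clustering gives a leaf order that keeps similar
    # features adjacent; this stands in for the full 2D optimization.
    order = leaves_list(linkage(squareform(dist, checks=False),
                                method="average"))
    images = []
    for row in X[:, order]:          # one grayscale image per sample
        img = np.zeros(side * side)
        img[: len(row)] = row        # pad unused pixels with zeros
        images.append(img.reshape(side, side))
    return np.stack(images)

X = np.random.default_rng(0).normal(size=(10, 16))
imgs = features_to_image(X, side=4)  # 10 samples -> 10 images of 4x4 pixels
```

Once abstract feature dependencies are localized this way, small convolutional filters can capture inter-feature interactions within their receptive fields.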
An additional contribution of this work lies in the exploration of feature-level fusion through tabular-to-image transformation. By concatenating clinical metadata and statistical features prior to image generation, this study introduces a unified representation in which heterogeneous sources of information are jointly embedded within a single spatial structure. This approach can be interpreted as an implicit fusion mechanism operating entirely within the tabular domain, allowing CNNs to model interactions between clinical and lesion-level features without requiring raw image data at inference time. The fusion experiments indicate that this unified encoding is feasible and yields coherent performance trends across datasets, even though direct tabular learning remains highly effective.
Beyond the specific CNN architectures evaluated in this study, an important implication of the proposed tabular-to-image representation is that it enables the use of a broader class of DL models originally developed for visual data. Once tabular clinical information is encoded as a structured image, more advanced architectures such as residual [77] and densely connected networks [78], as well as more recent attention-based and transformer-based models [79,80], can be directly applied without requiring fundamental changes to the input representation. From a PoC perspective, the competitive performance obtained with relatively simple CNN architectures suggests that further gains may be achieved using more expressive models capable of capturing long-range spatial dependencies and higher-order feature interactions [81].
Another promising direction involves integrating LM-IGTD-generated images with raw dermoscopic images in multimodal learning frameworks. In this setting, tabular-derived representations could act as a complementary visual modality encoding clinical metadata and lesion descriptors. This opens the possibility of jointly modeling tabular clinical information and visual lesion characteristics using unified image-based architectures, potentially enhancing robustness and interpretability in dermatological decision-support systems. While the present study focuses on establishing feasibility, the proposed framework naturally lends itself to multimodal extensions that combine generated and acquired images.
Although this study focuses on melanoma classification, the LM-IGTD framework is inherently domain-agnostic and can be adapted to other medical tasks involving heterogeneous tabular data. Previous validations of the framework have demonstrated its applicability in distinct clinical domains, including diabetes, hepatitis [29], and cardiovascular risk prediction [71], confirming its ability to preserve feature correlations across diverse biomedical contexts. In broader medical imaging scenarios, this representation strategy could support multimodal learning settings. For example, in breast cancer analysis, tabular clinical features (e.g., demographic factors or molecular markers) are often analyzed alongside mammography [82] or histopathology images [83]. Transforming such tabular descriptors into spatially organized representations may facilitate their integration within convolutional pipelines using unified image-based architectures. Empirical evaluation in these domains remains future work, but the methodological principles of LM-IGTD are directly transferable.
The present study is subject to certain constraints that are inherent to its PoC nature. The experimental evaluation was conducted on two publicly available datasets, with relatively limited sample sizes—particularly in the case of PH2—and no external validation cohort was considered. In addition, the CNN architectures explored were kept relatively simple, as the emphasis of this work was placed on assessing the feasibility of the proposed representation rather than on architectural optimization. Finally, the analysis was limited to binary melanoma classification using tabular features derived from dermoscopic datasets. These aspects define the scope of the present study and naturally motivate future research involving larger and more diverse cohorts, external validation, more advanced network architectures, and extended clinical scenarios.
Overall, this PoC study demonstrates that LM-IGTD-based tabular-to-image representations provide a feasible framework for encoding heterogeneous dermatological data. By enabling CNN-based learning on clinical metadata, statistical features, and their fusion, this work supports future research exploring more advanced architectures and multimodal strategies for dermatological decision support.

5. Conclusions

This work presents a PoC study evaluating the feasibility of representing heterogeneous dermatological tabular data as two-dimensional image-like structures using the LM-IGTD framework. The study assesses whether such tabular-to-image transformations provide a stable representation for clinically relevant dermatological data.
The experimental results show that LM-IGTD can encode both low-dimensional clinical metadata and high-dimensional statistical features into structured image representations while preserving discriminative information. CNN models trained on these representations exhibited consistent performance across datasets and modalities, indicating stable learning behavior. Although traditional ML models trained directly on tabular data remain strong baselines, the competitive performance of LM-IGTD-based CNNs supports the feasibility of the proposed representation strategy.
Beyond individual data modalities, the results demonstrate that feature-level fusion through tabular-to-image transformation is feasible. This enables the joint representation of clinical metadata and statistical features within a unified spatial structure, supporting future multimodal dermatological AI systems. By converting non-visual clinical data into a compatible spatial format, the framework facilitates integration with dermoscopic images in unified deep learning architectures. This reduces the need for complex fusion mechanisms and allows models to jointly learn from visual lesion characteristics and patient-level metadata.
Overall, this PoC study demonstrates that LM-IGTD-based tabular-to-image representations provide a feasible approach for encoding heterogeneous dermatological tabular data. By offering a unified image-based representation, this work supports future research on multimodal strategies combining tabular-derived images with medical imaging data, with potential benefits for dermatological decision-support systems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app16052459/s1, Table S1: Selected hyperparameters for CNN architectures for each dataset and modality, Table S2: Statistical comparison between LM-IGTD CNN models and tabular ML models across datasets and modalities. Results include Wilcoxon signed-rank p-values, paired Cohen’s d, and 95% confidence intervals.

Author Contributions

Conceptualization, V.G.-M., D.C.-M. and C.S.-R.; Methodology, V.G.-M., D.C.-M. and C.S.-R.; Software, V.G.-M.; Validation, V.G.-M., D.C.-M. and C.S.-R.; Formal analysis, V.G.-M.; Investigation, V.G.-M.; Resources, C.S.-R.; Data curation, V.G.-M.; Writing—original draft preparation, V.G.-M.; Writing—review and editing, V.G.-M., D.C.-M. and C.S.-R.; Visualization, V.G.-M.; Supervision, D.C.-M. and C.S.-R.; Project administration, C.S.-R.; Funding acquisition, C.S.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Commission through H2020-EU.3.1.4.2, European Project WARIFA (Watching the Risk Factors: Artificial Intelligence and the Prevention of Chronic Conditions) under Grant Agreement No. 101017385; and by the Spanish Government through Grant PID2023-149457OB funded by AEI (Agencia Estatal de Investigación, AEI/10.13039/501100011033). The study sponsors were not involved in any stage of the study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are publicly available. The PH2 dataset is available at https://www.fc.up.pt/addi/ph2%20database.html (accessed on 1 March 2026). The Derm7pt dataset is available at https://derm.cs.sfu.ca/Welcome.html (accessed on 1 March 2026).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AUC: Area Under the Receiver Operating Characteristic Curve
CNN: Convolutional Neural Network
DL: Deep Learning
DT: Decision Tree
HeNG: Heterogeneous Noise Generation
HoNG: Homogeneous Noise Generation
IGTD: Image Generator for Tabular Data
LASSO: Least Absolute Shrinkage and Selection Operator
LM-IGTD: Low Mixed-Image Generator for Tabular Data
ML: Machine Learning
PoC: Proof-of-Concept
RF: Random Forest
ROC: Receiver Operating Characteristic
SVM: Support Vector Machine

References

  1. Sun, Y.; Shen, Y.; Liu, Q.; Zhang, H.; Jia, L.; Chai, Y.; Jiang, H.; Wu, M.; Li, Y. Global trends in melanoma burden: A comprehensive analysis from the global burden of disease study, 1990–2021. J. Am. Acad. Dermatol. 2025, 92, 100–107. [Google Scholar] [CrossRef] [PubMed]
  2. Garbe, C.; Keim, U.; Gandini, S.; Amaral, T.; Katalinic, A.; Hollezcek, B.; Martus, P.; Flatz, L.; Leiter, U.; Whiteman, D. Epidemiology of cutaneous melanoma and keratinocyte cancer in white populations 1943–2036. Eur. J. Cancer 2021, 152, 18–25. [Google Scholar] [CrossRef] [PubMed]
  3. Rojas, K.D.; Perez, M.E.; Marchetti, M.A.; Nichols, A.J.; Penedo, F.J.; Jaimes, N. Skin cancer: Primary, secondary, and tertiary prevention. Part II. J. Am. Acad. Dermatol. 2022, 87, 271–288. [Google Scholar] [CrossRef] [PubMed]
  4. Goyal, M.; Knackstedt, T.; Yan, S.; Hassanpour, S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput. Biol. Med. 2020, 127, 104065. [Google Scholar] [CrossRef]
  5. Wen, D.; Khan, S.M.; Xu, A.J.; Ibrahim, H.; Smith, L.; Caballero, J.; Zepeda, L.; de Blas Perez, C.; Denniston, A.K.; Liu, X.; et al. Characteristics of publicly available skin cancer image datasets: A systematic review. Lancet Digit. Health 2022, 4, e64–e74. [Google Scholar] [CrossRef]
  6. Li, H.; Pan, Y.; Zhao, J.; Zhang, L. Skin disease diagnosis with deep learning: A review. Neurocomputing 2021, 464, 364–393. [Google Scholar] [CrossRef]
  7. Gómez-Martínez, V.; Chushig-Muzo, D.; Veierød, M.B.; Granja, C.; Soguero-Ruiz, C. A multimodal and interpretable-based approach for improving melanoma detection using dermoscopy images. Research Square 2023. [Google Scholar] [CrossRef]
  8. Hasan, M.K.; Ahamad, M.A.; Yap, C.H.; Yang, G. A survey, review, and future trends of skin lesion segmentation and classification. Comput. Biol. Med. 2023, 155, 106624. [Google Scholar] [CrossRef]
  9. Wu, Y.; Chen, B.; Zeng, A.; Pan, D.; Wang, R.; Zhao, S. Skin cancer classification with deep learning: A systematic review. Front. Oncol. 2022, 12, 893972. [Google Scholar] [CrossRef]
  10. Khan, M.A.; Akram, T.; Sharif, M.; Shahzad, A.; Aurangzeb, K.; Alhussein, M.; Haider, S.I.; Altamrah, A. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 2018, 18, 638. [Google Scholar] [CrossRef]
  11. Venkat, N. The Curse of Dimensionality: Inside Out; Birla Institute of Technology and Science, Pilani, Department of Computer Science and Information Systems: Pilani, India, 2018; Volume 10. [Google Scholar]
  12. Lee, H.D.; Mendes, A.I.; Spolaor, N.; Oliva, J.T.; Parmezan, A.R.S.; Wu, F.C.; Fonseca-Pinto, R. Dermoscopic assisted diagnosis in melanoma: Reviewing results, optimizing methodologies and quantifying empirical guidelines. Knowl.-Based Syst. 2018, 158, 9–24. [Google Scholar] [CrossRef]
  13. Park, A.J.; Weintraub, G.S.; Asgari, M.M. Leveraging the electronic health record to improve dermatologic care delivery: The importance of finding structure in data. J. Am. Acad. Dermatol. 2020, 82, 773–775. [Google Scholar] [CrossRef] [PubMed]
  14. Shwartz-Ziv, R.; Armon, A. Tabular data: Deep learning is not all you need. Inf. Fusion 2022, 81, 84–90. [Google Scholar] [CrossRef]
  15. Arik, S.Ö.; Pfister, T. Tabnet: Attentive interpretable tabular learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 6679–6687. [Google Scholar]
  16. Medeiros Neto, L.; Rogerio da Silva Neto, S.; Endo, P.T. A comparative analysis of converters of tabular data into image for the classification of Arboviruses using Convolutional Neural Networks. PLoS ONE 2023, 18, e0295598. [Google Scholar] [CrossRef]
  17. Zhang, S.; Zhang, X.; Ren, W.; Zhao, L.; Fan, E.; Huang, F. Exploring Fuzzy Priors From Multi-Mapping GAN for Robust Image Dehazing. IEEE Trans. Fuzzy Syst. 2025, 33, 3946–3958. [Google Scholar] [CrossRef]
  18. Wolf, T.N.; Pölsterl, S.; Wachinger, C.; Alzheimer’s Disease Neuroimaging Initiative. DAFT: A universal module to interweave tabular data and 3D images in CNNs. NeuroImage 2022, 260, 119505. [Google Scholar] [CrossRef]
  19. Lee, E.; Nam, M.; Lee, H. Table 2vox: CNN-based multivariate multilevel demand forecasting framework by tabular-to-voxel image conversion. Sustainability 2022, 14, 11745. [Google Scholar] [CrossRef]
  20. Sharma, A.; Vans, E.; Shigemizu, D.; Boroevich, K.A.; Tsunoda, T. DeepInsight: A methodology to transform a non-image data to an image for convolution neural network architecture. Sci. Rep. 2019, 9, 11399. [Google Scholar] [CrossRef]
  21. Bazgir, O.; Zhang, R.; Dhruba, S.R.; Rahman, R.; Ghosh, S.; Pal, R. Representation of features as images with neighborhood dependencies for compatibility with convolutional neural networks. Nat. Commun. 2020, 11, 4391. [Google Scholar] [CrossRef]
  22. Zhu, Y.; Brettin, T.; Xia, F.; Partin, A.; Shukla, M.; Yoo, H.; Evrard, Y.A.; Doroshow, J.H.; Stevens, R.L. Converting tabular data into images for deep learning with convolutional neural networks. Sci. Rep. 2021, 11, 11325. [Google Scholar] [CrossRef]
  23. Castillo-Cara, M.; Talla-Chumpitaz, R.; García-Castro, R.; Orozco-Barbosa, L. TINTO: Converting Tidy Data into image for classification with 2-Dimensional Convolutional Neural Networks. SoftwareX 2023, 22, 101391. [Google Scholar] [CrossRef]
  24. Ou, C.; Zhou, S.; Yang, R.; Jiang, W.; He, H.; Gan, W.; Chen, W.; Qin, X.; Luo, W.; Pi, X.; et al. A deep learning based multimodal fusion model for skin lesion diagnosis using smartphone collected clinical images and metadata. Front. Surg. 2022, 9, 1029991. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, Y.; Feng, Y.; Zhang, L.; Zhou, J.T.; Liu, Y.; Goh, R.S.M.; Zhen, L. Adversarial multimodal fusion with attention mechanism for skin lesion classification using clinical and dermoscopic images. Med. Image Anal. 2022, 81, 102535. [Google Scholar] [CrossRef] [PubMed]
  26. Cai, G.; Zhu, Y.; Wu, Y.; Jiang, X.; Ye, J.; Yang, D. A multimodal transformer to fuse images and metadata for skin disease classification. Vis. Comput. 2023, 39, 2781–2793. [Google Scholar] [CrossRef]
  27. Das, A.; Agarwal, V.; Shetty, N.P. Comparative analysis of multimodal architectures for effective skin lesion detection using clinical and image data. Front. Artif. Intell. 2025, 8, 1608837. [Google Scholar] [CrossRef]
  28. Casal-Guisande, M.; Fernández-Villar, A.; Mosteiro-Añón, M.; Comesaña-Campos, A.; Cerqueiro-Pequeño, J.; Torres-Durán, M. Integrating tabular data through image conversion for enhanced diagnosis: A novel intelligent decision support system for stratifying obstructive sleep apnoea patients using convolutional neural networks. Digit. Health 2024, 10, 20552076241272632. [Google Scholar] [CrossRef]
  29. Gómez-Martínez, V.; Lara-Abelenda, F.J.; Peiro-Corbacho, P.; Chushig-Muzo, D.; Granja, C.; Soguero-Ruiz, C. LM-IGTD: A 2D image generator for low-dimensional and mixed-type tabular data to leverage the potential of convolutional neural networks. arXiv 2024, arXiv:2406.14566. [Google Scholar]
  30. Mendonça, T.; Celebi, M.; Mendonca, T.; Marques, J. Ph2: A public database for the analysis of dermoscopic images. In Dermoscopy Image Analysis; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  31. Kawahara, J.; Daneshvar, S.; Argenziano, G.; Hamarneh, G. Seven-point checklist and skin lesion classification using multitask multimodal neural nets. J. Biomed. Health Inform. 2018, 23, 538–546. [Google Scholar] [CrossRef]
  32. Liu, J.; Zou, X.B. Practical Dermoscopy; Springer Nature: Berlin, Germany, 2022. [Google Scholar]
  33. Camacho-Gutiérrez, J.A.; Solorza-Calderón, S.; Álvarez-Borrego, J. Multi-class skin lesion classification using prism-and segmentation-based fractal signatures. Expert Syst. Appl. 2022, 197, 116671. [Google Scholar] [CrossRef]
  34. Singh, L.; Janghel, R.R.; Sahu, S.P. A hybrid feature fusion strategy for early fusion and majority voting for late fusion towards melanocytic skin lesion detection. Int. J. Imaging Syst. Technol. 2022, 32, 1231–1250. [Google Scholar] [CrossRef]
  35. Friedman, R.J.; Rigel, D.S.; Kopf, A.W. Early detection of malignant melanoma: The role of physician examination and self-examination of the skin. CA Cancer J. Clin. 1985, 35, 130–151. [Google Scholar] [CrossRef]
  36. De Siqueira, F.R.; Schwartz, W.R.; Pedrini, H. Multi-scale gray level co-occurrence matrices for texture description. Neurocomputing 2013, 120, 336–345. [Google Scholar] [CrossRef]
  37. Ali, A.-R.H.; Li, J.; Yang, G. Automating the ABCD rule for melanoma detection: A survey. IEEE Access 2020, 8, 83333–83346. [Google Scholar] [CrossRef]
  38. Kingsly, A.; Sankaragomathi, B. Performance Analysis of Machine Learning Based Classifiers for the Diagnosis of Melanoma Cancer and Comparison. J. Comput. Theor. Nanosci. 2018, 15, 558–575. [Google Scholar] [CrossRef]
  39. Thanh, D.N.H.; Prasath, V.B.S.; Hieu, L.M.; Hien, N.N. Melanoma skin cancer detection method based on adaptive principal curvature, colour normalisation and feature extraction with the ABCD rule. J. Digit. Imaging 2020, 33, 574–585. [Google Scholar] [CrossRef] [PubMed]
  40. Karuppiah, S.P.; Sheeba, A.; Padmakala, S.; Subasini, C.A. An Efficient Galactic Swarm Optimization Based Fractal Neural Network Model with DWT for Malignant Melanoma Prediction. Neural Process. Lett. 2022, 54, 5043–5062. [Google Scholar] [CrossRef]
  41. Waladi, A.; Firdaus, N.M.; Arymurthy, A.M. Melanoma classification using texture and wavelet analysis. In Proceedings of the 2019 International Conference of Artificial Intelligence and Information Technology (ICAIIT), Yogyakarta, Indonesia, 13–15 March 2019; pp. 336–343. [Google Scholar]
  42. Kumar, T.K.; Himanshu, I.N. Artificial intelligence based real-time skin cancer detection. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering (ICCAE), Sydney, Australia, 3–5 March 2023; pp. 215–219. [Google Scholar]
  43. Kumar, M.; Alshehri, M.; AlGhamdi, R.; Sharma, P.; Deep, V. A de-ann inspired skin cancer detection approach using fuzzy c-means clustering. Mob. Netw. Appl. 2020, 25, 1319–1329. [Google Scholar] [CrossRef]
  44. Bansal, P.; Garg, R.; Soni, P. Detection of melanoma in dermoscopic images by integrating features extracted using handcrafted and deep learning models. Comput. Ind. Eng. 2022, 168, 108060. [Google Scholar] [CrossRef]
  45. Chatterjee, S.; Dey, D.; Munshi, S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput. Methods Programs Biomed. 2019, 178, 201–218. [Google Scholar] [CrossRef]
  46. Gómez-Martínez, V.; Chushig-Muzo, D.; Veierød, M.B.; Granja, C.; Soguero-Ruiz, C. Ensemble feature selection and tabular data augmentation with generative adversarial networks to enhance cutaneous melanoma identification and interpretability. BioData Min. 2024, 17, 46. [Google Scholar] [CrossRef]
  47. Tan, T.Y.; Zhang, L.; Neoh, S.C.; Lim, C.P. Intelligent skin cancer detection using enhanced particle swarm optimization. Knowl.-Based Syst. 2018, 158, 118–135. [Google Scholar] [CrossRef]
  48. Murugan, A.; Nair, S.A.H.; Preethi, A.A.P.; Kumar, K.P.S. Diagnosis of skin cancer using machine learning techniques. Microprocess. Microsystems 2021, 81, 103727. [Google Scholar] [CrossRef]
  49. Peng, Y.; Chu, Y.; Chen, Z.; Zhou, W.; Wan, S.; Xiao, Y.; Zhang, Y.; Li, J. Combining texture features of whole slide images improves prognostic prediction of recurrence-free survival for cutaneous melanoma patients. World J. Surg. Oncol. 2020, 18, 130. [Google Scholar] [CrossRef]
  50. De Moura, L.V.; Dartora, C.M.; da Silva, A.M.M. Skin lesions classification using multichannel dermoscopic images. In Anais do XII Simpósio de Engenharia Biomédica–IX Simpósio de Instrumentação e Imagens Médicas; Zenodo: Natal, Brazil, 2019. [Google Scholar]
  51. Singh, S.; Urooj, S. A Methodological Approach for Analysis of Melanoma Images. Madridge J. Dermatol. Res. 2018, 3, 83–87. [Google Scholar] [CrossRef]
  52. Shrestha, B.; Bishop, J.; Kam, K.; Chen, X.; Moss, R.H.; Stoecker, W.V.; Umbaugh, S.; Stanley, R.J.; Celebi, M.E.; Marghoob, A.A.; et al. Detection of atypical texture features in early malignant melanoma. Skin Res. Technol. 2010, 16, 60–65. [Google Scholar] [CrossRef] [PubMed]
  53. Adjed, F.; Safdar Gardezi, S.J.; Ababsa, F.; Faye, I.; Chandra Dass, S. Fusion of structural and textural features for melanoma recognition. IET Comput. Vis. 2018, 12, 185–195. [Google Scholar] [CrossRef]
  54. Majtner, T.; Yildirim-Yayilgan, S.; Hardeberg, J.Y. Combining deep learning and hand-crafted features for skin lesion classification. In Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications, Oulu, Finland, 12–15 December 2016; pp. 1–6. [Google Scholar]
  55. Jayaraman, P.; Veeramani, N.; Krishankumar, R.; Ravichandran, K.S.; Cavallaro, F.; Rani, P.; Mardani, A. Wavelet-based classification of enhanced melanoma skin lesions through deep neural architectures. Information 2022, 13, 583. [Google Scholar] [CrossRef]
  56. Narasimhan, K.; Elamaran, V. Wavelet-based energy features for diagnosis of melanoma from dermoscopic images. Int. J. Biomed. Eng. Technol. 2016, 20, 243–252. [Google Scholar] [CrossRef]
  57. Bansal, P.; Vanjani, A.; Mehta, A.; Kavitha, J.C.; Kumar, S. Improving the classification accuracy of melanoma detection by performing feature selection using binary Harris hawks optimization algorithm. Soft Comput. 2022, 26, 8163–8181. [Google Scholar] [CrossRef]
  58. Barata, C.; Celebi, M.E.; Marques, J.S. Melanoma detection algorithm based on feature fusion. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2653–2656. [Google Scholar]
  59. Shalu; Rani, R.; Kamboj, A. Automated melanoma skin cancer detection from digital images. Int. J. Biomed. Eng. Technol. 2021, 37, 275–289. [Google Scholar] [CrossRef]
  60. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373. [Google Scholar] [CrossRef]
  61. Hashemi, M. Enlarging smaller images before inputting into convolutional neural network: Zero-padding vs. interpolation. J. Big Data 2019, 6, 98. [Google Scholar] [CrossRef]
  62. Gower, J.C. A general coefficient of similarity and some of its properties. Biometrics 1971, 27, 857–871. [Google Scholar] [CrossRef]
  63. Ng, A.; Jordan, M.; Weiss, Y. On spectral clustering: Analysis and an algorithm. Adv. Neural Inf. Process. Syst. 2001, 14, 849–856. [Google Scholar]
  64. Arbelaitz, O.; Gurrutxaga, I.; Muguerza, J.; Pérez, J.M.; Perona, I. An extensive comparative study of cluster validity indices. Pattern Recognit. 2013, 46, 243–256. [Google Scholar] [CrossRef]
  65. Yu, H.; Hutson, A.D. A robust Spearman correlation coefficient permutation test. Commun. Stat. Theory Methods 2024, 53, 2141–2153. [Google Scholar] [CrossRef] [PubMed]
  66. Kornbrot, D. Point biserial correlation. In Wiley StatsRef: Statistics Reference Online; Wiley: Hoboken, NJ, USA, 2014. [Google Scholar]
  67. Baak, M.; Koopman, R.; Snoek, H.; Klous, S. A new correlation coefficient between categorical, ordinal and interval variables with Pearson characteristics. Comput. Stat. Data Anal. 2020, 152, 107043. [Google Scholar] [CrossRef]
  68. Liu, P.; Yuan, H.; Ning, Y.; Chakraborty, B.; Liu, N.; Peres, M.A. A modified and weighted Gower distance-based clustering analysis for mixed type data: A simulation and empirical analyses. BMC Med. Res. Methodol. 2024, 24, 305. [Google Scholar] [CrossRef]
  69. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  70. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  71. Lara-Abelenda, F.J.; Chushig-Muzo, D.; Peiro-Corbacho, P.; Gómez-Martínez, V.; Wägner, A.M.; Granja, C.; Soguero-Ruiz, C. Transfer learning for a tabular-to-image approach: A case study for cardiovascular disease prediction. J. Biomed. Inform. 2025, 165, 104821. [Google Scholar] [CrossRef]
  72. Snoek, J.; Larochelle, H.; Adams, R.P. Practical bayesian optimization of machine learning algorithms. Adv. Neural Inf. Process. Syst. 2012, 25, 2951–2959. [Google Scholar]
  73. Falkner, S.; Klein, A.; Hutter, F. BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1437–1446. [Google Scholar]
  74. Taheri, S.; Hesamian, G. A generalization of the Wilcoxon signed-rank test and its applications. Stat. Pap. 2013, 54, 457–470. [Google Scholar] [CrossRef]
  75. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Taylor & Francis Group: New York, NY, USA, 1988; Volume 54, pp. 77–155. [Google Scholar]
  76. Grigelionis, B. The Student t Distribution. In International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2025; pp. 2720–2722. [Google Scholar]
  77. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  78. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  79. Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  80. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  81. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. 2022, 54, 1–41. [Google Scholar] [CrossRef]
  82. Yala, A.; Mikhael, P.G.; Strand, F.; Lin, G.; Smith, K.; Wan, Y.-L.; Lamb, L.; Hughes, K.; Lehman, C.; Barzilay, R. Toward robust mammography-based models for breast cancer risk. Sci. Transl. Med. 2021, 13, eaba4373. [Google Scholar] [CrossRef]
  83. Mobadersany, P.; Yousefi, S.; Amgad, M.; Gutman, D.A.; Barnholtz-Sloan, J.S.; Velázquez Vega, J.E.; Brat, D.J.; Cooper, L.A.D. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc. Natl. Acad. Sci. USA 2018, 115, E2970–E2979. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed methodology.
Figure 2. Summary of classification performance (AUC) obtained by the best-performing tabular ML models and LM-IGTD-based CNN models across different data modalities. Results are shown for (a) PH2 and (b) Derm7pt datasets. Blue bars: ML models trained on tabular data; orange bars: CNN-based models trained on LM-IGTD-generated image representations.
Figure 3. ROC curves illustrating the discriminative performance of LM-IGTD-CNN models versus classical tabular ML baselines for PH2 (top row) and Derm7pt (bottom row). Each column represents a different modality (metadata, statistical features, fusion). The gray diagonal line denotes chance-level performance.
Figure 4. Examples of LM-IGTD-generated image representations for PH2 (panels a–c) and Derm7pt (panels d–f). In each panel, the left image corresponds to a non-melanoma case and the right image to a melanoma case. Panels (a,d) show metadata images, (b,e) statistical feature images, and (c,f) fusion images.
Table 1. Overview of the tabular metadata used in the PH2 and Derm7pt datasets.
| Dataset | Samples | Melanoma/Not Melanoma | No. of Features | Metadata Features |
|---|---|---|---|---|
| PH2 | 200 | 40/160 | 7 | Asymmetry; pigment network; dots and globules; streaks; regression areas; blue-whitish veil; lesion color |
| Derm7pt | 1011 | 252/759 | 12 | Pigment network; streaks; pigmentation; regression structures; dots and globules; blue-whitish veil; vascular structures; diagnostic difficulty; elevation; anatomical location; sex; management |
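For intuition on how such low-dimensional metadata can feed a CNN at all, the sketch below min-max scales a short feature vector onto a grayscale canvas. This is a deliberately simplified illustration of the tabular-to-image idea only, not the actual LM-IGTD algorithm (which additionally performs type-aware encoding, feature arrangement, and controlled augmentation); the function name and the example values are hypothetical.

```python
import numpy as np

def features_to_grayscale(x, side):
    """Min-max scale a 1-D feature vector to [0, 255] and lay it out
    row-wise on a side x side grayscale canvas, zero-padding the rest."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    scaled = np.zeros_like(x) if hi == lo else (x - lo) / (hi - lo) * 255.0
    canvas = np.zeros(side * side)
    canvas[: scaled.size] = scaled          # remaining pixels stay 0 (padding)
    return canvas.reshape(side, side).astype(np.uint8)

# e.g., a 7-value vector, mirroring the 7 PH2 metadata features
# (the values themselves are made up for illustration)
img = features_to_grayscale([2, 1, 0, 1, 0, 1, 3], side=3)
print(img.shape)  # (3, 3)
```

The resulting 2-D array can be fed to any image pipeline; LM-IGTD's contribution is producing layouts where related features end up spatially close, which is what makes convolutions meaningful.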
Table 2. Summary of handcrafted image features extracted from skin lesions.
| Feature Category | Technique | Representative Studies | No. of Features |
|---|---|---|---|
| Geometric | ABCD rule | [37,38,39] | 4 |
| Texture | DWT | [40,41] | 34 |
| | FDTA | [42] | 4 |
| | FOS | [38] | 15 |
| | GLCM | [43,44,45] | 14 |
| | GLDS | [46] | 5 |
| | GLRLM | [47,48] | 16 |
| | GLSZM | [49,50] | 14 |
| | HOS | [51] | 2 |
| | King | [52] | 5 |
| | LBP | [53,54] | 6 |
| | SFM | [46] | 4 |
| | WP | [55,56] | 125 |
| Color | RGB | [44,57] | 4 |
| | HSV | [44,57,58] | 4 |
| | CIE L*a*b | [44,57] | 12 |
| | CIE L*u*v | [59,60] | 12 |
| | YCrCb | [44,57,59] | 12 |
Acronyms: Asymmetry, Border, Color, and Diameter (ABCD); discrete wavelet transform (DWT); fractal dimension texture analysis (FDTA); first-order statistics (FOS); gray-level co-occurrence matrix (GLCM); gray-level difference statistics (GLDS); gray-level run-length matrix (GLRLM); gray-level size zone matrix (GLSZM); higher-order spectra (HOS); local binary pattern (LBP); statistical feature matrix (SFM); wavelet packet (WP).
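Many of the texture descriptors above are derived from intermediate matrices such as the GLCM. As a dependency-free illustration, the sketch below computes one classic GLCM property (contrast) in plain NumPy; real pipelines typically use a library such as scikit-image (`graycomatrix`/`graycoprops`), and the quantization choices here are illustrative rather than the exact implementation used in the paper.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of a horizontal-offset gray-level co-occurrence matrix,
    one of the classic Haralick-style GLCM texture features."""
    # Quantize gray values to `levels` bins (indices 0..levels-1)
    q = (img.astype(int) * levels) // (int(img.max()) + 1)
    # Count co-occurrences of horizontally adjacent pixel pairs
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[i, j] += 1
    P /= P.sum()                                  # normalize to probabilities
    ii, jj = np.indices((levels, levels))
    return float(((ii - jj) ** 2 * P).sum())      # sum_{i,j} (i-j)^2 P(i,j)

flat = np.full((16, 16), 120, dtype=np.uint8)     # uniform patch: no texture
noisy = np.random.default_rng(0).integers(0, 256, (16, 16), dtype=np.uint8)
print(glcm_contrast(flat), glcm_contrast(noisy))  # flat patch gives contrast 0
```

Other GLCM properties (energy, homogeneity, correlation) follow the same recipe with a different weighting of P(i, j).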
Table 3. Hyperparameter ranges considered for both tabular ML models and CNN-based models trained on LM-IGTD-generated images.
| Model/Component | Hyperparameter | Values Considered |
|---|---|---|
| **Tabular ML models** | | |
| DT | Maximum depth | [2, 8] |
| | Minimum samples per split | [15%, 20%] of training set size |
| RF | Number of trees | {10, 20, 30} |
| | Maximum depth | [2, 8] |
| | Minimum samples per split | [2, 10] |
| SVM | Regularization parameter | [0.1, 10] |
| | Kernel coefficient | {10^-2, 10^-3, 10^-4, 10^-5} |
| LASSO | Regularization parameter | [10^-1.5, 10^0.4] |
| **CNN-based models** | | |
| Convolutional layers | Number of filters | {8, 16, 32, 64} |
| | Kernel size | {3, 4, 5} |
| Pooling layers | Pool size | {2, 3} |
| Fully connected layers | Number of units | {32, 64, 128} |
| Regularization | Dropout rate | {0.1, 0.2, 0.4} |
| Optimization | Learning rate | [10^-4, 10^-2] |
| | Optimizer | {Adam, RMSprop} |
| LM-IGTD images | Minimum image size (pixels) | {25, 35, 45} |
| Training | Batch size | {10, 20, 30} |
| | Early stopping | based on validation loss |
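As one concrete way to search ranges like those listed for the tabular models, the sketch below runs a randomized search over the RF hyperparameter ranges using scikit-learn. The toy dataset and search settings (`n_iter`, `cv`, AUC scoring) are assumptions for illustration, not necessarily the study's exact optimization procedure.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Toy stand-in for the tabular metadata (the study itself uses PH2/Derm7pt)
X, y = make_classification(n_samples=200, n_features=7, random_state=0)

# RF ranges taken from the table above
param_dist = {
    "n_estimators": [10, 20, 30],
    "max_depth": list(range(2, 9)),
    "min_samples_split": list(range(2, 11)),
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_dist, n_iter=10, cv=3,
    scoring="roc_auc", random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))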
Table 4. Classification performance using clinical metadata from the PH2 and Derm7pt datasets. Results are reported for traditional ML models trained on tabular data and CNN-based models trained on LM-IGTD image representations with different noise augmentation configurations (3, 5, 7). Best results for each dataset are shown in bold.
| Dataset | Data Type | Model | AUC | Sensitivity | Specificity |
|---|---|---|---|---|---|
| PH2 | Tabular | DT | 0.833 ± 0.085 | 0.875 ± 0.102 | 0.792 ± 0.126 |
| | | LASSO | 0.818 ± 0.077 | 0.875 ± 0.102 | 0.760 ± 0.126 |
| | | RF | 0.828 ± 0.084 | 0.875 ± 0.102 | 0.781 ± 0.133 |
| | | SVM | 0.771 ± 0.045 | 0.792 ± 0.156 | 0.750 ± 0.111 |
| | Image—no noise | CNN | 0.785 ± 0.036 | 0.667 ± 0.156 | 0.904 ± 0.115 |
| | Image + HoNG (3) | CNN | 0.743 ± 0.041 | 0.625 ± 0.102 | 0.861 ± 0.041 |
| | Image + HeNG (3) | CNN | 0.734 ± 0.046 | 0.500 ± 0.102 | 0.968 ± 0.026 |
| | Image + HoNG (5) | CNN | 0.712 ± 0.064 | 0.583 ± 0.059 | 0.840 ± 0.070 |
| | Image + HeNG (5) | CNN | 0.702 ± 0.033 | 0.458 ± 0.118 | 0.946 ± 0.055 |
| | Image + HoNG (7) | CNN | 0.681 ± 0.052 | 0.542 ± 0.102 | 0.823 ± 0.087 |
| | Image + HeNG (7) | CNN | 0.693 ± 0.040 | 0.417 ± 0.083 | 0.955 ± 0.044 |
| Derm7pt | Tabular | DT | 0.847 ± 0.029 | 0.902 ± 0.064 | 0.792 ± 0.040 |
| | | LASSO | 0.848 ± 0.021 | 0.928 ± 0.033 | 0.768 ± 0.008 |
| | | RF | 0.863 ± 0.014 | 0.935 ± 0.040 | 0.792 ± 0.017 |
| | | SVM | 0.858 ± 0.034 | 0.908 ± 0.056 | 0.807 ± 0.016 |
| | Image—no noise | CNN | 0.803 ± 0.023 | 0.797 ± 0.051 | 0.809 ± 0.009 |
| | Image + HoNG (3) | CNN | 0.792 ± 0.006 | 0.778 ± 0.024 | 0.807 ± 0.020 |
| | Image + HeNG (3) | CNN | 0.810 ± 0.018 | 0.824 ± 0.028 | 0.796 ± 0.021 |
| | Image + HoNG (5) | CNN | 0.777 ± 0.012 | 0.778 ± 0.018 | 0.776 ± 0.019 |
| | Image + HeNG (5) | CNN | 0.812 ± 0.010 | 0.817 ± 0.021 | 0.807 ± 0.016 |
| | Image + HoNG (7) | CNN | 0.803 ± 0.020 | 0.824 ± 0.042 | 0.783 ± 0.019 |
| | Image + HeNG (7) | CNN | 0.798 ± 0.006 | 0.791 ± 0.040 | 0.805 ± 0.030 |
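The AUC, sensitivity, and specificity columns reported in these tables can be computed from predicted melanoma probabilities as in the short sketch below; the labels, probabilities, and 0.5 threshold are fabricated solely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical labels (1 = melanoma) and predicted probabilities
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.4, 0.2, 0.9, 0.5])

auc = roc_auc_score(y_true, y_prob)           # threshold-free ranking quality
y_pred = (y_prob >= 0.5).astype(int)          # binarize at an example threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # melanoma cases correctly flagged
specificity = tn / (tn + fp)                  # benign cases correctly cleared
print(round(auc, 3), sensitivity, round(specificity, 3))
```

In the tables, each metric is additionally averaged over repeated runs, which is where the ± standard deviations come from.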
Table 5. Classification performance using statistical features for PH2 and Derm7pt datasets. Results are reported for traditional ML models trained on tabular data and CNN-based models trained on image representations without noise augmentation. Best results for each dataset are shown in bold.
| Dataset | Data Type | Model | AUC | Sensitivity | Specificity |
|---|---|---|---|---|---|
| PH2 | Tabular | DT | 0.739 ± 0.085 | 0.833 ± 0.236 | 0.667 ± 0.167 |
| | | LASSO | 0.781 ± 0.069 | 0.833 ± 0.236 | 0.729 ± 0.164 |
| | | RF | 0.771 ± 0.079 | 0.833 ± 0.236 | 0.708 ± 0.191 |
| | | SVM | 0.740 ± 0.066 | 0.667 ± 0.236 | 0.812 ± 0.125 |
| | Image—no noise | CNN | 0.785 ± 0.036 | 0.667 ± 0.156 | 0.904 ± 0.115 |
| Derm7pt | Tabular | DT | 0.648 ± 0.023 | 0.627 ± 0.111 | 0.669 ± 0.105 |
| | | LASSO | 0.709 ± 0.017 | 0.752 ± 0.037 | 0.667 ± 0.017 |
| | | RF | 0.682 ± 0.014 | 0.680 ± 0.033 | 0.684 ± 0.057 |
| | | SVM | 0.696 ± 0.032 | 0.667 ± 0.064 | 0.726 ± 0.003 |
| | Image—no noise | CNN | 0.633 ± 0.023 | 0.503 ± 0.082 | 0.763 ± 0.057 |
Table 6. Classification performance using feature fusion for PH2 and Derm7pt datasets. Results are reported for traditional ML models trained on fused tabular features and CNN-based models trained on fused image representations without noise. Best results for each dataset are shown in bold.
| Dataset | Data Type | Model | AUC | Sensitivity | Specificity |
|---|---|---|---|---|---|
| PH2 | Fusion (tabular) | DT | 0.875 ± 0.062 | 0.875 ± 0.118 | 0.875 ± 0.118 |
| | | LASSO | 0.906 ± 0.034 | 0.958 ± 0.059 | 0.854 ± 0.053 |
| | | RF | 0.875 ± 0.036 | 0.833 ± 0.059 | 0.812 ± 0.026 |
| | | SVM | 0.807 ± 0.048 | 0.792 ± 0.059 | 0.823 ± 0.039 |
| | Fusion (image)—no noise | CNN | 0.737 ± 0.114 | 0.625 ± 0.270 | 0.849 ± 0.055 |
| Derm7pt | Fusion (tabular) | DT | 0.797 ± 0.037 | 0.804 ± 0.058 | 0.789 ± 0.019 |
| | | LASSO | 0.848 ± 0.008 | 0.928 ± 0.033 | 0.768 ± 0.031 |
| | | RF | 0.801 ± 0.018 | 0.817 ± 0.046 | 0.785 ± 0.014 |
| | | SVM | 0.814 ± 0.028 | 0.843 ± 0.042 | 0.785 ± 0.022 |
| | Fusion (image)—no noise | CNN | 0.775 ± 0.026 | 0.784 ± 0.061 | 0.765 ± 0.033 |