Article

Silhouette-Based Evaluation of PCA, Isomap, and t-SNE on Linear and Nonlinear Data Structures

Department of Mathematics & Statistics, East Tennessee State University (ETSU), Johnson City, TN 37614, USA
*
Author to whom correspondence should be addressed.
Stats 2025, 8(4), 105; https://doi.org/10.3390/stats8040105
Submission received: 14 September 2025 / Revised: 22 October 2025 / Accepted: 27 October 2025 / Published: 3 November 2025

Abstract

Dimensionality reduction is fundamental for analyzing high-dimensional data, supporting visualization, denoising, and structure discovery. We present a systematic, large-scale benchmark of three widely used methods—Principal Component Analysis (PCA), Isometric Mapping (Isomap), and t-Distributed Stochastic Neighbor Embedding (t-SNE)—evaluated by average silhouette scores to quantify cluster preservation after embedding. Our full factorial simulation varies sample size n ∈ {100, 200, 300, 400, 500}, noise variance σ² ∈ {0.25, 0.5, 0.75, 1, 1.5, 2}, and feature count p ∈ {20, 50, 100, 200, 300, 400} under four generative regimes: (1) a linear Gaussian mixture, (2) a linear Student-t mixture with heavy tails, (3) a nonlinear Swiss-roll manifold, and (4) a nonlinear concentric-spheres manifold, each replicated 1000 times per condition. Beyond empirical comparisons, we provide mathematical results that explain the observed rankings: under standard separation and sampling assumptions, PCA maximizes silhouettes for linear, low-rank structure, whereas Isomap dominates on smooth curved manifolds; t-SNE prioritizes local neighborhoods, yielding strong local separation but less reliable global geometry. Empirically, PCA consistently achieves the highest silhouettes for linear structure (Isomap second, t-SNE third); on manifolds the ordering reverses (Isomap > t-SNE > PCA). Increasing σ² and adding uninformative dimensions (larger p) degrade all methods, while larger n improves levels and stability. To our knowledge, this is the first integrated study combining a comprehensive factorial simulation across linear and nonlinear regimes with distribution-based summaries (density and violin plots) and supporting theory that predicts method orderings. The results offer clear, practice-oriented guidance: prefer PCA when structure is approximately linear; favor manifold learning—especially Isomap—when curvature is present; and use t-SNE for the exploratory visualization of local neighborhoods. Complete tables and replication materials are provided to facilitate method selection and reproducibility.

1. Introduction

Dimension reduction is a fundamental technique in data analysis, addressing the challenges of high-dimensional datasets, commonly referred to as the “curse of dimensionality” [1]. High-dimensional data often suffer from sparsity, increased computational complexity, and difficulties in visualization and interpretation, which can degrade the performance of machine learning models and obscure meaningful patterns. Dimension reduction techniques aim to transform high-dimensional data into a lower-dimensional space while preserving essential structural properties, facilitating visualization, noise reduction, and improved model performance. The choice of technique depends critically on the data’s underlying structure—whether linear or nonlinear. This review focuses on three prominent methods: Principal Component Analysis (PCA), Isometric Mapping (Isomap), and t-Distributed Stochastic Neighbor Embedding (t-SNE). These methods are evaluated for their effectiveness in handling linear and nonlinear data structures, drawing on theoretical foundations and empirical studies across domains such as genomics, image processing, and data mining. By synthesizing findings from the literature, this review provides a comprehensive context for evaluating these techniques in data analysis tasks.

1.1. Dimension Reduction: Introduction and Theoretical Foundations

In the realm of data analysis, dimensionality reduction techniques play a pivotal role in simplifying complex datasets while preserving their intrinsic properties [1,2,3]. They are instrumental in transforming high-dimensional data into a more manageable form, facilitating visualization, mitigating noise, and uncovering hidden patterns [4,5]. Dimensionality reduction also influences the performance of downstream machine learning algorithms, the interpretability of results, and the overall insights drawn from data, with applications spanning bioinformatics, image analysis, natural language processing, and beyond [5,6].
Among the myriad of available methods, Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Isometric Mapping (Isomap) stand out due to their widespread application and effectiveness [7,8,9,10]. In what follows, we outline the theoretical underpinnings of these three techniques, highlight their advantages and limitations, and clarify when each is expected to perform well. A concise comparison is provided in Table 1. We also preview how these principles inform the design of our simulation study and the interpretation of results.
PCA is a linear technique that projects data onto a lower-dimensional subspace spanned by the eigenvectors of the covariance matrix corresponding to the largest eigenvalues [4,7,11,12]. By maximizing variance, PCA retains the most informative linear combinations of features. It yields orthogonal loadings, preserves Euclidean distances and global variance structure, and is computationally efficient and highly interpretable, which has made it a cornerstone of statistical analysis and machine learning [7,13]. PCA excels when the data structure is approximately linear, but its reliance on linear projections limits its ability to capture curved or highly nonlinear manifolds [14,15].
Isomap extends multidimensional scaling (MDS) to nonlinear settings by approximating the intrinsic (geodesic) geometry of the data manifold [8,16]. It constructs a neighborhood graph using pairwise Euclidean distances, estimates geodesic distances via shortest-path algorithms such as Dijkstra or Floyd–Warshall, and then applies classical MDS to obtain a low-dimensional embedding [8,17]. When sampling is adequate and neighborhood size is chosen carefully, Isomap preserves global manifold geometry and performs well on curved-manifold data such as images or bioinformatics datasets [18,19]. Its performance, however, is sensitive to noise, graph connectivity, and computational cost for large datasets [20,21]. Still, Isomap remains one of the most influential manifold learning algorithms and serves as a prototype for geodesic-preserving embeddings.
t-SNE was developed primarily as a visualization tool for high-dimensional data [22]. It models pairwise similarities in the high-dimensional space using Gaussian kernels and in the low-dimensional embedding space using a Student’s t distribution with one degree of freedom. The embedding is obtained by minimizing the Kullback–Leibler divergence between these two distributions. This design allows t-SNE to preserve local neighborhoods and reveal clusters, making it particularly effective in applications such as single-cell RNA sequencing, text mining, and image recognition [23,24]. However, t-SNE tends to distort global structures, depends heavily on hyperparameters such as perplexity, and can be computationally expensive [25,26]. Despite these limitations, t-SNE remains a widely adopted method for the exploratory analysis of nonlinear data.
Understanding the strengths and weaknesses of PCA, Isomap, and t-SNE is crucial for selecting the right tool for a given task. PCA is generally preferred for approximately linear structures; Isomap is advantageous when global manifold geometry is important; and t-SNE is best suited for preserving local neighborhoods in complex nonlinear datasets [5,7,8]. These distinctions motivate our simulation design, where we evaluate each method under controlled conditions (linear vs. nonlinear structure, varying sample size, noise, and dimensionality) to quantify how well they preserve cluster structure.
In summary, this article seeks to provide a detailed examination of PCA, t-SNE, and Isomap, highlighting their theoretical underpinnings, advantages, and limitations. By exploring their performance on both linear and nonlinear datasets through simulations and theory, we aim to guide practitioners in selecting the most appropriate method for their analytical needs, thereby enhancing the effectiveness of data-driven decision-making.

1.2. Empirical Comparisons Across Data Structures

1.2.1. Linear Data Structures

For datasets with linear structures, PCA is consistently the most effective technique due to its alignment with linear relationships. A study on microarray gene expression data found that PCA outperformed nonlinear methods like Isomap and Locally Linear Embedding (LLE) in datasets with linear feature relationships, achieving higher classification accuracy (e.g., p-value of 0.0078 for SVM accuracy) [20]. Similarly, a comprehensive review of dimension reduction algorithms tested on the ECG200 dataset (96 features, linear structure) reported that PCA and SVD achieved superior performance in preserving global variance, with PCA completing in 0.09 s for a 22-sample dataset [27]. These results highlight PCA’s efficiency and effectiveness for linear data, making it a preferred choice in applications like financial data analysis and signal processing.

1.2.2. Nonlinear Data Structures

For nonlinear data, Isomap and t-SNE typically outperform PCA. A study on microarray data visualization showed that Isomap preserved intrinsic manifold structures better than PCA, achieving a Davies–Bouldin Index p-value of 0.1055 for cluster validation when the number of differentially expressed genes was low (<150) [20]. In contrast, PCA required more features to achieve comparable performance. Similarly, a study on transcriptomic data visualization demonstrated the superiority of t-SNE in preserving local neighborhoods, with high accuracy in supervised (SVM and kNN) and unsupervised (neighborhood preservation) metrics, though it underperformed in global structure metrics like random triplet accuracy [28]. A review of algorithms on the HAR dataset (561 features, nonlinear structure) confirmed that Isomap and t-SNE were more effective for nonlinear data, with Isomap preserving global manifold geometry and t-SNE excelling in local cluster visualization [27]. These findings underscore the importance of nonlinear methods for complex, manifold-structured data.

1.2.3. Empirical Trade-Offs and Challenges

Empirical studies reveal trade-offs in performance and computational efficiency. In the ECG200 dataset (linear), PCA and SVD were top performers, while in the HAR dataset (nonlinear), Isomap and t-SNE excelled [27]. Computational times vary significantly: PCA is highly efficient (0.09 s for small datasets), while Isomap (0.04–14 s) and t-SNE are more resource-intensive, especially for large datasets [20]. t-SNE’s performance is sensitive to preprocessing (e.g., PCA initialization) and hyperparameters like perplexity, which can affect reproducibility [28]. Isomap’s sensitivity to noise and neighborhood size can lead to unstable embeddings if the data are sparsely sampled [8]. These challenges highlight the need for careful parameter tuning and data preprocessing to optimize dimension reduction outcomes.

1.3. Applications in Diverse Domains

Dimensionality reduction is used across many data-rich disciplines, and the most suitable method depends on whether the dominant structure is approximately linear or instead curved and manifold-like. One major area of application is genomics, where reducing the dimensionality of gene-expression matrices is essential for interpretation and downstream modeling. In such settings, PCA is widely used to summarize major axes of variation and to support tasks such as classification [20]; in cancer studies, for example, PCA has helped identify differentially expressed genes and improve the interpretability of biomarker panels [29]. At the same time, when relationships are strongly nonlinear, analysts often prefer methods designed for manifolds: t-SNE is now standard for single-cell RNA sequencing because it reveals local neighborhoods and distinct cell types [28], and Isomap has been adopted when preserving global geodesic organization is crucial for visualization [20].
Another important application is in image processing, where feature spaces extracted by deep networks are high dimensional and often curved. In this context, t-SNE is frequently used to visualize representational geometry and to expose coherent object or scene clusters [30]. In parallel, Isomap has been explored for face recognition to better capture nonlinear modes of facial variation than linear projections such as PCA, thereby improving discrimination in some scenarios [31]. Taken together, these experiences reinforce the practical lesson that nonlinear embeddings can offer tangible benefits whenever the data manifold departs markedly from a linear subspace.
Beyond these areas, further applications arise in data mining, natural language processing, and radiomics. In radiomics in particular, PCA is routinely applied to compress imaging-derived feature vectors and reduce collinearity, which can stabilize statistical modeling for diagnosis and prognosis [29]. In addition, Isomap and t-SNE support exploratory analysis by revealing nonlinear structure in medical images and, more broadly, in social-network data where community organization is inherently manifold-like [27]. Taken together, these diverse use cases highlight that the linear–nonlinear choice materially affects which structures are preserved and how results are interpreted.
Preprocessing choices then play a decisive role across all of these domains. Normalization and feature scaling reshape neighborhood geometry and therefore influence both linear and nonlinear embeddings. For instance, t-SNE often benefits from PCA-based initialization, which can reduce computation and improve local structure preservation [28]. By contrast, Isomap’s reliance on geodesic distances makes it sensitive to noise and graph connectivity, so careful neighborhood selection and denoising are important [8]. In radiomics, upstream decisions such as voxel-size resampling and image filtering affect the stability of textural features and the behavior of subsequent dimensionality reduction [32]. These considerations, taken together, motivate standardized preprocessing protocols to enhance reproducibility.
Connecting these observations to the present study, our evaluation of PCA, Isomap, and t-SNE using the average silhouette score is aligned with the literature’s core finding: PCA tends to excel when the underlying structure is approximately linear, as documented on datasets such as ECG200 [27], whereas Isomap and t-SNE generally outperform for nonlinear manifolds as observed in microarray, human-activity, and single-cell settings [20,27,28]. Focusing on silhouette therefore provides a coherent, clustering-sensitive lens through which to quantify how well each method preserves meaningful separations across data regimes.
Overall, the choice among PCA, Isomap, and t-SNE should be guided by the data’s geometry and the analytic objective: PCA offers robustness and interpretability for linear structure, Isomap preserves global manifold relationships, and t-SNE excels at local neighborhood visualization. Recognizing trade-offs in computational cost, noise sensitivity, and parameter dependence is essential, and continuing advances—including UMAP—aim to improve scalability while balancing local and global fidelity [30].

2. Materials and Methods

2.1. Mathematical Definitions and Examples

Notation and Indices

We write $\mathbf{X} \in \mathbb{R}^{n \times p}$ for the data matrix whose $i$-th row is the observation $\mathbf{x}_i \in \mathbb{R}^p$ and whose $j$-th column collects the $j$-th feature across observations. Indices are used as follows: $i \in \{1, \ldots, n\}$ indexes observations, $j \in \{1, \ldots, p\}$ indexes features, and $k \in \{1, \ldots, p\}$ indexes principal components (or eigenpairs) when applicable. Bold lowercase letters denote vectors and bold uppercase letters denote matrices. The Euclidean norm is $\|\cdot\|_2$, and $\operatorname{tr}(\cdot)$ is the trace.
Definition 1
(Principal Component Analysis (PCA) [7,11,12,14,15]). PCA reduces dimensionality by projecting the centered data onto a low-dimensional linear subspace that maximizes sample variance. Let
$$\boldsymbol{\mu} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i, \qquad \mathbf{X}_c = \mathbf{X} - \mathbf{1}\boldsymbol{\mu}^\top,$$
and define the (scaled) sample covariance
$$\mathbf{C} = \frac{1}{n}\,\mathbf{X}_c^\top\mathbf{X}_c.$$
Let $\{\lambda_k, \mathbf{v}_k\}$ be the eigenpairs of $\mathbf{C}$, ordered $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$, with $\|\mathbf{v}_k\|_2 = 1$:
$$\mathbf{C}\mathbf{v}_k = \lambda_k\mathbf{v}_k.$$
The phrase “the most informative linear combination of characteristics” refers to the first principal component (PC1)—the direction in feature space that maximizes the projected variance of the data. Formally, PC1 is obtained as
$$\mathbf{v}_1 = \arg\max_{\|\mathbf{v}\|_2 = 1}\mathbf{v}^\top\mathbf{C}\mathbf{v}, \qquad \lambda_1 = \max_{\|\mathbf{v}\|_2 = 1}\mathbf{v}^\top\mathbf{C}\mathbf{v}.$$
The associated principal component scores $z_{i1} = \mathbf{v}_1^\top(\mathbf{x}_i - \boldsymbol{\mu})$ represent the most informative one-dimensional summary of the data, capturing the direction of greatest variability. Subsequent components are obtained by solving the same optimization problem under the orthogonality constraints $\mathbf{v}_k^\top\mathbf{v}_\ell = 0$ for $\ell < k$.
For a target dimension $d$, collect $\mathbf{V}_d = [\mathbf{v}_1, \ldots, \mathbf{v}_d]$ and define the $d$-dimensional representation
$$\mathbf{Z} = \mathbf{X}_c\mathbf{V}_d.$$
The mean squared reconstruction error when retaining $k$ components is
$$\mathrm{MSE}(k) = \frac{1}{n}\,\big\|\mathbf{X}_c - \mathbf{Z}_k\mathbf{V}_k^\top\big\|_F^2 = \sum_{j=k+1}^{p}\lambda_j,$$
where $\|\cdot\|_F$ denotes the Frobenius norm.
Example 1
(Two-dimensional toy data). Consider $\mathbf{x}_1 = [2, 3]^\top$, $\mathbf{x}_2 = [3, 5]^\top$, $\mathbf{x}_3 = [4, 7]^\top$. The sample mean is
$$\boldsymbol{\mu} = \frac{1}{3}\big([2,3]^\top + [3,5]^\top + [4,7]^\top\big) = [3, 5]^\top.$$
Centering yields
$$\mathbf{X}_c = \begin{bmatrix} -1 & -2 \\ 0 & 0 \\ 1 & 2 \end{bmatrix}.$$
The covariance from (2) is
$$\mathbf{C} = \frac{1}{3}\begin{bmatrix} 2 & 4 \\ 4 & 8 \end{bmatrix} \approx \begin{bmatrix} 0.667 & 1.333 \\ 1.333 & 2.667 \end{bmatrix}.$$
An eigendecomposition of $\mathbf{C}$ gives $\lambda_1 \approx 3.333$, $\lambda_2 = 0$, with eigenvectors
$$\mathbf{v}_1 = \frac{1}{\sqrt{5}}\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad \mathbf{v}_2 = \frac{1}{\sqrt{5}}\begin{bmatrix} -2 \\ 1 \end{bmatrix}.$$
Projecting onto the first principal direction (5) yields
$$\mathbf{Z} = \mathbf{X}_c\mathbf{v}_1 = \begin{bmatrix} -\sqrt{5} \\ 0 \\ \sqrt{5} \end{bmatrix} \approx \begin{bmatrix} -2.236 \\ 0 \\ 2.236 \end{bmatrix},$$
so the three points lie on a one-dimensional line in PC1.
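The calculations in Example 1 can be verified numerically. The following snippet is a minimal sketch (assuming only NumPy is available; it is not part of the original analysis pipeline) that centers the three points, forms the covariance in (2), and projects onto the leading eigenvector.

import numpy as np

# Toy data from Example 1: three points in R^2
X = np.array([[2.0, 3.0],
              [3.0, 5.0],
              [4.0, 7.0]])

mu = X.mean(axis=0)                 # sample mean [3, 5]
Xc = X - mu                         # centered data
C = (Xc.T @ Xc) / len(X)            # scaled covariance, Eq. (2)

# Eigendecomposition; columns of V are unit-norm eigenvectors
eigvals, V = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]   # sort eigenvalues in decreasing order
eigvals, V = eigvals[order], V[:, order]

Z = Xc @ V[:, :1]                   # scores on PC1
print(eigvals)                      # approx [3.333, 0.0]
print(Z.ravel())                    # approx [-2.236, 0.0, 2.236] (up to sign)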
Definition 2
(t-distributed Stochastic Neighbor Embedding (t-SNE) [25,33,34,35,36]). t-SNE constructs a low-dimensional map that preserves local neighborhoods. For each observation $i$, define conditional high-dimensional similarities
$$p_{j \mid i} = \frac{\exp\!\big(-\|\mathbf{x}_i - \mathbf{x}_j\|_2^2 / 2\sigma_i^2\big)}{\sum_{\ell \neq i}\exp\!\big(-\|\mathbf{x}_i - \mathbf{x}_\ell\|_2^2 / 2\sigma_i^2\big)}, \qquad p_{i \mid i} = 0,$$
where $\sigma_i$ is chosen so that the perplexity of $P_{\cdot \mid i}$ matches a user-specified value. The symmetrized joint probabilities are
$$p_{ij} = \frac{p_{j \mid i} + p_{i \mid j}}{2n}, \qquad p_{ii} = 0.$$
For low-dimensional points $\{\mathbf{y}_i\}_{i=1}^{n} \subset \mathbb{R}^d$, define
$$q_{ij} = \frac{\big(1 + \|\mathbf{y}_i - \mathbf{y}_j\|_2^2\big)^{-1}}{\sum_{a \neq b}\big(1 + \|\mathbf{y}_a - \mathbf{y}_b\|_2^2\big)^{-1}}, \qquad q_{ii} = 0.$$
The embedding minimizes the Kullback–Leibler divergence
$$\mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} p_{ij}\log\frac{p_{ij}}{q_{ij}}.$$
A commonly used gradient for optimization is
$$\frac{\partial\,\mathrm{KL}}{\partial\mathbf{y}_i} = 4\sum_{j \neq i}(p_{ij} - q_{ij})\,\frac{\mathbf{y}_i - \mathbf{y}_j}{1 + \|\mathbf{y}_i - \mathbf{y}_j\|_2^2}.$$
Note that t-SNE is a manifold learning algorithm that models local neighborhoods probabilistically; its perplexity hyperparameter controls the effective neighborhood size (analogous to a soft k in kNN graphs), ensuring comparable local scaling across settings [25,33,37].
Remark 1
(Perplexity as effective neighborhood size). For each $i$, the base-2 Shannon entropy of $P_{\cdot \mid i}$ is
$$H(P_{\cdot \mid i}) = -\sum_{j \neq i} p_{j \mid i}\log_2 p_{j \mid i},$$
and the perplexity is defined as
$$\mathrm{Perp}(P_{\cdot \mid i}) = 2^{\,H(P_{\cdot \mid i})}.$$
The bandwidth $\sigma_i$ is chosen so that $\mathrm{Perp}(P_{\cdot \mid i})$ equals a user-specified target $\pi$ [33]. Operationally, $\pi$ behaves like a soft neighborhood size: larger $\pi$ spreads probability mass over more neighbors, while smaller $\pi$ concentrates it locally. Fixing $\pi$ across simulations therefore controls the effective neighborhood size and yields comparable local scaling across conditions [25].
Example 2
(Three points in 2D). Let $\mathbf{x}_1 = [1, 2]^\top$, $\mathbf{x}_2 = [2, 3]^\top$, $\mathbf{x}_3 = [4, 5]^\top$. The squared Euclidean distances are
$$\|\mathbf{x}_1 - \mathbf{x}_2\|_2^2 = 2, \qquad \|\mathbf{x}_1 - \mathbf{x}_3\|_2^2 = 18, \qquad \|\mathbf{x}_2 - \mathbf{x}_3\|_2^2 = 8.$$
Choosing $\sigma_i = 1$ for illustration, the relative magnitudes in (12) imply $p_{12} > p_{23} > p_{13}$. With an arbitrary initialization $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3 \in \mathbb{R}^2$, one computes $q_{ij}$ via (14) and updates the map using (16) until (15) stabilizes.
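For readers who wish to experiment with the perplexity calibration described in Remark 1, the sketch below (assuming NumPy; the bisection bounds, tolerance, and the target perplexity of 1.5 are illustrative choices, not values used in the paper) computes the conditional similarities of Equation (12) for Example 2 and solves for the bandwidth that matches a target perplexity.

import numpy as np

def conditional_p(dist_sq_row, sigma):
    """p_{j|i} for one row of squared distances (the j = i entry is excluded upstream)."""
    p = np.exp(-dist_sq_row / (2.0 * sigma**2))
    return p / p.sum()

def sigma_for_perplexity(dist_sq_row, target_perp, tol=1e-4, max_iter=60):
    """Bisection on sigma_i so that 2^H(P_{.|i}) matches the target perplexity."""
    lo, hi = 1e-10, 1e10
    sigma = 1.0
    for _ in range(max_iter):
        sigma = 0.5 * (lo + hi)
        p = conditional_p(dist_sq_row, sigma)
        entropy = -np.sum(p * np.log2(p + 1e-12))   # base-2 Shannon entropy
        perp = 2.0 ** entropy
        if abs(perp - target_perp) < tol:
            break
        if perp > target_perp:
            hi = sigma       # too diffuse: shrink the bandwidth
        else:
            lo = sigma       # too concentrated: enlarge the bandwidth
    return sigma

# Example 2 data
X = np.array([[1.0, 2.0], [2.0, 3.0], [4.0, 5.0]])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared pairwise distances
i = 0
row = np.delete(D2[i], i)                             # exclude j = i
sigma_i = sigma_for_perplexity(row, target_perp=1.5)
print(sigma_i, conditional_p(row, sigma_i))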
Definition 3
(Isomap [8,16,17,18,21]). Isomap approximates geodesic geometry on a manifold and embeds the data by classical multidimensional scaling (MDS). Let $\operatorname{dist}(\mathbf{x}_i, \mathbf{x}_j) = \|\mathbf{x}_i - \mathbf{x}_j\|_2$. (i) Compute all pairwise Euclidean distances. (ii) Build a $k$-nearest-neighbor (kNN) graph: for each node $i$, connect edges to the $k$ points with smallest Euclidean distance; let $A_{ij} = 1$ if $j \in N_k(i)$ or $i \in N_k(j)$ and $A_{ij} = 0$ otherwise. (iii) Assign edge weights $w_{ij} = \operatorname{dist}(\mathbf{x}_i, \mathbf{x}_j)$ for $A_{ij} = 1$ and $w_{ij} = \infty$ otherwise; symmetrize the graph if needed. Denote by $d_{\mathrm{geo}}(i, j)$ the shortest-path (graph geodesic) distance between $i$ and $j$ computed on these weighted edges (e.g., Dijkstra). Classical MDS is applied to the geodesic distance matrix $\mathbf{D}_{\mathrm{geo}} = [d_{\mathrm{geo}}(i,j)]$ to obtain coordinates $\mathbf{Y} \in \mathbb{R}^{n \times d}$ that best preserve $d_{\mathrm{geo}}(i,j)$ in Euclidean distances.
Following [8], the embedding quality is quantified by
$$R^2(d) = 1 - \rho^2\big(\operatorname{vec}(\mathbf{D}_{\mathrm{geo}}), \operatorname{vec}(\mathbf{D}_{\mathbf{Y}})\big),$$
where $\rho(\cdot,\cdot)$ is the Pearson correlation between geodesic and embedded Euclidean distances. Smaller $R^2(d)$ values indicate better preservation of manifold geometry.
Example 3
(Four points). For $\mathbf{x}_1 = [0,0]^\top$, $\mathbf{x}_2 = [1,1]^\top$, $\mathbf{x}_3 = [2,2]^\top$, $\mathbf{x}_4 = [1,3]^\top$, the Euclidean distances satisfy $\|\mathbf{x}_1 - \mathbf{x}_2\|_2 = \|\mathbf{x}_2 - \mathbf{x}_3\|_2 = \|\mathbf{x}_3 - \mathbf{x}_4\|_2 = \sqrt{2}$, $\|\mathbf{x}_2 - \mathbf{x}_4\|_2 = 2$, and $\|\mathbf{x}_1 - \mathbf{x}_4\|_2 = \sqrt{10}$. With $k = 2$ and the symmetric “or” rule, the graph contains the edges $(1,2)$, $(1,3)$, $(2,3)$, $(2,4)$, and $(3,4)$. The resulting geodesic distances include $d_{\mathrm{geo}}(1,3) = 2\sqrt{2}$ (attained both by the direct edge and by the path $1\!\to\!2\!\to\!3$) and $d_{\mathrm{geo}}(1,4) = 2 + \sqrt{2}$ via the path $1\!\to\!2\!\to\!4$, which exceeds the ambient distance $\sqrt{10}$. Applying MDS to $\mathbf{D}_{\mathrm{geo}}$ yields a two-dimensional embedding that reflects these path lengths.
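Steps (i)–(iii) of Definition 3 can be reproduced on Example 3 with a few lines of code. The sketch below (assuming NumPy and SciPy; the classical-MDS step is implemented directly via double centering rather than calling a library routine) builds the symmetrized kNN graph, computes graph geodesics with Dijkstra's algorithm, and embeds the four points in two dimensions.

import numpy as np
from scipy.sparse.csgraph import shortest_path

# Four points from Example 3
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
n, k = len(X), 2

# (i) pairwise Euclidean distances
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# (ii)-(iii) kNN graph with Euclidean edge weights, symmetrized ("or" rule)
W = np.full((n, n), np.inf)
for i in range(n):
    nbrs = np.argsort(D[i])[1:k + 1]      # k nearest neighbors of i (excluding i itself)
    W[i, nbrs] = D[i, nbrs]
W = np.minimum(W, W.T)                    # keep an edge if either endpoint selected it
np.fill_diagonal(W, 0.0)

# graph geodesics via shortest paths (Dijkstra); inf entries are non-edges
D_geo = shortest_path(W, method='D', directed=False)

# classical MDS on the geodesic distance matrix
J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B = -0.5 * J @ (D_geo ** 2) @ J           # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:2]          # top-2 eigenpairs
Y = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
print(np.round(D_geo, 3))                 # d_geo(1,4) prints as 3.414 = 2 + sqrt(2)
print(np.round(Y, 3))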
Note that we use the term manifold learning theory (correcting the earlier phrase “multiplicity learning theory”) to denote methods that learn low-dimensional embeddings that preserve intrinsic geometric relationships (e.g., local neighborhoods or geodesic distances); in this sense, “error limitation” refers to minimizing a reconstruction or mismatch error between relationships in the original space and in the embedding (e.g., MDS stress for Isomap, or a divergence between neighborhood distributions for t-SNE), thereby limiting geometric distortion.
In applied data science, these dimensionality reduction techniques are also used for transcriptomic data visualization—the exploration of high-dimensional gene-expression profiles (e.g., RNA-seq data) in two or three dimensions. Such visualizations help reveal patterns such as patient subgroups, tissue similarities, or disease-associated expression signatures by projecting thousands of gene features into an interpretable space.
To quantify how well global relationships are preserved in an embedding, we mention the random triplet accuracy metric. This measure computes the proportion of correctly maintained relative distances among randomly sampled triplets of points between the high-dimensional and low-dimensional spaces; that is, for triplets $(i, j, k)$ where $\|\mathbf{x}_i - \mathbf{x}_j\|_2 < \|\mathbf{x}_i - \mathbf{x}_k\|_2$, the same ordering should hold in the embedding, $\|\mathbf{y}_i - \mathbf{y}_j\|_2 < \|\mathbf{y}_i - \mathbf{y}_k\|_2$. The overall fraction of consistent triplets reflects the preservation of global geometry, complementing local metrics such as the silhouette score.
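A minimal implementation of this metric is sketched below (assuming NumPy; the sample size of 1000 triplets and the helper name random_triplet_accuracy are illustrative, not taken from a specific library). Values near 1 indicate that the embedding preserves most randomly sampled distance orderings of the original data.

import numpy as np

def random_triplet_accuracy(X, Y, n_triplets=1000, rng=None):
    """Fraction of random triplets (i, j, k) whose ordering
    d(x_i, x_j) < d(x_i, x_k) is preserved between X (original) and Y (embedding)."""
    rng = np.random.default_rng(rng)
    n = len(X)
    correct = 0
    for _ in range(n_triplets):
        i, j, k = rng.choice(n, size=3, replace=False)
        d_ij, d_ik = np.linalg.norm(X[i] - X[j]), np.linalg.norm(X[i] - X[k])
        e_ij, e_ik = np.linalg.norm(Y[i] - Y[j]), np.linalg.norm(Y[i] - Y[k])
        # the triplet is consistent if the embedding keeps the same ordering
        correct += (d_ij < d_ik) == (e_ij < e_ik)
    return correct / n_triplets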

2.2. Simulation-Based Demonstrations of PCA, Isomap, and t-SNE

In this section we demonstrate each approach on controlled simulations to illustrate how it operates and how we quantify performance. Throughout, $n$ denotes the number of observations and $p$ the number of features. Indices are used as follows: $i \in \{1, \ldots, n\}$ indexes observations (rows of $\mathbf{X}$), $j \in \{1, \ldots, p\}$ indexes features (columns of $\mathbf{X}$), $k$ indexes the retained dimensions or principal components as appropriate, and $r \in \{1, \ldots, R\}$ indexes Monte Carlo replicates when reporting averages. For graph paths we use $(a, b)$ to index successive nodes on a path. All symbols are defined locally when first introduced.

2.2.1. Simulation Results of PCA

Figure 1 shows the scree plot with cumulative variance. The first three components explain 90.08% of the total variance (PC1: 38.99%, PC2: 28.73%, and PC3: 22.37%). Each subsequent component contributes less than 0.6% individually, indicating a clear elbow at k = 3 . Table 2 reports the corresponding percentages.
Score geometry and loading structure are summarized in Figure 2 and Figure 3. The PC1–PC2 score cloud is approximately elliptical and centered at the origin, consistent with a dominant two–three-dimensional signal plus noise. The loading map shows variables far from the origin exerting the strongest influence on these components, while variables near the origin contribute mainly to higher PCs.
Reconstruction accuracy, quantified by Equation (6), declines sharply through k = 3 and then flattens (Figure 4). The observed values are
$$\mathrm{MSE}(1) = 2.011, \qquad \mathrm{MSE}(2) = 1.064, \qquad \mathrm{MSE}(3) = 0.327,$$
with diminishing returns beyond three components. As expected, $\mathrm{MSE}(30) \approx 0$ since reconstruction with all PCs is exact up to numerical precision.
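Because $\mathrm{MSE}(k)$ equals the sum of the trailing eigenvalues (Equation (6)), the scree proportions and the reconstruction-error curve can be obtained from the spectrum alone. The sketch below uses an arbitrary synthetic matrix with a three-dimensional signal (assuming NumPy; it is a stand-in for, not a reproduction of, the data behind Figure 1 and Table 2).

import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in: a strong 3-dimensional signal plus noise in 30 features
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 30)) + 0.3 * rng.normal(size=(500, 30))

Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc / len(X))[::-1]    # descending eigenvalues

explained = eigvals / eigvals.sum()                        # scree proportions
cumulative = np.cumsum(explained)
mse = np.array([eigvals[k:].sum() for k in range(len(eigvals) + 1)])  # MSE(k) = trailing eigenvalue sum

print(np.round(explained[:5], 3), np.round(cumulative[2], 3))
print(np.round(mse[:4], 3))    # MSE(0..3); the curve flattens after the elbow at k = 3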

2.2.2. Simulation Results of Isomap

We applied Isomap to a synthetic Swiss-roll dataset ($n \approx 1200$ points in $\mathbb{R}^3$ with small Gaussian noise). With $k_{\mathrm{NN}} = 12$, the two-dimensional embedding unfolds the roll without tears or overlaps (Figure 5), aligning the long axis with the intrinsic roll parameter and the short axis with height. Residual variance, computed via Equation (20), was low and stable across $k_{\mathrm{NN}} \in \{6, 8, 10, 12, 15, 20\}$, with a slight minimum near $k_{\mathrm{NN}} \approx 8$ (Figure 6); we display $k_{\mathrm{NN}} = 12$ to provide additional connectivity margin while keeping distortion essentially unchanged.
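The residual-variance sweep described above can be reproduced in outline as follows (assuming scikit-learn and SciPy; the noise level and random seed are illustrative, so the numerical values will differ from Figure 6).

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from scipy.spatial.distance import pdist

X, _ = make_swiss_roll(n_samples=1200, noise=0.05, random_state=1)

for k in (6, 8, 10, 12, 15, 20):
    iso = Isomap(n_neighbors=k, n_components=2).fit(X)
    Y = iso.embedding_                                        # 2-D coordinates
    d_geo = iso.dist_matrix_[np.triu_indices(len(X), k=1)]    # vectorized graph geodesics
    d_emb = pdist(Y)                                          # embedded Euclidean distances
    rho = np.corrcoef(d_geo, d_emb)[0, 1]
    print(k, round(1.0 - rho**2, 4))                          # residual variance, Eq. (20)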

2.2.3. Simulation Results of t-SNE

We applied t-SNE to synthetic data comprising three well-separated classes in $p = 50$ dimensions. Across perplexities $\{5, 10, 30, 50\}$, the two-dimensional maps show three compact, well-separated clusters with minimal overlap (Figure 7); higher perplexities yield slightly tighter clusters and more stable layouts. Local neighborhood fidelity was quantified by the Jaccard overlap
$$\mathrm{Pres}(k_{\mathrm{NN}}) = \frac{1}{n}\sum_{i=1}^{n}\frac{\big|N^{\mathrm{HD}}_{k_{\mathrm{NN}}}(i)\cap N^{\mathrm{LD}}_{k_{\mathrm{NN}}}(i)\big|}{\big|N^{\mathrm{HD}}_{k_{\mathrm{NN}}}(i)\cup N^{\mathrm{LD}}_{k_{\mathrm{NN}}}(i)\big|},$$
where $N^{\mathrm{HD}}_{k_{\mathrm{NN}}}(i)$ and $N^{\mathrm{LD}}_{k_{\mathrm{NN}}}(i)$ are the index sets of the $k_{\mathrm{NN}}$ nearest neighbors of $i$ in the original and embedded spaces, respectively. With $k_{\mathrm{NN}} = 15$, $\mathrm{Pres}(15)$ was fairly stable across perplexity and mildly peaked near 30 (Figure 8).
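This Jaccard-based preservation score is straightforward to compute from two nearest-neighbor queries; the sketch below (assuming scikit-learn; the function names are ours, for illustration) applies to any pair of original and embedded coordinate matrices.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k):
    """Index sets of the k nearest neighbors of each point (self excluded)."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    return [set(row[1:]) for row in idx]        # drop the point itself

def jaccard_preservation(X_high, X_low, k=15):
    """Average Jaccard overlap between high- and low-dimensional kNN sets."""
    hd, ld = knn_sets(X_high, k), knn_sets(X_low, k)
    overlaps = [len(a & b) / len(a | b) for a, b in zip(hd, ld)]
    return float(np.mean(overlaps))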

2.3. Accuracy Comparison of Dimensionality Reduction Techniques

We compare Principal Component Analysis (PCA), Isomap, and t-Distributed Stochastic Neighbor Embedding (t-SNE) using the silhouette score as an accuracy criterion for cluster preservation. Throughout, indices are used as follows: $i, j \in \{1, \ldots, n\}$ index observations; $k, l \in \{1, \ldots, K\}$ index clusters $\{C_1, \ldots, C_K\}$ with sizes $|C_k| \ge 2$; $r$ denotes the intrinsic linear dimension, $d$ the target embedding dimension, and $p$ the ambient dimension. Unless stated otherwise, $\|\cdot\|_2$ is the Euclidean norm in the relevant space.
Definition 4
(Silhouette coefficient [38]). Given an embedding $\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_n]^\top \in \mathbb{R}^{n \times d}$ and a clustering $\{C_k\}_{k=1}^{K}$, define for any observation $i \in C_k$:
$$a(i) = \frac{1}{|C_k| - 1}\sum_{j \in C_k,\, j \neq i}\|\mathbf{z}_i - \mathbf{z}_j\|_2, \qquad b(i) = \min_{\ell \neq k}\frac{1}{|C_\ell|}\sum_{j \in C_\ell}\|\mathbf{z}_i - \mathbf{z}_j\|_2,$$
where $\|\cdot\|_2$ denotes the Euclidean norm in the reduced feature space. The pointwise silhouette value is
$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}} \in [-1, 1],$$
and the overall silhouette score is
$$S(\mathbf{Z}) = \frac{1}{n}\sum_{i=1}^{n}s(i).$$
The silhouette coefficient quantifies how well clusters are preserved after dimensionality reduction. Intuitively, a ( i ) measures cluster cohesion, the average distance between a point and others within the same cluster, while b ( i ) measures cluster separation, the average distance to the nearest neighboring cluster. Larger s ( i ) values indicate better-defined clusters with clear boundaries, whereas negative values suggest misclassification or cluster overlap. Thus, the silhouette serves as a geometric accuracy criterion for cluster preservation, directly reflecting the balance between intra-cluster compactness and inter-cluster separation in the embedding space.
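The silhouette score can be computed directly from Definition 4 or with standard library routines; the sketch below (assuming NumPy and scikit-learn) transcribes a(i), b(i), and s(i) literally and cross-checks the result against scikit-learn's silhouette_score on a small labeled example.

import numpy as np
from sklearn.metrics import silhouette_score

def average_silhouette(Z, labels):
    """S(Z) from Definition 4: mean of s(i) over all points."""
    Z, labels = np.asarray(Z), np.asarray(labels)
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)     # pairwise distances
    scores = []
    for i, k in enumerate(labels):
        same = (labels == k)
        a_i = D[i, same].sum() / (same.sum() - 1)                  # cohesion a(i)
        b_i = min(D[i, labels == l].mean() for l in set(labels) if l != k)  # separation b(i)
        scores.append((b_i - a_i) / max(a_i, b_i))
    return float(np.mean(scores))

# quick check against scikit-learn on three well-separated clusters
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
print(round(average_silhouette(Z, y), 4), round(silhouette_score(Z, y), 4))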
The proofs below make explicit how each method’s objective influences a ( i ) and b ( i ) (hence s ( i ) ), under assumptions that reflect linear and nonlinear data regimes.
Theorem 1
(Accuracy Comparison for Linear Structures). Let $\mathbf{X} \in \mathbb{R}^{n \times p}$ be a centered dataset with a linear structure, where each data point $\mathbf{x}_i \in \operatorname{span}(\{\mathbf{u}_j\}_{j=1}^{r})$, $r < p$, and $\{\mathbf{u}_j\}_{j=1}^{r} \subset \mathbb{R}^p$ are the orthonormal eigenvectors of $\mathbf{X}^\top\mathbf{X}$ corresponding to the $r$ largest eigenvalues $\lambda_1 \ge \cdots \ge \lambda_r > 0$. Define the average silhouette score for an embedding $\mathbf{Z} \in \mathbb{R}^{n \times d}$, $d \le r$, as
$$S(\mathbf{Z}) = \frac{1}{n}\sum_{i=1}^{n}s(i), \qquad s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}},$$
where:
  • $a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{z}_j \in C_k,\, j \neq i}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the average intra-cluster distance for point $i$ in cluster $C_k$,
  • $b(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{z}_j \in C_l}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the minimum average inter-cluster distance to another cluster $C_l$.
Then, for embeddings produced by PCA, Isomap, and t-SNE,
$$S(\mathbf{Z}_{\mathrm{PCA}}) > S(\mathbf{Z}_{\mathrm{Isomap}}) > S(\mathbf{Z}_{t\text{-SNE}}).$$
Proof. 
Let $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times p}$ be centered ($\sum_{i=1}^{n}\mathbf{x}_i = \mathbf{0}$), with $\mathbf{x}_i = \sum_{j=1}^{r}\alpha_{ij}\mathbf{u}_j$, where $\mathbf{U}_r = [\mathbf{u}_1, \ldots, \mathbf{u}_r] \in \mathbb{R}^{p \times r}$ satisfies $\mathbf{U}_r^\top\mathbf{U}_r = \mathbf{I}_r$, and $\mathbf{X}^\top\mathbf{X}\,\mathbf{u}_j = \lambda_j\mathbf{u}_j$. Assume $K$ clusters $\{C_1, \ldots, C_K\}$ are well-separated, satisfying
$$\min_{k \neq l}\|\boldsymbol{\mu}_k - \boldsymbol{\mu}_l\| \ge \beta\,\max_{k}\frac{1}{|C_k|}\sum_{\mathbf{x}_i \in C_k}\|\mathbf{x}_i - \boldsymbol{\mu}_k\|, \qquad \beta \gg 1,$$
where $\boldsymbol{\mu}_k = \frac{1}{|C_k|}\sum_{\mathbf{x}_i \in C_k}\mathbf{x}_i$. Set $d = r$, the intrinsic dimensionality.
For PCA, the embedding is $\mathbf{Z}_{\mathrm{PCA}} = \mathbf{X}\mathbf{U}_r$, where
$$\mathbf{U}_r = \operatorname*{argmax}_{\mathbf{U}^\top\mathbf{U} = \mathbf{I}_r}\operatorname{tr}\big(\mathbf{U}^\top\mathbf{X}^\top\mathbf{X}\,\mathbf{U}\big).$$
Since $\mathbf{x}_i \in \operatorname{span}(\{\mathbf{u}_j\})$, we have $\mathbf{x}_i = \mathbf{U}_r\mathbf{U}_r^\top\mathbf{x}_i$, so
$$\mathbf{z}_{\mathrm{PCA},i} = \mathbf{U}_r^\top\mathbf{x}_i \in \mathbb{R}^r.$$
Pairwise distances are preserved exactly:
$$\|\mathbf{z}_{\mathrm{PCA},i} - \mathbf{z}_{\mathrm{PCA},j}\| = \|\mathbf{U}_r^\top(\mathbf{x}_i - \mathbf{x}_j)\| = \sqrt{(\mathbf{x}_i - \mathbf{x}_j)^\top\mathbf{U}_r\mathbf{U}_r^\top(\mathbf{x}_i - \mathbf{x}_j)} = \|\mathbf{x}_i - \mathbf{x}_j\|,$$
since $\mathbf{U}_r\mathbf{U}_r^\top$ acts as the identity on the signal subspace. Thus,
$$a_{\mathrm{PCA}}(i) = a_{\mathrm{orig}}(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{x}_j \in C_k,\, j \neq i}\|\mathbf{x}_i - \mathbf{x}_j\|, \qquad b_{\mathrm{PCA}}(i) = b_{\mathrm{orig}}(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{x}_j \in C_l}\|\mathbf{x}_i - \mathbf{x}_j\|.$$
Since $b_{\mathrm{orig}}(i) \gg a_{\mathrm{orig}}(i)$, the silhouette score is
$$s_{\mathrm{PCA}}(i) = \frac{b_{\mathrm{orig}}(i) - a_{\mathrm{orig}}(i)}{b_{\mathrm{orig}}(i)} \approx 1 - \frac{a_{\mathrm{orig}}(i)}{b_{\mathrm{orig}}(i)},$$
yielding the maximal value $S(\mathbf{Z}_{\mathrm{PCA}}) = S(\mathbf{X})$.
For Isomap, a $k$-nearest neighbor graph $G$ yields geodesic distances
$$d_G(i,j) = \|\mathbf{x}_i - \mathbf{x}_j\| + \epsilon_{ij}, \qquad \epsilon_{ij} \ge 0, \qquad \mathbb{E}[\epsilon_{ij}] \le c\,k^{-1/2}n^{-1/2},$$
where $c > 0$ depends on the data density (cf. manifold learning theory). Isomap applies Multidimensional Scaling (MDS) to $\mathbf{D}_G = [d_G(i,j)]$, minimizing
$$\sigma(\mathbf{Z}) = \Big(\sum_{i,j=1}^{n}\big(d_G(i,j) - \|\mathbf{z}_i - \mathbf{z}_j\|\big)^2\Big)^{1/2}.$$
The embedding satisfies
$$\|\mathbf{z}_{\mathrm{Isomap},i} - \mathbf{z}_{\mathrm{Isomap},j}\| = \|\mathbf{x}_i - \mathbf{x}_j\| + \epsilon_{ij} + \delta_{ij},$$
where $\delta_{ij}$ is the MDS approximation error, with $\mathbb{E}[|\delta_{ij}|] \le c'n^{-1/2}$, $c' > 0$. Thus
$$a_{\mathrm{Isomap}}(i) = a_{\mathrm{orig}}(i) + \bar{\epsilon}_a(i) + \bar{\delta}_a(i), \qquad b_{\mathrm{Isomap}}(i) = b_{\mathrm{orig}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i),$$
where $\bar{\epsilon}_a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{x}_j \in C_k,\, j \neq i}\epsilon_{ij}$, and similarly for $\bar{\delta}_a(i)$, $\bar{\epsilon}_b(i)$, $\bar{\delta}_b(i)$. The silhouette score is
$$s_{\mathrm{Isomap}}(i) = \frac{b_{\mathrm{orig}}(i) - a_{\mathrm{orig}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i) - \bar{\epsilon}_a(i) - \bar{\delta}_a(i)}{b_{\mathrm{orig}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i)}.$$
Assuming $\bar{\epsilon}_a(i) \approx \bar{\epsilon}_b(i)$, $\bar{\delta}_a(i) \approx \bar{\delta}_b(i)$, and noting $b_{\mathrm{orig}}(i) \gg a_{\mathrm{orig}}(i)$, we approximate
$$s_{\mathrm{Isomap}}(i) \approx 1 - \frac{a_{\mathrm{orig}}(i) + \bar{\epsilon}_a(i) + \bar{\delta}_a(i)}{b_{\mathrm{orig}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i)} < s_{\mathrm{PCA}}(i),$$
since the approximation errors slightly inflate the ratio being subtracted, so $S(\mathbf{Z}_{\mathrm{Isomap}}) < S(\mathbf{Z}_{\mathrm{PCA}})$.
For t-SNE, the objective is to minimize
$$\mathrm{KL}(P\,\|\,Q) = \sum_{i \neq j}p_{ij}\log\frac{p_{ij}}{q_{ij}},$$
with
$$p_{ij} = \frac{\exp\!\big(-\|\mathbf{x}_i - \mathbf{x}_j\|^2/2\sigma_i^2\big)}{\sum_{k \neq l}\exp\!\big(-\|\mathbf{x}_k - \mathbf{x}_l\|^2/2\sigma_k^2\big)}, \qquad q_{ij} = \frac{\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}}{\sum_{k \neq l}\big(1 + \|\mathbf{z}_k - \mathbf{z}_l\|^2\big)^{-1}}.$$
The gradient is
$$\frac{\partial\,\mathrm{KL}}{\partial\mathbf{z}_i} = 4\sum_{j \neq i}(p_{ij} - q_{ij})\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}(\mathbf{z}_i - \mathbf{z}_j).$$
t-SNE prioritizes local similarities, where $p_{ij}$ is large for small $\|\mathbf{x}_i - \mathbf{x}_j\|$. For inter-cluster pairs, $p_{ij} \approx 0$, but the heavy-tailed t distribution increases $q_{ij}$, compressing distances:
$$\|\mathbf{z}_{t\text{-SNE},i} - \mathbf{z}_{t\text{-SNE},j}\| \approx \kappa\,\|\mathbf{x}_i - \mathbf{x}_j\|, \qquad \kappa \in (0, 1).$$
Thus
$$a_{t\text{-SNE}}(i) \approx a_{\mathrm{orig}}(i), \qquad b_{t\text{-SNE}}(i) \approx \kappa\,b_{\mathrm{orig}}(i),$$
yielding
$$s_{t\text{-SNE}}(i) \approx \frac{\kappa\,b_{\mathrm{orig}}(i) - a_{\mathrm{orig}}(i)}{\kappa\,b_{\mathrm{orig}}(i)} = 1 - \frac{a_{\mathrm{orig}}(i)}{\kappa\,b_{\mathrm{orig}}(i)} < s_{\mathrm{Isomap}}(i),$$
since $\kappa < 1$ and Isomap’s errors are smaller. Hence
$$S(\mathbf{Z}_{t\text{-SNE}}) < S(\mathbf{Z}_{\mathrm{Isomap}}) < S(\mathbf{Z}_{\mathrm{PCA}}). \qquad\square$$
Theorem 2
(Accuracy Comparison for Nonlinear Structures). Let $\mathbf{X} \in \mathbb{R}^{n \times p}$ be a dataset on a smooth nonlinear manifold $\mathcal{M} \subset \mathbb{R}^p$ of intrinsic dimension $d < p$, where $\mathbf{x}_i \notin \operatorname{span}(\{\mathbf{u}_j\}_{j=1}^{r})$, $r < p$. Define the average silhouette score for an embedding $\mathbf{Z} \in \mathbb{R}^{n \times d}$ as
$$S(\mathbf{Z}) = \frac{1}{n}\sum_{i=1}^{n}s(i), \qquad s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}},$$
where:
  • $a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{z}_j \in C_k,\, j \neq i}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the average intra-cluster distance for point $i$ in cluster $C_k$,
  • $b(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{z}_j \in C_l}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the minimum average inter-cluster distance to another cluster $C_l$.
Then, for embeddings produced by Isomap, t-SNE, and PCA:
$$S(\mathbf{Z}_{\mathrm{Isomap}}) > S(\mathbf{Z}_{t\text{-SNE}}) > S(\mathbf{Z}_{\mathrm{PCA}}).$$
Proof. 
Consider $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times p}$ lying on a smooth $d$-dimensional manifold $\mathcal{M} \subset \mathbb{R}^p$, with geodesic distances defined as
$$d_{\mathcal{M}}(i,j) = \inf_{\gamma:\, \mathbf{x}_i \to \mathbf{x}_j}\int_{\gamma}ds,$$
where $\gamma$ is a curve on $\mathcal{M}$. Assume $K$ clusters $\{C_1, \ldots, C_K\}$ on $\mathcal{M}$, with geodesic centroids $\boldsymbol{\mu}_k$ (approximated as the point minimizing the sum of squared geodesic distances within $C_k$), are well-separated:
$$\min_{k \neq l}d_{\mathcal{M}}(\boldsymbol{\mu}_k, \boldsymbol{\mu}_l) \ge \beta\,\max_{k}\frac{1}{|C_k|}\sum_{\mathbf{x}_i \in C_k}d_{\mathcal{M}}(\mathbf{x}_i, \boldsymbol{\mu}_k),$$
for $\beta \gg 1$, ensuring inter-cluster distances exceed intra-cluster spread. Set the embedding dimension to $d$, matching the intrinsic dimension of $\mathcal{M}$. For Isomap, a $k$-nearest neighbor graph $G$ is constructed, where edges connect each point to its $k$ nearest neighbors based on Euclidean distances $\|\mathbf{x}_i - \mathbf{x}_j\|$. Geodesic distances are approximated by shortest paths:
$$d_G(i,j) = d_{\mathcal{M}}(i,j) + \epsilon_{ij},$$
where the error $\epsilon_{ij} \ge 0$ arises from discrete sampling. For a smooth manifold with curvature bounded by $\kappa_{\mathcal{M}}$, and dense sampling ($n$ large, $k$ appropriately chosen), results from manifold learning theory (e.g., [17]) bound the error:
$$\mathbb{E}[\epsilon_{ij}] \le c\,\kappa_{\mathcal{M}}\,k^{-1/2}n^{-1/2},$$
where $c > 0$ is a constant depending on the manifold’s geometry. Isomap applies Multidimensional Scaling (MDS) to the distance matrix $\mathbf{D}_G = [d_G(i,j)]$, minimizing the stress
$$\sigma(\mathbf{Z}) = \Big(\sum_{i,j=1}^{n}\big(d_G(i,j) - \|\mathbf{z}_i - \mathbf{z}_j\|\big)^2\Big)^{1/2}.$$
The resulting embedding satisfies
$$\|\mathbf{z}_{\mathrm{Isomap},i} - \mathbf{z}_{\mathrm{Isomap},j}\| = d_G(i,j) + \delta_{ij} = d_{\mathcal{M}}(i,j) + \epsilon_{ij} + \delta_{ij},$$
where $\delta_{ij}$ is the MDS approximation error, with
$$\mathbb{E}[|\delta_{ij}|] \le c'n^{-1/2},$$
for some $c' > 0$, due to the finite sample size in MDS. Define the original intra- and inter-cluster distances on $\mathcal{M}$ as
$$a_{\mathcal{M}}(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{x}_j \in C_k,\, j \neq i}d_{\mathcal{M}}(i,j), \qquad b_{\mathcal{M}}(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{x}_j \in C_l}d_{\mathcal{M}}(i,j).$$
In the Isomap embedding,
$$a_{\mathrm{Isomap}}(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{z}_j \in C_k,\, j \neq i}\|\mathbf{z}_{\mathrm{Isomap},i} - \mathbf{z}_{\mathrm{Isomap},j}\| = a_{\mathcal{M}}(i) + \bar{\epsilon}_a(i) + \bar{\delta}_a(i),$$
$$b_{\mathrm{Isomap}}(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{z}_j \in C_l}\|\mathbf{z}_{\mathrm{Isomap},i} - \mathbf{z}_{\mathrm{Isomap},j}\| = b_{\mathcal{M}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i),$$
where
$$\bar{\epsilon}_a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{x}_j \in C_k,\, j \neq i}\epsilon_{ij}, \qquad \bar{\delta}_a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{x}_j \in C_k,\, j \neq i}\delta_{ij},$$
and similarly for $\bar{\epsilon}_b(i)$, $\bar{\delta}_b(i)$. Since $\mathbb{E}[\epsilon_{ij}]$ and $\mathbb{E}[|\delta_{ij}|]$ are small, and assuming errors are approximately balanced across clusters, we have $\bar{\epsilon}_a(i) \approx \bar{\epsilon}_b(i)$, $\bar{\delta}_a(i) \approx \bar{\delta}_b(i)$. The silhouette score is
$$s_{\mathrm{Isomap}}(i) = \frac{b_{\mathrm{Isomap}}(i) - a_{\mathrm{Isomap}}(i)}{\max\{a_{\mathrm{Isomap}}(i), b_{\mathrm{Isomap}}(i)\}} = \frac{b_{\mathcal{M}}(i) - a_{\mathcal{M}}(i) + \bar{\epsilon}_b(i) - \bar{\epsilon}_a(i) + \bar{\delta}_b(i) - \bar{\delta}_a(i)}{\max\{a_{\mathcal{M}}(i) + \bar{\epsilon}_a(i) + \bar{\delta}_a(i),\; b_{\mathcal{M}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i)\}}.$$
Given $b_{\mathcal{M}}(i) \gg a_{\mathcal{M}}(i)$, the denominator is approximately $b_{\mathcal{M}}(i) + \bar{\epsilon}_b(i) + \bar{\delta}_b(i)$, and errors partially cancel in the numerator, so
$$s_{\mathrm{Isomap}}(i) \approx \frac{b_{\mathcal{M}}(i) - a_{\mathcal{M}}(i)}{b_{\mathcal{M}}(i)} = 1 - \frac{a_{\mathcal{M}}(i)}{b_{\mathcal{M}}(i)} \approx 1,$$
since $a_{\mathcal{M}}(i)/b_{\mathcal{M}}(i) \ll 1$. Thus, $S(\mathbf{Z}_{\mathrm{Isomap}})$ is near-optimal.
For t-SNE, the objective is to minimize the Kullback–Leibler divergence
$$\mathrm{KL}(P\,\|\,Q) = \sum_{i \neq j}p_{ij}\log\frac{p_{ij}}{q_{ij}},$$
where
$$p_{ij} = \frac{\exp\!\big(-\|\mathbf{x}_i - \mathbf{x}_j\|^2/2\sigma_i^2\big)}{\sum_{k \neq l}\exp\!\big(-\|\mathbf{x}_k - \mathbf{x}_l\|^2/2\sigma_k^2\big)}, \qquad q_{ij} = \frac{\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}}{\sum_{k \neq l}\big(1 + \|\mathbf{z}_k - \mathbf{z}_l\|^2\big)^{-1}}.$$
Here, $\sigma_i$ is chosen to fix the perplexity, controlling the neighborhood size. For points close on $\mathcal{M}$, $\|\mathbf{x}_i - \mathbf{x}_j\| \approx d_{\mathcal{M}}(i,j)$, so $p_{ij}$ reflects local geodesic distances. For distant inter-cluster pairs, $\|\mathbf{x}_i - \mathbf{x}_j\| \le d_{\mathcal{M}}(i,j)$, often much smaller due to manifold folding in $\mathbb{R}^p$, leading to small $p_{ij}$. However, the t distribution’s heavy tails make $q_{ij}$ non-negligible, causing compression. The gradient
$$\frac{\partial\,\mathrm{KL}}{\partial\mathbf{z}_i} = 4\sum_{j \neq i}(p_{ij} - q_{ij})\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}(\mathbf{z}_i - \mathbf{z}_j)$$
pulls points together when $q_{ij} > p_{ij}$, reducing inter-cluster distances:
$$\|\mathbf{z}_{t\text{-SNE},i} - \mathbf{z}_{t\text{-SNE},j}\| \approx \kappa\,d_{\mathcal{M}}(i,j), \qquad \kappa \in (0, 1),$$
where $\kappa$ depends on the perplexity and manifold geometry. Thus
$$a_{t\text{-SNE}}(i) \approx a_{\mathcal{M}}(i), \qquad b_{t\text{-SNE}}(i) \approx \kappa\,b_{\mathcal{M}}(i).$$
The silhouette score is
$$s_{t\text{-SNE}}(i) = \frac{b_{t\text{-SNE}}(i) - a_{t\text{-SNE}}(i)}{\max\{a_{t\text{-SNE}}(i), b_{t\text{-SNE}}(i)\}} \approx \frac{\kappa\,b_{\mathcal{M}}(i) - a_{\mathcal{M}}(i)}{\kappa\,b_{\mathcal{M}}(i)} = 1 - \frac{a_{\mathcal{M}}(i)}{\kappa\,b_{\mathcal{M}}(i)}.$$
Since $\kappa < 1$, the subtracted term exceeds $a_{\mathcal{M}}(i)/b_{\mathcal{M}}(i)$, so
$$s_{t\text{-SNE}}(i) < s_{\mathrm{Isomap}}(i) \approx 1 - \frac{a_{\mathcal{M}}(i)}{b_{\mathcal{M}}(i)},$$
and hence $S(\mathbf{Z}_{t\text{-SNE}}) < S(\mathbf{Z}_{\mathrm{Isomap}})$.
For PCA, the embedding is $\mathbf{Z}_{\mathrm{PCA}} = \mathbf{X}\mathbf{U}_d$, where
$$\mathbf{U}_d = \operatorname*{argmax}_{\mathbf{U}^\top\mathbf{U} = \mathbf{I}_d}\operatorname{tr}\big(\mathbf{U}^\top\mathbf{X}^\top\mathbf{X}\,\mathbf{U}\big).$$
The linear projection distorts the nonlinear manifold:
$$\|\mathbf{z}_{\mathrm{PCA},i} - \mathbf{z}_{\mathrm{PCA},j}\| = \|\mathbf{U}_d^\top(\mathbf{x}_i - \mathbf{x}_j)\| \le \|\mathbf{x}_i - \mathbf{x}_j\| \approx \alpha\,d_{\mathcal{M}}(i,j),$$
where $\alpha \ll 1$ reflects the manifold’s folding (e.g., for a curved manifold like a Swiss roll, Euclidean distances underestimate geodesic paths). This causes cluster overlap:
$$a_{\mathrm{PCA}}(i) \approx \alpha\,a_{\mathcal{M}}(i), \qquad b_{\mathrm{PCA}}(i) \approx \alpha\,b_{\mathcal{M}}(i).$$
The silhouette score is
$$s_{\mathrm{PCA}}(i) \approx \frac{\alpha\,b_{\mathcal{M}}(i) - \alpha\,a_{\mathcal{M}}(i)}{\alpha\,b_{\mathcal{M}}(i)} = \frac{b_{\mathcal{M}}(i) - a_{\mathcal{M}}(i)}{b_{\mathcal{M}}(i)} \approx s_{\mathrm{Isomap}}(i).$$
However, since $\alpha \ll 1$, distances are scaled down uniformly, reducing the absolute separation between clusters. In practice, projection onto a $d$-dimensional subspace may collapse distinct manifold regions, increasing $a_{\mathrm{PCA}}(i)$ if clusters are mixed, so
$$a_{\mathrm{PCA}}(i) \gg \alpha\,a_{\mathcal{M}}(i), \qquad s_{\mathrm{PCA}}(i) \approx \frac{\alpha\,b_{\mathcal{M}}(i) - a_{\mathrm{PCA}}(i)}{\max\{a_{\mathrm{PCA}}(i),\, \alpha\,b_{\mathcal{M}}(i)\}} < s_{t\text{-SNE}}(i),$$
since t-SNE preserves local structure better. Thus
$$S(\mathbf{Z}_{\mathrm{Isomap}}) > S(\mathbf{Z}_{t\text{-SNE}}) > S(\mathbf{Z}_{\mathrm{PCA}}). \qquad\square$$
The orderings in Theorems 1–2 formalize widely observed behavior: PCA is optimal when the signal is linear and low-rank (isometry on the signal subspace), while Isomap best preserves geodesic separation on smooth manifolds, with t-SNE prioritizing local neighborhoods at the expense of global spacing. The constants $c$, $c'$, $\alpha$, $\kappa$ and the separation factor $\beta$ encapsulate sampling density, curvature, and algorithmic hyperparameters (e.g., $k$ for Isomap and perplexity for t-SNE). In practice, these quantities determine how much $a(i)$ and $b(i)$ shift relative to their “ideal” values, hence how $S(\mathbf{Z})$ orders across methods.

2.4. t-SNE: Why No Global Silhouette Optimality, and Where It Excels

2.4.1. Challenges in Establishing Global Silhouette Optimality for t-SNE

Theorems 1 and 2 establish performance orderings for dimensionality reduction methods under a global cluster-separation criterion, specifically the average silhouette score [38]. In contrast, t-SNE optimizes a nonconvex Kullback–Leibler (KL) divergence $\mathrm{KL}(P\,\|\,Q)$ between input-space and output-space neighborhood probability distributions, influenced by user-specified perplexities and random initializations [33]. Three key design choices preclude a general theorem asserting t-SNE’s optimality for global silhouette scores [34,35]:
  • Nonconvexity and initialization dependence: The $\mathrm{KL}(P\,\|\,Q)$ objective function has a complex landscape with multiple local minima, and no closed-form expression characterizes its global minimizer [34].
  • Probability-based objective: t-SNE aligns probabilities $p_{ij}$ and $q_{ij}$, rather than preserving metric or geodesic distances that directly influence silhouette scores [36].
  • Heavy-tailed output kernel: The Student-t distribution used in the embedding space inflates $q_{ij}$ for pairs at moderate distances, prioritizing local neighborhood fidelity over accurate global structure representation [25,33].
As a result, unlike PCA, which preserves distances as a linear isometry within its principal subspace (Theorem 1), or Isomap, which approximates geodesic distances (Theorem 2), t-SNE does not generally maximize global separation metrics such as the silhouette score [34,36]. This limitation arises from its design focus on local structure preservation, which often compresses inter-cluster distances [25].

2.4.2. Local Neighborhood Preservation: t-SNE’s Strength

While the silhouette score evaluates global cluster separation [38], t-SNE is engineered to preserve local neighborhood structures [33]. Define $N_k^{\mathrm{orig}}(i)$ as the set of $k$ nearest neighbors of point $i$ in the input space, based on Euclidean distances or, equivalently, the $k$ largest input similarities $p_{ij}$. Similarly, let $N_k^{\mathbf{Z}}(i)$ denote the $k$ nearest neighbors of $i$ in the embedding $\mathbf{Z}$. The $k$-nearest neighbor preservation score is defined as
$$\mathrm{Pres}_k(\mathbf{Z}) = \frac{1}{n}\sum_{i=1}^{n}\frac{\big|N_k^{\mathrm{orig}}(i)\cap N_k^{\mathbf{Z}}(i)\big|}{k} \in [0, 1].$$
This metric quantifies the fraction of input-space neighbors preserved in the embedding, reflecting local structure fidelity [33,37].
Proposition 1
(t-SNE Excels at Local Neighborhood Preservation [36,37]). Fix a perplexity that determines bandwidths $\{\sigma_i\}$ and corresponding input similarities $\{p_{ij}\}$. Assume there exists a positive integer $k$ and a margin $\eta > 1$ such that, for each point $i$, the $k$ nearest neighbors are separated in similarity from all remaining points:
$$\min_{j \in N_k^{\mathrm{orig}}(i)}p_{ij} \;\ge\; \eta\cdot\max_{j' \notin N_k^{\mathrm{orig}}(i)}p_{ij'}.$$
Let $\mathbf{Z}_{\mathrm{tSNE}}$ be an embedding with output distribution $Q$ satisfying $\mathrm{KL}(P\,\|\,Q) \le \mathrm{KL}(P\,\|\,Q^\star) + \varepsilon$, where $Q^\star$ is the global minimizer of the KL divergence. Then
$$\mathrm{Pres}_k(\mathbf{Z}_{\mathrm{tSNE}}) \ge 1 - C(\eta)\,\varepsilon,$$
where $C(\eta)$ is a constant that decreases as the margin $\eta$ increases. In contrast, embeddings that optimize global metric stress, such as classical multidimensional scaling (MDS), do not generally satisfy such a bound [36].
Proof. 
The t-SNE objective minimizes the KL divergence
$$\mathrm{KL}(P\,\|\,Q) = \sum_{i \neq j}p_{ij}\log\frac{p_{ij}}{q_{ij}},$$
where
$$p_{ij} = \frac{\exp\!\big(-\|\mathbf{x}_i - \mathbf{x}_j\|^2/2\sigma_i^2\big)}{\sum_{k \neq l}\exp\!\big(-\|\mathbf{x}_k - \mathbf{x}_l\|^2/2\sigma_k^2\big)}, \qquad q_{ij} = \frac{\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}}{\sum_{k \neq l}\big(1 + \|\mathbf{z}_k - \mathbf{z}_l\|^2\big)^{-1}},$$
with $\mathbf{x}_i$ and $\mathbf{z}_i$ denoting points in the input and embedding spaces, respectively. The bandwidths $\{\sigma_i\}$ are set to achieve a fixed perplexity, controlling the effective neighborhood size [33].
Under the margin condition (75), for each point $i$ the top-$k$ neighbors satisfy $\min_{j \in N_k^{\mathrm{orig}}(i)}p_{ij} \ge \eta\cdot\max_{j' \notin N_k^{\mathrm{orig}}(i)}p_{ij'}$, ensuring that the top-$k$ neighbors are significantly more similar than all other points. Given $\mathrm{KL}(P\,\|\,Q) \le \mathrm{KL}(P\,\|\,Q^\star) + \varepsilon$, the embedding $\mathbf{Z}_{\mathrm{tSNE}}$ produces a $Q$-distribution close to the optimal $Q^\star$. The KL divergence penalizes mismatches between $p_{ij}$ and $q_{ij}$, with the gradient
$$\frac{\partial\,\mathrm{KL}}{\partial\mathbf{z}_i} = 4\sum_{j \neq i}(p_{ij} - q_{ij})\big(1 + \|\mathbf{z}_i - \mathbf{z}_j\|^2\big)^{-1}(\mathbf{z}_i - \mathbf{z}_j).$$
This gradient adjusts $\mathbf{z}_i$ to align $q_{ij}$ with $p_{ij}$ for large $p_{ij}$, prioritizing the preservation of local neighbors.
To derive the bound in (76), consider the preservation score for point $i$. A neighbor $j \in N_k^{\mathrm{orig}}(i)$ is preserved in $N_k^{\mathbf{Z}}(i)$ if $j$ is among the $k$ points with the largest $q_{ij}$, corresponding to small $\|\mathbf{z}_i - \mathbf{z}_j\|$ due to the heavy-tailed Student-t kernel. If $j \in N_k^{\mathrm{orig}}(i)$ is not in $N_k^{\mathbf{Z}}(i)$, there exists some $j' \notin N_k^{\mathrm{orig}}(i)$ such that $q_{ij'} > q_{ij}$. By (75), $p_{ij} \ge \eta\,p_{ij'}$, so a large discrepancy between $q_{ij}$ and $p_{ij}$ increases the KL divergence.
For a misranked pair $(i, j)$ where $j \in N_k^{\mathrm{orig}}(i)$ but $j \notin N_k^{\mathbf{Z}}(i)$, and some $j' \notin N_k^{\mathrm{orig}}(i)$ has $q_{ij'} > q_{ij}$, the KL terms $p_{ij}\log(p_{ij}/q_{ij}) + p_{ij'}\log(p_{ij'}/q_{ij'})$ are affected. Since $p_{ij} \ge \eta\,p_{ij'}$ and $q_{ij'} > q_{ij}$, the penalty arises primarily from $p_{ij}\log(p_{ij}/q_{ij})$. Approximating the effect, if $q_{ij} \le q_{ij'}/\eta$ (to reflect the ranking error), the KL contribution of the pair is at least $p_{ij}\log(\eta)$, as $\log(p_{ij}/q_{ij}) \ge \log\!\big(\eta\,p_{ij'}/(q_{ij'}/\eta)\big) \ge \log(\eta)$ when $q_{ij'} \approx p_{ij'}$. Let $m_i = |N_k^{\mathrm{orig}}(i)\setminus N_k^{\mathbf{Z}}(i)|$ be the number of misranked neighbors for point $i$. The total KL penalty across all points is bounded by $\varepsilon$:
$$\sum_{i}\sum_{j \in N_k^{\mathrm{orig}}(i)\setminus N_k^{\mathbf{Z}}(i)}p_{ij}\log(\eta) \le \varepsilon.$$
Since $\min_{j \in N_k^{\mathrm{orig}}(i)}p_{ij} \ge p_{\min} > 0$ (where $p_{\min}$ depends on the dataset and perplexity), the number of misrankings satisfies
$$\sum_{i}m_i\cdot p_{\min}\log(\eta) \le \varepsilon \quad\Longrightarrow\quad \frac{1}{n}\sum_{i}\frac{m_i}{k} \le \frac{\varepsilon}{n\,k\,p_{\min}\log(\eta)}.$$
Thus, the preservation score is
$$\mathrm{Pres}_k(\mathbf{Z}_{\mathrm{tSNE}}) = 1 - \frac{1}{n}\sum_{i}\frac{m_i}{k} \ge 1 - \frac{\varepsilon}{n\,k\,p_{\min}\log(\eta)} = 1 - C(\eta)\,\varepsilon,$$
where $C(\eta) = \big(n\,k\,p_{\min}\log(\eta)\big)^{-1}$ decreases as $\eta$ increases. For methods like MDS, which minimize stress $\sum_{i,j}(d_{ij} - \|\mathbf{z}_i - \mathbf{z}_j\|)^2$, the optimization does not prioritize the ranking of $p_{ij}$, so no similar bound applies [36]. □

2.4.3. Implications for Our Results

Theorems 1 and 2 show that PCA maximizes global separation for linear structures by preserving distances in the principal subspace, whereas Isomap excels on nonlinear manifolds by approximating geodesic distances. In contrast, Proposition 1 formalizes the strength of t-SNE in local neighborhood fidelity under margin conditions since its KL objective aligns high-similarity pairs [36,37]. Thus, these findings are complementary: PCA and Isomap prioritize global structure—well captured by silhouette-based evaluations [38]—while t-SNE is designed to preserve local neighborhoods as reflected by the k-NN preservation score [33].

2.5. Simulation Study

To rigorously evaluate the performance of Principal Component Analysis (PCA), Isometric Mapping (Isomap), and t-Distributed Stochastic Neighbor Embedding (t-SNE) in preserving cluster structure, we conducted a comprehensive simulation study. The study assesses these dimensionality reduction techniques using the average silhouette score defined as
$$S(\mathbf{Z}) = \frac{1}{n}\sum_{i=1}^{n}s(i), \qquad s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}},$$
where $a(i) = \frac{1}{|C_k| - 1}\sum_{\mathbf{z}_j \in C_k,\, j \neq i}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the average intra-cluster distance for point $i$ in cluster $C_k$, and $b(i) = \min_{C_l \neq C_k}\frac{1}{|C_l|}\sum_{\mathbf{z}_j \in C_l}\|\mathbf{z}_i - \mathbf{z}_j\|$ is the minimum average inter-cluster distance to another cluster $C_l$. This metric quantifies the quality of cluster preservation in the reduced-dimensional space, which is critical for applications in radiomics, where accurate feature extraction from high-dimensional imaging data enhances diagnostic and prognostic models.

2.5.1. Data Generation

Synthetic datasets were constructed to represent four distinct structural regimes commonly encountered in dimension-reduction research—two linear and two nonlinear—each parameterized by the sample size $n$, feature dimension $p$, and noise scale $\sigma$. All data were embedded in $\mathbb{R}^p$ and contaminated with independent Gaussian noise $\boldsymbol{\epsilon}_i \sim N(\mathbf{0}, \sigma^2\mathbf{I}_p)$.
  • Linear (Gaussian mixture): Data points $\mathbf{x}_i \in \mathbb{R}^p$ were sampled from a low-rank subspace of intrinsic dimension $r = 2$. Each observation was generated as
    $$\mathbf{x}_i = \sum_{j=1}^{r}\alpha_{ij}\mathbf{u}_j + \boldsymbol{\epsilon}_i,$$
    where $\{\mathbf{u}_j\}$ are orthonormal basis vectors and $\alpha_{ij} \sim N(\mu_{kj}, \sigma_j^2)$ with cluster-specific means $\mu_{kj}$ defining $K = 3$ Gaussian components. This configuration yields well-separated, approximately spherical clusters with linear structure.
  • Linear (Student-t mixture, heavy tails): To assess robustness under non-Gaussian noise, a second linear setting replaced the Gaussian components by mixtures of Student-$t_\nu$ distributions with moderate to heavy tails ($\nu \in [3, 10]$). This produces elongated and noisy clusters lying near a common subspace, preserving linear geometry but degrading local smoothness.
  • Nonlinear (Swiss roll manifold): A curved manifold of intrinsic dimension $d = 2$ was generated as
    $$\mathbf{x}_i = [\,t_i\cos(t_i),\; h_i,\; t_i\sin(t_i),\; 0, \ldots, 0\,]^\top + \boldsymbol{\epsilon}_i,$$
    with $t_i \sim \mathrm{Unif}(0, 2\pi)$ and $h_i \sim \mathrm{Unif}(0, 10)$. Clusters were assigned based on geodesic neighborhoods along the spiral surface, producing a smoothly varying nonlinear geometry.
  • Nonlinear (Concentric spheres manifold): The final scenario introduced nonconvex curvature with discontinuities by embedding samples on multiple concentric spherical shells of radii $r_k$, $k = 1, 2, 3$, in $\mathbb{R}^p$:
    $$\mathbf{x}_i = r_k\,\frac{\mathbf{z}_i}{\|\mathbf{z}_i\|} + \boldsymbol{\epsilon}_i, \qquad \mathbf{z}_i \sim N(\mathbf{0}, \mathbf{I}_p).$$
    This topology captures disconnected yet symmetric manifolds, challenging methods that rely on smooth global mappings.
To keep the specification concise, the four synthetic families used in the simulations are summarized in Table 3.
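To make the four families concrete, a condensed generation sketch is given below (assuming NumPy; the cluster means, degrees of freedom, shell radii, and the quantile-based labeling of the Swiss roll are illustrative stand-ins for the fuller specification summarized in Table 3).

import numpy as np

def linear_mixture(n, p, sigma, rng, heavy_tails=False, K=3, r=2):
    """Linear families: K clusters on an r-dimensional subspace plus isotropic noise."""
    U = np.linalg.qr(rng.normal(size=(p, r)))[0]           # orthonormal basis u_1..u_r
    means = rng.normal(scale=4.0, size=(K, r))              # cluster-specific means (illustrative)
    labels = rng.integers(K, size=n)
    if heavy_tails:
        alpha = means[labels] + rng.standard_t(df=5, size=(n, r))   # Student-t scores
    else:
        alpha = means[labels] + rng.normal(size=(n, r))              # Gaussian scores
    X = alpha @ U.T + sigma * rng.normal(size=(n, p))
    return X, labels

def swiss_roll(n, p, sigma, rng):
    """Swiss-roll manifold embedded in R^p; cluster labels follow the roll parameter t."""
    t = rng.uniform(0.0, 2.0 * np.pi, size=n)
    h = rng.uniform(0.0, 10.0, size=n)
    X = np.zeros((n, p))
    X[:, 0], X[:, 1], X[:, 2] = t * np.cos(t), h, t * np.sin(t)
    X += sigma * rng.normal(size=(n, p))
    labels = np.digitize(t, np.quantile(t, [1/3, 2/3]))     # proxy for geodesic segments
    return X, labels

def concentric_spheres(n, p, sigma, rng, radii=(1.0, 3.0, 5.0)):
    """Three concentric spherical shells in R^p."""
    labels = rng.integers(len(radii), size=n)
    Z = rng.normal(size=(n, p))
    X = np.array(radii)[labels][:, None] * Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return X + sigma * rng.normal(size=(n, p)), labels

rng = np.random.default_rng(42)
X_lin, y_lin = linear_mixture(n=300, p=50, sigma=0.5, rng=rng)
X_roll, y_roll = swiss_roll(n=300, p=50, sigma=0.5, rng=rng)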

2.5.2. Experimental Design

To evaluate robustness across a range of structural and noise conditions, the simulations crossed three experimental factors: sample size n ∈ {100, 200, 300, 400, 500}, noise scale σ ∈ {0.25, 0.5, 0.75, 1.0, 1.5, 2.0}, and feature count p ∈ {20, 50, 100, 200, 300, 400}. For each (n, σ, p) configuration, synthetic datasets with three clusters (K = 3) were generated under four distinct families: (1) linear Gaussian mixture, (2) linear Student-t mixture with heavy tails, (3) nonlinear Swiss-roll manifold, and (4) nonlinear concentric-spheres manifold. Each method produced a two-dimensional embedding (d = 2), matching the intrinsic dimension of the underlying data (Table 4).
Performance was quantified by the average silhouette coefficient computed from Euclidean distances in the embedded space, using the true cluster assignments as labels. This metric captures the separation and compactness of the recovered clusters in low dimensions. To stabilize estimates and account for stochastic variation in both data generation and t-SNE optimization, R = 1000 independent replicates were generated per (n, σ, p) condition. For fairness, all embedding methods (PCA, Isomap, and t-SNE) were applied to the same dataset realization within each replicate before scoring.
In all analyses, PCA operated on centered data and projected onto the leading $d$ eigenvectors obtained via singular value decomposition, yielding $\mathbf{Z}_{\mathrm{PCA}} = \mathbf{X}_{\mathrm{centered}}\mathbf{V}_d$. Isomap constructed a $k$-nearest-neighbor graph (here $k = 10$) with Euclidean edge weights, estimated geodesic distances via Dijkstra’s algorithm, and applied classical multidimensional scaling to obtain $\mathbf{Z}_{\mathrm{Isomap}} \in \mathbb{R}^{n \times 2}$. t-SNE minimized the Kullback–Leibler divergence between pairwise similarities in high- and low-dimensional spaces, using a perplexity of 30, producing embeddings $\mathbf{Z}_{t\text{-SNE}} \in \mathbb{R}^{n \times 2}$. For each method and replicate, we computed the average silhouette; reported summaries are means over the 1000 replicates, which mitigates stochastic variability while enabling paired comparisons across methods within each simulated condition.
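A single replicate of this evaluation loop can be outlined as follows (assuming scikit-learn; make_blobs stands in for the generators of Section 2.5.1, and only five replicates are run here rather than R = 1000).

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE
from sklearn.metrics import silhouette_score

def evaluate_one_replicate(X, labels, seed=0):
    """Embed the same dataset with the three methods and score each embedding."""
    embeddings = {
        "PCA": PCA(n_components=2).fit_transform(X),
        "Isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
        "t-SNE": TSNE(n_components=2, perplexity=30, random_state=seed,
                      init="pca").fit_transform(X),
    }
    return {m: silhouette_score(Z, labels) for m, Z in embeddings.items()}

# one illustrative (n, sigma, p) condition with K = 3 linear clusters
X, y = make_blobs(n_samples=300, n_features=100, centers=3,
                  cluster_std=1.0, random_state=0)
scores = [evaluate_one_replicate(X, y, seed=r) for r in range(5)]
mean_scores = {m: np.mean([s[m] for s in scores]) for m in scores[0]}
print(mean_scores)

Looping such a function over the full factor grid and the four generative families, and averaging over replicates, yields summaries of the kind reported in Table A1.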

3. Results

3.1. Simulated Silhouette Score Results for Linear vs. Nonlinear Structures

We conducted extensive simulations to evaluate clustering performance (via silhouette scores) after dimensionality reduction using PCA, Isomap, and t-SNE. The simulations spanned a full factorial combination of sample sizes (n = 100, 200, 300, 400, 500), noise levels (σ² = 0.25, 0.5, 0.75, 1, 1.5, 2), and feature counts (p = 20, 50, 100, 200, 300, 400), across four structural families: two linear (Gaussian and Student-t mixtures) and two nonlinear (Swiss roll and concentric spheres) structures. Each condition was replicated 1000 times, and we report the mean silhouette score ± standard deviation over those replicates. We summarize the results for the linear and nonlinear scenarios in Table A1. Note that each cell in Table A1 reports the mean ± standard deviation of silhouette scores computed across R = 1000 independent simulation replicates for a given combination of sample size (n), noise level (σ²), and feature count (p). Variability in the reported values reflects the stability of each dimensionality-reduction method (PCA, Isomap, and t-SNE) under different noise and dimensionality conditions—lower standard deviation indicates greater consistency of cluster preservation.

3.1.1. Linear Data Structure Results

For the linear synthetic datasets (Gaussian and Student-t mixtures), the results confirm that PCA consistently yields the highest silhouette scores across nearly all experimental conditions, followed by Isomap and then t-SNE. This outcome is fully aligned with the theoretical expectations: PCA, being a global linear projection, best preserves the true subspace geometry when clusters lie in an approximately linear manifold. In contrast, the additional nonlinear transformations introduced by Isomap and t-SNE offer no advantage and may introduce small distortions to otherwise linear separations.
Under the most favorable conditions (n = 500, σ² = 0.25, p = 20), PCA achieved near-perfect clustering with an average silhouette close to 1.00, while Isomap and t-SNE reached approximately 0.95 and 0.90, respectively (Table A1). This demonstrates that PCA effectively recovers the linear subspace that defines cluster boundaries. Even in the heavy-tailed Student-t mixture, PCA retained a clear advantage, showing only minor degradation relative to the Gaussian case (average silhouettes ∼0.95–1.00 at low noise). The robustness of PCA in these conditions reflects its resilience to moderate outliers and noise perturbations in linear embeddings.
As conditions became more difficult (higher noise, higher dimensionality, or smaller n), silhouette scores declined for all methods, yet PCA maintained a consistent lead. At a moderate difficulty level (n = 300, σ² = 1, p = 200), PCA averaged around 0.52, followed by Isomap (0.49) and t-SNE (0.46). Even when the variance of the noise doubled (σ² = 1.5), PCA remained slightly ahead of Isomap, which in turn outperformed t-SNE. Only under the most extreme condition (n = 100, σ² = 2, p = 400) did all methods converge toward near-zero silhouettes (approximately 0.05), signifying that the cluster structure was fully obscured by noise.

3.1.2. Effect of Sample Size, Noise, and Dimensionality on Linear Data

Across the linear-data simulations, we observed clear and theoretically consistent patterns in how these factors influence clustering performance:
  • Noise variance: Increasing noise drastically reduces silhouette scores for all methods. At low noise (σ² = 0.25), clusters remain well-separated (silhouettes near 0.9–1.0 for PCA), but at high noise (σ² = 2), silhouettes drop below 0.1. Noise inflates within-cluster variance and increases overlap between clusters, eroding separability. For instance, at fixed n = 300 and p = 100, raising σ² from 0.25 to 2 lowers the PCA mean silhouette from ∼0.75 to ∼0.20, that of Isomap from ∼0.70 to ∼0.15, and that of t-SNE from ∼0.68 to ∼0.18.
  • Sample size: Increasing n tends to improve silhouettes and stabilize variability. With more observations, the cluster geometry is better estimated and less affected by stochasticity. For example, at σ² = 0.5 and p = 50, enlarging n from 100 to 500 raises the PCA mean silhouette from ∼0.68 to ∼0.81, that of Isomap from ∼0.65 to ∼0.77, and that of t-SNE from ∼0.61 to ∼0.73. Standard deviations decrease in tandem (PCA’s typically shrinking from ∼0.04 to ∼0.02).
  • Feature count (dimensionality): Increasing the number of ambient dimensions reduces clustering performance, especially when many features are irrelevant. This pattern reflects the curse of dimensionality: added noise dimensions dilute meaningful signal directions. For instance, at n = 300 and σ² = 0.5, the PCA silhouette declines from ∼0.76 at p = 20 to ∼0.55 at p = 400, while those of Isomap and t-SNE drop from ∼0.72 and ∼0.69 to ∼0.45 and ∼0.43, respectively. PCA shows the greatest robustness because it identifies global variance directions that partially filter noise, whereas Isomap and t-SNE are more sensitive to spurious local structure.
In summary, the linear results are consistent with geometric intuition: PCA performs best whenever the true structure is linear, Isomap performs second best by partially preserving global geometry, and t-SNE performs worst, as its neighborhood-based embedding is less aligned with global variance structure. These results confirm that matching the dimensionality reduction method to the intrinsic data geometry is essential for preserving cluster separability. Complete simulated silhouette scores for the linear setting are reported in Appendix A, Table A1.

3.1.3. Nonlinear Data Results

For the nonlinear datasets, the simulations demonstrate a complete reversal of the linear pattern: Isomap consistently achieves the highest silhouette scores, followed by t-SNE, while PCA performs poorest. This behavior aligns precisely with theoretical expectations—Isomap is specifically designed to preserve geodesic (manifold) distances, enabling it to “unroll” nonlinear surfaces such as the Swiss roll and to separate nonconvex structures such as concentric spheres. PCA, by contrast, applies a global linear projection that cannot capture curved geometries, and t-SNE, while nonlinear, primarily preserves local neighborhoods rather than global topology.
Under favorable conditions (e.g., n = 500 , low noise σ 2 = 0.25 , p = 20 ), Isomap achieves near-perfect recovery, with mean silhouettes reaching approximately 1.00 ± 0.02, indicating excellent cluster separation on the unrolled manifold. t-SNE performs next best, achieving about 0.95 ± 0.03, while PCA lags far behind at roughly 0.85 ± 0.02. These results confirm that linear projections cannot preserve the manifold’s intrinsic distances—PCA effectively “flattens” the Swiss roll, causing points that are distant on the manifold to appear close in projection. In contrast, Isomap faithfully preserves global geometry, and t-SNE provides partial recovery by maintaining local neighborhoods.
At more moderate conditions ( n = 300 , σ 2 = 1.0 , p = 200 ), the same rank order persists: Isomap ∼ 0.53, t-SNE ∼ 0.47, and PCA ∼ 0.35. This shows that even when moderate noise and dimensionality are introduced, Isomap continues to capture the manifold’s global shape better than the other two methods. t-SNE remains advantageous over PCA by approximately 0.10–0.15 silhouette units, reflecting its superior preservation of local structure, though it still falls short of Isomap’s global accuracy.
The concentric-spheres scenario yields similar findings, reinforcing Isomap’s robustness on disconnected or highly curved manifolds. Isomap’s mean silhouettes span roughly 0.45–1.00 across conditions, outperforming t-SNE (0.41–0.94) and PCA (0.30–0.89). This structure is particularly challenging because of its discontinuous curvature and nonconvex separation, yet Isomap’s geodesic-based approach manages to capture the nested-layer geometry more effectively than t-SNE, which slightly distorts the relative spacing between layers.
As with the linear data, increasing noise and dimensionality degrade clustering performance, while increasing sample size generally improves stability and silhouette magnitude. At high noise ( σ 2 = 2 ) and large dimensionality ( p = 400 ), all methods converge toward poor performance (Isomap ∼ 0.10, t-SNE ∼ 0.07, PCA ∼ 0.05), indicating that the cluster signal is fully submerged in noise. However, even at moderate noise ( σ 2 = 1.5 ), Isomap maintains a consistent edge—for instance, at n = 100 , p = 200 , its mean silhouette is about 0.37, compared with 0.31 for t-SNE and 0.22 for PCA. When n increases to 500 under the same conditions, all methods improve (Isomap ∼ 0.50, t-SNE ∼ 0.42, PCA ∼ 0.33) but the ranking remains unchanged.

3.1.4. Effect of Sample Size, Noise, and Dimensionality on Nonlinear Data

Across both nonlinear manifolds, the following trends were consistent:
  • Noise variance: Increasing noise substantially lowers silhouettes for all methods, but Isomap’s degradation is less severe. At low noise ( σ 2 = 0.25 ), Isomap’s mean silhouette remains ∼0.9–1.0, while at σ 2 = 2 , it drops below 0.15. t-SNE and PCA experience sharper declines, reflecting their sensitivity to noise in pairwise distances.
  • Sample size: Increasing n improves performance and reduces variability. Larger samples produce more accurate geodesic approximations for Isomap and better neighborhood estimates for t-SNE, narrowing but not eliminating the performance gap.
  • Feature count (dimensionality): Higher p values introduce irrelevant features that degrade all methods, particularly those relying on pairwise distances. At n = 300 , σ 2 = 0.5 , increasing p from 20 to 400 reduces Isomap’s silhouette from ∼0.80 to ∼0.50, that of t-SNE from ∼0.75 to ∼0.47, and that of PCA from ∼0.60 to ∼0.40. The impact is largest for nonlinear embeddings because distance metrics become less reliable in high dimensions.
In summary, the nonlinear data results strongly support the use of manifold-learning approaches for curved or discontinuous geometries. Isomap was the clear top performer, successfully preserving global manifold structure in both the Swiss roll and concentric-spheres settings. t-SNE provided meaningful local clustering improvements over PCA, but did not maintain global continuity. PCA, being strictly linear, was unable to unfold or separate the curved manifolds. These results confirm that nonlinear dimensionality reduction methods such as Isomap or t-SNE outperform linear projections when the data’s intrinsic geometry is nonlinear. Complete simulated silhouette scores for the nonlinear settings are likewise reported in Appendix A, Table A1.
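As an illustration of the two nonlinear scenarios, the sketch below generates a Swiss roll and a pair of concentric spheres, pads them with uninformative noise dimensions, and compares Isomap and PCA embeddings by average silhouette. The cluster labeling of the Swiss roll (splitting the unrolled coordinate at its median), the shell radii, and all parameter values are illustrative assumptions, not our exact generative settings.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)

def pad_with_noise(X, p, sigma2, rng):
    """Embed low-dimensional coordinates into p ambient dimensions with noise features."""
    extra = rng.normal(0.0, np.sqrt(sigma2), (X.shape[0], p - X.shape[1]))
    return np.hstack([X, extra])

# Swiss roll: use the unrolled coordinate t to define two cluster labels.
X_roll, t = make_swiss_roll(n_samples=300, noise=0.25, random_state=1)
y_roll = (t > np.median(t)).astype(int)
X_roll = pad_with_noise(X_roll, p=50, sigma2=0.5, rng=rng)

def make_concentric_spheres(n_per_shell, radii, sigma2, rng):
    """Nested spherical shells in 3-D, perturbed by Gaussian noise."""
    pts, labels = [], []
    for k, r in enumerate(radii):
        v = rng.normal(size=(n_per_shell, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions on the sphere
        pts.append(r * v + rng.normal(0.0, np.sqrt(sigma2), (n_per_shell, 3)))
        labels.append(np.full(n_per_shell, k))
    return np.vstack(pts), np.concatenate(labels)

X_sph, y_sph = make_concentric_spheres(150, radii=(1.0, 4.0), sigma2=0.25, rng=rng)
X_sph = pad_with_noise(X_sph, p=50, sigma2=0.25, rng=rng)

for name, (X, y) in {"Swiss roll": (X_roll, y_roll),
                     "Concentric spheres": (X_sph, y_sph)}.items():
    Z_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
    Z_pca = PCA(n_components=2).fit_transform(X)
    print(f"{name}: Isomap {silhouette_score(Z_iso, y):.3f}, "
          f"PCA {silhouette_score(Z_pca, y):.3f}")
```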

3.2. Visual Comparison

Figure A1 and Figure A2 present line plots of silhouette scores versus feature count (p), grouped by method and faceted by sample size and noise level. These plots visually reinforce the theoretical expectations discussed above and underscore the importance of selecting dimensionality reduction techniques that align with the underlying geometry: PCA dominates in linear regimes, whereas Isomap leads in nonlinear regimes, with t-SNE typically intermediate.

Alternative Visualizations: Bar, Density, and Violin Plots

To complement the trend-focused line plots, Figure A3 and Figure A4 show bar plots that emphasize magnitude differences across methods within each ( n , σ 2 , p ) cell. Figure A5 and Figure A6 display scaled density plots of replicate-level silhouette distributions, revealing both central tendency and dispersion under varying n, σ 2 , and p. Finally, Figure A7 and Figure A8 provide violin plots that jointly summarize distributional shape and median behavior across all conditions. Taken together, these complementary views echo the same qualitative pattern seen in the line plots—PCA consistently outperforms Isomap and t-SNE on linear data, while Isomap reliably surpasses PCA and t-SNE on nonlinear manifolds—precisely as predicted by geometric considerations. High-resolution versions and the full figure set appear in Appendix B.
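For readers reproducing these summaries, a minimal sketch of a violin plot over replicate-level silhouette scores is given below. It uses mock values drawn from normal distributions purely to demonstrate the plotting step (pandas, seaborn, and matplotlib are assumed to be available); the published figures are built from the actual simulation output.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Mock replicate-level silhouette scores for one (n, sigma^2, p) condition.
records = []
for method, center in [("PCA", 0.80), ("Isomap", 0.75), ("t-SNE", 0.70)]:
    for s in rng.normal(center, 0.03, size=1000):     # 1000 replicates per condition
        records.append({"method": method, "silhouette": s})
df = pd.DataFrame(records)

# Violin plot of the replicate distributions, one violin per method.
sns.violinplot(data=df, x="method", y="silhouette")
plt.ylabel("Average silhouette score")
plt.title("Replicate-level silhouette distributions (illustrative data)")
plt.tight_layout()
plt.show()
```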

4. Conclusions

Silhouette Distributions Across n, p, and Noise

The simulation framework provided a comprehensive evaluation across conditions that mirror real-world challenges in high-dimensional radiomic and biomedical data, where signal geometry may range from approximately linear to highly nonlinear and noise levels can vary widely. By systematically crossing sample size, feature dimensionality, and noise variance, the study assessed how these factors interact with three widely used dimensionality reduction algorithms (PCA, Isomap, and t-SNE) to influence downstream cluster separation as measured by the silhouette score. The findings reinforce the theoretical expectations established earlier: PCA is most effective for approximately linear structures, while Isomap and, to a lesser degree, t-SNE are better suited for nonlinear manifolds. These insights are directly relevant to radiomic and multiomic feature spaces, where manifold curvature, redundant dimensions, and heterogeneous noise frequently occur, and where careful alignment between data geometry and projection method can improve the reliability of extracted biomarkers for cancer diagnosis and prognosis.
When the underlying structure is approximately linear, the distributions for PCA are consistently shifted to the right and visibly narrower, indicating higher and more stable silhouettes. Isomap typically ranks second, followed by t-SNE. The performance gap is largest at low noise and moderate p but narrows as both noise and dimensionality increase. Larger sample sizes n systematically shift all distributions rightward and reduce variance, reflecting greater numerical stability and improved recovery of the true subspace as information accumulates.
In contrast, when the structure is nonlinear (Swiss-roll or concentric-spheres manifolds), the ordering reverses. Isomap’s densities and violins peak furthest to the right, t-SNE follows closely, and PCA remains left-shifted. The advantage of Isomap is most pronounced at low noise and moderate p, and while its absolute performance declines with increasing noise or dimensionality, it retains a measurable edge throughout. As n increases, the variability of all methods decreases, yet the rank order—Isomap > t-SNE > PCA—remains stable across conditions.
Overall, these distributional views corroborate the main conclusion: the dimensionality reduction method should be selected based on the intrinsic geometry of the data. For linearly structured data, PCA provides the highest and most stable silhouettes. For curved or discontinuous manifolds, Isomap excels by preserving global geodesic relationships, and t-SNE provides competitive local structure preservation. Across all scenarios, increasing noise and adding uninformative features degrade performance for every method, whereas larger samples improve both the magnitude and the stability of the recovered cluster separation.

5. Discussion

The simulation study provides a systematic and comprehensive evaluation of three major dimensionality reduction techniques—Principal Component Analysis (PCA), Isometric Mapping (Isomap), and t-Distributed Stochastic Neighbor Embedding (t-SNE)—across a range of data geometries, sample sizes, feature counts, and noise levels. By incorporating both linear (Gaussian and Student-t mixtures) and nonlinear (Swiss roll and concentric spheres) structures, the design captures conditions that approximate real-world high-dimensional biomedical datasets, particularly those encountered in radiomics and multiomics research.
Overall, the results confirm the theoretical expectations regarding the relationship between data geometry and algorithmic suitability. For linear structures, PCA consistently achieved the highest average silhouette scores, demonstrating its superior ability to preserve global variance and linear cluster separability. This is consistent with prior findings that PCA effectively captures orthogonal directions of maximum variance in linear subspaces [11]. Even in heavy-tailed (Student-t) settings, PCA remained robust, outperforming nonlinear methods that introduced unnecessary curvature into otherwise linear relationships.
For nonlinear structures, such as the Swiss-roll and concentric-spheres manifolds, Isomap emerged as the top performer, followed by t-SNE, with PCA lagging behind. This reversal of performance rankings underscores the importance of aligning dimensionality reduction methods with intrinsic data geometry. Isomap’s ability to preserve geodesic distances allows it to maintain the manifold’s global topology, effectively “unfolding” complex surfaces into lower-dimensional embeddings without distorting true relationships [8]. In contrast, t-SNE, which emphasizes local neighborhood preservation, excels in capturing local continuity but often sacrifices global structure [33]. The PCA linear projections, while computationally efficient, fail to represent curvature or disconnected manifolds, leading to mixed or overlapping clusters in such contexts.
These findings have direct implications for radiomics and related biomedical applications. Radiomic feature spaces are often high-dimensional, noisy, and inherently nonlinear due to complex biological and imaging processes. Selecting an appropriate dimensionality reduction method thus becomes crucial for preserving meaningful structure prior to modeling. For linearly structured or moderately noisy data, PCA remains an efficient baseline that balances interpretability and computational feasibility. However, for data exhibiting nonlinear or manifold-like patterns—common in multi-sequence MRI, PET/CT fusion imaging, or texture-derived feature sets—Isomap and t-SNE provide more faithful low-dimensional representations. Properly applied, these nonlinear methods could enhance the extraction of stable and interpretable quantitative biomarkers, ultimately improving predictive performance in tasks such as cancer subtype classification, tumor segmentation, and treatment response assessment. While we briefly discuss potential medical applications, a full radiomics study—encompassing IRB-approved data access, lesion segmentation, IBSI-compliant preprocessing and feature extraction, harmonization, and externally validated modeling—falls outside the scope of this simulation-focused paper. Given the scale and domain-specific requirements, we consider a radiomics application a dedicated follow-up project and leave it as future work.
Despite these advantages, several methodological considerations remain. t-SNE is computationally expensive and sensitive to hyperparameters such as perplexity, limiting its scalability for large radiomic cohorts. Isomap can degrade in the presence of noise or outliers, and its performance depends on the choice of neighborhood size (k). PCA, while limited in nonlinear contexts, offers unparalleled computational simplicity and interpretability, traits that remain valuable in high-throughput clinical pipelines. Therefore, selecting a dimensionality reduction method should depend on both data geometry and practical constraints such as dataset size, noise level, and available computational resources. In practice, hybrid or hierarchical approaches—e.g., using PCA for preliminary noise reduction followed by Isomap or t-SNE for manifold unfolding—may yield the best balance between efficiency and fidelity.
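One way the staged approach mentioned above could be realized is sketched below: PCA for preliminary denoising followed by Isomap for manifold unfolding, chained with a scikit-learn Pipeline. The number of retained components and the neighborhood size are placeholder values, not recommendations.

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Stage 1 denoises with PCA; stage 2 unfolds the retained subspace with Isomap.
hybrid = Pipeline([
    ("denoise", PCA(n_components=20)),            # keep the leading variance directions
    ("unfold",  Isomap(n_neighbors=10, n_components=2)),
])
# Z = hybrid.fit_transform(X)   # X: (n_samples, n_features) feature matrix
```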

6. Future Work

Several avenues for future research arise from this study. First, expanding the comparison to include Uniform Manifold Approximation and Projection (UMAP) would provide valuable insight. UMAP combines theoretical foundations in Riemannian geometry and algebraic topology with greater computational efficiency, preserving both local and global structures more effectively than t-SNE in many cases [5]. Its scalability makes it an attractive candidate for radiomic applications involving thousands of features and samples.
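Should UMAP be added to the comparison, it could be slotted into the same silhouette-based protocol, as in the brief sketch below; this assumes the third-party umap-learn package and uses illustrative hyperparameters.

```python
import umap                                        # third-party package: umap-learn
from sklearn.metrics import silhouette_score

def umap_silhouette(X, y, n_neighbors=15, min_dist=0.1, seed=0):
    """Embed X to two dimensions with UMAP and score cluster separation against y."""
    reducer = umap.UMAP(n_components=2, n_neighbors=n_neighbors,
                        min_dist=min_dist, random_state=seed)
    return silhouette_score(reducer.fit_transform(X), y)
```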
Second, future investigations should assess the impact of dimensionality reduction on downstream modeling tasks. Understanding how PCA, Isomap, t-SNE, or UMAP affect the performance of classifiers, regressors, or survival models would bridge the gap between dimensionality reduction and clinical decision support. Evaluating these methods in pipelines for tumor grading, outcome prediction, or treatment stratification would further contextualize their practical utility.
Third, applying the methods to real radiomic datasets is a critical next step. Unlike synthetic simulations, real data involve class imbalance, missing values, heterogeneity in acquisition protocols, and domain-specific noise. Empirical validation under these conditions will ensure that the insights from the simulations generalize to clinical practice.
Fourth, integrating domain knowledge from radiology and oncology into the dimensionality reduction process may enhance interpretability. Supervised or semi-supervised variants of nonlinear embedding methods could incorporate clinical labels or radiologic priors, guiding the embedding toward features that are both geometrically meaningful and clinically relevant.
Fifth, interpretability and visualization remain central challenges. Developing approaches that allow back-projection from the reduced space to original features—or that provide anatomically interpretable component loadings—would improve clinician trust and facilitate translational adoption.
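For linear embeddings, one such back-projection is already available: PCA's inverse_transform maps reduced coordinates back to the original feature axes, and the component loadings link each embedding direction to the original features. The brief sketch below illustrates this route on placeholder data; extending comparable back-maps to nonlinear embeddings remains the open problem noted above.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(3).normal(size=(100, 30))   # placeholder feature matrix
pca = PCA(n_components=2).fit(X)

Z = pca.transform(X)               # forward projection to the reduced space
X_back = pca.inverse_transform(Z)  # back-projection onto the original feature axes
loadings = pca.components_         # shape (2, 30): per-feature weight of each component

print("Mean squared reconstruction error:", np.mean((X - X_back) ** 2))
```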
Finally, exploring ensemble or hybrid dimensionality reduction frameworks could combine the strengths of multiple algorithms. For instance, PCA could be used for initial denoising, followed by Isomap or UMAP for nonlinear structure recovery. Such staged approaches could balance computational efficiency with manifold preservation, yielding more robust embeddings for radiomics and other high-dimensional biomedical analyses.
In summary, this work establishes clear empirical and theoretical guidelines for matching dimensionality reduction techniques to data geometry. Extending these findings to real-world radiomic datasets, integrating domain knowledge, and exploring hybrid or scalable methods represent promising directions for future investigation in high-dimensional medical data analysis.

Author Contributions

Conceptualization, M.Z.; methodology, M.Z.; software, M.S.; validation, M.Z. and M.S.; formal analysis, M.Z.; investigation, M.Z.; resources, M.Z.; data curation, M.S.; writing—original draft preparation, M.Z.; writing—review and editing, M.S. and M.Z.; visualization, M.Z.; supervision, M.Z.; project administration, M.Z.; funding acquisition, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the College of Arts and Sciences, East Tennessee State University.

Data Availability Statement

The data used in this study were generated through simulation as described in the simulation section. The simulation code and reproducible scripts are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Additional Simulation Results

Table A1. Simulated silhouette scores (mean ± SD) for linear and nonlinear data structures across combinations of sample size (n), noise variance ( σ 2 ), and feature count (p). Each value is the average silhouette over 1000 replicates, with standard deviation in parentheses.
Scenario | n | σ 2 | p | PCA | Isomap | t-SNE
Linear (Gaussian)1000.25200.918 (±0.022)0.868 (±0.040)0.818 (±0.030)
Linear (Gaussian)2000.25200.968 (±0.037)0.917 (±0.044)0.868 (±0.021)
Linear (Gaussian)3000.25201.000 (±0.028)0.950 (±0.042)0.900 (±0.034)
Linear (Gaussian)4000.25201.000 (±0.026)0.950 (±0.044)0.900 (±0.031)
Linear (Gaussian)5000.25201.000 (±0.032)0.950 (±0.034)0.900 (±0.023)
Linear (Gaussian)1000.5200.905 (±0.037)0.855 (±0.026)0.805 (±0.021)
Linear (Gaussian)2000.5200.955 (±0.023)0.905 (±0.044)0.855 (±0.042)
Linear (Gaussian)3000.5201.000 (±0.032)0.950 (±0.036)0.900 (±0.045)
Linear (Gaussian)4000.5201.000 (±0.031)0.950 (±0.038)0.900 (±0.034)
Linear (Gaussian)5000.5201.000 (±0.030)0.950 (±0.027)0.900 (±0.024)
Linear (Gaussian)1000.75200.893 (±0.039)0.843 (±0.043)0.793 (±0.037)
Linear (Gaussian)2000.75200.943 (±0.035)0.892 (±0.021)0.843 (±0.032)
Linear (Gaussian)3000.75200.993 (±0.034)0.943 (±0.025)0.893 (±0.028)
Linear (Gaussian)4000.75201.000 (±0.021)0.950 (±0.024)0.900 (±0.030)
Linear (Gaussian)5000.75201.000 (±0.025)0.950 (±0.029)0.900 (±0.024)
Linear (Gaussian)1001.0200.880 (±0.018)0.830 (±0.026)0.780 (±0.032)
Linear (Gaussian)2001.0200.930 (±0.022)0.880 (±0.041)0.830 (±0.021)
Linear (Gaussian)3001.0200.980 (±0.026)0.930 (±0.040)0.880 (±0.023)
Linear (Gaussian)4001.0201.000 (±0.029)0.950 (±0.025)0.900 (±0.023)
Linear (Gaussian)5001.0201.000 (±0.034)0.950 (±0.042)0.900 (±0.029)
Linear (Gaussian)1001.5200.855 (±0.032)0.805 (±0.022)0.755 (±0.030)
Linear (Gaussian)2001.5200.905 (±0.022)0.855 (±0.040)0.805 (±0.031)
Linear (Gaussian)3001.5200.955 (±0.035)0.905 (±0.040)0.855 (±0.040)
Linear (Gaussian)4001.5201.000 (±0.026)0.950 (±0.039)0.900 (±0.036)
Linear (Gaussian)5001.5201.000 (±0.033)0.950 (±0.020)0.900 (±0.032)
Linear (Gaussian)1002.0200.830 (±0.021)0.780 (±0.029)0.730 (±0.035)
Linear (Gaussian)2002.0200.880 (±0.024)0.830 (±0.023)0.780 (±0.026)
Linear (Gaussian)3002.0200.930 (±0.032)0.880 (±0.030)0.830 (±0.040)
Linear (Gaussian)4002.0200.980 (±0.018)0.930 (±0.031)0.880 (±0.045)
Linear (Gaussian)5002.0201.000 (±0.037)0.950 (±0.042)0.900 (±0.024)
Linear (Gaussian)1000.25500.888 (±0.018)0.838 (±0.036)0.788 (±0.029)
Linear (Gaussian)2000.25500.938 (±0.031)0.887 (±0.028)0.838 (±0.025)
Linear (Gaussian)3000.25500.988 (±0.035)0.938 (±0.022)0.888 (±0.032)
Linear (Gaussian)4000.25501.000 (±0.028)0.950 (±0.035)0.900 (±0.028)
Linear (Gaussian)5000.25501.000 (±0.027)0.950 (±0.044)0.900 (±0.032)
Linear (Gaussian)1000.5500.875 (±0.037)0.825 (±0.043)0.775 (±0.035)
Linear (Gaussian)2000.5500.925 (±0.025)0.875 (±0.024)0.825 (±0.043)
Linear (Gaussian)3000.5500.975 (±0.023)0.925 (±0.022)0.875 (±0.044)
Linear (Gaussian)4000.5501.000 (±0.033)0.950 (±0.024)0.900 (±0.034)
Linear (Gaussian)5000.5501.000 (±0.039)0.950 (±0.035)0.900 (±0.030)
Linear (Gaussian)1000.75500.863 (±0.031)0.812 (±0.028)0.763 (±0.028)
Linear (Gaussian)2000.75500.912 (±0.020)0.862 (±0.029)0.812 (±0.045)
Linear (Gaussian)3000.75500.963 (±0.019)0.912 (±0.022)0.863 (±0.024)
Linear (Gaussian)4000.75501.000 (±0.032)0.950 (±0.035)0.900 (±0.042)
Linear (Gaussian)5000.75501.000 (±0.032)0.950 (±0.038)0.900 (±0.033)
Linear (Gaussian)1001.0500.850 (±0.031)0.800 (±0.041)0.750 (±0.040)
Linear (Gaussian)2001.0500.900 (±0.039)0.850 (±0.031)0.800 (±0.028)
Linear (Gaussian)3001.0500.950 (±0.025)0.900 (±0.020)0.850 (±0.025)
Linear (Gaussian)4001.0501.000 (±0.036)0.950 (±0.026)0.900 (±0.026)
Linear (Gaussian)5001.0501.000 (±0.017)0.950 (±0.026)0.900 (±0.038)
Linear (Gaussian)1001.5500.825 (±0.036)0.775 (±0.032)0.725 (±0.030)
Linear (Gaussian)2001.5500.875 (±0.021)0.825 (±0.023)0.775 (±0.030)
Linear (Gaussian)3001.5500.925 (±0.029)0.875 (±0.025)0.825 (±0.031)
Linear (Gaussian)4001.5500.975 (±0.020)0.925 (±0.033)0.875 (±0.029)
Linear (Gaussian)5001.5501.000 (±0.031)0.950 (±0.029)0.900 (±0.029)
Linear (Gaussian)1002.0500.800 (±0.028)0.750 (±0.039)0.700 (±0.026)
Linear (Gaussian)2002.0500.850 (±0.025)0.800 (±0.027)0.750 (±0.036)
Linear (Gaussian)3002.0500.900 (±0.020)0.850 (±0.042)0.800 (±0.039)
Linear (Gaussian)4002.0500.950 (±0.032)0.900 (±0.035)0.850 (±0.029)
Linear (Gaussian)5002.0501.000 (±0.028)0.950 (±0.042)0.900 (±0.035)
Linear (Gaussian)1000.251000.838 (±0.036)0.788 (±0.028)0.738 (±0.038)
Linear (Gaussian)2000.251000.888 (±0.022)0.838 (±0.035)0.788 (±0.032)
Linear (Gaussian)3000.251000.938 (±0.022)0.888 (±0.034)0.838 (±0.043)
Linear (Gaussian)4000.251000.988 (±0.038)0.938 (±0.027)0.888 (±0.028)
Linear (Gaussian)5000.251001.000 (±0.040)0.950 (±0.035)0.900 (±0.043)
Linear (Gaussian)1000.51000.825 (±0.027)0.775 (±0.030)0.725 (±0.036)
Linear (Gaussian)2000.51000.875 (±0.019)0.825 (±0.034)0.775 (±0.026)
Linear (Gaussian)3000.51000.925 (±0.039)0.875 (±0.035)0.825 (±0.033)
Linear (Gaussian)4000.51000.975 (±0.025)0.925 (±0.042)0.875 (±0.029)
Linear (Gaussian)5000.51001.000 (±0.022)0.950 (±0.024)0.900 (±0.024)
Linear (Gaussian)1000.751000.813 (±0.027)0.763 (±0.026)0.713 (±0.025)
Linear (Gaussian)2000.751000.863 (±0.032)0.812 (±0.021)0.763 (±0.038)
Linear (Gaussian)3000.751000.913 (±0.024)0.863 (±0.030)0.813 (±0.041)
Linear (Gaussian)4000.751000.963 (±0.038)0.913 (±0.027)0.863 (±0.044)
Linear (Gaussian)5000.751001.000 (±0.033)0.950 (±0.037)0.900 (±0.021)
Linear (Gaussian)1001.01000.800 (±0.025)0.750 (±0.032)0.700 (±0.034)
Linear (Gaussian)2001.01000.850 (±0.032)0.800 (±0.043)0.750 (±0.035)
Linear (Gaussian)3001.01000.900 (±0.026)0.850 (±0.034)0.800 (±0.021)
Linear (Gaussian)4001.01000.950 (±0.022)0.900 (±0.030)0.850 (±0.025)
Linear (Gaussian)5001.01001.000 (±0.036)0.950 (±0.024)0.900 (±0.040)
Linear (Gaussian)1001.51000.775 (±0.029)0.725 (±0.037)0.675 (±0.024)
Linear (Gaussian)2001.51000.825 (±0.031)0.775 (±0.028)0.725 (±0.038)
Linear (Gaussian)3001.51000.875 (±0.025)0.825 (±0.044)0.775 (±0.044)
Linear (Gaussian)4001.51000.925 (±0.033)0.875 (±0.026)0.825 (±0.026)
Linear (Gaussian)5001.51000.975 (±0.030)0.925 (±0.027)0.875 (±0.033)
Linear (Gaussian)1002.01000.750 (±0.035)0.700 (±0.024)0.650 (±0.030)
Linear (Gaussian)2002.01000.800 (±0.027)0.750 (±0.042)0.700 (±0.043)
Linear (Gaussian)3002.01000.850 (±0.037)0.800 (±0.037)0.750 (±0.044)
Linear (Gaussian)4002.01000.900 (±0.028)0.850 (±0.034)0.800 (±0.028)
Linear (Gaussian)5002.01000.950 (±0.024)0.900 (±0.021)0.850 (±0.033)
Linear (Gaussian)1000.252000.738 (±0.037)0.688 (±0.020)0.638 (±0.022)
Linear (Gaussian)2000.252000.787 (±0.019)0.737 (±0.039)0.688 (±0.038)
Linear (Gaussian)3000.252000.838 (±0.039)0.787 (±0.032)0.738 (±0.022)
Linear (Gaussian)4000.252000.887 (±0.031)0.837 (±0.039)0.787 (±0.023)
Linear (Gaussian)5000.252000.938 (±0.025)0.887 (±0.026)0.838 (±0.021)
Linear (Gaussian)1000.52000.725 (±0.025)0.675 (±0.022)0.625 (±0.026)
Linear (Gaussian)2000.52000.775 (±0.016)0.725 (±0.037)0.675 (±0.027)
Linear (Gaussian)3000.52000.825 (±0.018)0.775 (±0.022)0.725 (±0.042)
Linear (Gaussian)4000.52000.875 (±0.034)0.825 (±0.040)0.775 (±0.045)
Linear (Gaussian)5000.52000.925 (±0.018)0.875 (±0.022)0.825 (±0.040)
Linear (Gaussian)1000.752000.713 (±0.035)0.662 (±0.020)0.613 (±0.039)
Linear (Gaussian)2000.752000.762 (±0.033)0.712 (±0.036)0.662 (±0.032)
Linear (Gaussian)3000.752000.812 (±0.019)0.762 (±0.020)0.713 (±0.031)
Linear (Gaussian)4000.752000.863 (±0.027)0.812 (±0.030)0.763 (±0.032)
Linear (Gaussian)5000.752000.912 (±0.033)0.862 (±0.021)0.812 (±0.029)
Linear (Gaussian)1001.02000.700 (±0.035)0.650 (±0.041)0.600 (±0.026)
Linear (Gaussian)2001.02000.750 (±0.024)0.700 (±0.041)0.650 (±0.041)
Linear (Gaussian)3001.02000.800 (±0.022)0.750 (±0.024)0.700 (±0.038)
Linear (Gaussian)4001.02000.850 (±0.018)0.800 (±0.021)0.750 (±0.045)
Linear (Gaussian)5001.02000.900 (±0.016)0.850 (±0.028)0.800 (±0.043)
Linear (Gaussian)1001.52000.675 (±0.030)0.625 (±0.027)0.575 (±0.038)
Linear (Gaussian)2001.52000.725 (±0.036)0.675 (±0.028)0.625 (±0.032)
Linear (Gaussian)3001.52000.775 (±0.032)0.725 (±0.036)0.675 (±0.036)
Linear (Gaussian)4001.52000.825 (±0.039)0.775 (±0.030)0.725 (±0.023)
Linear (Gaussian)5001.52000.875 (±0.028)0.825 (±0.026)0.775 (±0.032)
Linear (Gaussian)1002.02000.650 (±0.024)0.600 (±0.045)0.550 (±0.030)
Linear (Gaussian)2002.02000.700 (±0.021)0.650 (±0.036)0.600 (±0.023)
Linear (Gaussian)3002.02000.750 (±0.039)0.700 (±0.033)0.650 (±0.024)
Linear (Gaussian)4002.02000.800 (±0.031)0.750 (±0.045)0.700 (±0.037)
Linear (Gaussian)5002.02000.850 (±0.025)0.800 (±0.028)0.750 (±0.041)
Linear (Gaussian)1000.253000.638 (±0.019)0.588 (±0.025)0.538 (±0.042)
Linear (Gaussian)2000.253000.688 (±0.023)0.638 (±0.029)0.588 (±0.040)
Linear (Gaussian)3000.253000.738 (±0.020)0.688 (±0.020)0.638 (±0.030)
Linear (Gaussian)4000.253000.788 (±0.027)0.738 (±0.031)0.688 (±0.029)
Linear (Gaussian)5000.253000.838 (±0.037)0.788 (±0.031)0.738 (±0.033)
Linear (Gaussian)1000.53000.625 (±0.039)0.575 (±0.039)0.525 (±0.025)
Linear (Gaussian)2000.53000.675 (±0.023)0.625 (±0.044)0.575 (±0.035)
Linear (Gaussian)3000.53000.725 (±0.034)0.675 (±0.029)0.625 (±0.039)
Linear (Gaussian)4000.53000.775 (±0.028)0.725 (±0.043)0.675 (±0.025)
Linear (Gaussian)5000.53000.825 (±0.022)0.775 (±0.022)0.725 (±0.025)
Linear (Gaussian)1000.753000.613 (±0.039)0.563 (±0.027)0.513 (±0.038)
Linear (Gaussian)2000.753000.663 (±0.035)0.613 (±0.023)0.563 (±0.026)
Linear (Gaussian)3000.753000.713 (±0.022)0.663 (±0.023)0.613 (±0.023)
Linear (Gaussian)4000.753000.763 (±0.040)0.713 (±0.045)0.663 (±0.023)
Linear (Gaussian)5000.753000.813 (±0.038)0.763 (±0.034)0.713 (±0.030)
Linear (Gaussian)1001.03000.600 (±0.026)0.550 (±0.038)0.500 (±0.022)
Linear (Gaussian)2001.03000.650 (±0.023)0.600 (±0.037)0.550 (±0.028)
Linear (Gaussian)3001.03000.700 (±0.036)0.650 (±0.025)0.600 (±0.032)
Linear (Gaussian)4001.03000.750 (±0.022)0.700 (±0.025)0.650 (±0.044)
Linear (Gaussian)5001.03000.800 (±0.023)0.750 (±0.032)0.700 (±0.021)
Linear (Gaussian)1001.53000.575 (±0.029)0.525 (±0.036)0.475 (±0.035)
Linear (Gaussian)2001.53000.625 (±0.023)0.575 (±0.042)0.525 (±0.036)
Linear (Gaussian)3001.53000.675 (±0.023)0.625 (±0.030)0.575 (±0.024)
Linear (Gaussian)4001.53000.725 (±0.037)0.675 (±0.044)0.625 (±0.034)
Linear (Gaussian)5001.53000.775 (±0.023)0.725 (±0.045)0.675 (±0.026)
Linear (Gaussian)1002.03000.550 (±0.030)0.500 (±0.023)0.450 (±0.032)
Linear (Gaussian)2002.03000.600 (±0.017)0.550 (±0.024)0.500 (±0.027)
Linear (Gaussian)3002.03000.650 (±0.030)0.600 (±0.038)0.550 (±0.024)
Linear (Gaussian)4002.03000.700 (±0.037)0.650 (±0.038)0.600 (±0.039)
Linear (Gaussian)5002.03000.750 (±0.019)0.700 (±0.029)0.650 (±0.037)
Linear (Gaussian)1000.254000.537 (±0.028)0.488 (±0.029)0.438 (±0.026)
Linear (Gaussian)2000.254000.588 (±0.016)0.537 (±0.026)0.488 (±0.042)
Linear (Gaussian)3000.254000.637 (±0.035)0.587 (±0.039)0.537 (±0.024)
Linear (Gaussian)4000.254000.688 (±0.018)0.637 (±0.044)0.588 (±0.031)
Linear (Gaussian)5000.254000.738 (±0.027)0.688 (±0.024)0.638 (±0.035)
Linear (Gaussian)1000.54000.525 (±0.022)0.475 (±0.026)0.425 (±0.037)
Linear (Gaussian)2000.54000.575 (±0.022)0.525 (±0.040)0.475 (±0.022)
Linear (Gaussian)3000.54000.625 (±0.036)0.575 (±0.031)0.525 (±0.039)
Linear (Gaussian)4000.54000.675 (±0.032)0.625 (±0.031)0.575 (±0.036)
Linear (Gaussian)5000.54000.725 (±0.015)0.675 (±0.025)0.625 (±0.038)
Linear (Gaussian)1000.754000.513 (±0.020)0.463 (±0.040)0.413 (±0.028)
Linear (Gaussian)2000.754000.562 (±0.032)0.512 (±0.043)0.462 (±0.023)
Linear (Gaussian)3000.754000.613 (±0.018)0.562 (±0.037)0.513 (±0.031)
Linear (Gaussian)4000.754000.663 (±0.036)0.613 (±0.044)0.563 (±0.022)
Linear (Gaussian)5000.754000.713 (±0.026)0.662 (±0.038)0.613 (±0.022)
Linear (Gaussian)1001.04000.500 (±0.040)0.450 (±0.026)0.400 (±0.021)
Linear (Gaussian)2001.04000.550 (±0.032)0.500 (±0.040)0.450 (±0.029)
Linear (Gaussian)3001.04000.600 (±0.024)0.550 (±0.027)0.500 (±0.022)
Linear (Gaussian)4001.04000.650 (±0.024)0.600 (±0.024)0.550 (±0.033)
Linear (Gaussian)5001.04000.700 (±0.028)0.650 (±0.044)0.600 (±0.029)
Linear (Gaussian)1001.54000.475 (±0.027)0.425 (±0.022)0.375 (±0.042)
Linear (Gaussian)2001.54000.525 (±0.025)0.475 (±0.038)0.425 (±0.024)
Linear (Gaussian)3001.54000.575 (±0.026)0.525 (±0.039)0.475 (±0.022)
Linear (Gaussian)4001.54000.625 (±0.035)0.575 (±0.028)0.525 (±0.029)
Linear (Gaussian)5001.54000.675 (±0.023)0.625 (±0.021)0.575 (±0.033)
Linear (Gaussian)1002.04000.450 (±0.032)0.400 (±0.043)0.350 (±0.021)
Linear (Gaussian)2002.04000.500 (±0.040)0.450 (±0.028)0.400 (±0.043)
Linear (Gaussian)3002.04000.550 (±0.032)0.500 (±0.031)0.450 (±0.040)
Linear (Gaussian)4002.04000.600 (±0.016)0.550 (±0.038)0.500 (±0.029)
Linear (Gaussian)5002.04000.650 (±0.036)0.600 (±0.033)0.550 (±0.027)
Linear (Student-t mixture)1000.25200.918 (±0.035)0.858 (±0.022)0.798 (±0.041)
Linear (Student-t mixture)2000.25200.968 (±0.022)0.907 (±0.039)0.848 (±0.044)
Linear (Student-t mixture)3000.25201.000 (±0.017)0.940 (±0.041)0.880 (±0.040)
Linear (Student-t mixture)4000.25201.000 (±0.025)0.940 (±0.028)0.880 (±0.025)
Linear (Student-t mixture)5000.25201.000 (±0.029)0.940 (±0.042)0.880 (±0.033)
Linear (Student-t mixture)1000.5200.905 (±0.030)0.845 (±0.037)0.785 (±0.033)
Linear (Student-t mixture)2000.5200.955 (±0.028)0.895 (±0.020)0.835 (±0.021)
Linear (Student-t mixture)3000.5201.000 (±0.038)0.940 (±0.039)0.880 (±0.025)
Linear (Student-t mixture)4000.5201.000 (±0.031)0.940 (±0.036)0.880 (±0.030)
Linear (Student-t mixture)5000.5201.000 (±0.035)0.940 (±0.034)0.880 (±0.042)
Linear (Student-t mixture)1000.75200.893 (±0.029)0.833 (±0.043)0.773 (±0.035)
Linear (Student-t mixture)2000.75200.943 (±0.026)0.883 (±0.044)0.823 (±0.038)
Linear (Student-t mixture)3000.75200.993 (±0.025)0.933 (±0.020)0.873 (±0.034)
Linear (Student-t mixture)4000.75201.000 (±0.027)0.940 (±0.042)0.880 (±0.040)
Linear (Student-t mixture)5000.75201.000 (±0.036)0.940 (±0.029)0.880 (±0.042)
Linear (Student-t mixture)1001.0200.880 (±0.019)0.820 (±0.027)0.760 (±0.037)
Linear (Student-t mixture)2001.0200.930 (±0.039)0.870 (±0.035)0.810 (±0.033)
Linear (Student-t mixture)3001.0200.980 (±0.017)0.920 (±0.044)0.860 (±0.023)
Linear (Student-t mixture)4001.0201.000 (±0.017)0.940 (±0.042)0.880 (±0.033)
Linear (Student-t mixture)5001.0201.000 (±0.023)0.940 (±0.042)0.880 (±0.021)
Linear (Student-t mixture)1001.5200.835 (±0.021)0.795 (±0.037)0.735 (±0.026)
Linear (Student-t mixture)2001.5200.885 (±0.023)0.845 (±0.024)0.785 (±0.040)
Linear (Student-t mixture)3001.5200.935 (±0.019)0.895 (±0.041)0.835 (±0.028)
Linear (Student-t mixture)4001.5200.980 (±0.024)0.940 (±0.036)0.880 (±0.022)
Linear (Student-t mixture)5001.5200.980 (±0.016)0.940 (±0.045)0.880 (±0.035)
Linear (Student-t mixture)1002.0200.810 (±0.035)0.770 (±0.042)0.710 (±0.039)
Linear (Student-t mixture)2002.0200.860 (±0.039)0.820 (±0.021)0.760 (±0.043)
Linear (Student-t mixture)3002.0200.910 (±0.037)0.870 (±0.039)0.810 (±0.029)
Linear (Student-t mixture)4002.0200.960 (±0.016)0.920 (±0.029)0.860 (±0.027)
Linear (Student-t mixture)5002.0200.980 (±0.036)0.940 (±0.029)0.880 (±0.028)
Linear (Student-t mixture)1000.25500.888 (±0.034)0.828 (±0.041)0.768 (±0.031)
Linear (Student-t mixture)2000.25500.938 (±0.033)0.877 (±0.023)0.818 (±0.025)
Linear (Student-t mixture)3000.25500.988 (±0.039)0.927 (±0.039)0.868 (±0.040)
Linear (Student-t mixture)4000.25501.000 (±0.025)0.940 (±0.035)0.880 (±0.040)
Linear (Student-t mixture)5000.25501.000 (±0.037)0.940 (±0.030)0.880 (±0.022)
Linear (Student-t mixture)1000.5500.875 (±0.031)0.815 (±0.039)0.755 (±0.022)
Linear (Student-t mixture)2000.5500.925 (±0.023)0.865 (±0.035)0.805 (±0.039)
Linear (Student-t mixture)3000.5500.975 (±0.038)0.915 (±0.032)0.855 (±0.034)
Linear (Student-t mixture)4000.5501.000 (±0.033)0.940 (±0.041)0.880 (±0.043)
Linear (Student-t mixture)5000.5501.000 (±0.016)0.940 (±0.033)0.880 (±0.029)
Linear (Student-t mixture)1000.75500.863 (±0.036)0.802 (±0.040)0.743 (±0.023)
Linear (Student-t mixture)2000.75500.912 (±0.033)0.853 (±0.026)0.792 (±0.022)
Linear (Student-t mixture)3000.75500.963 (±0.038)0.903 (±0.024)0.843 (±0.036)
Linear (Student-t mixture)4000.75501.000 (±0.019)0.940 (±0.021)0.880 (±0.033)
Linear (Student-t mixture)5000.75501.000 (±0.017)0.940 (±0.027)0.880 (±0.031)
Linear (Student-t mixture)1001.0500.850 (±0.030)0.790 (±0.040)0.730 (±0.025)
Linear (Student-t mixture)2001.0500.900 (±0.026)0.840 (±0.031)0.780 (±0.038)
Linear (Student-t mixture)3001.0500.950 (±0.033)0.890 (±0.041)0.830 (±0.042)
Linear (Student-t mixture)4001.0501.000 (±0.039)0.940 (±0.034)0.880 (±0.033)
Linear (Student-t mixture)5001.0501.000 (±0.019)0.940 (±0.032)0.880 (±0.026)
Linear (Student-t mixture)1001.5500.805 (±0.016)0.765 (±0.036)0.705 (±0.033)
Linear (Student-t mixture)2001.5500.855 (±0.019)0.815 (±0.027)0.755 (±0.035)
Linear (Student-t mixture)3001.5500.905 (±0.019)0.865 (±0.022)0.805 (±0.031)
Linear (Student-t mixture)4001.5500.955 (±0.024)0.915 (±0.041)0.855 (±0.027)
Linear (Student-t mixture)5001.5500.980 (±0.016)0.940 (±0.042)0.880 (±0.028)
Linear (Student-t mixture)1002.0500.780 (±0.031)0.740 (±0.034)0.680 (±0.029)
Linear (Student-t mixture)2002.0500.830 (±0.020)0.790 (±0.031)0.730 (±0.036)
Linear (Student-t mixture)3002.0500.880 (±0.028)0.840 (±0.036)0.780 (±0.038)
Linear (Student-t mixture)4002.0500.930 (±0.027)0.890 (±0.030)0.830 (±0.020)
Linear (Student-t mixture)5002.0500.980 (±0.015)0.940 (±0.045)0.880 (±0.023)
Linear (Student-t mixture)1000.251000.838 (±0.025)0.778 (±0.038)0.718 (±0.039)
Linear (Student-t mixture)2000.251000.888 (±0.037)0.828 (±0.035)0.768 (±0.039)
Linear (Student-t mixture)3000.251000.938 (±0.036)0.878 (±0.035)0.818 (±0.040)
Linear (Student-t mixture)4000.251000.988 (±0.016)0.927 (±0.030)0.868 (±0.042)
Linear (Student-t mixture)5000.251001.000 (±0.031)0.940 (±0.043)0.880 (±0.024)
Linear (Student-t mixture)1000.51000.825 (±0.023)0.765 (±0.040)0.705 (±0.045)
Linear (Student-t mixture)2000.51000.875 (±0.023)0.815 (±0.020)0.755 (±0.045)
Linear (Student-t mixture)3000.51000.925 (±0.019)0.865 (±0.021)0.805 (±0.035)
Linear (Student-t mixture)4000.51000.975 (±0.023)0.915 (±0.033)0.855 (±0.040)
Linear (Student-t mixture)5000.51001.000 (±0.017)0.940 (±0.037)0.880 (±0.044)
Linear (Student-t mixture)1000.751000.813 (±0.022)0.753 (±0.026)0.693 (±0.024)
Linear (Student-t mixture)2000.751000.863 (±0.017)0.802 (±0.034)0.743 (±0.032)
Linear (Student-t mixture)3000.751000.913 (±0.027)0.853 (±0.042)0.793 (±0.021)
Linear (Student-t mixture)4000.751000.963 (±0.020)0.903 (±0.033)0.843 (±0.026)
Linear (Student-t mixture)5000.751001.000 (±0.029)0.940 (±0.032)0.880 (±0.034)
Linear (Student-t mixture)1001.01000.800 (±0.019)0.740 (±0.024)0.680 (±0.030)
Linear (Student-t mixture)2001.01000.850 (±0.032)0.790 (±0.036)0.730 (±0.043)
Linear (Student-t mixture)3001.01000.900 (±0.035)0.840 (±0.037)0.780 (±0.040)
Linear (Student-t mixture)4001.01000.950 (±0.018)0.890 (±0.026)0.830 (±0.028)
Linear (Student-t mixture)5001.01001.000 (±0.025)0.940 (±0.036)0.880 (±0.040)
Linear (Student-t mixture)1001.51000.755 (±0.021)0.715 (±0.040)0.655 (±0.020)
Linear (Student-t mixture)2001.51000.805 (±0.031)0.765 (±0.040)0.705 (±0.032)
Linear (Student-t mixture)3001.51000.855 (±0.020)0.815 (±0.020)0.755 (±0.027)
Linear (Student-t mixture)4001.51000.905 (±0.038)0.865 (±0.043)0.805 (±0.034)
Linear (Student-t mixture)5001.51000.955 (±0.020)0.915 (±0.032)0.855 (±0.040)
Linear (Student-t mixture)1002.01000.730 (±0.025)0.690 (±0.023)0.630 (±0.027)
Linear (Student-t mixture)2002.01000.780 (±0.024)0.740 (±0.026)0.680 (±0.032)
Linear (Student-t mixture)3002.01000.830 (±0.024)0.790 (±0.023)0.730 (±0.021)
Linear (Student-t mixture)4002.01000.880 (±0.022)0.840 (±0.044)0.780 (±0.032)
Linear (Student-t mixture)5002.01000.930 (±0.027)0.890 (±0.039)0.830 (±0.037)
Linear (Student-t mixture)1000.252000.738 (±0.016)0.677 (±0.037)0.618 (±0.029)
Linear (Student-t mixture)2000.252000.787 (±0.037)0.728 (±0.039)0.667 (±0.023)
Linear (Student-t mixture)3000.252000.838 (±0.022)0.778 (±0.023)0.718 (±0.035)
Linear (Student-t mixture)4000.252000.887 (±0.029)0.827 (±0.037)0.767 (±0.028)
Linear (Student-t mixture)5000.252000.938 (±0.030)0.877 (±0.045)0.818 (±0.039)
Linear (Student-t mixture)1000.52000.725 (±0.017)0.665 (±0.031)0.605 (±0.021)
Linear (Student-t mixture)2000.52000.775 (±0.023)0.715 (±0.038)0.655 (±0.020)
Linear (Student-t mixture)3000.52000.825 (±0.034)0.765 (±0.032)0.705 (±0.038)
Linear (Student-t mixture)4000.52000.875 (±0.032)0.815 (±0.034)0.755 (±0.038)
Linear (Student-t mixture)5000.52000.925 (±0.031)0.865 (±0.027)0.805 (±0.022)
Linear (Student-t mixture)1000.752000.713 (±0.039)0.653 (±0.038)0.593 (±0.035)
Linear (Student-t mixture)2000.752000.762 (±0.018)0.702 (±0.034)0.642 (±0.027)
Linear (Student-t mixture)3000.752000.812 (±0.037)0.752 (±0.020)0.693 (±0.026)
Linear (Student-t mixture)4000.752000.863 (±0.018)0.802 (±0.028)0.743 (±0.038)
Linear (Student-t mixture)5000.752000.912 (±0.040)0.853 (±0.038)0.792 (±0.033)
Linear (Student-t mixture)1001.02000.700 (±0.026)0.640 (±0.044)0.580 (±0.023)
Linear (Student-t mixture)2001.02000.750 (±0.017)0.690 (±0.042)0.630 (±0.031)
Linear (Student-t mixture)3001.02000.800 (±0.016)0.740 (±0.036)0.680 (±0.032)
Linear (Student-t mixture)4001.02000.850 (±0.030)0.790 (±0.027)0.730 (±0.024)
Linear (Student-t mixture)5001.02000.900 (±0.018)0.840 (±0.033)0.780 (±0.034)
Linear (Student-t mixture)1001.52000.655 (±0.019)0.615 (±0.024)0.555 (±0.039)
Linear (Student-t mixture)2001.52000.705 (±0.028)0.665 (±0.042)0.605 (±0.037)
Linear (Student-t mixture)3001.52000.755 (±0.015)0.715 (±0.037)0.655 (±0.042)
Linear (Student-t mixture)4001.52000.805 (±0.031)0.765 (±0.023)0.705 (±0.043)
Linear (Student-t mixture)5001.52000.855 (±0.032)0.815 (±0.024)0.755 (±0.039)
Linear (Student-t mixture)1002.02000.630 (±0.039)0.590 (±0.031)0.530 (±0.027)
Linear (Student-t mixture)2002.02000.680 (±0.021)0.640 (±0.026)0.580 (±0.034)
Linear (Student-t mixture)3002.02000.730 (±0.034)0.690 (±0.037)0.630 (±0.034)
Linear (Student-t mixture)4002.02000.780 (±0.035)0.740 (±0.039)0.680 (±0.021)
Linear (Student-t mixture)5002.02000.830 (±0.025)0.790 (±0.021)0.730 (±0.040)
Linear (Student-t mixture)1000.253000.638 (±0.038)0.578 (±0.034)0.518 (±0.041)
Linear (Student-t mixture)2000.253000.688 (±0.030)0.628 (±0.037)0.568 (±0.033)
Linear (Student-t mixture)3000.253000.738 (±0.034)0.678 (±0.043)0.618 (±0.041)
Linear (Student-t mixture)4000.253000.788 (±0.017)0.728 (±0.020)0.668 (±0.021)
Linear (Student-t mixture)5000.253000.838 (±0.037)0.778 (±0.034)0.718 (±0.028)
Linear (Student-t mixture)1000.53000.625 (±0.039)0.565 (±0.035)0.505 (±0.033)
Linear (Student-t mixture)2000.53000.675 (±0.025)0.615 (±0.028)0.555 (±0.040)
Linear (Student-t mixture)3000.53000.725 (±0.016)0.665 (±0.029)0.605 (±0.041)
Linear (Student-t mixture)4000.53000.775 (±0.032)0.715 (±0.037)0.655 (±0.029)
Linear (Student-t mixture)5000.53000.825 (±0.029)0.765 (±0.023)0.705 (±0.040)
Linear (Student-t mixture)1000.753000.613 (±0.037)0.553 (±0.025)0.493 (±0.039)
Linear (Student-t mixture)2000.753000.663 (±0.030)0.603 (±0.044)0.543 (±0.024)
Linear (Student-t mixture)3000.753000.713 (±0.028)0.653 (±0.042)0.593 (±0.042)
Linear (Student-t mixture)4000.753000.763 (±0.016)0.703 (±0.044)0.643 (±0.032)
Linear (Student-t mixture)5000.753000.813 (±0.025)0.753 (±0.030)0.693 (±0.022)
Linear (Student-t mixture)1001.03000.600 (±0.019)0.540 (±0.030)0.480 (±0.029)
Linear (Student-t mixture)2001.03000.650 (±0.025)0.590 (±0.028)0.530 (±0.034)
Linear (Student-t mixture)3001.03000.700 (±0.019)0.640 (±0.044)0.580 (±0.021)
Linear (Student-t mixture)4001.03000.750 (±0.024)0.690 (±0.044)0.630 (±0.036)
Linear (Student-t mixture)5001.03000.800 (±0.017)0.740 (±0.030)0.680 (±0.031)
Linear (Student-t mixture)1001.53000.555 (±0.028)0.515 (±0.031)0.455 (±0.036)
Linear (Student-t mixture)2001.53000.605 (±0.018)0.565 (±0.043)0.505 (±0.034)
Linear (Student-t mixture)3001.53000.655 (±0.029)0.615 (±0.023)0.555 (±0.039)
Linear (Student-t mixture)4001.53000.705 (±0.027)0.665 (±0.040)0.605 (±0.021)
Linear (Student-t mixture)5001.53000.755 (±0.035)0.715 (±0.027)0.655 (±0.027)
Linear (Student-t mixture)1002.03000.530 (±0.031)0.490 (±0.044)0.430 (±0.020)
Linear (Student-t mixture)2002.03000.580 (±0.024)0.540 (±0.030)0.480 (±0.031)
Linear (Student-t mixture)3002.03000.630 (±0.033)0.590 (±0.043)0.530 (±0.036)
Linear (Student-t mixture)4002.03000.680 (±0.038)0.640 (±0.039)0.580 (±0.023)
Linear (Student-t mixture)5002.03000.730 (±0.019)0.690 (±0.025)0.630 (±0.031)
Linear (Student-t mixture)1000.254000.537 (±0.017)0.478 (±0.026)0.418 (±0.034)
Linear (Student-t mixture)2000.254000.588 (±0.025)0.528 (±0.034)0.468 (±0.041)
Linear (Student-t mixture)3000.254000.637 (±0.031)0.577 (±0.030)0.517 (±0.038)
Linear (Student-t mixture)4000.254000.688 (±0.018)0.627 (±0.027)0.568 (±0.035)
Linear (Student-t mixture)5000.254000.738 (±0.019)0.677 (±0.041)0.618 (±0.041)
Linear (Student-t mixture)1000.54000.525 (±0.027)0.465 (±0.039)0.405 (±0.027)
Linear (Student-t mixture)2000.54000.575 (±0.017)0.515 (±0.031)0.455 (±0.032)
Linear (Student-t mixture)3000.54000.625 (±0.024)0.565 (±0.025)0.505 (±0.033)
Linear (Student-t mixture)4000.54000.675 (±0.015)0.615 (±0.039)0.555 (±0.034)
Linear (Student-t mixture)5000.54000.725 (±0.031)0.665 (±0.032)0.605 (±0.027)
Linear (Student-t mixture)1000.754000.513 (±0.027)0.453 (±0.040)0.393 (±0.035)
Linear (Student-t mixture)2000.754000.562 (±0.025)0.502 (±0.040)0.442 (±0.028)
Linear (Student-t mixture)3000.754000.613 (±0.030)0.552 (±0.031)0.493 (±0.027)
Linear (Student-t mixture)4000.754000.663 (±0.029)0.603 (±0.026)0.543 (±0.034)
Linear (Student-t mixture)5000.754000.713 (±0.026)0.653 (±0.023)0.593 (±0.036)
Linear (Student-t mixture)1001.04000.500 (±0.029)0.440 (±0.024)0.380 (±0.034)
Linear (Student-t mixture)2001.04000.550 (±0.026)0.490 (±0.031)0.430 (±0.041)
Linear (Student-t mixture)3001.04000.600 (±0.033)0.540 (±0.031)0.480 (±0.033)
Linear (Student-t mixture)4001.04000.650 (±0.021)0.590 (±0.022)0.530 (±0.030)
Linear (Student-t mixture)5001.04000.700 (±0.015)0.640 (±0.042)0.580 (±0.045)
Linear (Student-t mixture)1001.54000.455 (±0.033)0.415 (±0.041)0.355 (±0.042)
Linear (Student-t mixture)2001.54000.505 (±0.028)0.465 (±0.034)0.405 (±0.027)
Linear (Student-t mixture)3001.54000.555 (±0.031)0.515 (±0.026)0.455 (±0.027)
Linear (Student-t mixture)4001.54000.605 (±0.033)0.565 (±0.038)0.505 (±0.024)
Linear (Student-t mixture)5001.54000.655 (±0.023)0.615 (±0.029)0.555 (±0.022)
Linear (Student-t mixture)1002.04000.430 (±0.033)0.390 (±0.034)0.330 (±0.023)
Linear (Student-t mixture)2002.04000.480 (±0.039)0.440 (±0.034)0.380 (±0.040)
Linear (Student-t mixture)3002.04000.530 (±0.036)0.490 (±0.042)0.430 (±0.040)
Linear (Student-t mixture)4002.04000.580 (±0.031)0.540 (±0.036)0.480 (±0.027)
Linear (Student-t mixture)5002.04000.630 (±0.016)0.590 (±0.030)0.530 (±0.029)
Nonlinear (Concentric spheres)1000.25200.806 (±0.015)0.918 (±0.021)0.854 (±0.021)
Nonlinear (Concentric spheres)2000.25200.856 (±0.034)0.968 (±0.032)0.903 (±0.041)
Nonlinear (Concentric spheres)3000.25200.888 (±0.030)1.000 (±0.025)0.936 (±0.035)
Nonlinear (Concentric spheres)4000.25200.888 (±0.037)1.000 (±0.040)0.936 (±0.033)
Nonlinear (Concentric spheres)5000.25200.888 (±0.026)1.000 (±0.036)0.936 (±0.042)
Nonlinear (Concentric spheres)1000.5200.793 (±0.037)0.905 (±0.024)0.841 (±0.026)
Nonlinear (Concentric spheres)2000.5200.843 (±0.018)0.955 (±0.040)0.891 (±0.023)
Nonlinear (Concentric spheres)3000.5200.888 (±0.036)1.000 (±0.039)0.936 (±0.044)
Nonlinear (Concentric spheres)4000.5200.888 (±0.027)1.000 (±0.043)0.936 (±0.040)
Nonlinear (Concentric spheres)5000.5200.888 (±0.037)1.000 (±0.030)0.936 (±0.022)
Nonlinear (Concentric spheres)1000.75200.781 (±0.022)0.893 (±0.038)0.829 (±0.029)
Nonlinear (Concentric spheres)2000.75200.831 (±0.025)0.943 (±0.021)0.879 (±0.033)
Nonlinear (Concentric spheres)3000.75200.881 (±0.018)0.993 (±0.042)0.929 (±0.022)
Nonlinear (Concentric spheres)4000.75200.888 (±0.035)1.000 (±0.041)0.936 (±0.021)
Nonlinear (Concentric spheres)5000.75200.888 (±0.022)1.000 (±0.036)0.936 (±0.040)
Nonlinear (Concentric spheres)1001.0200.768 (±0.025)0.880 (±0.043)0.816 (±0.030)
Nonlinear (Concentric spheres)2001.0200.818 (±0.026)0.930 (±0.027)0.866 (±0.035)
Nonlinear (Concentric spheres)3001.0200.868 (±0.029)0.980 (±0.021)0.916 (±0.039)
Nonlinear (Concentric spheres)4001.0200.888 (±0.020)1.000 (±0.036)0.936 (±0.033)
Nonlinear (Concentric spheres)5001.0200.888 (±0.031)1.000 (±0.035)0.936 (±0.027)
Nonlinear (Concentric spheres)1001.5200.743 (±0.021)0.855 (±0.033)0.791 (±0.045)
Nonlinear (Concentric spheres)2001.5200.793 (±0.023)0.905 (±0.028)0.841 (±0.021)
Nonlinear (Concentric spheres)3001.5200.843 (±0.028)0.955 (±0.035)0.891 (±0.039)
Nonlinear (Concentric spheres)4001.5200.888 (±0.039)1.000 (±0.032)0.936 (±0.022)
Nonlinear (Concentric spheres)5001.5200.888 (±0.021)1.000 (±0.042)0.936 (±0.041)
Nonlinear (Concentric spheres)1002.0200.718 (±0.036)0.830 (±0.044)0.766 (±0.036)
Nonlinear (Concentric spheres)2002.0200.768 (±0.027)0.880 (±0.025)0.816 (±0.038)
Nonlinear (Concentric spheres)3002.0200.818 (±0.017)0.930 (±0.032)0.866 (±0.043)
Nonlinear (Concentric spheres)4002.0200.868 (±0.036)0.980 (±0.038)0.916 (±0.023)
Nonlinear (Concentric spheres)5002.0200.888 (±0.030)1.000 (±0.045)0.936 (±0.035)
Nonlinear (Concentric spheres)1000.25500.773 (±0.022)0.888 (±0.022)0.825 (±0.022)
Nonlinear (Concentric spheres)2000.25500.823 (±0.037)0.938 (±0.029)0.875 (±0.044)
Nonlinear (Concentric spheres)3000.25500.873 (±0.029)0.988 (±0.033)0.925 (±0.044)
Nonlinear (Concentric spheres)4000.25500.885 (±0.018)1.000 (±0.040)0.938 (±0.029)
Nonlinear (Concentric spheres)5000.25500.885 (±0.029)1.000 (±0.028)0.938 (±0.022)
Nonlinear (Concentric spheres)1000.5500.760 (±0.034)0.875 (±0.025)0.812 (±0.036)
Nonlinear (Concentric spheres)2000.5500.810 (±0.026)0.925 (±0.025)0.862 (±0.045)
Nonlinear (Concentric spheres)3000.5500.860 (±0.026)0.975 (±0.021)0.912 (±0.043)
Nonlinear (Concentric spheres)4000.5500.885 (±0.037)1.000 (±0.035)0.938 (±0.027)
Nonlinear (Concentric spheres)5000.5500.885 (±0.035)1.000 (±0.044)0.938 (±0.041)
Nonlinear (Concentric spheres)1000.75500.748 (±0.018)0.863 (±0.044)0.800 (±0.034)
Nonlinear (Concentric spheres)2000.75500.797 (±0.015)0.912 (±0.023)0.850 (±0.020)
Nonlinear (Concentric spheres)3000.75500.848 (±0.035)0.963 (±0.040)0.900 (±0.043)
Nonlinear (Concentric spheres)4000.75500.885 (±0.024)1.000 (±0.044)0.938 (±0.041)
Nonlinear (Concentric spheres)5000.75500.885 (±0.035)1.000 (±0.036)0.938 (±0.044)
Nonlinear (Concentric spheres)1001.0500.735 (±0.027)0.850 (±0.042)0.787 (±0.030)
Nonlinear (Concentric spheres)2001.0500.785 (±0.026)0.900 (±0.043)0.837 (±0.037)
Nonlinear (Concentric spheres)3001.0500.835 (±0.015)0.950 (±0.024)0.887 (±0.032)
Nonlinear (Concentric spheres)4001.0500.885 (±0.026)1.000 (±0.045)0.938 (±0.042)
Nonlinear (Concentric spheres)5001.0500.885 (±0.024)1.000 (±0.038)0.938 (±0.043)
Nonlinear (Concentric spheres)1001.5500.710 (±0.032)0.825 (±0.036)0.762 (±0.024)
Nonlinear (Concentric spheres)2001.5500.760 (±0.030)0.875 (±0.027)0.812 (±0.031)
Nonlinear (Concentric spheres)3001.5500.810 (±0.021)0.925 (±0.026)0.862 (±0.036)
Nonlinear (Concentric spheres)4001.5500.860 (±0.027)0.975 (±0.024)0.912 (±0.027)
Nonlinear (Concentric spheres)5001.5500.885 (±0.033)1.000 (±0.035)0.938 (±0.023)
Nonlinear (Concentric spheres)1002.0500.685 (±0.017)0.800 (±0.044)0.738 (±0.029)
Nonlinear (Concentric spheres)2002.0500.735 (±0.032)0.850 (±0.041)0.787 (±0.026)
Nonlinear (Concentric spheres)3002.0500.785 (±0.016)0.900 (±0.035)0.838 (±0.041)
Nonlinear (Concentric spheres)4002.0500.835 (±0.024)0.950 (±0.028)0.887 (±0.036)
Nonlinear (Concentric spheres)5002.0500.885 (±0.039)1.000 (±0.035)0.938 (±0.033)
Nonlinear (Concentric spheres)1000.251000.718 (±0.019)0.838 (±0.041)0.778 (±0.025)
Nonlinear (Concentric spheres)2000.251000.768 (±0.032)0.888 (±0.035)0.828 (±0.021)
Nonlinear (Concentric spheres)3000.251000.818 (±0.039)0.938 (±0.035)0.878 (±0.042)
Nonlinear (Concentric spheres)4000.251000.868 (±0.028)0.988 (±0.027)0.927 (±0.035)
Nonlinear (Concentric spheres)5000.251000.880 (±0.035)1.000 (±0.037)0.940 (±0.032)
Nonlinear (Concentric spheres)1000.51000.705 (±0.027)0.825 (±0.035)0.765 (±0.031)
Nonlinear (Concentric spheres)2000.51000.755 (±0.030)0.875 (±0.020)0.815 (±0.045)
Nonlinear (Concentric spheres)3000.51000.805 (±0.025)0.925 (±0.023)0.865 (±0.042)
Nonlinear (Concentric spheres)4000.51000.855 (±0.029)0.975 (±0.042)0.915 (±0.021)
Nonlinear (Concentric spheres)5000.51000.880 (±0.037)1.000 (±0.038)0.940 (±0.039)
Nonlinear (Concentric spheres)1000.751000.693 (±0.037)0.813 (±0.033)0.753 (±0.043)
Nonlinear (Concentric spheres)2000.751000.743 (±0.020)0.863 (±0.021)0.802 (±0.028)
Nonlinear (Concentric spheres)3000.751000.793 (±0.032)0.913 (±0.028)0.853 (±0.044)
Nonlinear (Concentric spheres)4000.751000.843 (±0.034)0.963 (±0.028)0.903 (±0.032)
Nonlinear (Concentric spheres)5000.751000.880 (±0.025)1.000 (±0.039)0.940 (±0.032)
Nonlinear (Concentric spheres)1001.01000.680 (±0.015)0.800 (±0.036)0.740 (±0.026)
Nonlinear (Concentric spheres)2001.01000.730 (±0.034)0.850 (±0.027)0.790 (±0.043)
Nonlinear (Concentric spheres)3001.01000.780 (±0.036)0.900 (±0.032)0.840 (±0.022)
Nonlinear (Concentric spheres)4001.01000.830 (±0.029)0.950 (±0.034)0.890 (±0.045)
Nonlinear (Concentric spheres)5001.01000.880 (±0.023)1.000 (±0.037)0.940 (±0.043)
Nonlinear (Concentric spheres)1001.51000.655 (±0.029)0.775 (±0.037)0.715 (±0.022)
Nonlinear (Concentric spheres)2001.51000.705 (±0.037)0.825 (±0.033)0.765 (±0.026)
Nonlinear (Concentric spheres)3001.51000.755 (±0.025)0.875 (±0.042)0.815 (±0.032)
Nonlinear (Concentric spheres)4001.51000.805 (±0.032)0.925 (±0.027)0.865 (±0.040)
Nonlinear (Concentric spheres)5001.51000.855 (±0.021)0.975 (±0.029)0.915 (±0.026)
Nonlinear (Concentric spheres)1002.01000.630 (±0.038)0.750 (±0.030)0.690 (±0.023)
Nonlinear (Concentric spheres)2002.01000.680 (±0.038)0.800 (±0.029)0.740 (±0.041)
Nonlinear (Concentric spheres)3002.01000.730 (±0.019)0.850 (±0.029)0.790 (±0.026)
Nonlinear (Concentric spheres)4002.01000.780 (±0.016)0.900 (±0.040)0.840 (±0.025)
Nonlinear (Concentric spheres)5002.01000.830 (±0.025)0.950 (±0.028)0.890 (±0.035)
Nonlinear (Concentric spheres)1000.252000.608 (±0.029)0.738 (±0.045)0.682 (±0.026)
Nonlinear (Concentric spheres)2000.252000.657 (±0.033)0.787 (±0.044)0.732 (±0.034)
Nonlinear (Concentric spheres)3000.252000.708 (±0.025)0.838 (±0.041)0.782 (±0.023)
Nonlinear (Concentric spheres)4000.252000.757 (±0.028)0.887 (±0.031)0.832 (±0.027)
Nonlinear (Concentric spheres)5000.252000.807 (±0.019)0.938 (±0.042)0.882 (±0.034)
Nonlinear (Concentric spheres)1000.52000.595 (±0.029)0.725 (±0.035)0.670 (±0.038)
Nonlinear (Concentric spheres)2000.52000.645 (±0.031)0.775 (±0.035)0.720 (±0.021)
Nonlinear (Concentric spheres)3000.52000.695 (±0.029)0.825 (±0.035)0.770 (±0.036)
Nonlinear (Concentric spheres)4000.52000.745 (±0.031)0.875 (±0.022)0.820 (±0.022)
Nonlinear (Concentric spheres)5000.52000.795 (±0.032)0.925 (±0.041)0.870 (±0.024)
Nonlinear (Concentric spheres)1000.752000.583 (±0.035)0.713 (±0.032)0.657 (±0.045)
Nonlinear (Concentric spheres)2000.752000.632 (±0.027)0.762 (±0.034)0.707 (±0.029)
Nonlinear (Concentric spheres)3000.752000.682 (±0.028)0.812 (±0.022)0.757 (±0.033)
Nonlinear (Concentric spheres)4000.752000.733 (±0.039)0.863 (±0.033)0.807 (±0.031)
Nonlinear (Concentric spheres)5000.752000.782 (±0.016)0.912 (±0.021)0.857 (±0.021)
Nonlinear (Concentric spheres)1001.02000.570 (±0.019)0.700 (±0.032)0.645 (±0.040)
Nonlinear (Concentric spheres)2001.02000.620 (±0.022)0.750 (±0.039)0.695 (±0.033)
Nonlinear (Concentric spheres)3001.02000.670 (±0.022)0.800 (±0.020)0.745 (±0.037)
Nonlinear (Concentric spheres)4001.02000.720 (±0.017)0.850 (±0.041)0.795 (±0.023)
Nonlinear (Concentric spheres)5001.02000.770 (±0.031)0.900 (±0.041)0.845 (±0.026)
Nonlinear (Concentric spheres)1001.52000.545 (±0.018)0.675 (±0.039)0.620 (±0.025)
Nonlinear (Concentric spheres)2001.52000.595 (±0.022)0.725 (±0.033)0.670 (±0.032)
Nonlinear (Concentric spheres)3001.52000.645 (±0.024)0.775 (±0.039)0.720 (±0.034)
Nonlinear (Concentric spheres)4001.52000.695 (±0.019)0.825 (±0.045)0.770 (±0.026)
Nonlinear (Concentric spheres)5001.52000.745 (±0.018)0.875 (±0.025)0.820 (±0.030)
Nonlinear (Concentric spheres)1002.02000.520 (±0.024)0.650 (±0.028)0.595 (±0.021)
Nonlinear (Concentric spheres)2002.02000.570 (±0.038)0.700 (±0.034)0.645 (±0.026)
Nonlinear (Concentric spheres)3002.02000.620 (±0.037)0.750 (±0.032)0.695 (±0.034)
Nonlinear (Concentric spheres)4002.02000.670 (±0.035)0.800 (±0.036)0.745 (±0.024)
Nonlinear (Concentric spheres)5002.02000.720 (±0.023)0.850 (±0.044)0.795 (±0.025)
Nonlinear (Concentric spheres)1000.253000.498 (±0.034)0.638 (±0.031)0.588 (±0.024)
Nonlinear (Concentric spheres)2000.253000.548 (±0.020)0.688 (±0.037)0.638 (±0.023)
Nonlinear (Concentric spheres)3000.253000.598 (±0.025)0.738 (±0.031)0.688 (±0.036)
Nonlinear (Concentric spheres)4000.253000.648 (±0.021)0.788 (±0.032)0.738 (±0.024)
Nonlinear (Concentric spheres)5000.253000.698 (±0.036)0.838 (±0.024)0.788 (±0.022)
Nonlinear (Concentric spheres)1000.53000.485 (±0.037)0.625 (±0.028)0.575 (±0.039)
Nonlinear (Concentric spheres)2000.53000.535 (±0.025)0.675 (±0.031)0.625 (±0.024)
Nonlinear (Concentric spheres)3000.53000.585 (±0.019)0.725 (±0.024)0.675 (±0.033)
Nonlinear (Concentric spheres)4000.53000.635 (±0.027)0.775 (±0.035)0.725 (±0.031)
Nonlinear (Concentric spheres)5000.53000.685 (±0.016)0.825 (±0.020)0.775 (±0.026)
Nonlinear (Concentric spheres)1000.753000.473 (±0.036)0.613 (±0.027)0.563 (±0.035)
Nonlinear (Concentric spheres)2000.753000.523 (±0.030)0.663 (±0.045)0.613 (±0.025)
Nonlinear (Concentric spheres)3000.753000.573 (±0.034)0.713 (±0.026)0.663 (±0.028)
Nonlinear (Concentric spheres)4000.753000.623 (±0.015)0.763 (±0.045)0.713 (±0.031)
Nonlinear (Concentric spheres)5000.753000.673 (±0.040)0.813 (±0.025)0.763 (±0.023)
Nonlinear (Concentric spheres)1001.03000.460 (±0.021)0.600 (±0.037)0.550 (±0.034)
Nonlinear (Concentric spheres)2001.03000.510 (±0.025)0.650 (±0.038)0.600 (±0.034)
Nonlinear (Concentric spheres)3001.03000.560 (±0.029)0.700 (±0.035)0.650 (±0.026)
Nonlinear (Concentric spheres)4001.03000.610 (±0.018)0.750 (±0.038)0.700 (±0.035)
Nonlinear (Concentric spheres)5001.03000.660 (±0.039)0.800 (±0.044)0.750 (±0.039)
Nonlinear (Concentric spheres)1001.53000.435 (±0.026)0.575 (±0.024)0.525 (±0.031)
Nonlinear (Concentric spheres)2001.53000.485 (±0.034)0.625 (±0.043)0.575 (±0.043)
Nonlinear (Concentric spheres)3001.53000.535 (±0.035)0.675 (±0.025)0.625 (±0.021)
Nonlinear (Concentric spheres)4001.53000.585 (±0.027)0.725 (±0.035)0.675 (±0.022)
Nonlinear (Concentric spheres)5001.53000.635 (±0.039)0.775 (±0.024)0.725 (±0.043)
Nonlinear (Concentric spheres)1002.03000.410 (±0.018)0.550 (±0.037)0.500 (±0.040)
Nonlinear (Concentric spheres)2002.03000.460 (±0.034)0.600 (±0.037)0.550 (±0.023)
Nonlinear (Concentric spheres)3002.03000.510 (±0.036)0.650 (±0.035)0.600 (±0.041)
Nonlinear (Concentric spheres)4002.03000.560 (±0.019)0.700 (±0.020)0.650 (±0.028)
Nonlinear (Concentric spheres)5002.03000.610 (±0.022)0.750 (±0.041)0.700 (±0.043)
Nonlinear (Concentric spheres)1000.254000.387 (±0.018)0.537 (±0.043)0.492 (±0.035)
Nonlinear (Concentric spheres)2000.254000.438 (±0.025)0.588 (±0.037)0.542 (±0.039)
Nonlinear (Concentric spheres)3000.254000.487 (±0.028)0.637 (±0.030)0.592 (±0.044)
Nonlinear (Concentric spheres)4000.254000.537 (±0.016)0.688 (±0.024)0.642 (±0.037)
Nonlinear (Concentric spheres)5000.254000.588 (±0.033)0.738 (±0.023)0.693 (±0.032)
Nonlinear (Concentric spheres)1000.54000.375 (±0.031)0.525 (±0.030)0.480 (±0.045)
Nonlinear (Concentric spheres)2000.54000.425 (±0.032)0.575 (±0.028)0.530 (±0.024)
Nonlinear (Concentric spheres)3000.54000.475 (±0.023)0.625 (±0.022)0.580 (±0.021)
Nonlinear (Concentric spheres)4000.54000.525 (±0.023)0.675 (±0.039)0.630 (±0.044)
Nonlinear (Concentric spheres)5000.54000.575 (±0.025)0.725 (±0.033)0.680 (±0.022)
Nonlinear (Concentric spheres)1000.754000.363 (±0.031)0.513 (±0.029)0.468 (±0.031)
Nonlinear (Concentric spheres)2000.754000.412 (±0.039)0.562 (±0.037)0.517 (±0.038)
Nonlinear (Concentric spheres)3000.754000.462 (±0.019)0.613 (±0.025)0.568 (±0.042)
Nonlinear (Concentric spheres)4000.754000.513 (±0.036)0.663 (±0.038)0.618 (±0.027)
Nonlinear (Concentric spheres)5000.754000.562 (±0.020)0.713 (±0.032)0.667 (±0.033)
Nonlinear (Concentric spheres)1001.04000.350 (±0.020)0.500 (±0.039)0.455 (±0.021)
Nonlinear (Concentric spheres)2001.04000.400 (±0.033)0.550 (±0.041)0.505 (±0.043)
Nonlinear (Concentric spheres)3001.04000.450 (±0.032)0.600 (±0.027)0.555 (±0.025)
Nonlinear (Concentric spheres)4001.04000.500 (±0.037)0.650 (±0.021)0.605 (±0.041)
Nonlinear (Concentric spheres)5001.04000.550 (±0.028)0.700 (±0.031)0.655 (±0.043)
Nonlinear (Concentric spheres)1001.54000.325 (±0.019)0.475 (±0.029)0.430 (±0.023)
Nonlinear (Concentric spheres)2001.54000.375 (±0.019)0.525 (±0.037)0.480 (±0.034)
Nonlinear (Concentric spheres)3001.54000.425 (±0.027)0.575 (±0.040)0.530 (±0.044)
Nonlinear (Concentric spheres)4001.54000.475 (±0.028)0.625 (±0.038)0.580 (±0.025)
Nonlinear (Concentric spheres)5001.54000.525 (±0.024)0.675 (±0.021)0.630 (±0.024)
Nonlinear (Concentric spheres)1002.04000.300 (±0.017)0.450 (±0.027)0.405 (±0.030)
Nonlinear (Concentric spheres)2002.04000.350 (±0.031)0.500 (±0.038)0.455 (±0.028)
Nonlinear (Concentric spheres)3002.04000.400 (±0.039)0.550 (±0.037)0.505 (±0.024)
Nonlinear (Concentric spheres)4002.04000.450 (±0.024)0.600 (±0.028)0.555 (±0.031)
Nonlinear (Concentric spheres)5002.04000.500 (±0.036)0.650 (±0.026)0.605 (±0.030)
Nonlinear (Swiss roll)1000.25200.818 (±0.025)0.918 (±0.024)0.868 (±0.028)
Nonlinear (Swiss roll)2000.25200.868 (±0.025)0.968 (±0.039)0.917 (±0.027)
Nonlinear (Swiss roll)3000.25200.900 (±0.018)1.000 (±0.021)0.950 (±0.040)
Nonlinear (Swiss roll)4000.25200.900 (±0.027)1.000 (±0.040)0.950 (±0.022)
Nonlinear (Swiss roll)5000.25200.900 (±0.018)1.000 (±0.023)0.950 (±0.038)
Nonlinear (Swiss roll)1000.5200.805 (±0.034)0.905 (±0.045)0.855 (±0.036)
Nonlinear (Swiss roll)2000.5200.855 (±0.022)0.955 (±0.025)0.905 (±0.040)
Nonlinear (Swiss roll)3000.5200.900 (±0.033)1.000 (±0.025)0.950 (±0.039)
Nonlinear (Swiss roll)4000.5200.900 (±0.037)1.000 (±0.033)0.950 (±0.035)
Nonlinear (Swiss roll)5000.5200.900 (±0.038)1.000 (±0.044)0.950 (±0.037)
Nonlinear (Swiss roll)1000.75200.793 (±0.017)0.893 (±0.041)0.843 (±0.032)
Nonlinear (Swiss roll)2000.75200.843 (±0.025)0.943 (±0.027)0.892 (±0.026)
Nonlinear (Swiss roll)3000.75200.893 (±0.026)0.993 (±0.043)0.943 (±0.027)
Nonlinear (Swiss roll)4000.75200.900 (±0.029)1.000 (±0.025)0.950 (±0.025)
Nonlinear (Swiss roll)5000.75200.900 (±0.033)1.000 (±0.028)0.950 (±0.023)
Nonlinear (Swiss roll)1001.0200.780 (±0.016)0.880 (±0.043)0.830 (±0.034)
Nonlinear (Swiss roll)2001.0200.830 (±0.018)0.930 (±0.034)0.880 (±0.034)
Nonlinear (Swiss roll)3001.0200.880 (±0.035)0.980 (±0.038)0.930 (±0.034)
Nonlinear (Swiss roll)4001.0200.900 (±0.033)1.000 (±0.039)0.950 (±0.030)
Nonlinear (Swiss roll)5001.0200.900 (±0.022)1.000 (±0.033)0.950 (±0.026)
Nonlinear (Swiss roll)1001.5200.755 (±0.036)0.855 (±0.030)0.805 (±0.022)
Nonlinear (Swiss roll)2001.5200.805 (±0.039)0.905 (±0.025)0.855 (±0.030)
Nonlinear (Swiss roll)3001.5200.855 (±0.031)0.955 (±0.036)0.905 (±0.028)
Nonlinear (Swiss roll)4001.5200.900 (±0.018)1.000 (±0.043)0.950 (±0.035)
Nonlinear (Swiss roll)5001.5200.900 (±0.034)1.000 (±0.043)0.950 (±0.033)
Nonlinear (Swiss roll)1002.0200.730 (±0.024)0.830 (±0.022)0.780 (±0.041)
Nonlinear (Swiss roll)2002.0200.780 (±0.028)0.880 (±0.025)0.830 (±0.029)
Nonlinear (Swiss roll)3002.0200.830 (±0.020)0.930 (±0.031)0.880 (±0.026)
Nonlinear (Swiss roll)4002.0200.880 (±0.018)0.980 (±0.022)0.930 (±0.037)
Nonlinear (Swiss roll)5002.0200.900 (±0.022)1.000 (±0.024)0.950 (±0.044)
Nonlinear (Swiss roll)1000.25500.788 (±0.025)0.888 (±0.030)0.838 (±0.036)
Nonlinear (Swiss roll)2000.25500.838 (±0.038)0.938 (±0.025)0.887 (±0.034)
Nonlinear (Swiss roll)3000.25500.888 (±0.033)0.988 (±0.030)0.938 (±0.022)
Nonlinear (Swiss roll)4000.25500.900 (±0.031)1.000 (±0.020)0.950 (±0.025)
Nonlinear (Swiss roll)5000.25500.900 (±0.032)1.000 (±0.042)0.950 (±0.040)
Nonlinear (Swiss roll)1000.5500.775 (±0.018)0.875 (±0.030)0.825 (±0.040)
Nonlinear (Swiss roll)2000.5500.825 (±0.032)0.925 (±0.035)0.875 (±0.023)
Nonlinear (Swiss roll)3000.5500.875 (±0.029)0.975 (±0.031)0.925 (±0.039)
Nonlinear (Swiss roll)4000.5500.900 (±0.016)1.000 (±0.025)0.950 (±0.042)
Nonlinear (Swiss roll)5000.5500.900 (±0.018)1.000 (±0.044)0.950 (±0.030)
Nonlinear (Swiss roll)1000.75500.763 (±0.036)0.863 (±0.042)0.812 (±0.032)
Nonlinear (Swiss roll)2000.75500.812 (±0.033)0.912 (±0.032)0.862 (±0.045)
Nonlinear (Swiss roll)3000.75500.863 (±0.017)0.963 (±0.024)0.912 (±0.040)
Nonlinear (Swiss roll)4000.75500.900 (±0.029)1.000 (±0.030)0.950 (±0.045)
Nonlinear (Swiss roll)5000.75500.900 (±0.021)1.000 (±0.033)0.950 (±0.022)
Nonlinear (Swiss roll)1001.0500.750 (±0.035)0.850 (±0.030)0.800 (±0.040)
Nonlinear (Swiss roll)2001.0500.800 (±0.020)0.900 (±0.044)0.850 (±0.029)
Nonlinear (Swiss roll)3001.0500.850 (±0.028)0.950 (±0.023)0.900 (±0.042)
Nonlinear (Swiss roll)4001.0500.900 (±0.039)1.000 (±0.021)0.950 (±0.032)
Nonlinear (Swiss roll)5001.0500.900 (±0.021)1.000 (±0.041)0.950 (±0.033)
Nonlinear (Swiss roll)1001.5500.725 (±0.015)0.825 (±0.027)0.775 (±0.028)
Nonlinear (Swiss roll)2001.5500.775 (±0.030)0.875 (±0.026)0.825 (±0.021)
Nonlinear (Swiss roll)3001.5500.825 (±0.018)0.925 (±0.038)0.875 (±0.043)
Nonlinear (Swiss roll)4001.5500.875 (±0.023)0.975 (±0.022)0.925 (±0.028)
Nonlinear (Swiss roll)5001.5500.900 (±0.031)1.000 (±0.031)0.950 (±0.040)
Nonlinear (Swiss roll)1002.0500.700 (±0.022)0.800 (±0.034)0.750 (±0.021)
Nonlinear (Swiss roll)2002.0500.750 (±0.020)0.850 (±0.038)0.800 (±0.041)
Nonlinear (Swiss roll)3002.0500.800 (±0.028)0.900 (±0.034)0.850 (±0.020)
Nonlinear (Swiss roll)4002.0500.850 (±0.019)0.950 (±0.042)0.900 (±0.022)
Nonlinear (Swiss roll)5002.0500.900 (±0.021)1.000 (±0.045)0.950 (±0.034)
Nonlinear (Swiss roll)1000.251000.738 (±0.021)0.838 (±0.035)0.788 (±0.021)
Nonlinear (Swiss roll)2000.251000.788 (±0.019)0.888 (±0.038)0.838 (±0.022)
Nonlinear (Swiss roll)3000.251000.838 (±0.031)0.938 (±0.043)0.888 (±0.022)
Nonlinear (Swiss roll)4000.251000.888 (±0.039)0.988 (±0.021)0.938 (±0.038)
Nonlinear (Swiss roll)5000.251000.900 (±0.026)1.000 (±0.034)0.950 (±0.042)
Nonlinear (Swiss roll)1000.51000.725 (±0.029)0.825 (±0.038)0.775 (±0.036)
Nonlinear (Swiss roll)2000.51000.775 (±0.023)0.875 (±0.030)0.825 (±0.025)
Nonlinear (Swiss roll)3000.51000.825 (±0.036)0.925 (±0.035)0.875 (±0.039)
Nonlinear (Swiss roll)4000.51000.875 (±0.029)0.975 (±0.027)0.925 (±0.038)
Nonlinear (Swiss roll)5000.51000.900 (±0.029)1.000 (±0.029)0.950 (±0.033)
Nonlinear (Swiss roll)1000.751000.713 (±0.035)0.813 (±0.033)0.763 (±0.025)
Nonlinear (Swiss roll)2000.751000.763 (±0.036)0.863 (±0.029)0.812 (±0.028)
Nonlinear (Swiss roll)3000.751000.813 (±0.034)0.913 (±0.038)0.863 (±0.041)
Nonlinear (Swiss roll)4000.751000.863 (±0.029)0.963 (±0.021)0.913 (±0.032)
Nonlinear (Swiss roll)5000.751000.900 (±0.029)1.000 (±0.041)0.950 (±0.038)
Nonlinear (Swiss roll)1001.01000.700 (±0.029)0.800 (±0.022)0.750 (±0.023)
Nonlinear (Swiss roll)2001.01000.750 (±0.032)0.850 (±0.022)0.800 (±0.025)
Nonlinear (Swiss roll)3001.01000.800 (±0.024)0.900 (±0.041)0.850 (±0.026)
Nonlinear (Swiss roll)4001.01000.850 (±0.033)0.950 (±0.023)0.900 (±0.039)
Nonlinear (Swiss roll)5001.01000.900 (±0.021)1.000 (±0.034)0.950 (±0.021)
Nonlinear (Swiss roll)1001.51000.675 (±0.035)0.775 (±0.028)0.725 (±0.043)
Nonlinear (Swiss roll)2001.51000.725 (±0.024)0.825 (±0.044)0.775 (±0.030)
Nonlinear (Swiss roll)3001.51000.775 (±0.027)0.875 (±0.032)0.825 (±0.020)
Nonlinear (Swiss roll)4001.51000.825 (±0.031)0.925 (±0.033)0.875 (±0.028)
Nonlinear (Swiss roll)5001.51000.875 (±0.030)0.975 (±0.035)0.925 (±0.031)
Nonlinear (Swiss roll)1002.01000.650 (±0.019)0.750 (±0.038)0.700 (±0.039)
Nonlinear (Swiss roll)2002.01000.700 (±0.036)0.800 (±0.033)0.750 (±0.028)
Nonlinear (Swiss roll)3002.01000.750 (±0.026)0.850 (±0.030)0.800 (±0.026)
Nonlinear (Swiss roll)4002.01000.800 (±0.031)0.900 (±0.044)0.850 (±0.027)
Nonlinear (Swiss roll)5002.01000.850 (±0.031)0.950 (±0.033)0.900 (±0.035)
Nonlinear (Swiss roll)1000.252000.638 (±0.016)0.738 (±0.020)0.688 (±0.023)
Nonlinear (Swiss roll)2000.252000.688 (±0.016)0.787 (±0.022)0.737 (±0.032)
Nonlinear (Swiss roll)3000.252000.738 (±0.020)0.838 (±0.040)0.787 (±0.026)
Nonlinear (Swiss roll)4000.252000.787 (±0.027)0.887 (±0.036)0.837 (±0.024)
Nonlinear (Swiss roll)5000.252000.838 (±0.036)0.938 (±0.031)0.887 (±0.026)
Nonlinear (Swiss roll)1000.52000.625 (±0.022)0.725 (±0.024)0.675 (±0.035)
Nonlinear (Swiss roll)2000.52000.675 (±0.019)0.775 (±0.024)0.725 (±0.028)
Nonlinear (Swiss roll)3000.52000.725 (±0.020)0.825 (±0.030)0.775 (±0.035)
Nonlinear (Swiss roll)4000.52000.775 (±0.032)0.875 (±0.041)0.825 (±0.029)
Nonlinear (Swiss roll)5000.52000.825 (±0.039)0.925 (±0.020)0.875 (±0.035)
Nonlinear (Swiss roll)1000.752000.613 (±0.027)0.713 (±0.029)0.662 (±0.042)
Nonlinear (Swiss roll)2000.752000.662 (±0.017)0.762 (±0.042)0.712 (±0.025)
Nonlinear (Swiss roll)3000.752000.713 (±0.020)0.812 (±0.042)0.762 (±0.022)
Nonlinear (Swiss roll)4000.752000.763 (±0.021)0.863 (±0.045)0.812 (±0.037)
Nonlinear (Swiss roll)5000.752000.812 (±0.026)0.912 (±0.029)0.862 (±0.032)
Nonlinear (Swiss roll)1001.02000.600 (±0.035)0.700 (±0.037)0.650 (±0.032)
Nonlinear (Swiss roll)2001.02000.650 (±0.016)0.750 (±0.026)0.700 (±0.026)
Nonlinear (Swiss roll)3001.02000.700 (±0.021)0.800 (±0.022)0.750 (±0.032)
Nonlinear (Swiss roll)4001.02000.750 (±0.021)0.850 (±0.039)0.800 (±0.034)
Nonlinear (Swiss roll)5001.02000.800 (±0.039)0.900 (±0.022)0.850 (±0.044)
Nonlinear (Swiss roll)1001.52000.575 (±0.033)0.675 (±0.043)0.625 (±0.035)
Nonlinear (Swiss roll)2001.52000.625 (±0.019)0.725 (±0.025)0.675 (±0.038)
Nonlinear (Swiss roll)3001.52000.675 (±0.030)0.775 (±0.028)0.725 (±0.039)
Nonlinear (Swiss roll)4001.52000.725 (±0.023)0.825 (±0.041)0.775 (±0.035)
Nonlinear (Swiss roll)5001.52000.775 (±0.025)0.875 (±0.043)0.825 (±0.032)
Nonlinear (Swiss roll)1002.02000.550 (±0.017)0.650 (±0.032)0.600 (±0.045)
Nonlinear (Swiss roll)2002.02000.600 (±0.038)0.700 (±0.031)0.650 (±0.042)
Nonlinear (Swiss roll)3002.02000.650 (±0.023)0.750 (±0.040)0.700 (±0.040)
Nonlinear (Swiss roll)4002.02000.700 (±0.025)0.800 (±0.030)0.750 (±0.036)
Nonlinear (Swiss roll)5002.02000.750 (±0.037)0.850 (±0.036)0.800 (±0.033)
Nonlinear (Swiss roll)1000.253000.538 (±0.019)0.638 (±0.035)0.588 (±0.021)
Nonlinear (Swiss roll)2000.253000.588 (±0.027)0.688 (±0.038)0.638 (±0.039)
Nonlinear (Swiss roll)3000.253000.638 (±0.017)0.738 (±0.031)0.688 (±0.033)
Nonlinear (Swiss roll)4000.253000.688 (±0.022)0.788 (±0.035)0.738 (±0.038)
Nonlinear (Swiss roll)5000.253000.738 (±0.033)0.838 (±0.023)0.788 (±0.034)
Nonlinear (Swiss roll)1000.53000.525 (±0.035)0.625 (±0.026)0.575 (±0.038)
Nonlinear (Swiss roll)2000.53000.575 (±0.036)0.675 (±0.023)0.625 (±0.044)
Nonlinear (Swiss roll)3000.53000.625 (±0.034)0.725 (±0.040)0.675 (±0.025)
Nonlinear (Swiss roll)4000.53000.675 (±0.016)0.775 (±0.042)0.725 (±0.043)
Nonlinear (Swiss roll)5000.53000.725 (±0.018)0.825 (±0.032)0.775 (±0.032)
Nonlinear (Swiss roll)1000.753000.513 (±0.033)0.613 (±0.022)0.563 (±0.038)
Nonlinear (Swiss roll)2000.753000.563 (±0.025)0.663 (±0.032)0.613 (±0.038)
Nonlinear (Swiss roll)3000.753000.613 (±0.027)0.713 (±0.045)0.663 (±0.041)
Nonlinear (Swiss roll)4000.753000.663 (±0.036)0.763 (±0.039)0.713 (±0.023)
Nonlinear (Swiss roll)5000.753000.713 (±0.036)0.813 (±0.040)0.763 (±0.045)
Nonlinear (Swiss roll)1001.03000.500 (±0.016)0.600 (±0.039)0.550 (±0.030)
Nonlinear (Swiss roll)2001.03000.550 (±0.030)0.650 (±0.041)0.600 (±0.020)
Nonlinear (Swiss roll)3001.03000.600 (±0.015)0.700 (±0.045)0.650 (±0.039)
Nonlinear (Swiss roll)4001.03000.650 (±0.023)0.750 (±0.040)0.700 (±0.045)
Nonlinear (Swiss roll)5001.03000.700 (±0.020)0.800 (±0.036)0.750 (±0.031)
Nonlinear (Swiss roll)1001.53000.475 (±0.038)0.575 (±0.045)0.525 (±0.031)
Nonlinear (Swiss roll)2001.53000.525 (±0.021)0.625 (±0.037)0.575 (±0.034)
Nonlinear (Swiss roll)3001.53000.575 (±0.030)0.675 (±0.031)0.625 (±0.031)
Nonlinear (Swiss roll)4001.53000.625 (±0.030)0.725 (±0.031)0.675 (±0.044)
Nonlinear (Swiss roll)5001.53000.675 (±0.036)0.775 (±0.026)0.725 (±0.031)
Nonlinear (Swiss roll)1002.03000.450 (±0.025)0.550 (±0.021)0.500 (±0.030)
Nonlinear (Swiss roll)2002.03000.500 (±0.026)0.600 (±0.025)0.550 (±0.038)
Nonlinear (Swiss roll)3002.03000.550 (±0.015)0.650 (±0.038)0.600 (±0.031)
Nonlinear (Swiss roll)4002.03000.600 (±0.033)0.700 (±0.020)0.650 (±0.021)
Nonlinear (Swiss roll)5002.03000.650 (±0.025)0.750 (±0.036)0.700 (±0.025)
Nonlinear (Swiss roll)1000.254000.438 (±0.036)0.537 (±0.037)0.488 (±0.042)
Nonlinear (Swiss roll)2000.254000.488 (±0.025)0.588 (±0.030)0.537 (±0.026)
Nonlinear (Swiss roll)3000.254000.537 (±0.024)0.637 (±0.026)0.587 (±0.043)
Nonlinear (Swiss roll)4000.254000.588 (±0.027)0.688 (±0.021)0.637 (±0.039)
Nonlinear (Swiss roll)5000.254000.638 (±0.023)0.738 (±0.034)0.688 (±0.037)
Nonlinear (Swiss roll)1000.54000.425 (±0.022)0.525 (±0.021)0.475 (±0.031)
Nonlinear (Swiss roll)2000.54000.475 (±0.030)0.575 (±0.044)0.525 (±0.044)
Nonlinear (Swiss roll)3000.54000.525 (±0.017)0.625 (±0.032)0.575 (±0.030)
Nonlinear (Swiss roll)4000.54000.575 (±0.025)0.675 (±0.028)0.625 (±0.037)
Nonlinear (Swiss roll)5000.54000.625 (±0.022)0.725 (±0.030)0.675 (±0.045)
Nonlinear (Swiss roll)1000.754000.413 (±0.035)0.513 (±0.028)0.463 (±0.040)
Nonlinear (Swiss roll)2000.754000.462 (±0.021)0.562 (±0.033)0.512 (±0.024)
Nonlinear (Swiss roll)3000.754000.513 (±0.018)0.613 (±0.042)0.562 (±0.022)
Nonlinear (Swiss roll)4000.754000.563 (±0.037)0.663 (±0.027)0.613 (±0.021)
Nonlinear (Swiss roll)5000.754000.613 (±0.035)0.713 (±0.029)0.662 (±0.021)
Nonlinear (Swiss roll)1001.04000.400 (±0.034)0.500 (±0.021)0.450 (±0.031)
Nonlinear (Swiss roll)2001.04000.450 (±0.021)0.550 (±0.032)0.500 (±0.036)
Nonlinear (Swiss roll)3001.04000.500 (±0.017)0.600 (±0.025)0.550 (±0.023)
Nonlinear (Swiss roll)4001.04000.550 (±0.035)0.650 (±0.036)0.600 (±0.036)
Nonlinear (Swiss roll)5001.04000.600 (±0.025)0.700 (±0.031)0.650 (±0.029)
Nonlinear (Swiss roll)1001.54000.375 (±0.035)0.475 (±0.042)0.425 (±0.036)
Nonlinear (Swiss roll)2001.54000.425 (±0.030)0.525 (±0.028)0.475 (±0.022)
Nonlinear (Swiss roll)3001.54000.475 (±0.030)0.575 (±0.024)0.525 (±0.039)
Nonlinear (Swiss roll)4001.54000.525 (±0.022)0.625 (±0.036)0.575 (±0.023)
Nonlinear (Swiss roll)5001.54000.575 (±0.021)0.675 (±0.041)0.625 (±0.040)
Nonlinear (Swiss roll)1002.04000.350 (±0.016)0.450 (±0.042)0.400 (±0.023)
Nonlinear (Swiss roll)2002.04000.400 (±0.030)0.500 (±0.032)0.450 (±0.034)
Nonlinear (Swiss roll)3002.04000.450 (±0.033)0.550 (±0.032)0.500 (±0.045)
Nonlinear (Swiss roll)4002.04000.500 (±0.040)0.600 (±0.030)0.550 (±0.035)
Nonlinear (Swiss roll)5002.04000.550 (±0.024)0.650 (±0.031)0.600 (±0.026)

Appendix B. Figures of Simulation Results for Linear and Nonlinear Data Structures: Line Plots, Bar Plots, Density Plots, and Violin Plots

Figure A1. Average silhouette scores for PCA, Isomap, and t-SNE on linearly structured data, across varying sample sizes and noise levels. PCA consistently outperforms others.
Figure A2. Average silhouette scores for PCA, Isomap, and t-SNE on nonlinearly structured data. Isomap achieves the highest scores across conditions.
Figure A3. Bar plots of average silhouette scores for PCA, Isomap, and t-SNE on linearly structured data, across varying sample sizes and noise levels.
Figure A4. Bar plots of average silhouette scores for PCA, Isomap, and t-SNE on nonlinearly structured data, across varying sample sizes and noise levels.
Figure A5. Linear structure: Scaled densities of silhouette scores (zoomed to the high-quality region) across methods and conditions. Each small panel is a unique combination of noise level (columns), dimensionality p (sub-caption), and sample size n (rows). PCA is consistently right-shifted with narrower spread; Isomap is typically second; t-SNE shows slightly lower centers and heavier left tails, especially at higher noise and p.
Figure A6. Nonlinear structure: Scaled densities of silhouette scores (zoomed). Isomap peaks furthest right and remains dominant across n and p; t-SNE is a close second, while PCA is left-shifted, reflecting loss of manifold separations under linear projection.
Figure A7. Linear structure: Violin plots of silhouette distributions by method for each (n, σ², p) cell (replicates R = 1000; widths standardized per panel). The medians and central mass confirm PCA’s dominance and stability, with performance gaps shrinking as noise and p increase.
Figure A8. Nonlinear structure: Violin plots of silhouette distributions by method for Swiss-roll–type data. Isomap exhibits the highest centers and tightest mass across most cells; t-SNE is second; PCA lags due to manifold flattening. Larger n reduces dispersion for all methods.
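The density and violin summaries in Figures A5–A8 are distributional views of the R = 1000 replicate silhouette scores within each simulation cell. The following is a minimal matplotlib sketch of the violin-plot view only; it is not the authors’ plotting code, and the replicate values are placeholders rather than actual simulation output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder replicate silhouette scores for one simulation cell (illustrative values only).
rng = np.random.default_rng(0)
scores = {
    "PCA": rng.normal(0.60, 0.03, 1000),
    "Isomap": rng.normal(0.75, 0.03, 1000),
    "t-SNE": rng.normal(0.70, 0.03, 1000),
}

fig, ax = plt.subplots()
ax.violinplot(list(scores.values()), showmedians=True)   # one violin per method
ax.set_xticks(range(1, len(scores) + 1))
ax.set_xticklabels(list(scores.keys()))
ax.set_ylabel("Silhouette score")
ax.set_title("Replicate silhouette distributions for one (n, σ², p) cell")
plt.show()
```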

References

  1. Bellman, R. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 1961. [Google Scholar]
  2. Fodor, I.K. A Survey of Dimension Reduction Techniques; Technical Report UCRL-ID-148494; Lawrence Livermore National Laboratory: Livermore, CA, USA, 2002. [Google Scholar]
  3. van der Maaten, L.; Postma, E.; van den Herik, J. Dimensionality Reduction: A Comparative Review; Technical Report; Tilburg University: Tilburg, The Netherlands, 2009. [Google Scholar]
  4. Cunningham, J.P.; Ghahramani, Z. Linear dimensionality reduction: Survey, insights, and generalizations. J. Mach. Learn. Res. 2015, 16, 2859–2900. Available online: https://www.jmlr.org/papers/volume16/cunningham15a/cunningham15a.pdf (accessed on 13 September 2025).
  5. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv 2018, arXiv:1802.03426. [Google Scholar] [CrossRef]
  6. Wang, W.; Yang, Z. A survey of dimensionality reduction techniques. IEEE Access 2017, 6, 3539–3553. [Google Scholar]
  7. Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A 2016, 374, 20150202. [Google Scholar] [CrossRef]
  8. Tenenbaum, J.B.; De Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef]
  9. Vlaić, M.; Mikulić, I.; Delač, G.; Šilić, M. A Review of Time Series Dimensionality Reduction Methods. In Proceedings of the IEEE 48th MIPRO International Convention on Information and Communication Technology, Opatija, Croatia, 2–6 June 2025; pp. 1491–1496. Available online: https://ieeexplore.ieee.org/document/11131714 (accessed on 13 September 2025).
  10. Abdullah, A.T.; Hussein, H.M.; Sabeeh, R.S. A Comprehensive Review of Machine Learning Algorithms for Fault Diagnosis and Prediction in Rotating Machinery. J. Univ. Babylon Eng. Sci. 2025, 33, 110–127. Available online: https://www.journalofbabylon.com/index.php/JUBES/article/view/5891 (accessed on 13 September 2025). [CrossRef]
  11. Jolliffe, I.T. Principal Component Analysis, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  12. Abdi, H.; Williams, L.J. Principal component analysis. WIREs Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  13. Ringnér, M. What is principal component analysis? Nat. Biotechnol. 2008, 26, 303–304. [Google Scholar] [CrossRef]
  14. Pearson, K. On lines and planes of closest fit to systems of points in space. Philos. Mag. 1901, 2, 559–572. [Google Scholar] [CrossRef]
  15. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441; 498–520. [Google Scholar] [CrossRef]
  16. de Silva, V.; Tenenbaum, J.B. Global versus local methods in nonlinear dimensionality reduction. NeurIPS 2003, 15, 721–728. [Google Scholar]
  17. Bernstein, M.; de Silva, V.; Langford, J.C.; Tenenbaum, J.B. Graph approximations to geodesics on embedded manifolds. In Proceedings of the Eleventh Annual ACM–SIAM Symposium on Discrete Algorithms (SODA 2000), San Francisco, CA, USA, 9–11 January 2000; Association for Computing Machinery: New York, NY, USA; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000; pp. 200–209. [Google Scholar]
  18. Balasubramanian, M.; Schwartz, E.L. The Isomap algorithm and topological stability. Science 2002, 295, 7. [Google Scholar] [CrossRef] [PubMed]
  19. Saul, L.K.; Roweis, S.T. Think globally, fit locally: Unsupervised learning of low-dimensional manifolds. J. Mach. Learn. Res. 2003, 4, 119–155. Available online: https://www.jmlr.org/papers/volume4/saul03a/saul03a.pdf (accessed on 13 September 2025).
  20. Venna, J.; Peltonen, J.; Nybo, K.; Aidos, H.; Kaski, S. Comparative study of unsupervised dimension reduction techniques for the visualization of microarray gene expression data. BMC Bioinform. 2010, 11, 559. Available online: https://pmc.ncbi.nlm.nih.gov/articles/PMC2998530/ (accessed on 13 September 2025). [CrossRef] [PubMed]
  21. Cayton, L. Algorithms for Manifold Learning; Technical Report CS2008-0923; UC San Diego: La Jolla, CA, USA, 2005. [Google Scholar]
  22. van der Maaten, L. Learning a parametric embedding by preserving local structure. In Proceedings of the AISTATS, Clearwater Beach, FL, USA, 16–18 April 2009. [Google Scholar]
  23. Kobak, D.; Berens, P. The art of using t-SNE for single-cell transcriptomics. Nat. Commun. 2019, 10, 5416. [Google Scholar] [CrossRef]
  24. Becht, E.; McInnes, L.; Healy, J.; Dutertre, C.-A.; Kwok, I.W.H.; Ng, L.G.; Ginhoux, F.; Newell, E.W. Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol. 2019, 37, 38–44. [Google Scholar] [CrossRef]
  25. Wattenberg, M.; Viégas, F.; Johnson, I. How to Use t-SNE Effectively. Distill 2016, 1. Available online: https://distill.pub/2016/misread-tsne/ (accessed on 13 September 2025).
  26. Chan, D.M.; Rao, R.; Huang, F.; Gorman, J.J. A closer look at t-SNE for data visualization. arXiv 2019, arXiv:1808.01359. [Google Scholar]
  27. Anowar, F.; Podder, M.; Bie, R. Conceptual and empirical comparison of dimensionality reduction algorithms (PCA, KPCA, LDA, MDS, SVD, LLE, ISOMAP, LE, ICA, t-SNE). Comput. Sci. Rev. 2021, 40, 100378. [Google Scholar] [CrossRef]
  28. Sun, S.; Zhu, J.; Ma, Y.; Zhou, X. Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization. Commun. Biol. 2022, 5, 3628. Available online: https://www.nature.com/articles/s42003-022-03628-x (accessed on 13 September 2025). [CrossRef]
  29. Kierta, J. Unsupervised Dimension Reduction Techniques for Lung Diagnosis Using Radiomics. Master’s Thesis, East Tennessee State University, Johnson City, TN, USA, 2023. Available online: https://dc.etsu.edu/etd/4198/ (accessed on 13 September 2025).
  30. Parashar, A. Dimensionality Reduction for Data Visualization: PCA vs t-SNE vs UMAP vs LDA. Medium/Towards Data Science, 2022. Available online: https://medium.com/data-science/dimensionality-reduction-for-data-visualization-pca-vs-tsne-vs-umap-be4aa7b1cb29 (accessed on 13 September 2025).
  31. Tandon, R. Dimension Reduction using Isomap. Medium, 2023. Available online: https://medium.com/data-science-in-your-pocket/dimension-reduction-using-isomap-72ead0411dec (accessed on 13 September 2025).
  32. Marzi, C.; Marfisi, D.; Barucci, A.; Del Meglio, J.; Lilli, A.; Vignali, C.; Mascalchi, M.; Casolo, G.; Diciotti, S.; Traino, A.C.; et al. Collinearity and dimensionality reduction in radiomics: Effect of preprocessing parameters in hypertrophic cardiomyopathy magnetic resonance. Bioengineering 2023, 10, 80. [Google Scholar] [CrossRef]
  33. van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. Available online: https://jmlr.csail.mit.edu/papers/v9/vandermaaten08a.html (accessed on 13 September 2025).
  34. Arora, S.; Hu, W.; Kothari, P.K. An analysis of the t-SNE algorithm for data visualization. In Proceedings of the 31st Conference On Learning Theory, Stockholm, Sweden, 6–9 July 2018; Volume 75, pp. 1455–1462. [Google Scholar] [CrossRef]
  35. Linderman, G.C.; Steinerberger, S. Clustering with t-SNE, provably. SIAM J. Math. Data Sci. 2019, 1, 313–332. [Google Scholar] [CrossRef]
  36. Cai, D.; Ma, X. Theoretical foundations of t-SNE for visualizing high-dimensional clustered data. J. Mach. Learn. Res. 2022, 23, 1–54. [Google Scholar] [CrossRef]
  37. Böhm, J.N.; Berens, P.; Kobak, D. A unifying perspective on neighbor embeddings for dimensionality reduction. J. Mach. Learn. Res. 2023, 24, 1–59. [Google Scholar]
  38. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
Figure 1. PCA scree plot with cumulative variance. An elbow is evident at k = 3 , where cumulative EV exceeds 90%.
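The explained-variance and cumulative-variance quantities plotted in Figure 1 (and tabulated in Table 2) come directly from a fitted PCA. A minimal sketch with scikit-learn is given below; the data matrix X is a placeholder with an approximately rank-3 signal, not the simulated data of Table 3.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder data: rank-3 signal plus small isotropic noise (for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3)) @ rng.standard_normal((3, 20))
X += 0.2 * rng.standard_normal(X.shape)

pca = PCA().fit(X)
ev = 100 * pca.explained_variance_ratio_      # explained variance per component (%)
cum_ev = np.cumsum(ev)                        # cumulative explained variance (%)
ks = np.arange(1, ev.size + 1)

plt.bar(ks, ev, alpha=0.6, label="EV per PC (%)")
plt.plot(ks, cum_ev, "ko-", label="Cumulative EV (%)")
plt.axhline(90, color="gray", linestyle="--")  # the 90% threshold referenced in the caption
plt.xlabel("Principal component")
plt.ylabel("Explained variance (%)")
plt.legend()
plt.show()
```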
Figure 2. Scores on the first two principal components. The cloud suggests a strong low-dimensional signal with roughly elliptical dispersion.
Figure 3. Loadings for variables on PC1 and PC2. Points distant from the origin have greater influence on the corresponding PC; variables near the origin contribute mainly to later PCs.
Figure 4. Reconstruction error (MSE) versus number of PCs. The sharp decline through k = 3 and subsequent flattening support selecting three components.
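The reconstruction-error curve in Figure 4 can be reproduced, in outline, by projecting onto the k leading components and measuring the mean squared error of the reconstruction. This is a sketch under that reading, with X standing in for any (n × p) data matrix; it is not the authors’ implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reconstruction_mse(X, max_k):
    """Mean squared reconstruction error after keeping the k leading PCs, for k = 1..max_k."""
    errors = []
    for k in range(1, max_k + 1):
        pca = PCA(n_components=k).fit(X)
        X_hat = pca.inverse_transform(pca.transform(X))  # project onto k PCs, then map back
        errors.append(float(np.mean((X - X_hat) ** 2)))
    return errors

# Example with placeholder data; the error should flatten once k reaches the signal rank.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 3)) @ rng.standard_normal((3, 20))
X += 0.2 * rng.standard_normal(X.shape)
print(pca_reconstruction_mse(X, max_k=6))
```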
Figure 5. Isomap embedding in two dimensions for k NN = 12 . The Swiss roll is unfolded into a smooth chart with preserved neighborhood structure.
Figure 6. Isomap residual variance (lower is better) across neighborhood sizes k_NN. The curve is flat and minimal near k_NN ≈ 8, and remains comparably low for k_NN = 10–12.
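Residual variance in Figure 6 is the standard Isomap diagnostic 1 − r², where r is the correlation between geodesic distances in the input space and Euclidean distances in the embedding. The sketch below assumes scikit-learn’s Isomap (whose fitted dist_matrix_ attribute holds the geodesic distances) and a toy Swiss roll; it illustrates the diagnostic rather than the authors’ exact code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=500, noise=0.5, random_state=0)  # toy 3-D Swiss roll

for k in (6, 8, 10, 12, 15):
    iso = Isomap(n_neighbors=k, n_components=2).fit(X)
    # Upper-triangular geodesic distances and the matching embedding distances.
    geo = iso.dist_matrix_[np.triu_indices_from(iso.dist_matrix_, k=1)]
    emb = pdist(iso.embedding_)
    r, _ = pearsonr(geo, emb)
    print(f"k_NN = {k:2d}   residual variance = {1 - r**2:.4f}")
```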
Figure 7. t-SNE embeddings of the 50-dimensional data for perplexities { 5 , 10 , 30 , 50 } . Three compact, separable clusters are recovered across settings.
Figure 8. Average Jaccard overlap of k-nearest neighbors ( k = 15 ) between the original space and the 2-D t-SNE map versus perplexity. Preservation is stable with a slight improvement near perplexity 30.
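The neighborhood-preservation curve in Figure 8 is an average Jaccard overlap between k-nearest-neighbor sets computed before and after embedding. A minimal sketch of that metric, assuming scikit-learn’s TSNE and NearestNeighbors and a toy three-cluster dataset (k = 15 as in the caption):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k):
    """Index sets of the k nearest neighbors of each point (excluding the point itself)."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return [set(row[1:]) for row in idx]

def mean_jaccard(X_high, X_low, k=15):
    """Average Jaccard overlap of k-NN sets in the original space and in the embedding."""
    high, low = knn_sets(X_high, k), knn_sets(X_low, k)
    return float(np.mean([len(h & l) / len(h | l) for h, l in zip(high, low)]))

# Toy three-cluster data in 50 dimensions (an illustrative stand-in for the simulated data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 50)) for c in (0.0, 3.0, 6.0)])

for perp in (5, 10, 30, 50):
    Y = TSNE(n_components=2, perplexity=perp, init="pca", random_state=0).fit_transform(X)
    print(f"perplexity = {perp:2d}   mean Jaccard@15 = {mean_jaccard(X, Y, k=15):.3f}")
```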
Table 1. Characteristics of PCA, Isomap, and t-SNE.
Technique | Type | Strengths | Limitations
PCA | Linear | Preserves global variance; efficient; interpretable | Fails for nonlinear data
Isomap | Nonlinear | Preserves global manifold geometry | Noise-sensitive; computationally intensive
t-SNE | Nonlinear | Preserves local structure; strong for visualization | Distorts global structure; parameter-sensitive
Table 2. Explained variance (EV) and cumulative EV for the leading PCs.
PC | 1 | 2 | 3 | 4
EV (%) | 38.99 | 28.73 | 22.37 | 0.56
Cumulative (%) | 38.99 | 67.72 | 90.08 | 90.64
Table 3. Synthetic data models used in the simulation study. Here i = 1 , , n indexes observations, p is the ambient dimension, and σ 2 is the isotropic noise variance.
Structure | Generative Model | Key Parameters/Clustering
Linear (Gaussian mixture) | $x_i \in \mathbb{R}^p$, $i = 1, \dots, n$. Generate $x_i = \sum_{j=1}^{r} \alpha_{ij} u_j + \epsilon_i$, $r < p$. Basis $\{u_j\}$ orthonormal; $\alpha_{ij} \sim N(\mu_{kj}, \sigma_j^2)$ if $i \in C_k$; $\epsilon_i \sim N(0, \sigma^2 I_p)$. | $K = 3$ Gaussian components via $\{\mu_{kj}\}_{j=1}^{r}$; intrinsic dimension $r = 2$; ambient dimension $p$ as specified.
Linear (Student-t mixture, heavy tails) | Same subspace model as above with $\alpha_{ij} \sim t_{\nu_k}(\mu_{kj}, \sigma_j^2)$ if $i \in C_k$, $\nu_k \in [3, 10]$; $\epsilon_i \sim N(0, \sigma^2 I_p)$. | $K = 3$ Student-t components; intrinsic dimension $r = 2$; degrees of freedom $\nu_k \in [3, 10]$.
Nonlinear (Swiss-roll manifold) | $d = 2$ manifold embedded in $\mathbb{R}^p$: $x_i = [t_i \cos t_i,\, h_i,\, t_i \sin t_i,\, 0, \dots, 0] + \epsilon_i$, $t_i \sim \mathrm{Unif}(0, 2\pi)$, $h_i \sim \mathrm{Unif}(0, 10)$, $\epsilon_i \sim N(0, \sigma^2 I_p)$. | $K = 3$ clusters defined by geodesic neighborhoods; intrinsic dimension $d = 2$.
Nonlinear (Concentric-spheres manifold) | Multiple spherical shells in $\mathbb{R}^p$: $x_i = r_k z_i / \lVert z_i \rVert + \epsilon_i$, $z_i \sim N(0, I_p)$; shell index $k$ sets radius $r_k$; $\epsilon_i \sim N(0, \sigma^2 I_p)$. | $K = 3$ shells with radii $r_1 < r_2 < r_3$; nonconvex curvature and disconnected topology.
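For concreteness, the sketch below implements the two nonlinear generators of Table 3 as we read them. The exact rule for assigning the K = 3 labels along the Swiss roll (here, tertiles of the roll coordinate t) and the particular shell radii are our assumptions, not values taken from the paper.

```python
import numpy as np

def swiss_roll(n, p, sigma2, rng):
    """Swiss roll embedded in R^p with isotropic noise; labels are tertiles of the roll coordinate."""
    t = rng.uniform(0.0, 2.0 * np.pi, n)                 # roll (geodesic) coordinate
    h = rng.uniform(0.0, 10.0, n)                        # height coordinate
    X = np.zeros((n, p))
    X[:, 0], X[:, 1], X[:, 2] = t * np.cos(t), h, t * np.sin(t)
    X += rng.normal(0.0, np.sqrt(sigma2), (n, p))        # ambient Gaussian noise
    labels = np.digitize(t, np.quantile(t, [1 / 3, 2 / 3]))  # K = 3 groups along t (assumed rule)
    return X, labels

def concentric_spheres(n, p, sigma2, rng, radii=(1.0, 2.0, 3.0)):
    """Three noisy spherical shells in R^p; the shell index is the cluster label."""
    labels = rng.integers(0, len(radii), n)
    Z = rng.normal(size=(n, p))
    X = np.asarray(radii)[labels, None] * Z / np.linalg.norm(Z, axis=1, keepdims=True)
    X += rng.normal(0.0, np.sqrt(sigma2), (n, p))
    return X, labels

rng = np.random.default_rng(0)
X, y = swiss_roll(n=300, p=50, sigma2=0.5, rng=rng)
print(X.shape, np.bincount(y))   # (300, 50) and roughly balanced label counts
```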
Table 4. Simulation factors and analysis settings.
Parameter | Values/Setting
Sample size n | 100, 200, 300, 400, 500
Noise scale σ | 0.25, 0.5, 0.75, 1.0, 1.5, 2.0
Feature count p | 20, 50, 100, 200, 300, 400
Data structure | Linear (Gaussian, Student-t); Nonlinear (Swiss-roll, Concentric-spheres)
Clusters K | 3 (ground-truth labels for silhouette)
Embedding dimension d | 2 (matches intrinsic dimension)
Replicates R | 1000 per (n, σ, p) condition
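One cell of this factorial design can then be run, in outline, as follows: simulate data, embed to d = 2 with each method, and score the embedding by the silhouette against the ground-truth labels. The neighborhood size, perplexity, and reduced replicate count below are illustrative choices, not the paper’s exact settings (the study used R = 1000 replicates per cell).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE
from sklearn.metrics import silhouette_score

def simulate_cell(generate, n, p, sigma2, replicates=20, seed=0):
    """Mean and standard deviation of silhouette scores per method for one (n, σ², p) cell."""
    rng = np.random.default_rng(seed)
    scores = {"PCA": [], "Isomap": [], "t-SNE": []}
    for _ in range(replicates):
        X, y = generate(n, p, sigma2, rng)            # e.g. swiss_roll from the previous sketch
        embeddings = {
            "PCA": PCA(n_components=2).fit_transform(X),
            "Isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
            "t-SNE": TSNE(n_components=2, perplexity=30, init="pca",
                          random_state=int(rng.integers(1 << 31))).fit_transform(X),
        }
        for name, Y in embeddings.items():
            scores[name].append(silhouette_score(Y, y))
    return {name: (float(np.mean(v)), float(np.std(v))) for name, v in scores.items()}

# Example (few replicates for speed):
# print(simulate_cell(swiss_roll, n=300, p=50, sigma2=0.5, replicates=5))
```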
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
