Article

Filter-Based Tchebichef Moment Analysis for Whole Slide Image Reconstruction

by Keun Woo Kim 1, Wenxian Jin 2 and Barmak Honarvar Shakibaei Asli 2,*

1 Centre for Computational Engineering Sciences, Faculty of Engineering and Applied Sciences, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK
2 Centre for Life-Cycle Engineering and Management, Faculty of Engineering and Applied Sciences, Cranfield University, Cranfield, Bedfordshire MK43 0AL, UK
* Author to whom correspondence should be addressed.
Electronics 2025, 14(15), 3148; https://doi.org/10.3390/electronics14153148
Submission received: 19 June 2025 / Revised: 29 July 2025 / Accepted: 5 August 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Image Fusion and Image Processing)

Abstract

In digital pathology, accurate diagnosis and prognosis critically depend on robust feature representation of Whole Slide Images (WSIs). While deep learning offers powerful solutions, its “black box” nature presents significant challenges to clinical interpretability and widespread adoption. Handcrafted features offer interpretability, yet orthogonal moments, particularly Tchebichef moments (TMs), remain underexplored for WSI analysis. This study introduces TMs as interpretable, efficient, and scalable handcrafted descriptors for WSIs, alongside a novel two-dimensional digital filter architecture designed to enhance numerical stability and hardware compatibility during TM computation. We conducted a comprehensive reconstruction analysis using H&E-stained WSIs from the MIDOG++ dataset to evaluate TM effectiveness. Our results demonstrate that lower-order TMs accurately reconstruct both square and rectangular WSI patches, with performance stabilising beyond a threshold moment order, confirmed by SNIRE, SSIM, and BRISQUE metrics, highlighting their capacity to retain structural fidelity. Furthermore, our analysis reveals significant computational efficiency gains through the use of pre-computed polynomials. These findings establish TMs as highly promising, interpretable, and scalable feature descriptors, offering a robust alternative for computational pathology applications that prioritise both accuracy and transparency.

1. Introduction

Whole Slide Images (WSIs) have revolutionised the field of pathology by enabling the high-resolution digitisation of histopathological slides, thereby facilitating remote diagnostics, quantitative analysis, and AI-assisted decision support [1]. This technological advancement has catalysed substantial progress in digital and computational pathology, supporting applications such as automated disease classification, biomarker discovery, and prognosis prediction [2,3]. Despite significant advances in deep learning methodologies, several recent studies continue to underscore the relevance of handcrafted features within computational pathology.
Interpretability and explainability are critical considerations for the clinical adoption of AI in pathology. Despite their impressive performance, deep learning-based algorithms often operate as “black boxes,” providing little insight into how decisions are made [4]. This lack of transparency poses a significant barrier to trust and acceptance in clinical settings, where explainability is essential for validation and regulatory approval [5]. Although recent advancements in explainable AI, including attention mechanisms, saliency maps, and graph-based neural networks, have enhanced our understanding of the internal workings of deep models [6,7,8], these methods often provide only partial explanations or are difficult to interpret consistently across cases. As a result, there remains a pressing need for complementary or alternative approaches that prioritise transparency, reproducibility, and alignment with domain expertise.
Handcrafted pathological features are extracted from images using traditional image processing techniques, frequently guided by the expertise of trained pathologists [9]. Designed to quantify the morphological structures, texture patterns, and spatial relationships discernible in WSIs, these features offer enhanced interpretability. In contrast, deep features are representations automatically learned by deep learning algorithms. While such features can capture complex, hierarchical information and achieve performance levels comparable to those of expert pathologists, their application does not invariably guarantee superior accuracy across all computational pathology tasks.
Alhindi et al. [10] evaluated the relative efficacy of handcrafted versus deep features for the classification of malignant and benign tissue samples. Their results indicated that handcrafted features achieved higher classification accuracy, thereby suggesting their superior utility in certain tasks. More recently, Bolus Al Baqain and Sultan Al-Kadi [11] reported that handcrafted features tend to outperform in classification tasks, whereas deep features are generally more effective in segmentation applications. Furthermore, Huang et al. [12] proposed a hybrid framework that integrates both handcrafted and deep features, demonstrating improved classification performance in computational pathology.
Despite extensive exploration of handcrafted descriptors such as Local Binary Patterns (LBPs), Histogram of Oriented Gradients (HOG), Speeded-Up Robust Features (SURFs), and Scale-Invariant Feature Transform (SIFT), there remains a paucity of research investigating the use of image orthogonal moments as feature descriptors in WSIs. Orthogonal moments, such as Zernike [13], Legendre [14], and Chebyshev [15], are mathematical descriptors that provide compact yet accurate representations of image structures and have proven valuable in numerous image analysis tasks, including biomedical imaging [16]. Among these, Tchebichef Moments (TMs) are particularly notable for their discrete orthogonality, which facilitates superior image reconstruction without reliance on continuous-domain approximations [17,18].
TMs have demonstrated the ability to achieve high-fidelity image representation with minimal redundancy, and have been successfully applied in domains such as satellite imaging [19] and medical image analysis [20,21]. In computational pathology, TMs have previously been utilised to extract textural features for the classification of colorectal cancer [22]. While these findings underscore the potential of TMs as effective feature descriptors, the aforementioned study was limited in scope, having relied solely on the red channel of WSIs, thereby constraining its utility as a full RGB feature descriptor.
In the present study, we propose a two-dimensional cascaded digital filter architecture for the efficient and accurate generation of TMs, specifically tailored for WSI reconstruction. Additionally, we conduct a comprehensive reconstruction analysis of WSIs to assess the capacity of TMs to capture salient features. This research aims to address the gap in the current literature by demonstrating the utility of Tchebichef moments as robust and interpretable feature descriptors in computational pathology.
This paper is organised as follows. Section 1 introduces the background of WSIs, the challenges in digital pathology, and the motivation for using TMs as interpretable feature descriptors. Section 2 presents the theoretical foundations of Tchebichef Polynomials and Moments, including their mathematical definitions and properties, and discusses their application to coloured images. Section 3 details the proposed novel filter structure designed for the efficient computation of Tchebichef polynomials. Section 4 provides the experimental results and discussion, covering the dataset used, algorithm implementation, reconstruction analysis, and evaluation of time complexity. Concluding remarks and future work are presented in Section 5.

2. Mathematical Background

In this section, we define the 2D TM for image analysis using the orthogonal discrete Tchebichef polynomials $t_p(n)$, specifying the norm function $\rho(p,N)$ and normalisation factor $\beta(p,N)$ along with their recurrence relation, and detailing how images can be reconstructed from TMs and how the polynomials are represented. We then extend TMs to colour images, noting the limitations of greyscale approaches and introducing channel-wise TM computation for the red, green, and blue components, culminating in the introduction of Quaternion Tchebichef Moments (QTMs) for a more integrated representation using quaternion algebra.

2.1. Tchebichef Polynomials and Moments

The 2D Tchebichef moment of order $(p+q)$ for an image intensity function $f(n,m)$, with dimensions $N \times M$, is defined as
$$T_{pq} = A(p,N)\,A(q,M)\sum_{n=0}^{N-1}\sum_{m=0}^{M-1} t_p(n)\,t_q(m)\,f(n,m),$$
where $t_p(n)$ denotes the orthogonal discrete Tchebichef polynomial of order $p$, as given by Mukundan et al. [17]:
$$t_p(n) = p!\sum_{k=0}^{p}(-1)^{p-k}\binom{N-1-k}{p-k}\binom{p+k}{p}\binom{n}{k}.$$
Additionally, the coefficient $A(p,N)$ is defined as
$$A(p,N) = \frac{\beta(p,N)}{\rho(p,N)},$$
where $\beta(p,N)$ serves as a normalisation factor, typically chosen as $N^p$. Furthermore, the orthogonality of the polynomials is governed by the squared norm
$$\rho(p,N) = (2p)!\binom{N+p}{2p+1}.$$
The recurrence relation for the Tchebichef polynomials is expressed as
$$(p+1)\,t_{p+1}(n) - (2p+1)(2n-N+1)\,t_p(n) + p\,(N^2-p^2)\,t_{p-1}(n) = 0,$$
for $p \ge 1$, with the initial polynomials given by $t_0(n) = 1$ and $t_1(n) = 2n - N + 1$.
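As a concrete check of the recurrence and the squared norm $\rho(p,N)$, the polynomials can be tabulated directly; a minimal NumPy sketch (the function name `tchebichef_poly` is our own illustration, not taken from the paper's implementation):

```python
import numpy as np

def tchebichef_poly(N, p_max):
    """Evaluate the (non-normalised) discrete Tchebichef polynomials
    t_p(n) for p = 0..p_max on the grid n = 0..N-1, using the
    three-term recurrence with t_0(n) = 1 and t_1(n) = 2n - N + 1."""
    n = np.arange(N, dtype=float)
    T = np.zeros((p_max + 1, N))
    T[0] = 1.0
    if p_max >= 1:
        T[1] = 2.0 * n - N + 1.0
    for p in range(1, p_max):
        # (p+1) t_{p+1}(n) = (2p+1)(2n-N+1) t_p(n) - p (N^2 - p^2) t_{p-1}(n)
        T[p + 1] = ((2 * p + 1) * (2 * n - N + 1) * T[p]
                    - p * (N ** 2 - p ** 2) * T[p - 1]) / (p + 1)
    return T
```

Orthogonality can then be verified numerically: for every pair of rows, $\sum_n t_p(n)\,t_q(n)$ equals $\rho(p,N)\,\delta_{pq}$.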
As shown in Figure 1, the top panel presents a plot of discrete Tchebichef polynomial values for N = 8 . Complementing this, the bottom panel displays an 8 × 8 array of basis images for the two-dimensional discrete Tchebichef transform.
Given a set of TMs up to order $(N_{\max}, M_{\max})$, the image function $f(n,m)$ can be approximated by the reconstruction formula
$$\tilde{f}(n,m) = \sum_{p=0}^{N_{\max}}\sum_{q=0}^{M_{\max}} \frac{T_{pq}}{\beta(p,N)\,\beta(q,M)}\, t_p(n)\, t_q(m).$$
The discrete Tchebichef polynomial $t_p(n)$ may also be represented as a polynomial in $n$, as reported by Mukundan et al. [17]:
$$t_p(n) = \sum_{k=0}^{p} C_k(p,N) \sum_{i=0}^{k} s_k(i)\, n^i,$$
where $C_k(p,N) = (-1)^{p-k}\,\frac{p!}{k!}\binom{N-1-k}{p-k}\binom{p+k}{p}$, and $s_k(i)$ are the Stirling numbers of the first kind [23], satisfying the identity $\frac{x!}{(x-k)!} = \sum_{i=0}^{k} s_k(i)\, x^i$.
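The forward/inverse pair can be sketched end-to-end. The following illustration assumes $\beta(p,N) = N^p$ and $A(p,N) = \beta(p,N)/\rho(p,N)$ as above; at full order the reconstruction is exact by orthogonality. Function names (`tm_forward`, `tm_reconstruct`, `tcheb`) are ours, not from the paper's C++ implementation:

```python
import numpy as np
from math import comb, factorial

def tcheb(N, pmax):
    """t_p(n) via the three-term recurrence from Section 2.1."""
    n = np.arange(N, dtype=float)
    T = np.zeros((pmax + 1, N))
    T[0] = 1.0
    if pmax >= 1:
        T[1] = 2 * n - N + 1
    for p in range(1, pmax):
        T[p + 1] = ((2 * p + 1) * (2 * n - N + 1) * T[p]
                    - p * (N ** 2 - p ** 2) * T[p - 1]) / (p + 1)
    return T

def _rho(p, L):
    # squared norm rho(p, L) = (2p)! * C(L + p, 2p + 1)
    return factorial(2 * p) * comb(L + p, 2 * p + 1)

def tm_forward(f, pmax, qmax):
    """T_pq = A(p,N) A(q,M) sum_{n,m} t_p(n) t_q(m) f(n,m),
    with A(p,N) = beta(p,N)/rho(p,N) and beta(p,N) = N^p."""
    N, M = f.shape
    tp, tq = tcheb(N, pmax), tcheb(M, qmax)
    Ap = np.array([N ** p / _rho(p, N) for p in range(pmax + 1)])
    Aq = np.array([M ** q / _rho(q, M) for q in range(qmax + 1)])
    return Ap[:, None] * Aq[None, :] * (tp @ f @ tq.T)

def tm_reconstruct(T, N, M):
    """f~(n,m) = sum_{p,q} T_pq t_p(n) t_q(m) / (beta(p,N) beta(q,M))."""
    pmax, qmax = T.shape[0] - 1, T.shape[1] - 1
    tp, tq = tcheb(N, pmax), tcheb(M, qmax)
    Bp = np.array([float(N) ** p for p in range(pmax + 1)])
    Bq = np.array([float(M) ** q for q in range(qmax + 1)])
    return tp.T @ (T / (Bp[:, None] * Bq[None, :])) @ tq
```

Note that the matrix products `tp @ f @ tq.T` evaluate the double summation for all orders at once, which is how moment computation is typically vectorised in practice.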

2.2. Tchebichef Moments in Coloured Images

To this day, most studies involving TMs have been conducted on greyscale images or rely on a single colour channel. This approach helps reduce computational cost and is generally sufficient for applications such as shape analysis, watermarking, texture classification, and image retrieval, where greyscale representations adequately capture the relevant features [24,25,26,27]. However, in digital pathology, the colour information in histological slides is critical for accurate analysis. Converting WSIs to greyscale in this context can result in significant information loss, potentially compromising diagnostic accuracy [28].
While greyscale representations are commonly used in image analysis with TMs, the method is not inherently limited to greyscale and can be effectively extended to colour images using techniques such as channel-wise computation or quaternion-based formulations. The most straightforward approach is channel-wise computation, where TMs are calculated separately for each colour channel. This allows for the independent extraction of structural features from the red, green, and blue channels, resulting in three distinct sets of 2D Tchebichef moments per image:
$$f(x,y) = \{f_R(x,y),\, f_G(x,y),\, f_B(x,y)\} \;\longrightarrow\; \{TM_R,\, TM_G,\, TM_B\},$$
where $f(x,y)$ is the image function represented as a set of RGB intensity functions, and $TM_R$, $TM_G$, and $TM_B$ denote the corresponding TMs computed separately for the red, green, and blue channels, respectively.
Recently, QTMs proposed by Zhu et al. [29] have gained increasing attention for coloured image analysis. These QTMs compute the TMs for coloured images by integrating quaternion algebra, which inherently models the correlation between different colour channels. An RGB image can be expressed as a quaternion vector as follows:
$$f(x,y) = f_R(x,y)\,i + f_G(x,y)\,j + f_B(x,y)\,k.$$
Using the quaternion vector of the image, the QTMs of a square image with the unit pure quaternion $\mu = (i+j+k)/\sqrt{3}$ (a quaternion root of $-1$) can be computed as shown:
$$\begin{aligned} QTM_{pq} &= \sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\bigl(f_R(x,y)\,i + f_G(x,y)\,j + f_B(x,y)\,k\bigr)\, t_p(x)\, t_q(y)\,\frac{i+j+k}{\sqrt{3}} \\ &= -\frac{1}{\sqrt{3}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\bigl(f_R(x,y) + f_G(x,y) + f_B(x,y)\bigr)\, t_p(x)\, t_q(y) \\ &\quad + \frac{i}{\sqrt{3}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\bigl(f_G(x,y) - f_B(x,y)\bigr)\, t_p(x)\, t_q(y) \\ &\quad + \frac{j}{\sqrt{3}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\bigl(f_B(x,y) - f_R(x,y)\bigr)\, t_p(x)\, t_q(y) \\ &\quad + \frac{k}{\sqrt{3}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\bigl(f_R(x,y) - f_G(x,y)\bigr)\, t_p(x)\, t_q(y) \\ &= A_0 + i\,A_1 + j\,A_2 + k\,A_3, \end{aligned}$$
where
$$A_0 = -\tfrac{1}{\sqrt{3}}\,(TM_R + TM_G + TM_B), \qquad A_1 = \tfrac{1}{\sqrt{3}}\,(TM_G - TM_B),$$
$$A_2 = \tfrac{1}{\sqrt{3}}\,(TM_B - TM_R), \qquad \text{and} \qquad A_3 = \tfrac{1}{\sqrt{3}}\,(TM_R - TM_G).$$
The QTMs use channel-wise computation to derive Tchebichef moments for each RGB channel. These moments are then combined into a single quaternion representation that preserves both spatial and colour information.
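Packing the channel-wise moments into the quaternion components is a few array operations; a minimal sketch (the function name `qtm_components` is ours, and the negative sign on the real part follows from expanding the quaternion product with $(i+j+k)/\sqrt{3}$, an assumption about the convention used):

```python
import numpy as np

def qtm_components(TM_R, TM_G, TM_B):
    """Combine channel-wise Tchebichef moment arrays into the four
    quaternion components of QTM_pq = A0 + i*A1 + j*A2 + k*A3.
    The real part carries a minus sign from the quaternion product
    with the unit pure quaternion (i + j + k)/sqrt(3)."""
    s = 1.0 / np.sqrt(3.0)
    A0 = -s * (TM_R + TM_G + TM_B)
    A1 = s * (TM_G - TM_B)
    A2 = s * (TM_B - TM_R)
    A3 = s * (TM_R - TM_G)
    return A0, A1, A2, A3
```

For a greyscale image (all three channels equal), the pure components $A_1, A_2, A_3$ vanish, consistent with the fact that they encode only inter-channel differences.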
In 2020, Elouariachi et al. [30] introduced QTMs with invariance to rotation, scale, and translation, making them particularly suitable for robust pattern recognition and image classification tasks. In their study, the proposed QTM invariants were successfully applied to hand gesture recognition. Subsequently, QTMs have been integrated with deep learning frameworks to achieve accurate classification of natural images [31] and for facial recognition [32]. In both studies, modified neural network architectures were proposed to accept QTMs as input data. While this approach has shown promising results, the quaternion vector is not a conventional input format for deep learning models, which may introduce challenges during implementation.

3. Proposed Filter Structure

We introduce a novel formula for the efficient computation of Tchebichef polynomials. By employing the backwards difference technique, we obtain a simplified input for the designed filter across any arbitrary order, making it independent of the image size (N). To further enhance efficiency, we implement a cascaded digital filter structure, leveraging the following lemma and theorem.
Lemma 1.
Let $Q_p(n)$ be a polynomial of degree $p \ge 1$, expressed as $Q_p(n) = a\,n^p + b\,n^{p-1} + \text{l.o.t.}$, where $a \neq 0$ and $b$ are real coefficients, and l.o.t. represents lower-order terms (if any). Applying the $p$-th-order backwards difference operator $\nabla^p$ to $Q_p(n)$ results in
$$\nabla^p\{Q_p(n)\} = a\,p!,$$
where ∇ represents the backwards difference operator, defined as $\nabla f(n) = f(n) - f(n-1)$. This shows that only the leading coefficient $a$ survives, scaled by $p!$, while any further backwards difference yields zero, as the result is constant with respect to $n$.
Proof. 
The proof follows directly by induction and is straightforward to verify (for further details, please refer to the inductive proof in [33]).    □
Compared to forward and central difference operators, the backwards difference operator offers two key advantages in our context. First, it simplifies the derivation of the factorial-scaled leading coefficient when applied to a polynomial of matching order. Second, it is inherently causal, making it well-suited for hardware-oriented implementations such as Field-Programmable Gate Array (FPGA)-based systems. In contrast, central differences require access to future values and introduce symmetry-related constraints, while forward differences can complicate boundary handling in convolutional architectures. Therefore, the backwards difference operator provides both theoretical clarity and practical efficiency for moment-based digital filtering.
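A quick numerical check of the lemma (illustrative only; the polynomial and its coefficients are arbitrary choices of ours):

```python
import numpy as np

def backward_diff(v, order=1):
    """Apply the backwards difference f(n) - f(n-1) `order` times;
    each pass shortens the sequence by one sample."""
    for _ in range(order):
        v = v[1:] - v[:-1]
    return v

# Q_4(n) = 2.5 n^4 + 3 n^3 - n + 7: degree 4, leading coefficient a = 2.5
n = np.arange(20, dtype=float)
Q = 2.5 * n ** 4 + 3 * n ** 3 - n + 7
d = backward_diff(Q, order=4)
# By the lemma, d is the constant a * 4! = 2.5 * 24 = 60,
# and one further difference yields zero everywhere.
```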
Theorem 1.
The discrete Tchebichef polynomials satisfy the following property:
$$\nabla^p\, t_p(n) = \frac{(2p)!}{p!},$$
where p denotes the polynomial order.
Proof. 
Using Equation (5), the leading coefficient of $n^p$ in $t_p(n)$ arises from the term where $k = p$ and $i = p$. Since $s_p(p) = 1$ and $s_k(i) = 0$ for $i > k$, the leading coefficient of $t_p(n)$ is $C_p(p,N)\, s_p(p) = C_p(p,N)$. Substituting $k = p$ into the expression for $C_k(p,N)$:
$$C_p(p,N) = (-1)^{p-p}\,\frac{p!}{p!}\binom{N-1-p}{p-p}\binom{p+p}{p} = (1)(1)(1)\binom{2p}{p} = \binom{2p}{p}.$$
Thus, the leading coefficient of $t_p(n)$ is $\binom{2p}{p}$. Utilising Lemma 1, the $p$-th-order backwards difference of the Tchebichef polynomial is given by $\binom{2p}{p}\,p!$, which further simplifies to $\frac{(2p)!}{p!}$ and concludes the proof.    □
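Theorem 1 can be verified numerically by differencing recurrence-generated polynomials (a sketch; the helper `tcheb` is our own tabulation of $t_p(n)$):

```python
import numpy as np
from math import factorial

def tcheb(N, pmax):
    """t_p(n) via the three-term recurrence from Section 2.1."""
    n = np.arange(N, dtype=float)
    T = np.zeros((pmax + 1, N))
    T[0] = 1.0
    if pmax >= 1:
        T[1] = 2 * n - N + 1
    for p in range(1, pmax):
        T[p + 1] = ((2 * p + 1) * (2 * n - N + 1) * T[p]
                    - p * (N ** 2 - p ** 2) * T[p - 1]) / (p + 1)
    return T

N = 32
T = tcheb(N, 6)
for p in range(1, 7):
    d = T[p]
    for _ in range(p):        # p-fold backwards difference
        d = d[1:] - d[:-1]
    # Theorem 1: the result is the constant (2p)!/p!
    assert np.allclose(d, factorial(2 * p) / factorial(p))
```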
By applying the flipped version of the input WSI to our designed filter structure, we can compute TMs of arbitrary orders. Based on the findings in [34], a discrete transformation of a discrete signal $f(n)$ of length $N$ over a kernel function $g(n,p)$ can be obtained through the discrete convolution of the kernel with the flipped signal, evaluated at $N-1$. That is,
$$T_{pq} = \sum_{n=0}^{N-1}\sum_{m=0}^{M-1} f(n,m)\, t_p(n)\, t_q(m) = f_F(n,m) * t_p(n)\, t_q(m)\,\Big|_{n=N-1,\ m=M-1},$$
where $*$ denotes 2D convolution and $f_F(n,m)$ represents the flipped WSI. Figure 2 illustrates the structure of the cascaded digital filter based on successive backwards differences, comprising $(N+M-2)$ delay blocks and $(N+M-2)$ multiplier coefficients $\pm p_i$ and $\pm q_j$ (where $i = 1, 2, \ldots, N-1$ and $j = 1, 2, \ldots, M-1$) that select the specific order of the polynomial.
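Because the kernel $t_p(n)\,t_q(m)$ is separable, the convolution-at-a-point identity can be checked with two ordinary 1D convolutions; a sketch (function names are ours, and `np.convolve` here performs plain full-length convolution, not the cascaded filter itself):

```python
import numpy as np

def tcheb(N, pmax):
    """t_p(n) via the three-term recurrence from Section 2.1."""
    n = np.arange(N, dtype=float)
    T = np.zeros((pmax + 1, N))
    T[0] = 1.0
    if pmax >= 1:
        T[1] = 2 * n - N + 1
    for p in range(1, pmax):
        T[p + 1] = ((2 * p + 1) * (2 * n - N + 1) * T[p]
                    - p * (N ** 2 - p ** 2) * T[p - 1]) / (p + 1)
    return T

def tm_via_convolution(f, p, q):
    """Unnormalised T_pq obtained by convolving the flipped image f_F with
    the separable kernel t_p(n) t_q(m) and sampling at (N-1, M-1)."""
    N, M = f.shape
    tp, tq = tcheb(N, p)[p], tcheb(M, q)[q]
    fF = f[::-1, ::-1]                                        # flipped image
    rows = np.array([np.convolve(r, tq)[M - 1] for r in fF])  # along m
    return np.convolve(rows, tp)[N - 1]                       # along n
```

Sampling the full convolution at $(N-1, M-1)$ undoes the flip, so the result equals the direct double summation $\sum_{n,m} f(n,m)\, t_p(n)\, t_q(m)$.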
Figure 3 presents a flowchart of the proposed algorithm for computing and reconstructing WSIs using TMs. It begins with a WSI input, followed by TM calculation using weighted pixel sums. A cascaded digital filter accelerates this process. The reconstructed image is obtained using the TM features and Tchebichef kernels. Image quality assessment (IQA) includes Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) as a no-reference score and Statistical Normalisation Image Reconstruction Error (SNIRE)/Structural Similarity Index Measure (SSIM) as full-reference scores. This workflow highlights the effectiveness of TMs in image processing and evaluation.

4. Experimental Results and Discussion

An experimental study was conducted to evaluate the effectiveness of TMs as feature representations for WSI reconstruction in histopathology analysis. As outlined in the previous section, we proposed a novel two-dimensional filter architecture designed to provide a theoretically efficient and mathematically elegant framework for TMs computation, with potential benefits in numerical precision and hardware compatibility. However, given that the primary objective of this study is to assess the utility of TMs in WSI reconstruction, we employed a validated and computationally efficient C++ implementation for all experimental analyses. This decision ensured consistency, scalability, and robustness when processing large-scale WSIs. The proposed filter is therefore presented as a theoretical contribution, with plans for future work to benchmark its performance, optimise its implementation, and explore its applicability in real-time or resource-constrained environments.

4.1. Data

For the reconstruction analysis, Hematoxylin and Eosin (H&E)-stained WSIs from the MIDOG++ dataset [35] were utilised due to the dataset’s diversity in tumour types and morphological characteristics. The dataset consists of 503 WSIs containing mitotic figures from seven tumour types collected from both human and canine specimens: breast carcinoma, neuroendocrine tumour, lung carcinoma, lymphosarcoma, cutaneous mast cell tumour, melanoma, and soft tissue sarcoma. Sample WSIs from the dataset are illustrated in Figure 4. To evaluate the capability of TMs in capturing spatial and structural information, WSIs were cropped into patches of varying sizes, enabling assessment of TM sensitivity to image scale and feature size. Additionally, the dataset’s physiological and staining variability provided a means to test the robustness of TMs in encoding visual features from diverse WSIs.

4.2. Algorithm Implementation

High-order Tchebichef polynomials and moments are prone to numerical instability, particularly when applied to large-scale image data, which can result in inaccurate reconstructions. To address this, several algorithms have been developed to enable stable and precise computation of these polynomials. In this study, we implemented the method proposed by Camacho-Bello and Rivera-Lopez [36] in C++ for the efficient computation of TMs on large WSIs. This approach incorporates the recurrence relation introduced by Mukundan et al. [17] and applies the Gram–Schmidt orthonormalisation process to maintain numerical stability and accuracy at high polynomial orders.
The quality of WSI reconstruction was evaluated using three image quality metrics: SNIRE [18], SSIM [37], and BRISQUE [38]. SNIRE quantifies pixel-wise reconstruction error relative to the original image, while SSIM measures perceived similarity, where a value of 1 indicates perfect similarity and 0 indicates no similarity. BRISQUE evaluates the perceptual quality of an image without a reference, with lower scores indicating higher quality (0 being the best and 100 the worst). It is noteworthy that BRISQUE did not yield a score of 0 even for the original WSIs, likely due to its interpretation of histological textures as noise. Therefore, the BRISQUE score of the original WSI was used as a baseline for comparative analysis.
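As a sketch of the full-reference error metric, SNIRE can be implemented as the squared reconstruction error normalised by the energy of the original image. This assumes the standard normalised-reconstruction-error form; the paper's exact normalisation may differ, and the function name is ours:

```python
import numpy as np

def snire(f, f_rec):
    """Normalised reconstruction error: squared pixel-wise error energy
    divided by the energy of the original image (0 = perfect match)."""
    f = np.asarray(f, dtype=float)
    f_rec = np.asarray(f_rec, dtype=float)
    return float(np.sum((f - f_rec) ** 2) / np.sum(f ** 2))
```

Under this definition, an identical reconstruction scores 0, while an all-zero reconstruction scores 1.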

4.3. Reconstruction Analysis

In contrast to traditional handcrafted features such as LBP, HOG, SURF, and SIFT, TMs do not provide pixel-wise interpretability. Instead, they provide a powerful global representation of the image by encoding high-level characteristics, including texture, shape, colour, and spatial structure—features that are crucial for distinguishing between pathological and healthy tissues. While their interpretability may be limited at the level of specific diagnostic indicators, TMs enable reconstruction-based interpretability, allowing us to assess how well visual information is retained through the encoded moments.
In this paper, we conduct both qualitative and quantitative reconstruction analyses to evaluate the extent to which TMs preserve salient information from WSIs. It is important to clarify that the purpose of this analysis is not to assess the robustness of TMs to noise or artefacts, nor to benchmark their reconstruction fidelity against deep learning models such as autoencoders. Rather, the goal is to demonstrate that meaningful and diagnostically relevant visual information is effectively encoded by TMs across different moment orders.
Table 1 presents four randomly selected 1000 × 1000 pixel patches extracted from WSIs in the MIDOG++ dataset, along with their respective reconstructions using TMs of varying orders. For comparison, Table 2 shows the reconstruction of natural images. The reconstructed WSIs exhibit a high degree of visual similarity to the originals, even at relatively low moment orders (e.g., order 50). By moment order 200, the reconstructions become visually indistinguishable from the original WSI to the human eye. In contrast, natural images require moment orders as high as 800 to achieve comparable visual fidelity. Furthermore, the variation in reconstruction quality across WSIs at a given order is smaller and more consistent than that observed in natural images, indicating that TMs encode features from WSIs more efficiently. These findings suggest that TMs are well-suited for capturing the semantic and structural characteristics of WSIs while maintaining reduced computational complexity. The ability to achieve high-fidelity reconstructions at lower moment orders underscores the potential of TMs as an efficient feature representation method in histopathological image analysis. Additional reconstruction results conducted using Python (version 3.13.2) are presented in Appendix A,highlighting that differences in computational environments and floating-point precision can affect the quality of the SNIRE.
Further analysis in Figure 5, using both square (1000 × 1000) and rectangular (800 × 1000) WSI patches, shows that SNIRE, SSIM, and BRISQUE scores begin to stabilise at a maximum reconstruction order of 600 for square patches and around 550 × 750 for rectangular patches. This convergence indicates that increasing the moment order beyond this threshold yields diminishing returns in terms of reconstruction fidelity. Additionally, this demonstrates that TMs are effective not only for square patches, as commonly studied, but also for rectangular regions, thereby broadening their applicability in real-world WSI analysis. These findings support the argument that lower-order TMs are sufficient for accurate WSI reconstruction. These results further underscore the efficiency of TMs in capturing features within WSIs, highlighting their potential as effective handcrafted descriptors for digital pathology analysis.
Full reconstruction was performed on patches larger than 1000 pixels to demonstrate the capability of TMs in handling high-resolution images. Figure 6 and Figure 7 illustrate the successful reconstruction of large WSIs with dimensions up to 6000 × 5000 pixels without any errors, underscoring the potential of TMs to efficiently process high-resolution data. This is particularly important in computational pathology, as it confirms that no numerical instability is introduced when handling large WSIs. Furthermore, since TMs effectively encode information even at lower moment orders, it is feasible to use only a subset of moments for analysis. This reduction in feature dimensionality presents a promising alternative to more computationally intensive methods. The ability to reconstruct WSIs using lower-order moments at reduced computational cost positions TMs as a valuable tool for large-scale computational pathology applications where both accuracy and efficiency are essential.

4.4. Redundancy and Dimensionality Analysis

In the reconstruction analysis, TMs demonstrated their ability to effectively and accurately encode visual semantic information from WSIs, achieving full reconstruction at the maximum moment order. As shown in the quantitative results in Figure 5, the quality of reconstructed images converges as the moment order increases, suggesting the presence of potentially redundant information at higher orders. To explore this, a redundancy analysis was conducted to assess whether high-order TMs introduce informational overlap. This analysis focused on square matrices for computational efficiency and interpretability, and employed both visual inspection of the correlation matrix and principal component analysis (PCA).
To analyse potential redundancy across moment orders, correlation matrices were visualised for two distinct subsets of TMs as shown in Figure 8: a lower-order range (1–100) and a higher-order range (901–1000). In the lower-order matrix, most off-diagonal values were near zero, indicating a weak correlation between moment pairs and suggesting a high degree of statistical independence. This supports the effectiveness of lower-order moments in capturing distinct and complementary image features. In contrast, the higher-order correlation matrix revealed pronounced diagonal bands which are perpendicular to the main diagonal, indicating a localised correlation between adjacent high-order moments. This pattern suggests increasing redundancy at higher orders, where moment values become more interdependent and potentially less informative.
These results align with the dimensionality analysis conducted via PCA, which similarly indicated that the majority of variance is captured by lower-order moments, as shown in Figure 9. An analysis of Tchebichef moments across a range of WSIs showed that a cumulative explained variance of 0.999 is achieved using moment orders below 200. The observed redundancy among higher-order moments, as revealed in the correlation matrix, corresponds to their diminishing contribution in PCA, reinforcing the conclusion that high-order moments provide limited additional information. Therefore, for image analysis tasks, features encoded at lower orders may be sufficient. This reduction not only preserves the essential information but also significantly lowers the computational burden of processing large WSIs, as the input feature space can be limited to a smaller subset of moment orders. The optimal moment order for feature extraction may vary depending on the specific task; therefore, further optimisation and careful selection of the appropriate moment order are recommended, particularly when applying TMs to classification or segmentation tasks.
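The PCA step above can be reproduced with a plain SVD on a (patches × moment-features) matrix; a minimal sketch (function name is ours, and the 0.999 threshold in the usage comment mirrors the analysis in the text):

```python
import numpy as np

def cumulative_explained_variance(X):
    """PCA via SVD on a (samples x features) matrix of moment values;
    returns the cumulative explained-variance ratio per component."""
    Xc = X - X.mean(axis=0)                      # centre the features
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    var = s ** 2                                 # component variances (scaled)
    return np.cumsum(var) / var.sum()

# e.g. the smallest number of components reaching 99.9% variance:
# k = int(np.searchsorted(cumulative_explained_variance(X), 0.999) + 1)
```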

4.5. Time Complexity

The time complexity of the Tchebichef polynomial generation, moment computation, and image reconstruction was measured on a CPU using a 12th Gen Intel® Core™ i7-12700F processor (Santa Clara, CA, USA) (2.19 GHz) with 64 GB of RAM. Figure 10a presents the average reconstruction time across varying maximum reconstruction orders, illustrating a time complexity of approximately O ( n log n ) . This results in an increased computational cost as the order rises. Notably, the plot shows overlapping curves for different images, suggesting that the computation time is largely dominated by the reconstruction order rather than the image content itself. Although minor variations exist, they are negligible in scale, indicating that the reconstruction time is relatively consistent across different images. This implies that the computational burden is primarily a function of algorithmic complexity rather than specific image characteristics.
Figure 10b illustrates the computation times for Tchebichef polynomials and their corresponding moments across different orders. Polynomial computation exhibits a time complexity of O ( n 3 ) , while moment computation requires O ( n 2 ) . As a result, calculating polynomials and moments for every image can be computationally inefficient. This cost can be mitigated by using pre-computed Tchebichef polynomials, which remain constant for a given order and resolution. Since the polynomial dimensions depend on image resolution, bases for commonly encountered sizes (e.g., 32, 64, 128, 256, 512, 1024, and 2048) can be pre-computed and stored in binary or CSV format. These can then be loaded at runtime as needed, allowing for reuse across multiple images, eliminating redundant computations, and significantly improving overall efficiency.
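The pre-computation strategy described above can be sketched as a small on-disk cache of polynomial bases (the file naming and `.npy` format are illustrative assumptions of ours, not the paper's storage scheme):

```python
import os
import numpy as np

def tcheb(N, pmax):
    """t_p(n) via the three-term recurrence from Section 2.1."""
    n = np.arange(N, dtype=float)
    T = np.zeros((pmax + 1, N))
    T[0] = 1.0
    if pmax >= 1:
        T[1] = 2 * n - N + 1
    for p in range(1, pmax):
        T[p + 1] = ((2 * p + 1) * (2 * n - N + 1) * T[p]
                    - p * (N ** 2 - p ** 2) * T[p - 1]) / (p + 1)
    return T

def load_basis(N, cache_dir):
    """Return the full basis t_p(n), p = 0..N-1, loading a cached copy
    from disk when available and computing + saving it otherwise, so
    repeated images of the same resolution reuse one matrix."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, f"tcheb_{N}.npy")
    if os.path.exists(path):
        return np.load(path)
    T = tcheb(N, N - 1)
    np.save(path, T)
    return T
```

Bases for the commonly encountered sizes (32, 64, ..., 2048) could be generated once offline with this helper and then memory-mapped at runtime.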
However, despite utilising pre-computed polynomials, the computational complexity of moment calculation remains at O ( n 2 ) , which may hinder real-time processing of high-resolution WSIs. Since all experiments in this study were conducted on a CPU, neither GPU acceleration nor multi-threaded parallel processing was explored. Given that the computation of moments involves repeated matrix operations, it lends itself well to parallelisation, which could significantly reduce runtime and enable real-time performance. Furthermore, implementing the proposed filter design on FPGA or Application-Specific Integrated Circuit (ASIC) platforms could offer additional improvements in computational efficiency through hardware-level parallelism.

4.6. Integration with Machine and Deep Learning

Through the reconstruction analysis, we highlighted the efficiency of TMs and their ability to accurately encode high-level semantic and structural information from WSIs, while our redundancy analysis further demonstrated that lower-order subsets of TMs are more suitable for image analysis, as higher-order moments often carry redundant or less salient information. For integration into machine learning or deep learning pipelines, a selected subset of moments can be used as input features, as described in [16]. Alternatively, the full set of TMs can be concatenated with deep features to enhance representation, as explored in [39]. This can be achieved either by computing the full set of moments and identifying the optimal order during feature engineering, or by directly limiting the computation to a predefined moment order based on the task requirements.

5. Conclusions

This study introduced a novel two-dimensional cascaded digital filter architecture for the efficient computation of TMs and demonstrated its ability to encode visual features for digital pathology. Through extensive experiments using the MIDOG++ dataset, we showed that lower-order TMs can faithfully reconstruct diagnostically relevant WSI patches, with image quality metrics such as SSIM, SNIRE, and BRISQUE confirming their ability to retain structural and perceptual fidelity. Additionally, our redundancy and dimensionality analyses revealed that most of the informative content is encoded within lower-order moments, highlighting the efficiency and compactness of TMs as handcrafted descriptors.
Unlike traditional handcrafted features, which typically offer pixel-wise interpretability, TMs enable reconstruction-based interpretability, allowing practitioners to visually assess the fidelity of the encoded features. This property makes them especially valuable in clinical applications where transparency and interpretability are essential: they provide image-level insight into what the model has learned and bridge the gap between abstract feature representations and human-understandable patterns.
The proposed filter architecture, while grounded in rigorous theoretical derivations, has not yet been benchmarked in practical hardware settings. Future work will focus on implementing this architecture on parallel hardware platforms, such as FPGAs and ASICs, which are well-suited for the cascaded and recursive structure of the design. Such platforms offer potential for achieving real-time performance even when processing high-resolution WSIs.
To support deployment in real-world diagnostic scenarios, particularly in edge-computing and resource-constrained environments, several optimisation strategies will be pursued. These include dynamic truncation of the maximum moment order based on task-specific accuracy requirements, thereby enabling a flexible balance between computational complexity and reconstruction quality. Additionally, memory-efficient schemes for storing and retrieving pre-computed Tchebichef polynomials will be explored, such as lookup tables or embedded ROM-based architectures, which can significantly reduce runtime and energy consumption.
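The dynamic-truncation strategy can be prototyped cheaply using Parseval's relation: for an orthonormal basis, the squared reconstruction error equals the energy of the discarded moments, so the smallest adequate order can be read off cumulative moment energies without performing any reconstructions. The sketch below assumes a max(p, q) truncation rule and uses illustrative names and a stand-in orthonormal basis throughout.

```python
import numpy as np

def min_sufficient_order(M: np.ndarray, energy_frac: float = 0.999) -> int:
    """Smallest K such that moments with max(p, q) <= K retain the target
    fraction of total energy (equivalently, bound the squared error)."""
    p, q = np.indices(M.shape)
    order = np.maximum(p, q)          # assumed truncation rule
    E = M ** 2
    # Energy retained for each candidate cut-off K = 0..N-1 (nondecreasing).
    retained = np.array([E[order <= K].sum() for K in range(M.shape[0])])
    return int(np.searchsorted(retained, energy_frac * E.sum()))

rng = np.random.default_rng(2)
T, _ = np.linalg.qr(rng.random((64, 64)))   # stand-in orthonormal basis
f = rng.random((64, 64))
M = T @ f @ T.T
K = min_sufficient_order(M, 0.999)
```

On hardware, the same rule could be evaluated incrementally as moments stream out of the filter, stopping computation once the energy target is met.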
Overall, this research establishes Tchebichef Moments as robust, interpretable, and scalable feature descriptors for computational pathology and lays the foundation for further advancements in real-time and embedded WSI analysis systems.

Author Contributions

Conceptualisation, K.W.K. and B.H.S.A.; methodology, K.W.K. and B.H.S.A.; software, K.W.K.; validation, K.W.K. and B.H.S.A.; formal analysis, K.W.K. and B.H.S.A.; investigation, K.W.K. and B.H.S.A.; resources, B.H.S.A.; data curation, K.W.K.; writing—original draft preparation, K.W.K., W.J. and B.H.S.A.; writing—review and editing, K.W.K. and B.H.S.A.; visualisation, K.W.K. and B.H.S.A.; supervision, B.H.S.A.; project administration, B.H.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The MIDOG++ dataset used in this study is available at https://github.com/DeepMicroscopy/MIDOGpp (accessed on 20 January 2025). The dataset is described in detail in [35].

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Reconstruction Results of WSIs with Python

This appendix presents additional WSIs reconstructed with a Python (version 3.13.2) implementation of TMs, which shows a noticeable increase in SNIRE compared with the C++ implementation. This discrepancy highlights how differences in floating-point precision, numerical libraries, and computation pipelines between language implementations can significantly affect reconstruction quality. In particular, the Python pipeline (relying on NumPy and similar libraries) exhibited greater numerical instability than the C++ implementation, where stricter control over low-level arithmetic (e.g., consistent use of double-precision types and optimised compilers) yielded more accurate computations. These findings emphasise the importance of implementation choices when deploying TMs in practice, particularly for high-resolution or precision-critical applications.
Table A1. Additional reconstructed WSIs of dimensions 1000 × 1000 with varying maximum reconstruction orders.
Image 1 (original: Electronics 14 03148 i057; reconstructions: i058–i063)
  Order      50        100       200       400       800       1000
  SNIRE      0.8126    0.7003    0.7134    0.1739    0.0108    0.0
  SSIM       0.3784    0.5030    0.5113    0.9341    0.9965    1.0
  BRISQUE    98.3239   79.8000   64.7209   49.9954   39.7217   40.5466

Image 2 (original: Electronics 14 03148 i064; reconstructions: i065–i070)
  Order      50        100       200       400       800       1000
  SNIRE      0.8738    0.8070    0.6749    0.3732    0.0930    0.0
  SSIM       0.2725    0.3809    0.6288    0.9053    0.9929    1.0
  BRISQUE    96.5201   76.6921   65.5243   52.9512   44.5151   47.4998

Image 3 (original: Electronics 14 03148 i071; reconstructions: i072–i077)
  Order      50        100       200       400       800       1000
  SNIRE      0.9802    0.8936    0.7702    0.5041    0.0430    0.0
  SSIM       0.2263    0.3635    0.5881    0.8688    0.9938    1.0
  BRISQUE    95.6816   75.3904   60.1599   43.3973   30.1158   29.1159

Image 4 (original: Electronics 14 03148 i078; reconstructions: i079–i084)
  Order      50        100       200       400       800       1000
  SNIRE      0.8914    0.7790    0.6222    0.3856    0.0455    0.0
  SSIM       0.3488    0.4883    0.6662    0.8709    0.9912    1.0
  BRISQUE    94.8186   78.9704   59.4039   33.0006   13.8501   11.3324

Image 5 (original: Electronics 14 03148 i085; reconstructions: i086–i091)
  Order      50        100       200       400       800       1000
  SNIRE      0.9144    0.8571    0.7529    0.5357    0.0830    0.0
  SSIM       0.1993    0.3306    0.5501    0.8235    0.9879    1.0
  BRISQUE    95.9375   76.0682   58.4476   45.1389   29.7955   26.9938

References

  1. Liang, P.; Zheng, H.; Li, H.; Gong, Y.; Bakas, S.; Fan, Y. Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Marrakech, Morocco, 6–10 October 2024; pp. 102–112. [Google Scholar]
  2. El Nahhas, O.S.; van Treeck, M.; Wölflein, G.; Unger, M.; Ligero, M.; Lenz, T.; Wagner, S.J.; Hewitt, K.J.; Khader, F.; Foersch, S.; et al. From whole-slide image to biomarker prediction: End-to-end weakly supervised deep learning in computational pathology. Nat. Protoc. 2025, 20, 293–316. [Google Scholar] [CrossRef]
  3. Lee, M. Recent Advancements in Deep Learning Using Whole Slide Imaging for Cancer Prognosis. Bioengineering 2023, 10, 897. [Google Scholar] [CrossRef]
  4. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
  5. Holzinger, A.; Biemann, C.; Pattichis, C.S.; Kell, D.B. What do we need to build explainable AI systems for the medical domain? arXiv 2017, arXiv:1712.09923. [Google Scholar] [CrossRef]
  6. Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296. [Google Scholar] [CrossRef]
  7. Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhudinov, R.; Zemel, R.; Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2048–2057. [Google Scholar]
  8. Ying, Z.; Bourgeois, D.; You, J.; Zitnik, M.; Leskovec, J. Gnnexplainer: Generating explanations for graph neural networks. Adv. Neural Inf. Process. Syst. 2019, 32, 829. [Google Scholar]
  9. Lu, X.; Ying, Y.; Chen, J.; Chen, Z.; Wu, Y.; Prasanna, P.; Chen, X.; Jing, M.; Liu, Z.; Lu, C. From digitized whole-slide histology images to biomarker discovery: A protocol for handcrafted feature analysis in brain cancer pathology. Brain-X 2025, 3, e70030. [Google Scholar] [CrossRef]
  10. Alhindi, T.J.; Kalra, S.; Ng, K.H.; Afrin, A.; Tizhoosh, H.R. Comparing LBP, HOG and deep features for classification of histopathology images. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7. [Google Scholar]
  11. Bolus Al Baqain, F.; Sultan Al-Kadi, O. Comparative Analysis of Hand-Crafted and Machine-Driven Histopathological Features for Prostate Cancer Classification and Segmentation. arXiv 2025, arXiv:2501.12415. [Google Scholar]
  12. Huang, X.; Li, Z.; Zhang, M.; Gao, S. Fusing hand-crafted and deep-learning features in a convolutional neural network model to identify prostate cancer in pathology images. Front. Oncol. 2022, 12, 994950. [Google Scholar] [CrossRef]
  13. Khotanzad, A.; Hong, Y.H. Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 489–497. [Google Scholar] [CrossRef]
  14. Chong, C.W.; Raveendran, P.; Mukundan, R. Translation and scale invariants of Legendre moments. Pattern Recognit. 2004, 37, 119–129. [Google Scholar] [CrossRef]
  15. Flusser, J.; Zitova, B.; Suk, T. Moments and Moment Invariants in Pattern Recognition; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  16. Di Ruberto, C.; Loddo, A.; Putzu, L. On The Potential of Image Moments for Medical Diagnosis. J. Imaging 2023, 9, 70. [Google Scholar] [CrossRef]
  17. Mukundan, R.; Ong, S.; Lee, P.A. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364. [Google Scholar] [CrossRef] [PubMed]
  18. Honarvar Shakibaei Asli, B.; Paramesran, R.; Lim, C.L. The fast recursive computation of Tchebichef moment and its inverse transform based on Z-transform. Digit. Signal Process. 2013, 23, 1738–1746. [Google Scholar] [CrossRef]
  19. Bai, X.; Ju, G.; Xu, B.; Gao, Y.; Zhang, C.; Wang, S.; Ma, H.; Xu, S. Active alignment of space astronomical telescopes by matching arbitrary multi-field stellar image features. Opt. Express 2021, 29, 24446–24465. [Google Scholar] [CrossRef] [PubMed]
  20. Bastani, A.; Ahouz, F. High capacity and secure watermarking for medical images using Tchebichef Moments. Radioengineering 2020, 29, 636–643. [Google Scholar] [CrossRef]
  21. Huang, H.; Coatrieux, G.; Shu, H.; Luo, L.; Roux, C. Blind forensics in medical imaging based on Tchebichef image moments. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 4473–4476. [Google Scholar]
  22. Nava, R.; González, G.; Kybic, J.; Escalante-Ramírez, B. Classification of tumor epithelium and stroma in colorectal cancer based on discrete Tchebichef moments. In Clinical Image-Based Procedures. Translational Research in Medical Imaging: 4th International Workshop, CLIP 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, 5 October 2015; Revised Selected Papers 4; Springer: Berlin/Heidelberg, Germany, 2016; pp. 79–87. [Google Scholar]
  23. Temme, N.M. Special Functions: An Introduction to the Classical Functions of Mathematical Physics; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  24. Shivaleela Patil, D.S.D. Composite Sketch Based Face Recognition Using ANN Classification. Int. J. Sci. Technol. Res. 2020, 9, 42–50. [Google Scholar]
  25. Bourzik, A.; Bouikhalen, B.; El-Mekkaoui, J.; Hjouji, A. A comparative study and performance evaluation of discrete Tchebichef moments for image analysis. In Proceedings of the 6th International Conference on Networking, Intelligent Systems & Security, Rabat, Morocco, 20–22 September 2023; pp. 1–7. [Google Scholar]
  26. Barczak, A.; Reyes, N.; Susnjak, T. Assessment of the local tchebichef moments method for texture classification by fine tuning extraction parameters. arXiv 2019, arXiv:1910.09758. [Google Scholar] [CrossRef]
  27. Wu, H.; Yan, S. Computing invariants of Tchebichef moments for shape based image retrieval. Neurocomputing 2016, 215, 110–117. [Google Scholar] [CrossRef]
  28. Hoque, M.Z.; Keskinarkaus, A.; Nyberg, P.; Seppänen, T. Stain normalization methods for histopathology image analysis: A comprehensive review and experimental comparison. Inf. Fusion 2024, 102, 101997. [Google Scholar] [CrossRef]
  29. Zhu, H.; Li, Q.; Liu, Q. Quaternion discrete Tchebichef moments and their applications. Int. J. Signal Process. Image Process. Pattern Recognit. 2014, 7, 149–162. [Google Scholar] [CrossRef]
  30. Elouariachi, I.; Benouini, R.; Zenkouar, K.; Zarghili, A. Robust hand gesture recognition system based on a new set of quaternion Tchebichef moment invariants. Pattern Anal. Appl. 2020, 23, 1337–1353. [Google Scholar] [CrossRef]
  31. El Alami, A.; Mesbah, A.; Berrahou, N.; Berrahou, A.; Jamil, M.O.; Qjidaa, H. Fast and Accurate Color Image Classification Based on Quaternion Tchebichef Moments and Quaternion Convolutional Neural Network. In Proceedings of the International Conference on Electronic Engineering and Renewable Energy Systems, Saidia, Morocco, 13–14 May 2022; pp. 329–337. [Google Scholar]
  32. El Alami, A.; Berrahou, N.; Lakhili, Z.; Mesbah, A.; Berrahou, A.; Qjidaa, H. Efficient color face recognition based on quaternion discrete orthogonal moments neural networks. Multimed. Tools Appl. 2022, 81, 7685–7710. [Google Scholar] [CrossRef]
  33. Milne-Thomson, L.M. The Calculus of Finite Differences; American Mathematical Society: Providence, RI, USA, 2000. [Google Scholar]
  34. Honarvar Shakibaei Asli, B.; Flusser, J.; Zhao, Y.; Erkoyuncu, J.A.; Krishnan, K.B.; Farrokhi, Y.; Roy, R. Ultrasound image filtering and reconstruction using DCT/IDCT filter structure. IEEE Access 2020, 8, 141342–141357. [Google Scholar] [CrossRef]
  35. Aubreville, M.; Wilm, F.; Stathonikos, N.; Breininger, K.; Donovan, T.A.; Jabari, S.; Veta, M.; Ganz, J.; Ammeling, J.; van Diest, P.J.; et al. A comprehensive multi-domain dataset for mitotic figure detection. Sci. Data 2023, 10, 484. [Google Scholar] [CrossRef]
  36. Camacho-Bello, C.; Rivera-Lopez, J.S. Some computational aspects of Tchebichef moments for higher orders. Pattern Recognit. Lett. 2018, 112, 332–339. [Google Scholar] [CrossRef]
  37. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
  38. Mittal, A.; Moorthy, A.K.; Bovik, A.C. Blind/referenceless image spatial quality evaluator. In Proceedings of the 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 723–727. [Google Scholar]
  39. El Madmoune, Y.; El Ouariachi, I.; Zenkouar, K.; Zahi, A. Breast Cancer Histopathological Images Classification Using Transfer Learning Combined with Separable Quaternion Moments. Int. J. Intell. Eng. Syst. 2025, 18, 409–425. [Google Scholar] [CrossRef]
Figure 1. Tchebichef polynomials visualisation: The top panel illustrates the plot of scaled Tchebichef polynomials for N = 8 , while the bottom panel indicates the ( 8 × 8 ) array of basis images for two-dimensional discrete Tchebichef moments. The moment order increases along the horizontal (left to right) and vertical (top to bottom) directions, representing increasing polynomial orders ( p , q ) .
Electronics 14 03148 g001
Figure 2. Digital filter structure to generate Tchebichef polynomials of order p using cascaded delays and adders. The coefficients of the delay feedback are binomial terms.
Electronics 14 03148 g002
Figure 3. Flowchart of the proposed method.
Electronics 14 03148 g003
Figure 4. Sample H&E WSIs from MIDOG++ dataset.
Electronics 14 03148 g004
Figure 5. Evaluation of reconstructed WSIs with dimensions of (a) 1000 × 1000 and (b) 800 × 1000 with respect to the order of TMs.
Electronics 14 03148 g005a
Electronics 14 03148 g005b
Figure 6. Fully reconstructed large square WSIs with varying dimensions by using TMs. (ad) Square WSIs from 2000 × 2000 to 5000 × 5000 .
Electronics 14 03148 g006
Figure 7. Fully reconstructed rectangular WSIs with varying dimensions by using TMs.
Electronics 14 03148 g007
Figure 8. Correlation matrices of Tchebichef moments for orders 0–100 and 900–1000. While lower-order moments show weak inter-order correlation, higher-order moments exhibit stronger localised redundancy, as seen in diagonal patterns off the main axis.
Electronics 14 03148 g008
Figure 9. Cumulative explained variance of TMs. Over 99.9% of the variance is captured by moments of order less than 200, indicating that lower-order moments contain most of the informative content.
Electronics 14 03148 g009
Figure 10. Computation time of (a) image reconstruction and (b) computation of Tchebichef polynomials and moments for varying image sizes (orders). The timing includes separate measurements for polynomial generation, moment calculation, and full image reconstruction across increasing spatial resolutions from 50 × 50 to 1000 × 1000 pixels.
Electronics 14 03148 g010
Table 1. Reconstructed WSIs of dimensions 1000 × 1000 with varying maximum reconstruction orders.
Image 1 (original: Electronics 14 03148 i001; reconstructions: i002–i007)
  Order      50        100       200       400       800       1000
  SNIRE      0.0106    0.0059    0.0025    0.0005    0.0       0.0
  SSIM       0.3791    0.4711    0.6834    0.9346    0.9966    1.0
  BRISQUE    76.1671   61.6342   57.3636   42.0573   35.6887   38.4073

Image 2 (original: Electronics 14 03148 i008; reconstructions: i009–i014)
  Order      50        100       200       400       800       1000
  SNIRE      0.0109    0.0046    0.0015    0.0002    0.0       0.0
  SSIM       0.4754    0.6133    0.8075    0.9657    0.9973    1.0
  BRISQUE    78.4379   65.4458   55.1812   40.4472   32.3747   34.0520

Image 3 (original: Electronics 14 03148 i015; reconstructions: i016–i021)
  Order      50        100       200       400       800       1000
  SNIRE      0.0425    0.0226    0.0102    0.0035    0.0002    0.0
  SSIM       0.2273    0.3809    0.6140    0.8625    0.9915    1.0
  BRISQUE    83.9600   67.2707   56.5922   35.9452   14.3345   13.0210

Image 4 (original: Electronics 14 03148 i022; reconstructions: i023–i028)
  Order      50        100       200       400       800       1000
  SNIRE      0.0254    0.0144    0.0076    0.0026    0.0002    0.0
  SSIM       0.3760    0.4845    0.6566    0.8783    0.9925    1.0
  BRISQUE    75.9400   55.0308   46.4713   37.6347   12.6129   21.5446
Table 2. Reconstructed natural images of dimensions 1000 × 1000 with varying maximum reconstruction orders.
Image 1 (original: Electronics 14 03148 i029; reconstructions: i030–i035)
  Order      50        100       200       400       800       1000
  SNIRE      0.6768    0.6106    0.5383    0.3718    0.0157    0.0
  SSIM       0.3959    0.4299    0.5595    0.8481    0.9974    1.0
  BRISQUE    98.9336   79.7782   65.3323   48.8923   44.6995   44.7999

Image 2 (original: Electronics 14 03148 i036; reconstructions: i037–i042)
  Order      50        100       200       400       800       1000
  SNIRE      0.9394    0.8500    0.7469    0.5533    0.1064    0.0
  SSIM       0.3137    0.3825    0.5576    0.8217    0.9855    1.0
  BRISQUE    97.0276   76.7839   59.3039   39.1302   9.1570    7.8628

Image 3 (original: Electronics 14 03148 i043; reconstructions: i044–i049)
  Order      50        100       200       400       800       1000
  SNIRE      0.4384    0.2450    0.1384    0.0834    0.0192    0.0
  SSIM       0.7767    0.8388    0.8881    0.9355    0.9881    1.0
  BRISQUE    95.3218   82.1695   68.2615   47.7812   15.1678   11.6452

Image 4 (original: Electronics 14 03148 i050; reconstructions: i051–i056)
  Order      50        100       200       400       800       1000
  SNIRE      1.2027    1.1487    1.0583    0.8481    0.2612    0.0
  SSIM       0.1334    0.2032    0.4147    0.7871    0.9892    1.0
  BRISQUE    96.7264   74.2226   65.2873   44.1298   19.9911   16.5829
