Article

A Lightweight Learning-Based QTMT Decision Framework for VVC Inter-Coding

Siham Bakkouri and Ibtissam Bakkouri
TIAD Laboratory, Sultan Moulay Slimane University, Beni Mellal 23000, Morocco
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(3), 1368; https://doi.org/10.3390/app16031368
Submission received: 1 January 2026 / Revised: 19 January 2026 / Accepted: 27 January 2026 / Published: 29 January 2026

Abstract

The Versatile Video Coding (VVC) standard achieves high compression efficiency through its flexible quadtree with nested multi-type tree (QTMT) partitioning structure, at the cost of significantly increased encoding complexity. In this paper, a fast QTMT partition decision method for VVC inter-coding is proposed to reduce computational complexity while preserving rate–distortion efficiency. The proposed approach exploits texture characteristics derived from gray-level co-occurrence matrix (GLCM) analysis to guide partitioning decisions. A feature selection process identifies homogeneity as the most relevant descriptor for characterizing partitioning behavior. Based on this descriptor, a gradient boosting machine (GBM) model is trained to learn adaptive decision thresholds that enable a homogeneity-driven restriction of QTMT partition candidates. By progressively limiting unnecessary partition evaluations according to local texture properties, the proposed method reduces the reliance on exhaustive rate–distortion optimization through a lightweight and content-aware decision strategy. Experimental results demonstrate that the proposed approach achieves substantial encoding time reduction with negligible impact on coding performance.

1. Introduction

The continuous expansion of Internet-based video services, together with the widespread adoption of advanced video formats such as ultra-high-definition (UHD) 4K/8K, high frame rate (HFR), wide color gamut (WCG), high dynamic range (HDR), and immersive virtual reality (VR), has imposed increasingly stringent requirements on video compression efficiency. Consequently, the design of video coding algorithms that jointly achieve high compression performance and manageable computational complexity has become a critical research challenge in both academia and industry.
Versatile Video Coding (VVC) [1,2], the most recent international video coding standard, represents a major advancement over previous standards, including Advanced Video Coding (AVC) [3,4] and High Efficiency Video Coding (HEVC) [5,6]. Developed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) under the Joint Video Experts Team (JVET), VVC was finalized in July 2020 to address the compression requirements of next-generation video applications. By integrating a wide range of advanced coding tools, VVC achieves a substantial bit-rate reduction compared to HEVC for equivalent perceptual quality.
This gain in coding efficiency, however, is accompanied by a significant increase in encoder computational complexity. A major contributor to this complexity is the highly flexible block partitioning structure based on the Quadtree with nested Multi-Type Tree (QTMT) [7]. Unlike HEVC, which relies exclusively on quadtree-based partitioning, QTMT enables a diverse set of partitioning configurations that adapt to local texture characteristics and motion patterns [8]. Starting from a Coding Tree Unit (CTU) of size 128 × 128, the recursive exploration of multiple partitioning structures leads to a large number of candidate Coding Unit (CU) configurations that must be evaluated during rate–distortion optimization (RDO).
In particular, during inter-coding, the encoder performs an exhaustive search over the QTMT decision space to determine the partitioning structure that minimizes the rate–distortion cost for each CU. Although this exhaustive strategy ensures optimal coding decisions, it results in a substantial increase in encoding time, posing a major challenge for real-time applications and resource-constrained encoder implementations. Consequently, reducing the complexity of QTMT partition decision while preserving rate–distortion performance has become an important research problem in VVC optimization [9].
In this paper, a fast QTMT partition decision algorithm for VVC inter-coding is proposed to address this challenge. The proposed approach focuses on the 64 × 64 CU level and exploits texture information derived from the Gray-Level Co-occurrence Matrix (GLCM). A feature selection process identifies homogeneity as the most relevant descriptor for characterizing CU texture properties. Based on this descriptor, a Gradient Boosting Machine (GBM) model is employed to learn adaptive decision thresholds that guide a homogeneity-driven restriction of QTMT partition candidates. By progressively limiting unnecessary partition evaluations according to CU texture characteristics, the proposed method significantly reduces the reliance on exhaustive RDO checks through a lightweight and content-aware decision strategy, while preserving coding behavior consistent with the VVC reference encoder.
The remainder of this paper is organized as follows. Section 2 reviews related work on VVC complexity reduction. Section 3 presents the statistical analysis motivating the proposed approach. Section 4 introduces the Gradient Boosting Machine model. Section 5 describes the GLCM-based feature extraction and selection process. The proposed QTMT decision algorithm is detailed in Section 6. Experimental results are discussed in Section 7, and Section 8 concludes the paper.

2. Related Works

In recent years, considerable research efforts have been devoted to reducing the computational complexity of VVC encoders [10,11,12,13,14,15,16,17,18,19,20]. Most existing approaches primarily focus on fast partition decision strategies and can be broadly classified into methods targeting intra-coding and those addressing inter-prediction complexity.
For intra-coding, numerous fast QTMT decision algorithms have been proposed to alleviate the high computational burden introduced by the flexible partitioning structure. In [10], a gradient-based QTMT decision approach employing the Scharr operator was proposed to capture local texture variations and enable early termination of partitioning. A multistage QTMT decision framework was introduced in [11], where partition decisions were formulated as a sequence of binary classification problems to dynamically adapt CU sizes. Shang et al. [12] presented a fast CU size decision method that combines coding and texture features to accelerate quadtree and multi-type tree exploration. Lightweight learning-based strategies were further investigated in [13], where a compact neural network was employed to avoid redundant partition checks. Similarly, CNN-based fast intra-partitioning schemes were reported in [14,15], demonstrating the effectiveness of deep learning models for predicting QTMT partition modes.
In contrast, research on complexity reduction for VVC inter-prediction remains comparatively limited. Early studies mainly focused on accelerating motion estimation, such as the bypass zone search (BZS) algorithm proposed in [16], which integrates learning-based concepts with efficient search strategies. More recent works have explored learning-based early QTMT decision schemes for inter-coding. In [17], a multi-information fusion CNN combined with content complexity analysis was proposed to enable early CU termination and accelerate inter-prediction. Tissier et al. [18] employed CNN-based split probability estimation to prune unlikely partition candidates. In [19], a joint classification–prediction framework was introduced, where CTUs were assigned to subnetworks of different complexities based on a partition homogeneity map. Additionally, a GBM-based fast QTMT decision method for inter-coding was presented in [20], using Average Local Variance as a texture descriptor to guide partition decisions.
It is worth emphasizing that most existing learning-based approaches achieve high prediction accuracy at the cost of increased computational complexity. This overhead mainly originates from the use of deep neural networks or the extraction of multiple handcrafted features, which require expensive online inference and substantially increase encoder runtime. Although these methods effectively reduce the partition search space, their high computational burden limits practical deployment, particularly in real-time and resource-constrained encoding scenarios.
In contrast, the proposed method deliberately adopts a lightweight GLCM-based homogeneity descriptor, which can be computed with very low computational cost while still providing sufficient discriminative power for QTMT decision making. This design choice enables a more favorable trade-off between feature extraction overhead and prediction accuracy, thereby making the proposed framework more suitable for practical and real-world encoder implementations.
Although the above methods achieve notable reductions in encoder complexity, fast QTMT decision techniques for VVC inter-coding remain relatively underexplored. Moreover, many existing approaches rely on complex decision pipelines or computationally demanding inference models, which may limit their robustness and practicality in real encoder implementations. Motivated by these limitations, this paper proposes a lightweight QTMT partition decision algorithm for VVC inter-coding based on statistical texture analysis and machine learning-based thresholding. The proposed approach aims to effectively reduce encoding complexity while preserving rate–distortion performance through a conservative and content-aware pruning strategy.

3. Motivation and Statistical Analysis

This section details the motivation for the proposed fast QTMT decision strategy by revisiting the QTMT partitioning process in inter-prediction and by analyzing the statistical behavior of partition modes at the 64 × 64 CU level. The objective is to highlight consistent partitioning patterns that justify the design of a simplified and content-aware QTMT decision mechanism with reduced computational complexity.

3.1. QTMT Partitioning in Inter-Coding

The flexible QTMT partitioning structure constitutes one of the key components enabling the high compression efficiency of new video coding standards. By allowing coding units to be recursively divided using multiple partition shapes, the encoder can effectively adapt to diverse texture distributions and motion characteristics encountered in inter-coded sequences.
In the inter-coding process, each frame is initially partitioned into CTUs of size 128 × 128, which represent the root level of the partitioning hierarchy. At this level, partitioning is restricted to quadtree splitting, and each CTU is therefore mandatorily divided into four 64 × 64 CUs [21]. This initial QT decomposition establishes a uniform and fixed entry point for subsequent partitioning decisions and ensures consistent processing across all coding blocks.
Once the 64 × 64 CU level is reached, the full QTMT decision space becomes available. At this stage, the encoder evaluates whether a CU should remain unsplit or be further partitioned. When further partitioning is considered, different partitioning structures are explored and can be broadly categorized into square and rectangular decompositions. Square partitions are generated through additional QT splits, whereas rectangular partitions are obtained using multi-type tree (MTT) structures. The MTT framework enables both binary and ternary splits along horizontal and vertical directions, allowing the encoder to efficiently represent directional textures and elongated motion patterns [22].
Figure 1 illustrates the complete QTMT partitioning structures available at the CU level, including QT, horizontal and vertical BT, and horizontal and vertical TT modes. While this flexibility provides strong adaptation capability, it also significantly increases the number of candidate configurations evaluated during rate–distortion optimization.
To determine the optimal partitioning configuration, the encoder applies a top-down recursive splitting process until the minimum CU size constraint is reached. For each candidate CU generated during this process, the corresponding rate–distortion cost is evaluated, followed by a bottom-up pruning stage that selects the configuration minimizing the overall cost at the CTU level. Although this exhaustive strategy ensures optimal partitioning from a rate–distortion perspective, it results in a substantial increase in encoding complexity.
In practice, statistical observations indicate that only a limited subset of QTMT partition modes is frequently selected at the 64 × 64 CU level, while many evaluated partition candidates do not contribute to the final optimal structure. This redundancy suggests that early and reliable identification of unfavorable partition candidates can effectively reduce computational complexity without noticeably affecting coding efficiency. These observations form the basis for the fast QTMT decision strategy proposed in this work, which focuses on simplifying partition decisions at the 64 × 64 CU level during inter-prediction.

3.2. Statistical Analysis of QTMT Partitioning

To further investigate the QTMT partitioning behavior in inter prediction and its dependency on quantization strength, a detailed statistical analysis was performed at the 64 × 64 CU level. This level constitutes a critical decision stage in the VVC partitioning hierarchy, as it represents the first depth at which all QTMT partitioning modes are enabled. The analysis was conducted using VTM 23.5 (VVC Test Model) [23] under the Random Access configuration and the Common Test Conditions (CTC) [24]. Four representative sequences (Tango, DaylightRoad2, Cactus, and PartyScene) were selected to cover a wide range of spatial complexity, texture regularity, and motion characteristics.
Figure 2 illustrates the distribution of QTMT partitioning modes (No-Split, QT, BT, and TT) for 64 × 64 CUs under different quantization parameters (QP 22, 27, 32, and 37). Several consistent and sequence-independent trends can be observed.
First, the proportion of No-Split CUs exhibits a monotonic increase as the quantization parameter increases. For instance, in the Tango sequence, the No-Split ratio increases from 14.76% at QP22 to 39.56% at QP37. A similar trend is observed in DaylightRoad2, where the No-Split percentage rises from 13.94% to 34.33%. In the Cactus sequence, No-Split also increases from 17.51% to 24.57%, while in PartyScene it grows from 16.67% to 29.63%. This behavior indicates that higher quantization levels reduce the coding benefit of finer spatial partitioning, encouraging the encoder to preserve larger CUs, especially in relatively smooth or slowly varying regions.
Second, QT partitioning is predominant at lower quantization levels but gradually loses importance as the QP increases. At QP22, QT accounts for a substantial portion of the selected modes across all sequences (e.g., 33.59% for Tango and 70.69% for PartyScene). However, as the quantization strength increases, the QT usage consistently decreases. For example, in Tango, the QT ratio drops from 33.59% at QP22 to 22.76% at QP37, while in DaylightRoad2 it decreases from 25.73% to 15.66%. A similar decreasing trend is observed in Cactus (from 40.73% to 26.97%) and in PartyScene (from 70.69% to 46.19%). This trend suggests that square partitions become less efficient under coarse quantization, particularly when high-frequency texture details are suppressed.
Conversely, the utilization of MTT-based partitions, namely BT and TT, generally increases with the quantization parameter. In the Tango sequence, the BT ratio grows from 31.69% at QP22 to 44.44% at QP37, while TT remains relatively stable with values around 34%. In DaylightRoad2, both BT and TT exhibit a clear upward trend, with TT becoming the dominant mode at high QPs (reaching 48.01% at QP32). In Cactus, TT increases from 36.19% to 43.33%, while in PartyScene BT increases significantly from 9.72% to 29.11%. This shift reflects the encoder’s preference for rectangular and directional partitions when finer texture information is diminished, allowing better adaptation to elongated structures and motion boundaries.
Overall, the statistical evidence reveals a clear and consistent quantization-dependent transition in QTMT partitioning behavior at the 64 × 64 CU level. As the quantization parameter increases, QT partitions are progressively replaced by BT and TT structures, while the proportion of No-Split CUs also increases. These observations indicate that a large fraction of QT evaluations performed at higher QPs are unlikely to be selected by the encoder, highlighting substantial redundancy in the exhaustive QTMT search process [25].
This analysis provides strong empirical justification for the proposed fast QTMT decision strategy. By exploiting the predictable evolution of partitioning behavior with respect to quantization strength and content characteristics, unnecessary QTMT evaluations can be safely pruned, enabling significant complexity reduction with minimal impact on rate–distortion performance.

4. Gradient Boosting Machine Algorithm

Machine learning techniques provide an effective framework for data-driven modeling of complex relationships by automatically learning discriminative patterns from observed data, without relying on explicitly handcrafted decision rules [26,27,28]. Among these techniques, ensemble learning has demonstrated strong performance for both classification and regression tasks by combining multiple weak learners into a single, more robust predictive model. In particular, boosting-based methods iteratively construct a sequence of weak predictors, where each newly added learner is trained to compensate for the prediction errors of the current ensemble [29,30].
Among boosting algorithms, AdaBoost [31] and GBM [32,33] are widely adopted. Compared to AdaBoost, GBM offers greater flexibility by directly optimizing a differentiable loss function through gradient descent in function space [34]. This formulation enables GBM to effectively capture nonlinear relationships between input features and decision variables, making it particularly suitable for learning adaptive decision boundaries in complex coding processes.
In practice, GBM is commonly implemented using an ensemble of regression trees [35,36,37]. The model is constructed in an additive and stage-wise manner, where each weak learner typically corresponds to a shallow regression tree trained to approximate the residual errors of the current ensemble. By sequentially adding such learners, GBM progressively refines the prediction function while preserving good generalization capability. In lightweight configurations, the weak learners may consist of shallow trees or even decision stumps [38,39], which helps to limit computational overhead and facilitates practical integration into real-time systems.
Let the training dataset be defined as
$$
D_n = \{(x_1, y_1), \ldots, (x_n, y_n)\}, \tag{1}
$$
where $x_i$ denotes the input feature vector and $y_i$ represents the corresponding class label. The objective of GBM is to learn a predictive function $F(x)$ that minimizes a differentiable loss function $L(y_i, F(x_i))$ over the training samples.
At iteration m, the model update is obtained by solving
$$
\gamma_m = \arg\min_{\gamma} \sum_{i=1}^{n} L\big(y_i,\, F_{m-1}(x_i) + \gamma\, h_m(x_i)\big), \tag{2}
$$
where $h_m(x)$ denotes the m-th weak learner.
Each weak learner is represented as a regression tree composed of $J_m$ terminal regions and can be expressed as
$$
h_m(x) = \sum_{j=1}^{J_m} b_{jm}\, \mathbb{1}_{R_{jm}}(x), \tag{3}
$$
where $R_{jm}$ denotes the j-th terminal region and $b_{jm}$ is the prediction value associated with that region.
Following the formulation introduced by Friedman [32], an individual optimal step size $\gamma_{jm}$ is computed for each terminal region, leading to the ensemble update
$$
F_m(x) = F_{m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jm}\, \mathbb{1}_{R_{jm}}(x), \qquad \gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}} L\big(y_i,\, F_{m-1}(x_i) + \gamma\big). \tag{4}
$$
Within the proposed framework, the GBM model is trained using a single statistical texture descriptor extracted from GLCM analysis. This descriptor exhibits a strong correlation with coding unit partitioning behavior at the 64 × 64 level in inter coding. To explicitly reflect the hierarchical nature of QTMT decisions, two training datasets are constructed. The first dataset is designed to learn the decision boundary between split and no-split cases, while the second dataset focuses on discriminating between QT and MTT partitioning when further splitting is required. For both datasets, the selected texture descriptor serves as the sole input feature, and the corresponding partition decision is used as the target label.
By training the GBM model on these datasets across different quantization parameters, adaptive decision thresholds are learned for each stage of the partitioning process. These thresholds are subsequently exploited to guide fast QTMT decisions during inter coding, enabling early termination or selective partition evaluation without resorting to exhaustive rate–distortion optimization.
In our implementation, the GBM hyperparameters were empirically selected based on preliminary experiments to achieve a good balance between model complexity and generalization capability. Specifically, shallow regression trees with a maximum depth of 3 were adopted as weak learners to avoid overfitting. The learning rate was set to 0.1 to ensure stable convergence, while the number of boosting iterations was fixed to 100. These values were chosen to maintain a lightweight model suitable for practical encoder integration.
The complete training procedure of the Gradient Boosting Machine is summarized in Algorithm 1.
Algorithm 1 Gradient Boosting Machine (GBM)
Input: Training set $D_n$, loss function $L(y, F(x))$, number of iterations T
Output: Final prediction model $F_T(x)$
1: Initialize model: $F_0(x) = \arg\min_{\gamma} \sum_{i=1}^{n} L(y_i, \gamma)$
2: for t = 1 to T do
3:   Compute pseudo-residuals: $r_{it} = -\left[ \dfrac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F = F_{t-1}}$
4:   Fit a regression tree $h_t(x)$ to $\{(x_i, r_{it})\}_{i=1}^{n}$
5:   for each terminal region $R_{jt}$ do
6:     Compute the optimal step size: $\gamma_{jt} = \arg\min_{\gamma} \sum_{x_i \in R_{jt}} L\big(y_i, F_{t-1}(x_i) + \gamma\big)$
7:   end for
8:   Update the model: $F_t(x) = F_{t-1}(x) + \sum_{j} \gamma_{jt}\, \mathbb{1}_{R_{jt}}(x)$
9: end for
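As a concrete illustration of this offline training stage, the following sketch shows how the split/no-split classifier could be configured with scikit-learn using the hyperparameters reported above (depth-3 trees, learning rate 0.1, 100 boosting iterations). The dataset here is a random placeholder and all variable names are illustrative, not taken from the actual implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder dataset: one GLCM homogeneity value per 64x64 CU, paired with
# the split/no-split decision taken by the reference encoder for that CU.
rng = np.random.default_rng(0)
homogeneity = rng.random((10_000, 1))                 # single scalar feature
split_flag = (homogeneity[:, 0] < 0.55).astype(int)   # synthetic labels

# Hyperparameters as reported above: shallow depth-3 weak learners,
# learning rate 0.1, and 100 boosting iterations.
gbm_split = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=3)
gbm_split.fit(homogeneity, split_flag)

# A second classifier of the same form would be trained on the split CUs to
# separate QT from MTT outcomes, mirroring the two datasets described above.
# gbm_qt_vs_mtt = GradientBoostingClassifier(
#     n_estimators=100, learning_rate=0.1, max_depth=3).fit(X_split, qt_flag)
```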
The trained GBM model is subsequently exploited in the proposed fast QTMT decision framework to provide adaptive thresholds for hierarchical partition selection in inter coding.

5. Analysis of QTMT Partitioning Features for Inter-Coding Decision Learning

The efficiency of QTMT partitioning in inter coding is strongly influenced by the encoder’s ability to accurately characterize local texture properties of coding units. During inter prediction, the decision to further partition a CU or to preserve its current structure directly affects both rate–distortion efficiency and computational complexity. Therefore, identifying statistical texture descriptors that reliably reflect local structural variations is a key requirement for the design of fast and content-adaptive QTMT decision strategies.
In this section, texture descriptors derived from GLCM analysis are investigated to examine their relationship with CU partitioning behavior at the 64 × 64 level. These second-order statistical features capture spatial dependencies between pixel intensities and provide a richer description of texture characteristics than first-order measures. A correlation-based analysis is subsequently conducted to assess the relevance of each feature with respect to CU splitting decisions, which forms the basis for the learning-based threshold estimation employed in the proposed framework.

5.1. GLCM Feature Extraction

Conventional fast decision approaches in video coding commonly rely on first-order statistical measures, such as variance or gradient magnitude, to estimate texture complexity. While these descriptors are computationally efficient, they only characterize the distribution of gray-level intensities within a region and do not account for spatial dependencies between neighboring pixels. As a consequence, regions exhibiting similar intensity distributions but different spatial arrangements may not be reliably distinguished, which limits the accuracy of coding unit partitioning decisions.
To address this limitation, second-order statistical features derived from GLCM analysis are employed, as extensively reported in the literature for both image and video texture characterization [40,41]. These features capture spatial relationships between pixel intensities and provide a richer and more discriminative representation of local texture structure. The GLCM models the joint probability of occurrence of two pixels with gray levels i and j, separated by a spatial distance d along a given direction θ. Let R denote the luminance intensity matrix of a coding unit with dimensions $N_x \times N_y$, and let G represent the corresponding GLCM [42].
Each pixel is associated with eight neighboring directions, including horizontal, vertical, and diagonal orientations, as illustrated in Figure 3. To limit computational overhead while preserving sufficient discriminative capability for partition decision modeling, the GLCM is computed only along the horizontal direction (θ = 0) with a pixel distance of d = 1.
The GLCM element at position (i, j) is defined as
$$
G(i, j, d, \theta) = \#\big\{\, [(k,l),(m,n)] \in D \;\big|\; R(k,l) = i,\ R(m,n) = j,\ d,\ \theta \,\big\}, \tag{5}
$$
where # denotes the counting operator, (k, l) and (m, n) represent pixel coordinates in R, and $D = (N_x \times N_y) \times (N_x \times N_y)$.
To further reduce computational complexity, the luminance range [0, 255] is uniformly quantized into W = 8 gray levels by dividing each pixel value by 32. This quantization strategy significantly reduces the size of the GLCM while preserving essential texture characteristics. The resulting matrix is expressed as
$$
G = \begin{bmatrix}
G(0,0,1,0) & G(0,1,1,0) & \cdots & G(0,W-1,1,0) \\
G(1,0,1,0) & G(1,1,1,0) & \cdots & G(1,W-1,1,0) \\
\vdots & \vdots & \ddots & \vdots \\
G(W-1,0,1,0) & G(W-1,1,1,0) & \cdots & G(W-1,W-1,1,0)
\end{bmatrix}. \tag{6}
$$
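A minimal sketch of this construction is given below, assuming NumPy: it quantizes the luminance block to W = 8 gray levels and counts horizontal (θ = 0, d = 1) co-occurrences. The function name and interface are illustrative.

```python
import numpy as np

def glcm_horizontal(cu_luma, levels=8):
    """Horizontal (theta = 0, d = 1) GLCM of a luminance block with W gray levels."""
    q = cu_luma.astype(np.int32) // (256 // levels)    # divide by 32 for W = 8
    glcm = np.zeros((levels, levels), dtype=np.int64)
    left, right = q[:, :-1], q[:, 1:]                  # horizontally adjacent pairs
    np.add.at(glcm, (left.ravel(), right.ravel()), 1)  # count co-occurrences
    return glcm

# Example: GLCM of a random 64x64 block (a stand-in for a real CU).
cu = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
G = glcm_horizontal(cu)
```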
From the normalized GLCM, four texture features are extracted, namely Homogeneity, Contrast, Entropy, and Angular Second Moment (ASM). The mathematical definitions of these features are provided in [43] and are expressed as follows:
$$
\mathrm{Homogeneity} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{G(i,j)}{1 + (i-j)^2}, \tag{7}
$$
$$
\mathrm{Contrast} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i-j)^2\, G(i,j), \tag{8}
$$
$$
\mathrm{Entropy} = -\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} G(i,j)\, \ln G(i,j), \tag{9}
$$
$$
\mathrm{ASM} = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} G^2(i,j). \tag{10}
$$
These features provide complementary and interpretable descriptions of texture characteristics. Homogeneity reflects spatial uniformity and generally assumes higher values in smooth regions. Contrast captures local gray-level variations and increases with texture complexity. Entropy quantifies the degree of randomness in the intensity distribution, while ASM represents texture energy and decreases as structural irregularity increases. Collectively, these descriptors form a robust statistical representation of coding unit texture complexity, making them well suited for learning-based QTMT partitioning analysis in inter coding.
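The following sketch computes the four descriptors of Equations (7)–(10) from a count-based GLCM after normalization to a joint probability matrix; it is a minimal illustration under the conventions above, not the paper's implementation.

```python
import numpy as np

def glcm_features(glcm_counts):
    """Homogeneity, Contrast, Entropy, and ASM from a count-based GLCM."""
    G = glcm_counts / max(glcm_counts.sum(), 1)        # joint probability matrix
    i, j = np.indices(G.shape)
    homogeneity = (G / (1.0 + (i - j) ** 2)).sum()     # Equation (7)
    contrast = ((i - j) ** 2 * G).sum()                # Equation (8)
    nz = G > 0                                         # skip zero entries in the log
    entropy = -(G[nz] * np.log(G[nz])).sum()           # Equation (9)
    asm = (G ** 2).sum()                               # Equation (10)
    return {"homogeneity": homogeneity, "contrast": contrast,
            "entropy": entropy, "asm": asm}
```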

5.2. GLCM Feature Selection and Correlation Analysis

To evaluate the relevance of the extracted GLCM features for QTMT partitioning decisions, a correlation-based statistical analysis is conducted between each feature and the CU split flag at the 64 × 64 level. Three representative training video sequences are selected to construct the dataset used for the feature selection process, namely BasketballDrillText (832 × 480), SlideEditing (1280 × 720), and SlideShow (1280 × 720). These Class F sequences exhibit different levels of content complexity due to variations in scene structure, number of subjects, and background details, making them suitable for learning texture-driven QTMT partitioning behavior. For each sequence, 50 frames are analyzed, resulting in a diverse and representative training dataset.
Although the training dataset is limited to three Class F sequences, these sequences were deliberately selected to cover a wide range of texture characteristics, including highly textured regions, smooth areas, and mixed-content scenes. This diversity allows the model to capture representative QTMT partitioning patterns despite the limited number of training sequences.
All sequences are encoded using the VVC reference software VTM 23.5 [23]. For every 64 × 64 CU extracted from these frames, four GLCM texture features are computed and paired with the corresponding QTMT partitioning decisions and CU split flags selected by the reference encoder, forming the basis for the subsequent statistical analysis and model training.
In addition, the learned homogeneity thresholds are QP-dependent and are not overfitted to the training data. Their effectiveness is further validated through experiments conducted on standard test sequences from multiple classes (Classes A–E), demonstrating good generalization capability across different spatial resolutions and content types.
The correlation between each texture feature and the CU splitting decision is computed using the CorrelationAttributeEval method implemented in the Waikato Environment for Knowledge Analysis (WEKA) [44], a widely used open-source machine learning and data mining toolkit that provides a comprehensive set of feature evaluation and statistical analysis algorithms.
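For readers without WEKA, the sketch below reproduces the same kind of feature-relevance ranking using Pearson correlation, the statistic underlying CorrelationAttributeEval; the arrays are placeholders standing in for the per-CU feature values and split flags collected from VTM 23.5.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: one row per 64x64 CU extracted from the training frames.
rng = np.random.default_rng(0)
n = 5_000
features = {name: rng.random(n)
            for name in ("Homogeneity", "Contrast", "Entropy", "ASM")}
split_flag = rng.integers(0, 2, n)

for name, values in features.items():
    r, _ = pearsonr(values, split_flag)   # correlation with the split decision
    print(f"{name}: r = {r:+.4f}")
```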
The resulting correlation coefficients obtained under different quantization parameters are summarized in Table 1, while their variation trends across QP values are illustrated in Figure 4.
Several observations can be drawn from the statistical results. First, Homogeneity consistently exhibits the strongest correlation with the CU split decision across all tested quantization parameters. At QP 22, its correlation coefficient reaches 0.1093, which is noticeably higher than those of the other features. Although the correlation strength gradually decreases as the quantization parameter increases, Homogeneity remains the most informative descriptor even at QP 37, with a correlation value of 0.0586. This behavior indicates that spatial uniformity plays a dominant role in determining partitioning decisions at the 64 × 64 level.
Second, Contrast shows the second highest correlation among the analyzed features. Its correlation coefficient decreases from 0.0843 at QP 22 to 0.0456 at QP 37, reflecting a progressive reduction in discriminative power as texture details are attenuated under stronger quantization. Nevertheless, the relatively stable ranking of Contrast across all QP values suggests that local gray-level variations remain a useful indicator for partition decision modeling.
In contrast, Entropy and ASM exhibit lower and less discriminative correlation values across all tested quantization parameters. Their correlation coefficients remain below 0.06 and show limited sensitivity to QP variation, indicating a weaker relationship with CU splitting behavior. This suggests that global randomness and texture energy, as captured by these descriptors, are less effective for characterizing QTMT partitioning decisions at the considered CU level.
As illustrated in Figure 4, an overall decreasing trend in correlation strength is observed for all features as the quantization parameter increases. This trend can be attributed to the progressive suppression of fine texture details at higher QP values, which reduces the influence of spatial characteristics on partitioning decisions.
Although using a single feature may theoretically lead to some information loss, the results in Table 1 and Figure 4 clearly indicate that Homogeneity consistently outperforms the other GLCM descriptors across all QP values. The remaining features (Contrast, Entropy, and ASM) exhibit significantly lower and closely clustered correlation values, suggesting limited complementary information.
Therefore, retaining only Homogeneity enables a favorable trade-off between model simplicity, computational efficiency, and prediction reliability, while avoiding unnecessary model complexity.
Based on these observations, Homogeneity emerges as the most informative GLCM-based feature for QTMT partitioning analysis in inter coding. Its consistently higher correlation with CU split decisions across different quantization levels provides strong statistical evidence supporting its selection as the primary descriptor for learning adaptive decision thresholds in the proposed fast QTMT decision framework.

6. Proposed Learning-Based QTMT Decision Framework

This section presents a homogeneity-guided learning-based framework for fast QTMT decision in inter prediction. The proposed approach aims to significantly reduce encoder computational complexity while preserving rate–distortion performance. By exploiting the strong correlation between texture homogeneity and QTMT partitioning behavior, the framework adaptively restricts the partition search space at the 64 × 64 CU level. An overview of the proposed framework is illustrated in Figure 5.
To ensure practical integration within the VVC encoder, the proposed framework is organized into two complementary stages: an offline learning stage and an online decision stage. This separation confines all learning-related operations to the offline phase, while keeping the online encoding process lightweight and deterministic.
During the offline learning stage, representative inter-coded training sequences are encoded using the VVC reference software VTM 23.5 [23]. For each 64 × 64 inter CU, homogeneity values are extracted using GLCM-based texture analysis. Based on the correlation analysis presented in Section 5, homogeneity is identified as the most informative descriptor and is therefore retained as the sole feature used in the learning process. Each training sample is represented by a single scalar homogeneity value paired with the QTMT partitioning outcome selected by the reference encoder, resulting in a compact and low-dimensional dataset with strong discriminative capability.
Rather than formulating the learning task as a direct partition classification problem, the collected data implicitly captures how QTMT partitioning behavior evolves with texture uniformity. In particular, the dataset reflects transitions between non-split coding units, square QT partitions, and directional BT and TT structures [7,21]. A GBM model is trained exclusively in the offline stage to learn adaptive decision boundaries in the homogeneity domain. These boundaries are learned independently for each QP and correspond to three ordered thresholds Th1, Th2, and Th3, which partition the homogeneity space into four distinct decision regions.
Importantly, the trained model is not used to directly predict the final partition mode. Instead, it derives three homogeneity thresholds {Th1, Th2, Th3} that regulate which QTMT partition candidates are evaluated during encoding.
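The paper does not spell out in code how the thresholds are read off the trained model, so the following is only one plausible realization, assuming the scikit-learn classifiers sketched in Section 4: sweep a dense homogeneity grid and record the points where the predicted decision flips.

```python
import numpy as np

def decision_boundaries(model, grid=None):
    """Homogeneity values at which the model's predicted (integer) label flips."""
    if grid is None:
        # Homogeneity (Equation (7)) lies in (0, 1], so a unit grid suffices.
        grid = np.linspace(0.0, 1.0, 10_001).reshape(-1, 1)
    pred = model.predict(grid)
    flips = np.nonzero(np.diff(pred) != 0)[0]
    return grid[flips + 1, 0]

# Hypothetical usage, repeated per QP: Th1 from the split/no-split model,
# and Th2, Th3 from the QT-vs-MTT model trained on the second dataset.
# th1 = decision_boundaries(gbm_split)[0]
# th2, th3 = decision_boundaries(gbm_qt_vs_mtt)[:2]
```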
During the online decision stage, no learning or model inference is performed. For each 64 × 64 inter CU, the homogeneity value is computed using the same lightweight feature extraction process adopted in the offline stage and compared against the pre-learned thresholds embedded in the encoder decision logic. This comparison enables a hierarchical and content-adaptive restriction of the QTMT search space driven by texture homogeneity.
Specifically, four decision regions are defined:
• H < Th1: only the No-Split option is evaluated.
• Th1 ≤ H < Th2: BT is enabled alongside QT, which is always retained as a conservative candidate.
• Th2 ≤ H < Th3: both BT and TT (the full MTT) are enabled, while QT is always retained.
• H ≥ Th3: only QT is evaluated and all MTT partitions are disabled.
Rather than directly selecting a final partition mode, the learned homogeneity thresholds regulate which QTMT partition candidates are evaluated during encoding. The final decision remains governed by rate–distortion optimization, while the adaptive restriction of the QTMT search space significantly reduces the number of partition candidates evaluated during inter prediction. Despite its low runtime complexity, the proposed framework closely follows the QTMT partitioning behavior of the reference encoder, achieving substantial complexity reduction with negligible impact on coding efficiency [7,22].
To further improve clarity and reproducibility, the online decision process of the proposed framework is explicitly described in Algorithm 2.
Algorithm 2 Proposed learning-based QTMT decision algorithm
1: Input: 64 × 64 inter CU, learned thresholds {Th1, Th2, Th3}
2: Output: Selected QTMT partition mode
3: Compute the GLCM of the CU
4: Extract homogeneity value H using Equation (7)
5: Initialize candidate set S ← ∅
6: if H < Th1 then
7:   S ← {NoSplit}
8: else if Th1 ≤ H < Th2 then
9:   S ← {QT, BT}
10: else if Th2 ≤ H < Th3 then
11:   S ← {QT, BT, TT}
12: else
13:   S ← {QT}
14: end if
15: Evaluate only the modes in S using RDO
16: Select the best mode according to RD cost
17: return the selected QTMT mode
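A minimal sketch of this online mapping is given below; it reuses the hypothetical glcm_horizontal and glcm_features helpers from Section 5 and mirrors the candidate sets of Algorithm 2, leaving the final choice among candidates to the encoder's RDO.

```python
def qtmt_candidates(cu_luma, thresholds):
    """Map a 64x64 inter CU to the QTMT partition candidates to evaluate."""
    th1, th2, th3 = thresholds                 # ordered: th1 <= th2 <= th3, per QP
    glcm = glcm_horizontal(cu_luma)            # hypothetical helper (Section 5)
    h = glcm_features(glcm)["homogeneity"]     # Equation (7)
    if h < th1:
        return {"NoSplit"}                     # smooth CU: leave unsplit
    elif h < th2:
        return {"QT", "BT"}                    # QT kept as conservative candidate
    elif h < th3:
        return {"QT", "BT", "TT"}              # full MTT enabled alongside QT
    else:
        return {"QT"}                          # highly textured: square splits only
```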
Figure 5 summarizes the complete workflow of the proposed approach. It illustrates how GLCM-based texture features are first extracted and statistically analyzed to select the most relevant descriptor, followed by offline GBM training to learn adaptive decision boundaries. These learned boundaries are subsequently embedded into the encoder and used during the online stage to hierarchically regulate QTMT partition candidate activation at the 64 × 64 inter CU level, enabling effective complexity reduction without modifying the core rate–distortion optimization process.

7. Experimental Results

This section evaluates the performance of the proposed fast QTMT decision algorithm in terms of encoding complexity reduction and rate–distortion efficiency. The proposed method was implemented on the VVC reference software VTM 23.5 [23] and evaluated under the Random Access configuration following the Common Test Conditions (CTCs) [24]. Standard Quantization Parameter values of 22, 27, 32, and 37 were used to cover a wide range of compression scenarios. The detailed experimental setup is summarized in Table 2.
The performance is evaluated using the Bjøntegaard delta bit rate (BDBR) and delta PSNR (BDPSNR) metrics [45,46], while encoder complexity reduction is measured using the time-saving ratio defined as
$$
TS = \frac{ET_{\mathrm{Original}} - ET_{\mathrm{Proposed}}}{ET_{\mathrm{Original}}} \times 100, \tag{11}
$$
where $ET_{\mathrm{Original}}$ and $ET_{\mathrm{Proposed}}$ denote the encoding times of the reference and proposed encoders, respectively.
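For instance, if the reference encoder requires 1000 s for a sequence and the proposed encoder completes it in 724 s, then TS = (1000 − 724)/1000 × 100 = 27.6%.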

7.1. Overall Coding Performance

Table 3 presents the coding performance of the proposed method across all tested sequences. The results show that the proposed approach achieves a consistent reduction in encoding time while maintaining rate–distortion performance very close to that of the reference encoder.
On average, the proposed method achieves an encoding time reduction of approximately 27.6%, while introducing only a negligible BDPSNR loss of 0.006 dB and a BDBR increase of 0.19%. These results indicate that a large portion of unnecessary QTMT partition evaluations are effectively avoided through the proposed homogeneity-driven decision rules.
It is worth noting that the achieved complexity reduction is intentionally moderate compared to more aggressive learning-based approaches. This behavior can be attributed to the lightweight computation of the homogeneity feature, which, although simplified, still introduces a limited computational overhead. Nevertheless, this overhead is largely compensated by the reduction in redundant partition tests, resulting in a favorable overall time saving while preserving stable rate–distortion performance.

7.2. Comparison with State-of-the-Art Methods

Table 4 compares the proposed fast QTMT decision method with several representative inter-coding complexity reduction approaches reported in the literature. The comparison focuses on the trade-off between encoding time reduction and rate–distortion efficiency, as reflected by the BDBR and TS metrics.
The method in [17] achieves moderate time saving by employing a convolutional neural network combined with multi-information fusion for early partition termination. However, this gain comes at the cost of a significant rate–distortion degradation, as indicated by the high BDBR increase of 3.18%. This suggests that the aggressive pruning strategy adopted in [17] frequently eliminates beneficial partition candidates, particularly in regions with complex texture or motion.
The approach proposed in [18] relies on CNN-based split probability estimation to prune unlikely QTMT candidates. While this method improves the BDBR performance compared to [17], it still incurs a noticeable coding loss of 1.11% and requires the execution of a relatively heavy inference model at runtime. In contrast, the proposed method avoids complex inference and maintains a significantly lower BDBR increase by relying on simple texture-driven decision rules.
The framework presented in [19] reports the highest time-saving ratio by dynamically assigning CTUs to subnetworks of different complexity based on a partition homogeneity map. Although this strategy achieves substantial complexity reduction, it introduces a considerable BDBR penalty of 1.94%, reflecting the cost of coarse-grained partition classification and network switching overhead. Moreover, the reliance on multiple subnetworks increases memory consumption and implementation complexity.
The GBM-based method in [20] represents the closest approach to the proposed work in terms of learning paradigm. By using Average Local Variance as a texture descriptor, it achieves a favorable balance between time saving and coding efficiency. However, the higher time-saving ratio reported in [20] is obtained through more aggressive partition pruning, which results in a larger BDBR increase compared to the proposed approach.
In contrast, the proposed method deliberately adopts a conservative pruning strategy guided by a lightweight homogeneity descriptor and adaptive thresholding. Although the resulting time-saving ratio is slightly lower than that of some learning-heavy approaches, the proposed method achieves the lowest BDBR increase among all compared techniques. This demonstrates that the proposed design effectively suppresses unnecessary QTMT evaluations while preserving the vast majority of rate–distortion gains offered by the full search.
Overall, the proposed approach offers a more balanced trade-off between encoding complexity reduction and coding efficiency. Its lightweight feature extraction, absence of deep inference models, and stable rate–distortion behavior make it particularly well suited for practical and resource-constrained encoder implementations, where robustness and predictability are critical design requirements.

8. Conclusions

In this paper, a fast QTMT partition decision algorithm for inter-prediction in video coding is proposed to mitigate the high encoder complexity introduced by the flexible partitioning structure. The proposed method is built upon statistical texture analysis and lightweight machine learning, where a homogeneity descriptor derived from the Gray-Level Co-occurrence Matrix is utilized to characterize the spatial uniformity of coding units.
By exploiting the strong correlation between texture homogeneity and partitioning behavior, a Gradient Boosting Machine model is trained offline to learn adaptive decision thresholds that guide the partitioning process at the 64 × 64 coding unit level. Based on these thresholds, a hierarchical decision strategy is employed, in which the encoder first determines whether further partitioning is required and subsequently selects between Quad-Tree and Multi-Type Tree structures only when splitting is beneficial.
This design enables effective pruning of redundant QTMT evaluations while preserving partitioning decisions that are consistent with those selected by the reference encoder. The proposed approach avoids complex inference models and excessive feature extraction, resulting in a stable and computationally efficient solution. Consequently, the method provides a practical framework for reducing inter-prediction complexity in modern video encoders, making it well suited for real-time and resource-constrained implementations.
Although the proposed framework currently focuses on the 64 × 64 CU level to maximize complexity reduction, extending this strategy to smaller CU sizes (e.g., 32 × 32) represents a promising research direction. Such an extension could potentially yield additional complexity savings. However, it also raises new challenges, including increased decision frequency, higher feature extraction overhead, and the need for more fine-grained threshold adaptation. These aspects will be investigated in future work to further improve the scalability of the proposed framework.

Author Contributions

S.B. wrote the manuscript, conducted the computational tests, and prepared the figures and tables. I.B. helped edit the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The proposed algorithm was implemented and evaluated using the VVC reference software VTM 23.5, which is publicly available at https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-23.5 (accessed on 20 November 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. ITU-T Rec. H.266/ISO/IEC 23090-3; Versatile Video Coding (VVC). ITU-T: Geneva, Switzerland; ISO/IEC: Geneva, Switzerland, 2020.
2. Bross, B.; Wang, Y.-K.; Ye, Y.; Liu, S.; Chen, J.; Sullivan, G.J.; Ohm, J.-R. Overview of the versatile video coding (VVC) standard and its applications. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3736–3764.
3. ITU-T Rec. H.264/ISO/IEC 14496-10; Advanced Video Coding for Generic Audiovisual Services. ITU-T: Geneva, Switzerland; ISO/IEC: Geneva, Switzerland, 2003; Version 1.
4. Wiegand, T.; Sullivan, G.J.; Bjontegaard, G.; Luthra, A. Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 560–576.
5. ITU-T Rec. H.265/ISO/IEC 23008-2; High Efficiency Video Coding (HEVC). ITU-T: Geneva, Switzerland; ISO/IEC: Geneva, Switzerland, 2013; Version 1.
6. Sullivan, G.J.; Ohm, J.-R.; Han, W.-J.; Wiegand, T. Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668.
7. Huang, Y.-W.; Hsu, C.-W.; Chen, C.-Y.; Chuang, T.-D.; Hsiang, S.-T.; Chen, C.-C.; Chiang, M.-S.; Lai, C.-Y.; Tsai, C.-M.; Su, Y.-C.; et al. A VVC proposal with quaternary tree plus binary–ternary tree coding block structure and advanced coding techniques. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1311–1325.
8. Bross, B.; Andersson, K.; Blaser, M.; Drugeon, V.; Kim, S.-H.; Lainema, J.; Li, J.; Liu, S.; Ohm, J.-R.; Sullivan, G.J.; et al. General video coding technology in responses to the joint call for proposals on video compression with capability beyond HEVC. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1226–1240.
9. Wieckowski, A.; Ma, J.; Schwarz, H.; Marpe, D.; Wiegand, T. Fast partitioning decision strategies for the upcoming versatile video coding (VVC) standard. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 4130–4134.
10. Li, Q.; Meng, H.; Li, Y. Texture-based fast QTMT partition algorithm in VVC intra coding. Signal Image Video Process. 2022, 17, 1581–1589.
11. Wang, Y.; Liu, Y.; Zhao, J.; Zhang, Q. Fast CU partitioning algorithm for VVC based on multi-stage framework and binary subnets. IEEE Access 2023, 11, 56812–56821.
12. Shang, X.; Li, G.; Zhao, X.; Han, H.; Zuo, Y. Fast CU size decision algorithm for VVC intra coding. Multimed. Tools Appl. 2023, 82, 28301–28322.
13. Amna, M.; Imen, W.; Ezahra, S.F. Fast multi-type tree partitioning for versatile video coding using machine learning. Signal Image Video Process. 2022, 17, 67–74.
14. Abdallah, B.; Belghith, F.; Ayed, M.A.B.; Masmoudi, N. Fast QTMT decision tree for versatile video coding based on deep neural networks. Multimed. Tools Appl. 2022, 81, 42731–42747.
15. Belghith, F.; Abdallah, B.; Jdidia, S.B.; Ayed, M.A.B.; Masmoudi, N. CNN-based ternary tree partition approach for VVC intra-QTMT coding. Signal Image Video Process. 2024, 18, 3587–3594.
16. Goncalves, P.; Correa, G.; Agostini, L.; Porto, M. Learning-based bypass zone search algorithm for fast motion estimation. Multimed. Tools Appl. 2022, 82, 3535–3560.
17. Pan, Z.; Zhang, P.; Peng, B.; Ling, N.; Lei, J. A CNN-based fast inter coding method for VVC. IEEE Signal Process. Lett. 2021, 28, 1260–1264.
18. Tissier, A.; Hamidouche, W.; Vanne, J.; Menard, D. Machine learning based efficient QT–MTT partitioning for VVC inter coding. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 18–21 September 2022; pp. 1756–1760.
19. Peng, Z.; Shen, L. A classification–prediction joint framework to accelerate QTMT-based CU partition of inter-mode VVC. Electron. Lett. 2023, 59, e12770.
20. Bakkouri, S.; Bakkouri, I.; Elyousfi, A. GBM-QTMT: Gradient boosting machine-based fast QTMT partition decision for VVC inter-coding. Signal Image Video Process. 2025, 19, 173.
21. Abdallah, B.; Belghith, F.; Ayed, M.A.B.; Masmoudi, N. QTMT partitioning structure in VVC: Overview and analysis. In Proceedings of the IEEE 21st International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Sfax, Tunisia, 20–22 December 2022; pp. 1–6.
22. Schwarz, H.; Coban, M.; Karczewicz, M.; Chuang, T.D.; Bossen, F.; Alshin, A.; Lainema, J.; Helmrich, C.R.; Wiegand, T. Quantization and entropy coding in the versatile video coding (VVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3891–3906.
23. Fraunhofer HHI JVET. VVCSoftware_VTM: VTM-23.5 Reference Software for Versatile Video Coding (VVC), Version 23.5, November 2024. Available online: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tags/VTM-23.5 (accessed on 20 November 2024).
24. Bossen, F.; Boyce, J.; Suehring, K.; Li, X.; Seregin, V. VTM common test conditions and software reference configurations for SDR video. Document JVET-T2010, 20th JVET Meeting, by teleconference, 7–16 October 2020.
25. Xu, M.; Jeon, B. Complexity-efficient dependent quantization for versatile video coding. IEEE Trans. Broadcast. 2023, 69, 832–839.
26. Yi, D.; Ahn, J.; Ji, S. An effective optimization method for machine learning based on ADAM. Appl. Sci. 2020, 10, 1073.
27. Bakkouri, I.; Bakkouri, S. 2MGAS-Net: Multi-level multi-scale gated attentional squeezed network for polyp segmentation. Signal Image Video Process. 2024, 18, 5377–5386.
28. Bakkouri, I.; Bakkouri, S. UGS-M3F: Unified gated Swin transformer with multi-feature fully fusion for retinal blood vessel segmentation. BMC Med. Imaging 2025, 25, 77.
29. Schapire, R. The strength of weak learnability. Mach. Learn. 1990, 5, 197–227.
30. Binder, H.; Gefeller, O.; Schmid, M.; Mayr, A. Extending statistical boosting. Methods Inf. Med. 2014, 53, 428–435.
31. Bakkouri, S.; Elyousfi, A. Machine learning-based fast CU size decision algorithm for 3D-HEVC inter-coding. J. Real-Time Image Process. 2021, 18, 983–995.
32. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
33. Bakkouri, S.; Elyousfi, A. An adaptive CU size decision algorithm based on gradient boosting machines for 3D-HEVC inter-coding. Multimed. Tools Appl. 2023, 82, 32539–32557.
34. Bahad, P.; Saxena, P. Study of AdaBoost and gradient boosting algorithms for predictive analytics. In Algorithms for Intelligent Systems; Springer: Singapore, 2019; pp. 235–244.
35. Guelman, L. Gradient boosting trees for auto insurance loss cost modeling and prediction. Expert Syst. Appl. 2012, 39, 3659–3667.
36. Sapountzoglou, N.; Lago, J.; Raison, B. Fault diagnosis in low-voltage smart distribution grids using gradient boosting trees. Electr. Power Syst. Res. 2020, 182, 106254.
37. Si, M.; Du, K. Development of a predictive emissions model using a gradient boosting machine learning method. Environ. Technol. Innov. 2020, 20, 101028.
38. Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674.
39. Zhang, X.; Quadrianto, N.; Kersting, K.; Xu, Z.; Engel, Y.; Sammut, C.; Reid, M.; Liu, B.; Webb, G.; Sipper, M.; et al. Genetic and evolutionary algorithms. In Encyclopedia of Machine Learning; Springer: Boston, MA, USA, 2011; pp. 456–457.
40. Bakkouri, S.; Elyousfi, A. Early termination of CU partition based on boosting neural network for 3D-HEVC inter-coding. IEEE Access 2022, 10, 13870–13883.
41. Chen, Z.; Zheng, H.; Duan, J.; Wang, X. GLCM-based FBLS: A novel broad learning system for knee osteopenia and osteoporosis screening in athletes. Appl. Sci. 2023, 13, 11150.
42. Marceau, D.; Howarth, P.; Dubois, J.; Gratton, D. Evaluation of the grey-level co-occurrence matrix method for land-cover classification using SPOT imagery. IEEE Trans. Geosci. Remote Sens. 1990, 28, 513–519.
43. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
44. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I. The WEKA data mining software. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18.
45. Bjøntegaard, G. Calculation of average PSNR differences between RD curves. Document VCEG-M33, 13th VCEG Meeting, Austin, TX, USA, 2–4 April 2001.
46. Bjøntegaard, G. Improvements of the BD-PSNR model. Document VCEG-AI11, 35th VCEG Meeting, Berlin, Germany, 16–18 July 2008.
Figure 1. QTMT partitioning structures at the coding unit level.
Figure 2. QTMT mode distribution for 64 × 64 CUs under different QPs.
Figure 3. Eight nearest-neighbourhood schemes for constructing the gray-level co-occurrence matrix.
Figure 4. Correlation coefficients of GLCM features across different quantization parameters.
Figure 5. Workflow of the proposed learning-based QTMT decision framework.
Table 1. Correlation coefficients of GLCM features under different QP values.

Feature     | QP 22  | QP 27  | QP 32  | QP 37
Homogeneity | 0.1093 | 0.0881 | 0.0907 | 0.0586
Contrast    | 0.0843 | 0.0700 | 0.0660 | 0.0456
Entropy     | 0.0564 | 0.0557 | 0.0546 | 0.0470
ASM         | 0.0564 | 0.0606 | 0.0579 | 0.0493
Table 2. Test platform configuration.

Parameter              | Value
Test platform          | VTM 23.5
Configuration          | Random Access
CTU size               | 128 × 128
Quantization Parameter | 22, 27, 32, 37
Table 3. Coding performance comparison with the reference encoder.

Class   | Sequence        | BDPSNR (dB) | BDBR (%) | TS (%)
A1      | Tango           | −0.005      | 0.18     | 27.4
A1      | FoodMarket4     | −0.006      | 0.17     | 28.1
A1      | Campfire        | −0.006      | 0.20     | 26.8
A2      | CatRobot        | −0.006      | 0.19     | 28.5
A2      | DaylightRoad2   | −0.007      | 0.18     | 29.0
A2      | ParkRunning3    | −0.006      | 0.20     | 26.2
B       | BasketballDrive | −0.007      | 0.22     | 27.9
B       | BQTerrace       | −0.005      | 0.15     | 28.6
B       | Cactus          | −0.007      | 0.21     | 26.9
C       | BasketballDrill | −0.006      | 0.18     | 28.3
C       | BQMall          | −0.005      | 0.17     | 25.9
C       | PartyScene      | −0.006      | 0.21     | 27.6
D       | BasketballPass  | −0.006      | 0.19     | 28.7
D       | BlowingBubbles  | −0.005      | 0.16     | 26.4
E       | FourPeople      | −0.007      | 0.22     | 27.1
E       | Johnny          | −0.006      | 0.18     | 28.0
Average |                 | −0.006      | 0.188    | 27.58
Table 4. Comparison with state-of-the-art methods.

Method   | BDBR (%) | TS (%)
[17]     | 3.18     | 30.63
[18]     | 1.11     | 31.80
[19]     | 1.94     | 44.50
[20]     | 0.36     | 40.32
Proposed | 0.188    | 27.58