From Classic to Cutting-Edge: A Near-Perfect Global Thresholding Approach with Machine Learning

by Nicolae Tarbă *, Costin-Anton Boiangiu and Mihai-Lucian Voncilă

Computer Science and Engineering Department, Faculty of Automatic Control and Computers, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania

* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 8096; https://doi.org/10.3390/app15148096
Submission received: 2 June 2025 / Revised: 9 July 2025 / Accepted: 17 July 2025 / Published: 21 July 2025

Abstract

Image binarization is an important process in many computer-vision applications. It reduces the original image to only two tones: black and white. Global thresholding is a quick and reliable way to achieve binarization, but it is inherently limited by image noise and uneven lighting. This paper introduces a global thresholding method that uses the results of classical global thresholding algorithms and other global image features to train a regression model via machine learning. We prove through nested cross-validation that the model can predict the best possible global threshold with an average F-measure of 90.86% and a confidence of 0.79%. We apply our approach to a popular computer-vision problem, document image binarization, and compare popular metrics with the best possible values achievable through global thresholding and with the values obtained through the algorithms we used to train our model. Our results show a significant improvement over these classical global thresholding algorithms, achieving near-perfect scores on all the computed metrics. We also compared our results with state-of-the-art binarization algorithms and outperformed them on certain datasets. The global threshold obtained through our method closely approximates the ideal global threshold and could be used in a mixed local-global approach for better results.

1. Introduction

Image binarization is the process of converting an image with varying pixel intensities into one with only two: black and white. The simplest way of doing this is by choosing a threshold intensity and converting every pixel with a lower intensity to a black pixel in the binarized image and every pixel with a higher intensity to a white pixel. This is called global threshold image binarization. Choosing different thresholds for different pixels based on their neighborhoods is called local threshold image binarization.
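As a minimal illustration of how cheap the global operation is, a global threshold reduces to a single comparison per pixel. The sketch below is our own NumPy rendering (the paper's implementation targets .NET; all names here are hypothetical):

```python
import numpy as np

def binarize_global(gray: np.ndarray, threshold: int) -> np.ndarray:
    """Global thresholding: pixels <= threshold become black (0), the rest white (255)."""
    return np.where(gray <= threshold, 0, 255).astype(np.uint8)

# A synthetic 2x3 grayscale image binarized at threshold 127.
img = np.array([[10, 200, 127], [128, 30, 250]], dtype=np.uint8)
print(binarize_global(img, 127))
# [[  0 255   0]
#  [255   0 255]]
```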
A popular use case for image binarization is the preprocessing of document scans in preparation for digitization. For this use case, several datasets are available, containing low-quality images of documents and corresponding ground-truth images that have black pixels where there was text in the original image and white pixels everywhere else. The quality of the binarized image can be measured using objective metrics by comparing the binarized image to the ground-truth image.
Global algorithms operate in the image histogram domain, resulting in a complexity based on the number of pixel intensities in the image, whereas local algorithms operate on each pixel, with a complexity based on the number of pixels in the image, which is always much larger than the number of pixel intensities. Generally, global algorithms are much faster than local ones but yield lower-quality results on low-quality images owing to uneven lighting, noise, etc. The faster processing time and lower complexity are ideal for real-time or resource-constrained applications, such as embedded systems. For any given image, the maximum binarization quality achievable through global thresholding is not necessarily the highest possible binarization quality. Local thresholding algorithms have no such limitation; however, their computational cost is higher, and they only consider features from surrounding pixels, making them prone to specific failures.
Global thresholding algorithms provide varying results based on the type of noise and lighting in an image. Non-uniform lighting can be mitigated by applying global algorithms to image patches, such as the patch-based methods presented by Tahseen et al. [1], resulting in a mixed global–local approach.
We propose a machine learning model whose novelty lies in employing various thresholds produced by classical global thresholding algorithms as input features alongside other global image features to produce results as close as possible to the highest possible quality achievable through global thresholding.
The proposed solution is described in detail in Section 2. Evaluation metrics that allow comparison of the performance of global thresholding on different images are defined, and the methodology for evaluating the tested algorithms and employed datasets is laid out.
Section 3 presents the results of the proposed solution and compares them with the classic global thresholding algorithms used as features and the best possible global threshold. The proposed solution outperformed the other algorithms and achieved results very close to the ideal threshold, with a promising confidence interval.
Section 4 presents a discussion based on the presented results and visual examples, followed by conclusions and possible avenues for future work in Section 5.
In the following subsections, previous works are presented, divided into classical and deep-learning-based approaches.

1.1. Classical Approaches

Classical approaches can be split into two categories based on how they operate on the image histogram: linear and non-linear solutions. While linear methods are straightforward and efficient, they require further exploration in our context; thus, we have opted not to implement them yet. They could be added to the feature set in future work without changing the time complexity of the solution. Non-linear approaches, while potentially more powerful in handling complex relationships, introduce significant computational complexity. Given this trade-off, we chose not to pursue these methods either, prioritizing simplicity and computation time in our current implementation.
Wang and Fan [2] define the optimal threshold as the one that maximizes the objective function of the one-dimensional Tsallis entropy correlation single-threshold method:
$$S_q(t) = \frac{1}{q-1}\left(2 - \sum_{i=0}^{t}\left(\frac{h_i}{c(t)}\right)^q - \sum_{i=t+1}^{I_{max}}\left(\frac{h_i}{1-c(t)}\right)^q\right),$$
where $q$ is a real-valued parameter that can be tuned experimentally, $h_i$ is the normalized histogram, and $c(t) = \sum_{i=0}^{t} h_i$ is its cumulative sum.
Elen and Dönmez [3] separate the image histogram into three regions: $\alpha$ between $\mu - \sigma$ and $\mu + \sigma$, $\beta_L$ between $0$ and $\mu - \sigma$, and $\beta_H$ between $\mu + \sigma$ and $I_{max}$, where $\mu$ is the average pixel value and $\sigma$ is the standard deviation. Then, they compute $\mu_\alpha = \sum_{i \in \alpha} i\, p_i$ and $\mu_\beta = \sum_{i \in \beta} i\, p_i$, where $\beta = \beta_L \cup \beta_H$. They define the optimum threshold as $T_{opt} = (\mu_\alpha + \mu_\beta)/2$. They compared their algorithm to several PDE-based ones, such as Guo and He [4], Guo et al. [5], and Mahani et al. [6].
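The steps above translate directly into code; the following is a sketch in Python (our own rendering, assuming `hist` is a 256-bin intensity histogram), not the authors' implementation:

```python
import numpy as np

def elen_donmez_threshold(hist: np.ndarray) -> float:
    """Threshold from the alpha/beta region means, following Elen and Dönmez [3]."""
    p = hist / hist.sum()                    # normalized histogram p_i
    i = np.arange(p.size)
    mu = float((i * p).sum())                # mean pixel value
    sigma = float(np.sqrt((((i - mu) ** 2) * p).sum()))
    alpha = (i >= mu - sigma) & (i <= mu + sigma)
    beta = ~alpha                            # beta_L union beta_H
    mu_alpha = float((i[alpha] * p[alpha]).sum())
    mu_beta = float((i[beta] * p[beta]).sum())
    return (mu_alpha + mu_beta) / 2
```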
Bogiatzis and Papadopoulos [7] compute two threshold candidates using a fuzzy inclusion indicator based on Zadeh’s fuzzy implication and the following fuzzy entropy formula:
$$E_1 = \frac{2\sum_{x \in X} \min\left(m_{A^C}(x),\, m_A(x)\right)}{\sum_{x \in X} \max\left(m_{A^C}(x),\, m_A(x)\right)},$$
where $X$ is the set of pixels in the image; $m_A(x)$ is the membership grade of $x$ in the fuzzy subset $A$; and $A^C$ is the complementary subset of $A$. Then, they compute a classification function and choose one of the two candidates based on a criterion.
Ashir [8] proposed a multilevel thresholding algorithm that can be extended to implement image binarization. It is based on computing the mean pixel intensity $\mu$ for the entire image, generating a gradient image $I_g$ by computing the intensity difference between the original image and $\mu$, and computing the mean gradient value $\mu_g$. They choose the following threshold:
$$T = \begin{cases}\mu - \mu_g, & \text{if } \sum_{i=\mu-\mu_g}^{\mu} h_i < \sum_{i=\mu}^{\mu+\mu_g} h_i\\ \mu + \mu_g, & \text{otherwise.}\end{cases}$$
Rani et al. [9] proposed a multi-step algorithm that starts by computing the Laplacian of the image and subtracting the Laplacian from the image. Then, they slide a 5 × 5 window over the image and compute its average value in each position, resulting in an average image $I_{avg}$. The threshold $T$ is selected as
$$T = \max\left(I_{avg}\right) + \mathrm{median}\left(\min_{\text{each column}}\left(I_{avg}\right)\right).$$

1.2. Deep-Learning Approaches

Deep-learning approaches currently dominate the state of the art in document image binarization. Convolutional neural networks are the most popular implementation, followed by vision transformers; both produce competitive results.
SAE [10] is an auto-encoder that harnesses a convolutional neural network to encode the original image into a probability map. A global threshold is then applied to the probability map to generate the binarized image. MRAM [11] uses a convolutional random field to generate the binarized image from a probability map produced by a multi-resolution attention model.
DeepOtsu [12] applies Otsu’s thresholding algorithm on an enhanced image generated by a convolutional neural network that iteratively removes noise from the original image. 2DMN [13] is a bidimensional morphological network that removes noise from the original image using simple operations such as dilation and erosion.
DD-GAN [14] is a dual discriminator generative adversarial network comprising two networks, one to process the image globally and one locally. A threshold is applied to the generated image to obtain the binarized image. Suh [15] uses six generative adversarial networks: four color-independent ones that extract information from the input image and two that transform the output images of the first four into the final binarized image.
cGANs [16] use conditional generative adversarial networks to combine multi-scale information. First, text pixels are extracted from the original image at various scales, and then, the multi-scale binarized images are combined to obtain the final binarized image. DE-GAN [17] also uses conditional generative adversarial networks.
D2BFormer [18] uses a vision transformer for semantic segmentation, transforming the original image into a two-channel feature map, one channel for the foreground and the other for the background. They then use the feature map to binarize the original image.
DocDiff [19] consists of a coarse predictor, which recovers primary low-frequency information, and of a high-frequency residual refinement module, which predicts high-frequency information using diffusion models.
DocEnTr [20] and DocBinFormer [21] use vision transformers in an encoder-decoder architecture. The encoder transforms pixel patches into latent representations without using any convolutional layers, and the decoder generates the binarized image from the latent representations.
GDB [22] uses gated convolutions that, unlike regular convolutions, precisely extract stroke edges. First, a coarse sub-network generates a precise feature map, and then, a refinement sub-network uses gated convolutions to further refine the feature map. GDBMO is a version of GDB that combines local and global features to further improve the results.

2. Materials and Methods

2.1. Features

The following algorithms were directly used in the proposed solution as features in the model that we trained to predict the optimal global threshold. They operate directly on the image histogram, optimizing various metrics, and can be computed in constant time ($O(1)$) with respect to the image size. Although some of these algorithms are iterative, they converge rapidly.
Otsu [23] proposed a popular global thresholding algorithm. For a given threshold k, the pixels are divided into two classes: class 0, composed of pixels with intensities lower than or equal to k, and class 1, composed of pixels with intensities greater than k. Otsu's threshold is defined as the threshold that maximizes the between-class variance:
$$\sigma_{01}^2(t) = W_0(t)\,W_1(t)\left(\mu_1(t) - \mu_0(t)\right)^2,$$
where $W_0(t)$ is the ratio of pixels with intensities lower than or equal to $t$; $W_1(t)$ is the ratio of pixels with intensities greater than $t$; and $\mu_0(t)$, $\mu_1(t)$ are the mean intensities of class 0 and class 1, respectively.
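Because all the quantities above are cumulative sums over the histogram, the full search over all 256 candidate thresholds is cheap. A compact NumPy sketch of this criterion (our own code, not the paper's .NET implementation):

```python
import numpy as np

def otsu_threshold(hist: np.ndarray) -> int:
    """Return t maximizing the between-class variance W0(t)*W1(t)*(mu1(t)-mu0(t))^2."""
    p = hist / hist.sum()
    i = np.arange(p.size)
    w0 = np.cumsum(p)                        # ratio of pixels with intensity <= t
    w1 = 1.0 - w0                            # ratio of pixels with intensity > t
    cum_mean = np.cumsum(i * p)              # partial mean: sum_{j<=t} j*p_j
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = cum_mean / w0
        mu1 = (cum_mean[-1] - cum_mean) / w1
        between = w0 * w1 * (mu1 - mu0) ** 2
    return int(np.argmax(np.nan_to_num(between)))  # guard against empty classes
```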
Kittler and Illingworth [23] proposed a computationally efficient solution for minimum error thresholding. They assumed that the two pixel classes follow normal distributions, splitting the probability density function $p_i$ into two components, $p_0(i)$ and $p_1(i)$, normally distributed with means $\mu_0$ and $\mu_1$, standard deviations $\sigma_0$ and $\sigma_1$, and a priori probabilities $P_0(t)$ and $P_1(t)$. They choose a threshold that minimizes the following criterion function:
$$J(t) = 1 + 2\left[P_0(t)\log\sigma_0(t) + P_1(t)\log\sigma_1(t)\right] - 2\left[P_0(t)\log P_0(t) + P_1(t)\log P_1(t)\right].$$
The function is easy to compute, and finding its minimum is relatively simple, as the function is smooth.
Lloyd [23] proposed an iterative thresholding algorithm that minimizes the mean squared error between the original and the binarized images. In the first iteration, the average intensity is chosen as the threshold, and in further iterations, the threshold is updated via
$$t_{i+1} = \frac{\mu_1(t_i) + \mu_0(t_i)}{2} + \frac{\sigma^2}{\mu_1(t_i) - \mu_0(t_i)}\log\left(\frac{W_0(t_i)}{W_1(t_i)}\right),$$
where σ 2 is the variance of the entire image. The algorithm converges when the final threshold is equal to the previous threshold.
Sung et al. [24] argued that the within-class variance is more useful than the between-class variance as a selection criterion for the optimal threshold. The within-class variance is defined as
$$\sigma_{0+1}^2(t) = W_0(t)\,\sigma_0^2(t) + W_1(t)\,\sigma_1^2(t),$$
where σ 0 2 , σ 1 2 are the variances of each class. If there is more than one threshold for which the within-class variance is minimal, then they consider the average of those thresholds to be the optimal one. Their experiments show that their method performs better than Otsu’s on their synthetic dataset.
Ridler and Calvard [23] proposed an iterative thresholding algorithm very similar to Lloyd’s. In the first iteration, the average intensity is chosen as the threshold, and in further iterations, the threshold is updated via
$$t_{i+1} = \frac{\mu_0(t_i) + \mu_1(t_i)}{2}.$$
The algorithm converges when the final threshold is equal to the previous threshold.
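A sketch of this iteration (our own Python rendering; the empty-class fallbacks and iteration cap are our additions):

```python
import numpy as np

def ridler_calvard_threshold(hist: np.ndarray) -> int:
    """Iterate t <- (mu0(t) + mu1(t)) / 2 until the threshold stabilizes."""
    p = hist / hist.sum()
    i = np.arange(p.size)
    t = int(round(float((i * p).sum())))     # first iteration: mean intensity
    for _ in range(p.size):                  # converges in a handful of steps
        w0, w1 = p[: t + 1].sum(), p[t + 1:].sum()
        mu0 = (i[: t + 1] * p[: t + 1]).sum() / w0 if w0 > 0 else 0.0
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1 if w1 > 0 else float(t)
        t_next = int(round((mu0 + mu1) / 2))
        if t_next == t:                      # converged: threshold unchanged
            break
        t = t_next
    return t
```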
Huang and Wang [23] proposed an algorithm based on computing a measure of fuzziness, such as Shannon’s entropy, for each possible threshold and choosing the lowest threshold for which the minimum fuzziness is reached. They use the following formula for fuzziness:
$$E(t) = \sum_{i=1}^{I_{max}} S\left(\mu_t(i)\right) h_i,$$
where $h_i$ represents how many times the $i$th intensity appears in the image,
$$S\left(\mu_t(i)\right) = -\mu_t(i)\ln\mu_t(i) - \left(1 - \mu_t(i)\right)\ln\left(1 - \mu_t(i)\right)$$
is Shannon's entropy, and the membership function is
$$\mu_t(i) = \begin{cases}\left(1 + \left|i - \mu_0\right|/C\right)^{-1}, & \text{if } i \le t\\ \left(1 + \left|i - \mu_1\right|/C\right)^{-1}, & \text{otherwise,}\end{cases}$$
where $C$ is a constant such that $0.5 \le \mu_t(i) \le 1$.
Ramesh et al. [23] proposed an algorithm based on functional approximation of the histogram composed of recursive bilevel approximations that minimize an error function. They compare the results of two error functions:
  • The sum of the square errors:
$$E(t) = \sum_{i=1}^{t}\left(i - \mu_0\right)^2 + \sum_{i=t+1}^{I_{max}}\left(i - \mu_1\right)^2$$
  • The sum of the variances of each class:
$$E(t) = \sigma_0^2(t) + \sigma_1^2(t).$$
Their experimental results show that the latter error function produces better results than the former.
Li and Lee [25] proposed an algorithm based on minimizing the cross-entropy between the original image and the binarized image. The cross-entropy is computed for each possible threshold with the following formula:
$$H(t) = \sum_{i=1}^{t-1} i\, h_i \ln\frac{i}{\mu_0(t)} + \sum_{i=t}^{I_{max}} i\, h_i \ln\frac{i}{\mu_1(t)},$$
where $h_i$ is the number of pixels with intensity $i$ in the image, and $I_{max}$ is the maximum intensity. Li and Tam [26] proposed an iterative variation of this algorithm, boasting improved computational speed at the cost of accuracy. Brink and Pendock [27] proposed a similar algorithm, but they compute the cross-entropy with the following formula:
$$H(t) = \sum_{i=1}^{t} \mu_0(t)\log\frac{\mu_0(t)}{i} + \sum_{i=t+1}^{I_{max}} \mu_1(t)\log\frac{\mu_1(t)}{i}.$$
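To make the evaluation of such criteria concrete, here is a sketch of Li and Lee's minimization over all candidate thresholds (our own Python code, following the first formula above):

```python
import numpy as np

def li_lee_threshold(hist: np.ndarray) -> int:
    """Return t minimizing the cross-entropy H(t) of Li and Lee [25]."""
    i = np.arange(1, hist.size)              # intensities 1..Imax (log needs i > 0)
    h = hist[1:].astype(float)
    best_t, best_H = 1, np.inf
    for t in range(1, hist.size):
        lo, hi = i < t, i >= t
        mu0 = (i[lo] * h[lo]).sum() / h[lo].sum() if h[lo].sum() > 0 else 1.0
        mu1 = (i[hi] * h[hi]).sum() / h[hi].sum() if h[hi].sum() > 0 else 1.0
        H = (i[lo] * h[lo] * np.log(i[lo] / mu0)).sum() \
          + (i[hi] * h[hi] * np.log(i[hi] / mu1)).sum()
        if H < best_H:
            best_t, best_H = t, H
    return best_t
```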
Kapur et al. [26] proposed an algorithm based on maximizing an evaluation function:
$$f(t) = \frac{H(t)}{H(I_{max})}\cdot\frac{\ln c(t)}{\ln \max_{i=1..t} p_i} + \left(1 - \frac{H(t)}{H(I_{max})}\right)\cdot\frac{\ln\left(1 - c(t)\right)}{\ln \max_{i=t+1..I_{max}} p_i},$$
where they use the classic information entropy formula:
$$H(t) = -\sum_{i=1}^{t} p_i \ln p_i$$
and
$$c(t) = \sum_{i=1}^{t} p_i$$
is the cumulative probability density function.
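A sketch of this maximization (our own Python rendering of the formulas above; the small `eps` guarding the logarithms is our addition):

```python
import numpy as np

def kapur_evaluation_threshold(p: np.ndarray) -> int:
    """Maximize f(t) over thresholds; p is the normalized histogram."""
    eps = 1e-12
    H = -np.cumsum(p * np.log(p + eps))      # H(t) = -sum_{i<=t} p_i ln p_i
    c = np.cumsum(p)                         # cumulative probability c(t)
    best_t, best_f = 1, -np.inf
    for t in range(1, p.size - 1):           # keep both classes non-empty
        ratio = H[t] / H[-1]
        f = ratio * np.log(c[t] + eps) / np.log(p[: t + 1].max() + eps) \
          + (1 - ratio) * np.log(1 - c[t] + eps) / np.log(p[t + 1:].max() + eps)
        if f > best_f:
            best_t, best_f = t, f
    return best_t
```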
Sahoo et al. [26] proposed an algorithm based on maximizing the sum of Renyi’s entropies associated with each class:
$$H_0^\alpha(t) = \frac{1}{1-\alpha}\ln\sum_{i=0}^{t}\left(\frac{p_i}{W_0(t)}\right)^\alpha, \qquad H_1^\alpha(t) = \frac{1}{1-\alpha}\ln\sum_{i=t+1}^{I_{max}}\left(\frac{p_i}{W_1(t)}\right)^\alpha.$$
They observed that there are three distinct thresholds $t_1$, $t_2$, $t_3$ that maximize the sum depending on whether $\alpha$ is lower than, equal to, or greater than 1, and use all three values to determine the optimal threshold:
$$t^* = t_{(1)}\left[p\left(t_{(1)}\right) + 0.25\,\omega\beta_1\right] + 0.25\,t_{(2)}\,\omega\beta_2 + t_{(3)}\left[1 - p\left(t_{(3)}\right) + 0.25\,\omega\beta_3\right],$$
where $t_{(1)}$, $t_{(2)}$, $t_{(3)}$ are the order statistics of the thresholds $t_1$, $t_2$, $t_3$, respectively;
$$\omega = p\left(t_{(3)}\right) - p\left(t_{(1)}\right);$$
and $\beta_1$, $\beta_2$, $\beta_3$ take different values based on the distances between $t_{(1)}$, $t_{(2)}$, $t_{(3)}$.
Shanbhag [26] proposed a modified version of Kapur’s algorithm. They propose a different information measure:
$$\mathrm{info}(t) = \left|\mathrm{info}_0(t) - \mathrm{info}_1(t)\right|$$
and choose the threshold that minimizes this measure. They define
$$\mathrm{info}_0(t) = -\frac{1}{W_0(t)}\sum_{i=1}^{t} p_i \log P_{0,t}(i), \qquad \mathrm{info}_1(t) = -\frac{1}{W_1(t)}\sum_{i=t+1}^{I_{max}} p_i \log P_{1,t}(i),$$
where
$$P_{0,t}(t-i) = 0.5 + \frac{1}{2\,W_0(t)}\sum_{j=0}^{i} p_{t-i+j}, \qquad P_{1,t}(t+i) = 0.5 + \frac{1}{2\,W_1(t)}\sum_{j=0}^{i} p_{t+i-j}.$$
Yen et al. [26] proposed a maximum entropy criterion based on the discrepancy between the binarized image and the original image and the number of bits required to represent the binarized image. They define entropy as
$$E(t) = -\sum_{i=1}^{t} p_i \ln\frac{p_i}{W_0(t)} - \sum_{i=t+1}^{I_{max}} p_i \ln\frac{p_i}{W_1(t)}.$$
They choose the threshold with the maximum entropy.
Tsai [26] proposed an algorithm based on preserving the moments of the original image in the binarized image. They use the first 4 raw moments and obtain the following system:
$$\begin{aligned} W_0(t) + W_1(t) &= m_0\\ W_0(t)\,z_0 + W_1(t)\,z_1 &= m_1\\ W_0(t)\,z_0^2 + W_1(t)\,z_1^2 &= m_2\\ W_0(t)\,z_0^3 + W_1(t)\,z_1^3 &= m_3, \end{aligned}$$
where $m_i$ is the $i$-th raw moment, and $z_0$, $z_1$ are representative gray values for the two classes of pixels. Considering that
$$W_0(t) + W_1(t) = 1, \quad \forall\, t \in \{1, \dots, I_{max}\},$$
solving the system is trivial, and the threshold can be determined from the resulting $W_0(t)$ as the $W_0(t)$-th percentile of the image histogram.
Sarle’s bimodality coefficient (BC) [28] uses skewness and kurtosis to ascertain how separable two populations that form a bimodal distribution are. It can also be generalized (GBC) [29] by using standardized moments of higher orders. For both BC and GBC, values range from 0 to 1, with 1 indicating no overlap between the two populations and lower values indicating more overlap. The formula for the k -th GBC is
$$\mathrm{GBC}_k = \begin{cases} 0, & \text{for distributions concentrated on one point}\\ \dfrac{\tilde{\mu}_{2k+1}^2 + 1}{\tilde{\mu}_{2k+2}\,\tilde{\mu}_{2k}}, & \text{otherwise,} \end{cases}$$
where $\tilde{\mu}_n$ is the $n$-th standardized moment. Sarle's bimodality coefficient is $\mathrm{GBC}_1$.
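As an illustration, the standardized moments and GBCs can be computed directly from the histogram. The sketch below (our own Python code) uses double precision, whereas, as noted in Section 2.3, higher orders require extended precision:

```python
import numpy as np

def standardized_moment(hist: np.ndarray, n: int) -> float:
    """n-th standardized moment of the intensity distribution."""
    p = hist / hist.sum()
    i = np.arange(p.size)
    mu = (i * p).sum()
    sigma = np.sqrt((((i - mu) ** 2) * p).sum())
    if sigma == 0:
        return 0.0                           # distribution concentrated on one point
    return float((((i - mu) / sigma) ** n * p).sum())

def gbc(hist: np.ndarray, k: int) -> float:
    """Generalized bimodality coefficient GBC_k; GBC_1 equals Sarle's BC,
    since the second standardized moment is always 1."""
    m = lambda n: standardized_moment(hist, n)
    denom = m(2 * k + 2) * m(2 * k)
    return 0.0 if denom == 0 else (m(2 * k + 1) ** 2 + 1) / denom
```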

2.2. Tools

ML.NET [30] is a framework that allows users to train and evaluate a plethora of machine-learning models. Its AutoML component allows users to automatically explore various hyperparameter configurations for said models using one of several search strategies, such as random search, grid search, and the cost frugal tuner (CFT) [31]. There are several model types available in AutoML for regression, but LightGbm [32] always scored best in our tests. LightGbm is a gradient boosting decision tree that uses gradient-based one-sided sampling, which excludes a significant proportion of data instances with small gradients and uses only the rest to estimate the information gain, and exclusive feature bundling, which bundles mutually exclusive features to reduce the number of features.
ML.NET does not allow for custom metrics. The metrics available for regression are the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and R-squared (R2), also known as the coefficient of determination [33]. For the first three metrics, values range from 0 to infinity, with 0 indicating a perfect fit. For R2, values range from negative infinity to 1, with 1 indicating a perfect fit. Because MSE and RMSE square the errors, they are more sensitive to larger errors than MAE.
Training a model on a dataset might cause overfitting, i.e., maximizing the performance of the model on the training dataset at the cost of lower performance on new data. Resampling the dataset can alleviate or even eliminate overfitting. According to Berrar [34], 10-fold cross-validation is one of the most widely used data resampling methods. k-fold cross-validation splits a dataset into k equally sized disjoint partitions and forms k training sets and k test sets. Each test set consists of one of the k partitions, and its corresponding training set consists of the other k-1 partitions. For each training set, the model is fitted on it and evaluated on its corresponding test set, resulting in k performance metrics, one for each test set, which can be averaged to obtain the performance metric of the model.
Bates et al. [35] introduced nested cross-validation and showed that it is more robust than plain cross-validation in many settings. The idea behind nested cross-validation is to split the dataset into train/test sets twice: the models are fitted on the inner training sets and evaluated on the inner test sets, then fitted on the outer training sets and evaluated on the outer test sets. All the metrics obtained on all the test sets are then used to compute a robust estimate of the target metric, consisting of an average value and an MSE.
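The following schematic sketch (our own Python illustration using scikit-learn's KFold; the paper's experiments run in ML.NET, and `fit_and_score` is a hypothetical helper returning per-image scores on a test set) shows how the two estimates are assembled:

```python
import numpy as np
from sklearn.model_selection import KFold

def nested_cv_estimate(X, y, fit_and_score, k_outer=11, k_inner=10, seed=0):
    """Return (mean inner score, error estimate) in the spirit of Bates et al. [35]."""
    inner_scores, a, b = [], [], []
    for train_idx, test_idx in KFold(k_outer, shuffle=True, random_state=seed).split(X):
        fold_inner = []
        for tr, te in KFold(k_inner, shuffle=True, random_state=seed).split(train_idx):
            fold_inner.extend(fit_and_score(X[train_idx[tr]], y[train_idx[tr]],
                                            X[train_idx[te]], y[train_idx[te]]))
        out = fit_and_score(X[train_idx], y[train_idx], X[test_idx], y[test_idx])
        a.append((np.mean(fold_inner) - np.mean(out)) ** 2)   # inner/outer gap
        b.append(np.var(out) / len(out))                      # outer variance term
        inner_scores.extend(fold_inner)
    return np.mean(inner_scores), np.mean(a) - np.mean(b)
```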

2.3. Dataset and Metrics

Our dataset is composed of all the images from the following datasets: the DIBCO datasets [36,37,38,39,40,41,42,43,44,45], a synthetic dataset from [46], the CMATERdb dataset [47], a subset of 10 pages from the Einsiedeln Stiftsbibliothek [48], a subset of 10 pages from the Salzinnes Antiphonal Manuscript [49], a subset of 15 images from the LiveMemory project [50], the Nabuco dataset [51], the Noisy Office dataset [52], the LRDE-DBD dataset [53], the palm leaf dataset [54], the PHIBD 2012 dataset [55], the Rahul Sharma dataset [56], and a multispectral dataset [57]. To enrich the dataset, 15 variants were generated for each image by adjusting the gamma value, with values ranging from 0.5 to 2. For each image, 48 features were computed: the thresholds resulting from the 15 algorithms presented in Section 2.1; the confidence values for the 7 algorithms that provide them (Otsu, Lloyd, Ridler, Huang, Ramesh, Li and Lee, and Li and Tam), which represent the normalized values of their respective optimization functions for their selected thresholds; the normalized between-class variances associated with the computed thresholds; the pixel average, standard deviation, and normalized standardized moments up to the eighth order; Sarle's bimodality coefficient; and the generalized bimodality coefficients up to the third order. Higher-order moments require significantly better precision than that offered by the standard 64-bit double floating-point representation. To ensure the required precision, the Boost Multiprecision C++ library [58] was employed, using 256 bits for the cpp_bin_float back-end.
All features were normalized using min-max normalization, which ensures all values are in the $[0, 1]$ range by subtracting $min$ from all values and then dividing them by $max - min$. The $min$ value was considered 0 for all features that cannot have negative values. The $max$ value was considered the maximum possible value for all features whose values, in practice, reach values close to their theoretical upper limit (255 for thresholds, 127.5 for the standard deviation, $127.5^2$ for all variances, and 1 for all bimodality coefficients). For the features that do not have a finite or obvious lower/upper limit (both limits for standardized moments of odd orders and upper limits for standardized moments of even orders), quantiles were used instead of min and max values (the 0.001 quantile for the lower limit and the 0.999 quantile for the upper limit) to clamp outlier values to a more compact interval, ensuring a less sparse distribution of feature values.
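A sketch of this normalization scheme (our own Python code; the per-column bounds are assumed to be precomputed as described above):

```python
import numpy as np

def quantile_bounds(col: np.ndarray) -> tuple:
    """Bounds for features without obvious limits: the 0.001 and 0.999 quantiles."""
    return np.quantile(col, 0.001), np.quantile(col, 0.999)

def normalize_features(F: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Min-max normalize each column of F to [0, 1], clamping outliers to [lo, hi].

    lo/hi hold per-feature limits: fixed values where known (e.g. 0 and 255 for
    thresholds, 127.5^2 for variances), quantile-based bounds otherwise."""
    F = np.clip(F, lo, hi)
    return (F - lo) / (hi - lo)
```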
A popular metric used to evaluate binarized images is the F-measure (FM), defined as
$$FM = \frac{2\,TP}{2\,TP + F},$$
where $TP$ is the number of pixels correctly classified as pixels containing text, and $F$ is the number of pixels incorrectly classified. $FM$ takes values between 0 and 1, with 1 indicating a perfect classifier. However, because we are using global thresholding, the maximum value of $FM$ can differ from image to image. The maximum possible value of $FM$ for an image is not necessarily provided by a single threshold. If two thresholds $T_1$ and $T_2$ provide the maximum possible $FM$, then all thresholds in the interval $[T_1, T_2]$ provide the maximum possible $FM$. The ideal threshold of an image, in the global thresholding scenario, was defined as the middle of the largest interval of thresholds that provide the maximum possible FM for that image. The ideal threshold was computed for each image in the dataset and used as the column to be predicted by the ML models.
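The ideal threshold can be computed by an exhaustive sweep; here is a sketch (our own Python rendering of the definition above, with `gt` true where the ground-truth pixel is text):

```python
import numpy as np

def ideal_threshold(gray: np.ndarray, gt: np.ndarray) -> float:
    """Middle of the largest run of thresholds attaining the maximum FM."""
    fms = np.empty(256)
    for t in range(256):
        pred = gray <= t                           # predicted text pixels
        tp = np.logical_and(pred, gt).sum()
        f = np.logical_xor(pred, gt).sum()         # misclassified = FP + FN
        fms[t] = 2 * tp / (2 * tp + f) if (2 * tp + f) > 0 else 0.0
    ts = np.flatnonzero(np.isclose(fms, fms.max()))
    runs = np.split(ts, np.where(np.diff(ts) > 1)[0] + 1)  # consecutive runs
    longest = max(runs, key=len)
    return (longest[0] + longest[-1]) / 2
```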
The goal of the experiment is to find a regression model instance that approximates the ideal threshold as closely as possible. Because the maximum possible FM on different image sets is not necessarily the same, we define the relative FM ( F M r ) as
$$FM_r = \frac{FM}{FM_{Max}},$$
where F M is the FM provided by the predicted threshold, and F M M a x is the maximum possible FM of the image. With F M r , the performance on different images can be compared as the values it can take are always between 0 and 1, and the maximum value of 1 can always be reached regardless of the image.
Other popular metrics are peak signal-to-noise ratio (PSNR) and MSE. Both PSNR and FM are better when their values are higher, so we define relative PSNR ( P S N R r ) similarly to F M r as
$$PSNR_r = \frac{PSNR}{PSNR_{Max}},$$
where P S N R is the PSNR provided by the predicted threshold, and P S N R M a x is the maximum possible PSNR of the image. MSE is better when its values are lower, so we define relative MSE ( M S E r ) as
$$MSE_r = \frac{MSE_{Min} + \epsilon}{MSE + \epsilon},$$
where $MSE$ is the MSE provided by the predicted threshold; $MSE_{Min}$ is the minimum possible MSE of the image; and $\epsilon$ is added to avoid division by 0 while still ensuring that $MSE_r$ can take all values in $(0, 1]$.
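These relative metrics are straightforward to compute once the per-image optima are known; a minimal sketch (our own code, with a hypothetical default for `eps`):

```python
def relative_metrics(fm, fm_max, psnr, psnr_max, mse, mse_min, eps=1e-6):
    """FM_r, PSNR_r, and MSE_r as defined above; eps avoids division by zero."""
    fm_r = fm / fm_max
    psnr_r = psnr / psnr_max
    mse_r = (mse_min + eps) / (mse + eps)   # lower MSE -> MSE_r closer to 1
    return fm_r, psnr_r, mse_r
```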

2.4. Proposed Solution

In order to achieve nested cross-validation, the dataset was split into 11 folds: one fold was set apart as the outer test set, and the remaining 10 folds formed the outer training set. An ML.NET AutoML regression experiment was set up with cross-validation on the outer training set, with a time limit of 60 min, using MSE as the optimizing metric, exploring all the model types available for regression, and searching for hyperparameters using an implementation of CFT for hierarchical search spaces. Lower time limits were also tested but produced slightly worse results; higher time limits were not tested due to the increased computational cost. Each of the 10 folds in the outer training set formed an inner test set, with the remaining 9 forming an inner training set. The model instances were fitted on the inner training sets and evaluated on the inner test sets, producing 10 different values for the evaluation metrics. The model instances were then refitted on the outer training set and evaluated on the outer test set. Each of the 10 folds was then switched with the one in the outer test set, and the model instances were refitted on the resulting inner training sets and evaluated on the inner test sets. This resulted in a total of 110 different values for the evaluation metrics for models trained on 9 folds and evaluated on one fold, and a total of 11 different values for models trained on 10 folds and evaluated on one fold.
We denote by $FM_r^{in}$ the values of $FM_r$ obtained on images from the inner test sets and by $FM_r^{out}$ the values of $FM_r$ obtained on images from the outer test sets. The nested cross-validation estimand of $FM_r$ is denoted $\widehat{FM_r}$ and is the average of $FM_r^{in}$ over all the inner test sets. For each outer test set, two values are computed:
$$MSE_{in\text{-}out} = \left(\overline{FM_r^{in}} - \overline{FM_r^{out}}\right)^2, \qquad \sigma_{out}^2 = \mathrm{var}\left(FM_r^{out}\right),$$
where $\bar{x}$ denotes the average value of the measure $x$, and $FM_r^{in}$ refers to all the inner test sets corresponding to the outer test set. The nested cross-validation estimand of the mean squared error of $\widehat{FM_r}$ is denoted $\widehat{MSE_{FM}}$ and is the difference between $\overline{MSE_{in\text{-}out}}$ and $\overline{\sigma_{out}^2}$. Analogous estimands were computed for the PSNR and the MSE of the binarized image but were not used in the ranking of the models.
A score was defined as
$$\widehat{FM_r} - 3\,\widehat{MSE_{FM}}$$
and assigned to each model instance to determine the best one (a higher score is better). The best instance was then refitted on the whole dataset, and the average metrics were computed over the entire dataset to compare the model's results with those of the input algorithms. Algorithm 1 illustrates, using pseudocode, how the best model was selected using NCV.
Algorithm 1. NCV experiment pseudocode.
Input: dataset
Output: ML model
partitions = split the dataset into 11 equally sized partitions
for i from 1 to 11:
    outerFolds[i].testSet  = partitions[i]
    outerFolds[i].trainSet = all partitions except partitions[i]
innerFolds = split outerFolds[1].trainSet into 10 folds
models = run AutoML 10-fold cross-validation experiment on innerFolds
bestScore = -infinity, bestModel = null
for each model in models:
    esfm = empty array
    for oi from 1 to 11:
        innerFolds = split outerFolds[oi].trainSet into 10 folds
        einfm = empty array
        for ii from 1 to 10:
            innerModel = model.fit(innerFolds[ii].trainSet)
            predicted = innerModel.transform(innerFolds[ii].testSet)
            for i from 1 to predicted.size:
                einfm.add(relative FM of predicted[i])
        outerModel = model.fit(outerFolds[oi].trainSet)
        predicted = outerModel.transform(outerFolds[oi].testSet)
        eoutfm = empty array
        for i from 1 to predicted.size:
            eoutfm.add(relative FM of predicted[i])
        afm[oi] = pow(average(einfm) - average(eoutfm), 2)
        bfm[oi] = variance(eoutfm) / eoutfm.size
        esfm.concatenate(einfm)
    MSE_FM_hat = average(afm) - average(bfm)
    FM_r_hat = average(esfm)
    score = FM_r_hat - 3 * MSE_FM_hat
    if score > bestScore:
        bestScore = score
        bestModel = model
return bestModel

3. Results

Table 1 shows the resulting best model instances of several runs of the same nested cross-validation (NCV) experiment, as described in Section 2.4, using all features. The procedure produces consistent results, with a LightGbm regression model always taking first place.
Three feature subsets were found to impact the results the most: 1—the 15 thresholds, 2—the moments (average, standard deviation, standardized moments from the third to the eighth), and 3—the BCs (Sarle’s bimodality coefficient and the next two generalized bimodality coefficients). Table 2 shows how these subsets were combined, and Table 3 compares the NCV results obtained with each subset and with the full set. The results found for the fourth, fifth, and seventh feature subsets are the best out of the seven and in line with the results of the experiments using all features, showing that the discarded features’ contributions are insignificant. Subset 1 seems to be the most relevant, followed closely by subset 2; however, their combinations yield significantly better results.
Table 4 compares the $\overline{FM}$ and $\overline{FM_r}$ for each thresholding algorithm and for the ideal threshold with the results obtained by refitting the models obtained through NCV on the entire dataset. It should be noted that
$$\overline{FM} \ne \overline{FM_{max}} \cdot \overline{FM_r}$$
because even though
$$FM_r = \frac{FM}{FM_{max}},$$
averaging $FM_r$ produces
$$\overline{FM_r} = n^{-1}\sum FM_r = n^{-1}\sum \frac{FM}{FM_{max}},$$
which is different from the average value of $FM$,
$$\overline{FM} = n^{-1}\sum FM,$$
because $FM_{max}$ is not constant across different images. This is why, in Table 4,
$$\overline{FM_r} \cdot \overline{FM}_{Ideal\,T} \ne \overline{FM}$$
for any thresholding method other than the ideal one. The observation holds true for PSNR and MSE as well.
Table 5 shows the average processing time, in milliseconds, for an image with a known histogram. The images have an average resolution of 3.31 megapixels. Image resolution does not affect processing times because the proposed solution operates linearly on the image histogram. It should be noted that all the times reported in this paper were obtained on a machine with an Intel Core i5-14600K processor running at up to 5.3 GHz. The application was built on Windows 11 targeting .NET 6.0.
Table 6 compares the results of the proposed method with those of recent classical and PDE-based methods (introduced in Section 1.1) on the H-DIBCO 2014 dataset.
Table 7 shows that the proposed method outperforms state-of-the-art neural-network-based methods (introduced in Section 1.2) on the DIBCO 2019 dataset, but it also shows that, when averaging on all DIBCO datasets, the proposed method cannot compete. This shows that global methods are not obsolete and that local methods are not always the better choice, even when time and resources are not taken into consideration.

4. Discussion

The proposed solution's global approach offers various benefits in terms of robustness and computational efficiency. By operating in a global context, rather than depending on local relationships, the algorithm is less affected by noise or small-scale variations, making it especially useful for tasks where global patterns matter more than detailed local structures. This also results in simpler computations, meaning faster processing and lower complexity, which is ideal for real-time or resource-constrained applications, such as embedded systems. Additionally, since the solution does not depend on spatial relationships between samples, it generalizes well to any signal that requires separation into low- and high-value groups, beyond just two-dimensional images. This efficiency also benefits OCR systems and other machine-learning applications where local approaches can introduce noise, by providing reliable preprocessing and robust features that can be computed quickly.
However, while this approach of ignoring local relationships enhances efficiency, it can lead to the loss of important local information, which could be crucial for tasks like edge detection, texture analysis, or applications where neighboring samples are strongly correlated. For such scenarios, methods that take local relationships into account might provide more accurate results.
In the following figures, black represents true positives, white represents true negatives, cyan represents false positives, and red represents false negatives. Figure 1 illustrates the importance of local relationships with two images from the DIBCO 2018 dataset for which global binarization algorithms cannot possibly achieve an acceptable result. For images such as these, no global threshold can eliminate both the false negatives and the false positives.
Figure 2 shows that on some images, such as image 13 from the challenging DIBCO 2019 dataset, where even state-of-the-art neural-network-based algorithms struggle to produce acceptable results, a global binarization algorithm can produce competitive results.
The diverse training dataset produces a model that is more robust in terms of noise, illumination conditions, contrast, etc. Combining multiple datasets carries the risk of hindering model performance for some particular datasets, but the benefit of increasing the robustness outweighs the risk. Cross-validation further reduces this risk, ensuring that the model performs consistently across all folds.
Data augmentation also improves model robustness and performance. The parameters used for all data augmentation transformations were set to produce realistic results that could have been captured from the same documents in different lighting conditions or with different camera settings.

5. Conclusions

The proposed solution produces results very close to the ideal scenario on various datasets and much better than any of the input algorithms, with an average score of around 90.3–90.8% of the maximum possible score and an MSE of around 0.7–0.8% when using nested cross-validation. When refitting the models on the entire dataset, the results improved even further, with the best model reaching an average score of 99.99% of the maximum possible score and an average processing time of just 0.22–0.25 s.
Future improvements could include using FM as the optimization metric, which might boost the average score and MSE across dataset folds, although this was not feasible in the current implementation due to the limitations of the ML.NET framework. The overall accuracy could be enhanced by applying the algorithm locally around each pixel, resulting in a local thresholding algorithm, or by applying the model on image patches, resulting in a mixed global–local thresholding algorithm. Experimenting with different neural network architectures specialized for regression instead of using AutoML might yield better results.
The proposed solution could be used as an estimator in a more complex approach. For example, Yang et al. [22] use Otsu’s method to generate an input noisy mask for a multi-branch coarse sub-network that predicts an output mask, which then feeds into a refinement sub-network to produce the final binarized image. The predicted ideal threshold could also be used as a feature in a local thresholding machine learning model.
Additional algorithms could be implemented into the overall solution to improve performance and adaptability to more complex tasks. For instance, incorporating local or mixed global–local thresholding techniques could improve accuracy in cases where fine-grained details are crucial. By expanding the algorithm to include more advanced techniques, the solution could become even more robust, offering greater flexibility for tasks such as image segmentation, edge detection, or other specialized applications.

Author Contributions

Conceptualization, N.T., C.-A.B. and M.-L.V.; data curation, C.-A.B.; formal analysis, N.T., C.-A.B. and M.-L.V.; investigation, N.T.; methodology, N.T. and M.-L.V.; project administration, C.-A.B.; resources, C.-A.B.; software, N.T.; supervision, C.-A.B.; validation, N.T., C.-A.B., and M.-L.V.; visualization, N.T., C.-A.B. and M.-L.V.; writing—original draft preparation, N.T.; writing—review and editing, N.T., C.-A.B. and M.-L.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCSarle’s Bimodality Coefficient
GBCGeneralized BC
CFTCost Frugal Tuner
MSEMean Squared Error
RMSERoot MSE
MAEMean Absolute Error
R2R-Squared
FMF-Measure
FMrRelative FM
PDEPartial Differential Equation
PSNRPeak Signal-to-Noise Ratio
PSNRrRelative PSNR
MSErRelative MSE
NCVNested Cross-Validation

References

  1. Tahseen, A.J.A.; Sotnik, S.; Sinelnikova, T.; Lyashenko, V. Binarization Methods in Multimedia Systems when Recognizing License Plates of Cars. Int. J. Acad. Eng. Res. 2023, 7, 1–9. [Google Scholar]
  2. Wang, S.; Fan, J. Image Thresholding Method Based on Tsallis Entropy Correlation. Multimed. Tools Appl. 2024, 84, 9749–9785. [Google Scholar] [CrossRef]
  3. Elen, A.; Dönmez, E. Histogram-Based Global Thresholding Method for Image Binarization. Optik 2024, 306, 171814. [Google Scholar] [CrossRef]
  4. Guo, J.; He, C. Adaptive Shock-Diffusion Model for Restoration of Degraded Document Images. Appl. Math. Model. 2019, 79, 555–565. [Google Scholar] [CrossRef]
  5. Guo, J.; He, C.; Zhang, X. Nonlinear Edge-Preserving Diffusion with Adaptive Source for Document Images Binarization. Appl. Math. Comput. 2019, 351, 8–22. [Google Scholar] [CrossRef]
  6. Mahani, Z.; Zahid, J.; Saoud, S.; Rhabi, M.E.; Hakim, A. Text Enhancement by PDE’s Based Methods. In Lecture Notes in Computer Science; Springer Nature: Berlin/Heidelberg, Germany, 2012; pp. 65–76. [Google Scholar]
  7. Bogiatzis, A.; Papadopoulos, B. Global Image Thresholding Adaptive Neuro-Fuzzy Inference System Trained with Fuzzy Inclusion and Entropy Measures. Symmetry 2019, 11, 286. [Google Scholar] [CrossRef]
  8. Ashir, A.M. Multilevel Thresholding for Image Segmentation Using Mean Gradient. J. Electr. Comput. Eng. 2022, 2022, 1–9. [Google Scholar] [CrossRef]
  9. Rani, U.; Kaur, A.; Josan, G. A New Binarization Method for Degraded Document Images. Int. J. Inf. Technol. 2019, 15, 1035–1053. [Google Scholar] [CrossRef]
  10. Calvo-Zaragoza, J.; Gallego, A.-J. A Selectional Auto-Encoder Approach for Document Image Binarization. Pattern Recognit. 2018, 86, 37–47. [Google Scholar] [CrossRef]
  11. Peng, X.; Wang, C.; Cao, H. Document Binarization via Multi-Resolutional Attention Model with DRD Loss. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), Sydney, Australia, 20–25 September 2019; pp. 45–50. [Google Scholar]
  12. He, S.; Schomaker, L. DeepOtsu: Document Enhancement and Binarization Using Iterative Deep Learning. Pattern Recognit. 2019, 91, 379–390. [Google Scholar] [CrossRef]
  13. Mondal, R.; Chakraborty, D.; Chanda, B. Learning 2D Morphological Network for Old Document Image Binarization. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), Sydney, Australia, 20–25 September 2019; pp. 65–70. [Google Scholar]
  14. De, R.; Chakraborty, A.; Sarkar, R. Document Image Binarization Using Dual Discriminator Generative Adversarial Networks. IEEE Signal Process. Lett. 2020, 27, 1090–1094. [Google Scholar] [CrossRef]
  15. Suh, S.; Kim, J.; Lukowicz, P.; Lee, Y.O. Two-Stage Generative Adversarial Networks for Binarization of Color Document Images. Pattern Recognit. 2022, 130, 108810. [Google Scholar] [CrossRef]
  16. Zhao, J.; Shi, C.; Jia, F.; Wang, Y.; Xiao, B. Document Image Binarization with Cascaded Generators of Conditional Generative Adversarial Networks. Pattern Recognit. 2019, 96, 106968. [Google Scholar] [CrossRef]
  17. Souibgui, M.A.; Kessentini, Y. DE-GAN: A Conditional Generative Adversarial Network for Document Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1180–1191. [Google Scholar] [CrossRef]
  18. Yang, M.; Xu, S. A Novel Degraded Document Binarization Model through Vision Transformer Network. Inf. Fusion 2022, 93, 159–173. [Google Scholar] [CrossRef]
  19. Yang, Z.; Liu, B.; Xiong, Y.; Yi, L.; Wu, G.; Tang, X.; Liu, Z.; Zhou, J.; Zhang, X. DocDiff: Document Enhancement via Residual Diffusion Models. In Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), Ottawa, ON, Canada, 29 October–3 November 2023; pp. 2795–2806. [Google Scholar]
  20. Souibgui, M.A.; Biswas, S.; Jemni, S.K.; Kessentini, Y.; Fornes, A.; Llados, J.; Pal, U. DocEnTr: An End-to-End Document Image Enhancement Transformer. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022. [Google Scholar] [CrossRef]
  21. Biswas, R.; Roy, S.K.; Wang, N.; Pal, U.; Huang, G.B. DocBinFormer: A Two-Level Transformer Network for Effective Document Image Binarization. arXiv 2023, arXiv:2312.03568. [Google Scholar] [CrossRef]
  22. Yang, Z.; Liu, B.; Xiong, Y.; Wu, G. GDB: Gated Convolutions-Based Document Binarization. Pattern Recognit. 2024, 146, 109989. [Google Scholar] [CrossRef]
  23. Siddiqui, F.U.; Yahya, A. Clustering Techniques for Image Segmentation; Springer Nature: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  24. Sung, J.-M.; Kim, D.-C.; Choi, B.-Y.; Ha, Y.-H. Image Thresholding Using Standard Deviation. Proc. SPIE Int. Soc. Opt. Eng. 2014, 9024, 90240R. [Google Scholar] [CrossRef]
  25. Li, C.H.; Lee, C.K. Minimum Cross Entropy Thresholding. Pattern Recognit. 1993, 26, 617–625. [Google Scholar] [CrossRef]
  26. Sekertekin, A. A Survey on Global Thresholding Methods for Mapping Open Water Body Using Sentinel-2 Satellite Imagery and Normalized Difference Water Index. Arch. Comput. Methods Eng. 2020, 28, 1335–1347. [Google Scholar] [CrossRef]
  27. Brink, A.D.; Pendock, N.E. Minimum Cross-Entropy Threshold Selection. Pattern Recognit. 1996, 29, 179–188. [Google Scholar] [CrossRef]
  28. Knapp, T.R. Bimodality Revisited. J. Mod. Appl. Stat. Methods 2007, 6, 8–20. [Google Scholar] [CrossRef]
  29. Tarbă, N.; Voncilă, M.-L.; Boiangiu, C.-A. On Generalizing Sarle’s Bimodality Coefficient as a Path towards a Newly Composite Bimodality Coefficient. Mathematics 2022, 10, 1042. [Google Scholar] [CrossRef]
  30. Ahmed, Z.; Amizadeh, S.; Bilenko, M.; Carr, R.; Chin, W.-S.; Dekel, Y.; Dupre, X.; Eksarevskiy, V.; Filipi, S.; Finley, T.; et al. Machine Learning at Microsoft with ML.NET. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2448–2458. [Google Scholar]
  31. Wu, Q.; Wang, C.; Huang, S. Frugal Optimization for Cost-Related Hyperparameters. Proc. AAAI Conf. Artif. Intell. 2021, 35, 10347–10354. [Google Scholar] [CrossRef]
  32. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  33. Hughes, A.J.; Grawoig, D.E. Statistics, a Foundation for Analysis; Addison-Wesley Educational Publishers Inc.: Boston, MA, USA, 1971. [Google Scholar]
  34. Berrar, D. Cross-Validation. In Elsevier eBooks; Elsevier: Amsterdam, The Netherlands, 2018; pp. 542–545. [Google Scholar]
  35. Bates, S.; Hastie, T.; Tibshirani, R. Cross-Validation: What Does It Estimate and How Well Does It Do It? J. Am. Stat. Assoc. 2023, 119, 1434–1445. [Google Scholar] [CrossRef]
  36. Gatos, B.; Ntirogiannis, K.; Pratikakis, I. ICDAR 2009 Document Image Binarization Contest (DIBCO 2009). In Proceedings of the 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009; pp. 1375–1382. [Google Scholar]
  37. Pratikakis, I.; Gatos, B.; Ntirogiannis, K. H-DIBCO 2010—Handwritten Document Image Binarization Competition. In Proceedings of the 2010 12th International Conference on Frontiers in Handwriting Recognition, Kolkata, India, 16–18 November 2010; pp. 727–732. [Google Scholar]
  38. Pratikakis, I.; Gatos, B.; Ntirogiannis, K. ICDAR 2011 Document Image Binarization Contest (DIBCO 2011). In Proceedings of the International Conference on Document Analysis and Recognition, Beijing, China, 18–21 September 2011; pp. 1506–1510. [Google Scholar] [CrossRef]
  39. Pratikakis, I.; Gatos, B.; Ntirogiannis, K. ICFHR 2012 Competition on Handwritten Document Image Binarization (H-DIBCO 2012). In Proceedings of the International Conference on Frontiers in Handwriting Recognition, Bari, Italy, 18–20 September 2012; pp. 817–822. [Google Scholar] [CrossRef]
  40. Pratikakis, I.; Gatos, B.; Ntirogiannis, K. ICDAR 2013 Document Image Binarization Contest (DIBCO 2013). In Proceedings of the 12th International Conference on Document Analysis and Recognition, Washington, DC, USA, 25–28 August 2013; pp. 1471–1476. [Google Scholar]
  41. Ntirogiannis, K.; Gatos, B.; Pratikakis, I. ICFHR2014 Competition on Handwritten Document Image Binarization (H-DIBCO 2014). In Proceedings of the 14th International Conference on Frontiers in Handwriting Recognition, Crete, Greece, 1–4 September 2014; pp. 809–813. [Google Scholar]
  42. Pratikakis, I.; Zagoris, K.; Barlas, G.; Gatos, B. ICFHR2016 Handwritten Document Image Binarization Contest (H-DIBCO 2016). In Proceedings of the 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), Shenzhen, China, 23–26 October 2016; pp. 619–623. [Google Scholar]
  43. Pratikakis, I.; Zagoris, K.; Barlas, G.; Gatos, B. ICDAR2017 Competition on Document Image Binarization (DIBCO 2017). In Proceedings of the 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–12 November 2017; pp. 1395–1403. [Google Scholar]
  44. Pratikakis, I.; Zagori, K.; Kaddas, P.; Gatos, B. ICFHR 2018 Competition on Handwritten Document Image Binarization (H-DIBCO 2018). In Proceedings of the 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, USA, 5–8 August 2018; pp. 489–493. [Google Scholar]
  45. Pratikakis, I.; Zagoris, K.; Karagiannis, X.; Tsochatzidis, L.; Mondal, T.; Marthot-Santaniello, I. ICDAR 2019 Competition on Document Image Binarization (DIBCO 2019). In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), Sydney, Australia, 20–25 September 2019; pp. 1547–1556. [Google Scholar]
  46. Stathis, P.; Kavallieratou, E.; Papamarkos, N. An Evaluation Technique for Binarization Algorithms. JUCS—J. Univers. Comput. Sci. 2008, 14, 3011–3030. [Google Scholar]
  47. CMATERdb. The Pattern Recognition Database Repository. Available online: https://code.google.com/archive/p/cmaterdb/ (accessed on 22 May 2025).
  48. Einsiedeln, Stiftsbibliothek, Codex 611(89), from 1314. Available online: https://www.e-codices.unifr.ch/en/sbe/0611/ (accessed on 22 May 2025).
  49. Salzinnes Antiphonal Manuscript (CDM-Hsmu M2149.14). Available online: https://cantus.simssa.ca/manuscript/133/ (accessed on 22 May 2025).
  50. Lins, R.D.; Torreão, G.; Silva, G.P.E. Content Recognition and Indexing in the LiveMemory Platform. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; pp. 220–230. [Google Scholar]
  51. Lins, R.D. Two Decades of Document Processing in Latin America. J. Univers. Comput. Sci. 2011, 17, 151–161. [Google Scholar]
  52. Castro-Bleda, M.J.; España-Boquera, S.; Pastor-Pellicer, J.; Zamora-Martínez, F. The NoisyOffice Database: A Corpus to Train Supervised Machine Learning Filters for Image Processing. Comput. J. 2019, 63, 1658–1667. [Google Scholar] [CrossRef]
  53. Lazzara, G.; Géraud, T. Efficient Multiscale Sauvola’s Binarization. Int. J. Doc. Anal. Recognit. (IJDAR) 2013, 17, 105–123. [Google Scholar] [CrossRef]
  54. Burie, J.-C.; Coustaty, M.; Hadi, S.; Kesiman, M.W.A.; Ogier, J.-M.; Paulus, E.; Sok, K.; Sunarya, I.M.G.; Valy, D. ICFHR2016 Competition on the Analysis of Handwritten Text in Images of Balinese Palm Leaf Manuscripts. In Proceedings of the 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), Shenzhen, China, 23–26 October 2016; pp. 596–601. [Google Scholar]
  55. Nafchi, H.Z.; Ayatollahi, S.M.; Moghaddam, R.F.; Cheriet, M. Persian Heritage Image Binarization Dataset (PHIBD 2012). Available online: http://tc11.cvc.uab.es/datasets/PHIBD 2012_1 (accessed on 22 May 2025).
  56. Singh, B.M.; Sharma, R.; Ghosh, D.; Mittal, A. Adaptive Binarization of Severely Degraded and Non-Uniformly Illuminated Documents. Int. J. Doc. Anal. Recognit. (IJDAR) 2014, 17, 393–412. [Google Scholar] [CrossRef]
  57. Hedjam, R.; Cheriet, M. Ground-Truth Estimation in Multispectral Representation Space: Application to Degraded Document Image Binarization. In Proceedings of the 12th International Conference on Document Analysis and Recognition, Washington, DC, USA, 25–28 August 2013; pp. 190–194. [Google Scholar]
  58. Maddock, J.; Kormanyos, C. Boost Multiprecision. Available online: https://www.boost.org/doc/libs/1_85_0/libs/multiprecision/doc/html/boost_multiprecision/intro.html (accessed on 22 May 2025).
Figure 1. Problematic images for global binarization from DIBCO 2018: 2 (top) and 10 (bottom). From left to right: original image, image binarized with Otsu's method (threshold too high), and image binarized with the proposed method (threshold too low).
Figure 2. From top to bottom, left to right: original image, ground truth, Otsu binarization, Sauvola binarization, GDB binarization, proposed solution binarization.
Table 1. NCV results with full feature set.

| $\widehat{FM_r}$ (%) | $\widehat{MSE_{FM}}$ (‰) | $\widehat{PSNR_r}$ (%) | $\widehat{MSE_{PSNR}}$ (‰) | $\widehat{MSE_r}$ (%) | $\widehat{MSE_{MSE}}$ (‰) |
|---|---|---|---|---|---|
| 90.86 | 7.99 | 81.64 | 11.08 | 62.32 | 22.01 |
| 90.46 | 8.08 | 81.16 | 11.72 | 61.81 | 23.69 |
| 90.56 | 8.43 | 81.11 | 11.18 | 61.70 | 23.86 |
| 90.80 | 8.06 | 81.40 | 11.60 | 61.97 | 23.97 |
Table 2. Feature subsets.

| Feature Set | Thresholds | Moments | BCs |
|---|---|---|---|
| 1 | + | – | – |
| 2 | – | + | – |
| 3 | – | – | + |
| 4 | + | + | – |
| 5 | + | – | + |
| 6 | – | + | + |
| 7 | + | + | + |

+ indicates the presence of a subset and – indicates the absence of a subset.
Table 3. NCV result comparison.

| Feature Set | $\widehat{FM_r}$ (%) | $\widehat{MSE_{FM}}$ (‰) | $\widehat{PSNR_r}$ (%) | $\widehat{MSE_{PSNR}}$ (‰) | $\widehat{MSE_r}$ (%) | $\widehat{MSE_{MSE}}$ (‰) |
|---|---|---|---|---|---|---|
| 1 | 89.59 | 8.64 | 82.38 | 7.23 | 63.05 | 15.37 |
| 2 | 84.15 | 8.00 | 74.79 | 12.52 | 50.50 | 24.45 |
| 3 | 66.88 | 56.19 | 66.81 | 15.50 | 42.98 | 25.73 |
| 4 | 90.31 | 6.93 | 81.58 | 10.35 | 62.28 | 21.18 |
| 5 | 90.29 | 8.92 | 82.97 | 7.43 | 64.18 | 15.59 |
| 6 | 86.42 | 8.13 | 77.71 | 15.28 | 54.90 | 27.43 |
| 7 | 90.39 | 7.96 | 81.75 | 10.15 | 62.52 | 19.89 |
| Full | 90.86 | 7.99 | 81.64 | 11.08 | 62.32 | 22.01 |
Table 4. Result comparison on the entire dataset.

| Proposed Solution with Feature Set | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Full |
|---|---|---|---|---|---|---|---|---|
| $\overline{FM}$ (%) | 81.56 | 81.65 | 62.30 | 81.75 | 81.73 | 81.73 | 81.75 | 81.55 |
| $\overline{FM_r}$ (%) | 99.70 | 99.79 | 70.58 | 99.99 | 99.97 | 99.93 | 99.99 | 99.74 |
| $\overline{PSNR}$ | 27.01 | 27.37 | 18.17 | 27.50 | 27.48 | 27.49 | 27.50 | 27.18 |
| $\overline{PSNR_r}$ (%) | 92.94 | 93.96 | 69.22 | 93.99 | 93.95 | 93.96 | 93.99 | 93.19 |
| $\overline{MSE}$ | 3202 | 3236 | 8156 | 3244 | 3250 | 3236 | 3244 | 3271 |
| $\overline{MSE_r}$ (%) | 87.50 | 89.68 | 70.58 | 90.60 | 90.49 | 90.50 | 90.60 | 88.40 |

| Algorithm | Ideal | Otsu | Kittler | Lloyd | Sung | Ridler | Huang | Ramesh |
|---|---|---|---|---|---|---|---|---|
| $\overline{FM}$ (%) | 81.75 | 67.11 | 64.57 | 63.65 | 61.68 | 64.88 | 53.05 | 53.67 |
| $\overline{FM_r}$ (%) | 100 | 79.24 | 73.10 | 70.62 | 73.40 | 77.13 | 64.48 | 58.90 |
| $\overline{PSNR}$ | 27.50 | 15.89 | 18.45 | 18.63 | 15.35 | 15.12 | 11.54 | 14.41 |
| $\overline{PSNR_r}$ (%) | 93.99 | 60.21 | 65.84 | 67.67 | 55.40 | 57.91 | 46.76 | 59.54 |
| $\overline{MSE}$ | 3244 | 10,289 | 9528 | 7900 | 11,790 | 10,819 | 16,322 | 12,205 |
| $\overline{MSE_r}$ (%) | 90.60 | 35.86 | 45.88 | 41.01 | 30.67 | 33.73 | 24.22 | 37.68 |

| Algorithm | Li-Lee | Li-Tam | Brink | Kapur | Sahoo | Shanbhag | Yen | Tsai |
|---|---|---|---|---|---|---|---|---|
| $\overline{FM}$ (%) | 66.51 | 68.59 | 62.34 | 60.13 | 59.90 | 50.22 | 58.47 | 60.23 |
| $\overline{FM_r}$ (%) | 78.86 | 79.24 | 73.10 | 70.62 | 73.40 | 77.13 | 64.48 | 58.90 |
| $\overline{PSNR}$ | 16.75 | 16.94 | 14.77 | 13.40 | 13.20 | 11.29 | 12.85 | 12.04 |
| $\overline{PSNR_r}$ (%) | 61.04 | 62.79 | 65.66 | 62.35 | 61.53 | 51.99 | 60.35 | 52.38 |
| $\overline{MSE}$ | 10,332 | 9656 | 6444 | 9383 | 9387 | 10,972 | 9983 | 9709 |
| $\overline{MSE_r}$ (%) | 37.70 | 39.07 | 41.21 | 40.45 | 39.25 | 27.60 | 38.25 | 25.66 |
Table 5. Average processing time for an image in the dataset.

| Feature Set | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Full |
|---|---|---|---|---|---|---|---|---|
| Feature generation runtime (ms) | 2 | 0 * | 0 * | 2 | 2 | 0 * | 2 | 2 |
| ML regression runtime (ms) | 230 | 340 | 415 | 225 | 214 | 246 | 251 | 234 |
| Total image binarization runtime (ms) | 232 | 340 | 415 | 227 | 216 | 248 | 253 | 236 |

Times marked with 0 * are less than 0.1 ms.
Table 6. Result comparison on the H-DIBCO 2014 dataset.

| Method | $\overline{PSNR}$ | $\overline{FM}$ (%) |
|---|---|---|
| Proposed method | 19.26 | 92.8 |
| Elen and Dönmez | 19 | 92.3 |
| Guo and He | 18.54 | 91.2 |
| Guo et al. | 19.17 | 92.3 |
| Mahani et al. | 19.16 | 92.7 |
Table 7. Result comparison with state-of-the-art methods on all datasets ($\overline{FM}$) versus the DIBCO 2019 dataset ($\overline{FM}_{2019}$).

| Method | $\overline{FM}$ (%) | $\overline{FM}_{2019}$ (%) | $\overline{PSNR}$ | $\overline{PSNR}_{2019}$ |
|---|---|---|---|---|
| Proposed method | 83.04 | 77.71 | 15.86 | 16.91 |
| DIBCO winners | 88.59 | 72.88 | 19.16 | 14.48 |
| Otsu | 76.56 | 63.87 | 15.30 | 12.67 |
| Sauvola | 77.66 | 63.82 | 15.79 | 12.66 |
| DeepOtsu | 87.04 | 60.75 | 19.33 | 14.44 |
| SAE | 88.01 | 58.78 | 19.54 | 12.35 |
| DD-GAN | 87.70 | 58.01 | 19.87 | 14.43 |
| MRAM | 85.67 | 49.53 | 19.18 | 12.95 |
| 2DMN | 78.61 | 42.76 | 15.55 | 8.25 |
| cGANs | 88.64 | 62.29 | 19.66 | 12.80 |
| DE-GAN | 88.52 | 56.03 | 19.44 | 8.37 |
| Suh | 89.92 | 70.63 | 19.87 | 14.72 |
| GDB | 90.89 | 73.84 | 20.41 | 14.80 |
| GDBMO | 91.16 | 73.50 | 20.50 | 14.96 |
| D2BFormer | 91.73 | 66.69 | 20.83 | 15.05 |
| DocDiff | - | 73.38 | - | 15.14 |
| DocEnTr | - | 59.00 | - | 13.85 |
| DocBinFormer | 92.22 | 60.31 | 21.30 | 14.49 |