Applied Sciences
  • Editorial
  • Open Access

1 December 2025

Digital Image Processing: Technologies and Applications

Mechanical and Electrical Engineering School, Instituto Politécnico Nacional, Ciudad de Mexico 04450, Mexico
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Digital Image Processing: Technologies and Applications

1. Introduction

Image processing has been an active research topic since the mid-1970s, during which time schemes for noise-reduction filtering, enhancement, compression, and spectral analysis, among others, have been developed [,]. Subsequently, with the advancement of digital technology, the number of digital image processing applications has grown, making it an essential tool in the development of biometric identification systems, schemes that enhance security in transportation, video surveillance, early tumor detection and other medical applications, security schemes, pattern recognition, information protection and authentication, and remote sensing, among others [,,,,].
The above prompted the publication of this Special Issue, which addresses topics related to image filtering in both grayscale and color images as well as hidden information transmission and lossless recovery of the carrier image [,]. In many applications, what matters is the recovery of the hidden information, without regard to the quality of the carrier image []; in others, however, the carrier must also be recovered with the least possible distortion. Regarding information hiding, the insertion of digital watermarks for authentication has been another important aspect [] in applications such as copyright protection. In terms of medical applications, segmentation is a topic of great interest, since it allows the objects of interest to be separated from the background, as are applications related to viral detection that enable the development of treatments for various diseases []. Finally, another aspect that has received considerable attention is the authentication of images whose content is intentionally modified to alter their meaning [,,]. The next section presents a description of the articles included in this Special Issue.

2. Overview of Published Articles

Information hiding mechanisms, which can be divided into watermarking and steganography, have found applications in various fields, such as the storage of relevant information in medical images, the transmission of hidden information embedded in carrier images, and image authentication. In many cases, it is sufficient to recover the hidden information without considering the quality of the carrier image; however, there are applications in which the carrier image must be recovered without distortion. To this end, various reversible data hiding (RDH) methods have been developed that prioritize the exact recovery of the original carrier image, although this limits both the embedding capacity and the flexibility of the system design. One of the articles in this Special Issue presents a partially reversible data hiding (PRDH) scheme that departs from convention by allowing reversibility relative to an estimated cover image instead of the original one. The proposed system leverages a structure that enhances the (7,4) Hamming code to synthesize virtual pixels, enabling efficient coding. It achieved embedding rates of up to 1.5 bpp with PSNR values above 48 dB while avoiding the use of auxiliary data; however, its reliability depends on the availability of paired images, which could be a limitation in real-world situations. The demonstrated resilience to RS-based steganalysis suggests its feasibility in sensitive domains, although further evaluation with a wider variety of image types is required. Further details can be found in the article itself.
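For readers unfamiliar with Hamming-code-based embedding, the following minimal Python sketch shows how (7,4) Hamming syndrome coding can hide three message bits in a block of seven cover bits while flipping at most one of them. It illustrates only the general matrix-embedding idea; the virtual-pixel construction and the partial reversibility of the PRDH scheme described above are not reproduced here.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; columns are the binary
# representations of 1..7, so the syndrome directly indexes the bit to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def embed(cover_bits, msg_bits):
    """Embed 3 message bits into 7 cover bits, changing at most one bit."""
    c = cover_bits.copy()
    syndrome = (H @ c) % 2                       # current syndrome of the cover block
    diff = syndrome ^ msg_bits                   # what must change to encode msg_bits
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]    # column index (1-based); 0 = no change
    if pos != 0:
        c[pos - 1] ^= 1                          # flip the single bit indicated by the syndrome
    return c

def extract(stego_bits):
    """Recover the 3 embedded bits as the syndrome of the stego block."""
    return (H @ stego_bits) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
msg = np.array([1, 0, 1], dtype=np.uint8)
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
print("cover:", cover, "stego:", stego, "recovered:", extract(stego))
```

In practice the cover bits would be the least significant bits of pixel groups, so each block of seven pixels carries three payload bits at the cost of at most one pixel change.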
Entropy-based thresholding is a widely used technique for medical image segmentation. Its basic principle is to determine the optimal threshold by maximizing or minimizing the image entropy, thereby dividing the image into different regions or categories. In many cases, however, the intensity distributions of the objects and the background overlap and contain many outliers, making segmentation extremely difficult. This Special Issue includes a paper that presents a new thresholding method incorporating the Cauchy distribution into a Tsallis entropy-based Bayesian estimation framework. By introducing Bayesian a priori probability estimation to address the overlap between the intensity distributions of the two classes, the estimation of the probability that a pixel belongs to either class is improved. Additionally, the Cauchy distribution is used to model the gray levels of medical brain images during optimal threshold estimation. The optimal threshold is derived by optimizing an information measure formulated using Tsallis entropy. Experimental results demonstrated that the proposed method, called Cauchy-TB, achieved significant superiority compared with existing approaches.
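For context, the sketch below implements the classical single-threshold Tsallis entropy criterion on a gray-level histogram. It omits the Bayesian prior estimation and the Cauchy modeling that distinguish the Cauchy-TB method, and the entropic index q = 0.8 is an arbitrary illustrative choice.

```python
import numpy as np

def tsallis_threshold(image, q=0.8):
    """Pick the gray level that maximizes the Tsallis entropy sum of the
    background and foreground classes (classical formulation, no priors)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        pA, pB = p[:t].sum(), p[t:].sum()
        if pA <= 0 or pB <= 0:
            continue
        # Tsallis entropy of each class, computed on the normalized distributions
        sA = (1.0 - np.sum((p[:t] / pA) ** q)) / (q - 1.0)
        sB = (1.0 - np.sum((p[t:] / pB) ** q)) / (q - 1.0)
        score = sA + sB + (1.0 - q) * sA * sB  # pseudo-additivity rule
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Usage: binarize a grayscale uint8 image `img` at the selected threshold.
# mask = img > tsallis_threshold(img)
```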
Digital watermarking is a technique used to solve or reduce problems associated with the widespread sharing of digital information over the Internet, such as copyright protection, tamper detection, and information authentication. However, in most cases, watermarking negatively affects the quality of the original image. To address this problem, a digital watermarking optimization scheme is presented that combines the dual-tree complex wavelet transform (DTCWT) with a particle swarm optimization (PSO) algorithm, aiming to maintain the highest possible quality of the watermarked image while minimizing any perceptible changes. During the embedding phase, the original image is decomposed using the DTCWT, and the PSO algorithm selects appropriate coefficients. The bits of a binary logo are then embedded into the least significant bits of the selected coefficients, creating the watermarked image. Watermark extraction reverses the embedding process: the input image is first decomposed using the DTCWT, and the same coefficients are then extracted to recover the watermark bits. Experimental results using a dataset commonly used by other researchers demonstrated the functionality of the proposed system through metrics such as the peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC), yielding PSNR and NCC values of 80.50% and 92.51%, respectively.
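To make the embed/extract cycle concrete, here is a minimal Python sketch of coefficient-domain watermarking. It substitutes a single-level real Haar DWT (PyWavelets) for the DTCWT, takes the first coefficients of the approximation band instead of PSO-selected ones, and implements the LSB-style rule as parity quantization with an assumed step DELTA = 8; it illustrates the workflow, not the article's method.

```python
import numpy as np
import pywt  # PyWavelets; a standard real DWT stands in for the DTCWT here

DELTA = 8.0  # quantization step; larger = more robust but more visible

def embed_watermark(img, bits):
    """Embed a bit sequence into the approximation-band coefficients of a
    single-level Haar DWT by forcing each selected coefficient to an even or
    odd multiple of DELTA (parity quantization, an LSB-like rule)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    flat = cA.ravel()                        # view into cA: edits modify cA
    for i, bit in enumerate(bits):           # here: simply the first len(bits) coefficients
        q = np.round(flat[i] / DELTA)
        if int(q) % 2 != bit:                # adjust parity so it carries the bit
            q += 1
        flat[i] = q * DELTA
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def extract_watermark(marked, n_bits):
    """Re-run the transform and read the parity of the same coefficients."""
    cA, _ = pywt.dwt2(marked.astype(float), 'haar')
    return [int(np.round(c / DELTA)) % 2 for c in cA.ravel()[:n_bits]]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(img, bits)
assert extract_watermark(marked, len(bits)) == bits
```

In the article, the choice of which coefficients to modify is what PSO optimizes, trading watermark robustness against imperceptibility; the fixed selection above is only a placeholder.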
The chikungunya virus, a member of the Alphavirus genus, continues to pose a global health challenge due to its widespread presence and the lack of specific antiviral therapies. Accurate detection of viral infections, such as that caused by the chikungunya virus, is crucial for antiviral research; however, traditional methods are time-consuming and error-prone. This Special Issue presents the development and validation of an image-processing-based algorithm that improves the accuracy and speed of virus detection, contributing to the search for potential anti-chikungunya compounds. The proposed system used MvTec Halcon software (version 22.11) to develop an algorithm for detecting and classifying infected and uninfected cells in viral assays. Its performance was validated against manual counts performed by three virology experts, showing promising results with Pearson coefficients of 0.9807 for cell detection and 0.9886 for virus detection, which indicates that the algorithm's accuracy closely matched the experts' manual assessments. Following statistical validation, the algorithm was applied to screen antiviral compounds, demonstrating its effectiveness in improving the throughput and accuracy of drug discovery workflows. This technology represents a scalable and efficient tool that contributes to accelerating drug development and improving the diagnosis of vector-borne and emerging viral diseases.
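The validation step can be reproduced in a few lines of Python; the counts below are hypothetical placeholders, not the study's data, and only illustrate how a Pearson coefficient close to 1 quantifies agreement between automated and manual counting.

```python
import numpy as np

# Hypothetical cell counts per well: algorithm output vs. an expert's manual
# count (illustrative numbers only).
algorithm = np.array([112, 98, 140, 87, 123, 105])
manual    = np.array([110, 101, 138, 90, 120, 108])

# Pearson correlation coefficient between the two count series; values near
# 1.0 (the article reports 0.9807 and 0.9886) indicate close agreement.
r = np.corrcoef(algorithm, manual)[0, 1]
print(f"Pearson r = {r:.4f}")
```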
Another proposal presented in this Special Issue consists of a new edge-oriented sliding-window filter that computes output pixels using a cross-correlation-like equation derived from fractional calculus (FC), referred to as the fractional cross-correlation filter (FCCF). The filter's performance was evaluated exclusively using edge-preserving metrics, namely the gradient conduction mean square error (GCMSE), edge-based structural similarity (EBSSIM), and multi-scale structural similarity (MS-SSIM), through statistical techniques applied to the Berkeley segmentation dataset as a test corpus. The experimental data showed that, based on the considered metrics, the proposed approach outperformed conventional edge filters. These results underscore the potential of the FCCF to contribute to the advancement of image processing techniques in practical applications such as medical imaging, image enhancement, and computer vision, which rely heavily on edge detection filters.
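As a rough illustration of the underlying idea, the sketch below builds small horizontal and vertical cross-correlation kernels from Grünwald-Letnikov fractional-derivative coefficients and combines their responses into an edge-strength map. The order alpha, the three-tap kernel length, and the magnitude combination are assumptions made for illustration and do not reproduce the FCCF formulation evaluated in the article.

```python
import numpy as np
from scipy.signal import correlate2d

def gl_coefficients(alpha, n):
    """First n Grünwald-Letnikov coefficients of a derivative of order alpha:
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return np.array(w)

def fractional_edge_response(img, alpha=0.5, n_taps=3):
    """Cross-correlate the image with small horizontal and vertical kernels
    built from GL coefficients and combine them into an edge-strength map."""
    w = gl_coefficients(alpha, n_taps)[::-1]      # oldest sample first
    kx = w.reshape(1, -1)                         # horizontal kernel
    ky = w.reshape(-1, 1)                         # vertical kernel
    gx = correlate2d(img, kx, mode='same', boundary='symm')
    gy = correlate2d(img, ky, mode='same', boundary='symm')
    return np.hypot(gx, gy)                       # gradient-like magnitude

# Usage: edges = fractional_edge_response(gray_image.astype(float), alpha=0.6)
```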
Grayscale image processing is of utmost importance in computer vision and image analysis, where image quality and visualization can be severely affected by high-density impulse noise. A traditional median filter is typically used to reduce this noise, but under high noise levels it distorts details and performs poorly, exhibiting weak robustness. To reduce the effects of high-density impulse noise on image quality, a median filter based on an enhanced two-dimensional Shannon maximum entropy criterion (TSETMF) is proposed for adaptive threshold selection, improving filter performance while stably and effectively retaining image details. The improved TSETMF algorithm filters the noise by automatically partitioning the window size, with the threshold value adaptively calculated using the two-dimensional maximum Shannon entropy. The theoretical model was verified and analyzed through comparative experiments on three classical grayscale images. The experimental results demonstrate that the proposed algorithm exhibits better processing performance than the traditional filter, with superior high-density noise suppression and denoising stability, achieving a peak signal-to-noise ratio (PSNR) greater than 24.97 dB at a noise density of 95% on the classical Lena grayscale image.
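For readers who want a baseline to compare against, the following is a minimal Python sketch of the classical adaptive median filter, in which the window grows until its median is not an impulse. It is a simplified stand-in and does not implement the two-dimensional Shannon entropy threshold selection of the TSETMF.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Classical adaptive median filter for salt-and-pepper noise: the window
    grows until the local median is not an impulse, and the center pixel is
    replaced only when it is itself an extreme value."""
    out = img.copy()
    h, w = img.shape
    pad = max_window // 2
    padded = np.pad(img, pad, mode='reflect')
    for y in range(h):
        for x in range(w):
            for half in range(1, pad + 1):
                win = padded[y + pad - half:y + pad + half + 1,
                             x + pad - half:x + pad + half + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                 # median is not an impulse
                    if not (lo < img[y, x] < hi): # center pixel looks noisy
                        out[y, x] = med
                    break
            else:
                out[y, x] = med                   # window exhausted: use median
    return out

# Usage: denoised = adaptive_median_filter(noisy_gray.astype(np.uint8))
```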
Image filtering is a very important task in image processing. To this end, this Special Issue presents a new filter for removing impulse noise from digital color images. The proposed filter uses a detection stage to correct only noisy pixels, performing their detection with a binary classification model generated by genetic programming. The classifier training scheme considers three impulse noise models for color images: impulsive, uniform, and correlated. The correction stage consists of a vector median filter that modifies the color channel values if any of them are noisy. The proposed filter was compared with other state-of-the-art filters for noise removal in color images using the PSNR, MAE, SSIM, and FSIM metrics. The experimental results showed substantial variation in the performance of the filters depending on the noise model under analysis and the image characteristics. They also indicate that, on average, the proposed filter exhibited highly efficient performance for all three impulse noise models, surpassed only by a deep learning-based filter. However, unlike the deep learning-based filters, the proposed filter exhibited high interpretability, with a performance close to the break-even point for all images and noise models used in the experiment.
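The correction stage can be illustrated with a short Python sketch of a vector median filter applied only at detector-flagged positions. The noisy-pixel mask is taken as an input here, whereas in the article it is produced by the genetic-programming classifier, and the window radius is an assumed parameter.

```python
import numpy as np

def vector_median(window_pixels):
    """Return the pixel (RGB vector) that minimizes the sum of Euclidean
    distances to all other pixels in the window."""
    diffs = window_pixels[:, None, :] - window_pixels[None, :, :]
    dist_sums = np.sqrt((diffs ** 2).sum(-1)).sum(1)
    return window_pixels[np.argmin(dist_sums)]

def vmf_correct(img, noisy_mask, radius=1):
    """Replace only the pixels flagged as noisy (e.g., by a learned detector,
    as in the article) with the vector median of their neighborhood."""
    out = img.copy()
    padded = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode='reflect')
    for y, x in zip(*np.nonzero(noisy_mask)):
        win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].reshape(-1, 3)
        out[y, x] = vector_median(win.astype(float))
    return out

# Usage: corrected = vmf_correct(noisy_rgb, detector_mask)  # mask from any noise detector
```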
Image forgery using the copy-move scheme is a simple and common manipulation technique. To mitigate the high time complexity of most copy-move forgery detection algorithms and the difficulty of detecting forgeries in smooth regions, a paper in this Special Issue proposes a copy-move forgery detection algorithm based on fused features and density clustering. The proposed algorithm combines two detection methods, speeded-up robust features (SURF) and accelerated KAZE (A-KAZE), to extract descriptive features by setting a low contrast threshold. Subsequently, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is employed to eliminate mismatched pairs and reduce false positives. The algorithm then compares the original image with its affine-transformed version at the same positions to locate the counterfeit region, thereby improving forgery localization. Experimental results showed that the method effectively improved detection accuracy in smooth regions and reduced computational complexity, while demonstrating high robustness to post-processing operations such as rotation, scaling, and noise addition.
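A rough sense of such a pipeline can be obtained from the sketch below, which matches an image against itself with A-KAZE features (OpenCV) and uses DBSCAN (scikit-learn) on the match displacement vectors to discard inconsistent pairs. The thresholds are illustrative assumptions, and the SURF fusion, low-contrast tuning, and affine-based localization of the article's method are not reproduced.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def copy_move_candidates(gray, ratio=0.75, eps=10.0, min_samples=4):
    """Match an image against itself with A-KAZE features and keep only match
    pairs whose displacement vectors form dense clusters (DBSCAN)."""
    akaze = cv2.AKAZE_create()
    kps, desc = akaze.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(desc, desc, k=3)   # k=3: skip the trivial self-match

    shifts, pairs = [], []
    for m in matches:
        if len(m) < 3:
            continue
        _self, best, second = m                   # m[0] is the keypoint matched to itself
        if best.distance < ratio * second.distance:
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > 20:      # ignore near-identical positions
                shifts.append(p2 - p1)
                pairs.append((p1, p2))
    if not shifts:
        return []

    # Cluster displacement vectors: genuinely copied regions share a consistent shift.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(shifts))
    return [pair for pair, lab in zip(pairs, labels) if lab != -1]

# Usage: suspects = copy_move_candidates(cv2.imread('image.png', cv2.IMREAD_GRAYSCALE))
```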

3. Conclusions

This Special Issue presents a series of articles related to the technological development and applications of image processing for solving existing problems in different fields. These applications involve the treatment, analysis, verification, and modification of images, ranging from schemes aimed at improving image quality through noise-reduction filtering and enhancement, to image segmentation that separates objects of interest from their background, as well as schemes that improve people's quality of life by enabling the development of new antiviral drugs and facilitating the early detection of tumors. Likewise, schemes are presented that will contribute to advances in security systems through reversible recovery of the cover image used in information hiding, watermarking, and image authentication. All of the contributions presented will surely influence the advancement of image processing technology and its use in solving problems that still exist in various fields of science and technology.

Author Contributions

H.P.-M.: Writing—original draft preparation; M.N.-M.: Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gonzalez, R.; Woods, R. Digital Image Processing; Pearson: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  2. Dougherty, E. Digital Image Processing Methods; Dekker: New York, NY, USA, 1994. [Google Scholar]
  3. Sencar, H.T.; Memon, N. Digital Image Forensics; Springer: New York, NY, USA, 2013. [Google Scholar]
  4. Jähne, B. Digital Image Processing and Image Acquisition; Springer: Cham, Switzerland, 2025. [Google Scholar]
  5. Pimenov, D.Y.; da Silva, L.R.R.; Ercetin, A.; Der, O.; Mikolajczyk, T.; Giasin, K. State-of-the-art review of applications of image processing techniques for tool condition monitoring on conventional machining processes. Int. J. Adv. Manuf. Technol. 2024, 130, 57–85. [Google Scholar] [CrossRef]
  6. Myslicka, R.M.; Kawala-Sterniuk, A.; Bryniarska, A.; Sudol, A.; Podpora, M.; Gasz, R.; Martinek, R.; Vilimkova, R.; Vilimek, D.; Pelc, M.; et al. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch. Dermatol. Res. 2024, 316, 99. [Google Scholar] [CrossRef] [PubMed]
  7. Aggarwal, A.; Bharadwaj, S.; Corredor, G.; Pathak, T.; Badve, S.; Madabhushi, A. Artificial intelligence in digital pathology—time for a reality check. Nat. Rev. Clin. Oncol. 2025, 22, 283–291. [Google Scholar] [CrossRef] [PubMed]
  8. Kirthiga, R.; Elavenil, S. A survey on crack detection in concrete surface using image processing and machine learning. J. Build. Pathol. Rehabil. 2024, 9, 15. [Google Scholar] [CrossRef]
  9. Kadian, P.; Arora, S.M.; Arora, N. Robust Digital Watermarking Techniques for Copyright Protection of Digital Data: A Survey. Wirel. Pers. Commun. 2021, 118, 3225–3249. [Google Scholar] [CrossRef]
  10. Shah, O.I.; Rizvi, D.R.; Mir, A.N. Transformer-Based Innovations in Medical Image Segmentation: A Mini Review. SN Comput. Sci. 2025, 6, 375. [Google Scholar] [CrossRef]
  11. Sandhya; Kashyap, A. A comprehensive analysis of digital video forensics techniques and challenges. Iran J. Comput. Sci. 2024, 7, 359–380. [Google Scholar] [CrossRef]
  12. Chaitra, B.; Reddy, P.V.B. Digital image forgery: Taxonomy, techniques, and tools–a comprehensive study. Int. J. Syst. Assur. Eng. Manag. 2023, 14 (Suppl. S1), 18–33. [Google Scholar] [CrossRef]
  13. Kaur, N.; Jindal, N.; Singh, K. Passive Image Forgery Detection Techniques: A Review, Challenges, and Future Directions. Wirel. Pers. Commun. 2024, 134, 1491–1529. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
