Technical Note

A Corner Detection Method for Noisy Checkerboard Images

School of Automation, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3180; https://doi.org/10.3390/s25103180
Submission received: 7 April 2025 / Revised: 15 May 2025 / Accepted: 16 May 2025 / Published: 18 May 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

This article proposes a novel approach for corner detection in noisy checkerboard images, comprising several methodical steps: (1) an initial extraction of corners utilizing the cross features present in the edge image of the checkerboard; (2) the elimination of erroneous corners through an analysis of the periodic consistency among the detected corners; (3) the identification of the outermost corners and the subsequent generation of a rectangular bounding box based on the total number of input checkerboard corners; (4) the reconstruction of missing corners, which may have been obscured by noise, by leveraging the periodic characteristics of the corners. Experimental findings indicate that this methodology is capable of effectively detecting all corners of the checkerboard across varying levels of noise, thereby significantly enhancing the success rate of corner detection in noisy images. This makes the proposed method particularly advantageous for camera calibration in special scenarios where noise or contamination in checkerboard images is unavoidable.

1. Introduction

Camera calibration represents a fundamental process within the domains of computer vision and image measurement [1]. Its principal objective is to ascertain the intrinsic and extrinsic parameters of the camera, thereby facilitating precise image measurement and three-dimensional reconstruction [2,3]. In the context of camera calibration, checkerboard images have emerged as the predominant calibration pattern, attributed to their regular geometric configuration and the ease with which feature points can be identified [4]. In certain specialized contexts, checkerboard images are susceptible to noise contamination. In applications such as bio-inspired robotic obstacle avoidance and navigation, optical compound eye systems are frequently employed due to their advantages, including a wide field of view and high redundancy [5]. However, the traditional camera calibration method proposed by Zhang necessitates that each ommatidium observe a complete checkerboard pattern [6]. This requirement is often impractical in real-world scenarios due to the structural limitations of compound eyes, where many ommatidia may only capture fragments of the checkerboard or regions affected by noise. Area-array CMOS sensors, which are commonly used in UAV aerial photography for their high frame rates and cost efficiency, inherently introduce imaging noise that significantly impairs the accuracy of feature extraction from calibration patterns [7]. Conventional calibration techniques typically assume that sensor noise is negligible or uniformly distributed. In practice, however, the nonlinear interactions among photon shot noise, dark current noise, and readout circuit noise within imaging systems result in unpredictable grayscale distortions and artifacts in calibration images. High Dynamic Range (HDR) imaging enhances dynamic range by merging multiple exposure images to retain details in both highlights and shadows [8]. Nonetheless, sensor noise, which varies nonlinearly with illumination intensity, can introduce artifacts or distort local contrast in the synthesized images. Neuromorphic vision sensors offer advantages such as high dynamic range and low latency for dynamic scene perception. However, vision-based pulsed cameras are susceptible to generating spatiotemporally correlated noise under interference, leading to traditional corner detection algorithms exhibiting pulse accumulation artifacts or spatiotemporal mismatches [9]. These challenges significantly undermine the accuracy of three-dimensional reconstruction and the reliability of calibration in neuromorphic vision systems. Due to the inherent noise characteristics and motion-induced variations in event data, current feature extraction algorithms based on event cameras remain inadequately developed, resulting in suboptimal performance in feature-oriented methodologies [10]. In contemporary applications such as remote sensing, cartography, and industrial precision metrology, high-resolution cameras have become prevalent due to their high pixel-density sensors. However, their limited single-frame coverage often fails to satisfy large-scale observational needs. By utilizing image stitching techniques to amalgamate multiple images captured by a single camera, a comprehensive panoramic view can be synthesized, facilitating detailed topographic analysis and high-precision dimensional measurements [11,12,13,14]. 
Modern deep learning-based feature extraction networks utilize end-to-end training mechanisms to adaptively capture multi-scale spatial features [15], demonstrating remarkable robustness in matching under conditions of non-uniform illumination or partial occlusion, thereby significantly improving mosaic quality in complex scenarios. However, when processing checkerboard images characterized by strong periodic structures, the repetitive grid patterns can lead to feature space aliasing, resulting in biases in the estimation of registration parameters, while transposed convolution operations during network decoding can introduce high-frequency artifacts. Coupled with the models’ heavy dependence on annotated data, these limitations restrict algorithmic performance when addressing multi-scale checkerboard patterns. To mitigate these challenges, the intentional injection of structured noise into checkerboard patterns, as depicted in Figure 1, effectively disrupts structural periodicity and enhances spatial feature diversity, thereby creating essential conditions for robust image stitching.
In the context of checkerboard images characterized by substantial noise, the visibility of corner features may be diminished, resulting in potential misidentification or omission of corner locations. This phenomenon can negatively impact the precision of calibration outcomes. Consequently, it is imperative to devise corner detection methodologies tailored specifically for images with high levels of noise.
In recent years, a multitude of algorithms for the detection of checkerboard corners have been developed, including prominent techniques such as Harris and SUSAN [16,17,18,19]. The subpixel-accurate corner detection process utilized in Zhang’s classical calibration method has emerged as the standard in industrial calibration, owing to its high precision and ease of use [20]. This methodology is currently incorporated into the OpenCV library, allowing users to achieve corner detection results and calibration parameters through the straightforward adjustment of relevant thresholds. Ma and Lu [21] proposed a corner detection approach that relies on the accumulation of distances from chords to points, which demonstrates insensitivity to local variations and noise in curves; however, it is prone to detecting false corners in the background. Geiger et al. [22] employed template matching for corner detection, subsequently refining corner positions based on the grayscale gradient, a technique capable of accommodating images with substantial distortion. Yang et al. [23] applied the Hough transform alongside the distribution characteristics of checkerboard lines to identify valid lines, construct checkerboard corners, and initially locate corners. They then utilized a circular template to search for associated points, minimizing discrepancies between these points to achieve precise localization. Wu et al. [24] initially extracted corners based on grayscale distribution characteristics, merging neighboring points at the corners to derive the final coordinates of the checkerboard corners. Nonetheless, this method imposes stringent requirements on image grayscale features, potentially leading to missed detections when corner features are not distinctly visible. Chen et al. [25] employed filters to extract subpixel coordinates of edges, conducted polynomial fitting, and identified edge intersections as corners. Dan et al. [26] utilized the EDLines algorithm for line detection, filtering out background lines through feature filtering, fitting, and optimizing corners based on grayscale gradient sorting, ultimately organizing the corners by transforming the coordinate system. Donné et al. [27] demonstrated that deep learning networks trained on extensive datasets of checkerboard instances achieve detection accuracy comparable to existing algorithms while reducing runtime and maintaining high robustness in the presence of significant noise and lens distortion. Du et al. [28] proposed a method for extracting vertices of adjacent dark squares through binarization and morphological dilation, constructing reference point arrays by calculating geometric midpoints. They subsequently obtained subpixel contour points within the vicinity of reference points using Zernike moments, classified contour points based on row and column orientations, and performed bidirectional line fitting to ascertain subpixel corner coordinates, thereby significantly enhancing illumination robustness and localization accuracy. Wang et al. [29] addressed inaccuracies in subpixel-level checkerboard corner detection by introducing a method that integrates local intensity responses with Gaussian–Laplacian responses, effectively improving detection accuracy in noisy environments and under affine transformations.
While the previously mentioned methods for checkerboard corner detection are capable of identifying relevant corner coordinates, they are still prone to both missed and false detections, which complicates the accurate identification of all inner corners of the checkerboard. Consequently, this study introduces a corner detection technique tailored for noisy checkerboard images. The proposed method initially filters the accurate corners by analyzing the local edge features of individual corners alongside the periodic consistency of multiple corner arrays. Following this filtering process, the outermost corners are determined to create a rectangular bounding box. Finally, the method adaptively completes any missing corners by utilizing the inherent periodic characteristics of the checkerboard pattern. This approach effectively mitigates the challenges associated with missed and false detections in corner detection for noisy checkerboard images, thereby significantly enhancing the overall success rate of corner detection.

2. Checkerboard Corner Detection

In this section, the analytical process commences with fully stitched images exhibiting substantial noise contamination. The presence of substantial white noise severely compromises the integrity of image features. To mitigate this issue, the present study proposes a systematic corner detection methodology, which is elaborated as follows: (1) Image preprocessing: implement established image processing techniques to diminish the influence of noise on features and to enhance the visibility of the checkerboard pattern. (2) Image corner extraction: identify potential corner locations and compile an initial set of corners. (3) Checkerboard corner filtering: refine the detected corners to retain only those that align with the characteristics of the checkerboard pattern. (4) Standard checkerboard corner grid generation: produce standard checkerboard corners to serve as a reference for subsequent processes. (5) Checkerboard corner merging and completion: address the missing corners by utilizing the generated checkerboard grid and amalgamate closely situated corners to ensure the integrity and precision of the final output. Through the implementation of these steps, the objective is to enhance corner detection efficacy in noisy conditions and to ensure the algorithm’s robustness and adaptability.

2.1. Image Preprocessing

In the analysis of the stitched color image, it was observed that following the processes of grayscale conversion and binarization, the corner features were not distinctly visible, and a considerable amount of white noise persisted. Basic filtering techniques, such as Gaussian filtering [30], proved inadequate in effectively mitigating this noise, as illustrated in Figure 2. To enhance the visibility of corner features while minimizing noise interference, morphological operations, specifically erosion and dilation, were employed [31,32]. The erosion operation is particularly effective at eliminating small noise artifacts, especially those smaller than the defined structural element, thereby improving image clarity; however, it may also lead to a reduction in the size of objects or a loss of finer details. Conversely, the dilation operation serves to recover some of the details that may have been diminished by erosion, while simultaneously augmenting the representation of objects within the image. The application of these two morphological techniques facilitates the enhancement of image clarity alongside noise reduction. In this study, the morphological operations utilized a rectangular kernel with isotropic characteristics, which is particularly advantageous for processing regular checkerboard patterns. Experimental evaluations indicated that larger kernel sizes are more effective in suppressing high-amplitude noise, with a 19 × 19 kernel size being empirically determined to provide optimal results. Following this, the Canny edge detection algorithm was employed to extract the edge features of the image. The Canny edge detection algorithm is noted for its high stability and accuracy across varying lighting conditions, noise levels, and interference, demonstrating considerable robustness [33]. The experimental findings suggest that setting the edge detection thresholds to 0.2 and 0.7 yields more precise and comprehensive edge detection outcomes.
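A minimal sketch of this preprocessing chain, assuming a Python/OpenCV implementation (the Otsu binarization step and the mapping of the normalized Canny thresholds 0.2 and 0.7 onto the 8-bit intensity range are our assumptions rather than details given above), is shown below:

```python
import cv2
import numpy as np

def preprocess(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # binarize (Otsu thresholding assumed; the text only specifies binarization)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 19 x 19 rectangular (isotropic) structuring element, as determined empirically
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (19, 19))
    eroded = cv2.erode(binary, kernel)    # removes noise smaller than the kernel
    dilated = cv2.dilate(eroded, kernel)  # restores object size lost to erosion

    # Canny thresholds 0.2 and 0.7 are given as normalized values; here they are
    # scaled to the 0-255 range expected by OpenCV (an assumption)
    edges = cv2.Canny(dilated, int(0.2 * 255), int(0.7 * 255))
    return edges
```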

2.2. Image Corner Extraction

In the context of corner detection within edge-processed images, the initial identification of corner locations can be based on the characteristics of chessboard corners, which are defined by the intersection of two orthogonal lines [34,35]. To facilitate this identification, a circular sampling approach is utilized. Specifically, a circle with a radius r is centered at the origin [36], and sampling points are determined at fixed angular intervals of 0.01 radians across each of the four quadrants to derive their coordinates ( x , y ) . Subsequently, these sampling points from the four quadrants are integrated into a cohesive circular sampling set, as illustrated in Figure 3. The equations for computing the sampling points in the four quadrants are presented as follows:
x_1 = r\sin\theta, \quad y_1 = r\cos\theta, \quad \theta = 0, 0.01, 0.02, \ldots, \frac{\pi}{2}
x_2 = r\sin\left(\theta + \frac{\pi}{2}\right), \quad y_2 = r\cos\left(\theta + \frac{\pi}{2}\right), \quad \theta = 0, 0.01, 0.02, \ldots, \frac{\pi}{2}
x_3 = r\sin(\theta + \pi), \quad y_3 = r\cos(\theta + \pi), \quad \theta = 0, 0.01, 0.02, \ldots, \frac{\pi}{2}
x_4 = r\sin\left(\theta + \frac{3\pi}{2}\right), \quad y_4 = r\cos\left(\theta + \frac{3\pi}{2}\right), \quad \theta = 0, 0.01, 0.02, \ldots, \frac{\pi}{2}
In the equations, the coordinates of the sampling points are denoted as ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x 3 , y 3 ) , and ( x 4 , y 4 ) , which correspond to the four quadrants. It is essential to configure the sampling radius r to be smaller than the pixel width of the individual checkerboard grids present in the acquired image to maintain spatial specificity. However, if r is set too small, it may lead to inadequate sampling density, whereas excessively large values of r could result in erroneous corner detection due to boundary overreach. This parameter necessitates adaptive modification based on the characteristics of the image, such as grid-scale and noise distribution. Through systematic evaluation, it was empirically established that radii of 20 and 25 pixels provide an optimal balance between sufficient sampling and the minimization of artifacts within our experimental framework.
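The four quadrant sampling equations above can be realized in a short NumPy sketch; rounding the offsets to integer pixel positions is an implementation assumption:

```python
import numpy as np

def circular_samples(r):
    theta = np.arange(0.0, np.pi / 2, 0.01)      # 0.01 rad angular step per quadrant
    quadrants = []
    for offset in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        x = np.round(r * np.sin(theta + offset)).astype(int)
        y = np.round(r * np.cos(theta + offset)).astype(int)
        quadrants.append(np.stack([x, y], axis=1))
    return quadrants                              # four (K, 2) arrays of pixel offsets
```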
To analyze the edge-processed image, one must examine each pixel point P by aggregating the coordinates of sampling points from the four quadrants, thereby generating a new set of points. This set is then utilized to ascertain the presence of non-zero pixel values within each quadrant. If non-zero pixel values are identified in all four quadrants, pixel point P may be provisionally classified as a corner point, with the corresponding radius r and angle θ documented. To enhance the precision of the detection process, the characteristic of the chessboard lines is employed, which indicates that points with non-zero pixel values are approximately aligned along a straight line. By varying the radius r, the image is traversed once more for evaluation. For the various ( r , θ ) pairs associated with pixel point P, if at least two distinct radii r are observed, while the angles θ differ by less than 5 degrees, then pixel point P is classified as a corner point, as illustrated in Figure 4a.
In the context of corner detection algorithms, when the physical dimensions of a checkerboard are smaller than the pixel resolution, the edges projected onto the image plane experience sub-pixel shifts due to pixel discretization during rotation. For example, an ideally straight line may be represented as a staircase-like edge that spans multiple pixels, resulting in angular fitting inaccuracies. These inaccuracies become increasingly significant as pixel size increases, particularly when edges occupy a limited number of pixels, leading to angular errors ( Δ θ ) that can reach several degrees. Additionally, when the physical dimensions of the checkerboard corners approach or fall below the pixel scale, edge-averaging effects can produce non-zero pixel intensities across all four quadrants, which may result in erroneous corner detection. To mitigate these issues, the algorithm implements a multi-radius r iterative detection approach, coupled with angular consistency verification ( Δ θ < 5 ), to accurately identify genuine corners. By increasing the radius r, the algorithm effectively spans a greater number of pixels with edges, thereby diminishing angular sensitivity during individual detection steps. Furthermore, the multi-scale angular consistency check helps to eliminate false corners that may arise from pixel aliasing or noise. This methodology effectively transforms the inherent limitations of pixel resolution into constraints that enhance algorithmic robustness, thereby ensuring the reliable extraction of geometric features in rotated grids, even under conditions of limited resolution.
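The candidate test and the multi-radius angular consistency check described above might be implemented roughly as follows (assumptions: the edge image is an array indexed as edges[y, x], and each quadrant's hit angle is summarized by its mean; the radii of 20 and 25 pixels and the 5-degree tolerance follow the text):

```python
import numpy as np

def is_candidate_corner(edges, x0, y0, radii=(20, 25), ang_tol=np.deg2rad(5)):
    h, w = edges.shape
    theta = np.arange(0.0, np.pi / 2, 0.01)
    hits_per_radius = []
    for r in radii:
        quadrant_angles = []
        for offset in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
            xs = np.round(x0 + r * np.sin(theta + offset)).astype(int)
            ys = np.round(y0 + r * np.cos(theta + offset)).astype(int)
            inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            hit = inside & (edges[ys.clip(0, h - 1), xs.clip(0, w - 1)] > 0)
            if not hit.any():            # no edge response in this quadrant
                quadrant_angles = None
                break
            quadrant_angles.append(theta[hit].mean())
        if quadrant_angles is not None:
            hits_per_radius.append(np.array(quadrant_angles))
    if len(hits_per_radius) < 2:         # require hits at two or more radii
        return False
    ref = hits_per_radius[0]
    # angular consistency across radii (difference below ~5 degrees per quadrant)
    return all(np.all(np.abs(a - ref) < ang_tol) for a in hits_per_radius[1:])
```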
During the detection process, scenarios akin to those depicted in Figure 4b may arise, wherein precise coordinates for corner points cannot be determined. To mitigate this challenge, the present study introduces a methodology for calculating the centroid, aimed at enhancing localization accuracy and identifying a distinct corner point. A zero matrix H , congruent in size to the edge image, is constructed. For each corner point within the derived corner point set, the values of the 5 × 5 neighborhood matrix surrounding the point in H are designated as 1. These neighborhoods are subsequently labeled and identified as distinct connected components, where the red bounding box in Figure 4c represents one such connected region. The centroid C of each connected region is computed, and this centroid C is regarded as the accurate corner point. The newly identified corner point P c is documented accordingly. The underlying implementation principle is illustrated in Figure 4c. The formula utilized for calculating the centroid is as follows:
C = \left( \frac{\sum_i x_i}{n}, \ \frac{\sum_i y_i}{n} \right)
In this context, ( x i , y i ) denotes the pixel coordinates within the specified region, while n indicates the total number of pixels present in that region. The centroid C is regarded as the precise corner point, and the updated corner point P c is subsequently documented. Nevertheless, despite the acquisition of more precise corner coordinates from the edge image, the presence of erroneous corners may persist, as illustrated in Figure 4d. Additional processing is necessary to remove these inaccuracies.
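A compact sketch of this centroid-based merging, assuming OpenCV's connected-component analysis over the auxiliary matrix H and integer pixel coordinates for the provisional corners:

```python
import cv2
import numpy as np

def merge_corners_by_centroid(corners, image_shape):
    H = np.zeros(image_shape, dtype=np.uint8)
    for (x, y) in corners:
        x, y = int(round(x)), int(round(y))
        # mark the 5x5 neighbourhood of each provisional corner in H
        H[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = 1
    # connected regions of H correspond to clusters of provisional corners;
    # their centroids are taken as the refined corners P_c
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(H)
    return [(float(cx), float(cy)) for cx, cy in centroids[1:]]  # label 0 = background
```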
The foundation of the proposed algorithm is its checkerboard-optimized corner detection mechanism, which has been specifically designed to enhance the extraction of checkerboard corners while simultaneously creating an extensive pool of candidates for further evaluation. This design inherently accommodates the presence of two types of artifacts: (1) false positives resulting from texture noise, and (2) non-target corners that display geometric configurations not characteristic of checkerboards. Importantly, although this detection stage exhibits a universal capability for identifying cross-like topological features, it demonstrates limited effectiveness in detecting T-junction or Y-junction corner types due to their unique gradient distribution patterns.

2.3. Checkerboard Corner Filtering

In order to identify the chessboard corners from the acquired set of corners, this study utilizes the periodic characteristics inherent to the chessboard configuration. Notably, with the exception of the outermost ring of corners, each internal corner is encircled by four adjacent corners. The vectors that connect a central corner to its four neighboring corners exhibit either perpendicular or parallel orientations. This distinctive feature can be effectively employed to precisely isolate the chessboard corners.
For the designated corner point P c ( x c , y c ) , it is necessary to compute the Euclidean distance d i between P c and each respective corner point P i ( x i , y i ) . Concurrently, one should determine the vector P i P c that is established between the current corner point and each of the other corner points. The Euclidean distance d i can be derived using the subsequent formula:
d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}
The vector P i P c is calculated as:
\overrightarrow{P_i P_c} = (x_i - x_c, \ y_i - y_c)
Utilizing the acquired set of distances, employ the bubble sort algorithm to arrange the distances d i along with their associated vectors P i P c . Subsequently, determine the four nearest corner points to P c along with their respective distances, which will be denoted as d 1 , d 2 , d 3 , d 4 , and the corresponding vectors will be represented as p 1 , p 2 , p 3 , p 4 .
Compute the standard deviation σ of the four distances in order to evaluate the proximity of the four corner points, as indicated in Formula (8).
\sigma = \sqrt{\frac{1}{4} \sum_{i=1}^{4} (d_i - \bar{d})^2}
where d ¯ is the mean of the four distances:
\bar{d} = \frac{1}{4} \sum_{i=1}^{4} d_i
When the standard deviation σ is less than 10, the four corners are classified as proximal corners, and the current corner P c is documented as a precise inner checkerboard corner. In contrast, if the four corners demonstrate considerable spatial separation, suggesting the lack of four adjacent checkerboard corners surrounding P c , then P c is categorized as a non-inner checkerboard corner. The threshold for this classification is established based on the pixel spacing between the checkerboard corners in the image. Given a constant camera focal length, this threshold necessitates only a single initialization for images obtained from a standardized calibration checkerboard, thus ensuring a high degree of adaptability without the need for further modifications.
Designate the vector p 1 as the reference vector and compute the cosine values cos θ i for the angles between p 1 and each of the vectors p 2 , p 3 , p 4 . Assess whether these vectors are approximately parallel or perpendicular to p 1 . In the event that two vectors are found to be perpendicular to p 1 , labeled as v 1 and v 2 , alongside one vector that is parallel to p 1 , referred to as h 1 , the corner point P c will be classified as a chessboard corner, as illustrated in Figure 5.
In order to facilitate the subsequent completion of the chessboard corners, the vectors v 1 and v 2 are subtracted, and their average is computed to derive the vector V 1 . In a similar manner, the vector h 1 in conjunction with p 1 is utilized to obtain the vector V 2 . It is essential to document the vectors V 1 and V 2 associated with each chessboard corner, as these vectors will play a critical role in the subsequent processes involved in completing the chessboard corners.
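The neighbor-spacing and orthogonality filter of this subsection can be sketched as follows; the thresholds (σ < 10 pixels, 5-degree angular tolerance) follow the text, while the cosine-based parallel/perpendicular tests and the handling of ties are our assumptions:

```python
import numpy as np

def filter_checkerboard_corners(corners, sigma_thresh=10.0, ang_tol_deg=5.0):
    pts = np.asarray(corners, dtype=float)
    kept = []
    para_thresh = np.cos(np.deg2rad(ang_tol_deg))   # |cos| above this -> parallel
    perp_thresh = np.sin(np.deg2rad(ang_tol_deg))   # |cos| below this -> perpendicular
    for pc in pts:
        vecs = pts - pc
        dists = np.linalg.norm(vecs, axis=1)
        order = np.argsort(dists)[1:5]               # four nearest neighbours, skipping pc itself
        if order.size < 4:
            continue
        d4, p4 = dists[order], vecs[order]
        if np.std(d4) >= sigma_thresh:               # neighbours are not evenly spaced
            continue
        ref = p4[0]
        others = p4[1:]
        cosines = others @ ref / (np.linalg.norm(others, axis=1) * np.linalg.norm(ref))
        perp = np.abs(cosines) < perp_thresh
        para = np.abs(cosines) > para_thresh
        if perp.sum() == 2 and para.sum() == 1:      # two orthogonal, one parallel neighbour
            v1, v2 = others[perp]
            h1 = others[para][0]
            V1 = (v1 - v2) / 2.0                      # averaged vertical step vector
            V2 = (h1 - ref) / 2.0                     # averaged horizontal step vector
            kept.append((tuple(pc), V1, V2))
    return kept
```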

2.4. Standard Checkerboard Corner Grid Generation

In the analysis of the chessboard corners, one begins by examining the current corner point denoted as P c . If the product of the x and y components of the associated vector V 1 is greater than zero, it can be concluded that V 1 is aligned parallel to the v-axis within the pixel coordinate framework. In this scenario, the absolute value of V 1 is integrated into the vertical vector B v , while the x-component of the corresponding vector V 2 is incorporated into the x-component of the horizontal vector B u . Additionally, the negative value of the y-component of V 2 is added to the y-component of B u . Conversely, if the product of the x and y components of vector V 1 at corner point P c is less than zero, the absolute value of V 2 is included in the horizontal vector B u , and the x-component of vector V 1 is added to the x-component of the vertical vector B v . Furthermore, the negative of the y-component of V 1 is accumulated into the y-component of B v , as delineated in Equation (10).
\begin{cases} B_v = B_v + (|V_{1x}|, |V_{1y}|), \quad B_u = B_u + (V_{2x}, -V_{2y}), & V_{1x} \cdot V_{1y} > 0 \\ B_u = B_u + (|V_{2x}|, |V_{2y}|), \quad B_v = B_v + (V_{1x}, -V_{1y}), & V_{1x} \cdot V_{1y} \le 0 \end{cases}
In the equations, V 1 x and V 1 y denote the x and y-components, respectively, of the vector V 1 , whereas V 2 x and V 2 y signify the x and y-components, respectively, of the vector V 2 .
The identification of the four essential corner points of the chessboard is achieved through the comparison of pixel coordinates, which are designated as P u (up), P d (down), P l (left), and P r (right). Subsequently, the mean values of the vectors B u and B v are computed to derive the average vectors B u ¯ and B v ¯ . These average vectors are then scaled by a factor of 1.5. Utilizing the identified corner points for weighted calculations, the boundary points of the chessboard—specifically d 1 (top), d 2 (bottom), d 3 (left), and d 4 (right)—are extended and calculated as detailed in Equation (11).
d_1 = P_u - \bar{B_v}, \quad d_2 = P_d + \bar{B_v}, \quad d_3 = P_l - \bar{B_u}, \quad d_4 = P_r + \bar{B_u}
To determine the slope k i and intercept b i for each boundary of the chessboard, one can employ the linear equation representing a line, which is expressed as:
\begin{cases} k_i = \dfrac{\bar{B}_{vy}}{\bar{B}_{vx}}, \quad b_i = d_{iy} - d_{ix} \cdot \dfrac{\bar{B}_{vy}}{\bar{B}_{vx}}, & i = 1, 2 \\[2mm] k_i = \dfrac{\bar{B}_{uy}}{\bar{B}_{ux}}, \quad b_i = d_{iy} - d_{ix} \cdot \dfrac{\bar{B}_{uy}}{\bar{B}_{ux}}, & i = 3, 4 \end{cases}
In the aforementioned equation, d i x and d i y denote the x and y-components of the point d i , respectively. Similarly, B u x ¯ and B u y ¯ signify the x and y-components of the vector B u ¯ , while B v x ¯ and B v y ¯ correspond to the x and y-components of the vector B v ¯ . A central corner point, designated as P * , is selected from the established chessboard corners. Subsequently, a weighted operation is conducted on the average vectors B u ¯ and B v ¯ to derive the standard chessboard corner P i . The formula for this calculation is outlined as follows:
P_i = P^{*} + n_0 \cdot \bar{B_u} + n_1 \cdot \bar{B_v}, \quad n_0 = -20, -19, \ldots, 20, \quad n_1 = -20, -19, \ldots, 20
The parameter n_0 must be chosen according to the number of calibration checkerboard corners so that n_0^2 exceeds the total corner count. Since the total number of checkerboard corners typically does not exceed 400, this study empirically sets n_0 within the interval of −20 to 20. As a result, the algorithm can autonomously generate the required number of corners without manual intervention, provided that n_0 satisfies this criterion. The same requirement applies to n_1.
Utilizing the linear equations derived, compute the vertical component y for each boundary line in relation to the horizontal component x of each corner point P i . These vertical components will be designated as y 1 , y 2 , y 3 , y 4 . It is imperative to ensure that the generated network corner points P i remain within the confines of the chessboard by adhering to the specified conditions:
(P_{iy} - y_1) \cdot (P_{iy} - y_2) < 0, \quad (P_{iy} - y_3) \cdot (P_{iy} - y_4) < 0
In the conditions, P i y denotes the y-component of the corner point P i . When these conditions are satisfied, the corner point P i is deemed a valid corner point. By confirming that each corner point P i is situated within the confines established by the linear equations, we can ascertain that the resultant corner points constitute a standard chessboard grid, which accurately reflects the configuration of the chessboard, as illustrated in Figure 6.
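A simplified sketch of the grid generation and the boundary test, assuming the boundary lines have already been reduced to slope/intercept pairs as described above (the function name and calling convention are hypothetical):

```python
import numpy as np

def generate_standard_grid(p_star, bu_bar, bv_bar, boundary_lines, n_max=20):
    """boundary_lines: four (k, b) slope/intercept pairs for the top, bottom,
    left and right boundary lines of the checkerboard."""
    p_star = np.asarray(p_star, dtype=float)
    bu_bar = np.asarray(bu_bar, dtype=float)
    bv_bar = np.asarray(bv_bar, dtype=float)
    (k1, b1), (k2, b2), (k3, b3), (k4, b4) = boundary_lines
    grid = []
    for n0 in range(-n_max, n_max + 1):
        for n1 in range(-n_max, n_max + 1):
            p = p_star + n0 * bu_bar + n1 * bv_bar   # candidate lattice point
            y1, y2 = k1 * p[0] + b1, k2 * p[0] + b2
            y3, y4 = k3 * p[0] + b3, k4 * p[0] + b4
            # keep the point only if it lies between both pairs of boundary lines
            if (p[1] - y1) * (p[1] - y2) < 0 and (p[1] - y3) * (p[1] - y4) < 0:
                grid.append(p)
    return np.array(grid)
```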

2.5. Checkerboard Corner Merging and Completion

This research utilizes a grid-constrained corner completion methodology, with the comprehensive decision-making process delineated as follows: initially, a proximity search is conducted for each designated grid point P i within the standard corner grid, wherein the squared Euclidean distance between P i and the identified corners P c from Section 2.2 is computed.
d_{pc}^2 = (P_{ix} - P_{cx})^2 + (P_{iy} - P_{cy})^2
In this context, P i x and P i y represent the planar coordinate components of the predetermined grid corners, whereas P c x and P c y denote the coordinate components of the identified corners. The squared distance d p c 2 is evaluated against a predetermined threshold δ , which is generally assigned a small value; when this distance is less than δ , the two corners are deemed adjacent. In the present study, δ is empirically set to a value of 50. Under these circumstances, the grid corner P i is substituted with the filtered corner P c . Conversely, if no adjacent initial corner is identified, the grid corner P i remains unchanged. This approach utilizes geometric substitution and mechanisms for completing missing corners to facilitate corner reconstruction while maintaining subpixel-level localization precision. By systematically iterating through all designated grid points and performing this substitution process, a comprehensive topological structure of the checkerboard corners is ultimately established. The outcomes of this processing are illustrated in Figure 7.
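The merging-and-completion rule can be expressed compactly as below; δ = 50 follows the text, and the vectorized nearest-neighbor search is an implementation choice:

```python
import numpy as np

def merge_and_complete(grid_corners, detected_corners, delta=50.0):
    grid = np.asarray(grid_corners, dtype=float)
    det = np.asarray(detected_corners, dtype=float)
    final = []
    for p in grid:
        d2 = np.sum((det - p) ** 2, axis=1)   # squared distances to every detected corner
        j = int(np.argmin(d2))
        # substitute the nearest detected corner when close enough; otherwise keep
        # the generated grid corner, which completes corners lost to noise
        final.append(det[j] if d2[j] < delta else p)
    return np.array(final)
```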

3. Experimental Results and Analysis

The apparatus utilized in this study is an industrial CCD video microscope characterized by a resolution of 1920 by 1080 pixels. The optical and lens magnifications are adjustable, with ranges of 21–135× and 0.7–4.5×, respectively. Illumination was provided by an adjustable LED ring light. The camera was equipped with a Japanese Panasonic CMOS image sensor, capable of achieving a maximum image resolution of 48 million pixels and a frame rate of 60 frames per second. High-definition images were transmitted in real-time through a USB interface. This camera is produced by Jiangsu Yunliheng E-commerce Co., Ltd., a company based in Liyang, China, and was acquired through an e-commerce platform. The camera is designed for high-precision measurement applications. Due to the constraints of pixel density in high-resolution cameras, excessively large calibration boards necessitate the capture of multiple images for stitching, which can hinder computational efficiency. Additionally, sparse checkerboard corners may negatively impact the accuracy of subsequent registration and calibration processes. To mitigate these issues, the experiment employs a CG-2 glass checkerboard calibration board with physical dimensions of 15 mm by 15 mm, comprising 17 by 19 corners and grid dimensions of 0.1 mm by 0.1 mm. This configuration, with a pattern accuracy of ±1 μm, adequately meets the dual requirements of spatial resolution and geometric precision necessary for micron-level visual measurement applications. The experimental measurement setup is depicted in Figure 8. The camera was fixed in place, while a 3D micro-positioning stage was utilized to translate the calibration board. Multiple images were captured from a single camera and subsequently stitched using a translation-based stitching strategy. This method preserves the original resolution of individual images, with the final stitched image resolution being contingent upon the resolution of the single images and the dimensions of the overlapping regions. For two images with resolutions of M × N each, if the overlapping region has pixel dimensions m × n, the resulting stitched image resolution can be approximated as (2M − m) × (2N − n). This strategy optimizes the retention of raw image data. Images obtained under varying environmental conditions are stitched, and corner detection experiments are performed based on the resultant stitched images.
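For illustration only, with hypothetical numbers: stitching two 1920 × 1080 frames whose overlapping region measures 400 × 1080 pixels would yield a mosaic of roughly (2 × 1920 − 400) × (2 × 1080 − 1080) = 3440 × 1080 pixels.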

3.1. Precision Comparison Experiment

To assess the precision of the proposed corner detection methodology, a series of comparative experiments was executed utilizing stitched checkerboard images after image preprocessing. These experiments benchmarked the new method against Zhang’s calibration technique and three advanced detection algorithms from references [21,22,24]. The ground-truth corner coordinates were established through a manual verification process, wherein human annotators utilized specialized image annotation software to delineate rectangular regions centered on each corner, subsequently extracting the centroid pixel coordinates as reference points. Detection precision was quantified by computing the deviation between the coordinates produced by the algorithm and the ground-truth positions across all identified corners, with the mean of these deviations serving as the definitive metric for detection error.
The comparative outcomes of the five detection methodologies are illustrated in Figure 9. The findings reveal that Zhang’s calibration method demonstrates considerable reliability under standard conditions, achieving a detection accuracy of 98.95% with an almost negligible false detection rate, as detailed in Table 1. However, the method experiences significant performance deterioration in the presence of noise interference; as depicted in Figure 10a, the algorithm encounters a 9.76% missed detection rate and corner coordinate deviations of 0.71 pixels, which severely hampers calibration accuracy. In scenarios involving occlusion, the method exhibits inadequate recognition capabilities for corners obscured by edges, achieving a success rate of less than 40% and lacking effective geometric completion mechanisms. The chord-to-point distance accumulation algorithm referenced in reference [21] is characterized by high sensitivity to noise, resulting in 28.71% false detections in complex backgrounds, as shown in Figure 10b, with detection accuracy diminishing to 71.29% and localization errors escalating to 0.87 pixels. Although the method described in reference [22] maintains a detection accuracy of 97.98%, it suffers from systematic missed detections at the boundaries of the checkerboard, as illustrated in Figure 10c, alongside an increased false detection rate of 2.02% and localization errors of 0.93 pixels. The algorithm presented in reference [24] exhibits marked vulnerability to structural interference from the checkerboard, leading to a proliferation of false corner detections and localization errors that can reach up to 1.30 pixels, as depicted in Figure 10d. In contrast, our proposed algorithm achieves a full detection rate of 98.36% with localization errors of merely 0.67 pixels, representing a 32.1% enhancement over the best-performing baseline method. This outcome substantiates the precision assurance capabilities of our integrated detection and completion framework. Furthermore, under conditions of severe occlusion, the geometry constraint-driven corner completion strategy effectively recovers a majority of the occluded corners while maintaining superior overall detection accuracy compared to existing methodologies, thereby affirming the robustness of the algorithm in occlusion scenarios.
The experiment was conducted using an Intel Core i5-8265U processor, which is equipped with four physical cores and eight logical processors, operating at a base clock frequency of 1.60 GHz. As illustrated in the runtime comparisons presented in Table 1, the algorithms under evaluation demonstrate considerable variations in computational efficiency. Specifically, references [21,22] attain minimal latency through the implementation of lightweight architectures; however, this efficiency is accompanied by significant accuracy-efficiency trade-offs, resulting in precision levels of 71.29% and 97.98%, respectively. The methodology outlined in reference [24], along with our proposed algorithm, exhibit comparable computational durations within the second tier. Nevertheless, our approach, through the application of parallelized optimization, achieves superior detection accuracy while adhering to similar temporal constraints, thereby surpassing the performance of reference [24]. Conversely, Zhang’s calibration method is characterized by the highest computational burden due to its reliance on iterative optimization processes, leading to a latency that is twice that of our method and rendering it inadequate for real-time applications. Importantly, our algorithm sustains sub-pixel localization accuracy while reducing single-frame processing time to below 10 s, thereby achieving an optimal equilibrium between the preservation of precision and the minimization of computational overhead. The proposed architecture successfully meets the real-time operational requirements essential for visual calibration systems.

3.2. Lighting Condition Experiment

Variations in illumination significantly influence the luminance and contrast characteristics of images, which in turn directly impact the efficacy of corner detection algorithms. This study systematically collected and integrated multiple image sets under four distinct illumination conditions: high-intensity lighting, low-intensity lighting, normal lighting, and uneven illumination fields. A thorough evaluation of corner detection was conducted utilizing Zhang’s calibration method, the established techniques in references [21,22,24], and the proposed algorithm. Figure 11 provides a quantitative comparison of their photometric stability.
The comparative analysis of detection performance among the various algorithms, as presented in Table 2, reveals significant disparities in the robustness of corner detection under complex illumination conditions. Notably, Zhang’s calibration method exhibits stable detection performance under high-intensity, normal, and uneven illumination conditions, effectively identifying checkerboard corners with minimal missed detections and false positives. However, its performance deteriorates markedly in low-light environments, where it struggles to detect valid corners and demonstrates increased sensitivity to lighting variations. The algorithm referenced in reference [21] maintains relatively stable detection performance under normal and strong lighting conditions but experiences considerable performance decline in low-light environments, characterized by a rise in false corner detections. In scenarios of uneven illumination, this algorithm shows inadequate responsiveness in darker regions, resulting in a significant increase in false positive rates. The algorithm from reference [22] achieves the highest detection accuracy under strong lighting and retains full accuracy in low-light conditions, although it experiences minor performance fluctuations under normal and uneven illumination, particularly leading to localized missed detections in dim areas. The algorithm in reference [24] demonstrates optimal detection completeness under strong illumination but suffers from partial, missed detections. While it maintains high accuracy in low-light environments, the number of valid detected corners diminishes. Under uneven illumination, its performance declines significantly, with the correct detection rate dropping to 88.83% and abnormal false positives occurring in regions of abrupt brightness transitions. The proposed algorithm achieves high detection accuracy in uniform illumination scenarios, demonstrating strong adaptability to varying light intensities. In conditions of uneven illumination, the algorithm actively discards candidate points in dark regions that cannot be confirmed as checkerboard areas. Although this strategy results in an 11.76% loss of valid corners, it effectively mitigates false corner judgments and significantly enhances detection reliability. Comparative experiments indicate that the proposed algorithm surpasses existing methods in overall detection accuracy and false positive control, particularly exhibiting greater robustness in complex illumination conditions.

3.3. Noise Level Experiment

In order to evaluate the stability of algorithms in practical applications characterized by differing levels of noise interference, we systematically applied controlled noise perturbations to checkerboard surfaces, resulting in stitched checkerboard images with precisely calibrated noise intensities. A thorough assessment was performed utilizing Zhang’s calibration method, the established methodologies in references [21,22,24], and the proposed algorithm. The outcomes of the experiments conducted under various noise conditions are presented in Figure 12.
The experimental findings, benchmarked against a ground truth comprising 323 corner points, reveal distinct performance characteristics across different noise levels, with specific detected corner data presented in Table 3. Zhang’s calibration method consistently fails to identify the complete set of corners in the presence of noise, demonstrating a gradual decline in the number of detected corners as noise intensity increases. At noise level III, the algorithm becomes entirely ineffective, resulting in zero valid corner detections due to significant corruption of feature descriptors. Reference [21] exhibits increasing instability with heightened noise levels, maintaining a precision of 74.94% at noise level I, but experiencing a substantial rise in false positives, with precision dropping to 49.84% at level III, suggesting insufficient noise suppression. Reference [22] employs a conservative detection strategy through stringent filtering, achieving perfect precision at level I but failing to detect 50.15% of valid corners. Its precision subsequently decreases to 95.96% at level II and remains at 95.57% at level III, indicating a vulnerability to noise-induced texture confusion. Reference [24] demonstrates moderate robustness, achieving 99.05% precision at level I; however, its detection completeness significantly declines under higher noise conditions. In contrast, the proposed algorithm attains optimal precision at levels I and II through adaptive noise suppression, maintaining a precision of 94.74% at level III while controlling false positives. Importantly, while existing methods exhibit trade-offs between precision and recall or demonstrate sensitivity to noise-induced degradation, our approach maintains a balanced performance through dynamic candidate validation, limiting false positives to less than or equal to 5.26% even in the presence of extreme noise contamination.

3.4. Comparative Analysis of Algorithms and Experimental Results

The four corner detection methodologies examined in this study each possess unique advantages and limitations, showcasing specific strengths suited to various application contexts. For instance, Zhang’s calibration technique is characterized by its simplicity and operational ease, making it particularly effective for camera calibration in conventional settings. This method is well-suited for scenarios that demand rapid implementation, cost-effectiveness, and high precision, demonstrating considerable robustness in the presence of low-noise and low-distortion images. The approach detailed in reference [21], which utilizes chord-to-point distance accumulation, innovatively substitutes traditional curvature calculations with distance measurements from edge contour points to chords, thereby diminishing computational complexity. This technique is particularly applicable for corner feature extraction in environments afflicted by severe distortion or substantial noise interference. Reference [22] introduces a robust chessboard corner detection algorithm that leverages template matching and growth strategies. The fundamental premise of this method is multi-stage optimization aimed at achieving high-precision corner localization and reconstruction of chessboard structures. It adaptively identifies chessboards of varying sizes through growth strategies, effectively overcoming the constraints of conventional methods that necessitate predefined chessboard parameters. Additionally, it accommodates scenarios involving multiple chessboards. The methodology presented in reference [24] is tailored for complex environments, detecting chessboard corners by analyzing grayscale distribution characteristics. This approach involves the extraction of candidate corners at chessboard intersections using grayscale features, followed by iterative optimization to eliminate false corners and merge adjacent candidates for precise detection. Our proposed algorithm builds upon the template detection framework of reference [22] by establishing circular sampling points and examining orthogonal corner features. During the verification of orthogonal characteristics, we introduce a predefined angular tolerance range. The subsequent steps adhere to the iterative optimization strategy outlined in reference [24], filtering candidates based on corner arrangement patterns. Furthermore, we integrate angular tolerance into vector angle validation to enhance accuracy in the context of distorted and affine-transformed images. This methodology effectively eliminates false and non-chessboard corners while deriving two orthogonal directional components to create a standardized corner network. Finally, we adapt the corner merging strategy from reference [24] to facilitate the detection and completion of chessboard corners in environments characterized by noise.
Experimental findings indicate that Zhang’s method, along with the techniques presented in references [22,24], demonstrate high detection accuracy across varying levels of noise and illumination. Nonetheless, Zhang’s approach is ineffective in identifying corners under low-light or high-noise conditions, and its computational efficiency is significantly hindered by an excessive number of input corners. The method described in reference [22] encounters challenges with noisy images, leading to considerable feature loss, which results in missed detections and an inability to reconstruct absent corners. Additionally, its performance in template matching deteriorates in dim lighting conditions. Conversely, the technique outlined in reference [24] necessitates substantial grayscale differences at chessboard intersections and primarily concentrates on eliminating false corners, which consequently reduces corner counts in the presence of noise or variations in illumination. The method referenced in [21] is capable of detecting corners under a range of illumination and noise scenarios; however, it tends to produce an excessive number of false corners due to inadequate validation criteria, resulting in insufficient precision for vision measurement applications. In contrast, our proposed algorithm successfully achieves accurate corner detection and completion across diverse illumination and noise conditions while preserving precision. In situations characterized by severe noise or occlusion, where fewer than half of the anticipated corners are detected in a boundary row or column, our algorithm prioritizes measurement accuracy by selectively discarding uncertain corners rather than attempting to reconstruct ambiguous boundaries.
The experimental results suggest that while the algorithm presented in this study demonstrates superior performance in corner detection and completion capabilities compared to existing algorithms, particularly in noisy environments, it exhibits a significant reliance on prior knowledge of the checkerboard structure. This limitation confines its applicability primarily to regular network calibration targets. The accuracy of corner completion is notably compromised when the checkerboard experiences physical deformations, such as bending or folding. Moreover, the algorithm struggles to detect and complete all corners effectively in scenarios characterized by occlusion and uneven lighting. The employed corner discarding strategy fails to account for the continuity and spatial arrangement of local corners, which may result in the inadvertent loss of information from certain detected corners during the delineation of the checkerboard boundary. In instances where the checkerboard structure within the stitched image is incomplete, the presence of a completion strategy does not facilitate the creation of a standard network beyond the image, thereby leading to algorithmic inaccuracies. Additionally, the algorithm does not address the potential coexistence of multiple checkerboards; when multiple checkerboard structures are present within a single image, it may erroneously determine the network boundary and miscalculate the horizontal and vertical components of the corners. This miscalculation can lead to confusion within the corner network, ultimately undermining the reliability of the algorithm’s detection results.

4. Conclusions

This study presents a method for checkerboard corner detection and automatic completion specifically designed for noisy stitched images, offering a viable solution for the calibration of high-resolution cameras. By leveraging information from multiple images and intentionally introducing noise during the stitching process to create complete checkerboard patterns, the proposed approach achieves high-precision corner detection and completion in high-noise environments. Experimental findings indicate that the algorithm demonstrates superior detection accuracy, exhibiting reduced corner localization errors and significantly enhanced precision in comparison to existing techniques. Furthermore, tests conducted under varying illumination conditions validate the robustness of the method, effectively addressing common lighting variations encountered in practical applications. Future research will aim to tackle the existing challenges associated with completing edge corners in scenarios characterized by non-uniform illumination and partial occlusion, with the goal of improving overall detection performance and adaptability to specialized conditions.

5. Author Statement

The language expression in this manuscript has been optimized with the assistance of editing tools, but all research content, data analysis, and academic conclusions are the original work of the authors.

Author Contributions

Conceptualization, H.L., L.S. and J.F.; methodology, H.L., L.S. and J.F.; software, H.L. and L.S.; validation, L.S., J.F. and S.W.; formal analysis, L.S., J.F. and S.W.; investigation, H.L., L.S., J.F. and S.W.; resources, H.L. and L.S.; data curation, L.S., J.F. and S.W.; writing—original draft preparation, H.L. and L.S.; writing—review and editing, J.F. and S.W.; visualization, H.L., L.S. and J.F.; supervision, J.F. and S.W.; project administration, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krim, H.; Ben Hamza, A. Geometric Methods in Signal and Image Analysis; Cambridge University Press: Cambridge, UK, 2015. [Google Scholar]
  2. Liu, Y.; Zhang, L.; Ai, H.; Xu, B.; Sun, Y.; Fan, Z. Progress and Prospect of 3D Reconstruction Based on Light Field Cameras. Acta Electron. Sin. 2022, 50, 1774–1792. [Google Scholar]
  3. Rezaei, S.; Arefi, H. Evaluation of Network Design and Solutions of Fisheye Camera Calibration for 3D Reconstruction. Sensors 2025, 25, 1789. [Google Scholar] [CrossRef]
  4. Wang, Z.; Liu, B.; Huang, F.; Gao, H. Binocular Infrared Camera Calibration Based on Calibration Checkerboard. J. Phys. Conf. Ser. 2019, 1345, 032014. [Google Scholar] [CrossRef]
  5. Li, D.; Wang, G.; Liu, J.; Fan, H.; Li, B. Joint Internal and External Parameters Calibration of Optical Compound Eye Based on Random Noise Calibration Pattern. J. Electron. Inf. Technol. 2024, 46, 2898–2907. [Google Scholar]
  6. Wang, T.; Wang, L.; Zhang, W.; Duan, X.; Wang, W. Design of infrared target system with Zhang Zhengyou calibration method. Opt. Precis. Eng. 2019, 27, 1828–1835. [Google Scholar] [CrossRef]
  7. Chen, J.; Liu, X.; Du, L.; Song, B.; Sun, X. Image Noise Simulation and Verification of Area Array CMOS Sensor. Acta Opt. Sin. 2024, 44, 1228007. [Google Scholar]
  8. Liu, Z.; Xi, Z. Noise Processing of High Dynamic Image Synthesis Based on Multiple Exposure. Electron. Sci. Technol. 2016, 29, 90. [Google Scholar]
  9. Yu, Z.; Bu, T.; Zhang, Y.; Jia, S.; Huang, T.; Liu, J. Robust Decoding of Rich Dynamical Visual Scenes With Retinal Spikes. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 3396–3409. [Google Scholar] [CrossRef]
  10. Liu, Z.; Guan, B.; Shang, Y.; Bian, Y.; Sun, P.; Yu, Q. Stereo Event-Based, 6-DOF Pose Tracking for Uncooperative Spacecraft. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5607513. [Google Scholar] [CrossRef]
  11. Chen, H.; He, H.; Xiao, H.; Huang, J. A fast and automatic mosaic method for high-resolution satellite images. In Proceedings of the International Conference on Intelligent Earth Observing and Applications, Guilin, China, 23–24 October 2015; Volume 98080A. [Google Scholar]
  12. Yang, Y.; Mu, L.; Liu, J.; Zeng, X.; Liu, Y.; Zou, X.; Li, C. A Fast Mosaicing Method of High Resolution Image Based on Chang’E 2 data. Bull. Surv. Mapp. 2015, 1, 115–120. [Google Scholar]
  13. Liu, T.; Song, Y.; Li, J.; Zhao, X. Improved Algorithm for High-Resolution Image Stitching Based on ORB Features. Laser Optoelectron. Prog. 2021, 58, 0810004. [Google Scholar]
  14. Xia, M.; Mao, Q.; Wang, G.; Fan, T. A subway tunnel image stitching method based on point cloud mapping relationships and high-resolution image. Eng. Res. Express 2024, 6, 025102. [Google Scholar] [CrossRef]
  15. Li, K.; Chen, Y.; Liu, J.; Mu, X. Survey of Deep Learning-Based Object Detection Algorithms. Comput. Eng. 2022, 48, 1–12. [Google Scholar]
  16. Elliott, J.; Khandare, S.; Butt, A.A.; Smallcomb, M.; Vidt, M.E.; Simon, J.C. Automated tissue strain calculations using Harris corner detection. Ann. Biomed. Eng. 2022, 50, 564–574. [Google Scholar] [CrossRef]
  17. Luo, C.; Sun, X.; Sun, X.; Song, J. Improved Harris corner detection algorithm based on Canny edge detection and gray difference preprocessing. J. Phys. Conf. Ser. 2021, 1971, 012088. [Google Scholar] [CrossRef]
  18. Zhang, H.; Wang, Z. SUSAN multi-scale corner detection based on wavelet transform. Opto-Electron. Eng. 2009, 36, 100–105. [Google Scholar]
  19. Deng, M.; Yang, F.; Chen, Q.; Wang, J.; Sun, S.; Qi, B. A multi-scale refinement corner detection algorithm based on Shi-Harris. Digit. Signal Process. 2025, 161, 105137. [Google Scholar] [CrossRef]
  20. Liu, Y.; Li, T. Research of the improvement of Zhang’s camera calibration method. Opt. Tech. 2014, 40, 565. [Google Scholar]
  21. Ma, A.; Lu, G. Robust image corner detection based on the chord-to-point distance accumulation technique. IEEE Trans. Multimed. 2008, 10, 1059–1072. [Google Scholar]
  22. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 3936–3943. [Google Scholar]
  23. Yang, W.; Guo, S.; Li, X.; Li, H. Checkerboard corner detection based on Hough transform and circular template. Laser Optoelectron. Prog. 2020, 57, 181510. [Google Scholar] [CrossRef]
  24. Wu, M.; Wu, J.; Ma, S.; Yang, K.; Yang, P. Checkerboard corner detection based on corner gray distribution feature. Laser Optoelectron. Prog. 2020, 57, 109–116. [Google Scholar]
  25. Chen, Y.; Huang, F.; Shi, F.; Liu, B.; Yu, H. Plane chessboard-based calibration method for a LWIR ultra-wide-angle camera. Appl. Opt. 2019, 58, 744–751. [Google Scholar] [CrossRef]
  26. Dan, X.; Gong, Q.; Zhang, M.; Li, T.; Li, G.; Wang, Y. Chessboard corner detection based on EDLines algorithm. Sensors 2022, 22, 3398. [Google Scholar] [CrossRef]
  27. Donné, S.; DeVylder, J.; Goossens, B.; Philips, W. MATE: Machine Learning for Adaptive Calibration Template Detection. Sensors 2016, 16, 1858. [Google Scholar] [CrossRef]
  28. Du, X.; Jiang, B.; Wu, L.; Xiao, M. Checkerboard corner detection method based on neighborhood linear fitting. Appl. Opt. 2023, 62, 7736–7743. [Google Scholar] [CrossRef]
  29. Wang, F.; Luo, Y.; Duan, G. Checkerboard Subpixel Corner Detection Based on Local Area Intensity Response and LoG Response. In Proceedings of the 2024 6th International Conference on Natural Language Processing (ICNLP), Xi’an, China, 22 March 2024; pp. 392–396. [Google Scholar]
  30. Wang, X.; Guo, H.; Wang, Y.; Jin, B. Research on BOTDR noise suppression method based on two-dimensional image Gaussian filter. Opt. Tech. 2024, 50, 513–521. [Google Scholar]
  31. Said, K.A.M.; Jambek, A.B. Analysis of image processing using morphological erosion and dilation. J. Phys. Conf. Ser. 2021, 2071, 012033. [Google Scholar] [CrossRef]
  32. Rufli, M.; Scaramuzza, D.; Siegwart, R. Automatic detection of checkerboards on blurred and distorted images. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3121–3126. [Google Scholar]
  33. Zhou, M.; Zhou, Y.; Yang, D.; Song, K. Remote sensing image classification based on Canny operator enhanced edge features. Sensors 2024, 24, 3912. [Google Scholar] [CrossRef]
  34. Meng, C.; Cai, Q.; Wu, L.; Guan, S.; Wang, Y.; Wang, T. A fast X-corner detection method based on block-search strategy. Adv. Mech. Eng. 2019, 11, 168781401983414. [Google Scholar] [CrossRef]
  35. Wang, J.; Ji, X.; Zhang, X.; Sun, Z.; Wang, T. Real-time robust individual X point localization for stereoscopic tracking. Pattern Recognit. Lett. 2018, 112, 138–144. [Google Scholar] [CrossRef]
  36. Ye, C. Mutual information-based circular template matching for image registration. Korean J. Remote Sens. 2014, 30, 547–557. [Google Scholar] [CrossRef]
Figure 1. Comparison of stitching results between noise-free and noisy images. (a) Noise-free. (b) Noisy.
Figure 2. Comparison of the effects of different image processing techniques. (a) The original mosaic image. (b) Gaussian filtering. (c) Dilation. (d) Erosion. (e) Edge detection after Gaussian filtering. (f) Edge detection after dilation and erosion processing.
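For reference, the snippet below is a minimal sketch (Python with OpenCV) of the two preprocessing paths contrasted in Figure 2: Gaussian filtering versus dilation followed by erosion, each followed by Canny edge detection. The file name, kernel sizes, sigma, and Canny thresholds are illustrative assumptions, not the parameters used in this paper.

```python
import cv2

# Hypothetical input file; any grayscale noisy checkerboard image works.
img = cv2.imread("noisy_checkerboard.png", cv2.IMREAD_GRAYSCALE)

# Path A: Gaussian filtering, then Canny edge detection.
blurred = cv2.GaussianBlur(img, (5, 5), 1.0)      # 5x5 kernel, sigma = 1.0 (assumed)
edges_gaussian = cv2.Canny(blurred, 50, 150)      # thresholds are illustrative

# Path B: dilation followed by erosion, then Canny edge detection.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
dilated = cv2.dilate(img, kernel, iterations=1)
closed = cv2.erode(dilated, kernel, iterations=1)
edges_morphological = cv2.Canny(closed, 50, 150)

cv2.imwrite("edges_gaussian.png", edges_gaussian)
cv2.imwrite("edges_morphological.png", edges_morphological)
```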
Figure 3. The coordinate system of the circular sampling points.
Figure 4. (a) Corner detection pipeline. (b) Raw points. (c) Centroid calculation. (d) False positive examples.
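As a loose illustration of the centroid step named in Figure 4 (not the authors’ implementation), the sketch below merges clusters of raw candidate points into single corner estimates by averaging; the 5-pixel merge radius and the simple seed-based grouping are assumed for illustration.

```python
import numpy as np

def merge_candidates(points, radius=5.0):
    """Group raw candidate points that lie within `radius` of a seed point
    and return one centroid per cluster."""
    remaining = [np.asarray(p, dtype=float) for p in points]
    centroids = []
    while remaining:
        seed = remaining.pop(0)
        cluster = [seed]
        keep = []
        for p in remaining:
            if np.linalg.norm(p - seed) < radius:
                cluster.append(p)
            else:
                keep.append(p)
        remaining = keep
        centroids.append(np.mean(cluster, axis=0))
    return np.array(centroids)

# Example: three noisy detections around one true corner collapse to a single centroid.
raw = np.array([[100.2, 50.1], [101.0, 49.6], [99.7, 50.4], [250.0, 80.0]])
print(merge_candidates(raw))  # -> two centroids, one per cluster
```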
Figure 5. The schematic diagram of the checkerboard corner screening process.
Figure 6. Standard checkerboard corner grid.
Figure 7. Standard checkerboard corner point network diagram.
Figure 8. The corner detection experimental setup.
Figure 9. Detection performance comparison: non-occluded cases and occluded conditions. The red dots depicted in the images represent the identified coordinates of the corners. (a) Zhang. (b) Ref. [21]. (c) Ref. [22]. (d) Ref. [24]. (e) Ours.
Figure 10. Summary of issues in corner detection under non-occluded and occluded conditions. The red dots depicted in the images represent the identified coordinates of the corners. (a) Zhang. (b) Ref. [21]. (c) Ref. [22]. (d) Ref. [24]. (e) Ours.
Figure 11. Detection results under different lighting conditions. The red dots depicted in the images represent the identified coordinates of the corners. (a) Zhang. (b) Ref. [21]. (c) Ref. [22]. (d) Ref. [24]. (e) Ours.
Figure 12. Detection results under different noise levels. The red dots depicted in the images represent the identified coordinates of the corners. (a) Zhang. (b) Ref. [21]. (c) Ref. [22]. (d) Ref. [24]. (e) Ours.
Table 1. Comparative analysis of corner detection methods.

Algorithm    Precision (%)    Recall (%)    Bias (pixel)    Average Runtime (s)
Zhang        98.9547          90.2476       0.71278         16~17
Ref. [21]    71.2992          95.356        0.87669         4~5
Ref. [22]    97.9876          72.6006       0.9348          5~6
Ref. [24]    95.0743          84.5201       1.3075          9~10
Ours         98.366           94.7368       0.66636         8~9
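For context, the following is a minimal sketch of how precision, recall, and localization bias of the kind reported in Table 1 can be computed by matching detected corners to ground-truth corners. The 3-pixel matching tolerance and the greedy nearest-match rule are illustrative assumptions, not necessarily the evaluation protocol used in this paper.

```python
import numpy as np

def evaluate(detected, ground_truth, tol=3.0):
    """Match each detected corner to its nearest unmatched ground-truth corner
    within `tol` pixels, then report precision, recall, and mean localization bias."""
    matched_gt = set()
    errors = []
    for d in detected:
        dists = np.linalg.norm(ground_truth - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= tol and j not in matched_gt:
            matched_gt.add(j)
            errors.append(dists[j])
    true_pos = len(matched_gt)
    precision = true_pos / max(len(detected), 1)
    recall = true_pos / max(len(ground_truth), 1)
    bias = float(np.mean(errors)) if errors else float("nan")
    return precision, recall, bias

# Toy example: one false positive and one missed corner.
gt = np.array([[10.0, 10.0], [10.0, 30.0], [30.0, 10.0]])
det = np.array([[10.4, 9.8], [29.6, 10.5], [55.0, 55.0]])
print(evaluate(det, gt))  # -> (0.667, 0.667, ~0.54)
```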
Table 2. Summary of corner detection counts under varying illumination conditions (correct corners/detected corners).

Algorithm    High-Intensity Lighting    Normal Lighting    Low-Intensity Lighting    Uneven Lighting
Zhang        323/323                    314/317            301/303                   —
Ref. [21]    323/419                    316/420            322/484                   305/470
Ref. [22]    311/312                    288/299            257/257                   273/277
Ref. [24]    295/297                    274/317            225/230                   191/215
Ours         323/323                    323/323            323/323                   285/285
Table 3. Summary of corner detection counts under varying noise levels (correct corners/detected corners).

Algorithm    Noise Level I    Noise Level II    Noise Level III
Zhang        314/316          299/299           —
Ref. [21]    323/431          316/467           321/644
Ref. [22]    161/161          310/323           216/226
Ref. [24]    209/211          297/328           205/228
Ours         323/323          323/323           306/323