Article

Quality Assessment of High-Speed Motion Blur Images for Mobile Automated Tunnel Inspection

Department of Geotechnical Engineering Research, Korea Institute of Civil Engineering and Building Technology (KICT), Goyang 10223, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(12), 3804; https://doi.org/10.3390/s25123804
Submission received: 28 April 2025 / Revised: 30 May 2025 / Accepted: 16 June 2025 / Published: 18 June 2025
(This article belongs to the Section Intelligent Sensors)

Abstract

This study quantitatively evaluates the impact of motion blur—caused by high-speed movement—on image quality in a mobile tunnel scanning system (MTSS). To simulate movement at speeds of up to 70 km/h, a high-speed translational motion panel was developed. Images were captured under conditions compliant with the ISO 12233 international standard, and image quality was assessed using two metrics: blurred edge width (BEW) and the spatial frequency response at 50% contrast (MTF50). Experiments were conducted under varying shutter speeds, lighting conditions (15,000 lx and 40,000 lx), and motion speeds. The results demonstrated that increased motion speed increased BEW and decreased MTF50, indicating greater blur intensity and reduced image sharpness. Two-way analysis of variance and t-tests confirmed that shutter speed and motion speed significantly affected image quality. Although higher illumination levels partially improved image quality, they also occasionally reduced sharpness. Field validation using the MTSS in actual tunnel environments demonstrated that BEW and MTF50 effectively captured blur variations according to scanning direction. This study proposes BEW and MTF50 as reliable indicators for quantitatively evaluating motion blur in tunnel inspection imagery and suggests their potential to optimize MTSS operation and improve the accuracy of automated defect detection.

1. Introduction

In South Korea, tunnels for railways, subways, and roads have been constructed to maximize land use. Over time, their performance deteriorates due to structural and environmental factors, leading to cracks, water leakage, and exposed rebar. Such damage can worsen because of inadequate maintenance and substandard repairs [1]. Notable incidents caused by poor maintenance include the 2006 ceiling collapse of the Big Dig tunnel in Boston (USA), the 2012 ceiling collapse of the Sasago tunnel in Japan, and the 2019 concrete lining collapse in the Berté tunnel on Italy’s A26 highway [2,3,4]. Proper maintenance, such as structural health monitoring (SHM), is essential for preventing such accidents, slowing deterioration, and sustaining tunnel performance [5]. Currently, tunnel inspections rely on visual assessments by qualified engineers to evaluate damage. Damage details and locations are typically mapped on the tunnel face, and repair methods are selected accordingly. However, more efficient and safer inspection methods are required owing to the low reliability of visual inspections, rising labor costs, and hazardous conditions [6].
Computer vision (CV) has been applied to automate inspections that previously relied on manual analysis, becoming a key tool in civil engineering monitoring [7]. The use of deep learning (DL) for detecting surface damage in civil structures is increasing [8]. DL automatically extracts image features for damage detection, serving as an aid rather than a replacement for engineers in data collection and result interpretation [9]. Owing to restrictions on public access to infrastructure data, most DL-based surface damage research relies on open datasets. For condition assessment, damage information is classified, localized, segmented, and quantified [10]. Tunnel maintenance requires high precision, such as detecting cracks narrower than 0.3 mm, since the choice of repair method depends on this threshold. Achieving such accuracy requires domain-specific datasets that consider defect types, environmental conditions, materials, and structural factors. Ultimately, the quality of training image data strongly influences surface damage detection performance.
A mobile tunnel scanning system (MTSS) with cameras was introduced in the early 2000s to automate tunnel inspections. However, existing systems have captured images without accounting for quality factors such as low resolution, motion blur, and noise, often overlooking environmental differences across tunnels. Image processing methods (IPMs) have been employed to detect damage and address site-specific environmental variations.
Cracks on the surface of concrete structures in tunnels can be characterized by two main features: they are thinner than and distinct from other texture patterns in the structural form, and they have low brightness. Therefore, IPM detection algorithms require structural specifications to extract dark objects from a bright background [11]. Crack detection techniques involving IPM in tunnel scanning systems include morphological Gabor filters [12], hat transforms [13], image fusion [14], Bayesian classifiers [15], and wavelet approaches [16]. These techniques operate on grayscale images [17]. Additionally, as the shooting speed of tunnel scanning systems increases, the volume and complexity of collected image data grow. This leads to the over-extraction of features by existing image processing algorithms, such as threshold segmentation [18], edge detection [19], and wavelet transforms [17]. Consequently, data processing and analysis become time-consuming and complex, often yielding inaccurate results [20]. Mobile DL-based tunnel scanning systems have been developed in response to the need for accurate and effective analysis of collected image data.
Recently, research has applied various CV and DL methods in SHM to detect surface damage to structures [21,22]. DL analysis aiming at the high-accuracy, high-precision data required for SHM is influenced more by the quality of the raw image data (pixel size, quality, and quantity) than by the algorithm itself. Increased noise and motion blur in images captured during movement reduce the quality of convolutional neural network (CNN)-based results [23]. Image quality variations can lead to inaccurate damage detection in public infrastructure such as tunnels owing to environmental effects and low-quality training image data [24]. Ensuring high-quality image data enables CNNs to achieve greater accuracy and precision. However, comprehensive research on the objective evaluation of image quality from MTSSs is still lacking, particularly regarding factors affecting image quality such as low resolution and motion blur. Likewise, research on optimizing MTSSs to collect high-quality raw image data, and thereby overcome data-related limitations on DL-based surface damage detection performance, is lacking.
This study aims to quantitatively analyze the quality of raw images captured by cameras in the MTSS. It evaluates the performance of currently operating systems and examines the potential of image quality assessment (IQA) metrics as foundational data for developing advanced future equipment. To this end, this study develops a translational moving panel device that simulates indoor high-speed movement at 70 km/h, considering the driving directionality of MTSS. An indoor testing environment was constructed based on the international standard ISO 12233, which is used for analyzing camera resolution performance. Motion-blurred images were captured at various physical movement speeds under different camera exposure settings and lighting conditions. These images were comparatively analyzed using IQA metrics. Additionally, the applicability of these IQA metrics was evaluated using a currently operational MTSS, aiming to suggest a direction for future image quality management of the MTSS.

2. Related Research

2.1. Camera-Based Tunnel Scanning Systems for Automation in Inspection

Regarding tunnels with continuous sections of the same shape and size, automated inspection can be achieved by scanning the entire concrete lining using imaging devices such as cameras. Mobile camera-based tunnel scanning systems can be broadly categorized into those designed for railways and those for roads. Railway tunnel scanning systems have been continuously developed alongside advancements in camera image sensor technology and typically capture images at 5–10 km/h using trolley systems [25,26,27,28,29,30,31,32,33,34]. Railway tunnel inspections face temporal limitations because they must be conducted during non-operational hours. Low-speed inspection equipment is used because of the challenges of setting up high-speed movement devices within a short timeframe.
Conversely, road tunnels have fewer constraints compared with railway tunnels. The speed of vehicles can be autonomously controlled or adjusted for inspections by controlling or partially closing certain lanes. Additionally, road tunnels can be inspected anytime, day or night, providing ample inspection time. Typically, road tunnel scanning systems can be mounted on large vehicles equipped with cameras and lighting, facilitating transportation ease and offering excellent adaptability for tunnel inspections [35]. These systems are being developed to detect various types of damage from images captured at 40–80 km/h [36,37,38,39,40,41,42,43,44].
MTSSs and DL technologies are being developed because of the increasing need for more accurate and effective detection of damage such as cracks from image data. Xue et al. [45] proposed and designed the Faster Region-based CNN (Faster RCNN) for crack detection in images acquired using MTI-100 railway tunnel scanning equipment, showing improved accuracy compared with GoogLeNet [46], AlexNet [47,48,49], and Visual Geometry Group networks [50]. Huang et al. [51] upgraded MTI-100 to MTI-200a to enable crack detection in the concrete lining of metro tunnels by training the widely used fully convolutional network model for semantic segmentation of surface damage. The two-stream algorithm of the fully convolutional network model was superior to the region-growing and adaptive thresholding algorithms of existing IPMs in inference time and error rate [52,53]. Song et al. [54] proposed a deep learning-based tunnel crack detection system using the DeepLab model. However, they identified dataset scarcity and the challenges associated with labeling, which should be addressed in future research. Li et al. [55] developed the Metro Tunnel Surface Inspection System based on Faster RCNN for the high-precision automatic detection of defects in tunnel concrete linings. The system improved the location and classification of cracks, spalling, and leaks; however, it is insufficient for providing data for facility condition assessment. Moreover, the limitations in collecting sufficient datasets and high-quality images at high speeds should be addressed in the future for quantitative damage assessment.
Despite successfully detecting cracks using numerous DL technologies, several critical technical challenges remain [56]. In practice, the performance of CV-based crack damage detection models is significantly influenced by the quality of crack images collected under various conditions [57]. Mobile scanning systems are the most efficient for safely collecting tunnel images. However, vehicle vibrations and the limitation of close-up imaging of specific targets can lead to motion blur and resolution deficiencies in the collected images. These issues can result in the loss of image information, making crack detection more challenging and leaving fine cracks with widths of <0.3 mm undetected.

2.2. IQA for Motion Blur

With advancements in science and technology and increased availability, camera-based MTSSs are becoming prominent in tunnel maintenance. However, motion blur frequently arises when capturing images under high-speed and low-light conditions [57]. Research on image deblurring to address motion blur has focused on the iterative estimation of the blur kernel, known as the point spread function, from a blurred image—an approach known as blind deconvolution [58,59]. In contrast, non-blind deconvolution can use an inertial measurement unit (IMU)-based deblurring approach that estimates uniform or non-uniform motion blur kernels from IMU readings, encoding motion information into the data for later retrieval [60]. However, the combination of camera and subject movement in real images generates more complex, non-uniform blur.
Recently, deep neural networks have been used to address issues related to non-uniform blur kernels. However, deep neural networks require large training datasets to enhance the performance of trained models [60]. Several image-deblurring datasets, such as Need for Speed [61], DeBlurNet [62], GOPRO [63], Realistic and Diverse Scenes dataset [64], Human-aware Image Deblurring [65], and Real-World Blur Dataset [66], are publicly available. The blurred images in these datasets are generated by capturing video at high frame rates of 120 or 240 frames per second (FPS). One frame from the video is used as a sharp reference image. Subsequently, blurred images are synthesized sequentially around the sharp reference frame. Different levels and extents of motion blur effects are produced by varying the number of consecutive frames averaged in this manner [67].
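To make the frame-averaging synthesis concrete, the following is a minimal sketch in Python; the frames array, its layout, and the averaging window are assumptions for illustration rather than the exact pipeline of the cited datasets.

```python
import numpy as np

def synthesize_motion_blur(frames: np.ndarray, center: int, n: int):
    """Average n consecutive frames around a sharp reference frame.

    frames: array of shape (num_frames, H, W) from a high-frame-rate
            (e.g., 240 FPS) clip, stored as float32 in [0, 1].
    center: index of the sharp reference frame.
    n:      odd number of consecutive frames to average; a larger n
            simulates a longer exposure and hence stronger blur.
    """
    half = n // 2
    window = frames[center - half : center + half + 1]
    blurred = window.mean(axis=0)   # temporal average approximates a long exposure
    sharp = frames[center]          # the reference frame stays untouched
    return sharp, blurred

# Varying n (e.g., 3, 7, 11) yields increasing blur levels from one clip.
```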
Accurately evaluating images degraded by motion blur is crucial [60]. Image quality can be assessed subjectively (qualitatively) through human vision or objectively (quantitatively) using numerical metrics [68]. Subjective approaches to IQA are impractical for image processing applications. Objective approaches define quantitative measures that represent perceived image quality. Objective image quality metrics can be categorized into three types based on dependence on reference images: full reference IQA (FR-IQA), reduced reference IQA (RR-IQA), and no reference IQA (NR-IQA) [69].
FR-IQA measures image quality by comparing the original, undistorted image with the distorted image [70]. In FR-IQA, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) are the most widely used metrics for evaluating image deblurring algorithms. These metrics do not directly assess image quality; rather, they reveal the extent of differences by comparing reconstructed and original images. PSNR is an index used to evaluate the quality of an image degraded by noise. It measures the similarity between a restored image and the original sharp image by calculating the ratio of the maximum possible signal power to the noise power, where the noise is represented by the mean-squared error between the two images. A higher PSNR indicates greater similarity to the original image [71]. SSIM evaluates blur levels (BLs) by comparing three components—luminance, contrast, and structure—between two images, with scores of 0–1, where a score closer to 1 indicates greater similarity to the original image. As a function of brightness and contrast, SSIM is highly sensitive to parameter changes [72]. Abdullah-Al-Mamun et al. [60] proposed the BL metric to estimate motion profiles by determining the extent of degradation caused by motion, such as pixel shifts and rotations in images. The BL metric outperformed SSIM and PSNR in explaining motion blur and sharpness in low-light and low-quality images. However, it did not provide information on the direction of blurred motion in test image datasets.
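As an illustration of how PSNR and SSIM are computed in practice, here is a minimal sketch using the scikit-image implementations, assuming 8-bit grayscale arrays; it is not the evaluation code of the cited studies.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fr_iqa(reference: np.ndarray, distorted: np.ndarray) -> dict:
    """Compute PSNR (dB) and SSIM for an 8-bit grayscale image pair."""
    psnr = peak_signal_noise_ratio(reference, distorted, data_range=255)
    ssim = structural_similarity(reference, distorted, data_range=255)
    return {"PSNR_dB": psnr, "SSIM": ssim}  # higher values = closer to reference
```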
The NR-IQA algorithm can provide quality assessments of images without requiring a reference image or specific features; nonetheless, it is more challenging than FR-IQA. Owing to the absence of a reference image, the modeling process must consider the statistics of the reference image, human perceptual characteristics, and the impact of distortions on image statistics. Evaluating the effectiveness of quality measurements on specific distorted images without reference images is also limited [73].
RR-IQA algorithms assess the quality of distorted images using limited features of reference images instead of complete images [74]. Various databases are used to evaluate the suitability of developed RR-IQA metrics. Publicly available databases for evaluating developed quality metrics include LIVE2005 [75], TID2008 [76], TID2013 [77], and ILV [78]. However, no image databases are tailored to tunnel environments, limiting the assessment of the quality of images acquired via MTSSs.
The image sharpness assessment in the Imatest® software (version 24.1), which falls under RR-IQA, requires either test charts, which are impractical for natural images, or training based on common datasets. Moreover, such measurements cannot provide information on the direction of the motion causing the blur [60]. However, Imatest® analyzes camera resolution, enabling performance analysis of the cameras capturing the images [79]. Quantitative motion-blurred image data can be obtained in an indoor setting that simulates tunnel conditions by adjusting physical movement and lighting conditions to create a low-light environment and capturing test charts. This can be achieved based on the movement speed and exposure performance of the camera. A database compiled from these images would aid in assessing and analyzing image quality. Based on the preceding literature review, addressing image data limitations for applying DL and evaluating image quality is essential. Therefore, acquiring motion-blurred image data and conducting IQAs in a standardized indoor environment are necessary.

3. Methodology

3.1. Modulation Transfer Function (MTF)

The digital images acquired via camera-based tunnel scanning systems undergo multiple processes, including image generation, compression, storage, and transmission. The final image quality will degrade if visual information is lost at any of these stages [80]. A reliable IQA metric is essential for selecting high-quality images. Human subjective image evaluations are reliable; nevertheless, their application requires considerable time and effort [81]. An objective IQA that accurately reflects the perceptual characteristics of the human visual system is necessary [82]. The MTF is a well-established method for measuring sharpness for camera systems [83].
Spatial resolution and image sharpness are fundamental characteristics of digital imaging devices such as MTSSs. The international standard ISO 12233 for digital cameras provides guidelines and evaluation methods for determining image sharpness and resolution [84]. The MTF and spatial frequency response (SFR) of optical imaging systems are metrics defined by ISO 12233 for spatial resolution analysis. Figure 1 presents the standardized Imatest ISO 12233:2017 edge spatial frequency response (eSFR) target chart for measuring image quality parameters. This chart fully complies with the low-contrast eSFR ISO standard and integrates several image quality parameters, including sharpness, lateral chromatic aberration, white balance, tone response, color accuracy, and noise [85].
The slanted-edge method of measuring SFR, as presented in the international standard ISO 12233, is a technique for deriving the MTF from a slanted edge on a standardized test chart [86]. Sharpness, an important image quality factor, is evaluated at edges, i.e., boundaries between areas of different tones. It is quantified by the edge width in the image, determined by measuring the pixel-level distance between the 10% and 90% points of the edge transition. This edge width is assessed in the frequency domain, where frequency is measured in cycles or line pairs per unit of distance (millimeters, inches, pixels, or image height) or, sometimes, per unit of angle (degrees or milliradians) [87]. The relative contrast at a given spatial frequency (output contrast divided by input contrast) is referred to as the MTF. In Imatest®, MTF is used interchangeably with SFR.
Figure 2 shows a schematic of the slanted-edge method. The tilted edge of the eSFR ISO test chart has an angle of 5°. The region of interest (ROI) is depicted as a rectangular area along the short side (Figure 2a). Figure 2b presents a one-dimensional edge spread function (ESF) graph, approximated using a finite difference filter, together with the discrete Fourier transform calculated with a Hamming window. Figure 2c presents the MTF, represented as the normalized magnitude of the complex coefficients of the discrete Fourier transform. The MTF is the Fourier transform of the impulse response, which is the derivative of the edge response. The MTF of a sampled image is a measure of image resolution and sharpness. Thus, it is used to determine the level of detail a camera can reproduce [88].
MTF50 represents the spatial frequency at which the contrast falls to half (50%) of its low-frequency value, while MTF50P indicates the spatial frequency at which the contrast drops to half of its peak value [89]. Lower MTF values signify poorer image quality. Koren [90] confirmed that MTF50 values measured using the Imatest® software were closely correlated with perceived sharpness by the human eye.
Figure 3 demonstrates the sharpness of an edge using the Imatest® software to derive rise distance and MTF [91]. The blurred edge width (BEW) is the width of the rise distance from 10% to 90% measured from the ESF, expressed in pixels. The BEW of the reference image is 1.58 pixels, whereas that of the image affected by motion blur increases to 2.63 pixels. The slope width of the rise distance graph increases as motion blur occurs, indicating reduced image sharpness. This analytical method quantitatively assesses image sharpness.
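To make this analytical method concrete, the following simplified sketch derives BEW (the 10–90% rise distance) and MTF50 from a one-dimensional ESF; it assumes a clean, monotonically rising ESF sampled along the edge normal and omits the edge-angle projection, binning, and windowing refinements that Imatest® applies.

```python
import numpy as np

def bew_from_esf(esf: np.ndarray) -> float:
    """Blurred edge width: pixel distance between the 10% and 90% points
    of a normalized, monotonically rising edge spread function."""
    esf = (esf - esf.min()) / (esf.max() - esf.min())
    x = np.arange(esf.size, dtype=float)
    x10 = np.interp(0.1, esf, x)   # interpolated 10% crossing position
    x90 = np.interp(0.9, esf, x)   # interpolated 90% crossing position
    return x90 - x10

def mtf50_from_esf(esf: np.ndarray) -> float:
    """MTF50: spatial frequency (cycles/pixel) where contrast drops to
    half of its low-frequency value; assumes a monotonically decaying MTF."""
    lsf = np.diff(esf) * np.hamming(esf.size - 1)   # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                   # normalize DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)        # cycles per pixel
    # Reverse both arrays so np.interp sees increasing MTF values.
    return float(np.interp(0.5, mtf[::-1], freqs[::-1]))
```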

3.2. High-Speed Translational Moving Panel Device

As the literature review shows, publicly available motion-blurred image datasets for IQA are synthetic. The motion blur in images of moving objects is influenced more by the camera's exposure than by the FPS. During the camera's shutter exposure time, a moving object is captured by integrating all positions along its trajectory, resulting in a blurred image. Dinh et al. [83] developed an indoor rotational test device with a maximum speed of 300 RPM to evaluate the quality of motion-blurred images. In their experiments, the camera remained stationary while the chart rotated. The slope of the ESF gradually increased with increasing exposure time owing to differences in lighting brightness and rotational speed, indicating motion blur. However, MTF evaluation based on the slanted-edge method is more suitable for translational than rotational motion. Luo et al. [92] conducted an experiment in which a smartphone was moved in a translational direction at 1 m/s while recording a fixed chart. Motion blur caused the ESF to widen and the MTF profile to decrease. Their analysis indicated that a higher FPS reduced motion blur, and exposure speed was expected to affect high-speed imaging. Nevertheless, no study has investigated the effects of motion blur in images captured at high speeds of 70 km/h (~19.4 m/s).
We designed a novel device that captures images while moving at high speeds in a translational direction (Figure 4). The device can achieve a maximum speed of 110 km/h, with adjustments of 10 km/h increments. In the developed high-speed translational moving panel device, image capture was initiated at a speed of 50 km/h when the motor power converged to approximately 25%, maintaining a steady state as shown in Figure 5. Notably, once the motor power fluctuation remained consistent within ±0.1%, the speed also remained constant.
The panel holds the Imatest® eSFR ISO test chart for MTF measurement. The setup was constructed according to the shooting method outlined in ISO 12233 (Figure 6a). Two 120 W daylight LED floodlights with a color temperature of 5600 K were positioned at a 45° angle to the front of the test chart (Figure 6b). The machine vision area scan camera has a complementary metal-oxide-semiconductor image sensor with a 4096 × 2304 resolution, a minimum exposure time of 5 μs, and a global shutter. The shooting distance was 2 m, with an image resolution of 0.2 mm/pixel. The translational test device will be used to determine factors for evaluating the quality of images captured by high-speed moving tunnel scanning systems, and it can serve as a standard imaging setup for evaluating the quality of images captured by a high-speed MTSS in motion.
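As a rough cross-check of these settings, the geometric blur extent can be estimated as the distance traveled during the exposure divided by the spatial resolution; the sketch below is a first-order approximation that ignores vibration and optical effects.

```python
def blur_extent_px(speed_kmh: float, exposure_s: float, mm_per_px: float) -> float:
    """First-order blur extent in pixels: distance traveled during the
    exposure divided by the spatial resolution of the image."""
    speed_mm_s = speed_kmh / 3.6 * 1000.0   # km/h -> mm/s
    return speed_mm_s * exposure_s / mm_per_px

# 70 km/h at a 500 us shutter and 0.2 mm/pixel:
# 19,444 mm/s * 0.0005 s / 0.2 mm/px ~= 48.6 px of geometric blur,
# the same order of magnitude as the measured BEW of 38.28 px in Section 4.1.
```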

3.3. Indoor Test Setup in Standard Environments Considering Camera Exposure

In low-light environments such as tunnels, short exposure times of image sensors can introduce noise, resulting in coarse images. Conversely, longer exposure times can reduce noise, but are highly susceptible to motion blur [93]. A solution to this limitation is using high-intensity lighting to supply sufficient brightness to the image sensor, even at short exposure times. The exposure performance of a camera is influenced by the shutter speed, ISO, and lens aperture (F-stop) values. Finding the optimal exposure settings that make moving objects appear stationary is crucial for capturing sharp images. Sasama et al. [17] proposed that, in a railway scanning system, to acquire images at a resolution of 1 mm/pixel while moving at a speed of 20 km/h, an illuminance of 20,000 lx on the surface is required. Therefore, to meet the fast exposure performance of the camera, the lighting brightness was set to 15,000 lx and 40,000 lx. At illuminance levels below 15,000 lx, images captured at a shutter speed of 50 µs were not identifiable.
Videos captured by the high-speed translational moving panel device contain black background areas and the test chart. From the video frames capturing the entire test chart, 30 test chart images were extracted for each case (Figure 7) to construct a dataset of high-speed motion blur images.
An indoor test was conducted (Table 1) considering the panel’s movement speed and the camera’s exposure performance. The aperture value (F) of the lens was 2.8, and the FPS was 100. The shutter speed varied at 50 μs, 100 μs, 250 μs, and 500 μs, while the ISO was adjusted to 640, 1250, and 1600. The illuminance on the chart surface was 15,000 lx and 40,000 lx. The speed of the high-speed translational moving panel was varied at 0 km/h (stationary image), 10 km/h, 30 km/h, 50 km/h, and 70 km/h.

4. Results

An IQA was performed on images captured in the indoor experiments based on the international standard ISO 12233. We employed existing IQA metrics: BEW and MTF50 derived from the ESF (RR-IQA), and PSNR and SSIM (FR-IQA). The metrics were analyzed according to variations in illuminance, shutter speed, and movement speed.

4.1. Analysis of BEW and MTF50 by Moving Speed and Shutter Speed

Figure 8 illustrates the test chart captured at an illuminance of 15,000 lx, with variations in the speed of the moving panel and the camera’s shutter speed. According to subjective visual analysis, motion blur increases as the moving panel speed increases. Conversely, the motion blur decreases as the camera’s shutter speed increases. Despite increasing the ISO sensitivity, the images appeared darker as the shutter speed increased. Figure 9 illustrates the ROI of the slanted edge in images captured at 70 km/h, with varying shutter speeds. The motion blur decreased as the shutter speed increased from 500 μs to 50 μs. However, the level of light detected by the image sensor decreased owing to the higher shutter speed, resulting in darker images. Nonetheless, changes in the moving speed did not affect the image brightness for the same shutter speed.
The Imatest® software was used to objectively assess motion blur. Table 2 presents the BEW and MTF50 of the ESF for the central position of the eSFR ISO test chart; these values are plotted in Figure 10. As the speed of the moving panel increased, the BEW of the ESF increased and the MTF50 decreased, indicating that motion blur in the images increased with increasing speed. At 70 km/h, as the camera's shutter speed increased from 500 μs to 50 μs, the BEW decreased from 38.28 pixels to 5.32 pixels. Simultaneously, the MTF50 increased from 0.0132 cycles/pixel to 0.0981 cycles/pixel, indicating reduced motion blur. The BEW and MTF50 metrics correspond to the visual observations of the motion-blurred images.
In the experiment with 15,000 lx illuminance, higher shutter speeds resulted in darker images. Images captured at 50 μs appeared darker to the naked eye than those captured at 500 μs. Contrast is a crucial factor in image quality. The experiment was repeated with an illuminance of 40,000 lx to assess the impact of illuminance on motion blur. Figure 11 presents the test chart captured under 40,000 lx illuminance with varying moving and camera shutter speeds. Table 3 presents the average and standard deviation of BEW and MTF50 for the ESF at the central position of 30 eSFR ISO test charts. These values are shown in Figure 12. Similar to the 15,000 lx experiment, the BEW increased and MTF50 decreased as the moving speed increased, indicating an increase in motion blur.
Through graphical analysis, it was observed that both moving and shutter speed considerably influenced BEW and MTF50. To statistically validate this observation, a two-way analysis of variance (ANOVA) was conducted, and the results are presented in Table 4. The analysis was conducted separately by illuminance condition, and the effects of moving speed, shutter speed, and their interaction on the dependent variables BEW and MTF50 were evaluated.
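For reference, a minimal sketch of such a two-way ANOVA using statsmodels follows; the DataFrame columns (speed, shutter, BEW) are assumed names for illustration, not the authors' actual analysis code.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def two_way_anova(df: pd.DataFrame, metric: str = "BEW") -> pd.DataFrame:
    """Two-way ANOVA with interaction: metric ~ speed * shutter,
    both factors treated as categorical."""
    model = ols(f"{metric} ~ C(speed) * C(shutter)", data=df).fit()
    return anova_lm(model, typ=2)   # F statistics and p-values per term

# Output rows: C(speed), C(shutter), C(speed):C(shutter), Residual.
```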
According to the two-way ANOVA results, under the 15,000 lx illuminance condition, shutter speed had the greatest effect on BEW (F = 43,840.35, p < 0.0001), while travel speed (F = 29,998.22, p < 0.0001) also showed a highly significant effect. The interaction term between these two variables (F = 8298.73, p < 0.0001) was significant, indicating that BEW was not influenced by a single factor, but rather by the combined effects of speed and shutter speed.
Under the same condition, MTF50 was most influenced by shutter speed (F = 233.39, p < 0.0001), while both speed (F = 365.26, p < 0.00001) and the interaction term (F = 16.48, p < 0.00001) also showed statistically significant effects. This suggests that image sharpness also changed nonlinearly owing to the interaction between the two variables.
The overall trend remained consistent under the higher illuminance condition of 40,000 lx. For BEW, both shutter speed (F = 51,130.25, p < 0.0001) and speed (F = 36,821.60, p < 0.0001) showed strong significance and explanatory power. Regarding MTF50, shutter speed (F = 523.77, p < 0.0001) had the largest effect, while speed (F = 1265.27) and the interaction term (F = 59.73) were also significant.
Notably, the F-values for shutter speed were the highest under both lighting conditions, with increases of approximately 16.6% for BEW and 124.4% for MTF50. This indicates that the influence of shutter speed remained consistent, even in high-illuminance environments, and that in some cases, its relative explanatory power may have even increased. However, this does not necessarily imply an absolute increase in effect size; rather, it suggests that the shutter speed control remains significant, even with increased illuminance, and may be more effective under certain conditions.

4.2. Analysis of Image Quality Variation Due to Increased Illuminance

To quantitatively analyze the impact of illuminance changes on image quality, heatmaps were generated to visualize the variation rates of BEW and MTF50 when increasing illuminance from 15,000 lx to 40,000 lx, across varying travel speeds (0–70 km/h) and shutter speeds (50–500 μs). The variation rate, calculated as a percentage using Equation (1), compares the relative increase or decrease in values at 40,000 lx to those under 15,000 lx, serving as the reference baseline.
Change Rate (%) = 100 × (Value at 40,000 lx − Value at 15,000 lx) / (Value at 15,000 lx)    (1)
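Applied over the speed-by-shutter grid, Equation (1) reduces to an elementwise operation; a minimal sketch assuming two pandas pivot tables of per-condition mean values follows.

```python
import pandas as pd

def change_rate(low_lx: pd.DataFrame, high_lx: pd.DataFrame) -> pd.DataFrame:
    """Percentage change of a metric when illuminance rises from
    15,000 lx (baseline) to 40,000 lx, per speed/shutter cell."""
    return 100.0 * (high_lx - low_lx) / low_lx

# Negative BEW change rates mean less blur at 40,000 lx; plotting the
# result as a heatmap reproduces the layout of Figures 13 and 14.
```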
Figure 13 shows a heatmap of the BEW change rate, where negative values (blue) indicate that blur decreased with increased illuminance, signifying improved image quality. Conversely, positive values (red) indicate increased blur, meaning image quality degradation. The analysis indicated that, under most high-speed conditions (≥50 km/h), BEW decreased by more than 5% with increased illuminance, suggesting that high illuminance positively influences blur suppression. Notably, under the condition of a shutter speed of 250 μs at 70 km/h, BEW decreased by approximately 8.1%. However, under low-speed conditions (0–10 km/h), BEW either showed minimal change or increased, indicating that higher illuminance has limited impact on blur at low speed.
Figure 14 shows a heatmap of the MTF50 change rate, where negative values (blue) indicate a decrease in sharpness owing to increased blur, while positive values (red) represent improved sharpness. The MTF50 results exhibited more complex patterns. Under certain conditions (e.g., shutter speeds of 100 and 250 μs and speed of 0 km/h), MTF50 increased by 5–10% or more, confirming improved image sharpness. However, under high-speed conditions—particularly a shutter speed of 500 μs at 70 km/h—sharpness decreased by 31.5%, even with increased illuminance. This suggests that, beyond a certain level, higher illuminance may cause sensor overexposure or increased specular reflection, thereby reducing sharpness.
These heatmap analyses demonstrated that image quality varies non-linearly based on the interaction between illuminance and shutter speed combinations. The findings quantitatively confirmed that increasing illuminance does not always enhance quality. Therefore, when operating or developing an MTSS, illumination design should be optimized for shooting conditions. Moreover, when using MTF50 as a primary quality metric, a careful balance between illuminance and shutter speed is required.
After visualizing the effects of illuminance changes on BEW and MTF50 across conditions using heatmaps, independent-sample t-tests were performed to assess the statistical significance of these changes. Table 5 presents the resulting p-values for each condition.
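A minimal sketch of the per-condition test using SciPy follows; the two input arrays stand for the per-image measurements of one speed/shutter cell under each illuminance, which is an assumed data layout.

```python
import numpy as np
from scipy import stats

def illuminance_ttest(values_15k: np.ndarray, values_40k: np.ndarray) -> float:
    """Two-sided independent-sample t-test comparing one metric
    (BEW or MTF50) across the two illuminance conditions."""
    _, p_value = stats.ttest_ind(values_15k, values_40k)
    return p_value   # p < 0.05 marks a significant illuminance effect
```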
Concerning BEW, 10 out of 20 experimental combinations (50%) exhibited statistically significant differences at the p < 0.05 level. Notably, repeated significance was found under high-speed conditions (≥30 km/h) and long shutter durations (≥250 μs). For example, conditions such as (30 km/h, 250 μs), (50 km/h, 500 μs), and (70 km/h, 250 μs) yielded p-values of effectively zero, indicating highly significant differences. Even under mid-speed conditions (10–50 km/h), statistical significance was found when the shutter speed was 250 μs or 500 μs (p = 0.0030–0.0072). However, under 0 km/h or short shutter speed conditions (50–100 μs), most cases did not exhibit statistical significance. These results suggest that illuminance change alone does not consistently affect BEW; however, under specific conditions (high speed combined with long exposure), it can statistically induce either suppression or worsening of motion blur.
Conversely, for MTF50, only 3 out of 20 conditions (15%) showed statistically significant differences at the p < 0.05 level. All of these significant results occurred under stationary conditions (0 km/h) at shutter speeds of 100 μs (p = 0.0014), 250 μs (p = 0.0003), and 500 μs (p = 0.0435). However, under moving conditions of 10 km/h or more, none of the shutter combinations showed statistical significance. For instance, under 10 km/h (250 μs), p = 0.3344, and under 50 km/h (250 μs), p = 0.5823, indicating that illuminance change had no statistically significant effect on image sharpness. These findings suggest that MTF50 was sensitive to illuminance changes only under certain conditions, primarily in stationary environments. Under more realistic operating conditions involving motion, the influence was minimal. Specifically, all statistically significant conditions occurred under stationary conditions with long exposures (≥100 μs), suggesting that illuminance changes may have affected optical factors, such as camera exposure compensation and illuminance uniformity.
Ultimately, the influence of illuminance change on MTF50 was statistically significant only under limited conditions, and thus cannot be generalized based on the results of this study. This implies that, rather than simply increasing illumination, a comprehensive design that quantitatively considers the interaction between speed, shutter, and illuminance is essential for optimizing image quality.
To supplement the finding that BEW and MTF50 showed no significant differences under many conditions in the statistical significance test, boxplots were generated for each metric to visually compare their distributions (Figure 15). This analysis used 600 measured BEW and MTF50 values obtained under 15,000 lx and 40,000 lx conditions. As presented in Table 6, each boxplot is based on the median, first quartile (Q1), third quartile (Q3), and interquartile range (IQR = Q3 − Q1). Outliers are defined as values smaller than Q1 − 1.5 × IQR or greater than Q3 + 1.5 × IQR, and are shown as individual points.
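The outlier rule described above is the standard Tukey fence; a minimal sketch follows.

```python
import numpy as np

def tukey_outliers(values: np.ndarray) -> np.ndarray:
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)
```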
The boxplot in Figure 15a shows BEW measurements from 1200 motion blur images, with values ranging from approximately 2.6 to 39.1 pixels. Across both illuminance conditions (15,000 lx and 40,000 lx), the means, medians, and interquartile ranges were similar. Specifically, the median BEW was 5.13 px under 15,000 lx and 5.10 px under 40,000 lx, showing virtually no difference. The IQRs were 3.94–10.17 px and 3.79–9.59 px, respectively, and the mean values were 9.48 px at 15,000 lx and 8.95 px at 40,000 lx. This aligns with the statistical significance test result (p = 0.291), indicating no significant effect. These findings suggest that increasing illuminance has minimal impact on BEW, and it is difficult to expect blur improvement solely through changes in lighting conditions.
However, large numbers of outliers were observed in the BEW distribution, primarily at a speed of 60–70 km/h combined with 250–500 μs shutter speeds. In these cases, BEW values sharply increased to over 30 pixels, indicating significant motion blur when high speed coincides with long exposure. The locations and number of outliers were similar across both lighting conditions, suggesting that illumination change contributes little to suppressing extreme blur artifacts.
In Figure 15b, the MTF50 values ranged between approximately 0.01 and 0.20 cycles/pixel. Similar to BEW, the medians and IQRs under the 15,000 lx and 40,000 lx conditions were nearly identical. The median MTF50 values were 0.1016 (15,000 lx) and 0.1005 (40,000 lx), while the IQRs were approximately 0.053–0.141. This visually confirms that increased illuminance did not significantly enhance overall image sharpness. The mean MTF50 values were 0.0960 (15,000 lx) and 0.0961 (40,000 lx), consistent with the previously conducted statistical significance test (p = 0.998).
From the boxplot analysis, the distributions of both BEW and MTF50 under different illuminance conditions were found to be highly similar. No substantial differences were observed in medians, interquartile ranges, or overall value ranges. Although a few extreme blur values appeared as outliers in the BEW boxplot—specifically under high-speed or long-exposure conditions—the overall distribution patterns were consistent, regardless of lighting conditions.

4.3. Field Validation

The results of the indoor experiment confirmed that MTF is an effective metric for quantitatively evaluating motion blur caused by physical movement. However, in actual tunnel environments, where the MTSS is operated, various factors such as uneven road surfaces and vertical vibrations during driving may generate complex motion blur in multiple directions. Because the indoor testing equipment cannot fully replicate such multidirectional blur, we conducted a field experiment using the MTSS in a real tunnel to validate the applicability of MTF.
The field testbed was Songhyeon Tunnel in Incheon, South Korea, a 400 m long, three-lane one-way tunnel (Figure 16). For this validation, we used MTSS equipment fitted with a 4 K (4096 × 2) line scan camera, as shown in Figure 17 [94]. To evaluate image quality, two SFRreg test charts from Imatest [95] were attached to the tunnel’s concrete lining (Figure 18). Figure 19 presents the images captured at 20 km/h, 40 km/h, 60 km/h, and 80 km/h, with a resolution of 1 mm/pixel. The illuminance measured from a distance of 3 m during shooting was approximately 15,000 lx, and the camera exposure was set to 50 kHz, with two sets of images captured per condition.
The images in Figure 19 were taken at different vehicle speeds using the MTSS. However, visual inspection alone does not distinguish quality differences by speed. The PSNR and SSIM metrics analyzed in previous sections are FR-IQA methods, which require a static reference image for comparison against motion-blurred images. However, because the MTSS is a line scan camera-based system that inherently captures images in motion, obtaining a static reference image is not feasible. This highlights a fundamental limitation of applying FR-IQA techniques to MTSS images captured in actual tunnel environments.
Therefore, this study quantitatively analyzed the effect of motion blur on image quality in tunnel inspection settings by measuring horizontal and vertical BEW and MTF50 values at four different speeds: 20, 40, 60, and 80 km/h. Two SFRreg test charts affixed to the tunnel wall were used for the analysis. As shown in Figure 20, two ROIs were designated on each chart—one for horizontal and one for vertical MTF measurements. Eight samples were obtained for each speed condition and used for analysis.
As shown in Table 7 and Figure 21, the horizontal BEW gradually increased with vehicle speed, from 2.30 pixels at 20 km/h to 3.38 pixels at 80 km/h. In contrast, the horizontal MTF50 decreased from 0.228 cycles/pixel to 0.168 cycles/pixel, indicating a degradation in image sharpness owing to motion blur.
In the vertical direction, BEW values were consistently lower and MTF50 values were higher than those in the horizontal direction across all speed conditions. For instance, at 80 km/h, the vertical BEW was 2.19 pixels, approximately 35% lower than the horizontal value (3.38 pixels), while the vertical MTF50 was 0.234 cycles/pixel, significantly higher than the horizontal value (0.168 cycles/pixel). This directional difference suggests that motion blur primarily occurs in the horizontal direction, corresponding to the direction of travel.
Additionally, the standard deviation of the horizontal BEW increased with speed, peaking at ±1.02 pixels at 80 km/h. This implies that, at higher speeds, factors such as mechanical vibrations, exposure instability, or surface irregularities on the tunnel wall may reduce consistency in image quality. These findings validate BEW and MTF50 as effective indicators for quantitatively assessing motion blur under high-speed conditions and show their potential for further characterizing the spatial properties of blur in tunnel imagery.

5. Discussion

This study quantitatively analyzed the impact of motion blur—caused by high-speed driving in an MTSS—on image quality and validated the effectiveness of two evaluation metrics: MTF50 and BEW. To verify both the theoretical soundness and practical applicability of the proposed quality assessment framework, experiments were conducted in an indoor setting designed according to the ISO 12233 international standard and complemented by field tests in an actual tunnel environment.
In the indoor experiments, BEW increased and MTF50 decreased with speed, corresponding to a visually perceptible increase in motion blur. Although faster shutter speeds helped to reduce blur to some extent, they also led to underexposed and darker images due to reduced exposure time. To mitigate this, illumination was increased to 40,000 lx; however, in some cases, excessive lighting degraded sharpness across the image, suggesting that simply increasing illumination does not always improve image quality. Instead, an optimal balance between illumination and shutter speed is required.
Two-way ANOVA analysis confirmed that both BEW and MTF50 were significantly affected by vehicle speed, shutter speed, and their interaction, indicating that image quality is determined not by any single factor, but by their combined effect. BEW was particularly sensitive to long exposure times at high speeds, whereas MTF50 showed significant changes only under stationary conditions.
In the field validation at 80 km/h, the horizontal BEW measured 3.38 px, compared with 2.19 px in the vertical direction, and the horizontal MTF50 was also lower than that in the vertical direction. These results quantitatively demonstrate that image degradation is more pronounced along the horizontal axis—the direction of travel in MTSS. Additionally, the increasing standard deviation of horizontal BEW at higher speeds suggests that mechanical vibrations, road surface conditions, and lighting imbalance may compromise consistency in image quality. These findings highlight the significance of BEW and MTF50 as effective metrics for quantifying motion blur and characterizing its directional behavior in tunnel imagery.
A limitation of this study is the inability of the indoor test setup to fully simulate vertical vibration or multi-directional blur. However, because most experiments were conducted at short shutter speeds (down to 50 μs), vertical shaking likely had minimal effect on image quality during actual MTSS operation. This assumption was supported by the field results, where the vertical axis consistently retained higher sharpness than the horizontal axis. Thus, while vertical blur was not fully replicated, the focus on horizontal blur was sufficiently validated.
Moreover, recent studies have repeatedly shown that image quality significantly affects CNN-based crack detection. Models such as U-Net and Faster R-CNN are highly sensitive to input sharpness, contrast, and noise levels. Blurred or poorly lit images can result in missed detections or false positives, as cracks may not be distinguishable from the background.
To address this, attention has recently shifted toward NR-IQA methods that evaluate quality without a reference image. Metrics such as the blind/referenceless image spatial quality evaluator (BRISQUE) [96], naturalness image quality evaluator (NIQE) [97], perception-based image quality evaluator (PIQE) [98], and cumulative probability of blur detection (CPBD) [99] can quantify blur, noise, and contrast degradation. Numerous studies have reported that filtering out low-quality images using these metrics can enhance both model accuracy and computational efficiency. Specifically, datasets filtered by BRISQUE showed enhancements not only in accuracy and F1 score, but also in training efficiency [100].
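As an illustration of such quality-based filtering, the following sketch uses the variance of the Laplacian, a simple no-reference sharpness proxy, as a stand-in for the cited metrics; the threshold is illustrative, not a calibrated constant.

```python
import cv2
import numpy as np

def sharpness_score(image_gray: np.ndarray) -> float:
    """Variance of the Laplacian: a basic no-reference sharpness proxy
    (BRISQUE, NIQE, or PIQE would be used in practice)."""
    return float(cv2.Laplacian(image_gray, cv2.CV_64F).var())

def filter_dataset(images: list, threshold: float = 100.0) -> list:
    """Keep only images whose sharpness exceeds an empirical threshold,
    discarding heavily blurred frames before CNN training."""
    return [img for img in images if sharpness_score(img) >= threshold]
```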
Such IQA-based approaches are evolving beyond simple post-processing into preemptive quality control strategies that enhance dataset reliability and CNN training efficiency. In high-speed environments such as MTSS, attaching reference targets such as ISO 12233 charts is impractical, and FR-IQA methods such as PSNR or SSIM are infeasible owing to the lack of ground-truth images. Thus, NR-IQA offers a practical and scalable solution for real-time quality assessment and integration into automated tunnel inspection systems.
Future research should focus on applying various NR-IQA metrics to real tunnel environments and quantifying how CNN crack detection performance varies with image sets filtered based on those metrics. This could lead to an integrated framework linking IQA, dataset management, and model performance improvement.

6. Conclusions

This study quantitatively analyzed the impact of motion blur generated during high-speed driving on the image quality of an MTSS and validated the applicability of two objective evaluation metrics: MTF50 and BEW. Using an ISO 12233-based indoor experimental setup along with data collected in an actual tunnel environment, this study demonstrated that these metrics serve as practical tools for effectively assessing image degradation during high-speed movement.
The experimental results revealed that, as driving speed increased, horizontal BEW increased while MTF50 decreased, quantitatively confirming that horizontal motion blur is the primary factor contributing to image degradation owing to the directional characteristics of MTSS travel. Conversely, the vertical direction exhibited lower BEW and higher MTF50 values, indicating that vertical blur has a relatively minor effect. The effects of illumination and shutter speed combinations on image quality also varied—excessive lighting sometimes reduced image sharpness. This finding emphasizes that higher illumination alone does not guarantee improved quality and highlights the need for an optimized balance between illumination and shutter speed.
Additionally, BEW and MTF50 have been demonstrated as reliable metrics that quantitatively represent blur extent and sharpness, respectively, and are applicable for IQA, even under complex operating conditions. Notably, this study provides practical contributions by exploring the directional characteristics of motion blur through horizontal–vertical comparisons using BEW and MTF50 in real tunnel environments, a gap previously unaddressed in the literature.
These findings can be utilized to optimize camera settings in MTSS, establish image quality standards based on driving conditions, and develop real-time image quality monitoring systems. They offer a technical foundation for securing image quality in high-speed data acquisition environments.
Future research should experimentally simulate more sophisticated multi-directional motion blur conditions, such as vertical vibration and rotational blur. It should also incorporate NR-IQA methods to enable automated image filtering and integrate with CNN-based crack detection, establishing a comprehensive intelligent image quality management framework. Such advancements will enhance the reliability and precision of automated tunnel inspection systems operating under high-speed conditions.

Author Contributions

Conceptualization, C.L.; Methodology, C.L., D.K. (Donggyou Kim), and D.K. (Dongku Kim); Formal analysis, C.L.; Resources, D.K. (Donggyou Kim); Writing—original draft, C.L.; Writing—review and editing, D.K. (Donggyou Kim) and D.K. (Dongku Kim); Supervision, D.K. (Donggyou Kim); Project administration, C.L. and D.K. (Donggyou Kim). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Korea Agency for Infrastructure Technology Advancement under Grant RS-2022-00142566.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Acknowledgments

Research for this paper was conducted under the Development of Advanced Management Technology (Total Care) for infrastructure (project no. RS-2022-00142566) funded by the Korea Agency for Infrastructure Technology Advancement.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Montero, R.; Victores, J.G.; Martínez, S.; Jardón, A.; Balaguer, C. Past, present and future of robotic tunnel inspection. Autom. Constr. 2015, 59, 99–112. [Google Scholar] [CrossRef]
  2. NTSB. Ceiling Collapse in the Interstate 90 Connector Tunnel; National Transportation Safety Board: Boston, MA, USA, 2006. [Google Scholar]
  3. Kawahara, S.; Doi, H.; Shirato, M.; Kajifusa, N.; Kutsukake, T. Investigation of the tunnel ceiling collapse in the central expressway in Japan; TRB Paper Manuscript 14–2559. In Proceedings of the Transportation Research Board 93rd Annual Meeting, Washington, DC, USA, 12–16 January 2014. [Google Scholar]
  4. Allaix, D.L.; Vliet, A.B. Existing standardization on monitoring, safety assessment and maintenance of bridges and tunnels. Ce/Pap. 2023, 6, 498–504. [Google Scholar] [CrossRef]
  5. Huang, Z.; Zhang, C.L.; Fu, H.L.; Ma, S.K.; Fan, X.D. Machine inspection equipment for tunnels: A review. J. Highw. Transp. Res. Dev. 2021, 15, 40–53. [Google Scholar] [CrossRef]
  6. Balaguer, C.; Montero, R.; Victores, J.G.; Martínez, S.; Jardón, A. Towards fully automated tunnel inspection: A survey and future trends. In Proceedings of the 31st International Symposium on Automation and Robotics in Construction and Mining (ISARC), Sydney, Australia, 9–11 July 2014; pp. 19–33. [Google Scholar] [CrossRef]
  7. Spencer, B.F.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
  8. Ye, X.W.; Jin, T.; Yun, C.B. A review on deep learning-based structural health monitoring of civil infrastructures. Smart Struct. Syst. 2019, 24, 567–585. [Google Scholar] [CrossRef]
  9. Guo, J.; Liu, P.; Xiao, B.; Deng, L.; Wang, Q. Surface defect detection of civil structures using images: Review from data perspective. Autom. Constr. 2024, 158, 105186. [Google Scholar] [CrossRef]
  10. Sony, S.; Dunphy, K.; Sadhu, A.; Capretz, M. A systematic review of convolutional neural network-based structural condition assessment techniques. Eng. Struct. 2021, 226, 111347. [Google Scholar] [CrossRef]
  11. Sankarasrinivasan, S.; Balasubramanian, E.; Karthik, K.; Chandrasekar, U.; Gupta, R. Health monitoring of civil structures with integrated UAV and image processing system. Procedia Comput. Sci. 2015, 54, 508–515. [Google Scholar] [CrossRef]
  12. Landstrom, A.; Thurley, M.J. Morphology-based crack detection for steel slabs. IEEE J. Sel. Top. Signal Process. 2012, 7, 866–875. [Google Scholar] [CrossRef]
  13. Giakoumis, I.; Nikolaidis, N.; Pitas, I. Digital image processing techniques for the detection and removal of cracks in digitized paintings. IEEE Trans. Image Process. 2006, 15, 178–188. [Google Scholar] [CrossRef]
  14. Ranjan, P.; Chandra, U. A novel technique for wall crack detection using image fusion. In Proceedings of the International Conference on Computing, Communication and Informatics, Coimbatore, India, 4–6 January 2013; pp. 1–6. [Google Scholar]
  15. Cornelis, M.; Coscarón, M.C. The Nabidae (Insecta, Hemiptera, Heteroptera) of Argentina. Zookeys 2013, 333, 1–30. [Google Scholar] [CrossRef] [PubMed]
  16. Surace, C.; Ruotolo, R. Crack detection of a beam using the wavelet transform. In Proceedings of the International Symposium on Optics, Imaging, and Instrumentation, San Diego, CA, USA, 24–25 July 1994; p. 1141. [Google Scholar]
  17. Yamaguchi, T.; Nakamura, S.; Saegusa, R.; Hashimoto, S. Image-based crack detection for real concrete surfaces. IEEJ Trans. Electr. Electron. Eng. 2008, 3, 128–135. [Google Scholar] [CrossRef]
  18. Adhikari, R.S.; Moselhi, O.; Bagchi, A. Image-based retrieval of concrete crack properties for bridge inspection. Autom. Constr. 2014, 39, 180–194. [Google Scholar] [CrossRef]
  19. Nguyen, H.N.; Kam, T.Y.; Cheng, P.Y. An automatic approach for accurate edge detection of concrete crack utilizing 2D geometric features of crack. J. Sign. Process. Syst. 2014, 77, 221–240. [Google Scholar] [CrossRef]
  20. Xiang, C.; Wang, W.; Deng, L.; Shi, P.; Kong, X. Crack detection algorithm for concrete structures based on super-resolution reconstruction and segmentation network. Autom. Constr. 2022, 140, 104346. [Google Scholar] [CrossRef]
  21. Yao, Y.; Tung, S.E.; Glisic, B. Crack detection and characterization techniques—An overview. Struct. Control Health Monit. 2014, 21, 1387–1413. [Google Scholar] [CrossRef]
  22. Alidoost, F.; Austen, G.; Hahn, M. A multi-camera mobile system for tunnel inspection. In iCity: Transformative Research for the Livable, Intelligent, and Sustainable City; Coors, V., Pietruschka, D., Zeitler, B., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 211–224. [Google Scholar] [CrossRef]
  23. Yu, Y.; Samali, B.; Rashidi, M.; Mohammadi, M.; Nguyen, T.N.; Zhang, G. Vision-based concrete crack detection using a hybrid framework considering noise effect. J. Build. Eng. 2022, 61, 105246. [Google Scholar] [CrossRef]
  24. Ni, F.; Zhang, J.; Noori, M.N. Deep learning for data anomaly detection and data compression of a long-span suspension bridge. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 685–700. [Google Scholar] [CrossRef]
  25. Ukai, M.; Miyamoto, T.; Sasama, H. Development of inspection system of railway facilities using continuous scan image. WIT Trans. Built Environ. 1996, 20, 61–70. [Google Scholar]
  26. Sasama, H.; Ukai, M.; Ohta, M.; Miyamoto, T. Inspection system for railway facilities using a continuously scanned image. Electr. Eng. Japan 1998, 125, 52–64. [Google Scholar] [CrossRef]
  27. Ukai, M. Advanced inspection system of tunnel wall deformation using image processing. Q. Rep. RTRI 2007, 48, 94–98. [Google Scholar] [CrossRef]
  28. Yu, S.N.; Jang, J.H.; Han, C.S. Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel. Autom. Constr. 2007, 16, 255–261. [Google Scholar] [CrossRef]
  29. Lee, S.Y.; Lee, S.H.; Shin, D.I.; Son, Y.K.; Han, C.S. Development of an inspection system for cracks in a concrete tunnel lining. Can. J. Civ. Eng. 2007, 34, 966–975. [Google Scholar] [CrossRef]
  30. Zhang, W.; Zhang, Z.; Qi, D.; Liu, Y. Automatic crack detection and classification method for subway tunnel safety monitoring. Sensors 2014, 14, 19307–19328. [Google Scholar] [CrossRef]
  31. Huang, H.; Sun, Y.; Xue, Y.; Wang, F. Inspection equipment study for subway tunnel defects by grey-scale image processing. Adv. Eng. Inform. 2017, 32, 188–201. [Google Scholar] [CrossRef]
32. Gong, Q.; Zhu, L.; Wang, Y.; Yu, Z. Automatic subway tunnel crack detection system based on line scan camera. Struct. Control Health Monit. 2021, 28, e2776. [Google Scholar] [CrossRef]
  33. Qin, S.; Qi, T.; Lei, B.; Li, Z. Rapid and automatic image acquisition system for structural surface defects of high-speed rail tunnels. KSCE J. Civ. Eng. 2024, 28, 967–989. [Google Scholar] [CrossRef]
  34. Zhan, D.; Yu, L.; Xiao, J.; Chen, T. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels. Sensors 2015, 15, 8664–8684. [Google Scholar] [CrossRef]
  35. Xiao, L.; Ying-jie, D.; Chun-ming, X.; Bo, L.; Yang, L. Design and implement of vehicle-based experiment prototype for expressway tunnel intelligent detection. In Proceedings of the 3rd International Conference on Robotics, Control and Automation, Chengdu, China, 11–13 August 2018; pp. 78–81. [Google Scholar] [CrossRef]
  36. Jiang, Y.; Zhang, X.; Taniguchi, T. Quantitative condition inspection and assessment of tunnel lining. Autom. Constr. 2019, 102, 258–269. [Google Scholar] [CrossRef]
  37. Gavilán, M.; Sánchez, F.; Ramos, J.A.; Marcos, O. Mobile inspection system for high-resolution assessment of tunnels. In Proceedings of the 6th International Conference on Structural Health Monitoring of Intelligent Infrastructure, Hong Kong, China, 9–11 December 2013. [Google Scholar]
  38. Yasuda, T.; Yamamoto, H.; Shigeta, Y. Tunnel inspection system by using high-speed mobile 3D survey vehicle: MIMM-R. J. Robot. Soc. Japan 2016, 34, 589–590. [Google Scholar] [CrossRef]
  39. Yasuda, T.; Yamamoto, H.; Enomoto, M.; Nitta, Y. Smart tunnel inspection and assessment using mobile inspection vehicle, non-contact radar and AI. From demonstration to practical use to new stage of construction robot. In Proceedings of the 37th International Symposium on Automation and Robotics in Construction 2020 (ISARC 2020), Kitakyushu, Japan, 27–28 October 2020; pp. 1373–1379. [Google Scholar] [CrossRef]
  40. MMSD. Available online: https://www.mitsubishielectric.co.jp/mmsd/ (accessed on 10 March 2024).
  41. Tunnel Catcher 3. Available online: https://mestrc.co.jp/radar/ (accessed on 10 March 2024).
  42. Tunnel Tracer. Available online: https://www.chugai-tec.co.jp/business/structure_material_investigation/ (accessed on 10 March 2024).
  43. GT-8K. Available online: https://www.aeroasahi.co.jp/company/fortune/157/ (accessed on 10 March 2024).
  44. Wang, H.; Wang, Q.; Zhai, J.; Yuan, D.; Zhang, W.; Xie, X.; Zhou, B.; Cai, J.; Lei, Y. Design of fast acquisition system and analysis of geometric feature for highway tunnel lining cracks based on machine vision. Appl. Sci. 2022, 12, 2516. [Google Scholar] [CrossRef]
45. Xue, Y.; Li, Y. A fast detection method via region-based fully convolutional neural networks for shield tunnel lining defects. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 638–654. [Google Scholar] [CrossRef]
  46. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
47. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25, pp. 1097–1105. [Google Scholar]
  48. Kim, B.; Cho, S. Automated vision-based detection of cracks on concrete surfaces using a deep learning technique. Sensors 2018, 18, 3452. [Google Scholar] [CrossRef] [PubMed]
  49. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 818–833. [Google Scholar] [CrossRef]
  50. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  51. Huang, H.; Li, Q.; Zhang, D. Deep learning-based image recognition for crack and leakage defects of metro shield tunnel. Tunn. Undergr. Space Technol. 2018, 77, 166–176. [Google Scholar] [CrossRef]
  52. Kamdi, S.; Krishna, R.K. Image segmentation and region growing algorithm. Int. J. Comput. Technol. Electr. Eng. 2012, 2, 103–107. [Google Scholar]
  53. Chan, F.Y.; Lam, F.K.; Zhu, H. Adaptive thresholding by variational method. IEEE Trans. Image Process. 1998, 7, 468–473. [Google Scholar] [CrossRef]
  54. Song, Q.; Wu, Y.; Xin, X.; Yang, L.; Yang, M.; Chen, H.; Liu, C.; Hu, M.; Chai, X.; Li, J. Real-time tunnel crack analysis system via deep learning. IEEE Access 2019, 7, 64186–64197. [Google Scholar] [CrossRef]
  55. Li, D.; Xie, Q.; Gong, X.; Yu, Z.; Xu, J.; Sun, Y.; Wang, J. Automatic defect detection of metro tunnel surfaces using a vision-based inspection system. Adv. Eng. Inform. 2021, 47, 101206. [Google Scholar] [CrossRef]
  56. Bae, H.; Jang, K.; An, K. Deep super resolution crack network (SrcNet) for improving computer vision–based automated crack detectability in in situ bridges. Struct. Health Monit. 2021, 20, 1428–1442. [Google Scholar] [CrossRef]
  57. Liu, Y.; Yeoh, J.K.W.; Chua, D.K.H. Deep learning-based enhancement of motion blurred UAV concrete crack images. J. Comput. Civ. Eng. 2020, 34, 04020028. [Google Scholar] [CrossRef]
  58. Sorel, M.; Flusser, J. Space-variant restoration of images degraded by camera motion blur. IEEE Trans. Image Process. 2008, 17, 105–116. [Google Scholar] [CrossRef] [PubMed]
  59. Paramanand, C.; Rajagopalan, A.N. Shape from sharp and motion-blurred image pair. Int. J. Comput. Vis. 2014, 107, 272–292. [Google Scholar] [CrossRef]
  60. Abdullah-Al-Mamun, M.; Tyagi, V.; Zhao, H. A new full-reference image quality metric for motion blur profile characterization. IEEE Access 2021, 9, 156361–156371. [Google Scholar] [CrossRef]
  61. Galoogahi, H.K.; Fagg, A.; Huang, C.; Ramanan, D.; Lucey, S. Need for speed: A benchmark for higher frame rate object tracking. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1134–1143. [Google Scholar] [CrossRef]
  62. Su, S.; Delbracio, M.; Wang, J.; Sapiro, G.; Heidrich, W.; Wang, O. Deep video deblurring for hand-held cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 237–246. [Google Scholar] [CrossRef]
  63. Nah, S.; Kim, T.H.; Lee, K.M. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 257–265. [Google Scholar] [CrossRef]
64. Nah, S.; Baik, S.; Hong, S.; Moon, G.; Son, S.; Timofte, R.; Lee, K.M. NTIRE 2019 challenge on video deblurring and super-resolution: Dataset and study. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 1996–2005. [Google Scholar] [CrossRef]
  65. Shen, Z.; Wang, W.; Lu, X.; Shen, J.; Ling, H.; Xu, T.; Shao, L. Human-aware motion deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5571–5580. [Google Scholar] [CrossRef]
  66. Rim, J.; Lee, H.; Won, J.; Cho, S. Real-world blur dataset for learning and benchmarking deblurring algorithms. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 184–201. [Google Scholar] [CrossRef]
  67. Jiang, H.; Sun, D.; Jampani, V.; Yang, M.H.; Learned-Miller, E.; Kautz, J. Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9000–9008. [Google Scholar] [CrossRef]
  68. De, K.D.; Masilamani, V. Image sharpness measure for blurred images in frequency domain. Procedia Eng. 2013, 64, 149–158. [Google Scholar] [CrossRef]
  69. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  70. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef]
  71. Salomon, D.; Motta, G. Handbook of Data Compression; Springer: New York, NY, USA, 2010. [Google Scholar] [CrossRef]
  72. Horé, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar] [CrossRef]
  73. Kamble, V.; Bhurchandi, K.M. No-reference image quality assessment algorithms: A survey. Optik 2015, 126, 1090–1097. [Google Scholar] [CrossRef]
  74. Dost, S.; Saud, F.; Shabbir, M.; Khan, M.G.; Shahid, M.; Lovstrom, B. Reduced reference image and video quality assessments: Review of methods. EURASIP J. Image Video Process. 2022, 2022, 1. [Google Scholar] [CrossRef]
  75. Sheikh, H. Live Image Quality Assessment Database Release 2. Available online: http://live.ece.utexas.edu/research/quality (accessed on 10 March 2024).
  76. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008-a database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  77. Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77. [Google Scholar] [CrossRef]
  78. Corchs, S.; Gasparini, F.; Schettini, R. No reference image quality classification for JPEG-distorted images. Digit. Signal Process. 2014, 30, 86–100. [Google Scholar] [CrossRef]
  79. Imatest. Available online: https://www.imatest.com/ (accessed on 10 March 2024).
  80. Choi, S.; Jun, H.; Shin, S.; Chung, W. Evaluating accuracy of algorithms providing subsurface properties using full-reference image quality assessment. Geophys. Geophys. Explor. 2021, 24, 6–19. [Google Scholar]
81. Dey, R.; Bhattacharjee, D.; Krejcar, O. No-reference image quality assessment using meta-learning. In COMSYS 2023, Volume 2, Proceedings of the International Conference on Frontiers of Computer Science and Technology, Himachal Pradesh, India, 16–17 October 2023; Sarkar, R., Pal, S., Basu, S., Plewczynski, D., Bhattacharjee, D., Eds.; Springer Nature: Singapore, 2023; pp. 137–144. [Google Scholar] [CrossRef]
  82. Bae, S.H.; Kim, M. Elaborate image quality assessment with a novel luminance adaptation effect model. J. Broadcast. Eng. 2015, 20, 818–826. [Google Scholar] [CrossRef]
  83. Dinh, H.; Wang, Q.; Tu, F.; Frymire, B.; Mu, B. Evaluation of motion blur image quality in video frame interpolation. Electron. Imaging 2023, 35, 262–265. [Google Scholar] [CrossRef]
  84. ISO 12233:2023; Photography: Electronic Still Picture Imaging—Resolution and Spatial Frequency Responses. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/obp/ui/#iso:std:iso:12233:ed-4:v1:en (accessed on 10 March 2024).
85. Imatest—ISO 12233:2017 Test Charts. Available online: http://www.imatest.com/solutions/iso-12233/ (accessed on 10 March 2024).
  86. Masaoka, K. Accuracy and precision of edge-based modulation transfer function measurement for sampled imaging systems. IEEE Access 2018, 6, 41079–41086. [Google Scholar] [CrossRef]
  87. Imatest-Sharpness. 2023. Available online: https://www.imatest.com/support/docs/23-1/sharpness/ (accessed on 10 March 2024).
  88. Dugonik, B.; Dugonik, A.; Marovt, M.; Golob, M. Image quality assessment of digital image capturing devices for melanoma detection. Appl. Sci. 2020, 10, 2876. [Google Scholar] [CrossRef]
89. Artmann, U. Image quality evaluation using moving targets. In Multimedia Content and Mobile Devices; SPIE: Bellingham, WA, USA, 2013; Volume 8667, pp. 398–409. [Google Scholar] [CrossRef]
90. Koren, N. The Imatest program: Comparing cameras with different amounts of sharpening. In Digital Photography II; SPIE: Bellingham, WA, USA, 2006; Volume 6069, pp. 195–203. [Google Scholar] [CrossRef]
  91. Imatest—MTF Curves and Image Appearance. Available online: https://www.imatest.com/docs/MTF_appearance/ (accessed on 10 March 2024).
  92. Luo, L.; Yurdakul, C.; Feng, K.; Seo, D.E.; Tu, F.; Mu, B. Temporal MTF evaluation of slow-motion mode in mobile phones. J. Electron. Imaging 2022, 34, 1–4. [Google Scholar] [CrossRef]
  93. Telleen, J.; Sullivan, A.; Yee, J.; Wang, O.; Gunawardane, P.; Collins, I.; Davis, J. Synthetic shutter speed imaging. Comput. Graph. Forum 2007, 26, 591–598. [Google Scholar] [CrossRef]
  94. Lee, G.P.; Lim, H.J.; Kim, J.H. Availability evaluation of automatic inspection equipment using line scan camera for concrete lining. J. Korean Tunn. Undergr. Space Assoc. 2020, 22, 643–653. [Google Scholar]
  95. Imatest-SFRreg Test Chart. 2024. Available online: https://www.imatest.com/product/sfrreg-test-chart/ (accessed on 10 March 2024).
  96. Narvekar, N.D.; Karam, L.J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Trans. Image Process. 2011, 20, 2678–2683. [Google Scholar] [CrossRef] [PubMed]
  97. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  98. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  99. Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6. [Google Scholar] [CrossRef]
100. Pennada, S.; Perry, M.; McAlorum, J.; Dow, H.; Dobie, G. Threshold-based BRISQUE-assisted deep learning for enhancing crack detection in concrete structures. J. Imaging 2023, 9, 218. [Google Scholar] [CrossRef]
Figure 1. Enhanced eSFR ISO test chart: 3:2 aspect ratio, 6 added squares on sides, 16 added color patches, and several added wedge patterns.
Figure 2. Illustration of the slanted edge-based modulation transfer function (MTF) estimation process: (a) selection of a region of interest (ROI), (b) normalized edge spread function, and (c) estimated MTF.
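To make the slanted-edge pipeline in Figure 2 concrete, the sketch below differentiates an edge spread function (ESF) into a line spread function (LSF) and takes its Fourier magnitude to obtain the MTF, from which MTF50 is read off. This is a minimal illustration using a synthetic Gaussian-blurred edge; the study's actual measurements follow ISO 12233 via Imatest, and all function names here are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def mtf_from_esf(esf, samples_per_px):
    """MTF from an oversampled edge spread function (ESF).
    Returns spatial frequencies (cycles/pixel) and the DC-normalized MTF."""
    lsf = np.gradient(esf)                    # line spread function
    lsf = lsf * np.hanning(lsf.size)          # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / samples_per_px)
    return freqs, mtf

def mtf50(freqs, mtf):
    """Frequency (cy/px) where the MTF first falls to 0.5 (linear interpolation)."""
    i = int(np.argmax(mtf < 0.5))             # first bin below 0.5
    return np.interp(0.5, [mtf[i], mtf[i - 1]], [freqs[i], freqs[i - 1]])

# Synthetic blurred edge: Gaussian blur with sigma = 1.2 px, 16x oversampled
x = np.linspace(-8, 8, 256)                            # position in pixels
esf = 0.5 * (1.0 + erf(x / (np.sqrt(2) * 1.2)))
freqs, mtf = mtf_from_esf(esf, samples_per_px=16)
print(f"MTF50 ~ {mtf50(freqs, mtf):.3f} cy/px")        # ~0.156 for this sigma
```

For a Gaussian edge with sigma = 1.2 px, the analytic MTF50 is about 0.156 cy/px, which is in the same range as the static-panel values reported in Tables 2 and 3.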
Figure 3. Illustration of the 10–90% rise distance for blurred and sharp edges.
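The blurred edge width (BEW) in Figure 3 is the 10–90% rise distance of the same edge profile. A minimal sketch, assuming a monotonically rising ESF with flat dark and bright plateaus (the plateau estimation and array names are illustrative):

```python
import numpy as np
from scipy.special import erf

def bew_10_90(esf, px_per_sample):
    """Blurred edge width: distance between the 10% and 90% points of a
    normalized, monotonically rising edge spread function, in pixels."""
    esf = np.asarray(esf, dtype=float)
    lo, hi = esf[:8].mean(), esf[-8:].mean()       # dark/bright plateau levels
    norm = (esf - lo) / (hi - lo)
    x = np.arange(norm.size) * px_per_sample
    x10 = np.interp(0.1, norm, x)                  # requires a rising profile
    x90 = np.interp(0.9, norm, x)
    return x90 - x10

# Same synthetic Gaussian-blurred edge as above (sigma = 1.2 px, 16x oversampled)
x = np.linspace(-8, 8, 256)
esf = 0.5 * (1.0 + erf(x / (np.sqrt(2) * 1.2)))
print(f"BEW ~ {bew_10_90(esf, px_per_sample=1/16):.2f} px")   # ~2.56 * sigma
```

For this edge the result is about 3.1 px, comparable to the ~3.4 px static BEW in Tables 2 and 3.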
Figure 4. Illustration of the high-speed translational moving panel device.
Figure 5. Velocity control of the high-speed translational moving panel device.
Figure 6. Motion blur capture on a high-speed translational moving panel in a laboratory: (a) schematic of the capture setup and (b) eSFR ISO test chart on the panel.
Figure 7. Motion-blurred images captured from a high-speed translational moving panel device.
Figure 8. Motion-blurred images captured using the moving panel device at 15,000 lx.
Figure 9. Comparison of motion blur as shutter speed changes at 15,000 lx.
Figure 10. Image quality trends at 15,000 lx illuminance. (a) BEW increases with speed, particularly under longer shutter durations. (b) MTF50 decreases as speed increases, with sharper degradation observed at slower shutter speeds.
Figure 11. Motion-blurred images captured using the moving panel device at 40,000 lx.
Figure 12. Image quality trends at 40,000 lx illuminance. (a) BEW increases with speed, particularly under longer shutter durations. (b) MTF50 decreases as speed increases, with sharper degradation observed at slower shutter speeds.
Figure 13. Heatmap of BEW change rates (%) owing to illuminance increase from 15,000 lx to 40,000 lx across varying speeds and shutter durations.
Figure 14. Heatmap of MTF50 change rates (%) resulting from illuminance variation (15,000 lx → 40,000 lx) under different speed and exposure settings.
Figure 15. Boxplots showing the distribution of image quality metrics under two illuminance conditions (15,000 lx and 40,000 lx). (a) BEW indicates the extent of motion blur, and (b) MTF50 represents image sharpness.
Figure 16. View of the Songhyeon Tunnel testbed used for field validation.
Figure 17. MTSS equipped with 4K (4096 × 2 pixels) line-scan cameras used for image acquisition during field testing in the Songhyeon Tunnel.
Figure 18. SFRreg test charts used for evaluating motion blur and spatial resolution in tunnel environments; (a) SFRreg test chart, (b) installation of SFRreg test charts on the tunnel lining wall.
Figure 19. SFRreg test chart images acquired using the MTSS at various vehicle speeds. (a) 20 km/h, (b) 40 km/h, (c) 60 km/h, and (d) 80 km/h.
Figure 20. ROIs for horizontal (red) and vertical (blue) MTF analysis extracted from the SFRreg chart image.
Figure 21. Motion blur characteristics of tunnel inspection images using an SFRreg chart. (a) Variation in horizontal and vertical BEW by speed. (b) Variation in horizontal and vertical MTF50 by speed.
Table 1. Test condition for capturing motion blur using a moving panel and area scan camera.
| Panel Speed (km/h) | Shutter Speed (μs) | ISO | F-Number | FPS | Illuminance |
|---|---|---|---|---|---|
| 0, 10, 30, 50, 70 | 500 | 640 | 2.8 | 100 | 15,000 lx and 40,000 lx |
| | 250 | 1250 | 2.8 | 100 | |
| | 100 | 1600 | 2.8 | 100 | |
| | 50 | 1600 | 2.8 | 100 | |
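As a sanity check on these settings, the smear accumulated during one exposure is the panel speed times the shutter time; dividing by the object-space pixel size p (set by the optics and working distance, which Table 1 does not list) converts it to pixels. Worked for the worst case above:

```latex
b_{\mathrm{px}} = \frac{v\, t_{\mathrm{exp}}}{p}, \qquad
b_{\mathrm{obj}} = v\, t_{\mathrm{exp}}
  = 19.44\ \mathrm{m/s} \times 500\ \mu\mathrm{s}
  \approx 9.7\ \mathrm{mm} \quad (v = 70\ \mathrm{km/h})
```

Halving the exposure halves this smear, which is consistent with the near-proportional fall in BEW from 500 μs down to 50 μs in Tables 2 and 3 once the static edge width (~3.4 px) is subtracted.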
Table 2. Evaluation of motion-blurred images depending on the velocity of the moving panel and shutter speed at illuminance 15,000 lx.
| Shutter Speed | IQA | 0 km/h | 10 km/h | 30 km/h | 50 km/h | 70 km/h |
|---|---|---|---|---|---|---|
| 500 μs | BEW_mean (px) | 3.40 | 6.31 | 16.38 | 27.56 | 38.28 |
| | BEW_std (px) | ±0.38 | ±0.25 | ±0.09 | ±0.28 | ±0.31 |
| | MTF50_mean (cy/px) | 0.1566 | 0.0810 | 0.0375 | 0.0271 | 0.0227 |
| | MTF50_std (cy/px) | ±0.0155 | ±0.0208 | ±0.0324 | ±0.0352 | ±0.0364 |
| 250 μs | BEW_mean (px) | 3.61 | 4.68 | 8.78 | 13.86 | 18.99 |
| | BEW_std (px) | ±0.41 | ±0.46 | ±0.12 | ±0.11 | ±0.29 |
| | MTF50_mean (cy/px) | 0.1485 | 0.1118 | 0.0606 | 0.0423 | 0.0338 |
| | MTF50_std (cy/px) | ±0.0133 | ±0.0154 | ±0.0262 | ±0.0311 | ±0.0334 |
| 100 μs | BEW_mean (px) | 3.56 | 3.86 | 4.73 | 6.33 | 8.47 |
| | BEW_std (px) | ±0.33 | ±0.74 | ±0.33 | ±0.21 | ±0.21 |
| | MTF50_mean (cy/px) | 0.1505 | 0.1445 | 0.1069 | 0.0806 | 0.0626 |
| | MTF50_std (cy/px) | ±0.0114 | ±0.0253 | ±0.0153 | ±0.0209 | ±0.0257 |
| 50 μs | BEW_mean (px) | 3.37 | 3.66 | 3.99 | 4.52 | 5.32 |
| | BEW_std (px) | ±0.26 | ±0.70 | ±0.50 | ±0.34 | ±0.43 |
| | MTF50_mean (cy/px) | 0.1581 | 0.1524 | 0.1337 | 0.1140 | 0.0955 |
| | MTF50_std (cy/px) | ±0.0128 | ±0.0253 | ±0.0162 | ±0.0143 | ±0.0182 |
Table 3. Evaluation of motion-blurred images depending on the velocity of the moving panel and shutter speed at illuminance 40,000 lx.
| Shutter Speed | IQA | 0 km/h | 10 km/h | 30 km/h | 50 km/h | 70 km/h |
|---|---|---|---|---|---|---|
| 500 μs | BEW_mean (px) | 3.23 | 6.07 | 15.18 | 25.19 | 35.37 |
| | BEW_std (px) | ±0.54 | ±0.34 | ±0.12 | ±0.51 | ±0.61 |
| | MTF50_mean (cy/px) | 0.1692 | 0.0853 | 0.0403 | 0.0295 | 0.0155 |
| | MTF50_std (cy/px) | ±0.0293 | ±0.0199 | ±0.0317 | ±0.0346 | ±0.0005 |
| 250 μs | BEW_mean (px) | 3.52 | 4.39 | 8.29 | 12.83 | 17.44 |
| | BEW_std (px) | ±0.47 | ±0.35 | ±0.12 | ±0.18 | ±0.20 |
| | MTF50_mean (cy/px) | 0.1634 | 0.1172 | 0.0583 | 0.0374 | 0.0277 |
| | MTF50_std (cy/px) | ±0.0166 | ±0.0068 | ±0.0014 | ±0.0008 | ±0.0006 |
| 100 μs | BEW_mean (px) | 3.49 | 3.61 | 4.76 | 6.47 | 8.37 |
| | BEW_std (px) | ±0.28 | ±0.31 | ±0.19 | ±0.15 | ±0.12 |
| | MTF50_mean (cy/px) | 0.1594 | 0.1456 | 0.1041 | 0.0743 | 0.0567 |
| | MTF50_std (cy/px) | ±0.0092 | ±0.0070 | ±0.0038 | ±0.0014 | ±0.0007 |
| 50 μs | BEW_mean (px) | 3.42 | 3.71 | 3.95 | 4.43 | 5.30 |
| | BEW_std (px) | ±0.16 | ±0.35 | ±0.24 | ±0.15 | ±0.18 |
| | MTF50_mean (cy/px) | 0.1569 | 0.1432 | 0.1337 | 0.1135 | 0.0897 |
| | MTF50_std (cy/px) | ±0.0063 | ±0.0098 | ±0.0056 | ±0.0029 | ±0.0022 |
Table 4. Results of two-way ANOVA analyzing the effects of speed, shutter speed, and their interaction on image quality metrics (BEW and MTF50) under different illuminance levels (15,000 lx and 40,000 lx).
| Illuminance | Metric | Source of Variation | DF | Sum of Squares | F-Value | p-Value |
|---|---|---|---|---|---|---|
| 15,000 lx | BEW | Moving panel speed | 4 | 17,036.88 | 29,998.22 | <0.0001 |
| | | Shutter speed | 3 | 18,673.68 | 43,840.35 | <0.0001 |
| | | Interaction | 12 | 14,139.28 | 8298.73 | <0.0001 |
| | | Residual | 580 | 82.35 | - | <0.0001 |
| | MTF50 | Moving panel speed | 4 | 0.82 | 365.26 | <0.0001 |
| | | Shutter speed | 3 | 0.39 | 233.39 | <0.0001 |
| | | Interaction | 12 | 0.11 | 16.48 | <0.0001 |
| | | Residual | 580 | 0.32 | - | <0.0001 |
| 40,000 lx | BEW | Moving panel speed | 4 | 14,555.51 | 36,821.60 | <0.0001 |
| | | Shutter speed | 3 | 15,158.78 | 51,130.25 | <0.0001 |
| | | Interaction | 12 | 11,647.52 | 9821.72 | <0.0001 |
| | | Residual | 580 | 57.32 | - | <0.0001 |
| | MTF50 | Moving panel speed | 4 | 1.04 | 1265.27 | <0.0001 |
| | | Shutter speed | 3 | 0.32 | 523.77 | <0.0001 |
| | | Interaction | 12 | 0.15 | 59.73 | <0.0001 |
| | | Residual | 580 | 0.12 | - | <0.0001 |
Note: All p-values below 0.0001 are denoted as “<0.0001”. DF = degrees of freedom.
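The sketch below shows how a two-way ANOVA of this form can be computed; the toy data generator, column names, and the choice of Type II sums of squares are illustrative assumptions, not the study's raw data or exact procedure. Note that 5 panel speeds × 4 shutter speeds × 30 images per cell reproduces the residual degrees of freedom in Table 4 (600 − 20 = 580).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for speed in [0, 10, 30, 50, 70]:            # panel speed, km/h
    for shutter in [500, 250, 100, 50]:      # shutter time, microseconds
        # toy model: BEW grows with the speed-shutter product, plus noise
        bew = 3.4 + 1e-3 * speed * shutter + rng.normal(0.0, 0.3, size=30)
        rows.extend({"speed": speed, "shutter": shutter, "bew": b} for b in bew)
df = pd.DataFrame(rows)                       # 5 x 4 x 30 = 600 observations

model = ols("bew ~ C(speed) * C(shutter)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)       # Type II sums of squares
print(anova)  # rows: C(speed), C(shutter), C(speed):C(shutter), Residual
```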
Table 5. Statistical significance (p-values) of BEW and MTF50 for illuminance change (15,000 lx vs. 40,000 lx) across different speed and shutter conditions.
| Moving Panel Speed (km/h) | Shutter Speed (μs) | BEW p-Value | MTF50 p-Value |
|---|---|---|---|
| 0 | 50 | 0.3372 | 0.644 |
| | 100 | 0.3564 | 0.0014 (p < 0.05) |
| | 250 | 0.4386 | 0.0003 (p < 0.05) |
| | 500 | 0.1645 | 0.0435 (p < 0.05) |
| 10 | 50 | 0.7409 | 0.0715 |
| | 100 | 0.0944 | 0.8215 |
| | 250 | 0.0072 (p < 0.05) | 0.0903 |
| | 500 | 0.003 (p < 0.05) | 0.4239 |
| 30 | 50 | 0.7071 | 0.9949 |
| | 100 | 0.7594 | 0.3376 |
| | 250 | 0 (p < 0.05) | 0.6427 |
| | 500 | 0 (p < 0.05) | 0.733 |
| 50 | 50 | 0.1816 | 0.8499 |
| | 100 | 0.0029 (p < 0.05) | 0.1083 |
| | 250 | 0 (p < 0.05) | 0.394 |
| | 500 | 0 (p < 0.05) | 0.7915 |
| 70 | 50 | 0.7335 | 0.0965 |
| | 100 | 0.019 (p < 0.05) | 0.214 |
| | 250 | 0 (p < 0.05) | 0.3242 |
| | 500 | 0 (p < 0.05) | 0.2914 |
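Each cell of Table 5 compares the 15,000 lx and 40,000 lx measurements for one speed/shutter combination. A hedged sketch of one such comparison; the sample size, random seed, and use of Welch's correction are assumptions, with the means and standard deviations taken from the 70 km/h, 250 μs rows of Tables 2 and 3:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bew_15k = rng.normal(18.99, 0.29, size=30)     # 70 km/h, 250 us, 15,000 lx
bew_40k = rng.normal(17.44, 0.20, size=30)     # same cell at 40,000 lx

t_stat, p_value = stats.ttest_ind(bew_15k, bew_40k, equal_var=False)  # Welch
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 -> significant shift
```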
Table 6. Descriptive statistics of BEW and MTF50 under two illuminance conditions (15,000 lx and 40,000 lx).
| Metric | Illuminance | Q1 (25%) | Median (50%) | Q3 (75%) | Mean | Min | Max | Std. Dev. | N |
|---|---|---|---|---|---|---|---|---|---|
| BEW (px) | 15,000 lx | 3.94 | 5.13 | 10.17 | 9.48 | 2.65 | 39.14 | 9.13 | 600 |
| | 40,000 lx | 3.79 | 5.10 | 9.59 | 8.95 | 2.67 | 36.64 | 8.32 | 600 |
| MTF50 (cy/px) | 15,000 lx | 0.0534 | 0.1016 | 0.1411 | 0.0960 | 0.0130 | 0.1979 | 0.0524 | 600 |
| | 40,000 lx | 0.0556 | 0.1005 | 0.1410 | 0.0961 | 0.0145 | 0.2008 | 0.0521 | 600 |
Note: Q1 = 1st quartile, Q3 = 3rd quartile, Std. Dev. = standard deviation, N = number of samples. These values were calculated using raw measurement data from 600 images per illuminance level.
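A short sketch of how a summary like Table 6 can be reproduced from per-image measurements; the DataFrame layout and the toy right-skewed BEW distributions are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "illuminance": np.repeat(["15,000 lx", "40,000 lx"], 600),
    "bew": np.concatenate([
        2.65 + rng.gamma(1.5, 4.5, size=600),   # skewed, like the BEW data
        2.67 + rng.gamma(1.5, 4.2, size=600),
    ]),
})
summary = df.groupby("illuminance")["bew"].describe(percentiles=[0.25, 0.5, 0.75])
print(summary[["25%", "50%", "75%", "mean", "min", "max", "std", "count"]])
```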
Table 7. Mean and standard deviation of BEW and MTF50 measured in both horizontal and vertical directions for different tunnel scanning speeds (20–80 km/h).
| Direction | Speed (km/h) | Mean BEW (px) | Std. Dev. | Mean MTF50 (cy/px) | Std. Dev. |
|---|---|---|---|---|---|
| Horizontal | 20 | 2.30 | ±0.06 | 0.228 | ±0.009 |
| | 40 | 2.76 | ±0.40 | 0.191 | ±0.034 |
| | 60 | 2.94 | ±0.51 | 0.176 | ±0.030 |
| | 80 | 3.38 | ±1.02 | 0.168 | ±0.051 |
| Vertical | 20 | 1.42 | ±0.30 | 0.393 | ±0.094 |
| | 40 | 1.89 | ±0.45 | 0.274 | ±0.087 |
| | 60 | 1.91 | ±0.29 | 0.271 | ±0.046 |
| | 80 | 2.19 | ±0.48 | 0.234 | ±0.047 |