Article

DGMN-MISABO: A Physics-Informed Degradation and Optimization Framework for Realistic Synthetic Droplet Image Generation in Inkjet Printing

by Jiacheng Cai 1, Jiankui Chen 1,2,*, Wei Tang 2, Jinliang Wu 1, Jingcheng Ruan 1 and Zhouping Yin 1

1 The State Key Laboratory of Intelligent Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 Wuhan National Innovation Technology Optoelectronics Equipment Co., Ltd., Wuhan 430078, China
* Author to whom correspondence should be addressed.
Machines 2025, 13(8), 657; https://doi.org/10.3390/machines13080657
Submission received: 25 June 2025 / Revised: 23 July 2025 / Accepted: 25 July 2025 / Published: 27 July 2025
(This article belongs to the Section Advanced Manufacturing)

Abstract

The Online Droplet Inspection system plays a vital role in closed-loop control for OLED inkjet printing. However, generating realistic synthetic droplet images for reliable restoration and precise measurement of droplet parameters remains challenging due to the complex, multi-factor degradation inherent to microscale droplet imaging. To address this, we propose a physics-informed degradation model, Diffraction–Gaussian–Motion–Noise (DGMN), that integrates Fraunhofer diffraction, defocus blur, motion blur, and adaptive noise to replicate real-world degradation in droplet images. To optimize the multi-parameter configuration of DGMN, we introduce the MISABO (Multi-strategy Improved Subtraction-Average-Based Optimizer), which incorporates Sobol sequence initialization for search diversity, lens opposition-based learning (LensOBL) for enhanced accuracy, and dimension learning-based hunting (DLH) for balanced global–local optimization. Benchmark function evaluations demonstrate that MISABO achieves superior convergence speed and accuracy. When applied to generate synthetic droplet images based on real droplet images captured from a self-developed OLED inkjet printer, the proposed MISABO-optimized DGMN framework significantly improves realism, enhancing synthesis quality by 37.7% over traditional manually configured models. This work lays a solid foundation for generating high-quality synthetic data to support droplet image restoration and downstream inkjet printing processes.

1. Introduction

Organic light-emitting diodes (OLEDs) have emerged as a leading innovation in modern display technology, offering benefits including wide viewing angles, high contrast, and rapid response time, as well as mechanical advantages such as flexibility and rollability [1]. Inkjet printing [2] stands out as a high-potential technology for OLED manufacturing due to its cost-effectiveness, high resolution, adaptability to diverse materials, and strong potential for producing large-area panels [3,4]. However, a major barrier to large-scale production is the high defect rate, often caused by nozzle failures and variations in droplet volume, which lead to Mura defects in printed panels [5]. To achieve zero-defect manufacturing [6] in OLED inkjet printing, significant efforts have been dedicated to print planning [7], droplet volume measurement [8], forming control [9], droplet volume control by waveform [10], deposited droplet volume measurement [11], and other related aspects. Among these, precise measurement of in-flight droplets is crucial for improving yield rates and minimizing defects.
Specifically, as shown in Figure 1, closed-loop control is employed to achieve zero-defect inkjet printing. First, the nozzles on the printhead modules eject droplets according to the planned printing pattern and the corresponding driving waveform. The embedded Online Droplet Inspection System (ODIS) then inspects the in-flight droplets to assess the injection status, droplet volume, velocity, and ejection angle of all nozzles involved in the inkjet process [12]. The measured volume of each nozzle enables precise regulation of the inkjet driving waveform. In addition, ODIS identifies and disables abnormal nozzles based on measured droplet parameters, such as the droplet volume V_i, target volume V_0, and threshold θ. It also provides feedback for printhead waveform regulation and inkjet print replanning with the indices N_i of disabled nozzles. Together, these functions establish a vision-based closed-loop decision-making system for the zero-defect inkjet printing process [13,14].
Visual inspection and measurement approaches are widely deployed in the display fabrication industry due to their visibility, precision, and reliability [8]. Droplets regulated for OLED inkjet printing are typically 1–10 pL in volume, with diameters ranging from 12 to 26 µm and velocities of 3–8 m/s [10], as illustrated in Figure 2. However, online droplet imaging under industrial conditions faces multiple challenges due to limited space for optics, restricted light source power, and strict temperature control. High-magnification systems with a narrow depth of field (DOF) inherently introduce defocus blur [15], while low light power requires longer exposure, leading to motion blur. Increased magnification also amplifies diffraction effects, reducing resolution and introducing diffraction blur [16]. Figure 2c,d show that the blurred edges of the droplets exceed 30 pixels in width, indicating a complex degradation process affecting droplet detail. Therefore, advanced deep learning-based image restoration techniques, such as denoising [17], deblurring [16], and super-resolution [18], are essential for reconstructing droplet details, thereby enabling accurate image segmentation and precise measurement of droplet parameters. However, training such models requires a large volume of high-quality images, which are expensive and time-consuming to acquire. To address this limitation, generating realistic synthetic images for data augmentation and incorporating public datasets for training are crucial to improving restoration performance.
Although public datasets such as DIV2K [19], Set14 [20], and ImageNet [21] are widely used in image restoration research, they typically simulate degradation through limited, deterministic operations such as bicubic downsampling, blurring, or noise addition, introducing only mild distortions [22,23]. These degradation models fail to replicate the complex and severe degradation observed in microscale in-flight droplet images, which are influenced by multiple interacting distortion factors under unique imaging conditions. To narrow the domain gap between synthetic and real images, several advanced degradation models have been proposed. Some methods incorporate multiple predefined blur kernels, point spread functions (PSFs), or randomized noise to increase generalization ability [24,25,26,27], but they still fall short in real-world scenarios because their degradations are deterministically preset. Others adopt learning-based approaches that implicitly estimate degradation by learning representations between low- and high-quality image pairs. For instance, Li et al. [28] used iterative back-projection (IBP) to minimize the reconstruction error between sharp normal-light and low-light images. Generative Adversarial Network (GAN)-based methods also show great potential for generating realistic images. Fritsche et al. [29] designed DownSampleGAN (DSGAN) to generate LR images matching HR image characteristics without paired data, although it still requires high-quality HR images with the same noise distribution. Han et al. [30] generated multi-sequence brain magnetic resonance images from noise using two separate GANs.
However, such approaches are not aligned with our objective. GAN-based methods typically generate only realistic degraded images without providing the corresponding paired clean (ground truth) data required for supervised training. Moreover, many of these approaches rely on the availability of sharp high-quality reference images, which are not obtainable in actual droplet imaging due to physical limitations. To overcome this, it is essential to improve the realism of synthetic paired degraded images, ensuring that they accurately reflect the complex degradation patterns observed in real in-flight droplet imaging. This would enable more effective use of public datasets for training restoration models. Nevertheless, accurately modeling such complex, multi-parameter degradation remains a significant optimization challenge.
Metaheuristic methods are well-suited for high-dimensional optimization tasks like image degradation modeling, offering high-quality solutions and stable convergence in complex search spaces. A wide range of metaheuristic methods inspired by biological and physical processes, such as Genetic Algorithms (GAs) [31], Particle Swarm Optimization (PSO) [32,33], Gray Wolf Optimization (GWO) [34], and the Firefly Algorithm [35], have been successfully applied across engineering fields [36,37,38,39,40,41,42]. Particularly in the field of computer vision, metaheuristics have also been explored for tasks such as image segmentation [39], classification [43,44], and feature selection [45,46], demonstrating their adaptability to image-related optimization problems.
However, conventional metaheuristic algorithms still face challenges including limited agent diversity [47], premature convergence [48], and a poor exploration–exploitation balance [49], which limit their effectiveness in complex tasks like synthetic image generation. Moreover, due to the No Free Lunch (NFL) theorem [50], no single optimization algorithm performs optimally across all problem domains, motivating customized strategies that align with specific degradation modeling needs. Unlike the methods discussed above, which process an already obtained image, our approach is, to the best of our knowledge, the first metaheuristic-driven approach for generating paired in-flight droplet images that align with real-world degradation characteristics, thereby facilitating effective supervised training of restoration networks.
This study proposes a physics-informed Diffraction–Gaussian–Motion–Noise (DGMN) model for realistic droplet image degradation, integrating Fraunhofer diffraction, Gaussian mixtures, motion blur, and adaptive noise, specifically tailored to in-flight droplet imaging. To optimize DGMN, we introduce MISABO, an improved optimizer combining Sobol initialization, lens opposition-based learning (LensOBL), and dimension learning-based hunting (DLH) for enhanced global search and stable convergence. Validated on benchmark functions, MISABO effectively tunes DGMN parameters for realistic synthetic droplet image generation.
The main contributions of this research are summarized as follows.
(1) A novel physics-informed DGMN model is constructed by integrating the diffraction, defocus, motion, and adaptive noise degradation components, providing an accurate representation of real-world complex degradation in droplet imaging.
(2) A new composite approach, named MISABO, is introduced based on the basic SABO (subtraction-average-based optimizer) algorithm, incorporating multiple improvement strategies. Sobol sequence initialization provides broader coverage of the search space, the LensOBL strategy enhances the fitting accuracy, and the DLH strategy balances the global and local search capabilities.
(3) The proposed MISABO-optimized DGMN framework is applied to generate synthetic droplet images based on actual droplet images from our self-developed OLED inkjet printer, achieving more realistic images and improved convergence compared to competitive methods, with a 37.79% improvement in synthetic accuracy over the previous solution.
This article is organized as follows. Section 1 reviews existing work on image degradation models and metaheuristic methods. In Section 2, the DGMN model is introduced, and each component of the proposed MISABO is elaborated in detail. Section 3 presents the results on benchmark functions and on real droplet images captured from a self-developed OLED inkjet printer, along with comparative analyses. Section 4 concludes the paper.

2. Materials and Methods

In this section, the proposed physics-informed DGMN model and the MISABO algorithm are introduced. First, the mathematical modeling of realistic droplet image degradation is presented. Next, the standard SABO approach is given as a basis. Following this, the flowchart and each improvement strategy of the MISABO algorithm are explained in detail.

2.1. Physics-Informed DGMN Model of Droplet Image

A Basic Droplet Degradation Framework (BDDF) with manually configured parameters was introduced in our previous work [16]. However, motion blur, a significant degradation component, was neglected, particularly in the context of high-speed in-flight imaging within the field of view. To represent real-world droplet features more accurately, this study refines the droplet degradation framework by incorporating motion blur, represented through the application of a customized kernel to the image.
Compared to the conventional assumption of fixed additive Gaussian noise in [16], our study models the noise level σ as a function of pixel intensity, which more accurately reflects real-world imaging conditions. This approach captures signal-dependent noise characteristics, such as increased noise in brighter regions due to photon shot (Poisson) noise and reduced noise in darker areas, thus more realistically simulating sensor-related phenomena like non-uniform photoresponse, readout noise, and illumination-dependent degradation. Here, a straightforward linear intensity-dependent noise model is employed with a fixed maximum standard deviation σ_max = 0.031 and minimum σ_min = 0.009, as the imaging illumination conditions remain highly stable. The intensity-dependent noise model is defined as follows:
$$\sigma\left(I_m(x,y)\right) = \sigma_{\min} + \left(\sigma_{\max} - \sigma_{\min}\right) \cdot \frac{I_m(x,y) - \min(I_m)}{\max(I_m) - \min(I_m)} \tag{1}$$
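For illustration, a minimal NumPy sketch of this noise model is given below; the function name, the assumption that intensities lie in [0, 1], and the clipping step are ours, not part of the original implementation.

```python
import numpy as np

def intensity_dependent_noise(img, sigma_min=0.009, sigma_max=0.031, rng=None):
    """Add linear intensity-dependent Gaussian noise per Equation (1):
    brighter pixels receive a larger per-pixel standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    img = img.astype(np.float64)
    span = img.max() - img.min()
    norm = (img - img.min()) / span if span > 0 else np.zeros_like(img)
    sigma = sigma_min + (sigma_max - sigma_min) * norm  # per-pixel sigma map
    return np.clip(img + rng.normal(size=img.shape) * sigma, 0.0, 1.0)
```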
As illustrated in Figure 3, the ideal droplet image I_s is sequentially degraded by diffraction, defocus blur, motion blur, and adaptive noise, resulting in the degraded droplet image I_d. The entire DGMN model of the droplet image can be represented as follows:
$$I_d(x,y) = V \circledast \left[ \sum_{m=1}^{M} \alpha_m G_m \circledast D \circledast I_s(x,y) \right] + N\!\left(0, \sigma\left(I_m(x,y)\right)^2\right) \tag{2}$$
In Equation (2), ⊛ denotes the convolution operator, V is the motion blur kernel, G_m is the m-th Gaussian kernel among M mixed Gaussian kernels with corresponding weight α_m, D represents the diffraction kernel, and N denotes the intensity-dependent noise.
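To make the composition concrete, the following sketch chains the four stages of Equation (2) using FFT-based convolution; the kernels D, G_m, and V are assumed to be precomputed, normalized 2-D arrays, and intensity_dependent_noise refers to the sketch above. This is our reading of the model, not the authors' released code.

```python
from scipy.signal import fftconvolve

def dgmn_degrade(I_s, D, gaussians, weights, V):
    """DGMN forward model of Equation (2): diffraction -> mixed Gaussian
    (defocus) blur -> motion blur -> intensity-dependent adaptive noise."""
    diffracted = fftconvolve(I_s, D, mode="same")
    defocused = sum(a * fftconvolve(diffracted, G, mode="same")
                    for a, G in zip(weights, gaussians))
    moved = fftconvolve(defocused, V, mode="same")
    return intensity_dependent_noise(moved)
```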
Specifically, the motion blur kernel V is constructed via the following steps. First, initialize a zero matrix V of size N × N. Then, calculate the direction vector by Equation (3) with the defined motion direction angle θ:
$$d_x = \cos\left(\frac{\pi}{180} \cdot \theta\right), \qquad d_y = \sin\left(\frac{\pi}{180} \cdot \theta\right) \tag{3}$$
Second, compute the corresponding position for the motion as follows:
$$x' = \operatorname{round}\left(x_i \cdot d_x + y_j \cdot d_y\right), \qquad y' = \operatorname{round}\left(y_j \cdot d_y - x_i \cdot d_x\right) \tag{4}$$
where x_i and y_j are the offsets from the center of the kernel, with x_i = i − (N − 1)/2, y_j = j − (N − 1)/2, and i, j ranging from 0 to N − 1. After that, for each element V[i, j], set
$$V[i,j] = \begin{cases} \dfrac{1}{N}, & \text{if } 0 \le x' < N \text{ and } 0 \le y' < N \\ 0, & \text{otherwise} \end{cases} \tag{5}$$
Last, normalize the motion kernel V to preserve the brightness of the image during degradation.
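A direct transcription of Equations (3)–(5) might look as follows; the half-open bounds check and the final renormalization reflect our reading of the reconstructed equations.

```python
import numpy as np

def motion_kernel(N, theta_deg):
    """Build the N x N motion blur kernel V of Equations (3)-(5) for a
    motion direction angle theta_deg (degrees), then normalize it."""
    V = np.zeros((N, N))
    dx, dy = np.cos(np.deg2rad(theta_deg)), np.sin(np.deg2rad(theta_deg))
    c = (N - 1) / 2.0
    for i in range(N):
        for j in range(N):
            xi, yj = i - c, j - c                # offsets from the centre
            x = int(round(xi * dx + yj * dy))    # projected position
            y = int(round(yj * dy - xi * dx))
            if 0 <= x < N and 0 <= y < N:
                V[i, j] = 1.0 / N
    s = V.sum()
    return V / s if s > 0 else V                 # preserve image brightness
```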

2.2. SABO Algorithm

SABO is a novel metaheuristic algorithm that uses the subtraction average of search agents to update their positions, requiring fewer control parameters and enabling efficient exploration [51]. However, SABO still has several limitations, including an uneven initial agent distribution that hinders global exploration and random parameters that can cause instability. These issues may lead to premature convergence and increase the risk of becoming trapped in local optima, particularly in complex or multimodal optimization tasks.

2.3. The Proposed MISABO Algorithm

To tackle these issues, MISABO is proposed in this research by incorporating multiple improvement strategies. Specifically, the core structure of our approach involves the following components: Sobol sequence initialization, LensOBL, and the DLH search strategy.

2.3.1. Sobol Sequence Initialization

To enhance global search, our study uses Sobol sequences, a family of quasi-Monte Carlo low-discrepancy sequences widely used for uncertainty and sensitivity analysis. These low-discrepancy samples improve sampling efficiency within the search space, offering better performance, particularly in high-dimensional optimization problems [52]. Sobol sequence initialization yields more evenly and broadly distributed points than random number initialization, enhancing coverage of the search space.
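As an illustration, SciPy's quasi-Monte Carlo module provides scrambled Sobol sampling; the sketch below shows how a population could be seeded this way (a sketch under these assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(n_agents, dim, lb, ub, seed=0):
    """Seed the population with a scrambled Sobol low-discrepancy sequence
    for more even coverage of the search space than uniform random draws."""
    unit = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n_agents)
    return qmc.scale(unit, lb, ub)  # map [0, 1)^dim points to [lb, ub]

# e.g., 15 agents over the 7 DGMN parameters normalized to (0, 100):
# pop = sobol_init(15, 7, np.zeros(7), np.full(7, 100.0))
```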

2.3.2. Lens Opposition-Based Learning

In the population iteration phase, SABO is susceptible to becoming trapped in local optima, leading to iteration stagnation. To address this, this paper introduces the LensOBL strategy, which helps individuals escape local optima and improves fitting accuracy [53]. Figure 4 illustrates the concept of the LensOBL strategy in a one-dimensional space.
In Figure 4, AB represents the lens, and an object Q of height h is positioned on the left side of the lens. Through the lens, Q forms an inverted image Q* of height h*, whose projection on the X-axis is located at x*. The positional relationship between Q and Q* is described by Equation (6):
$$\frac{(l_b + u_b)/2 - x}{x^* - (l_b + u_b)/2} = \frac{h}{h^*} \tag{6}$$
Let k = h/h*; the opposite point x* is then
$$x^* = \frac{(l_b + u_b)/2 - x}{k} + \frac{l_b + u_b}{2} \tag{7}$$
When k = 1, Equation (7) simplifies to Equation (8), the formula of the basic opposition-based learning (OBL) strategy [54]:
$$x^* = l_b + u_b - x \tag{8}$$
Based on the above, OBL is a special case of LensOBL. LensOBL adaptively adjusts the parameter k to generate dynamic reverse solutions, and this flexibility enhances the exploitation capability of search agents compared to the fixed reverse solution of standard OBL. In our study, the parameter k is self-adaptively increased with the iteration count as follows:
$$k = \left(1 + \left(\frac{t}{T}\right)^{0.5}\right)^{10} \tag{9}$$
where t is the current iteration and T is the total iteration count.
If the new candidate calculated by LensOBL offers a better fitness value, X_i is replaced by it; otherwise, X_i is retained:
$$X_i(t+1) = \begin{cases} X_{i,\mathrm{Lens}}(t+1), & \text{if } \mathrm{Fit}_{i,\mathrm{Lens}} < \mathrm{Fit}_i \\ X_i(t), & \text{otherwise} \end{cases} \tag{10}$$
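Below is a minimal sketch of the LensOBL operator and its greedy selection, assuming a minimization objective and per-dimension bound vectors lb and ub; the helper names are ours. Note that at t = 0, k = 1 and the candidate reduces to the basic OBL point of Equation (8).

```python
import numpy as np

def lens_obl(x, lb, ub, t, T):
    """LensOBL reverse solution of Equations (7) and (9); k grows with t,
    so the candidate moves from a far reverse point toward the centre."""
    k = (1.0 + (t / T) ** 0.5) ** 10
    centre = (lb + ub) / 2.0
    return np.clip(centre + (centre - x) / k, lb, ub)  # bounds check

def lens_select(x, x_lens, fitness):
    """Greedy selection of Equation (10): keep the fitter of x and x_lens."""
    return x_lens if fitness(x_lens) < fitness(x) else x
```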

2.3.3. DLH Search Strategy

DLH is an optimization strategy that enhances search efficiency by focusing on specific dimensions and constructing local regions in which agents share information about optimal or nearby solutions, thus avoiding local optima [49]. First, the neighborhood N_i(t) of agent X_i(t) is defined as
$$N_i(t) = \left\{ X_j(t) \mid D_i\!\left(X_i(t), X_j(t)\right) \le R_i(t),\; X_j(t) \in X \right\} \tag{11}$$
where R_i(t) is the Euclidean distance between X_i(t) and its candidate X_i(t+1), and D_i(·,·) denotes the Euclidean distance between two agents.
Then, multi-dimensional neighborhood learning is carried out as
$$X_{i,\mathrm{DLH},d}(t+1) = X_{i,d}(t) + \mathrm{rand} \times \left( X_{n,d}(t) - X_{r,d}(t) \right) \tag{12}$$
where X_{i,DLH,d}(t+1) denotes the d-th dimension of X_{i,DLH}, X_{n,d}(t) is the d-th dimension of a random agent from the neighborhood N_i(t), and X_{r,d}(t) is the d-th dimension of a random agent from the population X.
Finally, the search agents are updated as follows:
$$X_i(t+1) = \begin{cases} X_{i,\mathrm{DLH}}(t+1), & \text{if } \mathrm{Fit}_{i,\mathrm{DLH}} < \mathrm{Fit}_i \\ X_i(t), & \text{otherwise} \end{cases} \tag{13}$$
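The DLH step can be sketched as below, again assuming minimization; following Equation (11), the radius R_i is taken as the distance between the current position and its candidate, and all names are illustrative.

```python
import numpy as np

def dlh_candidate(i, pop, cand, fitness, lb, ub, rng):
    """DLH search of Equations (11)-(13) for agent i, assuming minimization.
    pop is the population X(t) (N x D); cand holds the per-agent candidates
    X_i(t+1), whose distance to X_i(t) sets the neighborhood radius R_i."""
    x_i = pop[i]
    R = np.linalg.norm(x_i - cand[i])                    # radius R_i(t)
    neigh = pop[np.linalg.norm(pop - x_i, axis=1) <= R]  # Equation (11)
    if len(neigh) == 0:
        neigh = pop                                      # fall back to population
    x_dlh = np.empty_like(x_i)
    for d in range(x_i.size):                            # Equation (12), per dim
        x_n = neigh[rng.integers(len(neigh)), d]
        x_r = pop[rng.integers(len(pop)), d]
        x_dlh[d] = x_i[d] + rng.random() * (x_n - x_r)
    x_dlh = np.clip(x_dlh, lb, ub)                       # bounds check
    # Equation (13): keep the fitter of the DLH and prior candidates
    return x_dlh if fitness(x_dlh) < fitness(cand[i]) else cand[i]
```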

2.3.4. MISABO Algorithm Flow

The pseudocode of the MISABO algorithm is presented in Algorithm 1, where t is the current iteration count, T is the maximum iteration, N is the population size, and D is the population dimension.
Algorithm 1 MISABO Algorithm
  1:  Define the boundaries for D dimensions
  2:  Implement Sobol sequence initialization
  3:  while  t < T  do
  4:     Calculate the fitness of each X_i
  5:     for  i = 1 to N do
  6:        Generate the new SABO candidate by [51]
  7:        Check the bounds
  8:        Apply SABO selection and get X_{i,SABO}(t+1)
  9:        Apply the LensOBL operator by Equations (7) and (9)
10:        Check the bounds
11:        Apply LensOBL selection by Equation (10)
12:        Compute the neighborhood radius
13:        Construct the neighborhood of X_i
14:        for  d = 1 to D do
15:           Implement the DLH search by Equation (12)
16:        end for
17:        Check the bounds
18:        Apply DLH selection by Equation (13)
19:     end for
20:     Select the fittest agent from X as the current position
21:      t ← t + 1
22:  end while
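Read against Algorithm 1, a runnable skeleton of the overall loop might look as follows; it reuses the sketches above, and sabo_step is a hypothetical stand-in for the SABO position update of [51], which we do not reproduce here.

```python
import numpy as np

def sabo_step(pop, fitness, rng):
    """Hypothetical stand-in for the SABO update of [51] (not reproduced
    here); a small random perturbation keeps the skeleton runnable."""
    return pop + rng.normal(0.0, 0.1, pop.shape)

def misabo(fitness, lb, ub, n_agents=15, T=50, seed=0):
    """Minimal MISABO loop mirroring Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)
    pop = sobol_init(n_agents, len(lb), lb, ub, seed)            # line 2
    for t in range(T):                                           # lines 3-22
        cand = np.clip(sabo_step(pop, fitness, rng), lb, ub)     # lines 6-8
        for i in range(n_agents):
            cand[i] = lens_select(cand[i],                       # lines 9-11
                                  lens_obl(cand[i], lb, ub, t, T), fitness)
            cand[i] = dlh_candidate(i, pop, cand, fitness,       # lines 12-18
                                    lb, ub, rng)
        pop = cand
    return pop[np.argmin([fitness(x) for x in pop])]             # fittest agent
```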

2.4. Comparative Evaluation on Benchmark Functions

This section presents a benchmark evaluation designed to assess the performance of the proposed method on a variety of benchmark functions. The evaluation includes comparisons with several competitive metaheuristic algorithms, namely the original SABO (as the baseline), the sine cosine algorithm (SCA) [55], and the sunflower optimization (SFO) algorithm [56]. A set of representative benchmark functions is employed, comprising unimodal functions (F1–F4) and multimodal functions (F5–F8), as shown in Table 1, to comprehensively evaluate the optimization capabilities of the proposed MISABO. Unimodal functions test convergence speed and accuracy, while multimodal functions assess global search ability. Each method was run independently 50 times on every benchmark function, with a population size of 50 and 500 iterations. The reported metrics are the mean (average performance), standard deviation (stability), and best/worst values (accuracy).
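As a sketch of this protocol, the harness below runs the misabo skeleton above on the sphere function F1 and reports the four metrics; the problem dimension (here 30) is our assumption, since the paper does not state it, and the run is illustrative rather than a reproduction of Table 2.

```python
import numpy as np

def sphere(x):                          # unimodal F1 from Table 1
    return float(np.sum(x ** 2))

lb, ub = np.full(30, -100.0), np.full(30, 100.0)
results = [sphere(misabo(sphere, lb, ub, n_agents=50, T=500, seed=s))
           for s in range(50)]          # 50 independent runs (slow; illustrative)
print(f"mean={np.mean(results):.3e}  std={np.std(results):.3e}  "
      f"best={np.min(results):.3e}  worst={np.max(results):.3e}")
```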

2.5. Synthetic Droplet Image Generation Experiments

2.5.1. Experimental Setup

To evaluate the proposed method for high-quality droplet image generation, the proposed MISABO-optimized DGMN framework is deployed on our NEJ-PTG6H OLED inkjet printer, jointly developed by Huazhong University of Science and Technology and Wuhan National Innovation Technology Optoelectronics Equipment Co., Ltd. (Wuhan, China) (Figure 5). The printer features multiple Konica KM1800i printheads (Tokyo, Japan) [57] with YZ degrees of freedom (DOFs), each containing 1776 nozzles, and an X-DOF platform for panel placement. Through coordinated motion and control, the system prints functional OLED layers, such as TFE and color filters [58,59].
The ODIS, mounted on an X DOF, includes an HIKROBOT industrial camera (MV-CH120-10GM, 4096 × 3000 resolution, Hangzhou, China), a Navitar 12× zoom lens (Rochester, NY, USA), and a high-power LED. The camera uses a 1 µs exposure time with constant backlighting to capture high-speed shadowgraph images of droplets in flight, as shown in Figure 6. Utilizing known nozzle coordinates and DOF cooperation, all nozzles are inspected to measure droplet volume and speed and to identify abnormal nozzles for print planning and nozzle waveform regulation. Based on this setup, each printhead is moved to the ODIS for inspection. Nozzles eject droplets at 14 kHz, and multiple droplets are captured for measurement. After the current nozzle set is inspected, the remaining nozzles are scanned sequentially along the XY DOFs to complete the droplet image collection.

2.5.2. Metrics for Synthetic Image Quality Assessment

In our study, the Deep Image Structure and Texture Similarity (DISTS) metric [60] is adopted to assess synthetic image quality for degradation pattern alignment. Unlike traditional pixel-level metrics, DISTS evaluates perceptual similarity by comparing structural and textural information in a deep feature space between two images, yielding values in [0, 1], where lower scores indicate better alignment in our study.
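For reference, the publicly released DISTS implementation can be invoked roughly as below (a sketch; the package name and call details should be checked against the DISTS repository). Since droplet images are grayscale, we assume they are replicated to three channels before scoring.

```python
import torch
from DISTS_pytorch import DISTS  # assumed package: pip install dists-pytorch

metric = DISTS()
# stand-ins for a paired 230 x 230 synthetic/real grayscale image pair,
# replicated to 3 channels and batched as (B, C, H, W) in [0, 1]
synthetic = torch.rand(1, 1, 230, 230).repeat(1, 3, 1, 1)
real = torch.rand(1, 1, 230, 230).repeat(1, 3, 1, 1)
score = metric(synthetic, real)  # lower score = better degradation alignment
print(float(score))
```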

2.6. Ablation Study on DGMN Components and MISABO Strategies

To evaluate the contribution of each degradation component in the proposed DGMN model, we conducted a comprehensive ablation study with parameters optimized by MISABO. This study systematically investigates how each component affects the quality and realism of the synthetic droplet images. The following variants are evaluated:
(M1): The BDDF without the diffraction component.
(M2): The BDDF without the defocus blur component.
(M3): The original BDDF, used as the baseline.
(M4): M3 with the motion blur component added.
(M5): M3 with the fixed Gaussian noise replaced by an adaptive noise component.
(M6): The complete DGMN model, combining all components in M4 and M5.
Additionally, to assess the individual contributions of each incorporated strategy in MISABO, we conducted an ablation study by progressively adding each strategy and comparing the resulting performance against the full MISABO. The evaluated variants are as follows:
(V1): The original SABO, which served as the baseline.
(V2): V1 with Sobol initialization incorporated.
(V3): V2 further enhanced by introducing the LensOBL strategy.
(V4): The complete MISABO, obtained by adding the DLH strategy to V3.

3. Results and Discussion

This section presents the results of the proposed method through various benchmark evaluations, synthetic droplet image experiments, and ablation studies, followed by comprehensive discussions. All experiments were conducted on an ADVANTECH industrial PC (Taipei, Taiwan) with an Intel Xeon E5 CPU (Santa Clara, CA, USA) and 32 GB of RAM.

3.1. Results of MISABO on Benchmark Functions

Table 2 shows that MISABO outperforms the other competitive algorithms, reaching the theoretical optima on F1–F5 and F7 and significantly improving average performance over SABO on F6 and F8, with gains of 88.87% and 99.59%, respectively. Although it does not reach the exact optima on F6 and F8, MISABO achieves the closest results among all methods. It also shows strong convergence stability, with at least 99.93% lower standard deviations than SFO and SCA on all functions except F5 and F7. Compared to SABO, MISABO improves stability on F1–F4 and reduces the standard deviation on F8 by three orders of magnitude, a 99.65% reduction. These results further validate the effectiveness of the multi-strategy design.
To analyze the accuracy and convergence speed of MISABO, this study presents the convergence curves across all benchmark functions in Figure 7, which show that MISABO consistently achieves the steepest convergence curves and the highest accuracy across functions F1 to F8. It outperforms the other methods on both unimodal and multimodal functions, reaching optimal or near-optimal solutions faster. Even when the optimum is not reached, MISABO maintains superior performance, making it the most effective algorithm among all evaluated.

3.2. Performance of Synthetic Droplet Image Generation

Synthetic droplet image experiments were conducted on nine real droplet images (230 × 230) captured by ODIS, labeled X1 to X9 in Figure 6. MISABO and the other algorithms optimized the DGMN model using paired synthetic and real droplet images. All search agents were normalized to (0, 100) before optimization and denormalized for DISTS evaluation. Each algorithm ran independently on all images with 15 agents over 50 iterations. Table 3 details the DGMN control parameters and their ranges.
Figure 8 illustrates the convergence performance of the proposed MISABO algorithm in comparison with other optimization methods applied to the DGMN model for synthetic droplet image generation. As shown, MISABO consistently outperforms its competitors in final DISTS values across all image cases and presents better convergence speed in most images. For instance, in images X1 and X7, MISABO minimizes the distribution error within approximately 20 iterations. In images X3, X4, and X8, it achieves the lowest DISTS among all methods in just five iterations. For images X5 and X6, MISABO demonstrates effective multi-step convergence, producing the best degradation simulations after 40 iterations. In image X9, MISABO yields a significantly lower final DISTS after 50 iterations compared to other algorithms. The optimization results across all image cases, summarized in Table 4, further confirm that MISABO delivers superior performance in synthetic droplet image generation.

3.3. Results of the Ablation Study on DGMN and MISABO

In this subsection, we validate the impact of the integrated MISABO strategies and DGMN components on the synthetic droplet image generation.

3.3.1. Ablation Analysis of the DGMN Model

We verified the effectiveness of the DGMN formulation with parameters optimized by our proposed MISABO in Table 5. It shows that adding motion blur (M4) to M3 results in a DISTS reduction of 0.0021 . Replacing the fixed Gaussian noise with intensity-dependent noise (M5) also yields a 0.0134 DISTS decrease, demonstrating the value of adaptive noise for improved degradation alignment. The complete model, M6 (DGMN), achieves the lowest DISTS with a total reduction of 0.0175 , confirming the synergistic benefit of integrating both strategies. In summary, our method (DGMN-MISABO) achieves the most accurate degradation alignment, outperforming the BDDF with manually configured parameters and the BDDF optimized by MISABO by 37.79 % and 9.19 % , respectively.
In addition, we performed a further ablation study on the BDDF model using MISABO-optimized parameters. The results are also shown in Table 5, where M1 and M2 represent the BDDF model without diffraction and defocus blur components, respectively, while M3 represents the complete BDDF model. The DISTS scores show that M3 achieves a value of 0.1905 , which is substantially better than M1 and M2 by 0.0403 and 0.0340 , respectively. This demonstrates the importance of both diffraction and defocus blur in generating realistic droplet images and supports the use of the BDDF as a strong baseline. Moreover, the DISTS score of M3 ( 0.1905 ) is significantly better than that of the manually configured parameters in [16] ( 0.2781 ), demonstrating the substantial enhancement in degradation alignment enabled by MISABO.
Figure 9 presents synthetic images generated by the BDDF and DGMN models using the best-performing parameter sets obtained either manually or via MISABO, with corresponding DISTS scores shown at the bottom right of each image. The close visual similarity between the DGMN-MISABO results and real droplet images demonstrates the effectiveness of the proposed framework.

3.3.2. Effectiveness of Integrated Strategies in MISABO

Table 6 shows the performance improvement after each strategy is integrated into the optimizer applied to our proposed DGMN. Comparing V1 (SABO) and V2, Sobol initialization leads to a 0.0002 DISTS reduction. Adding LensOBL (V2 to V3) brings an additional 0.0004 DISTS decrement, showing its effectiveness in avoiding local optima. Further integrating DLH (V3 to V4) results in a DISTS decrease of 0.0003, highlighting its role in balancing global and local searches for better degradation alignment and more realistic synthetic images.

4. Conclusions

This paper presents a physics-informed DGMN model for generating realistically degraded droplet images by integrating diffraction, defocus, motion, and adaptive noise, tailored to the ODIS’s imaging conditions. To optimize DGMN parameters, we develop MISABO, which incorporates Sobol initialization, LensOBL, and DLH to enhance diversity, avoid local optima, and balance search strategies. Benchmark results demonstrate MISABO’s superior convergence and accuracy. Experiments show that the MISABO-optimized DGMN framework improves synthetic accuracy by 37.79% over the manually configured BDDF. Ablation studies confirm the effectiveness of each integrated strategy. This study focuses on realistic droplet image generation, and future work will involve constructing a synthetic dataset using MISABO-optimized DGMN to train image restoration models, ultimately improving the accuracy of ODIS and inkjet printing performance.

Author Contributions

Conceptualization, J.C. (Jiacheng Cai); methodology, J.C. (Jiacheng Cai) and J.R.; software, J.C. (Jiacheng Cai) and J.W.; validation, J.C. (Jiacheng Cai); formal analysis, J.C. (Jiacheng Cai); investigation, W.T.; resources, J.C. (Jiankui Chen), W.T., and Z.Y.; data curation, J.C. (Jiacheng Cai) and W.T.; writing—original draft preparation, J.C. (Jiacheng Cai); writing—review and editing, J.C. (Jiacheng Cai); visualization, J.C. (Jiacheng Cai); supervision, J.C. (Jiankui Chen) and Z.Y.; project administration, J.C. (Jiankui Chen) and Z.Y.; funding acquisition, J.C. (Jiankui Chen), W.T., and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the CUI CAN Program of Guangdong Province (grant no. CC/XM-202402ZJ0102) and the National Natural Science Foundation of China (51975236, 52205594).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Jiankui Chen and Wei Tang were employed by the company Wuhan National Innovation Technology Optoelectronics Equipment Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Gather, M.C.; Reineke, S. Recent advances in light outcoupling from white organic light-emitting diodes. J. Photonics Energy 2015, 5, 057607. [Google Scholar] [CrossRef]
  2. Kim, K.; Kim, G.; Lee, B.R.; Ji, S.; Kim, S.Y.; An, B.W.; Song, M.H.; Park, J.U. High-resolution electrohydrodynamic jet printing of small-molecule organic light-emitting diodes. Nanoscale 2015, 7, 13410–13415. [Google Scholar] [CrossRef]
  3. Zheng, X.; Liu, Y.; Zhu, Y.; Ma, F.; Feng, C.; Yu, Y.; Hu, H.; Li, F. Efficient inkjet-printed blue OLED with boosted charge transport using host doping for application in pixelated display. Opt. Mater. 2020, 101, 109755. [Google Scholar] [CrossRef]
  4. Hu, Z.; Yin, Y.; Ali, M.U.; Peng, W.; Zhang, S.; Li, D.; Zou, T.; Li, Y.; Jiao, S.; Chen, S.j. Inkjet printed uniform quantum dots as color conversion layers for full-color OLED displays. Nanoscale 2020, 12, 2103–2110. [Google Scholar] [CrossRef]
  5. Yang, H.; Song, K.; Mao, F.; Yin, Z. Autolabeling-enhanced active learning for cost-efficient surface defect visual classification. IEEE Trans. Instrum. Meas. 2020, 70, 1–15. [Google Scholar] [CrossRef]
  6. Psarommatis, F.; Sousa, J.; Mendonça, J.P.; Kiritsis, D. Zero-defect manufacturing the approach for higher manufacturing sustainability in the era of industry 4.0: A position paper. Int. J. Prod. Res. 2022, 60, 73–91. [Google Scholar] [CrossRef]
  7. Xiong, J.; Chen, J.; Chen, W.; Yue, X.; Zhao, Z.; Yin, Z. Intelligent path planning algorithm system for printed display manufacturing using graph convolutional neural network and reinforcement learning. J. Manuf. Syst. 2025, 79, 73–85. [Google Scholar] [CrossRef]
  8. Liu, Q.; Chen, J.; Yang, H.; Yin, Z. Accurate stereo-vision-based flying droplet volume measurement method. IEEE Trans. Instrum. Meas. 2021, 71, 5000116. [Google Scholar] [CrossRef]
  9. Zhu, H.; Chen, J.-k.; Yue, X.; Xiong, J.-k.; Xiong, J.-c.; Gao, G.-x. Forming control method of inkjet printing OLED emitting layer pixel pit film. Chin. J. Liq. Cryst. Displays 2022, 37, 1420–1429. [Google Scholar] [CrossRef]
  10. Yue, X.; Chen, J.; Li, Y.; Li, X.; Zhu, H.; Yin, Z. Intelligent control system for droplet volume in inkjet printing based on stochastic state transition soft actor–critic DRL algorithm. J. Manuf. Syst. 2023, 68, 455–464. [Google Scholar] [CrossRef]
  11. Zhang, Z.; Yang, H.; Chen, J.; Yin, Z. Multi-scale conditional diffusion model for deposited droplet volume measurement in inkjet printing manufacturing. J. Manuf. Syst. 2023, 71, 595–608. [Google Scholar] [CrossRef]
  12. Park, S.; Oh, J.H. A novel vision-based approach for high-speed jetting status monitoring of a multi-nozzle inkjet head. J. Manuf. Processes 2025, 150, 670–681. [Google Scholar] [CrossRef]
  13. Yue, X.; Chen, J.; Yang, H.; Li, X.; Xiong, J.; Yin, Z. Multinozzle Droplet Volume Distribution Control in Inkjet Printing Based on Multiagent Soft Actor–Critic Network. IEEE/ASME Trans. Mechatron. 2024, 30, 447–457. [Google Scholar] [CrossRef]
  14. Qiao, H.; Chen, J.; Huang, X. A Survey of Brain-Inspired Intelligent Robots: Integration of Vision, Decision, Motion Control, and Musculoskeletal Systems. IEEE Trans. Cybern. 2022, 52, 11267–11280. [Google Scholar] [CrossRef]
  15. Choi, B.S.; Kim, S.H.; Lee, J.; Seong, D.; Lee, J.; Lee, J.; Chang, S.; Park, J.; Lee, S.J.; Shin, J.K. Effects of aperture diameter on image blur of CMOS image sensor with pixel apertures. IEEE Trans. Instrum. Meas. 2019, 68, 1382–1388. [Google Scholar] [CrossRef]
  16. Liu, Q.; Chen, J.; Yang, H.; Yin, Z. Prior Guided Multi-Scale Dynamic Deblurring Network for Diffraction Image Restoration in Droplet Measurement. IEEE Trans. Instrum. Meas. 2023, 73, 5004814. [Google Scholar]
  17. Lin, X.; Ren, C.; Liu, X.; Huang, J.; Lei, Y. Unsupervised image denoising in real-world scenarios via self-collaboration parallel generative adversarial branches. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 12642–12652. [Google Scholar]
  18. Liu, Q.; Yang, H.; Chen, J.; Yin, Z. Multiframe super-resolution with dual pyramid multiattention network for droplet measurement. IEEE Trans. Instrum. Meas. 2023, 72, 1–14. [Google Scholar] [CrossRef]
  19. Agustsson, E.; Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 126–135. [Google Scholar]
  20. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the Curves and Surfaces: 7th International Conference, Avignon, France, 24–30 June 2010; Revised Selected Papers 7. Springer: Berlin/Heidelberg, Germany, 2010; pp. 711–730. [Google Scholar]
  21. Hendrycks, D.; Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv 2019, arXiv:1903.12261. [Google Scholar] [CrossRef]
  22. Li, S.; Zhang, G.; Luo, Z.; Liu, J. DFAN: Dual Feature Aggregation Network for Lightweight Image Super-Resolution. Wirel. Commun. Mob. Comput. 2022, 2022, 8116846. [Google Scholar] [CrossRef]
  23. Luo, Z.; Huang, Y.; Li, S.; Wang, L.; Tan, T. Efficient super resolution by recursive aggregation. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 8592–8599. [Google Scholar]
  24. Bell-Kligler, S.; Shocher, A.; Irani, M. Blind super-resolution kernel estimation using an internal-gan. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar] [CrossRef]
  25. Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3262–3271. [Google Scholar]
  26. Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind super-resolution with iterative kernel correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1604–1613. [Google Scholar]
  27. Huang, Y.; Li, S.; Wang, L.; Tan, T. Unfolding the alternating optimization for blind super resolution. Adv. Neural Inf. Process. Syst. 2020, 33, 5632–5643. [Google Scholar]
  28. Li, P.; Liang, J.; Zhang, M. A degradation model for simultaneous brightness and sharpness enhancement of low-light image. Signal Process. 2021, 189, 108298. [Google Scholar] [CrossRef]
  29. Fritsche, M.; Gu, S.; Timofte, R. Frequency separation for real-world super-resolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3599–3608. [Google Scholar]
  30. Han, C.; Hayashi, H.; Rundo, L.; Araki, R.; Shimoda, W.; Muramatsu, S.; Furukawa, Y.; Mauri, G.; Nakayama, H. GAN-based synthetic brain MR image generation. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 734–738. [Google Scholar]
  31. Mirjalili, S.; Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–55. [Google Scholar]
  32. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  33. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  35. Fister, I.; Fister, I., Jr.; Yang, X.S.; Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 2013, 13, 34–46. [Google Scholar] [CrossRef]
  36. Yuen, M.C.; Ng, S.C.; Leung, M.F.; Che, H. A metaheuristic-based framework for index tracking with practical constraints. Complex Intell. Syst. 2022, 8, 4571–4586. [Google Scholar] [CrossRef]
  37. Rezk, H.; Fathy, A.; Aly, M.; Ibrahim, M.N. Energy Management Control Strategy for Renewable Energy System Based on Spotted Hyena Optimizer. Comput. Mater. Contin. 2021, 67, 2271–2281. [Google Scholar] [CrossRef]
  38. Abderazek, H.; Yildiz, A.R.; Mirjalili, S. Comparison of recent optimization algorithms for design optimization of a cam-follower mechanism. Knowl.-Based Syst. 2020, 191, 105237. [Google Scholar] [CrossRef]
  39. Abdel-Basset, M.; Chang, V.; Mohamed, R. HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl. Soft Comput. 2020, 95, 106642. [Google Scholar] [CrossRef]
  40. Dehghani, M.; Montazeri, Z.; Malik, O. Optimal sizing and placement of capacitor banks and distributed generation in distribution systems using spring search algorithm. Int. J. Emerg. Electr. Power Syst. 2020, 21, 20190217. [Google Scholar] [CrossRef]
  41. Dehbozorgi, S.; Ehsanifar, A.; Montazeri, Z.; Dehghani, M.; Seifi, A. Line loss reduction and voltage profile improvement in radial distribution networks using battery energy storage system. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 215–219. [Google Scholar]
  42. Bouaraki, M.; Recioui, A. Optimal placement of power factor correction capacitors in power systems using Teaching Learning Based Optimization. Alger. J. Signals Syst. 2017, 2, 102–109. [Google Scholar] [CrossRef]
  43. El-Kenawy, E.S.M.; Mirjalili, S.; Ibrahim, A.; Alrahmawy, M.; El-Said, M.; Zaki, R.M.; Eid, M.M. Advanced meta-heuristics, convolutional neural networks, and feature selectors for efficient COVID-19 X-ray chest image classification. IEEE Access 2021, 9, 36019–36037. [Google Scholar] [CrossRef]
  44. Bourouis, S.; Band, S.S.; Mosavi, A.; Agrawal, S.; Hamdi, M. Meta-heuristic algorithm-tuned neural network for breast cancer diagnosis using ultrasound images. Front. Oncol. 2022, 12, 834028. [Google Scholar]
  45. Chandra, M.A.; Bedi, S. Survey on SVM and their application in image classification. Int. J. Inf. Technol. 2021, 13, 1–11. [Google Scholar] [CrossRef]
  46. Canayaz, M. MH-COVIDNet: Diagnosis of COVID-19 using deep neural networks and meta-heuristic-based feature selection on X-ray images. Biomed. Signal Process. Control 2021, 64, 102257. [Google Scholar] [CrossRef]
  47. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  48. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684. [Google Scholar] [CrossRef]
  49. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  50. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 57–82. [Google Scholar]
  51. Moustafa, G.; Tolba, M.A.; El-Rifaie, A.M.; Ginidi, A.; Shaheen, A.M.; Abid, S. A Subtraction-Average-Based Optimizer for Solving Engineering Problems with Applications on TCSC Allocation in Power Systems. Biomimetics 2023, 8, 332. [Google Scholar] [CrossRef]
  52. Burhenne, S.; Jacob, D.; Henze, G.P. Sampling Based on Sobol' Sequences for Monte Carlo Techniques Applied to Building Simulations. In Proceedings of the Building Simulation 2011: 12th Conference of IBPSA, Sydney, Australia, 14–16 November 2011. [Google Scholar]
  53. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy perturbation. Appl. Soft Comput. 2024, 152, 111211. [Google Scholar] [CrossRef]
  54. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar] [CrossRef]
  55. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  56. Gomes, G.F.; da Cunha, S.S.; Ancelotti, A.C. A sunflower optimization (SFO) algorithm applied to damage identification on laminated composite plates. Eng. Comput. 2019, 35, 619–626. [Google Scholar] [CrossRef]
  57. Tomotake, A. Technology of Konica Minolta’s Inkjet Printhead. Inkjet Print. Ind. Mater. Technol. Syst. Appl. 2022, 1, 503–521. [Google Scholar]
  58. Kwon, B.H.; Joo, C.W.; Cho, H.; Kang, C.m.; Yang, J.H.; Shin, J.W.; Kim, G.H.; Choi, S.; Nam, S.; Kim, K.; et al. Organic/inorganic hybrid thin-film encapsulation using inkjet printing and PEALD for industrial large-area process suitability and flexible OLED application. ACS Appl. Mater. Interfaces 2021, 13, 55391–55402. [Google Scholar] [CrossRef]
  59. Gao, Y.; Kang, C.; Prodanov, M.F.; Vashchenko, V.V.; Srivastava, A.K. Inkjet-Printed, Flexible Full-Color Photoluminescence-Type Color Filters for Displays. Adv. Eng. Mater. 2022, 24, 2101553. [Google Scholar] [CrossRef]
  60. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image quality assessment: Unifying structure and texture similarity. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2567–2581. [Google Scholar] [CrossRef]
Figure 1. Closed-loop control for OLED inkjet printing, with Online Droplet Inspection System (ODIS) for in-flight droplet measurement and inkjet jetting status assessment.
Figure 2. Gray value distribution of in-flight droplet image. (a) Inkjetted droplets. (b) Zoomed-in view of the droplet size. (c) Horizontal gray value distribution of the droplet image. (d) Vertical gray value distribution of the droplet image.
Figure 3. Physics-informed DGMN degradation model for droplet images.
Figure 4. The schematic of LensOBL.
Figure 5. NEJ-PTG6H inkjet printing equipment. (a) Overview of the printing equipment; (b) inside of the inkjet printer; (c) close view of the ODIS system.
Figure 6. Examples of captured in-flight droplet images.
Figure 7. Iterative results on benchmark functions.
Figure 8. Convergence trends by the proposed MISABO and its competitors applied to DGMN for synthetic droplet image generation.
Figure 9. Examples of synthetic droplet images (left) and real droplet images (far right).
Table 1. Benchmark functions used in evaluation. Each function has range and minimum $f_{\min}$ as listed.

$F_1(x) = \sum_{i=1}^{n} x_i^2$; range $[-100, 100]$; $f_{\min} = 0$

$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$; range $[-10, 10]$; $f_{\min} = 0$

$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$; range $[-100, 100]$; $f_{\min} = 0$

$F_4(x) = \max_i \left\{ |x_i|, 1 \le i \le n \right\}$; range $[-100, 100]$; $f_{\min} = 0$

$F_5(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$; range $[-5.12, 5.12]$; $f_{\min} = 0$

$F_6(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$; range $[-32, 32]$; $f_{\min} = 0$

$F_7(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$; range $[-600, 600]$; $f_{\min} = 0$

$F_8(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$; range $[-50, 50]$; $f_{\min} = 0$,
where $y_i = 1 + \frac{x_i + 1}{4}$ and
$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$
Table 2. Optimization results of benchmark functions.

| Function | Metric | MISABO | SABO | SCA | SFO |
|---|---|---|---|---|---|
| F1 | mean | 0.00 × 10^0 | 8.50 × 10^−201 | 4.36 × 10^0 | 1.39 × 10^−18 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 6.97 × 10^0 | 3.25 × 10^−18 |
| | best | 0.00 × 10^0 | 9.50 × 10^−204 | 4.24 × 10^−2 | 1.09 × 10^−21 |
| | worst | 0.00 × 10^0 | 9.45 × 10^−200 | 3.12 × 10^1 | 1.66 × 10^−17 |
| F2 | mean | 0.00 × 10^0 | 1.38 × 10^−113 | 7.91 × 10^−3 | 1.52 × 10^−9 |
| | std | 0.00 × 10^0 | 4.04 × 10^−113 | 1.25 × 10^−2 | 6.80 × 10^−9 |
| | best | 0.00 × 10^0 | 4.14 × 10^−115 | 3.72 × 10^−4 | 7.52 × 10^−11 |
| | worst | 0.00 × 10^0 | 2.19 × 10^−112 | 5.67 × 10^−2 | 3.13 × 10^−8 |
| F3 | mean | 0.00 × 10^0 | 1.96 × 10^−30 | 5.49 × 10^3 | 5.03 × 10^−16 |
| | std | 0.00 × 10^0 | 1.06 × 10^−29 | 4.15 × 10^3 | 8.83 × 10^−16 |
| | best | 0.00 × 10^0 | 5.50 × 10^−83 | 4.25 × 10^2 | 9.04 × 10^−20 |
| | worst | 0.00 × 10^0 | 5.83 × 10^−29 | 1.69 × 10^4 | 4.04 × 10^−15 |
| F4 | mean | 0.00 × 10^0 | 9.36 × 10^−78 | 2.62 × 10^1 | 2.11 × 10^−10 |
| | std | 0.00 × 10^0 | 1.32 × 10^−77 | 1.12 × 10^1 | 2.60 × 10^−10 |
| | best | 0.00 × 10^0 | 5.62 × 10^−79 | 2.32 × 10^0 | 3.62 × 10^−12 |
| | worst | 0.00 × 10^0 | 4.82 × 10^−77 | 4.43 × 10^1 | 1.11 × 10^−9 |
| F5 | mean | 0.00 × 10^0 | 0.00 × 10^0 | 3.09 × 10^1 | 0.00 × 10^0 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 3.14 × 10^1 | 0.00 × 10^0 |
| | best | 0.00 × 10^0 | 0.00 × 10^0 | 6.07 × 10^−4 | 0.00 × 10^0 |
| | worst | 0.00 × 10^0 | 0.00 × 10^0 | 9.06 × 10^1 | 0.00 × 10^0 |
| F6 | mean | 4.44 × 10^−16 | 3.99 × 10^−15 | 1.18 × 10^1 | 5.87 × 10^−10 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 9.46 × 10^0 | 6.32 × 10^−10 |
| | best | 4.44 × 10^−16 | 3.99 × 10^−15 | 8.22 × 10^−3 | 9.52 × 10^−12 |
| | worst | 4.44 × 10^−16 | 3.99 × 10^−15 | 2.04 × 10^1 | 2.23 × 10^−9 |
| F7 | mean | 0.00 × 10^0 | 0.00 × 10^0 | 7.78 × 10^−1 | 0.00 × 10^0 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 2.28 × 10^−1 | 0.00 × 10^0 |
| | best | 0.00 × 10^0 | 0.00 × 10^0 | 2.72 × 10^−1 | 0.00 × 10^0 |
| | worst | 0.00 × 10^0 | 0.00 × 10^0 | 1.09 × 10^0 | 0.00 × 10^0 |
| F8 | mean | 7.08 × 10^−4 | 1.72 × 10^−1 | 1.36 × 10^1 | 3.62 × 10^−1 |
| | std | 2.46 × 10^−4 | 7.08 × 10^−2 | 2.64 × 10^1 | 3.53 × 10^−1 |
| | best | 3.56 × 10^−4 | 5.88 × 10^−2 | 9.46 × 10^−1 | 7.91 × 10^−4 |
| | worst | 1.50 × 10^−3 | 3.52 × 10^−1 | 1.43 × 10^2 | 1.33 × 10^0 |
Table 3. Control parameters and their value ranges in the DGMN model.

| Parameter | Value Range |
|---|---|
| Lens numerical aperture (NA) | [0.028, 0.151] |
| Diffraction kernel size k_d | [11, 131] |
| Diffraction kernel scale k_s | [0.08, 1] |
| Mixed Gaussian kernel size G_k | [21, 121] |
| Mixed Gaussian kernel sigma G_s | [1, 10] |
| Motion kernel size k_v | [35, 50] |
| Droplet flying angle θ | [80, 100] |
Table 4. DISTS of each algorithm in synthetic droplet image generation; * method in [16].

| Image | MISABO | SABO | SCA | SFO | Baseline * |
|---|---|---|---|---|---|
| X1 | 0.1665 | 0.1681 | 0.1669 | 0.1686 | 0.2806 |
| X2 | 0.1658 | 0.1668 | 0.1664 | 0.1669 | 0.2703 |
| X3 | 0.1895 | 0.1901 | 0.1905 | 0.1908 | 0.2734 |
| X4 | 0.2049 | 0.2056 | 0.2060 | 0.2071 | 0.2714 |
| X5 | 0.1777 | 0.1782 | 0.1784 | 0.1812 | 0.2831 |
| X6 | 0.2195 | 0.2200 | 0.2211 | 0.2198 | 0.2780 |
| X7 | 0.1559 | 0.1563 | 0.1566 | 0.1570 | 0.2854 |
| X8 | 0.1781 | 0.1792 | 0.1785 | 0.1790 | 0.2758 |
| X9 | 0.0992 | 0.1004 | 0.1016 | 0.1008 | 0.2851 |
| Average | 0.1730 | 0.1739 | 0.1740 | 0.1746 | 0.2781 |
Table 5. Ablation studies of the proposed MISABO-optimized DGMN model. M1: BDDF − diffraction; M2: BDDF − defocus blur; M3: BDDF; M4: M3 + motion blur; M5: M3 + adaptive noise; M6: complete DGMN; * M3 with manually configured parameters in [16].

| Methods | M1 | M2 | M3 | M4 | M5 | M6 (Ours) |
|---|---|---|---|---|---|---|
| Diffraction | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Defocus Blur | ✓ | | ✓ | ✓ | ✓ | ✓ |
| BDDF | | | ✓ | ✓ | ✓ | ✓ |
| Motion Blur | | | | ✓ | | ✓ |
| Adaptive Noise | | | | | ✓ | ✓ |
| DISTS | 0.2308 | 0.2245 | 0.1905/0.2781 * | 0.1884 | 0.1771 | 0.1730 |
Table 6. Ablation studies of integrated strategies in MISABO. V1: SABO; V2: V1 + Sobol initialization; V3: V2 + LensOBL; V4: V3 + DLH.

| Methods | V1 | V2 | V3 | V4 (Ours) |
|---|---|---|---|---|
| SABO | ✓ | ✓ | ✓ | ✓ |
| Sobol | | ✓ | ✓ | ✓ |
| LensOBL | | | ✓ | ✓ |
| DLH | | | | ✓ |
| DISTS | 0.1739 | 0.1737 | 0.1733 | 0.1730 |
