Article

Multi-Objective Robust Optimization Reconstruction Algorithm for Electrical Capacitance Tomography

1 China Nuclear Power Engineering Co., Ltd., Haidian District, Beijing 100840, China
2 School of Energy, Power and Mechanical Engineering, North China Electric Power University, Changping District, Beijing 102206, China
3 Institute of Engineering Thermophysics, Chinese Academy of Sciences, Haidian District, Beijing 100190, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 4778; https://doi.org/10.3390/app15094778
Submission received: 28 February 2025 / Revised: 5 April 2025 / Accepted: 22 April 2025 / Published: 25 April 2025

Abstract

Electrical capacitance tomography holds significant potential for multiphase flow parameter measurements, but its application has been limited by the challenge of reconstructing high-quality images, especially under complex and uncertain conditions. We propose an innovative multi-objective robust optimization model to alleviate this limitation. This model integrates advanced optimization methods, multimodal learning, and measurement physics, structured as a nested upper-level optimization problem and lower-level optimization problem to tackle the challenges of complex image reconstruction. By integrating supervised learning methodologies with optimization principles, our framework synchronously achieves parameter tuning and performance enhancement. Utilizing regularization theory, the multimodal learning prior image, sparsity prior, and measurement physics are incorporated into a novel lower-level optimization problem. To enhance the inference accuracy of the prior image, a new multimodal neural network leveraging multimodal data is developed. An innovative nested algorithm that mitigates computational difficulties arising from the interactions between the upper- and lower-level optimization problems is proposed to solve the proposed multi-objective robust optimization model. Qualitative and quantitative evaluation results demonstrate that the proposed method surpasses mainstream imaging algorithms, enhancing the automation level of the reconstruction process and image quality while exhibiting exceptional robustness. This study pioneers a novel imaging framework for enhancing overall reconstruction performance.

1. Introduction

Accurate multiphase flow measurement remains a key challenge. Electrical capacitance tomography has emerged as a promising solution to this problem through an inverse problem method that reconstructs permittivity distribution from capacitance data. Recent advances in sensor design, reconstruction algorithms, and hardware/software systems have significantly improved its sensitivity, resolution, and practicality. The technology has found valuable applications in the energy, petroleum, chemical, and storage industries, with potential for broader adoption as it matures further.
Reconstruction algorithms have developed along two distinct paths: physics-based and learning-based methods. Learning-based methods automate the imaging process and reduce reliance on specific physical mechanisms, thus minimizing the impact of manual parameter settings on the results and enhancing imaging quality. However, they also face challenges such as stringent requirements on data quality and quantity and a risk of over-fitting, which may limit performance in new or complex scenarios. These methods also demand significant computational resources, increasing costs and complicating application in resource-limited settings. Their task-specific nature requires retraining for new tasks, adding complexity, and their limited physical interpretability remains contentious in scientific and engineering fields. Despite these hurdles, learning-based methods have great potential. Future research could focus on efficient data techniques, improved generalization, physics-guided learning, and interpretable models, bridging the gap between traditional and learning methods to advance the measurement technology.
Physics-based reconstruction methods, especially regularization methods, have been dominant due to their solid theoretical foundation, strong interpretability, and effective use of prior information. However, as imaging targets become more complex, these methods face mounting challenges, and selecting appropriate regularization parameters remains difficult. To alleviate these challenges, new methodologies that fuse physics-based and learning-based approaches are needed. Hybrid frameworks that combine the strengths of both methods are promising, aiming to balance physical principles with data-driven insights. Developing such frameworks requires addressing the formulation of effective objective functions and creating efficient solvers.
Regularization approaches are vital for solving inverse problems and can integrate measurement-physics-based methods with learning-based algorithms. This study will use this method as a theoretical foundation to design a novel image reconstruction model that integrates measurement physics, optimization principles, and advanced machine learning methods to adapt to different imaging scenarios and reconstruction requirements. To achieve this goal, we need to address several critical challenges. How can we design an effective reconstruction model that minimizes the impact of measurement noise and enables adaptive model parameter selection? How can we mine and integrate more valuable prior information to improve reconstruction quality? How can we solve the reconstruction model?
To address the first challenge, we formulate the image reconstruction problem as a multi-objective robust optimization model. The framework is structured with two hierarchical components: an upper-level optimization problem (ULOP) and a lower-level optimization problem (LLOP). The ULOP, framed as a multi-objective optimization problem, focuses on estimating the optimal values of model parameters, using the mean of and variance in performance metrics as objective functions to ensure robust performance under uncertainty. The LLOP, framed as a single-objective optimization problem, utilizes the solution from the ULOP as input parameters to execute the image reconstruction task. This new model not only effectively handles uncertainties but also enhances the automation of the image reconstruction process by adaptively learning and adjusting model parameters.
To ensure the robustness of the image reconstruction process against uncertainties and varying conditions, the ULOP in the new image reconstruction model is formulated as a multi-objective optimization problem whose objectives are the mean of and variance in the performance function. The core principle of this design is to achieve optimal performance while minimizing performance fluctuations under possible disturbances and uncertainties. The mean represents the system’s average performance across specified parameter conditions, such as different levels of interference or noise in the measurement process; minimizing it ensures that the system performs well in volatile environments or under varying input conditions. The variance measures the volatility or instability of system performance: a lower variance indicates lower sensitivity to input changes or noise, and thus stronger robustness, allowing the system to resist fluctuations in the input data and maintain stable output. By minimizing both objectives simultaneously, this multi-objective approach provides a more comprehensive and effective solution for image reconstruction, allowing the process to maintain good performance in complex and changing environments.
To address the second key challenge, we develop a multimodal neural network (MNN) to infer prior images, defined as the multimodal learning prior image (MLPI). This method improves reconstruction quality by enhancing the quality of the MLPI. The MNN employs a dual-channel input strategy (image and capacitance data) to capture cross-modal associations, enabling more accurate predictions. By integrating multimodal information, the MNN overcomes the limitations of a single data source, enhancing the model’s inference capability and robustness while minimizing information loss.
In this new reconstruction model, we propose an innovative LLOP. The uniqueness of the LLOP lies in its integration of measurement physics principles, the MLPI, and sparsity priors. To maximize the effectiveness of the LLOP, we design an efficient solver that decreases reconstruction time without compromising the quality of the reconstructed images.
We propose a new imaging model presented in the form of a multi-objective bilevel optimization problem. Although this innovative modeling approach opens new avenues for improving imaging accuracy and robustness, it also introduces new computational challenges, which constitute the third challenge in our study. Our research introduces an innovative nested algorithm architecture that implements an alternating iteration mechanism to separate and independently solve the ULOP and the LLOP. This hybrid method not only alleviates the inherent computational difficulties of the bilevel optimization structure but also handles the complexity arising from multi-objective optimization. Additionally, the proposed algorithm exhibits good scalability and adaptability. It is not only suitable for solving problems of varying scale and complexity but can also integrate existing optimization algorithms to directly solve bilevel optimization problems without the need to specially design specific bilevel optimization solvers. This flexibility implies that our approach has the potential for wide application across various real-world scenarios.
Our research provides a comprehensive solution to mitigate the challenges in image reconstruction tasks and improve overall reconstruction performance through the extraction and integration of prior information, design and solution of objective functions, adaptive learning of model parameters, and integration of multimodal supervised learning methods and optimization principles. We summarize the main contributions and novelty of this study as follows:
(1) We model the image reconstruction problem as a multi-objective robust optimization problem. This new model integrates advanced optimization methods, multimodal learning, and measurement physics to ensure reliable reconstruction performance under uncertain conditions. By integrating supervised learning methods and optimization principles, this model not only improves reconstruction accuracy and robustness but also provides a systematic framework that facilitates parameter tuning and optimizes reconstruction quality in practical scenarios, thereby enhancing the automation level of the image reconstruction process.
(2) The MLPI, sparsity prior, and measurement principles are integrated into a new LLOP to improve the image reconstruction quality. The MLPI captures the statistical characteristics of objects, serving as both a diverse prior image in the model and an ideal initial value for iterations, thus enhancing algorithm convergence. This integration achieves the fusion of measurement physics with multimodal learning. Additionally, a new optimizer is designed for this LLOP to reduce computational complexity.
(3) We design a new MNN to infer the MLPI. This new MNN performs inference tasks with image and capacitance data as input. It fully utilizes information from different modalities, compensating for the limitations of single-modal information, enhancing the model’s reasoning ability, reducing potential information loss when using a single modality, and improving the model’s overall performance.
(4) We design an algorithm capable of effectively solving the proposed multi-objective robust optimization problem with a hierarchical optimization structure. By employing an alternating optimization strategy, this nested algorithm decomposes the complex bilevel optimization problem into two relatively independent but interrelated single-level optimization problems for solving. This approach not only reduces the complexity of the solution but also effectively handles the interdependencies between the ULOP and the LLOP.
(5) Performance evaluation results indicate that our proposed multi-objective robust optimization imaging algorithm significantly enhances the overall image reconstruction performance compared to currently popular algorithms. It maintains stable performance across different targets, with improvements in various imaging performance metrics and increased model automation. This approach revolutionizes image reconstruction by integrating optimization principles with advanced multimodal learning techniques, filling research gaps and advancing measurement technology for greater practical value.
To address the challenges of image reconstruction, this study proposes a systematic solution. The structure of this paper is summarized as follows: Section 2 reviews and analyzes major image reconstruction algorithms, laying the theoretical foundation for subsequent research. Section 3 models the imaging problem as a multi-objective robust optimization problem, introducing a new perspective for alleviating imaging challenges in complex scenarios. On this basis, Section 4 proposes an innovative optimization technique to solve the multi-objective robust optimization model. In Section 5, a novel MNN for inferring MLPIs is designed. The overall reconstruction methodology and its computational workflow are summarized in Section 6. Section 7 fully demonstrates the superiority of the proposed method through performance evaluation. Finally, Section 8 concludes the research findings and highlights this study’s significant value in both theoretical and applied domains.

2. Related Work

Image reconstruction algorithms play a crucial role in determining the effectiveness of the measurement technology. In this section, we provide a comprehensive overview of the popular reconstruction algorithms, outlining their fundamental technical principles and discussing their advantages and limitations. We aim to summarize the key factors contributing to the success of these algorithms in specific applications, as well as the challenges they face in broader applications.
Our discussion starts with a review of classical iterative approaches that exclude regularization terms, such as the Kaczmarz algorithm [1] and the Landweber algorithm [2]. These methods are widely recognized for their simple iterative structure, low memory consumption, and high computational efficiency. However, these two algorithms fail to incorporate prior information about reconstruction objects, which limits their ability to produce high-quality images. To address this limitation and enhance the stability of numerical solutions, iterative regularization methods have been developed. These methods employ either single [3,4,5,6,7] or multiple [8,9,10,11,12,13,14,15,16] regularization terms to better utilize prior information, narrow the solution space, ensure numerical stability, and enhance reconstruction quality and robustness. However, they rely on fixed model components and prior insights derived from theoretical analysis and empirical knowledge, which may not adequately capture the complexity of real-world reconstruction scenarios. Solving non-convex or non-smooth regularization models further complicates the development of numerical algorithms. In addition, choosing regularization parameters continues to be an unresolved problem. Recent advancements have witnessed the development of novel imaging algorithms [17,18,19] aimed at improving imaging quality.
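To make the classical iterative structure concrete, the Landweber iteration mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the matrix A, data b, and relaxation factor omega are placeholders.

```python
import numpy as np

def landweber(A, b, iters=500, omega=None):
    """Classical Landweber iteration for Au ~ b (no regularization term)."""
    m, n = A.shape
    if omega is None:
        # Convergence requires 0 < omega < 2 / ||A||_2^2
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(n)
    for _ in range(iters):
        # Gradient step on 1/2 * ||Au - b||^2
        u = u + omega * A.T @ (b - A @ u)
    return u
```

For a well-conditioned system the iterates converge to the least-squares solution; for the ill-posed tomography problem, early stopping acts as implicit regularization, which is precisely the limitation the regularized methods above address.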
Non-iterative algorithms are a key category in image reconstruction methods, known for their lower computational complexity compared to iterative methods, making them advantageous for online reconstruction, particularly in real-time scenarios [20,21]. These algorithms simplify physical or mathematical models to enhance computational efficiency but at the cost of precision, being more sensitive to noise and model errors. Despite these limitations, they are valuable in conditions requiring high real-time responsiveness but lower precision due to their efficiency and speed. Thus, it is important to balance the pros and cons of non-iterative algorithms based on specific measurement goals and conditions to choose the most suitable reconstruction method.
Evolutionary algorithms, as population-based methods, have attracted increasing attention due to their independence from the differentiability of the objective function, allowing application across diverse scenarios [22,23]. They excel in complex problems but face limitations such as slower convergence and reduced performance in high-dimensional tasks, like image reconstruction with many pixel variables. The effectiveness of evolutionary algorithms can also be hindered by challenging parameter settings. Despite these issues, improvement techniques are being developed to enhance their flexibility and adaptability in complex scenarios. Evolutionary multi-objective optimization algorithms are particularly effective for balancing multiple objectives in inverse problems but share limitations with single-objective approaches [24,25]. New methods have been introduced to mitigate these issues [26,27], offering new insights for advancing image reconstruction algorithm design.
Generative models have been used to solve inverse problems, with different variants developed for various tasks [28,29,30,31,32]. Their main advantage is the ability to learn complex probability distributions, capturing both global and local image details. They generalize well to diverse challenges but face limitations like high computational costs, training instability, and reliance on large and high-quality datasets. Biased or unbalanced data can affect performance, and the results may be unpredictable, increasing usage risks.
Surrogate optimization uses surrogate models to approximate the objective function, reducing computational loads [33,34]. It excels with non-convex or non-smooth functions and allows the integration of various models and techniques. However, constructing high-quality surrogate models requires significant prior knowledge and training data, adding computational burdens. The performance depends on the distribution and quantity of initial samples, and poor sampling can bias the model. As dimensionality increases, the training difficulty and costs rise, and potential inaccuracies may lead to local optima. Despite these challenges, surrogate optimization shows great potential in image reconstruction.
Deep learning, a powerful data-driven approach, excels in learning the complex nonlinear relationship between the capacitance and dielectric constant, offering efficient solutions for image reconstruction and inverse problems [35,36,37,38]. Unlike traditional methods, it enables automated feature extraction and representation learning, significantly advancing electrical capacitance tomography [35,36,37,38] and other fields [39,40,41,42,43,44,45,46]. Deep learning models optimize parameter selection and address high real-time requirements by training on large datasets. However, challenges remain, including difficulty in capturing causal relationships, the risk of over-fitting with insufficient or low-quality data, and weak adaptability to changing environments [47]. Changes in measurement scenarios may require retraining, increasing costs and limiting practical applications in dynamic conditions. Despite these challenges, deep learning holds promise for improving intelligent measurement and imaging technologies, though improvements are needed in causality capture, environmental adaptability, and generalization capability.
Several advanced techniques integrate deep learning with iterative methods, such as plug-and-play prior [48], algorithm unrolling [49,50], and regularization by denoising [51]. These approaches achieve performance improvements for complex tasks such as image reconstruction. The plug-and-play prior provides a versatile framework for various applications through the flexible integration of prior information. The algorithm unrolling method converts optimization algorithms into neural networks, allowing end-to-end training and improved efficiency. Regularization by denoising uses denoisers to design the regularization term in order to bolster model robustness in noisy environments. Despite their success, these methods face challenges like high computational complexity and immature theoretical foundations, necessitating further exploration to improve effectiveness in practical applications [52].
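The plug-and-play idea described above can be sketched in a few lines: a gradient step on the data-fidelity term alternates with a denoising step that plays the role of the proximal operator. In this hedged sketch, a toy neighbor-averaging smoother stands in for the learned denoiser that a real plug-and-play system would use; all names and parameters are illustrative.

```python
import numpy as np

def gaussian_denoise(u, strength=0.5):
    """Toy stand-in for a learned denoiser: neighbor averaging on a 1-D signal."""
    padded = np.pad(u, 1, mode="edge")
    smoothed = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return (1 - strength) * u + strength * smoothed

def pnp_reconstruct(A, b, iters=100, omega=0.1):
    """Plug-and-play iteration: data-fidelity gradient step, then a denoising step."""
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = u - omega * A.T @ (A @ u - b)  # gradient step on the data term
        u = gaussian_denoise(u)            # denoiser replaces the proximal operator
    return u
```

Swapping the denoiser (e.g., for a trained network) changes the implicit prior without touching the data-fidelity step, which is the flexibility the framework is valued for.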
The algorithms discussed have distinct advantages, limitations, and applicable conditions, showing performance variability across real-world scenarios. However, they fall short of achieving fully automated and precise image reconstruction. Currently, such processes depend on simplified analysis and empirical adjustments, limiting algorithm adaptability and performance under complex conditions. To improve image reconstruction quality, innovation in imaging strategies is crucial, rather than refining existing methods. This research aims to achieve this by introducing multidisciplinary approaches combining advanced optimization, multimodal learning, and measurement physics to enhance image reconstruction accuracy. Further technical details will be explored in subsequent sections.

3. Multi-Objective Robust Optimization Imaging Model

Robustness enhancement in image reconstruction is essential. We frame the reconstruction process as a multi-objective robust optimization problem to maintain performance amidst uncertainty, presenting the theoretical framework and technical details in this section.

3.1. Imaging Model

The inverse problem in this measurement technology seeks the unknown permittivity distribution, $u$, from the measured capacitance, $b$, and the sensitivity matrix, $A$. This process can be formulated as [3]:

$$A u = b + \varepsilon \quad (1)$$

where $\varepsilon$ denotes the noise vector; $A$ is an $m \times n$ matrix; and $u$, $b$, and $\varepsilon$ are vectors of sizes $n \times 1$, $m \times 1$, and $m \times 1$, respectively.
To achieve a stable and physically meaningful solution for the ill-posed inverse problem formulated in Equation (1), the incorporation of prior knowledge becomes a critical necessity in computational mathematics. This requirement stems from the inherent non-uniqueness and sensitivity to measurement errors that characterize such inverse problems. Among various methodologies, the regularization technique emerges as a mathematically rigorous framework that systematically integrates prior constraints through the construction of a well-posed optimization paradigm. The generalized formulation can be defined by the following:
$$\min_{u}\; f(Au, b) + \sum_{j=1}^{\gamma} \alpha_j \,\mathrm{Reg}_j(u) \quad (2)$$

where $\mathrm{Reg}_j(u)$ represents the $j$th regularization term; $f(Au, b)$ is the data fidelity metric; $\alpha_j > 0$ is the regularization parameter; and $\gamma$ defines the number of regularizers.
As a single-objective optimization problem, Equation (2) demonstrates strong interpretability and scalability, making it particularly suitable for bridging model-based approaches with learning-based algorithms. The advantages of this model lie not only in its intuitive mathematical structure but also in providing a flexible framework that can be adjusted and extended according to specific needs. In practical applications, the single-objective optimization model can be adapted to diverse problems by introducing different data fidelity terms and regularization terms, thereby enhancing its applicability. Furthermore, this model can be effectively solved by utilizing existing algorithms.
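As a concrete instance of the framework in Equation (2), taking a least-squares fidelity term and a single quadratic regularizer $\mathrm{Reg}(u) = \|u\|^2$ yields classical Tikhonov regularization, whose minimizer has the closed form $(A^{\top}A + \alpha I)^{-1} A^{\top} b$. The following minimal sketch (with illustrative $A$, $b$, $\alpha$) shows this special case; the paper's own model uses different regularizers.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Closed-form minimizer of ||Au - b||^2 + alpha * ||u||^2 (Tikhonov)."""
    n = A.shape[1]
    # Normal equations with a ridge term stabilizing the ill-posed system
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

As $\alpha \to 0$ the solution approaches the unregularized least-squares estimate; larger $\alpha$ trades data fit for stability, illustrating why parameter selection matters.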

3.2. Multi-Objective Optimization Problem

Multi-objective optimization aims to address multiple interdependent objective functions. It seeks a set of non-dominated optimal solutions (Pareto-optimal solutions) by balancing the trade-offs between objectives. This optimization paradigm not only reveals the intrinsic relationships and constraints among objective functions but also provides diverse decision support for complex system modeling. The unconstrained multi-objective optimization problem can be represented by the following mathematical model [53]:
$$\min\; F(x) = \left( f_1(x),\; f_2(x),\; \ldots,\; f_{K_{\mathrm{mop}}}(x) \right) \quad (3)$$

where $f_j(x)$ is the $j$th objective function, and $K_{\mathrm{mop}}$ represents the number of objective functions.
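The notion of non-dominated (Pareto-optimal) solutions mentioned above can be made concrete with a small dominance check. This sketch is a generic illustration of Pareto dominance for minimization problems, not part of the paper's algorithm.

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the vectors (1, 2), (2, 1), (2, 2), and (3, 3), only the first two are non-dominated: each trades one objective against the other, while the last two are worse in both.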

3.3. Bilevel Optimization Problem

Bilevel optimization is designed to address intricate problems characterized by two interdependent optimization problems, namely the ULOP and the LLOP. This framework effectively models hierarchical dependencies. The bilevel optimization problem can be represented by the following mathematical model [54,55]:

$$\min_{x_u}\; F_u(x_u, x_l) \quad \text{s.t.} \quad x_l \in \arg\min_{x_l}\; f_l(x_u, x_l) \quad (4)$$

where the subscripts $u$ and $l$ refer to the ULOP and the LLOP, respectively; $F_u(\cdot)$ and $f_l(\cdot)$ are their objective functions; and $x_u$ and $x_l$ are the corresponding decision variables.
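A toy scalar example clarifies the hierarchy: the lower level returns its optimum as a function of the upper-level variable, and the upper level optimizes over that response. This sketch uses an artificial pair of quadratics (not the paper's model) whose nested optimum can be verified by hand: the lower level gives $x_l^*(x_u) = x_u$, so the upper objective becomes $(x_u - 1)^2 + x_u^2$, minimized at $x_u = 0.5$.

```python
import numpy as np

def lower_solution(x_u):
    """LLOP: argmin over x_l of (x_l - x_u)^2, solved in closed form."""
    return x_u

def upper_objective(x_u):
    """ULOP objective evaluated at the lower-level optimum."""
    x_l = lower_solution(x_u)
    return (x_u - 1.0) ** 2 + x_l ** 2

# Grid search over the upper-level variable; the nested optimum is x_u = 0.5.
grid = np.linspace(-2, 2, 401)
x_u_best = grid[np.argmin([upper_objective(x) for x in grid])]
```

In realistic bilevel problems the lower level has no closed form and must itself be solved numerically for each upper-level candidate, which is the computational difficulty the paper's nested algorithm targets.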

3.4. Proposed Multi-Objective Robust Optimization Imaging Model

The innovation of this section lies in the development of a multi-objective robust optimization imaging model, along with a comprehensive discussion of its theoretical performance.

3.4.1. Conceptualized Multi-Objective Robust Optimization Imaging Model

In the optimization model in Equation (2), the regularization parameter is a key factor because the solution is sensitive to its configuration. To achieve greater automation in the reconstruction model, an effective strategy is to infer the regularization parameter from the collected data. By leveraging the principles of multi-objective optimization and bilevel optimization, along with labeled data $\{(b_j, u_{T,j})\}_{j=1}^{M_o}$, we can determine the model parameters by solving the following multi-objective bilevel optimization problem:

$$\min_{\alpha}\; \left( f_1(\alpha),\; f_2(\alpha) \right) \quad \text{s.t.} \quad u(\alpha) = \arg\min_{u}\; f(Au, b_j) + \sum_{i=1}^{\gamma} \alpha_i \,\mathrm{Reg}_i(u) \quad (5)$$

where $b_j$ stands for the $j$th capacitance vector; $\arg\min_{u} f(Au, b_j) + \sum_{i=1}^{\gamma} \alpha_i \mathrm{Reg}_i(u)$ defines the LLOP and is a regularization imaging model; and the ULOP is defined by $\min_{\alpha} (f_1(\alpha), f_2(\alpha))$, where $f_1(\alpha)$ and $f_2(\alpha)$ represent its objective functions.

3.4.2. Lower-Level Optimization Problem

Leveraging the mathematical foundation of regularization theory, we propose a unified framework that synergistically integrates multimodal learning capabilities through the introduction of the MLPI and sparsity-driven prior. This integration culminates in the formalization of the LLOP, which is rigorously defined as the following optimization formulation:
$$\min_{u}\; f(Au, b) + \alpha_1 \mathrm{Reg}_1(u) + \alpha_2 \mathrm{Reg}_2(u) \quad (6)$$

where $f(Au, b)$ defines the data fidelity criterion, and $\mathrm{Reg}_1(u)$ and $\mathrm{Reg}_2(u)$ are regularization terms that model the MLPI and the sparsity prior, respectively.
Owing to its quadratic structure, the least squares method exhibits convexity. Leveraging this benefit, we use this method as the data fidelity term, which can be mathematically expressed as follows:
$$f(Au, b) = \tfrac{1}{2}\| Au - b \|^2 \quad (7)$$
In order to improve the reconstruction quality and promote the application of multimodal learning in image reconstruction, we integrate the MLPI into the reconstruction model in the form of a regularization term. It can be expressed as the following mathematical model:
$$\mathrm{Reg}_1(u) = \| u - u_p \|_1 \quad (8)$$

where $u_p$ denotes the MLPI, which can be predicted by the proposed MNN.
Integrating the MLPI into the regularized image reconstruction model offers several benefits. Firstly, the MLPI learns complex image features and prior information from collected data, enhancing detail recovery and robustness in noisy conditions by alleviating ill-posedness. This integration allows the algorithm to utilize current measurements and latent data information, optimizing multi-source heterogeneous information and improving imaging quality. Secondly, the MLPI uses a pre-training strategy for near-real-time prior information inference once trained, increasing the reconstruction speed and computational efficiency, reducing the iterations and time compared to traditional methods. Thirdly, the adaptable regularization term in Equation (8) provides a flexible framework for incorporating various priors from multi-sensor measurements and simulation data. This enhances the model’s ability to integrate heterogeneous data, offering a more robust reconstruction architecture.
To further improve the reconstruction performance, this study also introduces the sparsity prior and formulates a regularization framework for image prior fusion. Specifically, the sparsity prior can be integrated into the reconstruction model by the following regularizer:
$$\mathrm{Reg}_2(u) = \| W u \|_1 \quad (9)$$

where $W$ is a nonnegative weighting matrix.
Based on the developed regularization framework incorporating both the customized data fidelity term and regularization terms, we formulate the novel LLOP as follows:
$$\min_{u}\; \tfrac{1}{2}\| Au - b \|^2 + \alpha_1 \| u - u_p \|_1 + \alpha_2 \| W u \|_1 \quad (10)$$
The distinguishing aspect of the LLOP, as opposed to conventional compound regularization methods, lies in its incorporation of the MLPI. This inclusion enhances the model’s efficacy in handling intricate scenarios while bridging measurement physics and machine learning.
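To illustrate how the MLPI enters a solver for the lower-level problem above, the following proximal-gradient (ISTA-style) sketch handles a simplified instance with the sparsity term dropped ($\alpha_2 = 0$) for clarity; the proximal operator of $\alpha \| u - u_p \|_1$ is soft-thresholding shifted by the prior image. This is not the paper's custom optimizer, only a hedged sketch under these simplifying assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_prior(A, b, u_p, alpha, iters=300):
    """Proximal gradient for 1/2 ||Au - b||^2 + alpha ||u - u_p||_1.
    The prior image u_p also serves as the initial iterate, mirroring
    the MLPI's dual role described in the text."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the data-term gradient
    u = u_p.copy()                          # prior image as initial value
    for _ in range(iters):
        g = A.T @ (A @ u - b)               # gradient of the data term
        v = u - g / L                       # gradient step
        # prox of (alpha/L)*||u - u_p||_1: shift, soft-threshold, shift back
        u = u_p + soft_threshold(v - u_p, alpha / L)
    return u
```

Handling both L1 terms jointly (as in the full LLOP) requires a splitting scheme such as ADMM, since their combined proximal operator has no simple closed form; that is one motivation for designing a dedicated solver.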

3.4.3. Upper-Level Optimization Problem

The purpose of the ULOP is to estimate model parameters. Unlike conventional models, our model aims to optimize the performance in the presence of uncertainty. In order to improve the robustness of the model, our ULOP minimizes both the mean of and the variance in the performance function, which can be defined as the following multi-objective optimization problem [56,57,58,59]:
$$\min_{\alpha}\; \left( \mathrm{mean}(R(\alpha)),\; \mathrm{var}(R(\alpha)) \right) \quad (11)$$

where $R(\alpha) = \sum_{j=1}^{M_o} \| u_j - u_{T,j} \|^2$ is called the performance indicator, and $\mathrm{mean}(R(\alpha))$ and $\mathrm{var}(R(\alpha))$ represent its mean and variance. To account for the uncertainty of the measurement data, we perturb the capacitance data with noise and compute the mean of and variance in the performance metric. In practical applications, the capacitance noise level does not exceed 15%; the definition of the noise level is given in [16].
In the ULOP, the mean value represents the system’s expected performance under defined parameters, ensuring optimal operation across varying environmental conditions. Conversely, the variance quantifies performance fluctuations, where reduced values indicate lower sensitivity to input perturbations and consequently greater robustness. This multi-objective optimization framework provides a comprehensive solution for electrical capacitance tomography, maintaining robust performance in dynamic and complex environments.

3.4.4. New Multi-Objective Robust Optimization Imaging Model

Drawing on the defined LLOP and ULOP, we reframe the imaging problem as a novel multi-objective bilevel optimization problem:
$$\min_{\alpha}\; \left( \mathrm{mean}(R(\alpha)),\; \mathrm{var}(R(\alpha)) \right) \quad \text{s.t.} \quad u(\alpha) = \arg\min_{u}\; \tfrac{1}{2}\| Au - b_j \|^2 + \alpha_1 \| u - u_{p,j} \|_1 + \alpha_2 \| W u \|_1 \quad (12)$$
This new model integrates supervised learning methods, multi-objective optimization, robust optimization, and bilevel optimization principles. This is reflected in two aspects. Firstly, the LLOP uses optimization principles to achieve image reconstruction given the model parameters. The ULOP employs multi-objective learning principles to adjust the parameters of the LLOP to meet the requirements of multiple performance metrics. In the ULOP, robustness is used as an optimization goal to ensure that the model maintains good performance in uncertain environments. This integrated approach not only enhances the model’s reconstruction accuracy and robustness but also provides a systematic framework that aids in parameter tuning and performance optimization for practical applications. Secondly, the MLPI is integrated into the reconstruction model, playing a dual role. On the one hand, it is incorporated as a prior constraint term within the regularization framework of the imaging model to improve reconstruction quality. On the other hand, it serves as a high-quality initial solution for iteration, improving the convergence characteristics of the optimization algorithm.
This multi-objective robust optimization model is established based on the principles of multi-objective optimization, bilevel optimization, and robust design optimization. It integrates supervised learning methods and optimization principles, fully leveraging their complementary advantages to ensure the model can better handle complex and volatile application scenarios. The multi-objective bilevel optimization model can simultaneously consider multiple performance metrics and different levels of optimization needs, resulting in more comprehensive and diverse optimal solutions. This helps in selecting the most suitable parameter configuration based on specific needs in practical applications.
By solving the optimization problem in Equation (12), we can obtain the optimal model parameters. Subsequently, when new capacitance data become available, these estimated parameters along with the inferred prior image can be used to solve the LLOP for image reconstruction.

4. Solution Method

Within this section, we propose a new optimizer for solving the LLOP and then design an efficient solver consisting of two nested optimization loops to solve the proposed multi-objective robust optimization problem in Equation (12).

4.1. The Solution of the Lower-Level Optimization Problem

Solving the LLOP in Equation (10) is challenging due to its non-smooth terms. Our solution involves developing a half-quadratic splitting algorithm [60,61] that converts the problem into the following more tractable form:
$$\min_{u,z}\ \psi(u,z)$$
where $\psi(u,z)$ is defined by the following:
$$\psi(u,z)=\frac{1}{2}\|Au-b\|^2+\alpha_1\|z\|_1+\alpha_2\|Wu\|_1+\frac{\mu}{2}\|z-(u-u_p)\|^2$$
where $\mu$ is a positive penalty parameter, and $z$ is an auxiliary variable introduced to separate the non-smooth term.
To solve for the variables $u$ and $z$, we decompose the optimization problem in Equation (13) into the following two sub-problems:
$$z^{k+1}=\arg\min_{z}\ \psi(u^k,z)$$
$$u^{k+1}=\arg\min_{u}\ \psi(u,z^{k+1})$$
Based on the optimization problem in Equation (13), we can specify the optimization problems in Equations (15) and (16) as follows:
$$z^{k+1}=\arg\min_{z}\ \alpha_1\|z\|_1+\frac{\mu}{2}\|z-(u^k-u_p)\|^2$$
$$u^{k+1}=\arg\min_{u}\ \frac{1}{2}\|Au-b\|^2+\alpha_2\|Wu\|_1+\frac{\mu}{2}\|z^{k+1}-(u-u_p)\|^2$$
Utilizing the soft threshold algorithm, we derive the update scheme for the optimization problem in Equation (17):
$$z^{k+1}=\operatorname{shrink}\big(u^k-u_p,\ \alpha_1/\mu\big)$$
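The shrink (soft-threshold) operator used in this update can be written in a few lines. This is a minimal pure-Python sketch (the paper's implementation is in MATLAB); the function name follows the notation above:

```python
import math

def shrink(x, t):
    """Element-wise soft-thresholding: shrink(x, t)_i = sign(x_i) * max(|x_i| - t, 0)."""
    return [math.copysign(max(abs(xi) - t, 0.0), xi) for xi in x]

# Entries with magnitude below the threshold are zeroed; the rest move toward zero by t.
z = shrink([3.0, -0.5, 1.2], 1.0)
```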
We use the forward–backward splitting algorithm [62,63] to solve the sub-problem of u in Equation (18), and this yields the following:
$$u^{k+1/2}=\arg\min_{u}\ \alpha_2\|Wu\|_1+\frac{1}{2\delta}\|u-q^{k+1}\|^2$$
where q k + 1 is an auxiliary vector calculated as follows:
$$q^{k+1}=u^k-\delta\,\nabla f_M(u^k)$$
where $\delta$ is a positive step size parameter, and $f_M(u)$ is defined as follows:
$$f_M(u)=\frac{1}{2}\|Au-b\|^2+\frac{\mu}{2}\|z^{k+1}-(u-u_p)\|^2$$
To improve convergence, we adopt the following acceleration scheme [64]:
$$u^{k+1}=u^{k+1/2}+\frac{\eta^k-1}{\eta^{k+1}}\big(u^{k+1/2}-u^k\big)$$
where $\eta$ is updated as follows:
$$\eta^1=1,\qquad \eta^{k+1}=\frac{1+\sqrt{1+4(\eta^k)^2}}{2}$$
We summarize the above computational steps in Algorithm 1 to achieve image reconstruction. This new solver makes the computation of Equation (10) simpler and easier to use in practice.
Algorithm 1: Proposed optimizer for solving Equation (10).
1. Input: $A$, $b$, $\alpha_1$, $\alpha_2$, and $\mu$.
2. Initialization: $z^1$, $\eta^1$, and $u^1$.
3. Output: $u$.
4. For $k = 1, 2, \ldots$ until convergence do
  4.1 Update $z^{k+1}$ according to Equation (19).
  4.2 Update $q^{k+1}$ according to Equation (21).
  4.3 Update $u^{k+1/2}$ according to Equation (20).
  4.4 Update $\eta^{k+1}$ according to Equation (24).
  4.5 Update $u^{k+1}$ according to Equation (23).
5. End For
6. Return the optimal solution.
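The loop of Algorithm 1 can be sketched on a tiny dense problem. This is an illustrative pure-Python toy, not the authors' MATLAB code: we assume $W = I$ so that the proximal step in Equation (20) reduces to soft-thresholding (a general $W$ would need its own proximal operator), and all names, dimensions, and parameter values are our own.

```python
import math

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def shrink(x, t):
    return [math.copysign(max(abs(xi) - t, 0.0), xi) for xi in x]

def algorithm1(A, b, u_p, a1, a2, mu=1.0, delta=0.2, iters=500):
    """Sketch of Algorithm 1: half-quadratic splitting + accelerated
    forward-backward steps, assuming W = I."""
    n = len(A[0])
    At = [list(col) for col in zip(*A)]  # A transpose
    u = [0.0] * n
    eta = 1.0
    for _ in range(iters):
        u_old = list(u)
        # Step 4.1: z-update via Eq. (19)
        z = shrink([ui - pi for ui, pi in zip(u, u_p)], a1 / mu)
        # Gradient of f_M(u) = 0.5||Au-b||^2 + (mu/2)||z-(u-u_p)||^2
        resid = [ai - bi for ai, bi in zip(matvec(A, u), b)]
        grad = [g + mu * (ui - pi - zi)
                for g, ui, pi, zi in zip(matvec(At, resid), u, u_p, z)]
        # Step 4.2: q-update via Eq. (21)
        q = [ui - delta * gi for ui, gi in zip(u, grad)]
        # Step 4.3: proximal step of Eq. (20), with W = I
        u_half = shrink(q, a2 * delta)
        # Steps 4.4-4.5: momentum acceleration, Eqs. (23)-(24)
        eta_next = (1.0 + math.sqrt(1.0 + 4.0 * eta * eta)) / 2.0
        u = [uh + (eta - 1.0) * (uh - uo) / eta_next
             for uh, uo in zip(u_half, u_old)]
        eta = eta_next
    return u

# 2x2 toy problem: identity sensitivity matrix, zero prior image.
u = algorithm1([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], [0.0, 0.0], a1=0.3, a2=0.3)
```

On this toy problem the combined sparsity penalties drive the small coefficient exactly to zero while the large one is shrunk, illustrating the sparsity-promoting behavior of the LLOP.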

4.2. The Solution of the Upper-Level Optimization Problem

The ULOP is a multi-objective optimization problem. The NSGA-II algorithm, as a classical algorithm in the field of multi-objective optimization, exhibits numerous advantages [65,66,67]. Firstly, in terms of computational efficiency, the algorithm uses a fast non-dominated sorting strategy to reduce computational complexity, thus enhancing overall operational efficiency. Secondly, its unique crowding distance mechanism not only ensures population diversity but also guides the Pareto solutions to remain evenly distributed in the objective space, providing decision-makers with more diverse and balanced solutions. Finally, the algorithm shows excellent adaptability and robustness, effectively handling optimization problems of varying scales and complexity. These advantages make the NSGA-II algorithm a powerful tool for solving complex multi-objective optimization problems. Given these advantages, this study uses the algorithm to solve the ULOP.
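The crowding distance mechanism mentioned above can be illustrated with a short sketch. This follows the standard NSGA-II definition (per-objective normalized gap between each solution's neighbors along a sorted front, with boundary solutions assigned infinite distance); the function and variable names are our own, not from the paper:

```python
def crowding_distance(objectives):
    """Crowding distance for one non-dominated front (NSGA-II component).

    objectives[i] holds the objective values of solution i, e.g.
    (mean(R), var(R)) in the ULOP. Boundary solutions get infinite distance."""
    n = len(objectives)
    if n == 0:
        return []
    m = len(objectives[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][k])
        lo, hi = objectives[order[0]][k], objectives[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: no interior contribution
        for idx in range(1, n - 1):
            prev_v = objectives[order[idx - 1]][k]
            next_v = objectives[order[idx + 1]][k]
            dist[order[idx]] += (next_v - prev_v) / (hi - lo)
    return dist

# Three Pareto-optimal (mean, var) pairs: the extremes are always kept.
d = crowding_distance([(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)])
```

Larger distances mark solutions in sparsely populated regions of the objective space, which is how NSGA-II keeps the Pareto front evenly spread.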

4.3. The Solution of the Multi-Objective Robust Optimization Problem

We introduce a novel optimizer to solve the multi-objective robust optimization problem formulated in Equation (12). This optimizer leverages the complementary strengths of the NSGA-II algorithm and Algorithm 1 to achieve an efficient solution. The detailed computational workflow is outlined in Algorithm 2.
Algorithm 2: The solver for the multi-objective robust optimization problem in Equation (12).
1. The algorithm parameters are set and initialized.
2. The initial population is generated.
3. While not converged do
  3.1 The LLOP is solved using Algorithm 1, ensuring that the solution to the ULOP remains unchanged throughout the process.
  3.2 The ULOP is solved using the NSGA-II algorithm, ensuring that the solution to the LLOP remains unchanged throughout the process.
4. End While
5. Output the model parameters.
Algorithm 2 is a nested algorithm framework designed to solve bilevel optimization problems. It decomposes the bilevel optimization problem into two separate optimization problems, which are solved using the NSGA-II method and Algorithm 1. Through an alternating optimization strategy, this proposed nested algorithm effectively handles the complex interdependencies between the ULOP and the LLOP, thereby enhancing overall optimization performance and reducing computational complexity. Notably, the design of this nested algorithm boasts good modularity and scalability, allowing different algorithms to be used to solve the ULOP and the LLOP separately, facilitating future integration and expansion with other optimization algorithms or new technologies. This offers the potential for further improving algorithm performance and meeting more diverse optimization needs. Moreover, the optimal solution is selected from the Pareto set using the method proposed in [68].
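The alternating structure of Algorithm 2 can be illustrated with a deliberately simplified toy bilevel problem. In this sketch a closed-form scalar problem stands in for the LLOP (Algorithm 1) and a coarse grid search stands in for NSGA-II; all names, the noise model, and the scalarized selection are illustrative assumptions, not the paper's method:

```python
import random

def lower_level(alpha, b):
    """Toy stand-in for Algorithm 1: closed-form minimizer of
    0.5*(u - b)^2 + 0.5*alpha*u^2, i.e. u = b / (1 + alpha)."""
    return b / (1.0 + alpha)

def upper_objectives(alpha, b_clean=1.0, u_true=1.0, noise=0.15, trials=40, seed=1):
    """Mean and variance of the performance indicator under capacitance noise,
    evaluated with the lower-level solution held to the current alpha (step 3.1)."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        b = b_clean * (1.0 + noise * rng.uniform(-1.0, 1.0))
        errs.append((lower_level(alpha, b) - u_true) ** 2)
    mean = sum(errs) / len(errs)
    var = sum((e - mean) ** 2 for e in errs) / len(errs)
    return mean, var

# Outer loop (step 3.2): grid search stands in for NSGA-II; each candidate alpha
# is scored by re-solving the lower-level problem, mirroring the alternation.
candidates = [i * 0.05 for i in range(21)]
best_alpha = min(candidates, key=lambda a: sum(upper_objectives(a)))
```

The essential point the sketch preserves is the nesting: every upper-level candidate evaluation requires a full lower-level solve, which is why decomposing the two loops reduces the computational burden.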

5. Multimodal Neural Network

In the proposed reconstruction model, the MLPI has two important roles. In order to fully exploit the multimodal information, we introduce the MNN to infer the MLPI. This section details the MNN’s architecture and implementation.

5.1. The Proposed Multimodal Neural Network

Multimodal machine learning is a technique that integrates data from different sources and formats. Its advantage lies in the ability to comprehensively utilize various information sources, thereby improving the accuracy and robustness of models. This approach can overcome the limitations of single-modal data by integrating multiple types of data such as images, text, and audio, thereby enhancing feature representation capabilities [69,70].
We propose the MNN to enhance the inference accuracy of the MLPIs by jointly processing image and capacitance data. Its architecture includes input and output layers and multiple fully connected hidden layers (Figure 1). Unlike conventional networks, the MNN uniquely accepts dual-modal inputs (image and capacitance data), enabling cross-modal information fusion to surpass single-source limitations and boost accuracy.
Training the MNN represents a pivotal step, necessitating the solution to the following optimization problem:
$$\min_{\Theta}\ \sum_{i=1}^{S_{am}}\big\|\psi_{MDNN}(c_i;\Theta)-\theta_i\big\|_2^2$$
where $\Theta$ denotes the model parameters; $\{(c_i,\theta_i)\}_{i=1}^{S_{am}}$ defines the training sample pairs; and $\psi_{MDNN}(\cdot)$ stands for the forward propagation operator, whose form is determined by the network architecture.
The training workflow is summarized in Algorithm 3. The training of the MNN is in an offline mode, which means that all complex computations and model parameter optimization are conducted in the training phase, thus minimizing the computational burden in the inference phase. After sufficient training, the MNN is able to achieve high efficiency and speed in the inference process. Through the combination of offline training and efficient inference, the MNN shows strong potential for practical applications, not only in terms of computation time but also in terms of high accuracy and stability, providing a feasible solution to complex imaging problems.
Algorithm 3: MNN Training.
1. Input: Training samples { ( c i , θ i ) } i = 1 S a m .
2. Output: Θ .
3. Determine the network structure.
4. Initialize model parameters.
5. Solve Equation (25) using the stochastic gradient descent method.
This new MNN offers the following advantages:
(1) The MNN can integrate data from various modalities (e.g., image and capacitance data), effectively utilizing information from different sources. This fusion compensates for the limitations of single-modality information, enhances model performance, and reduces the potential information loss associated with using only one modality.
(2) Through its multi-layer structure, the MNN extracts features that capture complex data patterns and relationships. Unlike traditional methods that require manual feature design, the MNN can automatically learn useful features from raw multimodal data, reducing dependence on manual expertise.
(3) By integrating multimodal data, the MNN improves model performance. The complementary advantages of different modalities help reduce errors and biases associated with single modalities, providing more stable and reliable predictions. Moreover, the ability to leverage redundant information from multimodal data allows the MNN to maintain strong performance even in the presence of noise and incomplete data.

5.2. Prediction Procedure of the Multimodal Learning Prior Image

To improve the inference performance, we designed the MNN. The MNN takes image data and capacitance data as inputs and outputs the corresponding true dielectric constant distribution. By integrating multimodal information, we aim to improve the accuracy and efficiency of dielectric constant distribution inference. Based on prior theoretical analysis and experimental validation, the specific steps for the MLPI inference are summarized as follows:
(1) The core task of the training phase is to optimize the MNN using Algorithm 3. During this phase, the labeled dielectric constant distribution data are introduced to guide the model in learning the complex mapping relationships between the image and capacitance data and the dielectric constant distribution.
(2) Using existing imaging algorithms, the collected capacitance data are used to reconstruct a preliminary image. In this process, physical measurement data are converted into an image with spatial distribution characteristics through imaging algorithms, providing input for the subsequent inference tasks.
(3) The preliminarily reconstructed image, together with the corresponding capacitance data, is used as input for the trained MNN model. By fusing and reasoning over the features of the multimodal data, the model generates high-precision prediction results, completing the MLPI inference.

6. Proposed Multi-Objective Bilevel Optimization Imaging Method

By integrating multi-objective optimization, bilevel optimization, robust optimization, measurement theory, and multimodal learning, our study proposes an innovative multi-objective robust optimization reconstruction (MOROR) algorithm. Its computational workflow is summarized in Figure 2.
The MOROR algorithm provides a comprehensive and systematic approach for image reconstruction. It integrates advanced machine learning techniques and multi-objective optimization methods to achieve high-quality reconstruction results. As shown in Figure 2, the implementation of the MOROR algorithm consists of six key stages, each providing crucial support for robust and efficient image reconstruction:
(1) Training sample collection. The collected training samples are used to train the MNN, ensuring that the algorithm has good generalization capabilities in various scenarios.
(2) Training the MNN using Algorithm 3. Based on the collected training samples, Algorithm 3 is executed to train the MNN. This step establishes a predictive model that serves as the core foundation for image reconstruction. The MNN can capture complex patterns and relationships in the data, thus providing accurate prior image predictions for subsequent stages.
(3) Determining the optimal configuration of model parameters using Algorithm 2. At this stage, Algorithm 2 is executed to determine the optimal configuration of model parameters. This step involves solving a multi-objective robust optimization problem to ensure the robustness and accuracy of image reconstruction under uncertain conditions.
(4) Initial image reconstruction using the newly acquired capacitance data. After determining the optimal model parameters, the initial image reconstruction is implemented using the newly acquired capacitance data. This step generates a preliminary image that serves as the input of the trained MNN.
(5) Inferring the MLPI using the trained MNN. The trained MNN is used for inferring the MLPI, a step that plays a pivotal role in enhancing reconstruction quality. This is because the MLPI serves a dual purpose: it not only acts as valuable prior knowledge that can be integrated into the reconstruction model but also functions as an effective initialization for iterative optimization processes. By providing a high-quality starting point, the MLPI significantly accelerates the convergence rate of optimization algorithms while simultaneously improving the accuracy and reliability of the final solution. Furthermore, the incorporation of the MLPI helps mitigate common reconstruction challenges such as noise amplification and artifact generation, particularly in ill-posed or underdetermined problems.
(6) Conducting final image reconstruction using Algorithm 1. In the final stage, the algorithm combines the newly acquired capacitance data with the inferred MLPI and the determined model parameters and completes the final image reconstruction by executing Algorithm 1.
In the MOROR algorithm, the first three steps (i.e., training sample collection, MNN training, and model parameter optimization) can be performed offline, meaning that they do not occupy time during the image reconstruction process. This offline processing enhances the algorithm’s efficiency, especially in applications with high real-time requirements. Once the MNN training is completed, its inference process is efficient, allowing the MLPI to be completed in a very short time. Since the MLPI inference time is extremely brief, its impact on the overall reconstruction time of the algorithm is negligible. The majority of the algorithm’s execution time is consumed by the operation of Algorithm 1, which is responsible for the final image reconstruction.

7. Validation and Discussion

We have analyzed and discussed the MOROR algorithm theoretically, and this section will focus on evaluating its advantages numerically by comparing it with well-known imaging algorithms.

7.1. Compared Algorithms and Implementation Details

Our analysis centers on a comparison of widely used iterative algorithms, as outlined in Table 1. Since the MOROR algorithm is fundamentally iterative in nature, and non-iterative algorithms often fall short of the image reconstruction quality achieved by iterative approaches, our evaluation is limited to iterative methods.
In Table 1, the PRPCG algorithm is a well-known unconstrained optimization algorithm and has a faster convergence rate than gradient optimization algorithms. The PRPCG algorithm is used on the premise that the objective function must be differentiable. The TL1R, L1-2R, L1R, FL1R, and L1/2R algorithms are renowned for their emphasis on leveraging the sparsity prior of imaging objects. The L1TV, L1SOTV, ELNR, FOENR, and L1LR algorithms are compound regularization methods. The LRR algorithm, renowned for its utilization of the nuclear norm to integrate the low-rank prior of imaging targets, represents a widely recognized approach in low-rank regularization methodologies. The algorithms selected for comparison are highly prevalent and have been extensively applied in the field of electrical capacitance tomography. Their extensive adoption establishes them as both credible and representative benchmarks for assessing performance. All the algorithms have been developed using the MATLAB R2021a platform.
In the NSGA-II algorithm, the number of populations is set to 20, and the maximum evolution generation is 150.
The MNN consists of an input layer, an output layer, and two hidden layers. The input layer takes capacitance signals and initially reconstructed images as input, with 1090 neurons. The output layer represents the true distribution of dielectric constants and has 1024 neurons. Each of the two hidden layers contains 2180 neurons, and their activation functions are sigmoid functions. To improve training stability, we have introduced two batch normalization layers. We use the Adam algorithm for training with an initial learning rate of 0.0001.
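The concatenation of the two modalities and the forward pass through the sigmoid hidden layers can be sketched as follows. This is a pure-Python illustration with tiny dimensions (the paper uses 66 capacitance values + 1024 pixels = 1090 inputs, 2180-neuron hidden layers, and 1024 outputs); the batch normalization layers and Adam training are omitted, and all function names and weights are our own:

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def dense(x, W, b, act=None):
    """One fully connected layer: y = act(W x + b)."""
    y = [sum(wij * xj for wij, xj in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [act(v) for v in y] if act else y

def mnn_forward(capacitance, image, layers):
    """Forward pass of an MNN-style network: the two modalities are
    concatenated into one input vector; the output layer is linear."""
    x = list(capacitance) + list(image)
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b, act=None if i == len(layers) - 1 else sigmoid)
    return x

rng = random.Random(0)
def rand_layer(n_out, n_in):
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

# Tiny example: 2 capacitance values + 2 pixels -> two sigmoid hidden layers -> 3 outputs
layers = [rand_layer(8, 4), rand_layer(8, 8), rand_layer(3, 8)]
out = mnn_forward([0.2, 0.5], [1.0, 0.0], layers)
```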
The imaging domain is discretized into 32 × 32-pixel elements, establishing a parameter space of a dimensionality of 1024. Within this framework, the dimensionality of the sensitivity matrix governing the physical model is 66 × 1024, while the acquired capacitance measurements constitute a vector with a dimensionality of 66 × 1.

7.2. Evaluation Criterion

In this study, we select two widely recognized metrics, namely image error and correlation coefficient, as standards for evaluating reconstruction quality [16].
Assessing the consistency of reconstruction results is equally important in determining the reliability of an algorithm. Therefore, we pay special attention to the performance stability of the MOROR algorithm under different operating conditions. To this end, we calculate the variance in the image error and correlation coefficient. A smaller variance indicates greater stability and consistency of the algorithm during the reconstruction process, thus enhancing its reliability in practical applications.
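The two quality metrics and the stability indicator can be sketched in a few lines. These follow the definitions commonly used in the ECT literature (relative image error and Pearson correlation coefficient); the paper adopts the exact definitions from [16], so treat this as an illustrative sketch:

```python
import math

def image_error(u_rec, u_true):
    """Relative image error: ||u_rec - u_true|| / ||u_true||."""
    num = math.sqrt(sum((r - t) ** 2 for r, t in zip(u_rec, u_true)))
    den = math.sqrt(sum(t ** 2 for t in u_true))
    return num / den

def correlation_coefficient(u_rec, u_true):
    """Pearson correlation coefficient between reconstruction and ground truth."""
    n = len(u_rec)
    mr, mt = sum(u_rec) / n, sum(u_true) / n
    cov = sum((r - mr) * (t - mt) for r, t in zip(u_rec, u_true))
    sr = math.sqrt(sum((r - mr) ** 2 for r in u_rec))
    st = math.sqrt(sum((t - mt) ** 2 for t in u_true))
    return cov / (sr * st)

def variance(values):
    """Variance of a metric across test conditions, used as the stability indicator."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)
```

A perfect reconstruction yields an image error of 0 and a correlation coefficient of 1, while a small `variance` of either metric across operating conditions indicates consistent, reliable performance.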
To comprehensively test the adaptability of the MOROR algorithm under different environmental conditions, we simulate various noise conditions and present visualized images under these conditions. At the same time, we provide the corresponding image error and correlation coefficient values to further analyze the algorithm’s performance in the presence of noise interference. This approach allows us to identify the algorithm’s potential and shortcomings in terms of noise resistance.
In designing the experiments, we refer to the definitions of noise levels in [16] to ensure the comparability and scientific validity of the results. Through such meticulous and comprehensive evaluation, we gain a more thorough understanding of the MOROR algorithm’s performance across diverse application scenarios while clarifying its advantages and areas for improvement. This provides valuable guidance for the further optimization of the algorithm and lays the foundation for its practical application.

7.3. Sensor Details

The inherent flexibility of capacitance sensors offers a wide range of layout possibilities, enabling their adaptation to various application scenarios. Among the available configurations, sensors with 12 electrodes are commonly considered optimal for achieving reliable performance across diverse use cases. In accordance with this widely accepted approach, our study also simulates a sensor configuration with 12 electrodes. Specifically, the sensing domain in these simulations measures 80 mm × 80 mm, aligning with previous work in the field [78]. This configuration is selected to ensure a balance between sensitivity, spatial resolution, and practicality, making it suitable for a broad spectrum of experimental and industrial applications.

7.4. Reconstruction Objects

To evaluate the performance of our approach, we utilize the benchmark reconstruction objects in Figure 3. For simplicity, we will refer to Figure 3a through Figure 3e as reconstruction objects 1 through 5, respectively, in the following sections. These imaging objects are designed with black regions representing a high-permittivity medium and the remaining areas corresponding to a low-permittivity medium. The permittivity values for the high and low regions are set at 2.6 and 1.0, respectively. The geometric dimensions of these targets include circles with a 20 mm diameter and squares with 20 mm sides.

7.5. Results and Discussion

This section will present and analyze the reconstruction results, evaluate the noise resistance capabilities, and examine the imaging speed and stability of the reconstruction performance of the MOROR algorithm. These analyses aim to provide a comprehensive understanding of the algorithm’s practical applicability and reliability in real-world scenarios.

7.5.1. The Results of the Iterative Reconstruction Algorithms

The primary objective of this section is to conduct a comparison between the MOROR algorithm and iterative algorithms. We conduct an in-depth analysis of their performance across different reconstruction scenarios while systematically examining their respective strengths and limitations. The critical parameters used throughout the reconstruction procedure are delineated in Table 2. The selection of these parameters not only affects the accuracy and efficiency of the algorithms but also plays a decisive role in determining the reconstruction quality. Through this comparative analysis, we gain enhanced insights into the applicability scope of each algorithm and identify potential optimization directions for future improvements. In the MOROR algorithm, the algorithm parameters are estimated by solving Equation (12) with $W_{ii}^{k+1}=1/\big(|u_i^k|^3+10^{-10}\big)$. The quantitative visualization results are illustrated in Figure 4, offering a clear graphical representation of the comparative performance of the algorithms. To ensure a more objective and comprehensive evaluation, Table 3 and Table 4 provide detailed quantitative metrics, including image error and correlation coefficients. These metrics are critical for assessing the accuracy and consistency of the reconstruction results across different algorithms. By combining visual and quantitative analyses, we can identify the strengths and limitations of each algorithm, facilitating informed comparisons and highlighting areas for potential improvement.
From the results shown in Figure 4 and Table 3 and Table 4, it is evident that the algorithms being compared perform poorly in image reconstruction tasks and struggle to reconstruct high-quality images. In contrast, the MOROR algorithm performs excellently in image reconstruction tasks, achieving high-fidelity reconstruction. This conclusion is validated not only through subjective visual comparisons but also by the quantitative analysis results. The MOROR algorithm achieves a maximum image error as low as 0.39%, with a correlation coefficient peaking at 1. These numerical results clearly highlight the superior performance of the MOROR algorithm. Compared to existing reconstruction algorithms, the MOROR algorithm has made significant advancements in the design and solution of the reconstruction model and the exploration and integration of prior information, as well as the adaptive optimization of model parameters. The image reconstruction problem is modeled as a multi-objective robust optimization problem, which not only improves reconstruction accuracy and robustness but also provides a systematic reconstruction framework that facilitates parameter tuning and the optimization of reconstruction quality. In the MOROR algorithm, the LLOP integrates the MLPI, sparsity priors, and measurement principles, enhancing the diversity and complementarity of information in the reconstruction process, laying the foundation for reconstructing high-quality images. Specifically, the MLPI captures the statistical characteristics and patterns of the object to be reconstructed, not only integrating as a prior image into the reconstruction model, increasing the diversity and complementarity of prior information, but also serving as an ideal iterative initial value, improving the convergence of the algorithm. By integrating the MLPI, the fusion of measurement physics with multimodal learning is realized, contributing to a significant enhancement in reconstruction performance. 
The MOROR algorithm also realizes the adaptive selection of model parameters, facilitating the attainment of superior reconstruction results. Additionally, the adaptive learning of model parameters not only effectively eliminates the uncertainties caused by reliance on empirical selection methods, enhancing the model’s level of automation, but also significantly boosts the model’s ability to perform reconstruction tasks in complex environments.
We propose a new MNN to infer the MLPI. This new MNN performs inference tasks by simultaneously utilizing image and capacitance data. It fully leverages information from different modalities, compensating for the shortcomings of single-modality information and reducing potential information loss that may occur when only using one type of modality data, thereby improving the overall performance of the model. Notably, our proposed model can integrate any prior images, including inference information from any machine learning and deep learning methods, sensing information from other sensors, simulation information based on physical models, and more, without being limited to a specific type of algorithm. This enhances the model’s flexibility and adaptability. Moreover, we develop an efficient algorithm to solve the hierarchical multi-objective robust optimization problem. Both the visual results illustrated in Figure 4 and the quantitative data presented in Table 3 and Table 4 validate the efficacy of the introduced MNN and the newly designed optimizer.
Sparsity serves as a crucial prior in imaging, extensively utilized across numerous algorithms such as TL1R, L1R, FL1R, L1-2R, and L1/2R. This prior holds significant theoretical importance, helping algorithms to process image data more effectively and enhance image quality. However, extensive computational results indicate that in practical applications of electrical capacitance tomography, this prior information does not always perform well. For the reconstruction targets in Figure 3, we conduct a detailed comparative analysis of these reconstruction algorithms with different sparsity characteristics. The results show that the maximum image errors for these algorithms are 12.14%, 16.13%, 15.82%, 17.55%, and 13.75%, while the minimum correlation coefficients are 0.9029, 0.8984, 0.9306, 0.9229, and 0.9316, respectively. These data indicate that the quantitative image errors are substantial, rendering them incapable of precisely representing the spatial distribution of dielectric constants. This not only compromises the image quality but also leads to erroneous conclusions. In scenarios requiring high-precision reconstruction, these algorithms obviously cannot meet the demands. Although the sparsity prior has demonstrated certain advantages, there are still several issues in practical applications that need continuous research and improvement to meet the requirements of applications with high reconstruction precision.
The low-rank prior is another widely employed form of prior knowledge. The LRR algorithm incorporates the low-rank characteristics of the reconstruction target by utilizing the nuclear norm. It is difficult to fully characterize complex objects with the LRR method, and it tends to produce overly smooth solutions, making edges and subtle structures unclear. From the visual results in Figure 5d and the quantitative results provided in Table 3 and Table 4, it can be seen that the LRR algorithm cannot achieve high-quality reconstruction. For the reconstruction objects in Figure 3a–e, the image errors are 11.78%, 15.16%, 9.45%, 8.30%, and 10.93%, with correlation coefficients of 0.8948, 0.8970, 0.9751, 0.9825, and 0.9577, respectively. This result further underscores that relying exclusively on low-rank priors is inadequate for addressing intricate reconstruction challenges. Incorporating additional and more robust prior knowledge and enhancing the automation capabilities of models during the image reconstruction process can significantly enhance imaging quality. The effectiveness of the MOROR algorithm further validates this insight.
We also demonstrate the reconstruction results of the PRPCG algorithm. As seen in Figure 4e, this algorithm has certain limitations, posing challenges for achieving high-quality reconstruction. For the reconstruction objects in Figure 3, the PRPCG algorithm’s maximum image error reaches 16.88%, and the minimum correlation coefficient is 0.8693. In electrical capacitance tomography, leveraging prior information to improve reconstruction quality is essential. However, the PRPCG algorithm, being a gradient-based optimization algorithm, fails to incorporate such prior knowledge. Consequently, it falls short of reconstructing high-quality images. Enhancing the performance of the PRPCG algorithm requires additional research and development to align with the demands of real-world applications.

7.5.2. The Results of the Compound Regularization Algorithms

We compare the MOROR algorithm with five well-known compound regularization algorithms, including the L1TV method with the regularization parameters of 0.01 and 0.0003 for the L1 norm and the total variation term, the L1SOTV method with the regularization parameters of 0.01 and 0.0008 for the L1 norm and the second-order total variation term, the ELNR method with the regularization parameters of 0.02 and 0.003 for the L1 norm and the L2 norm, the L1LR method with the regularization parameters of 0.08 and 0.08 for the L1 norm and the nuclear norm, and the FOENR algorithm with the regularization parameters of 0.0001 and 0.05 for the L2 norm and the L1 norm. Figure 5 displays quantitative visualization outcomes, offering an intuitive visual representation of algorithmic performance comparisons. For rigorous and unbiased assessment, Table 5 and Table 6 summarize key numerical indicators, reconstruction errors and correlation metrics, which are critical for appraising the result precision and reliability across diverse methods. This dual analytical framework, merging qualitative and quantitative evaluations, allows the systematic identification of algorithmic advantages and limitations.
We compare the performance of several advanced compound regularization methods with the newly proposed MOROR algorithm. The results reveal the limitations of traditional compound regularization methods while highlighting the excellent performance of the MOROR algorithm in high-fidelity image reconstruction. By analyzing the data from Figure 5 and Table 5 and Table 6, we find that the five commonly used compound regularization methods struggle to achieve high-precision reconstruction for the targets shown in Figure 3a–e. The image errors for these methods range from 5.96% to 16.485%, while the correlation coefficients range from 0.9014 to 0.9860. These data indicate that despite these methods performing well in certain scenarios, they fall short when facing complex reconstruction tasks. In contrast, the MOROR algorithm demonstrates remarkable performance advantages. When handling the same reconstruction targets, the MOROR algorithm keeps the image error below 0.39% while maintaining a correlation coefficient of 1. This significant performance improvement is evident not only in quantitative metrics but also visibly through comparing the visual images in Figure 4 and Figure 5, where the superiority of the MOROR algorithm’s image quality is apparent.
Traditional compound regularization methods attempt to incorporate different priors by introducing regularization terms, but these methods rely on domain expert knowledge, making it difficult to fully and accurately capture the features of the reconstruction target. This has been confirmed by the visualization results in Figure 5 and the quantitative results in Table 5 and Table 6. Moreover, traditional methods depend on theoretical analysis or empirical adjustments for parameter configuration, resulting in low automation and difficulty adapting to complex and variable reconstruction tasks. In contrast, the MOROR algorithm integrates supervised learning and advanced optimization principles to achieve the automatic learning and optimization of model parameters, enhancing the algorithm’s level of automation.

7.5.3. Noise Sensitivity Assessment

Noise during the actual measurement process is unavoidable and has always been a major challenge for this measurement technology. Noise sensitivity limits the widespread application and further development of the technology. To address this challenge, we specifically introduce corresponding countermeasures in the design of the reconstruction model. In this section, we perform a comprehensive assessment of the robustness of the MOROR algorithm by analyzing capacitance data collected under varying noise conditions (5%, 10%, and 15%). The evaluation results are presented both qualitatively and quantitatively, with Figure 6 illustrating the visual reconstruction results and Table 7 and Table 8 providing statistical metrics, such as the image error and correlation coefficients. This dual approach ensures a thorough understanding of the algorithm’s performance in noisy environments, highlighting its ability to maintain accuracy and consistency across different noise levels. By systematically evaluating its robustness, this analysis validates the algorithm’s potential for practical applications in real-world scenarios where noise interference is inevitable.
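Noise conditions of this kind can be emulated with a simple relative-noise model. The sketch below is illustrative only; the paper does not specify its noise generator, and the Gaussian model and the 12-electrode sensor size (giving 66 independent electrode-pair measurements) are assumptions for this example:

```python
import numpy as np

def add_relative_noise(capacitance, level, rng=None):
    """Corrupt a capacitance vector with zero-mean Gaussian noise whose
    per-element standard deviation is `level` (e.g. 0.05, 0.10, 0.15)
    times the measured magnitude. Illustrative noise model only."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = level * np.abs(capacitance) * rng.standard_normal(capacitance.shape)
    return capacitance + noise

# Assumed 12-electrode sensor: 12 * 11 / 2 = 66 independent capacitance pairs.
c_clean = np.linspace(1.0, 2.0, 66)
c_noisy = add_relative_noise(c_clean, 0.15)
```

The reconstruction algorithm is then run on `c_noisy` instead of `c_clean`, and the degradation of the image error and correlation coefficient across the noise levels is what Table 7 and Table 8 report.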
The precise reconstruction of images and the performance of algorithms in complex noise environments have always been central issues of concern in academia and engineering applications. Achieving high-precision and stable reconstruction in complex noise scenarios is not only directly related to the practicality and reliability of this technology but also holds significant implications for technological developments in related fields. As shown in Figure 6 and Table 7 and Table 8, the MOROR algorithm demonstrates outstanding performance advantages under different noise conditions, consistently maintaining image errors at a relatively low level. Specifically, the maximum image error of the MOROR algorithm is only 1.21%, while the minimum correlation coefficient is 0.9996. This fully proves the robustness and superior performance of the algorithm in noisy environments.
The exceptional robustness of the MOROR algorithm is attributed to several key features and innovative designs. Firstly, the MOROR algorithm introduces a multi-objective robust imaging model that reduces sensitivity to noise during the image reconstruction process. This model considers the balance between imaging accuracy and algorithm robustness, allowing the algorithm to maintain stable performance in various noise scenarios. Secondly, the MOROR algorithm incorporates regularization techniques and integrates multiple types of prior information. The enhancement in the diversity and complementarity of prior information helps improve imaging quality and the robustness of the algorithm. Additionally, the use of regularization techniques effectively mitigates the interference of measurement noise. Thirdly, the MOROR algorithm has the capability to automatically calculate model parameters. Compared to traditional algorithms that rely on manual adjustments, this adaptive optimization strategy significantly improves the model’s adaptability and reconstruction quality in complex and high-noise scenarios.

7.5.4. Reconstruction Time

In the research and application of iterative algorithms, imaging time is one of the core metrics for evaluating algorithm performance. This is especially true in dynamic, rapidly changing scenarios where the importance of imaging speed is more pronounced. For these scenarios, measurement technologies need to have high-speed and real-time data processing capabilities to meet the stringent response speed requirements of practical applications.
In the reconstruction tasks shown in Figure 3a–e, the MOROR algorithm’s reconstruction times are approximately 0.0472 s, 0.0414 s, 0.0442 s, 0.0449 s, and 0.0433 s, respectively, showing fairly consistent and fast reconstruction capability. While certain algorithms may achieve shorter reconstruction times than the MOROR algorithm, their reconstruction quality is notably inferior. A meaningful comparison of reconstruction times can only be made when the quality of the reconstructions is on par.
The reconstruction time of the MOROR algorithm is mainly spent on solving the LLOP. This is because the algorithm is designed to offload other computational tasks, such as model training and model parameter learning, as much as possible, concentrating more computing resources on the online reconstruction process. However, despite the MOROR algorithm’s simplified iterative structure, its iterative nature still poses challenges to real-time imaging. These challenges arise from the inherent conflict between two competing objectives: increasing the reconstruction speed and reducing the reconstruction error. Effectively balancing these two objectives remains an unresolved challenge in the field of iterative algorithms, and this limitation also hinders the broader adoption of electrical capacitance tomography technology in real-world industrial and scientific applications. The real-time imaging demands of rapidly changing scenes intensify this contradiction, making it difficult for traditional methods to maintain both accuracy and computational efficiency.
Enhancing the reconstruction speed typically involves simplifying models or decreasing the number of iterations, which can compromise accuracy. On the other hand, striving for high accuracy often results in greater computational complexity and longer processing times. This inherent conflict is one of the common obstacles faced by iterative algorithms and is a core challenge that the MOROR algorithm needs to address in further optimization. There is still room for optimization in the MOROR algorithm, and future research will focus on developing more efficient solving strategies to shorten the reconstruction time.

7.5.5. Consistency of Reconstruction Performance

In this section, we systematically evaluate the stability of the MOROR algorithm by analyzing the variance in the image error and correlation coefficients, with the quantitative results shown in Figure 7. These indicators quantify the consistency of the algorithm across different reconstruction tasks. To further validate the analysis results, Figure 8 demonstrates the performance of different algorithms in terms of the image error and correlation coefficients under various reconstruction objects. These quantitative results lay a foundation for a thorough comparison between the performance of the MOROR algorithm and that of other methods, which supports the subsequent optimization of these algorithms.
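The two indicators tracked throughout this evaluation, and the variance used to measure consistency, can be sketched as follows. The formulas below are the conventional ECT definitions (relative image error and Pearson correlation); the paper’s exact definitions are not reproduced in this excerpt, so treat this as an assumed standard formulation:

```python
import numpy as np

def image_error(g_rec, g_true):
    # Relative image error in percent: ||g_rec - g_true|| / ||g_true|| * 100.
    return 100.0 * np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g_rec, g_true):
    # Pearson correlation between the reconstructed and true images.
    return float(np.corrcoef(g_rec.ravel(), g_true.ravel())[0, 1])

# Consistency across reconstruction tasks is summarized by the variance of
# the per-object metrics (values below are the L1-2R image errors, Table 3).
errors = np.array([10.70, 13.31, 15.82, 17.55, 13.75])
error_variance = np.var(errors)
```

A low variance of the per-object image errors (and of the correlation coefficients) is precisely what Figure 7 plots for each algorithm.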
By analyzing the data visualization results in Figure 7, it can be seen that among all the algorithms evaluated, the MOROR algorithm exhibits significant advantages across multiple dimensions. In particular, the MOROR algorithm achieves the lowest values in the two critical performance indicators, i.e., the variances in image errors and correlation coefficients, far surpassing other algorithms. This result fully demonstrates the exceptional stability and effectiveness of the MOROR algorithm in image reconstruction tasks, indicating its ability to consistently maintain high-quality reconstruction results while handling complex tasks with excellent robustness.
To delve deeper into the specific performance of each algorithm, Figure 8 illustrates the image errors and correlation coefficients of different algorithms across a range of reconstruction objects. Despite minor variations in the image errors and correlation coefficients across different reconstruction targets, the MOROR algorithm demonstrates consistently stable performance, exhibiting a narrower range of performance fluctuations and maintaining consistently low error levels. This indicates that the performance of the MOROR algorithm is largely independent of the task characteristics, exhibiting stronger adaptability and performance consistency.

8. Conclusions

This study proposes an innovative multi-objective robust optimization method to alleviate the technical bottleneck caused by low imaging quality. This new approach integrates advanced optimization principles, multimodal learning, and measurement physics to alleviate reconstruction challenges under complex and uncertain conditions, achieving excellent reconstruction performance. It consists of two mutually nested and dependent optimization problems, the ULOP and the LLOP, which can simultaneously achieve the automatic tuning of model parameters and the optimization of reconstruction quality. A new LLOP is derived using the regularization theory to unify the MLPI, sparsity prior, and measurement physics into an optimization framework. To enhance the inference accuracy of the MLPI, we design the MNN, which improves the model’s inference ability through the fusion of multimodal information. To mitigate the computational difficulties induced by the ULOP-LLOP interdependencies, an innovative nested algorithm is proposed to solve the proposed multi-objective robust optimization model. The evaluation results indicate that the proposed method exhibits significant advantages over existing mainstream imaging algorithms. It enhances the automation level of the image reconstruction process, improves imaging quality, and demonstrates excellent robustness. For the studied reconstruction objects, the maximum image error is 0.39%, and the minimum correlation coefficient is 1 when capacitance noise is not considered. When the capacitance noise is 15%, the maximum image error increases to 1.21%, and the minimum correlation coefficient is 0.9996. Furthermore, the novel algorithm demonstrates remarkable stability in performance, maintaining consistent reconstruction quality without substantial fluctuations across varying reconstruction targets. 
This research not only provides a comprehensive method to improve the overall performance metrics of image reconstruction but also establishes a new imaging paradigm, which is expected to accelerate the expansion of this measurement technology’s application boundaries.
Despite the numerous advantages and potential of multi-objective optimization, robust optimization, and bilevel optimization, these techniques have not been sufficiently studied and explored in the field of electrical capacitance tomography, limiting the innovative development of imaging algorithms and hindering further improvements in the technology’s accuracy, efficiency, and adaptability. Our work aims to overcome this bottleneck and promote the innovative development of reconstruction algorithms. Future research will focus on developing more advanced modeling methods to enhance the automation of the image reconstruction process, strengthen adaptability in complex reconstruction scenarios, and substantially improve the intelligence level of measurement equipment.

Author Contributions

Data curation, X.Y., and J.L.; formal analysis, X.Y., J.L., and Q.L.; funding acquisition, J.L.; investigation, X.Y., J.L., and Q.L.; methodology, X.Y., J.L., and Q.L.; resources, Q.L.; software, X.Y., and J.L.; validation, X.Y., J.L., and Q.L.; visualization, X.Y., J.L., and Q.L.; writing—original draft, X.Y., and J.L.; writing—review and editing, X.Y., J.L., and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported by the Science and Technology Programme Project of the State Administration for Market Regulation (2024MK081).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Xuejie Yang was employed by the company China Nuclear Power Engineering Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Shao, C. A deterministic Kaczmarz algorithm for solving linear systems. SIAM J. Matrix Anal. Appl. 2023, 44, 212–239.
2. Wang, H.; Yang, W. Application of electrical capacitance tomography in circulating fluidised beds-a review. Appl. Therm. Eng. 2020, 176, 115311.
3. Sun, Y.; Sun, Y. An optimized TV regularization algorithm for image reconstruction of co-planar array capacitive sensor. IEEE Trans. Instrum. Meas. 2025, 74, 5008009.
4. Zhang, L.F.; Zhai, Y.J.; Wang, X.G. Application of Barzilai-Borwein gradient projection for sparse reconstruction algorithm to image reconstruction of electrical capacitance tomography. Flow Meas. Instrum. 2018, 65, 45–51.
5. Wang, M.; Dai, S.; Wang, X.; Liu, X. Application of L1-L2 regularization in sparse-view photoacoustic imaging reconstruction. IEEE Photonics J. 2024, 16, 1–8.
6. Suo, P.; Sun, J.; Zhang, X.; Li, X.; Sun, S.; Xu, L. Adaptive group-based sparse representation for image reconstruction in electrical capacitance tomography. IEEE Trans. Instrum. Meas. 2023, 72, 1–9.
7. Lu, Y.; Wu, X.; Zhang, B. A modified non-convex Cauchy total variation regularization model for image restoration. Comput. Appl. Math. 2025, 44, 3.
8. Tong, G.W.; Liu, S.; Chen, H.Y.; Wang, X.Y. Regularization iteration imaging algorithm for electrical capacitance tomography. Meas. Sci. Technol. 2018, 29, 035403.
9. Dutta, J.; Ahn, S.; Li, C.; Cherry, S.R.; Leahy, R.M. Joint L1 and total variation regularization for fluorescence molecular tomography. Phys. Med. Biol. 2012, 57, 1459–1476.
10. Li, R.; Zheng, B. A spatially adaptive hybrid total variation model for image restoration under Gaussian plus impulse noise. Appl. Math. Comput. 2022, 419, 126862.
11. Cai, X.; Chan, R.; Zeng, T. A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding. SIAM J. Imaging Sci. 2013, 6, 368–390.
12. Padcharoen, A.; Kumam, P.; Martínez-Moreno, J. Augmented Lagrangian method for TV-l1-l2 based colour image restoration. J. Comput. Appl. Math. 2019, 354, 507–519.
13. Chen, X.J.; Jiang, Z.Q.; Han, X.; Wang, X.L.; Tang, X.Y. Research of magnetic particle imaging reconstruction based on the elastic net regularization. Biomed. Signal Process. Control 2021, 69, 102823.
14. Guo, W.; Qin, J.; Yin, W. A new detail-preserving regularization scheme. SIAM J. Imaging Sci. 2014, 7, 1309–1334.
15. Liu, X. Total generalized variation and wavelet frame-based adaptive image restoration algorithm. Vis. Comput. 2019, 35, 1883–1894.
16. Guo, G.; Tong, G.W.; Lu, L.; Liu, S. Iterative reconstruction algorithm for the inverse problems in electrical capacitance tomography. Flow Meas. Instrum. 2018, 64, 204–212.
17. Acero, D.O.; Marashdeh, Q.M.; Teixeira, F.L. Relevance vector machine image reconstruction algorithm for electrical capacitance tomography with explicit uncertainty estimates. IEEE Sens. J. 2020, 20, 4925–4939.
18. Liu, H.L.; Wu, J.; Zhang, W.W.; Ma, H.W. Fractional-order elastic net regularization for identifying various types of unknown external forces. Mech. Syst. Signal Process. 2023, 205, 110842.
19. Prakash, J.; Sanny, D.; Kalva, S.K.; Pramanik, M.; Yalavarthy, P.K. Fractional regularization to improve photoacoustic tomographic image reconstruction. IEEE Trans. Med. Imaging 2019, 38, 1935–1947.
20. Huang, G.X.; Liu, Y.Y.; Yin, F. Tikhonov regularization with MTRSVD method for solving large-scale discrete ill-posed problems. J. Comput. Appl. Math. 2022, 405, 113969.
21. Liu, H.; Tan, C.; Ren, S.; Dong, F. Real-time reconstruction for low contrast ultrasonic tomography using continuous-wave excitation. IEEE Trans. Instrum. Meas. 2020, 69, 1632–1642.
22. Khan, T.A.; Ling, S.H.; Rizvi, A.A. Optimisation of electrical impedance tomography image reconstruction error using heuristic algorithms. Artif. Intell. Rev. 2023, 56, 15079–15099.
23. Erkoc, M.E.; Karaboga, N. Evolutionary algorithms for sparse signal reconstruction. Signal Image Video Process. 2019, 13, 1293–1301.
24. Zhou, Y.; Kwong, S.; Guo, H.; Zhang, X.; Zhang, Q. A two-phase evolutionary approach for compressive sensing reconstruction. IEEE Trans. Cybern. 2017, 47, 2651–2663.
25. Gong, M.G.; Jiang, X.; Li, H. Optimization methods for regularization-based ill-posed problems: A survey and a multi-objective framework. Front. Comput. Sci. 2017, 11, 362–391.
26. Tanabe, H.; Fukuda, E.H.; Yamashita, N. Convergence rates analysis of a multiobjective proximal gradient method. Optim. Lett. 2023, 17, 333–350.
27. Tanabe, H.; Fukuda, E.H.; Yamashita, N. Proximal gradient methods for multiobjective optimization and their applications. Comput. Optim. Appl. 2019, 72, 339–361.
28. Liu, Z.; Bicer, T.; Kettimuthu, R.; Gursoy, D.; Carlo, F.D.; Foster, I. TomoGAN: Low-dose synchrotron x-ray tomography with generative adversarial networks: Discussion. J. Opt. Soc. Am. A 2020, 37, 422–434.
29. Lei, K.; Mardani, M.; Pauly, J.M.; Vasanawala, S.S. Wasserstein GANs for MR imaging: From paired to unpaired training. IEEE Trans. Med. Imaging 2021, 40, 105–115.
30. Baguer, D.O.; Leuschner, J.; Schmidt, M. Computed tomography reconstruction using deep image prior and learned reconstruction methods. Inverse Probl. 2020, 36, 094004.
31. Gong, K.; Catana, C.; Qi, J.; Li, Q. PET image reconstruction using deep image prior. IEEE Trans. Med. Imaging 2019, 38, 1655–1665.
32. Lu, Z.; Gao, Q.; Wang, T.; Yang, Z.; Wang, Z.; Yu, H.; Chen, H.; Zhou, J.; Shan, H.; Zhang, Y. PrideDiff: Physics-regularized generalized diffusion model for CT reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 2025, 9, 157–168.
33. Chen, H.; Zhang, Z.; Li, W.; Liu, Q.; Sun, K.; Fan, D.; Cui, W. Ensemble of surrogates in black-box-type engineering optimization: Recent advances and applications. Expert Syst. Appl. 2024, 248, 123427.
34. He, C.; Zhang, Y.; Gong, D.; Ji, X. A review of surrogate-assisted evolutionary algorithms for expensive optimization problems. Expert Syst. Appl. 2023, 217, 119495.
35. Lei, J.; Mu, H.P.; Liu, Q.B.; Wang, X.Y.; Liu, S. Data-driven reconstruction method for electrical capacitance tomography. Neurocomputing 2018, 273, 333–345.
36. Wael, D.; Abdel-Hakim, A.E. CGAN-ECT: Reconstruction of electrical capacitance tomography images from capacitance measurements using conditional generative adversarial networks. Flow Meas. Instrum. 2024, 96, 102566.
37. Zhu, H.; Sun, J.; Xu, L.J.; Tian, W.B.; Sun, S. Permittivity reconstruction in electrical capacitance tomography based on visual representation of deep neural network. IEEE Sens. J. 2020, 20, 4803–4815.
38. Jin, Y.; Li, Y.; Zhang, M.; Peng, L. A physics-constrained deep learning-based image reconstruction for electrical capacitance tomography. IEEE Trans. Instrum. Meas. 2024, 73, 4500612.
39. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
40. Peng, B.; Wang, K.K.; Abdulla, W.H. An integrated hierarchical wireless acoustic sensor network and optimized deep learning model for scalable urban sound and environmental monitoring. Appl. Sci. 2025, 15, 2196.
41. Almukhalfi, H.; Noor, A.; Noor, T.H. Traffic management approaches using machine learning and deep learning techniques: A survey. Eng. Appl. Artif. Intell. 2024, 133, 108147.
42. Hoseinnezhad, R. A comprehensive review of deep learning techniques in mobile robot path planning: Categorization and analysis. Appl. Sci. 2025, 15, 2179.
43. Su, Y.; Fu, J.; Lin, W.; Lin, C.; Lai, X.; Xie, X. Dam deformation monitoring model based on deep learning and split conformal quantile prediction. Appl. Sci. 2025, 15, 1960.
44. Pan, Y.; Hao, L.; He, J.L.; Ding, K.; Yu, Q.; Wang, Y.L. Deep convolutional neural network based on self-distillation for tool wear recognition. Eng. Appl. Artif. Intell. 2024, 132, 107851.
45. Zhang, L.; Wu, X. The lightweight deep learning model in sunflower disease identification: A comparative study. Appl. Sci. 2025, 15, 2104.
46. Mehr, S.; Craven, M.; Leonov, A.; Keenan, G.; Cronin, L. A universal system for digitization and automatic execution of the chemical synthesis literature. Science 2020, 370, 101–108.
47. Antun, V.; Renna, F.; Poon, C.; Adcock, B.; Hansen, A.C. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA 2020, 117, 30088–30095.
48. Zhao, M.; Wang, X.; Chen, J.; Chen, W. A plug-and-play priors framework for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5501213.
49. Zheng, Z.; Dai, W.; Xue, D.; Li, C.; Zou, J.; Xiong, H. Hybrid ISTA: Unfolding ISTA with convergence guarantees using free-form deep neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3226–3244.
50. Liu, R.; Cheng, S.; Ma, L.; Fan, X.; Luo, Z. Deep proximal unrolling: Algorithmic framework, convergence analysis and applications. IEEE Trans. Image Process. 2019, 28, 5013–5026.
51. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844.
52. Huang, J.; Wu, Y.; Wang, F.; Fang, Y.; Nan, Y.; Alkan, C.; Abraham, D.; Liao, C.; Xu, L.; Gao, Z.; et al. Data- and physics-driven deep learning based reconstruction for fast MRI: Fundamentals and methodologies. IEEE Rev. Biomed. Eng. 2025, 18, 152–171.
53. Sharma, S.; Kumar, V. A comprehensive review on multi-objective optimization techniques: Past, present and future. Arch. Comput. Methods Eng. 2022, 29, 5605–5633.
54. Mejía-De-Dios, J.A.; Rodríguez-Molina, A.; Mezura-Montes, E. Multiobjective bilevel optimization: A survey of the state-of-the-art. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5478–5490.
55. Sinha, A.; Malo, P.; Deb, K. A review on bilevel optimization: From classical to evolutionary approaches and applications. IEEE Trans. Evol. Comput. 2018, 22, 276–295.
56. Zhao, H.; Gao, Z.; Xu, F.; Zhang, Y. Review of robust aerodynamic design optimization for air vehicles. Arch. Comput. Methods Eng. 2019, 26, 685–732.
57. Martins, P.H.; Trindade, M.A.; Varoto, P.S. Improving the robust design of piezoelectric energy harvesters by using polynomial chaos expansion and multiobjective optimization. Int. J. Mech. Mater. Des. 2024, 20, 571–590.
58. Hu, G.; Tao, Q.; Ying, R.; Long, J. Multi-objective robust optimization design framework for low-pollution emission burners. Chem. Eng. Res. Des. 2024, 210, 180–189.
59. Yao, W.; Chen, X.; Luo, W.; Tooren, M.V.; Guo, J. Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles. Prog. Aerosp. Sci. 2011, 47, 450–479.
60. Cheng, K.H.; Du, J.; Zhou, H.X.; Zhao, D.; Qin, H.L. Image super-resolution based on half quadratic splitting. Infrared Phys. Technol. 2020, 105, 103193.
61. Sun, Y.; Yang, Y.; Liu, Q.; Chen, J.; Yuan, X.T.; Guo, G. Learning non-locally regularized compressed sensing network with half-quadratic splitting. IEEE Trans. Multimed. 2020, 22, 3236–3248.
62. Bello-Cruz, Y.; Li, G.Y.; Nghia, T.T.A. On the linear convergence of forward-backward splitting method: Part I. Convergence analysis. J. Optim. Theory Appl. 2021, 188, 378–401.
63. Hao, B.B.; Zhu, J.G. Fast L1 regularized iterative forward backward splitting with adaptive parameter selection for image restoration. J. Vis. Commun. Image Represent. 2017, 44, 139–147.
64. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
65. Wu, X.G.; Wang, L.; Chen, B.; Feng, Z.B.; Qin, Y.W.; Liu, Q.; Liu, Y. Multi-objective optimization of shield construction parameters based on random forests and NSGA-II. Adv. Eng. Inform. 2022, 54, 101751.
66. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
67. Verma, S.; Pant, M.; Snasel, V. A comprehensive review on NSGA-II for multi-objective combinatorial optimization problems. IEEE Access 2021, 9, 57757–57791.
68. Li, R.J.; Grosu, L.; Queiros-Conde, D. Multi-objective optimization of Stirling engine using finite physical dimensions thermodynamics (FPDT) method. Energy Convers. Manag. 2016, 124, 517–527.
69. Krones, F.; Marikkar, U.; Parsons, G.; Szmul, A.; Mahdi, A. Review of multimodal machine learning approaches in healthcare. Inf. Fusion 2025, 114, 102690.
70. Fatemeh, B.; Abadeh, M.S. An overview of deep learning methods for multimodal medical data mining. Expert Syst. Appl. 2022, 200, 117006.
71. Yuan, G.; Wei, Z.; Yang, Y. The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions. J. Comput. Appl. Math. 2019, 362, 262–275.
72. Dong, X. Modified globally convergent Polak-Ribière-Polyak conjugate gradient methods with self-correcting property for large-scale unconstrained optimization. Numer. Algorithms 2023, 93, 765–783.
73. Guo, W.L.; Lou, Y.F.; Qin, J.; Yan, M. A new regularization based on the error function for sparse recovery. J. Sci. Comput. 2021, 87, 31.
74. Zhang, S.; Xin, J. Minimization of transformed L1 penalty: Closed form representation and iterative thresholding algorithms. Commun. Math. Sci. 2017, 15, 511–537.
75. Tom, A.J.; George, S.N. A three-way optimization technique for noise robust moving object detection using tensor low-rank approximation, l1/2, and TTV regularizations. IEEE Trans. Cybern. 2021, 51, 1004–1014.
76. Yang, K.; Liu, Y.; Yu, Z.; Chen, C.L.P. Extracting and composing robust features with broad learning system. IEEE Trans. Knowl. Data Eng. 2023, 35, 3885–3896.
77. Hu, Z.X.; Nie, F.P.; Wang, R.; Li, X.L. Low rank regularization: A review. Neural Netw. 2021, 136, 218–232.
78. Lei, J.; Liu, W.Y.; Liu, Q.B.; Wang, X.Y.; Liu, S. Robust dynamic inversion algorithm for the visualization in electrical capacitance tomography. Measurement 2014, 50, 305–318.
Figure 1. Multimodal neural network.
Figure 2. Proposed MOROR method.
Figure 3. Reconstruction objects. (a) Reconstruction object 1; (b) Reconstruction object 2; (c) Reconstruction object 3; (d) Reconstruction object 4; (e) Reconstruction object 5.
Figure 4. Images reconstructed by (a) L1-2R, (b) L1R, (c) FL1R, (d) LRR, (e) PRPCG, (f) L1/2R, (g) TL1R, and (h) MOROR.
Figure 5. Results reconstructed by the compound regularization methods: (a) L1TV, (b) L1SOTV, (c) ELNR, (d) L1LR, and (e) FOENR.
Figure 6. Tomograms reconstructed by the MOROR algorithm at different capacitance noise levels: (a) 6%, (b) 10%, and (c) 15%.
Figure 7. The variances in image errors (a) and correlation coefficients (b). (1. L1-2R, 2. L1R, 3. FL1R, 4. LRR, 5. PRPCG, 6. L1/2R, 7. TL1R, 8. MOROR, 9. L1TV, 10. L1SOTV, 11. ELNR, 12. L1LR, and 13. FOENR).
Figure 8. The image errors (a) and correlation coefficients (b) of different imaging algorithms. (1. L1-2R, 2. L1R, 3. FL1R, 4. LRR, 5. PRPCG, 6. L1/2R, 7. TL1R, 8. MOROR, 9. L1TV, 10. L1SOTV, 11. ELNR, 12. L1LR, and 13. FOENR).
Table 1. Tested algorithms.
| Algorithm | Abbreviation |
| --- | --- |
| Polak–Ribière–Polyak conjugate gradient method [71,72] | PRPCG |
| Combined L1 with second-order total variation method [16] | L1SOTV |
| Joint L1 and total variation regularization [9] | L1TV |
| Elastic net regularization method [13] | ELNR |
| Transformed L1 regularization [73,74] | TL1R |
| L1/2 regularization method [75] | L1/2R |
| Joint L1 and low-rank regularization [8] | L1LR |
| L1 norm regularization method [76] | L1R |
| Low-rank regularization [77] | LRR |
| L1-2 regularization [5] | L1-2R |
| Fractional L1 regularization [19] | FL1R |
| Fractional-order elastic net regularization [18] | FOENR |
Table 2. The parameters of the compared iterative imaging algorithms.
| Method | Parameter | Parameter Value |
| --- | --- | --- |
| PRPCG | Step size | 1 |
| L1-2R | Regularization parameter | 0.15 |
| L1R | Regularization parameter | 0.01 |
| FL1R | Regularization parameter | 0.09 |
| LRR | Regularization parameter | 0.01 |
| L1/2R | Regularization parameter | 0.02 |
| TL1R | Regularization parameter | 0.005 |
Table 3. Image errors of the compared iterative algorithms (%).
| Method | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| L1-2R | 10.70 | 13.31 | 15.82 | 17.55 | 13.75 |
| L1R | 10.60 | 13.48 | 9.94 | 9.52 | 10.22 |
| FL1R | 9.44 | 11.41 | 10.43 | 11.14 | 10.97 |
| LRR | 11.78 | 15.16 | 9.45 | 8.30 | 10.93 |
| PRPCG | 11.24 | 16.88 | 11.92 | 10.64 | 11.62 |
| L1/2R | 12.14 | 16.13 | 10.23 | 10.40 | 10.89 |
| TL1R | 9.29 | 10.44 | 8.11 | 8.83 | 7.51 |
| MOROR | 0 | 0 | 0.26 | 0.39 | 0.16 |
Table 4. Correlation coefficients of the compared iterative algorithms.
| Method | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| L1-2R | 0.9186 | 0.9175 | 0.9306 | 0.9229 | 0.9316 |
| L1R | 0.9221 | 0.9241 | 0.9722 | 0.9766 | 0.9633 |
| FL1R | 0.9381 | 0.9451 | 0.9690 | 0.9677 | 0.9568 |
| LRR | 0.8948 | 0.8970 | 0.9751 | 0.9825 | 0.9577 |
| PRPCG | 0.9050 | 0.8693 | 0.9598 | 0.9709 | 0.9520 |
| L1/2R | 0.9029 | 0.8984 | 0.9707 | 0.9721 | 0.9588 |
| TL1R | 0.9402 | 0.9549 | 0.9816 | 0.9800 | 0.9804 |
| MOROR | 1 | 1 | 1 | 1 | 1 |
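Tables 3 and 4 report relative image errors and correlation coefficients. As a reference for how these standard figures of merit are typically computed in ECT evaluation (the paper's exact definitions are not reproduced in this section, so treat the formulas below as the common conventions rather than the authors' code):

```python
import numpy as np

def image_error(g_rec, g_true):
    # Relative image error: ||g_rec - g_true|| / ||g_true||, in percent.
    return 100.0 * np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g_rec, g_true):
    # Pearson correlation between reconstructed and true gray values.
    g1 = g_rec - g_rec.mean()
    g2 = g_true - g_true.mean()
    return float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# A perfect reconstruction gives 0% error and correlation 1,
# matching the MOROR rows of Tables 3 and 4.
g_true = np.array([0.0, 1.0, 1.0, 0.0, 0.5])
print(image_error(g_true.copy(), g_true))      # → 0.0
print(correlation_coefficient(g_true, g_true))  # ≈ 1.0
```

Under these definitions, a lower image error and a correlation coefficient closer to 1 both indicate a reconstruction closer to the true permittivity distribution.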
Table 5. Image errors of the compound regularization methods (%).
| Algorithm | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| L1TV | 7.70 | 11.82 | 15.04 | 16.48 | 12.35 |
| L1SOTV | 11.59 | 14.86 | 9.90 | 10.01 | 9.42 |
| ELNR | 5.96 | 9.71 | 12.76 | 13.95 | 11.11 |
| L1LR | 12.29 | 13.94 | 7.95 | 7.39 | 10.17 |
| FOENR | 10.19 | 11.95 | 8.73 | 8.86 | 9.70 |
Table 6. Correlation coefficients of the compound regularization methods.
| Algorithm | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| L1TV | 0.9576 | 0.9418 | 0.9352 | 0.9282 | 0.9454 |
| L1SOTV | 0.9028 | 0.9061 | 0.9726 | 0.9745 | 0.9689 |
| ELNR | 0.9729 | 0.9576 | 0.9536 | 0.9491 | 0.9555 |
| L1LR | 0.9014 | 0.9196 | 0.9822 | 0.9860 | 0.9632 |
| FOENR | 0.9291 | 0.9409 | 0.9785 | 0.9796 | 0.9672 |
Table 7. Image errors of the MOROR algorithm at different capacitance noise levels (%).
| Noise Level | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| 5% | 0 | 0 | 0.28 | 0.46 | 0.27 |
| 10% | 0 | 0 | 0.39 | 0.99 | 0.63 |
| 15% | 0 | 0 | 0.91 | 1.21 | 1.14 |
Table 8. Correlation coefficients of the MOROR algorithm at different capacitance noise levels.
| Noise Level | Object 1 | Object 2 | Object 3 | Object 4 | Object 5 |
| --- | --- | --- | --- | --- | --- |
| 5% | 1 | 1 | 1 | 0.9999 | 1 |
| 10% | 1 | 1 | 1 | 0.9998 | 0.9999 |
| 15% | 1 | 1 | 0.9998 | 0.9996 | 0.9996 |
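The robustness experiments in Tables 7 and 8 perturb the capacitance measurements at several relative noise levels before reconstruction. A minimal sketch of one common way such noise is imposed (the paper's precise noise model, and the `add_relative_noise` helper, are assumptions for illustration):

```python
import numpy as np

def add_relative_noise(c, level, rng):
    # Add zero-mean Gaussian noise whose standard deviation is `level`
    # times the mean absolute capacitance value. This is one common
    # convention; the paper's exact noise model may differ.
    return c + level * np.mean(np.abs(c)) * rng.standard_normal(c.shape)

rng = np.random.default_rng(0)
c = np.linspace(0.1, 1.0, 10)  # hypothetical capacitance vector

for level in (0.05, 0.10, 0.15):
    c_noisy = add_relative_noise(c, level, rng)
    # The noisy data would then be fed to the reconstruction algorithm,
    # and the image error / correlation coefficient recomputed per level.
    print(level, float(np.linalg.norm(c_noisy - c)))
```

Repeating the reconstruction on such perturbed data at 5%, 10%, and 15% noise is what yields per-level metric tables like Tables 7 and 8.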
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, X.; Lei, J.; Liu, Q. Multi-Objective Robust Optimization Reconstruction Algorithm for Electrical Capacitance Tomography. Appl. Sci. 2025, 15, 4778. https://doi.org/10.3390/app15094778

