Search Results (2,271)

Search Parameters:
Keywords = inverse solution

17 pages, 4685 KiB  
Article
Development of an Automated Phase-Shifting Interferometer Using a Homemade Liquid-Crystal Phase Shifter
by Zhenghao Song, Lin Xu, Jing Wang, Xitong Liang and Jun Dai
Photonics 2025, 12(7), 722; https://doi.org/10.3390/photonics12070722 - 16 Jul 2025
Abstract
In this paper, an automatic phase-shifting interferometer has been developed using a homemade liquid-crystal phase shifter, demonstrating a low-cost, fully automated solution for measuring the phase information of optical waves in devices. Conventional phase-shifting interferometers usually rely on PZT piezoelectric phase shifters, which are complex, expensive, and require additional semi-reflective, semi-transparent optics to build the optical path. To overcome these limitations, we used a laboratory-made liquid-crystal waveplate as a phase shifter and integrated it into a Mach–Zehnder phase-shifting interferometer. The system is controlled by an STM32 microcontroller and self-developed measurement software, and it uses a four-step phase-shift interferometry algorithm together with the CPULSI phase-unwrapping algorithm to achieve automatic phase measurements. Phase tests using a standard plano-convex lens and a homemade liquid-crystal grating as test objects demonstrate the feasibility and accuracy of the device: the measured focal lengths agree well with the nominal values, and the measured grating phase distributions agree well with the predefined values. This study validates the potential of liquid-crystal-based phase shifters for low-cost, fully automated optical phase measurements, providing a simpler and cheaper alternative to conventional methods.

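The four-step reconstruction named in the interferometer abstract above has a standard closed form: with interferograms I1..I4 recorded at phase shifts of 0, π/2, π, and 3π/2, the wrapped phase is arctan[(I4 − I2)/(I1 − I3)]. The sketch below is a minimal NumPy illustration; the synthetic wavefront, modulation values, and the use of numpy.unwrap in place of the authors' CPULSI unwrapper are assumptions, not the paper's implementation.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four interferograms shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic demonstration on a smooth quadratic wavefront (hypothetical test object)
y, x = np.mgrid[-1:1:256j, -1:1:256j]
true_phase = 6 * np.pi * (x**2 + y**2)
background, modulation = 1.0, 0.8
frames = [background + modulation * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = four_step_phase(*frames)
# The paper uses the CPULSI unwrapping algorithm; sequential 1-D unwrapping is only
# a rough stand-in that works for smooth, low-noise phase maps like this one.
unwrapped = np.unwrap(np.unwrap(wrapped, axis=0), axis=1)
```
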
20 pages, 927 KiB  
Article
An Optimization Model with “Perfect Rationality” for Expert Weight Determination in MAGDM
by Yuetong Liu, Chaolang Hu, Shiquan Zhang and Qixiao Hu
Mathematics 2025, 13(14), 2286; https://doi.org/10.3390/math13142286 - 16 Jul 2025
Abstract
Given the evaluation data of all the experts in multi-attribute group decision making, this paper establishes an optimization model for learning and determining expert weights based on minimizing the sum of the differences between the individual evaluation and the overall consistent evaluation results. The paper proves the uniqueness of the solution of the optimization model and rigorously proves that the expert weights obtained by the model have “perfect rationality”, i.e., the weights are inversely proportional to the distance to the “overall consistent scoring point”. Based on the above characteristics, the optimization problem is further transformed into solving a system of nonlinear equations to obtain the expert weights. Finally, numerical experiments are conducted to verify the rationality of the model and the feasibility of transforming the problem into a system of nonlinear equations. Numerical experiments demonstrate that the deviation metric for the expert weights produced by our optimization model is significantly lower than that obtained under equal weighting or the entropy weight method, and it approaches zero. Within numerical tolerance, this confirms the model’s “perfect rationality”. Furthermore, the weights determined by solving the corresponding nonlinear equations coincide exactly with the optimization solution, indicating that a dedicated algorithm grounded in perfect rationality can directly solve the model.

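The “perfect rationality” property above (weights inversely proportional to each expert's distance from the overall consistent scoring point) can be read as a simple fixed-point computation. The sketch below is only an illustration of that property; the distance measure, normalization, and iteration scheme are assumptions, and the paper itself derives the weights from an optimization model and a system of nonlinear equations.

```python
import numpy as np

def perfect_rationality_weights(scores, tol=1e-10, max_iter=1000):
    """scores: (n_experts, n_attributes) evaluation matrix.
    Iterate: consensus = weighted mean, weight_k proportional to 1 / distance_k."""
    n = scores.shape[0]
    w = np.full(n, 1.0 / n)                      # start from equal weights
    for _ in range(max_iter):
        consensus = w @ scores                   # overall consistent scoring point
        d = np.linalg.norm(scores - consensus, axis=1)
        d = np.maximum(d, 1e-12)                 # guard against a zero distance
        w_new = (1.0 / d) / np.sum(1.0 / d)
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w, consensus

# Hypothetical 4-expert, 3-attribute example: the outlying expert ends up down-weighted
scores = np.array([[7.0, 8.0, 6.5],
                   [7.2, 7.9, 6.8],
                   [6.9, 8.1, 6.4],
                   [5.0, 6.0, 9.0]])
weights, consensus = perfect_rationality_weights(scores)
```
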
23 pages, 2079 KiB  
Article
Offshore Energy Island for Sustainable Water Desalination—Case Study of KSA
by Muhnad Almasoudi, Hassan Hemida and Soroosh Sharifi
Sustainability 2025, 17(14), 6498; https://doi.org/10.3390/su17146498 - 16 Jul 2025
Abstract
This study identifies the optimal location for an offshore energy island to supply sustainable power to desalination plants along the Red Sea coast. As demand for clean energy in water production grows, integrating renewables into desalination systems becomes increasingly essential. A decision-making framework was developed to assess site feasibility based on renewable energy potential (solar, wind, and wave), marine traffic, site suitability, planned developments, and proximity to desalination facilities. Data were sourced from platforms such as Windguru and RETScreen, and spatial analysis was conducted using Inverse Distance Weighting (IDW) and Multi-Criteria Decision Analysis (MCDA). Results indicate that the central Red Sea region offers the most favorable conditions, combining high renewable resource availability with existing infrastructure. The estimated regional desalination energy demand of 2.1 million kW can be met using available renewable sources. Integrating these sources is expected to reduce local CO2 emissions by up to 43.17% and global desalination-related emissions by 9.5%. Spatial constraints for offshore installations were also identified, with land-based solar energy proposed as a complementary solution. The study underscores the need for further research into wave energy potential in the Red Sea, due to limited real-time data and the absence of a dedicated wave energy atlas.

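The spatial analysis above uses Inverse Distance Weighting (IDW). A minimal IDW interpolation sketch follows; the station coordinates, wind speeds, and power exponent are hypothetical placeholders rather than the study's data.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting: weight each known sample by 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                  # avoid division by zero at sample points
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Hypothetical wind-speed samples (km grid coordinates, m/s) along a coastline
stations = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 2.0], [35.0, 8.0]])
wind = np.array([6.1, 7.4, 6.8, 8.2])
grid = np.array([[5.0, 2.0], [15.0, 4.0], [30.0, 6.0]])
print(idw_interpolate(stations, wind, grid, power=2.0))
```
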
24 pages, 19550 KiB  
Article
TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video
by Jie Yin, Tao Sun, Xiao Zhang, Guorong Zhang, Xue Wan and Jianjun He
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422 - 12 Jul 2025
Abstract
Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatial-variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant (Cn²) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS’s superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances.

19 pages, 1583 KiB  
Article
Modeling, Validation, and Controllability Degradation Analysis of a 2(P-(2PRU–PRPR)-2R) Hybrid Parallel Mechanism Using Co-Simulation
by Qing Gu, Zeqi Wu, Yongquan Li, Huo Tao, Boyu Li and Wen Li
Dynamics 2025, 5(3), 30; https://doi.org/10.3390/dynamics5030030 - 11 Jul 2025
Abstract
This work systematically addresses the dual challenges of non-inertial dynamic coupling and kinematic constraint redundancy encountered in dynamic modeling of serial–parallel–serial hybrid robotic mechanisms, and proposes an improved Newton–Euler modeling method with constraint compensation. Taking a 6-DOF skiing simulation platform as the research object, the inverse kinematic model of the closed-chain mechanism is established through GF set theory, with explicit analytical expressions derived for the motion parameters of limb mass centers. Introducing a principal inertial coordinate system into the dynamics equations, a recursive algorithm incorporating force/moment coupling terms is developed. Numerical simulations reveal a 9.25% periodic deviation in joint moments using conventional methods. Through analysis of the mechanism’s intrinsic properties, it is identified that the lack of angular momentum conservation constraints on the end-effector in non-inertial frames leads to system controllability degradation. Accordingly, a constraint compensation strategy is proposed: establishing linearly independent differential algebraic equations supplemented with momentum/angular momentum balance equations for the end platform. Co-simulation results demonstrate that the optimized model reduces the maximum relative error of actuator joint moments to 0.98% and maintains numerical stability across the entire configuration space. The constraint compensation framework provides a universal solution for dynamics modeling of complex closed-chain mechanisms, validated through applications in flight simulators and automotive driving simulators.

16 pages, 7958 KiB  
Article
Truncation Artifact Reduction in Stationary Inverse-Geometry Digital Tomosynthesis Using Deep Convolutional Generative Adversarial Network
by Burnyoung Kim and Seungwan Lee
Appl. Sci. 2025, 15(14), 7699; https://doi.org/10.3390/app15147699 - 9 Jul 2025
Abstract
Stationary inverse-geometry digital tomosynthesis (s-IGDT) causes truncation artifacts in reconstructed images due to its geometric characteristics. This study introduces a deep convolutional generative adversarial network (DCGAN)-based out-painting method for mitigating truncation artifacts in s-IGDT images. The proposed network employed an encoder–decoder architecture for the generator, and a dilated convolution block was added between the encoder and decoder. A dual-discriminator was used to distinguish the artificiality of generated images for truncated and non-truncated regions separately. During network training, the generator was able to selectively learn a target task for the truncated regions using binary mask images. The performance of the proposed method was compared to conventional methods in terms of signal-to-noise ratio (SNR), normalized root-mean-square error (NRMSE), peak SNR (PSNR), and structural similarity (SSIM). The results showed that the proposed method led to a substantial reduction in truncation artifacts. On average, the proposed method achieved 62.31, 16.66, and 14.94% improvements in the SNR, PSNR, and SSIM, respectively, compared to the conventional methods. Meanwhile, the NRMSE values were reduced by an average of 37.22%. In conclusion, the proposed out-painting method can offer a promising solution for mitigating truncation artifacts in s-IGDT images and improving the clinical availability of the s-IGDT.
(This article belongs to the Section Biomedical Engineering)

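The evaluation above relies on SNR, NRMSE, PSNR, and SSIM. The sketch below shows one common way such image-quality metrics are computed, assuming NumPy and scikit-image; the random test images and the particular SNR and NRMSE normalizations are illustrative assumptions, since the paper's exact definitions are not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nrmse(reference, test):
    """Root-mean-square error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())

# Hypothetical reconstructed slices: an artifact-free reference and a noisy test image
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
test = reference + 0.05 * rng.standard_normal((128, 128))

data_range = reference.max() - reference.min()
snr = 10 * np.log10(np.sum(reference**2) / np.sum((reference - test) ** 2))
psnr = peak_signal_noise_ratio(reference, test, data_range=data_range)
ssim = structural_similarity(reference, test, data_range=data_range)
print(f"SNR={snr:.2f} dB, PSNR={psnr:.2f} dB, SSIM={ssim:.3f}, NRMSE={nrmse(reference, test):.4f}")
```
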
21 pages, 4791 KiB  
Article
Research on the Active Suspension Control Strategy of Multi-Axle Emergency Rescue Vehicles Based on the Inverse Position Solution of a Parallel Mechanism
by Qinghe Guo, Dingxuan Zhao, Yurong Chen, Shenghuai Wang, Hongxia Wang, Chen Wang and Renjun Liu
Vehicles 2025, 7(3), 69; https://doi.org/10.3390/vehicles7030069 - 9 Jul 2025
Abstract
Aiming at the problems of complex control processes, strong model dependence, and difficult engineering application when existing active suspension control strategies are applied to multi-axle vehicles, an active suspension control strategy based on the inverse position solution of a parallel mechanism is proposed. First, the active suspensions of the three-axle emergency rescue vehicle are grouped and interconnected within each group, and the arrangement is equivalently constructed as a 3-DOF parallel mechanism. Then, the displacement of each equivalent suspension actuating hydraulic cylinder is calculated using the inverse position solution of the parallel mechanism, and each equivalent actuating hydraulic cylinder is driven according to this displacement, thereby realizing effective control of the vehicle body attitude. To verify the effectiveness of the proposed control strategy, a three-axle vehicle experimental platform integrating active suspension and hydro-pneumatic suspension was built, and pulse road and gravel pavement experiments were carried out and compared against hydro-pneumatic suspension. Both road experiments show that, compared with hydro-pneumatic suspension, the proposed control strategy reduces the peak values of vehicle vertical displacement, pitch angle, and roll angle changes and suppresses the various vibration accelerations to differing degrees, significantly improving the vehicle’s driving smoothness and handling stability.

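The control strategy above maps a desired body attitude to actuator displacements through the inverse position solution of the equivalent parallel mechanism. The sketch below shows the generic pose-to-leg-length computation for a parallel platform; the joint layout, rotation parameterization, and leg-length formulation are hypothetical stand-ins, not the paper's 3-DOF equivalent model.

```python
import numpy as np

def rotation_rpy(roll, pitch, yaw):
    """Body-to-world rotation matrix from roll-pitch-yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def inverse_position(base_pts, platform_pts, translation, rpy):
    """Inverse position solution: actuator lengths for a desired platform pose.
    base_pts / platform_pts: (n, 3) joint locations in base / platform frames."""
    R = rotation_rpy(*rpy)
    world_platform = translation + platform_pts @ R.T
    return np.linalg.norm(world_platform - base_pts, axis=1)

# Hypothetical joint layout (metres) and a small heave/roll/pitch command
base = np.array([[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0], [0.0, -1.2, 0.0]])
plat = np.array([[0.8, 0.8, 0.0], [-0.8, 0.8, 0.0], [0.0, -1.0, 0.0]])
lengths = inverse_position(base, plat, np.array([0.0, 0.0, 0.5]), (0.02, -0.03, 0.0))
print(lengths)   # target cylinder lengths; drive each actuator toward these values
```
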
21 pages, 9386 KiB  
Article
Structural Characterization and Segmental Dynamics Evaluation in Eco-Friendly Polymer Electrospun Fibers Based on Poly(3-hydroxybutyrate)/Polyvinylpyrrolidone Blends to Evaluate Their Sustainability
by Svetlana G. Karpova, Anatoly A. Olkhov, Ivetta A. Varyan, Ekaterina P. Dodina, Yulia K. Lukanina, Natalia G. Shilkina, Anatoly A. Popov, Alexandre A. Vetcher, Anna G. Filatova and Alexey L. Iordanskii
J. Compos. Sci. 2025, 9(7), 355; https://doi.org/10.3390/jcs9070355 - 8 Jul 2025
Abstract
Ultrafine fibers from poly(3-hydroxybutyrate) (PHB), polyvinylpyrrolidone (PVP), and their blends with component ratios ranging from 0/100 to 100/0 wt.% were obtained, and their structure and dynamic properties were studied. The fibers were produced via electrospinning from solution. The structure, morphology, and segmental dynamic behavior of the fibers were determined using optical microscopy, SEM, EPR, DSC, and IR spectroscopy. The low-temperature maximum on the DSC endotherms provided information on the state of the PVP hydrogen-bond network, which made it possible to determine the enthalpies of thermal destruction of these bonds. The PHB/PVP blend ratio significantly affected the structural and dynamic parameters of the system. At low PVP concentrations (up to 9%), the polymer is distributed within the ultrafine fibers as tiny particles that act as crystallization centers, causing a significant increase in the degree of crystallinity (χ) and activation energy (Eact) and a slowing of the molecular dynamics (τ). At higher PVP concentrations, loose interphase layers formed in the system, which caused these parameters to decrease. The strongest changes in the concentration of hydrogen bonds occurred when the PVP content rose from 17 to 50%, owing to the formation of intermolecular hydrogen bonds both within PVP and between PVP and PHB. The diffusion coefficient of water vapor (D) in the studied systems decreased as the concentration of glassy PVP in the composition increased. The radical concentration decreased with an increasing proportion of PVP, which can be explained by the glassy state of this polymer at room temperature. A characteristic point was found at the 50/50% component ratio, in the region where an inversion transition of PHB from a dispersion material to a dispersed medium is assumed. These studies made it possible for the first time to carry out a comprehensive analysis of the effect of the component ratio on the structural and dynamic characteristics of the PHB/PVP fibrous material at the molecular scale.

23 pages, 1290 KiB  
Article
A KeyBERT-Enhanced Pipeline for Electronic Information Curriculum Knowledge Graphs: Design, Evaluation, and Ontology Alignment
by Guanghe Zhuang and Xiang Lu
Information 2025, 16(7), 580; https://doi.org/10.3390/info16070580 - 6 Jul 2025
Abstract
This paper proposes a KeyBERT-based method for constructing a knowledge graph of the electronic information curriculum system, aiming to enhance the structured representation and relational analysis of educational content. Electronic Information Engineering curricula encompass diverse and rapidly evolving topics; however, existing knowledge graphs often overlook multi-word concepts and more nuanced semantic relationships. To address this gap, this paper presents a KeyBERT-enhanced method for constructing a knowledge graph of the electronic information curriculum system. Utilizing teaching plans, syllabi, and approximately 500,000 words of course materials from 17 courses, we first extracted 500 knowledge points via the Term Frequency–Inverse Document Frequency (TF-IDF) algorithm to build a baseline course–knowledge matrix and visualize the preliminary graph using Graph Convolutional Networks (GCN) and Neo4j. We then applied KeyBERT to extract about 1000 knowledge points—approximately 65% of extracted terms were multi-word phrases—and augment the graph with co-occurrence and semantic-similarity edges. Comparative experiments demonstrate a ~20% increase in non-zero matrix coverage and a ~40% boost in edge count (from 5100 to 7100), significantly enhancing graph connectivity. Moreover, we performed sensitivity analysis on extraction thresholds (co-occurrence ≥ 5, similarity ≥ 0.7), revealing that (5, 0.7) maximizes the F1-score at 0.83. Hyperparameter ablation over n-gram ranges [(1,1),(1,2),(1,3)] and top_n [5, 10, 15] identifies (1,3) + top_n = 10 as optimal (Precision = 0.86, Recall = 0.81, F1 = 0.83). Finally, GCN downstream tests show that, despite higher sparsity (KeyBERT 64% vs. TF-IDF 40%), KeyBERT features achieve Accuracy = 0.78 and F1 = 0.75, outperforming TF-IDF’s 0.66/0.69. This approach offers a novel, rigorously evaluated solution for optimizing the electronic information curriculum system and can be extended through terminology standardization or larger data integration.

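The knowledge-point extraction above is built on KeyBERT with a (1,3) n-gram range and top_n = 10. The sketch below shows that extraction step on a toy sentence, assuming the keybert package and its default sentence-transformers embedding model; the sample text and the similarity cut-off applied at the end are illustrative assumptions, not the study's corpus or configuration.

```python
# Minimal sketch of KeyBERT keyphrase extraction (requires the keybert package).
from keybert import KeyBERT

doc = (
    "The discrete Fourier transform maps a finite sequence of samples to its "
    "frequency-domain representation and underlies digital filter design."
)

kw_model = KeyBERT()   # defaults to a sentence-transformers embedding model
keywords = kw_model.extract_keywords(
    doc,
    keyphrase_ngram_range=(1, 3),   # allow multi-word knowledge points
    stop_words="english",
    top_n=10,
)
# Illustrative cut-off on the returned cosine-similarity scores; in the paper the
# 0.7 threshold governs the semantic-similarity edges added to the graph.
knowledge_points = [(phrase, score) for phrase, score in keywords if score >= 0.7]
print(knowledge_points)
```
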
16 pages, 1929 KiB  
Article
Dynamical Behavior of Solitary Waves for the Space-Fractional Stochastic Regularized Long Wave Equation via Two Distinct Approaches
by Muneerah Al Nuwairan, Bashayr Almutairi and Anwar Aldhafeeri
Mathematics 2025, 13(13), 2193; https://doi.org/10.3390/math13132193 - 4 Jul 2025
Abstract
This study investigates the influence of multiplicative noise—modeled by a Wiener process—and spatial-fractional derivatives on the dynamics of the space-fractional stochastic Regularized Long Wave equation. By employing a complete discriminant polynomial system, we derive novel classes of fractional stochastic solutions that capture the complex interplay between stochasticity and nonlocality. Additionally, the variational principle, derived by He’s semi-inverse method, is utilized, yielding additional exact solutions that are bright solitons, bright-like solitons, kinky bright solitons, and periodic structures. Graphical analyses are presented to clarify how variations in the fractional order and noise intensity affect essential solution features, such as amplitude, width, and smoothness, offering deeper insight into the behavior of such nonlinear stochastic systems.

16 pages, 4637 KiB  
Article
Estimating Subsurface Geostatistical Properties from GPR Reflection Data Using a Supervised Deep Learning Approach
by Yu Liu, James Irving and Klaus Holliger
Remote Sens. 2025, 17(13), 2284; https://doi.org/10.3390/rs17132284 - 3 Jul 2025
Abstract
The quantitative characterization of near-surface heterogeneity using ground-penetrating radar (GPR) is an important but challenging task. The estimation of subsurface geostatistical parameters from surface-based common-offset GPR reflection data has so far relied upon a Monte-Carlo-type inversion approach. This allows for a comprehensive exploration of the parameter space and provides some measure of uncertainty with regard to the inferred results. However, the associated computational costs are inherently high. To alleviate this problem, we present an alternative deep-learning-based technique that, once trained in a supervised context, allows us to perform the same task in a highly efficient manner. The proposed approach uses a convolutional neural network (CNN), which is trained on a vast database of autocorrelations obtained from synthetic GPR images for a comprehensive range of stochastic subsurface models. An important aspect of the training process is that the synthetic GPR data are generated using a computationally efficient approximate solution of the underlying physical problem. This strategy effectively addresses the notorious challenge of insufficient training data, which frequently impedes the application of deep-learning-based methods in applied geophysics. Tests on a wide range of realistic synthetic GPR data generated using a finite-difference time-domain (FDTD) solution of Maxwell’s equations, as well as a comparison with the results of the traditional Monte Carlo approach on a pertinent field dataset, confirm the viability of the proposed method, even in the presence of significant levels of data noise. Our results also demonstrate that typical mismatches between the dominant frequencies of the analyzed and training data can be readily alleviated through simple spectral shifting.
(This article belongs to the Special Issue Advanced Ground-Penetrating Radar (GPR) Technologies and Applications)

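The CNN described above takes autocorrelations of GPR reflection images as input. A minimal sketch of an FFT-based (Wiener–Khinchin) 2-D autocorrelation follows; the random test image and the normalization choices are placeholders, not the authors' preprocessing pipeline.

```python
import numpy as np

def normalized_autocorrelation(image):
    """2-D autocorrelation via the Wiener-Khinchin relation (inverse FFT of the power spectrum).
    Note: this is the circular (periodic) estimate; zero-padding would give the linear one."""
    img = image - image.mean()                      # remove the mean before correlating
    spectrum = np.fft.fft2(img)
    acorr = np.fft.ifft2(np.abs(spectrum) ** 2).real
    acorr = np.fft.fftshift(acorr)                  # move zero lag to the array centre
    return acorr / acorr.max()                      # normalize so the zero-lag peak equals 1

# Hypothetical stand-in for a GPR reflection section (random image, not field data)
rng = np.random.default_rng(42)
gpr_image = rng.standard_normal((256, 512))
acf = normalized_autocorrelation(gpr_image)         # would serve as the CNN input feature
```
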
29 pages, 1138 KiB  
Article
Regularized Kaczmarz Solvers for Robust Inverse Laplace Transforms
by Marta González-Lázaro, Eduardo Viciana, Víctor Valdivieso, Ignacio Fernández and Francisco Manuel Arrabal-Campos
Mathematics 2025, 13(13), 2166; https://doi.org/10.3390/math13132166 - 2 Jul 2025
Abstract
Inverse Laplace transforms (ILTs) are fundamental to a wide range of scientific and engineering applications—from diffusion NMR spectroscopy to medical imaging—yet their numerical inversion remains severely ill-posed, particularly in the presence of noise or sparse data. The primary objective of this study is to develop robust and efficient numerical methods that improve the stability and accuracy of ILT reconstructions under challenging conditions. In this work, we introduce a novel family of Kaczmarz-based ILT solvers that embed advanced regularization directly into the iterative projection framework. We propose three algorithmic variants—Tikhonov–Kaczmarz, total variation (TV)–Kaczmarz, and Wasserstein–Kaczmarz—each incorporating a distinct penalty to stabilize solutions and mitigate noise amplification. The Wasserstein–Kaczmarz method, in particular, leverages optimal transport theory to impose geometric priors, yielding enhanced robustness for multi-modal or highly overlapping distributions. We benchmark these methods against established ILT solvers—including CONTIN, maximum entropy (MaxEnt), TRAIn, ITAMeD, and PALMA—using synthetic single- and multi-modal diffusion distributions contaminated with 1% controlled noise. Quantitative evaluation via mean squared error (MSE), Wasserstein distance, total variation, peak signal-to-noise ratio (PSNR), and runtime demonstrates that Wasserstein–Kaczmarz attains an optimal balance of speed (0.53 s per inversion) and accuracy (MSE = 4.7×10⁻⁸), while TRAIn achieves the highest fidelity (MSE = 1.5×10⁻⁸) at a modest computational cost. These results elucidate the inherent trade-offs between computational efficiency and reconstruction precision and establish regularized Kaczmarz solvers as versatile, high-performance tools for ill-posed inverse problems.

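The Tikhonov–Kaczmarz variant described above embeds an l2 penalty in the iterative projections. The sketch below uses the standard trick of running randomized Kaczmarz on the Tikhonov-augmented system, which is only one plausible reading of that idea; the kernel discretization, noise level, non-negativity projection, and stopping rule are assumptions, not the paper's solver.

```python
import numpy as np

def tikhonov_kaczmarz(A, b, lam=1e-2, sweeps=200, seed=0):
    """Randomized Kaczmarz on the Tikhonov-augmented system [A; lam*I] x = [b; 0].
    Projecting onto the augmented rows bakes the l2 penalty into every step."""
    m, n = A.shape
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    row_norm2 = np.sum(A_aug**2, axis=1)
    probs = row_norm2 / row_norm2.sum()            # sample rows with prob. ~ squared norm
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(sweeps * (m + n)):
        i = rng.choice(m + n, p=probs)
        x += (b_aug[i] - A_aug[i] @ x) / row_norm2[i] * A_aug[i]
    return np.maximum(x, 0.0)                      # optional non-negativity for ILT spectra

# Hypothetical discretized Laplace kernel: b(t_j) = sum_k exp(-t_j * s_k) f(s_k)
t = np.linspace(0.01, 3.0, 120)
s = np.logspace(-1, 2, 80)
A = np.exp(-np.outer(t, s))
f_true = np.exp(-0.5 * (np.log10(s) - 0.5) ** 2 / 0.05)   # single log-normal-like mode
b = A @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(t.size)
f_rec = tikhonov_kaczmarz(A, b, lam=5e-2)
```
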
20 pages, 11822 KiB  
Article
Inverse Design of Ultrathin Metamaterial Absorber
by Eunbi Jang, Junghee Cho, Chanik Kang and Haejun Chung
Nanomaterials 2025, 15(13), 1024; https://doi.org/10.3390/nano15131024 - 1 Jul 2025
Abstract
Electromagnetic absorbers combining ultrathin profiles with robust absorptivity across wide incidence angles are essential for applications such as stealth, wireless communications, and quantum computing. Traditional designs, including Salisbury screens, typically require thicknesses of at least a quarter-wavelength (λ/4), restricting their use in compact systems. While metamaterial absorbers (MMAs) offer reduced thicknesses, their absorptivity generally decreases under oblique incidence conditions. Here, we introduce an adjoint optimization-based inverse design method that merges the ultrathin advantage of MMAs with the angle-insensitive characteristics of Salisbury screens. By leveraging the computational efficiency of the adjoint method, we systematically optimize absorber structures as thin as λ/20. The optimized structures achieve absorption exceeding 90% at the target frequency (7.5 GHz) and demonstrate robust performance under oblique incidence, maintaining over 90% absorption up to 50°, approximately 80% at 60°, and around 70% at 70°. Comparative analysis against particle swarm optimization further highlights the superior efficiency of the adjoint method, reducing the computational effort by approximately 98%. This inverse design framework thus provides substantial improvements in both performance and computational cost, offering a promising solution for advanced electromagnetic absorber design.

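For context on the quarter-wavelength constraint mentioned above, the sketch below evaluates the classic transmission-line model of a Salisbury screen (a free-space-matched resistive sheet a quarter wavelength above a ground plane) at the abstract's 7.5 GHz target. It only illustrates why the conventional design needs roughly λ/4 of thickness at normal incidence; it is not the paper's adjoint-optimized λ/20 structure.

```python
import numpy as np

Z0 = 376.73                      # free-space wave impedance (ohm)
c = 299_792_458.0                # speed of light (m/s)
f0 = 7.5e9                       # design frequency quoted in the abstract (Hz)
d = c / (4 * f0)                 # quarter-wave air spacer thickness (~10 mm)
Rs = Z0                          # sheet resistance matched to free space

f = np.linspace(2e9, 14e9, 601)
beta = 2 * np.pi * f / c
z_short = 1j * Z0 * np.tan(beta * d)            # grounded air spacer seen at the sheet
z_in = (Rs * z_short) / (Rs + z_short)          # resistive sheet in parallel with the spacer
gamma = (z_in - Z0) / (z_in + Z0)
absorptance = 1 - np.abs(gamma) ** 2            # near unity at f0, rolling off away from it
print(f"spacer thickness = {d * 1e3:.1f} mm, peak absorptance = {absorptance.max():.3f}")
```
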
10 pages, 2159 KiB  
Communication
Beyond Green’s Functions: Inverse Helmholtz and “Om”-Potential Methods for Macroscopic Electromagnetism in Isotropy-Broken Media
by Maxim Durach
Photonics 2025, 12(7), 660; https://doi.org/10.3390/photonics12070660 - 30 Jun 2025
Abstract
The applicability ranges of macroscopic and microscopic electromagnetism are contrasting. While microscopic electromagnetism deals with point sources, singular fields, and discrete atomistic materials, macroscopic electromagnetism concerns smooth average distributions of sources, fields, and homogenized effective metamaterials. Green’s function method (GFM) involves finding fields of point sources and applying the superposition principle to find fields of distributed sources. When utilized to solve microscopic problems, GFM is well within the applicability range. Extension of GFM to simple macroscopic problems is convenient, but not fully logically sound, since point sources and singular fields are technically not a subject of macroscopic electromagnetism. This explains the difficulty of both finding the Green’s functions and applying the superposition principle in complex isotropy-broken media, which are very different from microscopic environments. In this manuscript, we lay out a path to the solution of macroscopic Maxwell’s equations for distributed sources, bypassing GFM by introducing an inverse approach and a method based on the “Om”-potential, which we describe here. To the researchers of electromagnetism, this provides access to powerful analytical tools and a broad new space of solutions for Maxwell’s equations.
(This article belongs to the Special Issue Photonics Metamaterials: Processing and Applications)

22 pages, 501 KiB  
Article
Identification of a Time-Dependent Source Term in Multi-Term Time–Space Fractional Diffusion Equations
by Yushan Li, Yuxuan Yang and Nanbo Chen
Mathematics 2025, 13(13), 2123; https://doi.org/10.3390/math13132123 - 28 Jun 2025
Abstract
This paper investigates the inverse problem of identifying a time-dependent source term in multi-term time–space fractional diffusion equations (TSFDE). First, we rigorously establish the existence and uniqueness of strong solutions for the associated direct problem under homogeneous Dirichlet boundary conditions. A novel implicit finite difference scheme incorporating the matrix transfer technique is developed for solving the initial-boundary value problem numerically. Regarding the inverse problem, we prove solution uniqueness and stability estimates based on interior measurement data. The source identification problem is reformulated as a variational problem using the Tikhonov regularization method, and an approximate solution to the inverse problem is obtained with the aid of the optimal perturbation algorithm. Extensive numerical simulations involving six test cases in both 1D and 2D configurations demonstrate the high effectiveness and satisfactory stability of the proposed methodology.

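The inverse problem above is stabilized with Tikhonov regularization. As a generic illustration only, the sketch below applies Tikhonov-regularized least squares to a made-up smoothing forward map standing in for the discretized source-to-measurement operator; the paper's fractional-diffusion solver and optimal perturbation algorithm are not reproduced here.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Hypothetical smoothing (Volterra-type) forward map standing in for the discretized
# operator that maps a time-dependent source term to interior measurements.
nt = 200
t = np.linspace(0.0, 1.0, nt)
A = np.tril(np.exp(-5.0 * (t[:, None] - t[None, :]))) * (t[1] - t[0])
f_true = np.sin(2 * np.pi * t) * np.exp(-t)
data = A @ f_true + 1e-3 * np.random.default_rng(3).standard_normal(nt)

f_rec = tikhonov_solve(A, data, alpha=1e-4)   # regularization tames the ill-posedness
```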