Review

Recent Advances in B-Mode Ultrasound Simulators

by Cindy M. Solano-Cordero 1, Nerea Encina-Baranda 1, Mailyn Pérez-Liva 1,2 and Joaquin L. Herraiz 1,2,*

1 Nuclear Physics Group and IPARCOS, Faculty of Physical Sciences, Complutense University of Madrid, CEI Moncloa, 28040 Madrid, Spain
2 Health Research Institute of the Hospital Clínico San Carlos (IdISSC), 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12535; https://doi.org/10.3390/app152312535
Submission received: 27 October 2025 / Revised: 23 November 2025 / Accepted: 24 November 2025 / Published: 26 November 2025

Abstract

Ultrasound (US) imaging is one of the most accessible, non-invasive, and real-time diagnostic techniques in clinical medicine. However, conventional B-mode US suffers from intrinsic limitations such as speckle noise, operator dependence, and variability in image interpretation, which reduce diagnostic reproducibility and hinder skill acquisition. Because accurate image acquisition and interpretation rely heavily on the operator’s experience, mastering ultrasound requires extensive hands-on training under diverse anatomical and pathological conditions. Yet, traditional educational settings rarely provide consistent exposure to such variability, making simulation-based environments essential for developing and standardizing operator expertise. This scoping review synthesizes advances from 2014 to 2024 in B-mode ultrasound simulation, identifying 80 studies through structured searches in PubMed, Scopus, Web of Science, and IEEE Xplore. Simulation methods were organized into interpolative, wave-based, ray-based, and convolution-based models, as well as emerging Artificial Intelligence (AI)-driven approaches. The review emphasizes recent simulation engines and toolboxes reported in this period and highlights the growing role of learning-based pipelines (e.g., Generative Adversarial Networks (GANs) and diffusion models) for realism, scalability, and data augmentation. The results show steady progress toward high realism and computational efficiency, including Graphics Processing Unit (GPU)-accelerated transport models, physics-informed convolution, and AI-enhanced translation and synthesis. Remaining challenges include the modeling of nonlinear and dynamic effects at scale, standardizing evaluation across tasks, and integrating physics with learning to balance fidelity and speed. These findings outline current capabilities and future directions for training, validation, and diagnostic support in ultrasound imaging.

1. Introduction

Ultrasound (US) imaging is a non-invasive, real-time, and cost-effective diagnostic technique that provides anatomical and structural information of the human body. It operates by transmitting high-frequency acoustic pulses into tissue and detecting the echoes generated at interfaces of different acoustic impedances. The most widely used representation of this information is the brightness-mode (B-mode) image, in which the amplitude of each received echo is converted into pixel intensity to form a two-dimensional cross-sectional view of internal anatomy. Because of its ability to display tissue morphology interactively, without ionizing radiation, ultrasound has become an essential modality for routine diagnostics and for guiding minimally invasive procedures such as biopsies, catheter placements, or vascular access, where precise spatial localization is required. Compared with other imaging techniques such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), ultrasound offers lower cost, portability, and immediate feedback at the bedside, making it indispensable in point-of-care and intraoperative contexts.
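As a minimal illustration of this amplitude-to-brightness mapping, the following Python sketch forms a single B-mode line from a synthetic radio-frequency (RF) trace. The pulse shape, sampling rate, and echo positions are arbitrary assumptions for demonstration; real scanners additionally apply time-gain compensation, filtering, and scan conversion.

```python
# Sketch: one B-mode line from a toy RF trace (all parameters assumed).
import numpy as np
from scipy.signal import hilbert

fs = 40e6                                 # sampling rate (Hz), assumed
t = np.arange(0, 50e-6, 1 / fs)           # 50 us of echo time (~3.8 cm depth)

rng = np.random.default_rng(0)
rf = 0.02 * rng.standard_normal(t.size)   # weak background scattering
pulse = np.sin(2 * np.pi * 5e6 * t[:64]) * np.hanning(64)  # 5 MHz windowed pulse
for t0, amp in [(10e-6, 1.0), (30e-6, 0.5)]:               # two interface echoes
    idx = int(t0 * fs)
    rf[idx:idx + pulse.size] += amp * pulse

envelope = np.abs(hilbert(rf))                               # echo amplitude
bmode_db = 20 * np.log10(envelope / envelope.max() + 1e-6)   # log compression
pixels = np.clip((bmode_db + 60) / 60, 0, 1) * 255           # 60 dB display range
```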
Despite these advantages, conventional B-mode imaging faces intrinsic limitations related to its coherent nature. The constructive and destructive interference of backscattered echoes from numerous sub-resolution scatterers produces speckle noise, which reduces the signal-to-noise ratio (SNR) and can obscure fine anatomical details [1]. These effects make image interpretation challenging and highly dependent on the operator’s experience, leading to substantial inter- and intra-observer variability. Acquiring proficiency in ultrasound, therefore, requires extensive supervised practice to develop hand–eye coordination for probe manipulation, a solid understanding of three-dimensional anatomy, and the ability to select appropriate imaging planes while recognizing and minimizing artifacts—skills essential for accurate diagnostic decisions [2].
However, traditional training environments based on scanning real patients or volunteers present inherent limitations. Exposure to relevant anatomical variations and pathologies is inconsistent, and certain conditions are rare or time-dependent, forcing trainees to wait long periods before encountering them. Moreover, clinical settings do not allow for controlled repetition or standardized assessment, which hinders objective evaluation of learning progress. Simulation-based education has thus become a fundamental component of ultrasound training, providing standardized, safe, and reproducible learning environments [3]. Early solutions relied on physical phantoms or tissue-mimicking materials designed to approximate human acoustic properties. However, these models remain static and fail to reproduce the complexity and variability of living tissues. Animal organs, while more realistic in texture, have limited durability and anatomical fidelity.
Advances in computational acoustics and image synthesis have led to ultrasound simulators capable of generating realistic B-mode images under controlled and diverse conditions. These platforms can model acoustic propagation, tissue deformation, and probe motion, offering immersive training experiences that replicate real clinical scenarios [2,4]. Beyond education, such simulators are increasingly used for system design, algorithm validation, and quantitative imaging research.
Within this context, the present article provides a comprehensive review of the technological and methodological advances in B-mode ultrasound simulation achieved over the last decade (2014–2024). Through a structured scoping analysis, this review categorizes simulation strategies from two complementary perspectives (see Figure 1): the data-driven perspective (including interpolative and generative approaches, such as Artificial Intelligence (AI)-driven models) and the physical modeling perspective, which encompasses the core model-based methods (wave-based, ray-based, and convolution-based). The review then discusses their underlying principles, performance, and limitations. It also highlights the growing convergence between physics-based modeling and data-driven learning, outlining best practices and future perspectives for developing realistic, efficient, and educationally effective ultrasound simulators.

Principles and Methodological Frameworks in Ultrasound Simulation

Ultrasound image simulation can be categorized using two complementary perspectives: its dependence on real data, and its underlying physical modeling principles. From a data-driven perspective, methods are classified as interpolative or generative. From a physical modeling perspective, simulations can be model-free (e.g., interpolative approaches) or model-based, which include wave-based, ray-based, and convolution-based formulations, depending on how they approximate the propagation of acoustic energy in tissue.
Interpolative approaches rely on pre-existing ultrasound data to synthesize new images by resampling and interpolating within previously acquired datasets. These methods generate synthetic slices from real volumes, often captured under specific probe orientations, which are re-sliced during simulation to correspond with the virtual probe position. Their efficiency and visual realism make them particularly suitable for real-time applications such as medical training simulators [5]. For instance, the authors in [6] implemented a strategy that maps wave interactions within tissue by sampling pre-acquired images and linking them to corresponding 3D locations in a deformed region, assuming that a B-mode image of deformed tissue resembles a properly deformed version of a previous frame. Despite their computational advantages, interpolative methods perform image-based rendering rather than genuine physical simulation, i.e., they are model-free, and their realism decreases when imaging parameters—such as beam direction, attenuation, or speckle dynamics—deviate from the acquisition setup.
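The core re-slicing operation can be sketched in a few lines of Python. The stored volume and the virtual probe pose below are stand-ins, and production simulators add deformation models and realistic probe kinematics on top of this step.

```python
# Sketch: re-slicing a pre-acquired 3D volume at a virtual probe pose.
import numpy as np
from scipy.ndimage import map_coordinates

volume = np.random.rand(128, 128, 128)   # stand-in for a pre-acquired 3D US volume

def reslice(volume, origin, lat_axis, depth_axis, width=96, depth=96, spacing=1.0):
    """Sample the 2D image plane defined by the virtual probe pose (voxel units)."""
    u = np.arange(width) - width / 2          # lateral sample positions
    v = np.arange(depth)                      # axial (depth) sample positions
    uu, vv = np.meshgrid(u, v, indexing='xy')
    # World coordinates of every pixel on the imaging plane
    pts = (origin[:, None, None]
           + uu * spacing * lat_axis[:, None, None]
           + vv * spacing * depth_axis[:, None, None])
    # Trilinear interpolation (order=1) inside the stored volume
    return map_coordinates(volume, pts, order=1, mode='nearest')

origin = np.array([64.0, 64.0, 10.0])         # probe contact point (voxels)
lat_axis = np.array([1.0, 0.0, 0.0])          # lateral direction of the plane
depth_axis = np.array([0.0, 0.3, 0.95])       # tilted beam direction
depth_axis /= np.linalg.norm(depth_axis)
frame = reslice(volume, origin, lat_axis, depth_axis)
```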
Generative approaches, in contrast, aim to reproduce ultrasound images from first principles by explicitly modeling acoustic propagation and tissue interaction. Within this category, three main families of methods can be distinguished according to their level of physical abstraction: wave-based, ray-based, and convolution-based. These techniques commonly leverage geometric or density data from imaging modalities such as CT [7,8,9,10,11] or MRI, or rely on mesh-based anatomical representations [12]. Through this physically grounded modeling, they can reproduce ultrasound-specific artifacts, including shadowing, mirroring, range distortions, refractions, and reverberations.
Wave-based methods simulate ultrasound propagation by numerically solving the full acoustic wave equation (typically in its nonlinear, heterogeneous form) or employing high-order pseudo-spectral or finite-difference (FD) formulations that retain coherent phenomena such as diffraction, interference, multiple scattering, and phase aberration. Seminal work such as [13] introduced heterogeneous nonlinear full-wave modeling, which remains a reference standard for realistic B-mode synthesis and serves as a foundation for subsequent three-dimensional simulations. However, because of their high computational demands, these methods are generally limited to offline applications rather than real-time use.
Ray-based methods simplify the physics by applying geometric acoustics principles, tracing rays through tissue boundaries to efficiently simulate reflection, refraction, attenuation, and shadowing. While less accurate for sub-wavelength interference and speckle formation, these approaches provide convincing macroscopic realism suitable for interactive training environments. Landmark studies such as [14,15] demonstrated Graphics Processing Unit (GPU)-accelerated ray-based simulators derived from CT data, establishing the foundation for modern real-time frameworks that balance realism and performance.
Convolution-based methods model the received radio-frequency (RF) signal as the superposition of echo contributions from a distribution of point scatterers, each weighted by the system’s spatially varying point-spread function (PSF). This linear, shift-variant convolutional formulation captures speckle statistics and depth-dependent blurring, providing an efficient approximation of the imaging chain under the Born approximation [16]. Techniques such as the one-dimensional convolution model simulate RF signals by projecting scatterers and convolving them with an emitted pulse, replicating speckle texture and image blurring without explicitly solving the wave equation [17]. More recent open-source implementations, also based on [17], extend this principle through GPU acceleration and adaptive scatterer modeling, supporting dynamic simulations of entire cardiac cycles (Figure 1).
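The core of this formulation can be sketched in a few lines, here assuming a shift-invariant separable PSF for simplicity (real implementations such as COLE use depth-dependent kernels). Note that speckle texture emerges from the convolution itself, without any explicit speckle model.

```python
# Sketch: convolution-based image formation (shift-invariant PSF assumed).
import numpy as np
from scipy.signal import fftconvolve, hilbert

rng = np.random.default_rng(0)
scatterers = rng.standard_normal((512, 256))   # point-scatterer amplitude map

# Separable PSF: axial RF oscillation times a lateral Gaussian beam profile
ax = np.arange(-32, 33)
axial = np.sin(2 * np.pi * ax / 8.0) * np.exp(-(ax / 12.0) ** 2)
lateral = np.exp(-(np.arange(-8, 9) / 4.0) ** 2)
psf = np.outer(axial, lateral)

rf = fftconvolve(scatterers, psf, mode='same')   # linear (Born) image formation
envelope = np.abs(hilbert(rf, axis=0))           # axial envelope detection
bmode_db = 20 * np.log10(envelope / envelope.max() + 1e-6)  # speckle appears here
```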
Alongside these methodological families, several toolboxes and simulation platforms have played a pivotal role in shaping the field of computational ultrasound. One of the earliest and most influential tools, Field II [18,19], was introduced in MATLAB during the late 1990s. This simulator enables the use of single or multi-element transducers and supports a wide range of configurations, including apodization, focusing, and excitation. Field II approximates the impulse response by dividing the transducer face into smaller rectangular sub-elements and superimposing the far-field impulse responses of these sub-elements, providing a robust and flexible framework for simulating ultrasound fields with high accuracy. To address the challenges of accurately modeling pressure fields—particularly in the very near field, where solving the acoustic equation numerically can become unstable—FOCUS [20] was developed. FOCUS adopts a faster approach to converge on impulse responses, achieving high-resolution simulations with reduced computational cost. In parallel, k-Wave [21] was designed for both the simulation and reconstruction of photoacoustic and ultrasonic wavefields. It employs a k-space pseudo-spectral time-domain solution to coupled first-order acoustic equations, accommodating both homogeneous and heterogeneous media in one, two, or three dimensions. These simulators are unified by their ability to compute one-way acoustic pressure fields based on solutions to the ultrasound wave equation: while Field II operates primarily in the time domain, FOCUS and k-Wave leverage frequency-domain formulations to model acoustic propagation within approximated biological media.
For the specific design and optimization of transducer arrays, ULTRASIM [22] was introduced as a menu-oriented simulator that allows researchers to predict the sound fields generated by medical transducers. This tool facilitates the study of transducer geometries and configurations for diverse applications, bridging theoretical modeling and experimental design. Complementarily, the need to simulate RF images containing both fundamental and second-harmonic components of ultrasound propagation led to the development of CREANUIS [23]. This simulator specializes in reproducing RF image characteristics and harmonic imaging phenomena, enabling controlled analysis of nonlinear effects within virtual environments. Collectively, these platforms established the computational foundations for modern ultrasound simulation, offering essential tools for modeling, analyzing, and optimizing acoustic systems across medical and research contexts.
In recent years, the landscape of ultrasound simulation has continued to evolve toward real-time performance and data-driven intelligence. Traditional training methods remain time-consuming and resource-intensive [24], but the integration of deep learning has revolutionized both medical image analysis and simulation. Convolutional neural networks and recurrent fully convolutional architectures have been optimized to provide rapid interpretation and feedback, achieving real-time processing of ultrasound cine clips [25]. These developments mark a paradigm shift from purely physics-based simulation to hybrid approaches that combine physical modeling, computational acceleration, and learned representations, enabling realistic, interactive ultrasound environments that continuously adapt to user behavior and anatomical variability.

2. Materials and Methods

2.1. Methodology

This study conducted a scoping review to identify and analyze advancements in ultrasound B-mode simulation techniques and technologies, focusing on methods for realistic image generation for training, validation, and clinical applications. The review explored diverse approaches, tools, and methods utilized in the field, with a particular emphasis on recent innovations that enhance simulation accuracy and realism. To ensure relevance and precision, the literature search was restricted, in part, to article titles. Title-based searches were chosen to reduce the number of retrieved studies while ensuring that the research explicitly addressed ultrasound simulation methodologies and their applications.
The Literature Search Strategy
The articles included in this review were identified through two comprehensive searches conducted in the following databases: Web of Science, Scopus, PubMed, and IEEE Xplore. The last search was conducted on 24 October 2024.
The search utilized the following terms: “ultrasound” AND (“simulation” OR “simulator” OR “simulate” OR “synthetic”), combined with one of the following groups using the Boolean operator AND: (“wave” OR “acoustics” OR “algorithm” OR “toolbox”), (“CT” OR “computed tomography” OR “ray tracing” OR “GPU” OR “Monte Carlo” OR “convolution” OR “deep learning” OR “convolutional neural network”). The full search strings for each database are provided in Appendix A.
Inclusion Criteria
  • Articles published between 2014 and 2024, ensuring relevance to recent advancements in the field.
  • Articles explicitly mentioning “ultrasound” AND (“simulation” OR “simulator” OR “simulate” OR “synthetic”) in their titles.
  • Articles focusing on B-mode image formation, image realism, or training datasets were included.
Exclusion Criteria
  • Articles published in languages other than English or Spanish were excluded.
  • Studies not explicitly mentioning the key terms in their titles were excluded to maintain relevance to the research objectives, as well as those focusing on non-human populations or system-level simulations not related to image formation.

2.2. Study Selection Process

This scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines [26], ensuring a transparent and methodical selection process.
The database search initially identified 1263 studies. Because several databases imposed query length restrictions, the search was performed through two complementary queries whose results were subsequently merged for analysis. Following duplicate removal, 662 studies with identical or overlapping content were excluded, leaving 601 unique records for title and abstract screening. This screening phase eliminated 525 studies, resulting in 76 articles retained for full-text evaluation.
During the full-text assessment, 19 papers were excluded for not meeting the inclusion criteria, leaving 57 eligible studies. To strengthen the comprehensiveness of the review, 23 additional references were identified through citation tracking of these sources, bringing the final corpus to 80 studies.
To minimize bias and enhance reliability, study selection and data extraction were performed independently by multiple reviewers, with any discrepancies resolved through discussion and consensus. This rigorous process forms the foundation of the analyses and insights presented in this review. The overall workflow—covering identification, screening, eligibility, and inclusion—is summarized in the PRISMA flow diagram (Figure 2).

3. Results

The screening process identified recurring research themes and methodologies, focusing on advancements in ultrasound simulation technologies. The findings were summarized into concise text and tables, providing a comprehensive overview of how simulation techniques are evolving to enhance realism and accuracy. The studies were first categorized by software and toolboxes (n = 9) and their Anatomical, Motion, and Artifact Modeling (n = 29), including CT/MRI-derived US simulation (n = 8), dynamic motion (e.g., heart, breathing) (n = 11), and speckle/noise modeling (n = 10). A separate classification by Ultrasound Transport Models covered wave-based methods (n = 4), ray-based and convolution-based methods (n = 12), and AI-based methods (n = 20), offering insights into the diverse techniques shaping the future of ultrasound simulation. While some studies could fall into more than one category, they are assigned to the section that best aligns with their primary objective, ensuring clarity and focus in the presentation of findings. Additionally, a set of distinct approaches (n = 6) addressed very specific methodological aspects that do not fall neatly into the main categories. These works were nevertheless integrated throughout the review in the sections most closely related to their objectives, ensuring that their contributions are adequately contextualized. Table 1 provides a comprehensive summary of the study distribution across the various categories outlined above.
Building on these classifications, the following sections provide a detailed exploration of the key themes identified. We begin by discussing the latest software tools and toolboxes that support ultrasound simulation, highlighting their critical role in advancing the field. This is followed by an examination of other innovative simulation approaches that incorporate imaging modalities like CT and MRI for static scenarios. The review then transitions to dynamic simulations, addressing aspects such as heart motion, breathing, and the modeling of speckle, scatter, and noise. Lastly, we delve into the diverse methodologies utilized in ultrasound simulation, including wave propagation techniques, ray-tracing, GPU-accelerated computations, convolution-based approaches, and cutting-edge AI-driven models like GANs and diffusion models. Together, these sections provide a comprehensive perspective on the state of the art and emerging trends in ultrasound simulation research (Figure 3).

3.1. Software and Toolboxes

Several ultrasound frameworks have been developed to support modeling, simulation, and evaluation, generally falling into two groups: simulators and toolboxes. Simulators generate synthetic ultrasound RF or B-mode data (e.g., Field II, SIMUS, MUST, mSOUND), whereas toolboxes provide higher-level utilities for beamforming, analysis, or post-processing (e.g., QUPS, USTB). While the former directly emulate acoustic propagation or signal formation, the latter are designed to facilitate rapid prototyping, algorithm evaluation, or integration with existing simulators.
Community resources like the Ultrasound Toolbox (USTB) have become central for beamforming, evaluation, and dataset sharing, complementing simulators by providing standardized pipelines for processing and validation. As a complement to existing ultrasound modeling toolboxes, mSOUND [29] was designed to model linear and nonlinear acoustic wave propagation in media (primarily biological tissues) with arbitrary heterogeneities.
Operating in the frequency domain to efficiently model physical phenomena like attenuation and element directivity, MUST [28] simulates, analyzes, and designs ultrasound imaging scenarios. Working in the same domain, SIMUS [30] is an ultrasound simulator based on far-field (Fraunhofer) and paraxial (Fresnel) acoustic equations, capable of simulating acoustic pressure fields and RF signals for uniform linear or convex probes. SIMUS produced pressure fields similar to those of Field II, FOCUS, and k-Wave [31], and SIMUS3 extended the 2D capabilities of SIMUS and PFIELD to 3D for matrix transducers, enabling advanced 3D ultrasound imaging simulations [35].
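For intuition, the sketch below evaluates the kind of monochromatic spherical-wave superposition that such frequency-domain simulators compute for a focused linear array. Element directivity, attenuation, and the actual SIMUS/PFIELD interfaces are omitted, and all array and medium parameters are assumptions.

```python
# Sketch: monochromatic far-field pressure of a focused linear array
# (monopole superposition; no directivity or attenuation).
import numpy as np

c, f0 = 1540.0, 5e6                  # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f0 / c               # wavenumber
pitch, n_elem = 0.3e-3, 64
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch   # array on z = 0

focus = np.array([0.0, 0.03])        # focal point at 30 mm depth
delays = -np.hypot(x_elem - focus[0], focus[1]) / c       # focusing delays
delays -= delays.min()               # closest element fires last

x = np.linspace(-10e-3, 10e-3, 200)
z = np.linspace(1e-3, 60e-3, 300)
X, Z = np.meshgrid(x, z)

p = np.zeros_like(X, dtype=complex)
for xe, tau in zip(x_elem, delays):
    r = np.hypot(X - xe, Z)                                # element-to-point distance
    p += np.exp(1j * (k * r + 2 * np.pi * f0 * tau)) / r   # spherical-wave term

field_db = 20 * np.log10(np.abs(p) / np.abs(p).max())      # normalized field (dB)
```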
FLUST (Flow-Line-based Ultrasound Simulation Tool) provides a common platform for the evaluation of velocity estimation schemes on in silico data, enabling multiple realizations of signals from both simple and complex flow models at low computational cost [32].
Using four interconnected modules, PROTEUS [33] generates physically realistic contrast-enhanced ultrasound RF data that can be used to benchmark contrast-enhanced ultrasound imaging.
QUPS offers tools for rapid prototyping of ultrasound beamforming and imaging methods [34]. It provides standardized data structures to streamline tasks like data acquisition, generation, and processing. Utilizing MATLAB’s parallel processing and GPU capabilities, QUPS optimizes performance and supports data imports from popular ultrasound simulators such as Field II, MUST, and k-Wave (Table 2).
It is important to consider the usability, computational performance, and practical adoption of the available simulation frameworks. Physics-based simulators such as Field II, k-Wave, and CREANUIS remain the reference standard for physical accuracy. However, their computational demands restrict them to offline use; a single B-mode frame may require minutes to hours depending on configuration, and even GPU-accelerated k-Wave typically remains outside real-time performance. In contrast, more recent toolboxes (including MUST/SIMUS, PROTEUS, QUPS, and mSOUND) prioritize efficiency and accessibility through frequency-domain formulations, semi-analytical models, and GPU acceleration, enabling faster prototyping and large-scale data generation. Toolboxes such as USTB and FLUST also promote reproducible benchmarking through shared datasets and evaluation pipelines. Yet, despite their value in research and education, most simulators have not been integrated into commercial clinical training platforms, highlighting a gap between methodological advances and real-world adoption.
While toolboxes increasingly streamline access to simulations, their accuracy and capabilities ultimately depend on their underlying numerical models. For instance, studies have proposed improvements to FOCUS and k-Wave, such as nonlinear continuous-wave modeling [102], acceleration of k-Wave with Sparse Fourier Transform algorithms [106], and corrections for “staircasing” errors in grid-based simulations [103]. These works illustrate that toolboxes and simulators usually evolve together, with toolboxes acting as bridges that enhance usability and workflow integration, while core engines continue to advance in physical fidelity and numerical performance.

3.2. Anatomical, Motion and Artifact Modeling

3.2.1. CT/MRI-Derived Ultrasound Simulation

Recent research has leveraged CT and, to a lesser extent, MRI data to drive ultrasound simulation frameworks, employing a range of modeling strategies—including convolution-based methods, ray-tracing, ray casting, and hybrid techniques—to reproduce acoustic phenomena and generate realistic B-mode images for clinical and research applications.
Two related studies explore real-time ultrasound simulation from CT data using ray-based approaches. The first [36] presents an interactive simulator employing ray-casting to model reflection, transmission, and speckle-like scattering through a simplified, position-dependent model with tailored PSFs, achieving up to 55 fps for low-cost, high-fidelity training. The second [39] compares a pure ray-tracing model with a convolution-enhanced variant that incorporates additional ultrasound artifacts, such as realistic speckle and deformation effects. While the convolutional model provides richer artifact simulation, it requires higher-quality CT segmentation and more parameters, whereas the pure ray-based approach offers greater computational efficiency.
Building on this line, the authors of [41,42] proposed ultrasound simulation frameworks that combine reflection, transmission, and scatter images derived from CT data, replacing the computationally intensive Field II scatter model with a convolution-based approach integrated with ray-tracing. Validation with a tissue-mimicking phantom demonstrated high similarity between simulated and real ultrasound images, and a reduced computation time from several hours to seconds without compromising image quality. Such efficiency makes it suitable for time-critical clinical applications, including brachytherapy and pre-procedural rehearsal for image-guided interventions.
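A common ingredient of these CT-derived pipelines is a mapping from Hounsfield units (HU) to acoustic properties, followed by impedance-based reflection at interfaces along each ray. The sketch below illustrates this step with a crude linear HU-to-density assumption and an ad hoc attenuation factor; it is not the mapping used by any of the cited works.

```python
# Sketch: marching one ray through a CT line, HU -> impedance -> echoes.
import numpy as np

def hu_to_impedance(hu):
    """Very rough linear mapping: density from HU, near-constant sound speed."""
    density = 1000.0 * (1.0 + hu / 1000.0)   # kg/m^3, water-referenced
    speed = 1540.0                           # m/s, assumed constant here
    return density * speed                   # acoustic impedance Z = rho * c

hu_line = np.array([0, 0, 50, 50, 300, 300, 50, 50], dtype=float)  # toy CT ray
Z = hu_to_impedance(hu_line)

amplitude, echoes = 1.0, []
for i in range(len(Z) - 1):
    R = (Z[i + 1] - Z[i]) / (Z[i + 1] + Z[i])   # pressure reflection coefficient
    echoes.append(amplitude * R)                # echo returned from interface i
    amplitude *= (1.0 - R ** 2)                 # intensity-style transmission loss
    amplitude *= 0.97                           # ad hoc per-voxel attenuation
```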
More recently, LOTUS [43] employs a fully differentiable ray-casting simulator to generate synthetic US images from CT label maps. The framework learns to optimize simulation parameters specifically for downstream segmentation tasks, producing task-tailored ultrasound representations. An image adaptation network aligns synthetic images with real US characteristics, enabling end-to-end training and improving model generalization across different organs. In parallel, other approaches focus on real-time deformable ultrasound simulation using patient-specific preoperative CT data. One platform [40] leverages GPU-accelerated position-based dynamics to model tissue deformation, mapping simulated US pixels back to the original CT volume, and validating the method with phantom experiments. Complementarily, refraction-aware simulators [38] use ray-tracing and wavefront construction methods to account for wave phenomena like diffraction and refraction, producing more realistic projections of pathological structures while maintaining real-time performance. Together, these studies highlight the trend toward integrating high-fidelity, patient-specific, and physics-informed simulations for both training and intraoperative guidance.
Modeling acoustics using MRI is an emerging approach that extends the capabilities of traditional MRI techniques to simulate and analyze sound-wave propagation in various media, including biological tissues. While MRI is primarily used for imaging anatomical and functional structures through magnetic fields and radio waves, its potential for acoustic modeling lies in its ability to provide high-resolution maps of tissue properties and their spatial variations, which are essential for acoustic simulations. This approach was demonstrated in [37], where the authors developed a hybrid convolutional ray-tracing technique using segmented MRI data to estimate tissue-specific acoustic properties, such as sound speed, acoustic impedance, and attenuation. The ray-tracing engine models both reflected and backscattered ultrasound energy, simulating typical ultrasound artifacts like shadowing, mirroring, and refraction, while an automatic optimization algorithm adjusts simulation parameters to improve realism by comparing simulated images with real ultrasound data and fine-tuning the acoustic properties.

3.2.2. Dynamic Motion

Simulation technology has advanced beyond static models to encompass dynamic processes, allowing for the realistic representation of motion in various physiological systems. Among the most important applications is the simulation of physiological motion, such as the rhythmic beating of the heart and the intricate movements involved in breathing.
A significant step forward in this field was introduced by [46], who developed a pipeline for the generation of realistic 3D synthetic echocardiographic sequences. This framework combines electromechanical (E/M) heart modeling with ultrasound simulation and leverages real 3D ultrasound recordings to learn and reproduce realistic speckle textures. By displacing a 3D cloud of scatterers according to the motion fields generated by the E/M model, the pipeline produces synthetic sequences that not only replicate typical ultrasound artifacts but also provide voxel-wise ground-truth displacement fields. This allows researchers to systematically evaluate the sensitivity of strain and deformation algorithms under varying mechanical and imaging conditions. The authors released an initial open-access library of eight synthetic sequences (healthy and pathological cases), setting up a foundation for standardized benchmarking in 3D cardiac motion analysis.
Building upon this groundwork, ref. [47] later introduced a computational heart model designed to simulate controlled synthetic motion, replicating both healthy and ischemic heart conditions. The pipeline incorporates 2D speckle textures derived from recorded echocardiograms and applies them to a 3D heart geometry. An electromechanical (E/M) model simulates the cardiac cycle, determining myocardial motion, and tracking the left ventricle (LV) position over time. Displacement fields are computed to align the 3D simulation with 2D tracking data, and 3D scatterers, with scattering amplitudes sampled from the 2D recordings, are moved accordingly. These scatter maps are input into ultrasound simulators to generate realistic 2D synthetic images, complete with ultrasound artifacts, enhancing their applicability for diagnostic and training purposes.
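The shared mechanism in these pipelines, displacing a scatterer cloud with a prescribed motion field and re-simulating each frame, can be sketched compactly. The analytic radial contraction below is a stand-in for the output of an electromechanical model; ground-truth displacements are then known exactly by construction.

```python
# Sketch: advecting a scatterer cloud with a toy cardiac motion field.
import numpy as np

rng = np.random.default_rng(1)
scatterers = rng.uniform(-1.0, 1.0, size=(50_000, 3))   # rest positions (cm)
amplitudes = rng.standard_normal(50_000)                # scattering strengths

def motion_field(points, phase):
    """Toy 'systolic' contraction: radial displacement toward the origin."""
    contraction = 0.15 * np.sin(np.pi * phase)          # peaks mid-cycle
    return -contraction * points                        # displacement vectors

n_frames = 20
frames = []
for f in range(n_frames):
    phase = f / (n_frames - 1)                          # 0..1 over the cycle
    moved = scatterers + motion_field(scatterers, phase)
    frames.append((moved, amplitudes))  # each frame feeds an ultrasound simulator
# Ground truth is exact: the displacement of every scatterer is known.
```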
Another framework proposed by [50] focuses on creating synthetic sequences specifically for strain-tracking algorithms, generating an open-access library of 105 synthetic 2D echocardiographic sequences. These sequences encompass both healthy and ischemic motion patterns, a variety of apical probe orientations, and image quality profiles from seven ultrasound systems. The synthetic library also includes ground-truth deformation data, enabling robust performance evaluation and validation of strain-tracking algorithms. This work provides an invaluable resource for researchers developing and testing echocardiographic motion analysis techniques.
Expanding further, ref. [49] developed a unified simulation framework to generate realistic multimodal cardiac imaging sequences, including ultrasound, cine MR, and tagged MR, all derived from the same virtual patient. This approach combines clinical recordings for anatomical and textural details with E/M models to simulate cardiac motion fields for healthy and pathological conditions. Physical simulation environments model image formation, incorporating effects such as tag fading in MR and echocardiographic textures in ultrasound. Techniques like mesh warping ensure spatio-temporal alignment between motion fields and simulated images, capturing pathological variations such as ischemic scars and dyssynchrony. Image realism and accuracy are quantitatively evaluated, validating the framework’s capability to produce synthetic data closely matching real clinical imaging.
Lastly, ref. [54] developed a computationally efficient pipeline that overcomes the limitations of small-scale datasets by enabling the generation of large-scale synthetic cardiac ultrasound recordings. The framework combines the CircAdapt model to simulate realistic myocardial deformation signals with the COLE simulator for fast ultrasound image formation. Echogenicities and diverse LV geometries are sampled from the CAMUS clinical dataset, ensuring realistic speckle patterns. This approach produced an open-access database of 1296 synthetic recordings, covering both healthy and pathological conditions, including heart failure with reduced ejection fraction. Quantitative validation confirmed close agreement between simulated and ground-truth LV motion. This resource provides high-quality data to advance machine learning–based echocardiographic applications.
On the other hand, the simulation of the respiratory component can be found in [48], which presents a real-time ultrasound simulation framework for training US-guided needle insertion in liver surgery, explicitly incorporating dynamic physiological motion. Unlike most Virtual Reality (VR) ultrasound simulators that rely on static 3D patient models, this approach integrates breathing motion extracted from 4D CT image data into a visuo-haptic simulation environment. The framework enables real-time US image generation, 3D visualization, and haptic steering of both the ultrasound probe and needle within breathing virtual patients, improving realism and training effectiveness for procedures affected by organ motion.
Other approaches simulate breathing motion by animating lung movement within a 3D vector model of a patient created in Daz 3D software [51]. The lung movement is implemented manually, and the motion parameters vary with the selected virtual patient, emulating different breathing speeds and depths. Pleural sliding during respiration is achieved by manipulating UV maps that simulate lung-surface motion, dynamically modifying the pleural line’s intensity. Rib movement is excluded due to its complexity and minimal impact on the final simulation outcome. In [52], the task is addressed with a deep-learning model called U-RAFT, which tracks pixel displacement across a sequence of ultrasound images caused by respiratory motion. By performing deformable image registration, the network generates displacement fields that compensate for tissue motion, effectively stabilizing the images, and a Spatial Transformer Network (STN) reconstructs static images from the motion-compensated deformation fields.
While the above works aim to reproduce respiratory motion, others [53] strive to compensate for it during 3D ultrasound reconstruction using implicit neural representations (INRs). A robotic ultrasound system acquires sequential B-mode images, while breathing-induced motion artifacts are mitigated by extracting frames from the exhale phase of the respiratory cycle. The INR model interpolates between images to create a continuous, patient-specific function, producing smooth and consistent 3D reconstructions.
Beyond cardiopulmonary applications, dynamic motion modeling has also been explored in other clinical and methodological contexts. For instance, ref. [44] introduces a dynamic framework for simulating brain shift. By using BrainWeb tissue probability maps to create digital brain phantoms and simulating ultrasound images with Field II, the study provides deformable, perfectly registered MR–US pairs. This setup allows the evaluation of multimodal registration algorithms under conditions such as tumor resection or tissue deformation, expanding the role of ultrasound simulation into neurosurgical validation scenarios.
From a methodological perspective, ref. [45] advances dynamic scatterer modeling itself. Traditional point-scatterer approaches can be computationally prohibitive, especially when high temporal resolution is needed. By representing dynamic scatterers with B-splines and integrating this with a GPU-accelerated implementation of the COLE convolution-based simulator, the authors achieved dramatic improvements in efficiency and scalability. This innovation enables fast and memory-efficient simulation of moving scatterers, broadening the feasibility of large-scale dynamic ultrasound datasets while preserving realistic RF signals.
Together, these contributions highlight how dynamic motion simulation is being leveraged both to address specific clinical challenges (e.g., brain shift in neurosurgery) and to drive computational advances that make dynamic ultrasound modeling more practical and scalable.

3.2.3. Speckle, Scatter, and Noise Modeling

The simulation of all possible interactions between ultrasound and tissue remains an unsolved theoretical challenge. The characteristic granular appearance of ultrasound images is known as speckle; it arises from the constructive and destructive interference of ultrasonic waves scattered by sub-wavelength tissue structures, such as cell nuclei and organelles. This interference creates patterns shaped by tissue heterogeneity and beam characteristics that must be modeled for realistic ultrasound simulation.
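This interference has a well-known statistical signature: when many sub-resolution scatterers with random phases contribute to each resolution cell, the complex signal tends toward a circular Gaussian and the detected envelope follows a Rayleigh distribution, whose mean-to-standard-deviation ratio is approximately 1.91. The short sketch below verifies this numerically under those (assumed) fully developed speckle conditions.

```python
# Sketch: fully developed speckle -> Rayleigh-distributed envelope.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_scat = 100_000, 50          # resolution cells, scatterers per cell

phases = rng.uniform(0, 2 * np.pi, size=(n_cells, n_scat))
amps = rng.standard_normal((n_cells, n_scat))
phasors = (amps * np.exp(1j * phases)).sum(axis=1)   # coherent sum per cell

envelope = np.abs(phasors)
snr = envelope.mean() / envelope.std()
print(f"envelope SNR = {snr:.2f} (Rayleigh prediction ~ 1.91)")
```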
Building on convolution-based models of speckle, a series of studies has progressively advanced inverse-problem approaches for reconstructing scatterer maps directly from ultrasound images. Initial work by [55] introduced a method to estimate scatterer distributions from observed speckle, enabling realistic image synthesis under varying probe orientations and acquisition settings. This not only provided a way to replicate tissue-specific textures but also allowed the parameterization of scatterer models to simulate new instances of the same tissue type. Subsequently, ref. [56] addressed a critical limitation of such methods: the need for accurate characterization of the ultrasound beam. They proposed an automatic PSF estimation technique in the cepstrum domain, capable of capturing depth-dependent beam variations directly from in vivo data, thus improving the fidelity of simulated speckle patterns. Building further on these concepts, ref. [60] enhanced robustness by acquiring multiple observations of the same tissue region through electronic beam steering, thereby ensuring that reconstructed scatterer maps remain consistent across different viewing directions and frequencies. Together, these contributions establish a coherent framework in which scatterer maps and PSF estimates are systematically extracted from real data, enabling the generation of highly realistic and adaptable ultrasound speckle simulations.
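The cepstral idea underlying such PSF estimation can be sketched as classical homomorphic processing (this is an illustration of the principle, not the authors' exact algorithm): the pulse contributes a slowly varying log-spectrum, scatterers contribute rapid fluctuations, so low-quefrency liftering isolates a pulse estimate.

```python
# Sketch: homomorphic (cepstral) separation of pulse and scatterer spectra.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)
pulse = np.sin(2 * np.pi * 0.25 * np.arange(64)) * np.hanning(64)   # toy pulse
rf = fftconvolve(rng.standard_normal(4096), pulse, mode='same')     # toy RF line

log_mag = np.log(np.abs(np.fft.fft(rf)) + 1e-12)
cepstrum = np.fft.ifft(log_mag).real            # real cepstrum of the RF line

lifter = np.zeros_like(cepstrum)
lifter[:30] = lifter[-30:] = 1.0                # keep low quefrencies only
pulse_log_mag = np.fft.fft(cepstrum * lifter).real     # smoothed pulse spectrum
psf_estimate = np.abs(np.fft.ifft(np.exp(pulse_log_mag)))  # zero-phase estimate
```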
A consistent line of research has focused on developing synthetic speckle noise models through sampling- and interpolation-based approaches. In their initial work, ref. [57] introduced three predefined sampling schemes—radial polar, radial uniform, and uniform grid—that replicate typical ultrasound acquisition geometries. This framework simulates the full acquisition process by sampling, adding speckle noise, and interpolating missing points, enabling the generation of images with noise characteristics statistically comparable to real ultrasound. Building on this foundation, ref. [58] extended the method with a quantitative quality assessment using second-order statistical descriptors, specifically the Gray-Level Co-occurrence Matrix (GLCM), demonstrating strong agreement between synthetic and real textures as validated by clinical experts. Subsequently, ref. [59] consolidated the framework into a comprehensive pipeline, incorporating parametric variations in sampling, interpolation (e.g., B-spline, Hermite, Lanczos), and speckle generation, alongside extensive experimental validation through both quantitative metrics and subjective evaluation. Finally, ref. [61] advanced the evaluation stage further by applying Local Binary Patterns (LBP) as an additional texture descriptor, showing that LBP features can effectively capture the subtle texture variations in speckle, thereby improving the accuracy of synthetic-to-real image quality assessment. These contributions provide a robust methodology for generating and validating synthetic ultrasound images with realistic speckle patterns, supporting both speckle reduction research and broader simulation applications.
To bridge speckle realism with tissue deformation, ref. [62] proposed a novel framework that integrates dynamic speckle modeling into ray-tracing simulations. The method estimates scatterer maps from in vivo images using GANs and embeds them into a deformable scatterer domain. This approach enables speckle textures to evolve consistently with tissue compression, achieving interactive frame rates and improved anatomical realism. In parallel, a complementary line of work has focused on the computational efficiency of scatterer distributions. Ref. [63] introduced a strategy based on lazy evaluation of pseudorandom low-discrepancy sequences, reducing memory and processing demands while maintaining fully developed speckle with relatively few scatterers per resolution cell. Building on this, ref. [64] consolidated the approach into a full end-to-end pipeline, from 3D tissue volumes to coherent 2D and 3D ultrasound simulations, supported by the SIMUS platform. This method enables on-the-fly scatterer generation with minimal memory footprint, extending efficiency gains to large volumetric domains and real-time applications. Collectively, these works advance both the physical realism of speckle under deformation and the practical scalability of scatterer-based simulations, reinforcing the essential role of speckle and scatterer modeling in ultrasound simulation research.
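The low-discrepancy idea can be sketched with a standard Halton sequence: because the stream is deterministic and resumable, scatterers for a block can be drawn on demand instead of stored for the whole volume. This is an illustrative stand-in for the cited schemes, not their implementation.

```python
# Sketch: lazy, low-discrepancy scatterer generation with a Halton sequence.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Halton(d=3, seed=0)            # deterministic, resumable stream

def scatterers_for_block(n, lo, hi):
    """Draw the next n scatterers, scaled into the block [lo, hi]^3 (cm)."""
    unit = sampler.random(n)                 # next n points in [0, 1)^3
    return qmc.scale(unit, lo, hi)

# Only the block currently intersected by the imaging plane is generated:
block = scatterers_for_block(10_000, [0, 0, 0], [2, 2, 2])
```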

3.3. Ultrasound Transport Models

3.3.1. Wave-Based Methods

Wave-based methods in ultrasound simulation aim to model full acoustic wave propagation through biological tissues, enabling realistic B-mode image formation. Full-wave nonlinear finite-difference (FD) solvers, such as those developed by Pinton [65], represent the most complete realization of this approach because they numerically solve the full 3D acoustic wave equation without linear or ray-based approximations, capturing essential phenomena such as diffraction, scattering, nonlinearity, phase aberration, and reverberation clutter. However, FD solvers are computationally demanding because stability and accuracy requirements impose strict spatial and temporal sampling constraints, which significantly increase the total number of grid points and time steps in large 3D domains. Part I [67] established the methodological basis of this framework by converting high-resolution anatomical datasets (e.g., the Visible Human Project) into 3D acoustic property maps and demonstrating that the FD model can generate anatomically realistic B-mode images validated through beamplots, RMS aberration, spatial coherence, and CNR. In Part II [68], the model was applied to controlled in silico experiments to isolate how rib structures affect image degradation, revealing competing effects of rib-induced beam apodization and increased reverberation clutter, with the anatomically correct configuration yielding optimal CNR. Together, these works [65,67,68] highlight the value of full-wave simulations for studying harmonic imaging, coherence loss, and transducer design within anatomically realistic conditions. To address the computational limitations inherent to FD solvers, complementary wave-based strategies such as the k-space pseudospectral method implemented in k-Wave offer substantially improved efficiency. By computing spatial derivatives using Fast Fourier Transforms (FFTs), k-Wave relaxes the stringent spatial and temporal sampling constraints of FD methods, enabling larger computational domains and faster simulations while still capturing key wave phenomena including diffraction, attenuation, phase aberration, and reverberation. As demonstrated in [66], the combination of MRI-based tissue segmentation with k-Wave propagation can yield anatomically realistic results, illustrating the broader utility of k-space pseudospectral approaches in high-fidelity ultrasound simulation.
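A one-dimensional FD sketch makes the cost argument concrete: the Courant-Friedrichs-Lewy (CFL) stability condition ties the time step to the grid spacing and the maximum sound speed, which is exactly what inflates step counts in large 3D domains. Real solvers are 2D/3D, nonlinear, and absorbing; all parameters below are assumptions.

```python
# Sketch: 1D second-order FDTD for the linear acoustic wave equation
# through a heterogeneous sound-speed map.
import numpy as np

nx, dx = 2000, 50e-6                   # 10 cm domain at 50 um sampling
c = np.full(nx, 1540.0)                # background soft tissue (m/s)
c[800:1200] = 1450.0                   # a slab of slower (fatty) tissue
dt = 0.3 * dx / c.max()                # CFL condition: dt <= dx / c_max

p = np.zeros(nx)
p_prev = np.zeros(nx)
src = lambda n: np.sin(2 * np.pi * 2e6 * n * dt) * np.exp(-((n - 60) / 20) ** 2)

for n in range(3000):
    lap = np.zeros(nx)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx ** 2   # spatial Laplacian
    p_next = 2 * p - p_prev + (c * dt) ** 2 * lap          # leapfrog update
    p_next[100] += src(n)                                  # soft source injection
    p_prev, p = p, p_next
```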

3.3.2. Ray-Based and Convolution-Based Methods

Beyond CT-based approaches, convolution-based methods are also used to simulate pseudo-acoustic nonlinear RF ultrasound images, as demonstrated in [69]. This approach combines pseudo-acoustic modeling with convolution, leveraging the fidelity of the full-wave CREANUIS model and the efficiency of FFT-accelerated convolution. Local nonlinear PSFs are estimated from punctual scatterers and then convolved and recombined, yielding depth-dependent resolution and harmonic content suitable for real-time applications where nonlinear effects are critical, such as tissue characterization and harmonic imaging.
Extending this research line, ref. [104] introduced a Fourier-based approach for ultra-fast simulation of synthetic ultrasound images. The method avoids explicit scatterer-PSF modeling by operating in the frequency domain, where it modifies the phase of the low-frequency spectrum to embed lesions into real images. This allows the generation of large and diverse training datasets up to 36,000× faster than Field II, making it particularly attractive for deep learning applications where data augmentation is critical.
While convolution and Fourier-domain methods emphasize computational efficiency, complete and accurate simulation of wave phenomena can also be pursued through full-wave solvers or spatial impulse response models. Unfortunately, these methods demand significantly higher computational resources. Ray-tracing techniques simplify the computation of wave phenomena by focusing on the tracking of rays, reducing processing time while still accounting for attenuation, reflection, refraction, and diffraction. This is exemplified in [38], which accurately replicates wave effects such as refraction based on CT data. Similarly, ref. [72] applies geometrical acoustics to trace primary and secondary rays through modeled scenes, incorporating beam profiles and tissue characterization to reproduce speckle and resolution effects for training applications. In [80], a GPU-accelerated ray-tracing pipeline for cardiac ultrasound combines Monte Carlo path tracing with convolution-based models, producing anatomically accurate, view-dependent B-mode images in real time, with demonstrated value for both training and AI-based echocardiography analysis.
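The elementary operation at each tissue interface in such ray-based engines is refraction by the acoustic form of Snell's law, with the ratio of sound speeds playing the role of the refractive-index ratio. The vector formulation below is standard geometry; the direction, normal, and sound speeds are assumed values.

```python
# Sketch: acoustic Snell refraction at a tissue interface (vector form).
import numpy as np

def refract(d, n, c1, c2):
    """Refract unit direction d at an interface with unit normal n (facing the
    ray), going from sound speed c1 into c2. For sound, sin(t)/sin(i) = c2/c1.
    Returns the refracted direction, or None on total internal reflection."""
    cos_i = -np.dot(d, n)
    eta = c2 / c1
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                     # totally reflected ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([0.0, 0.5, 0.866])          # incoming unit direction, assumed
n = np.array([0.0, 0.0, -1.0])           # interface normal facing the ray
t = refract(d, n, 1540.0, 1450.0)        # soft tissue into fat: bends toward normal
```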
GPU-accelerated methods have gained significant attention in ultrasound simulation due to their ability to dramatically increase computational speed and handle complex, large-scale models efficiently. Other strategies specifically address the bottlenecks of memory bandwidth and data transfer. For instance, lossy fixed-point compression schemes have been applied to optimize CPU–GPU communication, improving transfer rates while maintaining acceptable accuracy [70]. Similarly, convolution-based simulators such as COLE achieve orders-of-magnitude speedups by storing scatterer trajectories directly in GPU memory and minimizing CPU–GPU exchanges, enabling dynamic full-cycle cardiac simulations at interactive rates [74].
GPU acceleration is particularly beneficial for ray-tracing because it allows the parallel computation of thousands or millions of rays simultaneously, greatly speeding up simulations and enabling highly detailed scenarios in real time. For example, ref. [73] combines ray-tracing with bioheat transfer simulations to reproduce cryosurgery artifacts such as shadowing, reverberation, and reflections, achieving real-time performance through GPU parallelization.
Beyond ray-tracing, GPU clusters have also been harnessed to accelerate full-wave propagation solvers. In [101], a k-space pseudo-spectral method is optimized using a local Fourier basis for domain decomposition, overcoming the all-to-all communication bottleneck of spectral methods. This enables scalability across more than one hundred GPUs and allows simulations with over 3 × 10⁹ grid points, representing one of the most computationally ambitious ultrasound propagation models to date.
Monte Carlo ray-tracing is a powerful computational technique used to simulate the interaction of waves (such as sound or light) with complex media. In the context of ultrasound and acoustics, Monte Carlo ray-tracing involves simulating the paths of sound waves as they travel through heterogeneous materials, accounting for scattering, absorption, and other interactions that can occur within the medium. Although computationally expensive, GPU acceleration has made these methods feasible. Works such as [71,75,79] employ Monte Carlo sampling to trace rays in parallel, modeling rough tissue surfaces, soft shadows, and fuzzy reflections with improved realism. In particular, ref. [71] introduced a hierarchical speckle model, while [75] extended this with adaptive sampling and a robust volumetric approach; ref. [79] instead focused on performance and memory efficiency. Ref. [77] further demonstrated GPU-based Monte Carlo simulation on CT-derived surface models, achieving 30% global acceleration and up to 80% reduction in ray-tracing times, with low memory usage that facilitates complete GPU-based pipelines. Notably, [76] showed that Monte Carlo ray-tracing can inherently generate speckle patterns, eliminating the need for separate speckle modeling, and reported preliminary real-time frame rates using NVIDIA’s OptiX engine. Building on this, ref. [78] leverages OptiX for full real-time Monte Carlo path tracing, achieving 25 frames per second. This work not only simulates complex tissue interactions but also improves the visualization of small anatomical structures. This simulation marks one of the first demonstrations of high-performance, real-time ultrasound simulations at true video rates.

3.3.3. AI-Based Methods

AI in ultrasound simulation has become a transformative tool, enhancing the accuracy, efficiency, and realism of ultrasound imaging and therapy. By integrating advanced machine learning techniques, AI can improve various aspects of ultrasound simulations, including image reconstruction, wave propagation modeling, and noise reduction. Among the most prominent AI techniques applied to ultrasound simulation are Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Diffusion Models.
Early works explored CNN-based ultrasound synthesis by learning direct mappings between probe parameters and image appearance. For example, ref. [83] trained convolutional models to regress from transducer position and orientation to synthetic images, while [84] proposed ScatParam, a CNN that estimates scatterer distribution parameters by modeling B-mode formation as a convolution between statistical scatterer maps and PSFs. These approaches demonstrated clinical feasibility for rapid simulation but remain dependent on training distributions and offer limited control over physical processes.
GAN-based methods represent the most widely explored family of generative models in ultrasound simulation. Beyond producing realistic images, GANs have been used to augment training datasets, translate synthetic ray-based outputs into ultrasound-like textures, and enable domain transfer across modalities [97]. For instance, UltraGAN [87] improved realism by incorporating frequency-domain losses and anatomical coherence constraints. In contrast, CycleGAN-based frameworks [86,90] translate CT-derived ray-casting outputs into ultrasound-like images, improving fine structural detail through spatial/channel attention mechanisms (scSE). Cross-modal transfer learning is also being investigated: ref. [93] demonstrated that converting CT to synthetic US improved kidney segmentation performance by >0.1 Dice Similarity Coefficient (DSC) on cross-site tests, highlighting a direct clinical impact for model generalization.
GANs have also been widely employed to synthesize ultrasound images from annotations, segmentation masks, or label maps, demonstrating utility across diverse anatomical regions and simulation contexts. Notable examples include real-time generation from Field II–based training data (~15 fps) [81], segmentation-conditioned echocardiography using Patch-cGAN validated on CAMUS [82], enhanced fetal ultrasound simulation with more realistic acoustic artifacts [85], and kidney image synthesis whose realism was confirmed through blinded expert assessment [88]. Additional applications involve muscle ultrasound generation combined with automatic labeling to reduce annotation burden [89] and GAN models that replace computationally intensive rendering by generating high-quality ultrasound directly from anatomical slices [92]. Several studies showed that networks trained solely on synthetic data can match or outperform real-data training, e.g., segmentation Dice scores of 87–91 in [91], or improved breast lesion classification using StyleGAN2-ADA augmentation (+15% F1) [94]. These results demonstrated both feasibility and robustness in downstream tasks, particularly when real data is scarce or heterogeneous across sites.
In intraoperative and procedural contexts, GANs have shown the ability to preserve clinically relevant structures with high precision. ApGAN [96] generated intraoperative liver ultrasound (iUS) from preoperative MR (pMR) while maintaining sub-millimeter accuracy (Hausdorff Distance (HD) < 0.25 mm), which is crucial for surgical navigation and training. More recently, GAN-based kernel-stretching approaches [99] introduced zero-shot frequency control, enabling simulated resolution adjustment without retraining—partially addressing one of the main limitations of prior generative models: lack of explicit control over acquisition parameters.
Diffusion models constitute a newer class of generative techniques that improve stability and anatomical fidelity by learning the inverse process of noise corruption. In ultrasound, adversarial diffusion models [95] have synthesized diverse and anatomically consistent echocardiography sequences, while semantic diffusion pipelines [98] demonstrated that networks trained entirely on synthetic diffusion-generated data can outperform state-of-the-art segmentation methods trained on real echocardiograms. A hybrid finite-element + diffusion framework [100] further showed feasibility for physically grounded simulation of muscle deformation sequences, providing ground-truth mechanical labels while reducing overfitting. Likewise, finite-element method (FEM)-ultrasound coupling using angular impulse responses (AIR) [105] enabled physically realistic 3D ultrasound generation at reduced computational cost, suggesting a path toward bridging physics-based and AI-based simulation.
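The noise-corruption process that these models learn to invert is simple to state in closed form. The sketch below implements the standard forward (noising) step of a denoising diffusion model with a linear schedule; the image, schedule bounds, and step count are assumptions, and the learned reverse (denoising) network is what the cited works actually train.

```python
# Sketch: forward (noising) process of a denoising diffusion model.
import numpy as np

rng = np.random.default_rng(3)
x0 = rng.random((128, 128))                        # stand-in for a B-mode image

T = 1000
betas = np.linspace(1e-4, 0.02, T)                 # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)                # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

x_t, eps = q_sample(x0, t=500)   # a denoiser is trained to predict eps from x_t
```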
Taken together, these methods illustrate the growing feasibility of AI-based ultrasound simulation in educational and data-augmentation settings. GANs and diffusion models can already produce anatomically coherent images at real-time or near-real-time rates (<40 ms per frame), in stark contrast to physics-based solvers such as Field II or k-Wave, which often require minutes to hours per image and remain unsuitable for interactive training. Robustness has improved via the integration of anatomical constraints, multi-domain validation, and cross-modality training, with several studies demonstrating successful generalization to unseen datasets, pathology cases, and institutions.
However, translation into clinical diagnostic workflows remains limited due to several outstanding challenges. First, deep generative models require large and carefully curated datasets, and performance degrades with fewer than 100 training samples. Second, although image realism is improving, current models still struggle to reproduce complex acoustic artifacts originating from nonlinear propagation, beam profiles, and speckle dynamics—features that carry diagnostic significance. Third, most approaches lack continuous control over acquisition physics (e.g., transducer frequency, aperture, focal depth). This limitation restricts their utility for training sonographers or validating device settings. Finally, while synthetic data has proven effective for training neural networks, regulatory validation, standardized benchmarking, and clinical adoption are still lacking.

4. Discussion

Ultrasound simulation has progressed along several methodological directions over the past decade, reflecting the persistent challenge of balancing physical fidelity, computational efficiency, and clinical applicability. The results of this review highlight that the field is no longer defined by a single dominant paradigm, but by a distributed ecosystem of wave-based models, ray- and convolution-based approximations, and emerging AI-driven generative strategies. Understanding how these families differ and how they complement one another is essential not only for methodological comparison but also for guiding decisions in the design of training systems, device prototyping, and quantitative imaging pipelines. A qualitative overview of these trade-offs is provided in Table 3.
Wave-based solvers continue to serve as the reference standard due to their ability to model diffraction, nonlinear propagation, multipath interference, and coherent scatterer interactions. These properties make them indispensable for applications requiring high acoustic accuracy, such as transducer design, safety modeling, or the study of complex wave phenomena. However, even with modern computational resources, their high spatial and temporal sampling requirements impose substantial computational costs. Typical simulations require minutes to hours per frame, limiting their use to offline research. The low number of wave-based studies observed in this review is consistent with these constraints: while they offer unmatched physical realism, they remain too computationally demanding for real-time or large-scale use.
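The cost argument is easy to see in a minimal finite-difference time-domain (FDTD) sketch of the linear scalar wave equation: the grid must sample the wavelength finely, and the CFL condition ties the time step to the grid spacing, so computation grows rapidly with frequency and domain size. The NumPy toy below (illustrative constants, periodic boundaries) stands in for, but does not reproduce, solvers such as k-Wave.

```python
# Toy 2-D FDTD propagation of the linear scalar wave equation, illustrating why
# full-wave solvers are expensive: fine spatial sampling (~lambda/6) and a
# CFL-limited time step force thousands of updates over large grids (and the
# grid grows cubically in 3-D). Constants are illustrative assumptions.
import numpy as np

nx = nz = 256                      # grid points (toy size)
dx = 50e-6                         # 50 um spacing ~ lambda/6 at 5 MHz in tissue
c = np.full((nz, nx), 1540.0)      # speed-of-sound map (m/s); tissue background
c[160:, :] = 1450.0                # e.g., a slower fat-like layer
dt = 0.5 * dx / c.max()            # time step safely below the CFL limit

p_prev = np.zeros((nz, nx))
p = np.zeros((nz, nx))
p[5, nx // 2] = 1.0                # impulsive point source near the "transducer"

for step in range(1500):           # thousands of steps for a few cm of depth
    # Five-point Laplacian (np.roll gives periodic boundaries, kept for brevity).
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
    p_prev, p = p, p_next          # leapfrog update of the pressure field
```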
Ray-based models represent the opposite end of the computational spectrum. By relying on geometric acoustics, they efficiently approximate macroscopic effects such as attenuation, reflection, refraction, and shadowing, and achieve real-time performance with GPU acceleration. Their speed and simplicity explain their widespread use in procedural training, ultrasound-guided intervention rehearsal, and interactive educational systems. However, because they omit coherent wave behavior, they cannot reproduce speckle statistics or nonlinear propagation with the fidelity required for quantitative imaging. As a result, ray-based approaches excel for macroscopic artifact modeling but lack fine-grained physiological realism.
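As an illustration of the underlying arithmetic, the sketch below marches one straight ray per scan line through an assumed acoustic-impedance map, recording the amplitude reflection coefficient r = (Z2 - Z1)/(Z2 + Z1) at each interface and applying two-way exponential attenuation. Refraction, beam width, and speckle are deliberately ignored, which is exactly the fidelity gap discussed above; all constants are illustrative assumptions.

```python
# Toy axial ray-marching B-mode formation under geometric acoustics: each scan
# line accumulates interface reflections and depth-dependent attenuation.
import numpy as np

nz, nlines = 400, 128
dz = 0.1e-3                                  # 0.1 mm axial step (~4 cm depth)
Z = np.full((nz, nlines), 1.63e6)            # acoustic impedance map (rayl)
Z[200:, :] = 1.48e6                          # deeper, lower-impedance layer
alpha = 0.5 * 5.0 / 8.686                    # 0.5 dB/cm/MHz at 5 MHz -> Np/cm

image = np.zeros((nz, nlines))
for j in range(nlines):                      # one straight ray per scan line
    energy = 1.0
    for i in range(nz - 1):
        z1, z2 = Z[i, j], Z[i + 1, j]
        r = (z2 - z1) / (z2 + z1)            # amplitude reflection coefficient
        image[i, j] = energy * abs(r)        # echo recorded from this depth
        energy *= (1.0 - r**2)               # transmitted intensity fraction
        energy *= np.exp(-2 * alpha * dz * 100)  # two-way attenuation (dz in cm)

log_img = 20 * np.log10(image + 1e-6)        # log compression for display
```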
Convolution-based approaches provide a middle ground, enabling realistic speckle generation and depth-dependent point-spread-function blurring at extremely low computational cost. Models based on convolution with scatterer distributions are widely used for dynamic simulations of the heart, respiratory motion, and tissue deformation. Their efficiency makes them well suited for machine-learning dataset creation and large-scale experiments, especially as recent Fourier-domain formulations (e.g., [102]) have accelerated these computations by several orders of magnitude. Nonetheless, their reliance on linear approximations limits their ability to capture nonlinear propagation, high-frequency effects, and complex wave–tissue interactions. Thus, while convolution-based techniques offer an advantageous balance of speed and realism, they cannot fully substitute physical solvers when modeling nonlinear or harmonic imaging behavior.
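A minimal version of the convolution model is shown below: a sparse random scatterer map is convolved with a separable point-spread function (a Gaussian-modulated cosine axially, a Gaussian beam profile laterally), followed by envelope detection and log compression. All parameters are illustrative assumptions, not those of any cited simulator.

```python
# Toy convolution-model B-mode: |Hilbert{ scatterers * PSF }| with log compression.
import numpy as np
from scipy.signal import fftconvolve, hilbert

fs, f0 = 40e6, 5e6                       # axial sampling rate and center frequency
nz, nx = 1024, 256
# Sparse random scatterer map (~5% occupancy, Gaussian amplitudes).
scat = np.random.randn(nz, nx) * (np.random.rand(nz, nx) < 0.05)

t = np.arange(-2e-6, 2e-6, 1 / fs)       # axial PSF: Gaussian-modulated cosine pulse
axial = np.cos(2 * np.pi * f0 * t) * np.exp(-(t / 0.4e-6) ** 2)
lateral = np.exp(-np.linspace(-3, 3, 15) ** 2)   # lateral PSF: Gaussian beam profile
psf = np.outer(axial, lateral)           # separable, shift-invariant PSF

rf = fftconvolve(scat, psf, mode="same")         # simulated RF data
env = np.abs(hilbert(rf, axis=0))                # axial envelope detection
bmode = 20 * np.log10(env / env.max() + 1e-6)    # log-compressed B-mode (dB)
```

A shift-variant (depth-dependent) PSF, as in the product-convolution formulations cited above, would replace the single convolution with a weighted sum of convolutions, at modest extra cost.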
Artificial intelligence has introduced a fundamentally different simulation paradigm in which image formation is learned from examples rather than explicitly modeled. GANs, domain-transfer networks, and diffusion models have demonstrated remarkable realism, producing synthetic ultrasound at speeds compatible with real-time interaction and achieving high anatomical coherence. These capabilities make AI-based simulation particularly attractive for interactive training, domain adaptation between imaging modalities, and addressing data scarcity in supervised learning. While promising, these methods face important limitations. They typically cannot guarantee physical consistency, struggle to reproduce diagnostically meaningful artifacts linked to nonlinear propagation, and lack explicit control over acquisition parameters such as frequency, focal depth, aperture, or beam profile. Their dependence on high-quality training data poses challenges for rare pathologies or unusual imaging scenarios, and their generalization across scanners and institutions remains an open question. These limitations indicate that AI-based simulation alone is not yet sufficient for high-stakes diagnostic or regulatory contexts, and they reinforce the need for physics-informed or hybrid approaches that integrate acoustic priors into learning-based frameworks.
Dynamic modeling further illustrates the complementary strengths of the different simulation families. Electromechanical cardiac models combined with convolution-based scatterers, deformable speckle fields estimated through learning, and implicit neural representations for motion compensation show that realistic temporal ultrasound synthesis requires a combination of biomechanical deformation models and realistic image formation. Across applications (including cardiology, respiratory modeling, and intraoperative scenarios), physics-based methods excel in representing tissue deformation and motion mechanics, while learned models excel in generating visually realistic speckle evolution. Hybrid simulation frameworks, rather than single-method approaches, may offer the most promising route for accommodating both physical plausibility and computational efficiency.
Although this review focuses on standard B-mode imaging, the methodological principles analyzed here extend to emergent areas such as high-frequency intravascular ultrasound (IVUS) and ultrasound neuromodulation. IVUS imaging above 40–60 MHz demands finely resolved acoustic modeling capable of capturing vessel–wall interactions, high attenuation, and nonlinear propagation, making wave-based or hybrid wave–convolution solvers particularly relevant [107]. Similarly, therapeutic applications such as neuromodulation and focused ultrasound require accurate prediction of acoustic pressure fields, thermal deposition, cavitation thresholds, and tissue-specific nonlinear responses [108]. Solvers such as FOCUS and k-Wave remain essential in these contexts, especially for treatment planning and safety evaluation. The integration of these modalities into future simulators will be critical for broadening their utility beyond diagnostic imaging. Recent developments such as PROTEUS Part II (2025) [109], differentiable ray-tracing pipelines like UltraRay [110], OptiX-accelerated sensing frameworks within Isaac for Healthcare [111], latent diffusion models tailored for ultrasound [112], and cardiac ultrasound foundation models such as EchoFlow [113] further highlight the rapid expansion of simulation into domains that merge physics, machine learning, and differentiability. These works signal a shift toward simulation environments that are simultaneously physically grounded, computationally tractable, and fully differentiable, enabling optimization-based tasks, inverse problems, and data-driven design of acquisition strategies.
The reviewed literature suggests that the future of ultrasound simulation lies in the convergence of accurate physical modeling, computational efficiency, and data-driven adaptability. Wave-based solvers will continue to serve as the reference standard for acoustic accuracy, ray-based and convolutional methods will dominate real-time and large-scale data generation, and AI-based techniques will increasingly augment simulation realism and scalability. Emerging hybrid systems integrating differentiable physics with generative models point toward simulation platforms capable of meeting the demands of modern training, navigation, therapy planning, and quantitative imaging. Nonetheless, the field still lacks standardized benchmarks for evaluating realism, motion consistency, artifact representation, and computational performance, making cross-study comparison difficult. Heterogeneity in evaluation metrics, variability in reported computational costs, and the absence of unified datasets continue to limit the interpretability and reproducibility of simulation research.
Finally, although this scoping review focused on studies published between 2014 and 2024, very recent works already point toward emerging trends, such as differentiable ray-tracing pipelines (UltraRay [110]), advanced contrast-enhanced frameworks (PROTEUS Part II [109]), and diffusion or foundation models (Latent Diffusion [112], EchoFlow [113]) for large-scale, realistic data generation. These developments reinforce the trajectory outlined in this review, emphasizing the need for hybrid, scalable, and AI-augmented simulation frameworks in the coming years. Despite these limitations, the diversity, depth, and rapid development of contemporary simulation methodologies indicate a field that is maturing toward hybrid, scalable, and clinically oriented frameworks capable of supporting the next generation of ultrasound imaging, training, and therapy planning.

5. Conclusions and Perspectives

This review highlights significant progress in ultrasound simulation technologies over the last decade, emphasizing the critical role of diverse methodologies and tools in advancing the field. The state of the art reflects a strategic reliance on AI and ray-based methods (especially when combined with convolution-based approaches) to achieve computational efficiency and visual realism, while reserving wave-based techniques for high-fidelity offline validation.
The analysis also reveals persistent methodological gaps, for example, the uneven distribution of studies across complex topics such as dynamic motion modeling. Furthermore, there is room for development in the available toolboxes to address the growing complexity of simulation scenarios.
Future research is expected to address these gaps by integrating insights from multiple methodologies, exploring hybrid and fully differentiable approaches, and standardizing the evaluation of simulation methods. These efforts reinforce the trajectory toward hybrid, scalable, and AI-augmented simulation frameworks that merge physics, machine learning, and differentiability, thereby expanding simulation utility beyond diagnostic imaging alone.

Author Contributions

Conceptualization and Methodology (J.L.H., C.M.S.-C.); Writing—original draft preparation (All authors); Funding acquisition (J.L.H.). All authors have read and agreed to the published version of the manuscript.

Funding

Funded by the Spanish Ministry of Economic Affairs and Digital Transformation (Project MIA.2021.M02.0005 TARTAGLIA, from the Recovery, Resilience, and Transformation Plan financed by the European Union through Next Generation EU funds). TARTAGLIA takes place under the R&D Missions in Artificial Intelligence program, which is part of the Spain Digital 2025 Agenda and the Spanish National Artificial Intelligence Strategy. This work also received support from the TEC-2024/TEC-43 LUNABRAIN-CM project funded by the Comunidad de Madrid through the R&D activities program in technologies, granted by Order 5696/2024; from the RYC2021-032739-I Ramón y Cajal Programme funded by MCIN/AEI/10.13039/501100011033 and the European Union “NextGenerationEU”/PRTR; and from the PID2022-137114OA-I00 INVENTOR project funded by MCIN/AEI/10.13039/501100011033.

Data Availability Statement

Data sharing is not applicable, as no new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Search Strings

Appendix A.1. PubMed

  • Query 1: ((ULTRASOUND[Title]) AND (SIMULATOR[Title] OR SIMULATION[Title] OR SIMULATE[Title] OR SYNTHETIC[Title])) AND (WAVE[Title/Abstract] OR ACOUSTICS[Title/Abstract] OR ALGORITHM[Title/Abstract] OR TOOLBOX[Title/Abstract])
  • Query 2: ((ULTRASOUND[Title]) AND (SIMULATOR[Title] OR SIMULATION[Title] OR SIMULATE[Title] OR SYNTHETIC[Title])) AND (CT[Title/Abstract] OR COMPUTED TOMOGRAPHY[Title/Abstract] OR “RAY TRACING”[Title/Abstract] OR GPU OR “Monte Carlo” OR CONVOLUTION OR “deep learning” OR “convolutional neural network”)

Appendix A.2. Web of Science

  • Query 1: TI = (ULTRASOUND) AND TI = (SIMULATOR OR SIMULATION OR SIMULATE OR SYNTHETIC) AND TS = (WAVE OR ACOUSTICS OR ALGORITHM OR TOOLBOX)
  • Query 2: TI = (ULTRASOUND) AND TI = (SIMULATOR OR SIMULATION OR SIMULATE OR SYNTHETIC) AND TS = (CT OR “COMPUTED TOMOGRAPHY” OR “RAY TRACING” OR GPU OR “Monte Carlo” OR CONVOLUTION OR “deep learning” OR “convolutional neural network”)

Appendix A.3. Scopus

  • Query 1: TITLE(ULTRASOUND) AND TITLE(SIMULATOR OR SIMULATION OR SIMULATE OR SYNTHETIC) AND TITLE-ABS-KEY(WAVE OR ACOUSTICS OR ALGORITHM OR TOOLBOX)
  • Query 2: TITLE(ULTRASOUND) AND TITLE(SIMULATOR OR SIMULATION OR SIMULATE OR SYNTHETIC) AND TITLE-ABS-KEY(CT OR “COMPUTED TOMOGRAPHY” OR “RAY TRACING” OR GPU OR “Monte Carlo” OR CONVOLUTION OR “deep learning” OR “convolutional neural network”)

Appendix A.4. IEEE Xplore

  • Query 1: (“Document Title”:“ultrasound”) AND (“Document Title”:“simulator” OR “Document Title”:“simulation” OR “Document Title”:“simulate” OR “Document Title”:“synthetic”) AND (“Document Title”:“wave” OR “Document Title”:“acoustics” OR “Document Title”:“algorithm” OR “Document Title”:“toolbox” OR Abstract:“wave” OR Abstract:“acoustics” OR Abstract:“algorithm” OR Abstract:“toolbox”)
  • Query 2: (“Document Title”:“ultrasound”) AND (“Document Title”:“simulator” OR “Document Title”:“simulation” OR “Document Title”:“simulate” OR “Document Title”:“synthetic”) AND (“Document Title”:“CT” OR “Document Title”:“COMPUTED TOMOGRAPHY” OR Abstract:“CT” OR Abstract:“COMPUTED TOMOGRAPHY” OR “Document Title”:“RAY TRACING” OR Abstract:“RAY TRACING” OR “Document Title”:“GPU” OR Abstract:“GPU” OR “Document Title”:“Monte Carlo” OR Abstract:“Monte Carlo” OR “Document Title”:“convolution” OR Abstract:“convolution” OR “Document Title”:“deep learning” OR Abstract:“deep learning” OR “Document Title”:“convolutional neural network” OR Abstract:“convolutional neural network”)

References

  1. Tasnim, T.; Shuvo, M.M.H.; Hasan, S. Study of Speckle Noise Reduction from Ultrasound B-Mode Images Using Different Filtering Techniques. In Proceedings of the 2017 4th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 28–30 September 2017; pp. 229–234. [Google Scholar]
  2. Dietrich, C.F.; Lucius, C.; Nielsen, M.B.; Burmester, E.; Westerway, S.C.; Chu, C.Y.; Condous, G.; Cui, X.-W.; Dong, Y.; Harrison, G.; et al. The Ultrasound Use of Simulators, Current View, and Perspectives: Requirements and Technical Aspects (WFUMB State of the Art Paper). Endosc. Ultrasound 2022, 12, 38. [Google Scholar] [CrossRef]
  3. Nayahangan, L.J.; Dietrich, C.F.; Nielsen, M.B. Simulation-Based Training in Ultrasound—Where Are We Now? Ultraschall Med.-Eur. J. Ultrasound 2021, 42, 240–244. [Google Scholar] [CrossRef]
  4. Marion, A.; Vray, D. Toward a Real-Time Simulation of Ultrasound Image Sequences Based on a 3-D Set of Moving Scatterers. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2009, 56, 2167–2179. [Google Scholar] [CrossRef]
  5. Law, Y.C.; Knott, T.; Hentschel, B.; Kuhlen, T. Geometrical-Acoustics-Based Ultrasound Image Simulation. Eurographics Workshop Vis. Comput. Biol. Med. 2012, 25–32. [Google Scholar] [CrossRef]
  6. Goksel, O.; Salcudean, S.E. B-Mode Ultrasound Image Simulation in Deformable 3-D Medium. IEEE Trans. Med. Imaging 2009, 28, 1657–1669. [Google Scholar] [CrossRef]
  7. Hostettler, A.; Forest, C.; Forgione, A.; Soler, L.; Marescaux, J. Real-Time Ultrasonography Simulator Based on 3D CT-Scan Images. Stud. Health Technol. Inform. 2005, 111, 191–193. [Google Scholar]
  8. Shams, R.; Hartley, R.; Navab, N. Real-Time Simulation of Medical Ultrasound from CT Images. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2008, Proceedings of the 11th International Conference, New York, NY, USA, 6–10 September 2008; Springer: Berlin/Heidelberg, Germany, 2008; Volume 11, pp. 734–741. [Google Scholar] [CrossRef]
  9. Dillenseger, J.-L.; Laguitton, S.; Delabrousse, E. Fast Simulation of Ultrasound Images from a CT Volume. Comput. Biol. Med. 2009, 39, 180–186. [Google Scholar] [CrossRef]
  10. Gjerald, S.U.; Brekken, R.; Bø, L.E.; Hergum, T.; Nagelhus Hernes, T.A. Interactive Development of a CT-Based Tissue Model for Ultrasound Simulation. Comput. Biol. Med. 2012, 42, 607–613. [Google Scholar] [CrossRef]
  11. Cong, W.; Yang, J.; Liu, Y.; Wang, Y. Fast and Automatic Ultrasound Simulation from CT Images. Comput. Math. Methods Med. 2013, 2013, 327613. [Google Scholar] [CrossRef]
  12. Burger, B.; Bettinghausen, S.; Radle, M.; Hesser, J. Real-Time GPU-Based Ultrasound Simulation Using Deformable Mesh Models. IEEE Trans. Med. Imaging 2013, 32, 609–618. [Google Scholar] [CrossRef]
  13. Pinton, G.F.; Dahl, J.; Rosenzweig, S.; Trahey, G.E. A Heterogeneous Nonlinear Attenuating Full-Wave Model of Ultrasound. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2009, 56, 474–488. [Google Scholar] [CrossRef]
  14. Kutter, O.; Shams, R.; Navab, N. Visualization and GPU-Accelerated Simulation of Medical Ultrasound from CT Images. Comput. Methods Programs Biomed. 2009, 94, 250–266. [Google Scholar] [CrossRef]
  15. Reichl, T.; Passenger, J.; Acosta, O.; Salvado, O. Ultrasound Goes GPU: Real-Time Simulation Using CUDA. In Proceedings of the Medical Imaging 2009: Visualization, Image-Guided Procedures, and Modeling, Lake Buena Vista, FL, USA, 7–12 February 2009; SPIE: Bellingham, WA, USA, 2009; Volume 7261, pp. 386–395. [Google Scholar]
  16. Floquet, A.; Soubies, E.; Pham, D.-H.; Kouame, D. Spatially Variant Ultrasound Image Restoration with Product Convolution. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2025, 72, 1235–1244. [Google Scholar] [CrossRef]
  17. Gao, H.; Choi, H.F.; Claus, P.; Boonen, S.; Jaecques, S.; Van Lenthe, G.H.; Van Der Perre, G.; Lauriks, W.; D’hooge, J. A Fast Convolution-Based Methodology to Simulate 2-D/3-D Cardiac Ultrasound Images. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2009, 56, 404–409. [Google Scholar] [CrossRef]
  18. Jensen, J.A. Simulation of Advanced Ultrasound Systems Using Field II. In Proceedings of the 2004 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), Arlington, VA, USA, 15–18 April 2004; Volume 2, pp. 636–639. [Google Scholar] [CrossRef]
  19. Jensen, J.A. FIELD: A Program for Simulating Ultrasound Systems. Med. Biol. Eng. Comput. 1996, 34, 351–352. [Google Scholar]
  20. Zhu, Y.; Szabo, T.L.; McGough, R.J. A Comparison of Ultrasound Image Simulations with FOCUS and Field II. In Proceedings of the 2012 IEEE International Ultrasonics Symposium, Dresden, Germany, 7–10 October 2012; pp. 1694–1697. [Google Scholar]
  21. Treeby, B.E.; Cox, B.T. K-Wave: MATLAB Toolbox for the Simulation and Reconstruction of Photoacoustic Wave Fields. J. Biomed. Opt. 2010, 15, 021314. [Google Scholar] [CrossRef]
  22. Holm, S. Ultrasim-a Toolbox for Ultrasound Field Simulation. In Proceedings of the Nordic MATLAB Conference Proceedings, Oslo, Norway, 17–18 October 2001. [Google Scholar]
  23. Varray, F.; Cachard, C.; Tortoli, P.; Basset, O. Nonlinear Radio Frequency Image Simulation for Harmonic Imaging: Creanuis. In Proceedings of the 2010 IEEE International Ultrasonics Symposium, San Diego, CA, USA, 11–14 October 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2179–2182. [Google Scholar]
  24. Constantinescu, E.C.; Udriștoiu, A.-L.; Udriștoiu, Ș.C.; Iacob, A.V.; Gruionu, L.G.; Gruionu, G.; Săndulescu, L.; Săftoiu, A. Transfer Learning with Pre-Trained Deep Convolutional Neural Networks for the Automatic Assessment of Liver Steatosis in Ultrasound Images. Med. Ultrason. 2021, 23, 135–139. [Google Scholar] [CrossRef]
  25. Webb, J.M.; Meixner, D.D.; Adusei, S.A.; Polley, E.C.; Fatemi, M.; Alizad, A. Automatic Deep Learning Semantic Segmentation of Ultrasound Thyroid Cineclips Using Recurrent Fully Convolutional Networks. IEEE Access 2021, 9, 5119–5127. [Google Scholar] [CrossRef]
  26. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  27. Rodriguez-Molares, A.; Rindal, O.M.H.; Bernard, O.; Nair, A.; Lediju Bell, M.A.; Liebgott, H.; Austeng, A.; Løvstakken, L. The UltraSound ToolBox. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017; pp. 1–4. [Google Scholar]
  28. Garcia, D. Make the Most of MUST, an Open-Source Matlab UltraSound Toolbox. In Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS), Xi’an, China, 11–16 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar]
  29. Gu, J.; Jing, Y. mSOUND: An Open Source Toolbox for Modeling Acoustic Wave Propagation in Heterogeneous Media. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 1476–1486. [Google Scholar] [CrossRef]
  30. Garcia, D. SIMUS: An Open-Source Simulator for Medical Ultrasound Imaging. Part I: Theory & Examples. Comput. Methods Programs Biomed. 2022, 218, 106726. [Google Scholar] [CrossRef]
  31. Cigier, A.; Varray, F.; Garcia, D. SIMUS: An Open-Source Simulator for Medical Ultrasound Imaging. Part II: Comparison with Four Simulators. Comput. Methods Programs Biomed. 2022, 220, 106774. [Google Scholar] [CrossRef]
  32. Ekroll, I.K.; Saris, A.E.C.M.; Avdal, J. FLUST: A Fast, Open Source Framework for Ultrasound Blood Flow Simulations. Comput. Methods Programs Biomed. 2023, 238, 107604. [Google Scholar] [CrossRef]
  33. Blanken, N.; Heiles, B.; Kuliesh, A.; Versluis, M.; Jain, K.; Maresca, D.; Lajoinie, G. PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator—Part I: Numerical Methods. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2024, 1. [Google Scholar] [CrossRef]
  34. Brevett, T. QUPS: A MATLAB Toolbox for Rapid Prototyping of Ultrasound Beamforming and Imaging Techniques. J. Open Source Softw. 2024, 9, 6772. [Google Scholar] [CrossRef]
  35. Garcia, D.; Varray, F. SIMUS3: An Open-Source Simulator for 3-D Ultrasound Imaging. Comput. Methods Programs Biomed. 2024, 250, 108169. [Google Scholar] [CrossRef]
  36. D’Amato, J.P.; Vercio, L.L.; Rubi, P.; Vera, E.F.; Barbuzza, R.; Fresno, M.D.; Larrabide, I. Efficient Scatter Model for Simulation of Ultrasound Images from Computed Tomography Data. In Proceedings of the 11th International Symposium on Medical Information Processing and Analysis, Cuenca, Ecuador, 17–19 November 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9681, pp. 23–32. [Google Scholar]
  37. Salehi, M.; Ahmadi, S.-A.; Prevost, R.; Navab, N.; Wein, W. Patient-Specific 3D Ultrasound Simulation Based on Convolutional Ray-Tracing and Appearance Optimization. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 510–518. [Google Scholar]
  38. Szostek, K.; Piórkowski, A. Real-Time Simulation of Ultrasound Refraction Phenomena Using Ray-Trace Based Wavefront Construction Method. Comput. Methods Programs Biomed. 2016, 135, 187–197. [Google Scholar] [CrossRef]
  39. Rubi, P.; Vera, E.F.; Larrabide, J.; Calvo, M.; D’Amato, J.P.; Larrabide, I. Comparison of Real-Time Ultrasound Simulation Models Using Abdominal CT Images. In Proceedings of the 12th International Symposium on Medical Information Processing and Analysis, Tandil, Argentina, 5–7 December 2017; Volume 10160, pp. 55–63. [Google Scholar]
  40. Camara, M.; Mayer, E.; Darzi, A.; Pratt, P. Simulation of Patient-Specific Deformable Ultrasound Imaging in Real Time. In Proceedings of the Imaging for Patient-Customized Simulations and Systems for Point-of-Care Ultrasound, Québec City, QC, Canada, 14 September 2017; Cardoso, M.J., Arbel, T., Tavares, J.M.R.S., Aylward, S., Li, S., Boctor, E., Fichtinger, G., Cleary, K., Freeman, B., Kohli, L., et al., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 11–18. [Google Scholar]
  41. Satheesh B., A.; Thittai, A.K. A Method of Ultrasound Simulation from Patient-Specific CT Image Data: A Preliminary Simulation Study. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1483–1486. [Google Scholar]
  42. Satheesh B., A.; Thittai, A.K. A Fast Method for Simulating Ultrasound Image from Patient-Specific CT Data. Biomed. Signal Process. Control 2019, 48, 61–68. [Google Scholar] [CrossRef]
  43. Velikova, Y.; Azampour, M.F.; Simson, W.; Gonzalez Duque, V.; Navab, N. LOTUS: Learning to Optimize Task-Based US Representations. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2023, Vancouver, BC, Canada, 8–12 October 2023; Greenspan, H., Madabhushi, A., Mousavi, P., Salcudean, S., Duncan, J., Syeda-Mahmood, T., Taylor, R., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 435–445. [Google Scholar]
  44. Rivaz, H.; Collins, D.L. Simulation of Ultrasound Images for Validation of MR to Ultrasound Registration in Neurosurgery. In Proceedings of the Augmented Environments for Computer-Assisted Interventions, Boston, MA, USA, 14 September 2014; Linte, C.A., Yaniv, Z., Fallavollita, P., Abolmaesumi, P., Holmes, D.R., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 23–32. [Google Scholar]
  45. Storve, S.; Torp, H. Use of B-Splines in Fast Dynamic Ultrasound RF Simulations. In Proceedings of the 2015 IEEE International Ultrasonics Symposium (IUS), Taipei, Taiwan, 21–24 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–4. [Google Scholar]
  46. Alessandrini, M.; De Craene, M.; Bernard, O.; Giffard-Roisin, S.; Allain, P.; Waechter-Stehle, I.; Weese, J.; Saloux, E.; Delingette, H.; Sermesant, M.; et al. A Pipeline for the Generation of Realistic 3D Synthetic Echocardiographic Sequences: Methodology and Open-Access Database. IEEE Trans. Med. Imaging 2015, 34, 1436–1451. [Google Scholar] [CrossRef]
  47. Alessandrini, M.; Heyde, B.; Giffard-Roisin, S.; Delingette, H.; Sermesant, M.; Allain, P.; Bernard, O.; De Craene, M.; D’hooge, J. Generation of Ultra-Realistic Synthetic Echocardiographic Sequences to Facilitate Standardization of Deformation Imaging. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 756–759. [Google Scholar]
  48. Mastmeyer, A.; Wilms, M.; Fortmeier, D.; Schröder, J.; Handels, H. Real-Time Ultrasound Simulation for Training of US-Guided Needle Insertion in Breathing Virtual Patients. Stud. Health Technol. Inform. 2016, 220, 219–226. [Google Scholar]
  49. Zhou, Y.; Giffard-Roisin, S.; De Craene, M.; Camarasu-Pop, S.; D’Hooge, J.; Alessandrini, M.; Friboulet, D.; Sermesant, M.; Bernard, O. A Framework for the Generation of Realistic Synthetic Cardiac Ultrasound and Magnetic Resonance Imaging Sequences From the Same Virtual Patients. IEEE Trans. Med. Imaging 2018, 37, 741–754. [Google Scholar] [CrossRef]
  50. Alessandrini, M.; Chakraborty, B.; Heyde, B.; Bernard, O.; De Craene, M.; Sermesant, M.; D’Hooge, J. Realistic Vendor-Specific Synthetic Ultrasound Data for Quality Assurance of 2-D Speckle Tracking Echocardiography: Simulation Pipeline and Open Access Database. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2018, 65, 411–422. [Google Scholar] [CrossRef]
  51. Szostek, K.; Lasek, J.; Piórkowski, A. Real-Time Simulation of Wave Phenomena in Lung Ultrasound Imaging. Appl. Sci. 2023, 13, 9805. [Google Scholar] [CrossRef]
  52. Abhimanyu, F.N.U.; Orekhov, A.L.; Galeotti, J.; Choset, H. Unsupervised Deformable Image Registration for Respiratory Motion Compensation in Ultrasound Images. arXiv 2023, arXiv:2306.13332. [Google Scholar] [CrossRef]
  53. Velikova, Y.; Azampour, M.F.; Simson, W.; Esposito, M.; Navab, N. Implicit Neural Representations for Breathing-Compensated Volume Reconstruction in Robotic Ultrasound. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1316–1322. [Google Scholar]
  54. Burman, N.; Alessandra Manetti, C.; Heymans, S.V.; Ingram, M.; Lumens, J.; D’Hooge, J. Large-Scale Simulation of Realistic Cardiac Ultrasound Data with Clinical Appearance: Methodology and Open-Access Database. IEEE Access 2024, 12, 117040–117055. [Google Scholar] [CrossRef]
  55. Mattausch, O.; Goksel, O. Scatterer Reconstruction and Parametrization of Homogeneous Tissue for Ultrasound Image Simulation. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 6350–6353. [Google Scholar]
  56. Mattausch, O.; Goksel, O. Image-Based PSF Estimation for Ultrasound Training Simulation. In Simulation and Synthesis in Medical Imaging, Proceedings of the 1st International Workshop, SASHIMI 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, 21 October 2016; Tsaftaris, S.A., Gooya, A., Frangi, A.F., Prince, J.L., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 23–33. [Google Scholar]
  57. Singh, P.; Mukundan, R.; de Ryke, R. Synthetic Models of Ultrasound Image Formation for Speckle Noise Simulation and Analysis. In Proceedings of the 2017 International Conference on Signals and Systems (ICSigSys), Bali, Indonesia, 16–18 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 278–284. [Google Scholar]
  58. Singh, P.; Mukundan, R.; de Ryke, R. Quality Analysis of Synthetic Ultrasound Images Using Co-Occurrence Texture Statistics. In Proceedings of the 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), Christchurch, New Zealand, 4–6 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  59. Singh, P.; Mukundan, R.; de Ryke, R. Modelling, Speckle Simulation and Quality Evaluation of Synthetic Ultrasound Images. In Proceedings of the Medical Image Understanding and Analysis, Edinburgh, UK, 11–13 July 2017; Valdés Hernández, M., González-Castro, V., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 74–85. [Google Scholar]
  60. Mattausch, O.; Goksel, O. Image-Based Reconstruction of Tissue Scatterers Using Beam Steering for Ultrasound Simulation. IEEE Trans. Med. Imaging 2018, 37, 767–780. [Google Scholar] [CrossRef]
  61. Singh, P.; Mukundan, R.; De Ryke, R. Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns. J. Imaging 2018, 4, 3. [Google Scholar] [CrossRef]
  62. Starkov, R.; Zhang, L.; Bajka, M.; Tanner, C.; Goksel, O. Ultrasound Simulation with Deformable and Patient-Specific Scatterer Maps. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1589–1599. [Google Scholar] [CrossRef]
  63. Gaits, F.; Mellado, N.; Basarab, A. Efficient 2D Ultrasound Simulation Based on Dart-Throwing 3D Scatterer Sampling. In Proceedings of the 2022 30th European Signal Processing Conference (EUSIPCO), Belgrade, Serbia, 29 August–2 September 2022; pp. 897–901. [Google Scholar]
  64. Gaits, F.; Mellado, N.; Bouyjou, G.; Garcia, D.; Basarab, A. Efficient Stratified 3-D Scatterer Sampling for Freehand Ultrasound Simulation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2024, 71, 127–140. [Google Scholar] [CrossRef]
  65. Pinton, G. Three Dimensional Full-Wave Nonlinear Acoustic Simulations: Applications to Ultrasound Imaging. AIP Conf. Proc. 2015, 1685, 070001. [Google Scholar] [CrossRef]
  66. Looby, K.; Herickhoff, C.D.; Sandino, C.; Zhang, T.; Vasanawala, S.; Dahl, J.J. Unsupervised Clustering Method to Convert High-Resolution Magnetic Resonance Volumes to Three-Dimensional Acoustic Models for Full-Wave Ultrasound Simulations. J. Med. Imaging 2019, 6, 037001. [Google Scholar] [CrossRef]
  67. Pinton, G. Ultrasound Imaging of the Human Body with Three Dimensional Full-Wave Nonlinear Acoustics. Part 1: Simulations Methods. arXiv 2020, arXiv:2003.06934. [Google Scholar]
  68. Pinton, G. Ultrasound Imaging with Three Dimensional Full-Wave Nonlinear Acoustic Simulations. Part 2: Sources of Image Degradation in Intercostal Imaging. arXiv 2020, arXiv:2003.06927. [Google Scholar]
  69. Varray, F.; Liebgott, H.; Cachard, C.; Vray, D. Fast Simulation of Realistic Pseudo-Acoustic Nonlinear Radio-Frequency Ultrasound Images. In Proceedings of the 2014 IEEE International Ultrasonics Symposium, Chicago, IL, USA, 3–6 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 2217–2220. [Google Scholar]
  70. Haigh, A.A.; McCreath, E.C. Acceleration of GPU-Based Ultrasound Simulation via Data Compression. In Proceedings of the 2014 IEEE International Parallel & Distributed Processing Symposium Workshops, Phoenix, AZ, USA, 19–23 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1248–1255. [Google Scholar]
  71. Mattausch, O.; Goksel, O. Monte-Carlo Ray-Tracing for Realistic Interactive Ultrasound Simulation. In Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine, Bergen, Norway, 7–9 September 2016; The Eurographics Association: Munich, Germany, 2016; pp. 173–181. [Google Scholar]
  72. Law, Y.; Kuhlen, T.; Cotin, S. Real-Time Simulation of B-Mode Ultrasound Images for Medical Training; RWTH Aachen University: Aachen, Germany, 2016. [Google Scholar]
  73. Keelan, R.; Shimada, K.; Rabin, Y. GPU-Based Simulation of Ultrasound Imaging Artifacts for Cryosurgery Training. Technol. Cancer Res. Treat. 2017, 16, 5–14. [Google Scholar] [CrossRef]
  74. Storve, S.; Torp, H. Fast Simulation of Dynamic Ultrasound Images Using the GPU. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2017, 64, 1465–1477. [Google Scholar] [CrossRef]
  75. Mattausch, O.; Makhinya, M.; Goksel, O. Realistic Ultrasound Simulation of Complex Surface Models Using Interactive Monte-Carlo Path Tracing. Comput. Graph. Forum 2018, 37, 202–213. [Google Scholar] [CrossRef]
  76. Tuzer, M.; Yazıcı, A.; Türkay, R.; Boyman, M.; Acar, B. Multi-Ray Medical Ultrasound Simulation without Explicit Speckle Modelling. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1009–1017. [Google Scholar] [CrossRef] [PubMed]
  77. Tanner, C.; Starkov, R.; Bajka, M.; Goksel, O. Framework for Fusion of Data- and Model-Based Approaches for Ultrasound Simulation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018, Granada, Spain, 16–20 September 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 332–339. [Google Scholar]
  78. Wang, Q.; Peng, B.; Cao, Z.; Huang, X.; Jiang, J. A Real-Time Ultrasound Simulator Using Monte-Carlo Path Tracing in Conjunction with Optix Engine. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3661–3666. [Google Scholar]
  79. Cambet, T.P.; Vitale, S.; Larrabide, I. High Performance Ultrasound Simulation Using Monte-Carlo Simulation: A GPU Ray-Tracing Implementation. In Proceedings of the 16th International Symposium on Medical Information Processing and Analysis, Lima, Perú, 3–4 November 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11583, pp. 107–115. [Google Scholar]
  80. Amadou, A.A.; Peralta, L.; Dryburgh, P.; Klein, P.; Petkov, K.; Housden, R.J.; Singh, V.; Liao, R.; Kim, Y.-H.; Ghesu, F.C.; et al. Cardiac Ultrasound Simulation for Autonomous Ultrasound Navigation. Front. Cardiovasc. Med. 2024, 11, 1384421. [Google Scholar] [CrossRef] [PubMed]
  81. Peng, B.; Huang, X.; Wang, S.; Jiang, J. A Real-Time Medical Ultrasound Simulator Based on a Generative Adversarial Network Model. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4629–4633. [Google Scholar]
  82. Abdi, A.H.; Tsang, T.; Abolmaesumi, P. GAN-Enhanced Conditional Echocardiogram Generation. arXiv 2019, arXiv:1911.02121. [Google Scholar] [CrossRef]
  83. Magnetti, C.; Zimmer, V.; Ghavami, N.; Skelton, E.; Matthew, J.; Lloyd, K.; Hajnal, J.; Schnabel, J.A.; Gomez, A. Deep Generative Models to Simulate 2D Patient-Specific Ultrasound Images in Real Time. In Proceedings of the Medical Image Understanding and Analysis, Oxford, UK, 15–17 July 2020; Papież, B.W., Namburete, A.I.L., Yaqub, M., Noble, J.A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 423–435. [Google Scholar]
  84. Zhang, L.; Vishnevskiy, V.; Goksel, O. Deep Network for Scatterer Distribution Estimation for Ultrasound Image Simulation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2020, 67, 2553–2564. [Google Scholar] [CrossRef]
  85. Zhang, L.; Portenier, T.; Paulus, C.; Goksel, O. Deep Image Translation for Enhancing Simulated Ultrasound Images. In Proceedings of the Medical Ultrasound, and Preterm, Perinatal and Paediatric Image Analysis: First International Workshop, ASMUS 2020, and 5th International Workshop, PIPPI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4–8 October 2020; Proceedings. Springer: Berlin/Heidelberg, Germany, 2020; pp. 85–94. [Google Scholar]
  86. Vitale, S.; Orlando, J.I.; Iarussi, E.; Larrabide, I. Improving Realism in Patient-Specific Abdominal Ultrasound Simulation Using CycleGANs. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 183–192. [Google Scholar] [CrossRef]
  87. Escobar, M.; Castillo, A.; Romero, A.; Arbeláez, P. UltraGAN: Ultrasound Enhancement Through Adversarial Generation. In Proceedings of the Simulation and Synthesis in Medical Imaging, Lima, Peru, 4 October 2020; Burgos, N., Svoboda, D., Wolterink, J.M., Zhao, C., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 120–130. [Google Scholar]
  88. Pigeau, G.; Elbatarny, L.; Wu, V.; Schonewille, A.; Fichtinger, G.; Ungi, T. Ultrasound Image Simulation with Generative Adversarial Network. In Proceedings of the Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, Houston, TX, USA, 15–20 February 2020; Volume 11315, pp. 54–60. [Google Scholar]
  89. Cronin, N.J.; Finni, T.; Seynnes, O. Using Deep Learning to Generate Synthetic B-Mode Musculoskeletal Ultrasound Images. Comput. Methods Programs Biomed. 2020, 196, 105583. [Google Scholar] [CrossRef]
  90. Ao, Y.; Yang, H.; Wei, G.; Ji, B.; Shi, W.; Jiang, Z. An Improved U-GAT-IT Model for Enhancing the Realism of Simulated Ultrasound Images. In Proceedings of the 2021 International Conference on Electronic Information Engineering and Computer Science (EIECS), Changchun, China, 23–26 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 336–340. [Google Scholar]
  91. Gilbert, A.; Marciniak, M.; Rodero, C.; Lamata, P.; Samset, E.; Mcleod, K. Generating Synthetic Labeled Data From Existing Anatomical Models: An Example with Echocardiography Segmentation. IEEE Trans. Med. Imaging 2021, 40, 2783–2794. [Google Scholar] [CrossRef]
  92. Zhang, L.; Portenier, T.; Goksel, O. Learning Ultrasound Rendering from Cross-Sectional Model Slices for Simulated Training. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 721–730. [Google Scholar] [CrossRef]
  93. Song, Y.; Zheng, J.; Lei, L.; Ni, Z.; Zhao, B.; Hu, Y. CT2US: Cross-Modal Transfer Learning for Kidney Segmentation in Ultrasound Images with Synthesized Data. Ultrasonics 2022, 122, 106706. [Google Scholar] [CrossRef]
  94. Maack, L.; Holstein, L.; Schlaefer, A. GANs for Generation of Synthetic Ultrasound Images from Small Datasets. Curr. Dir. Biomed. Eng. 2022, 8, 17–20. [Google Scholar] [CrossRef]
  95. Tiago, C.; Snare, S.R.; Šprem, J.; McLeod, K. A Domain Translation Framework with an Adversarial Denoising Diffusion Model to Generate Synthetic Datasets of Echocardiography Images. IEEE Access 2023, 11, 17594–17602. [Google Scholar] [CrossRef]
  96. Chen, L.; Liao, H.; Kong, W.; Zhang, D.; Chen, F. Anatomy Preserving GAN for Realistic Simulation of Intraoperative Liver Ultrasound Images. Comput. Methods Programs Biomed. 2023, 240, 107642. [Google Scholar] [CrossRef]
  97. Mendez, M.; Sundararaman, S.; Probyn, L.; Tyrrell, P.N. Approaches and Limitations of Machine Learning for Synthetic Ultrasound Generation. J. Ultrasound Med. 2023, 42, 2695–2706. [Google Scholar] [CrossRef]
  98. Stojanovski, D.; Hermida, U.; Lamata, P.; Beqiri, A.; Gomez, A. Echo from Noise: Synthetic Ultrasound Image Generation Using Diffusion Models for Real Image Segmentation. In Proceedings of the Simplifying Medical Ultrasound, Vancouver, BC, Canada, 8 October 2023; Kainz, B., Noble, A., Schnabel, J., Khanal, B., Müller, J.P., Day, T., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 34–43. [Google Scholar]
  99. Ghosh, R.K.; Sheet, D. Zero-Shot Multi-Frequency Ultrasound Simulation Using Physics Informed GAN. In Proceedings of the 2024 IEEE South Asian Ultrasonics Symposium (SAUS), Gujarat, India, 27–29 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–4. [Google Scholar]
  100. Song, Z.; Zhou, Y.; Wang, J.; Ma, C.Z.-H.; Zheng, Y. Synthesizing Real-Time Ultrasound Images of Muscle Based on Biomechanical Simulation and Conditional Diffusion Network. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2024, 71, 1501–1513. [Google Scholar] [CrossRef] [PubMed]
  101. Jaros, J.; Vaverka, F.; Treeby, B.E. Spectral Domain Decomposition Using Local Fourier Basis: Application to Ultrasound Simulation on a Cluster of GPUs. Supercomput. Front. Innov. 2016, 3, 40–55. [Google Scholar] [CrossRef]
  102. Zhao, X.; Hamilton, M.; McGough, R. Simulations of Nonlinear Continuous Wave Pressure Fields in FOCUS. AIP Conf. Proc. 2017, 1821, 080001. [Google Scholar]
  103. Wise, E.S.; Robertson, J.L.B.; Cox, B.T.; Treeby, B.E. Staircase-Free Acoustic Sources for Grid-Based Models of Wave Propagation. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4. [Google Scholar]
  104. Sharifzadeh, M.; Benali, H.; Rivaz, H. An Ultra-Fast Method for Simulation of Realistic Ultrasound Images. In Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS), Xi’an, China, 11–16 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar]
  105. Jacquet, J.B.; Tamraoui, M.; Kauffmann, P.; Guey, J.L.; Roux, E.; Nicolas, B.; Liebgott, H. Integrating Finite-Element Model of Probe Element in GPU Accelerated Ultrasound Image Simulation. In Proceedings of the 2023 IEEE International Ultrasonics Symposium (IUS), Montreal, QC, Canada, 3–8 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
  106. Olsak, O.; Jaros, J. Techniques for Efficient Fourier Transform Computation in Ultrasound Simulations. In Proceedings of the 33rd International Symposium on High-Performance Parallel and Distributed Computing, Pisa, Italy, 3–7 June 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 361–363. [Google Scholar]
  107. Su, M.; Xia, X.; Liu, B.; Zhang, Z.; Liu, R.; Cai, F.; Qiu, W.; Sun, L. High Frequency Focal Transducer with a Fresnel Zone Plate for Intravascular Ultrasound. Appl. Phys. Lett. 2021, 119, 143702. [Google Scholar] [CrossRef]
  108. Ji, J.; Gong, C.; Lu, G.; Zhang, J.; Liu, B.; Liu, X.; Lin, J.; Wang, P.; Thomas, B.B.; Humayun, M.S.; et al. Potential of Ultrasound Stimulation and Sonogenetics in Vision Restoration: A Narrative Review. Neural Regen. Res. 2024, 20, 3501–3516. [Google Scholar] [CrossRef] [PubMed]
  109. Heiles, B.; Blanken, N.; Kuliesh, A.; Versluis, M.; Jain, K.; Lajoinie, G.; Maresca, D. PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator—Part II: Imaging Applications. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2025, 72, 866–878. [Google Scholar] [CrossRef]
  110. Duelmer, F.; Azampour, M.F.; Wysocki, M.; Navab, N. UltraRay: Introducing Full-Path Ray Tracing in Physics-Based Ultrasound Simulation. arXiv 2025, arXiv:2501.05828. [Google Scholar]
  111. Isaac-for-Healthcare/I4h-Sensor-Simulation 2025. Available online: https://github.com/isaac-for-healthcare/i4h-sensor-simulation (accessed on 26 August 2025).
  112. Freiche, B.; El-Khoury, A.; Nasiri-Sarvi, A.; Hosseini, M.S.; Garcia, D.; Basarab, A.; Boily, M.; Rivaz, H. Ultrasound Image Generation Using Latent Diffusion Models. In Proceedings of the Medical Imaging 2025: Ultrasonic Imaging and Tomography, San Diego, CA, USA, 17–20 February 2025; SPIE: Bellingham, WA, USA, 2025; Volume 13412, pp. 287–292. [Google Scholar]
  113. Reynaud, H.; Gomez, A.; Leeson, P.; Meng, Q.; Kainz, B. EchoFlow: A Foundation Model for Cardiac Ultrasound Image and Video Generation. arXiv 2025, arXiv:2503.22357. [Google Scholar] [CrossRef]
Figure 1. Schematic overview of ultrasound image simulation approaches. The diagram separates methods into a data-driven perspective (interpolative and generative) and a physical modeling perspective, which includes model-free rendering and model-based simulations. Model-based and generative methods are grouped into wave-based, ray-based, and convolution-based families, summarizing their core principles and typical trade-offs in realism and computational cost.
Figure 2. PRISMA flow diagram summarizing the study identification and selection process. The database search yielded 1263 records, with an additional 23 studies identified through citation tracking and other sources. After removing duplicates, 601 unique records were screened based on titles and abstracts, leading to the exclusion of 525 studies. A total of 76 articles underwent full-text assessment, of which 19 were excluded for not meeting the predefined eligibility criteria. The final qualitative synthesis includes 80 studies that satisfied all inclusion criteria.
Figure 3. Unified pipeline for ultrasound simulation comprising: (1) anatomical volume and heterogeneous acoustic property maps; (2) system configuration, including transducer geometry and acquisition protocol; (3) core simulation kernel (full-wave, geometric-acoustics, shift-variant convolutional PSF models, or AI-based surrogates); (4) image formation through beamforming and post-processing, with optional modules (*) for speckle modeling, tissue deformation, or AI-based refinement; and (5) B-mode/RF output and validation metrics. The diagram highlights that most methodological variability arises from the kernel implementation and optional modules, whereas the overall pipeline structure remains invariant across simulators.
Table 1. Classification of the 80 selected studies by their primary ultrasound simulation methodology, listing article counts and first-author references across the major categories.

| Category | Total Articles | First Author (Year) |
| --- | --- | --- |
| Software and toolboxes | 9 | Rodriguez-Molares et al. (2017) [27], Garcia (2021) [28], Gu & Jing (2021) [29], Garcia (2022) [30], Cigier et al. (2022) [31], Ekroll et al. (2023) [32], Blanken et al. (2024) [33], Brevett (2024) [34], Garcia & Varray (2024) [35] |
| CT/MRI-derived US simulation | 8 | D’Amato et al. (2015) [36], Salehi et al. (2015) [37], Szostek & Piórkowski (2016) [38], Rubi et al. (2017) [39], Camara et al. (2017) [40], Satheesh B. & Thittai (2018) [41], Satheesh B. & Thittai (2019) [42], Velikova et al. (2023) [43] |
| Dynamic motion modeling | 11 | Rivaz & Collins (2014) [44], Storve & Torp (2015) [45], Alessandrini et al. (2015) [46], Alessandrini et al. (2015) [47], Mastmeyer et al. (2016) [48], Zhou et al. (2018) [49], Alessandrini et al. (2018) [50], Szostek et al. (2023) [51], Abhimanyu et al. (2023) [52], Velikova et al. (2024) [53], Burman et al. (2024) [54] |
| Speckle, scatter, or noise modeling | 10 | Mattausch & Goksel (2015) [55], Mattausch & Goksel (2016) [56], Singh et al. (2017) [57], Singh et al. (2017) [58], Singh et al. (2017) [59], Mattausch & Goksel (2018) [60], Singh et al. (2018) [61], Starkov et al. (2019) [62], Gaits et al. (2022) [63], Gaits et al. (2024) [64] |
| Wave-based methods | 4 | Pinton (2015) [65], Looby et al. (2019) [66], Pinton (2020) [67], Pinton (2020) [68] |
| Ray- and convolution-based methods | 12 | Varray et al. (2014) [69], Haigh & McCreath (2014) [70], Mattausch & Goksel (2016) [71], Law et al. (2016) [72], Keelan et al. (2017) [73], Storve & Torp (2017) [74], Mattausch et al. (2018) [75], Tuzer et al. (2018) [76], Tanner et al. (2018) [77], Wang et al. (2020) [78], Cambet et al. (2020) [79], Amadou et al. (2024) [80] |
| AI-based methods | 20 | Peng et al. (2019) [81], Abdi et al. (2019) [82], Magnetti et al. (2020) [83], Zhang et al. (2020) [84], Zhang et al. (2020) [85], Vitale et al. (2020) [86], Escobar et al. (2020) [87], Pigeau et al. (2020) [88], Cronin et al. (2020) [89], Ao et al. (2021) [90], Gilbert et al. (2021) [91], Zhang et al. (2021) [92], Song et al. (2022) [93], Maack et al. (2022) [94], Tiago et al. (2023) [95], Chen et al. (2023) [96], Mendez et al. (2023) [97], Stojanovski et al. (2023) [98], Ghosh & Sheet (2024) [99], Song et al. (2024) [100] |
| Distinct approaches | 6 | Jaros et al. (2016) [101], Zhao et al. (2017) [102], Wise et al. (2017) [103], Sharifzadeh et al. (2021) [104], Jacquet et al. (2023) [105], Olsak & Jaros (2024) [106] |
Table 2. Overview of B-mode ultrasound simulators and toolboxes included in this review, with information on their publication date, characteristics, GPU support, and availability.

| Simulator/Toolbox | Publication Year | Domain | GPU Support | Open-Source Availability |
| --- | --- | --- | --- | --- |
| FIELD [19] | 1996 | Early spatial impulse response (SIR)-based computation of transducer fields (precursor to Field II). | See FIELD II | See FIELD II |
| ULTRASIM [22] | 2001 | Transducer array modeling and CW/PW acoustic field simulation (near and far field). | Not specified | Yes (GNU GPL). https://www.mn.uio.no/ifi/english/research/groups/dsb/resources/software/ultrasim/ (accessed on 23 November 2025) |
| FIELD II [18] | 2004 | Linear RF and B-mode simulation using spatial impulse response (SIR); point-scatterer modeling; beamforming research. | No; only modified versions such as Field IIpro or FIELDGPU | Free (non-commercial). https://field-ii.dk// (accessed on 23 November 2025) |
| k-WAVE [21] | 2010 | Full-wave nonlinear ultrasound and photoacoustics using a k-space pseudospectral solver (acoustic/elastic media). | Yes | Yes (LGPL). http://www.k-wave.org/documentation/k-wave.php (accessed on 23 November 2025) |
| CREANUIS [23] | 2010 | Fundamental + harmonic RF simulation with pseudo-acoustic modeling. | Yes | Yes (CeCILL-B). https://www.creatis.insa-lyon.fr/site/fr/creanuis (accessed on 23 November 2025) |
| FOCUS [20] | 2012 | Fast transient field computation using spatial impulse response and convolution. | Not specified | Partially (free, but unclear license). https://www.egr.msu.edu/~fultras-web/download.php (accessed on 23 November 2025) |
| Ultrasound Toolbox (USTB) [27] | 2017 | Toolbox for beamforming, processing, and benchmarking of 2D/3D datasets; standardizes data formats. | Yes | Partially (free, but unclear license). https://www.ustb.no/ (accessed on 23 November 2025) |
| mSOUND [29] | 2021 | Linear and nonlinear acoustic wave propagation in heterogeneous media (Born approximation). | No | Yes (GPL-3.0). https://m-sound.github.io/mSOUND/home (accessed on 23 November 2025) |
| MUST [28] | 2021 | Frequency-domain ultrasound simulation and design of imaging scenarios; attenuation/directivity modeling. | No, but multi-CPU (MATLAB Parallel Computing Toolbox) | Yes (LGPL-3.0). https://www.biomecardio.com/MUST/index.html (accessed on 23 November 2025) |
| SIMUS [30,31] | 2022 | Fast ray-based/convolutional simulation of pressure fields and RF signals (2D). | See MUST | Included in MUST |
| FLUST [32] | 2023 | Blood-flow and Doppler simulation for velocity-estimation benchmarking. | See USTB | Included in USTB |
| SIMUS3 [35] | 2024 | 3D extension of SIMUS/PFIELD for matrix arrays and volumetric imaging. | See MUST | Included in MUST |
| PROTEUS [33] | 2024 | Contrast-enhanced ultrasound (CEUS) RF data simulation, including bubble dynamics. | Yes | Yes (MIT). https://github.com/PROTEUS-SIM/PROTEUS (accessed on 23 November 2025) |
| QUPS [34] | 2024 | Standardized data structures and GPU-accelerated beamforming for research workflows. | Yes | Yes (Apache 2.0). https://github.com/thorstone25/qups (accessed on 23 November 2025) |

All the simulators and toolboxes listed above are either written in MATLAB, available as third-party MATLAB toolboxes, or can be executed within the MATLAB environment.
Table 3. Qualitative comparison of wave-based, ray-based, convolution-based, and AI-based ultrasound simulation approaches, summarizing their expected physical fidelity, computational cost, dominant application domains, and main advantages and limitations.

| Methodology | Physical Fidelity | Computational Cost | Primary Applications | Key Advantages/Limitations |
| --- | --- | --- | --- | --- |
| Wave-based | Very high (captures diffraction, scattering, nonlinearity) | Very high (minutes–hours per frame; offline) | Device design, safety modeling, research requiring acoustic accuracy | + More realistic physics. − Not feasible for real-time or large volumes. |
| Ray-based | Medium–high (macroscopic artifacts: reflection, refraction, shadowing) | Low (real-time with GPU) | Procedural training, interactive environments | + Fast; handles large volumes. − Cannot reproduce interference or speckle physics. |
| Convolution-based | Medium–high (realistic speckle, PSF-based blurring) | Low (real-time; scalable dataset generation) | Speckle studies, motion modeling, ML dataset creation | + Fast and flexible. − Limited nonlinear and complex propagation modeling. |
| AI-based (GANs, diffusion) | High visual realism | Very low (inference < 40 ms/frame) | Data augmentation, domain transfer, interactive training | + Extreme speed, high realism. − Limited explicit physical control; dependent on training data. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
