Article

Sensing Through Tissues Using Diffuse Optical Imaging and Genetic Programming

Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Be’er Sheva 8441405, Israel
* Author to whom correspondence should be addressed.
Sensors 2026, 26(1), 318; https://doi.org/10.3390/s26010318
Submission received: 3 December 2025 / Revised: 27 December 2025 / Accepted: 30 December 2025 / Published: 3 January 2026

Abstract

Diffuse optical imaging (DOI) uses scattered light to non-invasively sense and image highly diffuse media, including biological tissues such as the breast and brain. Despite its clinical potential, widespread adoption remains limited by physical constraints, scarce datasets, and conventional reconstruction algorithms that struggle with the strongly nonlinear, ill-posed inverse problem posed by multiple photon scattering. We introduce Diffuse optical Imaging using Genetic Programming (DI-GP), a physics-guided and fully interpretable genetic programming framework for DOI. Grounded in the diffusion equation, DI-GP evolves closed-form symbolic mappings that enable fast and accurate 2-D reconstructions in strongly scattering media. Unlike deep neural networks, Genetic Programming (GP) naturally produces symbolic expressions, explicit rules, and transparent computational pipelines—an increasingly important capability as regulatory and high-stakes domains (e.g., FDA/EMA, medical imaging regulation) demand explainable and auditable AI systems, and where training data are often scarce. DI-GP delivers substantially faster inference and improved qualitative and quantitative reconstruction performance compared to analytical baselines. We validate the approach in both simulations and tabletop experiments, recovering targets without prior knowledge of shape or location at depths exceeding ~25 transport mean-free paths. Additional experiments demonstrate centimeter-scale imaging in tissue-like media, highlighting the promise of DI-GP for non-invasive deep-tissue imaging and its potential as a foundation for practical DOI systems.

1. Introduction

Diffuse optical imaging (DOI) is a non-invasive computational image retrieval technique employing near-infrared (NIR) light to explore thick scattering media like biological tissues or fog [1,2,3,4,5]. NIR light can penetrate several centimeters into soft tissues like the breast and brain [6]. Because of this capability, DOI has diverse medical applications, including tumor detection [7,8], brain studies [9,10,11,12], and chemotherapy treatment monitoring [13,14,15]. Its safety and ability to provide detailed tissue information make DOI a promising tool for diagnosing diseases and guiding treatment strategies [16,17].
Numerous specialized mathematical and computational algorithms have been developed for reconstructing anomalies within diffuse media with diverse geometrical and physical properties over recent decades [17,18,19,20]. However, image reconstruction in this domain poses significant challenges, primarily due to the inherent complexity of the inverse problem [21]. The inverse problem is nonlinear, ill-posed, and ill-conditioned [17,22]. Furthermore, while camera measurements of light transmission through diffuse media offer notable imaging contrast, the reconstructed images often exhibit subpar spatial resolution [23,24]. Additionally, many classical methods rely on assumptions of complete measurements or strong linearization, usually based on precisely known boundary conditions, a scenario seldom achievable in practical settings. This limitation arises from the nature of light propagation within diffuse media, where the light takes a convoluted and unpredictable path influenced by multiple scattering events [25].
In recent years, addressing this challenge has spurred active research into employing various deep neural network (DNN) architectures for image reconstruction in diffuse optical imaging [13,26,27,28,29]. These algorithms offer several advantages for image reconstruction in DOI. They excel at learning complex mappings between input (camera measurement) and output (reconstructed anomalies) spaces, allowing for more accurate and robust reconstruction even in the presence of noise and uncertainties [30,31,32]. Additionally, deep neural networks can capture intricate patterns and relationships within the data, enabling enhanced spatial resolution and finer detail in the reconstructed images compared to traditional methods [16,26]. Moreover, deep learning approaches have the flexibility to adapt to different imaging scenarios and can be trained on diverse datasets, making them highly versatile for DOI applications across various tissue types and conditions [13].
However, despite their promise, deep learning methods for DOI image reconstruction have certain limitations. These include the need for large training data, which may be challenging to acquire, especially for rare or specialized imaging scenarios. Additionally, deep neural networks can be computationally intensive, requiring substantial resources for training and inference, which may limit their practical implementation in real-time imaging systems. Moreover, the “black-box” nature of several deep learning models can make it difficult to interpret the reasoning behind their decisions, raising concerns about the reliability and trustworthiness of the reconstructed images in clinical settings [16,17,27,33].
Some of the above limitations associated with learning-based approaches like DNNs can be mitigated through evolution-based methods such as Genetic Programming (GP) [34]. Genetic programming is a powerful computational technique inspired by biological evolution that automatically generates tree-like recursive computer programs to solve complex problems [35]. By mimicking the process of natural selection, GP evolves a population of candidate solutions over successive generations, repeatedly applying genetic operators such as mutation, crossover, and selection to facilitate evolutionary search [34,36]. This continual optimization process allows GP to effectively search large solution spaces and discover high-performing solutions. Modern science and medicine increasingly require transparent and interpretable AI models rather than opaque black-box systems. GP naturally produces symbolic expressions, explicit rules, and fully interpretable computational pipelines, offering transparency by design; unlike deep neural networks, which often depend on post hoc explanation techniques, GP provides built-in interpretability. This capability is especially relevant today, as regulatory and high-stakes domains such as the FDA, the EMA, and medical imaging regulation now mandate explainable and auditable AI systems, making GP a strong and compliant alternative to black-box approaches. Moreover, many real-world “AI for science” problems operate in low-data, noisy, or incomplete-data environments, where traditional deep learning struggles. GP performs robustly under these conditions, delivering reliable results with limited samples while enabling flexible, globally optimized, and physically grounded model discovery. Together, these strengths (interpretability, flexibility, global optimization, symbolic discovery, and strong low-data performance) explain why GP is increasingly relevant in modern scientific and medical AI.
To summarize, GP presents several advantages. Firstly, because the programs evolved by GP may contain functions tailored to the specific problem domain, the process often demands less training data since it leverages existing domain knowledge rather than constructing models from scratch. This attribute translates into shorter training times, particularly suitable for applications requiring swift deployment [37]. Furthermore, GP trees are comprehensible to human programmers in certain conditions, offering a distinct advantage in interpretability compared to the often opaque neural networks. Such transparency is crucial, especially in fields like medical imaging, where comprehension and trust in the reconstruction process are paramount [38]. Additionally, GP’s adaptability in defining and refining the cost function enables customization to the specific requirements and constraints of the task, unlike the more rigid loss functions typical in learning algorithms [39]. These characteristics allow GP to find utility and relevance in various applications and industries [37,38,40].
GP presents a unique approach to image reconstruction, focusing on evolving executable structures rather than specific parameter values [37]. GP’s adaptability in representing and evolving executable code enables it to discern complex relationships within image datasets, thereby enhancing reconstruction accuracy. Moreover, GP’s interpretability and transparent modeling approach render it suitable for applications necessitating explainable AI, such as forensic imaging and industrial quality control [41,42,43]. These recent advancements highlight GP’s growing significance in image reconstruction research and its potential for addressing real-world challenges across various domains [37,40,44,45].
We introduce Diffuse optical Imaging using Genetic Programming (DI-GP), a novel GP algorithm tailored to efficiently and accurately reconstruct 2D objects in strongly scattering media. Unlike recent machine learning (ML) approaches, which often require extensive computational resources, large datasets, and lengthy training times to achieve desired outcomes, DI-GP focuses on minimizing training duration, reducing dependence on extensive datasets, and effectively transferring domain knowledge to real-world experiments. DI-GP achieves this by directly integrating the principles of the diffusion equation into its evolutionary process, allowing for the development of optimized solutions with greater precision and efficiency [25].
In addition to the advantages over DNNs outlined earlier, the proposed approach also presents several notable benefits compared to traditional analytical methods. These include expedited inference times and superior performance across multiple qualitative and quantitative metrics, such as a mean squared error of 0.0240 ± 0.0121, a structural similarity index of 0.8108 ± 0.0498, and a Pearson correlation coefficient of 0.7919 ± 0.0866. Furthermore, our algorithm can retrieve objects at imaging depths corresponding to several centimeters of human tissue [46]. Overall, our findings underscore the potential of DI-GP as a valuable tool for rapid and accurate reconstruction in DOI applications as a proof of concept.
The following sections detail the image reconstruction algorithm, elaborate on the experimental methodology, and present the results obtained from our study.

2. Theory, Methods and Experiments

This section details the methodology of computational imaging in the diffuse regime. We start by discussing the forward and inverse problems associated with this imaging technique. Following this, we explore the methods for simulating light transport and collecting training data for algorithm development. Then, we introduce DI-GP, a novel approach that utilizes Genetic Programming for computational diffuse optical imaging. Finally, we provide a detailed overview of the experimental setup and procedures employed to validate the effectiveness of our proposed methodology.

2.1. Forward and Inverse Problems in Diffuse Optical Imaging

The forward problem in computational diffuse optical imaging involves predicting light distribution within the diffuse medium based on known optical properties and boundary conditions. We conducted experiments using a melamine sponge with a thickness of 3 cm, where the transport mean free path ($\tau$) at λ = 639 nm is more than 25 times smaller than the total thickness of the material. This configuration places the light in the strongly diffuse regime, enabling us to utilize a diffusion approximation to model the propagation of photons within the diffuse medium [24,25]. The diffusion equation can be expressed as:
$$\mu_a \Phi(\mathbf{r}) - \nabla \cdot \left[ D \nabla \Phi(\mathbf{r}) \right] = S(\mathbf{r}) \tag{1}$$
Here, $\Phi(\mathbf{r})$ represents the photon flux at position $\mathbf{r}$, $\mu_a$ is the absorption coefficient, $S(\mathbf{r})$ is the source power density, and $D$ is the diffusion coefficient, given by $D = \left[3\left(\mu_a + \mu_s'\right)\right]^{-1}$. Here, $\mu_s'$ is the reduced scattering coefficient, given by $\mu_s' = \mu_s\left(1 - g\right)$, where $\mu_s$ denotes the scattering coefficient and $g$ denotes the anisotropy factor, defined as the mean cosine of the scattering angle. The reduced scattering coefficient captures the effective rate of directional randomization in anisotropic scattering media.
When photon propagation is modeled using Equation (1), it follows the steepest descent of the scalar gradient weighted by the diffusivity (dominated by the reduced scattering coefficient $\mu_s'$ in strongly diffuse media), accompanied by an additional loss due to photon absorption. For highly localized sources, Equation (1) has an analytical solution given by [25]:
$$\Phi_m(r) = \frac{\varphi\, e^{-\sqrt{\mu_a / D}\, r}}{4 \pi D r c} \tag{2}$$
Here, Φ m is the photon flux measurement at distance r, φ is the source amplitude term, and c is the speed of light in the medium.
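As a concrete illustration, the point-source solution of Equation (2) can be evaluated in a few lines of Python. The sketch below uses the optical properties reported later in Section 2.2 ($\mu_a$ = 0.0035 mm⁻¹, $\mu_s'$ = 0.8090 mm⁻¹); the unit normalization $c = 1$ and $\varphi = 1$ are assumptions made purely for illustration.

```python
import math

def diffusion_coefficient(mu_a, mu_s_prime):
    """Diffusion coefficient D = [3 * (mu_a + mu_s')]^-1."""
    return 1.0 / (3.0 * (mu_a + mu_s_prime))

def photon_flux(r, mu_a, mu_s_prime, phi=1.0, c=1.0):
    """Point-source solution of the diffusion equation, Equation (2):
    Phi_m(r) = phi * exp(-sqrt(mu_a / D) * r) / (4 * pi * D * r * c).
    phi = 1 and c = 1 are illustrative normalizations (assumptions)."""
    D = diffusion_coefficient(mu_a, mu_s_prime)
    return phi * math.exp(-math.sqrt(mu_a / D) * r) / (4.0 * math.pi * D * r * c)

# Optical properties fitted in Section 2.2 (units of 1/mm), 30 mm sample
flux = photon_flux(r=30.0, mu_a=0.0035, mu_s_prime=0.8090)
```

As expected for a diffuse medium, the predicted flux decays steeply with source-detector distance, which is what makes the inverse problem so ill-conditioned.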
Solving the forward problem is crucial for comprehending light interactions within diffuse media and serves as the foundation for image reconstruction in diffuse optical imaging [17,19,47]. In computational diffuse optical imaging, the inverse problem revolves around deducing the characteristics of hidden structures within a medium based on measurements obtained from the camera. This problem is particularly pertinent in scenarios where the goal is to reconstruct the shape, contrast, or other relevant parameters of objects embedded within biological tissue or other scattering media [23,48].
In scenarios where anomalies are present within an otherwise homogeneous background, the anomaly can be reconstructed by inverting a linearized forward model given by:
$$M = J \cdot \chi \tag{3}$$
Here, $M$ is the measurement obtained at the camera, $\chi$ is the optical perturbation, and $J$ is the Jacobian matrix, given by $J = \partial \Phi(\mathbf{r}) / \partial \mu_a$, which can be obtained from Equation (2) [25].
Initially, we compute the forward model using Equation (2) to simulate how light travels from the input plane to the object plane. Then, we use simple estimates to mask the propagation from the object plane in the diffuse medium to the measurement matrix. Next, we compare this numerical solution to the actual measurement by checking a cost function. This function helps adjust the shape of the estimated object, refining it through an iterative process of solving Equation (3) with the adjusted guess. Therefore, computational image reconstruction in diffuse optical imaging is typically framed as an optimization task aimed at minimizing the objective function given by:
$$\arg\min_{\chi} \left\| J \cdot \chi - M \right\|^2 + \delta \tag{4}$$
Here, δ represents the regularization term, frequently derived from specific assumptions regarding the statistical properties of optical images [16,29,49]. Similarly, in this article, we propose a GP algorithm that retrieves anomalies ( χ ) inside a strongly diffuse medium by optimizing a modified Equation (4) given by:
$$\chi_{retrieved} = \arg\min_{\chi} \frac{1}{2} \left\| J \cdot \chi_{prediction} - M \right\|_2^2 + \delta \times TM \tag{5}$$
Here, $\chi_{retrieved}$ is the retrieved image containing the anomaly distribution, $\chi_{prediction}$ is the predicted outcome for each individual of the GP algorithm, and $TM$ is the thresholding mask, which minimizes Equation (5) only over pixels where the signal-to-noise ratio in the recorded data $M$ exceeds a certain threshold [23]. The binary mask is defined from a per-pixel signal-to-noise ratio map $SNR(x, y) = M(x, y) / \sigma_n$, where $\sigma_n$ denotes the standard deviation of the background noise estimated from reference measurements acquired without the anomaly. Pixels satisfying $SNR(x, y) \geq SNR_{thr}$ are retained, whereas the remaining pixels are excluded from the minimization to prevent noise-dominated measurements from influencing the solution. In this work, the threshold is set at $SNR_{thr} = 3$.
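The thresholding mask above can be sketched with NumPy as follows. The absolute value on the reference-subtracted measurement is an assumption (the signed data could be negative); the toy array and noise level are illustrative only.

```python
import numpy as np

def snr_threshold_mask(M, sigma_n, snr_thr=3.0):
    """Binary mask retaining pixels whose per-pixel SNR = |M| / sigma_n
    meets the threshold; excluded pixels do not enter the minimization.
    Taking |M| is an assumption for signed reference-subtracted data."""
    snr = np.abs(M) / sigma_n
    return (snr >= snr_thr).astype(float)

# Hypothetical reference-subtracted measurement, background noise std 0.01
M = np.array([[0.05, 0.002],
              [0.04, 0.001]])
mask = snr_threshold_mask(M, sigma_n=0.01)  # keeps only pixels with SNR >= 3
```

Multiplying the residual of Equation (5) by this mask zeroes out the noise-dominated pixels before the cost is accumulated.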
The details of the algorithm are presented in Section 2.3. However, before delving into the image reconstruction algorithm, we look at the light transport simulations and the dataset used to evolve the GP algorithm.

2.2. Light Transport Simulations and Training Data

Applying evolution-based methods for DOI reconstruction requires physical knowledge about the background sample and the embedded anomalies, contributing to the contrast observed in DOI measurements. This necessitates the availability of a dataset. Although GP algorithms require datasets that are several orders of magnitude smaller than DNNs, acquiring such datasets in the context of DOI remains challenging [13,37,40]. Therefore, simulating light propagation through digital phantoms offers a viable approach to generating datasets for algorithm training. Furthermore, light transport simulations offer another advantage: they allow for the assessment of experimental design considerations through computer simulations, helping to avoid the costly and wasteful construction of clinical prototype systems that may possess inherent design flaws [8,18].
In this study, simulations are performed using a cuboid volume measuring 110 × 60 × 30 mm³ (height × width × thickness), as depicted in Figure 1. The volumetric mesh is generated using a MATLAB-based meshing tool called iso2mesh [50]. The sample’s imaging thickness is 30 mm, with absorption and scattering coefficients of 0.0035 mm⁻¹ and 0.8090 mm⁻¹, respectively. These parameters were derived by fitting experimental measurements to the diffusion equation [23,51,52].
A continuous wave Gaussian light source operating at a wavelength of 639 nm and a spot size of radius 1.5 cm is employed to illuminate the sample, with flux measurements taken at the opposite boundary. Anomalies are introduced at the central depth of the sample (15 mm), comprising hand-drawn shapes placed within the field of view of the imaging system. These anomalies are characterized by absorption coefficients ranging from 0.5 mm⁻¹ to 0.72 mm⁻¹, representing the contrast between the anomalies and background samples in the experimental material.
The light transport simulations were executed using MCX [53,54]. This MATLAB-based software employs Monte Carlo algorithms to model light transport in diffuse media. The simulations were conducted on 30 distinct meshes, each featuring a unique hand-drawn anomaly shape. For each shape, both the shape itself and the corresponding measurement data were saved, resulting in a dataset comprising 30 measurement vectors as inputs and the corresponding anomaly distributions as ground truths. Additionally, the Jacobian at the imaging plane and the reference measurements were obtained by conducting simulations without the anomalies. 2% Gaussian noise is added to the measurements to simulate experimental noise [8,55]. The dataset is utilized to train the DI-GP, which evolves an optimal solution for retrieving hidden anomalies within diffuse media. The algorithm is described in the next section.
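A minimal sketch of the noise-injection step, assuming the 2% level is defined relative to each pixel's magnitude (one common convention; the exact definition is not specified above):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(measurement, level=0.02):
    """Add Gaussian noise at the given fractional level, scaled to each
    pixel's own magnitude (an assumed convention), to emulate the 2%
    experimental noise added to the simulated measurements."""
    sigma = level * np.abs(measurement)
    return measurement + rng.normal(0.0, 1.0, measurement.shape) * sigma

clean = np.ones((64, 64))       # placeholder for a simulated measurement
noisy = add_gaussian_noise(clean)
```

The perturbed measurements then play the role of the "camera" data during evolution, so the evolved individuals never see noiseless inputs.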

2.3. Training the DI-GP Algorithm: Diffuse Optical Imaging Using Genetic Programming

Genetic Programming is a nature-inspired hyper-heuristic search algorithm that generates and evolves computer programs representing solutions to a given problem. GP begins with a set of basic building blocks, a high-level objective, and a method for evaluating solution quality. As illustrated in Figure 2, it automatically generates a group (or population) of random initial solutions (or individuals). Individuals are computer programs structured as trees. This structure allows for a hierarchical organization of operations and data, resembling branches and leaves, where each node represents a computational step or decision point. Individuals are evaluated using a predefined fitness function specifically tailored to the task at hand [35,56]. Better individuals are given more favorable fitness scores.
GP then constructs the next population of individuals (or the next generation) by using a method analogous to natural selection: better individuals (i.e., those with better fitness scores) are selected with a higher probability for the application of genetic operators, such as crossover, mutation, and gene duplication [35]. Through fitness-based selections and genetic operators applied over time, genetic algorithms iteratively optimize solutions during simulated evolution.
Each genetic operator yields one or more individuals, which are inserted into the population. In contrast, individuals that were not selected are discarded, adhering to the survival-of-the-fittest principle. The process then reiterates until one of the termination conditions is met, typically after a fixed number of generations has passed. Further details regarding the image reconstruction are provided in the following sub-sections.

2.3.1. Image Reconstruction Using Genetic Programming and Structure of the GP Individuals

In this study, we utilize a Koza-style GP [34] within the DEAP Python environment [57]. In this framework, each GP individual in our algorithm embodies a function of the form $\mathbb{R}^{N \times N} \to \mathbb{R}$: it takes as input a neighborhood of N × N (N = 5 in our case) pixels from the measurement image ($M$) and produces a single pixel at the corresponding location in the predicted image ($\chi_{prediction}$). This is repeated for all pixels in $M$ whose N × N neighborhood is fully contained in the image, while pixels with inadequate neighborhoods, in the margins or corners, are skipped. Consequently, the image is reconstructed pixel by pixel, leveraging local pixel data of the measurements. The pseudocode in Table 1 represents the generation of a predicted image, where $M$ is the input image, GP_individual is the individual currently being evaluated, and N is the width of the neighborhood (the neighborhood is N × N pixels).
GP individuals, serving as candidate solutions for reconstructing a pixel using a local N × N neighborhood of pixels in $M$, embody functions of the form $\mathbb{R}^{N \times N} \to \mathbb{R}$, as stated above. Each GP individual (or GP tree) resembles a recursive LISP expression, consisting of inner functions that accept arguments and terminals that serve as leaf nodes without any arguments [58]. Trees are initially constructed at random, adhering to specific depth constraints (see below). The leaf nodes are either input pixels, marked as $Pixel_1, Pixel_2, \ldots, Pixel_{N \times N}$, or random constants in the range [0, 1]. Inner tree-nodes are functions with varying arities, each applied to the appropriate number of inputs, which can be outputs of other tree-nodes or values from terminals. All functions and terminals output numeric values. The final values GP individuals return are clipped to the range [0, 1]. Table 2 provides a comprehensive list of inner functions and terminals utilized by GP.
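The sliding-window evaluation summarized by the pseudocode in Table 1 can be sketched in plain Python as follows. The toy individual here (a neighborhood mean) is merely a stand-in for an evolved GP tree; treating skipped border pixels as zeros is an assumption for illustration.

```python
import numpy as np

def reconstruct(M, gp_individual, N=5):
    """Apply a GP individual, a function R^(N*N) -> R, to every N x N
    neighborhood of the measurement image M. Border pixels whose
    neighborhood is not fully contained in M are skipped (left at 0,
    an illustrative choice). Outputs are clipped to [0, 1]."""
    h, w = M.shape
    half = N // 2
    prediction = np.zeros_like(M, dtype=float)
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = M[i - half:i + half + 1, j - half:j + half + 1]
            # Flatten the neighborhood to Pixel_1 ... Pixel_{N*N}
            value = gp_individual(*patch.ravel())
            prediction[i, j] = min(max(value, 0.0), 1.0)
    return prediction

# Toy "individual": mean of the neighborhood (stand-in for an evolved tree)
toy_individual = lambda *pixels: sum(pixels) / len(pixels)
image = reconstruct(np.ones((8, 8)), toy_individual, N=5)
```

Because each output pixel depends only on its local neighborhood, a single evolved expression generalizes across the whole image, which is part of why such a small training set suffices.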

2.3.2. Parameters for Evolution

The algorithm’s parameters were empirically defined within specific ranges to optimize performance. The population size, representing the number of individuals in each generation, varied between 150 and 250, which is relatively small but still provides enough diversity. Generation count, indicating the number of iterations or generations, ranged from 20 to 40 since evolution typically converged to good solutions relatively fast. The crossover probability was set at 0.6 and the mutation probability at 0.4, which is relatively high for mutation, to avoid local optima. Tree depth, specifying the maximum depth of GP individuals, ranged from 3 to 5, balancing complexity and computational efficiency. Lastly, we used tournament selection [59], which is currently one of the most widely used methods for selection [60], with a tournament size of 4.
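Tournament selection with the parameters above can be illustrated in a few lines of Python; the population and fitness values below are toy placeholders, and in practice DEAP provides an equivalent built-in (`tools.selTournament`).

```python
import random

random.seed(42)

def tournament_select(population, fitness, k=4):
    """Tournament selection: sample k individuals uniformly at random and
    return the one with the best (lowest) fitness cost, matching the
    minimization objective used by DI-GP. k=4 mirrors the tournament
    size reported above."""
    contenders = random.sample(range(len(population)), k)
    best = min(contenders, key=lambda i: fitness[i])
    return population[best]

# Toy population of candidate programs with precomputed fitness costs
pop = ["ind_a", "ind_b", "ind_c", "ind_d", "ind_e"]
fit = [0.9, 0.1, 0.5, 0.7, 0.3]
winner = tournament_select(pop, fit, k=4)
```

A tournament size of 4 applies moderate selection pressure: good individuals win most tournaments they enter, yet weaker ones occasionally survive, which helps preserve diversity in a small population.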
The genetic programming hyperparameters were determined empirically across several experiments, following standard practice in tree-based genetic programming and symbolic regression, with the goal of stable convergence and controlled expression growth. These values regulate the evolutionary search dynamics, including exploration intensity, selection pressure, and bloat control, and do not modify the underlying inverse problem, which is defined by the forward model and the physics-guided fitness functions. Consequently, reasonable variations in these hyperparameters mainly affect convergence rate, computational cost, and the complexity of the evolved expressions, rather than the physical objective being optimized [61,62].

2.3.3. Evaluation of the GP Individuals

The evaluation of GP individuals begins with reference-subtracted simulated measurements, serving as inputs for the GP optimization process, referred to as evolution or training. During this process, explained in the previous subsections, the model undergoes optimization through selection, where various candidate solutions compete over multiple generations, and the best-performing individuals are selected for further refinement. This iterative process enhances the model’s accuracy and robustness. Once the best individual is identified, it is tested on real measurement data to validate its performance. This real-world evaluation ensures that the model’s predictions are accurate and reliable. One important thing to note is that the algorithm leverages ground truth data solely during the training phase. This data is instrumental in optimizing the solution by assessing predictions using the Mean Squared Error (MSE) loss [26], which aids in identifying the distribution of hidden anomalies, and the Dice loss, which enhances edge detection capabilities [63]. The schematic of the DI-GP workflow to evaluate the best individual is shown in Figure 3. All simulations and genetic programming experiments were executed on a workstation-class laptop, namely an HP OMEN by HP Laptop 16 c0004nj, running Microsoft Windows 11 64-bit, equipped with an AMD Ryzen 7 5800H central processing unit with 8 cores and 16 logical processors at a base frequency of 3.2 GHz, and 32 GB of random-access memory. Under this configuration, the full genetic programming evolution required approximately 15 min of wall clock time, and inference for a single reconstruction required less than 1 s.
As can be seen in Figure 3, the fitness function plays a significant role in determining the best individual. It quantifies the accuracy and effectiveness of an individual model in solving the problem at hand by incorporating various performance metrics to provide a comprehensive assessment of the model’s quality. By assigning a fitness score to each candidate, the fitness function guides the selection process, ensuring that the most promising solutions are identified and iteratively refined, ultimately leading to the optimal solution. The fitness functions used in DI-GP are described in the next section.

2.3.4. Fitness Function Used by DI-GP

The cost of the fitness function used by DI-GP to retrieve hidden anomalies in diffuse media is given by:
$$F = \begin{cases} \alpha\, \xi_{PI} + \beta\, \xi_{mse} + \gamma\, \xi_{dice} & \text{(training)} \\ \xi_{PI} & \text{(testing)} \end{cases} \tag{6}$$
Here, $F$ is the overall cost used to evaluate a candidate solution, and $\xi_{PI}$ is the fitness term used to optimize Equation (5), given by:
$$\xi_{PI} = \left\| M_{prediction} - M_{measurement} \right\|^2$$
where $M_{prediction}$ is the measurement obtained by the DI-GP prediction and $M_{measurement}$ is the measurement obtained at the camera. By constantly updating $M_{prediction}$ using the latest $\chi_{prediction}$ values predicted by the GP function, we incorporate domain knowledge regarding diffuse optical imaging into the algorithm. $\xi_{mse}$ is the MSE loss, given by:
$$\xi_{mse} = \frac{1}{N} \sum_{i=1}^{N} \left( \chi_i - G_i \right)^2$$
where $N$ is the total number of samples, and $\chi_i$ and $G_i$ are the predictions and the labels, respectively. Similarly, $\xi_{dice}$ is the Dice loss, given by:
$$\xi_{dice} = 1 - \frac{2 \sum_{i}^{N} \chi_i \times G_i}{\sum_{i}^{N} \chi_i^2 + \sum_{i}^{N} G_i^2}$$
$\alpha$, $\beta$, and $\gamma$ are constant weights used to vary the influence of each fitness term. For the training process, the optimal values for $\alpha$, $\beta$, and $\gamma$ in this experiment are 0.7, 0.25, and 0.15, respectively. However, when the algorithm is tested on unseen experimental data, $\beta$ and $\gamma$ are set to 0 and $\alpha$ to 1, reducing Equation (6) to $F = \xi_{PI}$. When subjected to unseen simulated data, the DI-GP algorithm yields reconstructions with a mean squared error of 0.029 ± 0.0142. The reconstructed images are shown in Figure 4.
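The combined objective of Equation (6) can be sketched with NumPy as follows. Writing the Dice term as one minus the Dice coefficient follows the standard convention for a loss to be minimized (an assumption here), and the array shapes are purely illustrative.

```python
import numpy as np

def fitness(chi_pred, G, M_pred, M_meas,
            alpha=0.7, beta=0.25, gamma=0.15):
    """Weighted training fitness of Equation (6): the physics-informed
    term xi_PI plus MSE and Dice losses. At test time alpha=1 and
    beta=gamma=0, so the cost reduces to F = xi_PI."""
    xi_pi = np.sum((M_pred - M_meas) ** 2)
    xi_mse = np.mean((chi_pred - G) ** 2)
    # Dice loss as 1 - Dice coefficient (standard convention; assumption).
    # The small epsilon guards against division by zero for empty images.
    xi_dice = 1.0 - 2.0 * np.sum(chi_pred * G) / (
        np.sum(chi_pred ** 2) + np.sum(G ** 2) + 1e-12)
    return alpha * xi_pi + beta * xi_mse + gamma * xi_dice

chi = np.array([1.0, 0.0, 1.0])
F_perfect = fitness(chi, chi, chi, chi)  # perfect prediction: cost near 0
```

Because all three terms vanish for a perfect prediction, the weights only trade off how strongly the physics term, the pixel-wise error, and the edge-sensitive Dice term steer the evolutionary search.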
The next sections detail the experimental setup used to validate the DI-GP algorithm and discuss the results.

2.4. Experimental Setup to Validate DI-GP

This study aims to computationally reconstruct anomalies embedded within the diffuse media using continuous wave (CW) measurements. To achieve this, we build an experimental setup, as illustrated in Figure 5. We use a CW laser source (THORLABS PL252) with a wavelength of 639 nm and an output power of 4.5 mW. The beam is expanded to a spot with a radius of 15 mm using a beam expander, after which it is incident on the melamine foam of 3 cm thickness, containing the anomaly at a depth of 1.5 cm.
As discussed in Section 2.2, the melamine foam is a highly diffuse medium whose measured optical properties correspond to a transport mean free path of 1230 μm, more than one order of magnitude smaller than the sample thickness (approximately 24 transport mean free paths). The hidden objects are placed at the central depth of the sample and are random shapes created using black tape.
The light passes through the material, with a portion absorbed by the concealed anomaly. Subsequently, the transmitted light is captured by a CMOS camera (uEYE UI-2210-C), which has a resolution of 640 × 480 pixels and a pixel size of 10 µm. Throughout the image acquisition process, we selectively utilize a resolution of 300 × 300 pixels, enabling the collection of transmitted light from a single source. A noteworthy aspect of these measurement images is the inability to visually discern the presence or absence of an object embedded inside the medium.
The acquired measurements are subsequently resized to 64 × 64 pixels and fed into the DI-GP algorithm, which reconstructs the location and distribution of the hidden object with precision and efficiency. To gauge the efficacy, the performance of our proposed algorithm is compared to that of the conjugate gradient descent algorithm (CGD), a well-established method for image reconstruction and widely used for comparison [13,27,28,29]. The results are detailed in the next section.

3. Results and Discussion

Figure 6 showcases the reconstructions obtained through the DI-GP algorithm when tested with experimental measurements. By incorporating knowledge specific to diffuse optical imaging into the fitness evaluation process, the DI-GP algorithm benefits from insights derived from the underlying principles of this field. This comprehensive evaluation approach enhances the robustness and reliability of the evolved solutions, as they are evaluated based on both empirical performance metrics and adherence to known principles of diffuse optical imaging. As depicted in Figure 6, the experimental sample contains a variety of hidden anomalies, including a plus sign, two circles, an arrow, a diagonal line, and a trapezoid. The DI-GP algorithm recovers the dominant location and contrast patterns of the concealed objects without prior information regarding their shapes. Fine features and disconnected components can be attenuated when their measurement signatures approach the effective resolution and noise limits of the experimental configuration. This effect is visible in Sample 4, where one of the two circles exhibits reduced recovery, and in Sample 5, where the thin arrow shaft is suppressed while the higher contrast arrow head is retained. Notably, the inference time for this task is less than 1 s.
The effectiveness of the proposed DI-GP algorithm lies in its optimization process, which minimizes the objective function described in Equation (5). Furthermore, by leveraging the fitness function detailed in Equation (6), with parameters α and β set to 0 for experimental evaluation, DI-GP achieves optimal solutions for reconstructing hidden anomalies within the diffuse media. This approach allows the algorithm to adaptively evolve and refine its predictions, leading to accurate reconstructions with high fidelity.
Notably, DI-GP requires only a small dataset for training because it incorporates domain-specific knowledge about DOI, significantly reducing the computational memory and training time needed. This efficient use of data streamlines the training process and enhances the algorithm’s scalability and applicability to diverse imaging scenarios.
The performance of the proposed DI-GP algorithm is evaluated using several key metrics that assess the quality and accuracy of the reconstructed images [13,27]: Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Pearson Correlation Coefficient (PCC). MSE measures the average squared difference between the pixel values of the reconstructed image and the ground truth image. SSIM quantifies the similarity between two images by comparing luminance, contrast, and structure; it ranges from −1 to 1, where 1 indicates perfect similarity. Finally, PCC measures the linear correlation between the pixel values of the reconstructed and ground truth images. The DI-GP algorithm achieves 0.0240 ± 0.0121 in terms of MSE, 0.8108 ± 0.0498 in terms of SSIM, and 0.7919 ± 0.0866 in terms of PCC. The low MSE value indicates minimal average reconstruction error, suggesting that the DI-GP algorithm accurately captures the details of the hidden anomalies. The high SSIM and PCC values likewise reflect strong similarity in contrast, structure, and directional correlation between the reconstructed and ground truth images, affirming the algorithm’s effectiveness in reproducing the true anomalies. The performance of the DI-GP algorithm is compared to that of the analytical CGD method, and the results are detailed in Figure 7.
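The three metrics above can be computed with NumPy alone, as in the sketch below. The SSIM here uses single-window global statistics with the standard constants (K1 = 0.01, K2 = 0.03); the windowed implementation used in the paper may differ in detail.

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def pcc(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient over flattened pixel values."""
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window (global-statistics) SSIM with standard constants.
    Simplified relative to the usual sliding-window formulation."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
gt = rng.random((64, 64))                       # hypothetical ground truth
rec = gt + 0.05 * rng.standard_normal((64, 64))  # hypothetical reconstruction
print(mse(gt, rec), ssim_global(gt, rec), pcc(gt, rec))
```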
The performance enhancement obtained by DI-GP with respect to the analytical solution is given by an improvement ratio defined as:
Λ = met_DI-GP / met_CGD
Here, Λ is the improvement ratio, and met_DI-GP and met_CGD are the values of a given performance metric (from those listed above) obtained using the DI-GP and CGD approaches, respectively.
The significant improvement ratios obtained for the MSE, SSIM, and PCC metrics, namely 0.5138, 1.2015, and 1.2198, respectively, underscore the superior performance of the DI-GP algorithm compared to the analytical solution: a ratio below one for MSE indicates a lower error, while ratios above one for SSIM and PCC indicate higher structural similarity and pixel-wise correlation with the ground truth images, reflecting the ability of DI-GP to better capture the true characteristics of the hidden anomalies within the diffuse media. Figure 8 compares the reconstructions of the DI-GP and the analytical solution.
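As a small worked example of the improvement ratio, the CGD values below are back-computed from the reported DI-GP means and ratios; they are illustrative stand-ins, not measured quantities.

```python
# DI-GP metric means reported in the text.
digp = {"MSE": 0.0240, "SSIM": 0.8108, "PCC": 0.7919}
# CGD values back-computed from the reported ratios (illustrative only).
cgd = {"MSE": 0.0467, "SSIM": 0.6748, "PCC": 0.6492}

# Improvement ratio per metric: Λ = met_DI-GP / met_CGD.
ratios = {m: digp[m] / cgd[m] for m in digp}
print(ratios)  # MSE ratio < 1 (lower error); SSIM, PCC ratios > 1
```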
While the performance metrics demonstrate that DI-GP outperforms conventional analytical reconstructions, it is important to emphasize that deep learning has also delivered impressive advances in diffuse optical imaging when large and diverse training datasets are available. The present work is positioned to complement, rather than compete with, those data-rich approaches by addressing a regime that is frequently encountered in practice, namely limited training data. Consequently, a quantitative deep learning comparison is not included because the available dataset comprises only 30 samples, which is not sufficient to train, validate, and tune deep neural networks in a manner that would be both reliable and fair. Instead, the manuscript demonstrates that high reconstruction quality can be achieved in small-data settings using a physics-guided genetic programming framework that yields an explicit and interpretable expression. This transparency enables direct examination of the learned reconstruction logic and supports confidence in deployment, while maintaining competitive performance against established analytical methods in scenarios where data are scarce and interpretability is a priority.
Figure 9 presents a specific GP individual evolved by the DI-GP algorithm, depicted as a tree-structured expression that transforms DOI measurements into clear reconstructions of hidden objects. This evolved expression can be interpreted as a nonlinear contrast-enhancing filter emphasizing sharp transitions and suppressing background noise. It operates by examining local pixel neighborhoods and applying a sequence of adaptive transformations, allowing it to isolate regions likely to correspond to embedded objects within the diffuse media while disregarding more homogeneous background areas.
The GP individual begins with reference-subtracted measurements as inputs, labeled “Pixel #,” corresponding to intensities at different positions. Numeric constants like 0.5 or 0.75 serve as thresholds or scaling factors. These form the tree’s leaf nodes, supplying the raw data needed to generate the reconstruction. Operators, shown as pink rectangles in Figure 9, perform key transformations. Initially, functions like “min3” compute minimum values across several groups of pixels to identify the darkest regions within a local patch. These are combined with inverted signals from key pixels via the “Compl” (Complement) operator to form a dynamic threshold. This result is then inverted again to amplify areas that stand out from their surroundings. Operators such as “sqr” (Square) emphasize stronger signals and suppress weaker ones, “pDiv” (Protected Division) normalizes while avoiding divide-by-zero errors, and “max9” amplifies key variations. Additional arithmetic functions mix, scale, or reshape pixel values.
At the root of the tree, the “if.then.else” node manages adaptive branching, applying different operations based on internal conditions. This design introduces dynamic gating that fine-tunes the reconstruction depending on signal levels. Together, these operators define a nonlinear mapping from measurements to reconstructed pixels, enabling the function to scan and integrate signals across the image. This process untangles scattering effects, converting noisy input into a clearer, high-contrast output.
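The operators described above can be rendered as a minimal Python sketch. The zero-denominator return value of pDiv (1.0) follows a common GP convention rather than a detail stated in the paper, and the composed expression at the end is a toy illustration in the spirit of Figure 9, not the actual evolved individual.

```python
def p_div(x: float, y: float, eps: float = 1e-9) -> float:
    """Protected division ("pDiv"): returns 1.0 for a near-zero denominator,
    a common GP convention that avoids divide-by-zero errors."""
    return x / y if abs(y) > eps else 1.0

def compl(x: float) -> float:
    """Complement ("Compl"): inverts a signal normalised to [0, 1]."""
    return 1.0 - x

def sqr(x: float) -> float:
    """Square ("sqr"): emphasises strong signals and suppresses weak ones."""
    return x * x

def if_then_else(cond: float, a: float, b: float) -> float:
    """Root gate of the evolved tree: branch on a 0.5 threshold (Table 2)."""
    return a if cond > 0.5 else b

# Toy composition: the complement of a local minimum acts as a dynamic
# threshold that gates a squared, normalised maximum-based response.
patch = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4, 0.3, 0.6, 0.7]   # hypothetical pixels
threshold = compl(min(patch[:3]))                        # min over a pixel group, inverted
response = sqr(p_div(max(patch), 1.0 + max(patch)))      # max-style amplification
output = if_then_else(threshold, response, 0.0)
```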
Rather than acting as a generic filter, the evolved expression encodes a specific heuristic that adaptively responds to spatial intensity patterns typical of hidden anomalies. It behaves similarly to a learned segmentation operator, identifying and boosting subtle signal variations based on their spatial configuration. This level of detailed, functional interpretability reveals how each operator contributes to reconstruction, offering a clear explanation of how predictions are computed, often missing in black-box models. While the evolved expressions can be complex, and further validation on heterogeneous samples is needed, this approach paves the way for DI-GP’s use in clinical imaging. Overall, our study sets an optimistic foundation for the future development of GP in computational diffuse optical imaging.
However, several constraints of the present study limit the current scope and motivate the next research steps. The demonstrated results focus on two-dimensional reconstructions under a fixed acquisition geometry and within the range of optical properties represented in the simulations and phantom experiments. The modelling framework relies on a diffusion-based description of photon transport, which is well suited to highly scattering media but can lose accuracy when diffusion assumptions are weakened, including in strongly heterogeneous tissues. Reconstruction fidelity can also be affected by forward-model mismatch arising from variability in optical parameters, source and detector alignment, and measurement noise statistics relative to those used during training. These considerations motivate future work that expands training and validation across broader distributions of optical properties, geometries, and noise conditions, and that evaluates performance in heterogeneous and anatomically realistic phantoms. Robustness to mismatch can be strengthened through physics-guided fitness designs that incorporate explicit noise models and through data generation protocols that intentionally span experimentally relevant variability. Extension to three-dimensional volumetric imaging would proceed by formulating the forward problem on a three-dimensional domain using diffusion-based or Monte Carlo-based light transport, yielding volumetric sensitivities, namely Jacobian operators, that map voxel-wise absorption perturbations to measurement changes. In this regime, genetic programming can be trained to estimate voxel-wise contrast using local volumetric neighborhoods, together with measurement features acquired under multiple illuminations, multiple source-detector separations, or multiple views. The evolutionary procedure remains unchanged, whereas feature construction, target representation, and physics-guided fitness are defined on a volumetric reconstruction grid.
The dominant practical challenge is the computational scaling of volumetric forward evaluations and sensitivity operations, which motivates parallel implementations and efficient reduced-order representations.

4. Conclusions

In conclusion, the development of DI-GP introduces a novel approach to computational diffuse optical imaging, presenting a potential solution for detecting 2D objects within strongly diffuse media. Leveraging genetic programming and incorporating domain-specific knowledge about DOI, DI-GP offers an interpretable method for achieving accurate reconstructions with reduced dataset requirements and training times. Its capability to reconstruct anomalies from real experimental measurements without prior knowledge of their characteristics suggests promising implications for DOI image reconstruction.
Despite some limitations, DI-GP exhibits favorable performance compared to analytical algorithms such as CGD. Its low MSE and high SSIM and PCC values underscore its efficacy in accurately retrieving information about embedded anomalies. However, translating DI-GP into clinical practice remains a future endeavor, as current experiments have been limited to homogeneous media. Future research should focus on validating DI-GP on inhomogeneous volumetric samples and exploring the integration of neural operators within the GP framework. Such an approach could harness the superior feature extraction capabilities of deep learning while preserving the inherent interpretability of GP, thereby paving the way for more robust and clinically applicable imaging solutions.

Author Contributions

G.M.B. contributed to developing the idea, performed the experiments, performed the simulations, analyzed the data, and prepared the manuscript (initial draft, writing, reviewing, images, and English corrections). A.H. analyzed the simulated and experimental data, built the DI-GP algorithm framework, and prepared the manuscript (reviewing, writing, and English correction). S.A. led the project, acquired funding, advised on designing and performing the experiments, and contributed to the analysis of the results and manuscript preparation (review and corrections). All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the Kreitman School of Advanced Graduate Studies and Ben-Gurion University of the Negev for providing fellowships to continue the research.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time. However, they may be obtained from the corresponding author (GMB) upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El-Sharkawy, Y.H.; Elbasuney, S.; Radwan, S.M. Non-invasive diffused reflected/transmitted signature accompanied with hyperspectral imaging for breast cancer early diagnosis. Opt. Laser Technol. 2024, 169, 110151. [Google Scholar] [CrossRef]
  2. Li, J.; Yang, L.; Hao, Y.; Feng, H.; Ding, W.; Wang, J.; Shang, H.; Ti, G. High spatial resolution diffuse optical tomography based on cross-correlation of chaotic light. Opt. Express 2024, 32, 12496. [Google Scholar] [CrossRef]
  3. Bertolotti, J.; Katz, O. Imaging in complex media. Nat. Phys. 2022, 18, 1008–1017. [Google Scholar] [CrossRef]
  4. Lindell, D.B.; Wetzstein, G. Three-dimensional imaging through scattering media based on confocal diffuse tomography. Nat. Commun. 2020, 11, 4517. [Google Scholar] [CrossRef]
  5. Rosen, J.; Alford, S.; Allan, B.; Anand, V.; Arnon, S.; Arockiaraj, F.G.; Art, J.; Bai, B.; Balasubramaniam, G.M.; Birnbaum, T.; et al. Roadmap on computational methods in optical imaging and holography [invited]. Appl. Phys. B Lasers Opt. 2024, 130, 166. [Google Scholar] [CrossRef]
  6. Tuchin, V.V. Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnostics, 3rd ed.; SPIE: Bellingham, WA, USA, 2015. [Google Scholar] [CrossRef]
  7. Maffeis, G.; Di Sieno, L.; Dalla Mora, A.; Pifferi, A.; Tosi, A.; Conca, E.; Giudice, A.; Ruggeri, A.; Tisa, S.; Flocke, A.; et al. The SOLUS instrument: Optical characterization of the first hand-held probe for multimodal imaging (ultrasound and multi-wavelength time-resolved diffuse optical tomography). Opt. Lasers Eng. 2024, 176, 108075. [Google Scholar] [CrossRef]
  8. Balasubramaniam, G.M.; Arnon, S. Regression-based neural network for improving image reconstruction in diffuse optical tomography. Biomed. Opt. Express 2022, 13, 2006. [Google Scholar] [CrossRef] [PubMed]
  9. Balasubramaniam, G.M.; Manavalan, G.; Hauptman, A.; Arnon, S. Infant head subsurface imaging using high-density diffuse optical tomography and machine learning. In Diffuse Optical Spectroscopy and Imaging IX 29; SPIE: Bellingham, WA, USA, 2023. [Google Scholar] [CrossRef]
  10. Crouzet, C.; Phan, T.; Wilson, R.H.; Shin, T.J.; Choi, B. Intrinsic, widefield optical imaging of hemodynamics in rodent models of Alzheimer’s disease and neurological injury. Neurophotonics 2023, 10, 020601. [Google Scholar] [CrossRef] [PubMed]
  11. Forti, R.M.; Hobson, L.J.; Benson, E.J.; Ko, T.S.; Ranieri, N.R.; Laurent, G.; Weeks, M.K.; Widmann, N.J.; Morton, S.; Davis, A.M.; et al. Non-invasive diffuse optical monitoring of cerebral physiology in an adult swine-model of impact traumatic brain injury. Biomed. Opt. Express 2023, 14, 2432. [Google Scholar] [CrossRef]
  12. Ayaz, H.; Baker, W.B.; Blaney, G.; Boas, D.A.; Bortfeld, H.; Brady, K.; Brake, J.; Brigadoi, S.; Buckley, E.M.; Carp, S.A.; et al. Optical imaging and spectroscopy for the study of the human brain: Status report. Neurophotonics 2022, 9, S24001. [Google Scholar] [CrossRef]
  13. Deng, B.; Gu, H.; Zhu, H.; Chang, K.; Hoebel, K.V.; Patel, J.B.; Kalpathy-Cramer, J.; Carp, S.A. FDU-Net: Deep Learning-Based Three-Dimensional Diffuse Optical Image Reconstruction. IEEE Trans. Med. Imaging 2023, 42, 2439–2450. [Google Scholar] [CrossRef]
  14. Mule, N.; Maffeis, G.; Santangelo, C.; Cubeddu, R.; Pifferi, A.; Panizza, P.; Taroni, P. Evaluation of neoadjuvant chemotherapy-induced changes in contralateral healthy breast tissue through diffuse optical spectroscopy. In Diffuse Optical Spectroscopy and Imaging IX; Contini, D., Hoshi, Y., O’Sullivan, T.D., Eds.; SPIE: Bellingham, WA, USA, 2023; p. 7. [Google Scholar] [CrossRef]
  15. Zhang, W.; Liang, X.; Zhang, X.; Tong, W.; Shi, G.; Guo, H.; Jin, Z.; Tian, J.; Du, Y.; Xue, H. Magnetic-optical dual-modality imaging monitoring chemotherapy efficacy of pancreatic ductal adenocarcinoma with a low-dose fibronectin-targeting Gd-based contrast agent. Eur. J. Nucl. Med. Mol. Imaging 2024, 51, 1841–1855. [Google Scholar] [CrossRef]
  16. Balasubramaniam, G.M.; Wiesel, B.; Biton, N.; Kumar, R.; Kupferman, J.; Arnon, S. Tutorial on the Use of Deep Learning in Diffuse Optical Tomography. Electronics 2022, 11, 305. [Google Scholar] [CrossRef]
  17. Okawa, S.; Hoshi, Y. A Review of Image Reconstruction Algorithms for Diffuse Optical Tomography. Appl. Sci. 2023, 13, 5016. [Google Scholar] [CrossRef]
  18. Pogue, B.W.; McBride, T.O.; Osterberg, U.L.; Paulsen, K.D. Comparison of imaging geometries for diffuse optical tomography of tissue. Opt. Express 1999, 4, 270. [Google Scholar] [CrossRef] [PubMed]
  19. Dehghani, H.; Eames, M.E.; Yalavarthy, P.K.; Davis, S.C.; Srinivasan, S.; Carpenter, C.M.; Pogue, B.W.; Paulsen, K.D. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction. Commun. Numer. Methods Eng. 2009, 25, 711–732. [Google Scholar] [CrossRef] [PubMed]
  20. Arridge, S.R.; Schotland, J.C. Optical tomography: Forward and inverse problems. Inverse Probl. 2009, 25, 123010. [Google Scholar] [CrossRef]
  21. Dehghani, H.; Sri Nivasan, S.; Pogue, B.W.; Gibson, A. Numerical modelling and image reconstruction in diffuse optical tomography. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2009, 367, 3073–3093. [Google Scholar] [CrossRef]
  22. Aspri, A.; Benfenati, A.; Causin, P.; Cavaterra, C.; Naldi, G. Mathematical and Numerical Challenges in Diffuse Optical Tomography Inverse Problems. Discret. Contin. Dyn. Syst.—Ser. S 2024, 17, 421–461. [Google Scholar] [CrossRef]
  23. Lyons, A.; Tonolini, F.; Boccolini, A.; Repetti, A.; Henderson, R.; Wiaux, Y.; Faccio, D. Computational time-of-flight diffuse optical tomography. Nat. Photonics 2019, 13, 575–579. [Google Scholar] [CrossRef]
  24. Vo-Dinh, T. Biomedical Photonics Handbook; CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar] [CrossRef]
  25. Wang, L.V.; Wu, H.-I. Biomedical Optics: Principles and Imaging; 2012. [Google Scholar] [CrossRef]
  26. Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep learning in medical imaging: General overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef]
  27. Yoo, J.; Sabir, S.; Heo, D.; Kim, K.H.; Wahab, A.; Choi, Y.; Lee, S.I.; Chae, E.Y.; Kim, H.H.; Bae, Y.M.; et al. Deep Learning Diffuse Optical Tomography. IEEE Trans. Med. Imaging 2020, 39, 877–887. [Google Scholar] [CrossRef] [PubMed]
  28. Zou, Y.; Zeng, Y.; Li, S.; Zhu, Q. Machine learning model with physical constraints for diffuse optical tomography. Biomed. Opt. Express 2021, 12, 5720. [Google Scholar] [CrossRef] [PubMed]
  29. Ko, Z.Y.G.; Li, Y.; Liu, J.; Ji, H.; Qiu, A.; Chen, N. DOTnet 2.0: Deep learning network for diffuse optical tomography image reconstruction. Intell. Med. 2024, 9, 100133. [Google Scholar] [CrossRef]
  30. Li, S.; Deng, M.; Lee, J.; Sinha, A.; Barbastathis, G. Imaging through glass diffusers using densely connected convolutional networks. Optica 2018, 5, 803. [Google Scholar] [CrossRef]
  31. Sabir, S.; Cho, S.; Kim, Y.; Pua, R.; Heo, D.; Kim, K.H.; Choi, Y.; Cho, S. Convolutional neural network-based approach to estimate bulk optical properties in diffuse optical tomography. Appl. Opt. 2020, 59, 1461. [Google Scholar] [CrossRef]
  32. Ongie, G.; Jalal, A.; Metzler, C.A.; Baraniuk, R.G.; Dimakis, A.G.; Willett, R. Deep Learning Techniques for Inverse Problems in Imaging. IEEE J. Sel. Areas Inf. Theory 2020, 1, 39–56. [Google Scholar] [CrossRef]
  33. Applegate, M.B.; Istfan, R.E.; Spink, S.; Tank, A.; Roblyer, D. Recent advances in high speed diffuse optical imaging in biomedicine. APL Photonics 2020, 5, 040802. [Google Scholar] [CrossRef]
  34. Koza, J.R.; Poli, R. Genetic programming. In Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques; Springer: Boston, MA, USA, 2005; pp. 127–164. [Google Scholar] [CrossRef]
  35. Koza, J.R.; Bennett, F.H.; Andre, D.; Keane, M.A. Genetic programming III: Darwinian invention and problem solving [Book Review]. IEEE Trans. Evol. Comput. 2005, 3, 251–253. [Google Scholar] [CrossRef]
  36. Langdon, W.B.; Poli, R. Foundations of Genetic Programming; Springer: Heidelberg, Germany, 2002. [Google Scholar] [CrossRef]
  37. Khan, A.; Qureshi, A.S.; Wahab, N.; Hussain, M.; Hamza, M.Y. A recent survey on the applications of genetic programming in image processing. Comput. Intell. 2021, 37, 1745–1778. [Google Scholar] [CrossRef]
  38. Cavallaro, C.; Cutello, V.; Pavone, M.; Zito, F. Machine Learning and Genetic Algorithms: A case study on image reconstruction. Knowledge-Based Syst. 2024, 284, 111194. [Google Scholar] [CrossRef]
  39. Talbi, E.-G. Machine Learning into Metaheuristics. ACM Comput. Surv. 2022, 54, 1–32. [Google Scholar] [CrossRef]
  40. Hauptman, A.; Balasubramaniam, G.M.; Arnon, S. Machine Learning Diffuse Optical Tomography Using Extreme Gradient Boosting and Genetic Programming. Bioengineering 2023, 10, 382. [Google Scholar] [CrossRef]
  41. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef]
  42. Hu, T. Genetic Programming for Interpretable and Explainable Machine Learning. In Genetic Programming Theory and Practice XIX; Springer: Singapore, 2023; pp. 81–90. [Google Scholar] [CrossRef]
  43. Salamun, K.; Pavić, I.; Džapo, H.; Đurasević, M. Evolving scheduling heuristics with genetic programming for optimization of quality of service in weakly hard real-time systems. Appl. Soft Comput. 2023, 137, 110141. [Google Scholar] [CrossRef]
  44. Hauptman, A.; Sipper, M. GP-endchess: Using genetic programming to evolve chess endgame players. In Lecture Notes in Computer Science; 2005; Volume 3447, pp. 120–131. [Google Scholar]
  45. Ahvanooey, M.T.; Li, Q.; Wu, M.; Wang, S. A survey of genetic programming and its applications. KSII Trans. Internet Inf. Syst. 2019, 13, 1765–1794. [Google Scholar] [CrossRef]
  46. Jacques, S.L. Optical properties of biological tissues: A review. Phys. Med. Biol. 2013, 58, R37. [Google Scholar] [CrossRef] [PubMed]
  47. Schweiger, M.; Arridge, S. The Toast++ software suite for forward and inverse modeling in optical tomography. J. Biomed. Opt. 2014, 19, 040801. [Google Scholar] [CrossRef] [PubMed]
  48. Wiesel, B.; Arnon, S. Imaging inside highly scattering media using hybrid deep learning and analytical algorithm. J. Biophotonics 2023, 16, e202300127. [Google Scholar] [CrossRef] [PubMed]
  49. Deng, B.; Brooks, D.H.; Boas, D.A.; Lundqvist, M.; Fang, Q. Characterization of structural-prior guided optical tomography using realistic breast models derived from dual-energy x-ray mammography. Biomed. Opt. Express 2015, 6, 2366. [Google Scholar] [CrossRef]
  50. Fang, Q.; Boas, D.A. Tetrahedral mesh generation from volumetric binary and grayscale images. In Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; IEEE: New York, NY, USA, 2009; pp. 1142–1145. [Google Scholar] [CrossRef]
  51. Hohmann, M.; Lengenfelder, B.; Muhr, D.; Späth, M.; Hauptkorn, M.; Klämpfl, F.; Schmidt, M. Direct measurement of the scattering coefficient. Biomed. Opt. Express 2021, 12, 320. [Google Scholar] [CrossRef]
  52. Michels, R.; Foschum, F.; Kienle, A. Optical properties of fat emulsions. Opt. Express 2008, 16, 5907. [Google Scholar] [CrossRef] [PubMed]
  53. Fang, Q.; Boas, D.A. Monte Carlo Simulation of Photon Migration in 3D Turbid Media Accelerated by Graphics Processing Units. Opt. Express 2009, 17, 20178. [Google Scholar] [CrossRef] [PubMed]
  54. Yuan, Y.; Yan, S.; Fang, Q. Light transport modeling in highly complex tissues using the implicit mesh-based Monte Carlo algorithm. Biomed. Opt. Express 2021, 12, 147. [Google Scholar] [CrossRef]
  55. Ben Yedder, H.; Cardoen, B.; Hamarneh, G. Deep learning for biomedical image reconstruction: A survey. Artif. Intell. Rev. 2021, 54, 215–251. [Google Scholar] [CrossRef]
  56. Koza, J.R. Genetic programming as a means for programming computers by natural selection. Stat. Comput. 1994, 4, 87–112. [Google Scholar] [CrossRef]
  57. Fortin, F.A.; De Rainville, F.M.; Gardner, M.A.; Parizeau, M.; Gagńe, C. DEAP: Evolutionary algorithms made easy. J. Mach. Learn. Res. 2012, 13, 2171–2175. [Google Scholar]
  58. Eicher, W.; Puttkamer, E.V. A new strategy for interpreting LISP. Microprocess. Microprogram 1986, 18, 81–88. [Google Scholar] [CrossRef]
  59. Fang, Y.; Li, J. A Review of Tournament Selection in Genetic Programming. In Advances in Computation and Intelligence; Springer: Heidelberg, Germany, 2010; pp. 181–192. [Google Scholar] [CrossRef]
  60. Boldi, R.; Bao, A.; Briesch, M.; Helmuth, T.; Sobania, D.; Spector, L.; Lalejini, A. Analyzing the Interaction Between Down-Sampling and Selection. arXiv 2023. [Google Scholar] [CrossRef]
  61. Poli, R.; Langdon, W.B.; McPhee, N.F. A Field Guide to Genetic Programing; Wyvern: Morecambe, UK, 2008. [Google Scholar]
  62. Sipper, M.; Fu, W.; Ahuja, K.; Moore, J.H. Investigating the parameter space of evolutionary algorithms. BioData Min. 2018, 11, 2. [Google Scholar] [CrossRef] [PubMed]
  63. Dice, L.R. Measures of the Amount of Ecologic Association Between Species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
Figure 1. Schematic of the light transport simulations performed in this study. The red arrow within the volumetric mesh denotes the direction of photon propagation.
Figure 2. Schematic of a generic Genetic Programming algorithm.
Figure 3. Schematic of the DI-GP workflow. The PI fitness (physics-informed fitness) is used in conjunction with MSE and DICE fitness functions in the training process, where DI-GP evolves a function to reconstruct hidden anomalies inside the sample. However, only the PI fitness (see Equation (7)) is used when the algorithm is tested using unseen experimental data.
Figure 4. Simulation Results: Image reconstruction using DI-GP on unseen simulated DOI data. The black boxes on the left are the ground truths, and the green boxes show the reconstructions. The color bar depicts the contrast values of the images. (a,b) show reconstructions for two different samples. The blue borders in the retrieved images show the edges of the ground truth.
Figure 5. Experimental setup used to detect anomalies hidden inside diffuse media.
Figure 6. Experimental results: Image reconstruction results from the DI-GP algorithm. The first row (in dark blue boxes) shows the recorded images at the CMOS camera. The second row (in light blue boxes) shows the reference-subtracted data. The third row (in black boxes) shows the hidden anomalies inside the diffuse media. The fourth row (in green boxes) shows the retrieved object images obtained using the DI-GP algorithm. The blue borders in the retrieved images show the edges of the ground truth.
Figure 7. Boxplots comparing the performance of the proposed DI-GP algorithm to that of the analytical method. (a–c) show the performance of both algorithms in terms of MSE, SSIM, and PCC, respectively.
Figure 8. Comparison of image reconstructions using the proposed approach (green boxes) and the analytical approach (red boxes). The blue borders in the retrieved images show the edges of the ground truth. The contrast map on the right compares the pixel intensities in the reconstructions to the ground truth (along the horizontal lines shown in the reconstructions). (a) and (b) represent examples of two reconstructed images.
Figure 9. An example of a GP individual evolved by the DI-GP algorithm.
Table 1. Pseudocode showing the generation of a predicted image.
function GenerateOutputImage(M, GP_individual, N) -> X_prediction:
    X_prediction = createImageWithSameSizeAs(M)
    for each row in M:
        for each column in M:
            environment = extractEnvironment(M, row, column, N)
            if isValidEnvironment(environment):
                outputPixel = applyGPIndividual(environment, GP_individual)
                setPixel(X_prediction, row, column, outputPixel)
            else:
                continue
    return X_prediction
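The pseudocode of Table 1 can be rendered as the runnable sketch below. The border-validity rule, neighbourhood extraction, and the toy "individual" are illustrative stand-ins for the unspecified details of the actual implementation.

```python
import numpy as np
from typing import Callable

def generate_output_image(M: np.ndarray,
                          gp_individual: Callable[[np.ndarray], float],
                          N: int) -> np.ndarray:
    """Slide an N x N window over the measurement image M and apply the
    evolved GP expression to each neighbourhood (Table 1). Border pixels
    whose window would fall outside M are skipped, mirroring the
    isValidEnvironment check; their outputs remain zero."""
    X = np.zeros_like(M, dtype=float)
    h = N // 2
    for r in range(M.shape[0]):
        for c in range(M.shape[1]):
            if h <= r < M.shape[0] - h and h <= c < M.shape[1] - h:
                env = M[r - h:r + h + 1, c - h:c + h + 1]
                X[r, c] = gp_individual(env)
    return X

# Toy "individual": contrast of the centre pixel against the local minimum.
toy = lambda env: env[env.shape[0] // 2, env.shape[1] // 2] - env.min()
pred = generate_output_image(np.random.rand(16, 16), toy, N=3)
print(pred.shape)  # (16, 16)
```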
Table 2. List of inner functions and terminals utilized in the GP algorithm.
Category: Inner Functions
  Arithmetic (binary): Add(x, y); Subtract(x, y); Multiply(x, y); ProtectedDivision(x, y)
  Arithmetic (unary): Minus(x) = −x; Square(x) = x²; Tri(x) = x³; SquareRoot(x) = sign(x) × √|x|; Complement(x) = 1 − x
  Minimum and maximum: Min3(x1, x2, …, x9); Min6(x1, x2, …, x9); Min9(x1, x2, …, x9); Max3(x1, x2, …, x9); Max6(x1, x2, …, x9); Max9(x1, x2, …, x9)
  Conditional: If_Then_Else(x, y, z) = y if x > 0.5 else z; If_Larger(x, y, z, t) = z if x > y else t; If_In_Range(x, y, z, t, w) = t if |x − y| < z else w

Category: Terminals
  Pixels: Pixel1, …, Pixel(N×N)
  Constants: 0.0, 0.25, 0.5, 0.75, 1.0
