Article

Dual-Head Pix2Pix Network for Material Decomposition of Conventional CT Projections with Photon-Counting Guidance

1 School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an 710026, China
2 Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an 710071, China
3 International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment, School of Life Science and Technology, Xidian University, Xi’an 710071, China
4 Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
5 School of Electrical Engineering and Automation, Suzhou University of Technology, Suzhou 215500, China
6 College of Electrical Engineering, Henan University of Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 5960; https://doi.org/10.3390/s25195960
Submission received: 27 August 2025 / Revised: 17 September 2025 / Accepted: 23 September 2025 / Published: 25 September 2025

Abstract

Material decomposition in X-ray imaging is essential for enhancing tissue differentiation and reducing the radiation dose, but the clinical adoption of photon-counting detectors (PCDs) is limited by their high cost and technical complexity. To address this, we propose Dual-head Pix2Pix, a PCD-guided deep learning framework that enables simultaneous iodine and bone decomposition from single-energy X-ray projections acquired with conventional energy-integrating detectors. The model was trained and tested on 1440 groups of energy-integrating detector (EID) projections with their corresponding iodine/bone decomposition images. Experimental results demonstrate that the Dual-head Pix2Pix outperforms baseline models. For iodine decomposition, it achieved a mean absolute error (MAE) of 5.30 ± 1.81, representing an ~10% improvement over Pix2Pix (5.92) and a substantial advantage over CycleGAN (10.39). For bone decomposition, the MAE was reduced to 9.55 ± 2.49, an ~6% improvement over Pix2Pix (10.18). Moreover, Dual-head Pix2Pix consistently achieved the highest MS-SSIM, PSNR, and Pearson correlation coefficients across all benchmarks. In addition, we performed a cross-domain validation using projection images acquired from a conventional EID-CT system. The results show that the model successfully achieved the effective separation of iodine and bone in this new domain, demonstrating a strong generalization capability beyond the training distribution. In summary, Dual-head Pix2Pix provides a cost-effective, scalable, and hardware-friendly solution for accurate dual-material decomposition, paving the way for the broader clinical and industrial adoption of material-specific imaging without requiring PCDs.

1. Introduction

In computed tomography (CT), material decomposition has gained increasing attention beyond conventional morphological evaluation [1,2,3,4], as it enables the differentiation and quantification of specific substances such as iodine and calcium [5]. Iodine–bone separation plays an important role in vascular enhancement [6], bone mineral density assessment [7], and pathological calcification detection, thereby improving diagnostic accuracy and clinical decision-making [8,9,10]. Beyond medical applications, this technique has also been widely applied in industrial contexts. For example, dual-energy methods have been used to detect semi-precious beryl in surrounding rock, and multi-energy material separation combined with neural networks such as the YOLO algorithm has been employed to segment particles of different rock types [11,12].
The development of CT detectors has progressed from energy-integrating detectors (EIDs) to dual-energy CT (DECT) and more recently to photon-counting detectors (PCDs) [13]. EIDs lose energy information during acquisition, which limits their capability for material decomposition [14]. DECT alleviates this limitation to some extent but still suffers from an increased radiation dose, susceptibility to artifacts, and limited quantitative accuracy [15]. In contrast, PCDs directly detect individual X-ray photons while preserving their spectral information, offering significant advantages in energy resolution, noise suppression, and multi-material decomposition [9,16,17,18]. These benefits have been demonstrated in preclinical and early clinical studies. However, the high cost and technical complexity of PCDs have restricted their large-scale clinical adoption [19], leaving EIDs as the mainstream in current practice. This situation highlights the importance of improving the material decomposition performance on EID systems [1].
With the rapid development of deep learning, especially generative adversarial networks [20,21,22,23], new opportunities have emerged for learning the complex mapping between projection or image data and decomposed material maps. Most existing studies focus on image-domain decomposition [1,24,25]. However, such methods are highly dependent on spectral fidelity, are prone to artifacts, and often lack quantitative accuracy. By contrast, projection-domain data preserve richer structural and spectral information. Direct modeling in the projection domain introduces physical constraints at an early stage, which can reduce error accumulation, suppress beam-hardening effects, and compensate for detector- and acquisition-related distortions.
Projection-domain decomposition in PCD-CT is typically performed by representing measured projection data as linear combinations of basis materials (e.g., soft tissue, bone, and iodine) [26], followed by independent reconstruction. Compared with image-domain approaches, this strategy provides stronger physical consistency [27]. Prior studies have proposed regularization and filtering strategies to mitigate noise propagation [28], as well as multi-stage or hierarchical optimization frameworks to enhance quantitative accuracy and robustness against artifacts [29]. More recently, machine learning-based approaches have been introduced to fully exploit the high-dimensional features embedded in projection data while retaining physical constraints [30]. Experimental results indicate that projection-domain methods can explicitly compensate for scattering, noise, and detector response while improving reconstruction and segmentation quality [27]. Furthermore, projection-domain decomposition has shown superior performance in both non-K-edge and K-edge material quantification, suggesting broad potential for clinical applications. Leveraging the high-quality decomposition results obtained from PCDs provides a promising pathway to enhance material decomposition in EID systems, which serves as the central motivation of this study.
In this study, we first establish a PCD-CT-based data acquisition platform to construct a dedicated iodine–bone separation projection dataset. Building upon this, we design an enhanced Pix2Pix-based generative model with a dual-head output architecture to enable accurate mapping from single-channel projections to dual-material decomposition. The effectiveness of the proposed approach is further evaluated using projection data from conventional EID systems.

2. System Setup and Data Acquisition

2.1. System Architecture

To enable both supervised training and cross-domain evaluation, we constructed a dual-energy projection acquisition system based on a PCD-CT platform and additionally acquired data using a conventional micro-CT system equipped with an EID. The PCD-CT system is illustrated in Figure 1.
Figure 1a presents a schematic diagram of our PCD-CT system, highlighting key geometric parameters such as the source-to-detector distance and imaging field of view. Figure 1b shows a photograph of the actual system setup. This PCD-CT system integrates an RZX-8016D X-ray source (maximum 80 kV, 1000 μA) and an XCounter FX3 PCD (CdTe-CMOS sensor, XCounter AB, Sweden). The FX3 detector features a dual-energy CdTe photon-counting sensor with a pixel size of 100 μm, enabling energy-resolved data acquisition. To acquire multi-angle projection data, the system operates in TDS (Translation During Scan) mode, in which the detector is translated horizontally using a high-precision PSA150-11-X motorized stage, allowing for high-resolution projection imaging at multiple viewing angles. For comparison, a conventional micro-CT system (Figure 1c,d) was used to acquire projection data using a Dexela 1512 flat-panel detector (Varex Imaging) with a pixel size of 74.8 μm. This EID-based system performs energy integration over the entire X-ray spectrum and does not support spectral resolution. To maintain spectral consistency across modalities, the same X-ray source was integrated into both the PCD-based and EID-based imaging systems.

2.2. Image Acquisition and Preprocessing

All animal experiments were conducted in strict accordance with the ethical regulations and approval procedures of the Animal Ethics Committee of Xi’an Medical University. Four female BALB/c nude mice (6–8 weeks old, 18–20 g) were used for in vivo imaging. Prior to scanning, each mouse was anesthetized via an intraperitoneal injection of 100 μL chloral hydrate. Then, 150 μL of iohexol contrast agent (150 mg/mL) was administered via the tail vein. Following injection, the mice were immobilized in a custom-designed acrylic cylindrical holder to minimize motion artifacts during imaging. Each mouse underwent a full 360° scan with 1° angular increments, resulting in 360 projection images per subject. In total, 1440 projection images were collected from four mice. Based on predefined energy thresholds, the acquired data were processed as follows:
The PCD records the energy of each detected photon, and according to the XCounter FX3 instructions, we set energy thresholds to obtain projections at different energy levels. Projections acquired at a 10 keV threshold were used as conventional projections (ConvProj). Projections at a 30 keV threshold served as high-energy projections. Material decomposition was performed using the energy silhouette subtraction method [31], where subtraction between energy channels yielded bone-preserving projections (BoneProj) and iodine-preserving projections (IodineProj).
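To make the subtraction step concrete, the sketch below illustrates one possible form of threshold-based dual-energy subtraction, assuming the thresholded projections are already log-normalized attenuation images; the weighting factors w_b and w_i and the sign conventions are illustrative assumptions, not the calibrated values used in this work (which follow the dual-energy subtraction method of [31]).

```python
# Minimal sketch of dual-energy subtraction on thresholded PCD projections.
# Assumes conv_proj (>= 10 keV threshold) and high_proj (>= 30 keV threshold)
# are log-normalized attenuation images; w_b / w_i are hypothetical weights
# that in practice would be calibrated for the source spectrum and detector.
import numpy as np

def dual_energy_split(conv_proj: np.ndarray, high_proj: np.ndarray,
                      w_b: float = 0.6, w_i: float = 0.4):
    low_band = conv_proj - high_proj                             # photons in the 10-30 keV window
    bone_proj = np.clip(high_proj - w_i * low_band, 0, None)     # weighted subtraction suppresses iodine
    iodine_proj = np.clip(low_band - w_b * high_proj, 0, None)   # weighted subtraction suppresses bone
    return bone_proj, iodine_proj
```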
All projection images were saved in a standard 8-bit grayscale PNG format, with a resolution of 1024 × 512 pixels and a pixel size of 100 μm. For each subject, 80% of the images were used for model training and 20% for testing. To further evaluate the cross-domain generalization capability of the proposed method, additional projection data were collected using the conventional energy-integrating micro-CT system described in Section 2.1. The original images (1536 × 1944 pixels, pixel size 74.8 μm) were preprocessed and center cropped to 1024 × 512 pixels, matching the resolution of the training dataset. These external data were used to assess the material decomposition performance of the trained model on a different imaging domain.
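As a concrete illustration of this preprocessing, the sketch below center-crops an EID projection to the 1024 × 512 training resolution and rescales it to 8-bit grayscale; the file names and the min-max normalization are assumptions for illustration, not the exact pipeline used.

```python
# Sketch of the EID preprocessing described above: center-crop to 1024 x 512
# and save as an 8-bit grayscale PNG. File paths and min-max rescaling are
# illustrative assumptions.
import numpy as np
from PIL import Image

def center_crop(img: np.ndarray, out_h: int = 1024, out_w: int = 512) -> np.ndarray:
    h, w = img.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]

def to_uint8(img: np.ndarray) -> np.ndarray:
    lo, hi = float(img.min()), float(img.max())
    return ((img - lo) / max(hi - lo, 1e-8) * 255).astype(np.uint8)

raw = np.asarray(Image.open("eid_projection.tif"), dtype=np.float32)  # hypothetical input file
Image.fromarray(to_uint8(center_crop(raw))).save("eid_projection_1024x512.png")
```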

3. Methodology

As shown in Figure 2, the input imgConv from ConvProj, along with a random noise vector z, is fed into the generator G. The generator simultaneously produces two output images, imgBone′ and imgIodine′, corresponding to the bone domain (BoneProj) and the iodine domain (IodineProj), respectively. These generated images, as well as the real imgBone and imgIodine, are each concatenated with the original input imgConv and fed into the corresponding discriminators, Db and Dc, for real/fake classification. The corresponding loss components are indicated with dashed lines in Figure 2 and are described in detail in Section 3.3. In the following, we provide a detailed description of each component of the proposed network.

3.1. Dual-Head Pix2Pix Generative Network Architecture

This study proposes an enhanced generator architecture based on U-Net, termed Dual-head Pix2Pix, which adopts an encoder–decoder framework (Figure 3). The encoder maps the input image into a low-dimensional latent representation, while the decoder reconstructs the target output via transposed convolutions. Compared with the conventional U-Net, the proposed architecture introduces dual-decoder branches, each responsible for independently generating images corresponding to BoneProj (bone structures) and IodineProj (iodine contrast agent), thereby enabling the simultaneous prediction of multiple material components.
Figure 2. Overview of the Dual-head Pix2Pix network.
Figure 3. Dual-head image generator.
As illustrated in Figure 3, the input comprises the imgConv and random noise z. These inputs are first passed through a shared encoder to extract latent features, which are then fed into two parallel decoders. Decoder Gb is responsible for generating the output imgBone′ corresponding to BoneProj, while decoder Gc generates the output imgIodine′ for IodineProj. To retain high-frequency spatial details and improve the reconstruction accuracy, skip connections (depicted as dashed lines in Figure 3) are implemented between corresponding layers of the encoder and each decoder.
This dual-decoder architecture enables the generator to effectively learn a mapping from a single input image, conditioned on a specific label, to two distinct image domains, making it particularly well-suited for material decomposition tasks in CT projection imaging.
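A minimal PyTorch sketch of this dual-decoder layout is given below. The depth, channel widths, and normalization are illustrative choices rather than the paper's exact configuration, and, as in the original Pix2Pix [32], stochasticity is often realized via dropout rather than an explicit noise tensor, so the noise input z is omitted here.

```python
# Sketch of a dual-head U-Net generator: one shared encoder, two independent
# decoders (bone and iodine), each with skip connections to the encoder.
import torch
import torch.nn as nn

def down(c_in, c_out):  # halves spatial resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, 2, 1),
                         nn.InstanceNorm2d(c_out), nn.LeakyReLU(0.2))

def up(c_in, c_out):    # doubles spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, 2, 1),
                         nn.InstanceNorm2d(c_out), nn.ReLU())

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.up1, self.up2, self.up3 = up(256, 128), up(256, 64), up(128, 32)
        self.out = nn.Sequential(nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())

    def forward(self, z, skips):
        s1, s2, s3 = skips                    # encoder features, shallow to deep
        x = self.up1(z)
        x = self.up2(torch.cat([x, s3], 1))   # skip connection (dashed lines in Figure 3)
        x = self.up3(torch.cat([x, s2], 1))
        return self.out(torch.cat([x, s1], 1))

class DualHeadGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2 = down(1, 32), down(32, 64)
        self.d3, self.d4 = down(64, 128), down(128, 256)
        self.dec_bone, self.dec_iodine = Decoder(), Decoder()

    def forward(self, img_conv):
        s1 = self.d1(img_conv); s2 = self.d2(s1); s3 = self.d3(s2); z = self.d4(s3)
        return self.dec_bone(z, (s1, s2, s3)), self.dec_iodine(z, (s1, s2, s3))
```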

3.2. Dual Discriminator Design Based on PatchGAN

The discriminator in the proposed Dual-head Pix2Pix network adopts the PatchGAN architecture from the original Pix2Pix framework [32], which divides the input image into multiple local patches and performs a real/fake classification on each patch independently. The key difference in our design is the introduction of two separate discriminators, Db and Dc (Figure 2), which independently evaluate the generator’s outputs corresponding to BoneProj and IodineProj, respectively. Compared with a global image-level discriminator, PatchGAN uses significantly fewer parameters. This improves the training stability and convergence, especially for high-resolution images. Its low computational complexity allows for the use of multiple discriminators without substantially increasing the overall computational burden. This effectively reduces both the training cost and inference time. As illustrated in Figure 2, both the generated image and the Ground Truth are concatenated with the original input image (imgConv) along the channel dimension before being fed into the discriminator. This design implements conditional discrimination, aiming to enable the discriminator to evaluate whether the generated image is both “realistic” and “consistent” with the conditional input rather than merely assessing the authenticity of a single image. Consequently, this approach ensures that the generator produces outputs that are not only visually plausible but also semantically aligned with the input condition.
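The following sketch shows one conditional PatchGAN discriminator under standard Pix2Pix conventions [32]; the layer widths and depth are assumptions, and two independent instances would serve as Db and Dc.

```python
# Sketch of a conditional PatchGAN discriminator: the condition (imgConv) and
# the candidate image are concatenated channel-wise, and the network emits one
# real/fake logit per local patch rather than a single global score.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch: int = 2):               # 1 condition channel + 1 candidate channel
        super().__init__()
        layers, c = [], in_ch
        for c_out, stride in [(64, 2), (128, 2), (256, 2), (512, 1)]:
            layers += [nn.Conv2d(c, c_out, 4, stride, 1), nn.LeakyReLU(0.2)]
            c = c_out
        layers += [nn.Conv2d(c, 1, 4, 1, 1)]           # per-patch real/fake logits
        self.net = nn.Sequential(*layers)

    def forward(self, condition, candidate):
        # Conditional discrimination: judge realism *and* consistency with imgConv
        return self.net(torch.cat([condition, candidate], dim=1))

d_bone, d_iodine = PatchDiscriminator(), PatchDiscriminator()  # D_b and D_c
```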

3.3. Loss Function Design

The loss function in the Dual-head Pix2Pix network comprises three components (Figure 2): (1) an adversarial loss $L_1^*$ to ensure realism of the generated outputs, (2) a reconstruction loss $L_2^*$ to enforce pixel-wise similarity with the Ground Truth, and (3) a mutual exclusivity loss $L_3^*$ to exploit the complementary nature of iodine and bone decomposition images.
Adversarial Loss $L_1^*$
Dual-head Pix2Pix introduces two adversarial branches by modifying the output channels. The adversarial loss for BoneProj is formulated as follows:
$$L_{1b}^* = \mathbb{E}_{a,b}\left[\log D_b(a,b)\right] + \mathbb{E}_{a,z}\left[\log\left(1 - D_b\left(a, G_b(a,z)\right)\right)\right]$$
Similarly, for IodineProj,
$$L_{1c}^* = \mathbb{E}_{a,c}\left[\log D_c(a,c)\right] + \mathbb{E}_{a,z}\left[\log\left(1 - D_c\left(a, G_c(a,z)\right)\right)\right]$$
The total adversarial loss is the sum of the two branches:
$$L_1^* = \arg\min_{G_b}\max_{D_b} L_{1b}^* + \arg\min_{G_c}\max_{D_c} L_{1c}^*$$
Reconstruction Loss $L_2^*$
To ensure the generated images closely resemble the Ground Truth, an $L_1$ loss is employed. For BoneProj,
$$L_{2b}^* = \mathbb{E}_{a,b,z}\left[\left\|b - G_b(a,z)\right\|_1\right]$$
And for IodineProj,
$$L_{2c}^* = \mathbb{E}_{a,c,z}\left[\left\|c - G_c(a,z)\right\|_1\right]$$
The total reconstruction loss is
$$L_2^* = L_{2b}^* + L_{2c}^*$$
Mutual Exclusivity Loss $L_3^*$
To enforce the separation of features between iodine and bone components, a mutual exclusivity loss is introduced. First, the $L_1$ difference between each head’s output and the opposite material’s Ground Truth is used as a base metric:
$$L_{3b}^* = \mathbb{E}_{a,b,z}\left[\left\|c - G_b(a,z)\right\|_1\right]$$
$$L_{3c}^* = \mathbb{E}_{a,c,z}\left[\left\|b - G_c(a,z)\right\|_1\right]$$
To ensure positivity and stability, a Sigmoid function is applied:
$$L_3^* = \left(1 - \mathrm{Sigmoid}\left(L_{3b}^*\right)\right) + \left(1 - \mathrm{Sigmoid}\left(L_{3c}^*\right)\right)$$
When the predicted outputs for the mutually exclusive iodine and bone channels are well-separated, this loss approaches zero, encouraging disentangled material decomposition.
Objective Function
The overall objective function of Dual-head Pix2Pix is defined as follows:
$$L^* = \lambda_1 L_1^* + \lambda_2 L_2^* + \lambda_3 L_3^*$$
where $\lambda_i$ ($i = 1, 2, 3$) are the weighting coefficients for each loss component. In our experiments, we set $\lambda_1 = 1$, $\lambda_2 = 100$, and $\lambda_3 = 100$. The mutual exclusivity loss $L_3^*$ acts as a reward term that guides the generator to improve the contrast between the decomposition outputs, enhancing the overall material separation performance.
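Putting the three terms together, a sketch of the generator-side objective is shown below, assuming a binary cross-entropy adversarial form and the discriminator interface from the previous sketch; the exact formulation used in training may differ in detail.

```python
# Sketch of the combined generator objective L* = λ1·L1* + λ2·L2* + λ3·L3*.
import torch
import torch.nn.functional as F

def generator_loss(d_b, d_c, img_conv, fake_bone, fake_iodine, real_bone, real_iodine,
                   lam1=1.0, lam2=100.0, lam3=100.0):
    logits_b, logits_c = d_b(img_conv, fake_bone), d_c(img_conv, fake_iodine)
    # (1) adversarial term: try to make both discriminators label fakes as real
    l1 = F.binary_cross_entropy_with_logits(logits_b, torch.ones_like(logits_b)) \
       + F.binary_cross_entropy_with_logits(logits_c, torch.ones_like(logits_c))
    # (2) reconstruction term: L1 distance to the matching Ground Truth
    l2 = F.l1_loss(fake_bone, real_bone) + F.l1_loss(fake_iodine, real_iodine)
    # (3) mutual exclusivity term: reward distance to the *other* material's
    # Ground Truth; sigmoid squashing keeps the term in (0, 1), and it decays
    # toward zero as the two material channels become well separated
    l3 = (1 - torch.sigmoid(F.l1_loss(fake_bone, real_iodine))) \
       + (1 - torch.sigmoid(F.l1_loss(fake_iodine, real_bone)))
    return lam1 * l1 + lam2 * l2 + lam3 * l3
```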

3.4. Training Strategy and Parameter Settings

All experiments were conducted on a computing platform running Ubuntu 22.04, equipped with an AMD Ryzen 5950X CPU, 128 GB of RAM, and an NVIDIA GeForce RTX 4090 GPU. The neural network used in this study was implemented using the PyTorch 1.5+ framework, with structural modifications carried out via the MMEditing toolbox from the open-source OpenMMLab project. Both the generator and the discriminators were optimized using the Adam optimizer. During training, each batch was constructed by randomly sampling data from both the source and target domains. The batch size was set to 4, and the training was conducted for a total of 333 epochs.
To improve training stability and accelerate convergence, a warm-up learning rate strategy was employed. Specifically, the learning rate was initialized at 0 and linearly increased over the first 10,000 iterations to a peak value of 9 × 10−4. After the warm-up phase, a linear decay strategy with a decay factor of 0.9 was applied, and the minimum learning rate was constrained to 5 × 10−6 to maintain stability throughout training.
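A sketch of this schedule using PyTorch's LambdaLR is given below; the number of decay iterations is an assumption, since the paper specifies only the warm-up length, peak rate, decay factor, and floor.

```python
# Sketch of the warm-up-then-decay schedule: linear ramp from 0 to 9e-4 over
# the first 10,000 iterations, then linear decay by a factor of 0.9, floored
# at 5e-6. decay_steps is a hypothetical horizon for the decay phase.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for the network parameters
opt = torch.optim.Adam(params, lr=9e-4)         # peak learning rate after warm-up

def lr_lambda(step, warmup=10_000, peak=9e-4, floor=5e-6, decay_steps=100_000):
    if step < warmup:
        return step / warmup                                     # linear warm-up from 0
    frac = 1.0 - 0.9 * min((step - warmup) / decay_steps, 1.0)   # linear decay by 0.9
    return max(frac, floor / peak)                               # clamp at the minimum LR

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
# per training iteration: opt.step(); sched.step()
```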

3.5. Baselines and Metrics

Both qualitative and quantitative evaluations were performed to assess the effectiveness of the proposed method. To further validate its generalization capability, we conducted comparative experiments against multiple generative models and additionally evaluated the performance on external projection images acquired from an EID-CT system. To comprehensively assess the quality of material decomposition images generated by Dual-head Pix2Pix, the following quantitative metrics were used:
Mean Absolute Error (MAE)
The MAE measures the average absolute difference between the generated image and the Ground Truth:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$
Multi-Scale Structural Similarity Index (MS-SSIM)
MS-SSIM evaluates the structural similarity over multiple image scales, defined as follows:
$$\mathrm{MS\text{-}SSIM} = \left[l_M(\hat{y}, y)\right]^{\alpha_M}\prod_{j=1}^{M}\left[c_j(\hat{y}, y)\right]^{\beta_j}\left[s_j(\hat{y}, y)\right]^{\gamma_j}$$
Pearson Correlation Coefficient (Pearson-R)
This metric evaluates the linear correlation between the predicted and reference images:
$$r = \frac{\sum_{i=1}^{n}\left(\hat{y}_i - \bar{\hat{Y}}\right)\left(y_i - \bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(\hat{y}_i - \bar{\hat{Y}}\right)^2}\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{Y}\right)^2}}$$
Peak Signal-to-Noise Ratio (PSNR)
PSNR reflects the similarity between two images based on the mean squared error (MSE):
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[y(i,j) - \hat{y}(i,j)\right]^2$$
Given the maximum pixel value $MAX_y$, PSNR is defined as follows:
$$\mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{MAX_y^2}{\mathrm{MSE}}\right)$$
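For reference, these scalar metrics translate directly into code; the sketch below computes MAE, PSNR (with MAX_y = 255 for 8-bit images), and Pearson-R with NumPy, while MS-SSIM would typically come from an existing implementation (e.g., the pytorch-msssim package) rather than being re-derived here.

```python
# Sketch of the per-image evaluation metrics defined above.
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - gt)))

def psnr(pred: np.ndarray, gt: np.ndarray, max_y: float = 255.0) -> float:
    mse = float(np.mean((gt - pred) ** 2))          # mean squared error
    return 10.0 * float(np.log10(max_y ** 2 / mse))

def pearson_r(pred: np.ndarray, gt: np.ndarray) -> float:
    p = pred.ravel() - pred.mean()                  # centered predictions
    g = gt.ravel() - gt.mean()                      # centered references
    return float((p @ g) / (np.sqrt((p @ p) * (g @ g)) + 1e-12))
```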
Statistical analysis was performed using paired t-tests to compare the proposed method with the baseline models. Results were considered statistically significant at p < 0.05 and extremely significant at p < 0.0001. To further evaluate the material-specific performance, we extracted line profiles across selected regions of interest (ROIs). Sampling lines were drawn on the raw image a in areas known to contain only bone or only iodine. Corresponding lines were then evaluated on the bone image b and iodine image c to analyze the material separation quality and intensity preservation.
In addition, to verify the accuracy of iodine–bone separation, regions of interest were selected along the line profiles in the iodine and bone domains, and the pixel intensity distribution curves were plotted along the specified paths.
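A minimal sketch of such a line-profile comparison is shown below; the row/column coordinates are hypothetical, and the random arrays stand in for the loaded input a, bone output b, and iodine output c.

```python
# Sketch of the line-profile check: sample intensities along a horizontal
# segment through a known single-material region and compare across images.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a, b, c = (rng.random((512, 1024)) for _ in range(3))  # placeholders for a, b, c

def line_profile(img: np.ndarray, row: int, c0: int, c1: int) -> np.ndarray:
    """Pixel intensities sampled along one row segment."""
    return img[row, c0:c1].astype(float)

row, c0, c1 = 300, 100, 400              # hypothetical ROI through a bone-only area
for name, img in [("input a", a), ("bone b", b), ("iodine c", c)]:
    plt.plot(line_profile(img, row, c0, c1), label=name)
plt.xlabel("position along line (px)"); plt.ylabel("intensity")
plt.legend(); plt.show()
```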

4. Results

To comprehensively evaluate the proposed method, the Results section first presents comparisons with different models to assess the relative performance, followed by cross-domain testing on traditional EID projection images to examine the model’s generalization capability. Finally, an ablation study on the loss functions in the Dual-head Pix2Pix network is conducted.

4.1. Comparison with Different Models

To comprehensively validate the superiority of the proposed method, we conducted comparative experiments using the same training and testing datasets on CycleGAN, Pix2Pix, and the proposed Dual-head Pix2Pix network.
Table 1 presents quantitative results for iodine decomposition using different methods. Compared to the original input images, the proposed Dual-head Pix2Pix model reduces the MAE from 51.07 to 5.30 (a reduction of 89.6%), improves the MS-SSIM from 0.80 to 0.91, increases the Pearson correlation coefficient from 0.91 to 0.99, and raises the PSNR from 13.01 dB to 32.06 dB. Compared with CycleGAN and standard Pix2Pix, Dual-head Pix2Pix consistently outperforms across all metrics, demonstrating its superior ability to preserve iodine details while suppressing bone interference.
Table 2 shows quantitative results for bone decomposition. Relative to the input images, Dual-head Pix2Pix reduces the MAE by 58.1% (from 22.77 to 9.55), improves the MS-SSIM from 0.78 to 0.84, maintains the Pearson-R at 0.98, and increases the PSNR from 19.72 dB to 26.74 dB. Dual-head Pix2Pix also surpasses CycleGAN and standard Pix2Pix in all metrics, highlighting its improved capacity for accurate bone structure reconstruction and iodine artifact suppression.
As shown in Figure 4, for iodine decomposition, red arrows indicate residual bone structures. CycleGAN outputs retain visible spine components, whereas Pix2Pix and Dual-head Pix2Pix effectively suppress bone interference. For iodine detail preservation, the enlarged cardiac region (red box) shows that Dual-head Pix2Pix produces more natural textures and is visually closer to the Ground Truth than Pix2Pix, demonstrating the benefit of the dual-decoder design for fine-grained iodine reconstruction. For bone decomposition, red boxes mark regions with a high iodine concentration. Pix2Pix struggles to recover fine bone structures, often exhibiting unnatural textures or small “white spot” artifacts. In contrast, Dual-head Pix2Pix achieves more accurate structural recovery, preserves bone integrity more faithfully, and generates more natural textures, even under strong iodine interference.
Figure 5 shows the region-specific line profile analysis for iodine decomposition using different algorithms. The yellow and red sampling lines mark bone-rich and iodine-rich regions, respectively. In the yellow line profiles (e), greater deviation from the original input indicates better bone suppression; the Dual-head Pix2Pix output shows the largest deviation, confirming superior suppression. In the red line profiles (f), smaller deviation reflects better iodine preservation. The Dual-head Pix2Pix aligns most closely with the input, demonstrating the best retention of iodine details.
Figure 6 shows the region-specific line profile analysis for bone decomposition using different algorithms. The yellow and red sampling lines indicate bone-rich and iodine-rich regions, respectively. In the yellow line profiles (e), a higher similarity to the input denotes better bone preservation; Dual-head Pix2Pix achieves the closest match. In the red line profiles (f), greater deviation signifies better iodine suppression. Dual-head Pix2Pix again shows the largest difference, indicating the most effective removal of iodine artifacts.

4.2. Traditional EID Projection Image Cross-Domain Testing

To evaluate the generalizability of the proposed method across different acquisition devices and imaging conditions, cross-domain testing was conducted using projection images acquired with a traditional EID. These EID images exhibit different signal characteristics and noise distributions compared to the images used during model training. The cross-domain evaluation aims to assess the model’s robustness and applicability on unseen data distributions. By comparing decomposition results between the training-domain and EID-domain projections, we analyze the stability and potential practical value of the method in diverse imaging scenarios. The acquired mouse angiography images (Figure 7a) were preprocessed and cropped before undergoing material decomposition using the Dual-head Pix2Pix model. As shown in Figure 7b,c, the decomposition results are based on data acquired by a conventional energy-integrating detector. Unlike dual-energy systems, these detectors cannot directly produce the energy-thresholded material decomposition images that would serve as a “gold standard.” Therefore, the quality of the generated images was evaluated by the retention of the target material in its corresponding decomposition image and the suppression of irrelevant materials.
In Figure 7b, the yellow sampling line marks a bone-rich region. Ideally, the pixel intensities in the bone decomposition image should closely match those in the input projection along this line, while the iodine decomposition image should exhibit minimal bone-related signals. The results show that the bone decomposition image from the Dual-head Pix2Pix aligns well with the input intensities, whereas the iodine decomposition image maintains a uniform intensity, indicating effective bone suppression.
In Figure 7c, the red sampling line indicates an iodine-rich region. In this case, the iodine decomposition image preserves the input signal along the line, while the bone decomposition image displays low and uniform intensities, confirming the successful separation of iodine and bone components.

4.3. Ablation Study on Loss Functions in the Dual-Head Pix2Pix Network

To evaluate the effectiveness of the proposed mutual exclusivity loss $L_3^*$, ablation experiments were conducted using different combinations of loss functions in the Dual-head Pix2Pix network. The performance of iodine and bone material decomposition was evaluated based on four quantitative metrics: MAE, MS-SSIM, Pearson-R, and PSNR.
From Table 3 and Table 4, it can be observed that incorporating the mutual exclusivity loss $L_3^*$ leads to consistent improvements across all evaluation metrics for both iodine and bone decomposition. While the improvements in MAE and PSNR are relatively modest, the increases in MS-SSIM and Pearson-R suggest more consistent and structurally accurate outputs, thereby confirming the effectiveness of the proposed constraint.
As shown in Figure 8, models trained with the mutual exclusivity loss $L_3^*$ produce outputs that are more consistent with the Ground Truth, particularly in anatomically complex regions such as the heart, where iodine and bone signals intersect. The generated images also exhibit smoother textures and improved anatomical separation.

5. Discussion

In this study, we proposed the Dual-head Pix2Pix network, which employs a dual-decoder architecture to effectively achieve the material decomposition of iodine and bone from single-energy X-ray projection images. Compared to traditional single-decoder models, the dual-decoder design enables the independent learning of distinct material channels, thereby mitigating interference caused by signal overlaps between materials. Although the introduced mutual exclusivity loss term $L_3^*$ yielded moderate improvements in global quantitative metrics, it played a critical role in enhancing fine detail preservation and boundary clarity, particularly in anatomically complex regions where iodine and bone signals overlap. Both ablation studies and qualitative evaluations corroborate the effectiveness of this design (Table 3 and Table 4 and Figure 8).
Quantitative assessments across multiple metrics—including MAE, MS-SSIM, Pearson correlation coefficient, and PSNR—demonstrate that the Dual-head Pix2Pix model consistently outperforms CycleGAN and the standard Pix2Pix model, achieving a superior image reconstruction accuracy and structural similarity (Table 1 and Table 2 and Figure 4, Figure 5 and Figure 6). Furthermore, cross-domain evaluations (Figure 7) validate the model’s generalization capability on projection images acquired by conventional energy-integrating detectors, which lack true dual-energy references. Despite this limitation, the model successfully preserved target material signals while suppressing irrelevant content, underscoring its potential for practical application in clinical and industrial settings with low-cost detectors.
Furthermore, as observed in Figure 4, Figure 5, Figure 6 and Figure 8, both the Ground Truth and the network-generated iodine–bone separation results derived from PCD-CT projection data exhibit horizontal striped artifacts. These artifacts likely arise from multiple factors, including photon starvation, noise amplification, and beam-hardening effects. Material decomposition algorithms require the simultaneous use of low- and high-energy data to resolve iodine and calcium densities. The extremely high noise present in the low-energy data becomes significantly amplified during the decomposition process. When the algorithm attempts to interpret this unstructured noise, it erroneously attributes these fluctuations to slight variations in iodine or calcium signals, resulting in distinct, alternating striped artifacts perpendicular to the X-ray propagation direction. Notably, as shown in Figure 7, when the proposed method is applied to material decomposition using projection images acquired with EID-CT, the horizontal striped artifacts are eliminated. This suggests that our approach can effectively suppress such striped artifacts and improve image quality in material decomposition when processing EID-CT projection data.
While our proposed network demonstrated accurate material decomposition on the tested datasets, several limitations should be noted. First, the network has been trained and evaluated on projection images from phantoms and external EID system data, showing no signs of overfitting. However, its performance on larger animals or clinical subjects remains untested. Future work will focus on validating the method in more complex, heterogeneous objects to assess its applicability beyond the current datasets.
In this study, we intentionally performed material decomposition in the projection domain rather than directly on reconstructed CT slices. Our aim was to investigate whether projection-domain learning using data from a conventional EID system can approximate the material separation performance of PCD systems. Compared with image-domain decomposition, projection-domain approaches preserve richer structural and spectral information, enable the incorporation of physical constraints at an early stage, and help reduce error accumulation, suppress beam-hardening artifacts, and compensate for detector- and acquisition-related distortions.
Although only projection-domain decomposition was presented in this work, this represents the first stage of our research. In future studies, we plan to extend this framework to image reconstruction based on decomposed projections, thereby achieving slice-level material decomposition (e.g., reconstructed BoneProj and IodineProj slices). This workflow is consistent with the principles of PCD-CT systems and will be the focus of our subsequent research.
Nonetheless, several limitations persist. The model training relies on paired dual-energy reference data, which may restrict applicability in scenarios where such annotations are unavailable or limited. Residual artifacts remain in certain challenging regions, suggesting that the further refinement of loss functions and network architecture is necessary. Moreover, computational efficiency and inference speed require improvement to meet the demands of real-time or large-scale deployment.
Future work will focus on exploring weakly supervised and unsupervised learning strategies to reduce dependency on paired training data, integrating domain adaptations and transfer learning techniques to enhance robustness across varying acquisition conditions and devices, and optimizing network design to improve computational performance and scalability.
In summary, the Dual-head Pix2Pix framework offers an effective AI-driven solution for single-energy material decomposition, improving accuracy and image quality while bridging the functional gap between low-cost conventional detectors and advanced photon-counting systems. This work lays a solid foundation for expanding material-specific imaging capabilities in both clinical and industrial applications without necessitating hardware upgrades.

6. Conclusions

In this work, we proposed the Dual-head Pix2Pix network for iodine and bone material decomposition from X-ray projection images. By modifying the generator architecture and introducing a mutual exclusivity loss, the network achieves superior material separation compared to traditional models. Experimental results demonstrate improvements in both the accuracy and visual quality of the decomposition images. Moreover, the model generalizes well to data acquired from conventional detectors. Future work will focus on extending this approach to CT-reconstructed images to enable more accurate 3D material decomposition.

Author Contributions

Conceptualization, S.Z., Y.L. and Z.L.; methodology, Y.L. and Z.L.; software, Y.L., Z.L. and Y.W.; validation, Y.L., Z.L. and Y.W.; formal analysis, R.C., D.D. and X.L. (Xiaoyi Liu); investigation, X.L. (Xiangyu Liu), Y.S. and S.L.; resources, X.L. (Xiangyu Liu); data curation, Y.L., Z.L. and S.Z.; writing—original draft preparation, Y.L.; writing—review and editing, S.Z.; visualization, Z.L. and Y.W.; supervision, S.Z.; project administration, S.Z.; funding acquisition, S.Z., X.L. (Xiangyu Liu) and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62027901 and 62471372; the Postdoctoral Fellowship Program of the China Postdoctoral Science Foundation, grant number GZC20241304; the National Natural Science Foundation of Shaanxi Province, grant number 2025JC-YBQN-1235; the Fundamental Research Funds for the Central Universities, grant number XJSJ25015; the Natural Science Foundation of the Jiangsu Higher Education Institutions of China, grant number 24KJD310001; and the Joint Project of Industry-University-Research of Zhangjiagang City, grant number ZKYY2441.

Institutional Review Board Statement

The animal study protocol was approved by the Institutional Animal Ethics Committee of Xi’an Medical University.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, Shouping Zhu, upon reasonable request.

Acknowledgments

Limited language editing tools (such as Grammarly) were only used for grammar correction and the polishing of expressions, under the full oversight of the authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
PCD: Photon-counting detector
MAE: Mean absolute error
CT: Computed tomography
EIDs: Energy-integrating detectors
DECT: Dual-energy CT
PCDs: Photon-counting detectors
MS-SSIM: Multi-scale structural similarity index
Pearson-R: Pearson correlation coefficient
PSNR: Peak signal-to-noise ratio
ROI: Regions of interest

References

  1. Xue, Y.; Qin, W.; Luo, C.; Yang, P.; Jiang, Y.; Tsui, T.; He, H.; Wang, L.; Qin, J.; Xie, Y.; et al. Multi-Material Decomposition for Single Energy CT Using Material Sparsity Constraint. IEEE Trans. Med. Imaging 2021, 40, 1303–1318. [Google Scholar] [CrossRef]
  2. Wang, X.; Xiang, J.; Mao, A.; Xie, J.; Jin, P.; Ding, M.; Yuan, Y.; Lu, Y.; Yu, L.; Cai, H.; et al. Clip-Driven Universal Model for Multi-Material Decomposition in Dual-Energy CT. IEEE Trans. Comput. Imaging 2025, 11, 349–361. [Google Scholar] [CrossRef]
  3. Ji, X.; Zhuo, X.; Lu, Y.; Mao, W.; Zhu, S.; Quan, G.; Xi, Y.; Lyu, T.; Chen, Y. Image Domain Multi-Material Decomposition Noise Suppression Through Basis Transformation and Selective Filtering. IEEE J. Biomed. Health Inform. 2024, 28, 2891–2903. [Google Scholar] [CrossRef]
  4. Wang, G.; Liu, Z.; Huang, Z.; Zhang, N.; Luo, H.; Liu, L.; Shen, H.; Che, C.; Niu, T.; Liang, D.; et al. Improved GAN: Using a Transformer Module Generator Approach for Material Decomposition. Comput. Biol. Med. 2022, 149, 105952. [Google Scholar] [CrossRef]
  5. Generative Adversarial Network–Based Noncontrast CT Angiography for Aorta and Carotid Arteries|Radiology. Available online: https://pubs.rsna.org/doi/10.1148/radiol.230681?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed (accessed on 11 August 2025).
  6. Yunaga, H.; Ohta, Y.; Kaetsu, Y.; Kitao, S.; Watanabe, T.; Furuse, Y.; Yamamoto, K.; Ogawa, T. Diagnostic Performance of Calcification-Suppressed Coronary CT Angiography Using Rapid Kilovolt-Switching Dual-Energy CT. Eur. Radiol. 2017, 27, 2794–2801. [Google Scholar] [CrossRef]
  7. Gruenewald, L.D.; Koch, V.; Martin, S.S.; Yel, I.; Eichler, K.; Gruber-Rouh, T.; Lenga, L.; Wichmann, J.L.; Alizadeh, L.S.; Albrecht, M.H.; et al. Diagnostic Accuracy of Quantitative Dual-Energy CT-Based Volumetric Bone Mineral Density Assessment for the Prediction of Osteoporosis-Associated Fractures. Eur. Radiol. 2022, 32, 3076–3084. [Google Scholar] [CrossRef]
  8. Rajiah, P.; Rong, R.; Martinez-Rios, C.; Rassouli, N.; Landeras, L. Benefit and Clinical Significance of Retrospectively Obtained Spectral Data with a Novel Detector-Based Spectral Computed Tomography—Initial Experiences and Results. Clin. Imaging 2018, 49, 65–72. [Google Scholar] [CrossRef] [PubMed]
  9. McCollough, C.H.; Rajendran, K.; Baffour, F.I.; Diehn, F.E.; Ferrero, A.; Glazebrook, K.N.; Horst, K.K.; Johnson, T.F.; Leng, S.; Mileto, A.; et al. Clinical Applications of Photon Counting Detector CT. Eur. Radiol. 2023, 33, 5309–5320. [Google Scholar] [CrossRef] [PubMed]
  10. Ganguly, S.; Neelam; Grinberg, I.; Margel, S. Layer by Layer Controlled Synthesis at Room Temperature of Tri-modal (MRI, Fluorescence and CT) Core/Shell Superparamagnetic IO/Human Serum Albumin Nanoparticles for Diagnostic Applications. Polym. Adv. Technol. 2021, 32, 3909–3921. [Google Scholar] [CrossRef]
  11. Application of YOLO Algorithm for Segmentation and Classification of Minerals in CT Slices Obtained by Dual- and Multi-Energy CT|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/10553944 (accessed on 13 September 2025).
  12. Dual-Energy Processing of X-Ray Images of Beryl in Muscovite Obtained Using Pulsed X-Ray Sources—PMC. Available online: https://pmc.ncbi.nlm.nih.gov/articles/PMC10181619/ (accessed on 13 September 2025).
  13. McCollough, C.H.; Rajiah, P.S. Milestones in CT: Past, Present, and Future. Radiology 2023, 309, e230803. [Google Scholar] [CrossRef]
  14. Greffier, J.; Viry, A.; Robert, A.; Khorsi, M.; Si-Mohamed, S. Photon-Counting CT Systems: A Technical Review of Current Clinical Possibilities. Diagn. Interv. Imaging 2025, 106, 53–59. [Google Scholar] [CrossRef]
  15. García-Figueiras, R.; Oleaga, L.; Broncano, J.; Tardáguila, G.; Fernández-Pérez, G.; Vañó, E.; Santos-Armentia, E.; Méndez, R.; Luna, A.; Baleato-González, S. What to Expect (and What Not) from Dual-Energy CT Imaging Now and in the Future? J. Imaging 2024, 10, 154. [Google Scholar] [CrossRef]
  16. Flohr, T.; Schmidt, B. Technical Basics and Clinical Benefits of Photon-Counting CT. Investig. Radiol. 2023, 58, 441–450. [Google Scholar] [CrossRef]
  17. Photon-Counting Detector CT: System Design and Clinical Applications of an Emerging Technology|RadioGraphics. Available online: https://pubs.rsna.org/doi/10.1148/rg.2019180115 (accessed on 11 August 2025).
  18. Algin, O.; Tokgoz, N.; Cademartiri, F. Photon-Counting Computed Tomography in Radiology. Pol. J. Radiol. 2024, 89, 433–442. [Google Scholar] [CrossRef]
  19. Lell, M.; Kachelrieß, M. Computed Tomography 2.0: New Detector Technology, AI, and Other Developments. Investig. Radiol. 2023, 58, 587–601. [Google Scholar] [CrossRef] [PubMed]
  20. Wang, T.; Jiang, C.; Ding, W.; Chen, Q.; Shen, D.; Ding, Z. Deep-Learning Generated Synthetic Material Decomposition Images Based on Single-Energy CT to Differentiate Intracranial Hemorrhage and Contrast Staining Within 24 Hours After Endovascular Thrombectomy. CNS Neurosci. Ther. 2025, 31, e70235. [Google Scholar] [CrossRef]
  21. Kawahara, D.; Saito, A.; Ozawa, S.; Nagata, Y. Image Synthesis with Deep Convolutional Generative Adversarial Networks for Material Decomposition in Dual-Energy CT from a Kilovoltage CT. Comput. Biol. Med. 2021, 128, 104111. [Google Scholar] [CrossRef]
  22. Gong, H.; Tao, S.; Rajendran, K.; Zhou, W.; McCollough, C.H.; Leng, S. Deep-Learning-Based Direct Inversion for Material Decomposition. Med. Phys. 2020, 47, 6294–6309. [Google Scholar] [CrossRef]
  23. Dayarathna, S.; Islam, K.T.; Uribe, S.; Yang, G.; Hayat, M.; Chen, Z. Deep Learning Based Synthesis of MRI, CT and PET: Review and Analysis. Med. Image Anal. 2024, 92, 103046. [Google Scholar] [CrossRef]
  24. Niu, T.; Dong, X.; Petrongolo, M.; Zhu, L. Iterative Image-domain Decomposition for Dual-energy CT. Med. Phys. 2014, 41, 041901. Available online: https://aapm.onlinelibrary.wiley.com/doi/10.1118/1.4866386 (accessed on 21 August 2025). [CrossRef] [PubMed]
  25. Using Edge-Preserving Algorithm with Non-Local Mean for Significantly Improved Image-Domain Material Decomposition in Dual-Energy CT—IOPscience. Available online: https://iopscience.iop.org/article/10.1088/0031-9155/61/3/1332 (accessed on 21 August 2025).
  26. Nakamura, Y.; Higaki, T.; Kondo, S.; Kawashita, I.; Takahashi, I.; Awai, K. An Introduction to Photon-Counting Detector CT (PCD CT) for Radiologists. Jpn. J. Radiol. 2022, 41, 266–282. [Google Scholar] [CrossRef]
  27. Development of the Projection-Based Material Decomposition Algorithm for Multienergy CT. Available online: https://ieeexplore.ieee.org/document/9187882 (accessed on 21 August 2025).
  28. Yuan, Y.; Zhang, Y.; Yu, H. Optimization of Energy Combination for Gold-Based Contrast Agents Below K-Edges in Dual-Energy Micro-CT. IEEE Trans. Radiat. Plasma Med. Sci. 2017, 2, 187–193. Available online: https://ieeexplore.ieee.org/document/8219758 (accessed on 21 August 2025). [CrossRef]
  29. Fredette, N.R.; Kavuri, A.; Das, M. Multi-Step Material Decomposition for Spectral Computed Tomography. Phys. Med. Biol. 2019, 64, 145001. [Google Scholar] [CrossRef]
  30. Lu, Y.; Kowarschik, M.; Huang, X.; Chen, S.; Ren, Q.; Fahrig, R.; Hornegger, J.; Maier, A. Material Decomposition Using Ensemble Learning for Spectral X-Ray Imaging. IEEE Trans. Radiat. Plasma Med. Sci. 2018, 2, 194–204. [Google Scholar] [CrossRef]
  31. Vock, P.; Szucs-Farkas, Z. Dual Energy Subtraction: Principles and Clinical Applications. Eur. J. Radiol. 2009, 72, 231–237. [Google Scholar] [CrossRef]
  32. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-To-Image Translation With Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
Figure 1. Overview of the PCD-CT and EID-CT imaging systems: (a) schematic of the PCD-CT system geometry; (b) photograph of the PCD-CT physical system; (c) schematic of the EID-CT system geometry; (d) photograph of the EID-CT physical system.
Figure 4. Material decomposition results obtained with different algorithms. The first row (a–e) shows iodine decomposition, and the second row (f–j) shows bone decomposition. From left to right: (a,f) original images, (b,g) Ground Truth, (c,h) CycleGAN results, (d,i) Pix2Pix results, and (e,j) Dual-head Pix2Pix results. Red arrows indicate bone-enriched regions, and red boxes highlight iodine-enriched regions.
Figure 5. Line profile analysis of ROIs in iodine decomposition results obtained with different algorithms. (a) Original input; (b) result from CycleGAN; (c) result from Pix2Pix; and (d) result from Dual-head Pix2Pix. The yellow and red lines in (a–d) indicate the selected ROIs, where the yellow line corresponds to the bone region and the red line corresponds to the iodine region. (e) Line profile of the yellow ROI; (f) line profile of the red ROI.
Figure 6. Line profile analysis of ROIs in bone decomposition results obtained with different algorithms. (a) Original input; (b) result from CycleGAN; (c) result from Pix2Pix; and (d) result from Dual-head Pix2Pix. The yellow and red lines in (a–d) indicate the selected ROIs, where the yellow line corresponds to the bone region and the red line corresponds to the iodine region. (e) Line profile of the yellow ROI; (f) line profile of the red ROI.
Figure 7. Performance evaluation of the Dual-head Pix2Pix model on projection images from conventional EIDs. (a) Original projection image from the EID; (b) bone decomposition result; (c) iodine decomposition result. The yellow and red lines in (a–c) indicate the selected ROIs, where the yellow line corresponds to the bone region and the red line corresponds to the iodine region. (d) Line profile of the yellow ROI; (e) line profile of the red ROI.
Figure 8. Ablation study results: (a,d) show reference images for iodine and bone decomposition, respectively; (b,e) display the predicted results without the mutual exclusivity loss $L_3^*$; (c,f) present the predictions with $L_3^*$ included. Red boxes highlight regions with noticeable differences, particularly in areas where bone and iodine signals overlap.
Table 1. Results of quantitative assessment of iodine substance decomposition using different methods.

Model | MAE | MS-SSIM | Pearson-R | PSNR (dB)
Original | 51.07 ± 11.45 | 0.80 ± 0.04 | 0.91 ± 0.03 | 13.01 ± 1.78
CycleGAN | 10.39 ± 4.05 | 0.86 ± 0.05 | 0.96 ± 0.02 | 26.40 ± 2.65
Pix2Pix | 5.92 ± 3.45 | 0.90 ± 0.04 | 0.98 ± 0.01 | 31.19 ± 2.62
Dual-head Pix2Pix | 5.30 ± 1.81 * | 0.91 ± 0.03 * | 0.99 ± 0.01 | 32.06 ± 2.62 *
* Paired t-tests showed that Dual-head Pix2Pix significantly outperformed all other methods across all metrics (p < 0.0001).
Table 2. Results of quantitative assessment of bone substance decomposition using different methods.

Model | MAE | MS-SSIM | Pearson-R | PSNR (dB)
Input | 22.77 ± 9.99 | 0.78 ± 0.07 | 0.94 ± 0.04 | 19.72 ± 3.05
CycleGAN | 21.19 ± 5.84 | 0.70 ± 0.06 | 0.92 ± 0.04 | 19.92 ± 1.97
Pix2Pix | 10.18 ± 2.88 | 0.83 ± 0.03 | 0.98 ± 0.01 | 26.20 ± 1.90
Dual-head Pix2Pix | 9.55 ± 2.49 * | 0.84 ± 0.03 * | 0.98 ± 0.01 | 26.74 ± 1.97 *
* Paired t-tests showed that Dual-head Pix2Pix significantly outperformed all other methods across all metrics (p < 0.0001).
Table 3. Quantitative evaluation of iodine decomposition with different loss function combinations.

Loss Function | MAE | MS-SSIM | Pearson-R | PSNR (dB)
$L_1^* + L_2^*$ | 5.30 ± 2.04 | 0.90 ± 0.04 | 0.98 ± 0.01 | 31.95 ± 2.49
$L_1^* + L_2^* + L_3^*$ | 5.30 ± 1.81 | 0.91 ± 0.03 | 0.99 ± 0.01 | 32.06 ± 2.62
Table 4. Quantitative evaluation of bone decomposition with different loss function combinations.

Loss Function | MAE | MS-SSIM | Pearson-R | PSNR (dB)
$L_1^* + L_2^*$ | 9.68 ± 2.64 | 0.83 ± 0.03 | 0.98 ± 0.01 | 26.63 ± 2.07
$L_1^* + L_2^* + L_3^*$ | 9.55 ± 2.49 | 0.84 ± 0.03 | 0.98 ± 0.01 | 26.74 ± 1.97