1. Introduction
Lung cancer is a leading cause of cancer-related deaths worldwide, emphasizing the need for accurate and efficient diagnostic tools [1].
Current methods for lung lesion detection and classification rely heavily on imaging techniques, such as computed tomography (CT) scans. However, CT-based approaches are limited by radiation exposure and provide only indirect vascular characterization without the use of contrast agents.
Although ultrasonography (US) has been extensively used in medicine for over 50 years, its role in lung assessment was initially limited by the presence of air in the lungs, which hinders ultrasound penetration. However, in the past two decades, advances in imaging techniques have demonstrated the utility of US in lung pathology assessment, particularly in detecting pleural and subpleural lesions. Although earlier data on the acoustic properties of the lung showed high attenuation in lung tissue sections at all levels of inflation, results from a more recent study underline that the US attenuation properties of the lung are far from a precise cut-off [2].
For a long time, therefore, the lung was considered largely inaccessible to US, with only the pleura being commonly assessed. As data accumulated, a lung US semiology and lung pattern recognition were defined and reported [3].
Currently, applications of ultrasonography have evolved from simple measurements of anatomical dimensions to the detection of small changes in tissue texture, evaluation of vascularization with Doppler modes, and assessment of tissue perfusion with contrast agents. Ultrasonography also offers several crucial advantages, such as portability, the absence of ionizing radiation, low cost, short acquisition time, and sensitivity in determining organ morphology, making it one of the most significant, widely used, and versatile imaging techniques in medicine; it is a particularly valuable technique for early cancer diagnosis, being used to distinguish malignant from benign tissue in organs such as the breast and thyroid [4]. The advantages of US were also demonstrated in a previous study that comparatively evaluated lung lesions in contact with the thoracic wall by transthoracic US and CT. The study included 131 patients with thoraco-pulmonary peripheral lesions demonstrated by CT [4]. The evaluation concluded that the two imaging methods agree regarding tumor invasion, intratumoral necrosis, and peritumoral atelectasis in patients with thoracic lesions in contact with the thoracic wall [4]. Several studies on the use of Doppler mode US and contrast-enhanced ultrasound (CEUS) for needle biopsy of thoracic lesions and for differential diagnosis have reported comparable or even better results, with fewer side effects, than contrast-enhanced CT [5,6,7,8,9,10].
Doppler US can assess blood flow in and around a lung tumor. The presence, pattern, and velocity of blood flow detected by color Doppler can help differentiate between malignant and benign lung lesions. Previous studies have demonstrated that when the vessels inside a lung mass are irregularly distributed, convoluted, and mostly peripherally located, a malignant lesion (angiogenesis) can be suspected, distinguishing it from benign lesions, which retain an organized vascular structure [5]. Color Doppler can also be useful in guiding needle biopsies of lung tumors: by identifying blood vessels around and within a tumor, physicians can plan the safest path for the biopsy needle, avoiding major blood vessels to reduce the risk of bleeding. These characteristics are important for diagnosis but are not captured by other radiological methods (CT and MRI), being revealed by Doppler US and CEUS [6,7,8,9,10,11]. At present, CT-guided percutaneous biopsy is one of the main investigative tools for the pathological diagnosis of chest lesions, but it exposes the patient to a considerable amount of radiation and cannot dynamically display the puncture process in real time, which may result in complications. CEUS can be valuable in the preoperative evaluation of the biopsy site and in increasing biopsy accuracy, because it effectively identifies both necrotic and viable tissue within the lesion and provides valuable information for the accurate planning and execution of the puncture biopsy [6,7,8,9,10,11].
Recent advancements in artificial intelligence (AI) have led to the development of deep learning-based models for analyzing lung nodules on imaging modalities such as CT and MRI, as in the case of Rayscape [12] and Optellum [13], but these tools are not widely available and carry the disadvantage of irradiation for screening and follow-up. Recent studies have demonstrated the potential of automated image analysis techniques to enhance the diagnostic accuracy of color Doppler ultrasound [14,15,16,17,18]. Machine learning algorithms, particularly deep convolutional neural networks, have shown promising results in the classification of lung lesions based on CT images [19,20,21].
AI-assisted ultrasound analysis is an emerging field, with studies demonstrating its potential for improving lesion classification and reducing diagnostic variability. However, the application of similar techniques to color Doppler ultrasound data remains less explored [22,23,24,25,26].
Current research efforts focus on integrating advanced computational methods with ultrasonography to overcome these conventional constraints. For example, Doppler ultrasound can assist in the differentiation of benign and malignant tumors by assessing blood flow and vascular patterns in lung lesions. This is especially pertinent in the context of tumor angiogenesis, which is a critical factor in the growth and metastasis of cancer.
The aim of this study is to develop a novel method for the efficient classification of lung lesions using automatic vascularization evaluation on color Doppler ultrasound with machine learning techniques.
This hybrid approach combines the advantages of color Doppler ultrasound imaging with the power of machine learning algorithms to provide a cost-effective, highly accurate, non-invasive, and reliable diagnosis of lung lesions.
2. Materials and Methods
2.1. Study Design and Setting
This retrospective cohort study was conducted at the 1st Medical Clinic of the County Emergency Hospital Cluj, including patients referred from other hospitals, and enrolled patients diagnosed with lung cancer who underwent surgical staging between 2018 and 2019. Ethical approval for this study was obtained from the UMF "Iuliu Hatieganu" Ethics Committee.
Participation required informed consent as per the Declaration of Helsinki, and all procedures (research protocol, data collections) conformed to ethical standards.
The sample size (number of cases and number of ultrasound images) was determined by the number of patients who met the inclusion criteria during the study period (2018–2019). During this period, 113 cases and 565 Doppler mode US images of lung tumors were analyzed. Inclusion criteria were: (1) malignant lung lesions histologically confirmed by percutaneous US-guided biopsy, or benign lung consolidations followed for at least 6 months after diagnosis; (2) adequate Doppler ultrasound image quality; and (3) availability of clinical and imaging data for correlation. Exclusion criteria were: (1) poor ultrasound image quality due to excessive motion artifacts; (2) severe comorbidities affecting the lung vasculature; and (3) prior lung surgery or radiation therapy affecting lesion morphology.
The annotation process was performed using the Supervisely platform [27], where the medical team manually segmented and annotated all images using the "ADD Polygon" tool to define lesion contours. Annotators selected the appropriate object class, placed points along the lesion boundary, and finalized the segmentation by completing the polygon. Each annotation was recorded in the Objects panel, allowing for visibility adjustments, editing, or deletion if needed. To ensure annotation accuracy and reduce subjectivity, all annotations were cross-checked and verified by the expert radiologist (R.I.C.).
2.2. Doppler Ultrasound Examination Protocol
The Doppler ultrasound examination was primarily conducted using a General Electric Logiq E10s LEX101710 series ultrasound machine by GE Ultrasound Korea Ltd., Seongnam, South Korea. Chest scans were performed utilizing a broadband convex transducer (1.8–5 MHz). Doppler ultrasound imaging, including color flow Doppler and power Doppler, was carried out with the following settings:
Color maps, including velocity variance maps: 13;
Velocity scale range: 1–300 cm/s;
Steering angle (linear transducer): 0–20°;
Auto ROI placement and steering enabled for linear scans.
2.3. System Architecture
The image-based lung diagnosis system is composed of multiple modules and autonomous functions, each of which contributes to the diagnosis of the type of lung tumor. The process begins with defining the affected area as the initial region of interest (ROI), selected by the user based on their experience. Inside this selected mask, three parameters describing the lung vascularization are computed: the blood vessel area relative to the ROI (mask area), the blood vessel tortuosity index, and the blood vessel orientation. These three vascularization parameters were selected based on their strong correlation with malignancy indicators. Malignant tumors tend to exhibit: (1) irregular and increased vascularization (evaluated through blood vessel area); (2) highly convoluted and disorganized blood vessel paths (measured by the tortuosity index); and (3) altered blood flow orientation due to chaotic angiogenesis (quantified by vessel orientation patterns) [5].
These features have been widely recognized in tumor angiogenesis studies and provide a comprehensive assessment of malignancy, improving diagnostic accuracy. The computed parameters are transmitted as inputs to a feed-forward neural network, which outputs an indication of the malignancy of the analyzed lung lesion. A feed-forward neural network (FNN) was selected instead of a convolutional neural network (CNN) because the input features consist of extracted numerical vascularization parameters rather than raw image data. CNNs are typically used for pixel-level feature learning, whereas our approach relies on precomputed vascularization metrics (blood vessel area, tortuosity index, and orientation), which are better suited to structured data analysis with an FNN.
Figure 1 presents the entire system architecture.
2.3.1. Blood Vessels Area Evaluation Module
The blood vessel area inside the evaluated region serves as the first diagnostic criterion for differentiating benign from malignant lung lesions. The region of interest (ROI) is defined inside the ultrasonic field of view, as US imaging cannot always capture the whole lesion. Based on this ROI, the system determines the mask area (denoted Amask) and the blood vessel area contained within it.
To identify the areas of interest, the input image is first converted to a binary format. Each pixel is then checked to determine whether its coordinates lie inside the mask bounds that define the ROI. The x and y coordinate vectors describing the mask contour are created, and the area enclosed by this polygon is computed from them. Similarly, object-counting techniques applied to the binary image delineate the vessel regions, where "ON" pixels indicate vessels. This procedure provides an accurate assessment of the blood vessel distribution within the target area.
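A minimal MATLAB sketch of this step is given below. It is an assumed helper, not the authors' published code: the ROI polygon is rasterized with poly2mask, Doppler-coloured pixels are flagged as vessels using a simple colour-difference threshold (the threshold value is an assumption), and the ratio of the vessel area to the mask area is returned.

```matlab
% Assumed helper illustrating the blood vessel area module.
function ratio = vesselAreaRatio(rgbImg, roiX, roiY)
    [rows, cols, ~] = size(rgbImg);
    roiMask = poly2mask(roiX, roiY, rows, cols);      % binary mask of the ROI polygon
    r = double(rgbImg(:,:,1));
    g = double(rgbImg(:,:,2));
    b = double(rgbImg(:,:,3));
    % Colour-flow (Doppler) pixels differ between channels; grey B-mode pixels do not.
    colourness = max(cat(3, abs(r-g), abs(r-b), abs(g-b)), [], 3);
    vesselMask = (colourness > 30) & roiMask;         % "ON" pixels indicate vessels
    ratio = nnz(vesselMask) / nnz(roiMask);           % blood vessel area / mask area
end
```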
2.3.2. Blood Vessels Tortuosity Index
Another essential metric in the evaluation of lung lesions is the tortuosity index, as it indicates the regular or irregular course of the blood vessels present inside the evaluated ROI; the more regular the course, the more likely the lesion is benign. When the evaluated lesion is malignant, the blood vessels tend to be fewer, thinner, and more tortuous, owing to invasion of the pulmonary arteries and the irregularity of neo-angiogenesis, taking on a twisted or curly appearance. To evaluate the tortuosity index, all of the vessels identified in the previously defined ROI must be analyzed.
The designated region is subsequently transformed into a binary picture, wherein blood vessels are recognized as separate entities. These vessels are modeled using a linear polynomial equation, depicting them as straight lines. This linear approximation establishes a reference for quantifying vessel curvature by contrasting the real vessel shapes with their idealized straight-line representations.
Throughout this procedure, anomalies in the binary image, such as solitary "zero" pixels within vascular structures, are rectified. These isolated pixels, referred to as "holes", are detected and filled to enhance the accuracy of the vessel's structural representation, and the filling operation is repeated until no such holes remain.
The skeletonization of the vessels is then executed to condense their representation to lines of single-pixel width. This involves delineating the primary trajectory of the vessel between bifurcation points and endpoints. Edge erosion is applied iteratively until only the vessel's skeleton persists; the number of iterations depends on the vessel's thickness, with thicker vessels requiring more iterations to reach a single-line representation.
Next, the tortuosity index (TI) is calculated by assessing the deviation between the actual vessel trajectory determined previously and its linear approximation:

$TI = \dfrac{L_{\mathrm{actual}}}{L_{\mathrm{straight}}}$,

where $L_{\mathrm{actual}}$ represents the total vessel length and $L_{\mathrm{straight}}$ represents the straight-line distance between the vessel endpoints. A higher TI value indicates increased vessel curvature, which is characteristic of malignancy. In the simplest scenario, a vessel with no curvature exhibits no deviation between its actual trajectory and the approximation line. For curved vessels, the deviation between the actual trajectory and the approximation is calculated and aggregated across all relevant points, providing a quantitative assessment of the vessel's tortuosity.
Figure 2 exemplifies this procedure below.
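Under the ratio definition above, a minimal MATLAB sketch of the computation could look as follows (an assumed helper, not the authors' code); it skeletonises one segmented vessel, measures its along-path length between the two most distant endpoints with bwdistgeodesic, and divides by their straight-line distance.

```matlab
% Assumed helper illustrating the tortuosity index for a single vessel mask.
function TI = vesselTortuosityIndex(vesselMask)
    vesselMask = logical(vesselMask);
    vesselMask = imfill(vesselMask, 'holes');            % remove isolated "zero" pixels
    skel = bwskel(vesselMask);                           % single-pixel-wide centreline
    ep = bwmorph(skel, 'endpoints');
    [er, ec] = find(ep);
    if numel(er) < 2                                     % closed loop or a single point
        TI = 1;
        return;
    end
    % Geodesic (along-skeleton) distance measured from the first endpoint
    D = bwdistgeodesic(skel, ec(1), er(1), 'quasi-euclidean');
    dEnd = D(sub2ind(size(D), er, ec));
    [Lactual, k] = max(dEnd);                            % farthest endpoint along the path
    Lstraight = hypot(er(k) - er(1), ec(k) - ec(1));     % straight-line endpoint distance
    TI = double(Lactual) / max(Lstraight, 1);            % TI = 1 for a straight vessel
end
```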
2.3.3. Blood Vessels Orientation Module
The blood vessel orientation module uses the previously computed approximation of the vessels as first-degree linear equations, facilitating the computation of their slopes. These slopes are expressed in degrees, and the determined values are stored for further analysis. To evaluate the overall distribution of vessel orientations inside the evaluated ROI, the mean square deviation is computed. Under normal conditions, when the lung circulation has a radial distribution, the slopes of the blood vessels vary over a wide range and the standard deviation is high. In pathological circumstances such as malignancies, vessel orientations tend to align more tightly, resulting in a narrower range of slopes and a reduced standard deviation. The deviation E is computed as

$E = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(\alpha_i - \bar{\alpha}\right)^{2}}$,

where N is the number of vessels, $\alpha_i$ is the slope (in degrees) of the current vessel, and $\bar{\alpha}$ is the mean slope.
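A minimal MATLAB sketch of this module is shown below (an assumed helper): each skeletonised vessel is fitted with a first-degree polynomial, the slope is converted to degrees, and the standard deviation of all slopes is returned; near-vertical vessels would need special handling that is omitted here.

```matlab
% Assumed helper illustrating the orientation-spread computation.
function E = vesselOrientationSpread(vesselMask)
    vesselMask = logical(vesselMask);
    labels = bwlabel(bwskel(vesselMask));        % one label per vessel skeleton
    stats  = regionprops(labels, 'PixelList');
    angles = zeros(numel(stats), 1);
    for k = 1:numel(stats)
        pts = stats(k).PixelList;                % [x y] pixel coordinates of vessel k
        p   = polyfit(pts(:,1), pts(:,2), 1);    % linear approximation y = p(1)*x + p(2)
        angles(k) = atand(p(1));                 % slope expressed in degrees
    end
    E = std(angles);  % low spread suggests tightly aligned (malignancy-like) vessels
end
```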
2.3.4. Diagnosis Module
The diagnostic module is a key element of the proposed system architecture and employs a feed-forward neural network (FNN) to categorize lung lesions as malignant or benign based on vascularization metrics derived from Doppler ultrasound images. This method utilizes machine learning to improve diagnostic precision, efficiency, and impartiality in clinical practice.
The FNN architecture consists of the subsequent layers:
Input layer: This layer receives three essential vascularization parameters: (1) area of blood vessels within the region of interest (ROI); (2) tortuosity index of the vascular structures; and (3) distribution of vessel orientation;
Hidden layers: These were incorporated to identify intricate patterns and correlations among the input parameters. The number of layers was tuned on the dataset to guarantee computational efficiency while enhancing diagnostic precision;
Output layer: This generates the classification result as a probability score reflecting the likelihood that the lesion is malignant.
Also, to prevent overfitting, L2 regularization (λ = 0.01) was applied to the network weights, and dropout layers with a dropout rate of 0.3 were introduced between hidden layers. These techniques improved model generalization and reduced the risk of overfitting during training.
The data used consist of 565 carefully annotated Doppler mode ultrasound (US) images. Due to the limited number of US images, several data augmentation techniques (image transformations) were applied to the training dataset to improve the robustness of the proposed system. The dataset was extended by scaling, rotations, and the application of filters to the initial images: for each initial image, three rotation procedures and one scaling procedure were applied, along with six filters. After augmentation, the dataset comprised 3390 images. The dataset was split into a training dataset (60%), a validation dataset (20%), and a test dataset (20%). Maintaining a strict 60%–20%–20% train–validation–test split ensured that the validation and test sets remained independent, preventing overfitting and preserving model generalization. Additionally, k-fold cross-validation would have further reduced the number of unique patient cases per fold, potentially affecting classification reliability. Instead, a single, well-stratified split was used, ensuring that benign and malignant cases were adequately represented across all three subsets while maintaining computational efficiency.
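One way to realize the described architecture and regularization in MATLAB is sketched below. It uses the Deep Learning Toolbox layer interface; the hidden layer sizes, optimizer, and epoch count are illustrative assumptions rather than the authors' exact configuration, and XTrain/YTrain/XVal/YVal stand for hypothetical feature matrices and categorical labels of the training and validation splits.

```matlab
% Illustrative sketch of a small fully connected classifier for the three
% vascularization features, with L2 regularization and dropout as described above.
layers = [
    featureInputLayer(3, 'Normalization', 'zscore')   % area, tortuosity, orientation
    fullyConnectedLayer(8)
    reluLayer
    dropoutLayer(0.3)                                 % dropout rate 0.3
    fullyConnectedLayer(8)
    reluLayer
    dropoutLayer(0.3)
    fullyConnectedLayer(2)                            % benign vs. malignant
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'L2Regularization', 0.01, ...                     % lambda = 0.01
    'MaxEpochs', 200, ...
    'ValidationData', {XVal, YVal}, ...               % 20% validation split
    'Verbose', false);

net = trainNetwork(XTrain, YTrain, layers, options);  % XTrain: N-by-3, YTrain: categorical
```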
Figure 3 presents the augmented data obtained using several filters.
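The kind of transformations involved can be sketched as follows; the rotation angles, scale factor, and colour adjustments below are assumptions, since the paper names preset filters ("Arctic", "Burlesque", "Zeke") without defining them, and the file name is hypothetical.

```matlab
% Illustrative augmentation of one Doppler frame: three rotations, one scaling,
% and filter-style colour adjustments (assumed stand-ins for the named presets).
img = imread('doppler_frame.png');                          % hypothetical file name
aug = {};
for ang = [90 180 270]                                      % three rotation procedures
    aug{end+1} = imrotate(img, ang);                        %#ok<AGROW>
end
aug{end+1} = imresize(img, 1.2);                            % one scaling procedure
aug{end+1} = jitterColorHSV(img, 'Saturation', [-0.2 0.2]); % filter-style variants
aug{end+1} = jitterColorHSV(img, 'Brightness', [-0.1 0.1]);
```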
The implementation was performed in MATLAB, and the training was performed on a personal computer with the following characteristics: (1) processor (CPU): 13th gen Intel(R) Core(TM) i9-13900H, 2600 MHz, 14 cores, 20 logical processors; (2) graphics processing unit (GPU): NVIDIA GeForce RTX 3050, 4 GB VRAM (ASUS, Taipei, Taiwan), used for parallel computations; (3) RAM: 32 GB DDR4 (Micron, Boise, ID, USA).
The Levenberg–Marquardt algorithm was used at first for training, achieving rapid convergence and high accuracy. In addition to the Levenberg–Marquardt (LM) algorithm, we evaluated other optimization methods, including Adam, stochastic gradient descent (SGD), and Bayesian regularization. While Adam and SGD achieved comparable accuracy (~97%), LM demonstrated superior classification performance (98%) with faster convergence. Given the network size and dataset constraints, LM was selected as the most effective training algorithm for this study.
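A minimal sketch of the Levenberg–Marquardt setup with MATLAB's shallow-network interface is shown below (assumed, not the authors' exact script); X is a hypothetical N-by-3 matrix of the extracted features, T holds the 0/1 labels, and the 60/20/20 split follows the description above.

```matlab
% Illustrative Levenberg–Marquardt training of the shallow feed-forward network.
net = feedforwardnet(8, 'trainlm');        % 8 hidden neurons, LM training function
net.divideParam.trainRatio = 0.60;         % 60/20/20 split, as described in the text
net.divideParam.valRatio   = 0.20;
net.divideParam.testRatio  = 0.20;
net  = train(net, X', T');                 % the toolbox expects one column per sample
yhat = net(X') > 0.5;                      % threshold the continuous network output
accuracy = mean(yhat == (T' > 0.5));
```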
3. Results
This section focuses on the testing methods of the diagnostic module and each individual image processing module and presents the evaluation procedure and results.
To this end, Figure 4 presents the intermediate results and steps of the blood vessels area evaluation module.
Figure 5 presents the determined blood vessels skeleton to be used in the evaluation of the blood vessels tortuosity index and blood vessels orientation.
The attending physician pre-determined the diagnosis for both the original and augmented ultrasound images, supplying the images with their corresponding diagnostic labels. After the image analysis stage, the retrieved data (the analysis results) were processed and organized into a dataset, as illustrated in Table 1. This dataset was used as the input for training the neural network, allowing it to determine the final classification outcome: zero for benign tumors and one for malignant tumors.
Model Evaluation Results
Table 2 presents the diagnosis performance of the system when the three parameters were treated separately; the feed-forward neural network was trained with each parameter individually. The maximum accuracy (82%) was obtained using the angles parameter with a hidden layer size of eight.
Table 3 presents the diagnosis performance of the system when two parameters were used together; the neural network was trained with the area and curly level (tortuosity) parameters. The best result, 75%, was obtained with a hidden layer size of eight.
Table 4 presents the diagnosis performance of the system when the neural network was trained with all three input parameters. The performance increased from 75% to 95%.
Table 5 presents the diagnosis performance of the system when using different training methods, namely Levenberg–Marquardt and scaled conjugate gradient.
To provide a more detailed evaluation of model performance, additional classification metrics were computed. The model achieved a sensitivity of 96.5%, specificity of 97.2%, precision of 97.1%, F1-score of 96.8%, and AUC score of 0.985, demonstrating strong diagnostic capability.
Table 6 presents the confusion matrix for the classification results, showing the distribution of true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs).
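As a worked check, the standard metrics follow directly from the confusion-matrix counts in Table 6; the short MATLAB snippet below computes them from those counts (small differences from the reported values may stem from rounding or the exact evaluation subset).

```matlab
% Deriving classification metrics from the Table 6 confusion-matrix counts.
TP = 344; TN = 313; FP = 9; FN = 12;
sensitivity = TP / (TP + FN);                      % recall for the malignant class
specificity = TN / (TN + FP);
precision   = TP / (TP + FP);
f1          = 2 * precision * sensitivity / (precision + sensitivity);
accuracy    = (TP + TN) / (TP + TN + FP + FN);
fprintf('Sens %.1f%%  Spec %.1f%%  Prec %.1f%%  F1 %.1f%%  Acc %.1f%%\n', ...
        100*[sensitivity, specificity, precision, f1, accuracy]);
```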
4. Discussion
This study highlights the efficacy of combining Doppler ultrasound imaging with machine learning algorithms for classifying lung lesions, providing a dependable, non-invasive alternative to conventional approaches. The modular methodology employed to assess the vascularization traits, namely blood vessel area, tortuosity, and orientation, was important in attaining precise lesion classification.
The blood vessel area assessment module evaluates the vascularization pattern inside the region of interest (ROI). The findings indicated that malignant tumors frequently display diminished or uneven vascularization patterns inside the ROI. Although this metric alone does not produce high classification accuracy, it is an essential component of the multi-parameter system, as seen in the enhanced performance of the feed-forward neural network (FNN) when it was integrated with the other vascularization factors. The system's capacity to measure these changes was confirmed with a performance accuracy of 52% when the parameter was used independently.
The tortuosity index module was essential in detecting abnormal vascularization patterns. Malignant lesions were frequently linked to highly curved and irregular vascular pathways, indicative of tumor-induced angiogenesis. The module assessed these abnormalities by juxtaposing actual vessel trajectories with their linear approximations. Although this module has shown little efficacy as an independent diagnostic instrument, its impact on the collective analysis was substantial. The results corroborate the idea that heightened vascular tortuosity is a significant marker of malignancy.
The blood vessel orientation module had the best standalone performance of the three, attaining an accuracy of 82% as an independent diagnostic instrument. This outcome corresponds with clinical observations that malignant tumors display aligned and confined vascular orientations, leading to reduced mean square deviation (MSD) values. In contrast, benign lesions, characterized by a more radial arrangement of arteries, demonstrated elevated MSD values. Quantifying the distribution of vessel slopes inside the ROI offers a reliable criterion for differentiating between diseased and normal vascularization patterns. The module's reliance on accurate skeletonization underscores the necessity for meticulous preprocessing to preserve its efficacy.
The integration of all three vascularization factors resulted in a substantial enhancement of the FNN's performance, with an accuracy of 95%. This finding highlights the significance of integrating complementary attributes to improve diagnostic precision. Moreover, the use of multiple training methods, including the Levenberg–Marquardt algorithm, achieved a maximum performance of 98%, further reinforcing the system's reliability. Comparatively, AI-driven CT-based models such as Rayscape and Optellum achieve sensitivities exceeding 90% but require high computational resources and expose patients to radiation risks. Our method, integrating Doppler ultrasound with machine learning, achieves 98% accuracy while offering a non-invasive, cost-effective, and radiation-free alternative. However, further validation through direct benchmarking with CT-based AI models is necessary to fully assess its clinical utility.
Table 7 presents a comparative analysis of our model's performance against traditional Doppler ultrasound diagnosis and AI-driven CT-/MRI-based methods [28].
The proposed approach is well suited for resource-limited clinical settings, where CT and MRI scanners may be unavailable due to cost and infrastructure constraints. Doppler ultrasound is widely accessible, portable, and radiation-free, making it a practical diagnostic alternative in such environments. Unlike AI-based CT analysis, which demands high computational power, our FNN-based model runs efficiently on standard workstations, enabling real-world deployment. Future studies will focus on validating its feasibility across diverse clinical settings, including low-resource healthcare facilities.
A key limitation of this study is the relatively small size of the original dataset (565 images). While data augmentation was applied to expand the dataset to 3390 images, it is important to note that augmentation does not introduce new patient cases or variations in pathology, which are essential for improving model generalizability. The current dataset may not fully capture the diverse range of lung lesions encountered in real-world clinical settings.
To address this limitation, future work will focus on expanding the dataset by incorporating Doppler ultrasound images from multiple clinical centers. This expansion will allow for a broader representation of different lung lesion types, cancer stages, and patient demographics, improving the model’s generalizability and clinical applicability. Additionally, future studies will explore automated segmentation techniques to reduce dependence on manual annotation and will use multiple ultrasound devices, further enhancing the robustness of the proposed system.
5. Conclusions
This research presents an innovative approach for lung lesion diagnosis based on the integration of Doppler ultrasound imaging and a feed-forward neural network (FNN). The technique utilizes three vascularization parameters (blood vessel area, tortuosity index, and orientation) to deliver a highly accurate, radiation-free, and cost-effective lung cancer diagnostic tool.
The modular methodology facilitated an in-depth examination of certain vascularization metrics, with the orientation module achieving the highest independent performance at 82% accuracy. Although the blood vessel area and tortuosity index modules exhibited lower independent accuracies (52% and 53%, respectively), their contributions were essential in improving the performance of the integrated system. Upon integrating all three parameters, the FNN attained a classification accuracy of 95%, which was then enhanced to 98% using advanced training techniques such as the Levenberg–Marquardt algorithm.
While AI-driven CT-based models demonstrate high accuracy, they require specialized imaging infrastructure and expose patients to radiation risks. In contrast, the proposed method achieves 98% classification accuracy while being accessible, portable, and suitable for resource-limited settings.
The proposed system signifies a substantial advancement in the early detection and classification of lung abnormalities. However, the dependence on manual segmentation in preprocessing creates variability, underscoring the necessity for automated segmentation methods. Furthermore, augmenting the dataset to encompass varied patient demographics and investigating alternative imaging techniques such as contrast-enhanced ultrasound (CEUS) may boost the system’s relevance and efficacy.
While these findings demonstrate strong diagnostic potential, further improvements are necessary to enhance clinical applicability and scalability such as: (1) automated segmentation for improved reproducibility, efficiency, and consistency; (2) validation on larger and more diverse datasets to ensure better model generalization; and even (3) integration with portable ultrasound devices, making it a viable point-of-care diagnostic tool.
Author Contributions
Conceptualization, R.-I.C.; Data curation, A.S.; Formal analysis, R.R.-B. and C.A.; Funding acquisition, R.R.-B. and C.A.; Investigation, R.-I.C.; Methodology, R.R.-B., R.-I.C. and A.C.; Project administration, R.R.-B.; Resources, R.R.-B. and R.-I.C.; Software, A.S.; Validation, A.S., R.R.-B. and A.C.; Supervision, R.R.-B.; Visualization, R.-I.C. and C.A.; Writing—original draft, A.S. and R.R.-B.; Writing—review and editing, R.R.-B. and A.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Ethical review and approval were waived for this study due to its retrospective design, which relied on pre-existing ultrasound images collected as part of a previously approved clinical study (approval no. 30/2016, issued by the Ethical Committee of the "Iuliu Haţieganu" University of Medicine and Pharmacy Cluj-Napoca).
Informed Consent Statement
Patient consent was waived due to the retrospective nature of the study, which utilized pre-existing, de-identified ultrasound images obtained from a previously approved clinical investigation.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.
Acknowledgments
This research was supported by the project “Lung Cancer Diagnosis and Monitoring System Based on Transthoracic Ultrasonography and Neural Networks”, funded by EIT Health, under EIT Health RIS 2019 INNOVATION CALL, contract no. 21528/07.08.2019; by the project “New Smart and Adaptive Robotics Solutions for Personalized Minimally Invasive Surgery in Cancer Treatment”—ATHENA; funded by the European Union—NextGenerationEU; and by the Romanian Government under the National Recovery and Resilience Plan for Romania, contract no. 760072/23.05.2023, code CF 116/15.11.2022, through the Romanian Ministry of Research, Innovation and Digitalization, within Component 9, Investment I8.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- World Health Organization. Cancer. Available online: https://www.who.int/news-room/fact-sheets/detail/lung-cancer (accessed on 1 February 2024).
- Mikhak, Z.; Pedersen, P.C. Acoustic attenuation properties of the lung: An open question. Ultrasound Med. Biol. 2002, 28, 1209–1216. [Google Scholar] [CrossRef]
- Gargani, L.; Volpicelli, G. How I do it: Lung ultrasound. Cardiovasc. Ultrasound 2014, 12, 25. [Google Scholar] [CrossRef] [PubMed]
- Chira, R.; Chira, A.; Mircea, P.A. Intrathoracic tumors in contact with the chest wall—ultrasonographic and computed tomography comparative evaluation. Med. Ultrason. 2012, 14, 115–119. [Google Scholar]
- Ngaha, T.Y.S.; Zhilenkova, A.V.; Essogmo, F.E.; Uchendu, I.K.; Abah, M.O.; Fossa, L.T.; Sangadzhieva, Z.D.; Sanikovich, D.; Rusanov, S.; Pirogova, N.; et al. Angiogenesis in Lung Cancer: Understanding the Roles of Growth Factors. Cancers 2023, 15, 4648. [Google Scholar] [CrossRef]
- Chira, R.I.; Chira, A.; Mircea, P.A. Transthoracic Ultrasonography: Advantages and Limitations in the Assessment of Lung Cancer. In Advances in Medical Imaging and Digital Signal Processing; InTech: Rijeka, Croatia, 2017. [Google Scholar]
- Sridharan, B.; Sharma, A.K.; Lim, H.G. The Role of Ultrasound in Cancer and Cancer-Related Pain—A Bibliometric Analysis and Future Perspectives. Sensors 2023, 23, 7290. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.; Xu, Z.; Huang, H.; Zhou, X.; Xian, M. Application of quantitative contrast-enhanced ultrasound for evaluation and guiding biopsy of peripheral pulmonary lesions: A preliminary study. Clin. Radiol. 2020, 75, 79.e19–79.e24. [Google Scholar] [CrossRef]
- Yusuf, G.T.; Fang, C.; Tran, S.; Rao, D.; Bartlett-Pestell, S.; Stefanidis, K.; Huang, D.Y.; Sidhu, P.S. A pictorial review of the utility of CEUS in thoracic biopsies. Insights Imaging 2021, 12, 9. [Google Scholar] [CrossRef]
- Liang, J.; Wang, D.; Li, H.; Zhao, S.; Chen, M.; Li, H.; Ding, Z.; Liu, J.; Liu, L. Contrast-enhanced ultrasound for needle biopsy of thoracic lesions. Oncol. Lett. 2020, 20, 75. [Google Scholar] [CrossRef] [PubMed]
- Schauer, M.I.; Jung, E.-M.; da Silva, N.P.B.; Akers, M.; Loch, E.; Markowiak, T.; Piler, T.; Larisch, C.; Neu, R.; Stroszczynski, C.; et al. Intraoperative Contrast-Enhanced Ultrasonography (Io-CEUS) in Minimally Invasive Thoracic Surgery for Characterization of Pulmonary Tumours: A Clinical Feasibility Study. Cancers 2023, 15, 3854. [Google Scholar] [CrossRef]
- Rayscape. Available online: https://rayscape.ai/ (accessed on 10 December 2024).
- Optellum. Available online: https://optellum.com/ (accessed on 15 December 2024).
- Both, R.; Socaci, C.; Palagos, A. Machine-Learning Based Elastography Analysis in Predicting Lymph Node Metastasis. In Proceedings of the 2024 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 16 May 2024. [Google Scholar] [CrossRef]
- Ashhar, S.M.; Mokri, S.S.; Rahni, A.A.A.; Huddin, A.B.; Zulkarnain, N.; Azmi, N.A.; Mahaletchumy, T. Comparison of deep learning convolutional neural network (CNN) architectures for CT lung cancer classification. Int. J. Adv. Trends Eng. Educ. 2021, 10, 126–134. [Google Scholar] [CrossRef]
- Paing, M.P.; Hamamoto, K.; Tungjitkusolmun, S.; Pintavirooj, C. Automatic Detection and Staging of Lung Tumors Using Locational Features and Double-Staged Classifications. Appl. Sci. 2019, 9, 2329. [Google Scholar] [CrossRef]
- Alabdulwahab, A.; Park, H.-C.; Jeong, H.; Lee, S.-W. An Efficient One-Dimensional Texture Representation Approach for Lung Disease Diagnosis. Appl. Sci. 2024, 14, 10661. [Google Scholar] [CrossRef]
- Muñoz, M.; Rubio, A.; Cosarinsky, G.; Cruza, J.F.; Camacho, J. Deep Learning-Based Algorithms for Real-Time Lung Ultrasound Assisted Diagnosis. Appl. Sci. 2024, 14, 11930. [Google Scholar] [CrossRef]
- Dunn, B.; Pierobon, M.; Wei, Q. Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis. Bioengineering 2023, 10, 690. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Zhang, C.; Zhang, Y.; Li, J. An Automated Diagnosis Method for Lung Cancer Target Detection and Subtype Classification-Based CT Scans. Bioengineering 2024, 11, 767. [Google Scholar] [CrossRef]
- Song, X.; Liang, B.; Wang, C.; Shi, S. Clinical Value of Color Doppler Ultrasound Combined with Serum CA153, CEA and TSGF Detection in the Diagnosis of Breast Cancer. Exp. Ther. Med. 2020, 20, 1822–1828. [Google Scholar] [CrossRef] [PubMed]
- Abian, A.I.; Khan Raiaan, M.A.; Karim, A.; Azam, S.; Fahad, N.M.; Shafiabady, N.; Yeo, K.C.; De Boer, F. Automated Diagnosis of Respiratory Diseases from Lung Ultrasound Videos Ensuring XAI: An Innovative Hybrid Model Approach. Front. Comput. Sci. 2024, 6, 1438126. [Google Scholar] [CrossRef]
- Althubaity, D.D.; Alotaibi, F.F.; Osman, A.M.A.; Al-Khadher, M.A.; Abdalla, Y.H.A.; Alwesabi, S.A.; Abdulrahman, E.E.H.; Alhemairy, M.A. Automated Lung Cancer Segmentation in Tissue Micro Array Analysis Histopathological Images Using a Prototype of Computer-Assisted Diagnosis. J. Pers. Med. 2023, 13, 388. [Google Scholar] [CrossRef]
- Madero Orozco, H.; Vergara Villegas, O.O.; Cruz Sánchez, V.G.; Ochoa Domínguez, H.; Nandayapa Alfaro, M. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine. BioMed Eng. Online 2015, 14, 9. [Google Scholar] [CrossRef]
- Shrestha, B.; Stern, N.B.; Zhou, A.; Dunn, A.; Porter, T. Current trends in the characterization and monitoring of vascular response to cancer therapy. Cancer Imaging 2024, 24, 143. [Google Scholar] [CrossRef]
- Sekiya, M. Chest ultrasound for lung cancer: Present and future. J. Med. Ultrason. 2024, 51, 393–395. [Google Scholar] [CrossRef] [PubMed]
- Supervisely. Unified OS for Computer Vision. Available online: https://supervise.ly (accessed on 14 October 2024).
- Wulaningsih, W.; Villamaria, C.; Akram, A.; Benemile, J.; Croce, F.; Watkins, J. Deep Learning Models for Predicting Malignancy Risk in CT-Detected Pulmonary Nodules: A Systematic Review and Meta-analysis. Lung 2024, 202, 625–636. [Google Scholar] [CrossRef] [PubMed]
Figure 1. System architecture. (a1,b1,c1) show input Doppler US images, while (a2,b2,c2) display Doppler US input images with defined regions of interest (ROI). The system extracts three vascularization parameters: blood vessel area percentage, vessel tortuosity, and vessel orientation index. These features are processed by a feed-forward neural network, which classifies the lesion as benign or malignant.
Figure 2. Blood vessels tortuosity index exemplified on a curly lung vein: (a) approximated blood vein pixels; (b) approximated blood vein with a green line, first-degree equation; (c) computed tortuosity index as a distance (yellow) between the approximated blood vein first-degree equation (green line) and the initial blood vein (white line).
Figure 3. Filters applied on the initial image: (a) initial Doppler ultrasound image; (b) Arctic filter applied on the initial image; (c) Burlesque filter; (d) Zeke filter.
Figure 4. Doppler mode lung US processed: (a) initial lung Doppler ultrasound of a lung lesion where vascularization is present; (b) selected region of interest (ROI) for analysis with Amask representing the ROI area; (c) identified vascularity in the ROI and the computed vessels area.
Figure 5. Blood vessels skeleton: (a) initial image of the lung Doppler ultrasound, containing lung veins and arteries; (b) identified blood vessels with white color; (c) skeleton, computed with morphological operations from figure (b).
Table 1. Doppler mode US analysis parameter values (vascularization report).
Area | Curly Level | Angles Deviation | Result |
---|---|---|---|
33 | 10.3 | 50 | 0 |
54 | 13 | 75 | 0 |
56 | 13 | 75 | 0 |
26 | 6 | 55 | 1 |
16 | 10 | 89 | 1 |
15 | 1 | 100 | 1 |
Table 2. Diagnosis module accuracy for each parameter and variable hidden layer size.
Parameter | Hidden Layer Sizes | Performance |
---|---|---|
Area | 8 | 52% |
Curly | 8 | 53% |
Angles | 8 | 82% |
Area | 7 | 48% |
Curly | 7 | 37% |
Angles | 7 | 77% |
Table 3. Diagnosis module accuracy for the combination of 2 parameters and variable hidden layer size.
Parameters | Hidden Layer Sizes | Performance |
---|---|---|
Area + Curly | 8 | 75% |
Area + Curly | 7 | 60% |
Area + Curly | 9 | 62% |
Table 4. Diagnosis module accuracy for all parameters and variable hidden layer size.
Parameters | Hidden Layer Sizes | Performance |
---|---|---|
Area + Curly + Angles | 8 | 95% |
Area + Curly + Angles | 9 | 82% |
Table 5. Diagnosis module accuracy for all parameters and different training methods.
Training Method | Neural Network | Performance |
---|---|---|
Levenberg–Marquardt | Feed-forward network | 98% |
Scaled conjugate gradient | Feed-forward network | 97% |
Table 6. Confusion matrix.
Actual/Predicted | Benign (Predicted) | Malignant (Predicted) | Total Cases |
---|---|---|---|
Benign (Actual) | 313 (TN) | 9 (FP) | 322 |
Malignant (Actual) | 12 (FN) | 344 (TP) | 356 |
Table 7. Performance comparative analysis.
Method | Accuracy | Sensitivity | Specificity | AUC | Radiation |
---|---|---|---|---|---|
Doppler US + FNN | 98% | 96.5% | 97.2% | 0.985 | NO |
CT-based AI | 92–95% | 91–94% | 90–96% | 0.95 | YES |
MRI-Based AI | 94–97% | 92–96% | 91–98% | 0.96 | NO |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).