Search Results (42)

Search Parameters:
Keywords = automatic fatality detection

14 pages, 1452 KB  
Article
Ensemble Method of Pre-Trained Models for Classification of Skin Lesion Images
by Umadevi V, Joshi Manisha Shivaram, Shankru Guggari and Kingsley Okoye
Appl. Sci. 2025, 15(24), 13083; https://doi.org/10.3390/app152413083 - 12 Dec 2025
Viewed by 495
Abstract
Human beings worldwide are affected by different types of skin diseases. Automatic identification of skin disease from dermoscopy images has proved effective for diagnosis and treatment, reducing the fatality rate. The objective of this work is to demonstrate the efficiency of three pre-trained deep learning models, namely MobileNet, EfficientNetB0, and DenseNet121, combined with ensembling techniques for the classification of skin lesion images. This study considers the HAM10000 dataset, which consists of n = 10,015 images in seven classes with a large class imbalance. The study makes a two-fold contribution to skin lesion classification methodology. First, three pre-trained deep learning models are modified to group skin lesions into seven types. Second, a Weighted Grid Search algorithm is proposed to address the class imbalance problem and improve the accuracy of the base classifiers. The results showed that the weighted ensembling method achieved a 3.67% average improvement in accuracy, precision, and recall, a 3.33% average improvement in F1-score, and a 7% average improvement in Matthews Correlation Coefficient (MCC) compared to the base classifiers. Among the three models, the modified MobileNet obtained the highest ROC-AUC score, 92.5%, for skin lesion categorization, outperforming EfficientNetB0 and DenseNet121. The implications of these results are that deep learning classification techniques are effective for the diagnosis and treatment of skin lesion diseases, reducing the fatality rate and enabling early warnings. Full article
(This article belongs to the Special Issue Process Mining: Theory and Applications)
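The paper's Weighted Grid Search ensembling is not spelled out in the abstract; as a rough illustration, a weighted soft-voting scheme with grid-searched weights might look like the following Python sketch (the function names, the coarse weight grid, and the accuracy objective are assumptions, not the authors' code).

```python
from itertools import product

def weighted_ensemble(probs_per_model, weights):
    """Combine per-class probability vectors from several models
    with one scalar weight per model, then renormalize."""
    n_classes = len(probs_per_model[0])
    combined = [0.0] * n_classes
    for w, probs in zip(weights, probs_per_model):
        for c, p in enumerate(probs):
            combined[c] += w * p
    total = sum(combined)
    return [v / total for v in combined]

def grid_search_weights(val_preds, val_labels, step=0.1):
    """Search weight tuples on a coarse grid, keeping the tuple with
    the best validation accuracy. val_preds[m][i] is model m's
    probability vector for validation sample i."""
    best_w, best_acc = None, -1.0
    grid = [round(step * k, 10) for k in range(1, int(1 / step) + 1)]
    for w in product(grid, repeat=len(val_preds)):
        correct = 0
        for i, label in enumerate(val_labels):
            fused = weighted_ensemble([m[i] for m in val_preds], w)
            if fused.index(max(fused)) == label:
                correct += 1
        acc = correct / len(val_labels)
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc
```

Because the fused vector is renormalized, only the relative sizes of the weights matter, so a coarse positive grid suffices for a sketch like this.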

23 pages, 5751 KB  
Article
Automatic Diagnosis, Classification, and Segmentation of Abdominal Aortic Aneurysm and Dissection from Computed Tomography Images
by Hakan Baltaci, Sercan Yalcin, Muhammed Yildirim and Harun Bingol
Diagnostics 2025, 15(19), 2476; https://doi.org/10.3390/diagnostics15192476 - 27 Sep 2025
Viewed by 1450
Abstract
Background/Objectives: Diagnosis of abdominal aortic aneurysm and abdominal aortic dissection (AAA and AAD) is of strategic importance, as cardiovascular disease has fatal implications worldwide. This study presents a novel deep learning-based approach for the accurate and efficient diagnosis of abdominal aortic aneurysms (AAAs) and aortic dissections (AADs) from CT images. Methods: Our proposed convolutional neural network (CNN) architecture effectively extracts relevant features from CT scans and classifies regions as normal or diseased. Additionally, the model accurately delineates the boundaries of detected aneurysms and dissections, aiding clinical decision-making. A pyramid scene parsing network has been built into a hybrid method. The layer block after the classification layer is divided into two branches: one determines whether an AAA or AAD region is present in the abdominal CT image, and the other delineates the borders of the detected diseased region. Results: Thus, both detection and segmentation are performed for AAA and AAD. Python was used to assess the accuracy and performance of the proposed strategy. Average accuracy rates of 83.48%, 86.9%, 88.25%, and 89.64% were achieved using ResDenseUNet, INet, C-Net, and the proposed strategy, respectively. Likewise, intersection over union (IoU) values of 79.24%, 81.63%, 82.48%, and 83.76% were achieved using ResDenseUNet, INet, C-Net, and the proposed method, respectively. Conclusions: The proposed strategy is a promising technique for automatically diagnosing AAA and AAD, thereby reducing the workload of cardiovascular surgeons. Full article
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2026)

22 pages, 3356 KB  
Article
MS-LTCAF: A Multi-Scale Lead-Temporal Co-Attention Framework for ECG Arrhythmia Detection
by Na Feng, Chengwei Chen, Peng Du, Chengrong Gong, Jianming Pei and Dong Huang
Bioengineering 2025, 12(9), 1007; https://doi.org/10.3390/bioengineering12091007 - 22 Sep 2025
Viewed by 1338
Abstract
Cardiovascular diseases are the leading cause of death worldwide, with arrhythmia being a prevalent and potentially fatal condition. The multi-lead electrocardiogram (ECG) is the primary tool for detecting arrhythmias. However, existing detection methods have shortcomings: they cannot dynamically integrate inter-lead correlations with multi-scale temporal changes in cardiac electrical activity. They also lack mechanisms to simultaneously focus on key leads and time segments, and thus fail to address multi-lead redundancy or capture comprehensive spatial-temporal relationships. To solve these problems, we propose a Multi-Scale Lead-Temporal Co-Attention Framework (MS-LTCAF). Our framework incorporates two key components: a Lead-Temporal Co-Attention Residual (LTCAR) module that dynamically weights the importance of leads and time segments, and a multi-scale branch structure that integrates features of cardiac electrical activity across different time periods. Together, these components enable the framework to automatically extract and integrate features within a single lead, between different leads, and across multiple time scales from ECG signals. Experimental results demonstrate that MS-LTCAF outperforms existing methods. On the PTB-XL dataset, it achieves an AUC of 0.927, approximately 1% higher than the current optimal baseline model (DNN_zhu’s 0.918). On the LUDB dataset, it ranks first in terms of AUC (0.942), accuracy (0.920), and F1-score (0.745). Furthermore, the framework can focus on key leads and time segments through the co-attention mechanism, while the multi-scale branches help capture both the details of local waveforms (such as QRS complexes) and the overall rhythm patterns (such as RR intervals). Full article
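The lead-temporal co-attention idea, weighting leads and time segments before pooling, can be illustrated with a deliberately simplified sketch. In the real framework the attention scores are learned from the signal; here they are passed in directly, and all names are hypothetical.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def lead_temporal_attention(x, lead_scores, time_scores):
    """x[l][t] holds a feature value for lead l at time segment t.
    Softmaxed lead and time scores form attention weights that pool
    the grid into one context value, emphasizing informative leads
    and segments while downweighting redundant ones."""
    a_lead = softmax(lead_scores)
    a_time = softmax(time_scores)
    context = 0.0
    for l, row in enumerate(x):
        for t, v in enumerate(row):
            context += a_lead[l] * a_time[t] * v
    return context, a_lead, a_time
```

With uniform scores this reduces to a plain mean over the lead-time grid, which is a useful sanity check.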

24 pages, 7605 KB  
Article
Pedestrian-Crossing Detection Enhanced by CyclicGAN-Based Loop Learning and Automatic Labeling
by Kuan-Chieh Wang, Chao-Li Meng, Chyi-Ren Dow and Bonnie Lu
Appl. Sci. 2025, 15(12), 6459; https://doi.org/10.3390/app15126459 - 8 Jun 2025
Cited by 1 | Viewed by 1679
Abstract
Pedestrian safety at crosswalks remains a critical concern as traffic accidents frequently result from drivers’ failure to yield, leading to severe injuries or fatalities. In response, various jurisdictions have enacted pedestrian priority laws to regulate driver behavior. Nevertheless, intersections lacking clear traffic signage and environments with limited visibility continue to present elevated risks. The scarcity and difficulty of collecting data under such complex conditions pose significant challenges to the development of accurate detection systems. This study proposes a CyclicGAN-based loop-learning framework, in which the learning process begins with a set of manually annotated images used to train an initial labeling model. This model is then applied to automatically annotate newly generated synthetic images, which are incorporated into the training dataset for subsequent rounds of model retraining and image generation. Through this iterative process, the model progressively refines its ability to simulate and recognize diverse contextual features, thereby enhancing detection performance under varying environmental conditions. The experimental results show that environmental variations—such as daytime, nighttime, and rainy conditions—substantially affect the model performance in terms of F1-score. Training with a balanced mix of real and synthetic images yields an F1-score comparable to that obtained using real data alone. These results suggest that CycleGAN-generated images can effectively augment limited datasets and enhance model generalization. The proposed system may be integrated with in-vehicle assistance platforms as a supportive tool for pedestrian-crossing detection in data-scarce environments, contributing to improved driver awareness and road safety. Full article
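The loop-learning procedure (train a labeler, synthesize images, auto-label them, retrain) can be outlined as a generic Python skeleton. Every callable below is a stand-in for a component the paper implements with CycleGAN-style generation and a detection model, not their actual API.

```python
def loop_learning(real_images, real_labels, rounds,
                  train, generate, predict, confident):
    """One loop-learning pass: train a labeler on the current set,
    synthesize new images, auto-label them, and keep only confidently
    labeled ones for the next round's training set."""
    images, labels = list(real_images), list(real_labels)
    model = train(images, labels)
    for _ in range(rounds):
        synth = generate(model)            # e.g. CycleGAN-style synthesis
        for img in synth:
            label, conf = predict(model, img)
            if confident(conf):            # filter noisy pseudo-labels
                images.append(img)
                labels.append(label)
        model = train(images, labels)      # retrain on the augmented set
    return model, images, labels
```

The confidence filter is the key design choice: without it, early labeling mistakes compound across rounds instead of being refined away.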

23 pages, 2818 KB  
Article
Casualty Analysis of the Drivers in Traffic Accidents in Turkey: A CHAID Decision Tree Model
by Zeliha Cagla Kuyumcu, Hakan Aslan and Nilufer Yurtay
Appl. Sci. 2024, 14(24), 11693; https://doi.org/10.3390/app142411693 - 14 Dec 2024
Cited by 1 | Viewed by 6911
Abstract
The number of traffic accidents in a region rises as the vehicle–km value in traffic increases. Furthermore, since automobiles make up the highest proportion of vehicles in traffic, they carry the greatest weight in traffic accidents. This study aims to establish a model to predict the driver's status (survived–injured–dead) as the result of a fatal-injury accident. Vehicle size has a significant and dominant effect on accident outcomes, concealing other important driver-related factors that must be taken into consideration with regard to driver casualty levels. Consequently, this paper focuses on automobiles, the vehicle type most frequently involved in accidents. A dataset of accidents that occurred in Turkey between 2015 and 2021 was employed to analyze the effects of driver attributes on casualty outcomes for automobile-related accidents alone. The uniqueness of this research stems from being the first study in Turkey to investigate the severity levels of drivers involved in automobile-related accidents. In addition, this study highlights preventable factors that have been investigated relatively less in the literature in order to establish a successful model. The difference in accuracy between the models obtained through dominant and investigated factors is only 5.0%. Random Forests, Naïve Bayes, and CHAID (Chi-squared Automatic Interaction Detection) models were established and compared. The results revealed that the CHAID model produced the most successful outcomes among them. Driver fault, gender, education level, and age, along with alcohol usage and surface condition, were found to be significant factors influencing the severity of traffic accidents. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Transportation Engineering)
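CHAID grows its tree by choosing, at each node, the predictor most associated with the outcome under a chi-squared test. A minimal Python sketch of that split-selection step follows; full CHAID also merges predictor categories and applies Bonferroni-adjusted p-values, which this illustration omits, and the variable names are hypothetical.

```python
from collections import Counter

def chi_squared(xs, ys):
    """Pearson chi-squared statistic for the contingency table of two
    categorical sequences (no p-value or category-merging step)."""
    n = len(xs)
    row = Counter(xs)
    col = Counter(ys)
    cell = Counter(zip(xs, ys))
    stat = 0.0
    for r in row:
        for c in col:
            expected = row[r] * col[c] / n
            observed = cell.get((r, c), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def best_split(predictors, outcome):
    """Choose the predictor most associated with the outcome,
    i.e. the one CHAID would split on at this node."""
    return max(predictors, key=lambda name: chi_squared(predictors[name], outcome))
```

A production implementation would use a library routine such as `scipy.stats.chi2_contingency` to obtain p-values rather than comparing raw statistics across predictors with different table sizes.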

14 pages, 2999 KB  
Article
AI-Aided Robotic Wide-Range Water Quality Monitoring System
by Ameen Awwad, Ghaleb A. Husseini and Lutfi Albasha
Sustainability 2024, 16(21), 9499; https://doi.org/10.3390/su16219499 - 31 Oct 2024
Cited by 4 | Viewed by 3931
Abstract
Waterborne illnesses lead to millions of fatalities worldwide each year, particularly in developing nations. In this paper, we introduce a comprehensive system designed for the autonomous early detection of viral outbreaks transmitted through water to ensure sustainable access to healthy water resources, especially in remote areas. The system utilizes an autonomous water quality monitoring setup consisting of an airborne water sample collector, an autonomous sample processor, and an artificial intelligence-aided microscopic detector for risk assessment. The proposed system replaces the time-consuming conventional monitoring protocol by automating sample collection, sample processing, and pathogen detection. Furthermore, it provides a safer processing method against the spillage of contaminated liquids and potential resultant aerosols during the heat fixation of specimens. A morphological image processing technique of light microscopic images is used to segment images, assisting in selecting a unified appropriate input segment size based on individual blob areas of different bacterial cultures. The dataset included harmful pathogenic bacteria (A. baumanii, E. coli, and P. aeruginosa) and harmless ones found in drinking water and wastewater (E. faecium, L. paracasei, and Micrococcus spp.). The segmented labeled dataset was used to train deep convolutional neural networks to automatically detect pathogens in microscopic images. To minimize prediction error, Bayesian optimization was applied to tune the hyperparameters of the networks’ architecture and training settings. Different convolutional networks were tested in accordance with different required output labels. The neural network used to classify bacterial cultures as harmful or harmless achieved an accuracy of 99.7%. The neural network used to identify the specific types of bacteria achieved a cumulative accuracy of 93.65%. Full article

15 pages, 5499 KB  
Article
Correlating Histopathological Microscopic Images of Creutzfeldt–Jakob Disease with Clinical Typology Using Graph Theory and Artificial Intelligence
by Carlos Martínez, Susana Teijeira, Patricia Domínguez, Silvia Campanioni, Laura Busto, José A. González-Nóvoa, Jacobo Alonso, Eva Poveda, Beatriz San Millán and César Veiga
Mach. Learn. Knowl. Extr. 2024, 6(3), 2018-2032; https://doi.org/10.3390/make6030099 - 7 Sep 2024
Cited by 1 | Viewed by 3006
Abstract
Creutzfeldt–Jakob disease (CJD) is a rare, degenerative, and fatal brain disorder caused by abnormal proteins called prions. This research introduces a novel approach combining AI and graph theory to analyze histopathological microscopic images of brain tissues affected by CJD. The detection and quantification of spongiosis, characterized by the presence of vacuoles in the brain tissue, plays a crucial role in aiding the accurate diagnosis of CJD. The proposed methodology employs image processing techniques to identify these pathological features in high-resolution medical images. By developing an automatic pipeline for the detection of spongiosis, we aim to overcome some limitations of manual feature extraction. The results demonstrate that our method correctly identifies and characterizes spongiosis and allows the extraction of features that will help to better understand the spongiosis patterns in different CJD patients. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

21 pages, 18332 KB  
Article
Automated Region of Interest-Based Data Augmentation for Fallen Person Detection in Off-Road Autonomous Agricultural Vehicles
by Hwapyeong Baek, Seunghyun Yu, Seungwook Son, Jongwoong Seo and Yongwha Chung
Sensors 2024, 24(7), 2371; https://doi.org/10.3390/s24072371 - 8 Apr 2024
Cited by 2 | Viewed by 2498
Abstract
Due to the global population increase and the recovery of agricultural demand after the COVID-19 pandemic, the importance of agricultural automation and autonomous agricultural vehicles is growing. Fallen person detection is critical to preventing fatal accidents during autonomous agricultural vehicle operations. However, there is a challenge due to the relatively limited dataset for fallen persons in off-road environments compared to on-road pedestrian datasets. To enhance the generalization performance of fallen person detection off-road using object detection technology, data augmentation is necessary. This paper proposes a data augmentation technique called Automated Region of Interest Copy-Paste (ARCP) to address the issue of data scarcity. The technique involves copying real fallen person objects obtained from public source datasets and then pasting the objects onto a background off-road dataset. Segmentation annotations for these objects are generated using YOLOv8x-seg and Grounded-Segment-Anything, respectively. The proposed algorithm is then applied to automatically produce augmented data based on the generated segmentation annotations. The technique encompasses segmentation annotation generation, Intersection over Union-based segment setting, and Region of Interest configuration. When the ARCP technique is applied, significant improvements in detection accuracy are observed for two state-of-the-art object detectors: anchor-based YOLOv7x and anchor-free YOLOv8x, showing an increase of 17.8% (from 77.8% to 95.6%) and 12.4% (from 83.8% to 96.2%), respectively. This suggests high applicability for addressing the challenges of limited datasets in off-road environments and is expected to have a significant impact on the advancement of object detection technology in the agricultural industry. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)
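The core of a copy-paste augmentation with IoU-based placement, as described above, can be sketched as follows; the rejection-sampling strategy and the overlap threshold are illustrative assumptions, not the ARCP implementation.

```python
import random

def iou(a, b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def place_object(bg_w, bg_h, obj_w, obj_h, existing,
                 max_iou=0.1, tries=50, rng=random):
    """Sample a paste position whose box overlaps each existing
    annotation by at most max_iou; return None if no valid position
    is found within the try budget."""
    for _ in range(tries):
        x = rng.randint(0, bg_w - obj_w)
        y = rng.randint(0, bg_h - obj_h)
        box = (x, y, x + obj_w, y + obj_h)
        if all(iou(box, e) <= max_iou for e in existing):
            return box
    return None
```

The actual pipeline additionally uses segmentation masks (from YOLOv8x-seg and Grounded-Segment-Anything) so the pasted object blends along its silhouette rather than its bounding box.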

19 pages, 8194 KB  
Article
Efficient Vertical Structure Correlation and Power Line Inference
by Paul Flanigen, Ella Atkins and Nadine Sarter
Sensors 2024, 24(5), 1686; https://doi.org/10.3390/s24051686 - 5 Mar 2024
Cited by 1 | Viewed by 1780
Abstract
High-resolution three-dimensional data from sensors such as LiDAR are sufficient to find power line towers and poles but do not reliably map relatively thin power lines. In addition, repeated detections of the same object can lead to confusion while data gaps ignore known obstacles. The slow or failed detection of low-salience vertical obstacles and associated wires is one of today’s leading causes of fatal helicopter accidents. This article presents a method to efficiently correlate vertical structure observations with existing databases and infer the presence of power lines. The method uses a spatial hash key which compares an observed tower location to potential existing tower locations using nested hash tables. When an observed tower is in the vicinity of an existing entry, the method correlates or distinguishes objects based on height and position. When applied to Delaware’s Digital Obstacle File, the average horizontal uncertainty decreased from 206 to 56 ft. The power line presence is inferred by automatically comparing the proportional spacing, height, and angle of tower sets based on the more accurate database. Over 87% of electrical transmission towers were correctly identified with no false negatives. Full article
(This article belongs to the Section Physical Sensors)
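A spatial hash of the kind described, quantizing tower positions to grid cells and probing the neighboring cells for candidate matches, can be sketched in a few lines of Python; the cell size and table layout here are assumptions, not the paper's parameters.

```python
def cell_key(x, y, cell=100.0):
    """Quantize a position to a grid cell; nearby towers fall in the
    same cell or an adjacent one."""
    return (int(x // cell), int(y // cell))

def build_index(towers, cell=100.0):
    """Hash each known tower id into its grid cell."""
    index = {}
    for tid, (x, y) in towers.items():
        index.setdefault(cell_key(x, y, cell), []).append(tid)
    return index

def nearby(index, x, y, cell=100.0):
    """Candidate database matches for an observed tower: entries in
    the observed cell and its eight neighbors."""
    cx, cy = cell_key(x, y, cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(index.get((cx + dx, cy + dy), []))
    return out
```

Once the candidate list is narrowed this way, the paper's method correlates or distinguishes objects by comparing height and position, which the sketch leaves out.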

7 pages, 6646 KB  
Proceeding Paper
Image Enhancement CNN Approach to COVID-19 Detection Using Chest X-ray Images
by Chamoda Tharindu Kumara, Sandunika Charuni Pushpakumari, Ashmini Jeewa Udhyani, Mohamed Aashiq, Hirshan Rajendran and Chinthaka Wasantha Kumara
Eng. Proc. 2023, 55(1), 45; https://doi.org/10.3390/engproc2023055045 - 4 Dec 2023
Cited by 5 | Viewed by 1980
Abstract
Coronavirus (COVID-19) is a fast-spreading virus-related disease. On 28 March 2022, Worldometer (COVID-19 live update) reported about 482,338,923 COVID-19 cases and 6,149,387 fatalities worldwide, along with about 416,884,712 recovered patients. The primary clinical mechanism currently utilized for COVID-19 identification is Reverse Transcription–Polymerase Chain Reaction (RT-PCR). Hospitals only have small quantities of COVID-19 test kits available due to the daily increase in cases. As an alternative diagnostic possibility, an automatic detection system was implemented. A robust technique for automatic COVID-19 identification is the deep learning approach. Chest X-ray (CXR) imaging is a modest tool that can serve as an alternative for diagnosing COVID-19-infected patients. With the use of deep learning, deep-layer characteristics hidden from human sight may be observed in CXR images. One of the largest public databases, the “COVID-19 Radiography Database”, comprises 21,164 CXR images and was taken from Kaggle. To achieve the best accuracy in this work, data cleansing and a balanced-dataset approach were applied. The primary goal of data cleansing is to remove duplicate CXR images from the database. The accuracy of three distinct pre-trained Convolutional Neural Networks (CNNs) was compared and analyzed (Xception, InceptionV3, and MobileNetV2). Among these models, Xception achieved the best testing accuracy of 94.13% with plain lung CXR images. The Gabor filtering image enhancement approach was also employed to identify COVID-19. Only for the MobileNetV2 model did enhanced CXR images perform significantly better for classification than plain lung CXR images. This study attempts to raise the system's accuracy toward 100%, outperforming previous tests. Full article
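For readers unfamiliar with Gabor filtering, the kernel itself is just a Gaussian envelope multiplied by a sinusoidal carrier. A pure-Python sketch of a real-valued Gabor kernel follows; the parameter choices are illustrative, and a practical pipeline would use a library routine such as OpenCV's `getGaborKernel` and convolve the CXR image with a bank of kernels at several orientations.

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real Gabor kernel: a Gaussian envelope (width sigma, aspect
    gamma) times a cosine carrier (wavelength lambd, phase psi),
    rotated by theta. Returned as a size-by-size list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the carrier runs along theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + (gamma * yr) ** 2) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel
```

Convolving an image with such kernels emphasizes oriented texture at the carrier's spatial frequency, which is the enhancement effect the study applies before CNN classification.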

17 pages, 3310 KB  
Article
Application of YOLO v5 and v8 for Recognition of Safety Risk Factors at Construction Sites
by Kyunghwan Kim, Kangeun Kim and Soyoon Jeong
Sustainability 2023, 15(20), 15179; https://doi.org/10.3390/su152015179 - 23 Oct 2023
Cited by 38 | Viewed by 7222
Abstract
The construction industry has high accident and fatality rates owing to time and cost pressures as well as hazardous working environments caused by heavy construction equipment and temporary structures. Thus, safety management at construction sites is essential, and extensive investments are made in management and technology to reduce accidents. This study aims to improve the accuracy of object recognition and classification that is the foundation of the automatic detection of safety risk factors at construction sites, using YOLO v5, which has been acknowledged in several studies for its high performance, and the recently released YOLO v8. Images were collected through web crawling and labeled into three classes to form the dataset. Based on this dataset, accuracy was improved by changing epochs, optimizers, and hyperparameter conditions. In each YOLO version, the highest accuracy is achieved by the extra-large model, with mAP50 test accuracies of 94.1% in v5 and 95.1% in v8. This study could be further expanded for application in various management tools at construction sites to improve the work process, quality control, and progress management in addition to safety management through the collection of more image data and automation for accuracy improvement. Full article

16 pages, 3338 KB  
Article
Enhancing Cervical Pre-Cancerous Classification Using Advanced Vision Transformer
by Manal Darwish, Mohamad Ziad Altabel and Rahib H. Abiyev
Diagnostics 2023, 13(18), 2884; https://doi.org/10.3390/diagnostics13182884 - 8 Sep 2023
Cited by 14 | Viewed by 3798
Abstract
One of the most common types of cancer in women is cervical cancer. Incidence and fatality rates are steadily rising, particularly in developing nations, due to a lack of screening facilities, experienced specialists, and public awareness. Cervical cancer is screened for by visual inspection after the application of acetic acid (VIA), the histopathology test, the Papanicolaou (Pap) test, and the human papillomavirus (HPV) test. The goal of this research is to employ a vision transformer (ViT) enhanced with shifted patch tokenization (SPT) to create an integrated and robust system for automatic cervix-type identification. The vision transformer enhanced with shifted patch tokenization is used in this work to learn the distinct features of the three different cervical pre-cancerous types. The model was trained and tested on 8215 colposcopy images of the three types, obtained from the publicly available mobile-ODT dataset. The model was tested on 30% of the whole dataset and showed a good generalization capability, with 91% accuracy. A state-of-the-art comparison indicated that our model outperforms existing approaches. The experimental results show that the suggested system can be employed as a decision support tool in the detection of the cervical pre-cancer transformation zone, particularly in low-resource settings with limited experience and resources. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
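Shifted patch tokenization augments each patch token with diagonally shifted copies of the image, so every token captures spatial context beyond its own patch. A minimal sketch on list-based grayscale images follows; the patch size and half-patch shift are illustrative, and the actual SPT applies layer normalization and a learned linear projection on top of this concatenation.

```python
def shift_image(img, dx, dy, pad=0):
    """Shift a 2-D list image by (dx, dy), padding uncovered pixels."""
    h, w = len(img), len(img[0])
    out = [[pad] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

def shifted_patch_tokens(img, patch=2):
    """SPT-style tokenization: stack the image with four diagonally
    half-patch-shifted copies, then flatten each patch across all
    five views into one token."""
    half = patch // 2
    views = [img] + [shift_image(img, dx, dy)
                     for dx, dy in ((half, half), (half, -half),
                                    (-half, half), (-half, -half))]
    h, w = len(img), len(img[0])
    tokens = []
    for py in range(0, h, patch):
        for px in range(0, w, patch):
            token = []
            for v in views:
                for y in range(py, py + patch):
                    token.extend(v[y][px:px + patch])
            tokens.append(token)
    return tokens
```

Each token is five times longer than a plain ViT patch embedding input, which is exactly the extra local context SPT trades for.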

25 pages, 3352 KB  
Review
Artificial Intelligence in Lung Cancer Screening: The Future Is Now
by Michaela Cellina, Laura Maria Cacioppa, Maurizio Cè, Vittoria Chiarpenello, Marco Costa, Zakaria Vincenzo, Daniele Pais, Maria Vittoria Bausano, Nicolò Rossini, Alessandra Bruno and Chiara Floridi
Cancers 2023, 15(17), 4344; https://doi.org/10.3390/cancers15174344 - 30 Aug 2023
Cited by 98 | Viewed by 18944
Abstract
Lung cancer has one of the worst morbidity and fatality rates of any malignant tumour. Most lung cancers are discovered in the middle and late stages of the disease, when treatment choices are limited, and patients’ survival rate is low. The aim of lung cancer screening is the identification of lung malignancies in the early stage of the disease, when more options for effective treatments are available, to improve the patients’ outcomes. The desire to improve the efficacy and efficiency of clinical care continues to drive multiple innovations into practice for better patient management, and in this context, artificial intelligence (AI) plays a key role. AI may have a role in each process of the lung cancer screening workflow. First, in the acquisition of low-dose computed tomography for screening programs, AI-based reconstruction allows a further dose reduction, while still maintaining an optimal image quality. AI can help the personalization of screening programs through risk stratification based on the collection and analysis of a huge amount of imaging and clinical data. A computer-aided detection (CAD) system provides automatic detection of potential lung nodules with high sensitivity, working as a concurrent or second reader and reducing the time needed for image interpretation. Once a nodule has been detected, it should be characterized as benign or malignant. Two AI-based approaches are available to perform this task: the first one is represented by automatic segmentation with a consequent assessment of the lesion size, volume, and densitometric features; the second consists of segmentation first, followed by radiomic features extraction to characterize the whole abnormalities providing the so-called “virtual biopsy”. This narrative review aims to provide an overview of all possible AI applications in lung cancer screening. Full article
(This article belongs to the Special Issue Advances in Oncological Imaging)

22 pages, 7154 KB  
Article
A Comprehensive Analysis of Real-Time Car Safety Belt Detection Using the YOLOv7 Algorithm
by Lwando Nkuzo, Malusi Sibiya and Elisha Didam Markus
Algorithms 2023, 16(9), 400; https://doi.org/10.3390/a16090400 - 23 Aug 2023
Cited by 14 | Viewed by 7361
Abstract
Using a safety belt is crucial for preventing severe injuries and fatalities during vehicle accidents. In this paper, we propose a real-time vehicle occupant safety belt detection system based on the YOLOv7 (You Only Look Once version seven) object detection algorithm. The proposed approach aims to automatically detect whether the occupants of a vehicle have buckled their safety belts or not as soon as they are detected within the vehicle. A dataset for this purpose was collected and annotated for validation and testing. By leveraging the efficiency and accuracy of YOLOv7, we achieve near-instantaneous analysis of video streams, making our system suitable for deployment in various surveillance and automotive safety applications. This paper outlines a comprehensive methodology for training the YOLOv7 model using the labelImg tool to annotate the dataset with images showing vehicle occupants. It also discusses the challenges of detecting seat belts and evaluates the system’s performance on a real-world dataset. The evaluation focuses on distinguishing the status of a safety belt between two classes: “buckled” and “unbuckled”. The results demonstrate a high level of accuracy, with a mean average precision (mAP) of 99.6% and an F1 score of 98%, indicating the system’s effectiveness in identifying the safety belt status. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)

17 pages, 2193 KB  
Article
Speed Bump and Pothole Detection Using Deep Neural Network with Images Captured through ZED Camera
by José-Eleazar Peralta-López, Joel-Artemio Morales-Viscaya, David Lázaro-Mata, Marcos-Jesús Villaseñor-Aguilar, Juan Prado-Olivarez, Francisco-Javier Pérez-Pinal, José-Alfredo Padilla-Medina, Juan-José Martínez-Nolasco and Alejandro-Israel Barranco-Gutiérrez
Appl. Sci. 2023, 13(14), 8349; https://doi.org/10.3390/app13148349 - 19 Jul 2023
Cited by 32 | Viewed by 7946
Abstract
The condition of the roads where cars circulate is of the utmost importance to ensure that each autonomous or manual car can complete its journey satisfactorily. The existence of potholes, speed bumps, and other irregularities in the pavement can cause car wear and fatal traffic accidents. Therefore, detecting and characterizing these anomalies helps reduce the risk of accidents and damage to the vehicle. However, street images are naturally multivariate, with redundant and substantial information, as well as significantly contaminated measurement noise, making the detection of street anomalies more challenging. In this work, an automatic color image analysis using a deep neural network for the detection of potholes on the road using images taken by a ZED camera is proposed. A lightweight architecture was designed to speed up training and usage. This consists of seven properly connected and synchronized layers. All the pixels of the original image are used without resizing. The classic stride and pooling operations were used to obtain as much information as possible. A database was built using a ZED camera seated on the front of a car. The routes where the photographs were taken are located in the city of Celaya in Guanajuato, Mexico. Seven hundred and fourteen images were manually tagged, several of which contain bumps and potholes. The system was trained with 70% of the database and validated with the remaining 30%. In addition, we propose a database that discriminates between potholes and speed bumps. A precision of 98.13% using 37 convolution filters in a 3 × 3 window was obtained, which improves upon recent state-of-the-art work. Full article
(This article belongs to the Special Issue AI, Machine Learning and Deep Learning in Signal Processing)
