Search Results (25)

Search Parameters:
Keywords = DAWN dataset

60 pages, 633 KiB  
Article
Secure and Trustworthy Open Radio Access Network (O-RAN) Optimization: A Zero-Trust and Federated Learning Framework for 6G Networks
by Mohammed El-Hajj
Future Internet 2025, 17(6), 233; https://doi.org/10.3390/fi17060233 - 25 May 2025
Viewed by 1079
Abstract
The Open Radio Access Network (O-RAN) paradigm promises unprecedented flexibility and cost efficiency for 6G networks but introduces critical security risks due to its disaggregated, AI-driven architecture. This paper proposes a secure optimization framework integrating zero-trust principles and privacy-preserving Federated Learning (FL) to address vulnerabilities in O-RAN’s RAN Intelligent Controllers (RICs) and xApps/rApps. We first establish a novel threat model targeting O-RAN’s optimization processes, highlighting risks such as adversarial Machine Learning (ML) attacks on resource allocation models and compromised third-party applications. To mitigate these, we design a Zero-Trust Architecture (ZTA) enforcing continuous authentication and micro-segmentation for RIC components, coupled with an FL framework that enables collaborative ML training across operators without exposing raw network data. A differential privacy mechanism is applied to global model updates to prevent inference attacks. We validate our framework using the DAWN Dataset (5G/6G traffic traces with slicing configurations) and the OpenRAN Gym Dataset (O-RAN-compliant resource utilization metrics) to simulate energy efficiency optimization under adversarial conditions. A dynamic DU sleep scheduling case study demonstrates 32% energy savings with <5% latency degradation, even when data poisoning attacks compromise 15% of the FL participants. Comparative analysis shows that our ZTA reduces unauthorized RIC access attempts by 89% compared to conventional O-RAN security baselines. This work bridges the gap between performance optimization and trustworthiness in next-generation O-RAN, offering actionable insights for 6G standardization. Full article
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
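The abstract's combination of update clipping and differentially private noise before federated aggregation can be sketched in a few lines. This is a generic Gaussian-mechanism illustration of the idea, not the paper's actual framework; the function names and all parameter values are invented for the example:

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a client's model update to clip_norm, then add Gaussian noise
    (the Gaussian mechanism used in DP federated learning)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

def aggregate(updates):
    """Server-side FedAvg over sanitized client updates; clipping bounds the
    influence of any single (possibly poisoned) participant."""
    return np.mean([dp_sanitize(u) for u in updates], axis=0)

# Three toy client updates; the last one is a large outlier (e.g. poisoned).
clients = [np.ones(4) * s for s in (0.5, 2.0, 8.0)]
global_update = aggregate(clients)
```

Because each update is clipped before averaging, a poisoned client can shift the global model by at most `clip_norm / n_clients` per round, which is the intuition behind the robustness result quoted in the abstract.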

20 pages, 5383 KiB  
Article
Enhancing Autonomous Vehicle Perception in Adverse Weather: A Multi Objectives Model for Integrated Weather Classification and Object Detection
by Nasser Aloufi, Abdulaziz Alnori and Abdullah Basuhail
Electronics 2024, 13(15), 3063; https://doi.org/10.3390/electronics13153063 - 2 Aug 2024
Cited by 12 | Viewed by 5011
Abstract
Robust object detection and weather classification are essential for the safe operation of autonomous vehicles (AVs) in adverse weather conditions. While existing research often treats these tasks separately, this paper proposes a novel multi-objective model that treats weather classification and object detection as a single problem using only the AV camera sensing system. Our model offers enhanced efficiency and potential performance gains by integrating image quality assessment, Super-Resolution Generative Adversarial Network (SRGAN), and a modified version of You Only Look Once (YOLO) version 5. Additionally, by leveraging the challenging Detection in Adverse Weather Nature (DAWN) dataset, which includes four types of severe weather conditions, among them the often-overlooked sandy weather, we applied several augmentation techniques, resulting in a significant expansion of the dataset from 1027 images to 2046 images. Furthermore, we optimize the YOLO architecture for robust detection of six object classes (car, cyclist, pedestrian, motorcycle, bus, truck) across adverse weather scenarios. Comprehensive experiments demonstrate the effectiveness of our approach, achieving a mean average precision (mAP) of 74.6%, underscoring the potential of this multi-objective model to significantly advance the perception capabilities of autonomous vehicles’ cameras in challenging environments. Full article
(This article belongs to the Special Issue Advances in the System of Higher-Dimension-Valued Neural Networks)
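The dataset expansion described above (roughly doubling DAWN from 1027 to 2046 images) relies on standard image augmentation. A minimal numpy sketch of the general pattern, with a flip plus brightness jitter standing in for whatever transforms the authors actually used:

```python
import numpy as np

def augment_flip_brightness(images, rng=None):
    """Return the originals plus one augmented copy of each image
    (horizontal flip + brightness jitter), roughly doubling the dataset."""
    rng = rng or np.random.default_rng(42)
    out = list(images)
    for img in images:
        aug = img[:, ::-1].astype(np.float32)   # horizontal flip
        aug *= rng.uniform(0.7, 1.3)            # random brightness factor
        out.append(np.clip(aug, 0, 255).astype(img.dtype))
    return out

# Tiny placeholder "images" (H, W, 3) in place of real DAWN frames.
dataset = [np.full((4, 4, 3), 128, dtype=np.uint8) for _ in range(5)]
augmented = augment_flip_brightness(dataset)
```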

25 pages, 11915 KiB  
Article
Improving YOLO Detection Performance of Autonomous Vehicles in Adverse Weather Conditions Using Metaheuristic Algorithms
by İbrahim Özcan, Yusuf Altun and Cevahir Parlak
Appl. Sci. 2024, 14(13), 5841; https://doi.org/10.3390/app14135841 - 4 Jul 2024
Cited by 11 | Viewed by 5752
Abstract
Despite the rapid advances in deep learning (DL) for object detection, existing techniques still face several challenges. In particular, object detection in adverse weather conditions (AWCs) requires complex and computationally costly models to achieve high accuracy rates. Furthermore, the generalization capabilities of these methods struggle to show consistent performance under different conditions. This work focuses on improving object detection using You Only Look Once (YOLO) versions 5, 7, and 9 in AWCs for autonomous vehicles. Although the default hyperparameter values are successful for images without AWCs, the optimum values still need to be found for AWCs. Given the large number and wide range of hyperparameters, determining them through trial and error is particularly challenging. In this study, the Gray Wolf Optimizer (GWO), Artificial Rabbit Optimizer (ARO), and Chimpanzee Leader Selection Optimization (CLEO) are independently applied to optimize the hyperparameters of YOLOv5, YOLOv7, and YOLOv9. The results show that the preferred method significantly improves the algorithms’ performance for object detection. The overall performance of the YOLO models on the object detection task for AWCs increased by 6.146%, by 6.277% for YOLOv7 + CLEO, and by 6.764% for YOLOv9 + GWO. Full article
(This article belongs to the Special Issue Deep Learning in Object Detection)
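The Grey Wolf Optimizer named above is a population-based metaheuristic in which candidate solutions ("wolves") move toward the three best solutions found so far (alpha, beta, delta). A minimal sketch on a toy objective, not the paper's YOLO hyperparameter setup; all constants here are illustrative:

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=10, n_iter=50, seed=0):
    """Minimal Grey Wolf Optimizer: each wolf moves toward the average of
    attraction terms computed from the alpha, beta, and delta wolves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2 - 2 * t / n_iter                            # decays 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lo, hi)
    best = min(X, key=objective)
    return best, objective(best)

# Toy run: minimize the sphere function in 3 dimensions.
best, val = gwo(lambda x: float(np.sum(x ** 2)), dim=3, bounds=(-5, 5))
```

In the hyperparameter-tuning setting, `objective` would instead train a detector with the candidate hyperparameters and return a loss such as negative mAP.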

15 pages, 3517 KiB  
Article
Synthetic Data-Driven Real-Time Detection Transformer Object Detection in Raining Weather Conditions
by Chen-Yu Hao, Yao-Chung Chen, Tai-Tien Chen, Ting-Hsuan Lai, Tien-Yin Chou, Fang-Shii Ning and Mei-Hsin Chen
Appl. Sci. 2024, 14(11), 4910; https://doi.org/10.3390/app14114910 - 5 Jun 2024
Cited by 7 | Viewed by 3101
Abstract
Images captured in rainy weather conditions often suffer from contamination, resulting in blurred or obscured objects, which can significantly impact detection performance due to the loss of identifiable texture and color information. Moreover, the quality of the detection model plays a pivotal role in determining detection outcomes. This study adopts a dual perspective, considering both pre-trained models and training data. It employs 15 image augmentation techniques, combined with neural style transfer (NST), CycleGAN, and an analytical method, to synthesize images under rainy conditions. The Real-Time Detection Transformer (RTDETR) and YOLOv8 pre-trained models are utilized to establish object detection frameworks tailored for rainy weather conditions. Testing is carried out using the DAWN (Detection in Adverse Weather Nature) dataset. The findings suggest compatibility between the pre-trained detection models and various data synthesis methods. Notably, YOLOv8 exhibits better compatibility with CycleGAN data synthesis, while RTDETR demonstrates a stronger alignment with the NST and analytical approaches. Upon the integration of synthesized rainy images into model training, RTDETR demonstrates significantly enhanced detection accuracy compared to YOLOv8, indicating a more pronounced improvement in performance. The proposed approach of combining RTDETR with NST in this study shows a significant improvement in Recall (R) and mAP50-95 by 16.35% and 15.50%, respectively, demonstrating the robust rainy weather resilience of this method. Additionally, RTDETR outperforms YOLOv8 in terms of inference speed and hardware requirements, making it easier to use and deploy in real-time applications. Full article
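One of the three synthesis routes the abstract mentions is an "analytical method" for rainy images. A crude illustration of that idea, drawing short bright streaks over a clear image; this is a generic rain-overlay sketch, not the authors' method, and the streak parameters are invented:

```python
import numpy as np

def add_rain(img, n_streaks=200, length=6, brightness=60, seed=0):
    """Analytic rain synthesis: brighten short vertical streaks at random
    positions to mimic rain, then clip back to valid pixel range."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    rainy = img.astype(np.int16).copy()
    for _ in range(n_streaks):
        x = rng.integers(0, w)
        y = rng.integers(0, max(1, h - length))
        rainy[y:y + length, x] += brightness
    return np.clip(rainy, 0, 255).astype(np.uint8)

clear = np.zeros((64, 64, 3), dtype=np.uint8)   # placeholder clear-weather frame
rainy = add_rain(clear)
```

Synthetic frames like these would be mixed into the detector's training set so it sees rain-degraded inputs during training.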

23 pages, 2697 KiB  
Review
Artificial Intelligence in Pediatric Emergency Medicine: Applications, Challenges, and Future Perspectives
by Lorenzo Di Sarno, Anya Caroselli, Giovanna Tonin, Benedetta Graglia, Valeria Pansini, Francesco Andrea Causio, Antonio Gatto and Antonio Chiaretti
Biomedicines 2024, 12(6), 1220; https://doi.org/10.3390/biomedicines12061220 - 30 May 2024
Cited by 17 | Viewed by 4166
Abstract
The dawn of Artificial intelligence (AI) in healthcare stands as a milestone in medical innovation. Different medical fields are heavily involved, and pediatric emergency medicine is no exception. We conducted a narrative review structured in two parts. The first part explores the theoretical principles of AI, providing all the necessary background to feel confident with these new state-of-the-art tools. The second part presents an informative analysis of AI models in pediatric emergencies. We examined PubMed and Cochrane Library from inception up to April 2024. Key applications include triage optimization, predictive models for traumatic brain injury assessment, and computerized sepsis prediction systems. In each of these domains, AI models outperformed standard methods. The main barriers to a widespread adoption include technological challenges, but also ethical issues, age-related differences in data interpretation, and the paucity of comprehensive datasets in the pediatric context. Future feasible research directions should address the validation of models through prospective datasets with more numerous sample sizes of patients. Furthermore, our analysis shows that it is essential to tailor AI algorithms to specific medical needs. This requires a close partnership between clinicians and developers. Building a shared knowledge platform is therefore a key step. Full article
(This article belongs to the Special Issue Artificial Intelligence in the Detection of Diseases)

23 pages, 27139 KiB  
Article
Enhancing the Safety of Autonomous Vehicles in Adverse Weather by Deep Learning-Based Object Detection
by Biwei Zhang, Murat Simsek, Michel Kulhandjian and Burak Kantarci
Electronics 2024, 13(9), 1765; https://doi.org/10.3390/electronics13091765 - 2 May 2024
Cited by 3 | Viewed by 3538
Abstract
Recognizing and categorizing items in weather-adverse environments poses significant challenges for autonomous vehicles. To improve the robustness of object-detection systems, this paper introduces an innovative approach for detecting objects at different levels by leveraging sensors and deep learning-based solutions within a traffic circle. The suggested approach improves the effectiveness of single-stage object detectors, aiming to advance the performance in perceiving autonomous racing environments and minimizing instances of false detection and low recognition rates. The improved framework is based on the one-stage object-detection model, incorporating multiple lightweight backbones. Additionally, attention mechanisms are integrated to refine the object-detection process further. Our proposed model demonstrates superior performance compared to the state-of-the-art method on the DAWN dataset, achieving a mean average precision (mAP) of 99.1%, surpassing the previous result of 84.7%. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
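The "attention mechanisms" integrated into the one-stage detector above are typically channel-attention blocks. A numpy sketch of squeeze-and-excitation (SE) attention, one common choice; the abstract does not say which mechanism the authors used, so treat this purely as an illustration of the family:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map:
    global-average-pool, two FC layers, then per-channel sigmoid gating."""
    z = feat.mean(axis=(1, 2))              # squeeze: (C,)
    s = np.maximum(w1 @ z, 0)               # excitation: FC -> ReLU
    gate = 1 / (1 + np.exp(-(w2 @ s)))      # FC -> sigmoid, one gate per channel
    return feat * gate[:, None, None]       # reweight channels

C, r = 8, 2                                 # channels and reduction ratio (toy values)
rng = np.random.default_rng(0)
feat = rng.random((C, 16, 16))
out = squeeze_excite(feat,
                     rng.standard_normal((C // r, C)),
                     rng.standard_normal((C, C // r)))
```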

23 pages, 8260 KiB  
Article
Naturalize Revolution: Unprecedented AI-Driven Precision in Skin Cancer Classification Using Deep Learning
by Mohamad Abou Ali, Fadi Dornaika, Ignacio Arganda-Carreras, Hussein Ali and Malak Karaouni
BioMedInformatics 2024, 4(1), 638-660; https://doi.org/10.3390/biomedinformatics4010035 - 1 Mar 2024
Cited by 13 | Viewed by 3408
Abstract
Background: In response to the escalating global concerns surrounding skin cancer, this study aims to address the imperative for precise and efficient diagnostic methodologies. Focusing on the intricate task of eight-class skin cancer classification, the research delves into the limitations of conventional diagnostic approaches, often hindered by subjectivity and resource constraints. The transformative potential of Artificial Intelligence (AI) in revolutionizing diagnostic paradigms is underscored, emphasizing significant improvements in accuracy and accessibility. Methods: Utilizing cutting-edge deep learning models on the ISIC2019 dataset, a comprehensive analysis is conducted, employing a diverse array of pre-trained ImageNet architectures and Vision Transformer models. To counteract the inherent class imbalance in skin cancer datasets, a pioneering “Naturalize” augmentation technique is introduced. This technique leads to the creation of two indispensable datasets—the Naturalized 2.4K ISIC2019 and groundbreaking Naturalized 7.2K ISIC2019 datasets—catalyzing advancements in classification accuracy. The “Naturalize” augmentation technique involves the segmentation of skin cancer images using the Segment Anything Model (SAM) and the systematic addition of segmented cancer images to a background image to generate new composite images. Results: The research showcases the pivotal role of AI in mitigating the risks of misdiagnosis and under-diagnosis in skin cancer. The proficiency of AI in analyzing vast datasets and discerning subtle patterns significantly augments the diagnostic prowess of dermatologists. Quantitative measures such as confusion matrices, classification reports, and visual analyses using Score-CAM across diverse dataset variations are meticulously evaluated. The culmination of these endeavors resulted in an unprecedented achievement—100% average accuracy, precision, recall, and F1-score—within the groundbreaking Naturalized 7.2K ISIC2019 dataset. 
Conclusion: This groundbreaking exploration highlights the transformative capabilities of AI-driven methodologies in reshaping the landscape of skin cancer diagnosis and patient care. The research represents a pivotal stride towards redefining dermatological diagnosis, showcasing the remarkable impact of AI-powered solutions in surmounting the challenges inherent in skin cancer diagnosis. The attainment of 100% across crucial metrics within the Naturalized 7.2K ISIC2019 dataset serves as a testament to the transformative capabilities of AI-driven approaches in reshaping the trajectory of skin cancer diagnosis and patient care. This pioneering work paves the way for a new era in dermatological diagnostics, heralding the dawn of unprecedented precision and efficacy in the identification and classification of skin cancers. Full article
(This article belongs to the Special Issue Computational Biology and Artificial Intelligence in Medicine)
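The core compositing step of the "Naturalize" augmentation (pasting SAM-segmented cancer crops onto background images) reduces to masked array assignment. A minimal sketch with synthetic arrays; the segmentation itself (SAM) is out of scope here, so the mask is hand-made:

```python
import numpy as np

def composite(background, crop, mask, top, left):
    """Paste a segmented crop (mask == 1) onto a background at (top, left),
    mimicking the 'Naturalize' step of adding segmented images to backgrounds."""
    out = background.copy()
    h, w = crop.shape[:2]
    region = out[top:top + h, left:left + w]   # view into the copy
    region[mask.astype(bool)] = crop[mask.astype(bool)]
    return out

bg = np.zeros((32, 32, 3), dtype=np.uint8)         # placeholder background
crop = np.full((8, 8, 3), 200, dtype=np.uint8)     # placeholder segmented lesion
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1                                 # hand-made "segmentation" mask
img = composite(bg, crop, mask, top=10, left=10)
```

Repeating this with varied crops and backgrounds yields the new composite images used to rebalance the minority classes.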

19 pages, 22636 KiB  
Article
Analyzing Performance of YOLOx for Detecting Vehicles in Bad Weather Conditions
by Imran Ashraf, Soojung Hur, Gunzung Kim and Yongwan Park
Sensors 2024, 24(2), 522; https://doi.org/10.3390/s24020522 - 14 Jan 2024
Cited by 10 | Viewed by 3542
Abstract
Recent advancements in computer vision technology, developments in sensors and sensor-collecting approaches, and the use of deep and transfer learning approaches have accelerated the development of autonomous vehicles. On-road vehicle detection has become a task of significant importance, especially due to exponentially increasing research on autonomous vehicles during the past few years. With high-end computing resources, a large number of deep learning models have been trained and tested for on-road vehicle detection recently. Vehicle detection can become challenging, especially under varying light and weather conditions such as night, snow, sand, rain, and fog. In addition, vehicle detection should be fast enough to work in real time. This study investigates the use of the recent YOLO version, YOLOx, to detect vehicles in bad weather conditions including rain, fog, snow, and sandstorms. The model is tested on the publicly available benchmark dataset DAWN, which contains images of four bad weather conditions with different illuminations, backgrounds, and numbers of vehicles per frame. The efficacy of the model is evaluated in terms of precision, recall, and mAP. The results exhibit the better performance of YOLOx-s over the YOLOx-m and YOLOx-l variants. YOLOx-s has 0.8983 and 0.8656 mAP for snow and sandstorms, respectively, while its mAP for rain and fog is 0.9509 and 0.9524, respectively. The performance of the models is better for snow and foggy weather than for rainy weather and sandstorms. Further experiments indicate that enhancing image quality using multiscale retinex improves YOLOx performance. Full article
(This article belongs to the Special Issue Design, Communication, and Control of Autonomous Vehicle Systems)
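The multiscale retinex enhancement mentioned at the end of the abstract computes, at several scales, the difference between the log image and the log of a blurred "surround", then averages. A simplified grayscale sketch; a box blur stands in for the usual Gaussian surround, and the scales are arbitrary:

```python
import numpy as np

def box_blur(img, k):
    """Cheap separable box blur standing in for the Gaussian surround."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

def multiscale_retinex(img, scales=(3, 7, 15)):
    """MSR: average of log(image) - log(surround) over several scales,
    min-max rescaled back to 8-bit range."""
    img = img.astype(np.float64) + 1.0                 # avoid log(0)
    msr = sum(np.log(img) - np.log(box_blur(img, k) + 1.0) for k in scales)
    msr /= len(scales)
    msr -= msr.min()
    return (255 * msr / (msr.max() + 1e-12)).astype(np.uint8)

gray = np.tile(np.linspace(20, 220, 32), (32, 1))      # synthetic low-contrast ramp
enhanced = multiscale_retinex(gray)
```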

18 pages, 5580 KiB  
Article
Why You Cannot Rank First: Modifications for Benchmarking Six-Degree-of-Freedom Visual Localization Algorithms
by Sheng Han, Wei Gao and Zhanyi Hu
Sensors 2023, 23(23), 9580; https://doi.org/10.3390/s23239580 - 2 Dec 2023
Viewed by 1310
Abstract
Robust and precise visual localization over extended periods of time poses a formidable challenge in the current domain of spatial vision. The primary difficulty lies in effectively addressing significant variations in appearance caused by seasonal changes (summer, winter, spring, autumn) and diverse lighting conditions (dawn, day, sunset, night). With the rapid development of related technologies, more and more relevant datasets have emerged, which has also promoted the progress of 6-DOF visual localization for both autonomous vehicles and handheld devices. This manuscript endeavors to rectify the existing limitations of the current public benchmark for long-term visual localization, especially in the part concerning the autonomous vehicle challenge. Taking into account that autonomous vehicle datasets are primarily captured by multi-camera rigs with fixed extrinsic camera calibration and consist of serialized image sequences, we present several modifications designed to enhance the rationality and comprehensiveness of the evaluation algorithm. We advocate standardized preprocessing procedures to minimize the possibility of human intervention influencing evaluation results. These procedures involve aligning the positions of multiple cameras on the vehicle with a predetermined canonical reference system, replacing the individual camera positions with uniform vehicle poses, and incorporating sequence information to compensate for any failed localized poses. These steps are crucial in ensuring a just and accurate evaluation of algorithmic performance. Lastly, we introduce a novel indicator to resolve potential ties in the Schulze ranking among submitted methods. The inadequacies highlighted in this study are substantiated through simulations and actual experiments, which unequivocally demonstrate the necessity and effectiveness of our proposed amendments. Full article
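The step of replacing individual camera positions with a uniform vehicle pose is a single rigid-body composition: a camera pose is chained with the rig's fixed extrinsic so every camera on the vehicle reports one canonical pose. A sketch with 4x4 homogeneous transforms and an invented extrinsic offset:

```python
import numpy as np

def rt(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_vehicle_pose(cam_pose_w, cam_extrinsic):
    """Map a camera pose (world <- camera) to the vehicle pose
    (world <- vehicle) via the fixed rig extrinsic (camera <- vehicle)."""
    return cam_pose_w @ cam_extrinsic

# Hypothetical rig: the camera sits 1.5 m left / 0.5 m up from the vehicle origin.
extrinsic = rt(np.eye(3), np.array([0.0, 1.5, 0.5]))     # camera <- vehicle
cam_pose = rt(np.eye(3), np.array([10.0, 0.0, 0.0]))     # world <- camera
vehicle_pose = to_vehicle_pose(cam_pose, extrinsic)
```

Applying the same composition to every camera on the rig collapses their per-camera estimates to one comparable vehicle pose per timestamp, which is what makes the proposed evaluation camera-agnostic.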

29 pages, 5318 KiB  
Article
Object Detection in Adverse Weather for Autonomous Driving through Data Merging and YOLOv8
by Debasis Kumar and Naveed Muhammad
Sensors 2023, 23(20), 8471; https://doi.org/10.3390/s23208471 - 14 Oct 2023
Cited by 50 | Viewed by 14369
Abstract
For autonomous driving, perception is a primary and essential element that fundamentally deals with insight into the ego vehicle’s environment through sensors. Perception is challenging, as it suffers from dynamic objects and continuous environmental changes. The issue grows worse when adverse weather such as snow, rain, fog, night light, sandstorms, or strong daylight degrades the quality of perception. In this work, we aim to improve camera-based perception accuracy, such as autonomous-driving-related object detection, in adverse weather. We propose improving YOLOv8-based object detection in adverse weather through transfer learning using merged data from various harsh weather datasets. Two established open-source datasets (ACDC and DAWN) and their merged dataset were used to detect primary objects on the road in harsh weather. A set of training weights was collected from training on the individual datasets, their merged versions, and several subsets of those datasets according to their characteristics. The training weights were also compared by evaluating the detection performance on the aforementioned datasets and their subsets. The evaluation revealed that using custom datasets for training significantly improved the detection performance compared to the YOLOv8 base weights. Furthermore, using more images through the feature-related data merging technique steadily increased the object detection performance. Full article
(This article belongs to the Section Physical Sensors)
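Merging datasets such as ACDC and DAWN requires reconciling their class ids onto a shared scheme before training. A minimal sketch of that bookkeeping with YOLO-style label tuples; the class ids and mappings below are invented for illustration, not the papers' actual label sets:

```python
def merge_datasets(datasets, class_maps):
    """Merge YOLO-style label lists (class_id, x, y, w, h) from several
    datasets, remapping each dataset's class ids onto a shared scheme."""
    merged = []
    for labels, cmap in zip(datasets, class_maps):
        for cls, *box in labels:
            if cls in cmap:                  # drop classes the shared scheme lacks
                merged.append((cmap[cls], *box))
    return merged

# Hypothetical labels: (class_id, x_center, y_center, width, height).
acdc = [(0, 0.5, 0.5, 0.2, 0.2), (3, 0.1, 0.1, 0.05, 0.05)]
dawn = [(1, 0.4, 0.4, 0.3, 0.3)]
# Map both datasets' ids onto one shared id space.
shared = merge_datasets([acdc, dawn], [{0: 0, 3: 2}, {1: 0}])
```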

22 pages, 7686 KiB  
Article
Object Detection Performance Evaluation for Autonomous Vehicles in Sandy Weather Environments
by Nasser Aloufi, Abdulaziz Alnori, Vijey Thayananthan and Abdullah Basuhail
Appl. Sci. 2023, 13(18), 10249; https://doi.org/10.3390/app131810249 - 13 Sep 2023
Cited by 12 | Viewed by 3392
Abstract
In order to reach the highest level of automation, autonomous vehicles (AVs) are required to be aware of surrounding objects and detect them even in adverse weather. Detecting objects is very challenging in sandy weather due to characteristics of the environment, such as low visibility, occlusion, and changes in lighting. In this paper, we considered the You Only Look Once (YOLO) version 5 and version 7 architectures to evaluate the performance of different activation functions in sandy weather. In our experiments, we targeted three activation functions: Sigmoid Linear Unit (SiLU), Rectified Linear Unit (ReLU), and Leaky Rectified Linear Unit (LeakyReLU). The metrics used to evaluate their performance were precision, recall, and mean average precision (mAP). We used the Detection in Adverse Weather Nature (DAWN) dataset, which contains various weather conditions, though we selected sandy images only. Moreover, we extended the DAWN dataset and created an augmented version using several augmentation techniques, such as blur, saturation, brightness, darkness, noise, exposure, hue, and grayscale. Our results show that on the original DAWN dataset, YOLOv5 with the LeakyReLU activation function surpassed the other architectures with respect to reported research results in sandy weather and achieved 88% mAP. For the augmented DAWN dataset that we developed, YOLOv7 with SiLU achieved 94% mAP. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
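The three activation functions compared in the study differ only in how they treat non-positive inputs. Their definitions are standard and can be written down directly:

```python
import numpy as np

def relu(x):
    """ReLU: zero for x <= 0, identity for x > 0."""
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    """LeakyReLU: small slope a for x <= 0 keeps gradients alive."""
    return np.where(x > 0, x, a * x)

def silu(x):
    """SiLU (a.k.a. swish): x * sigmoid(x), smooth and non-monotonic near 0."""
    return x / (1 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
```

The smoothness of SiLU around zero (versus the hard kink in ReLU/LeakyReLU) is one commonly cited reason it can behave differently during training, consistent with the mixed results the abstract reports across architectures.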

18 pages, 16904 KiB  
Article
A Novel ST-ViBe Algorithm for Satellite Fog Detection at Dawn and Dusk
by Huiyun Ma, Zengwei Liu, Kun Jiang, Bingbo Jiang, Huihui Feng and Shuaifeng Hu
Remote Sens. 2023, 15(9), 2331; https://doi.org/10.3390/rs15092331 - 28 Apr 2023
Cited by 6 | Viewed by 2469
Abstract
Satellite remote sensing provides a potential technology for detecting fog at dawn and dusk on a large scale. However, the spectral characteristics of fog at dawn and dusk are similar to those of the ground surface, which makes satellite-based fog detection difficult. With the aid of time-series datasets from the Himawari-8 (H8)/AHI, this study proposed a novel algorithm of the self-adaptive threshold of visual background extractor (ST-ViBe) model for satellite fog detection at dawn and dusk. Methodologically, the background model was first built using the difference between MIR and TIR (BTD) and the local binary similarity patterns (LBSP) operator. Second, BTD and scale invariant local ternary pattern (SILTP) texture features were coupled to form scene factors, and the detection threshold of each pixel was determined adaptively to eliminate the influence of the solar zenith angles. The background model was updated rapidly by accelerating the updating rate and increasing the updating quantity. Finally, the residual clouds were removed with the traditional cloud removal method to achieve accurate detection of fog at dawn and dusk over a large area. The validation results demonstrated that the ST-ViBe algorithm could detect fog at dawn and dusk precisely, and on a large scale. The probability of detection, false alarm ratio, and critical success index were 72.5%, 18.5%, 62.4% at dawn (8:00) and 70.6%, 33.6%, 52.3% at dusk (17:00), respectively. Meanwhile, the algorithm mitigated the limitations of the traditional algorithms, such as illumination mutation, missing detection, and residual shadow. The results of this study could guide satellite fog detection at dawn and dusk and improve the detection of similar targets. Full article
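At its core, the ViBe family that ST-ViBe extends classifies a pixel as background when it lies close to enough of its stored background samples. A heavily simplified numpy sketch of that classification test (fixed radius, no adaptive threshold or model update, synthetic data):

```python
import numpy as np

def vibe_classify(samples, frame, radius=20, min_matches=2):
    """ViBe-style test: a pixel is background if it is within `radius` of at
    least `min_matches` of its N stored background samples; else foreground."""
    dist = np.abs(samples - frame[None, ...])   # (N, H, W) per-sample distances
    matches = (dist < radius).sum(axis=0)
    return matches < min_matches                # True = foreground

rng = np.random.default_rng(0)
background = rng.integers(100, 110, (8, 8)).astype(np.int16)
# 20 background samples per pixel, each a noisy copy of the background.
samples = np.stack([background + rng.integers(-3, 4, (8, 8)) for _ in range(20)])
frame = background.copy()
frame[2:4, 2:4] = 200                           # a bright "fog patch" appears
fg = vibe_classify(samples, frame)
```

ST-ViBe's contributions sit on top of this test: the detection radius becomes per-pixel and scene-adaptive (via BTD and SILTP texture features), and the sample model updates faster than in classic ViBe.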

29 pages, 4916 KiB  
Article
A Fast and Accurate Real-Time Vehicle Detection Method Using Deep Learning for Unconstrained Environments
by Annam Farid, Farhan Hussain, Khurram Khan, Mohsin Shahzad, Uzair Khan and Zahid Mahmood
Appl. Sci. 2023, 13(5), 3059; https://doi.org/10.3390/app13053059 - 27 Feb 2023
Cited by 67 | Viewed by 13581
Abstract
Deep learning-based classification and detection algorithms have emerged as a powerful tool for vehicle detection in intelligent transportation systems. The limited number of high-quality labeled training samples makes single-vehicle detection methods incapable of achieving acceptable accuracy in road vehicle detection. This paper presents detection and classification of vehicles on publicly available datasets by utilizing the YOLO-v5 architecture. The paper’s findings utilize the concept of transfer learning through fine-tuning the weights of the pre-trained YOLO-v5 architecture. To employ transfer learning, extensive datasets of images and videos of congested traffic patterns were collected by the authors. These datasets were made more comprehensive by covering various attributes, for instance high- and low-density traffic patterns, occlusions, and different weather circumstances. All of the gathered datasets were manually annotated. Ultimately, the improved YOLO-v5 structure adapts to even difficult traffic patterns. By fine-tuning the pre-trained network on our datasets, the proposed YOLO-v5 exceeds several other traditional vehicle detection methods in terms of detection accuracy and execution time. Detailed simulations performed on the PKU, COCO, and DAWN datasets demonstrate the effectiveness of the proposed method in various challenging situations. Full article
(This article belongs to the Special Issue Digital Image Processing: Advanced Technologies and Applications)
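Transfer learning by fine-tuning, as used above, amounts to keeping pre-trained layers fixed while updating the rest with the new data's gradients. A framework-free sketch of one such update step; the layer names and values are hypothetical, not YOLO-v5's actual parameter tree:

```python
import numpy as np

def finetune_step(params, grads, frozen, lr=0.01):
    """One fine-tuning update: layers listed in `frozen` (the pre-trained
    backbone) keep their weights; the rest follow the gradient."""
    return {name: w if name in frozen else w - lr * grads[name]
            for name, w in params.items()}

# Hypothetical two-layer "network": a pre-trained backbone layer and a new head.
params = {"backbone.conv1": np.ones(3), "head.cls": np.ones(3)}
grads = {"backbone.conv1": np.full(3, 5.0), "head.cls": np.full(3, 5.0)}
new = finetune_step(params, grads, frozen={"backbone.conv1"})
```

In practice one often unfreezes progressively more layers as training stabilizes; the abstract does not specify which schedule the authors used.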

23 pages, 3828 KiB  
Article
Regional Quantitative Mineral Prospectivity Mapping of W, Sn, and Nb-Ta Based on Integrated Information in Rwanda, Central Africa
by Zhuo Chen, Jianping Chen, Tao Liu, Yunfeng Li, Qichun Yin and Haishuang Du
Minerals 2023, 13(2), 189; https://doi.org/10.3390/min13020189 - 28 Jan 2023
Cited by 3 | Viewed by 4635
Abstract
As the need to discover new mineral deposits and occurrences has intensified in recent years, it has become increasingly apparent that mineral potential must be mapped from integrated information on the basis of metallogeny. Occurrences of mineralization such as tungsten (W), tin (Sn), columbium (Nb), tantalum (Ta), gold (Au), copper (Cu), lead (Pb), zinc (Zn), manganese (Mn) and monazite (Mnz) have been discovered in Rwanda. The objective of this study was to present a regional quantitative mineral prospectivity mapping (MPM) of W, Sn and Nb-Ta mineralization in Rwanda using the random forest (RF) method on the basis of open-source data, such as geological maps, Bouguer gravity anomalies, magnetic anomalies, Landsat 8 images, ASTER GDEM, Globeland30, and OpenStreetMap. In addition, a newly introduced interpolation–density–delineation (IDD) process was applied to handle the blank (masked) areas in remotely sensed mineral alteration extraction, and a k2-fold cross-validation method was proposed to obtain more reasonable test errors. Firstly, the metallogenic regularity of W, Sn and Nb-Ta in Rwanda was summarized from the published literature. Secondly, the original geological, geophysical, and remote sensing data were used to generate secondary data; specifically, the IDD process was applied after the directed principal component analysis (DPCA) method to reconstruct the alteration anomaly map, and a relevant dataset was formed by combining the original and secondary data. Thirdly, specific predictor layers for W, Sn and Nb-Ta were selected from these data via spatial correlation with known deposits, and the predictive models were established.
Finally, nearly 26,000 squares were zoned in Rwanda; RF was optimized and applied; the k2-fold cross-validation method was used to assess test errors; and metallogenic belts and prospective areas for W, Sn, and Nb-Ta were delineated on the basis of the total mineralization potential map and likelihood maps. The results proved that the open-source data available online were valid for drawing a preliminary mineralization potential map, and that the IDD method is suitable for postprocessing masked alteration anomaly maps. Belt IV-4 in the northwest and belts IV-2 and IV-1 in the middle-east of Rwanda, which contain a number of prospective areas, possess considerable likelihoods of hosting deposits; mining in Rwanda is at its dawn, with potential worth expecting. Full article
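The abstract does not spell out the construction of the proposed k2-fold cross-validation, so only the plain k-fold splitting it builds on can be sketched here (function name and shuffling seed are illustrative assumptions):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for plain k-fold cross-validation.

    Samples are shuffled once, partitioned into k near-equal folds,
    and each fold serves as the test set exactly once.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin partition
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, test_idx
```

Each of the k test errors is then averaged to estimate generalization error; the paper's k2-fold variant reportedly refines this estimate.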

12 pages, 4690 KiB  
Article
Spatio-Temporal Niche of Sympatric Tufted Deer (Elaphodus cephalophus) and Sambar (Rusa unicolor) Based on Camera Traps in the Gongga Mountain National Nature Reserve, China
by Zhiyuan You, Bigeng Lu, Beibei Du, Wei Liu, Yong Jiang, Guangfa Ruan and Nan Yang
Animals 2022, 12(19), 2694; https://doi.org/10.3390/ani12192694 - 7 Oct 2022
Cited by 6 | Viewed by 3533
Abstract
Clarifying the distribution pattern and overlapping relationship of sympatric related species in the spatio-temporal niche is of great significance to the basic theory of community ecology and to the integrated management of multi-species habitats in the same landscape. In this study, based on a 9-year dataset (2012–2021) from 493 camera-trap sites in the Gongga Mountain National Nature Reserve, we analyzed the habitat distributions and activity patterns of tufted deer (Elaphodus cephalophus) and sambar (Rusa unicolor). (1) Combining 235 and 153 valid presence sites of tufted deer and sambar, respectively, the MaxEnt model was used to analyze the distribution of the two species based on 11 ecological factors. The distribution areas of the two species were 1038.40 km2 and 692.67 km2, respectively, with an overlapping area of 656.67 km2; the overlap indexes Schoener's D (D) and Hellinger-based I (I) were 0.703 and 0.930, respectively. (2) Based on 10,437 and 5203 independent captures of tufted deer and sambar, their daily activity rhythms were calculated using kernel density estimation. The results showed that both species had daily activity peaks at dawn and dusk; however, the dawn peak of tufted deer was later, and its dusk peak earlier, than those of sambar. Our findings reveal the spatio-temporal niche relationship between tufted deer and sambar, contributing to a further understanding of the coexistence mechanism and providing scientific information for effective wild animal conservation in the reserve and other areas on the southeastern edge of the Qinghai–Tibetan Plateau. Full article
(This article belongs to the Special Issue Use of Camera Trap for a Better Wildlife Monitoring and Conservation)
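The reported Schoener's D is a standard niche-overlap statistic, D = 1 − 0.5·Σ|p_i − q_i|, computed over two normalised habitat-suitability distributions on the same grid. A minimal sketch (the input lists standing in for MaxEnt suitability grids are illustrative assumptions):

```python
def schoeners_d(p, q):
    """Schoener's D niche-overlap index between two suitability
    distributions over the same grid cells.

    Inputs are normalised to sum to 1; D = 1 - 0.5 * sum(|p_i - q_i|),
    ranging from 0 (no overlap) to 1 (identical distributions).
    """
    sp, sq = sum(p), sum(q)
    p_norm = [x / sp for x in p]
    q_norm = [x / sq for x in q]
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(p_norm, q_norm))
```

Values near the paper's 0.703 would indicate substantial, but not complete, spatial overlap between the two species.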
