1. Introduction
Efficient parking management is a fundamental component of modern urban mobility, particularly in educational institutions, shopping centers, and metropolitan areas where vehicle demand often exceeds the availability of spaces. Numerous studies have established that between 20% and 30% of urban traffic consists of drivers looking for a free parking space, which increases congestion, travel times, and pollutant emissions [1,2].
Despite the growing adoption of computer vision-based parking monitoring systems, most existing solutions rely on fixed infrastructure or heavy UAV platforms, which require regulatory permissions, high operational costs, or complex deployment. This limits their scalability in real-world urban environments, particularly in developing regions.
Furthermore, recent advances in object-detection architectures have not been sufficiently validated in real-world UAV parking scenarios, especially regarding simultaneous detection of vehicles and free parking slots for direct occupancy estimation.
Consequently, it has become necessary to develop intelligent systems capable of monitoring, estimating, and communicating the availability of parking spaces in smart cities.
Traditional methods for detecting available spaces, based on ultrasonic sensors, magnetic sensors, or fixed cameras, have limitations related to infrastructure, installation cost, and spatial coverage [3,4]. For their part, systems based on fixed cameras have shown significant advances through computer vision algorithms and convolutional neural networks; however, their field of view is limited, they are susceptible to occlusions, and they require high or difficult-to-access mounting points [5].
In this context, unmanned aerial vehicles (UAVs) have emerged as a versatile and economical alternative for capturing aerial images. Their ability to cover large areas, maintain stable overhead perspectives, and operate without fixed infrastructure makes them ideal tools for monitoring parking lots. Numerous studies have shown that combining UAVs with a deep learning-based object-detection model can achieve accurate metrics of over 90% for aerial vehicle detection [6,7].
At the same time, the YOLO (You Only Look Once) family of models has evolved towards faster and more accurate architectures. Its most recent version, YOLOv11, incorporates an anchor-free structure, an optimized dynamic head, and C2f-RepConv modules, enabling significant improvements in the detection of small objects, reduced inference times, and greater robustness to variations in scale and brightness [8].
Despite these advances, the literature shows a significant gap: there are few studies that integrate ultra-light UAVs in the sub-250 g category, such as the DJI Mini 3, with a YOLOv11 model for aerial monitoring specifically applied to real parking lots in Mexico. Furthermore, there is little experimental evidence on the simultaneous use of vehicle detection and free-parking-space detection for automatic real-time occupancy estimation.
Given this need, this paper proposes an intelligent parking occupancy monitoring system based on a DJI Mini 3 drone and vehicle detection using YOLOv11, which allows for the following:
Capturing aerial images in real time;
Detecting vehicles and parking spaces simultaneously;
Automatically estimating occupancy rates;
Operating without fixed infrastructure;
Achieving inference speeds suitable for practical applications.
For this purpose, a proprietary dataset, called the Parking Dataset v1.0, was constructed, consisting of multiscale aerial images captured in a university car park. The images were labeled in YOLO format, and the YOLOv11 model was trained using Tesla T4 GPUs. Finally, performance was evaluated using standard metrics such as precision, recall, F1-score, mAP@0.5, mAP@0.5:0.95, and inference speed (FPS).
The results show that the proposed system achieves a mAP@0.5 of 0.97 and an inference speed of 38 FPS, surpassing previous models such as YOLOv8 and WALDO (Whereabouts Ascertainment for Low-Lying Detectable Objects). These findings demonstrate the viability of the solution for implementation on university campuses, in public parking lots, and in smart cities.
The objective of this work is not to introduce a novel detection architecture but to validate a deployable UAV-based parking occupancy monitoring framework under real operational constraints. The main contributions of this study are as follows: (1) the design of an end-to-end system for aerial parking monitoring using a lightweight UAV platform; (2) the construction of a task-specific dataset and a dual-class formulation (vehicle and parking) tailored to top-down UAV imagery; (3) a comprehensive real-time performance evaluation focusing on inference speed, stability, and deployment feasibility; and (4) a qualitative and quantitative analysis of detection behavior under challenging visual conditions typical of parking environments.
The scientific contribution therefore lies in the system-level integration, the task-specific formulation, the UAV-oriented data preparation, and the rigorous experimental evaluation of YOLOv11 in a real-world aerial parking scenario. Such applied validation studies are essential for assessing the practical limits, robustness, and deployment feasibility of modern detection architectures beyond benchmark datasets.
2. Related Work
The study of automated parking lot monitoring has evolved over the last two decades, driven by advances in computer vision, deep neural networks, and unmanned aerial vehicles (UAVs). This chapter presents a detailed analysis of the relevant literature, ranging from traditional sensor-based systems to modern technologies that employ advanced detection models such as YOLOv11. Comparative tables summarizing the main approaches used in high-impact research are also included.
2.1. Traditional Parking Lot Monitoring Systems
The first intelligent parking systems relied on physical infrastructure installed directly in each parking space, such as ultrasonic, magnetic, and infrared sensors. These approaches offered acceptable accuracy, typically between 92% and 95%, but had significant drawbacks: high installation costs, constant maintenance, dependence on wiring, and scalability issues for large or temporary parking lots [3,4].
Over time, fixed cameras with computer vision algorithms were integrated. These systems allowed multiple parking spaces to be monitored with a single camera, although they still had limitations in terms of occlusions, blind spots, and sensitivity to environmental conditions.
Table 1 summarizes traditional parking monitoring approaches reported in the literature, including sensor-based systems such as wireless and IoT-enabled parking sensors, as well as fixed-camera computer vision approaches for vehicle and space detection. These categories are derived from established studies on smart parking and parking occupancy monitoring [3,4,5,6].
2.1.1. Systems Based on Fixed Cameras and Computer Vision
The use of fixed cameras introduced a less invasive approach, allowing multiple spaces to be monitored simultaneously using computer vision algorithms.
Initially, background subtraction and edge detection techniques were applied, followed by SVM and Haar cascade classifiers, and ultimately convolutional neural networks (CNNs) [5].
In [5], a mAP of 0.91 was achieved on the CNRPark-EXT dataset using deep CNNs [6]. However, the lack of mobility, occlusions between vehicles, and changes in lighting reduce the efficiency of these systems, motivating the use of more flexible aerial platforms, as shown in Table 2, which summarizes the most efficient systems.
2.1.2. Access-Based Counting Systems
Older systems use counting sensors at the access points (entrance/exit) via automatic barriers, RFID readers, or electromagnetic loops under the pavement. These systems record how many vehicles enter or exit, but they only provide the total number of vehicles inside the parking lot and do not identify which spaces are free [4].
Limitations of fixed sensor-based systems are described in Table 3.
These limitations drove the migration towards more dynamic solutions, where aerial surveillance using UAVs represents a scalable, autonomous, and lower-cost alternative.
2.2. UAV-Based Systems for Vehicle Monitoring
UAVs have transformed urban monitoring due to their flexibility, low cost, and ability to capture high-resolution aerial images [1,6]. In car parks, UAVs offer advantages such as wide coverage, independent mobility, and reduced fixed infrastructure.
Recently, UAVs have become a critical component of urban security within smart cities, providing real-time aerial monitoring and complementing traditional surveillance systems [9].
Studies such as those by Wei et al. (2021) and Xu et al. (2023) have shown that the integration of UAVs with YOLO models achieves accuracies of over 90% in aerial detection [2,10].
Table 4 presents relevant studies on aerial vehicle detection.
The use of the DJI Mini 3 drone, in the under-250 g category, enables operations without complex permits, facilitating its implementation in open urban areas [13].
Use of Drones in Urban Monitoring and Parking Lots
UAVs have been implemented in urban traffic, surveillance, and mobility management [1], as well as in agriculture, security, and rescue operations, owing to their ability to cover large areas at a low operating cost [7]. In the area of parking, researchers such as [2,6] have demonstrated that drones with object-detection models can estimate parked vehicle occupancy with accuracies exceeding 90%. Unlike fixed cameras, drones offer mobility, vertical aerial vision, and less dependence on infrastructure.
2.3. Artificial Intelligence for Vehicle Detection
Deep learning-based object-detection models have revolutionized computer vision. The YOLO series, introduced by Redmon et al., enabled single-stage detection, significantly reducing inference times [14]. The most recent versions, such as YOLOv8 and YOLOv11, have incorporated C2f, PAN-FPN, and anchor-free detection modules, improving performance in aerial scenarios and on small-scale objects [15,16].
WALDO, developed in 2023, was optimized for aerial detection with UAVs on the UAVDT and COCO-aerial datasets [17]. Vehicle detection using deep learning has been extensively developed through architectures such as YOLOv3, YOLOv4, YOLOv8, and, more recently, YOLOv11. YOLOv11 introduces an anchor-free architecture and a dynamic head that optimizes multiscale detection, achieving 5% improvements in mAP over its predecessor [18].
Other approaches, such as WALDO, have explored aerial detection specialized in small or partially occluded objects [19], although with lower real-time performance.
Recent advances in object detection have also explored alternative paradigms beyond single-stage detectors such as YOLO, particularly in challenging visual scenarios. For instance, salient object-detection approaches and hybrid CNN–Transformer architectures have been proposed to enhance contextual reasoning and feature interaction. Salient object detection in optical remote sensing images (ORSIs), based on progressive interaction and saliency-guided enhancement, demonstrates improved robustness in complex scenes with cluttered backgrounds and ambiguous object boundaries. While these approaches achieve strong performance on general object-detection benchmarks, their computational complexity and inference latency currently limit their suitability for real-time deployment on lightweight UAV platforms. As such, YOLO-based detectors remain a practical choice for time-critical aerial monitoring applications [20].
Recent advancements in optical remote sensing image (ORSI) analysis have increasingly focused on overcoming challenges such as irregular topological structures and complex contextual relationships. Unlike natural scene images, ORSIs involve unique perspectives and diverse object distributions, where specialized models like PISENet have shown superiority by employing a Progressive Interactive Encoder (PIE) [21]. Although such ORSI-oriented models demonstrate strong performance in optical remote sensing, their computational complexity and lack of real-time constraints make them less suitable for lightweight UAV platforms; therefore, they are not directly comparable to the deployment-oriented objective of this study.
In parallel, recent research has explored hybrid CNN–Transformer architectures for object detection, aiming to combine the local feature extraction capabilities of convolutional networks with the global contextual reasoning of self-attention mechanisms. Models based on DETR variants and hybrid backbones have demonstrated improved performance in complex visual scenes and small-object-detection tasks. However, these approaches typically incur higher computational cost and inference latency, which currently limits their applicability for real-time deployment on lightweight UAV platforms. Consequently, their evaluation is considered a relevant direction for future work rather than within the scope of the present study [22].
Different models of artificial intelligence are shown in Table 5.
Recent Transformer-based and hybrid CNN–Transformer detectors have demonstrated strong global context modeling for aerial imagery. However, their computational overhead can limit real-time deployment on lightweight UAV-oriented systems. In contrast, this work prioritizes deployment feasibility and real-time performance under practical UAV constraints, positioning the proposed framework as an applied, system-level solution rather than an architectural innovation.
3. Materials and Methods
This chapter describes the materials used in the development of the proposed system and the methods applied for data acquisition, dataset preparation, model training, and evaluation using YOLOv11. The structure follows best practices in scientific publications focused on computer vision and UAV-based intelligent monitoring.
It is important to note that no internal architectural modifications were applied to the YOLOv11 model; all performance gains are derived from task-specific data preparation, augmentation, and training configuration.
3.1. Materials
3.1.1. Aerial Platform: DJI Mini 3 UAV
A DJI Mini 3 drone was employed for aerial image acquisition. This UAV was selected due to its portability, weight below 249 g, ease of operation in urban environments, and its capability to capture high-resolution visual data. The drone allows central and oblique captures with excellent stability thanks to its three-axis mechanical gimbal, which is essential for generating a robust dataset for detection model training.
The DJI Mini 3 provides photographs of up to 48 MP and 4K/30 FPS video, enabling clear captures even under variable lighting conditions commonly found in open parking lots.
Table 6 presents its most relevant technical specifications.
3.1.2. Hardware and Software
A combination of hardware and software tools was used to ensure efficient capture, processing, and training. Model training was performed using Google Colab Pro, while dataset preparation and annotation employed specialized computer vision software.
Table 7 presents the hardware and software used in this research work.
3.1.3. Detection Model Selection: WALDO vs. YOLOv11
During the early design stage, WALDO, a CNN-based architecture whose design specifically targets small, low-lying objects in aerial imagery, was considered [9,23]. While WALDO performs well in small-object detection, it presents notable limitations in multiclass detection, lighting robustness, and ecosystem support.
After comparative analysis, YOLOv11 was selected due to its superior speed, anchor-free architecture, multiscale performance, higher accuracy and stability under real-world conditions, as well as its comprehensive training ecosystem [24].
Table 8 presents a comparison between WALDO object detection and YOLOv11 object detection.
To select the most suitable model for detecting vehicles and parking spaces from a UAV, a comparative analysis was conducted between the YOLOv8, YOLOv9, and YOLOv11 architectures, the three most recent generations of the YOLO family developed by Ultralytics. This analysis considered key factors such as accuracy, inference speed, multiscale detection capability, robustness to light variations, and ease of implementation in real environments. YOLOv8 represented a substantial improvement over YOLOv5, particularly in speed and simplicity.
However, its performance for small objects was limited in aerial applications. YOLOv9 introduced programmable gradient information (PGI), improving detection in dense scenes, albeit at a higher computational cost. Finally, YOLOv11 incorporated an anchor-free architecture reinforced with a dynamic head, optimized C2f-RepConv modules, and superior sensitivity to scale variations, positioning it as the most advanced model for scenarios captured from UAVs.
Table 9 shows a comparison between other YOLO architectures.
3.2. Methods
Figure 1 illustrates the methodology used to carry out this research project.
3.2.1. Experimental Design
The experiment was conducted in an open parking lot containing fifty marked spaces. The UAV was flown at multiple heights to capture imagery with various levels of detail and perspective distortion. The dataset was constructed from images acquired under two lighting conditions: morning and midday. This ensured environmental diversity and improved the generalization capability of the model.
3.2.2. Flight Protocol
Flight followed a lawnmower-style pattern commonly used in surveying missions. The following parameters were standardized:
Flight altitudes: 20, 25, 30, and 35 m;
UAV speed: 2–4 m/s;
Camera angle: 80–90° (near-nadir);
Image overlap: 25–30%;
Duration per flight: 10–12 min;
Total number of flights: 14.
This ensured consistent coverage and uniform spatial sampling of the parking area.
3.2.3. Data Capture
The video captured by the DJI Mini 3 camera was recorded at 4K/30 FPS. The OpenCV Python library for image processing was used to extract frames at 0.25 s intervals, resulting in 60 initial images, of which 30 met the quality requirements in terms of sharpness, brightness, and framing. Because of the limited amount of data obtained, data augmentation was applied.
Although the data were collected from a single parking facility and a fixed UAV flight altitude, this setup was intentionally selected to validate the feasibility of the proposed system under controlled operational conditions.
Due to the limited number of original images (30 images), extensive data augmentation techniques were applied to artificially expand the dataset and increase its variability. Data augmentation is a commonly used and essential strategy in computer vision pipelines, particularly when collecting large-scale, real-world datasets is costly or restricted. According to Szeliski, data augmentation enables the creation of additional training samples through geometric or photometric transformations, helping models generalize better to real-world variations [25].
Following this principle, the dataset was expanded to 750 images through a range of transformations designed to simulate diverse environmental and visual conditions encountered during UAV flights. The transformations applied included:
Random horizontal flips;
Brightness and contrast modifications;
Rotations and affine transformations;
Mosaic augmentation;
Perspective distortions;
Random scaling and cropping.
These augmentation strategies help the model recognize parking spaces and vehicles under variations in illumination, shadows, camera angles, and geometric deformation, all of which are typical challenges in aerial-based monitoring.
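Two of the transformations listed above (horizontal flips and brightness changes) can be sketched in NumPy as follows. This is an illustrative sketch only; in practice, annotation platforms and training frameworks apply these transformations automatically.

```python
import numpy as np

def horizontal_flip(image: np.ndarray, boxes: np.ndarray):
    """Flip an HxWx3 image and its YOLO-format boxes (cx, cy, w, h, normalized)."""
    flipped = image[:, ::-1].copy()
    out = boxes.copy()
    out[:, 0] = 1.0 - out[:, 0]  # mirror the normalized center-x coordinate
    return flipped, out

def adjust_brightness(image: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel intensities, clipping to the valid 8-bit range."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```

Note that geometric transformations must be applied to both the image and its bounding-box labels, whereas photometric ones (brightness, contrast) leave the labels unchanged.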
Two classes were maintained throughout augmentation: vehicle and parking.
The augmented dataset significantly improved the robustness and generalization performance of the YOLOv11 model during training.
Although the dataset originates from a limited number of aerial images collected at a single site, it serves to validate the feasibility of the proposed system under controlled operational conditions. The impact of this limitation and potential strategies for broader generalization are discussed in Section 5.
3.2.4. Pre-Processing
All images underwent the following preprocessing steps:
Resizing to 640 × 640 px;
Letterboxing to preserve aspect ratio;
RGB normalization to [0, 1];
Laplacian variance filtering to remove blurred samples;
Discarding frames with excessive shadows or poor visibility.
This resulted in a high-quality dataset of 750 images.
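The Laplacian-variance blur filter mentioned above can be sketched with a dependency-free NumPy implementation (the production pipeline would typically use OpenCV's Laplacian operator instead; the threshold value of 100 is an illustrative assumption, not the value used in this work):

```python
import numpy as np

# Discrete 3x3 Laplacian kernel
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: low values indicate blur."""
    h, w = gray.shape
    g = gray.astype(np.float64)
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):  # valid-mode convolution with the 3x3 kernel
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * g[dy:dy + h - 2, dx:dx + w - 2]
    return float(resp.var())

def is_sharp(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Keep a frame only if its Laplacian variance exceeds the threshold."""
    return laplacian_variance(gray) >= threshold
```

A uniform (fully blurred) image yields a variance of zero, while images with strong edges produce large values, so a single threshold separates usable from blurred frames.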
3.2.5. Dataset Annotation
Annotation was performed using Roboflow, a comprehensive computer vision platform that supports the entire project workflow, from image collection and annotation to model training, deployment, and maintenance. It was selected because its annotation tools, dataset management, and export formats comply with YOLO standards. Only fully visible vehicles and parking spaces were labeled, and the augmented images were validated manually.
Table 10 below shows the distributed quantities of the dataset for training, validation, and testing.
3.2.6. YOLOv11 Model Training
The model was trained on Google Colab Pro using an NVIDIA Tesla T4 GPU. YOLOv11 was employed in this study as the baseline object-detection model due to its favorable balance between detection accuracy and real-time inference performance. It is important to note that no internal architectural modifications were applied to the YOLOv11 model. Instead, all performance gains observed in this work are attributed to task-specific data preparation, UAV-oriented data augmentation strategies, and the training configuration tailored to the top-down aerial perspective of parking environments. This design choice was intentional, as preserving the original architecture ensures reproducibility, facilitates fair comparison with previous YOLO versions, and maintains deployment feasibility under real-time operational constraints.
The training parameters followed Ultralytics’ recommendations and are described in Table 11.
The model demonstrated stable convergence with strong performance across both classes.
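A training run of this kind can be expressed as a short configuration sketch using the Ultralytics Python API. The dataset file (`parking.yaml`), model scale (`yolo11n.pt`), and batch size shown here are illustrative placeholders, not the exact settings of Table 11.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv11 checkpoint (nano scale shown; this study applies
# no architectural modifications, only data and training configuration).
model = YOLO("yolo11n.pt")

# Train on the UAV parking dataset; paths and batch size are placeholders.
results = model.train(
    data="parking.yaml",  # dataset config listing the two classes: vehicle, parking
    epochs=30,            # matches the 30 epochs reported for the training curves
    imgsz=640,            # images were resized to 640 x 640 during preprocessing
    batch=16,             # illustrative assumption
    device=0,             # Tesla T4 GPU on Google Colab Pro
)

# Evaluate on the held-out split to obtain precision, recall, and mAP metrics.
metrics = model.val()
```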
3.2.7. Model Evaluation
YOLOv11 model performance was evaluated using standard object-detection metrics widely adopted in computer vision research, including precision, recall, F1-score, IoU (Intersection over Union), mAP@0.5, and mAP@0.5:0.95. These metrics are formally defined below, following classic works such as Taye [16] and Szeliski [25] and recent benchmark reports by Ultralytics [24] and Xue et al. [26].
- (a)
Precision
As defined in the standard evaluation methodology [16,25]:
$$\text{Precision} = \frac{TP}{TP + FP}$$
where $TP$ is the number of true positives and $FP$ the number of false positives. Precision measures how many of the model’s positive predictions are correct; high precision indicates that the detector produces few false alarms.
- (b)
Recall
Recall (or sensitivity) measures the proportion of actual objects that the model successfully detects [16,25,26]:
$$\text{Recall} = \frac{TP}{TP + FN}$$
where $FN$ is the number of false negatives (missed objects). A high recall value indicates that the model is unlikely to overlook vehicles or parking spaces.
- (c)
F1-score
The F1-score represents the harmonic mean of precision and recall, balancing both metrics [25,26]:
$$F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
It is especially useful when the number of positive samples in each class differs.
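The three count-based metrics above can be computed directly from detection counts; a minimal sketch:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of actual objects that were detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0
```

For example, precision 0.94 and recall 0.92 yield an F1-score of approximately 0.93, matching the balance reported in the results.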
- (d)
Intersection over Union (IoU)
Bounding-box accuracy was measured using the Intersection over Union metric employed in YOLO-based models, defined as [18,24]
$$IoU = \frac{\text{Area}(B_p \cap B_{gt})}{\text{Area}(B_p \cup B_{gt})}$$
where $B_p$ is the predicted bounding box and $B_{gt}$ is the ground-truth bounding box. IoU quantifies the degree of overlap between predicted boxes and actual boxes, as illustrated in Figure 2.
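For axis-aligned boxes in (x1, y1, x2, y2) format, the IoU definition above reduces to a few lines of code:

```python
def iou(box_a, box_b) -> float:
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```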
- (e)
mAP@0.5
Defined in object-detection benchmarks [18,24,26], the mean average precision at IoU = 0.5 evaluates how well the detector identifies object classes with a moderate localization threshold:
$$\text{mAP@0.5} = \frac{1}{N} \sum_{i=1}^{N} AP_i$$
where $AP_i$ is the area under the precision–recall curve for class $i$, and $N$ is the number of classes (in this study, $N = 2$: vehicle and parking). This metric reflects overall detection accuracy.
- (f)
mAP@0.5:0.95
The most widely used COCO-style evaluation metric [18,24]. To provide a more rigorous evaluation, mAP was also computed across ten IoU thresholds between 0.5 and 0.95 (in increments of 0.05):
$$\text{mAP@0.5:0.95} = \frac{1}{10} \sum_{t \in \{0.50,\, 0.55,\, \ldots,\, 0.95\}} \text{mAP}_t$$
This metric is more demanding, since high IoU thresholds require precise bounding-box localization.
- (g)
Real-Time Inference Speed
As outlined by Ultralytics [24], inference speed was measured in frames per second (FPS) to determine real-time capability. YOLOv11 achieved speeds exceeding 35 FPS on an NVIDIA Tesla T4 GPU, confirming suitability for real-time UAV deployment.
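As an illustration of the measurement procedure, FPS can be estimated by timing repeated forward passes after a short warm-up. The `run_inference` callable below is a placeholder standing in for the detector's forward pass, not part of any specific API:

```python
import time

def measure_fps(run_inference, frames, warmup: int = 3) -> float:
    """Average frames per second over a batch, after warm-up iterations."""
    for frame in frames[:warmup]:  # warm-up: exclude one-time setup costs
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

Warm-up matters on GPUs in particular, where the first inference pays kernel-compilation and memory-allocation costs that would otherwise bias the average.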
3.2.8. Parking Occupancy Calculation
After obtaining the detection results for the vehicle and parking classes, parking occupancy was computed using a rule-based approach grounded in object-detection principles and IoU-based matching strategies commonly used in the literature [16,24,25]. The objective of this step was to determine, for each parking space, whether it was occupied or available based on the spatial relationship between detected vehicles and parking slot regions.
The occupancy estimation process consisted of three stages:
- 1.
The extraction of bounding boxes for both classes: vehicle detections and parking detections. These bounding boxes were standardized in the format (x1, y1, x2, y2), following the conventions used in modern detectors [24].
- 2.
The assessment of spatial overlap using Intersection over Union (IoU).
For each detected parking space, its IoU was computed with all detected vehicles using the definition given in (d), as established in fundamental computer vision literature [18,25]. A parking space was considered occupied if its IoU with any vehicle bounding box exceeded a threshold, typically between 0.10 and 0.20 for aerial images, where bounding-box alignment may vary slightly due to perspective distortions [10,26].
- 3.
The binary assignment of occupancy state.
Each parking space was assigned to one of two states: occupied or available. This rule-based decision is consistent with methodologies used in smart parking systems employing ground sensors, thermal cameras, or aerial views [5,6,14].
Finally, the overall occupancy percentage for the parking lot was computed as
$$\text{Occupancy}\,(\%) = \frac{N_{\text{occupied}}}{N_{\text{total}}} \times 100$$
where $N_{\text{occupied}}$ is the number of spaces classified as occupied and $N_{\text{total}}$ is the total number of detected parking spaces. This formulation aligns with occupancy estimation metrics commonly adopted in parking analytics literature [5,14].
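The three stages above can be sketched end to end as follows. This is a minimal sketch; the 0.15 threshold is one illustrative value from the 0.10–0.20 range reported as suitable for aerial imagery.

```python
def occupancy(vehicle_boxes, parking_boxes, iou_threshold: float = 0.15):
    """Return per-slot states and the overall occupancy percentage.

    All boxes use the (x1, y1, x2, y2) format.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    # Stage 2-3: a slot is occupied if any vehicle overlaps it enough.
    states = ["occupied" if any(iou(slot, v) > iou_threshold for v in vehicle_boxes)
              else "available"
              for slot in parking_boxes]
    pct = 100.0 * states.count("occupied") / len(parking_boxes) if parking_boxes else 0.0
    return states, pct
```

For instance, one detected vehicle sitting inside the first of two detected slots yields a 50% occupancy estimate.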
4. Results
This section presents the quantitative and qualitative results obtained from evaluating the YOLOv11 model on aerial imagery captured using the DJI Mini 3 UAV. The model was assessed using standard detection metrics and compared to previous YOLO architectures (YOLOv8 and YOLOv9). A detailed analysis is provided regarding the system’s performance under varying environmental and visual conditions.
4.1. Quantitative Results
Figure 3 presents qualitative examples of the UAV-based dataset with ground-truth annotations and model predictions. The images illustrate both occupied parking spaces (vehicle class) and free parking slots (parking class) under different illumination and surface conditions. These visual results provide insight into the model’s ability to localize parking-related objects from a top-down aerial perspective and complement the quantitative evaluation presented in subsequent sections.
The evaluation was conducted using a test set of 75 unseen images representing diverse conditions, including different lighting environments (morning and midday), shadow intensities, perspective variations, and varying levels of vehicle density. These variations ensured a realistic examination of model robustness.
Table 12 summarizes the numerical results obtained for each metric.
4.1.1. Interpretation of Results
The precision of 0.94 indicates that false positives are minimal, meaning the model rarely misclassifies background regions as vehicles or parking spaces.
The recall of 0.92 confirms the model’s ability to detect nearly all vehicles and free parking spaces, even those with faint markings or partial shadow coverage.
The F1-score of 0.93 demonstrates balanced performance without favoring either sensitivity or specificity.
The mAP@0.5 of 0.96 shows that the model accurately localizes bounding boxes with acceptable spatial tolerance.
The more demanding mAP@0.5:0.95 value of 0.89 demonstrates high localization accuracy across stricter thresholds, consistent with state-of-the-art detectors.
The achieved 35–38 FPS verifies that the system is suitable for real-time UAV-based monitoring.
4.1.2. Precision–Recall Curves
Beyond numerical metrics, the performance of the YOLOv11 detector was analyzed using precision–recall (PR) curves for the vehicle and parking classes. These curves reveal how precision and recall vary across confidence thresholds, offering a more detailed performance profile than aggregated values such as mAP.
In UAV-based detection, where shadows, illumination changes, and small-object appearance present challenges, PR curves are especially useful for identifying strengths and weaknesses in classifier confidence. Curves approaching the upper-right corner denote strong discriminative ability.
The PR curves for both classes are presented in Figure 4, demonstrating stable performance across thresholds.
4.1.3. Training and Validation Curves
To examine optimization behavior and generalization capability, training and validation curves were evaluated. These curves illustrate the evolution of loss and accuracy metrics over the 30 training epochs.
A consistent decrease in training and validation loss, coupled with rising mAP@0.5 and mAP@0.5:0.95 values, indicates that the model converged effectively without overfitting. Such analysis is crucial in UAV vision applications, where environmental variability may otherwise destabilize training.
The curves presented in Figure 5 confirm smooth convergence and robust optimization of the YOLOv11 model.
4.2. Qualitative Results
Visual inspection provides complementary insights into model behavior beyond numerical metrics. The qualitative evaluation included images with varying illumination, perspective distortions, object densities, and environmental complexity.
4.2.1. Correct Detections Under Normal Conditions
Under standard lighting conditions and unobstructed views, YOLOv11 demonstrated consistent detection quality, accurately identifying vehicles and parking spaces with well-aligned bounding boxes. These results validate the model’s ability to operate reliably in common outdoor scenarios.
The values displayed in Figure 6 correspond to model confidence scores rather than classification accuracy. Lower confidence values, particularly around 0.60, are observed under challenging visual conditions such as faded parking markings, strong shadows, or partial occlusions. These values reflect model uncertainty while still maintaining correct spatial localization.
The detection performance under normal lighting conditions is demonstrated in Figure 7, where the system accurately delineates vacant parking slots.
4.2.2. Challenging Scenarios
The model was further tested on images containing strong shadows, occlusions, faded markings, and oblique camera angles. These conditions often degrade detection performance in traditional systems.
4.3. Comparison with YOLOv8 and YOLOv9
To contextualize YOLOv11 performance, comparative experiments were conducted using YOLOv8 and YOLOv9 under identical training conditions. Previous literature reports that YOLOv8, while efficient and fast, exhibits reduced sensitivity to small aerial objects and elongated patterns such as parking slot boundaries [10,12,26]. Conversely, YOLOv9 incorporates contextual learning strategies that improve multi-scale representation; however, under deployment-oriented constraints, such as real-time UAV inference and limited onboard resources, YOLOv11 demonstrates more stable real-time behavior and competitive detection performance.
Table 13 presents a controlled comparative evaluation of YOLOv8, YOLOv9, and YOLOv11, highlighting the numerical differences across key metrics such as precision, recall, mAP, and inference speed. All models were trained under identical conditions, using the same dataset, data augmentation pipeline, image resolution (640 × 640), batch size, optimizer, and number of epochs. No pretrained weights were modified beyond standard initialization, ensuring a fair comparison focused on architectural characteristics rather than training bias.
YOLOv11 consistently outperformed earlier versions in both detection accuracy and inference speed. This improvement is primarily attributed to its anchor-free detection paradigm and a dynamic head mechanism, which enhance multi-scale feature aggregation and small-object sensitivity in aerial imagery. The comparison is therefore relevant as it quantifies the practical performance gains obtained when migrating between successive YOLO generations under identical deployment conditions.
Although strong quantitative and qualitative performance was achieved, several limitations must be acknowledged. The dataset was collected from a single parking lot and originated from a limited number of real images, which may restrict generalization to other urban contexts. While data augmentation is a standard technique to improve robustness, it does not replace the need for broader real-world diversity. Future work will therefore focus on multi-site data collection across different cities, seasons, weather conditions, and nighttime scenarios, as well as evaluation at multiple UAV altitudes. In addition, future studies will explore comparisons with Transformer-based and hybrid CNN–Transformer detectors under real-time constraints to further analyze trade-offs between representational capacity and deployment feasibility on lightweight UAV platforms.
4.4. Ablation Study: Impact of Data Augmentation and Dual-Class Formulation
To further analyze the contribution of different components of the proposed system, an ablation study was conducted. Since no architectural modifications were applied to YOLOv11, the ablation focuses on data preparation strategies and task formulation, which are critical in UAV-based detection scenarios.
Table 14 presents the ablation study of data augmentation used in YOLOv11.
Results show that data augmentation contributes significantly to performance gains, particularly for small-object detection and parking-slot delineation. The dual-class formulation further improves occupancy estimation accuracy by explicitly modeling free parking spaces rather than inferring availability indirectly from vehicle absence.
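As one concrete example of what the augmentation pipeline must do, a horizontal flip also has to remap the YOLO-format labels (class, normalized x-center, y-center, width, height). The sketch below is illustrative of this standard transform, not the exact pipeline used in the study:

```python
def hflip_yolo_label(label):
    """Mirror a YOLO-format label (class, x_center, y_center, w, h) horizontally.

    Coordinates are normalized to [0, 1]; only the x-center changes.
    """
    cls, xc, yc, w, h = label
    return (cls, 1.0 - xc, yc, w, h)

# A vehicle near the left image edge moves to the right edge after flipping
flipped = hflip_yolo_label((0, 0.2, 0.5, 0.1, 0.05))
print(flipped)  # x-center 0.2 becomes 0.8; width and height are unchanged
```

Geometric augmentations like this, along with mosaic composition (configuration A3 in Table 14), expose the detector to slot orientations and vehicle positions absent from the raw imagery.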
5. Discussion
This study is intentionally positioned as an engineering-oriented validation and technical verification of UAV-based parking occupancy monitoring using a state-of-the-art object-detection model. Rather than proposing new detection architectures, the work focuses on system integration, deployment feasibility, and real-time performance evaluation under practical UAV operating constraints. This perspective aligns with applied research in intelligent transportation systems, where robustness, scalability, and operational viability are critical.
Additionally, the evaluation was conducted at a single parking lot and at a fixed UAV altitude, which may limit the generalization of the results to other environments or flight profiles. Future work will therefore focus on cross-location evaluation across different parking facilities and testing multiple UAV altitudes to assess robustness under varying spatial resolutions and perspectives.
The comparison with previous YOLO versions is not intended to demonstrate architectural novelty but rather to evaluate the practical benefits of adopting newer detection paradigms in a real-deployment context. From an engineering and applied research perspective, understanding whether migration from YOLOv8 or YOLOv9 to YOLOv11 yields measurable improvements in accuracy, robustness, and real-time performance is essential for system designers and practitioners.
The results obtained in this study demonstrate that the proposed UAV-based parking occupancy detection system using YOLOv11 provides high accuracy, strong robustness, and real-time performance, positioning it as a competitive solution for intelligent transportation and smart-city applications. When interpreted in the context of previous studies, the findings reveal several important implications.
First, the system’s strong quantitative performance (0.94 precision, 0.92 recall, and 0.96 mAP@0.5) aligns with and exceeds the accuracy levels reported in recent UAV-based detection research. Studies employing YOLOv8 and its lightweight derivatives often report limitations in small-object sensitivity and bounding-box precision at higher altitudes [10,12,26]. Similarly, YOLOv9 research highlights improvements in contextual reasoning but continues to demonstrate instability under strong shadows or abrupt illumination changes [18,27]. In comparison, YOLOv11 exhibited more stable real-time behavior and competitive detection performance under deployment-oriented constraints across a broader range of visual conditions.
From the perspective of the working hypothesis, namely, that modern anchor-free architecture could reliably identify both vehicles and free parking spaces from aerial imagery, the results strongly support this assumption. This is further reinforced by the model’s qualitative performance, where it demonstrated consistent bounding-box alignment in scenes with worn markings, occlusions, or irregular lane patterns.
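Because both vehicles and free slots are detected explicitly, per-frame occupancy follows directly from the two class counts rather than being inferred from vehicle absence. An illustrative calculation with hypothetical counts:

```python
def occupancy_rate(vehicle_count, free_slot_count):
    """Estimate lot occupancy directly from dual-class detection counts."""
    total = vehicle_count + free_slot_count
    if total == 0:
        return None  # no slots visible in the frame
    return vehicle_count / total

# Hypothetical frame: 42 vehicles and 18 free slots detected
rate = occupancy_rate(42, 18)
print(f"{rate:.0%} occupied, 18 slots free")  # 70% occupied
```

This direct formulation is also more robust to partially visible lots: a frame covering only half the facility still yields a valid local occupancy estimate.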
The broader implication is that aerial monitoring systems can serve as scalable, infrastructure-free alternatives to ground sensors and fixed CCTV networks. The integration of heatmap-based visualization strengthens this potential by providing intuitive spatial summaries of vehicle distribution valuable for smart mobility planning, parking optimization, and real-time urban analytics.
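The heatmap-based visualization mentioned above can be produced by accumulating detection centers on a coarse spatial grid before color-mapping. A minimal, dependency-free sketch of the accumulation step (the study’s exact rendering details are not specified here):

```python
def accumulate_heatmap(centers, grid_w, grid_h):
    """Count detection centers (normalized x, y in [0, 1]) per grid cell."""
    grid = [[0] * grid_w for _ in range(grid_h)]
    for x, y in centers:
        col = min(int(x * grid_w), grid_w - 1)  # clamp x = 1.0 into the last column
        row = min(int(y * grid_h), grid_h - 1)
        grid[row][col] += 1
    return grid

# Hypothetical vehicle centers clustered near the lot entrance (top-left)
centers = [(0.05, 0.10), (0.08, 0.12), (0.90, 0.85)]
grid = accumulate_heatmap(centers, grid_w=4, grid_h=4)
print(grid[0][0], grid[3][3])  # 2 1
```

The resulting grid can then be smoothed and rendered with a standard color map (e.g., OpenCV’s `cv2.applyColorMap`) to produce the spatial density summaries used for mobility planning.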
Nevertheless, certain limitations should be considered when interpreting the results. Accuracy decreased in scenarios involving severe occlusions, degraded parking slot paint, or extreme glare, consistent with failure modes reported in related literature. These findings suggest that more diverse training data, particularly from adverse weather, nighttime flights, or multi-altitude passes, could further enhance generalization.
Future research directions may include:
- Expanding the dataset to include multi-weather, multi-season, and nighttime aerial imagery;
- Integrating temporal tracking to stabilize detections across UAV video streams;
- Testing cross-location generalization to evaluate applicability in urban, commercial, and high-density parking environments;
- Deploying the system in real-time UAV loops, enabling continuous monitoring rather than static image inference;
- Exploring Transformer-based backbones, which may further improve small-object detection in aerial contexts.
6. Conclusions
This study developed and evaluated an intelligent, UAV-based system for real-time parking occupancy detection using the YOLOv11 deep learning architecture. The results confirmed high accuracy, strong robustness in challenging visual conditions, and real-time inference capability. YOLOv11 significantly outperformed YOLOv8 and YOLOv9, validating the hypothesis that updated anchor-free architecture improves vehicle and parking slot detection in aerial imagery.
The approach offers advantages in scalability, operational flexibility, and cost efficiency, positioning it as a practical alternative to ground-based sensing systems. Although limitations were observed under severe occlusions, degraded markings, and illumination extremes, the system nonetheless demonstrated reliable performance across diverse conditions.
The findings contribute to the growing body of research in UAV-enabled intelligent transportation systems and provide a foundation for future work involving larger datasets, temporal analysis, autonomous UAV routes, and deployment within smart-city infrastructures.
Author Contributions
All authors contributed equally to all stages of this research. Conceptualization, all authors; methodology, all authors; software, all authors; validation, all authors; formal analysis, all authors; investigation, all authors; resources, all authors; data curation, all authors; writing—original draft preparation, all authors; writing—review and editing, all authors; visualization, all authors; supervision, all authors; project administration, all authors. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable. This study did not involve humans or animals and therefore did not require ethical approval.
Informed Consent Statement
Not applicable. No human subjects were involved in this study.
Data Availability Statement
The aerial imagery dataset used in this study was collected specifically for the development of the proposed system and is not publicly available due to privacy and operational restrictions. Processed data and trained YOLOv11 model weights are available from the corresponding author upon reasonable request.
Acknowledgments
The authors would like to acknowledge the technical support provided during the data collection flights and the use of cloud GPU resources through Google Colab Pro, which facilitated the training of the YOLOv11 model. The authors also thank the individuals who assisted with field coordination and image acquisition. All aspects of the study, including the design, analysis, writing, and preparation of the manuscript, were fully developed by the authors.
Conflicts of Interest
The authors declare no conflicts of interest. The research was conducted independently and without any financial, professional, or personal relationships that could have influenced the results or their interpretation.
Abbreviations
The following abbreviations are used in this manuscript:
| CNN | Convolutional Neural Network |
| CCTV | Closed-Circuit Television |
| FPS | Frames Per Second |
| GPU | Graphics Processing Unit |
| IoU | Intersection Over Union |
| mAP | Mean Average Precision |
| OpenCV | Open Source Computer Vision Library |
| ORSI | Optical Remote Sensing Image |
| PR-Curves | Precision-Recall Curves |
| SOD | Salient Object Detection |
| UAV | Unmanned Aerial Vehicle |
| WALDO | Whereabouts Ascertainment for Low-Lying Detectable Objects |
| YOLO | You Only Look Once |
References
- Afrin, T.; Yodo, N.; Dey, A.; Aragon, L.G. Advancements in UAV-Enabled Intelligent Transportation Systems: A Three-Layered Framework and Future Directions. Appl. Sci. 2024, 14, 9455. [Google Scholar] [CrossRef]
- Wei, W.; Chen, H.; Gong, J.; Che, K.; Ren, W.; Zhang, B. Real-time parking space detection based on deep learning and panoramic images. Sensors 2025, 25, 6449. [Google Scholar] [CrossRef] [PubMed]
- Tokmakov, D.M.; Asenov, S.M. Autonomous Smart Wireless LoRaWAN Vehicle Parking Sensor. In Proceedings of the 2022 XXXI International Scientific Conference Electronics (ET), Sozopol, Bulgaria, 13–15 September 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Idris, M.Y.I.; Leng, Y.Y.; Tamil, E.M.; Noor, N.M.; Razak, Z. Car Park System: A Review of Smart Parking System and Its Technology. Inf. Technol. J. 2009, 8, 101–113. [Google Scholar] [CrossRef]
- Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Vairo, C. Car Parking Occupancy Detection Using Smart Camera Networks and Deep Learning. Expert Syst. Appl. 2017, 72, 327–334. [Google Scholar] [CrossRef]
- Paidi, V.; Fleyeh, H.; Nyberg, R.G. Deep Learning-Based Vehicle Occupancy Detection in an Open Parking Lot Using Thermal Camera. IET Intell. Transp. Syst. 2020, 14, 1339–1345. [Google Scholar] [CrossRef]
- Luo, J.; He, X.; Xu, X.; Li, W. Efficient small object detection YOLO (ESOD-YOLO) based on YOLOv8n for UAV object detection. Sensors 2024, 24, 7067. [Google Scholar] [CrossRef]
- Khalili, B.; Amini, R.; Rahman, M.M.; Wang, G. SOD-YOLOv8: Enhancing YOLOv8 for small object detection. Sensors 2024, 24, 6209. [Google Scholar] [CrossRef] [PubMed]
- Al-Dosari, K.; Fetais, N. A new shift in implementing unmanned aerial vehicles (UAVs) in the safety and security of smart cities: A systematic literature review. Safety 2023, 9, 64. [Google Scholar] [CrossRef]
- Zhang, Q.; Wang, H.; Wang, X.; Shang, J.; Wang, X.; Li, J.; Wang, Y. DSCW-YOLO: Vehicle detection from low-altitude UAV perspective via coordinate awareness and collaborative module optimization. Sensors 2025, 25, 3413. [Google Scholar] [CrossRef]
- Ammar, A.; Koubaa, A.; Ahmed, M.; Saad, A.; Benjdira, B. Vehicle detection from aerial images using deep learning: A comparative study. Electronics 2021, 10, 820. [Google Scholar] [CrossRef]
- Sun, C.; Chen, Y.; Xiao, C.; You, L.; Li, R. YOLOv5s-DSD: An improved aerial image detection algorithm based on YOLOv5s. Sensors 2023, 23, 6905. [Google Scholar] [CrossRef] [PubMed]
- DJI. DJI Mini 3—Specifications. 2024. Available online: https://www.dji.com/mini-3 (accessed on 20 October 2025).
- Elfaki, A.O.; Messoudi, W.; Bushnag, A.; Abuzneid, S.; Alhmiedat, T. A smart real-time parking control and monitoring system. Sensors 2023, 23, 9741. [Google Scholar] [CrossRef] [PubMed]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021. [Google Scholar]
- Taye, M.M. Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015. [Google Scholar]
- Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
- Sturges, S. WALDO: Whereabouts Ascertainment for Low-Lying Detectable Objects. GitHub Repository. 2023. Available online: https://github.com/stephansturges/WALDO (accessed on 12 October 2025).
- Meng, L.; Li, H.; Han, H.; Xu, M.; Wu, J.; Hou, S.; Duan, W. Progressive Enhancement of Foreground Features for Salient Object Detection in Optical Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 7572–7591. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, T.; Xue, L.; Lian, W.; Tao, R. ORSI Salient Object Detection via Progressive Interaction and Saliency-Guided Enhancement. IEEE Geosci. Remote Sens. Lett. 2025, 23, 6002105. [Google Scholar] [CrossRef]
- Dong, F.; Wang, M. HybriDet: A Hybrid Neural Network Combining CNN and Transformer for Wildfire Detection in Remote Sensing Imagery. Remote Sens. 2025, 17, 3497. [Google Scholar] [CrossRef]
- IEEE. IEEE Code of Ethics; IEEE: Piscataway, NJ, USA, 2023; Available online: https://www.ieee.org/about/corporate/governance/p7-8.html (accessed on 17 September 2025).
- Ultralytics. YOLOv8/YOLOv11 Documentation. 2024. Available online: https://docs.ultralytics.com (accessed on 17 September 2025).
- Szeliski, R. Computer Vision: Algorithms and Applications, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Xue, H.; Tang, Z.; Xia, Y.; Wang, L.; Li, L. HCTD: A CNN-Transformer hybrid for precise object detection in UAV aerial imagery. Comput. Vis. Image Underst. 2025, 168, 104409. [Google Scholar] [CrossRef]
- Li, Y.; Li, Q.; Pan, J.; Zhou, Y.; Zhu, H.; Wei, H. SOD-YOLO: Small-object-detection algorithm based on improved YOLOv8 for UAV images. Remote Sens. 2024, 16, 3057. [Google Scholar] [CrossRef]
Figure 2.
Intersection over union representation.
Figure 3.
Qualitative examples of the UAV-based dataset with ground-truth annotations.
Figure 4.
Precision–recall curves for the vehicles and parking classes.
Figure 5.
Training and validation loss and mAP curves for YOLOv11.
Figure 6.
Correct detection in real time.
Figure 7.
Example of correct detections under normal illumination.
Table 1.
Historical evolution of traditional parking monitoring technologies.
| Generation | Technologies Used | Advantages | Limitations |
|---|---|---|---|
| 1st Generation | Background subtraction, edge detection | Easy to implement, low computational cost | Highly sensitive to shadows, lighting variations, and noise |
| 2nd Generation | SVM, Haar cascades | Improved discrimination compared to classical methods | Low real-time performance, sensitive to illumination changes |
| 3rd Generation | CNNs, Faster R-CNN, YOLOv3 | High accuracy, robust feature extraction | Requires fixed infrastructure, GPU processing |
Table 2.
Accuracy achieved by representative fixed-camera parking monitoring systems.
| Author | Model | Dataset | mAP |
|---|---|---|---|
| Amato et al. 2017 [5] | Deep CNN | CNRPark-EXT | 0.91 |
| Paidi et al. 2020 [6] | Thermal CNN | Thermal parking | 0.89 |
Table 3.
Main limitations of sensor-based fixed access systems.
| Limitation | Description |
|---|---|
| High installation cost | Requires a sensor or barrier at each access point or parking slot |
| Frequent maintenance | Prone to failures due to weather, dust, and wear |
| Low scalability | Difficult to deploy in temporary or large outdoor parking lots |
| No space-level identification | Counts entries/exits but cannot detect which specific spaces are free |
Table 4.
State-of-the-art aerial vehicle detection using UAVs.
| Author | Model | Dataset | mAP | Contribution |
|---|---|---|---|---|
| Ammar et al. 2021 [11] | YOLOv4 | Aerial Parking | 0.94 | UAV-based urban vehicle detection |
| Sun et al. 2023 [12] | YOLOv5s-DSD | VisDrone2019 | 0.92 | Aerial vehicle tracking from UAVs |
| Luo et al. 2024 [7] | ESOD-YOLOv8n | UAVDT | 0.95 | Optimization for small object detection |
| Khalili et al. 2024 [8] | SOD-YOLOv8 | VisDrone | 0.96 | Lightweight model specialized for UAV imagery |
Table 5.
Comparison of modern detection architectures for aerial vehicle detection.
| Model | Base Dataset | Architecture Type | Advantages |
|---|---|---|---|
| YOLOv8 | COCO/UAVDT | Anchor-based + PAN-FPN | High speed (30–40 FPS), good accuracy |
| WALDO | COCO-Aerial | CNN + Spatial Attention | Strong small-object detection |
| YOLOv11 | COCO/VisDrone | Anchor-free + Dynamic Head | Higher accuracy, better small-object detection, reduced latency |
Table 6.
Key technical specifications of the DJI Mini 3 UAV.
| Category | Specification | Detail |
|---|---|---|
| Dimensions and Weight | Total weight | <249 g |
| | Folded dimensions | 148 × 90 × 62 mm |
| | Unfolded dimensions | 251 × 362 × 72 mm |
| Propulsion System | Flight autonomy | 30–38 min |
| | Wind resistance | Up to 10.7 m/s |
| Positioning System | Stabilization | GPS + GLONASS |
| | Gimbal | 3-axis mechanical |
| Camera | Sensor | 1/1.3″ CMOS (48 MP) |
| | Lens | 24 mm equivalent |
| | Aperture | f/1.7 |
| | ISO | 100–3200 |
| | Photo resolution | 48 MP |
| | Video resolution | 4K at 24–30 FPS |
| Transmission | Technology | OcuSync 2.0 |
| | Range | Up to 10 km |
| Flight Modes | Modes | Normal, Sport, Cine |
| Storage | microSD | Up to 256 GB |
Table 7.
Hardware and software used in the UAV + YOLOv11 system.
| Category | Component | Description/Specification |
|---|---|---|
| Hardware | Training GPU | Google Colab Pro, NVIDIA Tesla T4 (16 GB VRAM) |
| | Auxiliary computer | Laptop for analysis and control |
| | UAV controller | DJI Mini 3 RC-N1 with OcuSync |
| | Ground station | Flight monitoring and video reception |
| Software | Programming language | Python 3.10 |
| | Detection framework | Ultralytics YOLOv11 |
| | Image processing | OpenCV 4.8 |
| | Execution environment | Google Colab Pro |
| | Annotation tool | Roboflow (YOLO format) |
| | Libraries | NumPy 2.2, Pandas 2.0.0, Matplotlib 3.10.0 |
Table 8.
Technical comparison between WALDO and YOLOv11.
| Criteria | WALDO [19] | YOLOv11 [24] |
|---|---|---|
| Architecture | Spatial CNN | Anchor-free + Dynamic head |
| Primary purpose | Small objects | Multiscale detection |
| FPS | 15–25 | 30–50 |
| Accuracy | High | Very high |
| Lighting robustness | Medium | High |
| Multiclass support | Limited | Full |
| UAV suitability | Good | Excellent |
| Parking slot detection | No | Yes |
| Selection reason | - | Superior accuracy, speed, and scalability |
Table 9.
Comparison between YOLOv8, YOLOv9, and YOLOv11.
| Criteria | YOLOv8 | YOLOv9 | YOLOv11 |
|---|---|---|---|
| Release year | 2023 | 2024 | 2024–2025 |
| Architecture | Anchor-based | CKPT hybrid | Anchor-free |
| Real-time FPS | 25–45 | 20–40 | 30–50 |
| Small-object detection | Limited | Improved | Superior |
| UAV suitability | Good | Very good | Excellent |
Table 10.
Distribution of the dataset into training, validation, and testing sets.
| Set | Images | Percentage |
|---|---|---|
| Training | 600 | 80% |
| Validation | 75 | 10% |
| Testing | 75 | 10% |
| Total | 750 | 100% |
Table 11.
Training parameters used to achieve efficient performance.
| Parameter | Value |
|---|---|
| Epochs | 30 |
| Batch size | 16 |
| Learning rate | 0.001 |
| Optimizer | Adam |
| Scheduler | Cosine Annealing |
| Image size | 640 × 640 px |
| Momentum | 0.937 |
Table 12.
Quantitative performance of YOLOv11.
| Metric | Value | Description |
|---|---|---|
| Precision | 0.94 | Ratio of correct positive detections |
| Recall | 0.92 | Ability to detect all existing objects |
| F1-score | 0.93 | Harmonic mean of precision and recall |
| mAP@0.5 | 0.96 | Detection accuracy at IoU ≥ 0.5 |
| mAP@0.5:0.95 | 0.89 | Average precision across IoU thresholds 0.5–0.95 |
| Inference speed | 35–38 FPS | Real-time capability on a Tesla T4 GPU |
Table 13.
Comparison of YOLO architectures based on recent studies.
| Metric | YOLOv8 [10,12,26] | YOLOv9 [18,27] | YOLOv11 (Proposed) |
|---|---|---|---|
| Precision | 0.88 | 0.90 | 0.94 |
| Recall | 0.85 | 0.89 | 0.92 |
| mAP@0.5 | 0.91 | 0.93 | 0.96 |
| mAP@0.5:0.95 | 0.82 | 0.85 | 0.89 |
| Inference speed | 32 FPS | 30 FPS | 35–38 FPS |
Table 14.
Ablation configurations of data augmentation for YOLOv11 training.
| Configuration | Description |
|---|---|
| A1 | YOLOv11 trained on raw images (no augmentation) |
| A2 | YOLOv11 + standard augmentation |
| A3 | YOLOv11 + standard augmentation + mosaic |
| A4 | YOLOv11 + full augmentation + dual-class (vehicle + parking) |