Search Results (370)

Search Parameters:
Keywords = urban aerial images

21 pages, 4388 KiB  
Article
An Omni-Dimensional Dynamic Convolutional Network for Single-Image Super-Resolution Tasks
by Xi Chen, Ziang Wu, Weiping Zhang, Tingting Bi and Chunwei Tian
Mathematics 2025, 13(15), 2388; https://doi.org/10.3390/math13152388 - 25 Jul 2025
Viewed by 286
Abstract
The goal of single-image super-resolution (SISR) tasks is to generate high-definition images from low-quality inputs, with practical uses spanning healthcare diagnostics, aerial imaging, and surveillance systems. Although CNNs have considerably improved image reconstruction quality, existing methods still face limitations, including inadequate restoration of high-frequency details, high computational complexity, and insufficient adaptability to complex scenes. To address these challenges, we propose an Omni-dimensional Dynamic Convolutional Network (ODConvNet) tailored for SISR tasks. Specifically, ODConvNet comprises four key components: a Feature Extraction Block (FEB) that captures low-level spatial features; an Omni-dimensional Dynamic Convolution Block (DCB), which utilizes a multidimensional attention mechanism to dynamically reweight convolution kernels across spatial, channel, and kernel dimensions, thereby enhancing feature expressiveness and context modeling; a Deep Feature Extraction Block (DFEB) that stacks multiple convolutional layers with residual connections to progressively extract and fuse high-level features; and a Reconstruction Block (RB) that employs subpixel convolution to upscale features and refine the final HR output. This mechanism significantly enhances feature extraction and effectively captures rich contextual information. Additionally, we employ an improved residual network structure combined with a refined Charbonnier loss function to alleviate gradient vanishing and exploding and to enhance the robustness of model training. Extensive experiments conducted on widely used benchmark datasets, including DIV2K, Set5, Set14, B100, and Urban100, demonstrate that, compared with existing deep learning-based SR methods, ODConvNet improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), as well as the visual quality of SR images. Ablation studies further validate the effectiveness and contribution of each component in our network. The proposed ODConvNet offers an effective, flexible, and efficient solution for the SISR task and provides promising directions for future research.
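Two of the generic ingredients named above are easy to illustrate. Below is a minimal PyTorch sketch of a Charbonnier loss and a subpixel (pixel-shuffle) reconstruction step; module names and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Smooth L1-like loss: sqrt((x - y)^2 + eps^2), robust near zero."""
    def __init__(self, eps: float = 1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, pred, target):
        return torch.sqrt((pred - target) ** 2 + self.eps ** 2).mean()

class SubpixelUpscale(nn.Module):
    """Conv + PixelShuffle upscaling, as used in many SR reconstruction blocks."""
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

loss_fn = CharbonnierLoss()
up = SubpixelUpscale(channels=64, scale=4)
hr = up(torch.randn(1, 64, 32, 32))   # -> (1, 64, 128, 128)
```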

29 pages, 7229 KiB  
Article
The Non-Destructive Testing of Architectural Heritage Surfaces via Machine Learning: A Case Study of Flat Tiles in the Jiangnan Region
by Haina Song, Yile Chen and Liang Zheng
Coatings 2025, 15(7), 761; https://doi.org/10.3390/coatings15070761 - 27 Jun 2025
Viewed by 598
Abstract
This study focuses on the ancient buildings in Cicheng Old Town, a typical architectural heritage area in the Jiangnan region of China. These buildings are famous for their well-preserved Tang Dynasty urban layout and Ming and Qing Dynasty roof tiles. However, the natural aging, weathering, and biological erosion of the roof tiles seriously threaten the integrity of heritage protection. Given that current detection methods mostly depend on manual checks, which are slow and cover only a small area, this study suggests using deep learning technology for heritage protection and creating a smart model to identify damage in flat tiles using the YOLOv8 architecture. During this research, the team used drone aerial photography to collect images of typical building roofs in Cicheng Old Town. Through preprocessing, unified annotation, and system training, a damage dataset containing 351 high-quality images was established, covering five types of damage: breakage, cracks, the accumulation of fallen leaves, lichen growth, and vegetation growth. The results show that (1) the model has an overall mAP of 73.44%, an F1 value of 0.75 in the vegetation growth category, and a recall rate of 0.70, showing stable and balanced detection performance for various damage types; (2) the model performs well in comparisons using confusion matrices and multidimensional indicators (including precision, recall, and log-average miss rate) and can effectively reduce the false detection and missed detection rates; and (3) the research team applied the model to drone images of the roof of Fengyue Painted Terrace Gate in Cicheng Old Town, Jiangbei District, Ningbo City, Zhejiang Province, and automatically detected and located multiple tile damage areas. The prediction results are highly consistent with field observations, verifying the feasibility and application potential of the model in actual heritage protection scenarios.
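Since the model builds on the off-the-shelf YOLOv8 architecture, the fine-tuning workflow can be sketched with the Ultralytics API; the dataset YAML and image filename below are hypothetical placeholders, not the study's files:

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on a custom damage dataset
# (five classes: breakage, cracks, fallen leaves, lichen, vegetation).
model = YOLO("yolov8n.pt")
model.train(data="tile_damage.yaml", epochs=100, imgsz=640)  # hypothetical config

# Run detection on a drone image of a roof.
results = model.predict("roof_orthophoto.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box
```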

28 pages, 5208 KiB  
Article
The Use of BIM Models and Drone Flyover Data in Building Energy Efficiency Analysis
by Agata Muchla, Małgorzata Kurcjusz, Maja Sutkowska, Raquel Burgos-Bayo, Eugeniusz Koda and Anna Stefańska
Energies 2025, 18(13), 3225; https://doi.org/10.3390/en18133225 - 20 Jun 2025
Viewed by 593
Abstract
Building information modeling (BIM) and thermal imaging from drone flyovers present innovative opportunities for enhancing building energy efficiency. This study examines the integration of BIM models with thermal data collected using unmanned aerial vehicles (UAVs) to assess and manage energy performance throughout a building’s lifecycle. By leveraging BIM’s structured data and the concept of the digital twin, thermal analysis can be automated to detect thermal bridges and inefficiencies, facilitating data-driven decision-making in sustainable construction. The paper examines methodologies for combining thermal imaging with BIM, including image analysis algorithms and artificial intelligence applications. Case studies demonstrate the practical implementation of UAV-based thermal data collection and BIM integration in an educational facility. The findings highlight the potential for optimizing energy efficiency, improving facility management, and advancing low-emission building practices. The study also addresses key challenges such as data standardization and interoperability, and outlines future research directions in the context of smart city applications and energy-efficient urban development.
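As a rough illustration of the automated thermal-anomaly step (not the paper's algorithm), one could flag facade pixels that run warmer than their surroundings in a UAV thermal raster; the threshold and synthetic array here are assumptions:

```python
import numpy as np

def flag_thermal_bridges(temps: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag pixels whose temperature exceeds the facade mean by k standard
    deviations -- a crude stand-in for automated thermal-bridge detection."""
    mu, sigma = temps.mean(), temps.std()
    return temps > mu + k * sigma  # boolean mask of warm anomalies

# temps: 2D array of surface temperatures (deg C) from a UAV thermal flyover
temps = np.random.normal(loc=5.0, scale=1.0, size=(480, 640))
temps[200:210, 300:320] += 6.0          # synthetic warm patch (heat leak)
mask = flag_thermal_bridges(temps)
print(mask.sum(), "candidate thermal-bridge pixels")
```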

20 pages, 2711 KiB  
Article
Urban Environment and Momentary Psychological States: A Micro-Scale Study on a University Campus with Network Analysis
by Fanxi Wang and Feng Qi
Urban Sci. 2025, 9(6), 221; https://doi.org/10.3390/urbansci9060221 - 13 Jun 2025
Viewed by 461
Abstract
Urban environmental settings influence human psychological states, contributing to varying mental health outcomes. This study examines the relationships between objective environmental features and psychological states at a fine scale. Using a geo-enabled survey tool, we collected data on individuals’ perceptions of their immediate environment within their daily activity space on an urban university campus. The psychological assessment included emotional and affective states such as perceived stress, fatigue, and happiness. Objective environmental properties were derived from high-resolution imagery to analyze the association between environmental settings and psychological responses. The data were analyzed using Spearman’s correlation, moderated multiple regression, and partial correlation networks. Our findings revealed that beneficial psychological states were positively associated with the quantity of natural elements in the immediate environment, such as trees, water, and grass. Conversely, negative psychological states were positively associated with barren areas, parking lots, buildings, and artificial surfaces. These relationships were not significantly moderated by gender or ethnicity in our experiment. The interconnections of psychological states showed distinct patterns in three environmental settings: a mostly green environment, a mixed environment with green and artificial elements, and a mostly artificial environment. Differences in these interconnections between males and females were also observed. These results highlight the complex interplay between environmental features and mental state networks.
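The two statistical tools named above are standard; a small sketch with synthetic stand-in variables shows how a Spearman correlation and the partial correlations underlying a partial-correlation network are typically computed:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy stand-ins: per-response % tree cover and a 0-10 stress rating.
rng = np.random.default_rng(0)
tree_cover = rng.uniform(0, 100, 200)
stress = 8 - 0.05 * tree_cover + rng.normal(0, 1, 200)

rho, p = spearmanr(tree_cover, stress)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")

# Partial correlations from the precision (inverse covariance) matrix,
# the usual basis of a partial-correlation network.
X = np.column_stack([tree_cover, stress, rng.normal(size=200)])
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)  # off-diagonal entries are partial correlations
```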

21 pages, 10875 KiB  
Article
FIM-JFF: Lightweight and Fine-Grained Visual UAV Localization Algorithms in Complex Urban Electromagnetic Environments
by Faming Gong, Junjie Hao, Chengze Du, Hao Wang, Yanpu Zhao, Yi Yu and Xiaofeng Ji
Information 2025, 16(6), 452; https://doi.org/10.3390/info16060452 - 27 May 2025
Viewed by 448
Abstract
Unmanned aerial vehicles (UAVs) are a key driver of the low-altitude economy, where precise localization is critical for autonomous flight and complex task execution. However, conventional global positioning system (GPS) methods suffer from signal instability and degraded accuracy in dense urban areas. This paper proposes a lightweight and fine-grained visual UAV localization algorithm (FIM-JFF) suitable for complex electromagnetic environments. FIM-JFF integrates both shallow and global image features to leverage contextual information from satellite and UAV imagery. Specifically, a local feature extraction module (LFE) is designed to capture rotation-, scale-, and illumination-invariant features. Additionally, an environment-adaptive lightweight network (EnvNet-Lite) is developed to extract global semantic features while adapting to lighting, texture, and contrast variations. Finally, UAV geolocation is determined by matching feature points and their spatial distributions across multi-source images. To validate the proposed method, a real-world dataset, UAVs-1100, was constructed in complex urban electromagnetic environments. The experimental results demonstrate that FIM-JFF achieves an average localization error of 4.03 m with a processing time of 2.89 s, outperforming state-of-the-art methods by improving localization accuracy by 14.9% while reducing processing time by 0.76 s.
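The paper's LFE and EnvNet-Lite modules are specific to FIM-JFF, but the general idea of localizing by matching feature points between a UAV frame and a georeferenced satellite tile can be sketched with generic ORB features in OpenCV; the synthetic images below are stand-ins, not the UAVs-1100 data:

```python
import cv2
import numpy as np

# Synthetic stand-ins for a UAV frame and a georeferenced satellite tile.
rng = np.random.default_rng(0)
uav = rng.integers(0, 255, (512, 512), dtype=np.uint8)
sat = cv2.resize(uav, (1024, 1024))   # crude overlap so correspondences exist

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(uav, None)
kp2, des2 = orb.detectAndCompute(sat, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# In a real pipeline the matched satellite keypoints carry map coordinates;
# the UAV position is inferred from their spatial distribution.
print(f"{len(matches)} correspondences found")
```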

28 pages, 7500 KiB  
Article
Lightweight Multi-Head MambaOut with CosTaylorFormer for Hyperspectral Image Classification
by Yi Liu, Yanjun Zhang and Jianhong Zhang
Remote Sens. 2025, 17(11), 1864; https://doi.org/10.3390/rs17111864 - 27 May 2025
Viewed by 377
Abstract
Unmanned aerial vehicles (UAVs) equipped with hyperspectral hardware systems are widely used in urban planning and land classification. However, hyperspectral sensors generate large volumes of data rich in both spatial and spectral information, making their efficient processing on resource-constrained devices challenging. While transformers have been widely adopted for hyperspectral image classification due to their global feature extraction capabilities, their quadratic computational complexity limits their applicability on resource-constrained devices. To address this limitation and enable the real-time processing of hyperspectral data on UAVs, we propose a lightweight multi-head MambaOut with CosTaylorFormer (LMHMambaOut-CosTaylorFormer). First, a 3D-2D CNN is used to extract both spatial and spectral shallow features from hyperspectral images. Following this, one branch employs a linear transformer, CosTaylorFormer, to extract global spectral information. Specifically, CosTaylorFormer uses a cosine function to adjust the weights based on the spectral curve distribution, which is more conducive to establishing long-distance spectral dependencies; compared with other linearized transformers, it yields better model performance. In the other branch, we propose multi-head MambaOut to extract global spatial features and enhance the network’s classification performance. Moreover, a dynamic information fusion strategy is proposed to adaptively fuse spatial and spectral information. The proposed network is validated on four datasets (IP, WHU-Longkou, SA, and PU) and compared with several models, demonstrating superior classification accuracy with only 0.22 M parameters, thus achieving a better balance between model complexity and accuracy.
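The linearization trick that CosTaylorFormer builds on can be sketched generically: replace softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so the cost grows linearly in the number of tokens. The feature map φ below is a common generic choice, not the paper's cosine-weighted Taylor kernel:

```python
import torch

def linear_attention(Q, K, V, eps=1e-6):
    """O(N) attention: phi(Q) @ (phi(K)^T @ V), with phi = elu + 1.
    A generic linearization; CosTaylorFormer instead uses a
    cosine-modulated Taylor approximation of softmax."""
    phi = lambda x: torch.nn.functional.elu(x) + 1.0   # positive feature map
    Qp, Kp = phi(Q), phi(K)
    kv = torch.einsum("bnd,bne->bde", Kp, V)           # (d, e) summary, O(N)
    z = 1.0 / (torch.einsum("bnd,bd->bn", Qp, Kp.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", Qp, kv, z)

Q = K = V = torch.randn(2, 1024, 64)   # batch, tokens (spectral bands), dim
out = linear_attention(Q, K, V)        # same shape, linear cost in tokens
```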

25 pages, 17332 KiB  
Article
Aerial Remote Sensing and Urban Planning Study of Ancient Hippodamian System
by Dimitris Kaimaris and Despina Kalyva
Urban Sci. 2025, 9(6), 183; https://doi.org/10.3390/urbansci9060183 - 22 May 2025
Viewed by 532
Abstract
In ancient Olynthus (Greece), an Unmanned Aircraft System (UAS) was utilized to collect both RGB and multispectral (MS) images of the archaeological site. Ground Control Points (GCPs) were used to solve the blocks of images and to produce Digital Surface Models (DSMs) and orthophotomosaics. Check Points (CPs) were employed to verify the spatial accuracy of the products. The image fusion process carried out in this paper, which combined the RGB and MS orthophotomosaics from the UAS sensors, led to a fused image with the best possible spatial resolution (five times better than that of the MS orthophotomosaic). This improvement facilitates optimal visual and digital (e.g., classification) analysis of the archaeological site. Utilizing the fused image and reviewing the literature, the paper compiles and briefly presents information on the Hippodamian system of the excavated part of the ancient city of Olynthus (regularity, main and secondary streets, organization of building blocks, public and private buildings, types and sizes of dwellings, and internal organization of buildings), as well as information on its socio-economic organization (different social groups based on the characteristics of the buildings, commercial markets, etc.).
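As a point of reference for the fusion step (the paper's exact algorithm is not reproduced here), a Brovey-style sharpening of upsampled MS bands by a high-resolution intensity band looks like this; the arrays and the four-band count are assumptions:

```python
import numpy as np

def brovey_fuse(ms_up: np.ndarray, pan: np.ndarray, eps=1e-6) -> np.ndarray:
    """Brovey transform: rescale each upsampled MS band (bands, H, W) by the
    ratio of the high-resolution intensity to the MS intensity."""
    intensity = ms_up.mean(axis=0)
    return ms_up * (pan / (intensity + eps))

# ms_up: MS orthophotomosaic resampled to the RGB grid; pan: luminance of
# the higher-resolution RGB orthophotomosaic (both synthetic stand-ins).
ms_up = np.random.rand(4, 1000, 1000).astype(np.float32)
pan = np.random.rand(1000, 1000).astype(np.float32)
fused = brovey_fuse(ms_up, pan)   # MS spectra at roughly RGB resolution
```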

16 pages, 1143 KiB  
Article
AlleyFloodNet: A Ground-Level Image Dataset for Rapid Flood Detection in Economically and Flood-Vulnerable Areas
by Ook Lee and Hanseon Joo
Electronics 2025, 14(10), 2082; https://doi.org/10.3390/electronics14102082 - 21 May 2025
Viewed by 972
Abstract
Urban flooding in economically and environmentally vulnerable areas—such as alleyways, lowlands, and semi-basement residences—poses serious threats. Previous studies on flood detection have largely relied on aerial or satellite-based imagery. While some studies used ground-level images, datasets capturing localized flooding in economically vulnerable urban areas remain limited. To address this, we constructed AlleyFloodNet, a dataset designed for rapid flood detection in flood-vulnerable urban areas, with ground-level images collected from diverse regions worldwide. In particular, this dataset includes data from flood-vulnerable urban areas under diverse realistic conditions, such as varying water levels, colors, and lighting. Among several deep learning models fine-tuned on AlleyFloodNet, ConvNeXt-Large achieved the best performance, with an accuracy of 96.56%, precision of 95.45%, recall of 97.67%, and an F1 score of 96.55%. Comparative experiments with existing ground-level image datasets confirmed that datasets specifically designed for economically and flood-vulnerable urban areas, like AlleyFloodNet, are more effective for detecting floods in these regions. By supporting the successful fine-tuning of deep learning models, AlleyFloodNet not only addresses the limitations of existing flood monitoring datasets but also provides foundational resources for developing practical, real-time flood detection and alert systems for urban populations vulnerable to flooding.
(This article belongs to the Special Issue Advanced Edge Intelligence in Smart Environments)
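The fine-tuning setup described above can be sketched with torchvision's pretrained ConvNeXt-Large and a two-class head (flooded / not flooded); hyperparameters are illustrative and the training loop is compressed to a single step:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ConvNeXt-Large and swap in a 2-class head.
model = models.convnext_large(weights=models.ConvNeXt_Large_Weights.DEFAULT)
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)               # a batch of ground-level images
loss = criterion(model(x), torch.tensor([0, 1, 1, 0]))
loss.backward()
optimizer.step()
```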

24 pages, 9161 KiB  
Article
An Efficient Pyramid Transformer Network for Cross-View Geo-Localization in Complex Terrains
by Chengjie Ju, Wangping Xu, Nanxing Chen and Enhui Zheng
Drones 2025, 9(5), 379; https://doi.org/10.3390/drones9050379 - 17 May 2025
Viewed by 811
Abstract
Unmanned aerial vehicle (UAV) self-localization in complex environments is critical when global navigation satellite systems (GNSSs) are unreliable. Existing datasets, often limited to low-altitude urban scenes, hinder generalization. This study introduces Multi-UAV, a novel dataset with 17.4 k high-resolution UAV–satellite image pairs from diverse terrains (urban, rural, mountainous, farmland, coastal) and altitudes across China, enhancing cross-view geolocalization research. We propose a lightweight value reduction pyramid transformer (VRPT) for efficient feature extraction and a residual feature pyramid network (RFPN) for multi-scale feature fusion. Using meter-level accuracy (MA@K) and relative distance score (RDS), VRPT achieves robust, high-precision localization across varied terrains, offering significant potential for resource-constrained UAV deployment.
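Of the two metrics, MA@K has a simple, commonly used definition: the share of queries whose predicted position falls within K meters of ground truth. A sketch follows; the RDS formula is paper-specific and omitted here:

```python
import numpy as np

def ma_at_k(errors_m: np.ndarray, k: float) -> float:
    """MA@K: fraction of localization errors (meters) no larger than K."""
    return float((errors_m <= k).mean())

errors = np.array([1.2, 3.8, 7.5, 0.9, 15.0, 4.4])  # per-query errors, meters
for k in (3, 5, 10):
    print(f"MA@{k} = {ma_at_k(errors, k):.2f}")
```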

21 pages, 4104 KiB  
Article
Linkage Analysis Between Coastline Change and Both Sides of Coastal Ecological Spaces
by Xianchuang Fan, Chao Zhou, Tiejun Cui, Tong Wu, Qian Zhao and Mingming Jia
Water 2025, 17(10), 1505; https://doi.org/10.3390/w17101505 - 16 May 2025
Cited by 2 | Viewed by 401
Abstract
As the foremost marine economic zone, the coastal zone is a complex and active ecosystem and an important resource breeding area. However, in the course of economic development, coastal zone resources have been severely exploited, leading to fragile ecology and frequent natural disasters. It is therefore imperative to analyze coastline changes and their correlation with coastal ecological space. Utilizing long time-series high-resolution remote sensing images, Google Earth images, and key sea area unmanned aerial vehicle (UAV) remote sensing monitoring data, this study selected the coastal zone of Ningbo City as the research area. Remote sensing interpretation mark databases for the coastline and typical coastal ecological spaces were established. Coastline extraction was completed based on visual discrimination. With the help of the Modified Normalized Difference Water Index (MNDWI), the Normalized Difference Vegetation Index (NDVI), and maximum likelihood classification, a hierarchical classification process combined with visual discrimination was constructed to extract long time-series coastal ecological space information. The changes in, and the linkage relationship between, the coastlines and coastal ecological spaces were analyzed. The results show that the extraction accuracy of ground objects based on the hierarchical classification process is high, and verification is improved with the help of UAV remote sensing monitoring. Long time-series change monitoring showed that transportation-related coastline changed significantly. Changes in ecological spaces such as industrial zones, urban construction, agricultural flood wetlands, and irrigation land dominated the change in artificial shorelines, while the spread of Spartina alterniflora dominated the change in biological coastlines. Ecological space far from the coastline, on both the land and sea sides, has little influence on the coastline. The research shows that correlation analysis between the coastline and coastal ecological space provides a new perspective for coastal zone research and can provide technical support for coastal zone protection, dynamic supervision, administration, and scientific research.
(This article belongs to the Special Issue Advanced Remote Sensing for Coastal System Monitoring and Management)
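The two indices used above are standard band ratios; a minimal numpy sketch, assuming co-registered reflectance rasters:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red): vegetation vigor."""
    return (nir - red) / (nir + red + eps)

def mndwi(green, swir, eps=1e-6):
    """MNDWI = (Green - SWIR) / (Green + SWIR): enhances open water."""
    return (green - swir) / (green + swir + eps)

# Co-registered reflectance rasters (synthetic stand-ins).
green, swir = np.random.rand(2, 512, 512)
water_mask = mndwi(green, swir) > 0   # common starting threshold, tuned per scene
```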

19 pages, 11207 KiB  
Article
Fusion of Aerial and Satellite Images for Automatic Extraction of Building Footprint Information Using Deep Neural Networks
by Ehsan Haghighi Gashti, Hanieh Bahiraei, Mohammad Javad Valadan Zoej and Ebrahim Ghaderpour
Information 2025, 16(5), 380; https://doi.org/10.3390/info16050380 - 2 May 2025
Viewed by 1188
Abstract
The analysis of aerial and satellite images for building footprint detection is one of the major challenges in photogrammetry and remote sensing. This information is useful for various applications, such as urban planning, disaster monitoring, and 3D city modeling. However, it has become a significant challenge due to the diverse characteristics of buildings, such as shape, size, and shadow interference. This study investigated the simultaneous use of aerial and satellite images to improve the accuracy of deep learning models in building footprint detection. For this purpose, aerial images with a spatial resolution of 30 cm and Sentinel-2 satellite imagery were employed. Several satellite-derived spectral indices were extracted from the Sentinel-2 image. Then, U-Net models combined with ResNet-18 and ResNet-34 were trained on these data. The results showed that the combination of the U-Net model with ResNet-34, trained on a dataset obtained by integrating aerial images and satellite indices, referred to as RGB–Sentinel–ResNet34, achieved the best performance among the evaluated models. This model attained an accuracy of 96.99%, an F1-score of 90.57%, and an Intersection over Union of 73.86%. Compared to other models, RGB–Sentinel–ResNet34 showed a significant improvement in accuracy and generalization capability. The findings indicated that the simultaneous use of aerial and satellite data can substantially enhance the accuracy of building footprint detection.
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
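The input stacking described above (aerial RGB plus satellite-derived index rasters) maps naturally onto a U-Net with a ResNet-34 encoder. A sketch using the segmentation_models_pytorch library as one common implementation, with an illustrative channel count:

```python
import torch
import segmentation_models_pytorch as smp

# 3 RGB channels from the aerial image + e.g. 4 Sentinel-2 index rasters,
# all resampled to the same grid (counts illustrative).
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3 + 4,
    classes=1,                      # building / background mask
)

x = torch.randn(2, 7, 256, 256)     # stacked aerial + index tensor
logits = model(x)                   # (2, 1, 256, 256) footprint logits
```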

22 pages, 9648 KiB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 671
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning.
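The crack-parameter extraction via edge detection can be illustrated with a generic OpenCV pipeline; the thresholds, synthetic frame, and pixel-to-millimeter scale are assumptions, not the system's calibrated values:

```python
import cv2
import numpy as np

# Synthetic stand-in for an inspection-vehicle frame: a grey wall with a thin crack.
img = np.full((480, 640), 180, dtype=np.uint8)
cv2.line(img, (50, 400), (600, 80), color=60, thickness=2)

blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Measure candidate cracks from connected edge contours.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
px_per_mm = 4.0                                  # assumed ground sampling scale
for c in contours:
    length_mm = cv2.arcLength(c, closed=False) / px_per_mm
    if length_mm > 20:                           # keep long, crack-like features
        print(f"crack candidate, ~{length_mm:.0f} mm long")
```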

24 pages, 15554 KiB  
Article
The Evolution of Plot Morphology and Design Strategies in Built Heritage Renewal in Central Shanghai from the Perspective of Sharing Cities
by Zhenyu Li, Mengxun Liu and Yichen Zhu
Land 2025, 14(5), 959; https://doi.org/10.3390/land14050959 - 29 Apr 2025
Viewed by 795
Abstract
With the rise of the sharing economy and the concept of the sharing city, the field of urban renewal is facing new opportunities and challenges. This paper innovatively explores built heritage renewal in central Shanghai from the perspective of the sharing economy, focusing on the evolution of plot morphology and associated design strategies. Six representative cases, selected within the framework of three urban renewal policies from 1999 to the present, are analyzed using a diachronic method based on the Conzen school and the street frontage index. Combined with historical maps, aerial photographs, and satellite images, the paper analyzes the changes in plot morphology from 1999 to 2024. The paper highlights how the introduction of sharing city principles significantly impacted plot morphology, facilitating the expansion and diversification of space use and driving the restructuring of plot boundaries, including physical, property, and activity boundaries. The study further reveals how the shared city concept has led to the emergence of privately owned public spaces. Additionally, the paper discusses the pursuit of flow, openness, and sharing in urban renewal, noting how these factors have shifted the focus from purely rentable and sellable areas to more efficient space resource allocation, optimizing spatial configurations. Finally, the paper introduces the concept of “sharing by transfer”, proposing that adjustments to plot boundaries under the sharing economy framework can foster more equitable, efficient, and sustainable urban renewal, providing new perspectives and strategic recommendations for built heritage renewal.

30 pages, 21100 KiB  
Article
Three-Dimensional Landing Zone Segmentation in Urbanized Aerial Images from Depth Information Using a Deep Neural Network–Superpixel Approach
by N. A. Morales-Navarro, J. A. de Jesús Osuna-Coutiño, Madaín Pérez-Patricio, J. L. Camas-Anzueto, J. Renán Velázquez-González, Abiel Aguilar-González, Ernesto Alonso Ocaña-Valenzuela and Juan-Belisario Ibarra-de-la-Garza
Sensors 2025, 25(8), 2517; https://doi.org/10.3390/s25082517 - 17 Apr 2025
Viewed by 907
Abstract
Landing zone detection for autonomous aerial vehicles is crucial for locating suitable landing areas. Currently, landing zone localization predominantly relies on methods that use RGB cameras. These sensors offer the advantage of integration into the majority of autonomous vehicles. However, they lack depth perception, which can lead to the suggestion of non-viable landing zones: assessing an area from RGB information alone does not reveal whether the surface is irregular or accessible to a person on foot. An alternative approach is to utilize 3D information extracted from depth images, but this introduces the challenge of correctly interpreting depth ambiguity. Motivated by the latter, we propose a methodology for 3D landing zone segmentation using a DNN-Superpixel approach. This methodology consists of three steps. First, depth information is clustered using superpixels to segment, locate, and delimit zones within the scene. Second, features are extracted from adjacent objects through a bounding box of the analyzed area. Finally, a Deep Neural Network (DNN) segments each 3D area as landable or non-landable, considering its accessibility. The experimental results are promising: landing zone detection achieved an average recall of 0.953, meaning the approach identified 95.3% of the pixels according to the ground truth, and an average precision of 0.949, meaning 94.9% of the segmented landing zones are correct.
(This article belongs to the Section Sensing and Imaging)
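The superpixel clustering step can be illustrated with SLIC from scikit-image applied to a depth raster; the segment count and compactness are illustrative, and the DNN that labels each region landable or non-landable is omitted:

```python
import numpy as np
from skimage.segmentation import slic

# depth: per-pixel distances from the aerial depth image (synthetic here).
depth = np.random.rand(480, 640).astype(np.float32)

# Cluster depth into superpixels that delimit candidate zones; each segment
# would then be described (e.g., bounding-box context features) and passed
# to a DNN that labels it landable / non-landable.
segments = slic(depth, n_segments=300, compactness=0.1, channel_axis=None)
print(segments.max() + 1, "candidate regions")
```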

29 pages, 23859 KiB  
Article
Super-Resolution of Landsat-8 Land Surface Temperature Using Kolmogorov–Arnold Networks with PlanetScope Imagery and UAV Thermal Data
by Mahdiyeh Fathi, Hossein Arefi, Reza Shah-Hosseini and Armin Moghimi
Remote Sens. 2025, 17(8), 1410; https://doi.org/10.3390/rs17081410 - 16 Apr 2025
Viewed by 1375
Abstract
Super-resolution land surface temperature (LST_SR) maps are essential for urban heat island (UHI) analysis and temperature monitoring. While much of the literature focuses on improving the resolution of low-resolution LST (e.g., MODIS-derived LST) using high-resolution space-borne data (e.g., Landsat-derived LST), Unmanned Aerial Vehicle (UAV) thermal imagery is rarely used for this purpose. Additionally, many deep learning (DL)-based super-resolution approaches, such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), require significant computational resources. To address these challenges, this study presents a novel approach for generating LST_SR maps by integrating low-resolution Landsat-8 LST (LST_LR) with high-resolution PlanetScope images (I_HR) and UAV-derived thermal imagery (T_HR) using the Kolmogorov–Arnold Network (KAN) model. The KAN efficiently integrates the strengths of splines and Multi-Layer Perceptrons (MLPs), providing a more effective solution for generating LST_SR. The multi-step process involves acquiring and co-registering T_HR via a DJI Mavic 3 Thermal drone, I_HR from Planet (3 m resolution), and LST_LR from Landsat-8, with T_HR serving as reference data while I_HR and LST_LR are used as input features for the KAN model. The model was trained at two sites in Germany (Oberfischbach and Mittelfischbach) and tested at Königshain, achieving reasonable performance (RMSE: 4.06 °C, MAE: 3.09 °C, SSIM: 0.83, PSNR: 22.22, MAPE: 9.32%) and outperforming LightGBM, XGBoost, ResDensNet, and ResDensNet-Attention. These results demonstrate the KAN’s superior ability to extract fine-scale temperature patterns (e.g., edges and boundaries) from I_HR, significantly improving LST_LR. This advancement can enhance UHI analysis, local climate monitoring, and LST modeling, providing a scalable solution for urban heat mitigation and broader environmental applications. To improve scalability and generalizability, KAN models would benefit from training on a more diverse set of UAV thermal imagery covering different seasons, land use types, and regions. Even so, the proposed approach is effective in areas with limited UAV data availability.
(This article belongs to the Section Environmental Remote Sensing)
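The reported error metrics are standard and straightforward to reproduce; a sketch with scikit-image, using synthetic stand-ins for the predicted and reference LST rasters:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

ref = np.random.uniform(10, 40, (256, 256)).astype(np.float32)    # reference LST, deg C
pred = ref + np.random.normal(0, 4.0, ref.shape).astype(np.float32)

rmse = float(np.sqrt(np.mean((pred - ref) ** 2)))
mae = float(np.mean(np.abs(pred - ref)))
rng = float(ref.max() - ref.min())
ssim = structural_similarity(ref, pred, data_range=rng)
psnr = peak_signal_noise_ratio(ref, pred, data_range=rng)
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  SSIM={ssim:.2f}  PSNR={psnr:.2f}")
```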
