Systematic Review

Airborne and Spaceborne Hyperspectral Remote Sensing in Urban Areas: Methods, Applications, and Trends

by José Antonio Gámez García 1,2, Giacomo Lazzeri 2 and Deodato Tapete 2,*
1 Department of Civil, Building and Environmental Engineering (DICEA), Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
2 Italian Space Agency (ASI), Via del Politecnico s.n.c., 00133 Rome, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 3126; https://doi.org/10.3390/rs17173126
Submission received: 22 June 2025 / Revised: 26 August 2025 / Accepted: 2 September 2025 / Published: 8 September 2025
(This article belongs to the Special Issue Application of Photogrammetry and Remote Sensing in Urban Areas)

Highlights

 
What are the main findings?
Airborne imagery remains the dominant hyperspectral data type used in urban remote sensing owing to higher spatial resolution and availability of benchmark datasets, while spaceborne applications have significantly increased since 2019.
Machine Learning (in particular, Support Vector Machine and Random Forest) is the most widely used image processing approach. Deep Learning is increasingly exploited, although its use is currently constrained by training-data and computing-resource requirements.
 
What is the implication of the main finding?
The growing accessibility of spaceborne hyperspectral data (e.g., PRISMA, EnMAP) is shifting the field from conceptual studies toward urban monitoring applications.
To achieve full operational integration, future research should focus on robust data fusion techniques, standardized urban classification frameworks, real-world case studies beyond benchmark datasets, and capability for multi-temporal monitoring.

Abstract

This study provides a comprehensive and systematic review of hyperspectral remote sensing in urban areas, with a focus on the evolving roles of airborne and spaceborne platforms. The main objective is to assess the state of the art and identify current trends, challenges, and opportunities arising from the scientific literature (the gray literature was intentionally not included). Despite the proven potential of hyperspectral imaging to discriminate between urban materials with high spectral similarity, its application in urban environments remains underexplored compared to natural settings. A systematic review of 1081 peer-reviewed articles published between 1993 and 2024 was conducted using the Scopus database, resulting in 113 selected publications. Articles were categorized by scope (application, method development, review), sensor type, image processing technique, and target application. Key methods include Spectral Unmixing, Machine Learning (ML) approaches such as Support Vector Machines and Random Forests, and Deep Learning (DL) models like Convolutional Neural Networks. The review reveals a historical reliance on airborne data due to their higher spatial resolution and the availability of benchmark datasets, while the use of spaceborne data has increased notably in recent years. Major urban applications identified include land cover classification, impervious surface detection, urban vegetation mapping, and Local Climate Zone analysis. However, limitations such as lack of training data and underutilization of data fusion techniques persist. ML methods currently dominate due to their robustness with small datasets, while DL adoption is growing but remains constrained by data and computational demands. This review highlights the growing maturity of hyperspectral remote sensing in urban studies and its potential for sustainable urban planning, environmental monitoring, and climate adaptation. Continued improvements in satellite missions and data accessibility will be key to transitioning from theoretical research to operational applications.

1. Introduction

Over the past century, urban areas have emerged as pivotal centers of human activity, concentrating population, economic production, infrastructure, and innovation. Today, cities not only drive national and global economies but also face a range of vulnerabilities mainly due to climate change [1]. With over half of the global population residing in urban environments and projections estimating a rise to nearly 70% by 2050, the challenges faced by cities are increasingly recognized as central to sustainable development and climate resilience [2]. This perspective is reflected in the United Nations 2030 Agenda for Sustainable Development, particularly in Goal 11 [3], which advocates for inclusive, safe, resilient, and sustainable cities. Achieving this objective demands effective monitoring, planning, and management tools for urban areas. In this context, remote sensing has become a key element for urban studies, enabling the observation and analysis of cities from a holistic perspective. Among the many types of remote sensing data, multispectral imaging (MSI) has long been used to characterize land cover, vegetation, and infrastructure. Successful applications have included mapping impervious surfaces [4], detecting urban heat islands [5], and monitoring urban vegetation health [6]. However, multispectral data is inherently limited by its relatively coarse spectral resolution, making it difficult to differentiate between materials with similar reflectance properties—an issue particularly problematic in densely built environments [7].
To address these limitations, hyperspectral imaging (HSI) has gained attention for its ability to capture subtle spectral variations across hundreds of contiguous bands. Although this technology was initially developed in the 1980s and first applied through airborne platforms, its application to urban areas has recently become more feasible and relevant due to improvements in sensor technology, image processing capabilities, and the emergence of second-generation spaceborne hyperspectral missions.
Despite the proven utility of HSI in fields such as water monitoring [8], wildfire analysis [9], mineral exploration [10] and precision agriculture [11], its full potential in urban analysis remains underexplored—particularly when it comes to satellite-based observations. The present review aims to fill this gap by providing a systematic assessment of the current state of hyperspectral urban remote sensing, with a specific focus on the comparative role of airborne and spaceborne platforms. Most previous reviews have focused exclusively on airborne data; thus, this article emphasizes the increasing role of spaceborne HSI and examines whether its advantages are yet reflected in urban research. The structure of the article follows a progression from foundational concepts and datasets to processing techniques, application trends, and, finally, a comparative evaluation of sensor platforms and future directions.

2. Research Aims

The primary objective of this study is to provide a systematic and comprehensive overview of the current state of the art in hyperspectral spaceborne and airborne remote sensing for urban areas.
To this aim, the research questions that are addressed are as follows:
  • What are the predominant trends in hyperspectral remote sensing data analysis for urban applications, regardless of the platform used to collect data?
  • Which are the most applied hyperspectral image processing techniques for analysis over urban areas?
  • To what extent are spaceborne and airborne hyperspectral imaging currently exploited in urban areas?
  • How does spaceborne hyperspectral imaging perform in comparison to airborne hyperspectral imaging?

3. Materials and Methods

3.1. Terminology Definition

This section defines the two central concepts of this review: hyperspectral and urban area. This ensures a clear and consistent understanding of these terms, allowing for a well-defined screening criterion and enabling comparability with other review articles on this topic.
Since at least 1985 [12], “Imaging Spectrometry” refers to images in which each pixel is associated with the full electromagnetic spectrum. In practice, hyperspectral images are formed by collecting the reflected signal in many (typically more than 100) narrow contiguous spectral bands covering from the visible to the infrared portions of the spectrum. Unlike MSI, where each pixel typically contains a limited set of discrete spectral bands, HSI captures a continuous spectral response, allowing for the characterization of the spectral signature of surface materials [13]. The high spectral resolution results in a so-called hypercube, a 3D structure showcasing the spectral response of each pixel, represented with geographical coordinates (X, Y), through the spectral sensing range. An example of such hypercube can be observed in Figure 1.
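The hypercube structure described above can be illustrated with a small synthetic NumPy example (the dimensions below are illustrative and not tied to any specific sensor):

```python
import numpy as np

# A hypercube is a 3D array: two spatial axes (X, Y) and one spectral axis.
# Here: a 100 x 100 pixel scene with 200 contiguous bands (synthetic values).
rows, cols, n_bands = 100, 100, 200
hypercube = np.random.default_rng(0).random((rows, cols, n_bands))

# Each spatial pixel (x, y) carries a full spectral signature of length n_bands.
spectrum = hypercube[42, 17, :]
assert spectrum.shape == (n_bands,)

# A single band, by contrast, is a 2D grayscale image of the whole scene.
band_image = hypercube[:, :, 120]
assert band_image.shape == (rows, cols)
```

Slicing along the spectral axis yields per-pixel signatures for material discrimination, while slicing along the spatial axes yields conventional single-band imagery.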
The inherent characteristics of this technology tend to vary depending on the platform on which they are deployed. The two main platforms of interest in this article are satellites (spaceborne) and airplanes (airborne).
  • Airborne platforms: airborne missions have operated since 1987, with the first flight campaign conducted by AVIRIS [15]. Depending on sensor characteristics and how the flight survey is performed (e.g., flight altitude), these platforms offer high spatial resolution (SR; 1–20 m) and variable temporal resolution. However, their major drawbacks include high acquisition costs, limited coverage, and challenges related to flight altitude and speed [16]. Furthermore, airborne datasets are mostly collected as one-shot surveys, and multi-temporal observations are constrained by acquisition costs.
  • Spaceborne platforms: spaceborne missions are relatively recent, with the first-generation including Hyperion [17] (2000–2017) and CHRIS [18] (2001–2021), followed by newer missions such as PRISMA [19] (2019), HISUI [20] (2019), and EnMAP [21] (2022), among others [22]. These platforms offer wide spatial coverage and the possibility to collect over the same areas according to a nominal temporal revisit. However, their major limitations include a greater exposure to adverse weather—while airborne platforms are more flexible in terms of acquisition under varying weather conditions—and a coarser SR (20–60 m) as a trade-off for being more cost-effective [16].
An additional technical aspect to address is the signal-to-noise ratio (SNR), a crucial parameter of optical systems. SNR is measured as the ratio of the mean signal for an invariant target to the standard deviation of the signal [23]. While HSI sensors provide fine spectral information, a lower SNR can offset their advantage compared to multispectral sensors. This reduction in SNR occurs because HSI sensors capture fewer photons per detector due to the narrower spectral channels. SNR is fundamental in measuring quality, especially in complex environments such as urban landscapes [24]. This factor will be later addressed in the results and discussion. While the reader can refer to [22] for a comprehensive overview of spaceborne sensors and missions operated since 1996, Table 1 focuses and compares those that in this review exercise were found most exploited for urban research and applications.
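The SNR definition cited above translates directly into a simple computation; a minimal sketch, using hypothetical repeated radiance measurements of an invariant target:

```python
import numpy as np

def snr(signal) -> float:
    """SNR as defined in [23]: the mean signal over an invariant target
    divided by the standard deviation of that signal."""
    signal = np.asarray(signal, dtype=float)
    return float(signal.mean() / signal.std())

# Hypothetical repeated measurements of an invariant target:
measurements = [100.2, 99.8, 100.5, 99.5, 100.0, 100.1, 99.9]
ratio = snr(measurements)  # mean = 100.0, std ~ 0.29, so SNR ~ 340:1
assert 300 < ratio < 400
```

This makes the trade-off intuitive: narrowing the spectral channels lowers the mean photon count per detector while the noise floor remains, reducing the ratio.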
The first-generation sensors include Hyperion and CHRIS. Their main distinction is that CHRIS covered only the Visible and Near-Infrared (VNIR) spectrum, with a lower number of bands and variable spectral resolution, although it had a higher revisit frequency and twice the swath width. Hyperion was used more for urban tasks, which could be attributed to its slightly better SR and the inclusion of the Short-wavelength infrared (SWIR) spectrum.
Moving into the second generation, some technical improvements can be highlighted: a higher SNR and wider swath coverage. The latter is important to mention because it brings the sensors closer to the coverage of the most widely used multispectral sensors, Landsat 8 [25] (185 km) and Sentinel-2 [26] (290 km). This is particularly useful for covering an entire urban agglomeration within just one data take. PRISMA, EnMAP, and AHSI do not seem to present significant technical differences except for the following:
  • AHSI has a higher number of bands, twice the swath compared to EnMAP and PRISMA, and a higher revisit frequency
  • PRISMA offers a coregistered panchromatic (PAN) image with a 5 m Ground Sampling Distance (GSD)
The sensor which exhibits very different characteristics is OHS. Launched by a private initiative, this constellation consists of 10 satellites [27], which allows the revisit time to be reduced to 2 days. On the other hand, the sensor covers the VNIR only and has a reduced number of bands, thus limiting the spectral analysis capabilities. In exchange, it captures a much wider swath and offers a higher SR.
Table 1. Specifications of the spaceborne sensors and missions found to be the most exploited for urban research and applications and thus included in the present review database.
| Sensor | Mission/Platform | Years of Service | N° of Bands | Spectral Resolution (nm) | Spectral Range (nm) | Spatial Resolution (m) | Swath Width (km) | Peak Signal-to-Noise Ratio | Nadir Revisit Time |
|---|---|---|---|---|---|---|---|---|---|
| Hyperion [17] | EO-1 | 2000–2017 | 220 | 10 | VNIR (357–1000), SWIR (900–2576) | 30 | 7.7 | 190:1 (VNIR), 110:1 (SWIR) | 30 days |
| CHRIS [28] | PROBA | 2001–2022 | 62 | 1.25–12 | VNIR (400–1000) | 34 | 14 | 160:1 (VNIR) | 7 days |
| OHS [24,27] | Zhuhai-1 | 2018 | 32 | 2.5 | VNIR (400–1000) | 10 | 150 | Not specified | 2 days |
| AHSI [29] | Gaofen-5 | 2018 | 330 | 4 (VNIR), 8 (SWIR) | 390–2550 | 30 | 60 | 650:1 (VNIR), 190:1 (SWIR) | 51 days (pers. comm., Dr. Yinnian Liu, ynliu@mail.sitp.ac.cn) |
| PRISMA [19] | PRISMA | 2019 | 240, of which 1 PAN | 10 | VNIR (400–1010), SWIR (920–2550), PAN (400–700) | 30; 5 (PAN) | 30 | 600:1 (VNIR), 200:1 (SWIR) | Less than 29 days |
| Hyperspectral Imager [30] | EnMAP | 2022 | 224 | 6.5 (VNIR), 10 (SWIR) | 420–2445: VNIR (420–1000), SWIR (900–2445) | 30 | 30 | 620:1 (VNIR), 230:1 (SWIR) | 21 days |
Table 2 summarizes key specifications of major airborne hyperspectral sensors used in urban remote sensing. Compared to spaceborne missions (Table 1), airborne sensors typically provide much finer SR (on the order of 0.5–5 m) and a substantially higher SNR. In addition, they offer narrower bandwidths than their orbital counterparts, thus enhancing the ability to discriminate similar urban materials. However, these performance advantages come with limitations: airborne swath widths are usually on the order of a few kilometers per flight line. As a consequence, the spatial coverage may be limited compared to the extent of the area of interest, and mapping large cities requires multiple flight passes, which may not be feasible or cost-effective. In contrast, spaceborne hyperspectral images cover a much wider area in a single overpass and provide regular revisit schedules (days to weeks), enabling monitoring over time at regional and city scales. Consequently, airborne platforms excel at providing high-detail, high-quality hyperspectral data for localized urban analyses, while spaceborne platforms offer a better trade-off between spatial detail, area coverage, and temporal revisit.
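The coverage trade-off can be made concrete with a back-of-the-envelope sketch (the 30 km urban width and 3 km airborne swath are illustrative assumptions; the 30 km spaceborne swath matches PRISMA in Table 1):

```python
import math

def lines_needed(area_width_km: float, swath_km: float) -> int:
    """Parallel acquisition lines needed to cover a given width
    (no sidelap between adjacent lines; real campaigns add overlap)."""
    return math.ceil(area_width_km / swath_km)

# A 30 km wide urban agglomeration (illustrative width):
airborne = lines_needed(30.0, 3.0)     # typical airborne swath of a few km
spaceborne = lines_needed(30.0, 30.0)  # e.g., the PRISMA swath in Table 1
assert airborne == 10 and spaceborne == 1
```

Under these assumptions, the airborne survey needs ten flight lines (more with sidelap) where a single satellite data take suffices, which is the cost and logistics gap discussed above.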
According to [37], “urban” is a characteristic of place and, when applied as an adjective to places, refers to spatial concentrations of people whose lives are organized around non-agricultural activities. The urban concept is a function of sheer population size, space (land area), population to space ratio (in simple terms, density or concentration), and economic and social organization [38]. While there is nowadays consensus that urban and rural are ends of a continuum rather than opposite sides of a sharp contrast, remotely sensed data offer indirect ways to measure the urbaneness of an area of interest, through the generation of representative proxy variables of the built environment. Classification of satellite images is a practical tool to detect and highlight anthropogenic changes to the physical environment.
Urban areas can generally be classified into three major levels: cities, urban agglomerations, and metropolitan areas. While cities and metropolitan areas are primarily defined by administrative boundaries, urban agglomerations, as described by UN-HABITAT, refer to “a contiguous territory inhabited at urban density levels without regard to administrative boundaries.” This means they can be characterized by specific physical properties and discriminated from remote sensing imagery. This concept is often considered a more accurate representation of urban areas. However, the threshold established for built-up proportion and density may vary, affecting what is classified as part of an urban area [39]. This implies that image analysts have to decide upon the criteria to implement into the algorithm for assigning each place of the imaged scene to either the urban or rural category. While a binary assignment to one of the two categories leads to a hard classification, softer classification approaches call for definitions of transitional urban-rural gradients [38]. In conclusion, in this article, the term urban area will refer to places that, as depicted in remote sensing images, contain a significant proportion of built-up areas making them distinguishable from rural land cover (either natural or agricultural).

3.2. Methodology

Two main steps were followed. The first involved collecting, screening, and attributing a literature database. The second focused on generating statistics to gather key insights, which are introduced in Section 4, and discussed in Section 5.
The first step of the related methodology followed the “Preferred Reporting Items for Systematic Reviews and Meta-Analyses” framework [40]. Its scope lies in providing a standardized method to synthesize the current state of the art for a given discipline or topic. Figure 2 illustrates the workflow adopted in this research, from the initial selection of literature from a scientific database based on criteria later defined in this section, up to the screening process in which articles not relevant to the topic are discarded. The reported numbers allow for a quantitative assessment of the critical selection performed.
The search for pertinent articles to be included in this review paper was performed using Scopus, an extensive database composed of peer-reviewed scientific literature. The gray literature, encompassing documentation such as reports and guidelines existing outside of traditional academic publishing, was not included in this analysis. Reading this body of work reveals how users and stakeholders benefitted from the experimentation and use of technological developments and thus may suggest whether the scientific community was able to address real use-case scenarios [41]. However, because this review aims to understand the maturity of scientific advances in airborne and spaceborne hyperspectral remote sensing for urban applications, this aspect falls outside the scope of this paper, though it presents a valuable topic for future research.
The collection was built using different keyword combinations, with a clear focus on hyperspectral data in urban contexts. Articles were included only if each keyword appeared at least once. The search targeted studies on urban remote sensing using different hyperspectral platforms. Four keyword combinations were defined to capture a diverse range of publications:
- Remote Sensing AND Hyperspectral AND Urban AND Classification
- Remote Sensing AND Hyperspectral AND Urban AND Satellite
- Remote Sensing AND Hyperspectral AND Urban AND Spaceborne
- Remote Sensing AND Hyperspectral AND Urban AND Airborne
This process yielded a preliminary database of 1081 articles published between 1993 and 2024.
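The four keyword combinations share three base terms and vary only in the fourth, so they can be generated programmatically; a minimal sketch (the plain "AND"-joined strings are illustrative, not the exact Scopus query syntax):

```python
# Shared terms plus the four variant terms from the list above.
BASE = ["Remote Sensing", "Hyperspectral", "Urban"]
VARIANTS = ["Classification", "Satellite", "Spaceborne", "Airborne"]

# One boolean query string per variant term.
queries = [" AND ".join(BASE + [v]) for v in VARIANTS]
assert len(queries) == 4
assert queries[0] == "Remote Sensing AND Hyperspectral AND Urban AND Classification"
```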
The screening was conducted in three sequential phases. In the first phase, an automatic screening was applied to remove duplicate records and non-English publications. The second phase involved the exclusion of articles whose titles or abstracts were not aligned with the scope of the review, as well as those for which the full text was not accessible. In the third and final phase, each article was examined in detail to evaluate its relevance to the objectives of the present review. The criteria applied during the second phase for retaining articles were as follows:
  • Hyperspectral imaging had to represent the primary data source employed in the analysis.
  • The study area had to consist predominantly, or entirely, of urban environments.
A considerable number of articles were excluded based on these criteria. Many papers focused primarily on multispectral analysis, with only brief references to the potential of hyperspectral imaging. Likewise, a substantial portion of hyperspectral studies concentrated on natural environments, including only limited representations of anthropogenic land cover (roads, built structures) or human settlements (e.g., hamlets, villages, small towns). Review papers, which typically do not rely on a specific study area, were included when urban environments were explicitly emphasized in their structure or content. Following this process, a total of 113 records were retained. The reader can find the final database in the Supplementary Materials.
In the attribution phase, the retained documents were categorized according to attributes designed to address the secondary objectives outlined in Section 2. The primary classification was based on the article’s main purpose, and three main situations were considered, i.e., whether the paper: (i) is focused on demonstrating an application, (ii) proposes an image processing technique, or (iii) provides a critical review. Articles focused on image processing were generally the most comprehensive, encompassing both the development of new methods and comparative evaluations. The relative proportion of these three categories reflects the prevailing research interests of the scientific community in this field. Additional attributes included the type of technology and corresponding sensors, which are central to assessing the role of spaceborne and airborne HSI in urban analysis. Furthermore, aspects of data integration and fusion were considered, as these are particularly important when working with hyperspectral data [42]. The type of image processing technique and the product generated were also examined, as they reveal trends in the primary topics and the nature of the applied techniques.
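The attribution scheme described above can be sketched as a simple record structure (field names and allowed values here are assumptions for illustration, not the authors' actual database schema):

```python
from dataclasses import dataclass, field

# Illustrative record for the attribution phase of the review database.
@dataclass
class Record:
    title: str
    year: int
    scope: str                 # "application" | "method" | "review"
    platform: str              # "airborne" | "spaceborne" | "simulated spaceborne"
    sensors: list = field(default_factory=list)
    uses_data_fusion: bool = False
    technique: str = ""        # e.g., "Spectral Unmixing", "SVM", "CNN"
    product: str = ""          # e.g., "land cover map"

# A hypothetical attributed entry:
r = Record(title="Example study", year=2023, scope="application",
           platform="spaceborne", sensors=["PRISMA"],
           uses_data_fusion=True, technique="Random Forest",
           product="impervious surface map")
assert r.scope in {"application", "method", "review"}
```

Aggregating such records by `scope`, `platform`, or `technique` yields exactly the statistics reported in Section 4.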

4. Results

This section illustrates the results gathered from the attribution of the records, as stated in the prior section “Methodology”. Statistical insights are provided in order to highlight trends and practices that are further analyzed and discussed in the section “Discussion”. Throughout this section, several statistics are split between airborne and spaceborne sensors. However, care in the interpretation of these results is advised, as the number of articles on spaceborne imaging applied to urban areas is still limited at the time of writing, although hints of possible trends can be observed.

4.1. General Statistics

The analysis of general statistics provides several useful insights. Figure 3 illustrates the articles categorized by scope and publication year. Overall, Figure 3a indicates a higher proportion of methodology-focused articles than application-based ones, with relatively few review articles. Figure 3b presents the distribution of the main categories by publication year.
Quantitatively, the overall temporal range (1993–2024) can be divided into three periods. The first period (1993–2001) shows very few publications, i.e., two articles in total consisting of experimental image-processing papers published between 1993 and 1994. The second period (2002–2017) exhibits a steadily increasing publication rate; its start coincides with the launches of first-generation hyperspectral satellites (Hyperion and CHRIS, launched in 2000 and 2001, respectively) and with the release of the Pavia and Houston benchmark datasets (see later Table 4 in Section 4.3). During this phase, image-processing technique papers continued to be published, and application-focused studies began to appear from 2006 onward. Four publication peaks are noticeable (2008, 2011, 2014, and 2017), although the earlier peaks are quantitatively smaller than the 2017 peak. It is to be noted that 2017 was the final year of service of the Hyperion satellite. The second period comprises 46 papers, an average of approximately three publications per year, suggesting an increased scholarly engagement compared with the paucity observed in the preceding period.
The third and final period (2018–2024) experienced a marked increase in publications, with 65 articles in total and an average annual rate of approximately 9.3 papers. Four sensors were launched during this interval: AHSI and OHS (2018), PRISMA (2019), and EnMAP (2022). The publication rate rose noticeably following these launches, including 16 papers in 2023 alone. The growth has been especially pronounced for papers on image-processing techniques, although 2023 also recorded the largest number of urban application-focused studies. Overall, this trend underscores the growing relevance of urban hyperspectral remote sensing and suggests that the field has matured in translating data from newer HSI missions into practical applications.

4.2. Earlier Literature Reviews on Hyperspectral Remote Sensing in Urban Areas

As displayed in Figure 3b, since 2007 the scientific community has published a growing number of review articles. As already recalled in the methodology, many of these review articles cover urban areas or hyperspectral analysis only as part of their content. The evidence gathered from the literature review is therefore that there is still a lack of review papers focusing exclusively on urban hyperspectral remote sensing. On the other hand, a careful reading of the existing reviews proves that these articles can still be considered valuable for drawing some important conclusions on the use of hyperspectral remote sensing for urban science and applications.
As shown in Table 3, nine literature reviews are presented, with a summary of previous insights from urban hyperspectral analysis and the temporal range covered. Some of these papers conducted a transversal analysis of urban applications [43,44,45], while others focused on specific topics such as asbestos detection [46,47], urban vegetation [48,49], impervious surfaces [50], and land cover classification [51]. Most of these studies were published after 2010, coinciding with the growth of hyperspectral urban analysis, as mentioned in the previous section. This trend underscores the need to systematize existing research and identify gaps in the field.
The primary focus of these studies was to evaluate the advantages and disadvantages of hyperspectral sensors versus the multispectral sensors traditionally used for such tasks. In most cases, particular attention was given to the benefits of airborne sensors, whose high spatial and spectral resolution allows highly specific applications to be addressed. However, the challenges of data acquisition were also highlighted, as these sensors depend on flight campaigns, making them a less cost-effective solution [46]. This limitation is directly related to the lack of time-series data, which is essential for analyzing the temporal dynamics of urban features, particularly vegetation [48]. Moreover, several studies have reported successful classification outcomes when hyperspectral imagery was integrated with other data sources [49,51], underscoring the need for further advancements in this area.
Most of these articles focus on the limitations of urban analysis, particularly emphasizing where the second generation of hyperspectral spaceborne sensors may enable improvements. A recurring concern is the coarse SR [44,45,46,49], which leads to the mixed-pixel problem and constrains tasks such as target detection, change detection, and land cover classification. However, the potential benefits of the temporal resolution offered by spaceborne HSI sensors for studying the diverse materials of the urban landscape are rarely mentioned.
Consequently, several key gaps can be identified and are therefore important to address in this review:
-
A lack of studies that consider the trajectory of HSI in urban applications as a comprehensive topic.
-
A lack of studies that systematically analyze image-processing techniques and applications that have historically been, and are currently, trending in urban environments.
-
A lack of papers that go beyond the SR parameter to establish a comparison between airborne and spaceborne sensors.
Addressing these gaps would provide a clearer understanding of the evolution of HSI in urban analysis and help identify major challenges and opportunities for its future exploitation.

4.3. Hyperspectral Sensors and Datasets

As noted in the terminology definition section, the two primary platforms for hyperspectral analysis are spaceborne and airborne (field spectroscopy is outside the scope of this review). Figure 4 illustrates both platforms’ occurrence in the performed review, showing their proportional representation in publications as well as their use over time. A third category, simulated spaceborne, is also considered. This refers to scenes generated from airborne images to approximate raw spaceborne data while accounting for instrumental and environmental conditions. This approach is particularly useful for assessing the technical capabilities of future satellites [52].
Figure 4a shows that publications predominantly rely on data collected from airborne platforms (nearly two-thirds), with a substantial share from spaceborne platforms (almost one-third) and only a minimal contribution from simulated spaceborne imagery. Figure 4b, however, indicates that the proportion of these categories has varied over time. Although the use of both spaceborne and airborne imagery has increased across different periods, the evolution of spaceborne imagery can be divided into three distinct phases.
The first period (2001–2012) corresponds to the era of first-generation hyperspectral satellites, during which airborne imagery dominated research. This dominance was partly due to the availability of publicly accessible datasets that supported algorithm development and methodological studies. The second period (2013–2018) marks the emergence of simulated spaceborne data, as preliminary studies explored the potential applications of forthcoming hyperspectral missions. The third period (2019–2024) has witnessed a substantial increase in the use of spaceborne imagery, driven by the launch of the aforementioned satellites, resulting in a more balanced utilization of both platforms. Overall, this trend indicates that while airborne imagery has dominated hyperspectral urban analysis over the past two decades, spaceborne sensors have gained significant prominence in the last five years.
The inclusion of the “simulated spaceborne” category highlights an important intermediate step in the evolution of hyperspectral urban research. Although it represents only a small share of the overall publications (5%), its usage trend is noteworthy. These studies allowed researchers to test image-processing techniques under conditions approximating real satellite acquisitions, accounting for spatial resolution, sensor noise, and atmospheric effects. Soon after the second-generation spaceborne data (e.g., from PRISMA and EnMAP) became available after 2019, reliance on simulated datasets decreased. Nevertheless, their contribution remains significant, as they provided a testing ground that facilitated methodological readiness and accelerated the uptake of newly acquired spaceborne imagery in urban applications.
Examining this in further detail, Figure 5 presents the range of airborne sensors used in urban research. The data indicate that up to seven different sensors have been employed, with AISA, CASI, and AVIRIS being the most prevalent. Their widespread use largely reflects their role in generating publicly available datasets, as shown in Figure 5b and Table 4. Only 36% of airborne images were collected through flight campaigns conducted directly by the publication authors or provided by institutions, while the majority were obtained from public datasets. These datasets primarily serve as benchmarks for testing the effectiveness of different image-processing techniques, as they are widely sampled and openly accessible. However, this also highlights that most studies did not acquire original imagery, likely due to the costs associated with flight campaigns. Among the most frequently used datasets, the Pavia University and Houston datasets are particularly prominent.
To clarify why these datasets were selected, Table 4 presents basic information about the sites and the sensors employed. As shown, most datasets share very high SR (ranging from sub-meter to 4 m), high spectral resolution, and high peak SNR. However, they differ in the number of samples, defined classes, spectral range, and the characteristics of the urban areas captured.
The number of labeled samples is a key advantage of these datasets, as it supports methodologies that require large training sets, which are particularly relevant to HSI. Variation in sample size among datasets primarily reflects the extent of the covered scene, since most of the area is labeled. Consequently, the number of samples does not constitute a major difference between the datasets.
Significant distinctions emerge when analyzing the defined classes. While each dataset selects its classes based on the land covers represented in the scene, some categorize materials more specifically than others. The Pavia datasets, for example, focus on materials, dividing them into spectrally distinct classes rather than considering land use. The Washington DC Mall and Urban datasets follow a similar approach. The main difference is that the Houston dataset includes roughly twice as many categories, placing greater emphasis on land use (e.g., distinguishing between residential and non-residential areas, highways, and railways) and capturing finer distinctions among classes (such as different types of grass and trees). It is also noteworthy that some datasets classify shadows as a material (e.g., Pavia University and Washington DC Mall), which may pose challenges when analyzing low-albedo surfaces, an aspect further addressed in the discussion.
The spectral range is another factor differentiating these sites. The Pavia datasets include only the visible spectrum and part of the VNIR, whereas the MUUFL and Houston datasets extend further into the VNIR. The only datasets covering the SWIR are the Washington DC Mall and Urban datasets. This suggests that the SWIR region was not prioritized in the selection of the benchmark datasets, since the most widely used datasets do not include this spectral range.
The urban characteristics of each dataset also play an important role. Most selected areas correspond to peri-urban zones where buildings are well spaced, and vegetation is abundant. The only dataset capturing a consolidated urban structure is Pavia Centre (not to be confused with Pavia University), but its limited use suggests that such environments have not been preferred by researchers for testing methodologies. Regarding SNR, first-generation hyperspectral sensors (1995–2003), which include the Pavia and Washington DC Mall and Urban datasets, exhibit lower SNR values compared with more recent sensors (MUUFL and Houston datasets). Finally, the most recent benchmark datasets (MUUFL and Houston) also incorporate information from other sensors in addition to hyperspectral data. This feature, absent from earlier datasets, reflects the emerging interest of the scientific community in testing data fusion and integration techniques.
Figure 6 illustrates the distribution of spaceborne sensors used. The first generation of hyperspectral spaceborne sensors (Hyperion and CHRIS) accounts for 26% of total publications—a relatively low percentage given their long operational lifespans compared with the second generation. This is particularly evident for CHRIS, which has been rarely employed in urban applications. Among second-generation sensors, PRISMA is the most widely used for urban tasks, followed by EnMAP and Chinese sensors. When considering the ratio of total publications to years in service, PRISMA (≈3 publications per year) and EnMAP (≈2 per year) significantly outperform Hyperion (≈0.35 per year) in urban applications. This trend highlights the growing interest in hyperspectral spaceborne imagery for urban analysis and underscores the role of free and quasi-open data policies in promoting the uptake of spaceborne data within the scientific and user communities.

4.4. Image Processing Techniques

The image-processing techniques that scholars have used or developed to process hyperspectral data can be grouped into three main families, as shown in Figure 7, which also distinguishes between spaceborne and airborne sensors. Spectral Unmixing (SU) techniques address the mixed-pixel problem, which arises when different materials within a single pixel produce a blended spectral response. These techniques identify the component spectra of the mixed pixels (endmembers) to calculate their proportions or abundances [62]. SU techniques can be broadly divided into linear and nonlinear approaches. Linear Spectral Mixture Analysis (LSMA) assumes that each pixel spectrum is a linear combination of endmembers and has been widely used in urban contexts to estimate fractions of impervious surfaces, vegetation cover, and roofing materials [62,63]. Nonlinear approaches attempt to capture multiple scattering effects and complex interactions typical of three-dimensional urban structures. More recent methods such as Nonnegative Matrix Factorization (NMF) and autoencoder-based unmixing frameworks enable automated endmember extraction and abundance estimation [64]. Machine Learning (ML) techniques refer to a family of algorithms that learn from data to make predictions or decisions [65]. Key techniques in hyperspectral image analysis include:
-
Spectral Angle Mapper (SAM) is a physically based spectral classification method that measures the similarity between spectra using an n-dimensional angle, treating each pixel and reference spectrum as vectors in a space where the number of dimensions corresponds to the number of spectral bands [66].
-
Random Forest (RF) is a supervised ensemble classifier composed of multiple Decision Trees (DT). The final prediction is made through voting (for classification tasks) or averaging (for regression tasks) among the trees. For each tree, a random subset of the training samples is drawn with replacement (a process known as bagging, i.e., bootstrap aggregating).
-
Support Vector Machine (SVM) is a supervised technique based on statistical learning theory, in which the input data are mapped through a nonlinear transformation into a high-dimensional space where linear relationships can be found. The objective of the classifier is to find the best decision surface (hyperplane), i.e., the one that maximizes the distance (margin) between the training samples closest to the boundary (support vectors) of the different classes; the larger the margin, the better the final accuracy [67]. The transformation to a higher-dimensional space is performed via a kernel function. SVM employs a range of such functions, as they perform differently depending on the nature of the data: Polynomial (nonlinear data), Linear (linear data), and Radial Basis kernels (no clear separation among data) are the most significant [68].
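As a concrete illustration of the angle computation used by SAM, the following Python sketch computes the spectral angle between two hypothetical 5-band spectra (the reflectance values are illustrative, not taken from any benchmark dataset); it also shows that scaling a spectrum leaves the angle unchanged:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between two spectra
    treated as vectors in band space."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 5-band reflectance spectra
asphalt = np.array([0.08, 0.09, 0.10, 0.12, 0.13])
tile    = np.array([0.10, 0.12, 0.20, 0.35, 0.40])

angle = spectral_angle(asphalt, tile)

# Scaling a spectrum (e.g., a shadowed or dimly lit pixel) leaves the
# angle unchanged, since only the vector's direction matters
angle_shadowed = spectral_angle(0.4 * asphalt, tile)
print(round(angle, 4), round(angle_shadowed, 4))
```

This insensitivity to overall brightness is what makes SAM comparatively robust to the illumination variations caused by complex urban geometries.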
Figure 7. Distribution of HSI-based articles sorted by the primary image processing family exploited to process the hyperspectral data, for spaceborne and airborne images. Notation: SU—Spectral Unmixing; ML—Machine Learning; DL—Deep Learning.
Meanwhile, Deep Learning (DL) techniques are based on Artificial Neural Network (ANN) architectures with multiple layers (typically more than 50) [69]. Key techniques in hyperspectral image analysis include the following:
-
Convolutional Neural Networks (CNNs) use a convolutional layer as their core structural unit, whose main purpose is to apply a set of kernels to the input data to extract patterns and features. Pooling layers are also included, serving to reduce the spatial dimensions of the feature maps and thereby facilitate processing in subsequent convolutional layers. Finally, the activation layer introduces a nonlinear activation function, enabling the model to learn more complex features from the data [70]. There are three main types of CNN architecture: 1-D, 2-D, and 3-D. The one-dimensional architecture is the most basic, as it performs the convolution along the spectral dimension, treating each pixel as a vector and ignoring the spatial properties of the data. The two-dimensional architecture applies the convolution to spatial data, converting the pixel spectral vectors into two-dimensional spectral images, so that the network can exploit spatial information to improve accuracy, albeit without taking the full spectral depth into consideration [71]. The three-dimensional architecture performs the convolution over the spectral-spatial features, as the third dimension allows the network to analyze the full spectral depth of the hyperspectral image [72].
-
Auto-encoders (AEs) are an unsupervised architecture that learns to reconstruct input data from a lower-dimensional representation via an encoder, a bottleneck layer, and a decoder [68]. They are mostly employed for dimensionality reduction and high-level spectral feature extraction.
-
Recurrent Neural Network (RNN) is a supervised architecture based on loops in the connections, where the node activations at each step depend on those of the previous step, making it well suited to analyzing temporal sequences. The most common RNN is the Long Short-Term Memory (LSTM), whose recurrent unit is composed of a cell that remembers values over arbitrary time intervals and three gates that regulate the flow of information into and out of the cell [73].
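To make the 1-D convolutional case concrete, the following NumPy sketch slides a kernel along a pixel's spectral vector and applies a ReLU activation. The band values are illustrative, and a trained CNN would learn its kernel weights rather than use the fixed gradient kernel shown here:

```python
import numpy as np

# A pixel's spectrum as a 1-D vector (illustrative values, 8 bands)
spectrum = np.array([0.10, 0.12, 0.15, 0.30, 0.55, 0.58, 0.60, 0.61])

# A learned 1-D kernel would slide along the spectral dimension;
# a fixed gradient-detecting kernel stands in for a trained one
kernel = np.array([-1.0, 0.0, 1.0])

# 'valid' cross-correlation: output length = bands - kernel_size + 1
# (np.convolve flips the kernel, so we pre-reverse it)
features = np.convolve(spectrum, kernel[::-1], mode="valid")

# ReLU activation, as applied after convolutional layers
activated = np.maximum(features, 0.0)
print(activated)  # strongest response where the spectrum rises most steeply
```

The 2-D and 3-D variants generalize this same sliding-kernel operation to spatial patches and to joint spectral-spatial cubes, respectively.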
In both airborne and spaceborne cases (Figure 7), approximately 60% of the articles applied ML techniques, followed by a substantial proportion using DL, with only a small number employing SU. Notably, SU is used relatively more often with airborne sensors, even though their higher SR reduces the impact of the mixed-pixel problem. Interest in its application may stem from the need to conduct algorithm testing, detect sub-pixel objects, or address the inherent heterogeneity of urban environments, as discussed in subsequent sections.
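The linear mixing model at the core of LSMA can be sketched in a few lines of NumPy. The endmember matrix below is illustrative, and this is a minimal sketch: operational LSMA additionally enforces nonnegativity and sum-to-one constraints on the abundances rather than using plain least squares.

```python
import numpy as np

# Linear mixing model: pixel = E @ a, where the columns of E are the
# endmember spectra and a holds their abundances (illustrative values)
E = np.array([[0.10, 0.60, 0.30],   # band 1 reflectance of 3 endmembers
              [0.20, 0.55, 0.40],   # band 2
              [0.30, 0.50, 0.60],   # band 3
              [0.40, 0.45, 0.70]])  # band 4

a_true = np.array([0.5, 0.3, 0.2])  # e.g., impervious / vegetation / soil
pixel = E @ a_true                  # observed mixed-pixel spectrum

# Unconstrained least-squares inversion recovers the abundances
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(a_hat, 3))
```

With noise-free data and spectrally distinct endmembers, the inversion recovers the true fractions; real scenes require the constrained variants and careful endmember extraction.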
Figure 8 presents the various ML and DL techniques applied to airborne sensors data. Among the ML approaches, SVM and RF are the most widely used, together accounting for 57%. FCNN and SAM also have a notable presence (30% combined), while the remaining techniques appear only sporadically. Regarding the learning procedure, only 7% of the methods are unsupervised (K-means, ISODATA), indicating that 93% of the approaches rely on training samples and are therefore supervised.
Regarding DL methods, the range of techniques is less diverse, with CNNs being the dominant approach (80%). GANs and AEs are also employed, but to a lesser extent. In this case, unsupervised learning procedures account for 20% of the total use cases (GANs and AEs), although their overall adoption remains limited.
Figure 9 showcases the methods applied to spaceborne imagery. The range of ML approaches appears less diverse than for airborne imagery. RF is the most widely used (41%), followed by SVM (27%) and SAM (14%). FCNN (9%) and MLC (9%) are also applied, though less frequently. In the case of DL, CNNs are overwhelmingly predominant (88%), with the additional presence of RNNs, designed to process sequential or multi-temporal data.
Figure 10 illustrates the temporal trends of the most widely used ML and DL techniques for both airborne and spaceborne sensors. SAM was the most frequently applied ML technique until 2013, after which it was largely replaced by SVM and, to a lesser extent, RF. With the launch of second-generation hyperspectral sensors, CNNs began to be widely adopted, alongside RF. Overall, DL methods started to appear in 2019; however, they have not yet surpassed traditional ML approaches, suggesting that ML techniques continue to perform competitively, an aspect explored further in the discussion.

4.5. Main Application Trends

In the context of applications, the present analysis takes into consideration the primary objective addressed in each of the selected articles. Figure 11 displays the outcome of this operation, separating papers that addressed the given application with airborne sensor data from those that relied on spaceborne sensor data.
Before discussing the results, it is worth clarifying the rationale behind the application types by which the selected papers were classified, particularly those that could otherwise have been grouped into a single class. The guiding principle was to refer to specific urban applications, as these were the precise objectives of the analyzed papers. Accordingly, generic, broad classes that could encompass diverse applications were intentionally avoided. This is the case for the “land cover/land use” class. While categories such as land cover classification, urban vegetation mapping, impervious surface detection, and land cover/land use classification are interrelated and could be grouped into a single “land cover/land use” class, here they were treated separately to reflect how they are commonly distinguished in the literature, as each corresponds to different objectives and methodological emphases. For example, vegetation studies often focus on species-level discrimination, impervious surface detection simplifies the urban mosaic into key classes, and land cover/land use analyses combine physical and socio-economic aspects. In all the above cases, however, the common objective is to generate a classification into a set of categories. The reader can directly refer to the classification assigned to each paper included in the database (see the Supplementary Materials).
Although multiple application topics are represented in the bar chart in Figure 11, the most prominent for both platforms is land cover classification, accounting for 57% of airborne application papers and, to a lesser extent, 29% of spaceborne application papers. The prominence of land cover classification in airborne HSI is largely due to the availability of labeled features in publicly accessible datasets. Dataset labeling enables the benchmarking of newly developed classification algorithms by categorizing surface types based on their biological, geological, chemical, or physical characteristics, which make them spectrally distinguishable [50]. Although the specific objectives of classification may vary, three closely related application trends emerge in connection with land cover classification:
-
Urban vegetation classification: This application focuses on distinguishing various types of vegetation, with most of the studies aiming to identify species [74]. It accounts for a significant proportion for both sensor types (15–16%).
-
Impervious surface detection: Impervious surfaces are natural or artificial coverings in urban areas that prevent water infiltration into the ground. This classification is generally simpler, often presented as a dichotomy (soil|impervious) or a trichotomy (soil|impervious|vegetation) [24], although some authors do not apply this scheme and instead consider roof/pavement and grass/tree independently [75]. This application sees a more frequent use of spaceborne (26%) than airborne (16%) sensors.
-
Land cover/land use classification: This application results from distinct but often conflated concepts. While land covers usually correspond to a single spectral category, land uses are information classes that result from the confluence of diverse spectral categories. Combining the two concepts in a single product may seem beneficial, as it could provide more comprehensive information, but they typically serve different objectives: environmental monitoring (land cover) and policymaking (land use) [76]. This variation in purpose can confuse the end-user, an aspect further discussed in the Discussion section. This application accounts for a small proportion in both sensor types (4% for spaceborne; 3% for airborne).
Among the applications observed, other topics focused on detecting specific features or covers emerge, such as asbestos detection or road detection, though their overall proportion is minimal. Additionally, the analysis of water quality in urban environments is a less-represented topic (2%), present only in airborne studies. Lastly, some studies [77,78] aimed to conduct a more aggregated analysis, interpreting classification in a broader context: Urban Heat Islands are addressed only with airborne imaging, whereas Local Climate Zones (LCZ) are mapped only with spaceborne imaging.
Using hyperspectral data for the above-mentioned applications, scholars have demonstrated the feasibility of generating thematic products. For land cover classification, the most common approach involves categorizing specific classes [79,80,81,82], typically corresponding to materials such as asphalt, tiles, metals, and concrete. It has been observed that the number of urban surface categories often depends on whether the city is the primary focus of the classification or part of a broader territorial analysis, as well as on the spectral separability of materials, factors largely influenced by sensor specifications and the techniques applied.
In the case of land cover/land use products, the focus is generally directed toward identifying residential, commercial, and industrial areas. Impervious surface detection, as noted, may rely on products comprising two [83], three [84], or four classes [75]. For urban vegetation, most products relate to species-level classification, with some studies producing maps of all identified species [74,84,85,86] while others focus only on the predominant ones [87]. An alternative approach classifies trees based on functional types [88]. Finally, all LCZ products follow the standard provided by Stewart et al. [89] for criteria and classes; thus, variations typically reflect the analyst’s perspective during training, the characteristics of inner-city environments, and the properties of the data source used.

4.6. Use of Data Integration/Fusion

Data integration refers to the process of co-registering and organizing different data sources into an interoperable framework. Meanwhile, data fusion aims to combine information from multiple sensors and/or sources to derive inferences that would not be possible from a single data source [42]. Table 5 presents an overview of the sensors that were integrated or fused in various studies, either with spaceborne or airborne sensors. In the case of multispectral sensors, the most common approach is the fusion with hyperspectral spaceborne data. This was primarily carried out through pan-sharpening, a data fusion method that integrates a higher SR panchromatic (PAN) or multispectral image with a lower SR hyperspectral image to produce a hyperspectral image with enhanced SR [90]. Most applications of this method were performed using PRISMA, as it provides a co-registered PAN image, reducing alignment challenges.
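A minimal ratio-based sketch of the pan-sharpening idea is shown below, using synthetic arrays and a simple block-mean degradation model (operational methods for missions such as PRISMA are considerably more sophisticated): the low-resolution hyperspectral band is upsampled and modulated by the ratio between the PAN image and its degraded version, injecting spatial detail while preserving the band's radiometry at the coarse scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 8x8 panchromatic image and a 4x4 hyperspectral band
pan = rng.uniform(0.1, 0.9, size=(8, 8))

# Degrade PAN to the hyperspectral resolution (2x2 block mean)
pan_low = pan.reshape(4, 2, 4, 2).mean(axis=(1, 3))
hs_band = 0.5 * pan_low          # toy hyperspectral band correlated with PAN

def up(a):
    """Nearest-neighbor upsampling by 2x2 pixel replication."""
    return np.kron(a, np.ones((2, 2)))

# Ratio-based sharpening: inject PAN spatial detail into the HS band
sharpened = up(hs_band) * (pan / up(pan_low))
print(sharpened.shape)
```

Averaging the sharpened band back to the coarse grid returns the original hyperspectral values, a basic spectral-consistency property that pan-sharpening methods aim to preserve.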
In the case of active sensors, data integration was carried out using different approaches depending on the platform: LIDAR was extensively used with airborne hyperspectral systems, while SAR was the primary choice for spaceborne platforms. Despite these differences in sensor pairing, the proportion of studies applying data fusion and integration methods is consistent across both types of platforms. Overall, only 24% of the reviewed articles address these processes, indicating that data fusion and integration are not currently a major focus in hyperspectral urban remote sensing, an aspect that will be further addressed in the discussion section.

5. Discussion

In this section, the main insights extracted from the results are discussed. The primary objective is to address the different goals outlined in Section 2, thereby gaining a holistic understanding of the current state of hyperspectral urban analysis, including trends, key techniques, commonly used sensors, and existing gaps and challenges. As mentioned, all conclusions drawn from the separate analysis of airborne- and spaceborne-based articles must be considered with caution, since the literature sample remains quantitatively limited compared with reviews of other topics.

5.1. Main Insights Extracted from Basic Statistics: Theoretical Stage of the Topic

As indicated, urban hyperspectral remote sensing has witnessed a considerable increase in its yearly publication rate during the last 10 years.
When comparing the most prolific year (14 papers in 2022) with the same year in other topics, such as lithological mapping [10] (23 papers), mineral mapping [10] (190 papers), or precision agriculture [11] (25 papers), i.e., topics primarily developed for natural and rural areas, it can be stated that hyperspectral data does not yet seem to be a widely exploited and consolidated type of dataset for urban applications. Looking back over recent decades, the mentioned topics have witnessed a steady increase in publications since the mid-1990s, suggesting that they have had a much longer trajectory of framework development and applications. The number of image processing articles in recent years indicates that the discipline is still in a largely theoretical stage, meaning that frameworks for applications are still being developed. This phenomenon was also encouraged by public datasets acquired by airborne sensors, which were the main source available for many years. Historically, there were just two hyperspectral spaceborne sensors (Hyperion and CHRIS) that could provide continuous information on Earth’s surface, and both served mainly as technology demonstrations. Moreover, these satellites were used far more for other applications, e.g., agriculture [91], fire detection and analysis [92], or forest mapping [93].
In comparison with studies of urban applications using multispectral remote sensing, it is possible to observe that the publication rate of urban vegetation [46,94] or land cover classification [95] studies is greater than that of HSI studies. However, it has been proven in several case studies that HSI has a better ability to discern different typologies of materials in urban areas owing to its higher spectral resolution [77,96,97,98]. Still, its application remains limited in this context. Some of the factors that can explain this mismatch in publication rates are:
-
Limited availability of spaceborne and airborne hyperspectral imagery for many years before new satellite missions were launched. As indicated in Figure 5, the major problem with airborne HSI lies in its high acquisition cost, leading most authors to rely on public datasets to test techniques and perform benchmarking, resulting in a lack of application-oriented papers. The lack of data has been compensated for in the last 5 years by the launch of the new generation of hyperspectral satellites with free-of-charge and quasi-open data policies (such as PRISMA), as demonstrated by the increasing use of this technology in comparison to previous years.
-
Coarse resolution of spaceborne imaging: many authors argue that SR is a major factor that highly influences accuracy when analyzing urban areas, since they are composed of a mixture of land covers at small scales [54]. Because of that, some authors preferred to work with spaceborne MSI or airborne HSI [55], which offer better SR, despite disadvantages in terms of spectral resolution and/or the cost-effectiveness of data access.
-
Complexity of urban environment: Many aspects contribute to this issue, such as the similar spectral signatures among many urban land covers [99] or the inherent geometrical complexity of cities (3D structures), which leads to nonlinear data [100]. Although hyperspectral data can address many of these challenges, misclassifications may still occur when relying solely on spectral criteria, requiring the integration of different data sources. This is particularly important in cases where different objects exhibit similar spectral signatures due to shared materials or surface properties, or when the same object returns a different spectrum because of differences in condition, illumination, or background effects [101]. These factors have led scientists to prefer natural and rural environments to start developing frameworks and applications.

5.2. Imaging Processing Techniques

One of the factors contributing to the development of hyperspectral urban remote sensing is the advancement of image processing techniques. However, it must be mentioned that specific challenges complicate the development of ML and DL frameworks.
The first and most significant challenge is the high dimensionality of HSI. The large number of bands results in a high-dimensional space with substantial redundancy between bands [102]. As the number of dimensions increases, the number of samples required for accurate classification in any ML or DL task also increases. This often leads to the “Hughes phenomenon”, where classification accuracy decreases due to the high dimensionality combined with a limited number of training samples, causing overfitting [79]. This issue underscores the necessity of dimensionality reduction, which can be facilitated by the high correlation among bands. By reducing dimensionality while preserving essential information, it is possible to maintain accuracy with minimal loss [103].
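The effect of band redundancy, and the dimensionality reduction it enables, can be illustrated with a small NumPy experiment. The data are synthetic: 200 correlated “bands” are generated from 3 underlying sources plus noise, and PCA is computed via SVD rather than any specific remote-sensing toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated HSI pixels: 200 bands driven by only 3 independent sources,
# mimicking the strong inter-band correlation of real hyperspectral data
sources = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 200))
pixels = sources @ mixing + 0.01 * rng.normal(size=(500, 200))

# PCA via SVD of the mean-centered data
centered = pixels - pixels.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / np.sum(s**2)

# A handful of components captures nearly all the variance
k = 3
print(round(float(explained[:k].sum()), 4))
reduced = centered @ Vt[:k].T   # 500 x 3 instead of 500 x 200
```

Working with the reduced representation mitigates the Hughes phenomenon by shrinking the feature space that a classifier must cover with its limited training samples.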
In addition, urban environments introduce specific challenges regarding sample selection. The complex and heterogeneous composition of urban areas often makes it difficult to extract pure and representative training samples, particularly when using coarse-resolution hyperspectral sensors. Land cover types are frequently intermingled at the pixel level, thus complicating the isolation of homogeneous training areas. As a result, sample selection in urban HSI analysis requires a meticulous strategy, making the task significantly more labor-intensive. On the other hand, care must be taken to prevent selecting covers that may reduce spectral separability among classes [79]. Lastly, the nonlinearity of hyperspectral data is another factor to address. This nonlinearity is attributed to the data gathering process, due to multiple scattering between solar radiation and surface objects, combined with variations in the positions of the Sun and the sensor [104]. This effect is augmented by the complex geometry of urban environments and the heterogeneity of pixel composition, which is further exacerbated by coarser SR of input images [63].
SAM was the only technique used for HSI classification from 1993 to 2010. This technique was widely adopted early on because it does not require many training samples per class and is computationally efficient compared to other ML methods (SVM, RF) [105]. Additionally, since it focuses on spectral shape, it is insensitive to vector length (i.e., the magnitude/intensity of a spectral vector), leading to good performance under varying illumination conditions. This characteristic makes the technique particularly useful in urban environments, where complex geometries can create significant lighting variations. However, SAM has limitations. Since it classifies each pixel based solely on its spectral similarity to a single reference material per class, it struggles to accurately classify land cover types composed of multiple materials, such as roofs or streets [106]. Examples of its application are land cover classification [82] and vegetation mapping [107].
Since 2011, more complex ML techniques have been used: MLC, FCNN, RF, and SVM. However, more than 50% of the papers (depicted in Figure 8 and Figure 9) used either SVM or RF, as they are the best suited to the above-mentioned challenges. RF’s approach helps achieve higher accuracy and reduces overfitting, particularly when working with a limited number of samples [108]. RF is considered particularly suitable for hyperspectral data because it effectively selects relevant features for each DT, avoiding the high correlation between features that can arise from the high-dimensional space of HSI data. Additionally, it can capture nonlinear relationships within the data, which is essential for hyperspectral analysis [109]. The main limitations of RF stem from the quality of the training dataset, which must meet certain requirements. For instance, (1) class samples should be balanced, as RF tends to be biased toward classes with a higher proportion of samples [110]; (2) samples need to be spatially distributed, not concentrated, since the classifier is sensitive to spatial autocorrelation [111]. These factors can pose challenges when performing tasks in urban areas, where minority classes are common and certain land covers may be concentrated in specific regions of the city. However, the issue of class imbalance has been debated: some scholars argue that imbalanced datasets can still be used effectively, as they may help improve the performance of the more challenging classes without significantly affecting the overall accuracy [112]. As far as this review has reached, though, this approach has not been tested in urban areas. In the addressed context, RF has been mainly used for mapping urban tree species [74,86] and land cover classification, i.e., tasks where a wide range of classes is required.
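The bagging mechanism behind RF can be illustrated with a deliberately minimal ensemble (pure NumPy, synthetic one-dimensional data, and decision stumps standing in for full trees; real RF implementations additionally randomize the features considered at each split):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data on a single informative feature
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

def stump_fit(Xb, yb):
    """Pick the threshold that best separates the bootstrap sample."""
    best_t, best_acc = None, -1.0
    for t in np.unique(Xb):
        acc = np.mean((Xb > t) == yb)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Bagging: each weak learner sees a bootstrap sample (drawn with replacement)
stumps = [stump_fit(X[rng.integers(0, len(X), len(X))],
                    y[rng.integers(0, len(y), 1)[0:0].shape or slice(None)] if False else y[rng.integers(0, len(X), len(X))])
          for _ in range(0)]
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    stumps.append(stump_fit(X[idx], y[idx]))

# Majority vote across the ensemble
votes = np.stack([(X > t).astype(int) for t in stumps])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print(np.mean(pred == y))
```

Averaging many such bootstrapped learners is what stabilizes RF against overfitting on small training sets.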
The SVM technique has been shown by several studies to be suitable for analyzing hyperspectral data and superior to other ML techniques (MLC, SAM, etc.) [113,114,115,116], as it exhibits low sensitivity to the Hughes phenomenon, a good capacity for generalization, and can be used for both classification and regression tasks. This second aspect is particularly useful in urban areas, because SVM can delineate smooth transitions among classes better than other classifiers; high intraclass variability and inter-class similarity are inherent to this context [117]. Like RF, this technique also performs well when the amount of training data is limited [118]. In urban areas, it has been mainly applied for impervious surface detection [117] and land cover classification [80,119,120]. Two major limitations of this technique must be addressed. The first is the difficulty of handling large datasets, as the technique needs to perform several large-dimension matrix multiplications to obtain the best hyperplane. This processing step results in a very high computational cost, with the model in some cases running out of memory and thus not finding the best possible hyperplane [121]. This aspect could be problematic when applying the technique to spaceborne HSI of urban areas, since the model would need to deal with a much wider spatial coverage than airborne HSI. The second is its sensitivity to noise, as the technique is based on the precondition that all samples are independent and well distributed. However, this is not the case for most urban applications, especially when the number of samples is scarce and noise may therefore have a greater impact [122].
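The kernel computation, and the memory bottleneck just noted, can be seen in a short NumPy sketch of the Radial Basis (RBF) Gram matrix (random values stand in for pixel spectra, and the gamma value is arbitrary): for n training samples the matrix holds n² entries, which is what makes very large training sets costly for SVM.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# 6 hypothetical pixels with 100 spectral bands each
X = np.random.default_rng(1).normal(size=(6, 100))
K = rbf_kernel(X, gamma=0.01)

# The Gram matrix is n x n regardless of the number of bands:
# its quadratic growth with the number of training samples is the
# memory bottleneck for SVM on large scenes
print(K.shape)
```

The matrix is symmetric with a unit diagonal; a linear or polynomial kernel would simply replace the exponential with a different similarity measure over the same n x n structure.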
When compared with RF, some articles report that the two classifiers perform almost equally under ideal conditions [123]. Sheykhmousa et al. [124] compared RF and SVM performance with varying SR and number of features, concluding that SVM tends to perform slightly better with a pixel size of <30 m and more than 100 features, while RF performs better with 10–100 features and a pixel size of >100 m. These authors established the comparison from the general remote sensing literature; thus, it needs to be analyzed in an urban context, since its complexity could alter both classifiers' performance. As an example, Ahmad et al. [125] compared RF and SVM techniques for classifying a range of hyperspectral airborne datasets, obtaining overall accuracies of 77.4% and 77.8% for Pavia University and 75.38% and 81.86% for the University of Houston, respectively. In this case, SVM equals or even outperforms RF. This outcome can be attributed to SVM's ability to better outline smooth transitions among classes, which are frequent in urban areas, and its better suitability for high-SR imagery, although no solid conclusions can be drawn. The present literature review exercise did not find any article addressing this comparison when spaceborne HSI data was used and samples were scarce; in that case, the impact of noise may be a factor making RF perform better.
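For illustration only, such an RF vs. SVM comparison can be reproduced in miniature on synthetic spectra. The following scikit-learn sketch uses three artificial "urban" classes; the band count, class means, and noise level are arbitrary assumptions rather than values from any cited study, so only the workflow (not the accuracies) is meaningful:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 200-band spectra for three hypothetical classes (e.g. asphalt,
# roofing, vegetation); Gaussian noise mimics intra-class variability.
rng = np.random.default_rng(0)
n_bands, n_per_class = 200, 150
means = [0.2, 0.35, 0.5]  # assumed flat mean reflectance per class
X = np.vstack([m + 0.05 * rng.standard_normal((n_per_class, n_bands)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# class_weight="balanced" mitigates the class-imbalance bias discussed above.
rf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)

print(f"RF accuracy:  {rf.score(X_te, y_te):.2f}")
print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
```

On real urban scenes the gap between the two would depend on SR, feature count, and noise, as discussed above; this toy dataset is deliberately well-separated.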
Although there is no solid evidence of either classifier performing better for hyperspectral analysis in urban applications, the literature review highlights that SVM was more applied before 2019, while RF gained prominence after that year, when the new generation of hyperspectral satellites started to be launched. This would suggest that RF was preferred for processing coarser imagery, given that SVM is known to be more sensitive to the noise that may be present in such images.
Since 2019, a spike in the use of DL techniques has been observed. This is mainly because DL has often outperformed traditional ML methods in urban applications [126]. Some of the key advantages of DL include:
- Semantic representation through deep features: early layers capture low-level features (spatial-spectral metrics), middle layers capture patterns, and deep layers relate both to the semantic concepts indicated during training. This is especially useful when working with urban areas, as it makes it possible to better cope with their complexities [127].
- Universal approximation: radically different inputs can be included [73].
- Large flexibility of structure design: the model can be adapted to specific tasks and to different learning strategies, which is fundamental in remote sensing applications [73].
However, its use has not overcome the predominance of ML techniques. The main reason lies in the complex training involved in DL, as it generates many parameters, leading to high computational costs. Additionally, a larger number of samples is required, compared to ML methods, to avoid overfitting due to the large number of parameters generated by the model and the inherent higher dimensionality of HSI [128].
Therefore, most of the surveyed articles that employ DL techniques rely on airborne test datasets due to the vast amount of training data available. Excluding test areas, only two articles were based on their own training datasets [24,87]. This underlines that DL is still at a theoretical stage in hyperspectral urban remote sensing, as most of the articles propose new image processing techniques that enhance classification performance in this context, usually comparing them to previous DL and ML approaches as a way to indicate the superiority of their method [59,129,130,131,132]. In the case of the two articles employing their own training datasets, Perretta et al. [87] gathered 938 point samples for a classification of urban tree species, while Feng et al. [24] gathered 378,642 and 235,559 samples for two case studies. In both cases, samples were manually gathered, suggesting an intensive labeling process.
When DL is implemented, CNNs are the most used models. A benchmark comparison among the three CNN architectures was performed by Bera et al. [133], using the Pavia test dataset as the study area. The results indicated that, in general terms, the 3-D architecture performs better than the 1-D and 2-D architectures, although the authors argue that considering spectral and spatial features separately could improve the final accuracy of the tasks performed with HSI. The development of the 2-D architecture may have encouraged the growth of CNN use for HSI, especially in urban contexts. Ma et al. [134] state that the spectral ambiguity of materials limits the performance of classic ML methods, while 3-D CNNs provide better results, as they are able to extract texture-based features. However, higher-dimensional CNNs come at the expense of a higher computational cost, hence the need to reduce input dimensionality. One of the major challenges relates to the development of methodologies to accurately identify, select, and extract the information from spectral bands, to overcome the computational burden of these models [135].
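As a practical note, spatial-spectral CNN classifiers typically ingest small patches centered on each labeled pixel rather than whole scenes. A minimal numpy sketch of this preprocessing step is given below; the patch size and cube dimensions are illustrative assumptions, not values from any cited study:

```python
import numpy as np

def extract_patches(cube, coords, size=7):
    """Extract size x size x bands patches centered on labeled pixels.

    cube   : (H, W, B) hyperspectral image
    coords : iterable of (row, col) pixel positions
    Returns an array of shape (N, size, size, B), suitable as 2-D/3-D CNN
    input after adding a channel axis.
    """
    pad = size // 2
    # Reflect-pad spatially so border pixels still yield full patches.
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    patches = [padded[r:r + size, c:c + size, :] for r, c in coords]
    return np.stack(patches)

# Toy 20 x 20 scene with 30 bands and three labeled pixels.
cube = np.random.default_rng(1).random((20, 20, 30))
patches = extract_patches(cube, [(0, 0), (10, 10), (19, 19)], size=7)
print(patches.shape)  # (3, 7, 7, 30)
```

A 1-D CNN would instead consume only the central spectrum of each patch, which is one way the architectures discussed above trade spatial context for computational cost.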
The other three DL models employed (AEs, GANs, and RNNs), although less frequently used, respond to challenges inherent to HSI/urban analysis. AEs are often used for their ability to capture meaningful data representations without needing labeled data, which makes them a good choice for performing dimensionality reduction before any downstream ML or DL task [136]. However, their use in urban application studies is still very scarce: the present literature review exercise found only two articles employing this technique. Shahi et al. [132] introduced MSA2-Net, an auto-encoder designed to extract spectral and spatial features and, based on this information, group pixels into clusters according to their similar properties, proving particularly effective on large-scale HSI datasets. Wang et al. [64] proposed an AE framework for nonlinear spectral unmixing, where the encoder estimates material abundances and the decoder reconstructs the observed spectra. This enables simultaneous and unsupervised extraction of endmembers and abundances, particularly interesting for urban areas given their nonlinear mixing properties. In both cases, the framework was developed to combine dimensionality reduction (extraction of meaningful information) with a downstream task. No article was found that compares the feature extraction performance of AEs against other methods (such as principal component analysis, linear discriminant analysis, etc.) in complex contexts such as urban settings. Therefore, it is still unclear what major benefit this unsupervised method may provide. The application of RNNs to HSI is based on the assumption that hyperspectral pixels can be treated as sequential data [137]. Yuan et al. [78] applied this architecture, combined with a CNN, for the classification of Urban Functional Zones.
A major bottleneck for DL in hyperspectral urban analysis remains the requirement for large, well-labeled training datasets. Several optimization paths have been proposed to mitigate this limitation. Transfer learning and semi-/self-supervised learning approaches, such as X-ModalNet, demonstrate that cross-domain adaptation can reduce labeling requirements while maintaining high accuracy [131]. Reinforcement learning strategies for band selection have been explored to optimize spectral input dimensionality before CNN-based classification, alleviating computational demands [54]. Generative models, particularly GANs, are being tested for data augmentation and synthetic sample generation, as shown in recent physics-informed and conditional GAN frameworks. Hybrid network architectures, such as AMSSE-Net [59] and Hybrid FusionNet [130], also point toward scalable strategies for combining spectral, spatial, and ancillary data. These developments indicate that, while bottlenecks persist, DL optimization paths are emerging and represent a research frontier for urban hyperspectral analysis.
Beyond ML and DL, SU also deserves greater consideration in the urban context. The mixed-pixel problem is intrinsic to airborne and, especially, spaceborne imagery, making this technique particularly relevant. Traditional linear models such as LSMA have been successfully applied to derive fractions of vegetation, soil, and built materials [63]. Nonlinear models better capture urban multiple-scattering effects, while advanced approaches such as Vertex Component Analysis and autoencoder-based unmixing have shown promise for robust endmember extraction and abundance estimation [138]. It can be further highlighted that hybrid frameworks combining SU with ML/DL could improve classification and sub-pixel mapping in urban environments, though endmember selection and spectral variability remain significant challenges [139]. SU faces specific problems when multiple urban materials are to be categorized. The diversity of roofing, paving, and construction materials often results in highly similar spectral signatures, while aging, weathering, and shadows introduce further variability. These factors make it difficult to isolate representative endmembers and to achieve stable abundance estimations across different sites. Moreover, spectral mixtures are often nonlinear in dense urban fabrics, where multiple scattering among elements alters the recorded signal. This complexity explains why SU is less frequently used for broad land cover classifications but retains value in sub-pixel analyses and targeted applications such as impervious surface fraction mapping or vegetation abundance estimation [98,107].
Despite the growing interest in multimodal remote sensing, only about one-fourth of the studies in the review applied data fusion (Table 5). More importantly, relatively few papers evaluated fusion effectiveness with quantitative metrics, yet those that did report consistent improvements. For example, pansharpening with PRISMA imagery improved classification accuracy of urban trees by 5–10% compared to unfused data [87]. Fusion of PRISMA and Sentinel-2 increased separability of spectrally similar urban materials [97]. Several studies showed that hyperspectral and LIDAR fusion significantly enhanced urban vegetation mapping by incorporating structural information, with gains up to 15% in overall accuracy [42,74]. For impervious surfaces, integration of HSI with polarimetric SAR or multispectral imagery reduced confusion between asphalt and roofing, raising Kappa values by 0.05–0.10 [135,136]. Even for asbestos detection, nonlinear and bilinear fusion approaches improved target detection reliability in large datasets [137]. These examples confirm that, although underrepresented, data fusion approaches can lead to substantial accuracy gains. However, information allowing an in-depth quantitative accuracy improvement analysis is not always provided.
Regardless of the techniques employed, papers presenting a fully developed application in a real urban context are very few. As mentioned during the evaluation of the airborne datasets, there is a lack of benchmarks representing a consolidated urban area (except for the Pavia Centre dataset, which is not frequently used). As a result, many techniques that demonstrate good performance on these datasets could experience a significant decrease in performance when applied to more complex urban areas. Nevertheless, in the literature, there is not enough evidence to specifically support this statement. Furthermore, when a technique was implemented to address an application, its performance was evaluated on a very high SR dataset (<2 m). In the absence of published tests run on spaceborne coarser-resolution imagery, it is likely that the performance may not be adequate, and the impact of mixed pixels may come to light.

5.3. Trends in Urban Applications

In the results section, a range of different applications was presented. The four most represented are discussed here: land cover classification, impervious surface detection, LCZ mapping, and urban vegetation classification.
As analyzed, the major topic addressed by the papers was land cover classification. The products delivered by different authors tend to differ taxonomically, as the main idea behind the definition of categories relies on the morphological characteristics retrievable from spatial and spectral information [140]. Thus, the final classes are heavily influenced by the information retrieved by the sensor, the technique applied, and the characteristics of the study area. However, the complexity of urban environments has limited the development and accuracy of these products. The spatial variability, geometric complexity, and spectral ambiguity of man-made materials present challenges that have been widely addressed in the literature over the past decades [98].
One of the main interests of the scientific community was determining the SR necessary to accurately analyze and classify urban areas. Many authors consider SR to be the most important factor when classifying urban areas [141,142]. Sliuzas et al. [143] considered that a minimum of 15 m is required to achieve an overview of urban areas, while 5 m is the threshold value for object recognition. However, it is very difficult to indicate a fixed value, because the required minimum resolution can depend on several factors:
- Pixel analysis approach: pixel-based analysis is often limited by coarse SR in complex areas, where the spectral signal may be mixed due to the presence of multiple land covers within individual pixels. Sub-pixel analysis techniques, such as regression-based methods provided by ML or DL, or spectral unmixing, can potentially extract more detailed information [75]. However, as far as this review has found, sub-pixel analysis has not yet been applied to land cover classification using HSI, as the high number of land cover categories introduces significant complexity in determining accurate material fractions.
- Study area morphology: Welch [144] conducted an analysis of the minimum pixel size required to effectively study urban areas across various global cities. The findings emphasized that the critical factor in determining a suitable SR mainly relied on the contrast between different urban land cover types. In heterogeneous urban environments, where built-up areas, vegetation, transportation infrastructure, and bare soil often coexist within small spatial extents, high spectral and spatial contrast enables more effective discrimination of land cover types, even at coarser resolutions. In areas with low contrast or gradual transitions between covers, even finer SR may struggle to produce accurate classifications.
- Size of features: another aspect to take into consideration is the size of the urban feature to classify, as coarser SR tends to overlook smaller or more fragmented features. This limitation becomes relevant when the classification task involves detailed land cover categories. In such cases, the spatial heterogeneity within urban environments may not be accurately captured, leading to underrepresentation or misclassification of fine-scale and linear elements such as narrow roads, small buildings, or isolated vegetation patches [145].
Although employing a sensor with higher SR can improve the detection of fine-scale urban features, it is not sufficient on its own to ensure accurate land cover classification in urban areas. This limitation becomes particularly evident when surfaces with similar physical and spectral characteristics have to be classified. For instance, roads and rooftops often share construction materials such as asphalt, concrete, or bitumen, which leads to very similar spectral responses. These similarities, coupled with the frequent presence of shadows and surface degradation, make it difficult to differentiate such classes using only spatial detail. In such cases, high spectral resolution becomes particularly valuable, as it enhances spectral separability and allows for a more detailed discrimination of surface materials [50]. The information provided by HSI sensors has proven useful in distinguishing low-albedo materials, which often complicate classification when using MSI. Hyperspectral data also tend to produce better results when generating spectral indices, which can be particularly helpful in differentiating materials with low spectral separability [97]. This is relevant for urban surfaces where variations in material properties—such as moisture content, aging, composition—become more distinguishable when exploiting finer spectral differences. These improvements contribute to more accurate and reliable urban land cover classifications, especially when the aim is to map specific and narrowly defined categories [46].
Applications dealing with impervious surface detection highlight that HSI provides significant advantages over traditional MSI in distinguishing urban materials with similar reflectance characteristics [24]. However, urban complexity still presents notable classification challenges. As outlined, intra-class variability and inter-class spectral similarity remain primary obstacles, particularly between impervious and pervious low-albedo surfaces like shaded grass or dark roofing materials [83]. Nevertheless, several studies suggest that hybrid approaches integrating spatial metrics (e.g., texture, shape) and auxiliary data from other sensors or sources (Synthetic Aperture Radar, GIS, etc.) are needed to increase mapping accuracy in dense urban areas that may exhibit geometrical complexity [146,147]. Moreover, the potential of multi-temporal hyperspectral data for change detection in impervious surfaces remains largely unexplored and represents a topic for future research.
Urban vegetation classification, particularly tree species mapping, has benefited considerably from hyperspectral remote sensing due to its capacity to resolve subtle spectral differences across vegetation types [96,148]. In urban settings, where canopies are often fragmented and mixed, this capability becomes even more crucial. However, studies confirm that achieving high classification accuracy remains difficult without the integration of structural information such as canopy height and crown shape, often provided by LIDAR [49]. A key insight is that species-level classification requires not only high spectral fidelity but also well-designed training datasets and a clear nomenclature system, as spectral similarity across species within the same family can lead to confusion [109]. The present literature review exercise did not find any article addressing this application using a multi-temporal approach. While the most likely reason for this literature gap was the lack of multi-temporal datasets, future dedicated experiments may reveal phenology patterns as an additional discriminating feature to separate tree/vegetation species.
LCZ mapping is a growing application of HSI remote sensing, driven by its potential to characterize urban morphology and thermal behavior more accurately than the classical binary urban/rural classifications, whose boundaries have become vague [149,150]. Distinguishing between built-up LCZs (e.g., compact midrise vs. open low-rise) remains challenging due to their similar spectral properties, highlighting the need to incorporate ancillary data like building height or impervious surface fraction. A promising direction is the integration of spectral data with urban canopy parameters (UCPs), as exemplified by recent hybrid approaches combining PRISMA and Sentinel-2 data with RF classifiers, resulting in a substantial increase in accuracy [77]. One current limitation is the class imbalance within training datasets, given that some categories are widely represented while others are poorly represented [151]. Another limitation is the difficulty of defining a uniform product, as LCZ classification in cities with specific geographical and climatic conditions can differ from that in more generalized areas. As a consequence, the methods by which LCZs are defined and distributed may be very different [152]. One promising direction would be LCZ mapping using DL techniques; it would be interesting to research how this method could help to better account for the geographical variabilities of LCZs. In addition, apart from very recent studies such as [77], the potential of applying hyperspectral time-series analysis to monitor LCZ transitions and urban dynamics has not been leveraged, as far as this review has reached.

5.4. Comparison Between Satellite and Airborne HSI

Based on the above analysis of trends, a comparison between airborne and spaceborne HSI is made. To complement the narrative discussion, Table 6 provides a concise overview of the main differences between airborne and spaceborne hyperspectral platforms in the context of urban applications. The table is not meant to indicate that one platform typology should be more recommended than the other but, conversely, to highlight how airborne and spaceborne HSI are complementary and, as such, they can address different scopes and user needs.
As shown in Table 6, it is important to remind the reader that spaceborne and airborne HSI sensors differ in some key characteristics (also shown in Table 1 and Table 2). The differences in spatial and spectral resolution, coverage, SNR, and trajectory lead to different use scenarios and potential strengths. As observed in Figure 4, more than half of the articles published between 1993 and 2024 are based on airborne HSI. When the analysis is limited to the last 6 years (i.e., 2018–2024), this proportion is completely rebalanced to an equal number of articles between spaceborne and airborne. The reason why the first generation of hyperspectral spaceborne sensors (Hyperion, AHSI, CHRIS) was practically not used for urban applications lies in both technical limitations and contextual factors. Technically, first-generation hyperspectral satellites like Hyperion and CHRIS suffered from low SNR and limited swath widths compared to the second generation and their contemporary airborne counterparts. Additionally, data access during their operational lifetimes was restricted, and the scientific community had limited tools and infrastructure to exploit hyperspectral imagery at scale, especially in the early 2000s. These factors collectively hindered the use of spaceborne data for detailed urban analysis [153]. In comparison, the scientific community could access public airborne datasets, which played a pivotal role in advancing urban remote sensing research. These datasets were instrumental in facilitating the development and benchmarking of various processing techniques under controlled conditions. Airborne hyperspectral datasets offered several advantages: high spatial and spectral resolution, controlled acquisition conditions, and availability of ground truth data. These characteristics made airborne datasets ideal for testing the performance of different processing techniques in urban environments.
They allowed researchers to conduct experiments in ideal conditions, leading to the development of hyperspectral algorithms for urban feature extraction, classification, and analysis. The availability of such datasets significantly contributed to the growth of urban hyperspectral remote sensing, providing a foundation for methodological advancements and facilitating the transition to more complex applications as technology evolved. The advent of second-generation hyperspectral satellites (e.g., PRISMA) has prompted a significant increase in application-driven research within urban remote sensing. The broad coverage capabilities of these satellites facilitate studies across urban regions, as they enable consistent and repeatable observations. This improved accessibility and data quality have empowered researchers to explore a wide array of urban applications and to test previously developed techniques beyond the benchmark datasets.
Regarding the processing techniques employed, airborne images appear to have been analyzed with a wider variety of algorithms compared to spaceborne images. This finding is logical, since benchmark datasets facilitated experimentation to assess whether a technique performed well for HSI tasks. However, the slightly higher proportion of DL techniques applied to airborne data (29%) compared to spaceborne data (24%) suggests that, although airborne platforms offer significant advantages for these models owing to the wider availability of labeled data, the advent of spaceborne sensors coinciding with the rise in deep learning (Figure 10) has encouraged researchers to develop DL-based techniques and applications for spaceborne data as well—even though, in absolute terms, the number of DL techniques remains higher for airborne imagery. DL has made inroads on both fronts, but airborne-based research led (and still leads) in architectural experimentation: 1D, 2D, and full 3D CNNs; recurrent and attention-based models; AEs for feature extraction; and GANs for data augmentation—all tested extensively on well-labeled benchmark datasets. Spaceborne DL applications are fewer and tend to focus on proof-of-concept CNNs, constrained by the smaller volume of labeled training data available. This finding highlights the need for more studies where a proper labeling process is conducted prior to applying any DL model to spaceborne imagery, given that its performance in this context remains relatively unexplored.
In terms of applications, major differences between the two typologies of HSI can be identified. Spaceborne HSI is evenly distributed among four main topics: land cover classification, impervious surface detection, LCZs and urban vegetation classification. In contrast, airborne HSI is mostly used for land cover classification, with a significant proportion also devoted to impervious surface detection and urban vegetation classification. As mentioned, the higher proportion of land cover classification in airborne data is largely due to its reliance on specific datasets, as the samples correspond to this application, and this type of application is frequently used for benchmarking purposes.
Few to no studies making use of airborne data for LCZs were found. Although the coarser SR of spaceborne imagery was considered a constraint for providing an accurate analysis of urban areas, LCZs and impervious surface detection are urban applications frequently addressed via more aggregate analysis. Therefore, the SR was not an obstacle, and some authors could use spaceborne data successfully. Huang et al. [151] indicate that a minimum SR of 100 m is required to perform an accurate LCZ classification, and that a wider spatial coverage tends to favor a higher classification accuracy when samples are scarce. Weng et al. [50] state that medium-SR imagery is able to extract reliable impervious surface data, although airborne data seem to provide a higher accuracy [154].
Apart from the above cited studies, the key outcome is that the full potential of spaceborne HSI has yet to be fully exploited for urban areas. Furthermore, the emphasis on SR has left the temporal dimension largely overlooked. Therefore, with more spaceborne HSI missions capable of collecting imagery over the same area across time, the advantages brought by temporal resolution have yet to be investigated. As far as this review has found, no study has utilized a hyperspectral time series covering different years, indicating a notable scarcity of research and development in this area. In some ways, this outcome is quite surprising, given that missions like PRISMA have already proved effective in collecting time series over the same locations and such archives do exist and can be accessed. However, the evidence is that this advantage offered by satellite missions has been exploited more in other Earth observation research fields (e.g., water quality monitoring and agricultural applications). Consequently, the benefits for analyzing the temporal evolution of spectral curves—for example to monitor urban vegetation phenology or changes in artificial surfaces and materials as a proxy to infer urban regeneration projects, land conversions, or nature-based solution implementation—require further research. Some hints about the potential can be found in the literature. Granero-Belinchon et al. [155] characterize urban vegetation based on its phenology using MSI, allowing for better discrimination of categories. Mendili et al. [156] demonstrate that, by analyzing the temporal profiles of spectral signatures, it is possible to achieve greater separability between spectrally similar urban materials. Similarly, Xu et al. [157] show that incorporating multi-temporal data significantly improves classification accuracy, especially when distinguishing between spectrally similar materials.
Therefore, combining high spectral resolution with moderate temporal resolution could be a key factor in enhancing applications where spectral separability is challenging. We conclude that future research should invest more in the multi-temporal aspect and its cascading analytical advantages, in order to capitalize on the impact that planned missions such as the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME; https://sentiwiki.copernicus.eu/web/chime; accessed on 24 August 2025) are expected to generate in the upcoming years.

6. Conclusions

This review has provided a systematic and in-depth assessment of the current state of hyperspectral remote sensing for urban applications, with particular attention to the comparison of the roles played by airborne and spaceborne platforms. The study was guided by four main research questions, aiming to identify the most commonly used image processing techniques, understand prevailing analytical trends in urban hyperspectral studies, evaluate the current exploitation of spaceborne hyperspectral imagery for urban areas, and assess how spaceborne sensors perform relative to airborne systems. The field of hyperspectral remote sensing has seen significant progress in urban analysis over the past three decades, particularly with the deployment of second-generation spaceborne sensors. Historically, airborne platforms have dominated owing to their higher spatial and spectral resolution and the absence, for years, of a significant flow of data made available from satellite missions. However, the landscape is shifting as newer satellite missions—such as PRISMA and EnMAP—offer enhanced data accessibility and revisit capabilities, enabling scalable and cost-effective solutions for large-scale urban applications.
This transition to spaceborne platforms reflects a growing demand for broader coverage and operational efficiency in urban studies. While methodological innovation has advanced, much of the research remains focused on algorithm development, often reliant on benchmark datasets rather than real-world, application-driven investigations. Persistent challenges include the coarse spatial resolution of satellite data, the complexity of urban morphology, and the high dimensionality of hyperspectral imagery.
It should be noted that this review is constrained by the limited number of spaceborne studies currently available for urban areas, the strong reliance on benchmark airborne datasets, and the scarcity of multi-temporal and fusion-based analyses. In addition, the gray literature was intentionally excluded, which may omit insights into real-world applications and operational use cases. These constraints reflect the historical evolution of hyperspectral urban remote sensing, a research field with wide room for improvement and advancement, and they may limit the generalizability of some of the trends identified.
Recent developments in machine learning and deep learning are progressively addressing many of the challenges of hyperspectral urban remote sensing. Techniques such as Random Forests, Support Vector Machines, and Convolutional Neural Networks have enhanced the accuracy of classification tasks and feature extraction in complex urban environments. However, considerable gaps remain, including the limited availability of labeled data, the limited use of data fusion methods, and the lack of multi-temporal analyses. To fully harness the potential of hyperspectral data for applications such as land cover classification, urban vegetation mapping, impervious surface detection, and LCZ identification, several research directions must be prioritized. First, the integration and fusion of hyperspectral data with complementary sources (LiDAR, SAR, GIS) are needed to overcome spectral similarity issues and the 3D complexity of urban environments. Second, the development and release of benchmark datasets representing consolidated urban fabrics would enable more realistic testing of algorithms. Third, the systematic exploitation of multi-temporal hyperspectral data offers significant potential for analyzing urban dynamics and vegetation phenology. Finally, while machine learning remains dominant, advancing deep learning frameworks will require large, well-labeled training datasets and strategies to mitigate computational costs. Pursuing these avenues is essential to move from purely methodological experimentation toward robust, operational urban applications, thereby consolidating the role of hyperspectral imaging in urban science and planning.
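To make the dominant analytical workflow concrete, the following minimal sketch (not drawn from any of the reviewed studies) illustrates pixel-wise supervised classification of hyperspectral-like data with Random Forest and Support Vector Machine, the two ML methods this review identifies as most widely used. The data are synthetic; the band count (200), class count (4), and noise level are illustrative assumptions only.

```python
# Hedged sketch: pixel-wise RF and SVM classification of synthetic
# hyperspectral-like spectra. Band/class counts are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 1000, 200, 4

# Synthetic "spectra": each class has a distinct mean spectrum plus noise,
# mimicking spectrally separable urban materials.
means = rng.normal(size=(n_classes, n_bands))
labels = rng.integers(0, n_classes, size=n_pixels)
X = means[labels] + 0.5 * rng.normal(size=(n_pixels, n_bands))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0
)

accs = {}
for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(kernel="rbf")):
    clf.fit(X_tr, y_tr)
    accs[type(clf).__name__] = accuracy_score(y_te, clf.predict(X_te))
    print(type(clf).__name__, round(accs[type(clf).__name__], 3))
```

In real urban scenes the labeled-data scarcity and spectral similarity discussed above make this far harder than the synthetic case suggests; the sketch only shows the mechanics of the per-pixel ML pipeline.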

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs17173126/s1, File S1: PRISMA 2020 checklist; Table S1: Excel spreadsheet listing all 113 papers composing the database used to undertake the present systematic review. Each paper is classified according to the properties analyzed in this review.

Author Contributions

Conceptualization, J.A.G.G. and D.T.; methodology, J.A.G.G. and D.T.; formal analysis, J.A.G.G.; investigation, J.A.G.G.; data curation, J.A.G.G.; writing—original draft preparation, J.A.G.G.; writing—review and editing, D.T. and G.L.; visualization, J.A.G.G.; supervision, D.T.; project administration, D.T.; funding acquisition, D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Agenzia Spaziale Italiana/Italian Space Agency (ASI) through a PhD studentship in the framework of the “Dottorato Nazionale di Osservazione della Terra/National Doctorate in Earth Observation (DNOT)” based at Sapienza University of Rome. The funder played no role in the research.

Data Availability Statement

The original contributions presented in this study are included in the Supplementary Materials. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AEs: Auto-encoders
ANN: Artificial Neural Network
ASI: Italian Space Agency
CHIME: Copernicus Hyperspectral Imaging Mission for the Environment
CNN: Convolutional Neural Network
DL: Deep Learning
DT: Decision Trees
FCNN: Fully Connected Neural Network
GAN: Generative Adversarial Network
GIS: Geographic Information System
GSD: Ground Sampling Distance
HSI: Hyperspectral Imaging
K-NN: K-Nearest Neighbors
LiDAR: Light Detection and Ranging
LCZ: Local Climate Zone
LSMA: Linear Spectral Mixture Analysis
LSTM: Long Short-Term Memory
ML: Machine Learning
MLC: Maximum Likelihood Classifier
MSI: Multispectral Imaging
NMF: Nonnegative Matrix Factorization
PAN: Panchromatic
RF: Random Forest
RNN: Recurrent Neural Network
SAR: Synthetic Aperture Radar
SAM: Spectral Angle Mapper
SNR: Signal-to-Noise Ratio
SR: Spatial Resolution
SU: Spectral Unmixing
SVM: Support Vector Machine
SWIR: Short-Wave Infrared
UCP: Urban Canopy Parameter
VNIR: Visible and Near-Infrared

References

  1. Keivani, R. A review of the main challenges to urban sustainability. Int J. Urban Sustain. Dev. 2009, 1, 5–16. [Google Scholar] [CrossRef]
  2. United Nations World Urbanization Prospects: The 2018 Revision. Department of Economic and Social Affairs. 2018. Available online: https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html (accessed on 6 June 2025).
  3. United Nations Transforming Our World: The 2030 Agenda for Sustainable Development 2015; Division for Sustainable Development Goals: New York, NY, USA, 2015; Available online: https://www.un.org/sustainabledevelopment/cities/ (accessed on 6 June 2025).
  4. Zhu, C.; Li, J.; Zhang, S.; Wu, C.; Zhang, B.; Gao, L.; Plaza, A. Impervious Surface Extraction From Multispectral Images via Morphological Attribute Profiles Based on Spectral Analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4775–4790. [Google Scholar] [CrossRef]
  5. de Almeida, C.R.; Teodoro, A.C.; Gonçalves, A. Study of the Urban Heat Island (UHI) Using Remote Sensing Data/Techniques: A Systematic Review. Environments 2021, 8, 105. [Google Scholar] [CrossRef]
  6. Shojanoori, R.; Shafri, H. Review on the Use of Remote Sensing for Urban Forest Monitoring. Arboric. Urban For. 2016, 42, 400–417. [Google Scholar] [CrossRef]
  7. Herold, M.; Roberts, D.A.; Gardner, M.E.; Dennison, P.E. Spectrometry for urban area remote sensing—Development and analysis of a spectral library from 350 to 2400 nm. Remote Sens. Environ. 2004, 91, 304–319. [Google Scholar] [CrossRef]
  8. Fabbretto, A.; Bresciani, M.; Pellegrino, A.; Alikas, K.; Pinardi, M.; Mangano, S.; Padula, R.; Giardino, C. Tracking Water Quality and Macrophyte Changes in Lake Trasimeno (Italy) from Spaceborne Hyperspectral Imagery. Remote Sens. 2024, 16, 1704. [Google Scholar] [CrossRef]
  9. Fernández-Manso, A.; Quintano, C.; Fernández-Guisuraga, J.M.; Roberts, D. Next-gen regional fire risk mapping: Integrating hyperspectral imagery and National Forest Inventory data to identify hot-spot wildland-urban interfaces. Sci. Total Environ. 2024, 940, 173568. [Google Scholar] [CrossRef] [PubMed]
  10. Hajaj, S.; Harti, A.E.; Pour, A.B.; Jellouli, A.; Adiri, Z.; Hashim, M. A review on hyperspectral imagery application for lithological mapping and mineral prospecting: Machine learning techniques and future prospects. Remote Sens. Appl. Soc. Environ. 2024, 35, 101218. [Google Scholar] [CrossRef]
  11. Ram, B.G.; Oduor, P.; Igathinathane, C.; Howatt, K.; Sun, X. A systematic review of hyperspectral imaging in precision agriculture: Analysis of its current state and future prospects. Comput. Electron. Agric. 2024, 222, 109037. [Google Scholar] [CrossRef]
  12. Goetz, A.; Vane, G.; Solomon, J.; Rock, B. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef]
  13. Selci, S. The Future of Hyperspectral Imaging. J. Imaging 2019, 5, 84. [Google Scholar] [CrossRef]
  14. Manolakis, D.; Marden, D.; Shaw, G. Hyperspectral image processing for automatic target detection applications. Linc. Lab. J. 2003, 14, 79–116. [Google Scholar]
  15. Vane, G.; Green, R.O.; Chrien, T.G.; Enmark, H.T.; Hansen, E.G.; Porter, W.M. The airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 1993, 44, 127–143. [Google Scholar] [CrossRef]
  16. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  17. Pearlman, J.; Carman, S.; Segal, C.; Jarecke, P.; Clancy, P.; Browne, W. Overview of the Hyperion Imaging Spectrometer for the NASA EO-1 mission. Scanning the Present and Resolving the Future. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium, Sydney, NSW, Australia, 9–13 July 2001; Cat. No.01CH37217. Volume 7, pp. 3036–3038. [Google Scholar] [CrossRef]
  18. Barnsley, M.J.; Settle, J.J.; Cutter, M.A.; Lobb, D.R.; Teston, F. The PROBA/CHRIS mission: A low-cost smallsat for hyperspectral multiangle observations of the Earth surface and atmosphere. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1512–1520. [Google Scholar] [CrossRef]
  19. Cogliati, S.; Sarti, F.; Chiarantini, L.; Cosi, M.; Lorusso, R.; Lopinto, E.; Miglietta, F.; Genesio, L.; Guanter, L.; Damm, A.; et al. The PRISMA imaging spectroscopy mission: Overview and first performance analysis. Remote Sens. Environ. 2021, 262, 112499. [Google Scholar] [CrossRef]
  20. Matsunaga, T.; Iwasaki, A.; Tsuchida, S.; Tanii, J.; Kashimura, O.; Nakamura, R.; Yamamoto, H.; Tachikawa, T.; Rokugawa, S. Hyperspectral Imager Suite (HISUI). In Optical Payloads for Space Missions; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2015; pp. 215–222. [Google Scholar] [CrossRef]
  21. Chabrillat, S.; Foerster, S.; Segl, K.; Beamish, A.; Brell, M.; Asadzadeh, S.; Milewski, R.; Ward, K.J.; Brosinsky, A.; Koch, K.; et al. The EnMAP spaceborne imaging spectroscopy mission: Initial scientific results two years after launch. Remote Sens. Environ. 2024, 315, 114379. [Google Scholar] [CrossRef]
  22. Qian, S.-E. Hyperspectral Satellites, Evolution, and Development History. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7032–7056. [Google Scholar] [CrossRef]
  23. Kudela, R.M.; Hooker, S.B.; Guild, L.S.; Houskeeper, H.F.; Taylor, N. Expanded Signal to Noise Ratio Estimates for Validating Next-Generation Satellite Sensors in Oceanic, Coastal, and Inland Waters. Remote Sens. 2024, 16, 1238. [Google Scholar] [CrossRef]
  24. Feng, X.; Shao, Z.; Huang, X.; He, L.; Lv, X.; Zhuang, Q. Integrating Zhuhai-1 Hyperspectral Imagery With Sentinel-2 Multispectral Imagery to Improve High-Resolution Impervious Surface Area Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2410–2424. [Google Scholar] [CrossRef]
  25. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172. [Google Scholar] [CrossRef]
  26. Spoto, F.; Sy, O.; Laberinti, P.; Martimort, P.; Fernandez, V.; Colin, O.; Hoersch, B.; Meygret, A. Overview Of Sentinel-2. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 1707–1710. [Google Scholar]
  27. Li, J.; Huang, X.; Tu, L. WHU-OHS: A benchmark dataset for large-scale Hersepctral Image classification. Int. J. Appl. Earth Obs. Geoinformation 2022, 113, 103022. [Google Scholar] [CrossRef]
  28. Barducci, A.; Guzzi, D.; Marcoionni, P.; Pippi, I. CHRIS-PROBA performance evaluation: Signal-to-noise ratio, instrument efficiency and data quality from acquisitions over San Rossore (ITALY) test site. In Proceedings of the 3rd ESA CHRIS/Proba Workshop, Noordwijk, The Netherlands, 21–23 March 2005. [Google Scholar]
  29. Liu, Y.-N.; Sun, D.-X.; Hu, X.-N.; Ye, X.; Li, Y.-D.; Liu, S.-F.; Cao, K.-Q.; Chai, M.-Y.; Zhou, W.-Y.-N.; Zhang, J.; et al. The Advanced Hyperspectral Imager: Aboard China’s GaoFen-5 Satellite. IEEE Geosci. Remote Sens. Mag. 2019, 7, 23–32. [Google Scholar] [CrossRef]
  30. Storch, T.; Honold, H.-P.; Chabrillat, S.; Habermeyer, M.; Tucker, P.; Brell, M.; Ohndorf, A.; Wirth, K.; Betz, M.; Kuchler, M.; et al. The EnMAP imaging spectroscopy mission towards operations. Remote Sens. Environ. 2023, 294, 113632. [Google Scholar] [CrossRef]
  31. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M.; et al. Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
  32. Cocks, T.; Jenssen, R.; Stewart, A.; Wilson, I.; Shields, T. The HyMap Airborne Hyperspectral Sensor: The System, Calibration and Performance. In Proceedings of the 1st EARSeL Workshop on Imaging Spectroscopy, Zurich, Switzerland, 6–8 October 1998. [Google Scholar]
  33. Babey, S.K.; Anger, C.D. Compact airborne spectrographic imager (CASI): A progress review. In Proceedings of the Imaging Spectrometry of the Terrestrial Environment; Vane, G., Ed.; SPIE: Bellingham, WA, USA, 1993; Volume 1937, pp. 152–163. [Google Scholar] [CrossRef]
  34. Centre for Remote Sensing (CRS), University of Iceland ROSIS. 2019. Available online: https://crs.hi.is/?page_id=877 (accessed on 15 May 2025).
  35. Spectral Imaging Ltd. (SPECIM). Specim FX10 Hyperspectral Camera Datasheet. Available online: https://www.specim.com/products/specim-fx10/ (accessed on 12 August 2025).
  36. Basedow, R.W.; Carmer, D.C.; Anderson, M.E. HYDICE system: Implementation and performance. In Proceedings of the Imaging Spectrometry; Descour, M.R., Mooney, J.M., Perry, D.L., Illing, L.R., Eds.; SPIE: Bellingham, WA, USA, 1995; Volume 2480, pp. 258–267. [Google Scholar]
  37. Weeks, J. Population: An Introduction to Concepts and Issues, 10th ed.; Wadsworth Thomson Learning: Belmont, CA, USA, 2008. [Google Scholar]
  38. Weeks, J. Defining Urban Areas. In Remote Sensing of Urban and Suburban Areas; Springer: Berlin/Heidelberg, Germany, 2010; Volume 10, pp. 33–45. [Google Scholar] [CrossRef]
  39. UN-Habitat. What Is a City? United Nations Human Settlements Programme: Nairobi, Kenya, 2020; Available online: https://unhabitat.org/sites/default/files/2020/06/city_definition_what_is_a_city.pdf (accessed on 12 June 2025).
  40. Page, M.J.; Moher, D.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ 2021, 372, n160. [Google Scholar] [CrossRef]
  41. Cuca, B.; Zaina, F.; Tapete, D. Monitoring of Damages to Cultural Heritage across Europe Using Remote Sensing and Earth Observation: Assessment of Scientific and Grey Literature. Remote Sens. 2023, 15, 3748. [Google Scholar] [CrossRef]
  42. Schmitt, M.; Zhu, X.X. Data Fusion and Remote Sensing: An ever-growing relationship. IEEE Geosci. Remote Sens. Mag. 2016, 4, 6–23. [Google Scholar] [CrossRef]
  43. Taherzadeh, E.; Mansor, S.B.; Ashurov, R. Hyperspectral Remote Sensing of Urban Areas: An Overview of Techniques and Applications. 2012. Available online: https://api.semanticscholar.org/CorpusID:16655704 (accessed on 15 May 2025).
  44. van der Linden, S.; Okujeni, A.; Canters, F.; Degerickx, J.; Heiden, U.; Hostert, P.; Priem, F.; Somers, B.; Thiel, F. Imaging Spectroscopy of Urban Environments. Surv. Geophys. 2019, 40, 471–488. [Google Scholar] [CrossRef]
  45. Transon, J.; D’Andrimont, R.; Maugnard, A.; Defourny, P. Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context. Remote Sens. 2018, 10, 157. [Google Scholar] [CrossRef]
  46. Torres Gil, L.K.; Valdelamar Martínez, D.; Saba, M. The Widespread Use of Remote Sensing in Asbestos, Vegetation, Oil and Gas, and Geology Applications. Atmosphere 2023, 14, 172. [Google Scholar] [CrossRef]
  47. Abbasi, M.; Mostafa, S.; Vieira, A.S.; Patorniti, N.; Stewart, R.A. Mapping Roofing with Asbestos-Containing Material by Using Remote Sensing Imagery and Machine Learning-Based Image Classification: A State-of-the-Art Review. Sustainability 2022, 14, 8068. [Google Scholar] [CrossRef]
  48. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  49. Ciesielski, M.; Sterenczak, K. Accuracy of determining specific parameters of the urban forest using remote sensing. IForest Biogeosci. For. 2019, 12, 498–510. [Google Scholar] [CrossRef]
  50. Weng, Q. Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends. Remote Sens. Environ. 2012, 117, 34–49. [Google Scholar] [CrossRef]
  51. Kuras, A.; Brell, M.; Rizzi, J.; Burud, I. Hyperspectral and Lidar Data Applied to the Urban Land Cover Machine Learning and Neural-Network-Based Classification: A Review. Remote Sens. 2021, 13, 3393. [Google Scholar] [CrossRef]
  52. Segl, K.; Guanter, L.; Rogass, C.; Kuester, T.; Roessner, S.; Kaufmann, H.; Sang, B.; Mogulsky, V.; Hofer, S. EeteS—The EnMAP End-to-End Simulation Tool. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 522–530. [Google Scholar] [CrossRef]
  53. Gong, Z.; Zhong, P.; Hu, W. Diversity in Machine Learning. IEEE Access 2019, 7, 64323–64350. [Google Scholar] [CrossRef]
  54. Sawant, S.; Prabukumar, M. Band fusion based hyper spectral image classification. Int. J. Pure Appl. Math. 2017, 117, 71–76. [Google Scholar]
  55. Gamba, P. A collection of data for urban area characterization. In Proceedings of the IGARSS 2004. 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; Volume 1, p. 72. [Google Scholar] [CrossRef]
  56. Zhang, L.; You, J. A spectral clustering based method for hyperspectral urban image. In Proceedings of the 2017 Joint Urban Remote Sensing Event (JURSE), Dubai, United Arab Emirates, 6–8 March 2017; pp. 1–3. [Google Scholar] [CrossRef]
  57. Nischan, M.L.; Kerekes, J.P.; Baum, J.E.; Basedow, R.W. Analysis of HYDICE noise characteristics and their impact on subpixel object detection. In Proceedings of the Imaging Spectrometry V; Descour, M.R., Shen, S.S., Eds.; SPIE: Bellingham, WA, USA, 1999; Volume 3753, pp. 112–123. [Google Scholar] [CrossRef]
  58. Kalman, L.S.; Bassett, E.M., III. Classification and material identification in an urban environment using HYDICE hyperspectral data. In Proceedings of the Imaging Spectrometry III; Descour, M.R., Shen, S.S., Eds.; SPIE: Bellingham, WA, USA, 1997; Volume 3118, pp. 57–68. [Google Scholar] [CrossRef]
  59. Gao, H.; Feng, H.; Zhang, Y.; Xu, S.; Zhang, B. AMSSE-Net: Adaptive Multiscale Spatial–Spectral Enhancement Network for Classification of Hyperspectral and LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  60. Rasti, B.; Ghamisi, P.; Gloaguen, R. Fusion of Multispectral LiDAR and Hyperspectral Imagery. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2659–2662. [Google Scholar] [CrossRef]
  61. ITRES CASI-1500. 2008. Available online: http://www.formosatrend.com/Brochure/CASI-1500.pdf (accessed on 28 April 2025).
  62. Wei, J.; Wang, X. An Overview on Linear Unmixing of Hyperspectral Data. Math. Probl. Eng. 2020, 2020, 3735403. [Google Scholar] [CrossRef]
  63. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  64. Wang, M.; Zhao, M.; Chen, J.; Rahardja, S. Nonlinear Unmixing of Hyperspectral Data via Deep Autoencoder Networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1467–1471. [Google Scholar] [CrossRef]
  65. Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10. [Google Scholar] [CrossRef]
  66. Rashmi, S.; Addamani, S.; Venkat; Ravikiran, S. Spectral Angle Mapper Algorithm for Remote Sensing Image Classification. Int. J. Innov. Sci. Eng. Technol. 2014, 1, 201–205. Available online: http://www.ijiset.com/ (accessed on 2 May 2025).
  67. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  68. Tejasree, G.; Agilandeeswari, L. An extensive review of hyperspectral image classification and prediction: Techniques and challenges. Multimed. Tools Appl. 2024, 83, 80941–81038. [Google Scholar] [CrossRef]
  69. Lv, W.; Wang, X. Overview of Hyperspectral Image Classification. J. Sens. 2020, 2020, 4817234. [Google Scholar] [CrossRef]
  70. Krichen, M. Convolutional Neural Networks: A Survey. Computers 2023, 12, 151. [Google Scholar] [CrossRef]
  71. Gao, H.; Lin, S.; Yang, Y.; Li, C.; Yang, M. Convolution Neural Network Based on Two-Dimensional Spectrum for Hyperspectral Image Classification. J. Sens. 2018, 2018, 8602103. [Google Scholar] [CrossRef]
  72. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  73. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
  74. Liu, L.; Coops, N.C.; Aven, N.W.; Pang, Y. Mapping urban tree species using integrated airborne hyperspectral and LiDAR remote sensing data. Remote Sens. Environ. 2017, 200, 170–182. [Google Scholar] [CrossRef]
  75. Okujeni, A.; Van der Linden, S.; Jakimow, B.; Rabe, A.; Verrelst, J.; Hostert, P. A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover. Remote Sens. 2014, 6, 6324–6346. [Google Scholar] [CrossRef]
  76. Comber, A.J. Land use or land cover? J. Land Use Sci. 2008, 3, 199–201. [Google Scholar] [CrossRef]
  77. Vavassori, A.; Oxoli, D.; Venuti, G.; Brovelli, M.A.; de Cumis, M.S.; Sacco, P.; Tapete, D. A combined Remote Sensing and GIS-based method for Local Climate Zone mapping using PRISMA and Sentinel-2 imagery. Int. J. Appl. Earth Obs. Geoinf. 2024, 131, 103944. [Google Scholar] [CrossRef]
  78. Yuan, J.; Wang, S.; Wu, C.; Xu, Y. Fine-Grained Classification of Urban Functional Zones and Landscape Pattern Analysis Using Hyperspectral Satellite Imagery: A Case Study of Wuhan. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3972–3991. [Google Scholar] [CrossRef]
  79. Sajadi, P.; Gholamnia, M.; Bonafoni, S.; Mills, G.; Sang, Y.-F.; Li, Z.; Khan, S.; Han, J.; Pilla, F. Automated Pixel Purification for Delineating Pervious and Impervious Surfaces in a City Using Advanced Hyperspectral Imagery Techniques. IEEE Access 2024, 12, 82560–82583. [Google Scholar] [CrossRef]
  80. Negri, R.G.; Dutra, L.V.; Sant’aNna, S.J.S. Comparing support vector machine contextual approaches for urban area classification. Remote Sens. Lett. 2016, 7, 485–494. [Google Scholar] [CrossRef]
  81. Le Bris, A.; Chehata, N.; Briottet, X.; Paparoditis, N. Spectral Band Selection for Urban Material Classification Using Hyperspectral Libraries. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III–7, 33–40. [Google Scholar] [CrossRef]
  82. Cavalli, R.M.; Fusilli, L.; Pascucci, S.; Pignatti, S.; Santini, F. Hyperspectral Sensor Data Capability for Retrieving Complex Urban Land Cover in Comparison with Multispectral Data: Venice City Case Study (Italy). Sensors 2008, 8, 3299–3320. [Google Scholar] [CrossRef]
  83. Weng, Q.; Hu, X.; Lu, D. Extracting impervious surfaces from medium spatial resolution multispectral and hyperspectral imagery: A comparison. Int. J. Remote Sens. 2008, 29, 3209–3232. [Google Scholar] [CrossRef]
  84. Kumar, V.; Garg, R.D. Comparison of Different Mapping Techniques for Classifying Hyperspectral Data. J. Indian Soc. Remote Sens. 2012, 40, 411–420. [Google Scholar] [CrossRef]
  85. Gadal, S.; Ouerghemmi, W.; Barlatier, R.; Mozgeris, G. Critical Analysis of Urban Vegetation Mapping by Satellite Multispectral and Airborne Hyperspectral Imagery. In Proceedings of the 5th International Conference on Geographical Information Systems Theory, Applications and Management, Heraklion, Greece, 3–5 May 2019. [Google Scholar] [CrossRef]
  86. Niedzielko, J.; Kopeć, D.; Wylazłowska, J.; Kania, A.; Charyton, J.; Halladin-Dąbrowska, A.; Niedzielko, M.; Berłowski, K. Airborne data and machine learning for urban tree species mapping: Enhancing the legend design to improve the map applicability for city greenery management. Int. J. Appl. Earth Obs. Geoinf. 2024, 128, 103719. [Google Scholar] [CrossRef]
  87. Perretta, M.; Delogu, G.; Funsten, C.; Patriarca, A.; Caputi, E.; Boccia, L. Testing the Impact of Pansharpening Using PRISMA Hyperspectral Data: A Case Study Classifying Urban Trees in Naples, Italy. Remote Sens. 2024, 16, 3730. [Google Scholar] [CrossRef]
  88. Degerickx, J.; Hermy, M.; Somers, B. Mapping Functional Urban Green Types Using High Resolution Remote Sensing Data. Sustainability 2020, 12, 2144. [Google Scholar] [CrossRef]
  89. Stewart, I.D.; Oke, T.R. Local Climate Zones for Urban Temperature Studies. Bull. Am. Meteorol. Soc. 2012, 93, 1879–1900. [Google Scholar] [CrossRef]
  90. Civicioglu, P.; Besdok, E. Pansharpening of remote sensing images using dominant pixels. Expert Syst. Appl. 2024, 242, 122783. [Google Scholar] [CrossRef]
  91. Khan, A.; Vibhute, A.D.; Mali, S.; Patil, C.H. A systematic review on hyperspectral imaging technology with a machine and deep learning methodology for agricultural applications. Ecol. Inform. 2022, 69, 101678. [Google Scholar] [CrossRef]
  92. Waigl, C.F.; Prakash, A.; Stuefer, M.; Verbyla, D.; Dennison, P. Fire detection and temperature retrieval using EO-1 Hyperion data over selected Alaskan boreal forest fires. Int. J. Appl. Earth Obs. Geoinf. 2019, 81, 72–84. [Google Scholar] [CrossRef]
  93. Keramitsoglou, I.; Kontoes, C.; Sykioti, O.; Sifakis, N.; Xofis, P. Reliable, accurate and timely forest mapping for wildfire management using ASTER and Hyperion satellite imagery. For. Ecol. Manag. 2008, 255, 3556–3562. [Google Scholar] [CrossRef]
  94. Shahtahmassebi, A.R.; Li, C.; Fan, Y.; Wu, Y.; Lin, Y.; Gan, M.; Wang, K.; Malik, A.; Blackburn, G.A. Remote sensing of urban green spaces: A review. Urban For. Urban Green. 2021, 57, 126946. [Google Scholar] [CrossRef]
  95. Digra, M.; Dhir, R.; Sharma, N. Land use land cover classification of remote sensing images based on the deep learning approaches: A statistical analysis and review. Arab. J. Geosci. 2022, 15, 1003. [Google Scholar] [CrossRef]
  96. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83. [Google Scholar] [CrossRef]
  97. Gaur, S.; Das, N.; Bhattacharjee, R.; Ohri, A.; Patra, D. A novel band selection architecture to propose a built-up index for hyperspectral sensor PRISMA. Earth Sci. Inform. 2023, 16, 887–898. [Google Scholar] [CrossRef]
  98. Poursanidis, D.; Panagiotakis, E.; Chrysoulakis, N. Sub-pixel Material Fraction Mapping in the UrbanScape using PRISMA Hyperspectral Imagery. In Proceedings of the 2023 Joint Urban Remote Sensing Event (JURSE), Heraklion, Greece, 21–23 March 2023; pp. 1–5. [Google Scholar] [CrossRef]
  99. Rosentreter, J.; Hagensieker, R.; Okujeni, A.; Roscher, R.; Wagner, P.D.; Waske, B. Subpixel Mapping of Urban Areas Using EnMAP Data and Multioutput Support Vector Regression. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1938–1948. [Google Scholar] [CrossRef]
  100. Li, X.; Li, Z.; Qiu, H.; Hou, G.; Fan, P. An overview of hyperspectral image feature extraction, classification methods and the methods based on small samples. Appl. Spectrosc. Rev. 2023, 58, 367–400. [Google Scholar] [CrossRef]
  101. Wang, R.; Wang, M.; Ren, C.; Chen, G.; Mills, G.; Ching, J. Mapping local climate zones and its applications at the global scale: A systematic review of the last decade of progress and trend. Urban Clim. 2024, 57, 102129. [Google Scholar] [CrossRef]
  102. Vaddi, R.; Kumar, B.L.N.P.; Manoharan, P.; Agilandeeswari, L.; Sangeetha, V. Strategies for dimensionality reduction in hyperspectral remote sensing: A comprehensive overview. Egypt. J. Remote Sens. Space Sci. 2024, 27, 82–92. [Google Scholar] [CrossRef]
  103. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88. [Google Scholar] [CrossRef]
  104. Han, T.; Goodenough, D.G. Investigation of nonlinearity in hyperspectral remotely sensed imagery—A nonlinear time series analysis approach. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–27 July 2007; pp. 1556–1560. [Google Scholar] [CrossRef]
  105. Christovam, L.E.; Pessoa, G.G.; Shimabukuro, M.H.; Galo, M.L.B.T. Land Use and Land Cover Classification Using Hyperspectral Imagery: Evaluating The Performance Of Spectral Angle Mapper, Support Vector Machine And Random Forest. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 1841–1847. [Google Scholar] [CrossRef]
  106. Liu, Y.; Lu, S.; Lu, X.; Wang, Z.; Chen, C.; He, H. Classification of Urban Hyperspectral Remote Sensing Imagery Based on Optimized Spectral Angle Mapping. J. Indian Soc. Remote Sens. 2019, 47, 289–294. [Google Scholar] [CrossRef]
  107. Ouerghemmi, W.; Gadal, S.; Mozgeris, G. Urban Vegetation Mapping Using Hyperspectral Imagery and Spectral Library. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1632–1635. [Google Scholar] [CrossRef]
  108. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  109. Brabant, C.; Alvarez-Vanhard, E.; Laribi, A.; Morin, G.; Thanh Nguyen, K.; Thomas, A.; Houet, T. Comparison of Hyperspectral Techniques for Urban Tree Diversity Classification. Remote Sens. 2019, 11, 1269. [Google Scholar] [CrossRef]
  110. Freeman, E.A.; Moisen, G.G.; Frescino, T.S. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada. Ecol. Model. 2012, 233, 1–10. [Google Scholar] [CrossRef]
  111. Colditz, R.R. An Evaluation of Different Training Sample Allocation Schemes for Discrete and Continuous Land Cover Classification Using Decision Tree-Based Algorithms. Remote Sens. 2015, 7, 9655–9681. [Google Scholar] [CrossRef]
  112. Mellor, A.; Boukir, S.; Haywood, A.; Jones, S. Exploring issues of training data imbalance and mislabelling on random forest performance for large area land cover classification using the ensemble margin. ISPRS J. Photogramm. Remote Sens. 2015, 105, 155–168. [Google Scholar] [CrossRef]
  113. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  114. Foody, G. Thematic Map Comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633. [Google Scholar] [CrossRef]
  115. Moughal, T.A. Hyperspectral image classification using Support Vector Machine. J. Phys. Conf. Ser. 2013, 439, 012042. [Google Scholar] [CrossRef]
  116. Gopinath, G.; Sasidharan, N.; Surendran, U. Landuse classification of hyperspectral data by spectral angle mapper and support vector machine in humid tropical region of India. Earth Sci. Inform. 2020, 13, 633–640. [Google Scholar] [CrossRef]
  117. Okujeni, A.; van der Linden, S.; Suess, S.; Hostert, P. Ensemble Learning From Synthetically Mixed Training Data for Quantifying Urban Land Cover With Support Vector Regression. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1640–1650. [Google Scholar] [CrossRef]
  118. Mantero, P.; Moser, G.; Serpico, S.B. Partially Supervised classification of remote sensing images through SVM-based probability density estimation. IEEE Trans. Geosci. Remote Sens. 2005, 43, 559–570. [Google Scholar] [CrossRef]
  119. Martins, L.A.; Viel, F.; Seman, L.O.; Bezerra, E.A.; Zeferino, C.A. A real-time SVM-based hardware accelerator for hyperspectral images classification in FPGA. Microprocess. Microsyst. 2024, 104, 104998. [Google Scholar] [CrossRef]
  120. Khodadadzadeh, M.; Li, J.; Prasad, S.; Plaza, A. Fusion of Hyperspectral and LiDAR Remote Sensing Data Using Multiple Feature Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1–13. [Google Scholar] [CrossRef]
  121. Adugna, T.; Fan, J. Comparison of Random Forest and Support Vector Machine Classifiers for Regional Land Cover Mapping Using Coarse Resolution FY-3C Images. Remote Sens. 2022, 14, 574. [Google Scholar] [CrossRef]
  122. Li, H.-X.; Yang, J.-L.; Zhang, G.; Fan, B. Probabilistic support vector machines for classification of noise affected data. Inf. Sci. 2013, 221, 60–71. [Google Scholar] [CrossRef]
  123. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  124. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine Versus Random Forest for Remote Sensing Image Classification: A Meta-Analysis and Systematic Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6308–6325. [Google Scholar] [CrossRef]
  125. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  126. Youssef, R.; Aniss, M.; Jamal, C. Machine Learning and Deep Learning in Remote Sensing and Urban Application: A Systematic Review and Meta-Analysis. In Proceedings of the 4th Edition of the International Conference on Geo-IT and Water Resources (GEOIT4W-2020), Al-Hoceima, Morocco, 11–12 March 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  127. Zhang, G.; Zhang, R.; Zhou, G.; Jia, X. Hierarchical spatial features learning with deep CNNs for very high-resolution remote sensing image classification. Int. J. Remote Sens. 2018, 39, 5978–5996. [Google Scholar] [CrossRef]
  128. Li, Y.; Zhang, H.; Xue, X.; Jiang, Y.; Shen, Q. Deep learning for remote sensing image classification: A survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1264. [Google Scholar] [CrossRef]
  129. Li, X.; Fan, X.; Fan, J.; Li, Q.; Gao, Y.; Zhao, X. DASR-Net: Land Cover Classification Methods for Hybrid Multiattention Multispectral High Spectral Resolution Remote Sensing Imagery. Forests 2024, 15, 1826. [Google Scholar] [CrossRef]
  130. Zheng, Y.; Liu, S.; Chen, H.; Bruzzone, L. Hybrid FusionNet: A Hybrid Feature Fusion Framework for Multisource High-Resolution Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
  131. Hong, D.; Yokoya, N.; Xia, G.-S.; Chanussot, J.; Zhu, X.X. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 12–23. [Google Scholar] [CrossRef]
  132. Rafiezadeh Shahi, K.; Ghamisi, P.; Rasti, B.; Gloaguen, R.; Scheunders, P. MS²A-Net: Multiscale Spectral–Spatial Association Network for Hyperspectral Image Clustering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1–13. [Google Scholar] [CrossRef]
  133. Bera, S.; Shrivastava, V.K.; Satapathy, S.C. Advances in Hyperspectral Image Classification Based on Convolutional Neural Networks: A Review. CMES—Comput. Model. Eng. Sci. 2022, 133, 219–250. [Google Scholar] [CrossRef]
  134. Ma, X.; Man, Q.; Yang, X.; Dong, P.; Yang, Z.; Wu, J.; Liu, C. Urban Feature Extraction within a Complex Urban Area with an Improved 3D-CNN Using Airborne Hyperspectral Data. Remote Sens. 2023, 15, 992. [Google Scholar] [CrossRef]
  135. Noshiri, N.; Beck, M.A.; Bidinosti, C.P.; Henry, C.J. A comprehensive review of 3D convolutional neural network-based classification techniques of diseased and defective crops using non-UAV-based hyperspectral images. Smart Agric. Technol. 2023, 5, 100316. [Google Scholar] [CrossRef]
  136. Jaiswal, G.; Rani, R.; Mangotra, H.; Sharma, A. Integration of hyperspectral imaging and autoencoders: Benefits, applications, hyperparameter tunning and challenges. Comput. Sci. Rev. 2023, 50, 100584. [Google Scholar] [CrossRef]
  137. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  138. Nascimento, J.M.P.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  139. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  140. Vali, A.; Comai, S.; Matteucci, M. Deep Learning for Land Use and Land Cover Classification Based on Hyperspectral and Multispectral Earth Observation Data: A Review. Remote Sens. 2020, 12, 2495. [Google Scholar] [CrossRef]
  141. Li, R.; Gao, X.; Shi, F.; Zhang, H. Scale Effect of Land Cover Classification from Multi-Resolution Satellite Remote Sensing Data. Sensors 2023, 23, 6136. [Google Scholar] [CrossRef]
  142. Li, X.; Chen, G.; Zhang, Y.; Yu, L.; Du, Z.; Hu, G.; Liu, X. The impacts of spatial resolutions on global urban-related change analyses and modeling. iScience 2022, 25, 105660. [Google Scholar] [CrossRef] [PubMed]
  143. Sliuzas, R.; Kuffer, M.; Masser, I. The Spatial and Temporal Nature of Urban Objects; Springer: Berlin/Heidelberg, Germany, 2009; pp. 67–84. ISBN 978-1-4020-4371-0. [Google Scholar]
  144. Welch, R. Spatial resolution requirements for urban studies. Int. J. Remote Sens. 1982, 3, 139–146. [Google Scholar] [CrossRef]
  145. Fisher, J.; Acosta Porras, E.; Dennedy-Frank, P.; Kroeger, T.; Boucher, T. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. Remote Sens. Ecol. Conserv. 2017, 4, 137–149. [Google Scholar] [CrossRef]
  146. Hasani, H.; Samadzadegan, F.; Reinartz, P. A metaheuristic feature-level fusion strategy in classification of urban area using hyperspectral imagery and LiDAR data. Eur. J. Remote Sens. 2017, 50, 222–236. [Google Scholar] [CrossRef]
  147. Zhang, H.; Wan, L.; Wang, T.; Lin, Y.; Lin, H.; Zheng, Z. Impervious Surface Estimation From Optical and Polarimetric SAR Data Using Small-Patched Deep Convolutional Networks: A Comparative Study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2374–2387. [Google Scholar] [CrossRef]
  148. Mozgeris, G.; Gadal, S.; Jonikavičius, D.; Straigytė, L.; Ouerghemmi, W.; Juodkienė, V. Hyperspectral and color-infrared imaging from ultralight aircraft: Potential to recognize tree species in urban environments. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–5. [Google Scholar] [CrossRef]
  149. Liang, Y.; Song, W.; Cao, S.; Du, M. Local Climate Zone Classification Using Daytime Zhuhai-1 Hyperspectral Imagery and Nighttime Light Data. Remote Sens. 2023, 15, 3351. [Google Scholar] [CrossRef]
  150. Moix, E.; Giuliani, G. Mapping Local Climate Zones (LCZ) Change in the 5 Largest Cities of Switzerland. Urban Sci. 2024, 8, 120. [Google Scholar] [CrossRef]
  151. Huang, F.; Jiang, S.; Zhan, W.; Bechtel, B.; Liu, Z.; Demuzere, M.; Huang, Y.; Xu, Y.; Ma, L.; Xia, W.; et al. Mapping local climate zones for cities: A large review. Remote Sens. Environ. 2023, 292, 113573. [Google Scholar] [CrossRef]
  152. Aslam, A.; Rana, I.A. The use of local climate zones in the urban environment: A systematic review of data sources, methods, and themes. Urban Clim. 2022, 42, 101120. [Google Scholar] [CrossRef]
  153. Goetz, A.F.H. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  154. Okujeni, A.; van der Linden, S.; Hostert, P. Extending the vegetation–impervious–soil model using simulated EnMAP data and machine learning. Remote Sens. Environ. 2015, 158, 69–80. [Google Scholar] [CrossRef]
  155. Granero-Belinchon, C.; Adeline, K.; Lemonsu, A.; Briottet, X. Phenological Dynamics Characterization of Alignment Trees with Sentinel-2 Imagery: A Vegetation Indices Time Series Reconstruction Methodology Adapted to Urban Areas. Remote Sens. 2020, 12, 639. [Google Scholar] [CrossRef]
  156. El Mendili, L.; Puissant, A.; Chougrad, M.; Sebari, I. Towards a Multi-Temporal Deep Learning Approach for Mapping Urban Fabric Using Sentinel 2 Images. Remote Sens. 2020, 12, 423. [Google Scholar] [CrossRef]
  157. Xu, F.; Heremans, S.; Somers, B. Urban land cover mapping with Sentinel-2: A spectro-spatio-temporal analysis. Urban Inform. 2022, 1, 8. [Google Scholar] [CrossRef]
Figure 1. The basic structure of an HSI hyper-cube [14]. The colored spectrum visualization highlights the portion of the electromagnetic spectrum covered, from the visible to the infrared regions, and illustrates the concept of hyperspectral imaging.
Figure 2. Workflow of the systematic literature review and intermediate and final outcomes.
Figure 3. Urban HSI-based articles sorted by the scope of their publication (a), and number of publications categorized by the scope of the paper vs. year of publication (b).
Figure 4. Urban HSI-based articles sorted by technology used (a), and number of publications categorized by technology used vs. year of publication (b).
Figure 5. Distribution of airborne sensors used for hyperspectral data acquisition (a), and frequency of usage of public datasets (b).
Figure 6. Articles sorted by spaceborne sensor used.
Figure 8. ML and DL techniques applied to airborne images. Notation: K-NN—K-Nearest Neighbors; MLC—Maximum Likelihood Classifier; FCNN—Fully Connected Neural Networks; SAM—Spectral Angle Mapper; RF—Random Forest; SVM—Support Vector Machine; GAN—Generative Adversarial Networks; CNN—Convolutional Neural Networks.
Figure 9. DL and ML techniques applied to spaceborne images. Notation: MLC—Maximum Likelihood Classifier; FCNN—Fully Connected Neural Networks; SAM—Spectral Angle Mapper; RF—Random Forest; SVM—Support Vector Machine; RNN—Recurrent Neural Networks; CNN—Convolutional Neural Networks.
Figure 10. Distribution of most prevalent hyperspectral image processing techniques vs. year of paper publication. Notation: CNN—Convolutional Neural Networks; RF—Random Forest; SAM—Spectral Angle Mapper; SVM—Support Vector Machine.
Figure 11. Articles sorted by the main application addressed, based on spaceborne and airborne hyperspectral images.
Table 2. Specifications of the airborne sensors found to be the most exploited for urban research and applications, and thus included in the present review database.
| Sensor | Nº of Bands | Spectral Resolution (nm) | Spectral Range (nm) | Spatial Resolution (m) | Swath Width (km) | Peak Signal-to-Noise Ratio |
|---|---|---|---|---|---|---|
| AVIRIS [31] | 224 | ~10 | 400–2500 | 4–20 | ~11 | >1000:1 |
| HyMap [32] | 128–160 | ~15 | 450–2500 | 3–5 | 2.5–5 | >500:1 |
| CASI [33] | 228 | 2–10 | 380–1050 | 0.5–4 | 2 | 1095:1 |
| ROSIS-3 [34] | 115 | 4 | 430–860 | 2.3 | 2 | Mentioned as “high” but no value was provided |
| AISA [35] | 63–488 | 2–10 | 400–2500 | 1–5 | 2 | >1000:1 |
| HYDICE [36] | 210 | 10 | 400–2500 | 1–4 | 2 | ~800:1 |
Table 3. List of literature review papers where urban hyperspectral remote sensing was considered.
Publication — Main Topic (Time Range), followed by outcomes:

[43] — Techniques and applications of hyperspectral remote sensing in urban areas (1990–2012)
  • Spectral–spatial fusion is necessary to improve the accuracy of models for land cover classification
  • Lack of specially designed urban-oriented algorithms
  • Technical complexity of using spaceborne hyperspectral data

[44] — Influence of SR in hyperspectral remote sensing in urban areas (1995–2017)
  • Difficulties of spaceborne hyperspectral imaging due to its coarse resolution
  • Lack of standardization of urban feature nomenclature

[46] — Analysis of asbestos and vegetation in urban areas using hyperspectral remote sensing (1998–2022)
  • High spectral resolution and SR are considered important for classifying asbestos roofs
  • Hyperspectral images are rarely used for vegetation classification, although they show higher classification performance
  • Scarce availability of cost-effective hyperspectral sources

[48] — State-of-the-art tree species classification in urban areas using hyperspectral remote sensing (1967–2015)
  • Interest in multi-temporal hyperspectral data for vegetation studies, potentially addressable with spaceborne data
  • Difficulties in studying vegetation in urban areas can be faced by integrating hyperspectral and active sensors

[49] — Different analytical approaches to urban forests using hyperspectral remote sensing (1997–2018)
  • Hyperspectral data allows more specific species classification in urban areas compared to multispectral data
  • The integration of hyperspectral data with active sensors yields better results
  • Challenges in classifying different species using spaceborne sensors

[50] — State-of-the-art impervious surface classification in urban areas using hyperspectral remote sensing (1975–2010)
  • Hyperspectral imaging is proven advantageous for extracting the variety of impervious materials present in cities
  • Hyperspectral imaging is still poorly exploited for the analysis of impervious surfaces

[51] — State-of-the-art land cover classification in urban areas using hyperspectral airborne remote sensing (1991–2021)
  • Difficulty in classifying urban areas from a single-source perspective
  • Deep Learning techniques may be the key to solving the lack of standardization of urban feature nomenclature

[47] — State-of-the-art asbestos classification in urban areas using hyperspectral airborne remote sensing and Machine Learning (1980–2022)
  • Proven effectiveness in classifying asbestos using hyperspectral images
  • Coarse SR usually leads to inconsistent results

[45] — Analysis of the applicability of hyperspectral sensors in comparison to multispectral Sentinel-2 data (2000–2017)
  • Coarse SR of hyperspectral data is identified as the major constraint for analyzing the complexity of the urban landscape
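Several of the reviews above converge on the need for spectral–spatial fusion to improve urban land cover classification. As a minimal illustrative sketch (not drawn from any of the cited papers), the simplest form of such fusion stacks each pixel's raw spectrum with a spatially smoothed version of it, so that a classifier sees both the material signature and its local spatial context:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_spatial_features(cube, window=5):
    """Stack raw spectra with spatially averaged spectra.

    cube: (rows, cols, bands) hyperspectral array.
    Returns a (rows*cols, 2*bands) per-pixel feature matrix.
    """
    # Per-band moving average captures the local spatial neighborhood;
    # size=(window, window, 1) smooths spatially but not across bands.
    spatial = uniform_filter(cube.astype(float), size=(window, window, 1))
    feats = np.concatenate([cube, spatial], axis=-1)
    return feats.reshape(-1, feats.shape[-1])

# Toy 10x10 scene with 20 bands (invented data, for shape illustration only)
cube = np.random.rand(10, 10, 20)
X = spectral_spatial_features(cube)
print(X.shape)  # (100, 40): 100 pixels, 20 spectral + 20 spatial features
```

The resulting matrix can be fed directly to any of the pixel-wise classifiers discussed below (SVM, RF); more elaborate fusion schemes replace the moving average with morphological or texture features.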
Table 4. Specifications of public urban hyperspectral benchmark datasets and their respective acquisition sensors.
| Name of the Public Dataset | Brief Description | Classes | Sensor Used | N° of Bands | Spectral Range (nm) | Spatial Resolution (m) | Spectral Resolution (nm) | Peak Signal-to-Noise Ratio |
|---|---|---|---|---|---|---|---|---|
| Pavia University (2003) [34,53] | Gathered over north-east Pavia (Northern Italy), containing 42,776 labeled samples | Asphalt, Meadows, Gravel, Trees, Metal sheet, Bare soil, Bitumen, Brick, Shadow | ROSIS-3 | 115 | 430–860 | 2.3 | 4 | Mentioned as “high” but no value was provided |
| Pavia Centre (2003) [54,55] | Gathered over the city center of Pavia (Northern Italy), containing 7456 labeled samples | Water, Trees, Asphalt, Self-blocking bricks, Bitumen, Tiles, Shadows, Meadows, Bare soil | ROSIS-3 | 115 | 430–860 | 2.3 | 4 | Mentioned as “high” but no value was provided |
| Washington DC Mall (1995) [56,57] | Gathered over Washington, DC (US), containing 76,777 labeled samples | Roofs, Street, Grass, Trees, Path, Water, Shadow | HYDICE | 210 | 400–2500 | 1–4 | 10 | 800:1 |
| Urban Dataset (1995) [58] | Captured over Yuma City (Arizona, US), containing 94,249 labeled samples | Asphalt, Grass, Tree, Roof, Metal, Dirt | HYDICE | 210 | 400–2500 | 1–4 | 10 | 800:1 |
| MUUFL (2010) [59] | Gathered over the Gulf Park campus (Mississippi, US), containing 53,687 labeled samples, plus a LIDAR image | Trees, Mostly Grass, Mixed Ground, Dirt and Sand, Roads, Water, Building Shadows, Buildings, Sidewalks, Yellow curbs, Cloth panels | ITRES CASI-1500 | Up to 288; 64 used in this dataset | 380–1050 | 0.5 | 10.4 | 1095:1 |
| Houston (2018) [60,61] | Gathered over Houston (Texas, US), containing 504,172 labeled samples, plus a Digital Surface Model gathered through LIDAR | Healthy grass, Stressed grass, Artificial turf, Evergreen trees, Deciduous trees, Bare earth, Water, Residential buildings, Non-residential buildings, Roads, Sidewalks, Crosswalks, Major thoroughfares, Highways, Railways, Paved parking lots, Unpaved parking lots, Cars, Trains, Stadium seats | ITRES CASI-1500 | Up to 288; 48 used in this dataset | 380–1050 | 0.5 | 14 | 1095:1 |
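The benchmark datasets in Table 4 are typically used as labeled hypercubes for testing pixel-wise classifiers such as SVM or RF. The following minimal sketch reproduces that workflow on synthetic spectra — the class means, band count, and noise level are invented stand-ins, not properties of any real dataset (real benchmarks such as Pavia University are commonly distributed as MATLAB .mat files loadable with scipy.io.loadmat):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for a labeled hypercube: 3 classes, 50 bands,
# each class defined by a mean spectrum plus Gaussian noise.
n_per_class, bands = 200, 50
means = rng.normal(size=(3, bands))
X = np.vstack([m + 0.3 * rng.normal(size=(n_per_class, bands)) for m in means])
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardization + RBF-kernel SVM: the most common baseline in the
# reviewed literature for urban hyperspectral classification.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```

On a real benchmark the only change is the data-loading step (and, usually, masking out the unlabeled background pixels before the train/test split).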
Table 5. Number of articles (and respective percentage) where integration/fusion of hyperspectral data with other sensors was performed.
| Type of Data Used for Integration/Fusion | Hyperspectral Spaceborne | Hyperspectral Airborne |
|---|---|---|
| Panchromatic/Multispectral | 9 (8%) | 0 |
| LIDAR | 0 | 11 (10%) |
| SAR | 3 (3%) | 0 |
| Geographic Information System (GIS) | 3 (3%) | – |
| No integration/fusion | 87 (76%), both data types combined | |
Table 6. Key differences between airborne and spaceborne HSI.
| Characteristic | Airborne HSI | Spaceborne HSI |
|---|---|---|
| Spatial Resolution | Very high (0.5–5 m) | Moderate (10–60 m) |
| Coverage | Local to regional; limited swath (few km) | Regional to global; wide swath (30–150 km) |
| Temporal Resolution | Sporadic (campaign-based, one-shot surveys) | Repeat coverage (days to weeks, depending on mission observation scenario) and/or on demand |
| Data Accessibility | Mostly proprietary or costly flight campaigns; public benchmark datasets available | Increasingly free access for scientific research purposes (e.g., PRISMA, EnMAP) |
| Signal-to-Noise Ratio (SNR) | High, depending on sensor and conditions | Variable; generally lower than airborne but improving in second-generation missions |
| Typical Applications | Algorithm testing, benchmark datasets, detailed land cover classification, urban vegetation studies, impervious surfaces | Broader-scale land cover mapping, impervious surfaces, LCZ analysis, regional and city-scale vegetation studies |
| Limitations | High acquisition cost, limited coverage, poor temporal repeatability | Coarser spatial resolution, mixed-pixel problem, weather dependence |
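The mixed-pixel problem listed among the spaceborne limitations in Table 6 can be made concrete by degrading a fine-resolution class map to a coarser ground sampling distance and counting how many coarse pixels cover more than one class. The sketch below uses an invented toy scene (a diagonal boundary between two land cover classes) purely for illustration:

```python
import numpy as np

def fraction_mixed(labels, block):
    """Fraction of coarse pixels covering more than one fine-scale class.

    labels: 2D integer class map at fine (airborne-like) resolution.
    block:  downsampling factor (e.g., 30 for a 1 m -> 30 m degradation).
    """
    r, c = labels.shape
    r, c = r - r % block, c - c % block          # crop to a multiple of block
    tiles = labels[:r, :c].reshape(r // block, block, c // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block * block)
    mixed = np.array([np.unique(t).size > 1 for t in tiles])
    return mixed.mean()

# Toy 60x60 "1 m" scene: two classes split by a diagonal edge
fine = (np.add.outer(np.arange(60), np.arange(60)) > 60).astype(int)
print(fraction_mixed(fine, 2))   # near-airborne GSD: few mixed pixels
print(fraction_mixed(fine, 30))  # spaceborne-like GSD: most pixels mixed
```

This is why spectral unmixing (e.g., vertex component analysis [138]) is far more prominent in the spaceborne literature than in airborne studies.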

Share and Cite

Gámez García, J.A.; Lazzeri, G.; Tapete, D. Airborne and Spaceborne Hyperspectral Remote Sensing in Urban Areas: Methods, Applications, and Trends. Remote Sens. 2025, 17, 3126. https://doi.org/10.3390/rs17173126