
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license.

The goal of the paper is to detect pixels that contain targets of known spectra. The target can be present in a sub- or above pixel. Pixels without targets are classified as background pixels. Each pixel is treated via the content of its neighborhood. A pixel whose spectrum is different from its neighborhood is classified as a “suspicious point”. In each suspicious point there is a mix of target(s) and background. The main objective in supervised detection (also called “target detection”) is to search for a specific given spectral material (target) in hyperspectral imaging (HSI), where the spectral signature of the target is known.

Hyperspectral remote sensing exploits the fact that all materials reflect, absorb and emit electromagnetic energy at specific wavelengths. In comparison to a typical camera that uses red, green and blue colors as three wavelength bands, hyperspectral imaging (HSI) sensors acquire digital images in many contiguous and very narrow spectral bands that typically span the visible, near-infrared and mid-infrared portions of the spectrum. This makes it possible to construct an essentially continuous radiance spectrum for every pixel in the scene. The goal of the paper is to identify and classify materials that are present in HSI and to detect targets of interest.
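In computational terms, a hyperspectral cube is naturally represented as a three-dimensional array. The following minimal NumPy sketch (with illustrative dimensions, not those of the datasets used later) shows the per-pixel spectrum and the flattened pixels-by-bands matrix that unmixing and detection algorithms operate on.

```python
import numpy as np

# A hyperspectral cube is conventionally stored as a 3-D array of shape
# (rows, cols, bands); every spatial position holds a full radiance spectrum.
# The dimensions below are illustrative, not taken from the paper's datasets.
rows, cols, bands = 4, 5, 50
rng = np.random.default_rng(0)
cube = rng.random((rows, cols, bands))

# The spectrum of the pixel at spatial location (r, c) is a length-`bands` vector.
spectrum = cube[2, 3, :]
assert spectrum.shape == (bands,)

# Flattening the spatial axes gives the "pixels x bands" matrix used by
# most unmixing and detection algorithms.
X = cube.reshape(-1, bands)
assert X.shape == (rows * cols, bands)
```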

Material and object detection using remotely sensed spectral information has many military and civilian applications. Detection algorithms can be divided into two classes: supervised and unsupervised. Unsupervised detection is also called “anomaly detection”. Anomalies are defined as patterns in the data that do not conform to a well-defined notion of normal data behavior. In anomaly detection tasks, hyperspectral pixels have to be classified as either background (normal behavior) or anomalies. Every pixel or group of pixels that is spectrally different in a meaningful way from the abundant (background) spectral material is classified as an anomaly. A survey of unsupervised detection algorithms is given in [

In supervised detection (also called “target detection”), the main objective is to search for a specific given spectral material (target) in HSI. The spectral signature of the target is known.

The paper has the following structure. The problem statement, including descriptions of the HSI datasets used for testing the proposed algorithms, is given in Section 2. Related work is described in Section 3. Section 4 describes the mathematical background needed for the constructed solution. Section 5 presents the ROTU algorithm, which is based on orthogonal rotations. A classification-based unmixing algorithm is given in Section 6 and its experimental results are presented in Section 6.4.

The goal in this paper is to detect pixels that contain targets of a known spectrum. The target can be present in a sub- or in a whole pixel. Pixels without targets are classified as background pixels. Each pixel is treated via the content of its neighborhood. A pixel whose spectrum is different from its neighborhood according to [

Linear mixing is the most widely used spectral mixing model. It assumes that the observed reflectance spectrum of a given pixel is generated by a linear combination of a small number of unique constituent deterministic spectral signatures, called endmembers, which are the outputs of spectral unmixing. With its constraints, the model is defined in the following way:
Given endmember spectra $s_1, s_2, \ldots, s_M$ and abundances $a_1, a_2, \ldots, a_M$, the observed spectrum $x$ of a pixel is
$$x = \sum_{k=1}^{M} a_k s_k + n, \qquad a_k \ge 0, \qquad \sum_{k=1}^{M} a_k = 1,$$
where $n$ is an additive noise term.
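The constrained linear mixing model can be sketched numerically. The endmember matrix, abundances and noise level below are synthetic, and the unconstrained least-squares inversion is only a stand-in for a fully constrained solver.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, M = 30, 3

# Synthetic endmember spectra (columns) and a random abundance vector that
# satisfies the non-negativity and sum-to-one constraints.
S = rng.random((bands, M))
a = rng.random(M)
a /= a.sum()

# Observed pixel: linear mixture plus small additive noise.
noise = 0.001 * rng.standard_normal(bands)
x = S @ a + noise

# With known endmembers, unconstrained least squares already recovers the
# abundances up to the noise level (a fully constrained solver would
# additionally project the estimate onto the simplex).
a_hat, *_ = np.linalg.lstsq(S, x, rcond=None)
assert np.allclose(a_hat, a, atol=0.01)
```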

A mixed spectrum is a spectrum that contains both background and target spectra. We consider a simplified version of this model with two endmembers, in which each observed pixel is a mixture of the target spectrum and a background spectrum.

We are given the target’s spectrum

The related algorithms appear in Section 3 (see also [

This proposition implies a solution if the background and the target have statistically independent spectra.

The linear unmixing algorithms, which are presented in [

The first algorithm in [

In this paper, we present two new methods based on the same assumption about the relation between spectra of different materials as was done in [

The experiments in this paper, including performance evaluation, were performed on three real hyperspectral datasets: “desert” (

An up-to-date overview of hyperspectral unmixing is given in [

In the rest of this section, we divide the many existing algorithms into several groups. We aim to show some trends, but do not attempt to cover the avalanche of related work on unmixing and target detection.

Let $m_1, m_2, \ldots, m_p$ denote the endmember signatures and let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_p)^{T}$ be the abundance vector of a pixel $x \in \mathbb{R}^{L}$, so that $x = \sum_{k} \alpha_k m_k$ up to noise.

Several approaches [

The MVT-type approaches are complex from a computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [
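As an illustration of the projection-based shortcut, the following sketch implements the basic PPI idea on synthetic data: project the pixels onto random unit vectors (“skewers”) and count how often each pixel attains an extreme value. It is a simplified rendition of the idea, not the exact published algorithm.

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel Purity Index sketch: project pixels onto random unit vectors
    ("skewers") and count how often each pixel attains an extreme value."""
    rng = np.random.default_rng(seed)
    n_pixels, bands = X.shape
    counts = np.zeros(n_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(bands)
        skewer /= np.linalg.norm(skewer)
        proj = X @ skewer
        counts[np.argmax(proj)] += 1   # extreme in the skewer direction
        counts[np.argmin(proj)] += 1   # extreme in the opposite direction
    return counts

# Toy data: mixtures of three synthetic endmembers; the pure pixels are appended last.
rng = np.random.default_rng(2)
E = np.eye(3) * 10                       # three "pure" spectra in 3 bands
A = rng.dirichlet(np.ones(3), size=200)  # random abundances on the simplex
X = np.vstack([A @ E, E])                # mixtures + the pure pixels themselves
scores = ppi_scores(X)

# Extremes of a linear functional over a simplex occur at its vertices, so the
# pure pixels (last three rows) collect essentially all the extreme hits.
assert scores[-3:].sum() > scores[:-3].sum()
```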

If the image is of size approximately 300

The description in this section, which is based on [

According to [^{n}_{1}_{n}^{T}^{n}


Rosenblatt’s Perceptron algorithm ([ ]) iterates over the labeled samples $(x_i, y_i)$ with $y_i \in \{+1, -1\}$: whenever a sample is misclassified, i.e. $y_i(\langle w, x_i\rangle + b) \le 0$, the weights are updated by $w \leftarrow w + y_i x_i$ and $b \leftarrow b + y_i$.
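A minimal implementation of the Perceptron update rule (on a toy two-dimensional dataset, not hyperspectral data) might look as follows.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Rosenblatt's Perceptron: for each misclassified sample x_i, i.e.
    y_i * (w.x_i + b) <= 0, update w <- w + y_i * x_i and b <- b + y_i."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            if y_i * (x_i @ w + b) <= 0:
                w += y_i * x_i
                b += y_i
                errors += 1
        if errors == 0:   # converged: every sample is correctly classified
            break
    return w, b

# Linearly separable toy set: class +1 around (2, 2), class -1 around (-2, -2).
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron(X, y)
assert all(np.sign(X @ w + b) == y)
```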

The Perceptron algorithm is guaranteed to converge only if the training data are linearly separable. A procedure that does not suffer from this limitation is Linear Discriminant Analysis (LDA) via Fisher’s discriminant functional [ ], which seeks a direction $w$ that maximizes the ratio of the between-class scatter to the within-class scatter of the projected samples.
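Fisher's discriminant direction can be sketched as follows; the small regularization term added to the within-class scatter is our own safeguard against singularity, not part of the classical formulation.

```python
import numpy as np

def fisher_direction(X, y):
    """Fisher's linear discriminant: w = Sw^{-1} (m1 - m2), where Sw is the
    within-class scatter matrix and m1, m2 are the class means."""
    X1, X2 = X[y == 1], X[y == -1]
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    return np.linalg.solve(Sw + 1e-9 * np.eye(X.shape[1]), m1 - m2)

rng = np.random.default_rng(3)
X1 = rng.normal([4, 0], 0.5, size=(50, 2))
X2 = rng.normal([0, 4], 0.5, size=(50, 2))
X = np.vstack([X1, X2])
y = np.array([1] * 50 + [-1] * 50)

w = fisher_direction(X, y)
# Projections of the two classes onto w are cleanly separated.
assert (X1 @ w).min() > (X2 @ w).max()
```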



Suppose that we have a set of vectors $x_1, \ldots, x_n$ that we wish to divide into two subsets $C_1$ and $C_{-1}$. Each subset represents a cluster. The solution is based on the K-means algorithm ([
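A plain two-cluster K-means, of the kind used in this construction, can be sketched on synthetic blobs as follows.

```python
import numpy as np

def kmeans_two(X, iters=50, seed=0):
    """Plain two-cluster K-means: alternate assigning each point to its
    nearest centroid with re-estimating the centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new, centroids):   # assignments stabilized
            break
        centroids = new
    return labels, centroids

# Two well-separated blobs; K-means recovers the grouping exactly.
rng = np.random.default_rng(4)
A = rng.normal(0, 0.3, size=(30, 2))
B = rng.normal(5, 0.3, size=(30, 2))
labels, _ = kmeans_two(np.vstack([A, B]))
assert len(set(labels[:30])) == 1 and len(set(labels[30:])) == 1
assert labels[0] != labels[30]
```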


When is a dataset separable? One criterion is when _{1} _{−}_{1} _{1})_{−}_{1})), where the notation in

_{1}) _{2}) _{1} _{2} _{1}) ≥ _{2})

This section describes the new mathematical background constructions that are needed for the proposed solution in this paper.

Denote by $P_W$ the orthogonal projection of $\mathbb{R}^n$ onto a subspace $W \subseteq \mathbb{R}^n$.

_{1}_{2}_{n}_{ɛ}_{p}_{i}_{ɛ}

Let _{i}

The norm of the linear operator $A : \mathbb{R}^n \to \mathbb{R}^n$ is $\|A\| = \max_{\|x\|=1} \|Ax\|$.
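For the Euclidean norm, this operator norm coincides with the largest singular value of the matrix of $A$, which the following sketch checks empirically by sampling random unit vectors.

```python
import numpy as np

# max_{|x|=1} |Ax| (Euclidean norm) equals the largest singular value of A;
# we compare it against a direct maximization over many random unit vectors.
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))

sigma_max = np.linalg.svd(A, compute_uv=False)[0]

xs = rng.standard_normal((20000, 4))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # project onto the unit sphere
empirical = np.linalg.norm(xs @ A.T, axis=1).max()

# The sampled maximum approaches, and never exceeds, the true operator norm.
assert empirical <= sigma_max + 1e-9
assert empirical > 0.8 * sigma_max
```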

_{1}_{2} ∈

For example, the set of integers is

Given two vectors $v_1$ and $v_2$ in $\mathbb{R}^n$, suppose $A_1$ and $A_2$ are two linear operators in $\mathbb{R}^n$. We say that $v_1$ and $v_2$ are equivalent in relation to $A_1$ and $A_2$, denoted by $v_1 \sim v_2$, if $A_1(v_1) = A_2(v_2)$.

_{1} _{2} _{j}_{1} = {_{1}(_{2} = {_{2}(_{1} ∩ _{2} = ∅_{1} _{2}

Let $\{e_1, e_2, \ldots, e_n\}$ be a basis of $\mathbb{R}^n$. A basic subspace is a subspace of the form $H_{i_1} \oplus H_{i_2} \oplus \cdots \oplus H_{i_k}$, where $H_i$ is the one-dimensional subspace spanned by $e_i$ and $i_1, i_2, \ldots, i_k \in \{1, \ldots, n\}$.

For example, in the 3D space with the standard basis $\{e_1, e_2, e_3\}$, the plane spanned by the vectors $\{e_1, e_2\}$ is a basic subspace, while the plane spanned by the vectors $\{e_1 + e_3,\ e_2 + e_3\}$ is not.

Given a pair of vectors $v_1$ and $v_2$ in $\mathbb{R}^n$, $\mathrm{span}(v_1, v_2)$ is the linear span of $v_1$ and $v_2$. A basic subspace $W \subseteq \mathbb{R}^n$ is called dependent for $v_1$ and $v_2$ if $\dim(P_W(\mathrm{span}(v_1, v_2))) = 1$.

Given a pair of vectors _{1} and _{2} in ^{n}_{1} and _{2}, if the two conditions hold:

If _{1} and _{2} then

There exists a dependent basic subspace _{1} and _{2} where

For example, assume that $v_1 = (1, \ldots)$ and $v_2 = (2, \ldots)$ in $\mathbb{R}^4 = H_1 \oplus \cdots \oplus H_4$. Let $W = H_2 \oplus H_3$. Then $u_1 = P_W(v_1)$ and $u_2 = P_W(v_2)$ satisfy $u_1 = 2u_2$. Thus, we obtain that $W$ is dependent for $v_1$ and $v_2$. It is clear that the dependency rank of the vectors $v_1$ and $v_2$ is 2.
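A hypothetical instance of this construction (with our own coordinate values, since the example's entries are not reproduced here) can be verified numerically: two linearly independent vectors whose projections onto a basic subspace are proportional.

```python
import numpy as np

# R^4 with the standard basis; W is the basic subspace spanned by e2 and e3.
# v1 and v2 are linearly independent in R^4, yet their projections onto W
# are proportional, so W is dependent for v1 and v2.
v1 = np.array([1.0, 2.0, 4.0, 0.0])
v2 = np.array([2.0, 1.0, 2.0, 5.0])

P_W = np.diag([0.0, 1.0, 1.0, 0.0])   # orthogonal projection onto span(e2, e3)
u1, u2 = P_W @ v1, P_W @ v2

assert np.allclose(u1, 2 * u2)                          # projections dependent
assert np.linalg.matrix_rank(np.vstack([v1, v2])) == 2  # v1, v2 independent
```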

If a pair of vectors $v_1$ and $v_2$ is sparse-independent, then their basis-dependent rank is zero.

Assume that ^{n}_{ɛ}

Given a pair of vectors _{1} and _{2} in ^{n}_{ɛ2} (_{1})_{ɛ}_{1}) ∩ _{ɛ}_{2}))_{ɛ2} (_{2})_{ɛ}_{1}) ∩ _{ɛ}_{2}))]. _{1} and _{2}.

_{1} _{2}_{1} _{2}

For example, sparse-independent vectors have a basis dependent rank zero and they are

Denote by $P_W$ the orthogonal projection of $\mathbb{R}^n$ onto a subspace $W \subseteq \mathbb{R}^n$.

Let $\operatorname{supp}_{\varepsilon}(v) = \{\, i : |v_i| > \varepsilon \,\}$.

Assume that Γ is a set of vectors from ^{n}^{n}_{ζ∈Γ}

Our goal is to separate the sets Φ and Ω. The distinction between these two sets can be described in different ways. One way is the behavior of the first or second derivatives of the vectors. Another is characteristics of the sets, such as the relation between the maximum and the minimum of the vectors. These properties define a set of weak classifiers Γ.
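A few weak classifiers of the kinds mentioned above — derivative magnitudes and the max/min relation — can be sketched as scalar functionals. The synthetic “smooth” and “oscillatory” spectra and the factor-of-ten margins below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Each weak classifier maps a spectrum to a scalar feature; thresholding
# that scalar yields a {+1, -1} decision.
def d1_energy(s):
    return np.abs(np.diff(s)).sum()        # total first-derivative magnitude

def d2_energy(s):
    return np.abs(np.diff(s, n=2)).sum()   # total second-derivative magnitude

def max_min_ratio(s):
    return s.max() / (s.min() + 1e-9)      # max/min characteristic of the set

t = np.linspace(0, 1, 100)
smooth = 1.0 + 0.2 * t                     # smooth "background-like" spectrum
oscillating = 1.0 + 0.2 * np.sin(40 * t)   # oscillatory "target-like" spectrum

# Derivative-based classifiers respond far more strongly to oscillation,
# and the max/min ratio also distinguishes the two spectra.
assert d1_energy(oscillating) > 10 * d1_energy(smooth)
assert d2_energy(oscillating) > 10 * d2_energy(smooth)
assert max_min_ratio(oscillating) > max_min_ratio(smooth)
```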

Next, we introduce two families of sensors.

The set Γ ⊆ ^{n}_{i}

Given ^{n}_{i}_{i}

_{1}_{2}_{m}^{n}^{m}_{1}_{2}_{m}

_{1}_{2}_{m}_{1}_{2}_{m}_{1} = Ξ (_{1} = Ξ(_{1} = Ξ (_{1}‖ = _{1}‖ _{1}_{1}) = 1.

_{1} = Ξ(_{1} = _{1}.

The goal of this section is to generate the conditions for which the vectors provide a solution for the unmixing problem given by

_{1} = {_{2} = {_{2} _{1}) ≥ _{1} ∩ _{2})_{ɛ}_{1} = {

_{2}\_{1}) and _{2} ∩ _{1}), then |_{i∈W}_{i∈V}_{i∈W}_{i∈V}_{ɛ}_{ɛ}

From Lemma 4.1 we get Proposition 4.2.

_{1} = {_{ɛ/γ}

^{2}.

^{2}.

“Sparse-independence” and orthogonality are strong conditions. Theorem 4.1 shows that “

_{i}_{1} = {_{i}_{i}_{ɛ}

_{ɛ}_{i}_{i}_{i}^{2} _{i}_{i∈Ω} _{i}_{i}^{2} _{i}_{i}_{i}_{i}_{i}_{i}_{i}_{ɛ}

Now we show how to obtain the “

_{1}_{2}} ^{n}_{1}_{2}_{1}_{2}_{r}^{n}

_{1} ≜ [_{1}_{1}(_{1})_{2}(_{1})_{r}_{1})] _{2} ≜ [_{2}_{1}(_{2})_{2}(_{2})_{r}_{2})]

The rest of this section proves Theorem 4.2.

_{1}_{2}} ^{2} _{1}_{2}^{2} ^{1} _{2}) = _{1})] _{2}) = 0]

^{2} ^{1} with ‖_{1}_{2}) is a vector with ‖

Consider
^{2}. This rotation splits the linear operator _{1} where _{1}(_{1}_{2}) ≜ (_{1}

We can choose the basis {_{1}_{2}} of ^{2} such that _{2} = (_{1} = (0

In addition,

_{1}_{2}_{r}^{2} ^{1} _{1}_{2}} ^{2}_{ɛ}

_{1}_{2}} ^{n}_{1}_{2}_{1}_{2}_{r}^{n}_{ɛ}_{i,α}

_{1}_{2}_{r}_{i}_{i}_{i,α}_{Sα∈Λ} α = 1_{ɛ}_{i}_{i}_{1}_{n}_{i}_{i}_{1}_{2}}. Define _{α}_{i}_{α}_{α}^{2} ^{1}. Our assumption was that Λ = {_{1}_{2}_{r}_{1}_{2}_{r}_{ɛ}_{i}

_{1} and _{2} from ^{nr}_{S⊂Λ}^{n}_{S⊂Λ}^{n}_{w}_{1}_{W}_{2}_{W}_{2})(_{1})(_{ɛ}_{i}
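The plane-wise rotations used in this construction can be sketched as a block-diagonal orthogonal operator built from 2×2 rotations; orthogonality guarantees that norms and inner products of the embedded spectra are preserved, which is the property the statistical analysis of the rotated spectra relies on. The angles below are drawn at random for illustration.

```python
import numpy as np

def block_rotation(n, angles):
    """Orthogonal operator on R^n (n even) built from 2x2 rotations acting
    on the coordinate planes (e1,e2), (e3,e4), ... -- a sketch of the
    plane-wise rotation, with the rotation angles chosen by the caller."""
    R = np.zeros((n, n))
    for k, a in enumerate(angles):
        c, s = np.cos(a), np.sin(a)
        R[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]
    return R

rng = np.random.default_rng(6)
n = 8
R = block_rotation(n, rng.uniform(0, 2 * np.pi, size=n // 2))

x, y = rng.standard_normal(n), rng.standard_normal(n)
assert np.allclose(R.T @ R, np.eye(n))                       # R is orthogonal
assert np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))  # norm preserved
assert np.isclose((R @ x) @ (R @ y), x @ y)                  # inner product preserved
```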

We denote the first derivative of a spectrum of a pixel

For independent spectra and for a special type of dependent spectra, which will be defined below, we can use a faster and less computationally expensive method than [

We assume now that the conditions in Section 2 hold. We are given the target’s spectrum

Generally speaking, spectra of different materials contain different kinds of sections: some sections are mutually correlated, some are smooth and some exhibit oscillatory behavior. Large first and second derivatives are used as characteristic features for spectral classification.

Some sections in the spectrum can be autocorrelated. Let $(D_1, D_2)$ be the pair of first- and second-derivative operators. Our assumptions imply that

Let

The unmixing algorithm has the following steps:

_{1}_{2}_{r}^{n}^{n}^{2}^{nr}_{S⊂Λ}^{n}^{n}^{2}^{rn}

The vectors _{T}_{B}_{P}_{P}_{T}_{B}_{B}_{T}_{B}_{ɛ}_{ɛ}

It follows that

_{k}_{i}_{k}_{k}

In _{ɛ}

The ROTU algorithm, which estimates

The “unmixing histogram” from Definition 5.1 is displayed in

From

The “unmixing histograms” from Definition 5.1 and the comparison between

In this section, we present another unmixing algorithm that is based on classification. This method is fast and does not require any conditions on the relation between the background and the target spectra. The spectra of the target and the background can be correlated, and their derivatives can be correlated too. We outline the following consecutive steps for target identification via classification by HSI learning:

Learning phase: Finding typical background features by the application of PCA to the spectra of randomly selected multi-pixels from the scene where a multi-pixel is one pixel with all its wavelengths;

Separation of the target’s spectra from the background via a linear operator, which is obtained from the “learning” phase;

Construction of a set of sensors that are highly sensitive to targets and insensitive to the background;

Embedding the spectra from the scene into a space generated by “good” sensor vectors and detecting the points that contain the target, which have a large norm value.
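The steps above can be sketched end-to-end on synthetic data. The low-dimensional background model, the noise level and the detection margin below are assumptions made for illustration, not the paper's actual learning procedure.

```python
import numpy as np

# Sketch: model the background with PCA, project spectra onto the orthogonal
# complement of the leading background components, and flag pixels whose
# residual norm stays large.
rng = np.random.default_rng(7)
bands, n_bg = 40, 300

# Background pixels lie near a low-dimensional subspace; the target does not.
basis = rng.standard_normal((bands, 3))
background = rng.standard_normal((n_bg, 3)) @ basis.T \
    + 0.01 * rng.standard_normal((n_bg, bands))
target = rng.standard_normal(bands)

# "Learning" phase: PCA of the randomly selected background multi-pixels.
mean = background.mean(axis=0)
_, _, Vt = np.linalg.svd(background - mean, full_matrices=False)
B = Vt[:3]                                    # leading background components

def residual_norm(s):
    r = s - mean
    return np.linalg.norm(r - B.T @ (B @ r))  # remove the background part

bg_scores = np.array([residual_norm(s) for s in background])
# A pixel containing the target keeps a large norm after background removal.
assert residual_norm(target) > 10 * bg_scores.max()
```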

Let _{1}_{s}_{1}_{p}

Let

Given a set of weak classifiers Γ: Perform

Construct the “strong classifier”: it is a linear operator $\Xi = [\gamma_1, \gamma_2, \ldots, \gamma_s]$ whose rows are the selected weak classifiers $\gamma_i$.

Construct _{1} = Ξ (

Detailed description of the “weak classifier” collection:

Choose an integer

First iteration: Select a subset Γ̂ of

Let _{1} =

Assume ^{th}_{i}_{i}

Let _{s}

Assume

_{1}) is close to 1.

_{1}‖ is the portion of the target in the current pixel.
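The selection idea behind the sensor (weak-classifier) collection can be illustrated as follows. The single-band indicator sensors and the z-score selection rule are hypothetical simplifications of the procedure described above, chosen only to make the sketch self-contained.

```python
import numpy as np

# From a pool of "weak classifiers" (here, single-band functionals), keep
# those that respond strongly to the target spectrum and weakly on the
# background samples, and stack them into a linear operator Xi.
rng = np.random.default_rng(8)
bands = 50
background = rng.random((200, bands))
target = rng.random(bands)
target[10:15] += 5.0   # the target is distinctive in bands 10..14 (synthetic)

def sensitivity(band):
    """Target response of the band sensor, relative to the background spread."""
    return (target[band] - background[:, band].mean()) / background[:, band].std()

scores = np.array([sensitivity(b) for b in range(bands)])
chosen = np.argsort(scores)[-5:]   # keep the 5 most target-sensitive sensors
Xi = np.eye(bands)[chosen]         # stack the chosen sensors into Xi

# The selected sensors are exactly the bands where the target stands out, and
# Xi maps the target to a much larger vector than a typical background pixel.
assert set(chosen) == set(range(10, 15))
assert np.linalg.norm(Xi @ target) > np.linalg.norm(Xi @ background[0])
```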

We assume that we have the same conditions as in Section 2. We are given the target’s spectrum

The CLUN algorithm contains the following steps:

Apply the sensor building algorithm (SBA - Algorithm 1).

Construct an operator that is the projection separator Ξ(

Apply the operator Ξ(

Let {_{1}_{ω}_{i}_{i}_{1}_{ω}^{+}:

The set {_{1}_{ω}

The set {_{1}_{ω}

From Definition 3.5, a dataset is separable if $J(P_1) < J(P_2)$, where $P_1$ is the partition whose number of classes is 1 and $P_2$ is the best partition into two classes. If $J(P_1) \ge J(P_2)$, then the dataset is inseparable and Fisher’s separation is incorrect.

In the first case, (_{1}_{ω}^{+} according to Definition 3.4 and _{i}

Assume _{i}_{i}_{i}_{1}) is close to 1, then _{1}‖ is the portion that contains

We present the results from the application of the CLUN algorithm to hyperspectral images where the target occupies more than a pixel in one case and a subpixel in the other. In

To separate between Ω and Φ, we utilize the following types of sets Γ:

Assume that Γ = {_{i,j}_{i,j}_{i}_{j}

We present the results from the application of the CLUN algorithm to three different types of scenes (

The paper presents two linear unmixing algorithms, ROTU and CLUN, that recognize targets in hyperspectral images, where each given target’s spectral signature can be located in a sub- or above pixel. The CLUN algorithm is based on automatic extraction of features from the target’s spectrum. These features separate the target from the background. The ROTU algorithm is based on embedding the spectra space into a special space by a random orthogonal transformation and on examining the statistical properties of the embedding result. In addition, we also compute the portion of the target’s material in the detected pixel (fractional abundance). The ROTU algorithm works well and fast if the spectrum of the sought-after target has sparse derivatives. It also works well when the fractional abundance of the target is 10, 30, 70 or 90%, and it detects sub- and above-pixel targets with minimal false alarms. The CLUN algorithm performs better but is slower, since it needs a training phase and background modeling.

Exemplary points that were taken from the area pointed by the arrow (full image is given in 2nd figure in Section 2.1).

Zoom on the exemplary point in

Spectra of the background, the mixed pixel and the target from

The dataset “desert” is a hyperspectral image of a desert area taken from an airplane flying 10,000 feet above sea level. The resolution is 1.3 m/pixel, 286

The dataset “city” is a hyperspectral image of a city taken from an airplane flying 10,000 feet above sea level. The resolution is 1.5 m/pixel, 294

The dataset “field” is a hyperspectral image of a field taken from an airplane flying 9,500 feet above sea level. The resolution is 1.2 m/pixel, 286

The “unmixing histogram” for

The solid line corresponds to the real spectrum of the background and the dotted line corresponds to the estimated spectrum of the background

Random spectral vector signatures.

The “unmixing histogram” from Definition 5.1 for different

The spectra

Left: The “unmixing histogram” from Definition 5.1. Right: Comparison between

Left: The “unmixing histogram” from Definition 5.1. Right: Comparison between

Left: The “unmixing histogram” from Definition 5.1. Right: Comparison between

Left: The “unmixing histogram” from Definition 5.1. Right: Comparison between

Left: The scene from

Left: The scene from

Left: The source image. Right: The result from the application of the projection separators. The white points contain targets with

Left: The source image. Right: The result from the application of the projection separators. The white points contain targets with

Left: The red line is the target’s spectrum and the black line measures the background (it was taken from the scene in

Representation of 20 sensors (“weak classifiers”) with small supports. They were obtained as shown in Section 6.1. The x-axis represents the wavelengths index and the y-axis represents the values of the sensor-function.

Left: The target’s spectrum. Right: The principal components of the background’s spectra. The spectra was taken from the “city” image (

Left: This plot represents 200

Left: The source image. Right: The white points contain a target with