

Cameras
• the simplest and oldest of sensors used for remote sensing of the Earth's surface.
• framing systems that acquire a near-instantaneous "snapshot" of an area (A) of the surface.
• passive optical sensors that use a lens (B) to form an image at the focal plane (C), where the image is sharply defined.
• Panchromatic film is sensitive to the UV and visible portions of the spectrum. It produces black-and-white images and is the most common type of film used for aerial photography.
• UV photography also uses panchromatic film, but with a filter that blocks visible energy from reaching the film. It is not widely used, in part because atmospheric scattering and absorption are severe in the UV portion of the spectrum.
• Black-and-white infrared photography uses film sensitive to the 0.3 to 0.9 μm wavelength range and is useful for detecting differences in vegetation cover, due to its sensitivity to IR reflectance.

Cameras and Aerial Photography
• Cameras can be used on a variety of platforms including ground-based stages, aircraft, and spacecraft. Very detailed photographs taken from aircraft are useful for many applications where identification of detail or small targets is required.
• The ground coverage of a photo depends on several factors, including the focal length of the lens, the platform altitude, and the format & size of the film.
• Focal length: the focal length controls the angular field of view of the lens (similar to the concept of IFOV) and determines the area "seen" by the camera. Typical focal lengths are 90 mm, 210 mm, and, most commonly, 152 mm. The longer the focal length, the smaller the area covered on the ground, but with greater detail (i.e. larger scale).
• Altitude : At high altitudes, a camera will "see" a larger area on the ground than at lower altitudes, but with reduced detail (i.e. smaller scale). Aerial photos can provide fine detail down to spatial resolutions of less than 50 cm.
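
To make these relationships concrete, the following sketch (with illustrative numbers: a 152 mm lens, the common 23 cm × 23 cm aerial film format, and flat terrain assumed) computes photo scale and ground coverage from focal length and flying height:

```python
# Sketch: photo scale and ground coverage from focal length and flying height.
# Assumes a vertical photograph over flat terrain; numbers are illustrative.

def photo_scale_denominator(focal_length_m: float, flying_height_m: float) -> float:
    """Scale is f / H' (H' = height above ground), returned as the denominator D of a 1:D scale."""
    return flying_height_m / focal_length_m

def ground_coverage_m(film_side_m: float, scale_denominator: float) -> float:
    """Ground distance covered by one side of the film."""
    return film_side_m * scale_denominator

# A common mapping-camera setup: 152 mm lens, 23 cm x 23 cm film, flown at 3000 m.
denom = photo_scale_denominator(0.152, 3000.0)   # ~19737, i.e. roughly 1:19700
side = ground_coverage_m(0.23, denom)            # ~4540 m on the ground
print(f"Scale 1:{denom:.0f}, coverage {side / 1000:.2f} km per side")
```

A longer lens or a lower flying height shrinks the denominator, giving a larger-scale, more detailed photo over a smaller ground area.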

Cameras and Aerial Photography
• In order to identify the location of the Principal Point on an airphoto, Fiducial Marks are photographed each time an image is recorded. The location of the Principal Point can then be determined by the intersection of straight lines between opposite fiducial marks.
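
As an illustrative sketch with hypothetical photo coordinates (in millimetres), the principal point can be computed as the intersection of the two diagonals joining opposite fiducial marks:

```python
# Sketch: principal point as the intersection of the straight lines between
# opposite fiducial marks. Coordinates are hypothetical photo coordinates (mm).

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (2-D, assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

# Corner fiducials (mm); opposite corners are joined by straight lines.
pp = line_intersection((-110, -110), (110, 110), (-110, 110), (110, -110))
print(pp)  # (0.0, 0.0) -> the principal point
```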

Cameras and Aerial Photography
Oblique aerial photographs • taken with the camera pointed to the side of the aircraft.
• useful for covering very large areas in a single image and for depicting terrain relief and scale.
• distortions in scale from the foreground to the background preclude easy measurements of distance, area, and elevation.
Oblique vs. vertical photographs: aerial photographs are classified as oblique or vertical, depending on the orientation of the camera relative to the ground during acquisition.

Vertical aerial photographs
• the most common type of aerial photograph used for remote sensing and mapping purposes.
• taken with cameras specifically built to capture a rapid sequence of photographs while limiting geometric distortion. These cameras are often linked with navigation systems onboard the aircraft platform, allowing accurate geographic coordinates to be assigned to each photograph as it is taken.

Cameras and Aerial Photography
• When obtaining vertical aerial photographs, the aircraft normally flies in a series of lines, each called a flight line.
• Photos are taken in rapid succession looking straight down at the ground, often with a 50-60 percent overlap (A) between successive photos. The overlap ensures total coverage along a flight line and also facilitates stereoscopic viewing. Successive photo pairs display the overlap region from different perspectives and can be viewed through a device called a stereoscope to see a three-dimensional view of the area, called a stereo model. Many applications of aerial photography use stereoscopic coverage and stereo viewing.
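
A minimal flight-planning sketch, assuming the ~4.5 km ground coverage from the scale example earlier, a 60 percent forward overlap, and a hypothetical 20 km flight line, shows how exposure spacing and photo count follow from the overlap requirement:

```python
# Sketch: spacing of successive exposures along a flight line.
# Assumes the ~4.54 km ground coverage from the scale example above
# and a 60% forward overlap; the 20 km line length is hypothetical.
import math

coverage_m = 4540.0      # ground side length of one photo
overlap = 0.60           # fraction of overlap between successive photos

air_base_m = coverage_m * (1 - overlap)      # ground distance between exposures
line_length_m = 20_000.0                     # hypothetical flight-line length
n_photos = math.ceil(line_length_m / air_base_m) + 1
print(f"Expose every {air_base_m:.0f} m -> {n_photos} photos for a 20 km line")
```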

Cameras and Aerial Photography
Height Determination from Airphotos: two methods are used to determine the height of objects, the Single Photo Method and the Stereopair Parallax Method.
Single Photo Method: the simplest to use, but generally applicable only to vertical features where both the top and bottom of the feature can be observed. The method uses the principle that the radial displacement of a feature on the photo is proportional to the feature's height, and determines height by the formula h = (d × H) / r, where d is the radial displacement between the top and base of the feature, r is the radial distance from the principal point to the top of the feature, and H is the flying height above the ground.
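
A worked sketch of this formula, using hypothetical photo measurements:

```python
# Sketch of the single-photo (relief displacement) method: h = d * H / r.
# All measurements are hypothetical; d and r are read directly off the photo.

flying_height_m = 3000.0   # H: aircraft height above the ground
d_mm = 2.5                 # d: radial displacement of the feature top vs. base
r_mm = 75.0                # r: radial distance from principal point to the top

height_m = d_mm * flying_height_m / r_mm
print(f"Feature height ~ {height_m:.0f} m")   # 2.5 * 3000 / 75 = 100 m
```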

Parallax Method of Height Determination
This method requires two overlapping airphotos on the same flight line, the height of the aircraft above the ground, and the average photo base length. The photo base length is the distance, measured on the photo, from the principal point (geometric centre) of one airphoto to the transferred principal point of the other.
The method uses the principle that the apparent shift (parallax) of a feature between the two photos varies with the feature's height; because it takes measurements from two airphotos, it provides greater accuracy. The height is given by h = (H × dP) / (P + dP), where dP is the differential parallax between the top and base of the feature, P is the average photo base length, and H is the flying height above the ground.
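
A worked sketch of the parallax formula, again with hypothetical measurements:

```python
# Sketch of the parallax method: h = H * dP / (P + dP).
# P is the average photo base length and dP the differential parallax
# (parallax of the feature top minus that of its base). Values hypothetical.

flying_height_m = 3000.0   # H: aircraft height above the ground
photo_base_mm = 90.0       # P: average photo base length
dP_mm = 1.8                # dP: differential parallax measured on the stereopair

height_m = flying_height_m * dP_mm / (photo_base_mm + dP_mm)
print(f"Feature height ~ {height_m:.0f} m")   # 3000 * 1.8 / 91.8 ~ 59 m
```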

Cameras and Aerial Photography
• Aerial photographs are most useful when fine spatial detail is more critical than spectral information, as their spectral resolution is generally coarse when compared to data captured with electronic sensing devices.
• Photogrammetry (the science of making measurements from photographs): the geometry of vertical photographs is well understood, and it is possible to make very accurate measurements from them for a variety of different applications (geology, forestry, mapping, etc.). Photos are most often interpreted manually by a human analyst (often viewed stereoscopically). They can also be scanned to create a digital image and then analyzed in a digital computer environment.
• Multiband photography uses multi-lens systems with different film-filter combinations to acquire photos simultaneously in a number of different spectral ranges. Advantage: the ability to record reflected energy separately in discrete wavelength ranges, providing potentially better separation and identification of various features. However, simultaneous analysis of these multiple photographs can be problematic.

Digital cameras
• record EM radiation electronically and differ significantly from their counterparts that use film. Instead of film, digital cameras use a gridded array of silicon-coated CCDs (charge-coupled devices) that individually respond to electromagnetic radiation.
• Energy reaching the surface of the CCDs causes the generation of an electronic charge which is proportional in magnitude to the "brightness" of the ground area. A digital number for each spectral band is assigned to each pixel based on the magnitude of the electronic charge, a quantization step sketched after this list.
• Digital cameras also provide quicker turn-around for acquisition and retrieval of data and allow greater control of the spectral resolution.
• Although parameters vary, digital imaging systems are capable of collecting data with a spatial resolution of 0.3 m and a spectral resolution of 0.012 μm to 0.3 μm. The size of the pixel arrays varies between systems, but typically ranges from 512 × 512 to 2048 × 2048.
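
The quantization step mentioned above can be sketched as a simple linear mapping; the signal range and 8-bit depth below are illustrative assumptions rather than any particular camera's calibration:

```python
# Sketch: quantizing a CCD's electronic charge into a digital number (DN).
# The sensor's response range and the 8-bit depth are illustrative assumptions.

def to_digital_number(signal: float, s_min: float, s_max: float, bits: int = 8) -> int:
    """Linearly map a signal in [s_min, s_max] to an integer DN in [0, 2**bits - 1]."""
    signal = min(max(signal, s_min), s_max)   # clip to the sensor's valid range
    levels = 2 ** bits - 1
    return round((signal - s_min) / (s_max - s_min) * levels)

print(to_digital_number(0.0, 0.0, 1.0))    # 0   (darkest)
print(to_digital_number(0.5, 0.0, 1.0))    # 128 (mid-grey)
print(to_digital_number(1.0, 0.0, 1.0))    # 255 (brightest)
```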

Cameras and Aerial Photography
Did You Know?
The U.S. Space Shuttles have used cameras mounted in the shuttle's cargo bay, called Large Format Cameras (LFCs). LFCs have long focal lengths (305 mm) and take high-quality photographs covering several hundred kilometres in both dimensions. Photos from these passive sensors need to be taken when the Earth's surface is illuminated by the sun and are subject to cloud cover and other attenuation from the atmosphere.
The shuttle has also been used several times to image many regions of the Earth using a special active microwave sensor called a RADAR. The RADAR sensor can collect detailed imagery during the night or day, as it provides its own energy source, and is able to penetrate and "see" through cloud cover due to the long wavelength of the electromagnetic radiation.

Did You Know?
Taking photographs in the UV portion of the spectrum can be very useful where other types of photography are not.
An interesting example in wildlife research and management has used UV photography for detecting and counting harp seals on snow and ice. Adult harp seals have dark coats while their young have white coats. In normal panchromatic imagery, the dark coats of the adult seals are readily visible against the snow and ice background but the white coats of the young seals are not. However, the coats of both the adult and infant seals are strong absorbers of UV energy. Thus, both adult and young appear very dark in a UV image and can be easily detected. This allows simple and reliable monitoring of seal population changes over very large areas.

Multispectral Scanning
• Many electronic remote sensors acquire data using scanning systems, which employ a sensor with a narrow IFOV that sweeps over the terrain to build up and produce a two-dimensional image of the surface.
• Scanning systems can be used on both aircraft and satellite platforms.
• A scanning system used to collect data over a variety of different wavelength ranges is called a multispectral scanner (MSS), and is the most commonly used scanning system.
• There are two main modes or methods of scanning employed to acquire multispectral image data: across-track scanning and along-track scanning.

Across-track scanners
• scan the Earth in a series of lines. The lines are oriented perpendicular to the direction of motion of the sensor platform (i.e. across the swath). Each line is scanned from one side of the sensor to the other, using a rotating mirror (A). As the platform moves forward over the Earth, successive scans build up a two-dimensional image of the Earth's surface.
• The incoming reflected or emitted radiation is separated into several spectral components that are detected independently.
• The UV, visible, near-infrared, and thermal radiation are dispersed into their constituent wavelengths. A bank of internal detectors (B), each sensitive to a specific range of wavelengths, detects and measures the energy for each spectral band; the resulting electrical signals are then converted to digital data and recorded for subsequent computer processing.

Multispectral Scanning
• The IFOV (C) of the sensor and the altitude of the platform determine the ground resolution cell viewed (D), and thus the spatial resolution.
• The angular field of view (E) is the sweep of the mirror, measured in degrees, used to record a scan line, and determines the width of the imaged swath (F).
• Airborne scanners typically sweep large angles (between 90° and 120°), while satellites, because of their higher altitude, need only sweep fairly small angles (10° to 20°) to cover a broad region.
• Because the distance from the sensor to the target increases towards the edges of the swath, the ground resolution cells also become larger and introduce geometric distortions to the images.
• Also, the length of time the IFOV "sees" a ground resolution cell as the rotating mirror scans (called the dwell time), is generally quite short and influences the design of the spatial, spectral, and radiometric resolution of the sensor.
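
These geometric relationships can be sketched numerically. In the sketch below, the altitude, IFOV, field of view, and ground speed are illustrative satellite-like values (not any particular sensor's), and the formulas are the standard small-angle approximations:

```python
# Sketch: across-track scanner geometry. Altitude, IFOV, field of view and
# ground speed are illustrative; formulas are standard small-angle relations.
import math

H = 705_000.0              # platform altitude (m), satellite-like
ifov = 0.1e-3              # IFOV (radians) -> 70.5 m cell at nadir
fov_deg = 15.0             # total angular field of view (degrees)
ground_speed = 6800.0      # ground-track speed (m/s), satellite-like

cell_nadir = H * ifov                                  # D: ground cell at nadir
swath = 2 * H * math.tan(math.radians(fov_deg / 2))    # F: swath width
theta = math.radians(fov_deg / 2)                      # scan angle at swath edge
cell_edge = H * ifov / math.cos(theta) ** 2            # cell grows toward edges
# Contiguous coverage requires one scan line per cell of forward motion.
line_period = cell_nadir / ground_speed
cells_per_line = swath / cell_nadir
dwell = line_period / cells_per_line                   # time spent on each cell

print(f"nadir cell {cell_nadir:.1f} m, edge cell {cell_edge:.1f} m")
print(f"swath {swath / 1000:.0f} km, dwell ~{dwell * 1e6:.1f} microseconds")
```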

Along-track scanners
• use the forward motion of the platform to record successive scan lines and build up a 2-D image, perpendicular to the flight direction.
• However, instead of a scanning mirror, they use a linear array of detectors (A) located at the focal plane of the image (B) formed by lens systems (C), which are "pushed" along in the flight track direction (i.e. along track).
• also referred to as pushbroom scanners, as the motion of the detector array is analogous to the bristles of a broom being pushed along a floor.
• Each individual detector measures the energy for a single ground resolution cell (D) and thus the size and IFOV of the detectors determines the spatial resolution of the system.
• A separate linear array is required to measure each spectral band or channel. For each scan line, the energy detected by each detector of each linear array is sampled electronically and digitally recorded.

Multispectral Scanning
Advantages of along-track scanners with linear arrays over across-track mirror scanners.
• The array of detectors combined with the pushbroom motion allows each detector to "see" and measure the energy from each ground resolution cell for a longer period of time (dwell time). This allows more energy to be detected and improves the radiometric resolution (compare the dwell times sketched after this list).
• The increased dwell time also facilitates smaller IFOVs and narrower bandwidths for each detector. Thus, finer spatial and spectral resolution can be achieved without impacting radiometric resolution.
• Because detectors are usually solid-state microelectronic devices, they are generally smaller, lighter, require less power, and are more reliable and last longer because they have no moving parts.
• On the other hand, cross-calibrating thousands of detectors to achieve uniform sensitivity across the array is necessary and complicated.
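
The dwell-time advantage noted above can be quantified with a rough comparison, reusing the illustrative figures from the across-track sketch; the ground speed is again an assumption:

```python
# Sketch: why pushbroom dwell times are much longer. Reuses the illustrative
# figures from the across-track sketch; ground speed is an assumption.

cell_m = 70.5              # ground resolution cell size (m)
ground_speed = 6800.0      # platform ground-track speed (m/s), satellite-like
cells_per_line = 2633      # cells across the swath (from the sketch above)

# Pushbroom: every detector stares at its own cell for a whole line period.
pushbroom_dwell = cell_m / ground_speed              # ~10 ms

# Whiskbroom: one detector must visit every cell in the line in that time.
whiskbroom_dwell = pushbroom_dwell / cells_per_line  # ~4 microseconds

print(f"pushbroom {pushbroom_dwell * 1e3:.1f} ms vs "
      f"whiskbroom {whiskbroom_dwell * 1e6:.1f} microseconds")
```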

Multispectral Scanning
Advantages of the scanning system over photographic systems.
• The spectral range of photographic systems is restricted to the visible and near-infrared regions, while MSS systems can extend this range into the thermal infrared. They are also capable of much higher spectral resolution than photographic systems.
• Multi-band or multispectral photographic systems use separate lens systems to acquire each spectral band. This may cause problems in ensuring that the different bands are comparable both spatially and radiometrically, and with registration of the multiple images. MSS systems acquire all spectral bands simultaneously through the same optical system to alleviate these problems.
• Photographic systems record the energy detected by means of a photochemical process which is difficult to measure and to make consistent. Because MSS data are recorded electronically, it is easier to determine the specific amount of energy measured, and they can record over a greater range of values in a digital format.
• Photographic systems require a continuous supply of film and processing on the ground after the photos have been taken. The digital recording in MSS systems facilitates transmission of data to receiving stations on the ground and immediate processing of data in a computer environment.

Thermal Imaging
Remote sensing of thermal infrared (3 μm to 15 μm) energy is different from the sensing of reflected energy. Thermal sensors use photo detectors sensitive to the direct contact of photons on their surface to detect emitted thermal radiation. The detectors are cooled to temperatures close to absolute zero in order to limit their own thermal emissions.
Thermal imagers are typically across-track scanners that detect emitted radiation in the thermal portion of the spectrum. Thermal sensors employ one or more internal temperature references for comparison with the detected radiation, so they can be related to absolute radiant temperature. The data are generally recorded on film and/or magnetic tape and the temperature resolution of current sensors can reach 0.1 °C.
• For analysis, an image of relative radiant temperatures (a thermogram) is depicted in grey levels, with warmer temperatures shown in light tones and cooler temperatures in dark tones. Imagery that portrays relative temperature differences in their relative spatial locations is sufficient for most applications.
• Because of the relatively long wavelength of thermal radiation (compared to visible radiation), atmospheric scattering is minimal. However, absorption by atmospheric gases normally restricts thermal sensing to two specific regions: 3 to 5 μm and 8 to 14 μm.
• Because energy decreases as the wavelength increases, thermal sensors generally have large IFOVs to ensure that enough energy reaches the detector in order to make a reliable measurement. Therefore the spatial resolution of thermal sensors is usually fairly coarse, relative to the spatial resolution possible in the visible and reflected infrared.
• Thermal imagery can be acquired during the day or night (because the radiation is emitted rather than reflected).
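
As a closing sketch, the measured radiance in a thermal band can be converted to an absolute brightness temperature with the inverse Planck relation; K1 and K2 are band-specific calibration constants, and the values below are merely illustrative (in the style of Landsat 5 TM band 6), not a definitive calibration:

```python
# Sketch: radiance-to-temperature conversion for a thermal band using the
# inverse Planck relation T = K2 / ln(K1 / L + 1). K1 and K2 are band-specific
# calibration constants; the values below are illustrative only.
import math

K1 = 607.76    # W / (m^2 sr um), illustrative
K2 = 1260.56   # K, illustrative

def brightness_temperature_k(radiance: float) -> float:
    """Convert spectral radiance to at-sensor brightness temperature (kelvin)."""
    return K2 / math.log(K1 / radiance + 1.0)

L = 10.0  # hypothetical measured radiance, W / (m^2 sr um)
t_k = brightness_temperature_k(L)
print(f"{t_k:.1f} K = {t_k - 273.15:.1f} deg C")
```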