How does remote sensing work in space?

Remote sensing

Disaster monitoring using satellites

 

Natural disasters such as floods, forest fires or earthquakes pose a major threat to life on earth. Fast, coordinated relief operations, made possible by information from space, are essential for helping the many people affected in an emergency. But how does disaster monitoring with satellite images work, and what is remote sensing?

Optical remote sensing

In principle, optical remote sensing works in a similar way to human vision, except that satellites in space cover a much larger field of view and can also record information that is invisible to the human eye. The electromagnetic radiation that the sun sends to the earth undergoes interaction processes on contact with the atmosphere and the earth's surface. The energy can be absorbed by the land surface (absorption), reflected directionally or diffusely (reflection), or pass through the medium (transmission). Since most land surfaces are optically dense media, transmission can be neglected. The ratio of reflected to absorbed electromagnetic radiation therefore carries information about the material composition of the land cover or the atmosphere: each surface absorbs and reflects a material-specific portion of the incident spectrum. The human eye, for example, recognizes the color of a surface because a different part of the visible light is reflected in each case. Remote sensing satellites are not limited to recording reflections in the visible wavelength range (approx. 380 nm to 780 nm) but also collect information in the infrared (mostly up to approx. 2.5 µm).
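This energy balance can be stated compactly (a standard radiometric identity, not specific to any particular sensor): for each wavelength λ, the reflected, absorbed and transmitted fractions of the incident radiation sum to one,

ρ(λ) + α(λ) + τ(λ) = 1

and for optically dense surfaces with τ(λ) ≈ 0, the reflected fraction directly mirrors the absorption: ρ(λ) ≈ 1 − α(λ).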

Sentinel - knowledge from space

Sentinel-2 © ESA

"Sentinels" is the name of ESA's new European fleet of environmental observation satellites. The Sentinel-1 and Sentinel-2 missions are already delivering large-scale data (https://sentinel.esa.int/web/sentinel/sentinel-data-access). While Sentinel-1 is a radar mission, Sentinel-2 is configured as a classic remote sensing satellite as described in the last paragraph. The mission consists of the identical satellites Sentinel-2A and -2B, which were launched separately and brought into two offset polar orbits of the earth. In combination, the two satellites record the global state of the earth's surface every five days.

What does Sentinel-2 see?

The so-called multispectral sensor of the satellite records in the wavelength range between 0.443 µm and 2.19 µm and covers the visible (VIS), near-infrared (NIR) and short-wave infrared (SWIR) ranges.

Spectral resolution of Sentinel-2.

The bundled radiation arriving at the sensor is split up using prisms or diffraction gratings and fed to different detectors. This enables information in different wavelength ranges to be recorded separately from one another. Sentinel-2 detects the reflections from the earth's surface in 13 so-called channels or bands. In addition to the colors (blue, green and red) that the human eye can also perceive, Sentinel-2 "sees" ten further spectral ranges, most of them in the infrared.
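For orientation, here is a small lookup sketch of the 13 Sentinel-2 MSI bands with their approximate central wavelengths and pixel sizes (values rounded for illustration; the exact figures differ slightly between Sentinel-2A and -2B and are documented by ESA):

```python
# Approximate central wavelength (nm) and ground pixel size (m)
# of the 13 Sentinel-2 MSI bands; values rounded for illustration.
SENTINEL2_BANDS = {
    "B1":  (443, 60),   # coastal aerosol
    "B2":  (490, 10),   # blue
    "B3":  (560, 10),   # green
    "B4":  (665, 10),   # red
    "B5":  (705, 20),   # vegetation red edge
    "B6":  (740, 20),   # vegetation red edge
    "B7":  (783, 20),   # vegetation red edge
    "B8":  (842, 10),   # near infrared (NIR)
    "B8A": (865, 20),   # narrow NIR
    "B9":  (945, 60),   # water vapour
    "B10": (1375, 60),  # cirrus
    "B11": (1610, 20),  # short-wave infrared (SWIR)
    "B12": (2190, 20),  # short-wave infrared (SWIR)
}

# Example: which bands fall in the visible range (approx. 380-780 nm)?
visible = [b for b, (wl, _) in SENTINEL2_BANDS.items() if 380 <= wl <= 780]
print(visible)  # ['B1', 'B2', 'B3', 'B4', 'B5', 'B6']
```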

How is the radiation stored?

The surface reflections are stored pixel by pixel and per channel as so-called digital numbers (DN). These positive integer values represent the brightness of a pixel in the respective spectral range: by means of the DN, each pixel is assigned a value proportional to the intensity of the electromagnetic radiation received by the sensor. In an 8-bit system the DN can take values between 0 and 255, with 0 representing the lowest and 255 the highest intensity; Sentinel-2 itself records with a radiometric resolution of 12 bits, i.e. DN values from 0 to 4095.
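A minimal sketch of how such DN values can be rescaled for display, assuming a 12-bit array (the toy array here is made up for illustration):

```python
import numpy as np

# Hypothetical 12-bit DN array (values 0..4095), e.g. one Sentinel-2 channel.
dn = np.array([[0, 1024, 2048],
               [3072, 4095, 512]], dtype=np.uint16)

# Rescale to 8-bit grayscale (0..255) for on-screen display.
gray = np.round(dn / 4095 * 255).astype(np.uint8)
print(gray)
```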

Different grayscale resolutions depending on the bit depth of the underlying image storage.

A grayscale gradient is usually chosen for visualization. The more gray values the scale contains, the more image information can be displayed.
In order to generate an image of how our eyes would see the earth from space, several channels have to be superimposed and displayed at the same time. The additive color model is the most common for this, although other color models are also used in remote sensing. By combining the reflections in the three visible channels (red, green and blue), a so-called RGB true-color image is created.

RGB true-color display (data basis: Sentinel-2, ESA).
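A sketch of how such a true-color composite could be assembled with numpy and rasterio (the band file names are hypothetical; real Sentinel-2 products name their files differently):

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def read_band(path):
    """Read the first band of a GeoTIFF as a float32 array."""
    with rasterio.open(path) as src:
        return src.read(1).astype(np.float32)

# Hypothetical single-band files for the red, green and blue channels.
red = read_band("B04_red.tif")
green = read_band("B03_green.tif")
blue = read_band("B02_blue.tif")

# Stack to (rows, cols, 3) and rescale 12-bit DNs to the 0..1 display range.
rgb = np.clip(np.dstack([red, green, blue]) / 4095.0, 0.0, 1.0)

plt.imshow(rgb)
plt.title("RGB true-color composite")
plt.show()
```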

Most remote sensing sensors record on a pixel basis, similar to conventional digital cameras. Depending on the band, the pixel size of Sentinel-2 is 10 m, 20 m or 60 m. In the images shown here, one pixel corresponds to 20 x 20 m on the earth's surface, i.e. 400 m², so 25 pixels cover one hectare.

Disaster monitoring - into space for the Earth

The forest fire of July 2016 near Nurri in Sardinia (Italy) altered such a large area that the change is visible from space. The following figure shows the RGB combination of the region around Nurri both before (July 18, 2016) and after (July 28, 2016) the fire.

RGB true-color images before (left) and after (right) the forest fire near Nurri in Sardinia in July 2016. The extent of the burned areas can already be seen with the naked eye (data basis: Sentinel-2, ESA).

Image evaluation

Simple indices help to delineate the burned forest area precisely. Using the so-called Normalized Difference Vegetation Index (NDVI), information about the vegetation can be derived by combining the surface reflections in the red and near-infrared channels. The NDVI exploits the fact that the reflection spectrum of vital vegetation differs significantly from that of dead vegetation and all other land surfaces, especially in the red and near-infrared wavelength ranges.

Reflection spectra of vital vegetation (green) and dead vegetation (brown).

While vital vegetation absorbs the red part of sunlight almost completely in order to carry out photosynthesis, it reflects a very high proportion of the incident solar energy in the near infrared. Non-vital vegetation and all other surfaces show a different reflection behavior. This special characteristic of healthy vegetation can be captured mathematically with the help of the NDVI:

NDVI = (NIR - RED) / (NIR + RED)

"NIR" or "RED" stands for the reflectance of a surface in the near infrared or in the red wavelength range (NIR: approx. 800 nm to 900 nm; RED: approx. 600 nm to 700 nm, depending on the satellite / sensor) . The reflectance describes the ratio of reflected to incident radiation and is usually given in percent. The NDVI can assume values ​​between +1 and -1, whereby the criterion which distinguishes vegetation from other land surfaces is approx. 0.4. While values ​​greater than 0.38 are assigned to plants, surfaces such as cities, open soils, bodies of water or, in the present case, burned areas are represented by values ​​less than 0.4.

Mean NDVI values of various land cover types

NDVI          Land cover type
< 0.1         Water, soil, rock, sand or snow
0.2 to 0.3    Vegetation of low vitality
0.3 to 0.6    Medium to dense vegetation cover
> 0.6         Very dense vegetation of high vitality

Even with a simple direct comparison of the NDVI images, the areas in the center of the image that have been altered by fire stand out.

NDVI in grayscale from high (white) to low values (black) for the area around Nurri before (left) and after (right) the forest fire in July 2016 (data basis: Sentinel-2, ESA).

In order to assess the short-term changes caused by the fire, a pixel-by-pixel difference of the two NDVI images helps. In the difference image, the large, dark gray areas mark ground that remained unchanged between July 18 and 28, 2016, while the lighter to white pixels in the center of the image indicate a decrease in vegetation and can thus be assigned to the areas affected by the forest fire.

Difference image (data basis: Sentinel-2, ESA).
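A sketch of such a change analysis, building on the ndvi helper above (the tiny arrays stand in for the pre- and post-fire NDVI scenes so the example runs standalone; the 20 m pixel size follows the earlier section):

```python
import numpy as np

# ndvi_before and ndvi_after would be the NDVI arrays of the two scenes.
ndvi_before = np.array([[0.7, 0.6], [0.5, 0.1]])
ndvi_after  = np.array([[0.1, 0.6], [0.1, 0.1]])

# Positive difference = vegetation loss between the two acquisition dates.
diff = ndvi_before - ndvi_after

# Flag pixels with a strong NDVI drop as burned (0.3 is an illustrative threshold).
burned = diff > 0.3
print(burned)

# With a 20 m x 20 m pixel, each flagged pixel covers 400 m^2.
area_ha = burned.sum() * 400 / 10_000
print(f"Estimated burned area: {area_ha:.2f} ha")
```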

The image analysis was carried out with the LEOWorks software, which was developed specifically so that secondary school students can process satellite images independently. LEOWorks enables simple display, analysis, processing and evaluation of earth observation imagery. The software can be downloaded free of charge from http://leoworks.terrasigna.com/.