Friday, December 13, 2013

Remote Sensing Lab 8

Spectral Signature Analysis

Goal
This remote sensing lab provided students with skills in measuring and interpreting the spectral signatures of Earth's surface features.  Students collected, graphed, and analyzed spectral signatures from a Landsat ETM+ image of Eau Claire, Wisconsin.

Methods
A polygon was drawn using the Drawing tool within the spectral functions of ERDAS Imagine 2013 for each of the following features.

1.) Standing Water
2.) Moving Water
3.) Vegetation
4.) Riparian Vegetation
5.) Crops
6.) Urban Grass
7.) Dry Uncultivated Soil
8.) Moist Uncultivated Soil
9.) Rock
10.) Asphalt Highway
11.) Airport Runway
12.) Concrete Surface (Parking Lot)

Once a polygon had been drawn, the Supervised tool in the raster functions was used to activate the Signature Editor tool.  The spectral signature of the image was graphed using the Display Mean Plot Window tool in the Signature Editor interface.
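Conceptually, the Mean Plot charts the average pixel value of the polygon's pixels in each reflective band. A minimal numpy sketch of that computation (the image array and polygon mask below are made-up stand-ins, not the lab data):

```python
import numpy as np

# Hypothetical 6-band image stack, shape (bands, rows, cols) --
# a made-up stand-in for the Landsat ETM+ image
image = np.random.default_rng(0).integers(0, 256, size=(6, 100, 100))

# Boolean mask standing in for the drawn polygon
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True

# One point per band on the Mean Plot: the average pixel value
# of the polygon's pixels in that band
signature = [image[b][mask].mean() for b in range(image.shape[0])]
print(signature)
```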

Results
Each of the features was added to the graph for the analysis of spectral signatures (Figure 1).


Figure 1: Spectral Signatures of 12 Earth Features

The X axis of the graph represents the reflective bands of the image, and the Y axis represents the mean brightness (reflectance) value of each spectral signature in that band.  The chart below displays the highest and lowest reflective bands for the 12 features collected.

Band 1 = Blue
Band 2 = Green
Band 3 = Red
Band 4 = NIR
Band 5 = Mid IR
Band 7 = Mid IR (the thermal band 6 is excluded from the reflective plot)

Signature                     | Highest    | Lowest
Moving Water                  | Blue (1)   | Mid IR (7)
Vegetation                    | NIR (4)    | Red (3)
Riparian Vegetation           | NIR (4)    | Red (3)
Crops                         | Mid IR (5) | NIR (4)
Urban Grass                   | Mid IR (5) | Green (2)
Dry Soil-Uncultivated         | Mid IR (5) | Red (3)
Moist Soil-Uncultivated       | NIR (4)    | Mid IR (7)
Rock                          | Mid IR (5) | NIR (4)
Asphalt Highway               | Blue (1)   | NIR (4)
Airport Runway                | Mid IR (5) | Green (2)
Concrete Surface-Parking Lot  | Red (3)    | NIR (4)
Figure 2: Highest & Lowest Reflective Bands
For All 12 Features


Discussion
The graphical representation of the spectral signatures helped students recognize that each surface feature has its own unique spectral reflectance.  This knowledge gives remote sensing analysts the ability to identify and map unfamiliar features, and to determine which bands are most useful for identifying features through their spectral signatures.  Spectral signatures can also aid in the identification and classification of images by discrete land covers.


Friday, December 6, 2013

Remote Sensing Lab 7

Photogrammetry

Goal
This lab introduced students to photographic scale and how to calculate scale when various photographic measurements are available.  The lab also introduced basic concepts of photogrammetry, relief displacement and orthorectification.

Methodology

Scale

Figure 1: Western Eau Claire, Wisconsin
The ground distance between points A and B (Figure 1) is 8,822.47 ft. Students measured the distance between points A and B on the JPEG photograph using a ruler to determine the scale of the photograph.


Ground Distance = 8,822.47 ft.
Measured Distance = 2.1 inches
8,822.47/2.1 = 4201.2


Scale: 1 inch = 4201.2 ft

Students also calculated the scale using focal length, flying height, and terrain elevation.  The flying height of the camera was 20,000 ft above sea level, the terrain elevation was 796 ft, and the focal length was 152 mm.

152 mm / (20,000 ft − 796 ft)
152 mm / 19,204 ft = 1 mm / x ft

Scale: 1 mm = 126.34 ft
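Both scale calculations reduce to simple arithmetic; a quick sketch checking the lab's numbers:

```python
# Method 1: scale from measured photo distance vs. known ground distance
ground_ft = 8822.47
measured_in = 2.1
scale_ft_per_in = ground_ft / measured_in      # ft of ground per inch of photo

# Method 2: scale from focal length, flying height, and terrain elevation
focal_mm = 152.0
flying_height_ft = 20000.0                     # above sea level
terrain_ft = 796.0
scale_ft_per_mm = (flying_height_ft - terrain_ft) / focal_mm

print(round(scale_ft_per_in, 1), round(scale_ft_per_mm, 2))
```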

Relief Displacement
Students determined the relief displacement of the smoke stack identified by the letter 'A' in Figure 2. The height of the camera was 3,980 ft and the scale of the photograph was 1:3,209.  To calculate the relief displacement, students measured the height of the smoke stack on the photograph with a ruler and multiplied by the scale to find its real-world height.  The radial distance between the principal point of the photograph and the top of the smoke stack was also measured.


Radial Distance (r) = 11 in
Tower Height on Photo = 1 in, so real-world height h = 1 x 3,209
d = (h x r)/H = (3,209 x 11)/3,980

Relief Displacement: 8.87 in

Figure 2: Eau Claire, WI
Relief Displacement of smoke stack
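The calculation above is the standard relief displacement formula d = (h × r) / H; a quick arithmetic check using the lab's values (units kept as reported in the lab):

```python
# Relief displacement d = (h * r) / H, with the lab's values
h = 1 * 3209   # real-world tower height: 1 in on the photo x the 1:3,209 scale
r = 11         # radial distance from the principal point (in)
H = 3980       # flying height (ft)

d = (h * r) / H
print(round(d, 2))
```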

Stereoscopy
An anaglyph image was created in this lab to show a 3D perspective view of the City of Eau Claire, Wisconsin.  A digital elevation model was added to an aerial image using the Terrain-Anaglyph tool in ERDAS Imagine 2013.  The vertical dimension of the image was exaggerated by a factor of 2 using this tool.

Elevation features of the city can be seen in the resulting image using anaglyph (red/blue) glasses (Figure 3).

Figure 3: Anaglyph Image of Eau Claire, Wisconsin

Orthorectification

Students performed orthorectification on SPOT satellite images of Palm Springs, California, using the ERDAS Imagine Leica Photogrammetry Suite (LPS).  Orthorectification is used to create a planimetrically true orthoimage.  Before an image can be orthorectified, ground control points must be collected to geometrically correct the image.  Once the image has been corrected, a panchromatic image is added to the block file and more ground control points are collected.  Twelve GCPs were collected for each image in this lab, and 12 vertical reference points were collected for the panchromatic image (Figure 4).

The next step was to perform automatic tie point collection to triangulate the images.  Figure 5 displays the triangulation report including the RMSE value of the ground control points.

Figure 4: GCPs & Vertical Reference Points
Collected for Orthorectification & Triangulation

Figure 5: Triangulation Report







Friday, November 22, 2013

Remote Sensing Lab 6

Goal
One form of image preprocessing is geometric correction. This technique corrects distortions present in a satellite image so that biophysical and sociocultural information can be accurately extracted.

Methodology
There are two types of geometric correction: Image-to-Map rectification and Image-to-Image rectification.

Image-to-Map Rectification
To correct a Landsat TM image of the Chicago Metropolitan Statistical Area, a USGS 7.5 minute digital raster graphic was used to collect ground control points (GCPs).  These GCPs were used to rectify the Landsat TM image.  This was done using the Set Geometric Model tool in ERDAS Imagine 2013.  A first order polynomial equation was used to correct the distorted image.  This requires a minimum of 3 GCPs, but best practice calls for at least 4 GCPs.

The Set Geometric Model tool opens the Multipoint Geometric Correction window, in which the GCPs are collected and evaluated.  When the window opens, the GCP Tool Reference Setup requires the user to specify which image to collect GCPs from.  For this technique, Image Layer (New Viewer) was chosen; the GCPs were collected from the USGS 7.5 minute digital raster graphic.

As shown in the figure below, four GCPs were collected for image rectification (Figure 1).  The quality of rectification is measured by RMS error (root mean square error): the distance between the input location of a GCP and the retransformed location of the same GCP in the output rectified image.  The closer the RMS error is to 0, the more accurate the rectification.  The RMS error of this rectification was 1.6, which can be seen in the lower right-hand corner of Figure 1.

Figure 1: 4 GCPs collected for geometric correction
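The RMS error reported by the GCP tool is the root mean square of the per-point distances between each input GCP and its retransformed location. A small sketch of that calculation with made-up coordinates:

```python
import math

# Per-GCP residual: distance between the input GCP location and the
# location predicted by the fitted polynomial (coordinates are made up)
input_pts = [(120.0, 80.0), (300.0, 210.0), (55.0, 400.0), (410.0, 330.0)]
retransformed = [(121.1, 81.0), (299.2, 211.3), (56.0, 398.9), (409.1, 331.2)]

residuals = [math.hypot(xi - xr, yi - yr)
             for (xi, yi), (xr, yr) in zip(input_pts, retransformed)]

# Total RMS error, as reported by the GCP tool
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(round(rmse, 3))
```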

Image-to-Image Rectification
The second type of image rectification is Image-to-Image rectification.  The same process is followed as in the previous technique, but this time GCPs are collected from a reference image rather than a map.  In this lab, a third order polynomial was used for the image-to-image rectification, which raised the minimum number of control points from 3 to 10.  Just like the previous process, GCPs were collected for the reference and input images; 12 total GCPs were collected for this rectification technique (Figure 2).



Figure 2: 12 GCPs collected for Image-to-Image rectification


Once the RMS error is at an acceptable level, the image can be geometrically corrected.  This is done using the Display Resample Image Dialog tool.  For the purposes of this lab, all parameters of the tool were left as the default.  The second rectified image was resampled using bilinear interpolation, a more spatially accurate technique that uses the brightness values of the 4 closest input pixels in a 2 by 2 window to calculate the output pixel value.  When performing geometric correction it is best to use the most spatially accurate technique, because spatial accuracy is the goal of the transformation.
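Bilinear interpolation itself is a short computation: the output value is a distance-weighted average of the 4 surrounding input pixels. A minimal sketch on a toy 2 x 2 window:

```python
def bilinear(grid, x, y):
    """Value at fractional (x, y): average of the 4 surrounding pixels
    in a 2x2 window, weighted by proximity."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (grid[y0][x0]         * (1 - dx) * (1 - dy) +
            grid[y0][x0 + 1]     * dx       * (1 - dy) +
            grid[y0 + 1][x0]     * (1 - dx) * dy +
            grid[y0 + 1][x0 + 1] * dx       * dy)

# Toy 2x2 window of brightness values
grid = [[10, 20],
        [30, 40]]
print(bilinear(grid, 0.5, 0.5))   # centre of the window: 25.0
```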















Friday, November 15, 2013

Remote Sensing Lab 5

Image Mosaic & Miscellaneous Image Functions II

Goal
Remotely sensed data call for different analytic processes than other fields of GIS.  The following processes were introduced through a lab in Geography 338 at the University of Wisconsin-Eau Claire: RGB to IHS and IHS to RGB transformations, spatial and spectral image enhancement, band ratioing, and binary change detection.  Image mosaicking was also performed in the lab.

Methods
RGB IHS Transformation
These transformations provide an alternative way of displaying the RGB (red, green, blue) primary additive colors. The process converts red, green, and blue to intensity, hue, and saturation.  Because RGB colors often lack saturation, this transformation is used to improve interpretation of multispectral color composites.  The figure below shows the differences between the original image and the newly created IHS image.

Figure 1: RGB to IHS Transformation
Left: Original Image
Right: IHS Image
The IHS image is not what would be seen in the natural world.  It exhibits more contrast than the original image due to the increased orange and green tones.  When zoomed in, it is much harder to differentiate between features in the original image than in the IHS image.

An IHS image can subsequently be transformed back to RGB to display the colors as closely as possible to how they are perceived by the human eye.  When transforming back to RGB, band 1 represents intensity, band 2 represents hue, and band 3 represents saturation, as opposed to the transform to IHS, where band 1 represents blue, band 2 represents green, and band 3 represents red.  Transforming from IHS back to RGB using the stretched method is the best way to display Earth's features as they would appear in nature (Figure 2).  The stretched image also has much better contrast than the other two images.

Figure 2: IHS to RGB transformed image
using a stretch method
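Python's standard library implements a close relative of this transform (HSV rather than ERDAS's IHS, so this is only an approximation of the lab's tool). A sketch of converting to hue/saturation/value space, boosting saturation, and converting back:

```python
import colorsys

# colorsys implements HSV/HLS, close relatives of the IHS transform;
# component values are floats in [0, 1]
r, g, b = 0.8, 0.4, 0.2
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Boost saturation while in hue/saturation/value space...
s_boosted = min(1.0, s * 1.3)

# ...then transform back to RGB for display
r2, g2, b2 = colorsys.hsv_to_rgb(h, s_boosted, v)
print((round(r2, 3), round(g2, 3), round(b2, 3)))
```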

Image Mosaicking
Mosaicking is used when an area of interest is larger than the extent of one satellite image scene or the area of interest intersects two adjacent satellite image scenes.  Two methods of mosaicking were introduced in this lab.  The first method used was Mosaic Express.  In this tool interface, it is important that the input images are added so that the best quality image will be laid over the lesser quality image.  For this lab, all default parameters were kept.


Figure 3: Mosaic Express Output Image

The second method of mosaicking is MosaicPro.  The images were brought in the same way as in the previous method, but this time, before the images were added to the viewer, Compute Active Area was selected in the Image Area Options.  All other parameters were accepted because it was not necessary to crop or reduce the spatial extent of the output image.  The appearance of the viewer once the images have been added is shown in Figure 4.


Figure 4: MosaicPro viewer with images to be mosaicked

In the MosaicPro viewer, images can be selected and sent to the bottom or top.  This is useful for placing the best quality image on top.  The Color Correction-Histogram Matching tool was used to synchronize the radiometric properties of both images before the mosaic was performed.  Figure 5 displays the mosaicked image created using MosaicPro.


Figure 5: Mosaicked image using Mosaic Pro

Mosaic Express is only recommended for visual interpretation of images, not for analysis of remotely sensed images.  The transition between the mosaicked images is not as smooth as the original images appear in the viewer; the bottom image exhibits much more red coloring than the image on top.  The MosaicPro method produces better results because the radiometric properties of both images are more closely synchronized (Figure 6).  The image created by MosaicPro has a much smoother transition at the overlap area of the two images.


Figure 6: MosaicPro Output Image (left)
Mosaic Express Output Image (right)
Band Ratioing
In this lab, band ratioing was performed by implementing normalized difference vegetation index (NDVI).  This process was performed by first adding the image to the ERDAS viewer and then activating the Raster-Unsupervised tool.  In the tool interface, the sensor was set to Landsat TM and the function was set to NDVI.  The image below displays the original image and the output image created by this tool (Figure 7).


Figure 7: Original Image (left) NDVI Output Image (right)

In the NDVI image, the areas that are medium gray or black most likely do not have high concentrations of vegetation.  The dark black areas are water; any vegetation there is covered by a large quantity of water, so the sensor does not detect it.
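The computation behind the NDVI tool is the band ratio (NIR − Red) / (NIR + Red); a toy sketch with made-up band arrays:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); the bands here are made-up toy arrays
red = np.array([[40.0, 60.0],
                [30.0, 50.0]])
nir = np.array([[120.0, 70.0],
                [35.0, 200.0]])

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))   # values near 1 = dense vegetation; near 0 = little vegetation
```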

Spatial Enhancement
5 X 5 low pass convolution filtering
This tool is used to suppress high-frequency detail in an image.  High frequency refers to significant changes in brightness values over short distances in remotely sensed images.  This spatial enhancement technique smooths the output image, reducing local contrast.


Figure 8: High Frequency Image (left)
5 x 5 low pass convolution image (right)
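A 5 x 5 low pass filter is simply a moving 5 x 5 average; a plain-numpy sketch on a made-up image (not the ERDAS implementation):

```python
import numpy as np

def low_pass_5x5(img):
    """5 x 5 mean (low pass) convolution: each output pixel is the
    average of its 5 x 5 neighbourhood, suppressing high-frequency
    brightness changes.  Edges are handled by edge padding."""
    padded = np.pad(img, 2, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 5, j:j + 5].mean()
    return out

img = np.random.default_rng(1).integers(0, 256, size=(20, 20)).astype(float)
smooth = low_pass_5x5(img)
print(img.std() > smooth.std())   # smoothing shrinks the spread of brightness values
```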
5 X 5 high pass convolution filtering
When an image has low frequency (few changes in brightness values over a given area), a 5 X 5 high pass convolution filter can be applied to accentuate those changes.  The newly created image is much darker overall, but there is more contrast and edges appear sharper (Figure 9).


Figure 9: Low Frequency Image (left)
5 x 5 high pass convolution image (right)

Spectral Enhancement
Minimum-Maximum Linear Contrast Stretch
This type of linear stretch is applied to Gaussian histograms to spread the range of brightness values for more contrast in the resulting image.


Figure 10: Resulting image of minimum-maximum contrast stretch
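The minimum-maximum stretch maps the input's brightness range onto the full 0-255 display range; a toy sketch:

```python
import numpy as np

# Min-max linear contrast stretch: map the input brightness range
# [min, max] onto the full 0-255 display range (toy values)
img = np.array([[60.0, 90.0],
                [75.0, 120.0]])
stretched = (img - img.min()) / (img.max() - img.min()) * 255.0
print(stretched)
```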
Piecewise Linear Contrast Stretch
This type of spectral enhancement is applied when an image's histogram has more than one mode.  For this lab, the image's histogram had three modes, so it was considered trimodal.  Piecewise contrast stretch redistributes the pixel values of the original image so that they are more equally distributed.


Figure 11: Resulting image of piecewise contrast stretch
Histogram Equalization
Histogram equalization is performed to improve contrast of an image for better visual interpretation.  The process calls for the use of the Raster-Radiometric-Histogram Equalization tool in ERDAS Imagine 2013.  For this lab, all defaults were accepted in the tools interface.

The newly created image (Figure 12) has many more areas of white and light gray than the original image.  There is a drastic change in the image's histogram as well: the new histogram is stretched from 39 to 256, as opposed to approximately 14 to 44.  This means the new image has much more contrast than the original image.
Figure 12: Histogram Equalization
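Histogram equalization remaps brightness values through the image's normalized cumulative histogram; a self-contained sketch on a made-up low-contrast image (a common textbook formulation, not necessarily ERDAS's exact algorithm):

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit image: remap brightness
    values through the normalized cumulative histogram so the output
    fills the display range more evenly."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()              # first occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# Made-up low-contrast image clustered in a narrow brightness range
rng = np.random.default_rng(2)
img = np.clip(rng.normal(30, 6, (50, 50)), 0, 255).astype(np.uint8)
out = equalize(img)
print(int(img.max() - img.min()), int(out.max() - out.min()))
```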

Binary Change Detection
Binary change detection is used to estimate and map the brightness values of pixels that have changed from one specified time to another.  For this lab, the area of interest was Eau Claire County, Wisconsin, between August 1991 and August 2011.

The change in brightness values was analyzed spectrally using the image differencing technique.  In ERDAS, the Two Input Operators tool was used to difference layer 4 of the two images.  The resulting image does not directly show areas of change; a threshold of change must be determined before the areas of change can be visually interpreted.  The equation (mean + 1.5 standard deviations) is used to calculate the threshold.  This value is added to the center value of the histogram for the upper threshold and subtracted from the center value for the lower threshold of change (Figure 13).


Figure 13: Threshold of change
Model Maker was used to map the areas of change for the area of interest using the conditional function EITHER 1 IF ($n1_ec_91 > change/no-change threshold value) OR 0 OTHERWISE.  This function assigns a value of 1 to all pixels above the change/no-change threshold and 0 to all pixels below it.  The resulting image was brought into Esri ArcMap 10.1 for better interpretation of where the changes occurred (Figure 14).  More area remained unchanged than changed between 1991 and 2011; the areas that did change are mostly located near urban centers or large water bodies such as lakes and rivers.
Figure 14: Binary change detection
Eau Claire, Wisconsin and surrounding areas
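The differencing, thresholding, and conditional-model steps above can be sketched as follows (the two band arrays are made-up stand-ins for the 1991 and 2011 layer 4 images):

```python
import numpy as np

# Toy stand-ins for layer 4 of the 1991 and 2011 images
rng1, rng2 = np.random.default_rng(3), np.random.default_rng(4)
b4_1991 = rng1.normal(100, 10, (50, 50))
b4_2011 = b4_1991 + rng2.normal(0, 5, (50, 50))

# Image differencing, then the lab's threshold rule:
# mean +/- 1.5 standard deviations of the difference image
diff = b4_2011 - b4_1991
upper = diff.mean() + 1.5 * diff.std()
lower = diff.mean() - 1.5 * diff.std()

# 1 = change, 0 = no change (the EITHER 1 IF ... OR 0 OTHERWISE model)
changed = ((diff > upper) | (diff < lower)).astype(np.uint8)
print(int(changed.sum()), changed.size)
```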


Friday, November 1, 2013

Remote Sensing Lab 4

Miscellaneous Image Functions I

Goal
In the realm of remote sensing, many functions can be used to better interpret or display remote sensing images.  A lab was used to introduce students to a few of these functions.  The lab taught students how to delineate a study area from a larger satellite image scene and how to link a satellite image in Erdas Imagine 2013 to Google Earth.  Students also examined how spatial resolution of satellite images can be optimized for visual interpretation.  Some radiometric enhancement techniques in optical images were introduced as well as various methods of resampling of satellite images.  The lab provided valuable skills in image pre-processing.

Methods
Image Subsetting
Image subsetting is used to delineate a region of interest from a larger satellite image scene; this can be thought of as using a cookie cutter.  This process is necessary because many times a study area is significantly smaller or not the exact shape of an image scene.

Subset Using an Inquire Box
The first step in subsetting using Subset & Chip is to import the satellite image into the Erdas Imagine 2013 software.  For this exercise, the image used was eau_claire_2011.img.  Once the image has been imported, an inquire box must be created by right clicking on any area within the satellite image.  The inquire box, displayed in Figure 1, is a white box that can be resized and moved.


Figure 1: Inquire Box used to subset a satellite image
The inquire box must be set to cover the entire area desired for the subset.  This can be accomplished by placing the cursor inside the Inquire Box, holding down the left mouse button while dragging the Inquire box to reposition over the study area.  For this exercise, the city of Eau Claire, Wisconsin was the study area.  When the area was sufficiently covered by the Inquire Box, "apply" must be clicked.

After the Inquire Box was set to cover the study area, the Subset & Chip tool within the raster toolset was clicked followed by the Create Subset Image tool which automatically populates the input image file to the current image in the viewer.  To finish the process of subsetting with an Inquire Box, an output location must be set.  The From Inquire Box parameter must be clicked to run this tool.  This brings the coordinates of the image area covered by the inquire box into the subset interface.  All other parameters within this tool interface may be left as default.  Clicking OK creates a subset image.  A model window will appear; once the model has successfully run, the window can be dismissed.  A new subset image of the study area (Eau Claire, Wisconsin) has been created, but must be imported into the Erdas Viewer.  Figure 2 displays the original eau_claire_2011.img as well as the newly created subset image.


Figure 2:
Left Image- Satellite scene of western WI and eastern MN
Right Image- Subset image of Eau Claire, WI
There are limitations to this method of subsetting.  The study area may not be in the shape of a rectangle or square (the only shape option using an Inquire Box).  The second method of subsetting is quite useful to avoid this limitation.

Subset Using an Area of Interest Shapefile
The second method uses a shapefile of an area of interest to subset a satellite image.  Again the eau_claire_2011.img is used as the base image.  A shapefile must be added in the Erdas image viewer containing the base image.  For this exercise the ec_cpw_cts.shp shapefile was used.  This area of interest incorporates Eau Claire and Chippewa Counties.  When the shapefile is added to the viewer, it is overlaid on top of the base image (Figure 3).


Figure 3: AOI Shapefile of Eau Claire and Chippewa Counties
Overlaid on top of the eau_claire_2011.img base image
To select the shapefile in the viewer as the AOI (area of interest) file, the shift key must be held while clicking on both counties.  Once the shapefile has been selected, the HOME button is clicked and then Paste From Selected Object is chosen.  When this has been completed successfully, the AOI shapefile will be shown with dotted lines.  The AOI shapefile must then be saved as an AOI file by choosing Save As, followed by AOI Layer As.

In order to complete this method of subsetting, the Subset & Chip tool is chosen under the raster tool set in Erdas.  The input file and output location must be set prior to running the tool.  The AOI file is set by selecting the AOI button at the bottom of the Subset & Chip tool window, navigating to the saved AOI file, and selecting OK.  For this process, all other parameters are left as the default values.  Figure 4 displays the original satellite scene as well as the newly created subset image.

Figure 4: Subset using AOI file
Right Image: Full satellite image scene
Left Image: Subset image of Eau Claire and Chippewa Counties

Image Fusion
Pan-sharpening is used to create a higher spatial resolution image from a coarse resolution image.  This process optimizes the image's spatial resolution for visual interpretation.  For this exercise, a panchromatic image (ec_cpw_2000pan.img) with 15 meter resolution was used to pan-sharpen a reflective image (ec_cpw_2000.img) with 30 meter resolution covering Eau Claire and Chippewa counties.

The pan-sharpen tool is located in the raster tool set in Erdas Imagine 2013.  The Resolution Merge tool was used for this process.  The panchromatic image was used as the high resolution input file within the resolution merge tool window and the reflective image was used as the multispectral input file.  The multiplicative method was used in conjunction with the nearest neighbor resampling technique.  Figure 5 displays the difference between the original image and the pan-sharpened image.  The pan-sharpened image exhibits a higher degree of contrast than the original image.  This allows for greater interpretation of colors in the pan-sharpened image.

Figure 5: Original image on the left, Pan-sharpened image on the right
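The multiplicative resolution merge can be sketched as upsampling the multispectral band to the panchromatic grid and multiplying (a simplified toy version, not ERDAS's exact implementation):

```python
import numpy as np

# Toy 30 m multispectral band (2x2) and 15 m panchromatic band (4x4)
ms = np.array([[100.0, 120.0],
               [140.0, 160.0]])
pan = np.random.default_rng(5).uniform(0.8, 1.2, (4, 4))

# Upsample the multispectral band to the pan grid (nearest neighbour),
# then multiply to inject the pan band's spatial detail
ms_up = ms.repeat(2, axis=0).repeat(2, axis=1)
sharpened = ms_up * pan
print(sharpened.shape)
```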
Radiometric Enhancement Techniques

Haze Reduction
One radiometric enhancement technique is haze reduction, which reduces the appearance of haze in a remotely sensed satellite image.  To conduct this process, the Radiometric-Haze Reduction tool was used in the raster processing tools of Erdas Imagine 2013.  All parameters of the haze reduction tool were left as the default values.  The figure below displays the original image and the image produced by haze reduction (Figure 6).  The haze visible in the original image is dramatically reduced in the new image.  It should be noted that the areas that exhibited haze in the first image are not completely fixed in the second image; a shadow or transparent gray area is still present.

Figure 6: Original image on the left,
Image produced using the haze reduction tool on the right
Linking View to Google Earth
Google Earth can serve as an image interpretation key because it can show a true color aerial image of the features one is observing in the Erdas image viewer.  It can help to better visualize almost all image interpretation keys such as texture, size, pattern, shadow, site and color.  This is done by first adding a satellite image to the Erdas image viewer then clicking the Google Earth button on the top of the screen.  Connect to Google Earth is selected next; this will open Google Earth.  To link the Erdas image viewer with Google Earth, Match GE to View was selected.  This sets the spatial extent of Google Earth to the image in Erdas.  To synchronize Erdas and Google Earth, the Sync GE to View button is selected.  Once the images are connected and synced, it is possible to use the zoom in and out buttons to view the image in Erdas and Google Earth at the same spatial extent.  Figure 7 exhibits the desktop screen when the image in Erdas has been connected to Google Earth.

Figure 7: Erdas & Google Earth image synchronization

Resampling
Resampling is a mathematical technique used to create a new version of an image with a different pixel size.  It is employed for image rectification or geometric correction purposes, and when data are collected from different sensors with different pixel sizes.  There are many methods of resampling; for this lab, the Nearest Neighbor and Bilinear Interpolation techniques were used.

Nearest Neighbor
The Nearest Neighbor technique was used to change the pixel size of an image from 30 meters by 30 meters to 30 meters by 20 meters.  The Resample Pixel Size tool was used in the Spatial Raster tool set.  Within the Resample Pixel Size tool, the Resample Method was left as the default because the default is the nearest neighbor method.  The output cell size had to be set at 30 meters by 20 meters.  All other parameters were left as the default values.  Figure 8 displays the original image (30 m x 30 m) and the resampled image (30 m x 20 m).

Figure 8: Nearest Neighbor Resampling Technique
Figure 9: Large scale view of original & resampled image
The pixels in the resampled image are rectangular because the pixel size was resampled to 30 m by 20 m.  With the nearest neighbor technique, each output pixel takes the value of the single closest input pixel, so contrast is still apparent, but the areas of contrast are larger (Figure 9).
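Nearest neighbor resampling assigns each output pixel the value of the single closest input pixel; a minimal sketch that changes a toy grid's column spacing (analogous to the 30 m to 20 m resample):

```python
import numpy as np

def resample_nearest(img, out_rows, out_cols):
    """Nearest neighbour resampling: each output pixel takes the value
    of the single closest input pixel, so no new values are created."""
    rows, cols = img.shape
    r_idx = (np.arange(out_rows) * rows / out_rows).astype(int)
    c_idx = (np.arange(out_cols) * cols / out_cols).astype(int)
    return img[np.ix_(r_idx, c_idx)]

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 30 m x 30 m grid
out = resample_nearest(img, 6, 9)                # narrower 20 m columns
print(out.shape)
```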

Bilinear Interpolation
The same resampling process was followed to create a bilinear interpolation resampled image, except the resampling method was changed to Bilinear Interpolation in the Resample Pixel Size tool.  The pixels of the bilinear interpolated image are smaller than the original image pixels because the image was resampled to a pixel size of 20 m by 20 m.  This causes there to be more contrast around the borders of the features shown in the image (Figures 10 and 11).

Figure 10: Bilinear Method of resampling
Figure 11: Large scale view of Bilinear Interpolation resampling technique

Conclusion
This lab provided students with valuable skills in remote sensing.  Study area delineation, synchronization with Google Earth, resampling, image subsetting, and other radiometric enhancement techniques can be used to better interpret satellite imagery. These techniques are important to the field of remote sensing because better interpretation of aerial imagery allows for better results in remote sensing analysis.