Friday, November 22, 2013

Remote Sensing Lab 6

Goal
One form of image preprocessing is geometric correction. This technique is performed to correct distortions present in a satellite image so that biophysical and sociocultural information can be accurately extracted.

Methodology
There are two types of geometric correction: Image-to-Map rectification and Image-to-Image rectification.

Image-to-Map Rectification
To correct a Landsat TM image of the Chicago Metropolitan Statistical Area, a USGS 7.5 minute digital raster graphic was used to collect ground control points (GCPs).  These GCPs were used to rectify the Landsat TM image.  This was done using the Set Geometric Model tool in ERDAS Imagine 2013.  A first order polynomial equation was used to correct the distorted image.  This requires a minimum of 3 GCPs, but best practice calls for at least 4 GCPs.

The Set Geometric Model tool opens the Multipoint Geometric Correction window.  In this window, the GCPs are collected and evaluated.  When the window opens, the GCP Tool Reference Setup requires the user to input which image to collect GCPs from.  For this technique, Image Layer (New Viewer) was chosen.  The GCPs were then collected from the USGS 7.5-minute digital raster graphic.

As shown in the figure below, four GCPs were collected for image rectification (Figure 1).  The quality of rectification is measured by RMS error (root mean square error), the distance between the input location of a GCP and the retransformed location of the same GCP in the output rectified image.  The closer the RMS error is to 0, the more accurate the rectification.  The total RMS error of this rectification was 1.6, which can be seen in the lower right corner of Figure 1.

Figure 1: 4 GCPs collected for geometric correction
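
The least-squares fit and the RMS error reported above can be illustrated with a short Python sketch.  The GCP coordinates below are hypothetical, and the script is not the ERDAS implementation, only the general idea of fitting a first-order polynomial and measuring its residuals.

```python
import numpy as np

# Hypothetical GCPs: (column, row) locations in the distorted input image and
# (x, y) map coordinates read from the reference digital raster graphic.
src = np.array([[120.0, 340.0], [860.0, 275.0], [410.0, 900.0], [700.0, 650.0]])
ref = np.array([[412300.0, 4638900.0], [434500.0, 4640800.0],
                [421000.0, 4621900.0], [429700.0, 4629500.0]])

# First-order (affine) polynomial: x' = a0 + a1*c + a2*r and y' = b0 + b1*c + b2*r.
# Three unknowns per equation is why at least 3 GCPs are required.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Retransform each GCP and measure how far it lands from its reference point.
pred = np.column_stack([A @ coef_x, A @ coef_y])
residuals = np.hypot(pred[:, 0] - ref[:, 0], pred[:, 1] - ref[:, 1])
print("per-GCP error:", residuals)
print("total RMS error:", np.sqrt(np.mean(residuals ** 2)))
```
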

Image-to-Image Rectification
The second type of image rectification is Image-to-Image rectification.  The same process is followed as in the previous technique, but this time GCPs are collected from a reference image, not a map.  In this lab, a third order polynomial was used for the image-to-image rectification, which changed the minimum number of control points from 3 to 10.  Just like the previous process, GCPs were collected for the reference and input images; 12 total GCPs were collected for this rectification technique (Figure 2).



Figure 2: 12 GCPs collected for Image-to-Image rectification
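
The jump from 3 to 10 minimum GCPs follows from the number of coefficients a two-dimensional polynomial of a given order contains.  A quick check, as a sketch:

```python
# A 2-D polynomial of order t has (t + 1)(t + 2) / 2 coefficients per axis,
# which is also the minimum number of GCPs needed to solve for them.
def min_gcps(order: int) -> int:
    return (order + 1) * (order + 2) // 2

for t in (1, 2, 3):
    print(f"order {t}: minimum of {min_gcps(t)} GCPs")
# order 1: minimum of 3 GCPs
# order 2: minimum of 6 GCPs
# order 3: minimum of 10 GCPs
```
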


Once the RMS error is at an acceptable level, the image can be geometrically corrected.  This is done using the Display Resample Image Dialog tool.  For the purposes of this lab, all parameters of the tool were left as the default.  The second rectified image was resampled using bilinear interpolation.  Bilinear interpolation was used because it is a more spatially accurate technique that uses the brightness values of the four closest input pixels in a 2 by 2 window to calculate each output pixel value.  When performing geometric correction, it is best to use the most spatially accurate technique because spatial accuracy is the goal of the transformation.
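
A minimal sketch of the 2 x 2 bilinear weighting described above, using a toy array rather than a real band:

```python
import numpy as np

def bilinear_sample(band: np.ndarray, row: float, col: float) -> float:
    """Interpolate a brightness value at a fractional (row, col) position from
    the 2 x 2 window of surrounding input pixels."""
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    r1, c1 = min(r0 + 1, band.shape[0] - 1), min(c0 + 1, band.shape[1] - 1)
    dr, dc = row - r0, col - c0
    top = band[r0, c0] * (1 - dc) + band[r0, c1] * dc
    bottom = band[r1, c0] * (1 - dc) + band[r1, c1] * dc
    return top * (1 - dr) + bottom * dr

# Toy 3 x 3 band: a location halfway between four pixels gets their average.
band = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=float)
print(bilinear_sample(band, 0.5, 0.5))   # 30.0
```
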















Friday, November 15, 2013

Remote Sensing Lab 5

Image Mosaic & Miscellaneous Image Functions II

Goal
Remotely sensed data calls for different analytic processes than other types of GIS data.  The following processes were introduced through a lab in Geography 338 at the University of Wisconsin-Eau Claire.  The lab included RGB to IHS and IHS to RGB transformations, spatial and spectral image enhancement, band ratioing, and binary change detection.  Image mosaicking was also performed in the lab.

Methods
RGB IHS Transformation
These transformations provide an alternative way of displaying the RGB (red, green, blue) primary additive colors.  The process converts red, green, and blue to intensity, hue, and saturation.  Because RGB color composites often lack saturation, this transformation is used to improve the interpretation of multispectral color composites.  The figure below shows the differences between the original image and the newly created IHS image.

Figure 1: RGB to IHS Transformation
Left: Original Image
Right: IHS Image
The IHS image is not what would be seen in the natural world.  The image exhibits more contrast than the original image due to the increased orange and green tones.  When zoomed in, it is much harder to differentiate between features on the original image compared to the IHS image.
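
For readers curious what the forward transformation does to a single pixel, below is a sketch of one common intensity-hue-saturation formulation; it is not necessarily the exact algorithm ERDAS Imagine uses, and the pixel values are hypothetical.

```python
import numpy as np

def rgb_to_ihs(r: float, g: float, b: float):
    """Convert normalized RGB values (0-1) to intensity, hue (degrees), and
    saturation using one common HSI formulation (several variants exist)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.degrees(np.arccos(num / den)) if den > 0 else 0.0
    h = theta if b <= g else 360.0 - theta
    return i, h, s

# A muted green pixel: the low saturation becomes explicit after the transform.
print(rgb_to_ihs(0.35, 0.55, 0.40))
```
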

An IHS image can subsequently be transformed back to RGB to display colors close to how they are perceived by the human eye.  When transforming back to RGB, band 1 represents intensity, band 2 represents hue, and band 3 represents saturation, as opposed to transforming to IHS, when band 1 represents blue, band 2 represents green, and band 3 represents red.  Using the stretch method produces the best display of Earth's features as they would appear in nature (Figure 2).  The stretched image also appears much sharper than the other two images.

Figure 2: RGB to IHS transformed image
using a stretch method

Image Mosaicking
Mosaicking is used when an area of interest is larger than the extent of one satellite image scene or the area of interest intersects two adjacent satellite image scenes.  Two methods of mosaicking were introduced in this lab.  The first method used was Mosaic Express.  In this tool interface, it is important that the input images are added so that the best quality image will be laid over the lesser quality image.  For this lab, all default parameters were kept.


Figure 3: Mosaic Express Output Image

The second method of mosaicking is MosaicPro.  The images were brought in the same way as in the previous method, but this time, before the images were added to the viewer, Compute Active Area was selected in the Image Area Options.  All other default parameters were accepted because it was not necessary to crop or reduce the spatial extent of the output image.  The appearance of the viewer once the images have been added is shown in Figure 4.


Figure 4: MosaicPro viewer with images to be mosaicked

In the MosaicPro viewer, images can be selected and sent to the bottom or top.  This tool is useful for placing the best quality image on top.  The Color Correction-Histogram Matching tool was used to synchronize the radiometric properties of both images before the mosaic was performed.  Figure 5 displays the mosaicked image created using MosaicPro.


Figure 5: Mosaicked image using Mosaic Pro

Mosaic Express is only recommended for visual interpretation of images, not for analysis of remotely sensed images.  The transition between the mosaicked images is not as smooth as the original images appear in the viewer.  The bottom image exhibits much more red coloring than the image on top.  MosaicPro produces better results because the radiometric properties of both images are more closely synchronized (Figure 6).  The image created by MosaicPro has a much smoother transition at the overlap area of the two images.


Figure 6: MosaicPro Output Image (left)
Mosaic Express Output Image (right)
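
The Color Correction-Histogram Matching step can be sketched in a few lines.  This toy version works on single bands of random data, not on the actual Landsat scenes used in the lab, and is only the general idea of matching cumulative histograms.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap the brightness values of `source` so that its cumulative histogram
    follows the reference image's histogram."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)

# Toy bands: the darker scene is pulled toward the brighter one's distribution.
dark = np.random.randint(20, 120, size=(100, 100))
bright = np.random.randint(60, 200, size=(100, 100))
print(dark.mean(), match_histogram(dark, bright).mean(), bright.mean())
```
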
Band Ratioing
In this lab, band ratioing was performed by implementing normalized difference vegetation index (NDVI).  This process was performed by first adding the image to the ERDAS viewer and then activating the Raster-Unsupervised tool.  In the tool interface, the sensor was set to Landsat TM and the function was set to NDVI.  The image below displays the original image and the output image created by this tool (Figure 7).


Figure 7: Original Image (left) NDVI Output Image (right)

In the NDVI image, the areas that are medium gray or black most likely have low concentrations of vegetation.  The darkest areas are water; any vegetation there is submerged under a large quantity of water, so the sensor does not detect it.
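
The band ratio itself is simple arithmetic.  Below is a sketch with hypothetical Landsat TM band values (band 3 = red, band 4 = near-infrared), not values from the lab image.

```python
import numpy as np

# Hypothetical brightness values for the red and near-infrared bands.
red = np.array([[60.0, 30.0], [90.0, 45.0]])
nir = np.array([[65.0, 120.0], [70.0, 160.0]])

# NDVI = (NIR - Red) / (NIR + Red): values near +1 suggest dense vegetation,
# values near 0 bare surfaces, and negative values water.
ndvi = np.where(nir + red == 0, 0.0, (nir - red) / (nir + red))
print(ndvi)
```
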

Spatial Enhancement
5 X 5 low pass convolution filtering
This filter is used to suppress high-frequency detail in an image.  High frequency refers to significant changes in brightness values over short distances in remotely sensed images.  This spatial enhancement technique produces a smoother output image.


Figure 8: High Frequency Image (left)
5 x 5 low pass convolution image (right)
5 X 5 high pass convolution filtering
When an image has low frequency (few changes in brightness values over a given area), a 5 x 5 high pass convolution filter can be applied to enhance local detail in the brightness values.  The newly created image is much darker in color, but there is more contrast and features appear sharper (Figure 9).


Figure 9: Low Frequency Image (left)
5 x 5 high pass convolution image (right)
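
Both filters are convolutions with a 5 x 5 kernel; the only difference is the kernel weights.  Here is a sketch using one common choice of weights for each, applied to a hypothetical band; these are not necessarily the exact kernels ERDAS applies.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(200, 200)).astype(float)  # hypothetical band

# Low pass: every weight is 1/25, so each output pixel is the mean of its
# 5 x 5 neighbourhood and high-frequency detail is smoothed away.
low_pass = np.ones((5, 5)) / 25.0
smoothed = convolve(band, low_pass, mode="nearest")

# High pass: the centre weight dominates the negative surrounding weights, so
# slowly varying background is removed and local detail is emphasized.
high_pass = -np.ones((5, 5))
high_pass[2, 2] = 24.0
detail = convolve(band, high_pass, mode="nearest")

print(band.std(), smoothed.std(), detail.std())
```
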

Spectral Enhancement
Minimum-Maximum Linear Contrast Stretch
This type of linear stretch is applied to images with roughly Gaussian histograms to spread the range of brightness values, producing more contrast in the resulting image.


Figure 10: Resulting image of minimum-maximum contrast stretch
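
A sketch of the minimum-maximum stretch on a hypothetical low-contrast band:

```python
import numpy as np

def minmax_stretch(band: np.ndarray, out_max: int = 255) -> np.ndarray:
    """Linearly rescale so the darkest input pixel maps to 0 and the brightest
    to out_max, spreading the brightness values over the full display range."""
    lo, hi = float(band.min()), float(band.max())
    return ((band - lo) / (hi - lo) * out_max).astype(np.uint8)

# Hypothetical band occupying only values 40-90 before the stretch.
band = np.random.randint(40, 91, size=(50, 50))
stretched = minmax_stretch(band)
print(band.min(), band.max(), "->", stretched.min(), stretched.max())
```
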
Piecewise Linear Contrast Stretch
This type of spectral enhancement is applied when an image's histogram has more than one mode.  For this lab the image had three modes, so it was considered trimodal.  Piecewise contrast stretch redistributes the pixel values of the original image so that the resulting image's pixel values are more evenly distributed.


Figure 11: Resulting image of piecewise contrast stretch
Histogram Equalization
Histogram equalization is performed to improve the contrast of an image for better visual interpretation.  The process uses the Raster-Radiometric-Histogram Equalization tool in ERDAS Imagine 2013.  For this lab, all defaults were accepted in the tool's interface.

The newly created image (Figure 12) has many more areas of white and light gray than the original image.  There is a drastic change in the image's histogram as well: the new histogram is stretched from approximately 39 to 256, as opposed to approximately 14 to 44.  This means the new image has much more contrast than the original image.
Figure 12: Histogram Equalization
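
A sketch of the equalization itself, on a hypothetical band with roughly the narrow 14-44 range noted above; the ERDAS tool offers options this toy version ignores.

```python
import numpy as np

def equalize(band: np.ndarray, levels: int = 256) -> np.ndarray:
    """Redistribute brightness values so the cumulative histogram becomes
    roughly linear, spreading values across the full 0-(levels-1) range."""
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize to 0-1
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)   # lookup table
    return lut[band]

band = np.random.randint(14, 45, size=(100, 100))
print(band.min(), band.max(), "->", equalize(band).min(), equalize(band).max())
```
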

Binary Change Detection
Binary change detection is used to estimate and map the brightness values of pixels that have changed from one specified time to another.  For this lab, the area of interest was Eau Claire County, Wisconsin between August 1991 and August 2011.

The change in brightness values was analyzed spectrally using the image differencing technique.  In ERDAS, the Two Input Operators tool was used for the change detection process on layer 4 of the images.  The resulting difference image does not, by itself, show areas of change; a threshold of change must be determined before the areas of change can be visually interpreted.  The expression (mean + 1.5 standard deviations) is used to calculate the threshold, which is added to the center value of the histogram for the upper threshold and subtracted from the center value for the lower threshold of change (Figure 13).


Figure 13: Threshold of change
Model Maker was used to map the areas of change for the area of interest using the conditional function EITHER 1 IF ($n1_ec_91 > change/no-change threshold value) OR 0 OTHERWISE.  This function assigns a value of 1 to all pixels above the change/no-change threshold and 0 to all pixels below it.  The resulting image was brought into ArcMap (ESRI ArcGIS Desktop 10.1) for better interpretation of where the changes occurred (Figure 14).  More areas did not change than changed between 1991 and 2011.  Areas that changed are mostly located near urban centers or large water bodies like lakes or rivers.
Figure 14: Binary change detection
Eau Claire, Wisconsin and surrounding areas
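
The whole chain, differencing, the mean plus or minus 1.5 standard deviation threshold, and the Model Maker-style conditional, can be sketched on synthetic data.  The arrays below are hypothetical, not the Eau Claire scenes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical band-4 brightness values for the same area at two dates.
band4_1991 = rng.integers(30, 180, size=(300, 300)).astype(float)
band4_2011 = band4_1991 + rng.normal(0, 8, size=(300, 300))
band4_2011[50:90, 120:200] += 60          # a patch of genuine change

# Image differencing: most pixels cluster around the histogram centre, and
# pixels beyond mean +/- 1.5 standard deviations are flagged as change.
diff = band4_2011 - band4_1991
upper = diff.mean() + 1.5 * diff.std()
lower = diff.mean() - 1.5 * diff.std()

# Equivalent of the conditional function: 1 where change, 0 otherwise.
change = np.where((diff > upper) | (diff < lower), 1, 0)
print("changed pixels:", int(change.sum()), "of", change.size)
```
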


Friday, November 1, 2013

Remote Sensing Lab 4

Miscellaneous Image Functions I

Goal
In the realm of remote sensing, many functions can be used to better interpret or display remote sensing images.  A lab was used to introduce students to a few of these functions.  The lab taught students how to delineate a study area from a larger satellite image scene and how to link a satellite image in Erdas Imagine 2013 to Google Earth.  Students also examined how spatial resolution of satellite images can be optimized for visual interpretation.  Some radiometric enhancement techniques in optical images were introduced as well as various methods of resampling of satellite images.  The lab provided valuable skills in image pre-processing.

Methods
Image Subsetting
Image subsetting is used to delineate a region of interest from a larger satellite image scene; this can be thought of as using a cookie cutter.  This process is necessary because many times a study area is significantly smaller or not the exact shape of an image scene.

Subset Using an Inquire Box
The first step in subsetting using Subset & Chip is to import the satellite image into the ERDAS Imagine 2013 software.  For this exercise, the image used was eau_claire_2011.img.  Once the image has been imported, an inquire box must be created.  This is done by right clicking on any area within the satellite image.  The inquire box is displayed in Figure 1.  It is a white box that can be resized and moved.


Figure 1: Inquire Box used to subset a satellite image
The inquire box must be set to cover the entire area desired for the subset.  This is accomplished by placing the cursor inside the Inquire Box and holding down the left mouse button while dragging the box over the study area.  For this exercise, the study area was the city of Eau Claire, Wisconsin.  When the area is sufficiently covered by the Inquire Box, Apply is clicked.

After the Inquire Box was set to cover the study area, the Subset & Chip tool within the raster toolset was clicked followed by the Create Subset Image tool which automatically populates the input image file to the current image in the viewer.  To finish the process of subsetting with an Inquire Box, an output location must be set.  The From Inquire Box parameter must be clicked to run this tool.  This brings the coordinates of the image area covered by the inquire box into the subset interface.  All other parameters within this tool interface may be left as default.  Clicking OK creates a subset image.  A model window will appear; once the model has successfully run, the window can be dismissed.  A new subset image of the study area (Eau Claire, Wisconsin) has been created, but must be imported into the Erdas Viewer.  Figure 2 displays the original eau_claire_2011.img as well as the newly created subset image.


Figure 2:
Left Image: Satellite scene of western Wisconsin and eastern Minnesota
Right Image: Subset image of Eau Claire, WI
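
Conceptually, the Inquire Box subset is just a rectangular slice of the image array.  A sketch with hypothetical pixel coordinates, not the actual box used in the lab:

```python
import numpy as np

# Hypothetical full scene (rows, columns, bands) and an Inquire-Box-style
# window given as upper-left and lower-right pixel coordinates.
scene = np.zeros((7000, 8000, 6), dtype=np.uint8)
ul_row, ul_col = 2500, 3100
lr_row, lr_col = 4200, 5000

# The subset is simply the portion of the array covered by the box.
subset = scene[ul_row:lr_row, ul_col:lr_col, :]
print(scene.shape, "->", subset.shape)    # (7000, 8000, 6) -> (1700, 1900, 6)
```
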
There are limitations to this method of subsetting.  The study area may not be in the shape of a rectangle or square (the only shape option using an Inquire Box).  The second method of subsetting is quite useful to avoid this limitation.

Subset Using an Area of Interest Shapefile
The second method uses a shapefile of an area of interest to subset a satellite image.  Again the eau_claire_2011.img is used as the base image.  A shapefile must be added in the Erdas image viewer containing the base image.  For this exercise the ec_cpw_cts.shp shapefile was used.  This area of interest incorporates Eau Claire and Chippewa Counties.  When the shapefile is added to the viewer, it is overlaid on top of the base image (Figure 3).


Figure 3: AOI Shapefile of Eau Claire and Chippewa Counties
Overlaid on top of the eau_claire_2011.img base image
To select the shapefile in the viewer as the AOI (area of interest) file, the shift key must be held while clicking on both counties.  Once the shapefile has been selected, the HOME button is selected and then Paste From Selected Object is chosen.  When this has been completed successfully, the AOI shapefile will be shown as dotted lines.  The AOI shapefile must then be saved as an AOI file.  This is accomplished by choosing SAVE AS, followed by AOI LAYER AS.

To complete this method of subsetting, the Subset & Chip tool is chosen under the raster tool set in ERDAS.  The input file and output location must be set prior to running the tool.  The AOI file is set by selecting the AOI button at the bottom of the Subset & Chip tool window, navigating to the file's saved location, and selecting OK.  For this process, all other parameters are left as the default values.  Figure 4 displays the original satellite scene as well as the newly created subset image.

Figure 4: Subset using AOI file
Right Image: Full satellite image scene
Left Image: Subset image of Eau Claire and Chippewa Counties

Image Fusion
Pan-sharpening is used to create a higher spatial resolution image from a coarse resolution image.  This process optimizes the image's spatial resolution for visual interpretation.  For this exercise, a panchromatic image (ec_cpw_2000pan.img) with 15 meter resolution was used to pan-sharpen a reflective image (ec_cpw_2000.img) with 30 meter resolution covering Eau Claire and Chippewa counties.

The pan-sharpen tool is located in the raster tool set in Erdas Imagine 2013.  The Resolution Merge tool was used for this process.  The panchromatic image was used as the high resolution input file within the resolution merge tool window and the reflective image was used as the multispectral input file.  The multiplicative method was used in conjunction with the nearest neighbor resampling technique.  Figure 5 displays the difference between the original image and the pan-sharpened image.  The pan-sharpened image exhibits a higher degree of contrast than the original image.  This allows for greater interpretation of colors in the pan-sharpened image.

Figure 5: Original image on the left, Pan-sharpened image on the right
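
A sketch of the multiplicative merge on hypothetical single bands; the display rescaling that normally follows is omitted, and this is not the exact Resolution Merge algorithm, only the general idea.

```python
import numpy as np

# Hypothetical inputs: one 30 m multispectral band and a 15 m panchromatic band.
ms_band = np.random.randint(20, 200, size=(100, 100)).astype(float)   # 30 m
pan = np.random.randint(20, 200, size=(200, 200)).astype(float)       # 15 m

# Nearest-neighbour resampling of the multispectral band to the 15 m grid:
# each coarse pixel is simply repeated in a 2 x 2 block.
ms_15m = np.repeat(np.repeat(ms_band, 2, axis=0), 2, axis=1)

# Multiplicative merge: the resampled band is multiplied by the panchromatic
# band, injecting the finer spatial detail into the multispectral values.
sharpened = ms_15m * pan
print(ms_band.shape, "->", sharpened.shape)
```
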
Radiometric Enhancement Techniques

Haze Reduction
One radiometric enhancement technique is haze reduction.  This process reduces the appearance of haze in a remotely sensed satellite image.  To conduct this process, the Radiometric-Haze Reduction tool was used in the raster processing tools of ERDAS Imagine 2013.  All parameters of the haze reduction tool were left as the defaults.  The figure below displays the original image and the image produced by haze reduction (Figure 6).  The haze that is visible in the original image is dramatically reduced in the newly created image.  It should be noted that the areas that exhibited haze in the first image are not completely fixed in the second image; a shadow or transparent gray area is still present.

Figure 6: Original image on the left,
Image produced using the haze reduction tool on the right
Linking View to Google Earth
Google Earth can serve as an image interpretation key because it can show a true color aerial image of the features one is observing in the Erdas image viewer.  It can help to better visualize almost all image interpretation keys such as texture, size, pattern, shadow, site and color.  This is done by first adding a satellite image to the Erdas image viewer then clicking the Google Earth button on the top of the screen.  Connect to Google Earth is selected next; this will open Google Earth.  To link the Erdas image viewer with Google Earth, Match GE to View was selected.  This sets the spatial extent of Google Earth to the image in Erdas.  To synchronize Erdas and Google Earth, the Sync GE to View button is selected.  Once the images are connected and synced, it is possible to use the zoom in and out buttons to view the image in Erdas and Google Earth at the same spatial extent.  Figure 7 exhibits the desktop screen when the image in Erdas has been connected to Google Earth.

Figure 7: Erdas & Google Earth image synchronization

Resampling
Resampling is a mathematical technique used to create a new version of an image with a different pixel size.  It is employed for image rectification or geometric correction purposes and when data are collected from different sensors with different pixel sizes.  There are many methods of resampling.  For this lab, the Nearest Neighbor and Bilinear Interpolation techniques were used.

Nearest Neighbor
The Nearest Neighbor technique was used to change the pixel size of an image from 30 meters by 30 meters to 30 meters by 20 meters.  The Resample Pixel Size tool was used in the Spatial Raster tool set.  Within the Resample Pixel Size tool, the Resample Method was left as the default because the default is the nearest neighbor method.  The output cell size had to be set at 30 meters by 20 meters.  All other parameters were left as the default values.  Figure 8 displays the original image (30 m x 30 m) and the resampled image (30 m x 20 m).

Figure 8: Nearest Neighbor Resampling Technique
Figure 9: Large scale view of original & resampled image
The pixels in the resampled image are rectangular because the pixel size was resampled to 30 m by 20 m.  Because the nearest neighbor technique was used, each output pixel value is taken from the closest input pixel.  Contrast is therefore still apparent, but the areas of contrast are larger (Figure 9).
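
A sketch of nearest neighbor resampling onto a grid with a different number of columns; the array and grid sizes are hypothetical and the geometry is simplified.

```python
import numpy as np

def nearest_neighbor_resample(band: np.ndarray, out_rows: int, out_cols: int):
    """Build the output grid, then copy each output pixel from the single
    closest input pixel - no new brightness values are invented."""
    rows = np.round(np.linspace(0, band.shape[0] - 1, out_rows)).astype(int)
    cols = np.round(np.linspace(0, band.shape[1] - 1, out_cols)).astype(int)
    return band[np.ix_(rows, cols)]

# A 30 m x 30 m band resampled onto a 30 m x 20 m grid keeps the same number
# of rows but gains half again as many columns.
band = np.random.randint(0, 256, size=(400, 400))
print(band.shape, "->", nearest_neighbor_resample(band, 400, 600).shape)
```
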

Bilinear Interpolation
The same resampling process was followed to create a bilinear interpolation resampled image, except the resampling method was changed to Bilinear Interpolation in the Resample Pixel Size tool.  The pixels of the bilinear interpolated image are smaller than the original image pixels because the image was resampled to a pixel size of 20 meters by 20 meters.  This creates more contrast around the borders of the features shown in the image (Figures 10 and 11).

Figure 10: Bilinear Method of resampling
Figure 11: Large scale view of Bilinear Interpolation resampling technique

Conclusion
This lab provided students with valuable skills in remote sensing.  Study area delineation, synchronization to Google Earth, resampling, image subsetting, and other radiometric enhancement techniques can be used to better interpret satellite imagery.  These techniques are important to the field of remote sensing because better interpretation of aerial imagery allows for better results in remote sensing analysis.