Thursday, September 27, 2018

GIS4930 - Module 2: MTR (Analysis Week)

Goal (Why): Analyze 2010 Landsat data to identify signs of MTR in the Appalachian Coal Region of West Virginia and surrounding states.

This week I changed my process a little bit.  I'm spending more time up front: planning, working on a project mindmap/process map, laying out my blog, and defining what I'm going to do before jumping into the lab.  I think of Project Objectives as what is planned in order to accomplish the project goal/deliverables.  I'll be updating the Objectives section as I clear each hurdle during this week's efforts.  You guessed it, this week's theme is a track & field relay race.

Objectives (What):
  • Complete Analysis Lab
    • Explore data in ArcMap
    • Further explore data in ERDAS IMAGINE
    • Analyze Group 1 Landsat Image
      • Step 1: Prepare your Landsat image for analysis - L2010_p17r33.img
        • Use the Composite Bands tool to create a single raster dataset
        • Use the Extract By Mask tool to clip to the basin raster
      • Step 2: Conduct unsupervised image classification - clstr_2010_50_md.img
      • Step 3: Reclassify imagery - Rclss_p17r33.img
      • Step 4: Update MTR Story Map Journal
    • Look ahead: Anticipate how to present/report on this week's endeavors and envision what some next steps might look like.
  • Complete process summary
  • Screenshot of Reclassified MTR raster
  • Finalize this blog and update the MTR Story Map Journal:
    • https://pns.maps.arcgis.com/apps/MapJournal/index.html?appid=d020fe9fd64d490dba2f3c64b856e596


Here we are at the start of week 2.  I decided to crop out a portion of my project mindmap to highlight the handoff from week 1 to week 2.  This handoff is analogous to a Python method passing its output to another method.  The diagram on the left illustrates week 2 accepting week 1's outputs (DEM, Streams, and Basins) as inputs.  You can also see the reference to a Python script using two library toolsets (Mosaic and Hydrology) to generate three spatial outputs, which become inputs for week 2.  The gist is week 1 handing off its payload to week 2, much like the smooth baton pass of a track relay shown in the image on the right.




So what are we going to do this week with last week's accomplishments?  Last week's deliverables are not the subject of analysis this week; however, the boundaries produced last week can serve as a window into this week's image classification.  This week we continued to look for signs of mountaintop removal using another type of imagery: Landsat.  The tools for inspecting the Landsat image are ERDAS IMAGINE and ArcMap 10.6.1.  Recall that the overall study region was divided among multiple groups; I'm still a member of Group 1.  The area assigned to my group is broken down into several Landsat rasters.  I analyzed one raster, L2010_p17r33.img, as my portion of the assigned work while other members worked with other rasters.  The focus of this week was visually identifying evidence of mountaintop removal, performing unsupervised image classification, creating a new cluster property (Class_Name), and assigning Class_Name a value ("MTR" or "non-MTR").  Below is a reminder of the study region I created in last week's post.

So what actually happened during this week's analysis phase?  Well, for starters, getting reacquainted with ERDAS IMAGINE was an arduous task.  The feeling was reminiscent of last week, when I was starting to use IDLE to review the starter template script.  After clearing this first hurdle (ERDAS), the next was remembering lessons learned from a past remote sensing class about classifying images.  The next series of hurdles involved understanding the basic types of image classification: what's the difference between a supervised and an unsupervised image classification?  After some internet searching, I was quickly back on track at a high level.  Basically, unsupervised image classification is guided by a software clustering algorithm (no landscape training required), while supervised classification is human-guided: a person supplies landscape training samples to help the computer algorithm understand the landscape.  Well, that's how much of this week went, recalling the past as I processed each step of this week's lab instructions.
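To make the distinction concrete, here is a minimal sketch of what an unsupervised classifier does: it groups pixel reflectance values into clusters using only the structure in the data, with no training samples.  This is plain-Python 1-D k-means, a toy stand-in for the clustering ERDAS actually performs, not the real algorithm:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Toy unsupervised classifier: group reflectance values into k
    clusters.  No training samples are supplied -- the algorithm finds
    the structure on its own."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest cluster center
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute each center as the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

A supervised classifier would instead be handed labeled examples ("these pixels are forest, these are bare rock") and fit a decision rule to them; here the cluster centers emerge purely from the pixel values.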

Exploring this week's Classification/Reclassification endeavors


The image to the right depicts this week's deliverable outputs.  My plan for this map was to show how last week's boundaries intersect this week's analysis, so I'm using color on top of a grayscale background to make the area of interest stand out.  The red-bounded extent with the white background is the Landsat image I investigated this week.  Per the lab instructions, every pixel in this area was examined with ERDAS IMAGINE using an unsupervised classification process that sorted all the pixels, based on each pixel's spectral reflectance value, into fifty fixed clusters.  Then I examined the image in known areas of MTR, defined those clusters with a custom property (Class_Name), assigned the value "MTR", and colored them red.  For all other clusters, I set Class_Name to "non-MTR" and assigned no color.  The Class_Name field provides an easy way to query all pixels associated with MTR, which will be important later when we clip/mask the entire extent of the image to a smaller region, the Group 1 study area.  Note that the majority of my Landsat image, "P17R33", is outside the area of interest (AOI) of the Group 1 study area.
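The Class_Name assignment itself is just a mapping from cluster id to label. A small sketch of that step (the cluster numbers 7 and 23 are hypothetical; the real work was deciding by eye which clusters looked like MTR):

```python
def label_clusters(cluster_ids, mtr_clusters):
    """Build the Class_Name attribute: 'MTR' for visually confirmed
    clusters, 'non-MTR' for everything else."""
    return {cid: ("MTR" if cid in mtr_clusters else "non-MTR")
            for cid in cluster_ids}

def mtr_pixels(pixels, labels):
    """The query step: keep only pixels whose cluster is labeled 'MTR'.
    `pixels` is a list of (pixel_id, cluster_id) tuples."""
    return [pid for pid, cid in pixels if labels[cid] == "MTR"]
```

Once Class_Name exists, selecting every suspected MTR pixel is a one-line attribute query rather than another pass over the imagery.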
When I zoomed in close to inspect known areas of MTR it was obvious by panning around that not all suspect clusters were valid MTR sites.  Authentic MTR areas are a type of urbanization (big machines were used to modify the landscape).  Hence, it’s important to remember that not all red locations classified as "MTR" are true MTR sites.
Let me repeat to be very clear, all red clusters are NOT valid locations of MTR.
Unsupervised classification does not include any landscape training, so other types of urbanization (roads, buildings, lot clearing) will obscure the identification of valid MTR clusters.  There will also be atmospheric interference (clouds, moisture, etc.) that further complicates identification, since various types of interference share spectral reflectance traits with real MTR locations.  So be mindful of urbanization and interference when viewing this map.  I don't want to mislead the map viewer into thinking that all this red is associated with valid MTR mining.  On the contrary, these red clusters are merely the results of unsupervised classification; they are suspect locations of MTR that will require ground truthing to verify as real MTR locations.
To the left is a close look at my Group 1 region.  Linear regions are easy to infer as roadways, but it's difficult to truly identify legitimate MTR locations.  I've highlighted two regions that are authentic MTR clusters.  Visit again next week as we continue the search for evidence of MTR mining as the various types of interference are stripped away.



What was learned/remembered this week?
This week was definitely a flashback to the Remote Sensing course of yesteryear.  Relearning the two common types of image classification was the remarkable hurdle this week.

What was fun and or challenging this week?
Grayscale maps continue to be fun.  They let the area of interest take center stage.  The not-so-fun thing this week was making sense of unsupervised classification.  It was tough getting started with ERDAS IMAGINE, but I managed to plow through it.  I'm sure with more practice this task would become fun someday!

What were some Weekly Positives?
Completing the Landsat image classification using unsupervised classification was definitely a positive.  A subtler positive discovered this week was noticing the pattern of putting information into a system, then doing something with that input to produce an output.  We see this basic model everywhere.
Visualizing the importance of input-process-output (IPO).
This simple model is so subtle it is easily overlooked, but once you look for it, it's a common underlay in many technical and non-technical (economic) processes.  Take, for instance, coal as an input for the steel industry and steel as an input for the coal industry, and possibly the car industry requiring both coal and steel as inputs to output cars.  Here we see the goods of upstream industries serving as inputs for producing final goods (outputs).  IPO is ubiquitous; it's the basic model that processes follow, if you look closely enough.

In summary, the main objective this week was to create a reclassified raster using a combination of ERDAS IMAGINE and ArcMap to identify areas of suspected MTR.  Now that we have completed two-thirds of this MTR project, it's time to look ahead to the final week and envision a final presentation that reports our MTR mining findings.  Below is an outline of high-level objectives for next week.

  • Convert MTR raster to polygons using Raster to Polygon (Conversion Tool)
  • Remove MTR areas that are smaller than a given acreage.
  • Create buffers around roads and rivers, removing MTR areas that fall within those buffers.
  • Perform an accuracy test using random points on your map.
  • Compare your 2010 MTR data with the 2005 dataset.
  • Package your dataset for your group leader.
  • Compile group data into a single dataset for group study area (group leader only).
  • Create map service presenting your group findings with ArcGISOnline, UWF Organization.
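One of those look-ahead steps can be sketched already: dropping candidate MTR polygons below a minimum acreage is just an attribute filter over the table that Raster to Polygon produces. A hedged sketch in plain Python (the function, the tuple layout, and the 10-acre example threshold are all illustrative):

```python
SQM_PER_ACRE = 4046.8564224  # square meters in one acre

def filter_small_areas(polygons, min_acres):
    """Keep polygon ids whose area meets the minimum acreage.
    `polygons` is a list of (id, area_sq_m) tuples -- a stand-in for
    the attribute table produced by the Raster to Polygon tool."""
    min_sqm = min_acres * SQM_PER_ACRE
    return [pid for pid, area_sqm in polygons if area_sqm >= min_sqm]
```

In ArcMap this would be a Select By Attributes on the shape-area field rather than a script, but the filtering logic is the same.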
Song of the Week
My relay-race playlist looks like this.
 Cruise - by FL GA Line

And first pick was Cruise by FL GA Line:
https://www.youtube.com/watch?v=8PvebsWcpto

Off topic, another favorite of mine by FL GA Line is Simple:
https://www.youtube.com/watch?v=3GeaYy6zlXU


References:
• https://articles.extension.org/pages/40214/whats-the-difference-between-a-supervised-and-unsupervised-image-classification
• https://gisgeography.com/image-classification-techniques-remote-sensing/
• https://gisgeography.com/supervised-unsupervised-classification-arcgis/
• https://www.youtube.com/watch?v=8PvebsWcpto

Thursday, September 20, 2018

GIS4930 - Module 2: Mountain-Top Removal (Prepare Week)

During the first week of Mountain-Top Removal (MTR), data preparation was the primary goal and the following were the accomplished objectives:
  • Watch Lecture video of Amber interviewing John Amos, creator of SkyTruth
  • Understand the Study Area & What to deliver this week
  • Use the Mosaic raster toolset to create a mosaic dataset
  • Use ArcMap's Hydrology toolset to create stream and basin spatial data
  • Fix a python script template to automate the creation of Mosaic and Hydrology features
  • Create a basemap using outputs of Mosaic raster and Hydrology toolsets
  • Create Blog and iteratively update using the Deming Cycle: plan-do-study-act
  • Create an MTR Story Map 
  • Create an MTR Journal 

The map on the left illustrates the study region for Project 2.  The MTR region spans five states: Ohio, West Virginia, Virginia, Tennessee, and Kentucky, covering an area of approximately 77,904,706,396 sq m (~19,250,672 acres).  Members of Group 1 worked with data associated with the eastern area shown on the map.  I downloaded the basic US states layer from an ESRI site (referenced below) to make this simple map highlighting the study region.


The map on the right is a copy of a working map document I created to troubleshoot a standalone Python script that automated creating a mosaic raster dataset and deriving hydrology spatial data (streams and basins) from a Digital Elevation Model (DEM).  I started by passing the 4 DEM rasters assigned to Group 1 into the Mosaic raster toolset.  In the Python script, these 4 DEMs were expressed in a list passed to the MosaicToNewRaster_management method, and the output of this method was then passed to another tool (ExtractByMask) that masks/clips the raster to the boundary of the Group 1 study area.  This new masked raster has elevation data ranging from 156 to 1435 meters, as you can see from the legend (darn, looks like I forgot the reference to meters in the legend).  The elevation data in the new DEM raster was then used by the Hydrology toolset to generate streams and basins by running a set of tools where the output of each tool feeds the next method.  Again, this tool/method execution (call stack) was very obvious in the script and nicely commented as steps 4-10.  Fixing the script was really just a plumbing task of creating the appropriate file paths to keep the inputs and outputs from colliding as each tool passed its output to the next tool.  I added an inset map to reference the study region, shown as a purple boundary; the red area is the Group 1 work area I analyzed this week.  The Hydrology toolset did all the work of generating the stream and basin features, and the Python script made this week's data preparation very fast and efficient.  The boundary features were provided in this week's project data, the ESRI basemap was added via ArcMap 10.6.1, and the state boundary features in the inset map were downloaded from Census TIGER files.
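The plumbing idea — each tool writing its output to a planned path that becomes the next tool's input — can be sketched without arcpy. The step names and their order below are assumptions reconstructed from this description (a Fill → Flow Direction → Flow Accumulation style chain), with a print stub standing in for each real geoprocessing call:

```python
import os

def run_tool(tool_name, in_path, out_path):
    """Stub for a geoprocessing call; in the real script these were
    arcpy tools such as MosaicToNewRaster_management and ExtractByMask,
    each writing its result to a planned file path."""
    print(f"{tool_name}: {in_path} -> {out_path}")
    return out_path

def hydrology_pipeline(workspace, dem_inputs):
    """Each tool's output path is fed to the next tool as input --
    the call stack described in steps 4-10 of the script."""
    steps = ["mosaic", "extract_by_mask", "fill",
             "flow_direction", "flow_accumulation", "streams", "basins"]
    current = ";".join(dem_inputs)  # the 4 DEMs go into the first tool together
    for step in steps:
        current = run_tool(step, current, os.path.join(workspace, step + ".img"))
    return current
```

Keeping every intermediate output in its own planned path is what prevents the input/output collisions mentioned above.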

What was learned this week?
Getting the Python script to work was a little tricky at first, but once I got into troubleshooting and set all my variables to my planned file paths, the script quickly started behaving and stopped complaining so much.  The old saying is true: "If you don't use it, you lose it."  But there is a remedy: start using the old skillset.  There might be a little dust on your bottle of Python knowledge, but once you brush off the cobwebs, you might be surprised by what's inside (which reminds me of an oldie but goodie from David Lee Murphy - http://www.davidlee.com/music/song-lyrics/out-with-a-bang-lyrics/dust-on-the-bottle/)


What was fun this week?
The fun things this week were story maps, getting the Python to behave, experimenting with those light-gray basemaps, and learning about the Mosaic and Hydrology toolsets.  I found a short YouTube video that helped me start understanding the Hydrology Fill method, sinks, and headwaters (see reference link below).

What were some Weekly Positives?
 SKYTRUTH - https://www.skytruth.org/
 SKYTRUTH helps the masses see the change so citizens can participate to help CHANGE IT.


In summary, this week was an eye-opening experience of the coal industry and the MTR process.  I liked exploring and getting started with Story Mapping, and this week's Python script was a good refresher.  At first, getting started with the script template was challenging; it had been a long time since I last cracked open a Python editor.  I started by understanding what the script was meant to accomplish and created a working map document with all my planned empty folders for the script to dump the generated features into.  I stuck with the basics of wrapping all method calls with verbose print statements and walked the call stack until I resolved all script errors.  I then used my working map document to visualize the expected features: raster (DEM), streams (line), basins (polygon).  The top-down execution of method calls in the Python script made it easy to follow the logic and workflow of this week's data manipulation.
Finally, I reused the working map document: I created a new map document by copying it, and after a few edits and applying all the map essentials, I was able to quickly create the map displayed above.  This week's course material is starting to get easier to decipher, and I'm developing a rhythm for completing weekly assignments and getting reacquainted with searching for and downloading GIS data.

References:
• http://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/defining-or-modifying-a-raster-coordinate-system.htm
• http://desktop.arcgis.com/en/arcmap/latest/tools/spatial-analyst-toolbox/an-overview-of-the-hydrology-tools.htm
• https://www.arcgis.com/home/item.html?id=f7f805eb65eb4ab787a0a3e1116ca7e5
• https://www.youtube.com/watch?v=0D5kG6_3rTI
• http://www.davidlee.com/music/song-lyrics/out-with-a-bang-lyrics/dust-on-the-bottle/

Saturday, September 15, 2018

GIS4930 - Module 1: (Network Analysis) Hurricane Evacuation Reporting

Welcome back to Hurricane Preparedness in ArcGIS.  This week we return to the last part of Project 1: Understanding Network Analyst.  Last week was all about preparing and analyzing the project data originally supplied by UWF.  This week the focus is on using the route lines generated last week to create the Project 1 deliverables (evacuation maps and a hospital evacuation brochure) supporting two planned scenarios:
  1. Evacuating patients from Tampa General Hospital (TGH) to other local hospitals, along with the creation of brochures to help patients prepare to evacuate.
  2. Distribution of emergency supplies by the U.S. Army National Guard to three storm shelters.
The route map below is an example from scenario one.  This map is intended to assist hospital patients evacuating TGH for Memorial Hospital (MH).  The same map was slightly modified (the redundant title was removed) and loaded into a hospital evacuation brochure template.





The grayscale route maps in this area are from scenario two.  Their intended audience is a first responder delivering supplies from a local National Guard armory to an emergency shelter.  These maps are intended to provide an easy-to-follow street route with clear, descriptive directions.  A grayscale basemap was used to follow the theme of reduced complexity.
Color tends to increase the complexity of a map presentation.  Color (compared to grayscale) is also more expensive to publish, so there would be added cost savings in mass-producing a grayscale map or brochure as in this hurricane scenario.


In summary, this week was more work than I first anticipated.  The two scenarios ultimately required 5 maps, though more working maps were created along the way to produce the final ones.  Making the route maps functional, with instructions that can be followed turn by turn, added more time to each deliverable.  Experimenting with grayscale (shades of white/black) maps was interesting for me this week; it made me realize there are MANY applications where grayscale and color together can help map users quickly read and interpret a map.  While too many colors can complicate a map, color information can make a map much easier to use if the number of colors is limited to three (five max).  Lastly, organizing the data and developing a naming convention help keep project deliverables in order, and requiring a map template would let team members of varied GIS skill levels share the workload and produce consistent-looking deliverables.


Friday, September 7, 2018

GIS4930 - Module 1: Tampa Bay Hurricane Preparedness

Special Topics in GIS
Module 1 Tampa Bay Hurricane Preparedness - Week: Prepare/Analysis

Hurricane preparedness was the theme for the first week in Special Topics.  The setting is the peak of hurricane season: the Gulf temperature is very warm, the month of September is hot and steamy, and several unnamed tropical depressions, rainstorms hovering in the Caribbean, and two Gulf of Mexico hurricanes (Jaws and Deliverance) set the stage.  Jaws is a Category 1 storm, Deliverance is a strengthening Category 3 storm, and downtown Tampa Bay, FL is hoping for some dry Saharan dust to weaken the present threats.  The general scenario for this lab is hurricane evacuation using ESRI's ArcGIS Network Analyst extension.  Project goals are routes, driving instructions, and evacuation maps.  A mind-map of this project looks like the following:

The first step was to gather and prepare various feature classes and transform them into a network dataset, which, with the help of the Network Analyst extension (NAE), could generate a transportation network to model evacuation routes.  The gathered points, lines, and polygons included shelters, hospitals, fire stations, police stations, an armory, roads, flood zones, and the study area boundary.  This collection of features comprised the network dataset.  The NAE provides the logic for applying the appropriate routing methods, which ultimately generate the final routes to be presented on public evacuation maps.  The end goal is polished evacuation maps to be televised to the public in preparation for a planned evacuation out of harm's way.
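Conceptually, the route solve the NAE performs over that network is a shortest-path search: junctions are the nodes and road segments are edges weighted by travel cost. A minimal sketch of that idea (plain-Python Dijkstra with made-up junction names and costs, not the actual Network Analyst solver):

```python
import heapq

def shortest_route(edges, start, goal):
    """Minimal Dijkstra over a road network.
    `edges` maps node -> list of (neighbor, travel_cost)."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in edges.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Rebuild the route from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

With hypothetical edges {"TGH": [("A", 1), ("MH", 5)], "A": [("MH", 1)]}, the solver prefers the two-hop route through junction A (total cost 2) over the direct edge (cost 5) — exactly the trade-off a travel-time-weighted road network captures.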

Data preparation consisted of organizing the project data in a file geodatabase (FGDB) and a feature dataset.  Custom tools were used to clip all the project data to a region of downtown Tampa Bay and to project the existing spatial data to a State Plane coordinate system, NAD_1983_HARN_StatePlane_Florida_West_FIPS_0902.  A Digital Elevation Model (DEM) of the study area was reclassified with the Reclassify tool from the Spatial Analyst extension.  The reclassified raster was then converted to a polygon layer using the Raster to Polygon conversion tool, and a selection (grid_code < 6) of that layer was exported as the flood zone layer.  The road layer also needed some additional fields (Miles, Seconds, and Flooded) populated with the help of the Field Calculator.  Finally, the metadata for these features was updated and exported using the Export Metadata Multiple tool.
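The reclassify-then-select step can be illustrated with a toy version in plain Python. Everything here is hypothetical except the grid_code < 6 selection itself: the 2 m class size and the cell coordinates are made up for the example.

```python
def to_grid_code(elev_m, class_size=2):
    """Bin an elevation value into an integer grid code
    (hypothetical equal-interval classes of `class_size` meters)."""
    return int(elev_m // class_size) + 1

def flood_zone(cells, threshold=6):
    """Keep cells whose reclassified grid_code falls below the threshold,
    mirroring the grid_code < 6 flood-zone selection.
    `cells` is a list of (x, y, elevation_m) tuples."""
    return [(x, y) for (x, y, elev_m) in cells
            if to_grid_code(elev_m) < threshold]
```

In the lab this logic lives in two tools (Reclassify, then a selection on the converted polygons), but the net effect is the same: low-lying elevations end up in the flood zone layer.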


The basemap to the right displays the point, line, and polygon feature classes that were gathered and prepared for setup in a network dataset, which serves as the input the NAE requires to perform its various routing methods and generate route lines.  The road features (lines) serve as the edges, and the Tampa point features serve as the junction network elements.  Various types of Network Analysis classes (containers) exist for each type of analysis the extension can perform; these classes hold the network analysis objects that work behind the scenes to generate the routes we hope to discover.

Having experienced the wrath of Irma in 2017, I found the analysis part of the lab very informative.  I don't remember ever using the Network Analyst extension (NAE) in prior courses, but I'm glad to be introduced to it.  Below is an example of a map I made using the routes generated by the NAE.  I used several legend inserts for the various scenarios of the project to help organize map content.  Balancing map elements and white space was fun to work out.


In summary, planning an evacuation route consists of three steps: gather inputs, select routing methods, and map the final routes.  Up to this point, the data had been gathered and massaged, ready for the NAE to act on, and I actually solved the routes this week.  After the data was prepared and a network dataset set up, several scenarios were analyzed: Tampa General patient evacuation, emergency supply distribution, routes for distributing supplies, multiple evacuation routes for low-lying populated locations, and a shelter map.  The final step is planning, creating, and delivering a public evacuation story, and revisiting CorelDRAW.

This week was a good practical experience of putting GIS tools into action and building a valuable workplace skill (the Network Analyst extension).  The lab instructions were pretty good.