Zach Miller GIS Portfolio

Research Blog


Classifying UAS imagery for prescribed burn analysis

9/25/2019


Introduction

Figure 1. Study area map
Prescribed burning of maintained areas helps land managers reduce the likelihood of wildfire by reducing the amount of fuel available, while also returning carbon to the soil and callusing the bark of trees. Using a C-Astral Bramor fixed-wing UAV equipped with a MicaSense Altum multi-spectral sensor and a Field of View PPK system, a property maintained by Purdue's forestry department was imaged before and after a prescribed burn. The images from each flight were then orthomosaicked in Pix4D and classified in ArcGIS Pro. The finished classifications depict areas of different vegetation and, for the post-burn classification, areas of successful and partial burns. This provides land managers with accurate, quantifiable information regarding the success of their prescribed burns and the species characteristics of their property.

Methods

To map the study area (shown in Figure 1) both before and after the prescribed burn, a C-Astral Bramor equipped with a Field of View PPK system and a MicaSense Altum multi-spectral sensor was used. To learn more about the C-Astral Bramor, visit Alan Pecor's blog, which has tutorials on various applications and components of the platform. A larger area within the Doak property line was captured with the Bramor on each flight; however, to speed up processing for the purposes of this demonstration, the yellow Classification Study Area subset was used. After collecting the data and returning to the lab, the PPK information and images were processed in EZ Surv and Pix4D to create two multi-band orthomosaics. Since the Altum is capable of sensing RGB, red edge, near-infrared (NIR), and thermal IR, multiple band-stack orthomosaics were created; however, the composite stacks consisting of the red, green, blue, red edge, and NIR bands were used for these classifications. ArcGIS Pro was used to perform the following data manipulations.
Before classification could commence, the two composites were clipped to the study area boundary (shown in Figure 1) and then assessed to determine which classes should be used. Ultimately, it was decided that both composites would use four common classes: trees, shrubs, prairie grass, and bare earth (shown in Figure 2), while the post-burn imagery would implement an additional two classes ("Successful Burn" and "Partial Burn"). Next, the composites were edited to display a false color representation of the imagery, along with some other visualization settings (shown in Figure 3).
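For those who prefer scripting, the clipping step can also be done with arcpy. Below is a minimal sketch, assuming hypothetical file names for the composites and the study area boundary; in this project the step was performed through the ArcGIS Pro interface.

    import arcpy

    # Hypothetical inputs: the two 5-band composites and the study area boundary
    composites = ["pre_burn_composite.tif", "post_burn_composite.tif"]
    boundary = "classification_study_area.shp"

    for comp in composites:
        # "ClippingGeometry" clips to the polygon outline, not just its extent
        arcpy.management.Clip(comp, "#", comp.replace(".tif", "_clip.tif"),
                              boundary, "0", "ClippingGeometry",
                              "NO_MAINTAIN_EXTENT")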
Figure 2. Pre burn classes
Figure 3. False color band stack highlighted in green; other image visualization properties highlighted in red.
By using a false color band stack with each composite, the health of the vegetation and the differences between vegetated and non-vegetated surfaces were more distinguishable. The false coloring puts the NIR band in the red band display, the red band in the green band display, and the green band in the blue band display. This combination depicts healthy vegetation as bright magentas and dark reds, withering vegetation as dull greens and browns, and non-vegetated surfaces as bright greens and light blues. Once this band stack was generated, the "Segment Mean Shift" tool was used to cluster individual pixels together based on color and brightness, thus simplifying the classification training process. At first, the default settings were used (shown in Figure 4), but the clusters were too small and could have confused the classification trainer.
Figure 4. Segment mean shift of orthomosaic imagery, using default 15.5 spectral detail, 15 spatial detail, and 20 minimum segment size in pixels.
The spectral and spatial detail parameters were set to 17 and 10, respectively, which decreased the variance in color between segments (Figure 5) - a step in the right direction for training a classifier.
Figure 5. Segment mean shift of orthomosaic imagery, using 17 spectral detail, 10 spatial detail, and default 20 minimum segment size in pixels.
The minimum segment size was then increased in increments of 20 pixels until the segments appeared large enough to diminish noise both when training the classifier and in the subsequent classification. The minimum segment size decided on was 80 pixels (Figure 6).
Figure 6. Final segment mean shift parameters, using 17 spectral detail, 10 spatial detail, and 80 minimum segment size in pixels.
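The same segmentation can be scripted through the Spatial Analyst module. Here is a minimal sketch using the final parameters above; the file names are placeholders.

    import arcpy
    from arcpy.sa import SegmentMeanShift

    arcpy.CheckOutExtension("Spatial")

    # Final parameters: 17 spectral detail, 10 spatial detail,
    # 80-pixel minimum segment size
    segmented = SegmentMeanShift("pre_burn_composite_clip.tif",
                                 spectral_detail=17,
                                 spatial_detail=10,
                                 min_segment_size=80)
    segmented.save("pre_burn_segmented.tif")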
Once both images were segmented, the "Classification" tool was used. The first steps in classification were to create a "Classification Schema", add classes (shown in Figure 7), and collect "Training Samples" for each class (Figure 8). These training samples provide the classification algorithm with the pixel information needed to determine the appropriate class for the rest of the segments in the composite.
Figure 7. Add class to classification schema
In the bottom left-hand corner of Figure 8 is an example of creating a polygon within the segmented image to be used as a training sample. In total, 65 samples were collected for each class. Depending on the size of the area, the variance between classes, the number of classes, and the level of accuracy desired, more or fewer than 65 samples could be collected. For the purposes of this project, 65 samples provided sufficient data for the resulting products (see Results), so reclassification and a larger number of samples were unnecessary. The original RGB composite, the false color composite, and the segmented image were used to determine the validity of the training samples for each class.
Figure 8. Create training samples for each class
Upon collecting samples for each composite, the next step was to train the classifier; in this case, a "Support Vector Machine" (SVM) was used (Figure 9). The maximum number of samples per class was left at the default of 500; since only 65 samples were collected per class, this limit had no effect.
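In script form, the training and classification steps might look like the following sketch. The training sample and file names are hypothetical, and the .ecd file is the classifier definition the trainer produces.

    import arcpy
    from arcpy.sa import TrainSupportVectorMachineClassifier, ClassifyRaster

    arcpy.CheckOutExtension("Spatial")

    segmented = "pre_burn_segmented.tif"
    samples = "pre_burn_training_samples.shp"  # 65 polygons per class

    # Train the SVM; 500 is the default maximum number of samples per class
    TrainSupportVectorMachineClassifier(segmented, samples, "pre_burn_svm.ecd",
                                        max_samples_per_class=500)

    # Apply the trained classifier to the segmented image
    classified = ClassifyRaster(segmented, "pre_burn_svm.ecd")
    classified.save("pre_burn_classified.tif")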
Figure 9. Use training samples to classify image
A preview of the classified image was generated and accepted, although if the preview had looked too noisy or misclassified, the classification trainer is capable of accepting edits to the samples or the classification method to achieve a better result. The trainer proceeded and generated a classified image (see Results). This output did not contain pixel counts or area calculations for each class, so the next step was to "Reclassify" the classified image (confusing, right?) to obtain pixel counts. This was done using the "Reclassify" tool (Figure 10). The class values were kept the same, since a reclassification wasn't actually what was desired; it was the pixel counts (*see the Discussion if this step is confusing).
Figure 10. Use reclassify tool to obtain pixel counts for each class
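A scripted version of this identity "reclassification" might look like the sketch below, assuming six post-burn class values numbered 1 through 6; the actual values depend on your schema.

    import arcpy
    from arcpy.sa import Reclassify, RemapValue

    arcpy.CheckOutExtension("Spatial")

    # Map each class value to itself; the output gains an attribute table
    # with a Count (pixel count) column
    identity = RemapValue([[v, v] for v in range(1, 7)])
    reclassified = Reclassify("post_burn_classified.tif", "Value", identity)
    reclassified.save("post_burn_reclass.tif")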
The output had the same classification as before, only this time an attribute table containing pixel counts for each class was generated. Within each attribute table, a new column was created and named "Area" (Figures 11 and 12).
Figure 11. Add field to attribute table
Figure 12. Input field information
The column was added to the attribute table with <NULL> values in its rows. To generate the area for each class, the ground sampling distance (GSD) was obtained from the layer properties under the "Source > Raster Information" tabs (Figure 13). Then, using the "Calculate Field" tool in the attribute table window (Figure 14), the area was calculated by multiplying each class's pixel count by the X and Y cell size GSD values (shown in green in Figure 13).
Figure 13. Calculate area using the "Calculate Field" attribute table tool and multiplying the GSD of X and Y cells.
The area for each class was calculated in square meters (see Results). These measurements could be used in further analysis to quantify the areas of partial and successful burned area as well as types of species burned within burn areas.
Figure 14. "Calculate Field" attribute table tool
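A scripted equivalent of this area calculation is sketched below, with hypothetical file names; here the GSD is read from the raster itself rather than from the layer properties dialog.

    import arcpy

    raster = "post_burn_reclass.tif"

    # Cell size (GSD) in map units, matching the values shown in Figure 13
    desc = arcpy.Describe(raster)
    gsd_x = desc.meanCellWidth
    gsd_y = desc.meanCellHeight

    # Area per class = pixel count * X GSD * Y GSD
    # (square meters if the raster uses a metric projected coordinate system)
    arcpy.management.AddField(raster, "Area", "DOUBLE")
    arcpy.management.CalculateField(raster, "Area",
                                    f"!Count! * {gsd_x} * {gsd_y}", "PYTHON3")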

Results

Figure 15. Final classified pre-burn map
Figure 16. Final classified post-burn map
Figure 17. Pre-burn attribute table
Figure 18. Post-burn attribute table
Figure 19. Confusion matrix for pre-burn classification
Figure 20. Confusion matrix for post-burn classification

Discussion

As shown in the resulting maps and confusion matrices, the classification turned out well. Figures 15 and 16 do show some noise and misclassification, but mostly in areas outside of the burn plots. In cell (8, 7) of Figures 19 and 20, where the user's and producer's accuracies intersect, the overall accuracy was 80% and 86%, respectively. Considering the lack of training sample refinement and reclassification done in this project, the initial results turned out quite well.
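For reference, confusion matrices like those in Figures 19 and 20 can also be produced in script form. The sketch below assumes hypothetical file names and a set of stratified random assessment points whose ground-truth field is filled in by photo interpretation.

    import arcpy
    from arcpy.sa import CreateAccuracyAssessmentPoints, ComputeConfusionMatrix

    arcpy.CheckOutExtension("Spatial")

    # Generate stratified random points from the classified raster; their
    # ground-truth field is then populated (here, by photo interpretation)
    CreateAccuracyAssessmentPoints("post_burn_classified.tif",
                                   "assessment_points.shp",
                                   "CLASSIFIED", 100, "STRATIFIED_RANDOM")

    # Once the ground-truth values are populated, compute the matrix
    ComputeConfusionMatrix("assessment_points.shp", "confusion_matrix.dbf")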

For purposes of brevity in demonstrating the methods presented in this project, the accuracy assessment samples were only ground-truthed against the input imagery and my novice forestry knowledge. While this is an efficient way of getting the job done, it is recognized that it is also not the best way to assess the accuracy of a classification. One option for ensuring accurate classification in future work would be to collect geolocated samples of various species before the flight; once added as a spatial layer, these could be used to collect better samples of known vegetation types. Another option would be to obtain the spectral signatures of various known species and perform a classification in more robust software. If nothing else, a thorough reclassification of commonly misclassified segments could be performed as an additional step not taken in this demonstration.

Looking at Figures 17 and 18, the pixel counts and calculated areas for each class could also be used in further analysis of this imagery. By clipping areas of the classified raster where, for example, a section of the burn area was missed, the analyst could then obtain the pixel counts and area information for each class within that clipped layer. Outside of wildland fire, utilizing classification and pixel counts to quantify crown widths, species counts, and other information is incredibly beneficial to data collection practices.
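As a sketch of that idea, assuming a hypothetical polygon marking a missed section of the burn, the classified raster could be masked and its per-class counts read back:

    import arcpy
    from arcpy.sa import ExtractByMask

    arcpy.CheckOutExtension("Spatial")

    # Clip the classified raster to the polygon of interest
    subset = ExtractByMask("post_burn_classified.tif", "missed_burn_section.shp")
    subset.save("missed_section_classes.tif")

    # Ensure the subset has an attribute table, then read pixel counts per class
    arcpy.management.BuildRasterAttributeTable("missed_section_classes.tif",
                                               "Overwrite")
    with arcpy.da.SearchCursor("missed_section_classes.tif",
                               ["Value", "Count"]) as rows:
        for value, count in rows:
            print(value, count)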

*Reclassification step:

As shown in Figure 9, the training samples manager contains a pixel count option for the output attribute table; however, after generating the initial classified image, the attribute table did not contain this information. It was discovered that running the "Reclassify" tool with the same values for each class was a workaround for this issue. Once the "reclassified" image was generated, the pixel count column appeared and the area calculation could be executed.

Conclusion

When analyzing the success of a prescribed burn, mapping species coverage, or calculating stand dimensions, classifying multi-spectral UAS imagery is an effective method for obtaining the in-depth information necessary in land management. While multi-spectral imagery is certainly not required to perform classification, it does provide a greater selection of analytical opportunities depending on what is needed. Some things to consider for future work in classifying UAS imagery would be to 1) collect a few coordinates for each class prior to leaving the site, 2) compare other classification algorithms in various software packages, and 3) overlay NDVI and other indices to observe any correlation between the success of a burn area and the species and health of the vegetation.
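On point 3, NDVI can be derived directly from the five-band composites used here. A minimal sketch, assuming the band order listed in the Methods (red is band 1, NIR is band 5) and placeholder file names:

    import arcpy
    from arcpy.sa import Raster, Float

    arcpy.CheckOutExtension("Spatial")

    comp = "post_burn_composite_clip.tif"
    red = Raster(comp + "/Band_1")  # stack order: R, G, B, RedEdge, NIR
    nir = Raster(comp + "/Band_5")

    # NDVI = (NIR - Red) / (NIR + Red)
    ndvi = (Float(nir) - Float(red)) / (Float(nir) + Float(red))
    ndvi.save("post_burn_ndvi.tif")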

While there is much work to be done in terms of developing the most effective methods for UAS in forestry practices, it is encouraging to know that advancements in UAS and GIS technology will expedite solutions to long-endured problems in land management.