
Analysing Drone and Satellite Imagery using Vegetation Indices

A majority of our ecosystem monitoring work involves acquiring, analysing and visualising satellite and aerial imagery. Creating true-colour composites, using the Red, Green and Blue (RGB) bands, allows us to actually view the land areas we’re studying. However, this is only a first step; creating detailed reports on deforestation, habitat destruction or urban heat islands requires us to extract more detailed information, which we do by conducting mathematical operations on the spectral bands available from any given sensor. For example, we can extract surface temperature from Landsat 8 satellite data, as detailed in a previous blogpost.

A true-colour composite image created using data from Landsat 8 bands 2, 3 and 4.

As you may imagine, understanding how much vegetation is present in any given pixel is essential to many of our projects, and for this purpose, we make use of Vegetation Indices. In remote sensing terms, a Vegetation Index is a single number that quantifies vegetation within a pixel. It is extracted by mathematically combining a number of spectral bands based on the physical properties of vegetation, primarily the fact that it absorbs more light in the red (R) than in the near-infrared (NIR) region of the spectrum. These indices can be used to ascertain information such as vegetation presence, photosynthetic activity and plant health, which in turn can be used to look at climate trends, soil quality, drought monitoring and changes in forest cover. In this blogpost, we’re going to provide a technical overview of some of the vegetation indices available for analysing both aerial and satellite imagery. We’ve included the basic formulae used to calculate the indices, using a bracketing system that allows for the formulae to be copy-pasted directly into the Raster Algebra (ArcMap) and Raster Calculator (QGIS) tools; don’t forget to replace the Bx terms with the relevant band filenames when doing the calculations! We’ve also noted down the relevant band combinations for data from Landsat 8’s Operational Land Imager and the MultiSpectral Instrument aboard both Sentinel-2 satellites.

We’ve created maps for most of the vegetation indices described below, using data from Landsat 8 acquired over Goa, India on the 28th of December 2018. Each band was clipped to the area of interest and the Digital Numbers were rescaled to calculate Top-of-Atmosphere radiance values. All the index calculations were then executed on these clipped and corrected bands. We used a single min-max stretched red-to-green colour gradient to visualise each index. For actual projects, we’d then classify each image to provide our partners with meaningful information.
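
For readers who prefer scripting this outside a GIS, here’s a minimal sketch of that preprocessing step in Python, using the rasterio and numpy libraries. The filenames and the gain/offset values are placeholders rather than our actual processing chain; the real rescaling coefficients for a Landsat 8 scene come from its MTL metadata file.

```python
import numpy as np
import rasterio

# Placeholder rescaling coefficients; read the actual MULT/ADD values
# for each band from the scene's MTL metadata file.
GAIN, OFFSET = 2.0e-5, -0.1

def load_rescaled(path, gain=GAIN, offset=OFFSET):
    """Read a clipped band and rescale its Digital Numbers."""
    with rasterio.open(path) as src:
        dn = src.read(1).astype("float32")
        profile = src.profile
    band = gain * dn + offset
    band[dn == 0] = np.nan  # treat DN = 0 as nodata
    return band, profile

# Hypothetical filenames for the clipped Landsat 8 bands
red, profile = load_rescaled("LC08_B4_clipped.tif")
nir, _ = load_rescaled("LC08_B5_clipped.tif")
```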

The Basic Vegetation Indices

Ratio Vegetation Index

One of the first Vegetation Indices developed was the Ratio Vegetation Index (RVI) (Jordan 1969), which can be used to estimate and monitor above-ground biomass. While the RVI is very effective for estimating biomass, especially in densely-vegetated areas, it is sensitive to atmospheric effects when vegetation cover is less than 50% (Xue et al. 2017).

RVI = R / NIR

Sentinel 2: B4 / B8

Landsat 8: B4 / B5

 

Difference Vegetation Index

The Difference Vegetation Index (DVI) (Richardson and Wiegand 1977) was developed to distinguish between soil and vegetation and, as the name suggests, is a simple difference between the near-infrared and red bands.

DVI = NIR - R

Sentinel 2: B8 - B4

Landsat 8: B5 - B4

Normalised Difference Vegetation Index

The Normalised Difference Vegetation Index (NDVI) (Rouse et al. 1974) was developed as an index of plant “greenness” and attempts to track photosynthetic activity. It has since become one of the most widely applied indices. Like the RVI and the DVI, it is based on the principle that well-nourished, living plants absorb red light and reflect near-infrared light. However, it also takes into account the facts that stressed or dead vegetation absorbs less red light than healthy vegetation, that bare soil reflects red and near-infrared light about equally, and that open water absorbs more infrared than red light. Because the NDVI is a relative value, images taken at different times or by different sensors cannot be compared directly without calibration. NDVI values range from -1 to +1, where higher positive values indicate the presence of greener and healthier plants. The NDVI is widely used due to its simplicity, and several indices have been developed to replicate or improve upon it.

NDVI = ( NIR - R ) / ( NIR + R )

Sentinel 2: ( B8 - B4 ) / ( B8 + B4 )

Landsat 8: ( B5 - B4 ) / ( B5 + B4 )
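
Outside a GIS, the same calculation is a few lines of numpy. Here’s a hedged sketch that reuses the band arrays loaded in the snippet above and guards against division by zero; the same helper covers the other normalised-difference indices in this post (the Synthetic NDVI and the NDII, for example) if you swap in the relevant bands.

```python
import numpy as np

def normalised_difference(a, b):
    """Compute ( a - b ) / ( a + b ), returning NaN where the sum is zero."""
    denom = a + b
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom != 0, (a - b) / denom, np.nan)

# Landsat 8: NIR is band 5, Red is band 4
ndvi = normalised_difference(nir, red)
```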

 

Synthetic NDVI

The Synthetic NDVI is an index that attempts to predict NDVI values using only the Red and Green bands, so it can be applied to imagery collected from any RGB sensor, including those used on consumer-level drones. Like the NDVI, its values range from -1 to +1, with higher values suggesting the presence of healthier plants. However, it is not as accurate as the NDVI and needs to be calibrated against ground information to be truly useful. It is also known as the Green Red Vegetation Index (GRVI) (Motohka et al. 2010).

Synthetic NDVI = ( G - R ) / ( G + R )

Sentinel 2: ( B3 - B4 ) / ( B3 + B4 )

Landsat 8: ( B3 - B4) / ( B3 + B4 )

 

Visible Difference Vegetation Index

Similarly, the Visible Difference Vegetation Index (VDVI) (Wang et al. 2015) can be calculated using information from only the visible portion of the electromagnetic spectrum. Some studies indicate that the VDVI is better at extracting vegetation information and predicting NDVI than other RGB-only indices.

VDVI = ( (2*G) - R - B ) / ( (2 * G) + R + B )

Sentinel 2:  ( ( 2 * B3 ) - B4 - B2 ) / ( (2 * B3 ) + B4 + B2 )

Landsat 8: ( ( 2 * B3 ) - B4 - B2 ) / ( ( 2 * B3 ) + B4 + B2 ) 

  

Excess Green Index

The Excess Green Index (ExGI) contrasts the green portion of the spectrum against the red and blue portions to distinguish vegetation from soil, and can also be used to predict NDVI values. It has been shown to outperform other visible-spectrum indices at distinguishing vegetation (Larrinaga and Brotons 2019).

ExGI = ( 2 * G ) - ( R + B )

Sentinel 2: ( 2 * B3) - ( B4 + B2 )

Landsat 8: ( 2 * B3 ) - ( B4 + B2 )

Green Chromatic Coordinate

The Green Chromatic Coordinate (GCC) is also an RGB index (Sonnentag et al. 2012) which has been used to examine plant phenology in forests.

GCC = G / ( R + G + B )

Sentinel 2: B3 / ( B4 + B3 + B2 )

Landsat 8: B3 / ( B4 + B3 + B2 )

One of the primary shortcomings of the NDVI is that it is sensitive to atmospheric interference, soil reflectance, and cloud and canopy shadows. Indices have thus been developed to address some of these shortcomings.

Indices that address Atmospheric (and other) Effects

Enhanced Vegetation Index

The Enhanced Vegetation Index (EVI) was devised as an improvement over the NDVI (Huete et al. 2002), designed to be more effective in areas of high biomass, where NDVI values can become saturated. In remote sensing terms, a saturated index fails to capture variation because some pixels register the maximum value. The EVI also attempts to reduce atmospheric influences, including aerosol scattering, and to correct for canopy background signals.

EVI = 2.5 * ( ( NIR - R ) / ( NIR + (6 * R) - ( 7.5 * B ) + 1 ) )

Sentinel 2: 2.5 * ( ( B8 - B4) / ( B8 + ( 6 * B4) - ( 7.5 * B2 ) + 1) )

Landsat 8: 2.5 * ( ( B5 - B4) / ( B5 + ( 6 * B4) - ( 7.5 * B2 ) + 1 ) )
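
As a sketch, the EVI also translates directly into numpy; the band arrays are assumed to be the rescaled values loaded earlier, and the coefficients are the standard ones from the formula above.

```python
def evi(nir, red, blue):
    """Enhanced Vegetation Index, using the standard coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```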

 

Atmospherically Resistant Vegetation Index

The Atmospherically Resistant Vegetation Index (ARVI) was developed specifically to eliminate atmospheric disturbances (Kaufman and Tanré 1992). However, completely eliminating aerosol and ozone effects requires an atmospheric transport model, which is complicated to calculate and for which the data is not always easily available. Without that model, the ARVI is not expected to outperform the NDVI in accounting for atmospheric effects, but it can still be a useful alternative. In the simplified form below, the red band is replaced by the self-correcting term RB = R - ( γ * ( B - R ) ); with the weighting constant γ set to 1, RB reduces to ( 2 * R ) - B.

ARVI (w/o atmospheric transport model, γ = 1) = ( NIR - ( ( 2 * R ) - B ) ) / ( NIR + ( ( 2 * R ) - B ) )

Sentinel 2: ( B8 - ( ( 2 * B4 ) - B2 ) ) / ( B8 + ( ( 2 * B4 ) - B2 ) )

Landsat 8: ( B5 - ( ( 2 * B4 ) - B2 ) ) / ( B5 + ( ( 2 * B4 ) - B2 ) )
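
Here’s a sketch of the simplified ARVI with γ exposed as a parameter, reusing the normalised_difference helper from the NDVI snippet; setting γ to 1 recovers the formula above.

```python
def arvi(nir, red, blue, gamma=1.0):
    """Simplified ARVI: a self-correcting red-blue term replaces the red band."""
    rb = red - gamma * (blue - red)
    return normalised_difference(nir, rb)
```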

 

Green Atmospherically Resistant Index

The Green Atmospherically Resistant Index (GARI) was also developed to counter the effects of atmospheric interference in satellite imagery. It shows much higher sensitivity to chlorophyll content (Gitelson et al. 1996) and lower sensitivity to atmospheric interference.

GARI = ( NIR - ( G - ( γ * ( B - R ) ) ) ) / ( NIR + ( G - ( γ * ( B - R ) ) ) )

Sentinel 2: ( B8 - ( B3 - ( γ * ( B2 - B4 ) ) ) ) / ( B8 + ( B3 - ( γ * ( B2 - B4 ) ) ) )

Landsat 8: ( B5 - ( B3 - ( γ * ( B2 - B4 ) ) ) ) / ( B5 + ( B3 - ( γ * ( B2 - B4 ) ) ) )

In the formula above, γ is a constant weighting function that the authors suggested be set at 1.7 (Gitelson et al. 1996, p 296) but may have to be recalibrated in areas of complete canopy coverage. For this image, we used a γ value of 1.
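
In code, γ becomes a keyword argument, which makes the recalibration easy to experiment with. A sketch, assuming the green and blue bands have been loaded the same way as the others, and reusing the normalised_difference helper from earlier:

```python
def gari(nir, green, blue, red, gamma=1.7):
    """GARI; gamma defaults to the published value of 1.7."""
    corrected_green = green - gamma * (blue - red)
    return normalised_difference(nir, corrected_green)

gari_map = gari(nir, green, blue, red, gamma=1.0)  # the value used for our map
```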

 

Visible Atmospherically Resistant Index

The Visible Atmospherically Resistant Index (VARI) can be used to account for atmospheric effects in RGB imagery.

VARI = ( G - R) / ( G + R - B )

Sentinel 2: ( B3 - B4 ) / ( B3 + B4 - B2 )

Landsat 8: ( B3 - B4 ) / ( B3 + B4 - B2 )

Addressing Soil Reflectance

As in the case of atmospheric effects, indices were also developed to address the effects of varying soil reflectance.

Soil Adjusted Vegetation Index

The Soil Adjusted Vegetation Index (SAVI) is a modified version of the NDVI designed specifically for areas with very little vegetative cover, usually less than 40% by area. Depending on their type and water content, soils reflect varying amounts of red and near-infrared light; the SAVI accounts for this by suppressing the signal from bare soil pixels.

SAVI = ( ( NIR - R ) / ( NIR + R + L ) ) * ( 1 + L )

Sentinel 2: ( ( B8 - B4 ) / ( B8 + B4 + L ) ) * ( 1 + L )

Landsat 8: ( ( B5 - B4 ) / ( B5 + B4 + L ) ) * ( 1 + L )

In the above equations, L is a function of vegetation density, and calculating it requires a priori information about vegetation presence in the study area. L ranges from 0 to 1 (Xue et al. 2017), with denser vegetation cover calling for values approaching 0 and sparse cover for values approaching 1; note that with L = 0, the SAVI reduces to the NDVI.
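
A sketch with L as a parameter; the default of 0.5 is a commonly quoted intermediate value, not part of the formula itself.

```python
def savi(nir, red, L=0.5):
    """SAVI; L near 0 suits dense cover, L near 1 suits sparse cover."""
    return ((nir - red) / (nir + red + L)) * (1.0 + L)
```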

 

Modified Chlorophyll Absorption in Reflectance Index

The Modified Chlorophyll Absorption in Reflectance Index (MCARI) was developed as a vegetation status index. Its predecessor, the Chlorophyll Absorption in Reflectance Index (Kim 1994), was designed to distinguish non-photosynthetic material from photosynthetically active vegetation. The MCARI modifies this index and is defined as the depth of chlorophyll absorption in the Red region of the spectrum relative to the reflectance in the Green and Red-Edge regions (Daughtry et al. 2000).

MCARI = ( ( Red-Edge - R ) - 0.2 * ( Red-Edge - G ) ) * ( Red-Edge / R )

Sentinel 2: ( ( B5 - B4 ) - 0.2 * ( B5 - B3 ) ) * ( B5 / B4 )

Landsat 8: No true equivalent

Structure Insensitive Pigment Index

The Structure Insensitive Pigment Index (SIPI) is also a vegetation status index, with reduced sensitivity to canopy structure and increased sensitivity to pigmentation. Higher SIPI values are strongly correlated with an increase in carotenoid pigments, which in turn indicates vegetation stress, making this index very useful for monitoring vegetation health.

SIPI = ( ρ800 - ρ445 ) / ( ρ800 - ρ680 ), where ρx denotes reflectance at x nm

Sentinel 2: ( B8 - B1 ) / ( B8 - B4 )

Landsat 8: ( B5 - B1 ) / ( B5 - B4 )

Agricultural Indices

Some indices that were initially designed for agricultural purposes can also be used for the ecological monitoring of vegetation.

Triangular Greenness Index

The Triangular Greenness Index (TGI) was developed to monitor chlorophyll and, indirectly, the nitrogen content of leaves (Hunt et al. 2013) in order to determine fertiliser application regimes for agricultural fields. It can be calculated using RGB imagery and serves as a proxy for chlorophyll content in areas of high leaf cover.

 TGI = 0.5 * ( ( ( λR - λB ) * ( R - G) ) - ( ( λR - λG ) * ( R - B ) ) )

Sentinel 2A: 0.5 * ( ( ( 664.6 - 492.4 ) * ( B4 - B3 ) ) - ( ( 664.6 - 559.8) * ( B4 - B2 ) ) )

Sentinel 2B: 0.5 * ( ( ( 664.9 - 492.1 ) * ( B4 - B3 ) ) - ( ( 664.9 - 559.0 ) * ( B4 - B2 ) ) )

Landsat 8: 0.5 * ( ( ( 654.59 - 482.04 ) * ( B4 - B3 ) ) - ( ( 654.59 - 561.41 ) * ( B4 - B2 ) ) )

In the above equations, λ represents the centre wavelength of the respective band (in nanometres); the centre wavelengths of Sentinel 2A and Sentinel 2B differ slightly.
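
Since only the λ constants change between sensors, a small lookup table keeps the calculation in one place. A sketch, using the centre wavelengths quoted above:

```python
# Centre wavelengths in nanometres, per the equations above
CENTRE_WAVELENGTHS = {
    "sentinel2a": {"red": 664.6, "green": 559.8, "blue": 492.4},
    "sentinel2b": {"red": 664.9, "green": 559.0, "blue": 492.1},
    "landsat8": {"red": 654.59, "green": 561.41, "blue": 482.04},
}

def tgi(red, green, blue, sensor="landsat8"):
    """Triangular Greenness Index for a given sensor's centre wavelengths."""
    w = CENTRE_WAVELENGTHS[sensor]
    return 0.5 * ((w["red"] - w["blue"]) * (red - green)
                  - (w["red"] - w["green"]) * (red - blue))
```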

 

Normalised Difference Infrared Index

The Normalised Difference Infrared Index (NDII) uses a normalised difference formulation instead of a simple ratio. It is a reflectance measurement that is sensitive to changes in the water content of plant canopies, with higher index values associated with higher water content. The NDII can be used for agricultural crop management, forest canopy monitoring, and the detection of stressed vegetation.

NDII = ( NIR - SWIR ) / (NIR + SWIR )

Sentinel 2 : ( B8 - B11 ) / ( B8 + B11 )

Landsat 8: ( B5 - B6) / ( B5 + B6 )

Green Leaf Index

The Green Leaf Index (GLI) was originally designed for use with a digital RGB camera to measure wheat cover. It can also be applied to aerial and satellite imagery.

GLI = ( ( G - R ) + ( G - B ) ) / ( ( 2 * G ) + ( B + R ) )

Sentinel 2: ( ( B3 - B4 ) + ( B3 - B2 ) ) / ( ( 2 * B3 ) + ( B2 + B4 ) )

Landsat 8: ( ( B3 - B4 ) + ( B3 - B2 ) ) / ( ( 2 * B3 ) + ( B2 + B4 ) )

 

Task-specific Vegetation Indices

As we can see, one index might be more appropriate than another based on the purpose of your study and the source of the imagery. The following section lists indices developed to meet the needs of specific research requirements.

Transformed Difference Vegetation Index

The Transformed Difference Vegetation Index (TDVI) was developed to detect vegetation in urban settings where NDVI is often saturated.

TDVI = 1.5 * ( ( NIR - R ) / √( NIR^2 + R + 0.5 ) )

Sentinel 2: 1.5 * ( ( B8 - B4 ) / sqrt( B8^2 + B4 + 0.5 ) )

Landsat 8: 1.5 * ( ( B5 - B4 ) / sqrt( B5^2 + B4 + 0.5 ) )

Note that the QGIS Raster Calculator and ArcMap’s Raster Algebra use different syntaxes for square roots: QGIS uses ‘sqrt’, while ArcMap uses ‘SquareRoot’.
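
If you’re scripting the calculation instead, numpy sidesteps the syntax difference entirely; a minimal sketch:

```python
import numpy as np

def tdvi(nir, red):
    """TDVI; the square-root denominator keeps the index from saturating."""
    return 1.5 * (nir - red) / np.sqrt(nir ** 2 + red + 0.5)
```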

Leaf Chlorophyll Index

The Leaf Chlorophyll Index (LCI) was developed to assess chlorophyll content in areas of complete leaf coverage.

LCI = ( NIR - RedEdge ) / ( NIR + R )

Sentinel 2: ( B8 - B5 ) / ( B8 + B4 )

Landsat 8: No true equivalent

Vegetation Fraction

The Vegetation Fraction is defined as the percentage of ground area occupied by vegetation; since it is calculated from NDVI values, it is subject to the same errors. It is a comprehensive quantitative index in forest management and an important parameter in ecological models, and can also be used to determine the emissivity parameter when calculating Land Surface Temperature.

Vegetation Fraction = ( NDVI - NDVImin ) / ( NDVImax - NDVImin )
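
Here’s a sketch of that scaling step. Taking NDVImin and NDVImax from percentiles rather than the absolute extremes is our own assumption, added to damp the influence of outlier pixels; it is not part of the original formula.

```python
import numpy as np

def vegetation_fraction(ndvi, low_pct=5, high_pct=95):
    """Rescale an NDVI array to a 0-1 vegetation fraction.

    Percentile-based min/max values (an assumption, not part of the
    formula above) reduce the influence of outlier pixels.
    """
    ndvi_min = np.nanpercentile(ndvi, low_pct)
    ndvi_max = np.nanpercentile(ndvi, high_pct)
    return np.clip((ndvi - ndvi_min) / (ndvi_max - ndvi_min), 0.0, 1.0)
```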

In this blogpost, we’ve listed and organised the vegetation indices that we’ve found while improving our ecological monitoring techniques. We make extensive use of both satellite and drone imagery, and will be using this blogpost internally as a quick reference guide to vegetation indices.

Find us on Twitter @techforwildlife if you have any questions or comments, or email us at contact@techforwildlife.com. We’ve also opened up the comments for a few days, so please feel free to point out any errors or leave any other feedback!

P.S.: Hat-tip to Harris Geospatial (@GeoByHarris) for a comprehensive list of vegetation indices, which can be found here.

P.P.S.: We’ll be updating this post with Sentinel-2A imagery in the next few days.

References

·      Daughtry, C. S. T., Walthall, C. L., Kim, M. S., De Colstoun, E. B., & McMurtrey III, J. E. (2000). Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sensing of Environment, 74(2), 229-239.

·      Gitelson, A., Kaufman, Y., & Merzlyak, M. (1996). Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sensing of Environment, 58, 289-298.

·      Huete, A., et al. (2002). Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sensing of Environment, 83, 195-213.

·      Hunt, E. R. Jr., Doraiswamy, P. C., McMurtrey, J. E., Daughtry, C. S. T., Perry, E. M., & Akhmedov, B. (2013). A visible band index for remote sensing leaf chlorophyll content at the canopy scale. Publications from USDA-ARS / UNL Faculty, 1156.

·      Jordan, C. F. (1969). Derivation of leaf-area index from quality of light on the forest floor. Ecology, 50(4), 663-666.

·      Kaufman, Y. J., & Tanré, D. (1992). Atmospherically Resistant Vegetation Index (ARVI) for EOS-MODIS. IEEE Transactions on Geoscience and Remote Sensing, 30(2), 261-270.

·      Kim, M. S. (1994). The use of narrow spectral bands for improving remote sensing estimations of fractionally absorbed photosynthetically active radiation. Doctoral dissertation, University of Maryland at College Park.

·      Larrinaga, A. R., & Brotons, L. (2019). Greenness indices from a low-cost UAV imagery as tools for monitoring post-fire forest recovery. Drones, 3(1), 6.

·      Motohka, T., Nasahara, K. N., Oguma, H., & Tsuchida, S. (2010). Applicability of green-red vegetation index for remote sensing of vegetation phenology. Remote Sensing, 2(10), 2369-2387.

·      Richardson, A. J., & Wiegand, C. L. (1977). Distinguishing vegetation from soil background information. Photogrammetric Engineering and Remote Sensing, 43, 1541-1552.

·      Rouse, J. W. Jr., Haas, R. H., Schell, J. A., & Deering, D. W. (1974). Monitoring vegetation systems in the Great Plains with ERTS. Third Earth Resources Technology Satellite-1 Symposium, NASA SP-351, 309-317.

·      Sonnentag, O., Hufkens, K., Teshera-Sterne, C., Young, A. M., Friedl, M., Braswell, B. H., Milliman, T., O’Keefe, J., & Richardson, A. D. (2012). Digital repeat photography for phenological research in forest ecosystems. Agricultural and Forest Meteorology, 152, 159-177.

·      Wang, X., Wang, M., Wang, S., & Wu, Y. (2015). Extraction of vegetation information from visible unmanned aerial vehicle images. Transactions of the Chinese Society of Agricultural Engineering, 31(5), 152-159.

·      Xue, J., & Su, B. (2017). Significant remote sensing vegetation indices: A review of developments and applications. Journal of Sensors, 2017, Article ID 1353691.

Drones, spatial analysis and a 3D model: Asola Bhatti WLS

I recently collected some aerial imagery at the Asola Bhatti Wildlife Sanctuary in Delhi, in collaboration with the people who run the outreach centre. I've really been enjoying working with the data, and this project has helped me clarify the processes I follow when working with drones. So far, I have a three-page checklist and am maintaining a mission log-book as well; keeping all the documentation up to date is hard! In this post, I'll detail the applications I use to control the UAV and process the aerial imagery and data it generates, and then describe a couple of the outputs.

TL;DR: Come for the aerial footage and the 3D models; stay for the process walk-through.

I'm using a DJI Phantom 3 Advanced; the P3A can be flown manually using the controller, like a regular R/C plane. To tap into its more advanced functions, fly safely and troubleshoot issues, though, it needs to be connected to a smartphone. I use the DJI Go app on a OnePlus 3 (Android) for regular flights, but may switch to an iPad soon; DJI-related apps apparently work better on iOS than on Android.

For mapping missions, there are a number of steps involved. The drone must fly a preset pattern autonomously, collecting images at regular intervals. These images can then be processed into a georeferenced mosaic and used to generate a 3D model. Depending on the use case, these can either be used as-is for visualisation, or analysed further to obtain specific outputs.

For mapping, I use DJI Go to configure the camera settings (exposure and shutter speed), and then use DroneDeploy to take off and fly the drone along the preset mapping pattern. I'm also experimenting with Pix4D Capture; the UI isn't as clean as DroneDeploy's, but the app itself is free, and you don't have to buy into the rest of the Pix4D ecosystem. Once the mapping is complete, I disable DroneDeploy and use DJI Go to manually collect more images from different angles and land the drone at the end of the flight. Once back at base, the images are uploaded into PrecisionMapper, where they're processed in the cloud to create:

  1. an RGB orthomosaic depicting reflectance values (.tif)

  2. a digital surface model representing elevation (.dsm)

  3. a 3D model (.ply and .las)

  4. a KML file for visualisation in Google Earth/Maps (.kml)

  5. a design file for visualisation in CAD software (.dxf)

So far, I've worked with all five of these products; there are more advanced ones available in PrecisionMapper, but I prefer to work directly with these. I use QGIS and ArcGIS for almost all my satellite imagery analysis work, and these products feed directly into that workflow. The primary outputs I create are basic maps; I've never had access to such high-resolution imagery before, so just the simple act of putting a scale bar onto one of these maps is exciting.

The images above are true-colour RGB composites, where the red, green and blue layers have been combined to represent the terrain as a human with unimpaired vision would observe it. The interesting thing about composite bands is that they can also be combined to extract information that is hard for a human observer to see. In a follow-up (more technical) post, I'll discuss the differences between false-NDVI, SAVI, VARI and TGI, which are all indices that use the RGB layers in interesting ways. In this post, though, I'm just going to put in two images that depict the Triangular Greenness Index (TGI), which enhances chlorophyll-containing pixels; the greener the pixel, the more likely it is to contain vegetation.
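
As a rough sketch of what that looks like in Python with the rasterio library: the filename here is hypothetical, and since consumer-camera centre wavelengths often aren't published, the Landsat 8 values stand in for the real ones.

```python
import numpy as np
import rasterio

# Hypothetical filename: a 3-band RGB orthomosaic exported from PrecisionMapper
with rasterio.open("asola_orthomosaic.tif") as src:
    red, green, blue = (band.astype("float32") for band in src.read((1, 2, 3)))

# TGI, with Landsat 8 centre wavelengths standing in for the camera's own
tgi = 0.5 * ((654.59 - 482.04) * (red - green) - (654.59 - 561.41) * (red - blue))
```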

There are various other algorithms that can be applied to the orthomosaic imagery; PrecisionMapper itself offers a couple that can delineate individual trees or count plants in rows. I'm going to be studying up on what else can be done with this imagery, especially with supervised classification and AI-based analysis processes.

And finally, my favourite output: the 3D model! With enough images from multiple perspectives, modern photogrammetry algorithms can generate vertices and meshes that depict an object or a landscape to scale and in three dimensions. I'm excited about these because while it's really cool to see these embedded in a web-page (as above), it's even cooler to see them carved out in wood or 3D-printed in ABS plastic. It's even possible to pull this into a VR system and explore the terrain in person, or make it the basis of an interactive game or... you get the drift; this is exciting stuff!

Get in touch via our contact form if you have any questions or want to discuss a project of your own.

Drones, aerial imagery, 3D models and VR.

I've been working a lot with drones and aerial imagery recently, and have been really enjoying myself. I'll be writing about a specific project I'm currently undertaking in another blog post, which will include pictures and 3D models. In this post, however, I wanted to jot down a few of the things that are possible with a cheap source of high-quality aerial imagery.

Satellite imagery is amazing; I have made use of it extensively in the past, and continue to do so today. For most applications, the only technical requirements needed to access and use satellite imagery are a good internet connection and a decent computing device.* Satellite imagery has its limitations though; between cheap, timely and high quality, you'll be lucky to get two out of three. This isn't necessarily a problem if you want to understand the movement of glaciers or look at how wetlands have vanished in a region. However, if you want high-quality data depicting a post-disaster site today to help plan humanitarian interventions tomorrow, you may have access to all the satellite imagery in the world but it isn't of much use if there are clouds covering the site.**

There are applications that satellite imagery isn't suitable for; mapping small areas at very high resolution, at a chosen time, is a task that drones are far better suited to.*** When I first started using drones, this is how I thought of them: as another source of aerial imagery with its own advantages and disadvantages. However, prolonged use, lots of reading and lots of tinkering with various photogrammetry software packages have made me aware of how much more they can be.

Drones aren't just flying toys; they're robots. They can be programmed to fly specific patterns while collecting data at specific points. In the case of imagery, which is the application I'm limiting this post to, mobile-based software tools such as DroneDeploy and Pix4Dcapture can make a drone collect imagery automatically over a large area. With a large number of images covering the same area, it's possible to create a very accurate 3D model with 1cm/pixel resolution or better.

For me, this is truly where it gets interesting. With this 3D model, it's possible to undertake formerly-laborious tasks, such as quantifying the biomass in a stand of trees, very easily; 3D models are great for volumetric analysis. It's also possible to use a 3D printer or CNC router to create a physical model, which would make a great art piece or communication tool. Finally, it's possible to use the 3D model as a basemap for a virtual reality experience set within the landscape. In combination with data on the local biodiversity, this could result in amazing products for conservation outreach and research.

*One of the reasons that led me into spatial analysis was that Landsat data became free to use in 2007, right when I was first learning how to use GIS.

** Another issue with satellite imagery used to be overpass times; no matter how large your budget for satellite imagery was, it was still possible that no satellite was in the right position to collect the imagery you wanted. That's rapidly changing; satellite imagery providers such as Planet state that their goal is to have enough satellites in orbit to image the Earth's entire surface once a day.

*** There's a lot of discussion about appropriate nomenclature; do we call them UAVs or drones? My take is that if it's a technical piece where the distinction between robots of various kinds (UAVs, UCAVs, AUVs, ROVs, UGVs) etc is important, then I use the acronyms; if it's just a placeholder for 'flying-robot-without-a-person-inside', I'm going to call it a drone.