Drones, spatial analysis and a 3D model: Asola Bhatti WLS

I recently collected some aerial imagery at the Asola Bhatti Wildlife Sanctuary in Delhi in collaboration with the people who run the outreach centre. I've really been enjoying working with the data, and this project has helped me refine the processes I follow when flying drones. So far, I have a three-page checklist and am maintaining a mission log-book as well; keeping all the documentation up to date is hard! In this post, I'll detail the applications I use to control the UAV and process the aerial imagery and data it generates, and then describe a couple of the outputs.

TL;DR: Come for the aerial footage and the 3D models; stay for the process walk-through.

I'm using a DJI Phantom 3 Advanced; the P3A can be flown manually using the controller, like a regular R/C plane. To tap into its more advanced functions, fly safely, and troubleshoot issues, though, it needs to be connected to a smartphone. I use the DJI Go app on a OnePlus 3 (Android) for regular flights, but may switch to an iPad soon; DJI-related apps apparently work better on iOS than on Android.

Mapping missions involve several steps. The drone flies a preset pattern autonomously, collecting images at regular intervals. These images are then processed into a georeferenced mosaic and used to generate a 3D model. Depending on the use case, those products can either be used as-is for visualisation or analysed further to obtain specific outputs.
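
For the curious, here's the rough geometry behind those preset patterns: the planning apps derive the spacing between photo triggers and between flight lines from the camera's ground footprint at the chosen altitude and the desired overlap. A minimal Python sketch, with placeholder sensor and lens numbers rather than the P3A's exact specifications:

```python
def footprint(altitude_m, sensor_width_mm, sensor_height_mm, focal_length_mm):
    """Ground footprint (width, height) in metres of one nadir photo."""
    width = altitude_m * sensor_width_mm / focal_length_mm
    height = altitude_m * sensor_height_mm / focal_length_mm
    return width, height

def mission_spacing(altitude_m, front_overlap=0.75, side_overlap=0.65,
                    sensor_width_mm=6.3, sensor_height_mm=4.7,
                    focal_length_mm=3.6):
    """Distance between photo triggers and between flight lines.

    The sensor/lens defaults are illustrative placeholders --
    substitute the real specifications for your camera.
    """
    w, h = footprint(altitude_m, sensor_width_mm, sensor_height_mm,
                     focal_length_mm)
    trigger_dist = h * (1 - front_overlap)  # along-track spacing
    line_spacing = w * (1 - side_overlap)   # across-track spacing
    return trigger_dist, line_spacing

# e.g. at 60 m altitude with 75% front / 65% side overlap
print(mission_spacing(60))
```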

For mapping, I use DJI Go to configure the camera settings (exposure and shutter speed), and then use DroneDeploy to take off and fly the drone along the preset mapping pattern. I'm also experimenting with Pix4D Capture; the UI isn't as clean as DroneDeploy's, but the app itself is free, and you don't have to buy into the rest of the Pix4D ecosystem. Once the mapping is complete, I disable DroneDeploy and use DJI Go to manually collect more images from different angles and land the drone at the end of the flight. Once back at base, the images are uploaded into PrecisionMapper, where they're processed in the cloud to create the following (a quick sketch of loading these into Python follows the list):

  1. an RGB orthomosaic depicting reflectance values (.tif)

  2. a digital surface model representing elevation (.dsm)

  3. a 3D model (.ply and .las)

  4. a KML file for visualisation in Google Earth/Maps (.kml)

  5. a design file for visualisation in CAD software (.dxf)
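
To give a sense of how these feed into the GIS workflow described below, here's a minimal Python sketch that opens the orthomosaic and surface model with rasterio. The filenames are hypothetical, and I'm assuming the DSM has been exported as a single-band GeoTIFF:

```python
import rasterio

# Hypothetical filenames -- substitute the actual PrecisionMapper exports.
with rasterio.open("asola_orthomosaic.tif") as src:
    red, green, blue = src.read(1), src.read(2), src.read(3)
    print("CRS:", src.crs, "| pixel size:", src.res)

with rasterio.open("asola_dsm.tif") as dsm:
    elevation = dsm.read(1, masked=True)  # mask out nodata pixels
    print("Elevation range: %.1f-%.1f m" % (elevation.min(), elevation.max()))
```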

So far, I've worked with all five of these products; PrecisionMapper offers more advanced ones, but I prefer to work with these directly. I use QGIS and ArcGIS for almost all my satellite imagery analysis work, and these products feed straight into that workflow. The primary outputs I create are basic maps; I've never had access to such high-resolution imagery before, so just the simple act of putting a scale bar onto one of these maps is exciting.

The images above are true-colour RGB composites, where the red, green and blue layers have been combined to represent the terrain as a human with unimpaired vision would observe it. The useful thing about composite bands is that they can also be combined to extract information that is hard for a human observer to see. In a follow-up (more technical) post, I'll discuss the differences between false-NDVI, SAVI, VARI and TGI, which are all indices that use the RGB layers in interesting ways. In this post, though, I'll just include two images that depict the Triangular Greenness Index (TGI), which enhances chlorophyll-containing pixels; the greener the pixel, the more likely it is to contain vegetation.
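
If you'd like to try this yourself, TGI can be computed directly from the orthomosaic's bands using the common broadband approximation TGI = G - 0.39R - 0.61B. A minimal sketch (the filenames are hypothetical):

```python
import rasterio

# Hypothetical filename -- substitute your own orthomosaic export.
with rasterio.open("asola_orthomosaic.tif") as src:
    r = src.read(1).astype("float32")
    g = src.read(2).astype("float32")
    b = src.read(3).astype("float32")
    profile = src.profile

# Broadband TGI approximation: higher values = more likely vegetation.
tgi = g - 0.39 * r - 0.61 * b

# Write out a single-band GeoTIFF for styling in QGIS/ArcGIS.
profile.update(count=1, dtype="float32")
with rasterio.open("asola_tgi.tif", "w", **profile) as dst:
    dst.write(tgi, 1)
```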

There are various other algorithms that can be applied to the orthomosaic imagery; PrecisionMapper itself offers a couple that can delineate individual trees or count plants in rows. I'm going to be studying up on what else can be done with this imagery, especially with supervised classification and AI-based analysis.
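
As a taste of what supervised classification might look like here, this is a minimal scikit-learn sketch that trains a random forest on hand-labelled pixels and classifies the rest of the mosaic. The filenames and label scheme are assumptions (and this is a generic approach, not PrecisionMapper's method):

```python
import rasterio
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: the orthomosaic plus a raster of training labels
# (0 = unlabelled, 1 = vegetation, 2 = bare ground, ...) digitised in QGIS.
with rasterio.open("asola_orthomosaic.tif") as src:
    bands = src.read([1, 2, 3]).astype("float32")  # shape: (3, rows, cols)
with rasterio.open("training_labels.tif") as src:
    labels = src.read(1)

rows, cols = labels.shape
X = bands.reshape(3, -1).T   # one row of (R, G, B) features per pixel
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X[y > 0], y[y > 0])  # train only on the labelled pixels

# Classify every pixel and reshape back into map form.
predicted = clf.predict(X).reshape(rows, cols)
```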

And finally, my favourite output: the 3D model! With enough images from multiple perspectives, modern photogrammetry algorithms can generate vertices and meshes that depict an object or a landscape to scale and in three dimensions. I'm excited about these because while it's really cool to see them embedded in a web page (as above), it's even cooler to see them carved out in wood or 3D-printed in ABS plastic. It's even possible to pull a model into a VR system and explore the terrain in person, or make it the basis of an interactive game or... you get the drift; this is exciting stuff!
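
If you want to poke at the model yourself, libraries like Open3D will load the .ply export directly. A minimal sketch (filename hypothetical):

```python
import open3d as o3d

# Hypothetical filename -- the .ply point-cloud export from PrecisionMapper.
pcd = o3d.io.read_point_cloud("asola_model.ply")
print(pcd)  # reports the point count and whether colours/normals are present

# Quick interactive preview before sending the model off for carving/printing.
o3d.visualization.draw_geometries([pcd])
```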

Get in touch via our contact form if you have any questions or want to discuss a project of your own.