
Demystifying DashWare

Towards the end of our field season in 2022, my colleague and I began searching for an application to add annotations to our extensive video footage. After exploring various options, we concluded that DashWare, an open-source application, was the best tool* for our needs. This blog aims to explain how to use DashWare and share my insights on its application in our conservation work.

View of a drone controller while on a mission.

Our work extensively relies on the use of off-the-shelf drones for field operations. In addition to capturing photos and videos, drones collect vital data such as distance from launch, geographical coordinates, flight time, battery status, and more. This information is indispensable for research and gaining a better understanding of wildlife and ecosystems.

However, once we retrieve the video footage from the devices, this valuable flight data is missing visually in the video playback. Not all drones generate subtitle files, and even when they do, these subtitles are often presented as plain text in the media player. Unfortunately, this text-based representation does not effectively convey the data we collect during these flights alongside the aerial footage. This led us to search for an application that could help us annotate our aerial footage. We had specific criteria in mind: the tool had to be open-source, user-friendly, and require minimal setup and learning.

Open-source software like DashWare can also be a valuable tool for conservationists who use drones to monitor wildlife and their natural habitats. It allows telemetry data to be added to drone videos, providing context on flight path and animal behaviour and complementing traditional field practices. DashWare synchronises the telemetry data with the video footage, allowing users to display the data in real time as the video plays. In addition to displaying data such as speed, altitude, and GPS coordinates, it offers customisation options such as font, colour, text position, and data point selection.

(A) GETTING STARTED: 

  1. Installing DashWare

    The open-source application can be downloaded for free from the DashWare website: http://www.dashware.net/dashware-download/ or from other sources on the web.

  2. Video Processing

    There is a known issue with DashWare 1.9.1, caused by source videos without an audio track, which prevents a useful export of the final output. DJI drones do not record sound without an external device fitted for this purpose. This can be resolved relatively easily by adding an audio track to the video; we used DaVinci Resolve to add a free audio file to our footage.

  3. File Format

    Overlaying telemetry data from a UAV onto a video can be done using a CSV file. Acquire the CSV file by exporting the data from the UAV's flight controller or another telemetry source.

    Edit the CSV to match the start of the video footage by checking its properties. Open the CSV file in spreadsheet software and navigate to the column labelled 'isVideo'. Delete every row where 'isVideo' is 0 that appears before the first row with a value of 1.

Screengrab of a CSV file with 'isVideo' column highlighted.
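For long flight logs, trimming these rows by hand is tedious, and the same edit can be scripted. Below is a minimal sketch using Python's standard `csv` module; the column name `isVideo` and the file names are assumptions that may differ for your drone model.

```python
import csv

def trim_before_video(src, dst, col_name="isVideo"):
    """Drop all data rows before the first row where col_name equals 1."""
    with open(src, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    col = header.index(col_name)  # raises ValueError if the column is absent
    # Index of the first row where video recording was active.
    start = next(i for i, row in enumerate(data) if row[col].strip() == "1")
    with open(dst, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(data[start:])

# Hypothetical file names; substitute your own flight log:
# trim_before_video("flight_log.csv", "flight_log_trimmed.csv")
```

The trimmed file can then be imported into DashWare in place of the original log.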

(B) DASHWARE PROJECT: 

Launch DashWare from the start menu or desktop shortcut.

  1. Click the ‘File’ button in the top left corner of the main window.

  2. Select ‘New Project’ and enter a name for your project in the ‘Project Name’ field.

  3. In the same dialog box, select ‘<None>’.

  4. Click on ‘OK’ to save these changes.

  5. DashWare will open the new project and display the main window.

  6. Click on the ‘+ (Plus)’ icon next to the ‘Video’ title in the tab. Browse and select the pre-processed video.

  7. Click on the ‘+ (Plus)’ icon next to the ‘Data Files’ title in the tab to open the ‘Add Data File’ dialog box. Click on the ‘Browse’ button under the title ‘Data logger file’, then locate and select the .csv file containing the data from the selected drone flight video.

  8. Click on the downward arrow under the title ‘Choose a data profile’ and select ‘Flytrex’. Click on the ‘Add’ button to confirm your choices.

When one starts a new project in DashWare, the main window displays a blank video screen on the left-hand side with the DashWare logo and some gauges, while the right-hand side holds the primary workspace.

The ‘Project’ tab is the central location for managing and configuring any project. It includes several sub-tabs that allow access to different types of information and settings related to the project. 


Add pre-processed .csv and .mp4 files using the data logger.

 (C) PROJECT GAUGES: 

In DashWare, project gauges are graphical elements that overlay data onto videos. They can display various types of information, such as GPS coordinates or telemetry data. The Gauge Toolbox is a feature that allows one to add, create, and customize these gauges in a project. It also provides tools and options to design and adjust the appearance of data overlays.

The 'Filter' field in the Gauge Toolbox is useful for searching and narrowing down to a specific gauge. Start by listing the required attributes, such as speed, altitude, distance from launch, and sea level, and search for each.

Gauge Toolbox search for the keyword 'altitude'.

Using the Gauge Toolbox, select the relevant gauges that correspond to these attributes. Make sure that each gauge uses the same unit system as the data in the CSV file, such as kilometres per hour for speed or metres for altitude.

After selecting the appropriate gauges, add them to the project by pressing the gauge button or by clicking and dragging them onto the video screen. A gauge can then be adjusted further by clicking and dragging it to the preferred location.

Gauges can also be modified as needed, such as the compact DashWare gauge above that displays speed, sea level, altitude, vertical speed, heading, takeoff distance, and travelled distance. To learn more about modifying and creating gauges, read the following blog. Modified gauges for displaying telemetry on UAV footage can be downloaded from our satellite drone imagery workflow page.

(D) DATA SYNCHRONISATION: 

Play the video in the workspace to cross-check whether the data is in sync with the video footage. If the gauge overlay is not in sync with the video, check the following:

  1. Check that the gauge's unit system matches the units in the flight log.

    Values are not automatically converted to match the selected gauge, so it is important to choose gauges that use the same unit system as the flight log.

  2. Check that the gauge has a corresponding data value mapped to it.

    (a) To do so, switch to the ‘Project’ tab in the primary workspace in DashWare.

    (b) Locate the gauge in the ‘Project Gauges’ sub-tab and double click on the gauge name. 

    (c) An editable dialogue box titled ‘Gauge Input Mapper’ will be displayed, with two input options: (i) Data File and (ii) Data Value.

    (d) Review both fields for appropriate entries; to modify one, use the downward arrow to select the appropriate value from the list. Click the ‘OK’ button to save changes.

Gauge Input Mapper with input options: (i) Data File and (ii) Data Value.
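If the flight log's units don't match any available gauge (point 1 above), the relevant column can be converted before importing the CSV. A minimal sketch, assuming a numeric speed column recorded in m/s; the column name and file names are illustrative:

```python
import csv

MS_TO_KMH = 3.6  # 1 m/s = 3.6 km/h

def convert_column(src, dst, col_name, factor):
    """Multiply every value in col_name by factor and write a new CSV."""
    with open(src, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    col = header.index(col_name)
    for row in data:
        row[col] = f"{float(row[col]) * factor:.2f}"
    with open(dst, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(data)

# e.g. convert_column("flight_log.csv", "flight_log_kmh.csv", "speed", MS_TO_KMH)
```

Importing the converted file lets you keep the gauge you prefer rather than hunting for one in the log's original units.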

To sync the footage with the CSV file, navigate to the ‘Synchronization’ tab.

  1. Uncheck the ‘Sync with video’ option (b.) in the bottom right-hand corner.

  2. Then, drag the pointer to the start (c.) of the synchronization map.

  3. Next, click and drag the video player to the start of the (a.) timeline at 0:00:000.

  4. Once this is done, re-check the ‘Sync with video’ option in (b.).

To synchronize the footage with the CSV file, use the 'Synchronization' tab.

(E) EXPORT PROJECT:

For the final step of the project, once all attributes and data have been mapped, we can export the file.

  1. Click the 'File' button in the top left corner of the main window.

  2. Select 'Create Video' and use the 'Browse' button to choose the location for the final file.

  3. Uncheck 'Auto' next to the Quality option to manually choose the export quality. Once decided, click on 'Create Video,' and the export will be ready shortly.

'Create Video' dialogue box.

In conclusion, DashWare has proven to be a valuable tool for integrating telemetry data with aerial footage in our conservation work. By allowing us to annotate videos with critical flight information, it enhances our ability to analyse and present data collected during drone missions. We hope this guide helps other conservationists and drone enthusiasts streamline their video processing workflows, making it easier to visualise and share the insights gathered from their aerial surveys.

*Note: This tutorial for using DashWare is based on my experience adapting footage from DJI Mini/Mavic drones.

Understanding Drone Orthomosaics

Orthomosaics are individual images stitched into one single high-resolution image. In our case, they are captured from a drone, so they are georeferenced and can give a resolution of up to a few centimetres per pixel. This is drastically finer than freely available satellite data, so drone-based orthomosaics are being used extensively for all types of landscape assessment.

Collecting data for an orthomosaic:

To create a drone-based orthomosaic, one has to plan an appropriate flight mission. Some important factors to keep in mind when planning a mapping mission are:

  • The flight needs to form a grid, with the drone moving at a constant height and the gimbal pitched straight down at 90 degrees.

  • Maintain a minimum of 70% to 80% front and side overlap between the images captured when planning the flight.

  • Collect Ground Control Points from the Area Of Interest to improve the accuracy of georeferencing of the orthomosaic.

  • Maintain a moderately slow drone speed between images so as to reduce distortion.

  • Use a mechanical or global shutter, if available, to capture images from the drone camera.

More details of this can be found on our blog on creating mapping missions.
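These overlap and altitude guidelines translate into simple geometry. The sketch below, which assumes a nadir-pointing camera over flat terrain, estimates the ground footprint of one image dimension and the spacing between adjacent passes:

```python
import math

def ground_footprint(altitude_m, fov_deg):
    """Ground distance covered along one image dimension by a nadir camera."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def line_spacing(altitude_m, fov_deg, overlap_pct):
    """Distance between adjacent passes (or shots) for a given overlap."""
    return ground_footprint(altitude_m, fov_deg) * (1 - overlap_pct / 100)
```

For example, at 100 m altitude with a 70 degree field of view and 75% side overlap, adjacent passes come out roughly 35 m apart.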

Processing drone data to create an orthomosaic:

Once the drone images are collected, they need to be stitched together to create this high-resolution, georeferenced orthomosaic. At TfW, we used WebODM for this purpose. 

WebODM is a user-friendly drone image processing software. It is a web interface of OpenDroneMap (ODM), which is an open source command line toolkit for processing drone images to create maps, point clouds, 3D models and other geospatial products. There are two versions of this software: WebODM and WebODM Lightning. 

The offline version of WebODM can be installed manually for free using this link; command-line skills are required to install this version. The installer version of offline WebODM is also available here for a one-time purchase. The offline version uses the local machine for processing and storing data.

Note: WebODM Lightning is a cloud-hosted version of WebODM with some additional functionality. This version is subscription-based, with Standard and Business tiers. The trial comes with 150 free credits, which suffice to process 337 images; the number of tasks allowed in the free trial is not clear from the documentation. Paid plans are required to process more images.

Once you have the images from the mapping mission, you can import them into WebODM by selecting the ‘Select Images and GCP’ option.

Fig. 01: Select images and Edit options

While selecting the images, exclude the outliers. For instance, images of the horizon or accidental captures are errors and should not be included in the orthomosaic.

After selecting the images, one can edit the settings of the processing workflow by clicking on the Edit option (Fig. 01). The functionalities of all the customisable options available in ‘Edit’ are explained elaborately here.

Some default options are listed in Fig. 02. The defaults work for most cases, and the best way to assess the edit options is to run them on a test dataset.

Fig. 02: Customised Edit Options

Pro Tip: The High Resolution option with the original image size takes more than an hour to process 200 images. The Fast Orthophoto option is quicker, but the orthomosaic will have some distortions, as displayed here. Guidelines to optimise flight plans according to the landscape are listed here.

Analyzing orthomosaics:

This orthomosaic can now be analysed as a .tif file in GIS software. In this section, we explore how to use QGIS for this purpose.

Install a stable QGIS version from Download QGIS. It is advisable to install the most stable updated version rather than the latest version.

Import the .tif into QGIS: Once you have downloaded all the assets from the WebODM task, navigate to the folder where you saved the outputs. Users are encouraged to explore the downloaded folders to gain information on the flight-plan logistics. Among the downloaded files and folders, navigate to the odm_orthophoto folder and import the odm_orthophoto.tif file into the QGIS map view.

Fig. 03: Download the assets.

Creating indexed images from satellite and aerial image bands is an effective way to extract information from them. A few insightful indices are listed in this blog. In this instance, we will use the Green Leaf Index to get a visual estimate of the greenness of the area.

To begin, once you have imported the tif file into QGIS, select the ‘Raster Calculator’ from the Raster menu.

Fig. 04: Select raster calculator option.

Select the bands and calculate the Green Leaf Index using the raster calculator:

Green / (Green + Red + Blue)

Fig. 05: Raster Calculator. 
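The same per-pixel calculation can be reproduced outside QGIS on the raster's band arrays, for instance with NumPy. This sketch implements the expression exactly as entered in the raster calculator above, guarding against division by zero on black or nodata pixels:

```python
import numpy as np

def green_index(red, green, blue):
    """Green / (Green + Red + Blue), computed per pixel on band arrays."""
    r = red.astype(float)
    g = green.astype(float)
    b = blue.astype(float)
    total = r + g + b
    # Where all bands are zero (e.g. nodata), return 0 instead of dividing by 0.
    return np.where(total > 0, g / np.where(total > 0, total, 1), 0.0)
```

In a real workflow the band arrays would be read from the orthomosaic .tif with a raster library; synthetic arrays behave identically.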

Once you have the indexed output, select an appropriate symbology to view the indexed image. Right-click the layer and select ‘Properties’, or double-click the image, then navigate to the Symbology option.

Fig. 06: Symbology of layer.

Select the appropriate settings for the colour ramp and apply it to the indexed image.

Fig. 07: Image with selected symbology.

The above image does not give us enough contrast to estimate vegetation health. In this case, one can explore the blending options, which may be useful for getting an immediate idea of the area at a glance.

Fig. 08: Blending options for better visual assessment.

In order to extract contrasting information from the orthomosaic and indexed image, we can check the histogram of the indexed image and then decide the minimum and maximum values based on the distribution of the image.

Fig. 09: Histogram analysis for optimising visual output.

Looking at the histogram, we can tell that the range of information is encoded within the 0.3 to 0.6 pixel value range. Go back to the symbology and change the minimum and maximum values to that range.

Fig. 10: Image after rectifying minimum and maximum value range.
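An automated alternative to reading the histogram by eye is a percentile stretch, which picks the display minimum and maximum after discarding extreme outliers. A minimal sketch; the 2nd and 98th percentiles are an illustrative default:

```python
import numpy as np

def stretch_range(index_img, low_pct=2, high_pct=98):
    """Suggest symbology min/max from the image histogram, ignoring extremes."""
    lo, hi = np.percentile(index_img, [low_pct, high_pct])
    return float(lo), float(hi)
```

The returned pair can be typed directly into the symbology min/max fields in place of values chosen visually.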

From the indexed image, we see that the western part of the image has lower leaf cover compared to the other parts. To focus on that area, create a polygon over it and draw a grid.

Fig. 11: Create a polygon layer.

Fig. 12: Select options to create a polygon.

You must digitise the polygon in a projected CRS, which is necessary for making measurements on the shapefile.

Fig. 13: Digitise and save the polygon.

To calculate the area of the polygon, right-click the polygon layer and open the attribute table.

Fig. 14: Open attribute table.

Open the field calculator and select the option shown in the following image to calculate the area of the polygon.

Fig. 15: Select ‘area’ field from Geometry.

Fig. 16: Area field added to the polygon.

The area field is automatically calculated and added to the attribute table. The polygon layer must be in a projected CRS for this calculation to be correct. Then save the edits and toggle off editing.
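The reason the projected CRS matters is that planar area formulae expect coordinates in metres; in a geographic CRS such as EPSG:4326, the same arithmetic would yield meaningless "square degrees". The underlying planar (shoelace) computation can be sketched as:

```python
def polygon_area(coords):
    """Planar (shoelace) area of a polygon from projected (x, y) coordinates.

    If the coordinates are in metres, the result is in square metres.
    """
    total = 0.0
    n = len(coords)
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to close the ring
        total += x1 * y2 - x2 * y1
    return abs(total) / 2
```

For instance, a 100 m by 50 m rectangle gives 5000 square metres.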

You can also create a grid of squares in the polygon using the create grid tool under the vector menu.

Fig 17: Creating a grid.

You can select the Rectangle grid type, but there are other options, such as points and lines, which can be chosen depending on the objective. Make sure to select the layer extent of the example polygon.

The above parameters should create a grid of rectangles of specified dimension.

Fig. 18: Zonal statistics from Processing toolbox.

The Zonal Statistics tool can be selected from the Processing Toolbox. One can select which statistics are to be calculated, and a new polygon layer of zonal statistics will be created.

Fig. 19: Select the statistics to be calculated.

Now one can choose the statistic to display and select an appropriate symbology to assess the least to most leaf cover in the selected example area, as shown in the figure below.

Fig. 20: Visualise the statistical output.
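Conceptually, zonal statistics simply aggregate the index pixels that fall inside each grid cell. A minimal NumPy stand-in for the mean statistic, assuming a rasterised array of zone ids aligned with the index image:

```python
import numpy as np

def zonal_mean(index_img, zones):
    """Mean index value per zone id, mimicking a 'mean' zonal statistic."""
    return {int(z): float(index_img[zones == z].mean())
            for z in np.unique(zones)}
```

QGIS does the extra work of rasterising the grid polygons and handling nodata, but the per-zone aggregation is the same idea.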

Creating a Mapping Mission

Unmanned Aerial Vehicles (UAVs) can create orthomosaics to better understand ecosystems. To plan the flight path of a UAV for orthomosaic creation, we’ve developed a plugin compatible with QGIS.

UAV Mapping Path Generator (for Litchi)

Our UAV Mapping Path Generator plugin takes into account camera parameters, altitude, overlaps, and more to prepare an optimized flight plan. This flight plan is compatible with the FlyLitchi app, which operates flights for DJI drones.

The steps below describe the installation process, logic and method to use the plugin. To install the plugin:

  1. Go to Plugins > Manage and Install Plugins.

  2. Search for UAV Mapping Path Generator (for Litchi) and install it.

Fig. 01a: Install the plugin.

Fig. 01b: Install the plugin.

The plugin will be available for use from the ‘Vector’ menu.

Fig. 02: UAV Path Generator Plugin. 

SETTING UP THE PLUGIN

Some default drone camera specifications are pre-filled in the plugin. However, you can adjust these settings by accessing the drone specifications for your specific model here.

Mandatory Inputs for the Plugin:

  • Altitude

  • Field of View (FoV)

  • Side Overlap %

  • Front Overlap %

  • Aspect Ratio

  • Image Height & Width

  • Speed

  • Gimbal Pitch Angle

These drone camera parameters are readily available in the drone specifications. The overlap percentages you choose will influence both the area covered and the time taken for the drone to complete the flight. The plugin calculates and displays the approximate flight time at the end of its execution (see Fig. 05).

Note: The flight time shown in FlyLitchi is slightly longer than that calculated here.

The parameters and the input types are as described below.

Fig. 03

Fig. 04: Calculated parameters and formulae.
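To see how these parameters interact, here is a rough flight-time estimate for a lawnmower pattern over a rectangular AOI. This is an illustrative sketch under the same general assumptions (nadir camera, flat terrain, constant speed), not the plugin's exact formula:

```python
import math

def estimate_flight_time_min(aoi_width_m, aoi_length_m, altitude_m,
                             fov_deg, side_overlap_pct, speed_ms):
    """Approximate minutes to fly a lawnmower pattern over a rectangle."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    spacing = footprint * (1 - side_overlap_pct / 100)
    n_lines = math.ceil(aoi_width_m / spacing) + 1
    # Total path: the parallel passes plus the turns between them.
    path_m = n_lines * aoi_length_m + (n_lines - 1) * spacing
    return path_m / speed_ms / 60
```

With a 400 m by 400 m AOI at 100 m altitude, a 70 degree FoV, 70% side overlap, and 5 m/s, this works out to roughly 16 minutes, close to the effective capacity of a single battery.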

How to use the Plugin:

First, upload a shapefile of the area of interest (AOI). This shapefile must be of polygon geometry type and in the EPSG:4326 geographic CRS. Users are advised to select a polygon and fill this input on every execution, making sure to click the ‘Load’ option after filling the input. After selecting the AOI, the user must draw a line, ‘Input_line.shp’:

Notes on the line:

  • This line must be outside the AOI polygon.

  • It should be in a direction parallel to the desired drone path.

  • The drone path will be drawn to the right/above this line.

  • This line is plotted in the EPSG:4326 geographic CRS.

As soon as you click the ‘Draw Line’ option, editing mode is enabled and you will be prompted to draw a line in the required direction. To end this action, save the edit and toggle off editing mode; the plugin dialog box will then pop back up in the map interface with the draw line box populated.

Next, input the approximate number of lines parallel to the input line that would cover the entire area of interest. When the user clicks the ‘Draw Flight Path’ option, the parallel lines are drawn and clipped to the area of interest, and the waypoints are created according to the overlap percentages, as shown in Fig. 06.

The flight time is also calculated as soon as ‘Draw Flight Path’ is clicked. Fig. 05 shows that the flight will take 60 minutes to complete. Since one drone battery gives about 20 minutes of effective flight time, the user must split the area to be covered across multiple flights.

Fig. 05: Flight time calculated.

Example of input line and the drone path:

(a) Input horizontal line; (b) Drone path with horizontal input line at the bottom;

(c) Input vertical line; (d) Drone path with vertical input line on the left.

Fig. 06: Input for a drone-path.

Fig. 07: The flight path in the FlyLitchi Mission Hub.

A FlyLitchi-compatible CSV file is then created, which can be opened in the FlyLitchi Mission Hub. Make sure the waypoints from the CSV file show on the map view.

From this point, one can use the FlyLitchi app to check settings and execute the mapping mission. For more instructions on using the FlyLitchi app, please refer to our other blog on planning photo and video missions for UAVs.

Below is a visual walk-through of using the above plugin.