Thanks to Pteryx for this great data set! In order to generate a geo-referenced NDVI / EVI / EVI2 vegetation index, we need to fly the area of interest (AOI) with both a visible (RGB) and a full-spectrum (RGB+NIR) camera. Once the RGB and RGB+NIR images are processed inside DroneMapper, we have two orthomosaic results from which we can generate a pure NIR ortho. To do this, we use GDAL and the following command:

/usr/local/bin/gdal_calc.py -A VIS.tif -B NIRVIS.tif --outfile=nir.tif --calc="(A - B)"
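
By default gdal_calc.py operates on band 1 of each input, so the command above differences the first band of the two orthos. If your cameras record the NIR signal in a different channel, the bands can be selected explicitly. The sketch below is only illustrative: the band numbers are placeholders, and if the difference comes out negative over vegetation the expression can simply be swapped to "(B - A)".

# hypothetical band selection; adjust --A_band / --B_band to the channels your cameras actually record
/usr/local/bin/gdal_calc.py -A VIS.tif --A_band=1 -B NIRVIS.tif --B_band=1 --outfile=nir.tif --calc="(A - B)" --type=Float32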

Now that we have created our pure NIR orthomosaic, we can use it to generate NDVI or other indices with OTB in an automated fashion.

/usr/local/bin/otbcli_BandMath -il nir.tif VIS.tif -out ndvi.tif -exp "ndvi(im2b1, im1b1)"
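
As an example of one of those other indices, EVI2 can be written directly as a BandMath expression. The line below is only a sketch: it assumes the same nir.tif / VIS.tif inputs as above and that both are scaled to reflectance (0-1); if they hold raw digital numbers, the constant in the denominator needs rescaling.

# hypothetical EVI2 = 2.5 * (NIR - Red) / (NIR + 2.4 * Red + 1), assuming reflectance-scaled inputs
/usr/local/bin/otbcli_BandMath -il nir.tif VIS.tif -out evi2.tif -exp "2.5*(im1b1-im2b1)/(im1b1+2.4*im2b1+1)"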

In order to process the orthomosaic TIFF files, they need to be exactly the same size and pixel resolution. OTB also has many other useful commands for remote sensing work. The original flight and area of interest is 1.0 sq km at 10 cm GSD.
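
If the two orthos do not already share the same grid, one option is to resample one onto the other before the band math. The sketch below uses OTB's Superimpose application to put NIRVIS.tif onto the grid of VIS.tif; double-check the parameter names against your installed OTB version.

# resample NIRVIS.tif onto the grid/extent of VIS.tif so the two orthos line up pixel-for-pixel
/usr/local/bin/otbcli_Superimpose -inr VIS.tif -inm NIRVIS.tif -out NIRVIS_matched.tif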

Thanks -- JP @ DroneMapper

Comments

  • Hello. I am currently starting a research project focused on detecting dead crops in farmland using NDVI. I wish to derive the NDVI from the images and use computer vision to highlight the dead areas in an image. Can I use the regular images/videos taken from drones, or do I need specific NIR images? Does anyone know of any datasets that I can use?
    Please help me out, I'm totally new to this field. Please be my guiding light my lord, and I shall forever be in your debt :D 
    Cheers

  • ...an NDVI generated from a NIR-G-B single camera system via UAV...

  • Great info! This was generated using open source tools and workflow (GDAL/OTB). Let me know if you'd like more details. I should have been clearer in my original post.. this is just one way to obtain this data; it is not the best method or the most accurate. Anyone looking for a great open source remote sensing toolbox, check out Orfeo ToolBox (OTB).. the learning / compiling curve can be steep but worth it. We've been tossing around the idea of building a field processing unit for offline/remote processing. Maybe I should do a kickstarter... eh? :)

  • Wonderful job guys!

    Hi Rory,

    I am using my drone for agro-environment maps as part of my job as well as my PhD thesis.

    I think guaranteeing that layers from different sensors match is just a matter of output or mapping scale.

    I will try to explain with an example:

    If your two orthomosaics (NIR and RGB) have 5 cm/pixel spatial resolution (leaf size) and your map or image analysis is set at, let's say, 1:5000 scale (usual for large crops), the final distinguishable output would be ~1.23 m/pixel (bush size).

    Also, when you georeference the two mosaics you can see the total RMS error. Finally, combining the scale and RMS, you can resample the spatial resolution using a desired method and then composite the bands into a multiband image.

    If you work with vegetation indices for mapping several plants, you will be able to clearly distinguish whether the results are right or wrong.

    Again JP, excellent work!! I am doing the same thing using 3-4 (very expensive) software packages and my own GIS routines! I would certainly be happy to work with DroneMapper, and even happier if you release a field app to give on-site results!!

    Regards

    James

  • Creating orthomosaics of the same GSD and same AOI from the same flight is the best course of action to guarantee that the pixel bandmath matches. Certain tools have these checks built in. Of course, you'll probably experience more noise from lighting conditions/shadows and/or the lack of a good normalization routine as well.

  • The problem, in my opinion, is that unless you can guarantee that the data from the different channels are aligned down to the pixel, and that there are no geometric differences (angle / rotation) between the layers produced during the orthomosaic-making process, then when you perform the NDVI calculation you cannot guarantee that the IR and red data are drawn from exactly the same pixel, and that could mean the difference between looking at the edge of a leaf and looking at dirt.

  • Hi, the pixels in the two images are the same size, ~10 cm GSD.

    Each orthomosaic scene (VIS and NIRVIS) is the same size and GSD: 10772 x 12442 pixels.

    The flight was completed with two different cameras, and the geo-referenced orthomosaics were generated from that same flight. We provided a shapefile of the area of interest to our processing system, and it clips the results to that AOI. This ensures you have correctly sized TIFFs to complete the BandMath.

    Tools such as OTB won't let you process TIFFs of different sizes for NDVI/EVI calculation. Thanks
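
    For reference, a quick way to confirm the two orthos match before running the BandMath is to compare their raster and pixel sizes with gdalinfo (just a sketch of one possible check):

    # compare raster size and pixel size of the two orthos; the reported values should match
    /usr/local/bin/gdalinfo VIS.tif | grep -E "Size is|Pixel Size"
    /usr/local/bin/gdalinfo NIRVIS.tif | grep -E "Size is|Pixel Size"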

  • JP,

    Have you got any data regarding the accuracy of the pixel match between the layers? The reason I ask is that, in my opinion, the quantitative value of a two-camera method is questionable as long as there is no guarantee that you are working with accurately superimposed data.

  • Mark, I am not aware of any sites that will give you all the information, and as you can imagine, there are many approaches to this type of analysis. We started a graduate course in sUAS Agricultural Applications this year at Kansas State University to help graduate students to be more successful when using sUAS-based imaging in their research. We expect this field to expand dramatically when the FAA opens up the regs for private sUAS use in agriculture, and we are starting to develop applications that can be implemented by producers when the time comes. I'd be happy to share more details with you. I sent you a friend request.
  • @FlyingMerf.. I would love to get into doing this sort of analysis out in my area.. are there any good sites that show how to properly do vegetation stress analysis? I like the modified camera approach, as you would probably have a smaller payload than with other methods.
