HD video live streaming from a model airplane


One of my projects is about establishing a solid, low-latency HD video link for FPV, SAR, or surveillance using accessible, affordable technology. The entire setup comes to about $4,000, most of which goes to the Mac mini and the 720p goggles, both of which are reusable in other contexts.

The OSD is painted on the ground station (a Mac mini) and viewed through the HD goggles (HMZ-T1). Even though the letters look pretty small, they are easier to read through the goggles than on a regular computer screen.

Over the past year I've worked on refining this application and all the required Wi-Fi, encoding, and decoding settings to establish a relatively low-latency video link (120-150 ms; relative, that is, to other links of this type). The telemetry link, which transmits data from all onboard sensors at 30 Hz, paves the way for a host of new ground-based applications. Examples include integration with map and DEM data for improved navigation, and virtual 3D overlays that render tunnels or paint the location of "buddies", "spotters", or other points of interest. If the lens properties are known and the camera is precisely fitted, this is not incredibly hard to achieve.
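As a rough illustration of what such overlays involve: with calibrated lens intrinsics and the attitude/position from telemetry, projecting a world point into the video frame is just a few lines of math. Here is a minimal pinhole-model sketch in Python (ignoring lens distortion; all names and numbers are illustrative, not from the app):

```python
def project_point(p_world, cam_pos, R, fx, fy, cx, cy):
    """Project a world-frame point into pixel coordinates (pinhole model).

    p_world, cam_pos: 3-element positions in the same world frame.
    R:                3x3 rotation from world frame to camera frame,
                      built from the telemetry attitude (roll/pitch/yaw).
    fx, fy, cx, cy:   intrinsics from a one-off camera calibration.
    Returns (u, v) in pixels, or None if the point is behind the camera.
    """
    # Camera-relative vector, rotated into the camera frame.
    d = [p_world[i] - cam_pos[i] for i in range(3)]
    p_cam = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    if p_cam[2] <= 0:               # behind the image plane: not visible
        return None
    return (fx * p_cam[0] / p_cam[2] + cx,
            fy * p_cam[1] / p_cam[2] + cy)

# A "buddy" 100 m straight ahead of a camera looking along +Z lands
# exactly at the image centre (cx, cy).
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project_point([0, 0, 100], [0, 0, 0], identity, 800, 800, 640, 360)
```

In practice the rotation R would come from the autopilot's attitude estimate, which is where the "precisely fitted camera" requirement comes in: any mounting offset shows up directly as overlay misalignment.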

The OSD's bottom bar is also new and requires some explanation. Most OSDs provide raw data, as in the top bar, which the pilot has to interpret during flight. In this app I'm reinterpreting that data based on work done in Cognitive Work Analysis (CWA). For example, the pilot's primary concern is not the capacity left in the battery, but the distance that remaining capacity still allows him to travel, and whether that's enough to make it home. The bottom bar relieves the pilot of these calculations and can be scanned in a short time: if all is green, there are no immediate concerns. Bars shrinking and turning red indicate a decrease in the available affordance for that operational concern.
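To make that concrete, here's a minimal sketch of the kind of reinterpretation the bottom bar performs. This is illustrative Python with made-up names and numbers, not the app's actual code:

```python
def range_margin(mah_left, cruise_current_a, airspeed_ms, dist_home_m,
                 reserve=0.2):
    """Turn raw battery capacity into the operational answer the pilot
    actually needs: how much distance is left beyond what it takes to
    get home?

    mah_left:         usable capacity remaining (mAh)
    cruise_current_a: average current draw at cruise (A)
    airspeed_ms:      cruise speed (m/s)
    dist_home_m:      straight-line distance to the launch point (m)
    reserve:          fraction of capacity held back as a safety margin
    """
    usable_ah = (mah_left / 1000.0) * (1.0 - reserve)
    flight_time_s = usable_ah / cruise_current_a * 3600.0
    reachable_m = flight_time_s * airspeed_ms
    # Positive margin: green bar. Shrinking toward zero: bar shrinks
    # and shifts to red.
    return reachable_m - dist_home_m

# 2000 mAh left, 10 A at cruise, 15 m/s, 1.5 km from home:
margin = range_margin(2000, 10.0, 15.0, 1500)
```

A real implementation would use measured current draw and account for wind, but the point stands: the pilot scans one bar instead of doing this arithmetic mid-flight.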

Anyway, I gathered a lot of data and observations in this field test and I'm already working on improvements, including tracking down the causes of the video stutter and the failure of the telemetry link. Both launches failed because the bungee cord got caught in the motor (twice!). I fixed that and made a third, successful launch, but that one unfortunately didn't get recorded.


Comments

  • @Gerard:  Your blog post is outstanding among the many that we have been reviewing. And a high-quality blog post attracts high-quality comments. Soon we will release our blog post about reviewing DIY Drones (DIYD) blog posts. Here are some links: January 2013, February 2013. You can also try searching DIYD with 'dronespeak'. You may want to start with the DroneSpeak home page. Thanks for sharing!

    - John

  • The app passed all AppStore requirements and is available under the name "FPV":

    https://itunes.apple.com/us/app/fpv/id609127142?l=en&mt=12

  • If I decided to get involved in making a robust digital FPV video link today, I would most likely go this route:

    http://www.cavium.com/Processor_CNW5602.html

  • @John: Amazing!  I'm saving that link. Looks like you approached this from a slightly different angle.

    Note: I'm not competing with analog. The context in which this is used will be different due to the pros and cons of either technology.

    I previously had an uplink over the same channel for "control" commands. I found that two-way traffic significantly increases the probability of collisions, and thus frame loss, so I reserved the uplink for sparse, complex messages like setting home or uploading new gains, nothing that's transmitted frequently.

    Could this be the time to reignite research into your own project? The fpvlab link contains a lot of my findings, and if you need more, I'm happy to help out in PMs.

  • @John: Yes, there are indeed losses, but also plenty of options for tweaking. I don't expect 100% of frames to be received at the ground station. The decoder I'm using has a configurable buffer that reorders packets by sequence number, even over UDP. Obviously, if the latency budget is very aggressive, the reordering capability is limited and more frame loss/corruption occurs. For static scenes a 40 ms buffer is sufficient; when scene variability goes up, and with it the bitrate, the window must be increased to 60 ms or so (the latency induced by increases in bitrate).

    Adaptive and constant bitrates are already available in many IP cameras and I've been playing around with those to see which works best.

    Some error correction occurs in the 802.11n link by means of FEC. The Wi-Fi here is fixed at MCS 2 to reduce inter-symbol interference (QPSK instead of 16-QAM). That beats constantly switching between modulation schemes when there's no intermediate buffer that can absorb the latency induced by some of the renegotiations taking place.

    In the video above there was an option that dropped frames too aggressively, causing the significant stutter during the launches and some of the stutter in the static scenes. That's been fixed now and it's much, much better.

    The resulting stream will not be perfect, but I'm just aiming for something usable and practically reliable. It will never match the adrenaline-filled flights you get with analog, flying between trees and such. But if this works reliably up to 2 km @ 100 m, I'm happy to call it a success.

    There's a thread on fpvlab.com discussing the complexities and what's been done by me and others to verify the link quality: Complexities of HD downlinks.
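    For the curious, a minimal sketch of the kind of reordering buffer described above (illustrative Python, not the actual decoder's code): packets are held for up to a fixed latency window and released in sequence-number order, with late packets dropped and gaps skipped once the window expires.

```python
import heapq
import time

class ReorderBuffer:
    """Hold incoming UDP packets for up to `window_s` seconds and
    release them in sequence-number order."""

    def __init__(self, window_s=0.040):   # 40 ms window for static scenes
        self.window_s = window_s
        self.heap = []                    # (seq, arrival_time, payload)
        self.next_seq = 0

    def push(self, seq, payload, now=None):
        now = time.monotonic() if now is None else now
        if seq >= self.next_seq:          # already released/skipped: drop
            heapq.heappush(self.heap, (seq, now, payload))

    def pop_ready(self, now=None):
        """Return payloads that are next in order, or whose window expired."""
        now = time.monotonic() if now is None else now
        out = []
        while self.heap:
            seq, t, payload = self.heap[0]
            if seq == self.next_seq or now - t >= self.window_s:
                heapq.heappop(self.heap)
                self.next_seq = seq + 1   # skip any gap once we time out
                out.append(payload)
            else:
                break                     # in-order packet not here yet
        return out
```

    Pushing packets 0, 2, 1 releases 0 immediately, then 1 and 2 together once 1 arrives; a larger window tolerates more reordering at the cost of latency, which is exactly the 40-60 ms trade-off mentioned above.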

  • Here is a link to some of my earlier experiments, from late 2009: http://www.rcgroups.com/forums/showthread.php?t=1128065

    It worked, but without QoS in the encoder system it never got robust enough to compete with a good analog video link.

  • @Gerard: In my opinion, to have any chance of getting a robust, low-latency H.264 stream over normal Wi-Fi, you will need to implement QoS in the streaming encoder/decoder, with adaptive bitrate and error correction. Relying on the TCP/IP stack to safely deliver all data over Wi-Fi will just lead to lots and lots of latency, or to frame drops/corruption when using UDP and data is missing or no longer arrives in the order it was sent.
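    One way to sketch the adaptive-bitrate half of such a QoS loop is an AIMD-style controller driven by receiver loss feedback (e.g. the loss fraction from RTCP receiver reports). All thresholds and step factors below are illustrative placeholders, not a tested tuning:

```python
def adapt_bitrate(current_kbps, loss_fraction,
                  min_kbps=500, max_kbps=8000,
                  cut=0.75, grow=1.05,
                  loss_hi=0.02, loss_lo=0.005):
    """AIMD-style rate control: cut hard on loss, probe upward slowly.

    loss_fraction: packets lost / packets expected over the last
                   feedback interval, as reported by the receiver.
    """
    if loss_fraction > loss_hi:        # congested: multiplicative decrease
        target = current_kbps * cut
    elif loss_fraction < loss_lo:      # clean link: gentle additive probe
        target = current_kbps * grow
    else:                              # in the dead band: hold steady
        target = current_kbps
    return max(min_kbps, min(max_kbps, target))
```

    Cutting fast and growing slowly keeps the encoder from oscillating around the channel capacity, which matters when every overshoot shows up as visible frame corruption.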

  • Ok, I didn't see the link before. My bad :)

  • Nice work, Gerard!! Really like your work. May I know what you're using for the actual datalink?

    Also, what are you using to compress the video to H.264? Is it done inside the camera?

    Thanks,
    Saad

  • I like the augmented reality (3-D overlays) possibilities and the CWA aspects in the design.
