Open-World Drone Active Tracking with Goal-Centered Rewards

South China University of Technology, Institute for Super Robotics (Huangpu), Pazhou Laboratory, Peng Cheng Laboratory, Sun Yat-sen University

Abstract

Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations, providing a practical solution for effective tracking in dynamic environments. However, accurate Drone Visual Active Tracking with reinforcement learning remains challenging due to the absence of a unified benchmark, the diverse motion behaviors of dynamic targets, and the complexity of open-world environments with frequent interference. To address these issues, we propose DAT, a unified cross-scene, cross-domain benchmark for open-world drone active tracking. The DAT benchmark provides 24 visually complex environments for assessing the cross-scene and cross-domain generalization abilities of tracking algorithms, together with high-fidelity modeling of realistic robot dynamics. Additionally, we propose GC-VAT, a reinforcement learning-based drone tracking method that aims to improve tracking performance in complex scenarios. Specifically, we design a Goal-Centered Reward that provides precise feedback to the agent, preventing targets farther from the center of view from receiving higher rewards than closer ones. This allows the drone to adapt to the diverse motion behaviors of open-world targets. Inspired by curriculum learning, we introduce a Curriculum-Based Training strategy that progressively enhances the agent's tracking performance in vast environments with complex interference. Experiments demonstrate that GC-VAT achieves an improvement of about 400% over the SOTA method in terms of the cumulative reward metric.
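As a rough illustration of the Goal-Centered Reward idea described above, the sketch below is our own simplification: the function name, the normalization by the half-diagonal, and the linear falloff are assumptions, not the paper's exact formulation. It simply rewards the agent in proportion to how close the target's center lies to the image center, so a target nearer the center can never receive a lower reward than one farther away.

```python
import numpy as np

def goal_centered_reward(target_center, image_size, max_reward=1.0):
    """Hypothetical sketch of a goal-centered reward (not the paper's exact formula).

    target_center: (x, y) pixel coordinates of the tracked target's center.
    image_size:    (width, height) of the drone's camera image.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Normalize the offset by the half-diagonal so the distance lies in [0, 1].
    half_diag = np.hypot(cx, cy)
    dist = np.hypot(target_center[0] - cx, target_center[1] - cy) / half_diag
    # Linear falloff: maximal at the image center, zero at the corners,
    # so a farther target can never out-score a closer one.
    return max_reward * (1.0 - dist)

# A target near the center earns more than one near the edge.
print(goal_centered_reward((320, 240), (640, 480)))  # 1.0
print(goal_centered_reward((600, 60), (640, 480)))   # ~0.17
```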

Benchmark

  • 24 Scenarios

  • 24 Tracking Targets

  • Rich Perceptual Information

  • Complex Scenario Design

  • Realistic Traffic Management

  • Realistic Dynamics Simulation

  • 01 CityStreet

  • 02 Village

  • 03 Lake

  • 04 Downtown

  • 05 Farmland

  • 06 Desert

  • Motorbike

  • Scooter

  • Trailer

  • Truck

  • Bus

  • Tesla Model 3

  • Lincoln MKZ

  • Range Rover

  • Mercedes-Benz Sprinter

  • Toyota Prius

  • BMW X5

  • Citroën C-Zero

  • Pedestrian

  • Shrimp

  • Create

  • Sojourner

  • Mantis

  • BB-8

  • Aibo ERS-7

  • Bioloid Dog

  • FireBird 6

  • Scout

  • GhostDog 2

  • HOAP-2

Performance

We conduct within-scene, cross-scene, and cross-domain tests. In cross-scene testing, an agent trained under daytime conditions in one environment is tested in different scenes under the same weather. In cross-domain testing, it is evaluated in the same scene but under varying weather conditions.
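For concreteness, the sketch below enumerates the three evaluation settings from a trained agent's (scene, weather) configuration. It is our own illustration: the scene names come from the benchmark list above, while the weather domains are placeholders and not necessarily the benchmark's exact domain set.

```python
SCENES = ["CityStreet", "Village", "Lake", "Downtown", "Farmland", "Desert"]
WEATHERS = ["daytime", "night", "rain", "fog"]  # placeholder domains for illustration

def evaluation_configs(train_scene, train_weather="daytime"):
    """Enumerate (test_scene, test_weather) pairs for the three settings."""
    within_scene = [(train_scene, train_weather)]
    # Cross-scene: different scenes, same weather as training.
    cross_scene = [(s, train_weather) for s in SCENES if s != train_scene]
    # Cross-domain: same scene as training, but unseen weather conditions.
    cross_domain = [(train_scene, w) for w in WEATHERS if w != train_weather]
    return within_scene, cross_scene, cross_domain

within, cross_s, cross_d = evaluation_configs("CityStreet")
print(len(within), len(cross_s), len(cross_d))  # 1 5 3
```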
1. Within-scene and cross-scene
As shown in Tab. 1, our GC-VAT performs significantly better than the other methods. In within-scene testing, the average cumulative reward (CR) improvement over the D-VAT method across the six maps is 591% (35 → 242), and the average tracking success rate (TSR) improvement is 279% (0.19 → 0.72). In cross-scene testing, GC-VAT achieves a 376% (37 → 176) improvement in average CR and a 200% (0.19 → 0.57) improvement in average TSR compared to D-VAT.

2. Cross-domain
As shown in Tab. 2, our method significantly outperforms existing methods in cross-domain generalization. Specifically, GC-VAT demonstrates an average CR improvement of 509% (35 → 213) and an average TSR improvement of 253% (0.19 → 0.67) relative to D-VAT.
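The percentages quoted above are relative gains over the D-VAT baseline; a quick check of the within-scene figures:

```python
# Relative improvement over the baseline, expressed as a percentage.
def relative_improvement(baseline, ours):
    return (ours - baseline) / baseline * 100

print(round(relative_improvement(35, 242)))     # ~591 (within-scene CR)
print(round(relative_improvement(0.19, 0.72)))  # ~279 (within-scene TSR)
```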

Acknowledgements

Thanks to Dongqing Zhao, Xilang Zeng, and Li Wei for their support and help with this project.