Challenge / Overview
In the past few years, several novel methods have been proposed for trajectory forecasting. However, most of these methods have been evaluated on limited data. Furthermore, they have been evaluated either on different subsets of the available data or in different coordinate systems (2D vs. 3D), making it difficult to compare forecasting techniques objectively.
One potential solution is a standardized benchmark that serves as an objective measure of performance. There have been only a few attempts at trajectory forecasting benchmarks, such as the ETH and UCY datasets. However, a good benchmark requires not only a standard dataset but also proper evaluation metrics.
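The section does not name specific metrics, but displacement-based errors are the standard choice in trajectory forecasting: Average Displacement Error (ADE, the mean L2 distance between predicted and ground-truth positions over the prediction horizon) and Final Displacement Error (FDE, the L2 distance at the last predicted timestep). A minimal sketch, assuming trajectories are given as equal-length lists of (x, y) points:

```python
import math

def ade_fde(pred, gt):
    """Average and Final Displacement Error between a predicted and a
    ground-truth trajectory, each an equal-length list of (x, y) points."""
    # Per-timestep Euclidean distance between prediction and ground truth.
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    # ADE averages over the horizon; FDE keeps only the final timestep.
    return sum(dists) / len(dists), dists[-1]

# Example: prediction stays on the x-axis while the true agent drifts upward.
ade, fde = ade_fde([(0, 0), (1, 0), (2, 0)], [(0, 0), (1, 1), (2, 2)])
# ade = 1.0, fde = 2.0
```

Lower values are better for both; ADE rewards accuracy along the whole horizon, while FDE isolates the endpoint, so benchmarks typically report both.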
In this challenge, we introduce TrajNet++, a new large-scale trajectory-based benchmark that uses a unified evaluation system to fairly compare state-of-the-art methods across a variety of trajectory-based activity forecasting datasets.