Get more information or participate
- Submission instructions
- Register for updates and training data
- Contact us via email at firstname.lastname@example.org
This benchmark is part of the Robust Vision Challenge (2020).
- 2020-08-17 The MiDaS RVC baseline was updated with author-provided results; metrics have changed due to a better choice of scaling
- 2020-08-11 Added Baseline Method for the RVC Challenge: BTSREF_RVC
- 2020-07-06 We have added another pretrained reference method: MiDaS
- 2020-06-14 Submissions are now open: submit your algorithm or result
- 2020-04-08 Training data is now available
- 2020-04-06 Our paper has been accepted to SAIAD 2020 (preprint/bibtex)
- 2020-04-03 We fixed a bug in the accounting of all metrics; results have changed slightly, and bump30 changed significantly
- 2020-03-22 Public leaderboard is online
Public leaderboard, sorted by Avg30. Click on the method name to see detailed results for the method.
|Network|avg30|miss30|fake30|missSt30|fakeSt30|bump30|Avg ScaleError|Avg Offset [m]|silog [%]|sq_rel [%]|abs_rel [%]|rmse_inv [1/km]|
The following plots evaluate our interpretable metrics over a range from 3 to 100 meters. See the paper for details. Note that all metrics only operate in the drivable corridor, up to 2 m above the street surface.
The Miss metric detects obstacles (on and off-street) that are missing in the algorithm result, for example parked cars, trees, barriers, bollards, and buildings.
The Fake metric detects obstacles (on and off-street) that are hallucinated in places which should be empty (and hence drivable).
Same as Miss, but restricted to obstacles directly above the visible street surface (e.g. boom gates, branches, side mirrors).
Same as Fake, but restricted to the area directly above the visible street surface.
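To make the miss/fake intuition concrete, here is a minimal sketch of how such counts could be derived from dense depth maps. This is an illustrative example only, not the benchmark's official evaluation code: the function name, the 30 m range cap, and the fixed depth tolerance are assumptions for demonstration.

```python
import numpy as np

def miss_fake_rates(gt_depth, pred_depth, max_range=30.0, tol=0.5):
    """Illustrative sketch (NOT the official evaluation code).

    Within max_range meters, count ground-truth surface points that the
    prediction places much farther away (misses: a real obstacle is not
    seen) and points the prediction places much closer than the ground
    truth (fakes: an obstacle is hallucinated in free space).
    """
    gt_depth = np.asarray(gt_depth, dtype=float)
    pred_depth = np.asarray(pred_depth, dtype=float)

    # Only evaluate points within the considered range (e.g. 30 m).
    in_range = gt_depth <= max_range

    # Miss: predicted depth is far behind the true obstacle surface.
    miss = np.mean((pred_depth > gt_depth + tol)[in_range])

    # Fake: predicted depth is well in front of the true free space.
    fake = np.mean((pred_depth < gt_depth - tol)[in_range])

    return miss, fake
```

A real evaluation would additionally restrict the comparison to the drivable corridor (up to 2 m above the street surface) and use range-dependent tolerances, as described in the paper.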