TartanDrive 2.0: More Modalities and Better Infrastructure to Further Self-Supervised Learning Research in Off-Road Driving Tasks

1Carnegie Mellon University, 2IIT-Kharagpur, 3University of Pittsburgh

TartanDrive 2.0 provides more modalities and the infrastructure to facilitate self-supervised learning.

Abstract

We present TartanDrive 2.0, a large-scale off-road driving dataset for self-supervised learning tasks. In 2021 we released TartanDrive 1.0, one of the largest datasets for off-road terrain. As a follow-up, we collected seven hours of data at speeds of up to 15 m/s, adding three new LiDAR sensors alongside the original camera, inertial, GPS, and proprioceptive sensors. We also release the tools we use for collecting, processing, and querying the data, including a metadata system designed to further the data's utility. This custom infrastructure allows end users to reconfigure the data for their own platforms. Alongside the dataset, these tools support a variety of off-road autonomy tasks, and by releasing them we encourage collaborative data aggregation. These resources lower the barrier to entry for large-scale datasets, thereby helping advance robotics in areas such as self-supervised learning, multi-modal perception, inverse reinforcement learning, and representation learning.

Explore the Dataset!

Click on any of the orange points to see an example data point from its corresponding area. The 3D data (such as the point cloud) is interactive!

Available views: PointCloud, Odometry, FPV Image, RGB BEV Image, Ground Plane, and Video.

System Overview

We provide a point cloud scan of the vehicle so that users may take their own additional measurements as desired.
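As a minimal sketch of taking such a measurement, the snippet below computes the axis-aligned extents (length, width, height) of a point cloud given as an N×3 NumPy array. The file path and loading step are assumptions (e.g., a `.pcd`/`.ply` file readable with a library such as Open3D); here a synthetic box of points stands in for the released vehicle scan.

```python
import numpy as np

def measure_extents(points: np.ndarray) -> dict:
    """Return axis-aligned bounding-box extents of an (N, 3) point cloud."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    return {
        "length": float(maxs[0] - mins[0]),  # extent along x
        "width": float(maxs[1] - mins[1]),   # extent along y
        "height": float(maxs[2] - mins[2]),  # extent along z
    }

# Stand-in for the released vehicle scan: a synthetic box of points.
# With the real scan, you would instead load the file, e.g.
# points = np.asarray(open3d.io.read_point_cloud("vehicle.pcd").points)
# (hypothetical filename; format depends on how the scan is distributed).
rng = np.random.default_rng(0)
demo_points = rng.uniform(low=[0.0, 0.0, 0.0],
                          high=[2.7, 1.2, 1.4],
                          size=(10_000, 3))
print(measure_extents(demo_points))
```

The same min/max reduction also gives a quick sanity check on sensor mounting positions if the scan is expressed in the vehicle's body frame.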

BibTeX


      
      @misc{sivaprakasam2024tartandrive,
      title={TartanDrive 2.0: More Modalities and Better Infrastructure to Further Self-Supervised Learning Research in Off-Road Driving Tasks},
      author={Matthew Sivaprakasam and Parv Maheshwari and Mateo Guaman Castro and Samuel Triest and Micah Nye and Steve Willits and Andrew Saba and Wenshan Wang and Sebastian Scherer},
      year={2024},
      eprint={2402.01913},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
      }