If you ask Waymo, the company will tell you it is one of the largest and most diverse self-driving datasets ever released for research. It contains data from 1,000 driving segments, with each segment capturing 20 seconds of continuous driving, for a total of 200,000 frames at 10Hz per sensor. This type of continuous footage is a potential goldmine for researchers attempting to develop models that track and predict road user behavior.
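Those figures follow directly from the segment length and sensor rate. A quick sanity check of the arithmetic (the variable names below are illustrative, not from Waymo's tooling):

```python
# Frame-count arithmetic for the dataset described above.
# All names are illustrative; this does not use Waymo's actual API.
num_segments = 1000      # driving segments in the release
segment_seconds = 20     # continuous driving per segment
sensor_hz = 10           # capture rate per sensor

frames_per_segment = segment_seconds * sensor_hz       # 200 frames per sensor
total_frames = num_segments * frames_per_segment       # 200,000 frames per sensor

print(frames_per_segment, total_frames)  # prints "200 200000"
```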
"This data has the potential to help researchers make advances in 2D and 3D perception, and progress on areas such as domain adaptation, scene understanding and behavior prediction. We hope that the research community will generate more exciting directions with our data that will not only help to make self-driving vehicles more capable, but also impact other related fields and applications, such as computer vision and robotics.," Waymo says.
The data represents diverse driving environments, including dense urban and suburban areas across Phoenix, Arizona; Kirkland, Washington; and Mountain View and San Francisco, California. It also includes data collected during daytime, nighttime, and dawn, in both sunny and rainy conditions.