Dataset

CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios

A comprehensive, multi-modal dataset for advancing vehicle-to-vehicle perception research in challenging driving conditions and complex traffic environments.

Dataset Overview

100 Video Clips
60K LiDAR Frames
1.26M Camera Images
750K GNSS/IMU Records
10 Weather Conditions
10 Test Locations

Abstract

Vehicle-to-Vehicle (V2V) cooperative perception has great potential to enhance autonomous driving performance by overcoming perception limitations in complex adverse traffic scenarios (CATS). Meanwhile, data serves as the fundamental infrastructure for modern autonomous driving AI. However, due to stringent data collection requirements, existing datasets focus primarily on ordinary traffic scenarios, constraining the benefits of cooperative perception. To address this challenge, we introduce CATS-V2V, the first-of-its-kind real-world dataset for V2V cooperative perception under complex adverse traffic scenarios.

The dataset was collected by two hardware time-synchronized vehicles, covering 10 weather and lighting conditions across 10 diverse locations. The 100-clip dataset includes 60K frames of 10 Hz LiDAR point clouds and 1.26M multi-view 30 Hz camera images, along with 750K anonymized yet high-precision RTK-fixed GNSS and IMU records. Correspondingly, we provide time-consistent 3D bounding box annotations for dynamic objects, as well as annotations for static scenes, to construct a 4D BEV representation.

On this basis, we propose a target-based temporal alignment method, ensuring that all objects are precisely aligned across all sensor modalities. We hope that CATS-V2V, the largest-scale, most supportive, and highest-quality dataset of its kind to date, will benefit the autonomous driving community in related tasks.
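Because the LiDAR (10 Hz) and cameras (30 Hz) run at different rates, a common first step when working with such multi-modal data is to pair each LiDAR sweep with its temporally closest camera frame. The snippet below is a minimal illustrative sketch of nearest-timestamp matching, not the dataset's target-based alignment method; the function name, timestamp format (sorted floats in seconds), and offset threshold are assumptions for illustration.

```python
import bisect

def nearest_timestamp_pairs(lidar_ts, camera_ts, max_offset=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp,
    dropping pairs farther apart than max_offset seconds.
    Both input lists are assumed sorted in ascending order."""
    pairs = []
    for t in lidar_ts:
        i = bisect.bisect_left(camera_ts, t)
        # Candidates: the camera frame just before and just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        j = min(candidates, key=lambda k: abs(camera_ts[k] - t))
        if abs(camera_ts[j] - t) <= max_offset:
            pairs.append((t, camera_ts[j]))
    return pairs

# Example: one second of 10 Hz LiDAR against 30 Hz camera timestamps
lidar = [k / 10 for k in range(10)]
camera = [k / 30 for k in range(30)]
pairs = nearest_timestamp_pairs(lidar, camera)
```

With hardware time synchronization as described above, the matched pairs would share a common clock, so residual offsets stay well below the inter-frame interval.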

Keywords: Vehicle-to-Vehicle • Cooperative Perception • Autonomous Driving • Dataset • Adverse Weather
Tags: V2V • CVPR 2025 • Dataset

Key Highlights

Hardware Time-Synchronized
Precise temporal alignment across all sensor modalities using hardware synchronization
Complex Adverse Scenarios
First dataset focused on challenging weather and lighting conditions
4D BEV Representation
Comprehensive 4D Bird's Eye View representation for temporal analysis

Authors

Hangyu Li¹* · Bofeng Cao¹* · Zhaohui Liang¹ · Wuzhen Li¹ · Juyoung Oh¹ · Yuxuan Chen¹ · Shixiao Liang¹ · Hang Zhou¹ · Chengyuan Ma¹ · Jiaxi Liu¹ · Zheng Li¹ · Peng Zhang¹ · KeKe Long¹ · Maolin Liu² · Jackson Jiang² · Chunlei Yu² · Shengxiang Liu² · Hongkai Yu³ · Xiaopeng Li¹
* Equal contribution • † Corresponding author
¹ University of Wisconsin-Madison · ² wuwen-ai · ³ Cleveland State University

Citation

@article{li2025catsv2v,
  title={CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios},
  author={Li, Hangyu and Cao, Bofeng and Liang, Zhaohui and Li, Wuzhen and Oh, Juyoung and Chen, Yuxuan and Liang, Shixiao and Zhou, Hang and Ma, Chengyuan and Liu, Jiaxi and Li, Zheng and Zhang, Peng and Long, KeKe and Liu, Maolin and Jiang, Jackson and Yu, Chunlei and Liu, Shengxiang and Yu, Hongkai and Li, Xiaopeng},
  journal={arXiv preprint arXiv:2511.11168},
  year={2025}
}

This project is licensed under the Creative Commons Attribution 4.0 International License.
© 2025 CATS-V2V Dataset Project. All rights reserved.