LidarDM: Generative LiDAR Simulation in a Generated World

University of Illinois Urbana-Champaign, MIT

LidarDM generates realistic sequences of LiDAR readings from scratch using a diffusion model.

Abstract

We present LidarDM, a novel LiDAR generative model capable of producing realistic, layout-aware, physically plausible, and temporally coherent LiDAR videos. LidarDM stands out with two unprecedented capabilities in LiDAR generative modeling: (i) LiDAR generation guided by driving scenarios, offering significant potential for autonomous driving simulations, and (ii) 4D LiDAR point cloud generation, enabling the creation of realistic and temporally coherent sequences. At the heart of our model is a novel integrated 4D world generation framework. Specifically, we employ latent diffusion models to generate the 3D scene, combine it with dynamic actors to form the underlying 4D world, and subsequently produce realistic sensory observations within this virtual environment. Our experiments indicate that our approach outperforms competing algorithms in realism, temporal coherency, and layout consistency. We additionally show that LidarDM can be used as a generative world model simulator for training and testing perception models.
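The final stage of this pipeline, producing sensory observations inside the generated world, can be illustrated with a toy spherical-projection range image. The function below is an illustrative sketch, not the paper's actual renderer: the name `render_range_image`, the resolution, and the vertical field of view are all assumptions.

```python
import numpy as np

def render_range_image(points, h=32, w=360, max_range=80.0):
    """Toy LiDAR sensor model (illustrative, not LidarDM's renderer):
    project 3D points into an (elevation, azimuth) grid and keep the
    closest return per cell, like a spinning sensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                                  # azimuth in (-pi, pi]
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1, 1))
    col = ((az + np.pi) / (2 * np.pi) * w).astype(int) % w
    fov = np.deg2rad(15.0)                                 # assumed +/-15 deg vertical FOV
    row = ((el + fov) / (2 * fov) * h).astype(int).clip(0, h - 1)
    img = np.full((h, w), max_range)                       # cells with no hit stay at max range
    for ri, ci, di in zip(row, col, r):
        if di < img[ri, ci]:
            img[ri, ci] = di
    return img

# Example: a ring of points 10 m away at sensor height
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
pts = np.stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros(720)], axis=1)
rng_img = render_range_image(pts)
```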

Consistent Video Generation

LidarDM generates temporally consistent sequences of LiDAR readings.

Long Sequence Generation

LidarDM generates simulated LiDAR sensor readings for long traffic scenarios with only a BEV layout as input.

4-D Composition of Static and Dynamic Objects

LidarDM works by factorizing the scenario into static and dynamic elements.
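This factorization can be sketched as one static point map reused across frames plus per-frame actor point clouds placed along their trajectories. All names below (`compose_frame`, the cloud sizes, the translation-only poses) are illustrative assumptions, not LidarDM's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_frame(static_map, actor_clouds, actor_poses_t):
    """Union the static map with each actor's points moved to its pose
    (simplified here to a pure translation) for one time step."""
    placed = [cloud + pose for cloud, pose in zip(actor_clouds, actor_poses_t)]
    return np.vstack([static_map] + placed)

static_map = rng.uniform(-50, 50, size=(2000, 3))   # stand-in for the generated scene
car = rng.uniform(-1, 1, size=(100, 3))             # stand-in actor point cloud
trajectory = [np.array([2.0 * t, 0.0, 0.0]) for t in range(5)]  # 2 m per frame

sequence = [compose_frame(static_map, [car], [pose]) for pose in trajectory]
```

Because the static map is shared across all frames, only the actor poses change from step to step, which is what makes the generated sequence temporally coherent by construction.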


Conditional Diffusion-based Scene Modeling

Powered by a conditional latent diffusion model, LidarDM generates novel LiDAR reading sequences that closely match a provided map layout.
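Layout conditioning can be sketched with a standard DDPM ancestral-sampling loop in which the denoiser receives the layout as an extra input at every step. The tiny closed-form "denoiser" and all names below are placeholders for illustration; the paper's latent-space model is a learned network, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50
betas = np.linspace(1e-4, 0.02, T)       # toy noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def eps_model(x_t, layout, t):
    """Placeholder denoiser: the exact noise if the clean sample were the
    layout itself. A real model would be a conditional network in latent space."""
    return (x_t - np.sqrt(abar[t]) * layout) / np.sqrt(1.0 - abar[t])

def sample(layout):
    """DDPM ancestral sampling, conditioned on the layout at every step."""
    x = rng.standard_normal(layout.shape)
    for t in range(T - 1, -1, -1):
        eps = eps_model(x, layout, t)
        mean = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

layout = np.zeros((8, 8)); layout[3:5, :] = 1.0   # toy BEV "road" mask
out = sample(layout)
```

With this idealized denoiser the sampler recovers the layout exactly; a learned denoiser instead steers the sample toward scenes whose geometry agrees with the layout.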


Out-of-Distribution Scenario Composition

LidarDM provides a flexible composition pipeline that allows self-driving autonomy evaluation in dangerous scenarios, such as animals escaping a zoo.


Competitive Single-Frame LiDAR Generation

LidarDM can also be run unconditionally to generate single-frame LiDAR readings (samples from KITTI-360).


Citation

@misc{lidardm,
      title={LidarDM: Generative LiDAR Simulation in a Generated World}, 
      author={Vlas Zyrianov and Henry Che and Zhijian Liu and Shenlong Wang},
      year={2024},
      eprint={2404.02903},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}