Targetless LiDAR-Camera Calibration
with Anchored 3D Gaussians

Haebeom Jung   Namtae Kim   Jungwoo Kim   Jaesik Park
Seoul National University   Konkuk University

arXiv 2025


Abstract

We present a targetless LiDAR-camera calibration method that jointly optimizes sensor poses and scene geometry from arbitrary scenes, without relying on traditional calibration targets such as checkerboards or spherical reflectors. Our approach leverages a 3D Gaussian-based scene representation. We first freeze reliable LiDAR points as anchors, then jointly optimize the poses and auxiliary Gaussian parameters in a fully differentiable manner using a photometric loss. This joint optimization significantly reduces sensor misalignment, resulting in higher rendering quality and consistently improved PSNR compared to the carefully calibrated poses provided in popular datasets. We validate our method through extensive experiments on two real-world autonomous driving datasets, KITTI-360 and Waymo, each featuring distinct sensor configurations. Additionally, we demonstrate the robustness of our approach using a custom LiDAR-camera setup, confirming strong performance across diverse hardware configurations.
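
The core idea can be sketched as a small PyTorch optimization loop. This is not the authors' implementation: a simple point-projection photometric residual stands in for the full differentiable Gaussian rasterizer, the toy inputs are placeholders, and helper names such as photometric_loss and sample_colors are invented for illustration.

    import torch
    import torch.nn.functional as F

    def skew(v):
        """Skew-symmetric matrix of a 3-vector (keeps the autograd graph intact)."""
        o = torch.zeros((), dtype=v.dtype, device=v.device)
        return torch.stack([
            torch.stack([o, -v[2], v[1]]),
            torch.stack([v[2], o, -v[0]]),
            torch.stack([-v[1], v[0], o]),
        ])

    def axis_angle_to_matrix(r):
        """Rodrigues' formula; the epsilon avoids a NaN gradient at r = 0."""
        theta = torch.sqrt((r * r).sum() + 1e-12)
        K = skew(r / theta)
        I = torch.eye(3, dtype=r.dtype, device=r.device)
        return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

    def sample_colors(image, uv):
        """Bilinearly sample an image (3, H, W) at pixel coordinates uv (N, 2)."""
        _, H, W = image.shape
        gx = uv[:, 0] / (W - 1) * 2.0 - 1.0
        gy = uv[:, 1] / (H - 1) * 2.0 - 1.0
        grid = torch.stack([gx, gy], dim=-1).view(1, -1, 1, 2)
        out = F.grid_sample(image[None], grid, align_corners=True)  # (1, 3, N, 1)
        return out[0, :, :, 0].T                                    # (N, 3)

    def photometric_loss(r, t, pts_cam0, pts_color, image, K_intr):
        """Apply the pose correction, project the frozen anchors, compare colors."""
        R = axis_angle_to_matrix(r)
        pts_cam = pts_cam0 @ R.T + t                 # corrected camera-frame points
        proj = pts_cam @ K_intr.T
        uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-3)
        _, H, W = image.shape
        z = pts_cam[:, 2]
        valid = (z > 0.1) & (uv[:, 0] >= 0) & (uv[:, 0] <= W - 1) \
                          & (uv[:, 1] >= 0) & (uv[:, 1] <= H - 1)
        return (sample_colors(image, uv[valid]) - pts_color[valid]).abs().mean()

    # Toy inputs so the sketch runs end to end (replace with real data).
    torch.manual_seed(0)
    image = torch.rand(3, 120, 160)                  # target camera image
    K_intr = torch.tensor([[100.0,   0.0, 80.0],
                           [  0.0, 100.0, 60.0],
                           [  0.0,   0.0,  1.0]])
    pts_cam0 = torch.rand(2000, 3) * torch.tensor([8.0, 6.0, 2.0]) \
               + torch.tensor([-4.0, -3.0, 2.0])     # anchors, initial extrinsic applied
    pts_color = torch.rand(2000, 3)                  # frozen anchor colors

    # Pose correction on top of the initial extrinsic, refined photometrically.
    r = torch.zeros(3, requires_grad=True)           # axis-angle rotation update
    t = torch.zeros(3, requires_grad=True)           # translation update
    optimizer = torch.optim.Adam([r, t], lr=1e-3)
    for step in range(200):
        optimizer.zero_grad()
        loss = photometric_loss(r, t, pts_cam0, pts_color, image, K_intr)
        loss.backward()
        optimizer.step()

In the actual method, a rendered image of the anchored and auxiliary Gaussians takes the place of the sampled colors, but the gradient path from pixels back to the pose parameters follows the same spirit.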

Overview

Figure 1. After combining multiple LiDAR scans into a globally aligned point cloud, anchor Gaussians are fixed as stable geometric references. Auxiliary Gaussians then adapt to local scene geometry while the sensor extrinsic parameters are optimized through a photometric loss. A rig-based camera pose update strategy maintains internal consistency among the multiple cameras, enabling synchronized refinement of sensor poses. Additionally, the interplay between anchor and auxiliary Gaussians mitigates viewpoint overfitting, ensuring robust and accurate pose optimization.
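
To illustrate the rig-based update mentioned in the caption, the sketch below (assumed conventions, not the released code) optimizes a single shared rig correction while the per-camera mounting transforms stay fixed; delta_transform, the small-angle parameterization, and the composition order are our assumptions.

    import torch

    def hat(v):
        """Skew-symmetric matrix of a 3-vector."""
        o = torch.zeros((), dtype=v.dtype)
        return torch.stack([
            torch.stack([o, -v[2], v[1]]),
            torch.stack([v[2], o, -v[0]]),
            torch.stack([-v[1], v[0], o]),
        ])

    def delta_transform(r, t):
        """Small-angle SE(3) update: R ~= I + hat(r), adequate for small corrections."""
        R = torch.eye(3, dtype=r.dtype) + hat(r)
        top = torch.cat([R, t.view(3, 1)], dim=1)               # (3, 4)
        bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]], dtype=r.dtype)
        return torch.cat([top, bottom], dim=0)                  # (4, 4)

    # Fixed camera-to-rig transforms (e.g., from the dataset calibration);
    # identity matrices here are placeholders.
    cam_to_rig = {"cam_front": torch.eye(4), "cam_left": torch.eye(4)}

    # A single shared correction of the rig pose, applied to every camera.
    r_rig = torch.zeros(3, requires_grad=True)
    t_rig = torch.zeros(3, requires_grad=True)

    def camera_pose(cam_name, rig_to_world_init, r_rig, t_rig):
        """world -> rig (refined) -> camera; all cameras move together."""
        rig_to_world = delta_transform(r_rig, t_rig) @ rig_to_world_init
        cam_to_world = rig_to_world @ cam_to_rig[cam_name]
        return torch.linalg.inv(cam_to_world)                   # world -> camera

Because the photometric losses of all cameras backpropagate into the same (r_rig, t_rig), the relative poses inside the rig cannot drift apart during refinement.
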
The primary contributions of this work are as follows:
  1. We stabilize global scale and translation by designating reliable LiDAR points as anchor Gaussians and introducing auxiliary Gaussians to prevent scene saturation during joint optimization (illustrated in the sketch after this list).
  2. By combining photometric loss with scale-consistent geometric constraints, our method robustly aligns LiDAR and camera sensors across diverse environments.
  3. We validate our approach on two real-world autonomous driving datasets with distinct sensor configurations, as well as a custom-captured dataset, demonstrating faster computation and higher accuracy than existing calibration methods.
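
As a rough illustration of the first contribution, the snippet below (hypothetical parameter layout, sizes, and learning rates, not the released code) freezes the anchor Gaussians while registering only the auxiliary Gaussians and the pose correction with the optimizer.

    import torch

    # Anchor Gaussians: centers taken from the aggregated LiDAR point cloud and
    # frozen, so they pin down the global scale and translation of the scene.
    anchor_xyz = torch.rand(10_000, 3)        # stand-in for reliable LiDAR points
    anchor_xyz.requires_grad_(False)

    # Auxiliary Gaussians: free to adapt and explain appearance the anchors miss,
    # which helps avoid overfitting the scene to any single viewpoint.
    aux_xyz = torch.rand(5_000, 3, requires_grad=True)
    aux_color = torch.rand(5_000, 3, requires_grad=True)

    # Sensor pose corrections (per rig or per camera), refined jointly.
    pose_params = torch.zeros(6, requires_grad=True)

    # Only the auxiliary Gaussians and the poses receive gradients from the
    # photometric loss; the frozen anchors act as a stable geometric reference.
    optimizer = torch.optim.Adam([
        {"params": [aux_xyz, aux_color], "lr": 1e-3},
        {"params": [pose_params], "lr": 1e-4},
    ])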


Progress of Joint Optimization


*We sampled 4 images out of 40 per camera for visualization purposes.


Colorized Point Clouds Using Our Calibration


Figure 2. The scenes are drawn from the Waymo Open Dataset, our custom dataset, and KITTI-360, demonstrating applicability across diverse environments. For clarity, trajectories from the same camera are shown in the same color. Calibration was performed using no more than 40 images per camera in each scene.
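
Colorizing a point cloud from a calibrated camera reduces to projecting each LiDAR point with the recovered extrinsic and intrinsics and reading off the pixel color. The sketch below is a minimal single-view version under an assumed pinhole model; the function name and argument conventions (T_cam_from_lidar maps LiDAR to camera coordinates) are ours, not the paper's.

    import numpy as np

    def colorize(points_lidar, image, K, T_cam_from_lidar):
        """points_lidar: (N, 3); image: (H, W, 3); K: (3, 3); T: (4, 4)."""
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coords
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]         # LiDAR -> camera frame
        z = pts_cam[:, 2]
        proj = (K @ pts_cam.T).T
        uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = image.shape[:2]
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors = np.zeros((n, 3), dtype=image.dtype)
        colors[valid] = image[v[valid], u[valid]]               # nearest-pixel lookup
        return colors, valid

A practical pipeline would additionally handle occlusion (e.g., with a depth buffer) and fuse colors across the multiple views shown in the figure; the sketch keeps only the single-view projection.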


Novel View Synthesis Results


Comparison: Ours vs. Dataset Calib. (poses provided by the dataset).


LiDAR-Camera Reprojection Comparison




BibTeX

@article{jung2025targetless,
  title     = {{Targetless LiDAR-Camera Calibration with Anchored 3D Gaussians}},
  author    = {Haebeom Jung and Namtae Kim and Jungwoo Kim and Jaesik Park},
  journal   = {arXiv preprint arXiv:2504.04597},
  year      = {2025}
}
