Abstract: Event cameras offer a compelling alternative to RGB cameras in many scenarios. While there are recent works on event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach is the first to combine 3D signed distance function and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for improved handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving 34% lower Chamfer distance and 31% lower mean absolute error on average than the best previous method.
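The abstract does not spell out how the event stream supervises the learned representation. As a rough illustration, event-based reconstruction pipelines in the EventNeRF line of work compare the rendered log-brightness change over a time window against the change implied by the accumulated event polarities, where each event corresponds to a log-brightness step of ±C. The snippet below is a minimal sketch of such a loss under that assumption; the function name, tensor shapes, and the contrast-threshold value are illustrative and not taken from the paper.

```python
import torch

def event_supervision_loss(log_I_start, log_I_end, event_sum, C=0.25):
    """Sketch of an event-based rendering loss (assumed formulation).

    Compares the rendered log-brightness change between two timestamps
    with the change implied by the accumulated signed event counts,
    assuming each event signals a log-brightness step of +/- C.

    Args:
        log_I_start: rendered log intensities at the window start, shape (N, 3).
        log_I_end:   rendered log intensities at the window end, shape (N, 3).
        event_sum:   signed event counts per ray and colour channel, shape (N, 3).
        C:           contrast threshold of the event camera (illustrative value).
    """
    predicted_change = log_I_end - log_I_start   # from the neural renderer
    observed_change = C * event_sum              # from the event stream
    return torch.mean((predicted_change - observed_change) ** 2)
```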

We present a qualitative comparison of 3D mesh reconstructions. Our method, EventNeuS, consistently recovers higher-fidelity geometry with fewer artifacts than the baselines. Interact with the synchronized views below to inspect the details.
Drag to rotate | Scroll to zoom | All views synchronized
* PAEv3D requires training on a high-resolution event stream (692×520 px) to converge
@inproceedings{sachan2026eventneus,
  title     = {EventNeuS: 3D Mesh Reconstruction from a Single Event Camera},
  author    = {Sachan, Shreyas and Rudnev, Viktor and Elgharib, Mohamed and Theobalt, Christian and Golyanik, Vladislav},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = {2026}
}