Abstract:
Recent advances in radiance field reconstruction, such as 3D Gaussian
Splatting (3DGS), have achieved high-quality novel view synthesis and fast
rendering by representing scenes with compositions of Gaussian primitives.
However, 3D Gaussians present several limitations for scene reconstruction.
Accurately capturing hard edges is challenging without significantly increasing
the number of Gaussians, which results in a large memory footprint. Moreover,
Gaussians struggle to represent flat surfaces because their density is diffuse
in space; without hand-crafted regularizers, they tend to disperse irregularly
around the actual surface. To circumvent these issues, we introduce a novel
method, named 3D
Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for
modeling geometrically-meaningful radiance fields from multi-view images.
Smooth convex shapes offer greater flexibility than Gaussians, allowing for a
better representation of 3D scenes with hard edges and dense volumes using
fewer primitives. Powered by our efficient CUDA-based rasterizer, 3DCS achieves
superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and
Temples, and Deep Blending. Specifically, our method attains an improvement of
up to 0.81 dB in PSNR and 0.026 in LPIPS compared to 3DGS, while maintaining high
rendering speeds and reducing the number of required primitives. Our results
highlight the potential of 3D Convex Splatting to become the new standard for
high-quality scene reconstruction and novel view synthesis. Project page:
convexsplatting.github.io.