Inpaint360GS
Despite recent advances in single-object, front-facing inpainting with NeRF and 3D Gaussian Splatting (3DGS), inpainting in complex 360° scenes remains largely underexplored. This is primarily due to three key challenges: (i) identifying target objects in the 3D field of a 360° environment, (ii) handling severe occlusions in multi-object scenes, which make it hard to define the regions to inpaint, and (iii) maintaining consistent, high-quality appearance across views.
To tackle these challenges, we propose Inpaint360GS, a flexible 360° editing framework based on 3DGS that supports multi-object removal and high-fidelity inpainting in 3D space. By distilling 2D segmentation into 3D and leveraging virtual camera views for contextual guidance, our method enables accurate object-level editing and consistent scene completion. We further introduce a new dataset tailored to 360° inpainting, addressing the lack of ground-truth object-free scenes. Experiments demonstrate that Inpaint360GS outperforms existing baselines, achieving state-of-the-art performance.
Inpaint360GS takes RGB images as input, builds a Gaussian Radiance Field (GRF), and obtains per-view masks from a segmentation model. We then associate these masks across views to obtain reasonably consistent object IDs for the Gaussians. This object-aware GRF enables direct 3D object manipulation, such as click-based or prompt-based removal. After removing the target objects, we render from novel camera poses to obtain virtual views \( \mathcal{V} \). During 2D inpainting, we recursively perform conditional RGB and depth inpainting; the inpainted depth then guides 3D inpainting.
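To make the object-level removal step concrete, the sketch below drops every Gaussian whose distilled object ID matches a removal target. The flat-array layout and the remove_objects helper are illustrative assumptions for exposition, not the authors' actual data structures or API:

    import numpy as np

    # Toy object-aware Gaussian field: centers plus a per-Gaussian object ID
    # distilled from the associated 2D masks (this layout is an assumption).
    means = np.random.rand(10_000, 3)             # Gaussian centers
    object_ids = np.random.randint(0, 5, 10_000)  # distilled object labels

    def remove_objects(means, object_ids, targets):
        """Drop every Gaussian whose object ID is in `targets` (hypothetical helper)."""
        keep = ~np.isin(object_ids, list(targets))
        return means[keep], object_ids[keep]

    # e.g., a click or text prompt resolves to object IDs {2, 4}
    means, object_ids = remove_objects(means, object_ids, targets={2, 4})

In practice the same boolean mask would also be applied to the remaining Gaussian attributes (scales, rotations, opacities, spherical-harmonic coefficients) so the field stays consistent after removal.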
We provide Inpaint360GS, a new high-quality 360° dataset for quantitative evaluation of object removal and inpainting. To ensure fair evaluation, all points corresponding to the inpainting (test) views have been removed from the COLMAP sparse point cloud. The dataset includes 4 multi-object scenes and 7 single-object scenes.
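For reference, the following sketch shows one way such filtering could be done on a text-format COLMAP model: every 3D point whose track contains a held-out test view is dropped. The file names, the test-view list, and the filter_points3d function are assumptions, not the authors' release script:

    # points3D.txt rows: POINT3D_ID X Y Z R G B ERROR (IMAGE_ID POINT2D_IDX)*
    # images.txt rows come in pairs; the first line of each pair ends with NAME.
    def filter_points3d(images_txt, points3d_txt, out_txt, test_view_names):
        # Map held-out view names to COLMAP image IDs.
        test_ids = set()
        with open(images_txt) as f:
            lines = [l for l in f if not l.startswith("#")]
        for header in lines[::2]:          # first line of each image pair
            fields = header.split()
            if fields and fields[-1] in test_view_names:
                test_ids.add(fields[0])    # IMAGE_ID is the first field

        # Keep only points whose track never touches a test view.
        with open(points3d_txt) as fin, open(out_txt, "w") as fout:
            for line in fin:
                if line.startswith("#"):
                    fout.write(line)
                    continue
                track_image_ids = line.split()[8::2]  # IMAGE_IDs after ERROR
                if not test_ids.intersection(track_image_ids):
                    fout.write(line)

Running this per scene guarantees that the sparse initialization carries no geometry observed only in the evaluation views.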
 
       