Recent advances in 3D reconstruction techniques and vision-language models have fueled significant progress in 3D semantic understanding, a capability critical to robotics, autonomous driving, and virtual/augmented reality. However, methods that rely on 2D priors face a critical challenge: cross-view semantic inconsistencies induced by occlusion, image blur, and view-dependent variations. When propagated via projection supervision, these inconsistencies degrade the quality of 3D Gaussian semantic fields and introduce artifacts in the rendered outputs. To mitigate this limitation, we propose CCL-LGS, a novel framework that enforces view-consistent semantic supervision by integrating multi-view semantic cues. Specifically, our approach first employs a zero-shot tracker to align the 2D masks provided by SAM and reliably associate each mask with its category across views. Next, we utilize CLIP to extract robust semantic encodings across views. Finally, our Contrastive Codebook Learning (CCL) module distills discriminative semantic features by enforcing intra-class compactness and inter-class distinctiveness. In contrast to previous methods that directly apply CLIP to imperfect masks, our framework explicitly resolves semantic conflicts while preserving category discriminability. Extensive experiments demonstrate the superiority of CCL-LGS over previous state-of-the-art methods.
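To make the CCL idea concrete, below is a minimal PyTorch sketch of a prototype-style contrastive objective over a learnable semantic codebook. It is an illustrative sketch rather than the paper's actual implementation: the function name `contrastive_codebook_loss`, the tensor shapes, and the temperature value are all assumptions, and the per-mask CLIP features and tracker-assigned category IDs are taken as given inputs.

```python
import torch
import torch.nn.functional as F

def contrastive_codebook_loss(clip_feats, category_ids, codebook, temperature=0.07):
    """Illustrative contrastive objective over a learnable semantic codebook.

    clip_feats:   (N, D) CLIP embeddings, one per SAM mask, pooled across views.
    category_ids: (N,)   category index per mask from the zero-shot tracker.
    codebook:     (K, D) learnable prototype embedding per category.
    """
    feats = F.normalize(clip_feats, dim=-1)
    protos = F.normalize(codebook, dim=-1)
    # Cosine similarity of every mask feature to every codebook prototype.
    logits = feats @ protos.t() / temperature  # (N, K)
    # Cross-entropy pulls each feature toward its own prototype
    # (intra-class compactness) and pushes it away from all others
    # (inter-class distinctiveness).
    return F.cross_entropy(logits, category_ids)
```

Under these assumptions, with `K` tracked categories one could initialize `codebook = torch.nn.Parameter(torch.randn(K, 512))` and optimize it jointly with the 3D Gaussian semantic field, so that masks of the same category across views are distilled into a single consistent code.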
We thank the authors of previous excellent works, including SAM, SAM2, Gaussian Splatting, Gaussian Grouping, LangSplat, and LEGaussians.
@article{tian2025ccl-lgs,
  title={CCL-LGS: Contrastive Codebook Learning for 3D Language Gaussian Splatting},
  author={Tian, Lei and Li, Xiaomin and Ma, Liqian and Huang, Hefei and Zheng, Zirui and Yin, Hao and Li, Taiqing and Lu, Huchuan and Jia, Xu},
  journal={arXiv preprint arXiv:2505.20469},
  year={2025}
}