
Add Scaffold-GS to methods #3623


Open: wants to merge 4 commits into main

Conversation

@brian-xu (Contributor) commented Apr 1, 2025

Adds an implementation of Scaffold-GS to the docs.

This project is a little ambitious and also includes two other methods: GSDF, an extension of Scaffold-GS, and a port of NeuS-acc from SDFStudio. Those two methods aren't completely done yet, but I won't have time to work on them for a while, so I figured I should release Scaffold-GS first.

@ichsan2895

Hey, nice addition to nerfstudio!

But IMO the license of diff-scaffold-rasterization (https://github.com/brian-xu/diff-scaffold-rasterization) is not the same as nerfstudio's, and it isn't commercially usable. I recommend using the gsplat scaffold implementation instead (PR nerfstudio-project/gsplat#409 by MrNerf).

@brian-xu (Contributor, Author) commented Apr 7, 2025

I did consider using gsplat, since the authors of Scaffold-GS published a follow-up paper using gsplat. However, I wanted to implement GSDF and use depth+normal regularization from RaDe-GS, so I opted for the Inria rasterizer. If I have the time, I'll see if I can migrate to gsplat.

@ichsan2895 commented Apr 9, 2025

Thanks for your effort in migrating to gsplat.

Gsplat also has an unmerged RaDe-GS implementation.

BTW, does ns-export gaussian-splat work?

@brian-xu (Contributor, Author) commented Apr 9, 2025

I didn't know there was a RaDe-GS implementation for gsplat; I'll have to look into that.

The only problem with ns-export is that Scaffold-GS neural gaussians would have to be "baked" from a certain viewpoint, and I don't think ns-export can provide such a camera view. It could be possible by caching the first camera view as part of the model parameters.

@ichsan2895 commented Apr 18, 2025

Good result so far @brian-xu 🎉
Testing this PR on the mip-360 dataset, just for 1k iters. Here is the benchmark:

ns-train scaffold-gs --max-num-iterations=1000
[screenshot: metrics]

ns-train splatfacto-mcmc --max-num-iterations=1000
[screenshot: metrics]

ns-train splatfacto --max-num-iterations=1000
[screenshot: metrics]

Scaffold-gs seems to converge faster than the other methods.
Hopefully, if I have free time, I will try 30k iters.

@brian-xu (Contributor, Author) commented May 1, 2025

As of the most recent commit, scaffold-gs-nerfstudio uses my fork of gsplat with depth+normal regularization from RaDe-GS, and is compatible with the nerfstudio license.

@abrahamezzeddine commented May 1, 2025

Nice addition mate.

Question: is scaffold standalone, or do you run scaffold + RaDe-GS together? Is it possible to supply your own depth maps? For example, I have added depth supervision using MoGe, and it's OK to fill in some areas using relative depth, since gsplat already outputs rendered depths.

I also read in the gsplat forum that adding absgrad really improved the numbers. Any plan to add it as well? :)

@brian-xu (Contributor, Author) commented May 1, 2025

Scaffold is standalone: we calculate the neural gaussians as per the paper and rasterize with RaDe-GS separately. The RaDe gsplat branch is compatible with any gaussian splatting framework you want to implement.

It is also compatible with external depth supervision via a monocular estimator such as MoGe. The authors of RaDe actually don't use any sort of depth supervision during training but it should be a relatively simple addition. I don't know whether the current gsplat implementation of depth rasterization is differentiable, but the outputs from RaDe are.

As for absgrad: I haven't tested it but the original PR used it in evaluation scripts so it likely works without any modification. Whether it works well in combination with Scaffold-GS and RaDe would require more testing.
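[For anyone wiring in external depth supervision on top of the rendered depths: relative depth from a monocular estimator like MoGe is only defined up to scale and shift, so the usual trick is an affine-invariant loss. A minimal NumPy sketch, hypothetical and not code from this PR; the in-model version would operate on torch tensors so gradients flow through the rendered depth:]

```python
import numpy as np

def monodepth_loss(rendered_depth, mono_depth, mask):
    """Scale-and-shift-invariant L1 between rendered and monocular depth.

    Relative (monocular) depth is only defined up to an affine transform,
    so first align it to the rendered depth with a least-squares fit.
    """
    r = rendered_depth[mask]
    m = mono_depth[mask]
    # Solve for scale s and shift t minimizing ||s*m + t - r||^2.
    A = np.stack([m, np.ones_like(m)], axis=-1)  # (N, 2)
    (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
    return float(np.abs(s * m + t - r).mean())
```

[The mask would typically exclude sky or invalid pixels; the loss is zero whenever the monocular depth is an exact affine transform of the rendered depth.]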

@abrahamezzeddine

> Scaffold is standalone: we calculate the neural gaussians as per the paper and rasterize with RaDe-GS separately. […]

Thanks for your prompt answer. Would it also be possible to supply normals, for example? As I understand it, gsplat does not include normals in the backward pass for 3DGS, but perhaps you have solved that in some way?

@brian-xu (Contributor, Author) commented May 1, 2025

Yes, you can also train with normals. The RaDe-GS paper introduces depth and normals in the backward pass, and the authors implemented their work in gsplat. See nerfstudio-project/gsplat#317 for the original PR and https://github.com/brian-xu/gsplat-rade for my updated fork, which is compatible with the latest version of nerfstudio.
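[For readers wanting to consume those rendered normals in a supervision term, a common formulation combines an L1 and a cosine penalty. A hypothetical NumPy sketch, not from this PR; a torch version would be needed for actual training:]

```python
import numpy as np

def normal_loss(rendered_normals, target_normals, mask):
    """L1 plus cosine penalty between rendered and supervised normals.

    Inputs are (H, W, 3) arrays; normalize both first, since predicted
    normals are not guaranteed to be unit length.
    """
    r = rendered_normals[mask]
    t = target_normals[mask]
    r = r / np.linalg.norm(r, axis=-1, keepdims=True)
    t = t / np.linalg.norm(t, axis=-1, keepdims=True)
    l1 = np.abs(r - t).sum(axis=-1).mean()
    cos = (1.0 - (r * t).sum(axis=-1)).mean()
    return float(l1 + cos)
```

[The cosine term penalizes angular error directly, while the L1 term keeps gradients well-behaved for near-aligned normals.]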

@abrahamezzeddine

I've now added absgrad and depth supervision directly into the scaffold-gs model. Let's see how it goes.

However, I'm still wondering about the normals... need to figure out a way to parse them into scaffold-gs... :D

@brian-xu (Contributor, Author) commented May 2, 2025

I have updated GSDF to display in Viser while training and to allow mesh export from a trained model. I consider it ready for release and have updated the documentation accordingly.

@abrahamezzeddine commented May 2, 2025

Training with depth and normal supervision at the moment.
I am wondering if it's possible to save the training as a 3DGS file to be viewed in regular 3DGS viewers? When I try to export, I see this:

Splat export is only supported with Gaussian Splatting methods

Edit: found it in export_panel!

from pathlib import Path

import viser

from nerfstudio.data.scene_box import OrientedBox
from nerfstudio.models.base_model import Model
from nerfstudio.models.splatfacto import SplatfactoModel
from nerfstudio.viewer.control_panel import ControlPanel
from scaffold_gs.scaffold_gs_model import ScaffoldGSModel  # <-- added import


def populate_export_tab(
    server: viser.ViserServer,
    control_panel: ControlPanel,
    config_path: Path,
    viewer_model: Model,
) -> None:
    viewing_gsplat = isinstance(viewer_model, (SplatfactoModel, ScaffoldGSModel))  # <-- added ScaffoldGSModel

@abrahamezzeddine

> Good result so far @brian-xu 🎉 Testing this PR on mip-360 dataset just for 1k iters. […]

Hello, and thanks for your tests. I can say that scaffold-gs in general reaches good values very early on, so to truly evaluate it you need to compare at 30k.

[screenshot: 3DGS vs Scaffold-GS]

@abrahamezzeddine commented May 2, 2025

@brian-xu

Hello Brian, thanks for this awesome pull request. I really enjoy scaffold-gs. Using absgrad with depth and normal supervision improves their respective areas, but it might create some clouds of gaussians where it would not have if supervision were not enabled. Still, the depth and normals improve a lot, which is nice to see with the implemented supervision. I'll see if I can attach some example images of the supervised results.

Anyway, I also ran the GSDF setup (quite slow, though, which I guess is because of the neural part), and I got the following error at step 20000, when GSDF starts to kick in:


19980 (44.40%)      503.009 ms           3 h, 29 m, 45 s      2.04 K
----------------------------------------------------------------------------------------------------
Viewer running locally at: http://localhost:7007 (listening on 0.0.0.0)
Printing profiling stats, from longest to shortest duration in seconds
Trainer.train_iteration: 0.4247
VanillaPipeline.get_train_loss_dict: 0.2806
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Abraham\.conda\envs\splatting\Scripts\ns-train.exe\__main__.py", line 7, in <module>
  File "O:\Gaussian\nerfstudio\nerfstudio\scripts\train.py", line 272, in entrypoint
    main(
  File "O:\Gaussian\nerfstudio\nerfstudio\scripts\train.py", line 257, in main
    launch(
  File "O:\Gaussian\nerfstudio\nerfstudio\scripts\train.py", line 190, in launch
    main_func(local_rank=0, world_size=world_size, config=config)
  File "O:\Gaussian\nerfstudio\nerfstudio\scripts\train.py", line 101, in train_loop
    trainer.train()
  File "O:\Gaussian\nerfstudio\nerfstudio\engine\trainer.py", line 266, in train
    loss, loss_dict, metrics_dict = self.train_iteration(step)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\Gaussian\nerfstudio\nerfstudio\utils\profiler.py", line 111, in inner
    out = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "O:\Gaussian\nerfstudio\nerfstudio\engine\trainer.py", line 502, in train_iteration
    _, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\Gaussian\nerfstudio\nerfstudio\utils\profiler.py", line 111, in inner
    out = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "O:\Gaussian\nerfstudio\nerfstudio\pipelines\base_pipeline.py", line 298, in get_train_loss_dict
    ray_bundle, batch = self.datamanager.next_train(step)
    ^^^^^^^^^^^^^^^^^
TypeError: cannot unpack non-iterable NoneType object

@brian-xu (Contributor, Author) commented May 2, 2025

> Anyway, I also ran the GSDF setup and I got the following error at step 20000 when GSDF starts to kick in: […] TypeError: cannot unpack non-iterable NoneType object

This error has been fixed.

> Training with depth and normal supervision at the moment. I am wondering if it's possible to save the training as a 3DGS file to be viewed in regular 3DGS viewers? […] Edit: found it in export_panel!
I would have to write a custom script to export splats to PLYs. These splats would have to be "baked" to render from a single view, losing some of the advantages Scaffold-GS offers. I'll consider making it if there is enough demand.
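[For reference, once the neural gaussians have been evaluated ("baked") for a chosen camera, writing them out is mechanical. A hypothetical sketch below uses the attribute names of the de-facto 3DGS PLY layout from the Inria reference implementation, which viewers such as SuperSplat read; the argument shapes and parameterizations are assumptions, not this PR's API:]

```python
import numpy as np

def write_3dgs_ply(path, means, scales, quats, opacities, colors_sh0):
    """Write baked gaussians in the de-facto 3DGS binary PLY layout.

    Hypothetical argument shapes: means (N, 3), scales (N, 3) (log-scale),
    quats (N, 4) (wxyz), opacities (N,) (logit), colors_sh0 (N, 3) (SH DC
    coefficients), matching the Inria reference conventions.
    """
    n = means.shape[0]
    fields = (["x", "y", "z", "nx", "ny", "nz"]
              + [f"f_dc_{i}" for i in range(3)]
              + ["opacity"]
              + [f"scale_{i}" for i in range(3)]
              + [f"rot_{i}" for i in range(4)])
    header = "\n".join(
        ["ply", "format binary_little_endian 1.0", f"element vertex {n}"]
        + [f"property float {f}" for f in fields]
        + ["end_header", ""]
    )
    normals = np.zeros_like(means)  # placeholder; viewers ignore nx/ny/nz
    data = np.concatenate(
        [means, normals, colors_sh0, opacities[:, None], scales, quats],
        axis=1,
    ).astype("<f4")  # 17 little-endian floats per gaussian
    with open(path, "wb") as fh:
        fh.write(header.encode("ascii"))
        fh.write(data.tobytes())
```

[The baking step itself, evaluating the anchor MLPs at a cached camera to produce these arrays, is the Scaffold-GS-specific part this writer does not cover.]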

@abrahamezzeddine

> I would have to write a custom script to export splats to PLYs. These splats would have to be "baked" to render from a single view, losing some of the advantages Scaffold-GS offers. I'll consider making it if there is enough demand.

It would be awesome to have that script to save them as a PLY file, so I can also work with the 3DGS data outside of nerfstudio, e.g. in SuperSplat to clean, crop, etc. ☺️

@CanCanZeng

Well done @brian-xu! Do you have benchmark results for this version of the algorithm on the mip360 data, including PSNR, number of Gaussians, VRAM usage, and training time?
