Evaluation¶
|            | PSNR  | SSIM  | LPIPS | Train Mem | Train Time |
|------------|-------|-------|-------|-----------|------------|
| inria-7k   | 27.23 | 0.829 | 0.204 | 7.7 GB    | 4m05s      |
| gsplat-7k  | 27.21 | 0.831 | 0.202 | 4.3 GB    | 5m35s      |
| inria-30k  | 28.95 | 0.870 | 0.138 | 9.0 GB    | 37m13s     |
| gsplat-30k | 28.95 | 0.870 | 0.135 | 5.7 GB    | 35m49s     |
This repo comes with a standalone script (examples/simple_trainer.py) that reproduces Gaussian Splatting with exactly the same performance on PSNR, SSIM, LPIPS, and converged number of Gaussians. Powered by gsplat's efficient CUDA implementation, training takes up to 4x less GPU memory and up to 15% less time than the official implementation.
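For reference, the PSNR values reported throughout this page follow the standard definition, 10·log10(MAX²/MSE). A minimal sketch (the `psnr` helper here is illustrative, assuming images normalized to [0, 1]):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)

# Toy example: a constant 0.5 error on every pixel gives MSE = 0.25,
# so PSNR = 10 * log10(1 / 0.25) ≈ 6.02 dB.
gt = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.5)
print(round(psnr(pred, gt), 2))  # 6.02
```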
Trains Faster with Less GPU Memory¶
| Train Mem (GB) | Bicycle | Bonsai | Counter | Garden | Kitchen | Room | Stump |
|----------------|---------|--------|---------|--------|---------|------|-------|
| inria-7k       | 7.86    | 7.61   | 6.47    | 8.99   | 8.08    | 7.88 | 7.23  |
| gsplat-7k      | 6.10    | 2.20   | 1.93    | 7.57   | 2.89    | 2.04 | 6.25  |
| inria-30k      | 11.56   | 7.70   | 6.73    | 11.04  | 8.33    | 8.50 | 8.82  |
| gsplat-30k     | 10.58   | 2.29   | 2.23    | 9.88   | 3.17    | 2.79 | 8.10  |
| Train Time (s) | Bicycle | Bonsai | Counter | Garden | Kitchen | Room | Stump |
|----------------|---------|--------|---------|--------|---------|------|-------|
| inria-7k       | 336     | 340    | 364     | 427    | 436     | 336  | 321   |
| gsplat-7k      | 319     | 299    | 318     | 415    | 389     | 301  | 304   |
| inria-30k      | 2980    | 1552   | 1725    | 3092   | 2144    | 1773 | 2366  |
| gsplat-30k     | 2964    | 1422   | 1621    | 3013   | 2020    | 1708 | 2299  |
Reproduced Metrics¶
| PSNR       | Bicycle | Bonsai | Counter | Garden | Kitchen | Room  | Stump |
|------------|---------|--------|---------|--------|---------|-------|-------|
| inria-7k   | 23.59   | 29.75  | 27.21   | 26.13  | 29.02   | 29.26 | 25.64 |
| gsplat-7k  | 23.71   | 29.66  | 27.14   | 26.30  | 28.86   | 29.21 | 25.62 |
| inria-30k  | 25.19   | 32.21  | 29.02   | 27.29  | 31.07   | 31.31 | 26.56 |
| gsplat-30k | 25.22   | 32.06  | 29.02   | 27.32  | 31.16   | 31.36 | 26.53 |
| SSIM       | Bicycle | Bonsai | Counter | Garden | Kitchen | Room  | Stump |
|------------|---------|--------|---------|--------|---------|-------|-------|
| inria-7k   | 0.662   | 0.921  | 0.877   | 0.824  | 0.902   | 0.893 | 0.721 |
| gsplat-7k  | 0.668   | 0.922  | 0.878   | 0.833  | 0.902   | 0.893 | 0.720 |
| inria-30k  | 0.763   | 0.941  | 0.906   | 0.863  | 0.925   | 0.918 | 0.771 |
| gsplat-30k | 0.764   | 0.941  | 0.907   | 0.865  | 0.926   | 0.918 | 0.768 |
| LPIPS      | Bicycle | Bonsai | Counter | Garden | Kitchen | Room  | Stump |
|------------|---------|--------|---------|--------|---------|-------|-------|
| inria-7k   | 0.329   | 0.164  | 0.207   | 0.130  | 0.125   | 0.219 | 0.254 |
| gsplat-7k  | 0.324   | 0.162  | 0.206   | 0.123  | 0.127   | 0.217 | 0.253 |
| inria-30k  | 0.177   | 0.133  | 0.157   | 0.078  | 0.096   | 0.168 | 0.155 |
| gsplat-30k | 0.172   | 0.132  | 0.154   | 0.075  | 0.094   | 0.164 | 0.153 |
| Number of GSs | Bicycle | Bonsai | Counter | Garden | Kitchen | Room  | Stump |
|---------------|---------|--------|---------|--------|---------|-------|-------|
| inria-7k      | 3.57M   | 1.16M  | 1.01M   | 4.33M  | 1.63M   | 1.11M | 3.75M |
| gsplat-7k     | 3.62M   | 1.17M  | 1.02M   | 4.48M  | 1.63M   | 1.11M | 3.71M |
| inria-30k     | 6.06M   | 1.24M  | 1.19M   | 5.71M  | 1.78M   | 1.55M | 4.82M |
| gsplat-30k    | 6.26M   | 1.25M  | 1.21M   | 5.84M  | 1.79M   | 1.59M | 4.81M |
Note: Evaluations are conducted on an NVIDIA TITAN RTX GPU. The LPIPS metric is evaluated using `from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity`, which differs from the original paper, which uses `from lpipsPyTorch import lpips`. The evaluation of gsplat-X can be reproduced with the command `cd examples; bash benchmark.sh` within the gsplat repo (commit 6acdce4). The evaluation of inria-X can be reproduced with our forked version of the official implementation, available here, with the command `python full_eval_m360.py` (commit 36546ce).