I don't know splat about Gaussian splatting, so like any good blogger I figured I'd poast on it anyway. May as well start with Aras Pranckevičius' latest.
Rather: I gotta start before all that. I keep hearing terms like "NeRF" and "raster". Best I can tell it goes like this - first, the splats are created; then, they are rendered. These tasks should - I think - be done on different machines, with different video cards.
The "Ne" means neural; "RF" is Radiance Field. RF is how the designer takes an image, rather several images at several angles, and then the software figures out how to approximate them. These go into the data-files you download from Steam.
"Raster" is basically the pixels being displayed on your screen, like the "1024" memory-address and beyond in the old TRS-80. Like - say your data describes a Euclidean vector, a line going from one two-dimensional spot [x,y] (0,0 being the top left maybe at the 1024 address) to the other [x2,y2]. Classically whole objects are approximated by polygons of Euclidean triangle; smart coders know to skip repeated vectors so to save space. Bresenham had the 1962 algorithm for rastering the vectors: to alias the line - here, as a pixel-staircase.
Who cares, right. Disney cartoons are alias; any motion-picture is alias, as a 2-D representation of a 3-D series of scenes. But there's alias and there's alias; shifting pixel-ladders on moving lines (say) can get annoying. Hence "anti-aliasing", to fuzz them over. Classic games often make this optional, since the fuzzing costs computer resources on top of what had to be rendered already.
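To make "fuzz them over" concrete, here's a toy supersampling sketch - again my own illustration, not any particular game's method, and the thickness and grid sizes are made up. Instead of a pixel being all-on or all-off, its grey level is the fraction of sub-samples covered by a line of some thickness:

```python
import math

def line_coverage(px, py, x0, y0, x1, y1, half_width=0.5, sub=4):
    """Fraction of sub*sub sample points in pixel (px, py) lying within
    half_width of the segment (x0, y0)-(x1, y1)."""
    hits = 0
    for i in range(sub):
        for j in range(sub):
            sx = px + (i + 0.5) / sub          # sample point inside the pixel
            sy = py + (j + 0.5) / sub
            vx, vy = x1 - x0, y1 - y0
            t = ((sx - x0) * vx + (sy - y0) * vy) / (vx * vx + vy * vy)
            t = max(0.0, min(1.0, t))          # clamp to the segment
            d = math.hypot(sx - (x0 + t * vx), sy - (y0 + t * vy))
            if d <= half_width:
                hits += 1
    return hits / (sub * sub)

# A 16x8 "screen": blank, partly covered (the fuzz), or fully covered.
for y in range(8):
    row = [line_coverage(x, y, 1, 1, 14, 6) for x in range(16)]
    print("".join(" .:*#"[int(v * 4.99)] for v in row))
```

The cost complaint falls straight out of it: sixteen distance tests per pixel instead of one yes/no, which is why the old games put it behind a toggle.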
3-D objects tend to raster as polygons, as noted. This means lots of triangles, so lots of lines - three edges for the first triangle, then (since neighbouring triangles share edges) roughly two new ones per triangle after that.
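A throwaway check of that edge count, assuming a bare-bones triangle strip in which each new vertex forms a triangle with the previous two (my toy bookkeeping, not any particular graphics API's strip format):

```python
def strip_edges(vertices):
    """Unique edges in a simple triangle strip over the given vertex ids."""
    edges = set()
    for i in range(len(vertices) - 2):
        a, b, c = vertices[i], vertices[i + 1], vertices[i + 2]
        for edge in ((a, b), (b, c), (a, c)):
            edges.add(tuple(sorted(edge)))
    return edges

for n_tris in (1, 2, 3, 4):
    verts = list(range(n_tris + 2))        # a strip of n_tris triangles
    print(n_tris, "triangles ->", len(strip_edges(verts)), "edges")
# 1 -> 3, 2 -> 5, 3 -> 7, 4 -> 9: three for the first, two more per triangle.
```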
With Gaussians, we forget a lot of this. Radiance-field creation isn't "neural"; it's its own thing. And when it is rendered, or "raster'd" or whatever - it is rendered as blobs: each Gaussian splatted onto the screen as a soft elliptical smear, not as triangle edges. No more GPU line-drawing; it's done in CUDA. We may expect lots of aliasing, since the Gaussians are at the mercy of the developer's 3-D pictures; but that anti-aliasing would render differently than the old triangle-fuzzing. Better or worse will depend on the scene and how it was captured.
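To give the blob idea some shape, here's a minimal sketch in plain Python - emphatically not the real CUDA rasteriser, just a toy of the idea, and every name and number in it is mine. Assume each Gaussian has already been projected to 2-D (a centre, an inverse covariance, a colour, an opacity) and sorted front-to-back; each pixel then blends blobs until its transmittance runs out:

```python
import math

def render(splats, width, height):
    """splats: list of (cx, cy, inv_cov(a, b, c), colour(r, g, b), opacity),
    sorted nearest-first. Returns a height x width grid of RGB tuples."""
    image = [[(0.0, 0.0, 0.0) for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            transmittance = 1.0            # how much light still gets through
            r = g = b = 0.0
            for cx, cy, (a, bq, c), (cr, cg, cb), opacity in splats:
                dx, dy = x + 0.5 - cx, y + 0.5 - cy
                # Gaussian falloff exp(-0.5 * d^T Sigma^-1 d) around the centre
                power = -0.5 * (a * dx * dx + 2 * bq * dx * dy + c * dy * dy)
                alpha = min(0.99, opacity * math.exp(power))
                if alpha < 1e-3:
                    continue               # this blob barely touches the pixel
                w = transmittance * alpha
                r += w * cr; g += w * cg; b += w * cb
                transmittance *= 1.0 - alpha
                if transmittance < 1e-3:
                    break                  # pixel is effectively opaque
            image[y][x] = (r, g, b)
    return image

# Two overlapping blobs: a tight red one in front of a wider blue one.
splats = [
    (10.0, 8.0, (0.20, 0.0, 0.20), (1.0, 0.0, 0.0), 0.8),
    (16.0, 8.0, (0.05, 0.0, 0.05), (0.0, 0.0, 1.0), 0.8),
]
img = render(splats, 24, 16)
```

Note what's missing: no edges anywhere, so nothing for Bresenham to staircase; the aliasing you get instead comes from how well (or badly) the Gaussians themselves fit the captured scene.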
When I first heard about this, we had Gaussians in 3-D but they were static, the fourth dimension supplied by some enduser rotating his camera around the object. Last week, I heard about moving pictures being turned into Gaussians. As reciprocal-squareroots have taught us, objects in motion can cut corners on anti-aliasing, and on storage generally; which is good, because Gaussians are (presently) memory hogs.
I foresee a major fork in GPU programming, at least in the drivers, maybe even in the hardware. One side must optimise for the designer's non-neural radiance-field creation; consumers then handle rendering the Gaussian non-vectors. In the near future that consumer seems to be a movie-director, planting a 3-D Gaussian phantasm into the frame with real objects. But later there may be games.