At the end of the video there is a non-pinhole camera demo; could someone explain what exactly is different about this camera?
I.e., what exactly is the video showing? And what would the video look like if it were a pinhole camera?
> This algorithm is much more efficient than typical ray tracing acceleration methods that rely on hierarchical acceleration structures with logarithmic query complexity.
!?
This is a wild claim to just slip in. Can anyone expand?
Hm? Seems reasonable; ray tracing a BVH is inherently slow because traversal diverges like mad (SIMD does not like that).
How is Google using all these amazing radiant field techniques they're developing?
TBH only one author out of four has a Google affiliation, and their personal webpage [1] says they are a "part-time (20%) staff research scientist at Google DeepMind", so it's a stretch to call this a "Google technique". I've noticed this is a common thing when discussing research papers: people associate the work with the first company name they can find in the affiliations.
For one, I’ve seen interactive Gaussian Splatting interior flythroughs in the Google Maps app.
Pure conjecture: relighting in Pixel phones. I don't think they have too many AR-like products. I'm surprised so much of this research is coming out of Google and not Meta.
I'm a little surprised Google hasn't included lidar in their Pixel phones (even after including and dropping some oddball stuff like Soli) to support some of these radiance field / photogrammetry techniques. I guess the <2.5% market share of Pixel phones wouldn't encourage many third parties to bother developing for lidar on Android.
I have no idea, but given their stock of pictures of the entire earth (via Google Maps), I have some ideas about what I HOPE they would use this tech for.
And Google Maps/Google Earth have a long history of trying to create 3d views using all kinds of techniques, from manual modeling to radar satellite data.
How do you know they're amazing until you've used them yourself?
By being amazed when observing it, one can conclude that a thing is amazing.
They do look amazing
You must have more faith in research papers than I do. Every single one I've actually used has had significant flaws that are glossed over by what isn't being shown or said.
The problem is when the constraints exclude methods that have comparable performance while otherwise being superior options for the problem being solved. I've found this to be extremely common.
Maybe you're underestimating just how different most research papers turn out to be once you implement them and run into all their limitations, especially when they compare themselves against general techniques that actually work better but that they claim to surpass.
It's naive to accept what a paper does as fact from a video; you have to get it working and try it out to really know. Anyone who has worked with research papers knows this from experience.
Feel free to try this out and let me know.
In pinhole cameras, straight lines look straight. That's what a regular projection matrix gives you with rasterization. With non-pinhole cameras, straight lines look curved. You can't rasterize this directly. 3D Gaussian splats have an issue with this, which is addressed by methods like ray tracing. It's very useful to train on non-pinhole cameras, because in the real world they can capture a wider field of view.
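To make the distinction concrete, here's a minimal sketch (nothing to do with the paper's code; the focal length, principal point, and equidistant fisheye model are all assumptions for illustration) that projects points lying on a straight 3D line with a pinhole model and with a fisheye model:

```python
# Illustrative sketch: pinhole vs. equidistant fisheye projection of a straight 3D line.
# All camera parameters are made up; this is not the paper's camera model.
import numpy as np

f = 500.0              # focal length in pixels (assumed)
cx, cy = 320.0, 240.0  # principal point (assumed)

def project_pinhole(p):
    """Standard perspective projection: u = f*x/z + cx, v = f*y/z + cy."""
    x, y, z = p
    return np.array([f * x / z + cx, f * y / z + cy])

def project_fisheye_equidistant(p):
    """Equidistant fisheye: image radius r = f * theta, where theta is the
    angle between the ray to the point and the optical axis."""
    x, y, z = p
    theta = np.arctan2(np.hypot(x, y), z)  # angle off the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    r = f * theta
    return np.array([r * np.cos(phi) + cx, r * np.sin(phi) + cy])

# Points along a straight 3D line in front of the camera.
line = [np.array([t, 0.5, 2.0]) for t in np.linspace(-1.5, 1.5, 7)]

pin = np.array([project_pinhole(p) for p in line])
fish = np.array([project_fisheye_equidistant(p) for p in line])

print("pinhole v-coords: ", np.round(pin[:, 1], 2))   # constant: the line stays straight
print("fisheye v-coords: ", np.round(fish[:, 1], 2))  # varies: the line bows in the image
```

Under the pinhole model the projected points stay on a straight image line, which is why one projection matrix plus rasterization is enough; under the fisheye model the same 3D line bows, which is the case rasterization-based splatting struggles with.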
In Gaussian Splatting, a first-order approximation takes ellipsoids in camera space to ellipses in image space. This works OK for pinhole cameras, where straight lines remain straight and, more generally, conic sections are taken to other conic sections. For high-distortion models like fisheye, this approximation probably breaks down. However, this method presumably does not rely on that approximation, since it is ray traced.
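For reference, that first-order approximation in rasterized splatting looks roughly like this: the Jacobian of the perspective projection, evaluated at the Gaussian's mean, maps the 3D covariance to a 2D image-space covariance (EWA-style). This is a toy sketch with made-up focal lengths and covariances, not code from the paper:

```python
# Toy sketch of the local affine (first-order) covariance splat used in
# rasterized Gaussian Splatting. Values are illustrative only.
import numpy as np

def perspective_jacobian(mean_cam, fx, fy):
    """Jacobian of the pinhole projection (x, y, z) -> (fx*x/z, fy*y/z),
    evaluated at the Gaussian's mean in camera space."""
    x, y, z = mean_cam
    return np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])

def splat_covariance(cov3d_cam, mean_cam, fx=500.0, fy=500.0):
    """Sigma_2D = J Sigma_3D J^T: exact only to first order, so for strongly
    distorting lenses (fisheye) the true footprint is no longer an ellipse and
    this approximation degrades."""
    J = perspective_jacobian(mean_cam, fx, fy)
    return J @ cov3d_cam @ J.T

# An anisotropic Gaussian two units in front of the camera.
cov3d = np.diag([0.04, 0.01, 0.09])
mean = np.array([0.3, -0.1, 2.0])
print(splat_covariance(cov3d, mean))  # 2x2 covariance of the rasterized ellipse
```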
Looks like it's some sort of fisheye camera with a super wide FOV. It might be simulating rays bending due to lens effects. A pinhole camera would just look "normal", i.e. straight lines stay straight (except for perspective convergence effects toward the horizon).
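If it is indeed a fisheye, a ray-traced renderer only needs a different per-pixel ray-generation function; everything downstream is unchanged. A rough sketch assuming an equidistant fisheye model (the demo's actual lens model is unknown to me, and the parameter values are made up):

```python
# Sketch: per-pixel ray generation for a pinhole camera vs. an assumed
# equidistant fisheye. Swapping camera models in a ray tracer is just
# swapping this function.
import numpy as np

def pinhole_ray(u, v, f=500.0, cx=320.0, cy=240.0):
    d = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    return d / np.linalg.norm(d)

def fisheye_ray(u, v, f=500.0, cx=320.0, cy=240.0):
    # Invert r = f * theta: distance from the image center gives the
    # off-axis angle, azimuth gives the direction around the optical axis.
    dx, dy = u - cx, v - cy
    theta = np.hypot(dx, dy) / f
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# A pixel far from the image center:
print(pinhole_ray(1105, 240))  # still pointing mostly forward (~57 deg off-axis)
print(fisheye_ray(1105, 240))  # ~90 deg off-axis: the fisheye can look sideways,
                               # which a pinhole projection can never reach
```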