Well this put a smile on my face. Nvidia just announced Racer RTX, a fully real-time raytraced minigame running on their Omniverse platform on a single RTX 4000 GPU. It looks quite a few steps up from Marbles RTX, which was already exceptional in itself. The lighting and shading quality is unparalleled for a real-time application. It's amazing to see how quickly this technology has progressed in the last five years and to know that this will be available to everyone (who can afford a next-gen GPU) soon. If only GTA 6 would look as good as this...
A comparison of ray tracing performance on RTX 3000 with DLSS 2.0 vs RTX 4000 with DLSS 3.0:
Last week at Siggraph, Nvidia released a fascinating making-of documentary of the Nvidia GTC keynote. It contains lots of snippets showing the real-time photorealistic rendering capabilities of Omniverse.
Short version:
Extended version:
The Siggraph talk titled "Realistic digital human rendering with Omniverse RTX Renderer" is also a must-watch for anyone interested in CG humans:
Nvidia showed an improved version of their Marbles RTX demo during the RTX 3000 launch event. What makes this new demo so impressive is that it appears to handle dozens of small lights without breaking a sweat, something which is notoriously difficult for a path tracer, let alone one of the real-time kind:
Making of Marbles RTX (really fantastic):
The animation is rendered in real-time in Nvidia's Omniverse, a new collaborative platform which features noise-free real-time path tracing and is already turning heads in the CGI industry. Nvidia now also shared the first sneak peek of Omniverse's capabilities:
Today Nvidia showed this astounding demo. Pure real-time ray tracing (with some deep learning based upscaling and denoising), no rasterization or baked lighting. So it finally happened...
Check out the labels on the paint cans and books
The workshop setting in the Marbles demo reminds me of an early demo of Arnold Render from the year 2000, which truly stunned me back then, as it was the first time I saw a CG animation which looked completely photorealistic. If it weren't for the ending, I would have thought it was clever stop motion animation:
The above video was also how I first learned about unbiased rendering and path tracing, and ultimately why I started dabbling in real-time path tracing, trying to recreate a simplified version of the Arnold demo in real-time (experiment from 2011):
It's amazing to think that we have finally reached a point where the Pepeland demo in Arnold can be rendered with the same fidelity in real-time on a single GPU, merely 20 years after the original.
I remember Nvidia first showing off real-time ray traced reflections on the GPU at GDC 2008 and GTC 2009 with a demo of a ray traced Bugatti Veyron running on a couple of pre-Fermi GPUs.
Brigade was a real trailblazer and gave a glimpse of what photorealistic games could look like in the not so distant future. Brigade 2, its successor (also developed by Jacco Bikker), was fully GPU-based, which pushed performance to another level.
The Lighthouse engine has a couple of unique features:
Lighthouse uses Nvidia's OptiX framework, which provides state-of-the-art methods to build and traverse BVH acceleration structures, including a built-in "top level BVH" which allows for real-time animated scenes with thousands of individual meshes, practically for free (a small sketch of this idea follows the feature list).
There are 3 manually optimised OptiX render cores:
OptiX 5 (for Maxwell and Pascal GPUs)
OptiX Prime (for Maxwell and Pascal GPUs)
OptiX 7 (with full RTX support for Turing GPUs)
OptiX 7 is much lower level than previous OptiX versions, giving the developer more control, less overhead and a substantial performance boost on Turing GPUs compared to OptiX 5/6 (about 35%)
A Turing GPU running Lighthouse 2 with OptiX 7 (with RTX support) is about 6x faster than a Pascal GPU running OptiX 5 for path tracing (you have to try it to believe it :-) )
Lighthouse incorporates the new "blue noise" sampling method (https://eheitzresearch.wordpress.com/762-2/), which creates cleaner/less noisy looking images at low sample rates
Lighthouse manages a full game scene graph with instances, cameras, lights and materials, including the Disney BRDF (the so-called "principled" shader); all parameters can be edited on-the-fly through a lightweight GUI
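To make the top-level BVH idea above concrete, here's a minimal C++ sketch of the concept (my own illustration, not Lighthouse 2's or OptiX's actual API): thousands of instances can share a handful of bottom-level mesh BVHs, and animating them only requires recomputing one bounding box per instance and rebuilding the small top-level structure each frame.

```cpp
// Minimal sketch of a two-level ("top level") BVH -- illustration only, not
// Lighthouse 2's or OptiX's actual API. Thousands of instances can share a few
// bottom-level BVHs; animating them only requires recomputing the instance
// bounds and rebuilding the small top-level structure each frame.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 lo{ 1e30f, 1e30f, 1e30f }, hi{ -1e30f, -1e30f, -1e30f };
    void grow(const Vec3& p) {
        lo = { std::min(lo.x, p.x), std::min(lo.y, p.y), std::min(lo.z, p.z) };
        hi = { std::max(hi.x, p.x), std::max(hi.y, p.y), std::max(hi.z, p.z) };
    }
};

// Bottom-level BVH: built once per unique mesh (the full build is omitted here).
struct BottomLevelBVH { AABB localBounds; /* nodes, triangles ... */ };

// An instance is just a transform plus a reference to a shared mesh BVH.
struct Instance {
    Vec3 translation;                 // a full affine transform in a real engine
    const BottomLevelBVH* blas;
    AABB worldBounds() const {
        AABB b;
        b.grow({ blas->localBounds.lo.x + translation.x,
                 blas->localBounds.lo.y + translation.y,
                 blas->localBounds.lo.z + translation.z });
        b.grow({ blas->localBounds.hi.x + translation.x,
                 blas->localBounds.hi.y + translation.y,
                 blas->localBounds.hi.z + translation.z });
        return b;
    }
};

// Rebuilt every frame: cheap, because it only contains one AABB per instance,
// no matter how many triangles each mesh has.
std::vector<AABB> buildTopLevelBounds(const std::vector<Instance>& instances) {
    std::vector<AABB> topLevel;
    topLevel.reserve(instances.size());
    for (const Instance& inst : instances)
        topLevel.push_back(inst.worldBounds());
    return topLevel;   // a real top-level BVH would now be built over these boxes
}
```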
Some screenshots (rendered with Lighthouse's OptiX 7 core on an RTX 2060)
1024 real-time ray traced dragons
2025 lego cars, spinning in real-time
Lighthouse 2 material test scene
A real-time raytraced Shelby Cobra
Just add bunnies
An old video of Sponza rendered with Lighthouse, showing off the real-time denoiser:
Lighthouse is still a work in progress, but due to its relative simplicity it's easy to quickly test a new sampling algorithm or experiment with a new fast denoiser, ensuring the code and performance remain on par with the state of the art in rendering research.
Given the fact that it handles real-time animation, offers state-of-the-art performance and is licensed under Apache 2.0, Lighthouse 2 may soon end up in professional 3D tools like Blender for fast, photorealistic previews of real-time animations. Next-gen game engine developers should also keep an eye on this.
A couple of days ago, Denis Bogolepov sent me a link to LightTracer, a browser-based path tracer which he and Danila Ulyanov have developed. I'm quite impressed and excited about LightTracer, as it is the first WebGL-based path tracer that can render relatively complex scenes (including textures), which is something I've been waiting to see for a while (I tried something similar a few years ago, but WebGL still had too many limitations back then).
What makes LightTracer particularly interesting is that it has the potential to bring photoreal interactive 3D to the web, paving the way for online e-commerce stores offering their clients a fully photorealistic preview of an article (be it jewellery, cars, wristwatches, running shoes or handbags).
Up until now, online shops have tried several ways to offer their clients "photorealistic" previews with the ability to configure the product's materials and colours: precomputed 360-degree videos, interactive 3D using WebGL rasterization, and even server-side rendering via cloud-based ray tracing streamed to the browser (e.g. Clara.io and Lagoa Render), which requires expensive servers and is tricky to scale.
LightTracer's WebGL ray tracing offers a number of unique selling points:
- ease of use: it's entirely browser based, so nothing needs to be downloaded or installed
- intuitive: since ray tracing follows the physics of light, lights and materials behave just like in the real world, allowing non-rendering-experts to predictably light their scenes
- photorealistic lighting and materials: as Monte Carlo path tracing solves the full rendering equation without taking shortcuts, this results in truly photoreal scenes (see the sketch after this list)
- speed: LightTracer's ray tracing is accelerated by the GPU via WebGL, offering very fast previews. This should get even faster once WebGL supports hardware accelerated ray tracing via Nvidia's RTX technology (and whatever AMD has in the works)
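For readers wondering what "solving the rendering equation" boils down to in practice, below is a tiny self-contained Monte Carlo sketch (my own illustration, not LightTracer's code): it estimates the outgoing radiance of a diffuse surface under a constant sky by averaging random hemisphere samples, and converges to the analytic answer as the sample count grows.

```cpp
// Self-contained sketch of the Monte Carlo integration behind path tracing
// (illustration only, not LightTracer's shader code). It estimates the
// outgoing radiance of a diffuse surface lit by a constant sky, i.e. the
// integral  L_o = (albedo/pi) * Integral_hemisphere( L_sky * cos(theta) dw ),
// by averaging random hemisphere samples.
#include <cstdio>
#include <random>

int main() {
    const double PI          = 3.14159265358979323846;
    const double albedo      = 0.75;   // diffuse reflectance of the surface
    const double skyRadiance = 1.0;    // same incoming radiance from every direction
    const double expected    = albedo * skyRadiance;   // analytic solution

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    double sum = 0.0;
    for (int i = 1; i <= 100000; ++i) {
        // Uniform hemisphere sampling: cos(theta) is uniform in [0,1], pdf = 1/(2*pi).
        double cosTheta = uni(rng);
        // One-sample estimator of the integral: f_r * L_i * cos(theta) / pdf.
        double sample = (albedo / PI) * skyRadiance * cosTheta * (2.0 * PI);
        sum += sample;
        if (i == 16 || i == 1024 || i == 100000)
            std::printf("%6d samples: estimate %.4f (expected %.4f)\n", i, sum / i, expected);
    }
    return 0;
}
```

The noise in the low-sample estimates is exactly the noise a path tracer shows at low sample counts; the GPU's job is to crunch through enough of these samples per pixel, fast.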
LightTracer is still missing a few features, such as an easy-to-use subsurface scattering shader for realistic skin, hair and waxy materials, and there are plenty of optimisations possible (scene loading speed, UI improvements and presets, etc.) but I think this is the start of something big.
The following video is an incredible example of an architectural visualisation rendered with Unreal's real-time raytraced reflections and refractions:
It's fair to say that real-time photorealism on consumer graphics cards has finally arrived. In the last few years, fast, high-quality path tracers have become available for free (e.g. Embree, OptiX, RadeonRays, Cycles) or virtually for free (e.g. Arnold, Renderman). Thanks to advances in noise reduction algorithms, their rendering speed has been accelerated from multiple hours to a few seconds per frame.
The rate at which game engines, with Unreal at the forefront, are taking over the offline-rendering world is staggering. Off-line rendering for architecture will most probably disappear in the near future and be replaced by game engines with real-time ray tracing features.
Nvidia recently released a new version of OptiX, which finally adds support for the much hyped RTX cores on the Turing GPUs (RTX 2080, Quadro RTX 8000, etc.), which provide hardware acceleration for ray-BVH and ray-triangle intersections.
First results are quite promising. One user reports a speedup between 4x and 5x when using the RTX cores (compared to not using them). Another interesting revelation is that the speedup gets larger with higher scene complexity (geometry-wise, not shading-wise):
As a consequence, the Turing cards can render up to 10x faster in some scenes than the previous generation of Geforce cards, i.e. Pascal (GTX 1080), which is in fact two generations old if you take the Volta architecture into account (Volta was already a huge step up from Pascal in terms of rendering speed, so for Nvidia's sake it's better to compare Turing with Pascal).
This post will be updated with more OptiX benchmark numbers as they become available.
(See update at the bottom of this post to see something even more mindblowing)
The Chaos Group blog features quite an interesting article about the speed increase which can be expected by using Nvidia's recently announced RTX cards:
"Specialized hardware for ray casting has been attempted in the past, but has been largely unsuccessful — partly because the shading and ray casting calculations are usually closely related and having them run on completely different hardware devices is not efficient. Having both processes running inside the same GPU is what makes the RTX architecture interesting. We expect that in the coming years the RTX series of GPUs will have a large impact on rendering and will firmly establish GPU ray tracing as a technique for producing computer generated images both for off-line and real-time rendering."
The article features a new research project, called Lavina, which is essentially doing real-time ray tracing and path tracing (with reflections, refractions and one GI bounce). The video below gets seriously impressive towards the end:
Then again, thanks to noob-friendly ray tracing APIs like Nvidia's RTX and OptiX, soon everyone's grandmother and their dog will be able to write a real-time path tracer, so all is well in the end.
UPDATE: this talk by Nvidia researcher Jacopo Pantaleoni (known from VoxelPipe and Weta's PantaRay engine), "Real-time ray tracing for real-time global illumination", totally trounces the Lavina project, both in quality and in terms of dynamic scenes:
One of the authors of the Physically Based Rendering books (www.pbrt.org), which some call the bible of Monte Carlo ray tracing. Before joining Nvidia, he was working at Google with Paul Debevec on Daydream VR, light fields and Seurat (https://www.blog.google/products/google-ar-vr/experimenting-light-fields/), none of which took off in a big way for some reason.
Before Google, he worked at Intel on Larrabee (Intel's failed attempt at making a GPGPU for real-time ray tracing and rasterisation that could compete with Nvidia GPUs) and ISPC, a specialised compiler intended to extract maximum parallelism from the new Intel chips with AVX extensions. He described his time at Intel in great detail on his blog: http://pharr.org/matt/blog/2018/04/30/ispc-all.html (it sounds like an awful company to work for).
Intel also bought Neoptica, Matt's startup, which was supposed to research new and interesting rendering techniques for hybrid CPU/GPU chip architectures like the PS3's Cell.
A pioneering researcher in the field of real-time ray tracing from the Saarbrücken computer graphics group in Germany, who later moved to Intel and the University of Utah to work on very high performance CPU-based ray tracing frameworks such as Embree (used in Corona Render and Cycles) and Ospray.
His 2004 PhD thesis, "Real-time ray tracing and interactive global illumination", describes a real-time GI renderer running on a cluster of commodity PCs and hardware-accelerated ray tracing (OpenRT) on a custom fixed-function ray tracing chip (SaarCOR).
Ingo contributed a lot to the development of high quality ray tracing acceleration structures (built with the surface area heuristic).
What connects these people is that they all have real-time ray tracing running in their blood, so having them united under one roof is bound to produce fireworks.
With these recent hires and initiatives such as RTX (Nvidia's ray tracing API), it seems that Nvidia will be pushing real-time ray tracing into the mainstream very soon. I'm excited to finally see it all come together. I'm pretty sure ray tracing will soon be everywhere, and its quality and ease of use will displace rasterisation-based technologies (it's also the reason why I started this blog exactly ten years ago).
Senior Real Time Ray Tracing Engineer NVIDIA, Santa Clara, CA, US
Job description
Are you a real-time rendering engineer looking to work on real-time ray tracing to redefine the look of video games and professional graphics applications? Are you a ray tracing expert looking to transform real-time graphics as we lead the convergence with film? Do you feel at home in complex video game codebases built on the latest GPU hardware and GPU software APIs before anybody else gets to try them?
At NVIDIA we are developing the most forward-looking real-time rendering technology combining traditional graphics techniques with real-time ray tracing enabled by NVIDIA's RTX technology. We work at all levels of the stack, from the hardware and driver software, to the engine and application level code. This allows us to take on problems that others can only dream of solving at this point.
We are looking for Real Time Rendering Software Engineers who are passionate about pushing the limits of what is possible with the best GPUs and who share our forward-looking vision of real-time rendering using real-time ray tracing.
In this position you will work with some of the world leading real-time ray tracing and rendering experts, developer technology engineers and GPU system software engineers. Your work will impact a number of products being worked on at NVIDIA and outside NVIDIA. These include the NVIDIA Drive Constellation autonomous vehicle simulator, NVIDIA Isaac virtual simulator for robotics, and NVIDIA Holodeck collaborative design virtual environment. Outside NVIDIA our work is laying the foundation for future video games and other rendering applications using real-time ray tracing. The first example of this impact is the NVIDIA GameWorks Ray Tracing denoising modules and much of the technology featured in our NVIDIA RTX demos at GDC 2018.
What You Will Be Doing
Implementing new rendering techniques in a game engine using real-time ray tracing with NVIDIA RTX technology
Improving the performance and quality of techniques you or others developed
Ensuring that the rendering techniques are robust and work well for the content needs of products using them
What We Need To See
Strong knowledge of C++
BS/MS or higher degree in Computer Science or related field with 5+ years of experience
Up to date knowledge of real-time rendering and offline rendering algorithms and research
Experience with ray tracing in real-time or offline
Knowledge of the GPU Graphics Pipeline and GPU architecture
Experience with GPU Graphics and Compute programming APIs such as Direct3D 11, Direct3D 12, DirectX Raytracing, Vulkan, OpenGL, CUDA, OpenCL or OptiX
Experience writing shader code in HLSL or GLSL for these APIs.
Experience debugging, profiling and optimizing rendering code on GPUs
Comfortable with a complex game engine codebase, such as Unreal Engine 4, Lumberyard, CryEngine or Unity
Familiar with the math commonly used in real-time rendering
Familiar with multi-threaded programming techniques
Can-do attitude, with the will to dive into existing code and do what it takes to accomplish your job
Ability to work well with others in a team of deeply passionate individuals who respect each other
Before continuing the tutorial series, let's have a look at a simple but effective way to speed up path tracing. The idea is straightforward: like an octree, a bounding volume hierarchy (BVH) can double as both a ray tracing acceleration structure and a way to represent the scene geometry at multiple levels of detail (a multi-resolution geometry representation). Specifically, the axis-aligned bounding boxes (AABBs) of the BVH nodes at different depths in the tree serve as more or less crude approximations of the geometry.
Low-detail geometry enables much faster ray intersections and can be useful when light effects don't require full geometric accuracy, for example in the case of motion blur, glossy (blurry) reflections, soft shadows, ambient occlusion and global illumination with diffuse bounced lighting. Especially when geometry is not directly visible in the view frustum or in specular (mirror-like) reflections, using geometry proxies can provide a significant speedup (depending on the fault tolerance) at a negligible loss in quality.
Advantages of using the BVH itself as a multi-resolution LOD geometry representation:
doesn't require an additional scene voxelisation step (the BVH itself provides the LOD): less memory hungry
skips expensive triangle intersection when possible
performs only ray/box intersections (as opposed to having a mix of ray/triangle and ray/box intersections) which is more efficient on the GPU (avoids thread divergence)
BVH is stored in the GPU's cached texture memory, which is faster than the global memory where the triangles are stored
BVH nodes can store extra attributes like smoothed normals, interpolated colours and on-the-fly generated GI
(Note: AFAIK low-level access to the acceleration structure is not provided by APIs like OptiX/RTX and DXR, so this has to be written in CUDA, ISPC or OpenCL)
The renderer determines the appropriate level of detail based on the distance from the camera for primary rays, or on the distance from the ray origin and the ray type for secondary rays (glossy/reflection, shadow, AO or GI rays); a small code sketch of this selection rule follows the screenshots below. The following screenshots show the bounding boxes of the BVH nodes from depth 1 (depth 0 is the root node) up to depth 12:
BVH level 1 (BVH level 0 is just the bunny's bounding box)
BVH level 2
BVH level 3
BVH level 4
BVH level 5
BVH level 6
BVH level 7
BVH level 8
BVH level 9
BVH level 10
BVH level 11
BVH level 12 (this level contains mostly inner BVH nodes, but also a few leaf nodes)
The screenshot below shows the bottom-most BVH level (i.e. leaf nodes only, hence some holes are apparent):
Visualizing the BVH leaf nodes (bottom-most BVH level)
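Here is a minimal sketch of the LOD selection rule described above (names and tolerance values are my own, not taken from the original implementation): during traversal, a node's bounding box is accepted as a geometry proxy as soon as its extent is small enough relative to the distance the ray has travelled, with looser tolerances for glossy, AO and diffuse GI rays than for primary rays.

```cpp
// Hedged sketch of LOD selection during BVH traversal (illustrative names and
// heuristics, not the original code). A ray stops descending as soon as the
// node's bounding box is "small enough" for the ray's purpose: primary rays use
// a tight tolerance, while glossy/AO/GI rays accept much coarser boxes and shade
// the box itself, skipping the triangle tests entirely.
#include <algorithm>

struct Vec3 { float x, y, z; };

struct AABB { Vec3 lo, hi; };

struct BVHNode {
    AABB  bounds;
    int   left  = -1, right = -1;  // child indices, -1 for a leaf
    int   firstTri = 0, triCount = 0;
    bool  isLeaf() const { return left < 0; }
};

enum class RayType { Primary, Glossy, AmbientOcclusion, Diffuse };

// Per-ray-type error tolerance: how large (relative to the distance travelled)
// a bounding box may be before we must refine it further.
static float lodTolerance(RayType type) {
    switch (type) {
        case RayType::Primary:          return 0.001f;  // near pixel-accurate
        case RayType::Glossy:           return 0.02f;
        case RayType::AmbientOcclusion: return 0.05f;
        default:                        return 0.08f;   // diffuse GI rays
    }
}

static float extent(const AABB& b) {
    return std::max({ b.hi.x - b.lo.x, b.hi.y - b.lo.y, b.hi.z - b.lo.z });
}

// Returns true if this node's box is an acceptable geometry proxy for a hit at
// distance 'hitDistance' from the ray origin: the further away the hit, the
// cruder the box that is allowed.
bool stopAtThisLevel(const BVHNode& node, RayType type, float hitDistance) {
    if (node.isLeaf()) return false;           // leaves always intersect triangles
    return extent(node.bounds) < lodTolerance(type) * hitDistance;
}
```

When stopAtThisLevel() returns true, the traversal simply reports a hit on the box itself (shaded with the attributes stored in the node) instead of descending further towards the triangles.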
Normals are axis-aligned by default, but smoother normals can be precomputed per AABB vertex (and stored at low precision) by averaging the normals of the child AABBs, with the leaf nodes averaging the normals of their triangles.
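A hedged sketch of that precomputation follows (per node rather than per AABB vertex, for brevity; again my own illustration): leaf nodes average the face normals of their triangles, and inner nodes average and re-normalize the normals of their two children, bottom-up.

```cpp
// Sketch of the bottom-up normal precomputation (illustration only, and stored
// per node for brevity; the post suggests storing them per AABB vertex). Leaf
// nodes average the face normals of their triangles; inner nodes then average
// and re-normalize the normals of their children.
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
};

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct LODNode {
    int  left = -1, right = -1;          // child indices, -1 for a leaf
    std::vector<Vec3> triangleNormals;   // only filled for leaves
    Vec3 avgNormal{ 0, 0, 0 };           // proxy normal used when shading the box
};

Vec3 computeNodeNormal(std::vector<LODNode>& nodes, int idx) {
    LODNode& n = nodes[idx];
    Vec3 sum{ 0, 0, 0 };
    if (n.left < 0) {                                       // leaf node
        for (const Vec3& tn : n.triangleNormals) sum = sum + tn;
    } else {                                                // inner node
        sum = computeNodeNormal(nodes, n.left) + computeNodeNormal(nodes, n.right);
    }
    n.avgNormal = normalize(sum);
    return n.avgNormal;
}
```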
TODO: upload the code to GitHub (or an alternative non-MS repo) and post the link, propose fixes to fill the holes, present benchmark results (8x speedup), get more Tim Tams
The Blue Brain Project is a Switzerland based computational neuroscience project which aims to demystify how the brain works by simulating a biologically accurate brain using a state-of-the-art supercomputer. The simulation runs at multiple scales and goes from the whole brain level down to the tiny molecules which transport signals from one cell to another (neurotransmitters). The knowledge gathered from such an ultra-detailed simulation can be applied to advance neuroengineering and medical fields.
To visualize these detailed brain simulations, we have been working on a high performance rendering engine, aptly named "Brayns". Brayns uses ray tracing to render massively complex scenes comprised of trillions of molecules interacting in real-time on a supercomputer. The core ray tracing intersection kernels in Brayns are based on Intel's Embree and Ospray high performance ray tracing libraries, which are optimised to render on recent Intel CPUs (such as the Skylake architecture). These CPUs are basically GPUs in CPU disguise (they are based on Intel's defunct Larrabee GPU project), but they can render massive scientific scenes in real-time as they can address over a terabyte of RAM. What makes these CPUs ultrafast at ray tracing is a neat feature called AVX-512 extensions, which (in combination with ispc) can run several ray tracing calculations in parallel, resulting in blazingly fast CPU ray tracing performance that rivals that of a GPU and even beats it when the scene becomes very complex.
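To give a feel for why AVX-512 (and ispc) matter here, below is a plain C++ sketch — not Brayns or ispc source — of the structure-of-arrays ray packet layout that such compilers vectorize: the same sphere test is applied to 16 rays at once, one ray per 16-wide SIMD lane.

```cpp
// Plain C++ sketch (not Brayns or ispc source) of a structure-of-arrays ray
// packet. Vectorizing compilers such as ispc map the per-lane loop below onto
// AVX-512: the same sphere intersection test runs for 16 rays at once.
#include <cmath>
#include <cstdio>

constexpr int LANES = 16;   // an AVX-512 register holds 16 floats

struct RayPacket {          // structure-of-arrays: each component is contiguous
    float ox[LANES], oy[LANES], oz[LANES];
    float dx[LANES], dy[LANES], dz[LANES];
    float t[LANES];         // hit distance, or a large value for "no hit"
};

void intersectSphere(RayPacket& r, float cx, float cy, float cz, float radius) {
    // A simple per-lane loop: with -O3 (or ispc) this becomes a handful of
    // 16-wide SIMD instructions instead of 16 scalar iterations.
    for (int i = 0; i < LANES; ++i) {
        float Lx = cx - r.ox[i], Ly = cy - r.oy[i], Lz = cz - r.oz[i];
        float tca = Lx * r.dx[i] + Ly * r.dy[i] + Lz * r.dz[i];
        float d2  = Lx * Lx + Ly * Ly + Lz * Lz - tca * tca;
        if (d2 > radius * radius) continue;               // this lane misses the sphere
        float thc = std::sqrt(radius * radius - d2);
        float t0  = tca - thc;
        if (t0 > 0.0f && t0 < r.t[i]) r.t[i] = t0;        // keep the closest hit
    }
}

int main() {
    RayPacket packet;
    for (int i = 0; i < LANES; ++i) {                     // 16 parallel rays along +z
        packet.ox[i] = i * 0.1f; packet.oy[i] = 0; packet.oz[i] = -5;
        packet.dx[i] = 0;        packet.dy[i] = 0; packet.dz[i] = 1;
        packet.t[i]  = 1e30f;
    }
    intersectSphere(packet, 0.0f, 0.0f, 0.0f, 1.0f);
    for (int i = 0; i < LANES; ++i)
        std::printf("ray %2d: %s\n", i, packet.t[i] < 1e30f ? "hit" : "miss");
}
```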
Besides using Intel's superfast ray tracing kernels, Brayns has lots of custom code optimisations that allow it to render a fully path traced scene in real-time. These are some of the features of Brayns:
hand optimised BVH traversal and geometry intersection kernels
real-time path traced diffuse global illumination
OptiX real-time AI-accelerated denoising
HDR environment map lighting
explicit direct lighting (next event estimation; see the sketch after this feature list)
quasi-Monte Carlo sampling
volume rendering
procedural geometry
signed distance field raymarching
instancing, allowing billions of dynamic molecules to be visualized in real-time
stereoscopic omnidirectional 3D rendering
efficient loading and rendering of multi-terabyte datasets
linear scaling across many nodes
optimised for real-time distributed rendering on a cluster with high speed network interconnection
ultra-low latency streaming to high resolution display walls and VR caves
modular architecture which makes it ideal for experimenting with new rendering techniques
optional noise and gluten free rendering
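As an aside, "explicit direct lighting (next event estimation)" in the list above simply means that at every bounce the renderer also fires a dedicated shadow ray towards a sampled point on a light source, instead of waiting for a random bounce to hit the light. Here is a hedged sketch of that per-bounce contribution (illustrative helper names, not Brayns' actual code):

```cpp
// Hedged sketch of "explicit direct lighting" / next event estimation
// (illustrative helper names, not Brayns' actual code). At a diffuse hit point,
// a point on an area light is sampled and its contribution is added if the
// shadow ray between the two points is unobstructed.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Color { float r, g, b; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(const Vec3& v) { return std::sqrt(dot(v, v)); }
static Vec3  scale(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }

struct LightSample {       // a sampled point on an area light
    Vec3  position, normal;
    Color emission;
    float pdfArea;         // probability density w.r.t. the light's surface area
};

// Assumed to exist elsewhere in the renderer: traces a shadow ray between two points.
bool isUnoccluded(const Vec3& from, const Vec3& to);

Color directLighting(const Vec3& hitPoint, const Vec3& hitNormal,
                     const Color& albedo, const LightSample& light) {
    Vec3  toLight = sub(light.position, hitPoint);
    float dist    = length(toLight);
    Vec3  wi      = scale(toLight, 1.0f / dist);            // direction towards the light

    float cosSurface = dot(hitNormal, wi);                   // angle at the shading point
    float cosLight   = dot(light.normal, scale(wi, -1.0f));  // angle at the light
    if (cosSurface <= 0.0f || cosLight <= 0.0f) return { 0, 0, 0 };  // facing away
    if (!isUnoccluded(hitPoint, light.position)) return { 0, 0, 0 }; // in shadow

    // Lambertian BRDF (albedo/pi) * emitted radiance * geometry term / area pdf.
    const float PI = 3.14159265f;
    float weight = cosSurface * cosLight / (dist * dist) / (PI * light.pdfArea);
    return { albedo.r * light.emission.r * weight,
             albedo.g * light.emission.g * weight,
             albedo.b * light.emission.b * weight };
}
```

This is what lets a path tracer handle many small lights gracefully: every bounce gets a direct, low-variance light contribution rather than relying on lucky random hits.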
Below is a screenshot of an early real-time path tracing test on a 40 megapixel curved screen powered by seven 4K projectors:
Real-time path traced scene on an 8 m by 3 m (25 by 10 ft) semi-cylindrical display,
powered by seven 4K projectors (40 megapixels in total)
Seeing this scene projected lifesize in photorealistic detail on a 180 degree stereoscopic 3D screen and interacting with it in real-time is quite a breathtaking experience. Having 3D molecules zooming past the observer will be the next milestone. I haven't felt this thrilled about path tracing in quite some time.
Technical/Medical/Scientific 3D artists wanted
We are currently looking for technical 3D artists to join our team to produce immersive neuroscientific 3D content. If this sounds interesting to you, get in touch by emailing me at sam.lapere@live.be
2018 will be bookmarked as a turning point for Monte Carlo rendering due to the wide availability of fast, high quality denoising algorithms, which can be attributed for a large part to Nvidia Research: Nvidia just released OptiX 5.0 to developers, which contains a new GPU accelerated "AI denoiser" that works as a post-processing filter.
In contrast to traditional denoising filters, this new denoiser was trained using machine learning on a database of thousands of rendered image pairs (the noisy and noise-free renders of the same scenes), providing the denoiser with a "memory": instead of calculating the reconstructed image from scratch (as a regular noise filter would do), it "remembers" the solution from having encountered similar looking noisy input during training and makes a best guess. That guess is often very close to the converged image but not exact (although the guesses progressively get better as the image refines and more data becomes available). By looking up the solution in its memory, the AI denoiser bypasses most of the costly calculations needed to reconstruct the image and as a result works pretty much in real-time.
The OptiX 5.0 SDK contains a sample program of a simple path tracer with the denoiser running on top (as a post-process). The results are nothing short of stunning: noise disappears completely, even difficult indirectly lit surfaces like refractive (glass) objects and shadowy areas clear up remarkably fast, and the image progressively gets closer to the ground truth.
The OptiX denoiser works great for glass and dark, indirectly lit areas
While in general the denoiser does a fantastic job, it's not yet optimised to deal with areas that converge fast, and in some instances overblurs and fails to preserve texture detail as shown in the screen grab below. The blurring of texture detail improves over time with more iterations, but perhaps this initial overblurring can be solved with more training samples for the denoiser:
Overblurring of textures
The denoiser is provided free for commercial use (royalty-free), but requires an Nvidia GPU. It works with both CPU and GPU rendering engines and is already implemented in Iray (Nvidia's own GPU renderer), V-Ray (by Chaos Group), Redshift Render and Clarisse (a CPU based renderer for VFX by Isotropix).
Some videos of the denoiser in action in OptiX, V-Ray, Redshift and Clarisse:
Other renderers like Cycles and Corona already have their own built-in denoisers, but will probably benefit from the OptiX denoiser as well (especially Corona which was acquired by Chaos Group in September 2017).
The OptiX team has indicated that they are researching an optimised version of this filter for use in interactive to real-time photorealistic rendering, which might find its way into game engines. Real-time noise-free photorealistic rendering is tantalisingly close.
July is a great month for rendering enthusiasts: there's of course Siggraph, but the most exciting conference is High Performance Graphics, which focuses on (real-time) ray tracing. One of the more interesting sounding papers is titled: "Towards real-time path tracing: An Efficient Denoising Algorithm for Global Illumination" by Mara, McGuire, Bitterli and Jarosz, which was released a couple of days ago. The paper, video and source code can be found at
We propose a hybrid ray-tracing/rasterization strategy for real-time rendering enabled by a fast new denoising method. We factor global illumination into direct light at rasterized primary surfaces and two indirect lighting terms, each estimated with one path-traced sample per pixel. Our factorization enables efficient (biased) reconstruction by denoising light without blurring materials. We demonstrate denoising in under 10 ms per 1280×720 frame, compare results against the leading offline denoising methods, and include a supplement with source code, video, and data.
While the premise of the paper sounds incredibly exciting, the results are disappointing. The denoising filter does a great job filtering out almost all the noise (apart from some noise which is still visible in reflections), but at the same time it kills pretty much all the realism that path tracing is famous for, producing flat and lifeless images. Even the first Crysis from 10 years ago (the first game with SSAO) looks distinctly better. I don't think applying such aggressive filtering algorithms to a path tracer will convince game developers to make the switch to path traced rendering anytime soon. A comparison with ground truth reference images (rendered to 5000 samples or more) is also lacking for some reason.
At the same conference, a very similar paper will be presented titled "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination".
We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one path-per-pixel global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter. This hierarchy allows us to distinguish between noise and detail at multiple scales using luminance variance.
Physically-based light transport is a longstanding goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically-based Monte Carlo global illumination does not meet their 30 Hz minimal performance requirement. Looking ahead to fully dynamic, real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real-time. When compared to prior interactive reconstruction filters, our work gives approximately 10x more temporally stable results, matches reference images 5-47% better (according to SSIM), and runs in just 10 ms (+/- 15%) on modern graphics hardware at 1920x1080 resolution.
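For intuition, the temporal accumulation and luminance-variance estimation that drive the wavelet filter can be sketched roughly as below. This is my own simplification (motion-vector reprojection, history rejection and the wavelet passes themselves are omitted):

```cpp
// Rough sketch of the temporal accumulation and luminance-variance estimation
// that guide a wavelet filter like SVGF (my own simplification: reprojection,
// history rejection and the actual wavelet passes are omitted).
#include <algorithm>

struct PixelHistory {
    float colorR = 0, colorG = 0, colorB = 0;  // accumulated color
    float moment1 = 0, moment2 = 0;            // running luminance moments
    int   sampleCount = 0;
};

struct PixelOutput {
    float colorR, colorG, colorB;
    float variance;    // per-pixel luminance variance, drives the filter strength
};

PixelOutput accumulate(PixelHistory& h, float r, float g, float b) {
    // Exponential moving average: blend the new 1-spp sample into the history.
    // Young pixels use a larger alpha so they converge quickly.
    h.sampleCount = std::min(h.sampleCount + 1, 32);
    float alpha = 1.0f / static_cast<float>(h.sampleCount);
    alpha = std::max(alpha, 0.2f);   // never trust the history completely

    h.colorR += alpha * (r - h.colorR);
    h.colorG += alpha * (g - h.colorG);
    h.colorB += alpha * (b - h.colorB);

    // Track the first and second moments of luminance the same way;
    // variance = E[Y^2] - E[Y]^2 tells the filter how noisy the pixel still is.
    float lum = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    h.moment1 += alpha * (lum - h.moment1);
    h.moment2 += alpha * (lum * lum - h.moment2);
    float variance = std::max(0.0f, h.moment2 - h.moment1 * h.moment1);

    return { h.colorR, h.colorG, h.colorB, variance };
}
```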
It's going to be interesting to see if the method in this paper produces more convincing results than the other paper. Either way, HPG has a bunch more interesting papers which are worth keeping an eye on.
UPDATE (16 July): Christoph Schied from Nvidia and KIT, emailed me a link to the paper's preprint and video at http://cg.ivd.kit.edu/svgf.php Thanks Christoph!
Video screengrab:
I'm not convinced by the quality of filtered path traced rendering at 1 sample per pixel, but perhaps the improvements in spatiotemporal stability of this noise filter can be quite helpful for filtering animated sequences at higher sample rates.
UPDATE (23 July) There is another denoising paper out from Nvidia: "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder" which uses machine learning to reconstruct the image.
We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore argue a clear path for making our method run at realtime rates in the near future.
This week Google announced "Seurat", a novel surface lightfield rendering technology which would enable "real-time cinema-quality, photorealistic graphics" on mobile VR devices, developed in collaboration with ILMxLab:
The technology captures all light rays in a scene by pre-rendering it from many different viewpoints. At runtime, entirely new viewpoints are created by interpolating those pre-rendered viewpoints on-the-fly, resulting in photoreal reflections and lighting in real-time (http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/).
At almost the same time, Disney released a paper called "Real-time rendering with compressed animated light fields", demonstrating the feasibility of rendering a Pixar quality 3D movie in real-time where the viewer can actually be part of the scene and walk in between scene elements or characters (according to a predetermined camera path):
Light field rendering in itself is not a new technique and has actually been around for more than 20 years, but has only recently become viable. The first paper was released at Siggraph 1996 ("Light field rendering" by Mark Levoy and Pat Hanrahan) and the method has since been incrementally improved by others. Stanford University compiled an entire archive of light fields to accompany the Siggraph paper from 1996, which can be found at http://graphics.stanford.edu/software/lightpack/lifs.html. A more up-to-date archive of photography-based light fields can be found at http://lightfield.stanford.edu/lfs.html
One of the first movies that showed a practical use for light fields was The Matrix from 1999, where an array of cameras firing at the same time (or in rapid succession) made it possible to pan around an actor to create a super slow motion effect ("bullet time"):
Bullet time in The Matrix (1999)
Rendering the light field
Instead of attempting to explain the theory behind light fields (for which there are plenty of excellent online sources), the main focus of this post is to show how to quickly get started with rendering a synthetic light field using Blender Cycles and some open-source plug-ins. If you're interested in a crash course on light fields, check out Joan Charmant's video tutorial below, which explains the basics of implementing a light field renderer:
The following video demonstrates light fields rendered with Cycles:
Rendering a light field is actually surprisingly easy with Blender's Cycles and doesn't require much technical expertise (besides knowing how to build the plugins). For this tutorial, we'll use a couple of open source plug-ins:
1) The first one is the light field camera grid add-on for Blender made by Katrin Honauer and Ole Johanssen from Heidelberg University in Germany:
This plug-in sets up a camera grid in Blender and renders the scene from each camera using the Cycles path tracing engine. Good results can be obtained with a grid of 17 by 17 cameras with a distance of 10 cm between neighbouring cameras. For high quality, a 33-by-33 camera grid with an inter-camera distance of 5 cm is recommended.
3-by-3 camera grid with overlapping frustums
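Setting up such a grid boils down to two nested loops; the snippet below is a generic C++ sketch of the camera placement (not the add-on's actual code):

```cpp
// Generic sketch of a light field camera grid (not the Blender add-on's code):
// a rows x cols grid of camera positions, centred on a given point and spaced
// 'spacing' metres apart in the grid plane. Each position is then rendered with
// the path tracer to produce one view of the light field.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> buildCameraGrid(Vec3 centre, int rows, int cols, float spacing) {
    std::vector<Vec3> positions;
    positions.reserve(rows * cols);
    for (int row = 0; row < rows; ++row) {
        for (int col = 0; col < cols; ++col) {
            // Offset from the grid centre; the grid lies in the x/y plane and
            // all cameras look down the same (-z) direction at the scene.
            float offsetX = (col - (cols - 1) * 0.5f) * spacing;
            float offsetY = (row - (rows - 1) * 0.5f) * spacing;
            positions.push_back({ centre.x + offsetX, centre.y + offsetY, centre.z });
        }
    }
    return positions;
}

int main() {
    // 17 x 17 cameras, 10 cm apart, as suggested above.
    std::vector<Vec3> grid = buildCameraGrid({ 0, 0, 2.0f }, 17, 17, 0.10f);
    std::printf("generated %zu camera positions\n", grid.size());  // 289 views
}
```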
2) The second tool is the light field encoder and WebGL based light field viewer, created by Michal Polko, found at https://github.com/mpk/lightfield (build instructions are included in the readme file).
This plug-in takes all the images generated by the first plug-in and compresses them by keeping some keyframes and encoding only the delta in the remaining intermediate frames. The viewer is WebGL based and makes use of virtual texturing (similar to Carmack's megatextures) for fast, on-the-fly reconstruction of new viewpoints from the pre-rendered viewpoints (via hardware accelerated bilinear interpolation on the GPU).
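Conceptually, reconstructing a new viewpoint amounts to a weighted blend of the nearest pre-rendered views. The sketch below (a much-simplified CPU version, ignoring the virtual texturing and the depth-based refocusing) bilinearly interpolates the four grid cameras surrounding the requested virtual camera position:

```cpp
// Much-simplified sketch of novel-view reconstruction from a camera grid
// (illustration only; the actual viewer does this per pixel on the GPU with
// virtual texturing). The virtual camera position, expressed in fractional
// grid coordinates, falls between four pre-rendered views, which are blended
// bilinearly.
#include <algorithm>
#include <vector>

struct Image { int width, height; std::vector<float> rgb; };  // 3 floats per pixel

// views[row * cols + col] holds the render from grid camera (row, col).
Image interpolateView(const std::vector<Image>& views, int rows, int cols,
                      float gridX, float gridY) {   // fractional grid coordinates
    gridX = std::clamp(gridX, 0.0f, cols - 1.0f);
    gridY = std::clamp(gridY, 0.0f, rows - 1.0f);
    int   c0 = std::min(static_cast<int>(gridX), cols - 2);
    int   r0 = std::min(static_cast<int>(gridY), rows - 2);
    float fx = gridX - c0, fy = gridY - r0;

    const Image& v00 = views[r0 * cols + c0];
    const Image& v01 = views[r0 * cols + c0 + 1];
    const Image& v10 = views[(r0 + 1) * cols + c0];
    const Image& v11 = views[(r0 + 1) * cols + c0 + 1];

    Image out{ v00.width, v00.height, std::vector<float>(v00.rgb.size()) };
    for (size_t i = 0; i < out.rgb.size(); ++i) {
        // Bilinear blend of the four neighbouring views, value by value.
        float top    = v00.rgb[i] * (1 - fx) + v01.rgb[i] * fx;
        float bottom = v10.rgb[i] * (1 - fx) + v11.rgb[i] * fx;
        out.rgb[i]   = top * (1 - fy) + bottom * fy;
    }
    return out;
}
```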
Results and Live Demo
A live online demo of the light field with the dragon can be seen here:
You can change the viewpoint (within the limits of the original camera grid) and refocus the image in real-time by clicking on the image.
I rendered the Stanford dragon using a 17 by 17 camera grid and a distance of 5 cm between adjacent cameras. The light field was created by rendering the scene from 289 (17x17) different camera viewpoints, which took about 6 minutes in total (about 1 to 2 seconds of rendertime per 512x512 image on a good GPU). The 289 renders are then highly compressed (for this scene, the 107 MB batch of 289 images was compressed down to only 3 MB!).
A depth map is also created at the same time, which enables on-the-fly refocusing of the image by interpolating information from several images.
A later tutorial will add a bit more freedom to the camera, allowing for rotation and zooming.