Thursday, December 30, 2010

2010, an excellent year for raytracing!

What an exciting year this has been, for raytracing at least. There has been a huge buzz around accelerated ray tracing and unbiased rendering, in which the GPU has played a pivotal role. A little overview:

- Octane Render is publicly announced. A demo is released which lets many people experience high-quality unbiased GPU rendering for the first time. Unparalleled quality and amazing rendertimes, even on a low-end 8800 GTX, catch many by surprise.

- Arion, the GPU sibling of Random Control's Fryrender, is announced shortly after Octane. It touts hybrid CPU+GPU unbiased rendering as a distinguishing feature. The product eventually launches at a prohibitively expensive price (€1,000 for one multi-GPU license).

- Luxrender's OpenCL-based GPU renderer SmallLuxGPU integrates stochastic progressive photon mapping, a consistent (though technically biased) rendering method which excels at caustic-heavy scenes

- Brigade path tracer is announced: a hybrid (CPU+GPU) real-time path tracer aimed at games. It is heavily optimized, very fast, offers user-defined quality, and is the first path tracer with support for dynamic objects. Its GI quality greatly surpasses virtual point light/instant radiosity based methods and even photon mapping; it can theoretically handle all types of BRDF, is artefact-free (except for noise), has no screen-space limitations and runs at nearly real-time rates. Its biggest advantage over other methods is progressive rendering, which instantly gives a good idea of the final converged image (some filtering and LOD scheme, similar to VoxLOD, could produce very high quality results in real-time). Very promising: it could be the best option for high-quality dynamic global illumination in games in 2 to 3 years.

- release of Nvidia's Fermi GPU: caches and other enhancements (e.g. concurrent kernel execution) give ray tracing tasks an enormous boost, up to 3.5x faster in scenes with many incoherent rays compared to the previous architecture. Design Garage, an excellent tech demo featuring GPU path tracing, is released alongside the cards

- Siggraph 2010 puts heavy focus on GPU rendering

- GTC 2010: Nvidia organizes a whole bunch of GPU ray tracing sessions covering OptiX, iray, etc.

- John Carmack re-expresses interest in real-time ray tracing as an alternative rendering method for next-generation games (besides sparse voxel octrees). He even started tweeting about his GPU ray tracing experiments in OpenCL: http://raytracey.blogspot.com/2010/08/is-carmack-working-on-ray-tracing-based.html

- GPU rendering draws more and more criticism from the CPU rendering crowd (Luxology, Next Limit, their user bases, ...), who feel the threat of decreased revenue

- release of mental ray's iray

- release of V-Ray RT GPU, the product that started the GPU rendering revolution

- Caustic Graphics is bought by Imagination Technologies, maker of the PowerVR GPUs. A surprising and potentially successful move for both companies: hardware-accelerated real-time path tracing at very high sampling rates (higher than on Nvidia's Fermi) could become possible. PowerVR GPUs are integrated in the Apple TV, iPad, iPhone and iPod Touch, so this is certainly something to keep an eye on in 2011. Caustic doesn't disappoint when it comes to hype and drama :)

- one of my most burning questions since the advent of path tracing on the GPU, "is the GPU capable of more sophisticated and efficient rendering algorithms than brute-force path tracing?", was answered just a few weeks ago, thanks to Dietger van Antwerpen and his superb work on GPU-based Metropolis light transport and energy redistribution path tracing.

All in all, 2010 was great for me and delivered a lot to write about. Hopefully 2011 will be at least equally exciting. Some wild speculation of what might happen:

- Metropolis light transport starts to appear in commercial GPU rendering software (very high probability for Octane)
- more news about Intel's Knights Corner/Knights Ferry, with maybe some performance numbers (unlikely)
- Nvidia launches Kepler at the end of 2011, offering 3x the path tracing performance of Fermi (too good to be true?)
- PowerVR maker Imagination Technologies and Caustic Graphics bring hardware-accelerated real-time path tracing to a mass audience through Apple's mobile products (would be great)
- Luxology and Maxwell Render reluctantly embrace GPU rendering (LOL)
- finally a glimpse of OTOY's real-time path tracing (fingers crossed)
- Brigade path tracer gains exposure and awareness with the release of the first path traced game in history (highly possible)
- ...

Joy!

Monday, December 27, 2010

Global illumination with Markov Chain Monte Carlo rendering in Nvidia OptiX 2.1 + Metropolis Light Transport with participating media on GPUs

OptiX 2.1 was released a few days ago and includes a Markov Chain Monte Carlo (MCMC) sample which only works on Fermi cards ("New sample: MCMC - Markov Chain Monte Carlo method rendering. A global illumination solution that requires an SM 2.0 class device (e.g. Fermi) or higher").

MCMC rendering methods, such as MLT (Metropolis light transport) and ERPT (energy redistribution path tracing), are partially sequential because each path of a Markov chain depends on the previous path, and they are therefore more difficult to parallelize for GPUs than standard Monte Carlo algorithms. Below is an image of the new MCMC sampler included in the new OptiX SDK, which can be downloaded from http://developer.nvidia.com/object/optix-download.html.
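To see where that sequential dependency comes from, here is a minimal C++ sketch of the Metropolis loop at the heart of all MCMC rendering. A toy 1D target function stands in for the luminance of a full light path; this is my own illustration, not Nvidia's actual sample code:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Stand-in for the luminance of a full light path; a real renderer would
// trace the path described by the sample through the scene here.
double pathContribution(double x) { return x * x * (1.0 - x); } // toy target on [0,1)

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    double current  = 0.5;                        // current state of the Markov chain
    double currentF = pathContribution(current);
    double accum = 0.0;
    const int N = 100000;

    for (int i = 0; i < N; ++i) {
        // Propose a small mutation of the *previous* sample. This is the data
        // dependency that makes MCMC partially sequential: iteration i cannot
        // start before iteration i-1 has produced 'current'.
        double proposal = current + 0.05 * (uniform(rng) - 0.5);
        proposal -= std::floor(proposal);         // wrap into [0,1), keeps the proposal symmetric
        double proposalF = pathContribution(proposal);

        // Metropolis acceptance: always accept uphill moves, sometimes downhill.
        if (uniform(rng) * currentF < proposalF) {
            current  = proposal;
            currentF = proposalF;
        }
        accum += current; // in a renderer: splat the current path's contribution to the image
    }
    // The chain's mean should approach 0.6, the true mean of the toy target density.
    printf("mean of chain: %f\n", accum / N);
    return 0;
}
```

GPU implementations typically side-step this dependency by running many shorter, independent chains in parallel, one per thread.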




There is also an update on the Kelemen-style Metropolis Light Transport GPU renderer from Dietger van Antwerpen. He has released this new video showing Metropolis light transport with participating media running on the GPU: http://www.youtube.com/watch?v=3Xo0qVT3nxg



This scene is straight from the original Metropolis light transport paper by Veach and Guibas (http://graphics.stanford.edu/papers/metro/metro.pdf). Participating media (like fog, smoke and god rays) are among the most difficult and compute-intensive phenomena to simulate accurately with global illumination, because they are essentially volumetric effects in which light scattering occurs. Subsurface scattering belongs to the same category of expensive, difficult-to-render volumetric effects. The video shows it can now be done in almost real-time with MLT, which is pretty impressive!

Friday, December 24, 2010

Move over OTOY, here comes the new AMD tech demo!

June 2008: Radeon HD 4870 launches with the OTOY/Cinema 2.0/Ruby tech demo featuring voxel raytracing. It can't get much closer to photorealism than this... or can it?

December 2010: Radeon HD 6970 launches with this craptastic tech demo. Talk about progress. Laughable fire effects, crude physics with only a few dozen dynamic objects, pathetic Xbox 1 city model and lighting, uninspired Mecha design. It may be just a tech demo but this is just a disgrace for a high-tech GPU company. Well done AMD! Now where the hell is that Cinema 2.0 Ruby demo you promised dammit? My HD 4890 is almost EOL and already LOL :p

Sunday, December 19, 2010

GPU-accelerated biased and unbiased rendering

Since I've seen the facemeltingly awesome YouTube video of Kelemen-style MLT+bidirectional path tracing running on a GPU, I'm quite convinced that most (if not all) unbiased rendering algorithms can be accelerated on the GPU. Here's a list of the most common unbiased algorithms which have been ported successfully to the GPU:

- unidirectional (standard) path tracing: used by Octane, Arion, V-Ray RT GPU, iray, SmallLuxGPU, OptiX, Brigade, Indigo Render, a bunch of renderers integrating iray, etc. Jan Novak was one of the first to report a working implementation of path tracing on the GPU (implemented with CUDA on a GTX 285, https://dip.felk.cvut.cz/browse/pdfcache/novakj8_2009dipl.pdf). The very first paper reporting GPU path tracing is "Stochastic path tracing on consumer graphics cards" from 2008 by Huwe and Hemmerling (implemented in GLSL). A minimal sketch of the progressive estimator shared by all of these follows below the list.
- bidirectional path tracing (BDPT): http://www.youtube.com/watch?v=70uNjjplYzA, I think Jan Novak, Vlastimil Havran and Carsten Dachsbacher made this work as well in their paper "Path regeneration for interactive path tracing"
- Metropolis Light Transport (MLT)+BDPT: http://www.youtube.com/watch?v=70uNjjplYzA
- energy redistribution path tracing (ERPT): http://www.youtube.com/watch?v=c7wTaW46gzA, http://www.youtube.com/watch?v=d9X_PhFIL1o
- (stochastic) progressive photon mapping (SPPM): used by SmallLuxGPU; there's also a GPU-optimised, parallelised version on Toshiya Hachisuka's website; CUDA: http://www.youtube.com/watch?v=zg9NcCw53iA, OpenCL: http://www.youtube.com/watch?v=O5WvidnhC-8
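As promised above, here is a minimal C++ sketch of the progressive scheme behind all of these renderers: every pass adds one noisy but unbiased sample per pixel, and the displayed image is the running average, which is why a rough but faithful preview appears instantly and then refines over time. The radiance function is a stand-in; a real renderer would shoot a camera ray here and follow random bounces through the scene:

```cpp
#include <cstdio>
#include <random>
#include <vector>

const int W = 4, H = 2;   // tiny "image" so the output fits in a terminal

// Stand-in for tracing a full light path through the pixel; returns a noisy
// but unbiased estimate of the pixel's true radiance.
double radianceSample(int x, int y, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double groundTruth = double(x + y) / double(W + H);   // fake scene
    return groundTruth + 0.5 * (u(rng) - 0.5);            // zero-mean noise
}

int main() {
    std::mt19937 rng(7);
    std::vector<double> accum(W * H, 0.0);

    for (int pass = 1; pass <= 1000; ++pass) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                accum[y * W + x] += radianceSample(x, y, rng);

        if (pass == 1 || pass == 1000) {   // print the first and last "frame"
            printf("pass %d:\n", pass);
            for (int y = 0; y < H; ++y) {
                for (int x = 0; x < W; ++x)
                    printf("%6.3f ", accum[y * W + x] / pass); // running average
                printf("\n");
            }
        }
    }
    return 0;
}
```

The first frame is noisy but already shows the overall brightness distribution; by pass 1000 the averages have converged close to the ground truth, which is exactly the behaviour that makes progressive path tracing so pleasant to work with.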

Octane, my fav unbiased GPU renderer, will also implement an MLT-like rendering algorithm in the next version (beta 2.3 version 6), which is "coming soon". I gathered some interesting quotes from radiance (Octane's main developer) regarding MLT in Octane:

“We are working on a firefly/caustic capable and efficient rendering algorithm, it's not strictly MLT but a heavily modified version of it. Trust me, this is the last big feature we need to implement to have a capable renderer, so it's our highest priority feature to finish.”

“MLT is an algorithm that's much more efficient at rendering complex scenes, not so efficient at simple, directly lit scenes (eg objects in the open). However MLT does sample away the fireflies.”

“The fireflies are a normal side effect of unbiased rendering, they are reflective or refractive caustics. We're working on new algorithms in the next version that will solve this as it will compute these caustics better.”

“they are caustics, long paths with a high contribution, a side effect of unbiased path tracing. MLT will solve this problem which is in development and slated for beta 2.3”

“the pathtracing kernel already does caustics, it's just not very efficient without MLT, which will be in the next 2.3 release.”

“lights (mesh emitters) are hard to find with our current algorithms, rendertimes will severely improve with the new MLT replacement that's coming soon.”

“it will render more efficiently [once] we have portals/MLT/bidir.”

“All exteriors render in a few minutes clean in octane currently (if you have a decent GPU like a medium range GTX260 or better). Interiors is more difficult, requires MLT and ultimately bidir path tracing. However, with plain brute force pathtracing octane is the same or slightly faster than a MLT/Bidir complex/heavily matured [CPU] engine, which gives good promise for the future, as we're working on those features asap.”

With all unbiased rendering techniques soon possible and greatly accelerated on the GPU, what about GPU acceleration for biased production rendering techniques (such as photon mapping and irradiance caching)? There have been a lot of academic research papers on this subject (e.g. Purcell, Rui Wang and Kun Zhou, Fabianowski and Dingliana, McGuire and Luebke, ...), but since it's a lot trickier to parallelize photon mapping and irradiance caching than unbiased algorithms while still obtaining production quality, it's still not quite ready for integration in commercial software. But this will change very soon imo: on the ompf forum I've found a link to a very impressive video showing very high-quality CUDA-accelerated photon mapping: http://www.youtube.com/watch?v=ZTuos2lzQpM.
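For reference, the core of photon mapping, the radiance estimate at a shading point, is simple to sketch; the parts that are genuinely hard to parallelize are building and traversing the photon kd-tree and the final gathering. A minimal C++ illustration of the density estimate (linear scan instead of a kd-tree, grayscale flux, hypothetical names; this is my own toy sketch, not the code from the video):

```cpp
#include <cstdio>
#include <vector>

const float PI = 3.14159265f;

// Minimal sketch of the photon map density estimate: sum the flux of the
// photons landing inside a gather disc around the shading point and divide
// by the disc's area. A real implementation finds the nearest photons with
// a kd-tree; the linear scan below is only for clarity.

struct Photon { float pos[3]; float flux; };   // grayscale flux, in watts

float estimateRadiance(const std::vector<Photon>& photonMap,
                       const float x[3], float radius, float albedo) {
    float r2 = radius * radius, fluxSum = 0.0f;
    for (const Photon& p : photonMap) {
        float dx = p.pos[0] - x[0], dy = p.pos[1] - x[1], dz = p.pos[2] - x[2];
        if (dx*dx + dy*dy + dz*dz <= r2)
            fluxSum += p.flux;                 // photon contributes to the estimate
    }
    // Diffuse BRDF (albedo/pi) times flux area density (fluxSum / (pi*r^2)).
    return (albedo / PI) * fluxSum / (PI * r2);
}

int main() {
    std::vector<Photon> photonMap = { {{0,0,0}, 1.0f}, {{0.1f,0,0}, 1.0f}, {{5,5,5}, 1.0f} };
    const float point[3] = {0, 0, 0};
    printf("radiance: %f\n", estimateRadiance(photonMap, point, 0.5f, 0.7f));
    return 0;
}
```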

This is Sponza at 800x800 resolution, rendered in 11.5 seconds on a single GTX 470! (image taken from http://kaikaiwang.blogspot.com/):

11.5 seconds for this quality and resolution on just one GPU is pretty amazing if you ask me. I'm sure that further optimizations could bring the rendertime down to 1 second. The video also shows real-time interaction (scale, rotate, move, delete) with objects from the scenery (something that could be extended to support many dynamic objects via HLBVH). I could see this being very useful for real-time production-quality global illumination using a hybrid of path tracing for exteriors and photon mapping for interiors, caustics and point lights.

Just like 2010 was the year of GPU-accelerated unbiased rendering, I think 2011 will become the year of heavily GPU-accelerated biased rendering (photon mapping in particular).

Wednesday, December 15, 2010

Real-time Metropolis Light Transport on the GPU: it works!!!!


This is probably the most significant news since the introduction of real-time path tracing on the GPU. I've been wondering for quite a while if MLT (Metropolis Light Transport) would be able to run on current GPU architectures. MLT is a more efficient and more complex algorithm than path tracing for rendering certain scenes which are predominantly indirectly lit (e.g. light coming through a narrow opening, such as a half-closed door, and illuminating a room), a case in which path tracing has great difficulty finding "important" contributing light paths. For this reason, it is the rendering method of choice for professional unbiased renderers like Maxwell Render, Fryrender, Luxrender, Indigo Render and Kerkythea Render.

Dietger van Antwerpen, an IGAD student who co-developed the Brigade path tracer and who also managed to make ERPT (energy redistribution path tracing) run in real-time on a Fermi GPU, has posted two utterly stunning and quite unbelievable videos of his latest progress:

- video 1 showing a comparison between real-time ERPT and path tracing on the GPU:

ERPT on the left, standard path tracing (PT) on the right. Light is coming in through a narrow opening, a scenario in which PT has a hard time finding light paths and converging, because it samples the environment randomly. ERPT shares properties with MLT: once it finds an important light path, it samples nearby paths via small mutations of the found path, so convergence is much faster.

- video 2 showing Kelemen-style MLT (an improvement on the original MLT algorithm) running in real-time on the GPU. The video description mentions Kelemen-style MLT on top of bidirectional path tracing (BDPT) with multiple importance sampling, which is pretty amazing.
Kelemen-MLT after 10 seconds of rendering at 1280x720 on a single GTX 470. The beautiful caustics are possible due to bidirectional path tracing+MLT and are much more difficult to obtain with standard path tracing.
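The trick that makes Kelemen-style MLT comparatively GPU-friendly is that it mutates the vector of uniform random numbers that generated a path (the "primary sample space") rather than the path itself, so the path construction code stays an ordinary (bidirectional) path tracer. Here is a minimal C++ sketch of the small-step mutation from the Kelemen et al. 2002 paper (my own toy illustration, not Dietger's code):

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// "Small step" mutation from Kelemen-style MLT: perturb each coordinate of
// the random number vector that generated the current path. Log-uniformly
// distributed offsets keep most mutations tiny while still allowing
// occasional larger jumps.
double perturb(double value, double s1, double s2, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    // Offset magnitude between s1 and s2, log-uniformly distributed
    // (the mutation size distribution from Kelemen et al. 2002).
    double dv = s2 * std::exp(-std::log(s2 / s1) * u(rng));
    value += (u(rng) < 0.5) ? dv : -dv;   // random sign
    value -= std::floor(value);           // wrap around to stay in [0,1)
    return value;
}

int main() {
    std::mt19937 rng(1);
    // The uniform random numbers that generated the current light path.
    std::vector<double> primarySample = {0.3, 0.7, 0.5};
    // A small step proposes a nearby point in [0,1)^3, i.e. a nearby path.
    for (double& v : primarySample)
        v = perturb(v, 1.0 / 1024.0, 1.0 / 64.0, rng);
    for (double v : primarySample) printf("%f ", v);
    printf("\n");
    return 0;
}
```

The mutated vector is then fed back through the path tracer and the resulting path is accepted or rejected with the usual Metropolis test (see the MCMC sketch in the OptiX 2.1 post above), which is why the whole scheme maps so naturally onto an existing GPU path tracer.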

These videos are ultimate proof that current GPUs are capable of more complex rendering algorithms than brute-force standard path tracing and can potentially accelerate the very same algorithms used in the major unbiased CPU renderers. This bodes very well for GPU renderers like Octane (which has its own MLT-like algorithm), V-Ray RT GPU, SmallLuxGPU and iray.

If Dietger decides to implement these in the Brigade path tracer we could be seeing (quasi) noise-free, real-time path traced (or better "real-time BDPT with MLT" traced) games much sooner than expected. Verrrry exciting stuff!! I think some rendering companies would hire this guy instantly.

Friday, December 10, 2010

Voxels again

Just encountered a very nice blog about voxel rendering, sparse voxel octrees and massive procedural terrain rendering: http://procworld.blogspot.com

The author has made some videos of the tech using OpenCL, showing the great detail that can be achieved when using voxels: http://www.youtube.com/watch?v=PzmsCC6hetM and http://www.youtube.com/watch?v=oZ6x_jbZ2GA. It does look a bit like the Atomontage engine.

Wednesday, December 1, 2010

OnLive just works, even for those darn Europeans!

I think this deserves its own post. Someone (Anonymous) told me that the OnLive service can be accessed and played from EU countries as well, so I gave it a try and downloaded and installed the tiny OnLive plug-in. To my surprise, I actually got it working on a pretty old PC (just a Pentium 4 at 3GHz). I was flabbergasted. I'm about 6000 miles away from the OnLive servers and it's still running! The quality of the video stream was more than decent, and it was smoother than 720p YouTube videos, which my system just cannot decode properly.

My first impression: I love it, and now I'm absolutely positive that this is the very near future for video games. It's a joy to watch others play and to start playing the same game within seconds! I've tried some Borderlands, Splinter Cell Conviction and FEAR 2. There is some lag, because I'm about 6000 miles away from the OnLive servers (I got a warning during log-in that my connection has huge latency), but I could nevertheless still enjoy the games. About half a second (or less) passes between hitting the shoot key and seeing the gun actually shoot, and the same delay applies when moving your character. I must say though that I got used to the delay after a while and anticipated my moves by half a second. My brain noticed the delay during the first minutes of play, but I forgot about it soon enough and just enjoyed the game. I think that if I can enjoy an OnLive game from 6000 miles away, then US players, who live much closer to the OnLive servers, have got to have an awesome experience. The lag could also be due to my own ancient PC (which is not even dual core) or to the local network infrastructure here in Belgium, even though I have a pretty high-bandwidth connection. I can't wait until they deploy their EU servers.

Image quality is very variable; I guess that's partly because of my PC, which cannot decode the video stream fast enough. FEAR 2 looked very sharp though. The image looks best when you're not moving the camera and just stare at the scene. The recently announced MicroConsole seems to offer very good image quality from what I've read.

I think that cloud gaming will give an enormous boost to the graphics side of games and that photorealistic games will be here much sooner thanks to cloud rendering and its inherent rendering efficiency (especially when using ray tracing, see the interview with Jules Urbach). My biggest gripe with consoles like the Xbox and PlayStation is that they stall graphics development for the duration of the console cycle (around 5 years), especially the latest round of consoles. With the exception of Crysis, PC games don't make full use of the latest GPUs, which are much more powerful than the consoles. I just ran 3DMark05 a few days ago, and it struck me that this 5-year-old benchmark still looks superior to any console game on the market. I truly hope that cloud gaming will get rid of fixed console hardware and free up game developers (and graphics engineers in particular) to go nuts, because I'm sick of seeing yet another Unreal Engine 3 powered game.

I also think that OnLive will not be the only player and that there will be fierce competition between several cloud gaming services, each with their own exclusive games. I can imagine a future with multiple cloud gaming providers such as OnLive, OTOY, Gaikai, PlayStation Little Big Cloud, Activision Cloud of Duty, EA Battlefield Cloud of Honor, UbiCloud, MS Red Ringing Cloud of Death (offering Halo: RROD exclusively), Valve Strrream... To succeed they would have to be accessible for free (just like OnLive is now), without monthly subscription fees.

All in all, it's an awesome experience; it's going to open up gaming to the masses and will give a new meaning to the word "video" game. The incredible ease of use (easier than downloading a song from the iTunes Store) will attract vast audiences, and for this reason I think it's going to be much bigger than the Wii and will completely shake up the next-gen console landscape (Wii 2, PS4 and Xbox 2.5/720/RROD2/...). MS, Sony and Nintendo had better think twice before releasing a brand new console.

Be it OnLive, OTOY, Gaikai or any other service, I, for one, welcome our new cloud gaming overlords!