Awesome times for RTRT! 2010 is definitely going to be a breakthrough year for GPU-based ray tracing: there's a sudden tsunami of GPU renderers coming from everywhere. It started with the announcement of V-Ray RT for CUDA by Chaos Group at Siggraph 2009, followed by the iray demonstration by mental images at Nvidia's GTC 2009. Then in December, LuxRender showed their experiments with an OpenCL-accelerated version of their renderer. This month, a new renderer popped up called Octane Render, developed by the former lead developer of LuxRender. In reaction to this newcomer among unbiased renderers, other rendering companies have unveiled their GPU acceleration plans as well: Fryrender (Arion), Bunkspeed and Indigo Renderer. I'm sure Maxwell Render and Kerkythea (or Thea Render) will follow very soon.
I think most render companies were impatiently waiting for Larrabee, because porting their rendering software to a many-core CPU/GPU hybrid sounds much more attractive than porting to the relatively inefficient and memory-starved GPU architecture... at least, that's what most programmers without GPGPU experience would think, and they needed proof (in the form of OptiX (formerly NVIRT), V-Ray GPU, iray, or even the ray-traced Ruby demo by OTOY) to realize that GPUs are in fact incredibly fast at certain ray tracing algorithms such as ray casting and path tracing. When Intel announced in December that Larrabee wouldn't see the light of day for at least another two years, these companies faced a choice: remain competitive by incorporating GPU acceleration into their products as soon as possible, or stagnate as a CPU-only render company and lose market share to their GPU-embracing competitors.
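To see why those algorithms map so naturally onto GPUs, note that every pixel's primary ray can be traced completely independently of its neighbors. Here's a minimal, hypothetical CUDA sketch of that one-thread-per-pixel mapping (a toy ray caster against a single sphere; all names are my own invention, and this is not code from any of the renderers mentioned above):

```cuda
// Minimal sketch: one thread per pixel, no synchronization needed,
// which is exactly what keeps thousands of GPU cores busy.
#include <cstdio>
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__device__ float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Hit distance of ray (origin o, direction d) against a unit sphere
// at the world origin, or -1.0f on a miss (standard quadratic test).
__device__ float hitSphere(Vec3 o, Vec3 d) {
    float a = dot(d, d);
    float b = 2.0f * dot(o, d);
    float c = dot(o, o) - 1.0f;
    float disc = b*b - 4.0f*a*c;
    return disc < 0.0f ? -1.0f : (-b - sqrtf(disc)) / (2.0f*a);
}

__global__ void castPrimaryRays(float* image, int w, int h) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= w || py >= h) return;

    // Simple pinhole camera at z = -3 looking toward the origin.
    Vec3 o = {0.0f, 0.0f, -3.0f};
    Vec3 d = {(px - w * 0.5f) / h, (py - h * 0.5f) / h, 1.0f};

    float t = hitSphere(o, d);
    image[py * w + px] = (t > 0.0f) ? 1.0f : 0.0f;  // white sphere, black bg
}

int main() {
    const int w = 640, h = 480;
    float* image;
    cudaMallocManaged(&image, w * h * sizeof(float));

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    castPrimaryRays<<<grid, block>>>(image, w, h);
    cudaDeviceSynchronize();

    printf("center pixel: %.1f\n", image[(h/2) * w + (w/2)]);  // 1.0 = hit
    cudaFree(image);
    return 0;
}
```

A path tracer has the same embarrassingly parallel structure, just with a bounce loop and random sampling inside each thread, which is why these algorithms were the first to show such dramatic GPU speedups.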
In short, my dream of realtime photorealistic graphics at playable framerates has come a huge step closer to reality. Realtime ray tracing is no longer an academic PhD project; it has become big business now that the big guys in hardware and software are playing with it. I'm sure this trend of GPU-accelerated final production rendering (not just for previz purposes) will spill over to the Hollywood CG companies. I'm incredibly excited about this revolution and eager to see what happens next (GPU cloud ray tracing like RealityServer or Fusion Render Cloud?).