Quasi-random, more or less unbiased blog about real-time photorealistic GPU rendering
Thursday, May 27, 2010
There are lots of amazing screenshots from Nvidia's Design Garage demo for the new Fermi cards. I like this interior shot in particular, because it reminds me of the Cinema2.0/OTOY/Ruby demo:
From http://www.evga.com/forums/tm.aspx?m=289470&mpage=6
Wednesday, May 26, 2010
Intel pronounces Larrabee dead, how will this affect Unreal Engine 4?
You might remember this interview with Epic Games' ever so humble president Mike Capps from a few weeks ago, in which he said "if you look at what’s happening in the PC market – Larrabee and all that – it’s really taking off, and I think the jump to next generation’s going to be another really big one". Sadly, in an unexpected turn of events, Intel decided otherwise and killed off the GPU that was partly Tim Sweeney's baby. On numerous occasions (e.g. Siggraph '09) Sweeney has stated that Epic's next-gen game technology, Unreal Engine 4, was built specifically with Larrabee's multi-core architecture in mind.
Hopefully, this devastating revelation from Intel will not hinder Unreal Engine 4's supremacy in the next console generation, because I dare not imagine what console graphics would have looked like if it weren't for UE3's anti-aliasing-free dominance... oh the humanity!
Thursday, May 20, 2010
V-Ray GPU news
Yesterday a new video of V-Ray's GPU renderer surfaced on the net: http://www.spot3d.com/vray/images/rt_movies/20100514_VRayRTGPU.wmv
The rendering speed and interactivity look phenomenal, but then again it's being rendered on three GTX 480s, so no real surprise there. It also uses OpenCL, and Chaos Group is the first to deliver a working commercial GPU renderer that is not CUDA-only (LuxRender's smallLuxGPU was actually first with OpenCL, but it is open source).
Chaos Group started the whole GPU rendering revolution nine months ago at Siggraph 2009 (mental images probably had a working implementation first with iray, but it was not shown in public until GTC 2009). Not only did they prove that path traced rendering with high-quality global illumination was possible on GPUs, but also that the GPU was an order of magnitude faster at this kind of rendering than the CPU. Both of these amazing feats were utterly unbelievable just ten months ago for everyone but the lucky few at Nvidia, Chaos Group and mental images.
The V-Ray GPU presentation on a simple, affordable PC (a quad-core i7 with a GTX 285) has inspired many other developers to start working on a GPU renderer (in contrast to iray's GTC demonstration on an über render server consisting of 15 Teslas). If it weren't for Chaos Group, most people probably still wouldn't have the ability to render on the GPU, or even know that it was actually possible.
Wednesday, May 19, 2010
Cloud gaming a hot topic at HPG2010 and Onlive coming to Belgium!
Woot! Onlive is coming to my small, governmental-crisis-prone country :-D. Yay! http://blog.onlive.com/2010/05/16/onlive-coming-to-belgium/
From the OnLive announcement: "Belgacom, the largest broadband operator in Belgium, has made an investment in OnLive, and has partnered with us to deliver the OnLive® Game Service to their broadband customers. Belgacom has the exclusive right to bundle the OnLive Game Service in Belgium and Luxembourg with their other broadband services, but gamers in these countries also will have the option of ordering directly from OnLive through any Internet service provider."
Too bad I hate those soul-sucking fuckers from Belgacom and I refuse to pay them a cent. OnLive should have partnered with Telenet: much better broadband service (Telenet is on cable, which is on average 3x faster than Belgacom's ADSL network) and much more popular in general than the state-owned monopoly that is Belgacom.
In other news, HPG2010 will feature Turner Whitted (ray tracing pioneer) and Cevat Yerli (Crytek) as keynote speakers, and both will talk about server-side rendering http://www.highperformancegraphics.org/program.html
Tuesday, May 18, 2010
Watch the rendering equation being solved in real-time!
Video from the "Brigade" real-time path tracer:
http://www.youtube.com/watch?v=b7W4BQevKiM
It looks dreamy (because of the blur), and it fits the atmosphere perfectly because it fulfills a dream for many (including myself):
Watching path tracing in real-time is very satisfying imo: it has been considered the most physically accurate but slowest solution to the rendering equation, and its real-time implementation has remained something of a holy grail for graphics researchers since its conception in the 1980s. Now that this long-sought goal has (almost) been reached, I find it particularly pleasing to re-read the following overview of the history and principles of path tracing on Wikipedia:
Path tracing (shamelessly copied from Wikipedia)
Path tracing is a computer graphics rendering technique that attempts to simulate the physical behaviour of light as closely as possible. It is a generalisation of conventional ray tracing, tracing rays from the virtual camera through several bounces on or through objects. The image quality provided by path tracing is usually superior to that of images produced using conventional rendering methods at the cost of much greater computation requirements.
Path tracing is the simplest, most physically accurate and slowest rendering method. It naturally simulates many effects that have to be specifically added to other methods (ray tracing or scanline rendering), such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. Implementation of a renderer including these effects is correspondingly simpler.
Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of other rendering algorithms. In order to get high quality images from path tracing, a very large number of rays need to be traced lest the image have lots of visible artefacts in the form of noise.
History
The rendering equation and its use in computer graphics was presented by James Kajiya in 1986.[1] This presentation contained what was probably the first description of the path tracing algorithm. Later that year, Lafortune suggested many refinements, including bidirectional path tracing.[2]
Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.
More recently, computers and GPUs have become powerful enough to render images more quickly, causing more widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on a GPU in 2002.[3] In 2009, Vladimir Koylazov from Chaos Group demonstrated the first commercial implementation of a path tracer running on a GPU, and other implementations have followed.[4] This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL.
Description
In the real world, many small amounts of light are emitted from light sources, and travel in straight lines (rays) from object to object, changing colour and intensity, until they are absorbed (possibly by an eye or camera). This process is simulated by path tracing, except that the paths are traced backwards, from the camera to the light. The inefficiency arises in the random nature of the bounces from many surfaces, as it is usually quite unlikely that a path will intersect a light. As a result, most traced paths do not contribute to the final image.
This behaviour is described mathematically by the rendering equation, which is the equation that path tracing algorithms try to solve.
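For the curious, the rendering equation itself (in the notation most commonly used today) looks like this:
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
Here L_o is the radiance leaving point x in direction \omega_o, L_e is the radiance emitted at x, f_r is the BRDF, L_i is the radiance arriving from direction \omega_i, and the integral runs over the hemisphere \Omega above the surface normal n. Path tracing estimates that integral with Monte Carlo sampling, which is exactly where the noise in the images comes from.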
Path tracing is not simply ray tracing with infinite recursion depth. In conventional ray tracing, lights are sampled directly when a diffuse surface is hit by a ray. In path tracing, a new ray is randomly generated within the hemisphere above the surface and then traced until it hits a light (which may never happen). This type of path can hit many diffuse surfaces before interacting with a light.
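To make the difference concrete, here is a minimal sketch of the inner loop of a naive path tracer. Consider it C++-flavoured pseudocode rather than anyone's actual source: Scene, Ray, Hit, Vec3, Rng, kMaxDepth, maxComponent and sampleCosineHemisphere are all hypothetical placeholders.

// One radiance sample along a camera ray (naive path tracing sketch).
// All types and helper functions here are hypothetical, for illustration only.
Vec3 traceSample(const Scene& scene, Ray ray, Rng& rng) {
    Vec3 radiance(0.0f, 0.0f, 0.0f);
    Vec3 throughput(1.0f, 1.0f, 1.0f);
    for (int depth = 0; depth < kMaxDepth; ++depth) {
        Hit hit;
        if (!scene.intersect(ray, &hit))
            break;                                 // path escaped the scene
        radiance += throughput * hit.emission;     // only emissive hits add light
        // Pick a new direction in the hemisphere around the surface normal.
        // With cosine-weighted sampling of a diffuse (Lambertian) surface,
        // brdf * cos(theta) / pdf conveniently reduces to the surface albedo.
        Vec3 dir = sampleCosineHemisphere(hit.normal, rng);
        throughput *= hit.albedo;
        // Russian roulette: terminate long paths without introducing bias.
        float survive = maxComponent(throughput);
        if (rng.uniform() >= survive)
            break;
        throughput /= survive;
        ray = Ray(hit.position + hit.normal * 1e-4f, dir);
    }
    return radiance;
}

Note that nothing in this loop samples the lights directly; that is exactly why so many paths contribute nothing: they simply never happen to hit an emitter.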
Bidirectional path tracing
In order to accelerate the convergence of images, bidirectional algorithms trace paths in both directions. In the forward direction, rays are traced from light sources until they are too faint to be seen or strike the camera. In the reverse direction (the usual one), rays are traced from the camera until they strike a light or too many bounces ("depth") have occurred. This approach normally results in an image that converges much more quickly than using only one direction.
Veach and Guibas give a more accurate description[5]:
These methods generate one subpath starting at a light source and another starting at the lens, then they consider all the paths obtained by joining every prefix of one subpath to every suffix of the other. This leads to a family of different importance sampling techniques for paths, which are then combined to minimize variance.
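A rough way to picture that connection step in code (again a heavily simplified, C++-flavoured sketch with hypothetical types, and with the multiple-importance-sampling weight hidden behind an abstract misWeight call, since that weighting is the genuinely subtle part of a real implementation):

// Conceptual sketch of the bidirectional connection step.
// lightPath and cameraPath are the two subpaths described above; each vertex
// is assumed to store its position, accumulated throughput and local BRDF.
Vec3 connectSubpaths(const std::vector<PathVertex>& lightPath,
                     const std::vector<PathVertex>& cameraPath,
                     const Scene& scene) {
    Vec3 total(0.0f, 0.0f, 0.0f);
    for (size_t s = 0; s < lightPath.size(); ++s) {
        for (size_t t = 0; t < cameraPath.size(); ++t) {
            const PathVertex& a = lightPath[s];
            const PathVertex& b = cameraPath[t];
            if (!scene.visible(a.position, b.position))
                continue;  // the connecting segment is blocked by geometry
            // Throughput carried along both subpaths, times the BRDFs at the
            // two connection vertices, times the geometry term between them.
            Vec3 contribution = a.throughput * b.throughput
                              * a.evalBrdf(b.position) * b.evalBrdf(a.position)
                              * geometryTerm(a, b);
            // The same overall path length can be produced by several (s, t)
            // strategies; the MIS weight combines them without double counting.
            total += misWeight(s, t, lightPath, cameraPath) * contribution;
        }
    }
    return total;
}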
Performance
A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually takes around 5000 samples for most images, and many more for pathological cases. This can take hours or days depending on scene complexity and hardware and software performance. Newer GPU implementations promise 1-10 million samples per second on modern hardware, producing acceptably noise-free images in seconds or minutes. Noise is particularly a problem for animations, giving them a normally unwanted "film grain" quality of random speckling.
Metropolis light transport obtains more important samples first, by slightly modifying previously-traced successful paths. This can result in a lower-noise image with fewer samples.
Renderer performance is quite difficult to measure fairly. One approach is to measure "Samples per second", or the number of paths that can be traced and added to the image each second. This varies considerably between scenes and also depends on the "path depth", or how many times a ray is allowed to bounce before it is abandoned. It also depends heavily on the hardware used. Finally, one renderer may generate many low quality samples, while another may converge faster using fewer high-quality samples.
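A quick back-of-the-envelope example of what those numbers mean in practice; every figure below is an assumption picked for illustration, not a measurement of any particular renderer:

#include <cstdio>

int main() {
    // Hypothetical figures, for illustration only.
    const double width  = 1280.0;   // image width in pixels
    const double height = 720.0;    // image height in pixels
    const double spp    = 5000.0;   // samples per pixel for a "converged" image
    const double rate   = 4.0e6;    // assumed throughput in samples per second

    const double totalSamples = width * height * spp;
    const double seconds      = totalSamples / rate;

    std::printf("%.2e samples needed, %.0f seconds (about %.1f hours)\n",
                totalSamples, seconds, seconds / 3600.0);
    return 0;
}

At 4 million samples per second this works out to roughly 19 minutes for a converged 720p frame, while the same arithmetic at 100 samples per pixel gives about 23 seconds for a recognisable but noisy preview, which matches the "seconds or minutes" range quoted above.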
Scattering distribution functions
The reflective properties (amount, direction and colour) of surfaces are modelled using BRDFs. The equivalent for transmitted light (light that goes through the object) is the BTDF. A path tracer can take full advantage of complex, carefully modelled or measured distribution functions, which control the appearance ("material", "texture" or "shading" in computer graphics terms) of an object.
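As a tiny, self-contained illustration of the simplest possible scattering distribution function, here is an ideal diffuse (Lambertian) BRDF together with the cosine-weighted sampler a path tracer would typically pair with it; the names and layout are my own choices, not taken from any particular renderer:

#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static const float kPi = 3.14159265358979f;

// Ideal diffuse (Lambertian) BRDF: the surface scatters incoming light
// uniformly over the hemisphere, so the BRDF is simply albedo / pi.
Vec3 lambertianBrdf(const Vec3& albedo) {
    return { albedo.x / kPi, albedo.y / kPi, albedo.z / kPi };
}

// Cosine-weighted direction in a local frame where +Z is the surface normal.
// pdf = cos(theta) / pi, which cancels neatly against the Lambertian BRDF
// in the path tracing estimator.
Vec3 sampleCosineHemisphere(std::mt19937& rng, float& pdf) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    const float u1 = u01(rng);
    const float u2 = u01(rng);
    const float r = std::sqrt(u1);
    const float phi = 2.0f * kPi * u2;
    Vec3 d{ r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
    pdf = d.z / kPi;
    return d;
}

A more sophisticated material would swap the constant albedo / pi for a measured or analytic distribution (glossy, anisotropic, layered, and so on), but the sampling-plus-evaluation structure stays the same.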
Wednesday, May 12, 2010
Larrabee is really taking off...
according to Epic's president Mike Capps. So is his hair: http://www.edge-online.com/news/epic-expects-unreal-engine-4-dominance /insert bald space marine joke
Friday, May 7, 2010
Voxelstein3D progress
Since the Voxelstein3D guys implemented path tracing, the engine produces some really cool and natural-looking images:
The realistic path traced lighting may not be that obvious from these shots, but if you replaced the path tracing with direct lighting you would get the usual hard shadows, and there would be no gradual lighting falloff in indirectly lit areas. It's a little disheartening that the voxels still look quite rough and create a stair-stepping effect. I hope we'll soon have a free polygon-based game engine supporting path tracing, the Brigade engine (by Jacco Bikker and Dietger van Antwerpen, IGAD) and Nvidia's OptiX being two serious candidates. You could theoretically re-use the geometry of an existing game (let's say Half-Life 2), apply some realistic BRDF materials and render a photorealistic scene with real-time (interactive) global illumination.
Saturday, May 1, 2010
Thea Render jumps on the GPU bandwagon
Thea Render is going to incorporate GPU rendering in v1.3, which will probably arrive before the end of the year: http://www.thearender.com/downloads/TheaRenderRoadmap.pdf
A list of released and announced GPU renderers:
1. V-Ray GPU (Chaos Group)
2. iray (mental images)
3. SmallLuxGPU (LuxRender)
4. Octane Render (Refractive Software)
5. Arion Render (Random Control/FryRender)
6. Thea Render
7. SHOT using iray (Bunkspeed)
8. RTT Powerhouse and RTT DeltaGen using iray (Realtime Technology)
UPDATE:
9. Indigo Render also announced plans for GPU acceleration
UPDATE 2 (Sep 29):
10. finalRender (cebas Visual Technology) http://www.cebas.com/?pid=hot_news&nid=378
11. Artisan using OptiX (LightWorks) http://architosh.com/2010/07/sig-lightworks-unveils-power-of-optix/
12. Zeany using OptiX (Works Zebra)
I bet Maxwell and Modo will quickly follow.