Tuesday, August 18, 2009

The Future of Game Graphics according to Crytek

Cevat Yerli has given a keynote at GDC Europe about Crytek's next engine and its rendering techniques. Incrysis has pictures of the keynote slides: http://www.incrysis.com/index.php?option=com_content&task=view&id=818&Itemid=1

Two interesting slides:

According to Cevat Yerli, their next-generation engine will use a mix of ray tracing, rasterization, point-based rendering and sparse voxel octrees (SVO). (Translated from http://www.golem.de/0908/69105.html)
Yerli already talked about the technique a year ago, and other graphics programmers such as John Carmack and Jon Olick are researching it as well. According to Yerli, sparse voxel octrees will form the basis of the next version of the CryEngine, but it will only be ready in a few years.
From http://www.gamasutra.com/php-bin/news_index.php?story=24865
He then focused on the actual technical innovations that he feels will make a difference in graphics. For example, tech like point-based rendering is potentially faster than triangle-based rendering at certain higher qualities, and works well with levels of detail.

On the other hand, point-based rendering might define a certain super-high-polygon look for games, Yerli said. However: "There's a lot of games today in the Top 10 which don't need that," he conceded, and content creation tools are almost exclusively based around triangles right now.

He also noted ray-tracing as a possible rendering method to move towards, and particularly recommended rasterization and sparse voxel octrees for rendering. Such principles will form "the core" of future technology for Crytek's next engine, Yerli said, and the goal is to "render the entire world" with the voxel data structure.
Concluding, Yerli suggested that, after 2013, there are opportunities with new APIs and hardware platforms to "mix and match" between multiple rendering models, with "a Renaissance of graphics programming", and visual fidelity on a par with movies such as Shrek and Ice Age rendered in real time.
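The slides don't show implementation details, but the core idea behind sparse voxel octrees is simple enough to sketch. The node layout and lookup below are my own illustrative assumptions, not Crytek's code: each node stores up to eight children, empty octants stay unallocated (that sparsity is the whole point), and a query descends by repeatedly picking the octant that contains the query point.

```python
# Illustrative sparse-voxel-octree sketch (not Crytek's actual code).

class SVONode:
    def __init__(self, data=None):
        self.data = data            # leaf payload (e.g. color); None for interior
        self.children = [None] * 8  # empty octants are simply None

def octant(px, py, pz, cx, cy, cz):
    """Index 0..7 of the child octant containing point p, given cell center c."""
    return (px >= cx) | ((py >= cy) << 1) | ((pz >= cz) << 2)

def lookup(node, p, center=(0.5, 0.5, 0.5), half=0.5, max_depth=8):
    """Descend toward point p in the unit cube; return the deepest stored payload."""
    for _ in range(max_depth):
        if node.data is not None or not any(node.children):
            break
        i = octant(*p, *center)
        child = node.children[i]
        if child is None:           # empty space: nothing stored along this path
            return None
        half /= 2                   # shrink the cell and recenter on the octant
        center = tuple(c + (half if (i >> axis) & 1 else -half)
                       for axis, c in enumerate(center))
        node = child
    return node.data

# Store a red voxel in the all-positive octant of the unit cube:
root = SVONode()
root.children[7] = SVONode(data="red")
print(lookup(root, (0.9, 0.9, 0.9)))   # prints "red"
```

A raycaster would march this same structure along a ray instead of a point, skipping whole empty octants at once, which is what makes the traversal cheap for mostly-empty worlds.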

Friday, August 14, 2009

Next gen GPU from AMD unveiled on Sep 10

Now let's hope the "you won't believe your eyes" refers to a new Ruby demo from OTOY!

Carmack keynote at QuakeCon 09

A complete video of this year's keynote can be found at http://www.quakeunity.com/file=2919

Carmack considers cloud computing an interesting way forward for certain game types. He thinks that reducing latency to 50 ms is achievable. However, he believes that the current internet infrastructure still needs a lot of work before fast-paced "twitchy" shooters like Quake are possible.

Two live blogging reports:

Oh boy, someone's asking about Cloud Computing.
Carmack says it's "wonderful."
Talking about how parallel processing is where it's at now.
But there are physical limits.
Especially in terms of how much power computers/consoles are drawing.
Cloud Computing prevents client-side cheating.
Still some serious disadvantages when it comes to fast-twitch games like Quake, though.
With 50 millisecond lag, though, anything's possible.
Some games even have that much lag internally.
Carmack thinks Cloud Computing could be a significant force in a few years -- perhaps even a decade.

Question about cloud computing and OnLive
Carmack says cloud computing is wonderful
But Carmack says that with cloud computing for gaming you start coming up against power limits
So he says shared computing resources may be helpful, because the power bricks on something like a 360 are showing a looming power problem
Latency is the killer, Carmack says
Says The Sims would work with a cloud setup
But thinks twitch gaming will be the last kind of game that could work
Says that cloud computing would limit cheating
Thinks you could get games down to 50 millisecond lag as they're streamed via cloud computing
Wouldn't be shocked if in ten years cloud computing is a significant paradigm for some kinds of games

On OnLive-type services: latency is the issue, but far more classes of games than people think could be feasible, an example being The Sims; twitch games like Quake would be the hardest. The upside is that client-side cheating vanishes. The key will be optimizing network stacks for reasonable latency. He definitely thinks it's not a crazy idea and that it has very interesting potential.
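The 50 ms figure is easier to judge next to the internal latency a local game already has. All numbers below are my illustrative assumptions, not Carmack's: at 60 fps, a locally rendered game pipelined over a few frames already carries roughly 50 ms of input-to-photon lag, which is the point of the "some games even have that much lag internally" remark.

```python
# Back-of-the-envelope latency budget (all numbers are illustrative
# assumptions, not figures from the keynote).

FRAME_MS = 1000 / 60            # ~16.7 ms per frame at 60 fps

# A local game pipelined over three frames (input sampling, simulation +
# render, display scan-out) already carries ~50 ms of internal lag:
local_lag_ms = 3 * FRAME_MS

# A hypothetical streamed frame: one frame of server-side rendering, a
# video encode, a network round trip, and a client decode + display.
cloud_lag_ms = FRAME_MS + 5 + 20 + 5 + FRAME_MS

print(round(local_lag_ms, 1))   # ~50 ms
print(round(cloud_lag_ms, 1))
```

Under these made-up numbers the streamed path is only one local frame's worth of lag behind a triple-buffered local game, which is why slower-paced titles look feasible long before twitch shooters do.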


Sunday, August 9, 2009

The race for real-time ray tracing (Siggraph 2009)

This year's Siggraph showed that many of the big companies in 3D are heavily involved in real-time and interactive ray tracing research.

Nvidia: OptiX, mental images (RealityServer, iray)

Intel: Larrabee (LRB)

AMD: nothing AMD-specific announced yet

Caustic Graphics: CausticRT, BrazilRT, integration in 3DStudioMax Design 2010, LightWork Design, Robert McNeel & Associates, Realtime Technology AG (RTT AG), Right Hemisphere and Splutterfish

Then there was also this extremely impressive demonstration of V-Ray RT for GPUs, which caught many by surprise:

video: https://www.youtube.com/watch?v=DJLCpS107jg

and http://www.spot3d.com/vrayrt/gpu20090725.mov

V-Ray RT rendering on a GTX 285 using CUDA: a Cornell box with 5 bounces of physically correct global illumination at 40 fps, and an 800k-polygon Colosseum with 5 GI bounces at around 4 fps (with progressive rendering). Since it will be ported to OpenCL, GPUs from AMD and Intel will be able to run it as well.
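The "progressive rendering" mentioned for the Colosseum scene means the image refines over time: each pass adds one noisy GI sample per pixel and the display shows the running average, so the picture visibly cleans up while the camera holds still. A minimal sketch of that accumulation, with a made-up noise model standing in for a real GI sample:

```python
import random

def progressive_average(sample_fn, passes):
    """Running mean of noisy samples; the variance of the mean falls as
    1/N, which is what makes a progressive render converge on screen."""
    accum = 0.0
    for n in range(1, passes + 1):
        accum += (sample_fn() - accum) / n   # incremental mean update
    return accum

random.seed(1)
# Stand-in for one noisy GI estimate of a pixel whose true value is 0.5.
noisy = lambda: 0.5 + random.uniform(-0.2, 0.2)

few = progressive_average(noisy, 4)        # early passes: visibly noisy
many = progressive_average(noisy, 4000)    # later passes: close to 0.5
```

Per pixel, this running average is all a progressive renderer does between camera moves; the expensive part is producing each sample on the GPU.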

The Chaos Group and mental images presentations indicate that rendering is going to move from the CPU to the GPU very soon and will become increasingly realtime.

Caustic Graphics would like to see their cards end up in a next-gen console, as they target gaming as well. During their presentation at the Autodesk booth, they also mentioned cloud gaming as an option: put a lot of Caustic accelerators in a server and create a fully raytraced game rendered server-side. Chaos Group could in fact do the same: they could use their V-Ray RT GPU tech in a cloud rendering environment and make a photorealistic game doing realtime raytracing on a bunch of GPUs (V-Ray RT GPU supports multi-GPU setups and distributed rendering), with fully accurate, dynamic GI rendered on the fly. And if they don't do it, I'm sure mental images will with iray and RealityServer.

There were also some interesting presentations about the future of realtime graphics and alternative rendering pipelines, which all suggested a bigger focus on ray tracing and REYES and indicated that pure rasterization will become less important in the not so distant future.

On the game development front, Tim Sweeney (and of course Carmack with the SVO stuff) is exploring ray tracing/ray casting for his next-generation engine: http://news.cnet.com/8301-13512_3-10306215-23.html

According to Sweeney, next generation rendering pipelines could include a mix of ray tracing, REYES, voxel raycasting and other volume rendering techniques, all implemented in a GPGPU language: REYES for characters and other dynamic objects, voxels for the static environment and foliage, raytracing for reflection and refraction and maybe some kind of global illumination.
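Sweeney's "mix" amounts to picking a renderer per object class rather than one technique for the whole frame. The dispatch table below is purely my illustration of that idea, not Epic's design; the class names and renderer assignments just mirror the division in the paragraph above.

```python
# Hypothetical per-object-class renderer dispatch, mirroring Sweeney's
# proposed split (illustrative only -- not Epic's actual architecture).

RENDERER_FOR = {
    "character":   "reyes",          # micropolygon REYES for dynamic meshes
    "environment": "voxel_raycast",  # SVO raycasting for the static world
    "foliage":     "voxel_raycast",
    "glass":       "raytrace",       # reflection/refraction via ray tracing
}

def plan_frame(scene):
    """Group a frame's objects by the renderer that should draw them."""
    passes = {}
    for name, obj_class in scene:
        passes.setdefault(RENDERER_FOR[obj_class], []).append(name)
    return passes

scene = [("hero", "character"), ("terrain", "environment"),
         ("trees", "foliage"), ("window", "glass")]
plan = plan_frame(scene)
```

In a GPGPU implementation each entry in `plan` would become one compute dispatch, with ray tracing passes reading the results of the others for reflections.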

Friday, August 7, 2009

John Carmack talking about cloud computing...

... and getting super excited! Check it out for yourself:


This 10-minute video shows the last part of a three-part video interview. Carmack talks a little bit about the sparse voxel octree raycasting stuff that he's excited to do research into. Then he wanders off to cloud computing and his excitement goes visibly through the roof.

From http://www.eurogamer.net/articles/digitalfoundry-carmack-next-gen-blog-entry :

"The big question is, are we going to be able to do a ray-casting primitive for a lot of things?" he ponders. "Certainly we'll still be doing a lot of conventional stuff like animated characters and things like that very likely will be drawn not incredibly differently from how they're drawn now. Hopefully we'll be able to use some form of sparse voxel octree representation cast stuff for some of the things in the world that are gonna be rigid-bodied... maybe we'll have deformations on things like that. But that's a research project I'm excited to get back to in the relatively near future. We can prototype that stuff now on current hardware and if we're thinking that... this type of thing will be ten times faster on the hardware that ends up shipping, we'll be able to learn a lot from that."

However, while he predicts that the leaps in cutting edge console technology are set to continue (certainly there is no hint from him that Microsoft or Sony will follow a Wii-style strategy of simply adding minor or incremental upgrades to their existing hardware), we are swiftly reaching the point where platform holders will be unable to win their battles against the laws of physics.

"We talk about these absurd things like how many teraflops of processing and memory that are going into our game machines," Carmack says, speculating off-hand that the next gen consoles will have at least 2GB of internal RAM. "It's great and there's going to be at least another generation like that, although interestingly we are coasting towards some fundamental physical limits on things. We've already hit the megahertz wall and eventually there's going to be a power density wall from which you won't get more processing out there..."

That being the case, he speculates that the game-makers could move into different directions to provide new game experiences and at that point, the almost mythical cloud computing concept could make an impact.

"There'll be questions of whether we shift to a cloud computing infrastructure... lots of interesting questions about whether you have the computing power in your living room versus somewhere else," he says, noting that while latency is a fundamental issue, the sheer scope of storage available online opens up intriguing possibilities. "Certainly the easier aspect of that is 'net as storage' where it's all digital distribution and you could wind up doing an idTech 5-like thing... and blow it up to World of Warcraft size so you need a hundred petabytes of storage in your central game system. We can do that now! It's not an absurd thing to talk about. Games are already in the tens of millions of dollars in terms of budget size and that's probably going to continue to climb there. The idea of putting millions of dollars into higher-sized storage... it's not unreasonable to at least consider."

Sunday, August 2, 2009

OTOY at Siggraph 2009 on Aug 3

SIGGRAPH 2009 Panel Explores Eye-Definition Computing and the Future of Digital Actors:


Interview with David Perry of Gaikai on some of the technical details (scaling, latency) of his cloud gaming service:


I've mentioned the blog EnterTheSingularity in a previous post, and the author Jake Cannell keeps writing very interesting and elaborate blogposts on voxels and cloud gaming such as: