Sunday, June 19, 2011

What do we really want from our rendering?

Lately on Twitter and elsewhere, I've had (and read) a few discussions about the goals of all the time and energy poured into our pretty graphics, and I realized it might make an interesting blog post. I can still remember sitting in my first graphics programming class and hearing the professor explain that the goal is more often to simulate film and photography than real life. That's a pretty broad generalization, but I've found it to actually hold true more often than not.

Even after I decided to write about this, I spotted a fitting Twitter conversation between Emil Persson, John Carmack, and Stephen Hill on exactly this subject.

This is a dilemma that is genuinely difficult to resolve, and it raises questions about what other disciplines and groups have in mind. To me, the problem is that both approaches could be perfectly acceptable. In fact, I would go so far as to argue that, as a broad generalization, game players almost *want* to be put into a movie when they play a video game, often demanding increasingly cinematic experiences. On the other hand, I think a lot of gamers imagine the future as being "like you're actually in the game," which strays more toward the argument that we want to mimic the human eye over the camera lens. Maybe thinking about what players want isn't the right way to go about it; players want to have fun, and they don't always want to understand why. I have a feeling designers would care even less about this issue, caring more about effectively immersing the player in their levels and plots, though that's not entirely relevant here, because it depends more heavily on the actual content of the game. We don't care about content so much; we care about what the "eye" is.

The argument here is very much a discussion of whether rendering programmers are trying to simulate a film camera or a human eye, with a very significant part of that being the lens. If you're not familiar, some effects used in video games, such as depth of field and lens flares, are largely artifacts of capturing images with a camera instead of viewing the world directly with a human eye. Consider that these are some of the things gamers rave about every time a big title puts out a new trailer or demo, yet it seems a little crazy that these things we sometimes try very hard to reproduce are, at bottom, artifacts. I believe there are a couple of factors at play here, a big one being that filmmakers have employed the classic "it's not a bug, it's a feature!" mentality and have often used these "flaws" to great effect, which ties directly into what Stephen Hill is arguing in that Twitter conversation. Depth of field in particular jumps to mind as an effect useful for storytelling, and even things like the poor film quality simulated in Tarantino and Rodriguez's Grindhouse movies play a role in setting the mood; I'm sure such an effect could play a noticeable role in the right game. On the other hand, there's an amusing little comparison floating around between video games and photography that shows just how ridiculous this copying of camera flaws can get.

Yet one has to admit that HDR bloom tends to get a good "wow" factor from players, which leads to programmers running around on the internet sarcastically shouting for "MOAR BLOOM." After several experiences with team members boosting effects way too high (not to say I haven't done this myself; the key is to expect to adjust it back down), I've begun to understand why so many graphics programmers hold the physical basis of what they are creating to be of utmost importance.
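To make the tuning problem concrete, here's a minimal sketch of the usual bright-pass that feeds a bloom blur, written as plain C# standing in for shader code. The function and parameter names are my own, not from any particular engine; the point is just that `intensity` is exactly the kind of knob that gets cranked way up and then needs dialing back.

```csharp
// Minimal bright-pass sketch: anything brighter than `threshold` leaks
// into the bloom buffer, scaled by `intensity`. Doubling `intensity`
// is an easy, dramatic change; remembering to dial it back down later
// is the discipline.
static float BrightPass(float luminance, float threshold, float intensity)
{
    float excess = luminance - threshold;
    return excess > 0.0f ? excess * intensity : 0.0f;
}
```

That being said, is it the physical basis of a camera or of a human eye? I've read papers that discuss the physical basis of how an effect occurs in a camera, and I've read papers that discuss how an effect occurs in an eye, so really either could apply. However, that reminds me of a similar discussion...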

Part 2: Reality or Art?

A related, and perhaps more interesting, discussion is whether what looks "good" or what is actually "correct" is more important. When I started making games, I wasn't aware of how technical this issue could get, but I think it stems from more than a few core differences between artists and programmers. When it comes to visuals, everyone wants to have the "best," but just like the debate among rendering programmers over whether that means duplicating camera artifacts, there is the question of whether the best result is the one that is most realistic or the one that the artists think looks best.

In a simple world the answer is easy: it's whatever both parties agree on. However, as hardware capabilities grow, there are many more chances for rendering programmers to implement systems that do a much better job of modeling how these effects happen in real life. As our graphics improve, art content may start to look different in fundamental ways. Shifting our techniques to draw from a more accurate physical basis can allow us to create even more stunning visuals, but when I was reading through the course notes for Naty Hoffman's "Crafting Physically Motivated Shading Models for Game Development," I wondered if suddenly changing the way the specular highlight behaves might have a traumatizing effect on artists. I can easily recall several artists and designers I've worked with who would take the side that a physical basis is not nearly as important as achieving a "good look." It doesn't help that programmers are not artists, which I know can make a lot of artists doubt our credibility when an "improvement" suddenly makes their art look worse. And speaking as a programmer myself, it doesn't help that we can be pretty stubborn ourselves, especially when the math has been proven out and the change really does make for a more accurate simulation of real life.
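As a concrete (and hedged) example of the kind of change I mean, here's a sketch contrasting an ad-hoc Blinn-Phong specular term with a normalized, energy-conserving version. The (n+8)/(8π) factor is the common approximation from the physically based shading literature; the code itself is my illustration, not something lifted from Hoffman's notes.

```csharp
using System;

static class SpecularSketch
{
    // Traditional ad-hoc Blinn-Phong: as the exponent rises, the
    // highlight gets tighter *and* dimmer, so artists compensate by
    // hand-tuning a separate specular level per material.
    public static float AdHoc(float nDotH, float power)
    {
        return (float)Math.Pow(nDotH, power);
    }

    // Normalized Blinn-Phong: the (power + 8) / (8 * pi) factor keeps
    // the total reflected energy roughly constant, so tighter highlights
    // automatically get brighter. Dropping this into a game can "break"
    // every material that was tuned against the ad-hoc version above.
    public static float Normalized(float nDotH, float power)
    {
        return (power + 8.0f) / (8.0f * (float)Math.PI)
               * (float)Math.Pow(nDotH, power);
    }
}
```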

This issue is something I've become increasingly aware of through the excellent work of Naughty Dog's John Hable, from which I'd like to draw a couple of examples of where this debate can come into play. In Hable's 2010 GDC presentation about some of the rendering techniques used in Uncharted 2, he discussed the importance of doing lighting calculations in linear space. One consequence is that the diffuse falloff becomes much sharper when calculations are done in the correct space. Artists might initially be turned off by this, because soft falloffs are supposed to be a good thing, right? But this is really just a case of not knowing what we really want, and Hable points out the lack of soft falloffs in the film Avatar, which, as you may recall, was highly praised for its visual quality.

In the realm of skin rendering, something Hable is also involved in, it is often noted that as we account for subsurface scattering, we need to use detailed textures as input. Artists may have been inclined to blur details in the textures themselves to fake the softening that scattering produces, but once the scattering is accounted for in code, they no longer need to soften texture details to achieve the best results. If you're interested in checking out some of Hable's work for yourself, I highly recommend you visit his website www.filmicgames.com.
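Going back to the linear-space point for a second: if you haven't run into it before, a tiny sketch may help. The 2.2 exponent is the usual approximation of the sRGB curve, and the function names here are mine.

```csharp
using System;

static class LinearLightingSketch
{
    const float Gamma = 2.2f; // common approximation of the sRGB curve

    // Wrong: lighting math applied directly to gamma-encoded texels.
    // The display's decode applies an extra implicit pow() to N.L,
    // which is what produces the overly soft, smeared-out falloff.
    public static float ShadeInGammaSpace(float srgbTexel, float nDotL)
    {
        return srgbTexel * nDotL;
    }

    // Right: decode to linear, do the lighting there, re-encode for
    // display. The terminator between lit and unlit comes out
    // noticeably sharper, which is the look Hable points to in Avatar.
    public static float ShadeInLinearSpace(float srgbTexel, float nDotL)
    {
        float linear = (float)Math.Pow(srgbTexel, Gamma); // decode
        float lit = linear * nDotL;                       // light in linear space
        return (float)Math.Pow(lit, 1.0f / Gamma);        // re-encode
    }
}
```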

There will always be a great deal of approximation and fudge work involved in real-time rendering, but where the fudge work happens is a constantly moving target as graphics hardware continues to expand what can be properly simulated. As this target moves, it reminds me of something a graphic designer once told me about people who are truly pros at Photoshop: the ones who really know what they're doing make a dozen smaller tweaks that are hard to notice on their own, but put together, the end result is stunning. I think the same can be said about the direction rendering is going. Just remember that improvements are not always improvements in everyone's eyes, let alone easy to notice, and it doesn't help that we all make mistakes and sometimes our "improvements" are actually in the wrong. It's all fun and games until someone gets stabbed over chromatic aberration.

Death of a Project

This is actually the follow-up to my article on my adventures with off-screen particles in Unity, but since that story ended with me deciding to kill the feature, I've decided that the process of reaching that decision would make a much more interesting post. Knowing when to throw in the towel or radically change approaches is a good skill to have when developing any game, especially because this is the type of engineering endeavor where "seems right" and "ships on time" are so much more important than things like "is actually right" or "the bridge you're engineering doesn't collapse when the wind blows."

Context

An important thing to keep in mind when you're working on a feature is the context of why you're doing it. This includes everything from how much time you can allocate to it to how much it impacts the user experience in your final product. In my case, I was trying out a potential optimization to hopefully make an early prototype from a designer feasible; it wasn't feasible because his heavy use of particles to create a dust storm was causing performance problems from heavy overdraw. This points to another important thing to keep in mind when doing an optimization: whether what you're optimizing is actually a bottleneck in your game.

In my case, I knew the overdraw was becoming an issue because I: 1) reviewed possible reasons why our tons of particles could hurt our performance (to be fair, I was already a little familiar with the issue: knowledge is power!), 2) tested the effects of changing things like particle size, count, and screen resolution (a rough probe script for this is sketched below), and 3) attempted to kill the designer for not using Unity's nifty overdraw visualization in the first place (side note: attempting to kill a designer should only happen if you work in a strictly unprofessional environment, like a University lab, and will rarely have a direct impact on perf).
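For step 2, nothing fancy is required. Here's roughly the kind of throwaway Unity script I mean (the class name is mine, and this is a sketch rather than what I actually used): attach it to anything in the scene, then vary particle count, particle size, and screen resolution between runs. If frame time tracks resolution and particle screen coverage but barely moves with particle count, fill rate from overdraw is the likely bottleneck.

```csharp
using UnityEngine;

// Throwaway frame-time probe: logs the average frame time about once
// per second so you can compare runs with different particle counts,
// particle sizes, and screen resolutions.
public class FrameTimeProbe : MonoBehaviour
{
    float accumulated;
    int frames;

    void Update()
    {
        accumulated += Time.deltaTime;
        frames++;
        if (accumulated >= 1.0f) // report roughly once per second
        {
            Debug.Log(string.Format("avg frame time: {0:F2} ms",
                                    1000.0f * accumulated / frames));
            accumulated = 0.0f;
            frames = 0;
        }
    }
}
```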

After identifying the problem, the next thing to do was decide how I could address it. Because the particles had only a minor impact on gameplay, the team cut them and focused our efforts elsewhere to meet our deadlines. However, I knew that being able to ramp up the particle density would really enhance the look and feel of the game, so I kept the problem as a side project.

Ramping Up

The project was slow to get started, since it sat on the back burner for some time. I started to really make progress when we were preparing to ramp up work on the game once again, as the other projects that had been distracting team members for months began to wind down (a continual problem in a University setting, but that's a story for a different day). I knew the feature would be important if the proposed design changes happened, and that the work would be wasted if the designers went ahead and built new zones without the feature at their disposal early on. With this in mind, the project quickly rose to a much higher level of significance, and I spent a lot of time cracking away at it when I wasn't with the team.

Putting in that extra effort at the right time can really pay off, and I quickly began to see measurable results. The implementation I discussed last time has a substantial impact on a scene I brewed up that's mostly particles, at a density similar to what design wanted: better than a 10 ms improvement in frame time at a 1280x720 screen resolution when rendering particles to a buffer 2x smaller, and better still with a 4x smaller buffer. That clearly established two things: 1) the feature warranted further refinement, and 2) I was right about the bottleneck, since the perf gain proved I wasn't wasting time going down the wrong path. However, as I detailed in my first post, the technique produces artifacts that are noticeable in our particular game, and I intended to explore mixed resolution rendering as a solution to those issues.
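Before getting to that, for anyone who skipped the first post, the basic setup looks roughly like this (a simplified sketch, not my exact code; `compositeMat` and `_ParticleTex` are placeholder names, and RenderTextures require Unity Pro). If the 2x is per axis, the particle pass shades only 1/4 as many pixels, which is where the 10 ms comes from.

```csharp
using UnityEngine;

// Sketch of off-screen particles, attached to the main camera: a
// second, manually-driven camera draws only the particle layer into a
// smaller buffer, which is then composited over the full-res frame.
public class OffscreenParticles : MonoBehaviour
{
    public Camera particleCamera; // clone of the main camera; disabled,
                                  // culling mask set to the particle layer
    public Material compositeMat; // alpha-blends _ParticleTex over the frame
    public int downsample = 2;    // 2 = half resolution per axis, 4 = quarter

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        RenderTexture particles = RenderTexture.GetTemporary(
            src.width / downsample, src.height / downsample, 16);

        particleCamera.targetTexture = particles;
        particleCamera.Render(); // render particles at reduced resolution
        particleCamera.targetTexture = null;

        compositeMat.SetTexture("_ParticleTex", particles);
        Graphics.Blit(src, dest, compositeMat); // upsample and composite

        RenderTexture.ReleaseTemporary(particles);
    }
}
```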

Spotting the Wall

I knew that going to mixed resolution would require substantially more resources for rendering and compositing multiple passes, but it would also let me get away with a 4x decrease in our smaller buffer. So I started out with the intention of laying it out as simply and as efficiently as possible, and then tuning from there.
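If "mixed resolution" is new to you, the idea (in the spirit of the off-screen particles chapter in GPU Gems 3) is to composite the cheap low-resolution particles everywhere except along depth discontinuities, where the upsampled buffer bleeds across silhouettes, and to pay for full-resolution particles only there. Here's a CPU-side sketch of the per-pixel decision, with hypothetical names and single floats standing in for RGBA:

```csharp
// Per-pixel compositing decision for mixed-resolution particles.
// On the GPU this is typically an edge-detection pass plus stencil
// masking; here it's spelled out directly for clarity.
static float CompositePixel(
    float sceneColor,      // already-shaded opaque scene
    float lowResColor,     // particle color upsampled from the small buffer
    float lowResAlpha,
    float fullResColor,    // particle color re-rendered at full resolution
    float fullResAlpha,
    bool depthEdge)        // did neighboring low-res depth samples disagree?
{
    // Only silhouette pixels pay the full-resolution price.
    float color = depthEdge ? fullResColor : lowResColor;
    float alpha = depthEdge ? fullResAlpha : lowResAlpha;
    return color + sceneColor * (1.0f - alpha); // premultiplied-alpha blend
}
```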

After a little more work, I had it: mixed resolution off-screen particles. There was still room for quality improvements, but they definitely did a lot to fix the artifacts from before. I had also noticed something else: my off-screen particles now rendered with almost no perf improvement at all. I had lost all of my savings from the earlier work, which also meant I may have moved my bottleneck as well.

I had a couple of options at this point: 1) tediously work to increase perf and quality at the same time, 2) scrap mixed resolution and try to find another solution, or 3) kill it. The first option carries the weight of what would most likely be a lot of work on a project I only have a limited amount of time for, and it is especially difficult without source-level access to the engine. I did try a couple of quick and dirty ideas for the second option, but going back to the drawing board has its own costs.

So I Killed It

It's not easy to kill something, especially when early work shows promise, but it's an important thing to be able to do. I have other things to be poking at in my spare time, and a feature the game can live without is certainly not worth all of mine. I had already invested a reasonable amount of time in the feature, but that didn't make it worth further pursuit; I suspect further work would have been roughly equivalent to doubling the scope of the feature. Your time is always one of your most valuable resources, especially when code familiarity in itself makes you a valuable asset to your team. Spinning your wheels on a feature whose benefits don't justify the work isn't going to help anything, no matter what discipline you're in.