Friday, September 16, 2011

3 Years Later: Year Two

This is part two of my little series about what I've done in the past 3 years of my life to take myself from a kid that had never made a 3D game in his life to someone who lives and breathes game programming on several projects (commercial and otherwise). Here's a link to the first post in the series.

Getting Paid to make a Game

During the academic semesters of my first two years of college, I was paid by a scholarship to work for the Games for Entertainment and Learning Lab (an incentive to get first and second year students involved in research labs on campus). Partway through the summer leading up to my second year, though, I got pulled onto a project as a second programmer, and being paid outside the academic year out of the Lab's actual budget was exciting for me. Unfortunately, the project went anything but smoothly (and had been a mess well before I was brought on).

I've noticed that sometimes game devs talk about getting jaded or disillusioned after getting into game development professionally. If there was ever a project that did it for me, it was this one: a serious game about power plant management. It was continually misdirected by a client that didn't understand game design or development, and it got extended into the Fall semester as development continued to be a mess. Eventually this culminated in the client asking us to add back as many as we could of the features the Lab had tried to include to make the game more fun, finally realizing that the direction they had been steering us in was keeping the game from being fun for the target audience.

For as unfortunate as that was, something really important came from it: the knowledge that programming, not game design, was what I loved no matter how badly a project was going. Level design in particular came hard for me; it just couldn't hold my interest the way a programming challenge could. I think most people can find glamour in more than one discipline of game development, and I had thought that maybe game design would be as enjoyable as coding. Clearly I was wrong, and it proved to me that even a bad project can have a lot of value, giving me a better sense of direction for what I wanted to do moving forward.

Greener Pastures

While the power management game was wrapping up, two much more promising projects were just starting up. One was a competition hosted by Ford Credit between Michigan State University's and the University of Michigan's game development clubs (Ford is headquartered in Michigan, thus the local flavor of the competition). The goal was to make a serious/advergame to teach potential Ford customers about car financing. The target audience was people in their early 20s, so having college students make a Flash game seemed like a great way for Ford Credit to go about it, and it was probably a fun PR stunt at the same time.

While the content of the game might not sound amazing, Ford Credit was very hands off about the development, which was a breath of fresh air compared to the project I was coming off of. The real kicker though was the prize: all expenses covered for GDC with all-access passes. I threw myself into that project like there was no tomorrow, and it ended up being the first project I seriously crunched on as the team streaked towards our relatively aggressive deadline (I believe we had 3 or 4 months of development, and many of us had never done a Flash game before). It wasn't uncommon for me to be up at 2 am rolling in features and artwork that probably should have been cut for scope reasons, and even then I was fixing bugs right up to the deadline. And when I say "right up to," I mean that I did the submission build of the game on a laptop in the back of a van as we drove to present the game to the judges in Dearborn. You can check out the game here.

The result was that we won, and getting to go to sessions at GDC 2010 blew my mind. While I know many devs that have been in the industry go to GDC as much for the socializing as the sessions, as a student I'd say the talks are infinitely valuable, especially compared to the Career Pavilion, which I'd wager is what the majority of students attend GDC for. It was also then that I discovered just how tantalizing rendering and engine code could be, with John Hable's talk about HDR lighting in Uncharted 2 convincing me to drown myself in graphics programming in my spare time. This was a big jump for someone who thought they might still want to be a game designer less than a year earlier.

Lesson here? Student competitions are important, teaching more about deadlines and quality game development than any class could, because to be honest, class projects are often only a fraction of someone's grade. The project for Ford Credit had that extra mile of polish that can only come from really wanting to make the best game possible. Putting in that last stretch of polish and bug fixes before a class deadline lacks incentive, because it probably won't budge your grade unless the project is worth at least half of it. I'll be revisiting that theory of mine in my next article.

Enter: Olympus

I mentioned that there were two projects in the wake of the power plant management game. The second was a motion-controlled action adventure about Greek mythology. The game's purpose was to study the effectiveness of aggressive motion controls in an entertainment game (as opposed to a game like Wii Fit, where exercise is the consumer's intent).

I started as what I would probably call a "junior programmer," handling basic gameplay tasks while I was still heavily involved in the Ford project and my first class about game design and development. However, I would inherit responsibility for the player and motion control code when the original programmer graduated, continuing into the summer. I'll pick up that story in my next post about lessons from my third year of learning game dev.

Friday, August 19, 2011

3 Years Later: Year One

A while back, someone on Twitter (I can't find the original conversation) asked me if I would consider writing up how I learned to program video games. I'm going to split this into several articles (for better or for worse) based around each year, and then one wrapping everything up with a big dump of the tools, articles, and books I've found incredibly helpful along the way. I thought splitting them up would be good because I'm only *a little* busy with moving back to Michigan and getting up to full steam on a project for IGF.

I think very few people will disagree that it's a lot harder to get into the games industry than it used to be, given how many more people are interested in getting in these days. I've been happily interning these past months at Iron Galaxy Studios, where I do programming work on commercial games, so I'd say that I at least got some important bits right. A lot of what I've done can manifest itself in a slightly different way for other prospective game developers, so hopefully these posts are helpful to them in some way.

Year One

When I started college, I knew I wanted to make video games for a living. Like most college freshmen, though, I didn't know how to make video games; I just knew that getting into the games industry was no cakewalk. One of my scholarships paid a stipend in exchange for working 10 hours a week under a professor, essentially a way for the university to get underclassmen involved in research without putting strain on a lab's budget. Due to my interest in game development, I joined the MSU Games for Entertainment and Learning (GEL) Lab to work under Professor Brian Winn, but I suppose it was not the most opportune time to be a GEL Lab professorial assistant.

There was very little going on in the lab that year other than a small game design conference we helped organize called Meaningful Play, so beyond preparing for the conference there wasn't much concrete game development work to hand me. It didn't help that I didn't know a whole lot about game development yet. I can still remember not having a good answer about what part of making games I actually liked doing when I first started in the lab. All I knew was that I liked programming in general from the few classes I had in high school, although game design still seemed like "the cool thing" at that point, and I thought I would probably enjoy design more if given the chance (spoiler alert: programming is actually way cooler, but that wouldn't be discovered until year two).

Contrary to what one might think, something very good came out of the lull in activity at the GEL Lab. My commitment to the Lab was for two years, so Brian had me begin teaching myself Unity with the hope that I'd be able to use it on future projects, since the department had just adopted it into the curriculum as its 3D engine of choice. As a result I had a conscious reason to teach myself game development, putting in at least 10 hours a week toward a small 3D project. I started with the standard tutorials, which are only really helpful for learning the menu flow. As with any first 3D game, the learning curve still felt steep even though I was using a fairly user-friendly engine like Unity. However, if you get a jolt of excitement from getting a cube to move back and forth across the screen for the first time in your life, then you know that game programming might actually be your thing. The project evolved into a small game that I presented at the end of my Spring semester.

It was an action-adventure game about a manatee. It was terrible, and my code base was even worse, but to this day I still love it (and amazingly its poster presentation won an award). What's important here? I did everything, even the art, and I committed to spend at least a minimum amount of time on it each week. I learned so much, and I didn't have things like fears of letting team members down, because I was the whole team.

Speaking of teams, I did get involved with Spartasoft, the student game development club, which was another important step toward being able to program a half-decent game. The club served a few primary purposes at that point, such as hosting the occasional games party and getting alumni to come back and present to the club about their experiences in the games industry. The most important function for me, though, was the 48-hour game jams hosted every few months. If you're not familiar with the concept of a game jam: we'd split into small teams on a Friday evening, a theme would be announced, and then each team would make a game about that theme over the course of 48 hours. It often results in a lot of terrible games, but inevitably something new is learned, new game ideas get explored, and a lot of friendships are built with game developers you might not otherwise get to know. I cleared my schedule for these as a freshman and participated in every single one.

It's how I got connected with a couple of seniors, and I ended up putting in more than just 10 hours a week on one project in the Spring, meeting with them in between game jams to polish some of our better ideas. The fact that upperclassmen like Bert, a programmer who now works with me at Iron Galaxy, and Marie, an amazing artist who's now a grad student at SCAD, wanted to work with a freshman was amazing. I had gotten past the hump of being able to contribute to a game at all. I could help make their games better, and because I was more than ready to step up to the task, I ended up learning a lot from them in return.

Conclusion

So what can be learned from my first year making games? First, don't be afraid to go it alone, and force yourself to spend at least a minimum amount of time each week working on your project. Secondly, game jams are great, especially if you don't know very many people to collaborate with. Between the manatee game and the game jams, I had worked on six different games by the time I finished my first year of college. How many games had I worked on for class? Zero, and Michigan State even has a game development curriculum! If you have the opportunity to work on *any* game when you're just starting out, even a game jam game, you'd better have a damn good reason if you pass on it. Failing any opportunities for collaboration, the only person keeping you from making your own game is yourself. Don't be the asshole that's keeping you from learning how to make video games. I got lucky that I didn't do that; it's easy to be lazy when you're an 18-year-old college freshman.

Friday, August 5, 2011

Backface Culling 101

This post is for artists, designers, and anyone else who has had backface culling shoddily explained to them. Perhaps I should have checked the Venn diagram of AltDev readership before writing about a rendering technique specifically with non-programmers in mind. I know I've gotten embarrassingly confused by forgetting what's important as recently as two months ago.

The "Simple" Answer is Misleading

Backface culling comes up all the time when people are first becoming acquainted with 3D game dev. All it takes is deleting half of a cylinder in Maya and dropping it into an easy-to-use engine like Unity. Suddenly they're trying to figure out what got screwed up to make the inside of it invisible. Inevitably the answer given is that those faces are backface culled to avoid rendering the inside of 3D models. That's true to some extent, but in my opinion very misleading.

The other part of the common answer is that backface culling works by only drawing the side with outward-facing normals. That part is not actually right, and it leads to a lot of misunderstandings. I think this answer comes up because artists and designers are used to thinking of meshes in 3D space; rarely does anyone think about the process that turns a mesh into a 2D image unless they get into rendering programming.

The Triangles Are Not Drawn Twice

First off, understand that you have a big mess of triangles that you are using to represent your piece of art. One of the simplest representations is to have all the vertices in one big long list, and then a big long list of triangles defined by indexing into those points. Your computer takes those triangles and transforms them into the 2D space that is displayed to you. Each triangle is defined that simply to expedite rendering, and it has no concept of "front" or "back" (well, it does, but I'm getting to that). The vertices may have normal information, but that's used for lighting, not culling. Consider a camera pointed at a half-cylinder that renders properly in the first case, with the following badly drawn diagram taken from above. The green lines are normals:



Now consider the *exact same* list of triangles with the only change being to flip the normals at each vertex. If you try this on your own, it is probably best done by modifying your vertex list in the rendering code, for reasons that will be explained shortly.



With backface culling turned on, you will still see the cylinder in each image, except the lighting will be flipped in the second one. If you turn backface culling off, the same images will be rendered with the same number of triangles, because there are no backfacing triangles in this example. Don't think that with culling disabled the GPU renders the back side and then goes over it a second time for the front side; that would only happen if you actually specified two triangles in that big list with vertices in the exact same locations.

Assuming you understand that what I described is not what typically happens, let me now explain the point that I'm making. When you flip the normals in a 3D modeling package, it's doing more than just changing the vertex information; it's also changing that triangle list. This is because the front and back of a triangle are determined through the concept of winding order, which is quite simply that the front of a triangle is the side from which its vertices appear in counterclockwise order. Suppose you have a mesh with no normal data, just positions in 3D space for vertices. You can still use backface culling just fine as long as your triangle list is specified in the proper order, a point that I think is often missed by designers and artists when trying to understand backface culling.
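To make that concrete, here's a tiny sketch in plain C++ (with made-up vertex data, not any particular engine's format) of a mesh stored as a vertex list plus a triangle index list, and the screen-space test a rasterizer effectively performs: the sign of the projected triangle's area tells you its winding, and normals never enter into it.

#include <cstdio>

struct Vec2 { float x, y; };

// Signed area of a screen-space triangle: positive means the vertices
// appear counterclockwise (with +Y up), i.e. front facing by default.
// Only positions are involved; normals play no part in the facing test.
float SignedArea(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

int main()
{
    // A "mesh": a flat list of vertices plus a flat list of triangle indices.
    Vec2 verts[] = { {0.0f, 0.0f}, {1.0f, 0.0f}, {0.0f, 1.0f} };
    int  front[] = { 0, 1, 2 };   // counterclockwise order -> front facing
    int  back[]  = { 0, 2, 1 };   // same positions, reversed winding -> culled

    printf("first:  %s\n", SignedArea(verts[front[0]], verts[front[1]], verts[front[2]]) > 0.0f
                               ? "front facing" : "back facing (culled)");
    printf("second: %s\n", SignedArea(verts[back[0]], verts[back[1]], verts[back[2]]) > 0.0f
                               ? "front facing" : "back facing (culled)");
    return 0;
}

Reversing two indices is effectively what a modeling package does behind your back when you "flip normals," which is why the triangle list changes even though the positions don't.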

If you want the proof in the pudding, here's the OpenGL call that tells the GPU which winding order counts as the front face:

glFrontFace(GLenum mode);

Here, mode is either GL_CW or GL_CCW, standing for clockwise and counterclockwise. Notice that this has nothing to do with vertex normals!
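And for reference, here's roughly the full state setup in plain OpenGL (a minimal sketch; the wrapper function name is just for illustration, but glEnable, glCullFace, and glFrontFace are the real calls, and the values shown are OpenGL's defaults):

#include <GL/gl.h>

// Assumes a GL context is already current. "Front" is defined purely by
// screen-space winding order, never by vertex normals.
void SetupBackfaceCulling()
{
    glEnable(GL_CULL_FACE);   // face culling is disabled by default
    glCullFace(GL_BACK);      // throw away back faces (GL_BACK is the default)
    glFrontFace(GL_CCW);      // counterclockwise = front facing (the default)
}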

And here's a diagram to illustrate which side of a triangle is the front by default (you could treat clockwise as front facing if you really wanted to, but the default is counterclockwise). If the points defining your triangle are listed in an order that matches the diagram on the left after being transformed into screen space, then that triangle is front facing:



So what are we culling again?

We're culling triangles facing away from the camera, which are going to be occluded by the front-facing polygons on a closed mesh. The triangles that would be drawing the inside of that cylinder are just going to be covered up by the front-facing triangles that you would actually see. With the winding order in hand, the triangle's facing can be determined and checked against the camera. I made this little diagram (inspired by the great explanation of backface culling in Real-Time Rendering). The triangles on the back side of the full cylinder are facing away from the camera, so they are culled (indicated by the dotted part).



So on a closed mesh like a full cylinder, you can avoid doing the work of rendering all of those triangles that you already know are going to be obscured. This is why disabling backface culling is typically not the correct answer when triangles you didn't intend to cull are disappearing; it's also bad because it usually means those backfacing triangles will be shaded incorrectly with backwards normals.

If you do intentionally cut that cylinder in half, the easy fix is to add front facing triangles along the inside of the mesh. I believe there is functionality these days (DX10 maybe?) to figure out which side of the triangle is being rasterized. Theoretically you could have the shader flip the normals based off of that information to still have correct lighting, but if you just read a post on the basics of backface culling, I bet that's not what you're looking to do.

Because 3D modeling programs automatically adjust the winding order for you based on which direction your normals face, it's easy to mistakenly think of backface-culled triangles as triangles with incorrect normal information, when really it's the winding order that determines it. This is why you can sometimes end up in weird situations while using a 3D modeling package. As a freshman in college, I saw many models at game jams with those stray triangles that artists just couldn't get to show up, and if they did, the lighting got all weird. Perhaps thinking about what's actually happening can help alleviate those pains.

Monday, July 18, 2011

Obligatory FXAA Post

I did a quick search and I don't think anyone else has really talked about FXAA on AltDevBlogADay yet, but the time is long overdue. I'm a little late to the game trying out Timothy Lottes' post-process anti-aliasing technique, but I had been noticing people saying great things about it when I finally decided to give it a serious look over 4th of July weekend. I had already been thinking about writing up my experiences with it when I saw that Eric Haines had posted a glowing review of it over on realtimerendering.com. So if you don't trust the judgement of some intern who doesn't even have a college degree yet, please refer to all the more qualified people having experiences similar to mine.

The Problem at Hand

For those of you who might be reading this and haven't drunk the graphics programming Kool-Aid, I'm going to fill you in a little bit as to why we care about anti-aliasing.

In our line of work, the end result of a rendered scene is typically a two-dimensional array of colors that is displayed on the user's monitor. Just as audio (which is originally analog) is quantized when it's made digital, the image is broken into a discrete number of pixels during rasterization (i.e. filling in each triangle). How many pixels are used to display the image is of course known as the resolution, which is why things become increasingly blocky as you lower a game's resolution: the same screen area is covered by fewer dots per inch.

Even at high resolutions, individual pixels can still often be picked out by the viewer, because the lines and edges of triangles form distracting "jaggies". This is most noticeable along the outline of an object due to rasterization, but it can also occur on interior surfaces for other reasons, such as shadows and texture map resolution. The user typically picks the resolution their monitor is set at, but most console games provide 720p or 1080p images to the TV, and monitors tend to follow suit (my laptop is set to ~720p). Other than relying on hardware to pack more pixels into the same physical area by increasing the dots per inch (which is what Apple is claiming with its Retina displays, but check out this post for an interesting look at that), we have to find ways to smooth the transitions between pixels. Perhaps the simplest strategy for dealing with aliasing is supersampling: render into a texture twice the width and height of the target resolution and then downsample it to the actual output resolution. This lets you take the average of every 2x2 block of pixels, softening the boundaries by averaging the edge colors together. However, this also just straight up sucks for performance. You end up shading 4x as many fragments at the higher resolution, and you have 4x the memory usage for the buffer. This is way too high a cost to pay for some smooth edges.
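If it helps to see why the cost is 4x, here's a bare-bones sketch of that downsample step in C++ (hypothetical row-major buffers, not any engine's actual resolve): the supersampled image holds four source pixels for every output pixel, and all of them had to be shaded and stored.

#include <vector>

struct Color { float r, g, b; };

// Average each 2x2 block of a supersampled image (2x width, 2x height)
// down to one output pixel. Buffer layout here is a made-up row-major array.
void Downsample2x(const std::vector<Color>& src, int outWidth, int outHeight,
                  std::vector<Color>& dst)
{
    const int srcWidth = outWidth * 2;
    dst.resize(outWidth * outHeight);
    for (int y = 0; y < outHeight; ++y)
    {
        for (int x = 0; x < outWidth; ++x)
        {
            const Color& p0 = src[(y * 2)     * srcWidth + (x * 2)];
            const Color& p1 = src[(y * 2)     * srcWidth + (x * 2 + 1)];
            const Color& p2 = src[(y * 2 + 1) * srcWidth + (x * 2)];
            const Color& p3 = src[(y * 2 + 1) * srcWidth + (x * 2 + 1)];
            dst[y * outWidth + x] = Color{ (p0.r + p1.r + p2.r + p3.r) * 0.25f,
                                           (p0.g + p1.g + p2.g + p3.g) * 0.25f,
                                           (p0.b + p1.b + p2.b + p3.b) * 0.25f };
        }
    }
}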

The Hardware Option

There is a hardware-accelerated method, MSAA (Multi-Sample Anti-Aliasing). It works by computing additional samples per pixel when rendering the frame instead of rendering additional pixels and downsampling; the fragment shader is only run once for each group of samples. The problem is that it doesn't play nicely with the increasingly popular deferred rendering techniques, because you can't really take more samples of a buffer you've already rendered. In short, the damage is already done: the data has been discretized to a particular resolution. Furthermore, I've always found that MSAA is still pretty expensive (but then again, the only way to make your rendering take 0 ms is to quit doing rendering and switch to a job in finance).

New Maps of AA-land

The desire to use deferred rendering has pushed alternate forms of anti-aliasing into the spotlight (I mean, who doesn't need more acronyms, right?). Perhaps the most prevalent one you may have heard of is MLAA, but most of the techniques I'm referring to here are post-processing techniques that rely on detecting and softening edges. Doing anti-aliasing as a post-process works seamlessly with deferred rendering, and pretty much anything else for that matter. If the technique only needs access to the color buffer, it can even be applied to video or screenshots of existing games, which I think researchers in this area have found to be a great way of showing off their work.

MLAA was originally a CPU-based technique developed by Intel that has since been adapted to run on the GPU. This has been outlined in GPU Pro 2, Game Developer Magazine, and around the net, but if you're not familiar with it, I'll make a few brief points. It essentially boils down to detecting edges and storing them in a texture, with different colors indicating which side of the pixel each edge lies on; an additional buffer is then used to calculate the blending weights for the blurring.

Impressively, they found quality falling somewhere between 4x and 8x MSAA while being 11x faster than 8x MSAA. The memory footprint of the technique is 1.5x or 2x the size of the back buffer depending on the hardware (2x for the Xbox 360, for all you console devs who have probably already heard everything I'm saying). That's pretty good for something that solves the problems encountered with deferred rendering at the same time. This is undoubtedly why the technique has garnered so much attention, and I really recommend the article in GPU Pro 2 if you want a clear view of all the details.

Enter: FXAA

As you may know from reading other posts, my big side-project/hobby/thing is that when I give a new technique a try, I do it in Unity, because a) it's usually not straightforward there and requires actually understanding the technique to get it working, and b) it can be evaluated using the many projects I've already done in Unity. From looking over the details of MLAA, I knew it would probably take a full weekend to get right, and I had been procrastinating quite a bit about getting around to it.

When the third iteration of FXAA rolled out, I decided to take a look at what it entailed. I knew in the back of my mind that FXAA was a strictly luminosity-based technique, which is interesting to me. The authors of MLAA recommend using depth to determine edges for best results and performance. However, luminosity-based techniques offer the advantage/disadvantage of smoothing boundaries that exist in places other than depth discontinuities, such as aliasing on texture maps and shadows. The downside is that this can produce results that are too blurry in places you don't want blur, such as text on a prop in 3D space. I once tried a very, very simple luminosity-based AA filter that had too many cons (especially blurry text) for me to use it seriously. I was curious whether FXAA would give me similar problems.

I went in intending just to look over the code briefly, but I suddenly found myself staring at a very simple and easy-to-use code base offering a ton of well-explained preprocessor options for target platform and quality. It was around midnight when I started looking, and I quickly decided that porting the higher-quality PC version of the HLSL code to CG/Unity would be fun. There were two steps involved:

1) At the end of all the other post-processing, calculate luminosity and slam it into the alpha channel (super easy to do; see the sketch after this list).
2) Perform the FXAA pass. Porting mostly involved fixing texture look-up syntax.
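As a rough illustration of step 1 (shown as plain C++ for clarity, though in practice it's a one-liner at the end of your last post-process shader, and the exact luma formula a given FXAA version expects may differ, so check the header's comments):

struct Color { float r, g, b, a; };

// Write a luminance approximation into the alpha channel so the FXAA pass
// can read it. Rec. 601-style weights are one common choice for this.
Color WriteLumaToAlpha(Color c)
{
    c.a = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
    return c;
}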

No extra buffers involved. This cuts out the extra memory needed for MLAA, and the code seemed simple enough that it would probably be pretty fast. I ported all the code within 2 hours and then went to sleep. The next morning I finished setting it up for use in Unity... and was blown away by the speed and the results. Here's a breakdown of what I got running in the Unity editor at ~720p, using Dust, a previous project of mine, to test it. These are PNGs cropped at native resolution:

Shot 1: No Anti-aliasing



Shot 2: FXAA3



Shot 3: 6x MSAA



FXAA3 takes only ~1 ms [Note: corrected from an erroneous order-of-magnitude typo when I first posted this article] on my laptop (a MacBook Pro with an NVIDIA GeForce 320M) to complete, including the luminosity calculation, and as I mentioned, it needs no additional memory either. I'll pay that millisecond any day of the week for that quality of anti-aliasing. Furthermore, the blurriness on text was much more acceptable than with the fast blur I had tried previously. Note that this text is not really meant to be read, but rather recognized to match the voice over, so the player can understand that the voice is that of the journal's author. I wish I could credit the sources I put the fast blur together from, but it's been more than a few months since I implemented/ditched it. Here's a comparison:

Shot 1: No Anti-aliasing



Shot 2: FXAA3



Shot 3: Fast Blur



This is getting more toward the personal-opinion end of things, but even though FXAA3 does blur the text, I feel like the fast blur makes it almost uncomfortable to look at. I get the sensation that I'm being tested for a new glasses prescription and have to recognize out-of-focus letters. To me, that type of blurring is a pretty good indication that your luminosity-based AA technique isn't up to snuff. FXAA3, on the other hand, seems acceptable enough that it'll definitely be enabled in future builds of Dust that get pushed up onto my website.

What did we learn?

Hopefully we learned that FXAA is fast, requires no additional memory, and is trivially simple to implement or port. There are versions for the 360, PS3, and PC, as well as HLSL and GLSL variants. It's a luminosity-based solution and comes with the pros and cons those techniques have, but I've found FXAA to minimize the cons. It should literally take you at most a few hours to get it up and running in your game and evaluate whether it's a good fit for what you're doing. Furthermore, it is in the public domain (credit to the post by Eric Haines for this Twitter snippet):



Although at this point, given the amount of quality you're getting for the effort of implementation, it might as well be under the beer license. Finally, I would point out that Timothy Lottes is still improving the code, and he has already released version 3.9 in the time since I first touched it. You can find the latest source links and updates on his blog: http://timothylottes.blogspot.com/.

Sunday, June 19, 2011

What do we really want from our rendering?

As of late, on Twitter and elsewhere, I've had (and read) a few discussions about the goals of all the time and energy poured into our pretty graphics, and I realized it might make an interesting blog post. I can still remember sitting in my first graphics programming class and hearing the professor talk about how the goal is more often to simulate film and photography than real life. That's a pretty broad generalization, but I've found it to hold true more often than not.

Even after I decided to write about this, I spotted this part of a fitting conversation between Emil Persson, John Carmack, and Stephen Hill:

This is a dilemma that's difficult to answer, and it raises questions about what other disciplines and groups have in mind. To me, the problem is that both approaches could be perfectly acceptable. In fact, I would go so far as to argue that, as a broad generalization, game players almost *want* to be put into a movie when they play a video game, often demanding increasingly cinematic experiences. On the other hand, I think a lot of gamers imagine the future as being "like you're actually in the game," which strays more toward the argument that we want to mimic the human eye over the camera lens. Maybe thinking about what players want isn't the right way to go about it; players want to have fun, and they don't always want to understand why. I have a feeling designers would care even less about this issue, caring more about effectively immersing the player in their levels and plots, which is not entirely relevant here because that depends more heavily on the actual content of the game. We don't care about content so much; we care about what the "eye" is.

The argument here is very much a discussion of whether rendering programmers are trying to simulate a film camera or a human eye, with a very significant part of that being the lens. If you're not familiar, some effects used in video games, such as depth of field and lens flares, are largely due to the quirks of capturing images with a camera instead of directly with a human eye. Consider that these are some of the things gamers rave over every time a big title puts out a new trailer or demo, and yet it seems a little crazy that these things we sometimes try very hard to reproduce are actually artifacts. I believe there are a couple of factors at play here, a big one being that filmmakers have employed the classic "it's not a bug, it's a feature!" mentality and have often used these "flaws" to great effect, which ties directly into what Stephen Hill is arguing in that Twitter conversation. Depth of field in particular jumps to mind as an effect useful for storytelling, and even things like the poor film quality simulated in Tarantino and Rodriguez's Grindhouse movies play a role in setting the mood; I'm sure such an effect could play a noticeable role in the right game. On the other hand, there's this amusing little comparison between video games and photography that shows just how ridiculous this copying of camera flaws can get:

Yet one has to admit that HDR bloom tends to get a good "wow" factor from players, which leads to programmers running around on the internet sarcastically shouting for "MOAR BLOOM." After several experiences with team members boosting effects way too high (not that I haven't done this myself; the key is to expect to adjust it back down), I've begun to understand why so many graphics programmers hold the physical basis of what they are creating to be of utmost importance. That being said, is it the physical basis of a camera or of a human eye? I've read papers that discuss the physical basis of how an effect occurs in a camera, and I've read papers that discuss how an effect occurs in an eye, so really either could apply. However, that reminds me of a similar discussion...

Part 2: Reality or Art?

A related and perhaps more interesting discussion is whether what looks "good" or what is actually "correct" matters more. When I started making games, I wasn't familiar with how technical this issue could get, but I think it stems from more than a few core differences between artists and programmers. When it comes to visuals, everyone wants the "best," but just like the debate among rendering programmers over whether that means duplicating camera artifacts, there is the question of whether the best result is the most realistic one or the one the artists think looks best.

In a simple world, the answer is easy when both parties agree. However, as technology's capabilities grow, there are many more chances for rendering programmers to implement systems that do a much better job of modeling how these effects happen in real life. As our graphics improve, art content may start to look different in fundamental ways. Shifting our techniques to draw from a more accurate physical basis can allow us to create even more stunning visuals, but when I was reading through the course notes for Naty Hoffman's "Crafting Physically Motivated Shading Models for Game Development," I wondered if suddenly changing the way the specular highlight behaves might have a traumatizing effect on artists. I can easily recall several artists and designers I've worked with who would take the side that a physical basis is not nearly as important as achieving a "good look." It doesn't help that programmers are not artists, which I know can make a lot of artists doubt a programmer's credibility when they think an "improvement" makes their art look worse. And speaking as a programmer myself, it doesn't help that we can be pretty stubborn ourselves, especially when the math has proven that the change makes for a more accurate simulation of real life.

This issue is something I've become increasingly aware of through the excellent work of Naughty Dog's John Hable, from which I'd like to draw a couple of examples of where this debate can come into play. In Hable's 2010 GDC presentation about some of the rendering techniques used in Uncharted 2, he discussed the importance of doing calculations in linear space. One consequence is that the diffuse falloff becomes much sharper when calculations are done in the correct space. This is something artists might initially be turned off by, because soft falloffs are supposed to be a good thing, right? But this is really just a case of not knowing what we really want, and Hable points out the lack of soft falloffs in the film Avatar, which, as you may recall, was highly praised for its visual quality. In the realm of skin rendering, something Hable is also involved in, it is often noted that as we account for subsurface scattering, we need to use detailed textures as input. Artists may have been inclined to blur details in the textures themselves to account for the subsurface scattering, but once programmers account for the scattering in code, artists need to know that they no longer have to soften texture details to achieve the best results. If you're interested in checking out some of Hable's work for yourself, I highly recommend you visit his website, www.filmicgames.com.
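As a concrete (and heavily simplified) illustration of the linear-space point, not Hable's actual code: textures and displays are roughly sRGB-encoded, so lighting math done directly on those encoded values happens in a warped space. Decoding to linear first, lighting there, and re-encoding afterward is what produces the sharper falloff he describes (a 2.2 power curve is used below as an approximation of sRGB).

#include <cmath>

float SrgbToLinear(float c) { return std::pow(c, 2.2f); }
float LinearToSrgb(float c) { return std::pow(c, 1.0f / 2.2f); }

// Gamma-correct diffuse: decode, light in linear space, re-encode.
// The falloff near the terminator ends up noticeably sharper.
float DiffuseCorrect(float albedoSrgb, float nDotL)
{
    float lit = SrgbToLinear(albedoSrgb) * std::fmax(nDotL, 0.0f);
    return LinearToSrgb(lit);
}

// The old habit: lighting the encoded value directly, which gives the
// overly soft falloff many artists were used to seeing.
float DiffuseNaive(float albedoSrgb, float nDotL)
{
    return albedoSrgb * std::fmax(nDotL, 0.0f);
}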

There will always be a great deal of approximation and fudge work involved in real-time rendering, but where the fudge work happens is a constantly moving target as graphics hardware continues to expand the possibilities for what can be properly simulated. As this target moves, it creates an environment that reminds me of something a graphic designer once told me about people who are truly pro at using Photoshop: the people who really know what they're doing can make a dozen small tweaks that are hard to notice on their own, but put together, the end result is stunning. I think the same can be said about the direction rendering is going. Just remember that improvements are not always improvements in everyone's eyes, let alone easy to notice, and it doesn't help that we all make mistakes and sometimes our "improvements" are actually in the wrong. It's all fun and games until someone gets stabbed over chromatic aberration.

Death of a Project

This is actually the follow-up to my article on my adventures with off-screen particles in Unity, but since that effort ended with me deciding to kill the feature, I've decided that the process of getting there would make a much more interesting post. Knowing when to throw in the towel or radically change approaches is a good skill to have when developing any game, especially because this is the type of engineering endeavor where "seems right" and "ships on time" are so much more important than things like "is actually right" or "the bridge you're engineering doesn't collapse when the wind blows".

Context

An important thing to keep in mind when you're working on a feature is the context of why you're doing it. This includes everything from how much time you can allocate to it to how much it impacts the user experience in your final product. In my case, I was trying out a potential optimization to hopefully make an early prototype from a designer feasible; it wasn't feasible because his heavy use of particles to create a dust storm was causing performance issues from heavy overdraw. That points to another important thing to keep in mind when doing an optimization: whether or not what you're optimizing is actually a bottleneck in your game.

In my case, I knew the overdraw was becoming an issue because I: 1) reviewed possible reasons why our tons of particles could hurt our performance (to be fair, I was already a little familiar with the issue: knowledge is power!), 2) tested the effects of changing things like particle size, count, and screen resolution, and 3) attempted to kill the designer for not using Unity's nifty overdraw visualization in the first place (side note: attempting to kill a designer should only happen if you work in a strictly unprofessional environment, like a university lab, and will rarely have a direct impact on perf).

After identifying the problem, the next thing to do was decide how to address it. Because the particles had only a minor impact on gameplay, the team cut them and focused our efforts elsewhere for our deadlines at the time. However, I knew that being able to ramp up the particle density would really enhance the look and feel of the game, so I kept the problem as a side project.

Ramping Up

The project was slow to get started, having sat on the back burner for some time. I started to really make progress on it when we were transitioning to hopefully ramp up our work on the game once again, after many other projects that had been distracting team members for some months began to wind down (this is a continual problem in a university setting, but that's a story for a different day). I knew that the feature would be important if the proposed design changes happened, and that the work would be wasted if the designers went ahead and designed new zones without the feature at their disposal early on. With this in mind, the project quickly rose to a much higher level of significance, and I spent a lot of my time away from the team cracking away at it.

Putting in that extra effort at the right time can really pay off, and I quickly began to see measurable results and progress. The implementation I discussed last time does have a substantial impact on a scene I brewed up that's mostly particles, at a density similar to what design wanted: a greater-than-10 ms improvement at a 1280x720 screen resolution when rendering particles to a buffer 2x smaller, and even better with a 4x smaller buffer. That clearly identified two things: 1) the feature warranted further refinement, and 2) I was right about the bottleneck, as the increase in perf proved I was not wasting time going down the wrong path. However, in my first post I detailed artifacts that were noticeable in our particular game, and I intended to explore mixed resolution rendering as a solution to those issues.

Spotting the Wall

I knew that going to mixed resolution would require substantially more resources for rendering and compositing multiple passes, but it would also let me get away with a 4x reduction in the smaller buffer. So I started out with the intention of laying it out as simply and efficiently as possible, and then tuning from there.

After a little more work, I had it: mixed resolution off-screen particles. They had some room for quality improvements, but they definitely did a lot to fix the artifacts from before. I had also gained something else: my off-screen particles now rendered with almost no perf improvement over the regular path. I had lost all of the savings from my earlier work, which also meant I may have moved my bottleneck as well.

I had a couple of options at this point: 1) tediously work to increase perf and quality at the same time, 2) scrap mixed resolution and try to find another solution, or 3) kill it. The first option would most likely mean a lot of work on a project I only have a limited amount of time for, and it's especially difficult without source-level access. I did try a couple of quick and dirty ideas for the second option, but going back to the drawing board has its own costs associated with it.

So I Killed It

It's not easy to kill something, especially when early work shows promise. However, it's an important thing to be able to do. I have other things to be poking around at in my spare time, and a feature that the game can live without is certainly not worth it. I had invested a reasonable amount of time into the feature already, but that didn't mean it was worth further pursuit; I suspect further work would have roughly doubled the scope of the feature. Your time is always one of your most valuable resources, especially when code familiarity in itself makes you a valuable asset to your team. Getting stuck with your wheels spinning on a feature whose benefits aren't worth the work isn't going to help anything, no matter what discipline you're in.

Thursday, May 19, 2011

Game Programming Interviews and Tests: Entry Level Edition

I'm starting to realize that most of my blog posts begin with me making excuses about something as well as linking to somebody else's post or article. Spoiler alert: today will be no different. The excuse is that this post is not my follow-up to my last one about my initial adventures with off-screen particles, due to a lack of time to work on it while moving into a new place and also some disappointing results with mixed resolution rendering. So instead I'm talking about my experiences interviewing for programming intern positions throughout this past Spring. A while back, Jaymin did a great post about programming tests and demos, but I feel like I might have some insights for aspiring entry-level game developers.

The Hunt

I started my hunt for professional development experience by testing the water with a few intern applications during my second year of college. The first step to getting any job is to actually apply for it, and typically that happens online these days, especially for intern positions. This brings me to my first point: looking good on paper counts for something. No matter what, you have to look appealing enough for a company to think about investing the time to actually talk to you. This was evident because, even though I was completely unqualified at the time, I scored a programming test for the Insomniac core team. I'm sure the only reason that happened was that I didn't let my grades slip during my first two years of college (proving that even the most painful "general education" class in existence still counts for something), and I also had experience working on several game development projects around the University. In short, I looked good on paper. Having a well-thought-out resume and portfolio is crucial, and the evidence adds up both in responsiveness to online applications and in responsiveness from studio representatives at job fairs, such as the annual Career Pavilion at GDC.

Once you make it past that first hurdle, there are typically two things that happen next: you will almost always need to do a technical interview and a programming test, although the order they happen in can vary. The Insomniac test was proof that I wasn't qualified yet, but at least it showed me what I needed to do to get past that initial challenge of getting noticed in the ever-growing pool of fresh applicants looking for work in game development. So what exactly is in a programming test, and how should you prepare?

The Test

For an intern, most tests are emailed to you, and you typically have a specified period of time to send them back. I've had several tests structured as a small programming project, in the style of the typical work required to make a video game from the ground up. Anything that requires a substantial amount of work for a single problem is usually very generic in nature, so don't expect heavy AI or graphics problems if the position isn't specifically for that discipline. Some of the "small project" style questions I've had have included memory management, binary file I/O, basic 2D collision detection, and string parsing. Almost every test requires C/C++ to be used, so make sure you've had practice! Also, take Jaymin's advice about showing a smart answer instead of just a correct answer. Most problems have many solutions: is yours clean and optimized, and does it handle a good spread of use cases?
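To give a flavor of the level these "small project" questions operate at (this is my own illustrative example, not a question from any particular studio's test), a basic 2D axis-aligned bounding box overlap check is the kind of fundamental they want to see written cleanly, with the edge cases called out:

// Illustrative only, not an actual test question: 2D axis-aligned
// bounding box overlap.
struct AABB
{
    float minX, minY;
    float maxX, maxY;
};

// Two boxes overlap unless one lies entirely to one side of the other.
// Boxes that merely touch count as overlapping here; a good answer states
// whichever convention it picks.
bool Overlaps(const AABB& a, const AABB& b)
{
    if (a.maxX < b.minX || b.maxX < a.minX) return false;
    if (a.maxY < b.minY || b.maxY < a.minY) return false;
    return true;
}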

Some tests are limited to just a few hours, and I tend to find those to be more toward the stressful end. Perhaps the hardest test I ever took was a 17-page short-answer onslaught crammed into the span of 3 hours. It had required sections on bug finding and solving (things like what to do if the game crashes only in release, or in a soak test with no memory leaks), short-answer code writing (think bit twiddling, list reversing, basic assembly), and math problems (involving vectors, matrices, and trig), together forming the first 12 pages, followed by four optional sections on gameplay, graphics, core, and networking programming (the remaining 5 pages). For me, it highlighted just how different a test can be from the "small project" style.

Interviews

A technical interview tends to feel different in nature than a programming test, which surprised me the first time I had one. One thing you should know before any interview is the difference between a struct and a class in C++; it's a question that gets asked over and over again, partly to break the ice, and more than three quarters of my interviews have started with it. Be prepared to know about inheritance, virtual functions, dot products, cross products, const correctness, cache use, and game engine design. Also make sure you're prepared to talk about game projects you've worked on, especially those on your resume or portfolio. Typical questions about your work include what your individual contributions to group work were, what kinds of difficult problems you've overcome, and what is something (maybe just part of a project) you've worked on that you wish you could go back and redo.
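Since that struct-versus-class question comes up so often, the canonical answer fits in a few lines: in C++ the only differences are the default member access and the default inheritance access.

struct S { int x; };   // members default to public
class  C { int x; };   // members default to private

struct SD : S {};      // inheritance defaults to public
class  CD : C {};      // inheritance defaults to private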

Something telling about being a programmer is that no one has ever asked me if I have played games made by their company, whereas I know designers who have fielded tons of questions about redesigning games made by the company they were interviewing with. Don't sweat it if you haven't had a chance to pick up a company's latest game; spend your time refreshing yourself on the details of multiple inheritance and less commonly used C++ keywords instead. If anything, you might be asked what games you've been playing recently, just to make sure you play some games.

Rejection

Be prepared for things not to go smoothly with the first couple of jobs you interview for. I've botched tests and interviews alike, sometimes because of things completely out of my control, but the important thing was that I kept working to improve myself as a programmer afterwards. Have you read Jason Gregory's excellent Game Engine Architecture? Do you have projects where you can show how you specifically contributed? Can you get involved with a mod group or a student club or something to get experience working with others? Have you done projects completely on your own?

I've heard for a long time that game industry jobs require personal connections and luck, but to be honest you just have to be at a certain level of experience as a programmer and be able to prove it. I've always maintained that the key is to get a feel for what that level is, and then set your goals accordingly. Make sure you feel like you're continually getting closer to that end goal of landing a job (and in a work environment that suits you best!). If it seems hard, that's because it's certainly no cakewalk to get there in 3 or 4 years' time, especially since until you have a game development job, you usually have to spend time doing things like going to class or working a non-games job. I attended a talk at my university last Fall about interviews and tests for general software engineering jobs, and I left knowing that non-game-industry interviews are laughably easy in comparison, even for "top-tier" companies. It's not easy, so keep at it!

Saturday, May 7, 2011

Adventures with Off-Screen Particles

This is hopefully going to be a two-part post, because I didn't get to explore the topic as much as I wanted to while losing my mind during another eventful week of final exams in college. I considered postponing this and pulling something out of my ass for today... but then I saw fellow contributor Wheezie had already stolen that spot with his post "What Did I Do Today?" If you haven't already, you should check it out to at least read the amazing comic he put at the end of it, especially because half the time I don't think people who aren't designers understand what designers actually do. I mean, it's pretty clear that graphics programmers spend all their time inventing new acronyms for anti-aliasing techniques except for when they're tightening up the graphics on level 3, but designers? They're a whole different mystery. Wait, what was I actually going to be talking about in this post? Particles, that's right...

Particles

Particle systems are cheap, flexible, and easy to set up, especially if you're using Unity3D, as is often the case with the projects happening at my University. This is why on a lot of the projects I've been on (small student projects with tight deadlines), a designer often picks up the task to reduce the workload of other team members. That was the case with one such project I was working on in the fall. That project was Dust.


As you might guess from the name, there's a lot of dust in Dust. So much so that it takes place in a desert. Some of the designers on the project were tasked with helping to build the ambience for the game with some particle systems. An innocent enough task, but when it came time to integrate the work into our initial prototype, it was clear something was awry.

The problem with video game content creation is that people often have to learn the hard way how their work can impact performance. In this case, we wanted substantially more particles than anything other than a top-of-the-line computer (like the one we had been developing on) could accommodate. And to be fair, a game like Dust should have as many particles as we can manage without blowing perf. One of the biggest impacts on performance from large numbers of particles is overdraw, especially when the particles fill large sections of the screen:

[caption id="attachment_5183" align="aligncenter" width="484" caption="Massive overdraw from dust particles"][/caption]

This is a shot of the Unity editor's overdraw visualizer. The bright areas are spots where pixels are being drawn over and over again. This is particularly an issue with particles because a system contains many overlapping quads, so pixels get drawn over many times. Keep in mind that this shot is from the current version of Dust; the designer's systems were originally much, much heavier on overdraw.

Doing it Offscreen

A solution to this particular consequence of particle effects is presented in the fabulous GPU Gems 3 by Iain Cantlay. The technique boils down to reducing the number of pixels being rasterized by rendering the particles to a lower-resolution texture that is then composited back into the main image. The particle rendering is done after a depth buffer has been formed, so that particle pixels that fail the depth test can be discarded properly as you render them. This means the color can be applied directly back into the scene, which is especially easy if your particles are additive like ours were.
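In outline form, the frame ends up looking something like the sketch below. Every function here is a hypothetical placeholder rather than Unity's API or the GPU Gems code; the point is just the order of operations.

struct RenderTarget {};   // stand-ins for real GPU resources
struct DepthBuffer  {};

void RenderOpaqueScene(RenderTarget&, DepthBuffer&) {}
void DownsampleDepth(const DepthBuffer&, DepthBuffer&) {}
void RenderParticles(RenderTarget&, const DepthBuffer&) {}
void CompositeAdditive(RenderTarget&, const RenderTarget&) {}

void RenderFrame(RenderTarget& fullResColor, DepthBuffer& fullResDepth,
                 RenderTarget& lowResColor,  DepthBuffer& lowResDepth)
{
    // 1) Opaque scene at full resolution, producing color and depth.
    RenderOpaqueScene(fullResColor, fullResDepth);

    // 2) Downsample depth so the low-res particles still depth-test
    //    correctly against the opaque scene.
    DownsampleDepth(fullResDepth, lowResDepth);

    // 3) Rasterize the particles into the smaller target; this is where
    //    the overdraw savings come from.
    RenderParticles(lowResColor, lowResDepth);

    // 4) Upsample and composite back over the main image. With purely
    //    additive particles this is just an additive blend.
    CompositeAdditive(fullResColor, lowResColor);
}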

I started considering it as a possible solution to allow for thicker particles in future iterations of the project when I read through Shawn White's post on his implementation in Unity for Off Road Raptor Safari HD. It didn't take too long for me to put together my own implementation a week or so ago, but there are definitely several issues that come to light very quickly (many of which are discussed in the GPU Gems 3 chapter).

Problems / Solutions

[caption id="attachment_5189" align="aligncenter" width="450" caption="Zoomed in view of cracks from point sampled depth"][/caption]

One of the issues I immediately encountered was visible halos between solid objects (notably the boat the player controls in Dust) and the particles after compositing. One solution to this problem is to take the minimum or maximum depth of each block of pixels when downsampling the depth buffer for the low-res target, instead of point sampling it. As noted in the Gems article, this is really just a rule of thumb, but it did indeed fix the issues with cracks.
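A minimal sketch of that heuristic (hypothetical row-major depth buffer; "farthest" here assumes a conventional buffer where larger values are farther away, so a reversed-Z setup would take the minimum instead):

#include <algorithm>
#include <vector>

// Downsample depth by taking the farthest sample of each 2x2 block
// instead of point sampling one of them.
void DownsampleDepthFarthest(const std::vector<float>& src,
                             int outWidth, int outHeight,
                             std::vector<float>& dst)
{
    const int srcWidth = outWidth * 2;
    dst.resize(outWidth * outHeight);
    for (int y = 0; y < outHeight; ++y)
        for (int x = 0; x < outWidth; ++x)
        {
            float d0 = src[(y * 2)     * srcWidth + (x * 2)];
            float d1 = src[(y * 2)     * srcWidth + (x * 2 + 1)];
            float d2 = src[(y * 2 + 1) * srcWidth + (x * 2)];
            float d3 = src[(y * 2 + 1) * srcWidth + (x * 2 + 1)];
            dst[y * outWidth + x] = std::max({ d0, d1, d2, d3 });
        }
}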

[caption id="attachment_5190" align="aligncenter" width="450" caption="Zoomed in view of depth farthest heuristic"][/caption]

However, it became apparent that the much more serious issue is the aliasing that occurs because the buffer is at a lower resolution. It's very obvious along the edge of the sail in Dust. My goal was to get the particles rendering at quarter res, but right now only half res comes close to acceptable quality. One thing I could do is render the edges at full res in yet another pass, but the big question in my mind is whether two passes for particles would still result in a performance increase. We already have to create a depth buffer specifically for the off-screen particles; because we don't do deferred rendering, it isn't used for much else.

Next Time

Hopefully by the time my next post rolls around (and I'm done with finals), I'll have some more results (and perf numbers) and can give some insight into whether or not we decide to actually use it in the game. On the plus side, we recently modified Dust's design to be more segmented, so we can potentially enable the technique only in areas where the particles are prevalent enough to warrant it. Also, I'll try to set up some fresh test scenes that show the effects better, considering that upsampling the in-game shots went horribly.

Saturday, April 23, 2011

Give Blood

I started writing this post on my 21st birthday (this past Sunday), so it's going to be pretty frivolous and might not teach you anything, but hey, you've been warned. Because it's my birthday, I've been thinking about the successes and failures I've had growing into the game developer that I am today. A bit of a postmortem of going from a kid that loved video games and math to heading into my final year of college with more than a few games under my belt.

I decided I wanted to be a game developer when I was in 7th grade, which is now 8 years in the past (I'm a junior in college now). This decision was met with resistance from most people that I knew, but I was really serious about it. Most people thought I'd grow out of it and choose to do something that “made a difference”, especially those who knew that up until that point I'd wanted to put my love of math and science to use as a researcher at NASA.

Clearly, I didn't grow out of it. I suppose it didn't help that I was already very aware that it was “nearly impossible” to get into the games industry. With this in mind, I managed to get a bunch of scholarship money at a private school on the other side of town for one reason: computer science classes. I didn't make games in high school (and I'm glad I didn't – I needed to be a kid), but getting serious about programming years before college was one of the best things to happen to me. Exposure to code before college helps so much farther down the line (by the way, check out Brett Douville's awesome post about teaching his son about programming). But still, I knew that a great deal of work was ahead of me because I knew how hard it was to get a job.

Not that long ago, this video surfaced about game development:

http://www.youtube.com/watch?v=lGar7KC6Wiw

To put my thoughts at the time into perspective, I acted a bit like I had that video running on repeat in my head (even though that particular video didn't exist yet). I made the decision to attend Michigan State University largely for financial reasons, but also because of job opportunities with the game development lab I currently work for. Attending a large university was a bit worrisome to me, given that I didn't think a diverse program could compete with a specialized school like DigiPen or Full Sail. I felt like I had to work my ass off to make up for whatever I might be missing from class. I got involved with Spartasoft, MSU's game development club, and diligently attended every game jam that year. By the time the year closed, I had worked on no fewer than six games of varying sizes and was starting to get interested in writing shader code. To say the least, I was working hard, and perhaps overreacting a bit to my worries that I might be getting a “lesser” education.

Freshman year was also the year that I watched many of the seniors who mentored me fail to get jobs. The industry was starting to feel the pains of the recession and things were rough, and it scared the shit out of me. As hard as I was working that year, I pushed myself into overdrive the next. I was involved with a team in a game development competition that ended up winning a trip to GDC paid for by Ford Credit. As an underclassman on the team as well as the primary programmer, it was the first project I seriously crunched for, but I'm glad I did, because going to GDC that spring changed my perspective about everything.

Up until that year, I suffered from something that I suspect afflicts many young programmers: I thought I could be both programmer and designer. I mean, design gets all of the glory, right? Every kid wants to be the next Miyamoto or the next Ken Levine, and I still reveled in the thought. But that year I finally got enough experience to realize that design is really hard, and you might kill yourself if you try to be both a good programmer and a good designer. Still, I thought maybe gameplay code was the place for me, or a scripting-heavy design position. And then GDC hit me like a train hitting a chicken, blowing apart all my thoughts about game development as a career. The fact of the matter is that engine code is really cool, tool development is incredibly important, and hand-optimizing assembly code makes you a badass. I had started to become interested in graphics and rendering, and John Hable's presentation about HDR rendering in Uncharted 2 convinced me that I wanted to do graphics code professionally. If you get the chance, check out some of his presentations and his website; there's some great stuff there.

Going along with these newfound desires to work on lower level systems, I once again decided I wasn't working hard enough and hurled myself further into my work for the next year. I had the portfolio development class for MSU's game specialization that fall, and I began pulling tons of late nights and all-nighters for my games. I look back on college and realize that having a laptop and being able to take my work everywhere with me was both a blessing and a curse. I got to the point where I implemented a feature for a game while in the back seat of a car on a coffee run. I was out of control, wanting to learn and accomplish so much, so fast. So was it worth it? Now that you've read through several paragraphs of me admitting to crunching more and more throughout college, I'm going to finally get to my point.

For the first Spring in a long time, I don't feel like college is a time bomb with only so much time left. Three years of increasingly stepping up my dedication to learning the art of game development has finally led me to become the programmer I want to be. Maybe I worked too hard at times, but it's no small task to become a programmer cut out for game development in just four years' time. The way I viewed life, if you want to get a job in the games industry, you have to become a good game developer, and the only way that will happen is if you love making games and learning how to make even better games. I went back to GDC this year, and I'll be back again next year, because I love learning about all the crazy techniques people are developing and trying. People talk about how you have to have connections to make it as a game dev (what are we, film?), but I honestly think that's bullshit. I don't want your card because I want a job; I want your card because I want to be friends with people riding the edge of what games can do (speaking of which, props to the Battlefield 3 team). If you want to make games, throw yourself at it, because only you can make yourself a crack game developer.

When I'm walking to the lab and I feel like I'd rather go home and sleep through the afternoon, I listen to one song consistently. It's called “Ali vs. Frazier 1” by a Massachusetts hardcore band named Bane. I think it summarizes my views on what it takes to become successful at anything worthwhile, including game development:

(rumble, young man, rumble)
how many more days will you sit
and talk about your ambitions
all that you can be
the person you are dying to be
the place you want to get to
but always out of reach
before that fury swells inside of you
grows so big that it forever quiets you
stand up to your demons
make a run at your goliath
find the best, find the worst
waiting in both of you
it's not the who or the what that is lasting
but how you fight
that is the fight
the only mark that will not leave you
and I will feel my heart drum its final beat
if it meant that I have given this my all
there's nothing left for me to believe in
if not your, if not this...
what else is there but death?
(it's your call...it's all on you)
give more
give everything
give blood

Tuesday, April 5, 2011

Image Space for Beginners

I'm doing a relatively simple post today because I've had way too many milestones and sleepless nights in the past 7 days. This is why I'm going to talk about a very basic concept for graphics programming: doing image space calculations and effects. I realize that there are many seasoned game developers that frequent this blog, but this one is for the ones who are just getting started.

Tools and Motivation

As a developer of a real-time graphics technology (video games), you are almost certainly going to be making use of a GPU to accelerate your graphics processing. The GPU exploits the highly parallel nature of rendering to speed up its work, and is responsible for transforming your 3D world into a 2D image that is displayed to the player on their screen. However, not all effects are easy to simulate in 3-dimensional space. That's where leveraging further work on the GPU to perform additional image space calculations can come in handy. Some of these are quite obvious, such as depth of field, since depth of field is an artifact of lenses. Other popular image space effects include motion blur, color correction, and anti-aliasing. Image space calculations are also the foundation of deferred shading and lighting, rendering techniques that are becoming increasingly popular for handling a large number of lights in a scene.

The Actual Technique

Image space calculations can be performed on the CPU, but as with most things in graphics, that would be slow and booorrrriiinng (unless you're playing with SPUs, in which case carry on). The main point is that iterating over an image in the main thread while performing per-pixel calculations on the CPU would most likely be in bad taste. Instead you might consider the following GPU-based solution:

  1. Render your scene normally, except render it to a texture if you aren't already.
  2. Render a quad that covers the entire screen, with the texture from the previous step applied across the quad.
  3. Perform your calculations in the shader code used to render that full screen quad, modifying how the texture is applied.

There are several catches with this that you have to keep in mind. First and foremost, each fragment's calculations cannot be too dependent on other locations on the screen. If you have to sample your frame many times, those texture accesses will quickly add up, which is one of the big considerations that comes into play when performing screen space blurs. Secondly, while this might provide an effect much more cheaply than trying to model a similar effect in 3-dimensional space, keep in mind that the actual cost depends on the resolution of the screen: as resolution increases, so does the number of fragments being processed. In the end, it's all about picking when and where to perform different calculations in your game.

An Example

Here's a sample of a very simple post-processing fragment shader, written in Cg. It does a simple screen space distortion based on the x and y channels of a texture. The vertex shader doesn't do anything particularly special other than set up UVs that are interpolated from 0 to 1 across the quad. In general, most of the action happens in the fragment shader when doing post-processing.

uniform sampler2D _MainTex;        // the previously rendered scene
uniform sampler2D _DistortionMap;  // x and y channels hold the distortion directions
uniform float _Distortion;         // strength of the distortion

float4 frag (v2f i) : COLOR
{
    // Unpack the distortion vector from the [0,1] texture range to [-1,1]
    // and scale it by the distortion strength.
    float2 distortedOffset = (tex2D(_DistortionMap, i.uv).xy * 2 - 1);
    distortedOffset *= _Distortion;

    // Offset the UV and sample the scene at the distorted location.
    float2 distortedUV = i.uv + distortedOffset;
    return tex2D(_MainTex, distortedUV);
}

The shader itself is really only a few lines of code! So easy! Here's what is happening in a little more depth. The float2 “distortedOffset” is a lookup into the distortion map, unpacked from the [0,1] range returned by tex2D(sampler2D, float2) into the range [-1,1], and then multiplied by _Distortion to control the strength of the distortion. That offset is added to the interpolated UV coordinate, and finally a lookup into _MainTex, the previously rendered image, is performed at the resulting location. If there is no distortion, the call is equivalent to tex2D(_MainTex, i.uv), which would just copy the source image's color to the new target. Speaking of targets, you might consider rendering this post-processing pass into a texture as well, not just your initial rendering of the 3D scene. That way you can pump the output of this post-process into another one and stack effects on top of each other.

Here is a sample of this particular distortion shader in action.
The original rendered scene:

Before Post-Processing

Texture that the distortion is calculated from in the shader:

Distortion Map

The final result:

After Post-Processing

Conclusion

Great success! The important question here should always be: how hard/expensive would it be to achieve the same effect in a different space? What do you gain/lose by doing it in image space? And just as important is the question of whether or not this actually makes your game look any better. In the end, I personally think that post-processing is great fun, especially when used in terrible, crazy ways on personal projects. You never know what you'll come up with when you play around with ideas in a different space; for example, here's one paper exploring the possibility of moving skin rendering into screen space: http://jbit.net/~sparky/subsurf/cbf_skin.pdf. Finally, fun fact of the day: I modeled and textured that fish in the sample images way back when I was a freshman, which is why it's so shoddy and terrible.

Monday, March 21, 2011

Moving beyond the Linear Bezier

Every beginner game programmer becomes familiar with the need to move a value from point A to point B. It comes up all the time in game programming, especially as you sit there trying to figure out how to do something as simple as move a box across the screen. It is then that they learn about linear interpolation (abbreviated to "lerp") as a way to smoothly fade between two values, finding any point between them with a simple [0,1] t value. The formula is simple:

Lerp(t) = (1 - t) * StartPoint + t * EndPoint
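In code it's just as small; here's a minimal sketch (Cg's built-in lerp does exactly the same thing):

// Minimal sketch: linear interpolation between two points.
// Equivalent to Cg's built-in lerp(startPoint, endPoint, t).
float3 Lerp(float3 startPoint, float3 endPoint, float t)
{
    return (1 - t) * startPoint + t * endPoint;
}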

And before you know it, those beginner programmers are Lerping all over the place, using it to help overcome all sorts of challenges in game development, from moving an object in 3D space to fading between two audio clips. In reality, this is only accomplishing one thing: it is just a straight line segment, reworked into a form that makes it easy to sample any point on that line. Anyone that's taken high school level math knows that there is more to life than just straight lines. What if we need a curve?

So let's consider how we might go about making this curve. An approach that I've seen many beginner programmers, including myself, try is to get around the problem by stringing several line segments together as an approximation. This might be alright in some situations, but it fails in many. For one, it requires the placement of a lot of points and then storage of that data, which can be painful. Secondly, and perhaps more importantly, it will suffer from being jagged. Maybe at a certain distance we will have enough points that it will appear smooth enough, but when examined closely enough the lack of smoothness will always become apparent (unless the number of points approaches infinity).

Enter: Bezier Curves

Moving to a more complex equation can allow us to overcome these difficulties. We know we can create curved functions with polynomial equations, but does this actually help us? We can write these as parametric equations much like the linear interpolation equation shown above, but we don't want to write a new equation each time we want to generate a curve. We want to generate different curves between points, not just the same curve between them each time. Before, this was not an issue, as we only ever used a straight line. What we can do is introduce additional data points, beyond just the start and end points of the curve, that factor into our equation. This is where quadratic and cubic bezier curves come into play.

If you're new to bezier curves, you may find them a little intimidating due to the implication that there is some more complex math happening. I used to think the math was out of my league just because Adobe has released entire applications based around drawing with bezier curves. Little did I know that I already used the 1st-degree, linear bezier equation all the time. Here it is:

LinearBezier(t) = (1 - t) * StartPoint + t * EndPoint

As you can probably tell, the linear bezier is actually just the linear interpolation equation. Higher degree bezier curves introduce control points into their equations, and the math only gets a little bit worse; the similarities to our friend linear interpolation are still very visible.

These 2nd and 3rd degree bezier curves can be defined as:

QuadraticBezier(t) = (1 - t)^2 * StartPoint + 2 * (1 - t) * t * ControlPoint + t^2 * EndPoint

CubicBezier(t) = (1 - t)^3 * StartPoint + 3 * (1 - t)^2 * t * ControlPoint1
               + 3 * (1 - t) * t^2 * ControlPoint2 + t^3 * EndPoint

Let's look at what happened. Our computations are a bit more expensive, but we can now craft more intricate curves. These equations allow for the manipulation of continuously defined curves between the start point and end point by adjusting our control points. Take a look at a couple of shots showing examples of these:

A Quadratic Bezier Curve

A Cubic Bezier Curve

The interface for manipulating these curves is essentially the same as for the start and end points of a line, since a control point is just another point in the same vector space as our start and end points. Increasingly complex curves can be generated with higher degree bezier curves, but we don't actually want that, because the computation becomes increasingly expensive and managing more and more control points becomes cumbersome. This is where turning to piecewise equations becomes a good solution.
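Before moving on to splines, here's what evaluating these curves can look like in code: a minimal sketch of the two equations above, using float3 points, though the same math works in any dimension.

// Minimal sketch: direct evaluation of the quadratic and cubic bezier
// equations above. t ranges over [0,1], just like with the lerp.
float3 QuadraticBezier(float3 startPoint, float3 controlPoint, float3 endPoint, float t)
{
    float u = 1 - t;
    return u * u * startPoint
         + 2 * u * t * controlPoint
         + t * t * endPoint;
}

float3 CubicBezier(float3 startPoint, float3 controlPoint1,
                   float3 controlPoint2, float3 endPoint, float t)
{
    float u = 1 - t;
    return u * u * u * startPoint
         + 3 * u * u * t * controlPoint1
         + 3 * u * t * t * controlPoint2
         + t * t * t * endPoint;
}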

B-Splines

Quadratic and cubic bezier curves do offer a lot more flexibility, but they also offer something else: the ability to be chained together without losing continuity, forming a piecewise function for your curve. So instead of using higher degree equations to create more intricate curves, we can create a series of cubic bezier curves and restrict them a bit to meet continuity requirements. Just as a refresher, a function meets different levels of continuity depending on how many of its derivatives are continuously defined. The previously mentioned chain of straight lines only has C0 continuity, meaning that all points along the spline are defined; because it lacks C1 continuity, the first derivative is not continuously defined. The more derivatives that are continuously defined, the smoother our curve will be, but as a consequence we also get less and less direct control over the spline.

Now that we have moved to a chain of cubic bezier curves, we can easily obtain C1 continuity by requiring that the next curve's first control point be the reflection of the previous curve's second control point through their shared endpoint; in other words, NextControlPoint1 = 2 * SharedEndPoint - PreviousControlPoint2. This looks like this:

Piecewise Bezier with C1 Continuity

However, even with that restriction we can still end up with relatively sharp corners. If we require a smoother curve, we can convert our piecewise curve to a B-spline. This will allow us to achieve C2 continuity, meaning that the second derivative is also continuously defined. Our bezier control points are now driven by a new set of points known as deBoor points, and the cubic bezier segments generated from these deBoor points always have C2 continuity. Perhaps the biggest drawback is that the start and end points of a segment are no longer directly controllable either, but the spline is still relatively easy to work with through just the deBoor points. Here are the equations for generating the cubic bezier from the deBoor points; note that I name the variables based on examining a particular segment in the spline:

CubicControlPoint1 = (2 * deBoorStartPoint + deBoorEndPoint) / 3

CubicControlPoint2 = (2 * deBoorEndPoint + deBoorStartPoint) / 3

CubicStartPoint = (deBoorPreviousStartPoint + 4 * deBoorStartPoint + deBoorEndPoint) / 6

CubicEndPoint = (deBoorStartPoint + 4 * deBoorEndPoint + deBoorNextEndPoint) / 6

We can then build our cubic bezier with these values. These relationships are a lot easier to visualize in a picture:

A B-spline defined with deBoor Points

As you can probably see, these are a bit more mathematically involved than just a bezier curve. They are still relatively easy to manipulate through the deBoor points, and they have the added benefit of not requiring any additional data beyond that of a bezier curve, so any bezier code you currently have can be made to work as a B-spline. This is exactly what I did when I most recently used B-splines to obtain C2 continuity on a project: I took advantage of the fact that the deBoor points could control the cubic curves entirely tool-side, while my in-game bezier code functioned exactly the same.
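For reference, that tool-side conversion is tiny. Here's a minimal sketch that just follows the four formulas above (parameter names mirror the equations; this isn't my project's actual code):

// Minimal sketch: build one cubic bezier segment from four consecutive
// deBoor points, using the formulas above.
void BezierSegmentFromDeBoor(float3 deBoorPreviousStartPoint, float3 deBoorStartPoint,
                             float3 deBoorEndPoint, float3 deBoorNextEndPoint,
                             out float3 cubicStartPoint, out float3 cubicControlPoint1,
                             out float3 cubicControlPoint2, out float3 cubicEndPoint)
{
    cubicControlPoint1 = (2 * deBoorStartPoint + deBoorEndPoint) / 3;
    cubicControlPoint2 = (2 * deBoorEndPoint + deBoorStartPoint) / 3;
    cubicStartPoint = (deBoorPreviousStartPoint + 4 * deBoorStartPoint + deBoorEndPoint) / 6;
    cubicEndPoint = (deBoorStartPoint + 4 * deBoorEndPoint + deBoorNextEndPoint) / 6;
}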

Conclusion

As you can see, math is fun and can lead to better solutions to the engineering challenges in game development. Bezier curves naturally extend into many areas, and many useful applications have been cooked up for them. For a really great example of bezier curves at work, check out this article from GPU Gems 3 about rendering vector graphics on the GPU: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html. Keep in mind that all this awesomeness comes with the cautionary wisdom that moving to higher degree curves is not always worth the increased computational complexity. As always, make sure you evaluate what the right tool is to get the job done.