Friday, August 19, 2011

3 Years Later: Year One

A while back, someone on Twitter (I can't find the original conversation) asked me if I would consider writing up how I learned to program video games. I'm going to split this into several articles (for better or for worse) based around each year, and then one wrapping everything up with a big dump of the tools, articles, and books I've found incredibly helpful along the way. I thought splitting them up would be good because I'm only *a little* busy with moving back to Michigan and getting up to full steam on a project for IGF.

I think very few people will disagree that it's a lot harder to get into the games industry than it used to be, being that there are so many more people trying to break in these days. I've been happily interning for these past months at Iron Galaxy Studios, where I do programming work on commercial games, so I'd say that I at least got some important bits right. I think a lot of what I've done can manifest itself in a slightly different way for other prospective game developers, so hopefully these articles are helpful to them in some way.

Year One

When I started college, I knew I wanted to make video games for a living. Like most college freshmen, however, I didn't know how to make video games, but I did know that getting into the games industry was no cakewalk. One of my scholarships paid a stipend in exchange for working 10 hours a week under a professor, essentially a way for the university to get underclassmen involved in research without putting strain on a lab's budget. Due to my interest in game development, I joined the MSU Games for Entertainment and Learning (GEL) Lab to work under Professor Brian Winn, though I suppose it was not the most opportune time to be a GEL Lab professorial assistant.

There was very little going on in the lab that year other than a small game design conference that we helped organize called Meaningful Play. However, beyond helping prepare for the conference, there was very little concrete game development work to hand me. Besides the lack of projects, I didn't know a whole lot about game development. I can still remember not having a good answer, when I first started in the lab, about which part of developing games I actually liked doing. All I knew was that I liked programming in general from the few classes I had in high school, although game design still seemed like "the cool thing" at that point, and I thought that I would probably enjoy design more if I was given the chance (spoiler alert: programming is actually way cooler, but that discovery doesn't come until year two).

Contrary to what one might think, something very good came out of the lull of activity in the GEL Lab. My commitment to the lab was for two years, so Brian had me begin to teach myself how to use Unity with the hope that I'd be able to use it for future projects, being that the department had just adopted it into the curriculum as its 3D engine of choice. As a result, I had a conscious reason to teach myself game development, putting in at least 10 hours a week toward a small 3D project. I started with the standard tutorials, which are only really helpful for learning the menu flow. As with any first 3D game, the learning curve still felt steep even though I was using a fairly user-friendly engine like Unity. However, if you get a jolt of excitement from getting a cube to move back and forth across the screen for the first time in your life, then you know that game programming might actually be your thing. The project evolved into a small game that I presented at the end of my spring semester.

It was an action-adventure game about a manatee. It was terrible, and my code base was even worse, but to this day I still love it (and amazingly its poster presentation won an award). What's important here? I did everything, even the art, and I committed to spend at least a minimum amount of time on it each week. I learned so much, and I didn't have things like fears of letting team members down, because I was the whole team.

Speaking of teams, I did get involved with Spartasoft, the student game development club, which was another important step toward being able to program a half-decent game. The club served a few purposes at that point, such as hosting the occasional games party and getting alumni to come back and present to the club about their experiences in the games industry. However, the most important function for me was the 48-hour game jams hosted every few months. If you're not familiar with the concept of a game jam: everyone splits into small teams on a Friday evening, a theme is announced, and then each team makes a game about that theme over the course of 48 hours. It often results in a lot of terrible games, but inevitably something new is learned, new game ideas get explored, and a lot of friendships are built with game developers you might not otherwise get to know. I cleared my schedule for these as a freshman and participated in every single one.

It's how I got connected with a couple of seniors, and I ended up putting in more than just 10 hours a week on one project in the spring. I started meeting with them in between game jams to polish some of our better ideas. The fact that upperclassmen like Bert, a programmer who now works with me at Iron Galaxy, and Marie, an amazing artist who's now a grad student at SCAD, wanted to work with a freshman was amazing. I had gotten past the hump of being able to contribute to a game at all. I could help make their games better, and because I was more than ready to step up to the task, I ended up learning a lot from them in return.

Conclusion

So what can be learned from my first year making games? First, don't be afraid to go it alone, and force yourself to spend at least a minimum amount of time on your project each week. Second, game jams are great, especially if you don't know very many people to collaborate with. Between the manatee game and the game jams, I had worked on six different games by the time I finished my first year of college. How many games had I worked on for class? Zero, and Michigan State even has a game development curriculum! If you have the opportunity to work on *any* game when you're just starting out, even a game jam game, you'd better have a damn good reason if you pass on it. Failing any opportunities for collaboration, the only person keeping you from making your own game is yourself. Don't be the asshole that's keeping you from learning how to make video games. I got lucky that I didn't do that; it's easy to be lazy when you're an 18-year-old college freshman.

Friday, August 5, 2011

Backface Culling 101

This post is for artists, designers, and anyone else who has had backface culling shoddily explained to them. Perhaps I should have checked the Venn diagram of AltDev readership before writing about a rendering technique specifically with non-programmers in mind. I know I've gotten embarrassingly confused by forgetting what's important as recently as two months ago.

The "Simple" Answer is Misleading

Backface culling comes up all the time when people are first becoming acquainted with 3D game dev. All it takes is deleting half of a cylinder in Maya and dropping it inside an easy-to-use engine like Unity. Then they're suddenly trying to figure out what got screwed up that's causing the inside of it to be invisible. Inevitably the answer given is that those faces are backface culled to avoid rendering the inside of 3D models. That's true to some extent, but in my opinion very misleading.

The other part of the common answer is that backface culling works by only drawing the side with outward-facing normals. This part is not actually right, and it leads to a lot of misunderstandings. I think this answer comes up because people like artists and designers are used to thinking of meshes in 3D space; rarely does anyone think about the process that turns a mesh into a 2D image unless they get into rendering programming.

The Triangles Are Not Drawn Twice

First off, understand that you have a big mess of triangles that you are using to represent your piece of art. One of the simplest representations is a big long list of all the vertices, and then a big long list of triangles defined from those points. Your computer takes those triangles and transforms them into the 2D space that is displayed to you. Each triangle is defined that simply to expedite rendering, and it has no concept of "front" or "back" (well, it does, but I'm getting to that). The vertices may have normal information, but that's used for lighting, not culling. Consider a camera pointed at a half-cylinder that renders properly, shown in the following badly drawn diagram taken from above. The green lines are normals:

[Diagram: top-down view of the half-cylinder with its normals (green) pointing outward, toward the camera]

Now consider the *exact same* list of triangles with the only change being to flip the normals at each vertex. If you try this on your own, it is probably best done by modifying your vertex list in the rendering code, for reasons that will be explained shortly.

[Diagram: the same half-cylinder viewed from above, with the normals (green) flipped to point inward, away from the camera]

With backface culling turned on, you will still see the cylinder in each image, except the lighting will be flipped in the second one. If you turn off backface culling, the same images will be rendered, with the same number of triangles, because there are no backfacing triangles in this example. Don't think of the GPU as rendering the back side and then going over it a second time for the front side when backface culling is turned off; that would only happen if you actually specified two triangles in that big list with vertices in the exact same locations.
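If you want a concrete picture of that experiment, here's a rough sketch in C of flipping the normals directly in the vertex data while leaving the triangle list alone. The data layout (three floats per normal, packed into one array) is just an assumption for illustration, not how any particular engine stores things.

/* Negate every vertex normal in place; the triangle (index) list is
   untouched, so the winding order, and therefore the culling, does
   not change. Assumes three floats per normal, packed in one array. */
void flip_normals(float *normals, int vertex_count)
{
    for (int i = 0; i < vertex_count * 3; ++i)
        normals[i] = -normals[i];
}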

With the understanding that what I just described is not what typically happens, let me now explain the point that I'm making. When you flip the normals in a 3D modeling package, it does more than just change the vertex information: it also changes that triangle list. This is because the front and back face of a triangle are determined through the humble concept of winding order, which is quite simply that the front of a triangle is the side from which its vertices appear in counterclockwise order. Suppose you have a mesh with no normal data at all, just positions in 3D space for the vertices. You can still use backface culling just fine as long as your triangle list is specified in the proper order, a point that I think is often missed by designers and artists when trying to understand backface culling.
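To make the winding order idea concrete, here is a rough sketch of those two big lists in C. The quad and all of the numbers are made up for illustration; the important part is that the second index list differs from the first only in the order each triangle's vertices are listed.

/* Four corners of a quad facing the camera (made-up positions). */
float positions[] = {
    0.0f, 0.0f, 0.0f,   /* 0: bottom-left  */
    1.0f, 0.0f, 0.0f,   /* 1: bottom-right */
    1.0f, 1.0f, 0.0f,   /* 2: top-right    */
    0.0f, 1.0f, 0.0f    /* 3: top-left     */
};

/* Two triangles wound counterclockwise as seen by the camera:
   front-facing under the default convention. */
unsigned int quad_front[] = {
    0, 1, 2,
    0, 2, 3
};

/* Exactly the same vertices, but each triangle's indices listed in
   the opposite order: now they wind clockwise, so they are
   back-facing and get culled. This reordering is what a modeling
   package quietly does to the triangle list when you flip normals. */
unsigned int quad_back[] = {
    2, 1, 0,
    3, 2, 0
};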

If you want the proof in the pudding, here's the OpenGL call that tells the GPU which winding order counts as the front face (which is exactly what culling keys off of):

glFrontFace(GLenum mode);

Where mode is either GL_CW or GL_CCW, standing for clockwise and counterclockwise. Notice that this has nothing to do with vertex normals!
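For the curious, here's roughly how the whole thing is usually wired up in OpenGL. This is only a sketch that assumes a valid GL context already exists; GL_BACK and GL_CCW also happen to be the defaults, so you'll often see just the glEnable call in practice.

#include <GL/gl.h>  /* assumes an OpenGL context has already been created */

void enable_backface_culling(void)
{
    glEnable(GL_CULL_FACE);   /* culling is off until you enable it     */
    glCullFace(GL_BACK);      /* cull back faces (this is the default)  */
    glFrontFace(GL_CCW);      /* "front" means counterclockwise winding */
}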

And here's a diagram to illustrate which side is the front of a triangle by default (you could treat clockwise winding as the front if you really wanted to, but the default is counterclockwise). If the points defining your triangle are listed in an order that makes the diagram on the left true after they are transformed into screen space, then that triangle is front facing:

[Diagram: on the left, a triangle whose vertices run counterclockwise (front-facing); on the right, the same vertices running clockwise (back-facing)]

So what are we culling again?

We're culling triangles facing away from the camera, which are going to be obscured by the front-facing polygons on a closed mesh. The triangles that would be drawing the inside of that cylinder are just going to be covered up by the front-facing triangles that you would actually see. With the winding order in hand, the triangle's facing can be determined and checked against the camera. I made this little diagram (inspired by the great explanation of backface culling in Real-Time Rendering). The triangles on the back side of the full cylinder are facing away from the camera, so they are culled (indicated by the dotted part).

[Diagram: top-down view of a full cylinder; the triangles on the side facing away from the camera are drawn dotted to show they are culled]

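If you're curious how winding order turns into a yes/no answer, here's a sketch of the test once a triangle's vertices have been projected to 2D: the sign of the cross product (twice the signed area) tells you which way the triangle winds on screen. This is only an illustration of the idea, not any particular GPU's implementation, and it assumes y increases upward as in OpenGL window coordinates.

#include <stdbool.h>

/* Sketch of the facing test on a triangle already projected to 2D.
   A positive signed area means the vertices wind counterclockwise
   on screen, i.e. front-facing under the default convention. */
static bool is_front_facing(float ax, float ay,
                            float bx, float by,
                            float cx, float cy)
{
    float twice_signed_area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
    return twice_signed_area > 0.0f;
}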
So on a closed mesh like a full cylinder, you can avoid doing the work of rendering all of those triangles that you already know are going to be obscured. This is why disabling backface culling is typically not the correct answer when triangles you didn't intend to cull are getting culled, and it's also bad because that usually means the backfacing triangles are being shaded incorrectly with backwards normals.

If you do intentionally cut that cylinder in half, the easy fix is to add front-facing triangles along the inside of the mesh. There is also functionality these days to figure out which side of the triangle is being rasterized (SV_IsFrontFace in Direct3D 10, gl_FrontFacing in GLSL). Theoretically you could have the shader flip the normals based off of that information to still get correct lighting, but if you just read a post on the basics of backface culling, I bet that's not what you're looking to do.

Because 3D modeling programs automatically adjust the winding order for you based on which direction your normals face, it's easy to mistakenly think of backface-culled triangles as triangles with incorrect normal information, when really it's the winding order that determines it. This is why you can sometimes end up in weird situations while using a 3D modeling package. I know that as a freshman in college, there were many models at game jams with stray triangles that artists just couldn't get to show up, and if they did show up, the lighting got all weird. Perhaps thinking about what's actually happening can help alleviate those pains.