Wednesday, September 26, 2012

It Was Warm In Chicago Today

It's back to unseasonably warm weather in Chicago today, so I spent some time tonight skateboarding.

I've been skating a lot since graduating this past Spring, and unfortunately I'm not nearly as good as I used to be (by comparison, my increased guitar practice has paid off nicely). There's a last 20% of my tricks that I'm just missing. My coworker Nate always says that the last 20% is usually the hardest part of any task.

Honestly, it's fear. There's a commitment required when you set up a trick: there's a weightless moment where there's no going back, and then you come back down and snap the trick into the landing. I'm copping out, instinctively putting a foot on the ground in that moment. It's safe. It's lame. It ruins the trick.

That moment of insecurity is what doing tricks on a skateboard (or, more realistically for me, a snowboard, my preferred sport) is all about. The reason people enjoy progression is that the feeling is like a drug. It's the rush of flying through the air, or sliding across a 40 foot piece of metal, or spinning across the top of a box, and just enjoying the experience of doing it right. It's why fear is what ruins tricks: the fear that makes you fight that moment instead of committing.

Progression is natural in board sports because people just want to extend that moment: another rotation, a longer rail, a bigger jump. Competitiveness with other riders is only a small part of it.

In snowboarding there are fewer opportunities to bail without falling and feeling some pain; I think that's a blessing in disguise. The terrifying thing about programming is that it's just the opposite. There isn't always an easy route from A to B, but you can hack one together. Programming is open ended.

Sometimes you have to rely on a piece of paper and some matrix algebra to get you to the other side.

Sometimes hooking up a debugger won't be an option.

Sometimes you'll be up all night trying to figure out why the AI is jittering as it moves.

I think that's one reason there's a gap between general software engineering and game development that a lot of people have trouble crossing, especially when there's so much demand in the world for software engineers who only have to solve straightforward problems.

Don't put your foot on the ground. Do it right.

Monday, May 28, 2012

Summertime, Clustered Shading, and Blogging

It's summertime! Now is the time to be roasting hot dogs and marshmallows around a fire (just did that myself this evening back on my parents' farm)! I graduated with my Bachelor's degree a few weeks ago from Michigan State University, which actually feels substantially weirder than graduating from high school did. Perhaps a big part of it is moving away from working with some great people over the past 4 years, namely Brian Winn and the GEL Lab as well as the crew over at Adventure Club Games. Adventure Club may not have left East Lansing after graduating from the University a year ago, but that doesn't mean they haven't been doing a lot of great game dev projects (with more in the pipeline, to my understanding).

Speaking of East Lansing game dev people of interest, the awesome Dan Sosnowski has stepped up to the task of President of Spartasoft, the MSU game development club, and has recently started as my replacement as the Iron Galaxy Studios intern in Chicago for the summer. I'll be moving back to Chicago myself this coming Friday, where I'll be starting a full-time graphics engineering position at Iron Galaxy. Whoa. Growing up, I guess. I'm hoping to make it back to East Lansing for the Meaningful Play Conference that's coming up again this fall. If you are interested in games in a capacity beyond entertainment, such as art or education, consider attending! Or submitting a talk, a paper, or a game to the showcase!

On the note of conferences, I've been tossing around the possibility of trying to make it to SIGGRAPH this summer in Los Angeles, because I do enjoy nerding out over computer graphics, especially after reading the pre-print of the Clustered Shading paper from the HPG conference that has been going on recently. The idea is to take the tiled rendering used to optimize many deferred renderers and extend it. The "Forward+" renderer used in the AMD Leo Demo got a lot of buzz at GDC this year for combining tiling with forward rendering by utilizing DX11 features. The idea presented by Clustered Shading is to improve tiling by having it operate in 3D instead of just slicing the screen into 2D tiles to cull lights. This is really interesting to me because binning in 3D matches the way a designer or artist spaces out a large number of lights in a scene much better, whereas a tiled renderer will have hot spots pop up due to particular camera angles where many lights align. I expect more flavors of core rendering pipeline to pop up if next-gen consoles mostly support the more flexible possibilities of compute shaders on the minimum spec for multi-platform games. Hopefully I'll start playing around with more of my own DX11/CUDA type stuff this summer, as I'm planning to start putting together a new computer that runs DX 11.1 code to keep myself relevant going forward.
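To make the 3D binning idea a little more concrete, here's the kind of per-fragment cluster lookup a shader might end up doing. This is purely a sketch based on my reading of the pre-print; the parameter names and values below are ones I've made up for illustration, not anything from the paper's code.

// Rough sketch: map a fragment to a 3D cluster (2D screen tile + logarithmic depth slice).
// tileSize, numSlices, near and far are illustrative values, not canon.
int ClusterIndex(vec2 fragCoord, float viewDepth, ivec2 numTiles,
                 float tileSize, float numSlices, float near, float far)
{
    int tileX = int(fragCoord.x / tileSize);
    int tileY = int(fragCoord.y / tileSize);
    // Logarithmic depth slicing keeps the clusters roughly cube-shaped in view space
    float sliceF = log(viewDepth / near) / log(far / near) * numSlices;
    int slice = int(clamp(sliceF, 0.0, numSlices - 1.0));
    // The per-cluster light list would then be fetched from a buffer built each frame
    return (slice * numTiles.y + tileY) * numTiles.x + tileX;
}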

Obviously, I won't be pushing this meandering pile of thoughts up to AltDev, but I do hope to push some more technical articles out to both this blog and AltDev. I'm not sure I will actually rejoin the two-week schedule, though, because I don't know whether my current side projects and learning will fit well into a two-week turnaround or be all that interesting. I'm also going to try to do more personal-ish posts on here, because I've realized I've become a little more spread out across the nation/world from many friends and acquaintances, so maybe I should be making better use of this newfangled internet technology. Hopefully I didn't say anything too stupid in this post, because I don't entirely feel like going through the normal revision/proofreading process that helps me be a coherent writer.

Finally, I had an awesome experience hanging out with some manatees at the Columbus Zoo and my awesome girlfriend Kelsey in my free time between graduation and moving. AREN'T MANATEES THE COOLEST?


(You should go hang out with them too, and then adopt a manatee from Save the Manatee Club)

Wednesday, April 18, 2012

Skin Shading in Unity3d

I've been away from AltDev for a while, a bit longer than I originally expected, because after a period of crunch before the IGF submission in late October, I went back to the comforts of a normal night's sleep, something that working a normal game development job this past summer spoiled me into enjoying. Now the Christmas holiday has given me not only the time to finally begin blogging again, but the chance to do a little side project in something I greatly enjoy: the rendering of human skin, which was also an excuse to give the latest Unity3D 3.5 beta a whirl:



Motivation

This has entirely been a hobby project for me, because I do not foresee applying it to any project I'm involved with. However, I've realized that Unity is rapidly approaching the point where someone may want to use a more complicated skin shading model than just basic wrap lighting (an example of which is provided in the documentation). With this in mind, I thought it might be of interest to do a post pulling together some of the better resources I've found on the topic, as well as posting some of my more useful code, so that it might serve as a jumping off point for a more serious endeavor.
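(For context, that wrap-lighting baseline boils down to a one-liner along these lines. This is just a sketch; _Wrap is a made-up tuning uniform rather than the name used in Unity's example.)

// Wrap lighting sketch: push the diffuse terminator past 90 degrees so the falloff
// is softer than plain Lambert. _Wrap is in [0,1]; 0 gives standard N.L lighting.
uniform float _Wrap;

float ndotl = dot(normal, lightDir);
float wrapped = max(0.0, (ndotl + _Wrap) / (1.0 + _Wrap));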

The one issue that is always at the core of skin shading is Subsurface Scattering: light bounces around underneath the surface of the skin and re-exits elsewhere. Simply using a Lambert model causes very harsh edges, because the scattering is what gives skin its softer appearance. Here's what the built-in "Bumped Diffuse" shader looks like, the baseline I'm trying to improve on:



Background: Texture Space Diffusion

No discussion of skin rendering starts with anything other than a mention of the NVIDIA Human Head demo, which has a detailed description of its implementation in GPU Gems 3. The technique relies upon Texture Space Diffusion (TSD), where the lighting for the mesh is rendered into a texture and then blurred by various amounts (based off of the scattering of light inside human skin). The results of those blurs are combined together to form the actual diffuse lighting term. I actually played around with this technique in Unity around Christmas time last year, but it proved to be difficult given the nature of TSD and the fact that Unity did not at the time easily support some nice features like Linear Space lighting.

There have been some very useful resources on skin shading since the publication of GPU Gems 3. There are some excellent Siggraph slides from John Hable detailing what Naughty Dog did in Uncharted 2 to get cheaper calculations than the NVIDIA techniques. There is also a technique by Jorge Jimenez, detailed in GPU Pro, that does the subsurface scattering calculations in screen space, which removes the per-mesh cost of TSD (a serious limitation for its use in an actual application). This seems to have garnered some adoption in game engines, but I understand that it is still a reasonably expensive technique (note: I'm less knowledgeable about Screen Space Subsurface Scattering, so take that comment with a grain of salt).

With regards to tools for doing skin rendering on your own time, a good quality head scan from Lee Perry-Smith and Infinite Realities has entered the public domain. Also, Unity3d has become much friendlier for high quality rendering: there is now a public beta for Unity 3.5, which features Linear Space lighting and HDR rendering conveniently built into the engine.

The Latest Hotness: Pre-Integrated Skin Shading (PISS)

I've been impressed with the quality of the articles in GPU Pro 2 since I picked it up this past summer, one of which is Eric Penner's article detailing his technique "Pre-Integrated Skin Shading." He also gave a talk describing it at Siggraph, and the slides are available online. There are three main parts to it: scattering due to curvature, scattering on small details (i.e. the bump map), and scattering in shadow falloff. I've implemented the first two; no shadows yet for me, but I'll do a follow-up post if I get around to it.

The basic motivation is to pre-calculate the diffuse falloff into a look-up texture. The key to making the effect look good is that instead of a 1-dimensional texture indexed by NdotL, it is a 2D texture that encompasses different falloffs for different thicknesses of geometry. This allows the falloff at the nose to differ from that at the forehead.
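In the shader, using the lookup ends up being roughly the following. This is a sketch only; the curvature term is derived a little further down, and _LookupTex is my own name for the generated texture:

// Sketch: fetch the pre-integrated diffuse falloff.
// u: NdotL remapped from [-1,1] to [0,1]; v: the curvature term standing in for 1/d.
uniform sampler2D _LookupTex;

float ndotl = dot(normal, lightDir);
vec3 scatteredDiffuse = texture2D(_LookupTex, vec2(ndotl * 0.5 + 0.5, curvature)).rgb;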

I've adapted the sample code that precomputes the lookup texture into a little editor wizard for Unity. Simply make a folder titled "Editor" in Unity's assets and drop the script in, and the editor is extended (the function is added to the menu "GameObject/Generate Lookup Textures"). I've included the full script at the bottom of this blog post; it computes both the Beckmann texture used later for specular and a falloff texture that appears to be about the same as the one Penner shows in GPU Pro 2. Here's what my lookup texture looked like when all was said and done; it is sampled with 1/d in the y and NdotL in the x:



An important note: if you activate Linear Rendering in Unity 3.5, know that the texture importer has an option to sample a texture in Linear Space. GPU Gems 3 has an entire article about the importance of being linear. I noticed that after I turned linear rendering on in Unity, the lookup texture had the effect of making the head look like it was still in Gamma Space; I then realized I needed to flip on that option in the importer. In fact, you may notice that the lookup texture in GPU Pro 2 looks different from the one in Penner's Siggraph slides: the one in the slides is in linear space, while the one in the book is not, which is also the case with the one pictured above.

The parameter 1/d is approximated with a curvature calculation based on similar triangles and derivatives (there's a great illustration of this in the slides); I've included the snippet of GLSL from my shader below, followed by a shot of the curvature output on my model (easily tuned with a uniform parameter in the shader to offset the calculation):
// Calculate curvature: roughly |change in world-space normal| / |change in world-space position|
// across the pixel, i.e. how quickly the surface bends (localSurface2World[2] holds the world-space normal)
float curvature = clamp(length(fwidth(localSurface2World[2])), 0.0, 1.0) / (length(fwidth(position)) * _TuneCurvature);



This curvature calculation actually poses a serious problem for implementing the technique in Unity. This is because ddx and ddy (fwidth(x) is just abs(ddx(x)) + abs(ddy(x))) are not supported by the ARB profiles, which means Unity can't use the shader in OpenGL. Those of you familiar with Unity will know that Cg is the shading language of choice, and it is then compiled for each platform. It's a thorny issue that has stumped me in the past, especially because I do many of my side projects in OSX on my laptop. Tim Cooper (@stramit) came to my rescue with a solution that works decently: write an OpenGL-only shader directly in GLSL, which supports fwidth no problem. This is a *little* painful, especially because Unity seems to have dropped the GLSL version of AutoLight.cginc (my 3.4 build has AutoLight.glslinc, but unfortunately 3.5 beta 5 does not), which makes things like using the built-in light attenuation methods much more of a headache. In fact, the reason my sample images use a directional light is that I don't currently have point light falloffs matching Unity's *exactly*. That being said, it hasn't been problematic enough to cause me to abandon OpenGL support and move to a Windows computer. Furthermore, there's a nice wiki that served as a good jumping off point for doing GLSL in Unity.

(NOTE: Aras from Unity posted a suggestion down in the comments that might make my choice to use GLSL directly unnecessary. I will properly update the post after I try it out myself.)

Softening Finer Details

As I mentioned, I'm sharing my experience with trying out 2 of the 3 techniques detailed by Penner. The second part is the smoothing of small detailed bumps. While the pre-integrated texture accounts for scattering at a broader scale, the fine details contained in the normal map are still much too harsh. Here's what my model looks like with just the pre-integrated scattering:



"Too Crispy" is my favorite way of describing the problem. Penner addresses this by proposing blending between a smoother normal map and the high detail normal map. The high detail is still used for specular calculations, but for the diffuse, you pretend that red, green, and blue all come from separate normals. This is very similar to what Hable details as the technique used in normal gameplay in Uncharted 2 in his Siggraph 2010 slides. By treating them separately the red channel can be made to be softer, Penner advises using the profile data from GPU Gems 3 to try to match real life.

In order to avoid the additional memory of having 2 normal maps, Penner uses a second sampler for the normal map that is clamped to a lower mip resolution (as opposed to using the surface normal like Uncharted 2). However, Unity does not allow easy access to samplers to my knowledge, so the only way to set up a sampler like that is to actually duplicate the texture asset, which won't save any memory. So my solution was instead to apply a LOD bias to the texture lookup (an optional third parameter in tex2D/texture2D, in case you're not familiar). A rough sketch of the idea is below, followed by the same shot from above with the blended normals applied using a mip bias of 3.0:
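(In this sketch, _BumpMap, _DiffuseNormalBias, and the per-channel blend weights are placeholder names and values of my own, and I'm assuming an uncompressed RGB normal map.)

// Sharp normal for specular; a mip-biased (blurrier) fetch stands in for a smoother map.
uniform sampler2D _BumpMap;
uniform float _DiffuseNormalBias; // ~3.0
uniform float _BlendRed, _BlendGreen, _BlendBlue;

vec3 nSharp = normalize(texture2D(_BumpMap, uv).xyz * 2.0 - 1.0);
vec3 nSoft  = normalize(texture2D(_BumpMap, uv, _DiffuseNormalBias).xyz * 2.0 - 1.0);

// Red scatters the furthest, so it gets the softest normal; green and blue stay closer to the detail map.
vec3 nRed   = normalize(mix(nSharp, nSoft, _BlendRed));
vec3 nGreen = normalize(mix(nSharp, nSoft, _BlendGreen));
vec3 nBlue  = normalize(mix(nSharp, nSoft, _BlendBlue));

// Each channel's NdotL then feeds the pre-integrated lookup, keeping the matching channel from each fetch.
vec3 diffuseNdotL = vec3(dot(nRed, lightDir), dot(nGreen, lightDir), dot(nBlue, lightDir));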



Specular

While many articles on skin shading focus almost entirely on the scattering of diffuse light, the GPU Gems 3 article provides a good treatment of using a specular model that is better than Blinn-Phong. The authors chose the Kelemen/Szirmay-Kalos specular BRDF, which relies on a precomputed Beckmann lookup texture and takes Fresnel into account with the Schlick approximation. Since I had played around with that model a year ago when I was experimenting with TSD in Unity, I simply rolled that code into Penner's work. A sketch of the term is below, followed by the resulting specular:
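(This sketch is adapted from the GPU Gems 3 listing; _BeckmannTex is the texture baked by the editor script at the bottom of the post, and the other names are my own. The pow(2x, 10) undoes the 0.5 * pow(x, 0.1) encoding used when the texture was generated.)

// Kelemen/Szirmay-Kalos specular (sketch, after the GPU Gems 3 listing).
// m is roughness, rho_s scales the specular intensity.
uniform sampler2D _BeckmannTex;

float FresnelSchlick(vec3 H, vec3 V, float f0)
{
    float base = 1.0 - dot(V, H);
    float exponential = pow(base, 5.0);
    return exponential + f0 * (1.0 - exponential);
}

float KelemenSzirmayKalosSpecular(vec3 N, vec3 L, vec3 V, float m, float rho_s)
{
    float result = 0.0;
    float ndotl = dot(N, L);
    if (ndotl > 0.0)
    {
        vec3 h = L + V;         // unnormalized half-way vector
        vec3 H = normalize(h);
        float ndoth = dot(N, H);
        // Undo the 0.5 * pow(x, 0.1) encoding used when the texture was baked
        float PH = pow(2.0 * texture2D(_BeckmannTex, vec2(ndoth, m)).r, 10.0);
        float F = FresnelSchlick(H, V, 0.028); // 0.028 ~ reflectance of skin at normal incidence
        float frSpec = max(PH * F / dot(h, h), 0.0);
        result = ndotl * rho_s * frSpec;
    }
    return result;
}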



A shot before applying specular:



And finally the specular and diffuse combined together:



Concluding Thoughts / Future Work

This has been a fun little side project for me, and I think it turned out decently well. I've made and fixed a few stupid mistakes along the way, and I wonder if I'll find a few more yet; don't be afraid to point out anything terrible if you spot it. My two main goals left are tighter integration with Unity: proper point/spot calculations that match the Cg shaders in Unity, and shadow support. When it comes to shadows I'm at a bit of a crossroads between trying to hook into Unity's main shadow support vs. rolling some form of my own. I suspect rolling my own would be less useful in the effort to make the skin shader completely seamless with Unity, but it might be less of a headache to accomplish. Either way, if I do make any drastic improvements, I promise I'll do a follow up post :)

Finally, if you're interested in Dust, the IGF project that I mentioned as the root cause of my hiatus from AltDev, you can check it out on its website at www.adventureclubgames.com/dust/, or watch the trailer if clicking a link is too much effort:

[youtube http://www.youtube.com/watch?v=TJ5dLSh8PC0?rel=0&w=560&h=315]

Source Code

As I promised, here is the source for my Unity script that generates my lookup textures:
// GenerateLookupTexturesWizard.cs
// Place this script in Editor folder
// Generated textures are placed inside of the Editor folder as well
using UnityEditor;
using UnityEngine;

using System.IO;

class GenerateLookupTexturesWizard : ScriptableWizard {

    public int width = 512;
    public int height = 512;

    public bool generateBeckmann = true;
    public bool generateDiffuseScattering = true;

    [MenuItem ("GameObject/Generate Lookup Textures")]
    static void CreateWizard () {
        ScriptableWizard.DisplayWizard<GenerateLookupTexturesWizard>("PreIntegrate Lookup Textures", "Create");
    }

    float PHBeckmann(float ndoth, float m)
    {
        float alpha = Mathf.Acos(ndoth);
        float ta = Mathf.Tan(alpha);
        float val = 1f/(m*m*Mathf.Pow(ndoth,4f)) * Mathf.Exp(-(ta * ta) / (m * m));
        return val;
    }

    Vector3 IntegrateDiffuseScatteringOnRing(float cosTheta, float skinRadius)
    {
        // Angle from lighting direction
        float theta = Mathf.Acos(cosTheta);
        Vector3 totalWeights = Vector3.zero;
        Vector3 totalLight = Vector3.zero;

        float a = -(Mathf.PI/2.0f);

        const float inc = 0.05f;

        while (a <= (Mathf.PI/2.0f))
        {
            float sampleAngle = theta + a;
            float diffuse = Mathf.Clamp01( Mathf.Cos(sampleAngle) );

            // Distance
            float sampleDist = Mathf.Abs( 2.0f * skinRadius * Mathf.Sin(a * 0.5f) );

            // Profile Weight
            Vector3 weights = Scatter(sampleDist);

            totalWeights += weights;
            totalLight += diffuse * weights;
            a += inc;
        }

        Vector3 result = new Vector3(totalLight.x / totalWeights.x, totalLight.y / totalWeights.y, totalLight.z / totalWeights.z);
        return result;
    }

    float Gaussian (float v, float r)
    {
        return 1.0f / Mathf.Sqrt(2.0f * Mathf.PI * v) * Mathf.Exp(-(r * r) / (2 * v));
    }

    Vector3 Scatter (float r)
    {
        // Values from GPU Gems 3 "Advanced Skin Rendering"
        // Originally taken from real life samples
        return Gaussian(0.0064f * 1.414f, r) * new Vector3(0.233f, 0.455f, 0.649f)
             + Gaussian(0.0484f * 1.414f, r) * new Vector3(0.100f, 0.336f, 0.344f)
             + Gaussian(0.1870f * 1.414f, r) * new Vector3(0.118f, 0.198f, 0.000f)
             + Gaussian(0.5670f * 1.414f, r) * new Vector3(0.113f, 0.007f, 0.007f)
             + Gaussian(1.9900f * 1.414f, r) * new Vector3(0.358f, 0.004f, 0.00001f)
             + Gaussian(7.4100f * 1.414f, r) * new Vector3(0.078f, 0.00001f, 0.00001f);
    }

    void OnWizardCreate () {
        // Beckmann Texture for specular
        if (generateBeckmann)
        {
            Texture2D beckmann = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    float val = 0.5f * Mathf.Pow(PHBeckmann(i/(float) width, j/(float)height), 0.1f);
                    beckmann.SetPixel(i, j, new Color(val,val,val,val));
                }
            }
            beckmann.Apply();

            byte[] bytes = beckmann.EncodeToPNG();
            DestroyImmediate(beckmann);
            File.WriteAllBytes(Application.dataPath + "/Editor/BeckmannTexture.png", bytes);
        }

        // Diffuse Scattering
        if (generateDiffuseScattering)
        {
            Texture2D diffuseScattering = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    // Lookup by:
                    // x: NDotL
                    // y: 1 / r
                    float y = 2.0f * 1f / ((j + 1) / (float) height);
                    Vector3 val = IntegrateDiffuseScatteringOnRing(Mathf.Lerp(-1f, 1f, i/(float) width), y);
                    diffuseScattering.SetPixel(i, j, new Color(val.x,val.y,val.z,1f));
                }
            }
            diffuseScattering.Apply();

            byte[] bytes = diffuseScattering.EncodeToPNG();
            DestroyImmediate(diffuseScattering);
            File.WriteAllBytes(Application.dataPath + "/Editor/DiffuseScatteringOnRing.png", bytes);
        }
    }

    void OnWizardUpdate () {
        helpString = "Press Create to calculate texture. Saved to editor folder";
    }
}

I haven't included the actual shader because it's a little messy still and currently only supports the OpenGL path. The numerous resources that I've mentioned should guide you in the right direction though.

3 Years Later: Year Three

This post draws to a close my little series on how I went from not knowing the first thing about game development to being a programmer that eats rendering code for breakfast (as all good graphics programmers should). Here are links to my posts about Year 1 and Year 2.

Olympus

After finishing out my second year of college, I spent my summer working on what would become the longest running project I've ever been a part of. Olympus is a project from the MSU Games for Entertainment and Learning Lab, developed to study the effectiveness of exercise in motion-based games in the context of a game intended primarily for entertainment.

The design was that of an action adventure game set in Greek mythology. It not only required us to support toggling everything from dialog options to whether the game was played with a 360 Controller or a Wii-mote/Dance pad combination for various research groups, but it also demanded that we deal with the problems of maintaining a codebase over the course of a long-term project.

The lessons here were invaluable. There were only ever two main programmers on Olympus at a time, with Shawn Henry Adams and myself moving into those roles as the original two graduated. It very quickly became apparent to us that the code base was riddled with quick fixes, a lack of proper tools, and poor software engineering. In other words, it was made by students, and not bad students at that. The fact that students rarely have to revisit class projects in the future is something that I think should be fully appreciated when doing a student-based project.

I'm not sure when students are supposed to learn the dangers of ignoring good software design, but I certainly learned them from Olympus. It was also the most fun I ever had making a game for the GEL lab, working with my coworkers to get through the challenges of developing hours of gameplay and adapting existing, and often messy, systems to new designs. So as a student, if you have the chance to work on or take a class that involves a project that lasts at least a semester, don't let it pass you by.

Becoming Nocturnal

When school actually started up, I was enrolled in two classes that changed me forever. One was the portfolio class for the game development track at MSU, and I threw myself into that class like there was no tomorrow. There was not an all-nighter I wouldn't pull for the sake of my games, and looking back now I'm pretty sure I know why I had that mentality. The number one thing I've always heard about getting a job in game development is that a stellar portfolio is essential. Worthwhile? Yes. Although, as I lamented in my last post, a passing grade is not enough incentive to get people to create amazing games. For me, feeling like I'm the only one working on a project is a lot more troubling than a difficult technical challenge. I'll always help and vouch for the kids who throw themselves at their work, because with a little direction, they'll always do great things. But in a portfolio class, it was always frustrating to deal with kids who had already let their enthusiasm slide to the point of no longer pursuing game development seriously.

I also took Yiying Tong's graduate-level graphics programming class that fall, and it was perhaps both the greatest and hardest class I've ever taken. There's a different lesson to be learned here than working hard to make a good game: it's worth it to go way over your head in a subject you enjoy. While a lot of it was beyond my comprehension at the time, I've come to learn that not being afraid of material you don't fully understand can help you out in the long run. Maybe the first time or the second time it doesn't make sense, but eventually, it all comes around.

On My Way Out

The spring of junior year brought my first serious attempt at hunting for a job in the industry beyond Michigan State University's game development research projects. It also brought the beginning of my writing for AltDev, a stint writing tutorials for shaders in Unity, and my first semester as a TA for the introductory game design course. This was a bit of a tipping point for me, and it draws my story to a close.

I never had a good gauge for whether or not I was on track to make games professionally after college. When I seriously started teaching others how to make games, and people actually seemed to be improving as a consequence of my effort, I began to realize I was probably doing alright. My hunt for a game development internship landed me at the mighty Iron Galaxy Studios, and maybe someday after what I was working on gets announced I'll talk a little about my experiences as an intern there (spoiler: it was great!), but it certainly reaffirmed my suspicions that I had gone from someone wanting to learn game dev to someone that could competently contribute to a substantial game.

A final word on my efforts to help others learn about game development: it's very rewarding. If you find yourself in a position to help others out, seriously consider it, whether it's teaching a class, writing a blog post, or even giving a presentation at a local club or IGDA chapter meeting. I know the reason I enjoy spending so much of my time trying to give back to the game development program at MSU and the development community as a whole is that a younger version of myself would have loved to be on the receiving end of any of the information I've shared. Many people helped me along the way to understanding what I've learned and experienced; the least I can do is continue that effort now that I'm in their shoes as one of the more experienced individuals at MSU.