Thursday, February 13, 2014

Announcing: Republic Sniper

Sorry for being so quiet of late. We've been busy at MartianCraft.

In addition to merging with Empirical Development, we've also been studiously working on the first game set in the Turncoat Universe™.

It's called Republic Sniper™.

Today, we released a teaser trailer for the game. This trailer was done entirely in-house. Most of the production was done over just a two-week period by a team of four people, only one of whom was dedicated full-time. They did a hell of a job, and I need to give a special shout-out to Patrick Letourneau for not just carrying the ball across the goal line on the trailer, but running it the entire length of the damn field. He accomplished an amazing amount in a completely ridiculous amount of time, and we're both grateful and impressed.

We've also been sharing concept art, trivia, and WIP screenshots from the Republic Sniper Twitter account.

We haven't announced a ship date for Republic Sniper, but you can sign up for our mailing list on the website to be notified as soon as we have more information to share!

Our much larger combined company has a dedicated game team with multiple in-house and contract games currently under development and we're always happy to talk with people about new game (or non-game) projects.

Thursday, October 24, 2013

Turncoat Dev Diary: The Grind

This "every week" commitment has turned out to be harder than I expected. I've been pretty heads down working on game mechanics of late. I actually found time to write a couple of posts that I can't publish yet because we're still trying to sort out a problem with our Apple developer account. We've got our domain reserved, but we don't want to reveal the game's name or details until we've also reserved the app name. At present, we can't do that. Hopefully we'll get that resolved this week.

But work continues on the game. The gun you get at the start of the game has now been fully modeled and textured and we've generated a low-poly normal-mapped version for the game.

The yellow paint denotes that this is a training range rifle - it will only be on the gun when you're on the training range, though we may end up going with a different color or ditching the paint altogether. While it looks pretty nice in the renders, it tends to blow out in the actual game when you're near lights.

We've also got the initial model for the RAR-14 assault rifle:

This is the standard Republic assault rifle, the gun from which the sniper rifle above was derived. If you look at the receiver, you'll see that it's essentially the same gun with a different stock and barrel. The sniper variant also uses slightly different ammunition, so the clip is a little longer to account for the longer cartridges it uses.

You can see from the orange marks that we're playing around with different color markings, trying to figure out what will look good in-game.

Before too long, we should have the training range done. The first five levels — a tutorial plus four challenge levels — will take place on the training range. The range will also be used for a training mode that will let players test out weapons and weapon modifications. The level has been blocked out and the area behind the shooter has been partially detailed:

I've also made quite a bit of progress on developing the game mechanics. I've got a working prototype with bad guys. There's no real AI to speak of yet, just an algorithm that I like to call "lambs to the slaughter". The bad guys will avoid obstacles and each other, but otherwise, they just walk towards the shooter until they get shot and die.
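
In Unity terms, the "lambs to the slaughter" behavior described above could be sketched something like this. This is my own illustrative reconstruction, not the actual prototype code; all class and field names (`target`, `moveSpeed`, `avoidDistance`) are made up:

```csharp
using UnityEngine;

// Hypothetical sketch of a "lambs to the slaughter" mover: no real AI,
// just steering toward the shooter while sidestepping whatever is
// directly ahead (props, other bad guys).
public class LambToTheSlaughter : MonoBehaviour
{
    public Transform target;        // the shooter
    public float moveSpeed = 2f;    // meters per second
    public float avoidDistance = 1.5f;

    void Update()
    {
        Vector3 toTarget = (target.position - transform.position).normalized;

        // If something is directly in our path, steer sideways around it.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, toTarget, out hit, avoidDistance))
            toTarget = Vector3.Cross(Vector3.up, toTarget);

        transform.rotation = Quaternion.LookRotation(toTarget);
        transform.position += toTarget * moveSpeed * Time.deltaTime;
    }
}
```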

The basics are working, though. There's a basic hit point system with location-based damage adjustments, particle blood on impact, and death animations. It's not exactly a game yet, but it's actually starting to be kinda fun testing the builds.

I was pretty depressed earlier this week, though. The test level was working great on desktop builds, but trying to play on any mobile device except the iPhone 5S resulted in really bad framerates, and this was without much in the way of dramatic lighting or complex models. The test level I was using has a relatively tiny poly count, and the proxy bad guy models I was using do as well. The textures were reasonable in size and using hardware-supported compression, and I was using only simple shaders designed for mobile use. Yet, on an iPad Mini, the level would start at 15fps and quickly drop to 3 or 4. At 15fps it was playable, if not great, but when it dropped below that, it became unusable.

I reached out to some friends who have more experience with Unity for advice. With their help and the help of Instruments and Unity's excellent profiler, I found a couple of things that were just killing performance by forcing the physics engine to re-create a bunch of objects every frame. A few hours of poking around and asking questions and I had the prototype running at 30fps even on an iPad Mini and even with a dozen or more bad guys visible.

Once I got performance working well, I started playing around with Unity's light maps and light probes, which let you do fairly impressive lighting without the use of performance-killing rendered lights. I'm really happy with the results I've gotten so far and can't wait to see what we can do with these features on an actual artist-created level.

Overall, things are moving along well.

Saturday, October 12, 2013

Turncoat Dev Diary: Going Ballistic

We're still working on getting our ducks in a row administratively so we can actually announce the name and basic details of our first game, but as I've mentioned before, it is going to be centered around the use of guns. The Turncoat universe is set four hundred years in the future, so there will be fancy futuristic weapons available, but I wanted the first weapons you get access to in the game to be essentially more advanced variants of modern ballistic firearms.

You could argue that by the year 2400, cartridge-based combustion-propelled firearms will be horribly obsolete. Certainly many fictional futures have taken that route and opted for only ray guns, lasers, blasters, phasers, ion cannons, and other such options.

But, if you think about it, the sophisticated modern firearms of today are based on the exact same principles as weapons created in the fourteenth century. Using combustion to propel a small piece of metal very, very fast has proven to be a very effective way to harm living things. Firearms have been around for seven hundred years without becoming obsolete, so there's no reason why they wouldn't still be in use in some form four hundred years from now alongside whatever other new ways of killing get created.

When it comes down to it, however, it really doesn't matter how likely anything is in reality. From a gameplay perspective, we want to have many options for our players. We want them to be able to use different kinds of guns with different gameplay characteristics and to be able to upgrade those guns in numerous ways. Our goal is to add variety to the experience of playing the game, not accurately predict the future of weaponry.

As I started prototyping the gun mechanics in code, I found a lot of examples and tutorials scattered around the Internet about how to do guns in Unity. Most of those tutorials recommended a simple raycast (combined, of course, with sound and visual effects). You cast a ray out from the gun's barrel and see if it collides with something. If it does, you have a hit and the result of that hit happens immediately. And why not? As far as human senses are concerned, bullets from modern guns might as well be instantaneous except at the very longest range of the most powerful sniper rifles.
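
For reference, the hit-scan pattern those tutorials recommend looks roughly like this. This is an illustrative sketch in the usual Unity style, not code from any particular tutorial; the field names and the `TakeDamage` message are my own assumptions:

```csharp
using UnityEngine;

// The common hit-scan approach: cast a ray out from the muzzle and
// resolve the hit instantly -- no travel time, no drop, no wind.
public class HitscanGun : MonoBehaviour
{
    public Transform muzzle;    // empty GameObject at the barrel tip
    public float range = 1000f; // max distance, in meters
    public float damage = 25f;

    public void Fire()
    {
        RaycastHit hit;
        if (Physics.Raycast(muzzle.position, muzzle.forward, out hit, range))
        {
            // Whatever the ray touched first takes the damage immediately.
            hit.collider.SendMessage("TakeDamage", damage,
                SendMessageOptions.DontRequireReceiver);
        }
    }
}
```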

But that approach is no good for our game. One of the main benefits of the later advanced weapons in the game is that they don't suffer from some of the problems that ballistic firearms have. For example, when shooting at long range with a sniper rifle, you have to account for the trajectory of a bullet, the fact that gravity pulls the bullet downward as it moves forward, and the fact that other forces, like wind, can act on the bullet. Ray guns don't have that problem. They're simply point and shoot, so to speak, and raycasting makes perfect sense for those later, more advanced weapons. But raycasting doesn't take the realities of a physical world into account for a ballistic firearm.

I want the firearms in the game to "feel" real, and I want the bullets to behave the way a real bullet would. I'm all for cheating when it makes for a better experience and raycasting bullets is a great solution for many types of games, but the mechanics I've been working on really put the behavior of the guns front and center, so I really want the bullets to be part of the physics simulation.

I did find some tutorials and code examples that created bullets as rigidbody objects and applied force to them, which is the basic approach I wanted to use. There are some problems with this approach, however. First and foremost is simply that bullets travel very, very fast, and physics calculations only happen so many times a second. On mobile devices, those calculations tend to happen fewer times per second than on a desktop computer or console because there's simply less computing horsepower available. What can happen as a result is that bullets can pass right through objects they should have hit. In one frame, the bullet is on the near side of the target, and by the time the next physics frame rolls around, the bullet is on the other side of it, and no collision is detected.

For a desktop game, this is easy to rectify; you just crank up the physics frame rate (which is distinct from the display framerate in Unity) so that the calculations happen more often. For a mobile game, that's not an ideal solution. You have to use the available CPU (and GPU) power efficiently on mobile if you want the overall experience to be good. Fortunately, there's a good solution to this problem on the Unity Wiki: you have your projectile do a short raycast in any physics frame where it has traveled far enough to have skipped over a collision.
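
The Unity Wiki technique described above might be sketched like this (my own paraphrase of the idea, not the wiki script itself): each physics frame, raycast back over the distance the projectile just covered and catch any collider the discrete physics step jumped past.

```csharp
using UnityEngine;

// Anti-tunneling sketch for fast projectiles: between physics frames,
// check the path the bullet just traveled with a raycast, so it can't
// teleport through thin objects.
public class BulletRaycaster : MonoBehaviour
{
    private Vector3 previousPosition;

    void Start()
    {
        previousPosition = transform.position;
    }

    void FixedUpdate()
    {
        Vector3 movement = transform.position - previousPosition;
        float distance = movement.magnitude;

        RaycastHit hit;
        if (distance > 0f &&
            Physics.Raycast(previousPosition, movement / distance, out hit, distance))
        {
            // We skipped past something this frame; snap back to the
            // point of impact and let normal hit handling take over.
            transform.position = hit.point;
        }

        previousPosition = transform.position;
    }
}
```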

The bigger problem for me was trying to figure out just what values to use in the physics system. How much mass should the bullet have? How much force do we need to apply to that bullet?

The examples I found seem to have arrived at values by pure trial and error, and they all felt "off" to me. Many of the examples I found used the default mass value for the bullet, for example. In Unity, the default value of "1" is equivalent to 1 kilogram. If you've ever held a bullet, you know that it masses nowhere near a kilogram. Even the giant .50 caliber BMG round doesn't come close. You know what shoots bullets that weigh a kilogram? Battleships, not rifles.

Instead of taking the same trial-and-error approach to getting mass and force values that feel right, I decided I'd do a little research. There's a lot of science behind guns and a lot of people who are interested in guns, so I figured it couldn't be too hard to find real data on real bullets.

It ended up being even easier than I thought it would be. Wikipedia has gathered that data for pretty much every modern form of ammunition, including the exact mass of the bullet, the muzzle velocity, and the amount of energy used to propel the bullet to that velocity.

So, I gathered up that data for an assortment of assault, sniper, and high-powered hunting rifles in a spreadsheet. You can download that spreadsheet here, if you're interested.

Using the bullet's mass in Unity's physics system is easy enough. Just divide the grams by 1000 and that gives the value to use as the projectile's rigid body mass. But, how do we know how much force to apply to the bullet? Unity's documentation for the AddForce() method doesn't say what units it wants for input.

After digging around, I found that somebody had actually gone through the process of figuring out the answer to that while trying to counteract gravity for an object in their game. They determined that the AddForce() method uses 1/50th of a joule as its unit. Since we know how many joules of energy propel each of these modern bullets, we just multiply the number of joules by 50 and feed that value to the AddForce() method.
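
Putting those two conversions together, spawning a bullet from real-world cartridge data might look like the sketch below. It assumes the conversions in this post (Unity mass units are kilograms; AddForce takes 1/50th of a joule per unit), and the sample numbers are ballpark figures for a 5.56mm-class rifle round, not values from the actual game:

```csharp
using UnityEngine;

// Sketch: configure a rigidbody bullet from real-world ballistics data.
// bulletMassGrams and muzzleEnergyJoules would come from a cartridge
// data table (e.g. the Wikipedia-sourced spreadsheet mentioned above).
public class BulletSpawner : MonoBehaviour
{
    public Rigidbody bulletPrefab;
    public float bulletMassGrams = 4.0f;     // ~4 g bullet (illustrative)
    public float muzzleEnergyJoules = 1800f; // ~1,800 J (illustrative)

    public void Fire(Transform muzzle)
    {
        Rigidbody bullet = Instantiate(bulletPrefab, muzzle.position, muzzle.rotation);

        // Grams to kilograms, the unit Unity's physics system expects.
        bullet.mass = bulletMassGrams / 1000f;

        // Joules to AddForce units: multiply by 50.
        bullet.AddForce(muzzle.forward * (muzzleEnergyJoules * 50f));
    }
}
```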

Great! But modern guns also spin the bullet as it travels down the barrel. In fact, the name "rifle" comes from the grooves in the barrel that cause that spin. After experimenting a bit, I came to the conclusion that for purposes of the game physics, rifling really isn't needed. Rifling helps deal with real-world problems that just aren't present in the game's physics engine unless we add them.

But, I decided I still wanted the bullets to spin.

That may seem like an unnecessary bit of realism, but there's actually a reason for it. In some situations, like if you finish a level with a head shot, we're going to slow down time and follow the bullet to its target with the camera. It's a little clichéd, but it's still a cool effect when used sparingly. When we do it, though, I don't want people noticing that the bullet isn't spinning.

And they will.

In the real world, the twist rate of rifling is measured a couple of different ways, including revolutions per minute and the length required to complete one revolution inside the barrel. What it's not measured in is joules. And since this is just for show, I don't want to actually model the rifling into the gun's physics model, because that would be a lot of work and would force the physics engine to do an awful lot of calculations. Instead, I just want to spin the bullet right the moment it is spawned. Unity will let me do that in one line of code using the AddTorque() method. This method takes the same 1/50th of a joule input as AddForce().

But how much torque in joules should I add to the bullet's Z axis?

Honestly, I have no idea, and I really don't think it's worth spending a huge amount of time trying to figure it out since it doesn't actually affect the bullet's trajectory. I know it's a lot less force than is used to propel the bullet itself, so I'm going to start with a small number - 100 units (2 joules) - and see how it looks when we switch to the bullet cam. I'll then tweak the value if it doesn't look right. Sometimes trial and error is the right approach. Or maybe it's the lazy approach. Maybe it's both. Regardless, it's the approach I'm taking here.
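
The spin-at-spawn approach described above is literally one call. Here it is as a small sketch, using the 100-unit (2 joule) starting value from this post; the component name is mine:

```csharp
using UnityEngine;

// Cosmetic bullet spin: a single AddTorque call at spawn around the
// bullet's forward (Z) axis. Purely for the slow-motion bullet cam;
// it doesn't affect the trajectory.
public class BulletSpin : MonoBehaviour
{
    // 100 units = 2 joules in the 1/50th-joule units AddTorque shares
    // with AddForce. Tweak until the bullet cam looks right.
    public float spinTorque = 100f;

    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        // transform.forward is the local Z axis, i.e. the direction of travel.
        body.AddTorque(transform.forward * spinTorque);
    }
}
```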

I threw together a quick shooting gallery to test my real-world-based gun physics. Yes, it's an ugly shooting gallery. This is what you get when a developer throws something together quickly instead of asking his artists to make it for him.  Despite the ugliness, I'm actually pretty darn happy with the results. Here's what it looks like shooting a gun based on values taken from the .300 Winchester ammunition:

There's still a lot of work to be done on my gun class. I need to get recoil in there, for example, as well as muzzle flash. But, I've got most of the basics down for building a variety of weapons by simply configuring parameters in Unity's inspector. Change the weight and force and a handful of other parameters and you get a gun that behaves and feels very differently. Change the 3D model as well, and you basically have a new, different gun.

On a related note, my early prototypes used the gyroscope for aiming on mobile devices. It had a really natural feel that I loved, but it proved problematic when you zoomed in very far. The tiniest movements from holding the device in your hands would translate into very noticeable, unwanted movement. That movement felt like genuine hand shake and scope drift, but beyond about 4x magnification, the game became basically unplayable. I spent some time trying to add stabilization and smoothing to the gyroscope input, but was never happy with the result or the amount of control we had over it.

After a while I admitted defeat and ended up ripping out the gyroscope code and replacing it with code that used the accelerometer for aiming. I then added scope drift back in algorithmically and created a parameter for it. That means we can easily change how steady a gun is when used. A rifle with a bipod, for example, will have almost no drift, while a large gun used while standing will have a fair bit more.
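
Accelerometer aiming with a parameterized drift, as described above, could be sketched like this. To be clear, everything here is my own guess at an implementation, including the Perlin-noise drift model; the post doesn't say how the shipped code works:

```csharp
using UnityEngine;

// Sketch: tilt-based aiming plus algorithmic scope drift. driftAmount
// is the per-gun parameter -- near zero for a bipod-mounted rifle,
// higher for a large gun fired standing.
public class AccelerometerAim : MonoBehaviour
{
    public float sensitivity = 90f;  // degrees of aim per g of tilt
    public float driftAmount = 0.5f; // degrees of wander

    void Update()
    {
        Vector3 tilt = Input.acceleration;

        // Smooth, wandering drift from Perlin noise, centered on zero.
        float driftX = (Mathf.PerlinNoise(Time.time * 0.5f, 0f) - 0.5f) * driftAmount;
        float driftY = (Mathf.PerlinNoise(0f, Time.time * 0.5f) - 0.5f) * driftAmount;

        transform.localEulerAngles = new Vector3(
            -tilt.y * sensitivity + driftY,
             tilt.x * sensitivity + driftX,
             0f);
    }
}
```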

Thursday, October 3, 2013

Turncoat Dev Diary: Concept and Mechanics

As I mentioned in my last post, we now have dedicated game artists working on Turncoat, and we've mostly decided on the mechanics and basic structure of our first game. I'm not quite ready to announce the game's name or share much detail about it until we've finalized that decision and taken some administrative steps such as reserving the app and domain names.

For the first game, we've decided not to make it story-driven. This was a tough call, because our eventual goal is to create large, story-heavy cinematic games. However, we also need to run this as a business. Long cutscenes and story-driven plot would greatly increase the budget and timeline of this first game and probably wouldn't greatly increase our sales.

So, what we're doing is a game that's smaller in scope but set in the same universe, taking place in and around the main storyline. Developing a more casual game will allow us to build up a library of game assets to eventually be used in the full story-driven game while still keeping a reasonable timeline for shipping something.

In the last post, I showed you the selection of gun silhouettes that Alex came up with:

We decided to start working with the silhouettes J and K (which are similar) for the first gun the player gets to use. I did a write-up of the characteristics of the gun and wrote a little in-universe history for it to help Alex visualize it. My thought was that the player would start with a more general-purpose gun; something modular that came in several variants. We settled on an assault rifle that came in regular, sniper, and tactical variants. After noodling around for a bit, Alex turned that into these designs:

When he first sent it to me, I really wanted to find something that needed to be improved. I failed to find fault with the designs, though. They're pretty much exactly what I was hoping for. The only flaw I found was in the variant names. The "X" designation only applies to the sniper variant. The tactical variant is the RAR-14T, and the regular version is the RAR-14.

RAR stands for "Republic Assault Rifle", and it's pronounced "rawr fourteen".  I was originally going to drop the first R, making "Republic" assumed.  "AR-14" or "Republic AR-14" sounded more like a gun to me than "RAR-14". It turns out, there was a reason for that. The original name of the M16 rifle was "AR-15". Even today, the civilian semi-automatic variant of that gun is sold under the trademarked designation AR-15 by Colt. To keep our distance and not sound too derivative, I decided to stick with the original three-letter acronym pronounced like a word.

Patrick, our other game artist, took Alex's designs and started working on the 3D model for the gun. The model's not finished, but it's looking pretty sharp so far:

Meanwhile, needing a break from designing guns, Alex started working on concept art for the game's first level. We decided that the first level would be a shooting range on board a ship. Our earliest idea was to make a small, long, windowless range. In Deep Fleet ships, space is at a premium, so I initially wanted the space to feel cramped to reflect that, almost as if the designers of the ship had to make room for the rifle range as an afterthought. We explored that idea for a while, but after talking through it, we opted to go in a different direction. We decided that the first level the player sees needed to have a little "wow" to it, and a cramped, dingy range squirreled away in the bowels of the ship just wouldn't give us that. It makes sense in-universe, but it doesn't work for the game.

Essentially, we decided to let aesthetics trump in-universe realities, and went with a range with large (very bulletproof) windows through which stars, the sun, or maybe even Mars or Earth can be seen. Alex isn't done with the concept art for the shooting range yet, but he's off to a good start if you ask me.

The range has a large overhead window looking into space that prevents the room from feeling cramped or small. The shooting stalls can slide in for individual practice, or can be slid out for tactical training. The large pyramid above with the catwalks extending off of it is a holographic projector. Although some elements on the range are real physical items, the targets themselves will be projected holograms. We were originally going to go with targets that pop up the way they do on modern tactical training ranges, but then decided we wanted to go a bit more futuristic.

It'll be interesting to see how much of this changes before we ship the first version of the game, but so far I'm incredibly happy with the progress we're making. On the game mechanics side, I've been experimenting with the gyroscope and trying to make a decision about whether it can be used for certain game mechanics. What I've found is that it's quite well suited to certain situations, but not to others. For example, when you use a scope like the one on the rifle above, and zoom in far, the tiniest movements of your hands cause large movements in the scoped view. While this is somewhat realistic for shooting at long range - holding a gun absolutely perfectly still is impossible - it takes scope drift out of our control.

We need to be able to control things like that. Otherwise, how do we make a gun on a bipod more stable than a gun that's just held? How do we make a large, heavy gun behave differently than a smaller one? And going the other way, how do we keep players from getting a perfectly steady aim just by resting their device on a table?

No. Scope drift has to be something we have precise control over. It can't be a byproduct of our control mechanism.

Although I really like the feel of the gyroscope for shooting, I'm becoming convinced that it's not the right mechanism for this game. That being said, I think it might be the right way to control the view in at least some situations when you're not using a scope. The gyroscope is far more accurate than the accelerometer, and any control mechanism that requires screen touches would require screen real estate we're going to need for other controls.

We're making progress and I look forward to sharing more designs with you as we go along. Once we have our ducks in a row and have finalized the game's name and basic story, I'll also share that.

Tuesday, September 24, 2013

Turncoat Dev Diary: Visual Design Begins

We're starting to narrow in on our first game after putting the stealth game on the back burner. I'll be ready to share more about that in the next week or so. Today's post, though, is not about game mechanics, it's about the look and feel.

We've brought two excellent visual artists — Patrick and Alex — on board to help establish the visual style of our game universe and the first game.

You can check out Patrick's work at his Tumblr and on his blog. You can also follow him on Twitter… um… if you dare.

You can see some of Alex's stuff on his blog and follow him on Twitter.

I'm really excited to be working with these guys and can't wait to share some of the stuff they create.

In one of my next few posts, I'll talk about the mechanics of our first game, but for now, I'll just say that guns — and especially scoped rifles — are an important element of the game, so one of the first things I wanted to explore was what those guns might look like in the 24th century.

The process started with silhouettes. Alex came up with a sheet of different gun outlines based on both historical and modern weapons as well as taking inspiration from a variety of fictional sources. Here is a low-res version of the first silhouette sheet:

Talk about decision paralysis. So many cool-looking gun silhouettes!

While we'll have multiple guns in the game when it ships and we'll eventually explore several of these designs, we have to start with one. Picking just one wasn't easy, though. Instead of deciding based on aesthetics, I decided to look at function. Our protagonist needs to start with a gun, but we don't want them to start with the coolest, fanciest, or biggest gun. Rather, we want them to start with something practical and multi-purpose. Both J & K looked to me like assault rifles that have been modified for sniping, and that feels like a good starting point for the default weapon. It's the weapon of a newly-qual'd sniper deployed with his or her squad.

So, Alex is now working on variations of J & K to come up with the design of the first gun our players will use. We'll be exploring some of the other silhouettes later and evolving those into finalized designs as well.

While Alex is exploring guns, Patrick has been exploring environments. The logical starting point for him was to create a rifle range for practice and training levels. We don't want players to worry about enemies shooting back at them until they've had a chance to at least try out their gun against inanimate objects, so Patrick is working on figuring out just what the rifle range on a 24th century spaceship might look like. None of the environment stuff is far enough along to share yet, but I'm looking forward to when we can.

Tuesday, September 17, 2013

Turncoat Dev Diary: Touch Controls are Hard… Let's go Shopping!

I haven't been making my "every week" blog post commitment for the last couple weeks. I apologize for that. There are a few reasons on top of the ordinary work-life busyness that have caused it.

First… well, touch controls are hard. I've got a partially written post exploring the use of touch controls for stealth games, but I haven't been able to home in on something I'm 100% happy with. I've got something that I like better than any stealth-based iOS game I've found, but it's still nowhere near being shipworthy. Part of that is because this type of game grew up in the console world, where you have controllers like this:

Have you ever thought about the sheer amount of input that you can take through one of these modern controllers? The Xbox 360 controller, for example, has two analog joysticks, each of which allows analog input on two separate axes. That's four inputs that accept a range of values each, letting you (for example) not just specify that you want to move forward, but actually specify the speed at which you want to move.

But there's actually another two analog controls on top of those. The left and right triggers are not buttons, they're also analog controls with one axis each. The harder you press them, the higher the value received. The DPad is the equivalent of eight tac buttons. There are four standard buttons (A,B,X,Y) and two shoulder buttons (RB, LB). Even without counting the start, Xbox, and back buttons, and without using combinations of buttons, we're talking about 14 buttons and 6 analog axes. Oh, but wait… each of the analog sticks can be pressed down and used as a button, so it's 16 buttons and 6 analog axes. If you count all the buttons, it's 19 buttons and 6 axes. You can also chord the A/X, A/B, X/Y, and B/Y buttons, allowing the equivalent of an additional four inputs.

That's an awful lot of input. These controllers are well designed, so you don't think about just how much data you're able to submit to a game using them, but as a game designer, it's something you have to think about.

If you look at the most successful and popular iOS games, they're not (generally speaking) copies of console games. There are exceptions, of course, like the recent Deus Ex game, but, frankly, that one got by on its production value and franchise nostalgia. The controls are actually quite frustrating: a sloppy combination of direct manipulation, virtual joystick, and on-screen buttons that's hard to learn and hard to use.

I still believe that there's a way to do a stealth game on a touchscreen well without using an external controller, but I haven't found it yet. I think I'm going to put this idea on a back burner and return to it in a little while, maybe for the second or third game in the series. 

Another reason I haven't blogged recently is because I've been busy recruiting some pretty amazing artists to work on Turncoat. Pretty soon, I should be able to start posting some concept art and pictures of game assets. I'll tell you more about these artists in a future post but, for now, I will say that I'm super excited to be working with them and I can't wait to start showing you some of the art they create for the game.

So, where are we going from here? Well, we're probably going to be focusing on some high level look-and-feel stuff for the next few weeks and are also going to explore alternate game mechanics for the first game. It's important to me that the first game be really solid and also that it be produced in a timely manner. I just don't think that's going to happen with our original concept.

I'm also thinking about getting away from the prequel idea. There's something in the backstory that I was going to have to reveal if we kept going with the prequel game as originally imagined, and it's something I really don't want to reveal yet for a couple of reasons. Instead, I'm thinking about focusing on origin stories for the main members of the squad. Everybody who gets recruited into The Squad did something to get noticed: some act of heroism, selflessness, or brilliance that caused the Squad's Commander to recruit them.

So, instead of going a hundred years in the past, we're going to only go back 2-5 years. We're in the same universe, dealing with a lot of the same characters, but they're not on The Squad yet. These will be fairly self-contained stories that can be told without having to reveal any of the secrets of the universe.

At this point, I know which character's origin story we're going to do first, but I don't know for sure the game mechanics that will be used to tell that story. I've got some ideas that I'm going to explore, though, so look for future posts.

Wednesday, September 4, 2013

Turncoat Dev Diary: Help! I'm Falling and I Can't Stand Up…

(This is part of a series. The first post in the series is here.)

Just as I started trying to figure out how the game's touch controls should work, I began to be really bothered by a couple of problems in the basic movement of my character. One of those things, which I've mentioned before, is the funky camera accordioning in the arc right and arc left animations. It turns out those issues were more than cosmetic; the stuttering camera, combined with the fact that stopping isn't instantaneous, made it virtually impossible to line up the character precisely as you stopped moving.

It was easy enough to solve, though. I simply removed the arc left and right animations from the blend trees in my state machine, then added some code to my character controller class to simply rotate the whole character as she walked:

    transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The turnSpeed variable can be set in the inspector, so it can be adjusted on a per-character basis. The horizontal value is pulled from the x-axis of the joystick or determined from the left/right buttons or touch screen controls. The resulting turn animation is a tiny bit less realistic than using the animated left and right turns. You'd think that just rotating the whole character a small amount while they walked forward would look really fake, but it doesn't. Maybe it's simply that this is how most third-person games ever created, including pretty much every MMORPG, have worked. Maybe our eyes are just accustomed to this particular cheat. Either way, I'm willing to sacrifice that tiny bit of realism for better, more precise controls.

After playing with it a bit, I decided that turn speed probably shouldn't be the same when walking and running. So instead of setting a single turn speed in the inspector, I'll let you set both a walking and a running turn speed and then interpolate between them. They can be set to the same value using this approach, but they don't have to be.

    // currentRun is the walk-to-run blend value (0 = walking, 1 = running)
    float turnSpeed = (turnSpeedDifference * currentRun) + walkingTurnSpeed;
    transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The variable turnSpeedDifference gets calculated once at startup, since I don't anticipate these values changing at runtime:

    turnSpeedDifference = runningTurnSpeed - walkingTurnSpeed;

I'm pretty happy with turning now, but there's another problem that I didn't notice until I expanded the playing field. The original field was sufficient for testing the basics of movement, but I realized that once I moved beyond the basics, I'd need ways to test things like crawling through vents and taking cover, so I expanded the test field, making it taller and longer. I added some ducts the same size as the ones in the prototype level, some objects to take cover against, and a third-level platform with another set of stairs. That made the test level look like this:

As I started exploring this expanded test level, I realized that falling from a height greater than maybe two or three meters looked unnatural, because my character would try to walk or idle in midair. Walking on air is a pretty neat trick, but not very realistic.

The provided CharacterController class, which is what I've been using to handle basic interaction with the environment (climbing stairs, being affected by gravity), has a method called IsGrounded()¹ that will tell you whether you're standing on the ground. If you're walking, running, or idling, it returns true. If you're jumping or falling, it returns false.

That's the theory, at least. In practice, it always returns true for me, no matter what my character is doing. I understand why it might not work when jumping: the elevation increase is baked into my jump animation, so the character controller never actually leaves the ground. The bone colliders move up into the air, so interaction with props is correct, but the implicit collider used for interacting with terrain doesn't move, and as a result, IsGrounded() returns true. More confusing was why it also returns true when I fall off one of the higher levels. Even falling off a third-story platform, it never reports false.

Because CharacterController is an opaque class provided by Unity, there wasn't an easy way to debug why it wasn't working as expected, so I decided to stop using it and roll its functionality into my own controller. I removed the CharacterController component and added a RigidBody component (the component in Unity that makes an object part of the physics world) along with a CapsuleCollider. Because my character is on the Player Controller layer, just as it was with CharacterController, the CapsuleCollider should only interact with terrain, not with props, which are left to the bone colliders. In theory, everything should work just like before, except for situations that were being explicitly handled by the CharacterController class.

Surprisingly, the swap worked way better than I expected. Without even implementing the IsGrounded() functionality, I'm already able to move around the level just as before. I had to tweak various values on the RigidBody and CapsuleCollider components to get things just right, but it turns out I was getting far less benefit from the CharacterController component than I realized. Even climbing slopes and stairs works pretty much as expected.

Pleasant surprises like this one are few and far between. I expected to put a lot more work into replicating the functionality I was getting from CharacterController, so I took a moment to savor the victory.

Then it was time to turn my attention to figuring out when my character is grounded, when they're jumping, and when they're falling so that I can show the correct animation for each situation.

I tackled grounded first. There are a couple of different possible approaches here. The one I opted for is to cast a ray straight down from the player to determine the distance to the ground. If that distance is greater than it is when the character is just standing, we know the character is not grounded. In my case, that looks a bit like this:

        RaycastHit groundHit;
        Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
        grounded = groundHit.distance - groundedDistance <= 0f;

The variable origin is the calculated center of the capsule collider. The second parameter to Physics.Raycast is the direction I want the ray cast in, which is straight down. Unity provides a property for a vector pointing up from the character, but not one pointing down, so multiplying transform.up by -1 gives us the down vector we need.

The third parameter is used to return the object that was hit, if any. C#, like Java, doesn't have pointers (outside of unsafe code), so that funny out keyword is used to pass groundHit by reference rather than by value. As I've said before, I don't hate C# nearly as much as I hate Java, but there are still times when this language bugs me, and here's one example: I miss pointers. I know many devs feel we've outgrown the need for pointers and that our languages should hide them from us but, personally, I find this whole out² business far clunkier than simply passing the address of a variable. I understand some of the security concerns around pointers, but the other arguments against them ring hollow to me.

Anyway, the next argument (100f) simply tells the ray cast to stop looking if it hasn't found something within 100 units. In my test level, units are roughly equivalent to meters, so that should be far enough to hit the ground anywhere on the level. The final argument, groundLayers, is a little confusing. It's a bit mask used to specify which layers the ray cast should consider. It's very similar to the physics settings I used previously to keep the bone colliders and character collider from interfering with each other.

Determining which values correspond to which layers is a little confusing but, fortunately, you don't need to work them out by hand. You can declare a public LayerMask variable, and Unity will present a user interface in the inspector that lets you select the layers to include.
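To make that concrete, here's a minimal sketch; the class name and the "Terrain" layer name are just placeholders for illustration, not names from the actual project:

    public class PlayerMovement : MonoBehaviour
    {
        // Shows up in the inspector as a checklist of layers; Unity stores
        // the selection as the underlying bitmask for us.
        public LayerMask groundLayers;

        void Start()
        {
            // The equivalent mask built by hand: shift a bit into position
            // for each layer you want included.
            int terrainMask = 1 << LayerMask.NameToLayer("Terrain");
        }
    }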

Once I have the results of my ray cast, it's relatively easy to figure out if I'm grounded. The variable groundedDistance is half the height of the capsule collider plus a small amount extra to account for small terrain changes. I'm ray casting from the center of the collider, so the ground should be half the collider's height away. If it's further than that distance (plus a little slop), we're not grounded.
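In code, that calculation is a one-liner; groundedSlop is a name I'm using here for the extra tolerance, not necessarily the actual variable in the project:

    // Half the capsule's height, plus a little slop so minor terrain
    // bumps don't register as being airborne.
    groundedDistance = (movementCollider.height / 2f) + groundedSlop;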

In my testing, this works perfectly, except the jump problem is still there. My capsule collider doesn't move up as the character jumps, so this code reports that we're grounded when we're jumping.

For the jump problem, all I have to do is add a boolean variable to the class to track when a jump starts and when it ends. Only, it's not quite that simple. Tapping the jump button starts the jump animation. With a running jump, the character immediately springs into the air, but with a standing jump, there's a build-up as the character bends their knees and then springs up. In both cases, the character's feet hit the ground some time before the animation ends, and it seems like that's the point where we want them to be able to start falling. We don't want them to land on thin air and then start to fall down.

The first thing to do was to figure out the exact timings for my two jump situations. After some trial and error, I came up with these values:

    // Timings used on the jump. Running jump starts immediately and transitions immediately back
    private static float jumpResetDelay = .1f;                          // Used to set the Jump input back to 0
    private static float runningJumpAnimationDuration = .416667f;       // Running jump animation is reported as .867f seconds, 
                                                                        //    but is actually .416667, Mixamo probably trimmed on import

    // Standing jump has a build up and recover, so feet don't leave the ground immediately, and animation continues
    // for a short period of time after
    private static float standingJumpLeavesGround = .5f;                // When the feet leave the ground on standing jump
    private static float standingJumpBackOnGround = 1.2f;               // When the feet are back on the ground

Now, the trick is to use them. This is one of those areas where language differences bite you. Pretty much every mechanism I would use to accomplish this in Objective-C or C is unavailable in Unity's C#. Apparently, for performance reasons, Unity's APIs are not threadsafe. Even though C# supports threading, Unity kinda doesn't. Instead, the suggested way to do something like this is to use these funky things called co-routines: functions that can yield execution back to the caller. In Unity, co-routines run on the main thread, but they can yield for a specified period of time, similar to a thread sleeping.

After some playing around, I came up with something that seems to work well. When the jump button is tapped, this co-routine fires:

IEnumerator TriggerJump(bool isRunning)
{
    AnimatorSetJump(true);
    if (isRunning)
    {
        jumping = true;
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(runningJumpAnimationDuration - jumpResetDelay);
    }
    else
    {
        yield return new WaitForSeconds(standingJumpLeavesGround);
        jumping = true;
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(standingJumpBackOnGround - (standingJumpLeavesGround + jumpResetDelay));
    }
    jumping = false;
}
If the character is doing a running jump, the public variable jumping gets set to true immediately, but if they're doing a standing jump, then we wait until the character's feet actually leave the ground to set it. In both cases, we set the Jump input to the animation state engine back to false after a short delay to make sure we don't accidentally trigger a second jump animation, and then, when the character's feet are back on the ground, we set jumping back to false.

Back in our Update() method, we should now be able to check at any point whether our character is jumping and get the correct value (though some tweaks to the timing are to be expected during testing). Knowing this will help us avoid falling into gaps that we're trying to jump over, for example. Now that I can tell when we're jumping, I can update the grounded check to take jumping into account.

    Vector3 origin = transform.position + transform.up * movementCollider.center.y;

    RaycastHit groundHit;
    Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
    grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

Now that I have a reasonably accurate way to determine if the character is grounded, I should be able to tell when to fall and when to stop falling, right?


I knew I'd pay for that earlier bit of serendipity. Turns out, the whole falling thing is harder than I expected. I implemented the code to start falling when the ground is a certain distance away. I made that distance configurable, since it could conceivably change based on the character's height, and then set it for this character. It mostly worked. There are some edge cases, such as going up stairs quickly, where it needs to be tweaked but, for the most part, starting a fall works as expected.

Landing, however… Well, landing doesn't work so well. The character "lands" a few feet above the ground and then settles down to the ground as they start to stand up.

This one made me pull my hair out. It made no sense to me.

It wasn't until I watched the character in Unity's scene view that I realized what was happening. The character's height changes as they fall, and then again as they absorb the impact of the fall, but the capsule collider being used to figure out when they've hit the ground doesn't change in height, so we detect hitting the ground while our character's feet are still a few feet above the ground.

That might make more sense if you see it in action:

You can see how significant the difference in height is in this screenshot:

There's a couple of ways I can fix this. The way that the Unity Mecanim tutorials show is to use an animation curve and tie the height of the capsule collider to that curve.

We do need to make the capsule smaller and adjust its origin up a little so it overlaps our character while falling. But in addition to that, we're raycasting from the center of the capsule in code to figure out if we're grounded, so we have to account for the change in height in that code as well. Since I have to write code to deal with this anyway, I think I'd rather handle the capsule collider changes there too. By saying that, I probably sound to the Unity folks the way people who refuse to use Interface Builder sound to us old school Mac and iOS devs, but it seems logical to keep the functionality in one place.
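Here's roughly what handling it in code might look like; the field names and the shrink factor are assumptions for illustration, not the project's actual values:

    // Called as the fall animation starts: shrink the capsule and raise
    // its center so it still overlaps the tucked-in character.
    void ShrinkColliderForFall()
    {
        movementCollider.height = standingHeight * 0.6f;
        movementCollider.center = standingCenter + Vector3.up * ((standingHeight - movementCollider.height) / 2f);
        groundedDistance = (movementCollider.height / 2f) + groundedSlop;
    }

    // Called on landing: restore the standing dimensions.
    void RestoreCollider()
    {
        movementCollider.height = standingHeight;
        movementCollider.center = standingCenter;
        groundedDistance = (standingHeight / 2f) + groundedSlop;
    }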

With some trial and error, I found the right values and timings for resizing the collider. They'll likely need some tweaking as I test more, but I'm pretty happy with the overall result. I was just about ready to move back to figuring out touch controls when I noticed another movement problem: when my character ran up the stairs or the slope, she would sometimes start falling at the top, even though there was no way she could possibly fall there.

Ray casting doesn't take the size of the collider into account; it just draws a line straight down from the specified point. There's a small gap between the top stair and the platform. It's tiny, not big enough for a person (or our collider) to fall through, but if the ray cast happens to be exactly over that gap when I check for falling, we get a false positive, and the wrong animation gets kicked off.

I could cast multiple rays down to make sure the gap isn't too small to fall through, but Unity actually provides a way to do a ray cast that takes X and Z size into account. It's called a sphere cast, and I stumbled upon it purely by accident. Fixing this issue turned out to be a matter of simply changing my ray cast call to a sphere cast call, using the radius from the capsule collider:

    RaycastHit groundHit;
    //Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
    Physics.SphereCast(origin, movementCollider.radius, transform.up * -1, out groundHit, 100f, groundLayers);
    grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

At this point, basic movement is working pretty well. I can walk, run, jump, and fall fairly realistically. I still have to do crouch and cover but, with these fixes, I think I'm finally ready to start exploring touch controls.

Next: Touch Controls
Previous: Prototyping Player Game Mechanics, Episode II

1: Yes, this is correct. The accepted convention in C# for naming methods is to start them with a capital letter. Considering this language came from the same people who gave us Hungarian Notation, however, this is a pretty tolerable bit of ugliness.

2: It seems simple, right? Specify out if you want to pass by reference, leave the keyword out if not. Only, it's not quite that simple. You can also use the keyword ref to specify that you want an argument passed by reference. Two keywords that do nearly the same thing, except that with ref, the variable has to be initialized before it can be passed in, while with out, it doesn't. This isn't simplicity, it's just different complexity with less power.
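If the distinction between the two keywords is unfamiliar, here's a quick sketch (the method names are made up for illustration):

    void SetWithOut(out int value)
    {
        value = 5;          // an out parameter must be assigned before the method returns
    }

    void DoubleWithRef(ref int value)
    {
        value *= 2;         // a ref parameter arrives holding the caller's value
    }

    int a;                  // fine to leave uninitialized; out never reads it
    SetWithOut(out a);      // a is now 5

    int b = 10;             // must be initialized before passing by ref
    DoubleWithRef(ref b);   // b is now 20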