The first game mechanic that needs to be nailed down is player movement. This is the most important mechanic to get right because it's on the screen all the time. Up to this point, I've just been navigating the prototype map using the stock first person controller that Unity provides. It's time to move past that and figure out the actual player movement and controls for the game. I'm sure we'll be tweaking these right up until release, but we at least need to get to a good starting point created.
I've been going back and forth in my own mind over whether the Escape game should be a first-person or a third-person game. Both have merits. For shooters, the first-person perspective often works better because it's easier to aim guns and other distance weapons from that point of view. For games that use melee weapons or that aren't primarily about combat at all (like ours), I tend to think that third person works better. From a storytelling perspective, I kind of want the main character visible on screen. They are the protagonist, after all, so I want the player to be able to see them. The high-level considerations seem to be pointing more toward third-person perspective.
But third-person perspective falls apart sometimes. The situation in our game that seems potentially problematic for third-person control is when you're crawling through the ducts. Space is tight and the player character will be in front of the camera taking up most of the available space. That's going to make it impossible to see what's in front of the character and difficult to effectively control their movements.
On the other hand, there are times where first person perspective isn't ideal either. I want the player to have a number of options for hiding and taking cover. If using first-person perspective and the character does something like flatten herself up against the wall or takes cover behind a piece of furniture, it's going to be hard to see everything the player needs to see to effectively play the game. Those perspectives may be potentially disorienting as well. In real life, when you're hiding behind something, you can't see the stuff on the other side. But, in a game, you have to be able to see at least some of what's going on for the game to be playable.
Perhaps, the camera needs to be able to change perspective. Crawl into a vent? Move to first person view. Press up against a wall to hide? Force third person view.
What about when both perspectives work, such as when simply walking or sneaking around the cell block? It seems like there are two possible choices there. We can either be opinionated and force one perspective or the other on the player, or we can let the player choose the point of view they want. I'm leaning toward letting the player choose, but I'm going to play wait and see before making the final decision on that. I think we need to let testers try it both ways and see the response.
Time to start building a character controller.
Of course, we don't have character designs yet, let alone completed 3D characters. So, how do we prototype character movement?
We use this:
This is a free character from Mixamo designed specifically for prototyping. With Unity 4's Mecanim animation system and its ability to retarget motions, pretty much any animation designed for a bipedal character can be used with any other bipedal character. That means we can write a generic controller object for this prototyping character, then simply swap in the correct character model later once it has been completed. As long as our models are designed correctly and aren't radically different in their basic proportions, it should just work.
Mecanim is impressive. The motion retargeting is some of the best I've seen and the importer almost always maps all the bones correctly regardless of the naming convention used or the number of bones in the model. The only downside to Mecanim is that it's still fairly new, so a lot of Unity users haven't moved to it yet and are instead sticking with the legacy animation stuff for their current projects. As a result, there's just not as much out there in terms of tutorials or available help. Mecanim makes it really easy to do the basics, but once you start going beyond the basics, you pretty quickly get into uncharted territory.
Uncharted territory can be fun, but it's almost always time consuming. Before we can even get to Mecanim, though, we need to get our prototyping model and animations into Unity.
Over the last year, I've purchased a selection of animations from Mixamo that I thought we'd be likely to need for the game. I supplemented those with a few packs bought from the Unity Asset Store. In the future, I'll be buying any stock animations directly from Mixamo. The Mixamo motions in the Asset Store are much cheaper, but you only get the proprietary Mecanim animation file, not the original FBX file, so you can't modify the animations, you can't set curves (which are used to tie the timing of other actions to the animation), and you can't fix mistakes in the motions.
Unfortunately, the three packs I bought through the Asset Store - the male and female locomotion packs and the prototyping pack - all contained mistakes. In fairness, Mixamo responded to my feedback very quickly. In less than twenty-four hours, they fixed the worst problems - the ones that made the packs unusable. They seem less inclined to fix the more minor issues or to provide some way to use Mecanim curves, so from now on, I'm paying extra for the full motions.
By mixing and matching animations from different packs with the ones bought directly from Mixamo, I should have most of the animations I need to get started with basic character movement. When I discover gaps, I can buy or create animations to fill them.
Rather than test character movement in the prototype level, I'm going to work in a fresh Unity file with a simple map. Once I have the basic movement mechanics working well here, I'll export the asset over and start testing it in the prototype level. The reason to work in a fresh file is to make it easier to isolate the cause of any problems I encounter. Determining the cause of a problem is much harder if both the map and the character are being constantly changed.
Here's the simple prototyping level I'll be using:
There's not much to it other than a ramp, two sets of stairs, a partial second story, and some room for dropping in objects so we can see how the character interacts with the virtual world.
Because physics calculations can be processor-intensive, game engines usually keep a second set of 3D models in memory that mirror the display models. These collision models (or colliders) are never displayed to the user and are used only for calculating the physical interactions in the virtual game world. Colliders are built from lower-resolution meshes and mathematically defined "primitives" like spheres, cubes, and capsules. They allow the physics engine to provide fairly realistic physical interactions using far less processing power than it would take to simulate against the higher-resolution display models.
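To make that concrete, here's a minimal sketch of the two kinds of colliders Unity offers. Everything here (the class name, the sizes) is my own illustration, not code from the project:

```csharp
using UnityEngine;

// Sketch: two ways to give an object a physics shape in Unity.
public class ColliderSetupExample : MonoBehaviour
{
    // A low-poly stand-in mesh for the display model (assumed to be
    // assigned in the Inspector).
    public Mesh lowResCollisionMesh;

    void Start()
    {
        // Cheap primitive: a capsule roughly the size of a person.
        CapsuleCollider capsule = gameObject.AddComponent<CapsuleCollider>();
        capsule.height = 1.8f;
        capsule.radius = 0.3f;

        // More precise but more expensive: a mesh collider that uses a
        // lower-resolution stand-in for the display geometry.
        // MeshCollider mc = gameObject.AddComponent<MeshCollider>();
        // mc.sharedMesh = lowResCollisionMesh;
    }
}
```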
This is why neither the stock first-person nor third-person controller provided by Unity is going to work for our game. I want our characters to interact with the world in a believable manner. Both of Unity's stock controllers use a single, large capsule collider to represent the character inside the physics engine. Although the character looks like the 3D model you create, it interacts with the world like a giant floating pill.
The green capsule is how your character looks to the physics engine
when using Unity's stock character controllers
For many games, especially first person shooters, this provides a sufficiently believable interaction. It's not going to give the result I want for this game, however. I want interactions with the world to have a higher fidelity than that. Our game is going to rely less on combat and more on stealth and problem solving. Having objects bump away when you get near them, but not move when an extended arm or leg passes through them, is just not going to cut it.
I spent a lot of time trying to modify the stock controllers to give the results I want, but eventually decided I either had to roll my own, or find a third-party controller that works how I want. I found several third-party controllers that were better for my needs than Unity's, but none were perfect, so I'm going to have to build my own.
As I started working on my character controller, I ran into an unexpected limitation of Unity: it lacks support for animated collision meshes. Game models in many engines are composed of two meshes - a higher-resolution mesh that gets displayed to the user and a very low-resolution mesh used for physics calculations. Both meshes are rigged to the model's armature (the virtual skeleton used to animate the model), which allows the physics engine to calculate interactions based on the actual position and pose of the character. By using the animated collision mesh, the engine knows the general shape of the body at any given moment and can calculate physics accordingly.
I built this type of physics mesh for my prototyping character. In Unity, I added a mesh collider to the character using that lower-fidelity mesh, but then discovered that it didn't animate along with the armature as I expected it to. A little research turned up that this is the documented behavior of Unity. For performance reasons, mesh colliders do not deform with a character's armature.
Needless to say, I was surprised to find out that I couldn't do what I thought was a fairly standard practice. The response from many Unity users in the forums and on Stack Overflow can be paraphrased as "just use the provided controller; it's good enough for us, so it should be good enough for you," which isn't a particularly helpful bit of advice.
I spent a lot of time experimenting, trying to find a way to use an animated collision mesh in Unity. I came up with a way that I thought would let me get around Unity's limitations. Since Unity, Blender, and the FBX file format all allow objects to be parented to individual bones, I thought I could create separate collision objects for different parts of the body and achieve the same result as using a mesh that animates along with the character. You can see this attempt below; the collision objects are the orange wireframe shapes surrounding the model. Instead of a single animated collision mesh, I built nineteen separate meshes, each of which moved along with a single parent bone.
I rendered a short animation to see if the physics meshes would animate properly when built this way. Everything seemed to be in line with what I needed.
As I moved over to Unity, things looked good at first. The exported model looked right, and all the physics meshes were in the correct places. They weren't acting as physics meshes yet, though; they were being rendered like ordinary geometry. That didn't concern me. I just needed to turn off their mesh renderers and add mesh colliders to the appropriate bones.
I should've known it wasn't going to be that easy.
Unity mesh colliders automatically place their mesh so that they take on the parent object's transform (position, scale, rotation). In the case of a bone, that means the collision mesh gets moved to the head of the bone and rotated 90°. All my colliders ended up in the completely wrong place. Here, for example, is where the left thigh collider ended up.
I know how to fix this — I just have to move the origin of each collision object to match its parent bone's transform and then adjust the mesh's shape to overlap the length of the bone. But, I was starting to feel like I was fighting Unity… that I wasn't working with the system the way it was intended to be used.
I went back to researching, and found a few people saying to build your character's collision mesh right in Unity rather than in a modeling program. Primitive colliders like capsule, box, and sphere colliders give much better performance than mesh colliders, even mesh colliders that use low-resolution meshes. Building the collision mesh in Unity is a little tedious, but probably less tedious than fixing the collision mesh in Blender and importing it, and I'll get better performance. I'll lose a little precision, but probably not enough to matter. Most importantly, I won't be fighting my tools.
So, back to Blender I went to re-export the model without its collision meshes. After bringing the updated model back into Unity, I began adding primitive colliders to major bones and ended up with this:
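For illustration, here's roughly what one of those per-bone colliders amounts to if you set it up from a script instead of by hand. In practice I placed and sized everything by eye in the Inspector; the bone name below follows Mixamo's usual naming convention and the sizes are guesses, so treat all of it as hypothetical:

```csharp
using UnityEngine;

// Illustrative only: attach a capsule collider to a named bone from code.
public class AddThighCollider : MonoBehaviour
{
    void Start()
    {
        // Find the bone transform by name somewhere under the character.
        Transform thigh = FindDeep(transform, "LeftUpLeg");
        if (thigh == null) return;

        CapsuleCollider capsule = thigh.gameObject.AddComponent<CapsuleCollider>();
        capsule.direction = 1;                        // align the capsule with the bone's Y axis
        capsule.radius = 0.08f;                       // rough guesses; tuned by eye in the editor
        capsule.height = 0.45f;
        capsule.center = new Vector3(0f, 0.22f, 0f);  // offset so it spans the bone's length
    }

    // Depth-first search for a transform by name.
    static Transform FindDeep(Transform root, string name)
    {
        foreach (Transform t in root.GetComponentsInChildren<Transform>())
            if (t.name == name) return t;
        return null;
    }
}
```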
Just like with the earlier model, I wanted to make sure the collision objects moved appropriately when animated, so I made the character dance and filmed it. Yikes. That sounds way creepier typed out than it did in my head.
It looks pretty good, right? But there is a problem with this collision model that's not obvious from the animation above. Everything looks good… until I enable rigid body physics on the character itself. If you look at the green outlined collision objects in the animation above, you'll notice that they overlap at times. The arm, chest, and shoulder collision objects overlap each other, for example, as do the pelvis and thigh objects. Since these are used in physics calculations, this becomes a problem if the parent object (the imported character model) uses physics.
Unity's physics engine is going to try to prevent these meshes from intersecting because it wants to treat them as solid physical objects. That's the whole point of a collision mesh. Unity wants to bounce these off each other. Even if I make all the colliders kinematic (by attaching kinematic rigidbodies, which affect other colliders but don't get moved around by the physics engine themselves), these colliders still cause problems because the engine will still try to bounce the character off the colliders attached to its own bones.
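For reference, making the bone colliders kinematic is just a matter of flipping a flag on their rigidbodies. This sketch (the class name is my own) assumes each bone collider already has a Rigidbody component; as described above, it doesn't by itself solve the self-collision problem:

```csharp
using UnityEngine;

// Mark every rigidbody under the character as kinematic so the animation
// drives them instead of the physics simulation.
public class MakeBonesKinematic : MonoBehaviour
{
    void Start()
    {
        foreach (Rigidbody rb in GetComponentsInChildren<Rigidbody>())
        {
            rb.isKinematic = true;
        }
    }
}
```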
This causes weird, erratic results. I've seen my character start walking on air as if there were an invisible staircase of randomly sized steps in front of her, and I've seen body parts suddenly fly away for no apparent reason, as if they were connected to the body only by a skin of really pliant rubber.
I could have solved this by leaving large enough gaps between the collision meshes so that they simply never overlap during normal movement. To do that, though, the gaps would have to be large enough that smaller objects in the world could pass through them or, worse, get stuck in them.
Fortunately, Unity provides two different ways to tell objects not to collide with specific other objects.
The first way is to simply attach a script to objects with colliders that specifies which other objects they shouldn't collide with. Doing that looks something like this:
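Here's a sketch of that approach, built around Unity's `Physics.IgnoreCollision` call. The class name is my own, and this version pairs off every collider under the character rather than listing objects individually:

```csharp
using UnityEngine;

// Attach to the character root: tells the physics engine to ignore
// collisions between every pair of colliders found on the character's
// own bones.
public class IgnoreSelfCollisions : MonoBehaviour
{
    void Start()
    {
        // Gather every collider under this object, including the ones
        // parented to individual bones.
        Collider[] colliders = GetComponentsInChildren<Collider>();

        // Tell the physics engine to skip each pair.
        for (int i = 0; i < colliders.Length; i++)
        {
            for (int j = i + 1; j < colliders.Length; j++)
            {
                Physics.IgnoreCollision(colliders[i], colliders[j]);
            }
        }
    }
}
```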
The second way, which is the one I went with, is to assign all of the character's collision objects to a dedicated layer and tell the physics engine to ignore collisions within that layer. Once they were all assigned to the same layer, turning off collisions between objects on that layer was a simple matter of going into the project's Physics settings:
By unchecking the box where the row is Player Collision and the column is also Player Collision, I've effectively told Unity to ignore any collision between two objects that are both assigned to the Player Collision layer, but to continue calculating collisions between those objects and all other objects in the world. In other words, the physics engine ignores when one part of the player collides with another part of the player, which is exactly the behavior we need.
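The same thing can also be done from code with `Physics.IgnoreLayerCollision`, which is handy if you'd rather not depend on a checkbox in the settings. This sketch assumes a layer named "Player Collision" exists, matching the settings above:

```csharp
using UnityEngine;

// Script equivalent of unchecking the Player Collision / Player Collision
// box in the layer collision matrix.
public class DisablePlayerSelfCollision : MonoBehaviour
{
    void Awake()
    {
        int playerLayer = LayerMask.NameToLayer("Player Collision");
        if (playerLayer != -1)
        {
            // Ignore collisions between any two objects on this layer.
            Physics.IgnoreLayerCollision(playerLayer, playerLayer, true);
        }
    }
}
```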
That's enough for today's installment. In the next episode, I'll get our protagonist moving around and interacting with the world.
Previous: Thinking About Characters