The Saturday Paper – Metal Gear Science

Screens from Mike Bithell’s Volume (thanks Mike!)

On the face of it, stealth games seem to be a type of action game – often third-person, three-dimensional environments with enemies and doors and navigation problems to solve. But the pacing and mechanics of stealth games make them more complex, adding new dimensions to consider, like how the level and the positions of obstacles change over time. That makes them hard to design for – but help is at hand. This week on the Saturday Papers: an open-source Unity tool that analyses stealth game levels in time and space.

We’re reading An Exploration Tool for Predicting Stealthy Behaviour by Jonathan Tremblay, Pedro Torres, Nir Rikovitch and Clark Verbrugge. The paper describes a tool built by the authors in Unity for analysing stealth game scenarios, and visualising the kinds of paths and problems players might encounter on a given level. The tool is going open-source very soon, and the paper itself also opens up lots of exciting questions for future research. Before we go on, you might like to watch a video of the tool in action:

Stealth games have really interesting properties compared to classic action games. Space is important, like it is with most games, but time is a crucial property too. A space that is safe to be in at the start of the level may become unsafe as guards patrol, or as the dynamics of the level change in some way. In order to analyse levels and discover properties like safe zones, or whether paths exist to ghost the level (complete it without ever being detected), we need a tool that can take into account the changes that happen when the game runs, rather than just looking at the static level. The tool we’re looking at today helps do just this.

The current version of the tool can understand things with vision cones, like guards or security cameras, as well as the fact that they might move over time. It can simulate this movement, as well as the player’s movement, and calculate updated vision cones based on the environment the game objects are currently in. We’re going to see a fairly classic example later that will highlight this nicely.
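To make that concrete, here’s a rough sketch of the idea – mine, not the authors’ code (the real tool is built in Unity, and the class and function names here are made up) – of how you might ask whether a patrolling guard can see a given point at a given moment in time:

```python
import math

class Guard:
    """A guard that patrols back and forth along a list of waypoints."""

    def __init__(self, waypoints, speed, fov_degrees, view_distance):
        self.waypoints = waypoints            # [(x, y), ...] patrol route
        self.speed = speed                    # units per second
        self.fov = math.radians(fov_degrees)  # full width of the vision cone
        self.view_distance = view_distance

    def position_and_facing(self, t):
        """Where the guard is, and which way it faces, at time t
        (walking out and back along its waypoints)."""
        legs = list(zip(self.waypoints, self.waypoints[1:]))
        lengths = [math.dist(a, b) for a, b in legs]
        total = sum(lengths)
        if total == 0:
            return self.waypoints[0], 0.0     # a guard that stands still
        d = (self.speed * t) % (2 * total)
        if d > total:                         # on the return leg
            d -= total
            legs = [(b, a) for a, b in reversed(legs)]
            lengths = list(reversed(lengths))
        for (a, b), length in zip(legs, lengths):
            if d <= length:
                frac = d / length
                pos = (a[0] + frac * (b[0] - a[0]),
                       a[1] + frac * (b[1] - a[1]))
                return pos, math.atan2(b[1] - a[1], b[0] - a[0])
            d -= length
        return self.waypoints[-1], 0.0        # fallback for float edge cases

    def can_see(self, point, t):
        """True if `point` is inside the guard's vision cone at time t.
        (The real tool would also test whether a wall blocks the view.)"""
        pos, facing = self.position_and_facing(t)
        dx, dy = point[0] - pos[0], point[1] - pos[1]
        if math.hypot(dx, dy) > self.view_distance:
            return False
        diff = (math.atan2(dy, dx) - facing + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= self.fov / 2


# Hypothetical usage: can the guard see the doorway at (4, 2) three seconds in?
guard = Guard(waypoints=[(0, 0), (10, 0)], speed=1.0,
              fov_degrees=90, view_distance=6.0)
print(guard.can_see((4, 2), t=3.0))
```

A fuller version would also check whether a wall blocks the line of sight, which is exactly the kind of environment-dependent recalculation the tool performs as guards and the player move around.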


The key bit that makes the player simulation work is a technique called the Rapidly-exploring Random Tree, or RRT. RRT is more commonly seen in robotics applications, and although it’s not necessary to understand it in order to use the tool, it’s nice to get a feel for how the tool does what it does. What RRT tries to do is build a connected tree of nodes describing paths through a space like the game’s level. Starting from an initial point (like the player’s spawn), RRT repeatedly samples a random point in the level, tries to connect it to the nearest existing node, and, if the connection works, adds the new point to the tree. In our case, whether two nodes can be connected depends on things like solid walls being in the way, but it is also affected by guard lines of sight. It can even take into account things like player acceleration and momentum.
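For the curious, here’s a minimal sketch of a vanilla RRT in Python. To be clear, this isn’t the paper’s implementation: the obstacle and line-of-sight tests are hidden behind a hypothetical is_connection_clear function, and a stealth-aware version would grow the tree through combined position-and-time states so that guard movement actually matters.

```python
import math
import random

def rrt(start, goal, sample_point, is_connection_clear,
        step_size=1.0, goal_radius=1.0, max_iterations=5000):
    """Grow a Rapidly-exploring Random Tree from `start` towards `goal`.

    sample_point()            -> a random (x, y) somewhere in the level
    is_connection_clear(a, b) -> True if the straight move a -> b is allowed
                                 (no walls in the way; for stealth, not seen)
    Returns a path as a list of points, or None if none was found.
    """
    nodes = [start]
    parent = {0: None}                        # index of each node's parent

    for _ in range(max_iterations):
        sample = sample_point()
        # Find the existing node nearest to the random sample...
        nearest_index = min(range(len(nodes)),
                            key=lambda i: math.dist(nodes[i], sample))
        nearest = nodes[nearest_index]
        # ...and take a small step from it towards the sample.
        distance = math.dist(nearest, sample)
        if distance == 0:
            continue
        frac = min(1.0, step_size / distance)
        new = (nearest[0] + frac * (sample[0] - nearest[0]),
               nearest[1] + frac * (sample[1] - nearest[1]))
        if not is_connection_clear(nearest, new):
            continue                          # blocked by a wall or a guard's view
        parent[len(nodes)] = nearest_index
        nodes.append(new)
        # Close enough to the goal? Walk back up the tree to build the path.
        if math.dist(new, goal) <= goal_radius:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return list(reversed(path))
    return None


# Hypothetical usage on an empty 20x20 room with nothing in the way:
path = rrt(start=(1, 1), goal=(18, 18),
           sample_point=lambda: (random.uniform(0, 20), random.uniform(0, 20)),
           is_connection_clear=lambda a, b: True)
```

Running this repeatedly with different random samples gives you the many candidate paths the tool collects before moving on to the visualisation stage.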

When the RRT has found enough paths of nodes from the player spawn to the goal (the user can decide how many she wants the tool to find before stopping), we then move on to preparing all this information in a way a human can understand. There are often many paths which are quite similar to each other, which the tool tries to collapse together. By using information about nodes that were seen by guards, and nodes that weren’t, the tool can then produce some pretty awesome visualisations (one way the collapsing step might work is sketched below).
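The paper’s exact collapsing step isn’t detailed in this post, so to give a feel for the idea here is one simple way it could be done – this is my own assumption, not necessarily the authors’ method: resample every path to the same number of points and merge paths whose corresponding points stay close together.

```python
import math

def resample(path, n=20):
    """Return `n` points spaced evenly along the path's length.
    Assumes the path has at least two points."""
    cumulative = [0.0]                        # distance walked so far at each node
    for a, b in zip(path, path[1:]):
        cumulative.append(cumulative[-1] + math.dist(a, b))
    total = cumulative[-1]
    points, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(path) - 2 and cumulative[j + 1] < target:
            j += 1
        a, b = path[j], path[j + 1]
        seg = cumulative[j + 1] - cumulative[j]
        frac = 0.0 if seg == 0 else (target - cumulative[j]) / seg
        points.append((a[0] + frac * (b[0] - a[0]),
                       a[1] + frac * (b[1] - a[1])))
    return points

def collapse(paths, tolerance=2.0):
    """Merge paths whose resampled points never drift more than `tolerance`
    apart, returning one original path per group."""
    groups = []                               # list of (resampled, original) pairs
    for path in paths:
        p = resample(path)
        for rep, _ in groups:
            if max(math.dist(a, b) for a, b in zip(p, rep)) <= tolerance:
                break                         # close enough to an existing group
        else:
            groups.append((p, path))
    return [original for _, original in groups]
```

With similar paths merged and their nodes tagged with who-saw-what information, the tool can render its visualisations. Check out the illustration below: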

[Screenshot: the tool’s safe-zone visualisation, with green, magenta and red regions]

Green shaded areas represent regions the player can never be seen in by the patrolling guards. Magenta areas mean there is a chance patrolling guards will eventually sweep the area and see the player. Red zones show the guards’ current fields of view. This is already useful information for a level designer, but the tool can do more. What this image is hiding is all the path information collected by the tool – information that can tell us what paths were found to the exit, how many there were, and so on. To see why this information is so important, let’s take a look at another example from the paper, based on something from Randy Smith’s 2006 GDC talk. Blue lines represent Smith’s hand-drawn player paths, red lines represent guard patrols:

[Screenshot: the Randy Smith example, with hand-drawn player paths and guard patrol routes]

Considering this example, what we want to ask are questions like: if I made my player move 20% slower, could they still make that path okay? What if I moved one of the alcoves two metres to the side, could they make it then? Is it any harder? The tool can answer these questions for you, because it interprets whatever level you give it in Unity. Want to know what would happen if you moved an alcove? Move the alcove and ask:

[Screenshot: path-density visualisation after the alcove has been moved]

 

The grey regions show paths which successfully reach the goal, becoming darker as more of the paths use that particular area. By moving the alcove further to the right, it becomes useless to the player, and the only solutions found are ones that use just one of the alcoves and then dash for the exit. How do we overcome this? What if we tweaked the guard behaviour, would that work? The tool can tell you.
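To give a flavour of what those what-if queries could look like programmatically – this is entirely hypothetical, and analyse_level is a stand-in of mine rather than the tool’s actual interface – you could imagine sweeping a single parameter and watching how the number of successful paths responds:

```python
# Hypothetical what-if sweep: rerun the analysis at several player speeds and
# count how many successful paths are found at each one. `analyse_level` is a
# stand-in for the tool's path-finding pass, not a real API.
def sweep_player_speed(level, analyse_level, speeds=(0.8, 0.9, 1.0, 1.1, 1.2)):
    results = {}
    for speed in speeds:
        paths = analyse_level(level, player_speed=speed)
        results[speed] = len(paths)           # e.g. {0.8: 0, 0.9: 3, 1.0: 12, ...}
    return results
```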

This has obvious uses for game developers and level designers. The tool can tell you things you might have missed about a level without needing to playtest it (in the example above, maybe we don’t want the player to be able to dash to the exit like that). But to me it also holds the potential for other exciting applications. This tool lets software analyse levels, which means it could be used as part of a feedback loop for a procedural level generator for a stealth game. Generate a level, calculate paths and safe zones, and compare them to metrics set by a human designer. Are there too many safe zones? What’s the length of the shortest solution path? If it’s no good, go back to the procedural drawing board.
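Here’s a sketch of how that feedback loop might look – hypothetical again; generate_level, analyse_level and the thresholds are all stand-ins I’ve invented, not anything from the paper:

```python
def generate_acceptable_level(generate_level, analyse_level,
                              max_safe_zones=4, max_shortest_path=60.0,
                              max_attempts=200):
    """Generate-and-test loop: keep proposing levels until the analysis says
    one meets the designer's constraints, or give up."""
    for _ in range(max_attempts):
        level = generate_level()
        report = analyse_level(level)         # assumed: .paths and .safe_zones
        if not report.paths:
            continue                          # unsolvable; back to the drawing board
        shortest = min(path.length for path in report.paths)
        if (len(report.safe_zones) <= max_safe_zones
                and shortest <= max_shortest_path):
            return level, report
    return None, None
```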

The paper outlines lots of exciting future directions for the work, including improving efficiency until it runs in real-time (the examples shown here take a couple of seconds, but the authors believe that can be improved even further). They also tantalisingly mention using the tool to improve companion AI in stealth games, and perhaps even simulating different player playstyles when analysing a level. All in all, this feels like a great approach that is already highly applicable to real game designs. I’m excited to see where the research leads.


Where to find more

Jonathan is a PhD candidate at McGill University in Canada, where Clark is an Associate Professor. Nir is an MEng student, also at McGill, working on enhancing the autonomy of aerial vehicles. Jonathan will be presenting his work next week at the Intelligence in the Design Process workshop at AIIDE, but if you can’t make it (!) then I recommend getting in touch with him via email if you’d like to know more about the project. Jonathan tells me the tool will soon be available on GitHub, which is incredible news – I’ll mention it in a future Saturday Papers when it becomes available.
