I knew I was going to have to implement AI using the NavMesh, and that the NavMesh could be navigated by way of ‘waypoints’ (a point on the surface of the NavMesh that an entity is told to move to), so I figured I could use the same system for my player movement as well – but more on that later.
For now I needed to learn about the NavMesh and how to move objects around on it, and how to get the AI to detect the player.
Moving the zombies toward the player
First I needed a detection mechanism and after some scouring of the interwebs I found plenty of information on using Raycasts to detect an object at a point along a vector. Below is a screenshot of the very, very simple Raycast code I started with.
The layerToDetect property gets set in the inspector to the layer assigned to the player object, which was my custom layer called ‘Player’.
The eyes property gets set in the inspector to the zombie’s child game object – a small cube that I placed to represent the ‘eyes’. This gave me both a visual reference for where the zombie was ‘looking’, and a point from which to originate the Raycast.
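Since the screenshot doesn’t reproduce well here, this is roughly what that first script looked like – a hedged reconstruction, not the exact code: the layerToDetect and eyes names come from the post, while the class name and sightRange value are my own stand-ins.

```csharp
using UnityEngine;

public class ZombieSight : MonoBehaviour
{
    public LayerMask layerToDetect; // set in the inspector to the 'Player' layer
    public Transform eyes;          // child cube marking where the zombie 'looks' from
    public float sightRange = 10f;  // assumed value, tweak in the inspector

    void Update()
    {
        // Cast a single ray straight ahead from the eyes
        if (Physics.Raycast(eyes.position, eyes.forward, out RaycastHit hit, sightRange, layerToDetect))
        {
            Debug.Log("Player spotted: " + hit.collider.name);
        }
    }
}
```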
And here is what the little fella looks like…
Now that I can detect any object assigned the ‘Player’ layer, the zombie needs to move when that Raycast hits. To start, all I did was move the zombie forward, just to prove it works.
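That ‘move forward when seen’ step can be sketched like this – again an assumption of the original script rather than a copy, with moveSpeed and the class name invented for illustration:

```csharp
using UnityEngine;

public class ZombieChase : MonoBehaviour
{
    public LayerMask layerToDetect; // the 'Player' layer
    public Transform eyes;          // origin of the Raycast
    public float sightRange = 10f;
    public float moveSpeed = 2f;    // assumed walking speed

    void Update()
    {
        // While the ray ahead hits something on the Player layer...
        if (Physics.Raycast(eyes.position, eyes.forward, sightRange, layerToDetect))
        {
            // ...just translate forward, enough to prove detection works
            transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
        }
    }
}
```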
Next I created a simple box, assigned it to the Player layer, and positioned it somewhere in front and to the side of the zombie (using the ‘eyes’ as a reference point for where to place it).
Now I could enter play mode and manually drag the player in front of the zombie and away again to prove that it worked. Here’s how that looked:
It proves the functionality of the Raycast but it’s hardly useful as it is. Right now our ‘zombie’ can only detect objects directly in front of it, and can only move in a single direction – forward. So for my next experiment I tried expanding the Raycast idea to cover a field-of-view. Combining a field-of-view raycast with a subsequent rotation of the ‘zombie’ to face the point the cast hit should give me a form of player tracking.
To emulate a field-of-view raycast I simply performed multiple raycasts between two bounding angles. So for a 120 degree FOV I started the scans from -60 degrees to +60 degrees, and I used a spacing angle of 5 degrees. This gave me enough scans in the range to hit the player object if it was in range. It wasn’t mathematically calculated to be perfect as this was just a proof-of-concept, so I just adjusted the angle, range and spacing figures until the cast lines looked OK. (I used Debug.DrawRay to display a visual representation of the invisible Raycast lines.)
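The sweep described above might look something like the following sketch – my own reconstruction, assuming the same layerToDetect and eyes fields, with the -60°/+60°/5° figures from the post exposed as inspector values:

```csharp
using UnityEngine;

public class ZombieFieldOfView : MonoBehaviour
{
    public LayerMask layerToDetect;
    public Transform eyes;
    public float sightRange = 10f;
    public float fovAngle = 120f;  // total field of view, so scans run -60 to +60
    public float stepAngle = 5f;   // spacing between individual rays

    // Sweep rays across the FOV and report the first hit, if any
    bool ScanForPlayer(out RaycastHit hit)
    {
        float half = fovAngle * 0.5f;
        for (float angle = -half; angle <= half; angle += stepAngle)
        {
            // Rotate the eyes' forward vector around the vertical axis
            Vector3 dir = Quaternion.AngleAxis(angle, Vector3.up) * eyes.forward;

            // Visualise the otherwise-invisible ray in the Scene view
            Debug.DrawRay(eyes.position, dir * sightRange, Color.red);

            if (Physics.Raycast(eyes.position, dir, out hit, sightRange, layerToDetect))
                return true;
        }
        hit = default;
        return false;
    }
}
```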
Combining a successful hit result with a rotation to ‘LookAt’ the hit point, followed by a forward movement, gives an approximation of AI player tracking.
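Tying those two steps together might look like this – a sketch only, assuming the method lives on the zombie’s MonoBehaviour and receives the RaycastHit from whatever sweep routine found the player:

```csharp
// Rotate to face the hit point, then step toward it.
// hit comes from a successful FOV scan; moveSpeed is a hypothetical field.
void TrackPlayer(RaycastHit hit, float moveSpeed)
{
    // Look at the hit point, but keep the zombie level by ignoring the hit's height
    Vector3 target = new Vector3(hit.point.x, transform.position.y, hit.point.z);
    transform.LookAt(target);

    // Having turned to face the player, walk forward
    transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
}
```

Flattening the target’s y component matters: LookAt at a raw hit point can pitch the zombie up or down, which looks wrong for a ground-walking character.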
This approach works and – in my opinion – looks really impressive, and it doesn’t rely on any reference to the player beyond knowledge of the player’s LayerMask to perform the tracking. However, I have doubts about the scalability of performing so many raycasts, per AI entity, per frame. And what if there are multiple ‘Player’ objects within the scan range… how should the AI behave? As with any programming problem, there are multiple possible approaches to the solution, but for now I am happy that this meets the requirements for my very simple first game efforts.
The following link contains a small asset package with the sample scene and code files I used to create the above images and gifs. Please feel free to check it out and leave comments or feedback on this simple implementation, or your own ideas on AI mechanics.