Combat AI for Action-Adventure Games Tutorial [Unity/C#] [GOAP]
January 3, 2018 1 min read
In this tutorial, we make a planning combat AI system in Unity and C#. I explain GOAP on a technical level, then what I made and why, and how it can be …
45 thoughts on “Combat AI for Action-Adventure Games Tutorial [Unity/C#] [GOAP]”
That was so helpful! Thank you so much!!
Hello, thanks for the great tutorial. My question is: is a dialogue system possible with GOAP AI? If yes, do I have to create the dialogue as an action script? Thanks.
Isn't HashSet<KeyValuePair<TKey, TValue>> basically Dictionary<TKey, TValue>?
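Not quite, and the difference can bite in GOAP world state. Here's a quick standalone sketch (names are mine, not the tutorial's) showing that a HashSet&lt;KeyValuePair&lt;TKey, TValue&gt;&gt; hashes the whole pair, so the same key can appear twice with conflicting values, while a Dictionary&lt;TKey, TValue&gt; enforces one value per key and gives O(1) lookup by key:

```csharp
using System.Collections.Generic;

// A HashSet of pairs hashes the WHOLE pair, so the same key can appear
// twice with conflicting values; a Dictionary enforces one value per key.
static class WorldStateDemo
{
    public static (int setCount, int dictCount) Compare()
    {
        var set = new HashSet<KeyValuePair<string, bool>>();
        set.Add(new KeyValuePair<string, bool>("hasWeapon", true));
        set.Add(new KeyValuePair<string, bool>("hasWeapon", false)); // also accepted!

        var dict = new Dictionary<string, bool>();
        dict["hasWeapon"] = true;
        dict["hasWeapon"] = false; // overwrites, no duplicate

        return (set.Count, dict.Count); // (2, 1)
    }
}
```

So a dictionary is usually the safer representation for world state, where each key should have exactly one current value.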
The link for "Tile Art" does not work. ;.;
This is awesome! Just making sure I have not missed something, it appears the IFSMState is never used. Instead we see that the FSM class defines a FSM state delegate which is used by the GOAPAgent.
Also, for anyone who finds the GOAP concept a bit confusing: at its core it's simply a way of defining a state machine at run time (i.e. defining state transitions at run time rather than explicitly coding them).
Also, it could be helpful to figure out a way of statically typing the preconditions and effects, using generic constraints, to avoid the problem where you accidentally misspell an effect or precondition and end up pulling your hair out trying to understand why a plan is not generated (speaking from personal experience).
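One lightweight version of that idea, sketched with an enum rather than full generic constraints (WorldKey and the class below are hypothetical names, not the tutorial's): a typo in a key becomes a compile error instead of a silently failing plan.

```csharp
using System.Collections.Generic;

// Sketch: replacing string keys with an enum so the compiler catches typos.
enum WorldKey { HasWeapon, TargetDead, NearTarget }

class TypedGoapAction
{
    public readonly Dictionary<WorldKey, bool> Preconditions = new Dictionary<WorldKey, bool>();
    public readonly Dictionary<WorldKey, bool> Effects = new Dictionary<WorldKey, bool>();

    public void AddPrecondition(WorldKey key, bool value) => Preconditions[key] = value;
    public void AddEffect(WorldKey key, bool value) => Effects[key] = value;

    // AddEffect(WorldKey.TargetDeda, true) would not compile,
    // whereas addEffect("targetDeda", true) would fail only at plan time.
}
```

The trade-off is that every new world fact requires touching the enum, but for a fixed combat vocabulary that's usually worth it.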
Hey, you can try using a layered approach to optimize the search. Essentially you find a plan for some high-level goal state, and then for each action in that high-level plan you recursively find a plan that satisfies that action's preconditions. You can have as many layers as you want, and you would only need to carry out planning for a given action in a layer once the lower layer's plan has been completed (meaning you can spread the planning out over a longer time).
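To make the layered idea concrete, here is a minimal sketch (all names are illustrative, not from the tutorial code): plan with abstract actions first, then expand each one into a concrete sub-plan only when it is about to run.

```csharp
using System;
using System.Collections.Generic;

// An abstract action in the top planning layer.
class HighLevelAction
{
    public string Name;
    public List<string> Preconditions = new List<string>();
}

class LayeredPlanner
{
    // Stand-in for the expensive low-level planner; stubbed here to return
    // a single step that satisfies the given precondition.
    public Func<string, List<string>> PlanFor =
        precondition => new List<string> { "achieve:" + precondition };

    // Expand one high-level action lazily, right before it executes.
    public List<string> Expand(HighLevelAction action)
    {
        var steps = new List<string>();
        foreach (var pre in action.Preconditions)
            steps.AddRange(PlanFor(pre)); // recurse one layer down
        steps.Add(action.Name);
        return steps;
    }
}
```

Because each Expand call happens only when the agent reaches that action, the planning cost is spread across the plan's lifetime instead of paid all at once.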
Help pls!!!
Does anyone know how to update the world state every time an action fails?
The simplest scenario in my project I can think of is a couple of zombies attacking a house.
The zombies want to enter by the window, so they attack it; after some time one zombie breaks the window and the other doesn't.
I wrote code that changes the cost of an action based on pathfinding distance, so the cost of going to the other window and attacking it is lower than attacking the same window again. But my zombie keeps attacking to fulfill the objective "Free path".
Every time they attack the window and the window is not destroyed, the action basically returns false, so the planner plans actions again. My problem is that I don't know how to tell my zombie that the prerequisite of the action "Attack Human" is already fulfilled.
Can anybody help me?
*Sorry for my bad English…
This is so simple and beautifully illustrated, I am so grateful that people like you take time to make videos like this so that people like me can learn. Thank you!
Neat lib, but I found that if your agent has more than 8 or 9 actions, it starts to take more than 1 second to make a plan. So it's pretty tough to build a complex AI from it.
How awesome, very good!! 😉
I was looking deep into the code and noticed some possible improvements. The planner does a brute-force search: it first checks the procedural conditions to see whether each action can run (so these act like preconditions, but more abstract), then it tries to find the shortest path to a goal, which means when we have two goals it will always pick the easier/shorter one. And it doesn't consider which goal to pursue when backtracking; it just looks at all usable actions, matches them together, and checks whether any chain satisfies the goal.
For one unique goal this works well, but when we add two it doesn't pick the better choice. One example: for the goal StayAlive we could have two actions, Block and RunAway. Block requires being close to our attacker, but if our attacker isn't sending attacks, blocking makes no sense, so the action returns false and fails. The agent then returns to the idle state and picks a new plan, which will be the same one that just failed, because it's still the shortest and simpler than just running away. The planner shouldn't take the failed action again; it should pick the second-shortest plan, and if that one fails it shouldn't be taken either. I think this would be an important improvement.
When we have two or more goals, the planner will always pick the shorter plan. At the least, goals need a priority value: once the goal of highest relevance has been accomplished, its priority should decrease and a plan for that goal shouldn't be executed again; to make that work, every action should increase some goal's relevance. Something like this scenario:
Workers need to eat, use the bathroom, and sleep; when those goals are accomplished, they do their work.
Goals
Hunger = 4
Bed = 2
Work = 1
Bathroom = 3
Actions
MakeTheWork: Work − 1; addEffect("HasMoney")
EatSnack: Hunger − 2, Bathroom + 2; addPrecondition("HasMoney")
Sleep: Bed − 2, Work + 2; addPrecondition("InTheBed", true)
Work: Bed + 2, Hunger + 2; addPrecondition("InTheWork", true)
UseBathroom: Bathroom − 4; addPrecondition("InTheBathroom", true)
A generic Goto&lt;Location&gt; action just goes to the specified location:
Goto&lt;Location&gt;: addPrecondition("InThe&lt;Location&gt;", false); addEffect("InThe&lt;Location&gt;", true)
Maybe the goal to pursue should be the one with the highest value, but right after accomplishing that goal our need to go to the bathroom would be at its max of 5, and we should avoid those situations:
Goal: Hunger = 4; Goal: Bathroom = 3
Action: EatSnack (Hunger − 2, Bathroom + 2)
afterwards: Hunger = 2, Bathroom = 5: discontentment = 29
Action: UseBathroom (Bathroom − 4)
afterwards: Hunger = 4, Bathroom = 0: discontentment = 16
So we can follow this approach and pick the goal with the lowest discontentment (the sum of squared goal values) to plan for. These are just examples; I hope they help someone improve on the current code, which is a good base.
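The discontentment idea above can be sketched in a few lines. This is my own sketch of the utility-style goal selection the comment describes, not code from the tutorial: square each goal's insistence value, sum them, and pick the action whose predicted outcome leaves the lowest total.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class GoalSelection
{
    // Discontentment = sum of squared goal values; high-valued goals hurt
    // disproportionately, which is why Bathroom = 5 dominates the total.
    public static int Discontentment(Dictionary<string, int> goals) =>
        goals.Values.Sum(v => v * v);

    // Predict the goal values after applying an action's deltas
    // (assumes every delta key already exists in the goal set; clamped at 0).
    public static Dictionary<string, int> Apply(
        Dictionary<string, int> goals, Dictionary<string, int> deltas)
    {
        var result = new Dictionary<string, int>(goals);
        foreach (var d in deltas)
            result[d.Key] = Math.Max(0, result[d.Key] + d.Value);
        return result;
    }
}
```

With the numbers above, EatSnack leaves {Hunger = 2, Bathroom = 5} for a discontentment of 4 + 25 = 29, while UseBathroom leaves {Hunger = 4, Bathroom = 0} for 16 + 0 = 16, so the agent visits the bathroom first.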
damn. This video answers my previous question. Thank you!
OMG, that code style for C#, it's hurting my eyes.
Why not to use this https://msdn.microsoft.com/en-us/library/ff926074.aspx ?
How old are you? And did you learn all this by yourself, or did you study it in college or high school or something?
i've just recently found your channel. And i just wanted to thank you. It's been amazing learning from you! much appreciated!
Spanish tutorial, please!
will this be useful for turn-based strategy games?
Your voice is adorable <3
Every time you say bool I hear bull, and then I get confused
How do I implement priority for different goals?
android, windows, or apple phone
If I wanted my enemy to check if they have the resource before they decided to take the attack action, where would I put that?
I tried making a precondition in the attack action, but I can't do anything like "addPrecondition(cost < currentStamina)". Then I tried making an Update() in the attack script and that was a big no-no. All the preconditions as part of this system have been set by other actions. Any ideas?
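One common way to handle this in GOAP implementations is a procedural precondition: a method on the action that the planner calls at plan time, for checks that don't fit the symbolic key/value preconditions. The sketch below uses assumed names (Agent, AttackAction, CheckProceduralPrecondition), which may not match the tutorial's code exactly:

```csharp
// Sketch: dynamic checks like "cost < currentStamina" can live in a
// procedural precondition that the planner evaluates when building a plan,
// instead of in the static key/value precondition set.
class Agent
{
    public float CurrentStamina;
}

class AttackAction
{
    public float StaminaCost = 10f;

    // Called by the planner before the action is considered usable;
    // returning false excludes the action from the current plan.
    public bool CheckProceduralPrecondition(Agent agent)
    {
        return agent.CurrentStamina >= StaminaCost;
    }
}
```

That keeps the stamina check out of Update() entirely: the planner simply never includes the attack in a plan when the agent can't afford it.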
Hi, I have a question. You said at this time, https://youtu.be/n6vn7d5R_2c?t=448, to stay alive "and" damage the player. That means both goals have to be satisfied to be added to the planner. My question is: is it possible to perform either one of the goals?
Could you make a top-down RPG game series? Tutorials like that are really scarce on YouTube.
Kind of disappointed that what we call "AI" is often just recipes/algorithms for what to do if XXX has a certain value, or XXX is so many units away from YYY, or "do-while ZZZ" if PPP is in RRR conditions. I wish there were AIs that could actually learn and create their own recipes/algorithms in real time (at least in games). Imagine if AIs could evolve to prioritize survival or victory.
It's the shrine of Makhleb!
You have a cutie patootie voice.
I saw GoapPlanner.plan is called in the Update function. Couldn't that cause performance issues if we have multiple AI agents, or a slightly more complicated plan tree?
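It can, and one common mitigation is to replan only when the current plan is empty or an action has failed, rather than every frame. A small standalone sketch of that caching idea (my own names, not the tutorial's):

```csharp
using System;
using System.Collections.Generic;

// Sketch: cache the current plan and only invoke the expensive planner
// when the plan runs out or an action fails mid-plan.
class PlanCache
{
    private Queue<string> currentPlan = new Queue<string>();

    public Queue<string> GetPlan(Func<Queue<string>> expensivePlanner, bool actionFailed)
    {
        if (actionFailed)
            currentPlan.Clear();              // invalidate on failure

        if (currentPlan.Count == 0)
            currentPlan = expensivePlanner(); // replan only when needed

        return currentPlan;
    }
}
```

With multiple agents you could additionally stagger replanning across frames (e.g. one agent per frame) so the cost never spikes.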
Nicely explained approach; I'm baffled by how many views you actually have with this video.
I was a C# programmer at the beginning of my career, and I feel a little fuzzy and nostalgic seeing all this 🙂
Good job; I have a pretty decent idea of how it would work 🙂
I love your lessons; I found your channel recently and I'm enjoying it.
And I thought I knew c#….
I think you should use Enums as key in key value pairs instead of strings. Magic strings are error-prone and hard to refactor.
Dat Wolf though. Kind of cute.
I am glad I can use your crazy wingless toothy corvid in my own projects.
Not that difficult. I got it all when I saw the world state and actions that change the state. The graph is the most interesting stuff if it grows enough to not be just a search.
I love your lessons, I found your channel recently, I'm enjoying it 🙂
Somebody finally made a video on the GOAP! Thanks so much! ^-^
You gave me an idea!! I'll make pixel art assets for YouTube game dev tutors to use in their tutorials! Completely for free!
I vote for using Unreal. A single behaviour tree can communicate about an agent's decision making logic in a more visual and human-readable manner than a complex-looking set of C# classes and interfaces.
I wish there was a video on the hardware of the Nintendo Entertainment System. I know it had a CPU similar to the Atari 2600's. It just seems to have very unique hardware; I think it has a graphics chip called the Picture Processing Unit. I do searches on it, but I never find anything good about it.
goal driven behaviour?
Thanks for these videos!
I would like to see you do a video about making a Game Design Document ^_^
this is tooo confusing