Introducing my dissertation: StarPlanner
It’s about time I shared my final year project with the world. Due to my country’s military obligations, I joined the army right after I got my bachelor’s degree, which meant I didn’t have time to polish some things. However, I had some time off, so here it is! My dissertation: StarPlanner – Demonstrating the Use of AI Planning in a Video Game.
StarPlanner is a StarCraft bot that implements a Goal-Oriented Action Planning (GOAP) agent architecture. The architecture includes a Blackboard for subsystem communication, a working memory, and a regressive planner for decision making. There are two levels of planning in StarPlanner: a high-level strategic planner that creates a plan depending on the current state of the game, and a low-level build planner that creates a plan to build or train the units required to carry out the high-level plan.
StarPlanner is written in Java and uses BWAPI (specifically, JBridge) to communicate with StarCraft. The project architecture is layered: at the lowest level there is an A* search engine. On top of that there is a generic GOAP architecture that uses the A* engine for its searching. And, finally, StarPlanner is a concrete implementation for StarCraft of the generic GOAP architecture.
Full source code and documentation along with the full report and videos as delivered to City College can be found at http://pekalicious.com/starplanner/ppeikidis.html
I wish I had more time for the project and had added a lot more actions from which the bot could choose. The biggest time-waster was trying to create an XNA 2D game from scratch. That was well before I discovered BWAPI in the middle of the semester. Of course, even after I switched to BWAPI, a lot of the functionality had to be created: controlling units, training units, etc. Although there is an extension of BWAPI called BWSAL that handles a lot of this for you, it hadn’t been ported to Java.
The GOAP architecture was done thanks to Edmund Long‘s MSc thesis “Enhanced NPC Behavior using Goal Oriented Action Planning”. Edmund implemented it in C++ in a 2D simulation he developed from scratch. I ported the code to Java and split it into the layers described earlier. This was the easy part.
The hardest part was modeling an RTS game in order to make decisions. Planning is a search algorithm that uses a world state, a goal state, and a set of actions to produce a plan (a sequence of actions that, when executed, will change the world state into the goal state). The actions contain preconditions and effects. The preconditions of an action are essentially a list of properties that must be true in the world state; otherwise, the action cannot be performed. If all preconditions are true, the action can be selected. A heuristic is used to decide which of the valid actions should be chosen. Finally, the effects of the chosen action alter the world state, and the cycle continues until the world state is the desired one.
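To make that cycle concrete, here is a minimal sketch in Java. It assumes boolean properties and uses a greedy pick of the first applicable action instead of StarPlanner's A*-driven search; all class and property names here are mine, not the actual StarPlanner API:

```java
import java.util.*;

public class GoapSketch {
    // An action is a name plus precondition and effect property maps.
    static class Action {
        final String name;
        final Map<String, Boolean> pre, eff;
        Action(String name, Map<String, Boolean> pre, Map<String, Boolean> eff) {
            this.name = name; this.pre = pre; this.eff = eff;
        }
    }

    // True when every property in 'conditions' already holds in 'state'.
    static boolean holds(Map<String, Boolean> state, Map<String, Boolean> conditions) {
        for (Map.Entry<String, Boolean> e : conditions.entrySet())
            if (!e.getValue().equals(state.getOrDefault(e.getKey(), false))) return false;
        return true;
    }

    // Greedy forward search: apply the first valid action whose effects
    // actually change the state, until the goal state holds.
    static List<String> plan(Map<String, Boolean> world,
                             Map<String, Boolean> goal,
                             List<Action> actions) {
        Map<String, Boolean> state = new HashMap<>(world);
        List<String> steps = new ArrayList<>();
        int guard = 0; // avoid looping forever if no plan exists
        while (!holds(state, goal) && guard++ < 50) {
            for (Action a : actions) {
                if (holds(state, a.pre) && !holds(state, a.eff)) {
                    state.putAll(a.eff); // effects alter the world state
                    steps.add(a.name);
                    break;
                }
            }
        }
        return holds(state, goal) ? steps : Collections.emptyList();
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
            new Action("BuildBarracks", Map.of(), Map.of("haveBarracks", true)),
            new Action("TrainMarine", Map.of("haveBarracks", true),
                       Map.of("haveMarines", true)));
        System.out.println(plan(new HashMap<>(), Map.of("haveMarines", true), actions));
        // [BuildBarracks, TrainMarine]
    }
}
```

A real planner replaces the "first valid action" pick with an A* heuristic over candidate states, which is exactly what the A* layer at the bottom of StarPlanner's architecture provides.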
So, initially the idea was to use any data type as a precondition and an effect. The world state would then have a set of integers, booleans, or anything really. For example, when I wanted to create a plan that trained 3 Marines, the goal state would have an integer “marines = 3” as a precondition. The plan would then be “Build Barracks” -> “Train Marine” -> “Train Marine” -> “Train Marine”. Unfortunately, this was harder than I thought, so I went with a boolean-based approach. However, this solution has other limitations. For example, a precondition can be “haveMarines = true”. But how many Marines? 10? 12? 100? So finally, I decoupled the units from the buildings, and now training is the responsibility of a TrainManager that constantly trains units (as long as there are resources) until a new plan is created.
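The TrainManager idea can be sketched as a tiny loop. This is a hypothetical illustration (the class name matches the post, but the fields, costs, and methods are mine, not the actual StarPlanner code): the planner only decides *that* Marines are needed, while the manager keeps training as long as minerals allow:

```java
import java.util.*;

public class TrainManagerSketch {
    static class TrainManager {
        private static final int MARINE_COST = 50; // mineral cost of a Marine
        private int minerals;
        private final List<String> trained = new ArrayList<>();

        TrainManager(int minerals) { this.minerals = minerals; }

        // Called every frame in a real bot: train until resources run out.
        void update() {
            while (minerals >= MARINE_COST) {
                minerals -= MARINE_COST;
                trained.add("Marine");
            }
        }

        int trainedCount() { return trained.size(); }
        int mineralsLeft() { return minerals; }
    }

    public static void main(String[] args) {
        TrainManager tm = new TrainManager(175);
        tm.update();
        System.out.println(tm.trainedCount() + " Marines, "
            + tm.mineralsLeft() + " minerals left");
        // 3 Marines, 25 minerals left
    }
}
```

This sidesteps the “how many Marines?” question entirely: the boolean world state stays simple, and unit counts become an emergent property of available resources rather than a planner precondition.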
The bot is not, by any stretch of the imagination, a good AI player. It only performs a limited number of actions and can hardly win a game (except when you are demonstrating it *cough*custom*cough*maps*cough). There were two reasons for this: first, I had to create managers that could successfully move units, battle against enemies, use unit abilities, upgrade, etc, etc, etc. And second, at the time, I wasn’t that good a player at StarCraft. I’m no pro, mind you, but I have been following the StarCraft II community (which, incidentally, launched right in the middle of my project) and realized how bad a player I was. A big thanks to Sean Plott and his Day[9] Daily videos, which made clear what is actually a strategy and what is a tactic in an RTS game.
All these delays also made it impossible to enter the first StarCraft AI Competition that was announced a while before I began the project. Another lost opportunity.
Even though a lot of things could have gone better, I am really satisfied with the results. When I started my dissertation 2 years ago, planning was a big deal in the world of Game AI, so I was kind of proud. I still am. In fact, I loved working on the project so much that my professional goal is to work in the Game AI industry!
After attending the first Hellenic Artificial Intelligence Summer School, I’m on my way to this year’s Paris Game AI Conference. All the while, I’m creating my first XNA video game for XBOX Live (coming up in a future post).
So yeah, once I’m done with my country’s military obligations this August, I’m flying to the USA to look for a job. Wish me good luck!