When we tell people we are running a StarCraft: Brood War AI competition, the question we are asked most often is how a program can play the game at all. The answer is BWAPI, which was created by reverse engineering Brood War: it reads from and writes to the program memory space of Brood War in order to read game data and issue commands to the game.
There are other RTS game engines available for competitions as well. For an excellent, up-to-date overview of the state of the art in StarCraft AI techniques and descriptions of bot architectures, I highly recommend reading the following publications: "StarCraft Bots and Competitions" by D. Churchill, M. Preuss, F. Richoux, G. Synnaeve, A. Uriarte, S. Ontanon, and M. Certicky, and "A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft" by S. Ontanon, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss.
RTS AI research and competitions involve a tremendous amount of effort from many people, so I would like to thank those who have helped organize current and past competitions, those who have submitted bots, and those who have helped promote RTS AI in general. I would like to personally thank the current and past members of the UofA RTS AI research group for all their help over the years with promoting, organizing, and running these competitions, as well as for continuing to do world-class research in the area.
I would also like to thank those who have organized and run past and current StarCraft AI competitions. Michal Certicky has put forth a phenomenal effort every year in continuing to run and maintain the Student StarCraft AI Tournament, as well as its persistent online ladder and bot live stream, and has helped tremendously in promoting the field of RTS AI.
Last and certainly not least, I would like to extend a special thank you to Adam Heinermann for continuing to develop BWAPI, without which none of this research would be possible.
A total of 26 entrants competed across four tournament categories, which varied from simple combat battles to the full game of StarCraft. As this was the first year of the competition and little infrastructure had been created, each game of the tournament was run manually on two laptop computers and monitored by hand to record the results. No persistent data was kept, so bots could not learn about opponents between matches.
Tournament 1 was a flat-terrain unit micro-management battle consisting of four separate unit-composition games, which was won by FreSCBot. Tournament 2 was another micro-focused game with non-trivial terrain. Two competitors submitted for this category, with FreSCBot once again coming in 1st by beating Sherbrooke. Tournament 3 was a tech-limited StarCraft game on a single known map with no fog-of-war enforced.
Players were only allowed to choose the Protoss race, with no late-game units allowed. Tournament 4 was considered the main event, which involved playing the complete game of StarCraft: Brood War with fog-of-war enforced. The tournament was run in a random-pairing double-elimination format, with each match being a best of 5 games. Since computer programs written with BWAPI have no limit on the number of actions they can issue to the StarCraft game engine, certain behaviours are possible which were not intended by the developers, such as sliding buildings and walking ground units over walls; these sorts of actions are considered cheating and are not allowed in the tournament.
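As a rough illustration, the random-pairing double-elimination format with best-of-5 matches can be sketched in a few lines of Python. This is a simplified sketch, not the tournament software: each game's outcome is a coin flip standing in for a real StarCraft game, and the pairing of an odd bot out as a "bye" is an assumption about how such brackets are commonly handled.

```python
import random

def play_match(bot_a, bot_b, games=5, rng=random):
    """Best-of-5: the first bot to win 3 games takes the match.
    A coin flip stands in for an actual StarCraft game."""
    wins = {bot_a: 0, bot_b: 0}
    while max(wins.values()) < games // 2 + 1:
        wins[rng.choice([bot_a, bot_b])] += 1
    return bot_a if wins[bot_a] > wins[bot_b] else bot_b

def double_elimination(bots, rng=random):
    """Random-pairing double elimination: a bot is eliminated
    after losing its second match."""
    losses = {b: 0 for b in bots}
    alive = list(bots)
    while len(alive) > 1:
        rng.shuffle(alive)          # random pairing each round
        next_round = []
        for i in range(0, len(alive) - 1, 2):
            a, b = alive[i], alive[i + 1]
            winner = play_match(a, b, rng=rng)
            loser = b if winner == a else a
            losses[loser] += 1
            next_round.append(winner)
            if losses[loser] < 2:   # one loss: drop to losers' side
                next_round.append(loser)
        if len(alive) % 2:          # odd bot out gets a bye
            next_round.append(alive[-1])
        alive = next_round
    return alive[0]
```

Because a single unlucky pairing can knock a strong bot into the losers' bracket early, formats like this depend heavily on the random draw, which is exactly the complaint that later motivated the switch to round robin.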
A map pool of 5 well-known professional maps was announced to competitors in advance, with a random map chosen for each game. Tournament 4 was won by Overmind, a Zerg bot created by a large team from the University of California, Berkeley, which defeated the Terran bot Krasi0 by Krasimir Krastev in the finals. Overmind relied heavily on the powerful and agile Zerg Mutalisk flying unit, which it controlled to great success using potential fields.
Overmind's overall strategy consisted of an initial defense of Zerglings and Sunken Colony static-defense towers to protect its first expansion while it gathered enough resources to construct its initial Mutalisks. Once the Mutalisks had been constructed, it sent them to the enemy base and patrolled and harassed around the base's perimeter. If the bot did not win outright with this initial attack, it would slowly patrol and pick off any undefended units, eventually wearing the enemy down to the point where it could be overwhelmed with a final attack.
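Potential-field unit control of the kind Overmind used works by summing attractive forces toward targets and repulsive forces away from threats, then moving each unit a small step along the resulting vector. The sketch below is a minimal illustration of that idea, not Overmind's actual code; the force constants, falloff exponents, and step size are illustrative assumptions.

```python
import math

def potential_force(unit_pos, targets, threats,
                    attract=1000.0, repel=4000.0):
    """Sum attractive pulls toward targets and repulsive pushes
    away from threats; return the combined force vector."""
    fx = fy = 0.0
    ux, uy = unit_pos
    for tx, ty in targets:
        d = math.hypot(tx - ux, ty - uy) or 1e-6
        # attraction falls off with the square of distance
        fx += attract * (tx - ux) / (d * d)
        fy += attract * (ty - uy) / (d * d)
    for tx, ty in threats:
        d = math.hypot(tx - ux, ty - uy) or 1e-6
        # repulsion is stronger up close but decays faster
        fx -= repel * (tx - ux) / (d ** 3)
        fy -= repel * (ty - uy) / (d ** 3)
    return fx, fy

def next_step(unit_pos, targets, threats, step=8.0):
    """Move the unit a fixed step along the combined force."""
    fx, fy = potential_force(unit_pos, targets, threats)
    mag = math.hypot(fx, fy) or 1e-6
    ux, uy = unit_pos
    return ux + step * fx / mag, uy + step * fy / mag
```

Tuning the constants controls the behaviour: with repulsion much stronger than attraction at short range, fast units such as Mutalisks naturally orbit just outside enemy firing range, darting in only when the threat field weakens.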
The 2nd-place bot krasi0 played a very defensive Terran strategy, constructing Bunkers, Siege Tanks, and Missile Turrets for defense. After a certain number of units had been constructed it would then send an army of mechanical units to siege the enemy base. This bot performed quite well, losing only to Overmind during the competition. An excellent article about Overmind was later written by Ars Technica. There was also a man vs. machine exhibition match, in which Overmind was defeated by an expert human player.
This first version of UAlbertaBot played the Zerg race, with one main strategy that relied heavily on the Zerg Mutalisk flying unit. While the bot's combat micromanagement was well implemented, several major logic bugs in its early game and build-order planning meant that its overall performance in the competition was weak, and it was eliminated in the 3rd round of the bracket-style competition by the Terran bot krasi0. This first implementation of UAlbertaBot was plagued with technical problems, so the decision was made after the competition to completely re-write the bot for the next year's competition.
Unfortunately, the first year of the CIG competition met with several serious technical problems, stemming from the decision to use custom-made StarCraft maps rather than traditional human-tested maps, which caused the Brood War Terrain Analysis (BWTA) software to crash for many of the bots. Because of these frequent crashes, it was decided that no winner could be announced for the competition. UAlbertaBot was not submitted as an entry for this competition, as it was being completely re-written. Due to the low number of entries to Tournaments 1, 2, and 3 of the previous AIIDE competition, it was decided that the next AIIDE competition would consist only of the full game of StarCraft, with the same rules as Tournament 4 and no smaller micromanagement tournaments.
The tournament rules were also updated so that all entrants must submit the source code of their bot and allow it to be published after the competition is over, which was done for a few reasons. One reason was to lower the barrier to entry for future competitions: since programming a StarCraft AI bot is very time consuming, future entrants could download and modify the source code of previous bots to save considerable effort. Another reason was to more easily prevent cheating: with thousands of games being played in the tournament, we could no longer watch each game to detect whether any cheating tactics were being employed, but such tactics could be detected by inspecting the source code, which was not allowed to be heavily obfuscated.
The final reason was to help advance the state of the art in Starcraft AI by allowing future bots to borrow strategies and techniques of previous bots by inspecting their source code - ideally, all bots in future competitions should be at least as strong as the bots from the previous year.
The competition received 13 entrants. The tournament was run by Ben Weber on two laptops, with games played by manually starting StarCraft and creating and joining games by hand. As the physical demand was quite high, a simple random-pairing double-elimination tournament was played, with approximately 60 games in total. This drew some negative feedback that the results of an elimination-style tournament were quite dependent on pairing luck, so for the next competition we set out to eliminate all randomness from the tournament and play a round-robin format. A round-robin format requires far more games to be played, so over several months in the summer, Jason Lorenz (an undergraduate summer student of Michael Buro) and I wrote software that could automatically schedule and play round-robin StarCraft tournaments on an arbitrary number of locally networked computers.
Bot files, replays, and final results were accessed by clients via a shared Windows folder on the local network which was visible to all client machines. The initial version of this software allowed many times more games to be played in the same time period as the previous competition's 60, with each bot playing every other bot a total of 30 times. There were 10 maps in the competition, chosen from expert human tournaments and known to be balanced for each race, and they were available for download several months in advance on the competition website.
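Generating such a round-robin schedule and dealing the games out to client machines is conceptually simple. The sketch below illustrates the idea under stated assumptions: the function names, the map-cycling rule, and the client-assignment scheme are illustrative, not the actual tournament-manager software.

```python
from itertools import combinations

def round_robin_schedule(bots, maps, games_per_pair=30):
    """Every pair of bots plays games_per_pair games,
    cycling through the shared map pool."""
    schedule = []
    for a, b in combinations(bots, 2):
        for g in range(games_per_pair):
            schedule.append((a, b, maps[g % len(maps)]))
    return schedule

def assign_to_clients(schedule, n_clients):
    """Deal the game list out across client machines."""
    buckets = [[] for _ in range(n_clients)]
    for i, game in enumerate(schedule):
        buckets[i % n_clients].append(game)
    return buckets
```

With 13 bots, 10 maps, and 30 games per pairing, this produces 13 × 12 / 2 × 30 = 2,340 scheduled games, which shows why automated distribution over many machines was necessary compared to 60 hand-run games.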
The AIIDE competition was modeled on human tournaments, where the map pool and opponents are known in advance, in order to allow for some expert knowledge and opponent modeling. At the end of the five-day competition, Skynet placed 1st, UAlbertaBot placed 2nd, and Aiur placed 3rd.
Skynet's solid economic play and good early defense were able to hold off the more offensive Protoss bots UAlbertaBot and Aiur. UAlbertaBot now played the Protoss race, and is described in detail in the next paragraph. An interesting rock-paper-scissors scenario occurred among the top 3 finishers: Skynet beat UAlbertaBot in 26 games out of 30, UAlbertaBot beat Aiur in 29 games out of 30, and Aiur beat Skynet in 19 games out of 30. Overmind, the bot which won the previous AIIDE competition, did not enter; its team stated that it had been found to be very vulnerable to early-game aggression and was easily beaten by rushes from all three races.
The Overmind team also expressed that they did not want the source code of their bot to be published, which was now a rule in the competition. In its place, a team of undergraduate students from Berkeley submitted Undermind, a Terran bot which ended up placing 7th.
UAlbertaBot had been completely re-written by myself and Sterling Orsten (an MSc student of Michael Buro), and now played the Protoss race instead of the Zerg race played the previous year. The biggest reason for the switch was that, overall, we found Protoss strategies much easier to implement from a technical point of view, and they performed much more consistently in testing.
Zerg strategies relied heavily on intelligent building placement, a problem we had not spent much effort exploring in our research at that time. Because the Zerg are relatively weak defensively in the early game, their buildings must be placed so as to create a 'maze' into the base, delaying the enemy from reaching the worker units. The Protoss race did not have this issue, as its powerful Zealot and Dragoon units give it a relatively strong early-game defense.
Another reason for the switch to Protoss was the inclusion of the build-order search system, which had been implemented with the simpler Protoss building infrastructure in mind and did not work for the Zerg or Terran races. Given a goal expressed as a set of desired unit-type counts, this build-order planning system was able to automatically plan time-optimal build orders for those goals in real time during the competition, and produced far better results than the priority-based build-order system of BWSAL used in the previous version of UAlbertaBot.
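The core idea behind such a build-order search can be sketched as a depth-first branch-and-bound over action sequences, minimizing the time at which the goal counts are reached. The sketch below is heavily simplified: it uses a sequential economic model with made-up mineral costs, build times, and income rates, whereas the real system models concurrent production, gas, supply, and prerequisites from actual game data.

```python
INCOME = 0.7  # minerals per worker per second (made-up rate)

ACTIONS = {  # simplified, illustrative costs and prerequisites
    "Pylon":   {"minerals": 100, "time": 19, "requires": []},
    "Gateway": {"minerals": 150, "time": 38, "requires": ["Pylon"]},
    "Zealot":  {"minerals": 100, "time": 25, "requires": ["Gateway"]},
}

def apply_action(state, name):
    """Sequential model: wait for minerals, then build to completion."""
    t, minerals, counts = state
    a = ACTIONS[name]
    rate = INCOME * counts.get("Probe", 0)  # worker income
    if minerals < a["minerals"]:            # wait until affordable
        t += (a["minerals"] - minerals) / rate
        minerals = a["minerals"]
    minerals -= a["minerals"]
    t += a["time"]
    minerals += rate * a["time"]            # income while building
    new_counts = dict(counts)
    new_counts[name] = new_counts.get(name, 0) + 1
    return (t, minerals, new_counts)

def dfs(state, goal, plan, best):
    """Branch-and-bound: prune any branch already slower than best."""
    t, _, counts = state
    if t >= best[0]:
        return
    if all(counts.get(u, 0) >= n for u, n in goal.items()):
        best[0], best[1] = t, list(plan)
        return
    for name in goal:
        if counts.get(name, 0) >= goal[name]:
            continue
        if any(counts.get(r, 0) == 0 for r in ACTIONS[name]["requires"]):
            continue  # prerequisite not yet built
        plan.append(name)
        dfs(apply_action(state, name), goal, plan, best)
        plan.pop()

def plan_build_order(goal, start_state):
    best = [float("inf"), None]
    dfs(start_state, goal, [], best)
    return best  # [finish_time, action_sequence]
```

Even in this toy form, the search discovers the prerequisite ordering (Pylon before Gateway before Zealot) on its own; the real-time version additionally bounds the search with heuristics so a plan is always available within a game frame.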