From: Daniel Axtens
Date: Fri, 10 Sep 2010 04:18:31 +0000 (+0800)
Subject: fix USINGPYTHON, justify
X-Git-Tag: v01~4
X-Git-Url: https://git.ucc.asn.au/?p=progcomp10.git;a=commitdiff_plain;h=00de250e44d68ffdc4f19b5476bdfc335c18ee19

fix USINGPYTHON, justify
---

diff --git a/src/USINGPYTHON.txt b/src/USINGPYTHON.txt
index b11a686..432aa9b 100644
--- a/src/USINGPYTHON.txt
+++ b/src/USINGPYTHON.txt
@@ -1,19 +1,18 @@
 USINGPYTHON.txt: A 10 step guide to writing an agent in Python.

-1. Pick a name for your agent. Make sure the name is a valid python identifier. Be original.
+1. Pick a name for your agent. Make sure the name is a valid python
+identifier. Be original.

-2. Open SampleAgents.py. Pick a sample agent to copy-paste (so you don't have to type out the function definitions).
+2. Open SampleAgents.py. Pick a sample agent to copy-paste (so you
+don't have to type out the function definitions).

-3. Copy the sample agent code into a new file in the "agents" directory. Make sure the file has the same name as the agent.
+3. Copy the sample agent code into a new file in the "agents" directory.
+Make sure the file has the same name as the agent.

-4. Modify the code to do your bidding.
-
-4a. If you wish to implement your own learning agent based on BaseAgent, rather than use the limited functionality of LearningAgent, insert the following code snippet inside your class:
-
-    def Results (self, foeName, wasAttacker, winner, attItem, defItem, bluffItem, pointDelta):
-        BaseAgent.Results (self, foeName, wasAttacker, winner, attItem, defItem, bluffItem, pointDelta)
-        # your own code goes here
+4. Add the following lines to the beginning of your file:
+from uccProgComp import BaseAgent, LearningAgent, RandomAttack
+from rpsconst import *

 5. Create an arena in which your agent can battle:

@@ -21,22 +20,30 @@ USINGPYTHON.txt: A 10 step guide to writing an agent in Python.
     5.2. add "from agents.<YourAgent> import <YourAgent>"

-    5.3. modify the "Agents =" line to include your agent, and take out any agents you don't want to battle.
+    5.3. modify the "Agents =" line to include your agent, and take out
+any agents you don't want to battle.

 6. Watch your agent in action: ./simulate -v -a MyArena

 7. Oh no, my agent dies very quickly: what's going on?

-    7.1 insert print statements in your python module. (Hint: prefix them with self.id so you can identify different agents)
+    7.1 insert print statements in your python module. (Hint: prefix them
+with self.id so you can identify different agents)

-    7.2 Run "./simulate -v -n 1 -a MyArena" to start with only 1 of each agent.
+    7.2 Run "./simulate -v -n 1 -a MyArena" to start with only 1 of each
+agent.

-    7.3 Change the agents against which you're battling in MyArena.py so that you have a predictable opponent.
+    7.3 Change the agents against which you're battling in MyArena.py so
+that you have a predictable opponent.

-    7.4 Edit conf.py, and set DEBUG=True. Don't forget to reset it when you're done.
+    7.4 Edit conf.py, and set DEBUG=True. Don't forget to reset it when
+you're done.

-8. Once your agent works to your satisfaction, try it both on short and long durations (100 and 1000 rounds: see MAX_ITERATIONS in conf.py)
+8. Once your agent works to your satisfaction, try it both on short and
+long durations (100 and 1000 rounds: see MAX_ITERATIONS in conf.py)

-9. If the rolling scoreboard has been opened on progcomp.ucc.asn.au/, submit it there! Otherwise, sit tight.
+9. If the rolling scoreboard has been opened on progcomp.ucc.asn.au/,
+submit it there! Otherwise, sit tight.

-10. Watch its progress on the scoreboard and adjust your strategy accordingly.
\ No newline at end of file
+10. Watch its progress on the scoreboard and adjust your strategy
+accordingly.
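
As an illustration of steps 1 to 4, and of the Results hook quoted in the removed step 4a, a minimal agent file might look like the sketch below. Only the two import lines and the Results signature are taken from the diff above; the class name MyAgent, the comments and the example print are assumptions, and the move-choosing methods the engine requires should be copied from SampleAgents.py as steps 2 and 3 instruct.

# agents/MyAgent.py -- a sketch only.  Everything other than the two imports
# and the Results signature quoted in the diff above is an assumption, not
# the project's confirmed API.
from uccProgComp import BaseAgent, LearningAgent, RandomAttack
from rpsconst import *

class MyAgent (BaseAgent):
    # The move-choosing methods are omitted here: copy them from a sample
    # agent in SampleAgents.py (steps 2 and 3), since their names and
    # signatures are not shown in the diff.

    def Results (self, foeName, wasAttacker, winner, attItem, defItem, bluffItem, pointDelta):
        # Let BaseAgent do its own bookkeeping first, as the removed step 4a
        # of the old guide did.
        BaseAgent.Results (self, foeName, wasAttacker, winner, attItem, defItem, bluffItem, pointDelta)
        # Your own learning code goes here.  When debugging, prefix any
        # prints with self.id (step 7.1), for example:
        # print self.id, "fought", foeName, "for", pointDelta, "points"

Save the file as agents/MyAgent.py so that the file name matches the agent name, as step 3 requires.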
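The arena from step 5 is only partly visible in the diff (the import in step 5.2 and the "Agents =" line in step 5.3), so the following MyArena.py is likewise a guess: whether Agents holds classes or something else is not shown, and the real file should be made by copying the sample arena as step 5 describes.

# MyArena.py -- illustrative only; copy and edit the project's sample arena
# rather than writing this from scratch.  Only the import style (step 5.2)
# and the "Agents =" line (step 5.3) come from the guide; the rest is assumed.
from uccProgComp import RandomAttack      # a stock opponent, named in step 4's import line
from agents.MyAgent import MyAgent        # your own agent, as in step 5.2

# Step 5.3: include your agent and take out any opponents you don't want.
# While debugging (step 7.3), keep one predictable opponent such as RandomAttack.
Agents = [MyAgent, RandomAttack]

Run "./simulate -v -a MyArena" to watch it (step 6), or "./simulate -v -n 1 -a MyArena" to start with only one of each agent (step 7.2).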