RoboCup Rescue Simulator
The RoboCup Rescue (RCR) simulator is a real-time distributed simulation system built from several modules connected through a network via a central kernel, which manages communications among these modules. The RCR simulator recognises six different types of agents.
For the experiments detailed here, the police forces (PF) use a simple plan. Each PF keeps a list of blocked roads, provided by the police office, ordered by the distance from the blockage to the agent's current position. If an agent is clearing a road, it keeps doing so until one of the lanes becomes passable. Otherwise, it consults its list for the next blocked road. If the list is empty, the agent searches the scenario for other blockages (search action). Classes and configuration documents will be available on the resource page.
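The plan above can be sketched roughly as follows; the class name, the action tuples, and the lane-passability check are illustrative assumptions rather than the simulator's actual API:

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

class PoliceForce:
    """Sketch of the simple PF plan (hypothetical, not the RCR API)."""

    def __init__(self, position):
        self.position = position
        self.blocked_roads = []   # blockage positions sent by the police office
        self.current_road = None  # road currently being cleared, if any

    def receive_blockages(self, blockages):
        """Store blockages ordered by distance from the agent's current position."""
        self.blocked_roads = sorted(blockages, key=lambda b: distance(self.position, b))

    def next_action(self, lane_is_passable):
        """Choose the next action following the simple plan."""
        # Keep clearing the current road until one of its lanes becomes passable.
        if self.current_road is not None:
            if lane_is_passable(self.current_road):
                self.current_road = None
            else:
                return ("clear", self.current_road)
        # Otherwise take the closest blocked road from the list.
        if self.blocked_roads:
            self.current_road = self.blocked_roads.pop(0)
            return ("clear", self.current_road)
        # Empty list: search the scenario for other blockages.
        return ("search", None)
```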
To highlight the need for a planning architecture in this application, let us examine an experiment in which the police force agents implement the plan described above, while the police office receives requests to clear roads from other operational agents and broadcasts these requests to its police forces. In other words, the police office ensures that all its subordinate agents share its knowledge of the roads.
According to the graph, the Move curve is always increasing. This means that the police forces receive requests to clear roads from the police office throughout the entire simulation. In an efficient team we would expect this rate to start decreasing at some point, and the sooner it decreases, the better the team performance. It is also important that the police forces deal quickly with incoming requests so that they can start searching for blocked roads by themselves. Thus, unlike the Move curve, we would expect the Search curve to start increasing at some point.
Using our planning architecture, agents are provided with a coordination structure through which they can report the execution, completion, or failure of activities. In addition, it is possible to implement handlers to deal with the "clear" activity in particular. For this experiment we implemented a handler called "SimpleAllocation" that uses these reports and information about the environment to generate an efficient delegation of activities to police forces.
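A minimal sketch of what such an allocation handler might look like, assuming a precomputed cost table and simple string identifiers for agents and activities (none of these names come from the actual implementation):

```python
class SimpleAllocation:
    """Hypothetical allocation handler: delegates each clear activity to the
    cheapest idle police force and reacts to completion/failure reports."""

    def __init__(self, costs):
        # costs[(agent, activity)] -> estimated cost in cycles (assumed given)
        self.costs = costs
        self.busy = set()   # agents currently executing an activity
        self.table = {}     # activity -> agent (the allocation table)

    def delegate(self, activity):
        """Assign the activity to the cheapest idle police force."""
        idle = [a for (a, act) in self.costs if act == activity and a not in self.busy]
        if not idle:
            return None
        agent = min(idle, key=lambda a: self.costs[(a, activity)])
        self.busy.add(agent)
        self.table[activity] = agent
        return agent

    def report(self, activity, status):
        """Handle a completion or failure report from a police force."""
        agent = self.table.pop(activity, None)
        if agent is not None:
            self.busy.discard(agent)
        if status == "failure":
            # A failed activity is delegated again to another idle agent.
            return self.delegate(activity)
        return None
```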
The curves in the graph represent the behaviour we expect from the police forces. The Move curve peaks around cycle 70 and then starts to decrease. The Search curve shows the opposite behaviour, indicating that the police forces have actually finished their delegated activities and are returning to search actions. Finally, the Clear curve also demonstrates better performance: the integral of this curve is larger than that of the same curve in the previous experiment.
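The integral comparison can be illustrated with the trapezoidal rule; the sampled Clear rates below are made-up values, not data from the experiments:

```python
def area_under_curve(values, step=1.0):
    """Approximate the integral of a sampled curve with the trapezoidal rule."""
    return sum((values[i] + values[i + 1]) * step / 2.0
               for i in range(len(values) - 1))

# Hypothetical Clear rates sampled every 10 cycles (illustrative only).
clear_exp1 = [0, 1, 2, 2, 1, 1, 0]  # first experiment
clear_exp2 = [0, 2, 4, 4, 3, 2, 1]  # second experiment

# A larger area under the Clear curve means more clearing work done overall.
better = area_under_curve(clear_exp2, step=10) > area_under_curve(clear_exp1, step=10)
```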
In the last experiment, the first report is sent when
agents start the execution of their activities. In this new
version, the first report is generated as soon as a plan is
created (commitment). Note that if there is a long period between
the plan generation and the plan execution, the police office will
also spend a long period unsure about the status of this activity.
This new version also requires police forces to send progress updates whenever plan information changes. In this experiment, when a police force pf commits to performing an activity ac, it also sends the cost of ac to its police office. The cost here is given by the time, in cycles, that pf will spend reaching the blockage, plus the time to clear it. However, this cost can change due to, for example, problems along the path or wrong estimations (pf usually estimates the clearing time at the moment of commitment, because it has not yet seen the blockage). As the allocator handler uses the cost values during delegation, progress updates help it keep its allocation table consistent with the actual situation of the police forces, improving the allocation process.
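The commitment cost and progress-update mechanism could be sketched as follows; the cycle counts and message format are assumptions for illustration, not the experiment's actual protocol:

```python
class CommittedActivity:
    """Hypothetical sketch: cost reported at commitment, updated on change."""

    def __init__(self, travel_cycles, clear_cycles):
        # Cost = cycles to reach the blockage + estimated cycles to clear it.
        self.reported_cost = travel_cycles + clear_cycles

    def commit_report(self):
        """First report: sent as soon as the plan is created (commitment)."""
        return {"status": "committed", "cost": self.reported_cost}

    def progress_update(self, new_travel, new_clear):
        """Send an update only if the re-estimated cost has changed."""
        new_cost = new_travel + new_clear
        if new_cost != self.reported_cost:
            self.reported_cost = new_cost
            return {"status": "in-progress", "cost": new_cost}
        return None  # nothing changed; no message needed
```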
If we compare this graph with that of the second experiment, we can also notice that the Move and Search curves are more regular and narrower. This indicates that the police forces finish their delegated activities faster than in the previous experiment, returning sooner to their original action of searching for blockages by themselves.