Replies: 2 comments
Dear @NONONOexe, The formal representation may be wrong: the $TP_{ji}^k$ formula returns the step value, while the ranges tested are based on the raw value. Please see how this is calculated in column C, rows 23-26, of the spreadsheet. Therefore, I do not consider the formulas described in the rules to be incorrect or to have any scale issues. However, I do not have an answer to the discrepancy between the rules description and the actual calculation for the competitions. @ffaraji or @modaresimr may provide an explanation, since they implemented the Python code for calculating the score during the competition and ran most of the competitions. My answer to your question is to use the formula described in the Rules document. Please let me know if you need any further clarification concerning the spreadsheet I shared. Kind regards,
Dear @gnardin, Thank you for your explanation. Best regards,
I would like to confirm an inconsistency I found in the calculation of competition rankings across the rules, the programs, and past competition results.
According to the 2025 competition rules, the ranking should be calculated as described in Section "3.3 Ranking" of this document. This procedure has remained unchanged since it was originally proposed in the 2015 competition.
In this calculation, I found a problem with the following formula:
Here, the variable $MSS_{j/step}^k$ is normalized according to the following:
However, the variable $SC_{ji}^k$ represents a raw score, so the two are on different scales, which makes the calculation inconsistent.
When calculating MSS, its values increase exponentially from near zero as the step approaches $2 \times n$ ($n$ is the number of teams). Therefore, I think the formula needs to be adjusted so that the MSS values can function properly as step boundaries; for example, by subtracting the minimum score from $SC_{ji}^k$ to use it as the baseline.
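The scale mismatch and the suggested baseline adjustment can be sketched as follows. This is a minimal illustration, not the actual scoring code: the `scores` and `boundaries` values are made up, and the assignment of a score to the largest boundary it reaches is an assumption for demonstration; the real MSS boundaries come from the formula in the rules.

```python
def assign_step(score, boundaries):
    """Return the index of the largest step boundary the score reaches (0 if none)."""
    step = 0
    for i, b in enumerate(boundaries, start=1):
        if score >= b:
            step = i
    return step

scores = [120.0, 80.0, 45.0, 30.0]          # hypothetical raw scores SC
boundaries = [5.0, 10.0, 20.0, 40.0, 80.0]  # hypothetical MSS boundaries growing from near zero

# Raw scores never fall into the low steps, because they do not start near zero.
raw_steps = [assign_step(s, boundaries) for s in scores]

# Subtracting the minimum score as a baseline shifts the scores onto the same
# near-zero scale as the MSS boundaries, spreading teams across more steps.
baseline = min(scores)
adjusted_steps = [assign_step(s - baseline, boundaries) for s in scores]
```

With the made-up numbers above, the raw scores cluster in the top steps, while the baseline-adjusted scores spread from step 0 upward, which is the behavior the adjustment is meant to restore.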
On the other hand, the calculation of $MSS_{j/step}^k$ in the implementation (make_html.py) differs from the rules: the program uses a fixed step width. This program has remained the same for about 12 years. Unlike the rules, where the step widths are derived from the MSS formula, it can be confirmed from the code that the implementation defines the step width as `delta = best / (2.0 * n)`.

Looking at past competition results, this definition of `delta` is too large and does not match the published values. Instead, when I tried `delta = (max(scores) - ave(scores)) / n`, the values matched almost exactly.
As a result, we now have three different approaches: the formula described in the rules, `best / (2.0*n)`, and `(max(scores) - ave(scores)) / n`. My question is: which of these should be considered the correct method for calculating rankings in the RoboCupRescue Simulation competitions?
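To make the difference between the two fixed-width definitions concrete, here is a small sketch. Only the two `delta` formulas are taken from the discussion above; the sample `scores` are made up, and mapping a raw score to a step index via `floor(score / delta)` is an assumption for illustration, not necessarily what make_html.py does.

```python
import math

def step_width_code(best, n):
    # Step width as it appears in make_html.py
    return best / (2.0 * n)

def step_width_matching(scores, n):
    # Alternative step width that reportedly matches published results
    return (max(scores) - sum(scores) / len(scores)) / n

scores = [120.0, 80.0, 45.0, 30.0]  # hypothetical raw scores
n = len(scores)
best = max(scores)

d1 = step_width_code(best, n)        # 120 / (2 * 4) = 15.0
d2 = step_width_matching(scores, n)  # (120 - 68.75) / 4 = 12.8125

# Assumed mapping from raw score to step index for comparison purposes
steps_d1 = [math.floor(s / d1) for s in scores]
steps_d2 = [math.floor(s / d2) for s in scores]
```

With these sample values the two widths differ (15.0 vs. 12.8125), so the same raw scores land on different step indices, which is enough to change tie-breaking and, hence, published rankings.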