We are going to roll out an improvement to the LF and LFV challenges.
This change brings the following improvements:
- The robot will always start in the right lane, which is a legal position.
- The evaluation and visualization code will be richer, with more statistics plotted (example).
- The evaluation rulebook is slightly changed to fix a couple of bugs in how the metrics were computed.
What is going to happen is the following:
- The moment we update the evaluation code, all existing submissions will be set back to the "evaluation" state.
- The evaluators will then re-evaluate all of them. This will take 2-3 hours. During this time the leaderboards will be blank, and they will slowly re-populate as the evaluators do their job.
(To speed up the evaluation of your submissions, you can run "dts challenges evaluator".)
We increased the server capacity. This should mitigate the slowness issues due to the unexpected number of participants/submissions.
We updated the logic of how we compute the leaderboard; now only one submission per user appears on it.
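The deduplication can be pictured with a small Python sketch. The field names and the "higher score wins" rule are assumptions for illustration; the server's actual schema and ranking rule may differ:

```python
# Toy sketch: keep only one submission per user on the leaderboard.
# Assumes each submission is a dict with "user_id" and "score" keys,
# and that a higher score is better (both are illustrative assumptions).

def leaderboard(submissions):
    best = {}  # user_id -> that user's best submission so far
    for sub in submissions:
        prev = best.get(sub["user_id"])
        if prev is None or sub["score"] > prev["score"]:
            best[sub["user_id"]] = sub
    # Rank the surviving submissions by score, highest first.
    return sorted(best.values(), key=lambda s: s["score"], reverse=True)
```

With this, a user who submits many times still occupies a single leaderboard row.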
As we gained more participants, the Duckietown Challenges Server started to feel slow. The reason: we were a bit lazy, and some pages had O(n) implementations where O(1) was needed - loading all challenges/submissions/etc.
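The kind of fix involved can be illustrated with a toy Python sketch (the names are invented for illustration; the server's actual code differs). Instead of scanning every submission on each page load, a counter is maintained at write time so the read is constant-time:

```python
from collections import Counter

# O(n) pattern: every page load scans the full list of submissions.
def count_submissions_slow(all_submissions, challenge_id):
    return sum(1 for s in all_submissions if s["challenge_id"] == challenge_id)

# O(1) pattern: do the bookkeeping when a submission arrives,
# so serving a page is just a dictionary lookup.
class SubmissionStats:
    def __init__(self):
        self.counts = Counter()

    def on_submission(self, challenge_id):
        self.counts[challenge_id] += 1  # O(1) update at write time

    def count(self, challenge_id):
        return self.counts[challenge_id]  # O(1) read at page-load time
```

The trade-off is standard: a little extra work (and state) on each write in exchange for page loads that no longer grow with the number of submissions.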
We also updated the "dts challenges evaluate" command to be more robust. Please continue to report bugs, as this part is fragile by nature: it runs containers that spawn other containers on users' machines.
We have reports of problems with the "dts challenges evaluate" command (the local evaluator). This is a tricky command because it spawns a Docker container that itself spawns other Docker containers. A workaround, described here, involves running the evaluator's native Python code directly.
We are looking for a *web designer* to help us improve the appearance of the website.