I’m really excited to participate in this challenge. But there are a few things that currently make this competition far from objective:
1) The “Leaderboard” is showing the top 16 submissions across all users. Instead it should show each person’s single best submission.
2) The evaluation of scores is super leaky, because we can see the evaluation map. Someone could recreate this map and train their agent only on it; the agent may perform really well on this map while being poor on other maps. Please follow the practice Kaggle uses: two leaderboards, where the private leaderboard is scored on a private map that no participant has ever seen, or, even better, on multiple private maps.
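To make the concern in point 2 concrete, here is a minimal sketch of why a single visible evaluation map leaks. The agent, the map names, and the scoring function below are all hypothetical; the point is only that a public score computed on a known map can be maximized by overfitting, while a private score on held-out maps exposes that.

```python
def evaluate(agent, maps):
    """Average the agent's score over a set of maps (hypothetical scoring in [0, 1])."""
    return sum(agent(m) for m in maps) / len(maps)

# Hypothetical agent overfit to the single public map "map_0":
# perfect there, weak everywhere else.
def overfit_agent(map_name):
    return 1.0 if map_name == "map_0" else 0.1

public_maps = ["map_0"]                      # visible to all participants
private_maps = ["map_1", "map_2", "map_3"]   # hidden until the deadline

public_score = evaluate(overfit_agent, public_maps)    # looks great: 1.0
private_score = evaluate(overfit_agent, private_maps)  # reveals the overfitting
```

With two leaderboards, the public score would still look impressive, but the final ranking on the private maps would reflect actual generalization.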
This topic was modified 1 year, 7 months ago by heyt0ny.