Training and evaluation env is different!


This topic contains 7 replies, has 5 voices, and was last updated by  heyt0ny 3 years, 10 months ago.




    For a week I’ve been trying to reproduce my training results on the leaderboard, and I’m failing at this task.

    I’ve encountered several differences between the training and evaluation envs:
    1) during training the fisheye distortion is off by default, while during evaluation it’s on
    2) during training you don’t have to convert actions to wheel power, while during evaluation you do
    3) during training you can use frame_skip, while during evaluation you get an error:

      File "", line 22, in make_env
    TypeError: __init__() got an unexpected keyword argument 'frame_skip'
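    Regarding point 2: the conversion can be reproduced locally with a small action wrapper. Below is a minimal sketch of the (velocity, steering) → wheel-duty conversion that gym-duckietown’s DuckietownEnv applies before handing actions to the simulator. The constants (gain, trim, wheel radius, baseline, motor constant, limit) are assumed default calibration values, so treat this as an approximation, not the exact evaluation code.

    ```python
    def steering_to_wheel_duty(vel, angle,
                               gain=1.0, trim=0.0, radius=0.0318,
                               k=27.0, limit=1.0, wheel_dist=0.102):
        """Convert a (velocity, steering angle) action into (left, right)
        wheel duty cycles, mirroring the differential-drive conversion in
        gym-duckietown's DuckietownEnv. Constant values are assumed
        defaults and may not match the evaluation container exactly."""
        # Per-motor inverse constants, adjusted by gain and trim
        k_r_inv = (gain + trim) / k
        k_l_inv = (gain - trim) / k

        # Wheel rotation rates from the differential-drive model
        omega_r = (vel + 0.5 * angle * wheel_dist) / radius
        omega_l = (vel - 0.5 * angle * wheel_dist) / radius

        # Rotation rate -> duty cycle, clipped to the motor limit
        u_r = max(min(omega_r * k_r_inv, limit), -limit)
        u_l = max(min(omega_l * k_l_inv, limit), -limit)
        return u_l, u_r
    ```

    Wrapping the training env’s step() with a conversion like this makes the training action space match what the evaluator expects.
    
    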

    So how do I make sure that my training env is the same as the evaluation env? For now it seems that it isn’t.

    Thank you.

    • This topic was modified 3 years, 11 months ago by  heyt0ny.
    Liam Paull

    Hi Heytony,

    Some changes were made a while ago that might not have propagated to you, so I just want to be sure. I think most of what you describe is handled by the change from “remote2” to “remote3” in the protocols (see here for details about the protocols).
    For example, make sure that in all of your submission.yaml files (e.g. this one) you have r3 and not r2 in the challenge specification. If you pulled recently this should be the case, but let’s verify.
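    For reference, the relevant line looks roughly like this (the exact challenge name here is an assumption, inferred from the branch names mentioned in this thread; check your own template):

    ```yaml
    # submission.yaml -- sketch; field layout follows the AIDO templates
    challenge: aido1_LF1_r3-v3   # must reference r3, not r2
    ```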

    I think #1 and #2 in particular should be handled by this change; for #3 we should verify. Will get back soon.

    Thanks for the note and thanks for your help.


    Andrea Censi

    This is a separate issue.

    Tony’s submissions use the right protocol (otherwise it would refuse to connect).

    The issue is that the training scripts use the old environment.


    Bhairav Mehta

    Hi Tony,

    Would you be able to somehow share your code, or even some information about it?

    What template did you start from?
    Are you using ROS, PyTorch, or TensorFlow? And are you trying to do RL, imitation learning, supervised learning, or classical approaches?

    This will help us debug your issue.



    For training I’m using gym-duckietown, branch aido1_lf1_r3-v3. In this branch, points 1 and 2 are present.

    I took the template from challenge-aido1_LF1-template-pytorch, branch remote3-v3, and modified it to load my net and its weights. Yes, I can share my full submission code, but only privately with the organizers, if you provide an email address or something.

    Thank you in advance.

    Florian Golemo


    Thanks, I’ve emailed to



    I finally found the problem… It was in NormalizeWrapper: because I use Python 3 for training but Python 2 is used for evaluation, I had to add a float cast:
    return (obs - self.obs_lo) / float(self.obs_hi - self.obs_lo)
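    For anyone hitting the same thing: under Python 2, `/` on two integers performs integer division, so if `obs_hi` and `obs_lo` happen to be ints, the normalized observation silently truncates. A minimal standalone version of the fixed computation (names follow the snippet above; this is a sketch of the wrapper’s observation method, not its full code):

    ```python
    def normalize(obs, obs_lo, obs_hi):
        # Cast the denominator to float so the division is true division
        # even under Python 2, where int / int truncates toward zero.
        return (obs - obs_lo) / float(obs_hi - obs_lo)
    ```

    With the cast, training under Python 3 and evaluation under Python 2 produce the same observations.
    
    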

