Training and evaluation env is different!

This topic contains 7 replies, has 5 voices, and was last updated by  heyt0ny 1 month ago.

Viewing 8 posts - 1 through 8 (of 8 total)
  • #29290

    heyt0ny
    Participant

    Hello!

    For a week I have been trying to reproduce my training results on the leaderboard, and I am failing at this task.

    I’ve encountered several differences between the training and evaluation environments:
    1) in training, the fisheye distortion is off by default, while in evaluation it’s on
    2) in training, you don’t have to convert actions to wheel power by default, while in evaluation you do
    3) in training you can use frame_skip, while in evaluation you get an error:

      File "solution.py", line 22, in make_env
        env.__init__(frame_skip=config['frame_skip'])
    TypeError: __init__() got an unexpected keyword argument 'frame_skip'
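
    To make point (2) concrete, here is a minimal sketch of the kind of action conversion meant there, using a generic differential-drive model. The function name and the gain/trim/baseline/radius parameters are illustrative assumptions, not the evaluator's exact conversion:

```python
# Illustrative sketch: mapping a (velocity, steering) action to per-wheel
# power with a generic differential-drive model. All parameter names and
# default values below are assumptions chosen for illustration; they are
# not the evaluator's exact conversion.
def action_to_wheel_power(velocity, steering, gain=1.0, trim=0.0,
                          baseline=0.1, radius=0.03, limit=1.0):
    # steering contributes opposite rotation to each wheel
    omega_term = 0.5 * steering * baseline
    # per-wheel rotation rate, with a gain/trim motor correction
    u_left = (gain - trim) * (velocity - omega_term) / radius
    u_right = (gain + trim) * (velocity + omega_term) / radius
    # clip to the allowed motor-power range
    u_left = max(min(u_left, limit), -limit)
    u_right = max(min(u_right, limit), -limit)
    return (u_left, u_right)

# Driving straight (steering = 0) commands equal power to both wheels.
print(action_to_wheel_power(0.02, 0.0))
```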
    

    So how can I make sure that my training environment is the same as the evaluation one? For now it seems that it isn’t.

    Thank you.

    • This topic was modified 1 month, 1 week ago by  heyt0ny.
    #29293
    Liam Paull
    Keymaster

    Hi Heytony,

    Some changes were made a while ago that might not have propagated to you, so I just want to be sure. I think most of what you describe is handled by the change from “remote2” to “remote3” in the protocols (see here for details about the protocols).
    For example, make sure that in all of your submission.yaml files (e.g. this one) you have r3 and not r2 in the challenge specification. If you pulled recently this should be the case, but let’s verify.
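
    As an illustration, the fragment in question might look something like this (the key name and exact value are assumptions here; the linked template is the authoritative reference):

```yaml
# Illustrative only: check the actual submission.yaml in the template.
challenge: aido1_LF1_r3-v3   # must reference the r3 protocol, not r2
```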

    I think #1 and #2 in particular should be handled by this change; #3 we should verify. Will get back to you soon.

    Thanks for the note and for your help.

    #29294

    Andrea Censi
    Keymaster

    This is a separate issue.

    Tony’s submissions use the right protocol (otherwise they would refuse to connect).

    The issue is that the training scripts use the old environment.

    #29305

    Bhairav Mehta
    Participant

    Hi Tony,

    Would you be able to somehow share your code, or even some information about it?

    What template did you start from?
    Are you using ROS, PyTorch, or TensorFlow? And are you trying to do RL, imitation learning, supervised learning, or classical approaches?

    This will help us debug your issue.

    #29306

    heyt0ny
    Participant

    For training I’m using gym-duckietown, branch aido1_lf1_r3-v3. In this branch, differences 1 and 2 are present.

    I took the template from challenge-aido1_LF1-template-pytorch, branch remote 3 v3, and modified it to load my net and its weights. Yes, I can share my full submission code, but only privately with the organizers, if you provide an email address or something similar.

    Thank you in advance.

    #29310
    Florian Golemo
    Participant

    @heytony: try [email protected] and CC me [email protected]

    #29359

    heyt0ny
    Participant

    Thanks, I’ve emailed [email protected].

    #29509

    heyt0ny
    Participant

    I finally found the problem… It was in NormalizeWrapper: because I use Python 3 for training but Python 2 is used for evaluation, integer division was silently flooring the result. I had to cast the denominator to float:
    return (obs - self.obs_lo) / float(self.obs_hi - self.obs_lo)
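
    For anyone hitting the same thing, here is a minimal sketch of the pitfall and the fix, runnable on both interpreters (obs_lo=0 and obs_hi=255 are assumed here as a uint8 image range; the real wrapper operates on image arrays):

```python
# Minimal sketch of a NormalizeWrapper-style normalization and the
# Python 2 integer-division pitfall. The class name and the 0/255
# bounds are assumptions for illustration.
class NormalizeWrapperSketch(object):
    def __init__(self, obs_lo=0, obs_hi=255):
        self.obs_lo = obs_lo
        self.obs_hi = obs_hi

    def normalize(self, obs):
        # Without float(), Python 2 floor-divides the two ints and the
        # normalized observation collapses to zero; float() forces true
        # division on both Python 2 and Python 3.
        return (obs - self.obs_lo) / float(self.obs_hi - self.obs_lo)

w = NormalizeWrapperSketch()
print(w.normalize(255))  # -> 1.0
print(w.normalize(51))   # -> 0.2
```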


