November 1, 2018 at 1:12 pm #29290
For a week I’ve been trying to reproduce my training results on the leaderboard, and I’m failing at this task.
I’ve encountered several differences between the training and evaluation environments:
1) in training the fisheye is off by default, while in evaluation it’s on
2) in training you don’t have to convert actions to wheel power by default, while in evaluation you do
3) in training you can use frame_skip, while in evaluation you get an error:
File "solution.py", line 22, in make_env
    env.__init__(frame_skip=config['frame_skip'])
TypeError: __init__() got an unexpected keyword argument 'frame_skip'
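As a workaround for the last point, frame skipping can live in a wrapper around the environment instead of being passed to the constructor. Below is a minimal sketch: `FrameSkipWrapper` is a hypothetical helper, not part of the challenge API, and in practice it would subclass `gym.Wrapper`; a plain object keeps the example self-contained.

```python
class FrameSkipWrapper(object):
    """Repeat each action `skip` times, summing the rewards.

    Sketch only: the real implementation would subclass gym.Wrapper.
    """

    def __init__(self, env, skip=3):
        self.env = env
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward = 0.0
        obs, done, info = None, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:  # stop early if the episode ends mid-skip
                break
        return obs, total_reward, done, info
```

Because the environment itself is untouched, the same solution code can run against the evaluation environment, which does not accept a frame_skip argument.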
So how can I make sure that my training env is the same as the evaluation env? Because for now it seems that it isn’t.
Thank you.
November 1, 2018 at 1:54 pm #29293
There were some changes a while ago that might not have propagated to you, so I just want to be sure. I think most of what you describe is handled by the change from “remote2” to “remote3” in the protocols (see here for details about the protocols).
For example, make sure that in all of your submission.yaml files (e.g. this one) you have r3 and not r2 in the challenge specification. If you pulled recently this should be the case, but let’s verify.
I think #1 and #2 in particular should be handled by this change; for #3 we should verify. Will get back soon.
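To double-check this mechanically, a small script can scan a checkout for submission.yaml files that still mention the old protocol. This is only a sketch: the bare "r2" token and the idea that it appears verbatim in the file are assumptions based on this post, so adjust the search string to whatever your challenge specification actually contains.

```python
import os

def find_stale_submissions(root, old_token="r2"):
    """Return paths of submission.yaml files that still contain old_token."""
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name == "submission.yaml":
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    if old_token in f.read():
                        stale.append(path)
    return stale
```

Running `find_stale_submissions(".")` over the repository root lists every submission file that still needs the r2-to-r3 update.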
Thanks for the note and thanks for your help.
November 1, 2018 at 2:08 pm #29294
This is a separate issue.
Tony’s submissions use the right protocol (otherwise it would refuse to connect).
The issue is that the training scripts use the old environment.
November 1, 2018 at 2:36 pm #29305
Would you be able to share your code somehow, or at least some information about it?
What template did you start from?
Are you using ROS, PyTorch, or TensorFlow? And are you trying to do RL, imitation learning, supervised learning, or classical approaches?
This will help us debug your issue.
November 1, 2018 at 3:38 pm #29306
For training I’m using gym-duckietown, branch aido1_lf1_r3-v3. In this branch, points 1 and 2 are present.
I took the template from remote 3 v3 and modified it to load my net and its weights. Yes, I can share my full submission code, but only privately with the organizers.
If you provide an email address or something.
Thank you in advance.
November 2, 2018 at 3:02 am #29310
November 2, 2018 at 11:25 am #29359
November 5, 2018 at 9:20 am #29509
I finally found the problem… It was in NormalizeWrapper: because I use Python 3 for training but Python 2 is used for evaluation, I had to add a float cast:
return (obs - self.obs_lo) / float(self.obs_hi - self.obs_lo)