This topic contains 3 replies, has 3 voices, and was last updated by heyt0ny 1 month ago.

#29612

    heyt0ny
    Participant

    Hi!

    My submissions fail. What does this error mean?
    https://challenges.duckietown.org/v3/humans/submissions/815

    If I run the submission locally, it works fine…

    #29613
    Florian Golemo
    Participant

    The exception is Exception: Giving up to connect to the gym duckietown server at host: evaluator. But that happens only after the experiment runs successfully (the log contains a whole episode), so it occurs either during reconnection or while closing the experiment. @Andrea? I feel like we’ve seen this before.

    #29615

    Bhairav Mehta
    Participant

    The issue, most likely, is that you’ve taken out the part of the code that looks like

    if "simulation_done" in info: break

    Please check your code against one of our submission templates and see if this is the issue.

    #29616

    heyt0ny
    Participant

    No, this line is present. It seems to be a problem with the frame skip.
    This code gives the error:

    
        try:
            # Then we make sure we have a connection with the environment and that it is ready to go
            cis.info('Reset environment')
            observation = env.reset()

            # While there is no signal of completion (simulation done)
            # we run the predictions for a number of episodes; don't worry, we have control over this part
            while True:
                # cis.info("OBS %s, %s, %s" % (str(observation.shape), str(observation.min()), str(observation.max())))
                # we pass the observation to our model, and we get an action in return
                action = model.predict(observation)

                if debug:
                    env.render()

                for f in range(config['frame_skip']):
                    observation, reward, done, info = env.step(action)
                    # here you may want to compute some stats, like how much reward you are getting
                    # notice, this reward may not be associated with the challenge score.

                    # it is important to check for this flag; the Evaluation Engine will let us know when we should finish
                    # if we are not careful with this, the Evaluation Engine will kill our container and we will get no score
                    # from this submission
                    if 'simulation_done' in info:
                        cis.info('simulation_done received.')
                        break
                    if done:
                        cis.info('Episode done; calling reset()')
                        env.reset()
    

    and this one doesn’t:

    
        try:
            # Then we make sure we have a connection with the environment and that it is ready to go
            cis.info('Reset environment')
            observation = env.reset()

            # While there is no signal of completion (simulation done)
            # we run the predictions for a number of episodes; don't worry, we have control over this part
            while True:
                # cis.info("OBS %s, %s, %s" % (str(observation.shape), str(observation.min()), str(observation.max())))
                # we pass the observation to our model, and we get an action in return
                action = model.predict(observation)

                if debug:
                    env.render()

                # for f in range(config['frame_skip']):
                observation, reward, done, info = env.step(action)
                # here you may want to compute some stats, like how much reward you are getting
                # notice, this reward may not be associated with the challenge score.

                # it is important to check for this flag; the Evaluation Engine will let us know when we should finish
                # if we are not careful with this, the Evaluation Engine will kill our container and we will get no score
                # from this submission
                if 'simulation_done' in info:
                    cis.info('simulation_done received.')
                    break
                if done:
                    cis.info('Episode done; calling reset()')
                    env.reset()
    

    So how should I implement the frame skip without running into this problem?
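
    Would something like this be the right way to do it? A minimal sketch, keeping the frame skip and the same env, model, cis, config, and debug objects as in the snippets above; the idea is to carry the completion flag out of the inner loop, since a bare break only leaves the frame-skip for loop while the outer while True keeps stepping:

        # Sketch only, not an official template: assumes the same env, model,
        # cis, config, and debug objects as in the snippets above.
        cis.info('Reset environment')
        observation = env.reset()

        simulation_done = False
        while not simulation_done:
            # ask the model for one action and hold it for frame_skip steps
            action = model.predict(observation)

            if debug:
                env.render()

            for f in range(config['frame_skip']):
                observation, reward, done, info = env.step(action)

                if 'simulation_done' in info:
                    cis.info('simulation_done received.')
                    # set the flag so the outer while loop exits as well;
                    # a bare break here only leaves the frame-skip loop
                    simulation_done = True
                    break
                if done:
                    cis.info('Episode done; calling reset()')
                    # keep the fresh observation and pick a new action for it
                    observation = env.reset()
                    break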
