observations given by the evaluator


This topic contains 1 reply, has 2 voices, and was last updated by  Bhairav Mehta 3 years, 7 months ago.



    When I look at the submission videos, the images seem distorted!

    Are we getting distorted images? When we drive manually in the gym, the images are not distorted. Are we expected to “undistort” the images ourselves?



    Bhairav Mehta

    Hi Jon, thanks for posting.

    That’s correct: the evaluator will give you distorted images, whereas something like manual_control.py inside the gym-duckietown repository will give you undistorted images. This is because we use the gym-duckietown simulator for AIDO but also want to support outside research efforts. For AIDO, the images from the simulator will always be distorted, since the images from the real Duckiebot are also distorted.

    Yes – you will need to undistort them if you want “rectified” images, but we do provide wrappers that do this for you. That being said, rectification is an expensive process, so here are some things to think about (depending on whether you take a learning or classical-robotics approach):

    1. Do you really need to rectify? A learning system may not care, as long as its input images are consistent.
    2. Rectification is expensive and slow (especially on the full image), so point-wise rectification may be what you are looking for if you go the classical route. This is how the original Duckietown stack was written.
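    To make the point-wise idea concrete, here is a minimal sketch of undistorting a single pixel through a radial (plumb-bob-style) model. The intrinsics and distortion coefficients are hypothetical placeholders, not a real Duckiebot calibration – in practice you would load your camera's calibration file, or use the provided wrappers:

    ```python
    # Point-wise rectification sketch: undistort one pixel instead of the
    # whole frame. Intrinsics/coefficients below are hypothetical, NOT a
    # real Duckiebot calibration. Radial-only plumb-bob model for brevity.
    FX, FY, CX, CY = 220.0, 220.0, 320.0, 240.0  # focal lengths / principal point
    K1, K2 = -0.05, 0.002                        # radial distortion coefficients

    def distort(x, y):
        """Forward radial distortion on normalized camera coordinates."""
        r2 = x * x + y * y
        radial = 1.0 + K1 * r2 + K2 * r2 * r2
        return x * radial, y * radial

    def undistort_point(u, v, iters=10):
        """Map one distorted pixel (u, v) to its rectified pixel position.

        The plumb-bob model has no closed-form inverse, so invert it with
        a fixed-point iteration -- cheap when you only need a few points.
        """
        xd, yd = (u - CX) / FX, (v - CY) / FY
        x, y = xd, yd                    # initial guess: no distortion
        for _ in range(iters):
            xdd, ydd = distort(x, y)
            x += xd - xdd                # correct by the remaining residual
            y += yd - ydd
        return CX + FX * x, CY + FY * y

    # Rectify only the pixels you care about, e.g. detected lane markings:
    u, v = undistort_point(100.0, 80.0)
    ```

    This is the classical-route trade-off from point 2: rectifying a handful of feature points is a few dozen multiplications, versus a remap of every pixel in the frame.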

    There are ways to make the simulator give distorted images when you drive around in it (pass distortion=True), but to support other ongoing research – separate from AIDO, and not necessarily needing to mimic the real robot – we also allow undistorted images.
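    Conceptually, enabling distortion amounts to warping each rendered frame through the camera's distortion model. The sketch below reproduces that warp in plain numpy with hypothetical coefficients (the simulator uses its own calibrated model, so this is illustrative only) – it is the same fixed-point inversion as for a single point, just run densely over every pixel, which is why full-frame (un)distortion is costly:

    ```python
    import numpy as np

    # Hypothetical intrinsics and radial coefficients -- NOT the
    # simulator's actual calibration; illustrative only.
    FX, FY, CX, CY = 220.0, 220.0, 320.0, 240.0
    K1, K2 = -0.05, 0.002

    def distort_image(img, iters=8):
        """Warp an undistorted frame so it resembles raw (distorted)
        camera output. Each output pixel is traced back through the
        inverse of the radial model, then filled by nearest-neighbour
        lookup into the undistorted frame."""
        h, w = img.shape[:2]
        v, u = np.mgrid[0:h, 0:w].astype(np.float64)
        xd, yd = (u - CX) / FX, (v - CY) / FY  # where each output pixel sits
        x, y = xd.copy(), yd.copy()
        for _ in range(iters):                 # invert the model densely
            r2 = x * x + y * y
            radial = 1.0 + K1 * r2 + K2 * r2 * r2
            x += xd - x * radial
            y += yd - y * radial
        src_u = np.clip(np.rint(CX + FX * x).astype(int), 0, w - 1)
        src_v = np.clip(np.rint(CY + FY * y).astype(int), 0, h - 1)
        return img[src_v, src_u]

    # A stand-in for one rendered simulator frame:
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    distorted = distort_image(frame)
    ```

    Note that the per-pixel work here happens once per frame; the point-wise approach above does the same math only for the points an algorithm actually uses.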

    Hope this helps!

