Hi Jon, thanks for posting.
That's correct: the evaluator will give you distorted images, whereas running something like manual_control.py inside the gym-duckietown repository will give you undistorted images. The reason is that we use the gym-duckietown simulator for AIDO, but we also want to support outside research efforts. For AIDO, the images from the simulator will always be distorted, since the images from the real Duckiebot are also distorted.
Yes, you will need to undistort them if you want "rectified" images, but we do provide wrappers that do this for you. That said, rectification is an expensive process, so here are some things to think about (depending on whether you take a learning or a classic-robotics approach):
1. Do you really need to rectify? A learning system may not care, as long as its input images are consistent.
2. Rectification is expensive and slow (especially on the full image), so point-wise rectification may be what you are looking for if you go the classic route; this is how the original Duckietown stack was written. See the sketch below.
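To make the point-wise idea concrete, here is a minimal sketch using OpenCV's cv2.undistortPoints to rectify only a handful of pixels (say, detected lane-marking points) instead of remapping the whole frame. The intrinsics and distortion coefficients below are placeholders, not real calibration values; substitute the ones from your robot's (or the simulator's) camera calibration.

```python
import cv2
import numpy as np

# Hypothetical plumb-bob intrinsics for a 640x480 camera (placeholders).
K = np.array([[305.0,   0.0, 320.0],
              [  0.0, 305.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float64)
D = np.array([-0.28, 0.07, 0.0, 0.0, 0.0], dtype=np.float64)  # k1, k2, p1, p2, k3

# Distorted pixel coordinates of interest, shape (N, 1, 2).
pts = np.array([[[120.0, 300.0]],
                [[480.0, 310.0]]], dtype=np.float32)

# Passing P=K returns rectified *pixel* coordinates rather than
# normalized image coordinates.
rectified = cv2.undistortPoints(pts, K, D, P=K)
print(rectified.reshape(-1, 2))
```

Rectifying a few dozen points this way is far cheaper than calling cv2.undistort (or cv2.initUndistortRectifyMap + cv2.remap) on every full frame.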
There are ways to make the simulator give you distorted images when you drive around in it (pass distortion=True), but in order to support some other ongoing research (separate from AIDO, which doesn't necessarily need to mimic the real robot), we also allow undistorted images.
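For example, a minimal sketch, assuming a recent gym-duckietown checkout (the map name is arbitrary):

```python
from gym_duckietown.simulator import Simulator

# distortion=True applies the fish-eye camera model, matching what the
# AIDO evaluator (and the real Duckiebot) produces; with the default
# distortion=False you get the undistorted images manual_control.py shows.
env = Simulator(map_name="udem1", distortion=True)
obs = env.reset()
print(obs.shape)  # distorted RGB observation, e.g. (480, 640, 3)
```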
Hope this helps!