The first Duckietown class in Cali, Colombia, has finished! At Universidad Autónoma de Occidente we ran a Robotic Perception course entirely geared towards making cars transport little duckies in DuckieUAO. We studied multiple topics in autonomous driving, including the foundations of computer vision and Bayesian estimation. More importantly, we gathered a group of students who are passionate about self-driving cars and keen to develop and spread the word about the Duckietown project!
Duckietown Workshop at RoboCup Junior
In collaboration with the RoboCup Federation, the Duckietown Foundation will be offering workshops at RoboCup 2019 in Sydney, Australia, providing a hands-on introduction to the Duckietown platform.
We will be hosting three one-day workshops as part of RoboCup 2019 from July 4-6, 2019 for teachers, students, and independent learners who are interested in finding out more about the Duckietown platform. Attendance is completely free and everyone is welcome to apply, even if you are not participating in RoboCup. There are no formal requirements, though basic familiarity with GNU/Linux and shell usage is recommended.
If you would like to apply to attend a workshop, please complete this form.
We will have Duckiebots and Duckietowns for participants to use. However, you are more than welcome to bring your own Duckiebots, available for purchase at https://get.duckietown.org.
Team JetBrains came out on top on all 3 challenges
It was a busy (and squeaky) few days at the International Conference on Robotics and Automation in Montreal for the organizers and competitors of the AI Driving Olympics.
The finals kicked off with a semifinals round, in which we evaluated the top 5 submissions from the Lane Following in Simulation leaderboard. The finalists (JBRRussia and MYF) moved on to the more complicated challenges of Lane Following with Vehicles and Lane Following with Vehicles and Intersections.
If you couldn’t make it to the event and missed the live stream on Facebook, here’s a short video of the first run of the JetBrains Lane Following submission.
Thanks to everyone who competed, dropped in to say hello, and cheered on the finalists by sending the song of the Duckie down the corridors of the Palais des Congrès.
A few pictures from the event
Don't know much about the AI Driving Olympics?
It is an accessible and reproducible autonomous car competition designed with straightforward standardized hardware, software and interfaces.
Step 1: Build and test your agent with our available templates and baselines
We will evaluate submissions by participants who are near the top of the leaderboard in the simulated testing challenge.
Robotarium evaluation capacity is limited, so we will run evaluations round-robin across users. We aim to evaluate everyone in the top 10 of the simulated challenge, and more if capacity allows.
Participants can have multiple submissions in the “real” challenges. We will evaluate them first by “user priority”, then by recency. Priority can be set through the web interface using the top-right button.
The challenges will close May 21 at 8pm Montreal time (EDT). Please check the server timestamp for the precise time in your time zone.
We have implemented an improved dynamics model in the simulator. If you are using the simulator to:
Train your agent with reinforcement learning
Generate data for imitation learning
Test and debug your submission
then you may want to retrain/retest with the new dynamics model. This model is much closer to the true Duckiebot and should permit much easier transfer from simulation to the real robot hardware.
We are in the final countdown to AI-DO 2 at ICRA!
Now is the time to let us know if you will be using the validation and testing facilities at the Duckietown competition ground. Please register below!
Here are some technical updates regarding the competition.
Thanks for all the bug reports via Github and Slack!
Changes to platform model in simulations
We have replaced the purely kinematic model in the simulations with one that is closer to the real robots, obtained through system identification.
You can find the model here.
The inputs to the model are the two PWM signals to the wheels, left and right (not [speed, omega] as last year).
The maximum velocity is ~2 m/s. The rise time is about 1 second.
There is a simulated delay of 100 ms.
We will slightly perturb the parameters of the model in the future to account for robot-robot variations, but this is not implemented yet.
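To make the figures above concrete, here is a minimal sketch of such a model. This is not the actual simulator code: the class, the first-order-lag structure, and the specific time constant are assumptions, chosen only to reproduce the stated behavior (PWM inputs, ~2 m/s maximum velocity, ~1 s rise time, 100 ms delay).

```python
from collections import deque

class FirstOrderDuckiebotModel:
    """Illustrative first-order wheel-velocity model (not the real simulator code).

    Each wheel takes a PWM duty cycle in [-1, 1] and its velocity rises toward
    the commanded value with a first-order lag, capped near 2 m/s, after a
    100 ms actuation delay.
    """

    V_MAX = 2.0   # m/s, approximate top speed (from the post)
    TAU = 0.45    # s, assumed time constant; gives a 10-90% rise time of ~1 s
    DELAY = 0.1   # s, simulated actuation delay (from the post)

    def __init__(self, dt=0.05):
        self.dt = dt
        self.v_left = 0.0
        self.v_right = 0.0
        # FIFO buffer implementing the 100 ms delay on the PWM inputs
        n_delay = max(1, int(round(self.DELAY / dt)))
        self.buffer = deque([(0.0, 0.0)] * n_delay, maxlen=n_delay)

    def step(self, pwm_left, pwm_right):
        """Advance one time step; returns the current (v_left, v_right)."""
        # The command applied now is the one issued DELAY seconds ago.
        delayed_l, delayed_r = self.buffer[0]
        self.buffer.append((pwm_left, pwm_right))
        # First-order lag toward the PWM-proportional target velocity.
        alpha = self.dt / (self.TAU + self.dt)
        self.v_left += alpha * (self.V_MAX * delayed_l - self.v_left)
        self.v_right += alpha * (self.V_MAX * delayed_r - self.v_right)
        return self.v_left, self.v_right
```

For example, commanding full throttle on both wheels leaves the velocity at zero for the first 100 ms (the delay), then brings it close to 2 m/s within a couple of seconds. The real model also turns the two wheel velocities into the robot's pose via the differential-drive kinematics, which this sketch omits.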
All the submissions have been re-evaluated. You can see the difference between the two models:
purely kinematic platform model
more realistic platform model
The new model is much smoother. Overall we expect the new model to make the competition easier, both in simulation and, obviously, in the transfer to the real robots.
We have updated the Duckietown Shell and the commands several times to fix a few reported bugs.
We have started provisioning AWS cloud evaluators. There are still sporadic problems. Note that if your job fails with the host-error code, the system assumes the problem lies with the evaluator and will retry the job on another evaluator.
Some timeouts are a bit tight. We currently allow 20 minutes, as for NeurIPS, but this year the simulation is much more realistic and the visualization code is better, both of which take more time. If your submission fails after 20 minutes of evaluation, this is the reason.
We are still working on the glue code for running submissions on the real robots; it should be ready in a couple of days.
Some of the changes to the models/protocol above are not in the docs yet.
The AI-DO is back!
We are excited to announce that we are now ready to accept submissions for AI-DO 2, which will culminate in a live competition event to be held at ICRA 2019 this May 20-22.
The AI Driving Olympics is a global robotics competition that comprises a series of challenges based on autonomous driving. The AI-DO provides a standardized simulation and robotics platform that people from around the world use to engage in friendly competition, while simultaneously advancing the field of robotics and AI.