EdTech awards 2021: Duckietown finalist in 3 categories!

Duckietown reaches the finals in the EdTech Awards 2021

The EdTech awards are the largest and most competitive recognition program in all of education technology.

The competition, led by EdTech Digest, recognizes the biggest names in edtech – and those who soon will be – by identifying, all over the world, the products, services and people that best promote education through the use of technology, for the benefit of learners.

The 2021 edition has brought a big surprise to Duckietown, as it was nominated as a finalist in 3 different categories:

  • Cool Tool Award: as robotics (for learning, education) solution;
  • Cool Tool Award: as higher education solution;
  • Trendsetter Award: as a product or service setting a trend in education technologies.

Although a final is just a starting point, we are proud of the hard work done by the team in this particularly difficult year of pandemic and lockdowns, and grateful to you all for the incredible support, constructive feedback and contributions!

To the future, and beyond!


Congratulations to the winners of the second edition of the AI Driving Olympics!

Team JetBrains came out on top in all 3 challenges

It was a busy (and squeaky) few days at the International Conference on Robotics and Automation in Montreal for the organizers and competitors of the AI Driving Olympics. 

The finals kicked off with a semifinals round among the top 5 submissions from the Lane Following in Simulation leaderboard. The finalists (JBRRussia and MYF) moved forward to the more complicated challenges of Lane Following with Vehicles and Lane Following with Vehicles and Intersections.

Results from the AI-DO2 Finals event on May 22, 2019 at ICRA

If you couldn’t make it to the event and missed the live stream on Facebook, here’s a short video of the first run of the JetBrains Lane Following submission.

Thanks to everyone who competed, dropped in to say hello, and cheered on the finalists by sending the song of the Duckie down the corridors of the Palais des Congrès.

A few pictures from the event

Don't know much about the AI Driving Olympics?

It is an accessible and reproducible autonomous car competition designed with straightforward standardized hardware, software and interfaces.

Get Started

Step 1: Build and test your agent with our available templates and baselines

Step 2: Submit to a challenge

Check out the leaderboard

View your submission in simulation

Step 3: Run your submission on a robot

in a Robotarium

AI-DO 2 Validation and Testing Registration

We are in the final countdown to AI-DO 2 at ICRA!

Now is the time to let us know if you will be using the validation and testing facilities at the Duckietown competition ground. Please register below!

AI-DO technical updates

Here are some technical updates regarding the competition. Thanks for all the bug reports via Github and Slack!

Changes to platform model in simulations

We have replaced the purely kinematic model in the simulations with one that is closer to the real robots, obtained by system identification. You can find the model here. Properties:
  • The inputs to the model are the two PWM signals to the wheels, left and right. (not [speed, omega] like last year)
  • The maximum velocity is ~2 m/s. The rise time is about 1 second.
  • There is a simulated delay of 100 ms.
We will slightly perturb the parameters of the model in the future to account for robot-to-robot variations, but this is not implemented yet. All the submissions have been re-evaluated. You can see the difference between the two models below.

[Videos: purely kinematic platform model vs. more realistic platform model]

The new model is much smoother. Overall we expect that the new model makes the competition easier, both in simulation and, obviously, in the transfer.
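To make the stated properties concrete, here is a minimal sketch of a model with that shape: PWM commands in, wheel speeds out, a first-order lag tuned for a ~1 s rise time toward a ~2 m/s maximum, and a 100 ms actuation delay. The class name, parameters, and structure are illustrative assumptions, not the actual simulator code.

```python
from collections import deque


class SimpleDriveModel:
    """Illustrative first-order drive model (not the real simulator):
    PWM inputs, ~2 m/s max speed, ~1 s rise time, 100 ms delay."""

    V_MAX = 2.0      # m/s: wheel speed reached at full PWM
    TAU = 1.0 / 2.2  # s: time constant giving roughly a 1 s 10-90% rise
    DELAY = 0.1      # s: simulated actuation delay

    def __init__(self, dt=0.02):
        self.dt = dt
        n_delay = int(round(self.DELAY / dt))
        # commands issued but not yet applied (models the 100 ms delay)
        self.buffer = deque([(0.0, 0.0)] * n_delay)
        self.v_left = 0.0
        self.v_right = 0.0

    def step(self, pwm_left, pwm_right):
        """Advance one timestep. PWM in [-1, 1]; returns (v_left, v_right) in m/s."""
        self.buffer.append((pwm_left, pwm_right))
        u_l, u_r = self.buffer.popleft()  # apply the delayed command
        # first-order lag toward the PWM-proportional target speed
        alpha = self.dt / (self.TAU + self.dt)
        self.v_left += alpha * (self.V_MAX * u_l - self.v_left)
        self.v_right += alpha * (self.V_MAX * u_r - self.v_right)
        return self.v_left, self.v_right
```

Note that the inputs are the raw wheel PWM signals, not [speed, omega] as last year, so an agent (or a wrapper) must do its own conversion from desired body velocities to wheel commands.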

Infrastructure changes

  • We have updated the Duckietown Shell and its commands several times to fix a few reported bugs.
  • We have started provisioning AWS cloud evaluators. There are still sporadic problems. You should know that if your job fails with the host-error code, the system considers it a problem with the evaluator and will retry on another evaluator.

Open issues

  • Some timeouts are a bit tight. Currently we allow 20 minutes, as for NeurIPS, but this year we have a much more realistic simulation and better visualization code, which take more time. If your submission fails after 20 minutes of evaluation, this is the reason.
  • We are still working on the glue code for running the submissions on the real robots. Should be a couple of days away.
  • Some of the changes to the models/protocol above are not in the docs yet.