AI-DO technical updates

Here are some technical updates regarding the competition. Thanks for all the bug reports via GitHub and Slack!

Changes to platform model in simulations

We have replaced the purely kinematic model in the simulations with one that is closer to the real robots, obtained by system identification. You can find the model here. Properties (a sketch of these dynamics follows the list):
  • The inputs to the model are the two PWM signals to the wheels, left and right (not [speed, omega] as last year).
  • The maximum velocity is ~2 m/s. The rise time is about 1 second.
  • There is a simulated delay of 100 ms.
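For intuition, here is a minimal sketch of what such dynamics could look like in Python, assuming a simple first-order velocity response; the actual system-identified model may differ, and all parameter values and names below are illustrative:

    import numpy as np

    # Minimal sketch of one wheel's dynamics, assuming a first-order velocity
    # response with an input delay. The real system-identified model may differ;
    # the parameters below are chosen only to match the stated properties
    # (~2 m/s maximum velocity, ~1 s rise time, 100 ms delay).

    DT = 0.05                    # simulation timestep [s]
    DELAY_STEPS = int(0.1 / DT)  # 100 ms input delay
    V_MAX = 2.0                  # saturation velocity [m/s]
    TAU = 1.0 / np.log(9.0)      # time constant giving a 10-90% rise time of ~1 s

    class WheelModel:
        def __init__(self):
            self.v = 0.0                      # current wheel velocity [m/s]
            self.queue = [0.0] * DELAY_STEPS  # FIFO buffer of delayed PWM inputs

        def step(self, pwm):
            """Advance one timestep given a PWM command in [-1, 1]."""
            self.queue.append(float(np.clip(pwm, -1.0, 1.0)))
            delayed_pwm = self.queue.pop(0)
            v_target = V_MAX * delayed_pwm
            # Forward-Euler step of the first-order lag dv/dt = (v_target - v) / TAU.
            self.v += (DT / TAU) * (v_target - self.v)
            return self.v

Feeding a constant full-throttle command should show the velocity start to climb only after the 100 ms delay and approach 2 m/s within roughly a second.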
We will slightly perturb the parameters of the model in the future to account for robot-robot variations, but this is not implemented yet. All the submissions have been re-evaluated. You can see the difference between the two models below.
[Figure: purely kinematic platform model vs. more realistic platform model]
The new model is much smoother. Overall, we expect that the new model makes the competition easier, both in simulation and, obviously, in the transfer to the real robots.

Infrastructure changes

  • We have updated the Duckietown Shell and its commands several times to fix a few reported bugs.
  • We have started provisioning AWS cloud evaluators. There are still sporadic problems. Note that if your job fails with the host-error code, the system attributes the failure to the evaluator rather than to your submission, and it will retry the job on another evaluator (a sketch of this policy follows the list).
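As a rough illustration of that retry policy (not the actual server code; the evaluator API and all names here are hypothetical):

    import random

    # Hypothetical sketch of the retry policy described above: a "host-error"
    # outcome is blamed on the evaluator, not the submission, so the job is
    # retried on a different evaluator.

    HOST_ERROR = "host-error"

    def evaluate_with_retry(job, evaluators):
        candidates = list(evaluators)
        random.shuffle(candidates)
        for evaluator in candidates:
            result = evaluator.run(job)  # hypothetical evaluator API
            if result.status != HOST_ERROR:
                return result            # success, or a genuine submission failure
            # host-error: the evaluator is suspect; try the next one.
        raise RuntimeError("all evaluators reported host-error for this job")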

Open issues

  • Some timeouts are a bit tight. Currently we allow 20 minutes, as for NeurIPS, but this year the simulation is much more realistic and the visualization code is better, both of which take more time. If your submission fails after 20 minutes of evaluation, this is the reason.
  • We are still working on the glue code for running the submissions on the real robots. It should be ready in a couple of days.
  • Some of the changes to the models/protocol above are not in the docs yet.

Round 2 of the AI Driving Olympics is underway!

The AI-DO is back!

We are excited to announce that we are now ready to accept submissions for AI-DO 2, which will culminate in a live competition event to be held at ICRA 2019 this May 20-22.

The AI Driving Olympics is a global robotics competition that comprises a series of challenges based on autonomous driving. The AI-DO provides a standardized simulation and robotics platform that people from around the world use to engage in friendly competition, while simultaneously advancing the field of robotics and AI. 

Check out our official press release.

The finals of AI-DO 1 at NeurIPS, December 2018

We want to see your classical robotics and machine learning based algorithms go head to head on the competition track. Get started today!

Want to learn more or join the competition? Information and getting-started instructions are here.

If you've already joined the competition, we want to hear from you!
  • Share your pictures on Facebook and Twitter.
  • Get involved in the community by asking for help or offering help.

AI-DO 1 at NeurIPS report. Congratulations to our winners!

The winners of AI-DO 1 at NeurIPS


There was a great turnout for the first AI Driving Olympics competition, which took place at the NeurIPS conference in Montreal, Canada on December 8, 2018. In the finals, the submissions from the top five competitors were run from five different locations on the competition track.

Our top five competitors were awarded $3000 worth of AWS credits (thank you, AWS!) and a trip to one of nuTonomy’s offices for a ride in one of their self-driving cars (thank you, APTIV!).


WINNER

Team Panasonic R&D Center Singapore & NUS

(Wei Gao)


Check out the submission.

The approach: “We used the random template for its flexibility and created a debug framework to test the algorithm. After that, we created a Python package for our algorithm and used the random template to call it directly. The algorithm contains three parts: 1. perception, 2. prediction, and 3. control. Prediction plays the most important role when the robot is at a sharp turn, where the camera cannot observe useful information.”
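As a rough illustration of that three-stage structure (not the team's actual code; every name below is hypothetical):

    # Schematic of a perception / prediction / control split, as described above.

    class Agent:
        def __init__(self, perception, prediction, controller):
            self.perception = perception  # extracts lane features from the image
            self.prediction = prediction  # tracks and extrapolates the robot's state
            self.controller = controller  # turns features into wheel commands

        def act(self, image):
            features = self.perception.process(image)
            if features is None:
                # At a sharp turn the camera may see no useful lane markings,
                # so fall back on the predicted state.
                features = self.prediction.extrapolate()
            else:
                self.prediction.update(features)
            return self.controller.compute(features)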

2nd Place

Jon Plante


Check out the submission.

The approach: “I tried to imitate what a human does when he follows a lane. I believe the human tries to center the vehicle in the lane at all times, using the two lines as guides. I think the human implicitly projects the two lines into the horizon, and directs the vehicle toward the point where they intersect.”
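A minimal sketch of that idea, assuming the two lane lines have already been detected and fit as y = m*x + b in image coordinates; the fitting step, sign conventions, and the gain are illustrative assumptions:

    # Steer toward the intersection ("vanishing point") of the two lane lines.

    def intersect(line_a, line_b):
        (m1, b1), (m2, b2) = line_a, line_b  # assumes the lines are not parallel
        x = (b2 - b1) / (m1 - m2)            # solve m1*x + b1 == m2*x + b2
        return x, m1 * x + b1

    def steering_command(left_line, right_line, image_width, gain=2.0):
        vx, _ = intersect(left_line, right_line)
        # Normalized horizontal offset of the vanishing point from image center.
        error = (vx - image_width / 2.0) / (image_width / 2.0)
        return -gain * error                 # steer so the vanishing point re-centers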


3rd Place

Vincent Mai


Check out the submission.

The approach: “The AI-DO application I made used the ROS lane following baseline. After running it out of the box, I noticed a couple of problems and corrected them by changing several parameters in the code.”



4th Place

Team JetBrains

(Mikita Sazanovich)


Check out the submission.

The approach: “We used our framework for parallel deep reinforcement learning. Our network consisted of five convolutional layers (the 1st layer with 32 9×9 filters, each following layer with 32 5×5 filters), followed by two fully connected layers (with 768 and 48 neurons), and took as input the last four frames, downsampled to 120 by 160 pixels and filtered for white and yellow color. We trained it with the Deep Deterministic Policy Gradient algorithm (Lillicrap et al., 2015). The training was done in three stages: first on the full track, then on the most problematic regions, and then on the full track again.”
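Reading off those numbers, the network could look roughly like the following PyTorch sketch; the strides, activations, channel layout of the color-filtered frame stack, and the 2-dimensional action head are assumptions not stated in the quote:

    import torch
    import torch.nn as nn

    # Rough reconstruction of the described actor: five convolutional layers
    # (32 9x9 filters, then four layers of 32 5x5 filters) followed by fully
    # connected layers of 768 and 48 units, over the last four color-filtered
    # frames at 120x160. Strides, activations, and the action head are guesses.

    class Actor(nn.Module):
        def __init__(self, in_channels=4, action_dim=2):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
            )
            with torch.no_grad():  # infer the flattened size for a 120x160 input
                n_flat = self.conv(torch.zeros(1, in_channels, 120, 160)).numel()
            self.head = nn.Sequential(
                nn.Linear(n_flat, 768), nn.ReLU(),
                nn.Linear(768, 48), nn.ReLU(),
                nn.Linear(48, action_dim), nn.Tanh(),  # DDPG actions in [-1, 1]
            )

        def forward(self, frames):  # frames: (batch, channels, 120, 160)
            return self.head(self.conv(frames).flatten(1))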

5th Place

Team SAIC Moscow

(Anton Mashikhin)


Check out the submission.

The approach: “Our solution is based on a reinforcement learning algorithm. We used Twin Delayed DDPG (TD3) and an Ape-X-like distributed scheme. One of the key insights was to add a PID controller as an additional explorative policy. It significantly improved learning speed and quality.”
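A minimal sketch of that exploration trick, assuming access to a lane-pose error for the PID controller; the gains, mixing probability, and all names are illustrative:

    import random

    # Mix a learned actor with a PID lane-follower as an extra explorative
    # policy, as described above. Occasionally following the PID controller
    # seeds the replay buffer with reasonable driving behavior.

    class PIDPolicy:
        def __init__(self, kp=2.0, kd=0.5):
            self.kp, self.kd = kp, kd
            self.prev_error = 0.0

        def act(self, lane_error):
            d_error = lane_error - self.prev_error
            self.prev_error = lane_error
            return -(self.kp * lane_error + self.kd * d_error)

    def explore_action(actor, pid, observation, lane_error, p_pid=0.2):
        """With probability p_pid follow the PID controller; otherwise the actor."""
        if random.random() < p_pid:
            return pid.act(lane_error)
        return actor(observation)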

A few photos from the day

AI-DO Finals Tomorrow!

If you are at NeurIPS, please come by room 511 tomorrow from 3:30 to 5:00pm EST to see who will win!

AI-DO1 Submission Deadline: Thursday Dec 6 at 11:59pm PST

We’re just about at the end of the road for the 2018 AI Driving Olympics.

There’s certainly been some action on the leaderboard these last few days, and it’s going down to the wire. Don’t miss your chance to see your name up there and win the amazing prizes donated by nuTonomy and Amazon AWS!

Submissions will close at 11:59pm PST on Thursday Dec. 6.

Please join us at NeurIPS for the live competition 3:30-5:00pm EST in room 511!

Update to challenge LF, LFV evaluation code

We are going to roll out an improvement to the LF and LFV challenges. This change fixes the following problems:
  • The robot will always start in the right lane, in a legal position.
  • The evaluation and visualization code is going to be richer, with more statistics plotted (example).
  • The evaluation rulebook is slightly changed to address a couple of bugs in how the metrics were computed.
What is going to happen is the following:
  • The moment we update the evaluation code, all existing submissions are set back to the "evaluation" state.
  • The evaluators will then re-evaluate all of them. This will take 2-3 hours.
During this time the leaderboards are going to be blank, and will slowly re-populate as the evaluators do their job. (To speed up evaluation of your submissions, you can run dts challenges evaluator.)

AI-DO 1 Interactive Tutorials

The AI Driving Olympics, presented by the Duckietown Foundation with help from our partners and sponsors, is now in full swing. Check out the leaderboard!

We now have templates for ROS, PyTorch, and TensorFlow, as well as an agnostic template.

We also have baseline implementations using the classical pipeline, imitation learning with data from both simulation and real Duckietown logs, and reinforcement learning.

We are excited to announce that we will be hosting a series of interactive tutorials for competitors to get started. These tutorials will be streamed live from our Facebook page.

See here for the full tutorial schedule.

Updated duckietown-challenges server fixes speed problems; updated "dts challenges evaluate"

As we gained more participants, the Duckietown Challenges Server started to feel slow. The reason: we were a bit lazy, and some pages had O(n) implementations where O(1) was needed, loading all challenges/submissions/etc. on every request.
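In code terms, the fix is the classic one; a toy illustration (not the server's actual code):

    # Toy illustration: render a page with a keyed lookup instead of scanning
    # every submission on each request.

    # Before: O(n) work per page load.
    def find_submission_slow(all_submissions, submission_id):
        for submission in all_submissions:
            if submission["id"] == submission_id:
                return submission

    # After: build the index once; each page load is then an O(1) dict lookup.
    def build_index(all_submissions):
        return {submission["id"]: submission for submission in all_submissions}

    index = build_index([{"id": 1, "user": "duckie"}])
    submission = index.get(1)  # O(1)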

We also updated the "dts challenges evaluate" command to be more robust. Please continue to report bugs, as this part is fragile by nature: it runs containers that spawn other containers on the user's machine.