Here are some technical updates regarding the competition.
Thanks for all the bug reports via GitHub and Slack!
Changes to platform model in simulations
We have replaced the purely kinematic model in the simulations with one obtained by system identification, which behaves much more like the real robots.
You can find the model here.
The inputs to the model are the two PWM signals to the wheels, left and right (not [speed, omega] as last year).
The maximum velocity is ~2 m/s. The rise time is about 1 second.
There is a simulated delay of 100 ms.
We will slightly perturb the parameters of the model in the future to account for robot-robot variations, but this is not implemented yet.
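For intuition, the numbers above can be reproduced with a toy first-order model. This is our own sketch, not the simulator's actual code: the time constant (~0.45 s, back-computed from the ~1 s 10–90% rise time), the Euler integration step, and all names are our assumptions; only the 2 m/s steady-state speed, the ~1 s rise time, and the 100 ms delay come from the release notes.

```python
from collections import deque

class WheelModel:
    """Toy first-order wheel-speed model with input delay (illustration only)."""

    def __init__(self, dt=0.01, v_max=2.0, tau=0.45, delay=0.1):
        self.dt = dt          # integration step [s]
        self.v_max = v_max    # steady-state speed at full PWM [m/s]
        self.tau = tau        # first-order time constant [s] (~ rise time / 2.2)
        self.v = 0.0          # current wheel speed [m/s]
        # Delay line: PWM commands take `delay` seconds to reach the motor.
        self.queue = deque([0.0] * int(round(delay / dt)))

    def step(self, pwm):
        """Advance one tick with a PWM duty cycle in [-1, 1]; returns speed in m/s."""
        self.queue.append(max(-1.0, min(1.0, pwm)))
        u = self.queue.popleft()
        # First-order lag toward the steady-state speed for this duty cycle.
        self.v += (self.v_max * u - self.v) * self.dt / self.tau
        return self.v
```

With these parameters, a full-throttle command produces no motion for the first 100 ms and then rises toward 2 m/s over roughly a second, matching the behavior described above.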
All the submissions have been re-evaluated. You can see the difference between the two models:
purely kinematic platform model
more realistic platform model
The new model is much smoother. Overall, we expect the new model to make the competition easier, both in simulation and, obviously, in the transfer to the real robots.
We have updated the Duckietown Shell and its commands several times to fix a few reported bugs.
We have started provisioning AWS cloud evaluators. There are still sporadic problems. Note that if your job fails with the host-error code, the system attributes the problem to the evaluator and will retry your job on another evaluator.
Some timeouts are a bit tight. We currently allow 20 minutes, as we did at NeurIPS, but this year the simulation is more realistic and the visualization code is better, and both take more time. If your submission fails after 20 minutes of evaluation, this is the reason.
We are still working on the glue code for running submissions on the real robots; it should be ready in a couple of days.
Some of the changes to the models/protocol above are not in the docs yet.
We are excited to announce that we are now ready to accept submissions for AI-DO 2, which will culminate in a live competition event to be held at ICRA 2019 this May 20-22.
The AI Driving Olympics is a global robotics competition that comprises a series of challenges based on autonomous driving. The AI-DO provides a standardized simulation and robotics platform that people from around the world use to engage in friendly competition, while simultaneously advancing the field of robotics and AI.
Didn’t get a chance to compete in the AI Driving Olympics at NeurIPS this past December? Not to worry! The second iteration of the AI-DO will take place at ICRA this May. Get your engines and algorithms up and running by checking out the information on the AI-DO website.
There was a great turnout for the first AI Driving Olympics competition, which took place at the NeurIPS conference in Montreal, Canada on Dec 8, 2018. In the finals, the submissions from the top five competitors were run from five different locations on the competition track.
Our top five competitors were awarded $3000 worth of AWS credits (thank you, AWS!) and a trip to one of nuTonomy's offices for a ride in one of their self-driving cars (thanks, APTIV!).
The approach: We used the random template for its flexibility and built a debugging framework to test the algorithm. We then packaged our algorithm as a Python package and called it directly from the random template. The algorithm has three parts: 1. perception, 2. prediction, and 3. control. Prediction plays the most important role at sharp turns, where the camera cannot observe useful information.
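The perception/prediction/control split might look like the skeleton below. This is purely our illustration of the structure the team describes, not their code; the stub functions, gains, and state layout are made up. The key idea shown is the prediction fallback: when the camera yields nothing useful (e.g. mid-turn), the last estimate is propagated by dead reckoning.

```python
def perceive(image):
    """Stub: return a (lane_offset, heading_error) estimate from the camera image,
    or None when nothing useful is visible (e.g. at a sharp turn).
    A real implementation would do the computer vision here."""
    return None  # placeholder

class Pipeline:
    """Skeleton of a perception -> prediction -> control loop (names are ours)."""

    def __init__(self):
        self.last_estimate = (0.0, 0.0)  # (lane offset [m], heading error [rad])

    def step(self, image, omega_cmd, dt):
        estimate = perceive(image)
        if estimate is None:
            # Prediction: camera sees nothing useful, so propagate the last
            # estimate using the commanded turn rate (simple dead reckoning).
            offset, heading = self.last_estimate
            estimate = (offset, heading + omega_cmd * dt)
        self.last_estimate = estimate
        # Control: a simple proportional law on offset and heading error.
        offset, heading = estimate
        return -2.0 * offset - 1.0 * heading  # steering command
```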
The approach: “I tried to imitate what a human does when following a lane. I believe a human tries to stay centered in the lane at all times, using the two lines as guides. I think the human implicitly projects the two lines to the horizon and directs the vehicle toward the point where they intersect.”
The approach: “The AI-DO application I made was using the ROS lane following baseline. After running it out of the box, I noticed a couple of problems and corrected them by changing several parameters in the code.”
The approach: “We used our framework for parallel deep reinforcement learning. Our network consisted of five convolutional layers (1st layer with 32 9×9 filters, each following layer with 32 5×5 filters), followed by two fully connected layers (with 768 and 48 neurons) that took as an input four last frames downsampled to 120 by 160 pixels and filtered for white and yellow color. We trained it with Deep Deterministic Policy Gradient algorithm (Lillicrap et al. 2015). The training was done in three stages: first, on a full track, then on the most problematic regions, and then on a full track again.”
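A rough PyTorch reconstruction of the described actor network follows. Only the filter counts (32 per layer), kernel sizes (9×9 then 5×5), FC widths (768 and 48), and the 120×160 input come from the write-up; the strides, padding, channel layout (we stack the four frames as input channels), activations, and the 2-dimensional action head are our guesses.

```python
import torch
import torch.nn as nn

class ActorNet(nn.Module):
    """Sketch of the described 5-conv + 2-FC actor (hyperparameters partly guessed)."""

    def __init__(self, in_channels=4, n_actions=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
        )
        # Infer the flattened conv-output size from a dummy 120x160 input.
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, in_channels, 120, 160)).numel()
        self.fc = nn.Sequential(
            nn.Linear(flat, 768), nn.ReLU(),
            nn.Linear(768, 48), nn.ReLU(),
            nn.Linear(48, n_actions), nn.Tanh(),  # DDPG actors bound their actions
        )

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))
```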
The approach: Our solution is based on a reinforcement learning algorithm. We used Twin Delayed DDPG (TD3) and an Ape-X-like distributed scheme. One of the key insights was to add a PID controller as an additional explorative policy, which significantly improved learning speed and quality.
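The "PID as an explorative policy" idea might look like the sketch below: with some probability, an actor executes the PID action instead of the learned one, so the replay buffer contains reasonable lane-following trajectories from the start. The mixing probability, the PID gains, and the error signal are entirely our invention; the write-up only says a PID controller was added as an extra exploration policy.

```python
import random

class PIDExplorer:
    """Hypothetical PID lane-follower used as a scripted exploration policy."""

    def __init__(self, kp=2.0, ki=0.0, kd=0.5, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, lane_offset):
        """PID on the lateral lane offset; returns a steering command."""
        self.integral += lane_offset * self.dt
        derivative = (lane_offset - self.prev_error) / self.dt
        self.prev_error = lane_offset
        return -(self.kp * lane_offset + self.ki * self.integral + self.kd * derivative)

def behavior_action(policy_action, pid_action, p_pid=0.2, rng=random):
    """With probability p_pid, the actor executes the PID action instead of
    the learned policy's action (probability value is illustrative)."""
    return pid_action if rng.random() < p_pid else policy_action
```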
A few photos from the day
Duckies invaded the conference venue. They could be heard throughout the halls all day.
Getting our Duckiebots charged
A little event advertising
Competition track under construction
The competition track
The demonstration track
The competition is underway!
We all “quacked” instead of clapped after each run
We’re just about at the end of the road for the 2018 AI Driving Olympics.
There’s certainly been some action on the leaderboard these last few days, and it’s going down to the wire. Don’t miss your chance to see your name up there and win the amazing prizes donated by nuTonomy and Amazon AWS!
Submissions will close at 11:59pm PST on Thursday Dec. 6.
Please join us at NeurIPS for the live competition 3:30-5:00pm EST in room 511!
As the number of participants has grown, the Duckietown Challenges Server started to feel slow. The reason: we were a bit lazy, and some pages had O(n) implementations where O(1) was needed, e.g. loading all challenges/submissions on every page view.
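The flavor of the fix, turning an O(n) scan into an O(1) lookup, can be sketched as below. This is our illustration, not the actual server code: instead of loading every submission just to render a count, keep an aggregate that is updated on write.

```python
class SubmissionStats:
    """O(1) per-page-view lookups via counters maintained at write time."""

    def __init__(self):
        self.count_by_challenge = {}

    def on_submission(self, challenge_id):
        # O(1) work when a submission arrives...
        self.count_by_challenge[challenge_id] = \
            self.count_by_challenge.get(challenge_id, 0) + 1

    def count(self, challenge_id):
        # ...so rendering a page never iterates over all submissions.
        return self.count_by_challenge.get(challenge_id, 0)
```

The same pattern (or a database-side aggregate) applies to the challenge and evaluator listings; challenge names here are made up.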
We also updated the "dts challenges evaluate" command to be more robust. Please continue to report bugs, as this part is fragile by nature: it runs containers that spawn other containers on the user's machine.