Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously

The AI-DO challenge is divided into two global stages: simulation and real world. A single algorithm needs to perform well in both, and one of the major problems quickly identified was the simulation-to-real-world transfer.

Many algorithms trained in the simulated environment perform very poorly in the real world, while many classic control algorithms that are known to perform well in a real-world environment, once tuned to that environment, do not perform well in simulation. Some approaches address this transfer problem through domain randomization.

We propose a novel method of training a neural network model that performs well in diverse environments, such as simulation and the real world.

Dataset Generation

To that end, we have trained our model through imitation learning on a dataset compiled from four different sources:

  1. Real-world Duckietown dataset from logs.duckietown.org (REAL-DT).
  2. Simulation dataset on a simple loop map (SIM-LP).
  3. Simulation dataset on an intersection map (SIM-IS).
  4. Real-world dataset collected by us in our environment with a car driven by a PD controller (REAL-IH).

We aimed to cover as many situations as possible, such as twists in the road, driving in circles clockwise and counterclockwise, and so on. We also tried to diversify external factors such as scene lighting, items in the room that can enter the camera’s field of view, roadside objects, etc. If these conditions were kept constant, our model could overfit to them and perform poorly in a different environment. For this reason, we changed the lighting and the environment after each Duckiebot run. The lane detection was recalibrated for every lighting condition, since different lighting changes the color scheme of the image input.

We made the following change to the standard PD algorithm: since most Duckietown turns and intersections are standard-shaped, we hard-coded the robot’s motion in these situations, but we did not exclude imperfect trajectories, for example ones that go slightly out of the lane bounds. Imperfections in the robot’s actions increase the robustness of the model.
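As an illustration, here is a minimal sketch of such a PD lane-following controller. The lane-pose interface, gain values, and differential-drive mixing are assumptions for illustration, not the exact implementation used to collect the data.

    # Hypothetical sketch of a PD lane-following controller for data collection.
    # The lane pose (d, phi) is assumed to come from a lane detector calibrated
    # to the current lighting; the gains and base speed are illustrative only.

    def pd_control(d, phi, k_p=-10.0, k_d=-5.0, v=0.25):
        """Map the lane pose to left/right wheel voltages.

        d   -- lateral offset from the lane center, in meters
        phi -- heading error relative to the lane direction, in radians
        """
        omega = k_p * d + k_d * phi   # steering correction
        v_left = v - 0.5 * omega      # differential-drive mixing
        v_right = v + 0.5 * omega
        return v_left, v_right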

Neural network architecture and training

Original images are 640×480 RGB. As a preprocessing step, we remove the top third of the image (it mostly contains the sky), resize the result to 64×32 pixels, and convert it to the YUV colorspace.
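For concreteness, this preprocessing can be written in a few lines of Python. The use of OpenCV and the final scaling to [0, 1] are our assumptions; the post specifies only the crop, the resize, and the colorspace conversion.

    import cv2
    import numpy as np

    def preprocess(frame_rgb):
        """Crop the sky, downscale, and convert to YUV."""
        h = frame_rgb.shape[0]
        cropped = frame_rgb[h // 3:, :, :]        # drop the top third (mostly sky)
        resized = cv2.resize(cropped, (64, 32))   # cv2.resize takes (width, height)
        yuv = cv2.cvtColor(resized, cv2.COLOR_RGB2YUV)
        return yuv.astype(np.float32) / 255.0     # scaling to [0, 1] is an assumption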

We have used 5 convolutional layers with a small number of filters, followed by 2 fully-connected layers. The network is kept small not only because a smaller model is less prone to overfitting, but also because it must run on a single Raspberry Pi CPU.

We have also incorporated Independent-Component (IC) layers. These layers aim to make the activations of each layer more independent by combining two popular techniques: BatchNorm and Dropout. For convolutional layers, we substitute Dropout with Spatial Dropout, which has been shown to work better with them. The model outputs two values: the voltages of the left and the right wheel drives. We use the mean square error (MSE) as our training loss.
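A minimal PyTorch sketch of this architecture is given below. The filter counts, kernel sizes, strides, dropout rate, and hidden-layer width are illustrative assumptions; the post specifies only the overall structure: 5 small convolutional layers, 2 fully-connected layers, IC layers, and two output voltages trained with an MSE loss.

    import torch
    import torch.nn as nn

    def ic(channels, p=0.05):
        """Independent-Component layer: BatchNorm followed by Spatial Dropout.
        The dropout rate p is an assumption."""
        return nn.Sequential(nn.BatchNorm2d(channels), nn.Dropout2d(p))

    class DriveNet(nn.Module):
        """5 small conv layers + 2 fully-connected layers; exact sizes assumed."""

        def __init__(self):
            super().__init__()
            # Each IC block precedes the next weight layer, as in the IC technique.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                ic(16), nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                ic(16), nn.Conv2d(16, 24, 3, stride=2, padding=1), nn.ReLU(),
                ic(24), nn.Conv2d(24, 32, 3, padding=1), nn.ReLU(),
                ic(32), nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 8, 64),  # 4x8 spatial map from a 32x64 input
                nn.ReLU(),
                nn.Linear(64, 2),           # left and right wheel voltages
            )

        def forward(self, x):  # x: (N, 3, 32, 64) YUV images
            return self.head(self.features(x))

    # Training minimizes the MSE between predicted and recorded voltages:
    #   loss = nn.functional.mse_loss(model(images), voltages)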

Results

For the training evaluation, we compute the mean square error (MSE) of the left- and right-wheel outputs on the validation set of each data source.

The first table shows the results for the models trained on all data sources (HYBRID), on real-world data sources only (REAL), and on simulation data sources only (SIM). While training on a single dataset sometimes achieves a lower error on that same dataset than our hybrid approach, our method performs on par with the best single-source models, and in terms of the average error across all sources it outperforms the closest one tenfold. This demonstrates how strongly the MSE depends on the training data and highlights the differences between the data sources.

The next table shows closed-loop performance in the Duckietown simulator for all our approaches. All methods drove for 15 seconds without major infractions, and the SIM model, trained on simulation data only, drove just 1.8 tiles more than our hybrid approach.

The third table shows closed-loop performance in the real-world environment. Comparing the number of tiles, our hybrid approach drove about 3.5 tiles more than the next-best model, which was trained on real-world data only.

Conclusion

Our method follows the imitation learning approach: a convolutional neural network is trained on a dataset compiled from several sources, namely a simulation model and a real-world Duckietown vehicle driven by a PD controller tuned to various conditions, such as different map configurations and lighting.

We believe that our approach of emphasizing neuron independence and monitoring generalization performance can make control models that have to perform in diverse environments more robust. We also believe that the described approach of imitation learning on data obtained from several algorithms, each fitted to a specific environment, may yield a single algorithm that performs well in general.

 —
 JBRRussia1 team

IROS2020: Watch The Workshop on Benchmarking Progress in Autonomous Driving

What a start for IROS 2020 with the "Benchmarking Progress in Autonomous Driving" workshop!

The 2020 edition of the International Conference on Intelligent Robots and Systems (IROS) got off to a great start with the workshop on “Benchmarking Progress in Autonomous Driving”.

The workshop was held virtually on October 25th, 2020, using an engaging, concise format: a sequence of four 1.5-hour moderated round-table discussions (including an introduction) centered around four themes.

The discussions on the methods by which progress in autonomous driving is evaluated, benchmarked, and verified were exciting. Many thanks to all the panelists and the organizers!  

Here are the videos of the various sessions. 

Opening remarks

Theme 1: Assessing progress for the field of autonomous vehicles (AVs)

Moderator: Andrea Censi

Theme 2: How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?

Moderator: Jacopo Tani

Theme 3: Best practices for AV benchmarking

Moderator: Liam Paull

Theme 4: Do we need new paradigms for AV development?

Moderator: Matt Walter

Closing remarks

You can find additional information about the workshop here.

The Workshop on Benchmarking Progress in Autonomous Driving at IROS 2020

The IROS 2020 Workshop on Benchmarking Autonomous Driving

Duckietown also has a science mission: to help develop technologies for reproducible benchmarking in robotics.

The IROS 2020 Workshop on Benchmarking Autonomous Driving provides a platform to investigate and discuss the methods by which progress in autonomous driving is evaluated, benchmarked, and verified.

It is free to attend.

The workshop is structured into four panels around four themes.

  1. Assessing Progress for the Field of Autonomous Driving
  2. How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?
  3. Best practices for AV benchmarking
  4. Algorithms and Paradigms

The workshop will take place on Oct. 25, 2020, starting at 10am EDT.

Invited Panelists

We have a list of excellent invited panelists from academia, industry, and regulatory organizations. These include:

  • Emilio Frazzoli (ETH Zürich / Motional)
  • Alex Kendall (Wayve)
  • Jane Lappin (National Academy of Sciences)
  • Bryant Walker Smith (USC Faculty of Law)
  • Luigi Di Lillo (Swiss Re Insurance)
  • John Leonard (MIT)
  • Fabio Bonsignorio (Heron Robots)
  • Michael Milford (QUT)
  • Oscar Beijbom (Motional)
  • Raquel Urtasun (University of Toronto / Uber ATG)

Please join us...

Please join us on October 25, 2020, starting at 10am EDT for what should be a very engaging conversation about the difficult issues around benchmarking progress in autonomous vehicles.

For full details about the event please see here.

Duckietown and NVIDIA work together for accessible AI and robotics education: Meet the NVIDIA powered Duckiebot

Duckietown and NVIDIA partnership for accessible AI and robotics education

NVIDIA GTC, October 6, 2020: Duckietown and NVIDIA align efforts to push the boundaries of accessible, state-of-the-art higher-education in robotics and AI. The tangible outcome is a brand new “Founder’s edition” Duckiebot, which will be broadly available from January 2021, powered by the new NVIDIA Jetson Nano 2GB platform.

Read the full NVIDIA announcement here.

Meet the NVIDIA powered Duckiebot

Autonomy is already changing the world. Duckietown and NVIDIA recognize the importance of hands-on education in robotics and AI to empower everybody today to understand and design the next generations of autonomy.

The result of this collaboration is a new NVIDIA powered Duckiebot, built on the new Jetson Nano 2GB board, which will enable local execution of machine learning agents in the Duckietown ecosystem.

To celebrate this special occasion, the Duckiebot has been redesigned to include: new sensors (time of flight, IMU, encoders), a new custom-designed battery providing real time diagnostics (state of charge, remaining autonomy and other health metrics), and fun accessories like a screen to visualize key metrics. All of this while keeping the price accessible for anyone willing to experience the challenges of a real-life robotic ecosystem. 

A great team

“The new NVIDIA Jetson Nano 2GB is the ultimate starter AI computer for educators and students to teach and learn AI at an incredibly affordable price,” said Deepu Talla, Vice President and General Manager of Edge Computing at NVIDIA. “Duckietown and its edX MOOC are leveraging Jetson to take hands-on experimentation and understanding of AI and autonomous machines to the next level.”

“The Duckietown educational platform provides a hands-on, scaled-down, accessible version of real-world autonomous systems,” said Emilio Frazzoli, Professor of Dynamic Systems and Control at ETH Zurich. “Integrating NVIDIA’s Jetson Nano power in Duckietown enables unprecedented access to state-of-the-art compute solutions for learning autonomy.”

Learn more

To know more about the technical specifications of the new NVIDIA powered Duckiebot, or to pre-order yours, visit the Duckietown project shop here.

The new Duckiebot will be also used in the “Self-driving Cars with Duckietown” Massive Online Open Course (MOOC) that will be held in early 2021 on edX. You can find more information about the MOOC here.

Round 3 of the AI Driving Olympics is underway!

The AI Driving Olympics (AI-DO) is back!

We are excited to announce the launch of the AI-DO 3, which will culminate in a live competition event to be held at NeurIPS this Dec. 13-14.

The AI-DO is a global robotics competition that comprises a series of events based on autonomous driving. This year there are three events: urban driving (Duckietown), advanced perception (nuScenes), and racing (AWS DeepRacer). The objective of the AI-DO is to engage people from around the world in friendly competition, while simultaneously benchmarking and advancing the field of robotics and AI.

Check out our official press release.

  • Learn more about the AI-DO competition here.

If you've already joined the competition we want to hear from you! 

Share your pictures on Facebook and Twitter

Duckietown Workshop at RoboCup Junior

In collaboration with the RoboCup Federation, the Duckietown Foundation will be offering workshops at RoboCup 2019 in Sydney, Australia, providing a hands-on introduction to the Duckietown platform.

We will be hosting three one-day workshops as part of RoboCup 2019 from July 4-6, 2019 for teachers, students, and independent learners who are interested in finding out more about the Duckietown platform. Attendance is completely free and everyone is welcome to apply, even if you are not participating in RoboCup. There are no formal requirements, though basic familiarity with GNU/Linux and shell usage is recommended.

If you would like to apply to attend a workshop, please complete this form.

We will have Duckiebots and Duckietowns for participants to use. However, you are more than welcome to bring your own Duckiebot, available for purchase at https://get.duckietown.org

Congratulations to the winners of the second edition of the AI Driving Olympics!

Team JetBrains came out on top on all 3 challenges

It was a busy (and squeaky) few days at the International Conference on Robotics and Automation in Montreal for the organizers and competitors of the AI Driving Olympics. 

The finals were kicked off by a semifinals round, where we evaluated the top 5 submissions from the Lane Following in Simulation leaderboard. The finalists (JBRRussia and MYF) moved forward to the more complicated challenges of Lane Following with Vehicles and Lane Following with Vehicles and Intersections.

Results from the AI-DO2 Finals event on May 22, 2019 at ICRA

If you couldn’t make it to the event and missed the live stream on Facebook, here’s a short video of the first run of the JetBrains Lane Following submission.

Thanks to everyone that competed, dropped in to say hello, and cheered on the finalists by sending the song of the Duckie down the corridors of the Palais des Congrès. 

A few pictures from the event

Don't know much about the AI Driving Olympics?

It is an accessible and reproducible autonomous car competition designed with straightforward standardized hardware, software and interfaces.

Get Started

Step 1: Build and test your agent with our available templates and baselines

Step 2: Submit to a challenge

Check out the leaderboard

View your submission in simulation

Step 3: Run your submission on a robot

in a Robotarium

Round 2 of the AI Driving Olympics is underway!

The AI-DO is back!

We are excited to announce that we are now ready to accept submissions for AI-DO 2, which will culminate in a live competition event to be held at ICRA 2019 this May 20-22.

The AI Driving Olympics is a global robotics competition that comprises a series of challenges based on autonomous driving. The AI-DO provides a standardized simulation and robotics platform that people from around the world use to engage in friendly competition, while simultaneously advancing the field of robotics and AI. 

Check out our official press release.

The finals of AI-DO 1 at NeurIPS, December 2018

We want to see your classical robotics and machine-learning-based algorithms go head to head on the competition track. Get started today!

Want to learn more or join the competition? Information and get started instructions are here.

If you've already joined the competition we want to hear from you! 

Share your pictures on Facebook and Twitter

 Get involved in the community by:

asking for help

offering help

AI-DO I Interactive Tutorials

The AI Driving Olympics, presented by the Duckietown Foundation with help from our partners and sponsors, is now in full swing. Check out the leaderboard!

We now have templates for ROS, PyTorch, and TensorFlow, as well as an agnostic template.

We also have baseline implementations using the classical pipeline, imitation learning with data from both simulation and real Duckietown logs, and reinforcement learning.

We are excited to announce that we will be hosting a series of interactive tutorials for competitors to get started. These tutorials will be streamed live from our Facebook page.

See here for the full tutorial schedule.