Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents

Why is this important?

As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be reproducible.

Compared to other sciences, benchmarking autonomy poses specific challenges: the complexity of the software stacks, the variability of the hardware, and the reliance on data-driven techniques, among others.

We describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained by design from the beginning of the research/development processes.

We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet.

The Duckietown Automated Laboratories (Autolabs)

One of the central components of this setup is the Duckietown Autolab (DTA), a remotely accessible standardized setup that is itself also relatively low-cost and reproducible.

DTAs include an off-the-shelf camera-based localization system. The remote accessibility of the hardware testing environment enables experimental benchmarking that can be performed on a network of DTAs in different geographical locations.

The DUCKIENet

When evaluating agents, careful definition of interfaces allows users to choose between local and remote evaluation, using simulation, logs, or remote automated hardware setups. The Decentralized Urban Collaborative Benchmarking Environment Network (DUCKIENet) is an instantiation of this design, based on the Duckietown platform, that provides an accessible and reproducible framework focused on autonomous vehicle fleets operating in model urban environments.

The DUCKIENet enables users to develop and test a wide variety of different algorithms using available resources (simulator, logs, cloud evaluations, etc.), and then deploy their algorithms locally in simulation, locally on a robot, in a cloud-based simulation, or on a real robot in a remote lab. In each case, the submitter receives feedback and scores based on well-defined metrics.

Validation

We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs. We built DTAs at the Swiss Federal Institute of Technology in Zurich (ETHZ) and at the Toyota Technological Institute at Chicago (TTIC).

Conclusions

Our contention is that there is a need for stronger efforts towards reproducible research in robotics, and that to achieve this we need to consider evaluation on equal terms with the algorithms themselves. In this fashion, we can obtain reproducibility by design throughout the research and development processes. Achieving this on a large scale will contribute to a more systematic evaluation of robotics research and, in turn, accelerate the pace of development.

IROS2020: Watch The Workshop on Benchmarking Progress in Autonomous Driving

What a start for IROS 2020 with the "Benchmarking Progress in Autonomous Driving" workshop!

The 2020 edition of the International Conference on Intelligent Robots and Systems (IROS) started great with the workshop on “Benchmarking Progress in Autonomous Driving”.

The workshop was held virtually on October 25th, 2020, in an engaging and concise format: a sequence of four 1.5-hour moderated round-table discussions (plus an introduction), each centered on one of four themes.

The discussions on the methods by which progress in autonomous driving is evaluated, benchmarked, and verified were exciting. Many thanks to all the panelists and the organizers!  

Here are the videos of the various sessions. 

Opening remarks

Theme 1: Assessing progress for the field of autonomous vehicles (AVs)

Moderator: Andrea Censi

Theme 2: How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?

Moderator: Jacopo Tani

Theme 3: Best practices for AV benchmarking

Moderator: Liam Paull

Theme 4: Do we need new paradigms for AV development?

Moderator: Matt Walter

Closing remarks

You can find additional information about the workshop here.

The Workshop on Benchmarking Progress in Autonomous Driving at IROS 2020

Duckietown also has a science mission: to help develop technologies for reproducible benchmarking in robotics.

The IROS 2020 Workshop on Benchmarking Autonomous Driving provides a platform to investigate and discuss the methods by which progress in autonomous driving is evaluated, benchmarked, and verified.

It is free to attend.

The workshop is structured into four panels, one for each of the following themes:

  1. Assessing Progress for the Field of Autonomous Driving
  2. How to evaluate AV risk from the perspective of real world deployment (public acceptance, insurance, liability, …)?
  3. Best practices for AV benchmarking
  4. Algorithms and Paradigms

The workshop will take place on Oct. 25, 2020, starting at 10am EDT.

Invited Panelists

We have a list of excellent invited panelists from academia, industry, and regulatory organizations, including:

  • Emilio Frazzoli (ETH Zürich / Motional)
  • Alex Kendall (Wayve)
  • Jane Lappin (National Academy of Sciences)
  • Bryant Walker Smith (USC Faculty of Law)
  • Luigi Di Lillo (Swiss Re Insurance)
  • John Leonard (MIT)
  • Fabio Bonsignorio (Heron Robots)
  • Michael Milford (QUT)
  • Oscar Beijbom (Motional)
  • Raquel Urtasun (University of Toronto / Uber ATG)

Please join us...

Please join us on October 25, 2020, starting at 10am EDT, for what should be a very engaging conversation about the difficult issues around benchmarking progress in autonomous vehicles.

For full details about the event please see here.

Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World

We asked Róbert Moni to tell us more about his recent work. Enjoy the read!

The author's perspective

Most of us, proud nerd community members, experience driving for the first time through the discrete actions taken on our keyboards. We believe that the harder we push the forward arrow (or the W-key), the faster the car in the game will accelerate (sooo true 😊). Few of us believe that this task can be solved with machine learning. Even fewer of us believe that it can be done accurately and robustly with a basic Deep Reinforcement Learning (DRL) method known as Deep Q-Learning Networks (DQN).

It turned out to be true in the case of a Duckiebot and, even better, with some added computer vision techniques the agent was able to perform well both in simulation (where the training was carried out) and in the real world.

The pipeline

The complete training pipeline, carried out in the Duckietown-gym environment, is visualized in the figure above and works as follows. First, the camera images go through several preprocessing steps (a minimal code sketch follows the list):

  • resizing to a smaller resolution (60×80) for faster processing;
  • cropping the upper part of the image, which doesn’t contain useful information for the navigation;
  • segmenting important parts of the image based on their color (lane markings);
  • and normalizing the image;
  • finally a sequence is formed from the last 5 camera images, which will be the input of the Convolutional Neural Network (CNN) policy network (the agent itself).
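Here is a minimal sketch of these steps in Python, assuming OpenCV and NumPy; the crop boundary and the HSV color thresholds are illustrative guesses, not the authors' exact values:

```python
import cv2
import numpy as np
from collections import deque

frame_buffer = deque(maxlen=5)  # holds the last 5 preprocessed frames

def preprocess(rgb_image):
    # Resize to 60x80 for faster processing (cv2.resize takes (width, height)).
    small = cv2.resize(rgb_image, (80, 60))
    # Crop the upper part: it carries no useful information for navigation.
    cropped = small[20:, :, :]  # now 40x80
    # Segment lane markings by color in HSV space (thresholds are illustrative).
    hsv = cv2.cvtColor(cropped, cv2.COLOR_RGB2HSV)
    white = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)
    segmented = cv2.bitwise_and(cropped, cropped, mask=mask)
    # Normalize to [0, 1].
    return segmented.astype(np.float32) / 255.0

def observation():
    # Stack the last five 40x80 RGB frames channel-wise -> shape (40, 80, 15).
    return np.concatenate(list(frame_buffer), axis=-1)
```

Each new camera frame is pushed through preprocess() and appended to frame_buffer; once five frames have accumulated, observation() produces the (40, 80, 15) input tensor described in "The workings" below.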

The agent is trained in the simulator with the DQN algorithm based on a reward function that describes how accurately the robot follows the optimal curve. The output of the network is mapped to wheel speed commands.
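To make this concrete, here is a hedged sketch of the action mapping and a lane-following reward of the kind described; the wheel speeds and penalty weights are illustrative assumptions, not the values used in the paper:

```python
# Discrete policy outputs -> (left wheel, right wheel) speed commands.
ACTIONS = {
    0: (0.3, 0.6),  # turn left (slower left wheel)
    1: (0.6, 0.6),  # go straight
    2: (0.6, 0.3),  # turn right (slower right wheel)
}

def reward(dist_from_center, angle_from_tangent):
    # Reward how accurately the robot follows the optimal curve: penalize
    # lateral offset from the lane center and heading error w.r.t. its tangent.
    return 1.0 - 2.0 * abs(dist_from_center) - 0.5 * abs(angle_from_tangent)
```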

The workings

The CNN was trained on the preprocessed images. The network was designed so that inference can be performed in real time on a computer with limited resources (i.e., one without a dedicated GPU). The input of the network is a tensor of shape (40, 80, 15), the result of stacking five RGB images. The network consists of three convolutional layers, each followed by ReLU (nonlinearity) and MaxPool (dimension-reduction) operations.

The convolutional layers use 32, 32, and 64 filters of size 3 × 3, and the MaxPool layers use 2 × 2 filters. The convolutional layers are followed by fully connected layers with 128 and 3 outputs. The output of the last layer selects one of the three actions, turning left, going straight, or turning right, which is then mapped to wheel speed commands.
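For illustration, here is a PyTorch sketch of a network matching this description; the padding and exact layer ordering are assumptions (with 3 × 3 convolutions, padding 1, and three 2 × 2 poolings, the 40 × 80 input shrinks to 5 × 10):

```python
import torch.nn as nn

class PolicyCNN(nn.Module):
    """Sketch of the described policy network (channels-first: 15x40x80 input)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(15, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 10, 128),  # 64 feature maps of 5x10 after pooling
            nn.ReLU(),
            nn.Linear(128, 3),  # Q-values: turn left, go straight, turn right
        )

    def forward(self, x):
        return self.head(self.features(x))
```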

Learn more

Our work was accepted and presented at the IEEE World Congress on Computational Intelligence (WCCI) 2020. We plan to publish the source code after the AI-DO 5 competition. Our paper is available on ieeexplore.ieee.org, deepai.org, and arxiv.org.

Check out our simulation and real-world demo on YouTube, performed at our Duckietown Robotarium at the Budapest University of Technology and Economics.

Community Spotlight: Arian Houshmand – Control Algorithms for Traffic

No one likes sitting in traffic: it is a waste of time and damaging to the environment. Thankfully, researcher Arian Houshmand from the Boston University CODES Lab is on the case, and he's using Duckietown to help solve the problem.

Control algorithms to improve traffic

by Arian Houshmand

Traffic congestion around the world is worsening, according to transport data firm INRIX. In the U.S. alone, Americans wasted an average of 97 hours in traffic in 2018, two precious weekends' worth of time. Captivity in traffic also cost them nearly $87 billion in 2018, an average of $1,348 per driver. Clearly, the need for smart transportation is becoming urgent, not only to relieve the mental and financial strain on drivers, but also to address the significant economic toll on affected cities.

Fortunately, development of intelligent mobility technologies is advancing. In an ongoing research project funded by the U.S. Department of Energy's (DOE) Advanced Research Projects Agency-Energy (ARPA-E) NEXTCAR program, BU researchers, in collaboration with researchers from the University of Delaware, the University of Michigan, Oak Ridge National Lab, and Bosch, are developing technologies for Connected and Automated Vehicles (CAVs) to increase their fuel efficiency and, as a by-product, reduce traffic congestion.

The goal of this project is to design control and optimization technologies that enable a plug-in hybrid electric vehicle (PHEV) to communicate with other cars and city infrastructure and act on that information. By providing cars with situational self-awareness, they will be able to efficiently calculate the best possible route, accelerate and decelerate as needed, and manage their powertrain. This is an important task toward advancing the vision to create an ‘Internet of Cars,’ in which connected and self-driving cars operate seamlessly with each other and traffic infrastructure, improving fuel efficiency and safety, and reducing traffic congestion and pollution.

Today's commercially available self-driving cars rely on costly sensors, specifically radar, cameras, and LIDAR, to operate semi-autonomously. In the NEXTCAR project, BU researchers and their project collaborators are looking to go beyond that by developing decision-making algorithms that improve the autonomous operation of a single hybrid vehicle, as well as algorithms for communication between vehicles and their environment, enabling self-driving cars to cooperate and interact within their socio-cyber-physical environment.

Several different functions have been developed throughout this project including:

  • Eco-routing: finding the optimal route for a vehicle to travel between two points, i.e., the route that consumes the least energy (see the sketch after this list).

  • Eco-AND (Economical Arrival and Departure): an optimal control framework for approaching a traffic light without stopping at the intersection, using traffic light cycle time information.

  • CACC (Cooperative Adaptive Cruise Control): an extension of adaptive cruise control (ACC) that uses vehicle-to-vehicle (V2V) communication to increase safety and energy efficiency by reducing headway.
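As a toy illustration of eco-routing (not the project's code), one can model the road network as a weighted graph whose edge weights are energy costs and run a shortest-path search over energy instead of time; the networkx library and the numbers below are assumptions for the sketch:

```python
import networkx as nx

G = nx.DiGraph()
# Hypothetical road segments (from, to, energy in Wh), e.g. derived from a map
# plus a vehicle energy model (slope, speed limits, stop density, ...).
G.add_weighted_edges_from(
    [("A", "B", 120.0), ("B", "D", 90.0), ("A", "C", 80.0), ("C", "D", 95.0)],
    weight="energy",
)

# Minimize total energy rather than distance or travel time.
eco_route = nx.shortest_path(G, "A", "D", weight="energy")
print(eco_route)  # ['A', 'C', 'D'] -- cheaper in energy even if not shortest
```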

In order to validate and test the developed technologies, the researchers first use simulation environments to test the algorithms. After verifying them in simulation, they implement the algorithms on Duckietown, and finally deploy them on real cars (an Audi A3 e-tron) at the University of Michigan's Mcity test track for self-driving cars.

We use Duckietown to train students on how to implement their algorithms on embedded systems and also as a means to demonstrate our developed technologies in action and in a live setting. Since most of our research focuses on Connected and Automated Vehicles (CAVs), we need to establish connections between individual Duckiebots and traffic lights. As a result, we created a platform for exchanging information and control commands between all the cars and traffic lights.
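For flavor, here is a minimal hedged sketch of such an exchange as a ROS (rospy) node; the topic names and the message payload are invented for illustration, not the lab's actual interfaces:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def on_traffic_light(msg):
    # React to the broadcast phase of a (hypothetical) traffic light topic.
    rospy.loginfo("traffic light phase: %s", msg.data)

rospy.init_node("v2i_bridge")
pub = rospy.Publisher("/duckiebot1/v2i/state", String, queue_size=10)
rospy.Subscriber("/trafficlight1/phase", String, on_traffic_light)

rate = rospy.Rate(10)  # broadcast the car's state at 10 Hz
while not rospy.is_shutdown():
    pub.publish(String(data="pose,speed"))  # placeholder payload
    rate.sleep()
```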

Online localization of Duckiebots is a challenging task, and is missing from the current framework. We relied on our external motion capture sensors (OptiTrack) to localize the robots.

Duckietown is a nice platform for performing experiments on autonomous robots, since it is relatively simple to set up the town and the Duckiebots. Moreover, the built-in perception and lane-keeping capabilities are very useful for kicking off experiments quickly. Traffic lights and signs are also helpful for creating different scenarios to test algorithms in city-like settings.

Two additions would make Duckietown even more useful for our application: feedback sensors for measuring wheel rotational speed/position, since wheel-speed errors are difficult to correct without them, and a ROS node for exchanging information between robots and traffic lights, for testing collaborative control algorithms.

AI-DO 3 – Urban Event Winners

In case you missed it, AI-DO 3 has come and gone. Interested in reliving the competition? Here's the video.

We had a great time at NeurIPS hosting the Third Edition of the AI Driving Olympics. As usual, the sound of Duckies attracted an engaged and supportive crowd.

Racing Event

The competition began with the Racing Event, hosted by AWS DeepRacer, who ran their top 10 submissions and selected the winner based on the fastest completed lap.

Racing Event Winner 
Ayrat Baykov at 8:08 seconds

Advanced Perception Event

The winners of the Advanced Perception Event, hosted by APTIV and based on the nuScenes dataset, were announced. Luckily, a member of the winning team was present to accept the award.

Rank 3
CenterTrack – Open and Vision

Rank 2
VV_Team

Rank 1
StanfordIPRL-TRI

Urban Event

The competition culminated with Duckietown’s own Urban Driving Event, where we ran the top submissions for each of the three challenges on our competition tracks.

Winners

Lane Following 

JBRRussia1: Konstantin Chaika, Nikita Sazanovich, Kirill Krinkin, Max Kuzmin

Lane Following with Vehicles

phmarm

Lane Following with Vehicles and Intersections

frank_qcd_qk

Final Scoreboard

A few pictures from the event

Congratulations to all the winners, and thanks for participating in the competition. We look forward to seeing you at AI-DO 4!