Abstract

Robustness Analysis of Behavioral Cloning-Based Deep Learning Models for Obstacle Mitigation in Autonomous Vehicles

Maneuvering around a stationary on-road obstacle at high speed involves making multiple decisions in a split second, and an inaccurate decision may result in a crash. One of the key decisions is whether the stationary on-road obstacle can be surpassed. The model learns to clone the driver's behavior of maneuvering around a non-surpassable obstacle and driving over a surpassable one; no data labeled "surpassable" or "non-surpassable" was provided during training. We have developed an array of test cases to verify the robustness of CNN models used in autonomous driving. By experimenting with activation functions and dropout, the model achieves an accuracy of 87.33% and a run time of 4478 seconds using only 4881 images (training and testing combined). The model is trained on a limited set of on-road stationary obstacles. This paper provides a unique method to verify the robustness of CNN models for obstacle mitigation in autonomous vehicles.
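For readers unfamiliar with behavioral cloning, the sketch below illustrates the general style of model the abstract refers to: a CNN that maps camera frames to a driving command, with dropout and activation functions as tunable choices. It assumes TensorFlow/Keras and a PilotNet-like layer layout; the layer sizes, activation functions, and dropout rate are illustrative placeholders, not the configuration reported in the paper.

```python
# Hypothetical behavioral-cloning CNN sketch (TensorFlow/Keras assumed).
# Architecture details below are illustrative, not the paper's exact model.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cloning_model(input_shape=(66, 200, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255.0),                  # normalize pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),                            # dropout rate tuned experimentally
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                                # steering command output
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


# Training pairs camera frames with the driver's recorded steering, so the
# network implicitly learns when to steer around an obstacle and when to
# drive straight over it -- no explicit "surpassable" labels are needed.
model = build_cloning_model()
model.summary()
```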


Author(s): Pranit Gopaldas Shah

