Introduction
Recently, Deep Reinforcement Learning (DRL) has been applied to many AI applications, including decision making for self-driving vehicles. Among these applications, AWS DeepRacer is a world-class reinforcement learning racing competition, well known for agents that achieve super-human driving performance using only camera images as input.
Starting from AWS DeepRacer, our team developed a high-performance, end-to-end distributed training platform built on containers and RPC (Remote Procedure Call). This platform significantly reduces training time and thereby speeds up the search for the best self-driving model. As a result, our team won numerous awards, including 1st and 3rd place in the 2020 DeepRacer World Championship, 3rd place in the 2019 DeepRacer World Championship, and several virtual online race championships.
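To illustrate the worker–learner pattern behind such a platform, the following minimal sketch shows containerized rollout workers submitting trajectories to a learner over RPC. It is an assumption for illustration only, not the actual platform code: Python's standard-library xmlrpc stands in for whatever RPC framework the platform uses, and names such as submit_trajectory and replay_buffer are hypothetical.

```python
# Sketch only: learner exposes an RPC endpoint; rollout workers
# (e.g. containerized simulators) push trajectories to it.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

replay_buffer = []

def submit_trajectory(trajectory):
    """Called remotely by workers; stores one rollout for the learner."""
    replay_buffer.append(trajectory)
    return len(replay_buffer)

# Learner side: serve the RPC endpoint in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 8000), allow_none=True, logRequests=False)
server.register_function(submit_trajectory)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Worker side: each containerized simulator would run a loop like this,
# collecting experience locally and sending it to the learner over RPC.
learner = ServerProxy("http://127.0.0.1:8000", allow_none=True)
fake_trajectory = [{"obs": [0.0] * 4, "action": [0.1, 0.9], "reward": 1.0}]
buffer_size = learner.submit_trajectory(fake_trajectory)
print("trajectories stored:", buffer_size)
```

Running many such workers in parallel, each in its own container, is what allows the learner to gather experience far faster than a single simulator could.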
Based on the experience gathered from AWS DeepRacer, we developed our own miniature car racing platform. By using CycleGAN, a style-transferring deep neural network that generates synthetic real-world images from simulated images, our team reduced the performance degradation caused by the sim2real gap while outperforming previous domain randomization methods. In addition, our team proposed Image-based Conditioning for Action Policy Smoothness (ICAPS), which stabilizes car control at high speed and significantly improves the lap completion rate while reducing the lap time. This method was published in the ICRA 2022 Workshop on Opportunities and Challenges with Autonomous Racing and the IJCAI 2022 AI4AD Workshop.
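The core idea of the CycleGAN step can be illustrated with a short sketch: a trained sim-to-real generator translates each simulated camera frame into a real-world-looking image before it is fed to the driving policy. This is a minimal illustration under assumed interfaces, not the authors' implementation; the module definitions, names such as G_sim2real and policy, and the input resolution are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the sim-to-real CycleGAN generator (image -> image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyPolicy(nn.Module):
    """Stand-in for the driving policy: camera image -> (steering, throttle)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )
    def forward(self, x):
        return torch.tanh(self.backbone(x))

G_sim2real = TinyGenerator().eval()   # in practice, load trained CycleGAN weights
policy = TinyPolicy()

sim_frame = torch.rand(1, 3, 120, 160) * 2 - 1   # simulated camera frame in [-1, 1]
with torch.no_grad():
    real_style_frame = G_sim2real(sim_frame)     # translate sim -> real-world style
action = policy(real_style_frame)                # policy sees a real-looking input
print(action)                                    # e.g. tensor([[steering, throttle]])
```

Because the policy is trained on translated frames, the observations it encounters on the physical car resemble its training distribution, which is what narrows the sim2real gap.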