From Learning-based Traffic Reconstruction to Autonomous Driving


Rapid urbanization and increasing traffic have driven the digitalization of modern cities and the automation of transportation. As new technologies such as VR systems and self-driving cars emerge, there is a growing demand to incorporate realistic traffic flows into virtualized cities. In this talk, we first present a novel method for learning-based traffic animation and visualization using GPS data. This approach reconstructs city-scale traffic through statistical learning on GPS data, combined with metamodel-based simulation optimization for dynamic data completion in areas with insufficient data coverage.

We also propose a novel differentiable hybrid traffic simulator, which simulates traffic with a hybrid of macroscopic and microscopic models and can be integrated directly into a neural network for traffic control and flow optimization. It is the first differentiable traffic simulator for macroscopic and hybrid models that can compute gradients of traffic states across time steps and inhomogeneous lanes.

Next, we present a unified collision avoidance algorithm for navigating arbitrary agents, from pedestrians to various types of robots, including vehicles, within a single framework that uses different nonlinear motion extrapolations to support agent dynamics. We then describe a learning-based, multi-level control policy for autonomous vehicles, developed by analyzing simulated accident data and combining our collision avoidance algorithm with data annotation and parameterized traffic and vehicle simulation.

We further introduce a simple yet effective framework for improving the robustness of learning algorithms against (input) image corruptions in autonomous driving, which arise from both internal factors (e.g., sensor noise and hardware abnormalities) and external factors (e.g., lighting, weather, visibility, and other environmental effects). We conclude by suggesting possible future directions.
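To illustrate the idea behind gradient-based traffic control, the toy sketch below steps a one-lane macroscopic density model (a Greenshields-style flux, which is an assumption here, not the talk's actual model) and optimizes a boundary inflow control by gradient descent. All names and parameters are illustrative, and a central finite difference stands in for the analytic gradients a true differentiable simulator would propagate.

```python
# Toy macroscopic traffic step + numerical gradient of a density objective
# with respect to an inflow control. Illustrative only; parameters assumed.

V_MAX = 30.0      # free-flow speed (m/s), assumed
RHO_MAX = 0.2     # jam density (veh/m), assumed
DT, DX = 0.5, 50.0

def flux(rho):
    """Greenshields flux: q = rho * v_max * (1 - rho / rho_max)."""
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def simulate(inflow, rho0, steps=20):
    """Upwind update of cell densities; `inflow` is the boundary control."""
    rho = list(rho0)
    for _ in range(steps):
        new = rho[:]
        for i in range(len(rho)):
            q_in = inflow if i == 0 else flux(rho[i - 1])
            new[i] = rho[i] - (DT / DX) * (flux(rho[i]) - q_in)
            new[i] = min(max(new[i], 0.0), RHO_MAX)  # keep densities physical
        rho = new
    return rho

def loss(inflow, rho0, target=0.05):
    """Squared deviation of the final densities from a target density."""
    return sum((r - target) ** 2 for r in simulate(inflow, rho0))

def grad(inflow, rho0, eps=1e-6):
    """Central finite difference standing in for an analytic gradient."""
    return (loss(inflow + eps, rho0) - loss(inflow - eps, rho0)) / (2 * eps)

rho0 = [0.1] * 8             # initial densities, above the target
u = 0.5                      # initial inflow guess (veh/s)
for _ in range(50):          # plain gradient descent on the control
    u -= 0.05 * grad(u, rho0)
```

In the actual simulator described in the talk, gradients flow analytically through every simulation step and across lanes, so the control loop above could sit inside a neural network's training graph rather than relying on finite differences.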
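One common way to harden a perception model against such corruptions is train-time corruption augmentation; the minimal sketch below shows the flavor of the idea. The corruption set, severity parameter, and helper names are illustrative assumptions, not the framework presented in the talk.

```python
# Hedged sketch: synthetic image corruptions for robustness training.
# The corruption types and parameters are illustrative, not from the talk.
import random

def corrupt(image, severity=0.1):
    """Apply one randomly chosen corruption to a grayscale image
    (list of rows, pixel values in [0, 1])."""
    kind = random.choice(["noise", "brightness", "contrast"])
    out = []
    for row in image:
        new_row = []
        for p in row:
            if kind == "noise":          # additive sensor-style noise
                p = p + random.gauss(0.0, severity)
            elif kind == "brightness":   # global lighting shift
                p = p + severity
            else:                        # contrast change around mid-gray
                p = 0.5 + (p - 0.5) * (1.0 + severity)
            new_row.append(min(max(p, 0.0), 1.0))  # clamp to valid range
        out.append(new_row)
    return out

# During training, each batch could mix clean and corrupted views, e.g.:
#   loss = task_loss(model(x)) + task_loss(model(corrupt(x)))
```

Internal corruptions (sensor noise) map naturally to the additive-noise branch, while external ones (lighting, weather) correspond to global transforms like the brightness and contrast shifts.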