Robotics and Planning
Improving Kinodynamic Planners for Vehicular Navigation with Learned Goal-Reaching Controllers
Abstract
This paper aims to improve the path quality and computational efficiency of sampling-based kinodynamic planners for vehicular navigation. It proposes a learning framework for identifying promising controls during the expansion process of sampling-based planners. Given a dynamics model, a controller is trained offline via reinforcement learning to return a low-cost control that reaches a local goal state in the absence of obstacles. Because this training depends only on the system dynamics and not on any specific environment, it is data-efficient, and the resulting controller can be reused across environments. Online, the planner generates local goal states for the learned controller, alternating between an informed mode that biases expansion toward the goal and an exploratory, random mode. For informed expansion, local goal states are generated either via medial-axis information in environments with obstacles or via wavefront information in setups with traversability costs. The learning process and resulting planning framework are evaluated on first- and second-order differential drive systems as well as a physically simulated Segway robot. The results show that the proposed integration of learning and planning produces higher-quality paths than sampling-based kinodynamic planning with random controls, in fewer iterations and less computation time.
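The expansion loop described above, where a learned controller replaces random control sampling, can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the system is a first-order 2D integrator, the "learned" controller is stood in by a simple proportional rule, and all helper names (`policy`, `propagate`, `sample_local_goal`, `expand`) are hypothetical placeholders.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Node:
    state: tuple            # (x, y) for a toy first-order system
    parent: "Node" = None

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def policy(state, local_goal, max_speed=1.0):
    """Stand-in for the learned goal-reaching controller: returns a
    control (vx, vy) and a duration steering toward local_goal.
    The paper trains this mapping offline with reinforcement learning."""
    dx, dy = local_goal[0] - state[0], local_goal[1] - state[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return (0.0, 0.0), 0.0
    speed = min(max_speed, d)
    return (speed * dx / d, speed * dy / d), min(1.0, d / speed)

def propagate(state, control, duration, obstacle_free=lambda s: True, dt=0.1):
    """Euler-integrate the toy dynamics; reject if any intermediate
    state collides."""
    s, t = state, 0.0
    while t < duration:
        s = (s[0] + control[0] * dt, s[1] + control[1] * dt)
        if not obstacle_free(s):
            return s, False
        t += dt
    return s, True

def sample_local_goal(goal, informed, bounds=(-5.0, 5.0)):
    """Informed: the global goal itself (a stand-in for medial-axis /
    wavefront guidance); exploratory: a uniform random state."""
    if informed:
        return goal
    return (random.uniform(*bounds), random.uniform(*bounds))

def expand(tree, goal, p_informed=0.5):
    """One expansion iteration: pick a local goal, select the nearest
    tree node, query the controller, propagate, and add the new node."""
    local_goal = sample_local_goal(goal, informed=random.random() < p_informed)
    node = min(tree, key=lambda n: distance(n.state, local_goal))
    control, duration = policy(node.state, local_goal)
    new_state, valid = propagate(node.state, control, duration)
    if valid:
        tree.append(Node(new_state, node))

# Usage: grow a tree from the origin toward the goal (4, 4).
random.seed(0)
tree = [Node((0.0, 0.0))]
goal = (4.0, 4.0)
for _ in range(200):
    expand(tree, goal)
best = min(distance(n.state, goal) for n in tree)
```

The key design point the paper argues for is visible in `policy`: because it encodes only the dynamics (not the map), the same controller is reused in every environment, while obstacle handling stays in `propagate` and the planner's tree search.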
Summary
This paper improves kinodynamic planning for vehicular navigation by incorporating learned goal-reaching controllers. It is relevant to readers searching for motion planning, autonomous driving, robotics planning, and hybrid approaches that combine learned control with classical planners.
Core Contributions
- Trains a goal-reaching controller offline via reinforcement learning using only the system dynamics, making the process data-efficient and reusable across environments.
- Integrates the learned controller into the expansion step of sampling-based kinodynamic planners, generating informed local goals via medial-axis or wavefront information.
- Demonstrates higher-quality paths in fewer iterations and less computation time on first- and second-order differential drive systems and a physically simulated Segway robot.
Why this paper matters
- Combines planning and learning rather than treating them as separate stacks.
- Keeps the structure of sampling-based planning while using learning only for the local, obstacle-free control problem, where it is most data-efficient.
- Targets realistic vehicle dynamics where kinodynamic constraints matter, including a physically simulated Segway.
Context
This paper belongs to the line of work that combines classical motion planning with learned local control. Relative to pure sampling-based kinodynamic planning, it uses learned goal-reaching controllers to bias expansion toward better trajectories while preserving planning structure.
Relevance
Cite this paper when you need a reference for kinodynamic planning with learned controllers, hybrid planning-and-learning methods for autonomous vehicles, or motion planning under realistic vehicle dynamics.
Keywords
Kinodynamic planning, autonomous vehicles, learned controllers, motion planning, robotics, navigation.
BibTeX
@inproceedings{sivaramakrishnan2021kinodynamic,
title={Improving Kinodynamic Planners for Vehicular Navigation with Learned Goal-Reaching Controllers},
author={Sivaramakrishnan, Aravind and Granados, Edgar and Karten, Seth and McMahon, Troy and Bekris, Kostas E.},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2021}
}