Implicit and Learned Models
Robots must learn from experience and data to operate efficiently in unmodeled, unknown, and previously unseen domains. Many methods exist for learning implicit models of the world, capturing everything from 3D reconstructions of a scene to collision risk to task constraints inferred from human demonstrations. There are abundant opportunities to integrate these models into existing algorithmic frameworks or new neurosymbolic approaches, extending planning to problems previously considered intractable.
2025
- [arXiv] Parallel Heuristic Search as Inference for Actor-Critic Reinforcement Learning Models. Hanlan Yang, Itamar Mishani, Luca Pivetti, Zachary Kingston, and Maxim Likhachev. Under Review.
Actor-Critic models are a class of model-free deep reinforcement learning (RL) algorithms that have demonstrated effectiveness across various robot learning tasks. While considerable research has focused on improving training stability and data sampling efficiency, most deployment strategies have remained relatively simplistic, typically relying on direct actor policy rollouts. In contrast, we propose PACHS (Parallel Actor-Critic Heuristic Search), an efficient parallel best-first search algorithm for inference that leverages both components of the actor-critic architecture: the actor network generates actions, while the critic network provides cost-to-go estimates to guide the search. Two levels of parallelism are employed within the search – actions and cost-to-go estimates are generated in batches by the actor and critic networks respectively, and graph expansion is distributed across multiple threads. We demonstrate the effectiveness of our approach in robotic manipulation tasks, including collision-free motion planning and contact-rich interactions such as non-prehensile pushing.
@misc{yang2025pachs, title = {Parallel Heuristic Search as Inference for Actor-Critic Reinforcement Learning Models}, author = {Yang, Hanlan and Mishani, Itamar and Pivetti, Luca and Kingston, Zachary and Likhachev, Maxim}, eprint = {2509.25402}, archiveprefix = {arXiv}, primaryclass = {cs.RO}, year = {2025}, note = {Under Review}, }
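The search-as-inference idea above lends itself to a compact illustration. Below is a minimal, serial Python sketch, assuming a toy 2D point domain with stand-in actor and critic callables: the actor proposes a batch of candidate actions from each expanded state, and the critic's cost-to-go estimate orders a best-first frontier. It omits PACHS's batched network queries and multi-threaded graph expansion; all names and parameters are illustrative, not the paper's implementation.

```python
# Minimal sketch: best-first search guided by actor proposals and critic cost-to-go.
import heapq
import numpy as np

def actor(state, k):
    """Stand-in for the actor network: k unit-step directions (ignores state)."""
    angles = np.random.uniform(0, 2 * np.pi, size=k)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def critic(state, goal):
    """Stand-in for the critic network: straight-line cost-to-go estimate."""
    return float(np.linalg.norm(goal - state))

def step(state, action):
    """Toy transition model: take a fixed-length step in the proposed direction."""
    return state + 0.5 * action

def best_first_inference(start, goal, k=8, tol=0.5, max_expansions=2000):
    tie = 0  # tie-breaker so the heap never compares states
    frontier = [(critic(start, goal), tie, tuple(start), [tuple(start)])]
    while frontier and max_expansions > 0:
        _, _, s, path = heapq.heappop(frontier)
        state = np.array(s)
        if critic(state, goal) < tol:            # goal test
            return path
        max_expansions -= 1
        actions = actor(state, k)                # batch of action proposals
        succs = [step(state, a) for a in actions]
        costs = [critic(sp, goal) for sp in succs]   # batched in PACHS
        for sp, c in zip(succs, costs):
            tie += 1
            heapq.heappush(frontier, (c, tie, tuple(sp), path + [tuple(sp)]))
    return None

path = best_first_inference(np.zeros(2), np.array([5.0, 3.0]))
print(f"reached goal after {len(path)} states" if path else "no path found")
```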
- [arXiv] Look as You Leap: Planning Simultaneous Motion and Perception for High-DoF Robots. Qingxi Meng, Emiliano Flores, Carlos Quintero-Peña, Peizhu Qian, Zachary Kingston, Shannan K. Hamlin, Vaibhav Unhelkar, and Lydia E. Kavraki. Under Review.
In this work, we address the problem of planning robot motions for a high-degree-of-freedom (DoF) robot that effectively achieves a given perception task while the robot and the perception target move in a dynamic environment. Achieving navigation and perception tasks simultaneously is challenging, as these objectives often impose conflicting requirements. Existing methods that compute motion under perception constraints fail to account for obstacles, are designed for low-DoF robots, or rely on simplified models of perception. Furthermore, in dynamic real-world environments, robots must replan and react quickly to changes, and directly evaluating the quality of perception (e.g., object detection confidence) is often expensive or infeasible at runtime. This problem is especially important in human-centered environments such as homes and hospitals, where effective perception is essential for safe and reliable operation. To address these challenges, we propose a GPU-parallelized perception-score-guided probabilistic roadmap planner with a neural surrogate model (PS-PRM). The planner explicitly incorporates the estimated quality of a perception task into motion planning for high-DoF robots. Our method uses a learned model to approximate perception scores and leverages GPU parallelism to enable efficient online replanning in dynamic settings. We demonstrate that our planner, evaluated on high-DoF robots, outperforms baseline methods in both static and dynamic environments, in both simulation and real-robot experiments.
@misc{meng2025look, title = {Look as You Leap: Planning Simultaneous Motion and Perception for High-{DoF} Robots}, author = {Meng, Qingxi and Flores, Emiliano and Quintero-Peña, Carlos and Qian, Peizhu and Kingston, Zachary and Hamlin, Shannan K. and Unhelkar, Vaibhav and Kavraki, Lydia E.}, eprint = {2509.19610}, archiveprefix = {arXiv}, primaryclass = {cs.RO}, year = {2025}, note = {Under Review}, }
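As a rough illustration of the perception-score-guided roadmap idea, the sketch below builds a toy 2D PRM whose edge costs add a penalty from a stand-in perception surrogate, so the shortest path trades motion cost against keeping a target observable. The domain, surrogate, and weighting are assumptions for illustration only, not PS-PRM's learned model or its GPU-parallel replanning pipeline.

```python
# Toy perception-weighted roadmap: edge cost = length + penalty for poor perception.
import heapq
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.5, 0.9])          # object the robot should keep observing

def perception_score(q):
    """Stand-in for the neural surrogate: score decays with distance to the target."""
    return np.exp(-2.0 * np.linalg.norm(q - target))

def build_prm(n=200, k=8, lam=2.0):
    nodes = rng.uniform(0, 1, size=(n, 2))
    edges = {i: [] for i in range(n)}
    for i in range(n):
        d = np.linalg.norm(nodes - nodes[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:   # connect k nearest neighbors
            cost = d[j] + lam * (1.0 - perception_score(nodes[j]))
            edges[i].append((int(j), cost))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra over the perception-weighted roadmap."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == goal:
            break
        if du > dist.get(u, np.inf):
            continue
        for v, w in edges[u]:
            if du + w < dist.get(v, np.inf):
                dist[v], prev[v] = du + w, u
                heapq.heappush(pq, (du + w, v))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

nodes, edges = build_prm()
print(f"roadmap path through {len(shortest_path(edges, start=0, goal=1))} nodes")
```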
- [arXiv] Parallel Simulation of Contact and Actuation for Soft Growing Robots. Yitian Gao*, Lucas Chen*, Priyanka Bhovad, Sicheng Wang, Zachary Kingston, and Laura H. Blumenschein. Under Review.
Soft growing robots, commonly referred to as vine robots, have demonstrated remarkable ability to interact safely and robustly with unstructured and dynamic environments. It is therefore natural to exploit contact with the environment for planning and design optimization tasks. Previous research has focused on planning under contact for passively deforming robots with pre-formed bends. However, adding active steering to these soft growing robots is necessary for successful navigation in more complex environments. To this end, we develop a unified modeling framework that integrates vine robot growth, bending, actuation, and obstacle contact. We extend the beam moment model to include the effects of actuation on kinematics under growth and then use these models to develop a fast parallel simulation framework. We validate our model and simulator with real robot experiments. To showcase the capabilities of our framework, we apply our model in a design optimization task to find designs for vine robots navigating through cluttered environments, identifying designs that minimize the number of required actuators by exploiting environmental contacts. We show the robustness of the designs to environmental and manufacturing uncertainties. Finally, we fabricate an optimized design and successfully deploy it in an obstacle-rich environment.
@misc{gaochen2025actsim, title = {Parallel Simulation of Contact and Actuation for Soft Growing Robots}, author = {Gao, Yitian and Chen, Lucas and Bhovad, Priyanka and Wang, Sicheng and Kingston, Zachary and Blumenschein, Laura H.}, eprint = {2509.15180}, archiveprefix = {arXiv}, primaryclass = {cs.RO}, year = {2025}, note = {Under Review}, }
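To give a flavor of the "parallel simulation for design search" workflow described above, the sketch below batch-evaluates many candidate planar designs at once with vectorized constant-curvature kinematics and keeps the best scoring one. This is a toy stand-in, not the paper's beam moment model with growth, actuation, and contact; every function name and number is an illustrative assumption.

```python
# Vectorized evaluation of many candidate designs in one call (design search sketch).
import numpy as np

def tip_positions(curvatures, seg_len=0.2):
    """curvatures: (N, S) array, one row per candidate design with S segments.
    Returns (N, 2) tip positions by composing planar constant-curvature arcs."""
    N, S = curvatures.shape
    x, y, theta = np.zeros(N), np.zeros(N), np.zeros(N)
    for s in range(S):
        k = curvatures[:, s]
        dtheta = k * seg_len
        small = np.abs(k) < 1e-9
        # arc chord for nonzero curvature, straight segment otherwise
        chord = np.where(small, seg_len,
                         2.0 * np.sin(dtheta / 2.0) / np.where(small, 1.0, k))
        x += chord * np.cos(theta + dtheta / 2.0)
        y += chord * np.sin(theta + dtheta / 2.0)
        theta += dtheta
    return np.stack([x, y], axis=1)

# Score 10,000 candidate designs in one vectorized call and keep the best.
goal = np.array([0.8, 0.5])
candidates = np.random.uniform(-5.0, 5.0, size=(10_000, 5))
errors = np.linalg.norm(tip_positions(candidates) - goal, axis=1)
best = candidates[np.argmin(errors)]
print("best design curvature per segment:", np.round(best, 2))
```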
- [arXiv] Variational Shape Inference for Grasp Diffusion on SE(3). S. Talha Bukhari, Kaivalya Agrawal, Zachary Kingston, and Aniket Bera. Under Review.
Grasp synthesis is a fundamental task in robotic manipulation which usually has multiple feasible solutions. Multimodal grasp synthesis seeks to generate diverse sets of stable grasps conditioned on object geometry, making the robust learning of geometric features crucial for success. To address this challenge, we propose a framework for learning multimodal grasp distributions that leverages variational shape inference to enhance robustness against shape noise and measurement sparsity. Our approach first trains a variational autoencoder for shape inference using implicit neural representations, and then uses these learned geometric features to guide a diffusion model for grasp synthesis on the SE(3) manifold. Additionally, we introduce a test-time grasp optimization technique that can be integrated as a plugin to further enhance grasping performance. Experimental results demonstrate that our shape inference for grasp synthesis formulation outperforms state-of-the-art multimodal grasp synthesis methods on the ACRONYM dataset by 6.3%, while demonstrating robustness to deterioration in point cloud density compared to other approaches. Furthermore, our trained model achieves zero-shot transfer to real-world manipulation of household objects, generating 34% more successful grasps than baselines despite measurement noise and point cloud calibration errors.
@misc{bukhari2025graspdiff, title = {Variational Shape Inference for Grasp Diffusion on SE(3)}, author = {Bukhari, S. Talha and Agrawal, Kaivalya and Kingston, Zachary and Bera, Aniket}, eprint = {2508.17482}, archiveprefix = {arXiv}, primaryclass = {cs.RO}, year = {2025}, note = {Under Review}, }
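The variational shape-inference stage can be sketched compactly. The snippet below assumes a PointNet-style encoder that maps a (possibly sparse, noisy) point cloud to a latent Gaussian whose reparameterized sample would condition a downstream SE(3) grasp diffusion model; the implicit-representation decoder and the diffusion head are omitted, and the architecture details are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of a variational shape encoder whose latent code conditions grasp generation.
import torch
import torch.nn as nn

class ShapeVAEEncoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256))
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, points):                 # points: (B, N, 3)
        feats = self.point_mlp(points)         # per-point features (B, N, 256)
        pooled = feats.max(dim=1).values       # order-invariant pooling (B, 256)
        mu, logvar = self.mu(pooled), self.logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

enc = ShapeVAEEncoder()
cloud = torch.randn(2, 1024, 3)                # two toy object point clouds
z, kl = enc(cloud)
print(z.shape, float(kl))                      # latent codes would condition the diffusion
```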
- [Workshop] Faster Behavior Cloning with Hardware-Accelerated Motion Planning. Alexiy Buynitsky and Zachary Kingston. In IEEE ICRA 2025 Workshop—RoboARCH: Robotics Acceleration with Computing Hardware and Systems.
@inproceedings{buynitsky2025wksp, title = {Faster Behavior Cloning with Hardware-Accelerated Motion Planning}, author = {Buynitsky, Alexiy and Kingston, Zachary}, booktitle = {IEEE ICRA 2025 Workshop---RoboARCH: Robotics Acceleration with Computing Hardware and Systems}, year = {2025}, }
2024
- [Abstract] Perception-aware Planning for Robotics: Challenges and Opportunities. Qingxi Meng, Carlos Quintero-Peña, Zachary Kingston, Vaibhav Unhelkar, and Lydia E. Kavraki. In 40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40).
In this work, we argue that new methods are needed to generate robot motion for navigation or manipulation while effectively achieving perception goals. We support our argument by conducting experiments with a simulated robot that must accomplish a primary task, such as manipulation or navigation, while concurrently monitoring an object in the environment. Our preliminary study demonstrates that a decoupled approach fails to achieve high success in either action-focused motion generation or perception goals, motivating further developments of approaches that holistically consider both goals.
@inproceedings{meng2024icra40, title = {Perception-aware Planning for Robotics: Challenges and Opportunities}, author = {Meng, Qingxi and Quintero-Peña, Carlos and Kingston, Zachary and Unhelkar, Vaibhav and Kavraki, Lydia E.}, booktitle = {40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40)}, year = {2024}, }
- Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty. Carlos Quintero-Peña, Wil Thomason, Zachary Kingston, Anastasios Kyrillidis, and Lydia E. Kavraki. In IEEE International Conference on Robotics and Automation.
Motion planning under sensing uncertainty is critical for robots in unstructured environments to guarantee safety for both the robot and any nearby humans. Most work on planning under uncertainty does not scale to high-dimensional robots such as manipulators, assumes simplified geometry of the robot or environment, or requires per-object knowledge of noise. Instead, we propose a method that directly models sensor-specific aleatoric uncertainty to find safe motions for high-dimensional systems in complex environments, without exact knowledge of environment geometry. We combine a novel implicit neural model of stochastic signed distance functions with a hierarchical optimization-based motion planner to plan low-risk motions without sacrificing path quality. Our method also explicitly bounds the risk of the path, offering trustworthiness. We empirically validate that our method produces safe motions and accurate risk bounds and is safer than baseline approaches.
@inproceedings{quintero2024impdist, title = {Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty}, author = {Quintero-Peña, Carlos and Thomason, Wil and Kingston, Zachary and Kyrillidis, Anastasios and Kavraki, Lydia E.}, booktitle = {IEEE International Conference on Robotics and Automation}, pages = {2360--2367}, year = {2024}, doi = {10.1109/ICRA57147.2024.10610773}, }
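One way to picture how a stochastic signed distance model yields an explicit risk bound: if the learned model predicts a Gaussian over the signed distance at a robot point, the per-point collision probability is a Gaussian tail, and a union bound gives a conservative path-level bound. The sketch below uses a hand-coded toy "stochastic SDF" and a Boole-style bound purely for illustration; it is not the paper's formulation, network, or planner.

```python
# Toy risk bound from a Gaussian signed-distance model: P(collision) = Phi(-mu/sigma).
import math

def toy_stochastic_sdf(p):
    """Stand-in for a learned stochastic SDF: distance to a unit circle obstacle at the
    origin, with sensing noise that grows with horizontal distance from the sensor."""
    mu = math.hypot(p[0], p[1]) - 1.0
    sigma = 0.05 + 0.02 * abs(p[0])
    return mu, sigma

def collision_risk(p):
    mu, sigma = toy_stochastic_sdf(p)
    # P(signed distance <= 0) under N(mu, sigma^2)
    return 0.5 * (1.0 + math.erf(-mu / (sigma * math.sqrt(2.0))))

def path_risk_bound(waypoints):
    """Union (Boole) bound on the probability that any waypoint is in collision."""
    return min(1.0, sum(collision_risk(p) for p in waypoints))

path = [(-3.0 + 0.5 * i, 1.4) for i in range(13)]   # toy straight-line path
print(f"risk bound: {path_risk_bound(path):.3f}")
```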
2023
- Object Reconfiguration with Simulation-Derived Feasible Actions. Yiyuan Lee, Wil Thomason, Zachary Kingston, and Lydia E. Kavraki. In IEEE International Conference on Robotics and Automation.
3D object reconfiguration encompasses common robot manipulation tasks in which a set of objects must be moved through a series of physically feasible state changes into a desired final configuration. Object reconfiguration is challenging to solve in general, as it requires efficient reasoning about environment physics that determine action validity. This information is typically manually encoded in an explicit transition system. Constructing these explicit encodings is tedious and error-prone, and is often a bottleneck for planner use. In this work, we explore embedding a physics simulator within a motion planner to implicitly discover and specify the valid actions from any state, removing the need for manual specification of action semantics. Our experiments demonstrate that the resulting simulation-based planner can effectively produce physically valid rearrangement trajectories for a range of 3D object reconfiguration problems without requiring more than an environment description and start and goal arrangements.
@inproceedings{lee2023physics, title = {Object Reconfiguration with Simulation-Derived Feasible Actions}, author = {Lee, Yiyuan and Thomason, Wil and Kingston, Zachary and Kavraki, Lydia E.}, booktitle = {IEEE International Conference on Robotics and Automation}, pages = {8104--8111}, year = {2023}, doi = {10.1109/ICRA48891.2023.10160377}, }
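The core interface suggested above, a planner that asks a simulator which actions are physically feasible rather than consulting hand-written action semantics, can be sketched as follows. Here the "simulator" is just a toy rule check over block stackings fed to a breadth-first planner; the paper instead rolls out candidate actions in a physics simulator, so everything below is an illustrative stand-in.

```python
# Planner queries a (toy) simulator for valid successors instead of hand-coded semantics.
from collections import deque

BLOCKS = ("A", "B", "C")

def clear(state, x):
    """A block (or the table) is clear if nothing rests on it."""
    return x == "table" or all(under != x for under in state.values())

def simulated_successors(state):
    """Stand-in for simulation-derived action feasibility checks."""
    for b in BLOCKS:
        if not clear(state, b):
            continue                        # cannot move a block with something on it
        for target in BLOCKS + ("table",):
            if target != b and target != state[b] and clear(state, target):
                new = dict(state)
                new[b] = target
                yield (b, target), new

def plan(start, goal):
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, succ in simulated_successors(state):
            key = tuple(sorted(succ.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((succ, actions + [action]))
    return None

start = {"A": "table", "B": "A", "C": "table"}   # B stacked on A
goal = {"A": "B", "B": "C", "C": "table"}        # tower A on B on C
print(plan(start, goal))
```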
2021
- Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions. Constantinos Chamzas, Zachary Kingston, Carlos Quintero-Peña, Anshumali Shrivastava, and Lydia E. Kavraki. In IEEE International Conference on Robotics and Automation.
Nominated for Best Paper in Cognitive Robotics.
Earlier work has shown that reusing experience from prior motion planning problems can improve the efficiency of similar, future motion planning queries. However, for robots with many degrees-of-freedom, these methods exhibit poor generalization across different environments and often require large datasets that are impractical to gather. We present SPARK and FLAME, two experience-based frameworks for sampling-based planning applicable to complex manipulators in 3D environments. Both combine samplers associated with features from a workspace decomposition into a global biased sampling distribution. SPARK decomposes the environment based on exact geometry while FLAME is more general, and uses an octree-based decomposition obtained from sensor data. We demonstrate the effectiveness of SPARK and FLAME on a real and simulated Fetch robot tasked with challenging pick-and-place manipulation problems. Our approaches can be trained incrementally and significantly improve performance with only a handful of examples, generalizing better over diverse tasks and environments as compared to prior approaches.
@inproceedings{chamzas2021flame, title = {Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions}, author = {Chamzas, Constantinos and Kingston, Zachary and Quintero-Peña, Carlos and Shrivastava, Anshumali and Kavraki, Lydia E.}, booktitle = {IEEE International Conference on Robotics and Automation}, pages = {1283--1289}, year = {2021}, doi = {10.1109/ICRA48506.2021.9561104}, }
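To make the experience-based sampling idea concrete, here is a hedged sketch in which local workspace features index a database of configurations taken from past solutions, and a sampler mixes draws near those configurations with uniform sampling. The feature encoding and database are toy stand-ins for SPARK's geometric decomposition and FLAME's octree-based features; the class, method, and parameter names are invented for illustration.

```python
# Sketch of experience-biased sampling: local workspace features -> stored configurations.
import numpy as np

rng = np.random.default_rng(0)

class ExperienceSampler:
    def __init__(self, dof=7, mix=0.5):
        self.db = {}            # local workspace feature -> list of past configurations
        self.dof, self.mix = dof, mix

    @staticmethod
    def feature(occupied_cells):
        """Discretized local decomposition, order-invariant so it can be matched later."""
        return frozenset(map(tuple, np.asarray(occupied_cells, dtype=int)))

    def add_experience(self, occupied_cells, solution_configs):
        self.db.setdefault(self.feature(occupied_cells), []).extend(solution_configs)

    def sample(self, occupied_cells):
        past = self.db.get(self.feature(occupied_cells))
        if past is not None and rng.random() < self.mix:
            # bias toward a stored configuration, perturbed locally
            return np.asarray(past[rng.integers(len(past))]) + rng.normal(0, 0.05, self.dof)
        return rng.uniform(-np.pi, np.pi, self.dof)   # fall back to uniform sampling

sampler = ExperienceSampler()
shelf_cells = [(1, 0, 2), (1, 1, 2)]                  # toy local occupancy pattern
sampler.add_experience(shelf_cells, [np.zeros(7), 0.3 * np.ones(7)])
print(sampler.sample(shelf_cells))                    # biased draw near past solutions
```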