Implicit and Learned Models

Robots must learn from experience and data to act efficiently in unmodeled, unknown, and previously unseen domains. Many methods exist for learning implicit models of the world, capturing everything from the 3D reconstruction of a scene, to the risk of collision, to task constraints inferred from human demonstrations. There are rich opportunities for integrating these models into existing algorithmic frameworks or new neurosymbolic approaches, generalizing planning capabilities to problems previously considered intractable.
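As a concrete illustration of one such implicit model, a signed distance function (SDF) represents geometry implicitly as a field: it returns the distance from a query point to the nearest surface, negative inside an obstacle and positive outside, so collision checking reduces to evaluating the field. The sketch below uses analytic sphere SDFs for clarity; a learned model would replace `sphere_sdf` with a trained network. The function names, sphere obstacles, and safety `margin` parameter are illustrative, not drawn from the papers listed below.

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance from `point` to a sphere's surface:
    negative inside the sphere, zero on it, positive outside."""
    return math.dist(point, center) - radius

def in_collision(point, obstacles, margin=0.0):
    """Treat a point as unsafe if any obstacle's signed distance
    falls at or below the safety margin."""
    return any(sphere_sdf(point, c, r) <= margin for c, r in obstacles)

# Two spherical obstacles: (center, radius).
obstacles = [((0.0, 0.0, 0.0), 1.0), ((3.0, 0.0, 0.0), 0.5)]

print(in_collision((0.5, 0.0, 0.0), obstacles))              # True (inside first sphere)
print(in_collision((2.0, 0.0, 0.0), obstacles, margin=0.1))  # False (clear of both)
```

A sampling-based planner can call such a check on every candidate configuration; under sensing uncertainty, the margin (or a distribution over the SDF itself) hedges against error in the reconstructed geometry.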

2024

  1. Abstract
    Perception-aware Planning for Robotics: Challenges and Opportunities
    In 40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40)
  2. Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty
    In IEEE International Conference on Robotics and Automation
  3. Workshop
    Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty
    In IEEE ICRA 2024 Workshop "Back to the Future: Robot Learning Going Probabilistic"
  4. Workshop
    Monitoring Constraints for Robotic Tutors in Nurse Education: A Motion Planning Perspective
    Qingxi Meng*, Carlos Quintero-Peña*, Zachary Kingston, Nicole M. Fontenot, Shannan K. Hamlin, and 2 more authors
    In IEEE ICRA 2024 Workshop on Nursing Robotics

2023

  1. Object Reconfiguration with Simulation-Derived Feasible Actions
    Yiyuan Lee, Wil Thomason, Zachary Kingston, and Lydia E. Kavraki
    In IEEE International Conference on Robotics and Automation

2021

  1. Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions
    In IEEE International Conference on Robotics and Automation