Humans are hard to model and predict, which makes it difficult for a robot to account for how the people around it may act when making decisions. Finding efficient and effective ways to model human behavior, whether to keep the people around a robot safe or to let a robot collaborate fluently with other people (or perhaps even other agents), is critical to deploying robots outside controlled environments such as the lab or factory.
Moreover, as robots become more capable and enter the real world, these systems now appear in workplaces, homes, and public spaces. Ethical considerations of how robots should operate trail behind algorithmic advances, and those advances do not necessarily account explicitly for the other people the robot works around. If the algorithms that control a robot's behavior are agnostic to human concerns, they can be exploited, intentionally or not, by malicious, unethical, or ignorant actors to violate privacy, injure or disturb people, and further increase inequity.
2025
arXiv
Look as You Leap: Planning Simultaneous Motion and Perception for High-DoF Robots
In this work, we address the problem of planning robot motions for a high-degree-of-freedom (DoF) robot that effectively achieves a given perception task while the robot and the perception target move in a dynamic environment. Achieving navigation and perception tasks simultaneously is challenging, as these objectives often impose conflicting requirements. Existing methods that compute motion under perception constraints fail to account for obstacles, are designed for low-DoF robots, or rely on simplified models of perception. Furthermore, in dynamic real-world environments, robots must replan and react quickly to changes, and directly evaluating the quality of perception (e.g., object detection confidence) is often expensive or infeasible at runtime. This problem is especially important in human-centered environments such as homes and hospitals, where effective perception is essential for safe and reliable operation. To address these challenges, we propose a GPU-parallelized perception-score-guided probabilistic roadmap planner with a neural surrogate model (PS-PRM). The planner explicitly incorporates the estimated quality of a perception task into motion planning for high-DoF robots. Our method uses a learned model to approximate perception scores and leverages GPU parallelism to enable efficient online replanning in dynamic settings. We demonstrate that our planner, evaluated on high-DoF robots, outperforms baseline methods in both static and dynamic environments, in both simulation and real-robot experiments.
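To make the idea concrete, here is a minimal sketch, not the authors' implementation, of a perception-score-guided roadmap: sampled configurations are scored in one batch by a surrogate (a hand-written placeholder standing in for the paper's learned neural model), and edge costs blend motion distance with a perception penalty before a standard graph search. The dimension, neighbor count `k`, and weight `alpha` are illustrative assumptions.

```python
# A rough sketch, assuming a placeholder surrogate and illustrative weights;
# the paper's surrogate is a learned neural model evaluated in batch on GPU.
import heapq
import numpy as np

rng = np.random.default_rng(0)

def surrogate_perception_score(q):
    """Stand-in surrogate: pretend perception quality peaks near the origin."""
    return np.exp(-np.linalg.norm(q, axis=-1))

def build_prm(n_nodes=200, dim=7, k=8, alpha=5.0):
    """Sample configurations, score them in one batch, and connect k-NN edges
    whose costs blend motion distance with a perception penalty."""
    nodes = rng.uniform(-1.0, 1.0, size=(n_nodes, dim))
    scores = surrogate_perception_score(nodes)  # one batched evaluation
    edges = {i: [] for i in range(n_nodes)}
    for i in range(n_nodes):
        d = np.linalg.norm(nodes - nodes[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:
            # Edge cost = motion cost + penalty for poor perception at target.
            edges[i].append((int(j), float(d[j] + alpha * (1.0 - scores[j]))))
    return nodes, edges

def search(edges, start, goal):
    """Dijkstra over the perception-weighted roadmap."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if goal not in dist:
        return None  # goal unreachable on this roadmap sample
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

nodes, edges = build_prm()
print(search(edges, start=0, goal=1))
```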
@misc{meng2025look,title={Look as You Leap: Planning Simultaneous Motion and Perception for High-{DoF} Robots},author={Meng, Qingxi and Flores, Emiliano and Quintero-Peña, Carlos and Qian, Peizhu and Kingston, Zachary and Hamlin, Shannan K. and Unhelkar, Vaibhav and Kavraki, Lydia E.},eprint={2509.19610},archiveprefix={arXiv},primaryclass={cs.RO},year={2025},note={Under Review},}
CaStL: Constraints as Specifications through LLM Translation for Long-Horizon Task and Motion Planning
Large Language Models (LLMs) have demonstrated remarkable ability in long-horizon Task and Motion Planning (TAMP) by translating clear and straightforward natural language problems into formal specifications such as the Planning Domain Definition Language (PDDL). However, real-world problems are often ambiguous and involve many complex constraints. In this paper, we introduce Constraints as Specifications through LLMs (CaStL), a framework that identifies constraints such as goal conditions, action ordering, and action blocking from natural language in multiple stages. CaStL translates these constraints into PDDL and Python scripts, which are solved using a custom PDDL solver. Tested across three PDDL domains, CaStL significantly improves constraint handling and planning success rates from natural language specifications in complex scenarios.
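As a rough illustration of the pipeline's shape, the sketch below turns a natural-language request into a PDDL goal via a staged LLM call. The `call_llm` stub, the prompt wording, and the domain and problem names are hypothetical placeholders; CaStL itself uses multiple translation stages, generated Python scripts, and a custom PDDL solver.

```python
# A minimal sketch of a constraints-to-PDDL pipeline, not the authors' code.
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query an actual model."""
    # Canned response standing in for a model that extracts constraints.
    return "(and (picked cup) (not (blocked shelf)))"

def constraints_to_pddl(nl_request: str) -> str:
    """Stage 1: ask the model for goal/ordering constraints as a PDDL formula.
    Stage 2: splice the formula into a problem file for a PDDL solver."""
    extracted = call_llm(
        f"Extract goal and ordering constraints from: {nl_request!r}. "
        "Answer with a single PDDL goal formula."
    )
    return f"""(define (problem tidy-kitchen)
  (:domain manipulation)
  (:goal {extracted}))"""

print(constraints_to_pddl("Put the cup away, but never block the shelf."))
```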
@inproceedings{guo2025castl,title={{CaStL}: Constraints as Specifications through {LLM} Translation for Long-Horizon Task and Motion Planning},author={Guo, Weihang and Kingston, Zachary and Kavraki, Lydia E.},booktitle={IEEE International Conference on Robotics and Automation},pages={11957--11964},year={2025},doi={10.1109/ICRA55743.2025.11127555},}
2024
Abstract
Perception-aware Planning for Robotics: Challenges and Opportunities
In this work, we argue that new methods are needed to generate robot motion for navigation or manipulation while effectively achieving perception goals. We support our argument by conducting experiments with a simulated robot that must accomplish a primary task, such as manipulation or navigation, while concurrently monitoring an object in the environment. Our preliminary study demonstrates that a decoupled approach fails to achieve high success in either action-focused motion generation or perception goals, motivating further development of approaches that holistically consider both goals.
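A toy version of a coupled objective helps illustrate the argument: the sketch below scores a trajectory on a weighted sum of a task term (path length plus distance to the goal) and a perception term (a crude facing-the-target stand-in for keeping the monitored object in view). The weights and the visibility model are illustrative assumptions, not the paper's experimental setup.

```python
# A toy coupled task-plus-perception objective; weights are assumptions.
import numpy as np

def combined_cost(waypoints, goal, target, w_task=1.0, w_perc=2.0):
    """Task cost: path length + remaining distance to goal. Perception cost:
    penalize waypoints whose heading (toward the next waypoint) points away
    from the monitored target, a crude field-of-view stand-in."""
    steps = np.diff(waypoints, axis=0)
    task = np.linalg.norm(steps, axis=1).sum() + np.linalg.norm(waypoints[-1] - goal)
    headings = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    to_target = target - waypoints[:-1]
    to_target /= np.linalg.norm(to_target, axis=1, keepdims=True)
    visibility = (headings * to_target).sum(axis=1)      # cosine of view angle
    perception = np.clip(-visibility, 0.0, None).sum()   # penalty when facing away
    return w_task * task + w_perc * perception

path = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.0]])
print(combined_cost(path, goal=np.array([1.0, 0.0]), target=np.array([0.5, 1.0])))
```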
@inproceedings{meng2024icra40,title={Perception-aware Planning for Robotics: Challenges and Opportunities},author={Meng, Qingxi and Quintero-Peña, Carlos and Kingston, Zachary and Unhelkar, Vaibhav and Kavraki, Lydia E.},booktitle={40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40)},year={2024},}
2023
Robots as AI Double Agents: Privacy in Motion Planning
Robotics and automation are poised to change the landscape of home and work in the near future. Robots are adept at deliberately moving, sensing, and interacting with their environments. The pervasive use of this technology promises societal and economic payoffs due to its capabilities. Conversely, the capabilities of robots to move within and sense the world around them are susceptible to abuse. Robots, unlike typical sensors, are inherently autonomous, active, and deliberate. Such automated agents can become AI double agents liable to violate the privacy of coworkers, privileged spaces, and other stakeholders. In this work, we highlight the understudied and inevitable threats to privacy that can be posed by the autonomous, deliberate motion and sensing of robots. We frame the problem within broader sociotechnological questions, alongside a comprehensive review. The privacy-aware motion planning problem is formulated in terms of cost functions that can be modified to induce privacy-aware behavior: preserving, agnostic, or violating. Simulated case studies in manipulation and navigation, with altered cost functions, demonstrate how privacy-violating threats can be easily injected, sometimes with only small changes in performance (solution path lengths). Such functionality is already widely available. This preliminary work is meant to lay the foundations for near-future, holistic, interdisciplinary investigations that can address questions surrounding privacy in intelligent robotic behaviors determined by planning algorithms.
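The cost-modification idea can be shown in a few lines: the same planner yields privacy-preserving, agnostic, or violating paths depending only on the weight attached to cells that observe a private region. The grid, region, and weights below are illustrative assumptions; the paper's case studies use manipulation and navigation domains.

```python
# A minimal sketch of privacy-aware cost modification; setup is assumed.
import heapq

PRIVATE = {(2, 2), (2, 3), (3, 2), (3, 3)}  # cells overlooking a private space

def plan(start, goal, privacy_weight, size=6):
    """Dijkstra on a grid: each step costs 1, plus privacy_weight if the cell
    observes the private region. A negative weight rewards observing it; it is
    kept above -1 so edge costs stay positive and Dijkstra remains valid."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dx, u[1] + dy)
            if not (0 <= v[0] < size and 0 <= v[1] < size):
                continue
            step = 1.0 + (privacy_weight if v in PRIVATE else 0.0)
            if d + step < dist.get(v, float("inf")):
                dist[v], prev[v] = d + step, u
                heapq.heappush(pq, (d + step, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

# Same planner, three behaviors: avoid, ignore, or seek the private region.
for label, w in [("preserving", 5.0), ("agnostic", 0.0), ("violating", -0.9)]:
    print(label, plan((0, 0), (5, 5), w))
```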
@inproceedings{shome2023privacy,title={Robots as AI Double Agents: Privacy in Motion Planning},author={Shome, Rahul and Kingston, Zachary and Kavraki, Lydia E.},booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems},pages={2861--2868},year={2023},doi={10.1109/IROS55552.2023.10341460},}
2021
Finite Horizon Synthesis for Probabilistic Manipulation Domains
Robots have begun operating and collaborating with humans in industrial and social settings. This collaboration introduces challenges: the robot must plan while taking the human's actions into account. In prior work, the problem was posed as a 2-player deterministic game with a limited number of human moves. The limit on human moves is unintuitive, and in many settings determinism is undesirable. In this paper, we present a novel planning method for collaborative human-robot manipulation tasks via probabilistic synthesis. We introduce a probabilistic manipulation domain that captures the interaction by allowing for both robot and human actions with states that represent the configurations of the objects in the workspace. The task is specified using Linear Temporal Logic over finite traces (LTLf). We then transform our manipulation domain into a Markov Decision Process (MDP) and synthesize an optimal policy to satisfy the specification on this MDP. We present two novel contributions: a formalization of probabilistic manipulation domains that allows us to apply existing techniques, and a comparison of different encodings of these domains. Our framework is validated on a physical UR5 robot.
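Under strong simplifications, the synthesis step can be sketched as value iteration on a small MDP: here the LTLf specification is reduced to plain reachability of an accepting state, and the human's interventions are folded into stochastic transition outcomes. The toy domain below is an illustrative assumption, not the paper's manipulation encoding.

```python
# A toy synthesis sketch: maximize the probability of reaching an accepting
# state, with the human folded into stochastic outcomes. Domain is assumed.
# States: 0 = cup on table, 1 = cup in gripper, 2 = cup on shelf (accepting).
# Actions: 0 = pick (the human may disturb the grasp), 1 = place.
# T[s][a] = list of (next_state, probability).
T = {
    0: {0: [(1, 0.8), (0, 0.2)], 1: [(0, 1.0)]},
    1: {0: [(1, 1.0)], 1: [(2, 0.9), (0, 0.1)]},
    2: {0: [(2, 1.0)], 1: [(2, 1.0)]},
}
ACCEPT = {2}

def synthesize(T, accept, iters=100):
    """Value iteration for the max probability of eventually reaching accept,
    plus the greedy policy extracted from the converged values."""
    v = {s: 1.0 if s in accept else 0.0 for s in T}
    for _ in range(iters):
        v = {s: 1.0 if s in accept else
             max(sum(p * v[t] for t, p in outs) for outs in T[s].values())
             for s in T}
    policy = {s: max(T[s], key=lambda a: sum(p * v[t] for t, p in T[s][a]))
              for s in T}
    return v, policy

values, policy = synthesize(T, ACCEPT)
print("values:", values)
print("policy:", policy)
```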
@inproceedings{wells2021icra,title={Finite Horizon Synthesis for Probabilistic Manipulation Domains},author={Wells, Andrew M. and Kingston, Zachary and Lahijanian, Morteza and Kavraki, Lydia E. and Vardi, Moshe Y.},booktitle={IEEE International Conference on Robotics and Automation},pages={6336--6342},year={2021},doi={10.1109/ICRA48506.2021.9561297},}