Andy Liu
(he/him)
I am an undergraduate student in computer science and mathematics at Purdue University, with interests in AI and robotics.
2025
Revisiting Replanning from Scratch: Real-Time Incremental Planning with Fast Almost-Surely Asymptotically Optimal Planners
Mitchell E. C. Sabbadini, Andrew H. Liu, Joseph Ruan, Tyler S. Wilson, Zachary Kingston, and Jonathan D. Gammell
arXiv preprint (under review)

Robots operating in changing environments must either predict obstacle changes or plan quickly enough to react to them. Predictive approaches require a strong prior about the position and motion of obstacles. Reactive approaches require no assumptions about their environment but must replan quickly and find high-quality paths to navigate effectively. Reactive approaches often reuse information between queries to reduce planning cost. These techniques are conceptually sound, but updating dense planning graphs when information changes can be computationally prohibitive, and detecting those changes can itself require significant effort in some applications. This paper revisits the long-held assumption that reactive replanning requires updating existing plans. It shows that the incremental planning problem can instead be solved more efficiently as a series of independent problems using fast almost-surely asymptotically optimal (ASAO) planning algorithms. These ASAO algorithms quickly find an initial solution and converge towards an optimal solution, which allows them to find consistent global plans in the presence of changing obstacles without requiring explicit plan reuse. This is demonstrated in simulated experiments, where Effort Informed Trees (EIT*) finds shorter median solution paths than the tested reactive planning algorithms, and is further validated with Asymptotically Optimal RRT-Connect (AORRTC) on a real-world planning problem on a robot arm.
@misc{sabbadini2025replan,
  title = {Revisiting Replanning from Scratch: Real-Time Incremental Planning with Fast Almost-Surely Asymptotically Optimal Planners},
  author = {Sabbadini, Mitchell E. C. and Liu, Andrew H. and Ruan, Joseph and Wilson, Tyler S. and Kingston, Zachary and Gammell, Jonathan D.},
  year = {2025},
  eprint = {2510.21074},
  archiveprefix = {arXiv},
  primaryclass = {cs.RO},
  note = {Under Review},
}
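For intuition, here is a minimal Python sketch of the replan-from-scratch loop the abstract describes: whenever the environment changes, all previous planner data is discarded and a fresh ASAO query is solved within a fixed time budget, rather than repairing the old plan. The function names (plan_asao, world_changed, execute_step), the time budget, and the control-loop structure are illustrative assumptions, not the paper's implementation.

# Sketch of reactive "replanning from scratch" with a black-box ASAO planner
# (hypothetical names; the plan_asao callable stands in for EIT* / AORRTC).
import time
from typing import Callable, List, Optional, Tuple

State = Tuple[float, ...]
Path = List[State]

def replan_loop(
    get_state: Callable[[], State],          # current robot configuration
    get_goal: Callable[[], State],           # goal configuration
    world_changed: Callable[[], bool],       # True when obstacles moved (assumed sensor hook)
    plan_asao: Callable[[State, State, float], Optional[Path]],  # fresh query per call
    execute_step: Callable[[Path], None],    # advance the robot along the current path
    budget_s: float = 0.05,                  # per-query planning budget (assumption)
    cycles: int = 1000,
) -> None:
    path: Optional[Path] = plan_asao(get_state(), get_goal(), budget_s)
    for _ in range(cycles):
        if world_changed() or path is None:
            # No plan repair or graph update: solve a brand-new problem from the
            # robot's current state. The ASAO planner finds an initial solution
            # quickly and spends the remaining budget improving it.
            path = plan_asao(get_state(), get_goal(), budget_s)
        if path is not None:
            execute_step(path)
        time.sleep(budget_s)  # pace the control loop

The design choice the paper argues for is that the per-query cost of a fast ASAO planner is low enough that this stateless loop can keep up with changing obstacles without any explicit information reuse between queries.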
CDE: Concept-Driven Exploration for Reinforcement Learning
Le Mao, Andrew H. Liu, Renos Zabounidis, Zachary Kingston, and Joseph Campbell
arXiv preprint (under review)

Intelligent exploration remains a critical challenge in reinforcement learning (RL), especially in visual control tasks. Unlike low-dimensional state-based RL, visual RL must extract task-relevant structure from raw pixels, making exploration inefficient. We propose Concept-Driven Exploration (CDE), which leverages a pre-trained vision-language model (VLM) to generate object-centric visual concepts from textual task descriptions as weak, potentially noisy supervisory signals. Rather than directly conditioning on these noisy signals, CDE trains a policy to reconstruct the concepts via an auxiliary objective, using reconstruction accuracy as an intrinsic reward to guide exploration toward task-relevant objects. Because the policy internalizes these concepts, VLM queries are only needed during training, reducing dependence on external models during deployment. Across five challenging simulated visual manipulation tasks, CDE achieves efficient, targeted exploration and remains robust to noisy VLM predictions. Finally, we demonstrate real-world transfer by deploying CDE on a Franka Research 3 arm, attaining an 80% success rate in a real-world manipulation task.
@misc{mao2025cde,
  title = {CDE: Concept-Driven Exploration for Reinforcement Learning},
  author = {Mao, Le and Liu, Andrew H. and Zabounidis, Renos and Kingston, Zachary and Campbell, Joseph},
  year = {2025},
  eprint = {2510.08851},
  archiveprefix = {arXiv},
  primaryclass = {cs.RO},
  note = {Under Review},
}
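For intuition, a minimal PyTorch sketch of the concept-reconstruction signal the abstract describes: an auxiliary head predicts the VLM-provided concepts from the policy's features, the reconstruction loss is added to the RL objective, and per-sample reconstruction accuracy is used as an intrinsic reward. Module and variable names, the binary concept encoding, and the loss/reward weights are assumptions for illustration, not the paper's architecture.

# Sketch of a concept-reconstruction auxiliary objective and intrinsic reward
# (hypothetical names; the paper's actual concept representation may differ).
import torch
import torch.nn as nn
from typing import Tuple

class ConceptHead(nn.Module):
    """Auxiliary head that predicts VLM-provided concept targets from policy features."""
    def __init__(self, feat_dim: int, n_concepts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)  # logits for binary concept targets

def concept_losses(
    head: ConceptHead,
    features: torch.Tensor,         # (B, feat_dim) policy encoder output
    concept_targets: torch.Tensor,  # (B, n_concepts) noisy 0/1 labels from the VLM
) -> Tuple[torch.Tensor, torch.Tensor]:
    logits = head(features)
    targets = concept_targets.float()
    # Auxiliary objective: reconstruct the (possibly noisy) concepts.
    aux_loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    # Intrinsic reward: per-sample reconstruction accuracy (no gradient flows through it).
    with torch.no_grad():
        predictions = (logits.sigmoid() > 0.5).float()
        intrinsic_reward = (predictions == targets).float().mean(dim=1)  # (B,)
    return aux_loss, intrinsic_reward

# Sketch of use in a training step (the weights beta and lam are assumptions):
#   shaped_reward = env_reward + beta * intrinsic_reward
#   total_loss    = rl_loss + lam * aux_loss

Because the reconstruction head is trained alongside the policy, the VLM is only queried to produce concept targets during training, matching the deployment setting described in the abstract.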