What's new in Embodied AI at Lambda?
Latent Adaptive Planner (LAP) uses latent-space inference to handle dynamic, non-prehensile manipulation and bridges the embodiment gap (human → robot).
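To give a feel for the idea, here is a minimal sketch of latent-plan inference in the style LAP describes: an encoder infers a distribution over a latent plan from the current observation and goal, and a decoder turns that latent plan into a short action sequence that can be re-inferred as the scene changes. This is not the authors' implementation; the module layout, dimensions, and names below are illustrative assumptions.

```python
# Illustrative sketch of latent-plan inference (not the LAP codebase).
import torch
import torch.nn as nn

OBS_DIM, GOAL_DIM, LATENT_DIM, ACT_DIM, HORIZON = 32, 8, 16, 7, 10

class LatentPlanner(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(OBS_DIM + GOAL_DIM, 128), nn.ReLU(),
            nn.Linear(128, 2 * LATENT_DIM),       # mean and log-variance of z
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, HORIZON * ACT_DIM),    # short action chunk
        )

    def infer_plan(self, obs, goal):
        """Sample a latent plan z ~ q(z | obs, goal) via the reparameterization trick."""
        mu, log_var = self.encoder(torch.cat([obs, goal], dim=-1)).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

    def decode_actions(self, z, obs):
        """Decode the latent plan into a (HORIZON, ACT_DIM) action sequence."""
        return self.decoder(torch.cat([z, obs], dim=-1)).view(-1, HORIZON, ACT_DIM)

planner = LatentPlanner()
obs, goal = torch.randn(1, OBS_DIM), torch.randn(1, GOAL_DIM)
z = planner.infer_plan(obs, goal)          # re-infer z as the scene evolves to adapt the plan
actions = planner.decode_actions(z, obs)   # shape: (1, HORIZON, ACT_DIM)
print(actions.shape)
```

The key point is that adaptation happens in the latent space: rather than replanning the whole trajectory from scratch, the planner updates its belief over z and decodes fresh actions from it.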
AimBot adds a simple visual overlay (reticles and shooting lines) to ground end-effector spatial cues in images, improving visuomotor policy learning with minimal overhead.
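The mechanism is easy to picture: project the gripper's position and its pointing ray into the camera image, then draw a reticle and a "shooting line" on top of the frame before feeding it to the policy. The sketch below shows that overlay step with OpenCV; the camera intrinsics, poses, and drawing styles are placeholder assumptions, not the paper's code.

```python
# Illustrative AimBot-style overlay (placeholder values, not the paper's implementation).
import cv2
import numpy as np

# Assumed camera intrinsics and an identity camera pose.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec, tvec = np.zeros(3), np.zeros(3)

# Hypothetical end-effector position and aiming direction in the camera frame (meters).
ee_pos = np.array([0.05, 0.00, 0.60])
ee_dir = np.array([0.00, 0.10, 0.40])
ray_end = ee_pos + 0.5 * ee_dir / np.linalg.norm(ee_dir)

# Project both 3D points into pixel coordinates.
pts_3d = np.float32([ee_pos, ray_end]).reshape(-1, 1, 3)
pts_2d, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
uv = pts_2d.reshape(-1, 2).astype(int)
p_ee = (int(uv[0, 0]), int(uv[0, 1]))
p_ray = (int(uv[1, 0]), int(uv[1, 1]))

# Draw the overlay on a blank frame standing in for the camera image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(frame, p_ee, p_ray, (0, 255, 0), 2)                 # shooting line
cv2.circle(frame, p_ee, 10, (0, 0, 255), 2)                  # reticle ring
cv2.drawMarker(frame, p_ee, (0, 0, 255),
               markerType=cv2.MARKER_CROSS, markerSize=14)   # reticle crosshair
cv2.imwrite("aimbot_overlay.png", frame)
```

Because the cue is painted directly into the image the policy already consumes, it adds spatial grounding without changing the model architecture or adding extra input streams.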
Together, these advance two complementary tracks: planning and inference, and visual grounding and feedback. This dual focus reflects our ongoing embodied AI research, which our team presented at CoRL 2025.
Latent Adaptive Planner for Dynamic Manipulation: Demo, Conference Paper
AimBot: Demo, Conference Paper