Andy Zeng is an American computer scientist and AI engineer at Google DeepMind. He is best known for his research in robotics and machine learning, including robot learning algorithms that enable machines to intelligently interact with the physical world and improve themselves over time. Zeng was a recipient of the Gordon Y.S. Wu Fellowship in Engineering and Wu Prize in 2016, and the Princeton SEAS Award for Excellence in 2018.[1][2]

Early life and education

Zeng studied computer science and mathematics as an undergraduate at the University of California, Berkeley.[3] He then moved to Princeton University, where he completed his Ph.D. in 2019. His thesis focused on deep learning algorithms that enable robots to understand the visual world and interact with unfamiliar physical objects.[4] He developed a class of deep network architectures inspired by the concept of affordances in cognitive psychology (perceiving the world in terms of possible actions), which allow machines to learn complex skills that quickly adapt and generalize to new scenarios.[5] As a doctoral student, he co-led Team MIT-Princeton[6] to win first place in the stow task[7] at the Amazon Robotics Challenge,[8] a worldwide competition focused on bin picking. He also spent time as a student researcher at Google Brain.[9] His graduate studies were supported by the NVIDIA Fellowship.[10]

Research and career

Zeng investigates how robots can intelligently improve themselves over time through self-supervised learning algorithms, such as learning to assemble objects by disassembling them,[11] or acquiring new dexterous skills by watching videos of people.[12] Notable demonstrations include Google's TossingBot,[13] a robot that learns to grasp and throw unfamiliar objects by using physics as a prior model of how the world works. His research also spans 3D computer vision algorithms.

He pioneered the use of foundation models in robotics, from systems that act by writing their own code[14] to robots that plan and reason by grounding language in affordances.[15][16] He co-developed large multimodal models and showed that they can be used for intelligent robot navigation, world modeling, and assistive agents.[17] He also worked on algorithms that allow large language models to know when they don't know and to ask for help.[18]

In 2024, Zeng received the IEEE Early Career Award in Robotics and Automation "for outstanding contributions to robot learning."[19]

References

  1. ^ "Princeton Robotics Seminar: Language as Robot Middleware | Computer Science Department at Princeton University". Princeton University.
  2. ^ "Andy Zeng". IEEE.
  3. ^ "CSL Seminar - Embodied Intelligence". Massachusetts Institute of Technology.
  4. ^ "Learning Visual Affordances for Robotic Manipulation - ProQuest". www.proquest.com.
  5. ^ "Visual Transfer Learning for Robotic Manipulation". Google.
  6. ^ "MIT-Princeton at the Amazon Robotics Challenge". Princeton University.
  7. ^ "Australian Centre for Robotic Vision from Australia Wins Grand Championship at 2017 Amazon Robotics Challenge". Press Center. 1 August 2017.
  8. ^ Malamut, Layla; Nathans, Aaron. "Princeton graduate student teams advance in robotics, intelligent systems competitions". Princeton University.
  9. ^ "Google's Tossingbot Can Toss Over 500 Objects Per Hour Into Target Locations". NVIDIA Technical Blog. 28 March 2019.
  10. ^ "2018 Grad Fellows | Research". research.nvidia.com.
  11. ^ "Learning to Assemble and to Generalize from Self-Supervised Disassembly". research.google.
  12. ^ "Robot See, Robot Do". research.google.
  13. ^ "Inside Google's Rebooted Robotics Program". The New York Times.
  14. ^ Heater, Brian (2022-11-02). "Google wants robots to generate their own code". TechCrunch. Retrieved 2024-10-18.
  15. ^ "PaLM-SayCan". families.google.com. Retrieved 2024-10-18.
  16. ^ "Google is training its robots to be more like humans". The Washington Post.
  17. ^ "Visual language maps for robot navigation". research.google. Retrieved 2024-10-18.
  18. ^ "These robots know when to ask for help". MIT Technology Review. Retrieved 2024-10-18.
  19. ^ "2024 IEEE RAS Award Recipients Announced! - IEEE Robotics and Automation Society". www.ieee-ras.org. 2024-03-22. Retrieved 2024-10-18.