Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques that allow a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical environment, presents both specific difficulties (e.g. high dimensionality, real-time constraints on collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives).
Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping and active object categorization, as well as interactive skills such as the joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as in robot learning by imitation.
Robot learning is closely related to adaptive control, reinforcement learning and developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, these applications are usually not referred to as "robot learning".
Imitation learning
Many research groups are developing techniques through which robots learn by imitation. These include various techniques for learning from demonstration (sometimes also referred to as "programming by demonstration") and observational learning.
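A common way to realize learning from demonstration is behavioural cloning, in which recorded state–action pairs from a demonstration are treated as a supervised-learning dataset. The following Python sketch illustrates the idea under simplifying assumptions; the random placeholder data, the 7-dimensional state and action vectors, and the choice of a scikit-learn regressor are illustrative and do not describe any particular system mentioned above.

```python
# Minimal behavioural-cloning sketch (hypothetical example).
# Demonstrations are recorded as (state, action) pairs, e.g. joint angles
# and commanded velocities, and a regressor learns the mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder demonstration data: 500 recorded time steps,
# 7-dimensional robot state, 7-dimensional commanded action.
rng = np.random.default_rng(0)
demo_states = rng.normal(size=(500, 7))
demo_actions = rng.normal(size=(500, 7))

# Fit a policy that imitates the demonstrator.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
policy.fit(demo_states, demo_actions)

# At run time the robot queries the learned policy for an action.
current_state = rng.normal(size=(1, 7))
predicted_action = policy.predict(current_state)
print(predicted_action)
```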
Sharing learned skills and knowledge
The goal of Tellex's "Million Object Challenge" is for robots to learn how to identify and handle simple objects, and to upload their data to the cloud so that other robots can analyze and reuse the information.[1]
RoboBrain is a knowledge engine for robots which can be freely accessed by any device wishing to carry out a task. The database gathers new information about tasks as robots perform them, by searching the Internet, by interpreting natural-language text, images and videos, and through object recognition and interaction. The project is led by Ashutosh Saxena at Stanford University.[2][3]
RoboEarth is a project that has been described as a "World Wide Web for robots": a network and database repository where robots can share information and learn from each other, and a cloud for outsourcing heavy computation tasks. The project brings together researchers from five major universities in Germany, the Netherlands and Spain and is backed by the European Union.[4][5][6][7][8]
Google Research, DeepMind, and Google X have decided to allow their robots to share their experiences.[9][10][11]
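Although these projects differ in scope and infrastructure, they share a basic pattern: robots publish locally learned knowledge (for example, a grasp that worked for a given object) to a shared store that other robots can query before acting. The following Python sketch shows that pattern in its simplest form; the SharedSkillStore class, its in-memory dictionary, and the grasp parameters are hypothetical stand-ins for the networked databases and richer representations used by systems such as RoboEarth or RoboBrain.

```python
# Hypothetical sketch of the shared-experience pattern: robots publish
# locally learned results to a common store and query it before acting.
# The store is a plain dictionary here; real systems use networked
# databases and richer knowledge representations.
from collections import defaultdict

class SharedSkillStore:
    def __init__(self):
        # Maps an object label to a list of grasp parameters that
        # some robot has reported as successful.
        self._grasps = defaultdict(list)

    def publish_grasp(self, object_label, grasp_parameters):
        self._grasps[object_label].append(grasp_parameters)

    def query_grasps(self, object_label):
        return list(self._grasps[object_label])

# Robot A uploads a grasp it discovered through its own exploration.
store = SharedSkillStore()
store.publish_grasp("mug", {"approach": "top", "width_m": 0.08})

# Robot B, which has never handled a mug, reuses that knowledge.
print(store.query_grasps("mug"))
```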
See also
- Cognitive robotics – robots with a processing architecture that allows them to learn
- Developmental robotics
- Evolutionary robotics
- Philosophical ethology – field of multidisciplinary research
References
1. Schaffer, Amanda. "10 Breakthrough Technologies 2016: Robots That Teach Each Other". MIT Technology Review. Retrieved 4 January 2017.
2. "RoboBrain: The World's First Knowledge Engine For Robots". MIT Technology Review. Retrieved 4 January 2017.
3. Hernandez, Daniela. "The Plan to Build a Massive Online Brain for All the World's Robots". WIRED. Retrieved 4 January 2017.
4. "Europe launches RoboEarth: 'Wikipedia for robots'". USA TODAY. Retrieved 4 January 2017.
5. "European researchers have created a hive mind for robots and it's being demoed this week". Engadget. 14 January 2014. Retrieved 4 January 2017.
6. "Robots test their own world wide web, dubbed RoboEarth". BBC News. 14 January 2014. Retrieved 4 January 2017.
7. "'Wikipedia for robots': Because bots need an Internet too". CNET. Retrieved 4 January 2017.
8. "New Worldwide Network Lets Robots Ask Each Other Questions When They Get Confused". Popular Science. 9 March 2013. Retrieved 4 January 2017.
9. "Google Tasks Robots with Learning Skills from One Another via Cloud Robotics". allaboutcircuits.com. Retrieved 4 January 2017.
10. Tung, Liam. "Google's next big step for AI: Getting robots to teach each other new skills". ZDNet. Retrieved 4 January 2017.
11. "How Robots Can Acquire New Skills from Their Shared Experience". Google Research Blog. Retrieved 4 January 2017.
External links
- IEEE RAS Technical Committee on Robot Learning (official IEEE website)
- IEEE RAS Technical Committee on Robot Learning (TC members website)
- Robot Learning at the Max Planck Institute for Intelligent Systems and the Technical University Darmstadt
- Robot Learning at the Computational Learning and Motor Control lab
- Humanoid Robot Learning at the Advanced Telecommunication Research Center (ATR) (in English and Japanese)
- Learning Algorithms and Systems Laboratory at EPFL (LASA)
- Robot Learning at the Cognitive Robotics Lab of Juergen Schmidhuber at IDSIA and Technical University of Munich
- The Humanoid Project: Peter Nordin, Chalmers University of Technology
- Inria and Ensta ParisTech FLOWERS team, France: Autonomous lifelong learning in developmental robotics
- CITEC at University of Bielefeld, Germany
- Asada Laboratory, Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Japan
- The Laboratory for Perceptual Robotics, University of Massachusetts Amherst, USA
- Centre for Robotics and Neural Systems, Plymouth University, United Kingdom
- Robot Learning Lab at Carnegie Mellon University
- Project Learning Humanoid Robots at University of Bonn
- Skilligent Robot Learning and Behavior Coordination System (commercial product)
- Robot Learning class at Cornell University
- Robot Learning and Interaction Lab at Italian Institute of Technology
- Reinforcement learning for robotics Archived 2018-10-08 at the Wayback Machine at Delft University of Technology