Keynotes and Tutorials
CONFERENCE ON ROBOT LEARNING (CORL) 2019
OCTOBER 30 - NOVEMBER 1, 2019, OSAKA, JAPAN
Keynote speakers
Short bio: KENJI DOYA received his B.S. in 1984, M.S. in 1986, and Ph.D. in 1991 from the University of Tokyo. He became a research associate at the University of Tokyo in 1986, at U.C. San Diego in 1991, and at the Salk Institute in 1993. He joined Advanced Telecommunications Research International (ATR) in 1994 and became head of the Computational Neurobiology Department, ATR Computational Neuroscience Laboratories, in 2003. In 2004, he was appointed Principal Investigator of the Neural Computation Unit, Okinawa Institute of Science and Technology (OIST), and started the Okinawa Computational Neuroscience Course (OCNC) as its chief organizer. When OIST established itself as a graduate university in 2011, he became a Professor and served as Vice Provost for Research until 2014. He has served as Co-Editor-in-Chief of Neural Networks since 2008 and is a board member of the Japanese Neural Network Society (JNNS) and the Japan Neuroscience Society (JNSS). He received the Tsukahara Award and the JSPS Award in 2007, the MEXT Prize for Science and Technology in 2012, and the Donald O. Hebb Award in 2018. He led the MEXT project “Prediction and Decision Making” from 2011 to 2016 and currently leads a new MEXT project, “Artificial Intelligence and Brain Science”. He is interested in understanding the functions of the basal ganglia and cortical circuits based on the theory of reinforcement learning and Bayesian inference.
Title : Reinforcement learning in machines and the brain
Abstract
While reinforcement learning achieved remarkable success in computer games and simulated environments, its application to robots still faces challenges of data-efficiency, hidden states, and non-stationarity. Given robust and flexible learning capabilities of humans and animals, it is natural to seek some insights from the brain. Partly motivated by such insights from human/animal behaviors and the brain, there have been extensions of reinforcement learning algorithms, such as sophistication of value update operators, modular and hierarchical architectures, and stochastic model-based control. Empirical knowledge from robotic applications of those and other algorithms can, in turn, provide hints as to why the brain’s neural circuits for reinforcement learning are quite complex. In this talk, I will introduce some examples of our search for robust and data-efficient reinforcement learning algorithms for robots and investigation of the neural circuits and molecules for reinforcement learning in the brain.
Short bio: Prof. Jamie Paik is director and founder of the Reconfigurable Robotics Lab (RRL) at the Swiss Federal Institute of Technology (EPFL) and a core member of the Swiss National Centre of Competence in Research (NCCR) Robotics consortium. RRL’s research leverages expertise in multi-material fabrication and smart material actuation. At Harvard University’s Microrobotics Laboratory, she started developing unconventional robots that push the physical limits of materials and mechanisms. Her latest research effort is in soft robotics and self-morphing Robogami (robotic origami) that transforms its planar shape into 2D or 3D forms by folding in predefined patterns and sequences, just like the paper art of origami.
Title : Soft Robots for Invisible Intuitive Interactions
Abstract
The ultimate goal of any soft robotic system is a cohesive solution that improves the human–machine interface. For such an interface, it is critical to realize a versatile and adaptable multi-degree-of-freedom robotic design. While findings in soft robotics have broadened the applications of robotics, they are still limited to specific scenarios. The next challenge is to push the boundaries of multiple intersecting scientific disciplines simultaneously: materials, mechatronics, energy, control, and design. Such efforts will lead to robust solutions in design methodology, novel actuators, and a comprehensive fabrication and integration method for the core robotic components. This talk will highlight recent progress in soft-material robots and origami robots that aim at achieving comprehensive solutions toward diverse soft human–robot applications.
Angela Petra Schoellig (Thu. October 31, 9:00-9:45)
Short bio: Angela Schoellig is an Assistant Professor at the University of Toronto Institute for Aerospace Studies and an Associate Director of the Centre for Aerial Robotics Research and Education. She holds a Canada Research Chair in Machine Learning for Robotics and Control, is a Faculty Member of the Vector Institute for Artificial Intelligence, and a principal investigator of the NSERC Canadian Robotics Network. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other. She is a recipient of a Sloan Research Fellowship (2017), an Ontario Early Researcher Award (2017), and a Connaught New Researcher Award (2015). She is one of MIT Technology Review’s Innovators Under 35 (2017), a Canada Science Leadership Program Fellow (2014), and one of Robohub’s “25 women in robotics you need to know about (2013)”. Her team won the 2018 and 2019 North-American SAE AutoDrive Challenge sponsored by General Motors. Her PhD at ETH Zurich (2013) was awarded the ETH Medal and the Dimitris N. Chorafas Foundation Award. She holds both an M.Sc. in Engineering Cybernetics from the University of Stuttgart (2008) and an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology (2007). More information can be found at: www.schoellig.name
Title : Machine Learning in the Closed Loop: Safety and Performance Guarantees for Robot Learning
Abstract
The ultimate promise of robotics is to design devices that can physically interact with the world. To date, robots have been primarily deployed in highly structured and predictable environments. However, we envision the next generation of robots (ranging from self-driving and -flying vehicles to robot assistants) to operate in unpredictable and generally unknown environments alongside humans. This challenges current robot algorithms, which have been largely based on a priori knowledge about the system and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have been mostly limited to learning single tasks, and demonstrated in simulation or structured lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient, and online learning algorithms that guarantee safety when placed in a closed-loop system architecture. It will also require answering the fundamental question of how to design learning architectures for dynamic and interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.
Short bio: Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and at the same time a senior research scientist and group leader at the Max-Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems - Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award as well as numerous best paper awards. In 2015, he received an ERC Starting Grant and in 2019, he was appointed as an IEEE Fellow.
Title : Learning Motor Skills on Real Robot Systems
Abstract
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction directly on the real robot system. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning for accurate control is needed to execute, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent “hyperparameters” of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability of learning control on real robots. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulation of various objects.
Short bio: Yukie Nagai received her Ph.D. in Engineering from Osaka University in 2004. After working as a postdoctoral researcher at the National Institute of Information and Communications Technology (NICT) and at Bielefeld University, she became a Specially Appointed Associate Professor at Osaka University in 2009 and then a Senior Researcher at NICT in 2017.
Since April 2019, she has been a Project Professor at the University of Tokyo, where she heads the Cognitive Developmental Robotics Lab. She has also served as research director of the JST CREST project Cognitive Mirroring since December 2016.
Title : The Now and Future of Cognitive Developmental Robotics
Abstract
Cognitive developmental robotics aims at understanding the underlying mechanisms of human cognitive development by means of computational approaches. In the last two decades, many computational models have been proposed to account for the development of cognitive functions such as self-recognition, imitation, and goal-directed actions. Despite the success of robot experiments, these results have had little impact on neuroscience and psychology. A potential reason is that each robotic model could explain only a limited aspect or phenomenon of development.
To overcome this limitation, I have been proposing the neural theory of predictive coding as a computational principle for cognitive development (Nagai, Phil. Trans. B 2019). The theory has the potential to account for both the temporal continuity and the individual diversity of development. For example, neural networks based on predictive coding enabled our robot to successively learn to generate goal-directed actions, estimate the goals of other agents, and assist others trying to achieve a goal (i.e., developmental continuity). This result demonstrates that both non-social and social behaviors emerge from a shared mechanism of predictive learning. Our recent studies using recurrent neural networks suggest that an imbalance between top-down prediction and bottom-up sensation leads to developmental disorders such as autism spectrum disorder (i.e., developmental diversity). Atypical cognitive characteristics such as sensory hypersensitivity and stereotyped movements were reproduced by hyper-/hypo-priors in predictive coding. I conclude my talk by discussing how our approach could impact and facilitate the understanding of humans in the future.
Short bio:
George Konidaris is the John E. Savage Assistant Professor of Computer Science at Brown and Chief Roboticist of Realtime Robotics, a startup commercializing his work on hardware-accelerated motion planning. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst. Prior to joining Brown, he held a faculty position at Duke and was a postdoctoral researcher at MIT. George is the recent recipient of young faculty awards from DARPA and the AFOSR, and an NSF CAREER award.
Title : Signal to Symbol (via Skills)
Abstract
Generally intelligent behavior requires high-level reasoning and planning, but in robotics perception and actuation must ultimately be performed using noisy, high-bandwidth, low-level sensors and effectors. A key challenge in designing intelligent robots is how to build abstractions that effectively support high-level reasoning for low-level action.
I will describe a research program aimed at constructing robot control hierarchies through the use of learned motor skills. The first part of my talk will address methods for automatically discovering, learning, and reusing motor skills, both autonomously and via demonstration. The second part will establish a link between the skills available to a robot and the abstract representations it should use to plan with them. I will present an example of a robot autonomously learning a (sound and complete) abstract representation directly from sensorimotor data, and then using it to plan. Finally, I will discuss ongoing work on making the resulting abstractions portable across tasks.
Tutorial speakers
Short bio:
Daichi Mochihashi received his B.S. from the University of Tokyo in 1998 and his Ph.D. from the Nara Institute of Science and Technology in 2005. He was a researcher at ATR Spoken Language Communication Research Laboratories in Kyoto from 2003 and a research associate at NTT Communication Science Laboratories from 2007 before joining ISM in 2011.
His research interests include statistical natural language processing and Bayesian machine learning. He has served as an area co-chair for machine learning at several natural language processing conferences, such as ACL, COLING, and NAACL.
Title : Gaussian process generative models for language and robotics
Abstract
Gaussian processes (GPs) are stochastic processes that can describe continuously changing phenomena as covariates vary, and have lately been applied in many fields of machine learning and related disciplines. From the viewpoint of robotics, they offer an advantage in specifying a generative model of robot movements, where restrictive assumptions such as Markov models have been employed so far.
In this tutorial, I will first describe the basic machinery of Gaussian processes as an infinite extension of linear regression. Then I will introduce our work on how to automatically induce “actions” from the movements of robots by leveraging Gaussian processes and MCMC. Thanks to the Bayesian characteristics of GPs, we are able to effectively model the variations in each action while still capturing common structures, in a completely unsupervised fashion. Interestingly, this model is essentially equivalent to the unsupervised word induction from strings that I have proposed in natural language processing.
This is joint work with Takayuki Nagai and Wataru Takano (Osaka), Tomoaki Nakamura (UEC), and Ichiro Kobayashi (Ochanomizu).
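The “infinite extension of linear regression” view of GPs yields closed-form posterior formulas for regression. As background for the tutorial, here is a minimal illustrative sketch (not code from the tutorial itself; the squared-exponential kernel, hyperparameters, and toy sine data are assumptions for illustration):

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky factorization for numerically stable solves.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                       # posterior mean at test inputs
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)  # posterior variance
    return mean, var

# Toy data: noiseless observations of sin(x); the GP interpolates them.
x_train = np.linspace(0, 2 * np.pi, 8)
y_train = np.sin(x_train)
x_test = np.linspace(0, 2 * np.pi, 50)
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior mean reduces to (kernelized) linear regression on the training targets, while the variance shrinks near observed inputs and grows away from them, which is what makes GPs attractive for modeling variation across repeated robot movements.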
Sarit Kraus (Fri. November 1, 12:00-13:00)
Short bio: Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor of Computer Science at Bar-Ilan University. Her research is focused on intelligent agents and multi-agent systems (including people and robots). For her work she received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research award, the EMET prize and was twice the winner of the IFAAMAS influential paper award. She is an ACM, AAAI and ECCAI fellow and a recipient of the advanced ERC grant. She also received a special commendation from the city of Los Angeles, together with Prof. Tambe, Prof. Ordonez and their USC students, for the creation of the ARMOR security scheduling system. She has published over 350 papers in leading journals and major conferences and co-authored five books. She is a member of the board of directors of the International Foundation for Multi-agent Systems (IFAAMAS) and was IJCAI 2019 program chair.
Title : Computer Agents that Interact Proficiently with People
Abstract
Automated agents that interact proficiently with people can be useful in supporting or replacing people in complex tasks. The inclusion of people presents novel problems for the design of automated agents’ strategies. People do not adhere to the optimal, monolithic strategies that can be derived analytically. Their behavior is affected by a multitude of social and psychological factors. In this talk, I will show how combining machine learning techniques for human modeling, human behavioral models, formal decision-making and game theory approaches enables agents to interact well with people. Applications include intelligent agents that help reduce car accidents, agents that support rehabilitation, employer-employee negotiation, agents that are used for training law-enforcement personnel and agents that support a human operator in managing a team of low-cost robots in search and rescue tasks. I will also discuss robot-human-agent collaboration and learning for elderly assisted living.
Short bio:
Dinesh Jayaraman is currently a visiting research scientist at Facebook AI Research, Menlo Park. He received his PhD from UT Austin (2017) and was a postdoctoral scholar at UC Berkeley. His research interests are broadly in computer vision, robotics, and machine learning. In the last few years, he has worked on visual prediction, active perception, self-supervised visual learning, visuo-tactile robotic manipulation, semantic visual attributes, and zero-shot categorization. He has received a Robotics and Automation Letters Best Paper Runner-Up Award (2018), an Asian Conference on Computer Vision Best Application Paper Award (2016), a Samsung PhD Fellowship (2016), a UT Austin Graduate Dean's Fellowship (2016), and a Microelectronics and Computer Development Fellowship Award (2011). He has published in and served on reviewing and area chair committees for top conferences and journals in computer vision, machine learning, AI, and robotics. He will start as an assistant professor at the University of Pennsylvania in 2020.
Title : REPLAB: The challenges and opportunities of low-cost robots
Abstract
While assembly-line and industrial robots have had notable successes in the last few decades through high-precision hardware and task-specific instrumentation, open-world robotics remains challenging. In this talk, I will argue that an important missing component is solutions to problems like high-bandwidth perception and noisy actuation, which industrial robots sidestep and low-cost robots naturally expose. I will introduce REPLAB, a standardized and reproducible hardware stack (robot arm, camera, and compact workspace) that can be assembled within a few hours and costs about 2000 USD. I will demonstrate its usage in various data-driven robotics applications: supervised grasp selection algorithms, model-based and model-free reinforcement learning, and visual servoing. I will show the potential of REPLAB for facilitating reproducible research with "plug-and-play" implementations, introducing a grasping benchmark template. Finally, I will highlight potential future applications and invite contributions from the CORL community. The REPLAB project page with assembly instructions, code, and videos is at https://goo.gl/5F9dP4.