Complex Collaborative Systems:
Closing the Loop, Learning, and Self-Confidence

Full Day Workshop at IROS 2017

September 28, Vancouver, Canada

Updates

  • 10/09/17: We have uploaded speaker presentations. See Program.

  • 09/27/17: We have updated the program section with lightning talks. See Program.

  • 09/27/17: We have updated the speakers section. See Speakers.

What is the motivation for this workshop?

Recent advances in autonomy algorithms and computational hardware have enabled robots to break out of contrived laboratory settings and be deployed in the real world. There is a growing demand for modern civilian and military systems not only to exhibit complex and intelligent behavior in response to external stimuli, but also to augment their capabilities through machine learning techniques. For complex robotic systems to deliver intelligent and consistent performance across the diverse scenarios they encounter, it is imperative that a number of distinct yet interrelated modules (i.e., estimation, decision making, learning, and control) be tightly integrated. Today, these individual topics are largely researched in isolation. This workshop therefore aims to bring together researchers from different communities to discuss problems that arise at the intersection of their fields, present recent advances, and contemplate future directions of research.

Summary of the workshop

We had a fantastic workshop with an excellent set of talks spanning diverse fields, thought-provoking panel discussions, and invigorating exchanges between folks from the controls and learning communities! The discussions unearthed interesting technical problems that lie at the intersection of various disciplines, as well as philosophical questions that require further introspection. While it is indeed very hard to summarize the plethora of topics that came up, we highlight a few samples.

The workshop was set in motion by a couple of bold claims from the learning community:

  1. Given enough data and resources, learning methods will outperform model-based methods.
  2. If we agree that perception is learned from data, we might as well learn planning and control.

Three of the topics from the first panel discussion:

  1. "Not all data is information": Models are rich and allow us to obtain behaviours that we would not be able to discover from open-loop data gathering.
  2. However, deep learning has shown great success in learning policies for "human-level tasks" where we might not have good models.
  3. An interesting way forward would be to investigate deep learning architectures for planning inspired by model-based methods.

Three of the topics from the second panel discussion:

  1. "Satisficing as the alternative to optimal": Complex tasks, such as asking a robot to fetch a croissant, can be expressed as a satisficing problem rather than as the "optimal croissant gathering" problem; solving for the optimal solution is unnecessarily hard. This was followed by the observation that the success of MPC methods stems not from computing optimal solutions, but from offering a principled way to deal with constraints.
  2. "Self-contained autonomy is a myth": Autonomous systems at some level have to interact with humans who may not be rational agents.
  3. How can we move towards making learning systems more explainable? What are the legal / ethical ramifications?
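The satisficing-versus-optimizing distinction from the second panel can be illustrated with a toy sketch. Everything here (the candidate bakeries, their quality scores and costs, and the threshold) is hypothetical and chosen purely for illustration, not drawn from the workshop itself:

```python
# Toy illustration of satisficing vs. optimizing for the "fetch a croissant" task.
# All candidates, scores, and costs below are made-up values.

candidates = [
    {"name": "bakery A", "quality": 0.6, "cost": 1},
    {"name": "bakery B", "quality": 0.8, "cost": 3},
    {"name": "bakery C", "quality": 0.95, "cost": 9},
]

def satisfice(options, threshold):
    """Return the first option that is good enough; stops searching early."""
    for opt in options:
        if opt["quality"] >= threshold:
            return opt
    return None

def optimize(options):
    """Return the highest-quality option, regardless of the cost of finding it."""
    return max(options, key=lambda o: o["quality"])

good_enough = satisfice(candidates, threshold=0.75)  # accepts bakery B
best = optimize(candidates)                          # scans everything: bakery C
```

The satisficer settles for the first acceptable croissant at a fraction of the cost, while the optimizer must examine every candidate; this is the panel's point that demanding optimality can make an otherwise easy task unnecessarily hard.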