An interactive Fall 2021 series of talks, tutorials, and learning sessions on SLAM
General Information
The goal of this series is to expand the understanding of both newcomers and experienced practitioners of SLAM. Sessions will include research talks, introductions to various themes in SLAM, and thought-provoking open-ended discussions. The lineup of events aims to foster fun, provocative discussions on robotics.
Join our mailing list for Zoom links, updates, and reminders about the presentations. Our Tartan SLAM Series Discord server aims to foster an inclusive learning community for SLAM. Whether you are an expert or a newcomer, we invite you to join the Discord server and help build the community.
Associate Professor, Institute for Aerospace Studies
University of Toronto
Unlocking Dynamic Cameras for Visual Navigation
25 Oct 2021
12:00 PM EST
Abstract:
Gimbal-stabilized dynamic cameras provide many advantages in robotic applications governed by highly dynamic motion profiles and uneven feature distributions, due to their ability to provide smooth image capture independent of robot motion. In order to integrate information received from gimballed cameras, an accurate time-varying extrinsic calibration between the dynamic camera and other sensors, such as static cameras and IMUs, needs to be determined. In this talk, I will first present our work on the extrinsic calibration for dynamic and static camera clusters. I will then talk about our recent efforts to perform the calibration between a dynamic camera and an IMU, online and in flight while presenting results in simulation and real hardware data.
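As a rough illustration of why the extrinsic becomes time-varying: the pose of a gimballed camera relative to the robot body can be written as a chain of fixed mounting transforms and joint-angle-dependent rotations. The single-axis gimbal, frame names, and numeric offsets below are illustrative assumptions, not the speaker's formulation.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform for a rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """Homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Fixed mounting transforms (illustrative values)
T_body_to_gimbal = translate(0.10, 0.0, 0.05)  # gimbal base on the body
T_gimbal_to_cam  = translate(0.0, 0.0, 0.02)   # camera on the gimbal

def camera_extrinsic(joint_angle):
    """Body-to-camera transform as a function of the gimbal joint angle."""
    return T_body_to_gimbal @ rot_z(joint_angle) @ T_gimbal_to_cam

# The extrinsic changes as the gimbal rotates, so a single static
# calibration is insufficient; it must be estimated over time.
T0 = camera_extrinsic(0.0)
T1 = camera_extrinsic(np.pi / 4)
```

With a static camera, `camera_extrinsic` would be a constant; the joint-angle dependence is what forces the online, in-flight calibration described in the talk.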
Reconstructing small things in large spaces, and other reconstruction stories.
1 Nov 2021
12:00 PM EST
Bio:
Amy Tabb holds degrees from Sweet Briar College (B.A. Math/Computer Science and Music), Duke University (M.A. Musicology), and Purdue University (M.S. and Ph.D. Electrical and Computer Engineering) and is a Research Agricultural Engineer at a US Department of Agriculture, Agricultural Research Service laboratory in Kearneysville, West Virginia. There, she has been engaged in creating systems for automation in the tree fruit industry. Her research interests are within the fields of computer vision and robotics, in particular robust three-dimensional reconstruction and perception in outdoor conditions.
Abstract:
Three-dimensional reconstruction is an intermediate step needed for many applications in agriculture, including automation and phenotyping of plants and fruits. The nature of agricultural objects is that they have low texture compared to objects in typical computer vision datasets and consequently many classical approaches do not work well for camera pose localization and reconstruction. In this talk, I will discuss some work I have done on reconstructing a range of objects, from leafless trees with a robot-camera system to individual fruits with a tabletop system. Throughout, I will discuss failures and motivations for choosing one approach over another, as well as why working on three-dimensional reconstruction in agriculture has led to work on calibration systems.
Where Can Machine Learning Help Robotic State Estimation?
11 Nov 2021
12:30 PM EST
Abstract:
Classic state estimation tools (e.g., determining the position/velocity of a robot from noisy sensor data) have been in use since the 1960s, perhaps the most famous technique being the Kalman filter. For difficult-to-model nonlinear systems with rich sensing (e.g., almost any real-world robot), clever adaptations of the classic tools are needed. In this talk, I will first briefly summarize a few ideas that have become standard practice in our group over the last several years: continuous-time trajectory estimation (and its connection to sparse Gaussian process regression) as well as estimation on matrix Lie groups (to handle rotations cleanly). I will also discuss two new frameworks we have been pursuing lately: exactly sparse Gaussian variational inference (ESGVI) and Koopman state estimation (KoopSE). ESGVI seeks to minimize the Kullback-Leibler divergence between a Gaussian state estimate and the full Bayesian posterior; the framework also readily allows for parameter learning through Expectation Maximization, which we have used both to learn simple parameters such as constant system matrices and covariances and to model rich sensors using deep neural networks whose weights are learned from data. KoopSE takes a different approach, lifting a nonlinear system into a high-dimensional Reproducing Kernel Hilbert Space where it can be treated as linear and classic estimation tools can be applied; it also allows the system to be learned quite efficiently from training data. For all of these ideas, I will give simple intuitive explanations of the mathematics and show examples of things working in practice.
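For readers new to the classic tools the abstract builds on, the linear Kalman filter it mentions can be sketched in a few lines. The 1-D constant-velocity model and the noise values below are illustrative assumptions, not material from the talk.

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity], dt = 1
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])   # we measure position only
Q = 0.01 * np.eye(2)         # process noise (illustrative)
R = np.array([[0.5]])        # measurement noise (illustrative)

def kalman_step(x, P, z):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict: propagate the state and its covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fold in the measurement z
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Noisy position readings of an object moving at ~1 unit per step
zs = [np.array([1.1]), np.array([1.9]), np.array([3.2]), np.array([4.0])]
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in zs:
    x, P = kalman_step(x, P, z)
# x now estimates [position, velocity], approaching the true motion
```

The speaker's point is that this recursion is exact only for linear-Gaussian systems; the frameworks in the talk (ESGVI, KoopSE) are ways of extending such estimators to nonlinear, learned models.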