2024
- Informative Sensor Planning for a Single-Axis Gimbaled Camera on a Fixed-Wing UAV. By Parandekar, A., Moon, B., Suvarna, N. and Scherer, S. In 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), pp. 1798–1804, 2024.
@inproceedings{parandekar2024sensorplanning, title = {Informative Sensor Planning for a Single-Axis Gimbaled Camera on a Fixed-Wing UAV}, author = {Parandekar, Aditya and Moon, Brady and Suvarna, Nayana and Scherer, Sebastian}, year = {2024}, booktitle = {2024 IEEE 20th International Conference on Automation Science and Engineering (CASE)}, pages = {1798-1804}, doi = {10.1109/CASE59546.2024.10711697}, url = {https://arxiv.org/abs/2407.04896} }
Uncrewed Aerial Vehicles (UAVs) are a leading choice of platform for a variety of information-gathering applications. Sensor planning can enhance the efficiency and success of these missions when coupled with a higher-level informative path-planning algorithm. This paper addresses these data-acquisition challenges by developing an informative, non-myopic sensor planning framework for a single-axis gimbal, coupled with an informative path planner, to maximize information gain over a prior information map. This is done by finding reduced sensor sweep bounds over a planning horizon such that regions of higher confidence are prioritized (a toy sketch of this bound-selection step follows this entry). The framework is evaluated against no-sensor-planning and predefined-sensor-sweep baselines and validated in two simulation environments, where it improves performance by 21.88% and 13.34% over the two baselines, respectively.
Parandekar, Aditya and Moon, Brady and Suvarna, Nayana and Scherer, Sebastian, "Informative Sensor Planning for a Single-Axis Gimbaled Camera on a Fixed-Wing UAV," 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), 2024.
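The core idea above — shrinking the gimbal's sweep bounds so that high-prior-confidence regions are covered first — can be illustrated with a toy calculation. The sketch below is not the paper's algorithm: it assumes a 1D strip of prior-map confidence indexed by discretized gimbal angle and uses a made-up score (covered confidence mass penalized by sweep width); all names and parameters are illustrative.

```python
import numpy as np

def best_sweep_bounds(prior_confidence, max_half_width):
    """Toy selection of reduced gimbal sweep bounds.

    prior_confidence: 1D array of prior-map confidence per gimbal-angle bin.
    max_half_width: widest allowed sweep half-width, in bins.
    Returns the (lo, hi) bin indices that maximize covered confidence
    mass penalized by sweep width.
    """
    n = len(prior_confidence)
    best, best_score = (0, n - 1), -np.inf
    for lo in range(n):
        for hi in range(lo, min(n, lo + 2 * max_half_width + 1)):
            mass = prior_confidence[lo:hi + 1].sum()
            width = hi - lo + 1
            score = mass / np.sqrt(width)  # favor tight, informative sweeps
            if score > best_score:
                best, best_score = (lo, hi), score
    return best

# Confidence peaked off-center -> the chosen bounds shrink around the peak.
conf = np.exp(-0.5 * ((np.arange(60) - 40) / 5.0) ** 2)
print(best_sweep_bounds(conf, max_half_width=10))
```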
- Map It Anywhere (MIA): Empowering Bird’s Eye View Mapping using Large-scale Public Data. By Ho, C., Zou, J., Alama, O., Kumar, S.M.J., Chiang, B., Gupta, T., Wang, C., Keetha, N., Sycara, K. and Scherer, S. In Advances in Neural Information Processing Systems, 2024.
@inproceedings{ho2024map, title = {Map It Anywhere (MIA): Empowering Bird's Eye View Mapping using Large-scale Public Data}, author = {Ho, Cherie and Zou, Jiaye and Alama, Omar and Kumar, Sai Mitheran Jagadesh and Chiang, Benjamin and Gupta, Taneesh and Wang, Chen and Keetha, Nikhil and Sycara, Katia and Scherer, Sebastian}, year = {2024}, booktitle = {Advances in Neural Information Processing Systems}, url = {https://arxiv.org/abs/2407.08726}, code = {https://github.com/MapItAnywhere/MapItAnywhere} }
Top-down Bird’s Eye View (BEV) maps are a popular representation for ground robot navigation due to their richness and flexibility for downstream tasks. While recent methods have shown promise for predicting BEV maps from First-Person View (FPV) images, their generalizability is limited to the small regions captured by current autonomous-vehicle datasets. In this context, we show that a more scalable approach towards generalizable map prediction can be enabled by using two large-scale crowd-sourced mapping platforms: Mapillary for FPV images and OpenStreetMap for BEV semantic maps. We introduce Map It Anywhere (MIA), a data engine that enables seamless curation and modeling of labeled map prediction data from existing open-source map platforms (a minimal sketch of the map-rasterization step at the heart of such an engine follows this entry). Using our MIA data engine, we demonstrate the ease of automatically collecting a dataset of 1.2 million FPV & BEV pairs encompassing diverse geographies, landscapes, environmental factors, camera models & capture scenarios. We further train a simple camera-model-agnostic model on this data for BEV map prediction. Extensive evaluations using established benchmarks and our dataset show that the data curated by MIA enables effective pretraining for generalizable BEV map prediction, with zero-shot performance far exceeding baselines trained on existing datasets by 35%. Our analysis highlights the promise of using large-scale public maps for developing & testing generalizable BEV perception, paving the way for more robust autonomous navigation.
Ho, Cherie and Zou, Jiaye and Alama, Omar and Kumar, Sai Mitheran Jagadesh and Chiang, Benjamin and Gupta, Taneesh and Wang, Chen and Keetha, Nikhil and Sycara, Katia and Scherer, Sebastian, "Map It Anywhere (MIA): Empowering Bird’s Eye View Mapping using Large-scale Public Data," Advances in Neural Information Processing Systems, 2024.
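A central primitive of a data engine like this — turning vector map geometry into a rasterized BEV semantic grid around a query pose — reduces to point-in-polygon tests. The sketch below is a hypothetical, minimal stand-in, not the MIA pipeline: it assumes footprints are already expressed in ego-centered metric coordinates and uses made-up geometry.

```python
import numpy as np
from matplotlib.path import Path

def rasterize_bev(footprints, extent_m=50.0, resolution_m=0.5):
    """Rasterize polygon footprints (lists of (x, y) metric vertices,
    ego-centered) into a binary BEV semantic grid."""
    n = int(2 * extent_m / resolution_m)
    xs = np.linspace(-extent_m, extent_m, n)
    gx, gy = np.meshgrid(xs, xs)
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
    grid = np.zeros(n * n, dtype=bool)
    for poly in footprints:
        grid |= Path(poly).contains_points(cells)  # point-in-polygon test
    return grid.reshape(n, n)

# Two made-up building footprints near the ego position.
buildings = [
    [(5, 5), (15, 5), (15, 20), (5, 20)],
    [(-30, -10), (-12, -10), (-12, 8), (-30, 8)],
]
bev = rasterize_bev(buildings)
print(bev.shape, int(bev.sum()), "occupied cells")
```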
- MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions. By Ho, C., Kim, S., Moon, B., Parandekar, A., Harutyunyan, N., Wang, C., Sycara, K., Best, G. and Scherer, S. In arXiv preprint arXiv:2409.15590, 2024.
@misc{ho2024mapex, title = {MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions}, author = {Ho, Cherie and Kim, Seungchan and Moon, Brady and Parandekar, Aditya and Harutyunyan, Narek and Wang, Chen and Sycara, Katia and Best, Graeme and Scherer, Sebastian}, year = {2024}, eprint = {2409.15590}, archiveprefix = {arXiv}, primaryclass = {cs.RO}, url = {https://arxiv.org/abs/2409.15590} }
Exploration is a critical challenge in robotics, centered on understanding unknown environments. In this work, we focus on robots exploring structured indoor environments, which are often predictable and composed of repeating patterns. Most existing approaches, such as conventional frontier methods, have difficulty leveraging this predictability and explore with simple heuristics such as ‘closest first’. Recent works use deep learning techniques to predict unknown regions of the map and use these predictions for information gain calculation. However, these approaches are often sensitive to the predicted map quality or do not reason over sensor coverage. To overcome these issues, our key insight is to jointly reason over what the robot can observe and its uncertainty to calculate probabilistic information gain. We introduce MapEx, a new exploration framework that uses predicted maps to form a probabilistic sensor model for information gain estimation. MapEx generates multiple predicted maps based on observed information and takes into consideration both the computed variances of the predicted maps and the estimated visible area to estimate the information gain of a given viewpoint (a simplified stand-in for this computation is sketched after this entry). Experiments on the real-world KTH dataset showed an average improvement of 12.4% over representative map-prediction-based exploration and 25.4% over the nearest-frontier approach.
Ho, Cherie and Kim, Seungchan and Moon, Brady and Parandekar, Aditya and Harutyunyan, Narek and Wang, Chen and Sycara, Katia and Best, Graeme and Scherer, Sebastian, "MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions," arXiv preprint arXiv:2409.15590, 2024.
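The key quantity — information gain that jointly reasons over predicted-map uncertainty and sensor visibility — can be sketched as follows. This is a simplified stand-in, not MapEx itself: it takes an ensemble of predicted occupancy maps, uses per-cell variance as uncertainty, casts rays from a candidate viewpoint over the mean map, and sums the variance of the cells the sensor would actually see; the ray caster and all parameters are illustrative.

```python
import numpy as np

def info_gain(predicted_maps, viewpoint, n_rays=72, max_range=30):
    """Variance-weighted information gain for a candidate viewpoint.

    predicted_maps: (K, H, W) ensemble of predicted occupancy maps
        with values in [0, 1] (1 = occupied).
    viewpoint: (row, col) cell of the candidate sensor pose.
    """
    mean = predicted_maps.mean(axis=0)
    var = predicted_maps.var(axis=0)          # per-cell prediction uncertainty
    h, w = mean.shape
    r0, c0 = viewpoint
    gain, seen = 0.0, set()
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        for t in range(1, max_range):         # march the ray outward
            r = int(round(r0 + t * np.sin(theta)))
            c = int(round(c0 + t * np.cos(theta)))
            if not (0 <= r < h and 0 <= c < w):
                break
            if (r, c) not in seen:
                seen.add((r, c))
                gain += var[r, c]             # only visible cells contribute
            if mean[r, c] > 0.5:              # ray blocked by predicted wall
                break
    return gain

# Five predicted maps that disagree in one region -> that region drives gain.
rng = np.random.default_rng(0)
maps = np.zeros((5, 40, 40))
maps[:, 10:20, 25:35] = rng.random((5, 10, 10))  # uncertain block
print(info_gain(maps, viewpoint=(20, 20)))
```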
- Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad Traversability Models. By Chen, E., Ho, C., Maulimov, M., Wang, C. and Scherer, S. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
@article{chen2024learning, title = {Learning-on-the-Drive: {Self-supervised} Adaptation of Visual Offroad Traversability Models}, author = {Chen, Eric and Ho, Cherie and Maulimov, Mukhtar and Wang, Chen and Scherer, Sebastian}, year = {2024}, journal = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, url = {https://arxiv.org/pdf/2306.15226} }
Autonomous offroad driving is essential for applications like emergency rescue, military operations, and agriculture. Despite progress, systems struggle at high speeds exceeding 10 m/s because safe navigation demands accurate long-range (>50 m) perception. Current approaches are limited by sensor constraints: LiDAR-based methods offer precise short-range data but are noisy beyond 30 m, while visual models provide dense long-range measurements but falter in unseen scenarios. To overcome these issues, we introduce ALTER, a learning-on-the-drive perception framework that leverages both sensor types. ALTER uses a self-supervised visual model to learn and adapt from near-range LiDAR measurements, improving long-range prediction in new environments without manual labeling (a minimal sketch of this LiDAR-to-image label projection follows this entry). It also includes a model selection module for better response to sensor failures and adaptability to known environments. Testing in two real-world settings showed, after 45 seconds of online learning, on average 43.4% better traversability prediction than LiDAR-only methods and 164% better than non-adaptive state-of-the-art (SOTA) visual semantic methods.
Chen, Eric and Ho, Cherie and Maulimov, Mukhtar and Wang, Chen and Scherer, Sebastian, "Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad Traversability Models," 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
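The self-supervision signal described above — near-range LiDAR providing labels for a long-range visual model — boils down to projecting labeled LiDAR points into the image. Below is a minimal, assumption-laden sketch, not the ALTER code: it assumes pinhole intrinsics, points already transformed into the camera frame, and made-up values throughout.

```python
import numpy as np

def project_lidar_labels(points_cam, costs, K, image_shape):
    """Project LiDAR points (camera frame) into the image to obtain
    sparse per-pixel traversability labels for self-supervision.

    points_cam: (N, 3) points with +z forward (camera frame).
    costs: (N,) traversability cost measured from LiDAR geometry.
    K: (3, 3) pinhole intrinsics.
    Returns an (H, W) label image with NaN where no point projects.
    """
    h, w = image_shape
    labels = np.full((h, w), np.nan)
    in_front = points_cam[:, 2] > 0.1         # keep points ahead of the camera
    pts, c = points_cam[in_front], costs[in_front]
    uvw = (K @ pts.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[v[valid], u[valid]] = c[valid]     # sparse supervision targets
    return labels

# Made-up intrinsics and a handful of labeled LiDAR points.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
pts = np.array([[0.5, 0.2, 5.0], [-1.0, 0.1, 8.0], [2.0, 0.3, 12.0]])
costs = np.array([0.1, 0.8, 0.3])
lbl = project_lidar_labels(pts, costs, K, (480, 640))
print(np.argwhere(~np.isnan(lbl)))            # pixels that received labels
```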
- A Unified MPC Strategy for a Tilt-rotor VTOL UAV Towards Seamless Mode Transitioning. By Chen, Q., Hu, Z., Geng, J., Bai, D., Mousaei, M. and Scherer, S. In AIAA SCITECH 2024 Forum, p. 2878, 2024.
@inproceedings{chen2024unified, title = {A Unified {MPC} Strategy for a Tilt-rotor {VTOL UAV} Towards Seamless Mode Transitioning}, author = {Chen, Qizhao and Hu, Ziqi and Geng, Junyi and Bai, Dongwei and Mousaei, Mohammadreza and Scherer, Sebastian}, year = {2024}, booktitle = {AIAA SCITECH 2024 Forum}, pages = {2878} }
Chen, Qizhao and Hu, Ziqi and Geng, Junyi and Bai, Dongwei and Mousaei, Mohammadreza and Scherer, Sebastian, "A Unified MPC Strategy for a Tilt-rotor VTOL UAV Towards Seamless Mode Transitioning," AIAA SCITECH 2024 Forum, 2024.
- Present and Future of SLAM in Extreme Environments: The DARPA SubT Challenge. By Ebadi, K., Bernreiter, L., Biggie, H., Catt, G., Chang, Y., Chatterjee, A., Denniston, C.E., Deschênes, S.-P., Harlow, K., Khattak, S., Nogueira, L., Palieri, M., Petráček, P., Petrlík, M., Reinke, A., Krátký, V., Zhao, S., Agha-mohammadi, A.-akbar, Alexis, K., Heckman, C., Khosoussi, K., Kottege, N., Morrell, B., Hutter, M., Pauling, F., Pomerleau, F., Saska, M., Scherer, S., Siegwart, R., Williams, J.L. and Carlone, L. In IEEE Transactions on Robotics, vol. 40, pp. 936–959, 2024.
@article{ebadi2024present, title = {Present and Future of {SLAM} in Extreme Environments: {The DARPA SubT Challenge}}, author = {Ebadi, Kamak and Bernreiter, Lukas and Biggie, Harel and Catt, Gavin and Chang, Yun and Chatterjee, Arghya and Denniston, Christopher E. and Deschênes, Simon-Pierre and Harlow, Kyle and Khattak, Shehryar and Nogueira, Lucas and Palieri, Matteo and Petráček, Pavel and Petrlík, Matěj and Reinke, Andrzej and Krátký, Vít and Zhao, Shibo and Agha-mohammadi, Ali-akbar and Alexis, Kostas and Heckman, Christoffer and Khosoussi, Kasra and Kottege, Navinda and Morrell, Benjamin and Hutter, Marco and Pauling, Fred and Pomerleau, François and Saska, Martin and Scherer, Sebastian and Siegwart, Roland and Williams, Jason L. and Carlone, Luca}, year = {2024}, journal = {IEEE Transactions on Robotics}, volume = {40}, number = {}, pages = {936--959}, doi = {10.1109/TRO.2023.3323938}, url = {https://arxiv.org/pdf/2208.01787} }
Ebadi, Kamak and Bernreiter, Lukas and Biggie, Harel and Catt, Gavin and Chang, Yun and Chatterjee, Arghya and Denniston, Christopher E. and Deschênes, Simon-Pierre and Harlow, Kyle and Khattak, Shehryar and Nogueira, Lucas and Palieri, Matteo and Petráček, Pavel and Petrlík, Matěj and Reinke, Andrzej and Krátký, Vít and Zhao, Shibo and Agha-mohammadi, Ali-akbar and Alexis, Kostas and Heckman, Christoffer and Khosoussi, Kasra and Kottege, Navinda and Morrell, Benjamin and Hutter, Marco and Pauling, Fred and Pomerleau, François and Saska, Martin and Scherer, Sebastian and Siegwart, Roland and Williams, Jason L. and Carlone, Luca, "Present and Future of SLAM in Extreme Environments: The DARPA SubT Challenge," IEEE Transactions on Robotics, 2024.
- TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for Terminal Airspace Operations. By Patrikar, J., Dantas, J., Moon, B., Hamidi, M., Ghosh, S., Keetha, N., Higgins, I., Chandak, A., Yoneyama, T. and Scherer, S. In arXiv preprint arXiv:2403.03372, 2024.
@article{patrikar2024tartanaviation, title = {TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for Terminal Airspace Operations}, author = {Patrikar, Jay and Dantas, Joao and Moon, Brady and Hamidi, Milad and Ghosh, Sourish and Keetha, Nikhil and Higgins, Ian and Chandak, Atharva and Yoneyama, Takashi and Scherer, Sebastian}, year = {2024}, url = {https://arxiv.org/pdf/2403.03372.pdf}, eprint = {2403.03372}, archiveprefix = {arXiv}, primaryclass = {cs.LG} }
Patrikar, Jay and Dantas, Joao and Moon, Brady and Hamidi, Milad and Ghosh, Sourish and Keetha, Nikhil and Higgins, Ian and Chandak, Atharva and Yoneyama, Takashi and Scherer, Sebastian, "TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for Terminal Airspace Operations," arXiv preprint arXiv:2403.03372, 2024.
- SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation. By Navarro, I., Patrikar, J., Dantas, J.P.A., Baijal, R., Higgins, I., Scherer, S. and Oh, J. In IEEE Robotics and Automation Letters, vol. 9, no. 4, pp. 3759–3766, 2024.
@article{navarro2024sorts, title = {SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation}, author = {Navarro, Ingrid and Patrikar, Jay and Dantas, Joao P. A. and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean}, year = {2024}, journal = {IEEE Robotics and Automation Letters}, volume = {9}, number = {4}, pages = {3759--3766}, doi = {10.1109/LRA.2024.3370051}, url = {https://arxiv.org/pdf/2309.13144.pdf}, keywords = {Navigation;Robots;Social robots;Predictive models;Behavioral sciences;Monte Carlo methods;Costs;Aerial Systems: Perception and Autonomy;human-aware motion planning;safety in HRI} }
Navarro, Ingrid and Patrikar, Jay and Dantas, Joao P. A. and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean, "SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation," IEEE Robotics and Automation Letters, 2024.
- Learning Generalizable Feature Fields for Mobile Manipulation. By Qiu, R.-Z., Hu, Y., Yang, G., Song, Y., Fu, Y., Ye, J., Mu, J., Yang, R., Atanasov, N., Scherer, S. and Wang, X. In arXiv preprint arXiv:2403.07563, 2024.
@misc{qiu2024learning, title = {Learning Generalizable Feature Fields for Mobile Manipulation}, author = {Qiu, Ri-Zhao and Hu, Yafei and Yang, Ge and Song, Yuchen and Fu, Yang and Ye, Jianglong and Mu, Jiteng and Yang, Ruihan and Atanasov, Nikolay and Scherer, Sebastian and Wang, Xiaolong}, year = {2024}, url = {https://arxiv.org/pdf/2403.07563.pdf}, eprint = {2403.07563}, archiveprefix = {arXiv}, primaryclass = {cs.RO} }
Qiu, Ri-Zhao and Hu, Yafei and Yang, Ge and Song, Yuchen and Fu, Yang and Ye, Jianglong and Mu, Jiteng and Yang, Ruihan and Atanasov, Nikolay and Scherer, Sebastian and Wang, Xiaolong, "Learning Generalizable Feature Fields for Mobile Manipulation," arXiv preprint arXiv:2403.07563, 2024.
- Fast and Modular Autonomy Software for Autonomous Racing Vehicles. By Saba, A., Adetunji, A., Johnson, A., Kothari, A., Sivaprakasam, M., Spisak, J., Bharatia, P., Chauhan, A., Duff, B., Gasparro, N., King, C., Larkin, R., Mao, B., Nye, M., Parashar, A., Attias, J., Balciunas, A., Brown, A., Chang, C., Gao, M., Heredia, C., Keats, A., Lavariega, J., Muckelroy, W., Slavescu, A., Stathas, N., Suvarna, N., Zhang, C.T., Scherer, S. and Ramanan, D. In Field Robotics, vol. 4, no. 1, pp. 1–45, Jan. 2024.
@article{saba2024fast, title = {Fast and Modular Autonomy Software for Autonomous Racing Vehicles}, author = {Saba, Andrew and Adetunji, Aderotimi and Johnson, Adam and Kothari, Aadi and Sivaprakasam, Matthew and Spisak, Joshua and Bharatia, Prem and Chauhan, Arjun and Duff, Brendan and Gasparro, Noah and King, Charles and Larkin, Ryan and Mao, Brian and Nye, Micah and Parashar, Anjali and Attias, Joseph and Balciunas, Aurimas and Brown, Austin and Chang, Chris and Gao, Ming and Heredia, Cindy and Keats, Andrew and Lavariega, Jose and Muckelroy, William and Slavescu, Andre and Stathas, Nickolas and Suvarna, Nayana and Zhang, Chuan Tian and Scherer, Sebastian and Ramanan, Deva}, year = {2024}, month = jan, journal = {Field Robotics}, publisher = {Field Robotics Publication Society}, volume = {4}, number = {1}, pages = {1–45}, doi = {10.55417/fr.2024001}, issn = {2771-3989}, url = {http://dx.doi.org/10.55417/fr.2024001} }
Saba, Andrew and Adetunji, Aderotimi and Johnson, Adam and Kothari, Aadi and Sivaprakasam, Matthew and Spisak, Joshua and Bharatia, Prem and Chauhan, Arjun and Duff, Brendan and Gasparro, Noah and King, Charles and Larkin, Ryan and Mao, Brian and Nye, Micah and Parashar, Anjali and Attias, Joseph and Balciunas, Aurimas and Brown, Austin and Chang, Chris and Gao, Ming and Heredia, Cindy and Keats, Andrew and Lavariega, Jose and Muckelroy, William and Slavescu, Andre and Stathas, Nickolas and Suvarna, Nayana and Zhang, Chuan Tian and Scherer, Sebastian and Ramanan, Deva, "Fast and Modular Autonomy Software for Autonomous Racing Vehicles," Field Robotics, 2024.
- Deep Bayesian Future Fusion for Self-Supervised, High-Resolution, Off-Road Mapping. By Aich, S., Wang, W., Maheshwari, P., Sivaprakasam, M., Triest, S., Ho, C., Gregory, J.M., Rogers III, J.G. and Scherer, S. In arXiv preprint arXiv:2403.11876, 2024.
@misc{aich2024deep, title = {Deep Bayesian Future Fusion for Self-Supervised, High-Resolution, Off-Road Mapping}, author = {Aich, Shubhra and Wang, Wenshan and Maheshwari, Parv and Sivaprakasam, Matthew and Triest, Samuel and Ho, Cherie and Gregory, Jason M. and Rogers III, John G. and Scherer, Sebastian}, year = {2024}, url = {https://arxiv.org/pdf/2403.11876.pdf}, eprint = {2403.11876}, archiveprefix = {arXiv}, primaryclass = {cs.RO} }
Aich, Shubhra and Wang, Wenshan and Maheshwari, Parv and Sivaprakasam, Matthew and Triest, Samuel and Ho, Cherie and Gregory, Jason M. and Rogers III, John G. and Scherer, Sebastian, "Deep Bayesian Future Fusion for Self-Supervised, High-Resolution, Off-Road Mapping," arXiv preprint arXiv:2403.11876, 2024.
- Multi-Robot Planning for Filming Groups of Moving Actors Leveraging Submodularity and Pixel Density. By Hughes, S., Martin, R., Corah, M. and Scherer, S. In arXiv preprint arXiv:2404.03103, 2024.
@misc{hughes2024multi, title = {Multi-Robot Planning for Filming Groups of Moving Actors Leveraging Submodularity and Pixel Density}, author = {Hughes, Skyler and Martin, Rebecca and Corah, Micah and Scherer, Sebastian}, year = {2024}, url = {https://arxiv.org/pdf/2404.03103.pdf}, eprint = {2404.03103}, archiveprefix = {arXiv}, primaryclass = {cs.RO} }
Hughes, Skyler and Martin, Rebecca and Corah, Micah and Scherer, Sebastian, "Multi-Robot Planning for Filming Groups of Moving Actors Leveraging Submodularity and Pixel Density," arXiv preprint arXiv:2404.03103, 2024.
- SubT-MRS Dataset: Pushing SLAM Towards All-weather Environments. By Zhao, S., Gao, Y., Wu, T., Singh, D., Jiang, R., Sun, H., Sarawata, M., Whittaker, W.C., Higgins, I., Su, S., Du, Y., Xu, C., Keller, J., Karhade, J., Nogueira, L., Saha, S., Qiu, Y., Zhang, J., Wang, W., Wang, C. and Scherer, S. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
@inproceedings{zhao2024subt, title = {{SubT-MRS} Dataset: Pushing SLAM Towards All-weather Environments}, author = {Zhao, Shibo and Gao, Yuanjun and Wu, Tianhao and Singh, Damanpreet and Jiang, Rushan and Sun, Haoxiang and Sarawata, Mansi and Whittaker, Warren C and Higgins, Ian and Su, Shaoshu and Du, Yi and Xu, Can and Keller, John and Karhade, Jay and Nogueira, Lucas and Saha, Sourojit and Qiu, Yuheng and Zhang, Ji and Wang, Wenshan and Wang, Chen and Scherer, Sebastian}, year = {2024}, booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, url = {https://arxiv.org/pdf/2307.07607.pdf}, video = {https://youtu.be/mkN72Lv8S7A} }
Zhao, Shibo and Gao, Yuanjun and Wu, Tianhao and Singh, Damanpreet and Jiang, Rushan and Sun, Haoxiang and Sarawata, Mansi and Whittaker, Warren C and Higgins, Ian and Su, Shaoshu and Du, Yi and Xu, Can and Keller, John and Karhade, Jay and Nogueira, Lucas and Saha, Sourojit and Qiu, Yuheng and Zhang, Ji and Wang, Wenshan and Wang, Chen and Scherer, Sebastian, "SubT-MRS Dataset: Pushing SLAM Towards All-weather Environments," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
- AirShot: Efficient Few-Shot Detection for Autonomous Exploration. By Wang, Z., Li, B., Wang, C. and Scherer, S. In arXiv preprint arXiv:2404.05069, 2024.
@misc{wang2024airshot, title = {AirShot: Efficient Few-Shot Detection for Autonomous Exploration}, author = {Wang, Zihan and Li, Bowen and Wang, Chen and Scherer, Sebastian}, year = {2024}, url = {https://arxiv.org/pdf/2404.05069.pdf}, eprint = {2404.05069}, archiveprefix = {arXiv}, primaryclass = {cs.CV} }
Wang, Zihan and Li, Bowen and Wang, Chen and Scherer, Sebastian, "AirShot: Efficient Few-Shot Detection for Autonomous Exploration," arXiv preprint arXiv:2404.05069, 2024.
2023
- Follow the rules: Online signal temporal logic tree search for guided imitation learning in stochastic domains. By Aloor, J.J., Patrikar, J., Kapoor, P., Oh, J. and Scherer, S. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 1320–1326, 2023.
@inproceedings{aloor2023follow, title = {Follow the rules: {Online} signal temporal logic tree search for guided imitation learning in stochastic domains}, author = {Aloor, Jasmine Jerry and Patrikar, Jay and Kapoor, Parv and Oh, Jean and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {1320--1326}, doi = {10.1109/ICRA48891.2023.10160953}, url = {https://arxiv.org/pdf/2209.13737}, organization = {IEEE} }
Aloor, Jasmine Jerry and Patrikar, Jay and Kapoor, Parv and Oh, Jean and Scherer, Sebastian, "Follow the rules: Online signal temporal logic tree search for guided imitation learning in stochastic domains," 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
- Multi-robot, multi-sensor exploration of multifarious environments with full mission aerial autonomy. By Best, G., Garg, R., Keller, J., Hollinger, G.A. and Scherer, S. In The International Journal of Robotics Research, 2023.
@article{best2023multi, title = {Multi-robot, multi-sensor exploration of multifarious environments with full mission aerial autonomy}, author = {Best, Graeme and Garg, Rohit and Keller, John and Hollinger, Geoffrey A and Scherer, Sebastian}, year = {2023}, journal = {The International Journal of Robotics Research}, publisher = {SAGE Publications Sage UK: London, England}, doi = {10.1177/02783649231203342}, url = {https://doi.org/10.1177/02783649231203342} }
We present a coordinated autonomy pipeline for multi-sensor exploration of confined environments. We simultaneously address four broad challenges that are typically overlooked in prior work: (a) make effective use of both range and vision sensing modalities, (b) perform this exploration across a wide range of environments, (c) be resilient to adverse events, and (d) execute this onboard teams of physical robots. Our solution centers around a behavior tree architecture, which adaptively switches between various behaviors involving coordinated exploration and responding to adverse events. Our exploration strategy exploits the benefits of both visual and range sensors with a generalized frontier-based exploration algorithm (a toy version of the frontier-extraction primitive is sketched after this entry) and an OpenVDB-based map processing pipeline. Our local planner utilizes a dynamically feasible trajectory library and a GPU-based Euclidean distance transform map to allow fast and safe navigation through both tight doorways and expansive spaces. The autonomy pipeline is evaluated with an extensive set of field experiments, with teams of up to three robots that fly at up to 3 m/s and cover distances exceeding 1 km in confined spaces. We provide a summary of various field experiments and detail resilient behaviors that arose: maneuvering narrow doorways, adapting to unexpected environment changes, and emergency landing. Experiments are also detailed from the DARPA Subterranean Challenge, where our proposed autonomy pipeline contributed to us winning the “Most Sectors Explored” award. We provide an extended discussion of lessons learned, release software as open source, and present a video that illustrates our extensive field trials.
Best, Graeme and Garg, Rohit and Keller, John and Hollinger, Geoffrey A and Scherer, Sebastian, "Multi-robot, multi-sensor exploration of multifarious environments with full mission aerial autonomy," The International Journal of Robotics Research, 2023.
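The frontier-based strategy at the heart of this pipeline rests on a simple primitive: frontier cells are free cells adjacent to unknown space. The toy extraction below runs on a small occupancy grid with illustrative values only; it is not the released OpenVDB-based pipeline.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) of free cells with at least one unknown
    4-neighbor -- the candidate frontiers for exploration."""
    h, w = grid.shape
    frontiers = []
    for r in range(h):
        for c in range(w):
            if grid[r, c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < h and 0 <= nc < w and grid[nr, nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

# Small map: free room on the left, unknown space on the right.
grid = np.full((6, 8), UNKNOWN)
grid[:, :4] = FREE
grid[2, 3] = OCCUPIED                     # a wall segment on the boundary
print(frontier_cells(grid))
```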
- Exploring the Most Sectors at the DARPA Subterranean Challenge Finals. By Cao, C., Nogueira, L., Zhu, H., Keller, J., Best, G., Garg, R., Kohanbash, D., Maier, J., Zhao, S., Yang, F., Cujic, K., Darnley, R., DeBortoli, R., Drozd, B., Sun, B., Higgins, I., Willits, S., Armstrong, G., Zhang, J., Hollinger, G., Travers, M. and Scherer, S. In Field Robotics, vol. 3, no. 25, pp. 801–836, 2023.
@article{cao2023exploring, title = {Exploring the Most Sectors at the {DARPA Subterranean Challenge Finals}}, author = {Cao, C. and Nogueira, L. and Zhu, H. and Keller, J. and Best, G. and Garg, R. and Kohanbash, D. and Maier, J. and Zhao, S. and Yang, F. and Cujic, K. and Darnley, R. and DeBortoli, R. and Drozd, B. and Sun, B. and Higgins, I. and Willits, S. and Armstrong, G. and Zhang, J. and Hollinger, G. and Travers, M. and Scherer, Sebastian}, year = {2023}, journal = {Field Robotics}, volume = {3}, number = {25}, pages = {801--836}, doi = {10.55417/fr.2023025}, url = {https://fieldrobotics.net/Field_Robotics/Volume_3_files/Vol3_25.pdf} }
Cao, C. and Nogueira, L. and Zhu, H. and Keller, J. and Best, G. and Garg, R. and Kohanbash, D. and Maier, J. and Zhao, S. and Yang, F. and Cujic, K. and Darnley, R. and DeBortoli, R. and Drozd, B. and Sun, B. and Higgins, I. and Willits, S. and Armstrong, G. and Zhang, J. and Hollinger, G. and Travers, M. and Scherer, Sebastian, "Exploring the Most Sectors at the DARPA Subterranean Challenge Finals," Field Robotics, 2023.
- How does it feel? Self-supervised costmap learning for off-road vehicle traversability. By Castro, M.G., Triest, S., Wang, W., Gregory, J.M., Sanchez, F., Rogers, J.G. and Scherer, S. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 931–938, 2023.
@inproceedings{castro2023how, title = {How does it feel? {Self-supervised} costmap learning for off-road vehicle traversability}, author = {Castro, Mateo Guaman and Triest, Samuel and Wang, Wenshan and Gregory, Jason M and Sanchez, Felix and Rogers, John G and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {931--938}, doi = {10.1109/ICRA48891.2023.10160856}, url = {https://arxiv.org/pdf/2209.10788}, organization = {IEEE} }
Castro, Mateo Guaman and Triest, Samuel and Wang, Wenshan and Gregory, Jason M and Sanchez, Felix and Rogers, John G and Scherer, Sebastian, "How does it feel? Self-supervised costmap learning for off-road vehicle traversability," 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
- Aerial Interaction with Tactile Sensing. By Guo, X., He, G., Mousaei, M., Geng, J., Shi, G. and Scherer, S. In arXiv preprint arXiv:2310.00142, 2023.
@article{guo2023aerial, title = {Aerial Interaction with Tactile Sensing}, author = {Guo, Xiaofeng and He, Guanqi and Mousaei, Mohammadreza and Geng, Junyi and Shi, Guanya and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2310.00142}, url = {https://arxiv.org/pdf/2310.00142} }
Guo, Xiaofeng and He, Guanqi and Mousaei, Mohammadreza and Geng, Junyi and Shi, Guanya and Scherer, Sebastian, "Aerial Interaction with Tactile Sensing," arXiv preprint arXiv:2310.00142, 2023.
- Image-based Visual Servo Control for Aerial Manipulation Using a Fully-Actuated UAV. By He, G., Jangir, Y., Geng, J., Mousaei, M., Bai, D. and Scherer, S. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5042–5049, 2023.
@inproceedings{he2023image, title = {Image-based Visual Servo Control for Aerial Manipulation Using a Fully-Actuated {UAV}}, author = {He, Guanqi and Jangir, Yash and Geng, Junyi and Mousaei, Mohammadreza and Bai, Dongwei and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {5042--5049}, doi = {10.1109/IROS55552.2023.10342145}, url = {https://arxiv.org/pdf/2306.16530}, organization = {IEEE} }
He, Guanqi and Jangir, Yash and Geng, Junyi and Mousaei, Mohammadreza and Bai, Dongwei and Scherer, Sebastian, "Image-based Visual Servo Control for Aerial Manipulation Using a Fully-Actuated UAV," 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
- FoundLoc: Vision-based Onboard Aerial Localization in the Wild. By He, Y., Cisneros, I., Keetha, N., Patrikar, J., Ye, Z., Higgins, I., Hu, Y., Kapoor, P. and Scherer, S. In arXiv preprint arXiv:2310.16299, 2023.
@article{he2023foundloc, title = {{FoundLoc}: {Vision-based} Onboard Aerial Localization in the Wild}, author = {He, Yao and Cisneros, Ivan and Keetha, Nikhil and Patrikar, Jay and Ye, Zelin and Higgins, Ian and Hu, Yaoyu and Kapoor, Parv and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2310.16299}, url = {https://arxiv.org/pdf/2310.16299} }
He, Yao and Cisneros, Ivan and Keetha, Nikhil and Patrikar, Jay and Ye, Zelin and Higgins, Ian and Hu, Yaoyu and Kapoor, Parv and Scherer, Sebastian, "FoundLoc: Vision-based Onboard Aerial Localization in the Wild," arXiv preprint arXiv:2310.16299, 2023.
- Off-Policy Evaluation With Online Adaptation for Robot Exploration in Challenging Environments. By Hu, Y., Geng, J., Wang, C., Keller, J. and Scherer, S. In IEEE Robotics and Automation Letters, vol. 8, no. 6, pp. 3780–3787, 2023.
@article{hu2023policy, title = {Off-Policy Evaluation With Online Adaptation for Robot Exploration in Challenging Environments}, author = {Hu, Yafei and Geng, Junyi and Wang, Chen and Keller, John and Scherer, Sebastian}, year = {2023}, journal = {IEEE Robotics and Automation Letters}, volume = {8}, number = {6}, pages = {3780--3787}, doi = {10.1109/LRA.2023.3271520}, url = {https://arxiv.org/pdf/2204.03140} }
Hu, Yafei and Geng, Junyi and Wang, Chen and Keller, John and Scherer, Sebastian, "Off-Policy Evaluation With Online Adaptation for Robot Exploration in Challenging Environments," IEEE Robotics and Automation Letters, 2023.
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis. By Hu, Y., Xie, Q., Jain, V., Francis, J., Patrikar, J., Keetha, N., Kim, S., Xie, Y., Zhang, T., Zhao, Z. and others. In arXiv preprint arXiv:2312.08782, 2023.
@article{hu2023general, title = {Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis}, author = {Hu, Yafei and Xie, Quanting and Jain, Vidhi and Francis, Jonathan and Patrikar, Jay and Keetha, Nikhil and Kim, Seungchan and Xie, Yaqi and Zhang, Tianyi and Zhao, Zhibo and others}, year = {2023}, journal = {arXiv preprint arXiv:2312.08782}, url = {https://arxiv.org/pdf/2312.08782} }
Hu, Yafei and Xie, Quanting and Jain, Vidhi and Francis, Jonathan and Patrikar, Jay and Keetha, Nikhil and Kim, Seungchan and Xie, Yaqi and Zhang, Tianyi and Zhao, Zhibo and others, "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis," arXiv preprint arXiv:2312.08782, 2023.
- Pegasus Simulator: An Isaac Sim Framework for Multiple Aerial Vehicles Simulation. By Jacinto, M., Pinto, J., Patrikar, J., Keller, J., Cunha, R., Scherer, S. and Pascoal, A. In arXiv preprint arXiv:2307.05263, 2023.
@article{jacinto2023pegasus, title = {Pegasus Simulator: An Isaac Sim Framework for Multiple Aerial Vehicles Simulation}, author = {Jacinto, Marcelo and Pinto, Jo{\~a}o and Patrikar, Jay and Keller, John and Cunha, Rita and Scherer, Sebastian and Pascoal, Ant{\'o}nio}, year = {2023}, journal = {arXiv preprint arXiv:2307.05263}, url = {https://arxiv.org/pdf/2307.05263} }
Jacinto, Marcelo and Pinto, João and Patrikar, Jay and Keller, John and Cunha, Rita and Scherer, Sebastian and Pascoal, António, "Pegasus Simulator: An Isaac Sim Framework for Multiple Aerial Vehicles Simulation," arXiv preprint arXiv:2307.05263, 2023.
- WIT-UAS: A Wildland-fire Infrared Thermal Dataset to Detect Crew Assets From Aerial Views. By Jong, A., Yu, M., Dhrafani, D., Kailas, S., Moon, B., Sycara, K. and Scherer, S. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 11464–11471, 2023.
@inproceedings{jong2023wit, title = {{WIT-UAS}: {A} Wildland-fire Infrared Thermal Dataset to Detect Crew Assets From Aerial Views}, author = {Jong, Andrew and Yu, Mukai and Dhrafani, Devansh and Kailas, Siva and Moon, Brady and Sycara, Katia and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {11464--11471}, doi = {10.1109/IROS55552.2023.10341683}, url = {https://arxiv.org/pdf/2312.09159}, organization = {IEEE} }
Jong, Andrew and Yu, Mukai and Dhrafani, Devansh and Kailas, Siva and Moon, Brady and Sycara, Katia and Scherer, Sebastian, "WIT-UAS: A Wildland-fire Infrared Thermal Dataset to Detect Crew Assets From Aerial Views," 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
- SplaTAM: Splat, track & map 3D Gaussians for dense RGB-D SLAM. By Keetha, N., Karhade, J., Jatavallabhula, K.M., Yang, G., Scherer, S., Ramanan, D. and Luiten, J. In arXiv preprint arXiv:2312.02126, 2023.
@article{keetha2023splatam, title = {{SplaTAM}: {Splat}, track \& map {3D} {Gaussians} for dense {RGB-D} {SLAM}}, author = {Keetha, Nikhil and Karhade, Jay and Jatavallabhula, Krishna Murthy and Yang, Gengshan and Scherer, Sebastian and Ramanan, Deva and Luiten, Jonathon}, year = {2023}, journal = {arXiv preprint arXiv:2312.02126}, url = {https://arxiv.org/pdf/2312.02126} }
Keetha, Nikhil and Karhade, Jay and Jatavallabhula, Krishna Murthy and Yang, Gengshan and Scherer, Sebastian and Ramanan, Deva and Luiten, Jonathon, "SplaTAM: Splat, track & map 3D Gaussians for dense RGB-D SLAM," arXiv preprint arXiv:2312.02126, 2023.
- AnyLoc: Towards universal visual place recognition. By Keetha, N., Mishra, A., Karhade, J., Jatavallabhula, K.M., Scherer, S., Krishna, M. and Garg, S. In IEEE Robotics and Automation Letters, 2023.
@article{keetha2023anyloc, title = {{AnyLoc}: {Towards} universal visual place recognition}, author = {Keetha, Nikhil and Mishra, Avneesh and Karhade, Jay and Jatavallabhula, Krishna Murthy and Scherer, Sebastian and Krishna, Madhava and Garg, Sourav}, year = {2023}, journal = {IEEE Robotics and Automation Letters}, publisher = {IEEE}, url = {https://arxiv.org/pdf/2308.00688} }
Keetha, Nikhil and Mishra, Avneesh and Karhade, Jay and Jatavallabhula, Krishna Murthy and Scherer, Sebastian and Krishna, Madhava and Garg, Sourav, "AnyLoc: Towards universal visual place recognition," IEEE Robotics and Automation Letters, 2023.
- UAS Simulator for Modeling, Analysis and Control in Free Flight and Physical Interaction. By Keipour, A., Mousaei, M., Geng, J., Bai, D. and Scherer, S. In AIAA SCITECH 2023 Forum, p. 1279, 2023.
@inproceedings{keipour2023uas, title = {{UAS} Simulator for Modeling, Analysis and Control in Free Flight and Physical Interaction}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Geng, Junyi and Bai, Dongwei and Scherer, Sebastian}, year = {2023}, booktitle = {AIAA SCITECH 2023 Forum}, pages = {1279}, url = {https://arxiv.org/pdf/2212.02973} }
Keipour, Azarakhsh and Mousaei, Mohammadreza and Geng, Junyi and Bai, Dongwei and Scherer, Sebastian, "UAS Simulator for Modeling, Analysis and Control in Free Flight and Physical Interaction," AIAA SCITECH 2023 Forum, 2023.
- A Simulator for Fully-Actuated UAVs. By Keipour, A., Mousaei, M. and Scherer, S. In The 2023 International Conference on Robotics and Automation (ICRA), Workshop on The Role of Robotics Simulators for Unmanned Aerial Vehicles, 2023.
@inproceedings{keipour2023simulator, title = {A Simulator for Fully-Actuated {UAVs}}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian}, year = {2023}, booktitle = {The 2023 International Conference on Robotics and Automation (ICRA), Workshop on The Role of Robotics Simulators for Unmanned Aerial Vehicles}, url = {https://arxiv.org/pdf/2305.07228} }
Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian, "A Simulator for Fully-Actuated UAVs," The 2023 International Conference on Robotics and Automation (ICRA), Workshop on The Role of Robotics Simulators for Unmanned Aerial Vehicles, 2023.
- Multi-Robot Multi-Room Exploration with Geometric Cue Extraction and Circular Decomposition. By Kim, S., Corah, M., Keller, J., Best, G. and Scherer, S. In IEEE Robotics and Automation Letters, 2023.
@article{kim2023multi, title = {Multi-Robot Multi-Room Exploration with Geometric Cue Extraction and Circular Decomposition}, author = {Kim, Seungchan and Corah, Micah and Keller, John and Best, Graeme and Scherer, Sebastian}, year = {2023}, journal = {IEEE Robotics and Automation Letters}, publisher = {IEEE}, doi = {10.1109/LRA.2023.3342553}, url = {https://arxiv.org/pdf/2307.15202.pdf}, video = {https://youtu.be/zUtK1hh2Tpo?si=zKJPFkfn4DL-8fKe} }
Kim, Seungchan and Corah, Micah and Keller, John and Best, Graeme and Scherer, Sebastian, "Multi-Robot Multi-Room Exploration with Geometric Cue Extraction and Circular Decomposition," IEEE Robotics and Automation Letters, 2023.
- 360FusionNeRF: Panoramic neural radiance fields with joint guidance. By Kulkarni, S., Yin, P. and Scherer, S. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7202–7209, 2023.
@inproceedings{kulkarni2023360fusionnerf, title = {{360FusionNeRF}: {Panoramic} neural radiance fields with joint guidance}, author = {Kulkarni, Shreyas and Yin, Peng and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {7202--7209}, doi = {10.1109/IROS55552.2023.10341346}, url = {https://arxiv.org/pdf/2209.14265}, organization = {IEEE} }
Kulkarni, Shreyas and Yin, Peng and Scherer, Sebastian, "360FusionNeRF: Panoramic neural radiance fields with joint guidance," 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
- PVT++: A Simple End-to-End Latency-Aware Visual Tracking Framework. By Li, B., Huang, Z., Ye, J., Li, Y., Scherer, S., Zhao, H. and Fu, C. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10006–10016, 2023.
@inproceedings{li2023pvt, title = {{PVT++: A} Simple End-to-End Latency-Aware Visual Tracking Framework}, author = {Li, Bowen and Huang, Ziyuan and Ye, Junjie and Li, Yiming and Scherer, Sebastian and Zhao, Hang and Fu, Changhong}, year = {2023}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages = {10006--10016}, url = {https://arxiv.org/pdf/2211.11629} }
Li, Bowen and Huang, Ziyuan and Ye, Junjie and Li, Yiming and Scherer, Sebastian and Zhao, Hang and Fu, Changhong, "PVT++: A Simple End-to-End Latency-Aware Visual Tracking Framework," Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.
- AirLoc: Object-based Indoor Relocalization. By Li, B., Scherer, S., Lin, Y.-J., Wang, C. and others. In arXiv preprint arXiv:2304.00954, 2023.
@article{li2023airloc, title = {{AirLoc: Object-based} Indoor Relocalization}, author = {Li, Bowen and Scherer, Sebastian and Lin, Yun-Jou and Wang, Chen and others}, year = {2023}, journal = {arXiv preprint arXiv:2304.00954}, url = {https://arxiv.org/pdf/2304.00954} }
Li, Bowen and Scherer, Sebastian and Lin, Yun-Jou and Wang, Chen and others, "AirLoc: Object-based Indoor Relocalization," arXiv preprint arXiv:2304.00954, 2023.
- VoxDet: Voxel Learning for Novel Instance Detection. By Li, B., Wang, J., Hu, Y., Wang, C. and Scherer, S. In Advances in Neural Information Processing Systems (NeurIPS), 2023.
@inproceedings{li2023voxdet, title = {{VoxDet: Voxel} Learning for Novel Instance Detection}, author = {Li, Bowen and Wang, Jiashun and Hu, Yaoyu and Wang, Chen and Scherer, Sebastian}, year = {2023}, booktitle = {Advances in Neural Information Processing Systems (NeurIPS)}, url = {https://arxiv.org/pdf/2305.17220}, code = {https://github.com/Jaraxxus-Me/VoxDet}, video = {https://youtu.be/tiXpOV1ROOI}, addendum = {Selected as Spotlight} }
Li, Bowen and Wang, Jiashun and Hu, Yaoyu and Wang, Chen and Scherer, Sebastian, "VoxDet: Voxel Learning for Novel Instance Detection," Advances in Neural Information Processing Systems (NeurIPS), 2023.
- PIAug–Physics Informed Augmentation for Learning Vehicle Dynamics for Off-Road Navigation. By Maheshwari, P., Wang, W., Triest, S., Sivaprakasam, M., Aich, S., Rogers III, J.G., Gregory, J.M. and Scherer, S. In arXiv preprint arXiv:2311.00815, 2023.
@article{maheshwari2023piaug, title = {{PIAug--Physics} Informed Augmentation for Learning Vehicle Dynamics for Off-Road Navigation}, author = {Maheshwari, Parv and Wang, Wenshan and Triest, Samuel and Sivaprakasam, Matthew and Aich, Shubhra and Rogers III, John G and Gregory, Jason M and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2311.00815}, url = {https://arxiv.org/pdf/2311.00815} }
Maheshwari, Parv and Wang, Wenshan and Triest, Samuel and Sivaprakasam, Matthew and Aich, Shubhra and Rogers III, John G and Gregory, Jason M and Scherer, Sebastian, "PIAug–Physics Informed Augmentation for Learning Vehicle Dynamics for Off-Road Navigation," arXiv preprint arXiv:2311.00815, 2023.
- Time-Optimal Path Planning in a Constant Wind for Uncrewed Aerial Vehicles using Dubins Set Classification. By Moon, B., Sachdev, S., Yuan, J. and Scherer, S. In IEEE Robotics and Automation Letters, 2023.
@article{moon2023time, title = {Time-Optimal Path Planning in a Constant Wind for Uncrewed Aerial Vehicles using Dubins Set Classification}, author = {Moon, Brady and Sachdev, Sagar and Yuan, Junbin and Scherer, Sebastian}, year = {2023}, journal = {IEEE Robotics and Automation Letters}, publisher = {IEEE}, doi = {10.1109/LRA.2023.3333167}, url = {https://arxiv.org/pdf/2306.11845.pdf}, video = {https://youtu.be/qOU5gI7JshI} }
Moon, Brady and Sachdev, Sagar and Yuan, Junbin and Scherer, Sebastian, "Time-Optimal Path Planning in a Constant Wind for Uncrewed Aerial Vehicles using Dubins Set Classification," IEEE Robotics and Automation Letters, 2023.
- SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation. By Navarro, I., Patrikar, J., Dantas, J., Baijal, R., Higgins, I., Scherer, S. and Oh, J. In arXiv preprint arXiv:2309.13144, 2023.
@article{navarro2023sorts, title = {{SoRTS: Learned} Tree Search for Long Horizon Social Robot Navigation}, author = {Navarro, Ingrid and Patrikar, Jay and Dantas, Joao and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean}, year = {2023}, journal = {arXiv preprint arXiv:2309.13144}, url = {https://arxiv.org/pdf/2309.13144} }
Navarro, Ingrid and Patrikar, Jay and Dantas, Joao and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean, "SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation," arXiv preprint arXiv:2309.13144, 2023.
- Learned Tree Search for Long-Horizon Social Robot Navigation in Shared Airspace. By Navarro, I., Patrikar, J., Dantas, J., Baijal, R., Higgins, I., Scherer, S. and Oh, J. In arXiv preprint arXiv:2304.01428, 2023.
@article{navarro2023learned, title = {Learned Tree Search for Long-Horizon Social Robot Navigation in Shared Airspace}, author = {Navarro, Ingrid and Patrikar, Jay and Dantas, Joao and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean}, year = {2023}, journal = {arXiv preprint arXiv:2304.01428}, url = {https://arxiv.org/pdf/2304.01428} }
Navarro, Ingrid and Patrikar, Jay and Dantas, Joao and Baijal, Rohan and Higgins, Ian and Scherer, Sebastian and Oh, Jean, "Learned Tree Search for Long-Horizon Social Robot Navigation in Shared Airspace," arXiv preprint arXiv:2304.01428, 2023.
- AirIMU: Learning Uncertainty Propagation for Inertial Odometry. By Qiu, Y., Wang, C., Zhou, X., Xia, Y. and Scherer, S. In arXiv preprint arXiv:2310.04874, 2023.
@article{qiu2023airimu, title = {{AirIMU: Learning} Uncertainty Propagation for Inertial Odometry}, author = {Qiu, Yuheng and Wang, Chen and Zhou, Xunfei and Xia, Youjie and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2310.04874}, url = {https://arxiv.org/pdf/2310.04874} }
Qiu, Yuheng and Wang, Chen and Zhou, Xunfei and Xia, Youjie and Scherer, Sebastian, "AirIMU: Learning Uncertainty Propagation for Inertial Odometry," arXiv preprint arXiv:2310.04874, 2023.
- Enhancing Multi-Drone Coordination for Filming Group Behaviours in Dynamic Environments. By Rauniyar, A., Li, J. and Scherer, S. In arXiv preprint arXiv:2310.13184, 2023.
@article{rauniyar2023enhancing, title = {Enhancing Multi-Drone Coordination for Filming Group Behaviours in Dynamic Environments}, author = {Rauniyar, Aditya and Li, Jiaoyang and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2310.13184}, url = {https://arxiv.org/pdf/2310.13184} }
Rauniyar, Aditya and Li, Jiaoyang and Scherer, Sebastian, "Enhancing Multi-Drone Coordination for Filming Group Behaviours in Dynamic Environments," arXiv preprint arXiv:2310.13184, 2023.
- Quantifying the Effect of Weather on Advanced Air Mobility Operations. By Sharma, A., Patrikar, J., Moon, B., Scherer, S. and Samaras, C. In Findings, 2023.
@article{sharma2023quantifying, title = {Quantifying the Effect of Weather on Advanced Air Mobility Operations}, author = {Sharma, Ashima and Patrikar, Jay and Moon, Brady and Scherer, Sebastian and Samaras, Constantine}, year = {2023}, journal = {Findings}, publisher = {Findings Press}, doi = {10.32866/001c.66207}, url = {https://findingspress.org/article/66207-quantifying-the-effect-of-weather-on-advanced-air-mobility-operations} }
Sharma, Ashima and Patrikar, Jay and Moon, Brady and Scherer, Sebastian and Samaras, Constantine, "Quantifying the Effect of Weather on Advanced Air Mobility Operations," Findings, 2023.
- DytanVO: Joint refinement of visual odometry and motion segmentation in dynamic environments. By Shen, S., Cai, Y., Wang, W. and Scherer, S. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 4048–4055, 2023.
@inproceedings{shen2023dytanvo, title = {{DytanVO: Joint} refinement of visual odometry and motion segmentation in dynamic environments}, author = {Shen, Shihao and Cai, Yilin and Wang, Wenshan and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {4048--4055}, doi = {10.1109/ICRA48891.2023.10161306}, url = {https://arxiv.org/pdf/2209.08430}, organization = {IEEE} }
Shen, Shihao and Cai, Yilin and Wang, Wenshan and Scherer, Sebastian, "DytanVO: Joint refinement of visual odometry and motion segmentation in dynamic environments," 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
- TartanDrive 1.5: Improving Large Multimodal Robotics Dataset Collection and Distribution. By Sivaprakasam, M., Triest, S., Castro, M.G., Nye, M., Maulimov, M., Ho, C., Maheshwari, P., Wang, W. and Scherer, S. In ICRA2023 Workshop on Pretraining for Robotics (PT4R), 2023.
@inproceedings{sivaprakasam2023tartandrive, title = {{TartanDrive 1.5: Improving} Large Multimodal Robotics Dataset Collection and Distribution}, author = {Sivaprakasam, Matthew and Triest, Samuel and Castro, Mateo Guaman and Nye, Micah and Maulimov, Mukhtar and Ho, Cherie and Maheshwari, Parv and Wang, Wenshan and Scherer, Sebastian}, year = {2023}, booktitle = {ICRA2023 Workshop on Pretraining for Robotics (PT4R)}, url = {https://openreview.net/forum?id=7Y1pnhFJUT} }
Sivaprakasam, Matthew and Triest, Samuel and Castro, Mateo Guaman and Nye, Micah and Maulimov, Mukhtar and Ho, Cherie and Maheshwari, Parv and Wang, Wenshan and Scherer, Sebastian, "TartanDrive 1.5: Improving Large Multimodal Robotics Dataset Collection and Distribution," ICRA2023 Workshop on Pretraining for Robotics (PT4R), 2023.
- AirTrack: Onboard Deep Learning Framework for Long-Range Aircraft Detection and Tracking. By Ghosh, S., Patrikar, J., Moon, B., Hamidi, M.M. and Scherer, S. In International Conference on Robotics and Automation (ICRA), 2023.
@inproceedings{ghosh2023airtrack, title = {{AirTrack: Onboard} Deep Learning Framework for Long-Range Aircraft Detection and Tracking}, author = {Ghosh, Sourish and Patrikar, Jay and Moon, Brady and Hamidi, Milad Moghassem and Scherer, Sebastian}, year = {2023}, booktitle = {International Conference on Robotics and Automation (ICRA)}, doi = {10.1109/ICRA48891.2023.10160627}, url = {https://arxiv.org/pdf/2209.12849.pdf}, video = {https://youtu.be/H3lL_Wjxjpw} }
Ghosh, Sourish and Patrikar, Jay and Moon, Brady and Hamidi, Milad Moghassem and Scherer, Sebastian, "AirTrack: Onboard Deep Learning Framework for Long-Range Aircraft Detection and Tracking," International Conference on Robotics and Automation (ICRA), 2023.
- Greedy Perspectives: Multi-Drone View Planning for Collaborative Coverage in Cluttered Environments. By Suresh, K., Rauniyar, A., Corah, M. and Scherer, S. In arXiv preprint arXiv:2310.10863, 2023.
@article{suresh2023greedy, title = {Greedy Perspectives: Multi-Drone View Planning for Collaborative Coverage in Cluttered Environments}, author = {Suresh, Krishna and Rauniyar, Aditya and Corah, Micah and Scherer, Sebastian}, year = {2023}, journal = {arXiv preprint arXiv:2310.10863}, url = {https://arxiv.org/pdf/2310.10863} }
Suresh, Krishna and Rauniyar, Aditya and Corah, Micah and Scherer, Sebastian, "Greedy Perspectives: Multi-Drone View Planning for Collaborative Coverage in Cluttered Environments," arXiv preprint arXiv:2310.10863, 2023.
- Multimodal beacon based precision landing system for autonomous aircraft. By Taylor, J., Singh, S., Chamberlain, L., Scherer, S., Grocholsky, B. and Brajovic, V. US Patent 11,762,398, Sep. 2023.
@patent{taylor2023multimodal, title = {Multimodal beacon based precision landing system for autonomous aircraft}, author = {Taylor, Jonathan and Singh, Sanjiv and Chamberlain, Lyle and Scherer, Sebastian and Grocholsky, Benjamin and Brajovic, Vladimir}, year = {2023}, month = sep, url = {https://patents.google.com/patent/US11762398B1/en}, note = {US Patent 11,762,398} }
Taylor, Jonathan and Singh, Sanjiv and Chamberlain, Lyle and Scherer, Sebastian and Grocholsky, Benjamin and Brajovic, Vladimir, "Multimodal beacon based precision landing system for autonomous aircraft," US Patent 11,762,398, 2023.
- Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation. By Triest, S., Castro, M.G., Maheshwari, P., Sivaprakasam, M., Wang, W. and Scherer, S. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 924–930, 2023.
@inproceedings{triest2023learning, title = {Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation}, author = {Triest, Samuel and Castro, Mateo Guaman and Maheshwari, Parv and Sivaprakasam, Matthew and Wang, Wenshan and Scherer, Sebastian}, year = {2023}, booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {924--930}, doi = {10.1109/ICRA48891.2023.10161268}, url = {https://arxiv.org/pdf/2302.00134} }
Triest, Samuel and Castro, Mateo Guaman and Maheshwari, Parv and Sivaprakasam, Matthew and Wang, Wenshan and Scherer, Sebastian, "Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation," 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
- PyPose: A library for robot learning with physics-based optimization. By Wang, C., Gao, D., Xu, K., Geng, J., Hu, Y., Qiu, Y., Li, B., Yang, F., Moon, B., Pandey, A. and others. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22024–22034, 2023.
@inproceedings{wang2023pypose, title = {{PyPose: A} library for robot learning with physics-based optimization}, author = {Wang, Chen and Gao, Dasong and Xu, Kuan and Geng, Junyi and Hu, Yaoyu and Qiu, Yuheng and Li, Bowen and Yang, Fan and Moon, Brady and Pandey, Abhinav and others}, year = {2023}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages = {22024--22034}, doi = {10.1109/CVPR52729.2023.02109}, url = {https://arxiv.org/pdf/2209.15428} }
Wang, Chen and Gao, Dasong and Xu, Kuan and Geng, Junyi and Hu, Yaoyu and Qiu, Yuheng and Li, Bowen and Yang, Fan and Moon, Brady and Pandey, Abhinav and others, "PyPose: A library for robot learning with physics-based optimization," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
- MUI-TARE: Cooperative Multi-Agent Exploration With Unknown Initial Position. By Yan, J., Lin, X., Ren, Z., Zhao, S., Yu, J., Cao, C., Yin, P., Zhang, J. and Scherer, S. In IEEE Robotics and Automation Letters, vol. 8, no. 7, pp. 4299–4306, 2023.
@article{yan2023mui, title = {{MUI-TARE: Cooperative} Multi-Agent Exploration With Unknown Initial Position}, author = {Yan, Jingtian and Lin, Xingqiao and Ren, Zhongqiang and Zhao, Shiqi and Yu, Jieqiong and Cao, Chao and Yin, Peng and Zhang, Ji and Scherer, Sebastian}, year = {2023}, journal = {IEEE Robotics and Automation Letters}, volume = {8}, number = {7}, pages = {4299--4306}, doi = {10.1109/LRA.2023.3281262}, url = {https://arxiv.org/pdf/2209.10775} }
Yan, Jingtian and Lin, Xingqiao and Ren, Zhongqiang and Zhao, Shiqi and Yu, Jieqiong and Cao, Chao and Yin, Peng and Zhang, Ji and Scherer, Sebastian, "MUI-TARE: Cooperative Multi-Agent Exploration With Unknown Initial Position," IEEE Robotics and Automation Letters, 2023.
- BioSLAM: A Bioinspired Lifelong Memory System for General Place Recognition. By Yin, P., Abuduweili, A., Zhao, S., Xu, L., Liu, C. and Scherer, S. In IEEE Transactions on Robotics, vol. 39, no. 6, pp. 4855–4874, 2023.
@article{yin2023bioslam, title = {{BioSLAM: A} Bioinspired Lifelong Memory System for General Place Recognition}, author = {Yin, Peng and Abuduweili, Abulikemu and Zhao, Shiqi and Xu, Lingyun and Liu, Changliu and Scherer, Sebastian}, year = {2023}, journal = {IEEE Transactions on Robotics}, publisher = {IEEE}, volume = {39}, number = {6}, pages = {4855--4874}, doi = {10.1109/TRO.2023.3306615}, url = {https://arxiv.org/pdf/2208.14543} }
Yin, Peng and Abuduweili, Abulikemu and Zhao, Shiqi and Xu, Lingyun and Liu, Changliu and Scherer, Sebastian, "BioSLAM: A Bioinspired Lifelong Memory System for General Place Recognition," IEEE Transactions on Robotics, 2023.
- iSimLoc: Visual Global Localization for Previously Unseen Environments With Simulated Images. By Yin, P., Cisneros, I., Zhao, S., Zhang, J., Choset, H. and Scherer, S. In IEEE Transactions on Robotics, vol. 39, no. 3, pp. 1893–1909, 2023.
@article{yin2023isimloc, title = {{iSimLoc: Visual} Global Localization for Previously Unseen Environments With Simulated Images}, author = {Yin, Peng and Cisneros, Ivan and Zhao, Shiqi and Zhang, Ji and Choset, Howie and Scherer, Sebastian}, year = {2023}, journal = {IEEE Transactions on Robotics}, publisher = {IEEE}, volume = {39}, number = {3}, pages = {1893--1909}, doi = {10.1109/TRO.2023.3238201}, url = {https://arxiv.org/pdf/2209.06376} }
Yin, Peng and Cisneros, Ivan and Zhao, Shiqi and Zhang, Ji and Choset, Howie and Scherer, Sebastian, "iSimLoc: Visual Global Localization for Previously Unseen Environments With Simulated Images," IEEE Transactions on Robotics, 2023.
- AutoMerge: A framework for map assembling and smoothing in city-scale environments. By Yin, P., Zhao, S., Lai, H., Ge, R., Zhang, J., Choset, H. and Scherer, S. In IEEE Transactions on Robotics, vol. 39, no. 5, pp. 3686–3704, 2023.
@article{yin2023automerge, title = {{AutoMerge: A} framework for map assembling and smoothing in city-scale environments}, author = {Yin, Peng and Zhao, Shiqi and Lai, Haowen and Ge, Ruohai and Zhang, Ji and Choset, Howie and Scherer, Sebastian}, year = {2023}, journal = {IEEE Transactions on Robotics}, publisher = {IEEE}, volume = {39}, number = {5}, pages = {3686--3704}, doi = {10.1109/TRO.2023.3290448}, url = {https://arxiv.org/pdf/2207.06965} }
Yin, Peng and Zhao, Shiqi and Lai, Haowen and Ge, Ruohai and Zhang, Ji and Choset, Howie and Scherer, Sebastian, "AutoMerge: A framework for map assembling and smoothing in city-scale environments," IEEE Transactions on Robotics, 2023.
- 2D-3D Pose Tracking with Multi-View Constraints. By Yu, H., Chen, K., Yang, W., Scherer, S. and Xia, G.-S. In arXiv preprint arXiv:2309.11335, 2023.
@article{yu20232d, title = {{2D-3D Pose} Tracking with Multi-View Constraints}, author = {Yu, Huai and Chen, Kuangyi and Yang, Wen and Scherer, Sebastian and Xia, Gui-Song}, year = {2023}, journal = {arXiv preprint arXiv:2309.11335}, url = {https://arxiv.org/pdf/2309.11335} }
Yu, Huai and Chen, Kuangyi and Yang, Wen and Scherer, Sebastian and Xia, Gui-Song, "2D-3D Pose Tracking with Multi-View Constraints," arXiv preprint arXiv:2309.11335, 2023.
- PyPose v0.6: The Imperative Programming Interface for Robotics.By Zhan, Z., Li, X., Li, Q., He, H., Pandey, A., Xiao, H., Xu, Y., Chen, X., Xu, K., Cao, K. and others.In arXiv preprint arXiv:2309.13035, 2023.
@article{zhan2023pypose, title = {{PyPose v0.6: The} Imperative Programming Interface for Robotics}, author = {Zhan, Zitong and Li, Xiangfu and Li, Qihang and He, Haonan and Pandey, Abhinav and Xiao, Haitao and Xu, Yangmengfei and Chen, Xiangyu and Xu, Kuan and Cao, Kun and others}, year = {2023}, journal = {arXiv preprint arXiv:2309.13035}, url = {https://arxiv.org/pdf/2309.13035} }
Zhan, Zitong and Li, Xiangfu and Li, Qihang and He, Haonan and Pandey, Abhinav and Xiao, Haitao and Xu, Yangmengfei and Chen, Xiangyu and Xu, Kuan and Cao, Kun and others, "PyPose v0.6: The Imperative Programming Interface for Robotics," arXiv preprint arXiv:2309.13035, 2023.
- SubT-MRS: A Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded Dataset for Robust SLAM.By Zhao, S., Singh, D., Sun, H., Jiang, R., Gao, Y.J., Wu, T., Karhade, J., Whittaker, C., Higgins, I., Xu, J. and others.In arXiv preprint arXiv:2307.07607, 2023.
@article{zhao2023subt, title = {{SubT-MRS: A} Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded Dataset for Robust {SLAM}}, author = {Zhao, Shibo and Singh, Damanpreet and Sun, Haoxiang and Jiang, Rushan and Gao, YuanJun and Wu, Tianhao and Karhade, Jay and Whittaker, Chuck and Higgins, Ian and Xu, Jiahe and others}, year = {2023}, journal = {arXiv preprint arXiv:2307.07607}, url = {https://arxiv.org/pdf/2307.07607} }
Zhao, Shibo and Singh, Damanpreet and Sun, Haoxiang and Jiang, Rushan and Gao, YuanJun and Wu, Tianhao and Karhade, Jay and Whittaker, Chuck and Higgins, Ian and Xu, Jiahe and others, "SubT-MRS: A Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded Dataset for Robust SLAM," arXiv preprint arXiv:2307.07607, 2023.
- Attention-Enhanced Cross-modal Localization Between Spherical Images and Point Clouds.By Zhao, Z., Yu, H., Lyu, C., Yang, W. and Scherer, S.In IEEE Sensors Journal, vol. 23, no. 19, pp. 23836–23845, 2023.
@article{zhao2023attention, title = {Attention-Enhanced Cross-modal Localization Between Spherical Images and Point Clouds}, author = {Zhao, Zhipeng and Yu, Huai and Lyu, Chenwei and Yang, Wen and Scherer, Sebastian}, year = {2023}, journal = {IEEE Sensors Journal}, volume = {23}, number = {19}, pages = {23836--23845}, doi = {10.1109/JSEN.2023.3306377}, url = {https://arxiv.org/pdf/2212.02757} }
Zhao, Zhipeng and Yu, Huai and Lyu, Chenwei and Yang, Wen and Scherer, Sebastian, "Attention-Enhanced Cross-modal Localization Between Spherical Images and Point Clouds," IEEE Sensors Journal, 2023.
2022
- Resilient Multi-Sensor Exploration of Multifarious Environments with a Team of Aerial Robots.By Best, G., Garg, R., Keller, J., Hollinger, G.A. and Scherer, S.In Proceedings of Robotics: Science and Systems, 2022.
@inproceedings{best2022resilient, title = {Resilient Multi-Sensor Exploration of Multifarious Environments with a Team of Aerial Robots}, author = {Best, Graeme and Garg, Rohit and Keller, John and Hollinger, Geoffrey A and Scherer, Sebastian}, year = {2022}, booktitle = {Proceedings of Robotics: Science and Systems}, url = {https://www.roboticsproceedings.org/rss18/p004.pdf} }
Best, Graeme and Garg, Rohit and Keller, John and Hollinger, Geoffrey A and Scherer, Sebastian, "Resilient Multi-Sensor Exploration of Multifarious Environments with a Team of Aerial Robots," Proceedings of Robotics: Science and Systems, 2022.
- Mission-level Robustness with Rapidly-deployed, Autonomous Aerial Vehicles by Carnegie Mellon Team Tartan at MBZIRC 2020.By Bhattacharya, A., Gandhi, A., Merkle, L., Tiwari, R., Warrior, K., Winata, S., Saba, A., Zhang, K., Kroemer, O. and Scherer, S.In Field Robotics Journal, pp. 172–200, May 2022.
@article{bhattacharya2022mission, title = {Mission-level Robustness with Rapidly-deployed, Autonomous Aerial Vehicles by Carnegie Mellon Team Tartan at {MBZIRC 2020}}, author = {Bhattacharya, Anish and Gandhi, Akshit and Merkle, Lukas and Tiwari, Rohan and Warrior, Karun and Winata, Stanley and Saba, Andrew and Zhang, Kevin and Kroemer, Oliver and Scherer, Sebastian}, year = {2022}, month = may, journal = {Field Robotics Journal}, pages = {172--200}, doi = {10.55417/fr.2022007}, url = {https://fieldrobotics.net/Field_Robotics/Volume_2_files/Vol2_07.pdf} }
Bhattacharya, Anish and Gandhi, Akshit and Merkle, Lukas and Tiwari, Rohan and Warrior, Karun and Winata, Stanley and Saba, Andrew and Zhang, Kevin and Kroemer, Oliver and Scherer, Sebastian, "Mission-level Robustness with Rapidly-deployed, Autonomous Aerial Vehicles by Carnegie Mellon Team Tartan at MBZIRC 2020," Field Robotics Journal, 2022.
- I2D-Loc: Camera localization via image to LiDAR depth flow.By Chen, K., Yu, H., Yang, W., Yu, L., Scherer, S. and Xia, G.-S.In ISPRS Journal of Photogrammetry and Remote Sensing, vol. 194, pp. 209–221, 2022.
@article{chen2022i2d, title = {{I2D-Loc: Camera} localization via image to {LiDAR} depth flow}, author = {Chen, Kuangyi and Yu, Huai and Yang, Wen and Yu, Lei and Scherer, Sebastian and Xia, Gui-Song}, year = {2022}, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, publisher = {Elsevier}, volume = {194}, pages = {209--221}, doi = {10.1016/j.isprsjprs.2022.10.009}, issn = {0924-2716}, url = {https://www.sciencedirect.com/science/article/pii/S0924271622002775}, keywords = {Camera localization, 2D–3D registration, Flow estimation, Depth completion, Neural network} }
Accurate camera localization in existing LiDAR maps is promising since it potentially allows exploiting the strengths of both LiDAR-based and camera-based methods. However, effective methods that robustly address appearance and modality differences for 2D–3D localization are still missing. To overcome these problems, we propose I2D-Loc, a scene-agnostic and end-to-end trainable neural network that estimates the 6-DoF pose from an RGB image to an existing LiDAR map with local optimization on an initial pose. Specifically, we first project the LiDAR map to the image plane according to a rough initial pose and utilize a depth completion algorithm to generate a dense depth image. We further design a confidence map to weight the features extracted from the dense depth to get a more reliable depth representation. Then, we propose to utilize a neural network to estimate the correspondence flow between depth and RGB images. Finally, we utilize the BPnP algorithm to estimate the 6-DoF pose, calculating the gradients of the pose error and optimizing the front-end network parameters. Moreover, by decoupling the intrinsic camera parameters from the end-to-end training process, I2D-Loc can be generalized to images with different intrinsic parameters. Experiments on the KITTI, Argoverse, and Lyft5 datasets demonstrate that I2D-Loc can achieve centimeter-level localization performance. The source code, dataset, trained models, and demo videos are released at https://levenberg.github.io/I2D-Loc/.
Chen, Kuangyi and Yu, Huai and Yang, Wen and Yu, Lei and Scherer, Sebastian and Xia, Gui-Song, "I2D-Loc: Camera localization via image to LiDAR depth flow," ISPRS Journal of Photogrammetry and Remote Sensing, 2022.
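The first step of the I2D-Loc pipeline described above is to project the LiDAR map into the image plane under a rough initial pose. As a minimal illustrative sketch (not the paper's released code; the function name, pose convention, and z-buffering scheme are our assumptions), it might look like this in Python:

```python
# Hypothetical sketch: render LiDAR map points into a sparse depth image
# under an initial camera pose, before depth completion and flow estimation.
import numpy as np

def project_lidar_to_depth(points_world, T_cam_world, K, width, height):
    """points_world: (N, 3) map points; T_cam_world: 4x4 pose; K: 3x3 intrinsics."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]        # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # pinhole projection
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    # Z-buffer: keep the nearest LiDAR return per pixel.
    np.minimum.at(depth, (v[valid], u[valid]), pts_cam[valid, 2])
    depth[np.isinf(depth)] = 0.0                  # 0 marks pixels with no return
    return depth
```

The resulting sparse depth image is what a depth completion step would densify before the flow network compares it with the RGB image.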
- ALTO: A Large-Scale Dataset for UAV Visual Place Recognition and Localization.By Cisneros, I., Yin, P., Zhang, J., Choset, H. and Scherer, S.In arXiv preprint arXiv:2207.12317, 2022.
@article{cisneros2022alto, title = {{ALTO: A} Large-Scale Dataset for UAV Visual Place Recognition and Localization}, author = {Cisneros, Ivan and Yin, Peng and Zhang, Ji and Choset, Howie and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2207.12317}, url = {https://arxiv.org/pdf/2207.12317} }
Cisneros, Ivan and Yin, Peng and Zhang, Ji and Choset, Howie and Scherer, Sebastian, "ALTO: A Large-Scale Dataset for UAV Visual Place Recognition and Localization," arXiv preprint arXiv:2207.12317, 2022.
- On Performance Impacts of Coordination via Submodular Maximization for Multi-Robot Perception Planning and the Dynamics of Target Coverage and Cinematography.By Corah, M. and Scherer, S.In RSS 2022 Workshop on Envisioning an Infrastructure for Multi-Robot and Collaborative Autonomy Testing and Evaluation, 2022.
@inproceedings{corah2022performance, title = {On Performance Impacts of Coordination via Submodular Maximization for Multi-Robot Perception Planning and the Dynamics of Target Coverage and Cinematography}, author = {Corah, Micah and Scherer, Sebastian}, year = {2022}, booktitle = {RSS 2022 Workshop on Envisioning an Infrastructure for Multi-Robot and Collaborative Autonomy Testing and Evaluation}, url = {https://par.nsf.gov/servlets/purl/10358330} }
Corah, Micah and Scherer, Sebastian, "On Performance Impacts of Coordination via Submodular Maximization for Multi-Robot Perception Planning and the Dynamics of Target Coverage and Cinematography," RSS 2022 Workshop on Envisioning an Infrastructure for Multi-Robot and Collaborative Autonomy Testing and Evaluation, 2022.
- TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags.By Duisterhof, B.P., Hu, Y., Teng, S.H., Kaess, M. and Scherer, S.In arXiv preprint arXiv:2210.02511, 2022.
@article{duisterhof2022tartancalib, title = {{TartanCalib: Iterative} Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags}, author = {Duisterhof, Bardienus P and Hu, Yaoyu and Teng, Si Heng and Kaess, Michael and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2210.02511}, url = {https://arxiv.org/pdf/2210.02511} }
Duisterhof, Bardienus P and Hu, Yaoyu and Teng, Si Heng and Kaess, Michael and Scherer, Sebastian, "TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags," arXiv preprint arXiv:2210.02511, 2022.
- Targetless Extrinsic Calibration of Stereo, Thermal, and Laser Sensors in Structured Environments.By Fu, T., Yu, H., Yang, W., Hu, Y. and Scherer, S.In IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–11, 2022.
@article{fu2022targetless, title = {Targetless Extrinsic Calibration of Stereo, Thermal, and Laser Sensors in Structured Environments}, author = {Fu, Taimeng and Yu, Huai and Yang, Wen and Hu, Yaoyu and Scherer, Sebastian}, year = {2022}, journal = {IEEE Transactions on Instrumentation and Measurement}, publisher = {IEEE}, volume = {71}, pages = {1--11}, doi = {10.1109/TIM.2022.3204338}, url = {https://arxiv.org/pdf/2109.13414} }
Fu, Taimeng and Yu, Huai and Yang, Wen and Hu, Yaoyu and Scherer, Sebastian, "Targetless Extrinsic Calibration of Stereo, Thermal, and Laser Sensors in Structured Environments," IEEE Transactions on Instrumentation and Measurement, 2022.
- AirLoop: Lifelong Loop Closure Detection.By Gao, D., Wang, C. and Scherer, S.In International Conference on Robotics and Automation (ICRA), pp. 10664–10671, 2022.
@inproceedings{gao2022airloop, title = {{AirLoop: Lifelong} Loop Closure Detection}, author = {Gao, Dasong and Wang, Chen and Scherer, Sebastian}, year = {2022}, booktitle = {International Conference on Robotics and Automation (ICRA)}, pages = {10664--10671}, doi = {10.1109/ICRA46639.2022.9811658}, url = {https://arxiv.org/pdf/2109.08975}, code = {https://github.com/wang-chen/AirLoop}, video = {https://youtu.be/Gr9i5ONNmz0} }
Gao, Dasong and Wang, Chen and Scherer, Sebastian, "AirLoop: Lifelong Loop Closure Detection," International Conference on Robotics and Automation (ICRA), 2022.
- Towards Robust Visual-Inertial Odometry with Multiple Non-Overlapping Monocular Cameras.By He, Y., Yu, H., Yang, W. and Scherer, S.In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9452–9458, 2022.
@inproceedings{he2022towards, title = {Towards Robust Visual-Inertial Odometry with Multiple Non-Overlapping Monocular Cameras}, author = {He, Yao and Yu, Huai and Yang, Wen and Scherer, Sebastian}, year = {2022}, booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {9452--9458}, doi = {10.1109/IROS47612.2022.9981664}, url = {https://arxiv.org/pdf/2109.12030}, organization = {IEEE} }
He, Yao and Yu, Huai and Yang, Wen and Scherer, Sebastian, "Towards Robust Visual-Inertial Odometry with Multiple Non-Overlapping Monocular Cameras," 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
- Predicting Like a Pilot: Dataset and Method to Predict Socially-Aware Aircraft Trajectories in Non-Towered Terminal Airspace.By Patrikar, J., Moon, B., Oh, J. and Scherer, S.In International Conference on Robotics and Automation (ICRA), 2022.
@inproceedings{patrikar2022predicting, title = {Predicting Like a Pilot: {Dataset} and Method to Predict Socially-Aware Aircraft Trajectories in Non-Towered Terminal Airspace}, author = {Patrikar, Jay and Moon, Brady and Oh, Jean and Scherer, Sebastian}, year = {2022}, booktitle = {International Conference on Robotics and Automation (ICRA)}, doi = {10.1109/ICRA46639.2022.9811972}, url = {https://arxiv.org/pdf/2109.15158.pdf}, code = {https://github.com/castacks/trajairnet}, video = {https://youtu.be/elAQXrxB2gw} }
Patrikar, Jay and Moon, Brady and Oh, Jean and Scherer, Sebastian, "Predicting Like a Pilot: Dataset and Method to Predict Socially-Aware Aircraft Trajectories in Non-Towered Terminal Airspace," International Conference on Robotics and Automation (ICRA), 2022.
- Challenges in Close-Proximity Safe and Seamless Operation of Manned and Unmanned Aircraft in Shared Airspace.By Patrikar, J., Dantas, J.P.A., Ghosh, S., Kapoor, P., Higgins, I., Aloor, J.J., Navarro, I., Sun, J., Stoler, B., Hamidi, M., Baijal, R., Moon, B., Oh, J. and Scherer, S.In arXiv preprint arXiv:2211.06932, 2022.
@article{patrikar2022challenges, title = {Challenges in Close-Proximity Safe and Seamless Operation of Manned and Unmanned Aircraft in Shared Airspace}, author = {Patrikar, Jay and Dantas, Joao P. A. and Ghosh, Sourish and Kapoor, Parv and Higgins, Ian and Aloor, Jasmine J. and Navarro, Ingrid and Sun, Jimin and Stoler, Ben and Hamidi, Milad and Baijal, Rohan and Moon, Brady and Oh, Jean and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2211.06932}, url = {https://arxiv.org/pdf/2211.06932} }
Patrikar, Jay and Dantas, Joao P. A. and Ghosh, Sourish and Kapoor, Parv and Higgins, Ian and Aloor, Jasmine J. and Navarro, Ingrid and Sun, Jimin and Stoler, Ben and Hamidi, Milad and Baijal, Rohan and Moon, Brady and Oh, Jean and Scherer, Sebastian, "Challenges in Close-Proximity Safe and Seamless Operation of Manned and Unmanned Aircraft in Shared Airspace," arXiv preprint arXiv:2211.06932, 2022.
- Detection and Physical Interaction with Deformable Linear Objects.By Keipour, A., Mousaei, M., Bandari, M., Schaal, S. and Scherer, S.In ICRA 2022 2nd Workshop on Representing and Manipulating Deformable Objects, 2022.
@inproceedings{keipour2022detection, title = {Detection and Physical Interaction with Deformable Linear Objects}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Bandari, Maryam and Schaal, Stefan and Scherer, Sebastian}, year = {2022}, booktitle = {ICRA 2022 2nd Workshop on Representing and Manipulating Deformable Objects}, url = {https://arxiv.org/pdf/2205.08041} }
Keipour, Azarakhsh and Mousaei, Mohammadreza and Bandari, Maryam and Schaal, Stefan and Scherer, Sebastian, "Detection and Physical Interaction with Deformable Linear Objects," ICRA 2022 2nd Workshop on Representing and Manipulating Deformable Objects, 2022.
- Visual Servoing Approach to Autonomous UAV Landing on a Moving Vehicle.By Keipour, A., Pereira, G.A.S., Bonatti, R., Garg, R., Rastogi, P., Dubey, G. and Scherer, S.In Sensors, vol. 22, no. 17, p. 6549, 2022.
@article{keipour2022visual, title = {Visual Servoing Approach to Autonomous UAV Landing on a Moving Vehicle}, author = {Keipour, Azarakhsh and Pereira, Guilherme AS and Bonatti, Rogerio and Garg, Rohit and Rastogi, Puru and Dubey, Geetesh and Scherer, Sebastian}, year = {2022}, journal = {Sensors}, publisher = {MDPI}, volume = {22}, number = {17}, pages = {6549}, doi = {10.3390/s22176549}, issn = {1424-8220}, url = {https://www.mdpi.com/1424-8220/22/17/6549} }
Many aerial robotic applications require the ability to land on moving platforms, such as delivery trucks and marine research boats. We present a method to autonomously land an Unmanned Aerial Vehicle on a moving vehicle. A visual servoing controller approaches the ground vehicle using velocity commands calculated directly in image space. The control laws generate velocity commands in all three dimensions, eliminating the need for a separate height controller. The method has shown the ability to approach and land on the moving deck in simulation and in indoor and outdoor environments, and compared with other available methods it provides the fastest landing approach. Unlike many existing methods for landing on fast-moving platforms, this method does not rely on additional external setups, such as RTK, a motion capture system, a ground station, offboard processing, or communication with the vehicle, and it requires only a minimal set of hardware and localization sensors. The videos and source code are also provided.
Keipour, Azarakhsh and Pereira, Guilherme AS and Bonatti, Rogerio and Garg, Rohit and Rastogi, Puru and Dubey, Geetesh and Scherer, Sebastian, "Visual Servoing Approach to Autonomous UAV Landing on a Moving Vehicle," Sensors, 2022.
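Since the abstract above describes velocity commands computed directly in image space, here is a minimal image-based visual-servoing sketch of that idea. The gains, axis conventions, and centering threshold are illustrative assumptions, not the paper's tuned controller:

```python
# Hypothetical sketch: map the landing pattern's pixel offset to a velocity command.
import numpy as np

def landing_velocity_command(target_px, image_size, k_xy=0.8, k_z=0.4):
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex = (target_px[0] - cx) / cx        # normalized horizontal error in [-1, 1]
    ey = (target_px[1] - cy) / cy        # normalized vertical error in [-1, 1]
    vx = -k_xy * ey                      # image "down" -> body "forward" (assumed frame)
    vy = -k_xy * ex                      # image "right" -> body "left" (assumed frame)
    centered = abs(ex) < 0.1 and abs(ey) < 0.1
    vz = -k_z if centered else 0.0       # descend only while roughly centered
    return np.array([vx, vy, vz])
```

Because the lateral and vertical commands come from one law, no separate height controller is needed, which matches the structure the abstract describes.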
- Aerial Field Robotics.By Kulkarni, M., Moon, B., Alexis, K. and Scherer, S.In Encyclopedia of Robotics, Berlin, Heidelberg: Springer Berlin Heidelberg, 2022, pp. 1–15.
@inbook{kulkarni2022aerial, title = {Aerial Field Robotics}, author = {Kulkarni, Mihir and Moon, Brady and Alexis, Kostas and Scherer, Sebastian}, year = {2022}, booktitle = {Encyclopedia of Robotics}, publisher = {Springer Berlin Heidelberg}, address = {Berlin, Heidelberg}, pages = {1--15}, doi = {10.1007/978-3-642-41610-1_221-1}, isbn = {978-3-642-41610-1}, url = {https://link.springer.com/referenceworkentry/10.1007/978-3-642-41610-1_221-1} }
Kulkarni, Mihir and Moon, Brady and Alexis, Kostas and Scherer, Sebastian, "Aerial Field Robotics," Encyclopedia of Robotics, 2022.
- AdaFusion: Visual-LiDAR fusion with adaptive weights for place recognition.By Lai, H., Yin, P. and Scherer, S.In IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 12038–12045, 2022.
@article{lai2022adafusion, title = {{AdaFusion: Visual-LiDAR} fusion with adaptive weights for place recognition}, author = {Lai, Haowen and Yin, Peng and Scherer, Sebastian}, year = {2022}, journal = {IEEE Robotics and Automation Letters}, publisher = {IEEE}, volume = {7}, number = {4}, pages = {12038--12045}, doi = {10.1109/LRA.2022.3210880}, url = {https://arxiv.org/pdf/2111.11739} }
Lai, Haowen and Yin, Peng and Scherer, Sebastian, "AdaFusion: Visual-LiDAR fusion with adaptive weights for place recognition," IEEE Robotics and Automation Letters, 2022.
- AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration.By Li, B., Wang, C., Reddy, P., Kim, S. and Scherer, S.In European Conference on Computer Vision (ECCV), pp. 427–444, 2022.
@inproceedings{li2022airdet, title = {{AirDet: Few}-Shot Detection without Fine-tuning for Autonomous Exploration}, author = {Li, Bowen and Wang, Chen and Reddy, Pranay and Kim, Seungchan and Scherer, Sebastian}, year = {2022}, booktitle = {European Conference on Computer Vision (ECCV)}, publisher = {Springer Nature Switzerland}, pages = {427--444}, doi = {10.1007/978-3-031-19842-7_25}, isbn = {978-3-031-19842-7}, url = {https://arxiv.org/pdf/2112.01740} }
Few-shot object detection has attracted increasing attention and rapidly progressed in recent years. However, the requirement of an exhaustive offline fine-tuning stage in existing methods is time-consuming and significantly hinders their usage in online applications such as autonomous exploration of low-power robots. We find that their major limitation is that the little but valuable information from a few support images is not fully exploited. To solve this problem, we propose a brand new architecture, AirDet, and surprisingly find that, by learning class-agnostic relation with the support images in all modules, including cross-scale object proposal network, shots aggregation module, and localization network, AirDet without fine-tuning achieves comparable or even better results than many fine-tuned methods, reaching up to 30–40% improvements. We also present solid results of onboard tests on real-world exploration data from the DARPA Subterranean Challenge, which strongly validate the feasibility of AirDet in robotics. To the best of our knowledge, AirDet is the first feasible few-shot detection method for autonomous exploration of low-power robots. The code and pre-trained models are released at https://github.com/Jaraxxus-Me/AirDet.
Li, Bowen and Wang, Chen and Reddy, Pranay and Kim, Seungchan and Scherer, Sebastian, "AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration," European Conference on Computer Vision (ECCV), 2022.
- TIGRIS: An Informed Sampling-based Algorithm for Informative Path Planning.By Moon, B., Chatterjee, S. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
@inproceedings{moon2022tigris, title = {{TIGRIS: An} Informed Sampling-based Algorithm for Informative Path Planning}, author = {Moon, Brady and Chatterjee, Satrajit and Scherer, Sebastian}, year = {2022}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, doi = {10.1109/IROS47612.2022.9981992}, url = {https://arxiv.org/pdf/2203.12830.pdf}, video = {https://youtu.be/bMw5nUGL5GQ} }
Moon, Brady and Chatterjee, Satrajit and Scherer, Sebastian, "TIGRIS: An Informed Sampling-based Algorithm for Informative Path Planning," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
- Design, Modeling and Control for a Tilt-rotor VTOL UAV in the Presence of Actuator Failure.By Mousaei, M., Geng, J., Keipour, A., Bai, D. and Scherer, S.In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4310–4317, 2022.
@inproceedings{mousaei2022design, title = {Design, Modeling and Control for a Tilt-rotor {VTOL UAV} in the Presence of Actuator Failure}, author = {Mousaei, Mohammadreza and Geng, Junyi and Keipour, Azarakhsh and Bai, Dongwei and Scherer, Sebastian}, year = {2022}, booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4310--4317}, doi = {10.1109/IROS47612.2022.9981806}, url = {https://arxiv.org/pdf/2205.05533}, organization = {IEEE} }
Mousaei, Mohammadreza and Geng, Junyi and Keipour, Azarakhsh and Bai, Dongwei and Scherer, Sebastian, "Design, Modeling and Control for a Tilt-rotor VTOL UAV in the Presence of Actuator Failure," 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
- VTOL failure detection and recovery by utilizing redundancy.By Mousaei, M., Keipour, A., Geng, J. and Scherer, S.In arXiv preprint arXiv:2206.00588, 2022.
@article{mousaei2022vtol, title = {{VTOL} failure detection and recovery by utilizing redundancy}, author = {Mousaei, Mohammadreza and Keipour, Azarakhsh and Geng, Junyi and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2206.00588}, url = {https://arxiv.org/pdf/2206.00588} }
Mousaei, Mohammadreza and Keipour, Azarakhsh and Geng, Junyi and Scherer, Sebastian, "VTOL failure detection and recovery by utilizing redundancy," arXiv preprint arXiv:2206.00588, 2022.
- AirObject: A Temporally Evolving Graph Embedding for Object Identification.By Keetha, N., Wang, C., Qiu, Y., Xu, K. and Scherer, S.In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, pp. 8397–8406, 2022.
@inproceedings{keetha2022airobject, title = {{AirObject: A} Temporally Evolving Graph Embedding for Object Identification}, author = {Keetha, Nikhil and Wang, Chen and Qiu, Yuheng and Xu, Kuan and Scherer, Sebastian}, year = {2022}, month = jun, booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, pages = {8397--8406}, doi = {10.1109/CVPR52688.2022.00822}, url = {https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.00822}, keywords = {location awareness;three-dimensional displays;semantics;transforms;encoding;pattern recognition;object recognition} }
Object encoding and identification are vital for robotic tasks such as autonomous exploration, semantic scene understanding, and relocalization. Previous approaches have attempted to either track objects or generate descriptors for object identification. However, such systems are limited to a “fixed” partial object representation from a single viewpoint. In a robot exploration setup, there is a requirement for a temporally “evolving” global object representation built as the robot observes the object from multiple viewpoints. Furthermore, given the vast distribution of unknown novel objects in the real world, the object identification process must be class-agnostic. In this context, we propose a novel temporal 3D object encoding approach, dubbed AirObject, to obtain global keypoint graph-based embeddings of objects. Specifically, the global 3D object embeddings are generated using a temporal convolutional network across structural information of multiple frames obtained from a graph attention-based encoding method. We demonstrate that AirObject achieves state-of-the-art performance for video object identification and is robust to severe occlusion, perceptual aliasing, viewpoint shift, deformation, and scale transform, outperforming the state-of-the-art single-frame and sequential descriptors. To the best of our knowledge, AirObject is one of the first temporal object encoding methods. Source code is available at https://github.com/Nik-v9/AirObject.
Keetha, Nikhil and Wang, Chen and Qiu, Yuheng and Xu, Kuan and Scherer, Sebastian, "AirObject: A Temporally Evolving Graph Embedding for Object Identification," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
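The abstract above describes building one global object embedding by running a temporal convolution across per-frame graph features. Below is a minimal PyTorch sketch of that aggregation idea (illustrative only; layer sizes and pooling are our assumptions, and it omits the paper's graph attention encoder):

```python
# Hypothetical sketch: aggregate per-frame object descriptors over time.
import torch
import torch.nn as nn

class TemporalObjectEncoder(nn.Module):
    def __init__(self, feat_dim=256, out_dim=256):
        super().__init__()
        # 1D convolutions over the time axis of a (batch, feat, frames) tensor.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, out_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(out_dim, out_dim, kernel_size=3, padding=1),
        )

    def forward(self, frame_feats):            # (batch, frames, feat_dim)
        x = frame_feats.transpose(1, 2)        # -> (batch, feat_dim, frames)
        x = self.tcn(x).mean(dim=2)            # pool over time -> global embedding
        return nn.functional.normalize(x, dim=1)

emb = TemporalObjectEncoder()(torch.randn(2, 10, 256))  # two objects, 10 frames each
```

Matching is then a cosine similarity between such embeddings, which is why the vectors are L2-normalized.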
- AirDOS: Dynamic SLAM benefits from Articulated Objects.By Qiu, Y., Wang, C., Wang, W., Henein, M. and Scherer, S.In International Conference on Robotics and Automation (ICRA), pp. 8047–8053, 2022.
@inproceedings{qiu2022airdos, title = {{AirDOS: Dynamic SLAM} benefits from Articulated Objects}, author = {Qiu, Yuheng and Wang, Chen and Wang, Wenshan and Henein, Mina and Scherer, Sebastian}, year = {2022}, booktitle = {International Conference on Robotics and Automation (ICRA)}, pages = {8047--8053}, doi = {10.1109/ICRA46639.2022.9811667}, url = {https://arxiv.org/pdf/2109.09903}, code = {https://github.com/haleqiu/airdos} }
Qiu, Yuheng and Wang, Chen and Wang, Wenshan and Henein, Mina and Scherer, Sebastian, "AirDOS: Dynamic SLAM benefits from Articulated Objects," International Conference on Robotics and Automation (ICRA), 2022.
- Resilient and Modular Subterranean Exploration with a Team of Roving and Flying Robots.By Scherer, S., Agrawal, V., Best, G., Cao, C., Cujic, K., Darnley, R., DeBortoli, R., Dexheimer, E., Drozd, B., Garg, R., Higgins, I., Keller, J., Kohanbash, D., Nogueira, L., Pradhan, R., Tatum, M., K. Viswanathan, V., Willits, S., Zhao, S., Zhu, H., Abad, D., Angert, T., Armstrong, G., Boirum, R., Dongare, A., Dworman, M., Hu, S., Jaekel, J., Ji, R., Lai, A., Hsuan Lee, Y., Luong, A., Mangelson, J., Maier, J., Picard, J., Pluckter, K., Saba, A., Saroya, M., Scheide, E., Shoemaker-Trejo, N., Spisak, J., Teza, J., Yang, F., Wilson, A., Zhang, H., Choset, H., Kaess, M., Rowe, A., Singh, S., Zhang, J., A. Hollinger, G. and Travers, M.In Field Robotics Journal, pp. 678–734, May 2022.
@article{scherer2022resilient, title = {Resilient and Modular Subterranean Exploration with a Team of Roving and Flying Robots}, author = {Scherer, Sebastian and Agrawal, Vasu and Best, Graeme and Cao, Chao and Cujic, Katarina and Darnley, Ryan and DeBortoli, Robert and Dexheimer, Eric and Drozd, Bill and Garg, Rohit and Higgins, Ian and Keller, John and Kohanbash, David and Nogueira, Lucas and Pradhan, Roshan and Tatum, Michael and K. Viswanathan, Vaibhav and Willits, Steven and Zhao, Shibo and Zhu, Hongbiao and Abad, Dan and Angert, Tim and Armstrong, Greg and Boirum, Ralph and Dongare, Adwait and Dworman, Matthew and Hu, Shengjie and Jaekel, Joshua and Ji, Ran and Lai, Alice and Hsuan Lee, Yu and Luong, Anh and Mangelson, Joshua and Maier, Jay and Picard, James and Pluckter, Kevin and Saba, Andrew and Saroya, Manish and Scheide, Emily and Shoemaker-Trejo, Nathaniel and Spisak, Joshua and Teza, Jim and Yang, Fan and Wilson, Andrew and Zhang, Henry and Choset, Howie and Kaess, Michael and Rowe, Anthony and Singh, Sanjiv and Zhang, Ji and A. Hollinger, Geoffrey and Travers, Matthew}, year = {2022}, month = may, journal = {Field Robotics Journal}, pages = {678--734}, doi = {10.55417/fr.2022023}, url = {https://fieldrobotics.net/Field_Robotics/Volume_2_files/Vol2_23.pdf} }
Scherer, Sebastian and Agrawal, Vasu and Best, Graeme and Cao, Chao and Cujic, Katarina and Darnley, Ryan and DeBortoli, Robert and Dexheimer, Eric and Drozd, Bill and Garg, Rohit and Higgins, Ian and Keller, John and Kohanbash, David and Nogueira, Lucas and Pradhan, Roshan and Tatum, Michael and K. Viswanathan, Vaibhav and Willits, Steven and Zhao, Shibo and Zhu, Hongbiao and Abad, Dan and Angert, Tim and Armstrong, Greg and Boirum, Ralph and Dongare, Adwait and Dworman, Matthew and Hu, Shengjie and Jaekel, Joshua and Ji, Ran and Lai, Alice and Hsuan Lee, Yu and Luong, Anh and Mangelson, Joshua and Maier, Jay and Picard, James and Pluckter, Kevin and Saba, Andrew and Saroya, Manish and Scheide, Emily and Shoemaker-Trejo, Nathaniel and Spisak, Joshua and Teza, Jim and Yang, Fan and Wilson, Andrew and Zhang, Henry and Choset, Howie and Kaess, Michael and Rowe, Anthony and Singh, Sanjiv and Zhang, Ji and A. Hollinger, Geoffrey and Travers, Matthew, "Resilient and Modular Subterranean Exploration with a Team of Roving and Flying Robots," Field Robotics Journal, 2022.
- Robotic Interestingness via Human-Informed Few-Shot Object Detection.By Kim, S., Wang, C., Li, B. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1756–1763, 2022.
@inproceedings{kim2022robotic, title = {Robotic Interestingness via Human-Informed Few-Shot Object Detection}, author = {Kim, Seungchan and Wang, Chen and Li, Bowen and Scherer, Sebastian}, year = {2022}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {1756--1763}, doi = {10.1109/IROS47612.2022.9981461}, url = {https://arxiv.org/pdf/2208.01084.pdf} }
Kim, Seungchan and Wang, Chen and Li, Bowen and Scherer, Sebastian, "Robotic Interestingness via Human-Informed Few-Shot Object Detection," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
- Robust Modeling and Controls for Racing on the Edge.By Spisak, J., Saba, A., Suvarna, N., Mao, B., Zhang, C.T., Chang, C., Scherer, S. and Ramanan, D.In arXiv preprint arXiv:2205.10841, 2022.
@article{spisak2022robust, title = {Robust Modeling and Controls for Racing on the Edge}, author = {Spisak, Joshua and Saba, Andrew and Suvarna, Nayana and Mao, Brian and Zhang, Chuan Tian and Chang, Chris and Scherer, Sebastian and Ramanan, Deva}, year = {2022}, journal = {arXiv preprint arXiv:2205.10841}, doi = {10.48550/ARXIV.2205.10841}, url = {https://arxiv.org/pdf/2205.10841.pdf}, video = {https://youtu.be/3_ZFCYQqxyo?t=1334} }
Spisak, Joshua and Saba, Andrew and Suvarna, Nayana and Mao, Brian and Zhang, Chuan Tian and Chang, Chris and Scherer, Sebastian and Ramanan, Deva, "Robust Modeling and Controls for Racing on the Edge," arXiv preprint arXiv:2205.10841, 2022.
- Drone flight data reveal energy and greenhouse gas emissions savings for very small package delivery.By Rodrigues, T.A., Patrikar, J., Oliveira, N.L., Matthews, H.S., Scherer, S. and Samaras, C.In Patterns, vol. 3, no. 8, p. 100569, 2022.
@article{rodrigues2022drone, title = {Drone flight data reveal energy and greenhouse gas emissions savings for very small package delivery}, author = {Rodrigues, Thiago A. and Patrikar, Jay and Oliveira, Natalia L. and Matthews, H. Scott and Scherer, Sebastian and Samaras, Constantine}, year = {2022}, journal = {Patterns}, publisher = {Elsevier}, volume = {3}, number = {8}, pages = {100569}, doi = {10.1016/j.patter.2022.100569}, issn = {2666-3899}, url = {https://www.sciencedirect.com/science/article/pii/S2666389922001805}, keywords = {quadcopter drone, last-mile delivery, energy consumption, greenhouse gas emissions, robot delivery, autonomous delivery} }
Uncrewed aerial vehicles (UAVs) for last-mile deliveries will affect the energy productivity of delivery and require new methods to understand energy consumption and greenhouse gas (GHG) emissions. We combine empirical testing of 188 quadcopter flights across a range of speeds with a first-principles analysis to develop a usable energy model and a machine-learning algorithm to assess energy across takeoff, cruise, and landing. Our model shows that an electric quadcopter drone with a very small package (0.5 kg) would consume approximately 0.08 MJ/km and result in 70 g of CO2e per package in the United States. We compare drone delivery with other vehicles and show that energy per package delivered by drones (0.33 MJ/package) can be up to 94% lower than conventional transportation modes, with only electric cargo bicycles providing lower GHGs/package. Our open model and coefficients can assist stakeholders in understanding and improving the sustainability of small package delivery.
Rodrigues, Thiago A. and Patrikar, Jay and Oliveira, Natalia L. and Matthews, H. Scott and Scherer, Sebastian and Samaras, Constantine, "Drone flight data reveal energy and greenhouse gas emissions savings for very small package delivery," Patterns, 2022.
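As a quick back-of-envelope check of the figures quoted in the abstract, the per-package and per-kilometer numbers imply a flight distance of roughly 4 km per delivery; this distance is our inference, not a number reported in the paper:

```python
# Back-of-envelope check of the abstract's quadcopter figures.
ENERGY_PER_KM_MJ = 0.08        # ~0.08 MJ/km with a 0.5 kg package (from abstract)
ENERGY_PER_PACKAGE_MJ = 0.33   # ~0.33 MJ per delivered package (from abstract)

implied_distance_km = ENERGY_PER_PACKAGE_MJ / ENERGY_PER_KM_MJ
print(f"Implied flight distance per delivery: ~{implied_distance_km:.1f} km")  # ~4.1 km
```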
- TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models.By Triest, S., Sivaprakasam, M., Wang, S.J., Wang, W., Johnson, A.M. and Scherer, S.In 2022 International Conference on Robotics and Automation (ICRA), 2022.
@inproceedings{triest2022tartandrive, title = {{TartanDrive: A} Large-Scale Dataset for Learning Off-Road Dynamics Models}, author = {Triest, Samuel and Sivaprakasam, Matthew and Wang, Sean J. and Wang, Wenshan and Johnson, Aaron M. and Scherer, Sebastian}, year = {2022}, booktitle = {2022 International Conference on Robotics and Automation (ICRA)}, doi = {10.1109/ICRA46639.2022.9811648}, url = {https://arxiv.org/pdf/2205.01791.pdf}, code = {https://github.com/castacks/tartan_drive} }
Triest, Samuel and Sivaprakasam, Matthew and Wang, Sean J. and Wang, Wenshan and Johnson, Aaron M. and Scherer, Sebastian, "TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models," 2022 International Conference on Robotics and Automation (ICRA), 2022.
- Lifelong graph learning.By Wang, C., Qiu, Y., Gao, D. and Scherer, S.In 2022 Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
@inproceedings{wang2022lifelong, title = {Lifelong graph learning}, author = {Wang, Chen and Qiu, Yuheng and Gao, Dasong and Scherer, Sebastian}, year = {2022}, booktitle = {2022 Conference on Computer Vision and Pattern Recognition (CVPR)}, url = {https://arxiv.org/pdf/2009.00647} }
Wang, Chen and Qiu, Yuheng and Gao, Dasong and Scherer, Sebastian, "Lifelong graph learning," 2022 Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
- Unsupervised Online Learning for Robotic Interestingness With Visual Memory.By Wang, C., Qiu, Y., Wang, W., Hu, Y., Kim, S. and Scherer, S.In IEEE Transactions on Robotics, vol. 38, no. 4, pp. 2446–2461, 2022.
@article{wang2022unsupervised, title = {Unsupervised Online Learning for Robotic Interestingness With Visual Memory}, author = {Wang, Chen and Qiu, Yuheng and Wang, Wenshan and Hu, Yafei and Kim, Seungchan and Scherer, Sebastian}, year = {2022}, journal = {IEEE Transactions on Robotics}, volume = {38}, number = {4}, pages = {2446--2461}, doi = {10.1109/TRO.2021.3129972}, url = {https://arxiv.org/pdf/2111.09793.pdf} }
Autonomous robots frequently need to detect "interesting" scenes to decide on further exploration, or to decide which data to share for cooperation. These scenarios often require fast deployment with little or no training data. Prior work considers "interestingness" based on data from the same distribution. Instead, we propose to develop a method that automatically adapts online to the environment to report interesting scenes quickly. To address this problem, we develop a novel translation-invariant visual memory and design a three-stage architecture for long-term, short-term, and online learning, which enables the system to learn human-like experience, environmental knowledge, and online adaptation, respectively. With this system, we achieve an average of 20% higher accuracy than the state-of-the-art unsupervised methods in a subterranean tunnel environment. We show performance comparable to supervised methods in robot exploration scenarios, demonstrating the efficacy of our approach. We expect that the presented method will play an important role in robotic interestingness recognition for exploration tasks.
Wang, Chen and Qiu, Yuheng and Wang, Wenshan and Hu, Yafei and Kim, Seungchan and Scherer, Sebastian, "Unsupervised Online Learning for Robotic Interestingness With Visual Memory," IEEE Transactions on Robotics, 2022.
- Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning.By Wang, S.J., Triest, S., Wang, W., Scherer, S. and Johnson, A.In Proceedings of the 5th Conference on Robot Learning, vol. 164, pp. 224–233, 2022.
@inproceedings{wang2022rough, title = {Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning}, author = {Wang, Sean J and Triest, Samuel and Wang, Wenshan and Scherer, Sebastian and Johnson, Aaron}, year = {2022}, month = {08--11 Nov}, booktitle = {Proceedings of the 5th Conference on Robot Learning}, publisher = {PMLR}, series = {Proceedings of Machine Learning Research}, volume = {164}, pages = {224--233}, url = {https://proceedings.mlr.press/v164/wang22c.html}, editor = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard}, pdf = {https://proceedings.mlr.press/v164/wang22c/wang22c.pdf} }
Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot’s trajectory can be challenging due to the complexity of terrain interactions, as well as the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot’s long-horizon decision-making capabilities and often leads to shortsighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to consider the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is included in the rollout so that closed-loop trajectories are predicted. Our method further increases prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies to reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating through rough terrain environments.
Wang, Sean J and Triest, Samuel and Wang, Wenshan and Scherer, Sebastian and Johnson, Aaron, "Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning," Proceedings of the 5th Conference on Robot Learning, 2022.
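To make the divergence-penalized planning idea in the abstract concrete, here is a conceptual sketch (not the paper's implementation): candidate action sequences are rolled through an ensemble of learned dynamics models, and the spread of the ensemble's predictions is charged as a penalty so low-divergence trajectories are preferred. The function names and the use of ensemble standard deviation as the divergence measure are our assumptions:

```python
# Hypothetical sketch: score an action sequence by goal progress minus divergence.
import numpy as np

def score_action_sequence(models, state0, actions, goal, w_div=1.0):
    """models: list of f(state, action) -> next_state; state0, goal: 1D arrays."""
    states = np.stack([state0 for _ in models])   # one rollout per ensemble member
    divergence = 0.0
    for a in actions:
        states = np.stack([f(s, a) for f, s in zip(models, states)])
        # Ensemble spread approximates compounding model uncertainty.
        divergence += states.std(axis=0).sum()
    dist_to_goal = np.linalg.norm(states.mean(axis=0) - goal)
    return -(dist_to_goal + w_div * divergence)
```

A sampling-based planner can then rank candidate sequences by this score and execute the best one in receding-horizon fashion.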
- AirCode: A Robust Object Encoding Method.By Xu, K., Wang, C., Chen, C., Wu, W. and Scherer, S.In IEEE Robotics and Automation Letters (RA-L), vol. 7, no. 2, pp. 1816–1823, 2022.
@article{xu2022aircode, title = {{AirCode: A} Robust Object Encoding Method}, author = {Xu, Kuan and Wang, Chen and Chen, Chao and Wu, Wei and Scherer, Sebastian}, year = {2022}, journal = {IEEE Robotics and Automation Letters (RA-L)}, volume = {7}, number = {2}, pages = {1816--1823}, doi = {10.1109/LRA.2022.3141221}, url = {https://arxiv.org/pdf/2105.00327}, code = {https://github.com/wang-chen/AirCode}, video = {https://youtu.be/ZhW4Qk1tLNQ} }
Xu, Kuan and Wang, Chen and Chen, Chao and Wu, Wei and Scherer, Sebastian, "AirCode: A Robust Object Encoding Method," IEEE Robotics and Automation Letters (RA-L), 2022.
- General Place Recognition Survey: Towards the Real-world Autonomy Age.By Yin, P., Zhao, S., Cisneros, I., Abuduweili, A., Huang, G., Milford, M., Liu, C., Choset, H. and Scherer, S.In arXiv preprint arXiv:2209.04497, 2022.
@article{yin2022general, title = {General Place Recognition Survey: Towards the Real-world Autonomy Age}, author = {Yin, Peng and Zhao, Shiqi and Cisneros, Ivan and Abuduweili, Abulikemu and Huang, Guoquan and Milford, Michael and Liu, Changliu and Choset, Howie and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2209.04497}, url = {https://arxiv.org/pdf/2209.04497} }
Yin, Peng and Zhao, Shiqi and Cisneros, Ivan and Abuduweili, Abulikemu and Huang, Guoquan and Milford, Michael and Liu, Changliu and Choset, Howie and Scherer, Sebastian, "General Place Recognition Survey: Towards the Real-world Autonomy Age," arXiv preprint arXiv:2209.04497, 2022.
- ALITA: A large-scale incremental dataset for long-term autonomy.By Yin, P., Zhao, S., Ge, R., Cisneros, I., Fu, R., Zhang, J., Choset, H. and Scherer, S.In arXiv preprint arXiv:2205.10737, 2022.
@article{yin2022alita, title = {{ALITA: A} large-scale incremental dataset for long-term autonomy}, author = {Yin, Peng and Zhao, Shiqi and Ge, Ruohai and Cisneros, Ivan and Fu, Ruijie and Zhang, Ji and Choset, Howie and Scherer, Sebastian}, year = {2022}, journal = {arXiv preprint arXiv:2205.10737}, url = {https://arxiv.org/pdf/2205.10737} }
Yin, Peng and Zhao, Shiqi and Ge, Ruohai and Cisneros, Ivan and Fu, Ruijie and Zhang, Ji and Choset, Howie and Scherer, Sebastian, "ALITA: A large-scale incremental dataset for long-term autonomy," arXiv preprint arXiv:2205.10737, 2022.
- SphereVLAD++: Attention-Based and Signal-Enhanced Viewpoint Invariant Descriptor.By Zhao, S., Yin, P., Yi, G. and Scherer, S.In IEEE Robotics and Automation Letters, vol. 8, no. 1, pp. 256–263, 2022.
@article{zhao2022spherevlad, title = {{SphereVLAD++: Attention}-Based and Signal-Enhanced Viewpoint Invariant Descriptor}, author = {Zhao, Shiqi and Yin, Peng and Yi, Ge and Scherer, Sebastian}, year = {2022}, journal = {IEEE Robotics and Automation Letters}, publisher = {IEEE}, volume = {8}, number = {1}, pages = {256--263}, doi = {10.1109/LRA.2022.3223555}, url = {https://arxiv.org/pdf/2207.02958} }
Zhao, Shiqi and Yin, Peng and Yi, Ge and Scherer, Sebastian, "SphereVLAD++: Attention-Based and Signal-Enhanced Viewpoint Invariant Descriptor," IEEE Robotics and Automation Letters, 2022.
- Unified Representation of Geometric Primitives for Graph-SLAM Optimization Using Decomposed Quadrics.By Zhen, W., Yu, H., Hu, Y. and Scherer, S.In 2022 International Conference on Robotics and Automation (ICRA), 2022.
@inproceedings{zhen2022unified, title = {Unified Representation of Geometric Primitives for Graph-{SLAM} Optimization Using Decomposed Quadrics}, author = {Zhen, Weikun and Yu, Huai and Hu, Yaoyu and Scherer, Sebastian}, year = {2022}, booktitle = {2022 International Conference on Robotics and Automation (ICRA)}, doi = {10.1109/ICRA46639.2022.9812162}, url = {https://arxiv.org/pdf/2108.08957.pdf} }
Zhen, Weikun and Yu, Huai and Hu, Yaoyu and Scherer, Sebastian, "Unified Representation of Geometric Primitives for Graph-SLAM Optimization Using Decomposed Quadrics," 2022 International Conference on Robotics and Automation (ICRA), 2022.
2021
- Real-Time Ellipse Detection for Robotics Applications.By Keipour, A., Pereira, G.A.S. and Scherer, S.In IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7009–7016, Oct. 2021.
@article{keipour2021real, title = {Real-Time Ellipse Detection for Robotics Applications}, author = {Keipour, Azarakhsh and Pereira, Guilherme A. S. and Scherer, Sebastian}, year = {2021}, month = oct, journal = {IEEE Robotics and Automation Letters}, volume = {6}, number = {4}, pages = {7009--7016}, doi = {10.1109/LRA.2021.3097057}, issn = {2377-3766}, url = {https://arxiv.org/pdf/2102.12670}, archiveprefix = {arXiv}, arxivid = {2102.12670}, video = {https://www.youtube.com/watch?v=CzR-4aqlOhQ} }
We propose a new algorithm for real-time detection and tracking of elliptic patterns suitable for real-world robotics applications. The method fits an ellipse to each contour in the image frame and rejects ellipses that do not yield a good fit. The resulting detection and tracking method is lightweight enough to be used on robots’ resource-limited onboard computers, can deal with lighting variations, and can detect the pattern even when the view is partial. The method is tested on an example application of an autonomous UAV landing on a fast-moving vehicle to show its performance indoors, outdoors, and in simulation on a real-world robotics task. The comparison with other well-known ellipse detection methods shows that our proposed algorithm outperforms other methods with an F1 score of 0.981 on a dataset with over 1500 frames. The videos of experiments, the source codes, and the collected dataset are provided with the letter.
Keipour, Azarakhsh and Pereira, Guilherme A. S. and Scherer, Sebastian, "Real-Time Ellipse Detection for Robotics Applications," IEEE Robotics and Automation Letters, 2021.
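The fit-and-reject loop described in the abstract above can be approximated with standard OpenCV primitives. In this minimal sketch the area-ratio test is a crude fit-quality proxy we chose for brevity; the paper's actual rejection criteria are more involved:

```python
# Hypothetical sketch: fit an ellipse to each contour and reject poor fits.
import cv2
import numpy as np

def detect_ellipses(gray, min_points=20, area_tol=0.15):
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for cnt in contours:
        if len(cnt) < max(min_points, 5):          # fitEllipse needs >= 5 points
            continue
        (cx, cy), (major, minor), angle = cv2.fitEllipse(cnt)
        ellipse_area = np.pi * major * minor / 4.0
        # Accept only if the contour encloses roughly the fitted ellipse's area.
        if ellipse_area > 0 and abs(cv2.contourArea(cnt) - ellipse_area) / ellipse_area < area_tol:
            ellipses.append(((cx, cy), (major, minor), angle))
    return ellipses
```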
- Integration of Fully-Actuated Multirotors into Real-World Applications.By Keipour, A., Mousaei, M., Ashley, A. and Scherer, S.In arXiv preprint arXiv:2011.06666, 2021.
@article{keipour2021integration, title = {Integration of Fully-Actuated Multirotors into Real-World Applications}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Ashley, Andrew and Scherer, Sebastian}, year = {2021}, journal = {arXiv preprint arXiv:2011.06666}, url = {https://arxiv.org/pdf/2011.06666}, archiveprefix = {arXiv}, arxivid = {2011.06666}, eprint = {2011.06666}, video = {https://www.youtube.com/watch?v=lZ3ye1il0W0} }
The introduction of fully-actuated multirotors has opened the door to new possibilities and more efficient solutions to many real-world applications. However, their integration has been slower than expected, partly due to the need for new tools to take full advantage of these robots. As far as we know, all the groups currently working on fully-actuated multirotors develop new full-pose (6-D) tools and methods to use their robots, which is inefficient, time-consuming, and requires many resources. We propose a way of bridging the gap between the tools already available for underactuated robots and the new fully-actuated vehicles. The approach can extend existing underactuated flight controllers to support fully-actuated robots, or enhance existing fully-actuated controllers to support existing underactuated flight stacks. We introduce attitude strategies that work with the underactuated controllers, tools, planners, and remote control interfaces, all while taking advantage of the full actuation. Moreover, new methods are proposed that can properly handle the limited lateral thrust suffered by many fully-actuated UAV designs. The strategies are lightweight, simple, and allow rapid integration of the available tools with these new vehicles for the fast development of new real-world applications. The real experiments on our robots and simulations on several UAV architectures with different underlying controller methods show how these strategies can be utilized to extend existing flight controllers for fully-actuated applications. We have provided the source code for the PX4 firmware enhanced with our proposed methods to showcase an example flight controller for underactuated multirotors that can be modified to seamlessly support fully-actuated vehicles while retaining the rest of the flight stack unchanged. For more information, please visit https://theairlab.org/fully-actuated/.
Keipour, Azarakhsh and Mousaei, Mohammadreza and Ashley, Andrew and Scherer, Sebastian, "Integration of Fully-Actuated Multirotors into Real-World Applications," arXiv preprint arXiv:2011.06666, 2021.
- Batteries, Camera, Action! Learning a Semantic Control Space for Expressive Robot Cinematography.By Bonatti, R., Bucker, A., Scherer, S., Mukadam, M. and Hodgins, J.In Proceedings - IEEE International Conference on Robotics and Automation, Xi’an, China, pp. 7302–7308, 2021.
@inproceedings{bonatti2021batteries, title = {Batteries, Camera, Action! {Learning} a Semantic Control Space for Expressive Robot Cinematography}, author = {Bonatti, Rogerio and Bucker, Arthur and Scherer, Sebastian and Mukadam, Mustafa and Hodgins, Jessica}, year = {2021}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Xi'an, China}, pages = {7302--7308}, doi = {10.1109/ICRA48506.2021.9560745}, url = {https://arxiv.org/pdf/2011.10118}, video = {https://www.youtube.com/watch?v=6WX2yEUE9_k} }
Aerial vehicles are revolutionizing the way film-makers can capture shots of actors by composing novel aerial and dynamic viewpoints. However, despite great advancements in autonomous flight technology, generating expressive camera behaviors is still a challenge and requires non-technical users to edit a large number of unintuitive control parameters. In this work we develop a data-driven framework that enables editing of these complex camera positioning parameters in a semantic space (e.g. calm, enjoyable, establishing). First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip. Next, we analyze correlations between descriptors and build a semantic control space based on cinematography guidelines and human perception studies. Finally, we learn a generative model that can map a set of desired semantic video descriptors into low-level camera trajectory parameters. We evaluate our system by demonstrating that our model successfully generates shots that are rated by participants as having the expected degrees of expression for each descriptor. We also show that our models generalize to different scenes in both simulation and real-world experiments. More results can be found in the supplementary video.
Bonatti, Rogerio and Bucker, Arthur and Scherer, Sebastian and Mukadam, Mustafa and Hodgins, Jessica, "Batteries, Camera, Action! Learning a Semantic Control Space for Expressive Robot Cinematography," Proceedings - IEEE International Conference on Robotics and Automation, 2021.
- Do You See What I See? Coordinating Multiple Aerial Cameras for Robot Cinematography.By Bucker, A., Bonatti, R. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Xi’an, China, pp. 7972–7979, 2021.
@inproceedings{bucker2021do, title = {Do You See What I See? {Coordinating} Multiple Aerial Cameras for Robot Cinematography}, author = {Bucker, Arthur and Bonatti, Rogerio and Scherer, Sebastian}, year = {2021}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Xi'an, China}, pages = {7972--7979}, doi = {10.1109/ICRA48506.2021.9561086}, url = {https://arxiv.org/pdf/2011.05437}, video = {https://www.youtube.com/watch?v=m2R3anv2ADE} }
Aerial cinematography is significantly expanding the capabilities of film-makers. Recent progress in autonomous unmanned aerial vehicles (UAVs) has further increased the potential impact of aerial cameras, with systems that can safely track actors in unstructured cluttered environments. Professional productions, however, require the use of multiple cameras simultaneously to record different viewpoints of the same scene, which are edited into the final footage either in real time or in post-production. Such extreme motion coordination is particularly hard for unscripted action scenes, which are a common use case of aerial cameras. In this work we develop a real-time multi-UAV coordination system that is capable of recording dynamic targets while maximizing shot diversity and avoiding collisions and mutual visibility between cameras. We validate our approach in multiple cluttered environments of a photo-realistic simulator, and deploy the system using two UAVs in real-world experiments. We show that our coordination scheme has low computational cost and takes only 1.17 ms on average to plan for a team of 3 UAVs over a 10 s time horizon. More results can be found on the supplementary video.
Bucker, Arthur and Bonatti, Rogerio and Scherer, Sebastian, "Do You See What I See? Coordinating Multiple Aerial Cameras for Robot Cinematography," Proceedings - IEEE International Conference on Robotics and Automation, 2021.
- 3D Human Reconstruction in the Wild with Collaborative Aerial Cameras.By Ho, C., Jong, A., Freeman, H., Rao, R., Bonatti, R. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5263–5269, 2021.
@inproceedings{ho20213d, title = {3D Human Reconstruction in the Wild with Collaborative Aerial Cameras}, author = {Ho, Cherie and Jong, Andrew and Freeman, Harry and Rao, Rohan and Bonatti, Rogerio and Scherer, Sebastian}, year = {2021}, month = sep, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {5263--5269}, doi = {10.1109/IROS51168.2021.9636745}, url = {https://arxiv.org/pdf/2108.03936}, video = {https://youtu.be/jxt91vx0cns} }
Aerial vehicles are revolutionizing applications that require capturing the 3D structure of dynamic targets in the wild, such as sports, medicine, and entertainment. The core challenges in developing a motion-capture system that operates in outdoor environments are: (1) 3D inference requires multiple simultaneous viewpoints of the target, (2) occlusion caused by obstacles is frequent when tracking moving targets, and (3) the camera and vehicle state estimation is noisy. We present a real-time aerial system for multi-camera control that can reconstruct human motions in natural environments without the use of special-purpose markers. We develop a multi-robot coordination scheme that maintains the optimal flight formation for target reconstruction quality amongst obstacles. We provide studies evaluating system performance in simulation, and validate real-world performance using two drones while a target performs activities such as jogging and playing soccer.
Ho, Cherie and Jong, Andrew and Freeman, Harry and Rao, Rohan and Bonatti, Rogerio and Scherer, Sebastian, "3D Human Reconstruction in the Wild with Collaborative Aerial Cameras," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
- CVaR-Based Flight Energy Risk Assessment for Multirotor UAVs Using a Deep Energy Model.By Choudhry, A., Moon, B., Patrikar, J., Samaras, C. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Xi’an, China, 2021.
@inproceedings{choudhry2021cvar, title = {{CVaR}-Based Flight Energy Risk Assessment for Multirotor {UAVs} Using a Deep Energy Model}, author = {Choudhry, Arnav and Moon, Brady and Patrikar, Jay and Samaras, Constantine and Scherer, Sebastian}, year = {2021}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Xi'an, China}, doi = {10.1109/ICRA48506.2021.9561658}, url = {https://arxiv.org/pdf/2105.15189}, video = {https://t.co/qcYHxa6oOW?amp=1} }
An important aspect of risk assessment for UAV flights is energy consumption, as running out of battery during a flight brings almost guaranteed vehicle damage and a high risk of property damage or human injuries. Predicting the amount of energy a flight will consume is challenging as many factors affect the overall consumption. In this work, we propose a deep energy model that uses Temporal Convolutional Networks (TCNs) to capture the time-varying features while incorporating static contextual information. Our energy model is trained on a real-world dataset and doesn’t require segregating flights into regimes. We showcase an improvement in power predictions by 35.6% on test flights when compared to a state-of-the-art analytical method. Once we have an accurate energy model, we can use it to predict the energy usage for a given trajectory and evaluate the risk of running out of battery during flight. We propose using Conditional Value-at-Risk (CVaR) as a metric for quantifying this risk. We show that CVaR captures the risk associated with worst-case energy consumption on a nominal path by transforming the output distribution of Monte Carlo forward simulations into a risk space and computing the CVaR on the risk-space distribution. Our state-of-the-art energy model and risk evaluation method help guarantee safe flights and evaluate the coverage area from a proposed takeoff location.
Choudhry, Arnav and Moon, Brady and Patrikar, Jay and Samaras, Constantine and Scherer, Sebastian, "CVaR-Based Flight Energy Risk Assessment for Multirotor UAVs Using a Deep Energy Model," Proceedings - IEEE International Conference on Robotics and Automation, 2021.
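For readers unfamiliar with the CVaR metric used above: given Monte Carlo samples of energy consumption, CVaR at level α is simply the mean of the worst (1 − α) tail of the sample distribution. A minimal NumPy sketch, with stand-in lognormal draws in place of the paper's learned deep energy model:

```python
import numpy as np

def cvar(samples: np.ndarray, alpha: float = 0.95) -> float:
    """Conditional Value-at-Risk: mean of the worst (1 - alpha)
    fraction of outcomes, here sampled energy consumption."""
    var = np.quantile(samples, alpha)   # Value-at-Risk threshold
    return float(samples[samples >= var].mean())

# Stand-in for Monte Carlo forward simulations along a nominal path
# (the paper draws these from its deep energy model, not a lognormal).
rng = np.random.default_rng(0)
energy_wh = rng.lognormal(mean=3.0, sigma=0.25, size=10_000)
print(f"VaR_95:  {np.quantile(energy_wh, 0.95):.1f} Wh")
print(f"CVaR_95: {cvar(energy_wh, 0.95):.1f} Wh")
```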
- ULSD: Unified Line Segment Detection across Pinhole, Fisheye, and Spherical Cameras.By Li, H., Yu, H., Yang, W., Yu, L. and Scherer, S.In ISPRS Journal of Photogrammetry and Remote Sensing, vol. 178, pp. 187–202, 2021.
@article{li2021ulsd, title = {{ULSD:} {Unified} Line Segment Detection across Pinhole, Fisheye, and Spherical Cameras}, author = {Li, Hao and Yu, Huai and Yang, Wen and Yu, Lei and Scherer, Sebastian}, year = {2021}, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, volume = {178}, pages = {187--202}, doi = {10.1016/j.isprsjprs.2021.06.004}, url = {https://www.sciencedirect.com/science/article/pii/S0924271621001623} }
Li, Hao and Yu, Huai and Yang, Wen and Yu, Lei and Scherer, Sebastian, "ULSD: Unified Line Segment Detection across Pinhole, Fisheye, and Spherical Cameras," ISPRS Journal of Photogrammetry and Remote Sensing, 2021.
- Model-based real-time robust controller for a small helicopter.By He, M., He, J. and Scherer, S.In Mechanical Systems and Signal Processing, vol. 146, p. 107022, 2021.
@article{he2021model, title = {Model-based real-time robust controller for a small helicopter}, author = {He, Miaolei and He, Jilin and Scherer, Sebastian}, year = {2021}, journal = {Mechanical Systems and Signal Processing}, volume = {146}, pages = {107022}, doi = {10.1016/j.ymssp.2020.107022}, issn = {0888-3270}, url = {https://www.sciencedirect.com/science/article/pii/S0888327020304088}, keywords = {Helicopter, Control, UAV} }
Small helicopters attract substantial attention because they offer better loading capacity and flight efficiency than multirotor aircraft. The main objective of this work is to study the modeling and flight-controller design of a small helicopter. Based on a non-simplified helicopter model, a robust control law combined with a newly developed trajectory-tracking controller is proposed. The inner loop uses an H∞ technique to ensure robust attitude-tracking performance. The outer loop uses a prescribed performance control technique to guarantee that the position-controller output error converges to a predefined arbitrary residual set. The convergence rate of the outer loop will be no less than a prespecified value, and any overshoot will remain below a preassigned level. The proposed control algorithm was tested on a real helicopter platform in the presence of wind disturbance and system uncertainty, and the results demonstrate the effectiveness and robustness of our approach.
He, Miaolei and He, Jilin and Scherer, Sebastian, "Model-based real-time robust controller for a small helicopter," Mechanical Systems and Signal Processing, 2021.
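For context on the abstract above: prescribed performance control keeps a tracking error inside a funnel that shrinks at a guaranteed rate, which is what yields the prespecified convergence rate and overshoot bound. A standard, generic formulation (not taken from this paper) is:

```latex
% Generic prescribed-performance funnel: the output error e(t) must satisfy
%   -\delta\,\rho(t) < e(t) < \rho(t), \quad 0 \le \delta \le 1,
% so overshoot is bounded by \delta\rho_0 and e(t) converges to the residual
% set |e| < \rho_\infty no slower than the decay rate l:
\[
  \rho(t) = (\rho_0 - \rho_\infty)\,e^{-l t} + \rho_\infty,
  \qquad \rho_0 > \rho_\infty > 0,\quad l > 0.
\]
```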
- i3dLoc: Image-to-range Cross-domain Localization Robust to Inconsistent Environmental Conditions.By Yin, P., Xu, L., Zhang, J., Choset, H. and Scherer, S.In Proceedings of Robotics: Science and Systems (RSS ’21), 2021.
@inproceedings{yin2021i3dloc, title = {{i3dLoc:} {Image}-to-range Cross-domain Localization Robust to Inconsistent Environmental Conditions}, author = {Yin, Peng and Xu, Lingyun and Zhang, Ji and Choset, Howie and Scherer, Sebastian}, year = {2021}, month = jul, booktitle = {Proceedings of Robotics: Science and Systems (RSS '21)}, publisher = {Robotics: Science and Systems 2021}, url = {https://arxiv.org/pdf/2105.12883}, keywords = {Visual SLAM, Place Recognition, Condition Invariant, Viewpoint Invariant}, video = {https://www.youtube.com/watch?v=ta1_CeJV5nI} }
We present a method for localizing a single camera with respect to a point cloud map in indoor and outdoor scenes. The problem is challenging because correspondences of local invariant features are inconsistent across the image and 3D domains. It is even more challenging because the method must handle various environmental conditions such as illumination, weather, and seasonal changes. Our method matches equirectangular images to 3D range projections by extracting cross-domain symmetric place descriptors. Our key insight is to retain condition-invariant 3D geometry features from limited data samples while eliminating condition-related features with a designed Generative Adversarial Network. Based on these features, we further design a spherical convolution network to learn viewpoint-invariant symmetric place descriptors. We evaluate our method on extensive self-collected datasets, which involve long-term (varying appearance conditions), large-scale (up to 2 km of structured/unstructured environment), and multistory (four-floor confined space) scenarios. Our method surpasses the current state of the art, achieving roughly 3 times higher place retrieval under inconsistent environmental conditions and more than 3 times the accuracy in online localization. To highlight our method's generalization capabilities, we also evaluate recognition across different datasets. With a single trained model, i3dLoc demonstrates reliable visual localization under varied conditions.
Yin, Peng and Xu, Lingyun and Zhang, Ji and Choset, Howie and Scherer, Sebastian, "i3dLoc: Image-to-range Cross-domain Localization Robust to Inconsistent Environmental Conditions," Proceedings of Robotics: Science and Systems (RSS ’21), 2021.
- 3D Segmentation Learning from Sparse Annotations and Hierarchical Descriptors.By Yin, P., Xu, L., Ji, J., Scherer, S. and Choset, H.In IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5953–5960, Jul. 2021.
@article{yin20213d, title = {{3D} Segmentation Learning from Sparse Annotations and Hierarchical Descriptors}, author = {Yin, Peng and Xu, Lingyun and Ji, Jianmin and Scherer, Sebastian and Choset, Howie}, year = {2021}, month = jul, journal = {IEEE Robotics and Automation Letters}, volume = {6}, number = {3}, pages = {5953--5960}, doi = {10.1109/LRA.2021.3088796}, url = {https://www.ri.cmu.edu/wp-content/uploads/2021/06/RAL_SparseSeg.pdf}, keywords = {3D Segmentation, Sparse Annotation} }
One of the main obstacles to 3D semantic segmentation is the significant effort required to generate expensive point-wise annotations for fully supervised training. To alleviate this manual effort, we propose GIDSeg, a novel approach that can learn segmentation from sparse annotations by reasoning over global-regional structures and individual-vicinal properties. GIDSeg captures global and individual relations via a dynamic edge convolution network coupled with a kernelized identity descriptor. The ensemble effect is obtained by endowing a fine-grained receptive field to a low-resolution voxelized map. An adversarial learning module is also designed to further enhance the conditional constraint of identity descriptors within the joint feature distribution. Despite its apparent simplicity, our proposed approach achieves superior performance over the state of the art for inferring dense 3D segmentation from only sparse annotations. In particular, with annotations for only 5% of the raw data, GIDSeg outperforms other 3D segmentation methods.
Yin, Peng and Xu, Lingyun and Ji, Jianmin and Scherer, Sebastian and Choset, Howie, "3D Segmentation Learning from Sparse Annotations and Hierarchical Descriptors," IEEE Robotics and Automation Letters, 2021.
- Improving Off-Road Planning Techniques with Learned Costs from Physical Interactions.By Sivaprakasam, M., Triest, S., Wang, W., Yin, P. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Xi’an, China, pp. 4844–4850, 2021.
@inproceedings{sivaprakasam2021improving, title = {Improving Off-Road Planning Techniques with Learned Costs from Physical Interactions}, author = {Sivaprakasam, Matthew and Triest, Samuel and Wang, Wenshan and Yin, Peng and Scherer, Sebastian}, year = {2021}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Xi'an, China}, pages = {4844--4850}, doi = {10.1109/ICRA48506.2021.9561881} }
Autonomous ground vehicles have improved greatly over the past decades, but they still have limitations in off-road environments. There remains a need for planning techniques that effectively handle physical interactions between a vehicle and its surroundings. We present a method for modifying a standard path planning algorithm to address these problems by incorporating a learned model that accounts for complexities too hard to address manually. The model predicts how well a vehicle will be able to follow a potential plan in a given environment. These predictions are then used to assign costs to the associated paths, and the path predicted to be the most feasible is output as the final path. The result is a planner that does not rely solely on engineered features to evaluate the traversability of obstacles and can choose a better path based on an understanding of its own capability learned from previous interactions. This modification was integrated into the Hybrid A* algorithm, and experimental results demonstrated an improvement of 14.29% over the original version on a physical platform.
Sivaprakasam, Matthew and Triest, Samuel and Wang, Wenshan and Yin, Peng and Scherer, Sebastian, "Improving Off-Road Planning Techniques with Learned Costs from Physical Interactions," Proceedings - IEEE International Conference on Robotics and Automation, 2021.
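As a sketch of the core idea above, biasing a standard planner's cost with a learned feasibility prediction, consider the toy Python below; `base_cost_fn` and `feasibility_model` are hypothetical placeholders, not the paper's interfaces:

```python
def learned_path_cost(path, base_cost_fn, feasibility_model, weight=10.0):
    """Geometric path cost plus a penalty from a learned model that
    predicts how well the vehicle can follow the path (1.0 = perfectly)."""
    geometric = sum(base_cost_fn(a, b) for a, b in zip(path, path[1:]))
    feasibility = feasibility_model(path)        # predicted, in [0, 1]
    return geometric + weight * (1.0 - feasibility)

# Candidate paths (e.g. from Hybrid A*) would be ranked by this cost and
# the one predicted most feasible returned as the final plan.
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
flat_terrain = lambda path: 0.9                  # stand-in predictor
print(learned_path_cost([(0, 0), (1, 0), (2, 1)], euclid, flat_terrain))
```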
- In-flight positional and energy use data set of a DJI Matrice 100 quadcopter for small package delivery.By Rodrigues, T.A., Patrikar, J., Choudhry, A., Feldgoise, J., Arcot, V., Gahlaut, A., Lau, S., Moon, B., Wagner, B., Matthews, H.S., Scherer, S. and Samaras, C.In Scientific Data, Jun. 2021.
@article{rodrigues2021flight, title = {In-flight positional and energy use data set of a DJI Matrice 100 quadcopter for small package delivery}, author = {Rodrigues, Thiago A. and Patrikar, Jay and Choudhry, Arnav and Feldgoise, Jacob and Arcot, Vaibhav and Gahlaut, Aradhana and Lau, Sophia and Moon, Brady and Wagner, Bastian and Matthews, H. Scott and Scherer, Sebastian and Samaras, Constantine}, year = {2021}, month = jun, journal = {Scientific Data}, publisher = {Springer Science and Business Media LLC}, doi = {10.1038/s41597-021-00930-x}, url = {https://arxiv.org/pdf/2103.13313} }
We autonomously direct a small package-delivery quadcopter Uncrewed Aerial Vehicle (UAV), or "drone", to take off, fly a specified route, and land for a total of 209 flights while varying a set of operational parameters. The vehicle was equipped with onboard sensors, including GPS, an IMU, voltage and current sensors, and an ultrasonic anemometer, to collect high-resolution data on the inertial states, wind speed, and power consumption. Operational parameters, such as commanded ground speed, payload, and cruise altitude, were varied for each flight. This large data set has a total flight time of 10 hours and 45 minutes, was collected from April to October of 2019, and covers a total distance of approximately 65 kilometers. The data collected were validated by comparing flights with similar operational parameters. We believe these data will be of great interest to the research and industrial communities, who can use them to improve UAV designs, safety, and energy efficiency, as well as advance the physical understanding of in-flight operations for package delivery drones.
Rodrigues, Thiago A. and Patrikar, Jay and Choudhry, Arnav and Feldgoise, Jacob and Arcot, Vaibhav and Gahlaut, Aradhana and Lau, Sophia and Moon, Brady and Wagner, Bastian and Matthews, H. Scott and Scherer, Sebastian and Samaras, Constantine, "In-flight positional and energy use data set of a DJI Matrice 100 quadcopter for small package delivery," Scientific Data, 2021.
- TartanVO: A Generalizable Learning-based VO.By Wang, W., Hu, Y. and Scherer, S.In Proceedings of the 2020 Conference on Robot Learning, vol. 155, pp. 1761–1772, 2021.
@inproceedings{wang2021tartanvo, title = {{TartanVO:} {A} Generalizable Learning-based VO}, author = {Wang, Wenshan and Hu, Yaoyu and Scherer, Sebastian}, year = {2021}, month = {16--18 Nov}, booktitle = {Proceedings of the 2020 Conference on Robot Learning}, publisher = {PMLR}, series = {Proceedings of Machine Learning Research}, volume = {155}, pages = {1761--1772}, url = {https://proceedings.mlr.press/v155/wang21h/wang21h.pdf}, editor = {Kober, Jens and Ramos, Fabio and Tomlin, Claire}, video = {https://www.youtube.com/watch?v=NQ1UEh3thbU} }
We present the first learning-based visual odometry (VO) model that generalizes to multiple datasets and real-world scenarios while outperforming geometry-based methods in challenging scenes. We achieve this by leveraging the SLAM dataset TartanAir, which provides a large amount of diverse synthetic data in challenging environments. Furthermore, to make our VO model generalize across datasets, we propose an up-to-scale loss function and incorporate the camera intrinsic parameters into the model. Experiments show that a single model, TartanVO, trained only on synthetic data and without any fine-tuning, generalizes to real-world datasets such as KITTI and EuRoC, demonstrating significant advantages over geometry-based methods on challenging trajectories. Our code is available at https://github.com/castacks/tartanvo.
Wang, Wenshan and Hu, Yaoyu and Scherer, Sebastian, "TartanVO: A Generalizable Learning-based VO," Proceedings of the 2020 Conference on Robot Learning, 2021.
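One way to read the up-to-scale loss mentioned above is as a comparison of translation directions rather than magnitudes, so that scale-ambiguous monocular supervision still constrains the model. A minimal NumPy sketch of that reading (the authors' actual implementation lives in the linked repository):

```python
import numpy as np

def up_to_scale_loss(t_pred: np.ndarray, t_gt: np.ndarray, eps: float = 1e-8) -> float:
    """Translation loss invariant to scale: normalize predicted and
    ground-truth translations to unit length before comparing them."""
    t_pred = t_pred / (np.linalg.norm(t_pred, axis=-1, keepdims=True) + eps)
    t_gt = t_gt / (np.linalg.norm(t_gt, axis=-1, keepdims=True) + eps)
    return float(np.mean(np.sum((t_pred - t_gt) ** 2, axis=-1)))

# A batch of translations differing from ground truth only in scale
# incurs (near) zero loss:
t_gt = np.random.default_rng(1).normal(size=(4, 3))
print(up_to_scale_loss(2.5 * t_gt, t_gt))   # ~0.0
```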
- Graph-Based Topological Exploration Planning in Large-Scale 3D Environments.By Yang, F., Lee, D.-H., Keller, J. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Xi’an, China, pp. 12730–12736, 2021.
@inproceedings{yang2021graph, title = {Graph-Based Topological Exploration Planning in Large-Scale {3D} Environments}, author = {Yang, Fan and Lee, Dung-Han and Keller, John and Scherer, Sebastian}, year = {2021}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Xi'an, China}, pages = {12730--12736}, doi = {10.1109/ICRA48506.2021.9561830}, url = {https://arxiv.org/pdf/2103.16829} }
Currently, state-of-the-art exploration methods make various efforts to construct and maintain high-resolution world representations in order to acquire positions in configuration space that maximize information gain. However, those "optimal" selections can quickly become obsolete due to the influx of new information, especially in large-scale environments, which results in high-frequency re-planning that hinders exploration efficiency. In this paper, we propose a graph-based topological planning framework that builds a sparse topological map in three-dimensional (3D) space to guide exploration steps with high-level intents, rendering consistent exploration maneuvers. Specifically, this work presents a novel method to represent 3D spaces as convex polyhedrons whose geometric information is used to group spaces into distinctive regions. These distinctive regions are then added as nodes to the topological map, directing the exploration process. We compared our method with the state of the art in simulated environments. The proposed method achieves better space coverage and improves exploration efficiency by more than 40%. Finally, a field experiment was conducted to further evaluate the applicability of our method for efficient and robust exploration in real-world environments.
Yang, Fan and Lee, Dung-Han and Keller, John and Scherer, Sebastian, "Graph-Based Topological Exploration Planning in Large-Scale 3D Environments," Proceedings - IEEE International Conference on Robotics and Automation, 2021.
2020
- Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations.By Bonatti, R., Madaan, R., Vineet, V., Scherer, S. and Kapoor, A.In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1637–1644, 2020.
@inproceedings{bonatti2020learning, title = {Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations}, author = {Bonatti, Rogerio and Madaan, Ratnesh and Vineet, Vibhav and Scherer, Sebastian and Kapoor, Ashish}, year = {2020}, booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {1637--1644}, doi = {10.1109/IROS45743.2020.9341049}, url = {https://arxiv.org/pdf/1909.06993}, archiveprefix = {arXiv}, arxivid = {1909.06993}, eprint = {1909.06993}, video = {https://www.youtube.com/watch?v=AxE7qGKJWaw} }
Machines are a long way from robustly solving open-world perception-control tasks, such as first-person view (FPV) aerial navigation. While recent advances in end-to-end machine learning, especially imitation and reinforcement learning, appear promising, they are constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, is easy to generate but generally does not yield behaviors that are safe in diverse real-life scenarios. In this work, we propose a novel method for learning robust visuomotor policies for real-world deployment that can be trained purely with simulated data. We develop rich state representations that combine supervised and unsupervised environment data. Our approach takes a cross-modal perspective, where separate modalities correspond to the raw camera data and the system states relevant to the task, such as the relative pose of gates to the drone in the case of drone racing. We feed both data modalities into a novel factored architecture, which learns a joint low-dimensional embedding via Variational Auto-Encoders. This compact representation is then fed into a control policy, which we trained using imitation learning with expert trajectories in a simulator. We analyze the rich latent spaces learned with our proposed representations, and show that the use of our cross-modal architecture significantly improves control policy performance compared to end-to-end learning or purely unsupervised feature extractors. We also present real-world results for drone navigation through gates in different track configurations and environmental conditions. Our proposed method, which runs fully onboard, can successfully generalize the learned representations and policies across simulation and reality, significantly outperforming baseline approaches. Supplementary video: https://youtu.be/VKc3A5HlUU8
Bonatti, Rogerio and Madaan, Ratnesh and Vineet, Vibhav and Scherer, Sebastian and Kapoor, Ashish, "Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations," 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- Autonomous aerial cinematography in unstructured environments with learned artistic decision-making.By Bonatti, R., Wang, W., Ho, C., Ahuja, A., Gschwindt, M., Camci, E., Kayacan, E., Choudhury, S. and Scherer, S.In Journal of Field Robotics, Jan. 2020.
@article{bonatti2020autonomous1, title = {Autonomous aerial cinematography in unstructured environments with learned artistic decision-making}, author = {Bonatti, Rogerio and Wang, Wenshan and Ho, Cherie and Ahuja, Aayush and Gschwindt, Mirko and Camci, Efe and Kayacan, Erdal and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2020}, month = jan, journal = {Journal of Field Robotics}, doi = {10.1002/rob.21931}, issn = {15564967}, url = {https://arxiv.org/pdf/1910.06988}, keywords = {aerial robotics,cinematography,computer vision,learning,mapping,motion planning} }
Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is a demand for an autonomous cinematographer that can reason about both geometry and scene context in real-time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or global positioning system tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow fixed artistic guidelines specified before the flight. In this study, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that for the first time combines: (a) vision-based target estimation; (b) 3D signed-distance mapping for occlusion estimation; (c) efficient trajectory optimization for long time-horizon camera motion; and (d) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at https://youtu.be/ookhHnqmlaU.
Bonatti, Rogerio and Wang, Wenshan and Ho, Cherie and Ahuja, Aayush and Gschwindt, Mirko and Camci, Efe and Kayacan, Erdal and Choudhury, Sanjiban and Scherer, Sebastian, "Autonomous aerial cinematography in unstructured environments with learned artistic decision-making," Journal of Field Robotics, 2020.
- Autonomous Drone Cinematographer: Using Artistic Principles to Create Smooth, Safe, Occlusion-Free Trajectories for Aerial Filming.By Bonatti, R., Zhang, Y., Choudhury, S., Wang, W. and Scherer, S.In International Symposium on Experimental Robotics, pp. 119–129, 2020.
@inproceedings{bonatti2020autonomous2, title = {Autonomous Drone Cinematographer: {Using} Artistic Principles to Create Smooth, Safe, Occlusion-Free Trajectories for Aerial Filming}, author = {Bonatti, Rogerio and Zhang, Yanfu and Choudhury, Sanjiban and Wang, Wenshan and Scherer, Sebastian}, year = {2020}, month = nov, booktitle = {International Symposium on Experimental Robotics}, pages = {119--129}, doi = {10.1007/978-3-030-33950-0_11}, url = {https://arxiv.org/pdf/1808.09563} }
Autonomous aerial cinematography has the potential to enable the automatic capture of aesthetically pleasing videos without human intervention, empowering individuals with the capabilities of high-end film studios. Current approaches either handle only offline trajectory generation or offer strategies that reason over short time horizons with simplistic representations of obstacles, which results in jerky movement and low real-life applicability. In this work, we develop a method for aerial filming that can trade off shot smoothness, occlusion, and cinematography guidelines in a principled manner, even under noisy actor predictions. We present a novel algorithm for real-time covariant gradient descent, which we use to efficiently find the desired trajectories by optimizing a set of cost functions. Experimental results show that our approach creates attractive shots, avoiding obstacles and occlusion 65 times over 1.25 hours of flight time and re-planning at 5 Hz with a 10 s time horizon. We robustly film human actors, cars, and bicycles performing different motions among obstacles, using various shot types.
Bonatti, Rogerio and Zhang, Yanfu and Choudhury, Sanjiban and Wang, Wenshan and Scherer, Sebastian, "Autonomous Drone Cinematographer: Using Artistic Principles to Create Smooth, Safe, Occlusion-Free Trajectories for Aerial Filming," International Symposium on Experimental Robotics, 2020.
- Human-in-the-loop Planning and Monitoring of Swarm Search and Service Missions.By Chandarana, M., Lewis, M., Sycara, K. and Scherer, S.In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’2020), 2020.
@inproceedings{chandarana2020human, title = {Human-in-the-loop Planning and Monitoring of Swarm Search and Service Missions}, author = {Chandarana, Meghan and Lewis, Michael and Sycara, Katia and Scherer, Sebastian}, year = {2020}, booktitle = {International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'2020)}, url = {http://d-scholarship.pitt.edu/39098/1/meghan-aamas.pdf}, organization = {IFMAS} }
Chandarana, Meghan and Lewis, Michael and Sycara, Katia and Scherer, Sebastian, "Human-in-the-loop Planning and Monitoring of Swarm Search and Service Missions," International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’2020), 2020.
- RGB-D SLAM in Dynamic Environments Using Point Correlations.By Dai, W., Zhang, Y., Li, P., Fang, Z. and Scherer, S.In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
@article{dai2020rgb, title = {{RGB-D SLAM} in Dynamic Environments Using Point Correlations}, author = {Dai, Weichen and Zhang, Yu and Li, Ping and Fang, Zheng and Scherer, Sebastian}, year = {2020}, journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, publisher = {IEEE}, doi = {10.1109/TPAMI.2020.3010942}, url = {https://arxiv.org/pdf/1811.03217} }
Dai, Weichen and Zhang, Yu and Li, Ping and Fang, Zheng and Scherer, Sebastian, "RGB-D SLAM in Dynamic Environments Using Point Correlations," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
- Efficient Multiresolution Scrolling Grid for Stereo Vision-based MAV Obstacle Avoidance.By Dexheimer, E., Mangelson, J., Scherer, S. and Kaess, M.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765, 2020.
@inproceedings{dexheimer2020efficient, title = {Efficient Multiresolution Scrolling Grid for Stereo Vision-based {MAV} Obstacle Avoidance}, author = {Dexheimer, Eric and Mangelson, Joshua and Scherer, Sebastian and Kaess, Michael}, year = {2020}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4758--4765}, doi = {10.1109/IROS45743.2020.9341718}, url = {https://www.cs.cmu.edu/~kaess/pub/Dexheimer20iros.pdf} }
Fast, aerial navigation in cluttered environments requires a suitable map representation for path planning. In this paper, we propose the use of an efficient, structured multiresolution representation that expands the sensor range of dense local grids for memory-constrained platforms. While similar data structures have been proposed, we avoid processing redundant occupancy information and use the organization of the grid to improve efficiency. By layering 3D circular buffers that double in resolution at each level, obstacles near the robot are represented at finer resolutions while coarse spatial information is maintained at greater distances. We also introduce a novel method for efficiently calculating the Euclidean distance transform on the multiresolution grid by leveraging its structure. Lastly, we utilize our proposed framework to demonstrate improved stereo camera-based MAV obstacle avoidance with an optimization-based planner in simulation.
Dexheimer, Eric and Mangelson, Joshua and Scherer, Sebastian and Kaess, Michael, "Efficient Multiresolution Scrolling Grid for Stereo Vision-based MAV Obstacle Avoidance," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
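The layered-buffer idea above can be sketched as a pyramid of fixed-size 3D circular buffers whose cell size doubles per level, with world coordinates wrapped into buffer indices by modular arithmetic so the grid "scrolls" with the robot. A minimal illustration under those assumptions (not the authors' data structure):

```python
import numpy as np

class ScrollingPyramid:
    """Stack of 3D circular buffers: level 0 is finest near the robot;
    each coarser level doubles the cell size, so coarse spatial context
    is kept at greater distances for the same memory per level."""
    def __init__(self, cells: int = 64, base_res: float = 0.1, levels: int = 4):
        self.grids = [np.zeros((cells,) * 3, dtype=np.uint8) for _ in range(levels)]
        self.res = [base_res * 2 ** k for k in range(levels)]
        self.cells = cells

    def index(self, level: int, xyz) -> tuple:
        # World position -> wrapped buffer index: modular indexing lets the
        # buffer scroll with the robot without copying any voxel data.
        return tuple(int(np.floor(c / self.res[level])) % self.cells for c in xyz)

grid = ScrollingPyramid()
i = grid.index(0, (1.23, -4.56, 0.78))
grid.grids[0][i] = 1   # mark a voxel occupied at the finest level
```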
- 3D-SiamRPN: An End-to-end Learning Method for Real-time 3D Single Object Tracking using Raw Point Cloud.By Fang, Z., Zhou, S., Cui, Y. and Scherer, S.In IEEE Sensors Journal, 2020.
@article{fang20203d, title = {{3D-SiamRPN}: {An} End-to-end Learning Method for Real-time {3D} Single Object Tracking using Raw Point Cloud}, author = {Fang, Zheng and Zhou, Sifan and Cui, Yubo and Scherer, Sebastian}, year = {2020}, journal = {IEEE Sensors Journal}, publisher = {IEEE}, doi = {10.1109/JSEN.2020.3033034}, url = {https://arxiv.org/pdf/2108.05630} }
Fang, Zheng and Zhou, Sifan and Cui, Yubo and Scherer, Sebastian, "3D-SiamRPN: An End-to-end Learning Method for Real-time 3D Single Object Tracking using Raw Point Cloud," IEEE Sensors Journal, 2020.
- Deep-Learning Assisted High-Resolution Binocular Stereo Depth Reconstruction.By Hu, Y., Zhen, W. and Scherer, S.In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 8637–8643, 2020.
@inproceedings{hu2020deep, title = {Deep-Learning Assisted High-Resolution Binocular Stereo Depth Reconstruction}, author = {Hu, Yaoyu and Zhen, Weikun and Scherer, Sebastian}, year = {2020}, booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {8637--8643}, doi = {10.1109/ICRA40945.2020.9196655}, url = {https://arxiv.org/pdf/1912.05012} }
This work presents dense stereo reconstruction using high-resolution images for infrastructure inspection. State-of-the-art stereo reconstruction methods, both learning-based and non-learning, consume excessive computational resources on high-resolution data. Recent learning-based methods achieve top ranks on most benchmarks but suffer from generalization issues due to a lack of task-specific training data. We propose to use a less resource-demanding non-learning method, guided by a learning-based model, to handle high-resolution images and achieve accurate stereo reconstruction. The deep-learning model produces an initial disparity prediction with an uncertainty for each pixel of the down-sampled stereo image pair. The uncertainty serves both as a self-measure of the model's generalization ability and as the per-pixel search range around the initially predicted disparity. The downstream process performs a modified version of the Semi-Global Block Matching method with the up-sampled per-pixel search range. The proposed deep-learning-assisted method is evaluated on the Middlebury dataset and on high-resolution stereo images collected by our customized binocular stereo camera. The combination of learning and non-learning methods achieves better performance on 12 out of 15 cases of the Middlebury dataset. In our infrastructure inspection experiments, the average 3D reconstruction error is less than 0.004 m.
Hu, Yaoyu and Zhen, Weikun and Scherer, Sebastian, "Deep-Learning Assisted High-Resolution Binocular Stereo Depth Reconstruction," 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020.
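The per-pixel search range described above can be sketched as an interval around the upsampled network disparity, widened by the predicted uncertainty; a real pipeline would hand these bounds to a modified Semi-Global Block Matching stage. A hedged NumPy illustration (parameter names are ours):

```python
import numpy as np

def disparity_search_range(d_pred, sigma, upsample=4, k=3.0):
    """Per-pixel disparity search bounds from a low-resolution prediction
    `d_pred` and its uncertainty `sigma` (both HxW). Nearest-neighbor
    upsampling via np.kron; disparities scale with image resolution."""
    d_up = np.kron(d_pred, np.ones((upsample, upsample))) * upsample
    s_up = np.kron(sigma, np.ones((upsample, upsample))) * upsample
    lo = np.clip(d_up - k * s_up, 0.0, None)   # disparities are non-negative
    hi = d_up + k * s_up
    return lo, hi

lo, hi = disparity_search_range(np.full((2, 2), 10.0), np.full((2, 2), 0.5))
print(lo[0, 0], hi[0, 0])   # 34.0 46.0 at 4x resolution
```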
- Feasibility of discriminating UAV propellers noise from distress signals to locate people in enclosed environments using MEMS microphone arrays.By Izquierdo, A., Val, L.D., Villacorta, J.J., Zhen, W., Scherer, S. and Fang, Z.In Sensors, vol. 20, no. 3, p. 597, 2020.
@article{izquierdo2020feasibility, title = {Feasibility of discriminating {UAV} propellers noise from distress signals to locate people in enclosed environments using {MEMS} microphone arrays}, author = {Izquierdo, Alberto and Val, Lara Del and Villacorta, Juan J and Zhen, Weikun and Scherer, Sebastian and Fang, Zheng}, year = {2020}, journal = {Sensors}, publisher = {Multidisciplinary Digital Publishing Institute}, volume = {20}, number = {3}, pages = {597}, doi = {10.3390/s20030597}, issn = {1424-8220}, url = {https://pubmed.ncbi.nlm.nih.gov/31973156/} }
Detecting and finding people are complex tasks when visibility is reduced, for example, when a fire occurs and heat sources and large amounts of smoke are generated. Under these circumstances, locating survivors using thermal or conventional cameras is not possible, and alternative techniques are necessary. The challenge of this work was to analyze whether it is feasible to integrate an acoustic camera, developed at the University of Valladolid, on an unmanned aerial vehicle (UAV) to locate, by sound, people calling for help in enclosed environments with reduced visibility. The acoustic array, based on MEMS (micro-electro-mechanical system) microphones, locates acoustic sources in space, and the UAV navigates autonomously through enclosed spaces. This paper presents the first experimental results of locating the angles of arrival of multiple sound sources, including a person's cries for help, in an enclosed environment. The results are promising: the system proves able to discriminate the noise generated by the UAV's propellers while identifying the angles of arrival of the direct sound signal and its first echoes reflected from reflective surfaces.
Izquierdo, Alberto and Val, Lara Del and Villacorta, Juan J and Zhen, Weikun and Scherer, Sebastian and Fang, Zheng, "Feasibility of discriminating UAV propellers noise from distress signals to locate people in enclosed environments using MEMS microphone arrays," Sensors, 2020.
- A Robust Multi-Stereo Visual-Inertial Odometry Pipeline.By Jaekel, J., Mangelson, J., Scherer, S. and Kaess, M.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4623–4630, 2020.
@inproceedings{jaekel2020robust, title = {A Robust Multi-Stereo Visual-Inertial Odometry Pipeline}, author = {Jaekel, Joshua and Mangelson, Joshua and Scherer, Sebastian and Kaess, Michael}, year = {2020}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4623--4630}, doi = {10.1109/IROS45743.2020.9341604}, url = {https://www.cs.cmu.edu/%7Ekaess/pub/Jaekel20iros.html} }
In this paper we present a novel multi-stereo visual-inertial odometry (VIO) framework which aims to improve the robustness of a robot’s state estimate during aggressive motion and in visually challenging environments. Our system uses a fixed-lag smoother which jointly optimizes for poses and landmarks across all stereo pairs. We propose a 1-point RANdom SAmple Consensus (RANSAC) algorithm which is able to perform outlier rejection across features from all stereo pairs. To handle the problem of noisy extrinsics, we account for uncertainty in the calibration of each stereo pair and model it in both our front-end and back-end. The result is a VIO system which is able to maintain an accurate state estimate under conditions that have typically proven to be challenging for traditional state-of-the-art VIO systems. We demonstrate the benefits of our proposed multi-stereo algorithm by evaluating it with both simulated and real world data. We show that our proposed algorithm is able to maintain a state estimate in scenarios where traditional VIO algorithms fail.
Jaekel, Joshua and Mangelson, Joshua and Scherer, Sebastian and Kaess, Michael, "A Robust Multi-Stereo Visual-Inertial Odometry Pipeline," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- ALFA: A dataset for UAV fault and anomaly detection.By Keipour, A., Mousaei, M. and Scherer, S.In The International Journal of Robotics Research, Oct. 2020.
@article{keipour2020alfa, title = {{ALFA:} {A} dataset for {UAV} fault and anomaly detection}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian}, year = {2020}, month = oct, journal = {The International Journal of Robotics Research}, publisher = {SAGE Publications Ltd STM}, doi = {10.1177/0278364920966642}, issn = {0278-3649}, url = {https://arxiv.org/pdf/1907.06268}, archiveprefix = {arXiv}, arxivid = {1907.06268}, eprint = {1907.06268}, keywords = {Dataset,actuator failure,anomaly detection,autonomous robots,engine failure,evaluation metrics,fault detection and isolation,fixed-wing robots,flight safety,unmanned aerial vehicles} }
We present a dataset of several fault types in the control surfaces of a fixed-wing unmanned aerial vehicle (UAV) for use in fault detection and isolation (FDI) and anomaly detection (AD) research. Currently, the dataset includes processed data for 47 autonomous flights with 23 sudden full engine failure scenarios and 24 scenarios covering 7 other types of sudden control surface (actuator) faults, with a total of 66 minutes of flight under normal conditions and 13 minutes of post-fault flight time. It additionally includes many hours of raw data from fully autonomous, autopilot-assisted, and manual flights with tens of fault scenarios. The ground truth of the time and type of fault is provided in each scenario to enable evaluation of methods using the dataset. We have also provided helper tools in several programming languages to load and work with the data and to assist in evaluating detection methods. A set of metrics is proposed to help compare different methods using the dataset. Most current fault detection methods are evaluated in simulation and, as far as we know, this is the only dataset providing real flight data with faults at this scale. We hope it will help advance the state of the art in AD and FDI research for autonomous aerial vehicles and mobile robots, further enhancing the safety of autonomous and remote flight operations. The dataset and the provided tools can be accessed from https://doi.org/10.1184/R1/12707963.
Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian, "ALFA: A dataset for UAV fault and anomaly detection," The International Journal of Robotics Research, 2020.
- Real-time Motion Planning of Curvature Continuous Trajectories for Urban UAV Operations in Wind.By Patrikar, J., Dugar, V., Arcot, V. and Scherer, S.In 2020 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 854–861, 2020.
@inproceedings{patrikar2020real, title = {Real-time Motion Planning of Curvature Continuous Trajectories for Urban {UAV} Operations in Wind}, author = {Patrikar, Jay and Dugar, Vishal and Arcot, Vaibhav and Scherer, Sebastian}, year = {2020}, booktitle = {2020 International Conference on Unmanned Aircraft Systems (ICUAS)}, pages = {854--861}, doi = {10.1109/ICUAS48674.2020.9213837} }
Patrikar, Jay and Dugar, Vishal and Arcot, Vaibhav and Scherer, Sebastian, "Real-time Motion Planning of Curvature Continuous Trajectories for Urban UAV Operations in Wind," 2020 International Conference on Unmanned Aircraft Systems (ICUAS), 2020.
- Wind and the City: Utilizing UAV-Based In-Situ Measurements for Estimating Urban Wind Fields.By Patrikar, J., Moon, B. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1254–1260, 2020.
@inproceedings{patrikar2020wind, title = {Wind and the City: {Utilizing} {UAV}-Based In-Situ Measurements for Estimating Urban Wind Fields}, author = {Patrikar, Jay and Moon, Brady and Scherer, Sebastian}, year = {2020}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {1254--1260}, doi = {10.1109/IROS45743.2020.9340812}, url = {https://www.ri.cmu.edu/publications/wind-and-the-city-utilizing-uav-based-in-situ-measurements-for-estimating-urban-wind-fields/}, video = {https://youtu.be/U4XdYgSJRZM} }
A high-quality estimate of wind fields can potentially improve the safety and performance of Unmanned Aerial Vehicles (UAVs) operating in dense urban areas. Computational Fluid Dynamics (CFD) simulations can help provide a wind field estimate, but their accuracy depends on knowledge of the distribution of the inlet boundary conditions. This paper provides a real-time methodology using a Particle Filter (PF) that utilizes wind measurements from a UAV to solve the inverse problem of predicting the inlet conditions as the UAV traverses the flow field. A Gaussian Process Regression (GPR) approach is used as a surrogate function to maintain the real-time nature of the proposed methodology. Real-world experiments with a UAV at an urban test site prove the efficacy of the proposed method. The flight test shows that the 95% confidence interval for the difference between the mean estimated inlet conditions and mean ground truth measurements closely bounds zero, with the difference in mean angles being between −3.7° and 1.3° and the difference in mean magnitudes being between −0.2 m/s and 0.0 m/s. Video: https://youtu.be/U4XdYgSJRZM
Patrikar, Jay and Moon, Brady and Scherer, Sebastian, "Wind and the City: Utilizing UAV-Based In-Situ Measurements for Estimating Urban Wind Fields," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
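The inverse-problem step above amounts to a particle filter whose measurement model is the GPR surrogate of the CFD simulation: each particle is a hypothesized inlet condition, reweighted by how well its predicted wind matches what the UAV measures. A minimal sketch with hypothetical names (`surrogate` stands in for the trained GPR, not the paper's API):

```python
import numpy as np

def pf_measurement_update(particles, weights, z_measured, surrogate, noise_std=0.5):
    """One PF update: reweight inlet-condition particles by the Gaussian
    likelihood of the measured wind vector under each particle's
    surrogate-predicted wind at the UAV's current position."""
    pred = np.array([surrogate(p) for p in particles])   # predicted winds
    resid = np.linalg.norm(pred - z_measured, axis=-1)
    weights = weights * np.exp(-0.5 * (resid / noise_std) ** 2)
    return weights / weights.sum()

# Toy run: particles are (speed, direction) inlet hypotheses; the surrogate
# here is a trivial identity stand-in for the trained GPR.
particles = np.random.default_rng(2).uniform([0, 0], [10, 360], size=(100, 2))
weights = np.full(100, 1 / 100)
weights = pf_measurement_update(particles, weights, np.array([5.0, 180.0]),
                                surrogate=lambda p: p, noise_std=20.0)
```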
- Precision UAV Landing in Unstructured Environments.By Pluckter, K. and Scherer, S.In International Symposium on Experimental Robotics, pp. 177–187, 2020.
@inproceedings{pluckter2020precision, title = {Precision {UAV} Landing in Unstructured Environments}, author = {Pluckter, Kevin and Scherer, Sebastian}, year = {2020}, month = nov, booktitle = {International Symposium on Experimental Robotics}, pages = {177--187}, doi = {10.1007/978-3-030-33950-0_16}, url = {https://www.ri.cmu.edu/app/uploads/2019/01/Pluckter_Scherer_ISER_2018_Precision_Landing.pdf} }
Autonomous landing of a drone is a necessary part of autonomous flight. One way to achieve high certainty of a safe landing is to return to the same location the drone took off from. Implementations of return-to-home functionality fall short when relying solely on GPS or odometry, as measurement inaccuracies and drift in the state estimate guide the drone to a position with a large offset from the initial position. This can be particularly dangerous if the drone took off next to something like a body of water. Current work on precision landing relies on localizing to a known landing pattern, which requires the pilot to carry the pattern with them. We propose a method using a downward-facing fisheye-lens camera to accurately land a UAV where it took off on an unstructured surface, without a landing pattern. Specifically, the approach uses a position estimate relative to the drone's take-off path to guide it back. With the large field of view provided by the fisheye lens, our algorithm provides visual feedback from a large initial position error at the start of the landing until 25 cm above the ground at its end. Empirically, the algorithm corrects the drift error in the state estimate and lands with an accuracy of 40 cm.
Pluckter, Kevin and Scherer, Sebastian, "Precision UAV Landing in Unstructured Environments," International Symposium on Experimental Robotics, 2020.
- Contingency landing site map generation system,By Strabala, K., Scherer, S. and Arcot, V.US Patent 10,562,643, Feb-2020
@patent{strabala2020contingency, title = {Contingency landing site map generation system}, author = {Strabala, Kyle and Scherer, Sebastian and Arcot, Vaibhav}, year = {2020}, month = feb, url = {https://patents.google.com/patent/US10562643B1/en}, note = {US Patent 10,562,643} }
Strabala, Kyle and Scherer, Sebastian and Arcot, Vaibhav, "Contingency landing site map generation system," US Patent 10,562,643, 2020.
- Efficient Trajectory Library Filtering for Quadrotor Flight in Unknown Environments.By Viswanathan, V., Dexheimer, E., Li, G., Loianno, G., Kaess, M. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2510–2517, 2020.
@inproceedings{viswanathan2020efficient, title = {Efficient Trajectory Library Filtering for Quadrotor Flight in Unknown Environments}, author = {Viswanathan, Vaibhav and Dexheimer, Eric and Li, Guanrui and Loianno, Giuseppe and Kaess, Michael and Scherer, Sebastian}, year = {2020}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {2510--2517}, doi = {10.1109/IROS45743.2020.9341273}, url = {https://www.cs.cmu.edu/~kaess/pub/Viswanathan20iros.pdf}, video = {https://youtu.be/y_lVtT8lJMk} }
Viswanathan, Vaibhav and Dexheimer, Eric and Li, Guanrui and Loianno, Giuseppe and Kaess, Michael and Scherer, Sebastian, "Efficient Trajectory Library Filtering for Quadrotor Flight in Unknown Environments," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- Visual Memorability for Robotic Interestingness via Unsupervised Online Learning.By Wang, C., Wang, W., Qiu, Y., Hu, Y. and Scherer, S.In European Conference on Computer Vision (ECCV), pp. 52–68, 2020.
@inproceedings{wang2020visual, title = {Visual Memorability for Robotic Interestingness via Unsupervised Online Learning}, author = {Wang, Chen and Wang, Wenshan and Qiu, Yuheng and Hu, Yafei and Scherer, Sebastian}, year = {2020}, booktitle = {European Conference on Computer Vision (ECCV)}, publisher = {Springer International Publishing}, pages = {52--68}, doi = {10.1007/978-3-030-58536-5_4}, url = {https://arxiv.org/pdf/2005.08829}, editor = {Vedaldi, Andrea and Bischof, Horst and Brox, Thomas and Frahm, Jan-Michael}, keywords = {Unsupervised, Online, Memorability, Interestingness} }
In this paper, we aim to solve the problem of interesting scene prediction for mobile robots. This area is currently underexplored but is crucial for many practical applications such as autonomous exploration and decision making. First, we expect a robot to detect novel and interesting scenes in unknown environments and to lose interest over time after repeatedly observing similar objects. Second, we expect the robot to learn from unbalanced data in a short time, as robots normally only know the uninteresting scenes before they are deployed. Inspired by these practical demands, we first propose a novel translation-invariant visual memory for recalling and identifying interesting scenes, then design a three-stage architecture of long-term, short-term, and online learning for human-like experience, environmental knowledge, and online adaptation, respectively. We demonstrate that our approach is able to learn online and find interesting scenes for practical exploration tasks. It also achieves much higher accuracy than the state-of-the-art algorithm on very challenging robotic interestingness prediction datasets.
Wang, Chen and Wang, Wenshan and Qiu, Yuheng and Hu, Yafei and Scherer, Sebastian, "Visual Memorability for Robotic Interestingness via Unsupervised Online Learning," European Conference on Computer Vision (ECCV), 2020.
- TartanAir: A Dataset to Push the Limits of Visual SLAM.By Wang, W., Zhu, D., Wang, X., Hu, Y., Qiu, Y., Wang, C., Hu, Y., Kapoor, A. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4909–4916, 2020.
@inproceedings{wang2020tartanair, title = {{TartanAir:} {A} Dataset to Push the Limits of Visual {SLAM}}, author = {Wang, Wenshan and Zhu, Delong and Wang, Xiangwei and Hu, Yaoyu and Qiu, Yuheng and Wang, Chen and Hu, Yafei and Kapoor, Ashish and Scherer, Sebastian}, year = {2020}, month = oct, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4909--4916}, doi = {10.1109/IROS45743.2020.9341801}, url = {https://arxiv.org/pdf/2003.14338}, archiveprefix = {arXiv}, arxivid = {2003.14338}, eprint = {2003.14338}, video = {https://youtu.be/qDwfHvTbJx4} }
We present a challenging dataset, TartanAir, for robot navigation tasks and more. The data is collected in photo-realistic simulation environments in the presence of various lighting conditions, weather, and moving objects. By collecting data in simulation, we are able to obtain multi-modal sensor data and precise ground truth labels, including stereo RGB images, depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. We set up a large number of environments with various styles and scenes, covering challenging viewpoints and diverse motion patterns that are difficult to achieve with physical data collection platforms.
Wang, Wenshan and Zhu, Delong and Wang, Xiangwei and Hu, Yaoyu and Qiu, Yuheng and Wang, Chen and Hu, Yafei and Kapoor, Ashish and Scherer, Sebastian, "TartanAir: A Dataset to Push the Limits of Visual SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- Line-Based 2D–3D Registration and Camera Localization in Structured Environments.By Yu, H., Zhen, W., Yang, W. and Scherer, S.In IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 11, pp. 8962–8972, Jul. 2020.
@article{yu2020line, title = {Line-Based {2D–3D} Registration and Camera Localization in Structured Environments}, author = {Yu, Huai and Zhen, Weikun and Yang, Wen and Scherer, Sebastian}, year = {2020}, month = jul, journal = {IEEE Transactions on Instrumentation and Measurement}, volume = {69}, number = {11}, pages = {8962--8972}, doi = {10.1109/TIM.2020.2999137} }
Accurate registration of 2D imagery with point clouds is a key technology for image-LiDAR point cloud fusion, camera-to-laser-scanner calibration, and camera localization. Despite continuous improvements, automatic registration of 2D and 3D data without additional textured information still faces great challenges. In this paper, we propose a new 2D-3D registration method to estimate 2D-3D line feature correspondences and the camera pose in untextured point clouds of structured environments. Specifically, we first use geometric constraints between vanishing points and 3D parallel lines to compute all feasible camera rotations. Then, we use a hypothesis testing strategy to estimate the 2D-3D line correspondences and the translation vector. By checking consistency with the computed correspondences, the best rotation matrix can be found. Finally, the camera pose is further refined using non-linear optimization over all the 2D-3D line correspondences. The experimental results demonstrate the effectiveness of the proposed method on synthetic and real datasets (outdoor and indoor) with repeated structures and rapid depth changes.
Yu, Huai and Zhen, Weikun and Yang, Wen and Scherer, Sebastian, "Line-Based 2D–3D Registration and Camera Localization in Structured Environments," IEEE Transactions on Instrumentation and Measurement, 2020.
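The rotation step described above, aligning image vanishing-point directions with 3D parallel-line directions, can be illustrated with the generic Kabsch/SVD fit below. This is a stand-in for the paper's approach, which instead enumerates feasible rotations from the geometric constraints and verifies them by hypothesis testing:

```python
import numpy as np

def rotation_from_directions(dirs_map: np.ndarray, dirs_cam: np.ndarray) -> np.ndarray:
    """Kabsch/SVD estimate of R such that dirs_cam ≈ (R @ dirs_map.T).T,
    given matched (N, 3) unit direction vectors. Real vanishing-point
    directions carry a sign ambiguity that this generic sketch ignores."""
    H = dirs_map.T @ dirs_cam                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```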
- Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences.By Yu, H., Zhen, W., Yang, W., Zhang, J. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4588–4594, 2020.
@inproceedings{yu2020monocular, title = {Monocular Camera Localization in Prior {LiDAR} Maps with {2D-3D} Line Correspondences}, author = {Yu, Huai and Zhen, Weikun and Yang, Wen and Zhang, Ji and Scherer, Sebastian}, year = {2020}, month = oct, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4588--4594}, doi = {10.1109/IROS45743.2020.9341690}, url = {https://arxiv.org/pdf/2004.00740}, archiveprefix = {arXiv}, arxivid = {2004.00740}, eprint = {2004.00740}, video = {https://youtu.be/H80Bnxm8IPE} }
Light-weight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO/VIO) techniques are well developed for state estimation but suffer from inevitable accumulated drift and pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localization method in prior LiDAR maps using directly estimated 2D-3D line correspondences. To handle the appearance differences and modality gaps between untextured point clouds and images, geometric 3D lines are extracted offline from LiDAR maps while robust 2D lines are extracted online from video sequences. With the pose prediction from VIO, we can efficiently obtain coarse 2D-3D line correspondences. Afterwards, the camera poses and 2D-3D correspondences are iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers. The experimental results on the EuRoC MAV dataset and our collected dataset demonstrate that the proposed method can efficiently estimate camera poses without accumulated drift or pose jumps in urban environments. The code and our collected data are available at https://github.com/levenberg/2D-3D-pose-tracking.
Yu, Huai and Zhen, Weikun and Yang, Wen and Zhang, Ji and Scherer, Sebastian, "Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint.By Zhao, S., Wang, P., Zhang, H., Fang, Z. and Scherer, S.In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4505–4512, 2020.
@inproceedings{zhao2020tp, title = {{TP-TIO: A} Robust Thermal-Inertial Odometry with Deep ThermalPoint}, author = {Zhao, Shibo and Wang, Peng and Zhang, Hengrui and Fang, Zheng and Scherer, Sebastian}, year = {2020}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {4505--4512}, doi = {10.1109/IROS45743.2020.9341716}, url = {https://arxiv.org/pdf/2012.03455}, video = {https://youtu.be/sBqW6GD9Vjg} }
Zhao, Shibo and Wang, Peng and Zhang, Hengrui and Fang, Zheng and Scherer, Sebastian, "TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- LiDAR-enhanced Structure-from-Motion.By Zhen, W., Hu, Y., Yu, H. and Scherer, S.In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 6773–6779, 2020.
@inproceedings{zhen2020lidar, title = {{LiDAR}-enhanced Structure-from-Motion}, author = {Zhen, Weikun and Hu, Yaoyu and Yu, Huai and Scherer, Sebastian}, year = {2020}, booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)}, pages = {6773--6779}, doi = {10.1109/ICRA40945.2020.9197030}, url = {https://arxiv.org/pdf/1911.03369} }
Although Structure-from-Motion (SfM) is a maturing technique that has been widely used in many applications, state-of-the-art SfM algorithms are still not robust enough in certain situations. For example, images for inspection purposes are often taken at close range to capture detailed textures, which results in less overlap between images and thus decreases the accuracy of the estimated motion. In this paper, we propose a LiDAR-enhanced SfM pipeline that jointly processes data from a rotating LiDAR and a stereo camera pair to estimate sensor motion. We show that incorporating LiDAR helps to effectively reject falsely matched images and significantly improves the model consistency in large-scale environments. Experiments are conducted in different environments to test the performance of the proposed pipeline, and comparisons with state-of-the-art SfM algorithms are reported.
Zhen, Weikun and Hu, Yaoyu and Yu, Huai and Scherer, Sebastian, "LiDAR-enhanced Structure-from-Motion," 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020.
2019
- Hybrid Model for A Priori Performance Prediction of Multi-Job Type Swarm Search and Service Missions.By Chandarana, M., Hughes, D., Lewis, M., Sycara, K. and Scherer, S.In 2019 19th International Conference on Advanced Robotics (ICAR), pp. 714–719, 2019.
@inproceedings{chandarana2019hybrid, title = {Hybrid Model for A Priori Performance Prediction of Multi-Job Type Swarm Search and Service Missions}, author = {Chandarana, Meghan and Hughes, Dana and Lewis, Michael and Sycara, Katia and Scherer, Sebastian}, year = {2019}, booktitle = {2019 19th International Conference on Advanced Robotics (ICAR)}, pages = {714--719}, doi = {10.1109/ICAR46387.2019.8981582}, url = {https://www.ri.cmu.edu/app/uploads/2020/01/ICAR2019_final.pdf} }
Chandarana, Meghan and Hughes, Dana and Lewis, Michael and Sycara, Katia and Scherer, Sebastian, "Hybrid Model for A Priori Performance Prediction of Multi-Job Type Swarm Search and Service Missions," 2019 19th International Conference on Advanced Robotics (ICAR), 2019.
- Decentralized Method for Sub-Swarm Deployment and Rejoining.By Chandarana, M., Luo, W., Lewis, M., Sycara, K. and Scherer, S.In Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018, pp. 1209–1214, 2019.
@inproceedings{chandarana2019decentralized, title = {Decentralized Method for Sub-Swarm Deployment and Rejoining}, author = {Chandarana, Meghan and Luo, Wenhao and Lewis, Michael and Sycara, Katia and Scherer, Sebastian}, year = {2019}, month = oct, booktitle = {Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018}, pages = {1209--1214}, doi = {10.1109/SMC.2018.00212}, isbn = {9781538666500}, url = {https://www.ri.cmu.edu/app/uploads/2018/08/SMC2018.pdf} }
As part of swarm search and service (SSS) missions, robots are tasked with servicing jobs as they are sensed. This requires small sub-swarm teams to leave the swarm for a specified amount of time to service the jobs. In doing so, fewer robots are required to change motion than if the whole swarm were diverted, thereby minimizing the job's overall effect on the swarm's main goal. We explore the problem of removing the required number of robots from the swarm while maintaining overall swarm connectivity. By preserving connectivity, robots are able to successfully rejoin the swarm upon completion of their assigned job. These robots are then made available for reallocation. We propose a decentralized, asynchronous method for breaking off sub-swarm groups and rejoining them with the main swarm using the swarm's communication graph topology. Both single and multiple job site cases are explored. The results are compared against a full swarm movement method. Simulation results show that the proposed method outperforms the full swarm method in the average number of messages sent per robot at each step, as well as in the distance traveled by the swarm.
Chandarana, Meghan and Luo, Wenhao and Lewis, Michael and Sycara, Katia and Scherer, Sebastian, "Decentralized Method for Sub-Swarm Deployment and Rejoining," Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018, 2019.
- High performance and safe flight of full-scale helicopters from takeoff to landing with an ensemble of planners.By Choudhury, S., Dugar, V., Maeta, S., MacAllister, B., Arora, S., Althoff, D. and Scherer, S.In Journal of Field Robotics, vol. 36, no. 8, pp. 1275–1332, Dec. 2019.
@article{choudhury2019high, title = {High performance and safe flight of full-scale helicopters from takeoff to landing with an ensemble of planners}, author = {Choudhury, Sanjiban and Dugar, Vishal and Maeta, Silvio and MacAllister, Brian and Arora, Sankalp and Althoff, Daniel and Scherer, Sebastian}, year = {2019}, month = dec, journal = {Journal of Field Robotics}, volume = {36}, number = {8}, pages = {1275--1332}, doi = {10.1002/rob.21906}, issn = {15564967}, url = {https://www.cs.cornell.edu/courses/cs6756/2022fa/assets/papers/aacus.pdf}, keywords = {aerial robotics,learning,planning}, video = {https://www.youtube.com/watch?v=i4rDt9Bwgps} }
Autonomous flight of full-size unmanned rotorcraft has the potential to enable many new applications. However, the dynamics of these aircraft, prevailing wind conditions, the need to operate over a variety of speeds, and stringent safety requirements make it difficult to generate safe plans for these systems. Prior work has shown results for only parts of the problem. Here we present the first comprehensive approach to planning safe trajectories for autonomous helicopters from takeoff to landing. Our approach is based on two key insights. First, we compose an approximate solution by cascading various modules that can efficiently solve different relaxations of the planning problem. Our framework invokes a long-term route optimizer, which feeds a receding-horizon planner, which in turn feeds a high-fidelity safety executive. Second, to deal with the diverse planning scenarios that may arise, we hedge our bets with an ensemble of planners. We use a data-driven approach that maps a planning context to a diverse list of planning algorithms that maximize the likelihood of success. Our approach was extensively evaluated in simulation and in real-world flight tests on three different helicopter systems for a duration of more than 109 autonomous hours and 590 pilot-in-the-loop hours. We provide an in-depth analysis and discuss the various tradeoffs of decoupling the problem, using approximations, and leveraging statistical techniques. We summarize the insights with the hope that they generalize to other platforms and applications.
Choudhury, Sanjiban and Dugar, Vishal and Maeta, Silvio and MacAllister, Brian and Arora, Sankalp and Althoff, Daniel and Scherer, Sebastian, "High performance and safe flight of full-scale helicopters from takeoff to landing with an ensemble of planners," Journal of Field Robotics, 2019.
- Automatic Real-time Anomaly Detection for Autonomous Aerial Vehicles.By Keipour, A., Mousaei, M. and Scherer, S.In 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, pp. 5679–5685, 2019.
@inproceedings{keipour2019automatic, title = {Automatic Real-time Anomaly Detection for Autonomous Aerial Vehicles}, author = {Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian}, year = {2019}, month = may, booktitle = {2019 International Conference on Robotics and Automation (ICRA)}, address = {Montreal, QC, Canada, Canada}, pages = {5679--5685}, doi = {10.1109/ICRA.2019.8794286}, isbn = {978-1-5386-6027-0}, url = {https://arxiv.org/pdf/1907.00511}, keywords = {Actuators,Aircraft,Atmospheric modeling,Computational modeling,Fault detection,Reliability,Safety,actuators,aerospace components,aerospace simulation,aircraft model,aircraft testing,anomaly detection method,autonomous aerial vehicles,autonomous aircraft,correlated input-output pairs,fault detection open dataset,fault detection research,fault diagnosis,fault tolerant control,fixed-wing flights,ground truth,least squares approximations,mid-flight actuator failures,mobile robots,recursive least squares method}, video = {https://www.youtube.com/watch?v=HCtGbnqjKj8} }
The recent increase in the use of aerial vehicles raises concerns about the safety and reliability of autonomous operations. There is a growing need for methods to monitor the status of these aircraft and report any faults and anomalies to the safety pilot or to the autopilot to deal with emergency situations. In this paper, we present a real-time approach using the Recursive Least Squares method to detect anomalies in the behavior of an aircraft. The method models the relationship between correlated input-output pairs online and uses the model to detect the anomalies. The result is an easy-to-deploy anomaly detection method that does not assume a specific aircraft model and can detect many types of faults and anomalies in a wide range of autonomous aircraft. Experiments show a precision of 88.23%, recall of 88.23%, and accuracy of 86.36% over 22 flight tests. A further contribution is a new open fault-detection dataset for autonomous aircraft, which contains complete data and ground truth for 22 fixed-wing flights with eight different types of mid-flight actuator failures, to help future fault detection research for aircraft.
Keipour, Azarakhsh and Mousaei, Mohammadreza and Scherer, Sebastian, "Automatic Real-time Anomaly Detection for Autonomous Aerial Vehicles," 2019 International Conference on Robotics and Automation (ICRA), 2019.
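At the heart of the approach above is an online Recursive Least Squares (RLS) model of correlated input-output pairs whose prediction residual flags anomalies. Below is a minimal Python sketch of that idea; the class name, feature choice, forgetting factor, and fixed residual threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class RLSAnomalyDetector:
    """Online recursive least squares over one input-output pair.

    Flags an anomaly when the one-step prediction error exceeds a
    threshold. Illustrative only; the paper's exact features and
    thresholding may differ.
    """

    def __init__(self, n_features, forgetting=0.98, threshold=3.0):
        self.w = np.zeros(n_features)          # model weights
        self.P = np.eye(n_features) * 1000.0   # inverse-covariance estimate
        self.lam = forgetting                  # forgetting factor
        self.threshold = threshold             # anomaly threshold on residual

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        residual = y - self.w @ x              # one-step prediction error
        # Standard RLS gain and covariance update with forgetting.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        self.w += k * residual
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return abs(residual) > self.threshold  # True -> possible anomaly

# Example: monitor roll rate (output) against aileron command (input).
det = RLSAnomalyDetector(n_features=2)
for cmd, rate in [(0.1, 0.09), (0.2, 0.21), (0.2, -0.5)]:
    print(det.update([cmd, 1.0], rate))
```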
- A Stereo Algorithm for Thin Obstacles and Reflective Objects.By Keller, J. and Scherer, S.In arXiv preprint arXiv:1910.04874, Oct. 2019.
@article{keller2019stereo, title = {A Stereo Algorithm for Thin Obstacles and Reflective Objects}, author = {Keller, John and Scherer, Sebastian}, year = {2019}, month = oct, journal = {arXiv preprint arXiv:1910.04874}, url = {https://arxiv.org/pdf/1910.04874}, annote = {6 pages, 5 figures}, archiveprefix = {arXiv}, arxivid = {1910.04874}, eprint = {1910.04874} }
Stereo cameras are a popular choice for obstacle avoidance in lightweight, low-cost outdoor robotics applications. However, they are unable to sense thin and reflective objects well. Currently, many algorithms are tuned to perform well on indoor scenes like the Middlebury dataset. When navigating outdoors, reflective objects, like windows and glass, and thin obstacles, like wires, are not well handled by most stereo disparity algorithms. Reflections, repeating patterns, and objects parallel to the cameras’ baseline cause mismatches between image pairs, which leads to bad disparity estimates. Thin obstacles are difficult for many sliding-window-based disparity methods to detect because they do not take up large portions of the pixels in the sliding window. We use a trinocular camera setup and a micropolarizer camera capable of detecting reflective objects to overcome these issues. We present a hierarchical disparity algorithm that reduces noise, separately identifies wires using semantic object triangulation in three images, and uses information about the polarization of light to estimate the disparity of reflective objects. We evaluate our approach on outdoor data that we collected. Our method contained an average of 9.27% bad pixels, compared to a typical stereo algorithm’s 18.4%, in scenes containing reflective objects. Our trinocular and semantic wire disparity methods detected 53% of wire pixels, whereas a typical two-camera stereo algorithm detected 5%.
Keller, John and Scherer, Sebastian, "A Stereo Algorithm for Thin Obstacles and Reflective Objects," arXiv preprint arXiv:1910.04874, 2019.
- Multi-view reconstruction of wires using a catenary model.By Madaan, R., Kaess, M. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, pp. 5657–5664, 2019.
@inproceedings{madaan2019multi, title = {Multi-view reconstruction of wires using a catenary model}, author = {Madaan, Ratnesh and Kaess, Michael and Scherer, Sebastian}, year = {2019}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, pages = {5657--5664}, doi = {10.1109/ICRA.2019.8793852}, isbn = {9781538660263}, issn = {10504729}, url = {https://www.cs.cmu.edu/~kaess/pub/Madaan19icra.pdf} }
Reliable detection and reconstruction of wires is one of the hardest problems in the UAV community, with wide-ranging impact in the industry in terms of wire avoidance capabilities and powerline corridor inspection. In this work, we introduce a real-time, model-based, multi-view algorithm to reconstruct wires from a set of images with known camera poses, while exploiting their natural shape: the catenary curve. Using a model-based approach helps us deal with partial wire detections in images, which may occur due to natural occlusion and false negatives. In addition, using a parsimonious model makes our algorithm efficient, as we only need to optimize over 5 model parameters, as opposed to hundreds of 3D points in bundle-adjustment approaches. Our algorithm obviates the need for pixel correspondences by computing the reprojection error via the distance transform of binarized wire segmentation images. Further, we make our algorithm robust to arbitrary initializations by introducing an on-demand, approximate extrapolation of the distance-transform-based objective. We demonstrate the effectiveness of our algorithm against false negatives and random initializations in simulation, and show qualitative results with real data collected from a small UAV.
Madaan, Ratnesh and Kaess, Michael and Scherer, Sebastian, "Multi-view reconstruction of wires using a catenary model," Proceedings - IEEE International Conference on Robotics and Automation, 2019.
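To make the correspondence-free objective concrete, here is a small Python sketch that samples a catenary confined to a known vertical plane and scores it against the distance transform of a binarized wire segmentation image, in the spirit of the paper's reprojection error. The 4-parameter layout shown here and the `project` callable are placeholders (the paper's full model has 5 parameters, including the placement of the wire's plane).

```python
import numpy as np

def catenary_points(a, x0, y0, span, n=50):
    """Sample points on a catenary y = y0 + a*(cosh((x - x0)/a) - 1),
    expressed in the wire's vertical plane."""
    x = np.linspace(x0 - span / 2.0, x0 + span / 2.0, n)
    y = y0 + a * (np.cosh((x - x0) / a) - 1.0)
    return np.stack([x, y], axis=-1)

def reprojection_cost(params, dist_transform, project):
    """Sum distance-transform values at the projected curve's pixels.

    dist_transform: distance transform of a binarized wire segmentation
    image; project: placeholder callable mapping in-plane curve points
    to (u, v) pixel coordinates via the wire plane pose and the camera.
    No pixel correspondences are needed, mirroring the paper's idea.
    """
    a, x0, y0, span = params
    uv = project(catenary_points(a, x0, y0, span)).astype(int)
    h, w = dist_transform.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[ok]
    return float(dist_transform[uv[:, 1], uv[:, 0]].sum())
```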
- State estimation for aerial vehicles using multi-sensor fusion,By Scherer, S., Yu, S. and Nuske, S.US Patent 10,295,365, May-2019
@patent{scherer2019state, title = {State estimation for aerial vehicles using multi-sensor fusion}, author = {Scherer, Sebastian and Yu, Song and Nuske, Stephen}, year = {2019}, month = may, url = {https://patents.google.com/patent/US10295365B2/en}, note = {US Patent 10,295,365} }
Scherer, Sebastian and Yu, Song and Nuske, Stephen, "State estimation for aerial vehicles using multi-sensor fusion," US Patent 10,295,365, 2019.
- Improved generalization of heading direction estimation for aerial filming using semi-supervised regression.By Wang, W., Ahuja, A., Zhang, Y., Bonatti, R. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, pp. 5901–5907, 2019.
@inproceedings{wang2019improved, title = {Improved generalization of heading direction estimation for aerial filming using semi-supervised regression}, author = {Wang, Wenshan and Ahuja, Aayush and Zhang, Yanfu and Bonatti, Rogerio and Scherer, Sebastian}, year = {2019}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, pages = {5901--5907}, doi = {10.1109/ICRA.2019.8793994}, isbn = {9781538660263}, issn = {10504729}, url = {https://arxiv.org/pdf/1903.11174} }
In autonomous aerial filming of a moving actor (e.g. a person or a vehicle), it is crucial to have a good heading direction estimate for the actor from the visual input. However, models obtained for similar tasks, such as pedestrian collision risk analysis and human-robot interaction, are very difficult to generalize to the aerial filming task because of the difference in data distributions. Towards improving generalization with less labeled data, this paper presents a semi-supervised algorithm for the heading direction estimation problem. We utilize temporal continuity as an unsupervised signal to regularize the model and achieve better generalization. This semi-supervised algorithm is applied in both the training and testing phases, which increases testing performance by a large margin. We show that by leveraging unlabeled sequences, the amount of labeled data required can be significantly reduced. We also discuss several important details on improving performance by balancing the labeled and unlabeled losses and combining them effectively. Experimental results show that our approach robustly outputs the heading direction for different types of actors. The aesthetic value of the video is also improved in the aerial filming task.
Wang, Wenshan and Ahuja, Aayush and Zhang, Yanfu and Bonatti, Rogerio and Scherer, Sebastian, "Improved generalization of heading direction estimation for aerial filming using semi-supervised regression," Proceedings - IEEE International Conference on Robotics and Automation, 2019.
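A minimal PyTorch-style sketch of the idea above: combine a supervised regression loss with a temporal-continuity regularizer on unlabeled sequences. The (cos, sin) heading encoding and the weight `w_unsup` are assumptions for illustration; the paper's exact formulation and balancing differ.

```python
import torch
import torch.nn.functional as F

def heading_loss(pred_labeled, target, pred_seq, w_unsup=0.1):
    """Supervised regression loss plus a temporal-continuity term.

    pred_labeled: (N, 2) predicted (cos, sin) headings on labeled frames.
    target:       (N, 2) ground-truth (cos, sin) encodings.
    pred_seq:     (T, 2) predictions on consecutive unlabeled frames.
    """
    sup = F.mse_loss(pred_labeled, target)
    # Unsupervised signal: an actor's heading changes smoothly over time,
    # so penalize large frame-to-frame jumps in the prediction.
    unsup = (pred_seq[1:] - pred_seq[:-1]).pow(2).sum(dim=1).mean()
    return sup + w_unsup * unsup
```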
- CubeSLAM: Monocular 3-D Object SLAM.By Yang, S. and Scherer, S.In IEEE Transactions on Robotics, vol. 35, no. 4, pp. 925–938, Jun. 2019.
@article{yang2019cubeslam, title = {{CubeSLAM}: {Monocular} 3-D Object {SLAM}}, author = {Yang, Shichao and Scherer, Sebastian}, year = {2019}, month = jun, journal = {IEEE Transactions on Robotics}, volume = {35}, number = {4}, pages = {925--938}, doi = {10.1109/TRO.2019.2909168}, issn = {19410468}, url = {https://arxiv.org/pdf/1806.00557}, annote = {IEEE Transactions on Robotics}, keywords = {Dynamic SLAM,SLAM,object detection,object simultaneous localization and mapping (SLAM)} }
In this paper, we present a method for single-image three-dimensional (3-D) cuboid object detection and multi-view object simultaneous localization and mapping in both static and dynamic environments, and demonstrate that the two parts can improve each other. First, for single-image object detection, we generate high-quality cuboid proposals from two-dimensional (2-D) bounding boxes and vanishing point sampling. The proposals are further scored and selected based on their alignment with image edges. Second, multi-view bundle adjustment with new object measurements is proposed to jointly optimize the poses of cameras, objects, and points. Objects can provide long-range geometric and scale constraints to improve camera pose estimation and reduce monocular drift. Instead of treating dynamic regions as outliers, we utilize object representations and motion model constraints to improve the camera pose estimation. The 3-D detection experiments on SUN RGBD and KITTI show better accuracy and robustness than existing approaches. On the public TUM and KITTI odometry datasets and our own collected datasets, our SLAM method achieves state-of-the-art monocular camera pose estimation and, at the same time, improves 3-D object detection accuracy.
Yang, Shichao and Scherer, Sebastian, "CubeSLAM: Monocular 3-D Object SLAM," IEEE Transactions on Robotics, 2019.
- Monocular object and plane SLAM in structured environments.By Yang, S. and Scherer, S.In IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3145–3152, 2019.
@article{yang2019monocular, title = {Monocular object and plane {SLAM} in structured environments}, author = {Yang, Shichao and Scherer, Sebastian}, year = {2019}, journal = {IEEE Robotics and Automation Letters}, volume = {4}, number = {4}, pages = {3145--3152}, doi = {10.1109/LRA.2019.2924848}, issn = {23773766}, url = {https://arxiv.org/pdf/1809.03415}, keywords = {Object and plane slam,Semantic scene understanding,Slam} }
In this letter, we present a monocular simultaneous localization and mapping (SLAM) algorithm using high-level object and plane landmarks. The built map is denser, more compact, and more semantically meaningful compared to feature-point-based SLAM. We first propose a high-order graphical model to jointly infer three-dimensional objects and layout planes from single images, considering occlusions and semantic constraints. The extracted objects and planes are further optimized with camera poses in a unified SLAM framework. Objects and planes can provide more semantic constraints, such as Manhattan plane and object supporting relationships, than points. Experiments on various public and collected datasets, including ICL-NUIM and TUM Mono, show that our algorithm can improve camera localization accuracy compared to state-of-the-art SLAM, especially when there is no loop closure, and also generate dense maps robustly in many structured environments.
Yang, Shichao and Scherer, Sebastian, "Monocular object and plane SLAM in structured environments," IEEE Robotics and Automation Letters, 2019.
- A Robust Laser-Inertial Odometry and Mapping Method for Large-Scale Highway Environments.By Zhao, S., Fang, Z., Li, H.L. and Scherer, S.In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1285–1292, 2019.
@inproceedings{zhao2019robust, title = {A Robust Laser-Inertial Odometry and Mapping Method for Large-Scale Highway Environments}, author = {Zhao, Shibo and Fang, Zheng and Li, HaoLai and Scherer, Sebastian}, year = {2019}, booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages = {1285--1292}, doi = {10.1109/IROS40897.2019.8967880}, url = {https://arxiv.org/pdf/2009.02622} }
Zhao, Shibo and Fang, Zheng and Li, HaoLai and Scherer, Sebastian, "A Robust Laser-Inertial Odometry and Mapping Method for Large-Scale Highway Environments," 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
- A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions.By Zhen, W., Hu, Y., Liu, J. and Scherer, S.In IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3585–3592, Oct. 2019.
@article{zhen2019joint, title = {A Joint Optimization Approach of {LiDAR-Camera} Fusion for Accurate Dense {3-D} Reconstructions}, author = {Zhen, Weikun and Hu, Yaoyu and Liu, Jingfeng and Scherer, Sebastian}, year = {2019}, month = oct, journal = {IEEE Robotics and Automation Letters}, volume = {4}, number = {4}, pages = {3585--3592}, doi = {10.1109/LRA.2019.2928261}, issn = {23773766}, url = {https://arxiv.org/pdf/1907.00930}, keywords = {Calibration and Identification,Mapping,Sensor Fusion} }
Fusing data from LiDAR and camera is conceptually attractive because of their complementary properties. For instance, camera images are of higher resolution and have colors, while LiDAR data provide more accurate range measurements and have a wider field of view. However, the sensor fusion problem remains challenging since it is difficult to find reliable correlations between data of very different characteristics (geometry versus texture, sparse versus dense). This letter proposes an offline LiDAR-camera fusion method to build dense, accurate 3-D models. Specifically, our method jointly solves a bundle adjustment problem and a cloud registration problem to compute camera poses and the sensor extrinsic calibration. In experiments, we show that our method can achieve an average accuracy of 2.7 mm and a resolution of 70 points/cm² compared to ground truth data from a survey scanner. Furthermore, the extrinsic calibration result is discussed and shown to outperform the state-of-the-art method.
Zhen, Weikun and Hu, Yaoyu and Liu, Jingfeng and Scherer, Sebastian, "A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions," IEEE Robotics and Automation Letters, 2019.
- Estimating the localizability in tunnel-like environments using LiDAR and UWB.By Zhen, W. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, pp. 4903–4908, 2019.
@inproceedings{zhen2019estimating, title = {Estimating the localizability in tunnel-like environments using {LiDAR} and {UWB}}, author = {Zhen, Weikun and Scherer, Sebastian}, year = {2019}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, pages = {4903--4908}, doi = {10.1109/ICRA.2019.8794167}, isbn = {9781538660263}, issn = {10504729}, url = {https://www.ri.cmu.edu/app/uploads/2019/06/root.pdf} }
The application of robots to inspection tasks has been growing quickly thanks to advancements in autonomous navigation technology, especially robot localization techniques in GPS-denied environments. Although many methods have been proposed to localize a robot using onboard sensors such as cameras and LiDARs, achieving robust localization in geometrically degenerate environments, e.g. tunnels, remains a challenging problem. In this work, we focus on the robust localization problem in such situations. A novel degeneration characterization model is presented to estimate the localizability at a given location in the prior map, and the localizability of a LiDAR and an Ultra-Wideband (UWB) ranging radio is analyzed. Additionally, a probabilistic sensor fusion method is developed to combine IMU, LiDAR, and UWB measurements. Experimental results show that this method allows for robust localization inside a long straight tunnel.
Zhen, Weikun and Scherer, Sebastian, "Estimating the localizability in tunnel-like environments using LiDAR and UWB," Proceedings - IEEE International Conference on Robotics and Automation, 2019.
- Robust Localization of an Arbitrary Distribution of Radioactive Sources for Aerial Inspection.By Zhen, W., Shah, D. and Scherer, S.In 44th Annual Waste Management Conference (WM2018), 2019.
@inproceedings{zhen2019robust, title = {Robust Localization of an Arbitrary Distribution of Radioactive Sources for Aerial Inspection}, author = {Zhen, Weikun and Shah, Dhruv and Scherer, Sebastian}, year = {2019}, month = mar, booktitle = {44th Annual Waste Management Conference (WM2018)}, url = {https://arxiv.org/pdf/1710.01701} }
Radiation source detection has seen various applications in the past decade, ranging from the detection of dirty bombs in public places to scanning critical nuclear facilities for leakage or flaws, and the autonomous inspection of nuclear sites. Despite the success in detecting single point sources or a small number of spatially separated point sources, most of the existing algorithms fail to localize sources in complex scenarios with a large number of point sources or non-trivial distributions & bulk sources. Even in simpler environments, most existing algorithms are not scalable to larger regions and/or higher-dimensional spaces. For effective autonomous inspection, we not only need to estimate the positions of the sources, but also the number, distribution, and intensities of each of them. In this paper, we present a novel algorithm for the robust localization of an arbitrary distribution of radiation sources using multi-layer sequential Monte Carlo methods coupled with suitable clustering algorithms. We achieve near-perfect accuracy, in terms of F1-scores (> 0.95), while allowing the algorithm to scale, both to large regions in space and to higher-dimensional spaces (5 tested).
Zhen, Weikun and Shah, Dhruv and Scherer, Sebastian, "Robust Localization of an Arbitrary Distribution of Radioactive Sources for Aerial Inspection," 44th Annual Waste Management Conference (WM2018), 2019.
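To illustrate the sequential Monte Carlo machinery underlying the paper above, here is a single-layer Python sketch for one point source, weighting particles by a Poisson count likelihood with inverse-square falloff. The region bounds, intensity range, and sensor model are assumptions; the paper stacks multiple SMC layers and clusters the particles to recover arbitrary source distributions.

```python
import numpy as np

def smc_source_localization(measurements, n_particles=5000, region=100.0):
    """One sequential Monte Carlo layer for a single point source.

    measurements: list of ((x, y), counts) detector readings. Each
    particle is a candidate (x, y, intensity); weights come from a
    Poisson likelihood with inverse-square intensity falloff.
    """
    rng = np.random.default_rng(0)
    parts = rng.uniform([0.0, 0.0, 1.0], [region, region, 500.0],
                        (n_particles, 3))
    logw = np.zeros(n_particles)
    for (mx, my), counts in measurements:
        d2 = (parts[:, 0] - mx) ** 2 + (parts[:, 1] - my) ** 2 + 1.0
        lam = parts[:, 2] / d2                      # expected counts
        logw += counts * np.log(lam) - lam          # Poisson log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return parts[rng.choice(n_particles, n_particles, p=w)]  # resample
```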
2018
- Hindsight is Only 50/50: Unsuitability of MDP based Approximate POMDP Solvers for Multi-resolution Information Gathering.By Arora, S., Choudhury, S. and Scherer, S.In arXiv preprint arXiv:1804.02573, Apr. 2018.
@article{arora2018hindsight, title = {Hindsight is Only 50/50: {Unsuitability} of {MDP} based Approximate {POMDP} Solvers for Multi-resolution Information Gathering}, author = {Arora, Sankalp and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2018}, month = apr, journal = {arXiv preprint arXiv:1804.02573}, url = {https://arxiv.org/pdf/1804.02573}, annote = {6 pages, 1 figure} }
Partially Observable Markov Decision Processes (POMDPs) offer an elegant framework for modeling sequential decision making in uncertain environments. Solving POMDPs online is an active area of research, and given the size of real-world problems, approximate solvers are used. Recently, a few approaches have been suggested for solving POMDPs by using MDP solvers in conjunction with imitation learning. MDP-based POMDP solvers work well for some cases, while catastrophically failing for others. The main failure point of such solvers is the lack of motivation for MDP solvers to gain information, since under their assumption the environment is either already known as much as it can be, or the uncertainty will disappear after the next step. However, for POMDP problems, gaining information can lead to efficient solutions. In this paper we derive a set of conditions under which MDP-based POMDP solvers are provably sub-optimal. We then use the well-known tiger problem to demonstrate such sub-optimality. We show that multi-resolution, budgeted information gathering cannot be addressed using MDP-based POMDP solvers. The contributions of this paper help identify the properties of a POMDP problem for which the use of MDP-based POMDP solvers is inappropriate, enabling better design choices.
Arora, Sankalp and Choudhury, Sanjiban and Scherer, Sebastian, "Hindsight is Only 50/50: Unsuitability of MDP based Approximate POMDP Solvers for Multi-resolution Information Gathering," arXiv preprint arXiv:1804.02573, 2018.
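To see the mechanism concretely, the sketch below scores actions for the classic tiger problem the QMDP way: value-iterate the underlying fully observed MDP, then mix state-action values by the belief. The observation model never enters the computation, so no action is ever credited for reducing uncertainty, which is the missing motivation to gather information that the paper formalizes. Rewards follow the standard tiger formulation; the reset-to-uniform dynamics after opening a door are a modeling assumption of this sketch.

```python
import numpy as np

# Tiger POMDP: states {tiger-left, tiger-right};
# actions {listen, open-left, open-right}.
R_listen = np.array([-1.0, -1.0])
R_open = np.array([[-100.0, 10.0],    # open-left: bad if tiger is left
                   [10.0, -100.0]])   # open-right
gamma = 0.95

# Value iteration on the underlying fully observed MDP.
V = np.zeros(2)
for _ in range(1000):
    q_listen = R_listen + gamma * V        # listening keeps the state
    q_open = R_open + gamma * V.mean()     # opening resets uniformly
    V = np.maximum(q_listen, q_open.max(axis=0))

def qmdp_scores(belief):
    """Q(b, a) = sum_s b(s) * Q_MDP(s, a): the observation model is
    absent, so information gain carries no value of its own."""
    q = np.vstack([R_listen + gamma * V, R_open + gamma * V.mean()])
    return q @ belief

print(qmdp_scores(np.array([0.5, 0.5])))  # scores for listen/open-l/open-r
```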
- On-board, computerized landing zone evaluation system for aircraft,By Chamberlain, L., Cover, H., Grocholsky, B., Hamner, B., Scherer, S. and Singh, S.US Patent 15,152,944, May-2018
@patent{chamberlain2018board, title = {On-board, computerized landing zone evaluation system for aircraft}, author = {Chamberlain, Lyle and Cover, Hugh and Grocholsky, Ben and Hamner, Bradley and Scherer, Sebastian and Singh, Sanjiv}, year = {2018}, month = may, url = {https://patents.google.com/patent/US10029804B1}, note = {US Patent 15,152,944} }
Chamberlain, Lyle and Cover, Hugh and Grocholsky, Ben and Hamner, Bradley and Scherer, Sebastian and Singh, Sanjiv, "On-board, computerized landing zone evaluation system for aircraft," US Patent 15,152,944, 2018.
- Swarm size planning tool for multi-job type missions.By Chandarana, M., Lewis, M., Allen, B.D., Sycara, K. and Scherer, S.In 2018 Aviation Technology, Integration, and Operations Conference, 2018.
@inproceedings{chandarana2018swarm, title = {Swarm size planning tool for multi-job type missions}, author = {Chandarana, Meghan and Lewis, Michael and Allen, Bonnie Danette and Sycara, Katia and Scherer, Sebastian}, year = {2018}, month = sep, booktitle = {2018 Aviation Technology, Integration, and Operations Conference}, doi = {10.2514/6.2018-3846}, isbn = {9781624105562}, url = {https://www.ri.cmu.edu/app/uploads/2018/09/Aviation2018.pdf} }
As part of swarm search and service (SSS) missions, swarms are tasked with searching an area while simultaneously servicing jobs as they are encountered. Jobs must be immediately serviced and can be one of multiple types. Each type requires that vehicle(s) break off from the swarm and travel to the job site for a specified amount of time. The number of vehicles needed and the service time for each job type are known. Once a job has been successfully serviced, vehicles return to the swarm and are available for reallocation. When planning SSS missions, human operators are tasked with determining the required number of vehicles needed to handle the expected job demand. The complex relationship between job type parameters makes this choice challenging. This work presents a prediction model used to estimate the swarm size necessary to achieve a given performance. User studies were conducted to determine the usefulness and ease of use of such a prediction model as an aid during mission planning. Results show that using the planning tool leads to 7x less missed area and a 50% cost reduction.
Chandarana, Meghan and Lewis, Michael and Allen, Bonnie Danette and Sycara, Katia and Scherer, Sebastian, "Swarm size planning tool for multi-job type missions," 2018 Aviation Technology, Integration, and Operations Conference, 2018.
- Determining Effective Swarm Sizes for Multi-Job Type Missions.By Chandarana, M., Lewis, M., Sycara, K. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, pp. 4848–4853, 2018.
@inproceedings{chandarana2018determining, title = {Determining Effective Swarm Sizes for Multi-Job Type Missions}, author = {Chandarana, Meghan and Lewis, Michael and Sycara, Katia and Scherer, Sebastian}, year = {2018}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, pages = {4848--4853}, doi = {10.1109/IROS.2018.8593919}, isbn = {9781538680940}, issn = {21530866}, url = {http://d-scholarship.pitt.edu/36815/1/IROS2018.pdf} }
Swarm search and service (SSS) missions require large swarms to simultaneously search an area while servicing jobs as they are encountered. Jobs must be immediately serviced and can be one of several different job types, each requiring a different service time and number of vehicles to complete its service successfully. After jobs are serviced, vehicles are returned to the swarm and become available for reallocation. As part of SSS mission planning, human operators must determine the number of vehicles needed to achieve this balance. The complexities associated with balancing vehicle allocation between multiple as-yet-unknown tasks and returning vehicles make this extremely difficult for humans. Previous work assumes that all system jobs are known ahead of time or that vehicles move independently of each other in a multi-agent framework. We present a dynamic vehicle routing (DVR) framework whose policies optimally allocate vehicles as jobs arrive. By incorporating time constraints into the DVR framework, an M/M/k/k queuing model can be used to evaluate overall steady-state system performance for a given swarm size. Using these estimates, operators can rapidly compare system performance across different configurations, leading to more effective choices of swarm size. A sensitivity analysis is performed and its results are compared with the model, illustrating the appropriateness of our method to problems of plausible scale and complexity.
Chandarana, Meghan and Lewis, Michael and Sycara, Katia and Scherer, Sebastian, "Determining Effective Swarm Sizes for Multi-Job Type Missions," IEEE International Conference on Intelligent Robots and Systems, 2018.
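The steady-state evaluation above rests on the M/M/k/k queue, whose blocking probability is given by the classical Erlang-B formula. A compact Python sketch follows; collapsing the multi-job-type setting to a single offered load is a simplification for illustration, not the paper's full model.

```python
def erlang_b(k, offered_load):
    """Blocking probability of an M/M/k/k queue (Erlang-B formula).

    offered_load = arrival_rate / service_rate (in Erlangs). In the
    SSS setting, a "blocked" job is one that arrives when all vehicles
    that could service it are already deployed.
    """
    b = 1.0
    for i in range(1, k + 1):
        # Numerically stable recurrence: B(i) = A*B(i-1) / (i + A*B(i-1)).
        b = offered_load * b / (i + offered_load * b)
    return b

# Example: jobs arrive at 2/min, take 3 min each, and need 1 vehicle.
for k in (4, 8, 12):
    print(k, round(erlang_b(k, offered_load=2 * 3), 3))
```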
- Data-driven planning via imitation learning.By Choudhury, S., Bhardwaj, M., Arora, S., Kapoor, A., Ranade, G., Scherer, S. and Dey, D.In International Journal of Robotics Research, vol. 37, no. 13-14, pp. 1632–1672, Dec. 2018.
@article{choudhury2018data, title = {Data-driven planning via imitation learning}, author = {Choudhury, Sanjiban and Bhardwaj, Mohak and Arora, Sankalp and Kapoor, Ashish and Ranade, Gireeja and Scherer, Sebastian and Dey, Debadeepta}, year = {2018}, month = dec, journal = {International Journal of Robotics Research}, volume = {37}, number = {13-14}, pages = {1632--1672}, doi = {10.1177/0278364918781001}, issn = {17413176}, url = {https://arxiv.org/pdf/1711.06391}, keywords = {,Imitation learning,POMDPs,QMDPs,heuristic search,sequential decision making} }
Robot planning is the process of selecting a sequence of actions that optimize for a task-specific objective. For instance, the objective for a navigation task would be to find collision-free paths, whereas the objective for an exploration task would be to map unknown areas. The optimal solutions to such tasks are heavily influenced by the implicit structure in the environment, i.e. the configuration of objects in the world. State-of-the-art planning approaches, however, do not exploit this structure, thereby expending valuable effort searching the action space instead of focusing on potentially good actions. In this paper, we address the problem of enabling planners to adapt their search strategies by inferring such good actions in an efficient manner using only the information uncovered by the search up until that time. We formulate this as a problem of sequential decision making under uncertainty where, at a given iteration, a planning policy must map the state of the search to a planning action. Unfortunately, the training process for such partial-information-based policies is slow to converge and susceptible to poor local minima. Our key insight is that if we could fully observe the underlying world map, we would easily be able to disambiguate between good and bad actions. We hence present a novel data-driven imitation learning framework to efficiently train planning policies by imitating a clairvoyant oracle: an oracle that at train time has full knowledge about the world map and can compute optimal decisions. We leverage the fact that for planning problems, such oracles can be efficiently computed and derive performance guarantees for the learnt policy. We examine two important domains that rely on partial-information-based policies: informative path planning and search-based motion planning. We validate the approach on a spectrum of environments for both problem domains, including experiments on a real UAV, and show that the learnt policy consistently outperforms state-of-the-art algorithms. Our framework is able to train policies that achieve up to 39% more reward than state-of-the-art information-gathering heuristics and a 70x speedup compared with A* on search-based planning problems. Our approach paves the way forward for applying data-driven techniques to other such problem domains under the umbrella of robot planning.
Choudhury, Sanjiban and Bhardwaj, Mohak and Arora, Sankalp and Kapoor, Ashish and Ranade, Gireeja and Scherer, Sebastian and Dey, Debadeepta, "Data-driven planning via imitation learning," International Journal of Robotics Research, 2018.
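A schematic of the clairvoyant-oracle imitation loop described above, in the style of DAgger. All four callables (`policy`, `oracle`, `sample_world`, `rollout`) are placeholders to be supplied by the user, and the mixing schedule is an assumption; the paper additionally derives performance guarantees and problem-specific oracles that this sketch does not reproduce.

```python
def train_by_imitating_oracle(policy, oracle, sample_world, rollout,
                              n_iters=50, mix_decay=0.9):
    """DAgger-style training against a clairvoyant oracle (schematic).

    oracle(world, state) returns a good action because it sees the full
    world map at train time; policy only ever sees the partial search
    state that it will have at test time.
    """
    dataset = []
    beta = 1.0                         # probability of executing the oracle
    for _ in range(n_iters):
        world = sample_world()         # full map, available at train time only
        for state in rollout(policy, oracle, world, beta):
            # Label every visited search state with the oracle's action.
            dataset.append((state, oracle(world, state)))
        policy.fit(dataset)            # supervised learning on aggregated data
        beta *= mix_decay              # gradually hand control to the learner
    return policy
```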
- Bayesian active edge evaluation on expensive graphs.By Choudhury, S., Srinivasa, S. and Scherer, S.In IJCAI International Joint Conference on Artificial Intelligence, pp. 4890–4897, Nov. 2018.
@article{choudhury2018bayesian, title = {Bayesian active edge evaluation on expensive graphs}, author = {Choudhury, Sanjiban and Srinivasa, Siddhartha and Scherer, Sebastian}, year = {2018}, month = nov, journal = {IJCAI International Joint Conference on Artificial Intelligence}, pages = {4890--4897}, doi = {10.24963/ijcai.2018/679}, isbn = {9780999241127}, issn = {10450823}, url = {https://arxiv.org/pdf/1711.07329} }
We consider the problem of real-time motion planning that requires evaluating a minimal number of edges on a graph to quickly discover collision-free paths. Evaluating edges is expensive, both for robots with complex geometries like robot arms, and for robots sensing the world online like UAVs. Until now, this challenge has been addressed via laziness, i.e. deferring edge evaluation until absolutely necessary, with the hope that edges turn out to be valid. However, not all edges are alike in value: some have a lot of potentially good paths flowing through them, and some others encode the likelihood of neighbouring edges being valid. This leads to our key insight: instead of passive laziness, we can actively choose edges that reduce the uncertainty about the validity of paths. We show that this is equivalent to the Bayesian active learning paradigm of decision region determination (DRD). However, the DRD problem is not only combinatorially hard, but also requires explicit enumeration of all possible worlds. We propose a novel framework that combines two DRD algorithms, DIRECT and BISECT, to overcome both issues. We show that our approach outperforms several state-of-the-art algorithms on a spectrum of planning problems for mobile robots, manipulators, and autonomous helicopters.
Choudhury, Sanjiban and Srinivasa, Siddhartha and Scherer, Sebastian, "Bayesian active edge evaluation on expensive graphs," IJCAI International Joint Conference on Artificial Intelligence, 2018.
- Joint Point Cloud and Image Based Localization for Efficient Inspection in Mixed Reality.By Das, M.P., Dong, Z. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, pp. 6357–6363, Nov. 2018.
@article{das2018joint, title = {Joint Point Cloud and Image Based Localization for Efficient Inspection in Mixed Reality}, author = {Das, Manash Pratim and Dong, Zhen and Scherer, Sebastian}, year = {2018}, month = nov, journal = {IEEE International Conference on Intelligent Robots and Systems}, pages = {6357--6363}, doi = {10.1109/IROS.2018.8594318}, isbn = {9781538680940}, issn = {21530866}, url = {https://arxiv.org/pdf/1811.02563} }
This paper introduces a method of structure inspection using mixed-reality headsets to reduce the human effort in reporting accurate inspection information, such as fault locations in 3D coordinates. Prior to every inspection, the headset needs to be localized. While external pose estimation and fiducial-marker-based localization would require setup, maintenance, and manual calibration, marker-free self-localization can be achieved using the onboard depth sensor and camera. However, due to the limited depth sensor range of portable mixed-reality headsets like the Microsoft HoloLens, localization based on simple point cloud registration (sPCR) would require extensive mapping of the environment. Localization based on camera images would likewise face issues such as stereo ambiguities and hence depend on viewpoint. We thus introduce a novel approach to Joint Point Cloud and Image-based Localization (JPIL) for mixed-reality headsets that uses visual cues and headset orientation to register small, partially overlapped point clouds and save significant manual labor and time in environment mapping. Our empirical results, compared to sPCR, show an average 10-fold reduction in required overlap surface area that could potentially save on average 20 minutes per inspection. JPIL is not only restricted to inspection tasks but can also be essential in enabling intuitive human-robot interaction for spatial mapping and scene understanding in conjunction with other agents, like the autonomous robotic systems that are increasingly being deployed in outdoor environments for applications like structural inspection.
Das, Manash Pratim and Dong, Zhen and Scherer, Sebastian, "Joint Point Cloud and Image Based Localization for Efficient Inspection in Mixed Reality," IEEE International Conference on Intelligent Robots and Systems, 2018.
- An efficient global energy optimization approach for robust 3D plane segmentation of point clouds.By Dong, Z., Yang, B., Hu, P. and Scherer, S.In ISPRS Journal of Photogrammetry and Remote Sensing, vol. 137, pp. 112–133, Mar. 2018.
@article{dong2018efficient, title = {An efficient global energy optimization approach for robust {3D} plane segmentation of point clouds}, author = {Dong, Zhen and Yang, Bisheng and Hu, Pingbo and Scherer, Sebastian}, year = {2018}, month = mar, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, volume = {137}, pages = {112--133}, doi = {10.1016/j.isprsjprs.2018.01.013}, issn = {09242716}, url = {https://www.ri.cmu.edu/app/uploads/2018/01/1-s2.0-S0924271618300133-main.pdf}, keywords = {Energy optimization,Guided sampling,Hybrid region growing,Multiscale supervoxel,Plane segmentation,Simulated annealing} }
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization, because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtains good performance on both high-quality TLS point clouds (the SEMANTIC3D.NET dataset) and low-quality RGB-D point clouds (the S3DIS dataset), with six metrics of (94.2%, 95.1%, 2.9%, 3.8%, 93.6%, 94.1%) and (90.4%, 91.4%, 8.2%, 7.6%, 90.8%, 91.7%), respectively.
Dong, Zhen and Yang, Bisheng and Hu, Pingbo and Scherer, Sebastian, "An efficient global energy optimization approach for robust 3D plane segmentation of point clouds," ISPRS Journal of Photogrammetry and Remote Sensing, 2018.
- Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor.By Dong, Z., Yang, B., Liang, F., Huang, R. and Scherer, S.In ISPRS Journal of Photogrammetry and Remote Sensing, vol. 144, pp. 61–79, Oct. 2018.
@article{dong2018hierarchical, title = {Hierarchical registration of unordered {TLS} point clouds based on binary shape context descriptor}, author = {Dong, Zhen and Yang, Bisheng and Liang, Fuxun and Huang, Ronggang and Scherer, Sebastian}, year = {2018}, month = oct, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, volume = {144}, pages = {61--79}, doi = {10.1016/j.isprsjprs.2018.06.018}, issn = {09242716}, url = {https://www.ri.cmu.edu/app/uploads/2018/01/1-s2.0-S0924271618301813-main.pdf}, keywords = {Binary shape context,Hierarchical registration,Multiple overlaps,Point cloud registration,Point cloud similarity,Vector of locally aggregated descriptors} }
Automatic registration of unordered point clouds collected by a terrestrial laser scanner (TLS) is the prerequisite for many applications including 3D model reconstruction, cultural heritage management, forest structure assessment, landslide monitoring, and solar energy analysis. However, most existing point cloud registration methods still suffer from limitations. On one hand, most are considerably time-consuming and of high computational complexity due to the exhaustive pairwise search for recovering the underlying overlaps, which makes them infeasible for the registration of large-scale point clouds. On the other hand, most of them only leverage pairwise overlaps and rarely use the overlaps between multiple point clouds, resulting in difficulty dealing with point clouds with limited overlaps. To overcome these limitations, this paper presents a Hierarchical Merging based Multiview Registration (HMMR) algorithm to align unordered point clouds from various scenes. First, multi-level descriptors are calculated (a local descriptor, the Binary Shape Context (BSC), and a global descriptor, the Vector of Locally Aggregated Descriptors (VLAD)). Second, the point cloud overlapping (adjacency) graph is efficiently constructed by leveraging the similarity between the corresponding VLAD vectors. Finally, the proposed method hierarchically registers multiple point clouds by iteratively selecting the optimal point clouds to register, performing BSC-descriptor-based pairwise registration, and updating the point cloud group overlapping (adjacency) graph, until all the point clouds are aligned into a common coordinate reference. Comprehensive experiments demonstrate that the proposed algorithm obtains good performance in terms of successful registration rate, rotation error, translation error, and runtime, and outperforms the state-of-the-art approaches.
Dong, Zhen and Yang, Bisheng and Liang, Fuxun and Huang, Ronggang and Scherer, Sebastian, "Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor," ISPRS Journal of Photogrammetry and Remote Sensing, 2018.
- DROAN - Disparity-Space Representation for Obstacle Avoidance: Enabling Wire Mapping Avoidance.By Dubey, G., Madaan, R. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, pp. 6311–6318, 2018.
@inproceedings{dubey2018droan, title = {{DROAN - D}isparity-Space Representation for Obstacle Avoidance: Enabling Wire Mapping Avoidance}, author = {Dubey, Geetesh and Madaan, Ratnesh and Scherer, Sebastian}, year = {2018}, month = oct, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, pages = {6311--6318}, doi = {10.1109/IROS.2018.8593499}, isbn = {9781538680940}, issn = {21530866}, url = {https://www.ri.cmu.edu/app/uploads/2018/08/IROS18_1856_FI.pdf} }
Wire detection, depth estimation, and avoidance are among the hardest challenges on the way to the ubiquitous presence of robust autonomous aerial vehicles. We present an approach and a system that tackle these three challenges along with generic obstacle avoidance as well. First, we perform monocular wire detection using a convolutional neural network under the semantic segmentation paradigm, and obtain a confidence map of wire pixels. Along with this, we also use a binocular stereo pair to detect other generic obstacles. We represent wires and generic obstacles using a disparity-space representation and do a C-space expansion by using a non-linear sensor model we develop. Occupancy inference for collision checking is performed by maintaining a pose graph over multiple disparity images. For avoidance of wires and generic obstacles, we use a precomputed trajectory library, which is evaluated in an online fashion in accordance with a cost function over proximity to the goal. We follow this trajectory with a path tracking controller. Finally, we demonstrate the effectiveness of our proposed method in simulation for wire mapping, and on hardware by multiple runs for both wire and generic obstacle avoidance.
Dubey, Geetesh and Madaan, Ratnesh and Scherer, Sebastian, "DROAN - Disparity-Space Representation for Obstacle Avoidance: Enabling Wire Mapping Avoidance," IEEE International Conference on Intelligent Robots and Systems, 2018.
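The online evaluation of a precomputed trajectory library, as described above, reduces to scoring each candidate with a cost that trades goal proximity against collision risk inferred from the disparity-space map. A minimal Python sketch, with the weights and interfaces assumed for illustration rather than taken from the paper:

```python
import numpy as np

def pick_trajectory(library, collision_prob, goal, w_goal=1.0, w_risk=50.0):
    """Score a precomputed trajectory library online (illustrative).

    library: list of (N, 3) arrays of candidate trajectory waypoints.
    collision_prob: placeholder callable mapping a waypoint to a
    collision probability from the disparity-space occupancy inference.
    goal: (3,) array with the goal position.
    """
    best, best_cost = None, np.inf
    for traj in library:
        risk = sum(collision_prob(p) for p in traj)       # accumulated risk
        cost = w_goal * np.linalg.norm(traj[-1] - goal) + w_risk * risk
        if cost < best_cost:
            best, best_cost = traj, cost
    return best
```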
- Monocular and Stereo Cues for Landing Zone Evaluation for Micro UAVs.By Garg, R., Yang, S. and Scherer, S.In arXiv preprint arXiv:1812.03539, Dec. 2018.
@article{garg2018monocular, title = {Monocular and Stereo Cues for Landing Zone Evaluation for Micro {UAVs}}, author = {Garg, Rohit and Yang, Shichao and Scherer, Sebastian}, year = {2018}, month = dec, journal = {arXiv preprint arXiv:1812.03539}, url = {https://arxiv.org/pdf/1812.03539}, archiveprefix = {arXiv}, arxivid = {1812.03539}, eprint = {1812.03539} }
Autonomous and safe landing is important for unmanned aerial vehicles. We present a monocular and stereo image based method for fast and accurate landing zone evaluation for UAVs in various scenarios. Many existing methods rely on LiDAR or depth sensors to provide accurate and dense surface reconstruction. We utilize stereo images to evaluate the slope and monocular images to compute the homography error. By combining them, our approach works for both rigid and non-rigid dynamic surfaces. Experiments on many outdoor scenes, such as water, grass, and roofs, demonstrate the robustness and effectiveness of our approach.
Garg, Rohit and Yang, Shichao and Scherer, Sebastian, "Monocular and Stereo Cues for Landing Zone Evaluation for Micro UAVs," arXiv preprint arXiv:1812.03539, 2018.
- Open Problems in Robotic Anomaly Detection.By Gupta, R., Kurtz, Z.T., Scherer, S. and Smereka, J.M.In arXiv preprint arXiv:1809.03565, Sep. 2018.
@article{gupta2018open, title = {Open Problems in Robotic Anomaly Detection}, author = {Gupta, Ritwik and Kurtz, Zachary T. and Scherer, Sebastian and Smereka, Jonathon M.}, year = {2018}, month = sep, journal = {arXiv preprint arXiv:1809.03565}, url = {https://arxiv.org/pdf/1809.03565}, archiveprefix = {arXiv}, arxivid = {1809.03565}, eprint = {1809.03565} }
Failures in robotics can have disastrous consequences that worsen rapidly over time. Thus, the ability to rely on robotic systems depends on our ability to monitor them and intercede when necessary, manually or autonomously. Prior work in this area surveys intrusion detection and security challenges in robotics, but a discussion of the more general anomaly detection problems is lacking. As such, we provide a brief insight-focused discussion and frameworks of thought on some compelling open problems with anomaly detection in robotic systems. Namely, we discuss non-malicious faults, invalid data, intentional anomalous behavior, hierarchical anomaly detection, distribution of computation, and anomaly correction on the fly. We demonstrate the need for additional work in these areas by providing a case study that examines the limitations of implementing a basic anomaly detection (AD) system in the Robot Operating System (ROS) 2 middleware; if even supporting a basic system is a significant hurdle, the path to more complex and advanced AD systems is even more problematic. We discuss these ROS 2 platform limitations, to support solutions in robotic anomaly detection, and provide recommendations to address the issues discovered.
Gupta, Ritwik and Kurtz, Zachary T. and Scherer, Sebastian and Smereka, Jonathon M., "Open Problems in Robotic Anomaly Detection," arXiv preprint arXiv:1809.03565, 2018.
- Toward delay-tolerant multiple-unmanned aerial vehicle scheduling system using Multi-strategy Coevolution algorithm.By Khosiawan, Y., Scherer, S. and Nielsen, I.In Advances in Mechanical Engineering, vol. 10, no. 12, p. 168781401881523, Dec. 2018.
@article{khosiawan2018delay, title = {Toward delay-tolerant multiple-unmanned aerial vehicle scheduling system using Multi-strategy Coevolution algorithm}, author = {Khosiawan, Yohanes and Scherer, Sebastian and Nielsen, Izabela}, year = {2018}, month = dec, journal = {Advances in Mechanical Engineering}, volume = {10}, number = {12}, pages = {168781401881523}, doi = {10.1177/1687814018815235}, issn = {16878140}, url = {https://journals.sagepub.com/doi/pdf/10.1177/1687814018815235}, keywords = {Unmanned aerial vehicle,delay-tolerant,metaheuristic,optimization,scheduling} }
Autonomous bridge inspection operations using unmanned aerial vehicles take multiple task assignments and constraints into account. To efficiently execute the operations, a schedule is required. Generating a cost optimum schedule of multiple-unmanned aerial vehicle operations is known to be Non-deterministic Polynomial-time (NP)-hard. This study approaches such a problem with heuristic-based algorithms to get a high-quality feasible solution in a short computation time. A constructive heuristic called Retractable Chain Task Assignment algorithm is presented to build an evaluable schedule from a task sequence. The task sequence representation is used during the search to perform seamless operations. Retractable Chain Task Assignment algorithm calculates and incorporates slack time to the schedule according to the properties of the task. The slack time acts as a cushion which makes the schedule delay-tolerant. This algorithm is incorporated with a metaheuristic algorithm called Multi-strategy Coevolution to search the solution space. The proposed algorithm is verified through numerical simulations, which take inputs from real flight test data. The obtained solutions are evaluated based on the makespan, battery consumption, computation time, and the robustness level of the schedules. The performance of Multi-strategy Coevolution is compared to Differential Evolution, Particle Swarm Optimization, and Differential Evolution–Fused Particle Swarm Optimization. The simulation results show that Multi-strategy Coevolution gives better objective values than the other algorithms.
Khosiawan, Yohanes and Scherer, Sebastian and Nielsen, Izabela, "Toward delay-tolerant multiple-unmanned aerial vehicle scheduling system using Multi-strategy Coevolution algorithm," Advances in Mechanical Engineering, 2018.
- Season-Invariant Semantic Segmentation with a Deep Multimodal Network.By Kim, D.-K., Maturana, D., Uenoyama, M. and Scherer, S.In Field and Service Robotics, Springer, Cham, 2018, pp. 255–270
@incollection{kim2018season, title = {Season-Invariant Semantic Segmentation with a Deep Multimodal Network}, author = {Kim, Dong-Ki and Maturana, Daniel and Uenoyama, Masashi and Scherer, Sebastian}, year = {2018}, booktitle = {Field and Service Robotics}, publisher = {Springer, Cham}, pages = {255--270}, doi = {10.1007/978-3-319-67361-5_17}, url = {https://www.ri.cmu.edu/app/uploads/2017/11/Season-Invariant_Semantic_Segmentation_with_A_Deep_Multimodal_Network.pdf} }
Semantic scene understanding is a useful capability for autonomous vehicles operating off-road. While cameras are the most common sensor used for semantic classification, the performance of methods using camera imagery may suffer when there is significant variation between the training and testing sets caused by illumination, weather, and seasonal variations. On the other hand, 3D information from active sensors such as LiDAR is comparatively invariant to these factors, which motivates us to investigate whether it can be used to improve performance in this scenario. In this paper, we propose a novel multimodal Convolutional Neural Network (CNN) architecture consisting of two streams, 2D and 3D, which are fused by projecting 3D features to image space to achieve robust pixelwise semantic segmentation. We evaluate our proposed method on a novel off-road terrain classification benchmark, and show a 25% improvement in mean Intersection over Union (IoU) of navigation-related semantic classes, relative to an image-only baseline.
Kim, Dong-Ki and Maturana, Daniel and Uenoyama, Masashi and Scherer, Sebastian, "Season-Invariant Semantic Segmentation with a Deep Multimodal Network," Field and Service Robotics, 2018.
- Robust image-based crack detection in concrete structure using multi-scale enhancement and visual features.By Liu, X., Ai, Y. and Scherer, S.In Proceedings - International Conference on Image Processing, ICIP, pp. 2304–2308, 2018.
@inproceedings{liu2018robust, title = {Robust image-based crack detection in concrete structure using multi-scale enhancement and visual features}, author = {Liu, Xiangzeng and Ai, Yunfeng and Scherer, Sebastian}, year = {2018}, month = sep, booktitle = {Proceedings - International Conference on Image Processing, ICIP}, pages = {2304--2308}, doi = {10.1109/ICIP.2017.8296693}, isbn = {9781509021758}, issn = {15224880}, url = {https://www.ri.cmu.edu/app/uploads/2017/05/Robust-Image-based-Crack-Detection.pdf}, keywords = {Concrete structure,Crack detection,Guided filter,Image enhancement} }
Crack detection is an important technique to evaluate the safety and predict the life of a concrete asset. In order to improve the robustness of crack detection against complex backgrounds, a new crack detection framework based on multi-scale enhancement and visual features is developed. First, to deal with the effect of low contrast, a multi-scale enhancement method using a guided filter and gradient information is proposed. Then, an adaptive threshold algorithm is used to obtain the binary image. Finally, a combination of morphological processing and visual features is adopted to purify the cracks. Experimental results with different images of real concrete surfaces demonstrate the high robustness and validity of the developed technique, whose average TPR reaches 94.22%.
Liu, Xiangzeng and Ai, Yunfeng and Scherer, Sebastian, "Robust image-based crack detection in concrete structure using multi-scale enhancement and visual features," Proceedings - International Conference on Image Processing, ICIP, 2018.
- Real-Time Semantic Mapping for Autonomous Off-Road Navigation.By Maturana, D., Chou, P.-W., Uenoyama, M. and Scherer, S.In Field and Service Robotics, Springer, Cham, 2018, pp. 335–350
@incollection{maturana2018real, title = {Real-Time Semantic Mapping for Autonomous Off-Road Navigation}, author = {Maturana, Daniel and Chou, Po-Wei and Uenoyama, Masashi and Scherer, Sebastian}, year = {2018}, booktitle = {Field and Service Robotics}, publisher = {Springer, Cham}, pages = {335--350}, doi = {10.1007/978-3-319-67361-5_22}, url = {https://www.ri.cmu.edu/app/uploads/2017/11/semantic-mapping-offroad-nav-compressed.pdf} }
In this paper we describe a semantic mapping system for autonomous off-road driving with an All-Terrain Vehicle (ATV). The system’s goal is to provide a richer representation of the environment than a purely geometric map, allowing it to distinguish, e.g., tall grass from obstacles. The system builds a 2.5D grid map encoding both geometric (terrain height) and semantic information (navigation-relevant classes such as trail, grass, etc.). The geometric and semantic information are estimated online and in real time from LiDAR and image sensor data, respectively. Using this semantic map, motion planners can create semantically aware trajectories. To achieve robust and efficient semantic segmentation, we design a custom Convolutional Neural Network (CNN) and train it with a novel dataset of labelled off-road imagery built for this purpose. We evaluate our semantic segmentation offline, showing comparable performance to the state of the art with slightly lower latency. We also show closed-loop field results with an autonomous ATV driving over challenging off-road terrain by using the semantic map in conjunction with a simple path planner. Our models and labelled dataset will be publicly available at http://dimatura.net/offroad.
Maturana, Daniel and Chou, Po-Wei and Uenoyama, Masashi and Scherer, Sebastian, "Real-Time Semantic Mapping for Autonomous Off-Road Navigation," Field and Service Robotics, 2018.
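As a concrete illustration of the 2.5D representation described in the entry above, here is a toy Python sketch (ours, not the paper's code) of a grid that keeps a geometric layer (maximum terrain height) and a semantic layer (per-class evidence counts) updated from labelled points; the resolution, map size and class ids are invented for the example.

```python
# Toy 2.5D semantic grid: per cell, max height plus per-class observation
# counts. Map size, resolution and class set are assumptions for the demo.
import numpy as np

RES, SIZE, N_CLASSES = 0.25, 200, 4            # 50 m x 50 m map, 4 classes
height = np.full((SIZE, SIZE), -np.inf)        # geometric layer
counts = np.zeros((SIZE, SIZE, N_CLASSES), dtype=np.int32)  # semantic layer

def integrate(points_xyz: np.ndarray, labels: np.ndarray) -> None:
    """points_xyz: (n, 3) points in the map frame; labels: (n,) class ids."""
    ij = (points_xyz[:, :2] / RES).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < SIZE).all(axis=1)
    for (i, j), z, c in zip(ij[ok], points_xyz[ok, 2], labels[ok]):
        height[i, j] = max(height[i, j], z)    # keep max terrain height
        counts[i, j, c] += 1                   # accumulate class evidence

integrate(np.array([[10.0, 12.5, 0.3], [10.1, 12.5, 1.7]]), np.array([1, 3]))
semantic_map = counts.argmax(axis=-1)          # most-observed class per cell
```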
- Robust Localization and Localizability Prediction Using a Rotating Laser Scanner.By Scherer, S., Zhen, W. and Zeng, S.US Patent App. 15/717,578, Mar-2018
@patent{scherer2018robust, title = {Robust Localization and Localizability Prediction Using a Rotating Laser Scanner}, author = {Scherer, Sebastian and Zhen, Weikun and Zeng, Sam}, year = {2018}, month = mar, url = {https://patents.google.com/patent/US20180088234A1/en}, note = {US Patent App. 15/717,578} }
Scherer, Sebastian and Zhen, Weikun and Zeng, Sam, "Robust Localization and Localizability Prediction Using a Rotating Laser Scanner," US Patent App. 15/717,578, 2018.
- Path Planning for Unmanned Fixed-Wing Aircraft in Uncertain Wind Conditions Using Trochoids.By Schopferer, S., Lorenz, J.S., Keipour, A. and Scherer, S.In 2018 International Conference on Unmanned Aircraft Systems, ICUAS 2018, Dallas, TX, pp. 503–512, 2018.
@inproceedings{schopferer2018path, title = {Path Planning for Unmanned Fixed-Wing Aircraft in Uncertain Wind Conditions Using Trochoids}, author = {Schopferer, Simon and Lorenz, Julian Soren and Keipour, Azarakhsh and Scherer, Sebastian}, year = {2018}, month = jun, booktitle = {2018 International Conference on Unmanned Aircraft Systems, ICUAS 2018}, address = {Dallas, TX}, pages = {503--512}, doi = {10.1109/ICUAS.2018.8453391}, isbn = {9781538613535}, keywords = {Fixed-Wing UAV,Path Planning,Trochoids,Wind}, video = {https://www.youtube.com/watch?v=cltd0eY2dcM} }
On-board path planning is a key capability for safe autonomous unmanned flight. Recently, it has been shown that using trochoids for turn segments allows for run-time efficient path planning under consideration of the prevailing wind. However, with varying wind conditions and uncertainty in the estimation of wind speed and direction, paths optimized for a reference wind condition may become infeasible to track for the aircraft. In this work, we discuss how to calculate conservative turn rate limits and we present a novel approach to calculate safety distances along trochoidal turn segments in order to account for an unknown wind speed and airspeed component of bounded magnitude. This allows planning flight paths that are optimized for the currently expected wind condition but are still safe in case the aircraft experiences different wind conditions. We present results from simulation and flight tests which demonstrate the impact of uncertain and varying wind conditions on the tracking performance of paths with circular and trochoidal turn segments. The results show that trochoids can be used to reduce path tracking errors even if the prevailing wind changes significantly. Furthermore, the proposed method to calculate safety distances conservatively over-approximates path deviations in all considered cases. Thus, it can be used to plan safe paths in the presence of uncertain wind conditions without solely relying on conservative performance limits.
Schopferer, Simon and Lorenz, Julian Soren and Keipour, Azarakhsh and Scherer, Sebastian, "Path Planning for Unmanned Fixed-Wing Aircraft in Uncertain Wind Conditions Using Trochoids," 2018 International Conference on Unmanned Aircraft Systems, ICUAS 2018, 2018.
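For readers unfamiliar with trochoidal turn segments, the sketch below gives the closed-form ground track of a constant-airspeed, constant-turn-rate maneuver displaced by steady wind. This is the generic textbook construction the paper builds on, written with our own variable names; it is not the paper's planner or its safety-distance computation.

```python
# Ground track of a constant-rate turn in steady wind: a circle in the air
# frame plus linear wind drift, i.e. a trochoid. Assumes omega != 0.
import numpy as np

def trochoid(v_air, omega, wind, psi0=0.0, t_end=30.0, n=300):
    """v_air [m/s], turn rate omega [rad/s], steady wind (wx, wy) [m/s]."""
    t = np.linspace(0.0, t_end, n)
    wx, wy = wind
    psi = psi0 + omega * t                      # heading over time
    x = (v_air / omega) * (np.sin(psi) - np.sin(psi0)) + wx * t
    y = -(v_air / omega) * (np.cos(psi) - np.cos(psi0)) + wy * t
    return np.stack([x, y], axis=1)

# Example: a 25 m/s aircraft turning at 0.2 rad/s in a 10 m/s crosswind.
path = trochoid(v_air=25.0, omega=0.2, wind=(10.0, 0.0))
```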
- Addressing multiple time around (MTA) ambiguities, particularly for lidar systems, and particularly for autonomous aircraft.By Stambler, A., Chamberlain, L.J. and Scherer, S.US Patent 10,131,446, Nov-2018
@patent{stambler2018addressing, title = {Addressing multiple time around (MTA) ambiguities, particularly for lidar systems, and particularly for autonomous aircraft}, author = {Stambler, Adam and Chamberlain, Lyle J and Scherer, Sebastian}, year = {2018}, month = nov, url = {https://patents.google.com/patent/US10131446B1/en}, note = {US Patent 10,131,446} }
Stambler, Adam and Chamberlain, Lyle J and Scherer, Sebastian, "Addressing multiple time around (MTA) ambiguities, particularly for lidar systems, and particularly for autonomous aircraft," US Patent 10,131,446, 2018.
- Positioning error analysis of least squares method for wireless sensor networks.By Tian, X., Zhen, W., Scherer, S. and Lu, X.In 50th International Symposium on Robotics, ISR 2018, pp. 143–146, 2018.
@inproceedings{tian2018positioning, title = {Positioning error analysis of least squares method for wireless sensor networks}, author = {Tian, Xiangrui and Zhen, Weikun and Scherer, Sebastian and Lu, Xiong}, year = {2018}, month = jun, booktitle = {50th International Symposium on Robotics, ISR 2018}, publisher = {VDE}, pages = {143--146}, isbn = {9781510870314} }
Wireless sensor networks (WSNs) are widely used for indoor positioning and navigation of mobile robots. The least squares method (LSM) is the most common and simplest method for position calculation, and various optimization algorithms have been elaborately designed to reduce localization error. Unlike other localization papers, which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem and addressing questions such as where the localization error comes from and how it propagates. Based on the theory of variance and covariance, a novel simplified error propagation algorithm is proposed to analyse the localization error of the triangulation method. This algorithm explicitly shows the influence of ranging errors and network structure on positioning. Finally, a simulation test in Matlab is carried out to verify the validity of the proposed algorithm, and it is shown that the algorithm significantly simplifies the calculation of the positioning error. The work of this paper can be used for multi-sensor fusion where an accurate error model is required. In addition, error estimation is also useful for error control by optimizing the structure of the WSN.
Tian, Xiangrui and Zhen, Weikun and Scherer, Sebastian and Lu, Xiong, "Positioning error analysis of least squares method for wireless sensor networks," 50th International Symposium on Robotics, ISR 2018, 2018.
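The kind of first-order analysis described in the entry above can be sketched in a few lines: linearize the range equations around the estimate, solve by Gauss-Newton, and push the ranging variance through the Jacobian, so that the network geometry appears explicitly in (JᵀJ)⁻¹. This is a generic illustration under assumed anchor positions and noise, not the paper's algorithm.

```python
# Least-squares trilateration plus first-order covariance propagation:
# Cov(p) ~= sigma^2 (J^T J)^{-1}. Anchors and noise level are invented.
import numpy as np

def solve_and_covariance(anchors, ranges, sigma, p0, iters=10):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):                      # Gauss-Newton on the ranges
        diff = p - anchors                      # (n, 2)
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]                # Jacobian of ||p - a_i||
        p -= np.linalg.solve(J.T @ J, J.T @ (dist - ranges))
    cov = sigma**2 * np.linalg.inv(J.T @ J)     # geometry enters via J^T J
    return p, cov

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_p = np.array([3., 4.])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_p, axis=1) + rng.normal(0, 0.05, 4)
p, cov = solve_and_covariance(anchors, ranges, sigma=0.05, p0=[5., 5.])
```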
- Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories.By Zhang, Y., Wang, W., Bonatti, R., Maturana, D. and Scherer, S.In Conference on Robot Learning, 2018.
@inproceedings{zhang2018integrating, title = {Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories}, author = {Zhang, Yanfu and Wang, Wenshan and Bonatti, Rogerio and Maturana, Daniel and Scherer, Sebastian}, year = {2018}, month = oct, booktitle = {Conference on Robot Learning}, publisher = {Journal of Machine Learning Research}, url = {https://arxiv.org/pdf/1810.07225} }
Predicting the motion of a mobile agent from a third-person perspective is an important component for many robotics applications, such as autonomous navigation and tracking. With accurate motion prediction of other agents, robots can plan for more intelligent behaviors to achieve specified objectives, instead of acting in a purely reactive way. Previous work addresses motion prediction by either only filtering kinematics, or using hand-designed and learned representations of the environment. Instead of separating kinematic and environmental context, we propose a novel approach to integrate both into an inverse reinforcement learning (IRL) framework for trajectory prediction. Instead of exponentially increasing the state-space complexity with kinematics, we propose a two-stage neural network architecture that considers motion and environment together to recover the reward function. The first-stage network learns feature representations of the environment using low-level LiDAR statistics and the second-stage network combines those learned features with kinematics data. We collected over 30 km of off-road driving data and validated experimentally that our method can effectively extract useful environmental and kinematic features. We generate accurate predictions of the distribution of future trajectories of the vehicle, encoding complex behaviors such as multi-modal distributions at road intersections, and even show different predictions at the same intersection depending on the vehicle’s speed.
Zhang, Yanfu and Wang, Wenshan and Bonatti, Rogerio and Maturana, Daniel and Scherer, Sebastian, "Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories," Conference on Robot Learning, 2018.
- A Unified 3D Mapping Framework Using a 3D or 2D LiDAR.By Zhen, W. and Scherer, S.In International Symposium on Experimental Robotics, pp. 702–711, 2018.
@inproceedings{zhen2018unified, title = {A Unified {3D} Mapping Framework Using a {3D} or {2D} {LiDAR}}, author = {Zhen, Weikun and Scherer, Sebastian}, year = {2018}, month = nov, booktitle = {International Symposium on Experimental Robotics}, pages = {702--711}, doi = {10.1007/978-3-030-33950-0_60}, url = {https://arxiv.org/pdf/1810.12515} }
Simultaneous Localization and Mapping (SLAM) has been considered a solved problem thanks to the progress made in the past few years. However, the great majority of LiDAR-based SLAM algorithms are designed for a specific type of payload and therefore don’t generalize across different platforms. In practice, this drawback makes the development, deployment and maintenance of an algorithm difficult. Consequently, our work focuses on improving the compatibility across different sensing payloads. Specifically, we extend the Cartographer SLAM library to handle different types of LiDAR, including fixed or rotating, 2D or 3D LiDARs. By replacing the localization module of Cartographer and maintaining the sparse pose graph (SPG), the proposed framework can create high-quality 3D maps in real time on different sensing payloads. Additionally, it brings the benefit of simplicity, with only a few parameters needing to be adjusted for each sensor type.
Zhen, Weikun and Scherer, Sebastian, "A Unified 3D Mapping Framework Using a 3D or 2D LiDAR," International Symposium on Experimental Robotics, 2018.
- Achieving Robust Localization in Geometrically Degenerated Tunnels.By Zhen, W. and Scherer, S.In Workshop on Challenges and Opportunities for Resilient Collective Intelligence in Subterranean Environments, Pittsburgh, Pa, 2018.
@inproceedings{zhen2018achieving, title = {Achieving Robust Localization in Geometrically Degenerated Tunnels}, author = {Zhen, Weikun and Scherer, Sebastian}, year = {2018}, month = jun, booktitle = {Workshop on Challenges and Opportunities for Resilient Collective Intelligence in Subterranean Environments}, address = {Pittsburgh, Pa}, url = {http://rssworkshop18.autonomousaerialrobot.com/wp-content/uploads/2018/06/RSS18RCISE_paper_3.pdf} }
Although many methods have been proposed to localize a robot using onboard sensors in GPS-denied environments, achieving robust localization in geometrically degenerated tunnels remains a challenging problem in robot-based inspection tasks. In this work, we first present a novel model to analyze the localizability of the prior map at a given location. Then we propose the utilization of a single Ultra-Wideband (UWB) ranging device to compensate for the degeneration of LiDAR-based localization inside tunnels. A probabilistic sensor fusion method is developed and demonstrated to achieve real-time robust localization inside a geometrically degenerated tunnel.
Zhen, Weikun and Scherer, Sebastian, "Achieving Robust Localization in Geometrically Degenerated Tunnels," Workshop on Challenges and Opportunities for Resilient Collective Intelligence in Subterranean Environments, 2018.
- Visual Place Recognition in Long-term and Large-scale Environment based on CNN Feature.By Zhu, J., Ai, Y., Tian, B., Cao, D. and Scherer, S.In IEEE Intelligent Vehicles Symposium, Proceedings, pp. 1679–1685, 2018.
@inproceedings{zhu2018visual, title = {Visual Place Recognition in Long-term and Large-scale Environment based on {CNN} Feature}, author = {Zhu, Jianliang and Ai, Yunfeng and Tian, Bin and Cao, Dongpu and Scherer, Sebastian}, year = {2018}, month = oct, booktitle = {IEEE Intelligent Vehicles Symposium, Proceedings}, pages = {1679--1685}, doi = {10.1109/IVS.2018.8500686}, isbn = {9781538644522} }
With the universal application of cameras in intelligent vehicles, visual place recognition has become a major problem in intelligent vehicle localization. The traditional solution is to describe place images with hand-crafted features for matching, but such descriptions cope poorly with extreme appearance variability, especially seasonal change. In this paper, we propose a new method based on a convolutional neural network (CNN): images are passed through a pre-trained network model to obtain automatically learned image descriptors, which are optimized through pooling, fusion and binarization operations; the place recognition similarity is then computed as the Hamming distance over the place sequence. In the experiments, we compare our method with several state-of-the-art algorithms, FABMAP, ABLE-M and SeqSLAM, to illustrate its advantages. The experimental results show that our CNN-based method achieves better performance than the other methods on representative public datasets.
Zhu, Jianliang and Ai, Yunfeng and Tian, Bin and Cao, Dongpu and Scherer, Sebastian, "Visual Place Recognition in Long-term and Large-scale Environment based on CNN Feature," IEEE Intelligent Vehicles Symposium, Proceedings, 2018.
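A minimal sketch of the binarize-and-match side of the method above, with the CNN feature extractor stubbed out by random vectors; the code length, the 5-frame sequence length and the median-threshold binarization are our own assumptions for the demo.

```python
# Binary descriptors + sequence-level Hamming matching. The CNN features
# are faked with random vectors here; in the paper they come from a
# pre-trained network followed by pooling and fusion.
import numpy as np

def binarize(desc: np.ndarray) -> np.ndarray:
    # Threshold each dimension at its median -> compact binary code.
    return (desc > np.median(desc, axis=0)).astype(np.uint8)

def sequence_score(query_codes, db_codes) -> int:
    # Lower total Hamming distance over the sequence = better place match.
    return int(sum(np.count_nonzero(q != d)
                   for q, d in zip(query_codes, db_codes)))

rng = np.random.default_rng(0)
db = binarize(rng.normal(size=(100, 256)))         # 100 database descriptors
query = db[40:45] ^ (rng.random((5, 256)) < 0.05)  # noisy revisit of 40-44
best = min(range(len(db) - 5), key=lambda i: sequence_score(query, db[i:i+5]))
print(best)                                        # expected: 40
```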
2017
- Randomized algorithm for informative path planning with budget constraints.By Arora, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Singapore, Singapore, pp. 4997–5004, 2017.
@inproceedings{arora2017randomized, title = {Randomized algorithm for informative path planning with budget constraints}, author = {Arora, Sankalp and Scherer, Sebastian}, year = {2017}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Singapore, Singapore}, pages = {4997--5004}, doi = {10.1109/ICRA.2017.7989582}, isbn = {9781509046331}, issn = {10504729}, url = {https://www.ri.cmu.edu/app/uploads/2017/04/Arora-paper.pdf} }
Maximizing information gathered within a budget is a relevant problem for information gathering tasks for robots with cost or operating time constraints. This problem is also known as the informative path planning (IPP) problem or correlated orienteering. It can be formalized as that of finding budgeted routes in a graph such that the reward collected by the route is maximized, where the rewards at nodes can be dependent. Unfortunately, the problem is NP-hard, and state-of-the-art methods are too slow to even present an approximate solution online. Here we present the Randomized Anytime Orienteering (RAOr) algorithm, which provides near-optimal solutions while demonstrably converging to an efficient solution in runtimes that allow the solver to be run online. The key idea of our approach is to pose orienteering as a combination of a Constraint Satisfaction Problem and a Traveling Salesman Problem. This formulation allows us to restrict the search space to routes that incur minimum distance to visit a set of selected nodes, and to rapidly search this space using random sampling. The paper provides an analysis of asymptotic near-optimality and convergence rates for RAOr, and presents strategies to improve the anytime performance of the algorithm. Our experimental results suggest an improvement by an order of magnitude over state-of-the-art methods in relevant simulated and real-world scenarios.
Arora, Sankalp and Scherer, Sebastian, "Randomized algorithm for informative path planning with budget constraints," Proceedings - IEEE International Conference on Robotics and Automation, 2017.
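The restrict-and-sample structure described in the entry above can be caricatured in a few lines: repeatedly sample subsets of reward nodes, route each with a cheap tour heuristic standing in for a TSP solver, and keep the best feasible route under the budget. This toy loop is our illustration of the idea, not RAOr itself, and carries none of its guarantees.

```python
# Toy anytime orienteering: sample node subsets, route them greedily, keep
# the best route within budget. Node layout, rewards and budget are made up.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(30, 2))      # candidate sensing locations
reward = rng.uniform(0, 1, size=30)            # prior information per node
BUDGET = 250.0                                 # max route length [m]

def nn_tour(idx):
    """Greedy nearest-neighbour tour over selected nodes (TSP stand-in)."""
    left, order = list(idx[1:]), [idx[0]]
    while left:
        last = nodes[order[-1]]
        nxt = min(left, key=lambda k: np.linalg.norm(nodes[k] - last))
        left.remove(nxt)
        order.append(nxt)
    length = sum(np.linalg.norm(nodes[a] - nodes[b])
                 for a, b in zip(order, order[1:]))
    return order, length

best_route, best_reward = None, 0.0
for _ in range(2000):                          # anytime: keep sampling
    subset = rng.choice(30, size=rng.integers(2, 10), replace=False)
    order, length = nn_tour(list(subset))
    if length <= BUDGET and reward[subset].sum() > best_reward:
        best_route, best_reward = order, reward[subset].sum()
```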
- Learning Heuristic Search via Imitation.By Bhardwaj, M., Choudhury, S. and Scherer, S.In Proceedings of the 1st Annual Conference on Robot Learning, vol. 78, pp. 271–280, 2017.
@inproceedings{bhardwaj2017learning, title = {Learning Heuristic Search via Imitation}, author = {Bhardwaj, Mohak and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2017}, month = {13--15 Nov}, booktitle = {Proceedings of the 1st Annual Conference on Robot Learning}, publisher = {PMLR}, series = {Proceedings of Machine Learning Research}, volume = {78}, pages = {271--280}, url = {http://proceedings.mlr.press/v78/bhardwaj17a/bhardwaj17a.pdf}, editor = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken} }
Robotic motion planning problems are typically solved by constructing a search tree of valid maneuvers from a start to a goal configuration. Limited onboard computation and real-time planning constraints impose a limit on how large this search tree can grow. Heuristics play a crucial role in such situations by guiding the search towards potentially good directions and consequently minimizing search effort. Moreover, a heuristic must infer such directions in an efficient manner using only the information uncovered by the search up until that time. However, state-of-the-art methods do not address the problem of computing a heuristic that explicitly minimizes search effort. In this paper, we do so by training a heuristic policy that maps the partial information from the search to decide which node of the search tree to expand. Unfortunately, naively training such policies leads to slow convergence and poor local minima. We present SaIL, an efficient algorithm that trains heuristic policies by imitating clairvoyant oracles - oracles that have full information about the world and demonstrate decisions that minimize search effort. We leverage the fact that such oracles can be efficiently computed using dynamic programming and derive performance guarantees for the learnt heuristic. We validate the approach on a spectrum of environments which show that SaIL consistently outperforms state-of-the-art algorithms. Our approach paves the way forward for learning heuristics that exhibit an anytime nature - finding feasible solutions quickly and incrementally refining them over time. Open-source code and details can be found here: https://goo.gl/YXkQAC.
Bhardwaj, Mohak and Choudhury, Sanjiban and Scherer, Sebastian, "Learning Heuristic Search via Imitation," Proceedings of the 1st Annual Conference on Robot Learning, 2017.
- Near-optimal edge evaluation in explicit generalized binomial graphs.By Choudhury, S., Javdani, S., Srinivasa, S. and Scherer, S.In Advances in Neural Information Processing Systems, pp. 4632–4642, Jun. 2017.
@article{choudhury2017optimal, title = {Near-optimal edge evaluation in explicit generalized binomial graphs}, author = {Choudhury, Sanjiban and Javdani, Shervin and Srinivasa, Siddhartha and Scherer, Sebastian}, year = {2017}, month = jun, journal = {Advances in Neural Information Processing Systems}, pages = {4632--4642}, issn = {10495258}, url = {https://proceedings.neurips.cc/paper/2017/file/e139c454239bfde741e893edb46a06cc-Paper.pdf} }
Robotic motion-planning problems, such as a UAV flying fast in a partially-known environment or a robot arm moving around cluttered objects, require finding collision-free paths quickly. Typically, this is solved by constructing a graph, where vertices represent robot configurations and edges represent potentially valid movements of the robot between these configurations. The main computational bottlenecks are expensive edge evaluations to check for collisions. State-of-the-art planning methods do not reason about the optimal sequence of edges to evaluate in order to find a collision-free path quickly. In this paper, we do so by drawing a novel equivalence between motion planning and the Bayesian active learning paradigm of decision region determination (DRD). Unfortunately, a straightforward application of existing methods requires computation exponential in the number of edges in a graph. We present BISECT, an efficient and near-optimal algorithm to solve the DRD problem when edges are independent Bernoulli random variables. By leveraging this property, we are able to significantly reduce computational complexity from exponential to linear in the number of edges. We show that BISECT outperforms several state-of-the-art algorithms on a spectrum of planning problems for mobile robots, manipulators, and real flight data collected from a full-scale helicopter.
Choudhury, Sanjiban and Javdani, Shervin and Srinivasa, Siddhartha and Scherer, Sebastian, "Near-optimal edge evaluation in explicit generalized binomial graphs," Advances in Neural Information Processing Systems, 2017.
- Adaptive information gathering via imitation learning.By Choudhury, S., Kapoor, A., Ranade, G., Scherer, S. and Dey, D.In Robotics: Science and Systems, vol. 13, May 2017.
@article{choudhury2017adaptive, title = {Adaptive information gathering via imitation learning}, author = {Choudhury, Sanjiban and Kapoor, Ashish and Ranade, Gireeja and Scherer, Sebastian and Dey, Debadeepta}, year = {2017}, month = may, journal = {Robotics: Science and Systems}, volume = {13}, doi = {10.15607/rss.2017.xiii.041}, isbn = {9780992374730}, issn = {2330765X}, url = {https://arxiv.org/pdf/1705.07834} }
In the adaptive information gathering problem, a policy is required to select an informative sensing location using the history of measurements acquired thus far. While there is an extensive amount of prior work investigating effective practical approximations using variants of Shannon’s entropy, the efficacy of such policies heavily depends on the geometric distribution of objects in the world. On the other hand, the principled approach of employing online POMDP solvers is rendered impractical by the need to explicitly sample online from a posterior distribution of world maps. We present a novel data-driven imitation learning framework to efficiently train information gathering policies. The policy imitates a clairvoyant oracle - an oracle that at train time has full knowledge about the world map and can compute maximally informative sensing locations. We analyze the learnt policy by showing that offline imitation of a clairvoyant oracle is implicitly equivalent to online oracle execution in conjunction with posterior sampling. This observation allows us to obtain powerful near-optimality guarantees for information gathering problems possessing an adaptive sub-modularity property. As demonstrated on a spectrum of 2D and 3D exploration problems, the trained policies enjoy the best of both worlds - they adapt to different world map distributions while being computationally inexpensive to evaluate.
Choudhury, Sanjiban and Kapoor, Ashish and Ranade, Gireeja and Scherer, Sebastian and Dey, Debadeepta, "Adaptive information gathering via imitation learning," Robotics: Science and Systems, 2017.
- DROAN - Disparity-space representation for obstacle avoidance.By Dubey, G., Arora, S. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Vancouver, pp. 1324–1330, 2017.
@inproceedings{dubey2017droan, title = {{DROAN} - {Disparity-space} representation for obstacle avoidance}, author = {Dubey, Geetesh and Arora, Sankalp and Scherer, Sebastian}, year = {2017}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Vancouver}, pages = {1324--1330}, doi = {10.1109/IROS.2017.8202309}, isbn = {9781538626825}, issn = {21530866}, url = {https://www.ri.cmu.edu/app/uploads/2018/01/root.pdf} }
Agile MAVs are required to operate in cluttered, unstructured environments at high speeds and low altitudes for efficient data gathering. Given the payload constraints and long-range sensing requirements, cameras are the preferred sensing modality for MAVs. The computational burden of using cameras for obstacle sensing has forced state-of-the-art methods to construct world representations on a per-frame basis, leading to myopic decision making. In this paper we propose a long-range perception and planning approach using cameras. By utilizing FPGA hardware for disparity calculation and image space to represent obstacles, our approach and system design allow for the construction of a long-term world representation whilst accounting for highly non-linear noise models in real time. We demonstrate these obstacle avoidance capabilities on a quadrotor flying through dense foliage at speeds of up to 4 m/s for a total of 1.6 hours of autonomous flight. The presented approach enables high-speed navigation at low altitudes for MAVs for terrestrial scouting.
Dubey, Geetesh and Arora, Sankalp and Scherer, Sebastian, "DROAN - Disparity-space representation for obstacle avoidance," IEEE International Conference on Intelligent Robots and Systems, 2017.
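One concrete reason to represent obstacles in disparity space, sketched below under an assumed pinhole stereo model (focal length and baseline invented for the demo): a desired metric clearance maps to a simple per-pixel disparity update through z = f·B/d. This is our illustration of the general idea, not DROAN's implementation.

```python
# Obstacle inflation directly in disparity space via the stereo model
# z = f * B / d. Camera constants are assumptions for the example.
import numpy as np

F_PX, BASELINE = 400.0, 0.20          # focal length [px], baseline [m]

def inflate_disparity(d: np.ndarray, robot_radius: float) -> np.ndarray:
    """Per-pixel disparity after padding obstacles by robot_radius metres."""
    z = F_PX * BASELINE / np.maximum(d, 1e-3)    # depth of each pixel
    z_near = np.maximum(z - robot_radius, 0.1)   # pull surfaces closer
    return F_PX * BASELINE / z_near              # back to disparity

disp = np.full((480, 640), 8.0)                  # a flat wall at 10 m
padded = inflate_disparity(disp, robot_radius=0.5)  # wall now "at" 9.5 m
```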
- A KITE in the wind: Smooth trajectory optimization in a moving reference frame.By Dugar, V., Choudhury, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Singapore, Singapore, pp. 109–116, 2017.
@inproceedings{dugar2017kite, title = {A {KITE} in the wind: {Smooth} trajectory optimization in a moving reference frame}, author = {Dugar, Vishal and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2017}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Singapore, Singapore}, pages = {109--116}, doi = {10.1109/ICRA.2017.7989017}, isbn = {9781509046331}, issn = {10504729}, url = {https://www.ri.cmu.edu/app/uploads/2017/04/kite_final.pdf} }
A significant challenge for unmanned aerial vehicles capable of flying long distances is planning in a wind field. Although there has been a plethora of work on the individual topics of planning long routes, smooth trajectory optimization and planning in a wind field, it is difficult for these methods to scale to solve the combined problem. In this paper, we address the problem of planning long, dynamically feasible, time-optimal trajectories in the presence of wind (which creates a moving reference frame). We present an algorithm, kITE, that elegantly decouples the joint trajectory optimization problem into individual path optimization in a fixed ground frame and a velocity profile optimization in a moving reference frame. The key idea is to derive a decoupling framework that guarantees feasibility of the final fused trajectory. Our results show that kITE is able to produce high-quality solutions for planning with a helicopter flying at speeds of 50 m/s, handling winds up to 20 m/s and missions over 200 km. We validate our approach with real-world experiments on a full-scale helicopter with a pilot in the loop. Our approach paves the way forward for autonomous systems to exhibit pilot-like behavior when flying missions in winds aloft.
Dugar, Vishal and Choudhury, Sanjiban and Scherer, Sebastian, "A KITE in the wind: Smooth trajectory optimization in a moving reference frame," Proceedings - IEEE International Conference on Robotics and Automation, 2017.
- Smooth trajectory optimization in Wind: First results on a full-scale helicopter.By Dugar, V., Choudhury, S. and Scherer, S.In Annual Forum Proceedings - AHS International, Fort Worth, TX, pp. 2924–2932, 2017.
@inproceedings{dugar2017smooth, title = {Smooth trajectory optimization in Wind: {First} results on a full-scale helicopter}, author = {Dugar, Vishal and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2017}, month = may, booktitle = {Annual Forum Proceedings - AHS International}, address = {Fort Worth, TX}, pages = {2924--2932}, issn = {15522938}, url = {https://www.ri.cmu.edu/app/uploads/2017/04/ahs_final.pdf} }
A significant challenge for unmanned aerial vehicles is flying long distances in the presence of wind. The presence of wind, which acts like a forcing function on the system dynamics, significantly affects control authority and flight times. While there is a large body of work on the individual topics of planning long missions and path planning in wind fields, these methods do not scale to solve the combined problem under real-time constraints. In this paper, we address the problem of planning long, dynamically feasible, time-optimal trajectories in the presence of wind for a full-scale helicopter. We build on our existing algorithm, kITE, which accounts for wind in a principled and elegant way, and produces dynamically-feasible trajectories that are guaranteed to be safe in near real-time. It uses a novel framework to decouple path optimization in a fixed ground frame from velocity optimization in a moving air frame. We present extensive experimental evaluation of kITE on an autonomous helicopter platform (with a human safety pilot in the loop) with data from over 23 missions in winds up to 20 m/s and airspeeds up to 50 m/s. Our results not only show the efficacy of the algorithm and its implementation, but also provide insights into failure cases that we encountered. This paves the way forward for autonomous systems to exhibit pilot-like behavior when flying missions in winds aloft.
Dugar, Vishal and Choudhury, Sanjiban and Scherer, Sebastian, "Smooth trajectory optimization in Wind: First results on a full-scale helicopter," Annual Forum Proceedings - AHS International, 2017.
- Robust Autonomous Flight in Constrained and Visually Degraded Shipboard Environments.By Fang, Z., Yang, S., Jain, S., Dubey, G., Roth, S., Maeta, S., Nuske, S., Zhang, Y. and Scherer, S.In Journal of Field Robotics, vol. 34, no. 1, pp. 25–52, Jan. 2017.
@article{fang2017robust, title = {Robust Autonomous Flight in Constrained and Visually Degraded Shipboard Environments}, author = {Fang, Zheng and Yang, Shichao and Jain, Sezal and Dubey, Geetesh and Roth, Stephan and Maeta, Silvio and Nuske, Stephen and Zhang, Yu and Scherer, Sebastian}, year = {2017}, month = jan, journal = {Journal of Field Robotics}, volume = {34}, number = {1}, pages = {25--52}, doi = {10.1002/rob.21670}, issn = {15564967}, url = {https://www.researchgate.net/publication/308609518_Robust_Autonomous_Flight_in_Constrained_and_Visually_Degraded_Shipboard_Environments} }
This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) for inspection and damage assessment inside a constrained shipboard environment, which might be perilous or inaccessible for humans, especially in emergency scenarios. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways, and small objects protruding from the wall. This causes existing two-dimensional LIDAR, vision, or mechanical bumper-based autonomous navigation solutions to fail. To realize autonomous navigation in such challenging environments, we first propose a robust state estimation method that fuses estimates from a real-time odometry estimation algorithm and a particle filtering localization algorithm with other sensor information in a two-layer fusion framework. Then, an online motion-planning algorithm that combines trajectory optimization with a receding horizon control framework is proposed for fast obstacle avoidance. All the computations are done in real time on the onboard computer. We validate the system by running experiments under different environmental conditions in both laboratory and practical shipboard environments. The field experiment results of over 10 runs show that our vehicle can robustly navigate 20-m-long and only 1-m-wide corridors and go through a very narrow doorway (66-cm width, only 4-cm clearance on each side) autonomously even when it is completely dark or full of light smoke. These experiments show that despite the challenges associated with flying robustly in challenging shipboard environments, it is possible to use a MAV to autonomously fly into a confined shipboard environment to rapidly gather situational information to guide firefighting and rescue efforts.
Fang, Zheng and Yang, Shichao and Jain, Sezal and Dubey, Geetesh and Roth, Stephan and Maeta, Silvio and Nuske, Stephen and Zhang, Yu and Scherer, Sebastian, "Robust Autonomous Flight in Constrained and Visually Degraded Shipboard Environments," Journal of Field Robotics, 2017.
- Wire detection using synthetic data and dilated convolutional networks for unmanned aerial vehicles.By Madaan, R., Maturana, D. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Vancouver, pp. 3487–3494, 2017.
@inproceedings{madaan2017wire, title = {Wire detection using synthetic data and dilated convolutional networks for unmanned aerial vehicles}, author = {Madaan, Ratnesh and Maturana, Daniel and Scherer, Sebastian}, year = {2017}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Vancouver}, pages = {3487--3494}, doi = {10.1109/IROS.2017.8206190}, isbn = {9781538626825}, issn = {21530866}, url = {https://www.ri.cmu.edu/app/uploads/2017/08/root.pdf} }
Wire detection is a key capability for safe navigation of autonomous aerial vehicles and is a challenging problem, as wires are generally only a few pixels wide, can appear at any orientation and location, and are hard to distinguish from other similar-looking lines and edges. We leverage the recent advances in deep learning by treating wire detection as a semantic segmentation task, and investigate the effectiveness of convolutional neural networks for it. To find an optimal model in terms of detection accuracy and real-time performance on a portable GPU, we perform a grid search over a finite space of architectures. Further, to combat the unavailability of a large public dataset with annotations, we render synthetic wires using a ray-tracing engine and overlay them on 67K images from flight videos available on the internet. We use this synthetic dataset for pretraining our models before finetuning on real data, and show that synthetic data alone can already lead to qualitatively accurate detections. We also verify whether providing explicit information about local evidence of wires, in the form of edge and line detection results from a traditional computer vision method supplied as additional channels to the network input, makes the task easier. We evaluate our best models from the grid search on a publicly available dataset and show that they outperform previous work using traditional computer vision as well as various deep-net baselines (FCNs, SegNet and E-Net) on both standard edge detection metrics and inference speed. Our top models run at more than 3 Hz on the NVIDIA Jetson TX2 with an input resolution of 480×640, with an Average Precision score of 0.73 on our test split of the USF dataset.
Madaan, Ratnesh and Maturana, Daniel and Scherer, Sebastian, "Wire detection using synthetic data and dilated convolutional networks for unmanned aerial vehicles," IEEE International Conference on Intelligent Robots and Systems, 2017.
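The dilated-convolution idea this entry leans on is easy to demonstrate: stacking 3×3 convolutions with exponentially growing dilation widens the receptive field without downsampling, which suits thin, image-spanning structures like wires. The sketch below is a deliberately tiny PyTorch stand-in, not one of the paper's grid-searched architectures; channel widths and depth are our own.

```python
# Tiny fully-convolutional net with dilated 3x3 convolutions producing a
# per-pixel wire logit map. Channel widths and depth are illustrative only.
import torch
import torch.nn as nn

class TinyWireNet(nn.Module):
    def __init__(self, in_ch: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for dilation in (1, 2, 4, 8):          # exponentially grown context
            layers += [nn.Conv2d(ch, 32, 3, padding=dilation,
                                 dilation=dilation),
                       nn.ReLU(inplace=True)]
            ch = 32
        layers.append(nn.Conv2d(ch, 1, 1))     # 1x1 conv -> wire logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.net(x)                     # (B, 1, H, W), same resolution

logits = TinyWireNet()(torch.rand(1, 3, 480, 640))
```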
- Looking forward: A semantic mapping system for scouting with micro-aerial vehicles.By Maturana, D., Arora, S. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, pp. 6691–6698, 2017.
@inproceedings{maturana2017looking, title = {Looking forward: {A} semantic mapping system for scouting with micro-aerial vehicles}, author = {Maturana, Daniel and Arora, Sankalp and Scherer, Sebastian}, year = {2017}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, pages = {6691--6698}, doi = {10.1109/IROS.2017.8206585}, isbn = {9781538626825}, issn = {21530866}, url = {https://www.ri.cmu.edu/app/uploads/2017/11/Looking-Forward-A-Semantic-Mapping-System-for-Scouting-with-Micro-Aerial-Vehicles.pdf} }
The last decade has seen a massive growth in applications for Micro-Aerial Vehicles (MAVs), due in large part to their versatility for data gathering with cameras, LiDAR and various other sensors. Their ability to quickly go from assessing large spaces from a high vantage point to flying in close to capture high-resolution data makes them invaluable for applications where we are interested in a specific target with an a priori unknown location, e.g. survivors in disaster response scenarios, vehicles in surveillance, animals in wildlife monitoring, etc., a task we refer to as scouting. Our ultimate goal is to enable MAVs to perform autonomous scouting. In this paper, we describe a semantic mapping system designed to support this goal. The system maintains a 2.5D map describing its belief about the location of semantic classes of interest, using forward-looking cameras and state estimation. The map is continuously updated on the fly, using only onboard processing. The system couples a deep learning 2D semantic segmentation algorithm with a novel mapping method to project and aggregate the 2D semantic measurements into a global 2.5D grid map. We train and evaluate our segmentation method on a novel dataset of cars labelled in oblique aerial imagery. We also study the performance of the mapping system in isolation. Finally, we show the integrated system performing a fully autonomous car scouting mission in the field.
Maturana, Daniel and Arora, Sankalp and Scherer, Sebastian, "Looking forward: A semantic mapping system for scouting with micro-aerial vehicles," IEEE International Conference on Intelligent Robots and Systems, 2017.
- Improving Stochastic Policy Gradients in Continuous Control with Deep Reinforcement Learning using the Beta Distribution.By Chou, P.-W., Maturana, D. and Scherer, S.In Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 834–843, 2017.
@inproceedings{chou2017improving, title = {Improving Stochastic Policy Gradients in Continuous Control with Deep Reinforcement Learning using the Beta Distribution}, author = {Chou, Po-Wei and Maturana, Daniel and Scherer, Sebastian}, year = {2017}, month = {06--11 Aug}, booktitle = {Proceedings of the 34th International Conference on Machine Learning}, publisher = {PMLR}, series = {Proceedings of Machine Learning Research}, volume = {70}, pages = {834--843}, url = {http://proceedings.mlr.press/v70/chou17a/chou17a.pdf}, editor = {Precup, Doina and Teh, Yee Whye} }
Recently, reinforcement learning with deep neural networks has achieved great success in challenging continuous control problems such as 3D locomotion and robotic manipulation. However, in real-world control problems, the actions one can take are bounded by physical constraints, which introduces a bias when the standard Gaussian distribution is used as the stochastic policy. In this work, we propose to use the Beta distribution as an alternative and analyze the bias and variance of the policy gradients of both policies. We show that the Beta policy is bias-free and provides significantly faster convergence and higher scores over the Gaussian policy when both are used with trust region policy optimization (TRPO) and actor critic with experience replay (ACER), the state-of-the-art on- and off-policy stochastic methods respectively, on OpenAI Gym’s and MuJoCo’s continuous control environments.
Chou, Po-Wei and Maturana, Daniel and Scherer, Sebastian, "Improving Stochastic Policy Gradients in Continuous Control with Deep Reinforcement Learning using the Beta Distribution," Proceedings of the 34th International Conference on Machine Learning, 2017.
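The core trick of the entry above is compact enough to sketch: a policy head outputs the two shape parameters of a Beta distribution, whose support is inherently bounded on (0, 1), so a sample rescaled to the physical actuator limits never needs the clipping that biases a Gaussian policy. The network shapes and names below are our own; this is a generic illustration, not the paper's TRPO/ACER training setup.

```python
# Beta policy head for bounded continuous actions. softplus + 1 keeps
# alpha, beta > 1, giving a unimodal density on (0, 1).
import torch
import torch.nn as nn

class BetaPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, low: float, high: float):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.alpha_head = nn.Linear(64, act_dim)
        self.beta_head = nn.Linear(64, act_dim)
        self.low, self.high = low, high

    def forward(self, obs):
        h = self.body(obs)
        a = nn.functional.softplus(self.alpha_head(h)) + 1.0
        b = nn.functional.softplus(self.beta_head(h)) + 1.0
        dist = torch.distributions.Beta(a, b)
        x = dist.rsample()                       # in (0, 1), reparameterized
        action = self.low + (self.high - self.low) * x  # rescale, no clipping
        return action, dist.log_prob(x).sum(-1)

policy = BetaPolicy(obs_dim=8, act_dim=2, low=-2.0, high=2.0)
action, logp = policy(torch.rand(1, 8))
```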
- A multi-sensor fusion MAV state estimation from long-range stereo, IMU, GPS and barometric sensors.By Song, Y., Nuske, S. and Scherer, S.In Sensors, vol. 17, no. 1, p. 11, Dec. 2017.
@article{song2017multi, title = {A multi-sensor fusion {MAV} state estimation from long-range stereo, {IMU}, {GPS} and barometric sensors}, author = {Song, Yu and Nuske, Stephen and Scherer, Sebastian}, year = {2017}, month = dec, journal = {Sensors}, volume = {17}, number = {1}, pages = {11}, doi = {10.3390/s17010011}, issn = {14248220}, url = {https://www.mdpi.com/1424-8220/17/1/11}, keywords = {Absolute and relative state measurements,GPS-denied state estimation,Long-range stereo visual odometry,Multi-sensor fusion,Stochastic cloning EKF} }
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry method, proposed for high-altitude MAV odometry calculation, which uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
Song, Yu and Nuske, Stephen and Scherer, Sebastian, "A multi-sensor fusion MAV state estimation from long-range stereo, IMU, GPS and barometric sensors," Sensors, 2017.
- Semantic 3D occupancy mapping through efficient high order CRFs.By Yang, S., Huang, Y. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Vancouver, pp. 590–597, 2017.
@inproceedings{yang2017semantic, title = {Semantic {3D} occupancy mapping through efficient high order {CRFs}}, author = {Yang, Shichao and Huang, Yulan and Scherer, Sebastian}, year = {2017}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Vancouver}, pages = {590--597}, doi = {10.1109/IROS.2017.8202212}, isbn = {9781538626825}, issn = {21530866}, url = {https://arxiv.org/pdf/1707.07388} }
Semantic 3D mapping can be used for many applications such as robot navigation and virtual interaction. In recent years, there has been great progress in semantic segmentation and geometric 3D mapping. However, it is still challenging to combine these two tasks for accurate and large-scale semantic mapping from images. In this paper, we propose an incremental and (near) real-time semantic mapping system. A 3D scrolling occupancy grid map is built to represent the world; it is memory- and computation-efficient and remains bounded for large-scale environments. We utilize the CNN segmentation as a prior prediction and further optimize the 3D grid labels through a novel CRF model. Superpixels are utilized to enforce smoothness and form a robust P^N high-order potential. An efficient mean field inference is developed for the graph optimization. We evaluate our system on the KITTI dataset and improve the segmentation accuracy by 10% over existing systems.
Yang, Shichao and Huang, Yulan and Scherer, Sebastian, "Semantic 3D occupancy mapping through efficient high order CRFs," IEEE International Conference on Intelligent Robots and Systems, 2017.
- Obstacle Avoidance through Deep Networks based Intermediate Perception.By Yang, S., Konam, S., Ma, C., Rosenthal, S., Veloso, M. and Scherer, S.In arXiv preprint arXiv:1704.08759, Apr. 2017.
@article{yang2017obstacle, title = {Obstacle Avoidance through Deep Networks based Intermediate Perception}, author = {Yang, Shichao and Konam, Sandeep and Ma, Chen and Rosenthal, Stephanie and Veloso, Manuela and Scherer, Sebastian}, year = {2017}, month = apr, journal = {arXiv preprint arXiv:1704.08759}, url = {https://arxiv.org/pdf/1704.08759} }
Obstacle avoidance from monocular images is a challenging problem for robots. Though multi-view structure-from-motion can build 3D maps, it is not robust in textureless environments. Some learning-based methods exploit human demonstrations to predict a steering command directly from a single image. However, this approach is usually biased towards certain tasks or demonstration scenarios and is also biased by human understanding. In this paper, we propose a new method to predict a trajectory from images. We train our system on the more diverse NYUv2 dataset. The ground truth trajectory is computed automatically from the designed cost functions. The Convolutional Neural Network perception is divided into two stages: first, predicting the depth map and surface normals from RGB images, two important geometric properties related to 3D obstacle representation; second, predicting the trajectory from the depth and normals. Results show that our intermediate perception increases the accuracy by 20% over direct prediction. Our model generalizes well to other public indoor datasets and is also demonstrated for robot flights in simulation and experiments.
Yang, Shichao and Konam, Sandeep and Ma, Chen and Rosenthal, Stephanie and Veloso, Manuela and Scherer, Sebastian, "Obstacle Avoidance through Deep Networks based Intermediate Perception," arXiv preprint arXiv:1704.08759, 2017.
- Direct monocular odometry using points and lines.By Yang, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, pp. 3871–3877, Mar. 2017.
@article{yang2017direct, title = {Direct monocular odometry using points and lines}, author = {Yang, Shichao and Scherer, Sebastian}, year = {2017}, month = mar, journal = {Proceedings - IEEE International Conference on Robotics and Automation}, pages = {3871--3877}, doi = {10.1109/ICRA.2017.7989446}, isbn = {9781509046331}, issn = {10504729}, url = {https://arxiv.org/pdf/1703.06380}, annote = {ICRA 2017} }
Most visual odometry algorithms for a monocular camera focus on points, either by feature matching or by direct alignment of pixel intensity, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods. It works better in texture-less environments and is also more robust to lighting changes and fast motion by increasing the convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edge in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves better or comparable performance to state-of-the-art monocular odometry methods. In some challenging texture-less environments, our algorithm reduces the state estimation error by over 50%.
Yang, Shichao and Scherer, Sebastian, "Direct monocular odometry using points and lines," Proceedings - IEEE International Conference on Robotics and Automation, 2017.
- Robust localization and localizability estimation with a rotating laser scanner.By Zhen, W., Zeng, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Singapore, Singapore, pp. 6240–6245, 2017.
@inproceedings{zhen2017robust, title = {Robust localization and localizability estimation with a rotating laser scanner}, author = {Zhen, Weikun and Zeng, Sam and Scherer, Sebastian}, year = {2017}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Singapore, Singapore}, pages = {6240--6245}, doi = {10.1109/ICRA.2017.7989739}, isbn = {9781509046331}, issn = {10504729}, url = {https://www.ri.cmu.edu/app/uploads/2017/04/eskf.pdf} }
This paper presents a robust localization approach that fuses measurements from an inertial measurement unit (IMU) and a rotating laser scanner. An Error State Kalman Filter (ESKF) is used for sensor fusion and is combined with a Gaussian Particle Filter (GPF) for measurement updates. We experimentally demonstrate the robustness of this implementation in various challenging situations such as the kidnapped-robot scenario, reduced laser range, and varying environment scales and characteristics. Additionally, we propose a new method to evaluate the localizability of a given 3D map and show that the computed localizability can precisely predict localization errors, thus helping to find safe routes during flight.
Zhen, Weikun and Zeng, Sam and Scherer, Sebastian, "Robust localization and localizability estimation with a rotating laser scanner," Proceedings - IEEE International Conference on Robotics and Automation, 2017.
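The localizability idea mentioned above admits a very small sketch: scan-to-map registration constrains position along surface normals, so the eigenvalues of the summed normal outer products expose under-constrained directions (for example, along a featureless corridor). The metric below is a common formulation we use for illustration; the paper's exact model may differ.

```python
# Localizability from map surface normals: a (near-)zero eigenvalue of
# sum_i n_i n_i^T means translation along that axis is unobservable.
import numpy as np

def localizability(normals: np.ndarray) -> float:
    """normals: (n, 3) unit surface normals visible from a map location."""
    A = normals.T @ normals                     # sum of n_i n_i^T
    return float(np.linalg.eigvalsh(A).min())   # smallest constraint strength

# A corridor: walls constrain y, floor/ceiling constrain z, nothing fixes x.
corridor = np.array([[0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
assert localizability(corridor) < 1e-9          # degenerate along x
```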
2016
- Detecting cars in aerial photographs with a hierarchy of deconvolution nets.By Chakraborty, S., Maturana, D. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-16-60, 2016
@techreport{chakraborty2016detecting, title = {Detecting cars in aerial photographs with a hierarchy of deconvolution nets}, author = {Chakraborty, Satyaki and Maturana, Daniel and Scherer, Sebastian}, year = {2016}, address = {Pittsburgh, PA}, number = {CMU-RI-TR-16-60}, url = {https://www.ri.cmu.edu/pub_files/2016/11/cmu-ri-tr-Maturana.pdf}, institution = {Carnegie Mellon University}, keywords = {Deconvolution nets,Neural networks,Object detection} }
Detecting cars in large aerial photographs can be quite a challenging task, given that cars in such datasets are often barely visible to the naked human eye. Traditional object detection algorithms fail to perform well when it comes to detecting cars under such circumstances. One would rather use context, or exploit the spatial relationships between different entities in the scene, to narrow down the search space. We aim to do so by looking at different resolutions of the image to process context and focus on promising areas. This is done using a hierarchy of deconvolution networks, with each level of the hierarchy trying to predict a heatmap of a certain resolution. We show that our architecture is able to model context implicitly and use it for finer prediction and faster search.
Chakraborty, Satyaki and Maturana, Daniel and Scherer, Sebastian, "Detecting cars in aerial photographs with a hierarchy of deconvolution nets," Carnegie Mellon University Technical Report CMU-RI-TR-16-60, 2016.
- Regionally accelerated batch informed trees (RABIT∗): A framework to integrate local information into optimal path planning.By Choudhury, S., Gammell, J.D., Barfoot, T.D., Srinivasa, S.S.D. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Stockholm, Sweden, pp. 4207–4214, 2016.
@inproceedings{choudhury2016regionally, title = {Regionally accelerated batch informed trees {(RABIT∗)}: {A} framework to integrate local information into optimal path planning}, author = {Choudhury, Sanjiban and Gammell, Jonathan D. and Barfoot, Timothy D. and Srinivasa, Siddhartha S.D. and Scherer, Sebastian}, year = {2016}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Stockholm, Sweden}, pages = {4207--4214}, doi = {10.1109/ICRA.2016.7487615}, isbn = {9781467380263}, issn = {10504729}, url = {https://www.cse.lehigh.edu/~trink/Courses/RoboticsII/reading/choudhury_RABIT-star.pdf} }
Sampling-based optimal planners, such as RRT∗, almost-surely converge asymptotically to the optimal solution, but have provably slow convergence rates in high dimensions. This is because their commitment to finding the global optimum compels them to prioritize exploration of the entire problem domain even as its size grows exponentially. Optimization techniques, such as CHOMP, have fast convergence on these problems but only to local optima. This is because they are exploitative, prioritizing the immediate improvement of a path even though this may not find the global optimum of nonconvex cost functions. RABIT∗ combines these complementary strengths by integrating local, optimization-based information into a global sampling-based search.
Choudhury, Sanjiban and Gammell, Jonathan D. and Barfoot, Timothy D. and Srinivasa, Siddhartha S.D. and Scherer, Sebastian, "Regionally accelerated batch informed trees (RABIT∗): A framework to integrate local information into optimal path planning," Proceedings - IEEE International Conference on Robotics and Automation, 2016.
- Constrained CHOMP using Dual Projected Newton Method.By Choudhury, S. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-16-17, Apr-2016
@techreport{choudhury2016constrained, title = {Constrained {CHOMP} using Dual Projected {Newton} Method}, author = {Choudhury, Sanjiban and Scherer, Sebastian}, year = {2016}, month = apr, address = {Pittsburgh, PA}, number = {CMU-RI-TR-16-17}, url = {https://www.ri.cmu.edu/pub_files/2016/5/main-choudhury.pdf}, abstract = {CHOMP is a popular trajectory optimization algorithm that uses covariant gradient techniques to produce high quality solutions. In its original formulation, it solves an unconstrained sequentially quadratic problem with extensions for handling equality constraints. In this paper we present an approach to solve the sequentially quadratic problem with linear inequality constraints. We present a dual projected Newton method to efficiently solve this problem. The proposed method alternates between primal and dual updates, thus leading to faster convergence than solving a constrained quadratic program at each iteration.}, institution = {Carnegie Mellon University} }
Choudhury, Sanjiban and Scherer, Sebastian, "Constrained CHOMP using Dual Projected Newton Method," Carnegie Mellon University Technical Report CMU-RI-TR-16-17, 2016.
- Modeling and Control of Coaxial UAV with Swashplate Controlled Lower Propeller.By Lee, R., Sreenath, K. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-16-33, Jun-2016
@techreport{lee2016modeling, title = {Modeling and Control of Coaxial {UAV} with Swashplate Controlled Lower Propeller}, author = {Lee, Richard and Sreenath, Koushil and Scherer, Sebastian}, year = {2016}, month = jun, address = {Pittsburgh, PA}, number = {CMU-RI-TR-16-33}, url = {http://www.andrew.cmu.edu/user/rl1/lee-coax-technical-report.pdf}, institution = {Carnegie Mellon University} }
There is a growing interest in the design and control of coaxial vehicles for the purposes of autonomous flight. These vehicles utilize two contra-rotating propellers for generating thrust and swashplates for generating pitch and roll. In this report, we present a novel coaxial design in which both upper and lower rotors are contained within a ducted fan, the speeds of both rotors are independently controlled, and the lower rotor’s cyclic pitch is controlled through a swashplate. Based on this design, a simple dynamic model was developed with unique force and moment generation equations. Given this model, we are able to map desired force and moment values to the control inputs capable of producing them. Afterwards, position and attitude control were implemented over this nonlinear dynamic model in simulation, such that the vehicle was able to recover from poor initial conditions and follow desired trajectories. As demonstrated by the examples presented in this report, position control results in simulations with low maximum percent overshoot and reasonable settling times. These results are promising for the implementation of position and attitude control on our physical system.
Lee, Richard and Sreenath, Koushil and Scherer, Sebastian, "Modeling and Control of Coaxial UAV with Swashplate Controlled Lower Propeller," Carnegie Mellon University Technical Report CMU-RI-TR-16-33, 2016.
- Kinodynamic Motion Planning on Vector Fields using RRT*.By Pereira, G.A.S., Choudhury, S. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-16-17, 2016
@techreport{pereira2016kinodynamic, title = {Kinodynamic Motion Planning on Vector Fields using {RRT*}}, author = {Pereira, Guilherme A S and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2016}, address = {Pittsburgh, PA}, number = {CMU-RI-TR-16-17}, url = {https://www.ri.cmu.edu/pub_files/2016/7/kinodynamic-motion-planning.pdf}, institution = {Carnegie Mellon University}, keywords = {motion planning,navigation,optimal planners,rrt,vector fields} }
This report presents a methodology to integrate vector field based motion planning techniques with optimal, differentially constrained trajectory planners. The main motivation for this integration is the solution of robot motion planning problems that are easily and intuitively solved using vector fields, but are very difficult to even pose as an optimal motion planning problem, mainly due to the lack of a clear cost function. Examples of such problems include those where a goal configuration is not defined, such as circulation of curves, loitering, road following, etc. While several vector field methodologies have been proposed to solve these tasks, they do not explicitly consider the robot’s differential constraints and are susceptible to failures in the presence of previously unmodeled obstacles. To combine the good characteristics of each approach, our methodology uses a vector field as a high-level specification of a task and an optimal motion planner (in our case RRT*) as a local planner that generates trajectories that follow the vector field, but also considers the kinematics and the dynamics of the robot, as well as the new obstacles encountered by the robot in its workspace.
Pereira, Guilherme A S and Choudhury, Sanjiban and Scherer, Sebastian, "Kinodynamic Motion Planning on Vector Fields using RRT*," Technical Report CMU-RI-TR-16-17, Carnegie Mellon University, 2016.
- Nonholonomic Motion Planning in Partially Unknown Environments Using Vector Fields and Optimal Planners.By Pereira, G.A.S., Choudhury, S. and Scherer, S.In Congresso Brasileiro de Automatica (CBA), Vitoria, Brazil, 2016.
@inproceedings{pereira2016nonholonomic, title = {Nonholonomic Motion Planning in Partially Unknown Environments Using Vector Fields and Optimal Planners}, author = {Pereira, Guilherme A S and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2016}, month = oct, booktitle = {Congresso Brasileiro de Automatica (CBA)}, address = {Vitoria, Brazil}, url = {https://pdfs.semanticscholar.org/28ac/1260c50c9812711914a58ce687e00803cd13.pdf}, keywords = {motion planning,optimal planning,robotics,rrt,vector fields} }
This paper presents a methodology to integrate vector field-based robot motion planning techniques with optimal trajectory planners. The main motivation for this integration is the solution of planning problems that are intuitively solved using vector fields, but are very difficult even to pose as an optimal motion planning problem, mainly due to the lack of a clear cost function. Among such problems are the ones where a goal configuration is not defined, such as circulation of curves and road following. While several vector field based methodologies were proposed to solve these tasks, they do not explicitly consider the robot’s differential constraints and are susceptible to failures in the presence of previously unmodeled obstacles. Our methodology uses a vector field as a high-level specification of a task and an optimal motion planner (in our case RRT*) as a local planner that generates trajectories that follow the vector field and also consider the kinematic and dynamic constraints of the robot, as well as the new obstacles encountered in the workspace. To illustrate the approach, we show simulations with a Dubins-like vehicle moving in partially unknown planar environments. Keywords: Robotics, motion planning, vector fields, optimal planning, RRT*.
Pereira, Guilherme A S and Choudhury, Sanjiban and Scherer, Sebastian, "Nonholonomic Motion Planning in Partially Unknown Environments Using Vector Fields and Optimal Planners," Congresso Brasileiro de Automatica (CBA), 2016.
- A framework for optimal repairing of vector field-based motion plans.By Pereira, G.A.S., Choudhury, S. and Scherer, S.In 2016 International Conference on Unmanned Aircraft Systems, ICUAS 2016, Washington, D.C., pp. 261–266, 2016.
@inproceedings{pereira2016framework, title = {A framework for optimal repairing of vector field-based motion plans}, author = {Pereira, Guilherme A.S. and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2016}, booktitle = {2016 International Conference on Unmanned Aircraft Systems, ICUAS 2016}, address = {Washington, D.C.}, pages = {261--266}, doi = {10.1109/ICUAS.2016.7502525}, isbn = {9781467393331}, url = {https://www.ri.cmu.edu/pub_files/2016/6/0021.pdf} }
This paper presents a framework that integrates vector field based motion planning techniques with an optimal path planner. The main motivation for this integration is the solution of UAVs’ motion planning problems that are easily and intuitively solved using vector fields, but are very difficult even to pose as optimal motion planning problems, mainly due to the lack of clear cost functions. Examples of such problems include the ones where a goal configuration is not defined, such as circulation of curves, loitering and road following. While several vector field methodologies were proposed to solve these tasks, they are susceptible to failures in the presence of previously unmodeled obstacles, including no-fly zones specified during the flight. Our framework uses a vector field as a high level specification of a task and an optimal motion planner (in our case RRT*) as a local, on-line planner that generates paths that follow the vector field, but also consider the new obstacles encountered by the vehicle during the flight. A series of simulations illustrate and validate the proposed methodology. One of these simulations considers a rotorcraft UAV equipped with a spinning laser patrolling an urban area in the presence of unmodeled obstacles and no-fly zones.
Pereira, Guilherme A.S. and Choudhury, Sanjiban and Scherer, Sebastian, "A framework for optimal repairing of vector field-based motion plans," 2016 International Conference on Unmanned Aircraft Systems, ICUAS 2016, 2016.
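The three Pereira entries above share one core mechanism: the vector field supplies the task specification, and RRT* supplies feasibility and obstacle avoidance, with field-following folded into the planner's cost. The sketch below is an illustrative reconstruction, not the papers' code: `field` is a made-up loiter field and the penalty weight `k` is an arbitrary choice.

```python
import numpy as np

# Hypothetical vector field encoding a task with no goal state: circulate a
# radius-5 curve around the origin (loitering), as in the papers' examples.
def field(p):
    x, y = p
    r = np.hypot(x, y) + 1e-9
    to_circle = (5.0 - r) * np.array([x, y]) / r   # attraction toward the circle
    tangent = np.array([-y, x]) / r                # counter-clockwise circulation
    v = tangent + 0.5 * to_circle
    return v / (np.linalg.norm(v) + 1e-9)

# Edge cost for the RRT* extend/rewire steps: path length, inflated when the
# edge direction disagrees with the field, so minimum-cost paths follow it.
def edge_cost(p, q, k=2.0):
    d = q - p
    length = float(np.linalg.norm(d))
    if length < 1e-9:
        return 0.0
    misalignment = 1.0 - float(np.dot(d / length, field(p)))  # in [0, 2]
    return length * (1.0 + k * misalignment)
```

With this cost plugged into a standard RRT* implementation, the tree still respects the robot's constraints and newly sensed obstacles, while its shortest paths trace the field.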
- List prediction applied to motion planning.By Tallavajhula, A., Choudhury, S., Scherer, S. and Kelly, A.In Proceedings - IEEE International Conference on Robotics and Automation, Stockholm, Sweden, pp. 213–220, 2016.
@inproceedings{tallavajhula2016list, title = {List prediction applied to motion planning}, author = {Tallavajhula, Abhijeet and Choudhury, Sanjiban and Scherer, Sebastian and Kelly, Alonzo}, year = {2016}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Stockholm, Sweden}, pages = {213--220}, doi = {10.1109/ICRA.2016.7487136}, isbn = {9781467380263}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2016/5/main1.pdf} }
There is growing interest in applying machine learning to motion planning. Potential applications are predicting an initial seed for trajectory optimization, predicting an effective heuristic for search based planning, and even predicting a planning algorithm for adaptive motion planning systems. In these situations, providing only a single prediction is unsatisfactory. It leads to many scenarios where the prediction suffers a high loss. In this paper, we advocate list prediction. Each predictor in a list focuses on different regions in the space of environments. This overcomes the shortcoming of a single predictor, and improves overall performance. A framework for list prediction, ConseqOpt, already exists. Our contribution is an extensive domain-specific treatment. We provide a rigorous and clear exposition of the procedure for training a list of predictors. We provide experimental results on a spectrum of motion planning applications. Each application contributes to understanding the behavior of list prediction. We observe that the benefit of list prediction over a single prediction is significant, irrespective of the application.
Tallavajhula, Abhijeet and Choudhury, Sanjiban and Scherer, Sebastian and Kelly, Alonzo, "List prediction applied to motion planning," Proceedings - IEEE International Conference on Robotics and Automation, 2016.
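As a rough sketch of the list-prediction idea above (each predictor in the list specializes on environments the earlier ones serve poorly), the following hypothetical training loop weights examples by the loss the current list still incurs. `fit` and `loss_of` are stand-ins for a concrete learner and planning loss, not ConseqOpt's actual interface.

```python
import numpy as np

def train_list(environments, base_loss, fit, loss_of, slots=3):
    # environments: training environments; base_loss: per-environment loss
    # before any prediction is made (defines the initial residual).
    # fit(environments, weights) -> predictor trained with sample weights.
    # loss_of(h, env) -> loss of predictor h on env.
    predictors = []
    residual = np.asarray(base_loss, dtype=float)  # best loss achieved so far
    for _ in range(slots):
        weights = residual / residual.sum()        # emphasize poorly-served regions
        h = fit(environments, weights)
        predictors.append(h)
        losses = np.array([loss_of(h, e) for e in environments])
        residual = np.minimum(residual, losses)    # a list scores its best member
        if residual.sum() < 1e-12:
            break
    return predictors
```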
- Real-time 3D scene layout from a single image using Convolutional Neural Networks.By Yang, S., Maturana, D. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Stockholm, Sweden, pp. 2183–2189, 2016.
@inproceedings{yang2016real, title = {Real-time {3D} scene layout from a single image using Convolutional Neural Networks}, author = {Yang, Shichao and Maturana, Daniel and Scherer, Sebastian}, year = {2016}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Stockholm, Sweden}, pages = {2183--2189}, doi = {10.1109/ICRA.2016.7487368}, isbn = {9781467380263}, issn = {10504729} }
We consider the problem of understanding the 3D layout of indoor corridor scenes from a single image in real time. Identifying obstacles such as walls is essential for robot navigation, but also challenging due to the diversity in structure, appearance and illumination of real-world corridor scenes. Many current single-image methods make Manhattan-world assumptions, and break down in environments that do not fit this mold. They also may require complicated hand-designed features for image segmentation or clear boundaries to form certain building models. In addition, most cannot run in real time. In this paper, we propose to combine machine learning with geometric modelling to build a simplified 3D model from a single image. We first employ a supervised Convolutional Neural Network (CNN) to provide a dense, but coarse, geometric class labelling of the scene. We then refine this labelling with a fully connected Conditional Random Field (CRF). Finally, we fit line segments along wall-ground boundaries and “pop up” a 3D model using geometric constraints. We assemble a dataset of 967 labelled corridor images. Our experiments on this dataset and another publicly available dataset show our method outperforms other single image scene understanding methods in pixelwise accuracy while labelling images at over 15 Hz.
Yang, Shichao and Maturana, Daniel and Scherer, Sebastian, "Real-time 3D scene layout from a single image using Convolutional Neural Networks," Proceedings - IEEE International Conference on Robotics and Automation, 2016.
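The "pop up" step in the entry above converts labelled wall-ground boundary pixels into a 3D model. A minimal sketch of that geometry, under assumed camera conventions (+z forward, +y down, known mounting height); `popup_ground_point` is a hypothetical helper, not the paper's code:

```python
import numpy as np

def popup_ground_point(u, v, K, cam_height=1.5):
    # Back-project pixel (u, v) and intersect its viewing ray with the ground
    # plane y = cam_height (camera +y points down, so the floor is below).
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = cam_height / ray[1]        # scale so the ray reaches the ground plane
    return t * ray                 # 3D point on the floor, in metres

# Illustrative intrinsics and a pixel below the principal point:
# K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
# popup_ground_point(320, 400, K)  # a floor point in front of the camera
```

Walls are then "popped up" by extruding vertical planes from the chain of such boundary points.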
- Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments.By Yang, S., Song, Y., Kaess, M. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Daejeon, Korea, pp. 1222–1229, 2016.
@inproceedings{yang2016pop, title = {Pop-up {SLAM}: {Semantic} monocular plane {SLAM} for low-texture environments}, author = {Yang, Shichao and Song, Yu and Kaess, Michael and Scherer, Sebastian}, year = {2016}, month = nov, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Daejeon, Korea}, pages = {1222--1229}, doi = {10.1109/IROS.2016.7759204}, isbn = {9781509037629}, issn = {21530866}, url = {https://arxiv.org/pdf/1703.07334} }
Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize plane or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
Yang, Shichao and Song, Yu and Kaess, Michael and Scherer, Sebastian, "Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments," IEEE International Conference on Intelligent Robots and Systems, 2016.
2015
- Online safety verification of trajectories for unmanned flight with offline computed robust invariant sets.By Althoff, D., Althoff, M. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 3470–3477, 2015.
@inproceedings{althoff2015online, title = {Online safety verification of trajectories for unmanned flight with offline computed robust invariant sets}, author = {Althoff, Daniel and Althoff, Matthias and Scherer, Sebastian}, year = {2015}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Hamburg, Germany}, pages = {3470--3477}, doi = {10.1109/IROS.2015.7353861}, isbn = {9781479999941}, issn = {21530866}, url = {https://mediatum.ub.tum.de/doc/1280463/618197.pdf} }
We address the problem of verifying motion plans for aerial robots in uncertain and partially-known environments. The initial state of the robot is uncertain due to state estimation errors, and the motion is uncertain due to wind disturbances and control errors caused by sensor noise. Since the environment is perceived at runtime, the verification of partial motion plans must be performed online (i.e. during operation) to ensure safety within the planning horizon and beyond. This is achieved by efficiently generating robust control invariant sets based on so-called loiter circles, where the position of the aerial robot follows a circular pattern. Verification of aerial robots is challenging due to the nonlinearity of their dynamics, the high dimensionality of their state space, and their potentially high velocities. We use novel techniques from reachability analysis to overcome those challenges. In order to ensure that the robot never finds itself in a situation for which no safe maneuver exists, we provide a technique that ensures safety of aerial robots beyond the planning horizon. Our method is applicable to all kinds of robotic systems that follow reference trajectories, such as bipedal robotic walking, robotic manipulators, automated vehicles, and the like. We evaluate our method by simulations of high speed helicopter flights.
Althoff, Daniel and Althoff, Matthias and Scherer, Sebastian, "Online safety verification of trajectories for unmanned flight with offline computed robust invariant sets," IEEE International Conference on Intelligent Robots and Systems, 2015.
- Connected invariant sets for high-speed motion planning in partially-known environments.By Althoff, D. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 3279–3285, 2015.
@inproceedings{althoff2015connected, title = {Connected invariant sets for high-speed motion planning in partially-known environments}, author = {Althoff, Daniel and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {3279--3285}, doi = {10.1109/ICRA.2015.7139651}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/3/final.pdf} }
Ensuring safety in partially-known environments is a critical problem in robotics since the environment is perceived through sensors and the environment cannot be completely known ahead of time. Prior work has considered the problem of finding positive control invariant sets (PCIS). However, this approach limits the planning horizon of the motion planner since the PCIS must lie completely in the limited known part of the environment. Here we consider the problem of guaranteeing safety by ensuring the existence of at least one PCIS in partially-known environments, leading to an extension of the PCIS concept. It is shown that this novel method is less conservative than the common PCIS approach and robust to small unknown obstacles that might appear in the close vicinity of the robot. An example implementation for loiter circles and power line obstacles is presented. Simulation scenarios are used for validating the proposed concept.
Althoff, Daniel and Scherer, Sebastian, "Connected invariant sets for high-speed motion planning in partially-known environments," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
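Both Althoff entries above hinge on verifying that a loiter-circle invariant set lies entirely inside known free space. A toy version of that online check, where `is_known_free` is a hypothetical occupancy-map query (unknown space must be treated as unsafe for the guarantee to hold):

```python
import numpy as np

def loiter_circle_safe(center, radius, is_known_free, margin=1.0, n=72):
    # is_known_free(point, margin) -> True iff the map verifies that `point`
    # lies in known obstacle-free space with clearance `margin`; this is a
    # stand-in for a real occupancy-map lookup.
    for a in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        p = center + radius * np.array([np.cos(a), np.sin(a)])
        if not is_known_free(p, margin):
            return False
    return True
```

A planner then accepts a partial trajectory only if at least one such verified circle remains reachable from its end state.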
- Autonomous Semantic Exploration Using Unmanned Aerial Vehicles.By Arora, S., Dubey, G., Jain, S., Maturana, D., Song, Y., Nuske, S. and Scherer, S.In Workshop on Vision-based Control and Navigation of Small Lightweight UAVs, IROS 2015, 2015.
@inproceedings{arora2015autonomous, title = {Autonomous Semantic Exploration Using Unmanned Aerial Vehicles}, author = {Arora, S and Dubey, G and Jain, S and Maturana, D and Song, Y and Nuske, S and Scherer, Sebastian}, year = {2015}, month = oct, booktitle = {Workshop on Vision-based Control and Navigation of Small Lightweight UAVs, IROS 2015} }
Arora, S and Dubey, G and Jain, S and Maturana, D and Song, Y and Nuske, S and Scherer, Sebastian, "Autonomous Semantic Exploration Using Unmanned Aerial Vehicles," Workshop on Vision-based Control and Navigation of Small Lightweight UAVs, IROS 2015, 2015.
- Emergency maneuver library - Ensuring safe navigation in partially known environments.By Arora, S., Choudhury, S., Althoff, D. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 6431–6438, 2015.
@inproceedings{arora2015emergency, title = {Emergency maneuver library - {Ensuring} safe navigation in partially known environments}, author = {Arora, Sankalp and Choudhury, Sanjiban and Althoff, Daniel and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {6431--6438}, doi = {10.1109/ICRA.2015.7140102}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/5/paper_small.pdf} }
Autonomous mobile robots are required to operate in partially known and unstructured environments. It is imperative to guarantee safety of such systems for their successful deployment. The current state of the art does not fully exploit the sensor and dynamic capabilities of a robot. Also, for non-holonomic systems with non-linear dynamic constraints, it becomes computationally infeasible to find an optimal solution if the full dynamics are to be exploited online. In this paper we present an online algorithm to guarantee the safety of the robot through an emergency maneuver library. The maneuvers in the emergency maneuver library are optimized such that the probability of finding an emergency maneuver that lies in the known obstacle free space is maximized. We prove that the related trajectory set diversity problem is monotonic and sub-modular, which enables one to develop an efficient trajectory set generation algorithm with bounded sub-optimality. We generate an off-line computed trajectory set that exploits the full dynamics of the robot and the known obstacle-free region. We test and validate the algorithm on a full-size autonomous helicopter flying at speeds of up to 56 m/s in partially-known environments. We present results from 4 months of flight testing where the helicopter has been avoiding trees, performing autonomous landing, and avoiding mountains while being guaranteed safe.
Arora, Sankalp and Choudhury, Sanjiban and Althoff, Daniel and Scherer, Sebastian, "Emergency maneuver library - Ensuring safe navigation in partially known environments," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
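Because the abstract above proves the trajectory-set diversity objective is monotone and submodular, greedy selection with the usual (1 - 1/e) suboptimality bound is the natural construction. A schematic version, where `coverage` is a stand-in for the estimated probability that at least one maneuver in the set stays in known free space:

```python
def greedy_maneuver_library(candidates, coverage, k=10):
    # coverage(S) -> estimated probability (over sampled worlds) that at
    # least one maneuver in S is collision-free; assumed monotone submodular,
    # which is what gives the greedy choice its (1 - 1/e) guarantee.
    library = []
    for _ in range(k):
        remaining = [t for t in candidates if t not in library]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda t: coverage(library + [t]) - coverage(library))
        library.append(best)
    return library
```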
- PASP: Policy based approach for sensor planning.By Arora, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, pp. 3479–3486, 2015.
@inproceedings{arora2015pasp, title = {{PASP: P}olicy based approach for sensor planning}, author = {Arora, Sankalp and Scherer, Sebastian}, year = {2015}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA}, pages = {3479--3486}, doi = {10.1109/ICRA.2015.7139680}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/5/sensor_planner_revision.pdf} }
The capabilities of mobile autonomous systems are often limited by sensory constraints. Range sensors moving in a fixed pattern are commonly used as sensing modalities on mobile robots. The performance of these sensors can be augmented by actively controlling their configuration to minimize the expected cost of the mission. The related information gain problem is NP-hard. Current methodologies are either computationally too expensive to run online or make simplifying assumptions that fail in complex environments. We present a method to create and learn a policy that maps features calculated online to sensory actions. The policy developed in this work actively controls a nodding lidar to keep the vehicle safe at high velocities and focuses the sensor bandwidth on gaining information relevant for the mission once safety is ensured. It is validated and evaluated on an autonomous full-scale helicopter (Boeing Unmanned Little Bird) equipped with an actively controlled nodding laser. It is able to keep the vehicle safe at its maximum operating velocity, 56 m/s, and reduces the landing zone evaluation time by a factor of 5 compared to passive nodding. The structure of the policy and the efficient learning algorithm should generalize to provide a solution for actively controlling a sensor to keep a mobile robot safe while exploring regions of interest.
Arora, Sankalp and Scherer, Sebastian, "PASP: Policy based approach for sensor planning," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
- The planner ensemble: Motion planning by executing diverse algorithms.By Choudhury, S., Arora, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 2389–2395, 2015.
@inproceedings{choudhury2015planner, title = {The planner ensemble: {Motion} planning by executing diverse algorithms}, author = {Choudhury, Sanjiban and Arora, Sankalp and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {2389--2395}, doi = {10.1109/ICRA.2015.7139517}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/5/main1.pdf} }
Autonomous systems that navigate in unknown environments encounter a variety of planning problems. The success of any one particular planning strategy depends on the validity of assumptions it leverages about the structure of the problem, e.g., Is the cost map locally convex? Does the feasible state space have good connectivity? We address the problem of determining suitable motion planning strategies that can work on a diverse set of applications. We have developed a planning system that does this by running competing planners in parallel. In this paper, we present an approach that constructs a planner ensemble - a set of complementary planners that leverage a diverse set of assumptions. Our approach optimizes the submodular selection criteria with a greedy approach and lazy evaluation. We seed our selection with learnt priors on planner performance, thus allowing us to solve new applications without evaluating every planner on that application. We present results in simulation where the selected ensemble outperforms the best single planner and does almost as well as an off-line planner. We also present results from an autonomous helicopter that has flown missions several kilometers long at speeds of up to 56 m/s, which involved avoiding unmapped mountains, no-fly zones and landing in cluttered areas with trees and buildings. This work opens the door to the more general problem of adaptive motion planning.
Choudhury, Sanjiban and Arora, Sankalp and Scherer, Sebastian, "The planner ensemble: Motion planning by executing diverse algorithms," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
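The ensemble selection described above combines greedy optimization of a submodular criterion with lazy evaluation. A generic lazy-greedy sketch (not the paper's implementation), where `gain(S, p)` stands in for the marginal benefit of adding planner `p` to ensemble `S`, seeded e.g. from learnt performance priors:

```python
import heapq

def lazy_greedy_ensemble(planners, gain, k):
    # Lazy evaluation: stale upper bounds are only recomputed when popped,
    # which is valid because submodularity makes marginal gains shrink as
    # the ensemble grows.
    S = []
    heap = [(-gain([], p), 0, i, p) for i, p in enumerate(planners)]
    heapq.heapify(heap)
    while len(S) < k and heap:
        neg_g, stamp, i, p = heapq.heappop(heap)
        if stamp == len(S):          # bound computed against current S: accept
            S.append(p)
        else:                        # stale bound: refresh and reinsert
            heapq.heappush(heap, (-gain(S, p), len(S), i, p))
    return S
```

The integer tiebreaker `i` keeps the heap from ever comparing planner objects directly.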
- Theoretical limits of speed and resolution for kinodynamic planning in a Poisson forest.By Choudhury, S., Scherer, S. and Bagnell, J.A.In Robotics: Science and Systems, vol. 11, 2015.
@inproceedings{choudhury2015theoretical, title = {Theoretical limits of speed and resolution for kinodynamic planning in a {Poisson} forest}, author = {Choudhury, Sanjiban and Scherer, Sebastian and Bagnell, J. Andrew}, year = {2015}, booktitle = {Robotics: Science and Systems}, volume = {11}, doi = {10.15607/RSS.2015.XI.005}, isbn = {9780992374716}, issn = {2330765X}, url = {https://www.ri.cmu.edu/pub_files/2015/7/camera_ready_paperid125.pdf} }
The performance of a state lattice motion planning algorithm depends critically on the resolution of the lattice to ensure a balance between solution quality and computation time. There is currently no theoretical basis for selecting the resolution because of its dependence on the robot dynamics and the distribution of obstacles. In this paper, we examine the problem of motion planning on a resolution constrained lattice for a robot with non-linear dynamics operating in an environment with randomly generated disc shaped obstacles sampled from a homogeneous Poisson process. We present a unified framework for computing explicit solutions to two problems: i) the critical planning resolution which guarantees the existence of an infinite collision free trajectory in the search graph, and ii) the critical speed limit which guarantees infinite collision free motion. In contrast to techniques used by Karaman and Frazzoli [11], we use a novel approach that maps the problem to parameters of directed asymmetric hexagonal lattice bond percolation. Since standard percolation theory offers no results for this lattice, we map the lattice to an infinite absorbing Markov chain and use results pertaining to its survival to obtain bounds on the parameters. As a result, we are able to derive theoretical expressions that relate the non-linear dynamics of a robot, the resolution of the search graph and the density of the Poisson process. We validate the theoretical bounds using Monte-Carlo simulations for single integrator and curvature constrained systems and are able to validate the previous results presented by Karaman and Frazzoli [11] independently using the novel connections introduced in this paper.
Choudhury, Sanjiban and Scherer, Sebastian and Bagnell, J. Andrew, "Theoretical limits of speed and resolution for kinodynamic planning in a Poisson forest," Robotics: Science and Systems, 2015.
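A crude Monte-Carlo experiment in the spirit of the paper's validation: discretize the Poisson forest into lattice cells, mark each cell open with the probability that no obstacle center falls in it, and test whether directed (always-advancing) paths survive. All parameters are illustrative; this reproduces the flavor of the percolation analysis, not its bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def survives(rho, cell=1.0, width=200, depth=400):
    # A cell of side `cell` contains no obstacle centers with probability
    # exp(-rho * cell^2) under a Poisson process of density rho.
    p_open = np.exp(-rho * cell * cell)
    open_cells = rng.random((depth, width)) < p_open
    alive = open_cells[0].copy()
    for k in range(1, depth):
        # A cell stays reachable if it is open and any of the three cells
        # behind it (straight, left, right) was reachable: directed percolation.
        behind = alive | np.roll(alive, 1) | np.roll(alive, -1)
        alive = open_cells[k] & behind
        if not alive.any():
            return False           # every directed path is blocked
    return True

# Survival frequency at an illustrative density:
# np.mean([survives(0.05) for _ in range(50)])
```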
- The Dynamics Projection Filter (DPF) - Real-time nonlinear trajectory optimization using projection operators.By Choudhury, S. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 644–649, 2015.
@inproceedings{choudhury2015dynamics, title = {The Dynamics Projection Filter {(DPF)} - Real-time nonlinear trajectory optimization using projection operators}, author = {Choudhury, Sanjiban and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {644--649}, doi = {10.1109/ICRA.2015.7139247}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/5/main.pdf} }
Robotic navigation applications often require on-line generation of trajectories that respect underactuated non-linear dynamics, while optimizing a cost function that depends only on a low-dimensional workspace (collision avoidance). Approaches to non-linear optimization, such as differential dynamic programming (DDP), suffer from slow convergence, being limited to the trust region of the linearized dynamics and having to integrate the dynamics with fine granularity at each iteration. We address the problem of decoupling the workspace optimization from the enforcement of non-linear constraints. In this paper, we introduce the Dynamics Projection Filter, a nonlinear projection operator based approach that first optimizes a workspace trajectory with reduced constraints and then projects (filters) it to a feasible configuration space trajectory that has a bounded sub-optimality guarantee. We show simulation results for various curvature and curvature-derivative constrained systems, where the dynamics projection filter is able to, on average, produce solutions of similar quality 50 times faster than DDP. We also show results from flight tests on an autonomous helicopter that solved these problems on-line while avoiding mountains at high speed as well as trees and buildings as it came in to land.
Choudhury, Sanjiban and Scherer, Sebastian, "The Dynamics Projection Filter (DPF) - Real-time nonlinear trajectory optimization using projection operators," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
- Mixed-Initiative Control of a Roadable Air Vehicle for Non-Pilots.By Dorneich, M.C., Letsu-Dake, E., Singh, S., Scherer, S., Chamberlain, L. and Bergerman, M.In Journal of Human-Robot Interaction, vol. 4, no. 3, p. 38, Jan. 2015.
@article{dorneich2015mixed, title = {Mixed-Initiative Control of a Roadable Air Vehicle for Non-Pilots}, author = {Dorneich, Michael Christian and Letsu-Dake, Emmanuel and Singh, Sanjiv and Scherer, Sebastian and Chamberlain, Lyle and Bergerman, Marcel}, year = {2015}, month = jan, journal = {Journal of Human-Robot Interaction}, volume = {4}, number = {3}, pages = {38}, doi = {10.5898/jhri.4.3.dorneich}, issn = {2163-0364}, url = {https://www.ri.cmu.edu/pub_files/2015/12/JHRI2015.pdf} }
This work developed and evaluated a human-machine interface for the control of a roadable air vehicle (RAV), capable of surface driving, vertical takeoff, sustained flight, and landing. Military applications seek to combine the benefits of ground and air vehicles to maximize flexibility of movement but require that the operator have minimal pilot training. This makes the operator vulnerable to automation complexity issues; however, the operator will expect to be able to interact extensively and control the vehicle during flight. A mixed-initiative control approach mitigates these vulnerabilities by integrating the operator into many complex control domains in the way that they often expect—flexibly in charge, aware, but not required to issue every command. Intrinsic safety aspects were evaluated by comparing performance, decision making, precision, and workload for three RAV control paradigms: human-only, fully automated, and mixed-initiative control. The results suggest that the mixed-initiative paradigm leverages the benefits of human and automated control while also avoiding the drawbacks associated with each.
Dorneich, Michael Christian and Letsu-Dake, Emmanuel and Singh, Sanjiv and Scherer, Sebastian and Chamberlain, Lyle and Bergerman, Marcel, "Mixed-Initiative Control of a Roadable Air Vehicle for Non-Pilots," Journal of Human-Robot Interaction, 2015.
- Real-time onboard 6DoF localization of an indoor MAV in degraded visual environments using a RGB-D camera.By Fang, Z. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 5253–5259, 2015.
@inproceedings{fang2015real, title = {Real-time onboard {6DoF} localization of an indoor {MAV} in degraded visual environments using a {RGB-D} camera}, author = {Fang, Zheng and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {5253--5259}, doi = {10.1109/ICRA.2015.7139931}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/5/ICRA15_1722_FI.pdf} }
Real-time and reliable localization is a prerequisite for autonomously performing high-level tasks with micro aerial vehicles (MAVs). Nowadays, most existing methods use vision systems for 6DoF pose estimation, which cannot work in degraded visual environments. This paper presents an onboard 6DoF pose estimation method for an indoor MAV in challenging GPS-denied, degraded visual environments using an RGB-D camera. In our system, depth images are mainly used for odometry estimation and localization. First, a fast and robust relative pose estimation (6DoF odometry) method is proposed, which uses the range rate constraint equation and a photometric error metric to get the frame-to-frame transform. Then, an absolute pose estimation (6DoF localization) method is proposed to locate the MAV in a given 3D global map using a particle filter. The whole localization system can run in real-time on an embedded computer with low CPU usage. We demonstrate the effectiveness of our system in extensive real environments on a customized MAV platform. The experimental results show that our localization system can robustly and accurately locate the robot in various practical challenging environments.
Fang, Zheng and Scherer, Sebastian, "Real-time onboard 6DoF localization of an indoor MAV in degraded visual environments using a RGB-D camera," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
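The localization layer above is a particle filter over a known 3D map, driven by depth-based odometry. A bare-bones sketch of one filter step, where `ray_cast` is a hypothetical stand-in for rendering expected depth readings from the map, and poses are treated as additive vectors purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_update(particles, weights, odom_delta, z, ray_cast, world_map,
              motion_noise=0.02, sigma=0.15):
    # particles: (N, d) pose hypotheses; weights: (N,), summing to 1.
    # odom_delta: frame-to-frame motion from the depth odometry front end.
    # ray_cast(pose, world_map) -> expected depth features for that pose.
    particles = particles + odom_delta \
        + motion_noise * rng.standard_normal(particles.shape)
    err = np.array([np.linalg.norm(z - ray_cast(p, world_map))
                    for p in particles])
    weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
    weights = weights / weights.sum()
    # Low-variance behavior: resample only when the effective sample
    # size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```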
- Autonomous river exploration.By Jain, S., Nuske, S., Chambers, A., Yoder, L., Cover, H., Chamberlain, L., Scherer, S. and Singh, S.In Springer Tracts in Advanced Robotics, Brisbane, Australia, vol. 105, pp. 93–106, 2015.
@inproceedings{jain2015autonomous, title = {Autonomous river exploration}, author = {Jain, Sezal and Nuske, Stephen and Chambers, Andrew and Yoder, Luke and Cover, Hugh and Chamberlain, Lyle and Scherer, Sebastian and Singh, Sanjiv}, year = {2015}, month = dec, booktitle = {Springer Tracts in Advanced Robotics}, address = {Brisbane, Australia}, volume = {105}, pages = {93--106}, doi = {10.1007/978-3-319-07488-7_7}, isbn = {9783319074870}, issn = {1610742X}, url = {https://www.ri.cmu.edu/pub_files/2013/12/AutonomousRiverExploration.pdf} }
Mapping a river’s course and width provides valuable information to help understand the ecology, topology and health of a particular environment. Such maps can also be useful to determine whether specific surface vessels can traverse the river. While rivers can be mapped from satellite imagery, the presence of vegetation, sometimes so thick that the canopy completely occludes the river, complicates the process of mapping. Here we propose the use of a micro air vehicle flying under the canopy to create accurate maps of the environment. We study and present a system that can autonomously explore rivers without any prior information, and demonstrate an algorithm that can guide the vehicle based upon local sensors mounted on board the flying vehicle that perceive the river, banks and obstacles. Our field experiments demonstrate what we believe is the first autonomous exploration of rivers. We show the 3D maps produced by our system over runs of 100-450 meters in length and compare guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers.
Jain, Sezal and Nuske, Stephen and Chambers, Andrew and Yoder, Luke and Cover, Hugh and Chamberlain, Lyle and Scherer, Sebastian and Singh, Sanjiv, "Autonomous river exploration," Springer Tracts in Advanced Robotics, 2015.
- Recognition of human group activity for video analytics.By Ju, J., Yang, C., Scherer, S. and Ko, H.In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9315, Y.-S. Ho, J. Sang, Y. M. Ro, J. Kim, and F. Wu, Eds. Cham: Springer, 2015, pp. 161–169
@incollection{ju2015recognition, title = {Recognition of human group activity for video analytics}, author = {Ju, Jaeyong and Yang, Cheoljong and Scherer, Sebastian and Ko, Hanseok}, year = {2015}, month = sep, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, publisher = {Springer}, address = {Cham}, volume = {9315}, pages = {161--169}, doi = {10.1007/978-3-319-24078-7_16}, isbn = {9783319240770}, issn = {16113349}, editor = {Ho, Yo-Sung and Sang, Jitao and Ro, Yong Man and Kim, Junmo and Wu, Fei}, keywords = {Activity recognition,Human group activity,Video analytics} }
Human activity recognition is an important and challenging task for video content analysis and understanding. Individual activity recognition has been well studied recently. However, recognizing the activities of human groups with more than three people having complex interactions is still a formidable challenge. In this paper, a novel human group activity recognition method is proposed to deal with complex situations where there are multiple sub-groups. To characterize the inherent interactions of intra-subgroups and inter-subgroups with a varying number of participants, this paper proposes three types of group-activity descriptors using motion trajectory and appearance information of people. Experimental results on a public human group activity dataset demonstrate the effectiveness of the proposed method.
Ju, Jaeyong and Yang, Cheoljong and Scherer, Sebastian and Ko, Hanseok, "Recognition of human group activity for video analytics," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2015.
- 3D Convolutional Neural Networks for landing zone detection from LiDAR.By Maturana, D. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 3471–3478, 2015.
@inproceedings{maturana20153d, title = {{3D} Convolutional Neural Networks for landing zone detection from {LiDAR}}, author = {Maturana, Daniel and Scherer, Sebastian}, year = {2015}, month = jun, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Seattle, WA, USA}, pages = {3471--3478}, doi = {10.1109/ICRA.2015.7139679}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2015/3/maturana-root.pdf} }
We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters from LiDAR point clouds. Current methods for this problem rely on heuristic rules and use simple geometric features. These heuristics break down in the presence of low vegetation, as they do not distinguish between vegetation that may be landed on and solid objects that should be avoided. We evaluate the system with a combination of real and synthetic range data. We show our system outperforms various benchmarks, including a system integrating various hand-crafted point cloud features from the literature.
Maturana, Daniel and Scherer, Sebastian, "3D Convolutional Neural Networks for landing zone detection from LiDAR," Proceedings - IEEE International Conference on Robotics and Automation, 2015.
- VoxNet: A 3D Convolutional Neural Network for real-time object recognition.By Maturana, D. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 922–928, 2015.
@inproceedings{maturana2015voxnet, title = {{VoxNet:} {A} {3D} Convolutional Neural Network for real-time object recognition}, author = {Maturana, Daniel and Scherer, Sebastian}, year = {2015}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Hamburg, Germany}, pages = {922--928}, doi = {10.1109/IROS.2015.7353481}, isbn = {9781479999941}, issn = {21530866}, url = {https://www.ri.cmu.edu/pub_files/2015/9/voxnet_maturana_scherer_iros15.pdf} }
Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.
Maturana, Daniel and Scherer, Sebastian, "VoxNet: A 3D Convolutional Neural Network for real-time object recognition," IEEE International Conference on Intelligent Robots and Systems, 2015.
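The VoxNet layout described above (a 32^3 occupancy grid passed through two 3D convolutions and two fully connected layers) is compact enough to write down. The PyTorch sketch below is a best-effort reconstruction from the paper's description, not the released code; the class count and activation details are assumptions.

```python
import torch
import torch.nn as nn

class VoxNet(nn.Module):
    def __init__(self, n_classes=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2),  # 32^3 -> 14^3
            nn.LeakyReLU(0.1),
            nn.Conv3d(32, 32, kernel_size=3),           # 14^3 -> 12^3
            nn.LeakyReLU(0.1),
            nn.MaxPool3d(2),                            # 12^3 -> 6^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6 * 6, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, 32, 32, 32) volumetric occupancy grid
        return self.classifier(self.features(x))

# logits = VoxNet()(torch.zeros(8, 1, 32, 32, 32))  # -> (8, 14)
```

The small footprint (a few hundred thousand parameters) is what makes the hundreds-of-instances-per-second labeling rate plausible on 2015-era hardware.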
- Autonomous exploration and motion planning for an unmanned aerial vehicle navigating rivers.By Nuske, S., Choudhury, S., Jain, S., Chambers, A., Yoder, L., Scherer, S., Chamberlain, L., Cover, H. and Singh, S.In Journal of Field Robotics, vol. 32, no. 8, pp. 1141–1162, 2015.
@article{nuske2015autonomous, title = {Autonomous exploration and motion planning for an unmanned aerial vehicle navigating rivers}, author = {Nuske, Stephen and Choudhury, Sanjiban and Jain, Sezal and Chambers, Andrew and Yoder, Luke and Scherer, Sebastian and Chamberlain, Lyle and Cover, Hugh and Singh, Sanjiv}, year = {2015}, journal = {Journal of Field Robotics}, volume = {32}, number = {8}, pages = {1141--1162}, doi = {10.1002/rob.21596}, issn = {15564967}, url = {https://www.ri.cmu.edu/pub_files/2015/6/nuske_jfr_2015.pdf} }
Mapping a river’s geometry provides valuable information to help understand the topology and health of an environment and deduce other attributes such as which types of surface vessels could traverse the river. While many rivers can be mapped from satellite imagery, smaller rivers that pass through dense vegetation are occluded. We develop a micro air vehicle (MAV) that operates beneath the tree line, detects and maps the river, and plans paths around three-dimensional (3D) obstacles (such as overhanging tree branches) to navigate rivers purely with onboard sensing, with no GPS and no prior map. We present the two enabling algorithms for exploration and for 3D motion planning. We extract high-level goal-points using a novel exploration algorithm that uses multiple layers of information to maximize the length of the river that is explored during a mission. We also present an efficient modification to the SPARTAN (Sparse Tangential Network) algorithm called SPARTAN-lite, which exploits geodesic properties on smooth manifolds of a tangential surface around obstacles to plan rapidly through free space. Using limited onboard resources, the exploration and planning algorithms together compute trajectories through complex unstructured and unknown terrain, a capability rarely demonstrated by flying vehicles operating over rivers or over ground. We evaluate our approach against commonly employed algorithms and compare guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers. We also present fully autonomous flights on riverine environments generating 3D maps over several hundred-meter stretches of tight winding rivers.
Nuske, Stephen and Choudhury, Sanjiban and Jain, Sezal and Chambers, Andrew and Yoder, Luke and Scherer, Sebastian and Chamberlain, Lyle and Cover, Hugh and Singh, Sanjiv, "Autonomous exploration and motion planning for an unmanned aerial vehicle navigating rivers," Journal of Field Robotics, 2015.
- Multi-Scale Convolutional Architecture for Semantic Segmentation.By Raj, A., Maturana, D. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-15-21, Oct-2015
@techreport{raj2015multi, title = {Multi-Scale Convolutional Architecture for Semantic Segmentation}, author = {Raj, Aman and Maturana, Daniel and Scherer, Sebastian}, year = {2015}, month = oct, address = {Pittsburgh, PA}, number = {CMU-RI-TR-15-21}, url = {https://www.ri.cmu.edu/pub{\_}files/2015/10/CMU-RI-TR{\_}AmanRaj{\_}revision2.pdf}, institution = {Carnegie Mellon University} }
Advances in 3D sensing technologies have made RGB and depth information more readily available than before, which can greatly assist in the semantic segmentation of 2D scenes. There are many works in the literature that perform semantic segmentation in such scenes, but few address environments that possess a high degree of clutter, e.g. indoor scenes. In this paper, we explore the use of depth information along with RGB and a deep convolutional network for indoor scene understanding through semantic labeling. Our work exploits the geocentric encoding of a depth image and uses a multi-scale deep convolutional neural network architecture that captures high and low-level features of a scene to generate rich semantic labels. We apply our method on indoor RGBD images from the NYUD2 dataset [1] and achieve a competitive performance of 70.45% accuracy in labeling four object classes compared with some prior approaches. The results show our system is capable of generating a pixel-map directly from an input image where each pixel-value corresponds to a particular class of object.
Raj, Aman and Maturana, Daniel and Scherer, Sebastian, "Multi-Scale Convolutional Architecture for Semantic Segmentation," Technical Report CMU-RI-TR-15-21, Carnegie Mellon University, 2015.
- High-precision Autonomous Flight in Constrained Shipboard Environments.By Yang, S., Fang, Z., Jain, S., Dubey, G., Maeta, S., Roth, S., Scherer, S., Zhang, Y. and Nuske, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-15-06, Feb-2015
@techreport{yang2015high, title = {High-precision Autonomous Flight in Constrained Shipboard Environments}, author = {Yang, Shichao and Fang, Zheng and Jain, Sezal and Dubey, Geetesh and Maeta, Silvio and Roth, Stephan and Scherer, Sebastian and Zhang, Yu and Nuske, Stephen}, year = {2015}, month = feb, address = {Pittsburgh, PA}, number = {CMU-RI-TR-15-06}, url = {https://www.ri.cmu.edu/pub{\_}files/2015/2/shipboard{\_}final{\_}report{\_}20151.pdf}, institution = {Carnegie Mellon University} }
This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) inside a constrained shipboard environment to aid in fire control, which might be perilous or inaccessible for humans. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways and small objects protruding from the wall, which makes existing 2D LIDAR, vision or mechanical bumper-based autonomous navigation solutions fail. To realize autonomous navigation in such challenging environments, we first propose a fast and robust state estimation algorithm that fuses estimates from a direct depth odometry method and a Monte Carlo localization algorithm with other sensor information in a two-level fusion framework. Then, an online motion planning algorithm that combines trajectory optimization with receding horizon control is proposed for fast obstacle avoidance. All the computations are done in real-time onboard our customized MAV platform. We validate the system by running experiments in different environmental conditions. The results of over 10 runs show that our vehicle robustly navigates 20 m long corridors only 1 m wide and goes through a very narrow doorway (only 4 cm clearance on each side) completely autonomously even when it is completely dark or full of light smoke.
Yang, Shichao and Fang, Zheng and Jain, Sezal and Dubey, Geetesh and Maeta, Silvio and Roth, Stephan and Scherer, Sebastian and Zhang, Yu and Nuske, Stephen, "High-precision Autonomous Flight in Constrained Shipboard Environments," Technical Report CMU-RI-TR-15-06, Carnegie Mellon University, 2015.
2014
- Visual Odometry in Smoke Occluded Environments.By Agarwal, A., Maturana, D. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-15-07, Jul-2014
@techreport{agarwal2014visual, title = {Visual Odometry in Smoke Occluded Environments}, author = {Agarwal, Aditya and Maturana, Daniel and Scherer, Sebastian}, year = {2014}, month = jul, address = {Pittsburgh, PA}, number = {CMU-RI-TR-15-07}, url = {https://www.ri.cmu.edu/pub{\_}files/2014/7/aditya{\_}tr.pdf}, institution = {Carnegie Mellon University} }
Agarwal, Aditya and Maturana, Daniel and Scherer, Sebastian, "Visual Odometry in Smoke Occluded Environments," Technical Report CMU-RI-TR-15-07, Carnegie Mellon University, 2014.
- A principled approach to enable safe and high performance maneuvers for autonomous rotorcraft.By Arora, S., Choudhury, S., Althoff, D. and Scherer, S.In Annual Forum Proceedings - AHS International, Montreal, CAN, vol. 4, pp. 3228–3236, 2014.
@inproceedings{arora2014principled, title = {A principled approach to enable safe and high performance maneuvers for autonomous rotorcraft}, author = {Arora, Sankalp and Choudhury, Sanjiban and Althoff, Daniel and Scherer, Sebastian}, year = {2014}, month = may, booktitle = {Annual Forum Proceedings - AHS International}, address = {Montreal, CAN}, volume = {4}, pages = {3228--3236}, isbn = {9781632666918}, issn = {15522938}, url = {https://www.ri.cmu.edu/pub_files/2014/5/AHS_safety.pdf} }
Autonomous rotorcraft are required to operate in cluttered, unknown, and unstructured environments. Guaranteeing the safety of these systems is critical for their successful deployment. Current methodologies for evaluating or ensuring safety either do not guarantee safety or severely limit the performance of rotorcraft. To design a guaranteed safe rotorcraft, we have defined safety for an autonomous rotorcraft flying in unknown environments given sensory and dynamic constraints. We have developed an approach that ensures the vehicle’s safety while pushing the limits of safe operation of the vehicle. Furthermore, the presented safety definition and the presented approach are independent of the vehicle and planning algorithm used on the rotorcraft. In this paper we present a real time algorithm to guarantee the safety of the rotorcraft through a diverse set of emergency maneuvers. We prove that the related trajectory set diversity problem is monotonic and sub-modular, which enables us to develop an efficient, bounded sub-optimal trajectory set generation algorithm. We present safety results for the autonomous Unmanned Little Bird Helicopter flying at speeds of up to 56 m/s in partially-known environments. Through months of flight testing the helicopter has been avoiding trees, performing autonomous landing, and avoiding mountains while being guaranteed safe. We also present simulation results of the helicopter flying in the Grand Canyon, with no prior map of the environment.
Arora, Sankalp and Choudhury, Sanjiban and Althoff, Daniel and Scherer, Sebastian, "A principled approach to enable safe and high performance maneuvers for autonomous rotorcraft," Annual Forum Proceedings - AHS International, 2014.
- Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments.By Chambers, A., Scherer, S., Yoder, L., Jain, S., Nuske, S. and Singh, S.In Proceedings of the American Control Conference, Portland, OR, pp. 1892–1899, 2014.
@inproceedings{chambers2014robust, title = {Robust multi-sensor fusion for micro aerial vehicle navigation in {GPS}-degraded/denied environments}, author = {Chambers, Andrew and Scherer, Sebastian and Yoder, Luke and Jain, Sezal and Nuske, Stephen and Singh, Sanjiv}, year = {2014}, booktitle = {Proceedings of the American Control Conference}, address = {Portland, OR}, pages = {1892--1899}, doi = {10.1109/ACC.2014.6859341}, isbn = {9781479932726}, issn = {07431619}, url = {https://www.ri.cmu.edu/pub_files/2014/6/acc-paper-small.pdf}, keywords = {Autonomous systems,Filtering,Vision-based control} }
State estimation for Micro Air Vehicles (MAVs) is challenging because sensing instrumentation carried on-board is severely limited by weight and power constraints. In addition, their use close to and inside structures and vegetation means that GPS signals can be degraded or altogether absent. Here we present a navigation system suited for use on MAVs that seamlessly fuses any combination of GPS, visual odometry, inertial measurements, and/or barometric pressure. We focus on robustness against real-world conditions and evaluate performance in challenging field experiments. Results demonstrate that the proposed approach is effective at providing a consistent state estimate even during multiple sensor failures and can be used for mapping, planning, and control.
Chambers, Andrew and Scherer, Sebastian and Yoder, Luke and Jain, Sezal and Nuske, Stephen and Singh, Sanjiv, "Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments," Proceedings of the American Control Conference, 2014.
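The fusion pattern described above (predict with inertial data, then apply whichever of GPS, visual odometry, or barometric updates happens to arrive) is the standard Kalman recipe. A deliberately tiny 1-D constant-velocity illustration, with all matrices and noise levels made up for the example and no claim to match the paper's filter:

```python
import numpy as np

def predict(x, P, accel, dt, q=0.5):
    # Propagate a [position, velocity] state with an acceleration input.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def update(x, P, z, H, r):
    # Scalar-measurement Kalman update; H picks out what the sensor observes.
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + K * (z - H @ x)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# Apply whichever measurement arrives this cycle, e.g.:
# position fix (GPS):        update(x, P, z_gps, H=np.array([1.0, 0.0]), r=4.0)
# velocity (visual odometry): update(x, P, z_vo,  H=np.array([0.0, 1.0]), r=0.25)
```

The robustness argument in the paper amounts to making each `update` optional: dropping any one sensor degrades the covariance but never stalls the filter.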
- The planner ensemble and trajectory executive: A high performance motion planning system with guaranteed safety.By Choudhury, S., Arora, S. and Scherer, S.In Annual Forum Proceedings - AHS International, Montreal, CAN, vol. 4, pp. 2872–2891, 2014.
@inproceedings{choudhury2014planner, title = {The planner ensemble and trajectory executive: A high performance motion planning system with guaranteed safety}, author = {Choudhury, Sanjiban and Arora, Sankalp and Scherer, Sebastian}, year = {2014}, month = may, booktitle = {Annual Forum Proceedings - AHS International}, address = {Montreal, CAN}, volume = {4}, pages = {2872--2891}, isbn = {9781632666918}, issn = {15522938}, url = {https://www.ri.cmu.edu/pub_files/2014/5/The_Planner%20Ensemble_and_Trajectory_Executive_small.pdf} }
Autonomous helicopters are required to fly at a wide range of speeds close to the ground and eventually land in an unprepared cluttered area. Existing planning systems for unmanned rotorcraft are capable of flying in unmapped environments; however, they are restricted to a specific operating regime dictated by the underlying planning algorithm. We address the problem of planning a trajectory that is computed in real time, respects the dynamics of the helicopter, and keeps the vehicle safe in an unmapped environment with a finite horizon sensor. We have developed a planning system that is capable of doing this by running competing planners in parallel. This paper presents a planning architecture that consists of a trajectory executive - a low latency, verifiable component - that selects plans from a planner ensemble and ensures safety by maintaining emergency maneuvers. Here we report results with an autonomous helicopter that flies missions several kilometers long through unmapped terrain at speeds of up to 56 m/s and lands in clutter. In over 6 months of flight testing, the system has avoided unmapped mountains and popup no-fly zones, and has come in to land while avoiding trees and buildings in a cluttered landing zone. We also present results from simulation where the same system is flown in challenging obstacle regions - in all cases the system always remains safe and accomplishes the mission. As a result, the system showcases the ability to achieve high performance in all environments while guaranteeing safety.
Choudhury, Sanjiban and Arora, Sankalp and Scherer, Sebastian, "The planner ensemble and trajectory executive: A high performance motion planning system with guaranteed safety," Annual Forum Proceedings - AHS International, 2014.
- Experimental study of odometry estimation methods using RGB-D cameras.By Fang, Z. and Scherer, S.In IEEE International Conference on Intelligent Robots and Systems, Chicago, IL, pp. 680–687, 2014.
@inproceedings{fang2014experimental, title = {Experimental study of odometry estimation methods using {RGB-D} cameras}, author = {Fang, Zheng and Scherer, Sebastian}, year = {2014}, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Chicago, IL}, pages = {680--687}, doi = {10.1109/IROS.2014.6942632}, isbn = {9781479969340}, issn = {21530866}, url = {https://www.ri.cmu.edu/pub_files/2014/9/0322.pdf} }
Lightweight RGB-D cameras that can provide rich 2D visual and 3D point cloud information are well suited to the motion estimation of indoor micro aerial vehicles (MAVs). In recent years, several RGB-D visual odometry methods which process data from the sensor in different ways have been proposed. However, it is unclear which methods are preferable for online odometry estimation on a computation-limited, fast moving MAV in practical indoor environments. This paper presents a detailed analysis and comparison of several state-of-the-art real-time odometry estimation methods in a variety of challenging scenarios, with a special emphasis on the trade-off among accuracy, robustness and computation speed. An experimental comparison is conducted using publicly available benchmark datasets and author-collected datasets including long corridors, illumination-changing environments and fast-motion scenarios. Experimental results present both quantitative and qualitative differences among these methods and provide some guidelines on choosing the ’right’ algorithm for an indoor MAV according to the quality of the RGB-D data and environment characteristics.
Fang, Zheng and Scherer, Sebastian, "Experimental study of odometry estimation methods using RGB-D cameras," IEEE International Conference on Intelligent Robots and Systems, 2014.
- Learning Motion Planning Assumptions.By Vemula, A., Choudhury, S. and Scherer, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-14-14, Aug-2014
@techreport{vemula2014learning, title = {Learning Motion Planning Assumptions}, author = {Vemula, Anirudh and Choudhury, Sanjiban and Scherer, Sebastian}, year = {2014}, month = aug, address = {Pittsburgh, PA}, number = {CMU-RI-TR-14-14}, pages = {11}, url = {https://www.ri.cmu.edu/pub_files/2014/8/LearningMotionPlanningAssumptions.pdf}, institution = {Carnegie Mellon University} }
The performance of a motion planning algorithm is intrinsically linked with applications that respect the assumptions being made. However, the mapping of these assumptions to actual environments is not always transparent. For example, a gradient descent algorithm is capable of tackling a complex optimization problem if the absence of bad local minima can be assured - however, detecting the local minima beforehand is very challenging. The state-of-the-art approach relies on an expert to analyze the application, deduce assumptions that the planner can leverage, and subsequently make key design decisions. In this work, we attempt to learn a mapping from environments to specific planning assumptions. This paper presents a diverse ensemble of planners that exploit very different aspects of the planning problem. A classifier is then trained to approximate the mapping from environment to the performance difference between a pair of planners. Preliminary results hint at the role played by convexity, whilst also demonstrating the difficulty of the classification task at hand.
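A toy sketch of the learning setup, assuming scikit-learn; the features, the labeling rule, and the choice of a random forest are placeholders, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))    # e.g., obstacle density, clutter convexity, gap width, ...
# Synthetic labeling rule standing in for "which planner won on this environment"
y = (0.7 * X[:, 0] + 0.3 * X[:, 1] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("predicted better planner for a new environment:", clf.predict(X[:1])[0])
```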
Vemula, Anirudh and Choudhury, Sanjiban and Scherer, Sebastian, "Learning Motion Planning Assumptions," Technical Report CMU-RI-TR-14-14, Carnegie Mellon University, 2014.
2013
- Infrastructure-free shipdeck tracking for autonomous landing.By Arora, S., Jain, S., Scherer, S., Nuske, S., Chamberlain, L. and Singh, S.In Proceedings - IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, pp. 323–330, 2013.
@inproceedings{arora2013infrastructure, title = {Infrastructure-free shipdeck tracking for autonomous landing}, author = {Arora, Sankalp and Jain, Sezal and Scherer, Sebastian and Nuske, Stephen and Chamberlain, Lyle and Singh, Sanjiv}, year = {2013}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Karlsruhe, Germany}, pages = {323--330}, doi = {10.1109/ICRA.2013.6630595}, isbn = {9781467356411}, issn = {10504729}, url = {https://kilthub.cmu.edu/articles/journal_contribution/Infrastructure-free_Shipdeck_Tracking_for_Autonomous_Landing/6555173} }
Shipdeck landing is one of the most challenging tasks for a rotorcraft. Current autonomous rotorcraft use shipdeck-mounted transponders to measure the relative pose of the vehicle to the landing pad. This tracking system is not only expensive but renders an unequipped ship unlandable. We address the challenge of tracking a shipdeck without additional infrastructure on the deck. We present two methods, based on video and on lidar, that are able to track the shipdeck starting at a considerable distance from the ship. This redundant sensor design gives us two independent tracking systems. We show the results of the tracking algorithms in three different environments: field-test results from actual helicopter flights, simulation with a moving shipdeck for lidar-based tracking, and laboratory experiments using an occluded, moving scaled model of a landing deck for camera-based tracking. The complementary modalities allow shipdeck tracking under varying conditions.
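A minimal sketch of the redundant-tracker arbitration idea (function names, the confidence convention, and the threshold are illustrative, not the paper's API):

```python
def fused_deck_pose(lidar_track, camera_track, min_conf=0.5):
    """Each argument is (pose, confidence) or None; prefer the more confident one."""
    tracks = [t for t in (lidar_track, camera_track)
              if t is not None and t[1] >= min_conf]
    if not tracks:
        return None            # no reliable estimate: hold off on the approach
    return max(tracks, key=lambda t: t[1])[0]
```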
Arora, Sankalp and Jain, Sezal and Scherer, Sebastian and Nuske, Stephen and Chamberlain, Lyle and Singh, Sanjiv, "Infrastructure-free shipdeck tracking for autonomous landing," Proceedings - IEEE International Conference on Robotics and Automation, 2013.
- Robocopters to the rescue.By Chamberlain, L. and Scherer, S.In IEEE Spectrum, vol. 50, no. 10, pp. 28–33, 2013.
@article{chamberlain2013robocopters, title = {Robocopters to the rescue}, author = {Chamberlain, Lyle and Scherer, Sebastian}, year = {2013}, journal = {IEEE Spectrum}, volume = {50}, number = {10}, pages = {28--33}, doi = {10.1109/MSPEC.2013.6607012}, issn = {00189235}, url = {https://spectrum.ieee.org/robocopters-to-the-rescue} }
We're standing on the edge of the hot Arizona tarmac, radio in hand, holding our breath as the helicopter passes 50 meters overhead. We watch as the precious sensor on its blunt nose scans every detail of the area, the test pilot and engineer looking down with coolly professional curiosity as they wait for the helicopter to decide where to land. They're just onboard observers. The helicopter itself is in charge here.
Chamberlain, Lyle and Scherer, Sebastian, "Robocopters to the rescue," IEEE Spectrum, 2013.
- Autonomous emergency landing of a helicopter: Motion planning with hard time-constraints.By Choudhury, S., Scherer, S. and Singh, S.In Annual Forum Proceedings - AHS International, Phoenix, AZ, vol. 3, pp. 2236–2249, 2013.
@inproceedings{choudhury2013autonomous, title = {Autonomous emergency landing of a helicopter: Motion planning with hard time-constraints}, author = {Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv}, year = {2013}, month = may, booktitle = {Annual Forum Proceedings - AHS International}, address = {Phoenix, AZ}, volume = {3}, pages = {2236--2249}, doi = {10.1109/ICRA.2013.6631133}, isbn = {9781467356411}, issn = {15522938}, url = {https://www.ri.cmu.edu/pub_files/2013/5/Autonomous_Emergency_Landing_of_a_Helicopter.pdf} }
Engine malfunctions during helicopter flight pose a large risk to pilot and crew. Without a quick and coordinated reaction, such situations lead to a complete loss of control. An autonomous landing system is capable of reacting quickly to regain control; however, current emergency landing methods focus only on the offline generation of dynamically feasible trajectories while ignoring the more severe constraints faced while autonomously landing a real helicopter during an unplanned engine failure. We address the problem of autonomously landing a helicopter while considering a realistic context: hard time constraints, challenging terrain, sensor limitations, and availability of pilot contextual knowledge. We designed a planning system that deals with all these factors by being able to compute alternate routes (AR) rapidly. This paper presents an algorithm, RRT*-AR, building upon the optimal sampling-based algorithm RRT* to generate ARs in real time while maintaining optimality guarantees, and examines its performance for simulated failures occurring in mountainous terrain. Over 4500 trials, RRT*-AR outperformed RRT* by providing the human 280% more options 67% faster on average. As a result, it provides a much wider safety margin for unaccounted-for disturbances and a more secure environment for the pilot.
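As a toy illustration of a hard time constraint, the sketch below filters candidate landing routes by whether they fit within the time the helicopter can stay airborne in autorotation; the descent-rate and speed numbers, and the route representation, are invented for illustration:

```python
def reachable_routes(routes, altitude_m, descent_rate_mps=7.5, speed_mps=30.0):
    """Keep only routes short enough to fly before autorotation time runs out.

    routes: iterable of dicts with an illustrative "length_m" key.
    """
    t_available = altitude_m / descent_rate_mps     # hard time budget after failure
    return [r for r in routes if r["length_m"] / speed_mps <= t_available]
```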
Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv, "Autonomous emergency landing of a helicopter: Motion planning with hard time-constraints," Annual Forum Proceedings - AHS International, 2013.
- Sparse Tangential Network (SPARTAN): Motion planning for micro aerial vehicles.By Cover, H., Choudhury, S., Scherer, S. and Singh, S.In Proceedings - IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, pp. 2820–2825, 2013.
@inproceedings{cover2013sparse, title = {Sparse Tangential Network {(SPARTAN)}: {Motion} planning for micro aerial vehicles}, author = {Cover, Hugh and Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv}, year = {2013}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Karlsruhe, Germany}, pages = {2820--2825}, doi = {10.1109/ICRA.2013.6630967}, isbn = {9781467356411}, issn = {10504729}, url = {https://www.researchgate.net/publication/261416149_Sparse_Tangential_Network_SPARTAN_Motion_planning_for_micro_aerial_vehicles} }
Micro aerial vehicles operating outdoors must be able to maneuver both through dense vegetation and across empty fields. Existing approaches do not exploit the nature of such an environment. We have designed an algorithm which plans rapidly through free space and is efficiently guided around obstacles. In this paper we present SPARTAN (Sparse Tangential Network), an approach that creates a sparsely connected graph across a tangential surface around obstacles. We find that SPARTAN can navigate a vehicle autonomously through an outdoor environment, producing plans 172 times faster than the state of the art (RRT*). As a result, SPARTAN can reliably deliver safe plans, with low latency, using the limited computational resources of a lightweight aerial vehicle.
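A coarse 2-D sketch of the idea, assuming circular obstacles: sample vertices tangentially around each obstacle, keep edges whose straight segments clear every obstacle, and run Dijkstra over the resulting sparse graph (all geometry, margins, and sampling densities are illustrative):

```python
import heapq
import itertools
import math

obstacles = [(5.0, 5.0, 1.5), (8.0, 2.0, 1.0)]   # (x, y, radius) discs

def clear(p, q, margin=0.2):
    """True if segment p-q stays `margin` away from every obstacle disc."""
    px, py = p
    qx, qy = q
    dx, dy = qx - px, qy - py
    for ox, oy, r in obstacles:
        t = max(0.0, min(1.0, ((ox - px) * dx + (oy - py) * dy) /
                              (dx * dx + dy * dy + 1e-12)))
        if math.hypot(px + t * dx - ox, py + t * dy - oy) < r + margin:
            return False
    return True

start, goal = (0.0, 0.0), (10.0, 6.0)
nodes = [start, goal] + [
    (ox + (r + 0.5) * math.cos(a), oy + (r + 0.5) * math.sin(a))
    for ox, oy, r in obstacles
    for a in (k * math.pi / 4 for k in range(8))      # 8 tangential samples each
]
edges = {p: [] for p in nodes}
for p, q in itertools.combinations(nodes, 2):
    if clear(p, q):
        w = math.dist(p, q)
        edges[p].append((q, w))
        edges[q].append((p, w))

# Dijkstra over the sparse tangential graph
dist, pq = {start: 0.0}, [(0.0, start)]
while pq:
    d, u = heapq.heappop(pq)
    if d > dist.get(u, math.inf):
        continue
    for v, w in edges[u]:
        if d + w < dist.get(v, math.inf):
            dist[v] = d + w
            heapq.heappush(pq, (d + w, v))
print("shortest obstacle-clearing path length:", dist.get(goal))
```

The sparsity is the point: the graph only has vertices where obstacles force a turn, so search cost scales with clutter rather than with the volume of free space.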
Cover, Hugh and Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv, "Sparse Tangential Network (SPARTAN): Motion planning for micro aerial vehicles," Proceedings - IEEE International Conference on Robotics and Automation, 2013.
- First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles.By Mori, T. and Scherer, S.In Proceedings - IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, pp. 1750–1757, 2013.
@inproceedings{mori2013first, title = {First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles}, author = {Mori, Tomoyuki and Scherer, Sebastian}, year = {2013}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Karlsruhe, Germany}, pages = {1750--1757}, doi = {10.1109/ICRA.2013.6630807}, isbn = {9781467356411}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2013/5/monocularObstacleAvoidance.pdf} }
Obstacle avoidance is desirable for lightweight micro aerial vehicles and is a challenging problem, since payload constraints only permit monocular cameras and obstacles cannot be directly observed. Depth can, however, be inferred from various cues in the image. Prior work has examined optical flow and perspective cues; however, these methods cannot handle frontal obstacles well. In this paper we examine the problem of detecting obstacles directly in front of the vehicle. We developed a method that detects relative size changes of image patches and is able to detect size changes even in the absence of optical flow. The method uses SURF feature matches in combination with template matching to compare relative obstacle sizes across image spacings. We present results from our algorithm in autonomous flight tests on a small quadrotor. We are able to detect obstacles with a frame-to-frame enlargement of 120% with high confidence, and confirmed our algorithm in 20 successful flight experiments. In future work, we will improve the control algorithms to avoid more complicated obstacle configurations.
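A simplified sketch of the size-expansion check: match a patch from the previous frame against the current frame at several scales and report the best-matching enlargement. The paper matches around SURF keypoints; this sketch scans one fixed grayscale patch with OpenCV template matching, and the 1.2 trigger mirrors the 120% figure above:

```python
import cv2
import numpy as np

def expansion_rate(prev_patch: np.ndarray, cur_frame: np.ndarray) -> float:
    """prev_patch, cur_frame: grayscale uint8 images; returns best-matching scale."""
    best_scale, best_score = 1.0, -1.0
    for s in np.linspace(1.0, 1.5, 11):                 # candidate enlargements
        templ = cv2.resize(prev_patch, None, fx=s, fy=s)
        if templ.shape[0] > cur_frame.shape[0] or templ.shape[1] > cur_frame.shape[1]:
            break
        score = cv2.matchTemplate(cur_frame, templ, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_scale, best_score = s, score
    return best_scale          # e.g., command an avoidance maneuver if >= 1.2
```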
Mori, Tomoyuki and Scherer, Sebastian, "First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles," Proceedings - IEEE International Conference on Robotics and Automation, 2013.
2012
- Realtime alternate routes planning: the RRT*-AR algorithm.By Choudhury, S., Scherer, S. and Singh, S.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-12-2, 2012
@techreport{choudhury2012realtime, title = {Realtime alternate routes planning: the {RRT*-AR} algorithm}, author = {Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv}, year = {2012}, address = {Pittsburgh, PA}, number = {CMU-RI-TR-12-2}, url = {http://repository.cmu.edu/robotics/918/}, institution = {Carnegie Mellon University} }
Motion planning in the most general sense is an optimization problem with a single elusive best solution. However, attempting to find a single answer is often not the most desirable approach. On the one hand, the reason is theoretical - planners often get trapped in local minima because the cost function has many valleys or the dynamics are too complex to fully exploit. On the other hand, there are many practical deterrents - unmapped obstacles might require the system to switch quickly to another plan, unmodelled dynamics can make a computed plan infeasible, or the system may have a human in the loop who has a vote in the decision process. In situations where the current plan is no longer desirable, a new plan has to be computed. The re-planning time induces a reaction latency which might result in mission failure. We advocate the use of alternate routes (AR), a set of spatially different, locally optimal paths, as a powerful tool to address several of the aforementioned issues. By enforcing the routes to be spatially separated, the appearance of unexpected obstacles has less chance of rendering all trajectories infeasible. In such cases, alternate routes act as a set of backup options which can be switched to instantly. This reduces reaction latency, allowing the system to operate with lower risk. This paper presents an algorithm, RRT*-AR, to generate alternate routes in real time by trading exploitation for exploration and precision for speed, and by leveraging assumptions about the vehicle and environment constraints. In the case of emergency landing of a helicopter, RRT*-AR outperformed RRT* by providing the human 280% more flight paths 67% faster on average. By planning multiple routes to potential landing zones, the planner was able to seamlessly switch to a new landing site without having to replan.
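A sketch of the spatial-separation criterion behind alternate routes; the distance measure and threshold below are illustrative stand-ins for the paper's definition:

```python
import numpy as np

def path_separation(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean nearest-waypoint distance between two (N, 3) paths."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def admit(candidate: np.ndarray, kept: list, tau: float = 25.0) -> bool:
    """Admit a candidate to the AR set only if it is far from every kept route."""
    return all(path_separation(candidate, r) >= tau for r in kept)
```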
Choudhury, Sanjiban and Scherer, Sebastian and Singh, Sanjiv, "Realtime alternate routes planning: the RRT*-AR algorithm," Technical Report CMU-RI-TR-12-2, Carnegie Mellon University, 2012.
- Autonomous landing at unprepared sites by a full-scale helicopter.By Scherer, S., Chamberlain, L. and Singh, S.In Robotics and Autonomous Systems, vol. 60, no. 12, pp. 1545–1562, Dec. 2012.
@article{scherer2012autonomous, title = {Autonomous landing at unprepared sites by a full-scale helicopter}, author = {Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv}, year = {2012}, month = dec, journal = {Robotics and Autonomous Systems}, volume = {60}, number = {12}, pages = {1545--1562}, doi = {10.1016/j.robot.2012.09.004}, issn = {09218890}, url = {https://www.ri.cmu.edu/pub_files/2012/9/1-s2.0-S0921889012001509-main.pdf}, keywords = {3D perception,Landing zone selection,Lidar,Rotorcraft,UAV} }
Helicopters are valuable since they can land at unprepared sites; however, current unmanned helicopters are unable to select or validate landing zones (LZs) and approach paths. For operation in unknown terrain it is necessary to assess the safety of an LZ. In this paper, we describe a lidar-based perception system that enables a full-scale autonomous helicopter to identify and land in previously unmapped terrain with no human input. We describe the problem, real-time algorithms, perception hardware, and results. Our approach has extended the state of the art in terrain assessment by incorporating not only plane fitting, but also factors such as terrain/skid interaction, rotor and tail clearance, wind direction, clear approach/abort paths, and ground paths. In results from urban and natural environments we were able to successfully classify LZs from point cloud maps. We also present results from 8 successful landing experiments with varying ground clutter and approach directions, in which the helicopter selected its own landing site and approach and then proceeded to land. To our knowledge, these experiments were the first demonstration of a full-scale autonomous helicopter that selected its own landing zones and landed.
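A minimal version of the coarse patch test in this pipeline, assuming an (N, 3) array of lidar returns per candidate patch; the slope and roughness thresholds are illustrative, not the paper's values:

```python
import numpy as np

def patch_ok(points: np.ndarray, max_slope_deg=5.0, max_rough_m=0.05) -> bool:
    """points: (N, 3) lidar returns inside one candidate terrain patch."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)   # plane normal = last right singular vector
    n = Vt[-1]
    slope = np.degrees(np.arccos(abs(n[2])))           # tilt from vertical
    rough = np.abs((points - c) @ n).std()             # residual scatter off the plane
    return slope <= max_slope_deg and rough <= max_rough_m
```

Patches passing this cheap test would then go to the fine evaluation, which fits the helicopter and landing-gear geometry.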
Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv, "Autonomous landing at unprepared sites by a full-scale helicopter," Robotics and Autonomous Systems, 2012.
- First results in autonomous landing and obstacle avoidance by a full-scale helicopter.By Scherer, S., Chamberlain, L. and Singh, S.In Proceedings - IEEE International Conference on Robotics and Automation, St. Paul, MN, pp. 951–956, 2012.
@inproceedings{scherer2012first, title = {First results in autonomous landing and obstacle avoidance by a full-scale helicopter}, author = {Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv}, year = {2012}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {St. Paul, MN}, pages = {951--956}, doi = {10.1109/ICRA.2012.6225215}, isbn = {9781467314039}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2012/5/ICRA12_1750_FI.pdf} }
Currently deployed unmanned rotorcraft rely on carefully preplanned missions and operate from prepared sites, and thus avoid the need to perceive and react to the environment. Here we consider the problems of finding suitable but previously unmapped landing sites given general coordinates of the goal, and of planning collision-free trajectories in real time to land at the "optimal" site. This requires accurate mapping, fast landing zone evaluation algorithms, and motion planning. We report here on the sensing, perception, and motion planning integrated onto a full-scale helicopter that flies completely autonomously. We show results from 8 landing site selection experiments and 5 obstacle-avoidance runs. These experiments demonstrated the first autonomous full-scale helicopter that successfully selects its own landing sites and avoids obstacles.
Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv, "First results in autonomous landing and obstacle avoidance by a full-scale helicopter," Proceedings - IEEE International Conference on Robotics and Automation, 2012.
- River mapping from a flying robot: State estimation, river detection, and obstacle mapping.By Scherer, S., Rehder, J., Achar, S., Cover, H., Chambers, A., Nuske, S. and Singh, S.In Autonomous Robots, vol. 33, no. 1-2, pp. 189–214, 2012.
@article{scherer2012river, title = {River mapping from a flying robot: {State} estimation, river detection, and obstacle mapping}, author = {Scherer, Sebastian and Rehder, Joern and Achar, Supreeth and Cover, Hugh and Chambers, Andrew and Nuske, Stephen and Singh, Sanjiv}, year = {2012}, journal = {Autonomous Robots}, volume = {33}, number = {1-2}, pages = {189--214}, doi = {10.1007/s10514-012-9293-0}, issn = {09295593}, url = {https://www.ri.cmu.edu/pub_files/2012/5/riverineAutonJune2012.pdf}, keywords = {3D ladar scanning,3D obstacle mapping,Micro aerial vehicles,Self supervised learning,Visual localization} }
Accurately mapping the course and vegetation along a river is challenging, since overhanging trees block GPS at ground level and occlude the shoreline when viewed from higher altitudes. We present a multimodal perception system for the active exploration and mapping of a river from a small rotorcraft. We describe three key components that use computer vision, laser scanning, inertial sensing, and intermittent GPS to estimate the motion of the rotorcraft, detect the river without a prior map, and create a 3D map of the riverine environment. Our hardware and software approach is cognizant of the need to perform multi-kilometer missions below tree level under size, weight, and power constraints. We present experimental results along a 2 km loop of river using a surrogate perception payload. Overall we can build an accurate 3D obstacle map and a 2D map of the river course and width from light onboard sensing.
Scherer, Sebastian and Rehder, Joern and Achar, Supreeth and Cover, Hugh and Chambers, Andrew and Nuske, Stephen and Singh, Sanjiv, "River mapping from a flying robot: State estimation, river detection, and obstacle mapping," Autonomous Robots, 2012.
2011
- Self-supervised segmentation of river scenes.By Achar, S., Sankaran, B., Nuske, S., Scherer, S. and Singh, S.In Proceedings - IEEE International Conference on Robotics and Automation, Shanghai, China, pp. 6227–6232, 2011.
@inproceedings{achar2011self, title = {Self-supervised segmentation of river scenes}, author = {Achar, Supreeth and Sankaran, Bharath and Nuske, Stephen and Scherer, Sebastian and Singh, Sanjiv}, year = {2011}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Shanghai, China}, pages = {6227--6232}, doi = {10.1109/ICRA.2011.5980157}, isbn = {9781612843865}, issn = {10504729}, url = {https://www.ri.cmu.edu/pub_files/2011/5/ICRA11_Achar.pdf} }
Here we consider the problem of automatically segmenting images taken from a boat or low-flying aircraft. Such a capability is important for autonomous river following and mapping. The need for accurate segmentation in a wide variety of riverine environments challenges state-of-the-art vision-based methods that have been used in more structured environments such as roads and highways. Apart from the lack of structure, the principal difficulty is the large spatial and temporal variation in the appearance of water in the presence of nearby vegetation and reflections from the sky. We propose a self-supervised method to segment images into 'sky', 'river', and 'shore' (vegetation + structures) regions. Our approach uses assumptions about river scene structure to learn appearance models based on features like color, texture, and image location, which are then used to segment the image. We validated our algorithm by testing on four datasets captured under varying conditions on different rivers. Our self-supervised algorithm had higher accuracy rates than a supervised alternative, often significantly higher, and does not need to be retrained to work under different conditions.
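A toy, two-class version of the self-supervision idea (the paper uses three classes and richer texture features): harvest labels from scene-structure priors, fit a simple appearance model, and classify the whole image. Assumes scikit-learn and a float RGB image with at least ten rows:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def segment(img: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) float RGB. Returns (H, W) labels: 0 = sky, 1 = river."""
    H, W, _ = img.shape
    rows = np.tile(np.arange(H)[:, None, None] / H, (1, W, 1))  # location feature
    feats = np.concatenate([img, rows], axis=-1)                # (H, W, 4)
    top = feats[: H // 10].reshape(-1, 4)        # assumed sky rows (prior)
    bot = feats[-(H // 10):].reshape(-1, 4)      # assumed river rows (prior)
    X = np.vstack([top, bot])
    y = np.r_[np.zeros(len(top)), np.ones(len(bot))]
    return GaussianNB().fit(X, y).predict(feats.reshape(-1, 4)).reshape(H, W)
```

Because the labels come from the scene prior rather than hand annotation, the model can be refit on every frame, which is what makes the approach robust to changing water appearance.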
Achar, Supreeth and Sankaran, Bharath and Nuske, Stephen and Scherer, Sebastian and Singh, Sanjiv, "Self-supervised segmentation of river scenes," Proceedings - IEEE International Conference on Robotics and Automation, 2011.
- Self-aware helicopters: Full-scale automated landing and obstacle avoidance in unmapped environments.By Chamberlain, L., Scherer, S. and Singh, S.In Annual Forum Proceedings - AHS International, Virginia Beach, vol. 4, pp. 3210–3219, 2011.
@inproceedings{chamberlain2011self, title = {Self-aware helicopters: {Full}-scale automated landing and obstacle avoidance in unmapped environments}, author = {Chamberlain, Lyle and Scherer, Sebastian and Singh, Sanjiv}, year = {2011}, month = mar, booktitle = {Annual Forum Proceedings - AHS International}, address = {Virginia Beach}, volume = {4}, pages = {3210--3219}, isbn = {9781617828812}, issn = {15522938}, url = {https://www.ri.cmu.edu/pub_files/2011/3/ahs2011_Final.pdf} }
In this paper we present a perception and autonomy package that for the first time allows a full-scale unmanned helicopter (the Boeing Unmanned Little Bird) to automatically fly through unmapped, obstacle-laden terrain, find a landing zone, and perform a safe landing near a casualty, all with no human control or input. The system also demonstrates the ability to avoid obstacles while in low-altitude flight. The perception system consists of a 3D LADAR mapping unit with sufficient range, accuracy, and bandwidth to bring autonomous flight into the realm of full-scale aircraft. Efficient evaluation of this data and fast planning algorithms provide the aircraft with safe flight trajectories in real-time. We show the results of several fully autonomous landing and obstacle avoidance missions.
Chamberlain, Lyle and Scherer, Sebastian and Singh, Sanjiv, "Self-aware helicopters: Full-scale automated landing and obstacle avoidance in unmapped environments," Annual Forum Proceedings - AHS International, 2011.
- Perception for a river mapping robot.By Chambers, A., Achar, S., Nuske, S., Rehder, J., Kitt, B., Chamberlain, L., Haines, J., Scherer, S. and Singh, S.In IEEE International Conference on Intelligent Robots and Systems, San Francisco, CA, pp. 227–234, 2011.
@inproceedings{chambers2011perception, title = {Perception for a river mapping robot}, author = {Chambers, Andrew and Achar, Supreeth and Nuske, Stephen and Rehder, J{\"{o}}rn and Kitt, Bernd and Chamberlain, Lyle and Haines, Justin and Scherer, Sebastian and Singh, Sanjiv}, year = {2011}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {San Francisco, CA}, pages = {227--234}, doi = {10.1109/IROS.2011.6048799}, isbn = {9781612844541}, url = {https://kilthub.cmu.edu/articles/journal_contribution/Perception_for_a_River_Mapping_Robot/6557432} }
Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores, with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available, depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three-dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight, and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2 km loop of river using a surrogate system.
Chambers, Andrew and Achar, Supreeth and Nuske, Stephen and Rehder, Jörn and Kitt, Bernd and Chamberlain, Lyle and Haines, Justin and Scherer, Sebastian and Singh, Sanjiv, "Perception for a river mapping robot," IEEE International Conference on Intelligent Robots and Systems, 2011.
- Navigation and Control for Micro Aerial Vehicles in GPS-Denied Environments.By Molero, R., Scherer, S. and Chamberlain, L.J.Carnegie Mellon University, Pittsburgh, PA, Technical Report #CMU-RI-TR-10-08, Jun-2011
@techreport{molero2011navigation, title = {Navigation and Control for Micro Aerial Vehicles in {GPS}-Denied Environments}, author = {Molero, Rudolph and Scherer, Sebastian and Chamberlain, Lyle J}, year = {2011}, month = jun, address = {Pittsburgh, PA}, number = {CMU-RI-TR-10-08}, url = {https://www.ri.cmu.edu/pub_files/2011/6/smc.pdf}, institution = {Carnegie Mellon University}, keywords = {GPS-denied,MAV,UAV,control,navigation} }
Micro aerial vehicles have been increasingly employed in diverse research projects, in both military and civilian applications, because of their high maneuverability and accurate mobility. Many have been successfully used in outdoor areas, while some have been operated indoors. However, very few have paid special attention to the case of high pitch and roll movements while doing scan-line-based odometry. In this paper, we present a general approach consisting of algorithms that enable small aerial robots to fly indoors. We address the problem of large changes in pitch and roll angles by improving the standard scan-matching algorithm. We also validate the effectiveness of the upgraded algorithm with a set of experiments that demonstrate the ability of a small quadrotor to autonomously operate in cluttered indoor scenarios.
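A sketch of the attitude-compensation step implied above: rotate each planar lidar return by the IMU roll and pitch and reproject onto the horizontal plane before running standard 2-D scan matching (the frame conventions here are assumptions):

```python
import numpy as np

def level_scan(ranges, angles, roll, pitch):
    """ranges, angles: 1-D arrays of one planar scan; roll, pitch in radians."""
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])     # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])     # pitch about y
    leveled = pts @ (Ry @ Rx).T        # body frame -> gravity-aligned frame
    return leveled[:, :2]              # x, y coordinates for 2-D scan matching
```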
Molero, Rudolph and Scherer, Sebastian and Chamberlain, Lyle J, "Navigation and Control for Micro Aerial Vehicles in GPS-Denied Environments," Technical Report CMU-RI-TR-10-08, Carnegie Mellon University, 2011.
- Multiple-objective motion planning for unmanned aerial vehicles.By Scherer, S. and Singh, S.In IEEE International Conference on Intelligent Robots and Systems, San Francisco, CA, pp. 2207–2214, 2011.
@inproceedings{scherer2011multiple, title = {Multiple-objective motion planning for unmanned aerial vehicles}, author = {Scherer, Sebastian and Singh, Sanjiv}, year = {2011}, month = sep, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {San Francisco, CA}, pages = {2207--2214}, doi = {10.1109/IROS.2011.6048126}, isbn = {9781612844541}, url = {https://kilthub.cmu.edu/articles/journal_contribution/Multiple-Objective_Motion_Planning_for_Unmanned_Aerial_Vehicles/6555680} }
Here we consider the problem of low-flying rotorcraft that must perform various missions such as navigating to specific goal points while avoiding obstacles, looking for acceptable landing sites, or performing continuous surveillance. Not all such missions can be expressed as safe goal seeking, partly because in many cases there is no obvious goal. Rather than developing singular solutions to each mission, we seek a generalized formulation that enables us to express a wider range of missions. Here we propose a framework that allows multiple objectives to be considered simultaneously, and discuss corresponding planning algorithms that are capable of running in real time on autonomous air vehicles. The algorithms create a set of initial hypotheses that are then refined by a sub-gradient-based trajectory algorithm that optimizes the multiple objectives, producing dynamically feasible trajectories. We have demonstrated the feasibility of our approach with changing cost functions based on newly discovered information. We report on results in simulation of a system that is tasked with navigating safely between obstacles while searching for an acceptable landing site.
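A compact illustration of the weighted multi-objective formulation: each mission element contributes a cost term, and the trajectory is improved by descent on the weighted sum. The obstacle, goal, and smoothness terms below are toy stand-ins, and finite differences replace the paper's sub-gradient machinery:

```python
import numpy as np

def total_cost(traj, w_obs=1.0, w_goal=0.5, w_smooth=0.1):
    """Weighted sum of toy sub-objectives over an (N, 2) waypoint trajectory."""
    obs = np.exp(-np.linalg.norm(traj - np.array([5.0, 5.0]), axis=1)).sum()
    goal = np.linalg.norm(traj[-1] - np.array([10.0, 0.0]))
    smooth = np.sum(np.diff(traj, axis=0) ** 2)
    return w_obs * obs + w_goal * goal + w_smooth * smooth

def improve(traj, iters=200, step=0.05, eps=1e-4):
    """Descend the combined cost with finite-difference gradients."""
    traj = traj.copy()
    for _ in range(iters):
        g = np.zeros_like(traj)
        for i in range(1, len(traj)):          # start point stays fixed
            for j in range(traj.shape[1]):
                d = np.zeros_like(traj)
                d[i, j] = eps
                g[i, j] = (total_cost(traj + d) - total_cost(traj - d)) / (2 * eps)
        traj -= step * g
    return traj

waypoints = np.linspace([0.0, 0.0], [10.0, 0.0], 8)   # straight initial guess
print(total_cost(waypoints), total_cost(improve(waypoints)))
```

Swapping or reweighting the cost terms is how one mission (reach a goal) becomes another (search for a landing site) without changing the planner.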
Scherer, Sebastian and Singh, Sanjiv, "Multiple-objective motion planning for unmanned aerial vehicles," IEEE International Conference on Intelligent Robots and Systems, 2011.
- Low-Altitude Operation of Unmanned Rotorcraft.By Scherer, S.PhD thesis, Carnegie Mellon University, Pittsburgh, PA, 2011
@phdthesis{scherer2011low, title = {Low-Altitude Operation of Unmanned Rotorcraft}, author = {Scherer, Sebastian}, year = {2011}, number = {CMU-RI-TR-11-03}, address = {Pittsburgh, PA}, pages = {138}, isbn = {9781124819532}, url = {https://kilthub.cmu.edu/articles/thesis/Low-Altitude_Operation_of_Unmanned_Rotorcraft/6720461}, school = {The Robotics Institute, Carnegie Mellon University} }
Currently deployed unmanned rotorcraft rely on preplanned missions or teleoperation and do not actively incorporate information about obstacles, landing sites, wind, position uncertainty, and other aerial vehicles during online motion planning. Prior work has successfully addressed some tasks such as obstacle avoidance at slow speeds, or landing at known-to-be-good locations. However, to enable autonomous missions in cluttered environments, the vehicle has to react quickly to previously unknown obstacles, respond to changing environmental conditions, and find unknown landing sites. We consider the problem of enabling autonomous operation at low altitude with contributions to four problems. First, we address the problem of fast obstacle avoidance for a small aerial vehicle and present results from over 1000 runs at speeds up to 10 m/s. Fast response is achieved through a reactive algorithm whose response is learned by observing a pilot. Second, we show an algorithm to update the obstacle cost expansion for path planning quickly, and demonstrate it on a micro aerial vehicle and on an autonomous helicopter avoiding obstacles. Next, we examine the mission of finding a place to land near a ground goal. Good landing sites need to be detected and the final touchdown point is unknown. To detect landing sites we present a model-based algorithm that incorporates many helicopter-relevant constraints such as landing sites, approach, abort, and ground paths in 3D range data. The landing site evaluation algorithm uses a patch-based coarse evaluation for slope and roughness, and a fine evaluation that fits a 3D model of the helicopter and landing gear to calculate a goodness measure. The data are evaluated in real time to enable the helicopter to decide on a place to land. We show results from urban, vegetated, and desert environments, and demonstrate the first autonomous helicopter that selects its own landing sites. We present a generalized planning framework that enables reaching a goal point, searching for unknown landing sites, and approaching a landing zone. In the framework, sub-objective functions, constraints, and a state machine define the mission and behavior of a UAV. As the vehicle gathers information by moving through the environment, the objective functions account for this new information. The operator in this framework can directly specify intent as an objective function that defines the mission, rather than giving a sequence of pre-specified goal points. This allows the robot to react to new information and adjust its path accordingly. The objective is used in a combined coarse planning and trajectory optimization algorithm to determine the best path the robot should take. We show simulated results for several different missions and in particular focus on active landing zone search. We present several effective approaches for perception and action in low-altitude flight and demonstrate their effectiveness in field experiments on three autonomous aerial vehicles: a 1 m quadrocopter, a 3.6 m helicopter, and a full-size helicopter. These techniques permit rotorcraft to operate where they have their greatest advantage: in unstructured, unknown environments at low altitude.
Scherer, Sebastian, "Low-Altitude Operation of Unmanned Rotorcraft," CMU-RI-TR-11-03, 2011.
2010
- Online assessment of landing sites.By Scherer, S., Chamberlain, L. and Singh, S.In AIAA Infotech at Aerospace 2010, Atlanta, 2010.
@inproceedings{scherer2010online, title = {Online assessment of landing sites}, author = {Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv}, year = {2010}, month = apr, booktitle = {AIAA Infotech at Aerospace 2010}, address = {Atlanta}, doi = {10.2514/6.2010-3358}, isbn = {9781600867439}, url = {https://www.ri.cmu.edu/pub_files/2010/4/landingsiteConferenceShortSmall.pdf} }
Assessing a landing zone (LZ) reliably is essential for the safe operation of vertical takeoff and landing (VTOL) aerial vehicles that land at unimproved locations. Currently an operator has to rely on visual assessment to make an approach decision; however, visual information from afar is insufficient to judge slope and detect small obstacles. Prior work has modeled LZ quality based on plane fitting, which only partly represents the interaction between vehicle and ground. Our approach consists of a coarse evaluation based on slope and roughness criteria, and a fine evaluation of skid contact and body clearance at a location. We investigated whether the evaluation is correct using terrain maps collected from a helicopter. This paper defines the evaluation problem, describes our incremental real-time algorithm, and discusses the effectiveness of our approach. In results from urban and natural environments, we were able to successfully classify LZs from point cloud maps collected on a helicopter. The presented method enables detailed assessment of LZs without a landing approach, thereby improving safety. Still, the method assumes low-noise point cloud data. We intend to increase robustness to outliers while still detecting small obstacles in future work.
Scherer, Sebastian and Chamberlain, Lyle and Singh, Sanjiv, "Online assessment of landing sites," AIAA Infotech at Aerospace 2010, 2010.
2009
- Efficient C-space and cost function updates in 3D for unmanned aerial vehicles.By Scherer, S., Ferguson, D. and Singh, S.In Proceedings - IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 2049–2054, 2009.
@inproceedings{scherer2009efficient, title = {Efficient {C}-space and cost function updates in {3D} for unmanned aerial vehicles}, author = {Scherer, Sebastian and Ferguson, Dave and Singh, Sanjiv}, year = {2009}, month = may, booktitle = {Proceedings - IEEE International Conference on Robotics and Automation}, address = {Kobe, Japan}, pages = {2049--2054}, doi = {10.1109/ROBOT.2009.5152790}, isbn = {9781424427895}, issn = {10504729}, url = {https://kilthub.cmu.edu/articles/journal_contribution/Efficient_C-Space_and_Cost_Function_Updates_in_3D_for_Unmanned_Aerial_Vehicles/6554672} }
When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing incremental planning algorithms that can efficiently replan when the map and associated cost function change. However, much less attention has been paid to efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.
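For contrast with the incremental algorithm, here is the brute-force baseline it improves on: recompute a full Euclidean distance transform over the obstacle grid and derive a planning cost from distance-to-obstacle (using SciPy; the cost shape and d_max are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def obstacle_cost(occupancy: np.ndarray, d_max: float = 10.0) -> np.ndarray:
    """occupancy: boolean grid, True = obstacle. Cost decays linearly with distance."""
    dist = distance_transform_edt(~occupancy)       # cells to nearest obstacle
    return np.clip(1.0 - dist / d_max, 0.0, None)   # 1 at obstacles, 0 beyond d_max
```

The paper's contribution is avoiding this full recomputation: only cells whose nearest obstacle changed, and only out to a limited distance, are touched.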
Scherer, Sebastian and Ferguson, Dave and Singh, Sanjiv, "Efficient C-space and cost function updates in 3D for unmanned aerial vehicles," Proceedings - IEEE International Conference on Robotics and Automation, 2009.
2008
- Flying fast and low among obstacles: Methodology and experiments.By Scherer, S., Singh, S., Chamberlain, L. and Elgersma, M.In International Journal of Robotics Research, vol. 27, no. 5, pp. 549–574, 2008.
@article{scherer2008flying, title = {Flying fast and low among obstacles: {Methodology} and experiments}, author = {Scherer, Sebastian and Singh, Sanjiv and Chamberlain, Lyle and Elgersma, Mike}, year = {2008}, journal = {International Journal of Robotics Research}, volume = {27}, number = {5}, pages = {549--574}, doi = {10.1177/0278364908090949}, issn = {02783649}, url = {https://www.ri.cmu.edu/pub_files/pub4/scherer_sebastian_2008_1/scherer_sebastian_2008_1.pdf}, keywords = {Aerial robotics,Learning} }
Safe autonomous flight is essential for widespread acceptance of aircraft that must fly close to the ground. We have developed a method of collision avoidance that can be used in three dimensions in much the same way as autonomous ground vehicles that navigate over unexplored terrain. Safe navigation is accomplished by a combination of online environmental sensing, path planning, and collision avoidance. Here we outline our methodology and report results with an autonomous helicopter that operates at low elevations in uncharted environments, some of which are densely populated with obstacles such as buildings, trees, and wires. We have recently completed over 700 successful runs in which the helicopter traveled between coarsely specified waypoints separated by hundreds of meters, at speeds of up to 10 m/s and at elevations of 5-11 m above ground level. The helicopter safely avoids large objects such as buildings and trees but also wires as thin as 6 mm. We believe this represents the first time an air vehicle has traveled this fast so close to obstacles. The collision avoidance method learns to avoid obstacles by observing the performance of a human operator.
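The learned avoidance law can be pictured as a reactive steering rule of the following shape, where the gains are what one would fit from recorded operator behavior; the specific functional form and constants below are invented for illustration:

```python
import numpy as np

def steer(pos, heading, goal, obstacles, k_goal=1.0, k_obs=4.0, c=0.7):
    """Return a bounded turn-rate command: goal attraction plus obstacle repulsion."""
    dx, dy = goal - pos
    err = np.arctan2(dy, dx) - heading
    turn = k_goal * np.arctan2(np.sin(err), np.cos(err))   # wrapped heading error
    for ob in obstacles:
        rel = ob - pos
        d = np.linalg.norm(rel)
        bearing = np.arctan2(rel[1], rel[0]) - heading
        # obstacles ahead push the command away, more strongly when close
        turn -= (k_obs * np.sign(np.sin(bearing)) * np.exp(-c * d)
                 * max(np.cos(bearing), 0.0))
    return float(np.clip(turn, -0.5, 0.5))
```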
Scherer, Sebastian and Singh, Sanjiv and Chamberlain, Lyle and Elgersma, Mike, "Flying fast and low among obstacles: Methodology and experiments," International Journal of Robotics Research, 2008.
2007
- Flying Fast and Low Among Obstacles.By Scherer, S., Singh, S., Chamberlain, L. and Saripalli, S.In IEEE International Conference on Robotics and Automation ICRA, Rome, Italy, pp. 2023–2029, 2007.
@inproceedings{scherer2007flying, title = {Flying Fast and Low Among Obstacles}, author = {Scherer, Sebastian and Singh, Sanjiv and Chamberlain, Lyle and Saripalli, Srikanth}, year = {2007}, month = may, booktitle = {IEEE International Conference on Robotics and Automation ICRA}, address = {Rome, Italy}, pages = {2023--2029}, doi = {10.1109/ROBOT.2007.363619}, url = {https://www.ri.cmu.edu/pub_files/pub4/scherer_sebastian_2007_1/scherer_sebastian_2007_1.pdf} }
Scherer, Sebastian and Singh, Sanjiv and Chamberlain, Lyle and Saripalli, Srikanth, "Flying Fast and Low Among Obstacles," IEEE International Conference on Robotics and Automation ICRA, 2007.
- Tartan Racing: A Multi-Modal Approach to the DARPA Urban Challenge.By Urmson, C., Anhalt, J., Bagnell, D., Baker, C., Bittner, R., Dolan, J., Duggins, D., Ferguson, D., Galatali, T., Geyer, H., Gittleman, M., Harbaugh, S., Hebert, M., Howard, T.M., Kelly, A., Kohanbash, D., Likhachev, M., Miller, N., Peterson, K., Rajkumar, R., Rybski, P., Salesky, B., Scherer, S., Seo, Y.-W., Simmons, R., Singh, S., Snider, J., Stentz, A., Whittaker, W.R. and Ziglar, J.Carnegie Mellon University, Apr-2007
@techreport{urmson2007tartan, title = {{Tartan Racing: A} Multi-Modal Approach to the {DARPA} Urban Challenge}, author = {Urmson, Christopher and Anhalt, Joshua and Bagnell, Drew and Baker, Christopher and Bittner, Robert and Dolan, John and Duggins, Dave and Ferguson, David and Galatali, Tugrul and Geyer, Hartmut and Gittleman, Michele and Harbaugh, Sam and Hebert, Martial and Howard, Thomas M and Kelly, Alonzo and Kohanbash, David and Likhachev, Maxim and Miller, Nick and Peterson, Kevin and Rajkumar, Raj and Rybski, Paul and Salesky, Bryan and Scherer, Sebastian and Seo, Young-Woo and Simmons, R and Singh, Sanjiv and Snider, Jarrod and Stentz, Anthony and Whittaker, William Red and Ziglar, Jason}, year = {2007}, month = apr, doi = {10.1184/R1/6561125.v1}, url = {https://kilthub.cmu.edu/articles/journal_contribution/Tartan_Racing_A_Multi-Modal_Approach_to_the_DARPA_Urban_Challenge/6561125/1}, institution = {Carnegie Mellon University} }
The Urban Challenge represents a technological leap beyond the previous Grand Challenges. The challenge encompasses three primary behaviors: driving on roads, handling intersections and maneuvering in zones. In implementing urban driving we have decomposed the problem into five components. Mission Planning determines an efficient route through an urban network of roads. A behavioral layer executes the route through the environment, adapting to local traffic and exceptional situations as necessary. A motion planning layer safeguards the robot by considering the feasible trajectories available, and selecting the best option. Perception combines data from lidar, radar and vision systems to estimate the location of other vehicles, static obstacles and the shape of the road. Finally, the robot is a mechatronic system engineered to provide the power, sensing and mobility necessary to navigate an urban course. Rigorous component and system testing evaluates progress using standardized tests. Observations from these experiments shape the design of subsequent development spirals and enable the rapid detection and correction of bugs. The system described in the paper exhibits a majority of the basic navigation and traffic skills required for the Urban Challenge. From these building blocks more advanced capabilities will quickly develop.
Urmson, Christopher and Anhalt, Joshua and Bagnell, Drew and Baker, Christopher and Bittner, Robert and Dolan, John and Duggins, Dave and Ferguson, David and Galatali, Tugrul and Geyer, Hartmut and Gittleman, Michele and Harbaugh, Sam and Hebert, Martial and Howard, Thomas M and Kelly, Alonzo and Kohanbash, David and Likhachev, Maxim and Miller, Nick and Peterson, Kevin and Rajkumar, Raj and Rybski, Paul and Salesky, Bryan and Scherer, Sebastian and Seo, Young-Woo and Simmons, R and Singh, Sanjiv and Snider, Jarrod and Stentz, Anthony and Whittaker, William Red and Ziglar, Jason, "Tartan Racing: A Multi-Modal Approach to the DARPA Urban Challenge," Technical Report, Carnegie Mellon University, 2007.
2006
- Learning to drive among obstacles.By Hamner, B., Scherer, S. and Singh, S.In IEEE International Conference on Intelligent Robots and Systems, Beijing, China, pp. 2663–2669, 2006.
@inproceedings{hamner2006learning1, title = {Learning to drive among obstacles}, author = {Hamner, Bradley and Scherer, Sebastian and Singh, Sanjiv}, year = {2006}, month = oct, booktitle = {IEEE International Conference on Intelligent Robots and Systems}, address = {Beijing, China}, pages = {2663--2669}, doi = {10.1109/IROS.2006.281987}, isbn = {142440259X}, url = {https://www.ri.cmu.edu/pub_files/pub4/hamner_bradley_2006_1/hamner_bradley_2006_1.pdf} }
This paper reports on an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters, and then show how these parameters can be automatically estimated by observation of a human driver. We present results from experiments with a vehicle (both real and simulated) that avoids obstacles while following a prescribed path at speeds up to 4 m/s. We compare the proposed method with another method based on Principal Component Analysis, a commonly used learning technique. We find that the proposed method generalizes well and is capable of learning from a small number of examples.
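The parameter-estimation step can be sketched as a least-squares fit between the controller's output and the human's recorded steering (assuming SciPy; the three-parameter initial guess is a placeholder for whatever gain vector the controller model takes):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gains(situations, human_steering, model):
    """model(params, situation) -> predicted steering; fit params to the human data."""
    def residuals(params):
        return np.array([model(params, s) for s in situations]) - human_steering
    return least_squares(residuals, x0=np.ones(3)).x   # 3 gains: placeholder size
```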
Hamner, Bradley and Scherer, Sebastian and Singh, Sanjiv, "Learning to drive among obstacles," IEEE International Conference on Intelligent Robots and Systems, 2006.
- Learning obstacle avoidance parameters from operator behavior.By Hamner, B., Singh, S. and Scherer, S.In Journal of Field Robotics, vol. 23, no. 11-12, pp. 1037–1058, 2006.
@article{hamner2006learning2, title = {Learning obstacle avoidance parameters from operator behavior}, author = {Hamner, Bradley and Singh, Sanjiv and Scherer, Sebastian}, year = {2006}, journal = {Journal of Field Robotics}, volume = {23}, number = {11-12}, pages = {1037--1058}, doi = {10.1002/rob.20171}, issn = {15564959}, url = {https://www.ri.cmu.edu/pub_files/pub4/hamner_bradley_2006_2/hamner_bradley_2006_2.pdf} }
This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method. We find that the proposed method generalizes well and is capable of learning from a small number of samples.
Hamner, Bradley and Singh, Sanjiv and Scherer, Sebastian, "Learning obstacle avoidance parameters from operator behavior," Journal of Field Robotics, 2006.
2005
- Model checking of robotic control systems.By Scherer, S., Lerda, F. and Clarke, E.M.In European Space Agency, (Special Publication) ESA SP, Munich, Germany, pp. 371–378, 2005.
@inproceedings{scherer2005model, title = {Model checking of robotic control systems}, author = {Scherer, Sebastian and Lerda, Flavio and Clarke, Edmund M.}, year = {2005}, month = sep, booktitle = {European Space Agency, (Special Publication) ESA SP}, address = {Munich, Germany}, pages = {371--378}, issn = {03796566}, url = {https://www.ri.cmu.edu/pub_files/pub4/scherer_sebastian_2005_1/scherer_sebastian_2005_1.pdf}, keywords = {Control Systems,Java,Model Checking,Software Testing,Verification} }
Reliable software is important for robotic applications. We propose a new method for the verification of control software based on Java PathFinder, a discrete model checker developed at NASA Ames Research Center. Our extension of Java PathFinder supports modeling of a real-time scheduler and a physical system, defined in terms of differential equations. This approach is not only able to detect programming errors, such as null-pointer dereferences, but also enables the verification of control software whose correctness depends on the physical, real-time environment. We applied this method to the control software of a line-following robot. The verified source code, written in Java, can be executed without any modifications on the microcontroller of the actual robot. Performance evaluation and bug finding are demonstrated on this example.
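A toy Python analogue of the approach (the actual work extends Java PathFinder and models continuous dynamics): exhaustively explore the joint state space of a discretized controller and plant, and report a counterexample state when the safety property fails. The dynamics and property below are invented for illustration:

```python
from collections import deque

def check(initial, step, controls, safe, max_depth=1000):
    """Breadth-first exploration of reachable plant states.

    step(s, u): discretized physical model; safe(s): property to verify.
    """
    seen, frontier = {initial}, deque([(initial, 0)])
    while frontier:
        s, depth = frontier.popleft()
        if not safe(s):
            return "UNSAFE", s                 # counterexample state
        if depth >= max_depth:
            continue
        for u in controls:                     # nondeterministic control choice
            nxt = step(s, u)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return "SAFE within bound", None

# Toy line follower: lateral offset must stay within |offset| < 3 (arbitrary units)
step = lambda s, u: max(-5, min(5, s + u))
print(check(0, step, controls=(-1, 0, 1), safe=lambda s: abs(s) < 3))
```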
Scherer, Sebastian and Lerda, Flavio and Clarke, Edmund M., "Model checking of robotic control systems," European Space Agency, (Special Publication) ESA SP, 2005.