Robotic Perception

  • Place Recognition with Robustness to Strong Perceptual Aliasing And Appearance Variations
  • Loop closure detection is an essential component of simultaneous localization and mapping (SLAM) in a variety of robotics applications, e.g., autonomous driving. One of the most challenging problems is long-term place recognition under strong perceptual aliasing and appearance variations caused by changes in illumination, vegetation, weather, etc. To address this challenge, we are developing novel robust methods for long-term place recognition that formulate image sequence matching as an optimization problem regularized by structured sparsity-inducing norms. Our framework models the sparse nature of place recognition (the current location should match only a small subset of previously visited places), models the underlying structure of image sequences, and incorporates multiple feature modalities to construct a discriminative scene representation.

    1. Fei Han, Hua Wang, and Hao Zhang, "Learning of Integrated Holism-Landmark Representations for Long-Term Loop Closure Detection," in AAAI Conference on Artificial Intelligence (AAAI), 2018, accepted. [bibtex]
    2. Fei Han, Xue Yang, Yiming Deng, Mark Rentschler, Dejun Yang, and Hao Zhang, "SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition," IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 2, pp. 1172-1179, April 2017. [bibtex] [project] [code]
    3. Fei Han, Xue Yang, Yiming Deng, Mark Rentschler, Dejun Yang, and Hao Zhang, "Life-Long Place Recognition by Shared Representative Appearance Learning," in Robotics: Science and Systems (RSS) Workshop of Visual Place Recognition: What is it Good For?, 2016. [bibtex] [project] [slides] [poster] [code]
    4. Hao Zhang, Fei Han, and Hua Wang, "Robust multimodal sequence-based loop closure detection via structured sparsity," in Robotics: Science and Systems (RSS), 2016, (Acceptance Rate: 20.6%, Best Paper Finalist). [bibtex] [slides] [poster]

  • Object Recognition and Localization in Autonomous Driving
  • We are developing a new robust object recognition and localization algorithm for autonomous driving applications, which fuses camera and radar sensor data.

  • Real-time Activity Recognition
  • Human activity understanding is a critical research problem in robotics with many important real-world applications, such as human-robot interaction. After comprehensively reviewing state-of-the-art approaches to 3D human skeleton representations, we are developing human activity recognition methods that achieve both high speed and high accuracy.

    1. Fei Han, Brian Reily, William Hoff, and Hao Zhang, "Space-Time Representation of People Based on 3D Skeletal Data: A Review," Computer Vision and Image Understanding (CVIU), vol. 158, pp. 85-105, May 2017. [bibtex]
    2. Fei Han, Xue Yang, Christopher Reardon, Yu Zhang, Hao Zhang, "Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors," in IEEE International Conference on Robotics and Automation (ICRA), 2017, accepted. [bibtex] [project] [code]
    3. Fei Han, Christopher Reardon, Lynne Parker, Hao Zhang, "Minimum Uncertainty Latent Variable Models for Robot Recognition of Sequential Human Activities," in IEEE International Conference on Robotics and Automation (ICRA), 2017, accepted. [bibtex]
    4. Brian Reily*, Fei Han*, Lynne Parker, and Hao Zhang, "Skeleton-Based Bio-Inspired Human Activity Prediction For Real-Time Human-Robot Interaction," Autonomous Robots (AuRo), accepted. * Equal contribution [bibtex]
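    As a toy illustration of skeleton-based recognition (a generic sketch, not the representations or learning methods from the papers above), one can compute per-frame pairwise joint distances, which are invariant to camera rotation and translation, pool them over time, and classify with nearest centroids. All names and dimensions are hypothetical.

    ```python
    import numpy as np

    def skeleton_features(seq):
        """Pairwise joint distances per frame, averaged over the sequence.

        seq: (T, J, 3) array of 3D joint positions over T frames.
        Returns a (J*(J-1)/2,) feature vector.
        """
        diffs = seq[:, :, None, :] - seq[:, None, :, :]   # (T, J, J, 3)
        dists = np.linalg.norm(diffs, axis=-1)            # (T, J, J)
        iu = np.triu_indices(seq.shape[1], k=1)           # upper-triangle pairs
        return dists[:, iu[0], iu[1]].mean(axis=0)

    def train_centroids(sequences, labels):
        """Nearest-centroid training: average feature vector per activity."""
        feats = np.array([skeleton_features(s) for s in sequences])
        labels = np.array(labels)
        return {c: feats[labels == c].mean(axis=0) for c in sorted(set(labels))}

    def classify(seq, centroids):
        """Assign the activity whose centroid is closest in feature space."""
        f = skeleton_features(seq)
        return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
    ```

    Real-time methods like those above go further by learning which body parts and features are discriminative, but the pipeline shape (skeleton sequence, pooled representation, classifier) is the same.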

Decision-making Under Uncertainty

  • Decision making under uncertainty in human-robot teaming scenarios
    1. Christopher Reardon, Fei Han, Hao Zhang, and Jonathan Fink, "Optimizing Autonomous Surveillance Route Solutions from Minimal Human-Robot Interaction," in IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2017, accepted. [bibtex]
  • Onsite workflow guide using augmented reality (AR) techniques
    1. Fei Han, Jiayi Liu, William Hoff, and Hao Zhang, "Planning-based Workflow Modeling for AR-enabled Automated Task Guidance," in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2017, accepted. [bibtex]

Bridging the Gap Between Perception and Decision Making

Apprenticeship learning has recently attracted wide attention due to its capability of allowing robots to learn physical tasks directly from demonstrations provided by human experts. Most previous techniques assumed that the state space is known a priori or employed simple state representations that usually suffer from perceptual aliasing. Unlike previous research, we are developing novel approaches that are capable of simultaneously fusing temporal information and multimodal data, and of integrating robot perception with decision making.

  1. Fei Han, Xue Yang, Yu Zhang, and Hao Zhang, "Sequence-based Multimodal Apprenticeship Learning For Robot Perception and Decision Making," in IEEE International Conference on Robotics and Automation (ICRA), 2017, accepted. [bibtex]
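The fusion idea can be sketched in a few lines (a generic behavior-cloning illustration, not the method of the paper above): stack a short temporal window of concatenated multimodal features so each state captures sequence context, then fit a policy from those states to the expert's actions. The feature names and the linear policy are assumptions made for illustration.

```python
import numpy as np

def temporal_multimodal_features(rgb, depth, window):
    """Concatenate two per-frame modality streams and stack a sliding
    temporal window, so each row describes a short demonstration segment.

    rgb: (T, d1), depth: (T, d2) per-frame features.
    Returns (T - window + 1, window * (d1 + d2)).
    """
    X = np.hstack([rgb, depth])
    return np.array([X[t:t + window].ravel()
                     for t in range(len(X) - window + 1)])

def fit_linear_policy(features, actions):
    """Least-squares behavior cloning: map segment features to the
    expert's actions at the end of each segment."""
    W, *_ = np.linalg.lstsq(features, actions, rcond=None)
    return W
```

Windowing the features is what lets a simple policy disambiguate states that look identical in a single frame, which is the perceptual-aliasing problem the paragraph above refers to.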