Journal Article

Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey


Tang, Yang
External Organizations;

Zhao,  Chaoqiang
External Organizations;

Wang,  Jianrui
External Organizations;

Zhang,  Chongzhen
External Organizations;

Sun,  Qiyu
External Organizations;


Kurths,  Jürgen
Potsdam Institute for Climate Impact Research;


Tang, Y., Zhao, C., Wang, J., Zhang, C., Sun, Q., Kurths, J. (2023): Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey. - IEEE Transactions on Neural Networks and Learning Systems, 34, 12, 9604-9624.

Cite as: https://publications.pik-potsdam.de/pubman/item/item_27971
Autonomous systems are characterized by the ability to infer their own state, understand their surroundings, and navigate autonomously. With the application of learning systems, such as deep learning and reinforcement learning, vision-based self-state estimation, environment perception, and navigation in autonomous systems have advanced considerably, and many new learning-based algorithms for autonomous visual perception and navigation have emerged. Unlike previous reviews, which concentrated on traditional methods, this review focuses on learning-based monocular approaches to ego-motion perception, environment perception, and navigation in autonomous systems. First, we delineate the shortcomings of classical visual simultaneous localization and mapping (vSLAM) solutions, which motivate the integration of deep learning techniques. Second, we review deep learning-based methods for visual environmental perception and understanding, including monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, and semantic segmentation, as well as their combinations with traditional vSLAM frameworks. Then, we turn to visual navigation based on learning systems, mainly reinforcement learning and deep reinforcement learning. Finally, we examine several open challenges and promising research directions for learning systems in the era of computer science and robotics.