
Record


Released

Journal article

Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey

Authors

Tang, Yang
External Organizations;

Zhao, Chaoqiang
External Organizations;

Wang, Jianrui
External Organizations;

Zhang, Chongzhen
External Organizations;

Sun, Qiyu
External Organizations;

Kurths, Jürgen
Potsdam Institute for Climate Impact Research;

External resources
No external resources have been provided
Full texts (freely accessible)
No freely accessible full texts are available in PIKpublic
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Tang, Y., Zhao, C., Wang, J., Zhang, C., Sun, Q., Kurths, J. (2023): Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey. - IEEE Transactions on Neural Networks and Learning Systems, 34, 12, 9604-9624.
https://doi.org/10.1109/TNNLS.2022.3167688


Citation link: https://publications.pik-potsdam.de/pubman/item/item_27971
Abstract
Autonomous systems are able to infer their own state, understand their surroundings, and navigate autonomously. With the application of learning systems such as deep learning and reinforcement learning, the vision-based self-state estimation, environment perception, and navigation capabilities of autonomous systems have been addressed effectively, and many new learning-based algorithms have emerged for autonomous visual perception and navigation. In this review, we focus on the applications of learning-based monocular approaches to ego-motion perception, environment perception, and navigation in autonomous systems, in contrast to previous reviews that discussed traditional methods. First, we delineate the shortcomings of existing classical visual simultaneous localization and mapping (vSLAM) solutions, which demonstrate the need to integrate deep learning techniques. Second, we review vision-based environment perception and understanding methods based on deep learning, including deep learning-based monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional vSLAM frameworks. Then, we focus on visual navigation based on learning systems, mainly reinforcement learning and deep reinforcement learning. Finally, we examine several challenges and promising directions discussed in related research on learning systems in computer science and robotics.
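As a purely illustrative sketch, not taken from the paper: the snippet below shows what learning-based monocular depth estimation looks like in practice, assuming the publicly available MiDaS model loaded through torch.hub (PyTorch and timm required). The dummy input frame is a placeholder for a real camera image, and the per-pixel relative depth map it produces is the kind of cue that such learned perception modules can feed into a classical vSLAM pipeline.

    import numpy as np
    import torch

    # Load a small pretrained monocular depth model (MiDaS) and its input
    # transforms via torch.hub; this is an assumed, publicly available model
    # chosen only for illustration, not the method of the surveyed paper.
    model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    model.eval()
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = midas_transforms.small_transform

    # Placeholder RGB frame (H x W x 3, uint8) standing in for a camera image.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)

    with torch.no_grad():
        batch = transform(frame)              # resize, normalize, add batch dim
        prediction = model(batch)             # relative inverse-depth map
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),          # (1, 1, h, w) for interpolation
            size=frame.shape[:2],             # back to the original resolution
            mode="bicubic",
            align_corners=False,
        ).squeeze()

    print(depth.shape)  # torch.Size([480, 640]): one relative depth value per pixel

A monocular vSLAM front end could, for example, use such a predicted depth map to initialize map-point depths or to mitigate the scale ambiguity of purely geometric monocular methods.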