Abstract:
This paper presents an enhanced Q-learning algorithm for agent path planning on traditional grid maps. The classic Q-learning algorithm can solve an agent's path-planning problem in an unknown environment, but it has two limitations for path planning: the agent can move only to adjacent grid cells, and the step size is fixed at one cell. The improved Q-learning algorithm changes both the agent's movement directions and its step size: the number of action directions is increased from four to eight, and the movement step is raised from one to three cells. The new method converges faster and produces smoother agent paths. Finally, a set of simulation tests is presented to validate the modified Q-learning algorithm.
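To make the described modification concrete, the following is a minimal, hypothetical sketch of tabular Q-learning on a grid where the action set combines 8 movement directions with step sizes of 1 to 3 cells (24 actions in total), as the abstract describes. The grid size, goal position, rewards, and hyperparameters are all illustrative assumptions, not taken from the paper.

```python
import random

# 8 directions (4 axis-aligned + 4 diagonal), as in the improved algorithm.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]
# Step sizes 1..3 combined with each direction -> 24 actions (assumed encoding).
ACTIONS = [(dr * s, dc * s) for dr, dc in DIRS for s in (1, 2, 3)]


def train(size=10, goal=(9, 9), episodes=2000,
          alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a size x size grid (illustrative parameters)."""
    rng = random.Random(seed)
    Q = {}  # Q[(state, action_index)] -> estimated value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):  # per-episode step limit (assumption)
            if s == goal:
                break
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            dr, dc = ACTIONS[a]
            nr, nc = s[0] + dr, s[1] + dc
            if 0 <= nr < size and 0 <= nc < size:
                ns = (nr, nc)
                r = 100.0 if ns == goal else -1.0  # assumed reward scheme
            else:
                ns, r = s, -5.0  # penalize moves that leave the grid
            # Standard Q-learning update.
            best_next = max(Q.get((ns, i), 0.0) for i in range(len(ACTIONS)))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = ns
    return Q


def greedy_path(Q, size=10, start=(0, 0), goal=(9, 9), limit=50):
    """Follow the greedy policy from start; stop at goal, wall, or limit."""
    s, path = start, [start]
    while s != goal and len(path) < limit:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        dr, dc = ACTIONS[a]
        ns = (s[0] + dr, s[1] + dc)
        if not (0 <= ns[0] < size and 0 <= ns[1] < size):
            break
        path.append(ns)
        s = ns
    return path
```

Because diagonal actions with step size 3 exist, the greedy path can cover up to 3 cells per move, which is what shortens trajectories and smooths the resulting path relative to the 4-direction, single-step baseline.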