
Proceedings Paper

Centralized and decentralized application of neural networks learning optimized solutions of distributed agents
Author(s): Joshua A. Shaffer; Huan Xu

Paper Abstract

This paper explores a methodology for training recurrent neural networks to replicate path-planning solutions from optimization problems of multi-agent systems. Training data are generated by solving a centralized nonlinear programming problem, from which both centralized (representing all agents) and decentralized (representing individual agents) recurrent neural networks are trained with reinforcement learning to produce an agent's state path through fixed time-step execution. Path-tracking controllers are then formulated for each agent to follow the path generated by its network, so that the control signal from each controller mimics that of the optimized solution. Results for a 10-agent problem with 2D dynamics, synchronized-arrival, and collision-avoidance constraints showcase the ability of this approach to achieve the desired controller execution and resulting state paths. Through these results, this work demonstrates that recurrent neural networks can learn and generalize centralized and synchronous multi-agent optimization solutions, yielding a multi-agent path planner that is much faster to compute and closely tracks the slower-to-compute optimized solutions.
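To make the pipeline concrete, the sketch below shows one way the decentralized variant could look: a recurrent network rolled out over fixed time-steps to emit one agent's state path, fit against paths extracted from the centralized optimized solutions. This is a minimal sketch under assumptions, not the paper's implementation: PyTorch is assumed as the framework, the names AgentPathRNN, rollout, and train are hypothetical, and a plain supervised imitation loss stands in for the paper's reinforcement-learning procedure.

    # Minimal sketch (PyTorch assumed; names are illustrative, not from the paper).
    # One recurrent network per agent maps the current state to the next state at
    # a fixed time-step; it is fit against state paths taken from solutions of the
    # centralized nonlinear program.
    import torch
    import torch.nn as nn

    class AgentPathRNN(nn.Module):
        """Rolls a GRU cell forward over fixed time-steps to emit a state path."""
        def __init__(self, state_dim: int, hidden_dim: int = 64):
            super().__init__()
            self.cell = nn.GRUCell(state_dim, hidden_dim)
            self.head = nn.Linear(hidden_dim, state_dim)

        def rollout(self, x0: torch.Tensor, steps: int) -> torch.Tensor:
            """Generate `steps` states starting from initial state x0 (batch, state_dim)."""
            h = torch.zeros(x0.shape[0], self.cell.hidden_size)
            x, path = x0, []
            for _ in range(steps):
                h = self.cell(x, h)
                x = x + self.head(h)          # predict the per-step state increment
                path.append(x)
            return torch.stack(path, dim=1)   # (batch, steps, state_dim)

    def train(model, x0, opt_paths, epochs=500, lr=1e-3):
        """Fit the rolled-out path to optimized trajectories `opt_paths`
        (batch, steps, state_dim); a supervised stand-in for the paper's
        reinforcement-learning training."""
        optim = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            pred = model.rollout(x0, steps=opt_paths.shape[1])
            loss = nn.functional.mse_loss(pred, opt_paths)
            optim.zero_grad()
            loss.backward()
            optim.step()

At execution time, each agent's path-tracking controller would follow the path returned by rollout, replacing the slower online solve of the centralized optimization.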

Paper Details

Date Published: 13 May 2019
PDF: 14 pages
Proc. SPIE 10982, Micro- and Nanotechnology Sensors, Systems, and Applications XI, 109822G (13 May 2019); doi: 10.1117/12.2518604
Author Affiliations:
Joshua A. Shaffer, Univ. of Maryland, College Park (United States)
Huan Xu, Univ. of Maryland, College Park (United States)


Published in SPIE Proceedings Vol. 10982:
Micro- and Nanotechnology Sensors, Systems, and Applications XI
Thomas George and M. Saif Islam, Editors

© SPIE.