Controlling the Movement of an Object on a Field with Barrier Using a Recurrent Neural Network
Abstract
We consider a model for controlling the movement of an object on a field with a barrier using a recurrent neural network. Two recurrent neural networks of different complexity are constructed with a genetic algorithm. For each network, a reinforcement learning algorithm is described, and the effectiveness of the two networks is compared.
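To make the approach concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes a 10 x 10 grid field with a single vertical barrier, a small Elman-style recurrent controller, and a simple genetic algorithm with truncation selection and Gaussian mutation. All names, sizes, and parameters are hypothetical choices for brevity, and the reinforcement learning stage described in the paper is not reproduced here.

import numpy as np

FIELD = 10                                   # assumed 10 x 10 grid field
BARRIER = {(5, y) for y in range(3, 8)}      # assumed vertical barrier cells
START, GOAL = (0, 0), (9, 9)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # right, left, up, down

class RecurrentController:
    """Tiny Elman-style recurrent network: 4 inputs -> hidden state -> 4 moves."""
    def __init__(self, genome, hidden=8):
        i = 4 * hidden
        self.W_in = genome[:i].reshape(4, hidden)
        self.W_rec = genome[i:i + hidden * hidden].reshape(hidden, hidden)
        self.W_out = genome[i + hidden * hidden:].reshape(hidden, 4)
        self.state = np.zeros(hidden)

    @staticmethod
    def genome_size(hidden=8):
        return 4 * hidden + hidden * hidden + hidden * 4

    def act(self, obs):
        # update the recurrent hidden state, then pick the highest-scoring move
        self.state = np.tanh(obs @ self.W_in + self.state @ self.W_rec)
        return int(np.argmax(self.state @ self.W_out))

def fitness(genome, steps=60):
    """Reward reaching the goal; otherwise penalise the remaining distance."""
    net, pos = RecurrentController(genome), START
    for _ in range(steps):
        obs = np.array([pos[0] / FIELD, pos[1] / FIELD,
                        (GOAL[0] - pos[0]) / FIELD, (GOAL[1] - pos[1]) / FIELD])
        dx, dy = MOVES[net.act(obs)]
        nxt = (pos[0] + dx, pos[1] + dy)
        # moves into the barrier or off the field are ignored
        if 0 <= nxt[0] < FIELD and 0 <= nxt[1] < FIELD and nxt not in BARRIER:
            pos = nxt
        if pos == GOAL:
            return 100.0
    return -abs(GOAL[0] - pos[0]) - abs(GOAL[1] - pos[1])

def evolve(pop_size=50, generations=60, elite=10, sigma=0.3):
    """Very small genetic algorithm: truncation selection plus Gaussian mutation."""
    rng = np.random.default_rng(0)
    size = RecurrentController.genome_size()
    pop = [rng.standard_normal(size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        pop = parents + [parents[rng.integers(elite)] + sigma * rng.standard_normal(size)
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))

Networks of different complexity, as in the paper, could be compared by rerunning the same sketch with a different value of the hidden-layer size.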
This work is licensed under a Creative Commons Attribution 4.0 International License.