Optimality-Oriented Stabilization for Recurrent Neural Networks

Ziqian Liu (State University of New York Maritime College, USA)
DOI: 10.4018/978-1-60960-018-1.ch005

Abstract

This chapter presents an approach to optimality-oriented stabilization for recurrent neural networks, covering both input-to-state stabilization for deterministic recurrent neural networks and noise-to-state stabilization for stochastic recurrent neural networks. Because the Hamilton-Jacobi equation is difficult to solve for nonlinear systems, optimal regulation has long seemed an unachievable goal in control design for recurrent neural networks. The methodology proposed in this chapter overcomes this difficulty and obtains optimal stabilization by combining the Lyapunov technique, inverse optimality, and differential game theory. Numerical examples demonstrate the effectiveness of the proposed design.
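To indicate why the Hamilton-Jacobi equation is the obstacle, the display below recalls the generic Hamilton-Jacobi-Bellman (HJB) equation for an affine nonlinear system and the idea behind inverse optimality. The dynamics, cost functional, and notation used here are the standard ones from the inverse-optimality literature and are stated only as background assumptions of this sketch, not necessarily the chapter's exact formulation.

% Assumed standard setting (background sketch, not the chapter's exact model):
%   dynamics:  \dot{x} = f(x) + g(x)\,u
%   cost:      J = \int_0^{\infty} \big( l(x) + u^{\top} R(x)\, u \big)\, dt
% The optimal value function V(x) must satisfy the HJB equation
\[
  l(x) + \nabla V(x)^{\top} f(x)
  - \tfrac{1}{4}\, \nabla V(x)^{\top} g(x)\, R(x)^{-1} g(x)^{\top} \nabla V(x) = 0,
\]
% a nonlinear partial differential equation that is generally intractable.
% Inverse optimality sidesteps it: one first constructs a stabilizing feedback
%   u = -\tfrac{1}{2}\, R(x)^{-1} g(x)^{\top} \nabla V(x)
% from a Lyapunov function V, and then shows that this feedback is optimal
% with respect to some meaningful cost functional of the above form.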
Chapter Preview

Part-A: Input-To-State Stabilization For Deterministic Recurrent Neural Networks

This part considers the design of input-to-state stabilization for deterministic recurrent neural networks. The approach is developed using the Lyapunov technique, inverse optimality, and the Hamilton-Jacobi-Bellman (HJB) equation. Depending on the dimensions of the state and input, two optimal control laws are constructed to achieve input-to-state stabilization of the networks. Furthermore, the proposed designs achieve global asymptotic stability and global inverse optimality with respect to a meaningful cost functional. Two numerical examples demonstrate the performance of the approach. The content of this part is based on (Liu, Torres, Patel, & Wang, 2008) and (Liu & Wang, 2007).
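As a rough, self-contained illustration of input-to-state behaviour (not the chapter's published control laws or numerical examples), the Python sketch below integrates a small Hopfield-type recurrent network under a simple damping feedback and a bounded disturbance. The weight matrices, feedback gain, and disturbance signal are invented for this sketch.

import numpy as np

# Hopfield-type recurrent network: x_dot = -D x + A tanh(x) + u + d(t)
# All parameters below are illustrative assumptions, not taken from the chapter.
D = np.diag([1.0, 1.5])                      # positive self-feedback (leak) matrix
A = np.array([[0.5, -1.2], [0.8, 0.4]])      # interconnection weights
k = 3.0                                      # feedback gain (assumed sufficiently large)

def control(x):
    # Simple damping-type stabilizing feedback u = -k x (illustrative only;
    # not the inverse-optimal law derived in the chapter).
    return -k * x

def disturbance(t):
    # Bounded exogenous input used to probe input-to-state behaviour.
    return 0.3 * np.array([np.sin(2 * t), np.cos(3 * t)])

def simulate(x0, dt=1e-3, t_end=10.0):
    x, t = np.array(x0, dtype=float), 0.0
    traj = [x.copy()]
    while t < t_end:
        dx = -D @ x + A @ np.tanh(x) + control(x) + disturbance(t)
        x = x + dt * dx                      # forward-Euler integration step
        t += dt
        traj.append(x.copy())
    return np.array(traj)

if __name__ == "__main__":
    traj = simulate([2.0, -1.5])
    # The state norm settles to a small value bounded by the disturbance size,
    # which is the qualitative signature of input-to-state stability.
    print("final state norm:", np.linalg.norm(traj[-1]))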
