Submitted by ssunda27, 09/17/2015
ECE 451
LECTURE 7
Preface
This lecture continues our discussion of the method of dynamic programming and its application to
continuous-time systems, which leads to the classic Hamilton-Jacobi-Bellman (HJB) partial differential
equation. We first solve the exercise posed at the end of Lecture 6, using the HJB equation as the
solution method. We then revisit the linear regulator problem for continuous-time systems and use the
HJB equation to solve that problem as well. Again, this shows both the power and the value of the HJB
formulation.
HJB Equation Exercise
Recall that at the end of Lecture 6 we posed an optimal control problem as an exercise in using the HJB
equation as a solution mechanism. We will now undertake that solution.
We consider the following first-order system described by the state equation

    \dot{x}(t) = x(t) + u(t)    (7.1)
We want to find, using dynamic programming, the control law that minimizes the following performance
measure:

    J = \frac{1}{4} x^2(T) + \frac{1}{4} \int_0^T u^2(t) \, dt    (7.2)

where we shall assume that the final time T is specified, and that the admissible state and control values
are not constrained.
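Before deriving the optimal control, note that the performance measure (7.2) can be evaluated numerically for any candidate control. The following is a minimal sketch using assumed values (horizon T = 1, initial state x(0) = 1) and an arbitrary, non-optimal control u(t) = -x(t), with forward-Euler integration of the dynamics (7.1):

```python
# Evaluate J = (1/4) x^2(T) + (1/4) * integral_0^T u^2(t) dt   (equation 7.2)
# for the system x'(t) = x(t) + u(t)                            (equation 7.1)
# under an arbitrary, non-optimal candidate control u(t) = -x(t).
# T = 1 and x(0) = 1 are assumed values chosen purely for illustration.

T, N = 1.0, 1000
dt = T / N
x = 1.0                        # assumed initial state x(0)
integral = 0.0
for _ in range(N):
    u = -x                     # candidate (not optimal) control
    integral += 0.25 * u**2 * dt
    x += (x + u) * dt          # forward-Euler step of x' = x + u
J = 0.25 * x**2 + integral
print(J)                       # approximately 0.5 for this choice
```

For this particular control the dynamics cancel (x stays at 1), so the cost splits evenly between the terminal term and the integral term; other candidate controls can be compared the same way.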
We begin the solution by computing the Hamiltonian, i.e.

    H(x(t), u(t), J_x, t) = \frac{1}{4} u^2(t) + J_x \left[ x(t) + u(t) \right]    (7.3)

Note that the arguments of J_x have been omitted in this case for clarity. Since the control is
unconstrained, the optimal control must satisfy the following normal equation, i.e.
    \frac{\partial H}{\partial u} = \frac{1}{2} u(t) + J_x(x(t), t) = 0    (7.4)
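The differentiation behind equation (7.4) can be checked symbolically. The following is a small SymPy sketch that treats J_x as an undetermined symbol rather than a function of (x(t), t), which is sufficient since H is differentiated only with respect to u:

```python
import sympy as sp

u, x, Jx = sp.symbols('u x J_x')

# Hamiltonian from equation (7.3): H = (1/4) u^2 + J_x (x + u)
H = sp.Rational(1, 4) * u**2 + Jx * (x + u)

# Normal equation (7.4): dH/du = (1/2) u + J_x
dH_du = sp.diff(H, u)
print(dH_du)   # equals u/2 + J_x, matching (7.4)
```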
Now, looking at the second derivative, we have that
    \frac{\partial^2 H}{\partial u^2} = \frac{1}{2} > 0
Thus, the control that satisfies equation (7.4) does indeed minimize the Hamiltonian H. So, using
the normal equation, we have that

    u^*(t) = -2 J_x(x(t), t)    (7.5)
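The full chain, solving the normal equation for u*, confirming the second-derivative condition, and substituting u* back into the Hamiltonian in preparation for the HJB equation, can be verified symbolically. As before, this sketch treats J_x as a free symbol:

```python
import sympy as sp

u, x, Jx = sp.symbols('u x J_x')
H = sp.Rational(1, 4) * u**2 + Jx * (x + u)    # Hamiltonian (7.3)

# Solve the normal equation dH/du = 0 for the optimal control (7.5)
u_star = sp.solve(sp.diff(H, u), u)[0]         # gives -2*J_x

# Second derivative is 1/2 > 0, so u* is indeed a minimum of H
assert sp.diff(H, u, 2) == sp.Rational(1, 2)

# Minimized Hamiltonian, the quantity substituted into the HJB equation
H_star = sp.expand(H.subs(u, u_star))          # equals J_x*x - J_x**2
print(u_star, H_star)
```

The minimized Hamiltonian H(x, u*, J_x) = J_x x - J_x^2 is the expression that enters the HJB partial differential equation in the next step of the derivation.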
We can substitute equation (7.5) into the HJB equation and obtain the following result:
J ...