Homework 4: A Finite Horizon Discrete Time LQR
PA Q1, Q2, and Q3 should be submitted as a set of three files named Q1.m, Q2.m, and Q3.m, zipped into a single file and emailed to me.
Have a look at the accompanying zip file. Stub files for Q1.m, Q2.m, and Q3.m are provided to you; you should implement each of these. Once implemented, you should be able to run “hw4(X)” to run the code for question “X”. hw4.m is given to you and should not need to be modified. The only thing you need to do is insert code into the stub functions in Q1.m, Q2.m, and Q3.m. NOT ALL OF THE STUB FUNCTIONS IN Q1.m, Q2.m, AND Q3.m NEED TO BE MODIFIED. Please see the code.
PA Q1: In this question, you must implement a finite horizon discrete time LQR for the damped mass system described in class and illustrated in the course slides. The time horizon, T, and the A and B matrices that encode the system dynamics are already defined in hw4.m and you don’t need to change them. The QT, Q, and R cost matrices, as well as the initial state, x0, are also already defined in hw4.m. What you need to do is implement two functions, FH_DT_Riccati and getControl, as described in Q1.m. FH_DT_Riccati should perform the Riccati equation recursion. It should return a cell array, P_seq, where each cell is a 4 × 4 P matrix and the cell index is the same as the time index (P_seq{i} denotes the 4 × 4 P matrix at time i). getControl should calculate the control action when the system is in state x at time i.
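As a rough sketch only (not the required implementation), the two functions might look like the following. The argument lists here are assumptions and should be matched to the stub signatures actually provided in Q1.m:

    % Sketch only: argument lists are assumed; match them to the Q1.m stubs.
    function P_seq = FH_DT_Riccati(A, B, Q, R, QT, T)
        % Backward Riccati recursion for the finite horizon discrete time LQR.
        P_seq    = cell(1, T);
        P_seq{T} = QT;                         % terminal cost
        for i = T-1:-1:1
            P        = P_seq{i+1};
            P_seq{i} = Q + A'*P*A - (A'*P*B) / (R + B'*P*B) * (B'*P*A);
        end
    end

    function u = getControl(A, B, R, P_seq, x, i)
        % Control action at time i when the system is in state x, using the
        % cost-to-go matrix for the next time step.
        P = P_seq{i+1};
        K = (R + B'*P*B) \ (B'*P*A);           % feedback gain at time i
        u = -K * x;
    end

The indexing convention (whether getControl should use P_seq{i} or P_seq{i+1}) depends on how the stubs define the time index, so check Q1.m before copying this pattern.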
PA Q2: Exactly the same as Q1, except that you should now implement a receding horizon controller. Whereas in Q1 you found the optimal control for a fixed time horizon T, now you must execute a receding horizon controller that calculates a control action at each time step by optimizing over the next T time steps.
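A sketch of the receding horizon loop, assuming the Q1 functions are reused unchanged (Nsim, the number of simulation steps, is an illustrative name, not something defined in hw4.m):

    % Sketch only: Nsim and the surrounding loop structure are illustrative.
    x = x0;
    for k = 1:Nsim
        % Re-solve the T-step problem from the current state, then apply
        % only the first control action before re-planning.
        P_seq = FH_DT_Riccati(A, B, Q, R, QT, T);
        u     = getControl(A, B, R, P_seq, x, 1);
        x     = A*x + B*u;                     % advance the true system
    end

Because A, B, Q, R, and QT do not change between steps here, the Riccati recursion could be hoisted out of the loop; it is shown inside only to emphasize the receding horizon structure.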
PA Q3: In this question, we are going to use the LQR framework in a new
way. As we have studied it so far, LQR can generate large control inputs
that can change quickly. For example, the control output calculated in Q1
and Q2 is very large on the first few time steps. It is sometimes desirable
to find a trajectory that minimizes the change in control input rather than
the magnitude of the control input itself. It turns out that we can use LQR
to calculate these sorts of control policies as well. Suppose that we want to
minimize a cost function of the form:
\[
J(X, U) = x_T^T Q_T x_T + \sum_{t=1}^{T-1} \left( x_t^T Q x_t + u_t^T R u_t + \Delta u_t^T \hat{R} \Delta u_t \right),
\]
where the last term in the summation imposes a cost on change in control
input. We can achieve this behavior by defining a new system:

\[
\begin{pmatrix} x_{t+1} \\ u_{t+1} \end{pmatrix}
=
\begin{pmatrix} A & B \\ 0 & I \end{pmatrix}
\begin{pmatrix} x_t \\ u_t \end{pmatrix}
+
\begin{pmatrix} B \\ I \end{pmatrix} \Delta u_t
\]
with cost function
\[
J(X, U) =
\begin{pmatrix} x_T \\ u_T \end{pmatrix}^T
\begin{pmatrix} Q_T & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} x_T \\ u_T \end{pmatrix}
+ \sum_{t=1}^{T-1} \left[
\begin{pmatrix} x_t \\ u_t \end{pmatrix}^T
\begin{pmatrix} Q & 0 \\ 0 & R \end{pmatrix}
\begin{pmatrix} x_t \\ u_t \end{pmatrix}
+ \Delta u_t^T \hat{R} \Delta u_t \right].
\]
You should ask yourself the following questions: what is the new state vector representation? What are the new “A” and “B” matrices? Use this new representation to calculate the optimal trajectory in this scenario. You need to create three functions in Q3.m: FH_DT_Riccati, getControl, and getSyntheticDynamics. However, FH_DT_Riccati and getControl should be exactly the same functions you created in Q1.m; the only new function is getSyntheticDynamics. It takes the underlying parameters of the system as input and produces the new, modified parameters as output.
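One possible sketch of getSyntheticDynamics, reading the block matrices directly off the augmented system above (the signature and output order are assumptions; check the stub in Q3.m):

    % Sketch only: signature and return order are assumed, not prescribed.
    function [Abar, Bbar, Qbar, QTbar, Rbar] = getSyntheticDynamics(A, B, Q, QT, R, Rhat)
        n = size(A, 1);                        % dimension of x
        m = size(B, 2);                        % dimension of u
        Abar  = [A, B; zeros(m, n), eye(m)];   % dynamics of [x; u]
        Bbar  = [B; eye(m)];                   % input matrix for Delta u
        Qbar  = blkdiag(Q, R);                 % running cost on [x; u]
        QTbar = blkdiag(QT, zeros(m));         % terminal cost on [x; u]
        Rbar  = Rhat;                          % cost on Delta u
    end

With these in hand, FH_DT_Riccati and getControl from Q1 can be run on the augmented system, starting from the augmented initial state [x0; u0] (for example with u0 = 0), and the original control sequence is recovered by accumulating the Delta u values.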