
Principles of Robot Autonomy I
Problem Set 1

Starter code for this problem set has been made available online through Github. To get started, download
the code by running git clone https://github.com/PrinciplesofRobotAutonomy/Course1_Fall19_HW1.git
in a terminal window.
You will submit your homework to Gradescope. Your submission will consist of (1) a single pdf with your
answers for written questions (denoted by the symbol) and (2) a zip folder containing your code (and any
relevant files and plots) for the programming questions (denoted by the symbol).
Your written part must be typeset (e.g., LaTeX or Word).
Also note this problem set represents the start of an incremental journey to build our robot autonomy stack.
Thus, full completion of the coding modules in this set is key, as your implementation will be leveraged in the
next problem set and in the final project.
Introduction
The goal of this problem set is to familiarize you with some of the techniques for controlling nonholonomic
wheeled robots. The nonholonomic constraint here refers to the rolling without slipping condition for the
robot wheels which leads to a non-integrable set of differential constraints. In this problem set we will
consider the simplest nonholonomic wheeled robot model, the unicycle, shown below in Figure 1.
Figure 1: Unicycle robot model
Stanford University Principles of Robot Autonomy I - Fall 2019
The kinematic model we will use reflects the rolling without slipping constraint, and is given below in Eq. (1).
ẋ(t) = V cos(θ(t))
ẏ(t) = V sin(θ(t))
θ̇(t) = ω(t)
(1)
In this model, the robot state is x = (x, y, θ), where (x, y) is the Cartesian location of the robot center and θ
is its alignment with respect to the x-axis. The robot control inputs are u = (V, ω), where V is the velocity
along the main axis of the robot and ω is the angular velocity, subject to the control constraints:
|V (t)| ≤ 0.5 m/s, and |ω(t)| ≤ 1.0 rad/s.
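To make the model concrete, here is a minimal simulation sketch of Eq. (1) under forward Euler integration, with the inputs saturated at the limits above. The function name, timestep, and choice of Euler integration are illustrative assumptions, not part of the starter code.

```python
import numpy as np

def simulate_unicycle(x0, controls, dt=0.01):
    """Forward-simulate the unicycle model (1) with forward Euler integration.

    x0:       initial state (x, y, theta)
    controls: iterable of (V, omega) pairs, one per timestep
    dt:       integration timestep [s]
    Returns an array of states, one row per timestep (including x0).
    """
    states = [np.asarray(x0, dtype=float)]
    for V, om in controls:
        # saturate inputs at the control limits |V| <= 0.5, |omega| <= 1.0
        V = np.clip(V, -0.5, 0.5)
        om = np.clip(om, -1.0, 1.0)
        x, y, th = states[-1]
        states.append(np.array([x + dt * V * np.cos(th),
                                y + dt * V * np.sin(th),
                                th + dt * om]))
    return np.array(states)
```

For instance, driving straight at the maximum speed for one second covers 0.5 m along the x-axis.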
We’ll start by exploring how we might generate dynamically feasible trajectories and associated open-loop
control histories. However, the presence of un-modeled effects, external disturbances, and/or time sampling
may lead to open-loop control being ineffective on real systems. In the second part of the problem set, we
will focus on closed-loop control strategies for these systems that can reject these disturbances and provide
robustness.
Problem 1: Trajectory Generation via Differential Flatness
Consider the dynamically extended form of the robot kinematic model:
ẋ(t) = V cos(θ(t)),
ẏ(t) = V sin(θ(t)),
V̇(t) = a(t),
θ̇(t) = ω(t),
(2)
where the two inputs are now (a(t), ω(t)). Differentiating the velocities (ẋ(t), ẏ(t)) once more yields

\begin{pmatrix} \ddot{x}(t) \\ \ddot{y}(t) \end{pmatrix}
= \underbrace{\begin{pmatrix} \cos(\theta) & -V \sin(\theta) \\ \sin(\theta) & V \cos(\theta) \end{pmatrix}}_{:=\, J}
\begin{pmatrix} a \\ \omega \end{pmatrix}
:= \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.
Note that det(J) = V . Thus for V > 0, the matrix J is invertible. Hence, the outputs (x, y) are the flat
outputs for the unicycle robot, which means we may use the virtual control inputs (u1(t), u2(t)) to design
the trajectory (x(t), y(t)) and invert the equation above to obtain the corresponding control histories a(t)
and ω(t).
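As a sketch of this inversion step, one can solve the 2×2 linear system directly. The helper name and the guard threshold below are illustrative assumptions, not part of the starter code.

```python
import numpy as np

V_MIN = 1e-3  # guard against the singularity at V = 0 (value is an assumption)

def flat_to_controls(u1, u2, theta, V):
    """Map virtual inputs (u1, u2) = (xddot, yddot) to (a, omega) by inverting J.

    Valid only for V > 0, since det(J) = V makes J singular at V = 0.
    """
    V = max(V, V_MIN)  # keep J invertible
    J = np.array([[np.cos(theta), -V * np.sin(theta)],
                  [np.sin(theta),  V * np.cos(theta)]])
    a, om = np.linalg.solve(J, np.array([u1, u2]))
    return a, om
```

At θ = 0 and V = 1, J is the identity, so (a, ω) = (u1, u2), which is a quick sanity check.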
In this problem, we will design the trajectory (x(t), y(t)) using a polynomial basis expansion of the form
x(t) = Σ_{i=1}^{n} x_i ψ_i(t),    y(t) = Σ_{i=1}^{n} y_i ψ_i(t),

where ψ_i for i = 1, ..., n are the basis functions, and x_i, y_i are the coefficients to be designed.
(i) Take the basis functions ψ_1(t) = 1, ψ_2(t) = t, ψ_3(t) = t², and ψ_4(t) = t³. Write a set of linear
equations in the coefficients x_i, y_i for i = 1, ..., 4 to express the following initial and final conditions:

x(0) = 0, y(0) = 0, V(0) = 0.5, θ(0) = −π/2,
x(tf) = 5, y(tf) = 5, V(tf) = 0.5, θ(tf) = −π/2,

where tf = 15.
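One way to assemble such a linear system numerically is sketched below: the velocity conditions on (x, y) follow from ẋ = V cos(θ) and ẏ = V sin(θ) evaluated at t = 0 and t = tf. The function name and argument order are illustrative assumptions, not the starter-code interface.

```python
import numpy as np

def traj_coeff_system(tf, x0, y0, xf, yf, xd0, yd0, xdf, ydf):
    """Assemble A @ coeffs = b for the cubic basis psi = [1, t, t^2, t^3].

    coeffs = [x1..x4, y1..y4]; position and velocity conditions at t = 0, tf.
    Velocity conditions come from xdot = V cos(theta), ydot = V sin(theta).
    """
    def psi(t):  return np.array([1.0, t, t**2, t**3])
    def dpsi(t): return np.array([0.0, 1.0, 2 * t, 3 * t**2])

    # same 4x4 block applies independently to the x- and y-coefficients
    A4 = np.vstack([psi(0.0), dpsi(0.0), psi(tf), dpsi(tf)])
    A = np.block([[A4, np.zeros((4, 4))],
                  [np.zeros((4, 4)), A4]])
    b = np.array([x0, xd0, xf, xdf, y0, yd0, yf, ydf])
    return A, b
```

With the part (i) conditions, ẋ(0) = 0.5 cos(−π/2) = 0 and ẏ(0) = 0.5 sin(−π/2) = −0.5, and solving the system recovers the boundary values exactly.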
(ii) Why can we not set V (tf ) = 0?
(iii) Implement the following functions in P1_differential_flatness.py to compute the state-trajectory
(x(t), y(t), θ(t)), and control history (V (t), ω(t)):
• compute_traj_coeffs to compute the coefficients x_i, y_i for i = 1, ..., 4.
• compute_traj to use the coefficients to compute the state trajectory [x, y, θ, ẋ, ẏ, ẍ, ÿ] from t = 0
to t = tf.
• compute_controls to compute the actual robot controls (V, ω) from the state trajectory.
Run the script in the terminal by typing
$ python P1_differential_flatness.py
to see your trajectory optimization in action.
(iv) You may have noticed that in this process, we didn’t specify that our robot is constrained to have
a maximum speed Vmax = 0.5 and maximum angular velocity of ωmax = 1, and in fact, the resulting
trajectory violates these constraints!
Since there are only 4 basis polynomials and 4 constraints each for x and y, we get a unique solution, but
this solution may not necessarily satisfy the control constraints. Some ways to circumvent this are to
(1) increase the number of basis polynomials and solve a constrained trajectory optimization problem,
(2) increase the value of tf until the control history becomes feasible (this can be very suboptimal),
or (3) “edit” the infeasible trajectory, by re-scaling the velocity trajectory while keeping the geometric
aspects of the trajectory the same. We will adopt option (3).
First, we need to reparameterize the trajectory (x(t), y(t), θ(t)) from being a function of time t to a
function of arc-length s, the distance from the starting point. In this way, s(t) = ∫₀ᵗ V(τ) dτ represents
how far along the trajectory we are at time t, and (x(s), y(s), θ(s)) represents the state at that arc-length.
Now, in order to satisfy control constraints, we can follow this geometric trajectory at a different pace,
applying a different velocity as a function of s, Ṽ(s). This corresponds to defining a new time history
τ, defined by Ṽ = ds/dτ, τ(0) = 0. Integrating, we have:

τ(s) = ∫₀^τ dτ′ = ∫₀ˢ ds′ / Ṽ(s′).
Note that choosing Ṽ also defines a new angular velocity ω̃:

ω̃(s) = dθ/dτ = (dθ/dt) · (dt/ds) · (ds/dτ) = ω(s) · (1/V(s)) · Ṽ(s),
where ω(s) is the angular velocity at arc-length s as computed in part (iii).
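A minimal numeric sketch of this rescaling pipeline (arc length by cumulative trapezoidal integration, a pointwise-feasible Ṽ, then τ and ω̃), assuming trajectory arrays from part (iii). The function name and the particular choice of Ṽ are illustrative assumptions; the starter code splits these steps across separate functions.

```python
import numpy as np

def rescale_trajectory(t, V, om, V_max=0.5, om_max=1.0, eps=1e-6):
    """Rescale (V, om) along a fixed geometric path to satisfy input limits.

    Computes s(t) = int_0^t V dtau, picks a speed profile V_tilde with
    |V_tilde| <= V_max and |om * V_tilde / V| <= om_max, then recovers
    tau(s) = int_0^s ds' / V_tilde(s') and om_tilde = om * V_tilde / V.
    """
    t, V, om = map(np.asarray, (t, V, om))
    Vs = np.maximum(V, eps)  # guard divisions against V ~ 0
    # arc length via cumulative trapezoidal integration of V over t
    s = np.concatenate(([0.0], np.cumsum(0.5 * (V[1:] + V[:-1]) * np.diff(t))))
    # largest pointwise-feasible speed: V_tilde <= V_max, V_tilde <= V,
    # and |om| * V_tilde / V <= om_max  =>  V_tilde <= om_max * V / |om|
    V_tilde = np.minimum(V_max,
                         np.minimum(Vs, om_max * Vs / np.maximum(np.abs(om), eps)))
    V_tilde = np.maximum(V_tilde, eps)  # keep the tau integrand finite
    om_tilde = om * V_tilde / Vs
    # new time profile tau(s) = int_0^s ds' / V_tilde(s')
    inv = 1.0 / V_tilde
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (inv[1:] + inv[:-1]) * np.diff(s))))
    return s, V_tilde, om_tilde, tau
```

For a path traversed at constant V = 1 with ω = 2, this halves the speed to 0.5 so that ω̃ = 1 meets the limit, and the traversal time doubles accordingly.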
Implement the remaining functions in P1_differential_flatness.py:
• compute_arc_length which computes s at each point of the trajectory computed in part (iii).
• rescale_V which chooses Ṽ at each point along the original trajectory, ensuring that the resulting
Ṽ and ω̃ satisfy the control constraints.
• compute_tau which uses Ṽ and s to compute the new time profile.
• rescale_om which computes the adjusted angular velocity ω̃ from Ṽ and the original (V, ω).
Run the script again to check your results! The new trajectory should take the same geometric
path, but take a longer time to satisfy the control constraints. The resulting plot will be saved to
plots/differential_flatness.png.
(v) Please include differential_flatness.png in your write up.
(vi) Now, let’s see what happens when we execute these actions on a system with disturbances. To do
so, open the visualization Jupyter notebook by running the following command:
$ jupyter notebook sim_trajectory.ipynb
This notebook will use the code you wrote for this problem to plan trajectories for the initial and
final states specified in the notebook. Run the cells up until the closed loop simulation to validate your
work up to this point. You should see that the planned trajectory indeed reaches the goal, while in
the presence of noise, executing actions open-loop causes the trajectory to deviate. Experiment with
different initial and final states to see how the system behaves! Each time you run the plotting code
it will save the plot to plots/sim_traj_openloop.pdf. Include a plot with the initial conditions given
in part (i) in your write up.
Closed-Loop Control
Clearly, in order to deal with unmodeled effects and noise, we need to use feedback control. However, the
nonholonomic nature of the system raises some challenging controllability issues. Chiefly, it can be shown that
the unicycle (and more generally, the class of underactuated driftless regular systems) cannot be stabilized
around any pose (position & orientation) using smooth (or even continuous) time-invariant feedback control
laws [1] (Brockett’s Theorem). This negative result has prompted extensive research in various control
strategies such as smooth time-varying control laws, discontinuous control laws, and feedback linearization.
In contrast to the pose stabilization problem, the trajectory tracking problem is significantly simpler. In
this problem set you will experiment with a discontinuous kinematic pose controller and a smooth dynamic
trajectory tracking controller for the unicycle model¹
, thereby imbuing your robot with some robustness.
Problem 2: Pose Stabilization
We will now study closed-loop control for pose stabilization, i.e., stabilizing the robot around some desired
position and orientation. For consistency, we will set the desired goal position to be (xg, yg) = (5, 5) and
desired pose as θg = −π/2. One way to get around Brockett’s controllability result which prevents the use
of smooth time-invariant state-feedback control laws is to use a change of variables. Consider Figure 2.
Here, α is the bearing error and (ρ, δ) represent the robot’s polar coordinate position with respect to the
goal. In these new coordinates our kinematic equations of motion become:
ρ̇(t) = −V(t) cos(α(t))
α̇(t) = V(t) sin(α(t))/ρ(t) − ω(t)
δ̇(t) = V(t) sin(α(t))/ρ(t)
(3)
The pose stabilization problem is now equivalent to driving the state (ρ, α, δ) to the unique equilibrium
(0, 0, 0). In these coordinates, Brockett’s condition no longer applies, so we can use the simple (smooth)
control law
V = k1 ρ cos(α),
ω = k2 α + k1 (sin(α) cos(α)/α)(α + k3 δ),
(4)
¹Note that many of the control laws discussed in this problem set can also be extended to related nonholonomic robot
models, e.g., a rear/front wheel drive robot with forward steering and a trailer/car combination. Interested students are referred
to [2].
Figure 2: New coordinate system
where k1, k2, k3 > 0. Note that the kinematic equation (3) is undefined at ρ = 0. However, the control law
above drives the robot to the desired goal asymptotically (i.e., as time goes to infinity) so we never have to
worry about the equations becoming undefined. Furthermore, note that this control law is still discontinuous
in the original coordinates, as necessary due to Brockett’s result.
(i) Complete the function compute_control in the PoseController class in P2_pose_stabilization.py
and program the control law above. The goal position and controller gains k1, k2, k3 are member
variables of the class.
(ii) Validate your work. Open the visualization Jupyter notebook by running the following command:
$ jupyter notebook sim_parking.ipynb
Test your controller with different initial positions and goal poses to simulate different parking scenarios.
For each section of the notebook, choose initial and final poses to demonstrate forward, reverse, and
parallel parking scenarios.
(iii) The plots will be saved as plots/sim_parking_<parking-type>.png. Please include all three plots
(forward, reverse, and parallel) in your write up.
Hints: (1) You should use the function wrapToPi also present in utils.py to ensure that α and δ remain
within the range [−π, π]. This is needed since the control law has terms linear in α and δ. (2) Use the
function np.sinc to handle values α ≈ 0 in the control law for ω above. (3) You are free to choose the
gains k1, k2, and k3 (try staying between (0, 1.5]); however, do not be too aggressive since we have saturation
limits!
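Putting the hints together, a sketch of the control law (4) might look as follows. The gain values, the stand-in wrap_to_pi helper, and the exact sign conventions for α and δ are assumptions to check against Figure 2 and the starter code.

```python
import numpy as np

def wrap_to_pi(a):
    """Wrap an angle to [-pi, pi] (stand-in for utils.wrapToPi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def pose_control(x, y, th, x_g, y_g, th_g, k1=0.5, k2=0.8, k3=0.8):
    """Control law (4) in the polar coordinates (rho, alpha, delta) of Figure 2."""
    dx, dy = x_g - x, y_g - y
    rho = np.hypot(dx, dy)                          # distance to goal
    alpha = wrap_to_pi(np.arctan2(dy, dx) - th)     # bearing error
    delta = wrap_to_pi(np.arctan2(dy, dx) - th_g)   # heading of goal line w.r.t. goal pose
    V = k1 * rho * np.cos(alpha)
    # np.sinc(a/pi) = sin(a)/a, so this term stays well-defined as alpha -> 0
    om = k2 * alpha + k1 * np.sinc(alpha / np.pi) * np.cos(alpha) * (alpha + k3 * delta)
    # saturate at the actuator limits
    return np.clip(V, -0.5, 0.5), np.clip(om, -1.0, 1.0)
```

When the robot sits 1 m directly behind the goal with zero bearing and heading error, the law commands pure forward motion with no turning, as expected.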
Note: It is possible to extend this method to allow path tracking [3]; however, we will adopt a simpler and
more elegant approach for tracking trajectories.
Problem 3: Trajectory Tracking
We will now address closed-loop control for (i) trajectory tracking using the differential flatness approach,
and (ii) parking using the non-linear kinematic controller from Problem 2. Consider the trajectory designed
in Problem 1 (we will refer to the center coordinates of this desired trajectory as (xd, yd)). We will implement
the following virtual control law for trajectory tracking:
u1 = ẍd + kpx(xd − x) + kdx(ẋd − ẋ)
u2 = ÿd + kpy(yd − y) + kdy(ẏd − ẏ)
(5)
where kpx, kpy, kdx, kdy > 0 are the control gains.
(i) Write down a system of equations for computing the true control inputs (V, ω) in terms of the
virtual controls (u1, u2) = (¨x, y¨) and the vehicle state. HINT: it includes an ODE.
(ii) Complete the compute_control function in the TrajectoryTracker class which is defined in the
file P3_trajectory_tracking.py. Specifically, program in the virtual control law Eq. (5) to compute
u1 and u2, and integrate your answer from (i) to convert (u1, u2) into the actual control inputs (V, ω).
Hint: At each timestep you may consider the current velocity to be that commanded in the previous
timestep. The controller class is designed to save this as the member variable self.V_prev
Warning: You must be careful when computing the actual control inputs due to the potential singularity in
velocity discussed in Problem 1. Although we took care to ensure that the nominal trajectory did not have
V ≈ 0, it is possible that noise might cause this singularity to occur. If the actual velocity drops below the
threshold V_PREV_THRES, you should “reset” it to the nominal velocity.
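A compact sketch of parts (i)-(ii) combined: from (u1, u2) = (ẍ, ÿ), the acceleration is a = u1 cos(θ) + u2 sin(θ), V is obtained by integrating a over one timestep from the previously commanded velocity, and ω = (u2 cos(θ) − u1 sin(θ))/V. The signature, gain values, and threshold value below are illustrative; the starter code organizes this inside the TrajectoryTracker class.

```python
import numpy as np

V_PREV_THRES = 1e-4  # minimum speed before guarding (value is an assumption)

def tracking_control(state, desired, V_prev, dt, kpx=1.0, kpy=1.0, kdx=2.0, kdy=2.0):
    """Virtual control law (5) plus conversion of (u1, u2) to (V, omega).

    state   = (x, y, theta, xdot, ydot)
    desired = (xd, yd, xd_dot, yd_dot, xd_ddot, yd_ddot)
    V_prev  = velocity commanded at the previous timestep
    """
    x, y, th, xdot, ydot = state
    xd, yd, xd_d, yd_d, xd_dd, yd_dd = desired
    u1 = xd_dd + kpx * (xd - x) + kdx * (xd_d - xdot)
    u2 = yd_dd + kpy * (yd - y) + kdy * (yd_d - ydot)
    # invert the flatness map: a = u1 cos(th) + u2 sin(th), then integrate V
    a = u1 * np.cos(th) + u2 * np.sin(th)
    V = max(V_prev, V_PREV_THRES) + a * dt   # guard the singularity at V = 0
    V = np.clip(V, -0.5, 0.5)
    om = (u2 * np.cos(th) - u1 * np.sin(th)) / max(abs(V), V_PREV_THRES)
    om = np.clip(om, -1.0, 1.0)
    return V, om
```

With zero tracking error on a straight constant-speed segment, the controller simply holds the previous velocity and commands no turning.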
(iii) Validate your work. Open the trajectory tracking visualization notebook:
$ jupyter notebook sim_trajectory.ipynb
Run the closed loop simulation section and see if using the controller, the robot can now track the
planned trajectory even in the presence of noise. Feel free to experiment with different initial and final
states, as well as different amounts of noise or different controller gains.
Each run will save a plot in plots/sim_traj_closedloop.png. Run the script with the initial and final
positions as given in Problem 1 and a nonzero noise amount, and include the resulting plot in your
write up.
Note: It is possible to modify the flatness-based trajectory tracking controller to also allow pose stabilization.
See [4] for details.
These techniques will be core components of the trajectory generation and control modules of our autonomy
stack. However, you may notice that so far our trajectory generation neglects key constraints such as the
presence of obstacles. Thus, in Problem Set 2 we will integrate these components with motion planning
algorithms to find and track feasible trajectories through cluttered environments.
Extra Problem: Optimal Control and Trajectory Optimization
Let’s revisit the problem of designing a dynamically feasible trajectory. In Problem 1, we were only interested
in finding a trajectory that connected the starting state to the goal state in a dynamically feasible manner.
However, there are often many such dynamically feasible trajectories, and we might prefer some to others.
In this problem, we will utilize tools from optimal control to design a trajectory that explicitly optimizes a
given objective.
Consider the kinematic model of the unicycle given in (1). The objective is to drive from one waypoint to
the next waypoint with minimum time and energy, i.e., we want to minimize the functional
J = ∫₀^{tf} (λ + V(t)² + ω(t)²) dt,
where λ ∈ R≥0 is a weighting factor and tf is free. We’ll use the following initial and final conditions, which
are the same as in Problem 1.
x(0) = 0, y(0) = 0, θ(0) = −π/2,
x(tf ) = 5, y(tf ) = 5, θ(tf ) = −π/2.
(i) Derive the Hamiltonian and conditions for optimality and formulate the problem as a 2P–BVP.
Make sure you explain your steps, and include the boundary conditions.
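As a reminder of the general recipe (not the worked solution for this system), the necessary conditions from Pontryagin's minimum principle for a system ẋ = f(x, u) with running cost L(x, u) and free final time tf are:

```latex
% General first-order necessary conditions (free final time):
H(\mathbf{x}, \mathbf{u}, \mathbf{p}) = L(\mathbf{x}, \mathbf{u}) + \mathbf{p}^{\top} f(\mathbf{x}, \mathbf{u}),
\qquad
\dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{x}},
\qquad
\frac{\partial H}{\partial \mathbf{u}} = \mathbf{0},
\qquad
H(t_f) = 0 \;\;\text{(free final time)}.
```

Here L(x, u) = λ + V² + ω² and f is the kinematic model (1); the state equations, costate equations, and boundary conditions together form the 2P-BVP.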
(ii) Complete P4_optimal_control.py to solve the 2P–BVP with the largest possible value of λ such
that the resulting optimal control history satisfies the control constraints. Note: for this problem, do
not use the constrained-control version of the optimality conditions; rather, vary the value of λ
yourself until the unconstrained problem yields a feasible solution. Experiment with different initial
guesses for the solution. Run python P4_optimal_control.py to test your code. In particular, it will
generate a plot of the trajectory (x(t), y(t)) and the control history (V (t), ω(t)). The plot will be
saved to plots/optimal_control.png. Use this to verify that the control constraints and boundary
conditions are met, and that the solution trajectory is smooth.
(iii) Please include optimal_control.png in your write up. Compare the result with the trajectory
obtained through the differential flatness approach in Problem 1.
(iv) Explain the significance of using the largest feasible λ.
(v) Validate your work. We will now simulate the car with the computed control history in the presence
of disturbances (modeled as input noise).
Open and run the visualization notebook for this problem:
$ jupyter notebook sim_optimal_control.ipynb
Try experimenting with different amounts of noise. The plot will be saved as
plots/sim_traj_optimal_control.png. Please include it in your write up.
Note: You will need the scikits.bvp_solver package (which is already installed in the virtual machine we
provide - see question 5 below). See https://pythonhosted.org/scikits.bvp_solver/, and try installing
directly using sudo pip install scikits.bvp_solver. If this fails for some reason, update/reinstall gcc
(make sure you include gfortran), download the source code for the package, and build it yourself.
Hint: You will need to re-formulate the problem into “standard” form - see “Reformulation of boundary value
problems into standard form,” SIAM Review, 23(2):238–254, 1981.
References
[1] R. W. Brockett, “Asymptotic stability and feedback stabilization,” Differential geometric control theory,
vol. 27, no. 1, pp. 181–191, 1983.
[2] A. De Luca, G. Oriolo, and C. Samson, “Feedback control of a nonholonomic car-like robot,” in Robot
motion planning and control. Springer, 1998, pp. 171–253.
[3] M. Aicardi, G. Casalino, A. Bicchi, and A. Balestrino, “Closed loop steering of unicycle like vehicles via
Lyapunov techniques,” IEEE Robotics & Automation Magazine, vol. 2, no. 1, pp. 27–35, 1995.
[4] G. Oriolo, A. De Luca, and M. Vendittelli, “WMR control via dynamic feedback linearization: Design,
implementation, and experimental validation,” IEEE Transactions on Control Systems Technology, vol. 10,
no. 6, pp. 835–852, 2002.