AA 274: Principles of Robot Autonomy
Problem Set 4: Filtering, Localization, and Mapping
Starter code for this problem set has been made available online through github; to get started download the
code by running git clone https://github.com/PrinciplesofRobotAutonomy/Course1_Fall19_HW4.git
in a terminal window.
For your final submission, you will submit the Python files that contain your work for the code items
and a PDF containing your write-up for the written questions.
Before you start, you will have to git pull from the asl_turtlebot repo. To make sure you do not have
any merge conflicts, make sure that the following files have not been modified. You can run git status to
check. If you have modified them, run git checkout <filename>, and then git pull.
• localization.py
• map_fixing.py
Problem 1: EKF Localization
In this problem we will use the linear feature extraction methodology developed in Problem Set 3 as the basis
for a robot localization Extended Kalman Filter (EKF). Essentially, given a known map of linear features,
we can correct the output of open-loop state propagation by measuring the difference between linear features
perceived by the robot and the map features it expects to see. See pages 331–342 in SNS for a more detailed
exposition of line-based EKF localization on which this problem is modeled.
Recall the differential drive model through which we represent the dynamics of our simulated Turtlebot:
$$\dot{x}(t) = V(t)\cos(\theta(t))$$
$$\dot{y}(t) = V(t)\sin(\theta(t))$$
$$\dot{\theta}(t) = \omega(t) \tag{1}$$
The continuous state variable is $x(t) = [x(t), y(t), \theta(t)]^T$ and the instantaneous control is $u(t) = [V(t), \omega(t)]^T$.
(i) As discussed in class, we can derive a discrete-time model from these continuous dynamics by
assuming a zero-order hold on the control input (i.e., hold $u(t)$ constant over a time interval of length
$dt$). That is, for suitable notions of $x_t$ and $u_t$ we may write $x_t = g(x_{t-1}, u_t)$, and for small perturbations
$(\tilde{x}_{t-1}, \tilde{u}_t)$ close to $(x_{t-1}, u_t)$ we may Taylor expand:
$$\tilde{x}_t = g(\tilde{x}_{t-1}, \tilde{u}_t) \approx g(x_{t-1}, u_t) + G_x(x_{t-1}, u_t)\,(\tilde{x}_{t-1} - x_{t-1}) + G_u(x_{t-1}, u_t)\,(\tilde{u}_t - u_t)$$
where $G_x$ and $G_u$ are the Jacobians of $g$ with respect to $x$ and $u$ respectively.[1]

[1] This is just the multivariate analog of approximations like $f(\tilde{x}) \approx f(x) + (\tilde{x} - x)f'(x)$.
Implement the computation of g, Gx, and Gu in the compute_dynamics() function in turtlebot_model.py.
Then call this function in the transition_model() method of the EkfLocalization class in ekf.py.
Note that your implementation must accommodate arbitrary control durations dt, as well as the case
when |ω| < EPSILON_OMEGA.
Run validate_localization_transition_model() from validate_ekf.py to check your work.
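For concreteness, here is a minimal sketch of what compute_dynamics() might compute; the argument order, return convention, and exact handling of the $\omega \to 0$ limit in the starter code may differ, so treat this only as a reference for the math.

    import numpy as np

    EPSILON_OMEGA = 1e-3  # threshold below which we take the omega -> 0 limit

    def compute_dynamics_sketch(x_prev, u, dt):
        # x_prev = (x, y, th), u = (V, om); returns g, Gx, Gu
        x, y, th = x_prev
        V, om = u
        th_new = th + om * dt
        if abs(om) < EPSILON_OMEGA:
            # Straight-line limit of the general case as omega -> 0
            g = np.array([x + V * np.cos(th) * dt,
                          y + V * np.sin(th) * dt,
                          th_new])
            Gx = np.array([[1., 0., -V * np.sin(th) * dt],
                           [0., 1.,  V * np.cos(th) * dt],
                           [0., 0.,  1.]])
            # Second column is the exact limit of dg/d(om) as om -> 0
            Gu = np.array([[np.cos(th) * dt, -V * dt**2 * np.sin(th) / 2.],
                           [np.sin(th) * dt,  V * dt**2 * np.cos(th) / 2.],
                           [0., dt]])
        else:
            g = np.array([x + V / om * (np.sin(th_new) - np.sin(th)),
                          y - V / om * (np.cos(th_new) - np.cos(th)),
                          th_new])
            Gx = np.array([[1., 0., V / om * (np.cos(th_new) - np.cos(th))],
                           [0., 1., V / om * (np.sin(th_new) - np.sin(th))],
                           [0., 0., 1.]])
            Gu = np.array([[(np.sin(th_new) - np.sin(th)) / om,
                            V * (om * dt * np.cos(th_new) - np.sin(th_new) + np.sin(th)) / om**2],
                           [-(np.cos(th_new) - np.cos(th)) / om,
                            V * (om * dt * np.sin(th_new) + np.cos(th_new) - np.cos(th)) / om**2],
                           [0., dt]])
        return g, Gx, Gu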
(ii) Let the belief state at time $t-1$ be distributed as $\mathcal{N}(x_{t-1}, \Sigma_{t-1})$. We will model the uncertainty
in our Turtlebot's dynamics propagation by additive continuous white noise $\nu \sim \mathcal{N}(0, R)$ applied to
the control input. Then over a time step $dt$ the EKF prediction step is:
$$\bar{x}_t = g(x_{t-1}, u_t)$$
$$\bar{\Sigma}_t = G_x \cdot \Sigma_{t-1} \cdot G_x^T + dt \cdot G_u \cdot R \cdot G_u^T.$$
Implement the dynamics transition update (i.e., prediction step) in the transition_update() method
of the Ekf class in ekf.py.
Run validate_ekf_transition_update() from validate_ekf.py to check your work.
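As a sanity check, the prediction step amounts to just a few lines. This sketch assumes attribute names self.x, self.Sigma, and self.R for the belief mean, belief covariance, and control noise covariance; verify against your copy of the starter code.

    # Minimal sketch of Ekf.transition_update(), assuming transition_model()
    # returns (g, Gx, Gu) for the held control u over duration dt.
    def transition_update(self, u, dt):
        g, Gx, Gu = self.transition_model(u, dt)
        self.x = g
        self.Sigma = Gx.dot(self.Sigma).dot(Gx.T) + dt * Gu.dot(self.R).dot(Gu.T)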
The discrete-time EKF developed in class assumes that all time steps are the same duration and every
prediction step is followed immediately by a measurement correction. This yields arguably the cleanest
presentation of the EKF, but is a bit at odds with how typical robotic systems act in practice. Measurement
corrections (e.g., scanner data, GPS data) often occur at rates ∼ 10Hz interspersed with controls streaming
in at up to thousands of updates per second (e.g., modern MEMS inertial measurement units). Moreover,
these measurement updates often take time to process so that by the time they’re ready to be applied for
filtering they may no longer be relevant to the most recent prediction step. Thus instead of the nice, lockstep
mathematical version of the EKF we may equivalently think of the EKF as an object that tries to maintain
a high-quality Gaussian belief state based on whatever information it’s seen up to some time. That is,
transition updates and measurement updates may stream in arbitrarily, but as long as they’re processed in
order and appropriately according to our mathematical dynamics/measurement/noise models, the lack of
synchronization won't be a problem. Thus in the following parts let $\bar{x}_t$ and $\bar{\Sigma}_t$ denote the most up-to-date
belief mean and covariance at the time of measurement.
(iii) Our turtlebot is equipped with a depth sensor[2] that allows it to detect lines in the coordinate
frame of its camera. In this problem we compare these exteroceptive location cues against known lines
in a map $M$, a $2 \times J$ matrix whose columns $m^j = [\alpha^j, r^j]^T$ contain line parameters in the world
frame (in the code this map is stored as self.map_lines for an EkfLocalization object). To compare
the predicted and observed measurements we must convert the world-frame parameters for each map
line into the camera frame.
Implement the coordinate change for a map entry between the world frame and camera frame in the
transform_line_to_scanner_frame() function in turtlebot_model.py; see Figure 1 for a description
of the relationship between these two coordinate frames. That is, you should compute the mean
camera frame parameters $h_t$ (which will depend on $\bar{x}_t$) for a single map entry $m$, as well as its Jacobian
$H_t$ with respect to the belief state mean.

Figure 1: The robot's state defines the offset/yaw of the robot's base frame with respect to
the world frame. The member variable self.tf_base_to_camera will contain the (constant)
offset/yaw of the robot's camera frame with respect to its base frame.

[2] We've made this sensor quite noisy in the simulation portion of this problem for "dramatic effect."
Next, use this coordinate change function to complete the compute_predicted_measurements() method
of the EkfLocalization class in ekf.py.
Run validate_localization_compute_predicted_measurements() from validate_ekf.py to check
your work.
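The geometry reduces to composing the camera pose in the world frame and re-expressing $(\alpha, r)$ relative to it. The following sketch assumes a particular argument layout (check the starter code's docstrings) and glosses over any normalization of $(\alpha, r)$ that the validation may expect.

    import numpy as np

    def transform_line_to_scanner_frame_sketch(line, x, tf_base_to_camera):
        # line:              (alpha, r) in the world frame
        # x:                 belief mean (x, y, theta) of the base frame
        # tf_base_to_camera: (x_cb, y_cb, theta_cb), camera pose in base frame
        alpha, r = line
        xb, yb, th = x
        x_cb, y_cb, th_cb = tf_base_to_camera

        # Camera pose in the world frame
        x_cam = xb + x_cb * np.cos(th) - y_cb * np.sin(th)
        y_cam = yb + x_cb * np.sin(th) + y_cb * np.cos(th)
        th_cam = th + th_cb

        # Line parameters in the camera frame: points satisfying
        # x' cos(alpha') + y' sin(alpha') = r' with the camera at the origin
        h = np.array([alpha - th_cam,
                      r - x_cam * np.cos(alpha) - y_cam * np.sin(alpha)])

        # Jacobian of h with respect to the belief mean (x, y, theta)
        dr_dth = (x_cb * np.sin(th) + y_cb * np.cos(th)) * np.cos(alpha) \
               - (x_cb * np.cos(th) - y_cb * np.sin(th)) * np.sin(alpha)
        Hx = np.array([[0., 0., -1.],
                       [-np.cos(alpha), -np.sin(alpha), dr_dth]])
        return h, Hx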
(iv) In order to apply the Kalman filter update correctly, the observed lines must be associated with
the map entries most likely to have produced them. To this end, we use the Mahalanobis distance
between a predicted measurement $h_t^j$ (i.e., a map entry $m^j$ in the camera frame) and an observation $z_t^i$
(i.e., a line extracted from the scanner angle/depth data). With the innovation
$$v_t^{ij} = z_t^i - h_t^j$$
as a measure of the difference between a predicted and observed measurement, and with the innovation
covariance
$$S_t^{ij} = H_t^j \cdot \bar{\Sigma}_t \cdot (H_t^j)^T + Q_t^i$$
(where $Q_t^i$ is the covariance[3] of the observation $z_t^i$), we may calculate the Mahalanobis distance as
$$d_t^{ij} = (v_t^{ij})^T \cdot (S_t^{ij})^{-1} \cdot v_t^{ij}.$$

[3] We didn't make you compute this covariance in last week's assignment, but rest assured that a Gaussian approximation of
the uncertainty in $(\alpha, r)$ that arises from Gaussian noise on the scanner measurements $(\theta, \rho)_k$ may be derived using linearization
and Jacobians (the common theme of this whole EKF business).
For each observed scanner line we want to associate the most likely map entry (i.e., the entry with the least
Mahalanobis distance), but we want to make sure we don't fall prey to any corrupting measurements
that do not correspond well to any entries in the map (i.e., unmapped changes in the environment).
Thus we introduce a validation gate $g$ and consider only associations that fall within this gate, $d_t^{ij} < g^2$.
The validation gate $g$ may be found as self.g for an EkfLocalization object.[4]

[4] Note that this $g$ is distinct from the discrete transition function $g(x, u)$; the validation gate is a multivariate Gaussian
analog of a z-score (and is a scalar)!
Each observation $z_t^i$ will be associated with at most one map entry $h_t^j$; for each association $(i, j(i))$
the "measurement" that we actually use in the EKF update equations (see part (v) below) will be the
innovation $v_t^{ij}$, with measurement covariance $Q_t^i$ and Jacobian with respect to the belief mean $H_t^j$.
Implement the measurement association process in the compute_innovations() method of the EkfLocalization
class in ekf.py.
Run validate_localization_compute_innovations() from validate_ekf.py to check your work.
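A straightforward (unvectorized) association loop might look like the sketch below; the container names zs, Qs, hs, Hs are illustrative stand-ins for whatever compute_predicted_measurements() and the line extractor return, and angle wrap-around in the innovation is left as a caveat.

    import numpy as np

    def associate_sketch(zs, Qs, hs, Hs, Sigma_bar, g):
        # zs, Qs: extracted lines and their covariances (one per observation i)
        # hs, Hs: predicted measurements and Jacobians (one per map entry j)
        v_list, Q_list, H_list = [], [], []
        for i, z in enumerate(zs):
            best_d, best = np.inf, None
            for j, h in enumerate(hs):
                v = z - h                         # innovation (mind angle wrapping!)
                S = Hs[j].dot(Sigma_bar).dot(Hs[j].T) + Qs[i]
                d = v.dot(np.linalg.solve(S, v))  # Mahalanobis distance
                if d < best_d:
                    best_d, best = d, (v, Qs[i], Hs[j])
            if best_d < g**2:                     # validation gate
                v_list.append(best[0])
                Q_list.append(best[1])
                H_list.append(best[2])
        return v_list, Q_list, H_list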
(v) We are now in a position to implement the EKF measurement correction. We may stack the
innovation vectors for all line associations into one big measurement
$$z_t = \begin{bmatrix} v_t^1 \\ \vdots \\ v_t^K \end{bmatrix}$$
with covariance
$$Q_t = \begin{bmatrix} Q_t^1 & & 0 \\ & \ddots & \\ 0 & & Q_t^K \end{bmatrix}$$
and Jacobian with respect to $x$
$$H_t = \begin{bmatrix} H_t^1 \\ \vdots \\ H_t^K \end{bmatrix}$$
(note: the indexing on the $Q$'s and $H$'s here comes from compute_innovations() and should correspond
to the ordering of the $v$'s).
Implement the assembly of this joint measurement in the measurement_model() method of the EkfLocalization
class in ekf.py.
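The assembly itself is mostly bookkeeping; a sketch using illustrative names (the starter code's return convention may differ):

    import numpy as np
    import scipy.linalg

    # Stack the per-association pieces returned by compute_innovations()
    # into the joint measurement described above.
    def stack_measurements_sketch(v_list, Q_list, H_list):
        z = np.concatenate(v_list)            # (2K,) stacked innovations
        Q = scipy.linalg.block_diag(*Q_list)  # (2K, 2K) block-diagonal covariance
        H = np.vstack(H_list)                 # (2K, 3) stacked Jacobians
        return z, Q, H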
(vi) Finally, we can implement the standard EKF measurement correction:
$$S_t = H_t \cdot \bar{\Sigma}_t \cdot H_t^T + Q_t$$
$$K_t = \bar{\Sigma}_t \cdot H_t^T \cdot S_t^{-1}$$
$$x_t = \bar{x}_t + K_t \cdot z_t$$
$$\Sigma_t = \bar{\Sigma}_t - K_t \cdot S_t \cdot K_t^T$$
Implement the measurement update (i.e., correction step) in the measurement_update() method of
the Ekf class in ekf.py.
Run validate_ekf_localization() from validate_ekf.py to check your work.
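Putting the pieces together, the correction step is a direct transcription of the four equations above; this sketch again assumes self.x and self.Sigma hold the predicted belief at the time of measurement.

    import numpy as np

    # Sketch of Ekf.measurement_update(), assuming measurement_model()
    # returns the stacked (z, Q, H) from part (v).
    def measurement_update(self):
        z, Q, H = self.measurement_model()
        S = H.dot(self.Sigma).dot(H.T) + Q
        K = np.linalg.solve(S, H.dot(self.Sigma)).T  # K = Sigma H^T S^-1 (S symmetric)
        self.x = self.x + K.dot(z)
        self.Sigma = self.Sigma - K.dot(S).dot(K.T)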
(vii) Time to test your localization EKF on a real (simulated) robot! Update (git pull) your asl_turtlebot
package and run the turtlebot3_maze.launch launch file to load the simulation environment (gazebo)
and state estimate visualization (rviz). The default is gui:=false; you can still visualize the environment
through the camera in RViz. Click the "Add" button on the bottom left and investigate more!
Getting the right ROS plumbing together for feeding control/measurement information into your EKF
(including keeping track of timestamps) is a bit tricky,[5] so we've provided a script localization.py
for you (where would this file live?). Copy ekf.py, ExtractLines.py, maze_sim_parameters.py, and
turtlebot_model.py into the asl_turtlebot/scripts/HW4 folder and run the localization.py
node.

[5] If you have a particular interest in learning the ins and outs of ROS we strongly encourage you to write this script yourself!
Run the teleop launch file. You can use the one from the turtlebot3_teleop package, or the rosnode
in the asl_turtlebot package.[6]

[6] You may note that if you stop with only one wall in view, your EKF estimate may have drifted parallel to the direction of
that wall. Turn about yourself with j or l to see that drift corrected!
Edit localization.py to inject some extra uncertainty into the system (e.g., perturb the initial
position/heading, add Gaussian noise to the control/scanner measurements, routinely drop half your
scan points, etc.; sometimes simulations are a little too perfect to provide really compelling
results!). Depending on what uncertainty you add, it may also make sense to change some of the
variance/covariance parameters in maze_sim_parameters.py as well.
(viii) Drive your TurtleBot around the maze, and observe how the open-loop and EKF state estimates
change. Based on your driving, explain what types of motion cause those estimates to diverge from
each other. Take three screenshots of RViz at (1) the initial state, (2) when the TurtleBot has moved far
from the initial state, and (3) when the state estimates diverge, and include them in your write-up.
Problem 2: EKF SLAM
Note: This problem builds upon Problem 1, so you should finish that problem before moving on to this one.
Localizing ourselves with respect to an a priori known map may make sense in structured settings (e.g., a
factory floor which a robot will repeatedly traverse), but in general robots must autonomously build their own
maps from scratch (accounting for uncertain localization) while exploring an environment. In this problem
we’ll explore a middle ground between these two extremes — where the robot has a fuzzy knowledge of the
map that it must correct during its operation.
(i) EKF Simultaneous Localization and Mapping (SLAM) follows the same general structure as EKF
localization; the main difference is that the EKF belief state is augmented with a probabilistic
representation of the map:
$$x(t) = [x(t), y(t), \theta(t), \alpha^1, r^1, \ldots, \alpha^J, r^J]^T.$$
The map features are assumed to be static in the world frame, so that their dynamics are
$$\dot{\alpha}^j = 0, \quad \dot{r}^j = 0 \quad \forall j.$$
Similar to Problem 1 (and likely reusing some code) we may consider the state transition function
$x_t = g(x_{t-1}, u_t)$ and its derivatives $G_x = \frac{\partial g(x_{t-1}, u_t)}{\partial x}$ and $G_u = \frac{\partial g(x_{t-1}, u_t)}{\partial u}$.
Implement the computation of g, Gx, and Gu in the transition_model() method of the EkfSlam
class in ekf.py.
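Since the map features are static, only the pose block of $g$ and $G_x$ is nontrivial. A sketch, reusing a compute_dynamics()-style helper from Problem 1 (names illustrative):

    import numpy as np

    # Sketch of EkfSlam.transition_model(): the robot pose evolves as in
    # Problem 1 while the 2J map parameters stay fixed.
    def slam_transition_model_sketch(x_prev, u, dt):
        g = np.copy(x_prev)
        Gx = np.eye(x_prev.size)          # identity over the static map block
        Gu = np.zeros((x_prev.size, 2))
        g[:3], Gx[:3, :3], Gu[:3, :] = compute_dynamics_sketch(x_prev[:3], u, dt)
        return g, Gx, Gu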
(ii) The main work of EKF SLAM is done in the measurement update. To do EKF SLAM “right” this
measurement update should even include a procedure for expanding the state with new line features
as they’re encountered (and confidently identified as new, as opposed to a mis-extracted or noisy
measurement of an existing line), but as mentioned above we’re going to “cheat” and assume we have
prior knowledge of the number of line features and a noisy idea of the parameters for each one.
First we note that without a fixed world frame only the relative position and heading of the robot with
respect to the observable landmarks can be estimated. That is, for any fixed scenario there are degrees
of freedom in x corresponding to translating or rotating both the robot and line features by the same
amount simultaneously. To eliminate these degrees of freedom we will fix as certain the parameters of
the first two landmarks in the state vector. You may imagine these as the measured parameters for
the first two non-parallel line features the robot sees together in the same scanner measurement,
taking the robot state (x, y, θ) at that moment to be (0, 0, 0).
Reimplement measurement_model(), compute_innovations(), and compute_predicted_measurements()
for the EkfSlam class based on your implementation for EkfLocalization in ekf.py. Not much should
actually be different; the most notable change should be seen in the construction of
$H_t^j = \frac{\partial h_t^j(x_t)}{\partial x}$, minding the size increase of $x$.
Run validate_ekf_slam() from validate_ekf.py to check your work.
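One way to picture the change in $H_t^j$: the $2 \times 3$ pose Jacobian from Problem 1 lands in the first three columns, and the derivative of $h$ with respect to the line's own parameters $(\alpha^j, r^j)$ lands in the corresponding $2 \times 2$ slot. A hypothetical helper:

    import numpy as np

    # Illustrative scatter of a SLAM measurement Jacobian. Hx_pose is the 2x3
    # Jacobian w.r.t. (x, y, theta); dh_dline is the 2x2 Jacobian w.r.t.
    # (alpha_j, r_j). For the transform in part (iii) of Problem 1, dh_dline
    # works out to [[1, 0], [x_cam*sin(alpha) - y_cam*cos(alpha), 1]], with
    # (x_cam, y_cam) the camera position in the world frame.
    def scatter_H_sketch(Hx_pose, dh_dline, j, J):
        H = np.zeros((2, 3 + 2 * J))
        H[:, :3] = Hx_pose
        H[:, 3 + 2 * j : 5 + 2 * j] = dh_dline
        return H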
(iii) Time to test! Run the turtlebot3_arena.launch launch file to load the simulation environment
(gazebo) and the state and map estimate visualization (rviz). The default is gui:=false; you can
still visualize the environment through the camera in RViz. Click the "Add" button on the bottom
left and investigate more!
Make sure that the ekf.py, ExtractLines.py, maze_sim_parameters.py, and turtlebot_model.py
in the scripts/HW4 directory of asl_turtlebot are updated with your changes from parts (1) and
(2). Run the map_fixing.py node to start up the EKF SLAM.
Finally, run the teleop node to drive the robot around.
Drive your TurtleBot around the arena,[7] and observe how your map estimate changes. In particular,
you should notice that over time your map estimate converges to the true map. Based on your
driving, explain what you can do to make your map estimate converge to the
right one. To show this, take screenshots of RViz at (1) the initial state, (2) when the TurtleBot has
moved away from the initial state and the map estimate has changed, and (3) when the map estimates
have converged, and include them in your write-up.

[7] We've restricted the field of view of the simulated LIDAR scan so that the robot can't see the entire arena at once.
Further, investigate what types of motion may cause your EKF and ground-truth estimates to diverge.
Extra Problem: Monte Carlo Localization
Note: This problem builds upon Problem 1, so you should finish that problem before moving on to this one.
In this problem, we will implement Monte Carlo Localization (MCL), a popular localization algorithm that
uses particle filters instead of Kalman filters to localize the robot with respect to a known map. While Kalman
filters are limited to unimodal distributions of the belief state (represented with a Gaussian), particle filters
are non-parametric and can approximate any distribution with enough particles. This property allows MCL
to perform global localization, while standard EKF Localization requires that the initial pose is known.
Most of the code should look very similar to Problem 1, except the computations need to be done for
each particle. For extra credit, try to avoid for loops in your implementation wherever possible by using
Numpy vector operations instead. Python for loops carry significant computational overhead, while Numpy
operations use highly optimized C/Fortran code. Our solution contains no Python for loops, and is ∼ 100
times faster as a result. This means we can comfortably handle 1000 particles in real-time, while a naive
implementation might only handle 10. This gives you a sense of how important engineering quality is for
localization and mapping algorithms.
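As a toy illustration of the difference (straight-line motion only, not the full Turtlebot model), the vectorized form replaces a per-particle Python loop with whole-array numpy operations:

    import numpy as np

    M = 1000
    xs = np.zeros((M, 3))   # M particle states (x, y, theta)
    V, om, dt = 0.2, 0.1, 0.05

    # Vectorized: each line updates all M particles in one numpy call,
    # instead of looping `for m in range(M): ...` in Python.
    xs[:, 0] += V * np.cos(xs[:, 2]) * dt
    xs[:, 1] += V * np.sin(xs[:, 2]) * dt
    xs[:, 2] += om * dt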
(i) In MCL, we represent the belief state with a set of $M$ particles, each particle consisting of a
state estimate and its weight, $(x_t^{(m)}, w_t^{(m)})$.
Implement the transition update step of MCL by completing the transition_model() method in the
ParticleFilter class and the transition_update() method in the MonteCarloLocalization class
in particle_filter.py. The transition update needs to be performed for each of the M particles.
Run validate_transition_model() from validate_pf.py to check your work.
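If you attempt the vectorized route, np.where lets you handle the $|\omega| <$ EPSILON_OMEGA case for all particles at once. A sketch, assuming us is an (M, 2) array of per-particle noisy controls (the starter code's conventions may differ):

    import numpy as np

    EPSILON_OMEGA = 1e-3

    def pf_transition_model_sketch(xs, us, dt):
        # xs: (M, 3) particle states; us: (M, 2) noisy controls (V, om)
        x, y, th = xs[:, 0], xs[:, 1], xs[:, 2]
        V, om = us[:, 0], us[:, 1]
        th_new = th + om * dt
        # np.where evaluates both branches, so guard the division by ~0 first
        om_safe = np.where(np.abs(om) < EPSILON_OMEGA, 1.0, om)
        x_new = np.where(np.abs(om) < EPSILON_OMEGA,
                         x + V * np.cos(th) * dt,
                         x + V / om_safe * (np.sin(th_new) - np.sin(th)))
        y_new = np.where(np.abs(om) < EPSILON_OMEGA,
                         y + V * np.sin(th) * dt,
                         y - V / om_safe * (np.cos(th_new) - np.cos(th)))
        return np.column_stack([x_new, y_new, th_new])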
(ii) Now implement the measurement update step by completing the measurement_model(),
compute_innovations(), and compute_predicted_measurements() methods for the
MonteCarloLocalization class in particle_filter.py. Although particle filtering can perform
measurement updates with raw observations (point clouds), we choose to use extracted line features to
provide a direct comparison to EKF Localization. Unlike EKF Localization, there is no need to filter
out corrupting measurements using the validation gate $g$, since particles with low weights will be
naturally filtered out in the resampling step. The Mahalanobis distance can be computed as
$$d_t^{ij} = (v_t^{ij})^T \cdot (Q_t^i)^{-1} \cdot v_t^{ij}.$$
Run validate_predicted_measurements() and validate_compute_innovations() from
validate_pf.py to check your work.
(iii) After updating the particle weights in the measurement update step, the particles should be
resampled with importance sampling. One problem with resampling is that it decreases the diversity
of the particles, since some particles will most likely be duplicated and others lost. To avoid this, we
can perform the following low-variance sampling algorithm.

    function Resample((x^(0), w^(0)), ..., (x^(M−1), w^(M−1)))
        r ← UniformSample(0, 1/M)
        i ← 0
        c ← w^(0)
        X ← ∅
        for m ← 0 ... M−1 do
            u ← (Σ_j w^(j)) · (r + m/M)
            while c < u do
                i ← i + 1
                c ← c + w^(i)
            end while
            X ← X ∪ {(x^(i), 1/M)}
        end for
        return X
    end function
This algorithm works by selecting a single random number at the beginning, and then cycling through
the weights in increments of 1/M. If the distribution of weights is uniform, then this resampling method
will return the same set of particles. Even if the distribution isn't uniform, the probability that a particle
is sampled is still proportional to its weight!
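A direct, loop-based transcription of this pseudocode in Python might look as follows (vectorizing it is part of the extra credit below); here xs is (M, d), and ws is (M,) and need not be normalized:

    import numpy as np

    def resample_sketch(xs, ws):
        M = len(ws)
        r = np.random.uniform(0., 1. / M)
        total = ws.sum()              # weights need not sum to 1
        xs_new = np.empty_like(xs)
        i, c = 0, ws[0]
        for m in range(M):
            u = total * (r + m / M)   # target point along the cumulative weights
            while c < u:
                i += 1
                c += ws[i]
            xs_new[m] = xs[i]
        return xs_new, np.full(M, 1. / M)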
Implement this algorithm in the resample() method for the ParticleFilter class in particle_filter.py.
Run validate_resample() and validate_mc_localization() from validate_pf.py to check your
work.
(iv) Time to test! First copy particle_filter.py into the asl_turtlebot/scripts/HW4 folder.
Run the turtlebot3_maze.launch launch file to load the simulation environment (gazebo) and the
state and map estimate visualization (rviz). The default is gui:=false.
Next, run the localization.py node with Monte Carlo enabled (_mc:=true) and the number of
particles set to 100 (_num_particles:=100). Once you’ve launched the localization node, you can
visualize the particles by adding the particle_filter topic in RViz.
Finally, run the teleop node to drive the robot around.
Depending on how fast your implementation is, you may have to lower the number of particles to get
reasonable performance. See if you can handle up to 1000 particles.
Drive your TurtleBot around the maze, and observe how the open-loop and MCL state estimates
change. Take three screenshots of RViz (with the particle view enabled) at (1) the initial state, (2)
when the TurtleBot has moved far from the initial state, and (3) when the state estimates diverge,
and include them in your write-up. Explain what types of motion may cause your MCL and ground-truth
estimates to diverge. Does this change with the number of particles you use?
(v) Extra Credit: Try to implement the following functions without the use of for or while loops:
• resample()
• transition_model()
• compute_innovations()
• compute_predicted_measurements()
Include in your write-up the code for the functions you have vectorized. Even if you can't fully vectorize
a function, you will get partial credit for each nested for loop you eliminate.