CSCI545 Monte Carlo Localization Lab
1 Introduction
In this lab, we will implement a localization system for a mobile robot with a planar LIDAR range sensor.
Localization systems are important elements of all robots that need to act. Before you can choose actions,
you need to know where you are relative to your environment.
There are many types of localizers. Some use RGB camera sensors and attempt to recognize the viewpoint
from which they are seeing a scene, which remains a difficult problem in the general case. Others look at
pre-crafted features in the environment, like ArUco tags, which are easy to detect and placed in known
locations. These systems are robust and reliable for controlled environments like factories, but infeasible for
uncontrolled environments (like most of the world).
We will implement a common type of localizer, called a Monte Carlo Localizer, which is based on a probabilistic filter called a particle filter. Our sensory input will be a dense planar LIDAR scan of our surroundings,
and we will try to localize against a map generated with the same kind of sensor. Using this sensory information, and integrating it over time, we can gain an accurate probabilistic understanding of where our robot
is relative to an existing map.
2 Theory
2.1 Localization
A localizer is a system that takes a series of actions $u_t$, observations $z_t$, and a map $m$, and tries to estimate the global pose $x_t$ of the robot relative to the map.
2.2 Particle Filters
A particle filter is a nonparametric Bayesian filter, which approximates the posterior distribution $bel(x_t)$ using a finite number of parameters, called particles. We call our collection of particles $\mathcal{X}_t = \{x_t^{(1)}, \ldots, x_t^{(n)}\}$, where each particle $x_t^{(i)}$ represents a sample drawn from the posterior. More simply, each particle is a concrete hypothesis of the global pose of the robot.
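As a concrete picture (illustrative only, not the lab skeleton's representation), the particle set can be stored as an $n \times 3$ array of pose hypotheses:

import numpy as np

n = 1000  # number of particles; illustrative value

# Each row is one pose hypothesis (x, y, theta) in the map frame.
# A common initialization scatters hypotheses uniformly over the map.
particles = np.empty((n, 3))
particles[:, 0] = np.random.uniform(0.0, 10.0, n)      # x; assumed 10 m map width
particles[:, 1] = np.random.uniform(0.0, 10.0, n)      # y; assumed 10 m map height
particles[:, 2] = np.random.uniform(-np.pi, np.pi, n)  # heading theta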
Let's examine the algorithm for a particle filter line by line, focusing specifically on the case of localization.
Algorithm 1: Particle Filter
input : $\mathcal{X}_{t-1}$, $u_t$, $z_t$
output: $\mathcal{X}_t$
1:  $\bar{\mathcal{X}}_t = \mathcal{X}_t = \emptyset$
2:  for $i = 1 \to n$ do
3:      sample $x_t^{(i)} \sim p(x_t \mid u_t, x_{t-1}^{(i)})$
4:      $w_t^{(i)} = p(z_t \mid x_t^{(i)})$
5:      add $(x_t^{(i)}, w_t^{(i)})$ to $\bar{\mathcal{X}}_t$
6:  end
7:  for $j = 1 \to n$ do
8:      draw $i$ with probability $\propto w_t^{(i)}$
9:      add $x_t^{(i)}$ to $\mathcal{X}_t$
10: end
From the input and output, we can immediately see that the particle filter is a recursive Bayesian filter: it takes a current estimate of the state, along with some new information, and produces a new state estimate. As discussed in class, recursive filters are often simpler to implement and more memory efficient than filters that require an entire history of sensory inputs.
Line 3 applies the motion model to each pose hypothesis $x_t^{(i)}$. Given some motion information $u_t$ from the robot, we change each pose by moving it in accordance with our understanding of how the robot moves. More information on motion models is presented in Section 2.3.
Line 4 calculates an importance weighting for each particle based on the sensor model. A sensor model tells us how likely the given measurement $z_t$ is from each hypothesis $x_t^{(i)}$, given our known map $m$. More information on sensor models is presented in Section 2.4.
Lines 8 and 9 take the hypotheses updated by the motion and sensor models and resample the particles in proportion to their importance weights.
After running this algorithm once, we see that our particles $\mathcal{X}_t$ approximate the new posterior $p(x_t \mid z_t, u_t, x_{t-1})$.
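As a concrete illustration of the loop above, here is a minimal Python sketch of Algorithm 1. It is not the lab skeleton; motion_model and sensor_model stand in for the models described in Sections 2.3 and 2.4 (with the map folded into sensor_model):

import numpy as np

def particle_filter_step(particles, u_t, z_t, motion_model, sensor_model):
    """One iteration of Algorithm 1 over an (n, 3) array of (x, y, theta) poses."""
    n = len(particles)

    # Lines 2-6: propagate each hypothesis and weight it against the measurement.
    predicted = np.array([motion_model(p, u_t) for p in particles])
    weights = np.array([sensor_model(z_t, p) for p in predicted])
    weights /= weights.sum()  # normalize; assumes at least one nonzero weight

    # Lines 7-10: resample with probability proportional to the weights.
    indices = np.random.choice(n, size=n, p=weights)
    return predicted[indices]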
For a more complete presentation of the general particle filter, including a mathematical derivation of its properties as an estimator and a discussion of sampling bias and particle deprivation problems, please refer to Thrun's Probabilistic Robotics, Section 4.3.
2.3 Motion Models
In this lab, we will consider an odometry motion model. The alternative, a velocity motion model, was presented in class. Please refer to your lecture notes for the difference between the two.
A motion model specifies the probability distribution $p(x_t \mid x_{t-1}, u_t)$. That is, given our current pose and some control input, we can predict the likelihood of ending up in various next states. For the particle filter algorithm, however, we don't need to compute the exact probabilities over the state space; we just need to be able to draw samples from this posterior.
Odometry is a measure of how far a robot has driven. Oftentimes, a rotary encoder is attached to the wheel axle of a robot, and the ticks are integrated per wheel to estimate the position of the robot. Understanding the relationship between the rotary encoder and the pose of the robot requires knowledge of the robot's kinematics, i.e. how large each wheel is, and where the wheels are relative to each other.
Note that odometry is a sensor measurement, while a motion model integrates control input. We work around this by treating a pair of odometry-estimated poses as a control input $u_t = (x_t, x_{t-1})$. Practically, this reduces to the difference between the two poses, $\Delta\mathbf{x} = (\Delta x, \Delta y, \Delta\theta)$.
Odometric sensors using rotary encoders cannot observe many common types of error, such as wheel slippage. Imagine a robot with one wheel resting on ice. The wheel may turn, and we will integrate that turning as robot motion, but in reality the robot has not moved because of the slippery surface.
To model uncertainty caused by sensor noise and possible slip, we perturb each dimension of the change in pose by a zero-mean Gaussian $\epsilon \sim \mathcal{N}(0, \sigma^2)$, with the variance $\sigma^2$ a tunable parameter. By sampling the perturbation in each dimension, we can sample a new pose from the motion-model posterior

$x_t \sim p(x_t \mid x_{t-1}, u_t)$

or, more concretely,

$x_t = x_{t-1} + \Delta\mathbf{x} + \mathbf{w}$

where the noise $\mathbf{w} = (w_x, w_y, w_\theta)$ has components

$w_x \sim \mathcal{N}(0, \sigma_x^2), \quad w_y \sim \mathcal{N}(0, \sigma_y^2), \quad w_\theta \sim \mathcal{N}(0, \sigma_\theta^2)$
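A minimal sketch of sampling from this motion model follows; the variance names are illustrative stand-ins for the tunable parameters defined at the top of the lab skeleton:

import numpy as np

SIGMA_X_SQ, SIGMA_Y_SQ, SIGMA_THETA_SQ = 0.02, 0.02, 0.01  # illustrative variances

def motion_model(pose, delta):
    """Sample x_t ~ p(x_t | x_{t-1}, u_t) for one particle.

    pose  -- (x, y, theta) hypothesis at time t-1
    delta -- odometry pose difference (dx, dy, dtheta)
    """
    # np.random.normal expects standard deviations, hence the square roots.
    noise = np.random.normal(0.0, np.sqrt([SIGMA_X_SQ, SIGMA_Y_SQ, SIGMA_THETA_SQ]))
    return np.asarray(pose) + np.asarray(delta) + noise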
2.4 Sensor Models
In our context, each observation $z_t = \{z_t^{(1)}, z_t^{(2)}, \ldots, z_t^{(n)}\}$ is a planar LIDAR scan of $n$ rays, where each ray $z_t^{(i)} = (r_t^{(i)}, \theta_t^{(i)})$ consists of a measured distance $r_t^{(i)}$ and an angle $\theta_t^{(i)}$ in the sensor frame.
Given the map $m$, these observations become conditionally independent, so our posterior can be broken apart as follows:

$p(z_t \mid x_t, m) = \prod_{i=1}^{n} p(z_t^{(i)} \mid x_t, m)$
For a particular ray $z_t^{(i)}$, we can simulate a corresponding ray $\hat{z}_t^{(i)} = (\hat{r}_t^{(i)}, \theta_t^{(i)})$ given a pose (or pose hypothesis) $x_t$ and a map $m$ by raycasting in the map until we hit an obstacle. We can get this distance using Bresenham's algorithm, or a similar line-tracing algorithm, along the discretized occupancy grid which represents our map.
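As a sketch of how the simulated range $\hat{r}_t^{(i)}$ can be computed, the following steps along a ray through the occupancy grid (a simple stepping walk rather than full Bresenham; the grid conventions here are assumptions, not the lab's):

import numpy as np

def raycast(grid, resolution, x, y, theta, max_range):
    """Walk along a ray from (x, y) at heading theta until an occupied cell.

    grid       -- 2D array, nonzero where a cell is occupied
    resolution -- meters per grid cell
    Returns the simulated range in meters.
    """
    step = resolution / 2.0  # step half a cell at a time to avoid skipping cells
    for r in np.arange(0.0, max_range, step):
        col = int((x + r * np.cos(theta)) / resolution)
        row = int((y + r * np.sin(theta)) / resolution)
        if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
            break  # ray left the map; report max range
        if grid[row, col]:
            return r
    return max_range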
We will use a simplified sensor model which models the probability of a laser hit using a one-dimensional Gaussian centered on the simulated laser return. Given a particle hypothesis $x_t$, we know what each ray of a simulated scan would look like. We can thus compute, per ray, the probability of our actual measurement $z_t^{(i)}$ as

$p(z_t^{(i)} \mid x_t, m) = \mathcal{N}(r_t^{(i)}; \hat{r}_t^{(i)}, \sigma_{hit}^2) = \frac{1}{\sqrt{2\pi\sigma_{hit}^2}} \, e^{-\frac{(r_t^{(i)} - \hat{r}_t^{(i)})^2}{2\sigma_{hit}^2}}$
The variance of this distribution, $\sigma_{hit}^2$, is an intrinsic tunable noise parameter which encompasses both sensor error and map error.
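Putting the pieces together, here is a sketch of the per-particle weight computation under this model, reusing the raycast sketch above (the parameter name and scan format are illustrative assumptions):

import numpy as np

SIGMA_HIT_SQ = 0.01  # illustrative variance; the lab defines its own parameter

def sensor_model(scan, pose, grid, resolution, max_range):
    """Compute p(z_t | x_t, m) for one particle as a product of per-ray Gaussians.

    scan -- iterable of (r, theta) rays in the sensor frame
    pose -- (x, y, heading) particle hypothesis
    """
    x, y, heading = pose
    weight = 1.0
    for r, theta in scan:
        r_hat = raycast(grid, resolution, x, y, heading + theta, max_range)
        # Gaussian likelihood N(r; r_hat, sigma_hit^2) for this ray
        weight *= (np.exp(-(r - r_hat) ** 2 / (2 * SIGMA_HIT_SQ))
                   / np.sqrt(2 * np.pi * SIGMA_HIT_SQ))
    return weight

In practice the product of many small per-ray likelihoods underflows floating point quickly, so implementations often accumulate log-likelihoods instead.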
In practice, beam sensors like LIDAR have more complex models. See Probabilistic Robotics Section 6.3 for a detailed discussion of other sources of discrepancy between real measurements and map-simulated returns, including unexpected objects, sensor failures, and special cases at the maximum range of the sensor.
2.5 Monte Carlo Localization
After integrating information through the particle filter, we now have almost enough information to localize a robot. Note that $\mathcal{X}_t$ gives a collection of samples approximating the posterior $bel(x_t)$. However, to plan and act, the robot needs a single concrete estimate of its state.
We can concretize the distribution in a simple way by assuming it is unimodal and taking the average pose of all particles in the filter:

$\hat{x}_t = \left( \frac{1}{n}\sum_{i=1}^{n} x_t^{(i)},\; \frac{1}{n}\sum_{i=1}^{n} y_t^{(i)},\; \frac{1}{n}\sum_{i=1}^{n} \theta_t^{(i)} \right)$
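A minimal sketch of this estimate over an (n, 3) particle array follows; note that the heading is averaged here through unit vectors (a circular mean) rather than the plain average in the equation above, since raw angles wrap around at $\pm\pi$:

import numpy as np

def estimate_pose(particles):
    """Average the (x, y, theta) columns of an (n, 3) particle array."""
    x_hat = particles[:, 0].mean()
    y_hat = particles[:, 1].mean()
    # Circular mean of the headings avoids wrap-around artifacts near +/- pi.
    theta_hat = np.arctan2(np.sin(particles[:, 2]).mean(),
                           np.cos(particles[:, 2]).mean())
    return np.array([x_hat, y_hat, theta_hat])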
3 Implementation
You will write a single ROS node in Python, called mcl545_node. The skeleton code opens a pre-recorded bag file and subscribes to the odometry and scan topics for you.
Search the code in catkin_ws/src/usc545mcl/bin/usc545mcl.py for the 3 comments titled YOUR CODE HERE for implementation instructions. This is all of the code you will need to write.
Before you can execute your code, cd to the directory into which you cloned your repo, and then run the following (you only need to do this once):
sudo ./setup.sh
cd catkin_ws
catkin build
Then, to execute your lab code, still in the catkin_ws directory, run:
source devel/setup.bash
roslaunch usc545mcl usc545mcl.launch
4 Output
Answer the following questions in 3-5 sentences each in your lab document.
1. How does each of the model variance parameters affect the performance of the localizer? Give possible
reasons for the behavior you see. The parameters are defined at the top of the file.
2. How does the number of particles affect the behavior of the localizer? The number of particles can be
modified at the top of the file.
3. Can a particle filter with a single particle perform well? Why or why not? What if it starts in the
correct position?
Include the error graph generated by the localizer in your writeup document.
5 Extra Credit
• The noise model for the motion model presented in Section 2.3 is overly simplistic. You can find a
better noise model in Section 5.4 of Probabilistic Robotics. Explain why the noise model presented in
this lab has poor properties. Implement the better noise model.
• There are more components involved in an accurate beam sensor model. They can be found in Section
6.3 of Probabilistic Robotics. Implement the complete beam sensor model from this section.
• The particle filter is powerful because it can model complex, multimodal posterior distributions. Averaging all particles assumes a unimodal distribution and may yield very unrealistic estimates. Use a clustering algorithm to find the particles in the largest mode, and compute their average as a more realistic estimate to concretize the posterior (a starting-point sketch follows this list).
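For the clustering item, here is a minimal sketch of one possible approach, assuming scikit-learn's DBSCAN and an (n, 3) particle array; the eps and min_samples values are illustrative assumptions, and any clustering method is acceptable:

import numpy as np
from sklearn.cluster import DBSCAN

def estimate_pose_largest_mode(particles):
    """Average only the particles belonging to the most populated cluster."""
    # Cluster on (x, y) position only; heading is ignored for clustering.
    labels = DBSCAN(eps=0.3, min_samples=5).fit(particles[:, :2]).labels_
    valid = labels[labels >= 0]            # DBSCAN marks noise points with -1
    if valid.size == 0:
        return particles.mean(axis=0)      # fall back to the plain average
    largest = np.bincount(valid).argmax()  # cluster id with the most members
    return particles[labels == largest].mean(axis=0)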