ECE4580 Homework #6
Problem 1 (35 pts) Now that you have practice with simple calibration using the singular value decomposition
(SVD), it's time to calibrate a stereo rig that exists in your little vision lab (depicted in Figure 1). Since you bought
two of the exact same model of camera, you've found that the intrinsic camera matrix for the left and right cameras is the
same:
Ψ = [ 400    0  320
        0 −400  240
        0    0    1 ],
and you know that the following world points
(5.000, 6.000, 2.000)^T, (7.0, 3.0, 1.0)^T, (6.0, 2.0, 3.0)^T, (5.0, 3.0, 2.0)^T, (8.0, 5.0, 3.5)^T, (9.0, 3.0, 0.5)^T, and (8.000, 3.000, 2.000)^T
project to the following image-plane points
(115.0, 233.0)^T, (307.0, 250.0)^T, (419.0, 221.0)^T, (323.0, 163.0)^T, (288.0, 433.0)^T, (308.0, 316.0)^T, and (342.0, 315.0)^T
for the first camera, and to the following image-plane points
(282.000, 365.000)^T, (364.0, 252.0)^T, (320.0, 283.0)^T, (273.0, 283.0)^T, (414.0, 399.0)^T, (445.0, 238.0)^T, and (415.0, 283.0)^T
for the second camera,
as best as you can tell (there should be a file called stereo prob.mat with this data). Note that the intrinsic camera
matrix has a negative focal length in the second row. This is because Matlab uses a reversed vertical orientation
for matrices. It is normal and OK, since the image coordinates are given in Matlab's matrix coordinate system, which
Matlab calls ij-coordinates, as opposed to xy-coordinates. You can plot points in ij-coordinates by typing "axis ij" and
in xy-coordinates by typing "axis xy" (the latter is the default view mode for plotting points).
Your job is to identify the positions and orientations of the two cameras. In your lab setup, knowing these
transformation parameters will help you use triangulation to compute the distance of a point seen in both cameras.
What are the (R, T) pairs for each camera with respect to the world frame? Also, what is the (R, T) pair for the right
camera relative to the left camera?
Note: Recall that if you solve using what was discussed in class, you will solve for the world frame relative to
the camera frame, which is the inverse of what is actually asked for. This was discussed a bit in the earlier extrinsic-
parameters problem. Sometimes it is easier to solve for the inverse R and T, then build the full g matrix and invert it
to get the answer in the world coordinate frame.
Testing: You can always test your solution by seeing if the world points project properly onto the image coordinates
for both cameras.
Code Stub: There should be a code stub called extrinsicCalib.m that you can use. Some of the code from
the earlier SVD problem can be recycled.
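The intended workflow is the Matlab stub extrinsicCalib.m; as a language-neutral illustration, here is a NumPy sketch of the same idea: normalize the pixels with Ψ⁻¹, solve for D = [R | T] with the SVD-based DLT, fix the scale and sign, and compose poses to get the right camera relative to the left. The function names (extrinsic_calib, relative_pose) are my own, not part of the provided stub.

```python
import numpy as np

def extrinsic_calib(Psi, X_world, x_img):
    """Estimate the world-to-camera pose (R, T) given the intrinsics Psi.

    X_world: (N, 3) world points; x_img: (N, 2) pixel coordinates.
    DLT: normalize pixels by Psi^{-1}, stack the cross-product constraints
    x_n x (D [X; 1]) = 0, and take the SVD null vector of the stacked system.
    """
    N = len(X_world)
    xh = np.hstack([x_img, np.ones((N, 1))])
    xn = (np.linalg.inv(Psi) @ xh.T).T          # normalized coordinates

    A = []
    for (X, Y, Z), (u, v, _) in zip(X_world, xn):
        P, O = [X, Y, Z, 1.0], [0.0] * 4
        A.append(P + O + [-u * p for p in P])   # two independent rows of
        A.append(O + P + [-v * p for p in P])   # the cross-product constraint
    _, _, Vt = np.linalg.svd(np.asarray(A))
    D = Vt[-1].reshape(3, 4)                    # [R | T] up to scale and sign

    # Fix the scale: a rotation matrix has unit singular values.
    D = D / np.linalg.svd(D[:, :3], compute_uv=False).mean()
    # Fix the sign: world points must have positive depth in the camera.
    if D[2] @ np.append(X_world[0], 1.0) < 0:
        D = -D
    # Project the 3x3 block onto the nearest rotation matrix.
    U, _, Vt = np.linalg.svd(D[:, :3])
    return U @ Vt, D[:, 3]

def relative_pose(R1, T1, R2, T2):
    """Pose of camera 2 relative to camera 1: X_2 = R_rel X_1 + T_rel."""
    R_rel = R2 @ R1.T
    return R_rel, T2 - R_rel @ T1
```

As the Testing paragraph suggests, a quick sanity check is to reproject the world points through the recovered (R, T) and Ψ and compare against the measured image coordinates.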
Problem 2. (20 pts) Recall the earlier problem where you had a surveillance camera setup. You had calibrated it
for use in some code you have. However, the earlier calibration work was incomplete because it did not separate the
intrinsic parameters from the extrinsic parameters. Use the data that you had before, plus the QR decomposition, to decompose
the camera projection matrix M into its different components.
Recall that your measurements were as follows. The world coordinates
p^W_1 = (1.006, 10.080, 4.474)^T, p^W_2 = (−1.580, 9.980, 7.068)^T, p^W_3 = (15.150, −0.247, 11.200)^T, p^W_4 = (6.960, 5.381, 1.199)^T, p^W_5 = (7.768, 6.948, 3.091)^T, p^W_6 = (0.363, 6.348, 10.520)^T
led to the following image coordinates,
r1 = (467.000, 338.000)^T, r2 = (609.000, 353.000)^T, r3 = (72.000, 18.000)^T, r4 = (345.000, 222.000)^T, r5 = (307.000, 243.000)^T, r6 = (479.000, 78.000)^T.
[Figure 1: Depictions of the setup for Problem 1. (a) Stereo camera setup with test points. (b) Image coordinate projections of the test points.]
Complete the full camera calibration and provide both the Ψ matrix and the D = [R | T] matrix.
Note: To avoid having to enter the world points and the image coordinates, recycle the earlier Matlab file calib01.mat.
Also, to make your life easier, you may want to consider using the code stub calibrateFull.m (uploaded). Some parts of it can
be recycled from last week's homework.
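The provided calibrateFull.m stub is the intended route; as an illustration of the QR-based step, here is a hedged NumPy sketch. NumPy has no built-in RQ factorization, so one is built from np.linalg.qr via the usual row-reversal trick. The function names are my own, and the sign convention (positive diagonal for K) may need adjusting, since the homework's Ψ has a negative y focal length for Matlab's ij-coordinates.

```python
import numpy as np

def rq(A):
    """RQ factorization A = K Q (K upper triangular, Q orthogonal),
    built from NumPy's QR via the row-reversal trick."""
    P = np.flipud(np.eye(3))                 # reversal permutation
    Q, R = np.linalg.qr((P @ A).T)
    return P @ R.T @ P, P @ Q.T

def decompose_projection(M):
    """Split a 3x4 projection M ~ K [R | T] into intrinsics and extrinsics.

    Convention here: positive diagonal for K, with K[2,2] = 1. (The homework's
    Psi has a negative y focal length for Matlab ij-coordinates; flip the sign
    of K's second column and R's second row to match that convention.)
    """
    M = M / np.linalg.norm(M[2, :3])         # row 3 of R has unit norm
    K, R = rq(M[:, :3])
    S = np.diag(np.sign(np.diag(K)))         # resolve RQ sign ambiguity
    K, R = K @ S, S @ R                      # S @ S = I, so K R is unchanged
    T = np.linalg.inv(K) @ M[:, 3]
    if np.linalg.det(R) < 0:                 # keep R a proper rotation;
        R, T = -R, -T                        # K [R|T] = -M is the same camera
    return K / K[2, 2], R, T
```

A reasonable check of the decomposition is to rebuild K [R | T] and confirm it matches M up to scale.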
Problem 3. (15 pts) Due to weather issues and the fact that many groups did not respond in time, this week's group
assignment has been knocked a bit off course. So let's start a bit easy this time around. Do the following:
(a) Depending on your project, take either several pictures or a couple of videos. Basically, when starting on a vision-based
project, it is usually good to have a scenario in your head. Go out and get some imagery of the kind of problem your team
will be solving (depending on your project, it will be either video or images). Make it representative of the kind of images or image
sequences that you will be processing.
For the Kinect people, that might be tough, so your objective is to get the Kinect drivers loaded on your machine and running
with Matlab or OpenCV or whatever software you will be using. If you need a Kinect, you can check one out of my lab.
Get in touch with your contact to do so. Show that you have done so by giving a screen capture or figure output of the Kinect
working.
Any group that needs a webcam can contact me or your group contact to check one out.
(b) Explain what will be done with the images or video. What do you expect to process, and what will be the output of the
processing?
(c) If, and this is a big if, your contact has assigned something, go ahead and do it. At this point, it will most likely be some
reading related to your topic. Turn in a short summary of what it means.
I believe you should be able to upload the videos to the Facebook group. Your description of how the video will be processed as
per (b) can also be in the group site. Part (c), if assigned by Monday, should be turned in to the group contact via e-mail. Basically,
this part will not involve individual submissions in your homework document. It will be a group submission to the group contact.