Final Projects
Final projects are done individually. A list of topics is below. These topics were chosen because they are directly related to course material, are interesting, and are of appropriate scope. Most are linked to a single research paper. Some projects require a
RAW-capable camera. You can use your own camera for these if you have one, or
you can contact Prof. Zickler about borrowing one (a limited number are available).
Deliverables. You are required to submit a single ZIP archive that contains code
and a PDF report. The report will include (at least) the sections: Introduction,
Methods, Results, Conclusion, and References.
Evaluation. The project is evaluated on your demonstrated attainment of course-relevant knowledge (60%), research skills (20%), and communication skills (20%). A good final project is
one that implements the algorithm(s) in the topic's associated paper(s),
demonstrates results that are equivalent or very close to those in the original
paper(s), and extends these results by applying the algorithm(s) to new data in an
interesting manner. A good report is one that clearly and concisely describes, in your
own words: the motivation for, and principal contributions of, the original work; the
intuition and details of the techniques that you implemented; the experiments you
performed; and your critical analysis of the experimental results.
Timeline:
Fri, Oct 20, 5:00pm: Individual preferences submitted for final project topics. Topics are assigned by the teaching staff in a way that balances these individual preferences with the diversity of topics across the class.
Mon, Dec 11, 5:00pm: Final report and code submitted through the course website.
Collaboration and third-party code. Final projects are done independently. That
said, we encourage you to keep using the online discussion board to help each other
out and to identify and fix bugs. If you would like to use OpenCV or other third-party libraries, you must consult the teaching staff first.
Some online resources:
• CVonline: Community-contributed tutorials and information on many, many topics.
• Matlab Image Processing Toolbox: Useful documentation.
Topics
last update: 10/14
Radiometry
1. Recover shape and lighting via photometric stereo with unknown light
directions.
Paper: A. Yuille and D. Snow. "Shape and Albedo from Multiple Images using Integrability." CVPR 1997.
A possible extension is to resolve the affine ("GBR") shape ambiguity using non-Lambertian reflectance, based on A. Georghiades. "Incorporating the Torrance and Sparrow Model of Reflectance in Uncalibrated Photometric Stereo." ICCV 2003.
Possible datasets: http://vision.seas.harvard.edu/qsfs/Data.html, https://sites.google.com/site/photometricstereodata/
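If you take this topic, it may help to first implement the classical calibrated case with known light directions. A minimal sketch in Python/NumPy is below; the array shapes and variable names are illustrative assumptions of this sketch, not from the paper.

import numpy as np

def calibrated_photometric_stereo(I, L):
    """Recover per-pixel albedo and surface normals from Lambertian images.

    I : (num_images, num_pixels) stack of grayscale intensities
    L : (num_images, 3) known unit light directions
    Returns (albedo, normals) with shapes (num_pixels,) and (num_pixels, 3).
    """
    # Lambertian model: I = L @ G, where each column of G is albedo * normal.
    G, _, _, _ = np.linalg.lstsq(L, I, rcond=None)   # (3, num_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T
    return albedo, normals

In the paper's uncalibrated setting, the image matrix is instead factored (e.g., by SVD) into lights and pseudo-normals up to a linear ambiguity, which the integrability constraint reduces to the GBR family.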
Multi-view geometry and model fitting
2. Globally optimal model fitting with geometric error.
Fredrik Kahl, Sameer Agarwal, Manmohan Chandraker, David Kriegman, Serge
Belongie, “Practical global optimization for multiview geometry.” IJCV 2008.
3. RANSAC in the presence of extreme outliers.
Litman et al., “Inverting RANSAC: Global Model Detection via Inlier Rate Estimation.”
CVPR 2015.
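A plain RANSAC baseline is useful for comparison with the paper's inlier-rate approach. Below is a minimal sketch for 2D line fitting; the threshold, iteration count, and line parameterization are illustrative choices, not from the paper.

import numpy as np

def ransac_line(points, n_iters=1000, inlier_thresh=1.0, rng=None):
    """Fit a 2D line ax + by + c = 0 (with a^2 + b^2 = 1) to points by RANSAC.

    points : (N, 2) array. Returns (best_model, inlier_mask).
    """
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        normal = np.array([-d[1], d[0]])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate sample, skip
            continue
        normal /= norm
        c = -normal @ p1
        dist = np.abs(points @ normal + c)     # point-to-line distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal[0], normal[1], c), inliers
    return best_model, best_inliers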
4. Rotation-based camera calibration.
R. I. Hartley. "Self-calibration from multiple views with a rotating camera." ECCV.
1994.
See also Section 19.6 in Hartley and Zisserman.
5. Projective factorization and 3D reconstruction.
P. Sturm, and W. Triggs. "A factorization based algorithm for multi-image projective
structure and motion." ECCV. 1996.
See also Section 18.4 in Hartley and Zisserman. A possible extension is to upgrade from a projective to a metric reconstruction using the constraints of square pixels and zero skew, based on M. Pollefeys, R. Koch, and L. Van Gool, "Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters." IJCV 1999.
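The numerical core of the Sturm-Triggs method is a rank-4 factorization of the depth-scaled measurement matrix. A minimal sketch of that single step is below, with all projective depths initialized to one (the paper estimates and refines them); the shapes and names are assumptions of this sketch.

import numpy as np

def projective_factorization_step(x, depths):
    """One rank-4 factorization of the scaled measurement matrix.

    x      : (m, n, 3) homogeneous image points for m views and n points
    depths : (m, n) current projective depth estimates (e.g., all ones)
    Returns (P, X): camera matrices (m, 3, 4) and points (4, n),
    defined only up to a common projective transformation.
    """
    m, n, _ = x.shape
    W = (depths[..., None] * x).transpose(0, 2, 1).reshape(3 * m, n)  # 3m x n
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U4 = U[:, :4] * s[:4]              # absorb singular values into the cameras
    P = U4.reshape(m, 3, 4)
    X = Vt[:4, :]
    return P, X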
6. Factorization with multiple rigid motions using gPCA.
Vidal & Hartley, “Motion segmentation with missing data using power factorization
and gPCA.” CVPR 2004.
Possible dataset: http://www.vision.jhu.edu/data/
7. Factorization with multiple rigid motions using subspace clustering.
E. Elhamifar and R. Vidal. Sparse Subspace Clustering. CVPR 2009.
Possible dataset: http://www.vision.jhu.edu/data/
8. Factorization for non-rigid motion using “basis shapes.”
Xiao J., Chai J., Kanade T., "A Closed-Form Solution to Non-rigid Shape and Motion
Recovery." ECCV 2004.
Edges and lines
9. Use line segments to recover the layout of an indoor room.
D. C. Lee, M. Hebert and T. Kanade, "Geometric reasoning for single image
structure recovery," CVPR 2009.
See also the discussion of estimating vanishing points from edgels in Szeliski
Section 4.3.3.
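A building block you will need is estimating a vanishing point from a group of roughly parallel line segments. Below is a minimal least-squares sketch in homogeneous coordinates, with no robust grouping or weighting (which the paper and Szeliski discuss); the segment format is an assumption of this sketch.

import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point for a set of 2D line segments.

    segments : (N, 4) array of endpoints (x1, y1, x2, y2).
    Returns the vanishing point in homogeneous coordinates (3,).
    """
    p1 = np.column_stack([segments[:, 0:2], np.ones(len(segments))])
    p2 = np.column_stack([segments[:, 2:4], np.ones(len(segments))])
    lines = np.cross(p1, p2)              # homogeneous line through each segment
    lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    # The vanishing point v minimizes sum_i (l_i . v)^2 subject to |v| = 1,
    # i.e., it is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(lines)
    return Vt[-1]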
Color
10. Detect and remove shadows from images using color information.
G. D. Finlayson, S. D. Hordley, Cheng Lu and M. S. Drew, "On the removal of
shadows from images," in IEEE Transactions on Pattern Analysis and Machine
Intelligence 28(1): 59-68, 2006.
Dataset to consider: http://dhoiem.cs.illinois.edu/
11. Determine object color from image color using the retinex algorithm.
E. H. Land and J. J. McCann. "Lightness and retinex theory." Journal of the Optical Society of America, 61(1):1–11, 1971.
See also the important adaptations described in:
Roger Grosse, Micah K. Johnson, Edward H. Adelson, and William T. Freeman.
"Ground truth dataset and baseline evaluations for intrinsic image algorithms." ICCV
2009.
Dataset to consider: http://www.cs.toronto.edu/~rgrosse/intrinsic/
Classifiers and recognition
12. PCA versus linear discriminant analysis in the context of face recognition.
P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces:
recognition using class specific linear projection," in IEEE Transactions on Pattern
Analysis and Machine Intelligence 19(7): 711-720, Jul 1997.
Dataset to consider: Yale Face Database B
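For the PCA half of the comparison, a minimal eigenfaces sketch is below (PCA by SVD of mean-centered, vectorized face images, then nearest-neighbor matching in the subspace); shapes and names are illustrative assumptions.

import numpy as np

def eigenfaces_fit(X_train, num_components=50):
    """X_train: (num_images, num_pixels) vectorized faces. Returns (mean, basis)."""
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:num_components]          # rows are the eigenfaces

def eigenfaces_classify(x, mean, basis, X_train, y_train):
    """Nearest-neighbor label for one face vector x in the PCA subspace."""
    train_coords = (X_train - mean) @ basis.T
    coords = (x - mean) @ basis.T
    return y_train[np.argmin(np.linalg.norm(train_coords - coords, axis=1))]

The Fisherfaces method adds a class-aware LDA projection on top of a PCA step like this one.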
13. Set-based face recognition.
T. Kim, J. Kittler, and R. Cipolla, "Discriminative learning and recognition of image
set classes using canonical correlations." IEEE Transactions on Pattern Analysis and
Machine Intelligence. 2007.
Segmentation
14. The mean shift segmentation algorithm.
D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space
analysis," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
24, no. 5, pp. 603-619, May 2002.
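The heart of the method is the mean-shift iteration that moves each feature vector to a mode of the underlying density. Below is a minimal sketch with a flat kernel; the paper uses a joint spatial-range kernel and then merges nearby modes into segments, and the bandwidth and tolerance here are illustrative.

import numpy as np

def mean_shift_modes(features, bandwidth, n_iters=50, tol=1e-3):
    """Move every feature vector to a local density mode (flat kernel).

    features : (N, D) array, e.g., concatenated (x, y, L, u, v) per pixel.
    Returns an (N, D) array of converged mode locations; points whose modes
    end up close together can then be merged into segments.
    """
    modes = features.copy()
    for i in range(len(modes)):
        m = modes[i]
        for _ in range(n_iters):
            neighbors = features[np.linalg.norm(features - m, axis=1) < bandwidth]
            new_m = neighbors.mean(axis=0)
            if np.linalg.norm(new_m - m) < tol:
                break
            m = new_m
        modes[i] = m
    return modes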
15. Interactive segmentation using graph-cuts.
C. Rother, V. Kolmogorov, and A. Blake, "GrabCut - Interactive Foreground
Extraction using Iterated Graph Cuts." ACM Transactions on Graphics. 2004.
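OpenCV ships an implementation of GrabCut, which is mainly useful here for generating reference results to compare your own implementation against (and remember the policy above about consulting the staff before relying on third-party libraries). A minimal usage sketch with a placeholder image and rectangle:

import cv2
import numpy as np

img = cv2.imread("input.jpg")                    # BGR image; placeholder file
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, 300, 400)                        # (x, y, w, h) around the object; placeholder
bgd_model = np.zeros((1, 65), np.float64)        # internal GMM state
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (possibly) foreground form the segmentation.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
cv2.imwrite("grabcut_result.png", img * fg[:, :, None])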
Stereo
16. Fast, dense depth maps from a stereo pair of images.
H. Hirschmuller, "Stereo Processing by Semiglobal Matching and Mutual
Information," in IEEE Transactions on Pattern Analysis and Machine Intelligence
30(2):328-341. 2008.
See also the discussion in Szeliski Section 11.5.1.
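OpenCV's StereoSGBM is an implementation in the spirit of Hirschmuller's method (it differs in details such as the matching cost), and is handy for sanity-checking your own results on rectified pairs. A minimal usage sketch with placeholder parameters and file names:

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # rectified pair; placeholders
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,               # must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size ** 2,          # smoothness penalties for small / large disparity jumps
    P2=32 * block_size ** 2,
    uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(float) / 16.0   # fixed-point to pixels
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)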
Editing photographs
17. Blend two different images without seams using Poisson image editing.
P. Perez, M. Gangnet, and A. Blake, "Poisson Image Editing." ACM Transactions on
Graphics. 2003.
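The discrete form of the paper's guided interpolation reduces to solving a Poisson equation over the cloned region. OpenCV exposes an implementation as seamlessClone, which you can use to generate reference outputs for comparison with your own solver; the file names and placement point below are placeholders.

import cv2
import numpy as np

src = cv2.imread("source.jpg")                    # patch to insert; placeholder files
dst = cv2.imread("target.jpg")
mask = 255 * np.ones(src.shape[:2], np.uint8)     # clone the whole source patch
center = (dst.shape[1] // 2, dst.shape[0] // 2)   # (x, y) placement in the target

blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)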
18. Hallucinate higher-resolution photographs using super-resolution.
Daniel Glasner, Shai Bagon, Michal Irani. "Super-Resolution from a Single Image."
ICCV 2009.
19. Fill holes in a photograph.
A. Criminisi, P. Perez, and, K. Toyama. "Region filling and object removal by
exemplar-based inpainting." IEEE Transactions on Image Processing. 2004.
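Note that OpenCV's built-in inpainting (cv2.inpaint) implements diffusion-style methods (Telea and Navier-Stokes), not the exemplar-based algorithm of this paper, so it is mainly useful as a baseline for comparison. A minimal usage sketch with placeholder file names:

import cv2

img = cv2.imread("photo.jpg")
mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)   # nonzero marks the hole
baseline = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)    # radius 3, Telea's method
cv2.imwrite("baseline_inpaint.png", baseline)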
20. Smooth while preserving edges.
Li Xu, Cewu Lu, Yi Xu, and Jiaya Jia. "Image smoothing via L0 gradient minimization." SIGGRAPH Asia 2011.
