EE 569 Digital Image Processing: Homework #4

General Instructions:
1. Read the Homework Guidelines and MATLAB Function Guidelines for information about homework
programming, write-up, and submission.
2. If you make any assumptions about a problem, please clearly state them in your report.
3. You need to understand the USC policy on academic integrity and penalties for cheating and
plagiarism. These rules will be strictly enforced.
In the first two problems, you will apply Laws-filter-based texture analysis techniques to texture
classification and segmentation. During this process, you will extract per-pixel filter responses and
convert them into discriminative feature vectors.
Problem 1: Texture Analysis (35%)
In this problem, you will implement texture analysis and segmentation algorithms based on the 5x5 Laws
Filters constructed by the tensor product of the five 1D kernels in Table 1.
a) Texture Classification – Feature Extraction (15%)
Forty-eight images of four texture types are given for the texture classification task. They are split into
two sets: 36 training samples and 12 testing samples. The ground-truth labels of the 36 training samples
are known, while the categories of the testing samples are for you to determine. Samples of these images
are shown in Fig. 1.
Figure 1: Bark, Brick, Knit, Stones Texture.
Please follow the steps below to extract features from all of the provided texture images and analyze them:
1. Filter bank response computation: Use the twenty-five 5x5 Laws Filters in Table 1 to extract
the response vectors from each pixel in the image (use appropriate boundary extensions).
2. Energy feature averaging: Compute the energy feature of each element of the response vector.
Average the energy feature vectors of all image pixels, leading to a 25-D feature vector for each
image. Which feature dimension has the strongest discriminant power? Which has the weakest?
Please justify your answer.
3. Feature reduction: Reduce the feature dimension from 25 to 3 using the principal component
analysis (PCA). Plot the reduced 3-D feature vector in the feature space.
Please conduct texture classification using the nearest-neighbor rule based on the Mahalanobis distance.
Report your results, compare them with your visual observations, and give the error rate.
Note: A built-in PCA function can be used. (A sketch of the full pipeline follows.)
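The Python sketch below walks through steps 1 to 3 and the Mahalanobis nearest-neighbor rule. It assumes the five standard 1-D Laws kernels (L5, E5, S5, W5, R5) for Table 1, which is not reproduced here, and an assumed class ordering for y_train; file names follow the appendix.

    import numpy as np
    from PIL import Image
    from scipy.signal import convolve2d
    from sklearn.decomposition import PCA

    # Standard 1-D Laws kernels (Level, Edge, Spot, Wave, Ripple); assumed
    # to match the five kernels of Table 1.
    KERNELS = [
        np.array([ 1,  4, 6,  4,  1], float),  # L5
        np.array([-1, -2, 0,  2,  1], float),  # E5
        np.array([-1,  0, 2,  0, -1], float),  # S5
        np.array([-1,  2, 0, -2,  1], float),  # W5
        np.array([ 1, -4, 6, -4,  1], float),  # R5
    ]

    def laws_features(img):
        """25-D feature: average squared response of each 5x5 Laws filter."""
        img = img.astype(float)
        feats = []
        for a in KERNELS:
            for b in KERNELS:
                resp = convolve2d(img, np.outer(a, b), mode="same",
                                  boundary="symm")  # symmetric boundary extension
                feats.append(np.mean(resp ** 2))    # averaged energy
        return np.array(feats)

    # Load the 36 training and 12 testing images (file names per the appendix).
    X_train = np.stack([laws_features(np.asarray(Image.open(f"Texture_{i}.png")))
                        for i in range(1, 37)])
    X_test = np.stack([laws_features(np.asarray(Image.open(f"Texture_{i}.png")))
                       for i in range(37, 49)])
    y_train = np.repeat([0, 1, 2, 3], 9)  # assumed label layout: 9 images per class

    # Step 3: PCA reduction from 25-D to 3-D (a built-in PCA is allowed).
    pca = PCA(n_components=3).fit(X_train)
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

    # Nearest-neighbor rule under the Mahalanobis distance induced by the
    # pooled covariance of the training features.
    VI = np.linalg.inv(np.cov(Z_train, rowvar=False))
    def nn_predict(z):
        d2 = [(z - t) @ VI @ (z - t) for t in Z_train]
        return y_train[int(np.argmin(d2))]

    y_pred = np.array([nn_predict(z) for z in Z_test])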
b) Advanced Texture Classification – Classifier Exploration (20%)
Based on the 25-D and 3-D feature vectors obtained above, conduct both unsupervised and supervised
learning. Please follow the steps below.
1. Unsupervised: K-means clustering is a kind of unsupervised classifier that categorizes the textures
without the help of ground-truth labels. Use the K-means algorithm to cluster the test images based
on the 25-D and 3-D features. Discuss the effect of feature dimension reduction on K-means.
Report your results, compare them with your visual observations, and give the error rate.
2. Supervised: Use the 3-D features of the training images to train a Random Forest (RF) and a
Support Vector Machine (SVM), respectively. Then predict the labels of the test set and give the
error rate. Compare the two kinds of classifiers.
Note: Built-in K-means, RF, and SVM functions can be used; a short sketch of both parts follows.
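A sketch of both parts using scikit-learn built-ins, continuing from the variables of the previous sketch (X_test, Z_train, Z_test, y_train); the hyperparameters shown are assumptions to be tuned.

    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    # Unsupervised: cluster the 12 test images into 4 groups using both the
    # 25-D and the 3-D features. K-means cluster indices are arbitrary, so
    # computing an error rate requires matching clusters to ground-truth
    # classes (e.g., by trying all label permutations).
    labels_25d = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_test)
    labels_3d = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z_test)

    # Supervised: train RF and SVM on the 3-D training features, then predict.
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Z_train, y_train)
    svm = SVC(kernel="rbf").fit(Z_train, y_train)
    pred_rf, pred_svm = rf.predict(Z_test), svm.predict(Z_test)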
Problem 2: Texture Segmentation (30%)
a) Basic Texture Segmentation (20%)
Segment the texture mosaic in Fig. 2 by following the steps below:
1. Filter bank response computation: Use the twenty-five 5x5 Laws Filters in Table 1 to extract
the response vectors from each pixel in the image (use appropriate boundary extensions).
2. Energy feature computation: Use a window approach to compute the energy measure for each
pixel based on the results from step 1. You may try a couple of different window sizes. After this
step, you will obtain a 25-D energy feature vector for each pixel.
3. Energy feature normalization: All kernels have zero mean except for L5^T L5. In fact, the
feature extracted by the L5^T L5 filter is not a useful feature for texture classification and
segmentation. Use its energy to normalize all other features at each pixel.
4. Segmentation: Discard the feature associated with L5^T L5. Use the K-means algorithm to perform
segmentation on the composite texture images given in Fig. 2 based on the 24-D energy feature
vectors.
If there are K textures in the image, your output image will have K gray levels, with each level
representing one type of texture. For example, you can use (0, 63, 127, 191, 255) to denote five
segmented regions in the output for five textures. A sketch of the full pipeline is given after Fig. 2.
Figure 2: Composite Texture Images.
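A sketch of steps 1 to 4, reusing the KERNELS list from the Problem 1 sketch; the window size win and the cluster count k are assumed values to be tuned.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from scipy.signal import convolve2d
    from sklearn.cluster import KMeans

    def segment_texture(img, win=15, k=5):
        """Per-pixel Laws energy features followed by K-means segmentation."""
        img = img.astype(float)
        maps, l5l5 = [], None
        for i, a in enumerate(KERNELS):        # KERNELS from the Problem 1 sketch
            for j, b in enumerate(KERNELS):
                resp = convolve2d(img, np.outer(a, b), mode="same",
                                  boundary="symm")
                energy = uniform_filter(resp ** 2, size=win)  # windowed energy
                if i == 0 and j == 0:
                    l5l5 = energy              # L5^T L5: kept only for normalization
                else:
                    maps.append(energy)
        # Normalize the 24 remaining energies by the L5^T L5 energy (step 3),
        # then cluster the per-pixel 24-D vectors (step 4).
        feats = np.stack([m / (l5l5 + 1e-12) for m in maps], axis=-1)
        h, w, d = feats.shape
        labels = (KMeans(n_clusters=k, n_init=10, random_state=0)
                  .fit_predict(feats.reshape(-1, d)).reshape(h, w))
        # Map cluster indices to k evenly spaced gray levels for display,
        # e.g. (0, 63, 127, 191, 255) for k = 5.
        return np.linspace(0, 255, k).astype(np.uint8)[labels]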
b) Advanced Texture Segmentation (10%)
You may not get good segmentation results for the complicated texture mosaic image in Fig. 2. Please
develop some techniques to improve your segmentation result. Several ideas are sketched below.
1. Use the PCA for feature reduction. Use the dimension reduced features to do texture segmentation
of Fig. 2.
2. Develop a post-processing technique to merge small holes into their surrounding regions (one
possible approach is sketched after this list).
3. Enhance the boundary of two adjacent regions by focusing on the texture properties in these two
regions only.
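For idea 2, one possible approach (a sketch, not the required method) is a sliding-window majority filter: each pixel is reassigned to the most frequent label in its neighborhood, which absorbs holes smaller than roughly half the window. The window size is an assumed parameter.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def majority_filter(label_map, k, win=21):
        """Replace each pixel's label with the most frequent label nearby.

        Expects cluster indices in 0..k-1 (not the display gray levels);
        win is a tunable, assumed window size.
        """
        votes = np.stack([uniform_filter((label_map == c).astype(float), size=win)
                          for c in range(k)])   # per-label vote fraction maps
        return votes.argmax(axis=0)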
Problem 3: SIFT and Image Matching (35%)
Image feature extractors are useful for representing image information in a low-dimensional form.
(a) Salient Point Descriptor (Basic: 10%)
SIFT is an effective tool for extracting salient points in an image. Read the paper in [1] and answer the
following questions.
1. According to the paper's abstract, to which geometric modifications is SIFT robust?
2. How does SIFT achieve its robustness to each of them?
3. How does SIFT enhance its robustness to illumination changes?
4. What are the advantages of using the Difference of Gaussians (DoG) instead of the Laplacian of
Gaussians (LoG) in SIFT?
5. What is the size of SIFT's output vector in the original paper?
(b) Image Matching (Basic: 15%)
You can apply SIFT to image matching. Extract and show SIFT features.
1. Find the key-points of the Dog_1 and Dog_3 images in Fig. 3. Pick the key-point with the largest
scale in Dog_3 and find its closest neighboring key-point in Dog_1: treat Dog_3's key-point as the
query and perform a nearest-neighbor search over Dog_1's key-points, each represented by its SIFT
feature vector. Discuss your results, especially the orientation of each key-point.
2. Show the corresponding SIFT pairs between Dog_1 and Dog_3 in Fig. 3. The matching may not
work well between different objects, or between views of the same object with a large viewing-angle
difference. Perform the same job with the following three image pairs: 1) Dog_3 and Dog_2,
2) Dog_3 and Cat, 3) Dog_1 and Cat. Show and comment on the matching results. Explain why
matching works or fails in each case.
You are allowed to use an open-source library (OpenCV or VLFeat) to extract features; an OpenCV-based
sketch follows.
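A sketch using OpenCV (one of the allowed libraries); the 0.75 ratio-test threshold is an assumed value from common practice.

    import cv2

    img1 = cv2.imread("Dog_1.png", cv2.IMREAD_GRAYSCALE)
    img3 = cv2.imread("Dog_3.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp3, des3 = sift.detectAndCompute(img3, None)

    # Key-point with the largest scale in Dog_3 (KeyPoint.size is the scale).
    q = max(range(len(kp3)), key=lambda i: kp3[i].size)

    # Its nearest neighbor in Dog_1 under the L2 distance on descriptors.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    m = bf.match(des3[q:q + 1], des1)[0]
    print("query orientation:", kp3[q].angle,
          "matched orientation:", kp1[m.trainIdx].angle)

    # Full matching with Lowe's ratio test, then draw the surviving pairs.
    good = [a for a, b in bf.knnMatch(des3, des1, k=2)
            if a.distance < 0.75 * b.distance]
    vis = cv2.drawMatches(img3, kp3, img1, kp1, good, None)
    cv2.imwrite("matches_dog3_dog1.png", vis)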
(a) Dog_1 (b) Dog_2
(c) Dog_3 (d) Cat
Figure 3: Images for image matching.
(c) Bag of Words (10%)
Apply K-means clustering to the extracted SIFT features to form a codebook. The codebook contains 8
bins, where each bin is characterized by the centroid of a cluster of SIFT feature vectors. Each image can
then be represented as a histogram over these codewords. This representation is called the Bag of Words
(BoW). Create codewords for all four images and match Dog_3's codewords against those of the other
images. Show the results and discuss your observations. A sketch is given below.
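A sketch of the BoW pipeline; the histogram-intersection score used to compare images is one reasonable choice among several.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    names = ["Dog_1", "Dog_2", "Dog_3", "Cat"]
    sift = cv2.SIFT_create()
    descs = {n: sift.detectAndCompute(
                 cv2.imread(f"{n}.png", cv2.IMREAD_GRAYSCALE), None)[1]
             for n in names}

    # 8-bin codebook: cluster the pooled SIFT descriptors of all four images.
    codebook = KMeans(n_clusters=8, n_init=10,
                      random_state=0).fit(np.vstack(list(descs.values())))

    def bow_histogram(d):
        """Normalized 8-bin histogram of codeword assignments."""
        h = np.bincount(codebook.predict(d), minlength=8).astype(float)
        return h / h.sum()

    hists = {n: bow_histogram(descs[n]) for n in names}
    for n in ["Dog_1", "Dog_2", "Cat"]:   # match Dog_3 against the others
        print(n, np.minimum(hists["Dog_3"], hists[n]).sum())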
Appendix:
Problem 1: Texture Analysis
Texture_1 to 48.png 128x128 greyscale
Problem 2: Texture Segmentation
Composite.png 575x360 greyscale
Problem 3: Image Feature Extractors
Dog_1.png 640x420 Color (RGB)
Dog_2.png 640x420 Color (RGB)
Dog_3.png 640x420 Color (RGB)
Cat.png 640x420 Color (RGB)
Reference Images
Images in this homework are taken from Google images [2].
References
[1] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of
Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[2] [Online] http://images.google.com/
