
EECS 2030: Lab 2
 (about 3% of the final grade, may be done in groups of up to three students)
Introduction
For this assignment you will finish implementing a command-line application that takes one
argument – an image file name (JPG, PNG, or GIF files can be used) – reads the image from
the specified file, and processes it as soon as the user clicks a button in the application
window, performing one of two operations: blurring or edge detection. All operations are
based on convolving images with proper kernels [1].
The actual image processing is to be done in a dedicated class called ImageFilter. The
class should be a utility class (i.e., it is not meant to be used to create objects).
To run your code from the command line during development, you can add the image file
name as a program argument in your Run Configurations.
Currently, the main method always loads the flower.png image.
Please leave your class in the default package.
Blurring
Blurring is based on Gaussian blur [2]. The Gaussian kernel, based on the bell-shaped
mathematical Gaussian function, has a useful property of separability: it is possible to
replace a lengthier 2D convolution with a faster sequence of two 1D convolutions – one per
dimension. A sample way of calculating Gaussian kernel coefficients is available here [3].
As an example,
- for a kernel [0.06136 0.24477 0.38774 0.24477 0.06136]
- step 1: convolution in the x dimension. The value of a pixel in column j, pixel[j], is

pixel[j] = pixel[j − 2] * 0.06136 +
           pixel[j − 1] * 0.24477 +
           pixel[j + 0] * 0.38774 +
           pixel[j + 1] * 0.24477 +
           pixel[j + 2] * 0.06136

In case a pixel coordinate is outside of the image boundaries, replace the index with either 0
or the image size − 1 (whichever is appropriate). Note that both input and output pixels are
in the same row.
- step 2: starting with the result from step 1, do the same in the y dimension.

[1] https://en.wikipedia.org/wiki/Kernel_(image_processing)
[2] https://en.wikipedia.org/wiki/Gaussian_blur
[3] http://dev.theomader.com/gaussian-kernel-calculator/
For your application, you will have to write a method

private void blur(int[] imageData, int width, double sigma)
Based on the sigma value, first you will have to generate a proper 1D kernel of a small size
(you can assume sigma value will not exceed 5), and then apply the convolution to the image
data supplied. For testing purposes, the code contains three 1D kernels, for sigma values of
1.0, 3.0, and 5.0.
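One common way to generate the kernel from sigma is sketched below. The choice of radius (three sigma, rounded up) is a widespread convention rather than a requirement of this assignment, and the class name GaussianKernel is illustrative, not part of the starter code:

```java
// Sketch: build a normalized 1D Gaussian kernel for a given sigma.
// Radius of 3 * sigma is a common convention; with sigma <= 5 the
// kernel stays reasonably small (at most 31 taps).
final class GaussianKernel {
    static double[] make(double sigma) {
        int radius = (int) Math.ceil(3 * sigma);
        double[] kernel = new double[2 * radius + 1];
        double sum = 0.0;
        for (int i = -radius; i <= radius; i++) {
            kernel[i + radius] = Math.exp(-(i * i) / (2 * sigma * sigma));
            sum += kernel[i + radius];
        }
        for (int i = 0; i < kernel.length; i++) {
            kernel[i] /= sum;                 // coefficients now sum to 1
        }
        return kernel;
    }
}
```

For sigma = 1.0 this produces a 7-tap kernel whose central coefficients are close to the sample 5-tap kernel shown above.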
Note that you will probably need to use temporary arrays for intermediate steps.
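As a sketch of what one such pass might look like, the horizontal step with the clamping rule described above could be written as follows. This is not the starter code: it operates on a single unpacked colour channel held in a temporary double array, and Blur1D is an illustrative name:

```java
// Sketch: one horizontal blur pass over a single colour channel,
// stored row-major as channel[row * width + col]. Column indices
// outside the row are clamped to 0 or width - 1, as described above.
final class Blur1D {
    static double[] blurRows(double[] channel, int width, double[] kernel) {
        int height = channel.length / width;
        int radius = kernel.length / 2;       // e.g. 2 for a 5-tap kernel
        double[] out = new double[channel.length];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                double sum = 0.0;
                for (int k = -radius; k <= radius; k++) {
                    int c = Math.max(0, Math.min(width - 1, col + k)); // clamp
                    sum += channel[row * width + c] * kernel[k + radius];
                }
                out[row * width + col] = sum; // input and output share the row
            }
        }
        return out;
    }
}
```

The vertical pass is the same loop with the clamped offset applied to the row index instead of the column index.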
Edge Detection
For edge detection, you will have to implement the following method

private void edgeDetection(int[] imageData, int width)
Feel free to use any kernel you can find; you can hard-code it in your Java code. Link [1]
above contains some possible examples of edge detection kernels. Note that
kernels for edge detection are often 2D kernels, and you will need to perform a 2D
convolution (the process is not that different from the 1D case).
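A minimal sketch of such a 2D convolution is shown below, using one of the Laplacian-style kernels from link [1]. For clarity it works on a single grey channel rather than the packed RGB data of the starter code, and the class name Edges is illustrative:

```java
// Sketch: 2D convolution with a hard-coded 3x3 edge-detection kernel
// (a Laplacian-style kernel). Out-of-range indices are clamped to the
// image boundaries, the same rule as in blurring, and results are
// clipped back into the valid 0..255 channel range.
final class Edges {
    static final int[][] KERNEL = {
        {-1, -1, -1},
        {-1,  8, -1},
        {-1, -1, -1},
    };

    static int[] detect(int[] grey, int width) {
        int height = grey.length / width;
        int[] out = new int[grey.length];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int r = Math.max(0, Math.min(height - 1, row + dy));
                        int c = Math.max(0, Math.min(width - 1, col + dx));
                        sum += grey[r * width + c] * KERNEL[dy + 1][dx + 1];
                    }
                }
                out[row * width + col] = Math.max(0, Math.min(255, sum));
            }
        }
        return out;
    }
}
```

Uniform regions convolve to zero; the kernel responds only where neighbouring pixel values differ, which is exactly what marks an edge.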
Image Data Internal Format
The images are read for you into a BufferedImage object (called image). Among many
things, BufferedImage allows one to query image parameters, such as width and height,
as well as to read the pixels into a 1D array of ints (rgbData in the starter code).
The data for every pixel has the following format (you can assume that all images will use 24-
bit colour, with 8 bits per channel): 0x__RRGGBB [4], where RR, GG, and BB are the red,
green, and blue bytes respectively. The 8 most significant bits are not used. The individual
colour components can be extracted using bitwise operations, as you will notice in the code
fragments (look inside edgeDetection).
Remember that you can always calculate image pixel addresses in the 1D array using the
following formula:
image1DIndex = imageRowIndex * image.getWidth() + imageColumnIndex
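For illustration, the channel extraction, repacking, and 1D indexing can be sketched as follows (PixelOps is a hypothetical helper, not part of the starter code):

```java
// Sketch: extract and repack the colour channels of a packed
// 0x__RRGGBB pixel, and convert (row, column) to a 1D array index.
final class PixelOps {
    static int red(int pixel)   { return (pixel >> 16) & 0xFF; }
    static int green(int pixel) { return (pixel >> 8)  & 0xFF; }
    static int blue(int pixel)  { return pixel & 0xFF; }

    // reassemble a pixel from its three 8-bit channels
    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // row-major 1D addressing, as in the formula above
    static int index(int row, int col, int width) {
        return row * width + col;
    }
}
```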
Starter Code
You are expected to work with the starter code supplied (find the TODO sections). Feel free
to create extra private methods if needed. You may modify the code as you see fit – as long
as the source and the result images are shown in the same window at the end.
The application displays its output to the right of the original image window. For simplicity, the
application does not display larger images properly (no scaling is possible). Your methods,
however, should not assume any limits on image sizes. E.g., if you need to create a
temporary array in your code, do not assume the images are no larger than N×M, where N
and M are “huge” numbers. Instead, read the image dimensions from the image object (or
equivalent) and create that temporary array of the exact size required.

[4] 0x indicates hexadecimal representation, where each symbol represents 4 bits and every
pair of symbols represents one byte.
You don’t have to understand the Graphical User Interface (GUI) code written using Java
Swing; however, you might find it useful in the future.
An example input and the output the application produces for blurring (sigma = 1):
another example (sigma = 3):
sigma = 5:
An example input and the output the application produces for edge detection:
Grading
The assignment will be graded using the Common Grading Scheme for Undergraduate
Faculties. We look at whether the code passes the unit tests, and whether it conforms to the
code style rules.
Submission:
You have to submit to eClass the following files (and no more):
ImageFilter.java
GraphicsApp.java
group.txt (optional, contains student numbers and names of the students in the group)
