
ECE 311 Lab 2: LSI Systems

In this lab, we will explore Linear Shift-Invariant (LSI) systems and their properties with applications involving toy signals, image filtering, stock data, and even an example of a simple non-linear system. Let's get started!

#import necessary libraries for this lab
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt

from skimage.io import imread

%matplotlib inline
Getting Started with Convolution
Recall from ECE 210 that convolution describes how an LTI system processes any continuous-time input signal. Given an input  x(t)  and an LTI system's impulse response  h(t) , the system output  y(t)  is given by

$$y(t) = x(t) * h(t).$$
Recall that convolution for continuous signals is defined as

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau.$$
You have learned in ECE 310 that discrete-time LSI systems also have an impulse response  h[n] , which is the system response to a unit Kronecker delta  δ[n]  input. Thus we can express the system output given an input signal via discrete-time convolution.

$$y[n] = x[n] * h[n]$$

$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k] = \sum_{k=-\infty}^{\infty} x[n-k]\,h[k]$$
Like the width properties of continuous-time convolution, if  x  is of length  N  and  h  is of length  M , the result  y  will be of length  N+M−1 . It is important to note that every LSI system can be represented by a convolution, every system that can be expressed as a convolution has an impulse response, and any system with an impulse response must be LSI. This means the relationship between LSI systems, convolution, and impulse responses is an "if and only if" relationship; they all imply one another! This is something handy to keep in mind whenever you want to identify and describe an LSI system.
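
A quick numerical check of the length property (a minimal sketch with randomly generated signals, not part of the lab handout):

x_len = np.random.randn(8)  #N = 8
h_len = np.random.randn(3)  #M = 3
print(len(signal.convolve(x_len, h_len)))  #10 = 8 + 3 - 1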

The key function we will use to perform convolutions is the  convolve()  function in the  scipy.signal  module. The usage of this function for an example system is as follows:

$$x[n] = \delta[n] + 2\delta[n-2] + 3\delta[n-4]$$

$$y[n] = x[n] + 3x[n-1]$$
x = np.array([1, 0, 2, 0, 3]) #input signal
h = np.array([1, 3]) #filter/system's impulse response
y = signal.convolve(x,h) #signal.convolve(in1,in2)

print(y) #verify this result by hand!
[1 3 2 6 3 9]
Note how we extracted the system's impulse response from the system's Linear Constant Coefficient Difference Equation (LCCDE). The first term passes the current input straight through (coefficient one), and the second term multiplies the previous input by three. Intuitively, when we flip and shift our filter  h  for the convolution, we apply this system to the input signal at each shift step. The  signal.convolve()  function assumes the arrays that represent our signals begin at index zero.
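
To make the flip-and-shift intuition concrete, here is a minimal sketch (reusing the x and y arrays from the cell above) that applies the LCCDE y[n] = x[n] + 3x[n-1] sample by sample and confirms it matches signal.convolve:

#evaluate the LCCDE directly; x is padded with one trailing zero so the
#loop also produces the final output sample, and x_pad[-1] (the pad) is
#conveniently zero when n = 0
x_pad = np.concatenate((x, [0]))
y_direct = np.array([x_pad[n] + 3*x_pad[n-1] for n in range(len(x_pad))])
print(np.array_equal(y_direct, y))  #True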

Exercise 1: Implementing LSI Systems
In the below code cell, implement the following LSI systems and plot the system response (using  plt.stem() ) to each of the listed input signals. Comment on the results in the following Markdown cell. Remember to determine the LCCDE for each system to infer its impulse response!

System A: $y_a[n] = -x[n] + 2x[n-1] - x[n-2]$

System B: $y_b[n] = \frac{1}{4}x[n] + \frac{1}{2}x[n-1] + \frac{1}{4}x[n-2]$

$x_1[n] = u[n] - u[n-7],\quad 0 \le n < 10$

$x_2[n] = \sin\left(\frac{\pi}{20}n\right),\quad 0 \le n < 40$
#create input signals here:
x1 = np.array([1,1,1,1,1,1,1,0,0,0])  #u[n] - u[n-7]: ones for 0 <= n <= 6
x2 = np.array([np.sin((np.pi/20)*n) for n in range(40)])  #0 <= n < 40
#Hint: Use np.sin and np.pi!

#System A
ha = np.array([-1, 2, -1])
#plot result for x1
ya1 = signal.convolve(x1, ha)
plt.figure()
plt.stem(ya1)
plt.title("Rectangle pulse convolved with ha")
#plot result for x2
ya2 = signal.convolve(x2, ha)
plt.figure()
plt.stem(ya2)
plt.title("Sine convolved with ha")

#System B
hb = np.array([1./4, 1./2, 1./4])
#plot result for x1
yb1 = signal.convolve(x1, hb)
plt.figure()
plt.stem(yb1)
plt.title("Rectangle pulse convolved with hb")
#plot result for x2
yb2 = signal.convolve(x2, hb)
plt.figure()
plt.stem(yb2)
plt.title("Sine convolved with hb")
Comments here: (Consider how the different filters affect the flatter and faster moving parts of the input signals. What do you think each filter is doing?)

The ha filter detects where the signal changes most rapidly. This is evident in the first plot: the output spikes where the rectangle pulse jumps from 0 to 1 at the start and from 1 to 0 at the end, and it is nearly zero over the flat stretches. In other words, ha behaves like a high-pass (differencing) filter.

The hb filter acts as a local averager, i.e., a smoothing (low-pass) filter. This is especially evident in the third plot, where the sharp corners of the rectangle pulse are rounded off, and in the fourth plot, where the slowly varying sine passes through nearly unchanged, since averaging three neighboring samples barely alters a smooth signal.
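
One way to back up these interpretations (a sketch assuming the ha and hb arrays from the cell above) is to look at each filter's magnitude response:

#ha should pass high frequencies (a differencing filter), while hb
#should pass low frequencies (an averaging filter)
w, Ha = signal.freqz(ha)
w, Hb = signal.freqz(hb)
plt.figure()
plt.plot(w, np.abs(Ha), label='|Ha(w)|')
plt.plot(w, np.abs(Hb), label='|Hb(w)|')
plt.xlabel('Frequency (rad/sample)')
plt.ylabel('Magnitude')
plt.legend()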

Exercise 2: Bitcoin Pricing Correction
Signals need not be physical: one example of non-physical 1D information is stock or cryptocurrency pricing data. This data is notoriously noisy and can jump around unpredictably.

Remember that the systems we work with can be either causal or non-causal. A causal system uses only present and past values to compute its current output, while a non-causal system can also leverage future information. In this exercise, we will compare causal and non-causal versions of a filter to smooth a day's worth of bitcoin pricing data. We have provided 24 hours of prices with updates every minute (1440 points), from Christmas Eve (12/24/2017).

bitcoin_data = np.load('bitcoin-christmas.npy', allow_pickle=True)
n_points = len(bitcoin_data)
plt.figure(figsize = (8,6))
plt.plot(range(n_points), bitcoin_data)
plt.title('Bitcoin Prices Every Minute 12/24/2017')
plt.xlabel('Minute from Midnight')
plt.ylabel('Price (USD)')

Pretty noisy, right? Maybe a lot of last-minute Christmas gifts made the price even more unpredictable!

In this exercise, you will implement two length-51 moving average filters on this Bitcoin price data. The first will be causal and the second will be non-causal. Mathematically, we can represent these systems as follows:

$$y_1[n] = \frac{1}{51}\sum_{i=0}^{50} x[n-i]$$

$$y_2[n] = \frac{1}{51}\sum_{i=-25}^{25} x[n-i]$$
Furthermore, since the moving average filter is an LSI system we may implement it as a convolution. If you are having trouble seeing this, we suggest considering a length-5 moving average filter and "unrolling" the sum for that system definition.
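
For instance, a minimal sketch (with an arbitrary made-up signal) showing that the unrolled length-5 causal average matches convolution with a constant filter, assuming zero initial conditions:

#length-5 causal moving average computed two ways
x_demo = np.random.randn(20)
h5 = np.ones(5) / 5
y_conv = signal.convolve(x_demo, h5)[:len(x_demo)]  #discard the tail
y_loop = np.array([x_demo[max(0, n - 4):n + 1].sum() / 5
                   for n in range(len(x_demo))])
print(np.allclose(y_conv, y_loop))  #True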

Notice that the non-causal filter will require us to access negative indices according to the impulse response of our filter. A natural question to ask is how does the  signal.convolve()  function perform non-causal convolution? How can you indicate negative indices when making an array for a system's impulse response? This is where the "same" mode comes in! We may use the "same" mode as follows:

y = signal.convolve(x,h,'same'),
where  x  is of length  N  and  h  is length  M . This line of code will perform a full linear convolution like the default mode, but then it will only keep the center  N  values (length of first argument/array). This operation is equivalent to zero-centering our filter array (second argument/array). You may want to try a couple small examples to convince yourself this is true. The "same" mode will be important to keep in mind throughout this lab and the rest of the course.
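
Here is one such small example (arbitrary values) showing that "same" keeps the center len(x) samples of the full result, which amounts to treating the filter as zero-centered:

x_ex = np.array([1, 2, 3, 4, 5])
h_ex = np.array([1, 0, -1])  #interpreted as h[-1] = 1, h[0] = 0, h[1] = -1
print(signal.convolve(x_ex, h_ex))          #[ 1  2  2  2  2 -4 -5], length 5 + 3 - 1 = 7
print(signal.convolve(x_ex, h_ex, 'same'))  #[ 2  2  2  2 -4], the center 5 values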

Important Note: For the following two parts, we have provided the appropriate start and end indices to make sure each implementation returns results of the same size and to remove initial-condition worries (ramp-up behavior at the edges, where the filter only partially overlaps the data and fewer than 51 samples contribute to the average).

a. Construct the causal filter and apply it to the provided bitcoin price data (contained in the bitcoin_data variable). To make sure your output is the same length and matches up correctly in time, you should slice your result using start and end as we did to create the plotting_data variable. Plot the original data (plotting_data) and your smoothed data on the same plot.

b. Construct the non-causal filter and apply it to the provided bitcoin price data. Perform the same start and end slicing on your result as in part (a). Plot the original data (plotting_data) and your smoothed data on the same plot.

c. Plot the error signals for each filter on the same plot. Let the error signal for a system's output be given by

$$y_e = y - \hat{y},$$

where $\hat{y}$ is your system output and $y$ is the sliced original data used for plotting, plotting_data.

d. Comment on the results in the following Markdown cell. What is noticeably different? Is it better to know a lot of past information or a decent amount of past and future information?

#Provided code
L = 51
half_L = 25
start = 50
end = len(bitcoin_data) - half_L
plotting_data = bitcoin_data[start:end] #plot against this data in parts a/b, pay attention to how we slice
result_length = len(plotting_data)

# Code for 2.a here, don't forget to plot original and filtered signals on same plot!
h1 = np.ones(L) / L
y1 = signal.convolve(bitcoin_data, h1)
y1_slice = y1[start:end]
plt.figure(figsize=(8,6))
plt.plot(y1_slice, label='filtered causal data')
plt.plot(plotting_data, label='original data')
plt.title("Causally filtered Bitcoin data")
plt.legend()

# Code for 2.b here
h2 = np.ones(L) / L
y2 = signal.convolve(bitcoin_data, h2, 'same')
y2_slice = y2[start:end]
plt.figure(figsize=(8,6))
plt.plot(y2_slice, label='filtered non-causal data')
plt.plot(plotting_data, label='original data')
plt.title("Non-causally filtered Bitcoin data")
plt.legend()

# Code for 2.c here
ye1 = plotting_data - y1_slice
ye2 = plotting_data - y2_slice
plt.figure(figsize=(8,6))
plt.plot(ye1, label='Error signal for causal data')
plt.plot(ye2, label='Error signal for non-causal data')
plt.legend()
Comments for part 2.d here: The most noticeable difference is the lag: the non-causal output lines up with the bitcoin data, while the causal output is shifted to the right by about 25 data points. This makes sense because the causal filter averages only the past 51 minutes, so its output reflects where the price was roughly 25 minutes earlier, whereas the non-causal filter centers its average on the present sample. The fluctuations in the non-causal error signal are also smaller on average than those of the causal error signal. So for offline smoothing like this, knowing a decent amount of past and future information beats knowing a lot of past information alone.
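
One way to quantify these observations (a sketch reusing the variables from the cells above):

#average error magnitude for each filter
print('causal MAE:    ', np.mean(np.abs(ye1)))
print('non-causal MAE:', np.mean(np.abs(ye2)))

#the causal output is the non-causal output delayed by half_L = 25
#samples, so shifting the non-causal slice window back by 25 samples
#should reproduce the causal result
print(np.allclose(y1[start:end], y2[start - half_L:end - half_L]))  #True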

Image Convolution
In ECE 310, we typically focus on the implications of filtering in the frequency domain of a 1D signal. Our most common idea of a 1D signal is a piece of audio. In this section, we will experiment with image filtering along both axes of an image and see that we can do more than just filtering with convolution.

Exercise 3: 1D Image Convolution
Apply a 1D length-11 moving average filter to the provided test-image.jpg image along its:

a. Rows

b. Columns

c. Rows then columns

d. Columns then rows

Plot each of the resulting images and give them unique titles.

e. Comment on the images from the "rows then columns" and "columns then rows" procedures. Are they the same? Explain your answer, why are they the same or different?

#make filter and load image
image = imread('test-image.jpg')
L = 11
h = np.ones(L) / L
n_rows, n_cols = image.shape

plt.figure(figsize=(15,10))
plt.subplot(231)
plt.imshow(image, 'gray')
plt.title('Original Image')

#Code for 3.a along rows (apply filter to each row independently)
image_row = np.zeros(image.shape)
for i in range(n_rows):
    image_row[i,:] = signal.convolve(image[i,:], h, 'same')

plt.subplot(232)
plt.imshow(image_row, 'gray')
plt.title('Convolution along the rows')

#along the columns (3.b)
image_col = np.zeros(image.shape)
for i in range(n_cols):
    image_col[:,i] = signal.convolve(image[:,i], h, 'same')

plt.subplot(233)
plt.imshow(image_col, 'gray')
plt.title('Convolution along the columns')

#rows then columns (3.c)
image_row_col = np.zeros(image.shape)
for i in range(n_rows):
    image_row_col[i,:] = signal.convolve(image[i,:], h, 'same')
for j in range(n_cols):
    image_row_col[:,j] = signal.convolve(image_row_col[:,j], h, 'same')

plt.subplot(234)
plt.imshow(image_row_col, 'gray')
plt.title('Convolution along the rows then columns')

#columns then rows (3.d)
image_col_row = np.zeros(image.shape)
for i in range(n_cols):
    image_col_row[:,i] = signal.convolve(image[:,i], h, 'same')
for j in range(n_rows):
    image_col_row[j,:] = signal.convolve(image_col_row[j,:], h, 'same')

plt.subplot(235)
plt.imshow(image_col_row, 'gray')
plt.title('Convolution along the columns then rows')

Comments for 3.e: The two images are the same because convolution is commutative and associative: filtering the rows and then the columns applies the same overall 2D operation as filtering the columns and then the rows, i.e., h_row ∗ (h_col ∗ I) = h_col ∗ (h_row ∗ I).
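
A quick numerical check of this (reusing the arrays computed above):

#the two orderings agree up to floating-point error
print(np.allclose(image_row_col, image_col_row))  #True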

Exercise 4: Building an Edge Detector
Let's now apply image convolution to perform edge detection. We will build a simple edge detector step by step using the following 1D filter:

$$h[n] = \delta[n+1] - \delta[n-1]$$
a. Intuitively or mathematically, what does this filter do to an input signal? In other words, what parts of a signal would give a strong (large magnitude) response and what parts would give a weak (small magnitude) response? You may answer this with a couple signal examples and the result of convolution with  h[n]  or qualitative intuition.

b. Is this filter causal? Why or why not? Is it a problem if the filter is non-causal? (Hint: consider the contexts in which we cannot violate causality!)

Note: For the next two parts, please store your results in separate variables. This will make part (e) much cleaner.

c. Apply  h[n]  along the rows of the test-image.jpg image. Plot the result with a grayscale color mapping.

d. Apply  h[n]  along the columns of the test-image.jpg image. Plot the result with a grayscale color mapping.

So far we have checked for edge-like features in the image going along the rows and columns. Imagine these two results as being vectors indicating edge strength along the row axis and column axis of the image, respectively. Take a minute to look at the differences between these two resulting images. Can you tell which one is detecting edges within a row and which one is doing so within a column? What would be a sensible way to incorporate these two dimensions of information? Imagine they form a 2D vector and take the norm! More precisely:

$$I_F(r,c) = \sqrt{\left(I_R(r,c)\right)^2 + \left(I_C(r,c)\right)^2},$$

where $I_R$ and $I_C$ are the row- and column-filtered results, respectively.

e. Build the final result image  IF  according to the above equation. Plot the result again with a grayscale color mapping.

Answer for 4.a here: This filter computes the difference between input samples two units apart: y[n] = x[n+1] − x[n−1], a central difference. Where the signal changes rapidly, such as at a jump or an edge, the two samples differ greatly and the output magnitude is large; over flat or slowly varying stretches, the two samples are nearly equal and the output is close to zero.
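
A small sketch (with a made-up step signal) illustrating this behavior:

#h[n] = delta[n+1] - delta[n-1] applied to a step edge: the response is
#large at the jump and zero over the flat regions (the trailing -1 is a
#boundary effect of 'same' mode)
step = np.array([0, 0, 0, 1, 1, 1])
h_edge = np.array([1, 0, -1])  #zero-centered for 'same' mode
print(signal.convolve(step, h_edge, 'same'))  #[ 0  0  1  1  0 -1]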

Answer for 4.b here: No, this filter is NOT causal, because h[n] ≠ 0 for some n < 0: there is an impulse of value 1 at n = −1, so the output depends on the next input sample. That is not a problem here, because the "future" values are simply the neighboring pixels of an image we already have in full; causality only becomes a hard constraint when filtering in real time.

#load test-image.jpg
test = imread('test-image.jpg')
plt.figure(figsize=(15,15))
plt.subplot(221)
plt.imshow(test,'gray')
plt.title('Original Image')
#Code for 4.c here:
h = np.array([1, 0, -1])  #zero-centered for 'same' mode: h[-1] = 1, h[0] = 0, h[1] = -1
n_rows, n_cols = test.shape
test1 = np.zeros(test.shape)
for i in range(n_rows):
    test1[i,:] = signal.convolve(test[i,:], h, 'same')

plt.subplot(222)
plt.imshow(test1,'gray')
plt.title("Convolution across the rows")
#Code for 4.d here:
test2 = np.zeros(test.shape)
for i in range(n_cols):
    test2[:,i] = signal.convolve(test[:,i],h,'same')
plt.subplot(223)
plt.imshow(test2,'gray')
plt.title('Convolution along the columns')
#Code for 4.e here:
#gradient magnitude: the norm of the row and column responses at each
#pixel (allocate a float result; writing into the uint8 image would
#truncate and overflow the values)
test_norm = np.sqrt(test1**2 + test2**2)

plt.subplot(224)
plt.imshow(test_norm,'gray')
plt.title('Edge magnitude from test1 and test2')

2D Image Convolution
We don't need to limit ourselves to 1D image convolution. Our filters or "kernels" can be in two dimensions also! We will not spend much time on the math of 2D convolution/filtering in this class because it is best left for ECE 418 (Image and Video Processing); still, we can use Python to try it out. But let's try something other than filtering this time!

Image convolution is not just for filtering or modifying an image. We can also use convolution to extract information from an image. Remember that convolution is the process of "flipping and shifting" one signal over another. At each shift, we compute a dot product (inner product) to see how similar the two signals are: a larger output value means the signals were more similar at that location. The following image illustrates 2D convolution.

[Figure: a 2D kernel sliding over an image, computing a dot product at each shift]
More formally, say we have a $3\times 3$ convolution kernel $K$ whose center entry is at index $(0,0)$. The result of the 2D convolution at pixel $(i,j)$ of image $I$ is given by:

$$O(i,j) = \sum_{k=-1}^{1}\sum_{l=-1}^{1} I(i-k,\,j-l)\cdot K(k,l)$$
Now, why is this useful? Suppose you want to design a system to recognize handwritten digits. How can you tell the difference between a "1" and a "4", for example? Think about how you as a human can separate these numbers! They both typically have one large vertical line down the middle, but we know we can differentiate them because a "4" has another shorter vertical line (depending how you draw it) and a horizontal line connecting them. This is where 2D convolution can help us! How about we create convolution kernels to highlight features we know to be discriminative, like horizontal and vertical lines.
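
To make the formula concrete, here is a tiny sketch (arbitrary 3×3 image and kernel, not part of the lab) that evaluates the double sum at the center pixel by hand and compares it with scipy:

#scipy zero-pads outside the image; at the center pixel no padding is
#needed, so the hand-evaluated sum must match convolve2d exactly
I = np.arange(9).reshape(3, 3)
K = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]])
O = signal.convolve2d(I, K, 'same')
center = sum(I[1 - k, 1 - l] * K[k + 1, l + 1]   #K(k,l) lives at K[k+1, l+1]
             for k in range(-1, 2) for l in range(-1, 2))
print(center == O[1, 1])  #True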

The below code cell includes a function to perform 2D image convolution on a target image given a convolution kernel. We have also provided two 2D kernels: one for horizontal features and another for vertical features.

def convolve_2d(image,kernel):
    result = signal.convolve2d(image,kernel,'same')
    result[result < 0] = 0 #Keep values non-negative
    return result

#identify horizontal lines
horiz_kernel = np.array([[-2,-2,-2,-2,-2],
                         [1,1,1,1,1],
                         [1,1,1,1,1],
                         [1,1,1,1,1],
                         [-2,-2,-2,-2,-2]])

#identify vertical lines
vert_kernel = np.array([[-2,1,1,1,-2],
                        [-2,1,1,1,-2],
                        [-2,1,1,1,-2],
                        [-2,1,1,1,-2],
                        [-2,1,1,1,-2]])
In the folder for this lab, we have provided example images of the numbers "1", "4", and "8" from the popular MNIST dataset. These images are 28x28 and grayscale. Let's see what our filters can identify in the one.jpg image! Note the different scales on the feature image colorbars.

one = imread('one.jpg')

plt.figure(figsize = (16,10))
plt.subplot(131)
plt.title('Original')
plt.imshow(one,'gray')

one_horiz = convolve_2d(one,horiz_kernel)
plt.subplot(132)
plt.title('Horizontal Features')
plt.imshow(one_horiz,'hot')
plt.colorbar(fraction=0.05)

one_vert = convolve_2d(one,vert_kernel)
plt.subplot(133)
plt.title('Vertical Features')
plt.imshow(one_vert,'hot')
plt.colorbar(fraction=0.05)

Exercise 5: 2D Image Convolution for Feature Detection
a. Create plots like the example above for the "1" image, but for the "4" (four.jpg) and "8" (eight.jpg) images, in the following code cell.

b. Comment on the results and compare what is highlighted for each number.

c. What is the significance of having negative kernel values around the positive "feature highlighting" values? Think about what would happen if the negative values were zeros instead. Try playing around with the kernels or creating your own kernel if you are unsure.

# Code for 5.a here
four = imread('four.jpg')
eight = imread('eight.jpg')

plt.figure(figsize = (16,10))
plt.subplot(231)
plt.title('Original')
plt.imshow(four,'gray')

four_horiz = convolve_2d(four,horiz_kernel)
plt.subplot(232)
plt.title('Horizontal Features')
plt.imshow(four_horiz,'hot')
plt.colorbar(fraction=0.05)

four_vert = convolve_2d(four,vert_kernel)
plt.subplot(233)
plt.title('Vertical Features')
plt.imshow(four_vert,'hot')
plt.colorbar(fraction=0.05)

plt.subplot(234)
plt.title('Original')
plt.imshow(eight,'gray')

eight_horiz = convolve_2d(eight,horiz_kernel)
plt.subplot(235)
plt.title('Horizontal Features')
plt.imshow(eight_horiz,'hot')
plt.colorbar(fraction=0.05)

eight_vert = convolve_2d(eight,vert_kernel)
plt.subplot(236)
plt.title('Vertical Features')
plt.imshow(eight_vert,'hot')
plt.colorbar(fraction=0.05)

Answer for 5b: For the "4", the horizontal kernel responds strongly along its single horizontal stroke, and the vertical kernel strongly highlights its two vertical strokes. For the "8", the horizontal kernel responds strongly across the top and more mildly near the bottom and the curved regions, while the vertical kernel responds most along the left-side vertical segments and more weakly on the right side.

Answer for 5c: The negative values surrounding the positive "feature highlighting" values provide contrast: they penalize regions that are bright in every direction. In the horizontal kernel, for example, the negative rows suppress vertical strokes (they appear black) while horizontal strokes light up in color. When I flipped the signs in the horizontal kernel, the horizontal strokes went black and the vertical strokes of the "4" were highlighted instead. If the negative values were zeros, any large bright region would respond as strongly as an actual line, so the negative surround is what makes each kernel selective for its feature rather than for overall brightness.
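
The experiment described above can be reproduced with a sketch like the following (reusing the four image and convolve_2d from the earlier cells; the zeroed kernel is a hypothetical variant for comparison):

#horizontal kernel with the negative surround removed; without it, any
#large bright region responds as strongly as an actual horizontal line
horiz_kernel_no_neg = np.array([[0,0,0,0,0],
                                [1,1,1,1,1],
                                [1,1,1,1,1],
                                [1,1,1,1,1],
                                [0,0,0,0,0]])
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.title('With negative surround')
plt.imshow(convolve_2d(four, horiz_kernel), 'hot')
plt.subplot(122)
plt.title('Negatives zeroed')
plt.imshow(convolve_2d(four, horiz_kernel_no_neg), 'hot')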

For the final activity, we will explore an example of a non-linear system. First, a bit of background.

There are many different types of noise that can appear in images. One such type is salt-and-pepper noise. This noise occurs when pixels in a camera or an existing image become fully active or inactive. In other words, a normal pixel either takes on its minimum or maximum possible value. The following code cell shows an original image and a version of it that has been corrupted by 20% salt-and-pepper noise (20% of the pixels are affected). In this activity, we will see whether we can use our LSI systems from before to denoise our image.

clean_image = imread('clean-image.jpg')
noisy_image = imread('noisy-image.jpg')
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(clean_image,'gray')
plt.title('Original Image')
plt.subplot(122)
plt.imshow(noisy_image,'gray')
plt.title('Image with 20% Salt-and-Pepper Noise')
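For reference, a minimal sketch (our own illustration, not the code used to generate noisy-image.jpg) of how 20% salt-and-pepper noise might be applied to an 8-bit grayscale image:

#corrupt 20% of the pixels: roughly half forced to white (255, "salt")
#and half to black (0, "pepper")
rng = np.random.default_rng(0)
corrupted = rng.random(clean_image.shape) < 0.2
salt = rng.random(clean_image.shape) < 0.5
my_noisy = clean_image.copy()
my_noisy[corrupted & salt] = 255
my_noisy[corrupted & ~salt] = 0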

Exercise 6: Non-Linear Systems are Cool Too!
We will attempt to use two different filters: a 5x5 mean filter and a 5x5 median filter. Note that a median filter is a non-linear system! A 5x5 median filter simply takes the median of the 25 pixels in the 5x5 neighborhood centered on a pixel and assigns that value to the center pixel.

a. Explain/prove why the median filter is a non-linear system. You may write your answer with respect to a one-dimensional median filter.

b. Apply a 5x5 mean filter to the noisy image and plot the result. You can do this two different ways. You can apply a length-5 mean filter along the rows and columns in any order or use our  convolve_2d()  function from before with an appropriate filter you create.

c. Apply a 5x5 median filter to the noisy image and plot the result. Use  signal.medfilt()  to perform the filtering. Look up the scipy documentation for notes on this function's usage.

d. Comment on the differences. Which filter seems to work better? Why do you think so?

# Code for 6.b
mean_filter = np.ones((5,5)) / 25

plt.figure(figsize = (15,15))
plt.subplot(221)
plt.title('Original Image')
plt.imshow(noisy_image,'gray')
noisy_image_mean = convolve_2d(noisy_image, mean_filter)
plt.subplot(222)
plt.title('Image after applying mean filter')
plt.imshow(noisy_image_mean, 'gray')

# Code for 6.c
plt.subplot(223)
plt.title('Original Image')
plt.imshow(noisy_image,'gray')
noisy_image_median = signal.medfilt(noisy_image, 5)
plt.subplot(224)
plt.title('Image after applying median filter')
plt.imshow(noisy_image_median, 'gray')

Answer for 6.a here: A median filter is non-linear because it fails superposition: given the weighted input ax1[n] + bx2[n], the output is generally not ay1[n] + by2[n]. The median of a sum is not the sum of the medians, since the median depends on the ordering of the values in the window, and adding two signals can change which value lands in the middle. A quick numerical counterexample is sketched below.
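
A quick numerical counterexample (a sketch using a single median window, the core operation of the filter):

#additivity fails: the median of a sum is not the sum of the medians
a = np.array([0, 0, 1])
b = np.array([1, 0, 0])
print(np.median(a), np.median(b), np.median(a + b))  #0.0 0.0 1.0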

Answer for 6.d here: The median filter works much better than the mean filter, and this makes sense! Salt-and-pepper noise drives pixels to the extreme values 0 or 255; the median filter sorts the neighborhood and picks the middle value, so those extremes are discarded entirely, while the mean filter averages them in and smears the noise across neighboring pixels.

Submission Instructions
Make sure to place all image and data files along with your .ipynb lab report (this file) in one folder, zip the folder, and submit it to Compass under the Lab 2 assignment. Please name the zip file <netid>_Lab2.
