ECE421 - Assignment 1: Logistic Regression

Submission: Submit both your report (a single PDF file) and all your code on Quercus.
Objectives:
In this assignment, you will first implement a simple logistic regression classifier using Numpy and
train your model by applying the (Stochastic) Gradient Descent algorithm. Next, you will implement
the same model in TensorFlow and use Stochastic Gradient Descent and ADAM to train it.
You are encouraged to look up TensorFlow APIs for useful utility functions at https://www.tensorflow.org/api_docs/python/.
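For orientation only, a logistic regression model of the kind described in this assignment could be expressed in TensorFlow roughly as follows. This is a sketch assuming the TensorFlow 2 Keras API; the assignment may require a lower-level implementation, and the regularization strength reg is a hypothetical placeholder.

import tensorflow as tf

reg = 0.1  # hypothetical regularization strength (lambda)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),       # flatten each image to a vector
    tf.keras.layers.Dense(
        1, activation='sigmoid',                          # y_hat = sigma(w^T x + b)
        kernel_regularizer=tf.keras.regularizers.l2(reg / 2.0)),  # (lambda/2)*||w||^2
])
model.compile(optimizer=tf.keras.optimizers.Adam(),       # or tf.keras.optimizers.SGD()
              loss='binary_crossentropy',
              metrics=['accuracy'])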
General Note:
• Full points are given for complete solutions, including justifying the choices or assumptions
you made to solve each question. A written report should be included in the final submission.
• Programming assignments are to be solved and submitted individually. You are encouraged
to discuss the assignment with other students, but you must solve it on your own.
• Please ask all questions related to this assignment on Piazza, using the tag assignment1.
Two-class notMNIST dataset
The notMNIST dataset is an image recognition dataset of font glyphs for the letters A through J,
suitable for training simple neural networks. It is quite similar to the classic MNIST dataset of handwritten
digits 0 through 9. We use the following script to generate a smaller dataset that contains only the
images from two letter classes: “C” (the positive class) and “J” (the negative class). This smaller
subset of the data contains 3500 training images, 100 validation images, and 145 test images.
import numpy as np

# Load the notMNIST images and labels, then build the two-class subset.
with np.load('notMNIST.npz') as data:
    Data, Target = data['images'], data['labels']
posClass = 2   # letter "C" (positive class)
negClass = 9   # letter "J" (negative class)
dataIndx = (Target == posClass) + (Target == negClass)
Data = Data[dataIndx] / 255.          # scale pixel values to [0, 1]
Target = Target[dataIndx].reshape(-1, 1)
Target[Target == posClass] = 1
Target[Target == negClass] = 0
np.random.seed(521)
randIndx = np.arange(len(Data))
np.random.shuffle(randIndx)
Data, Target = Data[randIndx], Target[randIndx]
trainData, trainTarget = Data[:3500], Target[:3500]
validData, validTarget = Data[3500:3600], Target[3500:3600]
testData, testTarget = Data[3600:], Target[3600:]
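As a quick sanity check (not part of the handout script), the split sizes quoted above can be confirmed after running the loader, assuming the usual 28×28 notMNIST images:

print(trainData.shape, trainTarget.shape)  # expected: (3500, 28, 28) (3500, 1)
print(validData.shape, validTarget.shape)  # expected: (100, 28, 28) (100, 1)
print(testData.shape, testTarget.shape)    # expected: (145, 28, 28) (145, 1)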
1 Logistic Regression with Numpy [20 points]
Logistic regression is one of the most widely used linear classification models in machine learning. In
logistic regression, we model the probability of a sample $x$ belonging to the positive class as
$$\hat{y}(x) = \sigma(w^\top x + b),$$
where $z = w^\top x + b$, also called the logit, is the linear transformation of the input vector $x$ using the
weight vector $w$ and the bias scalar $b$, and $\sigma(z) = 1/(1 + \exp(-z))$ is the sigmoid or logistic function.
The sigmoid function “squashes” the real-valued logit to fall between zero and one.
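For concreteness, this prediction can be written in a few lines of Numpy. This is a sketch only; the names sigmoid and predict are illustrative, and x is assumed to be a flattened image vector.

import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real-valued z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # y_hat(x) = sigma(w^T x + b)
    return sigmoid(np.dot(w, x) + b)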
The cross-entropy loss $L_{CE}$ and the regularization term $L_w$ together form the total loss function:
$$L = L_{CE} + L_w = \frac{1}{N}\sum_{n=1}^{N}\left[-y^{(n)}\log\hat{y}\!\left(x^{(n)}\right) - \left(1-y^{(n)}\right)\log\!\left(1-\hat{y}\!\left(x^{(n)}\right)\right)\right] + \frac{\lambda}{2}\left\lVert w\right\rVert_2^2$$
Note that $y^{(n)} \in \{0, 1\}$ is the class label for the $n$-th training image and $\lambda$ is the regularization parameter.
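As an illustration, this total loss could be evaluated in Numpy roughly as follows. This is a sketch assuming X is an N×d matrix of flattened images, y is an N×1 label vector, and reg plays the role of λ; all names are hypothetical.

def total_loss(w, b, X, y, reg):
    # L = L_CE + L_w from the equation above
    y_hat = 1.0 / (1.0 + np.exp(-(X.dot(w) + b)))   # sigma(Xw + b), shape (N, 1)
    eps = 1e-12                                     # guard against log(0)
    ce = -np.mean(y * np.log(y_hat + eps)
                  + (1 - y) * np.log(1 - y_hat + eps))
    return ce + (reg / 2.0) * np.sum(w ** 2)        # add (lambda/2)*||w||^2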