APM 598: Homework 3 
1 n-gram models
Ex 1.
a) Load and tokenize the attached text 'data_HW3_Plato_Republic.txt'.
Put all the words in lower case so as to merge words like 'The' and 'the'.
Compute the total number of words $T$ in the text and the number of unique words
(the size of the vocabulary).
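A minimal sketch of one possible approach in Python; the use of nltk.word_tokenize and the variable names are assumptions, not prescribed by the exercise:

    import nltk

    # nltk.download('punkt')  # may be needed once before word_tokenize works
    with open('data_HW3_Plato_Republic.txt', encoding='utf-8') as f:
        raw = f.read()

    # word_tokenize is one simple choice; it keeps punctuation as tokens,
    # which you may or may not want to filter out.
    tokens = [w.lower() for w in nltk.word_tokenize(raw)]  # merge 'The'/'the'
    T = len(tokens)          # total number of words
    V = len(set(tokens))     # size of the vocabulary
    print(T, V)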
b) Build a uni-gram. Deduce the 5 most common words with at least 8 characters.
Hint: use the method 'most_common' of an 'nltk.FreqDist' object.
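A possible sketch, reusing the tokens list from part a); restricting to long words before calling most_common is one way to apply the length filter:

    import nltk

    unigram = nltk.FreqDist(tokens)  # uni-gram counts
    # keep only words with at least 8 characters, then take the top 5
    long_words = nltk.FreqDist({w: c for w, c in unigram.items() if len(w) >= 8})
    print(long_words.most_common(5))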
c) Build a bi-gram and define a function that, given two words $(\omega_1, \omega_2)$, computes the probability:
$$P(\omega_2 \mid \omega_1) = \frac{\#\{(\omega_1, \omega_2)\}}{\#\{\omega_1\}},$$
where $\#$ denotes the number of occurrences of the word (or pair of words) in the corpus.
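A sketch of such a function, again reusing the tokens list from part a); nltk.bigrams and nltk.FreqDist do the counting:

    import nltk

    bigram = nltk.FreqDist(nltk.bigrams(tokens))  # counts of pairs (w1, w2)
    unigram = nltk.FreqDist(tokens)               # counts of single words

    def cond_prob(w1, w2):
        """Empirical P(w2 | w1) = #{(w1, w2)} / #{w1}."""
        return bigram[(w1, w2)] / unigram[w1] if unigram[w1] > 0 else 0.0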
d) Deduce the so-called perplexity of the bi-gram model, defined as:
$$PP = \left( \prod_{k=1}^{T-1} P(\omega_{k+1} \mid \omega_k) \right)^{-\frac{1}{T-1}},$$
where $T$ denotes the total number of words in the corpus.
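The product of $T-1$ probabilities underflows in floating point, so one standard trick is to compute the perplexity in log space. A sketch, reusing cond_prob from part c) and T from part a):

    import math

    # log PP = -(1/(T-1)) * sum_k log P(w_{k+1} | w_k); every consecutive
    # pair in the corpus is an observed bigram, so each probability is > 0.
    log_sum = sum(math.log(cond_prob(w1, w2))
                  for w1, w2 in zip(tokens[:-1], tokens[1:]))
    PP = math.exp(-log_sum / (T - 1))
    print(PP)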
2 Recurrent Neural Networks
Ex 2.
The goal of this exercise is to experiment with a simple Recurrent Neural Network
(RNN) model for predicting letters. We only consider four letters "h", "e", "l" and "o"
that we embed in $\mathbb{R}^4$:
$$\text{"h"} \to \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \text{"e"} \to \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \text{"l"} \to \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad \text{"o"} \to \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.$$
We consider an RNN with hidden states $h_t \in \mathbb{R}^2$:
$$\begin{cases} h_t = \tanh(R h_{t-1} + A x_t) \\ y_t = B h_t \end{cases} \tag{1}$$
where $A \in \mathcal{M}_{2,4}(\mathbb{R})$, $R \in \mathcal{M}_{2,2}(\mathbb{R})$ and $B \in \mathcal{M}_{4,2}(\mathbb{R})$ (e.g. $A$ is a $2 \times 4$ matrix).
a) Given the input "hello" (i.e. $x_1 = (1, 0, 0, 0), \ldots, x_5 = (0, 0, 0, 1)$), the initial state $h_0 = (0, 0)$ and the matrices:
$$A = \begin{bmatrix} 1 & -1 & -1/2 & 1/2 \\ 1 & 1 & -1/2 & -1 \end{bmatrix}, \quad R = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 1 \\ 1/2 & 1 \\ -1 & 0 \\ 0 & -1/2 \end{bmatrix},$$
find the outputs $y_1, \ldots, y_5$ and deduce the predicted characters (see figure 1).
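A sketch of the forward pass with numpy (an assumption; the letter ordering below matches the embedding given above, and the names are illustrative):

    import numpy as np

    letters = ['h', 'e', 'l', 'o']
    A = np.array([[1., -1., -0.5,  0.5],
                  [1.,  1., -0.5, -1. ]])
    R = np.eye(2)
    B = np.array([[ 1.,  1. ],
                  [ 0.5, 1. ],
                  [-1.,  0. ],
                  [ 0., -0.5]])

    h = np.zeros(2)                           # h_0 = (0, 0)
    for t, c in enumerate("hello", start=1):
        x = np.eye(4)[letters.index(c)]       # one-hot embedding of the letter
        h = np.tanh(R @ h + A @ x)            # recurrence from eq. (1)
        y = B @ h
        print(f"y_{t} = {y}, predicted: {letters[int(np.argmax(y))]}")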
b) Find (numerically) matrices A, R, B such that the predicted characters are ”olleh”.
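One possible numerical approach to b) is to treat $A$, $R$, $B$ as trainable parameters and fit them by gradient descent so that the argmax of each $y_t$ matches "olleh". A sketch assuming PyTorch (any autodiff library, or even random search, would do for a problem this small):

    import torch

    letters = ['h', 'e', 'l', 'o']
    inputs = torch.eye(4)[[letters.index(c) for c in "hello"]]   # 5 one-hot inputs
    targets = torch.tensor([letters.index(c) for c in "olleh"])  # desired argmaxes

    torch.manual_seed(0)
    A = torch.randn(2, 4, requires_grad=True)
    R = torch.randn(2, 2, requires_grad=True)
    B = torch.randn(4, 2, requires_grad=True)
    opt = torch.optim.Adam([A, R, B], lr=0.05)

    for step in range(2000):          # may need more steps or another seed
        h = torch.zeros(2)
        ys = []
        for x in inputs:
            h = torch.tanh(R @ h + A @ x)
            ys.append(B @ h)
        loss = torch.nn.functional.cross_entropy(torch.stack(ys), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():             # decode with the fitted matrices
        h = torch.zeros(2)
        pred = []
        for x in inputs:
            h = torch.tanh(R @ h + A @ x)
            pred.append(letters[int((B @ h).argmax())])
    print(''.join(pred))              # hopefully "olleh"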
"h" "e" "l" "l" "o"
A
B
R
(0,0)
A
B
A
B
A
B
A
B
? ? ? ? ?
embedding
decoding
Figure 1: Predictions of a vanilla RNN. After encoding the letters (e.g. “h”) into vectors
(e.g. x1 = (1, 0, 0, 0)), the network performs the operations described in eq. (1) to
estimate a vector prediction (e.g. y1). The ’letter’ predicted is chosen as the index of the
output with the largest value (i.e. find the hot vector the closest to (softmax) of y1).
Ex 3. [vanishing/exploding gradient]
We would like to illustrate one of the issues with vanilla RNNs, namely the vanishing
or exploding gradient phenomenon. Rather than computing the gradient of the loss
function, we are simply going to investigate how a small perturbation of the input $x_1$
affects the output $y_t$ (see figure 2).
[Figure: unrolled vanilla RNN with a perturbation (+1/-1) applied to $x_1$, propagated through repeated applications of $R$ and decoded by $B$ at each step.]

Figure 2: To study how a perturbation of $x_1$ affects $y_t$, we suppose in this exercise that $x_2 = \ldots = x_t = 0$ and $h_0 = 0$. Due to the iterations of the matrix $R$ in the estimation of $y_t$, the perturbation of $x_1$ could have a small or a large influence on $y_t$.
We consider a standard RNN defined with three matrices A, R, B and σ(x) = tanh(x)
(see figure 2).
a) Compute the differential $D_{h_{t-1}} h_t$, i.e. compute the differential of the function
$h \mapsto \sigma(R h + A x_t)$.
Deduce that:
$$\|D_{x_1} y_t\| \;\le\; \|B\| \cdot \left( \prod_{k=1}^{t-1} \big|\sigma'(R h_{k-1} + A x_k)\big|_\infty \right) \cdot \|R\|^{t-1} \cdot \|A\|, \tag{2}$$
where $\|\cdot\|$ is a (multiplicative) matrix norm and $|\sigma'(h)|_\infty = \max(|\sigma'(h_1)|, \ldots, |\sigma'(h_d)|)$,
where $d$ is the dimension of the vector $h$.
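A quick numerical sanity check of the bound (2) is to compare a finite-difference estimate of $\|D_{x_1} y_t\|$ with the crude upper bound $\|B\| \, \|R\|^{t-1} \, \|A\|$ (valid since $|\sigma'| \le 1$ for tanh). The matrices and horizon below are illustrative assumptions, not part of the exercise:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 4))
    R = 0.3 * rng.normal(size=(2, 2))
    B = rng.normal(size=(4, 2))
    t, eps = 10, 1e-6

    def y_t(x):
        h = np.tanh(R @ np.zeros(2) + A @ x)  # first step uses x_1
        for _ in range(t - 1):                # x_2 = ... = x_t = 0
            h = np.tanh(R @ h)
        return B @ h

    x1 = rng.normal(size=4)
    # finite-difference estimate of the Jacobian D_{x1} y_t, column by column
    J = np.stack([(y_t(x1 + eps * e) - y_t(x1)) / eps for e in np.eye(4)], axis=1)
    print(np.linalg.norm(J, 2))               # ||D_{x1} y_t|| (spectral norm)
    # crude right-hand side of (2), dropping the sigma' factors (each <= 1)
    print(np.linalg.norm(B, 2) * np.linalg.norm(R, 2) ** (t - 1) * np.linalg.norm(A, 2))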
b) From now on, we take $t = 30$ and suppose $x, y, h \in \mathbb{R}^2$ with:
$$A = B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 1/2 & -1 \\ -1 & 1/2 \end{bmatrix}, \quad x_2 = x_3 = \cdots = x_{30} = h_0 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Denote $x_1 = (0, 0)$ and $y_{30}$ the output after $t = 30$ iterations.
Similarly, denote the perturbation $x_1^\varepsilon = (\varepsilon, -\varepsilon)$ and $y_{30}^\varepsilon$ the output after $t = 30$
iterations starting from $x_1^\varepsilon$.
Compute and plot (in log-log scale) the difference $\|y_{30} - y_{30}^\varepsilon\|$ for $\varepsilon \in (10^{-4}, \ldots, 10^{-9})$.
Explain the result using eq. (2).
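A sketch of the experiment with numpy and matplotlib (both assumptions; variable names are illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    A = B = np.eye(2)
    R = np.array([[0.5, -1.0],
                  [-1.0, 0.5]])

    def y30(x1):
        h = np.tanh(R @ np.zeros(2) + A @ x1)  # first step uses x_1
        for _ in range(29):                    # x_2 = ... = x_30 = 0
            h = np.tanh(R @ h)
        return B @ h

    eps = 10.0 ** -np.arange(4, 10)            # 1e-4, ..., 1e-9
    diffs = [np.linalg.norm(y30(np.array([e, -e])) - y30(np.zeros(2)))
             for e in eps]

    plt.loglog(eps, diffs, 'o-')
    plt.xlabel('epsilon')
    plt.ylabel('||y_30 - y_30^eps||')
    plt.show()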
c) Proceed similarly to b) using $x_1 = (2, 1)$ and $x_1^\varepsilon = (2 + \varepsilon, 1 - \varepsilon)$.
Why does the perturbation have a small effect in this case compared to b)? Use eq. (2) to explain it.
Extra) Proceed similarly to b) using $x_1 = (0, 0)$ and $x_1^\varepsilon = (\varepsilon, \varepsilon)$. Why does the perturbation
have a small effect? In general, let $\tilde{x}_1 = (\varepsilon, \delta)$ with $(\varepsilon, \delta)$ small and $\tilde{y}_{30}$ the output
of the network; do you expect $\|y_{30} - \tilde{y}_{30}\|$ to be small ($\ll 1$) or large ($\approx 1$)?