HW5: ML
Total points: 6
This last hw is on machine learning! It is data-related, after all...
Here is a summary of what you'll do: on Google's Colab (https://colab.research.google.com/), train a neural network on differentiating between a cat pic and dog pic, then use the trained network to classify a new (cat-like or dog-like) pic into a cat or dog. This is a soup-to-nuts (start to finish) assignment that will get your feet wet (or plunge you in!), doing ML - a VERY valuable skill - training a self-driving car (http://apollo.auto/), for example, would involve much more complexity, but would be based on the same workflow.
You are going to carry out 'supervised learning', as shown in this annotated graphic [from a book on TensorFlow]:
Below are the steps. Have fun!
1. Use your GMail/GDrive account to log in, go to https://drive.google.com/, click on the '+ New' button at the top left of the page, look for the 'Colab' app [after + New, click on More, then + Connect more apps] and connect it - this will make the app [which connects to the mighty Google Cloud on the other end!] be able to access (read, write) files and folders in your GDrive.
2. You'll notice that the above step created a folder called Colab Notebooks inside your GDrive - this is good, because we can keep Colab-related things nicely organized inside that folder. Within the Colab Notebooks subdir/folder, create a folder called cats-vs-dogs, for the hw:
Now we need DATA [images of cats and dogs] for training and
validation, and scripts for training+validation and classifying.
3. Download this (data/data.zip) .zip data file, unzip it. You'll see this structure:
data/
 live/
 train/
 cats/
 dogs/
 validation/
 cats/
 dogs/
The train/ folder contains 1000 kitteh images under cats/, and 1000 doggo/pupper ones in dogs/. Have fun looking at the adorable furballs :) Obviously you know which is which :) A neural network is going to start from scratch, and learn the difference, just based on these 2000 'training dataset' images.
The validation/ folder contains 400 images each of more cats and dogs - these are to feed the trained network, so we can compare its classification answers to the actual answers and compute the accuracy of the training (in our code, we do this after each training epoch, to watch the accuracy build up, mostly monotonically). Finally, live/ is where you'd be placing new images of cats and dogs [that are not in the training or validation datasets], and using their filenames to ask the network to classify them: an output of 0 means 'cat', 1 means 'dog'. Fun!
Simply drag and drop the data/ folder onto your My Drive/Colab Notebooks/cats-vs-dogs/ area, and wait for about a half hour for the 2800 (2*(1000+400)) images to be uploaded. After that, you should be seeing this [click inside the train/ and validation/ folders to see that the cats and dogs pics have indeed been uploaded]:
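Aside - the browser upload can be slow; if you are comfortable with shell commands, an alternative [a sketch, assuming GDrive gets mounted in the notebook as in step 4's authorization flow] is to upload just data.zip and unzip it from a Colab code cell:

    # A sketch (not required for the hw): upload data.zip to
    # My Drive/Colab Notebooks/cats-vs-dogs/, then run this in a Colab cell.
    from google.colab import drive

    drive.mount('/content/drive')  # authorize GDrive access when prompted

    # Unzip in place; the paths assume the folder layout from step 2.
    !unzip -q "/content/drive/My Drive/Colab Notebooks/cats-vs-dogs/data.zip" \
           -d "/content/drive/My Drive/Colab Notebooks/cats-vs-dogs/"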
4. OK, time to train a network! Download this (nb/train.ipynb) Jupyter notebook. A Jupyter notebook (.ipynb extension, 'IPython Notebook') is a JSON file that contains a mix of two types of "cells" - text cells that have Markdown-formatted text and images, and code cells that contain, well, code :) Drag and drop the notebook into cats-vs-dogs:
Double-click on the notebook; that will open it so you can execute the code in the cell(s).
As you can see, it is a VERY short piece of code [not mine, except annotations and mods] where a network is set up [starting with 'model = Sequential()'], and the training is done using it [model.fit_generator()]. In the last line, the RESULTS [learned weights, biases, for each neuron in each layer] are stored on disk as a weights.h5 file [a .h5 file is binary, in the publicly documented HDF5 file format (https://en.wikipedia.org/wiki/Hierarchical_Data_Format) (hierarchical, JSON-like, perfect for storing network weights)].
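To give you a feel for the shape of such code, here is a condensed sketch [NOT the exact notebook - the layer sizes, paths and hyperparameters below are illustrative assumptions, in the style of the classic Keras cats-vs-dogs example]:

    # A condensed, illustrative sketch of Keras training code; the real
    # notebook may differ in architecture and parameters.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
    from keras.preprocessing.image import ImageDataGenerator

    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))  # one output: 0 = cat, 1 = dog

    model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                  metrics=['accuracy'])

    # Stream images from the data/ folders, scaled to [0, 1].
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_gen = datagen.flow_from_directory(
        'data/train', target_size=(150, 150), batch_size=16,
        class_mode='binary')
    val_gen = datagen.flow_from_directory(
        'data/validation', target_size=(150, 150), batch_size=16,
        class_mode='binary')

    # 50 epochs; validation runs at the end of each epoch.
    model.fit_generator(train_gen, steps_per_epoch=2000 // 16, epochs=50,
                        validation_data=val_gen, validation_steps=800 // 16)

    model.save_weights('weights.h5')  # the tangible result of training!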
The code uses the Keras NN library (https://keras.io/), which runs on graph (dataflow) execution backends such as TensorFlow (TF), Theano, and CNTK [here we are running it over TF via the Google cloud]. With Keras, it is possible to express NN architectures succinctly (https://www.datacamp.com/community/blog/keras-cheat-sheet) - the TF equivalent (or Theano's etc.) would be more verbose. As a future exercise, you can try coding the model in this hw directly in TF or Theano or CNTK - you should get the same results.
Before you run the code to kick off the training, note that you will be using GPU acceleration on the cloud (results in ~10x speedup) - cool! You'd do this via 'Edit - Notebook settings'. In this notebook, this is already set up (by me), but you can verify that it's set:
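You can also verify from code that a GPU is attached, using TensorFlow's standard API [a quick sanity check you can paste into any code cell]:

    # Prints the GPU device name, or an empty string if no GPU is attached.
    import tensorflow as tf
    print(tf.test.gpu_device_name())  # expect something like '/device:GPU:0'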
When you click on the circular 'play' button at the left of the cell, the training will start - here is a sped-up version of what you will get (your numerical values will be different):
The backprop loop runs 50 times ('epochs' (https://keras.io/getting-started/faq/#what-does-sample-batch-epoch-mean)) through all the training data. The acc: column shows the accuracy [how close the training is to the expected validation/ results], which would be a little over 80% - NOT BAD, for having learned from just 1000 input images for each class!
Click the play button to execute the code! The first time you run it (and anytime after logging out and logging back in), you'd need to authorize Colab to access GDrive - so a message will show up under the code cell, asking you to click on a link whereby you can log in and provide authorization, and copy and paste the authorization code that appears. Once you do this, the rest of the code (where the training occurs) will start to run.
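[That authorization prompt is what Colab's standard Drive-mounting call produces - assuming the notebook mounts GDrive the usual way, the relevant lines look like this:]

    # The usual Colab idiom for GDrive access; running it prints the
    # authorization link described above.
    from google.colab import drive
    drive.mount('/content/drive')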
Scroll down to below the code cell, to watch the training happen. As you can see, it is going to take a short while.
After the 50th epoch, we're all done training (and validating too, which we did 50 times, once at the end of each epoch). What's the tangible result, at the end of our training+validating process? It's a 'weights.h5' file! If you look in your cats-vs-dogs/ folder, it should be there:
5. Soooo, what exactly [format and content-wise] is in the weights file? You can find out, by downloading HDFView 2.14.0 from https://support.hdfgroup.org/products/java/release/download.html [grab the binary, from the 'HDFView+Object 2.14' column on the left]. Install, and bring up the program. Download the .h5 file from GDrive to your local area (eg. desktop), then drag and drop it into HDFView:
Right-click on weights.h5 at the top-left, and do 'Expand All':
Neat! We can see the NN layers, and the biases and weights (kernels) for each. Double click on the bias and kernel items in the second (of the two) dense layers [dense_12, in my case - yours might be named something else], and stagger them so you can see both:
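[If you prefer code to a GUI, the same structure can be listed with the h5py library - a small sketch, runnable anywhere Python and h5py are installed:]

    # Walk the HDF5 tree in weights.h5, printing each group/dataset name,
    # plus the shape of each dataset (the weight and bias arrays).
    import h5py

    with h5py.File('weights.h5', 'r') as f:
        def show(name, obj):
            print(name, getattr(obj, 'shape', ''))  # groups have no shape
        f.visititems(show)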
Computing those floating point numbers is WHAT -EVERY FORM- OF NEURAL NETWORK TRAINING IS ALL ABOUT! A self-driving car, for example, is also trained the same way, resulting in weights that can classify live traffic data (scary, in my opinion). Here, collectively (taking all layers into account), it's those numbers that REPRESENT the network's "learning" of telling apart cats and dogs! The "learned" numbers (the .h5 weights file, actually) can be sent to anyone, who can instantiate a new network (with the same architecture as the one in the training step), and simply re/use the weights in weights.h5 to start classifying cats and dogs right away - no training necessary. The weight arrays represent "catness" and "dogness", in a sense :) In a self-driving car, the weights would be copied to the processing hardware (https://www.wired.com/story/self-driving-cars-power-consumption-nvidia-chip/) that resides in the car.
Q1 [1+1=2 points]. Submit your weights.h5 file. Also, create a submittable screengrab similar to the above [showing values for the second dense layer (eg. dense_12)]. For fun, click around, examine the arrays in the other layers as well. Again, it's all these values that are the end result of training, on account of iterating and minimizing classification errors through those epochs.
6. Now for the fun part - finding out how well our network has learned! Download this (nb/classify.ipynb) Jupyter notebook, and upload it to your cats-vs-dogs/ Colab area:
When you open classify.ipynb, you can see that it contains Keras code to read the weights file and associate the weights with a new model (which needs to be 100% identical to the one we had set up, to train), then take a new image's filename as input, and predict (model.predict()) whether the image is that of a cat [output: 0], or a dog [output: 1]! Why 0 for cat and 1 for dog? Because 'c' comes before 'd' alphabetically [or because (pics/purple.png)] :)
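[In outline, such classification code does something like the following - a sketch, since the exact notebook may differ; the 150x150 input size and the architecture are the assumptions carried over from the training sketch above, and must match training exactly:]

    # Sketch: rebuild the identical model, load weights, classify one image.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
    from keras.preprocessing import image

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(64, activation='relu'),
        Dropout(0.5),
        Dense(1, activation='sigmoid'),
    ])
    model.load_weights('weights.h5')  # no re-training - just load the numbers

    img = image.load_img('data/live/what1.jpg', target_size=(150, 150))
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # batch of 1
    print(model.predict(x)[0][0])  # ~0 => cat, ~1 => dog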
Supply (upload, into live/) a what1.jpg cat image, and what2.jpg dog image, then execute the cell. Hopefully you'd get a 0, and 1 (for what1.jpg and what2.jpg, respectively). The images can be any resolution (size) and aspect ratio (squarishness), but nearly-square pics would work best. Try this with pics of your pets, your neighbors', images from a Google search, even your drawings/paintings... Isn't this cool? Our little network can classify!
Just FYI, note that the classification code in classify.ipynb could have simply been inside a new cell in train.ipynb instead. The advantage of multiple code cells inside a notebook, as opposed to multiple code blocks in a script, is that in a notebook, code cells can be independently executed one at a time (usually sequentially) - so if both of our programs were in the same notebook, we would run the training code first (just once), followed by classification (possibly multiple times); a script, on the other hand, can't be re/executed in parts.
Q2 [1+1=2 points]. Create a screenshot that shows the [correct] classification (you'll also be submitting your what{1,2}.jpg images with this).
What about misclassification? After all, we trained with "just" 1000 (not 1000000) images each, for about an 80% accurate prediction. What if we input 'difficult' images, of a cat that looks like it could be labeled a dog, and the other way around? :)
Q3 [1+1=2 points]. Get a 'Corgi' image [the world's smartest (https://www.buzzfeed.com/mjs538/why-corgis-are-the-smartest-animals) dogs!], and a 'dog-like' cat image [hint, it's all about the ears!], upload to live/, attempt to (mis)classify, ie. create incorrect results (where the cat pic outputs a 1, and the dog's, 0), make a screenshot. Note that you need to edit the code to point myPic and myPic2 to these image filenames.
Here's a checklist of what to submit [as a single .zip file]:
• weights.h5, and a screenshot from HDFView
• your 'good' cat and dog pics, and screenshot that shows proper classification
• your 'trick' cat and dog pics, and screenshot that shows misclassification
All done - hope you had fun, and learned a lot!
Note - you can continue using Colab to run all sorts of notebooks (https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks) [on Google's cloud GPUs!], including ones with TensorFlow, Keras, PyTorch... etc. ML code. Here are specific improvements/extras you can do [after completing the course - NOT for credit towards this HW!], related to the project:
• instead of weights.h5, write out a different file that will contain the NN model (architecture) as well as the weights; then, remove the model creation part from the classifier code, and instead just load the file you saved - that will re-create the model from the training step and associate the read-in weights with it [see the sketch after this list]
• encapsulate the model creation code in a function called buildModel(), and have it return a model - then use it for training
• encapsulate the classification code in a function called classify() and have it accept an array of filenames and return an array of classification values; then use it to classify the 'good' cat/dog pair and the 'bad' cat/dog pair, all using a single call to classify()
• combine the code from train.ipynb and classify.ipynb into a single notebook where you annotate the steps
• read the weights from your output file, and save as JSON. Then, create a web page with embedded JavaScript code that recreates the model that was used in training, gets an image URL from the user, then uses the model to classify, and to inform the user of the result.
• similar to the above - create an Android or iOS smartphone app that does the classification, of pictures taken with the phone camera
• redo the project by porting it to PyTorch, TensorFlow, Theano
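[For the first extra, Keras's built-in whole-model save/load is the natural fit - a minimal sketch; the tiny stand-in model and the filename are assumptions:]

    # Sketch for the first extra: save architecture AND weights together,
    # then load without re-creating the model. The one-layer model below
    # is just a runnable stand-in for the real cats-vs-dogs model.
    from keras.models import Sequential, load_model
    from keras.layers import Dense

    model = Sequential([Dense(1, input_shape=(4,))])
    model.save('cats_vs_dogs_model.h5')  # whole model: architecture + weights

    # ...later, in the classifier - no Sequential() re-creation needed:
    model2 = load_model('cats_vs_dogs_model.h5')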
