Cmput 466 Assignment 8

Naïve Bayes Model
● For simplicity, we only consider binary features
● The generative model is: the target 𝑡 follows a categorical distribution, and given 𝑡, each binary feature 𝑥ᵢ is drawn independently from a Bernoulli distribution, so that Pr[𝑥, 𝑡] = Pr[𝑡] ∏ᵢ Pr[𝑥ᵢ | 𝑡].
Here, a Bernoulli distribution parametrized by π means that
Pr[𝑋 = 1] = π and Pr[𝑋 = 0] = 1 − π.
It is a special case of the categorical distribution with only two outcomes.
● Such a model can be used to represent a document in text classification. For example,
the target indicates Spam or NotSpam, and each feature indicates whether a particular
vocabulary word occurs in the document.
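The binary document representation above can be sketched as follows (the vocabulary and documents are made-up examples, not from the course):

```python
# Sketch: a document as a binary feature vector over a fixed vocabulary.
vocab = ["free", "money", "meeting", "report"]  # hypothetical vocabulary

def to_binary_features(document, vocab):
    """x_i = 1 iff vocabulary word i occurs in the document."""
    words = set(document.lower().split())
    return [1 if w in words else 0 for w in vocab]

spam_doc = "free money free prizes"
ham_doc = "quarterly report for the meeting"

print(to_binary_features(spam_doc, vocab))  # [1, 1, 0, 0]
print(to_binary_features(ham_doc, vocab))   # [0, 0, 1, 1]
```

Note that only occurrence matters: "free" appears twice in the spam document but its feature is still 1, which is exactly the Bernoulli (rather than count-based) view of the data.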
Problem 1. Show that the parameters of naïve Bayes decompose, i.e., the probability
factorizes so that each parameter can be estimated separately (for the same reason as in Gaussian mixture models).
Problem 2. Write out the MLE for naïve Bayes (which is simply counting).
Hint: No proof is needed for the second part, because the MLE for a categorical distribution
was already derived for Gaussian mixture models.
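As the hint says, the MLE is just counting; one way to sanity-check the counting formulas numerically is the sketch below (the tiny dataset is made up for illustration):

```python
from collections import Counter

# Toy dataset of (binary feature vector, label) pairs -- made-up example.
data = [
    ([1, 1, 0], "spam"), ([1, 0, 0], "spam"), ([1, 1, 1], "spam"),
    ([0, 0, 1], "ham"),  ([0, 1, 1], "ham"),
]
n = len(data)
d = len(data[0][0])

# MLE of the prior Pr[t]: the fraction of samples with label t.
prior = {t: c / n for t, c in Counter(lab for _, lab in data).items()}

# MLE of Pr[x_i = 1 | t]: among samples with label t,
# the fraction in which feature i occurs.
cond = {}
for t in prior:
    rows = [x for x, lab in data if lab == t]
    cond[t] = [sum(x[i] for x in rows) / len(rows) for i in range(d)]

print(prior)         # {'spam': 0.6, 'ham': 0.4}
print(cond["spam"])  # [1.0, 2/3, 1/3]
```

Because the likelihood factorizes (Problem 1), each of these ratios can be computed independently of the others, with no iterative optimization.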
END OF W8