Build a Sentiment Analysis app with Movie Reviews
0. Introduction to NLP and Sentiment Analysis
1. Natural Language Processing with NLTK
3. Build a sentiment analysis program
4. Sentiment Analysis with Twitter
5. Analysing the Enron Email Corpus
6. Build a Spam Filter using the Enron Corpus
So now we use everything we have learnt to build a Sentiment Analysis app.
Sentiment Analysis means finding the mood of the public about things like movies, politicians, stocks, or even current events. We will analyse the sentiment of the movie reviews corpus we saw earlier.
Let’s import our libraries:
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
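If this is your first time using these corpora, you may also need a one-time download of the data the code below relies on (these are the resource names nltk.download uses; skip any you already have):

import nltk

# One-time downloads of the corpora and tokenizer data used below
nltk.download('movie_reviews')
nltk.download('stopwords')
nltk.download('punkt')   # needed by word_tokenize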
We will be using the Naive Bayes classifier for this example. Naive Bayes is a fairly simple machine learning algorithm that works mainly with probabilities. Stack Overflow has a great (if slightly long) explanation of how it works; the top two answers are worth reading.
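To give a flavour of the idea, here is a toy sketch of the Naive Bayes intuition (my own illustration, with made-up word counts, not what nltk does internally): pick the class that maximises P(class) times the product of P(word | class), with a little smoothing so unseen words don't zero everything out.

from collections import Counter

# Toy word counts for each class (made-up data, just for illustration)
pos_counts = Counter("great great fun touching".split())
neg_counts = Counter("awful boring awful mess".split())

def naive_bayes_score(review_words, counts, prior=0.5):
    total = sum(counts.values())
    score = prior
    for word in review_words:
        # add-one smoothing so unseen words don't make the product zero
        score *= (counts[word] + 1) / (total + len(counts))
    return score

review = "great fun".split()
pos = naive_bayes_score(review, pos_counts)
neg = naive_bayes_score(review, neg_counts)
print("positive" if pos > neg else "negative")   # prints: positive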
Before we start, there is something that had me stumped for a long time. I saw it in all the examples, but it didn't make sense: the Naive Bayes classifier in the nltk library expects its input in a particular format, where every word is mapped to True. So, for example, if you have these words:
"Hello World"
you need to pass it in as:
{'Hello': True, 'World': True}
NOTE: It's just a quirk of the nltk library. What bothers me is that none of the dozens of tutorials/videos I looked at makes this clear. They just write this weird code to do it and expect you to figure it out for yourself. (Hurray for us.)
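To be precise, when we train the classifier later, each training example will be one of these dictionaries paired with a label, like this (a made-up example):

# A single training example as nltk's NaiveBayesClassifier sees it:
# a (feature_dict, label) tuple. True just means "this word is present".
example = ({'Hello': True, 'World': True}, 'positive')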
I’ll show you the function I wrote, and hopefully, you will understand why we need to do it this way. Here is the function:
# This is how the Naive Bayes classifier expects the input
def create_word_features(words):
    useful_words = [word for word in words if word not in stopwords.words("english")]
    my_dict = dict([(word, True) for word in useful_words])
    return my_dict
Let’s go over it line by line.
# This is how the Naive Bayes classifier expects the input
def create_word_features(words):
    useful_words = [word for word in words if word not in stopwords.words("english")]
The first thing we do is remove all stopwords. This is what we did in the last lesson. This step is optional.
    my_dict = dict([(word, True) for word in useful_words])
    return my_dict
Next, we build a dictionary that maps each useful word to True. Why a dictionary? So that words are not repeated: if a word appears more than once, it still ends up as a single key.
Let’s see how this works:
create_word_features(["the", "quick", "brown", "quick", "a", "fox"])
{'brown': True, 'fox': True, 'quick': True}
We call our function with the words of “the quick brown quick a fox”, passed in as a list.
You can see that a) The stop words are removed b) Repeat words are removed c) There is a True with each word.
Again, this is just the format the Naive Bayes classifier in nltk expects.
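One small aside (my own tweak, not part of the lesson's code): the list comprehension above calls stopwords.words("english") once for every word, which is slow on long reviews. Building the stopword set once and reusing it is noticeably faster and gives the same result:

from nltk.corpus import stopwords

# Build the stopword set once; set membership checks are much faster
# than scanning a freshly-built list for every single word.
english_stopwords = set(stopwords.words("english"))

def create_word_features_fast(words):
    useful_words = [word for word in words if word not in english_stopwords]
    return {word: True for word in useful_words}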
Okay, let’s get to the main code. Remember, a sentiment analyser is just a machine learning algorithm that has been trained to identify positive/negative reviews.
neg_reviews = []
for fileid in movie_reviews.fileids('neg'):
We create an empty list called neg_reviews. Next, we loop over all the files in the neg folder.
    words = movie_reviews.words(fileid)
We get all the words in that file.
    neg_reviews.append((create_word_features(words), "negative"))
Then we use the function we wrote earlier to create word features in the format nltk expects, and append them to neg_reviews together with the label "negative". Here is a sample of the output:
print(neg_reviews[0])
print(len(neg_reviews))
({'entire': True, 'really': True, '.': True, 'beauty': True, 'generally': True, 'trying': True, 'ago': True, 'mess': True, 'personally': True,
'starts': True, 'character': True, 'figured': True, 'throughout': True, 'ever': True, 'even': True, ...}, 'negative')
1000
So there are 1000 negative reviews.
Let’s do the same for the positive reviews. The code is exactly the same:
pos_reviews = []
for fileid in movie_reviews.fileids('pos'):
    words = movie_reviews.words(fileid)
    pos_reviews.append((create_word_features(words), "positive"))
#print(pos_reviews[0])
print(len(pos_reviews))
1000
So we have 1000 negative and 1000 positive reviews, for a total of 2000. We will now create our train and test samples, this time manually:
train_set = neg_reviews[:750] + pos_reviews[:750]
test_set = neg_reviews[750:] + pos_reviews[750:]
print(len(train_set), len(test_set))
1500 500
We end up with 1500 training samples and 500 test samples.
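One optional tweak (my own, not from the lesson): the slices above simply take the first 750 files of each class. If you are worried the corpus files might be ordered in some systematic way, you can shuffle each list before slicing:

import random

random.seed(1234)   # any fixed seed, just so the split is reproducible
random.shuffle(neg_reviews)
random.shuffle(pos_reviews)

train_set = neg_reviews[:750] + pos_reviews[:750]
test_set = neg_reviews[750:] + pos_reviews[750:]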
Let’s create our Naive Bayes Classifier, and train it with our training set.
classifier = NaiveBayesClassifier.train(train_set)
And let’s use our test set to find the accuracy:
accuracy = nltk.classify.util.accuracy(classifier, test_set)
print(accuracy * 100)
72.6
An accuracy of 72.6%. Could you improve it? How?
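One place to start digging (not part of the original lesson): nltk's NaiveBayesClassifier can tell you which features it leans on most heavily, which often reveals both genuinely useful words and noise you might want to filter out:

# Show the ten features with the highest positive/negative likelihood ratio
classifier.show_most_informative_features(10)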
For now, I want to show you how to classify a review as negative or positive. But before that, a warning.
The problem with sentiment analysis, as with any machine learning approach, is that your algorithm is only as good as your data. If your data is crap, your algorithm will be crap.
Not only that, the algorithm depends on the type of input you train it with. So if you train your data with long movie reviews, it will not work with Twitter data, which is much shorter.
This particular dataset is, imo, a bit small. Also, the reviews are very informal, with a lot of swearing, which is why I found the classifier not very accurate on IMDb reviews, where swearing is discouraged and the writing is (slightly) more formal.
Anyway, I went looking for negative and positive reviews to test it on. Our algorithm is more accurate when the review contains stronger words (horrible instead of bad). For the negative review, I found this gem of a movie. A real masterpiece:
review_santa = '''
It would be impossible to sum up all the stuff that sucks about this film, so I'll break it down into what I remember most strongly: a man in an ingeniously fake-looking polar bear costume (funnier than the "bear" from Hercules in New York); an extra with the most unnatural laugh you're ever likely to hear; an ex-dope addict martian with tics; kid actors who make sure every syllable of their lines are slowly and caaarreee-fulll-yyy prrooo-noun-ceeed; a newspaper headline stating that Santa's been "kidnaped", and a giant robot. Yes, you read that right. A giant robot.
The worst acting job in here must be when Mother Claus and her elves have been "frozen" by the "Martians'" weapons. Could they be *more* trembling? I know this was the sixties and everyone was doped up, but still.
'''
print(review_santa)
We need to word_tokenize the text, call our function on it, and then use the classify() function to let our algorithm decide whether this is a positive or negative review.
words = word_tokenize(review_santa)
words = create_word_features(words)
classifier.classify(words)
'negative'
That was correct, but only because the review was really scathing.
For the positive review, I chose one of my favourite movies, Spirited Away, a very beautiful movie:
review_spirit = '''
'Spirited Away' is the first Miyazaki I have seen, but from this stupendous film I can tell he is a master storyteller. A hallmark of a good storyteller is making the audience empathise or pull them into the shoes of the central character. Miyazaki does this brilliantly in 'Spirited Away'. During the first fifteen minutes we have no idea what is going on. Neither does the main character Chihiro. We discover the world as Chihiro does and it's truly amazing to watch. But Miyazaki doesn't seem to treat this world as something amazing. The world is filmed just like our workaday world would. The inhabitants of the world go about their daily business as usual as full with apathy as us normal folks. Places and buildings are not greeted by towering establishing shots and majestic music. The fact that this place is amazing doesn't seem to concern Miyazaki.
What do however, are the characters. Miyazaki lingers upon the characters as if they were actors. He infixes his animated actors with such subtleties that I have never seen, even from animation giants Pixar. Twenty minutes into this film and I completely forgot these were animated characters; I started to care for them like they were living and breathing. Miyazaki treats the modest achievements of Chihiro with unashamed bombast. The uplifting scene where she cleanses the River God is accompanied by stirring music and is as exciting as watching gladiatorial combatants fight. Of course, by giving the audience developed characters to care about, the action and conflicts will always be more exciting, terrifying and uplifting than normal, generic action scenes.
'''
print(review_spirit)
Repeat the steps:
words = word_tokenize(review_spirit)
words = create_word_features(words)
classifier.classify(words)
'positive'
Correct again, but I’d like to repeat: the classifier isn’t very accurate overall, I suspect because the training sample is small and not very representative of IMDb reviews. But it’s good enough for learning.
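If you plan to classify more reviews, you could wrap the three steps into a small helper (my own convenience function, not from the lesson):

def classify_review(text):
    # Tokenize the raw text, build the word features, then classify
    words = word_tokenize(text)
    features = create_word_features(words)
    return classifier.classify(features)

print(classify_review(review_santa))    # 'negative'
print(classify_review(review_spirit))   # 'positive'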
Okay, the next video is not just a practice session but also contains some learning exercises, so I strongly recommend you do it. We will build a sentiment analysis engine with Twitter data.