Assignment 2: Entity Resolution (Part 1)
Objective
Data scientists often spend 80% of their time on data preparation. If your career goal is to become a data scientist, you have to master data cleaning and data integration skills. In this assignment, you will learn how to solve the Entity Resolution (ER) problem, a very common problem in data cleaning and integration. After completing this assignment, you should be able to answer the following questions:
What is ER?
What are the applications of ER in data integration and cleaning?
How to avoid n^2 comparisons?
How to compute Jaccard Similarity?
How to evaluate an ER result?
Requirements:
Please use pandas.DataFrame rather than spark.DataFrame to manipulate data.
Please follow the Python code style guide (https://www.python.org/dev/peps/pep-0008/). If the TA finds your code hard to read, you will lose points. This requirement will stay in place for the whole semester.
The data for Assignment 2 (Part 1 and Part 2) can be downloaded from A2-data.zip.
Overview
ER is defined as finding different records that refer to the same real-world entity, e.g., "iPhone 4th generation" vs. "iPhone four". It is central to data integration and cleaning. In this assignment, you will learn how to apply ER in a data integration setting. But the program that you are going to write can be easily extended to a data-cleaning setting, e.g., to detect duplicate values.
Imagine that you want to help your company's customers buy products at cheaper prices. To do so, you first write a web scraper to crawl product data from Amazon.com and Google Shopping, respectively, and then integrate the data. Since the same product may have different representations on the two websites, you are facing an ER problem.
Existing ER techniques can be broadly divided into two categories: similarity-based (Part 1) and learning-based (Part 2).
Similarity Join
Unlike a learning-based technique, a similarity-based technique (a.k.a. similarity join) does not need any labeled data. It first chooses a similarity function and a threshold, and then returns the record pairs whose similarity values are above the threshold. These returned record pairs are thought of as matching pairs, i.e., referring to the same real-world entity.
Depending on the particular application, you may need to choose different similarity functions. In this assignment, we will use Jaccard similarity, i.e., Jaccard(r, s) = |r ∩ s| / |r ∪ s|. Here is the formal definition of this problem.
Jaccard-Similarity Join: Given two DataFrames, R and S, and a threshold θ ∈ (0, 1], the Jaccard-similarity join problem aims to find all record pairs (r, s) ∈ R × S such that Jaccard(r, s) ≥ θ.
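For example, here is a quick illustration of the formula on two small token sets (the set contents are made up for illustration):

r = {'iphone', '4', 'black'}
s = {'iphone', 'four', 'black'}

# |r ∩ s| = 2 ({'iphone', 'black'}) and |r ∪ s| = 4,
# so Jaccard(r, s) = 2 / 4 = 0.5.
print(len(r & s) / len(r | s))  # 0.5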
To implement similarity join, you need to address the following challenges:
Jaccard is used to quantify the similarity between two sets instead of two records. You need to convert each record to a set.
A naive implementation of similarity join is to compute Jaccard for all |R × S| possible pairs. Imagine R and S each have one million records. This requires 10^12 pair comparisons, which is prohibitively expensive. Thus, you need to know how to avoid n^2 comparisons.
The output of ER is a set of matching pairs, where each pair is considered as referring to the same real-world entity. You need to know how to evaluate the quality of an ER result.
Next, you will be guided to complete four tasks. After finishing these tasks, I suggest going over the above challenges again to understand how they are addressed.
Read the code first, and then implement the remaining four functions: preprocess_df, filtering, verification, and evaluate by doing Tasks A-D, respectively.
# similarity_join.py
import re
import pandas as pd


class SimilarityJoin:
    def __init__(self, data_file1, data_file2):
        self.df1 = pd.read_csv(data_file1)
        self.df2 = pd.read_csv(data_file2)

    def preprocess_df(self, df, cols):
        """
        Write your code!
        """

    def filtering(self, df1, df2):
        """
        Write your code!
        """

    def verification(self, cand_df, threshold):
        """
        Write your code!
        """

    def evaluate(self, result, ground_truth):
        """
        Write your code!
        """

    def jaccard_join(self, cols1, cols2, threshold):
        new_df1 = self.preprocess_df(self.df1, cols1)
        new_df2 = self.preprocess_df(self.df2, cols2)
        print("Before filtering: %d pairs in total" % (self.df1.shape[0] * self.df2.shape[0]))

        cand_df = self.filtering(new_df1, new_df2)
        print("After Filtering: %d pairs left" % (cand_df.shape[0]))

        result_df = self.verification(cand_df, threshold)
        print("After Verification: %d similar pairs" % (result_df.shape[0]))

        return result_df


if __name__ == "__main__":
    er = SimilarityJoin("Amazon_sample.csv", "Google_sample.csv")
    amazon_cols = ["title", "manufacturer"]
    google_cols = ["name", "manufacturer"]
    result_df = er.jaccard_join(amazon_cols, google_cols, 0.5)

    result = result_df[['id1', 'id2']].values.tolist()
    ground_truth = pd.read_csv("Amazon_Google_perfectMapping_sample.csv").values.tolist()
    print("(precision, recall, fmeasure) = ", er.evaluate(result, ground_truth))
The program will output the following when running on the sample data:
Before filtering: 256 pairs in total
After Filtering: 84 pairs left
After Verification: 6 similar pairs
(precision, recall, fmeasure) = (1.0, 0.375, 0.5454545454545454)
Task A. Data Preprocessing (Record --> Token Set)
Since Jaccard needs to take two sets as input, your first job is to preprocess DataFrames by transforming each record into a set of tokens. Please implement the following function.
def preprocess_df(self, df, cols):
    """
    Input: $df represents a DataFrame
           $cols represents the list of columns (in $df) that will be concatenated and tokenized
    Output: Return a new DataFrame that adds the "joinKey" column to the input $df
    Comments: The "joinKey" column is a list of tokens, which is generated as follows:
              (1) concatenate the $cols in $df;
              (2) apply the tokenizer to the concatenated string
    Here is how the tokenizer should work:
              (1) Use "re.split(r'\W+', string)" to split a string into a list of tokens
              (2) Convert each token to lower case
    """
For the purpose of testing, you can compare your outputs with new_df1 and new_df2 that can be found from the Amazon-Google-Sample folder.
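For reference, here is a minimal sketch of the tokenizer logic described above (the helper name is illustrative, and the handling of missing values and of the empty strings that re.split can produce at string boundaries are assumptions; verify the exact behavior against the provided new_df1 and new_df2):

import re
import pandas as pd

def tokenize(row, cols):
    # Concatenate the chosen columns (skipping NaN values), split on
    # non-word characters, lower-case each token, and drop empty strings.
    concatenated = ' '.join(str(row[c]) for c in cols if pd.notnull(row[c]))
    return [t.lower() for t in re.split(r'\W+', concatenated) if t]

# Possible usage inside preprocess_df:
# df['joinKey'] = df.apply(lambda row: tokenize(row, cols), axis=1)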
Task B. Filtering Obviously Non-matching Pairs
To avoid n^2 pair comparisons, ER algorithms often follow a filtering-and-verification framework. The basic idea is to first filter out obviously non-matching pairs and then verify only the remaining pairs.
In Task B, your job is to implement the filtering function. This function will filter out all the record pairs whose joinKeys do not share any token. This is because, based on the definition of Jaccard, if two sets do not share any element (i.e., r ∩ s = ∅), their Jaccard similarity must be zero. Thus, we can safely remove such pairs.
def filtering(self, df1, df2):
    """
    Input: $df1 and $df2 are two input DataFrames, where each of them
           has a 'joinKey' column added by the preprocess_df function
    Output: Return a new DataFrame $cand_df with four columns: 'id1', 'joinKey1', 'id2', 'joinKey2',
            where 'id1' and 'joinKey1' are from $df1, and 'id2' and 'joinKey2' are from $df2.
            Intuitively, $cand_df is the joined result between $df1 and $df2 on the condition that
            their joinKeys share at least one token.
    Comments: Since the goal of the "filtering" function is to avoid n^2 pair comparisons,
              you are NOT allowed to compute a cartesian join between $df1 and $df2 in the function.
              Please come up with a more efficient algorithm (see hints in Lecture 2).
    """
For the purpose of testing, you can compare your output with cand_df that can be found from the Amazon-Google-Sample folder.
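One common way to meet this requirement is a token-based join: index each record by its individual tokens and join the two sides on shared tokens, so only records with at least one common token are ever paired. Below is a minimal sketch using pandas explode; it assumes both DataFrames have an 'id' column, and it is one possible approach rather than the required one:

import pandas as pd

def filter_candidates(df1, df2):
    # Flatten each joinKey list so every (id, token) pair becomes a row.
    flat1 = df1[['id', 'joinKey']].explode('joinKey').rename(
        columns={'id': 'id1', 'joinKey': 'token'})
    flat2 = df2[['id', 'joinKey']].explode('joinKey').rename(
        columns={'id': 'id2', 'joinKey': 'token'})
    # Join on the shared token and de-duplicate; this avoids the
    # cartesian product because only co-occurring tokens are paired.
    pairs = flat1.merge(flat2, on='token')[['id1', 'id2']].drop_duplicates()
    # Attach the original joinKeys back to each surviving pair.
    pairs = pairs.merge(df1[['id', 'joinKey']].rename(
        columns={'id': 'id1', 'joinKey': 'joinKey1'}), on='id1')
    pairs = pairs.merge(df2[['id', 'joinKey']].rename(
        columns={'id': 'id2', 'joinKey': 'joinKey2'}), on='id2')
    return pairs[['id1', 'joinKey1', 'id2', 'joinKey2']]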
Task C. Computing Jaccard Similarity for Surviving Pairs
In the second phase of the filtering-and-verification framework, we compute the Jaccard similarity for each surviving pair and return those pairs whose Jaccard similarity is no smaller than the specified threshold.
In Task C, your job is to implement the verification function. This task looks simple, but there are a few small "traps".
def verification(self, cand_df, threshold):
    """
    Input: $cand_df is the output DataFrame from the 'filtering' function.
           $threshold is a float value in the range (0, 1]
    Output: Return a new DataFrame $result_df that represents the ER result.
            It has five columns: id1, joinKey1, id2, joinKey2, jaccard
    Comments: There are two differences between $cand_df and $result_df
              (1) $result_df adds a new column, called jaccard, which stores the jaccard similarity
                  between $joinKey1 and $joinKey2
              (2) $result_df removes the rows whose jaccard similarity is smaller than $threshold
    """
For the purpose of testing, you can compare your output with result_df that can be found from the Amazon-Google-Sample folder.
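A minimal sketch of the per-pair Jaccard computation (one likely "trap" is that the joinKeys are token lists that may contain duplicate tokens, so convert them to sets first; the helper name is illustrative):

def jaccard(key1, key2):
    # Convert the token lists to sets before computing |r ∩ s| / |r ∪ s|,
    # and guard against an empty union to avoid division by zero.
    r, s = set(key1), set(key2)
    union = len(r | s)
    return len(r & s) / union if union else 0.0

# Possible usage inside verification:
# cand_df['jaccard'] = cand_df.apply(
#     lambda row: jaccard(row['joinKey1'], row['joinKey2']), axis=1)
# result_df = cand_df[cand_df['jaccard'] >= threshold]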
Task D. Evaluating an ER result
How should we evaluate an ER result? Before answering this question, let's first recall what an ER result looks like. The goal of ER is to identify all matching record pairs, so the ER result is a set of identified matching pairs, denoted by R. One thing that we want to know is: what percentage of the pairs in R are truly matching? This is what Precision tells us. Let T denote the truly matching pairs in R. Precision is defined as:

Precision = |T| / |R|
In addition to Precision, another thing that we care about is: what percentage of the truly matching pairs are identified? This is what Recall tells us. Let A denote the truly matching pairs in the entire dataset. Recall is defined as:

Recall = |T| / |A|
There is an interesting trade-off between Precision and Recall. As more and more pairs are identified as matching, Recall increases while Precision potentially decreases. In the extreme case, if we return all pairs as matching pairs, we get perfect Recall (i.e., Recall = 100%), but Precision will be the worst. Thus, to balance Precision and Recall, people often use FMeasure to evaluate an ER result:

FMeasure = (2 × Precision × Recall) / (Precision + Recall)
In Task D, you will be given an ER result as well as the ground truth that tells you which pairs are truly matching. Your job is to calculate Precision, Recall, and FMeasure for the result.
def evaluate(self, result, ground_truth):
    """
    Input: $result is a list of matching pairs identified by the ER algorithm
           $ground_truth is a list of matching pairs labeled by humans
    Output: Compute precision, recall, and fmeasure of $result based on $ground_truth, and
            return the evaluation result as a triple: (precision, recall, fmeasure)
    """
Submission
Implement preprocess_df, filtering, verification, and evaluate functions in similarity_join.py. Submit your code file (similarity_join.py) to the CourSys activity Assignment 2.