Deduplication
CSCI 572: Information Retrieval and Search Engines
Summer 2010
Outline
What is Deduplication?
Importance
Challenges
Approaches
What are web duplicates?
The same page, referenced by different URLs
http://espn.
What are the differences?
URL host (virtual hosts), sometimes protocol, sometimes page name, etc.
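One common way to catch this kind of duplicate is URL canonicalization: collapse trivially different URLs to a single key before crawling. A minimal sketch in Python (the `canonicalize` helper and its normalization rules are illustrative assumptions, not the lecture's exact procedure):

```python
from urllib.parse import urlparse, urlunparse

def canonicalize(url):
    """Reduce trivially different URLs to one canonical form (a sketch)."""
    p = urlparse(url.strip())
    scheme = p.scheme.lower() or "http"
    host = p.netloc.lower()
    # Drop default ports (:80 for http, :443 for https)
    if (scheme == "http" and host.endswith(":80")) or \
       (scheme == "https" and host.endswith(":443")):
        host = host.rsplit(":", 1)[0]
    # Treat a missing path as "/"
    path = p.path or "/"
    # Drop params and fragment; keep the query string
    return urlunparse((scheme, host, path, "", p.query, ""))

# Two references to the same page collapse to one crawl-frontier key:
canonicalize("HTTP://ESPN.com:80")   # -> "http://espn.com/"
canonicalize("http://espn.com/")     # -> "http://espn.com/"
```

A real crawler would add more rules (virtual-host aliases, sorted query parameters, trailing slashes), but the idea is the same: dedupe by canonical key before fetching.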
What are web duplicates?
Near-identical pages, referenced by the same URL
Two Google searches for “search engines”, run at different times
What are the differences?
The page is within some delta % similar to the other (where delta is a large number), but may differ in ads, counters, timestamps, etc.
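Near-duplicate detection of this kind is often done by comparing word shingles with Jaccard similarity: two pages that differ only in a counter or timestamp share almost all of their shingles. A small sketch (the sample pages and the k=3 shingle size are assumptions for illustration):

```python
def shingles(text, k=3):
    """Set of k-word shingles of a document."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two shingle sets."""
    return len(a & b) / len(a | b)

# Same page fetched twice; only the timestamp differs
page1 = "breaking news headline story sports scores updated 10:01"
page2 = "breaking news headline story sports scores updated 10:05"
sim = jaccard(shingles(page1), shingles(page2))  # 5/7, about 0.71
```

If `sim` exceeds the chosen delta threshold, the second fetch is treated as a near-duplicate and not stored or re-analyzed.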
Why is it important to consider duplicates?
In search engines, URLs tell the crawlers where to go and how to navigate the information space
Ideally, given the web’s scale and complexity, we’ll give priority to crawling content that we haven’t already stored or seen before
Saves resources (on the crawler end, as well as the remote host)
Increases crawler politeness
Reduces the analysis that we’ll have to do later
Why is it important to consider duplicates?
Identification of website mirrors (or copies of content) used to spread the load and bandwidth consumption
CPAN, Apache, etc.
If you identify a mirror, you can omit crawling many web pages and save crawler resources
“More Like This”
Finding similar content to what you were looking for
As we discussed during the lecture on the search engine architecture, much of the time in search engines is spent filtering through the results. Presenting similar documents can cut down on that filtering time
XML
XML documents often appear structurally very similar
What’s the difference between RSS and RDF and OWL and XSL and XSLT and any number of XML documents out there?
With the ability to identify similarity and reduce duplication of XML, we could identify XML documents with similar structure
RSS feeds that contain the same links
Differentiate RSS feeds (crawl more often) from other, less frequently updated content
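One way to compare XML documents by structure rather than content is to extract the set of root-to-node tag paths and compare those sets; two RSS feeds with different stories then still look identical. A sketch (the `tag_paths` helper and sample feeds are illustrative assumptions):

```python
import xml.etree.ElementTree as ET

def tag_paths(xml_text):
    """Set of root-to-node tag paths; ignores text, so only structure matters."""
    root = ET.fromstring(xml_text)
    paths = set()

    def walk(node, prefix):
        path = prefix + "/" + node.tag
        paths.add(path)
        for child in node:
            walk(child, path)

    walk(root, "")
    return paths

feed_a = "<rss><channel><item><title>A</title><link>x</link></item></channel></rss>"
feed_b = "<rss><channel><item><title>B</title><link>y</link></item></channel></rss>"
tag_paths(feed_a) == tag_paths(feed_b)  # True: same structure, different content
```

Path sets could also be compared with Jaccard similarity to cluster structurally similar documents (RSS vs. RDF vs. arbitrary XML) rather than requiring exact structural matches.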