Deduplication
CSCI 572: Information Retrieval and Search Engines
Summer 2010 (May-20-10, CS572-Summer2010)

Outline
- What is Deduplication?
- Importance
- Challenges
- Approaches

What are web duplicates?
- The same page, referenced by different URLs
  – http://espn.… and http://…
- What are the differences?
  – URL host (virtual hosts), sometimes protocol, sometimes page name, etc.

What are web duplicates?
- Near-identical page, referenced by the same URL
  – [Screenshots: two Google searches for "search engines"]
- What are the differences?
  – The page is within some delta % similar to the other (where delta is a large number), but may differ in ads, counters, timestamps, etc.

Why is it important to consider duplicates?
- In search engines, URLs tell the crawlers where to go and how to navigate the information space
- Ideally, given the web's scale and complexity, we'll give priority to crawling content that we haven't already stored or seen before
  – Saves resources (on the crawler end, as well as the remote host)
  – Increases crawler politeness
  – Reduces the analysis that we'll have to do later

Why is it important to consider duplicates?
- Identification of website mirrors (or copies of content) used to spread the load and bandwidth consumption
  – CPAN, Apache, etc.
- If you identify a mirror, you can omit crawling many web pages and save crawler resources

"More Like This"
- Finding similar content to what you were looking for
  – As we discussed during the lecture on search engine architecture, much of the time in search engines is spent filtering through the results. Presenting similar documents can cut down on that filtering time.

XML
- XML documents structurally appear very similar
  – What's the difference between RSS and RDF a…
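The "within some delta % similar" idea from the near-duplicate slide can be made concrete with word shingling and Jaccard similarity, a standard way to score near-duplicate pages. This is a minimal sketch, not the course's prescribed algorithm; the sample documents and the shingle size k=4 are illustrative assumptions:

```python
def shingles(text, k=4):
    """Set of k-word shingles (contiguous word n-grams) of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity |A intersect B| / |A union B| of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical near-duplicate pair: identical text except one word
# (e.g., a timestamp or counter that changed between fetches)
doc1 = "breaking news sports scores updated every minute on our site"
doc2 = "breaking news sports scores updated every hour on our site"
sim = jaccard(shingles(doc1), shingles(doc2))
```

On these tiny toy documents a single changed word knocks out every shingle that overlaps it, so the score is low; on realistic page-length text, near-duplicates that differ only in ads or timestamps score close to 1.0, and a threshold (the "delta") decides whether to treat them as the same page.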
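The "same page, different URLs" case is usually attacked by canonicalizing URLs before comparing them, so that trivially different references (host case, default port, fragment) map to one key. A minimal sketch using Python's urllib.parse; the particular normalization rules chosen here are common conventions, assumed for illustration rather than taken from the slides:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url):
    """Canonicalize a URL so trivially different references compare equal:
    lowercase the scheme and host, drop default ports and fragments,
    and collapse an empty path to '/'."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    port = parts.port
    # Keep the port only when it is not the default for the scheme
    if port and (scheme, port) not in {("http", 80), ("https", 443)}:
        host = f"{host}:{port}"
    path = parts.path or "/"
    return urlunsplit((scheme, host, path, parts.query, ""))

# Two references to the same page collapse to one canonical form
a = normalize_url("HTTP://WWW.Example.COM:80/a.html#top")
b = normalize_url("http://www.example.com/a.html")
```

A crawler frontier can key its "seen" set on the normalized form, so the duplicate reference is never fetched a second time.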