Deduplication
CSCI 572: Information Retrieval and Search Engines
Summer 2010
Outline
What is Deduplication?
Importance
Challenges
Approaches
What are web duplicates?
The same page, referenced by different URLs
http://espn.com
What are the differences?
URL host (virtual hosts), sometimes protocol, sometimes page name, etc.
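To make the URL-variant case concrete, here is a minimal Python sketch (an illustration, not something from the slides) of canonicalizing URLs so that trivial spelling differences collapse to a single crawl key. The specific normalization rules and the espn.com example are assumptions; real crawlers apply many more rules (www-prefix handling, sorted query parameters, virtual-host resolution, etc.).

from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    # Collapse trivial variants (scheme/host case, default port, empty path)
    # so that different spellings of the same page map to one key.
    scheme, netloc, path, query, _fragment = urlsplit(url)
    scheme = scheme.lower()
    netloc = netloc.lower()
    if scheme == "http" and netloc.endswith(":80"):
        netloc = netloc[:-3]
    if scheme == "https" and netloc.endswith(":443"):
        netloc = netloc[:-4]
    path = path or "/"
    return urlunsplit((scheme, netloc, path, query, ""))

# Both spellings collapse to the same canonical key:
print(canonicalize("HTTP://ESPN.com:80/") == canonicalize("http://espn.com/"))  # True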
What are web duplicates?
Near-identical pages, referenced by the same URL
Example: two runs of a Google search for “search engines” (two near-identical result-page screenshots)
What are the differences?
The page is within some delta % similar to the other (where delta is a large number), but may differ in, e.g., ads, counters, timestamps, etc.
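One common way to make “within some delta % similar” operational, offered here as an assumed sketch rather than the approach the slides prescribe, is word shingling with Jaccard resemblance: two pages are near-duplicates when their sets of overlapping word windows agree above a threshold.

def shingles(text, k=4):
    # The set of k-word shingles (overlapping word windows) of a page's text.
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def resemblance(page_a, page_b, k=4):
    # Jaccard similarity of the two shingle sets, in [0, 1].
    a, b = shingles(page_a, k), shingles(page_b, k)
    return len(a & b) / len(a | b) if (a | b) else 1.0

common = ("search engines results for your query listing ten organic links "
          "with titles snippets and cached copies")
page_1 = common + " sponsored ad alpha generated at 10:01"
page_2 = common + " sponsored ad beta generated at 10:02"
print(round(resemblance(page_1, page_2), 2))  # about 0.65 for this toy pair

Real systems compress the shingle sets into small sketches (e.g., MinHash) so that billions of pages can be compared without storing every shingle.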
Why is it important to consider duplicates?
In search engines, URLs tell the crawlers where to go and how to navigate the information space
Ideally, given the web’s scale and complexity, we’ll give priority to crawling content that we haven’t already stored or seen before
Saves resources (on the crawler end, as well as the remote host)
Increases crawler politeness
Reduces the analysis that we’ll have to do later
Why is it important to consider duplicates?
Identification of website mirrors (or copies of content) used to spread the load and bandwidth consumption
E.g., CPAN, Apache, etc.
If you identify a mirror, you can omit crawling many web pages and save crawler resources
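As a rough illustration of mirror detection, assumed here rather than taken from the slides, a crawler can fingerprint fetched content and flag pairs of hosts that serve many identical documents; the host names, documents, and threshold below are made up.

import hashlib
from collections import defaultdict
from itertools import combinations

def fingerprint(content):
    # Stable hash of a page's content; byte-identical copies collide.
    return hashlib.sha1(content.encode("utf-8")).hexdigest()

def mirror_candidates(pages, min_shared=2):
    # pages: iterable of (host, path, content) tuples from the crawl.
    # Returns host pairs that serve at least `min_shared` identical documents.
    docs_by_host = defaultdict(set)
    for host, _path, content in pages:
        docs_by_host[host].add(fingerprint(content))
    pairs = []
    for h1, h2 in combinations(sorted(docs_by_host), 2):
        shared = len(docs_by_host[h1] & docs_by_host[h2])
        if shared >= min_shared:
            pairs.append((h1, h2, shared))
    return pairs

crawl = [
    ("mirror1.example.org", "/README", "apache httpd readme"),
    ("mirror1.example.org", "/CHANGES", "changelog text"),
    ("mirror2.example.org", "/README", "apache httpd readme"),
    ("mirror2.example.org", "/CHANGES", "changelog text"),
]
print(mirror_candidates(crawl))  # [('mirror1.example.org', 'mirror2.example.org', 2)]

Once a pair of hosts is confirmed as mirrors, the frontier can drop one host’s URLs and spend the crawl budget on new content.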
“More Like This”
Finding similar content to what you were looking for
As we discussed during the lecture on search engine architecture, much of the time spent with a search engine goes to filtering through the results; presenting similar documents can cut down on that filtering time
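A minimal sketch of how “More Like This” could be scored, assuming plain term-frequency vectors and cosine similarity; a production engine would use TF-IDF (or a similar weighting) and an inverted index instead of scanning the whole corpus.

import math
from collections import Counter

def term_vector(text):
    # Bag-of-words term frequencies for one document.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def more_like_this(doc, corpus, top_k=3):
    # Rank every stored document by similarity to `doc`; return the best matches.
    query = term_vector(doc)
    return sorted(corpus, key=lambda d: cosine(query, term_vector(d)), reverse=True)[:top_k]

print(more_like_this("web crawler politeness",
                     ["crawler politeness rules", "cooking recipes", "web search ranking"],
                     top_k=1))  # ['crawler politeness rules']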
XML
XML documents structurally appear very similar
What’s the difference between RSS and RDF and OWL and XSL and XSLT and any number of XML documents out there?
With the ability to identify similarity and reduce duplication of XML, we could identify XML documents with similar structure
RSS feeds that contain the same links
Differentiate RSS (crawl more often) from other, less frequently updated content
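A small sketch of the RSS case, under the assumption that two feeds count as duplicates when their item links mostly overlap; the parsing shortcut and the 0.8 threshold are illustrative choices, not something the slides specify.

import xml.etree.ElementTree as ET

def item_links(rss_xml):
    # The set of per-item <link> URLs in an RSS 2.0 document.
    root = ET.fromstring(rss_xml)
    return {item.findtext("link", "").strip() for item in root.iter("item")} - {""}

def duplicate_feeds(rss_a, rss_b, threshold=0.8):
    # Treat two feeds as duplicates when their item links mostly overlap.
    a, b = item_links(rss_a), item_links(rss_b)
    if not (a | b):
        return True
    return len(a & b) / len(a | b) >= threshold

feed_a = "<rss><channel><item><link>http://example.org/a</link></item></channel></rss>"
feed_b = "<rss><channel><item><link>http://example.org/a</link></item></channel></rss>"
print(duplicate_feeds(feed_a, feed_b))  # True: same links, so crawl only one of them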
