
Solrdedup is an alias for org.apache.nutch.indexer.solr.SolrDeleteDuplicates

THIS HAS BEEN DEPRECATED IN NUTCH 1.x; see the dedup command.

As the name suggests, this is a utility class for deleting duplicate documents from a Solr index.

The algorithm works as follows:

Preparation: query the Solr server for the number of documents (say, N) and partition N among the M map tasks. For example, with two map tasks the first task deals with Solr documents 0 to (N / 2 - 1) and the second with documents (N / 2) to (N - 1). This can be thought of as a linearly executing divide-and-conquer algorithm.
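
The partitioning described above can be sketched as follows. This is a minimal illustration, not Nutch's actual code; the class and method names are hypothetical, and the last task simply absorbs any remainder.

```java
public class PartitionSketch {
    // Returns {start, end} (inclusive) of the document range handled by
    // map task number `task` (0-based), given N documents and M tasks.
    static int[] range(int numDocs, int numTasks, int task) {
        int per = numDocs / numTasks;  // documents per task (floor)
        int start = task * per;
        // The last task also takes any remainder.
        int end = (task == numTasks - 1) ? numDocs - 1 : start + per - 1;
        return new int[] {start, end};
    }

    public static void main(String[] args) {
        // With N = 10 documents and M = 2 map tasks:
        int[] first = range(10, 2, 0);   // task 0 covers 0..4
        int[] second = range(10, 2, 1);  // task 1 covers 5..9
        System.out.println(first[0] + ".." + first[1]);
        System.out.println(second[0] + ".." + second[1]);
    }
}
```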

MapReduce:

  • Map: identity map where keys are digests and values are {@link SolrRecord} instances (which contain id, boost and timestamp).

  • Reduce: after the map phase, {@link SolrRecord}s with the same digest are grouped together. Of these documents with the same digest, all are deleted except the one with the highest score (boost field). If two (or more) documents have the same score, the document with the latest timestamp is kept. Every other document is deleted from the Solr index.

Note that unlike {@link DeleteDuplicates} we assume that two documents in a Solr index never have the same URL, so this class only deals with documents that have different URLs but the same digest.
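
The reduce-side selection rule (highest boost wins, ties broken by latest timestamp) can be sketched like this. The record type below is a simplified stand-in for Nutch's SolrRecord, not its real definition:

```java
import java.util.Comparator;
import java.util.List;

public class DedupSketch {
    // Simplified stand-in for SolrRecord: id, boost and timestamp.
    record Rec(String id, float boost, long tstamp) {}

    // Among records sharing a digest, pick the one to keep:
    // highest boost, then latest timestamp. All others would be
    // deleted from the index.
    static Rec keep(List<Rec> sameDigest) {
        return sameDigest.stream()
                .max(Comparator.comparing(Rec::boost)
                        .thenComparing(Rec::tstamp))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Rec> group = List.of(
                new Rec("a", 1.0f, 100L),
                new Rec("b", 2.0f, 100L),   // highest boost: kept
                new Rec("c", 2.0f, 50L));   // same boost, older: deleted
        System.out.println(keep(group).id()); // prints "b"
    }
}
```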

Usage:

bin/nutch solrdedup <solr url>

<solr url>: luckily, all of the hard work is encapsulated within the class, so the only parameter we pass is the Solr URL, e.g. http://localhost:8983/solr/

CommandLineOptions

bin/nutch solrdedup <solr url>

(last edited 2014-05-29 10:33:05 by JulienNioche)