
...

  • Unix environment, or Windows-Cygwin environment
  • Java Runtime/Development Environment (JDK 11 / Java 11)
  • (Source build only) Apache Ant: https://ant.apache.org/
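
A quick way to confirm these prerequisites is to ask each tool for its version (a minimal check; the exact version strings printed on your system will differ):

No Format

java -version   # should report Java 11 for recent Nutch releases such as 1.19
ant -version    # only required if you build Nutch from source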

Install Nutch

Option 1: Setup Nutch from a binary distribution

  • Download a binary package (apache-nutch-1.X-bin.zip) from here.
  • Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
  • cd apache-nutch-1.X/
    From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
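
If you want this placeholder to be a real shell variable for the rest of the tutorial, you can export it yourself (purely a convenience for copy-pasting commands; the Nutch scripts do not require it):

No Format

export NUTCH_RUNTIME_HOME=$(pwd)    # run from inside apache-nutch-1.X/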

...

  • Download a source package (apache-nutch-1.X-src.zip)
  • Unzip
  • cd apache-nutch-1.X/
  • Run ant in this folder (cf. RunNutchInEclipse)
  • Now there is a directory runtime/local which contains a ready to use Nutch installation.
    When the source distribution is used ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that
  • config files should be modified in apache-nutch-1.X/runtime/local/conf/
  • ant clean will remove this directory (keep copies of modified config files)
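
Because ant clean removes runtime/local entirely, it can help to copy any configuration you have edited before cleaning, for example (the backup location is arbitrary):

No Format

mkdir -p ~/nutch-conf-backup
cp runtime/local/conf/*.xml ~/nutch-conf-backup/
ant clean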

Option 3: Set up Nutch from source

See UsingGit#CheckingoutacopyofNutchandmodifyingit

Verify your Nutch installation

...

No Format
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/11/Home
# note that the actual path may be different on your system
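
With JAVA_HOME pointing at a suitable JDK, a simple sanity check is to run the launcher script without arguments from ${NUTCH_RUNTIME_HOME}; it should print the list of available commands rather than an error (output abbreviated):

No Format

cd ${NUTCH_RUNTIME_HOME}
bin/nutch
# expected to print something like: Usage: nutch COMMAND ... followed by the command list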

...

Step-by-Step: Seeding the crawldb with a list of URLs

...

Option 1: Bootstrapping from the DMOZ Open Directory.

The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)

No Format

wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz

Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:

No Format

mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls

The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.

No Format

bin/nutch inject crawl/crawldb dmoz

Now we have a Web database with around 1,000 as-yet unfetched URLs in it.

Option 2: Bootstrapping from an initial seed list.

This option shadows the creation of the seed list as covered here.
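
If you have not created a seed list yet, a minimal one is simply a directory containing a text file with one URL per line, for example (the URL below is only an illustration; use your own seeds):

No Format

mkdir -p urls
echo "https://nutch.apache.org/" > urls/seed.txt

The seed list is then injected into the crawldb: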

No Format

bin/nutch inject crawl/crawldb urls

Now we have a Web database with your unfetched URLs in it.

Step-by-Step: Fetching

To fetch, we first generate a fetch list from the database:

No Format

bin/nutch generate crawl/crawldb crawl/segments

This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:

No Format

s1=`ls -d crawl/segments/2* | tail -1`
echo $s1

Now we run the fetcher on this segment with:

No Format

bin/nutch fetch $s1

Then we parse the entries:

No Format

bin/nutch parse $s1

When this is complete, we update the database with the results of the fetch:

No Format

bin/nutch updatedb crawl/crawldb $s1

Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.

Now we generate and fetch a new segment containing the top-scoring 1,000 pages:

No Format

bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2

bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2

Let's fetch one more round:

No Format

bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3

bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
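
Between rounds you can check how many URLs have been fetched so far by printing CrawlDb statistics (readdb is another sub-command of the same bin/nutch launcher):

No Format

bin/nutch readdb crawl/crawldb -stats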

By this point we've fetched a few thousand pages. Let's invert links and index them!

Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.

No Format

bin/nutch invertlinks crawl/linkdb -dir crawl/segments
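
If you are curious what the LinkDb contains, it can be dumped to plain text for inspection (the output directory name below is arbitrary):

No Format

bin/nutch readlinkdb crawl/linkdb -dump linkdb_dump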

We are now ready to search with Apache Solr.


Step-by-Step: Indexing into Apache Solr

Note: For this step you need a running Solr installation. If you have not yet integrated Nutch with Solr, you should read here.

Now we are ready to go on and index all the resources. For more information see the command line options.

No Format
Usage: Indexer (<crawldb> | -nocrawldb) (<segment> ... | -dir <segments>) [general options]

Index given segments using configured indexer plugins

The CrawlDb is optional but it is required to send deletion requests for duplicates
and to read the proper document score/boost/weight passed to the indexers.

Required arguments:

        <crawldb>       path to CrawlDb, or
        -nocrawldb      flag to indicate that no CrawlDb shall be used

        <segment> ...   path(s) to segment, or
        -dir <segments> path to segments/ directory,
                        (all subdirectories are read as segments)

General options:

        -linkdb <linkdb>        use LinkDb to index anchor texts of incoming links
        -params k1=v1&k2=v2...  parameters passed to indexer plugins
                                (via property indexer.additional.params)

        -noCommit       do not call the commit method of indexer plugins
        -deleteGone     send deletion requests for 404s, redirects, duplicates
        -filter         skip documents with URL rejected by configured URL filters
        -normalize      normalize URLs before indexing
        -addBinaryContent       index raw/binary content in field `binaryContent`
        -base64         use Base64 encoding for binary content

Example:
   bin/nutch index crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone
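
After the index job has committed, you can confirm that documents actually reached Solr with a simple query. This assumes the Solr core is named nutch and listens on the default port, as set up later in this tutorial:

No Format

curl "http://localhost:8983/solr/nutch/select?q=*:*&rows=1"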

Step-by-Step: Deleting Duplicates

Duplicates (identical content but different URL) are optionally marked in the CrawlDb and are deleted later in the Solr index.

MapReduce "dedup" job:

  • Map: Identity map where keys are digests and values are CrawlDatum records
  • Reduce: CrawlDatums with the same digest are marked (except one of them) as duplicates. There are multiple heuristics available to choose the item which is not marked as duplicate - the one with the shortest URL, fetched most recently, or with the highest score.
No Format
Usage: bin/nutch dedup <crawldb> [-group <none|host|domain>] [-compareOrder <score>,<fetchTime>,<httpsOverHttp>,<urlLength>]
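
As an illustration of the options above, the following call groups URLs by host and keeps the highest-scoring, most recently fetched entry, marking the remaining ones as duplicates:

No Format

bin/nutch dedup crawl/crawldb -group host -compareOrder score,fetchTime,urlLength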

Deletion in the index is performed by the cleaning job (see below) or if the index job is called with the command-line flag -deleteGone.

For more information see dedup documentation.

Step-by-Step: Cleaning Solr

The cleaning job scans the crawldb directory looking for entries with status DB_GONE (404), duplicates, or optionally redirects, and sends delete requests to Solr for those documents. Once Solr receives the requests, the documents are deleted. This maintains a healthier Solr index.

No Format
Usage: bin/nutch clean <crawldb> [-noCommit]
Example: bin/nutch clean crawl/crawldb/

For more information see clean documentation.

Using the crawl script

If you have followed the section above on how the crawling can be done step by step, you might be wondering how a bash script can be written to automate the whole process described above.

Nutch developers have written one for you (smile), and it is available at bin/crawl. Here are the most common options and parameters:

No Format
Usage: crawl [options] <crawl_dir> <num_rounds>

Arguments:
  <crawl_dir>                           Directory where the crawl/host/link/segments dirs are saved
  <num_rounds>                          The number of rounds to run this crawl for

Options:
  -i|--index                            Indexes crawl results into a configured indexer
  -D                                    A Nutch or Hadoop property to pass to Nutch calls overwriting
                                        properties defined in configuration files, e.g.
                                        increase content limit to 2MB:
                                          -D http.content.limit=2097152
                                        (distributed mode only) configure memory of map and reduce tasks:
                                          -D mapreduce.map.memory.mb=4608    -D mapreduce.map.java.opts=-Xmx4096m
                                          -D mapreduce.reduce.memory.mb=4608 -D mapreduce.reduce.java.opts=-Xmx4096m
  -w|--wait <NUMBER[SUFFIX]>            Time to wait before generating a new segment when no URLs
                                        are scheduled for fetching. Suffix can be: s for second,
                                        m for minute, h for hour and d for day. If no suffix is
                                        specified second is used by default. [default: -1]
  -s <seed_dir>                         Path to seeds file(s)
  -sm <sitemap_dir>                     Path to sitemap URL file(s)
  --hostdbupdate                        Boolean flag showing if we either update or not update hostdb for each round
  --hostdbgenerate                      Boolean flag showing if we use hostdb in generate or not
  --num-fetchers <num_fetchers>         Number of tasks used for fetching (fetcher map tasks) [default: 1]
                                        Note: This can only be set when running in distributed mode and
                                              should correspond to the number of worker nodes in the cluster.
  --num-tasks <num_tasks>               Number of reducer tasks [default: 2]
  --size-fetchlist <size_fetchlist>     Number of URLs to fetch in one iteration [default: 50000]
  --time-limit-fetch <time_limit_fetch> Number of minutes allocated to the fetching [default: 180]
  --num-threads <num_threads>           Number of threads for fetching / sitemap processing [default: 50]
  --sitemaps-from-hostdb <frequency>    Whether and how often to process sitemaps based on HostDB.
                                        Supported values are:
                                          - never [default]
                                          - always (processing takes place in every iteration)
                                          - once (processing only takes place in the first iteration)

No Format
Example: bin/crawl -i -s urls/ TestCrawl/ 2

The crawl script has a lot of parameters set, and you can modify the parameters to your needs. It would be ideal to understand the parameters before setting up big crawls.
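
For example, a small two-round crawl that indexes its results, raises the content limit, and caps each fetch list at 1,000 URLs could be launched like this (all values are illustrative and use only options listed above):

No Format

bin/crawl -i -D http.content.limit=2097152 --size-fetchlist 1000 -s urls/ crawl/ 2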

...

Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.

Nutch    Solr
1.19     8.11.2
1.18     8.5.1
1.17     8.5.1
1.16     7.3.1
1.15     7.3.1
1.14     6.6.0
1.13     5.5.0
1.12     5.4.1

To install Solr 8.x (or upwards):

  • download binary file from here
  • unzip to $HOME/apache-solr, we will now refer to this as ${APACHE_SOLR_HOME}
  • create resources for a new "nutch" Solr core

    No Format
    mkdir -p ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/_default/* ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    


  • copy Nutch's schema.xml into the Solr conf directory

    • (Nutch 1.15 or prior) copy the schema.xml from the conf/ directory:

      No Format
      cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      

    • (Nutch 1.16 and upwards) copy the schema.xml from the indexer-solr source folder (source package):

      No Format
      cp .../src/plugin/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      

      or from the indexer-solr plugins folder (binary package):

      No Format
      cp .../plugins/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      

      Note for Nutch 1.16: due to NUTCH-2745 the schema.xml is not contained in the 1.16 binary package. Please download the schema.xml from the source repository.

    • You may also try to use the most recent schema.xml in case of issues launching Solr with this schema.

  • make sure that there is no managed-schema "in the way":

    No Format
    rm ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/managed-schema
    


  • start the solr server

    No Format
    ${APACHE_SOLR_HOME}/bin/solr start
    


  • create the nutch core

    No Format
    ${APACHE_SOLR_HOME}/bin/solr create -c nutch -d ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
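
Once the core has been created, you can verify that Solr is serving it before pointing Nutch at it. The CoreAdmin STATUS call below assumes the default port and the core name nutch used above:

No Format

curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=nutch"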
    


...