Introduction
Nutch is a mature, production-ready web crawler. Nutch 1.x enables fine-grained configuration and relies on Apache Hadoop data structures, which are great for batch processing. Being pluggable and modular of course has its benefits: Nutch provides extensible interfaces such as Parse, Index and ScoringFilter for custom implementations, e.g. Apache Tika for parsing. Additionally, pluggable indexing exists for Apache Solr, Elasticsearch, SolrCloud, etc. Nutch can discover web page hyperlinks in an automated manner, reduce lots of maintenance work (for example checking broken links), and create a copy of all the visited pages for searching over. This tutorial explains how to use Nutch with Apache Solr. Solr is an open source full-text search framework; with Solr we can search the pages acquired by Nutch. Apache Nutch supports Solr out of the box, simplifying Nutch-Solr integration. It also removes the legacy dependence upon both Apache Tomcat for running the old Nutch Web Application and upon Apache Lucene for indexing. Just download a binary release from here.
Learning Outcomes
By the end of this tutorial you will
- Have a local Nutch crawler set up and configured to crawl on a single machine
- Understand and know how to configure Nutch runtime configuration, including seed URL lists, URLFilters, etc.
- Have executed a Nutch crawl cycle and viewed the results in the crawl database
- Have indexed Nutch crawl records into Apache Solr for full-text search
Any issues with this tutorial should be reported to the Nutch user@ list.
Table of Contents
Contents
- Introduction
- Learning Outcomes
- Table of Contents
- Steps
- Requirements
- Install Nutch
- Verify your Nutch installation
- Crawl your first website
- Setup Solr for search
- Verify Solr installation
- What's Next
Steps
This tutorial describes the installation and use of Nutch 1.x (i.e. a release cut from the master branch). For a similar tutorial covering Nutch 2.x with HBase, see Nutch2Tutorial.
Requirements
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (JDK 1.8 / Java 8)
- (Source build only) Apache Ant: http://ant.apache.org/
Install Nutch
Option 1: Setup Nutch from a binary distribution
Download a binary package (apache-nutch-1.X-bin.zip) from here.
Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
cd apache-nutch-1.X/
From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
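${NUTCH_RUNTIME_HOME} is used purely as notation in this tutorial. If you would like it as a real shell variable so that later commands can be copied verbatim, a minimal sketch (assuming a bash-like shell, run from inside apache-nutch-1.X/):
export NUTCH_RUNTIME_HOME=$(pwd)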
Option 2: Set up Nutch from a source distribution
Advanced users may also use the source distribution:
Download a source package (apache-nutch-1.X-src.zip)
Unzip the source package
cd apache-nutch-1.X/
Run ant in this folder (cf. RunNutchInEclipse)
Now there is a directory runtime/local which contains a ready to use Nutch installation.
When the source distribution is used, ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that:
- config files should be modified in apache-nutch-1.X/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
Verify your Nutch installation
run "bin/nutch" - You can confirm a correct installation if you see something similar to the following:
Usage: nutch COMMAND where command is one of:
  readdb            read / dump crawl db
  mergedb           merge crawldb-s, with optional filtering
  readlinkdb        read / dump link db
  inject            inject new urls into the database
  generate          generate new segments to fetch from crawl db
  freegen           generate new segments to fetch from text files
  fetch             fetch a segment's pages
  ...
Some troubleshooting tips:
- Run the following command if you are seeing "Permission denied":
chmod +x bin/nutch
Set JAVA_HOME if you are seeing "JAVA_HOME not set". On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home # note that the actual path may be different on your system
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
You may also have to update your /etc/hosts file. If so you can add the following
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost.localdomain localhost LMC-032857
::1             ip6-localhost ip6-loopback
fe80::1%lo0     ip6-localhost ip6-loopback
Note that the LMC-032857 above should be replaced with your machine name.
Crawl your first website
Nutch requires two configuration changes before a website can be crawled:
- Customize your crawl properties, where at a minimum, you provide a name for your crawler for external servers to recognize
- Set a seed list of URLs to crawl
Customize your crawl properties
Default crawl properties can be viewed and edited within conf/nutch-default.xml; most of these can be used without modification.
The file conf/nutch-site.xml serves as a place to add your own custom crawl properties that override those in conf/nutch-default.xml. The only required modification for this file is to override the value field of the http.agent.name property.
i.e. Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
<property>
 <name>http.agent.name</name>
 <value>My Nutch Spider</value>
</property>
Ensure that the plugin.includes property within conf/nutch-site.xml includes the indexer plugin indexer-solr (see the sketch below).
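As a rough sketch, a minimal conf/nutch-site.xml could look like the following. The plugin.includes value shown here is only illustrative: copy the default value from your own conf/nutch-default.xml and confirm that indexer-solr appears in it.
<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>My Nutch Spider</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <!-- Illustrative value only: start from the default in your nutch-default.xml
         and make sure indexer-solr is included. -->
    <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-solr|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
  </property>
</configuration>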
Create a URL seed list
- A URL seed list is a list of websites, one per line, which Nutch will use as the starting points for a crawl
- The file conf/regex-urlfilter.txt provides regular expressions that allow Nutch to filter and narrow the types of web resources to crawl and download
Create a URL seed list
mkdir -p urls
cd urls
touch seed.txt to create a text file seed.txt under urls/, then edit it to contain the following content (one URL per line for each site you want Nutch to crawl):
http://nutch.apache.org/
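Equivalently, as a small shell sketch, the seed directory and file can be created in one step:
mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt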
(Optional) Configure Regular Expression Filters
Edit the file conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.apache.org domain, the line should read:
+^https?://([a-z0-9-]+\.)*nutch\.apache\.org/
This will include any URL in the domain nutch.apache.org.
NOTE: not specifying any domains to include within regex-urlfilter.txt will lead to all domains linked from your seed URLs being crawled as well.
Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified the file conf/regex-urlfilter.txt as covered here you will need to change it back.
Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. This also permits more control over the crawl process, and incremental crawling. It is important to note that whole Web crawling does not necessarily mean crawling the entire World Wide Web. We can limit a whole Web crawl to just a list of the URLs we want to crawl. This is done by using a filter just like the one we used when we did the crawl command (above).
Step-by-Step: Concepts
Nutch data is composed of:
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
  - a crawl_generate names a set of URLs to be fetched
  - a crawl_fetch contains the status of fetching each URL
  - a content contains the raw content retrieved from each URL
  - a parse_text contains the parsed text of each URL
  - a parse_data contains outlinks and metadata parsed from each URL
  - a crawl_parse contains the outlink URLs, used to update the crawldb
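As an illustration, the crawl directory produced by the commands in this tutorial ends up with a layout along these lines (the segment timestamp is just an example, and the linkdb only appears after the invertlinks step below):
crawl/
├── crawldb/
├── linkdb/
└── segments/
    └── 20131108063838/
        ├── crawl_generate/
        ├── crawl_fetch/
        ├── content/
        ├── parse_text/
        ├── parse_data/
        └── crawl_parse/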
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
bin/nutch inject crawl/crawldb dmoz
Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
Option 2. Bootstrapping from an initial seed list.
This option mirrors the creation of the seed list as covered here.
bin/nutch inject crawl/crawldb urls
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2

bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's fetch one more round:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3

bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
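The rounds above repeat the same generate → fetch → parse → updatedb cycle. Purely as an illustration (the bin/crawl script described later in this tutorial is the supported way to automate this), a minimal shell sketch of that loop, with an arbitrary round count and -topN value, might look like:
#!/bin/bash
# Illustrative only: repeat the generate/fetch/parse/updatedb cycle a few times.
NUM_ROUNDS=3   # arbitrary example value
TOP_N=1000     # arbitrary example value

for ((i = 1; i <= NUM_ROUNDS; i++)); do
  bin/nutch generate crawl/crawldb crawl/segments -topN "$TOP_N"
  segment=$(ls -d crawl/segments/2* | tail -1)   # newest segment directory
  echo "Round $i: processing $segment"
  bin/nutch fetch "$segment"
  bin/nutch parse "$segment"
  bin/nutch updatedb crawl/crawldb "$segment"
done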
By this point we've fetched a few thousand pages. Let's invert links and index them!
Step-by-Step: Invertlinks
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to search with Apache Solr.
Step-by-Step: Indexing into Apache Solr
Note: for this step you need a working Solr installation. If you have not yet set up Solr and integrated it with Nutch, you should read here.
Now we are ready to go on and index all the resources. For more information see the command line options.
Usage: Indexer <crawldb> [-linkdb <linkdb>] [-params k1=v1&k2=v2...] (<segment> ... | -dir <segments>) [-noCommit] [-deleteGone] [-filter] [-normalize] [-addBinaryContent] [-base64]
Example: bin/nutch index http://localhost:8983/solr crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone
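Depending on your Nutch release, the Solr URL may instead need to be supplied as a Java property (as the crawl script example later in this tutorial does) rather than as a positional argument; a hedged sketch of that form, reusing the same paths as above:
bin/nutch index -Dsolr.server.url=http://localhost:8983/solr/nutch crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone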
Step-by-Step: Deleting Duplicates
Once the entire contents have been indexed, duplicate URLs must be removed; deduplication ensures that the URLs in the index are unique.
Map: an identity map where keys are digests and values are SolrRecord instances (which contain id, boost and timestamp).
Reduce: after the map phase, SolrRecords with the same digest are grouped together. Of the documents sharing a digest, all are deleted except the one with the highest score (boost field); if two (or more) documents have the same score, the document with the latest timestamp is kept. Every other document is deleted from the Solr index.
Usage: bin/nutch dedup <solr url>
Example: bin/nutch dedup http://localhost:8983/solr
For more information see the dedup documentation.
Step-by-Step: Cleaning Solr
The class scans a crawldb directory looking for entries with status DB_GONE (404) and sends delete requests to Solr for those documents. Once Solr receives the request the aforementioned documents are duly deleted. This maintains a healthier quality of Solr index.
Usage: bin/nutch clean <crawldb> <index_url>
Example: bin/nutch clean crawl/crawldb/ http://localhost:8983/solr
For more information see the clean documentation.
Using the crawl script
If you have followed the section above on how crawling can be done step by step, you might be wondering how a bash script could automate the whole process described above.
Nutch developers have written one for you :), and it is available at bin/crawl.
Usage: crawl [-i|--index] [-D "key=value"] <Seed Dir> <Crawl Dir> <Num Rounds>
-i|--index Indexes crawl results into a configured indexer
-D A Java property to pass to Nutch calls
Seed Dir Directory in which to look for a seeds file
Crawl Dir Directory where the crawl/link/segments dirs are saved
Num Rounds The number of rounds to run this crawl for
Example: bin/crawl -i -D solr.server.url=http://localhost:8983/solr/nutch urls/ TestCrawl/ 2
The crawl script has a lot of parameters set, and you can modify them to suit your needs. It is a good idea to understand these parameters before setting up big crawls.
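A crawl directory is reusable: if the script is run again later with the same Crawl Dir, it should continue from the existing crawldb rather than starting from scratch (this follows from how inject and generate behave, but verify against your release). For example, to run one additional round and re-index:
bin/crawl -i -D solr.server.url=http://localhost:8983/solr/nutch urls/ TestCrawl/ 1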
Setup Solr for search
Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.
| Nutch | Solr  |
|-------|-------|
| 1.14  | 6.6.0 |
| 1.13  | 5.5.0 |
| 1.12  | 5.4.1 |
- download the Solr binary file from here
- unzip to $HOME/apache-solr; we will now refer to this directory as ${APACHE_SOLR_HOME}
- create resources for a new nutch Solr core:
cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/basic_configs ${APACHE_SOLR_HOME}/server/solr/configsets/nutch
- copy the Nutch schema.xml into the conf directory:
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf
- make sure that there is no managed-schema "in the way":
rm ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/managed-schema
- start the Solr server:
${APACHE_SOLR_HOME}/bin/solr start
- create the nutch core:
${APACHE_SOLR_HOME}/bin/solr create -c nutch -d server/solr/configsets/nutch/conf/
- add the core name to the Solr server URL:
-Dsolr.server.url=http://localhost:8983/solr/nutch
Verify Solr installation
After you have started Solr, you should be able to access the admin console at the following link:
http://localhost:8983/solr/#/
You should be able to navigate to the nutch core and view the managed-schema, etc.
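Once some documents have been indexed, a quick sanity check from the command line is to query the nutch core directly (the query term and row count below are arbitrary):
curl "http://localhost:8983/solr/nutch/select?q=nutch&wt=json&rows=5"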
What's Next
You may want to check out the documentation for the Nutch 1.X REST API to get an overview of the work going on towards providing Apache CXF based REST services for Nutch 1.X branch.