First, you need to get a copy of the Nutch code. You can download a release from http://www.apache.org/dyn/closer.cgi/nutch/. Unpack the release and change into its top-level directory. Alternatively, check out the latest source code from Subversion and build it with Ant.
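For example, the download route might look roughly like this (the archive name below is a placeholder for whichever release you actually downloaded):
# Unpack the downloaded release and enter its top-level directory
# (replace the archive name with the release you downloaded).
tar xzf apache-nutch-X.Y.tar.gz
cd apache-nutch-X.Y
# Or, check the source out of Subversion and build it with Ant
# (see the Nutch site for the current repository location):
# svn co <nutch-repository-url> nutch
# cd nutch
# ant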
Try the following command:
bin/nutch
This will display the documentation for the Nutch command script.
Good! You are almost ready to crawl, but first you need to give your crawler a name; this is required. Add the following property to conf/nutch-site.xml:
<property>
  <name>http.agent.name</name>
  <value>YOUR_CRAWLER_NAME_HERE</value>
</property>
You should also set the http.agent.url and http.agent.email properties so that webmasters can identify who is crawling their site and contact you if necessary.
Note: it is advised to specify your parameters in the file nutch-site.xml and leave nutch-default.xml as it is. The latter should be used only as a reference for the list of available parameters and their descriptions.
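Putting these together, a minimal conf/nutch-site.xml could be written in one go like this (the agent values are placeholders, and the command overwrites any existing conf/nutch-site.xml, so adapt as needed):
# Write a minimal nutch-site.xml with the agent properties discussed above.
cat > conf/nutch-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>YOUR_CRAWLER_NAME_HERE</value>
  </property>
  <property>
    <name>http.agent.url</name>
    <value>http://www.example.com/bot.html</value>
  </property>
  <property>
    <name>http.agent.email</name>
    <value>you@example.com</value>
  </property>
</configuration>
EOF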
Now we're ready to crawl. There are two approaches to crawling: using the crawl command, which performs the whole crawl with a single command, and whole-web crawling, which uses the lower-level commands (inject, generate, fetch, updatedb, and so on) described below and gives more control over the process.
The crawl command is more appropriate when you intend to crawl up to around one million pages on a handful of web servers.
To configure things for the crawl command you must:
1. Create a directory with a flat file of root URLs. For example, to crawl the Nutch site you might start with a file urls/nutch containing just the URL of the Nutch home page; all other Nutch pages will be found from it:
http://lucene.apache.org/nutch/
2. Edit the URL filter file (conf/regex-urlfilter.txt, named conf/crawl-urlfilter.txt in some older releases) and replace the catch-all rule
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wished to limit the crawl to the apache.org domain, the line should read:
+^http://([a-z0-9]*\.)*apache.org/
This will include any url in the domain apache.org.
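As a concrete illustration, the urls seed directory used by the crawl command below could be created like this (the file name urls/nutch is just an example):
# Create the seed directory with a flat file containing the root URL(s).
mkdir urls
echo 'http://lucene.apache.org/nutch/' > urls/nutch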
Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:
-dir dir names the directory to put the crawl in.
-threads threads determines the number of threads that will fetch in parallel.
-depth depth indicates the link depth from the root page that should be crawled.
-topN N determines the maximum number of pages that will be retrieved at each level up to the depth.
For example, a typical call might be:
bin/nutch crawl urls -dir crawl -depth 3 -topN 50
Typically one starts testing one's configuration by crawling at shallow depths, sharply limiting the number of pages fetched at each level (-topN), and watching the output to check that desired pages are fetched and undesirable pages are not. Once one is confident of the configuration, an appropriate depth for a full crawl is around 10. The number of pages per level (-topN) for a full crawl can range from tens of thousands to millions, depending on your resources.
Once crawling has completed, one can skip to the Searching section below.
Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. It also permits more control over the crawl process, and supports incremental crawling. It is important to note that whole-web crawling does not necessarily mean crawling the entire world wide web. We can limit a whole-web crawl to just a list of the URLs we want to crawl. This is done with a URL filter just like the one we used for the crawl command (above).
Nutch data is composed of:
1. The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched and, if so, when.
2. The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and the anchor text of the link.
3. A set of segments. Each segment is a set of URLs that are fetched as a unit, holding the fetchlist, the fetched content, and the data parsed from it.
The injector adds urls to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5000, so that we end up with around 1000 URLs:
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawl db with the selected urls.
bin/nutch inject crawl/crawldb dmoz
Now we have a web database with around 1000 as-yet unfetched URLs in it.
Instead of bootstrapping from DMOZ, we can create a text file called urls containing one URL per line, and initialize the crawl db with those URLs:
bin/nutch inject crawl/crawldb urls
NOTE: version 0.8 and higher requires that we put this file into a subdirectory, e.g. seed/urls; in this case the command looks like this:
bin/nutch inject crawl/crawldb seed
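For example, assuming a 0.8 or later release, the seed subdirectory could be prepared and injected like this (the URLs are placeholders for your own list):
# Create the seed directory and a one-URL-per-line file inside it.
mkdir -p seed
cat > seed/urls <<'EOF'
http://lucene.apache.org/nutch/
http://www.apache.org/
EOF
# Inject the seed URLs into the crawl database.
bin/nutch inject crawl/crawldb seed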
To fetch, we first generate a fetchlist from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
bin/nutch fetch $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
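If you want to check what the crawldb now contains, Nutch's crawldb reader can print summary statistics; as far as we know this tool is available in 0.8 and higher:
# Print the number of urls in the crawldb, broken down by fetch status.
bin/nutch readdb crawl/crawldb -stats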
Now we generate and fetch a new segment containing the top-scoring 1000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch updatedb crawl/crawldb $s2
Let's fetch one more round:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch updatedb crawl/crawldb $s3
By this point we've fetched a few thousand pages. Let's index them!
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
NOTE: the invertlinks command only applies to Nutch 0.8 and higher.
To index the segments we use the index command, as follows:
bin/nutch index crawl/indexes crawl/crawldb crawl/linkdb crawl/segments/*
Now we're ready to search!
The simplest way to verify the integrity of your crawl is to launch the NutchBean from the command line:
bin/nutch org.apache.nutch.searcher.NutchBean apache
where apache is the search term (note that NutchBean will only search pages in the crawl directory, so if you named the crawl directory something else, NutchBean will not find any results). After you have verified that the above command returns results you can proceed to setting up the web interface.
To search you need to put the nutch war file into your servlet container. (If instead of downloading a Nutch release you checked the sources out of SVN, then you'll first need to build the war file with the command ant war.)
Assuming you've unpacked Tomcat as ~/local/tomcat, then the Nutch war file may be installed with the commands:
mkdir ~/local/tomcat/webapps/nutch
cp nutch*.war ~/local/tomcat/webapps/nutch/
cd ~/local/tomcat/webapps/nutch
jar xvf nutch-1.1.war
rm nutch-1.1.war
The webapp finds its indexes in ./crawl, relative to where you start Tomcat, so use a command like (platform dependent):
~/local/tomcat/bin/catalina.sh start
If you want to put your search index at a different location, edit webapps/nutch/WEB-INF/classes/nutch-site.xml and add the following:
<property>
  <name>searcher.dir</name>
  <value>/somewhere/crawl</value>
  <!-- There must be a crawl/index directory to run off -->
</property>
If your index is changed you need to restart Tomcat with a command like (platform dependent):
/etc/init.d/tomcat restart
It is also recommended to make a copy of the index for Tomcat, so that you can crawl and update your index independently.
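One way to do this (the destination path below is just an example) is to copy the crawl data to a location reserved for Tomcat and point searcher.dir at the copy:
# Copy the crawl data for the search webapp to use; later crawls can then
# rebuild ./crawl without disturbing the index Tomcat is serving.
cp -r crawl /somewhere/crawl-for-tomcat
# Then set searcher.dir in webapps/nutch/WEB-INF/classes/nutch-site.xml to
# /somewhere/crawl-for-tomcat and restart Tomcat as shown above.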
Then visit: http://localhost:8080/nutch/