...
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (JDK 1.8 / Java 8 or JDK 11 / Java 11)
- (Source build only) Apache Ant: https://ant.apache.org/
...
- Download a source package (`apache-nutch-1.X-src.zip`)
- Unzip
- `cd apache-nutch-1.X/`
- Run `ant` in this folder (cf. RunNutchInEclipse)
- Now there is a directory `runtime/local` which contains a ready-to-use Nutch installation.
When the source distribution is used, `${NUTCH_RUNTIME_HOME}` refers to `apache-nutch-1.X/runtime/local/`. Note that:

- config files should be modified in `apache-nutch-1.X/runtime/local/conf/`
- `ant clean` will remove this directory (keep copies of modified config files)
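Since `ant clean` wipes `runtime/local`, it is worth backing up any modified config files first. A minimal sketch, using a temporary directory tree as a stand-in for a real source checkout (the paths and the `nutch-site.xml` content below are only examples):

```shell
# Stand-in for apache-nutch-1.X/ with one modified config file
NUTCH_SRC=$(mktemp -d)
mkdir -p "$NUTCH_SRC/runtime/local/conf"
echo '<configuration/>' > "$NUTCH_SRC/runtime/local/conf/nutch-site.xml"

# Copy modified config files somewhere safe before running `ant clean`
BACKUP=$(mktemp -d)
cp "$NUTCH_SRC"/runtime/local/conf/*.xml "$BACKUP"/
# ... run `ant clean` and rebuild, then restore the files from $BACKUP
```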
Option 3: Set up Nutch from source
See UsingGit#CheckingoutacopyofNutchandmodifyingit
Verify your Nutch installation
...
```
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home # note that the actual path may be different on your system
```
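A quick sanity check that `JAVA_HOME` actually points at a usable JDK can save debugging later. A sketch, where the fallback path is only an example and should be adjusted for your system:

```shell
# Fall back to an example path if JAVA_HOME is unset; adjust for your system
JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/default-java}

# A usable JDK has an executable java binary under $JAVA_HOME/bin
if [ -x "$JAVA_HOME/bin/java" ]; then
  echo "JAVA_HOME looks usable: $JAVA_HOME"
else
  echo "warning: no executable java under $JAVA_HOME/bin"
fi
```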
...
NOTE: If you previously modified the file `conf/regex-urlfilter.txt` as covered here, you will need to change it back.
...
This option shadows the creation of the seed list as covered here.
```
bin/nutch inject crawl/crawldb urls
```
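If you have not already created the `urls` seed directory referenced above, a minimal one can be sketched as follows (the seed URL and filename are only examples):

```shell
# Create a seed directory with a single example URL for `bin/nutch inject`
mkdir -p urls
echo 'https://nutch.apache.org/' > urls/seed.txt
cat urls/seed.txt
```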
Bootstrapping from DMOZ
Note: DMOZ closed in 2017. The steps below no longer work as written; you need to obtain DMOZ's content.rdf.u8.gz from elsewhere.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
```
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
```
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
```
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
```
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
```
bin/nutch inject crawl/crawldb dmoz
```
Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
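The 1-in-5,000 selection can be illustrated with a plain random filter, roughly what `DmozParser -subset 5000` achieves (the generated URLs and filenames below are synthetic stand-ins, not real DMOZ data):

```shell
# Generate 100,000 synthetic URLs and keep each one with probability 1/5000,
# leaving roughly 100000/5000 = 20 lines in dmoz/urls
mkdir -p dmoz
seq 1 100000 \
  | awk 'BEGIN { srand(42) } rand() < 1/5000 { print "https://example.org/page-" $1 }' \
  > dmoz/urls
wc -l < dmoz/urls
```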
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
```
bin/nutch generate crawl/crawldb crawl/segments
```
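The generate step creates a timestamped directory under `crawl/segments/`, and later commands need that segment's name. A sketch of capturing it, using a fake segment directory as a stand-in for one created by `bin/nutch generate` (the timestamp is an example):

```shell
# Stand-in for a segment directory created by `bin/nutch generate`
mkdir -p crawl/segments/20240101000000

# Segment names are timestamps, so the lexically last one is the newest
s1=$(ls -d crawl/segments/2* | sort | tail -1)
echo "$s1"
```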
...
Note: For this step you need a working Solr installation. If you have not yet integrated Nutch with Solr, you should read here first.
Now we are ready to go on and index all the resources. For more information see the command line options.
...
Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.
| Nutch | Solr   |
|-------|--------|
| 1.19  | 8.11.2 |
| 1.18  | 8.5.1  |
| 1.17  | 8.5.1  |
| 1.16  | 7.3.1  |
| 1.15  | 7.3.1  |
| 1.14  | 6.6.0  |
| 1.13  | 5.5.0  |
| 1.12  | 5.4.1  |
To install Solr 8.x (or upwards):
- download the binary file from here
- unzip to `$HOME/apache-solr`, we will now refer to this as `${APACHE_SOLR_HOME}`
- create resources for a new "nutch" Solr core:
```
mkdir -p ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/_default/* ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
```
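The configset copy above can be exercised against a throwaway directory tree, so the resulting layout can be inspected without a real Solr installation (the `solrconfig.xml` content below is a stub, not a real Solr config):

```shell
# Throwaway stand-in for a Solr installation
APACHE_SOLR_HOME=$(mktemp -d)
mkdir -p "$APACHE_SOLR_HOME/server/solr/configsets/_default/conf"
echo 'stub' > "$APACHE_SOLR_HOME/server/solr/configsets/_default/conf/solrconfig.xml"

# Same commands as above: clone _default into a new "nutch" configset
mkdir -p "$APACHE_SOLR_HOME/server/solr/configsets/nutch/"
cp -r "$APACHE_SOLR_HOME"/server/solr/configsets/_default/* "$APACHE_SOLR_HOME/server/solr/configsets/nutch/"
ls "$APACHE_SOLR_HOME/server/solr/configsets/nutch/conf"
```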
- copy Nutch's schema.xml into the Solr `conf` directory

(Nutch 1.15 or prior) copy the schema.xml from the `conf/` directory:
```
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
```
(Nutch 1.16 and upwards) copy the schema.xml from the indexer-solr source folder (source package):
```
cp .../src/plugin/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
```
or indexer-solr plugins folder (binary package):
```
cp .../plugins/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
```
Note for Nutch 1.16: due to NUTCH-2745 the schema.xml is not contained in the 1.16 binary package. Please download the schema.xml from the source repository.
You may also try to use the most recent schema.xml in case of issues launching Solr with this schema.
...