
...

  • Unix environment, or Windows-Cygwin environment
  • Java Runtime/Development Environment (JDK 1.8 / Java 8)
  • (Source build only) Apache Ant: https://ant.apache.org/
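
A quick way to check these prerequisites from a shell (Ant is only needed if you build Nutch from source):

No Format

java -version
ant -version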

Install Nutch

Option 1: Setup Nutch from a binary distribution

  • Download a binary package (apache-nutch-1.X-bin.zip) from here.
  • Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
  • cd apache-nutch-1.X/
    From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
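
For reference, the same steps condensed into shell commands (a sketch assuming version 1.16 and that the package has already been downloaded; substitute the version you actually use):

No Format

unzip apache-nutch-1.16-bin.zip
cd apache-nutch-1.16/
# optional: record the directory referred to as ${NUTCH_RUNTIME_HOME} below
export NUTCH_RUNTIME_HOME=`pwd`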

...

Step-by-Step: Seeding the crawldb with a list of URLs

...

Bootstrapping from an initial seed list

This option assumes you have already created a seed list as covered here.

No Format

bin/nutch inject crawl/crawldb urls
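
If you have not yet created a seed list, a minimal example looks like this (the urls/ directory name matches the inject command above; the sample URL is only a placeholder):

No Format

mkdir -p urls
echo "https://nutch.apache.org/" > urls/seed.txt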

Bootstrapping from DMOZ

Note: DMOZ closed in 2017. The steps below no longer work as written; you need to obtain DMOZ's content.rdf.u8.gz from elsewhere.

The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)

...

wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz


Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 600 URLs:

...

mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls


The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.

...

bin/nutch inject crawl/crawldb dmoz


Now we have a Web database with around 600 as-yet unfetched URLs in it.
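
You can verify this with the readdb tool, which prints crawldb statistics, including the number of unfetched URLs:

No Format

bin/nutch readdb crawl/crawldb -stats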


Step-by-Step: Fetching

To fetch, we first generate a fetch list from the database:
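
A typical invocation, using the crawldb created above (generate writes a new segment under crawl/segments):

No Format

bin/nutch generate crawl/crawldb crawl/segments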

...

Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.

Nutch    Solr
1.16     7.3.1
1.15     7.3.1
1.14     6.6.0
1.13     5.5.0
1.12     5.4.1

...

  • download the Solr binary package from here
  • unzip to $HOME/apache-solr; we will now refer to this as ${APACHE_SOLR_HOME}
  • create resources for a new "nutch" Solr core

    No Format
    
    mkdir -p ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/_default/* ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    


  • copy Nutch's schema.xml into the Solr conf directory

    • (Nutch 1.15 or prior) copy the schema.xml from the conf/ directory:

      No Format
      cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      


    • (Nutch 1.16) copy the schema.xml from the indexer-solr source folder (source package):

      No Format
      cp .../src/plugin/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      

      Note: due to NUTCH-2745 the schema.xml is not contained in the binary package. Please download the schema.xml from the source repository.

    • If you run into issues launching Solr with this schema, you may also try the most recent schema.xml.

  • make sure that there is no managed-schema "in the way":

    No Format
    rm ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/managed-schema
    


  • start the Solr server

    No Format
    ${APACHE_SOLR_HOME}/bin/solr start
    


  • create the nutch core

    No Format
    ${APACHE_SOLR_HOME}/bin/solr create -c nutch -d ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
    
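
  • (optional) verify that Solr is running and that the nutch core exists; bin/solr status ships with Solr, and the URL below assumes the default port 8983

    No Format
    
    ${APACHE_SOLR_HOME}/bin/solr status
    curl "http://localhost:8983/solr/nutch/admin/ping"
    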


...