RDF data can be imported into Marmotta in different ways:

Import data via the Admin UI

This is probably the easiest way to do it. You just need to access the Admin UI (http://host/marmotta/core/admin/import.html) and then provide the file, or the URL of the file, you would like to import. Based on the file name, the wizard should automatically detect details such as the RDF format used; you can still customize those details, for example the target context into which you would like to import the data.

Import data via the client library

The client library can also be used to import data. For example, using Java you would need something like:

String path = "/path/to/file.rdf";
String context = "http://example.org/context";

// configure the client with the base URL of the Marmotta installation
// and the target context for the import
ClientConfiguration configuration = new ClientConfiguration("http://host/marmotta/");
configuration.setMarmottaContext(context);
ImportClient importClient = new ImportClient(configuration);

// open the file and guess its RDF format from the file name
InputStream is = new FileInputStream(new File(path));
RDFFormat format = Rio.getParserFormatForFileName(path);

// upload the data, passing the corresponding MIME type
importClient.uploadDataset(is, format.getDefaultMIMEType());

(import and so on have been intentionally removed from the snippet)
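
For reference, the snippet relies on roughly the following imports. This assumes the usual package layout of the Marmotta Java client library and the Sesame Rio API, so check them against the version you are actually using:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.marmotta.client.ClientConfiguration;
import org.apache.marmotta.client.clients.ImportClient;

import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.Rio;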

Import data via the Web Service

This is a very convenient method for batch processes. The Admin UI and the client library described above just make use of a web service, which you can also call directly. For instance, using curl:

curl -sfS -X POST -H "Content-Type: text/turtle; charset=utf-8" -d @file.ttl http://host/marmotta/import/upload

Optionally, you can specify the target context by appending the context query parameter with the URL-encoded context URI:

curl -sfS -X POST -H "Content-Type: text/turtle; charset=utf-8" -d @file.ttl "http://host/marmotta/import/upload?context=http%3A%2F%2Fexample.org"
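
Since the import endpoint is plain HTTP, any HTTP client can be used. As a minimal sketch, the same upload could be done from Java with HttpURLConnection; the host, the file name file.ttl, the context URI, and the class name below are placeholders, adjust them to your installation:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MarmottaUpload {
    public static void main(String[] args) throws Exception {
        // target context, URL-encoded and passed as the "context" query parameter
        String context = URLEncoder.encode("http://example.org", "UTF-8");
        URL url = new URL("http://host/marmotta/import/upload?context=" + context);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/turtle; charset=utf-8");

        // stream the Turtle file as the request body
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(Paths.get("file.ttl"), out);
        }

        // a 2xx status code indicates the upload was accepted
        System.out.println("HTTP " + conn.getResponseCode());
    }
}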

Import data via the local directory

There is a special directory (${MARMOTTA_HOME}/import) which is watched by Marmotta. Every RDF file copied there will be automatically imported into the triple store; once the import has finished, the file will be removed.

(Sub-)Folders containing a file called 'lock' are ignored for the automatic import as long as the lockfile is present.

This import method supports context names as well.
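
As a minimal sketch of the lock-file pattern described above, the following Java snippet stages files in a locked sub-folder and only releases them once the copy is complete. The sub-folder name batch and the file data.ttl are placeholders, and MARMOTTA_HOME is assumed to be set in the environment:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class StageImport {
    public static void main(String[] args) throws Exception {
        Path importDir = Paths.get(System.getenv("MARMOTTA_HOME"), "import", "batch");
        Path lock = importDir.resolve("lock");

        // create the folder and the lock file so Marmotta ignores it while we copy
        Files.createDirectories(importDir);
        Files.createFile(lock);

        // copy the RDF file(s) into the locked folder
        Files.copy(Paths.get("/path/to/data.ttl"),
                   importDir.resolve("data.ttl"),
                   StandardCopyOption.REPLACE_EXISTING);

        // remove the lock file; Marmotta will now pick up and import the contents
        Files.delete(lock);
    }
}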

Import data directly to the KiWi triple store

Using the KiWiLoader, you can bypass the Marmotta platform and directly connect to the KiWi backend.

NOTE: pre-3.2 versions require exclusive access to the database!

The easiest usage is to provide KiWiLoader with the system-config.properties file of an existing Marmotta instance. The database connection and the base URI will then be loaded from the config file and can be overridden with command-line parameters.

Selected parameters in detail:

context
    the context to import into, defaults to _${baseURI}/context_

format
    guessed based on the file name, but can be overruled by the cli-parameter

compression
    guessed based on the file name, but can be overruled by the cli-parameter

d, D, P, U
    DB-connection configuration

reasoning
    _not implemented_

versioning
    _not implemented_

Usage:

usage: KiWiLoader [-b <baseUri>] [-c <config>] [-C <context>] [-D
       <dialect>] [-d <jdbc-url>] [-f <mime-type>] [-h] [-i <rdf-file>] [<rdf-file> ...]
       [-P <passwd>] [--reasoning] [-U <dbUser>] [--versioning] [-z]
 -b,--baseUri <baseUri>     baseUri during the import (fallback:
                            kiwi.context from the config file)
 -c,--config <config>       Marmotta system-config.properties file
 -C,--context <context>     context to import into
 -D,--dbDialect <dialect>   database dialect (h2, mysql, postgres)
 -d,--database <jdbc-url>   jdbc connection string
 -f,--format <mime-type>    format of rdf file (if guessing based on the
                            extension does not work)
 -h,--help                  print this help
 -i,--file <rdf-file>       input file(s) or directory(s) to load
 -P,--dbPasswd <passwd>     database password
    --reasoning             enable reasoning
 -U,--user <dbUser>         database user
    --versioning            enable versioning
 -z                         Input file is gzip compressed
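
As an illustration, a typical combination of the parameters above would load a gzip-compressed Turtle dump into a specific context, reusing the configuration of an existing instance. The paths and the context URI are placeholders, and how KiWiLoader is actually launched (wrapper script or java -cp) depends on your installation:

KiWiLoader -c /path/to/system-config.properties -C http://example.org/context -z -i dump.ttl.gz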