RDF data can be imported into Marmotta in different ways:
Import data via the Admin UI
This is probably the easiest way to do it. You just need to access the Admin UI (http://host/marmotta/core/admin/import.html) and provide the file, or the URL of the file, you would like to import. Based on the file name, the wizard should automatically detect details such as the RDF format used; you can customize those details, such as the target context into which you would like to import the data.
Import data via the client library
The client library can also be used to import data. For example, using Java you would need something like:
    String path = "/path/to/file.rdf";
    String context = "http://example.org/context";

    ClientConfiguration configuration = new ClientConfiguration("http://host/marmotta/");
    configuration.setMarmottaContext(context);

    ImportClient importClient = new ImportClient(configuration);
    InputStream is = new FileInputStream(new File(path));
    RDFFormat format = Rio.getParserFormatForFileName(path);
    importClient.uploadDataset(is, format.getDefaultMIMEType());
(imports and other boilerplate have been intentionally removed from the snippet)
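For reference, a self-contained version of the same snippet could look roughly like the following. This is a minimal sketch: the package names for ClientConfiguration and ImportClient, and the use of Sesame's Rio for format detection, are assumptions based on the Marmotta client library and may need adjusting to the versions you actually use.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;

    // package names assumed from the Marmotta client library and Sesame Rio
    import org.apache.marmotta.client.ClientConfiguration;
    import org.apache.marmotta.client.clients.ImportClient;
    import org.openrdf.rio.RDFFormat;
    import org.openrdf.rio.Rio;

    public class MarmottaImport {

        public static void main(String[] args) throws Exception {
            String path = "/path/to/file.rdf";
            String context = "http://example.org/context";

            // point the client at the Marmotta instance and the target context
            ClientConfiguration configuration = new ClientConfiguration("http://host/marmotta/");
            configuration.setMarmottaContext(context);

            ImportClient importClient = new ImportClient(configuration);

            // guess the RDF format from the file extension and upload the stream
            try (InputStream is = new FileInputStream(new File(path))) {
                RDFFormat format = Rio.getParserFormatForFileName(path);
                importClient.uploadDataset(is, format.getDefaultMIMEType());
            }
        }
    }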
Import data via the Web Service
A very convenient method for batch processes. The Admin UI and the client library described above just make use of a web service, which you can also use directly. For instance, using curl:
curl -sfS -X POST -H "Content-Type: text/turtle; charset=utf-8" -d @file.ttl http://host/marmotta/import/upload
Optionally, you can specify the target context by appending a query parameter with the URL-encoded context name:
curl -sfS -X POST -H "Content-Type: text/turtle; charset=utf-8" -d @file.ttl "http://host/marmotta/import/upload?context=http%3A%2F%2Fexample.org"
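The same call can of course be made from code. The following is a minimal Java sketch using only the standard library; the host, file name, and context are the placeholder values from the curl examples above.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ImportUpload {

        public static void main(String[] args) throws Exception {
            // placeholder host and context, matching the curl examples above
            String endpoint = "http://host/marmotta/import/upload"
                    + "?context=" + URLEncoder.encode("http://example.org", "UTF-8");

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/turtle; charset=utf-8");
            conn.setDoOutput(true);

            // stream the Turtle file as the request body
            try (OutputStream out = conn.getOutputStream()) {
                Files.copy(Paths.get("file.ttl"), out);
            }

            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }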
Import data via the local directory
There is a special directory (${MARMOTTA_HOME}/import) which is watched by Marmotta. Every RDF file copied there is automatically imported into the triple store; once the import has finished, the file is removed from the directory.
(Sub-)folders containing a file called 'lock' are ignored by the automatic import as long as the lock file is present; the sketch after the list below shows one way to make use of this when copying files.
This import method supports context names:
- Files copied to the root of that directory are imported into the default context.
- Sub-folders are used to select the target context name. For instance, if you copy an RDF file into ${MARMOTTA_HOME}/import/foo/bar, the data is imported into a context named ${BASE_URI}/context/foo/bar.
- For fully qualified context names (i.e., non-local), a URL-encoded directory name can be used. For example, if you copy a file into ${MARMOTTA_HOME}/import/http%3A%2F%2Fexample.org, the triples are imported into the http://example.org context in Marmotta.
- In case more flexibility is required: if the (sub-)folder containing the file to import contains a file called config with a property context, the value of this property is used as the (fully qualified) context.
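As an illustration, here is a minimal Java sketch that copies a file into the import directory, using a URL-encoded context folder and the lock-file mechanism so that Marmotta does not pick up a half-copied file. The import directory, source file, and context are placeholders; adjust them to your installation.

    import java.io.IOException;
    import java.net.URLEncoder;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class DirectoryImport {

        public static void main(String[] args) throws IOException {
            // placeholder locations; adjust to your ${MARMOTTA_HOME} and data file
            Path importDir = Paths.get("/path/to/marmotta-home/import");
            Path source = Paths.get("/path/to/file.ttl");

            // URL-encode the fully qualified context name to use it as a folder name
            String context = URLEncoder.encode("http://example.org", "UTF-8");
            Path target = importDir.resolve(context);
            Files.createDirectories(target);

            // place a 'lock' file first so Marmotta ignores the folder while copying
            Path lock = target.resolve("lock");
            Files.createFile(lock);
            try {
                Files.copy(source, target.resolve(source.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            } finally {
                // removing the lock file lets the directory watcher import the finished copy
                Files.delete(lock);
            }
        }
    }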
Import data directly to the KiWi triple store
Using the KiWiLoader, you can bypass the Marmotta platform and connect directly to the KiWi backend.
NOTE: pre-3.2 versions require exclusive access to the database!
The easiest way to use it is to provide KiWiLoader with the system-config.properties file of an existing Marmotta instance. The database connection and base URI are then loaded from the config file and can be overridden using command-line parameters.
Selected parameters in detail:
- context (-C): the context to import the data into.
- format (-f): the MIME type of the RDF file, in case guessing it from the file extension does not work.
- compression (-z): indicates that the input file is gzip compressed.
- d, D, P, U: the database connection settings, i.e. the JDBC connection string (-d), database dialect (-D), database password (-P) and database user (-U), used to override or replace the config file.
- reasoning (--reasoning): enables reasoning.
- versioning (--versioning): enables versioning.
Usage:
    usage: KiWiLoader [-b <baseUri>] [-c <config>] [-C <context>] [-D <dialect>]
           [-d <jdbc-url>] [-f <mime-type>] [-h] [-i <rdf-file>] [<rdf-file> ...]
           [-P <passwd>] [--reasoning] [-U <dbUser>] [--versioning] [-z]
     -b,--baseUri <baseUri>     baseUri during the import (fallback: kiwi.context
                                from the config file)
     -c,--config <config>       Marmotta system-config.properties file
     -C,--context <context>     context to import into
     -D,--dbDialect <dialect>   database dialect (h2, mysql, postgres)
     -d,--database <jdbc-url>   jdbc connection string
     -f,--format <mime-type>    format of rdf file (if guessing based on the
                                extension does not work)
     -h,--help                  print this help
     -i,--file <rdf-file>       input file(s) or directory(s) to load
     -P,--dbPasswd <passwd>     database password
        --reasoning             enable reasoning
     -U,--user <dbUser>         database user
        --versioning            enable versioning
     -z                         Input file is gzip compressed
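For example, loading a gzip-compressed Turtle dump into a specific context while reusing the settings of an existing installation might look like this. The paths are placeholders, and how the KiWiLoader command itself is launched (launcher script, java -cp, etc.) depends on your setup:

    KiWiLoader -c /path/to/marmotta-home/system-config.properties \
               -C http://example.org/context \
               -f text/turtle -z \
               -i /path/to/dump.ttl.gz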