Programmatic Operation of LCF
Some LCF users want to treat LCF as an engine they can drive from whatever other system they are developing. While LCF is not precisely a document indexing engine per se, it can certainly be controlled programmatically. There are currently three principal ways of achieving this control.
Control by Servlet API
LCF provides a servlet-based JSON API that gives you complete control over defining connections and jobs, and over job execution. The API can be called with GET, POST, or multipart POST methods. The format of the servlet URL is as follows:
http[s]://<server_and_port>/lcf-api/json/<command>[?object=<json_argument>]
The servlet returns either an error response code (400 or 500) with an appropriate explanatory message, or a 200 response code and a JSON object. The json_argument parameter can be passed either as part of the URL or in the POST data, whichever is more convenient. Bear in mind that URLs are typically limited in practice to around 4096 characters, so for large payloads you will want to use multipart form data rather than encoding arguments on the URL.
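Because the JSON argument can contain characters that are not legal in a URL, it must be URL-encoded when passed as part of a GET request. The following is a minimal Java sketch of composing such a request URL; the server address and connection name are hypothetical, for illustration only:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/** Builds LCF API servlet URLs of the form
 *  http[s]://server:port/lcf-api/json/command?object=encoded_json */
public class ApiUrlBuilder {

    /** Compose the URL for a GET request; jsonArgument may be null
     *  for commands (such as the list commands) that take no argument. */
    public static String buildUrl(String baseUrl, String command, String jsonArgument) {
        StringBuilder url = new StringBuilder(baseUrl);
        url.append("/lcf-api/json/").append(command);
        if (jsonArgument != null) {
            // The JSON argument must be URL-encoded when placed on the URL
            url.append("?object=").append(URLEncoder.encode(jsonArgument, StandardCharsets.UTF_8));
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // Hypothetical server address and connection name, for illustration only
        System.out.println(buildUrl("http://localhost:8080", "outputconnection/get",
            "{\"connection_name\":\"mySolr\"}"));
    }
}
```

For large arguments, the same JSON would instead be sent as POST or multipart POST data, as noted above.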
The actual available commands are as follows:
| Command | What it does | Argument format | Response format |
|---|---|---|---|
| outputconnection/list | List all output connections | N/A | {"outputconnection":[<list_of_output_connection_objects>]} OR {"error":<error_text>} |
| outputconnection/get | Get a specific output connection | {"connection_name":<connection_name>} | {"outputconnection":<output_connection_object>} OR { } OR {"error":<error_text>} |
| outputconnection/save | Save or create an output connection | {"outputconnection":<output_connection_object>} | {"connection_name":<connection_name>} OR {"error":<error_text>} |
| outputconnection/delete | Delete an output connection | {"connection_name":<connection_name>} | { } OR {"error":<error_text>} |
| outputconnection/checkstatus | Check the status of an output connection | {"connection_name":<connection_name>} | {"check_result":<message>} OR {"error":<error_text>} |
| authorityconnection/list | List all authority connections | N/A | {"authorityconnection":[<list_of_authority_connection_objects>]} OR {"error":<error_text>} |
| authorityconnection/get | Get a specific authority connection | {"connection_name":<connection_name>} | {"authorityconnection":<authority_connection_object>} OR { } OR {"error":<error_text>} |
| authorityconnection/save | Save or create an authority connection | {"authorityconnection":<authority_connection_object>} | {"connection_name":<connection_name>} OR {"error":<error_text>} |
| authorityconnection/delete | Delete an authority connection | {"connection_name":<connection_name>} | { } OR {"error":<error_text>} |
| authorityconnection/checkstatus | Check the status of an authority connection | {"connection_name":<connection_name>} | {"check_result":<message>} OR {"error":<error_text>} |
| repositoryconnection/list | List all repository connections | N/A | {"repositoryconnection":[<list_of_repository_connection_objects>]} OR {"error":<error_text>} |
| repositoryconnection/get | Get a specific repository connection | {"connection_name":<connection_name>} | {"repositoryconnection":<repository_connection_object>} OR { } OR {"error":<error_text>} |
| repositoryconnection/save | Save or create a repository connection | {"repositoryconnection":<repository_connection_object>} | {"connection_name":<connection_name>} OR {"error":<error_text>} |
| repositoryconnection/delete | Delete a repository connection | {"connection_name":<connection_name>} | { } OR {"error":<error_text>} |
| repositoryconnection/checkstatus | Check the status of a repository connection | {"connection_name":<connection_name>} | {"check_result":<message>} OR {"error":<error_text>} |
| job/list | List all job definitions | N/A | {"job":[<list_of_job_objects>]} OR {"error":<error_text>} |
| job/get | Get a specific job definition | {"job_id":<job_identifier>} | {"job":<job_object>} OR { } OR {"error":<error_text>} |
| job/save | Save or create a job definition | {"job":<job_object>} | {"job_id":<job_identifier>} OR {"error":<error_text>} |
| job/delete | Delete a job definition | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
| jobstatus/list | List all jobs and their status | N/A | {"job":[<list_of_job_status_objects>]} OR {"error":<error_text>} |
| jobstatus/get | Get a specific job's status | {"job_id":<job_identifier>} | {"jobstatus":<job_status_object>} OR { } OR {"error":<error_text>} |
| jobstatus/start | Start a specified job manually | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
| jobstatus/abort | Abort a specified job | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
| jobstatus/restart | Stop and start a specified job | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
| jobstatus/pause | Pause a specified job | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
| jobstatus/resume | Resume a specified job | {"job_id":<job_identifier>} | { } OR {"error":<error_text>} |
Other commands having to do with reports have been planned, but not yet implemented.
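As an illustration, a job could be started manually by issuing a request to .../lcf-api/json/jobstatus/start with an argument such as the following (the job identifier here is purely hypothetical):

```json
{"job_id":"1234832610632"}
```

A successful start returns the empty object { }, while a failure returns an object with an "error" field containing explanatory text.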
Output connection objects
The JSON fields of an output connection object are as follows:

| Field | Meaning |
|---|---|
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The maximum number of outstanding connections allowed to exist at a time |
| "configuration" | The configuration object for the connection, which is specific to the connection class |
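Putting these fields together, an output connection object might look like the following. This is a hypothetical sketch only: the class name and configuration contents depend entirely on the output connector in use, and the exact representation of numeric values is not specified here.

```json
{
  "name": "myoutput",
  "description": "Example output connection",
  "class_name": "org.apache.lcf.agents.output.example.ExampleConnector",
  "max_connections": "10",
  "configuration": {}
}
```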
Authority connection objects
The JSON fields for an authority connection object are as follows:
| Field | Meaning |
|---|---|
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The maximum number of outstanding connections allowed to exist at a time |
| "configuration" | The configuration object for the connection, which is specific to the connection class |
Repository connection objects
The JSON fields for a repository connection object are as follows:
| Field | Meaning |
|---|---|
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The maximum number of outstanding connections allowed to exist at a time |
| "configuration" | The configuration object for the connection, which is specific to the connection class |
| "acl_authority" | The (optional) name of the authority that will enforce security for this connection |
| "throttle" | An array of throttle objects, which control how quickly documents can be requested from this connection |
Each throttle object has the following fields:
| Field | Meaning |
|---|---|
| "match" | The regular expression which is used to match a document's bins to determine if the throttle should be applied |
| "match_description" | Optional text describing the meaning of the throttle |
| "rate" | The maximum fetch rate to use if the throttle applies, in fetches per minute |
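As an entirely hypothetical sketch combining the two tables above, a repository connection object with a single throttle might look like this; all names, the class name, and the rate are illustrative only:

```json
{
  "name": "myrepo",
  "description": "Example repository connection",
  "class_name": "org.apache.lcf.crawler.connectors.example.ExampleRepositoryConnector",
  "max_connections": "10",
  "acl_authority": "myauthority",
  "configuration": {},
  "throttle": [
    {
      "match": ".*",
      "match_description": "All bins",
      "rate": "120"
    }
  ]
}
```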
Job objects
The JSON fields for a job are as follows:

| Field | Meaning |
|---|---|
| "id" | The job's identifier, if present. If not present, LCF will create one (and will also create the job when saved). |
| "description" | Text describing the job |
| "repository_connection" | The name of the repository connection to use with the job |
| "output_connection" | The name of the output connection to use with the job |
| "document_specification" | The document specification object for the job, whose format is repository-connection specific |
| "output_specification" | The output specification object for the job, whose format is output-connection specific |
| "start_mode" | The start mode for the job, which can be one of "schedule window start", "schedule window anytime", or "manual" |
| "run_mode" | The run mode for the job, which can be either "continuous" or "scan once" |
| "hopcount_mode" | The hopcount mode for the job, which can be one of "accurate", "no delete", or "never delete" |
| "priority" | The job's priority, typically "5" |
| "recrawl_interval" | The default time between recrawls of a document (if the job is "continuous"), in milliseconds, or "infinite" for infinity |
| "expiration_interval" | The time until a document expires (if the job is "continuous"), in milliseconds, or "infinite" for infinity |
| "reseed_interval" | The time between reseeding operations (if the job is "continuous"), in milliseconds, or "infinite" for infinity |
| "hopcount" | An array of hopcount objects, describing the link types and associated maximum hops permitted for the job |
| "schedule" | An array of schedule objects, describing when the job should be started and run |
Each hopcount object has the following fields:
| Field | Meaning |
|---|---|
| "link_type" | The connection-type-dependent type of a link for which a hop count restriction is specified |
| "count" | The maximum number of hops allowed for the associated link type, starting at a seed |
Each schedule object has the following fields:
| Field | Meaning |
|---|---|
| "timezone" | The optional time zone for the schedule object; if not present, the default server time zone is used |
| "duration" | The optional length of the described time window, in milliseconds; if not present, the duration is considered infinite |
| "dayofweek" | The optional day-of-the-week enumeration object |
| "monthofyear" | The optional month-of-the-year enumeration object |
| "dayofmonth" | The optional day-of-the-month enumeration object |
| "year" | The optional year enumeration object |
| "hourofday" | The optional hour-of-the-day enumeration object |
| "minutesofhour" | The optional minutes-of-the-hour enumeration object |
Each enumeration object describes an array of integers using the form:
{"value":[<integer_list>]}
Each integer is a zero-based index describing which entity is being specified. For example, for "dayofweek", 0 corresponds to Sunday, etc., and thus "dayofweek":{"value":[0,6]} would describe Saturdays and Sundays.
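Combining the pieces above, a job object might be sketched as follows. Every name and value here is hypothetical, and the document and output specifications, which are connector-specific, are shown empty. Under the zero-based enumeration convention just described, the schedule entry would describe a four-hour window starting at hour 2 of the day, Monday through Friday:

```json
{
  "description": "Example nightly crawl",
  "repository_connection": "myrepo",
  "output_connection": "myoutput",
  "document_specification": {},
  "output_specification": {},
  "start_mode": "schedule window start",
  "run_mode": "scan once",
  "hopcount_mode": "accurate",
  "priority": "5",
  "schedule": [
    {
      "duration": "14400000",
      "dayofweek": {"value": [1, 2, 3, 4, 5]},
      "hourofday": {"value": [2]}
    }
  ]
}
```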
Job status objects
The JSON fields of a job status object are as follows:
| Field | Meaning |
|---|---|
| "job_id" | The job identifier |
| "status" | The job status, having the possible values: "not yet run", "running", "paused", "done", "waiting", "starting up", "cleaning up", "error", "aborting", "restarting", "running no connector", and "terminating" |
| "error_text" | The error text, if the status is "error" |
| "start_time" | The job start time, in milliseconds since Jan 1, 1970 |
| "end_time" | The job end time, in milliseconds since Jan 1, 1970 |
| "documents_in_queue" | The total number of documents in the queue for the job |
| "documents_outstanding" | The number of documents for the job that are currently considered 'active' |
| "documents_processed" | The number of documents in the queue for the job that have been processed at least once |
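For instance, a job status object for a running job might look like the following. All values here are hypothetical, and whether numeric fields appear as JSON strings or numbers may differ in practice:

```json
{
  "job_id": "1234832610632",
  "status": "running",
  "start_time": "1275000000000",
  "documents_in_queue": "2500",
  "documents_outstanding": "150",
  "documents_processed": "1900"
}
```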
Control via Commands
For script writers, LCF currently provides a number of execution commands. These commands primarily cover defining connections and jobs, controlling jobs, and running reports. The following table lists the current suite.
| Command | What it does |
|---|---|
| org.apache.lcf.agents.DefineOutputConnection | Create a new output connection |
| org.apache.lcf.agents.DeleteOutputConnection | Delete an existing output connection |
| org.apache.lcf.authorities.ChangeAuthSpec | Modify an authority's configuration information |
| org.apache.lcf.authorities.CheckAll | Check all authorities to be sure they are functioning |
| org.apache.lcf.authorities.DefineAuthorityConnection | Create a new authority connection |
| org.apache.lcf.authorities.DeleteAuthorityConnection | Delete an existing authority connection |
| org.apache.lcf.crawler.AbortJob | Abort a running job |
| org.apache.lcf.crawler.AddScheduledTime | Add a schedule record to a job |
| org.apache.lcf.crawler.ChangeJobDocSpec | Modify a job's specification information |
| org.apache.lcf.crawler.DefineJob | Create a new job |
| org.apache.lcf.crawler.DefineRepositoryConnection | Create a new repository connection |
| org.apache.lcf.crawler.DeleteJob | Delete an existing job |
| org.apache.lcf.crawler.DeleteRepositoryConnection | Delete an existing repository connection |
| org.apache.lcf.crawler.ExportConfiguration | Write the complete list of all connection definitions and job specifications to a file |
| org.apache.lcf.crawler.FindJob | Locate a job identifier given a job's name |
| org.apache.lcf.crawler.GetJobSchedule | Find a job's schedule given a job's identifier |
| org.apache.lcf.crawler.ImportConfiguration | Import configuration as written by a previous ExportConfiguration command |
| org.apache.lcf.crawler.ListJobStatuses | List the status of all jobs |
| org.apache.lcf.crawler.ListJobs | List the identifiers for all jobs |
| org.apache.lcf.crawler.PauseJob | Given a job identifier, pause the specified job |
| org.apache.lcf.crawler.RestartJob | Given a job identifier, restart the specified job |
| org.apache.lcf.crawler.RunDocumentStatus | Run a document status report |
| org.apache.lcf.crawler.RunMaxActivityHistory | Run a maximum activity report |
| org.apache.lcf.crawler.RunMaxBandwidthHistory | Run a maximum bandwidth report |
| org.apache.lcf.crawler.RunQueueStatus | Run a queue status report |
| org.apache.lcf.crawler.RunResultHistory | Run a result history report |
| org.apache.lcf.crawler.RunSimpleHistory | Run a simple history report |
| org.apache.lcf.crawler.StartJob | Start a job |
| org.apache.lcf.crawler.WaitForJobDeleted | After a job has been deleted, wait until the delete has completed |
| org.apache.lcf.crawler.WaitForJobInactive | After a job has been started or aborted, wait until the job ceases all activity |
| org.apache.lcf.crawler.WaitJobPaused | After a job has been paused, wait for the pause to take effect |
Control by direct code
Control by direct Java code is also quite reasonable. If that is the route you want to take, the source code of the commands listed above gives a clear picture of how to proceed.
Caveats
The existing commands know nothing about the differences between connection types. Instead, they deal with configuration and specification information in the form of XML documents. Normally these XML documents are hidden from a system integrator, unless the integrator happens to look into the database with a tool such as psql. The API commands above, however, will often require such XML documents to be included as part of the command execution.
This has one major consequence: any application that manipulates connections and jobs directly cannot be connection-type independent, because it must know the proper form of XML to submit with the command. It is therefore not possible to use these command APIs to write one's own UI wrapper without sacrificing some of the generality that LCF itself maintains.