...
All properties for Solr listed below start with the prefix “xasecure.audit.destination.solr.”. For example, the full name of the first property below would be xasecure.audit.destination.solr.urls. To enable audit to Solr, set the property xasecure.audit.destination.solr to true. Following are the configuration details to configure Ranger audit to Solr.
Property name | Details |
---|---|
| Example value: |
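As a sketch, a minimal Solr audit configuration might look like the following; the Solr host, port, and collection name are illustrative values to be replaced for your environment:

```
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.urls=http://solr-host:6083/solr/ranger_audits
```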
Audit to Db
Solr is the preferred and recommended audit store. Use of a database to store Ranger audits is deprecated, and users are strongly encouraged to move to Solr to store their audit messages. The new DB Audit Provider exists only to ease the adoption of the Ranger 0.5 audit framework by users of Apache Ranger 0.4 audit. The DB Audit Provider might be removed in future releases.
All properties for db listed below start with the prefix “xasecure.audit.destination.db.”. For example, the full name of the first property below would be xasecure.audit.destination.db.jdbc.driver. To enable audit to db, set the property xasecure.audit.destination.db to true. Following are the configuration details to configure Ranger audit to db.
Property name | Details |
---|---|
| For example, database user for the database where Ranger audit data is to be stored. |
| Password to be used to connect to the target database. This property is ignored if a password can be found in the credentials file. |
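For illustration, enabling database audit might look like the following sketch; the MySQL driver class shown is an example, so substitute the JDBC driver appropriate to your database:

```
xasecure.audit.destination.db=true
xasecure.audit.destination.db.jdbc.driver=com.mysql.jdbc.Driver
```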
Audit to HDFS
HDFS is the preferred and recommended long-term store for Ranger audit messages, along with Solr for keeping short-term audit messages that might need to be searched. Audits in Solr are used to view audit logs in the Ranger Admin UI, whereas audits kept in HDFS can be used for compliance or other offline uses like threat detection. Solr can be configured to purge audits older than, say, a month.
All properties for hdfs listed below start with the prefix “xasecure.audit.destination.hdfs.”. For example, the full name of the first property below would be xasecure.audit.destination.hdfs.dir. To enable audit to hdfs, set the property xasecure.audit.destination.hdfs to true. Following are the configuration details to configure Ranger audit to hdfs.
...
Property name | Details |
---|---|
| Age of the audit log file in seconds after which it would get rolled over to a new file. Default is set to |
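As a sketch, a minimal HDFS audit configuration might look like the following; the NameNode address and directory path are illustrative values:

```
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://namenode:8020/ranger/audit
```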
Audit to Log4j
To enable Ranger to send audit logs to a Log4j appender, set the property xasecure.audit.destination.log4j to true. Also make sure that the logger property (xasecure.audit.destination.log4j.logger) is specified as mentioned below.
Property name | Details |
---|---|
xasecure.audit.destination.log4j.logger | The name of the logger where the audit logs should be sent, as specified in the component's log4j configuration file. Ranger writes audit logs at INFO level. Please ensure that the log4j configuration has INFO level enabled for the logger specified above. |
Example
Below are the configuration details to enable Ranger Hive plugin to write audit logs to log4j.
Configure a log4j appender for audit logs in component's log4j configuration file (hive-log4j.properties
for Hive):
log4j.appender.RANGER_AUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGER_AUDIT.File=${hive.log.dir}/ranger-hive-audit.log
log4j.appender.RANGER_AUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGER_AUDIT.layout.ConversionPattern=%m%n
log4j.logger.ranger.audit=INFO,RANGER_AUDIT
Configure Ranger plugin to write audit logs to log4j (ranger-hive-audit.xml
for Hive):
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=ranger.audit
...
log4j.appender.RANGER_AUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGER_AUDIT.File=${hive.log.dir}/ranger-hive-audit.log
log4j.appender.RANGER_AUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGER_AUDIT.layout.ConversionPattern=%m%n
log4j.logger.ranger.audit=INFO,RANGER_AUDIT
...
Add the following properties in "Custom ranger-hive-audit" section.
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=ranger.audit
Audit Queues
There is a system of queues that handles audit messages before they are written to the final destination. These queues provide various features. The following diagram gives an overview, and subsequent sections provide details of each of them.
Asynchronous logging to in-memory buffer queue
...
Various aspects of these queue providers can be configured via the following settings. All properties below start with the prefix “xasecure.audit.destination.async.”. For example, the full name of the first property below would be xasecure.audit.destination.async.queue.size.
Configuration name | Notes |
---|---|
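As a sketch, tuning the in-memory queue might look like the following; the size value shown is illustrative, not a recommended default:

```
xasecure.audit.destination.async.queue.size=1048576
```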
Summarization
In high-volume systems like Kafka, a very large number of audit messages can be generated in a short amount of time. For compliance and for other practical reasons, like threat detection, it may not be desirable to throttle back the amount or granularity of auditing.
...
The following properties control the behavior of audit summarization.
...
Configuration name | Notes |
---|---|
Summarization Batch size | |
Batching and bulk write of audit messages
...
Each property configuration name below should be prefixed by xasecure.audit.destination.solr.batch. Change the values of the audit sink type and queue name to suit your configuration.
Configuration name | Notes |
---|---|
| By default up to |
Configuration related to File spooling
...
Accordingly, each property configuration name is prefixed by xasecure.audit.destination.solr.batch.filespool. Change the values of the audit sink type and queue name to suit your configuration.
...
Configuration name | Default value | Notes |
enabled | false | Controls whether audit messages would be spooled to local disk files if the in-memory buffer queue gets filled up. |
dir | N/A | Local disk directory where spool files would be kept. This value must be specified. |
filename.format | spool_%app-type%_%time:yyyyMMdd-HHmm.ss%.log | |
archive.dir | archive subdirectory of the spool file dir. | For example, if spool file for solr sink is configured to be /var/log/hadoop/hdfs/audit/solr/spool then by default the spool files would get archived to /var/log/hadoop/hdfs/audit/solr/spool/archive directory. |
archive.max.files | 100 | Max number of files to archive. If the number of files in the archive directory exceeds this number, then the oldest file(s) would get deleted. |
file.rollover.sec | 86400 | Age of the spool file in seconds after which it would get rolled over to a new file. Default is set to a day (24 * 60 * 60 = 86400 seconds) . |
destination.retry.ms | 30000 | How often, in milliseconds, the spooler should try to reconnect to a destination that was down. The default is 30 s (30 * 1000 = 30000). |
drain.threshold.percent | 80 | Don’t start spooling to disk unless the in-memory queue is at least this percent full. As long as the audit destination is able to keep up and the in-memory queue is adequately sized, a high enough value ensures that messages are never flushed to local disk. |
drain.full.wait.ms | 300000 | Once a destination comes back up, the amount of time to let new audit messages buffer in memory before spooling them. By default this is set to 5 minutes. If the spool is given enough time to send on-disk messages to the final destination and the in-memory queue is properly sized, then disk spooling of new messages can be avoided and the system can revert back to in-memory buffering with no disk access. |
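Putting the properties above together, a file-spool configuration for the Solr sink might look like the following sketch; the spool directory reuses the example path shown in the table, and the retry interval simply restates the default:

```
xasecure.audit.destination.solr.batch.filespool.enabled=true
xasecure.audit.destination.solr.batch.filespool.dir=/var/log/hadoop/hdfs/audit/solr/spool
xasecure.audit.destination.solr.batch.filespool.destination.retry.ms=30000
```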
Suppressing the Spooling of Audit messages
If you wish to suppress the automatic spooling of audit messages, set the following property. Please note that doing so has consequences, since audit messages can be lost.
...
Configuration name | Notes |
---|---|
xasecure.audit.destination.<sink-type>.queue | |
Common configuration Properties
Below are a few properties that are common to the audit framework as a whole and/or apply to all audit providers.
Configuration name | Default value | Notes |
xasecure.audit.log.failure.report.min.interval.ms | 60000 | In the event of a failure to send audit events to an audit sink, say, due to a connectivity issue, this is the interval at which WARN messages would be logged to log4j. |
xasecure.audit.credential.provider.file | N/A | |
Using Custom Audit Providers and Queue Providers
...
The standard Audit Providers and Queue Providers are quite robust and feature rich. You can ignore this section if you don’t need to use custom implementations of them.
...
Configuration name | Notes |
| If you wanted to use a new audit sink, say, JMS to store audit messages then you could define a new property to signal that by setting xasecure.audit.destination.jms to true. |
| Since there isn’t a standard Audit Provider for JMS one needs to let the framework know about the class which implements it. Set the property xasecure.audit.destination.jms.classname to the fully qualified class name of the implementation, e.g. com.company.JmsAuditDestination. |
|
|
| If you also want to use a custom Queue Provider, then use this property to identify that Queue Provider type. To use the default queue provider, either leave this property unspecified or set it to batch. |
| This property provides the full name of the class which implements the custom Queue Provider. For example, to use a Queue Provider that uses a ring buffer with your JMS Audit Provider: |
|
|
|
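Combining the properties above, a hypothetical custom JMS sink could be wired up as follows; com.company.JmsAuditDestination is the placeholder class name used in this section, not a real implementation:

```
xasecure.audit.destination.jms=true
xasecure.audit.destination.jms.classname=com.company.JmsAuditDestination
```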
Passing Custom config properties to standard Audit Providers
...