
There are two ways to try out Ozone. Either you can build from source code or download a binary release.

Build from Source

Build From Git Repo

Get the Apache Hadoop source code from the Apache Git repository. Then check out trunk and build it with the hdds Maven profile enabled.

  git clone https://gitbox.apache.org/repos/asf/hadoop.git
  cd hadoop
  mvn clean install -Phdds -DskipTests=true -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade

Initial compilation may take over 30 minutes while Maven downloads dependencies. -DskipShade is optional; it skips building the shaded client jars and makes development builds faster.

This will generate a distribution tarball under hadoop-ozone/dist/target/ in your source tree.


Build From a Source Release

Download a source tarball from the Apache release area, extract it, and build. E.g.

  tar xf hadoop-ozone-0.3.0-alpha-src.tar.gz
  cd hadoop-ozone-0.3.0-alpha-src-with-hdds/
  mvn clean install -Phdds -DskipTests=true -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade

Partial build

Ozone requires only a subset of the Hadoop submodules (for example, the hdfs/common projects are needed but the mapreduce/yarn projects are not). The build can be made faster by building just the ozone-dist project (-pl :hadoop-ozone-dist) along with all of its dependencies (-am).

  mvn clean install -Phdds -DskipTests=true -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade -am -pl :hadoop-ozone-dist

Download Binary Release

Download a binary release tarball from the Apache release area and extract it. E.g.

  tar xf hadoop-ozone-0.3.0-alpha.tar.gz
  cd hadoop-ozone-0.3.0-alpha/

Start Cluster Using Docker

If you downloaded or built a source release, run the following commands to start an Ozone cluster in docker containers with 3 datanodes.

  cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
  docker-compose up -d --scale datanode=3

If you downloaded a binary release, run the following instead.

  cd compose/ozone
  docker-compose up -d --scale datanode=3

For more docker-compose commands, please check the end of the Getting started with docker guide.

To shut down the cluster, run docker-compose down.

Single Node Development Cluster

This is the traditional way to start a development cluster from source code. Once the package is built, you can start Ozone services by going to the hadoop-ozone/dist/target/ozone-*/ directory. Your Unix shell should expand the '*' wildcard to the correct Ozone version number.


Save the minimal snippet to hadoop-ozone/dist/target/ozone-*/etc/hadoop/ozone-site.xml in the compiled distribution.
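
The snippet itself is not reproduced here, so the following is a minimal sketch. The property names are assumptions based on standard Ozone configuration keys, and /tmp/ozone/metadata is a hypothetical metadata directory; adjust to taste.

  <configuration>
    <property>
      <name>ozone.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>ozone.metadata.dirs</name>
      <value>/tmp/ozone/metadata</value>
    </property>
    <property>
      <name>ozone.scm.names</name>
      <value>localhost</value>
    </property>
    <property>
      <name>ozone.scm.client.address</name>
      <value>localhost</value>
    </property>
    <property>
      <name>ozone.om.address</name>
      <value>localhost</value>
    </property>
  </configuration>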


Start Services

To start Ozone, you need to start the SCM, OzoneManager, and DataNode services. In pseudo-cluster mode, all services run on localhost.

  bin/ozone scm --init
  bin/ozone --daemon start scm
  bin/ozone om --init
  bin/ozone --daemon start om
  bin/ozone --daemon start datanode

Run Ozone Commands

Once you have ozone running you can use these Ozone shell commands to create a volume, bucket and keys. E.g.

  bin/ozone sh volume create /vol1
  bin/ozone sh bucket create /vol1/bucket1
  dd if=/dev/zero of=/tmp/myfile bs=1024 count=1
  bin/ozone sh key put /vol1/bucket1/key1 /tmp/myfile
  bin/ozone sh key list /vol1/bucket1
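
To verify the round trip, you can read the key back out. This sketch assumes the ozone sh key get subcommand, which mirrors key put; /tmp/myfile.copy is just an illustrative destination path.

  bin/ozone sh key get /vol1/bucket1/key1 /tmp/myfile.copy
  diff /tmp/myfile /tmp/myfile.copy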

Stop Services

  bin/ozone --daemon stop om
  bin/ozone --daemon stop scm
  bin/ozone --daemon stop datanode

Clean up your Dev Environment (Optional)

Remove the following directories to wipe the Ozone pseudo-cluster state. This will also delete all user data (volumes/buckets/keys) you added to the pseudo-cluster.

  rm -fr /tmp/ozone
  rm -fr /tmp/hadoop-${USER}*

Note: This will also wipe state for any running HDFS services.

Multi-Node Ozone Cluster


Ensure you have password-less ssh setup between your hosts.



Save the following snippet to etc/hadoop/ozone-site.xml in the compiled Ozone distribution.
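
The snippet itself is not reproduced here, so the following is a minimal sketch. The property names are assumptions based on standard Ozone configuration keys, and /data/ozone/metadata is a hypothetical directory; SCM-HOSTNAME and OM-HOSTNAME are placeholders explained below.

  <configuration>
    <property>
      <name>ozone.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>ozone.metadata.dirs</name>
      <value>/data/ozone/metadata</value>
    </property>
    <property>
      <name>ozone.scm.names</name>
      <value>SCM-HOSTNAME</value>
    </property>
    <property>
      <name>ozone.scm.client.address</name>
      <value>SCM-HOSTNAME</value>
    </property>
    <property>
      <name>ozone.om.address</name>
      <value>OM-HOSTNAME</value>
    </property>
  </configuration>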


Replace SCM-HOSTNAME and OM-HOSTNAME with the names of the machines where you want to start the SCM and OM services respectively. It is okay to start these services on the same host. If you are unsure then just use any machine from your cluster.

The only mandatory environment setting is JAVA_HOME. E.g.

  # The java implementation to use. By default, this environment
  # variable is REQUIRED on ALL platforms except OS X!
  export JAVA_HOME=/usr/java/latest


The workers file should contain a list of hostnames in your cluster where DataNode service will be started. E.g.
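
An illustrative workers file, one hostname per line (these names are hypothetical):

  datanode1.example.com
  datanode2.example.com
  datanode3.example.com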

Start Services

Initialize the SCM

Run the following commands on the SCM host

  bin/ozone scm --init
  bin/ozone --daemon start scm

Format the OM

Run the following commands on the OM host

  bin/ozone om --init
  bin/ozone --daemon start om

Start DataNodes

Run the following command on any cluster host.
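
The command itself is not reproduced here; a sketch, assuming the Hadoop shell framework's --workers option, which starts the service on every host listed in the workers file over the password-less ssh setup above:

  bin/ozone --workers --daemon start datanode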


Stop Services

Run the following command on any cluster host.
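
The commands themselves are not reproduced here; a sketch under the same --workers assumption, with OM and SCM stopped on their respective hosts:

  # On any host with password-less ssh to the workers:
  bin/ozone --workers --daemon stop datanode
  # On the OM host, then the SCM host:
  bin/ozone --daemon stop om
  bin/ozone --daemon stop scm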

