
To decommission DataNodes in bulk, submit a DECOMMISSION command against the NameNode via the Ambari REST API:

curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '{
   "RequestInfo":{
      "context":"Decommission DataNodes",
      "command":"DECOMMISSION",
      "parameters":{
         "slave_type":"DATANODE",
         "excluded_hosts":"c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
      },
      "operation_level":{
         "level":"HOST_COMPONENT",
         "cluster_name":"c1"
      }
   },
   "Requests/resource_filters":[
      {
         "service_name":"HDFS",
         "component_name":"NAMENODE"
      }
   ]
}' http://localhost:8081/api/v1/clusters/c1/requests

"excluded_hosts" is a comma-delimited list of hostnames where the DataNodes should be decommissioned.

Note that decommissioning DataNodes can take a long time if they hold many blocks: HDFS must re-replicate the blocks stored on the decommissioning DataNodes to other live DataNodes until every block again satisfies the replication factor you have set via dfs.replication in hdfs-site.xml. If there are not enough live DataNodes to satisfy the replication factor, the decommission process will hang until more DataNodes become available (for example, with 3 DataNodes in the cluster and dfs.replication set to 3, decommissioning 1 of the 3 will hang until you add another DataNode to the cluster).
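The POST above is asynchronous: Ambari accepts the command, returns a request resource with a numeric request id, and runs the decommission in the background. As a hedged sketch of how you might poll its progress, assuming the returned request id was 25 (the id, credentials, and port are placeholders for your own values):

curl -uadmin:admin 'http://localhost:8081/api/v1/clusters/c1/requests/25?fields=Requests/request_status,Requests/progress_percent'

On the HDFS side, running "hdfs dfsadmin -report" on a NameNode host also shows each DataNode's Decommission Status while re-replication is in progress.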
