Apache Solr Documentation



*** As of June 2017, the latest Solr Ref Guide is located at https://lucene.apache.org/solr/guide ***

Please note comments on these pages have now been disabled for all users.


Apache Solr includes the ability to set up a cluster of Solr servers that combines fault tolerance and high availability. Called SolrCloud, these capabilities provide distributed indexing and search capabilities, supporting the following features:

  • Central configuration for the entire cluster
  • Automatic load balancing and fail-over for queries
  • ZooKeeper integration for cluster coordination and configuration
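
As a sketch of what setting up such a cluster looks like in practice, the `bin/solr` script can start nodes in cloud mode. The ports, paths, and the `gettingstarted` collection name below are illustrative:

```shell
# Start the first node in cloud mode (-c); this also launches an
# embedded ZooKeeper on the Solr port + 1000 (here, 9983).
bin/solr start -c -p 8983 -s example/cloud/node1/solr

# Start a second node and point it at the same ZooKeeper ensemble.
bin/solr start -c -p 7574 -s example/cloud/node2/solr -z localhost:9983

# Create a collection with 2 shards and 2 replicas per shard; the
# configuration is uploaded to ZooKeeper and shared by the whole cluster.
bin/solr create -c gettingstarted -shards 2 -replicationFactor 2
```

These commands assume a running local environment; in production the ZooKeeper ensemble is typically run externally rather than embedded.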

SolrCloud provides flexible distributed search and indexing without a master node to allocate nodes, shards, and replicas. Instead, Solr uses ZooKeeper to manage these locations, along with the cluster's configuration files and schemas. Queries and updates can be sent to any server; Solr uses the information in the ZooKeeper database to figure out which servers need to handle the request.
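
To illustrate, here is a hedged sketch of sending an update to one node and a query to another; the hostnames, ports, collection name (`gettingstarted`), and field names are assumptions, not part of this page:

```shell
# Index a document via node 1; SolrCloud forwards it to the
# correct shard leader based on the cluster state in ZooKeeper.
curl 'http://localhost:8983/solr/gettingstarted/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id": "doc1", "title_t": "hello solrcloud"}]'

# Query via node 2; that node fans the request out across the
# shards and merges the results before responding.
curl 'http://localhost:7574/solr/gettingstarted/select?q=title_t:hello'
```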

In this section, we'll cover everything you need to know about using Solr in SolrCloud mode. We've split up the details into the following topics:





  1. Hello Everyone,

    I have a question about resizing a cluster. I found some useful information in "Nodes, Cores, Clusters and Leaders", but it seems that page has been removed from the guide. I am not finding any information about resizing a cluster in the Solr 6 documentation. Can anyone share a link to this information? I need to understand whether it is possible to create shards dynamically.




    1. You can only dynamically add shards if the collection's router is set to "implicit", which means that document routing is entirely manual. If you are using the (default) compositeId router, which routes documents automatically, then you can split existing shards, but you cannot add new ones.
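
      As a sketch, both cases use the Collections API; the collection and shard names below are illustrative:

      ```shell
      # Implicit router: shards are named up front at CREATE time,
      # and new ones can be added later with CREATESHARD.
      curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=logs&router.name=implicit&shards=jan,feb&replicationFactor=1'
      curl 'http://localhost:8983/solr/admin/collections?action=CREATESHARD&collection=logs&shard=mar'

      # compositeId router: an existing shard can be split in two,
      # but shards cannot be added directly.
      curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1'
      ```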

      Please use the mailing list for support questions like this.