More and more instrumentation is being added to Cassandra via standard JMX APIs. The nodetool utility (nodeprobe in versions prior to 0.6) provides a simple command-line interface to these exposed operations and attributes.
See Operations for a higher-level view of when you would want to use the actions described here.
Note: this utility currently requires the same environment as Cassandra itself, namely the same classpath (including log4j.properties) and a valid storage-conf property.
Running bin/nodetool with no arguments produces usage output.
The -host argument is required; the -port argument is optional and defaults to 8080 if not supplied.
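For example, a minimal invocation might look like this (the host address is a placeholder for one of your nodes):

```shell
# Query ring status from the node at 10.0.0.1 over the default JMX port.
bin/nodetool -host 10.0.0.1 -port 8080 ring
```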
The ring command presents node status and an ASCII-art rendition of the ring, as determined by the node being queried.
The format differs slightly in later versions; this example is from v0.7.6:
The Owns column indicates the percentage of the ring (keyspace) handled by that node.
The largest token is repeated at the top of the list to indicate that we have a ring: the first and last printed tokens are the same, showing that the ring wraps around.
Triggers the specified node to join the ring. This assumes that the node was started with -Dcassandra.join_ring=false so that it did not join the ring upon startup.
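A sketch of the sequence (the address is a placeholder, and exactly how JVM -D options are passed through depends on your startup script):

```shell
# Start the node without having it join the ring:
bin/cassandra -Dcassandra.join_ring=false

# Later, once you are ready, tell it to join:
bin/nodetool -host 10.0.0.1 join
```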
Outputs node information including the token, load info (on-disk storage), generation number (number of times started), uptime in seconds, and heap memory usage.
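For example (placeholder host):

```shell
# Print token, load, generation, uptime, and heap usage for one node.
bin/nodetool -host 10.0.0.1 info
```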
Triggers the immediate cleanup of keys no longer belonging to this node.
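For example, after adding a node to the ring you might run this on each of the pre-existing nodes (placeholder host):

```shell
# Drop keys this node no longer owns after a topology change.
bin/nodetool -host 10.0.0.1 cleanup
```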
Initiates an immediate table compaction.
Note: the compacted tables will not immediately be cleared from the hard disk and will remain on the system until the JVM performs a GC. For more information, read about MemtableSSTables.
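For example (placeholder host):

```shell
# Trigger a major compaction on the queried node.
bin/nodetool -host 10.0.0.1 compact
```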
Tells the node to move its data elsewhere; the opposite of bootstrap. Available since 0.5. See https://issues.apache.org/jira/browse/CASSANDRA-435
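For example (placeholder host):

```shell
# Stream this node's data to the remaining nodes and leave the ring.
bin/nodetool -host 10.0.0.1 decommission
```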
Removing a node that does not physically exist anymore is done in two steps:
The first command will block forever if the machine attached to that UUID has been physically removed (or no longer runs Cassandra). Just press Ctrl-C after a second or two before running the second command. Obviously, it is better to decommission a node first if possible, or you may lose some of your data.
The "bin/nodetool status" command shows the UUIDs of your nodes.
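Assuming a version in which nodes are identified by the host ID shown by "nodetool status", the two steps might look like this (the address and UUID below are placeholders; in older versions the subcommand is removetoken instead):

```shell
# Step 1: from any live node, ask the cluster to remove the dead node.
# This can block forever if the dead machine is truly gone; press Ctrl-C
# after a second or two.
bin/nodetool -host 10.0.0.1 removenode 2e143c4d-96e8-44d6-a4f7-0c9f5fbbaf17

# Step 2: force the removal to complete.
bin/nodetool -host 10.0.0.1 removenode force
```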
Flushes memtables on the node and stops accepting writes. Reads will still be processed. Useful for rolling upgrades.
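For example (placeholder host):

```shell
# Flush memtables and stop accepting writes; reads continue to be served.
bin/nodetool -host 10.0.0.1 drain
```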
Flushes memtables (in memory) to SSTables (on disk), which also enables CommitLog segments to be deleted.
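For example (the host and keyspace name are placeholders; depending on version, omitting the keyspace flushes everything):

```shell
# Flush all memtables for keyspace Keyspace1 to SSTables on disk.
bin/nodetool -host 10.0.0.1 flush Keyspace1
```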
Removes a dead node from the ring - this command is issued to any other live node (since clearly the dead node cannot respond!).
Cassandra v0.7.1 and v0.7.2 shipped with a bug that caused incorrect row-level bloom filters to be generated when compacting sstables generated with earlier versions. This would manifest as IOExceptions during column name-based queries. v0.7.3 provides "nodetool scrub" to rebuild sstables with correct bloom filters, with no data lost. (If your cluster was never on 0.7.0 or earlier, you don't have to worry about this.) Note that nodetool scrub will snapshot your data files before rebuilding, just in case.
While "scrub" does rebuild your sstables, it will also discard data it deems broken and create a snapshot, which you have to remove manually. If you just wish to rebuild your sstables without all that jazz, then use "nodetool upgradesstables". This is useful e.g. when you are upgrading your server, or changing compression options.
upgradesstables is available from Cassandra 1.0.4 onwards.
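For example (placeholder host):

```shell
# Rebuild sstables, discarding rows deemed broken and snapshotting first:
bin/nodetool -host 10.0.0.1 scrub

# Rebuild sstables in place, without the discard/snapshot behavior
# (Cassandra 1.0.4+):
bin/nodetool -host 10.0.0.1 upgradesstables
```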
As of Cassandra 1.0, the amount of resources that compactions can use can be easily controlled using a single value: the compaction throughput, expressed in megabytes per second. You can (and probably should) specify this in your cassandra.yaml file, but in some cases it can be very beneficial to change it live using nodetool.
For example, in this presentation Edward Capriolo explains how their company throttles compaction during the day so that I/O is mostly reserved for serving requests, whereas during the night they allocate more capacity for running compactions. This can be accomplished through e.g. a simple cron script:
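Crontab entries along these lines would do it (the times, host, and MB/s values are illustrative, not taken from the presentation):

```shell
# 07:00: throttle compaction to 16 MB/s for the busy daytime hours.
0 7 * * * root nodetool -host 127.0.0.1 setcompactionthroughput 16
# 00:00: let compaction run much faster overnight.
0 0 * * * root nodetool -host 127.0.0.1 setcompactionthroughput 999
```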
Setting the compaction thresholds to zero disables compaction. This may be useful if you e.g. wish to avoid the compaction I/O during extremely busy periods. It is not a good idea to leave compaction disabled for a long period, since you will end up with a large number of very small sstables, which will start to slow down your reads.
Setting the compaction throughput to zero, however, disables throttling.
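To make the distinction concrete (host, keyspace, and column family names are placeholders):

```shell
# Disable compaction entirely by zeroing the min/max compaction thresholds:
nodetool -host 127.0.0.1 setcompactionthreshold Keyspace1 Standard1 0 0

# Disable throttling only (compaction still runs, with unlimited I/O):
nodetool -host 127.0.0.1 setcompactionthroughput 0
```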
Excellent description from: http://narendrasharma.blogspot.com/2011/04/cassandra-07x-understanding-output-of.html
The output of the command has the following six columns:
- Offset
- SSTables
- Write Latency
- Read Latency
- Row Size
- Column Count
Interpreting the output
- Offset: This represents the series of values to which the counts in the other five columns correspond. It corresponds to the X-axis values in a histogram; the unit depends on which column you are reading.
- SSTables: This represents the number of SSTables accessed per read. For example, if a read operation involved accessing 3 SSTables, you will find a positive value against offset 3. The values are recent, i.e. they cover the interval since the previous call.
- Write Latency: This shows the distribution of operations across the range of offset values, where the offset represents latency in microseconds. For example, if 100 operations took, say, 5 microseconds, you will find a positive value against offset 5.
- Read Latency: This is similar to write latency. The values are recent, i.e. they cover the interval since the previous call.
- Row Size: This shows the distribution of rows across the range of offset values, where the offset represents size in bytes. For example, if you have 100 rows of size 2000 bytes, you will find a positive value against offset 2000.
- Column Count: This is similar to row size; the offset values represent column counts.
Some additional details
Typically in a histogram the values are plotted over discrete intervals; similarly, Cassandra defines buckets. The number of buckets is one more than the number of bucket offsets: the last bucket counts values greater than the last offset. The values you see in the Offset column of the output are the bucket offsets. The bucket offsets start at 1 and grow by a factor of 1.2 each time (rounding and removing duplicates). By default they go from 1 to around 36M (creating 90+1 buckets), which gives timing resolution from microseconds to 36 seconds, with less precision as the numbers get larger (see the EstimatedHistogram class).
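The offset-generation rule described above can be sketched with a little awk; this reproduces the stated rule (grow by 1.2, round, bump duplicates by 1), not the actual EstimatedHistogram source:

```shell
# Generate bucket offsets up to ~36M and report how many there are.
awk 'BEGIN {
  o = 1; n = 1
  while (o < 36000000) {
    next_o = int(o * 1.2 + 0.5)      # round to nearest integer
    if (next_o == o) next_o = o + 1  # de-duplicate by incrementing
    o = next_o
    n++
  }
  printf "offsets: %d, largest: %d\n", n, o
}'
```

The count comes out close to the 90-odd buckets mentioned above; the small early offsets (1, 2, 3, ...) are produced by the duplicate-bump rule until 1.2x growth takes over.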