vscsiStats – Utility Data Collection

Not having written about optimization, IO sizes, and the like for some time, I felt it was a good time to share some information that I believe is helpful when planning and implementing storage behind VMware. One area where storage companies like us struggle a bit is fairly evaluating a given environment and scoping an ideal storage solution that properly addresses required IOPS, throughput, latency, and so on.

This challenge is one that few really try to take on, and fewer still have managed to crack meaningfully. We keep looking for ways to help existing and potential customers by getting to know their current environments as well as possible. This is where vscsiStats, a utility from VMware, comes in. The tool is part of a larger data collection system: it allows users to enable data gathering and then report on the gathered data after some period of time.

While our storage system ships with a large set of tools that we can use, if you are a potential customer you do not yet have our system and therefore cannot benefit from them. And likewise we cannot benefit either, because these tools are built specifically for BrickstorOS.

We commit to being data- and facts-driven when we recommend systems to our customers. Our goal is to build something that is neither undersized, crippling you from day one, nor oversized, wasting resources. The best decisions are made with good data, and when we come into an environment where information is imprecise or lacking, we have to investigate and discover as much useful data as we can.

Tools like this one reduce the burden of information collection and are generally useful for periodic observations, which can help spot changes in behavior over time.

I don’t want to attempt an expansive tutorial on the tool; instead I want to point out some of its most useful features. Let’s look at the tool and see what it can do for us.

At this point we have to use the CLI on ESX hosts. The tool requires that we ssh into one or more ESX hosts in the environment.

First, we need to get the worldGroupID(s) for the machines we are interested in. These IDs are effectively names of VMs in numeric form. We list our VMs first, and from the output pick out those we want to observe further.

[root@myhost:~] /usr/lib/vmware/bin/vscsiStats -l

... output is unique to each system ...

Data is collected only when observation gathering is enabled. In the example below we enable data gathering for a VM with worldGroupID 6763650.

[root@myhost:~] /usr/lib/vmware/bin/vscsiStats -s -w 6763650

After some period of time passes we can start gathering results, which will be updated as we re-run the same command. The example below asks for a histogram of IO lengths. A histogram here is not unlike a traditional histogram, minus the visual representation with bars or a density curve over the bins.

In this instance we have 18 bins, with the first column being a count of IO events and the last being the bucket into which a given IO falls. For example, we see 8404 IOs at 4096 bytes (4K). This is by far the most common IO size, with something smaller than 512 bytes being the second most common at 3228. The latter actually represents the very small ~64-byte IOs that result from VMware's periodic updates to a lock file.

[root@myhost:~] /usr/lib/vmware/bin/vscsiStats -w 6763650 -p ioLength
Histogram: IO lengths of commands for virtual machine worldGroupID : 6763650, virtual disk handleID : 11415 (scsi0:0) {
 min : 512
 max : 393216
 mean : 4413
 count : 13953
      3228               (<=                512)
      84                 (<=               1024)
      349                (<=               2048)
      166                (<=               4095)
      8404               (<=               4096)
      84                 (<=               8191)
      970                (<=               8192)
      368                (<=              16383)
      101                (<=              16384)
      78                 (<=              32768)
      58                 (<=              49152)
      32                 (<=              65535)
      8                  (<=              65536)
      13                 (<=              81920)
      7                  (<=             131072)
      1                  (<=             262144)
      2                  (<=             524288)
      0                  (>              524288)
}
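
Histogram counts like these are easy to post-process with standard shell tools. Below is a minimal awk sketch that converts each bucket's count into a percentage of total IOs; the sample data is the ioLength output above embedded in a here-document, but in practice you could pipe live vscsiStats output through the same awk program.

```shell
# Convert vscsiStats histogram counts into per-bucket percentages.
# Sample data below is the ioLength histogram shown above.
pcts=$(awk '
  # Data lines look like:  COUNT  (<=  BUCKET)  or  COUNT  (>  BUCKET)
  $1 ~ /^[0-9]+$/ {
    total += $1
    lines[n++] = $0
    count[$0] = $1
  }
  END {
    for (i = 0; i < n; i++)
      printf "%6.2f%%  %s\n", 100 * count[lines[i]] / total, lines[i]
  }
' <<'EOF'
      3228               (<=                512)
      84                 (<=               1024)
      349                (<=               2048)
      166                (<=               4095)
      8404               (<=               4096)
      84                 (<=               8191)
      970                (<=               8192)
      368                (<=              16383)
      101                (<=              16384)
      78                 (<=              32768)
      58                 (<=              49152)
      32                 (<=              65535)
      8                  (<=              65536)
      13                 (<=              81920)
      7                  (<=             131072)
      1                  (<=             262144)
      2                  (<=             524288)
      0                  (>              524288)
EOF
)
echo "$pcts"
```

Here the 4096-byte bucket works out to roughly 60% of all IOs, matching the 8404-of-13953 reading above.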

This is a great tool that can be used to measure latency as experienced by the VM, as well as how random or sequential the IOs are. The bucket strategy actually helps a lot: instead of a single average, which severely muddies the waters, histograms give us a ranked output.

Seek distance is very interesting as well, because it gives us a sense of how much random, as opposed to sequential, IO happens. The more random the IO, the harder storage has to work, mechanical media in particular, due to increased seek times compared with sequential IO. The example below presents a histogram centered at 0, with the two ends representing seeks backward and forward from any given point in the file. In theory, concentration around 0 means that most IOs are sequential in nature, while concentration in the tails, or a very spread-out dataset without any heavily weighted buckets, means there is a lot of random IO that is relatively evenly distributed.

[root@myhost:~] /usr/lib/vmware/bin/vscsiStats -w 6763650 -p seekDistance
Histogram: distance (in LBNs) between successive commands for virtual machine worldGroupID : 6763650, virtual disk handleID : 11415 (scsi0:0) {
 min : -67656219
 max : 68230601
 mean : 3947
 count : 16591
      2752               (<=            -500000)
      578                (<=            -100000)
      97                 (<=             -50000)
      1577               (<=             -10000)
      0                  (<=              -5000)
      21                 (<=              -1000)
      0                  (<=               -500)
      16                 (<=               -128)
      0                  (<=                -64)
      0                  (<=                -32)
      574                (<=                -16)
      206                (<=                 -8)
      20                 (<=                 -6)
      1                  (<=                 -4)
      0                  (<=                 -2)
      619                (<=                 -1)
      0                  (<=                  0)
      4330               (<=                  1)
      0                  (<=                  2)
      0                  (<=                  4)
      0                  (<=                  6)
      0                  (<=                  8)
      138                (<=                 16)
      263                (<=                 32)
      183                (<=                 64)
      263                (<=                128)
      225                (<=                500)
      23                 (<=               1000)
      218                (<=               5000)
      163                (<=              10000)
      598                (<=              50000)
      48                 (<=             100000)
      828                (<=             500000)
      2850               (>              500000)
}
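
One quick way to quantify this is to sum the buckets nearest zero. The sketch below makes an assumption on my part, treating seek distances of one LBN or less in either direction as effectively sequential, and computes the near-sequential share of IO from the counts above:

```shell
# Estimate the near-sequential share of IO from a seekDistance histogram.
# Sample data below is the seekDistance histogram shown above.
summary=$(awk '
  $1 ~ /^[0-9]+$/ {
    total += $1
    b = $NF; sub(/\)/, "", b)      # bucket boundary, e.g. "-500000)" -> -500000
    if ($2 == "(<=" && b + 0 >= -1 && b + 0 <= 1)
      seq += $1                    # |distance| <= 1 LBN: treat as sequential
  }
  END { printf "near-sequential: %.1f%% (%d of %d IOs)\n", 100 * seq / total, seq, total }
' <<'EOF'
      2752               (<=            -500000)
      578                (<=            -100000)
      97                 (<=             -50000)
      1577               (<=             -10000)
      0                  (<=              -5000)
      21                 (<=              -1000)
      0                  (<=               -500)
      16                 (<=               -128)
      0                  (<=                -64)
      0                  (<=                -32)
      574                (<=                -16)
      206                (<=                 -8)
      20                 (<=                 -6)
      1                  (<=                 -4)
      0                  (<=                 -2)
      619                (<=                 -1)
      0                  (<=                  0)
      4330               (<=                  1)
      0                  (<=                  2)
      0                  (<=                  4)
      0                  (<=                  6)
      0                  (<=                  8)
      138                (<=                 16)
      263                (<=                 32)
      183                (<=                 64)
      263                (<=                128)
      225                (<=                500)
      23                 (<=               1000)
      218                (<=               5000)
      163                (<=              10000)
      598                (<=              50000)
      48                 (<=             100000)
      828                (<=             500000)
      2850               (>              500000)
EOF
)
echo "$summary"
```

For this VM, roughly 30% of IOs land within one LBN of the previous one, with the rest scattered well into the tails, i.e., a substantially random workload.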

One last example is latency. This is of course something we are often more concerned with than IOPS, throughput, etc., because it directly translates into the experience of a given application or system. Again we are given a histogram, with a mean of 1198, which is in microseconds. In this example, therefore, we should expect average latency for both reads and writes to be around 1ms. We can see from the buckets that the largest, at 10481, is the 1000us bucket, which is indeed equivalent to 1ms.

[root@myhost:~] /usr/lib/vmware/bin/vscsiStats -w 6763650 -p latency
Histogram: latency of IOs in Microseconds (us) for virtual machine worldGroupID : 6763650, virtual disk handleID : 11415 (scsi0:0) {
 min : 332
 max : 71587
 mean : 1198
 count : 18064
      0                  (<=                  1)
      0                  (<=                 10)
      0                  (<=                100)
      4945               (<=                500)
      10481              (<=               1000)
      2077               (<=               5000)
      374                (<=              15000)
      147                (<=              30000)
      37                 (<=              50000)
      3                  (<=             100000)
      0                  (>              100000)
}
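
The same bucket arithmetic applies here. As a quick sanity check on the ~1ms mean, the sketch below sums the buckets at or below 1000us and reports what fraction of IOs completed within 1ms:

```shell
# Fraction of IOs completing within 1ms, from the latency histogram above.
fast=$(awk '
  $1 ~ /^[0-9]+$/ {
    total += $1
    b = $NF; sub(/\)/, "", b)      # bucket boundary in microseconds
    if ($2 == "(<=" && b + 0 <= 1000)
      within += $1                 # buckets at or below 1000us
  }
  END { printf "within 1ms: %.1f%% (%d of %d IOs)\n", 100 * within / total, within, total }
' <<'EOF'
      0                  (<=                  1)
      0                  (<=                 10)
      0                  (<=                100)
      4945               (<=                500)
      10481              (<=               1000)
      2077               (<=               5000)
      374                (<=              15000)
      147                (<=              30000)
      37                 (<=              50000)
      3                  (<=             100000)
      0                  (>              100000)
EOF
)
echo "$fast"
```

About 85% of this VM's IOs complete within 1ms, consistent with the 1198us mean reported in the header.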

To summarize, it is very important to have data about the environment, both when a system is being designed and implemented and throughout its normal lifetime. Changing patterns become more visible when reported data changes over time compared with data from prior periods. All information should be taken with a grain of salt, and may still not be enough for projecting requirements, but this is a vital tool in our toolset. It makes us better administrators because it gives us finely grained data about what VMs do in terms of storage, along with the vital metrics needed to quantify that behavior, enabling both you as the consumer and us as your trusted vendor to design a solution that is sized correctly, performs well, and has a long useful life.
