There are a number of tools we all reach for the moment we know there is a problem with applications running in our environment. From what I have seen, iostat is reached for quite often; it is probably one of the first tools systems folks turn to, along with other common utilities like vmstat. Approximations of these tools exist on most Unix and Linux flavors, and they all do more or less the same things, and get us into more or less the same sorts of trouble.
So, what is wrong with iostat? On the surface, nothing obviously. It honestly reports the state of I/O to block devices, and optionally to other devices that look like block devices. When our applications slow down, or cycle between poor and normal response, we often assume the likely culprit is I/O. That is frequently a good assumption, and frequently we are right. But what is missing from this picture is how well a tool like iostat can actually help us spot these problems, rather than lead us astray. The element missing from the picture iostat presents is the filesystem, assuming there is one.
Because most I/O today happens through filesystems, whether local or network (NFS, for example), this conversation is aimed at applications that reside on a filesystem and issue file-level I/Os. Unlike what our applications experience, iostat reports what the storage experiences, which is often very much disconnected from the experience of our applications. While this may seem not to make sense, the reality is that modern filesystems are abstractions on top of physical storage, and as abstractions go, they introduce a lot of intelligence and processing before physical storage ever enters the mix. In fact, some filesystems, like those used on RackTop's BrickStor storage appliances, have multiple tiers of caching, pre-fetching and buffering, all of which make the application's experience very different from that of the underlying storage. The goal is always to: 1) eliminate as many I/Os to physical disks as possible, 2) reduce latency as much as possible when I/O to disks is necessary, and 3) when doing I/O, do it intelligently, reducing the number of IOPS while getting more work done.

What we usually forget is that most parts of the system operate on similar time-scales, which is not the case for physical storage. We measure operations on modern microprocessors in nanoseconds, and likewise data streaming through the PCI bus, memory, and other microchips, even those on disk drives; yet when talking about disk drives themselves, we usually think in milliseconds. That is a difference of six zeros: significant, in other words.
Yes, even the best modern mechanical drives move at the pace of plate tectonics compared to everything else happening in a computer. This wide gap in time-scales must be bridged somehow, and filesystems have invested a lot of clever engineering in narrowing the divide. Various caches, buffers, and I/O pattern analyzers in filesystems abstract the nature of storage and present users with persistent storage that seems to work on the same time-scales as the rest of the system.
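To put the nanoseconds-versus-milliseconds gap in concrete terms, here is a small back-of-the-envelope sketch. The latency figures are rough, illustrative assumptions, not measurements of any particular hardware:

```python
NS = 1e-9  # one nanosecond, in seconds

# Ballpark latencies, for illustration only.
latency = {
    "CPU cycle":          0.5 * NS,
    "DRAM access":        100 * NS,
    "NVMe flash read":    100_000 * NS,    # ~100 microseconds
    "Disk seek + rotate": 5_000_000 * NS,  # ~5 milliseconds
}

disk = latency["Disk seek + rotate"]
for name, t in latency.items():
    # Show each latency and how many of these operations fit in one disk I/O.
    print(f"{name:<20} {t / NS:>12,.1f} ns  (disk is {disk / t:>12,.0f}x slower)")
```

One mechanical seek costs roughly as much time as tens of thousands of memory accesses, which is exactly the gap the filesystem's caches and prefetchers exist to hide.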
The thing is, when we look at iostat data, as useful as it might be, we too often completely overlook the fact that the application's experience is not at all what iostat may be suggesting. We often see bursty I/O, frequently with fairly high latency, and unless we know why, we may assume it is a direct representation of our application's performance woes. In reality, the picture likely does not look tremendously different from when performance is fine. Should we avoid
iostat completely? I think the answer is no. However, we should have a good idea of what its data looks like when everything is normal and performance is good. Having that baseline will help keep us on the right path. What is certain is that it should not be our primary go-to tool for troubleshooting suspected storage bottlenecks. We should not always assume that storage is the cause, and if we do start with storage as the first suspect, it helps to know, since storage is shared, whether other consumers are experiencing similar symptoms. Tools closer to the application should reveal whether the application is actually waiting for storage, which usually shows up as I/O-wait time in some form on most operating systems. If we suspect a physical problem, like a bad disk,
iostat may help by revealing one disk behaving drastically differently from others that should look quite similar. Don't treat iostat as a hammer and everything around you as a nail. That is sure to take you on a journey you do not want to take.
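The two legitimate uses above, comparing against a known-good baseline and comparing a disk against its peers, can be sketched as follows. The device names, latency values, and both thresholds are illustrative assumptions, not recommendations; the figures stand in for something like the await column of iostat -x output:

```python
import statistics

# Per-device average I/O latency in milliseconds, in the spirit of the
# "await" column of iostat -x. All values here are made up for illustration.
baseline = {"sda": 4.1, "sdb": 4.3, "sdc": 4.0, "sdd": 4.2}
current  = {"sda": 4.5, "sdb": 4.4, "sdc": 4.2, "sdd": 41.6}

# 1) Compare against a known-good baseline: flag devices at least
#    2x slower than they were when performance was fine.
vs_baseline = [
    dev for dev, ms in current.items()
    if ms > baseline.get(dev, float("inf")) * 2.0
]

# 2) Compare peers against each other: flag any disk whose latency
#    deviates from the group's median by more than 3x that median.
med = statistics.median(current.values())
vs_peers = [dev for dev, ms in current.items() if abs(ms - med) > 3 * med]

print("slower than baseline:", vs_baseline)  # → ['sdd']
print("unlike its peers:", vs_peers)         # → ['sdd']
```

Neither check tells us the application is actually waiting on that disk; they only narrow down which device deserves a closer look with tools nearer the application.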