Another axis along which to think about performance is managed performance, which is distinct from the highest raw performance one can extract from a storage system. Given the fixed amount of performance a storage system can produce, how should that performance be shared among the many applications (VMs) running on it? Not all applications have the same performance needs: some are throughput-driven, while others need the lowest possible latencies.
Not all applications are equal: some are more important than others in the data center, and when total performance demand exceeds what is available, the more important applications must be prioritized.
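One common way to express this kind of prioritization is weight-based "water-filling": each VM gets a share of the IOPS budget proportional to its weight, capped at its actual demand, with any surplus redistributed to the VMs that still want more. The sketch below is illustrative only; the function name, weights, and demand figures are assumptions for the example, not part of any particular product's implementation.

```python
def allocate_iops(capacity, demands, weights):
    """Divide a fixed IOPS budget among VMs by weight (water-filling).

    A VM never receives more than its demand; the leftover from
    lightly loaded VMs is redistributed, by weight, to the rest.
    """
    alloc = {vm: 0.0 for vm in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[vm] for vm in active)
        # VMs whose full demand fits inside their weighted share.
        satisfied = {vm for vm in active
                     if demands[vm] <= remaining * weights[vm] / total_w}
        if not satisfied:
            # Contention: every remaining VM gets exactly its share.
            for vm in active:
                alloc[vm] += remaining * weights[vm] / total_w
            break
        for vm in satisfied:
            alloc[vm] = demands[vm]
        remaining = capacity - sum(alloc.values())
        active -= satisfied
    return alloc
```

For example, with a 1,000-IOPS budget, equal weights, and demands of 200/900/900, the light VM is fully satisfied at 200 and the two heavy VMs split the remaining 800 evenly; with weights of 3:1 under full contention, allocations land at 750 and 250.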
Additionally, not all applications are good neighbors. A write-intensive application may coexist better with read-intensive applications than with other write-intensive applications.
Finally, not all performance problems originate at the storage level. It is very common for the source of a nagging performance problem to be a misconfigured network, switch settings, hypervisor host settings, or even VM-level CPU or memory settings.
Building performance isolation and guarantees is not simple and cannot merely be bolted onto an existing storage architecture; the techniques must run deep into the storage stack. Not only must IOs be proportionally scheduled by principal when they are received, that per-principal scheduling must continue throughout the stack. Individual IOs must be tracked by their principal deep into the storage stack, statistics must be maintained at the IO level at every layer, and those statistics must feed into a model that can produce the desired results.
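To make the idea of per-principal proportional scheduling concrete, here is a minimal sketch in the style of start-time fair queuing: each IO carries its principal (the VM) as a tag, each principal advances a virtual clock at a rate inversely proportional to its weight, and IOs are dispatched in virtual-finish-time order so that service converges to the weight ratio. The class and field names are assumptions for illustration, not the actual scheduler described in the text.

```python
import heapq
from dataclasses import dataclass
from itertools import count

@dataclass
class IO:
    vm: str     # principal tag carried with the IO through the stack
    size: int   # cost in service units (e.g., bytes)

class FairScheduler:
    """Proportional-share IO scheduler sketch (start-time fair queuing).

    Each principal (VM) has a weight; submitting an IO advances that
    principal's virtual clock by size/weight, and dispatch pops the IO
    with the smallest virtual finish time.
    """
    def __init__(self, weights):
        self.weights = weights
        self.vclock = {vm: 0.0 for vm in weights}  # per-principal clock
        self.heap = []
        self._seq = count()  # FIFO tie-breaker for equal finish times

    def submit(self, io):
        # Heavier-weighted principals accumulate virtual time slower,
        # so their IOs sort earlier and they receive more service.
        finish = self.vclock[io.vm] + io.size / self.weights[io.vm]
        self.vclock[io.vm] = finish
        heapq.heappush(self.heap, (finish, next(self._seq), io))

    def dispatch(self):
        if not self.heap:
            return None
        _, _, io = heapq.heappop(self.heap)
        return io
```

With weights of 2:1 and equal-sized IOs from two VMs, the dispatch stream interleaves roughly two IOs from the heavy VM for every one from the light VM, which is the proportional-share behavior the text describes.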
The performance aspect of the Tintri storage system was designed with several requirements in mind:
- The high performance of SSDs
- The expectation that SSDs will become even faster, driving whole-array performance toward millions of IOPS
- The challenges of flash management
- The need to provide proper performance isolation at the VM level
- The need for diagnosing performance issues end-to-end