The first generation of commodity SSDs was at least two orders of magnitude faster than HDDs for random reads. They were not without problems, however; the random write case was more complex. Random write performance was lower, with latency spikes that the SSD's flash translation layer (FTL) took many generations to tame. Even sequential write performance, although much higher than an HDD's, suffered latency problems in the steady state.
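The mechanism behind those steady-state spikes can be sketched with a toy model: once the drive's pool of pre-erased blocks is exhausted, the FTL must periodically pause foreground writes to garbage-collect. The pause frequency and durations below are illustrative assumptions, not measurements of any real drive.

```python
def simulate_ftl_writes(n_writes, gc_every=64, write_us=50, gc_pause_us=2000):
    """Toy FTL model: every `gc_every` writes, the free-block pool is
    exhausted and a garbage-collection pause stalls the write.  All
    parameter values are made-up illustration numbers."""
    latencies = []
    for i in range(1, n_writes + 1):
        latency = write_us
        if i % gc_every == 0:       # free blocks exhausted: erase/compact
            latency += gc_pause_us  # the application sees a latency spike
        latencies.append(latency)
    return latencies

lat = simulate_ftl_writes(1024)
avg = sum(lat) / len(lat)
print(f"avg {avg:.0f} us, worst {max(lat)} us")  # → avg 81 us, worst 2050 us
```

The average looks healthy, but the tail is 40x worse than the common case; this gap between average and worst-case latency is exactly what early FTLs struggled with.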
Related to this, the density of first-generation SSDs was not as high as that of HDDs. Many applications must store a certain amount of data on a device before they demand a certain level of performance from it; in other words, from the application's perspective, a minimum storage density is needed to drive minimum performance needs. Here again, it was anticipated that SSD density would grow steadily over time. So even if applications would not drive the first generation of SSD-based storage as hard, they would eventually demand far stronger performance as much more data became concentrated in a single SSD.
Tintri understood that simply taking current-generation architectures and ideas developed for HDDs and running them on SSDs would yield some performance benefit, but that delivering the full benefit of high IOPS at consistently low latency would require designing from the ground up. It also foresaw that SSD performance would grow massively, and that eventually a full array of SSDs would easily outstrip the performance available from the controller. At that point, the CPU would become the bottleneck.