Virtualization has helped to transform the data center by enabling increased server utilization, scalability and flexibility. However, these benefits have also brought undeniable disruption to storage infrastructures.
Servers hosting multiple applications are running mixed workloads that create random I/O for the storage array. This random I/O is difficult for spinning disks to handle, adding precious milliseconds to the read/write process.
It’s called the “I/O blender” effect. When data is requested, a hard drive must rotate its platters to the location where that data resides. In a one server/one application arrangement with sequential I/O operations, this rotation is manageable. But in a virtualized environment, tens or hundreds of virtual machines are sending random I/O requests at the same time. The disk heads are almost constantly seeking back and forth across the platters looking for data, which creates a huge performance bottleneck.
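A toy Python sketch makes the effect visible. The VM count, block layout and stream lengths below are illustrative assumptions rather than measurements of any real hypervisor, but the pattern they produce is the same one the array sees:

```python
# Hypothetical layout: VM i owns a region of the disk starting at i * 1_000_000.
def vm_stream(vm_id, length=5):
    base = vm_id * 1_000_000
    return [base + i for i in range(length)]

# Four VMs, each issuing a perfectly sequential stream within its own region.
streams = [vm_stream(i) for i in range(4)]

# The hypervisor interleaves the streams; the array sees one merged queue.
blended = [lba for group in zip(*streams) for lba in group]

# Distance between consecutive requests: +1 within a single VM's stream,
# but roughly a million blocks once the streams are blended together.
jumps = [abs(b - a) for a, b in zip(blended, blended[1:])]
print(blended[:8])  # [0, 1000000, 2000000, 3000000, 1, 1000001, 2000001, 3000001]
print(max(jumps))   # 2999999 -- nearly a full-disk seek between adjacent requests
```

Every one of those million-block jumps forces a mechanical seek on a spinning disk, and that is where the milliseconds pile up.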
The typical answer has been to add more spindles, provision more disk space and create storage silos to meet the different performance and capacity requirements of various applications. But that’s just not sustainable. This approach wastes storage capacity and physical space while increasing the management burden and the power required to spin all those disks. In the end, you can end up doubling your cost per gigabyte of storage.
The I/O blender effect is one reason why organizations are increasingly turning to all-flash storage arrays for Tier 1 storage. Unlike arrays built on spinning magnetic disks, flash has no moving parts and thus eliminates the mechanical chokepoints of hard-disk drives.
Flash memory actually excels at random I/O performance. Because there are no platters to rotate and no heads to seek, it can reach any data location with essentially uniform latency. It is as fast on random workloads as on sequential ones, producing huge gains in IOPS compared to disk. A single flash drive can deliver tens of thousands of IOPS, the equivalent of an entire midrange disk array.
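The arithmetic behind that comparison is simple. The sketch below assumes typical published figures (a 3.5 ms average seek and 15,000 RPM for an enterprise hard drive, 50,000 IOPS for a single SSD); these are illustrative assumptions, not specifications of any particular product.

```python
# Back-of-the-envelope IOPS for a random workload (illustrative numbers).
rpm = 15_000
avg_rotational_ms = (60_000 / rpm) / 2   # half a revolution: ~2.0 ms
avg_seek_ms = 3.5                        # typical 15K enterprise drive
service_time_ms = avg_seek_ms + avg_rotational_ms

hdd_iops = 1_000 / service_time_ms       # ~180 IOPS per spindle
ssd_iops = 50_000                        # conservative figure for one SSD

print(f"HDD: ~{hdd_iops:.0f} IOPS per spindle")
print(f"Spindles needed to match one SSD: ~{ssd_iops / hdd_iops:.0f}")
```

At roughly 180 IOPS per spindle, matching a single SSD takes on the order of 275 drives, which is why one flash device can plausibly stand in for a midrange disk array on random workloads.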
While flash is indisputably faster, more powerful and more space-efficient than disk, price has been a barrier to using it across an entire array. This led most storage vendors to incorporate a small amount of flash into disk arrays to help resolve performance issues while keeping costs manageable. In hybrid arrays, flash serves as a cache or tier alongside disk drives, with a caching algorithm deciding which medium is the most appropriate landing spot for each piece of data.
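As a rough illustration of how such a policy works, here is a toy least-recently-used (LRU) read cache in Python. Real hybrid arrays use far more sophisticated placement heuristics; the class, capacity and latency figures here are purely hypothetical.

```python
from collections import OrderedDict

class FlashCache:
    """Toy LRU read cache: hot blocks stay on flash, cold reads fall to disk."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()             # block_id -> True (presence only)

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # refresh recency on a hit
            return "flash hit (~0.1 ms)"
        # Cache miss: fetch from disk, then promote the block into flash.
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)     # evict the least-recently-used block
        self.blocks[block_id] = True
        return "disk miss (~5 ms)"

cache = FlashCache(capacity_blocks=2)
for blk in [1, 2, 1, 3, 2]:
    print(blk, cache.read(blk))
# Block 3 evicts block 2, so the final read of block 2 misses again.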
While the hybrid architecture delivers some performance gains, it also creates new challenges. Chief among them is the so-called “cache miss,” whose penalty is severe because of the wide latency gap between the two media.
The problem is that caching algorithms cannot predict with 100 percent certainty which data will be accessed by an application. If the algorithm makes the wrong prediction, a cache miss occurs and data must be read from back-end storage — which dramatically increases latency for that operation and makes storage performance unpredictable.
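The impact of misses on average latency is easy to quantify. Assuming illustrative latencies of roughly 0.2 ms for flash and 6 ms for a busy disk back end, a simple weighted average shows how quickly a falling hit ratio erodes performance:

```python
# Effective read latency = hit_ratio * flash_latency + miss_ratio * disk_latency.
# The 0.2 ms and 6 ms figures are illustrative assumptions, not vendor specs.
flash_ms, disk_ms = 0.2, 6.0

for hit_ratio in (0.99, 0.95, 0.90, 0.80):
    avg = hit_ratio * flash_ms + (1 - hit_ratio) * disk_ms
    print(f"hit ratio {hit_ratio:.0%}: average latency {avg:.2f} ms")

# Even at 95% hits the average is ~0.49 ms, but 1 in 20 reads still waits ~6 ms.
# That long tail is exactly the unpredictability described above.
```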
Falling prices for solid-state drives have made all-flash arrays far more practical. According to a recent whitepaper from IDC, all-flash arrays will become the go-to technology for primary storage due to compelling business and financial benefits.
Technologent’s engineers have extensive experience in the design and implementation of enterprise-class storage environments. Let us help you determine if all-flash arrays could deliver operational and performance benefits in your virtualized environment.
Tags: Data Storage
November 2, 2015