There's no question that solid-state drives (SSDs) offer many advantages over traditional hard disk drive (HDD) storage. The long list of benefits includes better performance, lower latency, and greater flexibility and scalability.
Of course, the biggest obstacle to more widespread deployment of flash-based SSDs has always been cost. But as flash storage prices continue to drop, data volumes continue to explode, and organizations demand more performance for their storage environments, the shift from HDDs to SSDs in the data center has accelerated.
Initially, organizations that implemented flash storage were focused on the raw speed of SSDs. As flash storage technology and data center designs continue to evolve, there are other factors that need to be considered. One is storage Quality of Service (QoS).
Many organizations have implemented shared infrastructures in order to increase operational efficiencies and improve resource utilization. However, resource contention is always an issue, making it difficult to ensure performance levels for specific applications. That’s where storage QoS comes in.
When organizations first began to deploy SSDs, they thought the technology would be fast enough to overcome resource contention in a shared environment. Flash storage also had limited capacity, making it unlikely that more than one workload would share an SSD. Now, as organizations are increasing their use of flash and implementing all-flash arrays, resource contention has become more of a problem.
With all-flash arrays, performance bottlenecks stem from memory and processing, not from the storage media itself. Purpose-built all-flash arrays include data reduction technologies such as deduplication and compression, which increase effective capacity and drive down the cost per gigabyte of flash. However, these capabilities also create more processing overhead.
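To make that overhead concrete, here is a minimal Python sketch of inline block-level deduplication and compression. The `write_block` pipeline, fingerprint store, and block size are illustrative assumptions, not any vendor's implementation; the point is simply that every write now costs a hash and a compression pass on the controller before data reaches flash.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096          # assumed block size, for illustration only
fingerprints = {}          # content hash -> physical location
physical_store = []        # stand-in for the flash media

def write_block(data: bytes) -> int:
    """Inline data reduction: deduplicate first, then compress.

    Both steps consume controller CPU on every write, which is
    the processing overhead described above.
    """
    digest = hashlib.sha256(data).hexdigest()   # dedup fingerprint
    if digest in fingerprints:                  # duplicate block:
        return fingerprints[digest]             # store nothing new
    compressed = zlib.compress(data)            # unique block: compress it
    physical_store.append(compressed)
    location = len(physical_store) - 1
    fingerprints[digest] = location
    return location

# Two identical writes consume physical space only once.
loc1 = write_block(b"A" * BLOCK_SIZE)
loc2 = write_block(b"A" * BLOCK_SIZE)
assert loc1 == loc2 and len(physical_store) == 1
```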
Storage QoS guarantees that a given workload will receive a minimum performance level. It also makes it possible to cap the amount of I/O a workload can consume, preventing any one application from hogging performance, and to prioritize mission-critical applications so that adequate I/O is maintained even when the array is performing maintenance tasks.
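As a rough illustration of how those three levers interact, the Python sketch below models an array allocating I/O under per-volume QoS policies: each volume's minimum is satisfied first, the remainder is divided by priority weight, and no volume exceeds its maximum. The `QoSPolicy` fields and the allocation logic are simplified assumptions for illustration, not any particular array's algorithm.

```python
from dataclasses import dataclass

@dataclass
class QoSPolicy:
    name: str
    min_iops: int    # guaranteed floor
    max_iops: int    # ceiling that prevents noisy neighbors
    priority: int    # weight for dividing leftover capacity

def allocate(policies: list[QoSPolicy], array_iops: int) -> dict[str, int]:
    """Grant every volume its minimum, then split the remainder by
    priority weight, never exceeding a volume's maximum.
    Assumes the minimums are admissible (their sum <= array_iops)."""
    grants = {p.name: p.min_iops for p in policies}
    remaining = array_iops - sum(grants.values())
    total_weight = sum(p.priority for p in policies)
    for p in policies:
        share = remaining * p.priority // total_weight
        grants[p.name] = min(p.max_iops, grants[p.name] + share)
    return grants

# A mission-critical database keeps its floor even while a
# lower-priority backup job competes for the same array.
print(allocate([QoSPolicy("oltp-db", 20000, 50000, 3),
                QoSPolicy("backup", 1000, 10000, 1)], 40000))
```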
Another consideration is whether to implement a scale-up or scale-out architecture. Scale-up architectures use a single controller for all storage processing. With scale-up disk storage, bottlenecks are common because capacity can be added while all I/O continues to flow through that one controller, and efficiency suffers because the controller often hits its maximum performance level long before maximum capacity is reached. A scale-up system purpose-built for flash minimizes the bottlenecks that plague disk storage and makes more efficient use of storage capacity.
Instead of relying on a single controller, scale-out architectures use a modular configuration in which each module, or node, has its own controller and capacity. Capacity is expanded by adding nodes, which allows performance to scale as well: bottlenecks are minimized and heavier workloads are supported because multiple controllers process I/O independently.
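The sketch below illustrates the scale-out idea: blocks are spread across nodes by hashing, so each node's controller services only its share of the I/O, and adding a node adds both capacity and processing power. The hash-based placement is a deliberately simplified assumption; real scale-out arrays use more sophisticated placement schemes that avoid wholesale data movement when nodes are added.

```python
import hashlib

def node_for_block(block_id: str, node_count: int) -> int:
    """Map a block to a node so I/O spreads across all controllers.
    (MD5 is used here only as a cheap, well-distributed hash.)"""
    digest = hashlib.md5(block_id.encode()).hexdigest()
    return int(digest, 16) % node_count

def io_distribution(block_ids: list[str], node_count: int) -> dict[int, int]:
    """Count how many blocks (and thus how much I/O) each node serves."""
    load = {n: 0 for n in range(node_count)}
    for b in block_ids:
        load[node_for_block(b, node_count)] += 1
    return load

blocks = [f"vol1/block{i}" for i in range(100000)]
print(io_distribution(blocks, 1))   # scale-up: one controller serves all I/O
print(io_distribution(blocks, 4))   # scale-out: four nodes each serve ~25%
print(io_distribution(blocks, 6))   # adding nodes spreads the load further
```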
Scale-up is ideal for organizations that want to solve a specific performance problem. However, if you expect to add more applications and workloads to the system, which will then require more capacity, a scale-out architecture may make more sense.
Technologent specializes in the design and implementation of all-flash storage systems. Let us assess the state of your existing storage infrastructure and help you select an all-flash array that will enable you to maximize performance, scalability and QoS for your mission-critical workloads.