Software-defined storage has the potential to revolutionize the data center. Rather than grappling with many disparate legacy storage systems, operators could use SDS to set up a single pool of resources that is easy to automate and runs on industry-standard hardware.

Moreover, SDS would theoretically reduce the need to worry about hardware failure or the maintenance of expensive proprietary systems. Software-based solutions improve interoperability between devices from different vendors, making it much easier for administrators to think about cloud storage at large rather than fixate on the peculiarities of individual servers, disks and switches.

Defining SDS and distinguishing between software- and hardware-oriented solutions
But what is SDS, precisely? While the term is widely used, its definition is often vague or, in extreme cases, misleading. As blogger Howard Marks pointed out last year, some vendors have started referring to hardware-intensive solutions as “software-defined,” if only to take advantage of growing interest in the space.

Still, one could argue that almost all data center infrastructure is valuable primarily because of the software it runs, right down to the systems-on-a-chip inside servers. So the difference between real and pseudo SDS comes down to what types of hardware, if any, a particular solution requires.

“If your product requires any hardware that customers can only buy from you, with the grudging exception of a licensing dongle or NVRAM module, it’s not software-defined,” Marks explained in a piece for Network Computing. “That means that an HP P6000 LeftHand storage system isn’t really software-defined, but the same software running as an instance of the StoreVirtual [virtual storage appliance] is.”

The malleability of the term “software-defined” has led some organizations to think that even legacy appliances utilizing software for storage allocation fit into the SDS category. In truth, it took the rise of virtual storage appliances, which run as virtual machines under a hypervisor, to liberate storage technologies from the hardware dependencies that were characteristic of older solutions. Now SDS is moving toward consolidated pools, further abstracting storage from equipment.

Vendors give organizations SDS options that improve performance and lower costs
Vendors such as Nexenta, a Seagate Cloud Builder Alliance partner, have been on the cutting-edge of the SDS movement, offering buyers the chance to significantly lower their costs relative to legacy storage systems through a combination of powerful software and economical hardware. These arrangements are ideal now that more organizations are keen to set up cloud storage systems capable of handling rapidly evolving data demands.

“The volume and type of data being collected by enterprises today is moving storage to the forefront of CIO and IT agendas,” stated IDC research director Donna Taylor. “Storage is becoming a strategic company-wide issue and companies are eager to find solutions that balance security, price, performance and reliability; Nexenta is one solution worth examining further.”

More specifically, the need for better storage performance can be seen in areas such as higher education. Several Canadian and British universities recently adopted open source storage systems from Red Hat as their data requirements changed. They already operated highly virtualized environments, but needed additional availability and replication. Since both McMaster University and the University of Reading had already invested heavily in cloud hardware, it made sense for them to turn to open source software in order to leverage existing infrastructure. Projects such as OpenStack, to which Red Hat is a prominent contributor, are compatible with a wide range of appliances, demonstrating that the real power of SDS is in creating a data center in which workloads can be distributed across disparate types of hardware without compromising efficiency.
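OpenStack’s block storage service, Cinder, illustrates how a single software layer can pool disparate hardware behind one API. The excerpt below is a hypothetical cinder.conf sketch, not a configuration from either university: it exposes commodity local disks and a legacy NFS filer as two backends of one service (the backend names and share file path are assumptions for illustration).

```ini
[DEFAULT]
# Two heterogeneous backends served through the same Cinder API
enabled_backends = lvm-local, nfs-legacy

[lvm-local]
# Commodity direct-attached disks managed via LVM
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM_LOCAL

[nfs-legacy]
# Existing NFS filer folded into the same storage pool
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = NFS_LEGACY
```

Administrators can then map volume types to `volume_backend_name` values, so workloads land on whichever class of hardware suits them without callers knowing which vendor’s equipment sits underneath.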

The University of Reading required highly available, scalable storage to support scientific research projects. Its combination of new open source solutions and old standalone servers may eventually handle more than 1 petabyte of data.

Reading high performance computing manager Dan Bretherton stated that his group was committed to the safe and efficient storage of hundreds of terabytes of information, as well as the facilitation of adequate performance for the institution’s increasingly I/O-intensive workloads. While SDS is an important part of remaking the data center for complex new requirements, HPC appliances and other specialized infrastructure also play important roles in ensuring that organizations have the best cloud storage and compute setup possible.

Making SDS implementation and purchasing decisions
After cutting through the fog and confusion that sometimes surrounds SDS terminology, the next challenge is implementing solutions. Writing for TechTarget, Jon William Toigo argued that buyers should look for SDS products that aren’t tied to specific types of hardware or limited to certain hypervisors.

“In the business of software-defined infrastructure, agnosticism is highly prized and a matter of architectural freedom and cost containment,” explained Toigo. “You should consider a product that supports implementation, both as a centralized server and as a federated resource manager.”

This approach could simplify some of the troubles that new SDS users often encounter. For example, many of them end up beholden to how hardware vendors update their firmware: once the firmware changes, some SDS vendors are left playing catch-up, trying to make their offerings compatible with the relevant APIs. In this context, it makes sense to look for SDS solutions that are as flexible as possible. Doing so may also ease the common burden of keeping up with new releases, as well as with mergers and acquisitions, in the storage hypervisor space.

Once implemented, SDS can provide an immediate performance boost, which Toigo estimated at two to four times that of strictly hardware-defined configurations. On a technical level, queuing I/O on flash devices relieves pressure on the rest of the storage infrastructure, much as memory caching has dramatically improved the efficiency of Tier 0 and Tier 1 storage.
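A speedup in that range is consistent with a simple latency-blending model of caching. The sketch below is a toy illustration only; the hit ratio and per-tier latencies are assumed numbers, not benchmarks of any SDS product.

```python
# Toy model: average I/O latency when some requests are served from a fast
# flash tier and the rest fall through to slower disk. Numbers are assumptions.

def effective_latency_ms(hit_ratio: float, fast_ms: float, slow_ms: float) -> float:
    """Blended latency when hit_ratio of I/Os are served from the fast tier."""
    return hit_ratio * fast_ms + (1.0 - hit_ratio) * slow_ms

disk_only = 5.0                                    # assumed HDD latency (ms)
with_flash = effective_latency_ms(0.7, 0.1, 5.0)   # 70% of I/O hits flash
speedup = disk_only / with_flash

print(f"effective latency: {with_flash:.2f} ms, speedup: {speedup:.1f}x")
# prints "effective latency: 1.57 ms, speedup: 3.2x"
```

With these assumed numbers, a 70 percent flash hit rate yields roughly a 3x improvement, squarely inside Toigo’s two-to-four-times range; the real-world figure depends heavily on workload and cache sizing.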

Overall, SDS gives data center operators the best of both worlds. It reduces dependency on hardware, better leverages existing infrastructure investments and improves enterprises’ ability to handle large quantities of data and increasingly demanding workloads.
