Intelligent Infrastructure

Mass data and next-gen workloads

Enterprises learn hyperscale lessons from Open Compute Project

Open hardware and software combinations aren't just for Web companies anymore.

Standards-based cloud hardware is often associated exclusively with hyperscale organizations such as Facebook, whose Open Compute Project has been an important innovation for large data center operators, which can reduce electricity costs through more efficient cooling systems. Going forward, however, companies of all sizes may explore the benefits of hyperscale infrastructure and follow OCP guidance in transforming legacy systems into efficient cloud computing platforms.

Enterprises are learning hyperscale lessons from Web companies
Companies that provide mostly consumer services, including Google, Facebook and Amazon, already operate hyperscale data centers, but these setups have a place in the enterprise, too. Building scalable architectures for the Web requires a certain degree of standardization and orchestration to get the most out of hardware resources.

“Web-scale computing emphasizes scale-out, over scale-up, server nodes – and it relies on an architecture that is built on standards, APIs and a high degree of abstraction that allows workloads to run on top of a standardized platform,” explained SanDisk’s Jean Bozman in a blog post. “Importantly, it allows data centers to scale up workloads without scaling up individual servers, because computing resources throughout the infrastructure are leveraged, as needed, using orchestration software.”
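The scale-out idea in that quote can be reduced to a placement loop: rather than giving one server more resources, an orchestrator puts each new workload on whichever standardized node has spare capacity, and adds a node when all are full. The sketch below illustrates this under assumed node names and a hypothetical per-node capacity; it is not any particular orchestrator's algorithm.

```python
# Hypothetical pool of identical, standardized nodes and their workload counts.
nodes = {"node-a": 0, "node-b": 0, "node-c": 0}
CAPACITY = 4  # workloads a standardized node can host (assumed for illustration)

def place(workload):
    """Place a workload on the least-loaded node, scaling out if all are full."""
    node = min(nodes, key=nodes.get)            # least-loaded node
    if nodes[node] >= CAPACITY:                 # every node full: add one
        node = f"node-{chr(ord('a') + len(nodes))}"
        nodes[node] = 0
    nodes[node] += 1
    return node

for w in range(13):
    place(f"job-{w}")
print(nodes)  # 13 workloads spread across 4 nodes
```

The point is that capacity grows by adding cheap, interchangeable nodes, not by enlarging any single server.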

Along the way, more enterprises may construct their own systems from open source software and hardware, favoring these arrangements over integrated proprietary solutions. More specifically, they may implement redundancies and rapid provisioning to harden their infrastructure against server outages, something that Web-scale operators have already become adept at addressing.
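The redundancy-plus-rapid-provisioning pattern described above can be sketched in a few lines: keep a pool of interchangeable nodes, route traffic only to healthy ones, and replace a failed node by stamping out another instance of the standard image. Node names, the health model, and the provisioning step here are all hypothetical placeholders, not a real cluster manager's API.

```python
import random

class NodePool:
    """Toy pool of redundant, interchangeable server nodes (illustrative only)."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.healthy = set(nodes)

    def mark_failed(self, node):
        """On failure, rapidly provision a replacement from the standard image."""
        self.healthy.discard(node)
        replacement = f"{node}-replacement"   # stand-in for re-imaging a node
        self.nodes.add(replacement)
        self.healthy.add(replacement)
        return replacement

    def route(self):
        """Route a request to any healthy node; redundancy hides the outage."""
        return random.choice(sorted(self.healthy))

pool = NodePool(["web-1", "web-2", "web-3"])
pool.mark_failed("web-2")
print(pool.route())  # always a healthy node; never "web-2"
```

Because every node is standardized, the replacement is indistinguishable from the original, which is what makes the outage invisible to callers.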

What about software? The emergence of Web-scale architectures in the enterprise could encourage development of applications that leverage OpenStack components for telemetry and orchestration, as well as standard software and hardware APIs. To support development, infrastructure will need to be upgraded for better efficiency in storage, compute and networking.
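As a rough sketch of how telemetry can drive orchestration, the snippet below makes a scale-out decision from CPU-utilization samples like those OpenStack's telemetry service exposes. The sample values and the threshold are hypothetical; in a real deployment the samples would come from the telemetry API and the orchestration layer would perform the actual scaling.

```python
def should_scale_out(cpu_samples, threshold=0.80):
    """Scale out when average CPU utilization across nodes exceeds the threshold."""
    return sum(cpu_samples) / len(cpu_samples) > threshold

samples = [0.91, 0.87, 0.78]  # illustrative per-node utilization readings
if should_scale_out(samples):
    print("request an additional node from the orchestration layer")
```

The value of standard APIs here is that the decision logic is the same regardless of which vendor's hardware sits underneath.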

Flash, OCP and energy savings in the data center
One of the primary reasons for enterprises to build hyperscale infrastructure is to save time on testing and deployment by using more efficient components. Software-defined data centers are an appealing option because they can be easier to maintain than legacy infrastructure and can be run on top of industry-standard storage hardware. OCP is a good model for enterprises to learn from, even if many neither operate at Facebook’s scale nor have the in-house expertise to make informed purchasing decisions about commodity appliances.

“While commodity hardware will take time to infiltrate the enterprise, the lessons of hyperscale operations are already making an impact on converged infrastructure,” wrote Wikibon contributor Stuart Miniman in an article for InformationWeek. “Converged infrastructure simplifies deployment and maintenance, which targets the high operational overhead of traditional infrastructure.”

For the enterprises that are setting up hyperscale-like infrastructure, software development is becoming more critical than ever. IT departments are seeking to integrate solutions as far up the stack as possible to get a higher return on investment, illustrating the importance of regarding software and hardware as two sides of the same coin.

More specifically, Facebook’s efforts with OCP are just the latest sign of how flash has become a key building block of scalable infrastructure, providing new efficiencies in both hardware and software. Flash is not only faster than magnetic storage but also more energy-efficient and easier to cool.

The implementation of flash storage across data centers has also permitted different approaches to application development. DevOps teams have been able to leave behind legacy disk code and rewrite applications so that they consume fewer resources. The result has been new management systems that reduce overhead from long-running input/output operations.
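One concrete way flash changes application code: disk-era programs often batched and sorted reads to minimize head seeks, while on flash, random access is cheap, so the straightforward pattern below is acceptable and simpler to maintain. The fixed-width record layout is an assumption made for illustration.

```python
import tempfile

RECORD = 64  # bytes per fixed-size record (assumed layout)

# Build a small file of 100 fixed-width records.
f = tempfile.TemporaryFile()
for i in range(100):
    f.write(f"{i:<{RECORD}}".encode())

def read_record(idx):
    """Random-access read: on flash there is no seek penalty to engineer around."""
    f.seek(idx * RECORD)
    return f.read(RECORD).strip().decode()

print(read_record(73), read_record(5))  # → 73 5
```

Dropping the seek-avoidance machinery is exactly the kind of rewrite the paragraph above describes: less code, fewer long-running I/O paths to manage.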