
Hacking Open the Data Center

"If you build it this way, for our needs, we will buy it."

Just a few years ago, the words “open source” and “hardware” were never mentioned in the same sentence. Instead, the focus was on open source software running on top of closed, proprietary hardware solutions.

Hardware suppliers were inwardly focused on creating proprietary, “converged” infrastructure to protect their existing businesses, instead of working with the community to develop new solutions.

So, why change?

Open source creates transparency and community engagement, which accelerates the pace of innovation. We saw this happen in software, and needed it to happen in hardware to keep pace with the infrastructure that will be built in the years to come.

Could we have survived if hardware remained closed and proprietary? Yes, but the lack of innovation and lack of efficiency would have certainly become a gating factor to delivering rich online experiences to people around the world.

One example of the pace of change we need to support is the exponential growth in data storage. Data might have been growing then, but it’s exploding now, and it’s forcing CIOs, data-center managers and engineers to rethink how to efficiently store and manage all that information.

The numbers tell the story

By 2017, there will be about 3.6 billion Internet users, more than 48 percent of the world’s projected population. Driven by devices like tablets and wearables, the average U.S. household now has 5.7 Internet-connected devices, according to research firm NPD Group, and there are more than half a billion Internet-connected devices in the U.S. alone.

Combine those two trends, and the result is that the amount of global digital information created and shared — from documents to photos to tweets — is estimated to reach 40 zettabytes by the year 2020, according to IDC. If we put all that data on three-terabyte hard drives (at about six inches long) and lined them up end to end, they would cover enough distance to make two round-trip journeys to the moon — and we’d still have plenty to spare.
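As a rough sanity check on that analogy, here is a quick back-of-the-envelope sketch in Python. The decimal unit conventions (1 ZB = 10^21 bytes, 1 TB = 10^12 bytes) and the average Earth-moon distance of roughly 384,400 km are assumptions for illustration, not figures from the IDC estimate.

```python
# Back-of-the-envelope check of the hard-drive analogy above.
# Assumptions: decimal units (1 ZB = 1e21 bytes, 1 TB = 1e12 bytes),
# six inches per drive, and an average Earth-moon distance of ~384,400 km.

TOTAL_DATA_BYTES = 40e21        # 40 zettabytes (IDC estimate for 2020)
DRIVE_CAPACITY_BYTES = 3e12     # three-terabyte drives
DRIVE_LENGTH_M = 6 * 0.0254     # six inches, in meters
EARTH_MOON_KM = 384_400         # average Earth-moon distance (assumed)

drives = TOTAL_DATA_BYTES / DRIVE_CAPACITY_BYTES
chain_km = drives * DRIVE_LENGTH_M / 1_000
round_trips = chain_km / (2 * EARTH_MOON_KM)

print(f"Drives needed: {drives:.2e}")
print(f"Length end to end: {chain_km:,.0f} km")
print(f"Round trips to the moon: {round_trips:.1f}")
# Roughly 13 billion drives stretching about 2 million km, or ~2.6
# round trips: two full round trips with plenty to spare.
```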

We saw this firsthand at Facebook. Back in 2009, the company’s infrastructure was growing at an incredible rate as we approached 500 million users. Today, with more than one billion users, we see more than 350 million photos uploaded and more than 4.75 billion content items, including comments and status updates, generated each day. When we mapped this growth against the trajectory of buying more servers, leasing more data-center space, and adding to a growing carbon footprint, we decided we needed a more economically and environmentally sustainable way forward.

So we started with a clean slate and brainstormed ideas for building a data center from the ground up, beginning with the very structure of the facility itself to how each server functioned. What could we remove from the system? How high could we raise operating temperatures and have the servers survive?

For example, instead of installing an immense, centralized air conditioning system for Facebook’s first data center in Prineville, Ore., we deployed a free-air cooling process in which large fans draw outside air into the building. The air passes through filter banks, and an evaporative cooling system cools and humidifies the air to temperatures and relative humidity levels suitable for IT equipment. When the data center came online in 2010, we measured it against our existing facilities and found that it was 38 percent more energy efficient, at 24 percent lower cost.

Once we realized the early results of our efforts, Facebook founded the Open Compute Project (OCP) to build a community around open source hardware innovation for data centers. The blueprints for the cooling system and other energy-efficient designs contained within our facility in Prineville have been shared through OCP, for instance.

In just over two years since OCP was started, more than 50 percent of the contributions to the project’s designs now come from outside of Facebook, including industry leaders like AMD, ARM, Intel, Fusion-io, LSI, Fidelity, Goldman Sachs, Rackspace, and many others. We are also seeing significant adoption of OCP technology through the Open Compute Solution Providers, who sell and support OCP technology for customers around the world.

Wave of change

Suppliers have underestimated and underserved customers. Customers know more about how their infrastructure should be designed than suppliers do. Since that’s pretty obvious, I’ve always wondered why suppliers still approach customers with a predefined, non-modifiable roadmap of hardware products for sale. Isn’t that a bit like a doctor writing a prescription before diagnosing the patient?

Consider open source hardware as building blocks, and not just for companies operating at Facebook’s scale. Every business has different needs and regulations, so not every open source server or networking switch will be the same. Maybe a financial institution on Wall Street won’t be able to adopt OCP hardware “as is,” but it may be able to use 80 percent of the design the OCP community has contributed. This gives people a great starting point for a customized design without starting from scratch.

The best thing about open hardware is that it’s not all-or-nothing; it’s about delivering open building blocks and transparency so customers can modify the technology to exactly what they want, instead of being forced to adopt predefined products from closed, proprietary vendors. We see customers attracted to open source hardware for the same reasons they’ve been attracted to open source software: transparency, flexibility, and pace of innovation.

In the past, making the best hardware investments has been very difficult due to the murkiness of feature-to-price negotiations among several vendors. This is because the goal of a hardware business is to cover the largest market potential with the fewest product offerings. Current data-center solutions come loaded with features and functionality a company may not need, because they were built to serve the diverse needs of a large market. It’s a logical approach, but the result is that the margins in performance, pricing, and innovation have been shrouded in mystery.

With open source hardware, we’re able to demystify computing. We can identify the opportunities in these margins, challenge them, and reinvent technology to best suit our needs. The voices of IT leaders are galvanized, and the hardware model is flipped on its head — “if you build it this way, for our needs, we will buy it” — with the potential to improve every aspect of the modern data center.

Open source attitude

It has been said before that open source is not just a methodology; it’s an attitude. As with any major change, there are those who embrace it and those who fight it. The introduction of Linux was met with heavy skepticism, and many turned their backs on it. Yet today, Linux has been ported to more hardware platforms than any other operating system, and the companies that embraced it early became more successful than those that did not. The open source attitude is one of optimism, creativity, and positive collaboration for a common good.

The fact is that data centers around the globe are collectively not efficient enough, and data growth shows no signs of slowing down. Given that exponential growth, it has been said that the total global computing power we consume in two days today will be consumed in just 10 minutes in 2016. To handle the explosion of data we are seeing today, and will continue to see in the future, we need to harness the best minds in the world, not just the best minds under one roof — and that is open source. Join us.

Frank Frankovsky is chairman and president of the Open Compute Project, which will hold its annual OCP Summit in San Jose, Calif., January 28-29. Reach him @opencomputeprj.

This article originally appeared on Recode.net.