LinkedIn is following in the footsteps of Google, Facebook and Microsoft by building hyperscale data center infrastructure. Its approach is to design custom hardware, software and data center infrastructure tailored to its computing needs. The company has also decided to source hardware directly from original design manufacturers, a procurement model that bypasses the leading big IT vendors such as HP, Dell and Cisco.
LinkedIn is applying its hyperscale approach for the first time at its new data center outside Portland. The facility, which LinkedIn leases from Infomart Data Centers, features custom electrical and mechanical design, as well as custom network switches.
LinkedIn has designed the server farm so that the company can scale from running on tens of thousands of servers to running on hundreds of thousands of servers.
Over time, LinkedIn's other data centers, located in California, Texas, Virginia and Singapore, will transition to the new hyperscale infrastructure as well.
Revealing more details about its hyperscale adoption, LinkedIn says the new infrastructure includes its own 100 Gigabit switches and a scale-out data center network fabric. LinkedIn plans to use the same switch in all of its data centers in the future.
As of now, the company runs a mix of whitebox switches designed to its specifications and off-the-shelf switches from the big well-known vendors.
Presently, LinkedIn has yet to design its own servers the way other hyperscale operators like Facebook and Google do. It plans to buy servers from original design manufacturers this year, though that may change for the next-generation deployment, which will take place in 2018.
LinkedIn’s Oregon data center will have 96 servers per cabinet. Density is slightly below 18 kW per cabinet today, but the cooling design will allow densities of up to 32 kW per rack in the future.
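A quick back-of-the-envelope calculation puts those cabinet figures in per-server terms. This is a sketch based only on the numbers reported above (96 servers per cabinet, roughly 18 kW today, up to 32 kW later); the real per-server draw will vary with configuration and load.

```python
# Rough average power budget per server in a fully loaded cabinet,
# using the figures reported for LinkedIn's Oregon facility.
SERVERS_PER_CABINET = 96

def watts_per_server(cabinet_kw: float, servers: int = SERVERS_PER_CABINET) -> float:
    """Average watts available per server if cabinet power is split evenly."""
    return cabinet_kw * 1000 / servers

print(watts_per_server(18))  # today's ~18 kW cabinet: 187.5 W per server
print(watts_per_server(32))  # future 32 kW cabinet: ~333 W per server
```

Even at the future 32 kW ceiling, the per-server budget stays in commodity-server territory, which helps explain why such dense cabinets are feasible with door-based cooling rather than exotic designs.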
For comparison, according to Facebook’s VP of Infrastructure, the average density in Facebook’s data centers is about 5.5 kW per rack.
To cool this kind of density, LinkedIn is using heat-conducting doors on every cabinet, making each cabinet its own contained ecosystem. There are no hot and cold aisles like you would find in a typical data center. Now that’s interesting!
The decision to use a high-density design was made after a detailed analysis of server, power, and space costs, which showed that high density was the most cost-effective route for LinkedIn. Because the social networking giant uses leased data center space, it faces space and power restrictions. For the same reason, LinkedIn opted out of Open Compute Project hardware, which is not designed for standard data centers and data center racks.
By the end of this year, LinkedIn plans to make some of this infrastructure publicly available via OCP or another avenue, meaning the business-oriented social networking service is ready to share its hardware and software developments.