“It’s time to stop
treating data center design like Fight
Club,” said Jonathan Heiliger, “and demystify the way these things are
built.”
It was April 2011,
and Heiliger — the man who oversaw all the hardware driving Facebook’s online
empire — was announcing the creation of something Facebook called the Open Compute Project. As Google,
Amazon, and other online giants jealously guarded the technology inside their
massive computing facilities — treating data center design as the most important
of trade secrets — Heiliger and Facebook took the opposite tack, sharing their
hardware designs with the rest of the world and encouraging others to do the
same. The aim was to transform competition into collaboration, Heiliger said —
to improve computer hardware using the same communal
ethos that underpins the world of open source software.
Some saw it as little more than a publicity stunt. Others
bemoaned the comparison to open source software, arguing that Facebook’s designs
“weren’t as open” as this would imply. But less than two years later, the Open
Compute Project has lived up to Heiliger’s billing — and then some.
Last week, at the
Open Compute Project’s latest public get-together, Facebook
donated a
host of new hardware designs to the project, as it continues
to overhaul the gear that typically drives a data center. But this is only
half the story. Two hours later, Rackspace — the Texas outfit that’s second
only to Amazon in the cloud computing game — revealed that it has followed in
Facebook’s footsteps, designing its own data center servers, and yes, it will
donate these designs to the world at large.
After Facebook
opened the curtain on its hardware operation, showing how it had significantly
cut costs with a new breed of slimmed-down gear purchased directly from
manufacturers in Asia, Rackspace was inspired to do the same. But it didn’t
just mimic Facebook’s designs. Those weren’t suited to its particular operation.
It took those designs in a new direction.
“We basically iterated on the Facebook design,” says Wesley
Jess, vice president of supply chain operations at Rackspace, who oversees the
team that designed the company’s new servers. “Our tenet is to repurpose all the
testing and the good work that Facebook has already done.”
For Frank Frankovsky — who oversees the hardware operation at
Facebook — this is just the thing the Open Compute Project was meant to foster.
“It’s about empowering the end user to take control of their infrastructure
design,” he says, “to evaluate for yourself what’s best for your
infrastructure.”
Facebook didn’t
just share its hardware designs. It shared its story — and that’s just as
valuable, if not more so. In fashioning its own servers, Facebook worked in
tandem with a wide range of parts suppliers, various server manufacturers in
Asia, and a “system integrator” that puts the
final pieces together at a warehouse in Northern California. Rackspace has
set up a similar supply chain, though its list of partners is slightly
different. Like Facebook, Rackspace is working with Asian manufacturers Quanta
and Wistron to build its gear, but whereas Facebook works with Hyve, an
integrator in Northern California, Rackspace will use Quanta as an integrator —
and possibly others.
The basic arrangement may seem simple, but for years, this
sort of custom server work was cloaked in mystery — and more than a little FUD.
Google and Amazon have also bypassed big-name American server makers such as
Dell, IBM, and HP, going directly to more nimble manufacturers, but they’re
loath to discuss the particulars, and many of the tech world’s entrenched
hardware makers have painted this shadow market as a place suited only for
someone of Google’s size — if they talk about it at all.
Yes, Google is bigger than most web operations. But the rest
of the web is always growing, and Rackspace has shown that a second-tier
operation now has the volume — as well as the talent — needed to customize its
data center hardware. According to Jess, Rackspace is underpinned by about
89,000 servers, and the company designed its new gear — including a server, a
storage device, and a new rack that can house them both — with a team
of two to three engineers. The team is so small, Jess says, at least in part
because it’s leaning on work that’s already been done by Facebook and other
members of the Open Compute Project.
“I don’t think it’s been a huge hassle for us,” he says.
“Everybody used to have to do it by themselves. You had to come up with your own
test scripts. You had to come up with your own work — and that was a lot of
effort. But if you’re working in a community effort, a lot of that stuff is
shared.” Whenever he wants to, Jess explains, he can phone Frankovsky and others
familiar with Facebook’s particular operation.
Much like Facebook,
Google, and Amazon,
Rackspace will directly negotiate contracts with parts suppliers, including
big-name processor makers Intel and AMD. This sort of thing isn’t widely discussed —
at least not in the press — but Facebook has broken the code of silence, and
this has allowed companies like Rackspace to follow. In another echo of
Facebook, Rackspace won’t actually acquire chips from the likes of
Intel and AMD — the chips will move through the system manufacturers — but it
will negotiate its own prices.
All this is not to say that Dell, HP, and IBM are done selling
servers. These tech giants still sell massive amounts of server gear, and Dell
and HP in particular have worked to refashion their server businesses so that
they can better serve the large web players and other massive online operations.
Jimmy Pike — the director of system architecture for Dell’s Data Center
Solutions business — tells us that DCS still counts one of the biggest web
players among its customers. And this only makes sense. In buying servers,
the big web players want options — as many as possible.
With Open Compute, Facebook has increased those options
severalfold. It’s not just that someone like Rackspace can buy machines from
several manufacturers. It can purchase individual parts from multiple suppliers.
According to Jess, this is particularly useful to a cloud outfit because its
operation can grow so quickly. When you grow, you need more hardware. And if you
have multiple suppliers, that hardware is easier to come by — and it’s cheaper.
Jess uses the word “flexibility” over and over again.
As Rackspace and
others feed the Open Compute Project, Jess and Frankovsky say, those options
will continue to expand. If you aren’t prepared to design your own gear, you’ll
have the option of buying existing Open Compute designs — or similar designs —
from the likes of Hyve and Quanta.
If you use these designs, Jess explains, you can even make the model work with a
relatively small operation. You’ll be riding on the big volumes already created
by the likes of Facebook and Rackspace.
Yes, this model is
a little different from open source software, where it’s so much easier to share
and modify what you’ve created. But Facebook and Rackspace are still sharing and
modifying — and that’s the crux of the matter. Rackspace has already remade its
cloud services with open source software — the OpenStack
platform — and now it’s doing much the same with what you can rightly call
open source hardware.
“Both let users influence the technology that gets built,”
says Jonathan Bryce, a former Rackspace employee who helped bootstrap OpenStack
and now serves as the executive director of the OpenStack Foundation. “They give
people alternatives they didn’t necessarily have a decade ago.”