Found this super-interesting article at one of my favorite websites, Data Center Knowledge: Facebook Now Has 30,000 Servers
Some of the interesting points are:
- 30,000 servers
- Added 20,000 servers in 18 months
- Stores 80 billion images (20 billion unique photos, stored in 4 sizes each)
- 600,000 photos served per second
- 25TB log data generated daily
- 230 engineers
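Those figures invite some quick back-of-envelope math. Here's a small sketch using only the numbers quoted above; the per-server and per-engineer ratios assume a naive even split across the fleet, which is certainly not how the servers were actually organized:

```python
# Back-of-envelope arithmetic on the figures quoted above.
# Illustrative only: the per-server and per-engineer ratios
# assume an even split, which is not how a real fleet works.

SERVERS = 30_000
SERVERS_ADDED = 20_000
MONTHS = 18
UNIQUE_PHOTOS = 20e9      # 20 billion unique photos
SIZES_PER_PHOTO = 4       # 4 stored sizes each
PHOTOS_PER_SEC = 600_000
ENGINEERS = 230

stored_images = UNIQUE_PHOTOS * SIZES_PER_PHOTO   # 80 billion stored images
growth_per_month = SERVERS_ADDED / MONTHS         # ~1,111 servers/month
photos_per_server = PHOTOS_PER_SEC / SERVERS      # ~20 photos/sec (naive)
servers_per_engineer = SERVERS / ENGINEERS        # ~130 servers/engineer

print(f"{stored_images:.0e} stored images")
print(f"{growth_per_month:.0f} servers added per month")
print(f"{photos_per_server:.0f} photos/sec per server (naive even split)")
print(f"{servers_per_engineer:.0f} servers per engineer")
```

Roughly a thousand new servers a month, and well over a hundred servers per engineer.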
You can view the CNS 2009 Lecture Series webcast in which Jeff Rothschild (facebook.com/jeff, of course), VP of Technology at Facebook, presented this information. The post High Scalability: High performance at massive levels – lessons learned at Facebook discusses the presentation and adds some more tidbits.
I’m glad I only have 400 servers.
Occasionally I am interviewed for articles about VDI or virtualization in the healthcare field, but this is the first time I have been asked to contribute to an article myself!
Nothing inspires debate among IT managers like the question of which server hardware platform to choose for a virtualization deployment. On one hand, some organizations opt for generic rack servers, which typically carry a lower entry cost and do not require any modifications to a data center's power supply.
On the other hand, some IT managers feel that the centralized management console offered by blade servers is a major benefit, and that the integrated blade enclosure provides power, cabling, and infrastructure efficiencies that anyone grappling with cramped data center quarters cannot afford to pass up.
In this face-off, two seasoned IT professionals and virtualization architects debate rack vs. blade servers, explaining the benefits of each architectural choice in a virtual environment.
Rick Vanover: Server racks the way to go
Chris House: Blade servers always win
via Blades vs. rack servers for virtualization. (Free registration required)
Rick makes some good points about why rack-mount servers may be a better choice for a virtualization platform, but I'm sticking with blades for sure. We have already absorbed the start-up cost of purchasing enclosures and infrastructure components, and if we went with rack-mount servers, we'd have to fit a dozen or so more racks into the data center, all of which would add heat and consume power.
As with any infrastructure choice, your mileage may vary: a cost/benefit analysis must be done to see which solution is more financially appropriate given the initial and ongoing costs, as well as growth opportunities.