January 11, 2010
Computer Servers Cheap Compared To Electricity

The challenge in computing has shifted from making computers faster to making them do more work per unit of energy used. In some server farms hardware no longer costs as much as the electricity needed to run it.

Over the next couple of years, balancing performance, reliability and energy will grow trickier because of a shift in data center economics. It's expected that at least half of the Fortune 2000 companies will spend more on electricity than on purchasing new hardware by about 2010, according to Hewlett-Packard executives.

I picture a future with lots of server farms combined with solar photovoltaic installations straddling the equator in low-cloud, high-insolation regions. Since the computers will cost less than the electric power, fiber optic cables connecting these server farms could shift compute jobs around the planet as the Earth spins through its 24-hour day.
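To make the idea concrete, here's a minimal sketch (Python, with made-up site names and a crude longitude-to-solar-time heuristic - an illustration, not anything anyone actually runs) of a dispatcher that sends batch jobs to whichever equatorial farm currently has the sun closest to overhead:

    from datetime import datetime, timezone

    # Hypothetical equatorial, solar-powered server farm sites, keyed by longitude (degrees east).
    SITES = {"quito": -78, "nairobi": 37, "singapore": 104}

    def local_solar_hour(longitude_deg, now_utc):
        # Approximate local solar time: UTC shifted by one hour per 15 degrees of longitude.
        return (now_utc.hour + now_utc.minute / 60 + longitude_deg / 15) % 24

    def sunniest_site(now_utc=None):
        # Pick the site whose local solar time is closest to noon, i.e. the most PV output.
        now_utc = now_utc or datetime.now(timezone.utc)
        return min(SITES, key=lambda s: abs(local_solar_hour(SITES[s], now_utc) - 12))

    print("Dispatch batch jobs to:", sunniest_site())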

That future won't happen until photovoltaic costs fall by another order of magnitude and computer electric power usage climbs even higher.

Currently Google locates server farms closer to customers in order to minimize latency on query responses. But compute jobs that aren't interactive (e.g. big climate or engineering design simulations) don't need that proximity to users. The biggest obstacle to shifting them around might be the size of their datasets. Will dataset size block simulations from following the sun around the clock? Or will fiber optic transmission capacity be so cheap that shifting jobs from server farm to server farm several times a day won't pose any cost problem?
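As a rough feel for the dataset question, a back-of-the-envelope sketch (the dataset size, link speed, and utilization below are assumptions, not measurements) of how long it takes to ship a simulation's working set over a long-haul link:

    def transfer_hours(dataset_terabytes, link_gbps, utilization=0.8):
        # Hours to move a dataset over a long-haul link at a given sustained utilization.
        bits = dataset_terabytes * 8e12            # terabytes -> bits
        usable_bits_per_sec = link_gbps * 1e9 * utilization
        return bits / usable_bits_per_sec / 3600

    # A hypothetical 50 TB climate-model dataset:
    print(f"{transfer_hours(50, 10):.1f} hours at 10 Gbps")    # ~13.9 hours
    print(f"{transfer_hours(50, 100):.1f} hours at 100 Gbps")  # ~1.4 hours

At the slower rate the data couldn't keep up with migrating a few times a day; at the faster rate it plausibly could.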

Of course, if 4th generation nuclear reactors end up being cheaper than PV 20 years from now, each server farm will probably just continue to run 24x7.

Randall Parker, 2010 January 11 09:30 PM  Energy Electric Users


Comments
Jerry Martinson said at January 11, 2010 10:17 PM:

It's cheaper to save power in computers by using it more efficiently than by switching to solar power. The x86 architecture isn't as efficient per computation as some others, such as the more multiscalar, multithreaded MIPS designs. There are other ways to save power too, but generally, the closer you are to where the real hedonic value is created, the more economical it is to save power.

Although this has gotten a lot more attention recently, there is still little incentive for the EEs designing the products to save power except when it becomes a major design constraint like the thermal envelope or battery performance. It is so sad - the invisible hand doesn't work here. That's why Energy Star and the occasional mandate here and there can help, even though gov't intervention sucks.

Bruce said at January 11, 2010 11:18 PM:

Coal is cheap. And the USA has vast quantities. Microsoft just opened a datacenter in Chicago (Illinois is the #2 coal-producing state) in June.

Nick G said at January 12, 2010 1:37 PM:

I agree with Jerry - efficiency is by far the cheapest strategy, and up until recently it was pretty neglected.

Lately, I see quite a lot of discussion in CIO literature about reducing data center power costs.

Bruce said at January 12, 2010 1:53 PM:

Google uses custom-built cheap PCs with DC power supplies.

http://insidehpc.com/2009/04/02/google-unveils-its-super-secret-server-design-dc-and-batteries-built-in/

But big datacenters can still save a ton of money by picking a cheap-electricity state over an expensive one like California. Wyoming (for example) has electricity at half the cost of California thanks to coal: ~6 cents/kWh vs. ~13 cents/kWh in California.

Of course tax breaks and construction costs and a bunch of other things factor in.
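A rough back-of-the-envelope on that rate gap, assuming a hypothetical 10 MW facility drawing a constant load (the facility size is an assumption; the 6 and 13 cents/kWh figures are the ones above):

    def annual_power_cost_dollars(facility_mw, cents_per_kwh):
        # Annual electricity bill for a facility drawing a constant load 24x7.
        kwh_per_year = facility_mw * 1000 * 24 * 365
        return kwh_per_year * cents_per_kwh / 100

    wyoming = annual_power_cost_dollars(10, 6)       # ~$5.3 million/yr
    california = annual_power_cost_dollars(10, 13)   # ~$11.4 million/yr
    print(f"Savings: ${california - wyoming:,.0f} per year")   # ~$6.1 million/yr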

LarryD said at January 12, 2010 5:30 PM:

Server consolidation has been a big topic for years, and cutting the expense of electrical power has been a major driver. It's not a new thing. Virtualization is one major technique: running multiple virtual machines on one box. IBM has made a ton of money since Linux was ported to its mainframes; ISPs can save big bucks by replacing thousands of servers with one mainframe, which not only takes less power but is more reliable too.

Google is an odd case: their servers all serve the same distributed applications, so it's cheaper for them to just add more servers than to fix any that go down. I imagine that when enough servers in a module go down, they just power down the entire module.

anonyq said at January 12, 2010 6:10 PM:

Chips need to be cooled, and that is cheaper/easier in more temperate regions.

Jerry Martinson said at January 12, 2010 7:23 PM:

Google has made solid progress on certain aspects of data center efficiency, but Google still uses x86s. I'm sure it's a difficult sell internally within Google to switch, because it's another risk and schedule hit to compile for a different CPU - and it might be more difficult if they are monkeying with the OS themselves a bit.

Why it matters: from a hardware perspective some of the simpler instruction sets are much easier to multithread and multicore than the more complex x86s, and there's a little less legacy involved if you're trying to introduce new instructions that allow multiscalar computing to work better. The x86 evolved from the late 1970s and has spent a significant amount of its lifetime being optimized for latency, whereas other instruction-set CPUs have not and have more flexibility to be optimized for throughput, which is what Google mostly needs. I've run experiments where these other machines can get 5x the throughput on typical server workloads using 1/2 the power of the best hyperthreaded x86, but I suspect the gap has narrowed since then to perhaps 3x. The problem is that it's a race that also depends on other things. Intel and AMD can target the latest expensive process tech for the x86 because of the huge size of the x86 market, whereas the more custom but better chips are typically a generation of Moore's Law behind.

If you can get 3x as many instructions/lookups/etc. on a more threaded architecture than on the typical x86s, you can get away with one third as many computers and 1/6th the thermal footprint.

Using physically larger, consolidated AC-mains-to-loosely-regulated-12V converters saves power too - I don't know, perhaps 20%. Improving the HVAC and system cooling so it's integrated with the chillers saves perhaps another 20%. This is my impression of what Google is doing. But cutting the power of the CPU boards by a factor of 6 dwarfs this.
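Putting rough numbers on why the CPU factor dwarfs the rest (using the 3x-throughput-at-1/2-power figures above, and treating the two ~20% facility-level savings as independent multipliers - all illustrative, not measured):

    # Power to serve a fixed workload, normalized to an all-x86 facility = 1.0.
    baseline = 1.0

    # 3x throughput per box at 1/2 the power per box -> 1/3 the boxes, each at half power.
    cpu_swap = baseline * (1 / 3) * (1 / 2)           # = ~0.17 of baseline

    # Versus stacking the ~20% power-conversion and ~20% HVAC savings on the baseline.
    conversion_and_hvac = baseline * 0.8 * 0.8        # = 0.64 of baseline

    print(f"CPU architecture swap:     {cpu_swap:.2f}x baseline power")
    print(f"Conversion + cooling only: {conversion_and_hvac:.2f}x baseline power")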

Randall Parker said at January 12, 2010 9:23 PM:

Jerry,

The huge size of the x86 market means a lot more engineering resources are available to develop x86 designs. Which processor architecture would work better for massive server farms? Keep in mind that a lot of the RISC architectures have lagging designs on older fabrication processes.

I've wondered whether Linux ARM could compete in server applications. Certainly lots of developers use ARM for embedded. I do Linux ARM work. But I've never looked at upper end ARM processors to see how they stack up against x86.
