Put a bunch of cheap PCs together and what do you have? An economic revolution
By: Wall Street Journal
Posted: March 31, 2003 11:32 p.m.

In the world of heavy-duty computing, the big boys no longer rule.

For years, when researchers wanted a computer that could run complicated experiments, or businesses wanted a server to handle a corporate network, they turned to proprietary machines that could range into the millions of dollars.

But as personal computers became more powerful, techies realized that a bunch of small machines made from inexpensive PC components could be hooked together to do the same job as one big, expensive machine. "The cluster" was a simple idea - but it has remade the economics of the computer business, forcing hardware makers to produce cheaper product lines and saving their customers big money.

Already, wherever possible, companies are buying clusters of cheap, generic "commodity" machines instead of supercomputers or proprietary servers, the machines that do the heavy lifting in computer rooms around the world. But commodity computing is expected to grow even more popular this year, as corporate budgets shrink and companies search for ways to add computing capacity without shelling out lots of cash.

Commodity servers, among other things, allow companies to gradually add inexpensive capacity for chores such as serving up Web pages. And by combining commodity chips with ultra-compact circuit boards, called "blades," new styles of servers are helping companies reduce the floor space they need in costly computer rooms, and are saving them money on electricity and air conditioning.

Meanwhile, the biggest cost of all - skilled labor to maintain machines - is being tackled by software that helps turn commodity servers into a centrally managed resource that seldom breaks down. The software has improved so dramatically, in fact, that companies are willing to use commodity servers for their most critical operations.

Commodity computing starts with microprocessors, the chips that act as a computer's internal calculator. Makers of large computers originally designed their own chips. But proprietary circuitry increasingly gave way during the 1990s to standard chips made by Intel, which relentlessly boosted the performance and lowered the price of PC microprocessors that can be adapted for servers.

In one of the first cluster setups - a 1994 project at NASA's Goddard Space Flight Center called Beowulf - 16 PCs were combined into a $40,000 cluster, about one-tenth the price of a comparable commercial supercomputer of the day. Many other Beowulf-style machines followed. Lawrence Livermore National Laboratory, for example, built a powerful machine from 1,152 inexpensive circuit boards for about $13 million, about one-fourth to one-eighth the price of a conventional supercomputer, the lab estimates.

Overall, servers that use Intel Corp. chips - the same inexpensive processors that power most PCs - accounted for 88 percent of unit shipments in 2002, according to International Data Corp., the Framingham, Mass., market researcher. But all those Intel-based machines were so inexpensive that they accounted for only about 40 percent of the money that customers spent on servers that year.

"The cost of computing has gotten so low you can throw lots of hardware at problems," says John Humphreys, an analyst at IDC.

And the rest of the hardware associated with servers has gotten dirt cheap, too. Makers of commodity machines use circuit boards churned out by the million from factories in Taiwan and China, and even storage cabinets have become standardized. Most servers today are designed to fit into identical slots - 1.75 inches high and 19 inches wide - in a cabinet up to 80 inches high.
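
That 1.75-inch slot is what the industry calls a "1U" rack unit, and the density it implies is easy to check with a bit of arithmetic (an illustrative Python snippet, using the figures from the article):

    # Rack-density arithmetic from the article's figures.
    # The 1.75-inch slot is the industry-standard "1U" rack unit.
    SLOT_HEIGHT_IN = 1.75
    CABINET_HEIGHT_IN = 80

    slots = int(CABINET_HEIGHT_IN / SLOT_HEIGHT_IN)
    print(f"up to {slots} one-slot servers per cabinet")
    # Prints 45; real cabinets are commonly sold as "42U" once the
    # frame and clearance are deducted.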

Software is just as important. Google Inc., which may be the biggest commercial user of commodity computers, gets its rapid Web search speed by using tens of thousands of low-priced servers stacked in cabinets. At any one time, hundreds of the servers may be down for one reason or another, notes Eric Schmidt, Google's chief executive officer. But the service keeps humming at high speed because of cutting-edge software that routes computing chores to machines that are working.

"How do we have such a reliable system from such unreliable parts?" Mr. Schmidt asks. "We solve the problem through replication."

Commodity software offers another advantage: adaptability. Most cluster machines run Linux, a cousin of the Unix operating systems that server makers have long sold with their hardware. Besides being free of charge, Linux is free in a more important sense to many customers: They can tweak and modify it, and develop programs that can work on any Linux-based cluster, reducing their risk if a supplier goes out of business and no longer supports its software.

Partly for that reason, researchers have felt comfortable relying on small companies for projects like NASA's Beowulf. Los Alamos National Laboratory, for example, selected Linux NetworX Inc., of Sandy, Utah, for a 1,024-processor system, dubbed the "Science Appliance," that uses data-storage chips on each circuit board to reduce the risks of mechanical failure associated with disk drives.

Another popular approach relies on "blades," computing components that are essentially circuit boards arranged vertically, like books on a shelf, instead of stacked horizontally, as in most commodity servers. In many cases, this arrangement fits almost double the usual number of servers in a cabinet. Blade servers also save power and labor: In a conventional rack-mounted design, each circuit board requires a separate power supply and electrical plug, as well as hand-wired network connections. Blades, on the other hand, simply plug into slots that provide them with electrical and networking connections.

Though blade servers are still a small market, they are expected to take off quickly. In 2002, Mr. Humphreys of IDC estimates, 43,510 of the machines were sold world-wide for aggregate revenue of $100 million; by 2006, he predicts unit sales of nearly 1.5 million and revenue of $4 billion.
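
The arithmetic behind those projections is worth a glance: the forecast growth is almost entirely in unit volume, with the average price per system staying roughly flat (a quick check in Python, using the figures above):

    # Average price per blade system implied by IDC's figures.
    shipped_2002, revenue_2002 = 43_510, 100e6
    shipped_2006, revenue_2006 = 1.5e6, 4e9  # forecast

    print(f"2002: about ${revenue_2002 / shipped_2002:,.0f} per system")
    print(f"2006: about ${revenue_2006 / shipped_2006:,.0f} per system")
    # Roughly $2,300 vs. $2,700 - a ~34-fold jump in units, not prices.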

Much of the early momentum came from start-ups, such as RLX Technologies Inc., of The Woodlands, Texas, and Egenera Inc., of Marlboro, Mass. Vern Brownell, Egenera's chief executive officer, says he wanted to help cut the kinds of costs he faced as chief technology officer of Goldman Sachs & Co. Companies like Goldman, he says, buy more machines than they need during normal conditions so they can handle spikes in demand from users. Moreover, it often took Goldman one to four weeks to get a new server up and running, because of chores such as configuring networking boards and wiring components together.

So, much of the time, a lot of a company's hardware sits idle. "We found that the average utilization of a server complex was maybe 10 percent," Mr. Brownell says. "You felt like you were leaving a lot of money on the table."

Egenera combines blades with special software that makes it easier to set up and maintain servers. Computer technicians can install operating systems, change networking connections or switch from a failing blade to a backup with a few keyboard commands, rather than plugging in wires or new boards on the back of a console. In some cases, customers can shift work to a new blade in less than three minutes, compared with the weekend that might be required to shift to a backup server in a conventional installation, Mr. Brownell says.

In another trick, Egenera customers can designate servers running low-priority jobs, such as human-resources records, to become backup servers at a moment's notice for more important operations, eliminating the problem of idle, redundant computer installations.
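
Egenera's software is proprietary, but the underlying pattern - logical servers decoupled from any particular physical blade - can be sketched roughly as follows (a hypothetical illustration in Python, not Egenera's actual design):

    class BladePool:
        """Logical servers mapped onto interchangeable physical blades."""

        def __init__(self, blades):
            self.spares = list(blades)  # idle physical blades
            self.assigned = {}          # logical server -> physical blade

        def deploy(self, name):
            """Bring a logical server up on a spare blade - no rewiring."""
            self.assigned[name] = self.spares.pop()

        def fail_over(self, failing_name, donor_name=None):
            """Replace a failing blade with a spare, or - if none is
            free - draft the blade running a low-priority job (donor)."""
            if self.spares:
                replacement = self.spares.pop()
            else:
                replacement = self.assigned.pop(donor_name)  # preempt it
            self.assigned[failing_name] = replacement
            return replacement

    pool = BladePool(["blade-1", "blade-2"])
    pool.deploy("trading-app")
    pool.deploy("hr-records")
    # No spares left: the HR blade is drafted as the trading app's backup.
    print(pool.fail_over("trading-app", donor_name="hr-records"))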

Big companies also have jumped on the blade bandwagon. Hewlett-Packard Co., in fact, is No. 1 in the market for blade systems, and International Business Machines Corp. recently entered the field. Sun Microsystems Inc. has developed blade servers that can simultaneously use blades based on Intel chips or its own Sparc microprocessors, as well as both Linux and Solaris, its proprietary variant of the Unix operating system.

Egenera and other blade vendors often offer some sort of software for "virtualization," a broad term for managing computing resources in ways that are divorced from physical arrangements of hardware. In clustering, the approach relates to controlling multiple servers or disk drives from a central console. Another form of virtualization is important in commodity computing - partitioning a single server so multiple programs can run on it. VMware Inc., a start-up in Palo Alto, Calif., sells software that helps run many copies of Linux or Windows on servers, another tool in consolidating jobs and improving utilization rates.
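
The partitioning itself is the hard part, and products like VMware's handle it; the payoff, though, is simple packing arithmetic. A minimal sketch, assuming ten workloads that each use about 10 percent of a machine (Mr. Brownell's utilization figure):

    def consolidate(workloads, capacity=1.0):
        """First-fit packing of workloads, each given as a fraction of
        one server's capacity, onto as few servers as possible."""
        servers = []  # load already placed on each server
        for load in workloads:
            for i, used in enumerate(servers):
                if used + load <= capacity:
                    servers[i] += load
                    break
            else:
                servers.append(load)
        return servers

    loads = [0.10] * 10          # ten jobs, each ~10% of a machine
    packed = consolidate(loads)
    print(f"{len(loads)} servers before, {len(packed)} after")
    print(f"utilization rises from 10% to {packed[0]:.0%}")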

Most servers are used by employees who tap into them from PCs. ClearCube Technology Inc. is using the blade approach to deliver desktop computing in an entirely different way. The Austin, Texas, start-up has devised a way to locate the guts of each PC, including the microprocessor and hard drive, on a blade in a server chassis. The monitor and keyboard on a user's desk plug into a device about the size of a paperback book that is wired back to an associated blade in a wiring closet or computer room.

Shifting work from desktops to servers has been discussed since the mid-1990s. But performance problems limited the appeal of that approach. Users accustomed to the power of a dedicated PC and its associated software rebelled at moving to a lower-function device that shares resources with others, says Mike Frost, ClearCube's president and chief executive officer. With ClearCube's arrangement, every worker still has his or her own computer; the guts of the machine are simply located in another spot.

With his company's hybrid approach, computer technicians can often set up a new desktop user with a few clicks of a mouse, and shift a user from a balky blade to a backup device without ever having to visit the user's cubicle. Delivering and replacing desktop systems is often one of the most costly and time-consuming parts of supporting a network of computers, Mr. Frost says.

ClearCube doesn't offer much of a cost advantage up front. Its blades cost $1,200 to $1,800 each, and the desktop-connection device costs an additional $200; companies can often buy complete PCs for less these days. But maintenance costs usually outweigh hardware costs. Moreover, desktop computers pose security risks in some environments, such as defense installations, because floppy or CD drives enable users to take away sensitive information or introduce suspect programs.

A group of doctors in downtown Chicago, Northwestern Memorial Physician's Group, saw another reason to go with the ClearCube approach. The group wanted to dispense with the venerable system of having paper charts follow patients between desks and exam rooms. But installing a PC in each room, with its noisy fan and the prospect of technicians coming in to service it, was a major obstacle. So the group has outfitted 50 exam rooms with ClearCube systems, and expects to equip a total of more than 200.

"Culturally, to have technicians running around 200 or so exam rooms in the event of failure is risky business," says Guy Fuller, the group's manager of information technology. "Keep the technicians at their desk - that's the idea."