Facebook is exploring a technology that controls temperatures in the data center by automatically moving software workloads among servers according to the air pressure on either side of each machine.
As noticed by Data Center Knowledge, the technology is laid out in a Facebook patent application recently released to the world at large.
Filed in 2010 by Facebook engineers Amir Michael and Michael Paleczny, the patent application describes a system that uses a load balancer to shift tasks among servers based on their particular "cooling needs," which are related to air pressure in the data center. The idea is to keep each server running as hot as it can without malfunctioning. The higher the temperature, the more you save on cooling costs. "Determining the cooling needs beforehand avoids spikes in server temperature, thereby enabling the servers to operate safely at a temperature closer to their maximum rated temperatures," the application reads.
It's unclear whether Facebook is actually using the system -- the company did not immediately respond to a request for comment -- but this "intelligent" cooling system is indicative of a larger trend across the industry. Google, for instance, has built a platform called Spanner that automatically shifts server loads when data center temperatures reach certain thresholds. And over the years, outfits such as HP, Opengate Data Systems, and SynapSense have sold various tools that use temperature and pressure sensors to help cool hardware inside the data center.
Facebook's system is designed for data centers that cool server racks by placing them between a "cold aisle" and a "hot aisle." In essence, a difference in pressure between the two aisles causes the air from the cold aisle to move across the servers and push server heat into the hot aisle, where it's then expelled from the data center. With Facebook's system, sensors monitor the pressure difference between the two aisles and assign server workloads accordingly. At a given pressure difference, a server can only handle a certain workload without exceeding a particular temperature threshold, and Facebook's system attempts to keep the servers as close to this threshold as possible.
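The patent application doesn't include code, but the basic idea -- map a measured pressure differential to a safe workload ceiling for each server, then place new work against that ceiling -- is easy to sketch. The snippet below is a rough illustration, not Facebook's implementation: the server names, the numbers, and the `max_safe_load` mapping are all invented for the example.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    pressure_diff_pa: float    # measured cold-aisle minus hot-aisle pressure (pascals)
    current_load: float = 0.0  # fraction of capacity already in use

    def max_safe_load(self) -> float:
        """Hypothetical mapping from pressure differential to the workload this
        server can absorb without crossing its temperature threshold:
        more differential -> more airflow across the box -> more allowable load."""
        return min(1.0, 0.5 + 0.05 * self.pressure_diff_pa)

    def headroom(self) -> float:
        return self.max_safe_load() - self.current_load


def assign(task_load: float, servers: list[Server]) -> Server | None:
    """Place a task on the server with the most thermal headroom, keeping every
    machine just under its pressure-derived ceiling."""
    candidates = [s for s in servers if s.headroom() >= task_load]
    if not candidates:
        return None  # no server can take the task without risking overheating
    target = max(candidates, key=Server.headroom)
    target.current_load += task_load
    return target


if __name__ == "__main__":
    racks = [
        Server("web-01", pressure_diff_pa=8.0, current_load=0.70),
        Server("web-02", pressure_diff_pa=4.0, current_load=0.30),
        Server("web-03", pressure_diff_pa=9.0, current_load=0.20),
    ]
    chosen = assign(task_load=0.25, servers=racks)
    print("task placed on:", chosen.name if chosen else "nowhere")
```

In this toy run the task lands on web-03, the machine with the largest gap between its current load and the ceiling implied by its pressure reading -- the same "as close to the threshold as possible, but not over it" behavior the filing describes.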
Yes, the system will also vary workloads according to the design of the servers. And it can be used to control a data center's central fans so that they maintain a particular pressure differential between the cold aisle and the hot aisle.
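That second use amounts to a simple feedback loop: read the differential, nudge the central fans up or down until it sits near a target. Here is a minimal sketch of one control step, assuming a plain proportional controller; the target, gain, and speed range are made-up figures, not values from the filing.

```python
TARGET_DIFF_PA = 6.0   # desired cold-aisle minus hot-aisle pressure (made-up figure)
GAIN = 0.02            # proportional gain (made-up figure)


def adjust_fan_speed(current_speed: float, measured_diff_pa: float) -> float:
    """One step of a proportional controller: speed the central fans up when the
    differential falls below target, slow them when it rises above it, clamped
    to the 0..1 range a variable-speed drive would accept."""
    error = TARGET_DIFF_PA - measured_diff_pa
    return min(1.0, max(0.0, current_speed + GAIN * error))


# Example: the aisles are under-pressurized, so the fans speed up slightly.
print(adjust_fan_speed(current_speed=0.50, measured_diff_pa=4.5))  # roughly 0.53
```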
The system is also meant to reduce the need for miniature fans inside the servers themselves -- or remove the need for them entirely. This can save money: fewer fans means less power. Facebook is still using such on-board fans with the servers it designed for its massive data center in Prineville, Oregon, but the company is perpetually rethinking its custom server designs, with the help of a larger community of hardware engineers. The goal is not only to reduce the power consumption of the servers, but also to strip out any unnecessary hardware.