Spring 2011 Forum
“Turning Data Center Energy Hogs Around”
tips and tricks you'll want to know to better manage data centers
March 18, 2011
Pringle Creek Community Center, Salem
In case you missed it – APEM 2011 Spring Forum Recap “Energy Conservation Opportunities in Data Centers”
Our Spring Forum presented technical information on energy conservation strategies for existing data centers, a look at the future of data centers, two case studies of new highly efficient data centers, and an update from the ETO (Energy Trust of Oregon) on incentives for data centers. The day concluded with a tour of the State Data Center, where we not only toured the mechanical rooms but also walked the hot and cold aisles on the data center floor.
John Pappas of Mazetti Nash Lipscomb Birch presented detailed information about energy conservation strategies in existing data centers. John has been a mechanical designer for over thirty years, and he was recently hired by the ETO to educate design engineers in Oregon about how to design high-efficiency data centers. He presented detailed results of CFD (computational fluid dynamics) modeling showing how air often short-circuits inside data centers, resulting in inefficient cooling systems. Cooling systems are designed to operate with a 20–25 F delta (the difference between the supply temperature and the return temperature), but due to short-circuiting of the air they often operate with only a 10 F delta. Some cold air passes through the front side of a computer rack, where it picks up heat, and then exits through the back side of the rack. Often this warm air circles back around to the front and re-enters the rack at a different spot, leading to "hot spots" in the rack, which are then typically remedied by lowering the supply air temperature or adding more cooling and fans. Another symptom of poor airflow is cold air that enters the room and then leaves it without ever passing through a computer rack. The efficient remedy is to block the air from short-circuiting: force all the cool air through the rack exactly once, then collect the warm air and send it back to the air conditioning unit without mixing it with cold air. John showed us how the ceiling plenum can be converted into a return air plenum, with the CRACs (computer room air conditioners) ducted to the ceiling, as one method to minimize short-circuiting and improve airflow distribution.
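The cost of a collapsed delta-T can be sketched with the common sensible-cooling rule of thumb for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (F). The 100 kW IT load below is a hypothetical example of mine, not a figure from the talk:

```python
# Sketch of why a collapsed delta-T is expensive, using the common
# sensible-heat rule of thumb for air: Q (BTU/hr) = 1.08 * CFM * delta_T (F).
# The 100 kW IT load here is a hypothetical example.

BTUH_PER_KW = 3412  # 1 kW of electrical load ~ 3,412 BTU/hr of heat

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove the load at a given supply/return delta-T."""
    return it_load_kw * BTUH_PER_KW / (1.08 * delta_t_f)

design = required_cfm(100, 20)   # healthy 20 F delta, as designed
actual = required_cfm(100, 10)   # short-circuited air, only a 10 F delta

print(round(design))  # ~15,800 CFM
print(round(actual))  # ~31,600 CFM -- halving the delta doubles the airflow
```

In other words, when short-circuiting cuts the delta from 20 F to 10 F, the fans must move twice as much air to reject the same heat.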
Another energy-saving strategy is to use outside air for cooling. When a data center is laid out with isolated hot and cold aisles, the supply air only needs to be as low as 70 or 75 F to provide adequate cooling, and 70–75 F air can be provided over 90% of the year in the Willamette Valley with outside air economizers. Adding direct and/or indirect evaporative cooling to the economizer extends that to about 95% of the year, leaving compressors to run for cooling only about 5% of the year. John spoke of a company in California for which he recently designed a system where the economizer plus direct and indirect evaporative cooling provides adequate cooling for all but 36 hours of a typical year; the owner decided they could live without mechanical cooling for those 36 hours, saving the first cost of installing a mechanical cooling system. see his presentation
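Those percentages translate into annual hours roughly as follows (a back-of-the-envelope sketch; the 8,760-hour year and the rounding are mine, not the presenter's):

```python
# Rough annual-hours breakdown for the cooling modes described above.
HOURS_PER_YEAR = 8760

economizer_only = 0.90 * HOURS_PER_YEAR          # outside air alone
with_evaporative = 0.95 * HOURS_PER_YEAR         # economizer + evaporative
mechanical = HOURS_PER_YEAR - with_evaporative   # compressors must run

print(round(economizer_only))   # 7884 hours
print(round(mechanical))        # 438 hours (~5% of the year)

# The California example: all but 36 hours covered without compressors.
print(round(36 / HOURS_PER_YEAR * 100, 2))  # 0.41 -- percent of the year
```

Seen this way, the California owner's decision amounts to forgoing a chiller plant that would run less than half a percent of the year.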
Steve Knipple of EasyStreet Online Solutions provided a case study of the Hillsboro data center they opened in 2011. Their data center floor is laid out with racks that incorporate hot aisle containment chimneys, and they do not use a raised floor. The cool supply air enters the room and the racks at around 70 F, much higher than in most data centers, which are designed with 55 F supply air. The 70 F air enters the racks and is exhausted through the chimneys, which eliminates any short-circuiting and guarantees that all the air entering the racks is around 70 F. They monitor the exhaust temperatures to ensure that they stay below 115 F, the maximum temperature recommended by the manufacturers of the computer systems inside the racks. The rooftop air handlers supply the 70 F air using direct and indirect evaporative cooling, producing cooling without the need for mechanical cooling over 90% of the year. The building collects rainwater and stores it for use in the evaporative cooling systems. This facility also uses virtualized systems, which typically combine 30 to 40 traditional servers onto one "virtualized" server through the use of sophisticated software. see his presentation
Brandon Adams of McKinstry presented a case study of a data center they recently expanded in downtown Seattle. The existing facility was electrically constrained, so they could not add mechanical cooling for the expanded data center. This led them to innovate and design the expansion to be cooled without any mechanical cooling. The design worked so well that they removed the mechanical cooling from the existing data center as well. Cooling for the entire data center is now provided by airside economizers and direct and indirect evaporative cooling systems. see his presentation
A networking activity was held during a break to give the audience and speakers a chance to get to know one another. Will Miller of PGE won the prize for meeting the most members during this activity.
Jonny Holz spoke on behalf of the ETO to educate the audience about energy incentives that are available from the ETO specifically for data centers. The ETO offers a $350 incentive for each server that is virtualized, as long as the owner virtualizes a minimum of ten servers at a time. The ETO also offers incentives for owners to install PC software that turns monitors off and puts desktop computers to sleep when they are idle. Both the State Data Center and EasyStreet Online Solutions applied for and received incentives from the ETO when building and retrofitting their data centers.
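As a quick sketch of how the server-virtualization incentive adds up (the function name and the 30-server example are mine; the $350-per-server rate and ten-server minimum are from the talk):

```python
# ETO server-virtualization incentive as described: $350 per server,
# but only for projects that virtualize at least ten servers at once.

def virtualization_incentive(servers_virtualized: int) -> int:
    """Incentive in dollars; zero if below the ten-server minimum."""
    if servers_virtualized < 10:
        return 0
    return 350 * servers_virtualized

print(virtualization_incentive(30))  # 10500 -- a 30-server consolidation
print(virtualization_incentive(9))   # 0 -- under the minimum
```

So consolidating 30 servers onto one virtualized host, as in the EasyStreet example, would qualify for $10,500.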
Jon Haas of Intel presented information on the future of the equipment inside data centers. Computer systems are getting more and more powerful while also getting smaller and smaller; the result is that the equipment inside a rack uses more and more power, resulting in higher watts-per-square-foot density. Jon also educated us about the Green Grid, an international organization dedicated to improving the energy efficiency of data centers. see his presentation
John Santana of the Pringle Creek Community gave a presentation on the energy dashboard system that monitors the community's energy use and the energy production of its solar PV array. During our rainy spring forum day the solar panels were generating about 4 kW of electricity; on a sunny summer day they will generate about 21 kW.
James Meyer of Opsis Architects gave a presentation on the Pringle Creek Community Center and the surrounding community. This is a planned community designed to house over 100 individual homes when fully built out. All the homes will be owned by the individuals living there and are required to be built to meet LEED standards. The homes will have small yards and share large green spaces. There is a creek running through the community, two large greenhouses, and plenty of trees. The community uses pervious paving to allow rainwater to seep right through without collecting and causing erosion. The existing buildings use a ground-source heat pump system to provide heating and cooling. The community center uses natural ventilation and high-efficiency lights, and has 96 photovoltaic solar collectors mounted on the roof.
Bryan Nealy and Ben Tate of the State of Oregon gave a presentation on the strategies the State has been incorporating into its data center to save energy, followed by a tour of the facility after a gourmet lunch catered by Wild Pear consisting of salad, vegetarian lasagna, steak, and cake. The State has been virtualizing its computer servers to run the same number of systems on fewer and fewer servers. They participate in PGE's (Portland General Electric) Dispatchable Generation program, whereby PGE takes care of the maintenance of their generators in exchange for being allowed to turn them on when PGE needs extra power in the service area. Ben and Bryan took us on a tour of the data center floor. The computer racks are lined up in hot and cold aisles, but the aisles are not physically isolated, so there is still some short-circuiting of air around the server racks; because of this short-circuiting, they turn on a chiller when the return air temperature reaches the mid-70s F or higher. They previously installed blanking plates in the unused slots of the racks, which reduced the short-circuiting of air and lowered the average cold aisle temperature by 4½ F. They use an underfloor supply air distribution system, and on the day of our visit they were operating with outside air economizers to provide 63 F supply air to the entire floor. Their two centrifugal chillers were off, and we toured the mechanical rooms housing them and could see for ourselves that they were off. In fact, their mechanical chillers only need to operate when the outside air is above 65 F, which is only about 15% of hours in a typical weather year. They are now looking to the ETO to provide an energy audit of their facility to identify additional energy-saving strategies, and they are very interested in a cold aisle containment system.