Author: Yevgeniy Sverdlik
Data center-focused US real estate investment trust (REIT) CoreSite recently brought online Phase II of its massive data center campus in Silicon Valley. Depending on demand, there will eventually be four or more phases at the campus in Santa Clara, California.
Phase I was commissioned in April 2010, after a seven-month construction period, and is currently 100% leased to one big customer.
Phase II went live in early July. The phase will provide a total of 10.5MW of critical load, serving more than 50,000 sq ft of raised floor. It will be brought online in 1.5MW-2.5MW increments.
We sat down with CoreSite VP Jameson Agraz to talk about the campus, which has already received a number of awards for its energy efficient design, including LEED Gold certification by the US Green Building Council.
DatacenterDynamics FOCUS: CoreSite has received a number of awards for energy efficiency features of this campus. Give us an overview of what you guys have done here in terms of environmental friendliness.
Jameson Agraz: With the first phase of the campus we really did just a phenomenal job in terms of efficiency all the way around. Efficiency not only from a space utilization perspective, but from an electrical distribution perspective and a mechanical perspective.
With Phase II, we're making some incremental improvements on that design, both on the electrical and mechanical sides of things. One of those is the modifications we've made to the mechanical systems, which allow a much lower reliance on city water. Water is a utility that often gets overlooked compared to electricity, but you need a lot of it from the municipality serving you to support a data center.
We built a mechanical system that basically allows us, if we want to, to come completely off of city water, so we have no reliance on city water as a utility from an availability perspective.
DCDF: How does that work?
JA: We are using rooftop-mounted mechanical infrastructure with built-in airside economization, and we're using direct-expansion units that use refrigerant gas and a coil to cool down the air when needed.
Because we're not running a chilled-water loop, we don't have a chiller system, so there's no central chiller plant, no condenser water pumps, no chilled-water pumps, no reserve water tank and, most importantly, no cooling towers. That's where all of your water ends up getting lost in a traditional data center: evaporation at the cooling towers.
One of the unique things about this property is we're one of the first to use DX package units that also have what are known as Munters units in them. That allows us to basically extend the number of airside economization hours we're able to run. We are basically increasing the thermal capacity of the air going through the data center, so we can cool down more heat load with the same amount of air.
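The mode selection Agraz describes can be sketched as a simple decision ladder: free air first, evaporative assist next, refrigerant-based DX last. The thresholds, function names, and approach temperature below are illustrative assumptions, not CoreSite's actual control logic.

```python
# Hypothetical sketch of the cooling-mode decision described above:
# prefer airside economization, fall back to evaporative (Munters-style)
# assist, and use DX refrigerant cooling only as a last resort.
# All setpoints here are illustrative assumptions.

def cooling_mode(outside_air_c: float, supply_setpoint_c: float = 24.0,
                 evap_approach_c: float = 6.0) -> str:
    """Pick the most energy/water-efficient mode that can hit the setpoint."""
    if outside_air_c <= supply_setpoint_c:
        return "airside-economizer"   # free cooling: no water, no compressors
    if outside_air_c - evap_approach_c <= supply_setpoint_c:
        return "evaporative-assist"   # extends economizer hours, uses water
    return "dx"                       # refrigerant gas and coil take over

print(cooling_mode(18.0))  # airside-economizer
print(cooling_mode(28.0))  # evaporative-assist
print(cooling_mode(35.0))  # dx
```

The point of the evaporative stage is exactly what the interview describes: it stretches the range of outdoor conditions under which the site can avoid running compressors.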
We have the ability to either manage to a really good PUE, or manage to a really good water usage effectiveness (WUE). We can come completely off city water at the expense of a slightly higher PUE, or run a really low PUE with some use of city water.
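The trade-off Agraz describes follows directly from the standard definitions of the two metrics: PUE is total facility energy over IT energy, and WUE is site water use over IT energy. The numbers below are hypothetical, not CoreSite measurements; they just show how turning off evaporative water use raises PUE while dropping WUE to zero.

```python
# Illustration of the PUE/WUE trade-off using the standard Green Grid
# definitions. All figures are hypothetical example values.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return site_water_liters / it_kwh

it_kwh = 1_000_000  # hypothetical annual IT load

# Mode A: evaporative assist engaged -> lower PUE, some city-water use
mode_a = (pue(1_150_000, it_kwh), wue(400_000, it_kwh))

# Mode B: dry operation, no city water -> zero WUE, slightly higher PUE
mode_b = (pue(1_250_000, it_kwh), wue(0, it_kwh))

print(mode_a)  # (1.15, 0.4)
print(mode_b)  # (1.25, 0.0)
```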
DCDF: Tell us about the electrical design at Phase II.
JA: All the data center rooms themselves are very flexible in terms of how we deliver power. We're basically planning to be able to deliver Tier III infrastructure throughout the data center. So basically, 2N UPS and so on – everything fully concurrently maintainable. We have the ability to go to a less redundant system, such as N+1. We've developed back-end electrical distribution that allows for different types of UPS deployment within the data centers themselves.
The other thing we've done is eliminate power distribution units from the electrical distribution line-up. Instead of PDUs, we're using dry transformers, which are placed outside of the data center. We don't have any transformed load in the data center: the PDUs, which have some electrical loss, generate heat and take up space, have been moved outside. That allows for more space efficiency and better cooling efficiency, since the mechanical units are now cooling only the IT load instead of also cooling the transformers.
DCDF: Tell us a little bit about the thinking that went into the site selection process for this location.
JA: Santa Clara is a great place for data centers. Power costs less than in neighboring PG&E territory because of Silicon Valley Power, the utility serving Santa Clara. They really seem to understand data centers.
Also, the land was available and offered us a long runway for a larger campus-type development, so that's how we zeroed in on this site. We had already established a footprint here in the Bay Area with a good set of customers, so we just wanted to make sure we had the capacity to capture the demand.