Data Centers: A Latent Environmental Threat

Towards the end of the 1990s, the Internet started to become an omnipresent fixture in many people’s lives. With its user base doubling in size annually and its user-generated content growing uncontrollably, the Internet was overwhelming the infrastructure built to maintain it. To keep up with this growth, enterprises began to invest in more robust server infrastructure in hopes of accommodating users’ growing demands for more processing power, larger storage space, and faster network speeds. It soon became apparent that the growth rate of Internet traffic and content would not decline in the near future and that enterprises would need a more permanent solution for accommodating Internet users. This demand for a more capable foundation for the Internet became the catalyst that fueled the creation of the 3 million data centers active in the United States today. Despite becoming fundamentally vital to the daily functioning of the Internet, data centers are undoubtedly guilty of nurturing the world’s unhealthy dependence on the Internet. This dependence has made the cyber world so intertwined with the real world that users expect this anthropogenic system to respond instantaneously and perform flawlessly. Thus, propelled by ever-rising user expectations, data centers have fostered a cycle of continuously building facilities that demand ever more power. Unfortunately, this chronic increase in power consumption comes at the expense of further depleting limited natural resources and ultimately adds permanently to data centers’ growing damage to the environment.

While browsing the Internet, users are hidden from its inner workings. They can see neither the movement of data between their computer and a network nor the energy required to facilitate that communication. Although this abstraction yields a simpler experience for users, it hides a harmful byproduct of their web activity: its polluting effect on the environment.

Nearly every action performed on the Internet carries the cost of a resultant carbon emission: making a Google search emits 0.2 g of CO2, watching a YouTube video for only 10 minutes emits 1 g of CO2, and simply owning a Gmail account for a year emits 1,200 g of CO2. Regardless of whether a user is actively using the Internet, a user’s carbon footprint is quantified by the volume of data the user creates and its subsequent manipulation, which can be independent of user action. This omnipresent carbon cost that accompanies Internet usage calls for improving the efficiency with which data is manipulated, and that effort must focus on the Internet’s primary data handlers: data centers.
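As a rough illustration of how these per-action figures compound, the sketch below estimates a single user’s annual Internet carbon footprint. The per-action emission values are the ones cited above; the daily usage counts are hypothetical assumptions chosen only for the example.

```python
# Rough estimate of one user's annual Internet carbon footprint, using the
# per-action CO2 figures cited above. The activity counts are assumptions.

CO2_PER_SEARCH_G = 0.2        # grams CO2 per Google search
CO2_PER_10MIN_VIDEO_G = 1.0   # grams CO2 per 10 minutes of YouTube
CO2_PER_GMAIL_YEAR_G = 1200   # grams CO2 per Gmail account per year

searches_per_day = 25         # assumed
video_minutes_per_day = 60    # assumed
gmail_accounts = 1            # assumed

annual_g = (
    searches_per_day * 365 * CO2_PER_SEARCH_G
    + (video_minutes_per_day / 10) * 365 * CO2_PER_10MIN_VIDEO_G
    + gmail_accounts * CO2_PER_GMAIL_YEAR_G
)

print(f"Estimated annual footprint: {annual_g / 1000:.1f} kg CO2")
# With these assumptions: 1825 + 2190 + 1200 = 5215 g, or about 5.2 kg CO2
```

Even under these modest assumptions, a single user accumulates kilograms of CO2 per year before accounting for any of the heavier activities, such as streaming video in high definition or storing files in the cloud.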

Annually, data centers are becoming markedly more energy efficient; however, any gain in efficiency is promptly consumed by an increase in user demand. Most data centers’ clients expect not only fast speeds but also guaranteed data integrity and ubiquitous service. While data centers can meet these expectations, doing so introduces several bottlenecks to energy efficiency that are necessary to provide the expected near-perfect service. A few of these bottlenecks are high-performing server hardware, readily available uninterruptible power supply (UPS) systems, and powerful temperature maintenance systems. To service data at the bare minimum, a data center needs only enough electricity to power its server hardware. Generally, any additional electricity is used to accommodate large server loads and, most importantly, to maintain temperatures within the facility to prevent damage to both hardware and data.

To quantify the power efficiency at which a data center runs, the industry uses the power usage effectiveness (PUE) metric. This metric is calculated by dividing the total electricity the data center consumes by the energy required to run the server hardware; it essentially describes the additional electricity necessary to maintain the normal operation of the server hardware. An ideal PUE is 1.0, which denotes that all electricity used by the data center is consumed solely by server hardware, but this would also mean that no electricity is consumed by cooling mechanisms, lighting, or any other overhead, which is highly unlikely. Within the United States, data centers achieve a PUE of 1.85 on average; however, this value largely depends on the size of the data center, which is generally correlated with the scale of the operation and the quality of the infrastructure. It therefore becomes less surprising that small data centers generally have a PUE of 2.0 while large data centers can have a PUE as low as 1.1. Although the average PUE for large data centers is lower than that of smaller data centers, PUE is an insufficient metric for comparing infrastructures. It is only useful for measuring a data center’s efficiency with respect to itself and fails to reveal complete information about the data center’s total energy consumption, a vital factor in computing a more telling metric: the data center’s carbon footprint.
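A minimal sketch of the PUE calculation described above; the sample energy figures are hypothetical and are chosen only to reproduce the typical values cited in this section.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by the
    energy consumed by the IT (server) hardware alone."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures illustrating the typical values cited above.
print(pue(1_850_000, 1_000_000))  # 1.85 -> U.S. average
print(pue(2_000_000, 1_000_000))  # 2.0  -> typical small data center
print(pue(1_100_000, 1_000_000))  # 1.1  -> efficient large facility
```

Because both facilities in a comparison could report the same PUE while consuming vastly different absolute amounts of energy, the ratio says nothing about total footprint on its own.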

In 2016, the U.S. government released a study of data center energy use, the first complete analysis of the topic in nearly a decade. The study estimated that data centers in the United States consumed 70 billion kWh of electricity in 2014, equal to 1.8% of the country’s total electricity consumption for that year. In the same year, data centers were estimated to have used 626 billion liters of water, which equates to 1 liter of water consumed for every 0.11 kWh used. To put that into perspective, 0.11 kWh can only power a 40-watt lightbulb for about 3 hours. Alternatively, the average daily electricity consumption for a U.S. residence is 30.03 kWh; if this amount of electricity were consumed by a data center instead, it would be equivalent to about 273 liters of water consumed, roughly the recommended intake of water for a man over the course of three months. Due to their excessive consumption of electricity and water, in addition to several smaller factors, data centers have been estimated to account for about 2% of total global greenhouse gas emissions annually, or about 648 billion kilograms of CO2 in 2014. The industry claims that in the coming years it will be able to upgrade its facilities without increasing its proportional contribution to total global greenhouse gas emissions, but regardless of that claim’s veracity, the objective should not only be to maintain consumption but to minimize it.
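To make the water-to-electricity conversion above concrete, the short sketch below reproduces the arithmetic: it uses the study’s rounded ratio of 1 liter of water per 0.11 kWh to translate the 30.03 kWh daily household figure into an equivalent volume of water.

```python
# Reproduces the arithmetic in this paragraph. The ratio is rounded from
# the study's 2014 figures: 70 billion kWh and 626 billion liters of water.
KWH_PER_LITER = 0.11

daily_household_kwh = 30.03   # average U.S. residence, per day (cited above)
equivalent_liters = daily_household_kwh / KWH_PER_LITER

print(f"{equivalent_liters:.0f} liters of water")  # ~273 liters
```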

Somewhat inconsistent with their recorded extreme energy usage, data centers do not necessarily require all the electricity they expend in order to operate normally; in fact, for most data centers, using only half of their average total energy usage would result in service nearly identical in quality to their normal service. Data centers consume the other half because of the expectation of delivering “perfect” service to their clients. This service is defined by three promises: fast network speeds, guaranteed data integrity, and, most importantly, functional service at all times. Although data centers aim to achieve this level of service, they know it is impossible to be perfect. They therefore spend a significant portion of the extra energy on minimizing the risk of any service-affecting error occurring within the facility. Though a majority of clients do not require this level of service, for some clients any period of the data center being unavailable could create serious risk for their businesses. Likewise, for a data center, any period of being unable to service requests could create serious risk for its own business: losing clients. As a result, many public data centers have no choice but to expend even more energy to meet their clients’ high service expectations, despite knowing that this additional energy, almost regardless of its volume, will result in only a marginal improvement in service.

There are several factors on which data centers expend this additional energy. One of the most critical is the maintenance of cold temperatures and controlled humidity levels within the facility to ensure that hardware can operate and client data remains protected. One method many data centers use to maintain their controlled climate is computer room air conditioning (CRAC) units: devices that monitor and maintain air temperature by producing cold air that propagates throughout the facility and cools down server racks. This air is subsequently funneled back into the CRAC units to repeat the process indefinitely. By maintaining a cool temperature in the data center, the facility ensures a prolonged lifespan for the hardware and minimizes the risk of hardware being damaged or shut down due to overheating. Additionally, most CRAC units have humidifier components that help maintain a moderate relative humidity within the data center. It is imperative that the humidity within data centers is controlled so that the air becomes neither too dry nor too moist. In a dry atmosphere, the chance of electrostatic discharge (ESD) increases, which could result in destroyed or corrupted data. In a moist atmosphere, dust particles become more likely to stick to electrical components, reducing the components’ ability to transfer heat and further contributing to their corrosion. It should now be evident that CRAC units and alternative cooling mechanisms not only play a useful role within data centers by controlling atmospheric variables but also ensure hardware integrity as a result.

It may seem advantageous to prolong the equipment’s lifespan as long as possible; however, within most data centers a server’s average lifetime is about 4.4 years before it is replaced with an upgrade. Therefore, data centers can mitigate the energy their cooling mechanisms consume by cooling only as much as is needed to ensure that a piece of hardware lasts through its upgrade cycle. Historically, CRAC units have achieved this goal effectively, but as the average processing power within a server grows over time, so does the heat the CRAC units must remove. This requires CRAC units to expend increasing amounts of power to perform their maintenance tasks, which not only makes a noticeable impact on a data center’s total energy consumption but also implies greater volumes of coolant needed to run the units themselves. The coolants used generally contain halocarbons or chlorofluorocarbons, mildly toxic substances that can contribute to ozone depletion. Given that the disadvantages of CRAC units and their accompanying environmental harm become magnified over time, it is crucial that data centers begin to invest in alternative cooling systems, such as evaporative chillers, in order to adopt a cooling mechanism that not only sustains its efficiency but also minimizes its ecological threat over time.

In addition to cooling systems, another area in which data centers invest a significant amount of extra energy to maximize their quality of service is the collection of alternative power sources to be used in case of primary power source failure. Even a small power outage could compromise data integrity or render the service unavailable, which is an unacceptable outcome. To guard against this possibility, data centers have uninterruptible power supply (UPS) systems that can temporarily power their facilities in an emergency. These systems generally use lead-acid batteries as their power supplies, but such batteries have a short average lifespan of 2.5 years and a guaranteed negative environmental impact, since their production often employs destructive mining techniques. Further compounding UPS systems’ disadvantages, data centers typically keep backup diesel-powered generators alongside their lead-acid batteries because modern battery technology is incapable of independently powering an average data center for an extended period of time. If backup generators do not burn through their stored diesel quickly enough, the fuel expires and must be discarded. This waste of fuel is unavoidable and adds to data centers’ compounding environmental debt, a byproduct of their interminable practice of generating minimal benefit at large environmental cost.

In an effort to maximize the reliability of every facet of their service, and in addition to protecting against external hardware-related failure, data centers must always be prepared for sudden, and often rare, spikes in server load. To absorb these spikes, active servers cannot operate using all of their available power; each server needs readily available spare computing capacity for peak traffic loads. As a result, average server utilization only ranges from 10% to 45%, depending on the quality of the data center’s infrastructure. In addition to this limitation, active servers within a data center waste further energy because usually about 10% to 30% of them are vampire servers: active servers that are not performing any jobs and instead sit idle while still using the same amount of energy. Unlike server utilization limitations, this is a byproduct of poor infrastructure management and can easily be avoided. This negligence, combined with data centers’ inability to concurrently deliver maximum server utilization and elastic peak-load handling, leaves nearly 90% of the electricity used to power the servers wasted, unnecessarily inflating data centers’ carbon footprints.
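One way to see how these figures could compound is the rough sketch below. It is only an illustration, not a derivation from the cited study: it assumes, as a simplification, that idle servers draw full power, and it picks specific values from the ranges quoted above to show how total waste can approach the cited figure of roughly 90%.

```python
# Back-of-the-envelope estimate of wasted server electricity, using values
# picked from the ranges cited above. The specific choices are assumptions.

utilization = 0.15       # average server utilization (cited range: 10%-45%)
vampire_fraction = 0.25  # share of active servers doing no work (10%-30%)

# Energy doing useful work: only the non-idle servers, and only at their
# average utilization. Idle servers are assumed to draw full power.
useful_fraction = (1 - vampire_fraction) * utilization
wasted_fraction = 1 - useful_fraction

print(f"Useful: {useful_fraction:.1%}, wasted: {wasted_fraction:.1%}")
# Roughly 11% of the electricity does useful work; about 89% is wasted.
```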

Despite data centers being a fundamental necessity for the functioning of the Internet, the negative ecological impact of the extra energy expended to fulfill clients’ quality-of-service expectations ultimately outweighs the positive anthropological impact of their almost negligible service improvements. This imbalance has been steadily increasing over the past two decades because Internet users have unconditionally embraced every improvement to speed and reliability. Unfortunately, the resource cost of the enhanced quality of service is oftentimes grossly disproportionate to the value of the service improvements. This inequality is hidden from Internet users and has conditioned them to expect such performance, ignorant of its costs. Stemming from their fear of damaging their business, data centers give themselves no choice but to comply with users’ expectations and ultimately make it more difficult for themselves to reduce their exceedingly negative impact on the environment.

Despite data centers’ significant contributions to pollution over the years, it took until approximately 2007 for the public to notice the magnitude of their pollution problem, when studies detailing their environmental impact began to be released. The following few years saw a growing public effort to rally against their massive carbon footprint, and by 2012, partially motivated by the fear of bad publicity, data centers began to actively invest in improvements in infrastructure and management that would reduce their carbon footprint. Of these improvements, the most globally beneficial was the diversification of data centers’ energy sources through increased consumption of renewable energy.

Starting in 2010, Greenpeace began campaigning to spread awareness about data centers’ hidden dependence on coal. During this campaign, Greenpeace revealed that 50% to 60% of the electricity used by Facebook’s data centers was produced directly from coal. Originally, Facebook deflected the accusation by saying that it had no control over how the grid produced its electricity, but the next year, after realizing the opportunity to drive growth in renewable energy, Facebook became one of the first tech companies to publicly commit to powering its data centers completely with renewable energy.

Since making that commitment, Facebook has made several strides in reducing its carbon footprint and has motivated other tech companies to do the same. In 2012, Facebook created the Open Compute Project, an industry-wide initiative to share specifications and best practices for building the most energy-efficient and economical data centers. The project gained traction within the industry, and Facebook used it to introduce the water usage effectiveness (WUE) metric, which has become increasingly popular. WUE provides more meaningful data about energy consumption than PUE because it can be used to estimate both total water and electricity consumption for a data center by using industry-average values. By taking these steps forward, Facebook played a vital role in increasing competition within the industry, permanently adding environmental impact as a factor in consumers’ minds.
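For reference, WUE is conventionally expressed as annual site water use divided by the energy delivered to the IT equipment; the sketch below shows that calculation with hypothetical figures included only to make the units (liters per kWh) concrete.

```python
def wue(annual_water_liters: float, it_equipment_kwh: float) -> float:
    """Water usage effectiveness: liters of water consumed per kWh of
    energy delivered to the IT equipment."""
    return annual_water_liters / it_equipment_kwh

# Hypothetical annual figures for a mid-sized facility (illustration only).
ratio = wue(annual_water_liters=2_500_000, it_equipment_kwh=10_000_000)
print(f"{ratio:.2f} L/kWh")  # 0.25 L/kWh
```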

Eager not to be outmatched, other large tech companies also made efforts to reduce their data centers’ environmental impact and promote change within the industry. Given that a 2016 study estimated that only 12% of a data center’s total carbon emissions are generated within the facility, with the remaining 88% coming from the production of its third-party resources such as electricity and water, it was clear that data centers needed to focus more attention on where they procure their resources than on how they use them. With respect to reducing carbon emissions from water usage, some companies have made progress in using more greywater and seawater in their data centers. In 2012, Google’s data center near Atlanta moved completely to recycled wastewater for its cooling mechanisms and now releases any excess purified water into the nearby Chattahoochee River. More impressively, in 2013, after building its large solar farm in North Carolina, Apple was able to announce that all of its data centers were running on 100% renewable energy, sourced from a combination of renewable energy bought from energy companies and renewable energy generated onsite. Many companies have begun to follow these companies’ paths to increase the percentage of clean energy they use, and for some small companies, this option would not have been available without the larger companies’ help.

Given data centers’ large energy demands, large companies have leverage to push public utility companies to embrace renewable energy. Duke Energy, the largest energy utility company in the country, has been the target of such pressure to increase renewable energy production. Pushed by the demands of companies like Apple and Google, Duke Energy has already invested more than $4 billion in solar and wind facilities in 12 different states over the past 8 years. In addition to pushing for cleaner sources of electricity, many data centers have been working with their local municipalities to build mutually beneficial water treatment plants that reduce water waste and use more carbon-conscious methods of purifying water. This benefits data centers by giving them more control over the carbon cost of the water they consume, and local citizens by providing them more options for clean water. Thanks to the advances of large companies, smaller data centers have been able to benefit from these newly created renewable energy facilities; in fact, for some, such a facility was their only reasonable opportunity to use renewable energy. Ultimately, the continuing advances in reducing carbon emissions made by large tech companies have reduced their own ecological impact and have also set a good example for smaller companies to practice the same mindfulness in the future.

Aside from the external carbon emission reductions available to a data center, the infrastructure within a data center presents several possible improvements as well. As mentioned previously, a well-known and serious problem with data center infrastructure is server utilization: many data centers effectively use only about 10% to 20% of the power consumed by their servers. This is largely because a physical server is typically dedicated to at most one application, whereas an application can span multiple physical servers. Since most applications do not require maximum server power at all times, their servers often use only a fraction of their energy productively, which results in a large and unnecessary contribution to a data center’s carbon emissions. Fortunately, in recent years many data centers have been able to avoid this problem through their adoption of virtualization. With virtualization, a physical server can host multiple virtual servers. A virtual server executes processes in the same way a physical server does, so one physical server can accommodate the processes of multiple applications, each running on one of its virtual servers instead of on dedicated hardware, as the sketch below illustrates. Through this, it has been shown that servers can safely increase their utilization up to 80%, a marked increase in efficiency. Additionally, since physical servers can accommodate multiple jobs at once, data centers can reduce the number of active servers in their facility, which reduces the facility’s cumulative heat generation and carbon footprint. Incentivized by these clear benefits, many data centers have been updating their infrastructure to support virtualization; as early as 2011, it was recorded that 72% of companies had data centers that were at least 25% virtualized. Propelled by the increasing popularity of the cloud, there is no doubt that the adoption of virtualization will increase in the future and that its effect on reducing data center carbon emissions will be significant.
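A simplified consolidation estimate under the figures cited above: dedicated servers sit mostly idle because each application needs only a fraction of one machine, while virtualized hosts can be packed up to a safe ceiling of roughly 80% utilization. All workload numbers here are hypothetical.

```python
import math

# Hypothetical application workloads, each expressed as the fraction of one
# physical server's capacity that the application actually needs on average.
workloads = [0.10, 0.15, 0.05, 0.20, 0.08, 0.12, 0.30, 0.10]

# Without virtualization: each application gets at least one dedicated server,
# leaving most of each machine's capacity unused.
dedicated_servers = len(workloads)

# With virtualization: workloads share hosts, packed up to a safe ceiling of
# ~80% utilization per physical server (the figure cited above).
SAFE_UTILIZATION = 0.80
virtualized_servers = math.ceil(sum(workloads) / SAFE_UTILIZATION)

print(f"Dedicated: {dedicated_servers} servers")      # 8 servers
print(f"Virtualized: {virtualized_servers} servers")  # 2 servers
```

Under these assumptions the same set of applications runs on a quarter of the hardware, with correspondingly less power drawn and less heat to remove.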

It has been shown that some data centers are readily embracing opportunities to reduce their carbon emissions, but a substantial number of data centers have not embraced these opportunities at all. In 2014, of the 70 billion kWh of electricity consumed by data centers in the United States, only 5% can be attributed to the massive “hyperscale” data centers constructed by companies like Apple, Facebook, and Google, while the remaining 95% is attributed to small- and medium-sized data centers. This seems counterintuitive because these average-sized data centers cumulatively process only 66% of all data center traffic. The large discrepancy between the traffic and the energy consumption of these two classes of data centers reveals that many average-sized data centers are largely energy-inefficient. This inefficiency is primarily rooted in those data centers’ lack of incentive to become more energy efficient: doing so would require substantial investments in upgrading their server infrastructure and time spent teaching their IT staff about the new infrastructure, both of which are difficult given the limitations imposed by the small scale of their companies. Therefore, many of the carbon-emission-reducing upgrades available in data center operations only benefit hyperscale data centers, which do not suffer from the same limitations. Unless average-sized data centers are presented with worthwhile incentives, a majority of data center traffic will continue to be serviced by their outdated systems and ultimately remain unaffected by the several improvements available to reduce its future carbon emissions.

Currently, the incentives for companies to upgrade or replace their average-sized data centers may be negligible, but in the coming years the advantages of becoming a client of a hyperscale data center will be too great to ignore. By 2020, hyperscale data centers are projected to account for 68% of total global data center processing power and to service 53% of total global data center traffic, increases of 74% and 55%, respectively, over their 2015 values. This projected rise in popularity stems from the compounding effects of hyperscale data centers’ improvements in cost efficiency, energy efficiency, and quality of service over the years. In addition, small data centers will have difficulty accommodating their future traffic, given that annual global data center traffic is projected to increase from 4.7 trillion gigabytes in 2015 to 15.3 trillion gigabytes in 2020. This more-than-threefold increase in data volume will be much harder for smaller data centers to cope with than for hyperscale data centers, because small data center server utilization is currently about one quarter that of hyperscale data centers and is expected to improve only marginally by 2020. As a result, this incapability will force small data centers to make a decision: upgrade their outdated infrastructure or become clients of a hyperscale data center. Despite the improbability of small data centers making energy-efficient infrastructure upgrades in the future, their future contribution to global carbon emissions will likely be reduced if they choose to redirect their traffic to hyperscale data centers instead. This migration has the potential to eliminate several serious inefficiencies in the industry and would also result in a more compact, environmentally conscious collection of companies that could motivate each other to invest in more efficiency innovations in the future.
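As a sanity check on the traffic projection above, the sketch below computes the growth factor and the implied compound annual growth rate from the two cited figures.

```python
# Implied growth of global data center traffic, computed from the 2015 and
# 2020 figures cited above.
traffic_2015_gb = 4.7e12   # 4.7 trillion gigabytes
traffic_2020_gb = 15.3e12  # 15.3 trillion gigabytes
years = 5

growth_factor = traffic_2020_gb / traffic_2015_gb
cagr = growth_factor ** (1 / years) - 1

print(f"Traffic grows {growth_factor:.2f}x, about {cagr:.0%} per year")
# ~3.26x over five years, or roughly 27% compound annual growth
```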

Data centers have undergone a serious evolution since their boom during the dot-com bubble of the late 1990s. They have become vital to the functioning of the Internet and have transitively played an integral, yet abstracted, role in many Internet users’ everyday lives. Despite the clear advantages they have given to the modern world, data centers have been more ecologically harmful than they have been anthropologically helpful. This omnipresent ecological threat stems from their growing energy consumption and the permanent negative environmental impact it generates.

While a portion of data centers’ energy consumption is effectively used for the functioning of applications, the remaining energy is largely wasted by facilities’ inefficient infrastructure and their investments in chasing an unattainable quality of service. The quality of service delivered is especially correlated with the magnitude of data centers’ anthropological and ecological impacts, both positive and negative. These two realms of impact have an inverse relationship where a positive improvement for one realm generally results in a more negative future for the other; however, the magnitude of change is not always equivalent for both realms.

Currently in the industry, a substantial amount of energy is often expended to yield marginal increases in performance, but it is questionable whether users even notice the improvements bestowed upon them. Would they notice that an image loaded a fraction of a second faster than before? Or that the file they recently uploaded to the cloud now has several redundant copies that can restore it should it become corrupted? The likely answer is that people would neither notice nor even expect these improvements, yet companies still strive to deliver increasingly better speeds and service. This ultimately pushes users to become accustomed to a higher quality of service that they would otherwise have ignored. This hazardous cycle continues to further pollute the environment, and in order to eliminate its negative ecological impact, it is indispensable that data centers collectively agree upon an upper bound for their future quality of service and begin to direct their focus toward minimizing their carbon impact instead.

The intertwined relationship between data centers and the environment can lead to many different outcomes. In one, data centers magnify their positive role of developing energy innovations and promoting their widespread adoption; in another, the destructive cyclic relationship between data centers and their clients persists and continues to inflict constantly growing levels of ecological damage. Which outcome materializes depends not only on the actions of data centers but on the actions of their users as well. In order to create positive change, it is imperative that users become cognizant of the toxic byproducts of their cyber activity. Whether this awareness yields reductions in daily streaming or a decreased dependence on the cloud, the collective effect can make a marked difference in users’ impact on the environment and could potentially trigger a permanent acceptance of a more environmentally conscious usage of technology.

The future state of the environment, as it has always been, is inherently decided by people’s actions. Technology users need to accept that since they were satisfied with humbler service in the past, they can still be satisfied with the same quality of service in the future. In the same vein, data centers may be reducing their yearly increases in energy consumption rates, but this trend is not guaranteed to last. Therefore, in order to ensure the longevity of the environment, it is ultimately humans’ responsibility to not only reduce their levels of consumption but also amend their relationship with the environment before their intertwined longevity is permanently compromised.