The Paris Agreement

Paris Agreement Summary

Article 1

  • A glossary for the paper

Article 2

  • Respond more effectively to climate change, in the context of sustainable development and efforts to eradicate poverty.
  • Strengthen adaptation, with each country acting within its capabilities, whatever they may be.
  • Hold temperature increases to well below 2 °C above pre-industrial levels, pursuing efforts to limit them to 1.5 °C.

Article 3

  • Countries should be ambitious and help each other out in following the Agreement.

Article 4

  • Reach peak emissions as soon as possible, then rapidly reduce.
  • Each nation sets successive goals that go beyond its previous contribution.
  • Developed countries take the lead and support developing countries.
  • Report information on progress transparently every 5 years.
  • Time frames TBD by the conference.
  • Each party is responsible for its own emissions level.
  • Economy-wide changes if achievable.

Article 5

  • Preserve large forests – properly incentivize stakeholders to do so, perhaps through results-based payments for conservation

Article 6

  • Countries acknowledge that cooperation under this article is voluntary
  • Countries will face adaptation costs and any proceeds obtained can be used to offset these costs
  • Countries should use a variety of methods to achieve their planned contributions

Article 7

  • Countries need to adapt to the effects of climate change
  • Adaptation spending in the present will greatly reduce adaptation costs later on
  • Countries should cooperate to share research and techniques
  • Developed countries should assist less developed countries
  • The UN will lend support
  • Natural resources must be properly managed
  • Adaptation efforts will be recognized by the global stocktake

Article 8

  • All parties recognize the need to address loss and damage associated with the adverse effects of climate change
  • The Warsaw International Mechanism for Loss and Damage associated with Climate Change Impacts addresses loss and damage under the Agreement
  • Details ways parties can cooperate, such as early warning systems and “comprehensive risk assessment and management”

Article 9

  • Developed countries have to provide the resources to assist developing countries in their efforts
  • Developed countries should lead the way for “climate finance” for resources, instruments, etc.
  • Developed countries must communicate quantitative and qualitative information on the support provided to developing countries biennially

Article 10

  • The nations share a long-term vision and unanimously agree to use technology to reduce greenhouse gas emissions.
  • They will strengthen cooperative action on technology development and transfer.
  • The Technology Mechanism established under the Convention will serve the Agreement, working in accordance with the Technology Framework.
  • Accelerate, encourage, and enable innovation.
  • Criticism: excessive jargon, repetition, and ambiguous claims about what to do; no concrete plans in accordance with the “long-term vision.”
  • Important: support developing nations.

Article 11

  • Calls for capacity building in high-risk areas, i.e., developing countries and those that will be most adversely affected by climate change.
  • Says developed countries should enhance support for capacity building in developing countries
  • The last line defers the arrangements for capacity building to a later session of the conference

Article 12

  • Says: Parties shall cooperate in taking measures, as appropriate, to enhance climate change education, training, public awareness, public participation and public access to information, recognizing the importance of these steps with respect to enhancing actions under this Agreement.
  • Reiteration of previous statements.

Article 13

  • Repetition of points that call for a mechanism of transparency and for extending support to developing countries in implementing this article.
  • Important: each party shall regularly provide the following information:
    • (a) A national inventory report of anthropogenic emissions by sources and removals by sinks of greenhouse gases, prepared using good practice methodologies accepted by the Intergovernmental Panel on Climate Change and agreed upon by the Conference of the Parties serving as the meeting of the Parties to this Agreement
    • (b) Information necessary to track progress made in implementing and achieving its nationally determined contribution under Article 4; each Party should also provide information related to climate change impacts and adaptation under Article 7, as appropriate

Article 14 *mutatis mutandis

  • Countries “shall periodically take stock of the implementation of this Agreement to assess the collective progress towards achieving the purpose of this Agreement and its long-term goals”
  • This global stocktake happens every 5 years, starting in 2023
  • These stocktakes will help inform future action

Article 15 *mutatis mutandis

  • Establishes a mechanism to facilitate implementation of and promote compliance with the Agreement
  • The mechanism is a committee, expert-based and facilitative in nature, that will report annually on progress

Article 16 *mutatis mutandis

  • Any party to the Convention that is not a party to this Agreement may participate in its meetings as an observer. Some other specific classifications are given to others, but ultimately everyone is free to observe.

Article 17

  • “The secretariat established by Article 8 of the Convention shall serve as the secretariat of this Agreement.”

Article 18

  • “The Subsidiary Body for Scientific and Technological Advice and the Subsidiary Body for Implementation established by Articles 9 and 10 of the Convention shall serve, respectively, as the Subsidiary Body for Scientific and Technological Advice and the Subsidiary Body for Implementation of this Agreement.”

Article 19

  • Subsidiary bodies can be established by the countries in the Paris Agreement if necessary

Article 20

  • “This Agreement shall be open for signature and subject to ratification, acceptance or approval by States and regional economic integration organizations that are Parties to the Convention.”
  • It shall be open for signature at the United Nations Headquarters in New York from 22 April 2016 to 21 April 2017.

Article 21

  • “This Agreement shall enter into force on the thirtieth day after the date on which at least 55 Parties to the Convention accounting in total for at least an estimated 55 per cent of the total global greenhouse gas emissions have deposited their instruments of ratification, acceptance, approval or accession.”
  • Defines “total global greenhouse gas emissions” as the most up-to-date amount communicated by the Parties to the Convention

Article 22

  • The provisions of Article 15 of the Convention on the adoption of amendments to the Convention shall apply mutatis mutandis to this Agreement.

Article 23

  • Article 16 of the Convention applies mutatis mutandis (a Latin phrase meaning “with the necessary changes having been made”)
    • In other words, the Convention’s provisions carry over, adjusted as needed to fit this Agreement
  • Future changes, additions, or annexes to the agreement will be restricted to lists, forms, and other material of a scientific, technical, procedural, or administrative character

Article 24

  • The part of Article 14 relating to settlement of disputes is also mutatis mutandis

Article 25

  • Each party gets 1 vote
  • If the EU or another organization consisting of a collective group of countries votes, it gets as many votes as it has member countries, but only if none of those countries vote independently

Article 26

  • The agreement is entrusted to the Secretary-General of the UN (in legal terms, he is the Depositary) since it is a multinational agreement

Article 27

  • No reservations may be made to the agreement, meaning that no countries may place caveats on their ratification

Data Centers: A Latent Environmental Threat

Towards the end of the 1990s, the Internet started to become an omnipresent fixture in many people’s lives. With its user base doubling annually and its user-generated content growing uncontrollably, the Internet was overwhelming the infrastructure built to maintain it. To keep up with this growth, enterprises began to invest in more robust server infrastructure in hopes of accommodating users’ demands for increased processing power, larger storage space, and faster network speeds. It soon became apparent that the growth of Internet traffic and content would not decline in the near future and that enterprises would need a more permanent solution for accommodating Internet users. This demand for a more capable foundation for the Internet became the catalyst for the roughly 3 million data centers active in the United States today. Despite becoming fundamentally vital to the daily functioning of the Internet, data centers are undoubtedly guilty of nurturing the world’s unhealthy dependence on it. This dependence has made the cyber world so intertwined with the real world that users expect this anthropogenic system to respond instantaneously and perform flawlessly. Thus, propelled by ever-rising user expectations, data centers have fostered a cycle of continuously building facilities capable of delivering more computing power. Unfortunately, this chronic increase comes at the expense of further depleting limited natural resources and ultimately adds permanently to data centers’ growing damage to the environment.

While browsing the Internet, users are hidden from its inner workings. Users can see neither the movement of data between their computer and a network nor the energy required to facilitate that communication. Although this abstraction yields a simpler experience for users, it hides a harmful byproduct of their web activity: the polluting effect it has on the environment.

Nearly every action performed on the Internet carries the cost of a resultant carbon emission: making a Google search emits 0.2 g of CO2, watching a YouTube video for just 10 minutes emits 1 g of CO2, and simply owning a Gmail account for a year emits 1,200 g of CO2. Regardless of whether a user is actively using the Internet, a user’s carbon footprint is quantified by the volume of data the user creates and its subsequent manipulation, which can occur independently of user action. This ever-present carbon cost of Internet usage therefore calls for improving the efficiency with which data is handled. That effort can be focused on the Internet’s primary data handler: data centers.
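Taken together, the per-action figures above imply a rough annual footprint for an individual user. The Python sketch below combines them; the daily activity counts are illustrative assumptions invented for the example, not measured data.

```python
# Rough annual CO2 footprint implied by the per-action figures above.
# The daily activity counts below are illustrative assumptions.
CO2_PER_SEARCH_G = 0.2         # grams CO2 per Google search
CO2_PER_10MIN_VIDEO_G = 1.0    # grams CO2 per 10 minutes of YouTube
CO2_PER_GMAIL_YEAR_G = 1200.0  # grams CO2 per Gmail account per year

def annual_footprint_g(searches_per_day, video_minutes_per_day, gmail_accounts=1):
    """Estimate a user's annual Internet CO2 footprint in grams."""
    daily = (searches_per_day * CO2_PER_SEARCH_G
             + (video_minutes_per_day / 10) * CO2_PER_10MIN_VIDEO_G)
    return daily * 365 + gmail_accounts * CO2_PER_GMAIL_YEAR_G

# 20 searches and 60 minutes of video per day, one Gmail account:
print(annual_footprint_g(20, 60))  # 4850.0 grams, i.e. roughly 4.85 kg CO2 per year
```

Even modest daily habits accumulate to kilograms of CO2 per user per year, which is the scale at which data center efficiency starts to matter.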

Data centers are becoming markedly more energy efficient each year; however, any gain in efficiency is promptly consumed by an increase in user demand. For most data centers, clients expect not only fast speeds but also guaranteed data integrity and ubiquitous service. While data centers can meet these expectations, doing so introduces several bottlenecks to energy efficiency that are necessary to provide the expected near-perfect service. A few of these bottlenecks are high-performing server hardware, readily available uninterruptible power supply (UPS) systems, and powerful temperature maintenance systems. To service data at the bare minimum, a data center needs only sufficient electricity to power its server hardware. Generally, any additional electricity is used to accommodate large server loads and, most importantly, to maintain temperatures within the facility to prevent damage to both hardware and data.

To quantify the power efficiency at which a data center is running, the industry uses the power usage effectiveness (PUE) metric. This metric is calculated by dividing the total electricity the data center consumes by the energy required to run the server hardware; it essentially describes the additional electricity necessary to maintain the normal operation of that hardware. An ideal PUE is 1.0, which denotes that all electricity used by the data center is consumed solely by server hardware, but this would also mean that no electricity is consumed by cooling mechanisms, lighting, or any other overhead, which is highly unlikely. Within the United States, data centers achieve a PUE of 1.85 on average; however, this value largely depends on the size of the data center, which is generally correlated with the scale of the operation and the quality of the infrastructure. It therefore becomes less surprising that small data centers generally have a PUE of 2.0 while large data centers can have a PUE as low as 1.1. Although the average PUE for large data centers is lower than that of smaller data centers, PUE is an insufficient metric for comparing infrastructures. It is only useful for measuring a data center’s efficiency with respect to itself and fails to reveal complete information about the data center’s total energy consumption, a vital factor in computing a more telling metric: the data center’s carbon footprint.
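The PUE ratio described above is simple enough to express directly. A minimal Python sketch, using the averages cited in this paragraph as sample inputs:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,850 kWh to run 1,000 kWh of server hardware matches
# the 1.85 U.S. average cited above; a PUE of 1.0 would mean zero overhead.
print(pue(1850, 1000))  # 1.85
print(pue(2000, 1000))  # 2.0, typical of a small data center
print(pue(1100, 1000))  # 1.1, achievable by large facilities
```

Note that the same facility could halve its cooling overhead and report a better PUE while still growing its absolute energy use, which is exactly why PUE alone cannot rank facilities by environmental impact.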

In 2016, the U.S. government released a study of data center energy use, the first complete analysis of the topic in nearly a decade. The study estimated that data centers in the United States consumed 70 billion kWh of electricity in 2014, equal to 1.8% of the country’s total electricity consumption for that year. In the same year, data centers were estimated to have used 626 billion liters of water, which equates to 1 liter of water consumed for every 0.11 kWh used. To put that into perspective, 0.11 kWh can only power a 40-watt lightbulb for about 3 hours. Alternatively, the average daily electricity consumption of a U.S. residence is 30.03 kWh; if this amount of electricity were consumed by a data center instead, it would be equivalent to about 273 liters of water consumed, roughly the recommended intake of water for men over the course of three months. Due to their excessive consumption of electricity and water, in addition to several smaller factors, data centers have been estimated to account for about 2% of total global greenhouse gas emissions annually, or about 648 billion kilograms of CO2 in 2014. The industry claims that in the coming years it will be able to upgrade its facilities without increasing its proportional contribution to total global greenhouse gas emissions, but regardless of the claim’s veracity, the objective should be not merely to maintain consumption but to minimize it.
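The equivalences in this paragraph can be checked arithmetically. A short Python sketch using the study's 2014 figures:

```python
# Arithmetic check of the 2014 figures cited above.
total_kwh = 70e9       # data center electricity use, 2014 (kWh)
total_water_l = 626e9  # data center water use, 2014 (liters)

kwh_per_liter = total_kwh / total_water_l
print(round(kwh_per_liter, 2))  # 0.11 kWh of electricity per liter of water

# 0.11 kWh runs a 40 W bulb for 0.11 / 0.040 hours, i.e. about 3 hours as stated.
# An average U.S. home's 30.03 kWh/day, at the rounded 0.11 kWh-per-liter rate,
# implies the water equivalent below.
print(round(30.03 / 0.11))  # 273 liters, matching the figure in the text
```

The quotient confirms the study's implied rate of roughly 9 liters of water per kWh, a useful rule of thumb for the water cost of data center electricity.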

Somewhat inconsistent with their recorded extreme energy usage, data centers do not necessarily require all the electricity they expend to operate normally; in fact, for most data centers, using only half of their average total energy would result in service nearly identical in quality to their normal service. The reason data centers consume the other half is rooted in the expectation of delivering “perfect” service to their clients. This service is defined by three promises: fast network speeds, guaranteed data integrity, and, most importantly, functional service at all times. Although data centers aim for this level of service, they know it is impossible to be perfect. They therefore consume a significant portion of the extra energy on minimizing the risk of any service-affecting error occurring within the facility. Though a majority of clients do not require this level of service, for some clients any period of the data center being unavailable could create serious risk for their businesses. Likewise, for a data center, any period of being unable to service requests could create a serious risk for its own business: losing clients. As a result, many public data centers have no choice but to expend even more energy to meet their clients’ high service expectations, despite knowing that this additional energy, almost independent of its volume, will yield only a marginal improvement in service.

There are several factors on which data centers expend the additional energy. One of the most critical is the maintenance of cold temperatures and controlled humidity levels within the facility to ensure hardware can operate and client data is protected. One method many data centers use to maintain their controlled climate is computer room air conditioning (CRAC) units: devices that monitor and maintain air temperature by producing cold air that propagates throughout the facility and cools down server racks. This air is subsequently funneled back into the CRAC units to repeat the process indefinitely. By maintaining a cool temperature, the facility ensures a prolonged lifespan for the hardware and minimizes the risk of hardware being damaged or shut down due to overheating. Additionally, most CRAC units have humidifier components that help maintain a moderate relative humidity within the data center. It is imperative that the humidity is controlled so that the air becomes neither too dry nor too moist. In a dry atmosphere, the chance of electrostatic discharge (ESD) increases, which could result in destroyed or corrupted data. In a moist atmosphere, on the other hand, dust particles become more likely to stick to electrical components, reducing the components’ heat transfer ability and contributing to their corrosion. It should now be evident that CRAC units and alternative cooling mechanisms not only play a useful role within data centers by controlling atmospheric variables but also ensure hardware integrity as a result.

It may seem advantageous to prolong the equipment’s lifespan as long as possible; however, within most data centers a server’s average lifetime is about 4.4 years before it is replaced with an upgrade. Therefore, data centers can mitigate the energy their cooling mechanisms consume by aiming to sustain only the minimum air temperature that would ensure a piece of hardware’s lifespan equals the length of its upgrade cycle. Historically, CRAC units have been used to achieve this goal effectively, but as the average processing power within a server grows over time, the heat to be removed by CRAC units grows as well. This forces CRAC units to expend increasing amounts of power to perform their maintenance tasks, which not only makes a noticeable impact on a data center’s total energy consumption but also implies greater volumes of coolant needed to run the units themselves. The coolants used generally contain halocarbons or chlorofluorocarbons, mildly toxic substances that can contribute to ozone depletion. Given that the disadvantages of CRACs and their accompanying harmful environmental impact become magnified over time, it is crucial that data centers begin to invest in alternative cooling systems, such as evaporative chillers, in order to adopt a cooling mechanism that not only sustains its efficiency but also minimizes its ecological threat over time.

In addition to cooling systems, another area in which data centers invest a significant amount of extra energy to maximize their quality of service is the collection of alternative power sources to be used in case of primary power source failure. Even a small power outage could compromise data integrity or render the service unavailable, an unacceptable outcome. To protect against this possibility, data centers have uninterruptible power supply (UPS) systems that can temporarily power their facilities in an emergency. These systems generally use lead-acid batteries as their power supplies, but the batteries have a short average lifespan of 2.5 years as well as a guaranteed negative environmental impact, since their production often employs destructive mining techniques. Further compounding UPS systems’ disadvantages, data centers typically keep backup diesel-powered generators alongside their lead-acid batteries because modern battery technology is incapable of independently powering an average data center for an extended period of time. If backup generators do not exhaust their stored diesel quickly enough, the fuel can expire and must be discarded. This waste of fuel is unavoidable and adds to all data centers’ compounding environmental debt, formed as a byproduct of their interminable practice of generating minimal benefit at large environmental cost.

In an effort to maximize the reliability of all facets of their service, and in addition to protecting against external hardware-related failure, data centers must always be prepared for sudden, often rare, spikes in server load. To manage this risk, active servers cannot operate using all of their available power, because a server needs readily available computing power for peak traffic loads. As a result, average server utilization ranges from only 10% to 45%, depending on the data center’s infrastructure quality. Active servers within a data center waste further energy because usually about 10% to 30% of them are vampire servers: active servers that are not performing any jobs and instead sit idle while still drawing the same amount of energy. Unlike server utilization limitations, this is a byproduct of poor infrastructure management and can easily be avoided. This negligence, combined with data centers’ inability to concurrently deliver maximum server utilization and elastic peak-load handling, results in data centers wasting nearly 90% of the electricity used to power their servers, unnecessarily inflating their carbon footprints.
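The near-90% waste figure follows from combining the two effects above. A toy Python model, with illustrative percentages picked from the cited ranges (not measurements from any particular facility):

```python
# Toy model: every server draws full power regardless of load, a share of
# servers sit completely idle ("vampires"), and the rest run at a given
# utilization. The percentages are illustrative picks from the cited ranges.
def wasted_fraction(utilization, vampire_share):
    """Fraction of server electricity doing no useful work."""
    useful = (1 - vampire_share) * utilization
    return 1 - useful

# 15% utilization on working servers, with 20% of servers sitting idle:
print(round(wasted_fraction(0.15, 0.20), 2))  # 0.88, i.e. "nearly 90%" wasted
```

Under these assumptions only 12% of server electricity performs useful work, which is consistent with the waste estimate in the text.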

Despite data centers being a fundamental necessity for the functioning of the Internet, the negative ecological impact of the extra energy expended to fulfill clients’ quality-of-service expectations ultimately outweighs the positive anthropological impact of their almost negligible service improvements. This imbalance has been steadily increasing over the past two decades because Internet users have unconditionally embraced all improvements to speed and reliability. Unfortunately, the resource cost of obtaining the enhanced quality of service is oftentimes grossly disproportionate to the value of the service improvements. This inequality is hidden from Internet users and has conditioned them to expect this performance, ignorant of its costs. Stemming from their fear of damaging their business, data centers give themselves no choice but to comply with users’ expectations, ultimately making it more difficult for themselves to reduce their exceedingly negative impact on the environment.

Despite data centers’ significant contributions to pollution over the years, it took until roughly 2007 for people to notice the magnitude of the problem, as studies detailing their environmental impact began to be released. The following few years saw a growing public effort to rally against their massive carbon footprint, and by 2012, partially motivated by the fear of bad publicity, data centers began to actively invest in improvements in infrastructure and management that would reduce their carbon footprint. Of these improvements, the most globally beneficial was the diversification of data center energy sources through increased consumption of renewable energy.

Starting in 2010, Greenpeace began campaigning to spread awareness about data centers’ hidden dependence on coal. During this campaign, Greenpeace revealed that 50% to 60% of the electricity used by Facebook’s data centers was directly produced from coal. Originally, Facebook deflected the accusation by saying it had no control over the methods the grid used to produce electricity, but over the next year, after realizing the opportunity to drive growth in renewable energy, Facebook became one of the first tech companies to publicly commit to powering its data centers completely with renewable energy.

Since making that commitment, Facebook has made several strides in reducing its carbon footprint and has motivated other tech companies to do the same. In 2012, Facebook created the Open Compute Project, an industry-wide initiative to share specifications and best practices for building the most energy-efficient and economical data centers. The project gained traction within the industry, and Facebook used it to introduce the water usage effectiveness (WUE) metric, which has become increasingly popular. This metric provides more meaningful data about energy consumption than PUE because it can be used to estimate both total water and electricity consumption for a data center by using industry-average values. By taking these steps, Facebook played a vital role in increasing competition within the industry and permanently added environmental impact as a factor in consumers’ minds.
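Like PUE, WUE reduces to a simple ratio: site water usage per unit of IT equipment energy, in liters per kilowatt-hour. A minimal sketch; the facility figures below are invented for illustration, not reported numbers from any company:

```python
def wue(annual_site_water_liters, it_equipment_kwh):
    """Water usage effectiveness: site water use per unit of IT energy (L/kWh)."""
    return annual_site_water_liters / it_equipment_kwh

# Invented example: 500,000 L of water against 2,000,000 kWh of IT load.
print(wue(500_000, 2_000_000))  # 0.25 L/kWh
```

Because the denominator is IT energy rather than total facility energy, WUE can be combined with a facility's PUE to back out both its water and electricity consumption, which is what makes it more informative than PUE alone.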

Eager not to be outmatched, other large tech companies also made efforts to reduce their data centers’ environmental impact and promote change within the industry. Given that a 2016 study estimated that only 12% of a data center’s total carbon emissions are generated within the facility, with the remaining 88% coming from the production of its third-party resources such as electricity and water, it was clear that data centers needed to focus more attention on where they procure their resources than on the manner in which they use them. With respect to reducing carbon emissions from water usage, some companies have made progress in using more grey water and seawater in their data centers. In 2012, Google’s data center near Atlanta moved completely to recycled wastewater for its cooling mechanisms, releasing any excess purified water into the nearby Chattahoochee River. More impressively, in 2013, after building its large solar farm in North Carolina, Apple was able to announce that all of its data centers were running on 100% renewable energy, sourced from a combination of renewable energy bought from energy companies and renewable energy generated onsite. Many companies have begun to follow these companies’ paths to increase the percentage of clean energy they use, and for some small companies, this option would not have been available without the larger companies’ help.

Given their large energy demands, data centers give large companies leverage to push public utility companies to embrace renewable energy. Duke Energy, the largest energy utility company in the country, has been the target of pressure to increase renewable energy production. Pushed by the demands of companies like Apple and Google, Duke Energy has already invested more than $4 billion in solar and wind facilities in 12 different states over the past 8 years. In addition to pushing for cleaner sources of electricity, many data centers have been working with their local municipalities to build mutually beneficial water treatment plants that reduce water waste and practice more carbon-conscious methods of purifying water. This benefits data centers by giving them more control over the carbon offset of the water they consume, and it benefits local citizens by providing them more options for clean water. Thanks to the advances of large companies, smaller data centers have been able to benefit from these newly created renewable energy facilities; in fact, for some, a new facility was their only reasonable opportunity to use renewable energy. Ultimately, the continuing advances in reducing carbon emissions made by large tech companies have reduced their own ecological impact and set a good example for smaller companies to practice the same mindfulness in the future.

Aside from the external carbon emission reductions available to a data center, the infrastructure within a data center presents several possible improvements as well. As mentioned previously, a well-known problem with data center infrastructure is server utilization: many data centers effectively use only about 10% to 20% of the power consumed by their servers. This is largely due to the convention that a physical server is dedicated to at most one application, whereas an application can span multiple physical servers. Since most applications do not require maximum server power at all times, their servers often use only a fraction of their energy productively, resulting in a large and unnecessary contribution to a data center’s carbon emissions. Fortunately, in recent years many data centers have been able to avoid this problem through the adoption of virtualization. With virtualization, a physical server can power multiple virtual servers, and a virtual server executes processes in the same way a physical server does, so one physical server can accommodate the processes of multiple applications, each running on its own virtual server. Through this, it has been shown that servers can safely increase their utilization to as much as 80%, a marked increase in efficiency. Additionally, since physical servers can accommodate multiple jobs at once, data centers can reduce the number of active servers in their facilities, which reduces cumulative heat generation and carbon footprint. Incentivized by these clear benefits, many data centers have been updating their infrastructure to support virtualization; as early as 2011, it was recorded that 72% of companies had data centers that were at least 25% virtualized. Propelled by the increasing popularity of the cloud, there is no doubt that the adoption of virtualization will increase in the future and that its effect on reducing data center carbon emissions will be significant.
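The consolidation gain from virtualization can be illustrated with a toy bin-packing model: many lightly loaded applications, each formerly on a dedicated machine, are packed onto as few physical hosts as possible under a safe utilization cap. The workload sizes and the 80% cap below are illustrative assumptions, not real scheduler behavior:

```python
# Toy bin-packing model of consolidation through virtualization.
# Workload sizes and the 80% utilization cap are illustrative assumptions.
def hosts_needed(workloads, host_capacity, max_utilization=0.80):
    """Greedy first-fit packing of workloads onto identical physical hosts,
    each capped at a safe target utilization."""
    cap = host_capacity * max_utilization
    hosts = []  # remaining capacity per host
    for load in sorted(workloads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load
                break
        else:  # no host has room; bring a new one online
            hosts.append(cap - load)
    return len(hosts)

# Ten applications, each needing 15% of one server: dedicated hosting uses
# 10 machines, while consolidation at an 80% cap uses only 2.
print(hosts_needed([0.15] * 10, host_capacity=1.0))  # 2
```

In this sketch the same work runs on a fifth of the machines, each at 75% utilization instead of 15%, which mirrors the utilization gains and server-count reductions described above.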

While some data centers are readily embracing opportunities to reduce their carbon emissions, a substantial number have not embraced these opportunities at all. In 2014, of the 70 billion kWh of electricity consumed by data centers in the United States, only 5% can be attributed to the massive “hyperscale” data centers constructed by companies like Apple, Facebook, and Google, while the remaining 95% is attributed to small- and medium-sized data centers. This seems counterintuitive because these average-sized data centers cumulatively process only 66% of all data center traffic. The large discrepancy between the traffic and energy consumption of these two classes of data centers reveals that many average-sized data centers are largely energy-inefficient. This inefficiency is primarily rooted in a lack of incentive to become more energy efficient: doing so would require substantial investments in upgrading server infrastructure and time spent teaching IT staff about the new infrastructure, both of which are difficult given the limitations imposed by the small scale of their companies. Therefore, many of the carbon-emission-reducing upgrades available in data center operations only benefit hyperscale data centers, which do not suffer from the same limitations. Unless average-sized data centers are presented with worthwhile incentives, a majority of data center traffic will continue to be serviced by their outdated systems and ultimately remain unaffected by the several improvements available to reduce future carbon emissions.

Currently, the incentives for companies to upgrade or replace their average-sized data centers may be negligible, but in the coming years, the advantages of becoming a client of a hyperscale data center will be too hard to ignore. By 2020, hyperscale data centers are projected to account for 68% of total global data center processing power and to service 53% of total global data center traffic, increases of 74% and 55%, respectively, from their values in 2015. This projected rise in popularity stems from the compounding effects of hyperscale data centers’ improvements in cost efficiency, energy efficiency, and quality of service over the years. In addition, small data centers will have difficulty accommodating their future traffic, given that annual global data center traffic is projected to increase from 4.7 trillion gigabytes in 2015 to 15.3 trillion gigabytes in 2020. This more-than-threefold increase (to roughly 326% of the 2015 volume) will be much harder for smaller data centers to cope with than for hyperscale data centers, because small data center server utilization is currently about a quarter of that of hyperscale data centers and is expected to increase only marginally by 2020. As a result, this incapability will force small data centers to make a decision: upgrade their outdated infrastructure or become clients of hyperscale data centers. Despite the improbability of small data centers making energy-efficient infrastructure upgrades in the future, their future contribution to global carbon emissions will likely be reduced if they choose to redirect their traffic to hyperscale data centers instead. This migration has the potential to eliminate several serious inefficiencies in the industry and would also result in a more compact, environmentally conscious collection of companies that could motivate each other to invest in further efficiency innovations.
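A quick check of the projected traffic growth: the 2020 volume is about 326% of the 2015 volume, i.e., a 226% increase over it.

```python
# Projected annual global data center traffic, in trillions of gigabytes.
traffic_2015 = 4.7
traffic_2020 = 15.3  # projection

ratio = traffic_2020 / traffic_2015
print(round(ratio, 2))           # about 3.26: traffic more than triples
print(round((ratio - 1) * 100))  # about 226, the percent increase over 2015
```

Keeping the "percent of" and "percent increase" readings distinct avoids overstating (or understating) the growth by a full hundred points.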

Data centers have undergone a serious evolution since their boom during the dot-com bubble of the late 1990s. They have become vital to the functioning of the Internet and have indirectly played an integral, yet abstracted, role in many Internet users’ everyday lives. Despite the clear advantages they have given the modern world, data centers have been more ecologically harmful than they have been anthropologically helpful. This omnipresent ecological threat stems from their growing energy consumption and the permanent negative environmental impact it generates.

While a portion of data centers’ energy consumption is effectively used for the functioning of applications, the remaining energy is largely wasted by facilities’ inefficient infrastructure and their investments in chasing an unattainable quality of service. The quality of service delivered is especially correlated with the magnitude of data centers’ anthropological and ecological impacts, both positive and negative. These two realms of impact have an inverse relationship where a positive improvement for one realm generally results in a more negative future for the other; however, the magnitude of change is not always equivalent for both realms.

Currently in the industry, a substantial amount of energy is often expended to yield marginal increases in performance, but it is questionable whether users even notice the improvements bestowed upon them. Would they notice that an image loaded a fraction of a second faster than before? Or that the file they recently uploaded to the cloud now has several redundant copies to restore it in case it becomes corrupted? The likely answer is that people would neither notice nor even expect these improvements, but despite this possibility, companies still strive to deliver ever better speeds and service. This ultimately pushes users to become accustomed to a higher quality of service that they would otherwise have ignored. This hazardous cycle continues to pollute the environment further, and in order to eliminate its negative ecological impact, it is indispensable that data centers unanimously agree upon an upper bound for their future quality of service and begin to direct their focus toward minimizing their carbon impact instead.

The future of the intertwined relationship between data centers and the environment can unfold in many different ways. In one outcome, data centers will magnify their positive role of developing energy innovations and promoting their widespread adoption; in another, the destructive cyclic relationship between data centers and their clients will persist and continue to deliver constantly growing levels of ecological damage. The relationship’s future depends not only on the actions of data centers but on the actions of their users as well. In order to create positive change, it is imperative that users become cognizant of the toxic byproducts resulting from their cyber activity. Whether this awareness yields reductions in daily streaming activity or a decreased dependence on the cloud, the cumulative effect can make a marked difference in users’ overall impact on the environment and could potentially trigger a lasting acceptance of more environmentally conscious technology use.

The future state of the environment, as it has always been, is inherently decided by people’s actions. Technology users need to accept that since they were satisfied with humbler service in the past, they can still be satisfied with the same quality of service in the future. In the same vein, data centers may be reducing their yearly increases in energy consumption rates, but this trend is not guaranteed to last. Therefore, in order to ensure the longevity of the environment, it is ultimately humans’ responsibility to not only reduce their levels of consumption but also amend their relationship with the environment before their intertwined longevity is permanently compromised.

Let’s go somewhere!

Hey y’all,

I don’t know if anyone will see this in time, but we have the media challenge and have the opportunity to explore off campus. Would anyone want to get together and go explore the unknown? I don’t have a car, but we could share an Uber. Let me know if you are interested by commenting on this post or emailing me at my Duke email: Victoria.grant@duke.edu. I think getting together would be really cool and a great way to go out farther in Durham. Hopefully I’ll hear from y’all; if not, enjoy the challenge!

Eco Media Challenge

By now, we have read environmentally-themed novels and short stories, we’ve discussed eco-imagery and climate communications, we’ve conducted deep critical analyses of a range of environmental issues, we’ve watched environmental films, and for today we’ve skimmed the UN Paris Agreement and the Pope’s Encyclical Letter Laudato Si. The only thing we’ve not yet done is move our scholarship outside (into ‘nature’) and beyond the classroom (to the public social sphere). Today we’ll do both. Today your challenge is to participate in the local and global online conversations around the environment and climate change. On Thursday, then, we’ll discuss the Laudato Si, the UN Paris Agreement, and the environmental explorations you document today. <The schedule has been corrected to reflect our new plan>

 

For TODAY:

  1. Create a Twitter or Instagram account if you do not already have one. If you already have one but would prefer to create a new one for this assignment, please feel free.
  2. Send me via email the Twitter or Instagram handle you’ll be using today.
  3. Explore! From 3:05 – 4:20, explore the campus (East, West, and Central) and/or spaces off-campus (Duke Forest and the Eno River are great if you’ve never been) and document your environmental findings. Find a new outdoor space, take a new look at a favorite place, or go on an afternoon hike with your eyes trained toward your surroundings. Post images of what you see to your Instagram or Twitter account using the hashtag #ecolit290 + at least one of the following:

#climate
#climateselfie
#soilselfie
#everydayclimatechange
#climatechange
#environment
#globalwarming
#water
#renewableenergy
#climatejustice
#environmentaljustice
#oil
#sustainability
#recycle
#ActonClimate
#nature
#animal
#plant
#mineral
#humanschangingclimate
#anthropocene

As you post your photos and selfies, take note of tweets or Instagram posts that are using the same hashtag. See if you can get someone (or many someones) to retweet or regram you. Accumulate at least 10 high-quality microposts before the close of the course period. Questions? Email me, send me a chat message via gmail (amandastarling [at] gmail…), or send a DM via my @stargould Twitter account. I will be online and at the ready during our full course period, watching for your posts and waiting to answer any questions you may have. Bonus: You do not need to go to the classroom today. Just start your explorations when the course period begins. Don’t forget to send me your Twitter or Instagram account before you start and don’t forget to use #ecolit290 so that your posts will be counted toward your total.

Thabit Pulak: The Flint Water Crisis – Not simply an honest mistake

The Flint Water Crisis – Not simply an honest mistake

In spring 2014, residents of Flint, Michigan were struck by surprise as they turned on their taps and received brown, contaminated water. It was later found that the water was contaminated with lead at concentrations hundreds of times beyond the acceptable limit, effectively poisoning the whole city’s population. With America’s per capita GDP pushing $53,000, compared to the worldwide average of about $10,000, we are one of the wealthiest nations on the planet. To other nations, the United States is often viewed as a beacon of success – a place where everyone lives in comfort and happiness. But such an image doesn’t hold up well when one looks at the living situation of those residing in Flint, Michigan. As a city stricken by a dangerously contaminated water supply, with some of the poorest residents in the nation, Flint is a grim reminder of the suffering some residents of America endure on a day-to-day basis as a result of gross negligence from leadership and a failure to allocate resources. What specifically happened here? What impact did this ultimately have on the people of Flint?

April 25th, 2014 was the fateful day when officials of Flint, Michigan officially switched the city’s water source from the Detroit Water and Sewerage Department (sourced from Lake Huron) to the Flint River, citing that this was a “temporary switch”. The ultimate goal was to build a pipeline to the Karegnondi Water Authority (KWA), which officials claimed would save the city 200 million dollars over the next 25 years if executed properly. Officials tried to keep the public relaxed about the decision, claiming that “Flint water is safe to drink” [1]. However, officials didn’t proactively run any tests to see whether such a drastic change in water source would affect corrosion within the pipes. One of the biggest red flags was that Flint’s pipeline system was made of lead – any corrosion would pose a serious health risk to citizens. Yet the officials decided to take a “wait and see” approach, as Michigan Radio characterized it at the time [2]. The officials didn’t have to wait long. Reports of contaminated water came almost immediately after the switch was made. In May, reports of E. coli in the water prompted the city to issue a boil-water advisory. Eventually, it turned out that the water from the Flint River was indeed corroding the pipes, and harmful corrosion byproducts were flowing out of people’s taps. In fact, in October, General Motors stopped using the water, fearing it would corrode the machines in their facility. By February of 2015, it was officially determined that there was extensive lead contamination of water supplies across Flint (this was already known – it just took a while for city officials to catch up to the fact) [3].
The interesting thing to note here is that it took several months before city officials would even officially acknowledge that something dangerous was contaminating the water. What could explain such a delay?

The Michigan Civil Rights Commission published a 129-page report after a yearlong investigation of the Flint water crisis. Over 150 residents’ testimonies were heard and compiled in the report. The commission concluded that the problems in Flint could be connected to “historical, structural and systemic racism combined with implicit bias”. The report specifically zeroed in on how the emergency management’s reckless decision to switch the water source from Lake Huron to the Flint River, without adequate precautions, disproportionately affected the communities of color that predominantly make up Flint. According to the US census, Flint is over 57% black [4]. Over 41% of the citizens of Flint live below the poverty line. Democratic U.S. Rep. Dan Kildee, representing the city of Flint, stated, “It’s hard for me to imagine the indifference that we’ve seen exhibited if this had happened in a much more affluent community”.

Upon further analysis of the crisis, some potential conflicts of interest arise involving the politicians who made some of the decisions. Governor Snyder of Michigan’s chief of staff was Dennis Muchmore, who was involved with the Flint water issue. During this time, his wife, Deb Muchmore, was also Nestle’s spokesperson in Michigan; Nestle happens to be the largest private owner of water resources in the state. Given that Nestle’s business model revolves around bottled water, it is not hard to see a potential conflict of interest. Michael Moore elaborates: “The Muchmores have a personal interest in seeing to it that Nestles grabs as much of Michigan’s clean water was possible — especially when cities like Flint in the future are going to need that Ice Mountain.” While Moore might be slightly alarmist in his rhetoric here, there is a solid point to be made. The Michigan government recently allowed Nestle to extensively expand groundwater retrieval from sources just 120 miles from Flint, for a measly $200 per year [5]. This decision to give Nestle access to such vast amounts of fresh water for little to no benefit to the state sparked a major controversy among the people of Flint. The finer details of the arrangement with Nestle are also still unclear, as the specific terms haven’t yet been disclosed. Unfortunately, substantial information regarding this deal is unlikely to come out, especially given Michigan’s dead-last ranking in transparency, according to a recent national study of state ethics and transparency laws [6].

Regardless of the debate over the intentions behind the Flint water crisis, at this point there is at least one thing everyone can agree on – that the entire city was exposed to noxious water. This toxic, lead-laced water has already affected the population and will very likely have long-lasting effects. A study by Virginia Tech researchers concluded that Flint as a whole was contaminated by levels of lead far exceeding safe limits. Furthermore, certain parts of Flint had households with lead levels exceeding 13,000 parts per billion (ppb). This is an extremely high amount – for comparison, 5,000 ppb is considered a level of contamination equivalent to that found in industrial waste, while 5 ppb is considered the level of lead at which one should be concerned. Extended exposure to lead at concentrations of 5 ppb and above leads to various neurological and developmental problems, which means that children are particularly susceptible [7]. With such extreme concentrations, citizens of Flint were already being affected by lead poisoning within a very short time window.
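To put those concentrations in perspective, a quick back-of-the-envelope comparison using only the ppb figures cited in this paragraph:

```python
# Lead concentrations in parts per billion (ppb), as cited above.
flint_peak = 13_000       # highest household readings in parts of Flint
industrial_waste = 5_000  # contamination comparable to industrial waste
concern_level = 5         # level at which concern is warranted

print(flint_peak / concern_level)     # 2600.0 -> 2,600x the level of concern
print(flint_peak / industrial_waste)  # 2.6    -> 2.6x industrial-waste levels
```

In other words, the worst household readings were not merely above the concern threshold but thousands of times above it, and still more than double the level associated with industrial waste.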

As of now, 6 officials have been criminally charged in the Flint water crisis case [8]. There were intentional efforts to cover up the facts of the crisis – according to the Detroit Free Press, “Some people failed to act, others minimized harm done and arrogantly chose to ignore data, some intentionally altered figures … and covered up significant health risks.” The fact that criminal charges are now being filed indicates that the notion of the overall situation as an “innocent mistake” was untrue.

When was the last time we heard about such an issue affecting an affluent population? Had it affected a richer population, would the actions taken by the government have been much quicker and more effective? Would there have been willful negligence by leaders on such an extreme scale? We may never find the answers to such questions – and that is the irony of it all. While officials argue over who is to blame and who isn’t, a neglected population continues to suffer. Down the road, what does the future of America look like? Can we work to prevent underprivileged populations from suffering disproportionately from manmade environmental problems? The situation in Flint, Michigan tells us that we have a lot of work left to do. Only time will tell if we are learning from our mistakes.

Works Cited

1. Snyder, Rick. “Snyder Email.” State of Michigan. Executive Office, n.d. Web. 04 Mar. 2017.

2. Smith, Lindsey. “After Ignoring and Trying to Discredit People in Flint, the State Was Forced to Face the Problem.” Michigan Radio. N.p., n.d. Web. 03 Mar. 2017.

3. Roy, Siddhartha. “Hazardous Waste-levels of Lead Found in a Flint Household’s Water.” Flint Water Study Updates. N.p., 01 Sept. 2015. Web. 03 Mar. 2017.

4. “Population Estimates, July 1, 2015, (V2015).” Flint City Michigan QuickFacts from the US Census Bureau. N.p., n.d. Web. 03 Mar. 2017.

5. Ellison, Garret. “Nestle Bottled Water Plant Upgrade Driving More Groundwater Extraction.” MLive.com. N.p., 31 Oct. 2016. Web. 03 Mar. 2017.

6. Egan, Paul. “Michigan Ranks Last in Laws on Ethics, Transparency.” Detroit Free Press. N.p., 09 Nov. 2015. Web. 06 Mar. 2017.

7. Mayo Clinic Staff. “Lead Poisoning.” Mayo Clinic. N.p., 06 Dec. 2016. Web. 03 Mar. 2017.

8. “6 State Employees Criminally Charged in Flint Water Crisis.” Detroit Free Press. N.p., 29 July 2016. Web. 03 Mar. 2017.

Don’t Let Waste Go to Waste

 

Duke University employs over 37,000 people and its property holdings span 8,691 acres (Duke University’s Office of News and Communication). Consequently, its energy footprint is enormous. In 2013, energy comprised 76 percent of the University’s greenhouse gas emissions, and of its carbon emissions, 50 percent was derived from the purchase of electricity (Sustainable Duke – Energy). Equally large, meanwhile, is the University’s influence on sustainability and environmental policy. In June 2007, President Richard Brodhead signed the American College & University Presidents Climate Commitment and pledged Duke to achieving carbon neutrality by 2024 (Sustainable Duke 2009). The University now faces an uphill battle, as change will need to be swift and efficient in order to achieve this goal.

Duke plans to combine a strategy of emission reduction and carbon offsetting, which will involve “providing successful examples of technologies such as solar PV, solar thermal, biomass and biogas steam production, and hybrid fleet vehicles” (Sustainable Duke 2009). However, while many people have heard of solar power and hybrid car technology, the idea of biomass or biogas is more unfamiliar because it is much less common; according to the American Biogas Council, the United States has just 2,200 sites producing biogas in all 50 states; by comparison, Europe operates over 10,000 (American Biogas Council 2016).

The technology is relatively simple in its design. Organic material—in this case, pig manure—is delivered to a digester system, which breaks it down into biogas and digested material. Solids and liquids are used for the production of fertilizer, compost, and other agricultural processes. The biogas is taken out and processed until it is mostly composed of methane, which is then distributed and used for electricity and fuel (American Biogas Council 2016).

In North Carolina in particular, biogas is becoming increasingly important thanks to the Renewable Energy & Energy Efficiency Portfolio Standard law, which was passed in 2007 (North Carolina Utilities Commission 2008). It requires public utilities to produce 12.5% of their portfolios through renewable energy resources or energy efficiency measures, and as of 2017, 0.14% of this must come from swine biogas. In order to comply, electric utilities must either purchase or develop 284,000 swine Renewable Energy Certificates (each equal to 1 megawatt-hour of electricity) by 2018 (Maier 2015).

However, the practice of large-scale hog farming translates into less-than-ideal outcomes for residents of surrounding areas. Between the 1980s and 1990s, North Carolina went from fifteenth to second in hog production in the United States, with most of this explosive growth taking place in the “Black Belt”—the eastern region of the state where a large African American population still suffers from high rates of poverty, poor health care, low educational attainment, unemployment, and substandard housing (Nicole 2014). This proximity presents a many-layered problem. Namely, “people of color and the poor living in rural communities lacking the political capacity to resist are said to shoulder the adverse socio-economic, environmental, or health related effects of swine waste externalities without sharing in the economic benefits brought by industrialized pork production” (Edwards & Ladd 2001).

Although some have argued that the geographic distribution of pig farms is purely coincidental, researchers have found that the counties with larger minority populations contained proportionally more hog waste, “even when controlling for regional differences, urbanization level, property value, and attributes of the labor force” (Edwards & Ladd 2001). And this is not a small issue: the North Carolina Department of Agriculture and Consumer Services reported in 2012 that between 9 and 10 million hogs were raised at these farms, resulting in the production of approximately 19.6 million tons of waste annually (NCDACS 2012). This waste is usually stored in vast “lagoons” that are breeding grounds for Salmonella and antibiotic-resistant bacteria in addition to containing insecticides, antimicrobial agents and other pharmaceuticals, and nutrients that can cause widespread pollution and damage to local ecosystems when they inevitably leach into local waterways or overflow during storms (Nicole 2014).

The cumulative effect of proximity to hog farms is damage to one’s health that ranges from mild to life-threatening. Sacoby Wilson, a University of Maryland environmental health professor who has documented environmental justice issues surrounding hog farms in North Carolina and Mississippi, explains that the problem is worse than simply bad smells. “You have exposures through air, water, and soil. You have … inhalation, ingestion, and dermal exposures. People have been exposed to multiple chemicals: hydrogen sulfide, particulate matter, endotoxins, nitrogenous compounds. Then you have a plume that moves; what gets into the air gets into the water. You have runoff from spray fields. These are complex exposure profiles” (Wilson & Serre, 2007).

Fortunately, growing interest in using anaerobic digesters to process biogas at these farms may provide an avenue for combating these problems. By retaining the waste and putting all parts of it to use, adverse outcomes and severe ecological damage can be avoided, at least in part. The benefits of swine biogas are twofold: first, it provides a fuel source that burns cleanly, and second, it is an efficient use of manure that would otherwise literally go to waste.

The gas produced by anaerobic digestion primarily consists of methane, which is a relatively clean fuel when burned due to its chemical simplicity (Laurell 2014). However, if released un-combusted into the atmosphere, methane is roughly thirty times more potent than carbon dioxide as a greenhouse gas, making its capture and use increasingly important as the pace of global warming continues to accelerate (Kelly 2014). According to statistics from the Energy Information Administration, hog waste accounts for 11.34% of methane emissions from the agricultural industry, a figure which has increased due to growth in hog farming since 1990 (Conti & Holtberg 2011). Burning methane results in more energy per unit of carbon dioxide emissions as compared to oil (29% less) and coal (43% less). In addition, unlike other fuels, methane combustion releases essentially no dangerous nitrous oxide, sulfur dioxide, or particulate matter into the atmosphere (Laurell 2014).

Additionally, increasing production of swine biogas gives pig farmers another source of income while using up manure that would otherwise have simply been discarded or potentially washed away in rainstorms and polluted local bodies of water. For example, one farm in North Carolina has 28,000 hogs and a 1.2 million gallon anaerobic tank digester, which processes about 50,000 gallons daily of hog manure, carcasses from pig and chicken operations, and dissolved air flotation (DAF) sludge from nearby animal processing plants. “While a significant portion of the $5 million project was financed by the farmer, [owner Billy] Storms is not worried about the return on his investment. He says he will easily make his money back with the combination of selling the electricity and the accompanying Renewable Energy Certificates… and with payments for taking the DAF sludge from the plants” (Maier 2015). Processing the waste also creates a cycle of benefits for the farmer, as some of the gas that is produced can be kept and used for power on-site. On top of this, waste heat can be used for “heating barns, water, and greenhouses or even used for drying grain” (Maier 2015).

Such systems have seen moderate success when implemented in North Carolina. For example, in 2011 Google partnered with Duke University and Duke Energy to implement such a system at Yadkin County’s Loyd Ray Farms. The Sustainable Duke website explains, “The electricity… is used to support five of the nine swine barns at the farm and the operation of the innovative animal waste management system. From the digester, the liquid waste flows to an open-air basin where the wastewater is aerated to reduce the concentrations of ammonia and other remaining pollutants so that it can be reused for irrigation.” Not only is this system self-sustaining and environmentally friendly, but it also reduced carbon emissions by 2,087 metric tons in just one year (Sustainable Duke – Loyd Ray Farms).

Unfortunately, capturing and processing hog waste will not reverse the adverse health outcomes for individuals who live near these farms. Airborne particulates, unhealthy compounds, and toxic gases will still pose challenges for these communities, and the intersection of socioeconomic status and race in these areas adds another layer of ethical and social obligations to the burden on North Carolina to find a long-term solution. Anaerobic digesters are certainly very promising to avoid ecological damage and reduce dangerous greenhouse gas emissions, but they cannot be the only answer to this extremely complex problem. They will have to be managed extremely carefully to prevent methane emissions, and if the solid waste is used for fertilizing or irrigation, it must be processed adequately to remove dangerous compounds that can be extremely toxic to surrounding ecosystems if they leach into water supplies. Perhaps some technological advancements will be able to improve this solution in the future; however, for now, using swine biogas for energy is still better than letting all that waste go to waste.

 

References

American Biogas Council (2016). Biogas 101 Handout. Retrieved from https://www.americanbiogascouncil.org/pdf/ABC%20Biogas%20101%20Handout%20NEW.pdf

Conti, J., & Holtberg, P. (2011). Emissions of greenhouse gases in the United States 2009. U.S. Energy Information Administration. Retrieved from http://www.eia.gov/environment/emissions/ghg_report/pdf/0573%282009%29.pdf

Duke University’s Office of News and Communication. Duke at a Glance. Retrieved from https://duke.edu/about/duke_at_glance.pdf

Edwards, B. & Ladd, A.E. (2001). Race, poverty, political capacity and the spatial distribution of swine waste in North Carolina, 1982–1997. North Carolina Geography, (9):55–77. Retrieved from http://www.academia.edu/1446269/Race_poverty_political_capacity_and_the_spatial_distribution_of_swine_waste_in_North_Carolina_1982-1997

Kelly, M. (2014, March 26). A more potent greenhouse gas than CO2, methane emissions will leap as Earth warms. Research at Princeton blog. Retrieved from https://blogs.princeton.edu/research/2014/03/26/a-more-potent-greenhouse-gas-than-co2-methane-emissions-will-leap-as-earth-warms-nature/

Laurell, N. (2014, June 12). Natural gas overview – why is methane a clean fuel? The Discomfort of Thought. Retrieved from http://www.nlaurell.com/natural-gas-overview-why-is-methane-a-clean-fuel/

Maier, A. (2015, August 12). Hog wild about biogas. North Carolina Bioenergy Council. Retrieved from https://research.cnr.ncsu.edu/sites/ncbioenergycouncil/2015/08/20/289/

NCDACS (2012). 2012 North Carolina Agricultural Statistics. North Carolina Department of Agriculture and Consumer Services/National Agricultural Statistics Service, U.S. Department of Agriculture. Retrieved from http://www.ncagr.gov/stats/2012AgStat/AgStat2012.pdf

Nicole, W. (2013). CAFOs and Environmental Justice: The Case of North Carolina. Environmental Health Perspectives, 121(6), a182–a189. http://doi.org/10.1289/ehp.121-a182

North Carolina Utilities Commission (2008). Renewable Energy and Energy Efficiency Portfolio Standard (REPS). Retrieved from http://www.ncuc.commerce.state.nc.us/reps/reps.htm

Sustainable Duke (2009, October 15). Duke University Climate Action Plan. Retrieved from http://sustainability.duke.edu/climate_action/Duke%20Climate%20Action%20Plan.pdf

Sustainable Duke. Energy. Retrieved from http://sustainability.duke.edu/campus_initiatives/energy/index.html

Sustainable Duke (n.d.). Loyd Ray Farms. Retrieved from http://sustainability.duke.edu/carbon_offsets/loydrayfarms/index.php

Wilson, S.M. & Serre, M.L. (2007). Examination of atmospheric ammonia levels near hog CAFOs, homes, and schools in eastern North Carolina. Atmospheric Environment, 41(23), 4977–4987. Retrieved from http://www.sciencedirect.com/science/article/pii/S1352231007000453

 

Health and Socioeconomic Disparities of Food Deserts

Brielle Tobin and Barbara Lynn Weaver

Health and Socioeconomic Disparities of Food Deserts

Food insecurity exists when communities experience inconsistent access to adequate food due to a lack of money and other resources. While especially relevant in today’s social climate, food insecurity in and of itself is not a new issue. Historically, there have always been people who don’t know where or when their next meal will come, such as early hunter-gatherers. However, despite the time elapsed since we transitioned away from a nomadic lifestyle, one in six Americans still experience food insecurity, either lacking funds to purchase food or lacking access to food (Hartman). The latter can be described using the term food deserts, defined as “households being more than a mile from a supermarket with no access to a vehicle” (Chinni). The more recent societal transition from urban life to suburban life has exacerbated food insecurity, as wealthy families move out of cities and grocery stores move with them. For families with cars, this spread of resources doesn’t create a problem, but for families without transportation, the distance to the grocery store, and therefore access to food, can become impassable.

In areas such as Durham, North Carolina, where a history of redlining defines and restricts economic opportunities for all households within specific areas, families must rely on supermarkets and grocery stores that cater to low-income budgets for nutritious meals (Michaels). Redlined districts were originally drawn along racial lines, and those within them were, and still are, deliberately denied loans based simply on the area in which they live. As a result, private transportation is often a luxury that only those of higher socioeconomic status, or those living in higher-graded districts, can afford. Grocery stores that carry the healthiest food are also often the most expensive. Consequently, chains such as Whole Foods concentrate in areas of higher wealth, like suburbs, where families that can afford cars and gas can also afford the more expensive goods. The need for transportation and mobility to reach nutritious food is then the first barrier between a well-rounded dinner for four and items off the four-for-four menu at a fast food restaurant.

As demonstrated by redlining, low-income populations facing discrimination are almost always minority populations, and these populations are more likely to live in areas affected by food deserts. For example, the United States Department of Agriculture states that “the percent of the population that is non-Hispanic Black is over twice as large in urban food deserts than in other urban areas” (Dutko). A history of oppression coupled with increasing economic disparities creates areas of poverty in which food deserts appear. This pattern is evident in the specific locations of grocery stores, as stated in the American Journal of Preventive Medicine: “Studies have found that wealthy districts have three times as many supermarkets as poor ones do, that white neighborhoods contain an average of four times as many supermarkets as predominantly black ones do” (Morland). The distinction between white and/or wealthy neighborhoods and lower-income minority communities is not a new phenomenon; however, food security is a pertinent, daily battle in which every person, regardless of wealth or race, must participate.

Discrimination in terms of supermarket placement is not solely found in densely populated cities, as food deserts outside of urban areas also present immense obstacles for rural communities. For instance, the subject of food sovereignty is prioritized in numerous Native American communities. An exemplification of this issue is the Oglala Lakota people of South Dakota’s Pine Ridge Reservation, who rely on shipments from outside the Nation for 95 percent of their goods (Elliot). The dependency caused by this food desert restricts the lives of those within the community and prevents the community from maintaining its independence. The experiences of people living within urban and rural food deserts establish the pressing matter of food deserts as an environmental justice issue. Withdrawing access to goods from specific communities based on race and income denies the right of all people to lead safe and healthy lives, and stresses the increasing importance of providing equal opportunities for adequate food.

Food deserts are indicators of more than just socioeconomic injustice; they signal public health and safety concerns for those living within their borders. Residents with a chronic lack of access to adequate food resources are shown to have higher rates of diabetes, obesity, and cardiovascular disease (Corapi). Families who cannot afford grocery stores will purchase food from the ever-available and affordable fast food restaurants, causing higher than usual rates of chronic illness to develop in the population. Along with medical bills that may exceed what a family is capable of paying, these chronic illnesses can cause diet-related cancers and even premature death. These severe consequences of living in a food desert represent the potential for a life expectancy far shorter than that of counterparts living near a grocery store. For example, adults diagnosed with diabetes can anticipate lives 15 years shorter than would otherwise have been allotted to them (Gallagher). In this case, consistent access to healthy food is truly a life or death situation.

Image 1: Diagram of the Impacts of Food Deserts by Barbara Lynn Weaver

Along with experiencing shorter than average life expectancy, families living within the bounds of food deserts are also subjected to decreasing wealth as time passes. By their very nature, food deserts are located in areas of low population and low income, but as time progresses, these two characteristics are exacerbated. As wealth abandons a neighborhood, businesses follow. This means that all too often, when new stores do open, they choose areas of relative wealth and prosperity. Without new businesses to bring economic attention to a neighborhood, that neighborhood grows poorer over time. This trend of decreasing wealth represents a positive feedback loop, in which low initial wealth causes even lower wealth to develop within a population. Additionally, food deserts have long-term impacts on the economic success of the children raised within them. Children facing poor nutrition or chronic illness are statistically more susceptible to encountering social and behavioral problems in school (Child Hunger). Such problems can hinder educational advancement, causing children to incur administrative discipline or academic probation. Living in a food desert can mean the difference between educational, and therefore economic, success and failure.

While the causes of food deserts are systemic and their impacts often cyclical, many solutions are emerging that attempt to address multiple aspects of the harm caused by food deserts. A promising program aimed at the eventual erasure of food deserts originates from the USDA’s National Institute of Food and Agriculture. The goal of the USDA’s Community Food Projects Competitive Grant Program is to increase access to local nutritious food by working with producers and consumers to promote independence, create long-term solutions, and construct programs that benefit the whole community (CFP). Another solution is the opening of community-owned food cooperatives. In areas such as Greensboro, North Carolina, communities previously living within food deserts are given a sense of responsibility when shopping at their co-op grocery store because not only are they improving their own health, but they are also showing the value of communities that mobilize and make democratic decisions to benefit one another (Johnson). However, a point often overlooked is that increasing access to supermarkets and grocery stores does not necessarily change behavior. According to a pilot study in Philadelphia, members of a community in which access was expanded did not show an increase in the consumption of fruits or vegetables (Cummins). To further decrease the harm of food deserts, new initiatives need to be created to address the connection between community awareness and individual action.

Examining the causes, impacts, and solutions of resource insecurity found inside food deserts reveals the complexity of the problem and the importance of environmental justice. Historical events like redlining, which separated people by race and socioeconomic status, are inextricably linked to the creation of food deserts. Food deserts in turn lower the wealth and health of affected communities, leading to increasing public health concerns and propagating the cycle of poverty. Programs that acknowledge the issue, and bring it into the public sphere, are key to combating food deserts. Grocery stores alone cannot solve food deserts, and it is vital that the culture of fast food and convenience be examined in relation to socioeconomic disparities. Environmental justice, behavioral change, and exposure to adequate food, used in combination, have the potential to bring the development and expansion of food deserts under control.

 

Works Cited

“Child Hunger in America.” Feeding America. Web. Feb 28, 2017. http://www.feedingamerica.org/hunger-in-america/impact-of-hunger/child-hunger/?referrer=http://www.sustainableamerica.org/blog/what-is-food-insecurity/

Chinni, Dante. “The Socio-Economic Significance of Food Deserts.” PBS. June 29, 2011. Web. Feb. 29, 2017.  http://www.pbs.org/newshour/rundown/the-socio-economic-significance-of-food-deserts/

“Community Food Projects (CFP) Competitive Grants Program.” National Institute of Food and Agriculture in partnership with the USDA. Web. Feb 27, 2017. https://nifa.usda.gov/funding-opportunity/community-food-projects-cfp-competitive-grants-program

Corapi, Sarah. “Why it takes more than a grocery store to eliminate a ‘food desert’” PBS. Feb 3, 2014. Web. Feb 27, 2017. http://www.pbs.org/newshour/updates/takes-grocery-store-eliminate-food-desert/

Cummins, Steven, Ellen Flint, and Stephen A. Matthews. “New Neighborhood Grocery Store Increased Awareness Of Food Access But Did Not Alter Dietary Habits Or Obesity.” Health Affairs. N.p., 01 Feb. 2014. Web. 03 Mar. 2017. <http://content.healthaffairs.org/content/33/2/283.abstract>.

Elliot, Scott. “Tribal Communities Strive to Regain Food Sovereignty.” National Institute of Food and Agriculture in partnership with the USDA. Nov 17, 2015. Web. Mar 2, 2017. http://blogs.usda.gov/2015/11/17/tribal-communities-strive-to-regain-food-sovereignty/#more-61940

Gallagher, Mari. “The Chicago Food Desert Progress Report.” Mari Gallagher Research and Consulting Group. June 2009. Web. March 1, 2017. http://marigallagher.com/site_media/dynamic/project_files/ChicagoFoodDesProg2009.pdf

Hartman, Brian. “Food Insecurity: 1 in 6 Americans Struggles to Buy Food.” abc News. Sept. 8, 2011. Web. Feb. 28, 2017. http://abcnews.go.com/blogs/headlines/2011/09/food-insecurity-1-in-6-americans-struggles-to-buy-food/

Johnson, Cat. “New North Carolina Coop to Turn a Food Desert into a Food Oasis.” Shareable. 9 Feb. 2015. Web. 03 Mar. 2017. http://www.shareable.net/blog/new-north-carolina-coop-to-turn-a-food-desert-into-a-food-oasis

Morland, K., Wing, S., et al. “Neighborhood characteristics associated with the location of food stores and food service places.” American Journal of Preventive Medicine. January 2002, vol. 22(1): p. 23-29. http://www.ncbi.nlm.nih.gov/pubmed/11777675 (3/05/11)

Dutko, Paula, Michele Ver Ploeg, and Tracey Farrigan. “Characteristics and Influential Factors of Food Deserts.” Economic Research Service, USDA. Aug. 2012. Web. Feb. 27, 2017. https://www.ers.usda.gov/webdocs/publications/err140/30940_err140.pdf

Michaels, Will, and Frank Stasio. “Mapping Inequality: How Redlining is Still Affecting Inner Cities.” WUNC, North Carolina Public Radio. Jun 26, 2014. Web. Mar 2, 2017. http://wunc.org/post/mapping-inequality-how-redlining-still-affecting-inner-cities#stream/0

A Globally Impending Disaster: Riley’s Home is Being Destroyed

Nanki Singh and Riley Cohen

March 3rd, 2017

 

Introduction:

 

Long after its original use as a gum for sealing indigenous canoes, the first barrel of refined Alberta Oil Sands crude was shipped out only in 1978. Despite being processed for only around forty years, the Oil Sands have had environmental, social, and economic impacts on Canada, and even the world.

 

In recent years the Oil Sands industry in Alberta has expanded dramatically, evolving into a multi-billion dollar industry that employs hundreds of thousands of people. It would be impossible for Canada to completely eliminate its reliance on the Oil Sands in a short period of time without causing serious harm to the employees of this industry. However, it is clear, even to the current Prime Minister Justin Trudeau, that allowing the oil industry to control the economic fate of Canada is unwise. It must be phased out.

 

Over the course of this paper, we will explore the specific impacts of the Oil Sands across multiple facets of the issue and weigh the benefits and harms that the Oil Sands bring to Canada. This should clarify and, hopefully, simplify a complex issue.

 

History of the Oil Sands:

 

Upon first exploring western Canada, colonists were intrigued by the Oil Sands and documented their encounters with these strange pockets of black goo. One of the earliest recorded descriptions of the Oil Sands, or bitumen, comes from Sir Alexander Mackenzie in 1788. Not only does he note just how extensive the reserves of bitumen are, but he also writes about how this substance was used by the indigenous population to seal their canoes. Interestingly, the process by which the native inhabitants of Alberta refined the bitumen into gum for their canoes is similar to the process used today to refine the sand into oil.

 

Throughout the 19th century there was no commercial use for the Oil Sands, but soon after the turn of the century many began to drill in Alberta, hoping that the presence of oil in the sand was the overflow of a large oil reserve trapped beneath the surface. To their dismay, no attempts to strike pockets of oil were successful. The United States, shortly after gaining nuclear technology, also developed an interest in refining the Oil Sands. It planned to detonate a nuclear bomb underground, which would theoretically heat the ground to temperatures high enough to refine the sand into crude oil. Thankfully, the Canadian government rejected this project.

 

In the 1960s, the first successful projects to refine the Oil Sands began to emerge, and by 1971, the Sun Oil Company was pumping out 30,000 barrels of crude oil a day. This marks the beginning of large-scale corporate mining operations in Alberta.

 

Today, Oil Sands extraction has grown to gargantuan proportions. The deposit is now the world’s third largest proven oil reserve. In 2007, 726,100 barrels of oil were being pumped out each day. If companies keep to their current mining objectives, planned production could reach around 5,000,000 barrels of oil a day.

 

Citations:

Estimate of production is taken from a synthesis of data from:

“Athabasca oil sands.” Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 27 February 2017. Web. 28 Feb. 2017.

Petroleum History Society Archives 15.4 (2005): n. pag. Web. 28 Feb. 2017.

“Oil Sands History and Development.” Institute for Oil Sands Innovation. University of Alberta, Oct. 2014. Web. 28 Feb. 2017.

Gordon Pitts. “The Man Who Saw Gold in Alberta’s Oil Sands.” Globe and Mail [Calgary] 25 Aug. 2012: n. pag. Web.

 

Oil Sands’ Emissions and Contributions to Climate Change:

 

Although the Oil Sands have been industrially mined for only about forty-five years, a brief period on a geological timescale, their impact on the environment and contribution to global warming are immense. With today’s technology, we have access to about 170 billion barrels of oil from the Oil Sands. It is estimated that the Oil Sands hold up to 1.63 trillion barrels of oil. If all of those barrels were consumed, the Earth’s average temperature would rise by 0.4 degrees Celsius.

 

The statistic above, however, only includes the greenhouse gas emissions from burning the oil once it has been refined. The process of refining the Oil Sands also contributes to climate change, and it is far more polluting than the process of refining conventional crude oil. In fact, Greenpeace estimates that the Oil Sands are three to four times dirtier than conventional oil. It is important to note, however, that the Greenpeace estimate is on the high end. Scientific American estimates that oil from the Oil Sands results in greenhouse gas emissions only fourteen percent higher than the average oil burned in the US. This increase is still considerable, although far less than Greenpeace’s estimate.

 

Furthermore, refining the bitumen in the Oil Sands creates a by-product known as petroleum (pet) coke. It is used to create jet fuel and diesel and is possibly one of the dirtiest fossil fuels, emitting 20 percent more CO2 than oil. The Canadian tar sands alone produce 10 million metric tons of pet coke a year. It is important to note that since pet coke is not the primary resource being mined in the Oil Sands, it is often not included in companies’ reports on their environmental impact.

 

In 2011, the Oil Sands emitted 47 million metric tons of CO2 into our atmosphere, a relatively small number compared to the two billion tons of CO2 emitted by US coal mining in the same year. Eliminating our consumption of oil from the Tar Sands would certainly not stop greenhouse gas emissions, but it would be a step in the right direction.

Citations:

Biello, David. “How Much Will Tar Sands Oil Add to Global Warming?” Scientific American 23 Jan. 2013: n. pag. Web. 1 Mar. 2017.


“The Tar Sands and Climate Change.” Greenpeace Canada. Greenpeace, n.d. Web. 01 Mar. 2017.

Economic Impact of the Oil Sands:

 

It is estimated that over the next 20 years, the Oil Sands will contribute four trillion dollars to the Canadian economy and 1.2 trillion dollars in taxes to the Canadian government. In 2012, the Oil Sands single-handedly generated 91 billion dollars. Considering that in 2016 the Canadian gross domestic product was only 1.5 trillion dollars, the Oil Sands clearly have an immense impact on the country’s economy.

 

The Oil Sands have essentially transformed Alberta, turning its agriculture-based economy into a massive oil-based one. This change, of course, has led to economic prosperity for Alberta; the province is now even debt-free. The Oil Sands have also supplied the Canadian economy with 480,000 jobs, many of which are in Alberta. It is easy to think of that number as just a figure, but those are 480,000 people who are now able to provide for their families. These are also, in general, low-skilled jobs. If the Oil Sands industry were to cease to exist, many of these people would have a difficult time finding a job with a similar skill level and pay.

 

It is also important to note that there is still a lot of oil left in the tar sands; in fact, Canada has the world’s third largest proven oil reserve. Much of it, however, is difficult to retrieve. Nonetheless, in theory, the Oil Sands industry could generate substantial income for Canada for many years to come. Indeed, the industry would like to double its output of oil by 2025. If it succeeds, it is estimated that the Canadian gross domestic product will double and another 700,000 jobs will be added to the economy over the next 25 years.

 

Although allowing the oil industry in Canada to grow would theoretically be lucrative, many question whether this growth would leave Canada unhealthily dependent on oil. Oil prices have a direct impact on the Canadian economy: in 2016 oil prices dropped significantly, which resulted in an economic slump. Since the process of refining the tar sands is laborious, companies in Alberta spend more per barrel to extract oil. As oil prices plunge, profit margins for these companies shrink and reverberations are felt across the country. These reverberations would be far larger if the industry succeeded in doubling its size.

Citations:

“Economic Contribution.” Canada’s Oil Sands. Canada’s Oil & Natural Gas Producers, n.d. Web. 01 Mar. 2017

“Alberta’s Oil Sands: Social Impacts.” Gale Canada in Context, Gale, 2016. Canada in Context, n.d. 01 Mar. 2017.

“Oil Sands and the Economy: 5 Things You May Not Know.” Oil Sands Question and Response (OSQAR) Blog. Suncor, 18 Sept. 2014. Web. 01 Mar. 2017.

Slav, Irina. “Oil Bust Continues To Take Its Toll On Canadian Economy.” OilPrice.com. OilPrice, 05 July 2016. Web. 01 Mar. 2017.

 

The impact of Oil Sands on Human Health

“All we are saying is that the basis for the human health risk assessment is flawed.”

It is a fact that the crude oil processed from the sands of the boreal forests in Alberta, Canada is one of the world’s dirtiest and most environmentally destructive sources of fuel. An increasing body of research documents the serious health risks posed by the extraction and production processes. Despite this, the provincial and federal governments have done little to address the public health risks.

The University of Toronto’s environmental chemistry research group recently published a study reporting that the PAH emissions estimated in the environmental impact assessments of the oil sands are lower than actual emissions. Researchers studied the chemical concentrations from direct oil sands industrial activity (mining, processing, and transport) and asserted that the actual levels of chemicals in the air may be two to three times higher than what was recorded in other scientific studies (Cotter).

PAH refers to polycyclic aromatic hydrocarbons, which are released into the air, water, and soil when bitumen-rich oil sands are mined and processed (Cotter). These underestimated assessments were used to approve developments in the Athabasca oil sands region. Of the multiple risks posed by the oil sands operations in Alberta’s Athabasca region, the health implications remain vast and severely underestimated. The most prominent causes are the explosions and industrial accidents that occur at the sites during the production process and the construction of its infrastructure. These industrial mishaps have deleterious effects, manifested in the declining health of the surrounding biotic species:

  • Various forms of cancer are becoming prevalent, attributed to the production process.
  • Freshwater is being made toxic by downstream seepage, causing catastrophic damage to aquatic, animal, and human life in the area.
  • The industry accounts for high emissions of hazardous pollution and dust from tailings ponds and mining sites, leading to increasing cases of respiratory and lung diseases, especially among miners and workers.

Despite the overwhelming research on the negative effects of the oil sands projects, especially those that impact human health, both directly and indirectly, more projects are being planned in the region.

 

Oil Sands: An endangerment to the animals of Alberta

“Birds tell us so much about what is going on around us. They tell us that there needs to be a change in U.S. energy policy.” (Gabriela Chavarria, director of the NRDC’s science center)

It comes as no surprise that the habitat and health of various animal species have been detrimentally impacted by the infrastructure and production processes of the oil sands. While the industry’s impacts on human health have been extensively assessed and criticized, there remains a lack of literature and urgency when it comes to our animal co-inhabitants. The oil sands have poisoned waterways and disrupted the food chain and clean air supply. Concurrently, there has been an incessant increase in CO2 emissions and a gradual degradation of the surrounding areas. This has led to “drops and even disappearances of species near pipelines, platforms and other infrastructure of the tar sands” (Wells).

Powell, Todd. “Alberta Places Wildlife at Further Risk with Tar Sands Wetlands Exemption.”The National Wildlife Federation Blog. National Wildlife Federation, 12 Sept. 2012. Web. 03 Mar. 2017.

Hundreds of decomposed ducks have been found on the surface of an oil sands company’s pollutant-filled reservoir in Alberta. It has been reported that this lake-sized reservoir, also known as a tailings pond, killed an estimated 1,606 birds (Blog). These ponds hold an amalgam of clayey sand, hydrocarbons, and heavy metals that remain as by-products of the oil extraction process. A study conducted by the Natural Resources Defense Council (NRDC) estimated a death toll of 166 million birds over the next five decades.

Sadly, one of the animals most harmed by the expanding tar sands operation is the Woodland Caribou. Due to the pernicious loss of its habitat, the Caribou, an animal already considered endangered, is soon expected to go extinct. This would make the Woodland Caribou the third species of caribou to disappear from the earth. Other animals facing endangerment include the Grey Wolf, the Black Bear, and aquatic life (Wells).

 

The Oil Sands: A social cost or social benefit

It is an indisputable fact that Canadians have benefited, and continue to benefit, tremendously from the economic growth and high-paying jobs created by the tar sands in Alberta. But these developments have rightly raised concerns about various social issues. In the recent past, debates over issues such as short- and long-term environmental impacts on water quality and wildlife habitats, the effect of extraction projects on aboriginal peoples’ traditional lifestyles, affordable housing, and drug and alcohol addiction have been brought to the forefront.

The exploitation of the oil sands has left in its wake, alongside positive financial upshots, multiple negative consequences. The magnitude of the social costs, direct and indirect, was perhaps never fathomed to reach what it currently is. It is not hard to conclude that the social costs are indeed outweighing the social benefits. This led us to question: what justification would be enough for the proponents of the oil sands to either scale back or completely stop their economically beneficial yet biologically, socially, and environmentally detrimental activities?

The following data are provided by Greenpeace Canada’s 2010 report on the social costs of Oil Sands production in Alberta:

1996 – 2006: More than 700,000 people poured into Alberta to work in the oil industry, creating severe shortages of housing, roads, schools, and healthcare facilities.

2006: Homelessness in Edmonton increased by 19 per cent, while Calgary has seen a 458 per cent growth in the number of homeless people since 1996.

1999 – 2007: The population of Fort McMurray jumped from 36,000 to 65,000.

Further, it was reported that in the span of a decade, the cost of a single-family home in Fort McMurray rose from $175,000 to over $900,000, twice the average price of a house in Canada (Greenpeace). In fact, some workers were paying over $700 monthly for a cramped single room; in desperation, workers and tradesmen wrapped insulation around their vehicles and camped outside in below-freezing temperatures (Greenpeace).

The negative socio-economic effects of this rapid growth include alterations to the traditional way of life on the land, drug and alcohol abuse, and increased dependence on handouts from NGOs. It has simply been a downward spiral for the way of life of the people of Alberta. Additionally, crime and safety issues have increased dramatically (Gale), in “lock step with increased population and the boomtown mentality of Fort McMurray that has been fostered by the oil sands development” (Gale).

Substance abuse, gambling, and family violence have thus increased in Alberta, especially in towns closer to the tar sands projects. At this rate, it is not difficult to imagine Alberta becoming the skid row of Canada in the not-so-distant future. A Greenpeace report, for example, shows the following.

 

Fort McMurray:

  • Has the highest suicide rate in the country for men aged 18-24;
  • Reports five times more drug offences than the rest of Alberta;
  • Has an 89 per cent higher rate of assault;
  • Has a 117 per cent higher rate of impaired driving offences.

The same report also notes:

  • Women in Alberta experience the highest level of spousal abuse in Canada.
  • A recent report found a doctor-patient ratio of 1 to 1,579, three times lower than in countries such as China, Mexico, and Uzbekistan.
  • Exploitation of the workers is not uncommon.
  • Tailings ponds cover nearly 60 square kilometres of forest and muskeg around the Athabasca River. They contain dozens of carcinogens that have killed birds, fish, and mammals. To date, no provincial or federal agency has done a review of the ponds or their seepage rates into groundwater and the river.

Those most harshly impacted by these developments, however, have been the aboriginal communities of the Fort McMurray district. For them, the negative ecological and socio-economic impacts of the oil sands developments are closely intertwined and highly detrimental. The First Nations have inhabited the forest lands of the Athabasca river region for hundreds of years. “Thousands of Chipewyans live in small communities downstream from the oil sands projects north of Fort McMurray. These communities fear a destruction of the forest and river habitats that support the fishing and hunting that is central to their traditional lifestyle.” (Steward)

A recent study on the social impacts of Alberta’s oil sands reported deeply troubling findings. According to it, moose meat from the region now contains unacceptably high levels of arsenic, a potent carcinogen. Further, Metis fishermen in Fort Chipewyan have discovered hundreds of deformed fish downstream from the mining areas. It is not hard to understand, then, why there is a sudden increase in reported cases of renal failure, lupus, hyperthyroidism, and cancer among the aboriginal people who eat the local duck, moose, and fish (Gale).

Additionally, the Chipewyans claim that the production surrounding the tar sands is damaging and degrading their traditional lands. They contend that not only are they not made aware of the development plans, but they also do not receive adequate compensation for this utter destruction of their resources. “Many of Canada’s First Nations people, including the Cree, Métis, Dene, and Athabascan, are tied to the land and rely on the continued existence of wildlife for their living. Wildlife is becoming tainted by toxins. Fish and game animals are appearing covered with tumors and mutations. Fish frying in a pan smells like burning plastic.” (Steward)

 

Oil Sands and Deforestation

“According to Global Forest Watch data, from 2000-2013, Canada lost more than 26 million hectares of forest, mainly in its boreal region. More than 20 percent of the boreal forest region (more than 150 million hectares) is now covered by industrial concessions for timber operations, hydrocarbon development, hydroelectric power reservoirs, and mineral extraction.”

Canada is home to one of Earth’s major ecological treasures: the boreal forests. But in light of the economic boom attributed to the Oil Sands found here, why are they important? Canada holds 54% of the world’s boreal forests, making its expanse the world’s largest and most ecologically intact of its kind, at least until now (Peterson). These forests boast a rich and varied system of bogs, mountain ranges, coniferous and mixed forests, forested plains, waterways, and peatlands. They also support a web of wildlife that now increasingly faces endangerment, from the Grey Wolf to the Woodland Caribou and the Black Bear. Further, some studies have shown that because the caribou avoids areas within approximately 500 m of industrial sites and does not cross fragmented and cleared forest, the ecological footprint of the tar sands is larger than their physical footprint. Industrial development and forest fires in Canada’s tar sands region have cleared or degraded 775,500 hectares (almost two million acres) of boreal forest since the year 2000 (Tencer).

Image: The Shell Oil Jackpine open pit mine near Fort McMurray, Alberta, uses trucks that are three stories tall, weigh one million pounds, and cost seven million dollars each. Oil extracted from this rapidly growing area would travel through the proposed Keystone XL pipeline. (Photo by Michael S. Williamson/The Washington Post via Getty Images)

It is often overlooked that these boreal forests capture and retain twice as much carbon dioxide as tropical forests; their destruction therefore plays a critical role in today’s worsening climate crisis. Yet these forests are being recklessly destroyed. In their incessant drive to extract oil, the oil sands companies have cleared and harmed much of the forest through activities such as logging, mining, and building hydro dams (Petersen).

 

Solutions:

In the long term, the rate at which our planet currently consumes fossil fuels is not sustainable, and we must move towards renewable energy sources. However, given that the economy is inextricably tied to oil, it would be unrealistic to cut oil out of the Canadian economy any time in the near future. This makes the Oil Sands issue far more complex.

 

The long term goal for many people concerned about climate change is to first halt the expansion of the oil industry in Canada and then slowly reduce its output. Below are some proposed methods to accomplish this task.

 

Land:

In order to protect the land and animals that live in the Albertan forest and halt the expansion of the oil industry, the Government of Alberta would have to use legislation. Some proposed legislation includes:

  • Establishing more protected areas. This would restrict the amount of land that could be purchased and developed by oil companies.
  • Establishing harsher offset policies. Offset policies ensure that mining companies offset their environmental impact by performing restorative environmental work. The Pembina Institute recommends that for every hectare of land companies destroy for oil, three hectares of land should be restored or conserved.
  • Establishing conserved land that is integral to the survival of certain species.

 

Reducing Impact on Climate Change:

No matter what measures are implemented, it is impossible to completely eliminate the carbon footprint of the Oil Sands, or of oil extraction in general. However, there are ways to reduce the impact of mining until we transition to sustainable forms of energy. These include:

  • Implementing carbon capture technology at the mining sites. This technology would reduce the output of CO2 at the sites.
  • Ensuring that Alberta stays committed to scientifically defined greenhouse gas emission rates that are in line with global emission reduction targets. This means that the Canadian government cannot back out of global initiatives such as the Kyoto Protocol.

 

Reducing the Social Costs of the Oil Sands:

It is important to note that as the Oil Sands industry begins to contract, so will its societal impacts. In the meantime, however, the Canadian government must treat the issues in these affected communities with a high degree of seriousness. This includes creating programs to work directly with these communities to create better home and working environments.

 

Pembina Institute. “Oilsands Solutions.” Pembina Institute, n.d. Web. 01 Mar. 2017.

United Nations Framework Convention on Climate Change. “Kyoto Protocol.” Kyoto Protocol. N.p., 30 May 2013. Web. 03 Mar. 2017.

 

Conclusion

In this analysis we sought to map the multiple consequences produced by the extraction and production processes of the Oil Sands in Alberta. It serves as a preliminary exposition of the history, the significant changes, and the research regarding the Oil Sands.

While the detrimental effects of Canada’s economic juggernaut have been de-emphasized in the past, today there is a growing dialogue about them in the public sphere and the media. In our greed for petro-dollars, we have destroyed the product of thousands of geological years in a short span of 45 years. Canada’s economy remains largely dependent on the Oil Sands, but so do a myriad of other things, and these in turn are dying, being depleted, and being damaged in our belligerent greed for oil. We have wreaked irreparable havoc on the land, the boreal forests, and the flora and fauna of the areas surrounding the Oil Sands extraction sites. We are driving animals to extinction, polluting the air, and making the once clean waters highly toxic. These systemic failures, in turn, are contributing heavily to global climate change. Further, the displacement of indigenous people and the influx of migrant workers and laborers have contributed to increasing levels of crime and lower standards of living. However, solutions exist. Those listed above are just a few of the myriad ways to prevent the current situation from worsening.

Today, we possess the means and are aware of the methods to halt our destructive actions. In this vein, we need to address the issue before it is too late.

 

 Works Cited

“Alberta’s Oil Sands: Social Impacts.” Gale Canada in Context. Detroit: Gale, 2016. N. pag. Canada in Context. Web. 3 Mar. 2017.

“Alberta’s Wildlife Death Toll on the Rise.” Iowa Tar Sands Project. N.p., 18 Dec. 2014. Web. 03 Mar. 2017.

Cotter, John. “Health Risks Of Oilsands Likely Worse Than We Thought.” The Huffington Post. The Huffington Post, 05 Apr. 2014. Web. 01 Mar. 2017.

GreenPeace. “Tar Sands and Social Costs.” Stop the Tar Sands (2010): 1-2. Web. 02 Mar. 2017.

Petersen, Rachael, Nigel Sizer, and Peter Lee. “Tar Sands Threaten World’s Largest Boreal Forest.” Tar Sands Threaten World’s Largest Boreal Forest. World Resources Institute, n.d. Web. 02 Mar. 2017.

Steward, Gillian. “First Nations Bear the Risks of Oilsands Development.” Thestar.com. N.p., 28 Aug. 2015. Web. 03 Mar. 2017.

Tencer, Daniel. “Canada The World Leader In Deforestation, Study Finds.” The Huffington Post. The Huffington Post, 05 Sept. 2014. Web. 03 Mar. 2017.

Wells, Jeff. “Impact on Birds of Tar Sands Oil Development in Canada’s Boreal Forest.” NRDC Report, Dec. 2008. Web.

Impact Analysis – Ryan Bronstein and Brandon Foreman

Mexico City Water Crisis

Welcome to Iztapalapa, one of the sixteen municipalities of Mexico City. The most populous borough of Mexico City, Iztapalapa is home to 1.8 million residents with a population density of 40,000 people per square mile – approximately 150% of the density of New York City. It is also the poorest neighborhood in Mexico City. The archetypal image of an urban slum, it is a place whose residents all share the same obsession, an obsession that permeates every action, movement, and thought: clean water.

        Growing up in Iztapalapa, or any other of Mexico City’s impoverished boroughs, citizens might incredulously chuckle at the idea that the brown and gray landscape they find themselves living in, with the faucets in their homes that are either broken or dry, was once a breathing civilization upon an even livelier lake. Indeed, this was the case until the Spanish arrived in the 16th century and drained the lake for their own well-being, filling it with concrete. Now, centuries later, millions of people still yearn for this lost water.

        For a country striving to emulate the success of its neighbors, Mexico’s government maniacally refuses to commit resources to solve a 20th-century engineering problem. After expelling the clean water that naturally resided within the city, the capital – located more than two kilometers above sea level – has not found a way to effectively retain and recycle rainwater. Moreover, its sewage system is a travesty, and the city is forced to discharge billions of gallons of dirty water through artificial canals that leak and pollute the surrounding land and rivers. Consequently, the city faces the challenge of bringing clean water back, consuming a tremendous amount of energy and capital to pump the water toward the highland metropolis, opposing gravity the entire passage.

        Once that water makes its way (back) to the city, one can imagine it is in high demand. Mexico City boasts a population of nine million; thus, just as there exist impoverished neighborhoods such as Iztapalapa, there are wealthy neighborhoods, such as Miguel Hidalgo and Cuajimalpa, located westward, closer to the water reservoirs. Here, the amenities include aesthetic golf courses and a water pressure of 14 kg/cm², a rate 28 times that of Iztapalapa (Watts 2015). These districts also claim the lowest water prices in the city.

        For those living in poor neighborhoods, but in particular Iztapalapa, local wells continually prove themselves to be unreliable. Most days, water will not even come out of the tap, and when it does, its yellow tint and smell of hydrogen sulfide are enough to dissuade even the most dehydrated people from quenching their thirst with its almost certainly disease-ridden water. Additionally, what cannot be seen or smelled also poses dangers. The local water is known to contain high levels of toxic chemicals – such as magnesium, nitrogen, and sodium – which can only be removed at prices the town struggles to afford. Consumption of such water can lead to diarrheal diseases, chronic kidney disease, intestinal infectious diseases, and lower respiratory infections, responsible for approximately 8% of Mexico’s total burden of disease, according to the Institute for Health Metrics and Evaluation (IHME 2015). Furthermore, these contaminants make their way into the food supply through irrigation, thereby increasing the occurrence of disease and leading to crop kills. In fact, in the town of Endhó, farmers are no longer able to grow tomatoes due to the high concentration of heavy metals (Watts 2015).

        As a result of its contaminated wells, Iztapalapa is forced to bring in water from the dam in the wealthy city of Cutzamala, located 150km away, through a series of pipelines. Nevertheless, the water they receive from these pipes can be thought of as nothing more than the leftovers of the wealthy neighborhoods. The little water that enters Iztapalapa only does so after flowing through fissured, eroded pipes that cause leaks and add heavy metals and other contaminants to the water. Thus, the most populated borough of one of the most populated cities in the world is left with a shortage of dirty water.

        Other than leaks and a small supply to begin with, there are a number of confounding factors affecting the diminished water supply of Iztapalapa. First of all, Mexico City lacks large-scale wastewater reclamation and rainwater collection processes, forcing it to drill into aquifers to meet the high water demand. This is important because Mexico City is located upon clay beds, and when drillers break through the clay, the ground is susceptible to fissures. In fact, the fissures and fragments have become such a pervasive problem that Mexico City is sinking up to nine inches per year in some regions. Even worse, it is sinking unevenly due to the composition of volcanic soil within the ground. Therefore, the pipelines that rely on gravity to bring water to towns like Iztapalapa become greatly imbalanced and cannot transport water as efficiently.

        Human-caused climate change is also exacerbating these issues by simultaneously increasing the demand for water and decreasing its supply. Through the burning of fossil fuels and the release of carbon dioxide into the atmosphere, Iztapalapa is experiencing aridification, which raises temperatures and the incidence of drought. Thus, in the midst of desert-like conditions, when the citizens need water most, more water is being stolen away through evaporation. To compound the problem, the dam in Cutzamala that brings water to Iztapalapa is on its way toward drying up. This leads to an increased need for digging to tap aquifers, which in turn leads to more land sinking. The impact of this subsidence cannot be overstated. Fifteen elementary schools have been recorded as having crumbled or caved in as a result of cracks in the ground. Experts predict that ten percent of Mexicans ages 15-65 could try to emigrate north in response to the subsidence, the unpredictability of floods, and the recurrence of droughts (Kimmelman 2017).

If ten percent of citizens do become climate refugees, what becomes of the other ninety percent who remain in their homes? How does their future look? Unfortunately, it appears bleak. Mexico has already seen tensions rise through several protests, including the hijacking of water delivery trucks led by citizens claiming their pipes have not brought them water for weeks at a time. Moreover, all signs point to this water disparity growing as a result of human-induced climate change, increased water demand, decreased water supply, and Iztapalapa’s continuing population growth. One environmental scientist, Juan Jose Santibanez, has already made a bold prediction for the city’s bleak future in an interview with PBS: “There is a very high probability that, by 2020, there will be a mini-revolution, at least in Mexico City.” At stake is the health of citizens without access to clean water, as well as that of those with access, since violent conflict may occur to end this disparity once and for all. The bigger picture shows us that what is really at stake is the health of Mexico City’s civilization and environment, especially considering that in perhaps every war, it is both the social institutions and the physical environment that become ravaged.

        There are ongoing debates as to how Mexico City’s dangerous future can be avoided. One in particular is over the importance of rainwater collection. Systems for effective rainwater collection can be very expensive for impoverished areas. Of course, the Mexican government could fund such systems; however, there are many complications. First of all, the government is not especially liquid, as it is running a current account deficit of 2.7% of GDP. Secondly, the effectiveness of rainwater collection is still under significant scrutiny, in part due to climate change. As stated earlier, climate change is causing desertification, but also unpredictable flooding. It is not known whether rainwater would be a reliable, consistent source of hydration. Scientists predict that rainwater could provide anywhere from 10 to 30 percent of the city’s water needs, a range that is supposedly too wide for politicians to confidently invest in rainwater collection systems.

Other proposed solutions have been to repair leaks in the poorly constructed pipelines and to increase the use of recycled water. Nevertheless, such projects would be devastating to Mexico’s budget. This raises perhaps the most crucial debate centered on the water crisis of Mexico City: is clean water a basic right of all people, regardless of socioeconomic background? If so, the Mexican government needs to ensure access to clean water for every citizen. Whether through out-of-pocket funding or foreign aid, the water crisis must be resolved. On the other hand, if accessible clean water is not deemed a basic right, then Mexico can continue on the track it follows today: the privatization of water. One of the leading political positions on water comes from Mexico’s center-left political group, which proposed the General Water Act allowing private firms to control the water supply system (Watts 2015). This proposal quickly led to massive demonstrations across Mexico this past January, particularly in Tijuana. Citizens fear that privatization would increase the price of water and widen the disparity of water accessibility that exists between the upper and lower classes. As the water crisis and ensuing violence continue to grow each day, somebody – whether a public or private organization – must write the checks and bring water to Mexico City before supplies completely run out.

Of course, it would be wonderful if the Mexican government would declare clean water a basic right and could deliver it to all its people. Though this appears highly improbable today as the government moves toward privatization, such a declaration is not entirely out of reach. By applying the ecological humanities, strides can be made in this direction. By daring to ask questions related to ethics and human nature, and by going against the status quo, this area of study brings to the table discussions that would not otherwise be had. The ecological humanities would explore the golden question of who should have access to clean water, provide an understanding of why this water disparity exists today, and, through the application of persuasive, honest literature, reshape perceptions of accessibility into a more equitable, healthy form. The Mexico City water crisis is an extremely complex issue with no simple solution. In order to understand the full scope of the problem and thus respond accordingly, its analysis cannot be limited to an economic or empirical approach. The social circumstances that produced this disparity must also be explored, and this starts by gaining an understanding of the human-to-human interactions and attitudes that allowed this conflict to occur in the first place.

Throughout this paper, the term “crisis” was used to describe the water disparity of Mexico City, highlighted by the town of Iztapalapa. This term is fitting in the present as low-income citizens suffer diarrheal diseases, respiratory infections, and decreased productivity as a result of their dehydration and the contaminants in the little water they have. Nevertheless, this term is on the verge of being replaced. If the Mexican government does not provide its citizens with one of their most basic rights in the access to clean water, then a new term could arise, and the “Mexican Water Crisis” could become the “Mexican Water Wars.”

 

Works Cited

Institute for Health Metrics and Evaluation (IHME). GBD Compare Data Visualization. Seattle, WA: IHME, University of Washington, 2016. Available from http://vizhub.healthdata.org/gbd-compare. (Accessed March 2nd, 2017).

Kimmelman, Michael. “Mexico City, Parched and Sinking, Faces a Water Crisis.” The New York Times. The New York Times, 17 Feb. 2017. Web. 03 Mar. 2017.

“Mexico City faces growing water crisis.” PBS. Public Broadcasting Service, n.d. Web. 03 Mar. 2017.

“Mexico Current Account to GDP | 1980-2017 | Data | Chart | Calendar.” N.p., n.d. Web. 03 Mar. 2017.

Watts, Jonathan. “Mexico City’s water crisis – from source to sewer.” Mexico City live. Guardian News and Media, 12 Nov. 2015. Web. 03 Mar. 2017.

 

Plastic

The world has become obsessed with plastic, and at first it is easy to see why. It is lightweight, water-resistant, durable, versatile, strong, and seemingly inexpensive. Yet the very properties that make plastic so popular are the same reasons it is destroying our planet. The issue has become so significant that the UN declared war on marine debris in February 2017. Plastic’s chemical makeup and single-use nature enable it to threaten marine life, the world’s ecosystems, and human health.

It is important to first understand the chemical makeup of plastic and how plastic is made. Plastics “belong to a chemical family of high polymers, they are essentially made up of a long chain of molecules containing repeated units of carbon atoms” (Plasticpollution.org). Oil companies make plastic through fractional distillation of crude oil, exploiting the fact that the boiling point of hydrocarbons increases with molecular size. They then use the practice of “cracking” to convert the higher-boiling fractions into molecules in the C4-C10 range. Since plastic is made from the same hydrocarbon fractions as gasoline (molecules with 4 to 10 carbon atoms), oil companies face a trade-off between producing gasoline and producing plastic. It is worth noting that “making [water] bottles to meet America’s demand for bottled water uses more than 17 million barrels of oil annually, enough to fuel 1.3 million cars for a year” (Pacific Institute).

According to a 2014 report from the EPA, the United States produced 258 million tons of waste, of which 12.9% was plastic. That percentage is deceptively small: 12.9% of 258 million tons is over 33 million tons of plastic waste. This 12.9% of trash is far more damaging to the environment than the 61% of waste from paper, food, yard clippings, and wood combined. On a global scale, 299 million tons of plastic were produced in 2013. And while plastic makes up a small percentage of on-land waste, it makes up a disproportionate amount of marine debris. The United Nations Joint Group of Experts on the Scientific Aspects of Marine Pollution determined that land-based sources account for roughly 80% of the world’s marine pollution, and that 60-95% of that waste is plastic debris.
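The waste arithmetic above is easy to verify; a minimal sketch in Python, using only the figures quoted from the EPA report:

```python
# Figures quoted above from the EPA's 2014 report.
total_waste_tons = 258e6   # total U.S. waste generated in 2014
plastic_share = 0.129      # plastic's share of that waste

plastic_waste_tons = total_waste_tons * plastic_share
print(f"{plastic_waste_tons / 1e6:.1f} million tons of plastic waste")
# prints "33.3 million tons of plastic waste"
```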

Plastic marine debris wreaks havoc on every aspect of an ecosystem and can do so for hundreds of years. Since plastic has a very high molecular weight and stability, owing to its long chain of repeated carbon units, it does not degrade quickly. For example, a banana peel takes 2-5 weeks to decompose, while a plastic bottle takes 450 years. Plastic’s convenient light weight means it is buoyant enough to float on the ocean’s surface and be carried by currents around the globe. As it travels, plastic absorbs waterborne pollutants and leaches toxic compounds, such as Bisphenol A (BPA). This means that not only does plastic make the water it inhabits more toxic, it too becomes more toxic. The toxins released as plastic degrades also contribute to increasing ocean acidification.

Marine animals die from plastic by either consuming it or becoming entangled in it. It is estimated that 100,000 marine animals and 1,000,000 marine birds die each year from plastic marine debris. Sea turtles mistake plastic bags for jellyfish, causing them to suffocate to death. Plastic stays in the environment for so long that it is entirely possible for a turtle to consume a plastic bag and, after that sea turtle dies, for the bag to resurface and be re-consumed. Seals and dolphins become entangled and drown in floating trash. Not even the mighty blue whale is safe from mile-long ghost nets. (Ghost nets are fishing nets that have been purposefully discarded or accidentally lost in the ocean.) Marine birds are particularly susceptible to marine debris because floating red pieces of plastic look exactly like their favorite prey, the shrimp. That is why it is nearly impossible to find any small red or pink pieces of litter in the ocean. “It is estimated that of the 1.5 million Laysan Albatrosses which inhabit Midway, all of them have plastic in their digestive system; for one third of the chicks, the plastic blockage is deadly, coining Midway Atoll as ‘albatross graveyards.’ Since plastic is able to break into smaller and smaller pieces but remain plastic, even the microscopic zoo plankton will consume plastic debris” ( ).

Biomagnification makes the plastic consumption issue relevant to human health. It is estimated that 66% of the world’s fish population has plastic in its digestive tract. Since plastic cannot be broken down by any animal’s stomach, as fish are eaten by predators higher on the food chain, those predators build up plastic in their own stomachs. Plastic releases Bisphenol A within an organism (as it does in the environment). The problem is that the animals at the top of the food chain, such as tuna and swordfish, will carry large quantities of Bisphenol A, and unfortunately these are also some of the most popular consumer fish in the world. Biomagnification is why there has been a documented increase of Bisphenol A in humans. A 2008 study published in the Journal of the American Medical Association found that higher Bisphenol A levels were significantly associated with heart disease, diabetes, and abnormally high levels of certain liver enzymes ( ). Other studies concluded that Bisphenol A increases breast cancer risk and that exposure to Bisphenol A at young ages leads to externalizing behaviors ( ).

Single-use plastic products and private companies are the two main culprits of marine debris. The fact that we have products intended to be used once and then thrown out has altered how we treat all consumer goods; before plastic, no product in the history of humanity had been designed to be used once and then disposed of. The Ocean Conservancy organization does hundreds of beach cleanups each year and tallies the kinds of trash that end up on the beach. Of the top 10 items most frequently found, only the most common (cigarette butts) and the least common (paper bags) are not single-use plastic items. The cheapness of these products makes people naturally assume they are trash. At the very most, 25% of plastic debris is recycled in the United States (Utah Recycles). This means that the other 75% of 33 million tons of plastic is sitting in landfills or polluting the environment. While there certainly needs to be reform of America’s municipal solid waste service in order to better handle and treat the increased amount of plastic, it would be easier not to have the plastic to begin with. For a personal example, in my first week of personal trash collection I realized that I had used 6 sets of plastic utensils. I did not even realize I used that many in a week, because once I throw a product away, I forget about it. I now carry metal utensils with me when I eat. Not only does this save plastic, but I get to use a better product that is available at all times.

Major corporations, such as Kellogg’s and Coca-Cola, have billions of dollars in assets and are doing everything they can to fight any legislation that would raise standards for plastic pollution. These companies pay groups such as the Progressive Bag Affiliates and the American Chemistry Council to lobby against reducing plastic. Both of these groups helped pass legislation in Arizona that made the banning of plastic bags illegal. A 2010 bill introduced in the Senate that would have outlawed single-use carryout bags of any material failed after the American Chemistry Council spent millions on lobbyists to defeat it. In Europe, Coca-Cola has spent one million euros to lobby against deposit return schemes (DRS). DRS laws charge an extra deposit on all plastic bottles that is returned to the customer when the consumer brings back their purchased bottles. The movement was intended to reduce the amount of plastic that gets thrown away, but Coca-Cola feared that it would reduce the company’s bottom line. While these companies fight environmental legislation in their usual markets, they send their trash to foreign markets where there is little to no anti-plastic legislation. This is why China recovers 56% (by weight) of waste plastic imports worldwide. In China, trash can be deposited at low-tech, unregulated facilities that destroy the local environment because there is no legislation in place against such practices.

Yet, there is hope. The world has the ability to increase how much plastic is recycled, to develop new uses for recycled plastic, and to make environmentally friendly plastic. San Francisco is paving the way toward high recycling rates. The city has already reached an 80% recycling rate and has shown that if a megacity can do it, then so can any city. Every extra ton of plastic recycled saves 5,774 kWh of electricity, 685 gallons of oil, 98 million BTUs of energy, and 30 cubic yards of landfill space. Not only is this economically beneficial to society, but every ton recycled is one less ton entering the ocean. Studies have demonstrated that plastic bottles shredded into small pieces of polyethylene terephthalate (PET) can be used as sand-substitution aggregates in cementitious concrete composites. If this can be done on a large scale, the world can use the plastic in landfills to make a cheap, legitimate building material. Another research group was able to create decomposable plastic by synthesizing polyolefins with active additives such as pro-oxidants and starch. There are several other potential solutions, but there should be more. As attention to this issue grows, so will the funding necessary to develop solutions. However, technological advances will not be enough. We must change our behavior as humans.
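To put the per-ton savings above in perspective, here is a small Python sketch that scales the quoted rates (Utah Recycles) to an arbitrary tonnage of recycled plastic; the one-million-ton example is hypothetical:

```python
# Per-ton savings rates as quoted above (Utah Recycles).
KWH_PER_TON = 5_774          # electricity saved, kWh
OIL_GALLONS_PER_TON = 685    # oil saved, gallons
BTU_PER_TON = 98e6           # energy saved, BTUs
LANDFILL_YD3_PER_TON = 30    # landfill space saved, cubic yards

def recycling_savings(tons):
    """Scale the quoted per-ton savings to a given tonnage of recycled plastic."""
    return {
        "electricity_kwh": tons * KWH_PER_TON,
        "oil_gallons": tons * OIL_GALLONS_PER_TON,
        "energy_btu": tons * BTU_PER_TON,
        "landfill_cubic_yards": tons * LANDFILL_YD3_PER_TON,
    }

# Hypothetical example: recycling one million extra tons of the roughly
# 25 million tons of U.S. plastic that currently goes unrecycled.
savings = recycling_savings(1e6)
print(f"{savings['oil_gallons'] / 1e6:.0f} million gallons of oil saved")
# prints "685 million gallons of oil saved"
```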

This does not fit into my analysis of the issue, but it is certainly worth reporting. I took an environmental course (ENVRON 346) in which we learned about the different issues affecting the world’s oceans. I learned about how plastic is produced from oil, but I had trouble recalling all the parts of the process. I searched Google for “how is plastic made from oil,” and every answer on the first page was biased! The first result was from PlasticsEurope, the Association of Plastics Manufacturers; the second was the U.S. Energy Information Administration; the third was the American Chemistry Council; and the fourth was Polyplastics’ Solution Platform for Engineering Plastics. There are countless sites after the first page that are run by similarly biased organizations. None of these websites accurately represented how plastic is made. All of the sites made the process seem organic and healthy, and some even claimed that recycling is incredibly efficient. None of the sites made it clear that the same hydrocarbons used for plastic are the same ones used for gasoline production. These organizations have managed to use their massive budgets to buy prime internet space, space that is supposed to be for open and true information. This should trouble every individual.