Practical Impacts of Burlington Northern on Multi-Party Superfund Sites

Posted on January 29, 2010 by William Hyatt

To many Superfund practitioners, United States v. Burlington Northern & Santa Fe Railway Company, __ U.S. __, 129 S. Ct. 1870 (2009) represents the latest in a series of surprises from the Supreme Court. The decision follows Cooper Industries, Inc. v. Aviall Services, Inc., 543 U.S. 157 (2004), from which we learned that the statutory words “during or following” really mean just what they say and that contribution claims under the Comprehensive Environmental Response, Compensation, and Liability Act (also referred to as CERCLA or the Superfund statute) are available only in those limited circumstances. A few years later, in United States v. Atlantic Research Corp., 551 U.S. 128 (2007), we learned that “covered persons” (also referred to as potentially responsible parties or PRPs) under the statute may, in certain procedural circumstances, have cost recovery claims in the event they do not meet the criteria for contribution claims. In Burlington Northern, we learned that “arranger” liability may not be as broad as we had thought, and that joint and several liability may not be as automatic as we had thought. It is probably fair to say that the outcome in Burlington Northern, like the outcomes in Aviall and Atlantic Research, was not intuitive to Superfund practitioners.

 

A Superfund practitioner might have expected the Supreme Court decision in Burlington Northern to look more like the Ninth Circuit opinion it reversed (found at 502 F.3d 781), endorsing a broad reading of “arranger” liability under the statute and applying joint and several liability to all the defendants, the latter having been the norm for more than 25 years since the seminal decision in United States v. Chem-Dyne Corp., 572 F. Supp. 802 (S.D. Ohio 1983).

 

As with Aviall and Atlantic Research, it will probably take many years, and many decisions by the lower courts, before we fully appreciate the implications of Burlington Northern, but one thing is already clear. Defendants in multi-party Superfund sites will be contending for apportionment as the alternative to joint and several liability, if for no other reason than to avoid funding the orphan share represented by “covered persons” who can’t be found, no longer exist, or, as is more recently the case, are bankrupt. On the other hand, governments asserting cost recovery claims can be expected to continue to advocate aggressively for joint and several liability, so as to avoid having to absorb the orphan share themselves. The question is what practical impacts this battleground will have on Superfund practice at multi-party sites.

 

Burlington Northern raises several practical questions that will have to be resolved as the law and practice develop. Here are some of them.

 

Whether a defendant is entitled to apportioned liability is a fact-intensive inquiry, resolved in Burlington Northern only after a six-week bench trial, and only after the district judge took four years to render a decision. Will governments be able to obtain liability judgments at the beginning of cost recovery actions, as they have typically tried to do in the past? Will Burlington Northern force more cases to go to trial?

 

Whether liability is subject to apportionment is not likely to be decided until the end of a case, as it was in Burlington Northern. How will cost recovery defendants evaluate their chances of success in the early stages of a case? Will they feel compelled to develop a detailed record to support arguments that liability for a single harm is subject to apportionment, unlike the defendants in Burlington Northern, who limited their arguments to general denials of liability?

 

Governmental plaintiffs can be expected to insist that liability at multi-party sites is still joint and several, even after Burlington Northern. Will those governmental plaintiffs be willing to consider the litigation risk that liability may be subject to apportionment in negotiating settlements? If so, how will that litigation risk be taken into consideration?

If liability is apportioned, how will any resulting orphan shares be funded? Will EPA’s historic limitations on orphan share funding be adequate? If not, where will the funding come from? Is the Superfund tax more likely to be reinstated because of Burlington Northern?

 Will the organization of multiple “covered persons” into PRP groups be more difficult if the defendants believe they can escape liability through apportionment? How will defendants balance that possibility against the potential benefit in the form of reduced costs that might be gained by performing cleanup work themselves?

 

Will ADR emerge as the norm for dividing responsibility among defendants who believe their liability is subject to apportionment, as it has in allocating joint and several liability? What evidence will be used to apportion liability? Burlington Northern endorsed many of the same causation-related considerations as the equitable factors historically used to allocate joint and several liability; will some or all of the Gore factors still be relevant? Burlington Northern also endorsed estimations and compromises, considerations not normally found in legal determinations; how will the lower courts react to imprecise calculations of apportioned liability?

 

How will defendants argue for an orphan share? Will they seek to establish an orphan share from the bottom up (by quantifying the share of missing PRPs), or from the top down (by quantifying their own individual shares)? Whichever way defendants decide to approach the issue, they can be expected to develop the record the district judge found lacking in Burlington Northern.

 

Finally, in states whose statutes make joint and several liability explicit (e.g., the New Jersey Spill Compensation and Control Act, N.J.S.A. 58:10-23.11g(c)(1)), how will apportionment decisions be made? Will the scope of liability be different for EPA and for such states? Under such statutes, is there no instance in which liability will be subject to apportionment, even for distinct harms?

Like Aviall and Atlantic Research before it, Burlington Northern promises to be a fertile source of future litigation. 

BLANKENSHIP-KENNEDY DEBATE CLIMATE CHANGE

Posted on January 28, 2010 by David Flannery

On January 21, 2010, thousands packed the auditorium at the University of Charleston in Charleston, West Virginia, or tuned in on television and radio for the debate between Massey Energy CEO Don Blankenship and environmentalist Robert F. Kennedy, Jr.

Asked about his primary concerns for the future of energy, Mr. Blankenship stated that they were the security of this country and improving the quality of life in this country and throughout the world. This answer became something of a theme for Mr. Blankenship, as he stated his concern for the health and well-being of people, which is dependent on their quality of life, which is heavily dependent on affordable electricity, which is heavily dependent on coal.

When asked the same question, Mr. Kennedy offered several minutes of comments similar to other speeches he has given around the country concerning Appalachia and coal, in which he highlighted his family’s ties to West Virginia along with his views against surface mining.

The audience, with nearly equal numbers of supporters on both sides, was relatively subdued thanks to early pleas from University of Charleston President and event moderator Dr. Welch to hold off applause until the end. At times, however, both debaters received loud applause for their answers to questions.

Throughout the debate, Mr. Kennedy cited the many health and environmental problems he believed to be caused by coal, while Mr. Blankenship reminded Mr. Kennedy that many of his biggest issues with coal, such as the burning of coal and its contribution to mercury in water, are primarily caused by other countries with much higher coal usage, such as China and India.

Mr. Kennedy also focused a great deal on alternative energy, such as wind and solar, as well as West Virginia’s need to shift its focus to these alternative energy sources. Mr. Blankenship responded that if it were profitable to build solar panel fields or wind farms without government subsidies, it would be happening at a greater rate than is occurring. Blankenship stated that his company is pouring hundreds of millions of dollars into the coal industry because that is where the investment will pay off in a free enterprise market.

While the security at the event mirrored that of international air travel, the debate itself was a success, going off without much disturbance other than the occasional burst of applause.

SCOTT BROWN'S ELECTION - ONE MORE SET-BACK FOR CLIMATE CHANGE LEGISLATION?

Posted on January 27, 2010 by Michael Hockley

When Scott Brown was elected to fill Senator Kennedy’s Senate seat, news reports highlighted the impact on health care legislation and the loss of the filibuster-proof sixty-vote Democratic majority in the Senate. In environmental circles, however, many commentators pointed out the potential impact on climate change legislation.

 

Prior to his election, most observers believed that once Congress passed the health care bill, it would turn its full attention to climate change legislation and pass some form of legislation to limit greenhouse gas (“GHG”) emissions. The loss of this key Democratic Senate seat makes the prospect of GHG legislation in the near future seem less likely, although some commentators take the contrarian view. They argue that if health care reform moves to the back burner, the chances of passing a climate bill would increase because Democrats need a major legislative victory to bolster their 2010 election efforts.

 

Following the United States Supreme Court’s decision in Massachusetts v. EPA, 549 U.S. 497 (2007), finding that the Environmental Protection Agency (“EPA”) has the authority to regulate carbon dioxide as a pollutant under the Clean Air Act (“CAA”), some form of mandatory GHG controls, either through legislation, regulation, or a combination of both, has seemed inevitable. In response to the Massachusetts decision, EPA and Congress have been moving on parallel tracks to regulate GHG emissions.

 

EPA has issued a number of proposed and final rules, including a final mandatory GHG reporting rule, 74 Fed. Reg. 56260 (Oct. 30, 2009), an Endangerment and Cause or Contribute Finding that motor vehicle GHG emissions contribute to GHG pollution and threaten public health and welfare, 74 Fed. Reg. 66496 (Dec. 15, 2009), and a proposed “Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule,” 74 Fed. Reg. 55292 (Oct. 27, 2009), among others. EPA and the National Highway Traffic Safety Administration also announced a joint proposal to establish light duty vehicle GHG and mileage standards for model years 2012 through 2016.

 

In response to concerns expressed by both industry and environmental interests that the CAA is not the best vehicle for regulating GHGs, factions in the House and the Senate have proposed sweeping legislation to reduce GHG emissions: the Waxman-Markey Climate Change bill, H.R. 2454, “The American Clean Energy and Security Act of 2009,” in the House of Representatives, and the Boxer-Kerry bill, the “Clean Energy Jobs and American Power Act,” in the Senate. Both include GHG emissions reduction targets and use a cap and trade scheme to achieve those goals. In addition, they include a variety of other measures to encourage investment in alternative energy sources and energy efficiency.

 

In recent months, efforts to move forward with this legislation seem to have been eclipsed by efforts to pass comprehensive health care legislation, but the conventional wisdom was that some form of legislation would be passed once health care was put to rest. Now that the Democrats have lost a filibuster-proof super majority, prospects for climate change legislation seem to be dimming.

 

On the EPA regulatory front, Senator Lisa Murkowski (R-Alaska) has been on the attack, trying to prevent EPA from promulgating GHG regulations that limit emissions from major sources. Most recently, she filed a “disapproval resolution” on January 22, 2010, seeking to retroactively veto EPA’s endangerment and cause or contribute findings that GHGs endanger public health and the environment, thereby blocking EPA’s GHG regulations.

 

A disapproval resolution is a procedural mechanism that prohibits executive branch agency rules from taking effect. It only requires 51 votes and is not subject to filibuster rules. Senator Murkowski claims to have the backing of 39 other senators, including three Democrats: Sen. Blanche Lincoln (D-Ark.), Sen. Ben Nelson (D-Neb.), and Sen. Mary Landrieu (D-La.). She introduced this resolution on the heels of Scott Brown’s election, and she does not expect it to reach the floor for a vote before Scott Brown is sworn into office.

Even if she is able to garner 51 votes in the Senate, the House must pass a similar resolution, and it must be signed by the President to go into effect. Even if the resolution does not succeed, it signals a widespread lack of support, even among Democrats, for legislation controlling GHG emissions this year. Scott Brown’s election should make it more difficult to enact climate change legislation, especially with an election season just around the corner, because his election is being interpreted by many as a signal of the electorate’s disapproval of the Obama agenda.

 

In the meantime, if there is no climate change legislation passed, EPA likely will continue to move down the regulatory path of limiting GHG emissions using its authority under the CAA.

WATER MORE VALUABLE THAN OIL NOW? FOR SURE SOMEDAY!

Posted on January 21, 2010 by Stephen E. Herrmann

According to Bloomberg News, the worldwide scarcity of usable water already has made water more valuable than oil. The Bloomberg World Water Index, which tracks 11 utilities, has returned more to investors every year since 2003 than oil and gas stocks or the Standard & Poor’s 500 Index.

When you want to spot emerging trends, follow the money. Today, many of the world’s leading companies and investors are making big bets on water. Why? There simply is not enough freshwater to go around, and the situation is expected to get worse before it gets better.

The most essential commodity in the world today is not oil, not natural gas, not even some type of renewable energy. It’s water -- clean, safe, fresh water.

 

TODAY:

In 1992, the United Nations General Assembly designated March 22 as World Water Day. Every year on that date, people worldwide participate in events and programs to raise public awareness about what many believe to be the world’s most serious health issue -- unsafe and inadequate water supplies -- and to promote the conservation and development of global water resources.

 

More than a billion people -- almost one-fifth of the world’s population -- lack access to safe drinking water, and 40 percent lack access to basic sanitation, according to the 2nd UN World Water Development Report.

 

The United Nations estimates that by 2050 more than two billion people in 48 countries will lack sufficient water. Approximately 97 percent to 98 percent of the water on planet Earth is saltwater (the estimates vary slightly depending on the source). Much of the remaining freshwater is frozen in glaciers or the polar ice caps. Lakes, rivers and groundwater account for about 1 percent of the world’s potentially usable freshwater.

 

According to the United Nations, which has declared 2005-2015 the “Water for Life” decade, 95 percent of the world’s cities still dump raw sewage into their water supplies. Thus it should come as no surprise that 80 percent of all the health maladies in developing countries can be traced back to unsanitary water. The global water crisis is the leading cause of death and disease in the world, taking the lives of more than 14,000 people each day, 11,000 of them children under age 5.

 

TOMORROW:

 

If global warming continues to melt glaciers in the polar regions, as expected, the supply of freshwater may actually decrease. First, freshwater from the melting glaciers will mingle with saltwater in the oceans and become too salty to drink. Second, the increased ocean volume will cause sea levels to rise, contaminating freshwater sources along coastal regions with seawater.

 

Sandra Postel, author of the 1998 book Last Oasis: Facing Water Scarcity, predicts big water availability problems as populations of so-called “water-stressed” countries jump perhaps sixfold over the next 30 years. “It raises tons of issues about water and agriculture, growing enough food, providing for all the material needs that people demand as incomes increase, and providing drinking water,” says Postel.

 

Developed countries are not immune to freshwater problems either. Researchers have found a sixfold increase in water use in the United States since 1900 against only a twofold increase in population size; in other words, per-capita use has roughly tripled. Such a trend reflects the connection between higher living standards and increased water usage, and underscores the need for more sustainable management and use of water supplies even in more developed societies. Further evidence of the coming water problem is that while China is home to 20 percent of the world’s people, only 7 percent of the planet’s freshwater supply is located there.

 

THE PATH:

 

With world population expected to pass nine billion by mid-century, solutions to water scarcity problems are not going to come easy. Some have suggested that technology -- such as large-scale saltwater desalination plants -- could generate more freshwater for the world to use. But environmentalists argue that depleting ocean water is no answer and will only create other serious problems. 

 

The cost of water is usually set by government agencies and local regulators. Water is not traded on commodity exchanges, but many utilities stocks are publicly traded. Meanwhile, investments in companies that provide desalinization, and other processes and technologies that may increase the world’s supply of freshwater, are growing rapidly. General Electric Chairman Jeffrey Immelt said the scarcity of clean water around the world will more than double GE’s revenue from water purification and treatment by 2010 -- to a total of $5 billion. GE’s strategy is for its water division to invest in desalinization and purification in countries that have a shortage of freshwater. Research and development into improving desalination technologies is ongoing, especially in Saudi Arabia, Israel and Japan. And already an estimated 11,000 desalination plants exist in some 120 countries around the world.

 

As individuals, we can all rein in our own water use to help conserve what is becoming an ever more precious resource. We can hold off on watering our lawns in times of drought. And when it does rain, we can gather gutter water in barrels to feed garden hoses and sprinklers. We can turn off the faucet while we brush our teeth or shave, and take shorter showers. As Sandra Postel concludes, “Doing more with less is the first and easiest step along the path toward water security.”

Zubulake Revisited: Judge Scheindlin on Discovery Sanctions

Posted on January 20, 2010 by John Barkett

Every environmental litigator understands the duty to preserve documents. Before a complaint is filed, a plaintiff must preserve documents relevant to the claims about to be advanced. If a defendant reasonably anticipates litigation, the defendant must undertake reasonable efforts to preserve documents that are relevant to the impending lawsuit. Once a complaint is served, a defendant must preserve documents relevant to the claims alleged.

 

In the electronic world, especially on a prelitigation basis, it is doubly important to identify custodians with relevant documents (“key players”) since, with a keystroke, they have the ability to delete responsive electronically stored information. Consolidated Aluminum Corp. v. Alcoa, Inc., 2006 U.S. Dist. LEXIS 66642 (M.D. La. July 19, 2006) illustrates the risk. Alcoa sent a cost-recovery demand to Consolidated Aluminum in 2002 and promptly put a litigation hold on the electronic documents of four Alcoa employees involved with a remedial investigation and cleanup. In 2003, Consolidated filed a declaratory judgment action seeking to be absolved of liability. In 2005, Consolidated propounded discovery that prompted Alcoa to expand its key player list by eleven more names. It was not until this expansion that Alcoa suspended its janitorial email deletion policy and backup tape maintenance policy, which at Alcoa meant that email older than about seven months was no longer available unless it had been archived by the individual user. The magistrate judge imposed a monetary sanction on Alcoa—in effect determining that Alcoa should have identified these additional individuals as key players in 2002. 2006 U.S. Dist. LEXIS 66642, *36.

 

If a duty to preserve is violated, and documents are lost as a result, sanctions may follow. What sanction is imposed will depend upon the level of culpability of the “spoliating” party—negligence, gross negligence, or bad faith—and the amount of prejudice to the “innocent” party from the loss of information relevant to the innocent party’s claim or defense. But what is the difference between “negligence” and “gross negligence”? Who has the burden of proof in establishing the culpability of the conduct or the existence of prejudice? May a court presume prejudice depending upon the level of culpability? If so, is such a presumption rebuttable?

 

Much as she did in the five Zubulake v. UBS Warburg decisions, Judge Shira Scheindlin has written another blockbuster decision answering all of these questions: The Pension Committee of the University of Montreal Pension Plan et al. v. Banc of America Securities, LLC et al., Civ. 9016 (January 15, 2010). In her amended opinion and order (the original opinion was issued January 11, 2010 and appears at 2010 WL 93124), Judge Scheindlin defined gross negligence by reference to misfeasance following the attachment of a duty to preserve. She held that a finding of gross negligence will accompany the failure to

 

  • issue “a written litigation hold,”
  • “identify key players” and to “ensure that their electronic and paper records are preserved,”
  • “cease the deletion of email” or “preserve the records of former employees that are in a party's possession, custody, or control,” and
  • preserve backup tapes “when they are the sole source of relevant information or when they relate to key players, if the relevant information maintained by those key players is not obtainable from readily accessible sources.”

In contrast, she held, the failure to obtain records from all employees (as opposed to key players), or to take all appropriate measures to preserve electronically stored information, will in most cases “likely” fall into the “negligence” category, unless the facts, on a case-by-case basis, demonstrate otherwise.

 

The burden of proof, the court said, is on the innocent party to show that the spoliating party had (1) control over the evidence and an obligation to preserve it at the time of its loss and (2) acted with a culpable state of mind, and that (3) the missing evidence is relevant to the innocent party’s claim or defense. Relevance is presumed when bad faith exists. Some courts presume relevance when “gross negligence” has been found, but Judge Scheindlin held that this presumption is “not required.” If only negligence has been found, the innocent party must prove relevance and prejudice. Irrespective of the level of culpability, “any presumption is rebuttable.”

 

The slip opinion is 85 pages long, and rather than summarizing it further here, I urge readers to review it. In the end, Judge Scheindlin decided that relevant information was lost and the innocent party (here a defendant) was prejudiced. She decided to give an adverse inference instruction that itself occupies two illuminating single-spaced pages of the opinion, along with monetary sanctions (including attorneys’ fees for deposing certain declarants and bringing the sanctions motion).

 

Pension Committee begins with the subtitle “Zubulake Revisited: Six Years Later.” This time, there will be no debate over how to pronounce Pension Committee. And, in the years to come, Pension Committee is sure to be cited just as often as Zubulake has been.

"MEGA" SHALE AND TIGHT SANDS GAS - A GAME CHANGER

Posted on January 19, 2010 by R. Kinnan Golemon

In the past several decades, a paradigm shift has occurred in the domestic natural gas market, due in large measure to the persistence of innovative independent oil and gas operators, advancements in drilling and completion technology, and the increased demand for natural gas during the expanding economic times that existed prior to year-end 2008. That shift will have significant impact in areas of the U.S. that heretofore were not significant producers of the commodity. Prior to this development, supply tightness and price volatility were characteristic features of the natural gas market. Now, due to these "Mega" shale and tight sands gas plays, price swings will be dampened, and the sector's activities will come under increased environmental scrutiny.

 

The U.S. gas supply is currently predicted to last at least 150 years at use levels similar to those existing in 2008. Only a few short years ago, forecasters were predicting the need for massive imports of liquefied natural gas to meet predicted near-term demand. This change in conditions has very significant political implications and certainly presents interesting opportunities on a variety of fronts for environmental attorneys.

 

One particularly interesting aspect of these newly found natural gas reserves is that a significant portion of the exploration, production, processing and transmission activity will occur in areas of the U.S. that have had limited exposure to such activity. The last ten years of rapid expansion of natural gas activity in the Barnett Shale area of Texas, i.e., North Central Texas and the Dallas-Ft. Worth metroplex, is a forerunner of what is likely to occur as resource development expands to other known shale deposits.

 

 

Needless to say, there is opportunity for tremendous growth in the local tax base, ample employment opportunities for certain skill sets, increased income to property owners, and, most certainly, a variety of allegations of environmental harm from anti-drilling opposition. In the Barnett Shale area, much of the latter has recently been directed at perceived increases in emissions of air contaminants, e.g., VOCs and "toxic" constituents. To date, snapshot air quality sampling has not confirmed any problem. (See the January 12, 2010 Texas Commission on Environmental Quality (TCEQ) press release, "Oil and Gas Air Tests in Ft. Worth Find 'No Cause for Concern.'") However, on the same date, the Mayor of Dish, a rural community of fewer than 200 residents, appeared before another state agency, the Texas Railroad Commission, seeking a cessation of all natural gas drilling, production, processing and transmission activity, contending that his community was besieged by toxins and odors emanating from nearby natural gas activity. (Additional TCEQ air sampling results from recent tests in that rural setting are due to be released this month.)

  

Numerous other environment-related contentions concerning development of the Barnett Shale reserve have generally been directed at the well completion phase, where large volumes of fresh water with additives are used in hydraulic fracturing, or "fracing" (a pressurized mixture that breaks apart the formation rock to allow the natural gas to flow); at the disposal of wastewater; and at the specifics of the proprietary formulas for the additives. In addition, there are a variety of claims relating to general safety, increased truck traffic and disturbance of property for the placement of associated gathering and transmission lines.

 

This paradigm shift in natural gas reserve potential should afford many in our profession an excellent opportunity to provide sound advice and counsel, drawing on the experience we have gained in addressing similar issues in the past.

ADVENTURES IN WATER QUALITY MITIGATION

Posted on January 13, 2010 by Rick Glick

The regulated community is experimenting with solutions to water quality regulatory problems that are market based and implemented on a watershed scale. Such efforts are being met with guarded interest by agencies, environmental organizations and the public, but offer the best hope for true ecological restoration. Oregon has recently passed legislation to foster ecosystem services markets to facilitate this approach. 

 

The Clean Water Act addresses water quality degradation through the establishment of water quality standards and the imposition of technology-based effluent limitations in point source discharge permits. The receiving waters are tested periodically to see if standards are being attained; if not, Total Maximum Daily Loads (TMDLs) are set and waste load allocations are given to point sources so that permits can be adjusted. Non-point sources are given load allocations in the TMDL, but since there is no direct regulatory enforcement mechanism, and since funding sources are limited, compliance is not assured.
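
For readers who have not worked through the arithmetic, a TMDL is essentially a pollutant budget: the receiving water's loading capacity is divided among waste load allocations (WLAs) for point sources, load allocations (LAs) for non-point sources, and a margin of safety. Here is a minimal sketch of that budget in Python; the source names and every number are hypothetical, purely for illustration:

    # Toy TMDL budget: loading capacity >= sum(WLAs) + sum(LAs) + margin of safety.
    # All figures are hypothetical, in pounds of pollutant per day.

    loading_capacity = 1000.0                    # load the stream can assimilate
    margin_of_safety = 0.10 * loading_capacity   # assumed explicit 10% MOS

    point_sources = {"municipal_wwtp": 300.0, "industrial_outfall": 150.0}
    nonpoint_sources = {"agriculture": 350.0, "urban_runoff": 100.0}

    wla = sum(point_sources.values())    # waste load allocations (point sources)
    la = sum(nonpoint_sources.values())  # load allocations (non-point sources)

    total = wla + la + margin_of_safety
    assert total <= loading_capacity, "allocations exceed the stream's capacity"

    # Point source permits are then adjusted to reflect their WLAs; the LAs,
    # as noted above, carry no direct enforcement mechanism.
    print(f"WLA={wla:.0f}, LA={la:.0f}, MOS={margin_of_safety:.0f} lbs/day")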

 

This model has worked out pretty well for dealing with municipal and industrial waste water discharges, and toxics in receiving waters have been much reduced. However, there has been little effect on water quality degradation related to non-point sources. In Oregon, over 1,200 streams are listed as water quality limited, and the vast majority are on the list for non-point source related problems, such as warmer ambient water temperatures and nutrient loading. What to do?

 

 

The conventional response is to ratchet up permit requirements for point sources, or to impose local mitigation requirements on those caught in the Clean Water Act § 401 water quality certification web. As it is said, if all you have is a hammer, all your problems are nails. There are, however, other tools in the box. Here are a couple of examples of ecomarket approaches.

 

Clean Water Services is the second largest sewerage agency in Oregon. It has four treatment outfalls discharging to the flat, slow-moving Tualatin River. The discharge raises receiving water temperatures, and when it came time to renew its four permits, the agency was facing stricter requirements to control thermal loading. Rather than installing mechanical chillers at the outfalls, CWS proposed a large-scale riparian revegetation program. It was projected that the massive tree planting effort would take about ten years to match the cooling effect of the chillers, but would double the cooling as the trees matured. And with such an effort come ancillary habitat and other ecological benefits throughout the watershed that no chiller could provide. The Oregon Department of Environmental Quality approved the program and it is being implemented.

 

Idaho Power Company has proposed a similar approach to resolve water temperature problems associated with its Hells Canyon Complex on the Snake River. The HCC comprises three dams and reservoirs that together generate over 1,100 MW. The HCC is undergoing relicensing, which triggers the CWA § 401 water quality certification process before both the Oregon and Idaho Departments of Environmental Quality, as the Snake River is a border stream. A temperature control structure installed in the HCC’s largest reservoir would probably solve the regulatory problem, but would offer few ecological benefits. Instead, the company is proposing an ambitious upstream watershed improvement program composed of riparian planting, fencing, wetlands enhancement, irrigation efficiency upgrades and flow augmentation. The Snake River watershed is vast and complex, with heavy human influence throughout, so a program on this scale will be tough to implement. However, the potential upside piques the imagination.

 

Official policy favors such watershed approaches. EPA has adopted a water quality trading policy that encourages transactions between point and non-point sources, with a focus on reducing nutrient loads and thus restoring depleted dissolved oxygen. EPA also recognizes the potential for applying the policy to temperature problems. Last year the Oregon legislature enacted Senate Bill 513, which establishes state policy supporting development of ecosystem services markets to facilitate watershed scale solutions to water quality restoration.
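
To make the trading concept concrete, here is a rough sketch of a single point/non-point trade. The 2:1 trading ratio, which discounts non-point credits for the uncertainty in estimating non-point reductions, and all of the loads are assumptions for illustration, not figures from EPA's policy:

    # Hypothetical point/non-point water quality trade.
    excess_point_load = 500.0   # lbs/day by which a discharger exceeds its allocation
    trading_ratio = 2.0         # assumed: 2 lbs of non-point reduction per 1 lb of credit

    nonpoint_reduction = 1200.0  # lbs/day achieved, e.g., by upstream riparian buffers
    credits = nonpoint_reduction / trading_ratio

    if credits >= excess_point_load:
        print(f"trade covers the exceedance, with {credits - excess_point_load:.0f} credits to spare")
    else:
        print(f"trade falls short by {excess_point_load - credits:.0f} credits")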

 

I have been appointed to the SB 513 working group tasked with developing the policy and making further recommendations to the legislature. One of the greatest challenges is the lack of reliable metrics. Because there are myriad other upstream influences on water temperature, it is exceedingly difficult to measure the effect of an upstream tree planting program on downstream temperatures. Further, the benefits from watershed programs are long term in nature.

 

Thus, there is risk both to the permittee and the regulatory agency that someone will sue to require immediate and measurable results. But if the goal of the overall regulatory program is truly ecological protection and restoration, then we must go beyond compliance for the sake of compliance and focus on outcomes. The huge potential for sustainable, widespread benefits resulting from watershed approaches makes this an effort well worth making.

Ozone and the Citizen

Posted on January 12, 2010 by Alan Gilbert

Accompanied by a considerable public relations effort, the United States Environmental Protection Agency proposed new national ambient air quality standards for ozone on January 7, 2010. The agency wants to reduce the primary 8-hour ozone standard from its current value of 0.075 parts per million, promulgated by the last administration in March 2008, to a level in the range from 0.060 to 0.070 parts per million. Using its reconsideration of the 2008 standard as a platform, EPA emphasized that more careful attention should be paid to the recommendations of its Clean Air Scientific Advisory Committee. That, it says, is just good science.

 

The country will face considerable difficulty and expense meeting the proposed primary standard nationwide, especially at its lower range. Based on monitoring data from 2006 to 2008, EPA predicts that a proposed primary standard set at 0.060 parts per million would be violated in all but 24 of the counties monitored nationwide for the pollutant.
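
For context, attainment of the 8-hour standard is generally judged by a monitor's design value: the three-year average of each year's fourth-highest daily maximum 8-hour ozone concentration. The sketch below runs that calculation on synthetic numbers, in parts per billion (75 ppb = 0.075 ppm), and glosses over the data-completeness and rounding rules in the regulatory appendices:

    # Simplified 8-hour ozone design value: the 3-year average of each year's
    # fourth-highest daily maximum 8-hour concentration. Synthetic data, in ppb.
    daily_max_8hr_ppb = {
        2006: [81, 79, 78, 76, 74, 70],
        2007: [77, 75, 73, 72, 69],
        2008: [74, 73, 71, 68, 66],
    }

    def fourth_highest(values):
        return sorted(values, reverse=True)[3]

    annual = {yr: fourth_highest(v) for yr, v in daily_max_8hr_ppb.items()}
    design_value_ppb = sum(annual.values()) // 3   # truncated 3-year mean

    print(f"design value = {design_value_ppb} ppb ({design_value_ppb / 1000:.3f} ppm)")
    verdict = "exceeds" if design_value_ppb > 70 else "meets"
    print(f"{verdict} the upper end of the proposed 0.060-0.070 ppm range")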

 

In the part of the country where I live ― my office is in Colorado ― a new standard is going to be extremely difficult to meet, especially at the lower range of the proposal. In our area of the West, monitoring shows that we are, for the most part, quite close to either side of the current 2008 standard. The populous Northern Front Range region of the state reports ozone values of 0.071 parts per million to 0.086 parts per million. Colorado’s state health department, like many others in the West, is struggling mightily to form compliance strategies that will substantially improve ozone air quality in areas that today do not meet the existing standard.

 

Ozone pollution generally is formed in the atmosphere near the ground in very complex reactions that exploit energy from sunlight to transform a mix of volatile organic compounds, nitrogen oxides, carbon monoxide and methane. Control of ozone focuses on industrial facilities, the generation of electricity, motor vehicle exhaust gases, gasoline vapors, and chemical solvents. These are the major sources of volatile organic compounds and nitrogen oxides generated from human activities.

 

Ozone is particularly difficult to control because of pollutant transport. Precursor pollutants and ozone often arrive near a monitor in a complicated mix of local emissions and emissions carried by the wind from hundreds of miles away. A knowledgeable local air quality expert, quoted in an editorial printed on January 11, 2010, told The Denver Post that the lower range of EPA’s proposed standard is “close to background” levels for the Front Range of Colorado.

 

As in any primary national ambient air quality standard proceeding ― where the goal under the Clean Air Act is to protect the public’s health with a margin of safety ― fundamental, difficult and interesting questions must be addressed and answered. Who is EPA trying to protect through its proposed standard? It is focusing upon people with lung disease, especially children with asthma, elderly people, and people who are active outdoors, but it emphasizes protection of children. In her speech when the proposed standard was released, Lisa Jackson, the Administrator of EPA, told her audience of her 13-year-old son’s difficulty with asthma on days with high ozone levels. What is EPA protecting these people from? It is protecting against reduced lung function and irritation in their airways, aggravation of asthma and susceptibility to respiratory infection, and aggravation of chronic lung diseases. What does science have to say about the level of air pollution that supplies that protection? Usually even more important, what does the science not have to say ― what are the unknowns and assumptions we must make, given the limits to our knowledge? And ― dare I write it? ― is control possible at the levels suggested by the science available to us? At what cost and with what set of benefits?

 

In any event, I was particularly struck by a comment reported in our local newspaper when the standard was announced. My reaction may be an over-reaction, but what I read seemed eerily familiar to me and a bit worrisome. The Denver Post reprinted remarks by the Director of EPA’s Air Programs in Region 8. She reportedly said that residents can begin to make a difference to lower ozone levels by riding the bus and bicycling more instead of driving, weed-whacking and lawn mowing after sunset, and maybe ditching leaf-blowers and switching to push mowers. EPA Increases Burden on Denver to Reduce Smog, The Denver Post, January 8, 2010, p. B-1.

 

I was a young engineer working for EPA during its foray into federal indirect source controls in the late 1970s. The remarks I read in the newspaper brought back memories of those days. I hope the group of people currently in charge at EPA remember, too.

 

An indirect source, as defined in Section 110(a)(5)(C) of the Clean Air Act, is “a facility, building, structure, installation, real property, road, or highway which attracts, or may attract, mobile sources of pollution.” More plainly, in the early 1970s federal indirect source controls were designed to force people not to drive their cars to particular areas, if air quality might be imperiled, by forbidding or regulating the businesses that drew those people. In early 1973, responding to a court order, EPA proposed approaches like limiting the number of parking spots at airports, malls, sports venues and amusement parks, forcing parking garages to be smaller, and limiting or rejecting the development of real estate projects that would draw people in their cars. 38 Fed. Reg. 9599 (April 18, 1973). Later that year, EPA instructed the states to consider these strategies in preconstruction indirect source review. 38 Fed. Reg. 15834 (June 18, 1973) (promulgation of state implementation plan requirements for “mobile source activity associated with . . . buildings, facilities, and installations.”).

Even a very young engineer could tell that EPA’s approach was a disaster. Ordinary people (as distinct from the business people operating electric generating stations, paint booths, or oil and gas pipelines) were extremely unhappy when the federal government wanted to make their ordinary activities ― driving their cars to a shopping center or to watch a sporting event ― difficult or forbidden. And, of course, the states resented federal intrusion into traditional areas of state and local land use controls. It is also quite easy today to imagine how unhappy the powerful real estate developers were, too.

The reaction was predictable (at least in hindsight) and rapid. Congress forbade EPA from pursuing indirect source regulation in the interest of meeting air quality standards. In a 1974 supplemental appropriations act, Pub. L. No. 93-245, 87 Stat. 1071 (1974), Congress denied EPA budget funds and administrative authority “to administer any program to tax, limit, or otherwise regulate parking facilities.”  Permanent changes to the Clean Air Act came with the 1977 Amendments to the Clean Air Act, when restrictive language was added that is still codified in Section 110(a)(5) of the Clean Air Act. You can read more in a federal Court of Appeals opinion about the controversy and its aftermath, Manchester Environmental Coalition v. EPA, 612 F.2d 56 (2nd Cir. 1979) (successful challenge to EPA’s approval of Connecticut’s revocation of its implementation plan’s indirect source review program). You can get the flavor of this controversy by reading the discussion in the House Report that accompanied H.R. 6161, the House bill underlying the Clean Air Act Amendments of 1977. H.R. Rep. No. 294, 95th Cong., 1st Sess., 220-221 (May 12, 1977).

Today some states and political subdivisions choose to use indirect controls on air pollution in their programs. The San Joaquin Valley Air Pollution Control District in California and the State of Wisconsin have programs that are easily found on the Internet, for example.

But EPA is still forbidden to impose indirect source controls upon the states. And the remarks of the regional air quality official about the ozone proposal strike me as suggesting federal regulation that will be perceived in a quite similar and unhappy way by ordinary citizens. Is EPA going to force people to use push lawnmowers? To throw away their leaf blowers and weed whackers?

  

Will we follow this path again? I hope not. Controlling the precursors to the formation of ground level ozone at the levels now proposed by EPA is going to be terribly difficult and expensive, at best. But those controls must first have the support of citizens and their elected representatives if they are to have any chance to succeed. Whatever ambient air quality levels are eventually chosen by EPA in the ozone rulemaking, the control strategies eventually imposed by our federal government should not make people so angry that they forget why they are being protected.

When Do EPA BACT Requirements "Redesign the Source"? Not When EPA Says They Don't

Posted on January 7, 2010 by Seth Jaffe

Shortly before the holidays, EPA Administrator Jackson issued an Order in response to a challenge to a combined Title V / PSD permit issued by the Kentucky Division for Air Quality to an Integrated Gasification Combined Cycle, or IGCC, plant. The Order upheld the challenge, in part, on the ground that neither the permittee nor KDAQ had adequately justified why the BACT analysis for the facility did not include consideration of full-time use of natural gas notwithstanding that the plant is an IGCC facility. 

The Order may not be shocking in today’s environment – all meanings of that word intended – but the lengths to which the Order goes to avoid its own logical consequences show just what a departure this decision is from established practice concerning BACT. BACT analyses have traditionally involved the proverbial “top-down” look at technologies that can be used to control emissions from a proposed facility. In other words, EPA takes the proposal as a given, and then asks what the best available control technology is for that facility.

In EPA’s own words – from its New Source Review Workshop Manual (long the Bible for BACT analysis):

Historically, EPA has not considered the BACT requirement as a means to redefine the design of the source when considering available control alternatives. For example, applicants proposing to construct a coal-fired electric generator, have not been required by EPA as part of a BACT analysis to consider building a natural gas-fired electric turbine although the turbine may be inherently less polluting per unit product (in this case electricity).

Apt example, don’t you think? (In case you are wondering, EPA’s decision does not discuss or refer to this text from the NSR Manual.)

What was the basis for EPA’s decision here? Largely, it is that the IGCC facility will be designed to burn natural gas as well as syngas and the permittee specifically stated that it planned to combust natural gas during a 6-12 month startup period. On these facts, EPA concluded that the permittee and KDAQ had to do a better job explaining why full-time use of natural gas should be considered “to redefine the design of the source.”

As noted above, EPA went to great lengths to minimize the scope of the decision. It states that the Order:

should in no way be interpreted as EPA expressing a policy preference for construction of natural-gas fired facilities over IGCC facilities.

should not be interpreted to establish or imply an EPA position that PSD permitting authorities should conclude … that BACT for a proposed electricity generating unit is … natural gas.

does not conclude that it is not possible or permissible for the permit applicant … to develop a rationale which shows that firing exclusively with natural gas would “redefine the source.”

EPA does not intend to discourage applicants that propose to construct an IGCC facility from seeking to hedge the risk of investing in … IGCC technology by proposing … utilizing natural gas for some period….

Methinks EPA doth protest too much. If I may say so, this is a freakin’ IGCC facility. Isn’t it obvious that one doesn’t plan or build an IGCC facility if one plans to burn natural gas? Don’t you think that EPA could have taken administrative notice of what IGCC technology is?

All of EPA’s protestations about the Order’s limits may be designed to mollify IGCC supporters, but what does its rationale mean for all of the existing facilities – coal and oil – that are already capable of firing on natural gas? Next time they are subject to NSR/PSD review, must they evaluate the possibility of switching completely to natural gas? As I’ve said here before, yikes!

Ninth Circuit Rejects CERCLA UAO Due Process Challenge

Posted on January 6, 2010 by Theodore Garrett

The 9th Circuit affirmed the dismissal, for lack of jurisdiction, of a “pattern and practice” claim by a company that complied with an Environmental Protection Agency (EPA) unilateral administrative order (UAO) to conduct a remedial investigation. City of Rialto v. W. Coast Loading Corp., 581 F.3d 865 (9th Cir. 2009). While acknowledging that CERCLA's judicial review provisions contain "some pitfalls and difficult decisions for a PRP that faces a UAO," the court stated that the pattern and practice claim was not an “automatic shortcut” to federal court jurisdiction.

 

The case arose as a result of a unilateral administrative order (UAO) issued by EPA in July 2003 directing Goodrich to conduct a remedial investigation at a 160-acre site in Rialto, California. Goodrich elected to comply with the order. However, in late 2006 Goodrich filed a complaint against EPA alleging, inter alia, that the CERCLA review provisions on their face constitute a coercive regime violating due process. The district court held that it lacked jurisdiction over Goodrich’s “as-applied” challenge to the UAO because such pre-enforcement judicial review is foreclosed by §9613(h) of CERCLA. Goodrich then filed an amended “pattern and practice” claim alleging that EPA issues orders where no emergency exists, obstructs judicial review by delaying its discretionary certificates of completion, and controls and manipulates the record of decision. The district court granted EPA’s motion to dismiss, and Goodrich appealed to the Ninth Circuit.

 

The Ninth Circuit affirmed. The court of appeals concluded that Goodrich’s allegation that EPA routinely issues orders beyond its statutory authority was substantive because it necessarily depended on the facts of the particular UAO, and that meaningful judicial review of Goodrich’s substantive challenge is available under §9613(h). A claim that a UAO is unlawful can be addressed, the court stated, either by not complying with the UAO and defending an enforcement action, or by complying with the UAO and seeking reimbursement from the government. With respect to Goodrich’s claim that EPA routinely delays certifications of completion in order to thwart judicial review, the Ninth Circuit held that the claim is not ripe because the work required by the UAO has not been completed. Once Goodrich completes the work, it may bring a claim for reimbursement under §9606(b)(2). Finally, with respect to Goodrich’s allegation that EPA controls and manipulates the administrative record supporting the selected cleanup plan, the Ninth Circuit concluded that Goodrich’s allegations were not a “pattern and practice” claim, but rather a challenge to the judicial review provisions of the statute itself, which had been rejected by the district court and not appealed by Goodrich.

 

The Ninth Circuit noted that in General Electric v. Whitman, 360 F.3d 188, 191 (D.C. Cir. 2004), the D.C. Circuit remanded GE’s suit to the district court to address the merits of GE’s facial due process claim, and on remand the district court ruled on the merits and rejected GE’s pattern and practice claim. General Electric v. Jackson, 595 F. Supp. 2d 8 (D.D.C. 2009). This ruling on the merits contrasts with the Ninth Circuit’s ruling that the district court lacked jurisdiction. The Ninth Circuit, however, commented that its decision was “consistent” with the district court’s decision in GE, noting that the district court there held that it had jurisdiction not because of any independent analysis but because of its interpretation of the D.C. Circuit’s decision remanding the case for further proceedings.

Companies receiving a UAO and facing the statutory pitfalls and difficult decisions will likely not find much solace in the Ninth Circuit’s opinion. The district court’s opinion in the GE case is being appealed.

RESOLUTION OF TRI STATE WATER WAR ON THE HORIZON?

Posted on January 5, 2010 by Fournier J. Gale, III

For more than two decades, Alabama, Florida and Georgia have clashed over water use from the Apalachicola-Chattahoochee-Flint River Basin and the Alabama-Coosa-Tallapoosa River Basin to support growing demands for water in each state. While it may be an overgeneralization, the controversy largely pits Atlanta’s need for a large enough water supply to support its tremendous population growth against water needs in Alabama and Florida for consumption, hydroelectricity, irrigation, recreation, fisheries, and endangered species protection. The states reached a Memorandum of Agreement in 1992 which set a deadline for allocating water from the two watersheds to each state; however, the states were unable to reach an allocation agreement within the deadline, and previously filed litigation resumed. While negotiations since have proved futile, a recent federal court decision, along with the fact that the governors of all three states are leaving office in January 2011, may lead to a permanent solution to the tri-state water wars in the near future.

 

Specifically, on July 17, 2009, United States District Court Judge Paul Magnuson of the Middle District of Florida ruled that Georgia was not properly authorized to withdraw substantial amounts of water from Lake Lanier (a part of the Apalachicola-Chattahoochee-Flint River Basin) to provide drinking water to Atlanta. The Court held that because Lake Lanier is a federal reservoir built for purposes of flood control, hydropower generation, and navigation support, only Congress can approve the operational changes required for increased withdrawals of drinking water. Thus, the Court froze water withdrawals at current levels for the next three years to give time for Congressional approval. Without Congressional approval, withdrawals will revert to very low, baseline withdrawal levels used in the mid-1970s.

 

As a result of the new court-ordered deadline, negotiations between the three states have resumed with a new fervor. On December 15, 2009, the Governors of Alabama, Florida and Georgia met in Montgomery, Alabama to discuss plans for reaching an agreement. While the Governors did not offer specifics on their negotiations, they did indicate that they now hope to reach an agreement on an allocation plan that could be presented to their respective state legislatures for approval this year. If an allocation plan does make it through each state’s legislature, it would of course have to go before Congress for final approval as well. To meet such an ambitious goal, the Governors would have to reach an accord as early as spring of this year.