Here's What The Guardian and Die Zeit Ignored in Their Distorted Coverage of Carbon Markets
Steve Zwick
Owner Producer Host @ Bionic Planet | Senior Advisor, Land Use and Supply Chains
As the community of experts weighs in on sloppy carbon reporting from the Guardian, Die Zeit, and SourceMaterial, it's important to look at what the reporters knew -- yet chose to ignore -- before publishing.
Verra's official response was short and sweet, but I sent the reporters a longer e-mail (see below) summarizing the themes of our prior conversations in an effort to put some of the stickier issues into perspective. Although I got this to them a few hours after their deadline, it was well before they published their stories -- and, more importantly, it summarizes issues we had already discussed.
In this e-mail, I didn't address the broader issue of how first-generation baselines are being reassessed because they only asked us to comment on their analysis. In conversations, however, we did discuss first-generation baselines, and I repeatedly stated that many of them would be reassessed downward. This is something Verra has long acknowledged, and it's why all projects undergo automatic reviews, during which all of the basic assumptions are re-examined.
We discussed the sweeping changes Verra is implementing in the new generation of REDD+ methodologies, as well as the challenge of moving from site-specific modeling to a more standardized approach -- which is key to reaching the scale we need. Instead of a nuanced discussion on probabilities and modeling, however, these reporters kept insisting on a simplistic dichotomy wherein experimental models are infallible oracles and first-generation methodologies are "fairy tales."
The fact, as I've stated before, is that deforestation is a wicked problem with multiple underlying drivers and no perfect solution. Both REDD+ and the larger suite of natural climate solutions have evolved substantially over the past 30 years, and they will continue to evolve as science advances.
But science won't advance if we turn the tools of honest inquiry against inquiry itself.
The E-Mail I Sent Late Monday Night/Early Tuesday Morning
Greetings,
After reviewing your questions and double-checking with members of our technical team, I can reiterate what I said initially: your conclusions about Verra baselines simply don’t add up. The academic exercises we discussed on our call, and which you cite, are philosophically interesting, but even the authors have layered in caveats that preclude your conclusions, which flow from six fundamental errors.
Let me recap the errors as I see them.
Error #1: Mistaking New Tools for Magic Bullets
On our only call together, we discussed new synthetic modeling tools that Verra and others are experimenting with to assess the impact of human activities on forests. You seemed to be trying to understand the role these tools can and cannot play, their advantages and disadvantages, and how they may or may not inform the reassessment of REDD+ baselines going forward. In your questions, however, you've glommed onto a handful of academic exercises that use synthetic modeling in ways that are philosophically interesting but would never pass muster in a bona fide carbon methodology.
The fact is that synthetic controls, counterfactual pixels, and a slew of other new tools are meaningless when detached from the realities of what's happening on the ground – the old GIGO (garbage in, garbage out) principle.
This does not mean the concepts have no value. Indeed, as we discussed on our call, they have promising applications when grounded in real-world data.
The bottom line is that synthetic modeling has value but isn’t a magic bullet. Wrongly applied, it can serve the interests of ideologues and opportunists while sidelining pragmatists seeking viable solutions.
You, however, have mislabeled these exercises as “real world” counterfactuals while dismissing methodologies built on decades of piloting, review, and consultation as “fairy tales.”
Error #2: Cherry-Picking and Magnifying the Minority
Related to the above: the basic concepts underlying these approaches are simple, but the tricky parts are the supporting components -- above all, the collection of reliable data and the identification of the indicators, or covariates, that make the models reflect conditions on the ground (more on this below).
The authors of the papers you’re citing have, by their own admission, given these components short shrift, which means they have not “estimated how much deforestation was prevented by the projects,” as you claim. They have merely shown that different approaches yield different results.
Error #3: Conflating the Measurement of Deforestation with Impact Evaluation
Some REDD+ projects are designed to impact only the project area, but most are designed to generate positive activities that spread into surrounding areas. These “positive externalities” are well-documented, but you’ll miss them if you rely on synthetic modeling without corresponding scenario analysis and process tracing.
Error #4: Ignoring the Mission of Standard-Setting Bodies
As a standard-setting body, Verra’s role is to review all available research in context and identify the sweet spot where most experts align. It is not to defend or attack individual studies.
Although we have recently begun to propose new methodologies ourselves, we have traditionally acted as a forum through which entities that want to produce a methodology can do so by exposing their ideas to iterative rounds of expert review and public consultation.
By following this approach and then making its documentation publicly available, Verra provides foundational methodologies on which some buyers may layer in additional filters – such as the Climate, Community, and Biodiversity (CCB) Standards, the proposed ABACUS label, or their own proprietary filters and preferences.
Intermediaries such as South Pole and CoolEffect apply filters based on their own additional criteria, as do sophisticated buyers such as Salesforce and many others.
Error #5: Seeing Offsets as See-Saws
You’re continuing to insist that every reduction achieved through the Voluntary Carbon Market (VCM) results in an increase someplace else, which is simply not true.
In a compliance market, offsetting is only permitted for residual emissions, and the voluntary market provides a vehicle for going above and beyond that to drive overall emissions down deeper and faster than companies can realistically achieve internally – and not, as you seem to believe, an excuse for doing nothing.
There is debate over what can realistically be achieved internally, and the Voluntary Carbon Markets Integrity Initiative (VCMI) is working to identify science-based criteria for what constitutes carbon neutrality. Verra supports that initiative as well as broader calls for more transparency in corporate disclosures, but you seem intent on holding Verra accountable for policing claims – which exhibits a profound (and, I suspect, willful) ignorance of the nature of that challenge.
Ecosystem Marketplace conducted an analysis of buyers in 2016 and found that companies that voluntarily purchased offsets tended to do so as part of a structured reduction strategy. Meanwhile, the fundamental laws of supply and demand render it impossible for emitters to simply offset their way out of this mess.
Error #6: Ignoring the Nature of the Challenge
Building on the above, you have chosen to ignore the near-universal acceptance of the need to emphasize deep reductions now while gradually building up the capacity to pull greenhouse gases from the atmosphere.
The Intergovernmental Panel on Climate Change (IPCC) tells us we must dramatically scale up Nature-Based Solutions (NBS) – and specifically REDD+ – to meet the climate challenge, and analysis shows that we'll have to reforest 50 hectares of forest for every hectare we lose in a given year to break even – or wait 50 years for that hectare to recover.
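To make the arithmetic behind that 50:1 figure concrete, here is a back-of-the-envelope sketch. The carbon-stock and regrowth numbers in it are hypothetical placeholders, not figures from the analysis cited above; they are chosen only so that the ratio works out to 50.

```python
# Back-of-the-envelope sketch of the 50:1 break-even arithmetic. Both
# figures below are hypothetical placeholders; only the ratio matters.
carbon_stock_per_ha = 500.0    # tCO2e stored in one mature hectare (assumed)
annual_regrowth_per_ha = 10.0  # tCO2e a regrowing hectare sequesters per year (assumed)

# Clearing a mature hectare releases its stock at once, while regrowth
# recovers it slowly, so balancing the loss within a single year takes
# stock/regrowth hectares of new forest -- and that same ratio, read in
# years, is how long one replanted hectare needs to break even.
ratio = carbon_stock_per_ha / annual_regrowth_per_ha
print(f"{ratio:.0f} ha replanted per ha lost, or {ratio:.0f} years to recover")
```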
REDD+ is a necessary transition mechanism, and scaling up requires, among other things, moving from site-specific modeling to a more standardized approach that incorporates the newest technologies. That’s the central challenge we’re dealing with here, but you keep insisting that an oversimplified application of new and evolving tools for developing standardized approaches automatically generates “findings” that are superior to site-specific modeling.
The Promise and Pitfalls of Synthetic Modeling
Synthetic modeling comes from the social sciences, where researchers have used it to isolate the effects of an “event or intervention of interest [on an] aggregate unit, such as a state or school district.”
It works not by comparing the impacted city or state to a single comparable unit but by comparing it to a synthetic unit modeled from multiple states, school districts, or other population centers.
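For readers unfamiliar with the technique, here is a minimal sketch of a synthetic control in Python. The donor regions, the deforestation rates, and the use of non-negative least squares to pick the weights are all illustrative assumptions, not a description of any particular study or methodology discussed here.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of a synthetic control. All figures are made up for
# illustration; real applications use many more units, years, and covariates.
# Rows: pre-intervention years; columns: untreated "donor" regions.
pre_donor = np.array([
    [1.2, 0.8, 2.1],
    [1.3, 0.9, 2.0],
    [1.1, 1.0, 2.2],
])                                       # deforestation rates (%/yr) in three donors
pre_project = np.array([1.5, 1.6, 1.4])  # same years, in the project region

# Find non-negative donor weights whose mix best reproduces the project's
# pre-intervention trajectory: that weighted mix is the "synthetic" region.
weights, _ = nnls(pre_donor, pre_project)
weights /= weights.sum()

# After the intervention starts, the same mix of donors supplies the
# counterfactual: what the project area "would have" lost without it.
post_donor = np.array([1.4, 1.1, 2.3])   # donors' observed post-intervention rates
counterfactual = post_donor @ weights
print(f"counterfactual deforestation rate: {counterfactual:.2f} %/yr")
```

Even in this toy form, the GIGO point above is visible: the counterfactual is only as good as the donor pool and the data behind it.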
VCM stakeholders have been experimenting with the application of synthetic modeling to deforestation for over a decade, and last year Verra approved a new methodology for projects that reduce emissions by promoting improved forest management (IFM) in the United States. Synthetic modeling proved effective with IFM because IFM consists of standardized interventions carried out across a relatively homogenous region – in contrast to REDD+ projects, which prescribe site-specific interventions for site-specific drivers of deforestation.
Despite the relative simplicity of IFM, it still took several years of piloting and multiple rounds of expert review and public consultation for the American Forests Foundation and The Nature Conservancy to develop the dynamic performance benchmarks (DPBs) that were eventually approved under Verra.
Recent advancements in remote sensing and artificial intelligence have ushered in a new era of digital measurement, reporting, and verification (DMRV), which has enabled several groups to present strategies for incorporating DPBs into REDD+ methodologies. All these efforts, however, are struggling to overcome the “tricky” challenges alluded to above because the drivers of deforestation are woven into local economies and thus vary greatly from country to country and region to region.
Verra faced a similar challenge in developing the new risk mapping tool designed to underpin the nesting of projects in jurisdictional REDD+ programs. In this case, research shows some indicators are somewhat predictive globally in the short term, but the variability from region to region is such that local weighting will be necessary. Even then, risk mapping is one component of a larger methodology and not a methodology in itself.
Among the many challenges to implementing DPBs in REDD+ are data collection and the identification of reliable indicators – called “covariates” – that can be used to synthesize counterfactual rates of deforestation. If you look at the covariates in the IFM methodologies, you will see how specific they are.
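To make the role of covariates concrete, here is a hedged sketch of the general idea: fit a model of deforestation rates on covariates observed in untreated reference sites, then use it to synthesize a counterfactual rate for a project site. The covariates (distance to roads, slope, population density) and every figure here are hypothetical, and real methodologies are far more demanding about which indicators qualify and how they are validated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hedged sketch of covariate-based baseline synthesis; all values are
# hypothetical. Columns: distance to road (km), slope (degrees),
# population density (people/km2).
X_reference = np.array([
    [2.0,  5.0, 120.0],
    [0.5,  2.0, 300.0],
    [8.0, 15.0,  40.0],
    [1.0,  3.0, 250.0],
])
y_reference = np.array([1.8, 2.6, 0.4, 2.2])  # observed deforestation (%/yr)

# Fit a simple model on untreated reference sites ...
model = LinearRegression().fit(X_reference, y_reference)

# ... then predict the counterfactual rate for a project site from its own
# covariates. Weak or unstable covariates make this prediction meaningless.
x_project = np.array([[3.0, 6.0, 100.0]])
print(f"synthesized baseline rate: {model.predict(x_project)[0]:.2f} %/yr")
```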
Limitations in the Literature
As I said on our call and will repeat here, Verra is reluctant to praise or critique academic exercises produced in good faith to inform broader discussion – especially when the authors themselves provide multiple caveats, as is the case here. Indeed, we welcome and encourage such exercises because they help us identify the sweet spots where most experts align.
I'll remind you that Guizar-Coutiño et al, while not evaluating baselines, found that deforestation was 47 percent lower in project areas than in their counterfactual pixels, while degradation rates were 58 percent lower. They concluded, "Our results indicate that incentivizing forest conservation through voluntary site-based projects can slow tropical deforestation and highlight the particular importance of prioritizing financing for areas at greater risk of deforestation."
On our call, I pointed out that these findings differed from those of West et al, but that neither study was conclusive.
Nonetheless, you have insisted on presenting these baseline extrapolations as gospel, so we have no choice but to point out some obvious shortcomings – which, again, the authors mostly acknowledge.
First, all three papers skirt the tricky issues that have so far prevented a wider incorporation of synthetic modeling into REDD+ baselines.
Second, real-world evidence contradicts the synthetic models. Swallow et al, for example, pointed out that actual rates of deforestation in project reference regions (as opposed to the rates in the synthetic models) exceeded the rates projected in the original baseline assessments. This deviates substantially from West's synthetic controls and controverts his thesis. Indeed, West et al 2020 shows that the synthetic controls do a poor job of projecting deforestation in many of the projects.
Third, the authors select project areas based on how well they fit their approach rather than on objective criteria. West et al dropped about 25 percent of projects or project areas from the selection due to a poor fit of synthetic controls, and in the case of Guizar-Coutiño et al, your final analysis included fewer than half of the projects they initially looked at.
Fourth, West et al acknowledge that their synthetic controls are derived from multiple, scattered sites that are smaller than the project area. In West et al 2020, they concede that such smaller sites cannot be said with certainty to behave the same way as the larger project area.
Fifth, even if the synthetic modeling were accurate, the findings wouldn't hold up, because a project baseline is not the same as the number of credits issued.
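To see why, consider a simplified sketch of how an issuance figure is derived from a baseline once typical REDD+ deductions are applied. The numbers and the 20 percent buffer share are illustrative assumptions; actual deductions for leakage and non-permanence vary by methodology and project.

```python
# Simplified sketch of why "baseline" and "credits issued" are different
# numbers. All values, including the buffer share, are illustrative only.
baseline_emissions = 100_000.0  # tCO2e the baseline projects absent the project
observed_emissions = 40_000.0   # tCO2e actually emitted in the project area
leakage_deduction = 10_000.0    # tCO2e of deforestation displaced elsewhere
buffer_share = 0.20             # fraction withheld in a non-permanence buffer pool

gross_reduction = baseline_emissions - observed_emissions  # 60,000
net_reduction = gross_reduction - leakage_deduction        # 50,000
credits_issued = net_reduction * (1.0 - buffer_share)      # 40,000
print(f"credits issued: {credits_issued:,.0f} "
      f"(vs. a {baseline_emissions:,.0f} tCO2e baseline)")
```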