Accompanying the Renewable Energy Directive (RED) draft Delegated Act released on 8 February 2019, the European Commission published a scientific report that attempts to determine which crops should be considered ‘high ILUC risk fuel’ feedstocks.
The report isolates palm oil as the only feedstock crop that can be considered as presenting a high risk of indirect land use change.
This is odd, given that the EU has previously pointed to soybean and maize as significant drivers of deforestation. Industry observers should therefore be sceptical, particularly as the EU agreed a political deal to buy more soybeans from the US just before the scientific report was published.
In sum, this report is shoddy, and unusually so for a body that prides itself on producing accurate scientific data. Unfortunately, it reads like a press release from Transport and Environment (T&E) funded by a left-wing foundation or the Norwegian government. This is a sad decline in the intellectual rigour of the European Commission.
First, it appears to have been rushed. We are aware from our sources in Brussels that, as late as last Christmas, the scientific report was nowhere near completion.
This is borne out by the number of spelling errors throughout the report. It might be a minor quibble, but if you’re preparing something for publication and don’t have time to run it through a spell check, you’re clearly pressed for time.
Second, if this is the case, it is a mystery how firm conclusions were reached so quickly. Accurately attributing deforestation to specific crop types is something that has eluded researchers for years, particularly in the satellite imaging field. Yet if the report is to be accepted as robust, we are also supposed to believe that the study constructed a sound methodology, without producing any new datasets, in record time.
Third, this issue of the timeline is not incidental: it’s fundamental to how the Delegated Act was conceived. MEPs demanded such a fast timeline to benefit their political positioning ahead of the EU elections in May. Such politicisation of a scientific and technical process is highly irresponsible. The Commission admits privately that this timeline was never sufficient for the job at hand – and even in public meetings before the EU Parliament, Commission officials have hinted strongly that the timeline was irresponsible.
Problem 1: Can ILUC risk even be assessed?
Indirect land use change (ILUC) as a concept has been criticised many times before. The doubts around it are encapsulated in one simple statement: “ILUC cannot be observed or measured. Modelling is required to estimate the potential impacts.”
The problems with ILUC have been gone over again and again. The last time the EU undertook a review exercise was in 2017. It stated then:
Results of recent ILUC studies are far from consistent in their approaches and outcomes. After 2012, no further convergence in results is presented in the literature.
It also noted that palm oil had lower mean and median ILUC factors than sunflower, and a lower median than soybean.
That 2017 report also assessed whether and how low-ILUC-risk certification could take place, critically examining certification pathways and whether low-ILUC-risk production was in fact possible given the complexities of the market and substitutions between crops.
Isolating particular feedstocks was not one of the recommendations. In fact, some of the key recommendations of that report were as follows:
The analysis of the evidence on the different components of ILUC shows that for most ILUC components the scientific evidence is extremely poor …
Datasets on biofuel crop production must be collected, synthesized and standardized to common data formats. Analysis of historical information on agricultural production, trade, prices and yield, as well as land use changes may require further attendance in order to get a better understanding of the fundamental parameters that generate ILUC. Increased data availability and convergence of data formats and transparency, could also potentially help for validation of models and increase the use of empirical models. Satellite monitoring can support this development for different purposes, including ILUC research.
Finally, on methodologies for ILUC risk, it stated:
these need further refinement, particularly regarding: (1) the prove of additionality through calculation of trend line baseline yields, (2) availability of reliable data in all potential sourcing regions in the world, and (3) risk for unsustainable increases in irrigation water consumption needed to increase yields in arid regions. Also, the evaluation of unused land status and the duration of certification of 10 years, still has many open ends which need to be evaluated further.
This raises the question of how the Commission moved from accepting that assessing ILUC risk was close to impossible to stating in the scientific report that:
‘[ILUC] modelling has a number of limitations, but nevertheless, it is robust enough to show the risk of ILUC associated with conventional biofuels.’
Moreover, how does the Commission find it acceptable that a feedstock’s rate of expansion onto high carbon stock areas can somehow operate as a proxy for its ILUC risk?
This is the critical jump in logic that the Commission is attempting to make. And it doesn’t even explain how this ‘trick’ is done.
Problem 2: Smashing together the data
The second problem in the scientific study is how the various datasets are meshed together.
As stated above, the EU is attempting to quantify how much deforestation can be attributed to the expansion of a particular crop. According to the EU’s logic, the greater this expansion for a single crop, the greater the risk of ILUC.
The scientific report has nevertheless come up with figures, despite the fact that this highly complex and essentially unmeasurable calculation has eluded researchers for decades.
There are three datasets that are drawn heavily upon. All of them are robust for their own purposes. The question is whether they should be used for different purposes, which is what the EU seems to have done.
The first is by Curtis et al (2018). This study uses satellite monitoring data to determine whether forest loss is likely to be deforestation, and then whether that deforestation was for commodity production, urbanisation, shifting agriculture, etc. The study does not attempt to determine which commodity crops (or trees) were grown following the deforestation, and it assigns only one driver to any particular area. In other words, the study does not say whether deforestation was caused by oil palm or fibre plantations or soybean or cattle.
The second is a dataset produced by IIASA and IFPRI in 2015. This dataset is an estimate of which crops are grown where and how much of each crop is grown within a particular area. It is based on 5-square kilometre blocks, and isn’t directly derived from satellite data. Rather, the data is crowdsourced via user input. There are, therefore, some major quirks in the data. According to the maps, there’s quite a lot of oil palm grown in downtown Petaling Jaya, as well as some Arabica coffee (which only grows above 1000m). There’s also some wheat being grown in the south-eastern suburbs of Melbourne.
The thinking behind the dataset was to use it as a tool for crop land use decisions. It wasn’t designed to produce granular data to attribute precise areas to particular crops – although this is improving.
As the study says, “globally consistent maps showing the expansion of all individual biofuel crops through time are not available.”
However, the EU consultants overlaid the two datasets and assessed which ‘commodity driven deforestation’ lined up with the IFPRI/IIASA datasets, therefore coming up with an amount of deforestation that can be attributed to a particular, single crop. There are two caveats here that the consultants note: first, they are attributing all tree cover loss in a ‘commodity driven deforestation’ area to agriculture, and second, that they can attribute this deforestation proportionally to a single crop based on IFPRI data.
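The mechanics of that second caveat can be sketched in a few lines. This is a minimal illustration with invented numbers and invented field names, not the study’s actual code or data: each grid cell’s ‘commodity driven deforestation’ area is simply split across crops in proportion to that cell’s crowdsourced crop shares.

```python
# Illustrative sketch of proportional attribution of deforestation to crops.
# All figures and field names here are hypothetical; the EU study's actual
# inputs are the Curtis et al deforestation classes and the IFPRI/IIASA
# crop-share grids.

def attribute_deforestation(cells):
    """Split each cell's commodity-driven deforestation across crops
    in proportion to that cell's crowdsourced crop shares."""
    totals = {}
    for cell in cells:
        defor_ha = cell["commodity_deforestation_ha"]  # from satellite classes
        shares = cell["crop_shares"]                   # from crowdsourced grid
        total_share = sum(shares.values())
        if total_share == 0:
            continue
        for crop, share in shares.items():
            totals[crop] = totals.get(crop, 0.0) + defor_ha * share / total_share
    return totals

# Two hypothetical 5 km grid cells:
cells = [
    {"commodity_deforestation_ha": 1000,
     "crop_shares": {"oil_palm": 0.6, "wood_fibre": 0.4}},
    {"commodity_deforestation_ha": 500,
     "crop_shares": {"oil_palm": 0.2, "soybean": 0.8}},
]

# oil palm: 1000*0.6 + 500*0.2 = 700 ha; fibre: 400 ha; soybean: 400 ha
print(attribute_deforestation(cells))
```

The weakness is visible immediately: the satellite layer contributes only a total per cell, so the crop-level split is only ever as good as the crowdsourced shares it is multiplied by.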
The third dataset is FAO data on harvested areas for particular crops, which can be used to assess the total expansion of a particular crop over a particular time period. This data is based on country reports given to the FAO.
So these three datasets – satellite imagery, crowdsourced user input and country-level reports – are very simply smashed together, with no harmonisation of the data, no verification, and what seems like a lot of guesswork.
Here are some examples of problems.
Problem 1: crop area expansion. The study bases its cropland expansion area on USDA and FAO reports. It says that between 2008 and 2015, global oil palm plantation area increased by 7.8 million ha. USDA bases its harvested and production area analyses on seed sales and trade data.
Do these figures square with other data? According to statistics from MPOB, oil palm planted area in Malaysia increased by around 1.2 million ha in 2008-2015. Much of this was attributed to conversion of old rubber plantations. This is also the data used by the FAO.
According to Indonesia’s statistics agency, oil palm planted area increased by around 2.1 million ha over the same period, an increase that came despite a decline in smallholder palm areas.
This totals 3.3 million ha. Did the rest of the world really stack on another 4.5 million ha of oil palm plantations? This seems unlikely, if not impossible, given the dominance of Malaysia and Indonesia over the palm oil sector during that period.
Much of the discrepancy is in the FAO and USDA data. The figure used by the FAO for Indonesia’s planted area is an ‘unofficial figure’. Both say the Indonesian industry expanded by around 4 million ha in this period. But these estimates are only as good as the underlying trade and seed data.
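As a back-of-the-envelope check on the figures quoted above (all values in million ha, rounded as reported; this is our arithmetic, not the study’s):

```python
# Oil palm planted-area expansion, 2008-2015, in million ha (figures as quoted).
global_expansion = 7.8   # EU study figure, based on USDA/FAO reports
malaysia_mpob = 1.2      # MPOB planted-area increase
indonesia_bps = 2.1      # Indonesian statistics agency increase

accounted = malaysia_mpob + indonesia_bps
rest_of_world = global_expansion - accounted

print(round(accounted, 1))      # 3.3
print(round(rest_of_world, 1))  # 4.5
```

In other words, the report’s figures imply that countries outside Malaysia and Indonesia added roughly 4.5 million ha of oil palm in seven years, which squares with neither country-level statistics nor the structure of the industry.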
The point is that taking estimated crop expansion figures based on trade and seed data, combining them with satellite deforestation data, and adding estimates of cropland means that you end up with data that simply isn’t accurate.
Problem 2: crop area expansion (redux). The only truly comprehensive study of oil palm plantation mapping based on remote sensing data was undertaken by Cheng et al (2018). It’s worth noting that this study wasn’t cited in the EU’s literature review.
This study doesn’t look at changes in area over time. But it does compare satellite estimates with other sources for 2016 – and the gap is not small: around 10 million ha. Unfortunately, the study doesn’t establish the original 2008 baseline.
And this raises the question: do we really know what the 2008 baseline is? Neither the 2008 palm oil area nor the final 2015 figure used in the study is based on satellite mapping. But the supposed amount of deforestation for palm oil is.
Problem 3: disaggregation of palm oil from other tree crops and plantations. One of the problems that has plagued the interpretation of satellite data in the tropics is identifying and disaggregating oil palm expansion from fibre plantations and other crops.
As Curtis states, the problem is:
Forest plantations in Southeast Asia contained patterns of loss and regrowth similar to those seen with the expansion of new agricultural oil palm plantations categorized within the commodity-driven deforestation class. This was particularly true for small-scale palm plantations that are planted and grown at roughly the same spatial and temporal scale as short-rotation wood fiber plantations.
This is accounted for in the Curtis et al methodology by lowering the confidence interval on the final result, which is a landscape-wide quantification of deforestation due to commodities. The question is whether the IFPRI/IIASA datasets do the same – and there doesn’t appear to be any way for the crowdsourced data to make this distinction.
There are some reasonably obvious spots in Sumatra that have been converted from natural forest to eucalyptus and pine plantations, but these appear to fall within the ‘commodity’ class, undistinguished from palm oil, in both the Curtis and IFPRI datasets.
Problem 4: Regrowth and replanting. Finally, replanting and regrowth don’t appear to be fully accounted for. Areas around Jengka in Malaysia, where plantations were established decades ago and have been replanted at least once, if not twice, appear in GFW data as ‘commodity driven deforestation’. This doesn’t appear to have been resolved in the Curtis data – and therefore replanting will be counted as ‘deforestation-based expansion’. This could skew both the aggregate numbers and the geographic bias of the report; more importantly, it highlights again how simple errors creep in when different datasets are hurriedly layered on top of one another.
The problems with the EU study can be summarised as follows:
- There’s a considerable leap from the existing state of ILUC-risk knowledge to simply equating it with commodity-specific deforestation;
- There’s no accurate, satellite-based figure of palm crop expansion between 2008 and 2015;
- Data for commodity-based deforestation based on satellite data does not appear to adequately disaggregate palm oil from other commodities or account for replanting;
- Lining up ‘commodity based deforestation’ data with existing cropland map datasets is novel, but requires verification.
The EU is attempting to do something that has never previously been achieved: align deforestation data with crop data. There’s a reason this hasn’t been done previously: it’s a very difficult thing to do.
If this was a research project, it would require months, possibly years of refining and improving existing data and analytical techniques.
These conclusions are supposed to be informing a regulation that will have far-reaching implications for global vegetable oil markets and for millions of farmers.
The EU needs to take this exercise more seriously. In fairness, it is not necessarily the Commission’s fault: the rushed timeline was a political gambit by the EU Parliament. Unfortunately, neither the Commission nor the Member States appear to have the backbone to tell MEPs that their timeline was impossible and would lead to a flawed and indefensible outcome. Which is what has happened.
We hear that the Commission’s recent Stakeholder Expert Group meeting highlighted the lack of a proper impact assessment. EU officials at that meeting hinted that there will be a thorough re-assessment before the RED is fully transposed in the 28 EU Member States (the RED foresees a review of the Delegated Act in 2021 in any case). There is a lot of room for improvement.
In 2010, the LSE published a paper that described the Renewable Energy Directive as an example of ‘policy based evidence gathering’. Nearly a decade later, not much has changed.