5 essential tools for nature conservation we are still missing (Part 2/2)
Grégoire Dubois
Angry scientist. Managing the Knowledge Centre for Biodiversity (KCBD) of the European Commission. Posts are my own views.
In this second part of my post "5 essential tools for nature conservation we are still missing" (see Part 1/2), I will address the remaining three tools which I believe are a priority to develop in support of biodiversity conservation:
3) A single entry point to access free flowing and open-access biodiversity data
4) A citizen-science platform for protected areas
5) A tool answering "What is the impact on biodiversity of losing area X?"
I apologize for the clumsy writing, finding any time these days to write properly is becoming a real challenge.
3) A single entry point to access free flowing and open-access biodiversity data
Because everything affects biodiversity, almost all kinds of data are relevant to nature conservation: from species observations to data on international trade, from earth observations to local climatic data, from data on resource extraction to any other information documenting our human footprint, to cite only a few key topics. However, I will focus here on species information (species occurrences, diversity and abundance) and address the need for a single entry point where, for any location, we can access all the knowledge we have about species.
So what is the current status regarding the main global species data sets that are used for policy/decision making?
The GBIF—the Global Biodiversity Information Facility—is an international network and research infrastructure funded by the world’s governments and aimed at providing anyone, anywhere, open access to data about all types of life on Earth (see https://www.gbif.org). "Coordinated through its Secretariat in Copenhagen, the GBIF network of participating countries and organizations, working through participant nodes, provides data-holding institutions around the world with common standards and open-source tools that enable them to share information about where and when species have been recorded. This knowledge derives from many sources, including everything from museum specimens collected in the 18th and 19th century to geotagged smartphone photos shared by amateur naturalists in recent days and weeks." GBIF distributes nearly 50,000 datasets comprising some 1.3 billion occurrence records.
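For this one dataset, a programmatic single entry point already exists: GBIF exposes its records through a public REST API, so an area of interest can be queried with a single HTTP request instead of a bulk download. A minimal sketch in Python (the endpoint and `geometry`/`hasCoordinate`/`limit` parameters follow GBIF's public occurrence API; the response page below is mocked rather than fetched, so the sketch runs offline):

```python
import urllib.parse

GBIF_OCCURRENCE_API = "https://api.gbif.org/v1/occurrence/search"

def build_occurrence_query(geometry_wkt, limit=20):
    """Build a GBIF occurrence-search URL restricted to a WKT polygon.

    GBIF's API accepts a `geometry` parameter in WKT, so one GET request
    returns the occurrence records falling inside an area of interest.
    """
    params = {"geometry": geometry_wkt, "hasCoordinate": "true", "limit": limit}
    return GBIF_OCCURRENCE_API + "?" + urllib.parse.urlencode(params)

def summarise_response(payload):
    """Extract the total record count and the species names from one response page."""
    names = [rec.get("species") for rec in payload.get("results", []) if rec.get("species")]
    return payload.get("count", 0), sorted(set(names))

# Build a query for a small polygon in the Alps (illustrative coordinates)
url = build_occurrence_query("POLYGON((5 45, 6 45, 6 46, 5 46, 5 45))", limit=2)

# Mocked response page; the shape (count/results) follows the GBIF API docs
page = {"count": 128, "results": [{"species": "Lynx lynx"}, {"species": "Gypaetus barbatus"}]}
total, species = summarise_response(page)
```

A real call would simply pass `url` to any HTTP client; the point is that a location-specific question needs one request, not the download and processing of an entire global dataset.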
Juffe-Bignoli et al. (2016) evaluated the costs and funding sources for developing and maintaining key global biodiversity datasets: the IUCN Red List of Threatened Species, documenting more than 100,000 species in terms of their habitats and threats (https://www.iucnredlist.org/); the World Database on Protected Areas, providing the reference information on around 250,000 protected areas (https://www.protectedplanet.net/); the IUCN Red List of Ecosystems (https://iucnrle.org/); and the World Database of Key Biodiversity Areas (https://www.keybiodiversityareas.org/). These four datasets are secondary datasets, built on primary data collected by extensive networks of expert contributors worldwide. The authors estimate that US$ 160 million, plus around 300 person-years of volunteer time valued at US$ 14 million, were invested in these four knowledge products between 1979 and 2013. More than half of this financing was provided through philanthropy, and nearly three-quarters was spent on personnel costs. The estimated annual cost of maintaining data and platforms for three of these knowledge products (excluding the IUCN Red List of Ecosystems) is US$ 6.5 million in total. In contrast to GBIF data, these datasets need to be downloaded after a request is approved, cannot be redistributed, and are free to use only for non-commercial purposes. Private companies can, however, make use of these datasets through the IBAT portal (https://ibat-alliance.org/) for an annual fee, and the money raised goes back into the maintenance of the datasets.
There are interesting observations to be made here.
a. We have very little information about global biodiversity. Only about 100,000 species are documented by the Red List, out of an estimated 9 million species. Similarly, while the 1.3 billion occurrence records from GBIF sound like a lot, they represent on average only about 2.5 observations per km² globally. Considering that these records also encompass zoological collections in museums, these numbers, put into perspective, show that our level of ignorance regarding our living environment is incredibly high.
b. Biodiversity data are expensive to collect and manage, and governments have done little so far to support these efforts. The paper by Juffe-Bignoli et al. shows that only a third of the funding comes from governments, while charities are the largest source of funding.
c. There are only very few providers of global datasets and, aside from GBIF, these datasets cannot be used for commercial purposes unless end-users pay an annual fee to the IBAT partnership. This is a very important restriction that can prevent small businesses from assessing their impact on biodiversity, and other organisations from performing independent assessments.
d. Aside from GBIF, data are only accessible as downloadable files after access has been granted following a written request. As a consequence, the data cannot be automatically processed and integrated into other applications.
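The density figure in point (a) is simple to reproduce: dividing the roughly 1.3 billion GBIF records by the Earth's total surface area of about 510 million km² gives:

```python
records = 1.3e9             # approximate number of GBIF occurrence records
earth_surface_km2 = 510e6   # total surface of the Earth, land and ocean
density = records / earth_surface_km2  # ≈ 2.5 observations per km²
```

And this global average hides the fact that records are heavily clustered in a few well-surveyed countries, so most of the planet is far more sparsely documented still.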
I have used in the past the terminology "Critical Biodiversity Infrastructures" to refer to the platforms supporting the World Database on Protected Areas, the GBIF and the IUCN Red List of Threatened Species. While we are still quite far from a single platform where these datasets can be accessed freely, at any time and with a few mouse clicks for a specific location of interest, the Map of Life (https://mol.org/) has actually paved the way in this direction. It is, however, still experimental, and the global datasets it proposes are largely outdated, most probably because of licensing issues. Map of Life is furthermore an application only; no web services are provided.
Providing end-users with services allowing selective access to the data, without having to download large datasets, is easy to do and would save many people a lot of time and effort. We also have the technology to share such data in a way that they can easily be integrated with other connected platforms and datasets, an essential requirement for developing most of the information that is required for decision making, as we have demonstrated with the DOPA Explorer (https://dopa.jrc.ec.europa.eu/explorer). What is indeed the point of collecting species observations if these can't be linked to information on climate, land cover or administrative units? The challenges of climate change and biodiversity loss can be addressed only by developing information systems that are not merely catalogues but fully interoperable systems allowing any kind of dataset to be integrated on the fly. Any non-compliance regarding interoperability of data and systems is a barrier hindering the effective production of the information that underpins decisions about our future (see also GEO https://www.earthobservations.org/geo_wwd.php).
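To make the integration argument concrete, here is a toy sketch that links species observations to administrative units. Everything in it is illustrative: the coordinates, unit polygons and species are hypothetical, and a simple ray-casting point-in-polygon test stands in for a real GIS engine or spatial web service.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def join_observations_to_units(observations, units):
    """Assign each species observation to the first administrative unit containing it."""
    joined = {name: [] for name in units}
    for species, x, y in observations:
        for name, polygon in units.items():
            if point_in_polygon(x, y, polygon):
                joined[name].append(species)
                break
    return joined

# Hypothetical data: two square "administrative units" and three observations
units = {
    "unit_A": [(0, 0), (10, 0), (10, 10), (0, 10)],
    "unit_B": [(10, 0), (20, 0), (20, 10), (10, 10)],
}
observations = [("Lynx lynx", 3, 4), ("Ursus arctos", 15, 5), ("Lynx lynx", 7, 8)]
joined = join_observations_to_units(observations, units)
```

The operation itself is trivial; the barrier in practice is not the algorithm but getting both datasets in interoperable, machine-readable form from their providers in the first place.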
Setting up a single entry point to access biodiversity data is not only technically possible; its cost would be minimal compared to the money wasted by researchers and authorities who download and process the various datasets before extracting the information they need. Considering the importance of these datasets, their maintenance and distribution should be properly funded by all governments and made completely free to everyone. One could probably challenge the current restrictions, considering that some of the data are provided by governments (e.g. the World Database on Protected Areas) and that the funding comes from charities and governments that will probably want to see their support more broadly used.

Still, we need to remember that open and free access to datasets does not imply that no costs are involved. On the contrary, if we want to make sure data are curated and made accessible to everyone, anywhere and at any time at no cost, it means that someone is taking care of the maintenance, updates and services that make these data available. This comes at a cost, which I believe would be really small compared to the global benefits.

If I am obsessed with open-access data, it is because it is the sine qua non condition for transparent policies, decision-making and communication. Everyone, absolutely everyone, should be allowed to challenge any kind of decision by accessing, freely and without conditions, the information used to take these decisions. Democracy comes at a certain cost, and this point is fundamental to me. An open-access policy also allows for larger usability of the data and dramatically increases the opportunities to verify data quality. Personally, I would rather use an open-access dataset with many imperfections that is more likely to be used and assessed by others than a product of higher quality available only to a few.
Last but not least, for the data providers, monitoring the commercial use of the datasets might also come at a high cost, considering the uncertainties associated with the definition of "commercial use". This can indeed cover a broad range of activities, from the reselling of the data, to supporting commercial journals through publications, to the use of the data by private companies performing environmental impact assessments when planning new extraction or farming activities.
I have stressed this regularly: our future will depend on the mutual trust to be developed between citizens, companies, NGOs and administrations. This can be achieved only if we establish common grounds for discussion where decision-makers and citizens are not the hostages of the data providers, and where the efforts of the data providers are properly recognized and supported.
4) A citizen-science version of Protected Planet and DOPA
Scientific papers regularly show that we actually do not know much about the effectiveness of protected areas. Many are still "paper parks", namely protected areas that have been declared by local authorities on paper but that lack any kind of management. Earth observations are extremely useful for monitoring protected areas in terms of deforestation, changes in water surfaces, encroachment by local populations, urbanisation or loss of natural land to agriculture. But we still lack simple means to check whether the documented protected areas have well-defined borders or even really exist. We also have no simple means to capture information about other threats coming from invasive species or poaching, and we learn about human-wildlife conflicts from the news when it is already too late.
Protected Planet (https://protectedplanet.net/), developed by the UNEP-WCMC, and the European Commission's Digital Observatory for Protected Areas (DOPA, https://dopa.jrc.ec.europa.eu/explorer) are two platforms providing essential information about protected areas. While the first provides the reference information on park boundaries and the associated management information as provided by the national authorities, the second goes further by documenting the protected areas in terms of pressures, species and ecosystems, largely using products derived from remote sensing. These platforms are thus complementary, but neither captures any additional information from the ground. By developing standardised forms and/or by launching a new citizen-science initiative, it would be quite straightforward to collect essential information from the ground about any protected area. Not only would this gap be easily filled, but doing so would create a powerful bridge between policy-makers, decision-makers, donors, the actors on the ground and the citizens visiting the areas. Considering the importance of protected areas for the future of biodiversity and the forthcoming new global strategy post-2020, still having only a very abstract baseline of information on protected areas is really risky.
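As an illustration of how small such a standardised form could be, here is a sketch of a ground-report record that any citizen-science app could submit and any platform could ingest. Every field name and example value here is a hypothetical choice for illustration, not an existing schema:

```python
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class GroundReport:
    """A minimal, standardised field report for a protected area (hypothetical schema)."""
    wdpa_id: int            # identifier of the park, e.g. from the World Database on Protected Areas
    latitude: float
    longitude: float
    date: str               # ISO 8601, e.g. "2020-05-17"
    boundary_marked: bool   # are the park borders visible/signposted on the ground?
    threats: List[str] = field(default_factory=list)  # e.g. "poaching", "invasive species"
    notes: str = ""

# A hypothetical report submitted by a visitor
report = GroundReport(
    wdpa_id=900882,  # made-up identifier for the example
    latitude=-1.95, longitude=29.5, date="2020-05-17",
    boundary_marked=False, threats=["poaching"],
)
payload = asdict(report)  # plain dict, ready to serialise as JSON and submit
```

The value is not in the code but in the standardisation: a fixed, documented record makes reports from thousands of visitors comparable across parks and directly linkable to the WDPA and DOPA through the park identifier.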
5) A tool answering "What is the impact on biodiversity of losing area X?"
Point 3 above addressed almost exclusively information on species.

To estimate biodiversity loss properly, we need to answer a few basic questions. How many threatened species would be affected by building or farming in a given area? How much carbon would we lose? How would the loss of this natural area impact ecological connectivity, nature's blood system? These basic questions are actually not too complicated to answer, and a generic platform providing such information freely and openly to everyone would be quite useful, if only to highlight the areas under highest pressure and of highest biodiversity value.
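As a toy illustration of the kind of query such a platform would answer, the sketch below checks which threatened species would be affected by converting a given area. The species ranges are hypothetical and reduced to simple bounding boxes; a real tool would intersect the footprint with actual range polygons, carbon layers and connectivity maps.

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding boxes (xmin, ymin, xmax, ymax): do they intersect?"""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def species_affected(area, ranges):
    """Return the names of species whose (simplified) range boxes intersect the area."""
    return sorted(name for name, box in ranges.items() if boxes_overlap(area, box))

# Hypothetical, highly simplified range boxes for three threatened species
ranges = {
    "Lycaon pictus": (10, -5, 40, 15),
    "Gorilla beringei": (28, -3, 30, 1),
    "Panthera uncia": (65, 27, 105, 45),
}
area_x = (29, -2, 31, 0)  # candidate development footprint "area X"
affected = species_affected(area_x, ranges)
```

Scaled up with real datasets behind web services, the same intersection logic would let anyone, from a planner to a concerned citizen, ask "what do we lose if area X goes?" before the decision is taken rather than after.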
Such a tool would also be essential for investors.
References
- Juffe-Bignoli et al. (2016) Assessing the Cost of Global Biodiversity and Conservation Knowledge. https://doi.org/10.1371/journal.pone.0160640
Comments:

- Ecologist, Nature and Biodiversity Conservation Expert at Stantec (4 years ago): Soooo true! Thanks for summarizing these topics/tools. I would only like to emphasize that for some areas and/or species, data can't be collected on a voluntary/citizen-science basis alone, so structured support will be vital to ensure reliable data input.
- Officier de Garde at ICCN (5 years ago, translated from French): How can we get this manual?
- Group Lead of the Coastal Biodiversity Lab at Ciimar - Interdisciplinary Centre for Marine and Environmental Research (5 years ago): Please look also at OBIS for the coverage of the marine environment!
- Director General at CIFOR - Center for International Forestry Research (5 years ago): Great. I would add about your #5 that what we need is also the reverse: "what does it mean in terms of biodiversity if we restore this area?"
- CEO / NED / impact leader / change maker (5 years ago): Have you heard of eDNA to monitor nature? It can provide very big datasets and can be done by citizen scientists; NatureMetrics are already working with citizen-science projects around the world. It can also help answer questions about impact on biodiversity using big data / AI, since the datasets we can collect are so large compared to traditional techniques. Happy to chat!