Published

OGC Engineering Report

Engineering report for OGC Climate Resilience Pilot
Editors: Guy Schumann, Albert Kettner, Nils Hempelmann

Document number: 23-020r2
Document type: OGC Engineering Report
Document subtype:
Document stage: Published
Document language: English

License Agreement

Use of this document is subject to the license agreement at https://www.ogc.org/license



I.  Executive Summary

The OGC Climate Resilience Pilot marked the beginning of a series of enduring climate initiatives with the primary goal of evaluating the value chain from raw data to climate information within Climate Resilience Information Systems. This includes the transformation of geospatial data into meaningful knowledge for various stakeholders, including decision-makers, scientists, policymakers, data providers, software developers, service providers, and emergency managers. The results of the OGC Climate Resilience Pilot support the location community in developing more powerful visualization and communication tools to accurately address ongoing climate threats such as heat, drought, floods, and wildfires, as well as in supporting governments in meeting the commitments of their climate strategies. This is accomplished by evolving geospatial data, technologies, and other capabilities into valuable information for decision-makers, scientists, policymakers, data providers, software developers, and service providers so that they can make well-informed decisions to improve climate action.

One of the most significant challenges so far has been converting the outputs of global and regional climate models into specific impacts and risks at the local level. The climate science community has adopted standards, and numerous climate resilience information systems are now available online, allowing experts to exchange and compare data effectively. However, professionals outside the weather and climate domain, such as planners and GIS analysts working for agencies dealing with climate change impacts, have limited familiarity with and capacity to utilize climate data.

Stakeholders depend on meaningful information to make decisions or advance their science. In the context of climate change, this meaningful information is delivered through climate services as a combination of technical applications and human consultation. The technical infrastructures underpinning climate services, named here Climate Resilience Information Systems, require the processing of vast amounts of data from diverse providers across various scientific ecosystems.

This report assesses the value chain from raw data to climate information and the onward delivery to stakeholders. It explains good practices on how to design climate resilience information systems, identifies gaps, and gives recommendations on future work.

The OGC pilot demonstrated the capability of creating data pipelines to convert vast amounts of raw data through various steps into decision-ready information and 3D visualizations while embedding good practice approaches for communicating this knowledge to non-specialized individuals. In other words, in order to obtain decision-ready information, the data must first be collected from multiple sources and organized, then transformed into analysis-ready formats.

To address the value chain from raw data to decision-ready indicators, one focus of this pilot was to explore methods for extracting climate variables from climate model output scenarios and delivering them in formats that are more easily usable for post-processing experts, alongside being applicable to local situations and specific use-cases. Climate variable Data Cubes were extracted or aggregated into temporal and spatial ranges specific to the use cases. Then, the data structure was transformed from multidimensional gridded cubes into forms that can be readily utilized by geospatial applications. These pilot data flows serve as excellent examples of how climate data records can be translated into estimates of impacts and risk at the local level in a way that seamlessly integrates into existing planning workflows and is made available to a broad user community via open standards.

In addition, the pilot examined various parts of the processing pipelines using climate-impact case studies related to heat, droughts, floods, and wildfires, highlighting assessment tools and the complexities of climate indices. It also recognized the existence of solar radiation databases and web map services, emphasizing the need to enhance their accessibility and applicability at a national level to combat the effects of climate change by utilizing solar energy resources more efficiently. Ultimately, this Climate Resilience Pilot serves as a crucial asset for making well-informed decisions that bolster climate action. It particularly aids the location community in developing enhanced 3D visualization, simulation, and communication tools to effectively address prevalent climate change impacts and hazards caused by extreme meteorological events.

This report also demonstrates the workflow from data to 3D visualization, specifically for non-technical individuals. A chapter is dedicated to the options and challenges of applying artificial intelligence to establish a climate scenario digital twin in which the effectiveness of various climate action scenarios can be simulated. These simulations can encompass the reduction of disaster risks through technical engineering. The concept of climate resilience is explored by considering not only the shift of meteorological phenomena but also land degradation and biodiversity loss. More specifically, the scenarios focus on understanding the effects of climate change on vegetation in the Los Angeles area. 3D landscape vegetation simulations are presented, demonstrating how different tree species adapt under changing climate conditions represented by a range of climate and policy scenarios over time.

The pilot acknowledges the significant challenges of effectively conveying information to decision-makers. This necessitates a thorough examination of communication methods. Consequently, a dedicated chapter emphasizes unique approaches to facilitate effective communication with non-technical individuals, who frequently hold responsibility for local climate resilience action strategies. The development and implementation of a stakeholder survey provides insight into the strengths and weaknesses of past adaptation processes and allows for the derivation of opportunities for improvement. By prioritizing communication, the pilot aims to bridge the gap between technical and non-technical stakeholders, ensuring accurate and comprehensive information transmission for the benefit of both sides. The addition of this chapter demonstrates the pilot’s aim to enhance communication strategies to foster improved decision-making in the realm of climate resilience.

Overall, this engineering report presents various workflow processes which illustrate the seamless exchange of data, models, and components, such as climate application packages, that emphasize the potential for optimization using OGC Standards.

In the context of climate and disaster resilience, this document greatly contributes to a comprehensive understanding of flood, drought, heat, and wildfire assessments, offering insights into decision-making for climate action and specifically addressing the enhancement of Climate Resilience Information Systems in line with FAIR Climate Services principles.

II.  Keywords

The following are keywords to be used by search engines and document catalogues.

Climate Resilience, data, ARD, component, use case, FAIR, Drought, Heat, Fire, Floods, Data cubes, Climate scenario, Impact, Risk, Hazard, DRI, Indicator

III.  Submitters

The various organizations and institutes that contribute to the Climate Resilience Pilot are described below.

Table — Contributors of this Climate Resilience Pilot

Name | Organization | Role or Summary of contribution
Guy Schumann | RSS-Hydro | Lead ER Editor
Albert Kettner | RSS-Hydro/DFO | Lead ER Editor
Sacha Lepretre | CAE, Presagis (CAE Subsidiary) | Use of AI Digital Twin and Simulation for climate (5D Meta World demo with Laubwerk)
Timm Dapper | Laubwerk GmbH |
Peng Yue | Wuhan University | Datacube component
Zhe Fang | Wuhan University | Climate ARD component
Hanwen Xu | Wuhan University | Drought impact use cases
Dean Hintz | Safe Software, Inc. | Climate Analysis Ready Data and Drought Indicator
Kailin Opaleychuk | Safe Software, Inc. | Climate Analysis Ready Data and Drought Indicator
Samantha Lavender | Pixalytics Ltd | Development of drought indicator
Andrew Lavender | Pixalytics Ltd | Development of drought indicator
Jenny Cocks | Pixalytics Ltd | Development of drought indicator
Jakub P. Walawender | Freelance climate scientist and EO/GIS expert | Climate ARD and solar radiation use case
Daniela Hohenwallner-Ries | alpS GmbH | Communication with stakeholders
Hanna Krimm | alpS GmbH | Communication with stakeholders
Hinnerk Ries | alpS GmbH | Communication with stakeholders
Paul Schattan | alpS GmbH | Communication with stakeholders
Jérôme Jacovella-St-Louis | Ecere Corporation | Datacube API client and server
Patrick Dion | Ecere Corporation | Datacube API client and server
Eugene Yu | GMU |
Gil Heo | GMU |
Glenn Laughlin | Pelagis Data Solutions | Coastal Resilience & Climate Adaptation
Tom Landry | Intact Financial Corporation |
Steve Kopp | Esri | Climate services & web interface
Lain Graham | Esri | Climate services & web interface
Nils Hempelmann | OGC | Climate Resilience Pilot Coordinator

III.A.  About alpS

alpS GmbH is an international engineering and consulting firm that supports companies, municipalities, and governments in sustainable development and in dealing with the consequences, opportunities, and risks of climate change. Over the past 20 years, alpS has worked with more than 250 municipalities and industrial partners on climate-related projects and has accompanied a large number of adaptation cycles, from risk assessments to the implementation and evaluation of adaptation measures.

III.B.  CAE

CAE is a high-tech company with a mission and vision focused on safety, efficiency, and readiness. As a technology company, CAE digitalizes the physical world, deploying simulation training and critical operations support solutions. Above all else, CAE empowers pilots, airlines, defense and security forces, and healthcare practitioners to perform at their best every day, especially when the stakes are the highest. CAE represents 75 years of industry firsts: the highest-fidelity flight, future mission, and medical simulators, and personalized training programs powered by artificial intelligence. CAE invests time and resources into building the next generation of cutting-edge, digitally immersive training and critical operations solutions while keeping positive environmental, social, and governance (ESG) impact at the core of its mission. Presagis is part of CAE and specializes in developing 3D modeling and simulation software. Presagis has developed VELOCITY 5D (V5D), a next-generation 3D digital twin creation and simulation geospatial platform leveraging artificial intelligence.

III.C.  About Ecere

Ecere is a small software company located in Gatineau, Québec, Canada. Ecere develops the GNOSIS cross-platform suite of geospatial software, including a map server, a Software Development Kit, and a 3D visualization client. Ecere also develops the Free and Open Source Ecere cross-platform Software Development Kit, including a 2D/3D graphics engine, a GUI toolkit, an Integrated Development Environment, and a compiler for the eC programming language. As a member of OGC, Ecere is an active contributor in several Standards Working Groups as co-chair and editor, and has participated in several testbeds, pilots, and code sprints. In particular, Ecere has been a regular contributor and an early implementer of several OGC API standards in its GNOSIS Map Server and GNOSIS Cartographer client, and is also active in the efforts to modernize the OGC CDB data store and the OGC Styles & Symbology standard.

III.D.  About Esri

Esri is a leading provider of geographic information system (GIS) software, location intelligence, and mapping. Since 1969, Esri has supported customers (more than a half million organizations in over 200 countries) with geographic science and geospatial analytics, taking a geographic approach to problem-solving, brought to life by modern GIS technology. The ArcGIS platform includes an integrated system of desktop, web, and mobile software products and data committed to open science.

Within the context of this OGC engagement, Esri provides the full range of capabilities from CMIP climate data processing and publishing, spatial analysis for risk assessment, climate adaptation, and resilience, to web application development and science communication tools.

III.E.  About George Mason University (GMU)

George Mason University (GMU) is a public research university that conducts research and provides training to postdoctoral fellows, PhD candidates, and master’s students in Geospatial information science, remote sensing, satellite image analysis, geospatial data processing, Earth system science, geospatial interoperability and standards, geographic information systems, and other related subjects. GMU will contribute an ARD use-case.

III.F.  About Intact

Intact Financial Corporation (IFC) is the largest provider of Property & Casualty (P&C) insurance in Canada. IFC’s purpose is to help people, businesses, and society prosper in good times and be resilient in bad times. The company has been on the front lines of climate change for more than a decade, getting its customers back on track and adapted to change. As extreme weather is predicted to get worse over the next decade, Intact intends to double down on adjusting to this changing environment to become better prepared for floods, wildfire, and extreme heat.

With close to 500 experts in data, artificial intelligence, machine learning, and pricing, the Intact Data Lab has deployed almost 300 AI models in production to date, focusing on improving risk selection and making operations as efficient as possible while creating outstanding interactions with customers. Within Intact’s Data Lab, the Centre for Climate and Geospatial Analytics (CCGA) uses weather, climate, and geospatial data along with machine learning models and claims data to develop risk maps and other specialized products.

III.G.  About Laubwerk

Laubwerk is a software development company whose mission is to combine accurate, broadly applicable visualizations of vegetation with deeper information and utility that goes far beyond their visual appearance. Laubwerk achieves this through building a database that combines ultra-realistic 3D representations of plants with extensive metadata that represents plant properties. This unique combination makes Laubwerk a prime partner to bridge the gap from data-driven simulation to eye-catching visualizations.

III.H.  About Pixalytics Ltd

Pixalytics Ltd is an independent consultancy company specializing in Earth Observation (EO), combining cutting-edge scientific knowledge with satellite and airborne data to provide answers to questions about Earth’s resources and behavior. The underlying work includes developing algorithms and software, with activities including a focus on EO quality control and end-user focused applications.

III.I.  About Pelagis

Pelagis is an OceanTech venture located in Nova Scotia, Canada focusing on the application of open geospatial technology and standards designed to promote the sustainable use of ocean resources. As a member of the Open Geospatial Consortium, Pelagis co-chairs the Marine Domain Working Group responsible for developing a spatially-aware federated service model of marine and coastal ecosystems.

III.J.  About RSS-Hydro

RSS-Hydro is a geospatial solutions and service company focusing its R&D and commercial products on water risks, with a particular emphasis on the SDGs. RSS-Hydro has been part of several successful OGC testbeds, including the 2021 Disaster Pilot to which this pilot is linked, not only in terms of ARD and IRD but also in terms of use cases. In this pilot, RSS-Hydro’s main contribution is the lead of the Engineering Report. In terms of technical contributions to various other OGC testbeds and pilots, RSS-Hydro is creating digestible OGC data types and formats for specific partner use cases, in particular producing ARD from publicly available EO and model data, including hydrological model output as well as climate projections. These ARD will feed into the use cases of all participants, especially the use cases proposed for floods, heat, drought, and health impacts by other participants in the pilot. The created ARD in various OGC interoperable formats will create digestible dataflows for the proposed OGC use cases.

Specifically, RSS-Hydro can provide access to the following satellite and climate projection data.

  • Wildfire: Fire Radiant Power (FRP) product from Sentinel-3 (NetCDF), Sentinel-5P, MODIS products (fire detection), VIIRS (NOAA); possibly biomass availability (fire fuel)

  • Land Surface Temperature: Sentinel-3

  • Pollution: Sentinel-5P

  • Climate Projection data (NetCDF, etc., daily downscaled possible): air temperature (10 m above ground) with rainfall and possibly wind direction as well

  • Satellite-derived Discharge Data to look at droughts/floods etc. by basin or other scale

  • Hydrological model simulation outputs at (sub)basin scale

III.K.  About Safe Software

Safe Software, creator of the FME platform, has been a leader in supporting geospatial interoperability and automation for more than 25 years. FME was created to promote FAIR principles, including data sharing across barriers and silos, with unparalleled support for a wide array of both vendor-specific formats and open standards. Within this platform, Safe Software provides a range of tools to support interoperability workflows. FME Form is a graphical authoring environment that allows users to rapidly prototype transformation workflows in a no-code environment. FME Flow then allows users to publish data transforms to enterprise-oriented service architectures. FME Hosted offers a low-cost, easy-to-deploy, and scalable environment for deploying transformation and integration services to the cloud.

Open standards have always been a core strategy for Safe Software to better support data sharing. The FME platform can be seen as a bridge between the many supported vendor protocols and open standards such as XML, JSON, and OGC standards such as GML, KML, WMS, WFS, and the OGC APIs. Safe Software has collaborated extensively over the years with the open standards community. Safe Software actively participates in the CityGML and INSPIRE communities in Europe and is also active within the OGC community, having participated in many initiatives including testbeds and pilots such as Maritime Limits and Boundaries, IndoorGML, and most recently the 2021 Disaster Pilot and the 2023 Climate Resilience Pilot. Safe Software also actively participates in a number of Domain and Standards Working Groups.

III.L.  About Jakub P. Walawender

Jakub P. Walawender is a freelance climate scientist and EO/GIS expert carrying out his PhD research on the solar radiation climatology of Poland at the Laboratory for Climatology and Remote Sensing (LCRS), Faculty of Geography, Philipps University in Marburg, Germany. Jakub specializes in the application of satellite remote sensing, GIS, and geostatistics in the monitoring and analysis of climate variability and extremes and supports users in the application of different climate data records to tackle the effects of climate change.

III.M.  About Wuhan University (WHU)

Wuhan University (WHU) is a university that plays a significant role in researching and teaching all aspects of surveying and mapping, remote sensing, photogrammetry, and geospatial information sciences in China. In this Climate Resilience Pilot, WHU will contribute three components (ARD, Drought Indicator, and Data Cube) and one use-case (Drought Impact Use-cases).

1.  Terms, definitions and abbreviated terms

For the purposes of this document, the following terms and definitions apply.

Carrying Capacity

an area both suitable and available for human activity based on the state of the ecosystem and competitive pressures for shared resources

CityGML

an open standardized data model and exchange format to store digital 3D models of cities and landscapes

Data Cube

In computer programming contexts, a data cube (or datacube) is a multi-dimensional (“n-D”) array of values. Typically, the term data cube is applied in contexts where these arrays are massively larger than the hosting computer’s main memory; examples include multi-terabyte/petabyte data warehouses and time series of image data.

FAIR Climate Service

a climate resilience information system where the entire architecture follows FAIR principles

FAIR principles

the concept of making digital assets Findable, Accessible, Interoperable, and Reusable

Resilience

the ability of a system to compensate for impacts

Sentinel (satellite mission)

a series of next-generation Earth observation missions developed by the European Space Agency (ESA) on behalf of the joint ESA/European Commission initiative Copernicus

1.1.  Abbreviated terms

ACDC

Atmospheric Composition Data Cube

ACDD

Attribute Convention for Data Discovery

ACIS

Applied Climate Information System

ADES

Application Deployment and Execution Service

ADS

Atmosphere Data Store

AP

Application Package

API

Application Programming Interface

AR

Assessment Report

ARD

Analysis Ready Data

ARDC

Analysis Ready Data Cube

AWS

Amazon Web Service

BCSD

Bias Corrected Spatially Downscaled

BRDF

Bidirectional Reflectance Distribution Function

C3S

Copernicus Climate Change Service

CCI

Climate Change Initiative

CDI

Combined Drought Indicator

CDR

Climate Data Record

CDS

Climate Data Store

CEOS

Committee on Earth Observation Satellites

CF

Climate and Forecast

CGMS

Coordination Group for Meteorological Satellites

CIOOS

Canadian Integrated Ocean Observing System

CMIP

Coupled Model Intercomparison Project

CMR

Common Metadata Repository

CMRA

Climate Mapping for Resilience and Adaptation

COG

Cloud Optimized Geotiff

CRIS

Climate Resilience Information System

CSV

Comma-Separated Values

CWIC

CEOS WGISS Integrated Catalog

DEM

Digital Elevation Model

DRI

Decision Ready Indicator

DSW

Drought Severity Workflow

DWG

Domain Working Group

ECMWF

European Centre for Medium-Range Weather Forecasts

ECV

Essential Climate Variable

EDR

Environmental Data Retrieval

EFFIS

European Forest Fire Information System

EMS

Execution Management Service

EO

Earth Observation

ER

Engineering Report

ERA5

fifth generation ECMWF atmospheric reanalysis of the global climate

ESA

European Space Agency

ESDC

Earth System Data Cube

ESDL

Earth System Data Laboratory

ESIP

Earth Science Information Partners

EUMETSAT

European Organisation for the Exploitation of Meteorological Satellites

FAIR

Findable, Accessible, Interoperable, Reusable

FAPAR

Fraction of Absorbed Photosynthetically Active Radiation

FME

Feature Manipulation Engine

FOSS4G

Free and Open Source Software for Geospatial

FRP

Fire Radiant Power

FWI

Fire Weather Index

GCM

General Circulation Model

GCOS

Global Climate Observing System

GDO

Global Drought Observatory

GDP

Gross Domestic Product

GHG

Greenhouse Gases

GML

Geography Markup Language

GMU

George Mason University

GOOS

Global Ocean Observing System

GRACE

Gravity Recovery and Climate Experiment

HDF

Hierarchical Data Format

IFC

Intact Financial Corporation

IHO

International Hydrographic Organization

IMGW

Institute of Meteorology and Water Management

IOOS

Integrated Ocean Observing System

IoT

Internet of Things

IPCC

Intergovernmental Panel on Climate Change

JRC

Joint Research Centre

JSON

JavaScript Object Notation

KML

Keyhole Markup Language

LCRS

Laboratory for Climatology and Remote Sensing

LDN

Land Degradation Neutrality

LOCA

Localized Constructed Analogs

MERRA

Modern-Era Retrospective Analysis for Research and Applications

ML/AI

Machine Learning / Artificial Intelligence

MODIS

Moderate Resolution Imaging Spectroradiometer

MSDI

Marine Spatial Data Infrastructures

NASA

National Aeronautics and Space Administration

NCA4

National Climate Assessment 4

NCAR

National Center for Atmospheric Research

NDVI

Normalized Difference Vegetation Index

NDWI

Normalized Difference Water Index

NetCDF

Network Common Data Form

NOAA

National Oceanic and Atmospheric Administration

NRCan

Natural Resources Canada

OGC

Open Geospatial Consortium

OGE

Open Geospatial Engine

OMSv3

OGC Observations & Measurements 3.0

OPeNDAP

Open-source Project for a Network Data Access Protocol

OSM

OpenStreetMap

QGIS

Quantum Geographic Information System

RCI

Regional Climate Indicator

RCM

Regional Climate Model

RCP

Representative Concentration Pathway

REST

Representational State Transfer

S3

Simple Storage Service

SDG

Sustainable Development Goal

SMA

Soil Moisture Anomaly

SPEI

Standardized Precipitation Evapotranspiration Index

SPI

Standardized Precipitation Index

SQL

Structured Query Language

SR

Surface Reflectance

SSL

Secure Sockets Layer

STAC

SpatioTemporal Asset Catalogs

THREDDS

Thematic Real-time Environmental Distributed Data Services

TIE

Technology Integration Experiments

UNFCCC

United Nations Framework Convention on Climate Change

URL

Uniform Resource Locator

USGS

United States Geological Survey

VIIRS

Visible Infrared Imaging Radiometer Suite

WCS

Web Coverage Service

WFV

Wide Field View

WG Climate

Joint Working Group on Climate

WGISS

Working Group on Information Systems and Services

WHI

Wildland-Human Interface

WHU

Wuhan University

WMS

Web Map Service

WPS

Web Processing Service

WUI

Wildland-Urban Interface

XML

Extensible Markup Language

2.  Introduction

The OGC Climate Resilience Pilot represents the first phase of multiple long-term climate activities aiming to combine geospatial data, technologies, and other capabilities into valuable information for decision makers, scientists, policy makers, data providers, software developers, and service providers, assisting them in making well-informed decisions to improve climate action.

2.1.  The goal of the pilot

The goal of this pilot was to enable decision makers (scientists, city managers, politicians, etc.) to take the relevant actions to address climate change and to make well-informed decisions for climate change adaptation. Since no single organization has all the data needed to understand the consequences of climate change, this pilot shows how to use data from multiple organizations, available at different scales for large and small areas, in scientific processes, analytical models, and simulation environments. The aim was to demonstrate visualization and communication tools used to craft the message in the best way for any client. Many challenges can be met through resources that adhere to FAIR (Findable, Accessible, Interoperable, and Reusable) principles. The OGC Climate Resilience Pilot identifies, discusses, and develops these resources.

The goal was to help the location community develop more powerful visualization and communication tools to accurately address ongoing climate threats such as heat, drought, floods, and fires as well as supporting nationally determined targets for greenhouse gas emission reduction. Climate resilience is often considered the use case of our lifetime; the OGC community is uniquely positioned to accelerate solutions through collective problem solving with this initiative.

Figure 1 — Value chain from raw data to climate information

As illustrated, large sets of raw data from multiple sources require further processing before they can be used for analysis and climate change impact assessments. Applying data enhancement steps, such as bias adjustments, re-gridding, or the calculation of climate indicators and essential variables, creates “Decision Ready Indicators.” The spatial data infrastructures required for this integration should be designed with interoperable application packages following FAIR data principles. Heterogeneous data from multiple sources can be enhanced, adjusted, refined, or quality controlled to provide Science Services data products for climate resilience. The OGC Climate Resilience Pilot also illustrates the graphical exploration of the Decision Ready Indicators and effectively demonstrates how to design FAIR climate resilience information systems underpinning FAIR Climate Services. The pilot participants illustrate the necessary tools and visualizations to address climate actions moving towards climate resilience.

The vision of the OGC Climate Resilience Community is to support efforts on climate action, enable international partnerships (SDG 17), and move towards global, interoperable, open digital infrastructures providing climate resilience information on demand to users. This pilot contributes to establishing an OGC climate resilience concept store for the community, where all the information needed to build climate resilience information systems as open infrastructures can be found in one place, be it information about data services, tools, software, or handbooks, or a place to discuss experiences and needs. It covers all phases of climate resilience, from initial hazard identification and mapping, vulnerability and risk analysis, options assessment, prioritization, and planning, to implementation planning and monitoring capabilities. These major challenges can only be met through the combined efforts of many OGC members across government, industry, and academia.

2.2.  Objectives

This pilot set the stage for a series of follow-up activities and focused on use-case development, implementation, and exploration. It also answers the following questions.

  • What use-cases can be realized with the data, services, analytical functions, and visualization capabilities currently available? Current data services include, for example, the Copernicus Services, such as the Climate Data Store (CDS) (https://cds.climate.copernicus.eu/) and the Atmosphere Data Store (ADS) (https://ads.atmosphere.copernicus.eu/).

  • How much effort is required to realize these use-cases?

  • What is missing, or needs to be improved, in order to transfer the use-cases developed in the pilot to other areas?

The pilot had three objectives:

  • to better understand what is currently possible with the available data and technology;

  • to determine what additional data and technology need to be developed in the future to better meet the needs of the Climate Resilience Community; and

  • to capture Best Practices and allow the Climate Community to copy and transform as many use-cases as possible to other locations or framework conditions.

2.3.  Background

With growing local communities, an increase in climate-driven disasters, and an increasing risk of future natural hazards, the demand for National Resilience Frameworks and Climate Resilience Information Systems (CRISs) cannot be overstated. CRISs enable data search, retrieval, fusion, processing, and visualization; provide access to, understanding of, and use of federal data; facilitate the integration of federal and state data with local data; and serve as local information hubs for climate resilience knowledge sharing.

CRISs already exist and are operational, such as the Copernicus Climate Change Service with the Climate Data Store. CRIS architectures can be further enhanced by providing scientific methods and visualization capabilities as climate application packages. Based on FAIR principles, these application packages enable the reusability of CRIS features and capabilities. Reusability is an essential component when goals, expertise, and resources are aligned from the national to the local level. Framework conditions differ across nations, but application packages enable as much reuse of existing Best Practices, tools, data, and services as possible.

Goals and objectives of decision makers vary at different scales. At the municipal level, leaders and citizens directly face climate-related hazards. Aspects thus come into focus, such as reducing vulnerability and risk, building resilience through local measures, or enhancing emergency response. At the state level, the municipal efforts can be coordinated and supported by providing funding and enacting relevant policies. The national, federal, and international levels provide funding, data, and international coordination to enable the best analyses and decisions at the lower scales.

Figure 2 — Schematic synergies within different climate and science services FAIR and open infrastructures

Productivity and decision making are enhanced when climate application packages are exchangeable across countries, organizations, or administrative levels (see Figure 2). This OGC Climate Resilience Pilot is a contribution towards an open, multi-level infrastructure that integrates data spaces, open science, and local-to-international requirements and objectives. It contributes to the technology and governance stack that enables the integration of data including historical observations, real-time sensing data, reanalyses, forecasts, and future projections. It addresses data-to-decision pipelines, data analyses, and representation, and bundles everything in climate resilience application packages. These application packages are complemented by Best Practices, guidelines, and cookbooks that enable multi-stakeholder decision making for the good of society in a changing natural environment.

The OGC Innovation Program brings all of the various groups together: members of the stakeholder groups define use cases and requirements; the technologists and data providers experiment with new tools and data products in an agile development process; and the scientific community provides results in appropriate formats and enables open science by providing applications that can be parameterized and executed on demand.

Figure 3 — The OGC Climate Resilience DWG and Pilot bring the climate resilience community together with infrastructure providers, policy makers, commercial companies, and the scientific community

This OGC Climate Resilience Pilot is part of the OGC Climate Community Collaborative Solution and Innovation process, an open community process that uses OGC as the governing body for collaborative activities among all members. A spiral approach is applied to connect technology enhancements, new data products, and scientific research with community needs and framework conditions at different scales. The spiral approach defines real-world use cases, identifies gaps, produces new technology and data, and tests these against the real-world use cases before entering the next iteration. Evaluation and validation cycles alternate and continuously define new work tasks. These tasks include documentation and toolbox descriptions on the consumer side, and data and service offerings, interoperability, and system architecture developments on the producer side. It is emphasized that research and development is not constrained to the data provider or infrastructure side. Many tasks need to be executed on the data consumer side in parallel and then merged with advancements on the provider side at regular intervals.

Good results have been achieved using OGC API standards in the past. For example, the remote operations on climate simulations (roocs) use OGC API — Processes for subsetting data sets to reduce the data volume being transported. Other systems use STAC for metadata and data handling, or the OGC Earth Observation Exploitation Platform Best Practices for the deployment of climate application packages into CRIS architectures. Still, data handling for more complex climate impact assessments within FAIR and open infrastructures needs to be enhanced. There is no international recommendation or best practice on the usage of existing API standards within individual CRISs. It is the goal of this pilot to contribute to the development of such a recommendation, respecting existing operational CRISs already in service.
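
As an illustration, such a subsetting operation can be invoked through the standard execution request of OGC API — Processes. In the minimal Python sketch below, the server URL, process name ('subset'), and input names are hypothetical placeholders; only the request structure follows the standard.

    # Execute a (hypothetical) 'subset' process via OGC API - Processes.
    import requests

    execute_request = {
        "inputs": {
            # Hypothetical inputs: a dataset identifier, a time range,
            # and a bounding box (west,south,east,north).
            "collection": "cmip6.scenariomip.tas",
            "time": "2020-01-01/2050-12-31",
            "area": "-10,35,30,60",
        }
    }

    resp = requests.post(
        "https://example.org/ogcapi/processes/subset/execution",  # hypothetical
        json=execute_request,
        headers={"Accept": "application/json"},
        timeout=120,
    )
    resp.raise_for_status()
    result = resp.json()  # job status or result document, per server setup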

Figure 4 — Schematic Architecture of a Climate Resilience Information System. By respecting FAIR principles for the climate application packages, the architecture enables open infrastructures to produce and deliver information on demand according to users’ needs

2.4.  Technical Challenges

Realizing the delivery of Decision Ready Data on demand to achieve Climate Resilience involves a number of technical challenges that have already been identified by the community. A subset will be selected and embedded in use-cases that will be defined jointly by Pilot Sponsors and the OGC team. The goal is to ensure a clear value-enhancement pipeline as illustrated in Figure 1, above. This includes, among other elements, a baseline of standardized operators for data reduction and analytics. These need to fit into an overall workflow that provides translation services between upstream model data and downstream output — basically from raw data to analysis-ready data to decision-ready data.

The following technical challenges have been identified and will be treated in the focus areas of the pilot.

  • Big Data Challenge: Multiple obstacles still exist which create barriers to seamless information delivery, starting with Data Discovery. The emergence of new data platforms, processing functionalities, and products means that data discovery remains a challenge. In addition to existing solutions based on established metadata profiles and catalog services, new technologies such as the SpatioTemporal Asset Catalog (STAC) and open Web APIs such as OGC API — Records will be explored (a minimal discovery sketch follows this list). Furthermore, aspects of Data Access need to be solved, where the new OGC API suite of Web APIs for data access, subsetting, and processing is currently utilized very successfully in several domains. Several code sprints have shown that server-side solutions can be realized within days and that clients can interact very quickly with these server endpoints, radically reducing development time. A promising specialized candidate for climate data and non-climate data integration has recently been published in the form of OGC API — Environmental Data Retrieval (EDR). But which additional APIs are needed for climate data? Is the current set of OGC APIs sufficiently qualified to support the data enhancement pipeline illustrated in Figure 1? If not, what modifications and extensions need to be made available? How do OGC APIs cooperate with existing technologies such as THREDDS and OPeNDAP? For challenges of data spaces, Data Cubes have recently been explored in the OGC Data Cube workshop. Ad hoc creation and embedded processing functions have been identified as essential ingredients for efficient data exploration and exchange. Is it possible to transfer these concepts to all stages of the processing pipeline? How can users scale both ways from local, ad hoc cubes to pan-continental cubes, and vice versa? How can cubes be extended as part of data fusion and data integration processes?

  • Cross-Discipline Data Integration: Different disciplines, such as Earth Observation, the various social sciences, or climate modeling, use different conceptual models in their data collection, production, and analytical processes. How can these different models be mapped? What patterns have been used to transform conceptual models to logical models and, eventually, physical models? The production of modern decision-ready information requires the integration of several data sets, including census and demographics, further social science data, transportation infrastructure, hydrography, land use, topography, and other data sets. This pilot cycle uses ‘location’ as the common denominator between these diverse data sets, which works across several data providers and scientific disciplines. In terms of Data Exchange Formats, the challenge is to know which data formats need to be supported at the various interfaces of the processing pipeline. What is the minimum constellation of required formats to cover the majority of use cases? What role do container formats play? Data Provenance is also challenging on the technical level. Many archives include data from several production cycles, such as IPCC AR5 and AR6 models. In this context, long-term support needs to be realized, along with full traceability from high-level data products back to the original raw data. Especially in the context of reliable, data-based policy, clear audit trails and accountability for the data-to-information evolution must be ensured.

  • Application packages for processing pipelines: Machine Learning and Artificial Intelligence play an increasing role in the context of data science and data integration. This focus area evaluates the applicability of machine learning models in the context of the value-enhancing processing pipeline. What information needs to be provided to describe machine learning models and corresponding training data sufficiently to ensure proper usage at various steps of the pipeline? Upcoming options to deploy ML/AI within processing APIs to enhance climate services raise new challenges, e.g., how to initiate or ingest training models and the appropriate learning extensions for the production phase of ML/AI. Heterogeneity in data spaces can be bridged with Linked Data and Data Semantics. Proper and common use of shared semantics is essential to guarantee solid value-enhancement processes. At the same time, resolvable links to procedures, sampling and data process protocols, and the applications used will ensure transparency and traceability of decisions and actions based on data products. What level is currently supported? What infrastructure is required to support shared semantics? What governance mechanisms need to be put in place?
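
As a concrete illustration of the data discovery challenge above, the following minimal sketch performs a STAC search with the pystac-client Python library against the public Earth Search endpoint on AWS, which also catalogs the Sentinel-2 COGs referenced later in this report. The endpoint, collection name, and area of interest are assumptions for illustration, not pilot deliverables.

    # Minimal STAC discovery sketch (assumed public Earth Search endpoint
    # and "sentinel-2-l2a" collection; adjust for other catalogs).
    from pystac_client import Client

    catalog = Client.open("https://earth-search.aws.element84.com/v1")

    search = catalog.search(
        collections=["sentinel-2-l2a"],
        bbox=[-118.7, 33.6, -117.6, 34.4],  # Los Angeles area (illustrative)
        datetime="2022-06-01/2022-06-30",
        max_items=5,
    )

    for item in search.items():
        # Each item links to its COG assets and carries rich metadata.
        print(item.id, item.properties.get("eo:cloud_cover"))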

2.5.  Relevance to the Climate Resilience Domain Working Group

The Climate Resilience DWG will concern itself with technology and technology policy issues, focusing on geospatial information and technology interests as related to climate mitigation and adaptation, as well as the means by which those issues can be appropriately factored into the OGC standards development process.

The mission of the Climate Resilience DWG is to identify geospatial interoperability issues and challenges that impede climate action, then examine ways in which those challenges can be met through the application of existing OGC Standards, or through development of new geospatial interoperability standards under the auspices of OGC.

Activities to be undertaken by the Climate Resilience DWG include, but are not limited to:

  • identify the OGC interface standards and encodings useful to apply FAIR concepts to climate change services platforms;

  • liaise with other OGC Working Groups (WGs) to drive standards evolution;

  • promote the use of the aforementioned standards with climate change service providers and policy makers addressing international, regional, and local needs;

  • liaise with external groups working on technologies relevant to establishing ecosystems of EO Exploitation Platforms;

  • liaise with external groups working on relevant technologies;

  • publish OGC Technical Papers, Discussion Papers, or Best Practices on interoperable interfaces for climate change services; and

  • provide software tool kits to facilitate the deployment of climate change services platforms.

2.6.  Value Chain from raw data to Information

During this pilot, participants worked on a number of workflows and architectures focusing on use cases for floods, droughts, heatwaves, and fires. This required the use of Climate Resilience Information Systems, where interoperability played a vital role in producing climate information by enabling seamless integration and exchange of information between data, models, and various components.

The value chain from raw data to climate information (Figure 1) can be clustered into sections according to value quality. This value chain, often also compared to a conveyor belt, can be designed with different component workflows, which are developed, analyzed, and described in this pilot. The order of the chapters reflects the organization and processing along the value chain, starting with Raw data to Datacubes (Chapter 3). Chapter 4 then describes the data refinement from raw data and datacubes to Analysis Ready Data (ARD). Various data pipelines are considered and evaluated on how best to move raw data, first to data cubes for efficient handling, and then to ARD, or to derive the ARD directly from the raw data. This guides the discussion on the standardization of Data Cubes and ARD. Subsequently, Chapter 5 illustrates how to transform ARD into Decision Ready Indicators (DRIs), including an example set of climate indices. The pilot also demonstrates the added value of high-end 3D visualization combined with artificial-intelligence-enriched simulations for increasing climate resilience and for facilitating the decision-making process. The use-case-driven value chain from data to visualization is described in Chapter 6. To close an important gap, a strong emphasis has been placed on Climate Information and Communication with Stakeholders in Chapter 7, outlining the importance of consultation work with non-technical users to identify their requirements and to optimize information delivery on demand for specific use cases. Some of the value chain elements from raw data to visualization are illustrated by use cases in Chapter 8, and the Lessons Learned (Chapter 9) showcase the pilot’s work, including challenges with the value chain from raw data to climate information. The final chapter, Chapter 10, Recommendations for future climate resilience pilots, describes future work.

3.  Raw data to Datacubes

Raw data and Datacubes are two different forms for organizing and structuring data in the context of data analysis and data warehousing.

  1. Raw Data refers to the unprocessed, unorganized, and unstructured data that is collected or generated directly from various sources. It can include a variety of forms such as text, numbers, (geo) images, audio, video, or any other form of data. Raw data often lacks formatting or context and requires further processing or manipulation before it can be effectively analyzed or used for decision-making purposes. Raw data is typically stored in databases or data storage systems.

  2. Datacubes, also known as multidimensional cubes, are a structured form of data representation that organizes and aggregates raw data into a multi-dimensional format. Datacubes are designed to facilitate efficient and fast analysis of data from different dimensions or perspectives. They are commonly used in data warehousing.

Datacubes organize data into a multi-dimensional structure typically comprising dimensions, hierarchies, and cells. Dimensions represent various attributes or factors that define the data, such as time, geography, or products. Hierarchies represent the levels of detail within each dimension. Cells typically store the aggregated data values at the intersection of dimensions.

Datacubes enable users to perform complex analytical operations like slicing, dicing, drilling down, or rolling up data across different dimensions. They provide a summarized and pre-aggregated view of data that can significantly speed up query processing and analysis compared to working directly with raw data, which is very valuable for the climate resilience community. Therefore, datacubes are often used to support decision-making processes. Section 3.2 below highlights a climate-resilience-related example of how to create and make available datacubes for wildfire risk analysis.
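
To make the slicing and aggregation operations concrete, the following minimal sketch uses xarray, a Python library widely used in the climate community for labeled multidimensional arrays. The file name ("tas_daily.nc") and variable name ("tas") are illustrative assumptions, not pilot deliverables.

    # Illustrative datacube operations with xarray. Assumes a hypothetical
    # NetCDF file "tas_daily.nc" holding daily near-surface air temperature
    # ("tas") with time/lat/lon dimensions.
    import xarray as xr

    ds = xr.open_dataset("tas_daily.nc")

    # Slicing: select a time window and a spatial subset.
    # (The order of the lat bounds must match the coordinate ordering.)
    summer = ds["tas"].sel(
        time=slice("2020-06-01", "2020-08-31"),
        lat=slice(30, 50),
        lon=slice(-10, 20),
    )

    # Rolling up: aggregate daily values to monthly means.
    monthly_mean = summer.resample(time="1MS").mean()

    # Dicing along another dimension: a seasonal climatology.
    seasonal = ds["tas"].groupby("time.season").mean("time")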

3.1.  Analysis Ready Data Cubes — user-friendly sharing of climate data records

A Climate Data Record (CDR) is a time series of measurements of sufficient length, consistency, and continuity to determine potential climate variability and change (US National Research Council). These measurements can be obtained through ground-based stations or derived from a long time series of satellite data.

Data Cube (different approaches): Datacubes organize data into a multi-dimensional structure. They typically contain:

  • multidimensional arrays of data (Kopp et al., 2019);

  • 4-dimensional arrays with dimensions x (longitude or easting), y (latitude or northing), time, and bands sharing the same data properties (Appel and Pebesma, 2019); and

  • the term “cube” can be a metaphor to help illustrate a data structure that can in fact be 1-dimensional, 2-dimensional, 3-dimensional, or higher-dimensional. The dimensions may be coordinates or enumerations, e.g., categories (OGC, 2021).

The common (technical) definition of the Data Cube focuses exclusively on data structure aspects.

Analysis Ready Data (ARD) are datasets that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with a minimum of additional user effort, as well as interoperability both through time and with other datasets. ARD often represent satellite data (CEOS website).

The idea behind the ARD is that data providers, such as EUMETSAT or ESA, are better suited to perform data pre-processing, e.g., atmospheric correction, cloud masking, and re-gridding, than users.
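
As a hedged illustration of one such pre-processing step, the sketch below re-grids a coarse model field onto a finer regular grid and converts its units using xarray; the input file, variable name, and target resolution are assumptions for illustration.

    # Illustrative re-gridding step turning raw model output into a more
    # analysis-ready form (hypothetical input file and target grid).
    import numpy as np
    import xarray as xr

    raw = xr.open_dataset("cmip5_tas_2.5deg.nc")  # hypothetical raw file

    # Target regular 0.5 degree grid.
    new_lat = np.arange(-89.75, 90, 0.5)
    new_lon = np.arange(-179.75, 180, 0.5)

    # Linear interpolation onto the new grid. (For production ARD, a
    # conservative re-gridding method, e.g., via xESMF, is preferable.)
    regridded = raw["tas"].interp(lat=new_lat, lon=new_lon, method="linear")

    # Convert Kelvin to degrees Celsius and keep CF-style metadata.
    tas_c = regridded - 273.15
    tas_c.attrs.update(units="degC", long_name="Near-surface air temperature")
    tas_c.to_netcdf("cmip5_tas_0.5deg_ard.nc")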

Analysis Ready Data Cubes (ARDCs)
ARDCs are often made available in the form of data cubes which focus on one specific region (e.g., Swiss Data Cube) or thematic application (e.g., EUMETSAT Drought & Vegetation Data Cube (D&V Data Cube)). Data cubes can also be defined by the type of data they include (e.g., Atmospheric Composition Data Cube (ACDC), Earth System Data Cube (ESDC) + Data Analytics Toolkit (Earth System Data Lab)).
A data cube which contains various climate data records can generally be referred to as a Climate Data Cube.

Table 1 — Example Climate Data Cubes

Data Cube name | Climate Data Records | Provider | Year of release | Data source | Accessibility | Data format | Temporal coverage
EUMETSAT Drought & Vegetation Data Cube | Solar radiation: Global Radiation, Direct Normal Solar Radiation, Sunshine Duration; other: Land Surface Temperature, Reference Evapotranspiration, NDVI, Fractional Vegetation Cover, Leaf Area Index, Fraction of Absorbed Photosynthetically Active Radiation, Soil Wetness Index (root zone), Precipitation, Air temperature at 2 m | EUMETSAT | 2021 | CMSAF SARAH2 (for solar radiation); other: LSA SAF, H SAF, GPCC, ECMWF | Free after enrollment (EUMETSAT Prototype Satellite Data Cube) | CF-compliant netCDF4 via a THREDDS server | Solar radiation: 1983-2020; other: 2004-2020; SWI: 1992-2020; Precipitation: 1982-2020; T2m: 1979-2020
mesogeos, a Daily Datacube for the Modeling and Analysis of Wildfires in the Mediterranean | Solar radiation: mean daily surface solar radiation downwards from ERA5-Land; other: dynamic variables (previous-day Leaf Area Index, evapotranspiration, Land Surface Temperature, meteorological data, fire variables, and Fire Weather Index) and static variables (roads density, population density, and topography layers) | One of many data cubes created within Deep Cube (Horizon 2020 project “Explainable AI pipelines for big Copernicus data”) | 2022 | MODIS, ERA5, JRC European Drought Observatory, worldpop.org, Copernicus C3S, Copernicus EU-DEM, EFFIS | Free, open code on GitHub | .zarr (file storage format for chunked, compressed, N-dimensional arrays based on an open-source specification), Jupyter Notebooks (Python) | 2002-2022
The Earth System Data Cube (ESDC) | Solar radiation: Surface Net Solar Radiation; other: the cube includes all important meteorological variables (the list is too long to include in this table) | DeepESDL Team (ESA-funded project Earth System Data Lab) | 2022 | ERA5 (for solar radiation) | Free | Directory of NetCDF files based on xcube; can also be accessed via a dedicated ESDL THREDDS server which supports OPeNDAP and WCS | 1979-2021
MADIA, Meteorological variables for agriculture (Italy) | Solar radiation: mean of daily surface solar radiation downwards (shortwave radiation); other 10-day gridded agro-meteorological data: air temperature and humidity, precipitation, wind speed, evapotranspiration | Council for Agricultural Research and Economics, Research Centre for Agriculture and Environment | 2022 | ERA5 hourly data accessed through the Climate Data Store | Free | NetCDF, CSV, and vector file (Shapefile) for administrative regions (NUTS 2 and 3) | 1981-2021
Open Environmental Data Cube | Climate: air temperature (min, mean, max), land surface temperature (min, mean, max), precipitation (daily sum); other: natural disasters, air quality, land cover, terrain, soil, forest, and vegetation | OpenGeoHub, CVUT Prague, mundialis, Terrasigna, MultiOne (Horizon 2020 project “Geo-harmonizer: EU-wide automated mapping system for harmonization of Open Data based on FOSS4G and Machine Learning”) | 2022 | ERA5 (for climate variables) | Free | WFS for vector data; Cloud Optimized GeoTIFFs for raster datasets (allowing import, subset, crop, and overlay of parts of data for a local area) | 2000-2020, plus predictions based on ensemble machine learning

Analysis Ready Data Cubes (ARDCs) play an important role in handling large volumes of data (such as satellite-based CDRs). They are often deployed at different spatial scales and consist of datasets dedicated to particular applications. This makes them more accessible, easier to use, and less costly for users.
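
Many of the cubes in Table 1 are shared as chunked, cloud-native stores (e.g., Zarr) or via THREDDS/OPeNDAP servers, so that subsets can be read without downloading entire archives. The following minimal sketch shows this lazy access pattern with xarray; the Zarr URL and variable name are hypothetical placeholders.

    # Lazy access to a (hypothetical) cloud-hosted Zarr datacube; only the
    # chunks touched by the selection are actually transferred. Reading over
    # HTTP additionally requires the zarr and fsspec/aiohttp packages.
    import xarray as xr

    ds = xr.open_zarr(
        "https://example.org/cubes/mediterranean-fire.zarr",  # hypothetical URL
        consolidated=True,
    )

    # Pull a small spatio-temporal subset of one variable.
    lst = ds["lst"].sel(
        time="2021-08-15",
        lat=slice(34, 42),
        lon=slice(19, 28),
    ).load()  # triggers the actual (partial) read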

3.2.  Data cubes to support wildfire risk analysis

To support the pilot activities, Ecere provided, as an in-kind contribution, a deployment of its GNOSIS Map Server implementing several OGC API standards enabling efficient access to data cubes. The API and backend functionality for these data cubes, improved throughout this pilot, also support a Wildland Fire Fuel indicator workflow for the OGC Disaster Pilot taking place until the end of September 2023. As an end goal of that Disaster Pilot, the data cube API should support machine learning predictions for classifying wildland fire fuel vegetation type from Earth Observation imagery. A number of climate datasets and wildland fire danger indices were also made accessible through that same data cube API. Additional machine learning prediction experiments may be performed based on those datasets as well.

The API and datasets were provided in the hope that they would prove useful to other participants and could be part of Technology Integration Experiments (TIEs) for the pilot and other related OGC initiatives. Mainly due to the exploratory nature of this first phase of the pilot, no successful TIEs involving these resources and other participants were noted during its execution. However, these resources will remain operational, and successful TIEs are expected with them as part of the Disaster Pilot, the OGC Testbed 19 Geo Data Cube task, and future phases of the Climate Resilience Pilot.

3.2.1.  Climate resilience data cubes

During the course of the pilot, the following datasets relevant to climate resilience were optimized and deployed at a data cube API demonstration end-point using the GNOSIS Map Server.

Table 2 — Datasets provided through GNOSIS Map Server data cube API

Data collection | Fields | Temporal interval | Temporal resolution | Spatial extent | Spatial resolution | Additional dimension | Source
ESA sentinel-2 Level-2A | B01..B12, B8A, AOT, WVP, SCL | November 2016 to October 2022 | 5 days | Global (land only) | 10 meters | N/A | COGs and STAC catalogs on AWS
CMIP5 projections (wind speed) | Eastward and Northward wind velocity | 2016 to 2025 | daily | Global | 2.5° longitude x 2° latitude | 8 pressure levels | Copernicus Climate Data Store
CMIP5 projections (air temperature) | Air temperature | 2016 to 2025 | daily | Global | 2.5° longitude x 2° latitude | 8 pressure levels | Copernicus Climate Data Store
CMIP5 projections (geopotential height) | Geopotential height | 2016 to 2025 | daily | Global | 2.5° longitude x 2° latitude | 8 pressure levels | Copernicus Climate Data Store
CMIP5 projections on single level | Near-surface specific humidity, Precipitation, Snowfall flux, Sea level pressure, Surface downwelling shortwave radiation, Daily-mean near-surface wind speed, Average, Minimum, and Maximum near-surface air temperature | 2016 to 2025 | daily | Global | 2.5° longitude x 2° latitude | N/A | Copernicus Climate Data Store
ERA5 reanalysis (relative humidity) | Relative humidity | April 1 to 6, 2023 | hourly | Global | 0.25° longitude x 0.25° latitude | 37 pressure levels | Copernicus Climate Data Store
ECMWF CEMS Fire Danger indices | Burning index, Build-up index, Danger risk, Drought code, Duff moisture code, Fire danger severity rating, Energy release component, Fire danger index, Fine fuel moisture code, Forest fire weather index, Ignition component, Initial spread index, Keetch-Byram drought index, Spread component | January 2021 to July 2022 | daily | Global (except Antarctica) | 0.25° longitude x 0.25° latitude | N/A | Copernicus Climate Data Store
Fuel Vegetation Types for Continental United States | Fuel vegetation type | 2022 (no time axis) | N/A | Continental U.S. | ~20 meters | N/A | landfire.gov

Figure 5 — ESA sentinel-2 Level-2A from COGs and STAC catalogs on AWS

Figure 6 — CMIP5 projections (air temperature) from Copernicus Climate Data Store

Figure 7 — ECMWF CEMS Fire Danger indices from Copernicus Climate Data Store

Figure 8 — Fuel Vegetation Types for Continental United States from landfire.gov

3.2.2.  Overview of supported OGC API standards to access the data

The GNOSIS Map Server implements several published and candidate OGC API standards and is a certified implementation of OGC API — Features as well as OGC API — Processes. This section describes some of these supported standards and illustrates their use with requests for the climate data collections listed above.

3.2.2.1.  OGC API — Common

The OGC API standards form a complementary set of functionality for efficiently accessing data and processing resources, tied together through the OGC API — Common framework. Whereas OGC API — Common — Part 1 standardizes how the API can present a landing page, describe itself, and declare conformance to specific standards, Part 2 provides a consistent mechanism to list and describe collections of geospatial data. The following Common resources are available from the GNOSIS Map Server demonstration end-point:

Table 3 — Common resources that are available from the GNOSIS Map Server

Resource | Common Part | URL
Landing page | Part 1 | https://maps.gnosis.earth/ogcapi
OpenAPI description | Part 1 | https://maps.gnosis.earth/ogcapi/api
Conformance declaration | Part 1 | https://maps.gnosis.earth/ogcapi/conformance
List of collections | Part 2 | https://maps.gnosis.earth/ogcapi/collections
Collection description | Part 2 | https://maps.gnosis.earth/ogcapi/collections/{collectionId}

In addition to the common resources standardized by Part 1 and Part 2, several API building blocks are consistently re-used across the different OGC API standards. The following table summarizes common query parameters supported by several of the data access APIs:

Table 4 — Common query parameters

Query parameter | Description | APIs
subset | For subsetting (trimming or slicing) on an arbitrary dimension | Coverages, Maps, Tiles (except for spatial dimensions), DGGS (zone query; for data retrieval: except for DGGS dimensions)
bbox | For subsetting on spatial dimensions (Features: spatial intersection) | Coverages, Maps, DGGS (zone query), Features
datetime | For subsetting on temporal dimension (Features: temporal intersection) | Coverages, Maps, Tiles, DGGS (data retrieval: except for temporal DGGS), Features
properties | For selecting specific properties to return (range subsetting); deriving new fields (properties) using a CQL2 expression | Coverages, Tiles, DGGS, Features
filter | For filtering using a CQL2 expression | Coverages, Maps, Tiles, DGGS, Features
crs | For selecting an output coordinate reference system | Coverages, Maps, Features
bbox-crs | For specifying the coordinate reference system of the bbox parameter | Coverages, Maps, Features, DGGS
subset-crs | For specifying the coordinate reference system of the subset parameter | Coverages, Maps, DGGS
width | For specifying the width of the output (resampling) | Coverages, Maps
height | For specifying the height of the output (resampling) | Coverages, Maps

With Coverages and Maps, a spatial area of interest can be specified using, for example, either bbox=10,20,30,40 or subset=Lat(20:40),Lon(10:30).

For temporal datasets, a specific time can be requested using, e.g., datetime=2022-03-01 or subset=time("2022-03-01").

For the data cubes with multiple pressure levels, the pressure dimension is defined and can be used with the subset query parameter with all of the data access OGC API standards (Coverages, Tiles, DGGS, and Maps), e.g., subset=pressure(500).
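As an illustration of how these common building blocks compose, the following Python sketch (a minimal example assuming only the standard requests library; the parameter values are taken from the examples in this section) retrieves a subset of the ERA5 relative humidity cube:

    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"

    # Trim the ERA5 relative humidity cube to one pressure level, one day,
    # and a latitude/longitude window, returned as GeoTIFF.
    params = {
        "subset": 'pressure(850),Lat(20:40),Lon(10:30),time("2023-04-03")',
        "f": "geotiff",
    }
    r = requests.get(f"{BASE}/collections/climate:era5:relativeHumidity/coverage",
                     params=params)
    r.raise_for_status()
    with open("rh_850hPa_2023-04-03.tif", "wb") as out:
        out.write(r.content)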

3.2.2.2.  OGC API — Coverages

The OGC API — Coverages candidate Standard is a simple API defining fundamental functionality to retrieve data for arbitrary fields, areas, times, and resolutions of interest from a data cube.

The main resource to retrieve data using the Coverages API is located at /collections/{collectionId}/coverage for each data collection. This resource supports a number of query parameters defined by optional requirements classes and extensions supported by the GNOSIS Map Server.

Table 5 — Supported query parameters defined by optional requirements classes and extensions supported by the GNOSIS Map Server

Query parameter | Description | Requirements class
subset | For subsetting (trimming or slicing) on an arbitrary dimension | Subsetting
bbox | For subsetting on spatial dimensions | Subsetting
datetime | For subsetting on temporal dimension | Subsetting
scale-factor | For resampling using the same factor for all dimensions (1: no resampling, 2: 2x downsampling) | Scaling (resampling)
scale-axes | For resampling using a specific factor for individual dimensions | Scaling (resampling)
scale-size | For resampling by specifying the expected number of cells for each dimension | Scaling (resampling)
width | For specifying the width of the output (resampling) | Scaling (resampling)
height | For specifying the height of the output (resampling) | Scaling (resampling)
properties | For selecting specific properties to return (range subsetting); deriving new fields using a CQL2 expression | Range subsetting; Derived fields extension
filter | For filtering using a CQL2 expression | Range filtering extension
crs | For selecting an output coordinate reference system | CRS extension
bbox-crs | For specifying the coordinate reference system of the bbox parameter | CRS extension
subset-crs | For specifying the coordinate reference system of the subset parameter | CRS extension

The Coverages draft currently also specifies a DomainSet JSON object, linked from the collection description using the [ogc-rel:coverage-domainset] link relation, which may be included either within the collection description itself or at a dedicated resource (/collections/{collectionId}/coverage/domainset). The schema for this DomainSet object describes the domain of the coverage (the extent and resolution of its dimensions / axes) and follows the Coverages Implementation Schema (CIS) 1.1.1. An example of such a domain set resource can be found at https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:byPressureLevel:windSpeed/coverage/domainset?f=json .

At the time of writing this report, discussions are underway to potentially simplify the API by fully describing the domain directly within the collection description resource, using uniform additional dimensions as well as a grid property inside the extent property, which can describe both regular and irregular grids, removing the need for this extra resource. For example, see the collection description for the CMIP5 single pressure level data and its corresponding CIS domain set resource.

The Coverages draft currently also specifies a RangeType JSON object, linked from the collection description using the [ogc-rel:coverage-rangetype] link relation, which may be included either within the collection description itself or at a dedicated resource (/collections/{collectionId}/coverage/rangetype). The schema for this RangeType object describes the range type of the coverage (its fields and their data types) and follows the Coverages Implementation Schema (CIS) 1.1.1. An example of such a range type resource can be found at https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:byPressureLevel:windSpeed/coverage/rangetype?f=json . It might be possible to also describe the range type in a common way across the different OGC APIs using a JSON schema with semantic annotations, as per the work undertaken for OGC API — Features — Part 5: Schemas.

A Coverage Tiles requirements class is defined in OGC API — Coverages, leveraging the OGC API — Tiles standard while clarifying requirements for coverage tile responses. Examples of coverage tile requests are described below in the OGC API — Tiles section.

At the moment, the GNOSIS Map Server implementation of Coverages is limited to the following 2D (spatial dimensions) output formats:

  • GeoTIFF (multiple fields, two-dimensional); and

  • PNG (single field, 16-bit output, currently using fixed scale (2.98) and offset (16384) modifiers).

There is a plan to add support for n-dimensional output formats, including netCDF, CIS JSON, and eventually CoverageJSON, as well. For coverages with more than two dimensions, a specific time and/or pressure slice must therefore be selected, currently requiring separate API requests to retrieve a range of time or pressure levels.

Some examples of coverage requests:

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/coverage?f=geotiff&properties=tas,tasmax,tasmin,pr,psl&subset=Lat(-90:90),Lon(0:180)&width=400&datetime=2020-05-20 (GeoTIFF coverage with 5 bands, one for each field)

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/coverage?f=geotiff&subset=pressure(750) (GeoTIFF coverage)

Figure 9 — Coverage request for CMIP5 maximum daily temperature

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/coverage?f=png&properties=(tasmax-250)*400
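Since n-dimensional output formats are not yet available, a range of time steps currently has to be fetched one slice per request, as noted above. A minimal Python sketch of such a loop (the requests library and the output file names are the only assumptions):

    from datetime import date, timedelta
    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"
    COLL = "climate:cmip5:singlePressure"

    # One request per daily slice, since only 2D output formats
    # (GeoTIFF/PNG) are currently supported for this coverage.
    day = date(2020, 5, 20)
    for _ in range(7):
        r = requests.get(f"{BASE}/collections/{COLL}/coverage",
                         params={"f": "geotiff",
                                 "properties": "tasmax",
                                 "datetime": day.isoformat()})
        r.raise_for_status()
        with open(f"tasmax_{day.isoformat()}.tif", "wb") as out:
            out.write(r.content)
        day += timedelta(days=1)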

3.2.2.3.  OGC API — Maps

The OGC API — Maps candidate Standard defines the ability to retrieve a visual representation of geospatial data. The main resource to retrieve data using the Maps API is located at /collections/{collectionId}/map for each data collection. This resource supports a number of query parameters defined by optional requirements classes and extensions supported by the GNOSIS Map Server.

Table 6 — Supported query parameters defined by optional requirements classes and extensions supported by the GNOSIS Map Server

Query parameter | Description | Requirements class
bbox | For subsetting on spatial dimensions | Spatial Subsetting
bbox-crs | For specifying the coordinate reference system of the bbox parameter | Spatial Subsetting
subset | For subsetting (trimming or slicing) on an arbitrary dimension | Spatial/Temporal/General Subsetting
subset-crs | For specifying the coordinate reference system of the subset parameter | Spatial/Temporal/General Subsetting
datetime | For subsetting on temporal dimension | Temporal Subsetting
width | For specifying the width of the output (resampling) | Scaling (resampling)
height | For specifying the height of the output (resampling) | Scaling (resampling)
crs | For selecting an output coordinate reference system | CRS
bgcolor | For specifying the color of the background | Background
transparent | For specifying whether the background should be transparent | Background
filter | For filtering using a CQL2 expression | Filtering extension

Some example map requests:

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/map?width=2048&subset=pressure(750)&bgcolor=0x002040

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:byPressureLevel:windSpeed/map?subset=pressure(850)&width=1024

NOTE:    Proper symbolization for this wind velocity map (above request) would require support for wind barbs. In the meantime, the Eastward and Northward velocities are assigned to the green and blue color channels.

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:byPressureLevel:temperature/map?subset=pressure(850)

Figure 10 — Sentinel-2 map (natural color)

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map?subset=Lat(-16.259765625:-16.2158203125),Lon(124.4091796875:124.453125)&datetime=2022-06-28

Some example map requests for a specific style, in conjunction with OGC API — Styles:

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/styles/precipitation/map?datetime=2022-09-04

Figure 11 — Sentinel-2 map for NDVI style

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/ndvi/map?subset=Lat(-16.259765625:-16.2158203125),Lon(124.4091796875:124.453125)&datetime=2022-04-28

Figure 12 — Sentinel-2 map for Scene Classification Map style

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/styles/scl/map?subset=Lat(-16.259765625:-16.2158203125),Lon(124.4091796875:124.453125)&datetime=2022-06-28

A Map Tilesets requirements class is defined in OGC API — Maps, leveraging the OGC API — Tiles standard while clarifying requirements for map tile responses. Examples of map tile requests are described below in the OGC API — Tiles section.

3.2.2.4.  OGC API — Tiles

The OGC API — Tiles Standard defines the ability to retrieve geospatial data as tiles based on the OGC 2D Tile Matrix Set and Tileset Metadata Standard, originally defined as part of the Web Map Tile Service (WMTS) Standard. Unlike WMTS, which focused strictly on pre-rendered or server-side rendered Map tiles, the Tiles API was designed to also enable the use of data tiles such as Coverages Tiles and Vector Tiles which can be styled, rendered, and used for data analytics performed on the client side. Using pre-determined partitioning schemes facilitates caching for both servers and clients, resulting in more responsive dynamic maps.

The following Tiles API resources are defined:

Table 7 — Tiles API resources

Resource | Requirements Class | Description
…​/tiles | Tilesets list | List of available tilesets
…​/tiles/{tileMatrixSetId} | Tileset | Description of tileset and link to 2D Tile Matrix Set definition
…​/tiles/{tileMatrixSetId}/{tileMatrix}/{tileRow}/{tileCol} | Core | Tiles for a given 2D Tile Matrix Set and tile matrix/row/column

The GNOSIS Map Server supports a number of 2D Tile Matrix Sets for all of the collections it hosts, including the GNOSISGlobalGrid, ISEA9Diamonds, WebMercatorQuad, and WorldCRS84Quad tile matrix sets used in the example requests below.

3.2.2.4.1.  Coverage Tiles

The GNOSIS Map Server currently supports the following coverage tile formats:

  • GNOSIS Map Tiles (multiple fields, n-dimensional);

  • GeoTIFF (multiple fields, two-dimensional); and

  • PNG (single field, 16-bit value using fixed scale (2.98) and offset (16384) modifiers).

Support is planned for netCDF, CIS JSON, and eventually CoverageJSON, as well as additional formats.

Example coverage tile queries:

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288

To request a different sentinel-2 band than the default RGB (B04, B03, B02) bands:

Figure 13 — Sentinel-2 PNG coverage tile for band 08 (near infra-red)

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17?properties=B08&f=png

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288?properties=B08

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/coverage/tiles/WebMercatorQuad/1/1/0?f=geotiff&datetime=2022-09-04 (GeoTIFF coverage tile)

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/coverage/tiles/WorldCRS84Quad/0/0/0?f=geotiff&subset=pressure(750) (GeoTIFF coverage tile)
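For WebMercatorQuad, the tile row and column covering a point of interest can be computed with the standard Web Mercator tile math, as sketched below in Python (the helper function is illustrative and not part of any OGC API):

    import math
    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"

    def webmercatorquad_tile(lon_deg, lat_deg, level):
        """Standard Web Mercator tile math: (row, col) of the tile
        containing the given point at the given tile matrix level."""
        n = 2 ** level
        col = int((lon_deg + 180.0) / 360.0 * n)
        row = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi)
                  / 2.0 * n)
        return row, col

    row, col = webmercatorquad_tile(-118.24, 34.05, 1)  # Los Angeles, level 1
    r = requests.get(f"{BASE}/collections/climate:cmip5:singlePressure"
                     f"/coverage/tiles/WebMercatorQuad/1/{row}/{col}",
                     params={"f": "geotiff", "datetime": "2022-09-04"})
    r.raise_for_status()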

3.2.2.5.  OGC Common Query Language (CQL2)

The OGC Common Query Language, abbreviated CQL2, allows the user to define query expressions. Although introduced as a language to specify a boolean predicate for OGC API — Features — Part 3: Filtering, the language is easily extended for additional use cases, such as filtering the range set of a coverage request, or deriving new fields using expressions (which can return non-boolean values), including coverage band arithmetic such as calculating vegetation indices.

Support for CQL2 in the filter parameter is implemented in the GNOSIS Map Server for Coverages, Features, Maps, Tiles, and DGGS. For example, filter=tasmax>300 requests all data from the CMIP5 single pressure level collection where the maximum daily temperature is greater than 300 kelvins (unmatched cells are replaced by NODATA values).

Support for CQL2 in the properties parameter is currently implemented for Coverages, Tiles and DGGS. For example, the pr precipitation property can be multiplied by a factor of one thousand using properties=pr*1000.

Using a CQL2 expression to filter out the clouds in a map tile:

Figure 17 — Sentinel-2 map tile filtered by Scene Classification Layer to remove clouds (a longer time interval with fewer clouds would be necessary to complete the mosaic)

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/GNOSISGlobalGrid/3/4/17?filter=SCL<8 or SCL>10

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/map/tiles/ISEA9Diamonds/4/373/288?filter=SCL<8 or SCL>10

Using a CQL2 expression in coverage tile requests to perform band arithmetic computing NDVI:

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17?properties=(B08/10000-B04/10000)/(B08/10000+B04/10000)

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/ISEA9Diamonds/4/373/288?properties=(B08/10000-B04/10000)/(B08/10000+B04/10000)

Figure 18 — Coverage tile request from sentinel-2 computing NDVI

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/coverage/tiles/GNOSISGlobalGrid/3/4/17?properties=(B08/10000-B04/10000)/(B08/10000+B04/10000)*10000&f=png

Using a CQL2 expression in a coverage request to scale the relative humidity and filter out cells where r is below a threshold (20):

Figure 19 — Coverage request from relative humidity coverage multiplying r by 200 and returning only values where r > 20

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/coverage?f=png&subset=pressure(750),Lat(-90:90),Lon(0:180),time(%222023-04-03%22)&properties=r*200&filter=r%3E20
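As a client-side illustration, the NDVI band arithmetic above can also be composed programmatically; this Python sketch (assuming the requests library) asks the server to derive NDVI from a sentinel-2 coverage tile via the properties parameter:

    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"
    NDVI = "(B08/10000-B04/10000)/(B08/10000+B04/10000)"

    # Server-side band arithmetic: the CQL2 expression in `properties`
    # derives NDVI from the near-infrared (B08) and red (B04) bands
    # before the tile is returned.
    r = requests.get(f"{BASE}/collections/sentinel2-l2a"
                     "/coverage/tiles/GNOSISGlobalGrid/3/4/17",
                     params={"properties": NDVI, "f": "geotiff"})
    r.raise_for_status()
    open("ndvi_tile.tif", "wb").write(r.content)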

3.2.2.6.  OGC API — Discrete Global Grid Systems

The OGC API — DGGS candidate Standard allows retrieving data and performing spatial queries based on hierarchical multi-resolution discrete grids covering the entirety of the Earth. There are three main requirements classes for this standard:

  • Core (DGGS definition and zone information resource);

  • Zone Data Retrieval (What is here?); and

  • Zones Query (Where is it?).

The following DGGS API resources are defined:

Table 8 — DGGS API resources

Resource | Requirements Class | Description
…​/dggs | Core | List of available DGGSs
…​/dggs/{dggsId} | Core | Description and link to definition of a specific DGGS
…​/dggs/{dggsId}/zones | Zone Query | For retrieving the list of zones matching a collection and/or query
…​/dggs/{dggsId}/zones/{zoneId} | Core | For retrieving information about a specific zone
…​/dggs/{dggsId}/zones/{zoneId}/data | Data Retrieval | For retrieving data for a specific zone

DGGS API requests imply the use of a particular grid, understood by both the client and the server, associated with the {dggsId} of the resource on which the request is performed. Several different discrete global grids have been defined. The GNOSIS Map Server currently supports two discrete global grids:

  • the GNOSIS Global Grid, based on the 2D Tile Matrix Set of the same name defined in the EPSG:4326 geographic CRS, axis-aligned with latitude and longitude, and using variable width tile matrices to approach equal area (maximum variation is ~48% up to a very detailed zoom level); and

  • the ISEA9R (Icosahedral Snyder Equal Area aperture 9 Rhombus) grid, a dual DGGS of ISEA3H (aperture 3 hexagonal) for its even levels, using rhombuses/diamonds which, compared to hexagons, are much simpler to index and for which it is much easier to encode data in rectilinear formats such as GeoTIFF. The area values of ISEA3H hexagons can be transported as points on the rhombus vertices for those ISEA3H even levels. The ISEA9R grid is also axis-aligned to a CRS defined by rotating and skewing the ISEA projection, also allowing the definition of a 2D Tile Matrix Set for it.

A client will normally opt to use OGC API — DGGS if it shares an understanding and internal use of the same grid with the server, although for an axis-aligned DGGS that can be represented as a 2D Tile Matrix Set, OGC API — Tiles can be used to retrieve data for specific zones. The DGGS API enables zone data retrieval for other DGGSs which are not axis-aligned or whose zone geometry makes that impossible (e.g., hexagons). Another important use of the DGGS API is the ability to efficiently retrieve the results of a spatial query (e.g., using CQL2) in the form of a compacted list of zone IDs.

3.2.2.6.1.  Core

The core requirements class defines requirements for listing available DGGS, describing each of them, and providing information for individual zones.

In the GNOSIS Map Server implementation of the zone information resource, since both supported DGGS also correspond to a 2D Tile Matrix Set, the Level, Row, and Column for the equivalent OGC API — Tiles request is displayed on the information page, as can be seen below.

For the DGGS {zoneId}, the level, row, and column are encoded differently in a compact hexadecimal identifier.

Some example zone information requests:

Figure 20 — GNOSIS Map Server information resource for GNOSIS Global Grid zone 5-24-6E

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/GNOSISGlobalGrid/zones/5-24-6E

Figure 21 — GNOSIS Map Server information resource for ISEA9Diamonds zone A7-0

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/ISEA9Diamonds/zones/A7-0

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/ISEA9Diamonds/zones/E7-FAE

3.2.2.6.2.  Zone Data Retrieval: What is here?

The Zone Data Retrieval requirements class allows the retrieval of data for a specific DGGS zone. For axis-aligned DGGSs whose zone geometry can be described by a 2D Tile Matrix Set, such as the GNOSISGlobalGrid, ISEA9R, or rHealPix, this capability is equivalent to Coverage Tiles requests for the corresponding TileMatrixSets. This requirements class supports returning data for zones whose geometries are of an arbitrary shape, e.g., hexagonal or triangular.

The zone data retrieval resource is …​/dggs/{dggsId}/zones/{zoneId}/data, for which the GNOSIS Map Server supports a number of query parameters:

Table 9 — Query parameters supported by the GNOSIS Map Server

Query parameter | Description
filter | For filtering data within the response using a CQL2 expression
properties | For selecting specific properties to return (range subsetting); deriving new fields using a CQL2 expression
datetime | For subsetting on temporal dimension
subset | For subsetting (trimming or slicing) on an arbitrary dimension (besides the DGGS dimensions)
subset-crs | For specifying the coordinate reference system of the subset parameter
zone-depth | For specifying zone depths to return relative to the requested zone (0 corresponding to a single set of values for the zone itself)

Some examples of data retrieval queries:

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/GNOSISGlobalGrid/zones/3-4-11/data

https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a/dggs/ISEA9Diamonds/zones/E7-FAE/data

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/GNOSISGlobalGrid/zones/0-0-3/data?f=geotiff&datetime=2023-04-03

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/GNOSISGlobalGrid/zones/0-0-3/data?f=geotiff&datetime=2022-09-04

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/ISEA9Diamonds/zones/A7-0/data?f=geotiff&datetime=2023-04-03

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/ISEA9Diamonds/zones/A7-0/data?f=geotiff&datetime=2022-09-04

3.2.2.6.3.  Zone Queries: Where is it?

The Zone Query requirements class allows the efficient retrieval of the results of a spatial query in the form of a compact list of zone IDs. The list can be compacted (the default) by replacing child zones with their parent when all children of that parent are part of the result set. The zone query resource is …​/dggs/{dggsId}/zones, for which the GNOSIS Map Server supports a number of query parameters:

Table 10 — Zone query parameters supported by the GNOSIS Map Server

Query parameter | Description
zone-level | For specifying the desired zone hierarchy level for the resulting list of zone IDs
compact-zones | For specifying whether to return a compact list of zones (defaults to true)
filter | For filtering using a CQL2 expression
datetime | For subsetting on temporal dimension
bbox | For subsetting on spatial dimensions
bbox-crs | For specifying the coordinate reference system of the bbox parameter
subset | For subsetting (trimming or slicing) on an arbitrary dimension
subset-crs | For specifying the coordinate reference system of the subset parameter

By creating a kind of mask at a specifically requested resolution level, DGGS zone queries can greatly help parallelize and orchestrate spatial queries combining multiple datasets across multiple services, allowing early optimizations through lazy evaluation.

NOTE:    There are currently some limitations to the GNOSIS Map Server implementation of the Zones Query requirements class.

Examples of zone queries:

Where is the relative humidity at 850 hPa greater than 80% on April 3rd, 2023? (at the precision level of GNOSIS Global Grid level 6)

(using the default compact-zones=true, where child zones are replaced by their parent zone if all child zones are included)

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/GNOSISGlobalGrid/zones?subset=pressure(850)&datetime=2023-04-03&filter=r%3E80&zone-level=6&f=json (Plain Zone ID list output)

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/GNOSISGlobalGrid/zones?subset=pressure(850)&datetime=2023-04-03&filter=r%3E80&zone-level=6&f=uint64 (Binary 64-bit integer Zone IDs)

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/GNOSISGlobalGrid/zones?subset=pressure(850)&datetime=2023-04-03&filter=r%3E80&zone-level=6&f=geotiff (GeoTIFF output)

Figure 22 — GeoJSON output of a GNOSIS Global Grid DGGS Zone Query for relative humidity at 850 hPa greater than 80% on April 3rd, 2023

https://maps.gnosis.earth/ogcapi/collections/climate:era5:relativeHumidity/dggs/GNOSISGlobalGrid/zones?subset=pressure(850)&datetime=2023-04-03&filter=r%3E80&zone-level=6&f=geojson

Where is the maximum daily temperature greater than 300 kelvins on September 4, 2022? (at the precision level of GNOSIS Global Grid level 6)

(using the default compact-zones=true, where child zones are replaced by their parent zone if all child zones are included)

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/GNOSISGlobalGrid/zones?filter=tasmax%3E300&datetime=2022-09-04&zone-level=6&f=json (Plain JSON Zone ID list output)

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/GNOSISGlobalGrid/zones?filter=tasmax%3E300&datetime=2022-09-04&zone-level=6&f=uint64 (Binary 64-bit integer Zone IDs)

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/GNOSISGlobalGrid/zones?filter=tasmax%3E300&datetime=2022-09-04&zone-level=6&f=geotiff (GeoTIFF output)

Figure 23 — GeoJSON output of a GNOSIS Global Grid DGGS Zone Query for maximum daily temperature greater than 300 kelvins on September 4, 2022

https://maps.gnosis.earth/ogcapi/collections/climate:cmip5:singlePressure/dggs/GNOSISGlobalGrid/zones?filter=tasmax%3E300&datetime=2022-09-04&zone-level=6&f=geojson

Additional examples of zone queries for a Digital Elevation Model (returning regions where elevation data is available):

https://maps.gnosis.earth/ogcapi/collections/SRTM_ViewFinderPanorama/dggs/ISEA9Diamonds/zones

https://maps.gnosis.earth/ogcapi/collections/SRTM_ViewFinderPanorama/dggs/ISEA9Diamonds/zones?f=json (as a list of compact JSON IDs)
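Putting the two requirements classes together, a client can first ask "where is it?" and then "what is here?". The following Python sketch (assuming the requests library; a flat list of zone ID strings is assumed for the JSON zone list, which may differ from the actual response schema) chains a zone query into per-zone data retrieval:

    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"
    COLL = f"{BASE}/collections/climate:cmip5:singlePressure"

    # "Where is it?": compact list of GNOSIS Global Grid level-6 zones
    # where the maximum daily temperature exceeds 300 K on 2022-09-04.
    zones = requests.get(
        f"{COLL}/dggs/GNOSISGlobalGrid/zones",
        params={"filter": "tasmax>300", "datetime": "2022-09-04",
                "zone-level": 6, "f": "json"}).json()

    # "What is here?": retrieve the data for each matching zone.
    for zone_id in zones:
        r = requests.get(f"{COLL}/dggs/GNOSISGlobalGrid/zones/{zone_id}/data",
                         params={"f": "geotiff", "datetime": "2022-09-04"})
        r.raise_for_status()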

3.2.2.7.  OGC API — Processes — Part 1: Core

The OGC API — Processes standard defines the capability to execute remote processes accepting inputs and returning outputs.

A list of processes is available from the GNOSIS Map Server demonstration end-point at https://maps.gnosis.earth/ogcapi/processes . The following table summarizes the available processes and their current functionality status.

Table 11 — Available processes and their current functionality status for the GNOSIS Map Server

Process | Status | Description
Features Attributes Combiner | Working | This process augments existing vector features with attributes available from a separate feature collection based on an attribute key.
Elevation contours tracer | Working | This process computes contours over an elevation coverage, uniformly spaced by a given vertical distance.
Processes — Core / Modular OGC API Workflows adapter | Working | This process enables the integration of servers supporting OGC API — Processes — Part 1: Core within a modular workflow.
OSM Ecere Routing Engine (OSMERE) | Working | This process computes a route from waypoints based on an OSM roads network.
Maps rendering process | Working | This process renders a map from input data layers.
Passthrough process | Working for features (coverage support to implement) | This process integrates inputs, passing them through as outputs, providing an opportunity to apply field modifiers.
Echo Process | Working (passing TeamEngine CITE test) | This process accepts any number of inputs and simply echoes each input as an output.
Point Cloud Gridifier | (Currently requires a local Point Cloud collection, and none is loaded) | This process generates a Digital Elevation Model or orthorectified imagery from a point cloud.
Point Cloud Elevation | (Currently requires a local Point Cloud collection, and none is loaded) | This process extracts elevation values from a point cloud and applies them as attributes to vector features.
Random Forest Classification | (To be tested again with local sentinel2-l2a collection) | This process outputs random-forest classified images using an imagery and training feature dataset.
MOAW-WCPS adapter | (To be tested again with WCPS implementation) | This process integrates a WCPS service as part of a Modular OGC API Workflow.

The description of each individual process is available at /processes/{processId}, listing available inputs and outputs, whereas the execution end-point for each process is at /processes/{processId}/execution, supporting a POST operation in which the client includes an execution request as a payload. At this time, only synchronous execution and (Part 3) collection output deferred execution are supported.
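A minimal synchronous execution using the Echo process might look as follows in Python (the process id "EchoProcess" and the input name "message" are illustrative assumptions; the real identifiers come from the /processes list and the process description):

    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"

    # Inspect the process description to learn its input and output names.
    desc = requests.get(f"{BASE}/processes/EchoProcess").json()
    print(desc.get("title"), list(desc.get("inputs", {})))

    # Synchronous execution: POST an execution request payload.
    body = {"inputs": {"message": "hello climate pilot"}}
    resp = requests.post(f"{BASE}/processes/EchoProcess/execution", json=body)
    resp.raise_for_status()
    print(resp.json())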

A new process is being developed to classify fuel vegetation types using machine learning predictions in the context of the OGC Disaster Pilot 2023. This process will accept input data from the sentinel-2 Level-2A collection and will return fuel vegetation types. The fuel vegetation type coverage for continental United States from landfire.gov will be used as initial training data. This process was not yet operational at the time of writing this report.

3.2.2.8.  OGC API — Processes — Part 3: Workflows and Chaining

The Part 3: Workflows and Chaining candidate Standard extends OGC API — Processes, enabling the chaining of nested local and remote processing capabilities and their integration with local and remote OGC API data collections.

The GNOSIS Map Server currently supports the following extensions defined by Part 3: Workflows and Chaining to the process execution capabilities of Part 1:

  • Extending execution requests submitted to /processes/{processId}/execution by:

    • referencing local and remote nested processes as inputs ("process");

    • referencing local and remote OGC API collections as inputs ("collection"); and

    • modifying data accessed as inputs and returned as outputs (currently only for the PassThrough process) by filtering with "filter", as well as selecting and deriving fields with "properties".

  • Requesting output data from virtual OGC API data collections to trigger processing executions (collection output) using response=collection query parameters and values.

Work is ongoing to enhance the data integration capabilities and cross-collection queries to achieve the full potential of Part 3, bringing together local and remote OGC API data and processing capabilities.
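As a sketch of what such a request can look like with the currently supported extensions, the following Python snippet references a local collection as a "collection" input with a "filter" modifier and requests collection output (the process id and input name are illustrative assumptions):

    import requests

    BASE = "https://maps.gnosis.earth/ogcapi"

    # Part 3 execution request: an OGC API collection referenced as a
    # "collection" input, with a "filter" modifier applied to the data
    # accessed through it.
    execution_request = {
        "inputs": {
            "data": {
                "collection": f"{BASE}/collections/climate:cmip5:singlePressure",
                "filter": "tasmax>300"
            }
        }
    }

    # Collection output: the response describes a virtual collection whose
    # data is produced on demand as it is accessed.
    r = requests.post(f"{BASE}/processes/PassThrough/execution",
                      params={"response": "collection"},
                      json=execution_request)
    print(r.status_code)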

4.  Raw Data and Datacubes to Analysis Ready Data (ARD)

CEOS defines Analysis Ready Data (ARD) as satellite data that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with a minimum of additional user effort. ARD incorporates interoperability both through time and with other datasets. See https://ceos.org/ard/, and especially the information for data producers: https://ceos.org/ard/files/CARD4L_Info_Note_Producers_v1.0.pdf.

4.1.  Transforming climate relevant raw data to ARD

Several successful OGC initiatives, including Disaster Pilot 2021 (DP21), which is linked to this pilot, have looked at ARD and IRD in terms of use cases. In this pilot, the main technical contributions created digestible OGC data types and formats for specific partner use cases and produced ARD from publicly available EO and model data, including hydrological and other model outputs as well as climate projections.

These ARD will feed into use cases for all participants, with a particular focus on the heat, drought, and health impact use cases proposed by participants in the pilot.

Specifically, participants provide access to the following satellite and climate projection data.

  • Wildfire: Fire Radiative Power (FRP) product from Sentinel-3 (NetCDF), Sentinel-5P, MODIS products (fire detection), VIIRS (NOAA); possibly biomass availability (fire fuel)

  • Land Surface Temperature — Sentinel-3

  • Pollution — Sentinel-5P

  • Climate projection data (NetCDF, etc., daily downscaling possible): air temperature (10 m above ground), rainfall, and possibly wind direction as well

  • Satellite-derived discharge data to look at droughts/floods, etc., by basin or other scale

  • Hydrological model simulation outputs at (sub)basin scale (within reason)

The ARD in various OGC interoperable formats creates digestible data flows for the proposed OGC use cases. This data chain, proposed by several participants, is similar to DP21, in which contributors like RSS-Hydro, Safe Software, and others also participated. A generated climate or EO indicator uses remotely sensed data from various sources (NASA, NOAA, ESA, etc.) that are “simplified” to GeoTIFF and/or vectorized GeoPackage per time step by other participants’ tools, such as the FME software (by Safe Software). Another option as an intermediate data type (IRD) is COG (Cloud Optimized GeoTIFF), which is optimized for the cloud so that data access and sharing can be made more efficient. ARD and IRD should become more service/cloud based wherever possible.

Besides the data format, the data structures and semantics required to support the desired DRIs are important. The transform from time series rasters, through classification, to vector contours is an approach that worked well in DP21 and has also been a good starting point in this pilot. For example, in the FME processing engine, time series grids can be aggregated across timesteps to mean or max values, then classified into ranges suitable for decision making. These time series grids can then be published as time-tagged vector contour tables.

In summary, ARD and IRD data can be created from the following data sources.

  • Inputs: EO (US fire-related sources: MODIS, VIIRS), climate projections, sub-catchment polygons, ESA sources (Sentinel-3, Sentinel-5P)

  • Output formats & instances: WCS, GeoTIFF spatial/temporal subset, Shapefile, NetCDF

  • Output parameters: e.g., the hydrological condition (drought, flood, etc.) of a basin, both historical and current

  • Output themes: downscaled/subset outputs, hydrologic scenarios.

Another highly relevant input is the Essential Climate Variables (ECV) Inventory (https://climatemonitoring.info/ecvinventory/), which houses information on Climate Data Records (CDRs) provided mostly by CEOS and CGMS member agencies. The inventory is a structured repository for the characteristics of two types of GCOS ECV CDRs:

  • climate data records that exist and are accessible, including frequently updated interim CDRs; and

  • climate data records that are planned for delivery.

The ECV Inventory is an open resource to explore existing and planned data records from space agency sponsored activities and provides a unique source of information on CDRs available internationally. Access links to the data are provided within the inventory alongside details of the data’s provenance, integrity, and application to climate monitoring.

Participants, particularly GMU CSISS, have demonstrated the use of ECV record information as input, with OpenSearch service endpoints (currently CMR(CWIC) and FedEO) and download URLs for accessing NetCDF or HDF files.

Outputs in this case include WCS service endpoints for accessing selected granule-level product images (GeoTIFF, PNG, JPEG, etc.), focusing on WCS for downloading images and WMS for showing layers on a base map.

4.3.  From Raw Data and Data Cubes to ARD with the FME Platform

4.3.1.  Component Descriptions

D100 — Client instance: Analysis Ready Data Component

Our Analysis Ready Data (ARD) component uses the FME platform to consume regional climate models and generate FAIR analysis-ready datasets for downstream analysis and decision support.

The challenge of managing and mitigating the effects of climate change poses difficulties for spatial and temporal data integration. One of the biggest gaps to date has been translating the outputs of global climate models into specific impacts at the local level. FME is ideally suited to help explore options for bridging this gap given its ability to read datasets produced by climate models, such as NetCDF files or OGC WCS services, and then filter, aggregate, interpolate, and restructure them as needed. FME can inter-relate these with higher resolution local data and then output the result to whatever format or service is most appropriate for a given application domain or user community.

Our ARD component supports the consumption of climate model outputs such as NetCDF. It also has the capacity to consume earth observation (EO) data and the base map datasets necessary for downstream workflows, though given time and resource constraints during this phase we did not pursue consumption of other data types besides climate data.

4.3.1.1.  ARD Workflow

The basic workflow for generating output from the FME ARD component is as follows. The component extracts, filters, interrelates, and refines the source datasets according to indicator requirements. After extraction, datasets are filtered by location and transformed to an appropriate resolution and CRS. Next, the workflow resamples, simplifies, and reprojects the data, and then defines record-level feature identifiers, climate variable values, metadata, and other properties to satisfy the target ARD requirements. This workflow is somewhat similar to what was needed to evaluate disaster impacts in DP21, though the time ranges for climate scenarios are significantly longer: years rather than the weeks typical for floods.

Once the climate model and other supporting datasets have been adequately extracted, prepared, and integrated, the final step is to generate the data streams and datasets required by downstream components and clients. The FME platform is well suited to deliver data in various formats as needed. This includes Geopackage format for offline use. For online access, other open standard data streams are available, such as GeoJSON, KML, or GML, via WFS and OGC Features APIs and other open APIs. For this pilot, we generated OGC Geopackage, GeoJSON, CSV, and OGC Features API services.


Figure 29 — High level FME ARD workflow showing generation of climate scenario ARD and impacts from climate model, EO, IoT, infrastructure and base map inputs

As our understanding of end user requirements continues to evolve, changes will be needed in which data sources are selected and how they are refined, using a model-based rapid prototyping approach. We anticipate that any operational system will need to support a growing range of climate change impacts and related domains. Tools and processes must be able to absorb and integrate new datasets into existing workflows with relative ease. As the pilot develops, data volumes increase, requiring scalability methods to maintain performance and avoid overloading downstream components. Cloud-based processing near cloud data sources using OGC API web services supports data scaling; for the FME platform, this involves deployment of FME workflows to FME Cloud. Note that in future phases, we are likely to test how cloud native datasets (COG, STAC, ZARR) and caching can be used to scale performance as data transactions and volume requirements increase.

It is worth underlining that our ARD component depends on the appropriate data sources in order to produce the appropriate decision ready data (DRI) for downstream components. Risk factors include the ability to locate and access suitable climate models of sufficient quality, resolution, and timeliness to support indicators as the requirements, and the changing business rules associated with them, evolve. Any data gaps encountered are documented in this document’s Challenges and Opportunities section and in the Lessons Learned chapter at the end of the ER.


Figure 30 — Environment Canada NetCDF GCM time series downscaled to Vancouver area. From: https://climate-change.canada.ca/climate-data/#/downscaled-data


Figure 31 — Data Cube to ARD: NetCDF to KML, Geopackage, GeoTIFF

Original Data workflow:

  1. Split data cube

  2. Set timestep parameters

  3. Compute timestep stats by band

  4. Compute time range stats by cell

  5. Classify by cell value range

  6. Convert grids to vector contours


Figure 32 — Extracted timestep grids: Monthly timesteps, period mean T, period max T


Figure 33 — Convert raster temperature grids into temperature contour areas by class


Figure 34 — Geopackage Vector Area Time Series: Max Yearly Temp

4.3.1.2.  ARD Development Observations


Figure 35 — FME Data Inspector: RCM NetCDF data cube for Manitoba temperature 2020-2099

Disaster Pilot 2021 laid a good foundation for exploring data cube extraction and conversion to ARD using the FME data integration platform. A variety of approaches were explored for extraction, simplification, and transformation, including approaches to select, split, aggregate, and summarize time series. However, more experimentation was needed to generate ARD that can be queried to answer questions about climate trends. This evolution of ARD was one of the goals for this pilot, including better support for basic queries as well as analytics, statistical methods, simplification, and publication methods, including cloud native approaches (NetCDF to Geopackage, GeoJSON, and OGC APIs).

In consultation with other participants early in the pilot, we learned that our approach to temperature and precipitation polygons inherited from our work in DP21 on flood contours involved too much data simplification to be useful. For example, contouring required grid classification into segments, such as 5°C or 10 mm of precipitation, etc. However, this effective loss of detail oversimplified the data to the point where it no longer held enough variation over local areas. Based on user feedback, it was determined that simply converting multidimensional data cubes to vector time series point data served the purpose of simplifying the data structure for ease of access, but retained the climate variable precision needed to support a wide range of data interpretations for indicator derivation. It also meant that as a data provider, we did not need to anticipate or encode interpretation of indicator business rules into our data simplification process. By merely providing climate variable data points, the end user was free to run queries to find locations and time steps where temperature or precipitation values are within their threshold of interest.

Initially, it was thought that classification rules needed to more closely model impacts of interest. For example, the business rules for a heat wave might use a temperature range and stat type as part of the classification process before conversion to vector. However, this imposes the burden of domain knowledge on the data provider rather than on the climate service end user who is much more likely to understand the domain they wish to apply to the data and how best to interpret it.


Figure 36 — Modified ARD Workflow: NetCDF data cube to precipitation point time series in Geopackage for Manitoba

Modified ARD data workflow:

  1. Split data cube

  2. Set timestep parameters

  3. Compute timestep stats by band

  4. Compute time range stats by cell

  5. Convert grids to vector points


Figure 37 — Modified ARD Results: Manitoba future precipitation time series points from FME OGC API Feature Service (GeoJSON served from published Geopackage)

Further scenario tests were explored, including comparison with historical norms. Calculations were made using the difference between projected climate variables and historical climate variables. These climate variable deltas may well serve as a useful starting point for climate change risk indicator development. They also serve as an approach for normalizing climate impacts when the absolute units are not the main focus. Interesting patterns emerged for the LA area when we ran this process on deltas between projected and historical precipitation. While summers are typically dry, winters are wet and prone to flash flooding. Initial data exploration seemed to show an increase in drought patterns in the spring and fall. More analysis needs to be done to see whether this is a general pattern or simply one that emerged from the climate scenario we ran. However, this is the type of trend that local planners and managers may benefit from having the ability to explore once they have better access to climate model scenario outputs along with the ability to query and analyze the information.


Figure 38 — Modified ARD Workflow: NetCDF data cube to precipitation delta grids (future - historical) in Geopackage for LA

ARD climate variable delta data workflow (a code sketch of the equivalent logic follows the list):

  1. Split data cubes from historic and future NetCDF inputs

  2. Set timestep parameters

  3. Compute historic mean for 1950 — 1980 per month based on historic time series input

  4. Multiply historic mean by -1

  5. Use RasterMosaiker to sum all future grids with -1 * historic mean grid for that month

  6. Normalize environmental variable difference by dividing by historic average for that month

  7. Convert grids to vector points

  8. Define monthly environment variables from band and range values
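For readers more familiar with code than with FME workbenches, the following Python sketch expresses the equivalent logic of the delta workflow above using xarray (the file names and the variable name "pr" are illustrative assumptions):

    import xarray as xr

    # Equivalent logic to the FME delta workflow, sketched with xarray.
    hist = xr.open_dataset("historic.nc")["pr"]    # historic monthly precipitation
    future = xr.open_dataset("future.nc")["pr"]    # projected monthly precipitation

    # Steps 3-5: historic mean per calendar month (1950-1980), subtracted
    # from each future grid for the matching month.
    hist_mean = hist.sel(time=slice("1950", "1980")).groupby("time.month").mean()
    delta = future.groupby("time.month") - hist_mean

    # Step 6: normalize the difference by the historic monthly average.
    delta_norm = (delta.groupby("time.month") / hist_mean).rename("pr_delta_norm")

    # Steps 7-8: grid cells to vector points, one row per cell and time step.
    points = delta_norm.to_dataframe().reset_index()
    points.to_csv("precip_delta_points.csv", index=False)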


Figure 39 — NetCDF data cube to precipitation delta grids (future - historical) for LA in GeoJSON. This point dataset was fed to other components such as Laubwerk’s visualization component to support interoperability.

More analysis needs to be done with higher resolution time steps — weekly and daily. At the outset, monthly time steps were used to make it easier to prototype workflows. Daily time step computations will take significantly more processing time. Future pilots should further explore ways of supporting scalability of processing through automation and cloud computing approaches such as the use of cloud native formats (STAC, COG, ZARR, etc.).

4.3.1.3.  OGC API Features Service

Compared to OGC WFS2, OGC APIs are simpler and more modern standards based on a REST and JSON / OpenAPI approach. However, we found implementation of OGC API services somewhat challenging. There seems to be more complexity in terms of the number of ways of requesting features and too many options for representing service descriptions. As every client tends to interpret and use the standard a bit differently, it becomes a challenge to work out how to configure services for a wide range of clients. In particular, QGIS and ArcGIS Pro were a challenge to debug given limited logging. For QGIS, we had to examine cache files in the operating system temp directories to look for and resolve errors.

Once correctly configured, OGC API feature services seemed to perform well and are likely more efficient than the equivalent WFS2 / GML feature services. A key aspect of performance improvement was achieving query parameter continuity by passing query settings from the client all the way to the database reader configuration. For example, it was important to make sure the spatial extent and feature limits from the end user client were implemented in the database SQL extraction query and not just at an intermediate stage. We will need to explore better use of caching to further optimize performance. There may also be opportunities for pyramiding time series vector data at a lower resolution for wide area requests. This may better serve those interested in observing large area patterns who don't necessarily need full resolution at the local level.
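From the client side, parameter continuity simply means issuing requests whose constraints the server can push down to its backend. A hypothetical example against an FME Flow hosted OGC API Features endpoint (the base URL and collection name below are illustrative, not real endpoints):

    import requests

    # Hypothetical FME Flow hosted OGC API Features endpoint and collection.
    BASE = "https://fme-flow.example.com/ogcapi"

    # bbox, datetime, and limit are passed through unchanged so the backend
    # can apply them in its SQL extraction query instead of filtering after
    # a full read.
    r = requests.get(f"{BASE}/collections/mb_temperature_points/items",
                     params={"bbox": "-102,49,-95,52",
                             "datetime": "2030-01-01/2039-12-31",
                             "limit": 1000})
    r.raise_for_status()
    print(len(r.json()["features"]), "features returned")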

It should also be noted that while OGC API services should be a priority for standards support, in a climate and disaster management context, given the relatively recent nature of these standards, many users may be less familiar with or prepared to use them. As such, there should also be provision to access data directly in well accepted open standards such as GeoJSON, CSV, GeoTIFF, Geopackage, or Shapefile. In this project, some users preferred direct access to GeoJSON or CSV over OGC API access.


Figure 40 — Modified ARD Results: Query result for Manitoba temperature time series points, from FME OGC API Feature Service (GeoJSON served from published Geopackage)

4.3.2.  Component Integrations

One of the challenges with this pilot, particularly considering that this was the first phase, was building interoperability integrations with other components. Much of the pilot duration was spent building the individual components, so little time was left to experiment with integrations between them. That said, there were two notable integrations between our ARD component and other participants. Both of these integrations are also described in their respective component sections from their perspective.

First of all, we were able to produce climate scenario data for precipitation that was used by the Pixalytics drought model component. Our component extracted data from the climate model scenario data cube and transformed it into a simple Geopackage time series. This time series was published to our FME Flow server, which hosts an OGC API feature service. We also made the data available as GeoJSON point feature data with embedded precipitation values. Pixalytics then took this GeoJSON for 2023 and 2024 and incorporated the associated climate projection variables into the Pixalytics drought model. This enabled Pixalytics to show a continuous representation of drought risk from past to present to near future. For more information on the Pixalytics drought model, please refer to the Pixalytics component description.

Another cross component integration that was particularly interesting was the connection between our ARD component and Laubwerk's landscape vegetation visualization component. We produced GeoJSON outputs for precipitation point features. In this case we produced environmental variable projections for a much longer time range, from 2020 to 2060. Our initial output was simply precipitation totals per month per location. However, because Laubwerk did not have a comprehensive drought model, as was the case with Pixalytics, they could not make use of raw precipitation totals on their own. So instead, we decided to produce a normalized precipitation delta based on past historical norms. Laubwerk was then able to take this percent change and determine whether or not specific vegetation species could survive for a given time and location. Laubwerk then reran the visualizations with Safe's climate projection precipitation index as input. The result was a different visualization per time step that showed the effects of drought over time. Clearly this precipitation index is a rather primitive proxy for a more comprehensive drought model, but as a starting point it still allowed users to explore potential impacts for different climate scenarios over time. In addition, Laubwerk was able to model different climate resilience adaptation scenarios. After determining which species were most at risk, at-risk tree species were replaced in the visualization model with more resilient ones. The result was a future potential landscape with improved tree survival rates even given the potential for reduced precipitation due to anticipated climate impacts.

For more information on the Pixalytics drought model integration, see the ARD description above. For more information on the Laubwerk component, see the Laubwerk component description in the Data to Visualization chapter. For more details on Safe Software's drought and heat impact / DRI components driven by the ARD from this component, see the DRI Heat and Drought Impact Components section in the ARD to DRI chapter. Note that the integrations described above were developed in the final weeks of the pilot and presented at the Climate Pilot final workshop at the Huntsville member meeting. Please refer to the member meeting video recording to review the associated demonstrations. For more information on Safe Software's contribution to this pilot, refer to: https://community.safe.com/s/article/OGC-Climate-Resilience-Pilot

4.3.3.  Data Sources

4.3.3.2.  Climate Model Scenarios

RCP 4.5 is considered the most probable baseline scenario (no additional climate policies), taking into account the exhaustible character of non-renewable fuels. CMIP5 identifies the model intercomparison generation (Phase 5, 2012-2014), and BCSD identifies the statistical downscaling method used (bias-corrected and spatially downscaled). CMIP5 and BCSD are essentially technical terms that may not be meaningful to readers unfamiliar with climate models, but they are necessary parameters if one wants to reproduce the same results. For more information on climate model parameters see: https://en.wikipedia.org/wiki/Coupled_Model_Intercomparison_Project

Manitoba Regional Climate Model (RCM) details: MB extent Lat 49 N to 52 N. Future total monthly precipitation and mean temperature from RCP4.5 CMIP5 for 2020-2100; statistically downscaled climate scenarios from the Environment Canada Climate Data Portal (BCSD: bias-corrected and spatially downscaled). RCP4.5: “business as usual.”

Los Angeles Regional Climate Model details: LA area extent. Future total monthly precipitation from RCP4.5 CMIP5 BCSD for 2020-2050, from the CIDA – USGS THREDDS server (BCSD: bias-corrected and spatially downscaled). RCP4.5: “business as usual.”

4.4.  A framework example for climate ARD generation

4.4.1.  Component: Surface Reflectance ARD

  • Inputs: Gaofen L1A data and Sentinel-2 L1C data

  • Outputs: Surface Reflectance ARD

  • What other component(s) can interact with the component: any components requiring access to surface reflectance data

Surface Reflectance (SR) is the fraction of incoming solar radiation reflected from the Earth’s surface for a specific incident and viewing geometry. It can be used to detect the distribution and change of ground objects by leveraging derived spectral, geometric, and textural features. Since a large amount of optical EO data has been released to the public, ARD can facilitate interoperability through time and across multi-source datasets. As probably the most widely applied ARD product type, SR ARD can contribute to climate resilience research. For example, SR-derived NDVI series can be applied to monitor wildfire recovery by analyzing vegetation index increases. Several SR datasets have been assessed as ARD by CEOS, such as the well-known Landsat Collection 2 Level 2 and Sentinel-2 L2A, while many other datasets are still provided at a low processing level.

WHU is developing a pre-processing framework for SR ARD generation. The framework supports radiometric calibration, geometric rectification, atmospheric correction, and cloud masking. To address the inconsistencies in observations from different platforms, including variations in band settings and viewing angles, a processing chain to produce harmonized ARD is proposed, which will enable the generation of SR ARD with consistent radiometric and geometric characteristics from multi-sensor data, resulting in improved temporal coverage. In the first stage of this mission, the focus is on the harmonization of Chinese Gaofen data and Sentinel-2 data, as shown in Figure 41. The harmonization involves spatial co-registration, band conversion, and bidirectional reflectance distribution function (BRDF) correction. Figure 42 shows the Sentinel-2 data before and after pre-processing. Furthermore, there is a desire to seek CEOS-ARD assessment in the long-term plan.


Figure 41 — The processing chain to produce harmonized ARD.


Figure 42 — Sentinel-2 RGB composite (red Band 4, green Band 3, blue Band 2) over Hubei, acquired on October 22, 2020. (a) corresponds to the reflectance at the top of the atmosphere (L1C product); (b) corresponds to the surface reflectance after pre-processing.

4.4.2.  Component: Drought Indicator

  • Inputs: Climate data, including precipitation and temperature

  • Outputs: Drought risk map derived from drought indicator

  • What other component(s) can interact with the component: any components requiring access to drought risk map through OGC API

  • What OGC standards or formats does the component use and produce: OGC API — Processes

Drought is a disaster whose onset, end, and extent are difficult to detect. Original meteorological data, such as precipitation, can be obtained through satellites and radar and used for drought monitoring. However, the accuracy is easily affected by detection instruments and terrain occlusion, and the ability to retrieve special forms of precipitation, such as solid precipitation, is limited. In addition, many meteorological monitoring stations on the ground provide local raw meteorological observations. The Standardized Precipitation Evapotranspiration Index (SPEI) is an index used to monitor, quantitatively analyze, and determine the spatiotemporal extent of drought occurrence using meteorological observations from various regions. It supplements the results of satellite- and radar-based drought monitoring.

SPEI has two main characteristics: 1) it comprehensively considers the deficit between precipitation and evapotranspiration, that is, the water balance; and 2) it has multi-time-scale characteristics. Regarding the first characteristic, drought is caused by insufficient water resources: precipitation adds water, while evapotranspiration removes it, so the difference between the two variables over time and space characterizes the water balance. Regarding the second characteristic, the deficit of different usable water sources is distinct at different time scales because each source evolves on a different cycle, resulting in different temporal representations. By accumulating the difference between precipitation and evapotranspiration over different time scales, SPEI can distinguish agricultural (soil moisture) droughts, hydrological (groundwater, streamflow, and reservoir) droughts, and other drought types.

In our project, the dataset for the SPEI calculation is the ERA5-Land monthly averaged data from 1950 to the present. Several years of data covering parts of East Asia were selected for the experiments. Through the following flow of the SPEI calculation, SPEI values for assessing drought impact were obtained. The flow of the SPEI calculation is shown in Figure 43.
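As a rough illustration of this flow, the sketch below accumulates the monthly water balance over the chosen time scale and standardizes it. Note that the operational SPEI fits a log-logistic distribution to the accumulated balance; a plain z-score is substituted here purely for brevity, and the input series are assumed to be aligned monthly values.

import pandas as pd

def spei_like(precip_mm, pet_mm, scale=5):
    """Simplified SPEI-style index: water balance (P - PET) accumulated over
    `scale` months, then standardized. The operational SPEI fits a log-logistic
    distribution instead of using a z-score; this is a simplified stand-in."""
    balance = pd.Series(precip_mm) - pd.Series(pet_mm)
    accumulated = balance.rolling(scale).sum()
    return (accumulated - accumulated.mean()) / accumulated.std()

# toy usage with a few months of data
print(spei_like([30, 10, 5, 0, 2, 50, 60], [40, 45, 50, 55, 50, 40, 35], scale=3))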

WHU_image3

Figure 43 — Flow of the SPEI calculation.

WHU has provided the SPEI drought index calculation service through OGC API — Processes, enabling interaction with other components. The current endpoint for OGC API — Processes is http://oge.whu.edu.cn/ogcapi/processes_api. This section explains how to use this API to calculate the drought index. An example execution request body follows.

{
  "inputs": {
    "startTime": "2010-01-01",
    "endTime": "2020-01-01",
    "timeScale": 5,
    "extent": {
      "bbox": [73.95, 17.95, 135.05, 54.05],
      "crs": "http://www.opengis.net/def/crs/OGC/1.3/CRS84"
    }
  }
}
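A minimal Python client sketch for executing this request is shown below. The process identifier ("spei") is an assumption; the actual identifier is listed by GET {endpoint}/processes, and OGC API — Processes Part 1 executes a process via POST /processes/{processId}/execution.

import requests

base = "http://oge.whu.edu.cn/ogcapi/processes_api"
process_id = "spei"  # assumption: the real id comes from GET {base}/processes

body = {
    "inputs": {
        "startTime": "2010-01-01",
        "endTime": "2020-01-01",
        "timeScale": 5,
        "extent": {
            "bbox": [73.95, 17.95, 135.05, 54.05],
            "crs": "http://www.opengis.net/def/crs/OGC/1.3/CRS84",
        },
    }
}

# OGC API — Processes Part 1 executes a process via POST /processes/{id}/execution
resp = requests.post(f"{base}/processes/{process_id}/execution", json=body)
resp.raise_for_status()
print(resp.json())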

WHU_image4

Figure 44 — The SPEI results for the date 2000-02-01.

4.4.3.  Component: Data Cube Infrastructure

  • Outputs: Results in the form of GeoTIFF after processing in Data Cubes

  • What other component(s) can interact with the component: any components requiring access to temperature and precipitation data, surface reflectance ARD, and drought risk map in part of Asia through OGC API

  • What OGC standards or formats does the component use and produce: OGC API — Coverages, OGC API — Processes

WHU has introduced GeoCube as a cube infrastructure for the management and large-scale analysis of multi-source data. GeoCube leverages the latest generation of OGC standard service interfaces, including OGC API — Coverages, OGC API — Features, and OGC API — Processes, to offer services encompassing data discovery, access, and processing of diverse data sources. The UML model of the GeoCube is given in Figure 45 and has four dimensions: product, spatial, temporal, and band.

  • Product dimension: specifies the thematic axis for the geospatial data cube using the product name (e.g., ERA5_Precipitation or OSM_Water), type (e.g., raster, vector, or tabular), processes, and instrument name. For example, the product dimension can describe optical image products by recording information on the instrument and band.

  • Spatial dimension: specifies the spatial axis for the geospatial data cube using the grid code, grid type, city name, and province name. The cube uses a spatial grid for tiling to enable data readiness in a high-performance form.

  • Temporal dimension: specifies the temporal axis for the geospatial data using the phenomenon time and result time.

  • Band dimension: describes the band attribute of raster products using the band name, the polarization mode that is reserved for SAR images, and the product-level band. A product-level band is information extracted from the original bands. For example, the Standardized Precipitation Evapotranspiration Index (SPEI) band is a product-level band that takes the hydrological process into account and evaluates the degree of drought by calculating the balance of precipitation and evaporation.
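Purely as an illustration of how a single query can address all four dimensions (this is not GeoCube’s actual request syntax, which is defined by its OGC API interfaces):

# Hypothetical sketch of a four-dimensional cube query; field names are illustrative.
query = {
    "product": {"name": "ERA5_Precipitation", "type": "raster"},  # product dimension
    "spatial": {"province": "Hubei"},                             # spatial dimension
    "temporal": {"start": "2010-01-01", "end": "2020-01-01"},     # temporal dimension
    "band": ["SPEI"],                                             # band dimension (product-level band)
}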

WHU_image5

Figure 45 — The UML model of WHU Data Cube.

WHU has organized ERA5 temperature and precipitation data, surface reflectance ARD, and the drought risk map into cubes and offers climate data services through OGC API — Coverages and OGC API — Processes. The API endpoint for Processes was given in the previous section. The API endpoint for Coverages is http://oge.whu.edu.cn/ogcapi/coverages_api, allowing users to query and retrieve the desired data from the cube. This section provides examples demonstrating how to access the data from the cube using OGC API — Coverages.
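For example, a coverage such as the one shown in Figure 46 could be retrieved roughly as follows. The subsetting and format parameters are assumptions, as the supported parameters depend on the deployment’s conformance classes.

import requests

base = "http://oge.whu.edu.cn/ogcapi/coverages_api"
collection = "2m_temperature_201602"  # coverage ID shown in Figure 46

# OGC API — Coverages serves coverage data at /collections/{collectionId}/coverage
resp = requests.get(
    f"{base}/collections/{collection}/coverage",
    params={"bbox": "73.95,17.95,135.05,54.05", "f": "GeoTIFF"},  # assumed parameters
)
resp.raise_for_status()
with open("2m_temperature_201602.tif", "wb") as f:
    f.write(resp.content)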

WHU_image6

Figure 46 — The coverage with the ID "2m_temperature_201602" in the Asian region.

4.5.  Climate Resilience Data

4.5.1.  Climate Projection Data

To make climate projection data more easily usable, we transformed CMIP5 data (version 1 of our project; work is now underway on CMIP6) into an Analysis Ready Data collection of indices of future temperature and precipitation. Climate summaries for the contiguous 48 states were derived from data generated for the 4th National Climate Assessment. These data were accessed from the Scenarios for the National Climate Assessment website. The 30-year mean values for four time periods (historic, early-, mid-, and late-century) and two climate scenarios (RCP 4.5 and 8.5) were derived from the Localized Constructed Analogs (LOCA) downscaled climate model ensembles, processed by the Technical Support Unit at NOAA’s National Centers for Environmental Information.

  • Historical: 1976-2005

  • Early-Century: 2016-2045

  • Mid-Century: 2036-2065

  • Late-Century: 2070-2099

In order to display the full range of projections from individual climate models for each period, data originally obtained from USGS THREDDS servers were accessed via the Regional Climate Center’s Applied Climate Information System (ACIS). This web service facilitated processing of the raw data values to obtain the climate hazard metrics available in CMRA.

As LOCA was only generated for the contiguous 48 states (and the District of Columbia), alternatives were used for Alaska and Hawaii. In Alaska, the Bias Corrected Spatially Downscaled (BCSD) method was used. Data were accessed from USGS THREDDS servers. The same variables provided for LOCA were calculated from BCSD ensemble means. However, only RCP 8.5 was available. Minimum, maximum, and mean values for county and census tracts were calculated in the same way as above. For Hawaii, statistics for two summary geographies were accessed from the U.S. Climate Resilience Toolkit’s Climate Explorer: Northern Islands (Honolulu County, Kauaʻi County) and Southern Islands (Maui County, Hawai’i County).

This data is being updated to CMIP6 and will be available in the latter half of 2023. The system is being expanded globally using NASA NEX CMIP6 data with the same time periods and climate scenarios.

4.5.2.  Climate Indices

To provide more approachable context for future climate, a collection of 47 indices of future temperature and precipitation was computed. These indices build upon prior work on Climdex indices and additional indices developed for National Climate Assessment 4 (NCA4). A minimal sketch showing how two of these indices can be computed from a daily series follows the list below.

  • Cooling Degree Days: Cooling degree days (annual cumulative number of degrees by which the daily average temperature is greater than 65°F) [degree days (degF)]

  • Consecutive Dry Days: Annual maximum number of consecutive dry days (days with total precipitation less than 0.01 inches)

  • Consecutive Dry Days Jun Jul Aug: Summer maximum number of consecutive dry days (days with total precipitation less than 0.01 inches in June, July, and August)

  • Consecutive Wet Days: Annual maximum number of consecutive wet days (days with total precipitation greater than or equal to 0.01 inches)

  • First Freeze Day: Date of the first fall freeze (annual first occurrence of a minimum temperature at or below 32°F in the fall)

  • Growing Degree Days: Growing degree days, base 50 (annual cumulative number of degrees by which the daily average temperature is greater than 50°F) [degree days (degF)]

  • Growing Degree Days Modified: Modified growing degree days, base 50 (annual cumulative number of degrees by which the daily average temperature is greater than 50°F; before calculating the daily average temperatures, daily maximum temperatures above 86°F and daily minimum temperatures below 50°F are set to those values) [degree days (degF)]

  • Growing-season: Length of the growing (frost-free) season (the number of days between the last occurrence of a minimum temperature at or below 32°F in the spring and the first occurrence of a minimum temperature at or below 32°F in the fall)

  • Growing Season 28F: Length of the growing season, 28°F threshold (the number of days between the last occurrence of a minimum temperature at or below 28°F in the spring and the first occurrence of a minimum temperature at or below 28°F in the fall)

  • Growing Season 41F: Length of the growing season, 41°F threshold (the number of days between the last occurrence of a minimum temperature at or below 41°F in the spring and the first occurrence of a minimum temperature at or below 41°F in the fall)

  • Heating Degree Days: Heating degree days (annual cumulative number of degrees by which the daily average temperature is less than 65°F) [degree days (degF)]

  • Last Freeze Day: Date of the last spring freeze (annual last occurrence of a minimum temperature at or below 32°F in the spring)

  • Precip Above 99th pctl: Annual total precipitation for all days exceeding the 99th percentile, calculated with reference to 1976-2005 [inches]

  • Precip Annual Total: Annual total precipitation [inches]

  • Precip Days Above 99th pctl: Annual number of days with precipitation exceeding the 99th percentile, calculated with reference to 1976-2005

  • Precip 1in: Annual number of days with total precipitation greater than 1 inch

  • Precip 2in: Annual number of days with total precipitation greater than 2 inches

  • Precip 3in: Annual number of days with total precipitation greater than 3 inches

  • Precip 4in: Annual number of days with total precipitation greater than 4 inches

  • Precip Max 1 Day: Annual highest precipitation total for a single day [inches]

  • Precip Max 5 Day: Annual highest precipitation total over a 5-day period [inches]

  • Daily Avg Temperature: Daily average temperature [°F]

  • Daily Max Temperature: Daily maximum temperature [°F]

  • Temp Max Days Above 99th pctl: Annual number of days with maximum temperature greater than the 99th percentile, calculated with reference to 1976-2005

  • Temp Max Days Below 1st pctl: Annual number of days with maximum temperature lower than the 1st percentile, calculated with reference to 1976-2005

  • Days Above 100F: Annual number of days with a maximum temperature greater than 100°F

  • Days Above 105F: Annual number of days with a maximum temperature greater than 105°F

  • Days Above 110F: Annual number of days with a maximum temperature greater than 110°F

  • Days Above 115F: Annual number of days with a maximum temperature greater than 115°F

  • Temp Max 1 Day: Annual single highest maximum temperature [°F]

  • Days Below 32F: Annual number of icing days (days with a maximum temperature less than 32°F)

  • Temp Max 5 Day: Annual highest maximum temperature averaged over a 5-day period [°F]

  • Days Above 86F: Annual number of days with a maximum temperature greater than 86°F

  • Days Above 90F: Annual number of days with a maximum temperature greater than 90°F

  • Days Above 95F: Annual number of days with a maximum temperature greater than 95°F

  • Temp Min: Daily minimum temperature [°F]

  • Temp Min Days Above 75F: Annual number of days with a minimum temperature greater than 75°F

  • Temp Min Days Above 80F: Annual number of days with a minimum temperature greater than 80°F

  • Temp Min Days Above 85F: Annual number of days with a minimum temperature greater than 85°F

  • Temp Min Days Above 90F: Annual number of days with a minimum temperature greater than 90°F

  • Temp Min Days Above 99th pctl: Annual number of days with minimum temperature greater than the 99th percentile, calculated with reference to 1976-2005

  • Temp Min Days Below 1st pctl: Annual number of days with minimum temperature lower than the 1st percentile, calculated with reference to 1976-2005

  • Temp Min Days Below 28F: Annual number of days with a minimum temperature less than 28°F

  • Temp Min Max 5 Day: Annual highest minimum temperature averaged over a 5-day period [°F]

  • Temp Min 1 Day: Annual single lowest minimum temperature [°F]

  • Temp Min 32F: Annual number of frost days (days with a minimum temperature less than 32°F)

  • Temp Min 5 Day: Annual lowest minimum temperature averaged over a 5-day period [°F]
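The following Python sketch illustrates how two of these indices, cooling degree days and consecutive dry days, can be derived from a clean daily series (toy data, no missing-value handling; the operational indices are computed per model, scenario, and period as described above).

import numpy as np

def cooling_degree_days(tavg_f):
    """Annual cumulative degrees by which daily average temperature exceeds 65°F."""
    return float(np.clip(np.asarray(tavg_f) - 65.0, 0.0, None).sum())

def max_consecutive_dry_days(precip_in, threshold=0.01):
    """Annual maximum number of consecutive days with precipitation below the threshold."""
    longest = run = 0
    for p in precip_in:
        run = run + 1 if p < threshold else 0
        longest = max(longest, run)
    return longest

# Example with a few days of synthetic data
tavg = [70.0, 68.5, 64.0, 81.2]        # °F, daily averages (toy data)
prcp = [0.0, 0.0, 0.3, 0.0]            # inches per day (toy data)
print(cooling_degree_days(tavg))        # 24.7 degree days
print(max_consecutive_dry_days(prcp))   # 2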

The individual web services of climate indices and raster data for download can be accessed at: https://resilience.climate.gov/pages/climate-model-content-gallery

Or for each scenario:

The data can be viewed directly in the online map viewer or opened in ArcGIS Online, ArcGIS Desktop, or a StoryMap. To view the data in other software, GeoService and KMZ URLs are provided on the right side of the page under View API Resources.

esri_viewAPI

Figure 47 — View API Resources

4.5.3.  Summarized Indices for Locations

To support easier interpretation and local decision making, the above indices were summarized by county, census tract, and tribal areas using the Zonal Statistics as Table utility in ArcGIS Pro. The results were joined to the corresponding geography polygons. Minimum, maximum, and mean values for each variable were calculated. This process was repeated for each time range and scenario. Precomputing enables quick map and graph responses in the web application and also provides an easily reusable download for anyone who wants to use the data elsewhere.
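The same zonal-summary step can be sketched with open-source tooling, for instance the rasterstats Python package (the file names here are hypothetical stand-ins for the index rasters and zone polygons described above):

from rasterstats import zonal_stats

# Summarize one index raster by zone polygons, computing min/max/mean per zone,
# mirroring the Zonal Statistics as Table workflow described above.
stats = zonal_stats(
    "counties.shp",        # hypothetical zone polygons (county boundaries)
    "days_above_90f.tif",  # hypothetical raster for one index, period, and scenario
    stats=["min", "max", "mean"],
    geojson_out=True,      # attach the statistics to the polygon features
)
print(stats[0]["properties"]["mean"])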

To reuse the summarized services outside of the CMRA application or to download the processed data, visit the links below for the geography of interest.

On these pages, a list of buttons allows the user to filter the selection to a subset by attribute or geography, download into a variety of formats, and translate the descriptive documentation for viewing in other languages.

4.5.4.  Future Work

For Esri’s contribution, the first version of CMRA was well received. It is widely used by the intended users, and there is high interest from many others. Even before the first version was released, there were requests for other countries and customizations of the project.

Due to the many customization requests, version 2 is being developed from inception with the intent for all code, from data-processing Python to web-application JavaScript, to be available in GitHub repositories with documentation of typical customization workflows, enabling users to:

  • Use other climate projection data

  • Compute other indices

  • Summarize to other geographies

  • Customize the web application

The project is not only a solution, but a pattern for others to adapt to their data, geography, and goals.

Version 2 data development is underway and will include more indices, both imperial and metric units, and min/max/mean statistics instead of only the areal mean. All modeling will be updated to CMIP6 and expanded from the US to global coverage. The release is anticipated in Q4 2023.

5.  ARD to Decision Ready Indicator (DRI)

A Decision Ready Indicator (DRI) is information and knowledge that provides specific support for actions and decisions. These indicators are pre-determined, using a set recipe which pulls together one or more ARDs to create an indicator for action and/or decision. DRIs hold significant importance as they serve as benchmarks to determine when a decision-making process is adequately prepared and can proceed efficiently. Their importance lies in several aspects. Firstly, DRIs facilitate efficient decision-making by signaling that all necessary information, analysis, and resources are available, minimizing delays and preventing hasty or uninformed decisions. Secondly, they provide quality assurance by setting standards for the decision-making process, ensuring thorough consideration of relevant factors, accurate analysis, and reliable information. DRIs also promote accountability and transparency by defining expectations and providing a framework for evaluation, enabling stakeholders to understand the reasoning behind decisions and hold decision-makers accountable. Additionally, DRIs aid effective resource allocation by identifying the point at which resources can be allocated, preventing waste on underprepared decisions. They also assist in managing the risks associated with decision-making by encouraging thorough analysis and consideration of potential risks. Furthermore, DRIs promote consistency and standardization, reducing subjectivity and increasing fairness across different decisions. In summary, DRIs play a crucial role in ensuring well-prepared, informed, and accountable decision-making processes, enhancing efficiency, quality, transparency, and resource management.

Analysis Ready Data (ARD), which have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with minimal additional user effort, and which are interoperable both through time and with other datasets, form the building blocks for DRIs. The transition from ARDs to DRIs encompasses a series of steps designed to extract meaningful insights and facilitate informed decision-making. It commences with the collection and preparation of data, where relevant information is gathered from diverse sources and formatted appropriately for analysis. This involves data cleaning, standardization, and transformation to ensure consistency and reliability. Following data preparation, the integration stage merges multiple data sources, which are aligned based on common variables or identifiers, thereby creating a comprehensive dataset.

Subsequently, data exploration and analysis techniques are employed to delve into the dataset’s intricacies. Through statistical analysis, data visualization, and data mining, analysts uncover patterns, relationships, and trends that enable a deeper understanding of the underlying information. Feature engineering plays a crucial role in enhancing the analytical model’s performance. By selecting pertinent features, transforming existing variables, handling missing data, and encoding categorical variables, analysts optimize the model’s ability to extract insights from the data.

Once the data is prepared and features are engineered, model development ensues. Depending on the nature of the problem and the data at hand, analysts choose appropriate algorithms, such as regression, classification, clustering, or machine learning, to build predictive or analytical models. These models are then trained using a portion of the data, often referred to as the training set. Validation is performed using a separate portion of the data, the validation set, to assess the model’s performance. The model can then be fine-tuned for optimal results.

With the validated model in place, the focus shifts to generating DRIs. These indicators are specific metrics, scores, or predictions derived from the model’s outputs, providing actionable insights relevant to the decision-making process. The DRIs serve as valuable tools that support decision-makers in interpreting the analyzed data and, therefore, making well-informed choices.

The generated DRIs become pivotal components in the decision-making process. Decision-makers leverage these indicators to assess different scenarios, evaluate risks, and identify opportunities. By incorporating the insights gained from the analyzed data and model outputs, decision-makers can make more informed and data-driven decisions to achieve desired outcomes.

It is worth noting that while the outlined steps provide a general framework, the specific implementation of the process may vary based on the unique context, data characteristics, and analytical techniques employed. Nonetheless, the overarching objective remains constant: to transform Analysis Ready Data into Decision Ready Indicators that facilitate effective decision-making. Below are examples of DRIs that can be developed in relation to climate resilience.

5.1.  Wildfire hazard component

To develop its component, Intact migrated its previous proprietary wildfire hazard model to a private on-premises data science environment. For key inputs to the model, external connections to several open data repositories were established. To facilitate these access tests, several public open-source datasets, such as climate model outputs, Earth observations, weather, and geospatial data, were vetted by the appropriate cybersecurity boards. The tests also informed experts of changes in platform offerings, new data product specifications, applicable licenses, and current authoritative scientific references.

Figure3_Intact

Figure 48 — Two samples of IFC’s current national wildfire hazard map

The table below shows the datasets accessed by Intact during the pilot.

Table 12 — Technical Interoperability Experiments (TIE) Table

Dataset | Source | URL | Notes
National Fire Database fire polygon data | NRCan | https://cwfis.cfs.nrcan.gc.ca/datamart/download/nfdbpoly | Unable to establish SSL connection into private network
Fire Weather Index and its components | NRCan | https://cwfis.cfs.nrcan.gc.ca/downloads/fwi_obs/ | Unable to establish SSL connection into private network
Forest Fuels | NRCan | ftp://ftp.nofc.cfs.nrcan.gc.ca/pub/fire/cwfis/data/fuels/ |
Vegetation concentration and mass | NRCan | http://tree.pfc.forestry.ca/ | 503 Service Unavailable from private network
Daily reanalysis composites | NOAA | https://psl.noaa.gov/data/composites/day/ |
Monthly reanalysis composites | NOAA | https://psl.noaa.gov/cgi-bin/data/composites/printpage.pl |
Global temperature anomalies/trends | NASA | https://data.giss.nasa.gov/gistemp/maps/ |
Elevation at 30 meters | NASA | https://lpdaac.usgs.gov/products/nasadem_hgtv001/ |
Canadian Drought Monitor | AAFC | https://agriculture.canada.ca/atlas/data_donnees/canadianDroughtMonitor/data_donnees/shp/ |
Canadian Lightning Detection Network | NRCan | ftp://ftp.nofc.cfs.nrcan.gc.ca/pub/fire/CLDN/ | Connection timed out, can’t find alternate source
Topography | USGS | https://topotools.cr.usgs.gov/gmted_viewer/viewer.htm | Interactive map, not layers
Road segments | NRCan | ftp://ftp.nofc.cfs.nrcan.gc.ca/pub/fire/cwfis/data/base_data | Connection timed out, can’t find alternate source
Population of the world | Columbia U. | https://beta.sedac.ciesin.columbia.edu/data/set/gpw-v4-population-density/data-download |
CanVec Manmade Structures | NRCan | http://ftp.geogratis.gc.ca/pub/nrcan_rncan/vector/canvec/shp/ManMade/ | 503 Service Unavailable from private network

Below is a summarized list of the key datasets required to produce or update a wildfire hazard map.

  • National fire database polygon data

  • Fire Weather Index (FWI) daily maps

  • Land cover maps

  • Drought conditions

  • Digital Elevation Model (DEM)

  • Population density

  • Fuel and vegetation data

Intact’s wildfire hazard map is developed exclusively for internal use. Aside from intellectual property terms, it is meant to be deployed in highly secured data environments, and as such it cannot readily interact with other components of the pilot at this point in time. The intent is to develop geospatial infrastructures and legal terms that would allow closer collaboration with the pilot’s participants.

Very early in the project, Intact also developed an H3 synthetic exposure dataset (see next figure) composed of fourteen million points spread out across the country in a statistically representative pattern. The purpose of this dataset was to facilitate visualization and analysis of the exposure and to allow pilot participants to have a common exposure reference on which to develop decision-ready use cases for insurance, thus advancing towards standardization. Unfortunately, time constraints prevented the update and sharing of this dataset.

Figure4_Intact

Figure 49 — IFC’s exposure synthetic dataset, with the Montreal – Ottawa corridor on the left and a close-up of Montreal on the right. The color scale represents relative risk density in each cell, while points represent individual risks

5.2.  The Blue Economy

Pelagis’ participation in the Climate Resilience Pilot focuses on enhancing the view of a global ocean observation system by combining real-world ground observations with analysis-ready datasets. Monitoring aspects of the oceans through both a temporal and spatial continuum, while providing traceability through the observation process, allows stakeholders to better understand the stressors affecting ocean health and to investigate opportunities to mitigate the longer-term implications of climate change.

The approach to address the needs of a sustainable ocean economy is to make Marine Spatial Planning a core foundation on which to build vertical applications. Pelagis’ platform is based on a federated information model represented as a unified social graph. This provides a decentralized approach towards designing various data streams, each represented by its well-known and/or standardized model. To date, service layers based on the OGC standards for Features, Observations and Measurements, and Sensors APIs have been developed and extended for adoption within the marine domain model. Previous work provides for data discovery and processing of features based on the IHO S-100 standard (Marine Protected Areas, Marine Traffic Management, etc.); NOAA open data pipelines for major weather events (Hurricane Tracking, Ocean Drifters, Saildrones, etc.); as well as connected observation systems as provided by IOOS and its Canadian variant, CIOOS.

5.2.1.  From Raw Data to ARD and DRIs

The United Nations Framework Convention on Climate Change (UNFCCC) is supported through a number of organizations providing key observation data related to climate change. Of primary interest to this project scenario are the Global Climate Observing System (GCOS), the Global Ocean Observing System (GOOS), and the Joint Working Group on Climate (WG Climate) of the Committee on Earth Observation Satellites (CEOS). In-situ data sources are provided through a number of program initiatives sponsored by NOAA and provide key indicators of climate change that cannot be directly inferred from raw satellite information.

GCOS defines 54 Essential Climate Variables (ECVs), of which 18 apply to the oceans domain. Of these, only 6 may be inferred from satellite-based Earth observations, while the remainder must be inferred through in-situ site observations and/or sampling programs.

The following table identifies the ocean-specific ECVs and associated providers.

Table 13 — Ocean-specific ECVs and associated providers

Variable | Description | Source of Indicator
Ocean Color | Provides indication of phytoplankton based on Ocean Color Radiance (OCR) | ESA CEOS
Carbon Dioxide Partial Pressure | Primary indicator of the exchange of CO2 at the ocean surface | NOAA
Ocean Acidity | pH of ocean water as measured at varying depths and locations | NOAA PMEL
Phytoplankton | Indicator of the health of the ecosystem associated with the food web and directly a result of increased CO2 and eutrophication | NOAA
Sea Ice | Sea ice coverage associated with the ocean surface and a concern reflected in warming surface temperatures and sea level rise |
Sea Level | Sea level global mean and variability leading to sea level rise |
Sea State | Wave height, direction, and wavelength as indicators of energy at the ocean surface |
Sea-surface Salinity | The proportion of ocean water composed of salt, an indicator of mortality rates in shellfish |
Sea-surface Temperature | Directly affects major weather patterns and ecosystems | ESA CEOS; NOAA Monitoring Stations; NOAA Saildrone program
Surface Current | Transports heat, salt, and passive tracers and has a large impact on seaborne commerce and fishing |

In addition, social and economic key indicators related to the area of interest are ingested to identify relationships between the immediate effects of climate change and the associated human activity.

Table 14 — Social and economic key indicators

Variable | Description | Source of Indicator
AQ Landings | Annual yields associated with aquaculture sites within a region of interest | MaineAQ
GDP | Gross Domestic Product ($USD) associated with dependent human activities within the region of interest | US Census
Employment | Number of individuals dependent on the targeted ecosystem | US Census
Population | Number of people inhabiting the area of interest associated with the ecosystem | US Census

5.2.2.  Approach

Each ECV applicable to the use case is resolved as a service endpoint representing the area of interest, associated samplings, and observations, and, where possible, inferred from Earth observation datasets transformed to be ‘analysis ready.’ Earth observation datasets are sourced through the ESA GCOS service endpoint; ocean-related samplings and in-situ observations are sourced through NOAA; and socio-economic data is sourced from various open data portals available through government agencies.

The project effort centers around 3 key challenges:

  • the ability to collect data relevant to Climate Resilience;

  • the ability to apply the data in a coherent and standardized manner in which to draw out context; and

  • the ability to impart insight to community members and stakeholders so as to identify, anticipate, and mitigate the effects of climate change across both local and international boundaries.

Each of these activities aligns with the best practices and standards of OGC and is used as input to the MarineDWG MSDI reference model.

(Architecture diagram showing the Pelagis service components: S-122 Marine Protected Area services based on IHO S-100 (JNCC Baltic/North Sea, WDPA Denmark), OGC Connected Systems and MovingFeatures feeds (AIS vessel traffic, NOAA Saildrone missions, hurricane-monitoring HDOB service), and NOAA IOOS ocean sensor networks, together with sample MarineProtectedArea feature records returned by the S-122 MPA service.)

Figure 50 — Architecture

5.3.  DRI: Heat Impact and Drought Impact FME Components

The following subsections cover the heat and drought impact components developed by Safe Software using the FME platform. For more information on the ARD component these depend on, see Section 4.3 on Data Cube to ARD with FME.

5.3.1.  Heat Impact DRI Component

This component takes the climate scenario summary ARD results from the ARD component and analyzes them to derive estimated heat impacts over time, based on selected climate scenarios. Central to this is the identification of the key heat impact indicators required by decision makers and the business rules needed to drive them. Process steps include data aggregation and statistical analysis of maximum temperature spikes, taking into account the cumulative impacts of multiple high-temperature days. Heat exhaustion effects likely depend on the duration of heat spells in addition to high maximum temperatures on individual days.
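As a rough illustration of this duration-sensitive analysis, the sketch below flags heat spells, i.e., runs of consecutive days at or above a temperature threshold, in a daily maximum temperature series. The threshold and minimum duration are illustrative placeholders, not the component’s actual business rules.

def heat_spells(tmax_c, threshold=30.0, min_days=3):
    """Return (start_index, length) for each run of at least min_days
    consecutive days with maximum temperature at or above the threshold."""
    spells, run_start = [], None
    for i, t in enumerate(tmax_c):
        if t >= threshold:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start >= min_days:
                spells.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(tmax_c) - run_start >= min_days:
        spells.append((run_start, len(tmax_c) - run_start))
    return spells

print(heat_spells([28, 31, 32, 33, 29, 35, 36]))  # [(1, 3)]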

SafeSoftware_6

Figure 51 — ARD Query: Monthly Max Temp Contours

SafeSoftware_7

Figure 52 — ARD Query: Max Mean Monthly Temp > 25C

SafeSoftware_8

Figure 53 — Town of Lytton, the location where the entire town was devastated by fire during the heat wave of July 2021; the same location is highlighted in the heat risk ARD query in the previous figure

5.3.2.  Drought Impact DRI Component and Interoperability with Other Components

This component takes the climate scenario summary ARD results from the ARD component and analyzes them to derive estimated drought risk impacts over time based on selected climate scenarios. It also feeds drought-related environmental factors to other pilot DRI components for more refined drought risk analysis. For the purposes of this pilot, it was recognized that complex indicators such as drought are likely driven by multiple environmental and physical factors. As such, our initial goal was to select and provide primary climate variable data that would be useful for deriving drought risks in combination with other inputs. Given that the primary input to drought models is precipitation, or the lack thereof, we developed a data flow that extracted total precipitation per month and made it available as time series CSV and GeoJSON datasets, as well as OGC API — Features time series points. This climate scenario primary drought data was provided for the province of Manitoba and for Los Angeles. These two regions were chosen because pilot participants were interested in each of them and, in the case of Manitoba, there is also a tie-in to future work, as this is an area of interest for the subsequent Disaster Pilot 2023.

The Los Angeles use case provided the Laubwerk visualization component with climate change impact data that could help drive a drought impact affecting its future landscape visualization model. The idea is that, based on changes to climatic variables, certain areas may become more or less suited to different vegetation types, causing the distribution of vegetation to change over time. For more on this component, including example visualization results, please refer to Section 6: Data to Visualization.

For this visualization component, simply providing precipitation totals per month was not sufficient to drive the needs of the vegetation model, and there was no intermediate drought model to feed climate variables to. In the absence of a more comprehensive drought model, the decision was made to develop a proxy drought risk indicator by normalizing the difference between precipitation from past versus future climate scenarios.

Calculations were made using the difference between time series grids of projected precipitation and historical grids of mean precipitation per month. These precipitation deltas were then divided by the historical mean per month to derive a precipitation index. The goal was to provide a value between 0 and 1, where 1 = 100% of the past mean precipitation for that month. Naturally, this approach can generate values exceeding 1 if the projected precipitation exceeds the historic mean. The goal was not so much to predict future absolute precipitation values but rather to estimate precipitation trends given the influence of climate change. For example, this approach can help answer the question: in 30 years, for a given location, by what percentage do we expect precipitation to increase or decrease compared to historical norms? Laubwerk then used these results as input to its landscape vegetation model, which evaluates precipitation changes to determine whether drought stress will cause a specific vegetation species to die out at a particular location.
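A minimal NumPy sketch of this normalization, assuming the index is meant to express projected precipitation as a fraction of the historical monthly mean (array names are illustrative):

import numpy as np

def precipitation_index(projected, historical_mean):
    """Precipitation delta normalized by the historical monthly mean, expressed
    as a fraction of that mean: 1.0 = 100% of past precipitation, and values
    above 1.0 indicate wetter-than-historical projections."""
    delta = projected - historical_mean
    return delta / historical_mean + 1.0  # equivalently projected / historical_mean

# toy example: a 2x2 grid of monthly totals (mm)
projected = np.array([[40.0, 10.0], [55.0, 0.0]])
historical = np.array([[50.0, 50.0], [50.0, 50.0]])
print(precipitation_index(projected, historical))  # [[0.8 0.2] [1.1 0. ]]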

Interesting patterns emerged for the LA area when this process was run on the deltas between projected and historical precipitation. Summers are typically dry, while winters are wet and prone to flash floods. Initial data exploration seemed to show an increase in drought patterns in the spring and fall. More analysis is needed to determine whether this is a general pattern or simply one that emerged from the particular climate scenario that was run. However, this is the type of trend that local planners and managers may benefit from being able to explore once better access to climate model scenario outputs is achieved, along with the ability to query and analyze the results.

FME_Query_Workflow_LA_precip

Figure 54 — FME Query Workflow: Geopackage precipitation delta time series to GeoJSON points

FME_DroughtQuery1Params_LA

Figure 55 — FME Query Parameters: Geopackage precipitation delta time series to GeoJSON points

FME_Result_DroughtQuery1_LA

Figure 56 — FME Data Inspector: precipitation delta result showing potential drought risk for areas and times with significantly less precipitation than past

This approach is only a start and just scratches the surface of what is possible for future drought projection based on climate model scenario variables. The specific business rules used to assess drought risk could be enriched and refined, or climate variables could simply be fed to external drought models as described below. FME provides a flexible data and business rule modeling framework. This means that as indicators and drought threshold rules are refined, it is relatively straightforward to adjust the business rules in this component to refine risk projections. Also, business rule parameters can be externalized as execution parameters so that end users can control key aspects of the scenario drought risk assessment without having to modify the published FME workflow. However, one of the main goals of this pilot was not so much to produce highly refined forecast models for drought, but rather to demonstrate the data value chain whereby raw climate model data cube outputs can feed a data pipeline that filters, refines, and simplifies the data, ultimately driving indicators that help planners model, visualize, and understand the effects of climate change on the landscapes and environments within their communities.

To support future drought risk estimates for Manitoba, precipitation forecast time series were provided to Pixalytics as an input to their drought analytics and DRI component. Their component provides a much more sophisticated indicator of drought probability since, in addition to precipitation, it also takes into account soil moisture and vegetation. The goal was to extract precipitation totals per time step from the downscaled regional climate model (RCM) climate variable outputs for Manitoba, based on CMIP5 (Coupled Model Intercomparison Project Phase 5) model results obtained from Environment Canada. For this use case, the grids have a spatial resolution of roughly 10 km and a monthly time step. The Pixalytics drought model was then run on these precipitation estimates in order to assess potential future drought risk in southern Manitoba. The data was provided to Pixalytics initially as a GeoJSON feed of 2D points derived from the data cube cells, with precipitation totals per cell. This same data feed was later provided as an OGC API — Features service.
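A simplified sketch of how such a point feed can be assembled from cube cells (coordinates and totals are assumed to be available as flat arrays; property names are illustrative, not the exact schema delivered to Pixalytics):

import json

def cells_to_geojson(lons, lats, totals, month):
    """Build a GeoJSON FeatureCollection of 2D points, one per cube cell,
    carrying the monthly precipitation total as a property."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"month": month, "precip_total_mm": total},
        }
        for lon, lat, total in zip(lons, lats, totals)
    ]
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(cells_to_geojson([-97.1], [49.9], [23.4], "2050-06"), indent=2))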

For future phases of the climate or disaster pilots, it may be useful to explore additional approaches for both precipitation data analysis and combination with other related datasets and external models. It may be useful to segment cumulative rainfall below a certain threshold within a certain time window (days, weeks, or months), since cumulative rainfall over time will be crucial for computing water budgets by watershed or catch basin. To do this, higher resolution (daily) time steps should be tested to see whether the increased resolution reveals patterns that the coarser monthly time step does not. There are also other statistical RCM results that might be useful to make available (mean, min, max). In addition to precipitation, climate models also generate soil moisture predictions, which could be used to assess drought risk. This component would also benefit from integration with topography, DEMs, and hydrology-related data such as river networks, water bodies, aquifers, and watershed boundaries. This would help increase the effective spatial resolution of impact projections by combining the coarser climate projections with higher resolution local factors such as elevation. Rather than just computing precipitation deltas at the cell level, this would allow for assessing flood risks along rivers and water bodies and provide the ability to evaluate precipitation by catch basin and to compute future cumulative trends that may indicate potential drought or flood, or derivative impacts such as irrigation or hydropower generation potential.

It should be stressed that the field of drought modeling is not new, and many drought modeling tools are available that are far more sophisticated than anything described above. As such, subsequent Climate and Disaster Pilots should explore how future climate projections can be funneled into these more mature climate and impact models in an automated fashion to produce more refined estimates of projected drought risk. That said, it is hoped that this basic demonstration of the raw data to ARD to DRI value chain for drought provides some insights into what types of indicators should be generated to help better understand future drought risks, and where improvements to this process can be made.

6.  Data to Visualization

Advances in data representation and visualization have revolutionized the way we understand and analyze information. The ability to transform raw data into meaningful visual representations has become increasingly important across various fields, including climate change. The exponential growth of data generated by various sources such as in-situ sensors, EO sensors, and social media has led to the emergence of big data. Data visualization techniques help in extracting insights, identifying patterns, and making data-driven decisions in the face of vast and complex datasets. Visualization plays a crucial role in exploring, summarizing, and communicating the results of data analysis, making it easier for decision-makers to comprehend complex information. Data visualization enhances storytelling by presenting information in a visually engaging and intuitive manner. It helps convey complex ideas more effectively, enabling clearer communication of data-driven narratives to both technical and non-technical audiences.

Above all, the general need for data visualization arises from the complexity and volume of the data involved in climate change adaptation. Data visualizations are driven by the desire for actionable insights and the importance of clear communication across various domains.

Below are some examples of how big data can be visualized in ways that capture the impact of climate change on, for example, vegetation in urban areas or on climate hazards, and of how challenges were overcome to realize these visualizations.

6.1.  Visualizing the Impact of Climate Change and Mitigation on Vegetation

One of the biggest challenges in communicating climate change is tying global changes to the local impact they will have. Photorealistic visualization is a critical component for assessing and communicating the impact of environmental changes and the possibilities for mitigation. For this to work, it is crucial for visualizations to reflect the underlying data accurately and to allow for quick iteration. In this regard, manual visualization processes fall short. As much as possible, visualizations of real-life scenarios should be driven directly by available data on present states and simulations of possible scenarios. This work is a first attempt at determining what already works, and what does not, with existing data and technology.

This part of the Climate Resilience Pilot explored such data-driven, high-quality visualizations, focusing on the impact on vegetation. Because this was a pilot, the study was constrained in coverage area to account for limited time and to cope with potentially limited data availability. This ensured that a full connection from input data to final visualization could be established, allowing valuable conclusions to be drawn for broader application in the future. The size limitation allowed the production of meaningful results even where data transfer and processing were slow or where data had to be processed manually or semi-automatically due to inconsistent formatting. It also allowed visualization at a high level of detail without having to account too much for the sheer amount of data that could be associated with very large areas.

A relatively small section of Los Angeles was chosen for actual visualization. The rationale behind this choice of location had several components:

  • the area will (and already does) see considerable direct impact of climate change through heat, drought, wildfires, etc.;

  • the area contains different types of land use (from deeply urban and suburban to unmanaged areas);

  • the area is part of a major metro area, therefore the results will be relevant to a large population base;

  • some known mitigation measures that can be considered for visualization are in place; and

  • other known external (non-climate-change) influences on vegetation are in play that could be considered, such as pests, irrigation limitations, and the known life spans of relevant plant species.

6.1.1.  Source Data

This visualization ties together very global data with hyper-local data, drawing on a wide variety of sources that are not usually combined. Examples of data sources used for this visualization include the following.

  • Satellite Imagery

  • Building Footprints and Heights

  • Plant Inventory from Bureau of Street Services and Department of Recreation and Parks

  • Results from Climate Models, particularly RCP 4.5 data that was pre-processed for this purpose by Safe Software as part of the work for this pilot (see the Safe Software ARD component in this document for more details)

  • 3D Plant Models from the Laubwerk database

  • Plant Metadata to Judge Climate Change Impact on Specific Species through given Environmental factors, also from the Laubwerk database

  • Information on local mitigation measures from various sources

6.1.2.  Results

The aforementioned data sources were combined to create a detailed visualization of the area in question. The pairs of images below show a visualization of the status quo first as a single image and then as a composite of the four scenarios that were visualized (one scenario per vertical stripe). The scenarios are projections of possible climate outcomes with and without mitigation measures in place, in the following order (a minimal sketch of the removal/replacement logic follows the list).

  1. Year 2045 without any mitigation measures. Plants likely to die off due to adverse climate events were removed probabilistically.

  2. Year 2070 without any mitigation measures, with plants removed as in the previous scenario.

  3. Year 2045 with mitigation measures. Plants removed in the two scenarios above have been replaced by more resilient plants that are part of the aforementioned climate resilience initiatives.

  4. Year 2070 with mitigation measures, with the same replacement logic as in the previous scenario.
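The sketch below illustrates the kind of probabilistic removal and replacement logic described in these scenarios; the risk function and replacement rule are placeholders, not Laubwerk’s actual vegetation model.

import random

def apply_scenario(plants, die_off_risk, replacement=None, seed=0):
    """Remove each plant with a probability given by its die-off risk; if a
    replacement rule is supplied (mitigation scenarios), swap in a more
    resilient species instead of removing the plant."""
    rng = random.Random(seed)
    result = []
    for plant in plants:
        if rng.random() < die_off_risk(plant):
            if replacement is not None:
                result.append(replacement(plant))  # mitigation: resilient species
        else:
            result.append(plant)
    return result

# toy usage with a hypothetical species-based risk function
plants = [{"species": "Washingtonia robusta"}, {"species": "Quercus agrifolia"}]
risk = lambda p: 0.7 if p["species"] == "Washingtonia robusta" else 0.2
print(apply_scenario(plants, risk, replacement=lambda p: {"species": "resilient stand-in"}))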

It should be stressed that these are visualizations of possible outcomes; there are many factors that make exact predictions difficult. This contribution is merely meant as an example of how data could be used to drive scenario-based, hyper-local visualization.

img-laubwerk-overview

Figure 57 — Overview of the Visualized Region (Status Quo)

img-laubwerk-overview-scenarios

Figure 58 — Overview of the Visualized Region (Climate Projection With and Without Mitigation Scenarios as Described at the Start of the Results Section)

img-laubwerk-sunset-blvd-n-curson-ave

Figure 59 — Above the Corner Sunset Blvd and N Curson Ave Looking North-East (Status Quo)

img-laubwerk-sunset-blvd-n-curson-ave-scenarios

Figure 60 — Above the Corner Sunset Blvd and N Curson Ave Looking North-East (Climate Projection With and Without Mitigation Scenarios as Described at the Start of the Results Section)

img-laubwerk-franklin-ave-n-sierra-bonita-ave

Figure 61 — Corner Franklin Ave And N Sierra Bonita Ave Looking East (Status Quo)

img-laubwerk-franklin-ave-n-sierra-bonita-ave-scenarios

Figure 62 — Corner Franklin Ave And N Sierra Bonita Ave Looking East (Climate Projection With and Without Mitigation Scenarios as Described at the Start of the Results Section)

img-laubwerk-hollywood-blvd-camino-palmero-st

Figure 63 — Corner Hollywood Blvd And Camino Palmero St Looking North (Status Quo)

img-laubwerk-hollywood-blvd-camino-palmero-st-scenarios

Figure 64 — Corner Hollywood Blvd And Camino Palmero St Looking North (Climate Projection With and Without Mitigation Scenarios as Described at the Start of the Results Section)

6.1.3.  Challenges and Learnings

The goal of a visualization such as this is to make data and its implications visible at a hyper-local level. The hope is to turn a large amount of abstract data into something that lets the general public better judge the very local impact of global changes.

This hyper-locality brings to light a number of problems with the granularity, availability, and machine readability of existing data, such as the following.

  • Producing a high-fidelity, photorealistic 3D model of a specific area is still not easy. Even in an urban area of an industrialized country (which usually has better data availability), this study had to resort to relatively simple elevation data and building footprints. Solutions are on the horizon, but general availability is not yet a given. 3D models based on photogrammetry seem like a promising approach to reach higher fidelity where available, but such generally available datasets currently lack classification, so it was not possible to remove and replace vegetation elements; this will probably improve and become more widely available in the near future.

  • Information about existing vegetation is of varying quality and completeness. Detailed data is sometimes maintained by different authorities with different scopes. In this case, data from the Bureau of Street Services as well as the Department of Recreation and Parks was used. Those datasets have different data layouts and different depths and quality of data. OpenStreetMap also sometimes has vegetation data, but its coverage and data quality are problematic as well. None of the aforementioned sources really cover individual plants on private property or unmanaged land, which had to be filled in from photogrammetry, satellite imagery, and aerial photography.

  • Climate projection data is widely available and generally easy to process in terms of data volume, because the area a visualization typically covers is fairly small compared to the resolution of most climate models. What remains a challenge is turning climate scenario data into the properties needed to model the impact on vegetation, such as the probability of extreme drought, heat, or fire events. This was partially addressed by other contributions to this pilot, and further improvements are expected.

  • Exact data on average plant behavior in the context of relevant climate indicators is extremely patchy. Most data is only qualitative in nature. Data gathering is complex because of the large number of factors at play when judging plant health. This is a complex research topic that will need more work, both to produce more reliable projections based on existing research and to determine how to gather data and predict plant health more reliably at a large scale.

  • Information about climate change mitigation is often not available in a machine-readable format. In this specific case, information was gathered manually from publicly available material, mostly websites. Part of the problem is that several stakeholders are working on mitigation measures, from different local government organizations and non-profit organizations to private companies. Examples relevant to this case are City Plants (a non-profit supported by the Los Angeles Department of Water and Power) and the County of Los Angeles Parkway Trees Program. This manual way of gathering data obviously will not scale, is prone to missing data, and has no unified format, all of which makes automated processing next to impossible at the moment.

  • There may be further factors to consider that are not part of any existing data source. In this specific case, the Mexican fan palm (Washingtonia robusta), which has become such a distinctive feature of Southern California (especially Los Angeles), suffers from a high average age as well as various pests and diseases. While this is not directly related to climate change, it still needs to be considered for any visualization to be accurate.

As expected, the data-driven visualization of very local phenomena and changes is a challenging problem that reveals many issues in terms of data availability as well as the standardization and compatibility of storage formats.

6.2.  5D Meta World

Presagis offered its V5D rapid 3D (trial) Digital Twin generation capability to Laubwerk. Presagis gathered an open-source GIS dataset for the Hollywood region in order to match the location of the tree dataset from Laubwerk. Using V5D, Presagis created a representative 3D digital twin of the buildings and terrain and imported the Laubwerk tree point dataset, providing vegetation type information inside V5D. Presagis provided the V5D Unreal plugin to Laubwerk in order to allow the insertion of the Laubwerk 3D trees (as Unreal assets) into the scene. Using V5D, Laubwerk is capable of adapting the tree models in order to demonstrate the impact of climate change on the city vegetation.

Presagis also provided Laubwerk with its V5D AI-extracted vegetation dataset in order to complement the existing tree dataset as needed.

Figure 65 — Image of the Presagis deliverable to Laubwerk. At this stage, all trees use the same 3D model (palm tree). Laubwerk will use V5D to assign a representative 3D model based on the point feature attribution accessible in V5D. With V5D, this operation takes seconds to perform and visualize in 3D.

6.3.  CMRA Web Application

Decision makers, public authorities, and citizens will primarily access the data via a custom Esri web application providing a simple dashboard interface for viewing interactive maps and graphs of the indices and for outputting formatted reports. The indices are grouped by five climate hazard types (Wildfire, Heat, Drought, Inland Flooding, and Coastal Inundation). The current US project (https://livingatlas.arcgis.com/assessment-tool/explore/details) can be explored to gain context on what the global project will be.

esri_project

Figure 66 — Climate Mapping For Resilience and Adaptation (CMRA) portal, US project view

esri_project_2

Figure 67 — Climate Mapping For Resilience and Adaptation (CMRA) portal, showing a count of days above a maximum temperature threshold for the period 2023-2064

The application also outputs formatted reports by county or census tract summarizing the data in a format easy to share with others.

esri_project_3

Figure 68 — Application output reports

For each of those 5 climate hazards there is a corresponding StoryMap to further explain the hazard type, visualize the current and future hazard, and provide links to additional relevant resources.

6.4.  Ecere’s Client for NOAA’s Environmental Data Retrieval API

For the D100 Client Instance deliverable, Ecere enhanced its GNOSIS Cartographer geospatial client to better support visualizing and accessing multi-dimensional datasets, both from local sources and remote sources such as through OGC API standards. Support for the OGC API — Environmental Data Retrieval (EDR) standard as well as for OGC netCDF was implemented in the GNOSIS Software Development Kit. The GNOSIS implementation of the GNOSIS Map Tiles specification was also enhanced as an efficient format to store and exchange n-dimensional coverage tiles, including support for multiple pressure levels within a single tile packet. A pressure level selector control was added to the user interface, as seen below.

Figure 69 — Ecere’s GNOSIS Cartographer client accessing 4-dimensional CMIP5 air temperature dataset from GNOSIS Map Server, showing pressure level selector

Figure 70 — Ecere’s GNOSIS Cartographer client accessing 4-dimensional ERA5 relative humidity dataset from GNOSIS Map Server, showing pressure level selector

Technology Integration Experiments were performed with NOAA’s experimental EDR API deployment, providing feedback to its developers to help achieve conformance to the Standard and to improve interoperability and usability. The results of visualization experiments with multiple data collections are shown below.

Figure 71 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, minimum daily temperature for January 2014)

Figure 72 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, minimum daily temperature for January 2022)

Figure 73 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, maximum daily temperature for January 2014)

Figure 74 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, maximum daily temperature for January 2022)

Figure 75 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, precipitation for January 2014)

Figure 76 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (nclimgrid-monthly collection, precipitation for January 2022)

Figure 77 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (NASA CMIP6 Global Daily Downscaled Projections collection, maximum temperature for January 14, 2014)

Figure 78 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (NASA CMIP6 Global Daily Downscaled Projections collection, near-surface relative humidity for January 15, 2014)

Figure 79 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (NASA CMIP6 Global Daily Downscaled Projections collection, wind speed for January 15, 2014)

Figure 80 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (NCAR Livneh gridded wind speed for January 15, 2013)

Figure 81 — Ecere’s GNOSIS Cartographer client accessing NOAA’s EDR API (NCAR Livneh gridded precipitation for January 15, 2013)
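The data shown in these figures is retrieved through standard EDR queries. As a reference for readers, the following minimal Python sketch illustrates the OGC API — EDR "position" query pattern; the base URL is a placeholder and the parameter name must be taken from the collection's metadata, so treat this as an illustration rather than a description of the actual NOAA deployment:

import requests

# Placeholder base URL; substitute the actual EDR deployment endpoint.
BASE = "https://example.gov/edr"

# OGC API - EDR "position" query: a WKT point, an RFC 3339 instant or
# interval, and the requested parameter(s).
resp = requests.get(
    f"{BASE}/collections/nclimgrid-monthly/position",
    params={
        "coords": "POINT(-86.59 34.73)",      # lon lat
        "datetime": "2014-01-01T00:00:00Z",
        "parameter-name": "tmin",             # assumed parameter id
        "f": "CoverageJSON",
    },
)
resp.raise_for_status()
cov = resp.json()   # CoverageJSON with a domain and per-parameter ranges
print(cov["ranges"]["tmin"]["values"][:5])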

7.  Climate Information and Communication with Stakeholders

Climate change is happening: mitigation efforts will simply not be enough to tackle its impacts. Thus, climate action at the local level, covering mitigation as well as adaptation, is needed. alpS GmbH supports communities, regions, and industrial partners in sustainable development and in dealing with the consequences, opportunities, and risks of climate change.

In the understanding of alpS, climate change consultancy services are successful if they trigger the implementation of proactive measures to enhance climate resilience that are supported by a large number of participants. However, the degree of effectiveness of the consultancy services of alpS as a function of various communication methods (e.g., presentations including processed local climate data, information processing, moderation techniques, discussion tools) and scientific know-how has never been systematically investigated. In the pilot project, alpS therefore evaluated methods used in climate change adaptation workshops and began improving the workshop setup and aspects of communication.

In addition, during this Climate Resilience Pilot, the importance of stakeholder participation became apparent. At the final workshop in Huntsville, all participants agreed that there needs to be more focus on stakeholder engagement and that questions should come from the stakeholders rather than being predefined by the availability of data. This would put communication with stakeholders at the center of upcoming phases of the project.

7.1.  Climate adaptation processes

One way to introduce adaptation processes is to frame them as a cycle (Figure 82), starting with the evaluation of past, present, and future climatic conditions to define the exposure of a system to the impacts of climate change. The second step is to assess, with local experts, the sensitivity of a system towards the impacts of climate change. In the third step, the risk of a system is derived from its exposure and sensitivity, based on which targeted adaptation measures can be identified and, in the fourth step, implemented. Finally, the fifth stage is monitoring and evaluation. At this point the cycle starts over again.

Figure 82 — Adaptation Cycle

In the entire program, the focus is on supporting communities to secure the living and economic space at the local level, which requires a well-founded assessment of the climate risks supported by local experts. The aim is to minimize risks, take necessary measures, and raise awareness of precautionary planning, especially regarding the consequences of climate change.

The conducted evaluation focuses on one participatory element of adaptation cycles, the impact analysis workshops. The workshops aim to initiate stakeholder participation, raise local awareness of climate change impacts, gather expert input on sensitivity, and implement an adaptation process that is widely accepted.

7.2.  Approach

alpS conducted a structured evaluation of available datasets from participatory processes with the goal of improving the level of information about climate change impacts and identifying the most broadly accepted way of presenting user-related scientific statements. The assessment of adaptation cycles at different spatial levels allowed the further development and improvement of suitable interoperable solutions.

A set of questions was developed to measure the success of vulnerability workshops. This involved developing questions on workshop content (e.g., climate information, methodological approach) and permanence (e.g., adaptation measures implemented), in addition to external factors that influence workshop outcomes (e.g., political backing, human resources, time spent, financial commitment to adaptation, geographic conditions).

In a three-stage process, workshop participants were surveyed before the workshop, immediately after the workshop, and six months after the workshop. Figure 83 provides an overview of the survey process. In total, six representatives of communities or regions took part in a pilot survey. Stakeholders of three of the communities and regions were interviewed shortly before the workshop, one community was interviewed immediately after a workshop implementation, and two communities and regions were interviewed more than six months after their workshop participation. To supplement the comparatively small pilot survey, experiences from previous consulting cycles are also considered in the following.

Figure 83 — Three-part questionnaire

7.3.  Main results of interviews

In all surveyed municipalities or regions, it was shown that the assessment of climate impacts must be done at the local level. Regional adaptation strategies and climate information provide a good overview and starting point for the municipal level, but in topographically heterogeneous areas, such as mountainous regions, assessments at the local level are needed. It is therefore necessary to reassess climate impacts from a community perspective, considering the local risk landscape. A detailed consideration of the risks, and the subsequent intersection of risks with the consequences of climate change, is suitable to promote awareness, clarify the community’s concern, and facilitate the implementation of measures on safety grounds. For this, climate information must be prepared accordingly. Climate data must not be too complicated, but should also not leave anything out. In particular, the climate impacts for which the community’s sensitivity is assessed by local actors in the vulnerability workshop must be accurate, consistent, and not duplicative. To achieve this, the climate impact chain will be introduced in the next section.

The abundance of content when initiating adaptation measures often leads to the community being overwhelmed. Limiting the scope to selected climate impacts is achieved through the identification of adaptation needs, which leads to the necessary focus on a few urgent adaptation measures. Measures must be elaborated individually and in consideration of the communities’ ideas. Showing good examples of adaptation is useful and provides inspiration; however, in the surveyed communities and regions mostly new measures were developed, tailored exactly to local conditions and needs. Some necessary measures can be implemented directly by the municipality, others only in cooperation with other actors (landowners, other municipalities, etc.). Both the development of measures and the process support must be carried out against this background.

Supporting communities throughout the process is essential. Equally important is the cooperation between local organizations and scientifically sound external support that conveys seriousness and builds stakeholders’ confidence in the adaptation process. In addition, an active contact person with sufficient time resources is needed in each community to bring together the relevant actors and to follow up on the topic beyond the events. Only in this way can successful adaptation take place.

7.4.  Improvement of the workshop setup and aspects of communication

As part of the Climate Resilience Pilot, alpS was able to optimize two aspects of the adaptation process. First, the creation of climate impact chains for different sectors was initiated. The climate impact chain improves the consistency and understanding of climate impacts. Second, a guideline for dealing with external factors was developed. The pre-test conducted before the workshop, which specifically asks about these external factors, enables a direct response and preparation for dealing with uncontrollable factors in the process.

7.4.1.  Climate Impact Chain

In the current design of the alpS vulnerability workshops, local climate impacts are assessed on a matrix. The responses from workshop participants highlighted the importance of clear, unambiguous, and simple language when communicating climate impacts. Inspired by these responses, the wording of climate impacts was optimized and broken down in the context of an impact chain from climatic effect to direct and indirect climate impacts (Figure 84). Thus, it is easy to understand which climatic effects drive which climate impacts, facilitating the data-driven assessment of the exposure of individual communities. In addition to the exemplary impact chain for the forestry sector shown in Figure 84, further impact chains were created for all relevant fields of action in climate adaptation.

Figure 84 — Climate impact chain for the forestry sector

7.4.2.  Uncontrolled external influences

Conducting a vulnerability assessment workshop as part of the adaptation process is a complex sub-process. Its success depends on many factors, some of which are controlled by the moderator and others not. The latter are called “external factors” here and encompass the influences on and motivations of individual participants. Identifying the relevant external factors is important because, if the existence and strength of the external factors for a particular workshop are known, adequate predefined responses can be implemented to better control the sub-process.

Indeed, the question of what motivates climate change adaptation behavior is widely discussed in the literature. In a meta-analysis of 106 studies, van Valkengoed and Steg (2019) investigated the relationship between the adaptation behavior of households and thirteen motivational factors. These factors are included in various theoretical frameworks but are rather generalized and not concrete enough to be taken into account here. In fact, anything that influences the workshop participants can have an impact on the workshop outcome and could therefore be called a factor. However, in order to keep the application of the guideline practicable, the catalog of external factors is limited to the key factors, which, in addition, should also be easy to research and observe.

The evaluation of the existence and strength of external factors on the basis of the compiled catalog needs to be performed from the participants’ perspective: How strongly do they perceive (consciously or subconsciously) an external influence? Does this external influence meet with an optimistic or pessimistic basic attitude? Are the participants rather jaded or thin-skinned?

We have learned that it is helpful to gather information about the background and motivation of individual participants in preparatory talks with the organizer.

Catalog of external factors

  1. Natural space that the municipality/company is located in

  2. Number of inhabitants/number of employees

  3. Vulnerabilities that are known to be affected by climate change

    1. strong dependence on a few infrastructures

    2. strong dependence on a few companies/sectors of the economy

    3. demographic characteristics

    4. shortages in emergency responses

  4. The municipality/company depends on its neighbors to carry out its adaptation measures (e.g., upstream/downstream riparian community set of problems).

  5. In case of a suffered catastrophe (here or elsewhere): Have neglected precautions led to legal or political consequences?

  6. The municipality/company has experience with weather extremes or unusual seasonal conditions.

  7. The municipality/company is affected by other geophysical, geopolitical, social, or economic crises.

  8. Climate change is present in current media coverage.

  9. Political backing is given.

  10. Provided human resources are sufficient.

  11. Monetary commitment for climate adaptation is sufficient.

  12. Participants are legally obligated to take precautions.

  13. Risks of increased devaluation of real estate, equity investments, property, plant, and equipment as well as increased depreciation, interest, and insurance costs exist.

  14. Participants recognize different needs, advantages, and benefits.

  15. Individuals are willing to take responsibility.

  16. Different perception of the environment: outdoor professionals (e.g., farmers, foresters) as well as indoor professionals are participating.

  17. Different levels of knowledge: accepted experts for individual topics (e.g., infrastructure, public health) are participating.

7.5.  Outlook: Stakeholders as a starting point for processing climate information

Overall, the consensus at the Closing Workshop in Huntsville was to focus more on stakeholder participation and to start from the stakeholders’ questions instead of the raw data. alpS is experienced in implementing and guiding participatory processes. In the coming project phase, alpS could offer a concept that enables data providers to identify their stakeholders, jointly define questions, and collect targeted feedback.

7.6.  Summary

  • Component: Climate communication and support for adaptation.

  • Inputs: Selected climate indicators (past and future, different scenarios), cartographic data (hazard zones, population density, etc.), existing plans, strategies, and concepts (regional development plans, climate protection strategies, previous analyses), and, most importantly, local climate and resilience information from stakeholders.

  • Outputs: Target group-specific communication material (fact sheets, graphs), description of the vulnerability and visualization of risk maps, adaptation measures, and strategies for adaptation to climate change. In the context of this pilot, alpS improved its communication methods and shared its findings to allow the climate community to copy and adapt as many use cases as possible to other locations or framework conditions.

  • What other component(s) can interact with the component: All components that deliver decision-ready information can interact with the component. Also, any component that needs user feedback or a test group, or that wants to develop data as part of a participatory process, can interact with the component.

  • What OGC standards or formats does the component use and produce: Processed local climate data, NetCDF.

8.  Use cases

In a pilot study on interoperability, a use case represents a specific scenario or application that demonstrates how different components, such as data, models, and systems, interact and exchange information to address a particular challenge or problem. In the context of droughts and fires, use cases showcase how interoperability enables seamless integration and analysis of diverse geospatial data sources, coupled with specialized models, to enhance understanding, prediction, and mitigation of drought and fire risks. These use cases provide practical demonstrations of how interoperability workflows and techniques can be applied to foster effective collaboration, decision-making, and climate resilience in the face of drought and fire-related challenges.

8.1.  Drought Impact Use Cases

Based on the ARD, drought indicator, and data cube components, WHU developed three use cases, built on its self-developed Open Geospatial Engine (OGE), for rapid response to drought occurrences. Figure 85 shows the technical architecture of the OGE, which has the following features.

  • For data discovery, a catalog service from the OGE data center following OGC API standards is provided, allowing users to search geospatial data available from both WHU data stores and remote data stores.

  • For data integration, data can be integrated into the WHU software in the form of data cubes with three efforts: formalizing cube dimensions for multi-source geospatial data, processing geospatial data queries along cube dimensions, and organizing cube data for high-performance geoprocessing.

  • For data processing, a processing chain is enabled in OGE using a code editor and model builder.

  • For data visualization, a Web-based client for visualization of spatial data and statistics is provided using a virtual globe and charts.


Figure 85 — The technical architecture of the use-case for drought impact.

8.1.1.  Case study 1: Visualization for drought indicator

A drought risk map on a virtual globe, incorporating SPEI and OGE, was created as shown in Figure 86 (a). The color scheme of the visualization follows the SPEI drought grade classification given in Table 15. Red and orange areas in the visualization represent a trend towards drought (SPEI ≤ -0.5), while green and blue represent wetness. The SPEI is calculated for each month of the input dataset, and users can visualize the SPEI of any month on the virtual globe for flexible drought analysis. The use case also supports cube-based SPEI visualization for time series drought analysis, as shown in Figure 86 (b), where the height of the cube is a time range arranged in order of month and each layer in the cube represents the drought impact of one month.

Table 15 — Gradations of drought specified by SPEI

Grade   Type                SPEI Value
1       Normal              -0.5 < SPEI
2       Light drought       -1.0 < SPEI ≤ -0.5
3       Moderate drought    -1.5 < SPEI ≤ -1.0
4       Severe drought      -2.0 < SPEI ≤ -1.5
5       Extreme drought     SPEI ≤ -2.0

Figure 86 — Visualization of SPEI on a virtual globe.
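As an illustration of how the Table 15 classification can drive such a visualization, the following minimal Python sketch (an illustration under stated assumptions, not WHU's implementation) maps SPEI values onto the five drought grades:

import numpy as np

def spei_grade(spei):
    """Classify SPEI values into the drought grades of Table 15."""
    # Grade boundaries in ascending order; right=True makes the upper
    # bounds inclusive, matching the <= comparisons in Table 15.
    bins = np.array([-2.0, -1.5, -1.0, -0.5])
    return 5 - np.digitize(spei, bins, right=True)

spei = np.array([0.3, -0.7, -1.2, -1.8, -2.4])
print(spei_grade(spei))   # -> [1 2 3 4 5]

A renderer then only needs a lookup from grade to color (blues and greens towards grade 1, reds and oranges towards grade 5) to reproduce the drought risk map.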

8.1.2.  Case study 2: Drought Risk analysis of Yangtze River basin

In the summer of 2022, an extreme drought hit the Yangtze River basin, with huge impacts on agriculture, ecosystems, and human livelihoods. It developed rapidly in the upper, middle, and lower reaches of the Yangtze River, intensifying on a large scale in 10 provinces (municipalities) in the basin (https://doi.org/10.1002/rvr2.23). The water area of Poyang Lake was reduced by 90%, threatening the habitat of fish, migratory birds, and other species. To analyze drought trends in the Yangtze River Basin, the monthly SPEI for 2022 is shown in Figure 87. The figure shows that the drought index in the Yangtze River Basin had been rising since March. In July, the drought risk map turned light yellow, indicating a moderate drought. In August and September, the drought further intensified and reached an extreme drought situation. In October, the drought eased somewhat, and it had mostly subsided by November.


Figure 87 — Drought risk map in part of China.

8.1.3.  Case study 3: Drought risk analysis of Poyang Lake

During the extreme drought in the Yangtze River Basin, the water inflow into Poyang Lake, the largest freshwater lake in China, declined dramatically due to continuous hot weather with little rain since early summer. Hence, a use case for drought analysis applying multi-source SR ARD was developed.

In this use case, Sentinel-2 SR and Landsat-8 SR data were collected, and Gaofen-1 WFV SR data were produced for the center area of Poyang Lake (as shown in Figure 88) before and during the drought period. NDWI indices were calculated to monitor water area changes in Poyang Lake. Water bodies typically exhibit positive NDWI values, making NDWI a straightforward method for extracting water areas. As illustrated in Figure 89, the first column represents Poyang Lake before the drought, while the last three columns represent Poyang Lake during the drought. It is evident from the RGB composites that the water body of Poyang Lake significantly decreased due to the drought. The water body extraction results from NDWI indicate that from May to October, the water area in the study area decreased from ~1800 square kilometers to ~350 square kilometers, a reduction of ~80%.
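The water extraction step described above reduces to a few lines of array arithmetic. The following Python sketch assumes two co-registered surface reflectance arrays for the green and near-infrared bands (e.g., Sentinel-2 B3 and B8) have already been loaded; it is a simplified illustration, not the exact WHU processing chain:

import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """NDWI = (green - nir) / (green + nir); water pixels exceed the threshold."""
    green = green.astype("float64")
    nir = nir.astype("float64")
    denom = green + nir
    safe = np.where(denom == 0, 1.0, denom)      # avoid division by zero
    ndwi = np.where(denom == 0, 0.0, (green - nir) / safe)
    return ndwi, ndwi > threshold

# Water area in km^2, assuming 10 m Sentinel-2 pixels:
# area_km2 = mask.sum() * (10 * 10) / 1e6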


Figure 88 — The study area of the Poyang Lake case.


Figure 89 — The changes in Poyang Lake before and during the drought period.


8.2.  Analysis Ready Data (ARD) Use Case

8.2.1.  Background

Analysis Ready Data (ARD) is remote sensing data and products that have been pre-processed and organized to allow immediate analysis with little additional user effort and interoperability both through time and with other datasets.

— Analysis Ready Data (ARD) as defined by CEOS

Major steps in preparing satellite data into ARD include conversion of raw readings into radiometric quantity, quality assessment, quantity normalization, and temporal integration. The ARD should follow the FAIR (Findable, Accessible, Interoperable, and Reusable) Data Principles.

Immediate analysis requires that the data obtained by data users exactly match the users’ specifications in format, projection, spatial/temporal coverage, resolution, and parameters, so that the data can be ingested into the user’s analysis system immediately without further effort. Since individual data users and projects have different requirements, personalized services for customizing the data, which we call ARD services, must be provided in order to meet the requirement of immediate analysis.

Essential Climate Variables (ECVs) are key datasets for climate change studies. The ECV Inventory houses information on Climate Data Records (CDRs) provided mostly by CEOS and CGMS member agencies. The inventory is a structured repository for the characteristics of two types of GCOS ECV CDRs:

  • climate data records that exist and are accessible, including frequently updated interim CDRs; and

  • climate data records that are planned to be delivered.

The ECV Inventory is an open resource to explore existing and planned data records from space agency sponsored activities and provides a unique source of information on CDRs available internationally. Access links to the data are provided within the inventory, alongside details of the data’s provenance, integrity, and application to climate monitoring.

The client used the existing CEOS WGISS Community Portal. The portal is capable of providing automated discovery and customization services for ECV and satellite data. The client is able to discover and access ECV and other remote sensing data and customize them into ARD for anywhere in the world to support various climate change resilience analyses.

8.2.2.  Approach

The client instance is implemented as a Web application to support the creation and delivery of ARD for climate change impact assessment.

The Carbon Portal conducted data discovery and access in the following two steps (a sketch of the pattern follows the list).

  • Step 1: Data collection search

  • Step 2: Granule search to find granules within the chosen collection
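A hedged Python sketch of this two-step discovery pattern follows; the endpoint URLs and parameter names are hypothetical placeholders, since the actual query templates are advertised in each service's OpenSearch description document:

import requests

# Step 1: collection-level search (placeholder endpoint and parameters).
collections = requests.get(
    "https://example.org/opensearch/collections",
    params={"q": "surface soil moisture"},
)

# Step 2: granule-level search within the chosen collection, constrained
# by time and bounding box (placeholder parameter names).
granules = requests.get(
    "https://example.org/opensearch/granules",
    params={
        "parentIdentifier": "GRACEDADM_CLSM025GL_7D",
        "startDate": "2021-10-01",
        "endDate": "2021-10-08",
        "bbox": "52.264,35.129,66.69,42.8",
    },
)

The granule entries returned by the second step are what the ARD services described below operate on.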

ARD services are enabled on the results of a granule search if the collection is an ECV. If the ECV data provider has implemented a WCS service for the dataset, the portal directly communicates with the ECV provider’s WCS server for the ARD service. If the ECV data provider does not have a WCS service, the portal’s server downloads the entire granule and stages it on the portal server to provide the ARD service.

Most ECV data providers do not provide such a service.

Figure 90 shows the software architecture of the CEOS WGISS Carbon Community Portal.


Figure 90 — Software Architecture

ECV Inventory v4.1 records are converted by a conversion tool into a unified, portal-predefined metadata format, retrieving collection metadata for ECV entries from CWIC/FedEO OpenSearch as referenced by the Data Record Information. There are 1251 ECV inventory records (the same as WGClimate: 870 existing, 381 planned). From these records, the portal supports a total of 1910 predefined ECV-related collection datasets.

ARD service for ECVs in case providers have no WCS services:

  • support when the user selects one granule entry;

  • download granule dataset file from the given repository and manipulate it for serving WCS;

  • stage the data in the portal backend server and generate a list of all coverages in the granule;

  • the user determines the specifications of data to download; and

  • the user obtains the customized data by downloading via WCS GetCoverage request.

ARD service for ECVs with data providers’ WCS:

  • directly talk to provider’s WCS; and

  • no granule download and staging steps in the portal’s backend server are required.

8.2.3.  Use Case: The climate change impact on crop production in Turkmenistan

This use case is for the climate change impact on crop production in Turkmenistan. However, the portal can switch to another use case or support multiple use cases if necessary.

Drought is one of the major climate-related natural hazards causing significant crop production loss in Turkmenistan, and climate change increases the risk of drought there. Crop models (such as WOFOST) are often used to support decision-making in long-term adaptation and mitigation. The client prepares data to be readily used as parameters and drivers in such modeling processes. Drought impact analysis data may include long time series of precipitation, temperature, or indices for crop conditions, water content, or evapotranspiration. Many of these climate data and products from satellite sensors are served at NASA’s Goddard Earth Sciences Data and Information Services Center, such as GPM data products and MERRA assimilated climate data. These are used in the case of drought impact assessment in Turkmenistan.

The drought impact ARD case will demonstrate:

  1. the applicability of open standards and specifications in support of data discovery, data integration, data transformation, data processing, data dissemination, and data visualization;

  2. transparency of metadata, data quality, and provenance;

  3. efficiency of using ARD in modeling and analysis; and

  4. interoperable dissemination of ARD abiding by FAIR principles.

The search starts with the following information.

  • Keyword: surface soil moisture

  • Filter: daily

  • Date: 10/1/2021, 10/1/2020, 10/1/2019, 10/1/2018

  • Area: Turkmenistan (Bbox: 52.264(Left), 35.129(Bottom), 66.69(Right), 42.8(Top))

Choose a collection dataset:

Groundwater and Soil Moisture Conditions from GRACE and GRACE-FO Data Assimilation L4 7-days 0.25 x 0.25 degree Global V3.0 (GRACEDADM_CLSM025GL_7D) at GES DISC

Choose the following granule data file:

GRACEDADM_CLSM025GL_7D.3.0:GRACEDADM_CLSM025GL_7D.A20220926.030.nc4 (for year 2022)
GRACEDADM_CLSM025GL_7D.3.0:GRACEDADM_CLSM025GL_7D.A20210927.030.nc4 (for year 2021)
GRACEDADM_CLSM025GL_7D.3.0:GRACEDADM_CLSM025GL_7D.A20200928.030.nc4 (for year 2020)
GRACEDADM_CLSM025GL_7D.3.0:GRACEDADM_CLSM025GL_7D.A20190930.030.nc4 (for year 2019)

Retrieve the file and choose a variable:

sfsm_inst (Surface soil moisture percentile)

Adjust legend color (0 is the least soil moisture), and get the following results:

Figure 91 — Surface soil moisture percentile (year 2019-2022)
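The customization step in this walkthrough corresponds to a WCS 2.0 GetCoverage request with spatial subsetting. The following Python sketch is illustrative only: the endpoint URL is a placeholder, and the coverage and axis labels would in practice be read from the server's DescribeCoverage response:

import requests

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "sfsm_inst",   # surface soil moisture percentile (assumed id)
    # Repeated "subset" keys trim the coverage to the Turkmenistan bbox.
    "subset": ["Lat(35.129,42.8)", "Long(52.264,66.69)"],
    "format": "application/x-netcdf",
}
resp = requests.get("https://example.org/portal/wcs", params=params)
resp.raise_for_status()
with open("sfsm_inst_turkmenistan.nc", "wb") as f:
    f.write(resp.content)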

8.3.  Solar climate atlas for Poland

The project aims at creating an analysis-ready solar radiation data cube and web map services for Poland to advance the development of a solar-smart society and economy and to provide know-how and tools that are easily reusable in other geographical regions worldwide, in accordance with the FAIR principles.

The project will update a previously created solar climate atlas for Poland by:

  • increasing the spatial and temporal resolution of the datasets: 0.05° × 0.05° (regular lat/lon grid) → 100 m × 100 m; monthly means → daily/hourly (tbc) means;

  • extending time period: 1991-2014 (24 yrs) → 1983-2022 (40 yrs);

  • replacing static maps with a dynamic and interactive interface;

  • using practical solar radiation parameters instead of physical variables;

  • making datasets (plus metadata) available for download in interoperable file formats (for further use); and

  • providing a solar climate knowledge base and data/service user guides

in order to:

  • advance development of the solar-smart society and economy in Poland; and

  • provide know-how and tools, which are easily reusable in other geographical regions.

Figure 92 — Solar Climate atlas for Poland available on the IMGW website: https://klimat.imgw.pl/en/solar-atlas

The newly created solar climate data cube and web map services will be more FAIR. They will be made available online, possibly on the official website of the Polish Hydrometeorological Service (IMGW), subject to future agreement, making them more Findable, including by the general public. The whole process of data access (including authentication) will be transparent and accompanied by appropriate instructions to increase Accessibility. The datasets in the data cube will use the OGC netCDF standard compliant with the CF (Climate and Forecast) convention, which is suitable for encoding gridded data for space/time-varying phenomena and is well known in the climate science community, while also being easily readable with common spatial data processing and visualization software, including most GIS software, keeping the data fully Interoperable. Finally, even though the proposed solar climate information system (maps plus datasets) is limited to the area of Poland, all processing scripts will be made available on GitHub along with well-described processing steps (both Jupyter notebooks and instructional videos are being considered) to provide Reusability for other countries or geographical regions.
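To illustrate the intended interoperability, a CF-compliant netCDF file from such a data cube can be consumed directly with standard Python tooling. In this sketch the file name and the variable name SIS are assumptions based on the SARAH naming convention:

import xarray as xr

# Open a (hypothetical) CF-compliant netCDF file from the solar data cube.
ds = xr.open_dataset("sarah3_sis_poland.nc")

# Surface incoming shortwave radiation, with dimensions (time, lat, lon).
sis = ds["SIS"]

# Long-term monthly means: the basic product of a solar climate atlas.
monthly_climatology = sis.groupby("time.month").mean("time")

# Nearest-grid-point time series for a location of interest (Warsaw).
warsaw = sis.sel(lat=52.23, lon=21.01, method="nearest")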

It is important to emphasize the importance of solar radiation studies in addressing the impact of climate change. Solar activity is not responsible for the global warming trend of recent decades. The Sun is, however, a primary energy source driving most of the processes in the Earth’s climate system, and this energy can also be used as a renewable source of electrical energy. Therefore, knowledge about the spatio-temporal distribution of surface incoming solar radiation is crucial for decision-making in the sustainable energy transition process, which allows both rationalizing energy consumption and reducing emissions of greenhouse gases (responsible for global warming effects).

Scope of work for the Climate Resilience Pilot: State-of-the-art review and exploratory data analysis

Objectives for the Climate Resilience Pilot #1 are:

  • to document existing solar radiation datasets (satellite, model, and reanalysis data) and services (both freely accessible and commercial); and

  • to verify the accuracy of the in situ measurements and satellite climate data records for the selected solar radiation parameters using proper statistical methods.

The different solar radiation climate data records and their characteristics are listed in the table below.

Table 16 — Solar radiation climate data records

SARAH-3 Surface Solar Radiation Climate Data Record — Variables: Global Radiation/Surface Irradiance (SIS), Direct Radiation (SID), Direct Normalized Radiation (DNI), Sunshine Duration (SDU), Photosynthetic Active Radiation (PAR), Daylight (DAL), Effective Cloud Albedo (CAL); Data provider: CMSAF; Year of release: 2023; Data source: Satellite data; Accessibility: Free; Data format: NetCDF-4; Temporal resolution: Monthly and daily mean or sum (SDU), provided every 30 min (except SDU).

CLARA (clouds, albedo and radiation dataset from AVHRR data) — Variables: Surface Incoming Shortwave Radiation (SIS), Longwave Surface Radiation (SDL), Surface Albedo (SAL); Data provider: CMSAF; Year of release: 2020; Data source: Satellite data; Accessibility: Free; Data format: NetCDF-4; Temporal resolution: Daily- and monthly-averaged.

NSRDB (National Solar Radiation Database) — Variables: Global Horizontal Irradiance, Direct Normal Irradiance, Diffuse Horizontal Irradiance; Data provider: National Renewable Energy Laboratory (NREL); Year of release: 2017; Data source: Satellite data (GOES), Physical Solar Model (PSM) version 3; Accessibility: Free; Data format: .zip sent to the email address; Temporal resolution: 30- and 60-minute.

PVGIS — Variables: Average Global Irradiance on a Horizontal Surface (W/m2), Average Global Irradiance on an Optimally Inclined Surface (W/m2), Average Global Irradiance on a Two-axis Sun-tracking Surface, Optimal Inclination Angle for an Equator-facing Plane (degrees); Data provider: JRC; Year of release: 2012; Data source: Satellite data (CMSAF SARAH and NSRDB), partly combined with reanalysis (SARAH-2: gaps filled with ERA5 reanalysis); Accessibility: Free; Data format: Esri ASCII grid, GeoTIFF (SARAH 2); Temporal resolution: Monthly and yearly long-term averages calculated from hourly values.

ERA5 (the 5th generation of ECMWF atmospheric reanalysis, after the 4th, ERA-Interim) — Variables: Clear-sky Direct Solar Radiation at Surface, Surface Net Solar Radiation, Surface Solar Radiation Downwards, TOA Incident Solar Radiation, Top Net Solar Radiation, Total Sky Direct Solar Radiation at Surface; Data provider: ECMWF, Copernicus Climate Change Service (C3S); Year of release: 2018; Data source: Reanalysis; Accessibility: Free; Data format: GRIB (NetCDF experimentally via C3S CDS); Temporal resolution: 1 h.

CERRA (Copernicus European Regional ReAnalysis) — Variables: Surface Solar Radiation Downwards, Surface Thermal Radiation Downwards, Time-integrated Surface Direct Short-wave Radiation, Surface Net Solar Radiation (Clear Sky); Data provider: Copernicus; Year of release: 2022; Data source: Reanalysis; Accessibility: Free; Data format: GRIB2; Temporal resolution: 3 / 6 h.

MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications) — Variables: Radiation Diagnostics (i.e., surface albedo, cloud area fraction, in-cloud optical thickness), Surface Incoming Shortwave Flux (i.e., solar radiation), Surface Net Downward Shortwave Flux, and Upwelling Longwave Flux at TOA (top of atmosphere) (i.e., Outgoing Longwave Radiation (OLR) at TOA); Data provider: NASA Global Modeling and Assimilation Office (GMAO); Year of release: 2015; Data source: Reanalysis; Accessibility: Free; Data format: NetCDF; Temporal resolution: 1 h.

COSMO/REA — Variables: Instantaneous Direct Radiation at Surface, Instantaneous Diffuse Radiation at Surface; Data provider: Deutscher Wetterdienst (DWD); Year of release: 2020; Data source: Reanalysis; Accessibility: Free; Data format: GRIB; Temporal resolution: 1 h.

SOLARGIS — Variables: GHI — Global Horizontal Irradiation [kWh/m2], DHI — Diffuse Horizontal Irradiation [kWh/m2], GTI — Global Irradiation for Optimally Tilted Surface [kWh/m2], DNI — Direct Normal Irradiation [kWh/m2], PVOUT — Photovoltaic Power Potential [kWh/kWp], OPTA — Optimum Tilt to Maximize Yearly Yield [°]; Data provider: Solargis s.r.o.; Year of release: 2019 (free data); Data source: Solar radiation data from the satellite-based model developed by Solargis (input: Meteosat PRIME and IODC by EUMETSAT, GOES-East and GOES-West by NOAA, MTSAT and Himawari-8 by JMA, MACC-II/CAMS atmospheric data by ECMWF, MERRA-2 atmospheric data by NASA, GFS data by NOAA); Accessibility: Limited data for free (e.g., average daily totals and yearly/monthly totals), full-resolution data and services are commercial; Data format: GeoTIFF, Esri ASCII GRID; Temporal resolution: Year, Month (free data).

HelioClim-3, Version 5 — Variables: All radiation components (global, diffuse, direct) over horizontal, fixed-tilted, and any tracker-type planes (Wh/m²); Data provider: The research center O.I.E. (ARMINES and MINES ParisTech, Center for Observation, Impacts, Energy); Year of release: 2014 (?); Data source: Exploits the Heliosat-2 method to estimate a “cloud index” based on the analysis of 15-minute Meteosat Second Generation (MSG) satellite images in the visible band; Accessibility: Demo data for free; access to full data is based on paid subscriptions; data are accessible either via the SoDa web portal or via automated wget commands; Data format: CSV, JSON, Excel file, NetCDF; Temporal resolution: 1 min (downscaled from 15-min Meteosat) to 1 month.

CAMS solar radiation time series — Variables: Cloud-free conditions only: Global Horizontal Irradiation (GHI), Direct Horizontal Irradiation (BHI), Diffuse Horizontal Irradiation (DHI), Direct Normal Irradiation (BNI); Data provider: The research center O.I.E. (ARMINES and MINES ParisTech, Center for Observation, Impacts, Energy); Year of release: 2015; Data source: Meteosat satellite field-of-view for all-sky parameters, McClear algorithm (clear-sky solar radiation over the world); Accessibility: Free via CAMS for Europe (AGATE dataset) and Africa (JADE dataset); Data format: ASCII (CSV), NetCDF (point data, time series); Temporal resolution: 1 min, 15 min, 1 h, 1 day, 1 month.

EMHIRES (European Meteorological derived High Resolution RES) Dataset — Variables: Solar and wind power at different aggregation levels; Data provider: Joint Research Centre of the European Commission (JRC); Year of release: 2018; Data source: CMSAF SARAH, PVGIS model; Accessibility: Free; Data format: Excel XLS; Temporal resolution: Hourly time series.

Meteonorm

All the solar radiation datasets listed above cover the area of Poland but are not ready for direct use by non-scientific users. They are pre-processed to meet the requirements for climate data records and are quality controlled, but they are still primarily addressed to further scientific use, or are not available for free (SOLARGIS). Even though open source tools for managing and exploiting these datasets are often delivered by the data providers, they are not always sufficiently “attractive” because an extensive workload is still required to integrate them easily with additional data from other sources and to make them ready for specific applications in support of policy- and/or decision-making processes at the national level.

The CM SAF SARAH-3 dataset meets all the requirements for the planned solar climate data cube and web map service in terms of quality of estimates, spatial resolution, and temporal coverage. Its accuracy is now being verified in more detail against in situ measurements for the selected solar radiation parameters using proper statistical methods.

The following table includes a selection of existing solar climate web map services.

Table 17 — Solar climate web map services

Solar Atlas for Poland — Variables & info: Global Radiation / Surface Irradiance (SIS), Direct Radiation (SID), Direct Normalized Radiation (DNI); statistics: mean, max, min, STD, anomaly; Data provider: The Institute of Meteorology and Water Management — National Research Institute (IMGW-PIB); Year of first release: 2016; Data source: CM SAF SARAH; Accessibility: Open Access; Functionality: View, save a graphic, static maps (graphics), only in Polish, pixel values cannot be accessed, no metadata; Temporal resolution: Month, Season, Year.

Baltic Solar Atlas — Variables & info: Global Radiation / Surface Irradiance (SIS), Direct Radiation (SID), Direct Normalized Radiation (DNI); statistics: mean, max, min, STD, anomaly; Data provider: Lietuvos Hidrometeorologijos Tarnyba; Year of first release: 2015; Data source: CM SAF SARAH; Accessibility: Open Access; Functionality: View, save a graphic, static maps (graphics), pixel values cannot be accessed, no metadata; Temporal resolution: Month, Season, Year.

Sunny Days Probability — Variables & info: Probability of sunny days for selected stations worldwide based on CM SAF data; for each selected city the likelihood that a certain day throughout the year is sunny is shown; for Europe and Africa the likelihood of a 5-day sunny period is also shown; Data provider: Deutscher Wetterdienst (DWD); Year of first release: 2017 (?); Data source: CM SAF SARAH and CLARA; Accessibility: Open Access; Functionality: View, zoom in/out, charts for selected cities; Temporal resolution: Day of year.

PVGIS Webservice (The Photovoltaic Geographic Information System) — Variables & info: Global Horizontal Irradiation, Direct Normal Irradiation, Global Irradiation at Optimum Angle, Global Irradiation at Angle, PV Performance; Data provider: Joint Research Centre of the European Commission (JRC); Year of first release: 2012; Data source: Satellite data (CMSAF SARAH and NSRDB), partly combined with reanalysis (SARAH-2: gaps filled with ERA5 reanalysis); Accessibility: Open Access; Functionality: WebGIS, data download (JSON, CSV), user-dedicated tools (solar resource assessment, photovoltaic (PV) performance studies); Temporal resolution: Hour, Day (only point data series), Month, Year.

Global Solar Atlas — Variables & info: Global Horizontal Irradiation, Direct Normal Irradiation, Global Irradiation at Optimum Angle, Global Irradiation at Angle, Specific Photovoltaic Power Output, Optimum Tilt of PV Modules, Air Temperature; Data provider: Solargis s.r.o., supported by The World Bank Group and funded by the Energy Sector Management Assistance Program (ESMAP); Year of first release: 2016; Data source: SOLARGIS data, ERA5 post-processed by Solargis (air temperature); Accessibility: Open Access; Functionality: Interactive maps, PV energy calculator, solar and meteo data, area analysis for regions and custom areas, data layer download (tabular data (XLSX) and GIS raster layers (GeoTIFF)), ready-to-print maps, Global PV Potential Study: country factsheets; Temporal resolution: Long-term yearly average of daily and yearly totals.

SoDa — Variables & info: Historical, real-time, and forecast solar radiation and weather data services; solar radiation: HelioClim-3 solar radiation database (all radiation components: global, diffuse, direct over horizontal, fixed-tilted, and any tracker-type planes (Wh/m²)); solar radiation under clear-sky conditions: McClear fixed-tilted service; ultraviolet and photosynthetically active radiation data; Data provider: The research center O.I.E. (ARMINES and MINES ParisTech, Center for Observation, Impacts, Energy); SoDa has been commercialized by Transvalor S.A. since 2009; Year of first release: 2003; Data source: Meteosat geostationary satellites; for weather data, MERRA-2 and GFS; Accessibility: Part of the data and services for free; access to full data is based on paid subscriptions; Functionality: Solar radiation database (e.g., HelioClim-3) for long-term irradiation time series at a specific region, solar radiation maps (hourly, monthly, yearly averages), real-time maps and time series to identify solar radiation resource and cloud motion, solar forecasts, web services and products based on the data; Temporal resolution: 1 min (downscaled from 15-min Meteosat) to 1 month.

Global Atlas for Renewable Energy — Variables & info: Global Horizontal Irradiation (GHI), Direct Horizontal Irradiation (BHI), Diffuse Horizontal Irradiation (DHI), Global Normal Irradiation (GNI), Direct Normal Irradiation (BNI), Diffuse Normal Irradiation (DNI); Data provider: The International Renewable Energy Agency (IRENA), with a full list of partners and data providers; Year of first release: 2013; Data source: Multiple, e.g., HELIOCLIM3 (SODA, CAMS), ENDORSE, SOLARGIS (Global Solar Atlas), METEONORM; Accessibility: Open Access; Functionality: Display and overlay different renewable resource and ancillary datasets (transmission and road networks, protected areas, population density, and topography), WebGIS functionality, links with other platforms, systems, or software for comparative analyses, specific mapping functionalities to screen areas of opportunity where further assessments can be of relevance, download of data over identified areas of interest; Temporal resolution: Long-term yearly and monthly average.

Webservice-Energy Catalogue — Variables & info: The platform hosts web services dedicated to solar radiation and, more generally, energy; Data provider: GEOSS community portal; an initiative of MINES ParisTech, ARMINES, and the SoDa team; several services have been developed by MINES ParisTech, others by other providers such as DLR; Year of first release: N/A; Data source: Multiple, e.g., SoDa, SOLEMI, PACA Solar Atlas, CAMS Radiation Service; Accessibility: Open Access; Functionality: Access, search, and discover hundreds of energy- and environment-related resources (data, applications, tools, services); catalog with metadata; the website offers several web services (Web Map Services, Web Processing Services) obeying OGC (Open Geospatial Consortium) or W3C (World Wide Web Consortium) standards; the W3C standard has been abandoned in favor of the OGC standard; Temporal resolution: Various.

ENERGYDATA.INFO — Variables & info: An open data platform providing access to datasets and data analytics relevant to the energy sector; Data provider: The World Bank Group; list of partners at https://energydata.info/organization; Year of first release: N/A; Data source: Currently (2023-06-21) there are 976 datasets available; Accessibility: Open Access; Functionality: Access, search, and discover hundreds of energy- and environment-related resources (data, applications, tools, services); explore data by country; Temporal resolution: Various.

The Renewable Energy Zoning (REZoning) tool — Variables & info: Available resources (solar PV, wind, or offshore wind) for a chosen country: technical potential, economic potential, and multi-criteria analysis and prioritization; data on radiation: Global Horizontal and Tilted Irradiation; Data provider: Partners: The World Bank Group, ESMAP, UC Santa Barbara, Development SEED; Year of first release: 2021 (?); Data source: For radiation data: Global Solar Atlas; Accessibility: Open Access; Functionality: Identify and explore high-potential project areas for solar, onshore wind, and offshore wind development; the final map can be printed in PDF or PNG format; results of the analysis can be downloaded in CSV, SHP (for boundary selection), or GeoTIFF format (for grid selection) for further processing; Temporal resolution: Long-term yearly average of daily and yearly totals.

Copernicus Atmosphere Monitoring Service (CAMS) — Radiation Service — Variables & info: Clear-sky and total-sky radiation; the Global, Direct, and Diffuse Horizontal Irradiation, as well as the Beam Normal Irradiation, are provided; Data provider: German Aerospace Center (DLR), ARMINES, TRANSVALOR; Year of first release: 2018 (?); Data source: SoDa database: CAMS Clear-Sky (McClear model based on the processing of Meteosat data) and All-Sky Radiation (Heliosat-4) products: all-sky (cloudy or not) SSI within the MSG view; Accessibility: Open Access; Functionality: Access and download data (time series); Temporal resolution: 1 min, 15 min, 1 h, 1 day, 1 month.

RE Data Explorer — Variables & info: For Poland: Global Horizontal Irradiance, Direct Normal Irradiance; more data for selected developing countries; Data provider: National Renewable Energy Laboratory (NREL), USA; list of partners and data providers at https://www.re-explorer.org/about.html; Year of first release: N/A; Data source: For Poland: NSRDB, METEOSAT, Global Solar Atlas; Accessibility: Open Access; Functionality: WebGIS, user-friendly interface, renewable energy data, analytical tools (Levelized Cost of Energy Mapping Tool, Technical Potential Tool, PVWatts Lite), training materials, and technical assistance; Temporal resolution: For Poland only multiannual average (time series for selected developing countries).

Renewables Ninja — Variables & info: Ground-level Solar Irradiance, Top-of-Atmosphere Solar Irradiance, Average Annual Capacity Factors for PV; for a point of interest also Daily Mean and Monthly Capacity Factors; Data provider: ETH Zurich; Year of first release: 2016 (?); Data source: MERRA-2 reanalysis, CMSAF SARAH; Accessibility: Open Access; Functionality: The Renewables Ninja web portal allows running simulations of hourly power output from wind and solar PV farms by clicking anywhere on the map; ready-made datasets for a chosen country can also be downloaded; Temporal resolution: Hourly time series, daily and monthly means for one year (2015 or 2019).

Meteonorm-based services: Solar Cadastre, SolarSat, CloudMove, SolarForecast — Variables & info: Global Radiation on the Horizontal Plane, Diffuse Radiation on the Horizontal Plane, Direct Radiation on the Horizontal Plane, Global Radiation on the Inclined Plane, Direct Normal Irradiation, Energy Output; Data provider: Meteotest AG (Bern, Switzerland); Year of first release: N/A; Data source: Meteonorm database; Accessibility: Commercial; Functionality: SolarSat: measured quarter-hourly radiation data for the last 24 hours, updated every hour; CloudMove: solar radiation forecast for the next 6 hours, updated every 15 minutes, booster option; SolarForecast: solar radiation forecast for the next 72 hours, updated every hour; Temporal resolution: From 15 min (past 24 h) to month.

The identified services provide generalized information on spatio-temporal solar radiation patterns (high-resolution grids for longer periods are not available at all, or not for free). A set of high-resolution, analysis-ready data products in user-friendly formats, dedicated to specific applications at the national level, is needed to support decision-making processes.

The identified solar radiation services are currently being analyzed in terms of overall functional scope, usability of the interface, innovative tools, and possible shortcomings. The results of this analysis will be verified against a detailed recognition of the potential user needs.

8.4.  Wildfire resilience in insurance

The main focus of IFC’s participation in this project is to better understand end-to-end hazard and risk modeling workflows, in turn supporting the climate services required for decision-making in the business. This participation is also intended to further open up Intact Lab to the outside world by exchanging information on wildfire risks and climate resiliency in the context of the insurance industry.

The project centered its efforts on the following challenges:

  • identify current usages of wildfire maps at Intact by interviewing various business units;

  • revisit and update previous wildfire hazard maps, using external open data sources;

  • identify and seek collaboration opportunities with pilot participants;

  • inform internal architectural, infrastructure, and procurement processes of new geospatial standards and trends; and

  • identify and develop insurance wildfires risk use cases to help build resilient communities.

These activities should align with the best practices and standards of the OGC and current and proposed themes in OGC’s climate resilience Domain Working Group (DWG).

Wildfire risk in Canada is prominent, and even though major events do not occur every year, they can cause unprecedented damage. Costs from the wildfire events of the summer of 2021 in British Columbia reached $77 million and $78 million in insured damage at White Rock Lake and Lytton, respectively (https://doi.org/10.5194/nhess-12-3519-2012). Wildfire activity is expected to increase due to a rise in fire-prone conditions across the country (https://doi.org/10.5194/hess-21-6329-2017).

In an insurance company, wildfire risk impacts the work of a wide array of users, such as claim adjusters, insurance brokers, engineers, data scientists, actuaries, portfolio managers, and executives. IFC’s stakeholders were invited to provide information about current and potential uses of wildfire risk products within their operations. This information was used to identify use cases supporting this pilot project, as well as prospective proof-of-concepts for wildfire resiliency. It was determined that wildfires can impact numerous activities in the business, including but not limited to, restoration, claims, portfolio management, CAT modeling, risk management, and loss prevention. A resiliency and adaptation use case relevant to the topic of climate resilience is presented below.

Through granting programs, Intact is investing in communities across Canada to protect people from the effects of climate change and build more resilient communities (ESIP: Attribute Convention for Data Discovery (ACDD) – http://wiki.esipfed.org/index.php/). The Regional Municipality of Wood Buffalo and the community of Lac La Biche are both at an increased risk of being affected by wildfires. Their respective programs provide rebates and other incentives to residents to participate in home FireSmart assessments, and to upgrade their homes.


Figure 93 — FireSmart Canada’s Home Ignition Zones (Lawrence Livermore National Laboratory: NetCDF CF Metadata Conventions – http://cfconventions.org/)

Homeowners are informed of building material options in the immediate zone to reduce their risk of serious property damage. Residents and communities are also presented with landscaping practices for intermediate zones, further helping reduce the risk of wildfires in the area. The Acadia First Nation’s member communities are acting in the extended zone, creating 10 to 30 meter fire breaks to increase time for emergency response in case of fire and to decrease the risk of fire spread.

Ignition zones can be seen as interfaces between individual homes or structures and the surrounding area. In the scientific literature, the area where wildland meets or mixes with human-built structures is called the Wildland-Urban Interface (WUI). As the WUI is the area most at risk of wildfire, it is important to consider it closely when modeling risk. The first WUI dataset for Canada was generated in 2018, identifying that 3.8% of the national land area lies in the WUI (https://doi.org/10.5220/0006681102050210).


Figure 94 — Wildland-Urban Interface for Canada, on the left. Extraction of the WUI using satellite-derived imagery, on the right (https://doi.org/10.5220/0006681102050210).

A more comprehensive view of the WUI considers industrial areas as well as public infrastructure, such as power lines and railroads. This area is called the Wildland-Human Interface (WHI) and covers 13.0% of the national land area. It is estimated that within the WHI, 19.4% of the area lies in a zone of wildfire recurrence ≤ 250 years (OGC: OGC 11-165r2: CF-netCDF3 Data Model Extension standard, 2012). By the end of the century, this number could increase to 28.8% under the Representative Concentration Pathway (RCP) 2.6 low-emissions scenario and to 43.3% under the RCP 8.5 high-emissions scenario. Integrating the WUI into climate scenarios can help in conducting portfolio stress testing and in evaluating future risk.

As cities keep sprawling with population growth, the WUI is also expected to grow. This is an issue because increased fire activity due to climate change is expected, and this increased exposure will reach more vulnerable communities. It has been shown that the WUI is significantly related to socioeconomic variables such as GDP per capita, population density, road density, and the proportion of the population above 65 years old (https://docs.ogc.org/is/14-083r2/14-083r2.html).

The Canadian WUI dataset (https://doi.org/10.5220/0006681102050210) is unfortunately not available for download but could be replicated with open data sources, for instance through Natural Resources Canada (NRCAN) spatial data infrastructures. When developing a WUI dataset, an important parameter for users to fine-tune is the ember transport distance. Values can vary between the median of maximum travel distances, 600 m (https://doi.org/10.3390/fire3020010), and the maximum travel distance of 2400 m, which is the official standard in the United States. Novel wildfire risk models can also dynamically adapt fuel classes within the WUI to represent propagation more accurately. Producing, hosting, and integrating WUI datasets can therefore support the creation of better risk indices and help identify vulnerable areas to support further adaptation. A simplified sketch of such a derivation follows.
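The following Python sketch shows the shape of this derivation with geopandas; the input file names and layers are placeholders, and real inputs would come from open sources such as NRCAN:

import geopandas as gpd

EMBER_DISTANCE_M = 600   # tunable: 600 m (median) to 2400 m (US standard)

# Placeholder inputs: building footprints and wildland fuel polygons,
# reprojected to a metric CRS (required for buffering in meters).
buildings = gpd.read_file("buildings.gpkg").to_crs(epsg=3978)
fuels = gpd.read_file("wildland_fuels.gpkg").to_crs(epsg=3978)

# Structures within ember transport distance of wildland fuels are in the WUI.
fuel_zone = fuels.buffer(EMBER_DISTANCE_M).unary_union
wui_buildings = buildings[buildings.intersects(fuel_zone)]
print(f"{len(wui_buildings)} of {len(buildings)} structures fall in the WUI")

Varying EMBER_DISTANCE_M between the two cited values gives a simple sensitivity analysis of the resulting WUI extent.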

8.5.  Climate Resilience for Coastal Ecosystems

The following use cases examine various scenarios designed to qualify the risks and pending impacts of climate change on coastal ecosystems. The scenarios are designed to leverage Analysis Ready Datasets combined with in-situ observations to draw direct relationships between a changing environment and dependent human activities.

The core of this exercise focuses on the application of OGC standards and specifications as adapters for accessing various datasets supporting key ocean and coastal climate indicators.

8.5.1.  Ocean Acidification and Food Security

The ocean is responsible for upwards of 30% of the absorption of carbon dioxide from the atmosphere. As CO2 is taken in, it combines with water to form carbonic acid, causing the pH to drop. As concentrations of CO2 in the atmosphere continue to increase, the pH of the ocean has fallen by as much as 0.1 pH units, representing an approximately 30% increase in ocean acidity. As acidity rises, available carbonate ions bond with excess hydrogen ions, impeding the development of calcifying organisms such as oysters and shellfish. Of critical importance is the recognition that, as ocean acidity increases, the ability of the ocean to act effectively as a carbon sink for atmospheric CO2 is directly reduced, further compounding the future impact of anthropogenic activities and CO2 emissions.
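For reference, the arithmetic behind the acidity figure: pH is the negative base-10 logarithm of the hydrogen ion concentration, so a drop of 0.1 pH units multiplies that concentration by 10^0.1 ≈ 1.26, an increase of roughly 26-30% depending on the exact pH change assumed.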

This use case attempts to relate the trends in changing climate variables to the ocean’s ability to support the shellfish aquaculture industry along the Northeast coastline of the United States. Of particular importance is the direct relationship between essential climate variables and the carrying capacity of coastal environments to support dependent socio-economic activities. Indirectly, this use case attempts to identify the role of coastal ecosystems within a nature-based climate resilience strategy.

8.5.2.  Background

The study combines publicly available socio-economic data with climate change indicators relevant to an area of interest off the coast of Maine, USA. This area is supported through a number of observation platforms to measure ocean surface temperatures, salinity, wave heights, and other important characteristics related to the ocean’s state. Raw data processed to ARD provide additional metrics of the ocean’s regional climate indicators.

The framework takes advantage of previous efforts made through the OGC Marine DWG implementing a ‘federated marine spatial data infrastructure’ (FMSDI). In this case, the framework is designed to incorporate each data source as an independent service endpoint encoded as an OGC-compliant implementation of a Feature, Coverage, and/or Observation Collection. The service endpoints are developed and aligned with the OGC Features API, OGC EDR API, and the OGC Observations, Measurements, and Sampling (OMSv3) standards respectively. The goal is to use these standards and specifications as adapters over the custom encoding of each raw data source, allowing for a predictable semantic relationship and a loosely coupled, distributed feature schema.
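As a minimal sketch of how a client might resolve such federated endpoints (the base URL, collection identifier, and bounding box are hypothetical), an OGC API Features service exposes its datasets under /collections and serves features as GeoJSON:

```python
import requests

# Hypothetical federated FMSDI endpoint; any OGC API - Features implementation
# exposes its datasets under /collections.
BASE = "https://example.org/fmsdi"

# Discover the available feature/observation collections.
collections = requests.get(f"{BASE}/collections", params={"f": "json"}).json()
for c in collections["collections"]:
    print(c["id"], "-", c.get("title", ""))

# Fetch features for one collection, filtered by a bounding box off the Maine coast.
items = requests.get(
    f"{BASE}/collections/aquaculture_sites/items",
    params={"bbox": "-70.5,43.0,-68.5,44.5", "limit": 100, "f": "json"},
).json()
```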

This use case extends the concept of Analysis Ready Data to include processed data pipelines sourced from in-situ observation collections and sampling programs. Raw data, such as NetCDF datasets provided through the NOAA Saildrone program for monitoring ocean conditions, are processed into an ‘ARD’ encoded using the OGC Moving Features JSON specification (MF-JSON). Extending the concept of ARD to include datasets sourced from non-satellite-based observing platforms allows for a consistent view of important datasets independent of their originating platforms and associated processes and procedures. Where possible, this use case applies the OGC OMSv3 concepts of Host, Observation, and Observable collections over a common spatio-temporal coverage area to reduce raw data measurements to analysis ready data.
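For orientation, a minimal MF-JSON trajectory for a Saildrone-like platform might look as follows (all values illustrative; the parallel datetimes/coordinates arrays and the temporal properties structure follow the OGC Moving Features JSON encoding):

```python
import json

# Minimal MF-JSON moving feature (values illustrative, platform id hypothetical).
saildrone_track = {
    "type": "Feature",
    "temporalGeometry": {
        "type": "MovingPoint",
        "datetimes": ["2023-06-01T00:00:00Z", "2023-06-01T06:00:00Z"],
        "coordinates": [[-69.5, 43.2], [-69.3, 43.4]],   # lon, lat per datetime
        "interpolation": "Linear",
    },
    "temporalProperties": [
        {
            "datetimes": ["2023-06-01T00:00:00Z", "2023-06-01T06:00:00Z"],
            "sea_surface_temperature": {
                "type": "Measure",
                "form": "Cel",            # UCUM code for degrees Celsius
                "values": [14.2, 14.6],
            },
        }
    ],
    "properties": {"platform": "saildrone-1045"},
}
print(json.dumps(saildrone_track, indent=2))
```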

The use case is modeled as a federated service employing a recognized schema compliant with OGC and/or external industry standards. A user query resolves each ECV to its source and combines the related feature and observation data into a ‘decision ready dataset’ for further exploration.

Example — Storyline

A user wants to see the effect of rising sea surface temperatures, salinity, and other key ECVs on local aquaculture production for a particular area of interest.

In this use case, site information available through the Maine open data portal is used to define an area of interest. Related socio-economic variables for the area of interest and the topic (GDP, employment metrics, etc.) are resolved against the state government’s open data portal. The area of interest is used to refine the applicable ARD datasets, and the associated ECV measurements across the time period of interest are processed and aggregated using a weighted-average approach, as sketched below. The net result is an indicator relating the trend in the set of ECV measurements to milestones representing the harvest yields for each defined time period.
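One possible shape of the weighted aggregation step is sketched here, assuming inverse-distance weights and synthetic observations (the column names and values are illustrative, not pilot data):

```python
import pandas as pd

# Synthetic ECV observations near an aquaculture site (values illustrative).
obs = pd.DataFrame({
    "time": pd.to_datetime(["2022-07-01", "2022-07-01", "2022-08-01", "2022-08-01"]),
    "sst": [16.1, 15.8, 17.0, 16.7],      # sea surface temperature, degC
    "dist_km": [2.0, 8.0, 2.0, 8.0],      # distance of each sensor from the site
})
obs["weight"] = 1.0 / obs["dist_km"]      # inverse-distance weighting

# Weighted average per time step: sum(w * x) / sum(w).
num = (obs["sst"] * obs["weight"]).groupby(obs["time"]).sum()
den = obs["weight"].groupby(obs["time"]).sum()
weighted_sst = num / den
print(weighted_sst)   # one aggregated ECV value per time period
```

The resulting per-period series can then be joined against harvest yield milestones to form the indicator described above.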

8.5.3.  Challenges, Resolutions and Lessons Learned

  • Spatial Resolution

  • Temporal Resolution

  • Pub/Sub Event Model

  • Provenance [accuracy, reliability, peer-review, …]

  • Map, Binning and Global Grids

  • Weighted relationships between observable properties and features of interest

8.5.4.  Future Work

Catalog Services: When combining EO observation datasets with in-situ observations and sampling programs, an inordinate amount of effort is required to find acceptable sources of ARD datasets. Although individual organizations tend to align with the ISO 19115 metadata standard for describing ARD datasets, there is limited support, apart from manual effort, for discovering aligned ARD datasets provided across multiple providers. Recently, OGC announced an effort to establish the GeoDCAT working group. This effort, combined with work aligned with the OGC OMS SWG, would be beneficial for addressing the requirement to harvest metadata across multiple providers into one ‘centralized’ service endpoint.

Temporal Resolution: Typically, when addressing spatial analysis, the temporal resolution of the datasets is assumed to be aligned. In the case of climate modeling and raw EO datasets, care must be taken to ensure the temporal resolution of the ARD aligns with the temporal dimension of in-situ observations, sampling programs, and real-world feature datasets, as illustrated below.
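As a small illustration of this alignment step (file and variable layout hypothetical), a daily in-situ record can be resampled to the monthly resolution of an ARD cube before the two are joined:

```python
import xarray as xr

# Hypothetical inputs: a monthly ARD cube and a daily in-situ (e.g., buoy) record.
ard = xr.open_dataset("ard_monthly.nc")
insitu = xr.open_dataset("buoy_daily.nc")

# Downsample the daily record to monthly means so both share a time axis
# ("1MS" = month-start frequency).
insitu_monthly = insitu.resample(time="1MS").mean()

# The datasets can now be merged/compared on the common monthly time steps.
merged = xr.merge([ard, insitu_monthly], join="inner")
```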

Scalability: Considering the volume of data needed to describe climate trends specific to an area of interest, the methodology for loading raw data and ARD into a client environment needs to be addressed. The integration framework supporting the above use case tends to instantiate local copies of raw data and ARD datasets in the compute environment for processing and analysis. The OGC GeoDatacube initiative is well positioned to play a role in addressing the scalability requirements, although it is unclear whether this approach addresses loosely coupled, distributed data pipelines or requires local caching of datasets within the GDC processing workflow.

9.  Lessons Learned

In this first OGC Climate Resilience Pilot study, several valuable lessons have been learned regarding the effective integration and exchange of information between different components. These lessons highlight the importance of harmonizing extractions from diverse data sources, selecting or developing suitable models, and establishing robust workflows. Additionally, the pilot study has shed light on the significance of stakeholder engagement, iterative refinement, and continuous evaluation to enhance the interoperability of systems and components. By identifying and addressing challenges and leveraging these lessons learned, future climate resilience efforts can benefit from improved interoperability, enabling more informed decision-making and proactive strategies to mitigate the impacts of climate-related hazards.

Participants from the various organizations and institutes that contributed to the Climate Resilience Pilot noted the following gaps or challenges that still exist and require additional work in the future to overcome.

The Pixalytics Drought indicator utilizes data from sources such as the Copernicus Climate Data Store (CDS), Global Drought Observatory, and NOAA Climate Environmental Data Retrieval (EDR) API. This included testing the various sources and datasets to assess the speed, reliability, and cost of accessing input data from different providers with a goal of enabling on-demand data processing.

As an example, the input precipitation data obtained from the ERA5 dataset within the Registry of Open Data on AWS was compared to the CDS API. It was found that accessing the data stored on Amazon Web Services (AWS) Simple Storage Service (S3) was faster once virtual Zarrs were set up. However, there are concerns regarding the data’s provenance, as it was uploaded to AWS by an organization other than the original data provider. Additionally, the Zarr approach faced challenges when dealing with more recent years’ data, as the NetCDFs stored on S3 had inconsistent chunking. To address this issue, a request has been submitted to enhance the Python kerchunk library’s ability to handle variable chunking. This is noted because it is not specific to this data source: these challenges can occur with any large data source that needs to be transformed into Zarr for faster access.
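For readers unfamiliar with the virtual-Zarr approach, the kerchunk pattern described above looks roughly as follows (the S3 key is illustrative of the ERA5 open-data layout and may differ): byte-range references are generated once, and subsequent reads pull only the chunks actually needed.

```python
import fsspec
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr

# Illustrative ERA5 open-data object on S3 (exact key varies by variable/month).
url = "s3://era5-pds/2020/01/data/precipitation_amount_1hour_Accumulation.nc"

# Scan the NetCDF once and emit Zarr-style byte-range references (no data copy).
with fsspec.open(url, anon=True) as f:
    refs = SingleHdf5ToZarr(f, url).translate()

# Open the references as a virtual Zarr store; reads fetch only needed chunks.
fs = fsspec.filesystem("reference", fo=refs,
                       remote_protocol="s3", remote_options={"anon": True})
ds = xr.open_dataset(fs.get_mapper(""), engine="zarr", consolidated=False)
```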

Also, testing the ECMWF, CDS, and NOAA APIs showed that an OGC API interface to datasets provided a more streamlined interface than directly accessing files: once client code had been written, it was easy to amend when an additional API was incorporated. Pixalytics provided feedback to ECMWF and NOAA on their API usage, including collaborative discussions on potential improvements. In terms of the Pixalytics drought indicator output, QGIS modules have been identified that allow non-programmers to access and visualize the API outputs.
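The streamlined interface referred to above follows the OGC API EDR request pattern; a hedged example of a position query (the endpoint, collection, and parameter name are hypothetical) shows why the same client code transfers easily between providers:

```python
import requests

# Hypothetical EDR endpoint; the query pattern is the same across providers.
BASE = "https://example.org/edr"

resp = requests.get(
    f"{BASE}/collections/precipitation/position",
    params={
        "coords": "POINT(-1.5 52.0)",    # WKT point: lon lat
        "datetime": "2022-01-01T00:00:00Z/2022-12-31T23:59:59Z",
        "parameter-name": "total_precipitation",
        "f": "CoverageJSON",
    },
)
covjson = resp.json()   # CoverageJSON document with the time series at the point
```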

For Esri’s contribution, the following lessons were learned in building CMRA version 1 and in the six months since its release.

For Intact Financial Corporation, the following lessons learned are highlighted.

Jakub P. Walawender noted that data cubes may not necessarily be assigned to a specific data processing stage in the data value chain but can be applicable to data at different processing levels. For example, raw data are often structured as multidimensional datasets (Raw Data Cubes), and analysis-ready data are often structured as a collection sharing some common properties, such as spatial reference system, spatial resolution (pixel size), and temporal coverage (Analysis Ready Data Cubes). The definition of a data cube differs slightly between raw data cubes (a technical approach focused on data structure aspects) and Analysis Ready Data Cubes (an application-oriented approach focused on a specific thematic application at regional or national level). Therefore, Analysis Ready Data Cubes (ARDCs) play a very important role in bringing data closer to the user and bridging the gap between global and local contexts to meet specific user needs and expectations.

The review and analysis of available solar radiation climate data records and web map services done by Walawender revealed that CDRs are primarily addressed to scientific users (physical variables, scientific data formats, large data volumes, etc.) and require extensive processing to become more user-friendly and analysis ready. The web map services, meanwhile, are sometimes not FAIR: they provide generalized information on spatio-temporal solar radiation patterns, mainly at a global or continental scale, and high-resolution products are not always available for free.

The pilot project aimed to achieve multiple objectives, one of which was to reduce the obstacles that users face when accessing Copernicus CDS/ADS (Climate Data Store/Atmospheric Data Store) data and services. By identifying these barriers or gaps from the users’ perspective, the pilot can adapt and evolve accordingly. This approach ensures that the project engages a broader user community and facilitates their interaction with CDS/ADS.

To provide a clear direction for developers and users, the pilot intends to establish templates and common guidelines for well-defined climate service workflows. These workflows will serve as road maps, guiding individuals through the entire process from raw data to actionable information. By offering structured example frameworks, the project aims to enhance efficiency and streamline the utilization of climate services.

Several enhancements were planned for the project, including improvements to the performance of the Sentinel-2 data cube. Climate data and vegetation fuel type classification can also be incorporated to support a wildfire risk assessment workflow. These enhancements contribute to expanding the capabilities and functionalities of the pilot project.

In regards to Analysis Ready Data, ARD principles can be applied to climate time series, not just to earth observation (EO) data. Good ARD should be useful for a range of scenarios and able to answer a range of analytic questions. ARD usually involves some degree of filtering, simplification, and data aggregation without losing the essential information necessary to support decision making.

During the DP21 phase, a solid foundation was established for exploring data cube extraction and conversion to ARD using the FME data integration platform. In this pilot, a number of new approaches were explored for tasks such as data extraction, simplification, and transformation. Additionally, different methods were investigated for selecting, splitting, aggregating, and summarizing time series. The primary objective was to generate ARDs capable of answering questions related to climate trends and readily consumable by GIS and other geospatial applications.

The initial ARD approach to deriving temperature and precipitation polygons, inherited from the DP21 work on flood contours, involved too much data simplification. Classification into temperature or precipitation bands resulted in loss of detail, oversimplifying the data to the point where it no longer held enough variation over local areas to be useful. Based on user feedback, it was determined that converting data cubes to vector time series point data served the purpose of simplifying the data structure for ease of access, but retained the environmental variable precision needed to support a wider range of data interpretations for indicator derivation. It also meant it was not necessary to anticipate or encode indicator business rules into the data simplification process. The end user is free to run queries to find locations and time steps for specific temperature or precipitation ranges of interest.
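A minimal sketch of this cube-to-point-time-series conversion is shown below, using an xarray/GeoPandas workflow as a stand-in for the FME workflow used in the pilot (file, variable, and coordinate names are hypothetical):

```python
import geopandas as gpd
import xarray as xr

# Hypothetical monthly maximum-temperature cube with lat/lon/time dimensions.
ds = xr.open_dataset("tasmax_monthly.nc")

# Flatten the cube into a long table: one row per grid cell per time step,
# preserving full variable precision rather than pre-classified bands.
df = ds["tasmax"].to_dataframe().reset_index()

gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df["lon"], df["lat"]),
    crs="EPSG:4326",
)

# End users can now apply their own business rules, e.g., cells above 30 degC.
hot = gdf[gdf["tasmax"] > 30.0]
```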

Initially it was thought that classification rules needed to more closely model impacts of interest. Business rules for a heat wave might use a temperature range and a statistic as part of the classification process before conversion to vector. However, this imposes the burden of domain knowledge on the data provider rather than on the climate service end user, who is more likely to understand the domain of interest and how best to interpret the associated data. This represents a tension over where in the process to apply indicator business rules, which in turn is influenced by the type of user intended.

In the absence of more sophisticated models, looking at the delta between future forecast and historical averages served as an interesting experiment for highlighting potential climate change impact hotspots. Past and future were differentiated both spatially and temporally for equivalent time steps (monthly). These deltas may serve as a useful starting point for climate change risk indicator development and can serve as an approach for normalizing climate impacts when the absolute units are not the main focus. This may give local planners and managers more options to explore and analyze local areas and times of concern related to climate model scenario outputs.
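The delta computation itself is straightforward; a sketch with xarray (file and variable names hypothetical) compares monthly climatologies between a historical baseline and a future scenario:

```python
import xarray as xr

# Hypothetical inputs: historical baseline and a future scenario for temperature.
hist = xr.open_dataset("tas_historical_1981_2010.nc")
future = xr.open_dataset("tas_rcp85_2041_2070.nc")

# Climatological monthly means for each period, then the per-month difference.
hist_clim = hist["tas"].groupby("time.month").mean("time")
future_clim = future["tas"].groupby("time.month").mean("time")

delta = future_clim - hist_clim   # dims: (month, lat, lon); hotspot candidates
```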

More analysis needs to be done with higher resolution time steps — weekly and daily. At the outset, monthly time steps were used to make it easier to prototype workflows. Daily time step computations will take significantly more processing time. Future pilots should further explore ways to better support scalability of processing through automation and cloud computing approaches, such as the use of cloud native formats (STAC, COG, Zarr, etc.).

Essential climate variables (ECVs) have traditionally been discussed in the context of earth observation (EO) data. Within this pilot, at first it seemed that ECVs could just as easily relate to the environmental variables stored in climate model outputs such as data cubes. However, on closer examination and discussion within the pilot, it was determined that the term ‘ECV’ has a specific meaning related to earth observation and sensors that does not translate well into the climate model data context. This is partly because ECVs have a certain prescribed statistical certainty that is not relevant to climate projections, which have a much higher degree of uncertainty.

Nevertheless, whatever climate variables are used for deriving impacts based on climate scenarios, it is necessary to develop standardized approaches for climate variable selection, analysis, and summarization. Careful attention should also be paid to preserving metadata on the source of the climate scenarios used to derive the climate variables, so that consumers of related impact information can better understand the veracity of the data behind the impact estimates. Together this will help support a better understanding of ARD in relation to climate change impact management, which in turn will ultimately support better decision making.

Further experimentation is required to enhance the project’s capabilities. This experimentation encompasses various aspects, including analytic techniques, statistical methods, simplification processes, and publication methodologies. Additionally, the project aims to explore cloud-native approaches such as NetCDF to COG conversion and the utilization of APIs. These ongoing experiments contribute to refining the project’s methodologies and expanding its range of applications.
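One possible shape of the NetCDF-to-COG experiment mentioned above is sketched here (file and variable names hypothetical; rioxarray delegates the write to GDAL's COG driver):

```python
import rioxarray  # registers the .rio accessor on xarray objects
import xarray as xr

# Hypothetical monthly indicator cube; export one time slice as a
# Cloud Optimized GeoTIFF for web-friendly, range-request access.
ds = xr.open_dataset("spi_monthly.nc")
slice_ = ds["spi"].isel(time=0)

slice_ = slice_.rio.write_crs("EPSG:4326")       # tag the CRS for GDAL
slice_.rio.to_raster("spi_2022_01.tif", driver="COG")
```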

Currently, the participants have implemented the first drought index, the Standardized Precipitation Index (SPI), using precipitation data from the Copernicus Climate Data Store (CDS); a minimal sketch of the computation follows below. However, the participants are open to incorporating additional data sources as per the project’s requirements. This flexibility ensures that the pilot project remains adaptable to evolving needs and can utilize diverse datasets to enhance its outputs.
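For orientation, the sketch below fits a gamma distribution to rolling precipitation accumulations and maps the probabilities onto a standard normal, which is the core of the SPI. An operational SPI additionally handles zero-precipitation periods and fits the distribution per calendar month, both of which this sketch omits:

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, window=3):
    """Minimal SPI sketch: gamma fit on rolling accumulations, mapped to N(0,1)."""
    # Rolling accumulation over `window` months (e.g., SPI-3).
    acc = np.convolve(precip_monthly, np.ones(window), mode="valid")
    # Fit a gamma distribution with the location fixed at zero.
    shape, loc, scale = stats.gamma.fit(acc, floc=0)
    # Cumulative probability of each accumulation, then inverse normal transform.
    cdf = stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

# Example with synthetic monthly precipitation (mm); negative SPI = drier than normal.
rng = np.random.default_rng(42)
print(spi(rng.gamma(2.0, 30.0, size=120))[:6])
```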

In summary, the pilot project seeks to overcome barriers and engage a wider user community by facilitating access to CDS/ADS data and services. A well-defined climate service workflow will guide developers and users through the entire process, ensuring efficiency and effectiveness. Enhancements to the Sentinel-2 data cube, the inclusion of climate data and vegetation fuel type classification, and the development of a wildfire risk assessment workflow will expand the project’s capabilities. By applying ARD principles and refining classification rules, the project aims to generate valuable insights into climate trends. Ongoing experimentation and the exploration of different methods contribute to the project’s continuous improvement.

Being the first OGC Climate Resilience Pilot, there has been significant underpinning work on the component elements that has supported an improved understanding of what is currently possible and what needs to be developed. Future pilots will focus on supporting the filling-in of identified gaps and definition of best practices guidelines to support and enable broader international partnerships.

During the pilot, participants agreed on the following items as specific actions where future work is needed.

In addition, during the presentation of the outcomes at the OGC Member Meeting in Huntsville (June 2023), it was emphasized that the logic needs to change for the next Pilot. Instead of starting with the raw data and generating the information to support decisions, the work should start by understanding stakeholders’ interests and problems, and then proceed backwards to find the raw data inputs that would help answer the stakeholders’ questions. There needs to be a focus on how to position knowledge in order to have an impact on decision makers. Questions include: what is the market need, what is the benefit to communities, and how are we helping people?

10.  Recommendations for future climate resilience pilots

Based on the experiences of this first Climate Resilience Pilot, the lessons learned can inform a refined design of the upcoming pilot on climate resilience so that it addresses the most relevant challenges. Within these efforts, the OGC Climate Resilience Community has been growing and is bringing together decision-makers, scientists, policymakers, data providers, software developers, and service providers. It also includes city managers, politicians, and, last but not least, every one of us.

10.1.  Thematic aspects: Climate change resilience to the triple crisis

The current climate resilience pilot focused on climate change-related phenomena, while the ongoing international discussion is moving toward the holistic perspective of the triple crisis: Climate Change, the loss of Biodiversity, and Pollution. It is therefore recommended to address these within the upcoming climate resilience pilot, in line with the technical challenges of this triple crisis and selecting feasible aspects accordingly. As a result of the closing workshop at the OGC Member Meeting in Huntsville, the thematic areas of upcoming work should focus on precipitation extremes causing disasters: floods in the case of extremely high amounts of precipitation, and droughts and desertification in the case of dryness anomalies. Both cases can lead to land degradation, which should be considered in line with climate resilience and the loss of biodiversity.

To cover the third aspect of the triple crisis, pollution, air pollution and emissions of greenhouse gases (GHG) have been identified as important issues to tackle, in various ways, by enhancing the information produced from monitoring systems and services with respect to the UN reporting requirements of the upcoming UN Global Stocktake, which underpins the climate actions of nations in line with their nationally determined contributions (NDCs) to reduce GHG emissions. A further aspect of GHG emissions is the feedback loop to changing precipitation patterns and land use change, which affects soil carbon content, one of the indicators of land degradation. Changes in soil carbon content are also caused by the melting of permafrost, affecting very large regions in the upper northern hemisphere such as the Canadian and Russian tundras.

Focusing on regions, particular challenges are found in mountainous areas, small islands, and hyper-arid areas, owing to the complexity of very small-scale natural climatic phenomena. This complexity increases further when land use change is taken into account.

The climate resilience pilot ran in parallel with other pilots and had many touch points with the Disaster Pilot. Important lessons learned from the growth of the OGC climate resilience community, and the common understanding of phenomena relevant to both climate resilience and disaster resilience, guide the recommendation to merge both pilot lines.

A future climate resilience pilot can accommodate disaster aspects, since the technical Climate Resilience Information Systems (CRIS) and the FAIR Climate Services principles establish modular climate application packages which are interoperable with each other and guided by the same technical principles and tools.

10.2.  Technical aspects: Climate resilience information systems towards FAIR Climate services

Issues around the delivery of climate information to support adaptation decisions complicate the difficult and time-consuming work of climate service centers. These centers may have a local, regional, or international scope, but typically act as boundary organizations, connecting clients to climate science data and expertise. As demand increases for climate products, climate service centers are pressured to develop and deploy IT systems to access and process climate data more efficiently and to expand the range and complexity of services delivered. Although climate adaptation challenges vary across regions, data processing workflows are very similar and could benefit from shared information systems. Land Degradation Neutrality (LDN) and climate resilience are strongly related, both as scientific phenomena and in the technical applications along the value chain from raw data to information and knowledge. In both cases, similar approaches have been developed concerning data handling in data cubes and analysis-ready data (ARD) up to decision-ready indicators (DRI). Agreeing on standards for DataCubes, ARD, and DRI would enable a better linkage of information exchange within the UN climate policy frame and beyond.

10.3.  Interoperability studies and gap analysis of data sources and infrastructures

Here, the Copernicus Climate Change Service (C3S), with its underpinning Climate Data Store (CDS), is an example that has been renewed and moved toward higher interoperability. A dedicated gap analysis and interoperability experiment concerning the usability of the C3S technical services alongside other existing climate resilience information systems would be a useful step towards the vision of global collaborative solutions. In this respect, the concept of FAIR Climate Services can be refined, extended, and properly documented.

It is further recommended to continue lowering the barriers for experts who want to spin off climate resilience information systems for their specific use cases and needs. As demonstrated in the pilot, the modular chaining of components is the recommended approach to designing the architecture, with climate application packages being interoperable with each other and following the FAIR Climate Services principles. There are existing utilities (the Birdhouse approach demonstrated in this pilot) that help developers establish climate application packages; these need to be further developed and improved for better usability. Similarly to climate application packages, the LDN value chain concept follows the same approach of modular interoperable components, whose interoperability can be enhanced.

It is further recommended that aspects of data visualization and the use of case-specific simulations be emphasized. In particular, small-scale 3D visualizations, including realistic digital twins of vegetation and of individual trees within digital twins of urban areas, are recommended for future enhancement. The pilot has shown the power of artificial intelligence in establishing realistic simulations of use cases under different climatic scenarios. Enhancing the underlying technology, and establishing data visualization and simulation for specific use cases, political decisions, or socioeconomic scenarios with respect to future climate projections, would be a step forward in closing the gap between existing climate information and implemented climate action.

10.4.  Climate Service Consultation aspects: Communication to Stakeholders

The upcoming CRP24 should take a user-centric approach, tailoring the data products and application packages to user needs and requirements. The existing OGC stakeholder community should be more strongly involved during pilot execution so that the value chain from raw data to information is tailored to stakeholder requirements. It is recommended to design the OGC pilots with activities addressing potential stakeholders, both to grow the OGC stakeholder community and to understand their requirements, addressing data products and tools related to their needs and further bridging the gap from scientific knowledge to policy-driven climate action. The mismatch between the huge amount of knowledge about climate change and its potential impacts and the relatively small socio-economic change, known as the knowing-doing gap, can be addressed in future work. The improvement and incorporation of the communication aspects explored in the current pilot should be emphasized and enhanced with existing technologies, especially simulations and data visualization. Also, the good practice guidance of the UNCCD proposes ‘decision trees’ for end users to identify the most reliable data that exist for a region.

OGC needs to continue to move towards non-technical communication, breaking down the very technical engineering reports into non-technical content such as animated videos. Especially for the domain of climate and disaster resilience, it is essential that the principal importance of the work concerning FAIR Climate Services be presented in formats other than engineering reports alone. Besides explanatory videos, capacity building can be done with modules on the e-learning platforms currently being established in OGC. Future pilots should produce tutorials and training materials that lower the barriers for developers spinning off their applications based on good practice guidelines, tutorials, and e-learning modules. In this context, established running applications can be promoted via the upcoming Open Science Persistent Demonstrator. It is recommended that OGC tailor capacity-building material in formats that are exchangeable with other open knowledge platforms such as the GEO Knowledge Hub. A further aspect of capacity building and outreach is moving the upcoming work into multiple languages. The animation video of this pilot has already been produced in four languages; in addition to English, it is available in French, Spanish, and Chinese. Upcoming work should not be restricted to English but should target other languages as well.


Annex A
(normative)
Data Sources

Base map data

  • OSM: https://www.openstreetmap.org/#map=3/71.34/-96.82

  • OSM Extractor: https://extract.bbbike.org/

Earth Observation Data

Elevation Models

  • Province of Manitoba Land Initiative: https://mli.gov.mb.ca/dems/index.html

Climate related data


Annex B
(informative)
Revision History

Date | Release | Author | Primary clauses modified | Description
2023-08-04 | revision 1 | All editors and contributors | all | Initial revised version posted after first draft release
2023-08-05 | revision 1.01 | All editors and contributors | all | Included various minor changes
2023-08-29 | revision 1.02 | Pixalytics, merged by AK | Conclusions and future work | Handful of minor textual changes
2023-09-01 | revision 1.03 | AK | all | Updated appendix, list of data sources
2023-09-01 | revision 2 | AK | all | Second revised version posted after first revision release