Global Earth Monitor use-cases will be developed using the GEM infrastructure to demonstrate and validate its added value:
- Conflict Pre-Warning Map (SatCen)
- Map-Making Support (TomTom)
- Crop Identification (meteoblue)
- Built-Up Area (Sinergise)
Conflict Pre-Warning Map
The Conflict Pre-Warning Map (CPW) aims to merge multiple data sources available in GEM. Geographic data and other open sources of information (e.g. distribution of ethnicities or religions) will be combined to provide a map or a report supporting political decision-making. A new, highly flexible semi-automated CPW service will be developed as a combination of the CPW map and a dedicated Decision Support System (DSS) based on weighted variables associated with the input data. Different geospatial data will contribute to CPW, either derived from existing products or generated in the framework of the GEM project:
- Geospatial data (e.g. from the existing Copernicus services, such as vegetation index time series and water bodies).
- Meteorological data (e.g. precipitation, air temperature, forecasts).
- Traffic data to support the detection of hot-spot areas (e.g. road blocks due to natural disasters, border crossing points).
- Land Cover data, Land Cover Change Monitoring (LC CMS) and their derived products, which will be used to detect and analyse potentially disruptive changes in land cover.
- Open data (e.g. distribution of ethnicities or religion).
- Very High Resolution (VHR) data.
CPW will directly benefit from the input of GEM products. The LC CMS products and the correlations between different climatic, environmental and thematic variables will be explored to determine their influence on the emergence of conflicts. The datasets will be included in SatCen's GIS system to create a decision support model that defines the risk of conflicts (which might be associated with scarcity of resources, natural disasters or the impact of drastic climate-related changes). Alerting capabilities will be employed to generate warning reports at regional scale when situations match those identified as indicative of an upcoming violent outbreak. CPW serves as a flagship demonstration of higher-level, cross-correlated ML processing for decision-making support, going beyond typical "one-problem ML challenges".
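The exact variables, weights and alert thresholds of the DSS will be defined during the project; the following minimal sketch only illustrates the general idea of combining normalised input layers into a weighted risk score with a simple regional alerting rule. All layer names, weights and the threshold below are hypothetical assumptions, not the actual CPW configuration.

```python
import numpy as np

# Hypothetical, co-registered regional input layers, each normalised to [0, 1]
# (higher value = higher conflict-relevant stress). Names and weights are
# illustrative assumptions, not the actual CPW configuration.
layers = {
    "vegetation_anomaly": np.random.rand(100, 100),      # e.g. deviation of vegetation indices from baseline
    "precipitation_deficit": np.random.rand(100, 100),   # from meteorological data
    "land_cover_disruption": np.random.rand(100, 100),   # from LC CMS change products
    "hotspot_density": np.random.rand(100, 100),          # e.g. road blocks, border crossing points
}
weights = {
    "vegetation_anomaly": 0.30,
    "precipitation_deficit": 0.30,
    "land_cover_disruption": 0.25,
    "hotspot_density": 0.15,
}

# Weighted combination of the input variables into a single pre-warning score per cell
cpw_score = sum(weights[name] * layer for name, layer in layers.items())

# Simple regional alerting rule: raise a warning when the mean regional score
# exceeds a (hypothetical) threshold calibrated on past situations
ALERT_THRESHOLD = 0.7
if float(cpw_score.mean()) > ALERT_THRESHOLD:
    print(f"CPW alert: regional score {cpw_score.mean():.3f}")
```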
Map-Making Support
The goal of TomTom's use-case is not only to demonstrate the added value of GEM as support to the map-making industry, but also to effectively improve the quality of TomTom's maps and reduce mapping costs.
The Map-Making Support use-case will integrate GEM LC services to:
- Perform fully automated and repeatable global LC mapping at small scale
Leveraging HR satellite data (10-30 m resolution), such as the Sentinels, can bring tremendous semantic and visual value up to zoom level 14 (scale 1:35,000), as the quality requirements for features at higher zoom levels are too strict for feature extraction from HR imagery to make sense. GEM services are completely aligned with TomTom's mapping strategy, which is split along the lines of large-scale and small-scale mapping, the latter covering zoom levels 14 and below. TomTom's use-case will ingest GEM LC CMS output (LC raster maps) and transform it into "map-ready" features that can be directly ingested into TomTom's cartographic master database. TomTom has already developed an LC production pipeline (Earth Cover Engine) based on eo-learn's capabilities. Although global LC production and its integration into TomTom's map are under way, a maintenance process allowing near-real-time updating of LC based on meaningful changes in the landscape has yet to be defined and implemented. GEM's Periodic Big-Data Service, and particularly the concept of Continuous Monitoring, can be instrumental in guaranteeing the cost-effective repeatability of LC production and, as such, guarantee customers the latest map content. In conclusion, GEM provides an ideal opportunity to enrich TomTom's maps at mid-to-small scale in an (almost) fully automated way and at tremendous speed compared to current map-update cycles, which can take up to several years for a given geographic extent.
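As a rough illustration of the kind of raster-to-vector step involved in producing "map-ready" features from an LC raster map, the sketch below polygonises an LC raster into per-class vector features. The class codes and legend are hypothetical assumptions; TomTom's actual Earth Cover Engine pipeline is not described here.

```python
import numpy as np
import rasterio
from rasterio import features
from shapely.geometry import shape

# Hypothetical LC class codes in the input raster; the real legend will differ.
TARGET_CLASSES = {1: "forest", 2: "water", 3: "built-up"}

def raster_to_map_features(lc_raster_path):
    """Polygonise an LC raster into per-class vector features that could be
    prepared for ingestion into a cartographic database (simplified sketch)."""
    with rasterio.open(lc_raster_path) as src:
        lc = src.read(1).astype(np.int32)
        map_features = []
        # rasterio yields (GeoJSON-like geometry, pixel value) pairs per contiguous region
        for geom, value in features.shapes(lc, transform=src.transform):
            class_name = TARGET_CLASSES.get(int(value))
            if class_name is not None:
                map_features.append({
                    "geometry": shape(geom),
                    "properties": {"class": class_name},
                })
    return map_features
```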
- Optimise LC map production at large scale
On top of creating "map-ready" LC features to update and visually enhance TomTom's map, the extraction performed at 10 m resolution, even though insufficient for the production of features at the highest zoom levels, can still be leveraged to provide leads at locations where existing map features are likely to have changed. Upon detection of such changes, these leads could be provided to editors, who can then update the product accordingly using GEM's drill-down capabilities. This could drastically boost efficiency by providing editors with only the locations and extents they need to perform their work, as well as an indication of what to look for. To provide additional semantic context to the detected leads, combining additional sources such as vehicle traces (probe data) could help determine whether a detected change affects traffic (e.g. floods, a newly built road), in which case such changes would be escalated to a higher priority than other leads. Furthermore, this will mitigate the high-cost problem of false-positive detections, which arises when deep-learning techniques are applied to detect features at high zoom levels such as playing fields, open parking areas and buildings. High false-positive rates are counter-productive, as they create a lot of additional work for editors, who need to verify every single detection.
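A simplified sketch of how such leads might be generated and prioritised is given below; the change mask, the probe-density layer and all thresholds are illustrative assumptions rather than the actual TomTom workflow.

```python
import numpy as np
from scipy import ndimage

def generate_change_leads(lc_before, lc_after, probe_density,
                          min_pixels=10, probe_threshold=5.0):
    """Turn per-pixel LC change into prioritised editor leads (illustrative sketch;
    the thresholds and the probe-density layer are hypothetical assumptions)."""
    changed = lc_before != lc_after
    # Group contiguous changed pixels into candidate lead regions
    labelled, n_leads = ndimage.label(changed)
    leads = []
    for lead_id in range(1, n_leads + 1):
        mask = labelled == lead_id
        if mask.sum() < min_pixels:
            # Drop tiny detections: a cheap way of filtering likely false positives
            continue
        mean_probe = float(probe_density[mask].mean())
        leads.append({
            "lead_id": int(lead_id),
            "pixel_count": int(mask.sum()),
            # Escalate leads where vehicle traces suggest the change affects traffic
            "priority": "high" if mean_probe > probe_threshold else "normal",
        })
    # High-priority leads (traffic-affecting changes) are presented to editors first
    return sorted(leads, key=lambda lead: lead["priority"] != "high")
```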
Crop Identification
Pressing societal needs, the unique requirements of agricultural monitoring and its importance from the perspective of food security make it urgent to develop new analysis and processing strategies that allow for accurate and spatially detailed agricultural monitoring over large areas. From the EO perspective, agriculture is a complex phenomenon that poses unique challenges. For example, the type of crop grown on a parcel usually changes within and between years according to the chosen crop rotation. The same crop type can have a different temporal and spectral appearance due to local land management, genotype, site conditions or environmental factors such as weather and solar radiation. Temporal information is therefore usually the key to differentiating individual crop types, making use of unique differences in seasonal growing characteristics and crop phenology. A dedicated use-case will be developed as a continuation of GEM's generalised LC monitoring approach: agricultural areas will be further modelled for crop identification and the services will be evaluated and demonstrated on GEM's demo area. The goal is to develop a model that could be run on a global scale to provide automated prediction of phenological stages for timely decision-making, real-time crop health monitoring, crop-management optimisation (irrigation and spraying suggestions) and yield forecasts.
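As a simple illustration of what automated phenological-stage prediction from satellite time series can look like, the sketch below estimates start, peak and end of season from a per-parcel NDVI series using a basic amplitude-threshold heuristic. The 50% amplitude fraction and the toy data are assumptions for illustration only, not the method that will be selected in the project.

```python
import numpy as np

def estimate_phenology(ndvi, dates, amp_fraction=0.5):
    """Estimate start/peak/end of season from a per-parcel NDVI time series using
    a simple amplitude-threshold rule (a common heuristic; the 50% amplitude
    fraction is an illustrative assumption, not a GEM specification)."""
    ndvi = np.asarray(ndvi, dtype=float)
    base, peak = ndvi.min(), ndvi.max()
    threshold = base + amp_fraction * (peak - base)
    above = np.where(ndvi >= threshold)[0]
    return {
        "start_of_season": dates[above[0]],
        "peak_of_season": dates[int(np.argmax(ndvi))],
        "end_of_season": dates[above[-1]],
    }

# Toy example: one growing season sampled at ~10-day intervals
dates = [f"2021-{m:02d}-{d:02d}" for m, d in
         [(4, 1), (4, 11), (4, 21), (5, 1), (5, 11), (5, 21),
          (6, 1), (6, 11), (6, 21), (7, 1), (7, 11), (7, 21)]]
ndvi = [0.2, 0.25, 0.3, 0.45, 0.6, 0.75, 0.8, 0.78, 0.7, 0.55, 0.4, 0.3]
print(estimate_phenology(ndvi, dates))
```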
Satellite-derived vegetation indices (e.g. LAI, NDVI…) combined with weather variables and the known distribution of phenological phases in each crop growing season (crop calendar from literature and/or observation databases) will make it possible to discriminate between crop species with uncertainties considered acceptable for operational purposes within CAP Area Monitoring.
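A minimal sketch of this kind of crop-type discrimination is shown below: vegetation-index time series are stacked with weather variables into a per-parcel feature vector and fed to a standard classifier. The feature layout, crop labels and random training data are hypothetical placeholders, not GEM data or the model that will actually be developed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-parcel features: an NDVI time series over one season
# plus a few aggregated weather variables (all values are random placeholders).
n_parcels, n_timesteps = 500, 24               # e.g. NDVI sampled every ~2 weeks
ndvi_series = rng.random((n_parcels, n_timesteps))
weather = rng.random((n_parcels, 3))           # e.g. cumulative precipitation, mean temperature, GDD
X = np.hstack([ndvi_series, weather])

# Hypothetical crop labels, e.g. 0 = wheat, 1 = maize, 2 = rapeseed
crop_labels = rng.integers(0, 3, n_parcels)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, crop_labels)

# Predicted crop type for a new, unseen parcel
new_parcel = rng.random((1, n_timesteps + 3))
print("Predicted crop class:", model.predict(new_parcel)[0])
```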
Built-Up Area
Identification of built-up areas (and of individual buildings, from VHR data) provides important input for climate change analysis (impact on the environment), security (automatic identification of potential security threats) as well as land administration, especially in developing countries, where property construction processes are not well established and officials want to be promptly informed about new built-up areas. We will explore the existing research in this area, which produces good-quality results but is extremely lengthy in terms of processing and impossible to use on an ongoing basis. By using existing know-how and integrating it with the GEM platform, we plan to establish a process that can identify new urban areas at large scale on a quarterly or even monthly basis, similarly to how we have in the past "ported" the JRC's and Google's know-how from their Global Surface Water project to create the low-cost yet similarly functional Blue Dot Water Observatory, reducing processing costs from 10 million processing hours (Google's estimate) to less than 100 hours per month.
The end result of the use-case will be a process that, on an ongoing basis, produces information about new built-up areas using first Sentinel-2 semantic segmentation and then data-fusion-based drill-down, presents these to end-users (e.g. governmental officials) for validation, integrates their feedback to improve the ML model, and continues this cycle throughout the year. The process will be designed to work with both Sentinel data (SAR and optical) for the identification of larger-scale areas (e.g. new neighbourhoods) and higher-resolution commercial data (e.g. SPOT at 1.5 m resolution and Pleiades at 0.5 m resolution) for the identification of individual buildings.
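The sketch below outlines the intended loop in simplified form: flag candidate new built-up areas from two Sentinel-2 segmentation outputs, drill down to VHR imagery for building-level detection, and collect end-user feedback that can later be used to retrain the model. Thresholds, callbacks and data are assumptions for illustration; the operational service on the GEM platform will differ.

```python
import numpy as np
from scipy import ndimage

def flag_new_built_up(prob_prev, prob_curr, threshold=0.5, min_pixels=20):
    """Flag contiguous areas that switched to 'built-up' between two Sentinel-2
    semantic-segmentation probability maps (threshold and minimum size are
    illustrative assumptions, not the operational configuration)."""
    new_built = (prob_curr > threshold) & ~(prob_prev > threshold)
    labelled, n_areas = ndimage.label(new_built)
    return [np.argwhere(labelled == i)
            for i in range(1, n_areas + 1)
            if (labelled == i).sum() >= min_pixels]

def drill_down_and_validate(candidate_areas, order_vhr_fn, validate_fn):
    """For each flagged area, order a VHR chip (e.g. SPOT or Pleiades) for
    building-level detection and queue the result for end-user validation.
    The collected feedback can later be used to retrain the segmentation model.
    Both callbacks are hypothetical placeholders, not GEM platform APIs."""
    feedback = []
    for area in candidate_areas:
        vhr_chip = order_vhr_fn(area)      # data-fusion based drill-down to VHR resolution
        feedback.append((area, validate_fn(vhr_chip)))
    return feedback

# Toy example with random arrays standing in for segmentation probability maps
rng = np.random.default_rng(42)
areas = flag_new_built_up(rng.random((64, 64)), rng.random((64, 64)))
print(f"{len(areas)} candidate new built-up areas queued for drill-down and validation")
```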