You can read the post on the La Minute blog for more information about RSS!
Feeds
10384 items in 53 feeds
- Décryptagéo, l'information géographique
- Cybergeo
- Revue Internationale de Géomatique (RIG)
- SIGMAG & SIGTV.FR - Un autre regard sur la géomatique
- Mappemonde
- Imagerie Géospatiale
- Toute l’actualité des Geoservices de l'IGN
- arcOrama, un blog sur les SIG, ceux d'ESRI en particulier
- arcOpole - Actualités du Programme
- Géoclip, le générateur d'observatoires cartographiques
- Blog GEOCONCEPT FR
- Géoblogs (GeoRezo.net)
- Geotribu
- Les cafés géographiques
- UrbaLine (le blog d'Aline sur l'urba, la géomatique, et l'habitat)
- Séries temporelles (CESBIO)
- Datafoncier, données pour les territoires (Cerema)
- Cartes et figures du monde
- SIGEA: actualités des SIG pour l'enseignement agricole
- Data and GIS tips
- Neogeo Technologies
- ReLucBlog
- L'Atelier de Cartographie
- My Geomatic
- archeomatic (le blog d'un archéologue à l’INRAP)
- Cartographies numériques
- Veille cartographie
- Makina Corpus
- Oslandia
- Camptocamp
- Carnet (neo)cartographique
- Le blog de Geomatys
- GEOMATIQUE
- Geomatick
- CartONG (actualités)
Séries temporelles (CESBIO)
-
16:36
Remote sensing of crop orientation
sur Séries temporelles (CESBIO)
(This article is also published on the CNES Lab'OT website.)
Why should we care about the orientation of agricultural crops? For at least two reasons. The orientation of crops with respect to the terrain slope is a major contributor to the risk of intense runoff and, as a consequence, to soil erosion. Moreover, knowing the orientation of crops is essential to correct the directional effects that disturb the interpretation of satellite images. The Lab'OT has therefore developed a method to determine crop orientation from a parcel map and a very high spatial resolution remote sensing image.
Context
Detecting the orientation of agricultural crops is a central issue in several projects handled at the Lab'OT. Indeed, the orientation of crops with respect to the slope has a strong impact on water runoff, and a change of orientation can have major consequences on water pathways and soil erosion. Many stakeholders, institutional as well as private, are therefore interested in this topic.
Among these stakeholders, SNCF-Réseau is interested in crop orientation to assess the risk of incidents on its network caused by runoff: some railway lines are damaged by landslides resulting from this runoff. Moreover, a change of crop orientation from one year to the next can lead to a very significant increase in risk. Detecting these changes of crop orientation at large scale near the tracks therefore becomes a valuable tool for designing a prevention plan. Other stakeholders share the same concern within the SCO FLAude project: its participants, who aim to understand the impact of various land-planning choices on extreme events such as floods, are also interested in this topic.
Furthermore, knowing the orientation of crops is essential to study directional effects in remote sensing images. Taking these effects into account is very important to correctly process time series of images acquired with different viewing geometries. This is for example the case for images acquired from two relative orbits with Copernicus/Sentinel-2. The future French-Indian mission Trishna, with its field of view of more than 1000 km, will be affected by this phenomenon to a much larger extent.
The method relies on three inputs:
- Aerial or satellite images: our algorithm has been successfully tested on BD_ORTHO images (20 cm pixels) and Pléiades images (50 cm pixels).
- Parcel outlines: our method was developed using the RPG (the French graphical parcel register).
- A digital terrain model.
A publication describing our work in detail is in preparation, so we only describe the method briefly here.
The cornerstone of the processing is the segment detection step in the image, for which we use a method inspired by von Gioi et al. 2012. A second step filters the segments obtained for each parcel, in order to discard segments that are not relevant because they are too short or not representative of the general orientation. The orientation of a parcel is derived from the analysis of the orientation of the remaining segments. To ease the later use of crop orientation, information about the terrain slope is also included in the results.
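As a rough illustration of the idea (this is not the Lab'OT code, and a probabilistic Hough transform is used here as a stand-in for the von Gioi et al. 2012 detector), the sketch below detects segments in a single-band parcel chip, discards short segments, and returns a length-weighted dominant orientation; all thresholds are arbitrary placeholders.

# Illustrative sketch only: dominant crop orientation in an image chip,
# with a Hough-based detector standing in for von Gioi et al. 2012.
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def dominant_orientation(chip, min_length=20, n_bins=180):
    """Return the dominant segment orientation of a grayscale chip, in degrees in [0, 180)."""
    edges = canny(chip, sigma=2.0)  # edge map feeding the segment detector
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=min_length, line_gap=3)
    if not segments:
        return None  # no usable orientation (bare soil, grassland, clouds...)
    # Angle (mod 180 degrees) and length of each segment; segments shorter than
    # min_length were already discarded, and the histogram is weighted by length.
    angles, lengths = [], []
    for (x0, y0), (x1, y1) in segments:
        angles.append(np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0)
        lengths.append(np.hypot(x1 - x0, y1 - y0))
    hist, bin_edges = np.histogram(angles, bins=n_bins, range=(0, 180), weights=lengths)
    k = int(np.argmax(hist))
    return 0.5 * (bin_edges[k] + bin_edges[k + 1])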
Processing entire departments is made possible by the parallel architecture of the code. It handles the possible tiling of the input images, a step that splits them into smaller patches, and the concatenation of the output results at the end of the processing. Processing a whole department typically takes about 5 hours on CNES computing resources.
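A highly simplified version of this patch-based parallelism is sketched below (the patch grid, the process_patch placeholder and the output format are assumptions, not the actual Lab'OT pipeline).

# Simplified patch-parallel pattern: split the extent into patches, process them
# in a pool of workers, then concatenate the per-patch results.
from concurrent.futures import ProcessPoolExecutor
from itertools import product
import pandas as pd

def process_patch(window):
    """Placeholder: detect segments and parcel orientations inside one patch."""
    x0, y0, size = window
    return pd.DataFrame({"x0": [x0], "y0": [y0], "orientation_deg": [float("nan")]})

def run(width, height, patch=2048, workers=8):
    windows = [(x, y, patch) for x, y in product(range(0, width, patch),
                                                 range(0, height, patch))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_patch, windows))
    return pd.concat(results, ignore_index=True)  # concatenate the patch outputs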
In blue, the departments already processed
Thirteen French departments (09, 11, 21, 31, 32, 33, 34, 35, 39, 68, 70, 71, 82) have been processed using BD_ORTHO images acquired between 2015 and 2020. This processing made it possible to determine the crop orientation of more than 320,000 parcels, with an orientation determined for nearly 22% of the parcels. This relatively low figure must be put into perspective: not all RPG parcels have an orientation, for instance those occupied by grassland!
Examples of results on different types of parcels. In blue, simple cases of vineyard parcels. In yellow, more complex cases, some of which fail. Note that it is also possible to work on crops during or after harvest, because the segment detection step can work on the tracks left by the tractor.
In practice, two major limitations are encountered. The first is the absence of orientation information in the image (uncultivated parcel, fallow land, inter-cropping period, etc.). This limitation can be compensated to some extent by processing several images of the same area, ideally acquired at different times of the year. The second major issue comes from the definition of parcels in the RPG: an RPG parcel frequently corresponds to several physical parcels with different orientations. On this subject, we are implementing a detection step for multi-oriented parcels, which significantly improves our success rate.
Detection of multi-oriented parcels
The rare cases in which our processing clearly makes a mistake are related to the presence of linear structures in the parcels that have nothing to do with the crop, such as a high-voltage power line, a track, or marks left by farm machinery during or after harvest.
Example of segment detection on a high-voltage power line above a grassland, misleading our algorithm
For the moment, the results described in this article can be made available on request, but Theia may distribute them soon.
-
16:46
A new downloader to access individual files within the zip archives distributed by THEIA
sur Séries temporelles (CESBIO)
Downloading time series from Theia can take a while, depending on the area and the time period covered by the images. It also consumes a lot of local storage, since all the archives have to be downloaded in .zip format (the GeoDataHub will probably distribute Cloud Optimized GeoTIFFs). From my experience, a lot of people download an archive, extract the files, keep the files they need (or sometimes compute what they want from the product and store only the compressed result), delete the archive, then download the next archive. This approach is far from optimal!
Theia-picker is a small Python package for downloading archives, or individual files from within a remote archive. When individual files are downloaded, only the bytes corresponding to the compressed file in the remote archive are transferred; they are then decompressed and written to disk. This is particularly interesting when only a few files are needed: there is no need to download the entire archive, since only the bytes of the requested files are downloaded. This should improve workflows that download whole product archives just to grab 3 or 4 spectral bands…
To ensure that the downloads are performed correctly, theia-picker computes checksums (MD5 for the archives, CRC32 for the extracted individual files). When a file's checksum does not match the expected value, it is downloaded again.
How is it done? Compressed zip archives include information about their contents in a data block at the end of the file called the central directory [1]. From this data block, the information about all the compressed files can be retrieved.
To access this data block, one can use HTTP range requests. These are HTTP GET requests with an additional header that specifies the range of bytes to fetch, of the form {'Range': 'bytes=<start>-<end>'}. This is enough to retrieve the file information, and also to download and decompress individual files.
At least, this is the theory… In practice, you can still try that with the Theia server: it won't work very well! I don't know exactly why, but this is the reason why every existing package for remote zip retrieval fails: the server just closes the connection before sending all the requested bytes. Theia-picker's workaround consists in always requesting an open-ended range {'Range': 'bytes=<start>-'} (which still complies with the standard [2]) and closing the connection once the desired number of bytes has been received.
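The sketch below illustrates the general technique (it is not the theia-picker API, and it does not implement the workaround just described): it fetches the tail of a remote archive to read the central directory, then downloads and inflates a single member with a second range request; the URL and member name in the usage comment are placeholders.

# Generic sketch of range-based zip access (not theia-picker itself).
import io
import struct
import zipfile
import zlib
import requests

def open_remote_listing(url, tail_bytes=1 << 20):
    """Fetch the last tail_bytes of the archive; if they contain the central
    directory and the end-of-central-directory record, zipfile can list the
    members from this tail alone (reading members through it would fail)."""
    tail = requests.get(url, headers={"Range": f"bytes=-{tail_bytes}"}).content
    return zipfile.ZipFile(io.BytesIO(tail))

def fetch_member(url, info):
    """Download and decompress the single member described by a ZipInfo."""
    # Read the 30-byte local file header to find where the compressed data starts.
    start = info.header_offset
    header = requests.get(url, headers={"Range": f"bytes={start}-{start + 29}"}).content
    sig, *_, name_len, extra_len = struct.unpack("<4s5H3I2H", header)
    assert sig == b"PK\x03\x04"
    data_start = start + 30 + name_len + extra_len
    data_end = data_start + info.compress_size - 1
    raw = requests.get(url, headers={"Range": f"bytes={data_start}-{data_end}"}).content
    if info.compress_type == zipfile.ZIP_DEFLATED:
        return zlib.decompressobj(-15).decompress(raw)  # raw deflate stream
    return raw  # ZIP_STORED

# Hypothetical usage (URL and member name are placeholders):
# zf = open_remote_listing("https://theia.example/archive.zip")
# data = fetch_member("https://theia.example/archive.zip", zf.getinfo("MASKS/CLM_R1.tif"))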
Theia-picker is open-source (Apache-2.0 licence) and anyone can open a PR on GitHub. It has not been extensively tested yet, and feedback is welcome. Also, the API is quite minimal (in particular for the search of products) and contributions are welcome!
An example copied from the GitHub README.
Rémi Cresson @ INRAE
References:
[1] [https:]
[2] [https:]
-
15:19
[VENµS] production of VM5 images restarts
sur Séries temporelles (CESBIO)
We have had to cancel the production of VENµS L1C and L2A products for one month, because the footprints of VENµS images tend to change more than anticipated. The ground segment recognizes the sites by the coordinates of their corners, and the tolerances were not large enough. This problem has been understood and corrected by the VENµS team, and they finally managed to restart the production this week. The products are available from the CNES distribution site of Theia.
VENµS most recent products from the catalogue (screen copy on the 17th of March).
The Level 1C (orthorectified TOA reflectances) and Level 2A (surface reflectances with a cloud mask) products are now generated with a short delay. We still need to fine-tune some parameters for sites with a large snow cover. The L2A production has not started yet for some sites, as it only starts when 8 L1C products are available (and L1C products are only produced when the images are cloud free).
We are very sorry for this difficult start of production, and apologize to our users who are waiting for the data. We now expect to have a much steadier production in the coming weeks.
-
12:03
Protected: The GeoDataHub, the future access centre for CNES Earth observation data
sur Séries temporelles (CESBIO)
Context
Network of data and service centres of the Gaïa Data research consortium
In 2021, CNES decided to set up a service hub for the distribution of Earth Observation (EO) satellite data. Integrated into a national and European ecosystem, within the Data Terra framework and at the service of the scientific communities and of CNES projects, the Geo Data Hub (GDH) project will offer services for the distribution, exploration, incubation and management of space-borne data, in order to promote and foster innovation and science in the « Earth System » domain.
The data concerned are all the Earth observation data to which CNES has made a major contribution, as well as the Sentinel-1 and Sentinel-2 data. From the end of 2023, the distribution of Sentinel data in France will be handled by the GDH platform, replacing the PEPS platform.
GDH will also be the Toulouse node of the Data Terra Research Infrastructure (RI), built on the Aeris, ForM@Ter, ODATIS and Theia data and service hubs. It is also part of the Gaia-Data project (ANR Equipex), proposed by Data Terra in partnership with two other RIs, Climeri (climate modelling) and PNDB (biodiversity), and gathering a consortium of 21 partners, including Ifremer, INRAE, CNRS/INSU, IGN, IRD and Météo France. Gaïa Data received funding from the Plan d'Investissement de l'Avenir.
The GDH platform will soon make it possible to:
- ingest the data produced by CNES EO missions in order to add value to them,
- integrate with the Data Terra research infrastructure to ease access to services and data for scientific users,
- interoperate with institutional service platforms (e.g. the Géo Plateforme),
- rely on private providers when necessary (cloud hosting or other services) to foster economic development around the use of the data.
To meet these challenges and to achieve a better integration into the ecosystem, the service offer has been designed as three layers of services matching the different types of use. The following diagram summarises this offer, which addresses:
- Basic services: discover and explore the data,
- Services for communities: enable the development and maturation of new services,
- Services for projects and data hubs: distribute the data and value-added products of CNES or partner production centres.
Technical components of the platform
- A web portal for communication purposes: publication of articles to showcase the CNES EO activities (EO missions, contributions to the data hubs, Lab'OT, SCO, downstream programme, PEPS Booster, etc.) and to popularise the use of satellite data,
- A WebGIS component for data discovery (catalogue browsing), visualisation, download and on-demand processing,
- A DATALABS component (such as JupyterLab) for interactive processing (maturation of processing algorithms on the data).
Interoperability with other platforms and integration into the Data Terra RI will be ensured by the use of standards (OGC, STAC). Other technical topics are being worked on within the Gaia Data project, such as federated authentication and authorisation, inter-centre data exchange and service federation. Particular attention is paid to support and operations, assistance and expertise, in order to help all users as well as possible with thematic, technological and operational questions. The team relies in particular on the feedback from PEPS and from the CNES operations teams.
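As a concrete illustration of what STAC-based interoperability allows, the sketch below queries a STAC API with pystac-client; the endpoint URL, collection name and query fields are placeholders, not the GDH API (which is not public yet).

# Hypothetical STAC search against a generic endpoint (placeholder URL and ids).
from pystac_client import Client

catalog = Client.open("https://stac.example.org/api")   # placeholder endpoint
search = catalog.search(
    collections=["sentinel-2-l2a"],            # placeholder collection id
    bbox=[1.2, 43.4, 1.7, 43.8],               # Toulouse area, WGS84
    datetime="2023-01-01/2023-03-31",
    query={"eo:cloud_cover": {"lt": 20}},      # assumes the query extension
)
for item in search.items():
    print(item.id, list(item.assets))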
The resulting platform will rely on two strategic CNES facilities which are currently being renewed, storage (DATALAKE) and computing (HPC6G). It will de-compartmentalise all the EO datasets that CNES distributes today and will distribute in the future. These infrastructures allow a significant scale-up, while also taking responsible digital usage into account (tape storage, monitoring and optimisation of the energy consumption of the HPC).
Roadmap and evolutions
First mock-ups of the GDH interfaces have been designed and submitted to users
The platform is under development and relies on the integration and reuse of generic software components developed within the Hysope II project, called « EO platform components » (POTe). The chosen development method is Agile and DevOps, together with a « UX design » approach in which users are involved to help us build the solution best aligned with their expectations and needs. CNES has defined an « MVP » (Minimum Viable Product) scope for the first development phase with the selected contractor, Capgemini. This phase is planned from September 2022 to July 2023. It will implement the takeover of the PEPS services and the distribution of the THEIA/Muscate data. The next phase will consist in implementing the additional services and ingesting the datasets eligible for GDH. The priorities will be defined according to the added value and possible external constraints. The migration of PEPS users and the official opening of the service are planned for the end of 2023.
Your contacts
Provisional schedule of the Geo Data Hub project, if everything goes as planned.
For more information, you can contact the project manager, Francois.jocteur-monrozier@cnes.fr, or the Product Owners, Benjamin.Husson@cnes.fr and Marjorie.Robert@cnes.fr, for more technical questions. This article was mainly written by François Jocteur-Monrozier (but for some obscure reason, I could not manage to attribute it to him in WordPress).
-
11:51
Pléiades image of AFAD relief center at Hatay Stadium
sur Séries temporelles (CESBIO)
In Turkey, the Disaster and Emergency Management Presidency (AFAD), affiliated with the Ministry of Interior, has completed the installation of 300,000 tents in the provinces affected by the earthquakes.
On Feb 16, Pléiades captured this installation on the parking lot of the New Hatay Stadium near Antakya. I count about 400 tents in this picture.
Read more about the life in this camp in this report by CNN Turk.
Top picture from U.S. Department of State from United States — Aerial View of the Hatay Province in Turkey, Domaine public, [https:]
-
18:52
Antakya before and after the earthquakes
sur Séries temporelles (CESBIO)
We received satellite images of Antakya, Turkey, acquired by Pléiades 10 days after the earthquake. The comparison with pre-event imagery leaves me speechless.
The post-event images were acquired by Pléiades PHR1A thanks to CNES Cellule d’Intervention et d’Expertise Scientifique et Technique nouvelle génération : CIEST²
( [https:] ). I rectified the raw images using the SRTM digital elevation model and pan-sharpened the multispectral image. All processing was done using the Orfeo ToolBox.
-
23:29
First InSAR analysis of the 2023 Turkey–Syria earthquake
sur Séries temporelles (CESBIO)
Sentinel-1A acquired an image three days after the Mw 7.8 earthquake which struck Turkey and Syria on Feb 06. The magnitude and extent of the displacement are hard to believe (+1 m of vertical displacement, over 100 km), but they follow the East Anatolian fault well and are consistent with the shakemaps generated from in situ seismic instruments.
These maps were obtained using Alaska Satellite Facility on-demand InSAR processing (ASF DAAC 2015, contains modified Copernicus Sentinel data 2015, processed by ESA).
-
13:01
The low predictability of agricultural cycles in semi-arid regions
sur Séries temporelles (CESBIO)
The low predictability of agricultural cycles in semi-arid regions [1].
In the early 2010s, chaotic attractors were captured for the cycles of cereal crops in North Morocco. This result was unexpected for several reasons. It was the first time a chaotic attractor was obtained from remote sensing data. It was also the first « weakly dissipative » dynamics directly captured from observational data. Moreover, its dynamics was produced by a canonical form with which only highly dissipative dynamics had been produced before. Was it a hapax? Or was there any generality in this result?
Obtaining a chaotic attractor is often an important result because it shows that low predictability, a common feature of environmental dynamics, is not just a matter of randomness, that is, of probability. Indeed, chaos is deterministic, which means that the state of the system at a given time entirely determines its state immediately after. Obtaining a chaotic attractor from observational data therefore gives a strong argument for determinism, and hence for a strong order hidden behind the low predictability.
The theory of chaos is particularly well suited to studying dynamics of low predictability. It enables an alternative way to do science [2]. Instead of trying to solve equations (which are not always known) as mathematicians commonly do, this theory fosters the use of a geometric space in which the evolution in time of the state of the studied system can be followed. Since this space can be reconstructed directly from observational data, it can be used to study dynamics under real-world conditions. Moreover, since this space is independent of the initial conditions (because it contains them all), it can be used to unveil the underlying equations governing the observed dynamics, or to get compact approximations of them. Other approaches have been developed, but very few of them have been able to extract chaotic attractors from environmental or experimental time series.
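As a minimal illustration of this reconstruction step (this is only a time-delay embedding, not the global modelling with the GPoM package used in the study; the delay and dimension values are arbitrary), a scalar time series can be turned into a phase-space trajectory as follows.

# Minimal time-delay embedding of a scalar series (illustration only).
import numpy as np

def delay_embedding(x, dim=3, tau=10):
    """Return the (len(x) - (dim - 1) * tau, dim) matrix of delayed coordinates
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this (dim, tau)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Hypothetical usage with an NDVI time series loaded elsewhere:
# ndvi = np.loadtxt("ndvi_series.txt")   # placeholder file
# trajectory = delay_embedding(ndvi, dim=3, tau=8)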
In 2011, a chaotic attractor was obtained for the cycles of cereal crops in North Morocco [3]. A time series of vegetation index observed by remote sensing was used for this purpose. A chaotic attractor is an oriented trajectory in this state space. It is characterized by specific properties. It occupies a bounded part of this space and it is non-periodic: it never loops back onto itself, even after numerous loops (otherwise, it would be fully predictable) [4]. Such an attractor is also characterized by a divergence property giving rise to unpredictability: two states taken close to one another in the phase space will diverge. To remain in a bounded space, its structure must present folding, and possibly tearing, properties. Finally, the geometry of the attractor should also be fractal, that is, it should be self-similar under a change of scale. All these properties were confirmed for the first chaotic attractors obtained for cereal crops from remote sensing data: a non-periodic and bounded trajectory, divergence of the flow (characterized by a positive first Lyapunov exponent), with folding, and characterized by a fractal dimension (D = 2.68 and D = 2.75).
Finding a fractal dimension was consistent for cereal crop cycles, whose yields are known to be difficult to forecast. However, the attractor obtained for crop cycles was different from most usual chaotic attractors. Indeed, most of the well-known chaotic attractors, such as the Lorenz [5] or the Rössler [6] attractors, arise from highly dissipative systems. As a consequence, although developed in three dimensions, these attractors are flat (locally two-dimensional) almost everywhere and their fractal dimension is close to two (respectively 2.05 and 2.06 for the usual parameter values). Surprisingly, the first cereal crop attractor was not flat at all (see for example Fig. 1, the cross-section of the chaotic attractor obtained), and its fractal dimension was close to three, which means that divergence and convergence speeds were of similar amplitude: the attractor belonged to weakly dissipative chaos.
Fig. 1: Poincaré section of the weakly dissipative chaotic attractor (D = 2.75) obtained for the cycles of cereal crops.
Weakly dissipative chaos is rather rare in dynamical systems, and it was the first time this type of dynamics could be extracted directly from observational data. Previous cases of weakly dissipative chaos had been obtained by Langford [7] or by Lorenz [8] in the early 1980s, but these were derived from theoretical considerations, not from observational data. This result provided a strong argument for determinism hidden behind the low predictability of agricultural cycles in the real world. But this result, obtained in 2011, was local, since it was obtained from a single time series and was therefore specific to the chosen location. The generality of this result was thus questionable.
To investigate this question, the global modelling technique was applied again to four other provinces in Morocco: two located on the coastal area (El Jadida and Safi provinces), and two located inland (Khourigba and Khenifra). The global modelling technique aims to obtain equations directly from observational time series, without strong hypotheses. Although it is usually applied to a single time series, the approach can also be applied to multiple time series, or to a single time series with gaps (see [9]). The GPoM package [10] was used for this purpose.
In the analyses just published in the Journal of Difference Equations and Applications [1], time series of the Normalized Difference Vegetation Index (NDVI) from AVHRR sensors were considered. The provinces were analysed one by one, two by two and all four together, either in association (concatenated time series) or in aggregation (averaged time series). Models could be obtained and validated in numerous cases. One example, obtained from the time series of NDVI spatially averaged over the four provinces, is provided in Figure 2.
Fig. 2: One of the chaotic attractors (in orange) newly obtained for cereal crops cycles in semi-arid regions from remote sensing data (in black). This attractor is weakly dissipative and was obtained from the time series of NDVI spatially aggregated on the four provinces of El Jadida-Safi-Khenifra-Khourigba.
All the properties of chaos were confirmed in most of the cases, that is: deterministic dynamics characterized by a non-periodic, diverging, bounded and fractal attractor. Weakly dissipative chaos was also confirmed in all these cases (2.28 ≤ D ≤ 2.80) except one (for which the dimension was a bit lower, D = 2.10).
Fig.3: Poincaré section of the weakly dissipative chaotic attractor presented in Fig. 2. This section is very thick and characteristic of weakly dissipative systems, as confirmed by the fractal dimension (here D = 2.80).
Fig. 4: First Return Map deduced from Fig. 3. This pattern reflects a structure with complex folding, a necessary condition for chaos.
Weakly dissipative chaos was thus confirmed for the cycles of cereal crops in semi-arid regions. How can such a type of dynamics be explained? Its interpretation is challenging. It is obvious that energy, water and nutrients are required to grow plants. However, the analysis is based on a single variable used to monitor the cycle of cereal crops as a whole, here the vegetation index measured by remote sensing. What is characterized here is therefore the global dynamics of the plants and of their agricultural cycle. It tells us that the plant dynamics, as observed from the point of view of the vegetation index, is close to optimal. In other words, the chosen plants, as well as the agricultural practices, are well adapted to the meteorological and soil water conditions at the scale of the provinces.
References:
[1] S. Mangiarotti & F. Le Jean, 2022. Chaotic attractors captured from remote sensing time series for the dynamics of cereal crops, Journal of Difference Equations and Applications, DOI: 10.1080/10236198.2022.2152336
[2] C. Letellier, L.F. Olsen, S. Mangiarotti, 2021. Chaos: from theory to applications for the 80th birthday of Otto E. Rössler, Chaos, 31(6), 060402. [https:]
[3] S. Mangiarotti, L. Drapeau, R. Coudret, & L. Jarlan, 2011. Modélisation par approche globale de la dynamique du blé pluvial observée par télédétection spatiale en zone semi-aride. Rencontre du Non-Linéaire, 14, 103-108. Mangiarotti.pdf (univ-lille1.fr)
[4] R. Lozi, “Giga-periodic orbits for weakly coupled tent and logistic discretized maps,” in Proceedings of International Conference on Industrial and Applied Mathematics, Modern Mathematical Models: Methods and Algorithms for Real World Systems, 4–6 December 2004, edited by A. H. Siddiqi, I. S. Duff, and O. Christensen (Anamaya, New Delhi, 2004).
[5] E.N. Lorenz, 1963. Deterministic nonperiodic flow, Journal of Atmospheric Science, 20(2), 130-141. [https:]
[6] O.E. Rössler, 1976. An equation for continuous chaos. Physics Letters A, 31, 259-264. [https:]
[7] W.F. Langford, 1984. Numerical studies of torus bifurcations, International Series of Numerical Mathematics, 70, 285–295. DOI: 10.1007/978-3-0348-6256-1_19.
[8] E.N. Lorenz, 1984. Irregularity: A fundamental property of the atmosphere, Tellus, 36(A), 98–110. [https:]
[9] S. Mangiarotti, F. Le Jean, M. Huc, C. Letellier, 2016. Global modeling of aggregated and associated chaotic dynamics, Chaos, Solitons & Fractals, 83, 82-96. https://doi.org/10.1016/j.chaos.2015.11.031
[10] S. Mangiarotti, M. Huc, F. Le Jean, M. Chassan, L. Drapeau [ctb], GPoM: Generalized Polynomial Modelling. Version 1.3, CeCILL-2 licence. [https:]
-
2:11
How to use on demand InSAR to analyse the Joshimath landslide
sur Séries temporelles (CESBIO)
International news media reported that the city of Joshimath in northern India (Uttarakhand) was « sinking » due to a slow landslide. In January, many houses had developed major cracks in their walls and 145 families were temporarily relocated. The Indian space agency published a map of the ground deformation obtained by SAR interferometry using Sentinel-1 data. Although this method may seem complex, it is actually easy to do one's own analysis based on free tools, without downloading huge amounts of satellite data. I give a few hints below, but I can provide more details in the comments.
First, the InSAR processing can be done remotely from Alaska Satellite Facility’s web portal (Vertex). Their baseline tool identifies Sentinel-1 pairs that are appropriate for InSAR processing. The computation is done on ASF servers using the GAMMA software. I looked for pairs associated with the product acquired on 2022-12-27 over Joshimath. This scene can be matched with the scene of 2023-01-08 (12 days later). The highest level product is the vertical displacement map below:
Vertical surface deformation between Dec 27, 2022 and Jan 08, 2023 assuming that there is no horizontal component to the change. Positive values indicate uplift and negative values indicate subsidence. Pixel spacing is 80 m. ASF DAAC HyP3 2023. Contains modified Copernicus Sentinel data 2023, processed by ESA.
ASF/Vertex also provides a search engine to find all the secondary scenes that match the coverage area of the reference scene (small baseline subset, SBAS). Using this tool I queried all pairs of Sentinel-1 scenes that are 12 days apart and submitted the InSAR processing of the 110 identified pairs. The « mintpy » option activates the generation of all the files needed to post-process the results with MintPy.
The MintPy software should be installed locally in a dedicated conda environment. It allows the definition of a fixed reference pixel which is used to infer the displacement values in the region of interest. Before running MintPy's main script (smallbaselineApp.py), it is recommended to clip the files obtained from ASF for each InSAR pair (unwrapped phase, correlation, DEM, view angles, water mask) to the region of interest. This can be done efficiently with gdalwarp -crop_to_cutline, without unzipping the downloaded files, thanks to the vsizip file handler.
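For instance, such a clip can be scripted with the GDAL Python bindings as sketched below (the archive name, the internal layer path and the AOI file are placeholders; the real HyP3 product names differ).

# Hypothetical clipping of one HyP3 InSAR layer straight from its zip archive.
from osgeo import gdal

gdal.UseExceptions()

src = "/vsizip/S1_insar_pair.zip/S1_insar_pair/unw_phase.tif"   # placeholder path
gdal.Warp(
    "joshimath_unw_phase.tif",              # clipped output used by MintPy
    src,
    cutlineDSName="joshimath_aoi.geojson",  # region of interest polygon
    cropToCutline=True,
)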
Output of MintPy/smallbaselineApp.py on the stack of interferograms obtained over Joshimath city from 2018-09-07 to 2023-01-08
The MintPy package provides the tsview.py tool to easily obtain displacement time series at specific locations:
Time series of ground deformation at three locations in the Joshimath city (same reference point as above)
Ideally, these results should be evaluated against in situ GPS data. The reference point could also be better chosen with a good knowledge of the study area. But the above results are in agreement with reports that the first cracks appeared in October 2021. It is also possible to note an acceleration in 2023 in the area of the city where most of the damage was observed.
#JoshimathIsSinking ! These custom maps show #Joshimath city with markers of approx areas where buildings are affected (as reported in press).
From the Tunnel, the closest area is Parsari ward (500m) and the farthest is Marwadi ward (2600m), approx. @rajbhagatt @rajatpTOI pic.twitter.com/fvW2fnTjKI
— Thiyagu (@jThiyagu), January 8, 2023
Joshimath is located in the Chamoli district, in the same valley that was flooded by a massive rock and ice avalanche in 2021 and that we studied using Pléiades optical stereo images.
Top picture: Joshimath, view from Narsingh temple, Uttarakhand, India by ArmouredCyborg, CC BY-SA 4.0, via Wikimedia Commons
-
18:26
[VENµS] Full reprocessing of VM1 data available
sur Séries temporelles (CESBIO)
The full reprocessing of VENµS VM1 data acquired between the end of 2017 and the end of 2020 has just been completed, with improvements to the Level 1C and Level 2A processing. The data can be downloaded from the Theia website: [https:] by clicking on VENµS VM1 (the data from VENµS' new on-going phase, with a revisit of one day on most sites, can also be accessed by clicking on VENµS VM5).
L1C processing
For the Level 1C, the main efforts were focused on increasing the percentage of valid scenes. Without ground control points, VENµS images would have geolocation errors of a few hundred meters. To improve the geolocation and the multi-temporal registration of the images to a fraction of the resolution (5 m), ground control points are used, obtained by matching the images to a well geolocated reference image. In the past, this was done by selecting a cloud-free VENµS image which was then carefully geolocated. But it turned out that a large number of sites show large seasonal variations. With larger field-of-view instruments, such as Sentinel-2, it is still easy to find invariant landscapes, such as cities, rocks or coasts… But with a field of view of 30 km, our method struggled to find good matching points between a winter and a summer image, all the more so when parts of the images were cloudy. This resulted in a high percentage of invalid images because the number of quality GCPs was too low.
In this new reprocessing, reference images from two seasons have been used to process most of the sites. The image matching parameters and the thresholds were also optimized to provide a better percentage of valid images. It was a success, as the percentage of valid images increased from 48.5 to 53.4 %, with a gain of 3500 images.
Some other marginal improvements were made to the radiometric calibration and to the L1C cloud detection, which uses the 2° parallax between two identical bands.
L2A processing
For the L2A processor, MAJA, the estimation of aerosols relies on two criteria:
- a multi-temporal one, that assumes the surface reflectances change slowly with time
- a multi-spectral one based on a relationship between the blue and red band.
Our validations using Aeronet showed that the coefficients of the relation between the blue and red bands were not perfectly tuned, which caused a negative bias on the surface reflectances after atmospheric correction (these reflectances were too low, and sometimes negative). A better tuning has now been implemented and used in this reprocessing.
We also benefitted from the latest version of MAJA, which makes it possible to use CAMS data to set the type of aerosols, and to process the cloud masks at a better resolution (100 m).
Here are some validation results for the aerosol optical thickness measurements :
One can note that for all the sites but the last one, we have a large improvement of the RMS error. In fact, in the first processing, only the ARM site was used to set the parameters of the multi-spectral criterion used to detect the aerosols. It was a mistake: it turned out that that site has a very reddish soil, and the coefficients that we derived there were not at all optimised for the other sites. Here we used average coefficients that improve the results in general, but degrade them on the ARM site.
Rms error of the aerosol optical thickness for each aeronet site observed in VENµS VM1, left for the first processing, and right with the new reprocessing
The improvements to the Level 1C processing were made by Arthur Dick (CNES) for the radiometry and by Amandine Rolland (Thales) for the geometry, of course with the help of the production team, while the improvements to the Level 2A processing and the validation of the results were made by Sophie Coustance (CNES). The whole reprocessing also required a lot of effort from the production and development teams at CNES (VENµS and THEIA ground segments). Many thanks in particular to Marie-France Larif (CNES) and Gwenaelle Baldit (Sopra Steria).
-
22:13
Extent of the Alps snow cover in January 2023 (before the storm)
sur Séries temporelles (CESBIO)
In early January 2023, the snow cover area in the Alps was lower than the 30-year minimum.
The snow cover area in January 2023 was also lower than the snow cover area of last year at the same period. This triggered the publication of a lot of depressing ski resort photographs. But the ongoing snowfalls in the Alps are changing the situation. This graph was obtained from our Alps Snow Monitor. We will update the graph when cloud-free conditions allow a new assessment.
Top picture: Sentinel-2 image of Riezlern, Austria, on January 6, 2023.
-
10:57
A 10 m resolution land cover map of Sahel with iota2
sur Séries temporelles (CESBIO)
iota2 is the large-scale mapping software developed at CESBIO. iota2 takes high resolution satellite image time series (SITS), usually Sentinel (1 and 2) or Landsat, and produces maps over large areas. Maps of most of the usual variables of interest in remote sensing can be produced, since iota2 can compute user-defined functions at the pixel level and perform regression and classification. The main feature of iota2 is not what is computed, but the possibility of doing it on huge volumes of data (long time series, large geographical areas). Indeed, iota2 manages the image data split into tiles, the time series, the reference data for training models, the spatial stratification if needed, etc.
In the frame of the SWOT Downstream Program, a 10 m resolution land cover map of the Sahel region in Africa has been produced with iota2 using Sentinel-2 SITS covering the whole of 2018. This amounts to 290 tiles, or about 3 million km².
iota2 map of Sahel with Sentinel-2
The map is available for download from Zenodo.
Objectives
Land cover and land use maps provide important inputs for hydrological modelling. For example, determining land-cover changes allows a better estimation of runoff [1]. Different types of vegetation and soil composition on river plains can be used to estimate river roughness in case of flood [2][3]. Regarding SWOT and its global coverage, large-scale land cover and land use maps would foster hydrological research and downstream activities.
Can iota2 be used at a continental scale while preserving high resolution maps? What public data can be used to infer global maps? What classes are available and useful for hydrological studies? What quality level can we expect? These questions are partially addressed in this exercise.
The evaluation region includes three important basins in western Africa: the Senegal, Niger and Chad basins. Such hydrographic basins extend through different countries, and in general, in situ hydrological data are out of public reach. In some other cases, basins are simply insufficiently gauged. Satellite data might then provide relevant information to better understand basin dynamics. Since these basins are affected by heavy rainy seasons and floods, a new LULC map at high resolution would help to improve runoff and flood modelling.
Data
Image data
iota2 uses supervised classification for land cover map production. For supervised classification, we need images as predictors and reference data as targets to train the classifiers.
In terms of images, we decided to use Sentinel-2 time series because of their high spatial, spectral and temporal resolution. We used the data produced by the Theia Land Data Centre over the Sahel region. These are surface reflectance image time series processed with MAJA. The area is composed of 290 MGRS tiles and we used all the available dates between January and December 2018. This amounts to about 58 TB (around 200 GB per tile).
Obtaining reference data over such a large area is a difficult issue. Field surveys are out of the question because of the cost of the operation. Large, well-funded projects, like CGLS or WorldCover, usually approach the problem via photo-interpretation, which reduces the costs but still needs a fair number of trained operators.
We finally decided to use existing, lower resolution maps, and settled on CGLS [4] as our source of reference data. Since CGLS is a 110 m resolution map, using these labels for 10 m resolution images will inevitably introduce some label noise. This is not very different from what is done for OSO for the classes where Corine Land Cover is used as reference data. Research shows that RF is rather robust to label noise [5].
Since the CGLS maps are distributed as raster data, they were vectorised so that they could be used as reference data for iota2. The vectorisation was followed by a suppression of the smallest polygons and a splitting of the larger ones so that the sampling could be efficient.
CGLS raster, vectorised
iota2 workflow and its limitations
The technical details of the standard iota2 workflow for land cover mapping are described in [6]. In a nutshell, the workflow is made of the following steps (a toy sketch of the train/predict core is given below):
- Sampling the reference data
- Building the SITS
- Extracting the image features for the sampled locations to generate the training data
- Training the classifier
- Applying the trained classifier to all the SITS
The procedure can use a geographical stratification. This approach uses a geographical partition (provided as a map, which can represent eco-climatic areas) and a different classifier is trained for each defined region. The geographical stratification serves 2 purposes. The first is to reduce the intra-class variability, which is a problem when the same thematic class has different spectro-temporal signatures in different areas. The second is to reduce the amount of data, and thus the memory requirements, needed for training a classifier.
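Schematically, the train/predict core of this workflow boils down to something like the toy sketch below (pure illustration with scikit-learn and random placeholder arrays; iota2 itself relies on Orfeo ToolBox applications and handles the sampling, tiling and stratification).

# Toy illustration of the per-pixel train/predict core (not the iota2 implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: (n_samples, n_dates * n_bands) spectro-temporal features at the sampled locations
# y: (n_samples,) land cover labels taken from the reference data
rng = np.random.default_rng(0)
X = rng.random((1000, 10 * 4))       # placeholder: 10 dates x 4 bands
y = rng.integers(0, 5, size=1000)    # placeholder: 5 land cover classes

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)

# At prediction time, every pixel of the (tiled) SITS is classified the same way:
tile_pixels = rng.random((512 * 512, 10 * 4))   # placeholder tile, flattened
labels = clf.predict(tile_pixels).reshape(512, 512)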
Disk space
For this exercise, it was the first time that iota2 had to process over 100 Sentinel-2 tiles for a time span of one year. This brought up an additional constraint: the storage capacity for all the input SITS. Indeed, for efficiency reasons, iota2 builds data stacks with all Sentinel-2 bands at 10 m resolution. This meant that we had to generate the whole map in chunks. See below for an explanation of how we proceeded.
Geographical stratification
In the case of the Sahel area, the different eco-climatic maps that we found were made of regions that were too large for the amount of disk space we had. Indeed, eco-climatic regions in the Sahel area extend in the East-West direction and a single region may intersect many MGRS tiles, as shown below.
Eco-climatic regions of the Sahel
We therefore decided to generate a pseudo eco-climatic map with constraints on the region size. We used the 19 bio-climatic WorldClim variables to perform a clustering so that each region would contain a limited number of tiles. We settled on the map below.
Pseudo eco-climatic regions of the Sahel
Each colour represents a set of climatic regions processed together. In order to avoid land cover discontinuities between the different areas, we added samples from the adjacent sub-regions for the training. In this way, adjacent classifiers have some common training samples and their decisions are similar in the boundary areas. We nearly submitted a paper to an AI journal explaining this smart strategy, but we finally decided that the Turing Award could wait.
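A rough sketch of the clustering step described above is given below (it assumes the 19 bio-climatic WorldClim rasters have already been resampled onto a common grid; file names and the number of clusters are placeholders, and the tile-count constraint is not implemented here).

# Illustrative clustering of the 19 WorldClim bio-climatic variables into
# pseudo eco-climatic regions (not the exact procedure used for the map).
import numpy as np
import rasterio
from sklearn.cluster import KMeans

bands = []
for i in range(1, 20):                                   # bio1 ... bio19
    with rasterio.open(f"wc2.1_bio_{i}.tif") as src:     # placeholder file names
        bands.append(src.read(1, masked=True).astype("float32").filled(np.nan))
stack = np.stack(bands, axis=-1)                         # (rows, cols, 19)

valid = np.all(np.isfinite(stack), axis=-1)
features = stack[valid]
features = (features - features.mean(0)) / features.std(0)   # standardise

regions = np.full(valid.shape, -1, dtype=np.int16)
regions[valid] = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)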
Results
Quantitative validation
For a quantitative validation of the map, we had to rely on the CGLS map itself. As is customary in ML, we used a hold-out set (not used for training) to compute confusion matrices. Since the reference data is a 110 m resolution raster and we produced a 10 m resolution map, we decided to produce 2 confusion matrices, one at each resolution. If we measured the quality of the map at 10 m resolution only, the discrepancies could be due to both classification errors and “super-resolution” effects. The latter correspond to the cases where the classifier predicts the correct class thanks to the 10 m resolution of Sentinel-2, but the reference data can't contain the correct class because of its coarser resolution.
To compute the 10 m resolution matrix, we just compare the 110 m label to the 10 m pixel of the map which corresponds to the centre of the reference data pixel. To compute the 110 m resolution matrix, we first degrade the 10 m resolution map to 110 m by majority voting and then compare it with the reference label. Both matrices are shown below.
Confusion matrix, iota2 at 10 m vs CGLS as reference
Confusion matrix, iota2 at 110 m vs CGLS as reference
We see that the agreement between our map and CGLS increases when we compare them at the coarser resolution, as expected. However, the general trends are similar.
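The two-resolution comparison described above can be sketched as follows (pure illustration: an 11 x 11 majority vote to go from 10 m to 110 m, then standard confusion matrices; the array names are placeholders).

# Illustration of the two confusion matrices (not the exact validation code).
import numpy as np
from sklearn.metrics import confusion_matrix

def majority_vote(labels, block=11):
    """Degrade a 10 m label map to ~110 m by majority vote over block x block cells
    (labels are assumed to be non-negative integer class codes)."""
    h = (labels.shape[0] // block) * block
    w = (labels.shape[1] // block) * block
    cells = labels[:h, :w].reshape(h // block, block, w // block, block)
    cells = cells.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), -1, cells)

# map_10m : predicted labels at 10 m; ref_110m : CGLS labels on the matching 110 m grid
# (both placeholder arrays loaded elsewhere).
# cm_10m  : reference label vs the 10 m pixel at the centre of each 110 m cell
# cm_110m : reference label vs the majority-vote degraded map
# cm_10m  = confusion_matrix(ref_110m.ravel(), map_10m[5::11, 5::11].ravel())
# cm_110m = confusion_matrix(ref_110m.ravel(), majority_vote(map_10m).ravel())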
Qualitative analysis
The resulting iota2 map and the original CGLS present several differences. In general, iota2 maps provide more detailed and granular results thanks to the higher resolution of the inputs. Regarding the classes that are relevant for hydrological studies, we observe that: permanent and non-permanent water areas are better delineated on iota2 maps; urban areas on iota2 maps seem less compact and present some confusion with bare ground classes; and crop areas also seem less compact and sparser than in CGLS, which seems realistic in some cases.
iota2 map (left), CGLS (right)
iota2 map (left), CGLS (right) on Manantali Lake
Qualitative comparison with ESA WorldCover
During the final steps of the map generation, the ESA WorldCover project published their global 10 m resolution map [7] based on Sentinel-1 and Sentinel-2 data. This map has been produced by highly qualified teams (VITO, Brockmann Consult, CS, Wageningen University, Gamma Remote Sensing and IIASA) funded by ESA. The product validation report states an overall accuracy of about 74%, which is very good for a global product. The overall approach is very similar to the one used for the CGLS product: a supervised classification of the time series using Gradient Boosting Trees, with a very good set of reference data generated by trained operators.
We thought it was interesting to compare our map to WorldCover's before dumping it into the trash bin, to see how much worse our results were. We decided to “validate” the ESA WorldCover map with the same protocol used to validate the map produced with iota2. This allows comparing both products via their confusion matrices with respect to a third one (the CGLS map). The confusion matrices obtained for WorldCover over the region where the iota2 map was produced are shown below.
Confusion matrix, ESA product (10 m) vs CGLS as reference
Confusion matrix, ESA product (110 m) vs CGLS as reference
We see that the accuracy scores are slightly worse than those of iota2. Of course, this can be due to several reasons: the WorldCover map could actually be better than CGLS. Also, the comparison may not be fair, since the iota2 classifier was trained on a hold-out set of CGLS. However, these results are coherent with an independent, expert validation of our map and WorldCover's on a small area around Lake Chad.
At a large scale, the geographical distribution of the majority classes looks similar. However, iota2 maps are less homogeneous and look more granular in transition areas. Let's take a look at each class: vegetation and tree cover classes seem to differ, shrub (orange) and forest (dark green), probably because of different training samples and/or class definition criteria. Water classes seem better delineated on iota2 maps, especially non-permanent water. Urban classes are clearly better defined on ESA WorldCover, being more homogeneous and having fewer mis-classifications as bare soil.
Comparison at large scale: iota2 map (left) vs ESA World Cover (right)
Large scale comparison in central Western Africa: iota2 map (left) vs ESA World Cover (right). Urban areas look more compact in ESA World Cover
Closer view on vegetated areas: iota2 map (left) vs ESA World Cover (right). Shrub and forest class definitions seem different
Closer view on vegetated areas: iota2 map (left) vs ESA World Cover (right). Water class delineation looks better. Shrub and forest class definitions seem different
We have found an innovative solution for land cover mapping over very large areas without deploying costly field surveys or intensive photo-interpretation campaigns. Indeed, by leveraging existing maps at lower resolution, for which reference data was used, we have produced a high spatial resolution product which seems to be on par with similar products for which reference data was specially collected.
The current study was limited by the lack of reference data for the validation step. This has 2 main consequences:
- it is impossible to give an accurate assessment of the quality of the product;
- we can’t determine whether the disagreements with the CGLS maps come from classification errors or from the increased spatial resolution of our product.
Unfortunately, the reference data collected for existing products are not publicly available in spite of the fact that some of them (CGLS or WorldCover, for instance) are funded with public money. This kind of data could have been used to assess the quality of our product.
From the hydrological perspective, the iota2 map seems to provide a better mapping of water areas, mainly around river sheds and wetlands, compared to the two other global maps available (CGLS, ESA WorldCover), which would help in defining river models (river and flood plain widths). Crop areas look similar between the different maps, and finally, urban and vegetated zones look better in ESA WorldCover.
One could wonder why we produced this map if other products were available. First of all, the WorldCover product was not available when we started this work, but more importantly, one of the goals of the exercise was to assess the ability of iota2 to produce maps at a larger scale than the country-wide annual production for OSO. Indeed, it seems that every new land-cover initiative needs the development of a new processing chain: to the best of our knowledge, the processing chains used for CGLS, WorldCover, CCI Landcover, etc. are not open source. iota2 is free/libre software and, as such, allows study, inspection, reproducibility and adaptation to other contexts. Now we have demonstrated that it can scale beyond national mapping.
The final point worth noting is that the most burdensome part of the product generation was dealing with the huge amounts of data to be ingested by the processing pipelines. Although iota2 can jointly process Sentinel-1 and Sentinel-2 image time series, we did not use SAR data, in order to reduce the volume of data to handle. Our past experience shows that SAR brings only small improvements for annual land-cover mapping [8]; however, these data can be useful for specific classes (e.g. urban, forest) and over tropical areas. The high redundancy between the time series made doubling the data volume not worth it for our exercise. One way to alleviate the problem would be to make AI-ready fused data available, i.e. generic embeddings of multi-modal data which could be used for different downstream machine learning tasks. Imagine a 5-dimensional vector at 10 m resolution every 5 days instead of 13 reflectances every 5 days, plus 2 backscatter coefficients every 6 days (times 2, for ascending and descending orbits), etc. This would imply a huge compression ratio, but it would also simplify feature extraction and therefore require less compute to train the machine learning models.
Credits
This work was carried out in the frame of the SWOT Downstream Program. Implementation and production were done by Arthur Vincent. Algorithm design was done by Jordi Inglada. Project management and supervision were done by Santiago Peña Luque.
We are particularly grateful to CNES for the HPC infrastructure (data storage, computing resources) and CNES’ HPC technical support without whom these wonderful resources wouldn’t be operational.
The map can be cited as Vincent, Arthur, Inglada, Jordi, & Peña Luque, Santiago. (2022). Sahel Land Cover OSO 2018 [Data set]. Zenodo. [https:]
References
- [1] Basu, A.S.; Gill, L.W.; Pilla, F.; Basu, B. Assessment of Variations in Runoff Due to Landcover Changes Using the SWAT Model in an Urban River in Dublin, Ireland. Sustainability 2022, 14, 534. [https:]
- [2] Wilson, M.D. and Atkinson, P.M. (2007), The use of remotely sensed land cover to derive floodplain friction coefficients for flood inundation modelling. Hydrol. Process., 21: 3576-3586. [https:]
- [3] Hydrogeomorphological parameters extraction from remotely sensed products for SWOT Discharge Algorithm, C. Emery et al., 2021, Geoglows-Hydrospace Conference 2021, [https:]
- [4] Buchhorn, M.; Smets, B.; Bertels, L.; De Roo, B.; Lesiv, M.; Tsendbazar, N.-E.; Herold, M.; Fritz, S. Copernicus Global Land Service: Land Cover 100m: collection 3: epoch 2018: Globe 2020. DOI 10.5281/zenodo.3518038
- [5] Pelletier, C., Valero, S., Inglada, J., Champion, N., Marais Sicre, C., & Dedieu, G. (2017). Effect of training class label noise on classification performances for land cover mapping with satellite image time series. Remote Sensing, 9(2), 173. [dx.doi.org]
- [6] Inglada, J., Vincent, A., Arias, M., Tardy, B., Morin, D., & Rodes, I. (2017). Operational high resolution land cover map production at the country scale using satellite image time series. Remote Sensing, 9(1), 95. [dx.doi.org]
- [7] Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, Linlin, Tsendbazar, N.E., Ramoino, F., Arino, O., 2021. ESA WorldCover 10 m 2020 v100. [https:]
- [8] Inglada, J., Vincent, A., Arias, M., & Marais-Sicre, C. (2016). Improved early crop type identification by joint use of high temporal resolution SAR and optical image time series. Remote Sensing, 8(5), 362. [dx.doi.org]
-
0:43
Iceberg B-22A on the go
sur Séries temporelles (CESBIO)
B-22A is the largest iceberg in the Amundsen Sea, Antarctica (50 times the land area of Manhattan). It broke off from Thwaites Glacier's tongue and remained grounded for 20 years. But now it is on the go, as shown by this animation I made from Sentinel-1 SAR images (about one per month since June 2015, 75 frames in total):
[https:]
This animation (.mp4 file) was reproduced in this article of New Scientist on January 11: Breakaway iceberg raises concerns over Antarctica’s ‘doomsday glacier’
Several authors have warned that this event could have important implications
when the grounded iceberg [B22-A] is removed from the Thwaites embayment (likely in the near-future), a change to less favourable landfast sea-ice conditions is likely to occur. Any decrease in landfast sea-ice persistency or extent would ultimately increase the prospect of further retreat or disintegration of the Thwaites Ice Tongue.
Miles et al. (2020) Journal of Glaciology
Removal of this iceberg and subsequent loss of landfast sea ice is not only likely to modify regional ocean circulation, but an open-water regime might also allow the seasonal inflow of solar-heated surface water that increases basal melting.
Wild et al.(2022) The Cryosphere
The MODIS images below show that B-22 iceberg broke off more than 20 years ago on March 15, 2002 (source: NASA [earthobservatory.nasa.gov] , Public Domain, Link).
B22A is the fourth largest Antarctic iceberg!
The four largest Antarctic icebergs in January 2023. Only D15A is grounded (stuck on the ocean floor). Map made from the U.S. National Ice Center inventory.
-
18:51
Lack of snow and lack of data in the Pyrénées-Orientales
sur Séries temporelles (CESBIO)
Yesterday I was at the headquarters of the Pyrénées Catalanes regional natural park to discuss what remote sensing can offer for monitoring the snowpack.
Les participants à la réunion représentaient bien les différents usages de l’eau dans ce territoire (milieux naturels, agriculture, eau municipale, hydro-électricité, domaines skiables). Tous ont exprimé le besoin de mieux connaitre les réserves en eau stockées sous forme de neige, ce que les hydrologues appellent l’équivalent en eau du manteau neigeux (snow water equivalent, SWE). Malheureusement les satellites ne donnent pas de solution immédiate à cette question. En France, les compagnies hydroélectriques font des mesures de terrain et estiment le SWE, généralement en amont de leurs ouvrages. Mais les données ne sont pas diffusées car elles sont jugées trop stratégiques dans le contexte du renouvellement des concessions hydroélectriques. Les opérateurs de la SHEM ou EDF avec qui j’ai pu discuter sont d’ailleurs les premiers à le regretter. Mais la décision de ne pas diffuser ces données dépend de leur direction qui n’a peut-être pas le même sentiment d’appartenir à un collectif d’usagers.
Pourtant la sécheresse de 2022 a montré l’importance de mettre autour de la table les différents acteurs et de partager les connaissances. Ainsi en janvier 2022, les opérateurs des réservoirs dans les Pyrénées savaient que le stock de neige était suffisant pour faire le plein avant la période d’étiage, grâce aux précipitations abondantes de décembre 2021. Les restrictions d’usages et la gestion coordonnée des barrages ont permis de maintenir le débit de cours d’eau comme la Têt dans les Pyrénées Orientales tout en préservant les usages agricoles notamment.
Cet hiver commence très différemment. Le secteur Têt Amont est classé par arrêté préfectoral du 30 décembre 2022 en niveau « alerte ». Les niveaux d’eau dans les réservoirs sont restés très bas à cause du manque de précipitations en automne. Un lâcher d’eau exceptionnel depuis le lac des Bouillouses a dû être effectué pour assurer l’approvisionnement en eau potable de villages à sec dans la vallée. De surcroit, les réserves d’eau sous forme de neige sont encore très faibles. Sans réserves d’eau (naturelles ou artificielles), il sera difficile de satisfaire tous les usages de l’eau au printemps. La situation est d’autant plus critique que la justice a demandé de relever le débit minimum de la Têt suite à une plainte de France Nature Environnement. Il va donc falloir laisser davantage d’eau dans la rivière avec moins de réserves d’eau en amont que l’an dernier si la sécheresse persiste en 2023.
Surface enneigée dans le Parc des Pyrénées catalanes vue par satellite depuis le 01 septembre 2000 (correspond à l’amont de la Têt)
En Espagne, l’agence du bassin de l’Ebre publie des bulletins très détaillés sur l’état des stocks de neige dans les bassins versants pyrénéens, et les données sont librement accessibles. Ces estimations sont réalisées à partir de mesure in situ, d’images satellitaires et de modélisation. Ces informations répondraient parfaitement à la demande des gestionnaires que j’ai rencontrés hier. Les instruments utilisés pour mesurer localement l’équivalent en eau de la neige par l’agence de l’Ebre sont les mêmes que ceux utilisés par EDF en France (nivomètres à rayonnement cosmique). Cette différence dans le partage des données est lié au fait que les aménagement hydrauliques en Espagne étaient avant tout pensés pour soutenir l’agriculture, alors que côté français les barrages sont historiquement liés à la production d’électricité.
Cela change tout car l’agriculture est pratiquée par une multitude d’acteurs qui financent un service collectif, alors que l’hydroélectricité concerne une ou deux entreprises par vallée qui gèrent leurs propres réseaux de mesure. Mais l’évolution du climat et la tension croissante sur les ressources en eau m’incite à penser qu’il est urgent de repenser le partage des données hydrométéorologiques en montagne pour que tous les acteurs de la gestion de l’eau puissent prendre des décisions judicieuses. Par ailleurs, cela favoriserait aussi la recherche académique. Pour mes recherches, j’utilise des données librement accessibles en Espagne ou aux USA, car il est difficile de mettre en place les conventions d’échanges avec les détenteurs de données en France.
La non-diffusion de données nivologiques et hydrologiques par EDF est d’autant plus difficile à justifier que les réseaux de mesure ont été installés quand l’entreprise EDF était publique. Maintenant que l’entreprise a été renationalisée, il y a peut-être une chance que les choses changent ? Pour une entreprise privée comme ENGIE, pourquoi ne pas imposer un partage de certaines données lors de l’attribution des concessions ?
Côté Météo-France, certaines données de hauteur de neige du réseau nivo (mesures faites en général par les stations de skis) sont librement disponibles, mais ces données sont difficilement exploitables car elles couvrent seulement la saison d’ouverture des stations de ski. Les données plus fiables de hauteur de neige mesurée aux « nivoses » de Météo-France, tout comme les données météorologiques classiques (précipitations, etc) sont difficiles à obtenir pour des études académiques et payantes autrement. Météo-France s’oriente vers une ouverture des données comme l’IGN récemment sous l’injonction de leur ministère de tutelle. Tant mieux. Les données sur notre environnement doivent être considérées comme un bien commun et donc être accessibles à tous. Cette philosophie s’est imposée aux observations satellitaires avec l’avènement du programme Copernicus. Maintenant il faut que les données in situ suivent la même évolution pour que nous puissions mieux comprendre et nous adapter à notre environnement en plein bouleversement.
Bulletin sur l’état des stocks d’eau (barrage et neige) dans les sous-bassins de l’Ebre le 26 décembre 2022
Photo par Herbert Ortner, CC BY 3.0, [https:]
-
20:35
Multitemp blog is now 10 years old
sur Séries temporelles (CESBIO)10 years ago, I managed to convince CNES to change the orbit of SPOT4 when it reached its end of life, placing it on a 5-day repeat orbit in order to simulate the repetitive observations of Sentinel-2. It was the SPOT4 (Take5) experiment. We managed to distribute Sentinel-2-like data (with L1C, L2A and L3A products) over 50 sites, to a lot of users who used these data sets to prepare their processing methods. Thanks to these data, at CESBIO, we were able to quickly distribute high quality L2A products with MAJA, and to generate the first ever fully automatic land cover map at 10 m resolution and at country scale with Sentinel-2, using the data acquired in 2016.
Communication was an important part of the project, and to provide information and news to SPOT4 users, we decided to start a blog. Our first post is now 10 years old !
In this first post, I wrote some « visionary » sentences:
The first Sentinel-2 satellite should be launched within the next two years, and the second satellite should follow 18 months later. Together, these satellites will provide us every fifth day with high-resolution images of all land areas… or of the clouds that cover them. Despite these clouds, users will be guaranteed access to cloud-free data at least once per month. The arrival of these data should therefore cause a revolution in the use of remote sensing data.
However, I have to confess that I completely underestimated the success of this mission we had been promoting for years. I was expecting thousands of users, and we got hundreds of thousands!
Coming back to our blog, we first intended to keep it only for the duration of the experiment, but as a matter of fact, providing news and writing posts was fun! I was soon joined by Simon Gascoin, and with the help of more than 20 co-authors, we have published 1000 posts in the last 10 years. Our blog has been a success, with more than 850 000 page views recorded. However, our audience is now slowly eroding, and we hardly publish a few articles per month. Simon and I are getting more and more busy, and perhaps we lack a little bit of inspiration; it is maybe time to greet some new editors from CESBIO!
Page views per month for the last 10 years !
-
19:00
Some news from ESA regarding the coming launches of Sentinels 1 and 2
sur Séries temporelles (CESBIO)In this article, ESA gave us an update on the timetable for the coming launches of the next Sentinel-1 and Sentinel-2 satellites. As you may have heard, there is currently a shortage of European launchers, due to the end of Soyuz launches from Kourou, the delays in Ariane 6 delivery, and the initial uncertainties about the availability of the Ukrainian engines for the Vega-C rocket.
It is therefore good news that the next Sentinel-1 and Sentinel-2 launches have been booked. Here are the anticipated launch dates:
Satellite | Booked launch date
SENTINEL-1C | Semester 1 2023
SENTINEL-1D | Semester 2 2024
SENTINEL-2C | Mid 2024
SENTINEL-3C | 2024-2025
Sentinel-1C is the most wanted, since the breakdown of Sentinel-1B. Sentinel-2C will be launched almost 9 years after Sentinel-2A. It will replace it, and Sentinel-2A will be kept in orbit for some time, in case something happens to the operational satellites. We could have dreamed of using the availability of three satellites to improve the revisit, but the cost of operating three satellites does not seem to fit within the available funding. However, if several important users, such as the Copernicus services, asked for it repeatedly, maybe it could help convince the EU…
-
22:40
Feedback on hydrological monitoring of Telangana region in India using remote sensing
sur Séries temporelles (CESBIO)Claire Pascal, PhD student at CESBIO, under the supervision of Olivier Merlin (CNRS researcher) and Sylvain Ferrant (IRD researcher) brilliantly defended her thesis on the monitoring of water resources by satellite on November 18.
Claire’s work focused on the Telangana region of India, where rainfed cotton and flooded rice farming dominate. The region has a wet season, during which 4 months of intense monsoon rains recharge aquifers and dam lakes and feed the large Godavari and Krishna rivers. It is followed by a long dry season of 8 months, during which the drying of the soils requires irrigating the crops with these water resources. Three quarters of the irrigation volumes are pumped from groundwater, and one quarter from surface reservoirs (100 large dams, over 40,000 small hillside reservoirs). Claire studied the feasibility of monitoring these resources with current and future satellite data.
Gravimetric satellites (GRACE and GRACE-FO) provide access to water mass variations at a low resolution (300 km): surface water, soil moisture and groundwater. Passive microwave satellites (such as SMOS and SMAP) allow us to reconstruct soil moisture variations at an intermediate spatial resolution (25 to 40 km). Multi-spectral optical satellites (MODIS and Sentinel-2) allow the monitoring of crops and vegetation development at medium and high spatial resolution (MRS, HRS, 1 km to 10 m). Stereoscopic satellites (Pléiades) at very high resolution (THRS, 50 cm) allow the restitution of the bathymetry of hundreds of small hillside reservoirs, whose cumulative regional capacity was unknown until now. The environmental variables estimated by these satellite missions constitute heterogeneous data sources that are used here to explore our ability to:
- Disaggregate the low resolution gravimetric signal with the variables obtained at higher resolution (see figure GRACE_method)
Location of the Telangana state (113,000 km2) and of the granitic part of the state (pink, 67,000 km2), distributed on a 0.5° resolution grid. The black triangles correspond to groundwater depth observation wells of the Groundwater Department of Telangana. The state capital, the city of Hyderabad (in gray), concentrates 12 million inhabitants. The main rivers are in blue. (Pascal et al., HESS, 2022)
- Explore the contribution of newer very high resolutions for monitoring water resources in small capacity hydrological structures.
- Assess existing methods for quantifying rice irrigation using SMOS soil moisture observations and Sentinel-2 rice maps.
Mapping of rice cultivation in Telangana state produced from Sentinel-1 and Sentinel-2 images (from Ferrant et al., 2017, 2019). Example for the dry season in 2017.
The conclusions regarding space observation are as follows:
- SMOS can estimate the volume of water in the root zone by combining data and models (this is not a result of the PhD, it was known before)
- The set of hillside reservoirs forms a cascade of ungauged retention reservoirs for monsoon runoff, maintained by local populations. Their size is too small for current altimetry data (Jason) but also for future SWOT data. The PhD evaluated their maximum capacity at about 30 mm thanks to bathymetry derived from Pléiades stereoscopy in the low water period (empty reservoirs). This maximum capacity may seem limited compared to that of the large dams in this state (over 200 mm of cumulative maximum capacity), but the contribution of these small structures, located upstream of the large rivers and dams, is crucial for the surrounding irrigated agriculture. Claire showed that regional monitoring of the water volumes of this set of small reservoirs is possible using THRS data, acquired when the basins are empty, combined with the detection of water surfaces with Sentinel-2. Pléiades does not have sufficient imaging capacity to extend this method to the whole of India, but it will be possible with the future CO3D mission, provided that the digital elevation models are obtained when the reservoirs are empty (some revisit will be necessary). The Sentinel-HR mission could also meet these new needs, with more revisit but less accuracy.
Filling of the « Water Harvesting System » of the 4 regions covered by the Pleiades stereoscopic acquisitions (Claire Pascal’s thesis, 2022).
- The GRACE mission provides monthly variations of land water stocks with a resolution (~300 km) equivalent to the dimensions of the study area. These data need to be spatially deconvolved and disaggregated in order to estimate the variations of each hydrological compartment (especially the groundwater stock) in interaction with land use and irrigation practices. The PhD focused on the evaluation of disaggregation methods already in use (15 publications worldwide) in order to propose a more realistic disaggregation validation method. The different approaches, using soil moisture from SMOS, NDVI from MODIS and rainfall from TRMM, evaluated with a piezometric dataset provided by the state of Telangana and the Franco-Indian Groundwater Research Cell in Hyderabad, generally improve the spatial representativeness of the GRACE data. However, the uncertainty on the groundwater stock derived from the disaggregated GRACE data remains relatively high. Indeed, the recharge and abstraction fluxes of the water table are only indirectly linked to the variables available from remote sensing, which explains the difficulty of obtaining a predictive model from these observables alone. An improvement in the quality of the gravity signal and of its resolution, as envisioned by future gravity missions (MAGIC), is therefore desirable.
- In a final exploratory study, Claire investigated the presence, in some highly irrigated areas, of a significant dry season soil moisture « bounce » in the SMOS product (this bounce is less present in the SMAP products). She linked the magnitude of this soil moisture signal to the extent of rice cultivation in each of the SMOS pixels (25 km). These seasonal rice areas are estimated at high resolution (10 m) using the land cover map production chain (IOTA2) deployed on the CNES HAL cluster, from Sentinel-2 surface reflectances (L2A products produced by THEIA) for the 8 rice growing seasons between 2016 and 2019, at the Telangana state scale. These relationships are preliminary results that could be used to build a regional quantitative model of the water resources mobilized for rice irrigation. An upgrade of the SMOS resolution to 10 km, as proposed by the SMOS-HR mission, should improve these models. An internship is planned in 2023 to address this issue.
Dynamics of SMOS soil moisture during the dry period, proportional to the extent of rice cultivation.
-
10:44
[VENµS] How long will the VM5 imaging phase last ? The sun will decide
sur Séries temporelles (CESBIO)The VENµS satellite was launched in August 2017, with two missions :
- observing the Earth with a frequent revisit and a high resolution, under constant view angles
- testing an electric propulsion system to change its orbit, and demonstrating the possibility to maintain the spacecraft orbit at a very low altitude.
As a result, VENµS has had different phases separated by orbital changes. Its imaging phases are VM1, at 720 km altitude, which lasted for three years, and the current VM5 phase, at 560 km altitude, which started in March 2022.
So, what will be the duration of the VM5 phase ? The sun will decide !
The main limiting factor is indeed the quantity of propellant within the satellite. VENµS has a limited amount of propellant in its tank, and at 560 km altitude the satellite is still slightly slowed down by atmospheric friction, which itself depends on solar activity. The propellant is used to maintain the speed and altitude of the satellite, and at some point we will run out of it.
The higher the solar activity, the greater the atmospheric friction. We are now near the maximum of solar activity, and our Israeli colleagues who control the VENµS platform gave us the following estimates:
- If average activity is 75%, VM5 will last at least until May 2024
- If average activity is 50%, VM5 will last at least until December 2024
These estimates are minimum values, as there is some uncertainty on the exact remaining quantity of propellant, so the lowest estimate was used. This is good news: it means that if the satellite health stays good, we still have more than one year of acquisitions ahead of us, and maybe two!
Current solar activity, from the Space Weather Prediction Center. It is currently increasing, with a maximum that should be reached in 2025. A mild activity was forecast compared to previous cycles, but the observations are above the expected activity…
And finally, we do not need to save propellant for deorbiting: at the current altitude, if the orbit is not maintained, deorbiting occurs naturally after a few years (once again depending on solar activity).
-
15:14
[THEIA] Near real time production of Sentinel-2 images has resumed
sur Séries temporelles (CESBIO)Newly produced Sentinel-2 L2A products over Toulouse (we don’t distribute the 100% cloudy images)
Due to Brexit, ECMWF moved some of its computing infrastructure from Reading (UK) to Bologna (IT), and a 5-week interruption of service occurred for the production of the Copernicus Atmosphere Monitoring Service. As our Level 2A production for Sentinel-2 uses this information to set the type of aerosols for the atmospheric correction, we had to suspend our production.
The service resumed last week, and Theia has almost caught up with the production backlog: images up to October 26th are already available! Well done to the production team, and sorry for those of you who had to wait for the data to become available.
-
14:54
Sample images from VENµS new phase are available
sur Séries temporelles (CESBIO)Here is some long-awaited news about the VENµS VM5 phase. The acquisitions started at the end of March 2022 with the following list of sites, and should go on for at least one year, and maybe more depending on the consumption of fuel (hydrazine) needed to maintain the orbit. Most of the sites are acquired with a daily revisit, and some sites with a two-day revisit, as displayed on this map (which should be updated, because there have been some changes).
As you have probably noticed, we are a bit late in delivering VENµS products, and we would like to apologize for it. The verification phase of the instrument and processing showed that a new calibration phase was needed:
- the radiometric team had to recalibrate the instrument, as it is now used with different integration times and gains
- they noticed that some gains of the elementary detectors had evolved, and that the models used to correct for the spikes that cause the appearance of bright and dark columns had to be revised
- the geometric calibration was checked, and as we have new sites, the production team had to prepare new reference images
This took more time than anticipated, and we now plan to release the L1C products in November and the L2A shortly after (we need a short validation period after the calibration is finalized). Just so you can see how the selected sites look when seen by VENµS, we have decided to publish one or two valid L1C images per site, except for the sites in Israel, handled by our Israeli partners. These images are accurately ortho-rectified but don’t have the final calibration and detector normalization, so please don’t draw definitive conclusions from this first set. Some issues in the display of the images on the distribution site have been noticed too.
You may get them directly from this address, or from the usual Theia website : [https:]] and select the “VENµS VM5 L1C TEST” collection.
Feedback is welcome! And finally, as you can see, it is not Gérard Dedieu who writes this post. Gérard retired two years ago already, but kindly accepted to go on being the French PI for the VENµS mission. He did most of the work to read the proposals and select the VM5 sites, with the help of the VENµS exploitation team. He now wishes to have more time for all his activities, and he asked me to take over as the French VENµS PI. I accepted, as most of the work has been done (thanks a lot, Gérard!), and as I have been working on VENµS from the start in the shadow of Gérard (and what a shadow!). Gérard still wishes to be kept informed about what you will find in the VENµS data sets.
-
0:44
First satellite images of Nord Stream leaks
sur Séries temporelles (CESBIO)I found the leak from Nord Stream 2 in the latest Sentinel-1 image (29 Sep 2022). It’s large enough to be visible from space.
From this radar image, I could also spot the leak in the Landsat image captured on the same day (29 Sep). And there’s another leak visible northeast of Bornholm Island, this one from Nord Stream 1. Check for yourself in the Sentinel Hub ( [https:]] ).
-
17:19
We are hiring an expert (M/F) in scientific computing to improve MAJA
sur Séries temporelles (CESBIO)We are hiring an expert (M/F) in scientific computing to improve the handling of aerosol type in the MAJA Earth observation image processing chain using machine learning methods. The aim is to explore single- and multi-variable regression approaches in order to select the most efficient one, as a trade-off between simplicity, speed, accuracy and robustness. The ultimate goal is to implement this method in the MAJA chain, which is used at CNES in the operational processing of the Sentinel-2 data distributed to the community, but also to prepare the use of the chain for the future TRISHNA space mission.
Application: HERE
Duration: 12 months
Main skills:
- Strong background in computer science and mathematics
- Good skills in Python
- Previous experience in the use of machine learning methods and libraries (e.g. PyTorch, scikit-learn) would be an asset
-
16:44
Disruptions in the availability of Theia products
sur Séries temporelles (CESBIO)You may have noticed an interruption in the availability of level 2 and 3 products on the THEIA portal. This is due to ongoing disruptions in the availability of CAMS (Copernicus Atmosphere Monitoring Service) data used by the MAJA atmospheric correction chain. These disruptions are linked to the migration of the ECMWF data centre, which began on 8 September, and could affect all downstream users for another 3 to 4 weeks. We were expecting some disruptions, but we are currently experiencing a complete shutdown of the service.
As a reminder, the use of CAMS products greatly improves the estimation of the atmospheric optical thickness (AOT) in MAJA, by providing at a 12-hour time resolution and a 0.4° spatial resolution the relative contribution of 7 aerosol species to the total AOT. These data are usually retrieved on the fly by the production centre upstream of the level 1 product processing, and are currently missing.
The CNES production team will catch up with the processing as soon as the CAMS service is back in operation.
Thank you for your patience
-
16:53
Experimental campaign in extreme summer conditions at Auradé!
sur Séries temporelles (CESBIO)Experimental campaign in extreme summer conditions at Auradé! Between a thunderstorm (40 mm of rain), then drought and a heatwave (40°C), the variations in the soil water status of the Auradé ICOS flux plot were optimal between late June and mid-July 2022 for hosting and carrying out the Pré-HiDRATE experimental campaign (integrating High resolution Data from Remote sensing And land surface models for Transpiration and Evaporation mapping). The campaign ran for nearly 4 weeks, as part of the DETECT project (Disentangling the role of land use and water management; scientific contact: Youri Rothfuss) and of the preparation of the potential ANR project HiDRATE (scientific contact: Gilles Boulet). It was designed to test and compare measurement methodologies to (1) discriminate and quantify the components of the evapotranspiration flux (ET) (Quade et al., 2019, [https:]] ), namely soil evaporation and the transpiration of the cultivated sunflower, and (2) characterise the effect of the soil moisture status on their respective contributions to ET. The campaign brought together several teams: scientists from CESBIO and a German team that settled on site with its MOSES mobile lab (pick-up truck and cabin left on site).
Several measurement set-ups were deployed during this campaign and are presented below:
Now that the campaign is over, all that remains is to process and analyse the data! To be continued… fingers crossed for the ANR HiDRATE proposal.
Youri and his team will come back in spring 2023 for a new campaign, similar in every respect but this time on a winter crop and under less « stressful » conditions.
-
10:12
Iota2 latest release – Deep Learning at the menu
sur Séries temporelles (CESBIO)
Table of Contents
- 1. New iota2 release
- 2. Classification using deep learning
- 3. Conclusions
- 4. Acknowledgement
- 5. References
1. New iota2 release
The latest version of iota2 ( [https:]] ), released on [2022-06-06 Mon], includes many new features. A complete list of changes is available here. Among them, let us cite a few that may be of interest to users:
- External features with padding: External features is an iota2 feature that allows users to include their own computations (e.g. spectral indices) in the processing chain. It now comes with a padding option: each chunk can have an overlap with its adjacent chunks, so basic spatial processing can be performed with external features without discontinuity issues (a generic illustration of the idea is sketched after this list). Check this for a toy example.
- External features with parameters: The functions provided by the user can now have their parameters set directly in the configuration file. It is no longer necessary to hard-code them in the Python file.
- Documentation: The documentation is now hosted at [https:]] . An open access labwork is also available at [https:]] for advanced users who have already done the tutorial from the documentation.
- Deep Learning workflow: iota2 is now able to perform classification with (deep) neural networks. It is possible to use one of the pre-defined network architectures provided in iota2 or to define your own architecture. The workflow is based on the PyTorch library.
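To make the padding option more concrete, here is a generic, minimal sketch of the idea (plain NumPy/SciPy, not the iota2 external features API): a 3x3 mean filter applied chunk by chunk gives the same result as on the full image, as long as each chunk is read with a one-pixel halo.
import numpy as np
from scipy.ndimage import uniform_filter

def chunked_filter(image, chunk_rows=100, halo=1):
    # Process the image by blocks of rows, reading a halo of extra rows around
    # each block so that the 3x3 filter sees the same neighbourhood as it would
    # on the full image.
    out = np.empty_like(image, dtype=float)
    n_rows = image.shape[0]
    for start in range(0, n_rows, chunk_rows):
        stop = min(start + chunk_rows, n_rows)
        lo, hi = max(0, start - halo), min(n_rows, stop + halo)
        filtered = uniform_filter(image[lo:hi].astype(float), size=3)
        out[start:stop] = filtered[start - lo:stop - lo]  # crop the halo away
    return out

img = np.random.default_rng(0).random((250, 300))
assert np.allclose(chunked_filter(img), uniform_filter(img, size=3))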
Introducing the deep learning workflow was a hard job: including all the machinery for batch training as well as various neural architectures has introduced some major internal changes in iota2. A lot of work was done to ensure that iota2 scales well with the size of the data to be classified when deep learning is used. In the following, we provide an example of classification of a large scale Sentinel-2 time series using deep learning.
2. Classification using deep learning
In this post we discuss deep learning in iota2. We describe the data set used for the experiment, the pre-processing done to prepare the training/validation files, the deep neural network used and how it is trained with iota2. Then classification results (classification accuracy as well as classification maps) are presented to illustrate the capacity of iota2 to easily perform large scale analyses, run various experiments and compare their outputs.
2.1. Material
2.1.1. Satellite image time series
For the experiments, we use all the Level-2A acquisitions available from the Theia Data Center ( [https:]] ) for one year (2018) over 4 Sentinel-2 tiles ([“31TCJ”, “31TCK”, “31TDJ”, “31TDK”]). See figure 1. The raw files amount to 777 gigabytes.
Figure 1: Sentinel 2 tiles used in the experiments (background map © OpenStreetMap contributors).
2.1.2. Preparation of the ground truth data
For the ground truth, we extracted the data from the database used to produce the OSO product ( [https:]] ). The database was constructed by merging several open source databases, such as Corine Land Cover. The whole process is described in (Inglada et al. 2017). The 23-category nomenclature is detailed here: [https:]] . An overview is given in figure 2.
Figure 2: Zoom on the ground truth over the city of Toulouse. Each colored polygon corresponds to a labelled area (background map © OpenStreetMap contributors).
2.1.2.1. Sub data sets
This step is not mandatory and is used here only for illustrative purposes.
In order to run several classifications and to assess quantitatively and qualitatively the capacity of the deep learning model, 4 sub-datasets were built using a leave-one-tile-out procedure: training samples from 3 tiles are used to train the model, and samples from the remaining tile are left out for testing. The process is repeated for each subset of 3 tiles from the set of 4 tiles (i.e. 4 times!). We will see later how iota2 makes it easy to run several classification tasks from different ground truth data.
For now, once you have a vector file containing your tiles and the (big) database, running this kind of code should do the job (at least it does for us!): it constructs 4 pairs of training/testing vector files. You can adapt it to your own configuration. An example of one sub-dataset is given in figure 3.
Figure 3: Sub data set: polygons from the brown area are used to train the model and polygons from the gray area are used to test the model. There are 4 different configurations, one for each tile left-out. (background map © OpenStreetMap contributors).
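The code block from the original post is not reproduced here. As a purely hypothetical illustration of such a leave-one-tile-out split (the file names, the tile layer and its "tile" attribute are assumptions, not taken from the post), a GeoPandas sketch could look like this:
import geopandas as gpd

tiles = gpd.read_file("../data/s2_tiles.shp")   # hypothetical layer: one polygon per tile, field "tile"
gt = gpd.read_file("../data/ground_truth.shp")  # hypothetical full reference database

for i, tile_name in enumerate(["31TCJ", "31TCK", "31TDJ", "31TDK"]):
    tile_geom = tiles.loc[tiles["tile"] == tile_name, "geometry"].unary_union
    in_tile = gt.intersects(tile_geom)
    gt[~in_tile].to_file(f"../data/gt_{i}.shp")   # training polygons (the 3 other tiles)
    gt[in_tile].to_file(f"../data/tgt_{i}.shp")   # testing polygons (the left-out tile)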
2.1.2.2. Clean the ground truth vector files
The final step in the ground truth data preparation is to clean the ground truth files: when manipulating vector files it is common to have multi-polygons, empty or duplicate geometries. Such problematic features should be handled before running iota2. Fortunately, iota2 is packed with the necessary tool (check_database.py, available from the iota2 conda environment) to prevent all these annoying things that happen when you process large vector files. The code snippet in Listing 1 shows how to run the tool on the ground truth files.
Listing 1: Shell script to clean the 4 sub-datasets.
for i in 0 1 2 3
do
  check_database.py \
    -in.vector ../data/gt_${i}.shp \
    -out.vector ../data/gt_${i}_clean.shp \
    -dataField code -epsg 2154 \
    -doCorrections True
done
2.2. Configuration of iota2
This part is mainly based on the documentation ( [https:]] ) as well as a tutorial we gave ( [https:]] ). We encourage the interested reader to carefully read these links for a deeper (!) understanding.
2.2.1. Config and resources files
As usual with iota2, the first step is to set up the configuration file. This file hosts most of the information required to run the computation (where the data are, the reference file, the output folder, etc.). The following link is a good start to understand the configuration file: [https:]] . We try to make the following understandable without the need to fully read it.
To compute the classification accuracy obtained on the area covered by the ground truth used for training, we tell iota2 to split the polygons from the ground truth file into two files, one for training and one for testing, with a ratio of 75%:
split_ground_truth : True
ratio : 0.75
This means that 75% of the available polygons for each class are used for training, while the remainder is used for testing. Note that we are not talking about pixels here. By splitting at the polygon level, we ensure that the pixels of a given polygon are used either for training or for testing. This is a way to reduce the spatial auto-correlation effect between pixels when assessing the classification accuracy.
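To illustrate why the split is done at the polygon level rather than at the pixel level, here is a small, self-contained sketch (not iota2 code) using scikit-learn's GroupShuffleSplit: grouping pixels by their polygon id guarantees that no polygon contributes pixels to both sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 481))               # fake pixel features
y = rng.integers(0, 23, size=1000)             # fake class labels
polygon_id = rng.integers(0, 100, size=1000)   # polygon each pixel belongs to

splitter = GroupShuffleSplit(n_splits=1, train_size=0.75, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=polygon_id))
# No polygon appears on both sides of the split
assert set(polygon_id[train_idx]).isdisjoint(polygon_id[test_idx])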
We now need to set up how training pixels are selected from the polygons. Iota2 relies on the OTB sampling tools ( [https:]] ). For this experiment, we asked for a maximum of 100000 pixels per class with a periodic sampler.
arg_train :
{
  sample_selection :
  {
    "sampler" : "periodic"
    "strategy" : "constant"
    "strategy.constant.nb" : 100000
  }
}
We are working on 4 different tiles, each of them having its own temporal sampling. Furthermore, we need to deal with cloud issues (Hagolle et al. 2010). Iota2 uses temporal gap-filling as discussed in (Inglada et al. 2015). In this work, we use a temporal step size of 10 days, i.e., we have 37 dates. Iota2 also computes by default three indices (NDVI, NDWI and brightness). Hence, for each pixel we have a set of 481 features (37\(\times\)13).
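As a quick sanity check of that number, the feature vector per pixel is simply the number of interpolated dates times the number of per-date features (here assumed to be the 10 Sentinel-2 spectral bands plus the 3 indices):
n_dates = 37                 # one date every 10 days over the year
n_per_date = 10 + 3          # 10 spectral bands + NDVI, NDWI, brightness
print(n_dates * n_per_date)  # -> 481 features per pixel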
For the deep neural network, we use the default implementation in iota2. However, it is possible to define your own architecture ( [https:]] ). In our case, the network is composed of four layers (see Table 1) with a relu function between each of them ( [https:]] ).
Table 1: Network architecture.
Layer | Input size | Output size
First Layer | 481 | 240
Second Layer | 240 | 69
Third Layer | 69 | 69
Last Layer | 69 | 23
The ADAM solver was used for the optimization, with a learning rate of \(10^{-5}\) and a batch size of 4096. 200 epochs were performed, and a validation sample set extracted from the training pixels was used to monitor the optimization. The best model in terms of F-score is selected. Of course, all these options are configurable with iota2. A full configuration file is given in Listing 2.
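For readers who want to picture the model, here is a minimal PyTorch sketch of the architecture of Table 1 (for illustration only; the actual model is the MLPClassifier implementation shipped with iota2, configured through the parameters shown in Listing 2):
import torch
from torch import nn

class MLP(nn.Module):
    def __init__(self, n_features=481, n_classes=23):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 240), nn.ReLU(),
            nn.Linear(240, 69), nn.ReLU(),
            nn.Linear(69, 69), nn.ReLU(),
            nn.Linear(69, n_classes),  # raw scores, the cross-entropy loss is applied on top
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # ADAM, learning rate 10^-5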
Listing 2: Example of configuration file. Paths need to be adapted to your set-up.
chain :
{
  output_path : "/datalocal1/share/fauvelm/blog_post_iota2_output/outputs_3"
  remove_output_path : True
  check_inputs : True
  list_tile : "T31TCJ T31TDJ T31TCK T31TDK"
  data_field : "code"
  s2_path : "/datalocal1/share/PARCELLE/S2/"
  ground_truth : "/home/fauvelm/BlogPostIota2/data/gt_3_clean.shp"
  spatial_resolution : 10
  color_table : "/home/fauvelm/BlogPostIota2/data/colorFile.txt"
  nomenclature_path : "/home/fauvelm/BlogPostIota2/data/nomenclature.txt"
  first_step : 'init'
  last_step : 'validation'
  proj : "EPSG:2154"
  split_ground_truth : True
  ratio : 0.75
}
arg_train :
{
  runs : 1
  random_seed : 0
  sample_selection :
  {
    "sampler" : "periodic"
    "strategy" : "constant"
    "strategy.constant.nb" : 100000
  }
  deep_learning_parameters :
  {
    dl_name : "MLPClassifier"
    epochs : 200
    model_selection_criterion : "fscore"
    num_workers : 12
    hyperparameters_solver :
    {
      "batch_size" : [4096],
      "learning_rate" : [0.00001]
    }
  }
}
arg_classification :
{
  enable_probability_map : True
}
python_data_managing :
{
  number_of_chunks : 50
}
sentinel_2 :
{
  temporal_resolution : 10
}
task_retry_limits :
{
  allowed_retry : 0
  maximum_ram : 180.0
  maximum_cpu : 40
}
The configuration file is now ready and the chain can be launched, as described in the documentation. The classification accuracy will be written to the "final" directory, as well as the final classification map and related iota2 outputs.
2.2.2. Iteration over the different sub ground truth files
However, in this post we want to go a bit further, to show how easy it is to run several simulations with iota2. As discussed in 2.1.2.1, we have generated a set of pairs of spatially disjoint ground truth vector files for training and for testing. Also, remember that iota2 starts by splitting the provided training ground truth file into two spatially disjoint files, one used to train the model and the other used to test it. In this particular configuration, we therefore have two test files:
- one extracted from the same area as the training samples,
- one extracted from a different area from the training samples.
With these files, we can do a spatial cross-validation estimation of the classification accuracy, as discussed in (Ploton et al. 2020). To perform such an analysis, we first stop the chain after the prediction of the classification map (setting the parameter last_step : 'mosaic') and we manually add another step to estimate the confusion matrix from both sets. We rely on the OTB tools: [https:]] . The last ingredient is the ability to loop over the different tile configurations, i.e., to iterate over the cross-validation folds. This is where iota2 is really powerful: we just need to change a few parameters in the configuration file to run all the different experiments. In this case, we have to change the ground truth filenames and the output directory. To keep it simple, we indexed our simulations from 0 to 3 and used the sed shell tool to modify the configuration file in the big loop:
sed -i "s/outputs_\([0-9]\)/outputs_${REGION}/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
sed -i "s/gt_\([0-9]\)_clean/gt_${REGION}_clean/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
The global script is given in Listing 3.
Listing 3: Script to perform spatial cross validation. Paths need to be adapted to different configurations. Merging validation samples from the train set is required because iota2 extracts validation samples on a tile basis (behavior subject to modification in a future release).
#!/usr/bin/bash
# Set ulimit
ulimit -u 6000
# Set conda env
source ~/.source_conda
conda activate iota2-env
# Loop over regions
for REGION in 0 1 2 3
do
  echo Processing Region ${REGION}
  # (Delete and) create the output directory
  OUTDIR=/datalocal1/share/fauvelm/blog_post_iota2_output/outputs_${REGION}/
  if [ -d "${OUTDIR}" ]; then rm -Rf ${OUTDIR}; fi
  mkdir ${OUTDIR}
  # Update the config file
  sed -i "s/outputs_\([0-9]\)/outputs_${REGION}/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
  sed -i "s/gt_\([0-9]\)_clean/gt_${REGION}_clean/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
  # Run iota2
  Iota2.py \
    -config /home/fauvelm/BlogPostIota2/configs/config_base.cfg \
    -config_ressources /home/fauvelm/BlogPostIota2/configs/ressources.cfg \
    -scheduler_type localCluster \
    -nb_parallel_tasks 2
  # Compute the confusion matrix for the test samples
  otbcli_ComputeConfusionMatrix \
    -in ${OUTDIR}final/Classif_Seed_0.tif \
    -out ${OUTDIR}confu_test.txt \
    -format confusionmatrix \
    -ref vector -ref.vector.in /home/fauvelm/BlogPostIota2/data/tgt_0.shp \
    -ref.vector.field code \
    -ram 16384
  # Merge the validation samples
  ogrmerge.py -f SQLITE -o ${OUTDIR}merged_val.sqlite \
    ${OUTDIR}dataAppVal/*_val.sqlite
  # Compute the confusion matrix for the train samples
  otbcli_ComputeConfusionMatrix \
    -in ${OUTDIR}final/Classif_Seed_0.tif \
    -out ${OUTDIR}confu_train.txt \
    -format confusionmatrix \
    -ref vector -ref.vector.in ${OUTDIR}merged_val.sqlite \
    -ref.vector.field code \
    -ram 16384
done
python compute_accuracy.py
Then we can compute classification metrics, such as the overall accuracy, the Kappa coefficient and the average F-score. For this post, we have written a short Python script to perform these operations: [https:]]
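As an illustration of what such a script has to do (this is a sketch, not the compute_accuracy.py linked above), the three metrics can be derived from a confusion matrix C, where C[i, j] counts reference samples of class i predicted as class j:
import numpy as np

def accuracy_metrics(C):
    C = C.astype(float)
    total = C.sum()
    oa = np.trace(C) / total                                  # overall accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    tp = np.diag(C)
    col, row = C.sum(axis=0), C.sum(axis=1)
    precision = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    denom = precision + recall
    fscore = np.divide(2 * precision * recall, denom, out=np.zeros_like(tp), where=denom > 0)
    return oa, kappa, fscore[row > 0].mean()                  # mean F-score over represented classes

# Example with a tiny 3-class confusion matrix
oa, kappa, mean_f = accuracy_metrics(np.array([[50, 2, 3], [4, 40, 6], [1, 2, 30]]))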
We can just run it, using nohup for instance, take a coffee and a slice of cheesecake, and wait for the results.
2.3. Results
The results provided by iota2 are discussed in this section. The idea is not to perform a full analysis, but to glance through the possibilities offered by iota2. The simulations were run on a computer with 48 Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz cores, 188 GB of RAM and an NVIDIA GV100GL [Tesla V100 PCIe 32GB] GPU.
2.4. Numerical results
First we can check the actual number of training samples used to train the model. Iota2 provides the total number of training samples used ( [https:]] ). Table 2 gives the number of training samples extracted to learn the MLP. Yes, you read that right: 1.8 million samples for only 4 tiles. During training, 80% of the samples were used to optimize the model and 20% were used to validate and monitor the model after each epoch. Four metrics were computed automatically by iota2 to monitor the optimization: the cross-entropy (the same loss that is used to optimize the network), the overall accuracy, the Kappa coefficient and the F-score. Figures 4 and 5 display the evolution of the different metrics along the epochs. The model used for the classification is the one with the highest F-score.
Table 2: Number of training samples used.
Class Name | Label | Total
Continuous urban fabric | 1 | 67899
Discontinuous urban fabric | 2 | 100000
Industrial and commercial units | 3 | 100000
Road surfaces | 4 | 47664
Rapeseed | 5 | 100000
Straw cereals | 6 | 100000
Protein crops | 7 | 100000
Soy | 8 | 100000
Sunflower | 9 | 100000
Corn | 10 | 100000
Rice | 11 | 0
Tubers / roots | 12 | 53094
Grasslands | 13 | 100000
Orchards | 14 | 100000
Vineyards | 15 | 100000
Broad-leaved forest | 16 | 100000
Coniferous forest | 17 | 100000
Natural grasslands | 18 | 100000
Woody moorlands | 19 | 100000
Natural mineral surfaces | 20 | 11675
Beaches, dunes and sand plains | 21 | 0
Glaciers and perpetual snows | 22 | 0
Water Bodies | 23 | 100000
Others | 255 | 0
Total | | 1780332
Figure 4: Loss function on the training and validation set.
Figure 5: Classification metrics computed in the validation set.
For this set-up, the overall accuracy, the Kappa coefficient and the average F1 score are 0.85, 0.83 and 0.73, respectively, which is in line with other results over the same area (Inglada et al. 2017).
Classification metrics provide a quantitative assessment of the classification map, but it is still useful to do a qualitative analysis of the maps, especially at large scale, where phenology, topography and other factors can drastically influence the reflectance signal. Of course, iota2 can output the classification maps! We chose three different sites, displayed in figures 6, 7 and 8. The full classification map is available here.
Figure 6: Classification map for an area located between two tiles.
Figure 7: Classification map for an area over the city of Toulouse.
Figure 8: Classification map for a crop land area.
2.4.1. Results for the different sub ground truth files
Figure 9 shows the F-score for the 4 models (coming from the 4 different runs) and the 2 test sets. We can easily see that, depending on the tile left out, the difference in classification accuracy (in terms of F-score) between test samples extracted from the same area and from the disjoint area can be significant. Discussing the reasons why the performance decreases, and which metrics should be used to assess map accuracy, is out of the scope of this post; it is indeed a controversial topic in remote sensing (Wadoux et al. 2021). We just want to emphasize that iota2 greatly simplifies and automates the validation process, especially at large scale. Using this spatial cross-validation with 4 folds, the mean estimated F-score is 0.59 with a standard deviation of 0.08, which is indeed much lower than the 0.73 estimated in the previous part.
Figure 9: Fscore computed on samples from the same tiles used for training (train) and from one tile left out from the training region (test).
The different classification maps are displayed in the animated figures 10, 11 and 12. The first one displays a tricky situation at the border between two tiles: we can see a strong discontinuity, whatever the model used. For the second case, over Toulouse, there is a global agreement between the 4 models, except for the class Continuous Urban Fabric (pink), which disappears for one model: the one learnt without data coming from the tile containing Toulouse (T31TCJ). The last area exhibits a global agreement, with some slight disagreements for some crops. Note that the objective is not to fuse or combine the different results, but rather to observe the differences in the classification maps when the ground truth is changed.
Figure 10: Classification maps for the 4 runs for an area located between two tiles.
Figure 11: Classification maps for the 4 runs over the city of Toulouse.
Figure 12: Classification maps for the 4 runs for a crop land area.
A finer analysis could be done, indeed, but we leave this as homework for the interested reader: all the materials for the simulation are available here, the Level-2A MAJA-processed Sentinel-2 data are downloadable from the Theia Data Center (try this out: [https:]] ), and the ground truth data can be downloaded here.
3. Conclusions
To conclude, in this post we have briefly presented the latest release of iota2. Then, we focused on the deep learning classification workflow to classify one year of Sentinel-2 time series over 4 tiles. Even if it was only four tiles, it amounts to processing around 800 GB of data and, with our data set, about \(4\times10^{7}\) pixels to be classified. We have skipped a lot of parts of the workflow that iota2 takes care of (projection, upsampling, gap-filling, streaming, multiple runs, mosaicking, to mention a few). The resulting simulation allows us to assess the classification maps qualitatively and quantitatively, in a reproducible way: with the version of iota2 and the config file, you can reproduce your results.
From a machine learning point of view, for this simulation, we processed a lot of data easily (look for publications using 2 million training pixels: we did not find many based on open source tools). Iota2 lets you concentrate on the definition of the learning task. We kept it simple here, with a moderate-size MLP, but much more can be done regarding the architecture of the neural network, the training data preparation or the post-processing. If you are interested, you can try: again, everything is open source. We will be very happy to welcome and help you: [https:]] .
Finally, with a little boilerplate code, we were able to perform spatial cross-validation smoothly.
In the near future, we plan to release a new version that will also handle regression: currently, only categorical data are supported for learning.
The post can be downloaded as a PDF: here
4. Acknowledgement
The iota2 development team is composed of Arthur Vincent, CS Group, from the beginning, recently joined by Benjamin Tardy, CS Group. Hugo Trentesaux spent 10 months (October 2021 – July 2022) in the team. Developments are coordinated by Jordi Inglada, CNES & CESBIO-lab. Promotion and training are ensured by Mathieu Fauvel, INRAe & CESBIO-lab, and Vincent Thierion, INRAe & CESBIO-lab.
Currently, the developments are funded by several projects: CNES-PARCELLE, CNES-SWOT Aval, ANR-MAESTRIA and ANR-3IA-ANITI, with the support of CESBIO-lab and the Theia Data Center. Iota2 has a steering committee, which is described here.
We thank the Theia Data Center for making the Sentinel-2 time series available and ready to use.
5. References
Hagolle, O., M. Huc, D. Villa Pascual, and G. Dedieu. 2010. “A Multi-Temporal Method for Cloud Detection, Applied to Formosat-2, Venµs, Landsat and Sentinel-2 Images.” Remote Sensing of Environment 114 (8): 1747–55. doi:10.1016/j.rse.2010.03.002.
Inglada, Jordi, Marcela Arias, Benjamin Tardy, Olivier Hagolle, Silvia Valero, David Morin, Gérard Dedieu, et al. 2015. “Assessment of an Operational System for Crop Type Map Production Using High Temporal and Spatial Resolution Satellite Optical Imagery.” Remote Sensing 7 (9): 12356–79. doi:10.3390/rs70912356.
Inglada, Jordi, Arthur Vincent, Marcela Arias, Benjamin Tardy, David Morin, and Isabel Rodes. 2017. “Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series.” Remote Sensing 9 (1). doi:10.3390/rs9010095.
Ploton, Pierre, F. Mortier, Maxime Réjou-Méchain, Nicolas Barbier, N. Picard, V. Rossi, C. Dormann, et al. 2020. “Spatial validation reveals poor predictive performance of large-scale ecological mapping models.” Nature Communications 11: 4540. doi:10.1038/s41467-020-18321-y.
Wadoux, Alexandre M.J.-C., Gerard B.M. Heuvelink, Sytze de Bruin, and Dick J. Brus. 2021. “Spatial Cross-Validation Is Not the Right Way to Evaluate Map Accuracy.” Ecological Modelling 457: 109692. doi:10.1016/j.ecolmodel.2021.109692.
Author: Iota2 dev team
Created: 2022-09-16 Fri 09:34
-
20:11
Towards 3D time series
sur Séries temporelles (CESBIO)CESBIO researchers analyzing a time series from Sentinel-HR in 2028.
In our a priori definition of the Sentinel-HR mission, we had included an option to observe the 3D topography at high resolution, with a moderate accuracy ambition (lower than that offered by the CO3D satellites, as shown in the table below), but globally and with a systematic revisit.
But our mission advisory group insisted, and this option has become one of the essential characteristics of Sentinel-HR. This post gives some examples of the potential applications of such a mission.
Image resolution: ASTER 15 m | CO3D 0.5 m | Sentinel-HR 2 m
Resolution of the elevation model: ASTER 30 m | CO3D 4 m (free at 12 m) | Sentinel-HR 10 m
Altimetric uncertainty (CE90): ASTER 10 m | CO3D 1 m | Sentinel-HR 4 m
Cloud-free periodicity: ASTER monthly to yearly | CO3D 4 years | Sentinel-HR 3 months
Operations: ASTER 1999 - 2023 | CO3D 2024 - | Sentinel-HR 2028 ?
Map of elevation changes for the glaciers of Mont Blanc, in metres, between 2003 (SPOT5 images) and 2018 (Pléiades images)
Our glaciologists have been the most enthusiastic about the inclusion of 3D in Sentinel-HR. The objective is to measure the seasonal and multi-annual evolution of glacier and polar ice sheet thickness, which are essential climate variables (ECVs). An accuracy of 4 m in 90% of cases is enough to measure the evolution of these glaciers over a few years. Using similar but less accurate data, obtained over the very long term with the ASTER instrument on the Terra satellite launched in 1999, Hugonnet et al. were able to map the global variations (mostly decreases) of the thickness of glaciers around the world between 2000 and 2019.
https://www.nature.com/articles/s41586-021-03436-z (Hugonnet et al 2021)
For the polar ice caps, the annual change is even stronger, and it is interesting to monitor the seasonal variations, as in the figure below, which shows the variation of ice thickness at the land/sea interface of a glacier in Greenland.
Colour-shaded relief maps for a subset of DEMs. Elevation contours of 150 m (black) and 100 m (white) bound the approximate transition from grounded to floating ice. The letter after the date string (DD-MM-YYYY) indicates the DEM source (A: ASP WorldView; T: TanDEM-X; S: SETSM WorldView; G: GLISTIN) (adapted from Joughin et al. 2020)
Unfortunately, the ASTER mission is planned to end in 2023, after 24 years of good service, with no follow-on mission, unless Sentinel-HR turns out to be successful. The CO3D mission will make it possible to monitor some 50 glaciers, but according to the Randolph Glacier Inventory, there are about 220,000 glaciers on Earth, with a cumulative area of 700,000 km². Monitoring so many glaciers cannot be done with the current mission.
"The [ASTER] mission is officially planned to end in September 2023". [https:]] . Sad to read that the single non-commercial stereo mission will end in two years. Was an invaluable source of DEMs for glacier studies. pic.twitter.com/V1w9hawKF6
— Etienne Berthier (@EtienneBerthie2) June 3, 2021
Much as for glaciers, the possibility of observing 3D changes with Sentinel-HR would make it possible to follow the evolution of volcanic flows. As shown in the image below, it is already possible to obtain this information with tasked satellites such as Pléiades, but it may be difficult to obtain it for all the volcanoes of the world, and we would sometimes miss the situation before the beginning of the eruption.
Evolution of the lava volume for the 2021 eruption in La Palma, measured with Pléiades stereoscopic images acquired in September 2021 and January 2022 (Credit: V. Pinel, ISTerre, and J.M.C. Belart, LMI Iceland).
Knowledge of elevation changes with a frequent and systematic revisit will make it possible to measure the volume of eroded or deposited rocks and sediments. It will also be possible to follow the movements of sand dunes or landslides, with an evaluation of the slope and of the associated risks.
Water bodies
Altimetric satellite missions such as Jason, Sentinel-6 or SWOT measure the height of water, but to obtain the volume, which is generally the sought-after quantity, it is necessary to know the bathymetric profile of the water bodies. A mission such as Sentinel-HR will be able to do so at low-water periods, and therefore a good revisit is needed to be sure to get the acquisitions at the best moment.
Sentinel-HR could even make it possible to map the depth of the snow cover in certain mountain regions that receive heavy snowfall, and thus help to better predict the water resources available in spring. The 4 m altimetric uncertainty is a bit high for this purpose, but it can be reduced by averaging over a neighbourhood.
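To give an order of magnitude (assuming roughly independent errors from pixel to pixel, which is optimistic since matching errors are spatially correlated), averaging \(N\) elevation values divides the uncertainty by about \(\sqrt{N}\):
\[ \sigma_{\bar z} \approx \frac{\sigma_z}{\sqrt{N}}, \qquad \text{e.g. } \frac{4\ \mathrm{m}}{\sqrt{25}} \approx 0.8\ \mathrm{m} \]
for a 5 × 5 pixel neighbourhood.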
Cities
With a 4 m altimetric accuracy, it will be possible to measure the height of buildings with an accuracy of about one floor. Building height is essential information for estimating habitat density. 3D also makes it possible to monitor urban sprawl and accurately detect new buildings. Knowing the height of buildings is also important for modelling the urban climate, because of their effect on air flow and the shadows they cast. The image below, obtained during the Sentinel-HR phase 0 study, shows the expected accuracy with a base-to-height ratio of 0.2 and a ground sampling distance of 2 m.
Simulated digital surface model for a Sentinel-HR acquisition with a resolution of 2 m and a B/H of 0.2. However, not all noise sources were simulated and this image is probably a bit optimistic.
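As a rough reminder of why the base-to-height ratio matters (a standard photogrammetric rule of thumb, not a figure taken from the phase-0 report), the altimetric error is approximately the planimetric image-matching error divided by B/H:
\[ \sigma_z \approx \frac{\sigma_{\mathrm{match}}}{B/H} \]
so a matching error of 0.4 pixel, i.e. 0.8 m at a 2 m GSD, divided by B/H = 0.2 gives \( \sigma_z \approx 4\ \mathrm{m} \), of the same order as the uncertainty quoted in the table above; the real error budget of course includes other terms.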
For a better modelling of the climate in cities, it is not only necessary to have a map of the vegetation, but also to know its height: a bush does not have the same cooling effect as a tree. The Sentinel-HR mission will make it possible to follow the evolution of vegetation height, which varies over time depending on development and urban-planning works.
Classification of vegetation height using stereoscopic observations from Pléiades (Rougier et al. 2016)
Forests, trees
As with urban vegetation, multi-temporal 3D information will provide insight into tree height and its evolution, which can be used in machine learning methods for estimating forest characteristics. Validation with Lidar data such as those from the GEDI Lidar onboard the International Space Station will allow for a good estimation of uncertainties and refinement of the models. The tree height information will allow better classification of different vegetation types, accurate detection of forest harvesting, and estimation of exported biomass.
Bathymetry and coastal continuity
Continuity of the coastal profile between the emerged part, measured with stereoscopy, and the submerged part, measured by inversion of the wave trains thanks to the temporal offset of the stereoscopic observations (Bergsma et al.)
It is difficult to measure the emerged and submerged coastal profile in its continuity. Bergsma et al. (2021) showed that coastal bathymetry can be inverted from the speed of wave trains. In that case it is not the stereoscopy itself that is used, but the slight temporal offset between the two observations; for the emerged part, of course, stereoscopy measures the relief directly. Applied to the VENµS satellite, these methods have made it possible to observe the coastal profile in its continuity. As the bathymetry inversion requires particular conditions (the presence of waves), a good revisit will be necessary.
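For context, the physical basis of this inversion is the linear dispersion relation of surface gravity waves (recalled here as general background, not as the exact formulation used by the authors):
\[ c^{2} = \frac{g}{k}\,\tanh(kh) \]
where \(c\) is the phase speed estimated from the temporal offset between the two images, \(k\) the wavenumber and \(h\) the water depth; in shallow water this reduces to \( c \approx \sqrt{gh} \), i.e. \( h \approx c^{2}/g \).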
Conclusions
The possibility of including stereoscopy in the Sentinel-HR mission, which we had initially considered as an option, has many applications, of which we have mentioned only the most obvious here.
In summary, for a mission dedicated to change detection for map updating, relief observation has a huge advantage: surface reflectance changes may be seasonal or meteorological in origin, while the changes visible in 3D are real changes.
Of course, periodic observation in 3D has a high cost. In the case of a Sentinel-HR mission with CO3D-derived satellites, it requires an increase from 6 satellites for the imaging mission alone to 12 satellites for a 20-day 3D revisit, or 9 satellites if the stereoscopic revisit is relaxed to 40 days. If the additional cost seems high, it should be remembered that the imaging mission would in any case require some redundancy to compensate for the possible failure of a few satellites; the additional satellites needed for 3D would therefore also contribute to securing the mission.
This article has been written thanks to the Sentinel-HR phase-0 study report, edited by Julien Michel from CESBIO, from the contributions of the mission group members. For the work concerned by this article, we would like to thank Etienne Berthier (LEGOS), Jérémie Mouginot (IGE), Renaud Binet (CNES), Raphael Almar (LEGOS), Hervé Yesou (i-Cube), Arnaud Lucas (IPGP), Anne Puissant (Live), Jean-Philippe Malet (EOST), David Scheeren (Dynafor), Simon Gascoin (Cesbio).
ReferencesRomain Hugonnet, Robert McNabb, Etienne Berthier, Brian Menounos, Christopher Nuth, Luc Girod, Daniel Farinotti, Matthias Huss, Ines Dussaillant, Fanny Brun, et al. Accelerated global glacier mass loss in the early twenty-first century. Nature, 592(7856):726–731, 2021.
Ian Joughin, David E Shean, Benjamin E Smith, and Dana Floricioiu. A decade of variability on jakobshavn isbræ: ocean temperatures pace speed through influence on mélange rigidity. The cryosphere, 14(1):211–227, 2020.
Simon Rougier, Anne Puissant, André Stumpf, and Nicolas Lachiche. Comparison of sampling strategies for object-based classification of urban vegetation from very high resolution satellite images. International Journal of Applied Earth Observation and Geoinformation, 51:60–73, 2016.
David Morin, Milena Planells, Dominique Guyon, Ludovic Villard, Stéphane Mermoz, et al. Estimation and mapping of forest structure parameters from open access satellite images: development of a generic method with a study case on coniferous plantation. Remote Sensing, 11(11):1–25, 2019.
Erwin W.J. Bergsma, Rafael Almar, Amandine Rolland, Renaud Binet, Katherine L. Brodie, and A. Spicer Bak. Coastal morphology from space: A showcase of monitoring the topography-bathymetry continuum. Remote Sensing of Environment, 261:112469, 2021.
-
18:56
Relief for our time series
sur Séries temporelles (CESBIO)CESBIO scientists analysing a series of Sentinel-HR images in 2028
In our initial definition of the Sentinel-HR mission, we had included an option to observe the relief at high resolution, with a moderate accuracy target (lower than that offered by CO3D, as shown in the table below), but with a systematic revisit and worldwide coverage.
Given the interest expressed by the members of our mission group, this option has become an essential characteristic of the Sentinel-HR mission. This article summarises the many applications of regular, global observation of the relief of the land surfaces.
Image resolution: ASTER 15 m | CO3D 0.5 m | Sentinel-HR 2 m
Resolution of the digital surface model: ASTER 30 m | CO3D 4 m (free at 12 m) | Sentinel-HR 10 m
Altimetric accuracy (CE90): ASTER 10 m | CO3D 1 m | Sentinel-HR 4 m
Periodicity: ASTER monthly to yearly | CO3D 4 years | Sentinel-HR 3 months
Launch: ASTER 1999 | CO3D 2023 | Sentinel-HR 2028
Map of elevation changes for the glaciers of the Mont Blanc massif, in metres, between 2003 (SPOT5 images) and 2018 (Pléiades images)
Our glaciologists were the most enthusiastic about the measurement of the relief and its changes by Sentinel-HR. The objective is to measure the seasonal and multi-annual evolution of the thickness of glaciers and polar ice sheets (Greenland and Antarctica), which are essential climate variables (ECVs). With an accuracy of 4 m in 90% of cases, it is possible to measure the evolution of these glaciers over a few years, and perhaps season after season. It is from similar (but less accurate) data, acquired over the very long term with the ASTER instrument on the Terra satellite launched in 1999, that Hugonnet et al. (2021) were able to map the worldwide variations (mostly decreases) in glacier elevation between 2000 and 2019.
https://www.nature.com/articles/s41586-021-03436-z (Hugonnet et al 2021)
At the poles, the dynamics are often even faster, and it is interesting to follow the seasonal evolution, as in the illustration below, which shows the evolution of surface height at the land/sea interface of a Greenland glacier.
Relief maps of the land/sea interface of a Greenland glacier. The 150 m (black) and 100 m (white) elevation contours delimit the approximate transition between floating and grounded ice. The letter after the date indicates the source of the elevation information (A: ASP WorldView; T: TanDEM-X; S: SETSM WorldView; G: GLISTIN) (Joughin et al. 2020)
Unfortunately, the ASTER mission will end in 2023, after 24 years of good and loyal service, with no successor, unless the Sentinel-HR mission is approved. CO3D will continuously observe about fifty glaciers, but according to the Randolph Glacier Inventory, there are nearly 220,000 glaciers on Earth, covering more than 700,000 km² in total. Monitoring all these glaciers therefore cannot be ensured by the current CO3D mission.
"The [ASTER] mission is officially planned to end in September 2023". [https:]] . Sad to read that the single non-commercial stereo mission will end in two years. Was an invaluable source of DEMs for glacier studies. pic.twitter.com/V1w9hawKF6
— Etienne Berthier (@EtienneBerthie2) June 3, 2021
Much as for glaciers, the possibility of observing 3D changes with Sentinel-HR would make it possible to follow the evolution of volcanic flows. As shown in the image below, this information can already be obtained with tasked satellites such as Pléiades, but it may be difficult to obtain it for all the volcanoes of the world. Sentinel-HR would moreover guarantee the availability of a topographic map acquired a few weeks before an eruption.
Evolution of the lava volume during the 2021 eruption in La Palma, measured with Pléiades stereoscopic images from September 2021 and January 2022 (Credit: V. Pinel, ISTerre, and J.M.C. Belart, LMI Iceland).
Furthermore, knowledge of the evolution of the relief with a regular revisit should make it possible to measure the volumes of eroded or deposited rocks and sediments. It will also be possible to follow the movements of sand dunes or landslides, with an evaluation of the slope and therefore of the associated risks.
Continental waters
Altimetric missions measure water heights, but to obtain the volume, which is the quantity needed to assess water availability, the bathymetric profile of the water bodies must be known. A mission such as Sentinel-HR will make it possible to measure these profiles during low-water periods. To do so, a good revisit is necessary.
Sentinel-HR might even make it possible to map the snow depth in certain mountain regions with heavy accumulations, and thus better predict the water resources available in spring. The 4 m accuracy may be insufficient, but the noise should decrease when averaging over a neighbourhood.
Cities
With an accuracy of 4 m, it will be possible to measure building heights with an accuracy of the order of one floor. Building height is an essential element in measuring habitat density. It also plays an essential role in modelling the urban climate, through its channelling of air flows and the shadows cast on the ground. The illustration below, produced during the Sentinel-HR phase 0 study, shows the accuracy that could be obtained with observations acquired with a base-to-height ratio of 0.2 and a ground sampling distance of 2 m. Comparing topographies from several dates also makes it possible to follow urban development and to precisely detect the construction of new buildings.
Simulated digital surface model for a Sentinel-HR acquisition with a resolution of 2 m and a B/H of 0.2. Not all noise sources were simulated, however, and this image is probably a bit optimistic.
For a better modelling of the urban climate, it is not enough to have a map of the vegetation; its height must also be known, as a bush does not have the same cooling effect as a tree. The Sentinel-HR mission will make it possible to follow the evolution of vegetation height, which varies over time depending on development and urban-planning works.
Classification of urban vegetation types from Pléiades stereoscopic data (Rougier et al. 2016)
As for urban vegetation, multi-temporal 3D information will give an idea of tree height and its evolution, which can be used in machine-learning methods for estimating forest characteristics. Validation with lidar data such as those from the GEDI lidar on board the International Space Station will make it possible to estimate the uncertainties well and to refine the models. This information will help to better classify the different vegetation types, to accurately detect forest clear-cuts and to estimate the exported biomass.
Continuity of the coastal profile between the emerged part, measured with stereoscopy, and the submerged part, measured by inversion of the wave trains thanks to the temporal offset of the stereoscopic observations (Bergsma et al.)
It is difficult to measure the emerged and submerged coastal profile in its continuity. Bergsma et al. (2021) showed that coastal bathymetry can be inverted from the speed of wave trains. It is therefore not the stereoscopy itself that is used, but the slight temporal offset between the two observations. For the emerged part, of course, stereoscopy measures the relief. Applied to the VENµS satellite, these methods have made it possible to observe the coastal profile in its continuity. Since the bathymetry inversion requires particular conditions (the presence of waves), revisit will be necessary.
Conclusions
The possibility of including relief acquisition in the Sentinel-HR mission, which we had initially considered as an option, ultimately offers many applications, of which we have mentioned only the most obvious here. In summary, for a mission dedicated to change detection for map updating, relief observation has a huge advantage: while surface reflectance changes may be due to seasonal or meteorological effects, the changes visible in 3D are real changes.
Of course, periodic 3D observation comes at a high cost. In the case of a Sentinel-HR mission built with satellites derived from CO3D, it requires going from 6 satellites for the imaging mission alone to 12 satellites for a 20-day 3D revisit; 9 satellites could be enough for a 40-day 3D revisit. If the extra cost seems high, it should be remembered that the mission would in any case require some redundancy to cope with the failure of a few satellites. Having additional satellites for the relief would therefore contribute to securing the mission.
This article was written thanks to the Sentinel-HR phase-0 study report, edited by Julien Michel of CESBIO from the contributions of the mission group members. For the work covered by this article, we would like to thank Etienne Berthier (LEGOS), Jérémie Mouginot (IGE), Renaud Binet (CNES), Raphael Almar (LEGOS), Hervé Yesou (i-Cube), Arnaud Lucas (IPGP), Anne Puissant (Live), Jean-Philippe Malet (EOST), David Scheeren (Dynafor) and Simon Gascoin (CESBIO).
References
Romain Hugonnet, Robert McNabb, Etienne Berthier, Brian Menounos, Christopher Nuth, Luc Girod, Daniel Farinotti, Matthias Huss, Ines Dussaillant, Fanny Brun, et al. Accelerated global glacier mass loss in the early twenty-first century. Nature, 592(7856):726–731, 2021.
Ian Joughin, David E Shean, Benjamin E Smith, and Dana Floricioiu. A decade of variability on jakobshavn isbræ: ocean temperatures pace speed through influence on mélange rigidity. The cryosphere, 14(1):211–227, 2020.
Simon Rougier, Anne Puissant, André Stumpf, and Nicolas Lachiche. Comparison of sampling strategies for object-based classification of urban vegetation from very high resolution satellite images. International Journal of Applied Earth Observation and Geoinformation, 51:60–73, 2016.
David Morin, Milena Planells, Dominique Guyon, Ludovic Villard, Stéphane Mermoz, et al. Estimation and mapping of forest structure parameters from open access satellite images: development of a generic method with a study case on coniferous plantation. Remote Sensing, 11(11):1–25, 2019.
Erwin W.J. Bergsma, Rafael Almar, Amandine Rolland, Renaud Binet, Katherine L. Brodie, and A. Spicer Bak. Coastal morphology from space: A showcase of monitoring the topography-bathymetry continuum. Remote Sensing of Environment, 261:112469, 2021.
-
18:39
VENµS satellite new phase started: high resolution images every day (almost)
sur Séries temporelles (CESBIO)VENµS, the French-Israeli satellite that goes up and down, reached its new orbit on the first of March, and after the initial tuning of the programming and telemetry, we started getting our first steady acquisitions on March 15th. This new phase is called the VM5 phase. On the selected sites, images will be taken every day (sometimes every second day), at a 4 m resolution and with 12 spectral bands in the visible and near infrared. We will have a bit of a commissioning phase to go through in the coming weeks, and we need to prepare all the geometric reference images to obtain an accurate registration. The L1C, L2A and L3A data will only be released at the end of spring (hopefully).
One of the first VENµS images of the VM5 phase, over Saint-Louis, Senegal, already processed to L1C.
The list of selected VENµS sites is shown on the map below. It may not be fully definitive, and we cannot exclude that a few sites will have to be removed in the coming days, depending on possible acquisition difficulties. This uncertainty is due to differences between the programming simulator and reality. In these first days, 20 of the selected sites did not make it into the current programming; we are trying hard to include as many of them as possible. The current list therefore only indicates a high probability for these sites to be included in the VENµS programming.
An email will be sent to each proposing team this week.
Choosing these sites kept our team busy for a few months:
- Together with Gérard Dedieu, who did 90% of the work on a volunteer basis from his retirement in the Pyrenees, we read all the proposals (almost 90!) and ranked them with priorities from 1 to 5. We quickly figured out that priorities 1 and 2 would be enough to fill the VENµS programming.
- The technical team in France and Israel (Sophie Pelou (CNES), Thibaut Faijan (CS-Group), Thomas Cruz (Sopra-Steria)) then had to program all the sites, accounting for all the constraints:
- the satellite's moderate agility,
- the memory occupation,
- the constraint of having an empty memory on the first orbit,
- the impossibility of acquiring while downloading data to the ground station,
- the need to limit the number of downloads, as they are not free.
- Finally, adapt the processors, prepare the reference images for ortho-rectification and check the product quality. This is the work of Jean-Louis Raynaud (CNES), Amandine Rolland, Lucas Tuillier, Laetitia Fenouil (Thales) and Sophie Coustances (CNES).
- Process the data (Marie France Larif and Bernard Specht, CNES) with Theia's production team (Gwénaëlle Baldit and Victor Vidal, Sopra Steria).
The second part was the hardest, and we lost count of the iterations needed with our simulator, each checked by our colleagues in Israel. As we wanted to include as many sites as possible, it took an almost infinite time to obtain an optimised result, which explains why we were not able to deliver the list of sites earlier.
The result is as follows. I know some of our friends who suggested sites will be disappointed, but this is the best we could program. Due to the constraints, some of the sites could only be programmed every second day instead of every day. I'll soon have a map with two different colours depending on the revisit.
Sites displayed in blue have a one-day revisit, and red sites have a two-day revisit. Click on the image to open the interactive map and zoom in.
-
18:41
Watching snow melt at 10 minute intervals from satellite
sur Séries temporelles (CESBIO)This is something new to me…
The Geostationary Operational Environmental Satellites (GOES) capture images every 10 minutes at 1 km resolution over America… Such imagery makes it possible to watch the quick melt of a thin snow cover over the US Great Plains during a single day.
Sub-daily monitoring of the snow cover at such spatial resolution was already possible by combining Aqua and Terra observations (overpass times of 10:30 am and 1:30 pm). Here is the same example zoomed in near Omaha, Nebraska (snow cover in blue, clouds in white):
-
14:18
Lowest snow cover area in the Alps since 2001
sur Séries temporelles (CESBIO)On March 2nd, the snow cover area in the Alps reached its lowest value since 2001. Only 43% of the Alpine range was covered by snow (about 82,000 square kilometres), whereas the average on the same day over the period 2001-2021 is 63%. The deficit in the number of snow-covered days (map below, computed from 1 November) is particularly evident in the Italian Alps.
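As a rough sanity check (simple arithmetic on the figures above, assuming both refer to the same monitored domain): \( 82\,000\ \mathrm{km^2} / 0.43 \approx 190\,000\ \mathrm{km^2} \) for the whole mapped Alpine domain, so the 2001-2021 average of 63% corresponds to roughly \( 120\,000\ \mathrm{km^2} \) of snow-covered area.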
The current conditions contrast with the beginning of the season when early snowfalls covered almost 90% of the entire alpine range. But a « warm December » and a « dry January » have changed the situation. The situation is similar in the Pyrenees.
These data are generated from NASA Terra/MODIS observations. You can follow the evolution of the snow cover area in near real time through our Alps Snow Monitor.
Top picture: MODIS image of the Alps on 03 March 2022.
-
21:43
The Pyrenees did Dry January
sur Séries temporelles (CESBIO)The snow cover area has never been so low on a 6 February since 2001. It was 6,100 square kilometres, whereas the average on this date is 13,900 square kilometres. This situation results from a long dry and sunny period that began on 10 January with the establishment of anticyclonic conditions.
Snow cover area expressed as a fraction of the domain mapped below
Snow cover duration from 1 November 2021 to 6 February 2022 (number of days)
The snow cover area gives only a partial view of the state of the snowpack. Significant accumulations remain at higher elevations and on north-facing slopes protected from solar radiation, dating from the heavy precipitation at the beginning of winter.
Follow the evolution of the snow cover area on the Pyrenees Snow Monitor.
Information published on the France 3 Occitanie website: [https:]]
-
16:35
TropiSCO scores against deforestation
sur Séries temporelles (CESBIO)The TropiSCO project aims at providing maps of tree cover loss in dense tropical forests using Sentinel-1 satellite images, starting in 2018 and in near real time. The maps will soon be publicly available via a webGIS platform and updated weekly at 10m resolution. Forest loss as small as 0.1 hectare will be detected (corresponding to ten Sentinel-1 pixels). Compared to other existing systems, TropiSCO brings two main improvements: its fine spatial resolution and, above all, its short forest loss detection time, whatever the weather conditions, which is essential in the tropics to allow rapid interventions on the ground.
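For reference, the pixel arithmetic behind that detection threshold is simply: at 10 m resolution each Sentinel-1 pixel covers \( 10\ \mathrm{m} \times 10\ \mathrm{m} = 100\ \mathrm{m^2} \), so ten pixels amount to \( 1\,000\ \mathrm{m^2} = 0.1\ \mathrm{ha} \).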
The TropiSCO project, labelled by the Space Climate Observatory in 2021, is led by the GlobEO company in close collaboration with CNES and CESBIO. The project is divided into two phases, A and B. Phase A, which will end in April 2022, has three objectives:
- The collection of user requirements,
- An analysis of the system architecture and of the costs associated with each technical solution studied,
- A demonstration of the concept in seven countries with the creation of a dedicated webGIS by the Someware company. The demonstration was carried out in Guyana, Suriname, French Guiana, Gabon, Vietnam, Laos and Cambodia.
Production will be extended to all dense tropical forests in the frame of phase B.
User needs have been collected via a questionnaire filled in by twenty-five institutions and are being analysed in order to produce and distribute the most relevant maps. In parallel, the architecture of the production system is being studied at CNES, in order to tailor a technical solution adapted to the ambition of this project.
The products generated in the frame of the TropiSCO project consist mainly of maps of forest loss dates at a high spatial and temporal resolution, but also of synthetic maps highlighting areas of significant activity, as well as monthly and annual statistics by territory (provinces, countries, etc.).
Figure 1. Synthetic maps of forest loss from January 2018 to December 2021, with weekly temporal resolution and ten-metre pixel size, obtained using Sentinel-1 satellite images. The figure was made with the help of Simon Gascoin and Maylis Duffau.
Examples of synthetic maps are shown in Figure 1. The red shading indicates the area of forest loss within each 460 km² hexagon. Examples include gold mining in Suriname on the border with French Guiana, as well as tree plantation harvests in central Vietnam and the conversion of natural forests to tree plantations in northern Laos. The contrast between northern Laos and Vietnam is striking, illustrating the fact that forest exploitation and management are highly dependent on national strategy. More than 70,000 Sentinel-1 images were processed with CNES computing resources to produce maps of Vietnam, Laos and Cambodia, covering 1,230,000 km². For these three countries, the errors of omission and commission were estimated at 10% and 0.9% respectively, according to an adapted validation protocol (Mermoz et al., 2021).
Figure 2 shows an example of a detection map for Suriname from 2018 to 2021. The colour gradations from yellow to red show the progressive temporal evolution of logging roads. Selective logging (yellow to red dots) is visible between the roads.
Figure 2: Logging area in Suriname. The first forest losses are often associated with the creation of logging roads, followed by selective logging. Background image: Google Earth.
This work was presented on 11 October 2021 at the Theia workshop on the uses of remote sensing for forestry, and on 20 January 2022 at the third Quarterly Meeting of the SCO France. Before the end of phase A, the TropiSCO team is working on the complete automation of the processing chain and on the production of forest loss maps for Gabon. The webGIS will be open and accessible to all by April 2022.
References :Mermoz et al. (2021). Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data. Remote Sensing, 13(23), 4877. [https:]]
-
12:02
TropiSCO shakes up deforestation
sur Séries temporelles (CESBIO)The aim of the TropiSCO project is to provide maps tracking tree cover loss in dense tropical forests from Sentinel-1 satellite images, from 2018 onwards and continuously. The maps will soon be publicly accessible via a webGIS platform and updated every week, at 10 m resolution. Forest clearings as small as 0.1 hectare will be detectable (corresponding to ten Sentinel-1 pixels). Compared with other existing systems, TropiSCO therefore brings two improvements: its fine spatial resolution and, above all, its short detection delay for tree cover loss, whatever the weather conditions, which is essential in the tropics to allow rapid interventions on the ground.
The TropiSCO project, labelled by the Space Climate Observatory in 2021, is led by the GlobEO company in close collaboration with CNES and CESBIO. The project runs in two phases, A and B. Phase A, which will end in April 2022, has three objectives:
- collecting user requirements,
- an analysis of the system architecture and of the costs associated with each technical solution studied,
- a demonstration of the concept over seven countries with the creation of a dedicated webGIS by the Someware company. The demonstration covers Guyana, Suriname, French Guiana, Gabon, Vietnam, Laos and Cambodia.
The modest but main objective of phase B will be to progressively extend the method to all dense tropical forests!
To date, user requirements have been collected via a questionnaire sent to twenty-five institutions and are being analysed. They provide us with precious information to produce the most relevant cartographic products possible. In parallel, the architecture of the production system is being studied at CNES, in order to size a technical solution suited to the ambition of this project.
The products generated by the TropiSCO project consist mainly of maps of tree cover loss dates at high spatial and temporal resolution, but also of synthetic maps highlighting areas of significant activity, as well as monthly and annual statistics by territory (provinces, countries, etc.).
Figure 1. Synthetic maps of forest clearing activity from January 2018 to December 2021, with weekly temporal resolution and a ten-metre pixel size, obtained with Sentinel-1 satellite images. The figure was made with the help of Simon Gascoin and Maylis Duffau.
Examples of synthetic products are shown in Figure 1. The red shading indicates the area of cleared forest within each 460 km² hexagon. One can identify, for example, the gold mining areas in Suriname at the border with French Guiana, as well as tree plantation harvests in central Vietnam and the conversion of natural forests into tree plantations in northern Laos. One can also observe the contrast between northern Laos and Vietnam, which illustrates that forest exploitation and management strongly depend on national strategy. More than 70,000 Sentinel-1 images were processed with CNES computing resources to produce the maps over Vietnam, Laos and Cambodia, covering 1,230,000 km². For these three countries, the omission and commission errors were estimated at 10% and 0.9% respectively, following an adapted validation protocol (Mermoz et al., 2021).
Confusion matrix for the evaluation of the products over South-East Asia (Mermoz et al. 2021)
Figure 2 shows an example of a detection map over Suriname from 2018 to 2021. The colour gradations from yellow to red show the progressive temporal evolution of logging roads. Selective logging (yellow to red dots) is visible between the roads.
Figure 2. Logging area in Suriname. The first clearings are often associated with the creation of logging roads, followed by selective logging. Background image: Google Earth.
This work was presented on 11 October 2021 at the Theia workshop on the uses of remote sensing for forests, and on 20 January 2022 at the third Quarterly Meeting of the SCO France. Before the end of phase A, the TropiSCO team will keep working on the complete automation of the processing chain and on the production of forest clearing maps for Gabon. The webGIS will be open and accessible to all in April 2022.
References :Mermoz et al. (2021). Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data. Remote Sensing, 13(23), 4877. [https:]]
-
18:21
Snow and floods in the Pyrenees between November 2021 and January 2022
sur Séries temporelles (CESBIO)The snowpack built up quickly in the Pyrenees following several heavy snowfalls between 23 November and 10 December. After this thundering start to the season, the snow cover area declined rapidly to reach a rather low value at the beginning of 2022. This rapid melt is the consequence of the exceptionally high temperatures at the end of December, as the end of 2021 was the mildest France has ever known.
This graph comes from my Pyrenees Snow Monitor, which uses Terra/MODIS satellite data to compute the percentage of the Pyrenees area that is snow covered.
Despite the melt episode at the end of December, the snow cover duration is rather above average to date (16 January), compared with the mean of the last 20 years.
Snow cover picked up again at the beginning of January with the precipitation brought by the Aquitaine warm front. While snowfall was heavy at high elevations, this front was accompanied by a strong thaw, so that the abundant precipitation fell as rain at mid elevations, including over areas that were still snow covered, leading to the remarkable floods of the Garonne, the Ariège and the Adour.
Thus the Adour at Tarbes experienced a first 30-year flood in December 2021, then a 50-year flood in January 2022! A "30-year" flood is a flood that occurs on average every 30 years… It is therefore remarkable that a "50-year" flood occurred the following month! Moreover, the Adour had its previous 30-year flood… in December 2019!
The Adour at Tarbes, like the Ariège at Foix, is a river under snowmelt influence, so that the mean monthly discharge peaks in spring.
Monthly (natural) flows of the Adour at Tarbes (source: Banque Hydro, DREAL Aquitaine), computed over 54 years.
Under the influence of climate change, this regime is evolving towards a rainfall-driven regime, with earlier floods (downward trend in the date of the annual flood in the graph below).
Date of the annual flood of the Ariège at Foix (here defined as the maximum discharge over 10 consecutive days)
Even though events with such a long return period cannot easily be attributed to climate change, these recent Pyrenean floods are entirely consistent with its expected effect: more intense extreme precipitation, with a higher liquid fraction over the mountains, and a high-mountain snowpack that helps reinforce the flood wave through melt peaks in the middle of winter.
Photo: Pascal Fanise, Etang de Lers, 12 December 2021
-
14:03
Sentinel-1B is currently out of work!
sur Séries temporelles (CESBIO)New Update : 1.5 month after, ESA has not yet been able to resume operations with Sentinel-1. The work is now concentrating on understanding the cause of the breakdown, to determine if some modifications are necessary before launching Sentinel1C, not before the end of 2022.
Update : 1 week after, attemps to resume exploitation of Sentinel-1b were not succesful, but ESA doesn’t give up
: scihub.copernicus.eu/news/News00985
An anomaly occurred on Sentinel-1B on the 23rd of December 2021. ESA tried to resume operations, but « the initial anomaly was a consequence of a potential serious problem related to a unit of the power system of the Sentinel-1B satellite. The operations performed over the last days did not allow to reactivate so far a power supply function required for the radar operations. » This information was released by ESA yesterday.
Glitches of this kind happen on satellites, which are designed with redundancies in case a piece of equipment fails. Let's hope the technical teams at ESA will be able to restart the acquisitions soon.
In case they don't, Sentinel-1C is almost ready for launch: the current plan was « S1C Launch Period: Between 1 December 2022 and 31 May 2023 ».
-
18:23
CO3D: the Very High Resolution mission dedicated to 3D
sur Séries temporelles (CESBIO)Do you know the CNES/AIRBUS CO3D mission? As our Sentinel-HR mission could be made of satellites derived from those of the CO3D mission, it is a good opportunity to advertise this nice mission here. I took all the information from a presentation and a paper [1] written by Laurent Lebègue and the CO3D team.
A tasked multispectral VHR mission
The CO3D mission (3D Optical Constellation) is an approved mission, currently in phase C/D, due to be launched by mid-2023, next year! It is made of two pairs of low-cost satellites. The 300 kg CO3D satellites can be used as tasked VHR satellites, taking images at 50 cm resolution in 4 bands (blue, green, red, NIR) over a 7 km field of view. At this resolution, one usually only gets a panchromatic band, or at best a pan-sharpened image, but CO3D will provide 4 bands at 50 cm! These products will be commercialized by Airbus Defense and Space.
A 3D mission
But the main feature of CO3D is its ability to make stereoscopic observations from the same orbit with a base-to-height ratio of 0.2 to 0.3, almost simultaneously, thanks to its two pairs of satellites, as shown in the figure to the right. The main mission objective for CNES is to produce a global Digital Surface Model (DSM) by 2026, with 90% of the land surface covered (there are regions with persistent cloudiness). During the first eighteen months, efforts will concentrate on two priority regions where the DSM will be produced: one in France, where we will be able to perform an extensive validation, and a bigger one (27 million km²) covering North and Tropical Africa and the Middle East.
Meanwhile, stereo acquisitions over other parts of the world will also be performed. After the first 18 months, DSM production will be extended to all land surfaces. Some revisit capacity to monitor DSM changes over time will be possible, but it will not be global. A capacity of 600,000 km² per year has already been negotiated, and it will be possible to increase it at a low cost. Requests will go through the Dinamis portal.
The objective is to reach a 1 m relative altimetric accuracy (CE90) at 1 m ground sampling distance (GSD). Each DSM will be produced at 1, 4, 12, 15 and 30 m GSD. At 15 m and 30 m GSD, the DSMs will be delivered as open data.
The project aims to deliver DSMs fully automatically, which requires a huge effort in algorithm and software development, including CARS [2], an open-source stereo pipeline for producing DSMs. A large downstream effort to develop 3D methods and applications is being set up through the CNES S3D2 programme and the AI4GEO project. I'll try to convince my colleagues to describe the products in more detail here.
A DSM of a part of Nice, France, obtained with a Pléiades data set acquired to prepare the CO3D ground segment and processors.
Written by O. Hagolle, with the much appreciated help of Jean-Marc Delvit, Laurent Lebègue, Delphine Leroux and Simon Gascoin.
References
[1] Lebègue, L., Cazala-Hourcade, E., Languille, F., Artigues, S., and Melet, O.: CO3D, A WORLDWIDE ONE ONE-METER ACCURACY DEM FOR 2025, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B1-2020, 299–304, [https:] 2020
[2] Michel, J., Sarrazin, E., Youssefi, D., Cournet, M., Buffe, F., Delvit, J. M., Emilien, A., Bosman, J., Melet, O., and L’Helguen, C.: A NEW SATELLITE IMAGERY STEREO PIPELINE DESIGNED FOR SCALABILITY, ROBUSTNESS AND PERFORMANCE, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 171–178, [https:] 2020.
-
14:40
Multitemp blog statistics in 2021
sur Séries temporelles (CESBIO)As every year, I have gathered the statistics of the multitemp blog in 2021. It seems that we have more or less stopped the decrease of its audience, with a higher number of visits than last year, even if the number of pages viewed is still decreasing a bit. The two main authors of the blog have been quite busy this year :
- Simon Gascoin successfully defended his « Habilitation à Diriger des Recherches », the diploma you need to be allowed to supervise PhD in France. Instead of preparing superb images for the blog, he made a magnificent dissertation on how remote sensing can be useful to monitor Snow Water Equivalent.
- I have become the big boss of CESBIO’s observation systems team and I have the honor of spending all my time in meetings with other big bosses, while the researchers at CESBIO are doing real work.
Let me recall that the columns of this blog are open, and that all CESBIO members are welcome to submit short articles, and we even accept articles from outside CESBIO, as long as they concern time series of remote sensing images.
I have another meeting and lack the time to detail the most successful pages this year. Apart from the didactic pages in the « how it works » menu, five pages were particularly successful, with more than 500 views:
- The call by Simon Gascoin to collect samples of orange snow after large quantities of sand were spread over Europe. Strangely, the account of the results did not attract as much interest…
- The teamwork, led by Simon again, to understand what happened in the Himalaya to cause the sudden flood and the numerous victims of the Uttarakhand disaster
- The discussions on certain limits of AI when it is not well used, by Jordi Inglada
- The discussions on whether commercial satellites could replace a mission such as Sentinel-HR by J.Michel, E.Berthier and myself
- Our successful campaign to ask ESA for a delay before changing the format of Sentinel-2
Despite its technical content, the multitemp blog, relayed by social media, is becoming a great tool in the politics of remote sensing !
-
13:12
Happy 2022 !
sur Séries temporelles (CESBIO)=>
As shown by the image above, warmth can be found in winter, and beauty often comes out of the fog. And in matters of fog, this beginning of 2022 seems to be a great vintage. I took this picture at sunset on the 31st of December, in Hendaye, right in the south-west corner of France, in the Basque Country.
(I’m sorry, I don’t know to whom I should attribute this illustration)
I was very optimistic at the beginning of last year, but it turns out that this start of 2022 is looking gloomy once again. We sincerely hope that Covid19 did not affect you too much in 2021, and that this new wave will leave us, and you, with no more troubles than the obligation to work from home, once again, for a couple of weeks only.
2021 was not entirely bad at CESBIO. Even if we did not have all the celebrations we had planned at the beginning of the year, we were lucky enough to celebrate the first anniversary of CESBIO's 25th birthday, as well as Gérard Dedieu's retirement. In 2021 also, our director, Mehrez Zribi, turned out to be persuasive enough to open a few permanent positions in the lab:
- Karin Dassas, who used to care for the on-board software of astronomy instruments, joined the radar and GNSS team at CESBIO
- Alexandre Bouvet has been hired at CESBIO, as a research engineer to use radar remote sensing data to monitor deforestation and plant growth, and to increase scientific collaboration with South-East Asia.
- Ludovic Arnaud joined us as a research engineer to estimate carbon fluxes in the soil using optical data.
- Rémi Fieuzal joined us to work on the integration of different types of remote sensing data in vegetation growth models
Now that he is in French Guiana, François looks very serious, but it was fun working with him!
François Cabot also left us in 2021, to take part in the great adventure of European launchers in French Guiana. François Cabot was an essential part of the SMOS team, in which he was in charge of the Level 1 products and of the instrument calibration. If the SMOS products allowed the publication of so many articles and useful products, this is largely due to the excellent work of François. He was also the PI of a nanosat mission, ULID, which aimed to test the technologies necessary for a future enhanced SMOS mission on which the antennae would be placed on different satellites; unfortunately, this mission was cancelled, which is not unrelated to his departure. As a side effect, the quality of the famous CESBIO BBQs will also decrease a lot with François far from here, and someone else will have to learn how to sing « où sont passés les tuyaux ». François will be in charge of preventing rocket debris from falling on your head in case of a failure during a launch. With him there, you can leave your house without checking ESA's launch schedule.
And for 2022?
We were expecting the Biomass satellite to be launched in 2022, but this has been postponed to the end of 2023, which will give us a little more time to prepare the processing methods. In 2022, VENµS will start acquiring data every day over more than 50 sites. The VM5 period will start in a few weeks, and we are making the final optimisations of the programming to include as many sites as possible. In a few days, we will also have an important change in the Sentinel-2 data format. In 2022, CNES will renew the 5-year plan for Theia, and we are expecting a large increase in the budget dedicated to this data centre.
Of course, at CESBIO, we will continue our work on the definition of new missions (SMOS-HR, Sentinel-HR, Sentinel2-NG), the preparation for the arrival of the approved missions (Biomass, Trishna), and the processing of the missions in operation (Sentinel, SMOS, VENµS…); and to help us understand all these observations, in situ data acquisition and modelling activities will continue.
In our immediate environment, CNES has just reorganised. The CNES researchers at CESBIO are now attached to a huge technical directorate and, within it, to a new sub-directorate focused on data processing, the « data campus », led by Simon Baillarin, a former CESBIO student. Until now, we were attached to a sub-directorate focused on instrumentation and image quality, led by Philippe Kubik, whom we thank for his unfailing support. This reorganisation is thus an important change of orientation for CESBIO, which relies on two pillars: observation instruments and data processing. We will now be closer to the latter. It will be a little easier for us to get our processing methods into operation at CNES, but we will have to be more convincing to defend the missions we propose and to get help from our instrument-specialist colleagues.
-
11:51
Happy 2022!
sur Séries temporelles (CESBIO)=>
As shown in the image above, warmth can be found in winter, and beauty can be born from the fog. And in matters of fog, this winter of 2022 seems to be a great vintage. I took this photo at sunset on 31 December, in Hendaye, in the south-west of France, in the Basque Country.
(I'm sorry, I don't know to whom I should attribute this nice illustration found in a tweet)
I was very optimistic at the beginning of last year, but it turns out that this start of 2022 is looking gloomy once again. We sincerely hope that Covid19 did not affect you too much in 2021, and that this new wave will not cause us more trouble than the obligation to work from home, once again, for just a few weeks.
A quick look back at 2021
2021 was not entirely bad at CESBIO. Even if we could not hold all the celebrations we had planned at the beginning of the year, we were lucky enough to celebrate the first anniversary of CESBIO's 25th birthday, as well as Gérard Dedieu's retirement.
Under the impetus of the Space Climate Observatory, we were able to move forward many pre-operational applications based on space observations, without abandoning our more fundamental research and the definition of new missions.
In the field, our two sites in south-west France (Lamasquère and Auradé) were granted the ICOS label, and we commissioned our ROSAS reflectance measurement station. We also took part in the huge Liaise campaign in Spain, and began preparing the CAL/VAL measurements needed for TRISHNA.
In 2021 as well, our director, Mehrez Zribi, was persuasive enough with our supervisory bodies to open a few permanent positions in the laboratory:
- Karin Dassas, who until now was in charge, at the Institut d'Astrophysique d'Orsay, of the on-board software of the MAJIS instrument on the Juice mission soon to leave for Jupiter, joined the radar and GNSS team at CESBIO.
- Alexandre Bouvet, from the GlobEO team, was hired at CESBIO as a research engineer to use radar remote sensing data to monitor deforestation and plant growth, and to increase scientific collaboration with South-East Asia.
- Ludovic Arnaud, who was already working at CESBIO on a fixed-term contract, joined us as a research engineer to estimate soil carbon fluxes from optical data.
- Rémi Fieuzal also joined us at CESBIO. He works on the integration of radar and optical data into vegetation growth models.
Now that he is in French Guiana, François looks very serious, but it was often great fun working with him!
François Cabot left CESBIO at the end of 2021 to take part in the great adventure of European launchers in French Guiana. François was an essential member of the SMOS team, in which he was in charge of the Level 1 processing and of the instrument calibration. If the SMOS products have enabled the publication of so many articles and useful products, it is also thanks to François's excellent work. He was also the PI of a nanosat mission, ULID, which aimed to test the technologies needed for a future enhanced SMOS mission whose antennas would be distributed over several satellites. Unfortunately, this mission was stopped, and that is not unrelated to François's departure. But François was above all the master of the famous CESBIO BBQs, and we wonder who will now sing « où sont passés les tuyaux ? » late in the evening. In Kourou, François will be in charge of preventing rocket debris from falling on your head in case of a launch failure. With him there, you can leave your house without checking ESA's launch schedule.
And for 2022?
We were counting on the launch of Biomass in 2022, but it has been postponed to the end of 2023, which will give us a little more time to prepare the processing methods. In 2022, VENµS will start acquiring data every day over about fifty sites. The VM5 period will begin in a few weeks, and we are making the final optimisations of the programming to include as many sites as possible. In a few days, we will also see an important change in the Sentinel-2 data format.
Of course, we will continue our work on the definition of new missions (SMOS-HR, Sentinel-HR, Sentinel2-NG), the preparation for the arrival of the approved missions (Biomass, Trishna), and the processing of the missions in operation (the Sentinels, SMOS, VENµS…); and for all of this, in situ data acquisition and modelling activities will continue.
In our immediate environment, CNES has just reorganised. The CNES researchers at CESBIO are now attached to a huge technical directorate and, within it, to a new sub-directorate focused on data processing, the « campus de la donnée », led by Simon Baillarin, a CESBIO alumnus. Until now, we were attached to a sub-directorate focused on instrumentation and image quality, led by Philippe Kubik, whom we thank for his unfailing support. This reorganisation is therefore an important change of orientation for CESBIO, which relies on two pillars: observation instruments on the one hand and data processing on the other. We are now a little closer to the latter, but we will have to maintain the link with the former. It will be a little easier for us to get our processing methods into operation at CNES, but we will have to be more convincing to defend the missions we propose and to obtain help from our instrument-specialist colleagues.
A sticker found during the holidays on my nephew's desk
-
15:48
The eastern ice shelf of Thwaites Glacier is cracking
sur Séries temporelles (CESBIO)Thwaites is a huge glacier in Antarctica that already contributes 4% of sea level rise on its own. The floating ice shelf in front of the eastern part of the glacier acts as a dam that slows the flow of ice from the continent to the ocean. If this ice shelf breaks up, Thwaites Glacier will speed up and its contribution to sea level rise could reach 25%. Using Sentinel-1 images, Pettit et al. 2021 noticed that this ice shelf is fracturing. The crevasses can be seen appearing in this animation:
Doomsday approaching!
Breakup of the @ThwaitesGlacier eastern ice shelf.
Time lapse of 221 #Sentinel1 radar images @CopernicusEU pic.twitter.com/HkWEBucj18— Simon Gascoin (@sgascoin) December 29, 2021
To find out more:
Source
Pettit, Erin C., Christian Wild, Karen Alley, Atsuhiro Muto, Martin Truffer, Suzanne Louise Bevan, Jeremy N. Bassis, Anna Crawford, Ted A. Scambos, and Doug Benn. 2021. « Collapse of Thwaites Eastern Ice Shelf by intersecting fractures. » AGU. [https:]]
-
16:47
Viedma glacier terminus since 1985
sur Séries temporelles (CESBIO)Viedma Glacier is an iconic glacier of the Southern Patagonia icefield. Landsat 5, Landsat 8 and Sentinel-2 image series show that its terminus has retreated by 3 km since 1985.
Viedma glacier terminus evolution
Looking closer one can notice a new proglacial lake forming in a recently deglaciated area.
Since I won 80 sq km of Airbus Pléiades data, I ordered 3 sq km of a Pléiades image captured on Feb 05, 2021 to estimate the area of the neo-lake: it should be between 0.3 and 0.4 sq km.
Pléiades 2021-02-05 © Airbus DS
But the images also show the disappearance of a formerly ice-dammed lake in the south.
This place is very interesting to study as many periglacial processes are exacerbated by climate change. Yet, Viedma glacier front will never look the same without its curved stretch of ice…
So sad!! Here is a nice picture of Viedma during my very first fieldwork as a Ph.D. at #IANIGLA (February 2008) with Ricardo Villaba, Brian Luckman, Pierre Pitte, among others pic.twitter.com/vOvw8KyOMM
— Lucas Ruiz (@ARGlaciares) June 18, 2021
You can look at the images here. And make your own gif from 35 years of satellite imagery in the Sentinel Hub EO Browser.
-
14:14
First validation results with the Lamasquère ROSAS instrument
sur Séries temporelles (CESBIO)Following its installation in March 2021, our new ROSAS device at Lamasquère has produced a first series of acquisitions over a complete maize crop cycle. The ROSAS system is based on the use of a multi-spectral photometer to carry out angular and spectral measurements of both incident and reflected radiation from the surface. The entire measurement sequence, which lasts 140 minutes, is then pre-processed to produce a bi-directional reflectance (BRDF) of the surface of the measurement site. It is then possible to calculate the spectral reflectances of a given instrument for observation conditions corresponding to the illumination and viewing angles of a remote sensing product, and thus to validate the Sentinel-2 surface reflectances calculated by an atmospheric effects correction processor such as Maja.
The system was installed during an intermediate crop cycle. A forage maize was then sown on the plot in mid-April, and harvested for silage in early October. The photometer therefore followed the entire growth cycle of the maize, from germination to maximum growth (~ 2.60m around the mast).
Status of the maize crop during the 2021 cycle: May 10th (top left), June 16th (top right), August 18th (bottom left), October 6th (bottom right)
In spite of a stronger cloud cover than at the ROSAS site in La Crau (south of Arles), and even more so than at Gobabeb (Namibian site), the photometer performed enough complete cycles to follow the evolution of the surface state.
Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 4
Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 8
The evolution of the Sentinel-2 surface reflectances produced by MAJA fits pretty well with the ROSAS acquisitions. Although the sample size is small (9 observations after the acquisition quality filter), it is nevertheless possible to calculate the root mean square error (RMSE) per Sentinel-2 band as well as for the estimation of the optical thickness of the atmosphere (AOT). The results are presented in the table below.
Band (wavelength)    RMSE (S2A and S2B)
B2 (490 nm)          0.009
B3 (560 nm)          0.009
B4 (665 nm)          0.007
B5 (705 nm)          0.012
B6 (740 nm)          0.024
B7 (783 nm)          0.026
B8A (865 nm)         0.030
B8 (842 nm)          0.033
B11 (1610 nm)        0.028
AOT (-)              0.068
While the RMSEs are very satisfactory for bands B2 to B4 (an RMSE of 1% is targeted for the estimation of surface reflectances), there is quite some loss in the quality of the estimates from B6 to B11. The differences are particularly significant in the near infrared for reflectances greater than 0.2. As atmospheric correction errors are usually greater in the blue than in the near infrared, these errors could be due to a stronger spatial heterogeneity in the NIR than in the visible, to a BRDF that is not well reproduced by our model, to adjacency effects, or maybe to the effects of irrigation, which can change the ground reflectance during the day (it takes one full day to obtain the full BRDF from ROSAS). This still needs further investigation. Winter wheat, sown at the end of October, will give us the opportunity to continue the exercise, this time on a lower and denser (and, we hope, more homogeneous) crop cover, which will provide us with additional data to refine our analysis.
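For readers who would like to reproduce this kind of comparison, here is a minimal sketch of the per-band RMSE computation, with synthetic stand-in arrays instead of the actual Lamasquère matchups (the variable names and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
bands = ["B2", "B3", "B4", "B5", "B6", "B7", "B8A", "B8", "B11"]

# Synthetic stand-in data: 9 matched overpasses x 9 bands.
# In the real exercise these would be the ROSAS BRDF-derived reflectances
# interpolated to the Sentinel-2 viewing and illumination geometry, and the
# MAJA surface reflectances averaged over the site footprint.
rosas = rng.uniform(0.02, 0.4, size=(9, len(bands)))
maja = rosas + rng.normal(0.0, 0.01, size=rosas.shape)

def rmse(reference, estimate):
    """Root mean square error per band, computed over the date axis."""
    return np.sqrt(np.mean((estimate - reference) ** 2, axis=0))

for band, err in zip(bands, rmse(rosas, maja)):
    print(f"{band}: RMSE = {err:.3f}")
```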
To be continued…
Many thanks to the CNES team in charge of processing the ROSAS datasets (Lucas Landier, Sophie Coustance and Nicolas Guilleminot)!
Comparison between ground and satellite measurements at Lamasquère for 4 spectral bands (blue, red, near infrared, shortwave infrared), in blue for Sentinel-2A and in red for Sentinel-2B. Click on the graphs to enlarge.
-
13:58
First validation results from the Lamasquère ROSAS site
sur Séries temporelles (CESBIO)Following its installation in March 2021, our new ROSAS device at Lamasquère has produced a first series of acquisitions over a complete maize crop cycle. The ROSAS system relies on a multi-spectral photometer to perform angular and spectral measurements of both the incident radiation and the radiation reflected by the surface. Each measurement sequence, which lasts 140 minutes, is then pre-processed to produce a bi-directional reflectance (BRDF) of the surface of the measurement site. It is then possible to compute the spectral reflectances of a given instrument for observation conditions corresponding to the illumination and viewing angles of a remote sensing product, and therefore to validate the surface reflectances computed by an atmospheric correction processor such as Maja.
The device was installed during an intermediate crop cycle. Forage maize was then sown on the plot in mid-April and harvested for silage in early October. The photometer therefore followed the entire growth cycle of the maize, from germination to maximum growth (about 2.60 m around the mast).
Status of the maize crop during the 2021 cycle
Despite a cloud cover stronger than at the ROSAS site of La Crau (south of Arles), and even more so than at Gobabeb (Namibian site), the photometer completed enough full cycles to follow the evolution of the surface state.
Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 4
Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 8
The evolution of the Sentinel-2 surface reflectances produced by MAJA agrees well with the ROSAS acquisitions. Although the sample is small (9 observations after the acquisition quality filter), we can nevertheless compute the root mean square error (RMSE) per Sentinel-2 band, as well as for the optical thickness of the atmosphere (AOT). The results are presented in the table below.
Band (wavelength)    RMSE (S2A and S2B)
B2 (490 nm)          0.009
B3 (560 nm)          0.009
B4 (665 nm)          0.007
B5 (705 nm)          0.012
B6 (740 nm)          0.024
B7 (783 nm)          0.026
B8A (865 nm)         0.030
B8 (842 nm)          0.033
B11 (1610 nm)        0.028
AOT (-)              0.068
While the RMSEs are very satisfactory for bands B2 to B4 (we target an RMSE of 1% for the estimation of surface reflectances), we observe a strong degradation of the quality of the estimates from B6 to B11. The differences are particularly large in the near infrared for reflectances above 0.2. As atmospheric correction errors are generally larger in the blue than in the near infrared, the observed differences could be due to a larger spatial heterogeneity in the NIR than in the visible, to an imperfect modelling of the BRDF, to a photometer calibration that could be improved, or to the effect of irrigation and its drying, which introduces a variation of the reflectance during the day, whereas the Sentinel-2 measurement is instantaneous and the ROSAS one requires a full day. Several leads therefore remain to be explored. The winter wheat, sown at the end of last October, will give us the opportunity to continue the exercise, this time on a lower and denser (and, we hope, more homogeneous) crop cover, which will provide additional data to refine the analysis.
To be continued
Many thanks to the CNES teams who process these data and produced these first results (Lucas Landier, Sophie Coustance and Nicolas Guilleminot)
Comparison of the reflectances measured by Sentinel-2 and ROSAS for four spectral bands (blue, red, near infrared, shortwave infrared), in blue for Sentinel-2A and in red for Sentinel-2B. Click on the graphs to enlarge.
-
12:16
The snow you can't see
sur Séries temporelles (CESBIO)Last year in November we noticed these unusual white spots in a Sentinel-2 image near Barèges…
Les stations anticipent la fin du confinement. Profitant du froid les canons à neige ont tourné dans les Pyrénées comme le montre cette image satellite #Sentinel2 prise samedi 21/11 entre La Mongie et Super Barèges. @Meteo_Pyrenees pic.twitter.com/YT1rDtRUBs
— Simon Gascoin (@sgascoin) November 23, 2020
These white spots were artificial snow spread by snow cannons. At the time of the Sentinel-2 acquisition, they were also « visible » as cold spots in the images of our thermal infrared camera installed at Pic du Midi.
This year in November too, we noticed the same patches in the thermal camera pictures… However, the surrounding surface was also snow-covered, so the patches were invisible in the webcam pictures taken from the same location.
This year's artificial snow patches were much warmer (by about 8 K) than the surrounding natural snow cover. Such thermal anomalies surprised us, since the snow surface temperature is expected to be primarily influenced by atmospheric and radiative forcing (Pomeroy et al., 2016). Hence, the snow surface temperature should be rather homogeneous in areas with similar atmospheric and radiative forcing (experts say that it should not depend much on the physical properties of the underlying snow layers).
In fact, the « warm » patches were not « visible » anymore in the images captured later in the afternoon.
Artificial snow grains and natural snow grains have different shapes and sizes. As a result, artificial and natural snow covers have different optical properties, in particular reflectance and emissivity. Reflectance in the shortwave controls the energy absorbed by the surface snow, while emissivity controls the radiant energy emitted by the surface snow. Hence both properties could explain the difference in snow surface temperature. Pomeroy et al. (2016) consider that the snow surface temperature is only secondarily affected by the absorption of shortwave and near-infrared radiation (especially in this case: we are at the beginning of the winter, so the content in light-absorbing impurities such as dust is expected to be low). Reported variations of the snow emissivity due to snow grain type are likely not sufficient to explain such temperature contrasts either (Hori et al. 2006). Therefore, the observed temperature differences are probably not due to differences in the optical properties of artificial vs natural snow.
Artificial snow is produced from liquid water (above 0°C) that is atomized into fine droplets. The snow cannons are operated when the air is sufficiently cold and dry for ice crystals to form. Yet the phase change from liquid water to ice releases a lot of latent heat, which must be evacuated for the droplets to freeze. If some droplets don't freeze before hitting the ground, the snow will be « wet » (Lintzen 2012). This may be what happened here: the snow grains that were deposited on the ground were much warmer than the surrounding snow, and even a bit wet. This warmer, artificial snow cover radiates more in the thermal infrared.
These spatial variations in longwave radiation are well captured by the thermal camera despite a distance of ~5 km between the camera sensor and the ski slopes on the other side of the Barèges valley. In the spectral window of the thermal camera (7.5 – 13 µm), the atmosphere is expected to be nearly transparent. Is it? This is something we need to investigate to be able to relate the camera images to actual surface temperatures.
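As a very rough illustration of what such a correction could look like, here is a minimal sketch of a broadband emissivity and transmittance correction applied to the camera brightness temperature; the emissivity, transmittance and atmospheric radiance values below are placeholders, not calibrated values for the Pic du Midi setup:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def surface_temperature(bt_kelvin, emissivity=0.98, tau=0.95,
                        l_down=250.0, l_path=15.0):
    """Crude broadband inversion of: camera radiance =
    tau * (eps * B(Ts) + (1 - eps) * L_down) + L_path,
    with B approximated by sigma * T**4 (placeholder values)."""
    l_cam = SIGMA * np.asarray(bt_kelvin) ** 4         # flux equivalent seen by the camera
    l_surf = (l_cam - l_path) / tau                    # remove path emission and transmittance
    l_emitted = (l_surf - (1.0 - emissivity) * l_down) / emissivity
    return (l_emitted / SIGMA) ** 0.25

# Example: a -8 degC brightness-temperature patch next to a -16 degC one
print(surface_temperature([265.15, 257.15]) - 273.15)
```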
Thanks to Baptiste for spotting this and to Ghislain for the fruitful discussion!
-
10:48
MAJA 4.5 release, compatible with the new Sentinel-2 L1C format
sur Séries temporelles (CESBIO)Following ESA's announcement of a major upgrade of the Sentinel-2 product format for early January 2022, the development team at CS and CNES rushed to adapt MAJA to these new specifications. In particular:
- take into account the new radiometric offset allowing negative values;
- take into account the new raster format of the L1C quality masks, replacing the current GML format.
We strongly encourage our users to upgrade to this new version available here: download Maja 4.5.1
Note that we have moved the Maja GIPP repository to a new GitLab service: get the latest GIPP for version 4.5.1
Also note that MAJA does not benefit from the ECMWF relative humidity added to the L1C auxiliary data, because our --cams option requires a full vertical profile, which is still automatically downloaded by StartMaja using the latest CAMS CSD API. However, since the relative humidity variable in the forecast does not fall within the general CAMS data licence, it is only available with a delay of 5 days. We wish to thank our development team at CS and CNES for releasing this version on time, despite the tight schedule!
-
10:21
MAJA 4.5 is now available
sur Séries temporelles (CESBIO)UPDATE (Feb 14th 2022): version 4.5.3 is now released and fixes a few minor issues found while testing the first time series of the new L1C format; please upgrade to Maja 4.5.3!
Following the announcement by ESA of a major product format upgrade for Sentinel-2 by early January 2022, the development team at CS and CNES rushed to adapt MAJA to these new specifications. In particular :
- account for the new radiometric offset allowing for negative radiometric values;
- account for the new raster format of the L1C quality masks in place of the current GML format.
The radiometric offset and the bias between S2A and S2B are accounted for transparently; no action is required on the users' side.
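For users who read the L1C rasters directly rather than through MAJA, the change boils down to adding the offset declared in the product metadata before dividing by the quantification value. Here is a minimal sketch assuming an offset of -1000 and a quantification value of 10000, which are the values announced for processing baseline 04.00 (always check the per-band values actually present in the product metadata):

```python
import numpy as np

QUANTIFICATION_VALUE = 10000.0   # unchanged
RADIO_ADD_OFFSET = -1000.0       # new with the format upgrade (declared per band in the metadata)
NODATA_DN = 0                    # zero is still reserved for no-data

def dn_to_reflectance(dn):
    """Convert L1C digital numbers to top-of-atmosphere reflectance."""
    dn = np.asarray(dn, dtype=float)
    reflectance = (dn + RADIO_ADD_OFFSET) / QUANTIFICATION_VALUE
    return np.where(dn == NODATA_DN, np.nan, reflectance)

# A DN of 1000 now encodes a zero reflectance; DNs below 1000 encode negative reflectances.
print(dn_to_reflectance([0, 500, 1000, 11000]))
```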
We strongly encourage our users to upgrade to this new release available here : download Maja 4.5.3
Note that we have moved the Maja GIPP (parameters and auxiliary data) repository to a new GitLab service: get the latest GIPP for version 4.5.3
Also note that MAJA doesn't benefit from the ECMWF relative humidity added in the auxiliary data part of the L1C, since our --cams option requires a full vertical profile, which is automatically downloaded by StartMaja using the latest CAMS CSD API. However, since the relative humidity variable in the forecast does not fall within the general CAMS data licence, it is only available with a delay of 5 days.
We wish to thank our development team at CS and CNES for releasing this version in time, despite the tight schedule !
-
1:16
The plainbow over Fraser river sediment plume
sur Séries temporelles (CESBIO)Sentinel-2 captured an impressive view of the Fraser river sediment plume in Georgia Strait caused by the historic flood in British Columbia (250 mm rainfall in 48 hours).
Fraser River delta and Georgia Strait near Vancouver, British Columbia, Canada. Sentinel-2 on 2021-Nov-16 (color composite of L2A red, green and blue images).
The image contains an amusing detail west of the Vancouver International Airport. This red-green-blue object is an airplane, or « plainbow ». It is due to shifts between the multi-spectral images composing the « true color » image (in this case an RGB image composed with the red, green and blue channels of a Sentinel-2 multispectral image).
Misregistration shifts between multi-spectral images are well explained by Sergii Skakun et al. (2018):
The MSI is designed in such a way, that the sensor’s detectors for the different spectral bands are displaced from each other. This introduces a parallax angle between spectral bands that can result in along-track displacements of up to 17 km in the Sentinel-2A scene [2]. Corresponding corrections using a numerical terrain model are performed to remove these interband displacements, so the MSI images, acquired in different spectral bands, are co-registered at the sub-pixel level to meet the requirement of 0.3 pixels at 99.7% confidence. However, these pre-processing routines cannot fully correct displacements for high altitude objects, e.g. clouds or fast moving objects such as airplanes or cars. Therefore, these types of objects appear displaced in images for different spectral bands. The magnitude of the displacement varies among pairs of bands depending on the inter-band parallax angle. For example, at 10 m spatial resolution, the maximum displacement is observed for bands 2 (blue) and 4 (red).
The parallax effect has been used to determine the elevation of a volcanic plume from Landsat imagery (de Michele et al. 2016). Here the object is an airplane: it is moving at high velocity and it is not at the elevation of the digital terrain model at this location (i.e. sea level), therefore two different effects contribute to the color shift.
Multispectral shift in the case of an air balloon floating above the surface (not moving) and of an airplane flying at the right velocity and in the right direction to compensate for this shift.
To better understand, let’s consider an airplane flying at an altitude of 10 km. Let’s imagine it is heading toward the same direction as Sentinel-2 (roughly from the north east to south west). What should be its velocity to compensate for the time delay between two spectral bands, so that both spectral images of the airplane are well aligned?
The problem can be solved using Thales’ theorem.
$$\frac{d}{D}=\frac{h}{H}$$
where \(d\) and \(D\) are the distances traveled by the airplane and Sentinel-2, respectively, during the time delay \(\Delta t\) between two spectral band acquisitions:
$$d=v \Delta t$$
$$D=V \Delta t$$
where \(v\) and \(V\) are the velocities of the airplane and Sentinel-2 respectively.
Therefore we obtain:
$$v=V\frac{h}{H}$$
Sentinel-2 is flying at V = 7.44 km/s on its orbit at H = 786 km. Hence, at h = 10 km, we get v = 340 km/h. If the airplane was flying at 340 km/h towards the southwest it should not appear as a « plainbow » in a Sentinel-2 multispectral composite.
The same equation can be written in vector form to account for the airplane flying direction (see Eq. 7 in Heiselberg 2019). To determine the velocity and the elevation of an airplane from Sentinel-2 images, the system of two equations has three unknowns (velocity, altitude and heading angle) and cannot be solved without additional information or hypotheses. Heiselberg (2019) used airplane contrails to determine the airplane heading angle. However, the airplane elevation is very small with respect to the Sentinel-2 altitude, and most of the uncertainty actually comes from the low spatial resolution of Sentinel-2 images with respect to the airplane size. In the above case of the plainbow over Georgia Strait, I found a distance of \(d_{B2,B8A}\) = 200 m ± 10 m between the airplane location in B2 and B8A (\(\Delta t\) = 2.055 s). Assuming that the airplane is at sea level (h = 0) we get v = 350 km/h ± 18 km/h, whereas we would get v = 352 km/h at h = 1000 m. Hence, the error on the velocity due to the uncertainty on the distance estimate \(d_{B2,B8A}\) is much larger than the error due to the uncertainty on the airplane elevation.
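For convenience, here is the small numerical computation behind the figures quoted above (all values are taken from the text; this is just the formula \(v = V\,h/H\) and the displacement-over-delay estimate):

```python
# Velocity of an airplane that would cancel the inter-band parallax shift
V_SAT = 7.44e3        # Sentinel-2 velocity on its orbit (m/s)
H_SAT = 786e3         # Sentinel-2 orbit altitude (m)

def compensating_velocity(h_plane):
    """Airplane velocity (km/h) that compensates the band-to-band delay at altitude h (m)."""
    return V_SAT * h_plane / H_SAT * 3.6

print(compensating_velocity(10e3))   # ~340 km/h for an airplane at 10 km altitude

# Conversely, velocity estimated from the apparent displacement between B2 and B8A
d_b2_b8a = 200.0      # displacement measured in the image (m)
dt_b2_b8a = 2.055     # time delay between the B2 and B8A acquisitions (s)
print(d_b2_b8a / dt_b2_b8a * 3.6)    # ~350 km/h, assuming the airplane is at sea level
```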
-
23:39
That's it, the Astrolabe glacier has calved!
sur Séries temporelles (CESBIO)Last January we announced the incubation of a nice iceberg at the terminus of the Astrolabe glacier… Yann Niort, who currently works for Météo-France in Adélie Land, let us know that the glacier has finally calved!
In fact it is not a single iceberg but a nice litter of several icebergs that the Astrolabe delivered in early November, while its floating tongue was free of its shell of winter sea ice.
The area lost by the glacier is 20 km² (7 km² less than what we had anticipated here last January).
-
20:34
MAGicians and deep learning wizards
sur Séries temporelles (CESBIO)There are several MAGicians in the CESBIO laboratory, and I am one of them (with all due modesty). According to my definition, MAGicians are
participants in Mission Advisory Groups, for instance for CNES or ESA satellite projects.
These days, I participate in the MAG for Sentinel-HR at CNES, and in the one for Sentinel-2 Next Generation at ESA. I was also involved at CNES in the group that prepared the French answer to the Copernicus Long Term Scenario, as well as in the MAG for TRISHNA. Other colleagues are involved in other groups, such as CO3D, LSTM, SMOS-HR or a phase-0 study for a hyperspectral mission. In the early phases of a space project, the MAG, which is a working group of selected experts interested in the project, helps the project manager to better define the needs and requirements of the project. During the development phase, the MAGicians help define the content of the products and the methods, and set the priorities for programming the acquisitions… MAGicians are also useful when difficulties arise with the satellite that have an impact on the performances. After launch, some MAGs can be kept to organize the validation and outreach of the satellite. In that phase, ESA calls them Quality Working Groups.
During the design phase, the challenge is always to obtain the decision to commit to the project. It is often useful to try to convince the funding agencies of the interest of the mission, and for that, obtaining support from the user community and the stakeholders is necessary. It is therefore always good to explain the mission to the community, but it is on these occasions that the MAGicians might meet the dreadful deep learning wizards or, even worse, their spokespersons.
This happened to me twice in the last year, and I still shiver when I remember these occasions :
- when we published our post about the Sentinel-HR mission, a deep learning wizard told us that a resolution of 2 m for a mission complementing Sentinel-2 was not necessary, as they were able to bring Sentinel-2's resolution to 1 m using a very secret deep learning super-resolution method.
- during the analysis of the Copernicus Long Term Scenario, I pleaded to keep Sentinel-2A operational when Sentinel-2C is launched, to get a better revisit and more chances to get cloud-free images every fortnight. My argument was dismissed by a high-ranking official in the French Ministry of research, who told us that clouds in optical images were not an issue anymore, because it was possible to create Sentinel-2 images from Sentinel-1 images using deep learning.
This series of images shows: 1) a Sentinel-2 image at 10 m resolution resampled with nearest neighbour (did you notice that super-resolution products are compared to nearest neighbour interpolation?), 2) the same image provided at 1 m resolution using a deep learning super-resolution method, 3) the same zone in VHR taken from Google Earth (at a different date).
The first objection was easy to dismiss as the images shown by the wizard are awfully noisy (see above), and a comparison with real high resolution data shows that their guess of the high resolution features is often completely wrong.
But I failed to correctly answer the second objection (DL alchemy can transform radar images into cloud-free optical images). Although not an expert, which is normal, the official is a high-ranking ministry officer who is not used to being contradicted, especially by mere researchers. I think I mumbled that deep learning techniques, although powerful, are not magical, and that SAR and optics are not bijectively related. But no one in the audience came to my defense, and I did not succeed in getting a French request to keep Sentinel-2A operational after the launch of Sentinel-2C (sorry!).
I have nothing against deep learning, which can be an efficient machine learning technique, but, as shown by Jordi Inglada in a previous post, it is sometimes misused, or even more often, its results can be misinterpreted by non specialists (I am one of them).
So, dear fellow MAGicians, how would you react in front of a wizard who claims your mission is useless because he saw a presentation that does the same thing with deep learning and existing satellites? The best answer I have so far was suggested by a genius of deep learning (you know, the type of genius who comes out of a lamp), Jordi Inglada himself. He suggests that the wizards and their relays in the commissions or ministries are trying to shift the burden of proof. As MAGicians, our job is to show that what we ask for is really needed, and if a wizard or his/her spokesperson wants to replace what we suggest by some deep learning wizardry, he/she has to prove that it works well. I should therefore have answered: « could you please give us the reference of the paper that proves that we can fully replace Sentinel-2A by the existing Sentinel-1 satellites? »
Would you have any other magic formulae to survive against a DL wizard ? Any spells, ideas or strategy ? Please feel free to comment !
-
16:17
Mount Everest in 3D
sur Séries temporelles (CESBIO)Because it's a stunning place that I will probably never visit, I have spent some time making a 3D visualization of Mount Everest using the Qgis2threejs plugin in QGIS. I draped the 24 Oct 2021 clear-sky Sentinel-2 image on a gapfilled version of the HMA 8 m digital elevation model (like in this post on the Khumbu glacier). Besides, it's a nice way to look at the spatial variations of the snowline elevation on the slopes of Mount Everest.
I made this animation available on Wikimedia so feel free to download and re-use it!
-
0:41
Thinning of the Ossoue glacier between 2017 and 2021
sur Séries temporelles (CESBIO)Over the last four years, the Ossoue glacier has lost on average 6 metres of thickness. In places, its thinning exceeds 8 metres, that is 2 metres per year.
Elevation change of the Ossoue glacier between 2017 and 2021
This map was obtained by processing Pléiades (CNES/Airbus DS) stereoscopic image pairs acquired on 8 October 2017 and 9 October 2021. In October the glacier is free of its winter snowpack, so we are indeed measuring a net loss of ice. The method* is exactly the same as the one described in this article. According to the field measurements carried out by the Moraine association, the Ossoue glacier has lost nearly 35 m of thickness between 2001 and 2021.
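For the curious, here is a minimal sketch of the final differencing step, once the two Pléiades DEMs have been generated and co-registered on the same grid; the file names are placeholders and the stereo restitution and co-registration themselves are not shown:

```python
import numpy as np
import rasterio

# Two co-registered DEMs on the same grid (placeholder file names)
with rasterio.open("dem_ossoue_2017-10-08.tif") as d17, \
     rasterio.open("dem_ossoue_2021-10-09.tif") as d21:
    z17 = d17.read(1, masked=True).astype("float64")
    z21 = d21.read(1, masked=True).astype("float64")
    profile = d17.profile

dh = z21 - z17                              # elevation change 2017 -> 2021 (m)
print("mean elevation change:", float(dh.mean()))

profile.update(dtype="float32", nodata=-9999)
with rasterio.open("dh_ossoue_2017_2021.tif", "w", **profile) as dst:
    dst.write(dh.filled(-9999).astype("float32"), 1)
```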
One of the two panchromatic images acquired by Pléiades 1A on 9 October 2021 (this one at 11:04:58 UTC)
The Ossoue glacier was in the spotlight in the Saturday 16 October edition of La Dépêche! It will be again at the upcoming symposium (30 Oct 2021) and the exhibition by Grégoire Eloy (Oct and Nov 2021) of the Résidence 1+2 Photographie & Sciences in Toulouse. More info at: [https:]]
In the Pyrenees, glaciers have lost nearly a quarter of their area and 6.3 m of thickness between 2011 and 2020 (Revuelto, Vidaller et al. 2021).
* For the purists, here is the elevation change map at a wider scale. We also note a negative signal on the Oulettes glacier, but a noisier one, because of (i) the shadow cast by the Vignemale, which degrades the quality of the 3D reconstruction, and (ii) the fast flow of this heavily crevassed and rugged glacier.
-
17:46
Is DS 4 EO BS?
sur Séries temporelles (CESBIO)EO, RS, GS, DS, AI, ML, DL, BS, WTF?
Everybody in the Geosciences (GS) and Remote Sensing (RS) community is now aware of the great advances in Data Science (DS) and Machine Learning (ML) that have taken place in the last 10 years. Although many people talk about Deep Learning (DL) or Artificial Intelligence (AI), these terms don't usually accurately describe the techniques used in our field. If we want to be pedantic, we may say that:
- most of the neural networks used are not really deep;
- machine learning techniques other than artificial neural networks (ANN) like for instance Random Forests, Gradient Boosting Trees and similar techniques scale better for some problems and are still the first choice for operational applications;
- most of the approaches in AI are not used anymore (search algorithms, constraint satisfaction problems, logical agents, etc.), so using the term AI is not appropriate;
- other techniques are on the rise to cope with the limitations of ANNs (evolutionary approaches to build neural architectures, probabilistic programming to model uncertainty), so speaking about DL is too narrow.
I like to use the term Data Science, because it encompasses not only the techniques used, but also how they are deployed and, most importantly, the domain problem that one wants to solve. In figure 1 we see that DS is at the intersection of the domain expertise (i.e. hydrology, agronomy, geology, ecology), mathematical modeling (statistics, optimization) and computer science (automation, scalability).
Figure 1: The Venn diagram of Data Science (from [www.datascienceassn.org] )
In Earth Observation (EO) problems and more generally in the Geosciences, this point of view is very useful.
It is interesting to understand that classical science falls at the intersection of the domain expertise and the mathematical modeling, where, for instance, a simple regression can be used to calibrate the parameters of a model. The data processing done by data centers and ground segments is at the intersection of the domain expertise and computer science (the scientist writes the ATBD1, which is then coded as processing chains that operate on massive data). Machine Learning can be seen as scaling the mathematical models through massive computation exploiting large volumes of data.
So let's try to understand if DS for EO is really BS (Latin for Bulbus Stercum2).
Good buzz, bad buzz
The sad fact is that there is a lack of nuance in the discussion about these topics: either DS is great, or it is BS. The cynic in me would say that this is due to the fact that nowadays too many discussions take place on Twitter (where flame wars are legion) or LinkedIn (where everybody loves everybody's work), and also because researchers are forced to act like startup founders: work on an elevator pitch for a funding agency which has no technical knowledge and thinks ROI3.
But beyond Twitter and desperate research grant proposals, there are publications in peer reviewed journals that may give the impression that we are missing the point in terms of the problems we are trying to solve with these techniques.
I will try to illustrate this with 3 examples. My goal here is not to make fun of these specific cases. There is surely serious work behind these examples, but the fact is that, as presented, they may leave a suspicious impression on an attentive reader.
- Cropland parcel delineation
Automatically delineating agricultural parcels is a difficult task. In a prestigious conference publication [1], the authors claim that their ML model
[…] automatically generates closed geometries without any heavy post-processing. When tested with satellite imagery from Denmark, this tailored model correctly predicts field boundaries with an overall accuracy of 0.79. Besides, it demonstrates a robust knowledge generalization with positive results over different geographies, as it gets an overall accuracy of 0.71 when used over areas in France.
If we don't pay attention to the numbers, this statement can be understood as if the problem was solved and an operational product was available. Looking at the numbers we understand that there is between 20% and 30% error. For a product which is meant to replace the Land Parcel Information System (LPIS) for which the annual changes are less than 5%, we understand that we are far from the goal. Furthermore, the visual results in figure 2 show that the cartographic quality of the objects is low: the fields are blobby, many boundaries are missed, etc.
Figure 2: Example of cropland parcel delineation (from [1])
I chose this example because we have tried to reproduce these results at CESBIO, and despite having a very skilled intern and, of course, great supervision, we had a hard time. The feedback on this work was reported in this post.
Maybe the mistake here was using approaches that may be suited to multimedia data, where the level of expected accuracy is way lower than for cartographic products. Also, sometimes we are hammers looking for a nail: we forget (or don't have the domain expertise) that some problems may be better solved using other data sources than satellite imagery alone: in countries where the parcels are beautiful polygons, starting from scratch is a pity, because the most recent LPIS is usually a very good first guess.
- SAR to optical image translation
Using the all-weather capabilities of Synthetic Aperture Radar to fill the gaps due to clouds, or even to completely replace optical imagery, is a goal that has been pursued for a long time. In recent years, many works using the latest techniques in DL have been published. These techniques reuse approaches that have been developed for image synthesis in domains like multimedia or video games.
If we take one recent example [2], we can read:
The powerful performance of Generative Adversarial Networks (GANs) in image-to-image
translation has been well demonstrated in recent years.
[…]
The superiority of our algorithm is also demonstrated by being applied to different networks.
However, when we have a look at the images (see figure 3) we see that reflectance levels can be very different between the real optical image and the translated one.
Figure 3: Illustration of SAR to optical image translation (from [2])
It is of course impressive that the algorithm is able to generate a plausible image: if we did not have the real optical image, we could think that the result is a good one. The problem here is that, in Remote Sensing, we don't deal with pictures, we deal with physical measures which have a physical meaning. For most applications, we don't want plausible images (these generative methods produce data that follow the same statistical distribution as the training data): we need the most likely image together with uncertainty estimates.
- Image super-resolution
Another nice obsession of image processing folks is super-resolution: transforming the images acquired by a satellite into a version where the pixels are smaller and fine details are visible. For a long time, super-resolution approaches were based on sound mathematical tools from the signal processing toolbox. A nice thing with our signal processing expert friends is that they had objective quality measures, and they also used to have the algorithms assessed by expert photo-interpreters.
In recent times, the trend has been towards using learning algorithms: use a pair of images of the same area, one at high resolution and the other at low resolution, and train a model to do the super-resolution. Publications on the topic are galore, and there are even companies which aim to provide this kind of data. For instance, in a recent LinkedIn post by SkyCues one could read:
THE ONLY SOURCE FOR HIGH-RESOLUTION SATELLITE IMAGERY covering the entire world, updating every few days based on a consolidated source of coherent, dependable and secure earth observation data Now at 1m and improved color correctness
[…]
Dear EO Colleagues: we are Swiss-based EO innovation, where we managed to produce decent quality 1m from Sentinel-2 (see image). This employ very deep Machine Learning trained globally.
Exploring the publicly available data, one can evaluate the quality of the results. Figure 4 shows an illustration of artifacts present in these super-resolved images.
Figure 4: Illustration of Sentinel-2 super-resolution (from [https:]] )
There may be some applications for which this kind of data is useful, but for many others, this quality is not acceptable and is certainly equivalent to (or worse than) the data already available through some small satellites with very low image quality standards.
- Many other examples
I will stress again that the choice of examples above is anecdotal. Many more could be cited. I have seen papers or attended presentations about very puzzling things, like for instance (just to name a few):
- trying to detect clouds on Sentinel-2 images by using the visible bands only (no cirrus band, no multi-temporal information);
- training a DL algorithm to reproduce the results of a classical algorithm «because we did not have real reference data»;
- doing data augmentation4 on VHR SAR images by applying rotations (the SAR acquisition geometry effects like layover, foreshortening or shadowing do not make any sense after rotation);
Also, with the goal of building corpora of data on which algorithms could be benchmarked, many data sets have been proposed to the RS and ML communities. Most of these data sets are not representative of real problems with the specificities of remote sensing data. Some of them contain only the 3 visible bands of a satellite having many more, or do not contain any temporal information when they are supposed to be useful for land cover classification.
Others are not well designed from the machine learning point of view. For instance, the data set for the TiSeLaC contest had pixels in the training set which were neighbors of pixels in the test set. It is not surprising that the algorithm that won the competition only used the pixel coordinates and discarded the spectro-temporal information!
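A simple way to avoid this kind of leakage is to split training and test samples by spatial blocks rather than by individual pixels, so that a test pixel never sits right next to a training pixel. A minimal sketch of the idea (illustrative only, not the protocol of any particular contest):

```python
import numpy as np

def spatial_block_split(rows, cols, block_size=100, test_fraction=0.3, seed=0):
    """Assign each pixel to train or test according to the spatial block it falls in."""
    rng = np.random.default_rng(seed)
    block_id = (rows // block_size) * 10_000 + (cols // block_size)
    unique_blocks = np.unique(block_id)
    test_blocks = rng.choice(unique_blocks,
                             size=int(len(unique_blocks) * test_fraction),
                             replace=False)
    is_test = np.isin(block_id, test_blocks)
    return ~is_test, is_test

# Example with random pixel coordinates on a 10980 x 10980 Sentinel-2 tile
rng = np.random.default_rng(1)
rows = rng.integers(0, 10980, size=5000)
cols = rng.integers(0, 10980, size=5000)
train_mask, test_mask = spatial_block_split(rows, cols)
print(train_mask.sum(), test_mask.sum())
```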
Again, remember that I am using the Latin word only to troll, among other things because I guess one could find papers with similar issues with my name on them.
- Why bother about all this?
The issue here is not fraud or dishonesty from those who write or publish the things I highlighted above. After all, most of these publications go through a peer review process … The real problem here comes from those who extrapolate the consequences of these results.
It would be a real problem if someone decided to reduce the revisit of an optical constellation «because we can get the same data from SAR images, anyway».
It is also difficult to explain to users that a country-scale land cover map can't have 99% accuracy like the results shown in the last CNN-GAN-Transformer paper they have seen on Twitter.
All these issues need detailed analysis, good validation and, most of all, real understanding of how things work in terms of physics, math and software. And here is the bad news: it takes time, hard work and collaboration.
- A unicorn is a team
If we go back to the Venn diagram of DS, we can identify some danger zones. Figure 5 illustrates those (although it should be adapted to our field, but I am too lazy).
Figure 5: Danger zones in DS (from [https:]] )
- Domain knowledge and automation without mathematical rigor lead to unproven or wrong results: for example, a regression model assumes that the soil moisture data are Gaussian when they are not.
- Domain knowledge and mathematical modeling without the automation to large data can lead to biased results. It seems that there was a scientist that spent their whole career calibrating the same model for every new study site.
- And most problematic, ML (math + CS) without the domain knowledge can lead to daft approaches that do not solve any problem. This is when we take the latest DL model trained to detect cats and use it for leaf area index estimation, because the tensors have the same dimensions and the code was available on Github.
A way to avoid this is to ensure that the work is supported by the 3 pillars of DS. If you are reading this, you may lie somewhere in the Venn diagram. Being at the center is rare; these are the unicorns [3]. However, a team can have a center of mass that gravitates towards the intersection of the 3 pillars. At CESBIO, we are lucky to have that. When my code is crap and won't scale, Julien tells me so and helps me improve it. Mathieu is there to remind me that I need to shuffle the training data correctly for stochastic gradient descent to be efficient. Olivier is happy to explain to me for the nth time that, yes, reflectances can be greater than 1. There you go: automation, math and domain knowledge.
This 3-sided reality check can be sometimes frustrating in the publish or perish world we live in. We can also give the impression of being Dr. Know-it-all when we review papers (or write this kind of post!).
- Where best to leverage DL?
We can conclude that DS is not BS, but the buzz about ML in general, and recently DL in particular, and their uses without domain knowledge, may have created a negative impression.
If we abandon the DL all the things motto and are pragmatic, we can say that DL approaches are just universal function approximators that can be calibrated by optimizing a cost function. If the problem at hand can be posed as a cost function, either in terms of fit to a reference data set (supervised inference learning), or in terms of the properties of the function output (unsupervised generative models), chances are that DL can be applied.
Without wanting to give an agenda of AI research for the Geosciences (others have already done it [4]), we can identify the kinds of problems for which these techniques could be useful.
There are of course the classical problems for which ML has been used for decades now: land cover classification and biophysical parameter estimation. But one may wonder if replacing RF by DL to maybe gain 1% in accuracy while increasing the computation cost is worth it.
For these application problems, it may be more interesting to guide the learning approaches with the physics of the problem.
Complex physical models can't be run over large geographical areas because of computational costs. The alternative has usually been to use simpler, less accurate models when going from a small study area to a large region. Since neural networks are universal approximators, they can be used to replace the complex physical model and run with a fraction of the computational cost.
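As an illustration of this emulation idea, here is a minimal sketch in which a small neural network is trained to reproduce a deliberately toy "physical model"; in a real application the toy function would be replaced by runs of a radiative transfer or land surface model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_physical_model(x):
    """Stand-in for an expensive physical model (e.g. radiative transfer)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

# Build a training set by running the "model" on sampled inputs
X = rng.uniform(0, 1, size=(5000, 3))
y = toy_physical_model(X)

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(X, y)

# The emulator can now be evaluated over large areas at a fraction of the cost
X_new = rng.uniform(0, 1, size=(5, 3))
print(np.c_[toy_physical_model(X_new), emulator.predict(X_new)])
```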
For some of the applications illustrated above (super-resolution, multi-sensor fusion) DL can be a pertinent solution, but the cost functions and the network architectures can't just be copied from other domains. The domain knowledge of the RS expert (the sensor characteristics for super-resolution, the physics of the signal and the observed processes for multi-sensor fusion) can be translated in terms of cost functions to optimize or latent variables of the model.
But maybe we also have to question when DL is not the appropriate solution. One of the criticisms that one can make of many recent papers in peer-reviewed journals is that they fail to check the performance of simple, straightforward solutions. Many DL papers benchmark DL algorithms against each other but forget to compare them to a simple regression or a Random Forest. Simplicity of the models (for explainability and for energy consumption) should always be a criterion when proposing a new algorithm. Let's not forget that in the Geosciences we want to produce information that helps to advance knowledge. Using the coolest tool may not be the real goal.
Algorithm Theoretical Basis Document
Return On Investment, not Region Of Interest on your study area!
A technique used to generate lots of training data from a smaller data set
-
10:30
TRISHNA Workshop Toulouse 2022
sur Séries temporelles (CESBIO)
Save the date : 1st TRISHNA Workshop in March 2022 in Toulouse
Toulouse, France, 22-24 March 2022
WORKSHOP ANNOUNCEMENT
Important note: this page is for information only. Registration and call for abstracts will be available soon on a dedicated website.
Background
Monitoring accurately the water cycle at the Earth surface is becoming extremely important in the context of climate change and population growth. It also provides valuable information for a number of practical applications: agriculture, soil and water quality assessment, irrigation and water resource management, etc... It requires surface temperature measurements at local scale. Such is the goal of the Indian-French high spatio-temporal TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment), led by ISRO (Indian space agency) and CNES (French Space agency). It will be launched in late 2024/beginning of 2025.
The surface temperature and its dynamics are precise indicators of the evaporation of water from soils, transpiration of plants and of the local climate. TRISHNA and its frequent high-resolution measurements raise major scientific, economic and societal issues through the 6 major themes that the mission addresses from the angle of research and development of applications: ecosystem stress and water use; coastal and inland waters; monitoring of the urban climate; cryosphere; solid Earth; atmosphere.
Workshop Objectives
The objective is to gather all the people involved and interested in the science and applications to which TRISHNA will contribute: contributors to the design of the mission or to the elaboration of the products, and potential users. This first international workshop shall be a key milestone in order to converge on the TRISHNA science plan, including CAL/VAL activities, scientific algorithms for data processing and product definition, with enhanced coordination between all partners.
Discussions and exchanges will include:
TRISHNA in the international context
All discussions within the workshop shall be considered in synergy with the future operational missions with high resolution and high revisit thermal infrared capability (TRISHNA, Copernicus’ LSTM, NASA’s SBG), and based on the experience gathered with the pathfinder missions (LANDSAT, ASTER, ECOSTRESS)
Elaboration of TRISHNA products and Calibration/Validation
- The definition of TRISHNA scientific products and variables
- Requirements and constraints for the distribution of the products to the users
- The elaboration of the products, through the drafting of the Algorithm Theoretical Basis Documents
- Calibration and validation activities
TRISHNA scientific themes and associated applications
- Ecosystem stress and water use: advances in the assimilation of land surface temperature or evapotranspiration in hydrological models
- Coastal and inland waters
- Urban climate modeling
- Cryosphere
- Solid earth
- Atmosphere
Workshop Format
The workshop will last for 3 days. It will include a series of plenary presentations.
Furthermore, specific themes and issues will be discussed in follow-up breakout sessions. The outcomes of the breakout sessions will be presented in plenary by the respective moderators followed by open discussion.
Workshop Outcomes
Following the workshop, the proceedings (presentations, discussion summaries and conclusions) will be prepared and made available to participants and other interested parties on the TRISHNA website.
Working language
The working language of the workshop will be English.
Date and Venue
The workshop will be held 22-24 March 2022.
The meeting venue is:
Hôtel Mercure Toulouse Centre Compans
Boulevard Lascrosses - 8 Esplanade Compans Caffarelli
31000 Toulouse - FRANCE
Registration, Deadlines and Accommodation
The registration fee for workshop participation is TBD. On-line registration is required. Registration will open soon. Registration can be done at the following website: TBD
The workshop website will also be used to post relevant documents for the workshop, as well as practical information such as hotel lists and directions to the venue.
A block of rooms will be secured at special rate at the conference hotel.
Workshop Organization and contact information
The workshop is organised by CNES, and hosted by the Mercure hotel Toulouse Center Compans.
CESBIO will provide the Scientific Secretariat for the workshop.
Workshop Organization Committee
Jean-Louis Roujean
Bimal K. Bhattacharya
Philippe Gamet
Philippe Maisongrande
Gilles Boulet
Mark Irvine
-
9:47
Turbulences ahead for Sentinel-2 users
sur Séries temporelles (CESBIO)*Update* on 27/10/2021 : the deadline has been extended to early January 2022 !
*Update* on 07/10/2021 : The deadline has been extended to the 15th of November !! And sample products are now available. We're relieved, thank you ESA:
*Update* on 29/09/2021 : ESA just detailed the changes here, and they will become operational on the 26th of October !!
You might have noticed this sentence on the most recent Sentinel-2 status report :
Outlook
• Upgrade of the processing baseline for both Level-1C and Level-2A featuring several improvements in the algorithms and changing products format. The change is foreseen in September/October 2021 and further news will follow here.
From what we have heard at CNES, the changes will include :
- A change in the coding of the reflectances: an offset will be added to keep zero as the no-data value, while still allowing zero or negative reflectances to be coded in the products.
- The vector masks should be replaced by raster masks, which is probably not a simple change that you can implement with two lines of code.
The change should be put in production before the end of October, but we do not have any accurate description so far. Accounting for these modifications within our and your processors and products in such a short time will be difficult. Within Theia and for the MAJA team, being on time for these changes will be a hard challenge. Turbulences ahead!
-
12:09
Snowmelt and snow sublimation in the Indus basin
sur Séries temporelles (CESBIO)The Indus basin has one of the largest irrigation systems in the world. Available water resources are abstracted almost entirely, mostly for crop irrigation in Pakistan.
The Indus basin is also considered one of the large basins in Asia with the highest dependence on snowmelt runoff.
VIIRS false color image of the Indus basin (irrigated areas in green, snow cover in blue)
The contribution of snow and ice melt to runoff in the Indus basin was already well studied [1,2,3], but a pending question remained: how much water is lost by sublimation of the snow cover in the high mountain regions of the basin?
I used the recent High Mountain Asia snow reanalysis to re-evaluate snowmelt and estimate snow sublimation at the scale of the Indus basin.
Snowmelt and sublimation in the Indus basin. Timeseries of annual snowmelt and sublimation (a) and timeseries of the mean daily snowmelt and sublimation (b).
Over 2000–2016, snowmelt was about 25–30% of basin-average annual precipitation. About 11% of the snowfall was "lost" by sublimation, but with a large spatial variability across the basin. Sublimation fraction can be much higher in the arid regions in Ladakh and Western Tibet.
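For the record, the kind of aggregation behind these numbers is straightforward once the reanalysis cubes are loaded. A minimal sketch with xarray, assuming daily basin-masked snowmelt, sublimation and snowfall fields in mm w.e. (the file and variable names are placeholders, not those of the actual dataset):

```python
import xarray as xr

# Placeholder file with daily, basin-masked fields in mm w.e. per day
ds = xr.open_dataset("indus_snow_reanalysis_daily.nc")

# Annual totals per pixel, then basin averages
annual = ds[["snowmelt", "sublimation", "snowfall"]].resample(time="1YS").sum("time")
basin_mean = annual.mean(dim=("x", "y"))

# Fraction of the snowfall "lost" by sublimation, per year
sublimation_fraction = basin_mean["sublimation"] / basin_mean["snowfall"]
print(sublimation_fraction.to_series())
```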
For this study I challenged myself to follow best practices in open science. I only used open data and the code to reproduce the study is available online. It was fun!
Paper: [https:]]
Dataset: [https:]]
Code: [https:]]
Many thanks to the authors of the High Mountain Asia snow reanalysis for sharing this amazing dataset!
Top picture: Mountains in Swat Valley Pakistan (by Designer429, CC BY-SA 3.0, via Wikimedia Commons)
-
16:33
Mask R-CNN for parcel delineation: lessons learned
sur Séries temporelles (CESBIO)Do state-of-the-art Deep Learning techniques allow each crop parcel to be delineated individually, as this recent paper suggests? This is what we tried to find out during an internship at CESBIO, by assessing the Mask R-CNN architecture for this task. Here are the main lessons learned.
Mask R-CNN in theory
In short, Mask R-CNN is an architecture that works in three main parts. First, a convolutional network called the backbone extracts features from the input image. From these features, a second part (the Region Proposal Network, or RPN) proposes and refines a number of regions of interest (rectangular bounding boxes) likely to contain a parcel. Finally, the last part takes the best proposals, refines them again, and produces a segmentation mask for each of them.
Left: the proposals kept by the RPN; right: the final Mask R-CNN detections with their segmentation masks.
A diagram summarizing the network is shown below; it comes from this short article, which can be a good entry point if you want to know more. Note that, in total, this network has about a hundred convolution layers, which makes it harder to handle, since it is more difficult to interpret the results obtained.
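For reference, this is roughly what instantiating such a network looks like with the Torchvision implementation discussed below; it is a generic sketch of the API adapted to one class and four input channels, not the exact configuration used during the internship:

```python
import torch
import torchvision

# Mask R-CNN with a ResNet-50 FPN backbone, reconfigured for 2 classes
# (background + "parcel") and 4 input channels (R, G, B, NIR).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None,
    weights_backbone=None,
    num_classes=2,
    image_mean=[0.0] * 4,   # normalization statistics for the 4 channels
    image_std=[1.0] * 4,
)
# Replace the first convolution so that it accepts 4 channels instead of 3
model.backbone.body.conv1 = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                            padding=3, bias=False)

model.eval()
with torch.no_grad():
    out = model([torch.rand(4, 256, 256)])  # list of dicts: boxes, labels, scores, masks
print(out[0]["boxes"].shape, out[0]["masks"].shape)
```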
Le Mask R-CNN en pratiqueAfin d'entraîner ce réseau, nous avons utilisé les données du Registre Parcellaire Graphique (RPG) distribué chaque année par l'IGN. Cette base de donnée étant lacunaire, nous y avons ajouté un complément produit par l'Observatoire du Développement Rural de l'INRAE. Afin de simplifier notre problème autant que possible, nous n'avons défini qu'une seule et unique classe, laissant ainsi de côté les types de culture fournies dans ces bases de données.
En ce qui concerne les données d'entrée, nous avons utilisé des images Sentinel-2 de niveau 2A fournies par Theia et plus exactement les 4 bandes spectrales à 10m de résolution (rouge, vert, bleu et proche infrarouge). Nous avons sélectionné 7 tuiles au dessus du territoire métropolitain, et choisi 4 dates différentes en 2018 pour chacune d'entre elle, durant les périodes de culture. Nous disposons aussi d'images super-résolues (à 5m plutôt que 10), qui sont produites grâce à un travail précédant au CESBIO (utilisant un Cascading Residual Network). Ces images permettent de gagner en netteté, par rapport aux bandes Sentinel-2 à 10m, aussi nous pensions également gagner en précision sur les contours prédits par nos modèles.
Tuiles sélectionnées (31UDR et 31UEP sont nos tuiles de test, les autres nos tuiles d'entraînement/validation)
Le Mask R-CNN ayant déjà été expérimenté dans la littérature pour cette même tâche de segmentation par instance des parcelles, nous avons tenté de reproduire le travail de ces auteurs. Bien que cette publication utilise une implémentation basée sur Tensorflow, nous avons d'abord cherché à reproduire ces résultats en utilisant l'implémentation fournie par le module Torchvision de Pytorch. Or nous n'avons jamais réussi à faire converger cette implémentation, et avons au passage noté de nombreuses différences entre ces deux implémentations. Les modèles pré-entraînés fournis l'ont été avec la base de données Image-Net pour Pytorch et COCO (plus dense en objets) pour Tensorflow, le pré-entraînement ne concerne que la partie backbone pour Pytorch mais couvre l'ensemble du réseau pour Tensorflow, et enfin l'implémentation elle même diffère dans l'ordre de certaines couches et le choix des primitives utilisées par les dernières étapes du réseau. Nous avons tenté d'isoler les éléments qui permettaient au réseau Tensorflow de converger, sans succès. Au temps pour la recherche reproductible !
RésultatsAprès avoir écarté l'implémentation Pytorch, nous avons évalué plusieurs scénarii d'apprentissage, que nous pouvons séparer en trois groupes différents, décrits dans le tableau ci-dessous. Le premier groupe de scenarii contient les trois premiers entraînements, qui utilisent chacun une seule des quatre dates de nos tuiles d'entraînement, afin de tester la capacité de généralisation d'une date à un autre. On utilise toujours, comme trois premiers canaux, les bandes RVB, auxquels on ajoute soit le PIR, soit le NDVI. Les trois scenarii suivants tentent d'utiliser une approche multi-temporelle : soit on se contente d'utiliser l'ensemble de nos dates pour chaque tuile (aboutissant à un jeux de données quatre fois plus grand), soit on extrait le NDVI de chaque date avant de les empiler ; on utilise ainsi une pile multi-temporelle. Enfin, les 3 derniers entraînements utilisent des images super-résolues.
Nom | Dates | Canaux | Résolution
T09NIR | Septembre | BVR - PIR | 10 m
T06NIR | Juin | BVR - PIR | 10 m
T09NDVI | Septembre | BVR - NDVI | 10 m
TADNIR | Toutes | BVR - PIR | 10 m
TADNDVI | Toutes | BVR - NDVI | 10 m
TMTNDVI | Toutes (empilées) | NDVI (dates 1 à 4) | 10 m
T09NIRSR | Septembre | BVR - PIR | 5 m
T09NDVISR | Septembre | BVR - NDVI | 5 m
T09NDVISRSA | Septembre | BVR - NDVI | 5 m
Avec ces entraînements, on obtient quelques résultats intéressants qualitativement (à défaut d'être bons quantitativement). Lors d'une inférence, nous produisons un ensemble de polygones, chacun assorti d'un score de confiance. Sur ce score de confiance, nous pouvons fixer un premier seuil, au-dessous duquel les prédictions ne seront pas considérées.
Pour ensuite les associer à notre ensemble de polygones cibles, nous avons utilisé un critère géométrique illustré ci-contre.
Ce critère estime si une cible (rectangle vert) et une prédiction (ellipse rouge) possèdent une intersection (en jaune) suffisante pour que leur correspondance soit jugée valide. En calculant deux ratios (jaune sur vert et jaune sur rouge), puis en prenant le minimum, on s'assure d'être le plus restrictif possible. La valeur obtenue est appelée RC, et on peut également fixer un seuil sur celle-ci afin d'être plus ou moins strict sur la qualité de nos prédictions. Une fois les associations entre les prédictions et les cibles effectuées, on peut alors calculer les métriques classiques de détection que sont la précision, le rappel et le F1-Score. La précision correspond à la proportion de nos prédictions qui correspondent effectivement à une cible, le rappel correspond à la proportion de cibles détectées, et le F1-Score est un compromis entre les deux.
Sur l'ensemble de nos cas d'usage, ce sont les entraînements T09NDVISR et TMTNDVI qui ont fourni les meilleurs résultats, comme on peut le voir dans le tableau ci-dessous. Nous y avons utilisé des seuils assez restrictifs, à savoir 0.8 pour les deux valeurs (la confiance et le RC).
Test | Précision | Rappel | F1-Score
T06NIR (sur juin) | 35.34 | 25.61 | 29.7
T06NIR (sur septembre) | 31.51 | 21.45 | 25.52
T09NIR (sur juin) | 30.08 | 20.8 | 24.6
T09NIR (sur septembre) | 30.31 | 21.79 | 25.35
T09NDVI (sur juin) | 31.03 | 21.43 | 25.35
T09NDVI (sur septembre) | 29.79 | 21.21 | 24.78
TADNIR | 34.51 | 25.11 | 29.07
TADNDVI | 39.35 | 26.88 | 31.94
TMTNDVI | 42.56 | 30.28 | 35.39
T09NIRSR | 34.24 | 38.26 | 36.13
T09NDVISR | 36.92 | 39.35 | 38.09
T09NDVISRSA | 34.38 | 37.78 | 36.0
Afin de comparer nos performances à celles des auteurs de la publication mentionnée plus haut, nous avons choisi un seuil de RC à 0.5 et un seuil de confiance à 0.7. Les résultats sont montrés dans le tableau ci-dessous.
Test | Précision | Rappel | F1-Score
T09NDVISR | 55.79 | 59.41 | 57.30
TMTNDVI | 75.73 | 55.86 | 64.13
OSO (sur 31UDR) | 45.89 | 22.54 | 30.23
Auteurs | 68.7 | 48.5 | 56.2
Notre pile multi-temporelle de NDVI semble donc fournir des informations pertinentes, et parvient à faire mieux que les résultats déclarés par les auteurs (sur leurs propres tuiles de test). La super-résolution est quant à elle un peu en dessous en termes de métriques de détection, et au final très proche des résultats obtenus par les auteurs (on gagne sur le rappel mais on perd sur la précision). Toutefois, les bonnes prédictions fournies semblent être de meilleure qualité : les contours obtenus (dont on peut voir un extrait ci-dessous, à deux échelles différentes) collent davantage aux parcelles de référence. L'entraînement à l'aide d'images super-résolues semble donc bien fournir des contours plus précis.
Parcelles cibles (en vert) et prédictions du modèle T09NDVISR (en bleu)
Pour finir, nous avons également soumis le produit OSO vectorisé à notre procédure d'appariement, afin de justifier la pertinence d'une approche de segmentation par instance. Nous avons fait ce test sur la tuile 31UDR, et on constate sans grande surprise que le rappel est très bas : puisque le produit OSO fusionne notamment des parcelles de mêmes cultures qui sont côte à côte, un grand nombre de cibles ne sont pas détectées. Cela justifie l'intérêt d'utiliser une approche de segmentation par instance. Malgré cela, les résultats obtenus sont, à ce stade, bien loin d'être exploitables pour un utilisateur final. Il est à noter cependant que, à cause de notre critère d'appariement des prédictions aux cibles, il nous est impossible de détecter la fragmentation (le fait de détecter une cible en plusieurs prédictions) ou l'agglomération (le fait de détecter plusieurs cibles avec une seule prédiction). Pourtant, sur l'image, certaines fragmentations ou agglomérations pourraient sembler légitimes ; nous sous-estimons donc nécessairement nos résultats.
Article réalisé dans le cadre d'un stage au CESBIO financé par le CNES.
Merci à Julien, Jordi et Olivier pour l'aide précieuse qu'ils m'ont apportée durant ce stage.
-
16:33
Feedback on Mask R-CNN for croplands delineation
sur Séries temporelles (CESBIO)Do state-of-the-art deep learning techniques allow for the individual delineation of each cropland, as suggested by this recent article? In the context of an internship at CESBIO, we tried to evaluate the performance of the Mask R-CNN architecture for this task. In this post, we summarize what we learned.
Mask R-CNN principles
To sum up, Mask R-CNN is an architecture made of three main parts. First, there is a convolutional network called the backbone, which produces features from an input image. From these features, a second part (called RPN, for Region Proposal Network) proposes and refines a certain number of regions of interest (as rectangular bounding boxes), which are likely to contain a single cropland. Finally, the last part extracts the best proposals, refines them once again, and produces a segmentation mask for each of them.
Left: Proposals kept by RPN; Right: Final detections made by Mask R-CNN along with their segmentation masks.
The figure below illustrates this network. It is extracted from this short external article, where you can learn more about Mask R-CNN. Another noticeable fact about this network is that it contains a lot of convolution layers - about a hundred - which makes it complex to handle: it is difficult to explain results and trends.
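To make the three stages above a bit more concrete, here is a minimal, hedged sketch using the Torchvision implementation discussed further below (this is not the Tensorflow code base eventually used for the experiments; the single "cropland" class and the dummy input are our own illustration choices):

```python
import torch
import torchvision

# One "cropland" class plus background, as in the experiments described below.
# Note: the default model expects 3-channel inputs; feeding 4 spectral bands
# (as done later in this post) would require adapting the first layer and the
# normalisation transform.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

image = torch.rand(3, 512, 512)      # dummy 3-band patch, values in [0, 1]
with torch.no_grad():
    outputs = model([image])         # one dict per input image

detections = outputs[0]
print(detections["boxes"].shape)     # (N, 4) refined bounding boxes
print(detections["scores"].shape)    # (N,) confidence scores
print(detections["masks"].shape)     # (N, 1, H, W) per-instance masks
```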
Mask R-CNN in practice
In order to train this network, we used the RPG database (Registre Parcellaire Graphique), distributed once a year by IGN. Since this database is not complete, we added a complement provided by the ODR (Observatoire du Développement Rural, an INRAE lab). We simplified our approach as much as possible by defining only one class, discarding the crop type information provided in the aforementioned databases.
As input data, we used level 2A Sentinel-2 images provided by Theia. More precisely, we used the 4 spectral bands at 10 m resolution (red, green, blue and near infrared). We selected 7 MGRS tiles over French territory. For each of them, we chose 4 different dates within the growing periods. We also have super-resolution images (at 5 m rather than 10 m), produced thanks to previous work at CESBIO (using a Cascading Residual Network). These images are sharper than the 10 m Sentinel-2 bands, so we expected better accuracy on the predicted contours when training models on them.
Selected tiles (31UDR and 31UEP are our test tiles, others are for training and validation)
Mask R-CNN has already been tested in the literature for this cropland instance segmentation task, so we tried to reproduce that work. Although the paper uses a Tensorflow-based implementation, we first tried to reproduce its results using the one contained in the Torchvision module of Pytorch. Yet we never managed to make this implementation converge, and we also noticed many differences between the two implementations. In particular, the provided pretrained models were trained on Image-Net for Pytorch and on COCO (which has a greater density of objects) for Tensorflow. Moreover, pretraining involves only the backbone for Pytorch, but the whole network for Tensorflow. Finally, the order of some layers and the choice of the features used in the last stage of the network differ. We tried to identify the main elements that allowed the Tensorflow network to converge, without success. So much for reproducible research!
Results
Using the Tensorflow implementation, we evaluated several training use cases, which can be separated into three groups, described in the table below. The first group contains the first three cases. Each of them uses only one date from our training tiles, in order to test the ability of the model to generalise to a date it has never seen. The first three channels are always the RGB bands, to which either NIR or NDVI is added as a fourth channel. The next three cases use multi-temporal input data: either we use all the dates for each training tile (yielding a dataset of patches four times larger), or we extract the NDVI from each date and stack them (thus using a multi-temporal NDVI stack). Finally, the last three cases use super-resolution images.
Name | Dates | Channels | Resolution
T09NIR | September | RGB - NIR | 10 m
T06NIR | June | RGB - NIR | 10 m
T09NDVI | September | RGB - NDVI | 10 m
TADNIR | All | RGB - NIR | 10 m
TADNDVI | All | RGB - NDVI | 10 m
TMTNDVI | All (stacked) | NDVI (dates 1 to 4) | 10 m
T09NIRSR | September | RGB - NIR | 5 m
T09NDVISR | September | RGB - NDVI | 5 m
T09NDVISRSA | September | RGB - NDVI | 5 m
With these cases, we get some qualitatively - but not quantitatively - interesting results. During inference, a set of polygons is produced, each with a confidence score. On this confidence score we can set a first threshold, below which predictions are not considered.
Then, to match them with our target polygons, we used a geometrical criterion, illustrated on the right.
This criterion estimates whether a target (green rectangle) and a prediction (red ellipse) have a sufficient intersection (in yellow) to consider their match as valid. By computing two ratios (yellow over green and yellow over red), then taking the minimum, we ensure that we are as restrictive as possible. The value obtained is called RC, and we can also set a threshold on it, in order to be more or less strict on the quality of our predictions. Once matches between predictions and targets have been computed, we can compute the classic detection metrics - precision, recall and F1-score. As a reminder, precision corresponds to the fraction of predictions that actually match a target, recall corresponds to the fraction of targets that are detected, and the F1-score is a compromise between the two.
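To fix ideas, here is a minimal sketch of this matching criterion and of the resulting metrics, assuming shapely geometries for the targets and predictions (the greedy matching and the names below are ours, not necessarily those of the actual evaluation code):

```python
from shapely.geometry import Polygon

def rc(target: Polygon, prediction: Polygon) -> float:
    """Matching criterion: minimum of the two intersection ratios."""
    inter = target.intersection(prediction).area
    if inter == 0.0:
        return 0.0
    return min(inter / target.area, inter / prediction.area)

def detection_metrics(targets, predictions, scores,
                      rc_threshold=0.8, confidence_threshold=0.8):
    """Precision, recall and F1-score after a simple greedy matching."""
    kept = [p for p, s in zip(predictions, scores) if s >= confidence_threshold]
    matched, true_positives = set(), 0
    for pred in kept:
        for i, tgt in enumerate(targets):
            if i not in matched and rc(tgt, pred) >= rc_threshold:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / len(targets) if targets else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```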
Among our use cases, T09NDVISR and TMTNDVI are those which have produced the best results, as we can see in the table below. Here, we used rather restrictive thresholds, i.e. 0.8 for both confidence and RC.
Test | Precision | Recall | F1-score
T06NIR (on June) | 35.34 | 25.61 | 29.7
T06NIR (on September) | 31.51 | 21.45 | 25.52
T09NIR (on June) | 30.08 | 20.8 | 24.6
T09NIR (on September) | 30.31 | 21.79 | 25.35
T09NDVI (on June) | 31.03 | 21.43 | 25.35
T09NDVI (on September) | 29.79 | 21.21 | 24.78
TADNIR | 34.51 | 25.11 | 29.07
TADNDVI | 39.35 | 26.88 | 31.94
TMTNDVI | 42.56 | 30.28 | 35.39
T09NIRSR | 34.24 | 38.26 | 36.13
T09NDVISR | 36.92 | 39.35 | 38.09
T09NDVISRSA | 34.38 | 37.78 | 36.0
To compare our performance with the authors' results, from the paper mentioned above, we chose an RC threshold of 0.5 and a confidence threshold of 0.7. The results are the following:
Test | Precision | Recall | F1-score
T09NDVISR | 55.79 | 59.41 | 57.30
TMTNDVI | 75.73 | 55.86 | 64.13
OSO (on 31UDR) | 45.89 | 22.54 | 30.23
Authors | 68.7 | 48.5 | 56.2
Our multi-temporal NDVI stack therefore seems to provide relevant information, and manages to outperform the authors' results (computed on their own test tiles). Super resolution gives slightly lower detection metrics - actually at the level of the authors' results (with a better recall but a lower precision). However, the good predictions seem to be of better quality: the contours provided (examples at two different scales are shown below) are closer to the reference croplands than for the other models. Training with super resolution images therefore seems to produce more accurate contours.
Target croplands (green) and prediction made by T09NDVISR model (blue)
Finally, we also ran our matching process on the vectorised OSO product, in order to justify the relevance of an instance segmentation approach. We used tile 31UDR for this test, and we can see, unsurprisingly, that the recall obtained is very low: since the OSO product merges neighbouring croplands of the same type, a large number of targets remain undetected. This justifies the interest of using an instance segmentation approach. Despite this, the results obtained are, at this stage, far from being usable as an end-user product. It should be noted, however, that due to the use of RC as a matching criterion, we are unable to measure fragmentation (i.e. detecting a target with several predictions) or agglomeration (i.e. detecting several targets with a single prediction). Yet, on the images, some fragmentations or agglomerations could seem legitimate, so we necessarily underestimate our performance.
Article written in the frame of my CESBIO internship funded by CNES. Many thanks to Julien, Jordi and Olivier for their help.
-
20:00
VENµS, the itsy-bitsy satellite that goes up and down
sur Séries temporelles (CESBIO)The Israeli and French VENµS satellite is a small research satellite designed to test two innovative missions: an optical mission with a frequent revisit on the French side, and an experiment with an ion thruster to maintain the satellite at a low altitude on the Israeli side.
Of course, we could not run both experiments at the same time, as the frequent revisit requires maintaining the satellite at a high altitude, 720 km, and passing over exactly the same place every second day, while the electric propulsion was there to move the satellite. It was therefore decided to split the mission into different phases, called VENµS Mission x (VMx):
- VM1 : Imaging mission at 720 km of altitude, with a 2 day cycle
- VM2 : Propulsion mission to lower the orbit altitude at 410 km using the Ion Thruster
- VM3 : Propulsion mission to keep the orbit at 410 km using the Ion Thruster
- VM4 : Propulsion mission to raise the orbit to 560 km altitude using the Hydrazine Thruster
- VM5 : Imaging mission at 560 km altitude, with a one day cycle
Where do we stand now? VM3 started at the beginning of September and will last until November. The altitude will be raised in December, and VM5 should start in January 2022. A call for proposals was issued this winter, for which we received 85 proposals asking for 219 sites. Gérard Dedieu and I screened all the proposals (Gérard did most of the work), and we ranked 45 proposals as excellent. We still have to check that the satellite will be able to observe the sites that correspond to these proposals, and a few of these excellent proposals might not be accessible. Optimizing the kinematics of the satellite takes some time, so we apologize for not yet being able to release the results of our selection. Stay tuned!
Meanwhile, if you are eager to get access to VENµS time series, the data on more than 100 sites are available from the Theia distribution site.
Click to view an interactive map of VENµS sites during VM1
-
11:46
A very green France in July 2021
sur Séries temporelles (CESBIO)When summers were hot and dry in the past years, our monthly cloud-free syntheses of Sentinel-2 surface reflectances for the month of July drew media attention. This will probably not be the case in 2021, as France thankfully kept its green colour in July.
Because of the cloud cover that allowed France to keep this beautiful green colour, some regions could never be observed without clouds by Sentinel-2, even though we use a period of 45 days for our syntheses. This is for example the case in the Basque Country and in the Landes, in South-West France, as shown in the image below. The residual clouds and their shadows are of course flagged in our products. This shows once again the importance of improving the revisit of the Sentinel-2 mission in the coming years to increase the probability of clear observations.
You can view these syntheses for yourself, and zoom in to 10 m resolution, using Theia's beautiful map dissemination tool [maps.theia-land.fr]. The tool also allows you to compare two different dates, and for France, we have made available all the data produced since 2018. Data from other regions produced by Theia are also available (Europe, Maghreb, Madagascar and Sahel).
To conclude, here are a few zooms on some French regions, where the differences between 2021 and the three previous years are particularly striking:
Zoom on the agricultural region of Champagne (there is not only wine in Champagne). In 2021, the harvests were later and the soils more humid, which explains the brownish colour.
Zoom on Haute-Loire, in Center-East France. In this region with a high proportion of grasslands and forests, the droughts of 2019 and 2020 had caused the grasslands to turn yellow. Their return to a beautiful green colour in 2021 brought relief to the ranchers and their livestock.
-
12:50
Une France bien verte en juillet 2021
sur Séries temporelles (CESBIO)Depuis quelques années, lors de chaque été sec en France, nos synthèses mensuelles de réflectances de surface issues de Sentinel-2 pour le mois de juillet ont été reprises dans les médias. Ce ne sera probablement pas le cas cette année, car la France est restée bien verte en juillet 2021.
En raison de la couverture nuageuse qui a permis de conserver cette belle couleur verte, certaines régions n'ont jamais pu être observées sans nuages par Sentinel-2, et pourtant, nous utilisons une période de 45 jours pour nos synthèses. C'est par exemple le cas au Pays Basque et dans les Landes, comme le montre l'image ci-dessous. Les nuages résiduels ainsi que leurs ombres sont bien entendu indiqués dans nos produits. Ceci montre une fois de plus l'importance d'améliorer la revisite de la mission Sentinel-2 dans les années qui viennent afin d'améliorer les probabilités d'observations claires.
Vous pouvez visualiser ces synthèses par vous-mêmes, et zoomer jusqu'à 10 m de résolution en utilisant le bel outil de diffusion cartographique de Theia http://maps.theia-land.fr. L'outil vous permet aussi de comparer deux dates différentes, et sur la France, nous avons mis à disposition l'ensemble des données produites depuis 2018. Les données des autres régions produites par Theia sont également disponibles (Europe, Maghreb, Madagascar et Sahel).
Pour finir, voici quelques zooms sur des régions françaises où les différences entre 2021 et les trois années précédentes sont particulièrement fortes :
Zoom sur la région agricole de Champagne. En 2021, les moissons ont été plus tardives, et les sols plus humides, ce qui explique cette couleur brune.
Zoom sur la Haute-Loire. Dans cette région avec une forte proportion de prairies et de forêts, les sécheresses de 2019 et 2020 avaient fait jaunir les prairies, qui ont retrouvé leur belle couleur verte en 2021, au grand soulagement des éleveurs.
Pour la première fois, je n'ai rien eu à faire pour préparer cet article, à part les copies d'écran et l'écriture de l'article. Ce sont les équipes du CNES et de ses sous traitants qui ont tout pris en charge. Un grand merci !
-
17:29
Format changes to L1C
sur Séries temporelles (CESBIO)A CNES colleague, member of the Sentinel-2 quality working group, just warned us of coming changes to the Sentinel-2 Level 1C format. From what I have heard, there will be two main changes :
- the reflectance will be coded with an affine law, with an offset and a gain, instead of the current linear law (where the offset is zero). The aim is to distinguish the no-data value (0) from a true null reflectance (see the sketch after this list).
- the GML vector masks will be replaced by raster masks
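To illustrate what such a change means for processing code, here is a minimal sketch of the digital number to reflectance conversion under an affine law. The names and values below (GAIN, OFFSET) are placeholders chosen by us, since the precise specification and sample products are not available yet:

```python
import numpy as np

NO_DATA = 0          # digital number reserved for no-data
GAIN = 10000.0       # hypothetical quantification value (placeholder)
OFFSET = -1000.0     # hypothetical additive offset (placeholder)

def dn_to_reflectance(dn: np.ndarray) -> np.ndarray:
    """Affine decoding: reflectance = (DN + OFFSET) / GAIN, no-data set to NaN."""
    reflectance = (dn.astype("float32") + OFFSET) / GAIN
    reflectance[dn == NO_DATA] = np.nan   # keep no-data distinct from zero reflectance
    return reflectance

# With these placeholder values, DN = 1000 decodes to a null reflectance,
# while DN = 0 stays no-data.
print(dn_to_reflectance(np.array([0, 1000, 11000], dtype="uint16")))
```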
Even if this could be handy for Sentinel-2 users, a huge quantity of code among the thousands of software packages that use Sentinel-2 will have to be adapted and validated to account for these changes.
ESA plans to implement the changes by September, while precise information on the changes as well as sample products are not available yet. Knowing the time needed to change operational software and run all the qualifications, I think this modification comes much too soon. Don't you agree? Please ESA, give us an additional month...
-
4:23
The role of snow in the Indus river basin
sur Séries temporelles (CESBIO)In the Indus river basin, the available water resources are abstracted almost entirely, mainly to irrigate crops in India, Afghanistan and Pakistan [1]. Most of the population lives in the low elevation regions of the basin, but a large part of the runoff and groundwater recharge comes from the high elevation regions. This is due to (i) the orographic enhancement of the precipitation and (ii) the reduction of the evaporative demand with elevation. In the Indus basin, the precipitation trend with elevation is not monotonic because a vast high elevation region in the north east lies in the rain shadow of the Himalayas (e.g. Ladakh). Also, global precipitation datasets (such as the one used below) are known to underestimate precipitation in the upper Indus [2].
Indus river basin population, precipitation and potential evapotranspiration by elevation band. The runoff production is maximum in high elevation areas, where precipitation is high and evaporation is low (data sources: GPWv4 [3], Terraclimate [4])
Above 2000 m, in the Karakoram, Hindu Kush and Himalayas mountain ranges, snowmelt is the major contributor to runoff. The snow melt contribution to runoff is also significant in the high elevation regions of the Ganges and Brahmaputra basins, but not as much as in the Indus.
Contribution of rain, snow melt and ice melt to runoff in the Indus, Ganges and Brahmaputra river basins in the areas above 2000 m (data source: Armstrong et al. [5])
Previous studies have estimated snowmelt in the Indus using temperature-index models [5, 5b]. A limitation of such models is that they do not account for sublimation. Field studies suggest that sublimation can be an important component of the water balance at the highest elevations [5c].
The recent release of the High Mountain Asia Snow Reanalysis [6] makes it possible to better characterize the hydrological significance of the snow cover in the Indus basin. This reanalysis is based on an energy balance model which computes sublimation among other processes. It also removes some biases in the meteorological input data by using remote sensing observations of the snow covered area.
This dataset provides daily estimates of snowmelt over the period 2000-2016. During that period, the annual basin snowmelt ranged between 100 mm and 140 mm except for the first year (maybe an artifact related to the model initialization?).
Cumulated snowmelt in the Indus river basin (data source: HMASR [6])
The average annual basin snowmelt was 118 mm (= 102 gigatons of water per year). This annual snowmelt represents about a quarter of the average annual basin precipitation. Here I used the WorldClim mean precipitation value (424 mm), which falls within the range of previous estimates reported by [1] (392 to 461 mm). I also compared it to the glacier ablation in the basin over the same 2000-2016 period [8]. Glacier ablation includes the melt of the seasonal snow cover over the glacier area. I also added the "imbalance" fraction of the glacier ablation, which gives an estimate of the ice melt contribution caused by climate warming over the period (glacier mass loss that was not compensated by snow accumulation).
Cumulated snowmelt in the Indus river basin (data sources: WorldClim [7], HMASR [6], Miles et al. [8])
The snowmelt rates were highest during the summer, which is also the monsoon season. During that period, monsoon rains and meltwater runoff can cause flooding. Snowmelt rates remained high after the monsoon, hence contributing to sustain river flow after the monsoon flood.
Daily snowmelt in the Indus river basin (data source: HMASR [6])
From the HMASR, we can also evaluate the amount of snow that was sublimated and which did not contribute to runoff.
Daily sublimation in the Indus river basin (data source: HMASR [6])
Over 2000-2016, the sublimation represented 11.7% of the total snow ablation.
Annual snowmelt and sublimation in the Indus river basin (data source: HMASR [6])
The sublimation generally peaked in late June, about a month before snowmelt.
Daily sublimation and snowmelt in the Indus river basin (data source: HMASR [6])
To conclude, we can learn from the HMASR that snowmelt is a significant but not dominant input to the water balance in the Indus river basin. In addition, the melt season overlaps with the monsoon season, when the issue is more an excess of water than a lack of water. About 12% of the seasonal snowfall was "lost" by sublimation.
A similar study could be done for other river basins. The code to reproduce this analysis is available in this repository: [https:]]
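The actual analysis code is in the repository linked above; as a quick sanity check, the basin-scale figures quoted in this post fit together as follows (only numbers given in the post are used; the implied sublimation flux is a derived, approximate value):

```python
# Basin averages over 2000-2016, as quoted in the post
mean_snowmelt = 118.0        # mm/year (HMASR)
mean_precipitation = 424.0   # mm/year (WorldClim)
sublimation_share = 0.117    # fraction of total snow ablation lost by sublimation

# Snowmelt as a fraction of precipitation ("about a quarter")
print(f"snowmelt / precipitation = {mean_snowmelt / mean_precipitation:.0%}")

# Ablation = melt + sublimation, so the implied basin-average sublimation flux is:
mean_sublimation = mean_snowmelt * sublimation_share / (1.0 - sublimation_share)
print(f"implied sublimation      = {mean_sublimation:.1f} mm/year")
```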
References
[1] Laghari, A. N., Vanham, D., and Rauch, W.: The Indus basin in the framework of current and future water resources management, Hydrol. Earth Syst. Sci., 16, 1063–1083, [https:] 2012.
[2] Immerzeel, W. W., Wanders, N., Lutz, A. F., Shea, J. M., and Bierkens, M. F. P.: Reconciling high-altitude precipitation in the upper Indus basin with glacier mass balances and runoff, Hydrol. Earth Syst. Sci., 19, 4673–4687, [https:] 2015.
[3] Center for International Earth Science Information Network - CIESIN - Columbia University. 2018. Gridded Population of the World, Version 4 (GPWv4): Population Density, Revision 11. Palisades, NY: NASA Socioeconomic Data and Applications Center (SEDAC). [https:] Accessed 01 July 2021.
[4] Abatzoglou, J.T., S.Z. Dobrowski, S.A. Parks, K.C. Hegewisch, 2018, Terraclimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015, Scientific Data 5:170191, [https:]
[5] Armstrong, R. L., Rittger, K., Brodzik, M. J., Racoviteanu, A., Barrett, A. P., Khalsa, S.-J. S., Raup, B., Hill, A. F., Khan, A. L., Wilson, A. M., Kayastha, R. B., Fetterer, F., and Armstrong, B.: Runoff from glacier ice and seasonal snow in High Asia: separating melt water sources in river flow, Reg Environ Change, [https:]] , 2018.
[5b] Kraaijenbrink, P.D.A., Stigter, E.E., Yao, T. et al. Climate change decisive for Asia’s snow meltwater supply. Nat. Clim. Chang. 11, 591–597 (2021). [https:]
[5c] Stigter EE, Litt M, Steiner JF, Bonekamp PNJ, Shea JM, Bierkens MFP and Immerzeel WW (2018) The Importance of Snow Sublimation on a Himalayan Glacier. Front. Earth Sci. 6:108. doi: 10.3389/feart.2018.00108
[6] Liu, Y., Y. Fang, and S. A. Margulis. 2021. High Mountain Asia UCLA Daily Snow Reanalysis, Version 1. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. [https:]] .
[7] Hijmans, R.J., S.E. Cameron, J.L. Parra, P.G. Jones and A. Jarvis, 2005. Very High Resolution Interpolated Climate Surfaces for Global Land Areas. International Journal of Climatology 25: 1965-1978. doi:10.1002/joc.1276.
[8] Miles, E., McCarthy, M., Dehecq, A. et al. Health and sustainability of glaciers in High Mountain Asia. Nat Commun 12, 2868 (2021). [https:]
-
18:43
Drought returns to California
sur Séries temporelles (CESBIO)Only a few years after its worst drought in 1,200 years (2011-2016), drought returns to California.
Lake Oroville is both a witness and a victim of the extreme hydro-meteorological variability in the Golden State. It provides water to "homes, farms, and industries in the San Francisco Bay area, the San Joaquin Valley and Southern California". In 2017, 188,000 northern California residents were evacuated as they were under the threat of a spillway failure at Oroville Dam following severe rainfall. Today, the water level is so low that its hydroelectric power plant will be forced to shut down for the first time since it opened in 1967.
In May 2021, water levels of Lake Oroville dropped to 38% of capacity (Wikipedia). © Frank Schulenburg
Lake Oroville's water level is closely monitored by the California Department of Water Resources. However, it is also possible to watch the water area fluctuations in the growing archive of Sentinel-2 images (below, using the NDWI index in the EO Browser).
Lake Oroville in June (Sentinel-2 NDWI)
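For reference, the NDWI shown above is just a normalised difference of the green and near-infrared bands; a minimal sketch with rasterio could look like this (the file names are placeholders, and the zero threshold is a common but simplistic choice):

```python
import numpy as np
import rasterio

# Placeholder file names: the 10 m green (B03) and near-infrared (B08) bands
# of a Sentinel-2 acquisition covering Lake Oroville.
with rasterio.open("S2_B03_10m.tif") as src:
    green = src.read(1).astype("float32")
with rasterio.open("S2_B08_10m.tif") as src:
    nir = src.read(1).astype("float32")

# McFeeters NDWI: positive values highlight open water.
ndwi = (green - nir) / (green + nir + 1e-6)
water_mask = ndwi > 0.0
print(f"water fraction in the scene: {water_mask.mean():.1%}")
```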
In 2018, Lake Oroville could feel the heat of the Camp Fire, the deadliest and most destructive wildfire in California's history.
Lake Oroville and #campfire in the same satellite picture. Two expressions of climate hazards in #California? pic.twitter.com/wqNtf38rlW
— Simon Gascoin (@sgascoin) November 18, 2018
In 2020, Lake Oroville had a ringside seat regarding the largest wildfire season recorded in California's modern history (the image is hazy because of the wildfire smoke).
The snowpack is the primary source of runoff in the Lake Oroville catchment, but also in the entire western United States where 53% of the total runoff originates as snowmelt (Li et al. 2017).
This year's snow deficit in the western USA can be observed in the 20-year dataset of MODIS observations. Here is a snapshot from my Western USA Snow Monitor:
In May 2021, the snow cover area had reached one of its lowest values since 2001 (01 May = day 121), reflecting the lack of snowfall during the winter.
The current conditions do not augur well for the farmers who need this water to irrigate their crops during the dry summer season. It also means that they will have to further deplete the groundwater, which is causing ground subsidence.
-
18:12
Enneigement des Pyrénées en 2021
sur Séries temporelles (CESBIO)Malgré des chutes de neige tardives en mai, la durée d'enneigement est déficitaire dans les Pyrénées cette année comme on peut le voir sur cette synthèse d'images satellitaires:
On se souvient aussi de la tempête Filomena qui a blanchi le piedmont pyrénéen le 10 janvier, bien visible sur le graphique ci-dessous.
Évolution de la surface enneigée depuis le 1er janvier pour toutes les années depuis 2001 (exprimée en fraction du domaine dessiné ci-dessus, c'est-à-dire : 1.0=100% de la surface est enneigée)
Suivant une suggestion de Christophe Cassou, j'ai aussi tracé la carte des anomalies, c'est-à-dire la différence par pixel entre la durée d'enneigement de l'année en cours et la durée d'enneigement moyenne observée entre 2001 et 2019.
Carte des anomalies de durée d'enneigement (2021 "moins" la climatologie)
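À titre d'illustration, le calcul de cette carte d'anomalies revient à une simple différence par pixel ; en voici une esquisse minimale avec numpy (les tableaux ci-dessous sont des exemples synthétiques, pas les données MODIS réelles) :

```python
import numpy as np

# Exemples synthétiques : durée d'enneigement (en jours) par pixel.
# duree_2021 : tableau 2D (lignes, colonnes) pour l'année en cours
# duree_2001_2019 : tableau 3D (années, lignes, colonnes) pour la climatologie
duree_2021 = np.random.randint(0, 250, size=(500, 500)).astype("float32")
duree_2001_2019 = np.random.randint(0, 250, size=(19, 500, 500)).astype("float32")

climatologie = duree_2001_2019.mean(axis=0)   # durée moyenne 2001-2019 par pixel
anomalie = duree_2021 - climatologie          # négatif = déficit, positif = excédent

print(f"anomalie moyenne sur le domaine : {anomalie.mean():+.1f} jours")
```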
On remarque que le déficit est surtout marqué côté français, alors que le versant espagnol présente des zones ayant connu une durée d'enneigement supérieure à la moyenne. Ceci est encore la conséquence de Filomena, qui a surtout touché la péninsule ibérique (souvenez-vous de Madrid sous la neige !).
Enfin, si on extrait les altitudes des pixels négatifs (rouge) et positifs (bleus) on constate que les secteurs déficitaires sont situés aux altitudes moyennes et hautes. Les secteurs excédentaires sont minoritaires sauf à basse altitude.
Il est probable que le dépôt de poussières saharienne exceptionnel du 6 février 2021 ait contribué à accélérer la fonte printanière, réduisant ainsi le nombre de jours enneigés.
Pour suivre l'évolution en temps réel consultez cette page : [https:]]
La méthode pour générer ces cartes est expliquée dans cet article :
Gascoin, S., Hagolle, O., Huc, M., Jarlan, L., Dejoux, J.-F., Szczypta, C., Marti, R., and Sánchez, R.: A snow cover climatology for the Pyrenees from MODIS snow products, Hydrol. Earth Syst. Sci., 19, 2337–2351, [https:]] , 2015.
-
19:48
Notre nouveau travail, jardiniers !
sur Séries temporelles (CESBIO)Après qu'un premier tracteur s'est approché trop près de notre station ROSAS, le suivant est passé un peu trop loin. Même si on n'est jamais contents, c'est bien mieux comme ça ! Ce tracteur a semé le maïs sur la parcelle sur laquelle nous avons construit la station. S'il est acceptable de laisser un cercle de terre inculte de deux mètres autour du mât (le pied du mât est masqué dans le traitement), l'espace sans plants de maïs était beaucoup trop important et atteignait une distance de six mètres.
L'équipe Maja (la guêpe, avec des bandes jaunes et noires) au travail
Pour résoudre ce problème, le 28 mai, notre petite équipe est allée jardiner pour repiquer quelques pousses de maïs au pied du mât. Sans tracteur, et à 500 m du point d'eau le plus proche, ce fut un sacré effort, mais bien plus amusant que de passer la journée en visioconférence. Comme toujours, Jérôme Colin a parfaitement organisé cette sortie, et vous devriez voir comment il bichonne les pieds de maïs, Micael Lassalle fait un excellent porteur d'eau, je suis l'expert de la pelle ;).
Ci-dessous le résultat vu d'en haut, nous espérons que nos petites plantes pousseront aussi bien au pied du mât que dans leur emplacement initial.
-
20:41
Our new job: gardeners !
sur Séries temporelles (CESBIO)After a first tractor came too close to our ROSAS station, the next one passed by a little too far. Even if we're never satisfied, it is much better this way! This tractor sowed the maize on the plot on which we have built the station. While it is acceptable to leave a two-meter circle of uncultivated ground around the mast (the foot of the mast is masked in the processing), the gap without maize plants was much too large and reached six meters.
The MAJA team at work (you know that MAJA, the wasp, has yellow and black stripes)
To solve that issue, on the 28th of May, our little team went gardening to transplant a few sprouts of maize to the foot of the mast. Without a tractor, and at 500 m from the closest water source, it was quite an effort, but much more fun than spending the day in video conferences. As always, Jérôme Colin organized this perfectly, and you should see how he pampers the sprouts; Micaël Lassalle makes an excellent water carrier, and I am the shovel expert.
Below is the result as seen from above; we hope our little plants will grow as well there as in their initial location.
-
9:48
Can commercial satellites do the job of Sentinel-HR ?
sur Séries temporelles (CESBIO)This post intends to answer a question about Sentinel-HR, that we have had quite often inside or outside CNES :
Securing continuity in a critical timeseries requires user community foresight, Programme justification, 10 yrs Agency prep, and finances. Whilst Sentinel-HR may well be an excellent/valid idea, I’m left asking why certain Copernicus contributing missions couldn’t plug this gap?
— Mark Drinkwater (@kryosat) April 29, 2021
Sentinel- HR is a mission project, currently studied in phase zero, for a free and open, global, high resolution (~2m), repetitive (~20 days), systematic earth observation mission in optics. Sentinel-HR would also provide, in one pass, a stereoscopic observation with a low difference in angle, but allowing a terrain restitution with an accuracy of 3 to 4 meters. For instance, this capability could be useful to monitor the evolution of worldwide glaciers, as in this study recently published in Nature. The latter study was mostly based on a sensor, ASTER, that acquired a vast and open archive of stereo images since late 1999. To our knowledge, no replacement is planned for ASTER.
"Copernicus contributing missions", in the European glossary, are commercial satellite missions (sometimes funded by the member states), whose data can be bought by Copernicus to serve the interests of European users. The table below shows a few examples of high resolution satellite constellations. The number of satellites in this category is of course much greater, but the satellites identified below are emblematic examples of what is possible.
These satellites may be sorted into two categories:
- on-demand acquisition or tasked satellites (such as Pleiades, Pleiades Neo, Maxar Worldview-3)
- systematic acquisition satellites (or coverage satellites) (such as Planet, and maybe someday UrtheDaily)
Our Sentinel-HR mission would of course be a coverage mission.
Tasked missions
VHR tasked satellite missions
Mission | N° of satellites | Swath width (km) | Multi-spectral resolution (m) | Surface/day (M km²)
Pleiades | 2 | 20 | 2.4 | 1
Pleiades Neo | 4 | 14 | 1.2 | 2
SPOT6/7 | 2 | 60 | 8 | 4
Worldview 2 & 3 | 2 | 13 | 1.2 | 1.2
Planet Skysat | 21 | 6 | 1 | 0.4
A few examples of VHR satellites with on-demand acquisition (we did not include the Skysat satellites in our analysis below because half of them are not heliosynchronous, and the others have a different overpass time).
There are 150 M km² of emerged lands, so in order to fulfil its mission requirements (20 days revisit at 2 meters resolution), Sentinel-HR should acquire about 8 M km² daily (accounting for the 20% overlap required for mosaics, cloud and shadow detection, etc.). Accomplishing this with tasked VHR satellites would require dedicating 100% of the capacity of the satellites in the table above; excluding SPOT6/7, which does not have the necessary resolution, it would even amount to about 200%. If we now consider their stereoscopic capabilities, the daily coverage from tasked VHR satellites would drop drastically. Even if reserving this amount of capacity was possible and affordable (and we think it is not), there are still plenty of issues with sensors, lifetimes, orbits, resolutions, swaths, viewing angles and spectral band differences which would make it very difficult to derive consistent, long term datasets. If we set those issues aside and admit that maybe only 20% of the total capacity can be dedicated to a public observation service, then we will need to start choosing which areas are observed and which are not. And inevitably, the data will be missing for someone, sometime, somewhere that we cannot foresee.
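The capacity figures above follow from simple arithmetic; here is a small back-of-the-envelope sketch using only the numbers from the table and the text (the required daily coverage of 8 M km², overlap included, is taken as given):

```python
# Daily acquisition capacity of the tasked missions listed above (M km²/day),
# excluding the Skysat satellites as explained in the table caption.
tasked_capacity = {"Pleiades": 1.0, "Pleiades Neo": 2.0,
                   "SPOT6/7": 4.0, "Worldview 2 & 3": 1.2}

needed_daily = 8.0                                    # M km²/day for Sentinel-HR
total = sum(tasked_capacity.values())                 # 8.2 M km²/day
without_spot = total - tasked_capacity["SPOT6/7"]     # 4.2 M km²/day

print(f"share of total tasked capacity:       {needed_daily / total:.0%}")        # ~100%
print(f"share excluding SPOT6/7 (resolution): {needed_daily / without_spot:.0%}") # ~190%
```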
Acquisitions by Pleiades 1A and 1B over France for the first three weeks of April 2021. Although all these images are great and locally useful, probably less than 20% of the French surface is covered (from the Airbus geostore catalog).
Coverage missions
VHR coverage satellite missions
Mission | N° of satellites | Swath width (km) | Multi-spectral resolution (m) | Surface/day (M km²/sat)
Planetscope Superdove | 80 | 19.5 | 4 | 0.5
Urthedaily | 8 | 360 ? | 5 | 25 ?
A few examples of VHR satellites with systematic acquisition (until the limit of their capacity is reached). We call these missions "coverage satellites". For Urthedaily, although a launch is announced in 2023, there is not much literature, and we had to guess the swath width and acquisition capacity.
The Planet constellation is closer to the type of mission we consider for Sentinel-HR, allowing a daily revisit at 4 m resolution, which is twice as coarse as what is expected for Sentinel-HR. The issue with Planet is data quality. As the optics have a very small aperture, the acquisitions are made with very broad spectral bands that overlap each other. The signal to noise ratio is also not in the range of the Sentinels. Another difficulty lies in the very small field of view, even if the most recent model of the constellation has reached almost 20 kilometers. As at least a few kilometers of overlap are necessary between adjacent swaths, the world is in fact covered by a huge amount of spaghetti stitched together. The orbits of the satellites are not maintained, so users have to handle different overpass times when processing time series, which degrades the data quality.
On paper, the Urthedaily constellation of 8 satellites could be much more interesting, even if its resolution is also twice as coarse as the one needed by Sentinel-HR. But there is not much information on this constellation on the internet, and it remains hypothetical, although announced in 2022.
Neither mission (Planet nor Urthedaily) provides stereoscopy, which is one additional reason why they do not fulfill the objectives of Sentinel-HR.
Conclusions
Our review may have missed newly launched or planned missions, including Chinese missions for instance, and we do not claim to have a complete overview of very high resolution missions in mind. But the tasked missions, with their off-nadir acquisition capabilities, are not adapted to a systematic coverage, and too many expensive satellites would be required to fulfill the Sentinel-HR mission. So far, the only existing high resolution coverage mission, Planet, does not meet the data quality standards of the Sentinels, and would need a factor-of-2 enhancement of its resolution. Its stereo capabilities also remain modest and not systematic.
Their daily revisit is of course a plus compared to Sentinel-HR, but we believe a Very High Resolution mission does not require such a high revisit for a lot of applications (urban areas, infrastructure, forests, glacier evolution, coasts, rivers, hedges...), and that it is more reasonable to try to merge a low revisit, very high resolution mission such as Sentinel-HR with a frequent revisit, high resolution mission such as Sentinel-2 or Sentinel-2 NG. Moreover, neither Planet nor Earthdaily will provide the stereoscopic capability included in Sentinel-HR.
Finally, let's recall that when we were preparing Sentinel-2, we often heard that SPOT and SPOT-like satellites already provided this kind of data, that a Sentinel-2 mission was therefore not necessary, and that it would kill private earth observation missions. 20 years later, commercial earth observation is thriving, with a lot of different very high resolution missions, while the Sentinel-2 mission is an extraordinary success which has proved useful to hundreds of thousands of people.
Moreover, we absolutely do not know what the landscape of VHR earth observation will look like in the 2030s, and maybe a privately owned mission could answer our needs some day. But if we want its data to be free and open, its entire image archive will have to be bought by the public sector (the EU for instance) and provided to the community. How can we be sure that this will be cheaper and better suited to our needs than a mission designed by a space agency?
The post was prepared by Olivier Hagolle, Julien Michel (CESBIO) and Etienne Berthier (LEGOS)
-
1:25
Neige orange : premiers résultats
sur Séries temporelles (CESBIO)Trois mois après le dépôt de poussières sahariennes qui avait coloré le manteau neigeux en orange, et grâce à la participation de nombreux citoyens-scientifiques, nous sommes en mesure d'annoncer que la masse de poussière tombée dans les Alpes et les Pyrénées varie entre 0.2 et 30 grammes par mètre carré, ce qui représente des centaines de milliers de tonnes de poussière ! Ce dépôt pourrait entraîner une réduction de la durée d'enneigement de plusieurs dizaines de jours. Mais il faudra attendre la fin de la saison pour tirer des conclusions définitives sur l'enneigement de cette année pleine de surprises (le mois d'avril ayant été le plus froid depuis 1994, et l'hiver joue les prolongations dans nos massifs avec des chutes de neige tardives).
Échantillons prélevés par Anne et Florence, Cap du Carmil (Ariège), crédits photo : Anne Barnoud.
-
11:42
Premiers retours sur le nouveau traitement géométrique Sentinel2
sur Séries temporelles (CESBIO)Si vous travaillez avec des produits Sentinel2, vous avez probablement entendu dire que le nouveau traitement géométrique de l'ESA est actif depuis le 1er avril 2021. Ce nouveau traitement se base sur l'affinage des paramètres de prise de vue par rapport à un ensemble d'images de référence (Global Reference Image, GRI), et doit diminuer l'erreur de localisation absolue de 11 mètres (pour 95.5% des produits) à moins de 8 mètres. Plus important encore, l'erreur de recalage multi-temporelle (entre images d'une série temporelle pour une tuile) doit diminuer de 12 mètres (pour 95.5% des produits) à mieux que 5 mètres, et même 3 mètres depuis une même orbite (source : ESA Data Quality Reports).
Au CESBIO, nous savons que le calage multi-temporel des produits Sentinel2 peut être problématique dans certains cas, et nous avons récemment développé une chaîne de traitement appelée StackReg qui permet d'estimer rapidement les biais de localisation relatifs pour une grande quantité de produits. Cet outil calcule les translations à appliquer à la géo-localisation de chaque image afin d'améliorer la cohérence spatiale de la pile multi-temporelle, comme on peut l'observer dans la vidéo ci-dessous.
[https:]]Haut, de gauche à droite : pile Sentinel2 sans recalage, pile Sentinel2 recalée avec les translations calculées par StackReg, dérivée temporelle sans recalage, dérivée temporelle avec recalage. Bas : profil temporel du NDVI sans recalage (bleu) et avec recalage (rouge) au point indiqué par une croix rouge dans les images, et amplitude du recalage (tirets gris).
StackReg en quelques lignes
StackReg est un outil que j'ai dû développer dans le cadre de mon travail sur l'élaboration d'un prototype de chaîne de fusion spatio-temporelle pour l'étude de phase 0 Sentinel-HR. Pour une présentation plus complète, vous pouvez consulter la présentation donnée récemment dans le cadre d'un cycle de conférences au CESBIO : les planches (anglais) sont disponibles ici, et la vidéo de l'exposé (français) est disponible ici.
StackReg met en correspondance toutes les images d'une tuile Sentinel2 donnée disponibles dans l'archive Theia avec l'image possédant la meilleure couverture des pixels de terres émergées (excluant nuages, saturations, eaux libres et données manquantes), grâce à l'algorithme SIFT, qui génère des milliers de points homologues sous-pixelliques. Les images sont découpées en sous-tuiles, et l'algorithme de mise en correspondance est appliqué à l'échelle d'une sous-tuile, afin de réduire le coût d'appariement des points et d'éliminer les appariements aberrants. Une fois tous les points homologues collectés, ceux qui correspondent à un déplacement supérieur à 20 mètres sont éliminés, car les Data Quality Reports nous disent que l'erreur de calage multi-temporel est inférieure à 12 mètres dans la plupart des cas. Ce procédé est similaire à celui utilisé dans CARS (chaîne de photogrammétrie libre du CNES).
Ce procédé est distribué sur le centre de calcul haute performance (HPC) du CNES, et le traitement de l'archive complète pour une tuile prend un peu moins de 15 minutes (une fois les données téléchargées).
Performances de mise en correspondance pour l'archive complète disponible pour la tuile 31TCJ. Première ligne : intervisibilité (pourcentage de terre émergées visible dans les deux images), deuxième ligne : nombre de points homologues avant et après filtrage (échelle logarithmique), troisième ligne : amplitude de la moyenne des translations générées par les points homologues, quatrième ligne : amplitude de l'écart type des translations générées par les points homologues.
Positions relatives des images calculées par StackReg
Une fois les translations de l'ensemble des images de l'archive vers l'image de références calculées, nous pouvons en déduire les positions relatives de ces images en plaçant arbitrairement l'image de référence au centre du repère. Comme il n'y a pas de raison particulière pour que cette image de référence soit mieux localisée en absolu que les autres, nous calculons ensuite la position moyenne de l'ensemble des images, et l'utilisons comme position cible pour en déduire les translations à appliquer à chaque image pour recaler la pile de données, comme illustré sur la figure de droite. Ce type de graphe permet également d'analyser la précision géométrique et la cohérence du calage multi-temporel des données (mais pas la localisation absolue, nous y reviendrons).
La liste des translations estimées est écrite dans un fichier csv, qui est la seule sortie de StackReg (stocker les sorties de StackReg est donc très bon marché). Ce fichier csv peut être utilisé pour ré-échantillonner les images afin de générer une pile de données recalées à la volée (par exemple en utilisant la classe WarpedVRT de rasterio).
Ce procédé permet une amélioration significative de la cohérence spatiale de la pile multi-temporelle, même quand on considère des paires d'images qui n'ont pas été mises en correspondance l'une avec l'autre, comme le montre la figure suivante. Nous pouvons en déduire que le calage est cohérent avec les chiffres des Data Quality Reports, et que StackReg est très efficace pour construire une pile multi-temporelle cohérente spatialement.
Estimation de l'amplitude de l'erreur de calage entre toutes les paires possibles d'images avec un taux de couverture supérieur à 60% de la tuile 31TCJ sur la période 2018-2019, en utilisant un extrait de 2000x2000 pixels, sans et avec StackReg.
Que peut nous dire StackReg du nouveau traitement géométrique Sentinel2 ?
Positions relatives des images de la tuile 31TCJ
Nous pouvons utiliser le même type de diagramme de dispersion pour voir comment les images à partir du 1er avril 2021 (date à laquelle le nouveau traitement est activé si possible) se comportent, et voir comment StackReg localise ces images par rapport aux autres. Voici ce que cela donne pour la tuile 31TCJ. Nous pouvons voir que les croix rouges (produits avec le nouveau traitement géométrique) sont localisées à l'intérieur de l'ellipse à 99%. De plus, elles semblent relativement groupées, ce qui suggère moins de variation dans leurs positions et va dans le sens d'un calage multi-temporel meilleur que 5 mètres, à l'exception d'un point isolé en bas de l'ellipse (mais peut-être que le nouveau traitement géométrique n'était pas actif pour cette image). Nous pouvons également noter que, même si nous n'avons pas encore assez d'acquisitions avec le nouveau traitement actif pour en être sûrs, la position moyenne des croix rouges serait localisée 2.5 mètres au nord du centre de l'ensemble des images, ce qui semble indiquer que nous devrions utiliser cette position estimée après le 1er avril 2021 pour recentrer les images, afin d'améliorer la localisation absolue de l'ensemble.
Ces observations se confirment si l'on regarde la matrice de cohérence spatiale pour les dates postérieures au 1er avril 2021. Cette cohérence semble bonne avec le nouveau traitement géométrique, à l'exception d'une image (qui doit être celle localisée au bas de l'ellipse). Nous pouvons également observer que StackReg permet d'améliorer encore cette cohérence et de ramener cette image fautive avec les autres.
Matrice de cohérence spatiale avec le nouveau traitement géométrique Sentinel2 pour la tuile 31TCJ
Si nous nous intéressons à d'autres tuiles, nous pouvons voir que les mêmes conclusions s'appliquent. 30TYS montre un groupement très resserré d'acquisitions. À nouveau, le centre de l'ensemble des positions ne correspond pas au centre des dates bénéficiant du nouveau traitement géométrique, ce qui suggère que nous pourrions utiliser ce dernier pour améliorer la localisation absolue de l'ensemble. 31TGL montre également un groupe très resserré, cette fois-ci un peu plus loin de notre ellipse de confiance. À nouveau, nous sommes probablement centrés sur une localisation erronée. Les mêmes conclusions peuvent être tirées pour les tuiles 30TYQ et 30TXT (voir graphes à la fin de ce billet).
Que pouvons-nous en conclure ? Bien sûr, nous devrons confirmer tout cela quand plus de produits seront disponibles mais... ça marche ! Nous avons juste à attendre le retraitement complet de l'archive (incluant le retraitement des produits L2A)... Dans l'intervalle, StackReg peut nous aider à constituer de longues séries temporelles Sentinel2 avec une meilleure cohérence spatiale, à améliorer encore la cohérence des produits qui bénéficient du nouveau traitement géométrique, et parfois même à attraper des images anormalement localisées pour les ramener avec les autres. Et nous allons regarder de plus près les améliorations potentielles de la localisation absolue de la pile recalée en utilisant la moyenne des positions bénéficiant du nouveau traitement géométrique.
Positions relatives des images de la tuile 31TYS
Positions relatives des images de la tuile 31TGL
Positions relatives des images de la tuile 30TYQ
Positions relatives des images de la tuile 30TXT
-
11:41
A first peek at new Sentinel2 geometric processing
sur Séries temporelles (CESBIO)If you are working with Sentinel2 products, you probably heard that ESA's new geometric processing has been active since the 1st of April 2021. This new processing, based on the geometric refinement of viewing parameters with respect to a Global Reference Image (GRI), should bring the absolute location error from 11 meters (95.5% of products) down to better than 8 meters and, more importantly, the multi-temporal registration from 12 meters (95.5% of products) down to better than 5 meters, and even 3 meters for a single orbit (source: ESA Data Quality Reports).
At CESBIO, we know that the multi-temporal registration of Sentinel2 products may be problematic in some cases, and we recently developed a processing chain, named StackReg, that allows quick estimation of relative location biases for a large amount of products. This processor computes offsets to apply to the image geo-location information in order to improve the spatial registration of the stack, as shown in this example video.
[https:]]Top, from left to right: S2 stack without registration, S2 stack with StackReg computed offsets, temporal derivative without registration, temporal derivative with StackReg computed offsets. Bottom: NDVI profile at the red cross location without registration (blue), with registration (red) and registration amplitude (dotted gray).
StackReg in a nutshell
StackReg is a tool that I needed to develop on my way to a spatio-temporal fusion processing chain for the Sentinel-HR phase 0 study. For a complete introduction, you can watch the talk I gave for a lab workshop: the slides (English) are available here, and the conference video (French) is available here. For those in a hurry, here are the main things to know about it.
StackReg matches all images of a given Sentinel2 tile available in the Theia archive against the image with the highest ground coverage (excluding clouds, saturation, open waters and edges), with the SIFT algorithm, which yields thousands of sub-pixel pairs of points called matches. Images are broken into sub-tiles, and matching is done at the sub-tile level in order to reduce the matching cost and discard obvious outliers. Once all matches to the target image are collected for a given image, matches that correspond to an offset of more than 20 meters are discarded, since we know from the Data Quality Reports that the multi-temporal registration should be better than 12 meters. This process is similar to the one used in CARS (CNES open-source photogrammetry pipeline).
This matching process is distributed on the CNES High Performance Computing center, and processing the full archive for one tile takes a little less than 15 minutes (once the data have been downloaded).
Matching performances for the full archive of the 31TCJ tile. First line: intervisibility (amount of ground pixels visible in both images); second line: number of points before and after filtering (notice the logarithmic scale); third line: mean offset amplitude; fourth line: standard deviation of the offset amplitude. The dashed gray line indicates the target date.
Relative positions of images computed by StackReg.
Once we computed the offset to the reference image for all images in the archive, we can derive the relative positions of all images in a space by considering the reference image as the origin of our frame. Since there is no particular reason for this image to be better than the others, we then compute the mean position of all images, and use this as the target position, from which we derive the offsets that allow to register all images together, as illustrated in the figure on the right. This kind of graph can also be used to analyse the geometric accuracy and multi-temporal registration of data (but not absolute location, we will get to that).
The list of offsets is compiled in a csv file, which is the only output of StackReg (storing StackReg outputs is therefore very cheap). This csv file can then be used to resample images or to generate registered stacks on the fly (for instance using the WarpedVRT feature of rasterio), as sketched below.
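Here is an illustrative sketch of how one of these offsets could be applied with rasterio; the sign convention and the choice of bilinear resampling are assumptions for the example, and the same shift could be expressed lazily through a WarpedVRT rather than materializing the array.

```python
# Illustrative sketch: resample one image, shifted by the (dx, dy) offset read from
# the StackReg csv, back onto its original grid (not the actual StackReg tooling).
import numpy as np
import rasterio
from affine import Affine
from rasterio.enums import Resampling
from rasterio.warp import reproject

def read_registered(image_path: str, dx_m: float, dy_m: float) -> np.ndarray:
    """Apply a (dx_m, dy_m) shift in meters to band 1 of `image_path`, output on the original grid."""
    with rasterio.open(image_path) as src:
        data = src.read(1)
        corrected = np.zeros_like(data)
        # Declare where the content really is by shifting the source geotransform...
        shifted_transform = Affine.translation(dx_m, dy_m) * src.transform
        # ...then warp back onto the original (unshifted) grid.
        reproject(
            source=data,
            destination=corrected,
            src_transform=shifted_transform,
            src_crs=src.crs,
            dst_transform=src.transform,
            dst_crs=src.crs,
            resampling=Resampling.bilinear,
        )
    return corrected
```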
This yields a significant improvement of the spatial registration consistency in the multi-temporal stack, even when considering pairs of images that have not been matched together, as shown in the following figure. From this figure, we can say that the initial registration is consistent with the Data Quality Report, and that StackReg is very efficient at building a coherent multi-temporal stack.
Estimated mis-registration amplitude of possible pairs of images from tile 31TCJ, 2018-2019, with coverage >60%, 2000x2000 extract, without and with StackReg.
What can StackReg tell us about Sentinel-2's new geometric processing?
31TCJ location scatter plot
We can use the same scatter plot diagram to see how images acquired after 2021.04.01 behave (the date from which the new processing is activated when possible), and to see how StackReg locates those images with respect to the others. So here it is for tile 31TCJ. We can see that all the red crosses (with the new geometric processing) fall inside the confidence ellipse. Furthermore, they look quite grouped together, which suggests less jitter in image positions and supports the idea of a multi-temporal registration better than 5 meters, except for one point at the bottom of the ellipse (but maybe the geometric processing was not active for this image). We can also note that, even if we do not have enough acquisitions yet to be sure, the mean of the red points sits about 2.5 meters north of the mean of the full cloud, which may indicate that we should rather use the mean of the post 2021.04.01 images as our target location in StackReg.
Indeed, if we look at the spatial registration coherency matrix for dates after 2021.04.01, we can confirm that the coherency looks good with the new geometric processing, except for one image. We can also see that StackReg slightly enhances the coherency and brings this faulty image back in line with the others.
Coherency matrix for images with new geometric processing on tile 31TCJ.
If we look at other tiles, we can see that the same conclusions apply. 30TYS shows a very tight pack of acquisitions. Again, the center of the cloud is not the center of the dates corrected by the new geometric processing, which suggests that we could use those dates to enhance the absolute location of the full stack. 31TGL also shows a very tight pack, this time a bit outside our confidence ellipse. Once again, we are probably wrong and they are probably right. The same applies to 30TYQ and 30TXT (see the graphs at the end of the post).
So what can we say? Of course, we will have to confirm this when more products become available, but ... it works, folks! We only have to wait for the complete archive reprocessing (including L2A products). In the meantime, StackReg can help build spatially coherent long Sentinel-2 time series, help further improve this coherency when dealing with products with the new geometric processing, and sometimes catch outlier images and bring them back in line with the others. And we will have a closer look at the potential improvement of the absolute location error by using the mean location of dates with the new geometric processing as the target location.
30TYS location scatter plot
31TGL location scatter plot
30TYQ location scatter plot
30TXT location scatter plot
-
10:18
Oops...
sur Séries temporelles (CESBIO)Field work rarely goes without a few small hazards. In the case of our new ROSAS station, newly installed a month ago, the hazard unfortunately took the form of a farm machine hitting the mast while tilling the soil on April 15th.
The base of the mast twisted under the impact, leaving the structure in a precarious position. The stress applied to the mast led to a torsion of its fixing plate, and to the breaking of the concrete mass around one of the four anchoring points. The concrete mass itself was destabilized.
In order to avoid any further damage, the instrumentation was removed from the mast on April 19 by the CESBIO team.
The mast was then repositioned vertically with the help of an agricultural machine thanks to the dexterity of the Purpan School team.
Nevertheless, we will have to replace the damaged elements to guarantee the safety of the installation. In order not to impact the corn which has just been sown, the work will only be done after the harvest, that is to say around September. In the meantime, the photometer will be repositioned on the mast and re-aligned with the help of the CIMEL team in order to acquire BRDF data on this new crop cycle.
Tadaa!
-
10:09
Oops...
sur Séries temporelles (CESBIO)Field measurements rarely go without a few small hazards. In the case of our new ROSAS station, freshly installed a month ago, the hazard unfortunately took the form of a farm machine hitting the mast while tilling the soil on April 15th.
The base of the mast twisted under the impact, leaving the structure in a precarious position. The stress applied to the mast led to a torsion of its fixing plate and to the breaking of the concrete block around one of the four anchoring points. The concrete block itself was destabilized.
In order to avoid any further damage, the instrumentation was removed from the mast on April 19 by the CESBIO team.
The mast was then repositioned vertically with the help of an agricultural machine, thanks to the dexterity of the Purpan School team.
We will nevertheless have to replace the damaged elements to guarantee the safety of the installation. In order not to impact the corn that has just been sown, the work will only take place after the harvest, that is around September. In the meantime, the photometer will be repositioned on the mast and re-aligned with the help of the CIMEL team in order to acquire BRDF data over this new crop cycle.
Tadaa!
-
11:29
Muldrow Glacier surge in Alaska
sur Séries temporelles (CESBIO)Muldrow Glacier (McKinley Glacier) is a large glacier in Denali National Park and Preserve in Alaska, USA. It is now moving 100 times faster than normal, which means that it is undergoing a "surge".
The abrupt acceleration of the glacier movement is clearly visible in the recent series of Sentinel-1 SAR backscatter images (animation below). Also visible is a strong increase in the backscatter due to the formation of new crevasses. These crevasses create new reflectors on which the radar waves can bounce and return to the satellite antenna.
Read more about the Muldrow Glacier surge and the ongoing effort to study it: https://www.nps.gov/articles/dena-muldrow.htm
-
18:04
2,000 downloads for MAJA!
sur Séries temporelles (CESBIO)The distribution of the MAJA L2A processor as an executable software started in April 2017, but we started counting the number of downloads only in October 2017. The cumulative number of downloads reached 2,000 on April 2nd, 2021. That is on average 1.56 downloads per day, and more than 2 downloads per work day!
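For the curious, the averages quoted above can be checked in a few lines (assuming counting started on 1 October 2017):

```python
# Quick check of the averages quoted above, assuming counting started on
# 1 October 2017 and reached 2000 downloads on 2 April 2021.
from datetime import date

days = (date(2021, 4, 2) - date(2017, 10, 1)).days   # 1279 days
print(2000 / days)              # ~1.56 downloads per day
print(2000 / (days * 5 / 7))    # ~2.2 downloads per work day
```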
Since October 2020, MAJA has been distributed as open source software on github. Github does not count downloads, and only provides the number of clones made by different users over the last 2 weeks. The plot below shows that we have to add about 3 clones per week to the downloads of the executable version. As the github site also serves as MAJA's forum so far, you can also see the traffic on the documentation pages and forum in the plot provided below.
MAJA is not a simple piece of software that runs on a laptop. It is designed to work only on Linux platforms, and the Sentinel-2 data require a large disk space and a comfortable amount of memory. We therefore did not expect it to reach such a large number of downloads. If you are one of MAJA's users, we would be pleased to hear about the applications for which you used MAJA.
MAJA clones (top) and information traffic (bottom) on github
-
11:09
Can surface reflectance be negative?
sur Séries temporelles (CESBIO)Here is a frequently asked question:
I noticed, in such and such L2A product processed by MAJA, that some pixels have negative reflectances. Is this normal?
No, it should not happen, but the fact that it happens is not entirely surprising, as I will explain below. Unlike negative reflectances, reflectances greater than one can exist; this is explained here.
Reflectance should be positive, as it corresponds, up to a normalization factor, to the ratio of the radiance reflected by the Earth's surface (positive or zero) to the illumination received by this surface (positive or zero); the definition is written out after the list below. That said, the surface reflectances observed in nature can be very low, of the order of 0.01 to 0.03, for example in the following cases:
- in cloud or topographic shadows
- on slopes facing away from the sun
- over water or lava flows
- over dense vegetation in the visible
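Written out with the usual convention, the definition above reads:

```latex
\rho = \frac{\pi \, L}{E}
```

where L is the radiance reflected by the surface and E the downwelling irradiance it receives; the factor pi is the normalization factor mentioned above. Since both L and E are positive or zero, the reflectance should indeed never be negative.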
Atmospheric corrections are not free of errors. Our estimates of MAJA's performance have shown that the standard deviation of atmospheric correction errors on locally uniform scenes is about 0.01. These errors are probably mainly due to errors in estimating the aerosol optical thickness or in the choice of the aerosol type. MAJA is nevertheless one of the software packages that provide the best atmospheric correction performances, as shown by the ACIX-I experiment.
RMS errors for surface reflectances obtained by different atmospheric correction methods, compared to reflectances obtained using in-situ optical properties from the Aeronet network, using the 6SV radiative transfer code. These results are from the ACIX-I experiment. As the LaSRC chain also uses 6SV, this criterion gives a significant advantage to this atmospheric correction method. These performances do not take into account the adjacency effects and the quality of their correction. For each wavelength, the best performances are written in red, and the second best performances in blue.
A standard deviation of 0.01 means that in about 1% of the cases, the errors can be greater than 0.03. In this case, the reflectances of the few cases described above can become negative. They occur in general when the optical thickness of aerosols is overestimated. The errors of the surface reflectances can also be larger than the estimate provided above, due to adjacency effects and residuals of their correction. We are currently working on improving the adjacency effects correction, especially using the ROSAS station located in Lamasquère.
So how do we deal with the unavoidable negative reflectances in MAJA? We provide two types of output products from MAJA: reflectances before the correction of topographic effects (coded SRE, for Surface REflectance), and reflectances after this correction (coded FRE, for Flat surface REflectance):
- SRE: we leave the negative reflectances in the product, because we see no point in hiding these errors, which are in fact present on all the pixels
- FRE: as the correction of terrain effects can multiply the reflectances by a factor of up to 5, and thus make them even more negative, we set these reflectances to zero.
-
9:32
Can surface reflectances be negative?
sur Séries temporelles (CESBIO)Here is a question I receive quite frequently:
I noticed in (such and such Sentinel-2 L2A product corrected for atmospheric effects with MAJA) reflectances that are negative or equal to zero. Is this normal?
No, it is not normal, but it is not totally surprising either. Unlike negative reflectances, which should not exist, reflectances greater than one do exist. This is explained here.
A reflectance must be positive, since it corresponds, up to a normalization factor, to the ratio of the radiance reflected by the Earth's surface (positive or zero) to the illumination received by this surface (positive or zero). That said, the surface reflectances observed in nature can be very low, of the order of 0.01 to 0.03, for example in the following cases:
- in cloud or terrain shadows
- on slopes facing away from the sun
- over water or lava flows
- over dense vegetation in the visible
Atmospheric corrections are not free of errors. Our estimates of MAJA's performance have shown that the standard deviation of atmospheric correction errors on locally uniform scenes is about 0.01. These errors are probably mainly due to errors in estimating the aerosol optical thickness or to errors in the knowledge of the aerosol type. MAJA is nevertheless one of the software packages that provide the best atmospheric correction performances, as shown by the ACIX experiment.
RMS errors for the surface reflectances obtained by different atmospheric correction methods, compared to reflectances obtained using the aerosol optical properties measured in-situ by the Aeronet network, with the 6SV radiative transfer code. These results come from the ACIX-I experiment. As the LaSRC chain also uses 6SV, this criterion gives a significant advantage to this atmospheric correction method. These performances do not take into account the adjacency effects and the quality of their correction. For each wavelength, the best performances are written in red, and the second best in blue.
A standard deviation of 0.01 means that in about 1% of the cases, the errors can be greater than 0.03. In that case, the reflectances of the few cases listed above can become negative. This generally happens when the aerosol optical thickness is overestimated. The errors on the surface reflectances can also be larger than the estimate given above, because of adjacency effects and the residuals of their correction. We are currently working on this, in particular using the ROSAS station installed in Lamasquère.
How do we handle the unavoidable negative reflectances in MAJA? We provide two types of products as MAJA outputs: reflectances before the correction of topographic effects (coded SRE, for Surface REflectance), and reflectances after this correction (coded FRE, for Flat surface REflectance):
- SRE: we leave the negative reflectances in the product
- FRE: as the correction of terrain effects can multiply the reflectances by a factor of up to 5, and thus make them even more negative, we set these reflectances to zero.
One consequence of negative reflectances is that the NDVI can become greater than one (in the case of SRE products) or equal to one (in the case of FRE products). We studied this question in an article.
-
19:45
ROSAS first rosaces
sur Séries temporelles (CESBIO)These space engineers are fast and efficient! Just a week after our ROSAS station in Lamasquère was erected, we already have the first BRDF measurements, the production of which also required processing the calibration sequences. This was done thanks to our CNES colleagues from the service in charge of optical measurement physics, especially Lucas Landier, Sebastien Marcq and Aimé Meygret, and the exploitation team (Nicolas Guilleminot). Although the installation of the system has degraded the uniformity of the existing cover crop, the orders of magnitude of the reflectances are as expected; we can even see the shadow of the mast in the 120° azimuth direction in the top-left graph.
Let's wait for the corn to be sown, and we should have much cleaner BRDFs.
Polar diagrams of surface reflectances measured by our ROSAS station in Lamasquère. In this not very intuitive representation, the 0° azimuth corresponds to observations towards the South. The top-left image was taken in the morning, the top-right around noon, the bottom-left in the afternoon, and the bottom-right later on, after the arrival of clouds. The yellow dots indicate the position of the sun. The radius of the graph corresponds to the zenith angle, and the other dimension is the azimuth with respect to North.
-
10:44
Start-up of the new ROSAS station for bi-directional reflectance measurements in Lamasquère
sur Séries temporelles (CESBIO)At last! We announced it in March 2020, and here it is, one year later! From lock-downs to constraints related to crop growth stages or soil wetness, we were forced to postpone the operations several times. Finally, the ROSAS* station in Lamasquère (South-Western France) sent its very first measurements on March 17, 2021.
Let us recall that the ROSAS protocol is based on the use of a multi-spectral photometer to carry out angular and spectral measurements not only of the incident radiation, but also of the radiation reflected by the surface. It is thus possible, after processing, to derive the bi-directional reflectance (BRDF) of the surface of the measurement site. With the CNES ROSAS station in La Crau (France) and the CNES/ESA station in Gobabeb (Namibia), the CESBIO Lamasquère station is the third site of its kind in the world, and the first to characterize an agricultural vegetated surface, with seasonal and inter-annual variations of the cover.
This station will allow us to validate the satellite surface reflectances (corrected for atmospheric effects) in difficult cases, since:
- when crops are very green and dense, the surfaces are dark and the atmospheric correction errors have a strong impact on the reflectance estimates;
- when the crops are mature or the plot is bare ground, the adjacency effects due to the nearby forest become strong.
Feel free to refer to our previous article to read the detailed motivations that led us to set up this new site.
In spite of not-so-promising weather forecasts, which were fortunately systematically contradicted by the facts (well, OK, it was quite chilly and it rained on Friday), Hery and Mohamad from CIMEL were able to proceed with the installation and wiring of the instrumentation on the mast, still on the ground, as early as Tuesday March 16. This includes: a masthead robot, a lightning rod, an inclinometer, a GSM antenna, a solar panel, a hygrometric probe, a GPS antenna and an acquisition box, as well as the multiple corresponding cables.
Once the mast was equipped, Jean-Philippe, from the Lamothe farm, proceeded to lift the whole thing on Wednesday March 17. The maneuver gave us a few cold sweats, but went well, causing only minor damage to the ground cable, which was quickly repaired.
The sensor is installed on a tilting mast, which greatly facilitates the maintenance of the whole instrumentation. The last step, once the mast was in place, was to put the photometer on the robot, with its collimator, and to pre-set the alignment of the photometer.
The sunshine on Thursday allowed us to validate the alignment of the sensor and to check the proper functioning of the data transmission to CESBIO, and the rain on Friday allowed us to validate the proper functioning of the hygrometric sensor, which stops the acquisitions in case of precipitation (who said we should avoid rainy days?). The acquisitions started when the sun showed up on Saturday March 20. The data are automatically transmitted to CESBIO and CNES every hour via the cell phone network (GPRS). While the intermediate crop currently in place is not very interesting in terms of BRDF (especially because we trampled it quite a lot around the mast over the last few days...), the period from the end of April to the end of August will cover a new maize crop cycle (the now famous 4-meter-high maize from Lamasquère!). So we will surely publish new interesting data very soon!
We wish to send a big THANK YOU to the CIMEL team for its efficiency and good mood, to Baptiste from CESBIO for the inclinometer device, to Cedric Hillembrand from OMP SI for the data server, as well as to Gervais and Jean-Philippe from Lamothe farm for their decisive help in lifting the mast and for hosting our station on their land!
(from right to left: Mohamad (CIMEL), Hery (CIMEL) and Jérôme (CESBIO), all three relieved to see the mast standing up)
*RObotic Station for Atmosphere and Surface characterization
-
10:43
Start-up of the new ROSAS bi-directional reflectance measurement station in Lamasquère
sur Séries temporelles (CESBIO)At last! We announced it in March 2020, and here it is, operational one year later. From lock-downs to constraints related to the state of the crops and the soil, we were forced to postpone the operations several times. Finally, the ROSAS* station in Lamasquère (Haute-Garonne, France) made its first measurements on March 17, 2021.
Let us recall that the ROSAS protocol relies on a multi-spectral photometer to carry out angular and spectral measurements not only of the incident radiation, but also of the radiation reflected by the surface. After processing, it is thus possible to derive the bi-directional reflectance (BRDF) of the surface of the measurement site. With the CNES ROSAS station in La Crau (France) and the CNES/ESA station in Gobabeb (Namibia), the CESBIO ROSAS station is the third site of its kind in the world, and the first to characterize an agricultural vegetated surface, with seasonal and inter-annual variations of the cover. This station will allow us to validate satellite surface reflectances (corrected for atmospheric effects) in difficult cases. Indeed:
- when crops are very green and dense, the surfaces are dark and atmospheric correction errors have a strong impact on the reflectance estimates;
- when the crops are mature or the plot is bare soil, the adjacency effects due to the nearby forest become strong.
Feel free to refer to our previous article for the detailed motivations that led us to set up this new site.
In spite of not-so-promising weather forecasts, which were fortunately systematically contradicted by the facts (well, it was quite chilly and it rained on Friday), Hery and Mohamad from CIMEL were able to proceed with the installation and wiring of the instrumentation on the mast, still on the ground, as early as Tuesday March 16. This includes: a masthead robot, a lightning rod, an inclinometer, a GSM antenna, a solar panel, a hygrometric probe, a GPS antenna and an acquisition box, as well as the multiple corresponding cables.
Once the mast was equipped, Jean-Philippe, from the Lamothe farm, proceeded to lift the whole thing on Wednesday March 17. The maneuver gave us a few cold sweats, but went well, causing only minor damage to the ground cable, which was quickly repaired.
The mast is a tilting mast, which makes the maintenance of the instrumentation easier. The last step, once the mast was in place, therefore consisted in putting the photometer on the robot, with its collimator, and pre-setting the alignment of the whole.
The sunshine on Thursday allowed us to validate the alignment of the sensor and to check the proper functioning of the data transmission to CESBIO, and the rain on Friday allowed us to validate the functioning of the hygrometric probe, which stops the acquisitions in case of precipitation (which goes to show that bad weather has its good sides). The acquisitions therefore started with the return of the sun on Saturday March 20. The data are automatically transmitted to CESBIO and CNES every hour via the cell phone network (GPRS). While the intermediate crop currently in place is not very interesting in terms of BRDF (in particular because we trampled the faba bean quite a lot around the mast...), the period from the end of April to the end of August will cover a new maize crop cycle (the now famous 4-meter-high maize of Lamasquère!). We will therefore soon publish new interesting data!
We send a big THANK YOU to the CIMEL team for its efficiency and good mood, to Baptiste from CESBIO for the inclinometer device, to Cédric Hillembrand from the OMP IT department for the data reception server, as well as to Gervais and Jean-Philippe from the Lamothe farm for their decisive help in lifting the mast and for hosting our station on their land! The purchase of the station was funded by the Contrat de Plan État-Région; thanks to those who submitted the project (Eric Ceschia, Valérie le Dantec...).
(from right to left: Mohamad (CIMEL), Hery (CIMEL) and Jérôme (CESBIO), all three relieved to see the mast standing)
*RObotic Station for Atmosphere and Surface characterization
-
16:19
Several issues found in recent papers on cloud detection published in MDPI remote sensing
sur Séries temporelles (CESBIO)In the last few months, several papers on Sentinel-2 cloud detection have been published in the MDPI Remote Sensing journal. We found large errors or shortcomings in two of these papers, which should not have been allowed by the reviewers or editors. The third one is much better, even if we disagree with one of its conclusions.
Introduction
Before analyzing the papers, let's review a few points that someone interested in the performances of Sentinel-2 cloud masks should know.
1- False cloud negatives (cloud omissions) are worse than false cloud positives:
- given that Sentinel-2 observes the same pixel every fifth day, false cloud positives only reduce the number of available data for the processing, but one can expect the same pixel to be available and clear -and classified as such- 5 or 10 days before or after. Of course, systematic false positives, such as the classification of bright pixels as clouds, should be avoided, as they would mean such a pixel would never be available during the long period in which it is bright;
- false cloud negatives can degrade the analysis of a whole time series of surface reflectance, yielding a wrong estimate of bio-physical variables or of land cover classification for instance.
Due to the difference of observation angles between bands, the edges of the cloud have a different color
2 - Cloud masks should be dilated, for at least three reasons:
- not all Sentinel-2 spectral bands observe the surface in the exact same direction. As the cloud mask is made using a limited number of bands, it is necessary to add a buffer around it so that the clouds are masked in all the bands;
Clouds have fuzzy edges which can be hard to detect, hence the interest of dilation
- cloud edges are often fuzzy, and the pixels in the cloud's neighborhood can be affected by its diffuse fringes;
- clouds scatter light around them, and the measurement of surface reflectances is disturbed by this effect, named "adjacency effect" in remote sensing jargon.
For these reasons, in our software MAJA, we recommend using a parameter which dilates the cloud mask by 240 meters. This dilation is a parameter, and different cloud masks should be compared using the same value of this parameter. Dilating the cloud mask lowers the false negatives and increases the false positives, and overall it reduces the noise due to clouds in time series of surface reflectance.
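As a minimal sketch of what such a dilation amounts to (this is not MAJA's implementation; the 10 m pixel size and the square structuring element are assumptions for the example):

```python
# Minimal sketch of a 240 m cloud mask dilation (not MAJA's implementation).
# A square structuring element slightly over-dilates along the diagonals; a
# disk-shaped footprint would give a more isotropic 240 m buffer.
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_cloud_mask(cloud_mask: np.ndarray, buffer_m: float = 240.0, pixel_m: float = 10.0) -> np.ndarray:
    """Grow a boolean cloud mask by `buffer_m` meters at `pixel_m` resolution."""
    radius_px = int(round(buffer_m / pixel_m))    # 24 pixels at 10 m
    structure = np.ones((2 * radius_px + 1, 2 * radius_px + 1), dtype=bool)
    return binary_dilation(cloud_mask, structure=structure)
```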
3- Cloud shadows and clouds are both invalid pixels:
- clouds and cloud shadows disturb surface reflectance time series. These pixels are therefore invalid for most analyses. We do not know of any user who really needs to know whether an invalid pixel is in fact a shadow or a cloud. Moreover, a pixel can very often be both a cloud and a shadow, as some clouds partly cover their shadows. It is therefore not very useful to separate both classes in the validation, as it can bring differences if the hypotheses differ between the reference and the cloud detection method.
We have followed these guidelines in our own cloud mask validation paper, also published in remote sensing, and provided more details about their justification. This being said, we can now analyze the three papers.
Sanchez et al: Comparison of cloud cover detection algorithms on Sentinel-2 images of the Amazon tropical forest
A typical image over the Amazon (here, Surinam), with lots of broken clouds.
This paper compares the quality of the FMask, Sen2cor and MAJA cloud masks over the Amazon forest, which has an extremely high cloud cover. Such a cloud cover is not favorable to MAJA, which is a multi-temporal method that works better when the surface is seen cloud-free about once per month. We are therefore not surprised that MAJA is not at its best in this comparison. The paper relies on a careful elaboration of fully classified reference images. It is a serious work, but it still includes at least 4 shortcomings concerning MAJA's evaluation:
- MAJA's cloud mask was improperly decoded, and the authors wrongly concluded that MAJA could not detect cloud shadows. This error has been admitted by the authors. The attached image shows that, for the images used by Sanchez et al, the cloud shadows are indeed detected.
One of the images for which Sanchez et al wrote that MAJA could not detect shadows.
- MAJA dilates the clouds, but the authors compared the dilated cloud masks to non-dilated "reference cloud masks". In the Amazon region, where the clouds are often large fields of small cumulus, this approximation can lead to very large differences. MAJA's dilation parameter could have been tuned to use the same hypothesis as the reference (no dilation), but it wasn't. The differences are therefore counted as false positives for MAJA, which unfairly degrades its performance;
- The authors considered shadows and clouds as two different categories. As explained in the introduction, differences in hypotheses on the classification of pixels with clouds above shadows can bring errors in the evaluation of the performance of the method, which is of no interest for the users;
- Moreover, the Sentinel-2 mission became fully operational (5-day revisit over all land masses) only after November 2017. Before that, the revisit over Amazonia was 20 days until July 2017, and 10 days after Sentinel-2B entered operations. Two thirds of the images used in the paper by Sanchez et al were obtained before July 2017. In these conditions, the average frequency of cloud-free observations was lower than one every three months, or 4 per year, quite far from the one cloud-free observation per month required for MAJA to work optimally. This is of course not representative of MAJA's performance over the rest of the Sentinel-2 mission lifetime (at least 15 years).
We asked MDPI for a correction, but while MDPI tries to be fast in the review process, it is not as fast at recognizing errors and publishing corrections. We reported the errors in September, asking how to correct them. We received an answer at the end of October and submitted our comment in November; it was finally published in March, after a minor revision in January which requested us to change only a single word. Together with our comment, MDPI also published an answer from the authors, who acknowledged the decoding error, but did not bother to separate the results obtained with a revisit of 20 days from those obtained with a revisit of five days. MDPI did not insist, which we found -to say the least- disappointing. Meanwhile, this paper with false results has been cited 9 times.
EDIT: as this blog post has had some success, I have received some feedback, and one of the actual reviewers of the paper told me he had submitted comments close to ours; these comments were disregarded by the authors and the paper was accepted by MDPI, while the reviewer still recommended major revisions.
Zekoll et al: Comparison of Masking Algorithms for Sentinel-2 Imagery
Comparison of cloud masks over Naples, for MAJA (left) and Sen2cor (right). The detected clouds are circled in green.
This paper compares three cloud detection codes, FMask, ATCOR and Sen2cor, by comparing the cloud masks generated by the automatic methods to reference data taken manually:
"Classification results are compared to the assessment of an expert human interpreter using at least 50 polygons per class randomly selected for each image".
The method of Sanchez et al used fully classified images, and so did ours, but the one used by Zekoll et al is based on selected polygons, which might be less accurate because it is highly dependent on the choice of the samples. For instance, with such a method, you tend not to select samples near the cloud edges, because it is hard to do so manually. But the issue is that cloud edges are one of the most difficult cases, while the center of a cloud is usually easier to classify automatically. Cloud edges are also one of the cases where the Sen2cor classification is often wrong; avoiding sampling them is a convenient way to obtain good results. The paper does not show any example of the reference classification, which is described in one sentence and a graph, so the reader can just hope that the work was done properly.
The paper also contains a sentence that should have shocked a good reviewer:
"However, dilation of Sen2Cor cloud mask is not recommended with the used processor version because it is a known issue that it misclassifies many bright objects as clouds in urban area, which leads to commission of clouds and even more if dilation is applied."
It could be translated as: "let's avoid the dilation, or it would reveal the real value of Sen2cor". How can a reviewer accept such a statement? Yes, the disclaimer is present in the paper, but the performance quoted in the abstract and conclusion does not take it into account. It is therefore misleading.
And finally, the most beautiful construction in the paper is in the abstract:
"The most important part of the comparison is done for the difference area of the three classifications considered. This is the part of the classification images where the results of Fmask, ATCOR and Sen2Cor disagree. Results on difference area have the advantage to show more clearly the strengths and weaknesses of a classification than results on the complete image. The overall accuracy of Fmask, ATCOR, and Sen2Cor for difference areas of the selected scenes is 45%, 56%, and 62%, respectively. User and producer accuracies are strongly class and scene-dependent, typically varying between 30% and 90%. Comparison of the difference area is complemented by looking for the results in the area where all three classifications give the same result. Overall accuracy for that “same area” is 97% resulting in the complete classification in overall accuracy of 89%, 91% and 92% for Fmask, ATCOR and Sen2Cor respectively."
Instead of giving in the abstract the overall accuracy for the whole reference data set, which is not good (despite using non-dilated reference cloud masks), the authors found a way to show the fast reader that the "overall accuracy is 92% for Sen2cor". You need to read carefully between the lines to understand that this holds only for the pixels on which the three methods agree, i.e. for the pixels which are easy to classify. The real performance for cloud detection is available but lost in the results:
Fmask performs best for the classification of cloud pixels (84.5%), while ATCOR and Sen2Cor have a recognition rate of 62.7% and 65.7%, respectively
The corresponding user accuracy of FMask is low, but most of the cloud commission errors are due to the dilation.
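To make the point explicit, here is a back-of-the-envelope check of the headline figure (my own reading of the numbers quoted above, assuming the overall accuracy is simply the pixel-weighted average of the "same area" and "difference area" accuracies): the reported Sen2Cor values imply that roughly 86% of the reference pixels sit in the easy "same area".

```python
# Back-of-the-envelope check, assuming OA_total = w * OA_same + (1 - w) * OA_diff,
# where w is the share of pixels on which the three methods agree.
oa_same, oa_diff, oa_total = 0.97, 0.62, 0.92   # Sen2Cor values quoted in the abstract
w = (oa_total - oa_diff) / (oa_same - oa_diff)
print(f"implied share of 'easy' pixels: {w:.1%}")   # ~85.7%
```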
To conclude, here is one element which explains the low quality of the paper:
Received: 1 December 2020 / Revised: 25 December 2020 / Accepted: 27 December 2020 / Published: 4 January 2021
Congratulations to the MDPI review process, which accepted this paper in less than one month, in a period that includes the Christmas and New Year break. Let me recall that our negotiation with MDPI for adding a comment to Sanchez et al took seven months.
I have to add that I just saw a presentation by the first author, V. Zekoll, a PhD student, at the S2 Validation Team meeting. The presentation was much better than the paper: it only focused on the comparison of the three methods she studied, and did not even show any comparison with the reference. This blog post does not aim at blaming her, but rather the editing process.
Cilli et al: Machine Learning for Cloud Detection of Globally Distributed Sentinel-2 Images
The last paper, by Cilli et al, avoids most of the traps into which the other papers fell. It takes the necessary dilation into account, even though the reference mask was not dilated. The reference validation data set was a good one, based on fully classified images. In fact, it was the data set we generated three years ago and made available to the public. We are happy it was useful and well used. But the paper missed the necessity to detect cloud shadows; I just suppose it is a work in progress.
The paper compares machine learning approaches and more classical threshold based methods including MAJA, Sen2cor and FMask. The machine learning method uses a database by Hollstein et al as training data set, and evaluates all methods against CESBIO's data set.
The conclusions correspond to how we designed MAJA. MAJA is the most sensitive method, but has some false positive clouds, which mostly correspond to the dilation applied in the products generated by MAJA. The authors also note that MAJA does not have false positives on bright pixels, while the other methods do.
The conclusion of the paper for the threshold-based methods is as follows:
"In general, MAJA resulted in the most sensitive (95.3%) method and FMask resulted in the most precise (98.0%) and specific (99.5%); however, it is worth noting that according to specificity the difference with Sen2Cor is negligible (99.4%)."
The SVM method developed by Cilli et al is less sensitive than MAJA but more precise (probably because its dilation is thinner).
However, I am puzzled by the following sentence:
"These findings should be taken into consideration as the main purpose of cloud detection is avoiding false positives, especially for change detection or land cover applications."
I had an email exchange with some of the authors, who come from the machine learning domain and are rather new to remote sensing, and they recognized that, as Sentinel-2 provides time series, it is more relevant to avoid false negatives than false positives. The authors concede that they did not take into consideration that Sentinel-2 data are not single images, but time series of images. This is again a questionable position that passed through the MDPI review process.
Conclusion
Number of special issues per MDPI journal, source: [https:]
Validating cloud masks is not as easy as it seems. It includes numerous pitfalls, and requires both a good understanding of the cloud mask processors' strategies and tuning, and a robust methodology. Falling into one of these traps is not shocking in itself and can fuel scientific discussions. But the fact that the cited articles reach publication with such shortcomings, despite the usual revision process, is surprising. This questions both the very fast review process of MDPI and the multiplication of guest editors who may not be specialists.
To finish, I have to say that I have published several papers in MDPI journals, and I appreciated that the review process was quick and the reviews not too severe, so my criticisms can apply to my own papers. Moreover, I have been a guest editor for 4 MDPI special issues over the years:
- SPOT (Take5) special issue
- VENµS special issue
- Sentinel Analysis Ready Data
- Sentinel-2 science and applications
My feedback from these special issues is that it is easy to open one, but also that MDPI puts pressure on the guest editors for fast processing. This pressure results in only one member of the guest editing board handling all the papers, because it takes too long to coordinate on who will handle each paper. For the SPOT (Take5) special issue, I handled most papers, except those of which I was a co-author. For the other special issues, the main guest editor did most of the work, except for the few papers that fell exactly in my field of expertise.
Written by O.Hagolle and J.Colin
-
21:42
TropiSCO: a project to monitor tropical deforestation with Sentinel-1 on a weekly basis
sur Séries temporelles (CESBIO)Do we need to remind you of all the collateral damage linked to deforestation, or of the ecosystem services that forests provide? We have already talked about these in this blog (here and there too). Yet forests are disappearing at an alarming rate. Between 1990 and 2020, an area of forest equal to more than three times that of metropolitan France has disappeared. The tropical forest, which accounts for half of the world's forests, is seriously threatened: in 2019, the equivalent of a soccer stadium was destroyed every two seconds (FAO, 2020).
France is now about to acquire a tool for monitoring deforestation thanks to the TropiSCO project, which has just been labelled by the Space Climate Observatory (SCO). Within the framework of this project, the deforestation detection method developed by CESBIO, GlobEO and CNES (Bouvet et al., 2018; Ballère et al., 2021) will be applied to humid forests across the tropics, and possibly to temperate and boreal forests. This observation tool will be ready within 18 months and the data produced will be made public.
This deforestation detection method is especially suited to the tropics since it is based on data from the Sentinel-1 radar satellites, which are almost insensitive to the clouds that obstruct most optical images in these regions. Deforestation is therefore detected every week, regardless of weather conditions, at 10 m resolution. According to Ballère et al. (2021), in a third of the cases our method detects deforested areas more than 3 months ahead of the method used by the Maryland GLAD team (Hansen et al., 2016), which is based on optical data from Landsat. Our method has already been applied to different areas of the tropics (French Guiana, Peru, Gabon, Vietnam, Laos and Cambodia) and successfully validated. TropiSCO is therefore an early warning system, but not only that, since the results can also be used to reliably compute annual deforestation statistics.
The data produced are likely to be of interest to many users, including governments, NGOs, universities and the general public, but also companies wishing to reduce the risk of deforestation in their supply chains, or fire monitoring actors.
It is important to note that similar initiatives based on Sentinel-1 data are currently emerging, such as Wageningen University's RADD alert system (Reiche et al., 2021) and the Brazilian alert system (Doblas et al., 2020). However, our method, based on radar shadow detection, has the advantage of effectively avoiding false alarms.
The project's labeling by the SCO provides us with funding to precisely define the architecture of the production system and finalize the demonstration mock-ups to distribute the first data and obtain feedback from users. In parallel, and with the assistance of the SCO team at CNES, we will quickly finalize the financing of the project development and production for three years.
Monitoring of rubber tree cutting in the north of Ho Chi Minh City in Vietnam between 2018 (yellow) and 2020 (red). In gray, the areas that were not cut during the period.
References:
Ballère, M., Bouvet, A., Mermoz, S., Le Toan, T., Koleck, T., Bedeau, C., ... & Lardeux, C. (2021). SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. Remote Sensing of Environment, 252, 112159.
Bouvet, A., Mermoz, S., Ballère, M., Koleck, T., & Le Toan, T. (2018). Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. Remote Sensing, 10(8), 1250.
Doblas, J., Shimabukuro, Y., Sant’Anna, S., Carneiro, A., Aragão, L., & Almeida, C. (2020). Optimizing Near Real-Time Detection of Deforestation on Tropical Rainforests Using Sentinel-1 Data. Remote Sensing, 12(23), 3922.
FAO : State of the World’s Forests 2020. [www.fao.org]
Hansen, M. C., Krylov, A., Tyukavina, A., Potapov, P. V., Turubanova, S., Zutta, B., ... & Moore, R. (2016). Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11(3), 034008.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., ... & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2), 024005.
-
16:08
TropiSCO: a project for weekly monitoring of tropical deforestation with Sentinel-1
sur Séries temporelles (CESBIO)We will not remind you of all the collateral damage linked to deforestation, nor of the ecosystem services that forests provide, since we have already talked about these in this blog (here, and there too). Yet forests are disappearing at an alarming rate. Between 1990 and 2020, an area of forest equivalent to more than three times that of metropolitan France has disappeared. The tropical forest, which accounts for half of the world's forests, is seriously threatened: in 2019, the equivalent of a soccer stadium was destroyed there every two seconds (FAO, 2020).
France is now about to acquire a deforestation monitoring tool thanks to the TropiSCO project, which has just been labelled by the Space Climate Observatory, which we now call the SCO. Within the framework of this project, the deforestation detection method developed by CESBIO, GlobEO and CNES (Bouvet et al., 2018; Ballère et al., 2021) will be applied to the humid forests of all the tropics, and possibly to temperate and boreal forests. This observation tool will be ready within 18 months and the data produced will be public.
This deforestation detection method is especially suited to the tropics since it is based on data from the Sentinel-1 radar satellites, which are almost insensitive to the clouds that obstruct most optical images in these regions. Deforestation is therefore detected every week, regardless of weather conditions, at 10 m resolution. According to Ballère et al. (2021), in a third of the cases our method detects deforested areas more than 3 months ahead of the method of the Maryland GLAD team (Hansen et al., 2016), which is based on optical data from Landsat. Our method has already been applied to different areas of the tropics (French Guiana, Peru, Gabon, Vietnam, Laos and Cambodia) and successfully validated. It is therefore a rapid alert system that will be developed, but not only that, since the results can also be used to reliably compute annual deforestation statistics.
The data produced are likely to be of interest to many users, including governments, NGOs, universities and the general public, but also companies wishing to reduce the risk of deforestation in their supply chains, or fire monitoring actors.
It is important to note that similar initiatives based on Sentinel-1 data are currently emerging, such as Wageningen University's RADD alert system (Reiche et al., 2021) and the Brazilian alert system (Doblas et al., 2020). However, our method, based on the detection of radar shadows, has the advantage of effectively avoiding false alarms.
The labelling of the project by the SCO provides us with funding to precisely define the architecture of the production system and to finalize the demonstration mock-ups, in order to distribute the first data and obtain feedback from users. In parallel, and with the assistance of the SCO team at CNES, we will quickly finalize the financing of the project development and of the production for three years.
Monitoring of rubber tree cutting north of Ho Chi Minh City in Vietnam between 2018 (yellow) and 2020 (red). In gray, the areas that were not cut during the period.
References:
Ballère, M., Bouvet, A., Mermoz, S., Le Toan, T., Koleck, T., Bedeau, C., ... & Lardeux, C. (2021). SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. Remote Sensing of Environment, 252, 112159.
Bouvet, A., Mermoz, S., Ballère, M., Koleck, T., & Le Toan, T. (2018). Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. Remote Sensing, 10(8), 1250.
Doblas, J., Shimabukuro, Y., Sant’Anna, S., Carneiro, A., Aragão, L., & Almeida, C. (2020). Optimizing Near Real-Time Detection of Deforestation on Tropical Rainforests Using Sentinel-1 Data. Remote Sensing, 12(23), 3922.
FAO : State of the World’s Forests 2020. [www.fao.org]
Hansen, M. C., Krylov, A., Tyukavina, A., Potapov, P. V., Turubanova, S., Zutta, B., ... & Moore, R. (2016). Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11(3), 034008.
Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., ... & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2), 024005.
-
10:54
CESBIO needles China
sur Séries temporelles (CESBIO)It is not every day that Le Monde quotes our laboratory! For several months, an intense source of interference has been disturbing the SMOS data, in a frequency band normally protected for observation, over a large part of South-East Asia, making the data unusable.
The SMOS data, which provide estimates of soil moisture at the global scale, are very useful in meteorology, and these interference sources therefore degrade the forecasts in the affected regions. After following the normal alert procedures, investigating and finding the origin of the source, and alerting the scientists, our colleagues have no option left but to appeal to public opinion. Admittedly, this article may do no more harm than a pebble in a shoe, but we hope that this campaign will help switch off this source.
-
21:24
Pléiades images of the Uttarakhand disaster
sur Séries temporelles (CESBIO)The Indian Space Agency (ISRO) activated the International Charter "Space and Major Disasters" to image the area of the disaster in Uttarakhand (excellent visualisation here). Thanks to CNES and Airbus DS, Pléiades images (resolution: 70 cm in panchromatic, 2.8 m in multispectral) were acquired today, 09 Feb 2021, two days after the event. These images show the detachment area, with a clear rupture line of 550 m on the north face of Ronti.
This is a comparison with the latest Sentinel-2 image, before the flood.
The wall.. Impressive post-event image by @cnes @AirbusSpace Pléiades #Chamoli pic.twitter.com/vch2Ox4Qae
— Simon Gascoin (@sgascoin) February 10, 2021
Preliminary work by many scientists suggests that a rock slope failure released a mixture of rock and ice which created a powerful flood in the valley of the Rishiganga River.
UPDATE 10 Feb 2021. A Pléiades stereo pair was acquired (B/H = 0.12), which allowed us to generate a high resolution 3D model of the area.
Thanks to @cnes & Pléiades images acquired this morning, we computed a high resolution DEM of the source area of the disaster. Maybe the first post-event high resolution topography. pic.twitter.com/5YmbEVzLgw
— Etienne Berthier (@EtienneBerthie2) February 10, 2021
Update 11 Feb 2021
Two elevation difference maps were computed by Etienne. The first one was obtained by differencing the above Pléiades DEM with the Copernicus 30 m resolution DEM.
Elevation changes in the source area of the #Chamoli landslide, #Uttarakhand. Massive 150 m loss, about 100 m on average. 10 Feb. 2021 Pléiades DEM was compared to the Copernicus 30 m DEM from ~2013. Data from @CopernicusLand @cnes @AirbusSpace. pic.twitter.com/q8xoBWoXpk
— Etienne Berthier (@EtienneBerthie2) February 11, 2021
Then, D. Shean (Univ. of Washington) computed a pre-event DEM from WorldView-1 images, which allowed a finer analysis. The estimated detached volume is 25 million cubic meters.
Elevation difference map & Pléiades digital surface model. Computed from #Worldview @Maxar by D. Shean @uwTACOlab and @EtienneBerthie2 #Chamoli pic.twitter.com/F9gzgYwDQa
— Simon Gascoin (@sgascoin) February 11, 2021
Authors: Etienne Berthier (CNRS/LEGOS) and Simon Gascoin (CNRS/CESBIO)
Acknowledgements: Work carried out with the support of CNES, the International Charter "Space and Major Disasters" and the DINAMIS program.
-
20:35
Help us measure the orange snow!
sur Séries temporelles (CESBIO)Update, February 22, 2021: the campaign is over! Thanks to all the participants; we have ~60 samples that we will analyze over the coming months. To be continued...
A deposit of Saharan dust has covered the snow in the Alps and the Pyrenees. Calling all mountain enthusiasts: we need your participation to study this seemingly exceptional event! Those wondering "what for?" can have a look at the bottom of this post.
Update! A layer of fresh snow covered the deposit during Sunday night, but sampling is still useful and necessary. Simply remove most of the upper layer and sample the orange layer underneath (see the videos below).
Then please contact us by e-mail so that we can collect the samples.
Pyrenees: simon.gascoin@cesbio.cnes.fr
Alps: marie.dumont@meteo.fr
The sampling container can be of any size, as long as the area of its opening is known! For a jam jar, it is enough to know its diameter (usually 7.5 cm). Indeed, our goal is to characterize the dust flux in grams per square meter; see the worked example below. The ideal protocol is a 10 cm x 10 cm square. The sampling depth must be sufficient to collect the whole orange layer (5 cm should be enough).
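As a worked example of the flux computation implied by this protocol (the collected mass below is purely illustrative):

```python
# Worked example of the dust flux computation implied by the protocol above.
# Only the opening area of the container matters; the mass is illustrative.
import math

def dust_flux_g_per_m2(dust_mass_g: float, opening_diameter_cm: float = 7.5) -> float:
    """Dust deposition flux in g/m^2 for a circular container such as a jam jar."""
    radius_m = opening_diameter_cm / 200.0     # cm -> m, diameter -> radius
    area_m2 = math.pi * radius_m ** 2          # ~0.0044 m^2 for a 7.5 cm jar
    return dust_mass_g / area_m2

print(dust_flux_g_per_m2(0.5))   # 0.5 g collected in a 7.5 cm jar -> ~113 g/m^2
```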
Sampling the sand. How to do it? Queyras version.@sgascoin @AlvaroRobledano @mpneige pic.twitter.com/IDjPrF8F3I
— Ghislain Picard (@gsnowph) February 10, 2021
Here is another way to do it, recommended by Didier Voisin (IGE, Université de Grenoble)
Why measure this orange deposit?
First, to characterize its intensity: was it really an exceptional event? Then, to study the impact of this deposit on the snowpack: we expect the snow to melt a little faster, as Marie explains below. Finally, to validate and perhaps improve the algorithms that measure the amount of dust from satellite images, as well as the atmospheric dust transport models.
Curious about the effect of the Saharan dust currently coloring the #Alpes & #Pyrenees ranges?
Explanations (causes, impacts, trends), from the Caucasus to "our" mountains, by Marie Dumont @mpneige #CEN #CNRM @meteofrance @CNRS @OSUG_fr @IGE_Grenoble @CesbioLab [https:]]
— Samuel Morin (@smlmrn) February 6, 2021
Val d’Aran
A day unlike any other@Aymarfreeride #pyrenees #valdaran pic.twitter.com/FT9499eAFd
— Météo Pyrénées (@Meteo_Pyrenees) February 6, 2021
-
20:22
What makes earthworms move ?
sur Séries temporelles (CESBIO)What makes earthworms move?
Earthworms directly rely on soil water. Yet their activity seems completely disconnected from the variations of the soil water content.
Earthworms play a key role in soil dynamics. Among them, the anecic earthworms dig vertical galleries, ingest the earth located in the first meters of the ground and deposit it at the surface in the form of small castings. Passing through their digestive system makes the soil nutrients usable by plants. Since castings are always very water-laden, soil water is expected to play an important role in earthworm activity. Yet, the production of castings does not vary at all like the water content of the soil. The behaviour of the earthworms is therefore difficult to understand.
The global modelling technique was used to unveil the dynamical couplings between earthworm activity and soil water content (Dong Cao basin, Viêt Nam). The results show that a high water content does not generate a strong and immediate activity of the earthworms but a progressive increase, by sensitization. Conversely, a low water content generates a gradual decrease of their activity, by habituation. This progressivity results in an evolution of the production of castings that is completely different from that of the water content. In return, earthworm activity leads to very different soil evolutions from one site to another, even for identical hydroclimatic conditions.
Since the obtained models make it possible to formulate algebraically the coupling between earthworm activity and soil water content, soil moisture estimated from satellites may now be used to monitor earthworm activity from space.
La dynamique des lombrics relève de la théorie du chaos, Olivier Blot, IRD le Mag', 21 janvier 2021.
Photo: casting rejected by an earthworm at the soil surface (Dong Cao basin, Viêt Nam). Copyright: Pascal Jouquet.
-
13:17
The Astrolabe glacier is about to calve a nice iceberg
sur Séries temporelles (CESBIO)
On a Pléiades image acquired on 15 January, Etienne noticed a fracture crossing a large part of the tongue of the Astrolabe glacier, near the Dumont-d'Urville station in Antarctica.
Just another iceberg in Antarctica? Maybe, but this one it at the front door of the French research station, Dumont d'Urville. When will it happen ? What impacts on sea ice, logistics and Penguins ? @_IPEV @sgascoin @IGE_Grenoble pic.twitter.com/slTi7adEQe
— Etienne Berthier (@EtienneBerthie2) January 22, 2021
The opening of this crevasse can be reconstructed from a series of Sentinel-1 images (a full year, from January 2020 to January 2021).
[https:]]
According to the latest available Sentinel-2 image, the incubating iceberg could have an area of 27 square kilometres, i.e. about 4,000 football pitches, which is vast but much smaller than iceberg A68, which measured 5,800 km² when it separated from the Larsen C ice shelf!
The calving could still take some time. An iceberg to keep an eye on...
-
18:17
Crop Irrigation, a new project labeled by the Space Climate Observatory
sur Séries temporelles (CESBIO)
The Space Climate Observatory (SCO) is an international initiative of the One Planet Summit, officially launched in June 2019 at the initiative of France. The SCO brings together space agencies from around the world and international organizations (UNDP, ESA, UNEP). It aims at developing projects for local decision-makers to help them adapt to climate change. These projects monitor the impacts of climate change on landscapes using satellite data, field data and local socio-economic data. The SCO works within the framework of the Paris Agreement, the 2030 Agenda for Sustainable Development, the United Nations Framework Convention on Climate Change (UNFCCC) and the strategies developed by the WMO and the Global Climate Observing System (GCOS).
SCO France is the national branch of the international initiative. It is a national network whose vocation is to bring together the scientific community, public authorities and companies around the objectives of SCO International (the study of climate change impacts and their mitigation). Since 2020 it has periodically launched calls for projects, and projects labeled by the SCO can benefit from modest funding and assistance over a period of two years to reach pre-operational or operational status and find the necessary funding.
The CESBIO project "Irrigation Grandes Cultures" has just been labeled by SCO France. Its aim is to provide spatial indicators that will enable water managers to optimize water resources and to identify adaptation strategies suited to local issues. For the past ten years, many French departments have imposed water restrictions, particularly on agriculture. A record was reached in 2020, with 80 departments concerned. We must therefore take action!
The partners of this project are the CESBIO, the CNES, TETIS, the Chamber of Agriculture of the Tarn, the Syndicat Mixte d'Aménagement de la vallée de la Durance, the Regional Chamber of Agriculture of Occitania, the Regional Chamber of Agriculture of PACA, the Bureau of Geological and Mining Research, the Société du Canal de Provence and MEOSS, a company that will develop the operational tools for the management and development of the territories. This project is also supported by the Adour-Garonne Water Agency and the International Office for Water.
It is based on the infrastructure of the Theia data centre and on the work carried out by the Scientific Expertise Centres "Irrigation" and "Soil moisture at very high spatial resolution". We will use the high-resolution methods for mapping irrigated surfaces and soil moisture developed by CESBIO and TETIS. The indicators will be estimated from the free and open images of the Copernicus programme, with the Sentinel-1 radar and Sentinel-2 optical satellites. They will be combined with crop classification and crop water requirement models developed at the two partner laboratories (CESBIO, TETIS).
So this is, for us, the beginning of a great adventure that aims to support water stakeholders in facing a major challenge: preserving WATER resources as water needs increase.
-
14:18
Irrigation Grandes Cultures, a new project labeled by the Space Climate Observatory
sur Séries temporelles (CESBIO)
The Space Climate Observatory (SCO) is an international initiative of the One Planet Summit, officially launched in June 2019 at the initiative of France. It brings together space agencies from around the world and international organizations (UNDP, ESA, UNEP). Its vocation is to develop projects for local decision-makers to help them adapt to climate change. The projects monitor impacts on the territories using satellite data, field data and local socio-economic data. The SCO works within the framework of the Paris Agreement, the 2030 Agenda for Sustainable Development, the United Nations Framework Convention on Climate Change (UNFCCC) and the strategies developed by the WMO and the Global Climate Observing System (GCOS).
SCO France is the national branch of the international initiative. It is a national network whose vocation is to bring together the scientific community, public authorities and companies around the objectives of SCO International and the study of climate change impacts. Since 2020 it has periodically launched calls for projects, and projects labeled by the SCO can benefit from modest funding and assistance over a period of two years to reach pre-operational or operational status and find the necessary funding.
CESBIO has just had its project "Irrigation Grandes Cultures" labeled by SCO France. The objective of this project is to provide water managers with indicators that will enable them to optimize the management of water resources in their territory and to identify climate change adaptation strategies suited to local specificities. Indeed, for about ten years, many French departments have resorted to water restrictions, notably for agriculture. A record was reached in 2020, with 80 departments under drought orders. We must therefore take action!
The partners of this project are CESBIO, CNES, TETIS, the Chambre d'Agriculture du Tarn, the Syndicat Mixte d'Aménagement de la vallée de la Durance, the Chambre Régionale d'Agriculture d'Occitanie, the Chambre Régionale d'Agriculture de PACA, the Bureau de Recherches Géologiques et Minières, the Société du Canal de Provence and MEOSS, a company that will develop the operational tools for the management and development of the territories. The project is also supported by the Agence de l'Eau Adour-Garonne and the Office International de l'Eau.
It builds on the infrastructure of the Theia data centre ( [https:]] ) and on the work carried out by the Scientific Expertise Centres "Irrigation" and "Soil moisture at very high spatial resolution". We will use in particular the high-resolution methods for mapping irrigated surfaces and soil moisture developed by CESBIO and TETIS. The indicators will be estimated from the free and open images of the Copernicus programme, with the Sentinel-1 radar and Sentinel-2 optical satellites. They will be combined with crop classification and crop water requirement models developed in the two partner laboratories (CESBIO, TETIS).
So here we are, setting off on a great adventure that aims to support water stakeholders in facing a major challenge: addressing the many issues of managing WATER, a common good under threat in a context of coming shortages.
-
23:05
CESBIO's blog audience drops, let's blame the Covid!
sur Séries temporelles (CESBIO)
2020 has been a very difficult year for everyone (but it's over!), and the multitemps blog was no exception. Our audience decreased by 13 to 20%, depending on the statistics, compared to 2019, which already did not break records.
It would be easy to blame the Covid, and I guess a large part of the time we spent this year scrolling the internet or social networks was devoted to checking the latest news and stats of the virus. But I guess a good part of the explanation lies in the fact that we wrote far fewer posts this year: 132 against 188 in 2019. The burden of the Covid is once again an explanation, but so is the fact that some of us took on new duties, started writing a dissertation (l'habilitation à diriger des recherches) or simply lacked inspiration. Even if we have welcomed new authors with great posts, Julien Michel, Philippe Gamet, Amandine Rolland, Sylvain Mangiarotti, Jerôme Colin, Marie Ballère and Stephane Mermoz, they are still a bit shy and have only produced a few posts.
This blog is open to all CESBIO personnel, but also to our close collaborators in other labs and in industry, and to the users of our products who wish to provide feedback. Feel free to suggest articles: it does not take long, and there is no reviewer 2.
Maybe there are other reasons, and we would be happy to receive feedback. After 8 years of blogging, are we starting to repeat ourselves? You all know by now that Sentinel-2 is a great satellite, and that MAJA is better than Sen2cor ;).
Comparison of page views to our blog in 2020 and 2021
So here is the list of the most read pages this year, after removing the lists of articles, such as the home page, the Sentinel-2 or Landsat pages, and the author pages (this year, Simon's name was clicked more often than mine, on the blog I created... should I fire him?).
2020 top posts
So what may we conclude?
- the 3 top posts are the same as last year
- only 4 posts from 2020 made it to the top 15
- the distribution of small free software is at the top of the list (and we get several questions a week...)
- Simon's geophysics articles (with the associated announcements on social networks) attract crowds
- the description of the MAJA software is a success, but half of my articles point to this page
- the example on how to use Google Earth Engine attracts many more readers than the articles that denounce its dangers (it's sad)
- the "How It Works" series continues to be a success
- Level 3A products were very popular
- the posts on Theia's product formats are useful
- two articles on Sentinel-1 ranked in the top 15 (and a third on deforestation ranked in the top 30)
- our forays into the economics of remote sensing are successful
- Sentinel-HR mission articles are regularly read, although not yet in the top 15
-
18:25
Happy New Year 2021!
sur Séries temporelles (CESBIO)
As almost everywhere, 2020 was a gloomy year at CESBIO! Even though CESBIO has been relatively spared by the disease, with a low number of mild cases so far, some of our colleagues lost relatives and we send them our warmest thoughts! The situation is probably the same for the blog's readers, and we sincerely hope they have coped with this bad year as well as possible.
But 2020 is over, and it is a great pleasure to wish you a happy 2021. Let's hope that we will soon be able to meet in person rather than virtually.
One consequence of 2020 at CESBIO is that we have a huge backlog of events to celebrate, and if the Covid lets us, we will probably spend the end of the year partying!
CESBIO turned 25 in 2020, but we had to cancel the party that was planned for May. We hope to be able to celebrate in 2021 the first anniversary of CESBIO's 25th birthday.
The five-year mandate of CESBIO's director, Laurent Polidori, ended on 31 December. Although we organized a small party, with about ten people in the room and the rest of the laboratory on Zoom, we will have to throw a real one during one of his returns from Brazil, where he will soon be a professor. This blog is a good testimony to some of the laboratory's progress under Laurent's calm, attentive and enlightened direction.
A new team, with Mehrez Zribi as director, will lead CESBIO for the next five years. Valérie Demarez and Lionel Jarlan will be deputy directors, and Gilles Boulet and Olivier Hagolle (yes, me) will lead the two CESBIO teams (Modelling and Observation respectively). That calls for a celebration!
CESBIO's new direction team
Mehrez Zribi - Director; Valérie Démarez - Deputy Director; Lionel Jarlan - Deputy Director; Gilles Boulet - Modelling team leader; Olivier Hagolle - Observation team leader
We will also have to celebrate the retirement of two pillars of CESBIO, Yann Kerr and Gérard Dedieu, who laid the foundations of the laboratory, defined successful satellite missions, produced a vast body of literature and trained many of us. Even though they will continue to work with us on a voluntary basis for a few years (as emeritus researcher and SMOS PI for Yann, and VENµS PI for Gérard), 2021 marks the beginning of a big change for the laboratory. We will still need their long-term vision and deep knowledge of the world of land remote sensing.
Happy retirees: Yann Kerr, Gérard Dedieu
We have also welcomed several new researchers, promoted new doctors, started a new space mission, Trishna, and seen the departure of people important to CESBIO, such as Bernard Marciel, who had taken care of our building and its logistics for several years and sometimes managed to get us air conditioning before winter (which is not easy at Université Paul Sabatier).
So if you ever come across drunk CESBIO researchers in Toulouse at the end of 2021, it will be a good sign of victory over the virus, since we will have started celebrating all the parties we keep postponing from month to month.
While looking for a photo of Gérard, I found a clue showing that he is very well prepared for his retirement, so do not worry about him.
-
15:45
Happy 2021 !
sur Séries temporelles (CESBIO)
As almost everywhere, 2020 has been a gloomy year at CESBIO ! Even if CESBIO has been relatively spared by the disease, with a low number of mild cases so far, a few of our colleagues lost relatives and we send them our warmest thoughts ! The situation is probably the same for the blog's audience, and we sincerely hope our readers coped with this bad year as well as possible.
But 2020 is over now and it's a pleasure to wish you a very happy 2021. Let's wish that we will soon be able to meet in a room and not in a zoom !
At CESBIO, we have a terrible backlog of events to celebrate, and if the COVID lets us do it, we will probably spend the end of the year partying !
CESBIO turned 25 this year, but we had to cancel the party which was scheduled in May. We hope to be able to celebrate in 2021 the first birthday of CESBIO's 25th anniversary.
The 5 year mandate of CESBIO's director, Laurent Polidori, ended on December 31st, and even if we did celebrate it, with 10 people in the room and the rest of the lab on Zoom, we will still need to really celebrate his leaving next time he returns from Brazil, where he'll soon be a professor of remote sensing. This blog is a good testimony of how the laboratory progressed under his calm, attentive and enlightened direction.
A new team, with Mehrez Zribi as director, will lead CESBIO for the 5 next years. Valérie Demarez and Lionel Jarlan will be deputy directors, and Gilles Boulet and Olivier Hagolle (yes, me) will lead the two teams of CESBIO (Modelling and Observation resp.).
Mehrez Zribi - Director; Valérie Démarez - Deputy Director; Lionel Jarlan - Deputy Director; Gilles Boulet - Modelling team leader; Olivier Hagolle - Observation team leader
We will also have to celebrate the retirement of two pillars of the CESBIO lab, Yann Kerr and Gérard Dedieu, who laid the foundations of the laboratory, defined successful satellite missions, produced a large corpus of literature and trained a lot of us. Although they will still work with us on a voluntary basis for a few years (as emeritus researcher and SMOS PI for Yann, and VENµS PI for Gérard), 2021 marks the beginning of a big change for the laboratory. We will still need their long term vision and deep knowledge of the land remote sensing world.
Happy retirees: Yann Kerr, Gérard Dedieu
We have also welcomed several new researchers, promoted new doctors, started a new space mission, Trishna, and seen the departure of key personnel, such as Bernard Marciel, who took care of our building and logistics for several years, and sometimes managed to get us air conditioning before winter (which is not easy at the University Paul Sabatier). So if you ever see drunk researchers from CESBIO in Toulouse at the end of 2021, it will be a good sign that we have beaten the virus and started to celebrate all the parties we have been accumulating.
Looking for a photograph of Gérard, I found out that he is well prepared for retirement, so let's not worry about him.
-
1:02
Free and open data: fine, but who pays for the processing?
sur Séries temporelles (CESBIO)
In the previous post, Olivier advocated for the open data policy in remote sensing. Although Olivier is facing some actual resistance because the Sentinel-HR mission would step on the toes of industrial champions, my feeling is that there is now a large consensus on this issue. The economic and social benefit of the open data policy in remote sensing is well accepted, especially in the scientific community. Yes, scientists are rational people and they prefer not to pay rather than... to pay.
Talk about free data!
I think that the discussion should go beyond the cost of the data itself. Sentinel-2 generates 1.6 terabytes of compressed raw imagery every day. It's great that the data is free, but how do I handle that? Currently, a 1 TB hard drive costs about 50 €, so storing all Sentinel-2 data would cost me about 30 k€ every year. Let's assume my department tries to optimize the expenses by subscribing to a cloud storage service. Google provides an example of pricing for a storage usage of 160 TB plus bandwidth consumption spanning multiple tiers: it costs 7,500 € per month, and this amount of storage is largely insufficient for large scale processing of Sentinel-1 or Sentinel-2 data.
Of course I don't need to store all the Sentinel data: I could download them and delete the files once the processing is done. Yet processing data is costly too. Amazon CPU rates range from $94/year to $2,367/year. As an example, the CPU time needed to generate snow and ice products over Europe from Sentinel-2 since 2016 was about 100 years of CPU!
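As a back-of-the-envelope check of these orders of magnitude, here is a small Python sketch (the unit prices are the rough figures quoted above, not actual quotes):

# Back-of-the-envelope costs for storing and processing Sentinel-2 data.
# Unit prices are the rough figures quoted in the text, not actual quotes.
DAILY_VOLUME_TB = 1.6            # compressed raw Sentinel-2 data per day
DISK_PRICE_EUR_PER_TB = 50       # approximate price of a 1 TB hard drive

yearly_volume_tb = DAILY_VOLUME_TB * 365
storage_cost_keur = yearly_volume_tb * DISK_PRICE_EUR_PER_TB / 1000
print(f"New data per year: {yearly_volume_tb:.0f} TB, disk cost: ~{storage_cost_keur:.0f} k€")  # ~584 TB, ~29 k€

CPU_YEARS = 100                  # snow & ice products over Europe since 2016
CPU_PRICE_USD_PER_YEAR = 2367    # upper bound of the quoted cloud CPU rates
print(f"Processing cost: ~${CPU_YEARS * CPU_PRICE_USD_PER_YEAR / 1000:.0f}k")   # ~$237k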
So far the computation took approximately 1 million hours CPU, that is 1 century of CPU time
[https:]]
— Simon Gascoin (@sgascoin) November 12, 2020
These estimates of storage and processing costs are probably inaccurate but they give an order of magnitude. The conclusion is that although data is free, the cost of storage and processing is out of reach for many research projects, not to mention small businesses. This is why Google Earth Engine is a huge success in the remote sensing community (despite the warnings of some authors..). Recently, I was also struck by this sentence in the recent announcement of the release of Landsat collection 2:
"Collection 2 was processed in the Amazon Web Services (..) at a clip of 450,000 scenes per day—a speed that enabled the reprocessing of the entire archive in just five weeks. By comparison, it took 18 months to process Collection 1 in 2016, at a rate of 25,000 scenes each day." (USGS, 01 Dec 2020)
Due to economies of scale, unit costs of storage and processing are significantly lower for Google and Amazon than for a small company or a university. But can we rely on tech giants to do public research? Is it sustainable for a startup company to build a commercial application on Google Earth Engine[1]? Should we build operational services for the monitoring of climate, agriculture and water resources on privately-owned data centers? The DIAS are an alternative to Google/Amazon, but the costs are still too high for many users (we investigated this option in our lab to replace our current infrastructure).
A public cloud for the public good
I wonder how much a free, public cloud service based on open source software would cost, one that gives everyone the possibility to tap into the power of the Copernicus data in a transparent and reproducible way.
The Copernicus 2019 market report indicates that "the EU has invested 8 billions € into this program from 2008 up to 2020 (...) Over the same period, this investment will generate economic benefits between EUR 16.2 and 21.3 billion (excluding non-monetary benefits)."
The EU has made the Copernicus Earth observation program a flagship of the European space policy thanks to the high quality of its satellite fleet and its open data policy. There is evidence that the open data policy is generating more "monetary benefits" than its investments. It is time to evaluate the cost of giving everyone the ability to freely process these data. Not to mention the potential economies of scale in energy consumption by concentrating computation and storage.
"A public cloud for the public good" it sounds like a nice program, isn't it?
Found out yesterday @ECMWF ERA5-Land is now available on #EarthEngine as well with 69 variables! Excited about the potential for easy coupling of climate and #remotesensing. To celebrate I made an animation of 18 of my favorite variables using #rstats #ggplot. pic.twitter.com/wwHWrKpOUz
— Philip Kraaijenbrink (@philipkraai) December 17, 2020
[1] From Earth Engine FAQ: "Earth Engine's terms allow for use in development, research, and education environments. It may also be used for evaluation in a commercial or operational environment, but sustained production use is not allowed."
-
22:36
Free and open data pays off
sur Séries temporelles (CESBIO)As for most users of Copernicus or LANDSAT data I guess, the advantages of free and open remote sensing data are so clear that it's hard to imagine they could be challenged. However, in the framework of Sentinel-HR phase 0, we prepared arguments in case our requirement for free and open data is questioned.
Table of contents
- Why should users pay for data?
- The LANDSAT example
- Economic benefits of free and open data
- Social benefits of free and open data

Why should users pay for data?
Of course, remote sensing data from private companies (Planet, Airbus, Maxar...) are not free. The hard work to make satellites and produce data must be funded, and companies are there to earn money.
There have also been examples of state-owned satellites whose data were commercialized, such as SPOT, Pléiades, and LANDSAT at certain periods of time. With the exception of LANDSAT, most of these data were provided by satellites which perform observations on demand: these satellites are tasked to optimize the acquisitions according to the orders, and the satellite is pointed towards the site to acquire whenever possible. Such satellites usually have a limited observing capacity, and in our capitalistic economies, asking the customer to pay is a classical way to choose which scene will be observed.
Thanks to the development of commercial satellites, states are relieved of the need to fund the investment in the space infrastructure, and they can limit their intervention to missions which do not have a straightforward commercial application. However, it turns out that states or local authorities are also large customers for the data of these satellites, e.g. for defense applications, for research purposes, or for local land management. So, at the European scale, is it really cheaper to let industry build and fund high resolution satellite missions and buy a large quantity of data, or to build public satellite missions which ensure free and open access to data? I think the latter is cheaper, except maybe for some niches, but feel free to correct me, I'll be happy to learn.
When satellites have a sufficient observation capacity, they can become systematic observation satellites, which observe almost all the landscapes that enter their field of view, with no need to change the satellite orientation to observe a new scene. I know of only one public decametric or metric resolution satellite mission that performed systematic observations and still tried to sell its data: the LANDSAT mission, until 2008.
The LANDSAT example
The information below comes from the following paper. Quoted sentences are in italics.
Michael A. Wulder, Jeffrey G. Masek, Warren B. Cohen, Thomas R. Loveland, Curtis E. Woodcock, Opening the archive: How free data has enabled the science and monitoring promise of Landsat, Remote Sensing of Environment, Volume 122, 2012, Pages 2-10, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2012.01.010.
In October 2008, Landsat data became free and open data. Before that, costs for an individual photographic image varied from $20 (1972–1978) to $200 (1979–1982) for MSS digital data; digital data ranged from approximately $3000 to $4000 for TM data (1983–1998), and $600 for ETM+ data (1999–2008). Prior to October 2008, no calendar month ever recorded more than 3000 scenes sold in a given month.
Here is what happened after October 2008 :
In red, the number of Landsat scenes distributed each month after the data became free and open in October 2008. The very small blue bar in the bottom left corner is the maximum number of scenes bought in a given month when LANDSAT data were not open (from Wulder et al., 2012).
In less than three years, the number of scenes downloaded each month was multiplied by 100! And that was just the beginning: in the following years, the download rate continued to increase until Sentinel-2 came into operation.
Cumulative number of downloaded Landsat Scenes [https:]
A few hours ago, the French Cartographic Institute (IGN) announced the release of its databases as free and open data. This decision was justified by a report from the French Parliament: "It's not the sale of data that creates its value, it is its circulation". To justify this change of policy, the report states that: "Free dissemination and reuse of sovereign geographic data implies that the production of sovereign geographic data must be financed by state subsidies, if not by the sale of the data. If the open data business model is empirically verified, a return to the public purse will be achieved through taxes on the additional wealth created by the release of the data."
After a policy change was envisaged during the moderate center-right Trump administration, the LANDSAT Science Team studied the benefits of the free and open data policy for LANDSAT data; quoted sentences are in italics.
Zhe Zhu, Michael A. Wulder, David P. Roy, Curtis E. Woodcock, Matthew C. Hansen, Volker C. Radeloff, Sean P. Healey, Crystal Schaaf, Patrick Hostert, Peter Strobl, Jean-Francois Pekel, Leo Lymburner, Nima Pahlevan, Ted A. Scambos, Benefits of the free and open Landsat data policy, Remote Sensing of Environment, Volume 224, 2019, Pages 382-385, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2019.02.016.
The National Geospatial Advisory Committee (National Geospatial Advisory Committee Landsat Advisory Group, 2014) analyzed sixteen economic sectors (e.g., agriculture, water consumption, wildfire mapping) where the use of Landsat data lead to productivity savings, and estimated the economic benefit of Landsat data for the year 2011 as $1.70 billion for U.S. users plus $400 million for international users. Many of the sixteen economic sectors are directly associated with U.S. federal, state and local government activities (e.g., risk assessments, mapping and monitoring activities). In addition, the open data policy is particularly beneficial to government, university, and commercial research groups and organizations that have limited budgets.
An economic study tried to measure the economic benefits of the Landsat open data :
John Loomis, Steve Koontz, Holly Miller, Leslie Richardson, Valuing Geospatial Information: Using the Contingent Valuation Method to Estimate the Economic Benefits of Landsat Satellite Imagery, Photogrammetric Engineering & Remote Sensing,Volume 81, Issue 8, 2015, Pages 647-656, ISSN 0099-1112, [https:]]
Based on a survey of 14,000 users of all kinds, the authors tried to determine the price at which users would stop buying data, and then how much benefit from their activity would be lost. The results are provided in the table below and show that at a cost of $100 per image, $46 M would be lost each year.
Of course, I do not have enough economics skills to criticize the study, but one sentence in the discussion struck me, because I think we might be in that case: If the users who drop out of the market more quickly as the price per scene increases (i.e., they are not willing to pay as much per scene) obtain a greater share of the scenes than those users who stay in the market as the price increases, the results in Table 5 represent a lower bound on the annual economic loss to society associated with increasing the price of the imagery.
Let's look at just one of the many products made at European continental scale with Sentinel-2 data, in which CESBIO was involved: the snow and ice fraction cover product. This product is useful for hydrology (how much water will be available for irrigation or to fill the dams after melting?), for biodiversity (plants are different where the snow stays longer), for climate studies of course, as well as for recreational purposes (should I take snow equipment for my hike this week-end?). This product involves 1,500 tiles and 365/3 images per tile per year (there is overlap in northern regions, so one image every third day on average); we have processed 5 years of data so far, and the exploitation will go on for 2 more years. The cost of the project was 1.5 M€, thanks to the free and open data. Let's consider a cost of only 10 € per scene (which is probably a very low fee; Landsat data were sold for $600):
- Number of images: 1,500 tiles x 365/3 images per year x 7 years ≈ 1,280,000
- Total data cost at 10 € per scene: 12.8 M€
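A quick sanity check of these figures, in a few lines of Python (all the numbers come from the paragraph above; the 10 € fee is the hypothetical price discussed in the text):

# Quick check of the data-cost estimate above (figures taken from the text).
tiles = 1500                        # Sentinel-2 tiles covering Europe
images_per_tile_per_year = 365 / 3  # one image every third day on average
years = 7                           # 5 years processed + 2 more years of exploitation
price_per_scene_eur = 10            # hypothetical (very low) fee per scene

n_images = tiles * images_per_tile_per_year * years
data_cost_meur = n_images * price_per_scene_eur / 1e6
print(f"{n_images:,.0f} images -> {data_cost_meur:.1f} M€")  # ~1,277,500 images -> ~12.8 M€, vs a 1.5 M€ project budget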
The data cost would be almost 10 times the current cost of the project, which probably means the project would not exist, and I doubt it would even exist at 1 € per scene. Could we imagine selling Sentinel-2 data at 1 € per image? Well, let's look at Planet. Planet has a special offer for research and education, which allows users to download 5,000 km² of data for free. 5,000 km² is less than half the surface of a Sentinel-2 image. As this offer is an incentive for future users, and as I expect Planet to be rather generous with it, I guess the commercial value of 5,000 km² is more than 1 €.
The Copernicus-NG study
The European Copernicus programme is preparing a new generation of Sentinels 1 to 6 for the 2030 horizon. Before studying the new missions, a comprehensive study of user needs was carried out, which included an economic study to consider a possible change of data policy. It also includes a nice summary of the pros and cons of adding a fee to Sentinel-NG data.
Study on the Copernicus data policy post-2020, Nextspace Final Report. 2 February 2019
Three options for changing the data policy were considered, the baseline option being to keep the current data policy:
- Allowing access only to European citizens
- Access allowed against a fee for all users with the exception of European public institutions
- Access allowed for all users, but with no right to redistribute the data (except upon agreement). This option would prevent Amazon or Google from generating revenue from Sentinel-2 data.
As seen in the study, these options would have difficult legal implications and diplomatic consequences (Europe's access to data from other countries could be denied), but let's look at the economic impact according to the study:
Economic impact of the three changes studied for the Copernicus data policy:
- Option 1 (access only in Europe): operational costs 7 M€/year; loss in economic activity in Europe 700 M€/year (-13%)
- Option 2 (fee on data): operational costs 11 M€/year; loss in economic activity in Europe 2,300 M€/year (-48%)
- Option 3 (redistribution forbidden, except upon agreement): operational costs 9 M€/year; loss in economic activity in Europe 220 M€/year (-5%)
The operational costs would cover the need to enforce the regulations and to check that users comply with the conditions. Of course, the option with a fee would also require setting up services to advertise, negotiate and collect the fees, and to provide support on the billing process.
The losses in economic activity account for the fact that data use relies on partners from outside Europe that would need to be replaced, for reduced export possibilities due to the restrictions, and, for option 2 with fees, for all the activity that would not happen because of the need to pay for the input data. For option 2, the study concludes that the reduction in economic activity would be as high as 50%.
Although I have not heard whether the Copernicus programme intends to change anything regarding the data policy, based on the report's conclusions, it does not seem likely.
More economic considerations
How to start a new business based on remote sensing data?
Now, imagine you are a company and you have a good idea for a product that could be successful and generate revenue, for instance estimating evapotranspiration in agricultural fields to provide irrigation advice. To be credible, you need to demonstrate it over a region such as Occitania (10 tiles, 2 years). Can you afford to invest 180 k€ (10 tiles x 365/4 images per year x 2 years x 100 €, assuming 100 € per image) just to make this demonstration? Will you find funding for that, while you already need to fund the R&D investment?
Sentinel-2 global mosaic 2019 by EOX.
Without the free and open data policy, EOX could not have made its fantastic cloudless global mosaic, which is a perfect tool to explore our planet and a good showcase for EOX's remote sensing skills. Without free and open data, Sinergise's Sentinel Playground would not exist, and Sinergise would probably still be a small software company. In France, Kermap, Geosys and many others benefit from these data.
Cloud processing
Given the data volume, the current tendency is to avoid data transfers and to process the data where they are stored. We have seen a large number of private operators and public platforms proposing cloud computing facilities with access to the data archive. If data are not free and come from a single distributor, it will become more difficult for independent organizations to duplicate archives and offer them within their platforms. Of course, contracts could be signed, but probably not for free, and if two data sources belong to two different operators, will a given platform offer access to both of them?
Cloudy data have value when they are free
Let's forget about costs and budgets, and suppose we live in a good society where users have some money to do their research. Of course they are asked to save money and cannot buy everything they want. A November image with 35% cloud cover has the same price as a cloud-free image in September: should they buy it? As happened with LANDSAT when the data had to be paid for, only the almost cloud-free images get bought, and more than half of the images are never ordered, although they have intrinsic value.
Social benefits of free and open data
Science for citizens, NGOs, journalists
I don't have statistics, but free and open data enable the surveillance of our planet by citizens, NGOs and journalists. Just take a few examples from citizen Gascoin:
- A mining company polluted a Siberian river, in 2016, and Sentinel-2 witnessed it. In this case, it is true that free and open data can be detrimental to corporate profits (the company was fined).
- Scientists issued a warning about a glacier about to collapse, based on Sentinel-1 data, triggering an evacuation and maybe saving lives.
- A video by Simon based on LANDSAT-8 data, on the dynamics of sand near the Bassin d'Arcachon, was relayed by the local press, driving a large number of visits to this blog and raising public awareness. Simon would not have published this video had he had to pay 5,000 € for the images.
- Pierre Markuse's great images of forest fires from Sentinel-2 or Landsat-8 are often used in the press to inform the general public about the extent of the affected regions.
Free and open data are a great way to let students manipulate data and learn how to conduct a remote sensing project. A lot of cloud platform users are students; this is also the case for Theia.
Research
In many European countries, research is relatively poorly funded, and researchers spend a lot of their time looking for funding. We already lose too much time on proposal writing; let's not add the need to find more money to buy the data we need, negotiate the cost of accessing these data, keep track of the number of images ordered, and find more money if the project works and we need more data.
Restricting data access for the research community might slow down the progress of Earth observation science, and therefore reduce the impact of remote sensing data on society.
Earth monitoring
With costly data access, continental-scale products, such as the Copernicus Pan-European layers, the snow and ice layer, the Common Agricultural Policy products or DLR's World Settlement Footprint, would be updated less often. Europe could lose a good way to monitor its lands. Even worse, global-scale products would become almost unaffordable. Similarly, less developed countries would struggle to afford access to Copernicus data to monitor the state of crops or forests on their territory.
Conclusions
This post, written with the help of Simon Gascoin and Julien Michel, might look biased (it probably is). But we have not found studies that argue against the free and open data policy of Copernicus. If you are aware of any, please mention them in the comments; we are not specialists of the economics literature. As summarized above, several studies show that the huge development cost of satellite systems such as Landsat or the Copernicus Sentinels pays off in the long run, in terms of service activity and better knowledge of the state of our planet, at global and local scales.
-
23:43
How to make a mosaic of Theia snow products in three command lines
sur Séries temporelles (CESBIO)
Assuming that you have downloaded several products from Theia and unzipped them in the same directory, and that you have a Linux OS with the GNU Parallel, GDAL and OTB command line utilities installed...
Then, in your terminal, type:
# Get the color table
wget [https:]
# reproject all SNW raster products to a common system (here Web Mercator) and assign 255 to nodata
parallel gdalwarp -srcnodata 255 -overwrite -r near -t_srs "EPSG:3857" {} {/} ::: $(ls SEN*/*SNW*.tif)
# Apply color table
parallel otbcli_ColorMapping -in {} -out c{} uint8 -method.custom.lut LIS_SEB_style_OTB.txt ::: $(ls S*SNW*.tif)
# Merge colored images and set 0 to nodata
gdal_merge.py -n 0 c*tif -o mosaic.tif -co COMPRESS=DEFLATE
Here the output is yesterday's snow cover map of the entire French Alps (10 December 2020). Snow is cyan, clouds are white.
-
23:35
Preparing the TRISHNA mission in the Pyrenees
sur Séries temporelles (CESBIO)
As part of the preparation of the TRISHNA thermal mission (CNES/ISRO), CESBIO is setting up an experimental system to monitor surface temperature in the Pyrenees. The main objective is to better understand the added value of surface temperature for characterizing the internal properties of the snowpack. In particular, the assimilation of TRISHNA data could help improve the models that compute snowmelt in mountain areas. But it is a safe bet that the data collected at this site will also serve other applications in meteorology, ecology, limnology...
Thanks to our colleagues at OMP who manage the scientific facilities at the Pic du Midi, we installed a thermal camera on the façade of the observatory at 2,860 m above sea level. The camera points towards the Lac d'Oncet, south of the Pic du Midi. Its spectral range is 7.5 to 13 µm and its field of view 60° x 45°.
View of the study site in the visible vs. thermal infrared range (27 July 2020)
Infrared sensor (source: Apogee)
To complement these observations, on 24 November we installed at the Col de Sencours below (2,380 m) a mini weather station equipped with a narrow field-of-view infrared radiometer (36°, spectral range 8 to 14 µm), an air temperature and humidity probe, and a barometer. The radiometer points towards an esplanade in front of the building in order to provide a calibration point for the camera images.
Aqua Troll 100 diver
Finally, we immersed a "diver" in the small lake located above the Lac d'Oncet, at 2,240 m, to measure pressure and temperature in the water. If all goes well, thanks to the air pressure measured at the Col de Sencours, we will be able to derive the variations in water level in the lake, and perhaps even the mass of snowfall when the lake is frozen!
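For illustration, the water level above the diver can be derived from the hydrostatic relation; here is a minimal Python sketch (the sample pressure values are hypothetical, not actual measurements):

# Water column height above the diver from the hydrostatic relation
# h = (Pw - Pa) / (rho * g). Sample values are hypothetical.
RHO_WATER = 1000.0   # kg/m3, fresh water
G = 9.81             # m/s2

def water_level_m(p_water_pa, p_air_pa):
    return (p_water_pa - p_air_pa) / (RHO_WATER * G)

# e.g. the diver reads 130 kPa while the barometer at the Col de Sencours reads 76 kPa
print(water_level_m(130e3, 76e3))   # about 5.5 m of water above the sensor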
Variables measured by the in situ setup. Ts: surface temperature, Ta: air temperature, Qa: air humidity, Pa: air pressure, Pw: water pressure, Tw: water temperature
Here is a series of thermal images acquired on 28 October and synchronized with the Pic du Midi webcam:
[https:]]
Thanks to Pascal, Baptiste and Vincent R for the installation in these harsh conditions, and to Vincent B for the diver. Thanks also to the Commission Syndicale de la Vallée du Barège and its president Raymond Bayle for the authorization to install the station, and to Francis Lacassagne and Eric Chereau for their unfailing logistical support at the Pic. Thanks also to CNES/TOSCA for the financial support to the TRISHNA programme.
-
19:47
Evaluating ERA5 wind direction with Copahue Volcano plume
sur Séries temporelles (CESBIO)Copahue is an active volcano in the Andes on the Chile-Argentina border. It erupted in 2016 and a plume of smoke was visible in many Sentinel-2 images during that period. Looking at these pictures I thought it would be fun to use that plume as a giant anemometer to evaluate climate model data.
Sentinel-2 image of Copahue Volcano on 2016-11-07
I extracted the wind vector from ECMWF ERA5 climate reanalysis available at the hourly time step in Google Earth Engine. Since Sentinel-2 overpass time is approximately 14h30 UTC in that area, I queried only ERA5 data corresponding to the 14h time step.
// Copahue volcano crater
var pt = ee.Geometry.Point([-71.18,-37.86]);
// filter a year of the ERA5-Land hourly collection at 14:00 UTC
var uv = ee.ImageCollection("ECMWF/ERA5_LAND/HOURLY")
  .filterBounds(pt)
  .filterDate('2016-01-01','2016-12-31')
  .filter(ee.Filter.calendarRange(14, null, 'hour'))
  .select(['u_component_of_wind_10m','v_component_of_wind_10m']);
// Plot U,V
print(ui.Chart.image.series(uv,pt));
I downloaded this figure as a table, then extracted the wind vector of a few dates corresponding to cloud-free Sentinel-2 images in 2016.
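For reference, this is how the u/v components can be turned into directions for such a comparison; a minimal Python sketch, with hypothetical sample values rather than the actual ERA5 numbers:

# Convert ERA5 10 m wind components u, v (m/s) into directions.
# Sample values are hypothetical, not the actual ERA5 values used here.
import math

def wind_direction_deg(u, v):
    # Meteorological convention: direction the wind blows FROM (0 deg = north)
    return math.degrees(math.atan2(-u, -v)) % 360

def plume_azimuth_deg(u, v):
    # Azimuth towards which the plume is blown (downwind direction)
    return math.degrees(math.atan2(u, v)) % 360

u, v = 5.0, 5.0                    # hypothetical components (towards east, towards north)
print(wind_direction_deg(u, v))    # 225.0 -> wind from the south-west
print(plume_azimuth_deg(u, v))     # 45.0  -> plume points to the north-east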
ERA5 wind vector at Copahue Volcano at 14h UTC and Sentinel-2 images
The wind vector matches the plume direction only on 22 Jan and 21 Sep... That is a score of 2/7, ECMWF you can do better!
Note from ECMWF
Care should be taken when comparing this variable with observations, because wind observations vary on small space and time scales and are affected by the local terrain, vegetation and buildings that are represented only on average in the ECMWF Integrated Forecasting System.
[1] A more elegant solution would have been to draw the wind arrows in GEE directly, but I felt that would lead me to catch the Mrs-Armitage-on-Wheels syndrome.