A Planet is a dynamic website that aggregates, most often on a single page, the content of notes, articles or posts published on blogs or websites, in order to increase their visibility and to highlight relevant content in multiple formats (text, audio, video, podcast). It is an RSS feed aggregator, akin to a web portal.
You can read the post on the La Minute blog for more information about RSS!

    Dans la presse

    Séries temporelles (CESBIO)

    • The low predictability of agricultural cycles in semi-arid regions

      Posted: 26 January 2023, 1:01pm CET by mangiarotti

      The low predictability of agricultural cycles in semi-arid regions [1].

      In the early 2010s, chaotic attractors were captured for the cycles of cereal crops in North Morocco. This result was unexpected for several reasons. It was the first time a chaotic attractor was obtained from remote sensing data. It was also the first "weakly dissipative" dynamics directly captured from observational data. Moreover, its dynamics was produced by a canonical form with which only highly dissipative dynamics had been produced before. Was it a hapax? Or was there some generality in this result?


      To obtain a chaotic attractor is often an important result because it shows that low predictability, a common feature of environmental dynamics, is not just a matter of randomness, that is, of probability. Indeed, chaos is deterministic, which means that the state of the system at a given time entirely determines its state immediately after. Obtaining a chaotic attractor from observational data therefore gives a strong argument for determinism, and thus for a strong order hidden behind the low predictability.

      The theory of chaos is particularly well suited to studying dynamics of low predictability. It enables an alternative way of doing science [2]. Instead of trying to solve equations (which are not always known) as mathematicians commonly do, this theory fosters the use of a geometric space in which the evolution of the studied system's state can be followed over time. Since this space can be reconstructed directly from observational data, it can be used to study dynamics under real-world conditions. Moreover, since this space is independent of the initial conditions (because it contains them all), it can be used to unveil the underlying equations governing the observed dynamics, or to obtain a compact approximation of them. Other approaches have been developed, but very few of them were able to extract chaotic attractors from environmental or experimental time series.
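The reconstruction of such a geometric (phase) space from a single observed variable is commonly done by time-delay embedding. A minimal sketch in Python, on a synthetic signal (the embedding dimension and delay below are illustration values, not those used in the study):

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a phase space from a scalar time series by time-delay
    embedding: each reconstructed state is (x[t], x[t+tau], x[t+2*tau], ...)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example on a synthetic quasi-seasonal signal (a stand-in for an NDVI series)
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.3 * np.sin(3.1 * t)
states = delay_embed(x, dim=3, tau=20)
print(states.shape)  # (1960, 3): a trajectory in a 3-dimensional state space
```

Each row of `states` is one point of the reconstructed trajectory; plotting the rows in 3D reveals the geometry of the underlying dynamics.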


      First results obtained for a semi-arid region in 2011

      In 2011, a chaotic attractor was obtained for the cycles of cereal crops in North Morocco [3]. A time series of vegetation index observed by remote sensing was used for this purpose. A chaotic attractor is an oriented trajectory in the reconstructed state space, characterized by specific properties. It occupies a bounded part of this space and it is non-periodic: it never loops back onto itself, even after numerous loops (otherwise, it would be fully predictable) [4]. Such an attractor is also characterized by a divergence property that gives rise to unpredictability: two states initially close to one another in the phase space will diverge. To remain in a bounded space, its structure must present folding, and possibly tearing, properties. Finally, the geometry of the attractor should also be fractal, that is, self-similar under changes of scale. All these properties were confirmed for the first chaotic attractors obtained for cereal crops from remote sensing data: non-periodic and bounded trajectory, divergence of the flow (characterized by a positive first Lyapunov exponent), with folding, and characterized by a fractal dimension (D = 2.68 and D = 2.75).
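The divergence of nearby states can be illustrated on any chaotic system. A minimal sketch using the Hénon map (a standard textbook example, not the crop model): two states initially separated by 10⁻⁹ end up macroscopically far apart after a few tens of iterations, while both trajectories remain on the bounded attractor.

```python
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map, a classic chaotic system."""
    return 1.0 - a * x * x + y, b * x

# Two initial states separated by only 1e-9
(x1, y1), (x2, y2) = (0.1, 0.1), (0.1 + 1e-9, 0.1)
sep = []
for _ in range(40):
    x1, y1 = henon(x1, y1)
    x2, y2 = henon(x2, y2)
    sep.append(np.hypot(x1 - x2, y1 - y2))

# The separation grows by orders of magnitude (positive Lyapunov exponent),
# yet both trajectories stay within the bounded attractor.
print(sep[0], sep[-1])
```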

      Finding a fractal dimension was consistent for cereal crop cycles, whose yields are known to be difficult to foresee. However, the attractor obtained for crop cycles differed from the most usual chaotic attractors. Indeed, most of the well-known chaotic attractors, such as the Lorenz [5] or the Rössler [6] attractors, arise from highly dissipative systems. As a consequence, although developed in three dimensions, these attractors are flat (locally bidimensional) almost everywhere and their fractal dimension is close to two (respectively 2.05 and 2.06 for usual parameter values). Surprisingly, the first cereal crop attractor was not flat at all (see for example Fig. 1, the cross section of the chaotic attractor obtained), and its fractal dimension was close to three, which means that divergence and convergence speeds were of similar amplitude: the attractor belonged to weakly dissipative chaos.

      Fig. 1: Poincaré section of the weakly dissipative chaotic attractor (D = 2.75) obtained for the cycles of cereal crops.
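The contrast with highly dissipative systems can be made concrete: the dissipation rate of a flow is the divergence of its vector field (the trace of its Jacobian). For the Lorenz-1963 system this divergence is constant and strongly negative, which is why its attractor is nearly flat; a short check:

```python
import math

# Dissipation of the Lorenz-1963 flow:
#   dx/dt = s*(y - x),  dy/dt = x*(r - z) - y,  dz/dt = x*y - b*z
# The divergence (trace of the Jacobian) is the sum of the diagonal partial
# derivatives of the right-hand sides: -s - 1 - b, independent of the state.
s, b = 10.0, 8.0 / 3.0          # usual parameter values
divergence = -s - 1.0 - b        # = -41/3, about -13.67

# Phase-space volumes contract by exp(divergence * t): after one time unit a
# volume shrinks by a factor of about 1e-6, hence an almost flat attractor
# (fractal dimension close to 2). Weakly dissipative systems contract far less.
contraction_per_unit_time = math.exp(divergence)
print(divergence, contraction_per_unit_time)
```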

      Weakly dissipative chaos is rather rare in dynamical systems, and it was the first time this type of dynamics could be extracted directly from observational data. Previous cases of weakly dissipative chaos were obtained by Langford [7] and by Lorenz [8] in the early 1980s, but these were derived from theoretical considerations, not from observational data. This result provided a strong argument for determinism hidden behind the low predictability of real-world agricultural cycles. However, this 2011 result was local, since it was obtained from a single time series and was therefore specific to the chosen location. The generality of this result was thus questionable.


      New analyses applied to four other locations

      To investigate this question, the global modelling technique was applied again to four other provinces in Morocco: two located in the coastal area (El Jadida and Safi provinces), and two others located inland (Khourigba and Khenifra). The global modelling technique aims to obtain equations directly from observational time series, without strong hypotheses. Although often applied to a single variable, the approach can also handle multiple time series, or a single time series with gaps (see [9]). The GPoM package [10] was used for this purpose.
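GPoM is an R package; the core idea of global modelling can nonetheless be sketched in a few lines of Python. This is a simplified stand-in, not GPoM's algorithm: derivatives are estimated by finite differences and a fixed monomial library is fitted by least squares, shown on a toy system x' = y, y' = -x whose equations we can then verify we recover.

```python
import numpy as np

# Toy global modelling: identify polynomial ODEs dX/dt = P(X) from a time series.
t = np.linspace(0, 10, 2001)
x, y = np.cos(t), -np.sin(t)          # trajectory of the toy system x'=y, y'=-x
dx = np.gradient(x, t)                # derivative estimates (finite differences)
dy = np.gradient(y, t)

# Library of candidate monomials up to degree 2: [1, x, y, x^2, xy, y^2]
library = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
coef_x, *_ = np.linalg.lstsq(library, dx, rcond=None)
coef_y, *_ = np.linalg.lstsq(library, dy, rcond=None)
print(np.round(coef_x, 3))  # ~ [0, 0, 1, 0, 0, 0]  ->  x' = y
print(np.round(coef_y, 3))  # ~ [0, -1, 0, 0, 0, 0] ->  y' = -x
```

GPoM adds what makes this work on noisy real data: systematic model selection among candidate polynomial structures and numerical integration to test each candidate model.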

      In the analyses just published in the Journal of Difference Equations and Applications [1], time series of Normalized Difference Vegetation Index (NDVI) from AVHRR sensors were considered. The provinces were analysed one by one, two by two, and all four together, either in association (concatenated time series) or in aggregation (averaged time series). Models could be obtained and validated in numerous cases. One example, obtained from the time series of NDVI spatially averaged over the four provinces, is provided in Figure 2.

      Fig. 2: One of the chaotic attractors (in orange) newly obtained for cereal crop cycles in semi-arid regions from remote sensing data (in black). This attractor is weakly dissipative and was obtained from the time series of NDVI spatially aggregated over the four provinces of El Jadida, Safi, Khenifra and Khourigba.

      All the properties of chaos were confirmed in most of the cases, that is: deterministic dynamics characterized by a non-periodic, diverging, bounded and fractal attractor. Weakly dissipative chaos was also confirmed in all these cases (2.28 ≤ D ≤ 2.80) except one (for which the dimension was a bit lower, D = 2.10).

      Fig.3: Poincaré section of the weakly dissipative chaotic attractor presented in Fig. 2. This section is very thick and characteristic of weakly dissipative systems, as confirmed by the fractal dimension (here D = 2.80).



      Fig. 4: First return map deduced from Fig. 3. This pattern reflects a structure with complex folding, a necessary condition for chaos.

      Weakly dissipative chaos was thus confirmed for the cycles of cereal crops in semi-arid regions. How can such dynamics be explained? Their interpretation is challenging. It is obvious that energy, water, and nutrients are required to grow plants. However, the analysis is based on a single variable, here the vegetation index measured from remote sensing, which monitors the cycle of cereal crops as a whole. What is characterized, here, is the global dynamics of the plants and their agricultural cycle. It tells us that the plant dynamics, as observed from the point of view of the vegetation index, is close to optimal. In other words, the chosen plants, as well as the agricultural practices, are well adapted to the meteorological and hydric conditions of the soil at the scale of the provinces.



      [1] S. Mangiarotti & F. Le Jean, 2022. Chaotic attractors captured from remote sensing time series for the dynamics of cereal crops, Journal of Difference Equations and Applications. DOI: 10.1080/10236198.2022.2152336

      [2] C. Letellier, L.F. Olsen, S. Mangiarotti, 2021. Chaos: from theory to applications for the 80th birthday of Otto E. Rössler, Chaos, 31(6), 060402.

      [3] S. Mangiarotti, L. Drapeau, R. Coudret & L. Jarlan, 2011. Modélisation par approche globale de la dynamique du blé pluvial observée par télédétection spatiale en zone semi-aride. Rencontre du Non-Linéaire, 14, 103-108.

      [4] R. Lozi, "Giga-periodic orbits for weakly coupled tent and logistic discretized maps," in Proceedings of International Conference on Industrial and Applied Mathematics, Modern Mathematical Models: Methods and Algorithms for Real World Systems, 4-6 December 2004, edited by A. H. Siddiqi, I. S. Duff, and O. Christensen (Anamaya, New Delhi, 2004).

      [5] E.N. Lorenz, 1963. Deterministic nonperiodic flow, Journal of the Atmospheric Sciences, 20(2), 130-141.

      [6] O.E. Rössler, 1976. An equation for continuous chaos. Physics Letters A, 57(5), 397-398.

      [7] W.F. Langford, 1984. Numerical studies of torus bifurcations, International Series of Numerical Mathematics, 70, 285-295. DOI: 10.1007/978-3-0348-6256-1_19.

      [8] E.N. Lorenz, 1984. Irregularity: A fundamental property of the atmosphere, Tellus, 36A, 98-110.

      [9] S. Mangiarotti, F. Le Jean, M. Huc, C. Letellier, 2016. Global modeling of aggregated and associated chaotic dynamics, Chaos, Solitons & Fractals, 83, 82-96.

      [10] S. Mangiarotti, M. Huc, F. Le Jean, M. Chassan, L. Drapeau [ctb], GPoM: Generalized Polynomial Modelling. Version 1.3, CeCILL-2 licence.

    • How to use on-demand InSAR to analyse the Joshimath landslide

      Posted: 21 January 2023, 2:11am CET by Simon Gascoin

      International news media reported that the city of Joshimath in north India (Uttarakhand) was "sinking" due to a slow landslide. In January, many houses had developed major cracks in their walls and 145 families were temporarily relocated. The Indian space agency published a map of the ground deformation obtained by SAR interferometry using Sentinel-1 data. Although this method may seem complex, it is actually easy to do one's own analysis based on free tools, without downloading huge amounts of satellite data. I give a few hints below, but I can provide more details in the comments.

      First, the InSAR processing can be done remotely from Alaska Satellite Facility’s web portal (Vertex). Their baseline tool identifies Sentinel-1 pairs that are appropriate for InSAR processing. The computation is done on ASF servers using the GAMMA software. I looked for pairs associated with the product acquired on 2022-12-27 over Joshimath. This scene can be matched with the scene of 2023-01-08 (12 days later).  The highest level product is the vertical displacement map below:

      Vertical surface deformation between Dec 27, 2022 and Jan 08, 2023 assuming that there is no horizontal component to the change. Positive values indicate uplift and negative values indicate subsidence. Pixel spacing is 80 m. ASF DAAC HyP3 2023. Contains modified Copernicus Sentinel data 2023, processed by ESA.

      ASF/Vertex also provides a search engine to find all secondary scenes that match the coverage area of the reference (small baseline subset, SBAS). Using this tool I queried all pairs of Sentinel-1 scenes that are 12 days apart and submitted the InSAR processing of the 110 identified pairs. The "mintpy" option activates the generation of all the files needed to post-process the results using MintPy.

      The MintPy software should be installed locally in a dedicated conda environment. It allows the definition of a fixed reference pixel, which is used to infer the displacement values in the region of interest. Before running the MintPy main script (, it is recommended to clip the files obtained from ASF for each InSAR pair (unwrapped phase, correlation, DEM, view angles, water mask) to the region of interest. This can be done efficiently using gdalwarp -crop_to_cutline, without unzipping the downloaded files, thanks to the vsizip virtual file handler.
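The clipping step can be sketched as follows. The helper below only builds the gdalwarp command line; the zip and layer file names are hypothetical placeholders, and actually running the command requires GDAL to be installed.

```python
# Sketch: crop one layer of an ASF product to an area-of-interest polygon,
# reading straight from the downloaded zip via GDAL's /vsizip/ virtual file
# system (no unzipping). File names below are hypothetical examples.
def gdalwarp_clip_cmd(zip_path, layer_tif, cutline_geojson, out_tif):
    src = f"/vsizip/{zip_path}/{layer_tif}"   # path inside the archive
    return ["gdalwarp", "-cutline", cutline_geojson, "-crop_to_cutline",
            src, out_tif]

cmd = gdalwarp_clip_cmd("",
                        "S1AA_..._unw_phase.tif",
                        "joshimath_aoi.geojson",
                        "clipped_unw_phase.tif")
print(" ".join(cmd))
# run with: subprocess.run(cmd, check=True)   (requires GDAL installed)
```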

      Output of MintPy/ on the stack of interferograms obtained over Joshimath city from 2018-09-07 to 2023-01-08

      The MintPy package also provides a tool to easily obtain displacement time series at specific locations:

      Time series of ground deformation at three locations in the Joshimath city (same reference point as above)

      Ideally, these results should be evaluated using in situ GPS data. The reference point could also be better defined with good knowledge of the study area. But the above results are in agreement with reports that the first cracks appeared in October 2021. It is also possible to note an acceleration in 2023 in the area of the city where most of the damage was observed.

      #JoshimathIsSinking ! These custom maps show #Joshimath city with markers of approx areas where buildings are affected (as reported in press).
      From the Tunnel, the closest area is Parsari ward (500m) and the farthest is Marwadi ward (2600m), approx. @rajbhagatt @rajatpTOI

      — Thiyagu (@jThiyagu) January 8, 2023

      Joshimath is located in the Chamoli district, in the same valley that was flooded by a massive rock and ice avalanche in 2021 and that we studied using Pléiades optical stereo images.

      Top picture: Joshimath, view from Narsingh temple, Uttarakhand, India by ArmouredCyborg, CC BY-SA 4.0, via Wikimedia Commons


    • [VENµS] Full reprocessing of VM1 data available

      Posted: 19 January 2023, 6:26pm CET by Olivier Hagolle

      The full reprocessing of VENµS VM1 data acquired between the end of 2017 and the end of 2020 has just been completed, with improvements to the Level 1C and Level 2A processing. The data can be downloaded from the Theia website by clicking on VENµS VM1 (the data from the new on-going VENµS phase, with a one-day revisit over most sites, can also be accessed by clicking on VENµS VM5).

      L1C processing

      For the Level 1C, the main efforts were focused on increasing the percentage of valid scenes. Without ground control points, Venµs images would have geolocation errors of a few hundred meters. To improve the geolocation and the multi-temporal registration of images to a fraction of the resolution (5 m), ground control points are used, obtained by matching the images to a well-geolocated reference image. In the past, this was done by selecting a cloud-free Venµs image which was then carefully geolocated. But it turned out that a large number of sites show large seasonal variations. With larger field-of-view instruments, such as Sentinel-2, it is still easy to find invariant landscapes, such as cities, rocks, coasts… But with a field of view of 30 km, our method struggled to find good matching points between a winter and a summer image, all the more so when parts of the images were cloudy. This resulted in a high percentage of invalid images because the number of quality GCPs was too low.

      In this new reprocessing, reference images from two seasons have been used to process most of the sites. The image matching parameters and the thresholds were also optimized to provide a better percentage of valid images. It was a success, as the percentage of valid images increased from 48.5 to 53.4 %, with a gain of 3500 images.
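The image-matching idea can be illustrated with a minimal phase-correlation sketch. The actual Venµs processing matches many local chips against the reference image and keeps only high-quality points; the toy below, on a synthetic image, only shows how a single integer-pixel shift is recovered.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((128, 128))                 # reference image chip
new = np.roll(ref, (3, -7), axis=(0, 1))     # "new acquisition", offset by (3, -7) px

# Phase correlation: the normalized cross-power spectrum peaks at the shift
cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(new)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
dy = dy - 128 if dy > 64 else dy             # wrap indices to signed shifts
dx = dx - 128 if dx > 64 else dx
print(dy, dx)  # recovers the (3, -7) offset
```

Real matching must additionally handle sub-pixel accuracy, clouds, and the seasonal landscape changes discussed above, which is why two reference seasons and quality thresholds matter.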


      Some other marginal improvements were made to the radiometric calibration and to the L1C cloud detection, which uses the 2° parallax between two identical bands.

      L2A processing

      For the L2A processor, MAJA, the estimation of aerosols relies on two criteria:

      • a multi-temporal one, that assumes the surface reflectances change slowly with time
      • a multi-spectral one based on a relationship between the blue and red band.

      Our validations using Aeronet showed that the coefficients of the relation between the blue and red bands were not perfectly tuned, which caused a negative bias in the surface reflectances after atmospheric correction (these reflectances were too low, and sometimes negative). A better tuning has now been implemented and used in this reprocessing.
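The role of the blue-red relationship can be illustrated with a toy version of the multi-spectral criterion. Every number below is an assumption for illustration (MAJA's actual correction uses radiative transfer, not an additive aerosol term): we assume surface reflectances follow rho_blue ≈ k · rho_red, and pick the candidate AOT that best restores that relation after correction.

```python
import numpy as np

# Toy multi-spectral criterion (all coefficients are illustrative assumptions).
k = 0.5                                      # assumed blue/red surface slope
true_aot = 0.25
rho_red_surf, rho_blue_surf = 0.20, k * 0.20
toa_red  = rho_red_surf  + 0.05 * true_aot   # crude additive aerosol effect
toa_blue = rho_blue_surf + 0.12 * true_aot   # aerosols affect the blue more

def mismatch(aot):
    """Residual of the blue-red relation after correcting with a candidate AOT."""
    blue = toa_blue - 0.12 * aot
    red  = toa_red  - 0.05 * aot
    return abs(blue - k * red)

candidates = np.linspace(0.0, 0.6, 601)
best = candidates[np.argmin([mismatch(a) for a in candidates])]
print(best)  # recovers an AOT close to 0.25
```

The sketch makes the reported bias easy to understand: if k is mistuned, the retrieved AOT is systematically off, and the corrected surface reflectances are biased, exactly as observed on the sites whose soils differ from ARM.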

      We also benefitted from the latest version of MAJA, which makes it possible to use CAMS data to set the type of aerosols, and to process the cloud masks at a finer resolution (100 m).

      Here are some validation results for the aerosol optical thickness measurements:


      One can note that for all the sites but the last one, we have a large improvement of the RMS error. In fact, in the first processing, only the ARM site was used to set the parameters of the multi-spectral criterion to detect the aerosols. It was an error: it turned out that this site has a very reddish soil, and the coefficients that we derived there were not at all optimised for the other sites. Here we used average coefficients that improve the results generally, but degrade them on the ARM site.


      RMS error of the aerosol optical thickness for each Aeronet site observed in VENµS VM1, left for the first processing, and right for the new reprocessing


      Who did it?

      The improvements of Level 1C processing were brought by Arthur Dick (CNES) for radiometry and by Amandine Rolland (Thales) for the geometry, and of course with the help of the production team, while the improvements of Level 2A and the validation of the results were brought by Sophie Coustance (CNES). The whole reprocessing also required lots of efforts from the production and development teams at CNES (VENµS and THEIA ground segments). In particular many thanks to Marie France Larif (CNES), and Gwenaelle Baldit (Sopra Steria).


    • Extent of the Alps snow cover in January 2023 (before the storm)

      Posted: 18 January 2023, 10:13pm CET by Simon Gascoin

      In early January 2023 the snow cover area in the Alps was lower than the 30-year minimum.

      The snow cover area in January 2023 was also lower than the snow cover area at the same period last year. This prompted the publication of many depressing photographs of ski resorts. But the ongoing snowfalls in the Alps are changing the situation. This graph was obtained from our Alps Snow Monitor. We will update the graph when cloud-free conditions allow a new assessment.

      Top picture: Sentinel-2 image of Riezlern Austria on January 6, 2023.

    • A 10 m resolution land cover map of Sahel with iota2

      Posted: 9 January 2023, 10:57am CET by Jordi Inglada

      iota2 is the large-scale mapping software developed at CESBIO. iota2 takes high resolution satellite image time series (SITS), usually Sentinel (1 and 2) or Landsat, and produces maps over large areas. Maps of most usual variables of interest in remote sensing can be produced, since iota2 can compute user-defined functions at the pixel level, and perform regression and classification. The main feature of iota2 is not what is computed, but the possibility of doing it on huge volumes of data (long time series, large geographical areas). Indeed, iota2 manages the image data split into tiles, the time series, the reference data for training models, the spatial stratification if needed, etc.

      In the frame of the SWOT Downstream Program, a 10 m resolution land cover map of the Sahel region in Africa has been produced with iota2 using Sentinel-2 SITS covering the whole of 2018. This amounts to 290 tiles, or about 3 million km².

      iota2 map of Sahel with Sentinel-2

      The map is available for download from Zenodo.


      Land cover and land use maps provide important inputs for hydrological modelling. For example, determining land-cover changes allows a better estimation of runoff [1]. Different types of vegetation and soil composition on river plains can be used to estimate river roughness in case of flood [2,3]. Regarding SWOT and its global coverage, large scale land cover and land use maps would foster hydrological research and downstream activities.

      Can iota2 be used at a continental scale while preserving high resolution maps? What public data can be used to infer global maps? Which classes are both available and useful for hydrological studies? What quality level can we expect? These questions are partially addressed in this exercise.

      The evaluation region includes three important basins in western Africa: Senegal, Niger and Chad basins. Such hydrographic basins extend through different countries, and in general, in situ hydrological data are out of public reach. In some other cases, basins are just insufficiently gauged. Satellite data might then provide some relevant information to better understand basin dynamics. Since such basins are affected by heavy rainy seasons and floods, a new LULC map at high resolution would help to improve runoff and flood modelling.


      iota2 uses supervised classification for land cover map production. For supervised classification, we need images as predictors and reference data as targets to train the classifiers.

      Image data

      In terms of images, we decided to use Sentinel-2 time series because of their high spatial, spectral and temporal resolution. We used the data produced by the Theia Land Data Centre over the Sahel region. These are surface reflectance image time series processed with MAJA. The area is composed of 290 MGRS tiles and we used all the available dates between January and December 2018. This amounts to about 58 TB (around 200 GB per tile).


      Reference data

      Obtaining reference data over such a large area is a difficult issue. Field surveys are out of the question because of the cost of the operation. Large, well-funded projects, like CGLS or WorldCover, usually approach the problem via photo-interpretation, which reduces the costs, but still needs a fair amount of trained operators.

      We finally decided to use existing, lower resolution maps, and settled on CGLS [4] as our source of reference data. Since CGLS is a 110 m resolution map, using these labels for 10 m resolution images will inevitably introduce some label noise. This is not very different from what is done for OSO over the classes where Corine Land Cover is used as reference data. Research shows that random forests (RF) are rather robust to label noise [5].

      Since the CGLS maps are distributed as raster data, they were vectorised so that they could be used as reference data for iota2. The vectorisation was followed by a suppression of the smallest polygons and a splitting of the larger ones so that the sampling could be efficient.

      CGLS raster vectorised
      Methodology

      The classical iota2 workflow and its limitations

      The technical details of the standard iota2 workflow for land cover mapping are described in [6]. In a nutshell, the workflow is made of the following steps:

      1. Sampling the reference data
      2. Building the SITS
      3. Extracting the image features for the sampled locations to generate the training data
      4. Training the classifier
      5. Applying the trained classifier to all the SITS
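The steps above can be sketched on synthetic data. This is a hedged stand-in: in production iota2 trains a random forest on real reference polygons; here a dependency-free nearest-centroid classifier plays the classifier's role, and the "SITS" and "reference map" are simulated.

```python
import numpy as np

# Minimal sketch of the workflow: sample reference pixels, use their time
# series as features, train a classifier, then predict the whole image.
rng = np.random.default_rng(0)
n_dates = 12                                   # one feature per acquisition date
sits = rng.normal(size=(100, 100, n_dates))    # synthetic Sentinel-2-like cube
labels = (sits.mean(axis=2) > 0).astype(int)   # synthetic 2-class reference map
sits += labels[:, :, None] * 2.0               # make the classes separable

# Steps 1-3: sample reference pixels and build the training set
idx = rng.choice(100 * 100, size=500, replace=False)
X = sits.reshape(-1, n_dates)[idx]
y = labels.reshape(-1)[idx]

# Step 4: "train" (here, just a per-class mean time series)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Step 5: apply the trained classifier to the whole SITS
d = ((sits.reshape(-1, n_dates)[:, None, :] - centroids[None]) ** 2).sum(axis=2)
pred = d.argmin(axis=1).reshape(100, 100)
print((pred == labels).mean())  # overall agreement with the reference
```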

      The procedure can use a geographical stratification. This approach uses a geographical partition (provided as a map, which can represent eco-climatic areas), and a different classifier is trained for each defined region. The geographical stratification serves two purposes. The first is to reduce the intra-class variability, which is a problem when the same thematic class has different spectro-temporal signatures in different areas. The second is to reduce the amount of data, and thus the memory requirements, needed for training a classifier.

      Disk space

      For this exercise, it was the first time that iota2 had to process over 100 Sentinel-2 tiles over a time span of one year. This revealed an additional constraint: the storage capacity for all the input SITS. Indeed, for efficiency reasons, iota2 builds data stacks with all Sentinel-2 bands at 10 m resolution. This meant that we had to generate the whole map by chunks. See below for an explanation of how we proceeded.

      Geographical stratification

      In the case of the Sahel area, the different eco-climatic maps that we found were made of regions that were too large for the amount of disk space we had. Indeed, eco-climatic regions in the Sahel area extend in the East-West direction and a single region may intersect many MGRS tiles as shown below.

      Eco-climatic regions of the Sahel

      We therefore decided to generate a pseudo eco-climatic map with constraints on the region size. We used the 19 bio-climatic WorldClim variables to perform a clustering so that each region would contain a limited amount of tiles. We settled on the map below.

      Pseudo eco-climatic regions of the Sahel
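The clustering step can be sketched as follows: a tiny k-means in plain numpy, run on synthetic stand-ins for the 19 bio-climatic variables (the real work used WorldClim data and added constraints on region size; any clustering library would do in practice).

```python
import numpy as np

# Cluster "pixels" described by 19 bio-climatic variables into pseudo regions.
rng = np.random.default_rng(42)
n_pix, n_vars, k = 3000, 19, 4
base = rng.normal(size=(k, n_vars))            # k synthetic climate profiles
X = np.concatenate([base[i] + 0.2 * rng.normal(size=(n_pix // k, n_vars))
                    for i in range(k)])

centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):                            # Lloyd iterations
    d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
    assign = d.argmin(axis=1)
    centers = np.stack([X[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j]        # keep a center if its cluster empties
                        for j in range(k)])

print(np.bincount(assign, minlength=k))        # region sizes (pixels per cluster)
```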

      Each colour represents a set of climatic regions processed together. In order to avoid land cover discontinuities between the different areas, we added samples from the adjacent sub-regions for the training. In this way, adjacent classifiers have some common training samples and their decisions are similar on the boundary areas. We nearly submitted a paper to an AI journal explaining this smart strategy, but we finally decided that the Turing Award could wait.

      Results

      Quantitative validation

      For a quantitative validation of the map, we had to rely on the CGLS map itself. As is customary in ML, we used a hold-out set (not used for training) to compute confusion matrices. Since the reference data is a 110 m resolution raster and we produced a 10 m resolution map, we decided to produce two confusion matrices, one at each resolution. If we measured the quality of the map at 10 m resolution, the discrepancies could be due to both classification errors and “super-resolution” effects. The latter correspond to the cases where the classifier predicts the correct class thanks to the 10 m resolution of Sentinel-2, but the reference data cannot contain the correct class because of its coarser resolution.

      To compute the 10 m resolution matrix, we simply compare the 110 m label to the 10 m pixel of the map which corresponds to the centre of the reference data pixel. To compute the 110 m resolution matrix, we first degrade the 10 m resolution map to 110 m by majority voting and then compare with the reference label. Both matrices are shown below.

      confusion matrix, iota2 at 10m vs CGLS as reference
      confusion matrix, iota2 at 110m vs CGLS as reference

      We see that the agreement between our map and CGLS increases when we compare them at the coarser resolution, as expected. However, the general trends are similar.
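The two comparisons can be sketched with numpy on synthetic labels (hypothetical 5-class maps, and an exact 11:1 resolution ratio assumed for simplicity): a noisy 10 m map is compared to its 110 m reference via the centre pixel, then via majority voting.

```python
import numpy as np

# A 110 m reference pixel covers an 11x11 block of 10 m map pixels.
rng = np.random.default_rng(1)
ref = rng.integers(0, 5, size=(20, 20))            # 110 m reference labels
fine = np.repeat(np.repeat(ref, 11, 0), 11, 1)     # a perfect 10 m map ...
noise = rng.random(fine.shape) < 0.1               # ... with ~10% label noise
fine = np.where(noise, rng.integers(0, 5, size=fine.shape), fine)

centre = fine[5::11, 5::11]                        # centre pixel of each block
blocks = fine.reshape(20, 11, 20, 11).swapaxes(1, 2).reshape(20, 20, 121)
majority = np.array([[np.bincount(b, minlength=5).argmax() for b in row]
                     for row in blocks])

print((centre == ref).mean(), (majority == ref).mean())
# majority voting filters the pixel-level noise, so agreement is higher at 110 m
```

In the real comparison the per-class counts feed two confusion matrices instead of a single agreement score, but the mechanism is the same.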

      Qualitative analysis

      The resulting iota2 map and the original CGLS present several differences. In general, iota2 maps provide more detailed and granular results thanks to the higher resolution of the inputs. Regarding the classes that would be relevant for hydrological studies, we observe that: permanent and non-permanent water areas are better delineated in iota2 maps; urban areas in iota2 maps seem less compact and present some confusion with bare ground classes; and crop areas seem also less compact and more sparse than in CGLS, which seems realistic in some cases.

      iota2 map (left), CGLS (right) on Manatali Lake
      Comparison with ESA WorldCover

      During the final steps of the map generation, the ESA WorldCover project published their global 10 m resolution map [7] based on Sentinel-1 and Sentinel-2 data. This map has been produced by highly qualified teams (VITO, Brockmann Consult, CS, Wageningen University, Gamma Remote Sensing and IIASA) funded by ESA. The product validation report states an overall accuracy of about 74%, which is very good for a global product. The overall approach is very similar to the one used for the CGLS product: a supervised classification of the time series using Gradient Boosting Trees, with a very good set of reference data generated by trained operators.

      We thought it was interesting to compare our map to WorldCover's before dumping it into the trash bin, to see how much worse our results were. We decided to “validate” the ESA WorldCover with the same protocol used to validate the map produced with iota2. This allows comparing both products, via the confusion matrices, with respect to a third one (the CGLS map). The confusion matrices obtained for WorldCover over the region where the iota2 map was produced are shown below.

      Confusion matrix, ESA product (10m) vs CGLS as reference
      Confusion matrix, ESA product (110m) vs CGLS as reference

      We see that the accuracy scores are slightly worse than those of iota2. Of course, this can be due to several reasons: the WorldCover map could actually be better than CGLS. Also, the comparison may not be fair, since the iota2 classifier was trained on CGLS data (and compared against a held-out subset of it). However, these results are coherent with an independent, expert validation of our map and WorldCover's on a small area around Lake Chad.

      Qualitative comparison with ESA WorldCover

      At a large scale, the geographical distribution of the majority classes looks similar. However, iota2 maps are less homogeneous and look more granular in transition areas. Let's take a look at each class: vegetation and tree cover classes seem to differ, shrub (orange) and forest (dark green), probably because of different training samples and/or class definition criteria. Water classes seem better delineated in iota2 maps, especially non-permanent water. Urban classes are clearly better defined in ESA WorldCover, being more homogeneous and having fewer misclassifications as bare soil.

      Comparison at large scale: iota2 map (left) vs ESA World Cover (right)
      Large scale comparison at central Western Africa: iota2 map (left) vs ESA World Cover (right). Urban areas look more compact in ESA World Cover
      Closer view on vegetated areas: iota2 map (left) vs ESA World Cover (right). Shrub and forest class definition seem different
      Closer view on vegetated areas: iota2 map (left) vs ESA World Cover (right). Water class delineation looks better. Shrub and forest class definitions seem different
      Lessons learned

      We have found an innovative solution for land cover mapping over very large areas without deploying costly field surveys or intensive photo-interpretation campaigns. Indeed, by leveraging existing lower resolution maps as a source of reference data, we have produced a high spatial resolution product which seems to be on par with similar products for which reference data was specially collected.

      The current study was limited by the lack of reference data for the validation step. This has two main consequences:

      1. it is impossible to give an accurate assessment of the quality of the product;
      2. we can’t determine whether the disagreements with the CGLS maps come from classification errors or from the increased spatial resolution of our product.
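
      The second point could be partially tested by aggregating our 10 m labels to the 100 m CGLS grid with a majority vote before comparing, so that disagreements due purely to the resolution difference are removed. A minimal sketch of such block-majority downsampling, with toy labels rather than the actual maps:

      ```python
      from collections import Counter

      def majority_downsample(labels, block):
          """Aggregate a fine-resolution label grid to a coarser one by
          taking the majority class in each block x block window."""
          rows, cols = len(labels), len(labels[0])
          out = []
          for i in range(0, rows, block):
              out_row = []
              for j in range(0, cols, block):
                  window = [labels[a][b]
                            for a in range(i, min(i + block, rows))
                            for b in range(j, min(j + block, cols))]
                  out_row.append(Counter(window).most_common(1)[0][0])
              out.append(out_row)
          return out

      # Toy 4x4 grid aggregated to 2x2 (block = 2); for 10 m -> 100 m, block = 10.
      fine = [
          [1, 1, 2, 2],
          [1, 3, 2, 2],
          [4, 4, 5, 5],
          [4, 4, 5, 1],
      ]
      coarse = majority_downsample(fine, 2)
      ```

      Disagreements that persist after this aggregation would point to genuine classification differences rather than to the finer grid.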

      Unfortunately, the reference data collected for existing products are not publicly available, even though some of these products (CGLS or WorldCover, for instance) are funded with public money. Such data could have been used to assess the quality of our product.

      From a hydrological perspective, the iota2 map seems to provide better mapping of water areas, mainly around riverbeds and wetlands, compared to the two other available global maps (CGLS, ESA WorldCover), which would help in defining river models (river and floodplain widths). Crop areas look similar across the different maps; finally, urban and vegetated zones look better in ESA WorldCover.

      One could wonder why we produced this map if other products were available. First of all, the WorldCover product was not available when we started this work; but more importantly, one of the goals of the exercise was to assess the ability of iota2 to produce maps at a larger scale than the country-wide annual production for OSO. Indeed, it seems that every new land-cover initiative requires the development of a new processing chain: to the best of our knowledge, the processing chains used for CGLS, WorldCover, CCI Land Cover, etc. are not open source. iota2 is free/libre software and, as such, allows study, inspection, reproducibility and adaptation to other contexts. We have now demonstrated that it can scale beyond national mapping.

      The final point worth noting is that the most burdensome part of the product generation was dealing with the huge amounts of data ingested by the processing pipelines. Although iota2 can jointly process Sentinel-1 and Sentinel-2 image time series, we did not use SAR data, in order to reduce the data volumes. Our past experience shows that SAR brings only small improvements for annual land-cover mapping8, although these data can be useful for specific classes (e.g. urban, forest) and over tropical areas. The high redundancy between the time series made doubling the data volume not worthwhile for our exercise. One way to alleviate the problem would be to make available AI-ready fused data, i.e. generic embeddings of multi-modal data that could be used for different downstream machine learning tasks. Imagine a 5-dimensional vector at 10 m resolution every 5 days instead of 13 reflectances every 5 days, plus 2 backscatter coefficients every 6 days (times 2, for ascending and descending orbits), etc. This would yield a large compression ratio, but would also simplify feature extraction and therefore require less compute to train the machine learning models.
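
      The compression ratio suggested by this thought experiment can be worked out directly from the revisit figures quoted above (per pixel and per year, ignoring bit depth and metadata):

      ```python
      DAYS = 365

      # Raw inputs per pixel per year, as listed in the text:
      s2_values = 13 * (DAYS / 5)     # 13 reflectances every 5 days
      s1_values = 2 * 2 * (DAYS / 6)  # 2 backscatter coefficients, 2 orbits, every 6 days
      raw_total = s2_values + s1_values

      # Hypothetical fused embedding: a 5-dimensional vector every 5 days.
      fused_total = 5 * (DAYS / 5)

      compression_ratio = raw_total / fused_total  # roughly 3.3x fewer values
      ```

      Even this simple count, which leaves out all the other sensors mentioned, gives a ratio above 3, before any gain from the lower dimensionality of the features fed to the classifier.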


      This work was carried out in the frame of the SWOT-Downstream Program. Implementation and production were done by Arthur Vincent. Algorithm design was done by Jordi Inglada. Project management and supervision were done by Santiago Peña Luque.

      We are particularly grateful to CNES for the HPC infrastructure (data storage, computing resources) and CNES’ HPC technical support without whom these wonderful resources wouldn’t be operational.

      The map can be cited as Vincent, Arthur, Inglada, Jordi, & Peña Luque, Santiago. (2022). Sahel Land Cover OSO 2018 [Data set]. Zenodo. [https:]



      1. Basu, A.S.; Gill, L.W.; Pilla, F.; Basu, B. Assessment of Variations in Runoff Due to Landcover Changes Using the SWAT Model in an Urban River in Dublin, Ireland. Sustainability 2022, 14, 534. [https:]
      2. Wilson, M.D. and Atkinson, P.M. (2007). The use of remotely sensed land cover to derive floodplain friction coefficients for flood inundation modelling. Hydrol. Process., 21: 3576-3586. [https:]
      3. Emery, C. et al. (2021). Hydrogeomorphological parameters extraction from remotely sensed products for SWOT Discharge Algorithm. Geoglows-Hydrospace Conference 2021. [https:]
      4. Buchhorn, M.; Smets, B.; Bertels, L.; De Roo, B.; Lesiv, M.; Tsendbazar, N.-E.; Herold, M.; Fritz, S. Copernicus Global Land Service: Land Cover 100m: collection 3: epoch 2018: Globe 2020. DOI 10.5281/zenodo.3518038
      5. Pelletier, C., Valero, S., Inglada, J., Champion, N., Marais Sicre, C., & Dedieu, G. (2017). Effect of training class label noise on classification performances for land cover mapping with satellite image time series. Remote Sensing, 9(2), 173.
      6. Inglada, J., Vincent, A., Arias, M., Tardy, B., Morin, D., & Rodes, I. (2017). Operational high resolution land cover map production at the country scale using satellite image time series. Remote Sensing, 9(1), 95.
      7. Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, Linlin, Tsendbazar, N.E., Ramoino, F., Arino, O., 2021. ESA WorldCover 10 m 2020 v100. [https:]
      8. Inglada, J., Vincent, A., Arias, M., & Marais-Sicre, C. (2016). Improved early crop type identification by joint use of high temporal resolution sar and optical image time series. Remote Sensing, 8(5), 362.
    • sur Iceberg B-22A on the go

      Posted: 9 January 2023, 12:43am CET by Simon Gascoin

      B-22A is the largest iceberg in the Amundsen Sea, Antarctica (about 50 times the land area of Manhattan). It broke off from Thwaites Glacier’s tongue and remained grounded for 20 years. But now it’s on the go, as shown by this animation I made from Sentinel-1 SAR images (about one per month since June 2015, 75 frames in total):
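
      The exact frame selection behind the animation is not described; a plausible sketch of thinning a dense Sentinel-1 archive down to roughly one frame per month (the dates below are toy values, not the actual acquisitions) could look like this:

      ```python
      from datetime import date

      def one_frame_per_month(acquisitions):
          """Keep the first acquisition of each (year, month), so that a dense
          SAR archive becomes a roughly monthly animation sequence."""
          selected, seen = [], set()
          for d in sorted(acquisitions):
              key = (d.year, d.month)
              if key not in seen:
                  seen.add(key)
                  selected.append(d)
          return selected

      # Toy archive: a 12-day Sentinel-1 repeat cycle over three months.
      archive = [date(2015, 6, 7), date(2015, 6, 19), date(2015, 7, 1),
                 date(2015, 7, 13), date(2015, 7, 25), date(2015, 8, 6)]
      frames = one_frame_per_month(archive)
      ```

      The selected scenes would then be exported as images and assembled into an mp4 with any animation tool.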


      This animation (.mp4 file) was reproduced in this New Scientist article on January 11: Breakaway iceberg raises concerns over Antarctica’s ‘doomsday glacier’

      Several authors have warned that this event could have important implications:

      when the grounded iceberg [B22-A] is removed from the Thwaites embayment (likely in the near-future), a change to less favourable landfast sea-ice conditions is likely to occur. Any decrease in landfast sea-ice persistency or extent would ultimately increase the prospect of further retreat or disintegration of the Thwaites Ice Tongue.

      Miles et al. (2020) Journal of Glaciology


      Removal of this iceberg and subsequent loss of landfast sea ice is not only likely to modify regional ocean circulation, but an open-water regime might also allow the seasonal inflow of solar-heated surface water that increases basal melting.

      Wild et al. (2022) The Cryosphere

      The MODIS images below show that the B-22 iceberg broke off more than 20 years ago, on March 15, 2002 (source: NASA, Public Domain).


      B-22A is the fourth largest Antarctic iceberg!

      The four largest Antarctic icebergs in January 2023. Only D15A is grounded (stuck on the ocean floor). Map made from the U.S. National Ice Center inventory.
    • sur Lack of snow and lack of data in the Pyrénées Orientales

      Posted: 6 January 2023, 6:51pm CET by Simon Gascoin

      Yesterday I was at the Parc naturel régional des Pyrénées Catalanes headquarters to discuss what remote sensing can offer for monitoring the snowpack.

      The meeting participants well represented the various water uses in this territory (natural environments, agriculture, municipal water supply, hydropower, ski resorts). All of them expressed the need for better knowledge of the water reserves stored as snow, what hydrologists call the snow water equivalent (SWE). Unfortunately, satellites do not provide an immediate answer to this question. In France, the hydropower companies make field measurements and estimate the SWE, generally upstream of their facilities. But the data are not released, because they are considered too strategic in the context of the renewal of the hydropower concessions. The SHEM and EDF operators I have talked to are actually the first to regret it. But the decision not to release these data rests with their management, who may not share the same feeling of belonging to a community of water users.

      Yet the 2022 drought showed how important it is to bring the various stakeholders around the table and to share knowledge. In January 2022, the reservoir operators in the Pyrenees knew that the snowpack was sufficient to fill their reservoirs before the low-flow period, thanks to the abundant precipitation of December 2021. Water-use restrictions and a coordinated management of the dams made it possible to maintain the flow of rivers such as the Têt in the Pyrénées Orientales, while preserving agricultural uses in particular.

      This winter is starting very differently. The upper Têt sector was placed at the « alert » level by a prefectural order of December 30, 2022. The water levels in the reservoirs have remained very low because of the lack of precipitation in autumn. An exceptional water release from the Lac des Bouillouses had to be carried out to secure the drinking water supply of villages in the valley that had run dry. Moreover, the water reserves stored as snow are still very low. Without water reserves (natural or artificial), it will be difficult to satisfy all water uses in spring. The situation is all the more critical since a court has ordered an increase of the minimum flow of the Têt, following a complaint by France Nature Environnement. If the drought persists in 2023, more water will therefore have to be left in the river, with smaller upstream water reserves than last year.

      Snow-covered area in the Parc des Pyrénées catalanes seen by satellite since September 1, 2000 (corresponds to the upper Têt catchment)


      In Spain, the Ebro basin agency publishes very detailed bulletins on the state of the snowpack in the Pyrenean catchments, and the data are freely accessible. These estimates are based on in situ measurements, satellite images and modelling. This information would perfectly meet the needs of the managers I met yesterday. The instruments used by the Ebro agency to measure the snow water equivalent locally are the same as those used by EDF in France (cosmic-ray snow gauges). This difference in data sharing stems from the fact that hydraulic infrastructure in Spain was designed above all to support agriculture, whereas on the French side the dams are historically tied to electricity production.

      This changes everything, because agriculture is practised by a multitude of actors who fund a collective service, whereas hydropower involves one or two companies per valley, each managing its own measurement networks. But climate change and the growing pressure on water resources lead me to think that it is urgent to rethink the sharing of hydrometeorological data in the mountains, so that all water-management stakeholders can make sound decisions. It would also benefit academic research: for my own research, I use freely accessible data from Spain or the USA, because it is difficult to set up data-exchange agreements with the data holders in France.

      EDF’s non-release of snow and hydrological data is all the harder to justify given that the measurement networks were installed when EDF was a public company. Now that the company has been renationalized, perhaps there is a chance that things will change? For a private company such as ENGIE, why not make the sharing of certain data a condition for the award of the concessions?

      On the Météo-France side, snow depth data are freely available, but this is an anomaly that results from the will of an eminent Météo-France snow scientist. Their standard meteorological data (precipitation, etc.) are difficult to obtain for academic studies. Météo-France is moving towards open data, as IGN did recently, under instruction from their supervising ministry. So much the better. Data about our environment should be considered a common good, and therefore be accessible to all. This philosophy has prevailed for satellite observations with the advent of the Copernicus programme. Now in situ data must follow the same path, so that we can better understand, and adapt to, our rapidly changing environment.


      Bulletin on the state of water stocks (dams and snow) in the Ebro sub-basins on December 26, 2022

      Photo by Herbert Ortner, CC BY 3.0, [https:]

    • sur Multitemp blog is now 10 years old

      Posted: 19 December 2022, 8:35pm CET by Olivier Hagolle

      10 years ago, I managed to convince CNES to change the orbit of SPOT4 when it reached its end of life, to place it on a 5-day repeat orbit in order to simulate the repetitive observations of Sentinel-2. It was the SPOT4 (Take5) experiment. We managed to distribute Sentinel-2-like data (with L1C, L2A and L3A products) over 50 sites, to a lot of users who used these data sets to prepare their processing methods. Thanks to these data, at CESBIO, we were able to quickly distribute high-quality L2A products with MAJA, and to generate the first ever fully automatic land cover map at 10 m resolution and at country scale with Sentinel-2, using the data acquired in 2016.

      Communication was an important part of the project, and to provide information and news to SPOT4 users, we decided to start a blog. Our first post is now 10 years old!

      In this first post, I wrote some « visionary » sentences:

      The first Sentinel-2 satellite should be launched within the next two years, and the second satellite should follow 18 months later. Together, these satellites will provide us every fifth day with high-resolution images of all land areas… or of the clouds that cover them. Despite these clouds, users will be guaranteed access to cloud-free data at least once per month. The arrival of these data should therefore cause a revolution in the use of remote sensing data.

      However, I have to confess that I completely underestimated the success of this mission we had been promoting for years. I was expecting thousands of users, and we got hundreds of thousands!


      Coming back to our blog, we first intended to keep it only for the duration of the experiment, but as a matter of fact, providing news and writing posts was fun! I was soon joined by Simon Gascoin, and with the help of more than 20 co-authors, we have published 1000 posts in the last 10 years. Our blog has been a success, with more than 850,000 page views recorded. However, our audience is now slowly eroding, and we hardly publish a few articles per month. Simon and I are getting busier and busier, and perhaps we lack a little bit of inspiration; it is maybe time to greet some new editors from CESBIO!


      Page views per month for the last 10 years!
    • sur Some news from ESA regarding the coming launches of Sentinels 1 and 2

      Posted: 8 December 2022, 7:00pm CET by Olivier Hagolle

      In this article, ESA gave us an update on the timetable for the coming launches of the next Sentinel-1 and Sentinel-2 satellites. As you may have heard, there is currently a shortage of European launchers, due to the end of Soyuz launches from Kourou, the delays in the Ariane 6 delivery, and the initial uncertainties about the availability of the Ukrainian engines for the Vega-C rocket.

      It is therefore good news that the next Sentinel-1 and Sentinel-2 launches have been booked. Here are the anticipated launch dates:

      • SENTINEL-1C: first half of 2023
      • SENTINEL-1D: second half of 2024
      • SENTINEL-2C: mid-2024

      Sentinel-1C is the most awaited, since the breakdown of Sentinel-1B. Sentinel-2C will be launched almost 9 years after Sentinel-2A. It will replace it, and Sentinel-2A will be kept in orbit for some time, in case something happens to the operational satellites. We could have dreamed of using the availability of three satellites to improve the revisit, but the cost of operating three satellites does not seem to fit within the available funding. However, if several important users, such as the Copernicus services, asked for it repeatedly, maybe it could help convince the EU…

    • sur Feedback on hydrological monitoring of Telangana region in India using remote sensing

      Posted: 1 December 2022, 10:40pm CET by Sylvain Ferrant


      Claire Pascal, PhD student at CESBIO, under the supervision of Olivier Merlin (CNRS researcher) and Sylvain Ferrant (IRD researcher), brilliantly defended her thesis on the monitoring of water resources by satellite on November 18.

      Claire’s work focused on the Telangana region of India, where rainfed cotton and flooded rice farming dominate. The region has a wet season, during which 4 months of intense monsoon rains recharge aquifers and dam lakes and feed the large Godavari and Krishna rivers. It is followed by a long, 8-month dry season during which the drying of the soils requires irrigating the crops with these water resources. Three quarters of the irrigation volumes are pumped from groundwater, and one quarter from surface reservoirs (100 large dams, over 40,000 small hillside reservoirs). Claire studied the feasibility of monitoring these resources with current and future satellite data.

      Gravimetric satellites (GRACE and GRACE-FO) provide access to water mass variations at low resolution (300 km): surface water, soil moisture and groundwater. Passive microwave satellites (such as SMOS and SMAP) allow us to reconstruct soil moisture variations at an intermediate spatial resolution (25 to 40 km). Multi-spectral optical satellites (MODIS and Sentinel-2) allow the monitoring of crops and vegetation development at medium and high spatial resolution (1 km to 10 m). Stereoscopic satellites (Pléiades) at very high resolution (50 cm) allow the retrieval of the bathymetry of hundreds of small hillside reservoirs, whose cumulative regional capacity was until now unknown. The environmental variables estimated by these satellite missions constitute heterogeneous data sources that are used here to explore our ability to:

      • Disaggregate the low-resolution gravimetric signal with the variables obtained at higher resolution (see figure GRACE_method)
      Location of the Telangana state (113,000 km2), with its granitic part (pink, 67,000 km2) distributed on a 0.5° resolution grid. The black triangles correspond to groundwater depth observation wells of the Groundwater Department of Telangana. The state capital, the city of Hyderabad (in gray), concentrates 12 million inhabitants. The main rivers are in blue. (Pascal et al., HESS, 2022)
      • Explore the contribution of newer very high resolutions for monitoring water resources in small capacity hydrological structures.

      • Assess existing methods for quantifying rice irrigation using SMOS soil moisture observations and Sentinel-2 rice maps.
      Mapping of rice cultivation in Telangana state produced from Sentinel-1 and Sentinel-2 images(from Ferrant et al., 2017, 2019). Example for the dry season in 2017.

      The conclusions regarding space observation are as follows:

      • SMOS can estimate the volume of water in the root zone by combining data and models (this is not a result of the PhD, it was known before)
      • The set of hillside reservoirs forms a cascade of ungauged retention reservoirs for monsoon runoff, maintained by local populations. Their size is too small for current altimetry data (Jason) and also for the future SWOT data. The PhD evaluated their maximum capacity at about 30 mm thanks to bathymetry from Pléiades stereoscopy in the low-water period (empty reservoirs). This maximum capacity may seem limited compared to that of the large dams in this state (over 200 mm of cumulative maximum capacity), but the contribution of these small structures, located upstream of the large rivers and dams, is crucial for the surrounding irrigated agriculture. Claire showed that regional monitoring of the water volumes of this set of small reservoirs is possible using very high resolution data, acquired when the basins are empty, combined with the detection of water surfaces with Sentinel-2. Pléiades does not have sufficient imaging capacity to extend this method to the whole of India, but it will be possible with the future CO3D mission, provided that the elevation models are obtained when the reservoirs are empty (some revisit will be necessary). The Sentinel-HR mission could also meet these new needs, with more revisit but less accuracy.
      Filling of the « Water Harvesting System » of the 4 regions covered by the Pléiades stereoscopic acquisitions (Claire Pascal’s thesis, 2022).
      • The GRACE mission provides monthly variations of land water stocks at a resolution (~300 km) equivalent to the dimensions of the study area. These data need to be spatially deconvolved and disaggregated in order to estimate the variations of each hydrological compartment (especially the groundwater stock) in interaction with land use and irrigation practices. The PhD focused on the evaluation of existing disaggregation methods (15 publications worldwide) in order to propose a more realistic validation of the disaggregation. The different approaches, using soil moisture from SMOS, NDVI from MODIS and rainfall from TRMM, evaluated with a piezometric dataset provided by the state of Telangana and the Franco-Indian Groundwater Research Cell in Hyderabad, generally improve the spatial representativeness of the GRACE data. However, the uncertainty on the groundwater stock derived from the disaggregated GRACE data remains relatively high. Indeed, the recharge and abstraction fluxes of the water table are only indirectly linked to the variables available from remote sensing, which explains the difficulty of obtaining a predictive model from these observables alone. An improvement in the quality of the gravity signal and its resolution, as envisioned by future gravity missions (MAGIC), is therefore desirable.

      Spatialized scores of the GRACE spatial disaggregation (Claire Pascal’s PhD thesis, 2022).

      • In a final exploratory study, Claire investigated the presence, in some highly irrigated areas, of a significant dry-season soil moisture « bounce » in the SMOS product (this bounce is less present in the SMAP products). She linked the magnitude of this soil moisture signal to the extent of rice cultivation in each of the SMOS cells (25 km). These seasonal rice areas are estimated at high resolution (10 m) using the land cover map production chain (IOTA2) deployed on the CNES HAL cluster, from Sentinel-2 surface reflectances (L2A products generated by THEIA) for the 8 rice growing seasons between 2016 and 2019, at the Telangana state scale. These relationships are preliminary results that could be used to build a regional quantitative model of the water resources mobilized for rice irrigation. An upgrade of the SMOS resolution to 10 km, as proposed by the SMOS-HR mission, should improve the models. An internship is planned in 2023 to address this issue.
      Dynamics of SMOS soil moisture during the dry period, proportional to the extent of rice cultivation.
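
      The small-reservoir volume monitoring described above (a DEM of the empty basin from Pléiades stereoscopy, combined with an observed water extent from Sentinel-2) essentially boils down to integrating water depth over the submerged cells. A minimal sketch, in which the observed extent is summarized by a single water level and the 3×3 basin elevations are made up:

      ```python
      def reservoir_volume(dem, water_level, cell_area):
          """Volume stored in a reservoir, from a DEM of the empty basin and an
          observed water level: integrate the depth over every submerged cell."""
          volume = 0.0
          for row in dem:
              for ground in row:
                  depth = water_level - ground
                  if depth > 0:
                      volume += depth * cell_area
          return volume

      # Toy 3x3 basin (elevations in m), 10 m x 10 m cells, water surface at 102 m.
      dem = [[103.0, 101.0, 103.0],
             [101.0, 100.0, 101.0],
             [103.0, 101.0, 103.0]]
      volume_m3 = reservoir_volume(dem, water_level=102.0, cell_area=100.0)
      ```

      In practice the water level would be inferred from the detected shoreline rather than given directly, but the depth integration is the same.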
    • sur [VENµS] How long will the VM5 imaging phase last ? The sun will decide

      Posted: 15 November 2022, 10:44am CET by Olivier Hagolle

      The VENµS satellite was launched in August 2017, with two missions:

      • observing the Earth with a frequent revisit and a high resolution, under constant viewing angles
      • testing an electric propulsion system to change its orbit, and to demonstrate the possibility of maintaining the spacecraft on a very low-altitude orbit.

      As a result, VENµS has gone through several phases separated by orbital changes. Its imaging phases are VM1, at 720 km altitude, which lasted three years, and the current VM5 phase, at 560 km altitude, which started in March 2022.

      So, how long will the VM5 phase last? The sun will decide!

      The main limiting factor is the quantity of propellant within the satellite. VENµS has a limited amount of propellant in its tank, and at 560 km altitude the satellite is still slightly slowed down by atmospheric friction, which itself depends on solar activity. The propellant is used to maintain the speed and altitude of the satellite, and at some point we will run out of it.

      The more solar activity, the more atmospheric friction. We are now near the maximum of the solar cycle, and our Israeli colleagues who control the VENµS platform gave us the following estimates:

      • If the average solar activity is 75%, VM5 will last at least until May 2024
      • If the average solar activity is 50%, VM5 will last at least until December 2024

      These estimates are minimum values, as there is some uncertainty on the exact remaining quantity of propellant, so the lower estimate was used. This is good news: it means that if the satellite’s health stays good, we still have more than one year of acquisitions ahead of us, and maybe two!
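
      These lifetime estimates come down to a simple propellant budget: reduce the propellant estimate by an uncertainty margin, then divide by the monthly consumption, which grows with solar activity. A sketch with entirely hypothetical numbers (the actual propellant and consumption figures are not given in the post):

      ```python
      def months_remaining(propellant_kg, monthly_use_kg, margin=0.2):
          """Conservative estimate of the remaining station-keeping duration:
          apply an uncertainty margin to the propellant estimate, then divide
          by the monthly drag make-up consumption."""
          usable = propellant_kg * (1 - margin)
          return usable / monthly_use_kg

      # Hypothetical numbers: 4 kg left (with a 20% margin), drag make-up costing
      # 0.10 kg/month at 50% solar activity vs 0.16 kg/month at 75% activity.
      low_activity = months_remaining(4.0, 0.10)
      high_activity = months_remaining(4.0, 0.16)
      ```

      With these toy numbers the higher solar activity scenario shortens the remaining duration by over a third, which is the kind of spread seen between the May 2024 and December 2024 estimates above.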

      Current solar activity, from the Space Weather Prediction Center. It is currently increasing, with a maximum that should be reached in 2025. A mild activity was forecast compared to previous cycles, but the observations are above the expected activity…

      And finally, we do not need to save propellant for deorbiting: at the current altitude, if the orbit is not maintained, deorbiting occurs naturally after a few years (once again depending on solar activity).

    • sur Retour d’expérience sur le suivi hydrologique d’une région Indienne par télédétection

      Posted: 10 November 2022, 4:35pm CET by Sylvain Ferrant


      Claire Pascal, doctorante au CESBIO, sous la direction d’Olivier Merlin (chercheur CNRS) et co-dirigée par Sylvain Ferrant (chercheur IRD) a brillamment soutenu sa thèse sur le suivi des ressources en eau par satellite le 18 novembre.

      Les travaux de Claire se sont focalisés sur la région du Telangana en Inde où la culture non irriguée du coton et la culture inondée du riz dominent. La région présente une saison humide, durant laquelle 4 mois de pluies de mousson intenses rechargent les aquifères et les lacs de barrage et alimentent les larges rivières de la Godavari et de la Krishna. Elle est suivie par une longue saison sèche de 8 mois pendant laquelle l’assèchement des sols oblige à irriguer les cultures avec ces ressources en eau. Les trois quarts des volumes d’irrigation sont pompés dans les eaux souterraines, et un quart dans les réservoirs de surface (100 grands barrages, plus de 40 000 petites retenues collinaires). Claire a étudié la faisabilité d’un suivi de ces ressources avec les données satellitaires actuelles et futures.

      Les satellites gravimétriques (GRACE et GRACE-FO) permettent d’accéder aux variations des masses d’eau à basse résolution (300km) : eaux de surface, humidité des sols et eaux souterraines. Les satellites à micro-ondes passives (comme SMOS et SMAP) permettent de restituer la variation de l’humidité des sols à une résolution spatiale intermédiaire (25 à 40 km). Les satellites optiques multi-spectraux MODIS et Sentinel-2 permettent le suivi de la mise en culture et le développement de la végétation à moyenne et haute résolution spatiale (MRS, HRS, 1km à 10m). Les satellites stéréoscopiques (Pléiades) à très haute résolution (THRS, 50cm) permettent la restitution de la bathymétrie de centaines de petits réservoirs collinaires, dont la capacité cumulée régionale est jusqu’ici inconnue.  Ces variables environnementales restituées par ces missions satellite constituent des sources de données hétérogènes qui  sont ici utilisées pour explorer notre capacité à:

      • Désagréger le signal gravimétrique à basse résolution avec les variables obtenues à plus haute résolution, cf GRACE_méthode

      Localisation de l’état du Telangana (113 000 km2) dans la partie granitique de l’état (rose 67 000 km2) distribué sur une grille de 0.5? de résolution. Les triangles noirs correspondent à des puits d’observation de la profondeur de nappe du Groundwater Department of Telangana. La capitale de l’état, la ville d’Hyderabad (en gris) concentre 12 millions d’habitants. Les rivières principales sont en bleu. (Pascal, et al., HESS, 2022)

      • Explorer l’apport des très hautes résolutions plus récentes pour le suivi des ressources en eau dans les structures hydrologiques de petite capacité.

      • Évaluer les méthodes existantes de quantification de l’irrigation du riz à l’aide des observations d’humidité du sol SMOS et des cartes de riz Sentinel-2.

      Cartographie de mise en culture du riz sur l’état du Telangana produite à partir des images Sentinel-1 et Sentinel-2 (d’après Ferrant et al., 2017, 2019). Exemple pour la saison sèche 2017.

      Les conclusions concernant l’observation spatiale sont les suivantes :

      • SMOS peut estimer le volume d’eau en zone racinaire en combinant données et modèles (ce n’est pas un résultat de la thèse, c’était connu avant)
      • L’ensemble des retenues collinaires forment une cascade de réservoirs de rétention du ruissellement de la mousson, non jaugé, maintenue par les populations locales. Leur dimension est trop restreinte pour les données altimétriques actuelles (Jason) mais aussi pour les futures données SWOT. La thèse a évalué leur capacité maximale à environ 30 mm grâce à des bathymétries issues de stéréoscopie Pléiades en période d’étiage (réservoirs vides). Cette capacité maximale peut sembler limitée en comparaison de celles des grands barrages de cet état (plus de 200mm de capacité maximale cumulée) mais la contribution de ces petites structures, situées dans ces zones en amont des grandes rivières et barrages, est cruciale pour l’agriculture irriguée alentour. Claire a montré qu’un suivi régional des volumes d’eau de cet ensemble de petites retenues est possible en utilisant ces données THRS, acquises lorsque les bassins sont vides, combinées à la détection des surfaces en eau avec Sentinel2. Pléiades n’a pas une capacité de prise de vue suffisante pour étendre cette méthode à l’ensemble de l’Inde, mais ce sera possible avec la future mission CO3D, à condition d’obtenir les modèles numériques au moment où les réservoirs sont vides (une certaine revisite sera nécessaire). La mission Sentinel-HR pourrait elle aussi répondre à ces nouveaux besoins avec plus de revisite mais moins de précision.

      Remplissage du « Water Harvesting System » des 4 régions couvertes par les acquisitions stéréoscopiques Pleiades (Thèse de Claire Pascal, 2022).

      • La mission GRACE fournit les variations mensuelles des stocks d’eau terrestre avec une résolution (~300km) équivalente aux dimensions de la zone d’étude. Il faut déconvoluer et désagréger spatialement cette donnée pour parvenir à estimer les variations de chaque compartiment hydrologique (notamment le stock d’eau souterraine) en interaction avec les usages de sols et les pratiques d’irrigation. La thèse a porté sur l’évaluation des méthodes de désagrégation déjà employées (15 publications dans le monde) pour proposer une méthode plus réaliste de validation des désagrégations. Les différentes approches utilisant les humidités du sol issues de SMOS, le NDVI issu de MODIS, les précipitations issues de TRMM, évaluées avec un ensemble de données piézométriques fournies par l’état du Télangana et la Cellule Franco Indienne de Recherche sur les Eaux Souterraines d’Hyderabad, améliorent en général la représentativité spatiale des données GRACE. Cependant, l’incertitude sur le stock d’eau souterraine dérivé des données GRACE désagrégées reste relativement élevée. En effet, les flux de recharge et de captage de la nappe sont indirectement liés aux variables disponibles par télédétection, ce qui explique la difficulté d’obtenir un modèle prédictif à partir de ces observables uniquement. Une amélioration de la qualité du signal gravimétrique et de sa résolution, imaginé par les missions gravimétriques à venir (mission MAGIC),  est donc souhaitable.

      Spatialized scores of the spatial disaggregation of GRACE (Claire Pascal's PhD thesis, 2022).

      • In a final exploratory study, Claire examined in detail the presence, in some heavily irrigated regions, of a significant dry-season « rebound » of soil moisture in the SMOS product (this rebound is less present in the SMAP products). She related the amplitude of these soil moisture signals to the extent of rice cultivation within each SMOS retrieval cell (25 km). These seasonal rice areas are estimated at high resolution (10 m) with the land cover mapping production chain (IOTA2), deployed on the CNES HAL cluster, from Sentinel-2 surface reflectances (L2A products generated by THEIA) for the 8 rice growing seasons between 2016 and 2019, over the whole state of Telangana. These relationships are preliminary results that could be used to build a regional quantitative model of the water resources mobilized for rice irrigation. An improvement of the SMOS resolution to 10 km, as proposed by the SMOS-HR mission, should improve the models found. An internship is planned in 2023 to address this question.

      Dynamics of SMOS soil moisture during the dry season, proportional to the extent of rice cultivation.

    • sur [THEIA] Near real time production of Sentinel-2 images has resumed

      Posted: 2 November 2022, 3:14pm CET by Olivier Hagolle

      Newly produced Sentinel-2 L2A products over Toulouse (we don't distribute the 100% cloudy images)

      Due to Brexit, the ECMWF moved some of its computing infrastructure from Reading (UK) to Bologna (IT), and a 5-week interruption of service occurred in the production of the Copernicus Atmosphere Monitoring Service. As our Level 2A production for Sentinel-2 uses this information to set the type of aerosols for the atmospheric correction, we had to suspend our production.


      The service resumed last week, and Theia has almost caught up with the backlog: images up to October 26th are available! Well done to the production team, and sorry for those of you who had to wait for the data to become available.


    • sur Sample images from VENµS new phase are available

      Posted: 2 November 2022, 2:54pm CET by Olivier Hagolle

      Here is some long-awaited news about the VENµS VM5 phase. The acquisitions started at the end of March 2022 with the following list of sites, and should go on for at least one year, and maybe more depending on the consumption of fuel (hydrazine) needed to maintain the orbit. Most of the sites are acquired with a daily revisit, and some with a two-day revisit, as displayed on this map (which should be updated, because there have been some changes).

      As you have probably noticed, we are a bit late in delivering VENµS products and we would like to apologize for it. The verification phase of the instrument and processing showed that a new calibration phase was needed:

      • the radiometric team had to recalibrate the instrument as it is now used with different integration times and gains
      • they noticed that some gains of the elementary detectors evolved, and that the models we used to correct for spikes that cause the appearance of bright and dark columns had to be revised
      • the geometric calibration was checked, and as we have new sites, the production team had to prepare new reference images

      This is taking more time than anticipated, and we now plan to release the L1C products in November and the L2A shortly after (we need a short validation period once the calibration is finalized). So that you can see how the selected sites look when seen by VENµS, we have decided to publish one or two valid L1C images per site, except for the sites in Israel, which are handled by our Israeli partners. These images are accurately ortho-rectified but do not have the final calibration and detector normalization, so please don't draw definitive conclusions from this first set. Some issues in the display of the images on the distribution site have also been noticed.

      You may get them directly from this address, or from the usual Theia website : [https:]] and select the “VENµS VM5 L1C TEST” collection.
      Feedback is welcome !

      And finally, as you can see, it is not Gérard Dedieu who writes this post. Gérard retired two years ago already, but kindly agreed to remain the French PI for the VENµS mission. He did most of the work of reading the proposals and selecting the VM5 sites with the help of the VENµS exploitation team. He now wishes to have more time for all his activities, and he asked me to take over as the French VENµS PI. I accepted, as most of the work has been done (thanks a lot Gérard!), and as I have been working on VENµS from the start in Gérard's shadow (and what a shadow!). Gérard still wishes to be kept informed about what you will find in the VENµS data sets.

    • sur First satellite images of Nord Stream leaks

      Posted: 30 September 2022, 12:44am CEST by Simon Gascoin

      I found the leak from Nord Stream 2 in the latest Sentinel-1 image (29 Sep 2022). It’s large enough to be visible from space.

      Using this radar image, I could also spot the leak in the Landsat image captured on the same day (29 Sep). And there's another leak visible northeast of Bornholm Island, this one from Nord Stream 1. Check for yourself in the SentinelHub ( [https:]] ).


    • sur We are hiring an expert (M/F) in scientific computing to improve MAJA

      Posted: 26 September 2022, 5:19pm CEST by Jerome Colin
      We are hiring an expert (M/F) in scientific computing to improve the handling of aerosol type in the MAJA Earth observation image processing chain using machine learning methods. The aim is to explore single- and multi-variable regression approaches in order to select the most efficient one, as a trade-off between simplicity, speed, accuracy and robustness. The ultimate goal is to implement this method in the MAJA chain, which is used at CNES in the operational processing of the Sentinel-2 data distributed to the community, but also to prepare the use of the chain for the future TRISHNA space mission.

      Application: HERE

      Duration: 12 months

      Main skills:
      • Strong background in computer science and mathematics
      • Good skills in Python
      • Previous experience in the use of machine learning methods and libraries (e.g. Pytorch, scikit-learn) would be an asset
    • sur Disruptions in the availability of Theia products

      Posted: 26 September 2022, 4:44pm CEST by Jerome Colin


      You may have noticed an interruption in the availability of level 2 and 3 products on the THEIA portal. This is due to ongoing disruptions in the availability of CAMS (Copernicus Atmosphere Monitoring Service) data used by the MAJA atmospheric correction chain. These disruptions are linked to the migration of the ECMWF data centre, which began on 8 September, and could affect all downstream users for another 3 to 4 weeks. We were expecting some disruptions, but we are currently experiencing a complete shutdown of the service.

      As a reminder, the use of CAMS products greatly improves the estimation of the atmospheric optical thickness (AOT) in MAJA, by providing at a 12-hour time resolution and a 0.4° spatial resolution the relative contribution of 7 aerosol species to the total AOT. These data are usually retrieved on the fly by the production centre upstream of the level 1 product processing, and are currently missing.

      The CNES production team will catch up with the processing as soon as the CAMS service is back in operation.

      Thank you for your patience

    • sur Disruptions in the availability times of Theia products

      Posted: 26 September 2022, 4:34pm CEST by Jerome Colin


      You may have noticed an interruption in the availability of level 2 and 3 products on the THEIA portal. This is due to the ongoing disruptions in the availability of the CAMS (Copernicus Atmosphere Monitoring Service) data used by the MAJA atmospheric correction chain. These disruptions are linked to the migration of the ECMWF data centre, which started on 8 September, and could affect all downstream users for another 3 to 4 weeks. We were expecting some disruptions, but we are currently experiencing a complete shutdown of the service.

      As a reminder, the use of CAMS products greatly improves the estimation of the atmospheric optical thickness (AOT) in MAJA, by providing, at a 12-hour temporal resolution and a 0.4° spatial resolution, the relative contribution of 7 aerosol species to the total AOT. These data are usually retrieved on the fly by the production centre upstream of the level 1 product processing, and are currently missing.

      The CNES production team will catch up with the processing as soon as the CAMS service is back in operation.

      Thank you for your patience

    • sur Experimental campaign under extreme summer conditions at Auradé!

      Posted: 16 September 2022, 4:53pm CEST by Tiphaine Tallec

      Experimental campaign under extreme summer conditions at Auradé! Between a storm (bringing 40 mm of rain), then drought and a heat wave (40°C), the variations of the soil water status of the Auradé ICOS flux plot were optimal between late June and mid-July 2022 to host the Pré-HiDRATE experimental campaign (integrating High resolution Data from Remote sensing And land surface models for Transpiration and Evaporation mapping). The campaign ran for nearly 4 weeks, within the framework of the DETECT project (Disentangling the role of land use and water management; scientific contact: Youri Rothfuss) and of the preparation of the prospective ANR project HiDRATE (scientific contact: Gilles Boulet). Its purpose was to test and compare measurement methodologies to (1) separate and quantify the components of the evapotranspiration flux (ET) (Quade et al., 2019, [https:]] ), i.e. soil evaporation and the transpiration of the cultivated sunflower, and (2) characterize the effect of the soil moisture status on their respective contributions to ET. The campaign brought together several teams: scientists from CESBIO and a German team that settled in the field with its MOSES mobile lab (pick-up truck and a cabin left on site).

      Several set-ups were deployed during this campaign and are presented below:

      (1) MOSES Isomobile (Youri Rothfuss, Daniel Schulz, Nils Becker): low-frequency measurement of the hydrogen and oxygen isotopic compositions (δ2H and δ18O) of soil, plant and atmospheric water along the crop profile: a potential tool for partitioning ET into its two components, transpiration and evaporation. High-frequency sampling (10 Hz) and continuous measurement of the δ18O and δ2H of atmospheric water vapour, allowing an estimation of the isotopic signature of ET with the turbulent fluctuation (eddy covariance) methodology.


      Measurement set-up with air sampling along 3 profiles over a 5-150 cm height range

      (2) Sap flow measurement set-up (Valérie LeDantec, Franck Granouillac)

      Monitoring of transpiration at a half-hourly time step on 6 sunflower plants, using the sap flow measurement method based on the thermal balance of the stem (stem heat balance, SHB).

      (3) SIF proximal sensing set-up (solar-induced chlorophyll fluorescence; Valérie LeDantec, Pascal Fanise)

      Monitoring of the functional state of the canopy by proximal sensing of the solar-induced chlorophyll fluorescence emission.

      (4) Microlysimeters (Gilles Boulet)

      These are about ten PVC cylinders containing undisturbed soil samples, set up in situ and weighed roughly every 2 days to monitor the actual soil evaporation.
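As a back-of-the-envelope illustration of the microlysimeter principle: the mass lost between two weighings converts directly into a water-depth equivalent, since 1 kg of water spread over 1 m² corresponds to 1 mm. The cylinder diameter and mass loss below are hypothetical.

```python
import math

def evaporation_mm(mass_loss_kg, cylinder_diameter_m):
    """Water-depth equivalent (mm) of the mass lost by a soil-filled
    cylinder between two weighings: 1 kg/m^2 of water == 1 mm."""
    area_m2 = math.pi * (cylinder_diameter_m / 2.0) ** 2
    return mass_loss_kg / area_m2

# Hypothetical 15 cm diameter microlysimeter losing 53 g in ~2 days
e = evaporation_mm(0.053, 0.15)  # ~= 3.0 mm of soil evaporation
```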

      Now that the campaign is over, all that remains is to process and analyse the data! To be continued… fingers crossed for the ANR HiDRATE project.

      Youri and his team will come back in spring 2023 for a new campaign, similar in every respect but this time on a winter crop and under less « stressful » conditions.

    • sur Iota2 latest release – Deep Learning at the menu

      Posted: 16 September 2022, 10:12am CEST by fauvelm
      1. New iota2 release

      The latest version of iota2 ( [https:]] ), released on 2022-06-06, includes many new features. A complete list of changes is available here. Among them, let us cite a few that may be of interest to users:

      • External features with padding: external features are an iota2 feature that allows user-defined computations (e.g. spectral indices) to be included in the processing chain. They now come with a padding option: each chunk can overlap with its adjacent chunks, so basic spatial processing can be performed with external features without discontinuity issues. Check this for a toy example.

        issue: [https:]]
        doc: [https:]]

      • External features with parameters: functions provided by the user can now have their parameters set directly in the configuration file. It is no longer necessary to hard-code them in the Python file.

        issue: [https:]]
        doc: [https:]]

      • Documentation: the documentation is now hosted at [https:]] . An open-access labwork is also available at [https:]] for advanced users who have already done the tutorial from the documentation.
      • Deep learning workflow: iota2 is now able to perform classification with (deep) neural networks. It is possible to use one of the pre-defined network architectures provided in iota2 or to define your own. The workflow is based on the Pytorch library.

        issue: [https:]]
        doc: [https:]]
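The external-features padding option above can be illustrated with a toy example: with a 1-pixel overlap between chunks, a 3x3 moving average computed chunk by chunk is identical to the one computed on the full image. This is a minimal NumPy sketch of the principle only, not iota2's actual implementation; all function names are ours.

```python
import numpy as np

def mean3x3(a):
    """3x3 moving average with edge replication (pure NumPy)."""
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def process_in_chunks(img, chunk_rows, pad=1):
    """Apply mean3x3 chunk by chunk with `pad` overlapping rows,
    mimicking the padding option: no discontinuity at chunk edges."""
    out = np.empty_like(img, dtype=float)
    for r0 in range(0, img.shape[0], chunk_rows):
        r1 = min(r0 + chunk_rows, img.shape[0])
        a, b = max(r0 - pad, 0), min(r1 + pad, img.shape[0])
        res = mean3x3(img[a:b])          # filter the padded chunk
        out[r0:r1] = res[r0 - a: r0 - a + (r1 - r0)]  # keep interior
    return out

img = np.arange(64.0).reshape(8, 8)
assert np.allclose(process_in_chunks(img, 3), mean3x3(img))
```

Without the overlap (pad=0), each chunk would replicate its own border rows and the result would differ from the full-image filter along chunk boundaries.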

      Introducing the deep learning workflow was a hard job: including all the machinery for batch training as well as various neural architectures introduced some major internal changes in iota2. A lot of work was done to ensure iota2 scales well with the size of the data to be classified when deep learning is used. In the following, we provide an example of classification of a large-scale Sentinel-2 time series using deep learning.

      2. Classification using deep learning

      In this post we discuss deep learning in iota2. We describe the data set used for the experiment, the pre-processing done to prepare the training/validation files, the deep neural network used and how it is trained with iota2. Then the classification results (classification accuracy as well as classification maps) are presented to highlight the capacity of iota2 to easily perform large-scale analyses, run various experiments and compare their outputs.

      2.1. Material

      2.1.1. Satellite image time series

      For the experiments, we use all the Level-2A acquisitions available from Theia Data Center ( [https:]] ) for one year (2018) over 4 Sentinel-2 tiles (“31TCJ”, “31TCK”, “31TDJ”, “31TDK”). See figure 1. The raw files amount to 777 gigabytes.


      Figure 1: Sentinel 2 tiles used in the experiments (background map © OpenStreetMap contributors).

      2.1.2. Preparation of the ground truth data

      For the ground truth, we extracted the data from the database used to produce the OSO product ( [https:]] ). The database was constructed by merging several open source databases, such as Corine Land Cover. The whole process is described in (Inglada et al. 2017). The 23-category nomenclature is detailed here: [https:]] . An overview is given in figure 2.


      Figure 2: Zoom of the ground truth over the city of Toulouse. Each colored polygon corresponds to a labeled area (background map © OpenStreetMap contributors).

      Sub data set

      This step is not mandatory and is used here only for illustrative purposes.

      In order to run several classifications and to assess quantitatively and qualitatively the capacity of the deep learning model, 4 sub-datasets were built using a leave-one-tile-out procedure: training samples from 3 tiles are used to train the model, and the samples of the remaining tile are left out for testing. The process is repeated for each subset of 3 tiles out of the 4 (i.e. 4 times!). We will see later how iota2 allows several classification tasks to be run easily from different ground truth data.

      For now, once you have a vector file containing your tiles and the (big) database, running this kind of code should do the job (at least it does for us!): it constructs 4 pairs of training/testing vector files. You can adapt it to your own configuration. An example of one sub-dataset is given in figure 3.
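The snippet itself did not survive the page extraction, so here is a hedged stand-in showing only the logic of the leave-one-tile-out split. In practice one would use geopandas or OGR to intersect the polygons with the tile footprints; the dictionaries below are a made-up minimal substitute for real vector features.

```python
# Hypothetical minimal stand-in: each ground-truth polygon is tagged
# with the Sentinel-2 tile it falls in.
polygons = [
    {"fid": 1, "tile": "31TCJ"}, {"fid": 2, "tile": "31TCK"},
    {"fid": 3, "tile": "31TDJ"}, {"fid": 4, "tile": "31TDK"},
    {"fid": 5, "tile": "31TCJ"},
]
tiles = ["31TCJ", "31TCK", "31TDJ", "31TDK"]

def leave_one_tile_out(polygons, tiles):
    """Yield (held-out tile, training polygons, testing polygons),
    one fold per tile."""
    for held_out in tiles:
        train = [p for p in polygons if p["tile"] != held_out]
        test = [p for p in polygons if p["tile"] == held_out]
        yield held_out, train, test

folds = list(leave_one_tile_out(polygons, tiles))  # 4 folds
```

Each fold would then be written to a pair of training/testing vector files (e.g. gt_0.shp / tgt_0.shp), which is what the rest of the post assumes.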


      Figure 3: Sub-dataset: polygons from the brown area are used to train the model and polygons from the gray area are used to test the model. There are 4 different configurations, one for each left-out tile (background map © OpenStreetMap contributors).

      Clean the ground truth vector file

      The final step of the ground truth data preparation is to clean the ground truth file: when manipulating vector files, it is common to have multi-polygons, empty or duplicate geometries. Such problematic features should be handled before running iota2. Fortunately, iota2 is packed with the necessary tools (available from the iota2 conda environment) to prevent all these annoying things that happen when you process large vector files. Listing 1 shows how to run the tool on the ground truth files.

      Listing 1: Shell scripts to clean the 4 sub data-set.
      for i in 0 1 2 3
      do
              -in.vector ../data/gt_${i}.shp \
              -out.vector ../data/gt_${i}_clean.shp \
              -dataField code -epsg 2154 \
              -doCorrections True
      done
      2.2. Configuration of iota2

      This part is mainly based on the documentation ( [https:]] ) as well as a tutorial we gave ( [https:]] ). We encourage the interested reader to carefully read these links for a deeper (!) understanding.

      2.2.1. Config and resources files

      As usual with iota2, the first step is to set up the configuration file. This file holds most of the information required to run the computation (where the data are, the reference file, the output folder, etc.). The following link is a good start for understanding the configuration file: [https:]] . We try to make the following understandable without the need to fully read it.

      To compute the classification accuracy obtained on the area covered by the ground truth used for training, we tell iota2 to split the polygons of the ground truth file into two files, one for training and one for testing, with a ratio of 75%:

      split_ground_truth : True
      ratio : 0.75

      This means that 75% of the available polygons of each class are used for training while the remainder is used for testing. Note that we are not talking about pixels here. By splitting at the polygon level, we ensure that the pixels of a given polygon are used either for training or for testing. This is a way to reduce the spatial autocorrelation effect between pixels when assessing the classification accuracy.

      We now need to set up how training pixels are selected from the polygons. Iota2 relies on the OTB sampling tools ( [https:]] ). For this experiment, we asked for a maximum of 100000 pixels per class with a periodic sampler.

      arg_train :
          sample_selection :
              "sampler" : "periodic"
              "strategy": "constant"
              "strategy.constant.nb" : 100000

      We are working on 4 different tiles, each of them having its own temporal sampling. Furthermore, we need to deal with cloud issues (Hagolle et al. 2010). Iota2 uses temporal gap-filling as discussed in (Inglada et al. 2015). In this work, we use a temporal step size of 10 days, i.e., we have 37 dates. Iota2 also computes by default three indices (NDVI, NDWI and brightness). Hence, for each pixel we have a set of 481 features (37\(\times\)13).
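The gap-filling step can be sketched for a single pixel and band: cloudy dates are masked out and the series is linearly interpolated onto a regular 10-day grid. This is a simplification of the method of (Inglada et al. 2015); the dates and values below are invented.

```python
import numpy as np

def gapfill(dates, values, valid, step=10):
    """Linear gap-filling of one pixel's time series onto a regular
    temporal grid, keeping only cloud-free (valid) acquisitions."""
    grid = np.arange(dates.min(), dates.max() + 1, step)
    return grid, np.interp(grid, dates[valid], values[valid])

# One band, day-of-year acquisitions; two cloudy dates are masked
dates = np.array([5, 15, 25, 35, 45, 55])
values = np.array([0.10, 0.90, 0.14, 0.16, 0.95, 0.20])
valid = np.array([True, False, True, True, False, True])
grid, filled = gapfill(dates, values, valid)
```

Applied to the 10 bands and 3 indices over the 37 grid dates, this is what yields the 481 features per pixel mentioned above.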

      For the deep neural network, we use the default implementation in iota2. However, it is possible to define your own architecture ( [https:]] ). In our case, the network is composed of four layers (see Table 1) with a ReLU function between each of them ( [https:]] ).

      Table 1: Network architecture.
        Input size Output size
      First Layer 481 240
      Second Layer 240 69
      Third Layer 69 69
      Last Layer 69 23
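As a sanity check of Table 1, here is a NumPy-only forward pass through the same layer sizes (10 bands + 3 indices over 37 dates = 481 inputs, 23 classes), with a ReLU between each pair of layers and none after the last. The weights are random: this only illustrates the shapes, not iota2's Pytorch implementation.

```python
import numpy as np

# Layer sizes from Table 1
sizes = [481, 240, 69, 69, 23]
rng = np.random.default_rng(0)
params = [(rng.normal(0.0, 0.01, (n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    """MLP forward pass: linear layers with ReLU in between,
    raw logits at the output."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

logits = forward(rng.normal(size=(4096, 481)), params)  # one batch
```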

      The ADAM solver was used for the optimization, with a learning rate of \(10^{-5}\) and a batch size of 4096. 200 epochs were performed, and a validation sample set extracted from the training pixels was used to monitor the optimization. The best model in terms of F-score is selected. Of course, all these options are configurable in iota2. A full configuration file is given in Listing 2.

      Listing 2: Example of configuration file. Paths need to be adapted to your set-up.
      chain :
        output_path : "/datalocal1/share/fauvelm/blog_post_iota2_output/outputs_3"
        remove_output_path : True
        check_inputs : True
        list_tile : "T31TCJ T31TDJ T31TCK T31TDK"
        data_field : "code"
        s2_path : "/datalocal1/share/PARCELLE/S2/"
        ground_truth : "/home/fauvelm/BlogPostIota2/data/gt_3_clean.shp"
        spatial_resolution : 10
        color_table : "/home/fauvelm/BlogPostIota2/data/colorFile.txt"
        nomenclature_path : "/home/fauvelm/BlogPostIota2/data/nomenclature.txt"
        first_step : 'init'
        last_step : 'validation'
        proj : "EPSG:2154"
        split_ground_truth : True
        ratio : 0.75
      arg_train :
        runs : 1
        random_seed : 0
        sample_selection :
              "sampler" : "periodic"
              "strategy": "constant"
              "strategy.constant.nb" : 100000
        deep_learning_parameters :
              dl_name : "MLPClassifier"
              epochs : 200
              model_selection_criterion : "fscore"
              num_workers : 12
            hyperparameters_solver : {
                            "batch_size" : [4096],
                            "learning_rate" : [0.00001]
            }
      arg_classification :
              enable_probability_map : True
      python_data_managing :
              number_of_chunks : 50
      sentinel_2 :
              temporal_resolution : 10
      task_retry_limits : 
               allowed_retry : 0
               maximum_ram : 180.0
               maximum_cpu : 40

      The configuration file is now ready and the chain can be launched, as described in the documentation. The classification accuracy will be written in the final directory, together with the final classification map and the related iota2 outputs.

      2.2.2. Iteration over the different sub ground truth files

      However, in this post we want to go a bit further, to show how easy it is to run several simulations with iota2. As discussed above, we have generated a set of pairs of spatially disjoint ground truth vector files for training and for testing. Also, recall that iota2 starts by splitting the provided training ground truth file into two spatially disjoint files, one used to train the model and the other used to test it. In this particular configuration, we therefore have two test files:

      1. One extracted from the same area as the training samples,
      2. One extracted from a different area than the training samples.

      With these files, we can do a spatial cross-validation estimation of the classification accuracy, as discussed in (Ploton et al. 2020). To perform such an analysis, we first stop the chain after the prediction of the classification map (setting the parameter last_step : 'mosaic') and manually add another step to estimate the confusion matrix from both sets. We rely on the OTB tools: [https:]] . The last ingredient is the ability to loop over the different tile configurations, i.e., to iterate over the cross-validation folds. This is where iota2 is really powerful: we just need to change a few parameters in the configuration file to run all the different experiments. In this case, we have to change the ground truth filenames and the output directory. To keep it simple, we indexed our simulations from 0 to 3 and use the sed shell tool to modify the configuration file in the main loop:

      sed -i "s/outputs_\([0-9]\)/outputs_${REGION}/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
      sed -i "s/gt_\([0-9]\)_clean/gt_${REGION}_clean/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg

      The global script is given in Listing 3.

      Listing 3: Script to perform the spatial cross-validation. Paths need to be adapted to your configuration. Merging the validation samples from the train set is required because iota2 extracts the validation samples on a per-tile basis (behavior subject to modification in a future release).
      # Set ulimit
      ulimit -u 6000
      # Set conda env
      source ~/.source_conda
      conda activate iota2-env
      # Loop over regions
      for REGION in 0 1 2 3
      do
          echo Processing Region ${REGION}
          # (Delete and) Create the output directory
          if [ -d "${OUTDIR}" ]; then rm -Rf ${OUTDIR}; fi
          mkdir ${OUTDIR}
          # Update config file
          sed -i "s/outputs_\([0-9]\)/outputs_${REGION}/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
          sed -i "s/gt_\([0-9]\)_clean/gt_${REGION}_clean/" /home/fauvelm/BlogPostIota2/configs/config_base.cfg
          # Run iota2
              -config /home/fauvelm/BlogPostIota2/configs/config_base.cfg \
              -config_ressources /home/fauvelm/BlogPostIota2/configs/ressources.cfg \
              -scheduler_type localCluster \
              -nb_parallel_tasks 2
          # Compute Confusion Matrix for test samples
          otbcli_ComputeConfusionMatrix \
              -in ${OUTDIR}final/Classif_Seed_0.tif \
              -out ${OUTDIR}confu_test.txt \
              -format confusionmatrix \
              -ref vector /home/fauvelm/BlogPostIota2/data/tgt_${REGION}.shp \
              -ref.vector.field code \
              -ram 16384
          # Merge validation samples
              -f SQLITE -o ${OUTDIR}merged_val.sqlite \
          # Compute Confusion Matrix for train samples
          otbcli_ComputeConfusionMatrix \
              -in ${OUTDIR}final/Classif_Seed_0.tif \
              -out ${OUTDIR}confu_train.txt \
              -format confusionmatrix \
              -ref vector ${OUTDIR}merged_val.sqlite \
              -ref.vector.field code \
              -ram 16384
      done

      Then we can compute classification metrics, such as the overall accuracy, the Kappa coefficient and the average F-score. For this post, we wrote a short Python script to perform these operations: [https:]] .
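The metrics themselves are straightforward to derive from a confusion matrix. The sketch below is our own minimal version, not the linked script: it computes the overall accuracy, Cohen's Kappa and the average F-score from a small synthetic 3-class matrix.

```python
import numpy as np

def metrics(cm):
    """Overall accuracy, Cohen's Kappa and average F-score from a
    confusion matrix (rows: reference labels, columns: predictions)."""
    n = cm.sum()
    po = np.trace(cm) / n                        # overall accuracy
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    tp = np.diag(cm)
    prec = np.divide(tp, cm.sum(0), out=np.zeros_like(tp, float),
                     where=cm.sum(0) > 0)
    rec = np.divide(tp, cm.sum(1), out=np.zeros_like(tp, float),
                    where=cm.sum(1) > 0)
    f = np.divide(2 * prec * rec, prec + rec,
                  out=np.zeros(len(tp)), where=(prec + rec) > 0)
    return po, kappa, f.mean()

# Synthetic 3-class confusion matrix, for illustration only
cm = np.array([[50, 2, 3], [4, 40, 6], [1, 2, 42]])
oa, kappa, fmean = metrics(cm)
```

The same function applies unchanged to the confu_test.txt and confu_train.txt matrices produced by otbcli_ComputeConfusionMatrix, once parsed into arrays.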

      We can just run it, using nohup for instance, grab a coffee and a slice of cheesecake, and wait for the results.

      2.3. Results

      The results provided by iota2 are discussed in this section. The idea is not to perform a full analysis, but to glance through the possibilities offered by iota2. The simulations were run on a computer with 48 Intel(R) Xeon(R) Gold 6136 CPUs @ 3.00GHz, 188 GB of RAM and an NVIDIA GV100GL [Tesla V100 PCIe 32GB] GPU.

      2.4. Numerical results

      First we can check the actual number of training samples used to train the model. Iota2 provides the total number of training samples used ( [https:]] ). Table 2 gives the number of training samples extracted to learn the MLP. Yes, you read that right: 1.8 million samples for only 4 tiles. During training, 80% of the samples were used to optimize the model and 20% were used to validate and monitor it after each epoch. Four metrics are computed automatically by iota2 to monitor the optimization: the cross-entropy (the same loss that is used to optimize the network), the overall accuracy, the Kappa coefficient and the F-score. Figures 4 and 5 display the evolution of these metrics along the epochs. The model used for the classification is the one with the highest F-score.

      Table 2: Number of training samples used.
      Class Name Label Total
      Continuous urban fabric 1 67899
      Discontinuous urban fabric 2 100000
      Industrial and commercial units 3 100000
      Road surfaces 4 47664
      Rapeseed 5 100000
      Straw cereals 6 100000
      Protein crops 7 100000
      Soy 8 100000
      Sunflower 9 100000
      Corn 10 100000
      Rice 11 0
      Tubers / roots 12 53094
      Grasslands 13 100000
      Orchards 14 100000
      Vineyards 15 100000
      Broad-leaved forest 16 100000
      Coniferous forest 17 100000
      Natural grasslands 18 100000
      Woody moorlands 19 100000
      Natural mineral surfaces 20 11675
      Beaches, dunes and sand plains 21 0
      Glaciers and perpetual snows 22 0
      Water Bodies 23 100000
      Others 255 0
      Total   1780332


      Figure 4: Loss function on the training and validation set.


      Figure 5: Classification metrics computed on the validation set.

      For this set-up, the overall accuracy, the Kappa coefficient and the average F1 score are 0.85, 0.83 and 0.73, respectively, which is in line with other results over the same area (Inglada et al. 2017).

      Classification metrics provide a quantitative assessment of the classification map, but it is still useful to do a qualitative analysis of the maps, especially at large scale, where phenology, topography and other factors can drastically influence the reflectance signal. Of course, iota2 allows the classification maps to be output! We chose three different sites, displayed in figures 6, 7 and 8. The full classification map is available here.


      Figure 6: Classification map for an area located between two tiles.


      Figure 7: Classification map for an area over the city of Toulouse.


      Figure 8: Classification map for a crop land area.

      2.4.1. Results for the different sub ground truth files

      Figure 9 shows the F-score for the 4 models (coming from the 4 different runs) and the 2 test sets. We can easily see that, depending on the tile left out, the difference in classification accuracy (in terms of F-score) between test samples extracted from the same area and from a disjoint area can be significant. Discussing the reasons why the performance decreases, and which metrics should be used to assess the map accuracy, is out of the scope of this post; it is indeed a controversial topic in remote sensing (Wadoux et al. 2021). We just want to emphasize that iota2 greatly simplifies and automates the validation process, especially at large scale. Using this spatial cross-validation with 4 folds, the mean estimated F-score is 0.59 with a standard deviation of 0.08, which is indeed much lower than the 0.73 estimated in the previous part.


      Figure 9: Fscore computed on samples from the same tiles used for training (train) and from one tile left out from the training region (test).

      The different classification maps are displayed in the animated figures 10, 11 and 12. The first one shows a tricky situation at the border between two tiles: we can see a strong discontinuity, whatever the model used. For the second case, over Toulouse, there is a global agreement between the 4 models, except for the class Continuous Urban Fabric (pink), which disappears for one model: the one learnt without data from the tile containing Toulouse (T31TCJ). The last area exhibits a global agreement with some slight disagreements for some crops. Note that the objective is not to fuse or combine the different results, but rather to observe quantitatively the differences in the classification maps when the ground truth is changed.


      Figure 10: Classification maps for the 4 runs for an area located between two tiles.


      Figure 11: Classification maps for the 4 runs over the city of Toulouse.


      Figure 12: Classification maps for the 4 runs for a crop land area.

      A finer analysis could of course be done, but we leave this as homework for the interested reader: all the material for the simulation is available here


      and the Level-2A MAJA processed Sentinel-2 data are downloadable from Theia Data Center (try this out: [https:]] ), while the ground truth data can be downloaded here.

      3. Conclusions

      To conclude, in this post we briefly presented the latest release of iota2, then focused on the deep learning classification workflow to classify 4 tiles of one year of Sentinel-2 time series. Even if it was only four tiles, it amounts to processing around 800 GB of data and, with our data set, about \(4\times10^{7}\) pixels to classify. We have skipped many parts of the workflow that iota2 takes care of (projection, upsampling, gap-filling, streaming, multiple runs, mosaicking, to mention a few). The resulting simulation allows the classification maps to be assessed qualitatively and quantitatively, in a reproducible way: given the version of iota2 and the config file, you can reproduce the results.

      From a machine learning point of view, this simulation let us process a lot of data easily (look for publications with 2 million training pixels: we did not find many based on open source tools). iota2 lets the user concentrate on the definition of the learning task. We kept it simple here, with a moderate-size MLP, but much more can be done regarding the architecture of the neural network, the training data preparation or the post-processing. If you are interested, give it a try: again, everything is open source. We will be very happy to welcome and help you: [https:]] .

      Finally, with a little boilerplate code, we were able to perform spatial cross validation smoothly.

      In the near future, we plan to release a new version that will also handle regression: currently, only categorical data is supported for learning.

      The post can be downloaded as a PDF here.

      4. Acknowledgement

      The iota2 development team is composed of Arthur Vincent, CS Group, from the beginning, recently joined by Benjamin Tardy, CS Group. Hugo Trentesaux spent 10 months (October 2021 – July 2022) in the team. Developments are coordinated by Jordi Inglada, CNES & CESBIO-lab. Promotion and training are ensured by Mathieu Fauvel, INRAe & CESBIO-lab, and Vincent Thierion, INRAe & CESBIO-lab.

      Currently, the developments are funded by several projects: CNES-PARCELLE, CNES-SWOT Aval, ANR-MAESTRIA and ANR-3IA-ANITI, with the support of CESBIO-lab and the Theia Data Center. Iota2 has a steering committee, which is described here.

      We thank the Theia Data Center for making the Sentinel-2 time series available and ready to use.

      5. References

      Hagolle, O., M. Huc, D. Villa Pascual, and G. Dedieu. 2010. “A Multi-Temporal Method for Cloud Detection, Applied to Formosat-2, Venµs, Landsat and Sentinel-2 Images.” Remote Sensing of Environment 114 (8): 1747–55.

      Inglada, Jordi, Marcela Arias, Benjamin Tardy, Olivier Hagolle, Silvia Valero, David Morin, Gérard Dedieu, et al. 2015. “Assessment of an Operational System for Crop Type Map Production Using High Temporal and Spatial Resolution Satellite Optical Imagery.” Remote Sensing 7 (9): 12356–79. doi:10.3390/rs70912356.

      Inglada, Jordi, Arthur Vincent, Marcela Arias, Benjamin Tardy, David Morin, and Isabel Rodes. 2017. “Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series.” Remote Sensing 9 (1). doi:10.3390/rs9010095.

      Ploton, Pierre, F. Mortier, Maxime Réjou-Méchain, Nicolas Barbier, N. Picard, V. Rossi, C. Dormann, et al. 2020. “Spatial validation reveals poor predictive performance of large-scale ecological mapping models.” Nature Communications 11: 4540. doi:10.1038/s41467-020-18321-y.

      Wadoux, Alexandre M.J.-C., Gerard B.M. Heuvelink, Sytze de Bruin, and Dick J. Brus. 2021. “Spatial Cross-Validation Is Not the Right Way to Evaluate Map Accuracy.” Ecological Modelling 457: 109692.

      Author: Iota2 dev team

      Created: 2022-09-16 Fri 09:34


    • sur Towards 3D time series

      Posted: 9 April 2022, 8:11pm CEST by Olivier Hagolle
      CESBIO researchers analyzing a time series from Sentinel-HR in 2028.


      In our a priori definition of the Sentinel-HR mission, we had included an option to observe the 3D topography at high resolution, with a moderate accuracy ambition (lower than that offered by the CO3D mission, as shown in the table below), but globally and with a systematic revisit.

      But our mission advisory group insisted, and the option has become one of the essential characteristics of Sentinel-HR. This post gives some examples of the potential applications of such a mission.

                                         ASTER               CO3D                 Sentinel-HR
      Image resolution                   15 m                0.5 m                2 m
      Resolution of elevation model      30 m                4 m (free at 12 m)   10 m
      Altimetric uncertainty (CE90)      10 m                1 m                  4 m
      Cloud-free periodicity             Monthly to yearly   4 years              3 months
      Operations                         1999–2023           2024–                2028?
      Glaciers and Ice caps

      Map of elevation changes for the glaciers of Mont Blanc, in metres, between 2003 (SPOT5 images) and 2018 (Pléiades images)

      Our glaciologists have been the most enthusiastic about the inclusion of 3D in Sentinel-HR. The objective is to measure the seasonal and multi-annual evolution of the thickness of glaciers and polar ice sheets, one of the essential climate variables (ECV). An accuracy of 4 m in 90% of the cases is enough to measure the evolution of these glaciers over a few years. Using similar but less accurate data, obtained over the very long term with the ASTER instrument on the Terra satellite launched in 1999, Hugonnet et al. were able to map the global variations (mostly decreases) of the thickness of glaciers around the world between 2000 and 2019 (Hugonnet et al. 2021).

      For the polar ice caps, the annual change is even stronger, and it is interesting to monitor the seasonal variations as shown in the figure below, which shows the variation of ice thickness at the limit between land and sea in Greenland.

      Colour-shaded relief maps for a subset of DEMs. Elevation contours of 150 (black) and 100m (white) bound the approximate transition from grounded to floating ice. The letter after the date-string (DD-MM-YYYY) indicates the DEMs source (A: ASP WorldView; T: TanDEM-X; S: SETSM WorldView; and G: GLISTIN) (adapted from Joughin et al 2020)

      Unfortunately, the ASTER mission is planned to end in 2023, after 24 years of good service, with no follow-on mission, unless Sentinel-HR is approved. The CO3D mission will make it possible to monitor some 50 glaciers, but according to the Randolph Glacier Inventory, there are about 220,000 glaciers on Earth, with an accumulated surface of 700,000 km². Monitoring so many glaciers cannot be done with the current CO3D mission.

      "The [ASTER] mission is officially planned to end in September 2023". [https:]] . Sad to read that the single non-commercial stereo mission will end in two years. Was an invaluable source of DEMs for glacier studies.

      — Etienne Berthier (@EtienneBerthie2) June 3, 2021

      Volcanoes, landslides, erosion

      As for glaciers, observing 3D changes with Sentinel-HR would make it possible to follow the evolution of volcanic flows. As shown in the image below, it is already possible to obtain this information with tasked satellites such as Pléiades, but it may be difficult to obtain it for all the volcanoes of the world, and we would sometimes miss the situation before the beginning of the eruption.

      Evolution of the lava volume for the 2021 eruption in La Palma, measured with Pléiades stereoscopic images in September 2021 and January 2022 (Credit: V. Pinel, Isterre, and J.M.C. Belart, LMI Iceland).

      Knowledge of elevation changes with a frequent and systematic revisit will make it possible to measure the volume of eroded or deposited rocks and sediments. It will also be possible to follow the movements of sand dunes or landslides, with an evaluation of the slope and of the associated risks.

      Water bodies

      Altimetric satellite missions such as Jason, Sentinel-6 or SWOT measure the height of water, but to obtain the volume, which is generally the sought-after quantity, it is necessary to know the bathymetric profile of the water bodies. A mission such as Sentinel-HR will be able to measure these profiles during low-water periods; a good revisit is therefore needed to make sure the acquisitions happen at the best moment.
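      The principle is simple to sketch: once the bed elevation has been mapped at low water, any later water surface elevation (e.g. from an altimetry mission) converts directly into a stored volume. The small grid and the 2 m pixel size below are hypothetical values, chosen only to illustrate the computation.

```python
def water_volume(bed_dem, water_level, pixel_area):
    # Sum the water depth over all flooded pixels (depth clipped at zero).
    return sum(max(0.0, water_level - z) * pixel_area
               for row in bed_dem for z in row)

# Hypothetical bed elevations (m), mapped at low water on a 2 m grid.
bed = [[101.0, 100.0, 100.5],
       [100.0,  99.0, 100.0],
       [100.5, 100.0, 101.0]]
volume = water_volume(bed, water_level=100.5, pixel_area=2.0 * 2.0)
```

      With a measured water level of 100.5 m, the six flooded pixels hold 14 m³ of water in this toy example; a real reservoir would use the full bathymetric DEM instead.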

      Sentinel-HR could even make it possible to map the depth of the snow cover in certain mountain regions which receive heavy snowfall and thus help better predict the water resources available in the spring. The 4 m altimetric uncertainty is a bit high, but of course, it should be reduced by averaging over a neighborhood.
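      The averaging argument above is easy to quantify: if per-pixel height errors are independent with standard deviation sigma, the mean over an N-pixel window has an error of sigma / sqrt(N). The window sizes below are hypothetical, and the 4 m CE90 figure is used loosely as a sigma here.

```python
import math

def averaged_sigma(sigma, window):
    # Mean of window*window independent measurements: sigma / sqrt(N).
    return sigma / math.sqrt(window * window)

# 4 m per-pixel vertical uncertainty, reduced by spatial averaging.
for w in (1, 5, 10):
    print(f"{w}x{w} window: {averaged_sigma(4.0, w):.2f} m")
```

      A 5×5 window already brings the uncertainty below 1 m, which is why neighbourhood averaging can make snow depth mapping plausible despite the raw 4 m figure, as long as the errors are not spatially correlated.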


      With a 4 m altimetric accuracy, it will be possible to measure the height of buildings with an accuracy of about one floor. Building height is essential information to estimate habitat density. 3D also allows urban sprawl to be monitored and new buildings to be accurately detected. Knowing the height of buildings is also important to model the urban climate, because of their effect on air flow and the shadows they cast. The image below, obtained during the Sentinel-HR phase 0, shows the expected accuracy obtained with a base/height ratio of 0.2 and a ground sampling distance of 2 m.

      Simulated digital surface model for a Sentinel-HR acquisition with a resolution of 2 m, and a B/H of 0.2. However, not all noise was simulated and this image is probably a bit optimistic.
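      The order of magnitude can be checked with the usual stereo rule of thumb: a disparity error of d pixels maps to a height error dz ≈ d × GSD / (B/H). The 0.4-pixel matching error below is an assumed value, chosen only to show how a figure of about 4 m can arise from the parameters quoted above.

```python
# Stereo height accuracy rule of thumb: dz = matching_error * GSD / (B/H).
gsd = 2.0                # ground sampling distance (m), from the post
b_over_h = 0.2           # base-to-height ratio, from the post
matching_error_px = 0.4  # stereo matching uncertainty (pixels), hypothetical

dz = matching_error_px * gsd / b_over_h  # height uncertainty (m)
```

      A larger B/H improves the height accuracy but makes image matching harder between the two viewpoints, which is why a moderate ratio of 0.2 was retained.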

      For a better modeling of the climate in cities, it is not only necessary to have a map of the vegetation, but also to know its height: a bush does not have the same cooling effect as a tree. The Sentinel-HR mission will allow the evolution of the vegetation height to be followed, which can vary quickly depending on development and urban planning works.

      Classification of vegetation height using stereoscopic observations from Pléiades (Rougier et al. 2016)

      Forests, trees

      As with urban vegetation, multi-temporal 3D information will provide insight into tree height and its evolution, which can be used in machine learning methods for estimating forest characteristics. Validation with Lidar data such as those from the GEDI Lidar onboard the International Space Station will allow for a good estimation of uncertainties and refinement of the models. The tree height information will allow better classification of different vegetation types, accurate detection of forest harvesting, and estimation of exported biomass.

      Bathymetry and coastal continuity

      Continuity of the coastal profile between the emerged part, measured with stereoscopy, and the immersed part, measured by inversion of the wave trains thanks to the temporal shift of the stereoscopic observations (Bergsma et al.)

      It is difficult to measure the emerged and submerged coastal profile in its continuity. Bergsma et al. (2021) showed that coastal bathymetry can be inverted from the speed of wave trains. Thus, it is not directly the stereoscopy that is used, but the slight temporal shift between the two observations. For the emerged part, of course, stereoscopy measures the relief. These methods applied to the VENµS satellite have made it possible to observe the coastal profile in its continuity. As the inversion of the bathymetry requires particular conditions (presence of waves), a frequent revisit will be necessary.
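      The physical principle behind this inversion can be sketched with linear wave theory: celerity c, wavelength L and depth h are linked by c² = (g/k)·tanh(kh) with k = 2π/L, which can be solved for h. This is only the principle, not the Bergsma et al. processing chain, and the wave parameters below are hypothetical.

```python
import math

def depth_from_waves(celerity, wavelength, g=9.81):
    # Linear dispersion relation: c^2 = (g/k) * tanh(k*h), with k = 2*pi/L.
    # Solving for depth: h = atanh(c^2 * k / g) / k.
    k = 2.0 * math.pi / wavelength
    ratio = celerity ** 2 * k / g
    if ratio >= 1.0:
        raise ValueError("celerity too high for this wavelength (deep water)")
    return math.atanh(ratio) / k

# Hypothetical swell: a 100 m wavelength travelling at 9 m/s implies a
# depth of about 9 m.
h = depth_from_waves(celerity=9.0, wavelength=100.0)
```

      In practice the celerity is estimated from the displacement of the wave crests between the two quasi-simultaneous stereo images, and the wavelength from the image itself, which is why the small temporal shift of the acquisitions is the key ingredient.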


      The possibility to include stereoscopy in the Sentinel-HR mission, which we had initially considered as an option, has many applications, of which we have only mentioned the most obvious here.

      In summary, for a mission dedicated to change detection for map updating, relief observation has a huge advantage: surface reflectance changes may be seasonal or meteorological in origin, while the changes visible in 3D are real changes.

      Of course, periodic observation in 3D has a high cost. In the case of a Sentinel-HR mission based on CO3D-derived satellites, it requires increasing from 6 satellites for the imaging mission alone to 12 satellites for a 20-day 3D revisit, or 9 satellites if stereoscopic acquisitions are reduced to every 40 days. If the additional cost seems high, it should be considered that the imaging mission would require some redundancy to compensate for the possible failure of some satellites; the additional satellites for 3D would therefore contribute to securing the mission.

      This article has been written thanks to the Sentinel-HR phase-0 study report, edited by Julien Michel from CESBIO, from the contributions of the mission group members. For the work concerned by this article, we would like to thank Etienne Berthier (LEGOS), Jérémie Mouginot (IGE), Renaud Binet (CNES), Raphael Almar (LEGOS), Hervé Yesou (i-Cube), Arnaud Lucas (IPGP), Anne Puissant (Live), Jean-Philippe Malet (EOST), David Scheeren (Dynafor), Simon Gascoin (Cesbio).


      Romain Hugonnet, Robert McNabb, Etienne Berthier, Brian Menounos, Christopher Nuth, Luc Girod, Daniel Farinotti, Matthias Huss, Ines Dussaillant, Fanny Brun, et al. Accelerated global glacier mass loss in the early twenty-first century. Nature, 592(7856):726–731, 2021.

      Ian Joughin, David E Shean, Benjamin E Smith, and Dana Floricioiu. A decade of variability on jakobshavn isbræ: ocean temperatures pace speed through influence on mélange rigidity. The cryosphere, 14(1):211–227, 2020.

      Simon Rougier, Anne Puissant, André Stumpf, and Nicolas Lachiche. Comparison of sampling strategies for object-based classification of urban vegetation from very high resolution satellite images. International Journal of Applied Earth Observation and Geoinformation, 51:60–73, 2016.

      David Morin, Milena Planells, Dominique Guyon, Ludovic Villard, Stéphane Mermoz, et al.. Estimation and mapping of forest structure parameters from open access satellite images: development of a generic method with a study case on coniferous plantation. Remote Sensing, MDPI, 2019, 11 (11), pp.1-25.

      Erwin W.J. Bergsma, Rafael Almar, Amandine Rolland, Renaud Binet, Katherine L. Brodie, and A. Spicer Bak. Coastal morphology from space: A showcase of monitoring the topography-bathymetry continuum. Remote Sensing of Environment, 261:112469, 2021.


    • sur VENµS satellite new phase started: high resolution images every day (almost)

      Posted: 20 March 2022, 6:39pm CET by Olivier Hagolle

      VENµS, the French-Israeli satellite that goes up and down, reached its new orbit on the first of March, and after the first tunings of the programming and telemetry, steady acquisitions started on March 15th. This new phase is called the VM5 phase. On the selected sites, images will be taken every day (sometimes every second day), at 4 m resolution, with 12 spectral bands in the visible and near infrared. We still have a bit of a commissioning phase to go through in the coming weeks, and we need to prepare all the geometric reference images to obtain an accurate registration. The L1C, L2A and L3A data will only be released at the end of spring (hopefully).

      One of the first VENµS images for VM5 phase, over Saint-Louis, Senegal, already processed  at L1C.

      The list of selected VENµS sites is shown on the map below. It may not be fully definitive, and we cannot exclude that a few sites will have to be removed in the coming days, owing to possible difficulties in the acquisitions. This uncertainty is due to differences between the programming simulator and reality. In these first days, 20 of the selected sites did not make it into the current programming; we are trying hard to include as many of them as possible. The current list just indicates a high probability for these sites to be included in the VENµS programming.

      An email will be sent to each proposing team this week.

      Choosing those sites kept our team busy for a few months:

      1. Together with Gérard Dedieu, who did 90% of the work as a volunteer from his retirement in the Pyrenees, we read all the proposals (almost 90!) and classified them with priorities between 1 and 5. We quickly figured out that priorities 1 and 2 would be enough to fill the VENµS programming.
      2. The technical team in France and Israel (Sophie Pelou (CNES), Thibaut Faijan (CS-Group), Thomas Cruz (Sopra-Steria)) then had to program all the sites, accounting for all the constraints:
        • the satellite's moderate agility,
        • the memory occupation,
        • the constraint of having an empty memory on the first orbit,
        • the impossibility of acquiring while downloading data to the ground station,
        • the need to limit the number of downloads, as they are not free.
      3. Finally, adapt the processors, prepare the reference images for ortho-rectification, and check the product quality. This is the work of Jean-Louis Raynaud (CNES), Amandine Rolland, Lucas Tuillier, Laetitia Fenouil (Thales) and Sophie Coustances (CNES).
      4. Process the data (Marie-France Larif, Bernard Specht, CNES) with Theia's production team (Gwénaëlle Baldit and Victor Vidal, Sopra Steria).

      The second part was the hardest, and we lost count of the iterations needed with our simulator, checked by our colleagues in Israel. As we wanted to include as many sites as possible, it took an almost infinite time to obtain an optimised result, which explains why we were not able to deliver the list of sites earlier.

      The result is as follows. I know some of our friends who suggested sites will be disappointed, but this is the best we could program. Due to the constraints, some of the sites could only be programmed every second day instead of every day.

      Sites displayed in blue have a one-day revisit, and red sites have a two-day revisit. Click on the image to see the interactive map and zoom in.

    • sur Watching snow melt at 10 minute intervals from satellite

      Posted: 17 March 2022, 6:41pm CET by Simon Gascoin

      This is something new to me…

      The Geostationary Operational Environmental Satellites (GOES) capture images every 10 minutes at 1 km resolution over the Americas. Such imagery makes it possible to watch the quick melt of a thin snow cover over the US Great Plains during a single day.

      Sub-daily monitoring of the snow cover at such spatial resolution was already possible by combining Aqua and Terra observations (overpass times of 10:30 am and 1:30 pm). Here is the same example zoomed in near Omaha, Nebraska (snow cover in blue, clouds in white):


    • sur Lowest snow cover area in the Alps since 2001

      Posted: 5 March 2022, 2:18pm CET by Simon Gascoin

      On March 2nd, the snow cover area in the Alps reached its lowest value since 2001. Only 43% of the alpine range was covered by snow (about 82,000 square kilometers), whereas the average on the same day over the period 2001-2021 is 63%. The deficit in the number of snow covered days (map below, computed from 01 November) is particularly evident in the Italian Alps.

      The current conditions contrast with the beginning of the season, when early snowfalls covered almost 90% of the entire alpine range. But a warm December and a dry January have changed the situation. The situation is similar in the Pyrenees.

      These data are generated from NASA Terra/MODIS observations. You can follow the evolution of the snow cover area in near real time through our Alps Snow Monitor.

      Top picture: MODIS image of the Alps on 03 March 2022.

    • sur The Pyrenees did Dry January

      Posted: 11 February 2022, 9:43pm CET by Simon Gascoin

      The snow cover area has never been as low on a 6th of February since 2001. It was 6,100 square kilometers, whereas the average on this date is 13,900 square kilometers. This situation results from a long dry and sunny period that started on January 10 with the settling of an anticyclonic situation.

      Snow cover area, expressed as a fraction of the domain mapped below


      Snow cover duration from 01 November 2021 to 06 February 2022 (number of days)


      The snow cover area gives only a partial view of the state of the snowpack. Large accumulations remain at higher elevations and on north-facing slopes, which are protected from solar radiation, left by the heavy precipitation of the beginning of the winter.

      Follow the evolution of the snow cover area on the Pyrenees Snow Monitor.

      Information published on the France 3 Occitanie website: [https:]]

    • sur TropiSCO scores against deforestation

      Posted: 5 February 2022, 4:35pm CET by Olivier Hagolle


      The TropiSCO project aims at providing maps of tree cover loss in dense tropical forests using Sentinel-1 satellite images, starting in 2018 and in near real time. The maps will soon be publicly available via a webGIS platform and updated weekly at 10m resolution. Forest loss as small as 0.1 hectare will be detected (corresponding to ten Sentinel-1 pixels). Compared to other existing systems, TropiSCO brings two main improvements: its fine spatial resolution and, above all, its short forest loss detection time, whatever the weather conditions, which is essential in the tropics to allow rapid interventions on the ground.

      The TropiSCO project, labelled by the Space Climate Observatory in 2021, is led by the GlobEO company in close collaboration with CNES and CESBIO. The project is divided into two phases, A and B. Phase A, which will end in April 2022, has three objectives:

      • The collection of user requirements,
      • An analysis of the system architecture and of the costs associated with each technical solution studied,
      • A demonstration of the concept in seven countries with the creation of a dedicated webGIS by the Someware company. The demonstration was carried out in Guyana, Suriname, French Guiana, Gabon, Vietnam, Laos and Cambodia.

      The production will be extended to the whole tropical dense forests in the frame of phase B.

      User needs have been collected via a questionnaire filled in by twenty-five institutions; the answers are being analysed in order to produce and distribute the most relevant maps. In parallel, the architecture of the production system is being studied at CNES, in order to tailor a technical solution adapted to the ambition of this project.

      The products generated in the frame of the TropiSCO project consist mainly of maps of forest loss dates at a high spatial and temporal resolution, but also of synthetic maps highlighting areas of significant activity, as well as monthly and annual statistics by territory (provinces, countries, etc.).

      Figure 1. Synthetic maps of forest loss from January 2018 to December 2021, with weekly temporal resolution and ten-metre pixel size, obtained using Sentinel-1 satellite images. The figure was made with the help of Simon Gascoin and Maylis Duffau.

      Examples of synthetic maps are shown in Figure 1. The red shading indicates the area of forest loss within each 460 km² hexagon. Examples include gold mining in Suriname on the border with French Guiana, as well as tree plantation harvests in central Vietnam and the conversion of natural forests to tree plantations in northern Laos. The contrast between northern Laos and Vietnam is striking, illustrating that forest exploitation and management depend strongly on national strategy. More than 70,000 Sentinel-1 images were processed with CNES computing resources to produce maps of Vietnam, Laos and Cambodia, covering 1,230,000 km². For these three countries, the errors of omission and commission were estimated at 10% and 0.9% respectively, following an adapted validation protocol (Mermoz et al., 2021).
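      Omission and commission rates come from a confusion matrix comparing detected forest loss with a reference. A minimal sketch with made-up counts (illustrative only, not the validation figures of Mermoz et al., 2021):

```python
def omission_commission(tp, fn, fp):
    """Error rates for the 'forest loss' class.

    tp: loss pixels correctly detected
    fn: loss pixels missed             -> omission
    fp: stable pixels flagged as loss  -> commission
    """
    omission = fn / (tp + fn)      # share of true losses that were missed
    commission = fp / (tp + fp)    # share of detections that are false alarms
    return omission, commission

# Made-up example: 900 hits, 100 misses, 9 false alarms
om, com = omission_commission(tp=900, fn=100, fp=9)
print(f"omission={om:.1%}, commission={com:.1%}")
```

      A low commission rate is the priority for an alert system like this one: field teams should rarely be sent out for a false detection.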

      Figure 2 shows an example of a detection map for Suriname from 2018 to 2021. The colour gradations from yellow to red show the progressive temporal evolution of logging roads. Selective logging (yellow to red dots) is visible between the roads.

      Figure 2: Logging area in Suriname. The first forest losses are often associated with the creation of logging roads, followed by selective logging. Background image: Google Earth.



      This work was presented on 11 October 2021 at the Theia workshop on the uses of remote sensing for forestry, and on 20 January 2022 at the third Quarterly Meeting of the SCO France. Between now and the end of phase A, the TropiSCO team is working on the complete automation of the processing chain and on the production of forest loss maps for Gabon. The webGIS will be open and accessible to all by April 2022.

      References :



      Mermoz et al. (2021). Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data. Remote Sensing, 13(23), 4877. [https:]]




    • sur TropiSCO shakes up deforestation

      Posted: 5 February 2022, 12:02pm CET by Stéphane Mermoz


      The objective of the TropiSCO project is to provide maps of tree cover loss in dense tropical forests using Sentinel-1 satellite images, from 2018 onwards and continuously. The maps will soon be publicly available via a webGIS platform and updated every week, at 10 m resolution. Forest clearings as small as 0.1 hectare will be detectable (corresponding to ten Sentinel-1 pixels). Compared with other existing systems, TropiSCO therefore brings two improvements: its fine spatial resolution and, above all, its short detection time for tree cover loss, whatever the weather conditions, which is essential in the tropics to allow rapid interventions on the ground.


      The TropiSCO project, labelled by the Space Climate Observatory in 2021, is led by the GlobEO company in close collaboration with CNES and CESBIO. The project is carried out in two phases, A and B. Phase A, which will end in April 2022, has three objectives:

      • the collection of user requirements,
      • an analysis of the system architecture and of the costs associated with each technical solution studied,
      • a demonstration of the concept in seven countries, with the creation of a dedicated webGIS by the Someware company. The demonstration covers Guyana, Suriname, French Guiana, Gabon, Vietnam, Laos and Cambodia.

      The modest but main objective of phase B will be to progressively extend the method to all dense tropical forests!


      To date, user requirements have been collected via a questionnaire sent to twenty-five institutions and are being analysed. They provide us with valuable information for producing the most relevant map products possible. In parallel, the architecture of the production system is being studied at CNES, in order to size a technical solution suited to the ambition of this project.

      The products generated by the TropiSCO project consist mainly of maps of tree cover loss dates at high spatial and temporal resolution, but also of synthetic maps highlighting areas of significant activity, as well as monthly and annual statistics by territory (provinces, countries, etc.).

      Figure 1. Synthetic maps of forest clearing activity from January 2018 to December 2021, with weekly temporal resolution and a pixel size of ten metres, obtained with Sentinel-1 satellite images. The figure was made with the help of Simon Gascoin and Maylis Duffau.

      Examples of synthetic products are shown in Figure 1. The red shading indicates the area of forest cleared within each hexagon of 460 km². One can identify, for example, the gold mining areas in Suriname on the border with French Guiana, as well as the harvesting of tree plantations in central Vietnam and the conversion of natural forests to tree plantations in northern Laos. One can also observe the contrast between northern Laos and Vietnam, which illustrates that forest exploitation and management strongly depend on national strategy. More than 70,000 Sentinel-1 images were processed with CNES computing resources to produce the maps of Vietnam, Laos and Cambodia, covering 1,230,000 km². For these three countries, the errors of omission and commission were estimated at 10% and 0.9% respectively, following an adapted validation protocol (Mermoz et al., 2021).

      Confusion matrix for the evaluation of the products over South-East Asia (Mermoz et al., 2021)

      Figure 2 shows an example of a detection map over Suriname from 2018 to 2021. The colour gradations from yellow to red show the progressive temporal evolution of the logging roads. Selective logging (yellow to red dots) is visible between the roads.

      Figure 2. Logging area in Suriname. The first forest losses are often associated with the creation of logging roads, followed by selective logging. Background image: Google Earth.



      This work was presented on 11 October 2021 at the Theia workshop on the uses of remote sensing for forestry, and on 20 January 2022 at the third Quarterly Meeting of the SCO France. Between now and the end of phase A, the TropiSCO team will continue to work on the complete automation of the processing chain and on the production of forest clearing maps for Gabon. The webGIS will be open and accessible to all in April 2022.

      References :



      Mermoz et al. (2021). Continuous Detection of Forest Loss in Vietnam, Laos, and Cambodia Using Sentinel-1 Data. Remote Sensing, 13(23), 4877. [https:]]


    • sur Snow and floods in the Pyrenees between November 2021 and January 2022

      Posted: 19 January 2022, 6:21pm CET by Simon Gascoin

      The snowpack built up rapidly in the Pyrenees following several heavy snowfalls between 23 November and 10 December. After this thundering start of the season, the snow cover area declined rapidly to reach a rather low value at the beginning of 2022. This rapid melt is the consequence of the exceptionally high temperatures at the end of December, since the end of 2021 was the mildest France has ever recorded.

      This graph is taken from my Pyrenees Snow Monitor, which uses Terra/MODIS satellite data to compute the percentage of the Pyrenees area that is snow covered.

      Despite the melt episode at the end of December, the snow cover duration is rather above average to date (16 January) compared with the mean of the last 20 years.

      Snow cover rebounded at the beginning of January with the precipitation brought by the Aquitaine warm front. While snowfall was significant at high elevations, this front was accompanied by a strong thaw, so that the abundant precipitation fell as rain at mid elevations, including over areas still covered with snow, triggering the remarkable floods of the Garonne, the Ariège and the Adour.

      Thus the Adour at Tarbes experienced a first 30-year flood in December 2021, then a 50-year flood in January 2022! A "30-year" flood is a flood that occurs on average every 30 years... It is therefore remarkable that a "50-year" flood occurred the following month! Moreover, the Adour had experienced its previous 30-year flood... in December 2019!

      The Adour at Tarbes, like the Ariège at Foix, is a river under snowmelt influence, so that the mean monthly discharge peaks in spring.

      Monthly (natural) flows of the Adour at Tarbes (source: Banque Hydro, DREAL Aquitaine) – data computed over 54 years.

      Under the influence of climate change, this regime is evolving towards a rainfall-driven regime, with earlier floods (downward trend in the date of the annual flood in the graph below).

      Date of the annual flood of the Ariège at Foix (here defined as the maximum discharge over 10 consecutive days)

      Even if events with such a high return period cannot easily be attributed to climate change, these recent Pyrenean floods are entirely consistent with the expected effect of climate change: more intense extreme precipitation, with a higher liquid fraction over the mountains, and a high-mountain snowpack that contributes to strengthening the flood wave through melt peaks in the middle of winter.
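      An annual flood date of the kind used for the Ariège at Foix (the 10 consecutive days with the highest mean discharge) can be found with a simple rolling mean. A minimal sketch on a synthetic daily discharge series (hypothetical data, not the Banque Hydro records):

```python
import numpy as np

def annual_flood_date(discharge, window=10):
    """Day of year (0-based) starting the `window`-day period with the
    highest mean discharge, for one year of daily values."""
    kernel = np.ones(window) / window
    rolling = np.convolve(discharge, kernel, mode="valid")  # 10-day means
    return int(np.argmax(rolling))

# Synthetic year: baseflow plus a flood peak centred around day 100
days = np.arange(365)
q = 10 + 50 * np.exp(-0.5 * ((days - 100) / 5) ** 2)
print(annual_flood_date(q))  # start day of the highest 10-day period
```

      Tracking this date year after year is what reveals the trend towards earlier floods mentioned above.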

      Photo: Pascal Fanise, Etang de Lers, 12 December 2021

    • sur Sentinel-1B is currently out of order!

      Posted: 11 January 2022, 2:03pm CET by Olivier Hagolle

      New update: 1.5 months later, ESA has not yet been able to resume operations with Sentinel-1B. The work is now concentrating on understanding the cause of the breakdown, to determine whether some modifications are necessary before launching Sentinel-1C, not before the end of 2022.

      Update: one week later, attempts to resume exploitation of Sentinel-1B have not been successful, but ESA is not giving up:

      An anomaly occurred on Sentinel-1B on the 23rd of December 2021. ESA tried to resume operations, but "the initial anomaly was a consequence of a potential serious problem related to a unit of the power system of the Sentinel-1B satellite. The operations performed over the last days did not allow to reactivate so far a power supply function required for the radar operations." This information was released by ESA yesterday.

      Glitches of this kind do occur on satellites, which is why they are designed with redundancies in case a piece of equipment fails. Let's hope the technical teams at ESA will be able to restart the acquisitions soon.

      If they cannot, a Sentinel-1C is almost ready for launch: the current plan was "S1C Launch Period: Between 1 December 2022 and 31 May 2023".

    • sur CO3D : the Very High Resolution mission dedicated to 3D

      Posted: 9 January 2022, 6:23pm CET by Olivier Hagolle

      Do you know the CNES/Airbus CO3D mission? As our Sentinel-HR mission could be made of satellites derived from those of the CO3D mission, it is a good opportunity to advertise this nice mission here. I took all the information from a presentation and a paper [1] written by Laurent Lebègue and the CO3D team.

      A tasked multispectral VHR mission 

      The CO3D mission (3D Optical Constellation) is an approved mission, currently in phase C/D, due to be launched by mid-2023, next year! It is made of two pairs of low-cost satellites. The 300 kg CO3D satellites may be used as tasked VHR satellites, acquiring images at 50 cm resolution, in 4 bands (blue, green, red, NIR), with a 7 km field of view. At this resolution, one usually only gets a panchromatic band, or at best a pan-sharpened image, but CO3D will provide 4 bands at 50 cm! These products will be commercialised by Airbus Defence and Space.

      A 3D mission

      But the main feature of CO3D is its ability to make stereoscopic observations from the same orbit with a base-to-height ratio of 0.2 to 0.3, almost simultaneously, thanks to its two pairs of satellites, as shown in the figure to the right. The main mission objective for CNES is to produce a global Digital Surface Model (DSM) by 2026, with 90% of the surface covered (there are regions with persistent cloudiness). During the first eighteen months, the efforts will concentrate on two priority regions in which the DSM will be produced: one in France, where we will be able to perform an extensive validation, and another, much bigger one (27 million km²) covering North and Tropical Africa and the Middle East.

      Meanwhile, stereo acquisitions over other parts of the world will also be performed. After the first 18 months, DSM production will be extended to all land surfaces. Some capacity to revisit sites and monitor DSM change over time will be available, but it will not be global. A capacity of 600,000 km² per year has already been negotiated, and it will be possible to increase it at low cost. Requests will go through the Dinamis portal.

      The objective is to get a 1 m relative altimetric accuracy (CE90) at 1 m ground sampling distance (GSD). Each DSM will be produced at 1, 4, 12, 15 and 30 m GSD. At 15 m and 30 m GSD, the DSM will be delivered as open data.
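      The post quotes the accuracy as CE90, a percentile-based metric: 90% of the elevation errors fall below the stated value. A minimal sketch of that percentile computation, on synthetic errors (illustrative assumption, not CO3D specifications or results):

```python
import numpy as np

def error_p90(dsm_errors):
    """90th percentile of absolute elevation error: 90% of the DSM
    heights deviate from the reference by less than this value."""
    return float(np.percentile(np.abs(dsm_errors), 90))

# Synthetic vertical errors (metres) between a DSM and a reference
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.0, scale=0.5, size=100_000)
print(f"90th-percentile accuracy: {error_p90(errors):.2f} m")
```

      For zero-mean Gaussian errors this percentile is about 1.645 times the standard deviation, which is why a 1 m CE90 target is tighter than a 1 m standard deviation.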

      The project aims at delivering DSMs fully automatically, which requires a huge effort in algorithm and software development, including the CARS method [2], a stereo pipeline to produce DSMs that is available as open source. A large downstream service to develop 3D methods and applications is being set up through the CNES S3D2 programme and the AI4GEO project. I'll try to convince my colleagues to describe the products more accurately here.

      A DSM of a part of Nice, France, obtained with a Pléiades data set acquired to prepare the CO3D ground segment and processors.


      Written by O.Hagolle, with the appreciated help of Jean-Marc Delvit, Laurent Lebègue, Delphine Leroux and Simon Gascoin.


      [1] Lebègue, L., Cazala-Hourcade, E., Languille, F., Artigues, S., and Melet, O.: CO3D, A WORLDWIDE ONE-METER ACCURACY DEM FOR 2025, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B1-2020, 299–304, [https:] 2020

      [2] Michel, J., Sarrazin, E., Youssefi, D., Cournet, M., Buffe, F., Delvit, J. M., Emilien, A., Bosman, J., Melet, O., and L’Helguen, C.: A NEW SATELLITE IMAGERY STEREO PIPELINE DESIGNED FOR SCALABILITY, ROBUSTNESS AND PERFORMANCE, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 171–178, [https:] 2020.

    • sur Multitemp blog statistics in 2021

      Posted: 4 January 2022, 2:40pm CET by Olivier Hagolle

      As every year, I have gathered the statistics of the multitemp blog for 2021. It seems that the decline in its audience has more or less stopped, with a higher number of visits than last year, even if the number of pages viewed is still decreasing a bit. The two main authors of the blog have been quite busy this year:

      • Simon Gascoin successfully defended his "Habilitation à Diriger des Recherches", the diploma you need in France to be allowed to supervise PhD students. Instead of preparing superb images for the blog, he wrote a magnificent dissertation on how remote sensing can be useful to monitor Snow Water Equivalent.
      • I have become the big boss of CESBIO’s observation systems team and I have the honor of spending all my time in meetings with other big bosses, while the researchers at CESBIO are doing real work.

      Let me recall that the columns of this blog are open, and that all CESBIO members are welcome to submit short articles, and we even accept articles from outside CESBIO, as long as they concern time series of remote sensing images.

      I have another meeting and lack the time to detail which pages were successful this year. Apart from the didactic pages in the "how it works" menu, a few pages were particularly successful, with more than 500 views:

      Despite its technical content, the multitemp blog, relayed by social media, is becoming a great tool in remote sensing policy!


    • sur Happy 2022 !

      Posted: 4 January 2022, 1:12pm CET by Olivier Hagolle

      As shown by the image above, warmth can be found in winter, and beauty often comes out of the fog. And as far as fog is concerned, this beginning of 2022 seems to be a great vintage. I took this picture at sunset on the 31st of December, in Hendaye, right in the south-west corner of France, in the Basque Country.

      (I’m sorry, I don’t know to whom I should attribute this illustration)

      I was very optimistic at the beginning of last year, but it turns out that this start of 2022 is looking gloomy once again. We sincerely hope that Covid-19 did not affect you too much in 2021, and that this new wave will leave us, and you, with no more trouble than the obligation to work from home once again, for a couple of weeks only.

      2021 was not entirely bad at CESBIO. Even if we did not have all the celebrations we had planned at the beginning of the year, we were lucky enough to celebrate the first anniversary of CESBIO's 25th birthday, as well as Gérard Dedieu's retirement. Also in 2021, our director, Mehrez Zribi, proved persuasive enough to open a few permanent positions in the lab:

      • Karin Dassas, who used to work on the on-board software of astronomy instruments, joined the radar and GNSS team at CESBIO.
      • Alexandre Bouvet has been hired at CESBIO, as a research engineer to use radar remote sensing data to monitor deforestation and plant growth, and to increase scientific collaboration with South-East Asia.
      • Ludovic Arnaud joined us as a research engineer to estimate carbon fluxes in the soil using optical data.
      • Rémi Fieuzal joined us to work on the integration of different types of remote sensing data in vegetation growth models
      Now that he is in French Guiana, François looks very serious, but it was fun working with him!

      François Cabot also left us in 2021, to take part in the great adventure of European launchers in French Guiana. François Cabot was an essential part of the SMOS team, in which he was in charge of the Level 1 products and of the instrument calibration. If the SMOS products have allowed the publication of so many articles and useful products, this is largely due to the excellent work of François. He was also the PI of a nanosat mission, ULID, which aimed to test the technologies necessary for a future enhanced SMOS mission in which the antennas would be placed on different satellites; unfortunately, this mission was cancelled. As a side effect, the quality of the famous CESBIO BBQs will also decrease a lot with François far from here, and someone else will have to learn how to sing "où sont passés les tuyaux". François will be in charge of making sure that rocket debris does not fall on your head in case of a failure during a launch. With him there, you can leave your house without checking ESA's launch schedule.

      And for 2022 ?

      We were expecting the Biomass satellite to be launched in 2022, but this has been postponed to the end of 2023, which will give us a little more time to prepare the processing methods. In 2022, VENµS will start acquiring data every day over more than 50 sites. The VM5 period will start in a few weeks, and we are making the last optimisations of the programming to cover as many sites as possible. In a few days, we will also have an important change in the Sentinel-2 data format. In 2022, CNES will renew the 5-year plan for Theia, and we are expecting a large increase in the budget dedicated to this data centre.

      Of course, at CESBIO, we will continue our work on the definition of new missions (SMOS-HR, Sentinel-HR, Sentinel2-NG), on the preparation for the arrival of the approved missions (Biomass, Trishna), and on the processing of the missions in operation (Sentinel, SMOS, VENµS…); and to help us understand all these observations, the in situ data acquisition and the modelling activities will continue.

      In our immediate environment, CNES has just reorganised. The CNES researchers at CESBIO are now attached to a huge technical directorate and, within this directorate, to a new sub-directorate focused on data processing, the "data campus", led by Simon Baillarin, a CESBIO alumnus. Until now, we were attached to a sub-directorate focused on instrumentation and image quality, led by Philippe Kubik, whom we thank for his unfailing support. This reorganisation is thus an important change of orientation for CESBIO, which relies on two pillars: observation instruments and processing. We will now be closer to the latter. It will be a little easier for us to get our processing methods into operation at CNES, but we will have to be more convincing to support the missions we propose, and to get help from our instrumentalist colleagues.




    • sur Happy 2022!

      Posted: 4 January 2022, 11:51am CET by Olivier Hagolle

      As the image above shows, warmth can be found in winter, and beauty can be born from the fog. And as far as fog is concerned, this winter of 2022 seems to be a great vintage. I took this picture at sunset on 31 December, in Hendaye, in the south-west of France, in the Basque Country.

      (I’m sorry, I don’t know to whom I should attribute this nice illustration found in a tweet)

      I was very optimistic at the beginning of last year, but it turns out that this start of 2022 is once again looking gloomy. We sincerely hope that Covid-19 did not affect you too much in 2021, and that this new wave will not cause us more trouble than the obligation to work from home, once again, for a few weeks only.

      A quick look back at 2021

      2021 was not entirely bad at CESBIO. Even if we could not hold all the celebrations we had planned at the beginning of the year, we were lucky enough to celebrate the first anniversary of CESBIO's 25th birthday, as well as Gérard Dedieu's retirement.

      Under the impetus of the Space Climate Observatory, we were able to advance many pre-operational applications based on space observations, without abandoning our more fundamental research and the definition of new missions.

      In the field, our two sites in the south-west of France (Lamasquère and Auradé) obtained the ICOS label, and we commissioned our ROSAS reflectance measurement station. We also took part in the huge Liaise campaign in Spain, and began preparing the CAL/VAL measurements needed for TRISHNA.

      Also in 2021, our director, Mehrez Zribi, proved persuasive enough with our supervisory bodies to open a few permanent positions in the laboratory:

      • Karin Dassas, who until now worked at the Institut d'Astrophysique d'Orsay on the on-board software of the MAJIS instrument for the Juice mission, soon to depart for Jupiter, has joined the radar and GNSS team at CESBIO.
      • Alexandre Bouvet, from the GlobEO team, has been hired at CESBIO as a research engineer to use radar remote sensing data to monitor deforestation and plant growth, and to strengthen scientific collaboration with South-East Asia.
      • Ludovic Arnaud, who was already working at CESBIO on a fixed-term contract, has joined us as a research engineer to estimate soil carbon fluxes from optical data.
      • Rémi Fieuzal has also joined us at CESBIO. He works on the integration of radar and optical data into vegetation growth models.
      Now that he is in French Guiana, François looks very serious, but it was often great fun working with him!

      François Cabot left CESBIO at the end of 2021, to take part in the great adventure of European launchers in French Guiana. François was an essential member of the SMOS team, in which he was in charge of the Level 1 processing and of the instrument calibration. If the SMOS products have enabled the publication of so many articles and useful products, it is also thanks to François' excellent work. He was also the PI of a nanosat mission, ULID, which aimed to test the technologies needed for a future enhanced SMOS mission in which the antennas would be placed on different satellites. Unfortunately, this mission was stopped, and this is not unrelated to François' departure. But François was above all the specialist of the famous CESBIO BBQs, and one wonders who will now sing "où sont passés les tuyaux ?" late in the evening. In Kourou, François will be in charge of making sure that rocket debris does not fall on your head in the event of a launch failure. With him there, you will be able to leave your house without checking ESA's launch schedule.

      And for 2022?

      We were counting on the launch of Biomass in 2022, but it has been postponed to the end of 2023, which will leave us a little more time to prepare the processing methods. In 2022, VENµS will start acquiring data every day over about fifty sites. The VM5 period will start in a few weeks, and we are making the last optimisations of the programming to obtain as many sites as possible. In a few days, we will also have an important change in the Sentinel-2 data format.

      Of course, we will continue our work on the definition of new missions (SMOS-HR, Sentinel-HR, Sentinel2-NG), on the preparation for the arrival of the approved missions (Biomass, Trishna), and on the processing of the missions in operation (the Sentinels, SMOS, VENµS…); and for all of this, the in situ data acquisitions and the modelling activities will continue.

      In our immediate environment, CNES has just reorganised. The CNES researchers at CESBIO are now attached to a huge technical directorate and, within this directorate, to a new sub-directorate focused on data processing, the "data campus", led by Simon Baillarin, a CESBIO alumnus. Until now, we were attached to a sub-directorate focused on instrumentation and image quality, led by Philippe Kubik, whom we thank for his unfailing support. This reorganisation is therefore an important change of orientation for CESBIO, which relies on two pillars: observation instruments on the one hand and processing on the other. We are now a little closer to the latter, but we will have to maintain the link with the former. It will be a little easier for us to get our processing methods into operation at CNES, but we will have to be more convincing to defend the missions we propose, and to obtain help from our instrumentalist colleagues.

      A sticker found during the holidays on my nephew's desk




    • sur The eastern ice shelf of Thwaites Glacier is cracking

      Posted: 29 December 2021, 3:48pm CET by Simon Gascoin

      Thwaites is a huge glacier in Antarctica that already contributes 4% of the rise in sea level on its own. The floating ice shelf in front of the eastern part of the glacier acts as a dam that slows down the flow of ice from the continent to the ocean. If this ice shelf breaks up, Thwaites Glacier will accelerate and its contribution to sea level rise could reach 25%. Using Sentinel-1 images, Pettit et al. 2021 noticed that this ice shelf is fracturing. The crevasses can be seen appearing in this animation:

      Doomsday approaching!
      Breakup of the @ThwaitesGlacier eastern ice shelf.
      Time lapse of 221 #Sentinel1 radar images @CopernicusEU

      — Simon Gascoin (@sgascoin) December 29, 2021

      To find out more:




      Pettit, Erin C., Christian Wild, Karen Alley, Atsuhiro Muto, Martin Truffer, Suzanne Louise Bevan, Jeremy N. Bassis, Anna Crawford, Ted A. Scambos, and Doug Benn. 2021. "Collapse of Thwaites Eastern Ice Shelf by intersecting fractures." AGU. [https:]] .
    • sur Viedma glacier terminus since 1985

      Posted: 27 December 2021, 4:47pm CET by Simon Gascoin

      Viedma Glacier is an iconic glacier of the Southern Patagonia icefield. Landsat 5, Landsat 8 and Sentinel-2 image series show that its terminus has retreated by 3 km since 1985.

      Viedma glacier terminus evolution

      Looking closer one can notice a new proglacial lake forming in a recently deglaciated area.

      Having won 80 sq km of Airbus Pléiades data, I ordered 3 sq km of a Pléiades image captured on 5 February 2021 and estimated that the area of the new lake should be between 0.3 and 0.4 sq km.

      Pléiades 2021-02-05 © Airbus DS
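      An area estimate like this one boils down to counting classified water pixels and multiplying by the pixel footprint. A minimal sketch with a made-up mask and an assumed 0.5 m pixel size (not the actual Pléiades processing):

```python
import numpy as np

def lake_area_km2(water_mask, pixel_size_m=0.5):
    """Area of a water mask in km², for a square pixel size in metres."""
    return water_mask.sum() * pixel_size_m**2 / 1e6

# Made-up mask: 1 400 000 water pixels at 0.5 m -> 0.35 km²
mask = np.zeros((2000, 1000), dtype=bool)
mask[:1400, :] = True
print(lake_area_km2(mask))  # 0.35
```

      The uncertainty range (0.3 to 0.4 sq km) mostly reflects where one draws the shoreline in the classification, not the pixel arithmetic.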

      But the images also show the disappearance of a formerly ice-dammed lake in the south…


      This place is very interesting to study as many periglacial processes are exacerbated by climate change. Yet, Viedma glacier front will never look the same without its curved stretch of ice…

      So sad!! Here is a nice picture of Viedma during my very first fieldwork as a Ph.D. at #IANIGLA (February 2008) with Ricardo Villaba, Brian Luckman, Pierre Pitte, among others

      — Lucas Ruiz (@ARGlaciares) June 18, 2021

      You can look at the images here. And make your own gif from 35 years of satellite imagery in the Sentinel Hub EO Browser.

    • sur First validation results with the Lamasquère ROSAS instrument

      Posted: 14 December 2021, 2:14pm CET by Jérôme Colin


      Following its installation in March 2021, our new ROSAS device at Lamasquère has produced a first series of acquisitions over a complete maize crop cycle. The ROSAS system is based on the use of a multi-spectral photometer to carry out angular and spectral measurements of both incident and reflected radiation from the surface. The entire measurement sequence, which lasts 140 minutes, is then pre-processed to produce a bi-directional reflectance (BRDF) of the surface of the measurement site. It is then possible to calculate the spectral reflectances of a given instrument for observation conditions corresponding to the illumination and viewing angles of a remote sensing product, and thus to validate the Sentinel-2 surface reflectances calculated by an atmospheric effects correction processor such as Maja.

      The system was installed during an intermediate crop cycle. A forage maize was then sown on the plot in mid-April, and harvested for silage in early October. The photometer therefore followed the entire growth cycle of the maize, from germination to maximum growth (~ 2.60m around the mast).

      Status of the maize crop during the 2021 cycle: May 10th (top left), June 16th (top right), August 18th (bottom left), October 6th (bottom right)

      In spite of a stronger cloud cover than at the ROSAS site in La Crau (south of Arles), and even more so than at Gobabeb (Namibian site), the photometer performed enough complete cycles to follow the evolution of the surface state.

      Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 4. Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 8.

      The evolution of the Sentinel-2 surface reflectances produced by MAJA fits pretty well with the ROSAS acquisitions. Although the sample size is small (9 observations after the acquisition quality filter), it is nevertheless possible to calculate the root mean square error (RMSE) per Sentinel-2 band as well as for the estimation of the optical thickness of the atmosphere (AOT). The results are presented in the table below.

      Band (wavelength)   RMSE (S2A and S2B)
      B2 (490 nm)         0.009
      B3 (560 nm)         0.009
      B4 (665 nm)         0.007
      B5 (705 nm)         0.012
      B6 (740 nm)         0.024
      B7 (783 nm)         0.026
      B8A (865 nm)        0.030
      B8 (842 nm)         0.033
      B11 (1610 nm)       0.028
      AOT (–)             0.068

      While the RMSEs are very satisfactory for bands B2 to B4 (an RMSE of 1% is targeted for the estimation of surface reflectances), there is quite some loss in the quality of the estimates from B6 to B11. The differences are particularly significant in the near infrared for reflectances greater than 0.2. As atmospheric correction errors are usually greater in the blue than in the near infrared, these errors could be due to stronger spatial heterogeneity in the NIR than in the visible, to a BRDF that is not well reproduced by our model, to adjacency effects, or maybe to the effects of irrigation, which can change the ground reflectance during the day (it takes one full day to obtain the full BRDF from ROSAS). This still needs further investigation. Winter wheat, sown at the end of October, will give us the opportunity to continue the exercise, this time on a lower and denser (and, we hope, more homogeneous) crop cover, which will provide us with additional data to refine our analysis.
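      As a reminder of the metric used above, the per-band RMSE is just the quadratic mean of the differences between matched ROSAS and MAJA reflectances. A minimal sketch, with purely illustrative values (these are not the actual ROSAS measurements):

```python
import math

def rmse(rosas, maja):
    """Root mean square error between two matched series of reflectances."""
    assert len(rosas) == len(maja)
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(rosas, maja)) / len(rosas))

# Illustrative values only: 9 matched observations for one band,
# mimicking the sample size quoted in the post.
rosas_b4 = [0.05, 0.08, 0.12, 0.20, 0.18, 0.10, 0.07, 0.06, 0.05]
maja_b4  = [0.06, 0.08, 0.11, 0.21, 0.17, 0.11, 0.07, 0.05, 0.05]

print(round(rmse(rosas_b4, maja_b4), 3))  # 0.008, comparable to the B4 figure above
```

      The same computation, applied band by band to the filtered acquisitions, yields the table above.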

      To be continued… Many thanks to the CNES team in charge of processing the ROSAS datasets (Lucas Landier, Sophie Coustance and Nicolas Guilleminot)!

      Comparison between ground and satellite measurements at Lamasquère for 4 spectral bands (Blue, Red, Near Infrared, Short Wave infrared), in blue for Sentinel-2A, and in red for Sentinel-2B. Click on the graphs to enlarge.

    • sur First validation results from the Lamasquère ROSAS site

      Posted: 14 December 2021, 1:58pm CET by Jérôme Colin


      Following its installation in March 2021, our new ROSAS device at Lamasquère has produced a first series of acquisitions over a complete maize crop cycle. The ROSAS system relies on a multi-spectral photometer to perform angular and spectral measurements of both the incident radiation and the radiation reflected by the surface. Each measurement sequence, which lasts 140 minutes, is then pre-processed to produce a bi-directional reflectance (BRDF) of the surface of the measurement site. It is then possible to compute the spectral reflectances of a given instrument for observation conditions matching the illumination and viewing angles of a remote sensing product, and thus to validate the surface reflectances computed by an atmospheric correction processor such as Maja.

      The system was installed during an intermediate crop cycle. Forage maize was then sown on the plot in mid-April and harvested for silage in early October. The photometer therefore followed the entire growth cycle of the maize, from germination to maximum growth (about 2.60 m around the mast).


      Status of the maize crop during the 2021 cycle

      Despite a stronger cloud cover than at the ROSAS site in La Crau (south of Arles), let alone at Gobabeb (Namibian site), the photometer completed enough full cycles to follow the evolution of the surface state.

      Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 4. Evolution of ROSAS (red) and MAJA (blue) surface reflectances for Sentinel-2 band 8.

      The evolution of the Sentinel-2 surface reflectances produced by MAJA matches the ROSAS acquisitions well. Although the sample is small (9 observations after the acquisition quality filter), the root mean square error (RMSE) can nevertheless be computed for each Sentinel-2 band, as well as for the optical thickness of the atmosphere (AOT). The results are presented in the table below.

      Band (wavelength)   RMSE (S2A and S2B)
      B2 (490 nm)         0.009
      B3 (560 nm)         0.009
      B4 (665 nm)         0.007
      B5 (705 nm)         0.012
      B6 (740 nm)         0.024
      B7 (783 nm)         0.026
      B8A (865 nm)        0.030
      B8 (842 nm)         0.033
      B11 (1610 nm)       0.028
      AOT (–)             0.068

      While the RMSEs are very satisfactory for bands B2 to B4 (we target an RMSE of 1% for the estimation of surface reflectances), the quality of the estimates degrades markedly from B6 to B11. The differences are particularly large in the near infrared for reflectances above 0.2. Since atmospheric correction errors are usually larger in the blue than in the near infrared, the observed differences could be due to a stronger spatial heterogeneity in the NIR than in the visible, to an imperfect BRDF model, to a photometer calibration that could be improved, or to the effect of irrigation and subsequent drying, which introduces a variation of the reflectance during the day, whereas the Sentinel-2 measurement is instantaneous and the ROSAS measurement takes a full day. Several leads therefore remain to be explored. Winter wheat, sown at the end of last October, will give us the opportunity to continue the exercise, this time on a lower and denser (and, we hope, more homogeneous) crop cover, which will provide additional data to refine the analysis.

      To be continued…

      Many thanks to the CNES teams who process these data and produced these first results (Lucas Landier, Sophie Coustance and Nicolas Guilleminot)!

      Comparison of the reflectances measured by Sentinel-2 and ROSAS for four spectral bands (blue, red, near infrared, shortwave infrared), in blue for Sentinel-2A, in red for Sentinel-2B. Click on the graphs to enlarge.

    • sur The snow you can't see

      Posted: 7 December 2021, 12:16pm CET by Simon Gascoin

      Last year in November we noticed these unusual white spots in a Sentinel-2 image near Barèges…

      The ski resorts are anticipating the end of the lockdown. Taking advantage of the cold, the snow cannons have been running in the Pyrenees, as shown in this #Sentinel2 satellite image taken on Saturday 21/11 between La Mongie and Super Barèges. @Meteo_Pyrenees

      — Simon Gascoin (@sgascoin) November 23, 2020

      These white spots were artificial snow spread by snow cannons. At the time of the Sentinel-2 acquisition, they were also « visible » as cold spots in our thermal infrared camera installed at Pic du Midi.

      This year in November, we noticed the same patches in the thermal camera pictures again… However, the surrounding surface was also snow-covered, so the patches were invisible in webcam pictures taken from the same location.

      This year’s artificial snow patches were much warmer (by 8 K) than the surrounding natural snow cover. Such thermal anomalies surprised us, since the snow surface temperature is expected to be primarily influenced by atmospheric and radiative forcing (Pomeroy et al., 2016). Hence, the snow surface temperature should be rather homogeneous in areas with similar atmospheric and radiative forcing (experts say that it should not depend much on the physical properties of the underlying snow layers).

      In fact, the « warm » patches were not « visible » anymore in the images captured later in the afternoon.

      Artificial snow grains and natural snow grains have different shapes and sizes. As a result, artificial and natural snow covers have different optical properties, in particular reflectance and emissivity. Reflectance in the shortwave controls the energy absorbed by the surface snow, while emissivity controls the radiant energy emitted by the surface snow. Hence both properties could explain the difference in snow surface temperature. However, Pomeroy et al. (2016) consider that the snow surface temperature is only secondarily affected by the absorption of shortwave and near-infrared radiation (especially in this case: we are at the beginning of winter, so the content in light-absorbing impurities such as dust is expected to be low). Reported variations of snow emissivity due to snow grain type are likely not sufficient to explain such temperature contrasts either (Hori et al. 2006). Therefore, the observed temperature differences are probably not due to differences in the optical properties of artificial vs natural snow.

      Artificial snow is produced from liquid water (above 0°C) that is atomized into fine droplets. The snow cannons are operated when the air is sufficiently cold and dry for ice crystals to form. Yet the phase change from liquid water to ice releases a lot of energy, which must be evacuated to the air. If some droplets don’t freeze before hitting the ground, the snow will be « wet » (Lintzen 2012). That may be what happened here: the snow grains that were deposited on the ground were much warmer than the surrounding snow, and even a bit wet. This warmer, artificial snow cover radiates more in the thermal infrared.
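      To get a feeling for how much energy this phase change involves, here is a rough back-of-the-envelope comparison using standard physical constants (this is not a model of the actual snow cannons):

```python
# Energy budget for freezing water droplets, per kilogram of water.
L_f = 334e3   # latent heat of fusion of water, J/kg
c_w = 4186.0  # specific heat capacity of liquid water, J/(kg K)

# Energy that must be evacuated when 1 kg of water at 0 degC freezes completely:
freezing_energy = L_f  # J

# For comparison, cooling the same 1 kg of liquid water from +5 degC to 0 degC
# only removes:
cooling_energy = c_w * 5  # J

# Freezing dominates by roughly a factor of 16, which is why droplets
# may hit the ground before they have fully frozen.
print(round(freezing_energy / cooling_energy, 1))  # 16.0
```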

      These spatial variations in longwave radiation are well captured by the thermal camera, despite a distance of ~5 km between the camera sensor and the ski slopes on the other side of the Barèges valley. In the spectral window of the thermal camera (7.5–13 µm), the atmosphere is expected to be nearly transparent. Is it? This is something we need to investigate to be able to relate the camera images to actual surface temperatures.

      Thanks to Baptiste for spotting this and to Ghislain for the fruitful discussion!

    • sur MAJA 4.5 released, compatible with the new Sentinel-2 L1C format

      Posted: 23 November 2021, 10:48am CET by Jérôme Colin


      Following ESA’s announcement of a major upgrade of the Sentinel-2 product format for early January 2022, the development team at CS and CNES rushed to adapt MAJA to these new specifications. In particular:

      • handle the new radiometric offset allowing negative values;
      • handle the new raster format of the L1C quality masks, replacing the current GML format.

      We strongly encourage our users to upgrade to this new release, available here: télécharger Maja 4.5.1

      Note that we have moved the Maja GIPP repository to a new GitLab service: obtenir le dernier GIPP pour la version 4.5.1

      Also note that MAJA does not benefit from the ECMWF relative humidity added to the L1C auxiliary data, because our –cams option requires a full vertical profile, which is still automatically downloaded by StartMaja using the latest CAMS CSD API. However, since the relative humidity variable in the forecast does not fall within the general CAMS data licence, it is only available with a delay of 5 days.

      We wish to thank our development team at CS and CNES for releasing this version on time, despite the tight schedule!

    • sur MAJA 4.5 is now available

      Posted: 23 November 2021, 10:21am CET by Jérôme Colin


      UPDATE (Feb 14th 21): version 4.5.3 is now released and fixes a few minor issues found while testing the first time series of the new L1C format; please upgrade to Maja 4.5.3!

      Following the announcement by ESA of a major product format upgrade for Sentinel-2 by early January 2022, the development team at CS and CNES rushed to adapt MAJA to these new specifications. In particular:

      • account for the new radiometric offset allowing for negative radiometric values;
      • account for the new raster format of the L1C quality masks in place of the current GML format.

      The radiometric offset and the bias between S2A and S2B are accounted for transparently; no action is required on the users’ side.

      We strongly encourage our users to upgrade to this new release, available here: download Maja 4.5.3

      Note that we have moved the Maja GIPP (parameters and auxiliary data) repository to a new GitLab service: get the latest GIPP for version 4.5.3

      Also note that MAJA doesn’t benefit from the ECMWF relative humidity added to the auxiliary data part of the L1C, since our –cams option requires a full vertical profile, which is automatically downloaded by StartMaja using the latest CAMS CSD API. However, since the relative humidity variable in the forecast does not fall within the general CAMS data licence, it is only available with a delay of 5 days.

      We wish to thank our development team at CS and CNES for releasing this version on time, despite the tight schedule!

    • sur The plainbow over Fraser river sediment plume

      Posted: 22 November 2021, 1:16am CET by Simon Gascoin

      Sentinel-2 captured an impressive view of the Fraser river sediment plume in Georgia Strait caused by the historic flood in British Columbia (250 mm rainfall in 48 hours).

      Fraser River delta and Georgia Strait near Vancouver, British Columbia, Canada. Sentinel-2 on 2021-Nov-16 (color composite of L2A red, green and blue images).

      The image contains an amusing detail west of Vancouver International Airport. This red-green-blue object is an airplane, or « plainbow ». It is due to shifts between the multi-spectral images composing the « true color » image (in this case an RGB image composed of the red, green and blue channels of a Sentinel-2 multispectral image).

      Misregistration shifts between multi-spectral images are well explained by Sergii Skakun et al. (2018):

      The MSI is designed in such a way, that the sensor’s detectors for the different spectral bands are displaced from each other. This introduces a parallax angle between spectral bands that can result in along-track displacements of up to 17 km in the Sentinel-2A scene [2]. Corresponding corrections using a numerical terrain model are performed to remove these interband displacements, so the MSI images, acquired in different spectral bands, are co-registered at the sub-pixel level to meet the requirement of 0.3 pixels at 99.7% confidence. However, these pre-processing routines cannot fully correct displacements for high altitude objects, e.g. clouds or fast moving objects such as airplanes or cars. Therefore, these types of objects appear displaced in images for different spectral bands. The magnitude of the displacement varies among pairs of bands depending on the inter-band parallax angle. For example, at 10 m spatial resolution, the maximum displacement is observed for bands 2 (blue) and 4 (red).

      The parallax effect has been used to determine the elevation of a volcanic plume from Landsat imagery (de Michele et al. 2016). Here the object is an airplane: it is moving at high velocity and it is not at the elevation of the digital terrain model at this location (i.e. sea level), so two different effects contribute to the color shift.


      Multispectral shift in the case of an air balloon floating above the surface (not moving) and an airplane flying at the right velocity and in the right direction to compensate for this shift.

      To better understand, let’s consider an airplane flying at an altitude of 10 km. Let’s imagine it is heading in the same direction as Sentinel-2 (roughly from north-east to south-west). What should its velocity be to compensate for the time delay between two spectral bands, so that both spectral images of the airplane are well aligned?

      The problem can be solved using Thales’ theorem.


      where \(d\) and \(D\) are the distances traveled by the airplane and Sentinel-2, respectively, during the time delay \(\Delta t\) between two spectral band acquisitions:

      $$d=v \Delta t$$

      $$D=V \Delta t$$

      where \(v\) and \(V\) are the velocities of the airplane and Sentinel-2 respectively.

      Therefore we obtain:

      $$\frac{d}{D}=\frac{h}{H} \quad \Longrightarrow \quad v=V\,\frac{h}{H}$$
      Sentinel-2 flies at V = 7.44 km/s on its orbit at H = 786 km. Hence, for an airplane at h = 10 km, we get v = 340 km/h. If the airplane were flying at 340 km/h towards the southwest, it would not appear as a « plainbow » in a Sentinel-2 multispectral composite.

      The same equation can be written in vector form to account for the airplane’s flying direction (see Eq. 7 in Heiselberg 2019). To determine the velocity and the elevation of an airplane from Sentinel-2 images, the system of two equations has three unknowns (velocity, altitude and heading angle) and cannot be solved without additional information or hypotheses. Heiselberg (2019) used airplane contrails to determine the heading angle. However, the airplane elevation is very small with respect to the Sentinel-2 altitude, and most of the uncertainty actually comes from the low spatial resolution of Sentinel-2 images with respect to the airplane size. In the above case of the plainbow over Georgia Strait, I found a distance of \(d_{B2,B8A}\) = 200 m ± 10 m between the airplane locations in B2 and B8A (\(\Delta t\) = 2.055 s). Assuming that the airplane is at sea level (h = 0), we get v = 350 km/h ± 18 km/h, whereas we would get v = 352 km/h at h = 1000 m. Hence, the error on the velocity due to the uncertainty on the distance estimate \(d_{B2,B8A}\) is much larger than the uncertainty due to the airplane elevation.
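      The numbers above can be checked in a few lines (constants taken from the post):

```python
# Velocity an airplane must have, along the satellite track, for its images
# in two spectral bands to coincide (parallax compensation, v = V * h / H).
V = 7.44    # Sentinel-2 orbital velocity, km/s
H = 786.0   # Sentinel-2 orbital altitude, km
h = 10.0    # airplane altitude, km

v = V * h / H                # km/s
print(round(v * 3600))       # 341 km/h, consistent with the ~340 km/h in the text

# Conversely, apparent ground speed from the measured B2/B8A displacement,
# assuming the airplane is at sea level (h = 0):
d = 0.200    # km, displacement between the airplane locations in B2 and B8A
dt = 2.055   # s, time delay between the B2 and B8A acquisitions
print(round(d / dt * 3600))  # 350 km/h
```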

    • sur The Astrolabe glacier has finally calved!

      Posted: 17 November 2021, 11:39pm CET by Simon Gascoin

      Last January we announced the incubation of a nice iceberg at the terminus of the Astrolabe glacier… Yann Niort, who is currently working for Météo-France in Adélie Land, told us that the glacier has finally calved!

      The Astrolabe glacier about to calve a nice iceberg

      In fact it is not a single iceberg but a nice litter of several icebergs that the Astrolabe delivered in early November, once its floating tongue was freed from its casing of winter sea ice.

      The area lost by the glacier is 20 km² (7 km² less than what we had anticipated here last January).

    • sur MAGicians and deep learning wizards

      Posted: 11 November 2021, 8:34pm CET by Olivier Hagolle

      There are several MAGicians in the CESBIO laboratory, and I am one of them (with all due modesty). According to my definition, MAGicians are participants in Mission Advisory Groups, for instance for CNES or ESA satellite projects.

      These days, I participate in the MAG for Sentinel-HR at CNES, and in the one for Sentinel-2 Next Generation at ESA. I was also involved at CNES in the group that prepared the French answer to the Copernicus Long Term Scenario, as well as in the MAG for TRISHNA. Other colleagues are involved in other groups, such as CO3D, LSTM, SMOS-HR or a phase-0 for a hyperspectral mission. In the early phases of a space project, the MAG, a working group of selected experts interested in the project, helps the project manager better define the needs and requirements for the project. During the development phase, the MAGicians help define the content of the products and the methods, and set the priorities for programming the acquisitions. MAGicians are also useful when difficulties arise with the satellite that have an impact on the performances. After launch, some MAGs are kept to organize the validation and outreach of the satellite; in that phase, ESA calls them Quality Working Groups.

      During the design phase, the challenge is always to obtain the decision to commit to the project. It is often useful to try to convince the funding agencies of the interest of the mission, and for that, support from the user community and the stakeholders is necessary. It is therefore always good to explain the mission to the community, but it is on these occasions that the MAGicians might meet the dreadful deep learning wizards or, even worse, their spokespersons.

      This happened to me twice in the last year, and I still shiver when I remember these occasions:

      • when we published our post about the Sentinel-HR mission, a deep learning wizard told us that a resolution of 2 m for a mission complementing Sentinel-2 was not necessary, as they were able to bring Sentinel-2’s resolution to 1 m using a very secret deep learning super-resolution method.
      • during the analysis of the Copernicus Long Term Scenario, I pleaded to keep Sentinel-2A operational once Sentinel-2C was launched, to get a better revisit and more chances of cloud-free images every fortnight. My argument was dismissed by a high-ranking official in the French Ministry of Research, who told us that clouds on optical images were no longer an issue because it was possible to create Sentinel-2 images from Sentinel-1 images using deep learning.

      This series of images shows: 1) a Sentinel-2 image at 10 m resolution resampled with nearest neighbour (did you notice that super-resolution products are compared to nearest-neighbour interpolation?), 2) the same image provided at 1 m resolution using a deep learning super-resolution method, 3) the same zone in VHR taken from Google Earth (at a different date).

      The first objection was easy to dismiss, as the images shown by the wizard are awfully noisy (see above), and a comparison with real high-resolution data shows that their guess of the high-resolution features is often completely wrong.

      But I failed to correctly answer the second objection (DL alchemy can transform radar images into cloud-free optical images). The official, although not an expert, which is normal, is a high-ranking ministry officer who is not used to being contradicted, especially by simple researchers. I think I mumbled that deep learning techniques, although powerful, are not magical, and that SAR and optics are not bijectively related. But no one in the audience came to my defense, and I did not succeed in getting a French request to keep Sentinel-2A operational after the launch of Sentinel-2C (sorry!).

      I have nothing against deep learning, which can be an efficient machine learning technique, but, as shown by Jordi Inglada in a previous post, it is sometimes misused, or, even more often, its results can be misinterpreted by non-specialists (I am one of them).

      So, dear fellow MAGicians, how would you react in front of a wizard who claims your mission is useless because he saw a presentation that does the same thing with deep learning and existing satellites? The best answer I have so far was suggested by a genius of deep learning (you know, the type of genius who comes out of a lamp), Jordi Inglada himself. He suggests that the wizards and their relays in the commissions or ministries are trying to shift the burden of proof. As MAGicians, our job is to show that what we ask for is really needed, and if a wizard or his/her spokesperson wants to replace what we suggest with some deep learning wizardry, he/she has to prove that it works well. I should therefore have answered: « could you please give us the reference of the paper that proves that we can fully replace Sentinel-2A with the existing Sentinel-1 satellites? »

      Would you have any other magic formulae to survive an encounter with a DL wizard? Any spells, ideas or strategies? Please feel free to comment!






    • sur Mount Everest in 3D

      Posted: 28 October 2021, 4:17pm CEST by Simon Gascoin

      Because it's a stunning place that I will probably never visit, I have spent some time making a 3D visualization of Mount Everest using the Qgis2threejs plugin in QGIS. I draped the 24 Oct 2021 clear-sky Sentinel-2 image on a gap-filled version of the HMA 8 m digital elevation model (as in this post on the Khumbu glacier). Besides, it's a nice way to look at the spatial variations of the snowline elevation on the slopes of Mount Everest.

      I made this animation available on Wikimedia so feel free to download and re-use it!


    • sur Thinning of the Ossoue glacier between 2017 and 2021

      Posted: 21 October 2021, 12:41am CEST by Simon Gascoin

      Over the last four years, the Ossoue glacier has lost on average 6 metres of thickness. In places, its thinning exceeds 8 metres, i.e. 2 metres per year.

      Elevation change of the Ossoue glacier between 2017 and 2021

      This map was obtained by processing Pléiades (CNES/Airbus DS) stereoscopic image pairs acquired on 8 October 2017 and 9 October 2021. In October the glacier is free of its winter snowpack, so we indeed measure a net loss of ice. The method* is exactly the same as the one described in this article. According to field measurements by the Moraine association, the Ossoue glacier has lost nearly 35 m of thickness between 2001 and 2021.
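      The elevation-change map shown above is essentially the difference of two co-registered DEMs, summarised over the glacier outline. Here is a minimal numpy sketch on synthetic arrays (the real processing from Pléiades stereo pairs, described in the linked article, also involves DEM co-registration and outlier filtering):

```python
import numpy as np

# Synthetic stand-ins for the 2017 and 2021 DEMs (metres), on the same grid.
dem_2017 = np.full((4, 4), 3100.0)
dem_2021 = dem_2017 - np.array([[4, 5, 6, 7]] * 4, dtype=float)

# Boolean mask of the glacier outline (here: the whole toy grid).
glacier_mask = np.ones_like(dem_2017, dtype=bool)

dh = dem_2021 - dem_2017            # elevation change; negative = thinning
mean_dh = dh[glacier_mask].mean()
max_thinning = -dh[glacier_mask].min()

print(mean_dh)        # -5.5 m on average in this toy example
print(max_thinning)   # 7.0 m at most in this toy example
```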

      One of the two panchromatic images acquired by Pléiades 1A on 9 October 2021 (this one at 11:04:58 UTC)

      The Ossoue glacier was featured in the Saturday 16 October edition of La Dépêche! It will be again at the next symposium (30 Oct 2021) and in Grégoire Eloy's exhibition (Oct and Nov 2021) of the Résidence 1+2 Photographie & Sciences in Toulouse. More info at: [https:]]

      In the Pyrenees, glaciers have lost nearly a quarter of their area and 6.3 m of thickness between 2011 and 2020 (Revuelto, Vidaller et al. 2021).

      * For the purists, here is the elevation change map at a wider scale. A negative signal is also visible on the Oulettes glacier, but it is noisier because of (i) the shadow cast by the Vignemale, which degrades the quality of the 3D reconstruction, and (ii) the fast flow of this heavily crevassed, rugged glacier.

    • sur Is DS 4 EO BS?

      Posted: 11 October 2021, 5:46pm CEST by Jordi Inglada
      EO, RS, GS, DS, AI, ML, DL, BS, WTF?

      Everybody in the Geosciences (GS) and Remote Sensing (RS) community is now aware of the great advances in Data Science (DS) and Machine Learning (ML) that have taken place in the last 10 years. Although many people talk about Deep Learning (DL) or Artificial Intelligence (AI), these terms don't usually accurately describe the techniques used in our field. If we want to be pedantic, we may say that:

      • most of the neural networks used are not really deep;
      • machine learning techniques other than artificial neural networks (ANN) like for instance Random Forests, Gradient Boosting Trees and similar techniques scale better for some problems and are still the first choice for operational applications;
      • most of the approaches in AI are not used anymore (search algorithms, constraint satisfaction problems, logical agents, etc.), so using the term AI is not appropriate;
      • other techniques are on the rise to cope with the limitations of ANNs (evolutionary approaches to build neural architectures, probabilistic programming to model uncertainty), so speaking about DL is too narrow.

      I like to use the term Data Science, because it encompasses not only the techniques used, but also how they are deployed and, most importantly, the domain problem that one wants to solve. In figure 1 we see that DS is at the intersection of the domain expertise (i.e. hydrology, agronomy, geology, ecology), mathematical modeling (statistics, optimization) and computer science (automation, scalability).


      Figure 1: The Venn diagram of Data Science (from [] )

      In Earth Observation (EO) problems and more generally in the Geosciences, this point of view is very useful.

      It is interesting to understand that classical science falls at the intersection of the domain expertise and the mathematical modeling, where, for instance, a simple regression can be used to calibrate the parameters of a model. The data processing done by data centers and ground segments is at the intersection of the domain expertise and computer science (the scientist writes the ATBD1, which is then coded as processing chains that operate on massive data). Machine Learning can be seen as scaling the mathematical models through massive computation exploiting large volumes of data.

      So let's try to understand if DS for EO is really BS (Latin for Bulbus Stercum2).

      Good buzz, bad buzz

      The sad fact is that there is a lack of nuance in the discussion about these topics: either DS is great, or it is BS. The cynic in me would say that this is because nowadays too many discussions take place on Twitter (where flame wars are legion) or LinkedIn (where everybody loves everybody's work), and also because researchers are forced to act like startup founders: work on an elevator pitch for a funding agency that has no technical knowledge and thinks ROI3.

      But beyond Twitter and desperate research grant proposals, there are publications in peer reviewed journals that may give the impression that we are missing the point in terms of the problems we are trying to solve with these techniques.

      I will try to illustrate this with 3 examples. My goal here is not to make fun of these specific cases. There is surely serious work behind these examples, but the fact is that, as presented, they may leave a suspicious impression on an attentive reader.

      • Cropland parcel delineation

        Automatically delineating agricultural parcels is a difficult task. In a prestigious conference publication [1], the authors claim that their ML model

        […] automatically generates closed geometries without any heavy post-processing. When tested with satellite imagery from Denmark, this tailored model correctly predicts field boundaries with an overall accuracy of 0.79. Besides, it demonstrates a robust knowledge generalization with positive results over different geographies, as it gets an overall accuracy of 0.71 when used over areas in France.

        If we don't pay attention to the numbers, this statement can be read as if the problem were solved and an operational product available. Looking at the numbers, we see that there is between 20% and 30% error. For a product meant to replace the Land Parcel Information System (LPIS), whose annual changes are less than 5%, we are far from the goal. Furthermore, the visual results in figure 2 show that the cartographic quality of the objects is low: the fields are blobby, many boundaries are missed, etc.


        Figure 2: Example of cropland parcel delineation (from [1])

        I chose this example because we have tried to reproduce these results at CESBIO and, despite having a very skilled intern and, of course, great supervision, we had a hard time. The feedback on this work was reported in this post.

        Maybe the mistake here was using approaches suited to multimedia data, where the expected level of accuracy is far lower than for cartographic products. Also, sometimes we are hammers looking for a nail: we forget (or lack the domain expertise to know) that some problems may be better solved with other data sources than satellite imagery alone. In countries where the parcels are beautiful polygons, starting from scratch is a pity, because the most recent LPIS is usually a very good first guess.

      • SAR to optical image translation

        Using the all-weather capabilities of Synthetic Aperture Radar to fill the gaps due to clouds, or even to completely replace optical imagery, is a goal that has been pursued for a long time. In recent years, many works using the latest DL techniques have been published. They reuse approaches developed for image synthesis in domains like multimedia or video games.

        If we take one recent example [2], we can read:

        The powerful performance of Generative Adversarial Networks (GANs) in image-to-image
        translation has been well demonstrated in recent years.
        The superiority of our algorithm is also demonstrated by being applied to different networks.

        However, when we have a look at the images (see figure 3) we see that reflectance levels can be very different between the real optical image and the translated one.


        Figure 3: Illustration of SAR to optical image translation (from [2])

        It is of course impressive that the algorithm is able to generate a plausible image: if we did not have the real optical image, we could believe the result is a good one. The problem is that, in Remote Sensing, we don't deal with pictures, we deal with physical measurements that have a physical meaning. For most applications, we don't want plausible images (these generative methods produce data that follow the same statistical distribution as the training data): we need the most likely image, together with uncertainty estimates.
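        To make the "physical measure" point concrete, here is a minimal sketch of the check a remote sensing user would run on such a translated product: a per-pixel comparison of reflectances, an error that a purely visual or distributional "realism" criterion does not capture (toy values, not the paper's data):

```python
import math

# Toy surface reflectances (unitless, 0-1); NOT data from the paper.
real       = [0.05, 0.12, 0.30, 0.45, 0.22, 0.08]
translated = [0.09, 0.18, 0.24, 0.50, 0.30, 0.05]

# A plausible-looking image can still be physically off: per-pixel
# RMSE and bias quantify the reflectance error directly.
n = len(real)
rmse = math.sqrt(sum((r - t) ** 2 for r, t in zip(real, translated)) / n)
bias = sum(t - r for r, t in zip(real, translated)) / n
print(f"RMSE = {rmse:.3f}, bias = {bias:+.3f}")
```

        On real products, the same comparison would be done band by band, stratified by land cover, together with an uncertainty estimate per pixel.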

      • Image super-resolution

        Another nice obsession of image processing folks is super-resolution: transforming the images acquired by a satellite into a version where the pixels are smaller and fine details are visible. For a long time, super-resolution approaches were based on sound mathematical tools from the signal processing toolbox. A nice thing about our signal processing expert friends is that they had objective quality measures, and they also had the algorithms assessed by expert photo-interpreters.

        In recent times, the trend has been towards learning algorithms: take pairs of images of the same area, one at high resolution and the other at low resolution, and train a model to perform the super-resolution. Publications on the topic abound, and there are even companies aiming to provide this kind of data. For instance, in a recent LinkedIn post by SkyCues one could read:

        THE ONLY SOURCE FOR HIGH-RESOLUTION SATELLITE IMAGERY covering the entire world, updating every few days based on a consolidated source of coherent, dependable and secure earth observation data Now at 1m and improved color correctness
        Dear EO Colleagues: we are Swiss-based EO innovation, where we managed to produce decent quality 1m from Sentinel-2 (see image). This employ very deep Machine Learning trained globally.

        Exploring the publicly available data, one can evaluate the quality of the results. Figure 4 shows an illustration of artifacts present in these super-resolved images.


        Figure 4: Illustration of Sentinel-2 super-resolution (from [https:]] )

        There may be some applications for which this kind of data is useful, but for many others, this quality is not acceptable and is certainly equivalent to (or worse than) the data already available through some small satellites with very low image quality standards.
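        The learning recipe described above relies on (LR, HR) training pairs; a common way to build them is to degrade the HR image. A minimal sketch, using plain block-averaging (one of several possible degradation models; real pipelines would also simulate the sensor's MTF and noise):

```python
def block_average(img, factor):
    """Degrade a 2-D image by averaging non-overlapping factor x factor blocks.

    This is a simplistic degradation model used to build (LR, HR)
    training pairs; operational pipelines would also model the MTF.
    """
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

hr = [[0, 2, 4, 6],
      [2, 4, 6, 8],
      [4, 6, 8, 10],
      [6, 8, 10, 12]]
lr = block_average(hr, 2)
print(lr)  # [[2.0, 6.0], [6.0, 10.0]]
```

        The choice of degradation model matters: a network trained on block-averaged pairs learns to invert block-averaging, not necessarily the actual sensor acquisition process.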

      • Many other examples

        I will stress again that the choice of examples above is anecdotal. Many more could be cited. I have seen papers or attended presentations about very puzzling things, such as (just to name a few):

        • trying to detect clouds on Sentinel-2 images by using the visible bands only (no cirrus band, no multi-temporal information);
        • training a DL algorithm to reproduce the results of a classical algorithm «because we did not have real reference data»;
        • doing data augmentation4 on VHR SAR images by applying rotations (the SAR acquisition geometry effects like layover, foreshortening or shadowing do not make any sense after rotation);

        Also, with the goal of building corpora on which algorithms could be benchmarked, many data sets have been proposed to the RS and ML communities. Most of them are not representative of real problems with the specificities of remote sensing data. Some contain only the 3 visible bands of a satellite that has many more; others contain no temporal information although they are supposed to be useful for land cover classification.

        Others are not well designed from the machine learning point of view. For instance, the data set for the TiSeLaC contest had pixels in the training set that were neighbors of pixels in the test set. It is not surprising that the algorithm that won the competition only used the pixel coordinates and discarded the spectro-temporal information!
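        A leak like the TiSeLaC one is avoided by splitting spatially rather than per pixel: whole spatial blocks, not individual pixels, go to one side of the split. A minimal sketch (toy coordinates; the block size and assignment rule are arbitrary illustrative choices):

```python
def blocked_split(samples, block_size=100, modulus=5):
    """Split samples into train/test by spatial block, not by pixel.

    `samples` is a list of (x, y, label) tuples. Every pixel inside a
    given block_size x block_size tile lands on the same side of the
    split, so no test pixel is an immediate neighbor of a training
    pixel. The block-to-side rule (sum of block indices modulo
    `modulus`) is an arbitrary illustrative choice.
    """
    train, test = [], []
    for x, y, label in samples:
        bx, by = x // block_size, y // block_size
        side = test if (bx + by) % modulus == 0 else train
        side.append((x, y, label))
    return train, test

# Two neighboring pixels always end up on the same side of the split:
samples = [(10, 10, "wheat"), (11, 10, "wheat"), (990, 990, "forest")]
train, test = blocked_split(samples)
print(len(train), len(test))  # the two neighbors go together
```

        With a random per-pixel split, the winning strategy of memorizing coordinates would score highly; with a blocked split it cannot.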

      Why all this BS and how to avoid it?

      Again, remember that I am using the Latin word only to troll, among other things because I guess one could find papers with similar issues with my name on them.

      • Why bother about all this?

        The issue here is not fraud or dishonesty from those who write or publish the things I highlighted above. After all, most of these publications go through a peer review process … The real problem here comes from those who extrapolate the consequences of these results.

        It would be a real problem if someone decided to reduce the revisit of an optical constellation «because we can get the same data from SAR images, anyway».

        It is also difficult to explain to users that a country-scale land cover map can't have 99% accuracy like the results shown in the last CNN-GAN-Transformer paper they have seen on Twitter.

        All these issues need detailed analysis, good validation and, most of all, real understanding of how things work in terms of physics, math and software. And here is the bad news: it takes time, hard work and collaboration.

      • A unicorn is a team

        If we go back to the Venn diagram of DS, we can identify some danger zones. Figure 5 illustrates those (although it should be adapted to our field, but I am too lazy).


        Figure 5: Danger zones in DS (from [https:]] )

        • Domain knowledge and automation without mathematical rigor lead to unproven or wrong results: for example, a regression model assumes that the soil moisture data are Gaussian when they are not.
        • Domain knowledge and mathematical modeling without the automation needed to scale to large data can lead to biased results. It seems there was once a scientist who spent their whole career calibrating the same model for every new study site.
        • And most problematic, ML (math + CS) without the domain knowledge can lead to daft approaches that do not solve any problem. This is when we take the latest DL model trained to detect cats and use it for leaf area index estimation, because the tensors have the same dimensions and the code was available on Github.
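        The first of these danger zones is cheap to guard against: before fitting a model that assumes Gaussian data, a quick skewness check already raises the flag (toy soil-moisture-like values here; real work would use a proper normality test):

```python
def skewness(xs):
    """Sample skewness; near 0 for roughly symmetric (e.g. Gaussian) data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Toy soil-moisture-like values (m3/m3): bounded below, long right tail.
soil_moisture = [0.05, 0.06, 0.07, 0.08, 0.08, 0.10, 0.12, 0.18, 0.30, 0.45]
s = skewness(soil_moisture)
if abs(s) > 1:
    print(f"skewness {s:.2f}: a Gaussian assumption is dubious here")
```

        A transform (e.g. a logarithm) or a different error model would be in order before trusting the regression.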

        A way to avoid this is to make sure the work is supported by the 3 pillars of DS. If you are reading this, you probably lie somewhere in the Venn diagram. Being at the center is rare: those are the unicorns [3]. However, a team can have a center of mass that gravitates towards the intersection of the 3 pillars. At CESBIO, we are lucky to have that. When my code is crap and won't scale, Julien tells me so and helps me improve it. Mathieu is there to remind me that I need to shuffle the training data correctly for stochastic gradient descent to be efficient. Olivier is happy to explain to me for the nth time that, yes, reflectances can be greater than 1. There you go: automation, math and domain knowledge.

        This 3-sided reality check can sometimes be frustrating in the publish-or-perish world we live in. We can also give the impression of being Dr. Know-it-all when reviewing papers (or writing this kind of post!).

      • Where best to leverage DL?

        We can conclude that DS is not BS, but the buzz around ML in general, and DL in particular, and their use without domain knowledge, may have created a negative impression.

        If we abandon the "DL all the things" motto and are pragmatic, we can say that DL approaches are just universal function approximators that can be calibrated by optimizing a cost function. If the problem at hand can be posed as a cost function, either in terms of fit to a reference data set (supervised learning) or in terms of the properties of the function output (unsupervised generative models), chances are that DL can be applied.
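        To make the "cost function" framing concrete, here is a deliberately tiny sketch of the supervised case: a one-parameter "universal approximator" calibrated by gradient descent on a mean squared error cost (toy data; a DL model just has many more parameters and a fancier optimizer):

```python
# Supervised learning reduced to its core: pick the parameter a of a
# model f(x) = a * x that minimizes the mean squared error against a
# reference data set.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x (toy values)

a = 0.0
lr = 0.01
for _ in range(500):
    # gradient of the cost J(a) = mean((a*x - y)^2) with respect to a
    grad = sum(2 * (a * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad
print(f"fitted a = {a:.3f}")   # converges to the least-squares solution
```

        Everything else (architectures, regularization, stochastic optimizers) is machinery on top of this loop; the part that requires domain knowledge is writing down the right cost.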

        Without wanting to give an agenda of AI research for the Geosciences (others have already done it [4]), we can identify the kinds of problems for which these techniques could be useful.

        There are of course the classical problems for which ML has been used for decades now: land cover classification and biophysical parameter estimation. But one may wonder whether replacing RF with DL to gain maybe 1% in accuracy, while increasing the computation cost, is worth it.

        For these application problems, it may be more interesting to guide the learning approaches by the physics of the problem.

        Complex physical models can't be run over large geographical areas because of computational costs. The alternative has usually been to use simpler, less accurate models when going from a small study area to a large region. Since neural networks are universal approximators, they can be trained to emulate the complex physical model and run at a fraction of the computational cost.
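        As a stand-in illustration of this emulation idea (the "physical model" is a toy function, and a precomputed interpolation table plays the role a neural network would have in the real use case): sample the expensive model once offline, then answer all later queries with the cheap surrogate.

```python
import math

def expensive_model(x):
    """Stand-in for a costly physical model (e.g. radiative transfer)."""
    return math.exp(-x) * math.sin(3 * x)

# Offline: run the expensive model once on a coarse grid of inputs.
grid = [i / 20 for i in range(21)]          # 21 model runs on [0, 1]
table = [expensive_model(x) for x in grid]

def surrogate(x):
    """Cheap emulator: linear interpolation in the precomputed table.

    'Fit once, evaluate fast' is the whole point; a trained network
    does the same job for high-dimensional inputs.
    """
    i = min(int(x * 20), 19)
    t = x * 20 - i
    return (1 - t) * table[i] + t * table[i + 1]

# The surrogate stays close to the model between the sampled points:
err = max(abs(surrogate(x / 200) - expensive_model(x / 200)) for x in range(201))
print(f"max abs error on [0, 1]: {err:.4f}")
```

        The real difficulty, which the toy hides, is sampling the input space of a model with tens of parameters well enough for the emulator to be trustworthy everywhere it is used.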

        For some of the applications illustrated above (super-resolution, multi-sensor fusion), DL can be a pertinent solution, but the cost functions and network architectures can't just be copied from other domains. The domain knowledge of the RS expert (the sensor characteristics for super-resolution, the physics of the signal and of the observed processes for multi-sensor fusion) can be translated into cost functions to optimize or latent variables of the model.

        But maybe we also have to question when DL is not the appropriate solution. One criticism that can be made of many recent papers in peer reviewed journals is that they fail to check the performance of simple, straightforward solutions. Many DL papers benchmark DL algorithms against one another, but forget to compare with a simple regression or a Random Forest. Simplicity of the models (for explainability and for energy consumption) should always be a criterion when proposing a new algorithm. Let's not forget that in the Geosciences we want to produce information that helps to advance knowledge. Using the coolest tool may not be the real goal.
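        The point about baselines can be made mechanical: report a trivial baseline next to any new model. A minimal sketch (toy numbers, hypothetical predictions; here the fancy model actually loses to the baseline, which is exactly the situation a paper should detect and report):

```python
def mse(pred, truth):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

# Toy regression target (e.g. a biophysical variable on a test set).
truth = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0]

# Baseline: predict the training-set mean everywhere.
mean_pred = [3.0] * len(truth)

# Hypothetical output of an expensive deep model.
deep_pred = [3.0, 3.1, 2.8, 3.3, 2.9, 3.1]

print(f"mean baseline MSE: {mse(mean_pred, truth):.4f}")
print(f"deep model MSE:    {mse(deep_pred, truth):.4f}")
# If the deep model does not clearly beat the trivial predictor,
# its extra complexity is not buying anything.
```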


      [1] L. Meyer, F. Lemarchand, P. Sidiropoulos, A Deep Learning Architecture for Batch-Mode Fully Automated Field Boundary Detection, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. XLIII-B3-2020 (2020) 1009-1016. [https:]] .


      [2] J. Zhang, J. Zhou, X. Lu, Feature-Guided Sar-To-Optical Image Translation, IEEE Access. 8 (2020) 70925-70937. [https:]] .


      [3] Classifying, and Certifying Business Data Scientists, Harvard Data Science Review. (2020). [https:]] .


      [4] D. Tuia, R. Roscher, J.D. Wegner, et al., Toward a Collective Agenda on AI for Earth Science Data Analysis, IEEE Geoscience and Remote Sensing Magazine (2021). [] .

      Footnotes:


      Algorithm Theoretical Basis Document




      Return On Investment, not Region Of Interest on your study area!


      A technique used to generate lots of training data from a smaller data set

    • sur TRISHNA Workshop Toulouse 2022

      Posted: 8 October 2021, 10:30am CEST by Philippe Gamet






      Save the date: 1st TRISHNA Workshop in March 2022 in Toulouse


      Toulouse, France, 22-24 March 2022




      Important note: this page is for information only. Registration and call for abstracts will be available soon on a dedicated website.



      Accurately monitoring the water cycle at the Earth's surface is becoming extremely important in the context of climate change and population growth. It also provides valuable information for a number of practical applications: agriculture, soil and water quality assessment, irrigation and water resource management, etc. This requires surface temperature measurements at the local scale. Such is the goal of the Indian-French high spatio-temporal resolution TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment), led by ISRO (the Indian space agency) and CNES (the French space agency). It will be launched in late 2024 or early 2025.


      The surface temperature and its dynamics are precise indicators of the evaporation of water from soils, transpiration of plants and of the local climate. TRISHNA and its frequent high-resolution measurements raise major scientific, economic and societal issues through the 6 major themes that the mission addresses from the angle of research and development of applications: ecosystem stress and water use; coastal and inland waters; monitoring of the urban climate; cryosphere; solid Earth; atmosphere.


      Workshop Objectives

      The objective is to gather all people involved and interested in the science and applications to which TRISHNA will contribute: contributors to the design of the mission or the elaboration of the products, and potential users. This first international workshop shall be a key milestone in converging on the TRISHNA science plan, including CAL/VAL activities, scientific algorithms for data processing and product definition, with enhanced coordination between all partners.

      Discussions and exchanges will include:

      TRISHNA in the international context

      All discussions within the workshop shall be considered in synergy with the future operational missions with high resolution and high revisit thermal infrared capability (TRISHNA, Copernicus' LSTM, NASA's SBG), and based on the experience gathered with the pathfinder missions (LANDSAT, ASTER, ECOSTRESS).

      Elaboration of TRISHNA products and Calibration/Validation

      • The definition of TRISHNA scientific products and variables
      • Requirements and constraints for the distribution of the products to the users
      • The elaboration of the products, through the writing of the Algorithm Theoretical Basis Documents
      • Calibration and validation activities

      TRISHNA scientific themes and associated applications

      • Ecosystem stress and water use: advances in the assimilation of land surface temperature or evapotranspiration in hydrological models
      • Coastal and inland waters
      • Urban climate modeling
      • Cryosphere
      • Solid earth
      • Atmosphere


      Workshop Format

      The workshop will last for 3 days. It will include a series of plenary presentations.

      Furthermore, specific themes and issues will be discussed in follow-up breakout sessions. The outcomes of the breakout sessions will be presented in plenary by the respective moderators followed by open discussion.


      Workshop Outcomes

      Following the workshop, the proceedings (presentations, discussion summaries and conclusions) will be prepared and made available to participants and other interested parties on the TRISHNA website.


      Working language

      The working language of the workshop will be English.


      Date and Venue

      The workshop will be held 22-24 March 2022.

      The meeting venue is:

      Hôtel Mercure Toulouse Centre Compans

      Boulevard Lascrosses - 8 Esplanade Compans Caffarelli

      31000 Toulouse - FRANCE


      Registration, Deadlines and Accommodation

      The registration fee for workshop participation is TBD. On-line registration is required. Registration will open soon. Registration can be done at the following website: TBD

      The workshop website will also be used to post relevant documents for the workshop, as well as practical information such as hotel lists and directions to the venue.


      A block of rooms will be secured at special rate at the conference hotel.


      Workshop Organization and contact information

      The workshop is organised by CNES, and hosted by the Hôtel Mercure Toulouse Centre Compans.

      CESBIO will provide the Scientific Secretariat for the workshop.


      Workshop Organization Committee

       Jean-Louis Roujean

      Bimal K. Bhattacharya

      Philippe Gamet

      Philippe Maisongrande

      Gilles Boulet

      Mark Irvine




    • sur Turbulences ahead for Sentinel-2 users

      Posted: 28 September 2021, 9:47am CEST by Olivier Hagolle

      *Update* on 27/10/2021: the deadline has been extended to early January 2022!

      *Update* on 07/10/2021: the deadline has been extended to the 15th of November!! And sample products are now available. We're relieved, thank you ESA!

      *Update* on 29/09/2021: ESA just detailed the changes here, and they will become operational on the 26th of October!!


      You might have noticed this sentence on the most recent Sentinel-2 status report:


      • Upgrade of the processing baseline for both Level-1C and Level-2A featuring several improvements in the algorithms and changing products format. The change is foreseen in September/October 2021 and further news will follow here.

      From what we have heard at CNES, the changes will include:

      • A change in the coding of the reflectances: an offset will be added to keep zero as the no-data value, while still allowing zero or negative reflectances to be coded in the products.
      • The vector masks should be replaced by raster masks, which is probably not a simple change that you can implement with two lines of code.
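      To illustrate the offset idea in the first bullet (the exact values were not public when this was written, so the offset and quantification below are assumptions chosen for illustration, with the usual Sentinel-2 1/10000 scaling):

```python
import math

NO_DATA = 0        # digital number reserved for no-data
OFFSET = 1000      # hypothetical radiometric add-offset
QUANT = 10000      # usual Sentinel-2 quantification value

def dn_to_reflectance(dn):
    """Decode a digital number into reflectance with an add-offset.

    DN 0 stays the no-data value; the offset shifts the coding range
    so that zero or slightly negative reflectances map to positive DNs.
    """
    if dn == NO_DATA:
        return math.nan
    return (dn - OFFSET) / QUANT

print(dn_to_reflectance(1000))   # 0.0   -> a true zero reflectance
print(dn_to_reflectance(900))    # -0.01 -> a slightly negative reflectance
print(dn_to_reflectance(0))      # nan   -> no-data
```

      Whatever values ESA finally retains, every downstream processor will have to apply this kind of decoding, and to branch on the processing baseline of each product.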

      The change should be put in production before the end of October, yet we do not have any accurate description so far. Accounting for these modifications in our and your processors and products at such short notice will be difficult. Within Theia, and for the MAJA team, being on time for these changes will be a hard challenge. Turbulences ahead!


    • sur Snowmelt and snow sublimation in the Indus basin

      Posted: 27 September 2021, 12:09pm CEST by Simon Gascoin

      The Indus basin has one of the largest irrigation systems in the world. Available water resources are almost entirely abstracted, mostly for crop irrigation in Pakistan.

      The Indus basin is also considered one of the large Asian basins with the highest dependence on snowmelt runoff.

      VIIRS false color image of the Indus basin (irrigated areas in green, snow cover in blue)

      The contribution of snow and ice melt to runoff in the Indus basin was already well studied [1,2,3], but a pending question remained: how much water is lost by sublimation of the snow cover in the high mountain regions of the basin?

      I used the recent High Mountain Asia snow reanalysis to re-evaluate snowmelt and estimate snow sublimation at the scale of the Indus basin.

      Snowmelt and sublimation in the Indus basin. Time series of annual snowmelt and sublimation (a) and of the mean daily snowmelt and sublimation (b).

      Over 2000–2016, snowmelt was about 25–30% of basin-average annual precipitation. About 11% of the snowfall was "lost" by sublimation, but with a large spatial variability across the basin. Sublimation fraction can be much higher in the arid regions in Ladakh and Western Tibet.


      For this study I challenged myself to follow best practices in open science. I only used open data and the code to reproduce the study is available online. It was fun!

      Paper: [https:]]

      Dataset: [https:]]

      Code: [https:]]

      Many thanks to the authors of the High Mountain Asia snow reanalysis for sharing this amazing dataset!

      Top picture: Mountains in Swat Valley Pakistan (by Designer429, CC BY-SA 3.0, via Wikimedia Commons)


    • sur Mask R-CNN for parcel delineation: lessons learned

      Posted: 24 September 2021, 4:33pm CEST by Alexis Faure


      Do state-of-the-art Deep Learning techniques allow the individual delineation of each crop parcel, as this recent article suggests? This is what we set out to determine during an internship at CESBIO, by assessing the Mask R-CNN architecture for this task. Here are the main lessons learned.

      Mask R-CNN in theory

      In short, Mask R-CNN is an architecture that works in three main parts. First, a convolutional network called the backbone extracts features from the input image. From these features, a second part (the Region Proposal Network, or RPN) proposes and refines a number of regions of interest (as rectangular bounding boxes) likely to contain a parcel. Finally, the last part takes the best proposals, refines them again, and produces a segmentation mask for each of them.

      Left: the proposals kept by the RPN; right: the final Mask R-CNN detections with their segmentation masks.

      A diagram of the network is shown below; it comes from this short article, which can be a good entry point if you want to learn more. Note that, in total, this network has about a hundred convolution layers, which makes it harder to work with, as the results are more difficult to interpret.

      Mask R-CNN in practice

      To train this network, we used data from the Registre Parcellaire Graphique (RPG), released every year by IGN. Since this database is incomplete, we added a complement produced by the Observatoire du Développement Rural at INRAE. To simplify the problem as much as possible, we defined a single class, leaving aside the crop types provided in these databases.

      As input data, we used level 2A Sentinel-2 images provided by Theia, and more precisely the 4 spectral bands at 10 m resolution (red, green, blue and near infrared). We selected 7 tiles over metropolitan France, and chose 4 different dates in 2018 for each of them, during the growing seasons. We also have super-resolved images (at 5 m instead of 10 m), produced by previous work at CESBIO (using a Cascading Residual Network). These images are sharper than the 10 m Sentinel-2 bands, so we also expected more accurate contours from our models.

      Selected tiles (31UDR and 31UEP are our test tiles, the others our training/validation tiles)

      Since Mask R-CNN had already been tried in the literature for this same parcel instance-segmentation task, we attempted to reproduce the work of these authors. Although that publication uses a Tensorflow-based implementation, we first tried to reproduce the results with the implementation provided by Pytorch's Torchvision module. We never managed to make this implementation converge, and noticed many differences between the two implementations along the way. The pre-trained models are trained on the Image-Net database for Pytorch and on COCO (denser in objects) for Tensorflow; the pre-training only covers the backbone for Pytorch but the whole network for Tensorflow; and the implementations themselves differ in the order of some layers and in the choice of the features used by the last stages of the network. We tried to isolate the elements that allowed the Tensorflow network to converge, without success. So much for reproducible research!


      After setting the Pytorch implementation aside, we evaluated several training scenarios, which can be split into three groups, described in the table below. The first group contains the first three trainings, each of which uses a single one of the four dates of our training tiles, to test the ability to generalize from one date to another. The first three channels are always the RGB bands, to which we add either the NIR band or the NDVI. The next three scenarios try a multi-temporal approach: either we simply use all the dates for each tile (yielding a dataset four times larger), or we extract the NDVI of each date before stacking them, thus using a multi-temporal stack. Finally, the last 3 trainings use super-resolved images.

      Name Dates Bands Resolution
      T09NIR September RGB - NIR 10 m
      T06NIR June RGB - NIR 10 m
      T09NDVI September RGB - NDVI 10 m
      TADNIR All RGB - NIR 10 m
      TADNDVI All RGB - NDVI 10 m
      TMTNDVI All (stacked) NDVI (dates 1 to 4) 10 m
      T09NIRSR September RGB - NIR 5 m
      T09NDVISR September RGB - NDVI 5 m
      T09NDVISRSA September RGB - NDVI 5 m

      These trainings yield some qualitatively interesting results (if not quantitatively good ones). During inference, we produce a set of polygons, each with a confidence score. We can set a first threshold on this confidence score, below which predictions are discarded.

      To then match them with our set of target polygons, we used a geometric criterion, illustrated opposite. This criterion estimates whether a target (green rectangle) and a prediction (red ellipse) have a large enough intersection (in yellow) for their match to be considered valid. By computing two ratios (yellow over green and yellow over red) and taking the minimum, we make sure to be as restrictive as possible. The resulting value is called RC, and we can also set a threshold on it to be more or less strict about the quality of our predictions. Once the predictions have been matched with the targets, we can compute the classical detection metrics: precision, recall and F1-score. Precision is the fraction of our predictions that are actual targets, recall is the fraction of targets that are detected, and the F1-score is a trade-off between the two.
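      The RC matching criterion (minimum of the two intersection-over-area ratios) can be sketched as follows, with axis-aligned boxes standing in for the real parcel polygons:

```python
def rc(box_a, box_b):
    """RC criterion: min of the two intersection-over-area ratios.

    Boxes are (xmin, ymin, xmax, ymax); real parcels are polygons,
    so this axis-aligned version is only an illustration.
    """
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    if inter == 0.0:
        return 0.0
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    # Taking the min of both ratios is the most restrictive choice:
    # the prediction must both cover the target and be covered by it.
    return min(inter / area_a, inter / area_b)

target = (0, 0, 10, 10)        # area 100
prediction = (0, 0, 10, 5)     # area 50, fully inside the target
print(rc(target, prediction))  # intersection 50 -> min(50/100, 50/50) = 0.5
```

      Note that a prediction fully contained in a much larger target (or vice versa) gets a low RC, which is exactly why this criterion cannot credit fragmented or agglomerated detections.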

      Over all our use cases, the T09NDVISR and TMTNDVI trainings gave the best results, as shown in the table below. We used fairly restrictive thresholds there, namely 0.8 for both values (confidence and RC).

      Test Precision Recall F1-score
      T06NIR (on June) 35.34 25.61 29.7
      T06NIR (on September) 31.51 21.45 25.52
      T09NIR (on June) 30.08 20.8 24.6
      T09NIR (on September) 30.31 21.79 25.35
      T09NDVI (on June) 31.03 21.43 25.35
      T09NDVI (on September) 29.79 21.21 24.78
      TADNIR 34.51 25.11 29.07
      TADNDVI 39.35 26.88 31.94
      TMTNDVI 42.56 30.28 35.39
      T09NIRSR 34.24 38.26 36.13
      T09NDVISR 36.92 39.35 38.09
      T09NDVISRSA 34.38 37.78 36.0

      To compare our performance with that of the authors of the publication mentioned above, we chose an RC threshold of 0.5 and a confidence threshold of 0.7. The results are shown in the table below.

      Test Precision Recall F1-score
      T09NDVISR 55.79 59.41 57.30
      TMTNDVI 75.73 55.86 64.13
      OSO (on 31UDR) 45.89 22.54 30.23
      Authors 68.7 48.5 56.2

      Our multi-temporal NDVI stack therefore seems to provide relevant information, and manages to do better than the results reported by the authors (on their own test tiles). Super-resolution is a bit below in terms of detection metrics, and in the end very close to the authors' results (we gain on recall but lose on precision). However, the correct predictions it provides seem to be of better quality. Indeed, the predicted contours (an excerpt of which is shown below, at two different scales) fit the reference parcels more closely: training on super-resolved images thus does seem to yield more accurate contours.

      Target parcels (in green) and predictions of the T09NDVISR model (in blue)

      Finally, we also ran the vectorized OSO product through our matching procedure, to justify the relevance of an instance-segmentation approach. We did this test on tile 31UDR, and unsurprisingly the recall is very low: since the OSO product notably merges adjacent parcels of the same crop, a large number of targets go undetected. This justifies the use of an instance-segmentation approach. Even so, the results obtained at this stage are far from usable by an end user. Note, however, that because of our criterion for matching predictions to targets, we cannot detect fragmentation (detecting one target as several predictions) or agglomeration (detecting several targets with a single prediction). Yet, on the image, some fragmentations or agglomerations could seem legitimate, so we necessarily underestimate our results.

      This work was carried out as part of an internship at CESBIO funded by CNES.
      Thanks to Julien, Jordi and Olivier for the invaluable help they gave me during this internship.

    • sur Feedback on Mask R-CNN for croplands delineation

      Posted: 24 September 2021, 4:33pm CEST by Alexis Faure


      Do state-of-the-art deep learning techniques allow for individual delineation of each cropland, as suggested by this recent article? In the context of an internship at CESBIO, we tried to evaluate the performance of the Mask R-CNN architecture for this task. In this post, we summarize what we learned.

      Mask R-CNN principles

      To sum up, Mask R-CNN is an architecture made of three main parts. First, there is a convolutional network called the backbone, which produces features from an input image. From these features, a second part (called RPN, for Region Proposal Network) proposes and refines a certain number of regions of interest (as rectangular bounding boxes) that are likely to contain a single cropland. Finally, the last part extracts the best proposals, refines them once again, and produces a segmentation mask for each of them.

      Left: Proposals kept by RPN; Right: Final detections made by Mask R-CNN along with their segmentation masks.

      The figure below illustrates this network. It is extracted from this short external article, where you can learn more about Mask R-CNN. Another noticeable fact about this network is that it contains about a hundred convolution layers, which makes it complex to manipulate: it is difficult to explain results and trends.

      Mask R-CNN in practice

      In order to train this network, we used the RPG database (Registre Parcellaire Graphique), distributed once a year by IGN. Since this database is not complete, we added a complement provided by the ODR (Observatoire du Développement Rural, an INRAE lab). We simplified our approach by defining only one class, removing the information about crop types provided in the aforementioned databases.

      As input data, we used level 2A Sentinel-2 images provided by Theia. More precisely, we used the 4 spectral bands at 10 m resolution (red, green, blue and near infrared). We selected 7 MGRS tiles over French territory. For each of them, we chose 4 different dates within growing periods. We also have super-resolution images (at 5 m rather than 10 m), produced thanks to a previous work at CESBIO (using a Cascading Residual Network). These images are sharper than the 10 m Sentinel-2 bands, so we expected better accuracy of the predicted contours when training models on these super-resolution images.
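      Several of the configurations below rely on the NDVI computed from these bands. As a reminder, it can be derived as follows (a minimal sketch; band values are assumed to be reflectances in [0, 1]):

```python
import numpy as np

# Illustrative sketch: NDVI from the 10 m red (B4) and near-infrared (B8) bands.
def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with a small epsilon to avoid 0/0."""
    return (nir - red) / (nir + red + 1e-12)

red = np.array([[0.05, 0.20]])
nir = np.array([[0.45, 0.25]])
print(ndvi(red, nir))  # dense vegetation ~0.8, sparse vegetation ~0.11
```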

      Selected tiles (31UDR and 31UEP are our test tiles, others are for training and validation)

      Mask R-CNN has been tested in the literature for this cropland instance segmentation task, so we tried to reproduce that work. Although the paper uses a TensorFlow-based implementation, we first tried to reproduce its results using the one contained in the Torchvision module of PyTorch. Yet we never managed to make this implementation converge, and we noticed that there are actually many differences between the two implementations. In particular, the provided pretrained models were trained on ImageNet for PyTorch and on COCO (which has a greater density of objects) for TensorFlow. Moreover, pretraining involves only the backbone for PyTorch, but the whole network for TensorFlow. Finally, the order of some layers and the choice of the features used during the last step of the network differ. We tried to identify the main elements that allow the TensorFlow network to converge, without success.


      Using the TensorFlow implementation, we evaluated several training configurations, which can be separated into three groups, described in the table below. The first group contains the first three cases. Each of them uses only one date from our training tiles, in order to test the ability of the model to generalise to data it has never seen. The first three channels are always the RGB bands; then NIR or NDVI is added as a fourth channel. The next three cases use multi-temporal input data: either we use all the dates for each training tile (yielding a patch dataset four times larger), or we extract the NDVI from each date and stack them (thus using a multi-temporal NDVI stack). Finally, the last three cases use super-resolution images.

      Name Dates Channels Resolution
      T09NIR September RGB - NIR 10 m
      T06NIR June RGB - NIR 10 m
      T09NDVI September RGB - NDVI 10 m
      TADNIR All RGB - NIR 10 m
      TADNDVI All RGB - NDVI 10 m
      TMTNDVI All (stacked) NDVI (dates 1 to 4) 10 m
      T09NIRSR September RGB - NIR 5 m
      T09NDVISR September RGB - NDVI 5 m
      T09NDVISRSA September RGB - NDVI 5 m

      These experiments produced results that are qualitatively, though not quantitatively, interesting. During inference, a set of polygons is produced, each with a confidence score. We can set a first threshold on this score, below which predictions are discarded.

      Then, to match them with our target polygons, we used a geometrical criterion, illustrated on the right. This criterion estimates whether a target (green rectangle) and a prediction (red ellipse) have a sufficient intersection (in yellow) to consider their match as valid. By computing two ratios (yellow over green and yellow over red), then taking the minimum, we ensure that we are as restrictive as possible. The value obtained is called RC, and we can also set a threshold on it, in order to be more or less restrictive on the quality of our predictions. Once matches between predictions and targets have been computed, we can derive the classic detection metrics: precision, recall and F1-score. As a reminder, precision is the fraction of predictions that actually correspond to croplands, recall is the fraction of targets that are detected, and the F1-score is a compromise between the two.
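      The RC criterion and the derived metrics can be sketched as follows (a toy illustration with our own function names, not the actual evaluation code):

```python
# RC is the minimum of the two intersection ratios, so a match is valid only
# if the overlap is large relative to BOTH polygons (target and prediction).
def rc(target_area: float, pred_area: float, inter_area: float) -> float:
    return min(inter_area / target_area, inter_area / pred_area)

def detection_metrics(n_targets: int, n_preds: int, n_matches: int):
    """Precision, recall and F1-score from counts of matched pairs."""
    precision = n_matches / n_preds
    recall = n_matches / n_targets
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A prediction covering 80% of a target, with 80% of its own area inside it:
print(rc(100.0, 100.0, 80.0))         # 0.8 -> just valid at the 0.8 threshold
print(detection_metrics(50, 40, 30))  # precision 0.75, recall 0.6, F1 ~0.667
```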

      Among our use cases, T09NDVISR and TMTNDVI produced the best results, as shown in the table below. Here we used rather restrictive thresholds, i.e. 0.8 for both confidence and RC.

      Test Precision Recall F1-Score
      T06NIR (on June) 35.34 25.61 29.7
      T06NIR (on September) 31.51 21.45 25.52
      T09NIR (on June) 30.08 20.8 24.6
      T09NIR (on September) 30.31 21.79 25.35
      T09NDVI (on June) 31.03 21.43 25.35
      T09NDVI (on September) 29.79 21.21 24.78
      TADNIR 34.51 25.11 29.07
      TADNDVI 39.35 26.88 31.94
      TMTNDVI 42.56 30.28 35.39
      T09NIRSR 34.24 38.26 36.13
      T09NDVISR 36.92 39.35 38.09
      T09NDVISRSA 34.38 37.78 36.0

      To compare our performance with the authors', from the paper mentioned above, we chose an RC threshold of 0.5 and a confidence threshold of 0.7. The results are the following:

      Test Precision Recall F1-Score
      T09NDVISR 55.79 59.41 57.30
      TMTNDVI 75.73 55.86 64.13
      OSO (on 31UDR) 45.89 22.54 30.23
      Authors 68.7 48.5 56.2

      Our multi-temporal NDVI stack seems to provide relevant information, and manages to outperform the authors' results (computed on their own test tiles). Super resolution gives slightly lower detection metrics, in fact at the level of the authors' results (with better recall but lower precision). However, its correct predictions seem to be of better quality: the contours provided (examples below, at two different scales) are closer to the reference croplands than for the other models. Training with super-resolution images therefore seems to produce more accurate contours.

      Target croplands (green) and prediction made by T09NDVISR model (blue)

      Finally, we also ran our matching process on the vectorised OSO product, in order to justify the relevance of an instance segmentation approach. We used tile 31UDR for this test, and the recall obtained is very low: since the OSO product merges neighbouring croplands of the same type, a large number of targets go undetected. This justifies the interest in using an instance segmentation approach. Despite this, the results obtained are, at this stage, far from being an end-user product. It should be noted, however, that due to the use of RC as a matching criterion, we are unable to measure fragmentation (i.e. detecting a target with several predictions) or agglomeration (i.e. detecting several targets with a single prediction). Yet, on the images, some fragmentations or agglomerations could seem legitimate, so we necessarily underestimate our performance.

      Article written as part of my CESBIO internship, funded by CNES. Many thanks to Julien, Jordi and Olivier for their help.

    • sur VENµS, the itsy-bitsy satellite that goes up and down

      Posted: 18 September 2021, 8:00pm CEST by Olivier Hagolle

      The Israeli and French VENµS satellite is a small research satellite designed to test two innovative missions: an optical mission with a frequent revisit on the French side, and an ion-thruster experiment to maintain the satellite at a low altitude on the Israeli side.

      Of course, we could not run both experiments at the same time, as the frequent revisit required maintaining the satellite at a high altitude (720 km) and passing over exactly the same place every second day, while the electric propulsion was there to move the satellite. It was therefore decided to split the mission into different phases, called VENµS Mission x (VMx):

      • VM1 : Imaging mission at 720 km altitude, with a two-day cycle
      • VM2 : Propulsion mission to lower the orbit altitude to 410 km using the ion thruster
      • VM3 : Propulsion mission to keep the orbit at 410 km using the ion thruster
      • VM4 : Propulsion mission to raise the orbit to 560 km altitude using the hydrazine thruster
      • VM5 : Imaging mission at 560 km altitude, with a one-day cycle

      Where do we stand now? VM3 started at the beginning of September and will last until November. The altitude will be raised in December, and VM5 should start in January 2022. A call for proposals was issued this winter, for which we received 85 proposals asking for 219 sites. Gérard Dedieu and I screened all the proposals (Gérard did most of the work), and we ranked 45 proposals as excellent. We still have to check that the satellite will be able to observe the corresponding sites, and a few of these excellent proposals might turn out not to be accessible. Optimizing the kinematics of the satellite takes some time, so we apologize for not yet being able to release the results of our selection. Stay tuned!

      Meanwhile, if you are eager to get access to VENµS time series, the data on more than 100 sites are available from the Theia distribution site.


      Click to view an interactive map of VENµS sites during VM1


    • sur A very green France in July 2021

      Posted: 30 August 2021, 11:46am CEST by Olivier Hagolle


      During the hot and dry summers of past years, our monthly cloud-free syntheses of Sentinel-2 surface reflectances for the month of July interested the media. That will probably not be the case in 2021, as France thankfully kept its green colour in July.



      Because of the cloud cover that allowed France to keep this beautiful green colour, some regions could never be observed without clouds by Sentinel-2, even though we use a period of 45 days for our syntheses. This is for example the case in the Basque Country and in the Landes, in south-west France, as shown in the image below. The residual clouds and their shadows are of course flagged in our products. This shows once again the importance of improving the revisit of the Sentinel-2 mission in the coming years, to increase the probability of clear observations.

      You can view these syntheses yourself, and zoom in to 10 m resolution, using Theia's beautiful map dissemination tool. The tool also allows you to compare two different dates, and for France we have made available all the data produced since 2018. Data from other regions produced by Theia are also available (Europe, Maghreb, Madagascar and Sahel).


      To conclude, here are a few zooms on some French regions, where the differences between 2021 and the three previous years are particularly striking:

      Zoom on the agricultural region of Champagne (there is not only wine in Champagne). In 2021, the harvests were later and the soils more humid, which explains the brownish colour.


      Zoom on Haute-Loire, in the centre-east of France. In this region with a high proportion of grasslands and forests, the droughts of 2019 and 2020 had caused the grasslands to turn yellow. The return to their beautiful green colour in 2021 brought relief to the ranchers and their livestock.

    • sur A very green France in July 2021

      Posted: 27 August 2021, 12:50pm CEST by Olivier Hagolle


      For the past few years, during each dry summer in France, our monthly syntheses of Sentinel-2 surface reflectances for the month of July have been picked up by the media. That will probably not be the case this year, as France remained quite green in July 2021.


      Because of the cloud cover that allowed France to keep this beautiful green colour, some regions could never be observed without clouds by Sentinel-2, even though we use a 45-day period for our syntheses. This is for example the case in the Basque Country and in the Landes, as shown in the image below. The residual clouds and their shadows are of course flagged in our products. This shows once again the importance of improving the revisit of the Sentinel-2 mission in the coming years, in order to increase the probability of clear observations.

      You can view these syntheses yourself, and zoom in to 10 m resolution, using Theia's beautiful map dissemination tool. The tool also allows you to compare two different dates, and for France we have made available all the data produced since 2018. Data from the other regions produced by Theia are also available (Europe, Maghreb, Madagascar and Sahel).


      To conclude, here are a few zooms on some French regions where the differences between 2021 and the three previous years are particularly striking:

      Zoom on the agricultural region of Champagne. In 2021, the harvests were later and the soils more humid, which explains this brownish colour.


      Zoom on Haute-Loire. In this region with a high proportion of grasslands and forests, the droughts of 2019 and 2020 had turned the grasslands yellow; they recovered their beautiful green colour in 2021, much to the relief of the ranchers.


      For the first time, I had nothing to do to prepare this article, apart from the screenshots and the writing itself. The CNES teams and their subcontractors took care of everything. A big thank you!

    • sur Format changes to L1C

      Posted: 9 August 2021, 5:29pm CEST by Olivier Hagolle

      A CNES colleague, a member of the Sentinel-2 quality working group, has just warned us of coming changes to the Sentinel-2 Level 1C format. From what I have heard, there will be two main changes:

      • the reflectance will be coded with an affine law, with an offset and a gain, instead of the current linear law (with a zero offset). The aim is to distinguish the no-data value (0) from a null reflectance.
      • the GML vector masks will be replaced by raster masks
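      As a sketch of what the announced change implies when decoding reflectances (the offset and quantification values below are illustrative assumptions, not taken from ESA documentation):

```python
import numpy as np

QUANT = 10000.0   # quantification value (assumed)
OFFSET = -1000.0  # hypothetical radiometric offset added to the digital number

def l1c_linear(dn):
    """Current law: rho = DN / QUANT; DN == 0 is both no-data and rho == 0."""
    return dn / QUANT

def l1c_affine(dn):
    """New affine law: rho = (DN + OFFSET) / QUANT; DN == 0 means no-data only."""
    return np.where(dn == 0, np.nan, (dn + OFFSET) / QUANT)

dn = np.array([0, 1000, 11000])
print(l1c_affine(dn))  # no-data, then reflectances 0.0 and 1.0
```

      This is exactly the kind of one-line decoding assumption that is hard-coded in many existing tools, hence the concern about adaptation and validation time.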

      Even if this could be handy for Sentinel-2 users, a huge quantity of code, among the thousands of software packages that use Sentinel-2, will have to be adapted and validated to account for these changes.

      ESA plans to implement the changes by September, while precise information on the changes, as well as sample products, is not available yet. Knowing the time needed to change operational software and carry out all the qualifications, I think this modification comes much too soon. Don't you agree? Please ESA, give us an additional month...


    • sur The role of snow in the Indus river basin

      Posted: 15 July 2021, 4:23am CEST by Simon Gascoin

      In the Indus river basin, available water resources are abstracted almost entirely, mainly to irrigate crops in India, Afghanistan and Pakistan [1]. Most of the population lives in the low elevation regions of the basin, but a large part of the runoff and groundwater recharge comes from the high elevation regions. This is due to (i) the orographic enhancement of the precipitation and (ii) the reduction of the evaporative demand with elevation. In the Indus basin, the precipitation trend with elevation is not monotonic, because a vast high elevation region in the north east lies in the rain shadow of the Himalayas (e.g. Ladakh). Also, global precipitation datasets (such as the one used below) are known to underestimate precipitation in the upper Indus [2].

      Indus river basin population, precipitation and potential evapotranspiration by elevation band. The runoff production is maximum in high elevation areas, where precipitation is high and evaporation is low (data sources: GPWv4 [3], Terraclimate [4])

      Above 2000 m, in the Karakoram, Hindu Kush and Himalayas mountain ranges, snowmelt is the major contributor to runoff. The snow melt contribution to runoff is also significant in the high elevation regions of the Ganges and Brahmaputra basins, but not as much as in the Indus.

      Contribution of rain, snow melt and ice melt to runoff in the Indus, Ganges and Brahmaputra river basins in the areas above 2000 m (data source: Armstrong et al. [5])

      Previous studies have estimated snowmelt in the Indus using temperature-index models [5, 5b]. A limitation of such models is that they do not account for sublimation. Field studies suggest that sublimation can be an important component of the water balance at the highest elevations [5c].

      The recent release of the High Mountain Asia Snow Reanalysis [6] makes it possible to better characterize the hydrological significance of the snow cover in the Indus basin. This reanalysis is based on an energy balance model which computes sublimation among other processes. It also removes some biases in the meteorological input data by using remote sensing observations of the snow covered area.

      This dataset provides daily estimates of snowmelt over the period 2000-2016. During that period, the annual basin snowmelt ranged between 100 mm and 140 mm except for the first year (maybe an artifact related to the model initialization?).

      Cumulated snowmelt in the Indus river basin (data source: HMASR [6])

      The average annual basin snowmelt was 118 mm (= 102 gigatons of water per year). This annual snowmelt represents about a quarter of the average annual basin precipitation. Here I used the WorldClim mean precipitation value (424 mm), which falls within the range of previous estimates reported by [1] (392 to 461 mm). I also compared it to the glacier ablation in the basin over the same 2000-2016 period [8]. Glacier ablation includes the melt of the seasonal snow cover over the glacier area. I also added the "imbalance" fraction of the glacier ablation, which gives an estimate of the ice melt contribution caused by climate warming over the period (glacier mass loss that was not compensated by snow accumulation).

      Cumulated snowmelt in the Indus river basin (data source: WorldClim [7], HMASR [6], Miles et al. [8])

      The snowmelt rates were highest during the summer, which is also the monsoon season. During that period, monsoon rains and meltwater runoff can cause flooding. Snowmelt rates remained high after the monsoon, hence contributing to sustain river flow after the monsoon flood.

      Daily snowmelt in the Indus river basin (data source: HMASR [6])

      From the HMASR, we can also evaluate the amount of snow that was sublimated and did not contribute to runoff.

      Daily sublimation in the Indus river basin (data source: HMASR [6])

      Over 2000-2016, sublimation represented 11.7% of the total snow ablation.

      Annual snowmelt and sublimation in the Indus river basin (data source: HMASR [6])

      The sublimation generally peaked in late June, about a month before snowmelt.

      Daily sublimation and snowmelt in the Indus river basin (data source: HMASR [6])

      To conclude, we can learn from the HMASR that snowmelt is a significant but not dominant input to the water balance of the Indus river basin. In addition, the melt season overlaps with the monsoon season, when the issue is more an excess of water than a lack of it. About 12% of the seasonal snowfall was "lost" through sublimation.
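      A quick back-of-the-envelope check of the figures quoted above (values copied from the text):

```python
# Figures quoted in the post for the Indus basin, 2000-2016.
snowmelt_mm = 118.0      # mean annual basin snowmelt (HMASR)
precip_mm = 424.0        # mean annual basin precipitation (WorldClim)
sublimation_frac = 0.117 # share of total snow ablation lost to sublimation

# Snowmelt as a fraction of precipitation: about a quarter.
print(round(snowmelt_mm / precip_mm, 2))  # 0.28

# Total snow ablation = melt + sublimation, so from the 11.7% share:
ablation_mm = snowmelt_mm / (1.0 - sublimation_frac)
print(ablation_mm - snowmelt_mm)  # roughly 15-16 mm sublimated per year
```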

      A similar study could be done for other river basins. The code to reproduce this analysis is available in this repository: [https:]]


      [1] Laghari, A. N., Vanham, D., and Rauch, W.: The Indus basin in the framework of current and future water resources management, Hydrol. Earth Syst. Sci., 16, 1063–1083, [https:] 2012.

      [2] Immerzeel, W. W., Wanders, N., Lutz, A. F., Shea, J. M., and Bierkens, M. F. P.: Reconciling high-altitude precipitation in the upper Indus basin with glacier mass balances and runoff, Hydrol. Earth Syst. Sci., 19, 4673–4687, [https:] 2015.

      [3] Center for International Earth Science Information Network - CIESIN - Columbia University. 2018. Gridded Population of the World, Version 4 (GPWv4): Population Density, Revision 11. Palisades, NY: NASA Socioeconomic Data and Applications Center (SEDAC). [https:] Accessed 01 July 2021.

      [4] Abatzoglou, J.T., S.Z. Dobrowski, S.A. Parks, K.C. Hegewisch, 2018, Terraclimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015, Scientific Data 5:170191, [https:]

      [5] Armstrong, R. L., Rittger, K., Brodzik, M. J., Racoviteanu, A., Barrett, A. P., Khalsa, S.-J. S., Raup, B., Hill, A. F., Khan, A. L., Wilson, A. M., Kayastha, R. B., Fetterer, F., and Armstrong, B.: Runoff from glacier ice and seasonal snow in High Asia: separating melt water sources in river flow, Reg Environ Change, [https:]] , 2018.

      [5b] Kraaijenbrink, P.D.A., Stigter, E.E., Yao, T. et al. Climate change decisive for Asia’s snow meltwater supply. Nat. Clim. Chang. 11, 591–597 (2021). [https:]

      [5c] Stigter EE, Litt M, Steiner JF, Bonekamp PNJ, Shea JM, Bierkens MFP and Immerzeel WW (2018) The Importance of Snow Sublimation on a Himalayan Glacier. Front. Earth Sci. 6:108. doi: 10.3389/feart.2018.00108

      [6] Liu, Y., Y. Fang, and S. A. Margulis. 2021. High Mountain Asia UCLA Daily Snow Reanalysis, Version 1. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. [https:]]

      [7] Hijmans, R.J., S.E. Cameron, J.L. Parra, P.G. Jones and A. Jarvis, 2005. Very High Resolution Interpolated Climate Surfaces for Global Land Areas. International Journal of Climatology 25: 1965-1978. doi:10.1002/joc.1276.

      [8] Miles, E., McCarthy, M., Dehecq, A. et al. Health and sustainability of glaciers in High Mountain Asia. Nat Commun 12, 2868 (2021). [https:]

    • sur Drought returns to California

      Posted: 28 June 2021, 6:43pm CEST by Simon Gascoin

      Only a few years after its worst drought in 1,200 years (2011-2016), drought has returned to California.

      Lake Oroville is both a witness and a victim of the extreme hydro-meteorological variability of the Golden State. It provides water to "homes, farms, and industries in the San Francisco Bay area, the San Joaquin Valley and Southern California". In 2017, 188,000 northern California residents were evacuated under the threat of a spillway failure at Oroville Dam following severe rainfall. Today, the water level is so low that its hydroelectric power plant will be forced to shut down for the first time since it opened in 1967.

      Boats on Lake Oroville during the 2021 drought. In May 2021, the water level of Lake Oroville dropped to 38% of capacity (Wikipedia). © Frank Schulenburg

      The Lake Oroville water level is closely monitored by the California Department of Water Resources. However, it is also possible to watch the fluctuations of the water area in the growing archive of Sentinel-2 images (below, using the NDWI index in the EO Browser).
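      For reference, an NDWI for water detection can be computed from the Sentinel-2 green and near-infrared bands; a minimal sketch (McFeeters' formulation, with illustrative reflectance values):

```python
import numpy as np

# NDWI = (Green - NIR) / (Green + NIR); open water is strongly positive
# because water reflects in the green and absorbs in the near infrared.
def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-12)

water = ndwi(np.array(0.10), np.array(0.02))  # open water pixel
land = ndwi(np.array(0.10), np.array(0.30))   # vegetated land pixel
print(float(water) > 0 > float(land))  # True: a 0 threshold separates them
```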

      Lake Oroville in June (Sentinel-2 NDWI)

      In 2018, Lake Oroville could feel the heat of Camp Fire, the deadliest and most destructive wildfire in California's history.

      Lake Oroville and #campfire in the same satellite picture. Two expressions of climate hazards in #California?

      — Simon Gascoin (@sgascoin) November 18, 2018

      In 2020, Lake Oroville had a ringside seat for the largest wildfire season recorded in California's modern history (the image is hazy because of the wildfire smoke).

      The snowpack is the primary source of runoff in the Lake Oroville catchment, but also in the entire western United States, where 53% of the total runoff originates as snowmelt (Li et al. 2017).

      This year's snow deficit in the western USA can be observed in the 20-year dataset of MODIS observations. Here is a snapshot from my Western USA Snow Monitor:

      In May 2021, the snow cover area had reached one of its lowest values since 2001 (01 May = day 121), reflecting the lack of snowfall during the winter.

      The current conditions do not augur well for the farmers who need this water to irrigate their crops during the dry summer season. It also means that they will have to further deplete the groundwater, which is already causing ground subsidence.

    • sur Snow cover in the Pyrenees in 2021

      Posted: 15 June 2021, 6:12pm CEST by Simon Gascoin

      Despite late snowfalls in May, the snow cover duration is below average in the Pyrenees this year, as can be seen on this synthesis of satellite images:

      We also remember storm Filomena, which whitened the Pyrenean piedmont on 10 January, clearly visible on the graph below.

      Evolution of the snow-covered area since 1 January for every year since 2001 (expressed as a fraction of the domain drawn above, i.e. 1.0 = 100% of the area is snow-covered)

      Following a suggestion by Christophe Cassou, I also plotted the anomaly map, that is, the per-pixel difference between this year's snow cover duration and the mean snow cover duration observed between 2001 and 2019.

      Map of snow cover duration anomalies (2021 "minus" the climatology)
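      The per-pixel anomaly computation is simply a difference of snow cover duration maps; a minimal sketch with illustrative values (array names are ours):

```python
import numpy as np

# Snow cover duration per pixel, in days: this year vs the 2001-2019 mean.
scd_2021 = np.array([[60.0, 120.0], [90.0, 150.0]])
scd_climatology = np.array([[80.0, 100.0], [90.0, 170.0]])

# Negative values = fewer snow days than usual, positive = more.
anomaly = scd_2021 - scd_climatology
print(anomaly)
```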

      The deficit is mostly marked on the French side, whereas the Spanish slope shows areas with an above-average snow cover duration. This is again a consequence of Filomena, which mainly hit the Iberian peninsula (remember Madrid under the snow!).

      Finally, if we extract the elevations of the negative (red) and positive (blue) pixels, we see that the deficit areas are located at mid and high elevations. The surplus areas are a minority, except at low elevation.


      The exceptional Saharan dust deposition of 6 February 2021 probably contributed to accelerating the spring melt, thus reducing the number of snow-covered days.

      To follow the evolution in real time, see this page: [https:]]

      The method used to generate these maps is explained in this article:

      Gascoin, S., Hagolle, O., Huc, M., Jarlan, L., Dejoux, J.-F., Szczypta, C., Marti, R., and Sánchez, R.: A snow cover climatology for the Pyrenees from MODIS snow products, Hydrol. Earth Syst. Sci., 19, 2337–2351, [https:]] , 2015.


    • sur Our new job: gardeners !

      Posted: 5 June 2021, 7:48pm CEST by Olivier Hagolle


      After a first tractor came too close to our ROSAS station, the next one passed a little too far away. Even if we are never satisfied, it is much better this way! This tractor sowed the maize on the plot on which we built the station. While it is acceptable to leave a two-metre circle of bare ground around the mast (the foot of the mast is masked in the processing), the gap without maize plants was much too large, reaching six metres.


      The MAJA team at work (the wasp, with yellow and black stripes)

      To solve this problem, on 28 May our little team went gardening, to transplant a few maize sprouts to the foot of the mast. Without a tractor, and 500 m from the nearest water source, it was quite an effort, but much more fun than spending the day in video conferences. As always, Jérôme Colin organised this outing perfectly, and you should see how he pampers the maize plants; Micael Lassalle makes an excellent water carrier, and I am the shovel expert ;).



      Below is the result seen from above; we hope our little plants will grow as well at the foot of the mast as in their initial location.




    • sur Our new job: gardeners !

      Posted: 4 June 2021, 8:41pm CEST by Olivier Hagolle


      After a first tractor came too close to our ROSAS station, the next one passed by a little too far. Even if we're never satisfied, it is much better! This tractor sowed the maize on the plot on which we built the station. While it is acceptable to leave a two-metre circle of uncultivated ground around the mast (the foot of the mast is masked in the processing), the gap without maize plants was much too large and reached six meters.

      The MAJA team at work (you know that MAJA, the wasp, has yellow and black stripes)

      To solve that issue, on the 28th of May, our little team went gardening to transplant a few maize sprouts to the foot of the mast. Without a tractor, and 500 m from the closest water source, it was quite an effort, but much more fun than spending the day in video conferences. As always, Jerome organized this perfectly, and you should see how he pampers the sprouts; Micaël Lassalle makes an excellent water carrier, and I am the shovel expert.

      Below is the result as seen from above; we hope our little plants will grow as well there as in their initial location.


    • sur Can commercial satellites do the job of Sentinel-HR ?

      Posted: 11 May 2021, 9:48am CEST by Olivier Hagolle

      This post intends to answer a question about Sentinel-HR that we have been asked quite often, inside and outside CNES:

      Securing continuity in a critical timeseries requires user community foresight, Programme justification, 10 yrs Agency prep, and finances. Whilst Sentinel-HR may well be an excellent/valid idea, I’m left asking why certain Copernicus contributing missions couldn’t plug this gap?

      — Mark Drinkwater (@kryosat) April 29, 2021

      Sentinel-HR is a mission project, currently studied in phase 0, for a free and open, global, high-resolution (~2 m), repetitive (~20 days), systematic optical earth observation mission. Sentinel-HR would also provide, in one pass, a stereoscopic observation with a small angle difference, allowing a terrain restitution with an accuracy of 3 to 4 meters. For instance, this capability could be useful to monitor the evolution of glaciers worldwide, as in this study recently published in Nature. The latter study was mostly based on one sensor, ASTER, which has acquired a vast and open archive of stereo images since late 1999. To our knowledge, no replacement is planned for ASTER.

      "Copernicus contributing missions", in the European glossary, are commercial satellite missions (sometimes funded by the member states), whose data can be bought by Copernicus to serve the interests of European users. The table below shows a few examples of high resolution satellite constellations. The number of satellites in this category is of course much greater, but the satellites identified below are emblematic examples of what is possible.

      These satellites can be sorted into two categories:

      • on-demand acquisition, or tasked, satellites (such as Pleiades, Pleiades Neo, Maxar WorldView-3)
      • systematic acquisition satellites, or coverage satellites (such as Planet, and maybe someday UrtheDaily)

      Our Sentinel-HR mission would of course be a coverage mission.

      Tasked missions

      Mission         | N° of satellites | Swath width (km) | Multi-spectral resolution (m) | Surface/day (M.km²)
      Pleiades        | 2                | 20               | 2.4                           | 1
      Pleiades Neo    | 4                | 14               | 1.2                           | 2
      SPOT6/7         | 2                | 60               | 8                             | 4
      Worldview 2 & 3 | 2                | 13               | 1.2                           | 1.2
      Planet Skysat   | 21               | 6                | 1                             | 0.4

      A few examples of VHR satellites with on-demand acquisition (we did not include the Skysat satellites in our analysis below because half of them are not sun-synchronous, and the others have a different overpass time).

      There are 150 M km² of emerged lands, so in order to fulfill its mission needs (20-day revisit at 2 meters resolution), Sentinel-HR should acquire 8 M km² daily (accounting for the 20% overlap required for mosaics, cloud and shadow detection, etc.). Accomplishing this with tasked VHR satellites would require dedicating 100% of the capacity of the satellites in the table above. Excluding SPOT6/7, which does not have the necessary resolution, it would even amount to about 200%. If we now consider their stereoscopic capabilities, the daily coverage from tasked VHR satellites would drop drastically. Even if reserving this amount of capacity were possible and affordable (and we think it is not), there would still be plenty of issues with differences in sensors, lifetimes, orbits, resolutions, swaths, viewing angles and spectral bands, which would make it very difficult to derive consistent, long-term datasets. If we set those issues aside and assume that maybe only 20% of the total capacity can be dedicated to a public observation service, then we would need to start choosing which areas are observed and which are not. And inevitably, those data will be missing for someone, sometime, somewhere that we cannot foresee.
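      The capacity argument above can be cross-checked with a quick back-of-the-envelope computation using only the numbers from the table (a sketch, not part of any processing chain):

```python
# Back-of-the-envelope check of the capacity argument, using the daily
# acquisition figures from the table above (in M.km² per day).
capacity = {
    "Pleiades": 1.0,
    "Pleiades Neo": 2.0,
    "SPOT6/7": 4.0,
    "Worldview 2 & 3": 1.2,
    "Planet Skysat": 0.4,
}

required = 8.0  # M.km²/day needed by Sentinel-HR (20-day global revisit)

# Share of the total tasked capacity that would have to be reserved.
share_all = required / sum(capacity.values())  # ~0.93, i.e. ~100 %

# Excluding SPOT6/7, whose resolution is insufficient.
no_spot = sum(v for k, v in capacity.items() if k != "SPOT6/7")
share_no_spot = required / no_spot  # ~1.74, i.e. ~200 %
```

      These ratios match the "100%" and "about 200%" figures quoted in the text.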

      Acquisitions by Pleiades 1A and 1B over France for the first three weeks of April 2021. Although all these images are great and locally useful, probably less than 20% of the French surface is covered (from the Airbus geostore catalog).

      Coverage missions

      Mission               | N° of satellites | Swath width (km) | Multi-spectral resolution (m) | Surface/day (M.km²/sat)
      Planetscope Superdove | 80               | 19.5             | 4                             | 0.5
      UrtheDaily            | 8                | 360 ?            | 5                             | 25 ?

      A few examples of VHR satellites with systematic acquisition (until the limits of their capacity are reached). We call these missions "coverage satellites". For UrtheDaily, although a launch is announced in 2023, there is not much literature, and we had to guess the swath width and acquisition capacity.

      The Planet constellation is closer to the type of mission we consider for Sentinel-HR, allowing a daily revisit at 4 m resolution, which is twice as coarse as what is expected for Sentinel-HR. The issue with Planet is data quality. As the optics have a very small aperture, the acquisitions are made with very broad spectral bands that overlap each other. The signal-to-noise ratio is also not in the range of the Sentinels. Another difficulty lies in the very small field of view, even if the most recent model of the constellation has reached almost 20 kilometers. As at least a few kilometers of overlap are necessary between adjacent swaths, the world is in fact covered by a huge amount of spaghetti stitched together. The orbit of the satellites is not maintained, so users have to handle different overpass times when processing time series, which degrades the data quality.

      On paper, the UrtheDaily constellation of 8 satellites could be much more interesting, even if its resolution is also twice as coarse as the one needed by Sentinel-HR. But there is not much information on this constellation on the internet, and it remains hypothetical, although it was announced for 2022.

      Neither mission (Planet nor UrtheDaily) provides stereoscopy, which is one more reason why they do not fulfill the objectives of Sentinel-HR.


      Our review may have missed newly launched or planned missions, including Chinese missions for instance, and we do not claim to have a complete overview of very high resolution missions in mind. But the tasked missions, with their off-nadir acquisition capabilities, are not suited to systematic coverage, and too many expensive satellites would be required to fulfill the Sentinel-HR mission. So far, the only existing high-resolution coverage mission, Planet, does not meet the data quality standards of the Sentinels, and would need a twofold improvement of its resolution. Its stereo capabilities also remain modest and not systematic.

      Their daily revisit is of course a plus compared to Sentinel-HR, but we believe a very high resolution mission does not require such a frequent revisit for a lot of applications (urban areas, infrastructure, forests, glacier evolution, coasts, rivers, hedges...), and that it is more reasonable to try to merge a low-revisit, very-high-resolution mission such as Sentinel-HR with a frequent-revisit, high-resolution mission such as Sentinel-2 or Sentinel-2 NG. Moreover, neither Planet nor UrtheDaily will provide the stereoscopic capability included in Sentinel-HR.

      Finally, let's recall that when we were preparing Sentinel-2, we often heard that SPOT and SPOT-like satellites already provided this kind of data, that a Sentinel-2 mission was therefore not necessary, and that it would kill private earth observation missions. Twenty years later, commercial earth observation is thriving, with a lot of different very high resolution missions, while the Sentinel-2 mission is an extraordinary success which has proved useful to hundreds of thousands of people.

      Moreover, we absolutely do not know what the landscape of VHR earth observation will look like in the 2030s, and maybe a privately owned mission could answer our needs some day. But if we want its data to be free and open, its entire image archive will have to be bought by the public sector (the EU for instance) and provided to the community. How can we be sure this will be cheaper and better matched to our needs than a mission designed by a space agency?

      This post was prepared by Olivier Hagolle, Julien Michel (CESBIO) and Etienne Berthier (LEGOS)

    • sur Orange snow: first results

      Posted: 11 May 2021, 1:25am CEST by Simon Gascoin
      Three months after the Saharan dust deposition that colored the snowpack orange, and thanks to the participation of many citizen scientists, we can announce that the mass of dust deposited in the Alps and the Pyrenees ranges from 0.2 to 30 grams per square meter, which represents hundreds of thousands of tonnes of dust! This deposition could shorten the snow cover duration by several tens of days. But we will have to wait until the end of the season to draw definitive conclusions about the snow cover of this year full of surprises (April was the coldest since 1994, and winter is lingering in our mountains with late snowfalls). Samples collected by Anne and Florence, Cap du Carmil (Ariège); photo credit: Anne Barnoud.


      More than 150 samples were collected in the Alps and the Pyrenees. Not all of the Pyrenean samples have been analysed yet... and they are the most heavily loaded. We will soon have a more precise estimate of the deposits per region. Further analyses are planned to get the most out of these samples (chemical composition, radioactivity, grain size, etc.). These results are the fruit of improvised teamwork, brilliantly coordinated by Marie Dumont (CNRM, Centre d'études de la neige) despite the COVID-related complications! Read the report online: [https:]] Reference: Réveillet, M., Tuzet, F., Dumont, M., Gascoin, S., Arnaud, L., Bonnefoy, M., Carmagnola, C., Deguine, A., Evrard, O., Flin, F., Fontaine, F., Gandois, L., Hagenmuller, P., Herbin, H., Josse, B., Lafaysse, M., Le Roux, G., Morin, S., Nabat, P., Petitprez, D., Picard, G., Robledano, A., Schneebeli, M., Six, D., Thibert, E., Vernay, M., Viallon-Galinier, L., Voiron, C., Voisin, D.: Dépôts massifs de poussières sahariennes sur le manteau neigeux dans les Alpes et les Pyrénées du 5 au 7 février 2021 : Contexte, enjeux et résultats préliminaires, version du 3 mai 2021, CNRM, Université de Toulouse, Météo-France, CNRS, 2021. hal-03216273

    • sur A first peek at new Sentinel2 geometric processing

      Posted: 5 May 2021, 11:41am CEST by Julien Michel


      If you work with Sentinel2 products, you have probably heard that ESA's new geometric processing has been active since the 1st of April 2021. This new processing, based on the geometric refinement of viewing parameters with respect to a Global Reference Image (GRI), should bring the absolute location error from 11 meters (for 95.5% of products) down to better than 8 meters, and, more importantly, the multi-temporal registration error from 12 meters (for 95.5% of products) down to better than 5 meters, and even 3 meters within a single orbit (source: ESA Data Quality Reports).

      At CESBIO, we know that the multi-temporal registration of Sentinel2 products may be problematic in some cases, and we recently developed a processing chain, named StackReg, that quickly estimates relative location biases for a large number of products. This processor computes offsets to apply to each image's geo-location information in order to improve the spatial registration of the stack, as shown in this example video.


      Top, from left to right: S2 stack without registration, S2 stack with StackReg-computed offsets, temporal derivative without registration, temporal derivative with StackReg-computed offsets. Bottom: NDVI profile at the red cross location without registration (blue), with registration (red), and registration amplitude (dotted gray).

      StackReg in a nutshell

      StackReg is a tool that I needed to develop on my way to a spatio-temporal fusion processing chain for the Sentinel-HR phase 0 study. For a complete introduction, you can watch the talk I gave for a lab workshop: the slides (English) are available here, and the conference video (French) is available here. For those in a hurry, here are the main things to know about it.

      StackReg matches all images of a given Sentinel2 tile available in the Theia archive against the image with the highest ground coverage (excluding clouds, saturation, open water and edges) using the SIFT algorithm, which yields thousands of sub-pixel pairs of points called matches. Images are broken into sub-tiles, and matching is done at sub-tile level in order to reduce the matching cost and discard obvious outliers. Once all matches to the target image are collected for a given image, matches that correspond to an offset of more than 20 meters are discarded, since we know from the Data Quality Reports that the multi-temporal registration should be better than 12 meters. This process is similar to the one used in CARS (the CNES open-source photogrammetry pipeline).
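      The outlier rejection step described above can be sketched as follows (illustrative code, not the actual StackReg implementation; only the 20 m threshold comes from the text):

```python
import math

MAX_OFFSET = 20.0  # meters; matches implying a larger shift are outliers


def filter_matches(matches):
    """Keep SIFT matches whose implied displacement is below the threshold.

    matches: list of ((x1, y1), (x2, y2)) point pairs in map coordinates
    (meters), one point in the target image and one in the matched image.
    """
    return [
        ((x1, y1), (x2, y2))
        for (x1, y1), (x2, y2) in matches
        if math.hypot(x2 - x1, y2 - y1) <= MAX_OFFSET
    ]


# A ~7 m displacement is kept, a 30 m displacement is discarded.
kept = filter_matches([((0, 0), (5, 5)), ((0, 0), (30, 0))])
```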

      This matching process is distributed on the CNES High Performance Computing center, and processing the full archive for one tile takes a little less than 15 minutes (once the data have been downloaded).

      Matching performances for the full archive of the 31TCJ tile. First line: intervisibility (amount of ground pixels visible in both images); second line: number of points before and after filtering (notice the logarithmic scale); third line: mean offset amplitude; fourth line: standard deviation of the offset amplitude. The dashed gray line indicates the target date.

      Relative positions of the images computed by StackReg.

      Once we have computed the offset to the reference image for all images in the archive, we can derive the relative positions of all images by considering the reference image as the origin of our frame. Since there is no particular reason for this image to be better located than the others, we then compute the mean position of all images and use it as the target position, from which we derive the offsets that register all images together, as illustrated in the figure on the right. This kind of graph can also be used to analyse the geometric accuracy and multi-temporal registration of the data (but not the absolute location, we will get to that).
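      The re-centering step described above can be sketched in a few lines (hypothetical names, not the actual StackReg code): place the reference image at the origin, compute the mean position, and derive the translation that moves each image onto that mean.

```python
def registration_offsets(positions):
    """positions: {image_id: (dx, dy)} offsets to the reference image,
    in meters, with the reference image itself at (0, 0).

    Returns the translation to apply to each image so that the whole
    stack is centred on the mean position rather than on the arbitrary
    reference image.
    """
    n = len(positions)
    mx = sum(dx for dx, _ in positions.values()) / n
    my = sum(dy for _, dy in positions.values()) / n
    return {img: (mx - dx, my - dy) for img, (dx, dy) in positions.items()}


positions = {"ref": (0.0, 0.0), "img1": (4.0, -2.0), "img2": (2.0, 2.0)}
corrections = registration_offsets(positions)  # mean position is (2.0, 0.0)
```

      Note that the reference image itself gets a non-zero correction: being the origin of the frame does not make it the best-located image.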

      The list of offsets is compiled in a csv file, which is the only output of StackReg (storing StackReg outputs is therefore very cheap). This csv file can then be used to resample images or generate registered stacks on the fly (for instance using the WarpedVRT feature of rasterio).

      This yields a significant improvement of the spatial registration consistency in the multi-temporal stack, even when considering pairs of images that have not been matched together, as shown in the following figure. From this figure, we can say that the initial registration is consistent with the Data Quality Reports, and that StackReg is very efficient at building a coherent multi-temporal stack.

      Estimated mis-registration amplitude for all possible pairs of images from tile 31TCJ (2018-2019, coverage > 60%, 2000x2000 extract), without and with StackReg. What can StackReg tell us about the new Sentinel2 geometric processing?


      31TCJ location scatter plot

      We can use the same scatter plot to see how images acquired from 2021-04-01 onward behave (the date from which the new processing is activated when possible), and see how StackReg locates those images with respect to the others. Here it is for tile 31TCJ. We can see that all the red crosses (with the new geometric processing) fall into the confidence ellipse. Furthermore, they look quite grouped together, which suggests less jitter in image positions and supports the idea of a multi-temporal registration better than 5 meters, except for one point at the bottom of the ellipse (but maybe the geometric processing was not active for this image). We can also note that, even if we do not yet have enough acquisitions to be sure, the mean of the red points would be 2.5 meters north of the mean of the full cloud, which may indicate that we should rather use the mean of post-2021-04-01 images as our target location in StackReg.

      Indeed, if we look at the spatial registration coherency matrix for dates after 2021-04-01, we can confirm that the coherency looks good with the new geometric processing, except for one image. We can also see that StackReg slightly enhances the coherency and brings this faulty image back with the others.

      Coherency matrix for images with new geometric processing on tile 31TCJ.

      If we look at other tiles, we can see that the same conclusions apply. 30TYS shows a very tight pack of acquisitions. Again, the center of the cloud is not the center of the dates corrected by the new geometric processing, which suggests that we could use those dates to enhance the absolute location of the full stack. 31TGL also shows a very tight pack, this time a bit outside our confidence ellipse. Once again, we are probably wrong and they are probably right. The same applies to 30TYQ and 30TXT (see the graphs at the end of the post).

      So what can we say? Of course, we will have to confirm this when more products become available but... it works, folks! We only have to wait for the complete archive reprocessing (including L2A products)... In the meantime, StackReg can help build spatially coherent long Sentinel2 time series, further improve this coherency when dealing with products with the new geometric processing, and sometimes catch outlier images and bring them back with the others. And we will have a closer look at the potential improvement of the absolute location error by using the mean location of dates with the new geometric processing as the target location.

      31TYS location scatter plot


      31TGL location scatter plot 30TYQ location scatter plot 30TXT location scatter plot





    • sur Oups...

      Posted: 21 April 2021, 10:18am CEST by Jérôme Colin


      Field work rarely goes without a few small hazards. In the case of our ROSAS station, installed just a month ago, the hazard unfortunately took the form of a farm machine hitting the mast while tilling the soil on April 15th.

      The base of the mast twisted under the impact, leaving the structure in a precarious position. The stress applied to the mast led to a torsion of its fixing plate, and to the breaking of the concrete mass around one of the four anchoring points. The concrete mass itself was destabilized.

      In order to avoid any further damage, the instrumentation was removed from the mast on April 19 by the CESBIO team.

      The mast was then repositioned vertically with the help of an agricultural machine thanks to the dexterity of the Purpan School team.

      Nevertheless, we will have to replace the damaged elements to guarantee the safety of the installation. In order not to impact the corn which has just been sown, the work will only be done after the harvest, that is to say around September. In the meantime, the photometer will be repositioned on the mast and re-aligned with the help of the CIMEL team in order to acquire BRDF data on this new crop cycle.


      Tadaa !
    • sur Muldrow Glacier surge in Alaska

      Posted: 15 April 2021, 11:29am CEST by Simon Gascoin

      Muldrow Glacier (McKinley Glacier) is a large glacier in Denali National Park and Preserve in Alaska, USA. It is now moving 100 times faster than normal, which means that it is undergoing a "surge".

      The abrupt acceleration of the glacier movement is clearly visible in the recent series of Sentinel-1 SAR backscatter images (animation below). Also visible is a strong increase in the backscatter due to the formation of new crevasses. These crevasses create new reflectors on which the radar waves can bounce and return to the satellite antenna.

      Read more about the Muldrow Glacier surge and the ongoing effort to study it :

    • sur 2 000 downloads for MAJA !

      Posted: 14 April 2021, 6:04pm CEST by Olivier Hagolle

      The distribution of the MAJA L2A processor as an executable software started in April 2017, but we only started counting the number of downloads in October 2017. The cumulative number of downloads reached 2000 on the 2nd of April 2021. That is on average 1.56 downloads per day, and more than 2 downloads per working day!
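      The average rate quoted above is easy to verify (the exact start day in October 2017 is not given, so the 1st of October is an assumption):

```python
from datetime import date

downloads = 2000
# Counting started in October 2017; assume the 1st for this estimate.
days = (date(2021, 4, 2) - date(2017, 10, 1)).days
rate = downloads / days  # about 1.56 downloads per day
```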

      In October 2020, MAJA became an open source software distributed on GitHub. GitHub does not count downloads, and only provides the number of clones made by different users in the last 2 weeks. The plot below shows that we have to add about 3 clones per week to the downloads of the executable version. As the GitHub site also serves as MAJA's forum so far, you can also see the traffic on the documentation pages and forum in the plot provided below.

      MAJA is not a simple piece of software that runs on a laptop. It is designed to work only on Linux platforms, and Sentinel-2 data require a large disk space and a comfortable amount of memory. We therefore did not expect it to reach such a large number of downloads. If you are one of MAJA's users, we would be pleased to hear about the applications for which you used it.

      MAJA clones (top) and information traffic (bottom) on github
    • sur Can surface reflectance be negative ?

      Posted: 6 April 2021, 11:09am CEST by Olivier Hagolle


      Here is a frequently asked question :

      I noticed, in such and such L2A product processed by MAJA, that some pixels had negative reflectances. Is that normal?

      No, it should not happen, but the fact that it does is not entirely surprising, as I will explain below. Unlike negative reflectances, reflectances greater than one can exist; this is explained here.

      Reflectance should be positive, as it corresponds, up to a normalization factor, to the ratio of the radiance reflected by the Earth's surface (positive or zero) to the illumination received by this surface (positive or zero). This said, the surface reflectances observed in nature can be very low, of the order of 0.01 to 0.03 for example, in the following cases:

      • in cloud or topographic shadows
      • on slopes facing away from the sun
      • over water or lava flows
      • over dense vegetation in the visible

      Atmospheric corrections are not without error. Our estimates of MAJA's performance have shown that the standard deviation of atmospheric correction errors on locally uniform scenes is about 0.01. These errors are probably mainly due to errors in estimating the optical thickness of aerosols, or to errors in the choice of the aerosol type. MAJA is however one of the software packages that provides the best atmospheric correction performance, as shown by the ACIX-I experiment.

      RMS errors for surface reflectances obtained by different atmospheric correction methods, compared to reflectances obtained using in-situ optical properties from the Aeronet network, using the 6SV radiative transfer code. These results are from the ACIX-I experiment.  As the LaSRC chain also uses 6SV, this criterion gives a significant advantage to this atmospheric correction method. These performances do not take into account the adjacency effects and the quality of their correction. For each wavelength, the best performances are written in red, and the second best performances in blue.



      A standard deviation of 0.01 means that in about 1% of the cases, the errors can be greater than 0.03. In this case, the reflectances of the few cases described above can become negative. They occur in general when the optical thickness of aerosols is overestimated. The errors of the surface reflectances can also be larger than the estimate provided above, due to adjacency effects and residuals of their correction. We are currently working on improving the adjacency effects correction, especially using the ROSAS station located in Lamasquère.
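The mechanism described above can be illustrated with a toy calculation. This is not MAJA's actual algorithm, just a simplified Lambertian correction with invented numbers, showing how an overestimated aerosol load removes too much atmospheric path reflectance and drives a dark pixel negative.

```python
# Toy illustration (not MAJA's algorithm) of how overestimating the aerosol
# optical thickness makes a dark pixel's surface reflectance negative.
def correct(toa_reflectance, path_reflectance, transmittance):
    """Simplified Lambertian atmospheric correction (illustrative only)."""
    return (toa_reflectance - path_reflectance) / transmittance

toa = 0.045  # dark target at top of atmosphere (e.g. water or shadow)

# Correct aerosol estimate: a small but positive surface reflectance
rho_surface_true = correct(toa, path_reflectance=0.030, transmittance=0.85)

# Overestimated aerosol load: too much path reflectance is subtracted
rho_surface_biased = correct(toa, path_reflectance=0.050, transmittance=0.85)

print(rho_surface_true, rho_surface_biased)  # the second value is negative
```

All numerical values here are invented for illustration; only the sign behaviour matters.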

      So how do we deal with the unavoidable negative reflectances in MAJA? We provide two types of output products: reflectances before correction of topographic effects (coded SRE, for Surface REflectances), and reflectances after this correction (coded FRE, for Flat surface REflectances):

      • SRE : we leave the negative reflectances in the product, because we see no point in hiding these errors, which are also present on all the pixels
      • FRE : as the correction of terrain effects can multiply the reflectances by up to 5, and thus make them even more negative, we set these reflectances to zero.
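The two behaviours can be sketched in a few lines of numpy. This is illustrative only, not MAJA code: the topographic correction is reduced to a single made-up multiplicative factor.

```python
import numpy as np

# SRE-like behaviour: negative reflectances are kept as they are.
sre = np.array([0.12, -0.004, 0.03, -0.02])  # surface reflectances with errors

# FRE-like behaviour: the topographic correction (here a made-up factor of
# 1.5) can amplify negative values, so they are clipped to zero.
fre = np.clip(sre * 1.5, 0.0, None)

print(sre)
print(fre)  # no negative values remain
```

The `np.clip(..., 0.0, None)` call bounds the values from below only, which matches the "set to zero" rule described for FRE products.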


    • sur Can surface reflectances be negative?

      Posted: 6 April 2021, 9:32am CEST by Olivier Hagolle


      Here is a question I receive quite frequently:

      I noticed in (such and such Sentinel-2 L2A product corrected for atmospheric effects by MAJA) reflectances that are negative or equal to zero. Is this normal?

      No, it is not normal, but it is not totally surprising either. Unlike negative reflectances, which should not exist, reflectances greater than one do exist. This is explained here.

      A reflectance should be positive, since it corresponds, up to a normalization factor, to the ratio of the radiance reflected by the Earth's surface (positive or zero) to the irradiance received by this surface (positive or zero). That said, the surface reflectances observed in nature can be very low, on the order of 0.01 to 0.03, for example in the following cases:

      • in cloud or terrain shadows
      • on slopes facing away from the sun
      • over water or lava flows
      • over dense vegetation in the visible

      Atmospheric corrections are not free of errors. Our estimates of MAJA's performance have shown that the standard deviation of atmospheric correction errors on locally uniform scenes is on the order of 0.01. These errors are probably mainly due to errors in estimating the aerosol optical thickness, or to errors in the knowledge of the aerosol type. MAJA is nevertheless one of the software packages providing the best atmospheric correction performance, as shown by the ACIX experiment.

      RMS errors for surface reflectances obtained by different atmospheric correction methods, compared with reflectances obtained using the in-situ aerosol optical properties from the Aeronet network and the 6SV radiative transfer code. These results come from the ACIX-I experiment. As the LaSRC chain also uses 6SV, this criterion gives that atmospheric correction method a significant advantage. These performances do not take into account the adjacency effects or the quality of their correction. For each wavelength, the best performance is written in red, and the second best in blue.



      A standard deviation of 0.01 means that in about 1% of cases the errors can exceed 0.03. In those cases, the reflectances in the situations listed above can become negative. This generally happens when the aerosol optical thickness is overestimated. The surface reflectance errors can also be larger than the estimate given above, because of adjacency effects and the residuals of their correction. We are currently working on this, in particular using the ROSAS station installed in Lamasquère.

      How do we handle the unavoidable negative reflectances in MAJA? We provide two types of output products: reflectances before correction of topographic effects (coded SRE, for Surface REflectances), and reflectances after this correction (coded FRE, for Flat surface REflectances)

      • SRE: we leave the negative reflectances in the product
      • FRE: as the correction of terrain effects can multiply the reflectances by up to 5, and thus make them even more negative, we set these reflectances to zero.

      One consequence of negative reflectances is that the NDVI can become greater than one (for SRE products) or equal to one (for FRE products). We studied this question in an article.
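A quick numerical check makes this NDVI behaviour obvious. The reflectance values below are invented, but the sign logic is general: a negative red reflectance makes the numerator larger than the denominator.

```python
# NDVI = (NIR - Red) / (NIR + Red); toy values, SRE-like product where
# negative reflectances are kept.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(0.30, 0.05))   # dense vegetation with valid red: NDVI stays below 1
print(ndvi(0.30, -0.01))  # negative red reflectance: NDVI exceeds 1
```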


    • sur ROSAS first rosaces

      Posted: 26 March 2021, 7:45pm CET by Olivier Hagolle


      These space engineers are fast and efficient! Just a week after our ROSAS station in Lamasquère was erected, we already have the first BRDF measurements, the production of which also required processing calibration sequences. This was done thanks to our CNES colleagues from the service in charge of measurement physics in optics, especially Lucas Landier, Sebastien Marcq and Aimé Meygret, and the exploitation team (Nicolas Guilleminot). Although the installation of the system degraded the uniformity of the existing cover crop, the orders of magnitude of the reflectances are as expected; we can even see the shadow of the mast in the 120° azimuth direction in the top-left graph.

      Let's wait for the corn to be sown, and we should have much cleaner BRDFs.


      Polar diagrams of surface reflectances measured by our ROSAS station in Lamasquère. In this not very intuitive representation, the 0° azimuth corresponds to observations towards the South. The top-left image was taken in the morning, the top right around noon, the bottom left in the afternoon, and the bottom right later on, after the arrival of clouds. The yellow dots indicate the position of the sun. The radius of the graph corresponds to the zenith angle, and the angular dimension is the azimuth with respect to the North.



    • sur Start-up of the new ROSAS station for bi-directional reflectance measurements in Lamasquère

      Posted: 23 March 2021, 10:44am CET by Jérôme Colin


      At last! We announced it in March 2020, and here it is, one year later! From lock-downs to constraints related to crop growth stages or soil wetness, we were forced to postpone the operations several times. Finally, the ROSAS* station in Lamasquère (South-Western France) sent its very first measurements on March 17, 2021.

      Let us recall that the ROSAS protocol is based on the use of a multi-spectral photometer to carry out angular and spectral measurements not only of the incident radiation, but also of the radiation reflected by the surface. It is thus possible, after processing, to derive the bi-directional reflectance (BRDF) of the surface of the measurement site. Together with the CNES ROSAS station in La Crau (France) and the CNES/ESA station in Gobabeb (Namibia), the CESBIO Lamasquère station is the third site of its kind in the world, and the first to characterize an agricultural vegetated surface, with seasonal and inter-annual variations of the cover.
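The basic quantity behind these measurements can be sketched very simply. This is a deliberately simplified view (the real ROSAS processing chain involves calibration sequences and angular resampling, none of which is shown here), and the numerical values are invented.

```python
import math

# Simplified reflectance factor: pi * reflected radiance / incident irradiance.
# This is the elementary quantity from which a BRDF is built up, one viewing
# and illumination geometry at a time.
def reflectance_factor(reflected_radiance, incident_irradiance):
    return math.pi * reflected_radiance / incident_irradiance

# Invented values: radiance in W m-2 sr-1, irradiance in W m-2
print(reflectance_factor(30.0, 500.0))
```

Repeating this measurement over many sun and view angles, as the masthead robot does, is what populates the polar BRDF diagrams shown in the other post.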

      This station will allow us to validate the satellite surface reflectances (corrected for atmospheric effects) in difficult cases, since :

      • when crops are very green and dense, the surfaces are dark and the atmospheric correction errors have a strong impact on the reflectance estimates;
      • when the crops are mature or the plot is bare ground, the adjacency effects due to the nearby forest become strong.

      Feel free to refer to our previous article to read the detailed motivations that led us to set up this new site.

      In spite of a not-so-promising weather forecast, which was fortunately systematically contradicted by the facts (well ok, it was quite chilly and it rained on Friday), Hery and Mohamad from CIMEL were able to proceed with the installation and wiring of the instrumentation on the mast on the ground as early as Tuesday, March 16. This includes: a masthead robot, a lightning rod, an inclinometer, a GSM antenna, a solar panel, a hygrometric probe, a GPS antenna and an acquisition box, as well as the multiple corresponding cables.

      Once the mast was equipped, Jean-Philippe, from the Lamothe farm, lifted the whole assembly on Wednesday, March 17. The manoeuvre gave us some cold sweats, but went well, causing only minor damage to the grounding cable, which was quickly repaired.

      The sensor is installed on a tilting mast, which greatly facilitates the maintenance of the whole instrumentation. The last step, once the mast was in place, was to put the photometer on the robot, with its collimator, and to pre-set the alignment of the photometer.

      The sunshine on Thursday allowed us to validate the alignment of the sensor and to check the proper functioning of the data transmission to CESBIO, and the rain on Friday allowed us to validate the proper functioning of the hygrometric sensor, which stops the acquisitions in case of precipitation (who said we should avoid rainy days?).

      The acquisitions started when the sun showed up on Saturday, March 20. The data are automatically transmitted to CESBIO and CNES every hour via the cell phone network (GPRS). While the cover crop currently in place is not very interesting in terms of BRDF (especially because we trampled it quite a lot around the mast these days...), the period from the end of April to the end of August will cover a new maize crop cycle (the now famous 4-metre-high maize from Lamasquère!). So we will surely publish interesting new data very soon!

      We wish to send a big THANK YOU to the CIMEL team for its efficiency and good mood, to Baptiste from CESBIO for the inclinometer device, to Cédric Hillembrand from the OMP IT department for the data server, as well as to Gervais and Jean-Philippe from the Lamothe farm for their decisive help in lifting the mast and for hosting our station on their land!

      (from right to left: Mohamad (CIMEL), Hery (CIMEL) and Jérôme (CESBIO), all three relieved to see the mast standing up)

      *RObotic Station for Atmosphere and Surface characterization

    • sur Start-up of the new ROSAS station for bi-directional reflectance measurements in Lamasquère

      Posted: 23 March 2021, 10:43am CET by Jérôme Colin


      At last! We announced it in March 2020, and here it is, operational one year later. Between lock-downs and constraints related to the state of the crops and the soil, we were forced to postpone the operations many times. Finally, the ROSAS* station in Lamasquère (Haute-Garonne, France) made its first measurements on March 17, 2021.

      As a reminder, the ROSAS protocol relies on a multi-spectral photometer to make angular and spectral measurements not only of the incident radiation, but also of the radiation reflected by the surface. After processing, it is thus possible to derive the bi-directional reflectance (BRDF) of the surface of the measurement site. Together with the CNES ROSAS stations in La Crau (France) and the CNES/ESA station in Gobabeb (Namibia), the CESBIO station in Lamasquère is the third site of this kind in the world, and the first to characterize an agricultural vegetated surface, with seasonal and inter-annual variations of the cover. This station will allow us to validate satellite surface reflectances (corrected for atmospheric effects) in difficult cases. Indeed:

      • when crops are very green and dense, the surfaces are dark and atmospheric correction errors have a strong impact on the reflectance estimates;
      • when the crops are mature or the plot is bare soil, the adjacency effects due to the nearby forest become strong.

      Feel free to refer to our previous article for the detailed motivations that led us to set up this new site.

      Despite unpromising weather forecasts, fortunately systematically contradicted by the facts (well, it was quite chilly and it rained on Friday), Hery and Mohamad from CIMEL were able to install and wire the instrumentation on the mast on the ground as early as Tuesday, March 16. This includes: a masthead robot, a lightning rod, an inclinometer, a GSM antenna, a solar panel, a hygrometric probe, a GPS antenna and an acquisition box, as well as the many corresponding cables.

      Once the mast was equipped, Jean-Philippe, from the Lamothe farm, lifted the whole assembly on Wednesday, March 17. The manoeuvre gave us a few cold sweats, but went well, causing only minor damage to the grounding braid, which was quickly repaired.

      The mast is a tilting one, which makes maintenance of the instrumentation much easier. The last step, once the mast was in place, was therefore to mount the photometer on the robot, with its collimator, and to pre-adjust the alignment of the assembly.

      Thursday's sunshine allowed us to validate the alignment of the sensor and to check the proper functioning of the data transmission to CESBIO, and Friday's rain allowed us to validate the operation of the hygrometric probe, which stops the acquisitions in case of precipitation (so bad weather has its upside too).

      The acquisitions therefore began with the return of the sun on Saturday, March 20. The data are automatically transmitted to CESBIO and CNES every hour via the mobile phone network (GPRS). While the cover crop currently in place is not very interesting in terms of BRDF (in particular because we trampled the faba beans around the mast quite a lot...), the period from the end of April to the end of August will cover a new maize crop cycle (the now famous 4-metre-high maize of Lamasquère!). So we will certainly publish interesting new data very soon!

      We send a big THANK YOU to the CIMEL team for their efficiency and good mood, to Baptiste from CESBIO for the inclinometer device, to Cédric Hillembrand from the OMP IT department for the data reception server, and to Gervais and Jean-Philippe from the Lamothe farm for their decisive help in lifting the mast and for hosting our station on their land! The purchase of the station was funded by the State-Region planning contract (CPER); thanks to those who submitted the project (Eric Ceschia, Valérie le Dantec...).

      (from right to left: Mohamad (CIMEL), Hery (CIMEL) and Jérôme (CESBIO), all three relieved to see the mast standing)

      *RObotic Station for Atmosphere and Surface characterization

    • sur Several issues found in recent papers on cloud detection published in MDPI remote sensing

      Posted: 11 March 2021, 4:19pm CET by Olivier Hagolle

      In the last few months, several papers on Sentinel-2 cloud detection have been published in the MDPI Remote Sensing journal. We found large errors or shortcomings in two of these papers, which should not have been allowed through by the reviewers or editors. The third one is much better, even if we disagree with one of its conclusions.


      Before analyzing the papers, let's review a few points that anyone interested in the performance of Sentinel-2 cloud masks should know.

      1- False cloud negatives (cloud omissions) are worse than false cloud positives:
      • given that Sentinel-2 observes the same pixel every fifth day, false cloud positives only reduce the number of data available for processing, but one can expect the same pixel to be available and clear (and classified as such) 5 or 10 days before or after. Of course, systematic false positives, such as the classification of bright pixels as clouds, should be avoided, as they would mean such a pixel would never be available during the long period in which it is bright;
      • false cloud negatives can degrade the analysis of a whole time series of surface reflectance, yielding a wrong estimate of bio-physical variables or of land cover classification for instance.
      Due to the difference of observation angles between bands, the edges of the cloud have a different color.

      2 - Cloud masks should be dilated, for at least three reasons:
      • all Sentinel-2 spectral bands do not observe the surface in the exact same direction. As the cloud mask is made using a limited number of bands, it is necessary to add a buffer around it so that the clouds are masked in all the bands;
      Clouds have fuzzy edges which can be hard to detect, hence the interest of dilation.
      • cloud edges are often fuzzy, and the pixels in the cloud neighborhood can be affected by the fringes of the cloud;
      • clouds scatter light around them, and the measurement of surface reflectances is disturbed by this effect, named "adjacency effect" in remote sensing jargon.

      For these reasons, in our software MAJA, we recommend using a parameter which dilates the cloud mask by 240 metres. This dilation is a parameter, and different cloud masks should be compared using the same value of this parameter. Dilating the cloud mask lowers the false negatives and increases the false positives, and overall it reduces the noise due to clouds in surface reflectance time series.
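The dilation step above is a standard binary morphology operation. The following is an illustrative sketch, not MAJA's actual code: a 240 m buffer corresponds to 12 pixels at 20 m resolution, and the dilation is done here with a square structuring element using only numpy.

```python
import numpy as np

# Dilate a boolean cloud mask by `radius` pixels with a square structuring
# element, implemented with shifted copies of the mask (illustrative only).
def dilate(mask, radius):
    out = mask.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

cloud = np.zeros((50, 50), dtype=bool)
cloud[20:25, 20:25] = True          # a small 5x5 detected cloud

dilated = dilate(cloud, radius=12)  # 240 m buffer at 20 m resolution
print(cloud.sum(), dilated.sum())   # the dilated mask covers far more pixels
```

Note that `np.roll` wraps around the array edges, which is harmless here because the toy cloud sits well inside the image; a production implementation would pad instead.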

      3- Cloud shadows or clouds are both invalid pixels:
      • clouds and cloud shadows disturb surface reflectance time series. These pixels are therefore invalid for most analyses. We do not know of any user who really needs to know whether an invalid pixel is in fact a shadow or a cloud. Moreover, a pixel can very often be both a cloud and a shadow, as some clouds partly cover their own shadows. It is therefore not very useful to separate the two classes in the validation, as it can introduce differences if the hypotheses differ between the reference and the cloud detection method.

      We have followed these guidelines in our own cloud mask validation paper, also published in remote sensing, and provided more details about their justification. This being said, we can now analyze the three papers.

      Sanchez et al: Comparison of cloud cover detection algorithms on Sentinel-2 images of the Amazon tropical forest

      A typical image over the Amazon (here, Surinam), with lots of broken clouds.

      This paper compares the quality of the FMask, Sen2cor and MAJA cloud masks over the Amazon forest, which has an extremely high cloud cover. Such a cloud cover is not favorable to MAJA, which is a multi-temporal method that works better when the surface is seen cloud-free at least once per month. We are therefore not surprised that MAJA is not at its best in this comparison. The paper relies on a careful elaboration of fully classified reference images. It is a serious piece of work, but it still includes at least 4 shortcomings concerning MAJA's evaluation:

      • MAJA's cloud mask was improperly decoded, and the authors wrongly concluded that MAJA could not detect cloud shadows. This error has been acknowledged by the authors. The attached image shows that, for the images used by Sanchez et al, the cloud shadows are indeed detected.

        One of the images for which Sanchez et al wrote that MAJA could not detect shadows.

      • MAJA dilates the clouds, but the authors compared the dilated cloud masks to non-dilated "reference cloud masks". In the Amazon region, where the clouds often form large fields of small cumulus, this approximation can lead to very large differences. MAJA's dilation parameter could have been tuned to use the same hypothesis as the reference (no dilation), but it wasn't. The differences are therefore counted as false positives for MAJA, which inequitably degrades its performance;
      • The authors considered shadows and clouds as two different categories. As explained in the introduction, differences in hypotheses on the classification of pixels with clouds above shadows can introduce errors in the evaluation of the performance of the method, which is of no interest to the users;
      • Moreover, the Sentinel-2 mission only became fully operational (5-day revisit over all lands) after November 2017. Before that, the revisit over Amazonia was 20 days until July 2017, and 10 days after Sentinel-2B entered operations. Two thirds of the images used in the paper by Sanchez et al were obtained before July 2017. In these conditions, the average frequency of cloud-free observations was lower than one every three months, or 4 per year, quite far from the one cloud-free observation per month required for MAJA to work optimally. This is of course not representative of MAJA's performance for the rest of the life of Sentinel-2 (at least 15 years).

      We asked MDPI for a correction, but while MDPI tries to be fast in the review process, it is not as fast at recognizing errors and publishing corrections. We signaled the errors in September, asking how to correct them. We received an answer at the end of October and submitted our comment in November; it was finally published in March, after a minor revision in January which requested us to change only one single word. With our comment, MDPI also published an answer from the authors, who acknowledged the decoding error, but did not bother to separate the results obtained with a 20-day revisit from those obtained with a 5-day revisit. MDPI did not insist, which we found, to say the least, disappointing. Meanwhile, this paper with false results has been cited 9 times.

      EDIT: as this blog post has had some success, I have received some feedback, and one of the actual reviewers of the paper told me he had submitted comments close to ours; these comments were disregarded by the authors, and the paper was accepted by MDPI while the reviewer still recommended major revisions.

      Zekoll et al: Comparison of Masking Algorithms for Sentinel-2 Imagery

      Comparison of cloud masks over Naples, for MAJA (left) and Sen2cor (right). The detected clouds are circled in green.

      This paper compares three cloud detection codes, FMask, ATCOR and Sen2cor, by comparing the cloud masks generated by the automatic methods to reference data produced manually:

      "Classification results are compared to the assessment of an expert human interpreter using at least 50 polygons per class randomly selected for each image".

      The method of Sanchez et al used fully classified images, and so did ours, but the one used by Zekoll et al is based on selected polygons, which might be less accurate because it is highly dependent on the choice of the samples. For instance, with such a method, one tends not to select samples near the cloud edges, because it is hard to do manually. But cloud edges are one of the most difficult cases, while the center of a cloud is usually easier to classify automatically. Cloud edges are also one of the cases where the Sen2cor classification is often wrong; avoiding sampling them is a convenient way to obtain good results. The paper does not show any example of the reference classification, which is described in one sentence and a graph, so the reader can only hope that the work was done properly.

      The paper also contains a sentence that should have shocked a good reviewer:

      "However, dilation of Sen2Cor cloud mask is not recommended with the used processor version because it is a known issue that it misclassifies many bright objects as clouds in urban area, which leads to commission of clouds and even more if dilation is applied."

      It could be translated as: "let's avoid the dilation, or it would reveal the real value of Sen2cor". How can a reviewer accept such a statement? Yes, the disclaimer is present in the paper, but the performance quoted in the abstract and conclusion does not take it into account. It is therefore misleading.

      And finally, the most beautiful construction in the paper is in the abstract:

      "The most important part of the comparison is done for the difference area of the three classifications considered. This is the part of the classification images where the results of Fmask, ATCOR and Sen2Cor disagree. Results on difference area have the advantage to show more clearly the strengths and weaknesses of a classification than results on the complete image. The overall accuracy of Fmask, ATCOR, and Sen2Cor for difference areas of the selected scenes is 45%, 56%, and 62%, respectively. User and producer accuracies are strongly class and scene-dependent, typically varying between 30% and 90%. Comparison of the difference area is complemented by looking for the results in the area where all three classifications give the same result. Overall accuracy for that “same area” is 97% resulting in the complete classification in overall accuracy of 89%, 91% and 92% for Fmask, ATCOR and Sen2Cor respectively."

      Instead of giving in the abstract the overall accuracy for all reference data sets, which is not good (despite using non-dilated reference cloud masks), the authors found a way to show the fast reader that the "overall accuracy is 92% for Sen2cor". You need to read carefully between the lines to understand that this holds only for the pixels on which the three methods agree, i.e. for the pixels which are easy to classify. The real performance for cloud detection is available but lost in the results:

      Fmask performs best for the classification of cloud pixels (84.5%), while ATCOR and Sen2Cor have a recognition rate of 62.7% and 65.7%, respectively

      The corresponding user accuracy of FMask is low, but most of the cloud commission errors are due to the dilation.
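The scores traded back and forth in these papers (overall accuracy, producer accuracy, user accuracy, specificity) all come from the same 2x2 confusion matrix, so it is worth stating their definitions once. The counts below are invented for illustration; they do not come from any of the cited papers.

```python
# Cloud-detection scores from a 2x2 confusion matrix (cloud vs clear).
# All counts are invented, purely to illustrate the definitions.
tp, fn = 850, 150    # cloud pixels: correctly detected / missed (omissions)
fp, tn = 40, 8960    # clear pixels: falsely flagged (commissions) / correct

overall_accuracy = (tp + tn) / (tp + fn + fp + tn)
producer_accuracy = tp / (tp + fn)  # a.k.a. recall or sensitivity
user_accuracy = tp / (tp + fp)      # a.k.a. precision
specificity = tn / (tn + fp)

print(overall_accuracy, producer_accuracy, user_accuracy, specificity)
```

The toy numbers also show why overall accuracy alone can mislead on cloud masks: with mostly clear pixels, it stays high even when 15% of clouds are missed, which is exactly the kind of false negative that damages a time series.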

      To conclude, here is one element which explains the low quality of the paper:

      Received: 1 December 2020 / Revised: 25 December 2020 / Accepted: 27 December 2020 / Published: 4 January 2021

      Congratulations to the MDPI review process, which accepted this paper in less than one month, in a period that includes the Christmas and New Year break. Let me recall that our negotiation with MDPI to add a comment to Sanchez et al took seven months.

      I have to add that I just saw a presentation at the S2 Validation Team meeting by the first author, V. Zekoll, a PhD student. The presentation was much better than the paper: it focused only on the comparison of the three methods she studied, and did not even show any comparison with the reference. This blog post does not aim at blaming her, but rather the editing process.

      Cilli et al: Machine Learning for Cloud Detection of Globally Distributed Sentinel-2 Images

      The last paper, by Cilli et al, avoids most of the traps into which the other papers fell. It takes the necessary dilation into account, even though the reference mask was not dilated. The reference validation data set was a good one, based on fully classified images. In fact, it was the data set we generated three years ago and made available to the public. We are happy it was useful and well used. However, the paper missed the necessity of detecting cloud shadows; I just suppose it is a work in progress.

      The paper compares machine learning approaches and more classical threshold-based methods, including MAJA, Sen2cor and FMask. The machine learning method uses a database by Hollstein et al as its training data set, and evaluates all methods against CESBIO's data set.

      The conclusions correspond to how we designed MAJA. MAJA is the most sensitive method, but has some false positive clouds, which mostly correspond to the dilation applied in the products generated by MAJA. The authors also note that MAJA has no false positives on bright pixels, while the other methods do.

      The conclusion of the paper for the threshold-based methods is as follows:

      "In general, MAJA resulted in the most sensitive (95.3%) method and FMask resulted in the most precise (98.0%) and specific (99.5%); however, it is worth noting that according to specificity the difference with Sen2Cor is negligible (99.4%)."

      The SVM method developed by Cilli et al is less sensitive than MAJA but more precise (probably because its dilation is thinner).

      However, I am puzzled by the following sentence:

      "These findings should be taken into consideration as the main purpose of cloud detection is avoiding false positives, especially for change detection or land cover applications."

      I had an email exchange with some of the authors, who come from the machine learning domain and are rather new to remote sensing, and they recognize that, as Sentinel-2 provides time series, it is more relevant to avoid false negatives than false positives. The authors concede that they did not take into consideration that Sentinel-2 data are not single images, but time series of images. This is again a questionable position that passed through the MDPI review process.

      Conclusion

      Number of special issues per MDPI journal, source: [https:]

      Validating cloud masks is not as easy as it seems. It involves numerous pitfalls, and requires both a good understanding of the strategies and tuning of cloud mask processors, and a robust methodology. Falling into one of these traps is not shocking in itself, and can fuel scientific discussions. But the fact that the cited articles go into publication with such shortcomings despite the usual revision process is surprising. This questions both the very fast review process of MDPI and the multiplication of guest editors who may not be specialists.

      To finish, I have to say that I have published several papers in MDPI journals, and I appreciated that the review process was quick and the reviews not too severe, so my criticisms can apply to my own papers. Moreover, I have been a guest editor for 4 special issues of MDPI over the years:

      My feedback on these special issues is that it is easy to open one, but also that MDPI applies pressure for fast processing. This pressure results in only one member of the guest editing board handling all the papers, because it takes too long to coordinate on who will handle each paper. For the SPOT (Take5) special issue, I handled most papers, except those of which I was a co-author. For the other special issues, the main guest editor did most of the work, except for the few papers that fell exactly within my field of expertise.


      Written by O.Hagolle and J.Colin

    • sur TropiSCO : a project to monitor tropical deforestation with Sentinel-1 on a weekly basis

      Posted: 20 February 2021, 9:42pm CET by Stéphane Mermoz


      Do we need to remind you of all the collateral damage linked to deforestation, or of the ecosystem services that forests provide? We have already talked about these in this blog (here and there too). Yet forests are disappearing at an alarming rate. Between 1990 and 2020, an area of forest equal to more than three times that of metropolitan France has disappeared. The tropical forest, which accounts for half of the world's forests, is seriously threatened: in 2019, the equivalent of a soccer stadium was destroyed every two seconds (FAO, 2020).

      Now, France is about to acquire a tool for monitoring deforestation thanks to the TropiSCO project, which has just been labelled by the Space Climate Observatory, which we call the SCO. Within the framework of this project, the deforestation detection method developed by CESBIO, GlobEO and CNES (Bouvet et al., 2018; Ballère et al., 2021) will be applied to humid forests throughout the tropics, and possibly to temperate and boreal forests. This observation tool will be ready within 18 months, and the produced data will be made public.

      This deforestation detection method is especially suited to the tropics because it is based on data from the Sentinel-1 radar satellite, which is almost insensitive to the clouds that obstruct most optical images in these regions. Deforestation is therefore detected every week regardless of weather conditions, at 10 m resolution. According to Ballère et al. (2021), in a third of the cases our method detects deforested areas more than 3 months ahead of the method used by the Maryland GLAD team (Hansen et al., 2016), which is based on optical data from Landsat. Our method has already been applied to different areas of the tropics (French Guiana, Peru, Gabon, Vietnam, Laos and Cambodia) and successfully validated. TropiSCO is therefore an early warning system, but not only that, since the results can also be used to reliably compute annual deforestation statistics.

      The data produced are likely to be of interest to many users, including governments, NGOs, universities and the general public, but also companies wishing to reduce the risk of deforestation in their supply chains, and fire monitoring actors.

      It is important to note that similar initiatives based on the use of Sentinel-1 data are currently emerging, such as the Wageningen University's RADD alert system (Reiche et al., 2021) and the Brazilian alert system (Doblas et al., 2020). However, our method based on radar shadow detection has the advantage of effectively avoiding false alarms.
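      The radar shadow principle cited above (Bouvet et al., 2018) can be caricatured in a few lines: a clearing creates a new, permanent radar shadow, i.e. a backscatter drop that persists in every subsequent Sentinel-1 acquisition, whereas transient drops (rain, soil moisture changes) recover quickly. The sketch below is only an illustration of that idea, not the actual TropiSCO algorithm; the threshold, confirmation window and time series values are invented for the example.

```python
import numpy as np

def detect_shadow_alert(series, drop_db=-6.0, n_confirm=3):
    """Return the index of the first acquisition where backscatter (dB)
    drops by more than `drop_db` relative to the pre-event median and
    stays low for `n_confirm` consecutive acquisitions, else None."""
    series = np.asarray(series, dtype=float)
    for t in range(1, len(series) - n_confirm + 1):
        baseline = np.median(series[:t])          # pre-event reference level
        window = series[t:t + n_confirm]          # candidate post-event window
        if np.all(window - baseline <= drop_db):  # persistent drop => alert
            return t
    return None

# Toy VH backscatter time series (dB): stable forest, then a clearing.
forest = [-11.8, -12.1, -11.9, -12.0, -12.2, -19.5, -19.8, -20.1, -19.9]
print(detect_shadow_alert(forest))  # → 5
```

Requiring the drop to persist over several acquisitions is what filters out transient false alarms: a single low-backscatter date surrounded by normal values never satisfies the confirmation window.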

      The project's labeling by the SCO provides us with funding to precisely define the architecture of the production system and finalize the demonstration mock-ups to distribute the first data and obtain feedback from users. In parallel, and with the assistance of the SCO team at CNES, we will quickly finalize the financing of the project development and production for three years.

      Monitoring of rubber tree cutting in the north of Ho Chi Minh City in Vietnam between 2018 (yellow) and 2020 (red). In gray, the areas that were not cut during the period.


      References:

      Ballère, M., Bouvet, A., Mermoz, S., Le Toan, T., Koleck, T., Bedeau, C., ... & Lardeux, C. (2021). SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. Remote Sensing of Environment, 252, 112159.

      Bouvet, A., Mermoz, S., Ballère, M., Koleck, T., & Le Toan, T. (2018). Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. Remote Sensing, 10(8), 1250.

      Doblas, J., Shimabukuro, Y., Sant’Anna, S., Carneiro, A., Aragão, L., & Almeida, C. (2020). Optimizing Near Real-Time Detection of Deforestation on Tropical Rainforests Using Sentinel-1 Data. Remote Sensing, 12(23), 3922.

      FAO : State of the World’s Forests 2020. []

      Hansen, M. C., Krylov, A., Tyukavina, A., Potapov, P. V., Turubanova, S., Zutta, B., ... & Moore, R. (2016). Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters, 11(3), 034008.

      Reiche, J., Mullissa, A., Slagter, B., Gou, Y., Tsendbazar, N. E., Odongo-Braun, C., ... & Herold, M. (2021). Forest disturbance alerts for the Congo Basin using Sentinel-1. Environmental Research Letters, 16(2), 024005.



    • sur CESBIO needles China

      Posted: 11 February 2021, 10:54am CET by Olivier Hagolle

      It is not every day that Le Monde cites our laboratory! For several months, an intense source of interference has been disturbing SMOS data, in a frequency band normally protected for observation, over a large part of South-East Asia, making the data unusable.

      SMOS data, which provide soil moisture estimates at the global scale, are very useful in meteorology, so these interference sources degrade forecasts in the affected regions. After following the normal alert procedures, investigating and finding the origin of the source, and alerting the scientific community, our colleagues had no option left but to appeal to public opinion. Admittedly, this article may do no more harm than a pebble in a shoe, but we hope this campaign will get the source switched off.


    • sur Pléiades images of the Uttarakhand disaster

      Posted: 9 February 2021, 9:24pm CET by Simon Gascoin

      The Indian Space Agency (ISRO) activated the International Charter "Space and Major Disasters" to image the area of the disaster in Uttarakhand (excellent visualisation here). Thanks to CNES and Airbus DS, Pléiades images (resolution: 70 cm in panchromatic, 2.8 m in multispectral) were acquired today, 09 Feb 2021, two days after the event. These images show the detachment area, with a clear rupture line of 550 m on the north face of Ronti.

      Here is a comparison with the latest Sentinel-2 image acquired before the flood.


      The wall.. Impressive post-event image by @cnes @AirbusSpace Pléiades #Chamoli

      — Simon Gascoin (@sgascoin) February 10, 2021

      Preliminary work by many scientists suggests that a rock-slope failure released a mixture of rock and ice, which created a powerful flood in the valley of the Rishiganga River.

      UPDATE 10 Feb 2021. A Pléiades stereo pair has been acquired (B/H = 0.12) which allowed us to generate a high resolution 3D model of the area.

      Thanks to @cnes & Pléiades images acquired this morning, we computed a high resolution DEM of the source area of the disaster. Maybe the first post-event high resolution topography.

      — Etienne Berthier (@EtienneBerthie2) February 10, 2021

      Update 11 Feb 2021

      Two elevation difference maps were computed by Etienne. A first one by differencing the above Pléiades DEM with the Copernicus 30 m resolution DEM

      Elevation changes in the source area of the #Chamoli landslide, #Uttarakhand. Massive 150 m loss, about 100 m on average. 10 Feb. 2021 Pléiades DEM was compared to the Copernicus 30 m DEM from ~2013. Data from @CopernicusLand @cnes @AirbusSpace.

      — Etienne Berthier (@EtienneBerthie2) February 11, 2021

      Then, D. Shean (Univ. Washington) computed a pre-event DEM from WorldView-1 images, which allowed a finer analysis. The estimated detached volume is 25 million cubic meters.
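      The volume figure above comes from differencing a pre-event and a post-event DEM over the source area and integrating the elevation change over the pixel footprint. A minimal sketch of that computation (the arrays and pixel size below are invented for illustration, not the actual Pléiades or WorldView data):

```python
import numpy as np

def elevation_loss_volume(dem_pre, dem_post, pixel_size_m):
    """Sum elevation losses (where the post-event DEM is lower than the
    pre-event DEM) and convert to a volume in cubic meters."""
    dh = np.asarray(dem_post, float) - np.asarray(dem_pre, float)
    loss = np.where(dh < 0, -dh, 0.0)            # keep only elevation losses
    return float(loss.sum() * pixel_size_m ** 2)  # m * m^2 = m^3

# Toy 3x3 DEMs (meters) on a 2 m grid: one pixel loses 150 m, one loses 100 m.
pre  = np.array([[300., 300., 300.], [300., 450., 400.], [300., 300., 300.]])
post = np.array([[300., 300., 300.], [300., 300., 300.], [300., 300., 300.]])
print(elevation_loss_volume(pre, post, pixel_size_m=2.0))  # → 1000.0
```

In practice the two DEMs must first be co-registered and masked to the source area, and elevation gains (deposits) are handled separately, but the volume integral itself is this simple.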

      Elevation difference map & Pléiades digital surface model. Computed from #Worldview @Maxar by D. Shean @uwTACOlab and @EtienneBerthie2 #Chamoli

      — Simon Gascoin (@sgascoin) February 11, 2021

      Authors Etienne Berthier (CNRS/LEGOS) and Simon Gascoin (CNRS/CESBIO)

      Acknowledgements Work carried out with the support of CNES, the International Charter "Space and Major Disasters" and the DINAMIS program.

    • sur Help us measure the orange snow!

      Posted: 6 February 2021, 8:35pm CET by Simon Gascoin

      Update 22 February 2021: the campaign is over! Thanks to all participants, we have ~60 samples that we will analyze in the coming months. Stay tuned...

      A deposit of Saharan dust has covered the snow in the Alps and the Pyrenees. Calling all mountain lovers: we need your participation to study this seemingly exceptional event! If you are wondering "what for?", see the bottom of this post.

      Update! A layer of fresh snow covered the deposit on Sunday night, but samples are still useful and necessary. Simply remove most of the upper layer and sample the orange layer underneath (see the videos below).

      Then please contact us by e-mail so that we can collect the samples.

      Pyrenees:

      Alps:

      The sampling container can be of any size, as long as the area of its opening is known! For a jam jar, you just need its diameter (usually 7.5 cm). Indeed, our goal is to characterize the dust flux in grams per square meter. The ideal protocol is a 10 cm x 10 cm square. The sampling depth must be sufficient to collect the whole orange layer (5 cm should be enough).
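      The flux computation behind this protocol is straightforward: the dust mass collected in the container, divided by the area of its opening, gives grams per square meter. A minimal sketch (the collected mass below is an invented example value):

```python
import math

def jar_opening_area_m2(diameter_cm):
    """Area of a circular jar opening, with the diameter given in cm."""
    r_m = (diameter_cm / 100.0) / 2.0   # cm -> m, diameter -> radius
    return math.pi * r_m ** 2

def dust_flux_g_per_m2(dust_mass_g, opening_area_m2):
    """Dust deposition flux: collected mass divided by collector area."""
    return dust_mass_g / opening_area_m2

# A 7.5 cm jam jar that collected 0.02 g of dust:
area = jar_opening_area_m2(7.5)                  # ~0.0044 m^2
print(round(dust_flux_g_per_m2(0.02, area), 2))  # → 4.53 g/m^2
```

For the ideal 10 cm x 10 cm square protocol, the area is simply 0.01 m², so the flux is the collected mass multiplied by 100.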


      Sampling the sand. How to do it? Queyras version. @sgascoin @AlvaroRobledano @mpneige

      — Ghislain Picard (@gsnowph) February 10, 2021

      Here is another method, recommended by Didier Voisin (IGE, Université de Grenoble)

      Why measure this orange deposit?

      First, to characterize its intensity: was it really an exceptional event? Second, to study the impact of this deposit on the snowpack: we expect the snow to melt a bit faster, as Marie explains below. Finally, to validate and perhaps improve algorithms that measure the amount of dust from satellite images, as well as atmospheric dust transport models.

      Curious about the effect of the Saharan dust currently colouring the #Alpes & #Pyrenees massifs?

      Explanations (causes, impacts, trends), from the Caucasus to "our" mountains, by Marie Dumont @mpneige #CEN #CNRM @meteofrance @CNRS @OSUG_fr @IGE_Grenoble @CesbioLab [https:]]

      — Samuel Morin (@smlmrn) February 6, 2021

      Val d'Aran: a day unlike any other? @Aymarfreeride #pyrenees #valdaran

      — Météo Pyrénées (@Meteo_Pyrenees) February 6, 2021

    • sur What makes earthworms move ?

      Posted: 30 January 2021, 8:22pm CET by mangiarotti

      Earthworms directly rely on soil water. Yet their activity seems completely disconnected from the variations of soil water content.

      Earthworms play a key role in soil dynamics. Among them, the anecic earthworms dig vertical galleries, ingest the earth located in the first meters of the ground and deposit it at the surface in the form of small castings. Passing through their digestive system makes the soil nutrients usable by plants. Since castings are always very water-laden, water from the soil is expected to play an important role in earthworm activity. Yet the production of castings does not vary at all like the water content of the soil. The behaviour of the earthworms is therefore difficult to understand.

      The global modelling technique was used to unveil the dynamical couplings between earthworm activity and soil water content (Dong Cao basin, Viêt Nam). Results show that high water content does not generate a strong and immediate activity of earthworms, but a progressive increase, by sensitization. Conversely, low water content generates a gradual decrease in their activity, by habituation. This progressivity results in an evolution of casting production completely different from that of the water content. In return, earthworm activity will lead to very different soil evolution from one site to another, even under identical hydroclimatic conditions.

      Since the obtained models make it possible to formulate algebraically the coupling between earthworm activity and soil water content, soil moisture estimated from satellite data may now be used to monitor earthworm activity from space.
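      The sensitization/habituation behaviour described above can be caricatured with a toy first-order model (this is only an illustration, not the actual model of Mangiarotti et al., 2021, whose algebraic form is given in the paper): if casting activity relaxes slowly toward a level set by soil water content, a sudden change in moisture produces only a progressive change in activity, so the two signals look disconnected.

```python
import numpy as np

def toy_activity(water, tau=10.0, dt=1.0, a0=0.0):
    """First-order relaxation da/dt = (water - a) / tau: the activity `a`
    lags the water-content forcing with time scale `tau`."""
    a = a0
    out = []
    for w in water:
        a += dt * (w - a) / tau   # explicit Euler step toward the forcing
        out.append(a)
    return np.array(out)

# Step in soil water content: activity rises progressively, not instantly.
water = np.concatenate([np.zeros(20), np.ones(40)])
act = toy_activity(water)
print(round(act[20], 3), round(act[59], 3))  # → 0.1 0.985
```

Right after the step the activity has barely moved, and it only approaches the new level after several time constants; this lag is what makes the casting record look so different from the moisture record.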


      S. Mangiarotti, E. Fu, P. Jouquet, M. T. Tran M. Huc & N. Bottinelli, 2021. Earthworm activity and its coupling to soil hydrology: A deterministic analysis, Chaos, 31, 013134. [https:]]

      La dynamique des lombrics relève de la théorie du chaos, Olivier Blot, IRD le Mag', 21 janvier 2021.

      Photo: Earthworm casting rejected by earthworm at the soil surface (Dong Cao basin, Viêt Nam). Copyright: Pascal Jouquet.

    • sur The Astrolabe glacier is about to calve a nice iceberg

      Posted: 22 January 2021, 1:17pm CET by Simon Gascoin
      Enclosure: [download]

      On a Pléiades image acquired on 15 January, Etienne noticed a fracture crossing a good part of the tongue of the Astrolabe glacier, near the Dumont-d'Urville base in Antarctica.

      Just another iceberg in Antarctica? Maybe, but this one is at the front door of the French research station, Dumont d'Urville. When will it happen? What impacts on sea ice, logistics and penguins? @_IPEV @sgascoin @IGE_Grenoble

      — Etienne Berthier (@EtienneBerthie2) January 22, 2021

      The opening of this crevasse can be reconstructed from a series of Sentinel-1 images (a full year, from January 2020 to January 2021)

      [https:]]

      According to the latest available Sentinel-2 image, the incubating iceberg could have an area of 27 square kilometers, i.e. 4,000 soccer fields, which is vast, but much less than iceberg A68, which measured 5,800 km² when it separated from the Larsen C ice shelf!

      The calving could still take some time. An iceberg to watch...

    • sur Crop Irrigation, a new project labeled by the Space Climate Observatory

      Posted: 20 January 2021, 6:17pm CET by Valérie Demarez

      The Space Climate Observatory (SCO) is an international initiative of the One Planet Summit, launched in June 2019 at the initiative of France. The SCO brings together several space agencies around the world and international organizations (UNDP, ESA, UNEP). It aims at developing projects for local decision-makers to help them adapt to climate change. These projects help monitor climate change impacts on landscapes using satellite data, field data and local socio-economic data. The SCO works within the framework of the Paris international agreements, the 2030 Agenda for Sustainable Development, the United Nations Framework Convention on Climate Change (UNFCCC) and the strategies developed by the WMO and the Global Climate Observing System (GCOS).

      SCO France is the national version of the international initiative. It is a national network whose vocation is to bring together the scientific community, public authorities and companies around the objectives of SCO International (the study of climate change impacts and mitigation). Since 2020, it has periodically launched calls for projects, and projects labeled by the SCO can benefit from modest funding and assistance over a period of two years, to move to pre-operational or operational exploitation and find the necessary funding.

      The CESBIO project "Irrigation Grandes Cultures" has just been labeled by SCO France. Its aim is to provide spatial indicators that will enable water managers to optimize water resources and to identify adaptation strategies well suited to local issues. For the past ten years, many French departments have had to impose water restrictions, particularly for agriculture. A record was reached in 2020, with 80 departments concerned. We must therefore take action!

      The partners of this project are the CESBIO, the CNES, TETIS, the Chamber of Agriculture of the Tarn, the Syndicat Mixte d'Aménagement de la vallée de la Durance, the Regional Chamber of Agriculture of Occitania, the Regional Chamber of Agriculture of PACA, the Bureau of Geological and Mining Research, the Société du Canal de Provence and MEOSS, a company that will develop the operational tools for the management and development of the territories. This project is also supported by the Adour-Garonne Water Agency and the International Office for Water.

      It is based on the infrastructure of the Theia data center and on the work carried out by the Scientific Expertise Centers "Irrigation" and "Soil Moisture at very high spatial resolution". We will use the high-resolution mapping methods for irrigated surfaces and soil moisture developed by CESBIO and TETIS. The indicators will be estimated from the free and open images of the Copernicus program, from the Sentinel-1 radar and Sentinel-2 optical satellites. They will be combined with crop classification and water requirement models developed at the two partner laboratories (CESBIO, TETIS).

      So this is, for us, the beginning of a great adventure that aims to support water stakeholders facing a major challenge: preserving water resources as water needs increase.


    • sur CESBIO's blog audience drops, let's blame the Covid!

      Posted: 4 January 2021, 11:05pm CET by Olivier Hagolle

      2020 has been a very difficult year for everyone (but it's over!), and the multitemps blog was no exception. Our audience decreased by 13 to 20% depending on the statistics, compared to 2019, which already did not break records.

      It would be easy to blame the Covid, and I guess a large part of the time we spent this year scrolling the internet or social networks was devoted to checking the latest news and stats of the virus. But I guess a good part of the explanation lies in the fact that we wrote far fewer posts this year: 132 against 188 in 2019. The burden of the Covid is once again an explanation, but so is the fact that some of us took on new functions, started writing a dissertation (the habilitation à diriger des recherches) or simply lacked inspiration. Even if we have welcomed new authors with great posts, Julien Michel, Philippe Gamet, Amandine Rolland, Sylvain Mangiarotti, Jerôme Colin, Marie Ballère and Stéphane Mermoz, they are still a bit shy and have only produced a few posts.

      This blog is open to all CESBIO personnel, but also to our close collaborators in different labs and industries, or to the users of our products, to provide feedback. Feel free to suggest articles, it does not take long, and there is no reviewer 2.

      Maybe there are other reasons, and we would be happy to receive feedback. After 8 years of blogging, are we starting to repeat ourselves? You all know by now that Sentinel-2 is a great satellite, and that MAJA is better than Sen2cor ;).


      Comparison of page views to our blog in 2019 and 2020



      So here is the list of the most read pages this year, after having removed the lists of articles: the home page of course, the Sentinel-2 or Landsat pages, and the author pages (this year, Simon's name was clicked more than mine, on the blog I created; should I fire him?).

      2020 top posts
      Page title Views
      Mapping flooded areas using Sentinel-1 in Google Earth Engine 5042
      Radiometric quantities : irradiance, radiance, reflectance 3862
      A python module for batch download of Sentinel data from ESA 3330
      The Sentinel-2 tiles, how they work ? 2793
      Global NO2 monitor 2377
      MACCS/MAJA, how it works 2314
      The product level names, how they work ? 1758
      Land cover map production: how it works? 1709
      Aidez-nous à mesurer l’enneigement avec votre smartphone ! 1585
      THEIA's L2A product format 1395
      How to automatically download Sentinel data from PEPS collaborative ground segment 1172
      Theia’s Sentinel-2 L3A monthly cloud free syntheses 1115
      Le banc d’Arguin vu par satellite depuis 1984 1032
      S1-Tiling, on demand ortho-rectification of Sentinel-1 images on Sentinel-2 grid 876
      free and open data pays off 657

      So what may we conclude?

      • the 3 top posts are the same as last year
      • only 4 posts from 2020 made it to the top 15
      • the distribution of small free software is at the top of the list (and we get several questions a week...)
      • Simon's geophysics articles (with the associated announcements on social networks) attract crowds
      • the description of the MAJA software is a success, but half of my articles point to this page
      • the example of how to use Google Earth Engine attracts many more readers than the articles that denounce its dangers (it's sad)
      • the "How It Works" series continues to be a success
      • Level 3A products were very popular
      • the posts on Theia's product formats are useful
      • two articles on Sentinel-1 ranked in the top 15 (and a third on deforestation ranked in the top 30)
      • our forays into the economics of remote sensing are successful
      • Sentinel-HR mission articles are regularly read, although not yet in the top 15
    • sur Happy New Year 2021!

      Posted: 3 January 2021, 6:25pm CET by Olivier Hagolle


      As almost everywhere, 2020 was a gloomy year at CESBIO! Even if CESBIO was relatively spared by the disease, with a small number of mild cases so far, some of our colleagues lost relatives, and we send them our warmest thoughts! The situation is probably the same for the readers of this blog, and we sincerely hope that our readers were able to cope with this bad year as well as possible.

      But 2020 is over, and it is a great pleasure to wish you a happy year 2021. Let's hope we will soon be able to meet physically, and not just virtually.

      One of the consequences of 2020 at CESBIO is that we have a huge pile of events to celebrate, and if the Covid lets us do it, we will probably spend the end of the year partying!

CESBIO turned 25 in 2020, but we had to cancel the party that was planned for May. We hope to be able to celebrate in 2021 the first anniversary of CESBIO's 25th birthday.




The five-year mandate of CESBIO's director, Laurent Polidori, ended on December 31st. Although we organized a small party, with about ten people in the room and the rest of the laboratory on Zoom, we will have to throw a real one during one of his returns from Brazil, where he will soon be a professor. This blog is a good testimony to some of the laboratory's progress under Laurent's calm, attentive and enlightened direction.



A new team, with Mehrez Zribi as director, will lead CESBIO for the next five years. Valérie Demarez and Lionel Jarlan will be deputy directors, and Gilles Boulet and Olivier Hagolle (yes, me?) will lead CESBIO's two teams (Modelling and Observation, respectively). That calls for a celebration!

      CESBIO's new direction team
      Mehrez Zribi - Director Valérie Démarez - Deputy Director Lionel Jarlan - Deputy Director Gilles Boulet - Modelling team leader Olivier Hagolle - Observation team leader

We will also have to celebrate the retirement of two pillars of CESBIO, Yann Kerr and Gérard Dedieu, who laid the foundations of the laboratory, defined successful satellite missions, produced a vast body of literature and trained many of us. Although they will keep working with us on a voluntary basis for a few years (as emeritus researcher and SMOS PI for Yann, and as VENµS PI for Gérard), 2021 marks the beginning of a big change for the laboratory. We will still need their long-term vision and deep knowledge of the land remote sensing world.

      Happy retirees
      Yann Kerr Gérard Dedieu

We also welcomed several new researchers, promoted new doctors, started a new space mission, Trishna, and saw the departure of people important to CESBIO, such as Bernard Marciel, who took care of our building and its logistics for several years and sometimes managed to get us air conditioning before winter (which is not easy at Université Paul Sabatier).

So, if you ever come across drunk CESBIO researchers in Toulouse at the end of 2021, it will be a good sign of victory over the virus, since we will have started celebrating all the parties we keep postponing month after month.



While looking for a photograph of Gérard, I found a clue showing that he is very well prepared for his retirement, so don't worry about him.






    • sur Happy 2021 !

      Posted: 3 January 2021, 3:45pm CET by Olivier Hagolle

      As almost everywhere, 2020 has been a gloomy year at CESBIO ! Even if CESBIO has been relatively spared by the disease, with a low number of mild cases so far, a few of our colleagues lost relatives and we send them our warmest thoughts ! The situation is probably the same for the blog's audience, and we sincerely hope our readers coped with this bad year as well as possible.

      But 2020 is over now and it's a pleasure to wish you a very happy 2021. Let's wish that we will soon be able to meet in a room and not in a zoom !

      At CESBIO, we have a terrible backlog of events to celebrate, and if the COVID lets us do it, we will probably spend the end of the year partying !

CESBIO turned 25 this year, but we had to cancel the party which was scheduled in May. We hope to be able to celebrate in 2021 the first anniversary of CESBIO's 25th birthday.




The 5-year mandate of CESBIO's director, Laurent Polidori, ended on December 31st, and even if we did celebrate it, with 10 people in the room and the rest of the lab on Zoom, we will still need to really celebrate his departure next time he returns from Brazil, where he'll soon be a professor of remote sensing. This blog is a good testimony of how the laboratory progressed under his calm, attentive and enlightened direction.



A new team, with Mehrez Zribi as director, will lead CESBIO for the next 5 years. Valérie Demarez and Lionel Jarlan will be deputy directors, and Gilles Boulet and Olivier Hagolle (yes, me?) will lead the two teams of CESBIO (Modelling and Observation, respectively).



      CESBIO's new direction team
      Mehrez Zribi - Director Valérie Démarez - Deputy Director Lionel Jarlan - Deputy Director Gilles Boulet - Modelling team leader Olivier Hagolle - Observation team leader

We will also have to celebrate the retirement of two pillars of the CESBIO lab, Yann Kerr and Gérard Dedieu, who laid the foundations of the laboratory, defined successful satellite missions, produced a large corpus of literature and trained a lot of us. Although they will keep working with us on a voluntary basis for a few years (as emeritus researcher and SMOS PI for Yann, and as VENµS PI for Gérard), 2021 marks the beginning of a big change for the laboratory. We will still need their long-term vision and deep knowledge of the land remote sensing world.

      Happy retirees
      Yann Kerr Gérard Dedieu

We have also welcomed several new researchers, promoted new doctors, started a new space mission, Trishna, and seen the departure of key personnel, such as Bernard Marciel, who took care of our building and logistics team for several years, and sometimes managed to get us air conditioning before winter (which is not easy at the University Paul Sabatier). So if you ever see drunk researchers from CESBIO in Toulouse at the end of 2021, it will be a good sign that we have beaten the virus and started to celebrate all the parties we have been accumulating.


Looking for a photograph of Gérard, I found out that he is well prepared for retirement, so let's not worry about him.






    • sur Free and open data: fine, but who pays for the processing?

      Posted: 21 December 2020, 1:02am CET by Simon Gascoin

In the previous post, Olivier advocated for the open data policy in remote sensing. Although Olivier is facing some actual resistance, because the Sentinel-HR mission would step on the toes of industrial champions, my feeling is that there is now a large consensus on this issue. The economic and social benefits of the open data policy in remote sensing are well accepted, especially in the scientific community. Yes, scientists are rational people, and they prefer not to pay rather than... to pay.

      Talk about free data!

I think that the discussion should go beyond the cost of the data itself. Sentinel-2 generates 1.6 terabytes of compressed raw imagery every day. It's great that the data is free, but how do I handle that? Currently, a 1 TB hard drive costs about 50 €, so storing all Sentinel-2 data would cost me about 30 k€ every year. Let's assume my department tries to optimize the expense by subscribing to a cloud storage service. Google provides an example of pricing for a storage usage of 160 TB plus bandwidth consumption spanning multiple tiers: it costs 7500 € per month, and that storage is largely insufficient for large-scale processing of Sentinel-1 or Sentinel-2 data. This is clearly too expensive for many groups, not only research labs but also startup companies.

Of course I don't need to store all Sentinel data; I could download the files and delete them once the processing is done. Yet processing data is costly, too: Amazon CPU rates range from $94 to $2,367 per year of CPU time. As an example, generating snow and ice products over Europe from Sentinel-2 data since 2016 took about 100 years of CPU time!
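These back-of-the-envelope figures are easy to reproduce. The short sketch below simply restates the unit prices quoted above (1.6 TB/day of data, 50 €/TB of disk, $94 per CPU-year at Amazon's low end); the prices are rough assumptions, not quotes from a provider.

```python
# Back-of-the-envelope Sentinel-2 storage and processing costs,
# using the unit prices quoted in the post (rough assumptions).

DAILY_VOLUME_TB = 1.6   # compressed raw Sentinel-2 data per day
DISK_EUR_PER_TB = 50    # consumer hard drive price

def yearly_storage_cost_eur(daily_tb=DAILY_VOLUME_TB, price=DISK_EUR_PER_TB):
    """Cost of the disks needed to store one year of acquisitions."""
    return daily_tb * 365 * price

def cpu_cost_usd(cpu_years, rate_per_cpu_year=94):
    """Cloud processing cost for a given number of CPU-years (low-end rate)."""
    return cpu_years * rate_per_cpu_year

print(f"{yearly_storage_cost_eur() / 1000:.0f} k€ per year of storage")
print(f"{cpu_cost_usd(100):,} $ for the 100 CPU-years of the snow product")
```

At 50 €/TB the raw-disk figure already lands near the 30 k€ per year mentioned above, before any redundancy, servers or electricity.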

      So far the computation took approximately 1 million hours CPU, that is 1 century of CPU time ? [https:]]

      — Simon Gascoin (@sgascoin) November 12, 2020

These estimates of storage and processing costs are probably inaccurate, but they give an order of magnitude. The conclusion is that although the data is free, the cost of storage and processing is out of reach for many research projects, not to mention small businesses. This is why Google Earth Engine is a huge success in the remote sensing community (despite the warnings of some authors...). I was also struck by this sentence in the recent announcement of the release of Landsat Collection 2:

      "Collection 2 was processed in the Amazon Web Services (..) at a clip of 450,000 scenes per day—a speed that enabled the reprocessing of the entire archive in just five weeks. By comparison, it took 18 months to process Collection 1 in 2016, at a rate of 25,000 scenes each day." (USGS, 01 Dec 2020)

Due to economies of scale, unit costs of storage and processing are significantly lower for Google and Amazon than for a small company or a university. But can we rely on tech giants to do public research? Is it sustainable for a startup company to build a commercial application based on Google Earth Engine [1]? Should we build operational services for the monitoring of climate, agriculture and water resources on privately-owned data centers? The DIAS are an alternative to Google/Amazon, but the costs are still too high for many users (we investigated this option in our lab to replace our current infrastructure).

      A public cloud for the public good

I wonder how much a free, public cloud service based on open source software would cost, one that would give everyone the possibility to tap into the power of Copernicus data in a transparent and reproducible way.

      The Copernicus 2019 market report indicates that "the EU has invested 8 billions € into this program from 2008 up to 2020 (...) Over the same period, this investment will generate economic benefits between EUR 16.2 and 21.3 billion (excluding non-monetary benefits)."

The EU has made the Copernicus Earth observation program a flagship of the European space policy, thanks to the high quality of its satellite fleet and its open data policy. There is evidence that the open data policy generates more "monetary benefits" than its investment. It is time to evaluate the cost of giving everyone the ability to freely process these data, not to mention the potential economies of scale in energy consumption from concentrating computation and storage.

"A public cloud for the public good": it sounds like a nice program, doesn't it?

      Found out yesterday @ECMWF ERA5-Land is now available on #EarthEngine as well with 69 variables! Excited about the potential for easy coupling of climate and #remotesensing. To celebrate I made an animation of 18 of my favorite variables using #rstats #ggplot.

      — Philip Kraaijenbrink (@philipkraai) December 17, 2020

      [1] From Earth Engine FAQ: "Earth Engine's terms allow for use in development, research, and education environments. It may also be used for evaluation in a commercial or operational environment, but sustained production use is not allowed."

    • sur Free and open data pays off

      Posted: 12 December 2020, 10:36pm CET by Olivier Hagolle

Like most users of Copernicus or LANDSAT data, I guess, I find the advantages of free and open remote sensing data so clear that it's hard to imagine they could be challenged. However, in the framework of Sentinel-HR phase 0, we prepared arguments in case our requirement for free and open data is questioned.

      Table of contents

      Why should users pay for data ?
      The LANDSAT example
      Economic benefits of free and open data
      Social benefits of free and open data

      Why should users pay for data ?

Of course, remote sensing data from private companies (Planet, Airbus, Maxar...) are not free. The hard work of building satellites and producing data must be funded, and companies are there to earn money.

There have also been examples of state-owned satellites whose data were commercialized, such as SPOT, Pleiades, and, at certain periods, LANDSAT. With the exception of LANDSAT, most of these data were provided by satellites that perform observations on demand. These satellites are tasked to optimize acquisitions according to the orders received, and the satellite is oriented towards the requested site whenever possible. They usually have a limited observing capacity, and in our capitalist economies, asking a customer to pay is a classical way to decide which scene will be observed.

Thanks to the development of commercial satellites, states are relieved of the need to fund the investment in the space infrastructure, and they can limit their intervention to missions that do not have a straightforward commercial application. However, it turns out that states or local authorities are also large customers for the data of these satellites, e.g. for defense applications, for research purposes, or for local land management. Finally, at the European scale, is it really cheaper to let industry build and fund high resolution satellite missions and then buy a large quantity of data, or to build public satellite missions which ensure free and open access to the data? I think the latter is cheaper, except maybe for some niches, but feel free to correct me, I'll be happy to learn.

When satellites have a sufficient observation capacity, they can become systematic observation satellites, which observe almost all the landscapes that enter their field of view, with no need to change the satellite orientation to observe a new scene. I only know of one public decametric or metric resolution satellite mission that performed systematic observations and still tried to sell its data: the LANDSAT mission, until 2008.

      The LANDSAT example

      The information below comes from the following paper. Quoted sentences are in italics.

      Michael A. Wulder, Jeffrey G. Masek, Warren B. Cohen, Thomas R. Loveland, Curtis E. Woodcock, Opening the archive: How free data has enabled the science and monitoring promise of Landsat, Remote Sensing of Environment, Volume 122, 2012, Pages 2-10, ISSN 0034-4257,

In October 2008, Landsat data became free and open data. Before that, costs varied from $20 for an individual photographic image (1972–1978) to $200 for MSS digital data (1979–1982); digital data ranged from approximately $3000 to $4000 for TM (1983–1998) and $600 for ETM+ (1999–2008). Prior to October 2008, no calendar month ever recorded more than 3000 scenes sold.

      Here is what happened after October 2008 :

In red: the number of Landsat scenes distributed each month after the data became free and open in October 2008. The tiny blue bar in the bottom left corner is the maximum number of scenes sold in a given month when LANDSAT data were not open (from Wulder et al., 2012).

In less than three years, the number of scenes downloaded each month was multiplied by 100! And that was just the beginning: over the following years, the download rate kept increasing until Sentinel-2 came into operation.

      Cumulative number of downloaded Landsat Scenes [https:]


      Economic benefits of free and open data

A few hours ago, the French national mapping agency (IGN) announced the release of its databases as free and open data. This decision was justified by a report from the French Parliament: "It's not the sale of data that creates its value, it is its circulation". To justify this change of policy, the report states that: "Free dissemination and reuse of sovereign geographic data implies that the production of sovereign geographic data must be financed by state subsidies, if not by the sale of the data. If the open data business model is empirically verified, a return to the public purse will be achieved by the taxes of the additional wealth created by the release of data."

After a policy change was envisaged during the moderate center right Trump administration, the LANDSAT Science Team studied the benefits of the free and open data policy of LANDSAT data; quoted sentences are in italics.

      Zhe Zhu, Michael A. Wulder, David P. Roy, Curtis E. Woodcock, Matthew C. Hansen, Volker C. Radeloff, Sean P. Healey, Crystal Schaaf, Patrick Hostert, Peter Strobl, Jean-Francois Pekel, Leo Lymburner, Nima Pahlevan, Ted A. Scambos, Benefits of the free and open Landsat data policy, Remote Sensing of Environment, Volume 224, 2019, Pages 382-385, ISSN 0034-4257,

The National Geospatial Advisory Committee (National Geospatial Advisory Committee Landsat Advisory Group, 2014) analyzed sixteen economic sectors (e.g., agriculture, water consumption, wildfire mapping) where the use of Landsat data led to productivity savings, and estimated the economic benefit of Landsat data for the year 2011 as $1.70 billion for U.S. users plus $400 million for international users. Many of the sixteen economic sectors are directly associated with U.S. federal, state and local government activities (e.g., risk assessments, mapping and monitoring activities). In addition, the open data policy is particularly beneficial to government, university, and commercial research groups and organizations that have limited budgets.

      An economic study tried to measure the economic benefits of the Landsat open data :

      John Loomis, Steve Koontz, Holly Miller, Leslie Richardson, Valuing Geospatial Information: Using the Contingent Valuation Method to Estimate the Economic Benefits of Landsat Satellite Imagery, Photogrammetric Engineering & Remote Sensing,Volume 81, Issue 8, 2015, Pages 647-656, ISSN 0099-1112, [https:]]

Based on a survey of 14,000 users of all kinds, the authors tried to determine the price at which users would stop buying data, and how much benefit from the activity would then be lost. The results, provided in the table below, show that at a cost of $100 per image, $46 million would be lost each year.

Of course, I do not have enough economics skills to criticize the study, but a sentence in the discussion struck me, because I think we might be in that case: If the users who drop out of the market more quickly as the price per scene increases (i.e., they are not willing to pay as much per scene) obtain a greater share of the scenes than those users who stay in the market as the price increases, the results in Table 5 represent a lower bound on the annual economic loss to society associated with increasing the price of the imagery.

Let's look at just one of the many products made at the European continental scale with Sentinel-2 data, in which CESBIO was involved: the snow and ice fraction cover product. This product is useful for hydrology (how much water will be available for irrigation or to fill the dams after melting?), for biodiversity (plants differ where the snow stays longer), for climate studies of course, as well as for recreational purposes (should I take snow equipment for my hike this weekend?). The product involves 1500 tiles and 365/3 images per tile per year (orbits overlap in northern regions, so one image every third day on average); we have processed 5 years of data, and the exploitation will go on for 2 more years. The cost of the project was 1.5 M€, thanks to the free and open data. Let's consider a fee of only 10 € per scene (probably a very low fee; Landsat data were sold at $600):

• Number of images: 1500 × 365/3 × 7 ≈ 1,280,000
• Total data cost at 10 € per scene: 12.8 M€
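The arithmetic above is quickly checked; this tiny sketch only re-applies the figures stated in the post (1500 tiles, one image every third day, 7 years, a hypothetical 10 € fee per scene).

```python
# Data cost of the pan-European snow & ice product if Sentinel-2 scenes
# were sold at 10 € each (hypothetical fee, figures from the post).

TILES = 1500
IMAGES_PER_TILE_PER_YEAR = 365 / 3  # one acquisition every third day on average
YEARS = 7                           # 5 years processed + 2 years of exploitation
FEE_EUR = 10

n_images = TILES * IMAGES_PER_TILE_PER_YEAR * YEARS
cost_meur = n_images * FEE_EUR / 1e6

print(f"{n_images:,.0f} images")      # about 1,280,000
print(f"{cost_meur:.1f} M€ of data")  # roughly 10x the 1.5 M€ project budget
```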

The data cost alone would be almost 10 times the current cost of the project, which probably means the project would not exist; I doubt it would even exist at a cost of 1 € per scene. Could we imagine selling Sentinel-2 data at 1 € per image? Well, let's look at Planet. Planet has a special offer for research and education which allows downloading 5,000 km² of data for free. 5,000 km² is less than half the surface of a Sentinel-2 image. As this offer is an incentive for future customers, and as I expect Planet to be rather generous with it, I guess the commercial cost of 5,000 km² is well above 1 €.

      The Copernicus-NG study

The European Copernicus programme is preparing a new generation of Sentinels 1 to 6 on the 2030 horizon. Before studying the new missions, a comprehensive study of user needs was carried out, which included an economic study to consider a possible change of data policy. It also includes a nice summary of the pros and cons of adding a fee to Sentinel-NG data.

      Study on the Copernicus data policy post-2020, Nextspace Final Report. 2 February 2019

Three options were considered for a change of data policy, the baseline option being to keep the current policy:

1. Allowing access only to European citizens
2. Access allowed against a fee for all users, with the exception of European public institutions
3. Access allowed for all users, but with no right to redistribute the data (except upon agreement). This option would prevent Amazon or Google from earning revenue from Sentinel-2 data.

As the study shows, these options would have thorny legal implications and diplomatic consequences (Europe's access to data from other countries could be denied in return), but let's look at their economic impact according to the study:

Data policy option                          1 - access only in Europe   2 - fee on data   3 - redistribution forbidden (except upon agreement)
Operational costs / year                    7 M€                        11 M€             9 M€
Loss in economic activity in Europe / year  700 M€                      2300 M€           220 M€
Activity loss in Europe (percentage)        -13%                        -48%              -5%

      Economic impact of the three changes studied for Copernicus data policy

The operational costs would cover the need to enforce the regulations and check that users comply with the conditions. The option with a fee would of course also require services to advertise, negotiate and recover the fees, and to provide support on the billing process.

The economic costs cover the fact that data use relies on partners from outside Europe that would need to be replaced, the reduced export possibilities due to the restrictions, and, for option 2 with fees, all the activity that would not happen because of the necessity to pay for the input data. For option 2, the study concludes that the reduction in economic activity could be as high as 50%.

Although I have not heard whether the Copernicus programme intends to change anything regarding the data policy, based on the report's conclusions it does not seem likely.

More economic considerations

How to start a new business based on remote sensing data?

Now, imagine you are a company and you have a good idea for a product that could be successful and generate revenue, for instance estimating evapotranspiration in agricultural fields to provide irrigation advice. To be credible, you need to demonstrate it over a region such as Occitania (10 tiles, 2 years). Can you afford to invest 180 k€ (10 × 365/4 × 2 × 100, assuming 100 € per image) just to make this demonstration? Will you find funding for that, while you already need to fund the R&D investment?

      Sentinel-2 global mosaic 2019 by EOX.

Without the free and open data policy, EOX could not have made its fantastic cloudless global mosaic, which is a perfect tool to explore our planet and a good showcase for EOX's remote sensing skills. Without the free and open data, Sinergise's Sentinel Playground would not exist, and Sinergise would probably still be a small software company. In France, Kermap, Geosys and many others benefit from these data.

      Cloud processing

Given the data volume, the current tendency is to avoid data transfers and to process the data where they are stored. A large number of private operators and public platforms now propose cloud computing facilities with access to the data archive. If the data are not free, with a unique distributor, it will become more difficult for independent organizations to duplicate the archives and offer them within their platforms. Of course, contracts could be signed, but probably not for free, and if two data sources belong to two different operators, will either of them propose access to both?

      Cloudy data have value when they are free

Let's forget the costs and budgets, and suppose we are in a good society where users have some money to do their research. Of course, they are asked to save money and can't buy everything they want. An image acquired in November with 35% cloud cover has the same price as a cloud-free image in September: should they buy it? As happened with LANDSAT when the data had to be paid for, only the almost cloud-free images get bought, and more than half of the images are never ordered, even though they have intrinsic value.

Social benefits of free and open data

Science for citizens, NGOs, journalists

I don't have statistics, but free and open data enable surveillance of our planet by citizens, NGOs and journalists. Just take a few examples from citizen Gascoin:



• A video made by Simon with LANDSAT-8 data, on the dynamics of sand near the Bassin d'Arcachon, was relayed in the local press, driving a large number of visits to this blog and raising public awareness. Simon would not have published this video had he had to pay 5000 € for the images.
• Pierre Markuse's great images of forest fires from Sentinel-2 or Landsat-8 are often used in the press to inform the broad public of the magnitude of the affected regions.

Free and open data are also a great way to let students manipulate data and learn how to conduct a remote sensing project. A lot of cloud platform users are students; it is also the case for Theia.


In many European countries, research is relatively poorly funded, and researchers spend much of their time chasing funding. We already lose too much time writing proposals; let's not add the necessity to find more money to buy the data we need, negotiate the cost of access, track the number of images ordered, and find yet more money if the project works and we need more data.

      Reducing the data access for the research community might slow down the progress of Earth observation science, and therefore degrade the impact of remote sensing data on society.

      Earth monitoring

If access to the data were costly, products at continental scale, such as the Copernicus Pan-European layers, the snow and ice layer, the Common Agricultural Policy products, or DLR's World Settlement Footprint, would be updated less often. Europe could lose a good way to monitor its lands. Even worse, products at global scale would become almost inaccessible. Similarly, less developed countries would have difficulty affording access to Copernicus data to monitor the state of the crops or forests in their countries.


This post, written with the help of Simon Gascoin and Julien Michel, might look biased (it probably is). But we have not found studies that conclude against the free and open data policy of Copernicus. If you are aware of any, please mention them in the comments; we are not specialists of the economics literature. As summarized above, several studies show that the huge development cost of satellite systems such as Landsat or the Copernicus Sentinels pays off in the long run, in terms of service activity and better knowledge of the state of our planet, at global or local scale.



    • sur How to make a mosaic of Theia snow products in three command lines

      Posted: 11 December 2020, 11:43pm CET by Simon Gascoin

This assumes you have downloaded several snow products from Theia and unzipped them in the same directory, and that you have a Linux OS with the GNU Parallel, GDAL and OTB command line utilities installed.






      Then, in your terminal, type:

# Get the color table
wget [https:] 
# Reproject all SNW raster products to a common system (here Web Mercator) and assign 255 to nodata
parallel gdalwarp -srcnodata 255 -overwrite -r near -t_srs "EPSG:3857" {} {/} ::: $(ls SEN*/*SNW*.tif)
# Apply the color table
parallel otbcli_ColorMapping -in {} -out c{} uint8 -method.custom.lut LIS_SEB_style_OTB.txt ::: $(ls S*SNW*.tif)
# Merge the colored images and set 0 to nodata
gdal_merge.py -n 0 c*.tif -o mosaic.tif -co COMPRESS=DEFLATE

      Here the output is yesterday's snow cover map of the entire French Alps (10 December 2020). Snow is cyan, clouds are white.


    • sur Préparation de la mission TRISHNA dans les Pyrénées

      Posted: 30 November 2020, 11:35pm CET by Simon Gascoin
      Enclosure: [download]

As part of the preparation of the TRISHNA thermal mission (CNES/ISRO), CESBIO is setting up an experimental system to monitor land surface temperature in the Pyrenees. The main objective is to better understand the added value of surface temperature for characterizing the internal properties of the snowpack. In particular, the assimilation of TRISHNA data could improve the models that compute snowmelt in mountain areas. But the data collected at this site will very likely also serve other applications in meteorology, ecology, limnology...


Thanks to our colleagues from OMP, who run the scientific facilities at the Pic du Midi, we installed a thermal camera on the front of the observatory at 2860 m a.s.l. The camera points towards Lake Oncet, south of the Pic du Midi. Its spectral range is 7.5 to 13 µm and its field of view 60° x 45°.


View of the study site in the visible vs. thermal infrared range (27 July 2020)


Infrared sensor (source: Apogee)

To complement these observations, on 24 November we installed at the Col de Sencours below (2380 m) a mini weather station equipped with a narrow field-of-view infrared radiometer (36°, spectral range 8 to 14 µm), an air temperature and humidity probe and a barometer. The radiometer points at an esplanade in front of the building to provide a calibration point for the camera images.


Aqua Troll 100 diver (source: Hydroaxys)

Finally, we submerged a "diver" in the small lake located above Lake Oncet at 2240 m to measure pressure and temperature in the water. If all goes well, thanks to the air pressure measured at the Col de Sencours, we will be able to derive the water level variations in the lake, and perhaps even the mass of snowfall when the lake is frozen!
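The water-level retrieval hinted at here is simple hydrostatics: the submerged sensor measures the pressure of the water column plus the atmosphere, so subtracting the barometric pressure measured nearby and dividing by ρg yields the water height. A minimal sketch, with purely illustrative pressure values, not actual station data:

```python
# Water height above the submerged "diver" from hydrostatic pressure:
# h = (Pw - Pa) / (rho * g). Example numbers are illustrative only.

RHO_WATER = 1000.0  # kg/m^3 (fresh water)
G = 9.81            # m/s^2

def water_height_m(p_water_hpa, p_air_hpa, rho=RHO_WATER, g=G):
    """Height of water above the pressure sensor, in metres."""
    dp_pa = (p_water_hpa - p_air_hpa) * 100.0  # hPa -> Pa
    return dp_pa / (rho * g)

# e.g. 985 hPa measured underwater vs 770 hPa of air pressure at ~2300 m
print(round(water_height_m(985.0, 770.0), 2), "m")
```

When the lake is frozen, an added snow load changes the pressure signal the same way, which is presumably why a snowfall mass estimate seems possible.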

Variables measured by the in situ setup. Ts: surface temperature, Ta: air temperature, Qa: air humidity, Pa: air pressure, Pw: water pressure, Tw: water temperature

Here is a series of thermal images acquired on 28 October and synchronized with the Pic du Midi webcam:


Thanks to Pascal, Baptiste and Vincent R for the installation in these harsh conditions, and to Vincent B for the diver. Thanks also to the Commission Syndicale de la Vallée du Barège and its president Raymond Bayle for the authorization to install the station, and to Francis Lacassagne and Eric Chereau for their unfailing logistical support at the Pic. Thanks finally to CNES/TOSCA for the financial support to the TRISHNA programme.

    • sur Evaluating ERA5 wind direction with Copahue Volcano plume

      Posted: 27 November 2020, 7:47pm CET by Simon Gascoin

      Copahue is an active volcano in the Andes on the Chile-Argentina border. It erupted in 2016 and a plume of smoke was visible in many Sentinel-2 images during that period. Looking at these pictures I thought it would be fun to use that plume as a giant anemometer to evaluate climate model data.

      Sentinel-2 image of Copahue Volcano on 2016-11-07

      I extracted the wind vector from the ECMWF ERA5 climate reanalysis, available at an hourly time step in Google Earth Engine. Since the Sentinel-2 overpass time is approximately 14:30 UTC in that area, I queried only the ERA5 data corresponding to the 14:00 time step.

      // Copahue volcano crater
      var pt = ee.Geometry.Point([-71.18, -37.86]);
      // Filter a year of the hourly ERA5-Land collection at 14:00 UTC
      var uv = ee.ImageCollection("ECMWF/ERA5_LAND/HOURLY")
        .filterDate('2016-01-01', '2017-01-01')
        .filter(ee.Filter.calendarRange(14, 14, 'hour'))
        .select(['u_component_of_wind_10m', 'v_component_of_wind_10m']);
      // Plot the U and V components at the crater
      print(ui.Chart.image.series(uv, pt, ee.Reducer.first(), 9000));


      I downloaded this figure as a table, then extracted the wind vector of a few dates corresponding to cloud-free Sentinel-2 images in 2016.
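      To compare the ERA5 vectors with the plume orientation, the (u, v) components must be converted to a meteorological wind direction (the direction the wind blows from). A minimal sketch of that conversion (not the actual code used for the figure):

```python
import math

def wind_direction(u, v):
    """Meteorological wind direction in degrees:
    0 = wind from the north, 90 = wind from the east."""
    return (270.0 - math.degrees(math.atan2(v, u))) % 360.0

print(wind_direction(0.0, -1.0))  # northerly wind, ~0 deg
print(wind_direction(-1.0, 0.0))  # easterly wind, ~90 deg
```

      A plume blowing towards the south-east, for instance, indicates a wind direction of about 315° (from the north-west).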



      ERA5 wind vector at Copahue Volcano at 14h UTC and Sentinel-2 images

      The wind vector matches the plume direction only on 22 Jan and 21 Sep... That is a score of 2/7; ECMWF, you can do better!

      Note from ECMWF

      Care should be taken when comparing this variable with observations, because wind observations vary on small space and time scales and are affected by the local terrain, vegetation and buildings that are represented only on average in the ECMWF Integrated Forecasting System.

      [1] A more elegant solution would be to draw the wind arrows directly in GEE, but I felt that would have led me to catch the Mrs-Armitage-on-Wheels syndrome.

    • sur The Banc d'Arguin seen from satellite since 1984

      Posted: 15 November 2020, 11:20pm CET by Simon Gascoin

      The Landsat archive is a gold mine for watching our planet change. Google and Amazon understood this well: their servers have swallowed the entire image collection so that scientists and other curious minds can access it more easily. For example, this Google Earth Engine application by Qiusheng Wu generates an animation of Landsat images since 1984 in a few clicks.

      The Banc d'Arguin lies opposite the famous Dune du Pilat at the entrance of Arcachon Bay. It is a fragile, shifting environment that serves in particular as a migratory stopover for many bird species.

      Evolution of the Banc d'Arguin (1984-2020) - Landsat image series

      Besides the shifting swirls of the Banc d'Arguin, this animation also reveals the retreat of the forest caused by urbanization, the erosion of La Salie beach to the south, and the widening of the Dune du Pilat.

      Given the speed at which this sandbank moves, it is a safe bet that the boundaries of the strict protection zone (closed to humans) will have to be revised regularly. Fortunately, this is provided for in Decree no. 2017-945 of 10 May 2017 extending and modifying the Banc d'Arguin national nature reserve (Gironde).

      The boundaries of the reinforced protection zones may be modified by the prefect each year according to the evolution or displacement of the sandbanks.

      The Banc d'Arguin reserve currently covers an area of about 2,600 hectares.

      Let's hope this will not rekindle the anger of the recreational boaters who had demonstrated against this text, which limited their right to... go recreational boating.

    • sur Copernicus Land - High-resolution Snow and Ice Monitoring: processing campaign status

      Posted: 12 November 2020, 9:26am CET by Simon Gascoin


      Snow and ice products are now available at high resolution (20 m x 20 m) for all of Europe in near real time. In addition, Magellium is processing the Sentinel-2 archive back to September 2016: about 70% of the Sentinel-2 observations have already been processed and 50% are already published. This huge computation is being done on the CNES HPC!

      The Copernicus Land service provides free access to complete time series from 09/2016 onwards for the Alps, Pyrenees and Scandinavia at [https:]] . The full HR-S&I dataset will be available in January 2021.

      Of the roughly 1,000 Sentinel-2 tiles to be processed over Europe, nearly 700 have been processed and 500 are fully published back to September 2016. The full HR-S&I dataset over the European EEA-39 area will be available in mid-January 2021.

    • sur Stealth snow near Albuquerque, New Mexico

      Posted: 9 November 2020, 12:24am CET by Simon Gascoin

      "Give me a hundred tries, I'll never be able to spell it." Jimmy McGill

      I haven't seen the "Breaking Bad" series but I'm a huge fan of "Better Call Saul". Both series take place in Albuquerque, New Mexico. With its dry, sunny weather, Albuquerque is the perfect place for a modern-day Western. Vince Gilligan, the creator of "Breaking Bad" and "Better Call Saul" said the city "has a stealth charm".

      According to Wikipedia, Albuquerque has a dry climate with brilliant sunshine most of the time, but it does receive 9.5 inches (240 mm) of precipitation per year. As one of the highest major cities in the U.S., it can get cold and experience several snow events each winter, as the waitress told Mike in his favorite diner.

      Screenshots of a Better Call Saul episode

      Just a few days ago, Albuquerque was blasted with snow. Sentinel-2 captured this event on the 28th of October but most of the metropolitan area was covered by clouds.

      However, two days later, it captured another, cloud-free image. The snow had already vanished in the city but the snow cover was still present in the Sandia mountains to the east of the city.

      The day after (31 Oct), Landsat-8 captured another clear-sky image of the area. The contrast between both acquisitions is striking. In less than 24 hours, most of the snow disappeared in the eastern region.

      Here I used false color composites including a shortwave infrared band to highlight the snow cover, i.e. bands 6,4,3 (Landsat) and 11,4,3 (Sentinel-2).

      In Google Earth Engine (link to the script), I computed the snow cover area to analyze the snow cover variation with topography a bit further (as in this post about snow in Lesotho). The histograms below show that the snow cover persisted only above 2900 m, although some snow patches resisted even at lower elevation, near 2300 m.
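      The elevation analysis amounts to binning the snow mask by elevation. A toy, pure-Python sketch of the idea (hypothetical function name; the actual computation was done in Google Earth Engine on the real rasters):

```python
def snow_fraction_by_elevation(elevations, snow, bin_edges):
    """Fraction of snow-covered pixels per elevation bin.
    elevations and snow are flat, equal-length sequences (snow is 0/1);
    bin_edges is an increasing list of elevations in metres."""
    fractions = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = [s for z, s in zip(elevations, snow) if lo <= z < hi]
        # None marks an elevation bin with no pixels at all
        fractions.append(sum(in_bin) / len(in_bin) if in_bin else None)
    return fractions

# toy data: snow only above 2900 m
print(snow_fraction_by_elevation(
    [2200, 2600, 3000, 3400], [0, 0, 1, 1], [2000, 2800, 3600]))  # -> [0.0, 1.0]
```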

      Such ephemeral snow cover is typical of arid mountains and despite its short duration it might play an important role in their ecosystems. However, until recently it was not really possible to observe this snow from satellite remote sensing at high resolution (20 m to 30 m). Now thanks to the combination of Sentinel-2 and Landsat-8 we have the ability to monitor the fluctuations of even the stealthiest snowpacks...


      PS. Another clue that snow is not that common in Albuquerque...

      More than 100 crashes reported during Albuquerque snow storm [https:]]

      — Marcia S Newman (@marciasgreen) October 28, 2020

    • sur Launch of the survey on radar imagery needs

      Posted: 2 November 2020, 1:08pm CET by Simon Gascoin

      Together with the Groupement d'Intérêt Scientifique Bretagne Télédétection (GIS BRETEL) and in connection with the VIGISAT facility, DINAMIS is launching the national survey on radar imagery needs:


      It is part of the rapprochement with the VIGISAT facility initiated in the first quarter of this year, and of an initiative associating DINAMIS with the directorates of the Form@ter, Theia and ODATIS data hubs of the Data Terra research infrastructure.

      The survey aims to (1) better identify the scientific and institutional users of SAR radar imagery and (2) better qualify and quantify current and future uses and needs. It will feed the strategic thinking of the facility regarding a possible extension of the current DINAMIS offer.

      It is addressed to any member of the French scientific or institutional community using radar imagery, to GIS BRETEL members using VIGISAT, to DINAMIS users, and more broadly to any potential user interested in this family of satellite data.

      Feel free to answer the survey via the link: it takes about 10 minutes to complete (expected closing date: Wednesday 25 November).

    • sur Low-dimensional chaotic models for the micro-atmosphere of the prehistoric Altamira cave (Spain)

      Posted: 30 October 2020, 1:10pm CET by mangiarotti

      Applying the global modelling technique to the micro-atmosphere of the Altamira cave has yielded low-dimensional chaotic models from the variations of its carbon dioxide (CO2) and radon (222Rn) content, which result respectively from the biological activity of the soil and from a physical decay process. Cross-analysis of the data highlighted the coupling between the cave atmosphere and the water content of the soil at the surface, directly above the cavity: soil water hinders gas exchange during the wet season, whereas during the dry season the decrease in soil moisture allows the gases accumulated in the cave atmosphere to escape outside.

      We all know that the behaviour of the atmosphere is sometimes hard to predict. To understand the fundamental causes of this behaviour, in the 1960s the meteorologist Barry Saltzman tried to describe the dynamics of the atmosphere with a small number of variables[i]. Edward N. Lorenz then simplified this model and published in 1963[ii] the first low-dimensional chaotic model: fully deterministic, yet unpredictable in the long term unless its formulation and initial state are known with perfect accuracy. Too simple for weather forecasting, this model nevertheless showed that three variables and a minimum of nonlinearity can be enough to produce this deterministic but long-term-unpredictable behaviour.
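      For the curious reader, the Lorenz 1963 system is easy to simulate numerically. A minimal sketch with the standard parameters (sigma = 10, rho = 28, beta = 8/3), given here only to illustrate the deterministic-yet-unpredictable behaviour discussed above (not part of the Altamira study itself):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one 4th-order Runge-Kutta step."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(2000):  # integrate for 20 time units
    state = lorenz_step(state)
# the trajectory stays bounded on the attractor but never repeats
print(all(abs(v) < 100 for v in state))  # -> True
```

      Two initial states differing by a tiny amount diverge exponentially, which is why such a model is unpredictable in the long term despite being fully deterministic.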

      Thanks to a French-Spanish collaboration[iii], the atmosphere of the Altamira cave has been analyzed using the global modelling technique, a modelling approach based on chaos theory[iv]. Located in the province of Cantabria, in northern Spain, the Altamira cave contains one of the most remarkable pictorial collections of the late Upper Paleolithic. It was inhabited for several millennia, until the cave entrance collapsed about 13,000 years ago, isolating its paintings in a stable environment that favoured their conservation. The cave, discovered at the end of the 19th century, was declared a World Heritage Site by UNESCO in 1985. Since then, the cave atmosphere has been monitored to watch over the conservation of the polychrome paintings and the safety of visitors.

      This study used the concentrations of carbon dioxide (CO2) and radon (specifically the more stable 222Rn isotope) in the cave, together with the soil water content, measured over the period 2007-2012. The natural origin of CO2 and radon is fairly well established. The CO2 molecules come mainly from biological activity (the respiration of organisms living in or on the soil, such as plants and microorganisms), while the 222Rn isotope results from the decay of uranium naturally contained in soils and rocks. But knowing where these two components come from does not explain the seasonal evolution of their concentrations, all the less so since the Altamira cave behaves in the opposite way to most caves of similar morphology (with a slight downward slope). Usually, in this configuration, the cave air remains trapped during summer because it is colder, and therefore denser, than the outside atmosphere, so it cannot escape through openings located higher up. In the Altamira cave, however, this behaviour is reversed: CO2 and radon accumulate during the cold season, and ventilation occurs in summer, when their concentrations drop, as shown in Figure 1.

      Figure 1: Filtered time series of the CO2 concentration [in ppm] and radon 222Rn [in Bq/m3] in the micro-atmosphere of the Altamira cave (northern Spain), in the Polychrome Hall; water content of the soil above the cave at the surface [in m3/m3] (at 5 cm depth).


      As early as 2008, variations in soil water content were pointed out as a possible explanation for this behaviour[v]. Ventilation through the cave ceiling implies that the levels above it are not saturated with water, so that previously accumulated gases can be exhaled to the outside atmosphere. Conversely, when water fills the interstices of the soil and rock, gas exchange with the outside can be severely hindered and the gases accumulate in the cave micro-atmosphere.

      Initiated in the 1990s, the global modelling technique was developed to study dynamical behaviours from a single variable (a single series of measurements). In recent years, we have been able to show that the approach can be extended to small sets of variables, either to retrieve the original formulation of the equations, or to obtain interpretable equations when the original equations are unknown[vi]. In this new study, we show that the approach can be used to detect directional couplings between variables, even when observations of all essential variables are not available.

      Using the global modelling technique in this context has made it possible not only to obtain low-dimensional models capable of producing chaotic behaviour, but also to test the hypothesis that soil water is involved in this inverted atmospheric behaviour[vii].


      Figure 2: Top: chaotic attractor of the micro-atmosphere of the Altamira cave. Two projections are shown: radon concentration (222Rn) as a function of the CO2 concentration (A), and rate of change of the radon concentration (dRn/dt) as a function of the radon concentration itself (B). Simulations (in blue) are superimposed on the original data (in green). Bottom: the Lorenz chaotic attractor seen in the projections y as a function of x (C), and z as a function of x (D).


      A model coupling the radon concentration to the variations in soil water content was obtained. This model can only generate a periodic regime, and therefore does not account for the observed complexity. Yet, although simplistic, it remains of interest because it offers a first dynamical argument for the important role of soil water in the dynamics of gas exchange between the atmosphere and the cave. Moreover, the model can be reformulated as a non-autonomous model: the soil water content measurements can be used empirically as a forcing of the micro-atmosphere model. This result may seem trivial, since hydrological models commonly use external data as forcing. However, hydrological models rely on very strong a priori knowledge: the equations are assumed to be known. This is not the case in this study where, on the contrary, the equations are initially unknown. This result is very interesting because it opens the way to building scenarios even when the equations governing the studied systems are not available.

      From the CO2 and 222Rn concentrations, other models were obtained, which highlight the existence of a dynamical coupling between these two variables. The analysis of these models revealed the chaotic nature of the underlying systems. The dynamics of the cave micro-atmosphere is mainly synchronized with seasonal climatic variations, and is very close to a chaotic regime: a slight change in the parameterization of the obtained models makes the trajectory converge to a chaotic attractor (Fig. 2A-2B). These are the first chaotic attractors of atmospheric dynamics obtained directly from observational data. As expected, their geometry differs greatly from the attractor obtained by Lorenz in 1963 (Fig. 2C-2D). On the one hand, this is because they do not result from the simplification of an idealized model, as in the case of the Lorenz system: the models of the Altamira micro-atmosphere were obtained directly from observations, without strong a priori hypotheses. On the other hand, they belong to a rather specific context, that of an underground cavity, involving both biological and physical processes. The models thus obtained are of course themselves fairly crude approximations of reality, but they have some predictive ability (of about one month). Moreover, they highlight the underlying low-dimensional determinism and the proximity to a chaotic regime. The unpredictability of nature clearly cannot be reduced to purely random behaviour.


      Marina Sáez (Universidad de Alicante, CESBIO/OMP)

      Sylvain Mangiarotti (CESBIO/OMP)

      Soledad Cuezva (Universidad de Alcalá, España)

      Ángel Fernández-Cortés (Universidad de Almería, España)

      Beatriz Molero (CESBIO/OMP, France)

      Sergio Sánchez-Moral (MNCN-CSIC, España)

      David Benavente (Universidad de Alicante, España)


      [i] Saltzman B., 1962. Finite Amplitude Free Convection as an Initial Value Problem—I., Journal of the Atmospheric Sciences, 19, 329–341.

      [ii] Lorenz E.N., 1963. Deterministic nonperiodic flow, Journal of the Atmospheric Sciences, 20(2), 130-141.

      [iii] Departamento de Ciencias de la Tierra y del Medio Ambiente (Universidad de Alicante, España)  /  Centre d’Études Spatiales de la Biosphère (CESBIO-OMP, UT3-CNRS-CNES-IRD-INRAe, Toulouse, France)  /  Departamento de geología, geografía y medio ambiente (Universidad de Alcalá, Madrid, Espagne)  /  Departamento de Biología y Geología (Universidad de Almería, España)  /  Museo Nacional de Ciencias Naturales (MNCN-CSIC, Madrid, España).

      [iv] The GPoM tool, developed at the Centre d'Etudes Spatiales de la Biosphère (Toulouse), is designed to obtain differential equation models directly from time series. [https:]] .

      [v] Cuezva S, Fernandez-Cortes A, Benavente D, Serrano-Ortiz P, Kowalski A & Sanchez-Moral S, 2011. Short-term CO2 (g) exchange between a shallow karstic cavity and the external atmosphere during summer: role of the surface soil layer Atmospheric Environment 45:1418-1427. [https:]]

      [vi] Mangiarotti S. & Huc M., 2019. Can the original equations of a dynamical behaviour be retrieved from observational time series? Chaos, 29, 023133. [https:]]

      [vii] These results are published in the journal Theoretical and Applied Climatology: M. Sáez, S. Mangiarotti, S. Cuezva, A. Fernández-Cortés, B. Molero, S. Sánchez-Moral & D. Benavente, Global models for 222Rn and CO2 concentrations in the Cave of Altamira, 2020. [https:]]

    • sur Using PEPS to compute the ground surface displacement after the Palu earthquake

      Posted: 14 October 2020, 2:10am CEST by Simon Gascoin

      The magnitude 7.5 Palu earthquake in 2018 was a disaster that left 2,081 people dead, 1,309 missing and 206,494 displaced. An impressive surface slip associated with the earthquake was visible in optical satellite imagery such as Sentinel-2.

      I used this case to test the new tool made available by the PEPS team ( [https:]] ). This tool makes it possible to compute the surface displacement from two Sentinel-2 images without downloading the images. Just edit and run this shell script. The documentation is in French, but if you need help you should contact the PEPS folks or follow these guidelines.

      curl -u email:passwd ${REQUEST}


      Here I used Sentinel-2A images from the same orbit acquired before (2018-09-02) and after (2018-10-02) the quake. After some trials I eventually managed to get the results. The fault line is well identified by the algorithm, with a surface displacement of 2 to 8 m in the N-S direction, in agreement with this 2019 article by Anne Socquet et al., or this tweet by Sotiris Valkaniotis.

      The Palu earthquake was a major disaster because it triggered landslides. Recent studies have shown that rice irrigation had saturated soil, helping to set off the landslides.


      Cournet, M., Giros, A., Dumas, L., Delvit, J. M., Greslou, D., Languille, F., Blanchet, G., May, S., and Michel, J., 2D Sub-Pixel Disparity Measurement Using QPEC / Medicis, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B1, 291-298.

    • sur Sentinel-2 time series resolution should finally reach 10m in November

      Posted: 13 October 2020, 3:22pm CEST by Olivier Hagolle

      We have been waiting for a long, long time, but it seems the production of geometry-refined Sentinel-2 L1C will start next month. I have heard that ESA plans to reprocess the whole data set later on, but the schedule has not been announced yet.

      "Start of the geometry-refined production using the Global Reference Image (GRI) and Copernicus DEM at 90 m resolution by November 2020"

      It seems the start of production will be cautious: ESA will first use the ortho-rectification with ground control points over Europe only, and if that works well, it will be extended to the other continents.

      In any case, congrats and thanks to the ESA teams!
