In Remote Sensing, much effort has been devoted to Super-Resolution to overcome physical sensor limitations, and Deep Learning has vastly surpassed Interpolation- and Reconstruction-based methods. Spatial and Multi-Spectral methods are predominant in the field, and, motivated by the recent success stories of 3D spatial modeling with Implicit Neural Representations, new continuous image modeling methods are appearing. In this work, we take advantage of existing Spatial and Spectral techniques and of Learning Continuous Image Representation with Local Implicit Image Function (LIIF), adding the Temporal dimension to the problem and leaning towards a continuous interpolation model of space and time as a first approximation to the full modeling.
Video: SuperTemporal.mov
The Open Earth Observation Hub is a web solution that simplifies access to data from multiple API providers. This interactive front-end browser centralizes data access, makes it accessible to a wide range of users, and protects sensitive data.
At the moment we retrieve data from the Element 84 open STAC API, which gives access to Sentinel-2 L2A Cloud Optimized products, and from the brand new Copernicus Data Space Ecosystem API of the European Space Agency (ESA), which gives access to Sentinel-1, Sentinel-2, Sentinel-3, Sentinel-5P, Landsat 5, Landsat 7 and Landsat 8 data.
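As a rough illustration, a STAC provider such as the Element 84 API can be queried with a few lines of Python. This is a minimal sketch assuming the pystac-client library; the collection id, bounding box, dates and filters below are placeholder values, not the exact ones used by the Hub.

```python
# Minimal sketch of a STAC search against the Element 84 Earth Search endpoint.
# Collection id, bounding box, dates and cloud-cover filter are illustrative.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")

search = catalog.search(
    collections=["sentinel-2-l2a"],        # Sentinel-2 L2A Cloud Optimized GeoTIFFs
    bbox=[2.05, 41.30, 2.25, 41.45],       # example area of interest
    datetime="2023-06-01/2023-06-30",      # temporal window
    query={"eo:cloud_cover": {"lt": 20}},  # keep mostly cloud-free scenes
    max_items=10,
)

for item in search.items():
    # Each item exposes its assets (bands) as HTTP links to COG files;
    # asset names depend on the collection.
    print(item.id, item.datetime, item.assets["visual"].href)
```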
A cross-platform application built for our own data processing. It is implemented with ElectronJS and relies heavily on GDAL. It was built as a native application to give easy access to big local files.
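One reason GDAL helps with big local files is windowed reading: it can read just a region of a very large raster without loading the whole file into memory. A minimal sketch with the GDAL Python bindings, using an illustrative file path and window:

```python
# Read a 512x512 window from a large local GeoTIFF without loading it all.
# The file path and window coordinates are placeholders.
from osgeo import gdal

ds = gdal.Open("local/big_scene.tif")   # opens the dataset without reading pixels
band = ds.GetRasterBand(1)

window = band.ReadAsArray(xoff=2048, yoff=2048, win_xsize=512, win_ysize=512)
print(ds.RasterXSize, ds.RasterYSize, window.shape)
```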
In recent years, efforts have been devoted to finding a continuous image representation. An example is Learning Continuous Image Representation with Local Implicit Image Function (LIIF), where each image is represented as a two-dimensional feature map and a single decoding function is shared across the entire image. Given any coordinate, the decoder predicts an RGB value from the nearest neighboring features.
Traditionally, we represent images as a two-dimensional array of pixels, in a discrete manner, but LIIF is built on the premise that each pixel of an image can be described as a continuous function of its coordinates and its neighboring features. The main advantage is that, with this continuous representation, we are no longer constrained by resolution: we can generate arbitrary resolutions for any image, even at upsampling scales the model was never trained on.
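A minimal sketch of this idea, not the original LIIF architecture: a shared MLP decoder receives the feature vector nearest to a query coordinate plus the relative offset to that feature, and predicts an RGB value, so the image can be sampled at any coordinate and therefore rendered at any resolution. Layer sizes and shapes are illustrative.

```python
# Sketch of a local implicit decoder: (nearest feature, relative coord) -> RGB.
import torch
import torch.nn as nn

class LocalImplicitDecoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, feat_map, coords):
        # feat_map: (C, H, W) feature map produced by an encoder
        # coords:   (N, 2) query coordinates in [-1, 1] x [-1, 1]
        C, H, W = feat_map.shape
        # Index of the nearest feature vector for each query coordinate.
        ys = ((coords[:, 0] + 1) / 2 * (H - 1)).round().long().clamp(0, H - 1)
        xs = ((coords[:, 1] + 1) / 2 * (W - 1)).round().long().clamp(0, W - 1)
        nearest = feat_map[:, ys, xs].t()                      # (N, C)
        # Relative offset between the query and its nearest feature location.
        centers = torch.stack([ys / (H - 1) * 2 - 1,
                               xs / (W - 1) * 2 - 1], dim=1)
        rel = coords - centers
        # Shared decoder applied to every query independently.
        return self.mlp(torch.cat([nearest, rel], dim=1))      # (N, 3)

# Query an 8x8 feature map at 1000 arbitrary coordinates (any target resolution).
decoder = LocalImplicitDecoder()
rgb = decoder(torch.randn(64, 8, 8), torch.rand(1000, 2) * 2 - 1)
```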
- Deployed
utils folder, which contains the new, rewritten and updated scripts:
- utils/data-adaptation.py: Pre-processing and data adaptation via histogram matching; includes a mechanism to run every adaptation and let the user choose the best one through labels (a histogram-matching sketch follows this list).
- utils/temporal-5-subset.py: Temporal check (Equation 5 of the TFG document) over all the images; very fast.
- utils/interpolation.py: Previous bilinear interpolation (PIL, skimage, ...).
- utils/bicubic-torch.py: Bicubic interpolation implemented with Torchvision and PIL; much simpler, faster, and better (a bicubic sketch follows this list).
- utils/average.py
- utils/temporal-difference.py: Calculates the difference between the interpolation t value and the ground-truth t value; we obtain an average difference of 2.9 days, with one outlier.
- utils/results_crop.py: Crops a small region of the image and computes the PSNR for all the models in the project; used to generate a figure with some interesting visual results (a PSNR-on-crop sketch follows this list).
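For the data-adaptation step, a minimal sketch of histogram matching with scikit-image, under the assumption that this is how utils/data-adaptation.py performs the adaptation; file names are placeholders.

```python
# Match the intensity distribution of a source scene to a reference scene.
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

source = io.imread("source_scene.tif")        # image to adapt
reference = io.imread("reference_scene.tif")  # target distribution

# channel_axis=-1 matches each band independently (scikit-image >= 0.19).
matched = match_histograms(source, reference, channel_axis=-1)
io.imsave("source_scene_matched.tif", matched.astype(source.dtype))
```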
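For utils/bicubic-torch.py, a minimal sketch of bicubic upsampling with Torchvision and PIL; the scale factor and file names are illustrative.

```python
# Bicubic upsampling of a low-resolution patch with Torchvision.
from PIL import Image
from torchvision.transforms import InterpolationMode
from torchvision.transforms import functional as TF

lr = Image.open("low_res_patch.png")
scale = 4  # e.g. x4 spatial upsampling

hr = TF.resize(
    lr,
    size=[lr.height * scale, lr.width * scale],
    interpolation=InterpolationMode.BICUBIC,
)
hr.save("bicubic_x4.png")
```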
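For utils/results_crop.py, a minimal sketch assuming crops are taken with plain NumPy slicing and PSNR comes from scikit-image; crop coordinates and file names are placeholders.

```python
# Crop the same region from a model output and the ground truth, then compute PSNR.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio

gt = io.imread("ground_truth.png")
pred = io.imread("model_output.png")

# 128x128 crop starting at row 200, column 350 in both images.
y, x, size = 200, 350, 128
gt_crop = gt[y:y + size, x:x + size]
pred_crop = pred[y:y + size, x:x + size]

print("PSNR on crop:", peak_signal_noise_ratio(gt_crop, pred_crop, data_range=255))
```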
We built, from the ground up, an End-to-End Framework for Continuous Space-Time Super-Resolution on Remote Sensing data.
This process resulted in Temporal LIIF, a model capable of interpolating at any spatial scale and temporal factor, hence an infinite interpolation model of space and time for Remote Sensing.