Abstract:
|
In remote sensing, one typically has access to multiple images of the same location taken by different instruments, with potentially different spatial resolutions and at varying temporal frequencies. It is often useful to integrate images from multiple sources into a database of composite images for further analysis, and to quantify the uncertainty in the resulting composites. In this paper we develop a general framework for the image fusion problem based on a spatial-temporal model, and apply it to fusing Landsat and MODIS imagery to produce composite images with both high spatial and high temporal resolution, along with the corresponding uncertainty quantification. We compare our approach with classical data fusion methods popular in the remote sensing community, using both simulated and real satellite image data to demonstrate its advantages.
|