
Abstract Details

Activity Number: 229 - Geostatistical Computing on Modern Parallel Architectures
Type: Topic Contributed
Date/Time: Tuesday, August 9, 2022, 8:30 AM to 10:20 AM EDT
Sponsor: Section on Statistical Computing
Abstract #323341
Title: Accelerating Geostatistical Modeling with Mixed-Precision and Tile Low-Rank Algorithms on Large-Scale
Author(s): Qinglei Cao and Sameh Abdulah* and Rabab Alomairy and Yu Pei and Pratik Nag and George Bosilca and Jack Dongarra and Marc Genton and David Keyes and Hatem Ltaief and Ying Sun
Companies: Innovative Computing Laboratory, University of Tennessee; KAUST; KAUST; Innovative Computing Laboratory, University of Tennessee; KAUST; Innovative Computing Laboratory, University of Tennessee; Innovative Computing Laboratory, University of Tennessee; KAUST; KAUST; KAUST; KAUST
Keywords: Geospatial statistics; High performance computing; Multiple precisions; Low-rank approximation
Abstract:

Spatial data are commonly modeled as stationary or non-stationary through a kernel that generates a covariance matrix. A primary workhorse of stationary spatial statistics is Gaussian maximum likelihood estimation (MLE), whose central data structure is a dense, symmetric positive-definite covariance matrix whose dimension equals the number of correlated observations. In this contribution, we reduce the precision of covariance entries between weakly correlated locations to single or half precision based on their separation distance. We thus exploit mathematical structure to migrate MLE to a three-precision approximation that takes advantage of contemporary architectures offering extremely fast linear algebra operations in reduced precision. We add a further level of approximation by combining the mixed precisions with tile low-rank (TLR) approximation to gain additional performance. Finally, we assess the accuracy of our proposed implementation at large scale on four supercomputers with different architectures: HAWK-HLRS (AMD CPUs), Shaheen-II-KAUST (Intel CPUs), Summit-ORNL (NVIDIA GPUs), and Fugaku-RIKEN (Fujitsu A64FX CPUs). The experiments were performed on covariance matrices of dimension up to 12M, using synthetic and real data.
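The distance-based three-precision idea described above can be sketched in NumPy. This is a toy illustration, not the authors' implementation (which targets tile-based parallel runtimes and HPC linear-algebra libraries, and keeps low-precision tiles in their reduced formats to exploit fast hardware units); the exponential kernel, the nugget, the tile size, and the `near`/`far` distance thresholds are all placeholder assumptions chosen for the demo.

```python
import numpy as np

def exp_cov(X, ell=0.1, sigma2=1.0, tau2=0.5):
    """Exponential covariance kernel (Matern with nu = 1/2) plus a nugget.
    Illustrative stand-in for the Matern kernels used in geostatistics."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / ell) + tau2 * np.eye(len(X))

def demote_tiles(C, X, tile=64, near=0.2, far=0.5):
    """Three-precision tiling: round each tile of C to single or half
    precision when the distance between the tiles' location groups exceeds
    `near` or `far` (arbitrary illustrative thresholds). Diagonal tiles
    (distance 0) stay in double precision."""
    C = C.copy()
    n = C.shape[0]
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            d = np.linalg.norm(X[i:i+tile].mean(0) - X[j:j+tile].mean(0))
            if d >= far:            # weakly correlated: half precision
                C[i:i+tile, j:j+tile] = C[i:i+tile, j:j+tile].astype(np.float16)
            elif d >= near:         # moderately correlated: single precision
                C[i:i+tile, j:j+tile] = C[i:i+tile, j:j+tile].astype(np.float32)
    return C

def gaussian_loglik(C, y):
    """Gaussian log-likelihood via a Cholesky factorization of C."""
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, y)                  # alpha = L^{-1} y
    return -0.5 * (len(y) * np.log(2 * np.pi)
                   + 2 * np.log(np.diag(L)).sum()  # log det C
                   + alpha @ alpha)                # y^T C^{-1} y

rng = np.random.default_rng(0)
X = rng.random((256, 2))                 # random 2-D locations in unit square
X = X[np.argsort(X[:, 0])]               # sort so tiles are spatially coherent
C = exp_cov(X)
y = np.linalg.cholesky(C) @ rng.standard_normal(256)  # synthetic observations
print(gaussian_loglik(C, y), gaussian_loglik(demote_tiles(C, X), y))
```

Because the far-apart tiles hold small covariance values, rounding them to half precision perturbs the log-likelihood only slightly, which is the intuition behind spending the cheapest precision on the weakest correlations. Here the demoted tiles are cast back to double before factorization purely to show the accuracy impact; a TLR variant would additionally replace each off-diagonal tile with a low-rank factorization.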


Authors who are presenting talks have a * after their name.

Back to the full JSM 2022 program