Thursday, May 30

Recent Developments in Lower Rank Learning for Complex Data

Thu, May 30, 10:30 AM - 12:05 PM

Grand Ballroom K


**Keywords:** generalized fiducial inference, machine learning problems

R. A. Fisher, the father of modern statistics, developed the idea of fiducial inference during the first half of the 20th century. While his proposal led to interesting methods for quantifying uncertainty, other prominent statisticians of the time did not accept it, as it became apparent that some of Fisher's bold claims about the properties of fiducial distributions did not hold up in multi-parameter problems. Beginning around the year 2000, the authors and their collaborators began to re-investigate the idea of fiducial inference and discovered that Fisher's approach, when properly generalized, opens doors to solving many important and difficult inference problems. They termed this generalization of Fisher's idea generalized fiducial inference (GFI). The main idea of GFI is to carefully transfer randomness from the data to the parameter space by inverting a data generating equation, without the use of Bayes' theorem. The resulting generalized fiducial distribution (GFD) can then be used for inference. After more than a decade of investigation, the authors and their collaborators have developed a unifying theory for GFI and provided GFI solutions to many challenging practical problems in different fields of science and industry. Overall, they have demonstrated that GFI is a valid, useful, and promising approach to statistical inference.
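The inversion idea can be illustrated with a textbook toy case (our own sketch, not an example from the talk): for the simple model Y_i = mu + Z_i with Z_i ~ N(0, 1), the data generating equation for the sample mean is ybar = mu + zbar with zbar ~ N(0, 1/n). Solving for mu and plugging in fresh Monte Carlo noise zbar* yields fiducial draws mu* = ybar - zbar*, which here coincide with Fisher's classical fiducial distribution N(ybar, 1/n). All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: Y_i = mu + Z_i, Z_i ~ N(0, 1).
# Data generating equation for the mean: ybar = mu + zbar, zbar ~ N(0, 1/n).
n = 50
true_mu = 2.0
y = true_mu + rng.standard_normal(n)  # simulated observations
ybar = y.mean()

# Invert the equation: given observed ybar and fresh randomness zbar*,
# each fiducial draw is mu* = ybar - zbar*.
n_draws = 100_000
zbar_star = rng.standard_normal(n_draws) / np.sqrt(n)
mu_star = ybar - zbar_star  # draws from the fiducial distribution of mu

# The draws match N(ybar, 1/n); report the mean and a 95% fiducial interval.
lo, hi = np.quantile(mu_star, [0.025, 0.975])
print(f"fiducial mean ~ {mu_star.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

In this one-parameter Gaussian case the GFD happens to agree with the Bayesian posterior under a flat prior; the point of GFI is that the same data-generating-equation inversion extends to settings where no prior is specified.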

In this talk we discuss how certain computations within the generalized fiducial framework can be carried out using a deep neural network. The resulting approximation to the fiducial distribution is termed the deep fiducial distribution (DFD). We conclude by summarizing several difficult open problems related to this approach.