R. A. Fisher, the father of modern statistics, proposed the idea of fiducial inference during the first half of the 20th century. While his proposal led to interesting methods for quantifying uncertainty, other prominent statisticians of the time did not accept Fisher's approach, as it became apparent that some of his bold claims about the properties of fiducial distributions did not hold up for multi-parameter problems. Beginning around the year 2000, the speaker and collaborators started to re-investigate the idea of fiducial inference. They discovered that Fisher's approach, when suitably generalized, opens doors to solving many important and challenging inference problems. They termed this generalization of Fisher's idea generalized fiducial inference (GFI).
This talk will provide a brief introduction to GFI and report on the speaker's ongoing work applying GFI to statistical and machine learning problems, including high-dimensional additive models, random forest prediction, matrix completion, and network data analysis.
This is joint work with Wei Du, Qi Gao, Jan Hannig, Hari Iyer, Randy Lai, Yi Su, Suofei Wu, and Chunzhe Zhang.