Keywords: Approximate Fiducial Computation, Deep Fiducial Inference, Fiducial Autoencoder
Since the mid-2000s, there has been a resurgence of interest in modern modifications of fiducial inference. To date, the main computational tools for extracting a Generalized Fiducial Distribution have been MCMC and related methods. In this paper, we propose an alternative way of computing a Generalized Fiducial Distribution that can be used in complex situations. Under the generalized fiducial inference framework, we first design the Approximate Fiducial Computation (AFC) algorithm to generate approximate generalized fiducial samples, from which inference can be drawn without knowing the closed form of the fiducial density. To overcome the difficulty that arises when the inverse function or the marginal fiducial density is intractable, we further design the Fiducial Autoencoder (FAE) to approximate the inverse function; AFC is then applied to obtain generalized fiducial samples. The universal approximation theorem for neural networks provides theoretical guarantees for the approximation performance, and our simulations further validate the method.
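To illustrate the rejection-sampling idea behind an AFC-style algorithm (in the spirit of Approximate Bayesian Computation), the sketch below applies it to a toy normal location model Y_i = θ + U_i with U_i ~ N(0, 1): propose the pivotal quantity U*, solve the data-generating equation for the best-fitting θ*, and keep θ* only when the equation is nearly satisfied. The model, tolerance, and function names here are illustrative assumptions, not the paper's actual algorithm or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def afc_samples(y, n_draws=20000, eps=1.5):
    """ABC-style rejection sketch of approximate fiducial sampling.

    For each proposed U*, the best-fitting theta* minimizes
    ||y - theta - U*||, i.e. theta* = mean(y - U*); theta* is
    accepted only when the residual of the data-generating
    equation y = theta + U is small (per-observation RMS <= eps).
    """
    n = len(y)
    accepted = []
    for _ in range(n_draws):
        u = rng.standard_normal(n)      # proposal for the pivotal quantity U*
        theta = np.mean(y - u)          # best-fitting theta for this U*
        rms = np.linalg.norm(y - theta - u) / np.sqrt(n)
        if rms <= eps:                  # keep near-solutions of y = theta + u
            accepted.append(theta)
    return np.array(accepted)

y = 2.0 + rng.standard_normal(30)      # synthetic data, true theta = 2
samples = afc_samples(y)
print(len(samples), samples.mean())
```

The accepted draws of θ* approximate a fiducial distribution for the location parameter; shrinking the tolerance `eps` tightens the approximation at the cost of a lower acceptance rate, exactly the trade-off ABC-type methods face.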