Abstract:
|
Many classical network models (e.g., stochastic blockmodels, graphons, exponential random graph models) are ill-suited for modern applications because they implicitly assume that the data is obtained by an unrealistic sampling scheme, such as vertex selection or simple random vertex sampling. More recent approaches (completely random measures and edge exchangeable models) address some of these limitations, but leave considerable room for further exploration of the role played by sampling in network analysis. I present here a framework intended to overcome the theoretical and practical issues that arise from the use of misspecified network models. Within this framework I discuss how to incorporate the sampling scheme into statistical models in a way that is both flexible and insightful for modern network science applications.
The content of this talk is drawn from a newly released book titled "Probabilistic Foundations of Statistical Network Analysis" (Chapman & Hall).