In this paper, we consider the problem of estimating a change point in a sequence of large random graphs, where the signal available for estimating the change point is abundant. The challenge is that as the size of the graph grows, the number of edges grows so quickly that using all of the data becomes computationally infeasible. We propose two ways to analyze the behavior of the least squares estimator, and use them to derive the minimum sample size required for sampling. We also propose a procedure for estimating the change point via sampling: we first use a pilot sample to gain preliminary knowledge about the change structure of the graph, then determine the minimum sample size using our theory, and finally apply an adaptive sampling algorithm to verify the estimate. Numerical studies demonstrate that both our theory and our estimation procedure work well.
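To make the sampling idea concrete, the following is a minimal illustrative sketch (not the paper's exact model or algorithm): we simulate a sequence of random graphs whose edge probability changes at a time `tau`, sample a small subset of edge positions rather than using all edges, and estimate the change point by least squares, i.e., by minimizing the within-segment residual sum of squares of the sampled edge densities. All variable names and the Erdős–Rényi-style setup here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sequence of n random graphs on p nodes whose edge
# probability jumps from 0.1 to 0.3 at time tau (illustrative setup,
# not the paper's exact model).
n, p, tau = 60, 100, 25
probs = np.where(np.arange(n) < tau, 0.1, 0.3)
graphs = rng.random((n, p, p)) < probs[:, None, None]

# Sample only m of the p*(p-1)/2 possible edge positions, instead of
# using every edge -- this is the computational saving from sampling.
m = 200
iu = np.triu_indices(p, k=1)
idx = rng.choice(iu[0].size, size=m, replace=False)
# Per-graph edge density over the sampled positions.
x = graphs[:, iu[0][idx], iu[1][idx]].mean(axis=1)

def rss(v):
    """Residual sum of squares around the segment mean."""
    return ((v - v.mean()) ** 2).sum()

# Least squares change point estimate: the split minimizing the total
# within-segment residual sum of squares.
t_hat = min(range(1, n), key=lambda t: rss(x[:t]) + rss(x[t:]))
```

With a change of this magnitude, the estimate `t_hat` recovers `tau` from only `m = 200` sampled edge positions per graph; the paper's contribution is the theory determining how small such a sample can be while the least squares estimator remains reliable.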