Designing and conducting a survey is difficult because prior information on response rates and similar quantities is likely generated by a random process different from the target process governing the survey to be designed. Survey processes such as text classification add further difficulty, as they may vary from one human or machine coder to another. Each of these error-prone sources of information can substantially affect the properties of the estimator. We are concerned with reducing the side effects of both the prior information and the processed information on the quality of the estimator of the parameter of interest during the data collection period. Nowadays, computer-assisted survey methods provide an instant variety of observations on the survey process and on the target random process governing the survey under consideration. These paradata, data, and quality measures enable the survey producer to decide whether the methodology or process needs revision during data collection; such decisions involve both a model that represents how the target information relates to the error-prone information and a design that describes how the observations are obtained. We treat the error-prone and target information as random variables with a joint probability distribution. Then, at each point of data collection, after observing the values taken by the target random process, we update the joint probability distribution and revise the design specification accordingly. In addition, we discuss the coefficient of reliability for a survey, both as a whole set of processes and as a single process.
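The sequential updating described above can be sketched in a minimal form. The following Python example is illustrative only and is not the paper's actual model: it assumes a conjugate Beta-Binomial setup in which an error-prone prior on the response rate, derived from historical surveys, is updated with observations arriving wave by wave, and a design revision is flagged when the posterior mean drifts away from the prior assumption. All names, thresholds, and pseudo-counts here are hypothetical.

```python
# Illustrative sketch (not the paper's model): sequential Bayesian updating
# of a response rate during data collection, under an assumed Beta-Binomial
# conjugate model. The prior encodes error-prone historical information.

from dataclasses import dataclass


@dataclass
class BetaResponseRate:
    alpha: float  # prior pseudo-count of responses
    beta: float   # prior pseudo-count of non-responses

    def update(self, responses: int, contacts: int) -> None:
        # Conjugate Beta-Binomial update with one wave's observations.
        self.alpha += responses
        self.beta += contacts - responses

    @property
    def mean(self) -> float:
        # Posterior mean of the response rate.
        return self.alpha / (self.alpha + self.beta)


# Hypothetical prior from earlier surveys: about a 50% response rate,
# with a total pseudo-sample weight of 20.
rate = BetaResponseRate(alpha=10.0, beta=10.0)

# Hypothetical (responses, contacts) pairs arriving during collection.
for responses, contacts in [(12, 40), (9, 35), (7, 30)]:
    rate.update(responses, contacts)
    # A design revision could be considered when the posterior mean
    # falls below an assumed threshold, here 0.35.
    if rate.mean < 0.35:
        print(f"posterior mean {rate.mean:.3f}: consider revising the design")
```

In this sketch the revision rule is a simple threshold on the posterior mean; in practice the decision would also weigh the model relating error-prone to target information and the design describing how observations are obtained, as discussed above.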