Over the past two decades, recommendations have evolved on the appropriate bioanalytical and statistical methods for establishing cut points for ADA assays. However, these recommendations have overemphasized default values for diagnostic specificity, e.g., 95% for screening ADA assays, with the aim of ensuring high diagnostic sensitivity, analytical sensitivity, and drug tolerance. In reality, controlling diagnostic specificity is an inefficient tool for improving diagnostic and analytical sensitivity compared with the choice of assay platform and method. In most cases, a reduction in diagnostic specificity, e.g., from 99% to 95%, yields negligible improvement in analytical sensitivity at the cost of increased false-positive reporting. The application of mandated specificity requirements, combined with non-orthogonal confirmatory assays, has produced cases in which ADA incidence in the treatment arms is similar to that in the placebo arm, suggesting that most of the reported results are false positives. In such cases, the high proportion of false positives contaminates and dilutes the true-positive results, which is counterproductive to the ultimate goal of associating ADA results with clinical endpoints such as PK, PD, efficacy, and safety. In this presentation, case examples will illustrate how unnecessarily decreasing specificity increases false-positive reporting without identifying more true ADA positives. We posit that adequate analytical sensitivity and drug tolerance are the primary objectives of ADA method development and validation, and that diagnostic specificity should not be mandated but should instead be set as high as is technically and practically feasible.
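The trade-off described above can be sketched with a toy simulation. This is an illustration under hypothetical assumptions, not data from the presentation: screening signals for drug-naive (ADA-negative) subjects are drawn from an assumed lognormal background, and true positives are modeled as a small, clearly elevated subpopulation; the sample sizes and distribution parameters are invented for demonstration only.

```python
# Toy simulation (hypothetical parameters): compare screening cut points set at
# the 95th vs. 99th percentile of an ADA-negative population, i.e., targeting
# 95% vs. 99% diagnostic specificity.
import random

random.seed(42)

N_NEG, N_POS = 10_000, 100  # hypothetical study population
# Assumed lognormal background signal for ADA-negative subjects.
negatives = [random.lognormvariate(0.0, 0.25) for _ in range(N_NEG)]
# Assumed true positives: signal clearly elevated above background.
positives = [random.lognormvariate(1.0, 0.25) for _ in range(N_POS)]

def percentile(data, p):
    """Nearest-rank percentile on sorted data (simple cut-point estimate)."""
    s = sorted(data)
    k = min(len(s) - 1, round(p / 100 * len(s)))
    return s[k]

cp95 = percentile(negatives, 95)  # cut point targeting 95% specificity
cp99 = percentile(negatives, 99)  # cut point targeting 99% specificity

fp95 = sum(x > cp95 for x in negatives)  # false positives at 95% cut point
fp99 = sum(x > cp99 for x in negatives)  # false positives at 99% cut point
tp95 = sum(x > cp95 for x in positives)  # true positives detected at 95%
tp99 = sum(x > cp99 for x in positives)  # true positives detected at 99%

print(f"95th-percentile cut point: {fp95} false positives, {tp95}/{N_POS} true positives")
print(f"99th-percentile cut point: {fp99} false positives, {tp99}/{N_POS} true positives")
```

Under these assumed distributions, moving from the 99th- to the 95th-percentile cut point roughly quintuples the number of false positives while recovering only a handful of additional true positives, mirroring the argument that lowering specificity mainly inflates false-positive reporting.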