Many problems in diagnostic test evaluation stem from disease status not being verified in all subjects. An extreme form of verification bias arises in comparative studies of two diagnostic tests in which disease status is verified only in subjects who test positive on at least one of the tests. Under this design, only the ratio of sensitivities and the ratio of false positive fractions of the two tests can be estimated (Schatzkin et al., AJE, 1987). Based on these ratios, we consider two criteria under which test A can be declared superior to test B in both positive and negative predictive value. The second criterion depends on the odds ratio for test B, which is only partially identifiable but for which prior information may be available. For inference, we consider Bayesian models similar to those of Black and Craig (Stat Med, 2002), with priors specified under both conditional independence and conditional dependence structures for the two test results given disease status. To illustrate, we consider HPV testing to screen for cervical cancer.
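Under the screen-positive design, the estimable ratios reduce to simple count ratios among verified subjects: among verified diseased subjects, the ratio of the number positive on test A to the number positive on test B estimates the relative sensitivity, and the analogous ratio among verified non-diseased subjects estimates the relative false positive fraction. A minimal sketch with hypothetical counts (the function name and all numbers are illustrative, not data from the study):

```python
# Paired screen-positive design: disease status is verified only for
# subjects positive on test A, test B, or both, so each verified subject
# falls into one of three cells: A+B+, A+B-, or A-B+.

def relative_accuracy(n_both, n_a_only, n_b_only):
    """Ratio of A-positives to B-positives among verified subjects.

    Among the diseased, this estimates TPR_A / TPR_B (relative sensitivity);
    among the non-diseased, FPF_A / FPF_B (relative false positive fraction).
    """
    return (n_both + n_a_only) / (n_both + n_b_only)

# Hypothetical verified diseased subjects:
rel_tpr = relative_accuracy(n_both=40, n_a_only=20, n_b_only=10)
# Hypothetical verified non-diseased subjects:
rel_fpf = relative_accuracy(n_both=25, n_a_only=30, n_b_only=15)

print(rel_tpr)  # 1.2   -> test A detects 20% more cases than test B
print(rel_fpf)  # 1.375 -> but also yields 37.5% more false positives
```

Note that the absolute sensitivities and false positive fractions remain unidentified, since subjects negative on both tests are never verified; only the ratios above are estimable from this design.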