Agreement Chart

While the ROC curve carries information on the sensitivity and specificity of a diagnostic test, the agreement chart carries information on its positive and negative predictive values. This article describes how to create an agreement chart in R. Cohen J: Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968, 70: 213-220. 10.1037/h0026256. Fleiss JL: Measuring nominal scale agreement among many raters. Psychol Bull. 1971, 76: 378-382. 10.1037/h0031619.
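The chart itself is drawn in R, but the quantities it encodes are easy to read off a 2x2 table. The following sketch (Python rather than R, with entirely invented counts) shows how sensitivity, specificity, and the two predictive values mentioned above are computed:

```python
# Hypothetical 2x2 table for a diagnostic test (counts are invented):
#                  disease present   disease absent
# test positive          90                30
# test negative          10               170
tp, fp, fn, tn = 90, 30, 10, 170

sensitivity = tp / (tp + fn)  # P(test+ | disease present)  -> 0.90
specificity = tn / (tn + fp)  # P(test- | disease absent)   -> 0.85
ppv = tp / (tp + fp)          # P(disease | test+)          -> 0.75
npv = tn / (tn + fn)          # P(no disease | test-)       -> ~0.94

print(sensitivity, specificity, ppv, npv)
```

Note that sensitivity and specificity condition on the true disease status (columns), while PPV and NPV condition on the test result (rows); the agreement chart visualizes the latter pair.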

The sampling method used has a significant influence on the assessment of the validity of administrative data. Random samples, condition-restricted samples, and case-control samples are currently the most common designs in validity studies of administrative data [19-21]. In our study, the kappa value for hypertension was 0.72 for the random sample and 0.69 for the restricted sample. Previous studies suggest that kappa values depend to a large extent on the prevalence of the condition. In this study, the prevalence of hypertension was 22.13% in the random sample and 78.77% in the restricted sample. The difference between the kappa values for hypertension could therefore be caused by the difference in prevalence between the random and restricted samples. The sampling method also affected the value of PABAK, which varied with the type of sampling used. By definition, PABAK adjusts the prevalence to 50% and the bias to zero [6]; its value depends only on the observed agreement. It reached 0.82 in the case-control sample, in which the prevalence of hypertension was 50%.
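The prevalence effect described above can be illustrated numerically. The sketch below (Python, with made-up 2x2 tables, not the study's data) computes Cohen's kappa and PABAK for two tables that share the same observed agreement (0.80) but differ in prevalence: kappa drops from 0.60 to about 0.22, while PABAK stays at 0.60 because it depends only on the observed agreement.

```python
def kappa_pabak(a, b, c, d):
    """Observed agreement, Cohen's kappa, and PABAK for the 2x2 table
    [[a, b], [c, d]] comparing two binary ratings."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe), 2 * po - 1            # PABAK = 2*po - 1

# Two invented tables with identical observed agreement (0.80):
print(kappa_pabak(40, 10, 10, 40))  # balanced prevalence: kappa = 0.60
print(kappa_pabak(75, 10, 10, 5))   # skewed prevalence:   kappa ~ 0.22
# PABAK is 0.60 in both cases, since it is a function of po alone.
```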

These results are consistent with Vach's report [22], which was based on hypothetical samples. To overcome the effect of prevalence on the kappa value, some researchers advocate using a case-control sample and a prevalence-adjusted kappa when assessing the validity of conditions with low or very high prevalence [3, 4, 9, 23]. One possible reason for the variation in PABAK across sampling designs is the change in observed agreement caused by the differing prevalence under each design. The results of a validation study should therefore be interpreted in light of the sampling design and the prevalence of the condition.

In 1960, Cohen developed the kappa statistic for the analysis of categorical data; it corrects the observed agreement for the amount of agreement expected by chance [1]. Since its inception, kappa has been the subject of extensive investigation and criticism (Table 1). A common criticism is that kappa depends to a large extent on the prevalence of the condition in the population. To overcome this limitation, several alternative measures of agreement have been studied [5-8]. In 1993, Byrt et al. [9] proposed the prevalence-adjusted bias-adjusted kappa (PABAK), which assumes a 50% prevalence of the condition and the absence of bias. PABAK has been used in many studies to evaluate agreement [10-17]. Compared with kappa, PABAK reflects an idealized situation and ignores the variation in prevalence and bias present in the "real" world. To demonstrate the performance of kappa and PABAK as the prevalence of a condition varies with the sampling method, we evaluated the agreement between hospital discharge administrative data and chart review data.

We analyzed kappa and PABAK under the following three sampling scenarios: 1) random samples, 2) condition-restricted samples, and 3) case-control samples. Bangdiwala SI, Haedo AS, Natal ML, Villaveces A: The agreement chart as an alternative to the receiver-operating characteristic curve for diagnostic tests. Journal of Clinical Epidemiology, 61 (9): 866-874.
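A rough feel for how the sampling design alone moves kappa can be had from a small simulation. The sketch below (Python; the prevalence, sensitivity, and specificity are all invented and are not the study's values) builds a synthetic population, then compares kappa under a random sample and under a case-control sample drawn on the administrative code; the condition-restricted design is omitted for brevity.

```python
import random

random.seed(1)

# Synthetic population (all rates invented for illustration):
# true (chart-review) prevalence 5%; the administrative record flags the
# condition with sensitivity 0.80 and specificity 0.95.
N = 20000
population = []
for _ in range(N):
    chart = random.random() < 0.05
    admin = random.random() < (0.80 if chart else 0.05)
    population.append((chart, admin))

def kappa(pairs):
    """Cohen's kappa for paired binary ratings (chart, admin)."""
    n = len(pairs)
    a = sum(1 for c, m in pairs if c and m)              # both positive
    d = sum(1 for c, m in pairs if not c and not m)      # both negative
    b = sum(1 for c, m in pairs if not c and m)          # admin-only positive
    cc = n - a - b - d                                   # chart-only positive
    po = (a + d) / n
    pe = ((a + b) * (a + cc) + (cc + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)

# 1) Random sample: prevalence matches the population.
random_sample = random.sample(population, 2000)

# 3) Case-control sample: equal numbers of admin-positive and
#    admin-negative records, pushing the sampled prevalence toward 50%.
positives = [p for p in population if p[1]][:1000]
negatives = [p for p in population if not p[1]][:1000]
case_control = positives + negatives

print(kappa(random_sample), kappa(case_control))
```

The two kappa values differ even though the underlying misclassification rates are identical, which is the point the discussion above makes about reporting the sampling design alongside the agreement statistic.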
