For example, if you had 6 judges, you would have 15 pair combinations to calculate for each participant (use our combination calculator to find out how many pairs you would get for other numbers of judges). Some researchers have expressed concern about κ's tendency to take the frequencies of the observed categories as given, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases; in these situations, κ tends to underestimate agreement on the rare category. For this reason, κ is considered an overly conservative measure of agreement. Others [citation needed] dispute the assertion that kappa "takes into account" chance agreement. To do this effectively, there would need to be an explicit model of how chance affects raters' decisions. The so-called chance adjustment of kappa statistics assumes that, when not entirely certain, raters simply guess, a very unrealistic scenario. When calculating percentage agreement, you need to determine the percentage difference between two numbers. This value can be useful if you want to express the difference between two numbers as a percentage.
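The pair count above is the standard "n choose 2" combination. A minimal sketch in Python (the function name `rater_pairs` is ours, used only for illustration):

```python
from math import comb

def rater_pairs(n_raters: int) -> int:
    """Number of distinct judge pairs among n raters: C(n, 2)."""
    return comb(n_raters, 2)

print(rater_pairs(6))  # 15 pairwise comparisons for 6 judges
```

Each pair of judges contributes one pairwise agreement to compute per participant, which is why the number of calculations grows quadratically with the number of judges.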
Scientists can use the percentage agreement between two numbers to show the relationship between different results as a percentage. To calculate the percentage difference, take the difference between the two values, divide it by the average of the two values, and then multiply the result by 100. A case that is sometimes considered a problem with Cohen's kappa occurs when comparing the kappa calculated for two pairs of raters where both pairs have the same percentage agreement, but one pair gives a similar number of ratings in each class while the other pair gives very different numbers of ratings in each class. For example, in the following two cases there is equal agreement between A and B (60 out of 100 in both cases) with respect to agreement in each class, so we would expect the relative values of Cohen's kappa to reflect this. (Note that in the cases below, rater B gives 70 "yes" and 30 "no" ratings in the first case, but these figures are reversed in the second.)
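A minimal sketch of the two calculations described above: the percentage-difference formula, and Cohen's kappa applied to two hypothetical 2×2 tables constructed to match the scenario described (both tables give 60/100 observed agreement, with rater B's marginals at 70/30 in the first and reversed in the second; the specific cell counts are our assumption, since the source's tables are not reproduced here):

```python
def percent_difference(x: float, y: float) -> float:
    """Difference between two values, divided by their average, times 100."""
    return abs(x - y) / ((x + y) / 2) * 100

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 contingency table [[a, b], [c, d]],
    where rows are rater A's yes/no and columns are rater B's yes/no."""
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    p_o = (a + d) / n                      # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance of both saying "yes"
    p_no = ((c + d) / n) * ((b + d) / n)   # chance of both saying "no"
    p_e = p_yes + p_no                     # expected chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical tables: observed agreement is 60/100 in both,
# but rater B's marginals are 70/30 vs. 30/70.
case1 = [[45, 15], [25, 15]]  # B: 70 yes, 30 no
case2 = [[25, 35], [5, 35]]   # B: 30 yes, 70 no
print(round(cohens_kappa(case1), 3))  # 0.13
print(round(cohens_kappa(case2), 3))  # 0.259
```

Despite identical observed agreement, the two kappa values differ, because the expected chance agreement depends on each rater's marginal distribution; this is the asymmetry the paragraph above describes.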