### The R agreement Package

An IRR analysis was conducted to assess the degree to which coders consistently assigned categorical depression ratings to the subjects in the study. The marginal distributions of the depression ratings did not indicate prevalence or bias problems, suggesting that Cohen's (1960) kappa was an appropriate index of IRR (Di Eugenio & Glass, 2004). Kappa was computed for each pair of coders and then averaged to provide a single index of IRR (Light, 1971). The resulting kappa indicated substantial agreement, κ = 0.68 (Landis & Koch, 1977), and was consistent with previously published IRR estimates obtained from coding similar constructs in other studies. The IRR analysis suggested that coders had substantial agreement in their depression ratings, although the variable of interest contained a modest amount of error variance due to differing subjective ratings by the coders. This would slightly attenuate statistical power for subsequent analyses, but the ratings were deemed adequate for use in the hypothesis tests of the present study.

Both SPSS and the R irr package require users to specify a one-way or two-way model, an absolute-agreement or consistency type, and single-measures or average-measures units. The design of the hypothetical study informs the correct choice among these ICC variants. Note that although SPSS, but not the R irr package, allows the user to specify random or mixed effects, the computations and results are identical for random and mixed effects. In this hypothetical study, all subjects were rated by all coders, so the researcher would likely use a two-way ICC model, because the design is fully crossed, and an average-measures ICC, because the researcher is likely interested in the reliability of the mean ratings provided by all coders.
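The pooling step described above, a kappa for each coder pair averaged into one overall index (Light, 1971), can be sketched in Python. This is a minimal sketch: the function names and the toy ratings are illustrative, not data from the study.

```python
from collections import Counter
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's (1960) kappa for two coders' categorical ratings."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = Counter(a), Counter(b)
    # Chance agreement: product of the coders' marginal proportions per category.
    p_exp = sum((pa[c] / n) * (pb[c] / n) for c in set(a) | set(b))
    return (p_obs - p_exp) / (1 - p_exp)

def lights_kappa(ratings):
    """Light's (1971) kappa: mean of Cohen's kappa over all coder pairs."""
    pairs = list(combinations(ratings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

# Hypothetical ratings from three coders on four subjects.
c1 = ["dep", "dep", "none", "none"]
c2 = ["dep", "none", "none", "none"]
c3 = ["dep", "dep", "none", "none"]
print(round(cohens_kappa(c1, c2), 3))   # 0.5
print(round(lights_kappa([c1, c2, c3]), 3))  # 0.667
```

In practice the R irr package offers this directly (kappa2() for a pair, kappam.light() for the averaged index); the sketch above only makes the arithmetic behind the pooled κ = 0.68 concrete.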
The researcher is interested in assessing the degree to which the coders' ratings correspond, such that higher ratings from one coder correspond to higher ratings from another, but not the degree to which the coders agree on the absolute values of their ratings, which justifies a consistency-type ICC. The coders were not randomly sampled, so the researcher wishes to know how well the coders agreed on their ratings in the current study but does not intend to generalize these ratings to a larger population of coders, which justifies a mixed-effects model. The data presented in Table 5 are in their final form and will not be processed further, so these are the variables on which the IRR analysis should be performed.

The goal of the agreement package is to compute estimates of inter-rater agreement and reliability using generalized formulas that accommodate different study designs, which may be fully crossed or not crossed (nested), may contain missing data, and may involve ordered or unordered categories.
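The ICC variant the discussion settles on, a two-way model with consistency type and average-measures units (Shrout & Fleiss's ICC(3,k), equivalently McGraw & Wong's ICC(C,k)), can be sketched from its two-way ANOVA mean squares. The function name and toy data below are illustrative assumptions, not part of the study:

```python
def icc_consistency_avg(ratings):
    """Average-measures, consistency-type ICC for a fully crossed design.

    `ratings` is a list of rows, one per subject, each holding one rating
    per coder. Consistency type means systematic coder mean differences
    are partialed out (the column effect) before assessing error.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between coders
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_rows - ms_error) / ms_rows

# Two coders who differ in level but track each other perfectly: ICC = 1.0,
# because consistency type ignores the constant offset between coders.
print(icc_consistency_avg([[1, 2], [2, 3], [3, 4], [4, 5]]))  # 1.0
```

In R the same choice is expressed as icc(ratings, model = "twoway", type = "consistency", unit = "average") from the irr package; the sketch shows why the mixed-versus-random distinction does not change the result, since the same mean squares are used either way.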