Type: | Package |
Title: | An R-Shiny Application for Calculating Cohen's and Fleiss' Kappa |
Version: | 2.0.2 |
Date: | 2018-03-22 |
Author: | Frédéric Santos |
Maintainer: | Frédéric Santos <frederic.santos@u-bordeaux.fr> |
Depends: | R (≥ 3.4.0), shiny, irr |
Description: | Offers a graphical user interface for the evaluation of inter-rater agreement with Cohen's and Fleiss' Kappa. The calculation of kappa statistics is done using the R package 'irr', so that 'KappaGUI' is essentially a Shiny front-end for 'irr'. |
License: | GPL-2 | GPL-3 [expanded from: GPL (≥ 2)] |
Encoding: | UTF-8 |
NeedsCompilation: | no |
Packaged: | 2018-03-22 14:35:33 UTC; f.santos |
Repository: | CRAN |
Date/Publication: | 2018-03-22 15:52:45 UTC |
An R-Shiny application for calculating Cohen's and Fleiss' Kappa
Description
Offers a graphical user interface for the evaluation of inter-rater agreement with Cohen's and Fleiss' Kappa. The calculation of kappa statistics is done using the R package 'irr', so that 'KappaGUI' is essentially a Shiny front-end for 'irr'.
Details
Package: | KappaGUI |
Type: | Package |
Version: | 2.0.2 |
Date: | 2018-03-22 |
License: | GPL (≥ 2) |
Author(s)
Frédéric Santos, frederic.santos@u-bordeaux.fr
References
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
See Also
irr::kappa2
Examples
## Not run: StartKappa()
A graphical user interface for calculating Cohen's and Fleiss' Kappa
Description
Launches the R-Shiny application. The user can retrieve inter-rater agreement scores from a file (.CSV or .TXT) loaded directly through the graphical interface.
Usage
StartKappa()
Details
Data importation is done directly through the graphical user interface. Only CSV and TXT files are accepted.
If there are p variables observed by k raters on n individuals, the input file should be a data frame with n rows and (k × p) columns. The first k columns represent the scores attributed by the k raters for the first variable; the next k columns represent the scores attributed by the k raters for the second variable; etc. Cohen's or Fleiss' kappas are returned for each variable.
The data file must contain a header, and the columns must be labeled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater. An example of a correct data file with two raters is given here: http://www.pacea.u-bordeaux.fr/IMG/csv/data_Kappa_Cohen.csv.
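To make the expected layout concrete, here is a minimal sketch of building and saving such a file from R (the file name 'my_scores.csv' is only an example):
# Two raters (A and B) scoring two variables on five individuals;
# column names follow the 'VariableName_X' convention:
scores <- data.frame(
  Trait1_A = c(1, 0, 2, 1, 1),
  Trait1_B = c(1, 2, 0, 1, 2),
  Trait2_A = c(1, 4, 5, 2, 3),
  Trait2_B = c(2, 5, 2, 2, 4)
)
# write.csv() produces a header line that StartKappa() can parse:
write.csv(scores, "my_scores.csv", row.names = FALSE)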
Kappa values are calculated using the functions kappa2 and kappam.fleiss from the package ‘irr’. Please check their help pages for more technical details, in particular about the weighting options for Cohen's kappa. For ordered factors, linear or quadratic weighting can be a good choice, as they give more weight to strong disagreements. If linear or quadratic weighting is chosen, the factor levels are assumed to be ordered alphabetically. As a consequence, a factor with the three levels "Low", "Medium" and "High" would be ordered incorrectly ("High" < "Low" < "Medium"); in this case, please recode the levels with names matching their natural order (a recoding sketch is given below).
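The following sketch illustrates the recoding advice (the data and level names are invented for the example): prefixing the level names with digits makes the alphabetical order coincide with the natural order.
# Levels "Low", "Medium", "High" sort alphabetically as
# "High" < "Low" < "Medium", which breaks linear/quadratic weighting:
x <- factor(c("Low", "High", "Medium", "Low"),
            levels = c("Low", "Medium", "High"))
# Recode so that alphabetical order matches the natural order:
y <- factor(x,
            levels = c("Low", "Medium", "High"),
            labels = c("1_Low", "2_Medium", "3_High"))
sort(levels(y))  # "1_Low" "2_Medium" "3_High": alphabetical = natural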
Value
The function returns no value, but the table of results can be downloaded as a CSV file through the user interface.
Author(s)
Frédéric Santos, frederic.santos@u-bordeaux.fr
References
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
See Also
irr::kappa2, irr::kappam.fleiss
Calculates Cohen's kappa for all pairs of columns in a given dataframe
Description
This function is based on the function 'kappa2' from the package 'irr', and simply adds the possibility of calculating several kappas at once.
Usage
kappaCohen(data, weight="unweighted")
Arguments
data | a data frame with (2 × p) columns, where p is the number of traits, formatted as described in StartKappa. |
weight | character string specifying the weighting scheme ("unweighted", "equal" or "squared"). See the function ‘kappa2’ from the package ‘irr’. |
Details
For each trait, only complete cases are used for the calculation.
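For instance, the Trait1 row of the output could be reproduced manually as follows (a sketch using the 'scores' data frame from the Examples section below, not the package's internal code):
# Keep only the individuals rated by both raters for Trait1,
# then compute Cohen's kappa on the remaining rows:
cols <- c("Trait1_A", "Trait1_B")
cc <- complete.cases(scores[, cols])
irr::kappa2(scores[cc, cols], weight = "unweighted")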
Value
A data frame with p rows (one per trait) and three columns, giving respectively the kappa value for each trait, the number of individuals used to calculate this value, and the associated p-value.
Author(s)
Frédéric Santos, frederic.santos@u-bordeaux.fr
References
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
See Also
irr::kappa2
Examples
# Here we create and display an artificial dataset,
# describing two traits coded by two raters:
scores <- data.frame(
Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
Trait2_B = c(2,5,2,2,4,5,1,3,1,4)
)
scores
# Retrieve Cohen's kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A and B:
kappaCohen(scores, weight="unweighted")
kappaCohen(scores, weight="squared")
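Since kappaCohen is a wrapper around irr::kappa2, each row of its output should match a direct call on the corresponding pair of columns; for instance:
# Cross-check the Trait1 row against a direct call to irr::kappa2:
irr::kappa2(scores[, c("Trait1_A", "Trait1_B")], weight = "unweighted")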
Calculates Fleiss' kappa between k raters for all k-tuples of columns in a given dataframe
Description
This function is based on the function 'kappam.fleiss' from the package 'irr', and simply adds the possibility of calculating several kappas at once.
Usage
kappaFleiss(data, nb_raters=3)
Arguments
data | a data frame with (k × p) columns, where k is the number of raters and p the number of traits, formatted as described in StartKappa. |
nb_raters | integer, the number of raters. |
Details
For each trait, only complete cases are used for the calculation.
Value
A data frame with p rows (one per trait) and two columns, giving respectively the kappa value for each trait and the number of individuals used to calculate this value.
Author(s)
Frédéric Santos, frederic.santos@u-bordeaux.fr
References
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
See Also
irr::kappam.fleiss
Examples
# Here we create and display an artificial dataset,
# describing two traits coded by three raters:
scores <- data.frame(
Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
Trait1_C = c(2,2,2,1,1,1,0,1,2,1),
Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
Trait2_B = c(2,5,2,2,4,5,1,3,1,4),
Trait2_C = c(2,4,3,2,4,5,2,2,3,4)
)
scores
# Retrieve Fleiss' kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A, B and C:
kappaFleiss(scores, nb_raters=3)
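Analogously, since kappaFleiss wraps irr::kappam.fleiss, each row of its output should match a direct call on the corresponding k columns; for instance:
# Cross-check the Trait2 row against a direct call to irr::kappam.fleiss:
irr::kappam.fleiss(scores[, c("Trait2_A", "Trait2_B", "Trait2_C")])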