Finally, I'd like to say a few words about intercoder reliability. I argued in the last video that assessing agreement is a key process for codebook development: it's quite important for making sure that individual analysts are on the same page in their understanding of what codes are meant to capture and why. More broadly in qualitative research, this is quite a polemical issue, with some researchers arguing that intercoder reliability is essential and others suggesting that it's ridiculous. In public health and related disciplines, quantitative approaches tend to be the dominant paradigm. As a result, some researchers argue that it's important to report statistics that quantify the level of agreement between coders as an indicator of rigor in analysis. In general, though, I'm quite cautious about trying to quantify levels of agreement, particularly as a measure of rigor in analysis. Janice Morse, for instance, stated in a 1994 article: "No one takes a second reader to the library to check that indeed he or she is interpreting the original sources correctly, so why does anyone need a reliability checker for his or her data?"
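The video doesn't name a specific agreement statistic, but Cohen's kappa is one commonly reported measure of intercoder agreement that corrects for agreement expected by chance. A minimal sketch, purely for illustration (the `cohens_kappa` function and the example codes below are my own, not from the video):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders applying hypothetical codes to five transcript segments.
a = ["barrier", "barrier", "facilitator", "barrier", "facilitator"]
b = ["barrier", "facilitator", "facilitator", "barrier", "facilitator"]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

Here observed agreement is 0.8 (4 of 5 segments), but kappa is lower because some of that agreement would be expected by chance given how often each coder used each code.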