VIHI intercoder ACLEW Reliability
We are going to calculate inter-coder reliability by re-coding 10% of the intervals for the annotations that use a closed vocabulary (xds, vcm, lex, mwu). We will NOT be re-segmenting utterances or re-transcribing them: not because we don't want to know, but because doing so would take excessively long and there is no straightforward way to calculate percent agreement for it.
We have run a script over the files that outputs a copy that contains only the annotations from TWO intervals out of the 20 (15 possible random intervals, 5 possible high-volubility intervals) and leaves blank annotations on the closed-vocabulary tiers.
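The interval-selection step can be sketched roughly as follows. This is a minimal illustration, not the actual script: the interval labels are hypothetical, and only the pool sizes (15 random, 5 high-volubility) come from the description above.

```python
import random

# Hypothetical interval labels: 15 random intervals and 5 high-volubility ones.
random_pool = [f"random_{i}" for i in range(1, 16)]
high_vol_pool = [f"high_vol_{i}" for i in range(1, 6)]

# Pick TWO of the 20 intervals to keep for reliability re-coding;
# the copied file retains annotations only for these two.
reliability_intervals = random.sample(random_pool + high_vol_pool, k=2)
print(reliability_intervals)
```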
1) Find your file:
sox4.university.harvard.edu/Fas-Phyc-PEB-Lab/VIHI/SubjectFiles/LENA/vihi_reliability/[your_initials]
and navigate to the .eaf that corresponds to the number you have been assigned on Asana. Go through, listening to the two intervals, and re-code all the empty annotations. Save your file, then export it as a tab-delimited .txt file.
The .log file tells you which intervals have been left for reliability and the timestamps they occur at, which will help you navigate through your file to find the segments you need to re-code.
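If you want to pull the timestamps out of the .log file programmatically rather than reading it by eye, something like the sketch below works. The line format shown here is a made-up example; check your actual .log file and adjust the pattern to match.

```python
import re

# Hypothetical .log line format -- the real format may differ:
#   reliability interval 2: 00:42:10.500 - 00:44:10.500
sample_log = """\
reliability interval 1: 00:05:00.000 - 00:07:00.000
reliability interval 2: 00:42:10.500 - 00:44:10.500
"""

pattern = re.compile(r"reliability interval (\d+): ([\d:.]+) - ([\d:.]+)")
intervals = [(int(m.group(1)), m.group(2), m.group(3))
             for m in pattern.finditer(sample_log)]
print(intervals)
```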
2) Assign the file back to Lilli and Zhenya
Lilli and Zhenya calculate overall agreement between the two coders, and keep tabs on agreement by: sensory group (TD, HI, VI); original coder; re-coder; tier type; mistake type?
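Percent agreement between the two coders can be computed from the paired annotations in the tab-delimited exports. The sketch below is illustrative only: the column names ("code", "tier") are assumptions, not the real ELAN export headers, and it assumes the two row lists are already aligned annotation-for-annotation.

```python
from collections import defaultdict

def percent_agreement(original_rows, recoded_rows, key="code", group="tier"):
    """Overall and per-group percent agreement between paired annotations.

    Assumes the two row lists are aligned (same annotations, same order).
    The 'code' and 'tier' column names are illustrative placeholders.
    """
    total = agree = 0
    by_group = defaultdict(lambda: [0, 0])  # group -> [agreements, total]
    for orig, recode in zip(original_rows, recoded_rows):
        match = orig[key] == recode[key]
        total += 1
        agree += match
        by_group[orig[group]][0] += match
        by_group[orig[group]][1] += 1
    overall = 100 * agree / total if total else 0.0
    per_group = {g: 100 * a / n for g, (a, n) in by_group.items()}
    return overall, per_group
```

The same grouping trick extends to sensory group, original coder, or re-coder: just store those as extra columns in the rows and pass a different `group` key.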
3) Resolving disagreements
The file will be reassigned to you. Grab another ACLEW-trained coder (e.g. another RA or Lilli). Your job is to find the codes where you disagreed with the original coder, talk through the rationale for each possibility, and jointly decide what the final code should be. Open up the .eaf file in
sox4.university.harvard.edu/Fas-Phyc-PEB-Lab/VIHI/SubjectFiles/LENA/annotations/[group]/Subj_num
and fix the codes to match your final decision. Do not change anything other than these points of disagreement!
4) Reassign the task to Lilli so she can commit the changes.
Last updated