
bratiaa

Inter-annotator agreement for Brat annotation projects. For a quick overview of the output generated by bratiaa, have a look at the example files. So far, only text-bound annotations are supported; all other annotation types are ignored. This package is an improved version of the code for calculating inter-annotator agreement used by Kolditz et al. (2019).

Installation

Install the package via pip:

pip install bratiaa

Project Structure

By default, bratiaa expects each first-level subdirectory of the annotation project to contain the files of one annotator. It automatically determines the set of files annotated by each annotator (files with the same relative path below the different annotators' directories). Here is a simple example:

example-project/
├── annotation.conf
├── annotator-1
│   ├── doc-1.ann
│   ├── doc-1.txt
│   ├── doc-3.ann
│   ├── doc-3.txt
│   └── second
│       ├── doc-2.ann
│       └── doc-2.txt
└── annotator-2
    ├── doc-3.ann
    ├── doc-3.txt
    ├── doc-4.ann
    ├── doc-4.txt
    └── second
        ├── doc-2.ann
        └── doc-2.txt

In this example, we have two agreement documents: 'second/doc-2.txt' and 'doc-3.txt'. The other two documents are each annotated by only a single annotator.

If you have a different project setup, you need to provide your own input_generator function, yielding document objects that hold the path to a plain-text file and the paths to all corresponding ANN files (cf. bratiaa.agree.py).
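Below is a minimal sketch of such a generator for the two-annotator layout shown above. The document type and the exact signature expected by compute_f1_agreement are defined in bratiaa/agree.py, so the Document namedtuple and its field names here are illustrative assumptions; check the source for the real interface.

from collections import namedtuple
from pathlib import Path

# illustrative stand-in for the document type defined in bratiaa/agree.py
Document = namedtuple('Document', ['txt_path', 'ann_paths'])

ANNOTATORS = ['annotator-1', 'annotator-2']

def input_generator(root):
    # yield one document per text file that all annotators have annotated
    root = Path(root)
    for txt_path in sorted((root / ANNOTATORS[0]).glob('**/*.txt')):
        rel = txt_path.relative_to(root / ANNOTATORS[0])
        ann_paths = [root / name / rel.with_suffix('.ann') for name in ANNOTATORS]
        if all(path.exists() for path in ann_paths):
            yield Document(txt_path, ann_paths)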

Usage

You can either use bratiaa as a Python library or as a command-line tool.

Python Interface

import bratiaa as biaa

project = '/path/to/brat/project'

# instance-level agreement
f1_agreement = biaa.compute_f1_agreement(project)

# print agreement report to stdout
biaa.iaa_report(f1_agreement)

# agreement per label
label_mean, label_sd = f1_agreement.mean_sd_per_label()

# agreement per document
doc_mean, doc_sd = f1_agreement.mean_sd_per_document() 

# total agreement
total_mean, total_sd = f1_agreement.mean_sd_total()

For token-level evaluation, please use your own tokenization function. Given a string, it should yield (start, end) character offset tuples, like the example function below.

import re
import bratiaa as biaa

# simple tokenizer: runs of word characters or runs of other non-whitespace
TOKEN = re.compile(r'\w+|[^\w\s]+')

def token_func(text):
    for match in TOKEN.finditer(text):
        yield match.start(), match.end()

# token-level agreement
f1_agreement = biaa.compute_f1_agreement('/path/to/brat/project', token_func=token_func)

CLI

Help message: brat-iaa -h

# instance-level agreement and heatmap
brat-iaa /path/to/brat/project --heatmap instance-heatmap.png > instance-agreement.md

# token-level agreement (not recommended)
brat-iaa /path/to/brat/project -t --heatmap token-heatmap.png > token-agreement.md

The token-based evaluation of the command-line interface uses the generic pattern '\S+' to identify tokens (splitting on whitespace) and hence is not recommended. Please use the Python interface with a language- and task-specific tokenizer instead.
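For reference, given the '\S+' pattern stated above, the CLI's generic tokenization corresponds roughly to the following token_func:

import re

def whitespace_token_func(text):
    # maximal runs of non-whitespace characters, as used by the CLI
    for match in re.finditer(r'\S+', text):
        yield match.start(), match.end()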

For the output formats generated by the above commands, have a look at the example files.

Agreement Measure

We can think of an annotation as a triple (d, l, o), where d is a document id, l is a label, and o is a list of start-end character offset tuples. An annotator i contributes a (multi)set Ai of (token) annotations. For each 2-combination of annotators (i, j), we compute

F1(i, j) = 2 |Ai ∩ Aj| / (|Ai| + |Aj|)

and report the arithmetic mean and standard deviation of F1 across all these combinations (see Hripcsak & Rothschild, 2005). Grouping annotations by document or label allows us to calculate F1 per document or per label.
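As a quick plain-Python illustration of this formula (not using the bratiaa API), consider two annotators who agree on one of two instances each:

# annotations as hashable (document, label, offsets) triples
a_i = {('doc-3.txt', 'ORG', ((0, 18),)),
       ('doc-3.txt', 'LOC', ((25, 29),))}
a_j = {('doc-3.txt', 'ORG', ((0, 18),)),
       ('doc-3.txt', 'LOC', ((25, 30),))}  # boundary disagreement on LOC

f1 = 2 * len(a_i & a_j) / (len(a_i) + len(a_j))
print(f1)  # 2 * 1 / (2 + 2) = 0.5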

Instance-Based Agreement

Each text-bound annotation in Brat is an annotation instance. Two identical instances from a single annotator (triples where d, l, and o are all identical) are considered accidental; only unique annotation instances are used for calculating agreement, i.e., we are dealing with sets.
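Duplicates therefore vanish as soon as the triples are collected in a set, as this toy snippet (plain Python, not the bratiaa API) shows:

# two accidentally identical instances from one annotator collapse to one
instances = [('doc-1.txt', 'ORG', ((0, 18),)),
             ('doc-1.txt', 'ORG', ((0, 18),))]
print(len(set(instances)))  # 1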

Token-Based Agreement

Each annotation instance is split up into its overlapping tokens. For example, if our tokenizer splits on whitespace, "[ORG Human Rights Watch]" and "[ORG Human Rights Wat]ch" both become "[ORG Human] [ORG Rights] [ORG Watch]". Here we are dealing with multisets of these split annotations: overlapping annotations of the same type can produce multiple token-based annotations with the same document, label, and offsets. For example, in "[LOC University of [LOC Jena]]" the two overlapping location annotations result in four token-based annotations, two of which are identical ("[LOC Jena]").

Be aware that "[ORG Human] [ORG Rights Watch]" and "[ORG Human Rights] [ORG Watch]" both become "[ORG Human] [ORG Rights] [ORG Watch]"; that is, boundary errors between adjacent annotations of the same type are ignored!
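A minimal sketch of this splitting step (a plain-Python illustration, not bratiaa's actual implementation), using whitespace tokenization:

import re
from collections import Counter

def split_into_tokens(doc_id, label, start, end, text):
    # one token-level annotation per token overlapping the instance;
    # the full token span is used, as in the examples above
    for match in re.finditer(r'\S+', text):
        if match.start() < end and match.end() > start:
            yield doc_id, label, (match.start(), match.end())

text = 'Human Rights Watch'
# "[ORG Human Rights Watch]" vs. "[ORG Human Rights Wat]ch"
a = Counter(split_into_tokens('doc', 'ORG', 0, 18, text))
b = Counter(split_into_tokens('doc', 'ORG', 0, 16, text))
print(a == b)  # True: both collapse to Human, Rights, and Watch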

References

Hripcsak, G., & Rothschild, A. S. (2005). Agreement, the F-measure, and reliability in information retrieval. Journal of the American Medical Informatics Association, 12(3), pp. 296-298.

Kolditz, T., Lohr, C., Hellrich, J., Modersohn, L., Betz, B., Kiehntopf, M., & Hahn, U. (2019). Annotating German clinical documents for de-identification. In MedInfo 2019 – Proceedings of the 17th World Congress on Medical and Health Informatics. Lyon, France, 25-30 August 2019. IOS Press, pp. 203-207.

License

This software is provided under the MIT License. The code contains a modified subset of brat, which is available under the same permissive license.
