Multi-user annotation

tagtog is a multi-user tool. Collaborate with other users to annotate faster and improve the quality of your annotations.

It supports different annotator roles. Each user can annotate their own copy of the text, which facilitates the review process and the measurement of inter-annotator agreement (IAA).

Roles

tagtog comes with a set of predefined user roles. Below is a summary description of each role; the Permissions section further down lists the permissions associated with these roles in detail.

admin: Can read all users' annotations and edit them. They can edit master's and their own annotations. Moreover, they can edit all project settings. All permissions are active for this role. By default, the user who creates a project becomes its admin.

reviewer: Can read all users' annotations and edit them. They can edit master's and their own annotations. Moreover, they can edit some settings, see the project metrics and use the API.

supercurator: Can edit master's and their own annotations. They can read the project settings, see the project metrics and use the API.

curator: Can edit their own annotations. They cannot edit master's annotations, but they can export master into their own annotations. They cannot see the project metrics or use the API.

reader: Can only read master's annotations. They can see the project metrics.

Create custom roles

Depending on your plan, you can create custom roles and define their permissions. Read how to manage and create custom roles.

Permissions

tagtog's role-based access control helps you manage what users can do in a project and which areas they have access to. Below you can find the permissions available in tagtog; each role has an associated set of permissions.

Permissions are grouped by realm and component. Each default role (reader, curator, supercurator, reviewer, admin) is granted a subset of these permissions, as outlined in the Roles section above.

Realm: settings
  Guidelines
    canReadGuidelinesConf: Read access for Settings - Guidelines
    canEditGuidelinesConf: Write access for Settings - Guidelines
  Annotation Tasks
    canReadAnnTasksConf: Read access for all annotation tasks, namely: Document Labels, Entities, Dictionaries, Entity Labels, and Relations
    canEditAnnTasksConf: Write access for all annotation tasks, namely: Document Labels, Entities, Dictionaries, Entity Labels, and Relations
  Requirements
    canReadRequirementsConf: Read access for Settings - Requirements
    canEditRequirementsConf: Write access for Settings - Requirements
  Annotatables
    canReadAnnotatablesConf: Read access for Settings - Annotatables
    canEditAnnotatablesConf: Write access for Settings - Annotatables
  Annotation
    canReadAnnotationsConf: Read access for Settings - Annotations
    canEditAnnotationsConf: Write access for Settings - Annotations
  Webhooks
    canReadWebhooksConf: Read access for Settings - Webhooks
    canEditWebhooksConf: Write access for Settings - Webhooks
  Members
    canReadMembersConf: Read access for Settings - Members
    canEditMembersConf: Write access for Settings - Members
  Admin
    canReadAdminConf: Read access for Settings - Admin
    canEditAdminConf: Write access for Settings - Admin

Realm: documents
  Content
    canCreate: Rights to import documents into the project
    canDelete: Rights to remove documents from the project
  Own version
    canEditSelf: Write access to the user's own version of the annotations
  Master version
    canReadMaster: Read access to the master version of the annotations
    canEditMaster: Write access to the master version of the annotations (ground truth)
  Others' versions
    canReadOthers: Read access to every project member's version of the annotations
    canEditOthers: Write access to every project member's version of the annotations

Realm: folders
    canCreate: Rights to create folders
    canUpdate: Rights to rename existing folders
    canDelete: Rights to delete existing folders

Realm: dictionaries
    canCreateItems: Rights to add items to the dictionaries using the editor

Realm: metrics
    canRead: Read access to the metrics of the project (Metrics tab) or the metrics for annotation tasks in a document (e.g. IAA)

Realm: API
    canUse: Users with this permission can use the API and see the output formats in the UI
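
As a rough illustration of how these permissions relate to the default roles, the sketch below encodes a partial mapping derived only from the role descriptions above; it is not an exhaustive or authoritative permission matrix:

```python
# Partial, illustrative mapping from default roles to permissions, derived from the
# role descriptions above. Not an authoritative permission matrix.
ROLE_PERMISSIONS = {
    "reader": {"canReadMaster", "metrics.canRead"},
    "curator": {"canEditSelf", "canReadMaster"},
    "supercurator": {"canEditSelf", "canEditMaster", "metrics.canRead", "API.canUse"},
    "reviewer": {"canEditSelf", "canEditMaster", "canReadOthers", "canEditOthers",
                 "metrics.canRead", "API.canUse"},
    "admin": {"*"},  # all permissions are active for this role
}

def has_permission(role: str, permission: str) -> bool:
    """Return True if the given default role includes the permission."""
    granted = ROLE_PERMISSIONS.get(role, set())
    return "*" in granted or permission in granted

print(has_permission("curator", "canEditMaster"))   # False: curators cannot edit master
print(has_permission("reviewer", "canEditOthers"))  # True: reviewers can edit all users' annotations
```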

Annotation versions

Each user has an independent version of the annotations for each document. For instance, UserA could have 20 entities and UserB could have 5 different entities on the exact same document. In addition, each document has a master version, which is usually treated as the final/official version (ground truth).
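
Conceptually, you can picture the versions of a single document as shown below. This is only an illustrative sketch of the idea, not tagtog's internal data model:

```python
# Illustrative sketch only: each document holds one annotation version per member
# plus the master version (ground truth). This is not tagtog's internal data model.
document_versions = {
    "master": {"entities": []},  # the final/official version (ground truth)
    "members": {
        "UserA": {"entities": ["entity-1", "entity-2"]},  # e.g. 20 entities in practice
        "UserB": {"entities": ["entity-3"]},              # e.g. 5 different entities, same document
    },
}
```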

Annotation flows & Task Distribution

There are different ways you can organize your annotation tasks. These are the most common:


Annotators annotate directly on the master version (ground truth). No review.

This is the simplest flow; there is no review step. Make this choice if you are working alone, if you trust your annotators' annotations, or if time is a constraint. This is the project's default. Here, for simplicity, we explain the flow using the default roles.

1. Add users to your project. As the project admin, go to Settings → Members to add members to your project.

2. Create clear guidelines. Here, the admin writes down what is to be annotated and which types of annotations to use. Clear and complete guidelines are key to aligning all project members.

3. Import text. Admins and supercurators can import the documents to be annotated by the group. Any project member can see these documents (see the API sketch after these steps for a programmatic alternative).

4. Distribute documents among annotators. Either let users pick any not-yet-confirmed document, or, for example, manually assign document IDs to each user.

5. The group starts annotating. Each user annotates only the master version of the assigned documents. Once a document is annotated, the user marks the annotations as completed by clicking the Confirm button. Admins can check the progress in the document list view.
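
Step 3 can also be done programmatically. The sketch below assumes tagtog's documents API v1; the base URL, credentials and parameter values are placeholders, so check them against the API documentation for your tagtog version:

```python
# Sketch of a programmatic document import, assuming tagtog's documents API v1.
# The base URL, credentials, project name and file are placeholders; verify the
# exact endpoint and parameters in the API documentation for your tagtog version.
import requests

TAGTOG_DOCS_API = "https://www.tagtog.com/-api/documents/v1"  # assumption: cloud base URL
auth = ("MY_USERNAME", "MY_PASSWORD")
params = {"owner": "MY_USERNAME", "project": "my-project", "output": "null"}

with open("doc-to-annotate.txt", "rb") as f:
    response = requests.post(TAGTOG_DOCS_API, auth=auth, params=params,
                             files={"files": f})

print(response.status_code, response.text)
```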


Documents are automatically distributed; one annotator per document

Make this choice if the annotation task is simple or if time is a constraint. If you assign each document to only one annotator, the quality of the annotations depends on the assigned user.

1. Add users to your project. As the project admin, go to Settings → Members to add members to your project.

2. Create clear guidelines. Here, the admin writes down what is to be annotated and which types of annotations to use. Clear and complete guidelines are key to aligning all project members.

3. Distribute documents among annotators. As the project admin, go to Settings → Members, select who you want to distribute documents to, and choose 1 annotator per document.

4. Import text. Admins and supercurators can import the documents to be annotated by the group. Any project member can see these documents, but each annotator will see a TODO list with the documents assigned to them that are not confirmed yet.

5. The group starts annotating. Users annotate their own version of the annotations for the assigned documents. Once done, each user marks their version as completed by clicking the Confirm button.

6. Review. Admins check which documents are ready for review (via the GUI in the document list or by using a search query; see the sketch after these steps). Admins move the user's annotations to the master version (ground truth), review them and make the required changes. Admins should click the Confirm button on the master version to indicate that the review is completed and the document is ready for production.
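
Checking which documents are ready for review can also be scripted. The sketch below assumes the same documents API v1 used above; the search query string and the output value are placeholders, as the exact syntax for filtering confirmed versions is described in tagtog's search documentation:

```python
# Sketch: list documents matching a search query, assuming tagtog's documents API v1.
# The base URL, the "output" value and the search query are assumptions/placeholders;
# check the API and search documentation for the exact syntax in your tagtog version.
import requests

TAGTOG_DOCS_API = "https://www.tagtog.com/-api/documents/v1"  # assumption: cloud base URL
auth = ("MY_USERNAME", "MY_PASSWORD")
params = {
    "owner": "MY_USERNAME",
    "project": "my-project",
    "search": "*",          # placeholder: replace with a query that filters confirmed documents
    "output": "search",     # assumption: ask for the search result listing
}

response = requests.get(TAGTOG_DOCS_API, auth=auth, params=params)
print(response.status_code, response.text)
```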


Documents are automatically distributed; multiple annotators per document

This flow is ideal for projects requiring high-quality annotations or involving complex annotation tasks (specific skills required, divergent interpretations, etc.).

1. Add users to your project. As the project admin, go to Settings → Members to add members to your project.

2. Create clear guidelines. Here, the admin writes down what is to be annotated and which types of annotations to use. Clear and complete guidelines are key to aligning all project members.

3. Distribute documents among annotators. As the project admin, go to Settings → Members, select who you want to distribute documents to, and choose 2 or more annotators per document.

4. Import text. Admins and supercurators can import the documents to be annotated by the group. Any project member can see these documents, but each annotator will see a TODO list with the documents assigned to them that are not confirmed yet.

5. The group starts annotating. Users annotate their own version of the annotations for the assigned documents. Once done, each user marks their version as completed by clicking the Confirm button.

6. Adjudication. Admins check which documents are ready for review (via the GUI in the document list or via a search query). For each document, admins merge the users' annotations (automatic adjudication) into the master version (ground truth). Admins review the merged annotations and click the Confirm button on the master version to indicate that the review is completed.

Quality Management

Here you will learn how to track the quality of your project in real time.

IAA (Inter-Annotator Agreement)

The Inter-Annotator Agreement (IAA) gauges the quality of your annotation project, that is, the degree of consensus among your annotators. If all your annotators make the same annotations independently, it means your guidelines are clear and your annotations are most likely correct. The higher the IAA, the higher the quality.

In tagtog, each annotator can annotate the same piece of text separately. The percentage agreement is measured as soon as two different confirmed ✅ annotation versions exist for the same document, i.e. at least one member's version and master are confirmed, or two or more members' versions are confirmed. These scores are calculated automatically for you in tagtog. You can add members to your project at Settings → Members.

To see the IAA results, open your project and click on the Metrics section. Results are split into annotation types (entity types, entity labels, document labels, normalizations and relations). Each annotation type is divided into annotation tasks (e.g. Entity types: Entity type 1, Entity type 2; Document labels: document label 1, document label 2, etc.). For each annotation task, scores are displayed as a matrix. Each cell represents the agreement for a pair of annotators, with 100% being the maximum level of agreement and 0% the minimum.

The agreement percentage near the title of each annotation task represents the average agreement for this annotation task.

Inter-annotator agreement matrix. It contains the scores between pairs of users. For example, Vega and Joao agree in 87% of the cases; Vega and Gerard in 47%. This visualization provides an overview of the agreement among annotators. It also helps find weak spots. In this example we can see that Gerard is not aligned with the rest of the annotators (25%, 47%, 35%, 18%). Training might be required to align him with the guidelines and the rest of the team. On the top left we find the annotation task name, its id and the agreement average (59.30%).
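
To make the pairwise scores concrete, here is a minimal sketch of how a percentage-agreement matrix for one annotation task could be computed. It models annotations as plain sets and uses a simple overlap ratio; tagtog's own IAA metrics (e.g. exact_v1) may count matches differently, so treat this only as an illustration:

```python
# Minimal sketch of a pairwise percentage-agreement matrix for one annotation task.
# Annotations are modeled as sets of (offset, text, entity_type) tuples; tagtog's
# own IAA metrics (e.g. exact_v1) may count matches differently.
from itertools import combinations

annotations = {
    "Vega":   {(10, "aspirin", "drug"), (42, "fever", "symptom")},
    "Joao":   {(10, "aspirin", "drug"), (42, "fever", "symptom")},
    "Gerard": {(10, "aspirin", "drug")},
}

def percentage_agreement(a: set, b: set) -> float:
    """Share of annotations both annotators agree on (simple overlap ratio)."""
    if not a and not b:
        return 100.0
    return 100.0 * len(a & b) / len(a | b)

matrix = {}
for user1, user2 in combinations(annotations, 2):
    score = percentage_agreement(annotations[user1], annotations[user2])
    matrix[(user1, user2)] = score
    print(f"{user1} vs {user2}: {score:.0f}%")

print(f"Average agreement for this task: {sum(matrix.values()) / len(matrix):.2f}%")
```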

What can I do if IAA is low?

There may be several reasons why your annotators do not agree on the annotation tasks. It is important to mitigate these risks as soon as possible by identifying the causes. If you find yourself in such a scenario, we recommend reviewing the following:

Guidelines are key. If you have a large group of annotators not agreeing on a specific annotation task, it means your guidelines for this task are not clear enough. Try to provide representative examples for different scenarios, discuss boundary cases and remove ambiguity. Remember you can attach pictures or screenshots to the guidelines.

Be specific. If annotation tasks are too broadly defined or ambiguous, there is room for different interpretations and, eventually, disagreement. On the other hand, very rich and granular tasks can be difficult for annotators to annotate accurately. Depending on the scope of your project, find the best trade-off between highly specific annotations and annotations that remain affordable to produce.

Test reliability. Before starting to annotate large amounts of data, it is good to run several assessments on a sample of the data. Once the team members have annotated this sample, check the IAA and improve your guidelines or train your team accordingly.

Train. Make sure you appropriately train members joining the annotation project. If you find annotators who disagree with most of the team, identify the reasons, evolve your guidelines and train them further.

Check how heterogeneous your data is. If your data/documents differ greatly from each other in complexity or structure, a larger effort will be required to stabilize the agreement. We recommend splitting the data into homogeneous groups.

Adjudication

When different users annotate the same documents, multiple annotation versions result. Adjudication is the process of resolving inconsistencies among these versions before promoting a version to master. tagtog supports automatic adjudication.

Automatic adjudication based on IAA

Do you need more information first about what IAA (inter-annotator agreement) is? Read the IAA (Inter-Annotator Agreement) section above.

It follows a merging strategy based on choosing the best available user for each annotation task, i.e. choosing the annotations from the user with the highest IAA for that specific annotation task (according to the exact_v1 metric, computed across all documents).

In this example, SME A has the highest IAA for task A and SME B for task B. The result is the annotations for task A by SME A plus the annotations for task B by SME B.

In the background, tagtog creates an IAA ranking of all annotators for each specific task. Within that ordered ranking, the chosen annotator is the first one who has annotations for the document to merge.

A ranking of the best overall annotators, computed as the average of all IAAs, is also calculated. If there are no best overall annotators, it means no IAA has been calculated for the project at all. If the IAA is not calculated for a specific task, its best annotator defaults to the best available overall annotator.
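
The strategy can be pictured with the following sketch. It is only a conceptual illustration of the ranking-and-fallback logic described above, not tagtog's implementation; the rankings and annotations are placeholders:

```python
# Conceptual sketch of automatic adjudication based on IAA (not tagtog's implementation).
# For each task, annotators are ranked by their IAA for that task; the chosen annotator
# is the first one in the ranking who actually annotated the document. If no per-task
# ranking exists, the overall ranking (average of all IAAs) is used as a fallback.

# Placeholder rankings: best annotator first.
task_rankings = {
    "task_A": ["SME A", "SME B", "SME C"],
    "task_B": ["SME B", "SME A", "SME C"],
}
overall_ranking = ["SME A", "SME B", "SME C"]

# Placeholder per-user annotations for one document, grouped by task.
doc_annotations = {
    "SME A": {"task_A": ["a1", "a2"], "task_B": ["b9"]},
    "SME B": {"task_A": ["a3"], "task_B": ["b1", "b2"]},
}

def adjudicate(doc_annotations, task_rankings, overall_ranking):
    """Merge user versions into a master candidate, task by task."""
    master = {}
    tasks = {task for user_anns in doc_annotations.values() for task in user_anns}
    for task in sorted(tasks):
        ranking = task_rankings.get(task, overall_ranking)
        for user in ranking:
            annotations = doc_annotations.get(user, {}).get(task)
            if annotations:  # first ranked user with annotations for this document wins
                master[task] = annotations
                break
    return master

print(adjudicate(doc_annotations, task_rankings, overall_ranking))
# {'task_A': ['a1', 'a2'], 'task_B': ['b1', 'b2']}: task A from SME A, task B from SME B
```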

Currently, this adjudication process is only available in the user interface (not through the API), in the toolbar of the annotation editor (Manage annotation versions).

If you want to know more about the adjudication process and when it makes sense to use an automatic process, take a look at this blog post: The adjudication process in collaborative annotation

If you want to see a step-by-step example for setting up automatic adjudication, check out this post: Automatic adjudication based on the inter-annotator agreement