Animal behavior annotation with napari

Hello! I’m a new user of napari. Firstly, I would like to express my gratitude for developing such a wonderful library!

I would like some advice on making a simple tool for per-frame annotation of animal behavior.

In other words, I want to annotate a video and create a CSV file that looks like this:

frame, behavior
1, behaviorA
2, behaviorA
3, behaviorA
4, behaviorA
..., ...
98, behaviorA
99, behaviorA
100, behaviorB
101, behaviorB
102, behaviorB
..., ...
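(For reference, once there is one label per frame, a table like this can be written with pandas; a minimal sketch, where the label values and the file name are just placeholders:)

import pandas as pd

annotations = ["behaviorA"] * 99 + ["behaviorB"] * 3  # placeholder per-frame labels

df = pd.DataFrame({
    "frame": range(1, len(annotations) + 1),  # 1-based frame numbers, as in the example above
    "behavior": annotations,
})
df.to_csv("annotation.csv", index=False)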

I tried to use the Points layer and the Labels layer classes, but I couldn't fully understand how to use them. So, even though it seems like an undesirable implementation, I created a prototype that gets the current frame number with image_layer._slice_indices (the sample is shown at the end of this post).

The sample has only very primitive functionality, but it may be useful for showing my thoughts.

Are there any features in napari that would help with making this type of behavior annotator?

And once I am more proficient with napari, I would like to try showing the annotation status graphically and manipulating annotations via a Qt GUI!
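(For the GUI part, my current untested idea is a small dock widget along these lines; magicgui, the behavior choices, and the use of the viewer and annotations names from the sample below are all assumptions, not working code yet:)

from magicgui import magicgui

@magicgui(call_button="Annotate current frame",
          behavior={"choices": ["behaviorA", "behaviorB"]})
def annotate_widget(behavior: str = "behaviorA"):
    # assumes a `viewer` and a per-frame `annotations` array like in the sample below
    frame = viewer.dims.current_step[0]  # position of the first (frame) slider
    annotations[frame] = behavior

viewer.window.add_dock_widget(annotate_widget, area="right")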

Any advice will be helpful! Thanks in advance.

=====sample program=====

my GitHub repo

  • dependencies: napari_video (installed with ‘pip install napari_video’)

  • how to use:

    1. Press the ‘F’ key on the frame where a behavior starts to set the start flag.

    2. Play the video until a behavior ends.

    3. Press the ‘1’ or ‘2’ key to write the annotation into the array.

    Repeat steps 1-3; when all the annotations are done, press the ‘0’ key to save the CSV.

  • other option keys:

    • ‘3’ key: show position of start flag

    • ‘4’ key: delete wrong start flag

    • ‘5’ key: print annotations array

import napari
from napari_video.napari_video import VideoReaderNP
import numpy as np
import pandas as pd

# open the video and create one "undefined" label per frame
vr = VideoReaderNP("path_to_video/video_name.mp4")
labels = ["undefined", "behaviorA", "behaviorB"]
annotations = np.array([labels[0]] * len(vr))

# global state: whether a start flag is set, and on which frame
flag_exist = False
flag_frame = 0

with napari.gui_qt():
    viewer = napari.Viewer()
    image_layer = viewer.add_image(vr, rgb=True)

    @viewer.bind_key('f')
    def set_start_flag(event=None):
        """Remember the current frame as the start of a behavior bout."""
        global flag_exist
        global flag_frame

        if flag_exist:
            print('flag already exists!!')
        else:
            flag_frame = image_layer._slice_indices[0]
            flag_exist = True
            print("start flag set to frame:", flag_frame)

    @viewer.bind_key('1')
    def annotate_A(event=None):
        """Label frames from the start flag up to the current frame as behaviorA."""
        global flag_exist
        global flag_frame
        if flag_exist:
            current_frame = image_layer._slice_indices[0]
            if flag_frame > current_frame:
                print("go to a frame later than the start flag.")
                show_globals()
            else:
                print(flag_frame, "to", current_frame, "is annotated as", labels[1])
                # note: half-open slice, the current frame itself is not included
                annotations[flag_frame:current_frame] = labels[1]
                flag_exist = False
        else:
            print("need to set the start flag first.")

    @viewer.bind_key('2')
    def annotate_B(event=None):
        """Label frames from the start flag up to the current frame as behaviorB."""
        global flag_exist
        global flag_frame
        if flag_exist:
            current_frame = image_layer._slice_indices[0]
            if flag_frame > current_frame:
                print("go to a frame later than the start flag.")
                show_globals()
            else:
                print(flag_frame, "to", current_frame, "is annotated as", labels[2])
                # note: half-open slice, the current frame itself is not included
                annotations[flag_frame:current_frame] = labels[2]
                flag_exist = False
        else:
            print("need to set the start flag first.")

    @viewer.bind_key('3')
    def show_globals(event=None):
        """Print the current start-flag state."""
        print("flag:", flag_exist)
        print("pos:", flag_frame)

    @viewer.bind_key('4')
    def delete_flag(event=None):
        """Clear a wrongly placed start flag."""
        global flag_exist
        print("delete flag!")
        flag_exist = False

    @viewer.bind_key('5')
    def check_label(event=None):
        """Print the full annotations array."""
        print(annotations)

    @viewer.bind_key('0')
    def save_label(event=None):
        """Save the per-frame annotations to annotation.csv."""
        print("save!")
        pd.DataFrame(annotations).to_csv("annotation.csv")

Welcome! Have you seen this tutorial on annotating videos with napari: annotating videos with napari — napari tutorials by @kevinyamauchi? It isn't exactly what you want, but it might have some helpful tips.

Otherwise, I have a couple of tips that might help too. If your annotations apply to an entire frame at a time, you could store them in layer.metadata by creating a custom array for them there. You could also put your other global variables in there too, e.g. layer.metadata = {'annotations': annotations, 'flag_exist': False, 'flag_frame': 0}.
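Something along these lines (just a sketch of that idea, reusing the names from your script; untested):

# keep the annotation state on the layer instead of in module-level globals
image_layer.metadata = {
    "annotations": annotations,   # per-frame label array
    "flag_exist": False,          # whether a start flag is currently set
    "flag_frame": 0,              # frame index where the start flag was placed
}

# inside a key binding, read and write it via the layer
meta = image_layer.metadata
meta["flag_frame"] = image_layer._slice_indices[0]
meta["flag_exist"] = True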

I do something like that in this toy example: image-demos/image_annotations.py at d30ceccf4a83b3233723c8bfab01932970cd7f88 · sofroniewn/image-demos · GitHub. You can also see there that I used a shapes layer to create a fake border to show the annotation status. I don't love that though. Curious what others think would be a good idea. We should probably turn this into a nice tutorial.
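Roughly, the border trick is: add one rectangle in a shapes layer around the image and recolor its edge when the annotation state changes. A sketch (the colors and edge width are placeholders, and I'm assuming the video frames have shape (n_frames, height, width, 3)):

import numpy as np

h, w = vr.shape[1], vr.shape[2]  # assuming frames of shape (n_frames, h, w, 3)
border = np.array([[0, 0], [0, w], [h, w], [h, 0]])
status_layer = viewer.add_shapes(
    [border],
    shape_type="rectangle",
    edge_color="gray",
    face_color="transparent",
    edge_width=10,
    name="annotation status",
)

# later, e.g. when a start flag is set or a behavior is chosen
status_layer.edge_color = "red"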

It’s a little unfortunate you have to use the private image_layer._slice_indices[0] in this way, but we actually might be changing that soon (we’re trying to take slicing information off the layer). I’ll think about this a little more, and hopefully we’ll provide a nicer API soon.
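(One thing that might be worth trying in the meantime, assuming a recent enough napari, is reading the slider position from the public viewer.dims instead of the layer:)

# possible public alternative to image_layer._slice_indices[0]
current_frame = viewer.dims.current_step[0]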

Thank you for the quick answer, @sofroniewn !

First of all, it is good to know that I can store various information in layer.metadata. Now I won’t have to use the ugly global variables!

Also, the image-demos/image_annotations.py you mentioned is very similar to what I want to do and is a very helpful example! I thought using edge_color to represent the label was a nice idea that brings me one step closer to my ideal.

The fact that you are considering a new API to access frame numbers is very good news for me!

I’ll stay tuned for updates to napari, and continue to use napari while learning from tutorials and other resources. Thanks again!
