9 Essential Features for a Bounding Box Annotation Tool
ARNAB MUKHERJEE
Automation Specialist (Python & Analytics) at Capgemini || Master's in Data Science || PGDM (Product Management) || Six Sigma Yellow Belt Certified || Certified Google Professional Workspace Administrator
There are plenty of image annotation platforms out there, and a bounding box tool seems like a simple enough functionality.
But here’s the thing—
The accuracy and quality of your bounding boxes define your model's performance, and you may need millions of them to bring the most accurate model to market for your use case.
Have you taken the time to consider every feature that will help you achieve this?
Bounding box annotations
A bounding box is an imaginary rectangle drawn around an object that describes its spatial location. It’s the most basic tool used for object detection and localization tasks.
Bounding box annotations contain the coordinates with information about where the object is located in the image or the video. They are suitable for uniformly shaped objects, low-compute projects, and objects that don’t overlap.
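To make the coordinate idea concrete, here is a minimal Python sketch of two common bounding box conventions, COCO-style `[x, y, width, height]` and Pascal VOC-style `[x_min, y_min, x_max, y_max]`, with conversions between them. The function names are illustrative, not part of any particular tool's API.

```python
def coco_to_voc(box):
    """Convert a COCO-style box [x, y, width, height]
    to Pascal VOC style [x_min, y_min, x_max, y_max]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

def voc_to_coco(box):
    """Convert a Pascal VOC box [x_min, y_min, x_max, y_max]
    back to COCO style [x, y, width, height]."""
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]
```

Mixing these two formats up is one of the most common sources of silently wrong training data, so it is worth checking which one your tool exports.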
V7 allows you to draw bounding boxes with pixel-perfect precision, add attributes, copy-paste boxes of similar size, interpolate them in the video, and easily convert polygon masks to bounding boxes.
Easy Class Creation
The first thing to look out for is your class structure.
Making a box is easy, but—
How is that data stored?
Classes are the names of objects in a dataset. If you’re building a service to detect dents and scratches, you will want to make sure these two entries can be reused in new projects or branched out hierarchically as your data grows.
Here are a few must-have functionalities:
Below is the class creation experience on V7.
We kept our design language consistent and added rich info tooltips to inform users of what each functionality does, because we understand that not everyone is familiar with computer vision terminology.
Boxes that feel good—Responsive interactions
Prior to building V7, we tested several bounding box tools on the market and found that most didn’t prioritize interaction design.
Placing and editing millions of bounding boxes requires a very smooth user experience.
Here are the things to look out for:
Video interpolation
V7 supports videos and a number of series-like data types, such as volumetric MRI or CT scans and time-lapses.
All of these allow you to interpolate boxes smoothly throughout a sequence.
Annotators want an experience that requires minimal tweaking on the timeline, with keyframes generated automatically wherever boxes are edited, whether manually or by a model.
Position keyframes should be kept separate from attribute keyframes, so that a bounding box can gain or lose attributes or other sub-annotations throughout the video while remaining part of the same instance.
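The core of box interpolation is simple: between two keyframes, each corner coordinate is blended linearly by the frame's position in the interval. A minimal sketch, assuming VOC-style `[x_min, y_min, x_max, y_max]` boxes and illustrative function names:

```python
def interpolate_box(box_a, box_b, frame, frame_a, frame_b):
    """Linearly interpolate a bounding box between two keyframes.

    box_a / box_b: [x_min, y_min, x_max, y_max] at frames frame_a / frame_b.
    frame: the intermediate frame to compute a box for.
    """
    # Fraction of the way from keyframe A to keyframe B (0.0 to 1.0).
    t = (frame - frame_a) / (frame_b - frame_a)
    return [a + t * (b - a) for a, b in zip(box_a, box_b)]
```

Real tools layer more on top (easing, per-keyframe overrides, model-assisted tracking), but linear corner interpolation is the baseline behavior to expect.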
Here are a few things to look out for:
Copy-paste, and other power user shortcuts
Do you have a few similar objects to annotate using bounding boxes?
Copy-pasting your boxes can be very handy for speeding up your annotation process. It also ensures that your annotations are consistent for the same objects located in different areas of an image or a video.
Are hotkeys a priority in your annotation tool?
Your labeling team should aim to turn everyone into a power user. Keyboard shortcuts are a good way to get more training data and less fatigue (fatigue leads to some of the hardest training data errors to spot).
Shortcuts to consider include switching classes, cycling between boxes, and cycling between points within a box.
Some projects might require you to copy all your annotations from one image to another. It often happens when your dataset images are sequential.
Here are things to look out for in power user shortcuts:
Bounding boxes attributes and other sub-annotations
Attributes are simply annotation tags that can define the specific features of a given object.
Many object detection projects require labelers to add label attributes on top of the bounding box annotation, which helps describe a given object in greater detail.
For example, it’s common to add label attributes such as occluded, truncated, and crowded, indicating that annotated objects are in close relationship with other objects in the image.
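As an illustration, an attributed annotation record often looks something like the following. The field names here are hypothetical, not a specific tool's export schema:

```python
# A hypothetical annotation record: a bounding box plus attributes
# that add detail beyond the class label itself.
annotation = {
    "class": "car",
    "bounding_box": {"x": 312, "y": 148, "w": 96, "h": 54},
    "attributes": ["occluded", "truncated"],
}

def has_attribute(ann, name):
    """Check whether an annotation carries a given attribute."""
    return name in ann.get("attributes", [])
```

Downstream, such attributes are often used to filter or reweight training samples, for example excluding heavily occluded objects from evaluation.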
Here are things to watch out for in sub-annotations:
Visibility options for bounding boxes
Every annotation should have an editable Z-value; you simply drag an annotation to reorder it.
The same can be done in the video timeline, with an option to automatically adjust this order to save vertical space.
This is especially useful when you have hundreds of annotations, such as in sports analytics.
Image Manipulation Options
You can also adjust the box opacity, border opacity, and visual features of the image. The tool must also have windowing and color map options, which allow you to see elements of the image not visible to the naked eye on regular RGB monitors.
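Windowing, common in medical imaging viewers, maps a slice of the raw intensity range (defined by a window center and width) onto the 0 to 255 display range, so that subtle differences become visible. A minimal sketch of the idea, with the center/width convention as an assumption:

```python
def apply_window(pixel, center, width):
    """Map a raw intensity value (e.g. a Hounsfield unit in CT)
    into the 0-255 display range using a window center and width."""
    low = center - width / 2
    high = center + width / 2
    if pixel <= low:
        return 0          # everything below the window renders black
    if pixel >= high:
        return 255        # everything above the window renders white
    # Linearly rescale values inside the window.
    return round((pixel - low) / width * 255)
```

With a narrow window, small intensity differences inside the window span the full display range, which is exactly what lets annotators see soft-tissue detail that a plain RGB rendering would hide.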
How many annotations can it handle?
Most JavaScript libraries aren’t made to handle the scale that AI projects bring to the table.
Make sure that your tool is tested for performance when hundreds of bounding boxes enter the scene. This is especially important in videos, where annotations must be kept in memory to ensure smooth playback.
Here are things to consider:
Converting other annotations to Bounding Boxes
Some annotation formats such as COCO expect a bounding box to be around each polygon. Models like Mask R-CNN also benefit from this detector/segmenter approach. Moreover, you won’t have to make a box “around” a polygon, you can simply draw a polygon and use its “free” surrounding box to train a detector.
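Deriving that "free" surrounding box from a polygon is a one-liner in principle: take the minimum and maximum of the vertex coordinates. A minimal sketch (function name is our own):

```python
def polygon_to_bbox(points):
    """Compute the tight axis-aligned bounding box
    [x_min, y_min, x_max, y_max] of a polygon given
    as a sequence of (x, y) vertex pairs."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return [min(xs), min(ys), max(xs), max(ys)]
```

This is effectively what an annotation tool does when it exports polygon masks with accompanying bounding boxes in a COCO-style format.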
API functionalities and common bugs to watch out for
Ultimately, nothing can be more dangerous than a tool you commit to and encounter breaking bugs in its API halfway through your project.
Here are the most common bugs or feature failures we’ve encountered across image annotation tools, in order of frequency:
Each of these is an issue that at least one in ten customers switching from their internal tools or other labeling platforms has faced.