Welcome to this SentiSight.ai tutorial on how to train image segmentation models. This guide covers the fundamental steps and functionalities necessary to label your datasets effectively for image segmentation tasks.
Before training an image segmentation model, you need to label your dataset. A separate video tutorial covers segmentation-specific labeling in detail; in this article, we review the basic functionalities of the labeling tool.
The labeling workflow covers the following steps:
- Opening the labeling tool
- Labeling methods
- Smart labeling tool
- Completing the labeling
- Changing labels
The process for training an image segmentation model is similar to that used for object detection.
For demonstration purposes, you can use a sample dataset: confirm your choice to load it, then agree to use your balance for training. You can monitor progress in the Trained Models tab.
After training, click on the Trained Models tab to view statistics.
Adjust the score threshold and review the precision-recall curve to fine-tune the model's predictions.
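To see why the score threshold matters, here is a minimal, platform-independent sketch (not SentiSight-specific code) of how raising the threshold trades recall for precision: the function name and data layout below are illustrative assumptions.

```python
# Illustration: how a confidence-score threshold trades recall for precision.
# Each prediction is a (score, is_true_positive) pair; num_ground_truth is
# the total number of real objects in the evaluation set.
from typing import List, Tuple

def precision_recall_at(
    predictions: List[Tuple[float, bool]],
    num_ground_truth: int,
    threshold: float,
) -> Tuple[float, float]:
    # Keep only predictions at or above the threshold.
    kept = [is_tp for score, is_tp in predictions if score >= threshold]
    true_positives = sum(kept)
    precision = true_positives / len(kept) if kept else 1.0
    recall = true_positives / num_ground_truth if num_ground_truth else 0.0
    return precision, recall

# Example: five predictions evaluated against four ground-truth objects.
preds = [(0.95, True), (0.90, True), (0.60, False), (0.55, True), (0.30, False)]
for t in (0.5, 0.8):
    p, r = precision_recall_at(preds, num_ground_truth=4, threshold=t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.5: precision=0.75, recall=0.75
# threshold=0.8: precision=1.00, recall=0.50
```

Raising the threshold discards low-confidence predictions, so precision rises while recall falls; the precision-recall curve in the Trained Models tab plots this trade-off across all thresholds.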
You can also download the predictions in various formats once you finish reviewing model statistics.
You can also make predictions on new images that were not part of the training or validation sets.
Once satisfied, select all images on the page and click Add to Dataset to incorporate these auto-labeled images into your training set for improved accuracy in future iterations.
For more information about features, refer to other video tutorials or user guides provided by SentiSight.ai.
Q: What is the first step in training an image segmentation model with SentiSight.ai?
A: The first step is to label your datasets using the labeling tools available in the platform, which include polygon and bitmap options.
Q: Can I edit the predictions after they are made?
A: Yes, you can adjust the predictions by using the labeling tool to correct or refine masks.
Q: Is there any support available for using the REST API?
A: Yes, you can find API tokens, project IDs, and model names in the user guide, along with code samples for multiple programming languages.
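As a rough sketch of what such an API call might look like, the snippet below posts an image to a prediction endpoint using only the Python standard library. The base URL, endpoint path, and header name here are assumptions for illustration; confirm the exact values, along with your API token, project ID, and model name, in the SentiSight.ai user guide.

```python
# Hypothetical sketch of calling a trained segmentation model over REST.
# The URL pattern and "X-Auth-token" header below are assumptions -- check
# the official user guide and its code samples for the exact API details.
import json
import urllib.request

API_BASE = "https://platform.sentisight.ai/api"  # assumed base URL

def build_predict_url(project_id: str, model_name: str) -> str:
    # Assumed endpoint layout: /predict/<project-id>/<model-name>
    return f"{API_BASE}/predict/{project_id}/{model_name}"

def predict(api_token: str, project_id: str, model_name: str, image_path: str):
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    request = urllib.request.Request(
        build_predict_url(project_id, model_name),
        data=image_bytes,
        headers={
            "X-Auth-token": api_token,  # assumed header name
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # Predictions (e.g. masks or labels with scores) returned as JSON.
        return json.loads(response.read())
```

The user guide's code samples for other languages (Java, JavaScript, and others) follow the same pattern: authenticate with your token, address the request to a specific project and model, and send the raw image bytes in the request body.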
Q: How long can I use the offline model for free?
A: The offline model is available for free for 30 days, after which a license must be purchased to continue usage.
Q: What should I do if the colors of objects in my images are similar?
A: You can use the smart labeling tool to assist in distinguishing objects and complete the labeling process by erasing any unwanted areas.