
    SentiSight.ai image segmentation model training tutorial

    Introduction

    Welcome to this SentiSight.ai tutorial on how to train image segmentation models. This guide covers the fundamental steps and functionalities necessary to label your datasets effectively for image segmentation tasks.

    Step 1: Labeling Your Datasets

    Before training an image segmentation model, you need to label your datasets. For segmentation-specific labeling, there is another dedicated video tutorial. In this article, we will review the basic functionalities of the labeling tool.

    1. Opening the Labeling Tool:

      • Right-click on an image.
      • Select Labeling Tools and then choose Label Images.
    2. Labeling Methods:

      • There are two primary ways to label images for segmentation:
        • Polygons: use the polygon tool to outline the object of interest.
        • Bitmaps: paint the mask directly onto the image with the bitmap tool.
    3. Smart Labeling Tool:

      • For efficiency, it’s recommended to use the Smart Labeling Tool.
      • To use this tool, select it from the panel and choose the area of interest (e.g., a single mushroom).
      • Initially, label some foreground areas by pressing the hotkey (or clicking the button).
      • Once sufficient foreground is labeled, you may need to refine the extraction by labeling unwanted parts as background.
    4. Completing the Labeling:

      • When object colors are similar and boundaries are hard to separate, finish the rough labeling with the smart tool, then erase any unwanted areas with the bitmap labeling tool.
      • Hide previously labeled objects to focus on the next one.
    5. Changing Labels:

      • After labeling, don't forget to change the label of the segmentation mask from its default setting.

    Step 2: Training the Model

    The process for training an image segmentation model is similar to that used for object detection.

    • Click on Train Instance Segmentation.
    • Select your parameters; if unsure, the default values are recommended.
    • Training time is automatically set based on the number of images and classes.
    • Click Start to begin.

    For demonstration purposes, if using a sample dataset, confirm your choice to load the sample model and agree to use your balance for training. You can monitor progress within the Trained Models tab.

    Step 3: Reviewing Training Statistics

    After training, click on the Trained Models tab to view statistics.

    • You’ll find metrics for training and validation sets.
    • Predictions are displayed in a different color from the ground-truth masks, which can be identified by their checkerboard pattern.
    • Reports on the intersection over union (IoU) and predicted scores will guide you in assessing model accuracy.
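The IoU metric reported here has a simple definition: the overlap between a predicted mask and its ground-truth mask, divided by their combined area. As a minimal sketch (the masks below are made up for illustration, not SentiSight.ai output):

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, truth).sum() / union)

# Toy 4x4 masks: the prediction and the ground truth share 1 pixel
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[0:2, 0:2] = True   # 4 predicted pixels
truth[1:3, 1:3] = True  # 4 ground-truth pixels
print(mask_iou(pred, truth))  # 1 overlapping pixel / 7 in the union ≈ 0.143
```

An IoU close to 1.0 means the predicted mask matches the labeled object almost exactly; values near 0 indicate a poor match.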

    Adjust score thresholds and review the precision-recall curve for fine-tuning predictions.
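The trade-off behind the score threshold can be sketched in a few lines. The scores and correctness flags below are hypothetical, purely to show how raising the threshold trades recall for precision:

```python
def precision_recall(scores, is_correct, threshold):
    """Precision and recall over predictions kept at a given score threshold."""
    kept = [c for s, c in zip(scores, is_correct) if s >= threshold]
    tp = sum(kept)  # kept predictions that match a ground-truth object
    precision = tp / len(kept) if kept else 0.0
    recall = tp / sum(is_correct) if any(is_correct) else 0.0
    return precision, recall

# Five hypothetical predictions: confidence score and whether each one
# actually matches a ground-truth object (e.g. IoU above 0.5).
scores = [0.95, 0.90, 0.70, 0.55, 0.30]
correct = [1, 1, 0, 1, 0]

print(precision_recall(scores, correct, 0.5))  # keeps 4 → precision 0.75, recall 1.0
print(precision_recall(scores, correct, 0.8))  # keeps 2 → precision 1.0, recall ≈ 0.67
```

Sweeping the threshold over all score values traces out the precision-recall curve shown in the statistics view.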

    You can also download the predictions in various formats once you finish reviewing model statistics.

    Step 4: Making Predictions

    To make predictions on new images that were not part of the training or validation sets:

    • Click the Make a New Prediction button.
    • Upload images from your computer.
    • Predicted masks are shown on the images, with checkboxes for adjusting which ones to accept or reject.
    • Edit as needed using the labeling tool and then mark images as edited.

    Once satisfied, select all images on the page and click Add to Dataset to incorporate these auto-labeled images into your training set for improved accuracy in future iterations.

    Additional Options

    • REST API: You can make predictions using the REST API by inputting your API token, project ID, and model name, with code samples available in several languages.
    • Offline Model: Download your trained model for offline use; it’s free for 30 days before requiring a license.
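A REST prediction call can be sketched as follows. The exact endpoint path, header name, and response format are assumptions here, not confirmed by this tutorial; use the code samples and the endpoint shown in your project's user guide as the authoritative reference.

```python
import urllib.request

# Assumed base URL; verify against the SentiSight.ai user guide.
API_BASE = "https://platform.sentisight.ai/api"

def build_predict_request(token: str, project_id: int, model_name: str,
                          image_bytes: bytes) -> urllib.request.Request:
    """Assemble a prediction request for a trained segmentation model.

    The URL layout and the X-Auth-token header are assumptions; consult
    the code samples in your SentiSight.ai project for the exact values.
    """
    url = f"{API_BASE}/predict/{project_id}/{model_name}"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "X-Auth-token": token,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

# Example usage (requires a valid token, project ID, and trained model):
#   req = build_predict_request("YOUR_TOKEN", 12345, "my-model",
#                               open("new_image.jpg", "rb").read())
#   with urllib.request.urlopen(req) as resp:
#       predictions = resp.read()  # predicted masks/scores for the image
```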

    For more information about features, refer to other video tutorials or user guides provided by SentiSight.ai.



    Keywords

    • SentiSight.ai
    • Image Segmentation
    • Labeling Tools
    • Smart Labeling
    • Training Model
    • Predictions
    • REST API
    • Offline Model

    FAQ

    Q: What is the first step in training an image segmentation model with SentiSight.ai?
    A: The first step is to label your datasets using the labeling tools available in the platform, which include polygon and bitmap options.

    Q: Can I edit the predictions after they are made?
    A: Yes, you can adjust the predictions by using the labeling tool to correct or refine masks.

    Q: Is there any support available for using the REST API?
    A: Yes, you can find API tokens, project IDs, and model names in the user guide, along with code samples for multiple programming languages.

    Q: How long can I use the offline model for free?
    A: The offline model is available for free for 30 days, after which a license must be purchased to continue usage.

    Q: What should I do if the colors of objects in my images are similar?
    A: You can use the smart labeling tool to assist in distinguishing objects and complete the labeling process by erasing any unwanted areas.

