Fine-Tune Your Models

In UBIAI, fine-tuning your models is a simple process designed to help you achieve outstanding performance in various natural language processing tasks. This page will walk you through the process of fine-tuning models step-by-step, ensuring that you maximize the potential of your datasets and workflows.

Why Fine-Tune Your Model on UBIAI?

Fine-tuning your model on UBIAI allows you to customize and optimize pre-trained models for your specific needs, enhancing their performance on your particular dataset. UBIAI provides a user-friendly, code-free environment that simplifies the fine-tuning process, making it accessible for both novice and experienced users.

Getting Started with Model Fine-Tuning

On the Models page, you can manage and monitor all your models. You can find:

  • Trained Models: Models you’ve already fine-tuned, complete with performance metrics.

  • Untrained Models: Models that are created but not yet fine-tuned, ready for your data.

  • Model Details: Each model includes metadata such as task type, model type (e.g., spaCy, Llama), training status, and training history.

This central hub simplifies your model development process and helps you stay organized across projects.

Supported NLP Tasks

UBIAI supports the following tasks to address a broad range of NLP use cases:

  • Named Entity Recognition (NER): NER models are trained to identify specific entities in text, such as names, locations, dates, and custom domain-specific entities (e.g., product codes, medical terms).

  • Relation Extraction: Relation Extraction models identify and classify relationships between two entities, such as "employee of," "located in," or "caused by."

  • Span Categorization: This task involves labeling spans of text, such as sentences or paragraphs, with predefined categories.

  • Text Classification: Text classification models label entire documents or sections of text into categories such as sentiment (positive, negative, neutral), topic, or priority.

  • Large Language Models: LLMs can be fine-tuned for tasks like question answering and text generation.
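
To make these tasks concrete, the sketch below shows what annotations for NER, relation extraction, and text classification conceptually look like on a single sentence. The labels and structure are generic illustrations, not UbiAI's actual export schema.

```python
# Illustrative only: generic annotation structures for one sentence.
# These dicts show the idea behind each task, not UbiAI's export format.

text = "Acme Corp hired Jane Doe in Paris on 2021-05-03."

# Named Entity Recognition: spans of text tagged with entity types.
ner_annotations = [
    {"text": "Acme Corp", "label": "ORG"},
    {"text": "Jane Doe", "label": "PERSON"},
    {"text": "Paris", "label": "LOCATION"},
    {"text": "2021-05-03", "label": "DATE"},
]

# Relation Extraction: typed relationships between two entities.
relations = [
    {"head": "Jane Doe", "tail": "Acme Corp", "label": "employee of"},
    {"head": "Acme Corp", "tail": "Paris", "label": "located in"},
]

# Text Classification: a single label for the whole document.
document_label = "hiring announcement"
```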

Detailed Steps for Fine-Tuning Models on UbiAI

Step 1: Model Creation

Navigate to the Models page, click Create New Model, then select the task you want to train your model for:

  • Named Entity Recognition (NER)

  • Relation Extraction

  • Text Classification

  • Text Generation

Then name your model and choose the model type to proceed with creating your custom model. You can pick spaCy or LLM depending on your task.

Step 2: Dataset Preparation

After you add your model details, you can either assign an existing dataset or import a new one:

  • Assign Dataset: Link an existing dataset to your model. The dataset becomes exclusive to the model, ensuring it is not accidentally modified by other models.

  • Import Dataset: Combine documents from multiple datasets to create a new one. You can include text only or both text and annotations.

Step 3: Configure Training Parameters

Before initiating training, make sure to:

  • Validate the dataset in the Dataset tab to ensure annotations are complete and accurate.

  • Select the model in the Training Configurations panel.

  • Select the Training/Validation Ratio.

  • Configure hyperparameters (see the sketch after this list):

    • Epochs: The number of passes over the dataset.

    • Dropout: The fraction of neural network units randomly dropped during training to reduce overfitting.

    • Batch Size: The number of samples processed together during training.

    • Learning Rate: The step size used when updating the model's weights during training.

  • Review your configurations and click Start Model Training.
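
As a quick illustration of how these settings fit together, here is a minimal sketch of a training configuration. The field names and values are typical illustrative defaults, not UbiAI's exact configuration fields.

```python
# A minimal sketch of a training configuration. Field names and values
# are illustrative defaults, not UbiAI's exact configuration schema.
training_config = {
    "train_validation_split": 0.8,  # 80% of documents for training, 20% for validation
    "epochs": 20,           # passes over the full training set
    "dropout": 0.2,         # fraction of units randomly dropped each update to reduce overfitting
    "batch_size": 8,        # documents processed together per gradient update
    "learning_rate": 1e-4,  # step size for weight updates; smaller = slower but more stable
}
```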

Step 4: Evaluate the Model

Once training is complete, navigate to the Model Dashboard to view performance metrics: F1-Score, Precision, and Recall.

LLM scores are not provided yet; we are working on including them in a future release.

In addition to the model and entity scores, you can visualize the loss, Precision, Recall, and F1 curves related to a specific training run.

To do so, go to the Training history tab and select a specific run from the list of runs.

UbiAI also gives you access to a confusion matrix so you can easily visualize the number of correctly and incorrectly classified outputs.

To access the confusion matrix, navigate to the dedicated Confusion Matrix tab located in the top-right section of any model details interface.

A confusion matrix summarizes the performance of a classification model by showing the counts of true positives, true negatives, false positives, and false negatives. It is useful for understanding the types of errors a model makes, especially in classification tasks (e.g., sentiment analysis, topic classification).

The matrix presents four key values that represent the outcomes of a classification task:

  • True Positive (TP): The number of instances that were correctly predicted as positive.

  • False Positive (FP): The number of instances that were incorrectly predicted as positive.

  • True Negative (TN): The number of instances that were correctly predicted as negative.

  • False Negative (FN): The number of instances that were incorrectly predicted as negative.
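
For reference, the dashboard metrics can be computed directly from these four counts. The snippet below uses made-up counts to show the standard formulas:

```python
# Deriving the dashboard metrics from confusion-matrix counts.
# Generic formulas for illustration; the TP/FP/FN/TN values are made up.
tp, fp, fn, tn = 42, 8, 6, 94

precision = tp / (tp + fp)  # share of predicted positives that are correct
recall = tp / (tp + fn)     # share of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, F1: {f1:.2f}")
# Precision: 0.84, Recall: 0.88, F1: 0.86
```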

Model Fine-Tuning with API

Programmatic model training for NER and relation extraction can be done easily with our API. This feature is only available on the Growth and Business packages:

  • Within the model details dashboard, open the 'Train model with API' drop-down menu.

  • Copy/paste the generated code into your application.

  • Run the script to launch the training (you can input the hyperparameters of your choice).
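
The exact request is produced by the generated code in the dashboard. As a rough, hypothetical sketch of what such a training script typically looks like (the endpoint URL, headers, and payload fields below are placeholders, not UbiAI's actual API):

```python
# Hypothetical sketch of launching a training run over HTTP.
# The endpoint URL, token header, and payload fields are placeholders;
# use the code generated by the 'Train model with API' menu for the real values.
import requests

API_TOKEN = "YOUR_API_TOKEN"  # placeholder
MODEL_ID = "YOUR_MODEL_ID"    # placeholder

response = requests.post(
    "https://api.example.com/models/train",  # placeholder endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model_id": MODEL_ID,
        "epochs": 20,
        "dropout": 0.2,
        "batch_size": 8,
        "learning_rate": 1e-4,
    },
)
response.raise_for_status()
print(response.json())
```

Replace the placeholders with the values from the generated snippet before running it.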