It's important to note that a critical aspect of finetuning is the dataset used; finetuning on poor-quality data can even hinder model performance. For more information on what makes an effective dataset, check the documentation here.
≤ 1 GPU
If you don't have your own GPUs to run finetuning, don't worry: Liquid has developed a set of easy-to-use Jupyter notebooks in conjunction with our friends at Unsloth and Axolotl to enable easy finetuning of LFM2 models in Google Colab on a single GPU. You can find the notebooks here:

> 1 GPUs
If you have your own GPUs, you can use Liquid's leap-finetune package here. leap-finetune simplifies finetuning LFM2 models by letting you (1) provide your own data loader, (2) specify your training configuration, and (3) hit run. The tool is built entirely with open-source components and handles distributed training up to a single node (e.g., 8 GPUs).
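To make step (1) concrete, here is a minimal sketch of the kind of data loader you might supply: an iterable that yields batches of chat-formatted records. The record schema (`messages` with `role`/`content` fields) and the function name are illustrative assumptions, not the exact interface leap-finetune expects; consult the package's documentation for the real contract.

```python
# Toy instruction-tuning examples; a real dataset would be far larger
# and curated for quality, as noted above.
EXAMPLES = [
    {"prompt": "Translate 'hello' to French.", "response": "bonjour"},
    {"prompt": "What is 2 + 2?", "response": "4"},
    {"prompt": "Name a primary color.", "response": "red"},
]

def batch_loader(examples, batch_size=2):
    """Yield fixed-size batches of chat-formatted records.

    Illustrative only: the exact record schema a finetuning tool
    consumes is an assumption here.
    """
    batch = []
    for ex in examples:
        batch.append({
            "messages": [
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["response"]},
            ]
        })
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

batches = list(batch_loader(EXAMPLES))
print(len(batches))  # 2 batches: one of size 2, one of size 1
```

Keeping the loader this simple is the point of the design: the tool owns tokenization, sharding, and the training loop, so your code only has to describe where the examples come from.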