3 Comments

Thanks for the instructions! I was able to run the 13B model in Google Colab. I tried to fine-tune it with my own data, about 20 instructions (in .json format). It ran, but inference is very, very slow when I point to the new lora-alpaca adapter created during fine-tuning. Any ideas what could be happening? By the way, I'm using a premium GPU with 40GB RAM, and it's using all of it.
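
For reference, here is roughly how I'm loading the adapter for inference (a minimal sketch; the base model repo name and adapter path are placeholders, and I'm assuming the usual peft + transformers setup from the repo):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = "decapoda-research/llama-13b-hf"  # placeholder base checkpoint
LORA_WEIGHTS = "./lora-alpaca"                 # output dir from my fine-tuning run

# Load the 13B base model in 8-bit so it fits on the Colab GPU
base_model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

# Attach the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    LORA_WEIGHTS,
    torch_dtype=torch.float16,
)
model.eval()
```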

FYI, cleaning of the dataset is ongoing. Many hundreds more fixes have been made since the git commit mentioned in the article. The latest cleaned Alpaca dataset lives here: https://github.com/gururise/AlpacaDataCleaned

How long did the training take on an A100?
