Optimize your Fine-tuning
The latest video is out now, looking at techniques that bring LoRA performance closer to full fine-tuning:
Unsloth - 2x+ training speed-ups.
DoRA - splits the weight update into magnitude + direction components.
NEFT - adds noise to embeddings for better generalisation.
LoRA+ - faster convergence via differential learning rates for the LoRA A and B matrices.
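For a feel of what these three techniques do, here is a minimal PyTorch sketch of each. All shapes, hyperparameters (the NEFT alpha, the LoRA+ learning-rate ratio), and initialisations are illustrative assumptions, not values from the video:

```python
import torch

torch.manual_seed(0)
d_out, d_in, rank, seq_len = 64, 32, 4, 16

# --- DoRA: split the adapted weight into magnitude and direction ---
W0 = torch.randn(d_out, d_in)          # frozen base weight
A = torch.randn(rank, d_in) * 0.01     # LoRA down-projection
B = torch.zeros(d_out, rank)           # LoRA up-projection (zero-init)
m = W0.norm(dim=0, keepdim=True)       # learned per-column magnitude, init from W0
V = W0 + B @ A                         # adapted direction (unnormalised)
W = m * (V / V.norm(dim=0, keepdim=True))  # magnitude * unit direction

# --- NEFT: add scaled uniform noise to embeddings during training ---
emb = torch.randn(1, seq_len, d_in)
alpha = 5.0                            # assumed noise scale
mag = alpha / (seq_len * d_in) ** 0.5  # NEFTune-style magnitude scaling
noisy = emb + torch.zeros_like(emb).uniform_(-mag, mag)

# --- LoRA+: higher learning rate on B than on A ---
A_p, B_p = torch.nn.Parameter(A.clone()), torch.nn.Parameter(B.clone())
base_lr, ratio = 2e-4, 16              # ratio of 16 is a hypothetical choice
opt = torch.optim.AdamW([
    {"params": [A_p], "lr": base_lr},
    {"params": [B_p], "lr": base_lr * ratio},
])
```

With B zero-initialised, the DoRA weight starts exactly at W0 (magnitude and direction both come from the base weight), so training begins from the pretrained model's behaviour.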
Qwen1.5 One-click Template
There’s now a one-click vLLM template for Qwen1.5 available in the one-click-llms repo. Qwen - particularly the 72B version - is strong for Chinese and Spanish.
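For reference, serving a Qwen1.5 model behind vLLM's OpenAI-compatible API looks roughly like the following. The model ID, tensor-parallel size, and port are illustrative assumptions; the one-click template may use different settings:

```shell
# Launch vLLM's OpenAI-compatible server for Qwen1.5-72B-Chat.
# --tensor-parallel-size splits the model across GPUs (assumed value here).
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen1.5-72B-Chat \
    --tensor-parallel-size 4 \
    --port 8000
```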
Have any questions? Want to see any particular topics in future vids? Drop a comment below. Cheers, Ronan
Links:
➡️ Trelis Function-calling Models
➡️ One-click Fine-tuning & Inference Templates
➡️ Tip Jar