Pushing Models to HuggingFace Hub
Plus! New Mixtral One-click template, Updated (Free) Install Guides
Pushing models to HuggingFace Hub
How to push models and adapters to the Hub is a common question I get. So here’s the video:
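For those who prefer text, here is a minimal sketch of the two common cases: pushing a full fine-tuned model, and pushing only a small LoRA adapter. It assumes `transformers` and `peft` are installed, that you have authenticated with `huggingface-cli login`, and that the directory paths and repo ids shown are placeholders for your own.

```python
# Sketch only: paths and repo ids below are placeholders, not real repos.

def push_full_model(model_dir: str, repo_id: str) -> None:
    """Push a full fine-tuned model plus its tokenizer to the Hub."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model.push_to_hub(repo_id)
    tokenizer.push_to_hub(repo_id)

def push_adapter_only(adapter_dir: str, repo_id: str) -> None:
    """Push just the small LoRA adapter files, leaving the base
    model weights out of the repo entirely."""
    from peft import AutoPeftModelForCausalLM
    model = AutoPeftModelForCausalLM.from_pretrained(adapter_dir)
    model.push_to_hub(repo_id)

# Usage (placeholder ids):
# push_full_model("outputs/checkpoint-final", "your-username/my-model")
# push_adapter_only("outputs/adapter", "your-username/my-adapter")
```

Pushing only the adapter is usually the better default after a LoRA fine-tune: the upload is a fraction of the size, and anyone can recombine it with the base model at load time.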
One-click Mixtral AWQ Template
Getting a 16-bit Mixtral API started is slow because the model is large (and there’s a weird download bug). Check out the AWQ one-click template instead, which uses a ~25 GB quantized model for a faster-starting API:
Mixtral AWQ quantization is now supported by scripts in the ADVANCED fine-tuning repo.
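As a rough sketch of why the AWQ route starts faster: the quantized checkpoint is about 25 GB versus roughly 90 GB for the 16-bit weights, so there is far less to download and load. Loading an AWQ model from `transformers` looks like the snippet below; it assumes `transformers` and `autoawq` are installed, and the repo id is a placeholder for whichever AWQ-quantized Mixtral checkpoint you use.

```python
def load_awq_mixtral(repo_id: str = "your-org/Mixtral-8x7B-AWQ"):
    """Load an AWQ-quantized Mixtral checkpoint.

    The quantized weights are ~25 GB, versus ~90 GB at 16-bit,
    so the download and load steps are several times faster.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    return model, tokenizer
```

`transformers` detects the AWQ quantization config stored in the checkpoint, so no extra quantization arguments are needed at load time.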
Trelis Research Install Guides (Free)
I’ve cleaned up this repo, which contains a number of handy scripts for getting started with LLMs:
Cheers, Ronan
Links: