Citation Checking for LLMs
RAG - but with Verified Citations!
Even if you ask a Large Language Model to cite its sources, the LLM may just make up those citations!
How do you get around this? By (automatically) cross-checking citations with the source documents!
👨‍💻 Full demo covering:
- Querying an LLM based on background documents
- Retrieving relevant chunks from the docs using cosine similarity and BM25
- Prompting the LLM for cited responses in JSON format
- Verifying citations against source chunks
- Iterating to improve citation accuracy
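The retrieval and verification steps above can be sketched in a few lines of plain Python. This is a minimal illustration, not the demo's actual code: the function names, the BM25 parameters, and the whitespace/case normalization heuristic for matching quotes are my own assumptions.

```python
import math
import re
from collections import Counter

def bm25_score(query_tokens, doc_tokens, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of one tokenized chunk against a tokenized query."""
    avg_len = sum(len(d) for d in corpus) / len(corpus)
    n = len(corpus)
    tf = Counter(doc_tokens)
    score = 0.0
    for term in set(query_tokens):
        df = sum(1 for d in corpus if term in d)
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
        f = tf[term]
        score += idf * f * (k1 + 1) / (
            f + k1 * (1 - b + b * len(doc_tokens) / avg_len)
        )
    return score

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def verify_citation(quote, chunks):
    """Return the index of a source chunk that actually contains the
    quoted text (after collapsing whitespace and lowercasing), or None
    if the LLM made the citation up."""
    norm = lambda s: re.sub(r"\s+", " ", s.lower()).strip()
    q = norm(quote)
    for i, chunk in enumerate(chunks):
        if q in norm(chunk):
            return i
    return None
```

In the iteration step, any citation for which `verify_citation` returns `None` can be fed back to the model with a request to re-cite or drop the claim. A real pipeline would likely use fuzzy rather than exact matching, since models often paraphrase quotes slightly.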
As a bonus, I showcase Trellis Endpoints, a one-click RAG solution that implements citation checking.
That’s it for this week, cheers, Ronan
Learn more at Trelis.com/About
P.S. Yesterday, I forgot to include a link to my video comparing GPUs (A40, A100, H100) and inference engines (vLLM, SGLang, TGI, Nvidia NIM). You can check out the updated post - now with the video embedded - here.