submitted 11 months ago by keepthepace@slrpnk.net to c/fosai@lemmy.world

Hey all, I am in the process of testing several models for fine-tuning and that question cropped up.

I would like to add new facts to a foundation model and then train it for instruction following. The problem is that I will regularly have new data to add. I was wondering if there is a chance that I could do a single LoRA for the instruction tuning and reapply it each time I finish a new fine-tune?
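To make the question concrete, here is a toy sketch in plain Python (not a real training setup: "weights" are just numbers and the "LoRA" is a stored additive delta) of what reapplying one saved instruction adapter on top of successive fact fine-tunes would mean:

```python
# Toy sketch, plain Python: a LoRA adapter stores only a *delta* on top of
# the base weights, so in principle the same saved instruction adapter can
# be re-applied after every new facts fine-tune, as long as the base
# architecture (and thus the weight shapes) stays the same.

def apply_lora(base_weights, lora_delta, scale=1.0):
    """Effective weights W' = W + scale * delta (elementwise toy version)."""
    return [w + scale * d for w, d in zip(base_weights, lora_delta)]

instruction_lora = [1, -2, 5]    # trained once, saved, reused

base_round1 = [10, 20, 30]       # foundation model + facts fine-tune, round 1
base_round2 = [11, 19, 32]       # foundation model + facts fine-tune, round 2

chat_round1 = apply_lora(base_round1, instruction_lora)
chat_round2 = apply_lora(base_round2, instruction_lora)  # same adapter, new base
```

The caveat is that the adapter was optimized against the original base weights, so nothing guarantees it still behaves well once the base has drifted; that part is an empirical question.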

[-] namnnumbr@lemmy.ml 2 points 11 months ago

I don’t think fine tuning works the way you think it does; one does not generally fine tune to “add facts”. This might be useful: https://nextword.substack.com/p/rag-vs-finetuning-llms-what-to-use

I’d advocate for using the RAG pattern to do the lookups for the new facts. If needed, you can fine tune the model on top to output for your specific domain or format.
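As a rough illustration of the RAG pattern being advocated (all names made up, and naive word overlap standing in for a real embedding index):

```python
# Rough RAG illustration (toy scoring, hypothetical helper names): retrieve
# the most relevant documents, then paste them into the prompt so the model
# answers from looked-up facts instead of memorized ones.

def retrieve(query, documents, k=2):
    """Rank documents by how many query words they share."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return ("Use only the context below to answer.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "Project A hit a scheduling problem in March.",
    "The cafeteria menu changes on Mondays.",
    "The scheduling problem in project A was solved by adding a second shift.",
]
prompt = build_prompt("What problems did project A have?", docs)
```

A production setup would swap the overlap score for vector similarity, but the shape of the pipeline (retrieve, then stuff the context) is the same.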

[-] keepthepace@slrpnk.net 3 points 11 months ago

Ah, I guess I should have written a more detailed message explaining the road I've already been down :-)

I know that RAG gets recommended more for adding information, and it is the fastest way to retrieve it. However, it only allows a shallow use of that information: the LLM has trouble combining facts from several different files. You can't, for example, give it 1000 emails and ask it to list the problems encountered in project A and how they were solved.

Fine-tuning can add facts. This person added the Unreal Engine 5 documentation to Llama 7B, and this company added financial knowledge to Llama 13B. These are my inspiration. With LoRA it requires higher ranks and, crucially, doing the fine-tuning on a foundation model; only after your own fine-tuning do you do the instruction fine-tune.
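For reference, the "higher ranks" point would look something like this with the Hugging Face `peft` library (the values here are illustrative, not a tested recipe from either of those projects):

```python
from peft import LoraConfig

# Illustrative only: a higher rank (r) gives the adapter more capacity,
# which fact-injection fine-tunes reportedly need. r=8..16 is a common
# default for style/format tuning, so r=64 is "high" by comparison.
fact_lora_config = LoraConfig(
    r=64,                # low-rank dimension; higher = more capacity
    lora_alpha=128,      # scaling factor, often ~2x the rank
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama-style attention projections
    task_type="CAUSAL_LM",
)
```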

I am wondering if there is a way to make that last step easier by reapplying the same LoRA.

I guess I am also wondering why we can't directly fine-tune facts into an instruction-tuned model. I tried: the model does tend to remember how to interact with instruct prompts, but the format gets a bit corrupted by the new dataset. I find it a bit weird how quickly such models forget past behavior as they are fed new tokens.

[-] namnnumbr@lemmy.ml 1 points 11 months ago

IMO there is a difference between adding “knowledge” and adding “facts”. You can fine tune in domain knowledge but it will be prone to hallucination. To ground the instructions, you’d need to introduce RAG for fact lookup; possibly with a summarization step if you want to bring in large bodies of facts.
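The "summarization step" mentioned above could be structured roughly like this (helper names hypothetical; `summarize()` stands in for a real LLM call and here just truncates):

```python
# Sketch of RAG with a summarization step: compress each retrieved document
# before filling the context window, so a large body of facts still fits
# into one grounded prompt. summarize() is a placeholder, not a real model.

def summarize(text, limit=60):
    """Placeholder for an LLM summarization call."""
    return text[:limit]

def ground_answer(query, retrieved_docs):
    summaries = [summarize(doc) for doc in retrieved_docs]
    bullet_list = "\n- ".join(summaries)
    return f"Answer from these facts only:\n- {bullet_list}\n\nQ: {query}"
```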

[-] keepthepace@slrpnk.net 2 points 11 months ago

Do you think there is a way to add facts to a model without raising the probability of hallucinations? Yes, RAG is a necessity, but if we want the model to display some sort of reasoning over a variety of facts, we need them embedded more deeply. The email example I gave can't be done with RAG.

[-] namnnumbr@lemmy.ml 3 points 11 months ago

I think I get what you’re after now. I’ll have to think on this further - interesting problem!

[-] rufus@discuss.tchncs.de 1 points 11 months ago* (last edited 11 months ago)

The UE5 person seems not too convinced himself: https://github.com/bublint/ue5-llama-lora/issues/7#issuecomment-1612001607

The xFinance one, on the other hand, seems to have been evaluated with positive results.

[-] Turun@feddit.de 1 points 11 months ago

At least in Stable Diffusion, LoRAs are composable: you can combine different LoRAs and have both effects applied to the resulting image.

[-] keepthepace@slrpnk.net 1 points 11 months ago

Yes, but my understanding is that they are commutative (i.e. the order in which you apply them does not matter)? If so, it looks like a "facts-adding" LoRA would induce the forgetting of formatting either way.

And I am especially curious whether a facts-LoRA plus an instruction-LoRA results in a model that can use the new facts in its instructions or not. I'll run experiments, but I would have loved it if people here already knew.
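The commutativity intuition can at least be sanity-checked for the merge-time case: combining two LoRA deltas into the same base weights is a sum, and sums commute (a toy sketch, not evidence about what sequential *training* does):

```python
# Toy check of the commutativity question: if composing LoRAs means *adding*
# their weight deltas, then W + dA + dB == W + dB + dA, so order cannot
# matter. (Plain integer lists stand in for weight matrices; real adapters
# are low-rank factor pairs, but merging them is still additive.)

def add(weights, delta):
    return [w + d for w, d in zip(weights, delta)]

W = [10, 20, 30]            # "base model" weights
facts_lora = [3, 0, -1]     # hypothetical facts-adapter delta
instruct_lora = [-2, 5, 0]  # hypothetical instruction-adapter delta

assert add(add(W, facts_lora), instruct_lora) == add(add(W, instruct_lora), facts_lora)
```

Order could still matter in practice when the second adapter is *trained* on top of the first rather than merged independently, which may be where the forgetting comes from.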

this post was submitted on 04 Oct 2023