[–] sp3ctr4l@lemmy.dbzer0.com 15 points 1 week ago (1 children)

The literal first thing I did with my lightweight local LLM was describe specific scenarios to it and ask it to generate a prompt for itself, so that a 'profile' of it would always have that context.

[–] cdf12345@lemmy.zip 8 points 1 week ago (1 children)

Can you explain that a little more in depth? I've been experimenting with local LLMs and am curious what type of scenarios you're talking about and how this affected your LLM output.

[–] sp3ctr4l@lemmy.dbzer0.com 3 points 1 week ago (1 children)

Ok, for starters, I'm using Alpaca, a flatpak that acts as a kind of simplified, containerized way of managing local LLMs. It has a few basic tools you can use with LLMs, manages downloading them, and then lets you make profiles based off of the models you've downloaded, with specific context prompts and a few tweakable settings.

So far I am most fond of the Qwen3 model, but, ymmv.
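If you'd rather poke at the same models from a script than through Alpaca's UI: as far as I know, Alpaca just runs Ollama under the hood, so a minimal sketch like this works, assuming the Ollama instance is reachable on its default port (11434) and you've pulled qwen3.

```python
import json
import urllib.request

# Alpaca manages an Ollama instance under the hood; this assumes it's
# reachable on Ollama's default port and that the qwen3 model is pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "qwen3",
        "prompt": "In one sentence, what is a system prompt?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```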

more explanation encapsulated herein

Uh, let's see: for things like getting up to date with a particular coding language's modern syntax revisions, or updates to a particular library, something like that.

Feed them some webpages of the documentation and ask them to read them, or ask them to do some of their own self-directed searching to find changes. Then say: hey, now please generate a contextual prompt for yourself that would summarize the key points/changes and inform you of them.
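Roughly, that loop looks like this. It's just a hedged sketch against Ollama's HTTP generate endpoint; the model name, the docs file, and the exact wording of the meta-prompt are all stand-ins for whatever you're actually using.

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434/api/generate"  # assumed default Ollama endpoint

def ask(prompt: str) -> str:
    """One-shot, non-streaming call to the local model."""
    req = urllib.request.Request(
        OLLAMA,
        data=json.dumps({"model": "qwen3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Paste in whatever documentation you want it to 'learn' from.
docs = open("syntax_changes.txt").read()  # hypothetical file of pasted docs

# Ask the model to distill the docs into a reusable context prompt for itself.
context_prompt = ask(
    "Read the following documentation excerpts, then write a system prompt "
    "for yourself that summarizes the key changes, so a future session of you "
    "starts out already knowing them:\n\n" + docs
)
print(context_prompt)  # save this as the profile's context prompt in Alpaca
```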

There are a decent number of fairly powerful, fairly lightweight models, but they tend to be some months or a year or whatever out of date in their training data, so doing this acts as a kind of 'update' for them.

You can also do this with... some set of fairly niche topics that a lightweight model just isn't that accurate about, for the purpose of, say... brainstorming worldbuilding scenarios, giving it more recent scientific/news updates, or even asking it to try to roleplay as some specific fictional or real character.

Though I've not really tried that last scenario more than once, basically as a gimmick.

It's not, like, guaranteed to make them super intelligent experts; it's more like... putting them through a crash course and giving them a somewhat more accurate general overview.

Any situation where they... keep making specific, minor, fairly simple mistakes... this kind of thing can at least be decent at triaging it.

I've mostly been trying to craft a coding assistant out of this, and I've had pretty decent success at getting it to 'learn' sets of syntax updates and deprecated methods that now have very close equivalent replacements.
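Concretely, the 'learned' part is just the generated prompt getting reused as the system message for the coding-assistant profile. In Alpaca you'd paste it into the profile; from a script, something like this works (again a sketch, the file and model names are assumptions):

```python
import json
import urllib.request

# Reuse the previously generated context prompt as the system message,
# via Ollama's chat endpoint (assumed default port).
system_prompt = open("coding_profile.txt").read()  # hypothetical saved prompt

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps({
        "model": "qwen3",
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Is this method deprecated, and what replaces it?"},
        ],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```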

Another thing you can do with this kind of thing is... the adversarial approach. Give two different models the same initial set of 'hey, here are some things I want you to know, now generate a prompt for yourself'. Then take the generated prompt, give it in conversation to the other LLM, and ask it to evaluate it and compare it against the prompt it generated itself, etc.
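A rough sketch of that cross-check, with two example models (use whichever ones you've actually pulled; the docs file is hypothetical):

```python
import json
import urllib.request

def ask(model: str, prompt: str) -> str:
    """One-shot call to a named local model via Ollama."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

source = open("syntax_changes.txt").read()  # hypothetical docs file
task = "Write a system prompt for yourself summarizing the key points of:\n\n" + source

# Both models distill the same material independently.
prompt_a = ask("qwen3", task)
prompt_b = ask("llama3.2", task)

# Then one model critiques the other's generated prompt against its own.
critique = ask(
    "qwen3",
    f"Here is a context prompt another model wrote:\n\n{prompt_b}\n\n"
    f"Here is the one you wrote:\n\n{prompt_a}\n\n"
    "Compare them: what did the other one get wrong, or miss?",
)
print(critique)
```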

The general downside is that with a lightweight model on low-power hardware, the longer/more complex the initial prompt, the worse the startup cost; I guess it's kind of comparable to a game having to compile shaders before a first run. You can end up in situations where you've just given them so much 'context' that evaluating the prompt and then generating a first actual conversational response exceeds what your local hardware can compute in any reasonable time.
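You can actually watch that cost, since Ollama's non-streaming responses report how many tokens the context took and how long the model spent just evaluating the prompt before it generated anything (durations are in nanoseconds; the file name below is again hypothetical):

```python
import json
import urllib.request

big_context = open("coding_profile.txt").read()  # hypothetical long saved prompt

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "qwen3",
        "system": big_context,  # the whole context prompt gets evaluated up front
        "prompt": "Ready?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    r = json.loads(resp.read())

print("prompt tokens:", r["prompt_eval_count"])
print("prompt eval seconds:", r["prompt_eval_duration"] / 1e9)
```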