
Things are still moving fast. It's mid/late July now and I've spent some time outside, enjoying the summer. It's been a few weeks since things exploded back in May this year. Have you people settled down in the meantime?

I've since moved on from Reddit, and I miss the LocalLLaMA community over there, which was/is buzzing with activity, AI news and discussions every day.

What are you people up to? Have you gotten tired of your AI waifus? Or finished indexing all of your data into some vector database? Have you discovered new applications for AI? Or are you still toying around, evaluating all the latest fine-tuned variants in constant pursuit of the best llama?

[-] noneabove1182@sh.itjust.works 4 points 1 year ago

I'm trying to find a way to use it with Guidance to control my smart home; it's actually really doable with only a 13B model.

[-] rufus@discuss.tchncs.de 3 points 1 year ago* (last edited 1 year ago)

Nice. I'm not an expert on NLP; are there any resources or frameworks out there that help with handling a language model and guiding it towards a specific set of commands, entities and areas? Or do you design everything from scratch?

When I first started tinkering with oobabooga's webui and its roleplay abilities, I also tried to create a character for my smart home. That certainly was fun, and I like the idea of having a house with some kind of soul. But I never figured out how to make that useful. It just tried switching random stuff on or off, couldn't figure out what I wanted or understand what my apartment looked like, and of course it kept hallucinating devices.

With Home Assistant having declared 'The Year of the Voice', this might get useful soon. They now(?) have official integrations for Whisper STT and a TTS. And they're probably designing the language processing and whatever else is needed to handle commands regarding areas or specific domains. I think I will try that once it's ready to use. But I want some sci-fi house with a soul, or the computer from the 'Enterprise'. And I think I also need more LLM power for that.

[-] noneabove1182@sh.itjust.works 2 points 1 year ago

Yeah, I'm using it with Home Assistant :)

Basically I'm using oobabooga for inference and exposing an API endpoint as if it were OpenAI, then plugging that into Microsoft's Guidance, which I then give a tool. The tool takes the device and the desired state as input and calls my Home Assistant REST endpoint to execute the command!
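
A minimal sketch of what that wiring can look like (not the actual code from this comment: the port, entity IDs, the environment-variable endpoint override and the model name are all assumptions, and it targets the old guidance 0.0.x template API):

```python
# Sketch only: oobabooga's openai extension serves an OpenAI-compatible API,
# guidance picks a constrained device/state pair, and a small "tool" function
# forwards that to Home Assistant's REST API.
import os
import requests
import guidance

# Point the OpenAI-style backend at the local oobabooga endpoint.
# (Whether the base URL is picked up from the environment depends on your
# guidance/openai versions -- treat this part as an assumption.)
os.environ["OPENAI_API_KEY"] = "sk-dummy"                  # not validated locally
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:5001/v1"
guidance.llm = guidance.llms.OpenAI("text-davinci-003")    # completion-style name; the local server serves whatever model it loaded

HA_URL = "http://homeassistant.local:8123"                 # hypothetical host
HA_TOKEN = os.environ["HA_TOKEN"]                          # long-lived access token

DEVICES = ["light.living_room", "light.bedroom", "switch.coffee_maker"]
STATES = ["turn_on", "turn_off"]

def call_home_assistant(device: str, state: str) -> None:
    """The 'tool': POST to Home Assistant's REST service endpoint."""
    domain = device.split(".")[0]                          # e.g. "light"
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/{state}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": device},
        timeout=10,
    )
    resp.raise_for_status()

# Constrain the model so it can only pick known devices and states.
program = guidance("""The user said: "{{request}}"
Pick the device and the state that fulfil the request.
Device: {{select 'device' options=devices}}
State: {{select 'state' options=states}}""")

out = program(
    request="It's too dark in the living room",
    devices=DEVICES,
    states=STATES,
)

# For simplicity the tool is invoked after generation here; guidance can also
# call a passed-in Python function directly from the template.
call_home_assistant(out["device"], out["state"])
```

The nice part of constraining the choices with `select` is that the model can only ever name devices and states that actually exist, which sidesteps the hallucinated-devices problem from earlier in the thread.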

[-] rufus@discuss.tchncs.de 2 points 1 year ago

Thank you for pointing that out. I was completely unaware of Microsoft Guidance. Once they merge/implement llama.cpp support, I'm definitely going to try it too.

[-] noneabove1182@sh.itjust.works 1 points 1 year ago

That will certainly be amazing, but for now it's actually not bad to use either the oobabooga web UI or koboldcpp to run the inference and provide a REST endpoint, because you can trick basically any program into treating it as if it were OpenAI and use it the same way.
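
As an illustration, the "pretend it's OpenAI" trick with the official openai Python package (0.x API) looks roughly like this; the port and model name are placeholders for whatever the local server actually exposes:

```python
# Sketch of pointing the official openai client (0.x API) at a local
# oobabooga/koboldcpp endpoint instead of api.openai.com.
import openai

openai.api_key = "sk-anything"                     # local servers don't check it
openai.api_base = "http://127.0.0.1:5001/v1"       # local endpoint instead of api.openai.com

resp = openai.Completion.create(
    model="local-model",                           # ignored or remapped by the local server
    prompt="Turn the kitchen light off.\nDevice:",
    max_tokens=20,
    temperature=0,
)
print(resp["choices"][0]["text"])
```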

[-] JackCloudman@ada.junoai.org 2 points 1 year ago

I've been waiting for ExLlama to get Guidance support, but there seem to have been some integration issues. We need more people to learn and get involved, haha, including me.

[-] noneabove1182@sh.itjust.works 3 points 1 year ago

I actually just recently started having really good experiences with ExLlama on just 13B models; specifically, I found the Orca-tuned ones to perform really well.
