Very effective at translating between different (human) languages. Best if you can find a native speaker to double-check the output. Failing that, reverse-translate with a couple of different models to verify the meaning is preserved. Even this sometimes fails, though -- e.g. two words with similar but subtly different definitions might trip you up. For instance, I'm told "the west" refers to different regions in English and Japanese, but translating and reverse-translating didn't reveal this error.
Asklemmy
A loosely moderated place to ask open-ended questions
It’s helping me understand how I think so that I can create frameworks for learning, problem solving, decision making etc. I’m neurodivergent.
I thought they would reject it, but my band friends and their peers all like to use AI to brainstorm and draft songs, then go from there to make their own songs.
I thought that was interesting. I've asked them a few times about the lazy way of using AI to just churn out slop, and yeah, they're against that.
I don't have any close friends who are drawing artists, though I know a few through mutual hobbies on Discord. They don't seem to be using AI as a tool from what I can tell.
My dad and his circle are definitely churning out slop, though he says it's mostly for in-group joking and shooting the shit, so I guess that's fine.
Me personally, I'm still hesitant to use it. I'm an "everything" consultant who hates his place in the small IT company, but I'm riding my BPD II wave too hard to change it. Everyone around me is fine using AI to help analyze documents and whatnot for work. I can see how these tools are useful once you know how to ask the thing, but I just don't want to.
As a DJ with ADHD, it's great for helping me decide what to play next when I forget where I was going with the set, and mix myself into a corner. That said, it's not very good at suggesting songs with a compatible BPM and key, but it works well enough for finding tunes with a similar vibe to what I'm already playing. So I just go down the list until I find a tune that can be mixed in.
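Incidentally, the key/BPM part is mechanical enough to script yourself rather than ask an LLM. A rough sketch of a compatibility check using the Camelot wheel convention (the function names and the ~6% tempo tolerance are my own choices, not from any DJ software):

```python
def camelot_compatible(a, b):
    """Harmonic-mixing check on Camelot wheel codes like '8A' or '8B'.

    Compatible moves: the same code, +/-1 on the number with the same
    letter (wrapping 12 -> 1), or the same number with the other letter.
    """
    na, la = int(a[:-1]), a[-1].upper()
    nb, lb = int(b[:-1]), b[-1].upper()
    if la == lb:
        return na == nb or (na % 12) + 1 == nb or (nb % 12) + 1 == na
    return na == nb

def bpm_close(x, y, tolerance=0.06):
    """True if two tempos are within ~6% of each other (pitch-fader range)."""
    return abs(x - y) / max(x, y) <= tolerance
```

So a track at 8A / 126 BPM mixes cleanly into 9A / 128, but not into 3B / 150.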
As for the usual boring stuff, I'm learning how to code by having it write programs for me, and then analyzing the code and trying to figure out how it works. I'm learning a lot more than I would from studying a textbook.
I also used to use it for therapy, but not so much anymore, since I figured out that it will just tell you what you want to hear if you challenge it enough. Not really useful for personal growth.
One thing it's useful for is learning how stuff works, using metaphors comparing it to subjects I already understand.
I've used them both a good bit for D&D/TTRPG campaigns. The image generation has been great for making NPC portraits and custom magic item images. LLMs have been pretty handy for practicing my DMing and improv: I ask one to act like a player and react to what it decides to do. Sometimes I do the reverse and ask it to pitch interesting ideas for characters/dungeons/quest lines. I rarely took those in their entirety, but there were often bits and pieces I'd use.
Good for gaining an outside perspective/insight on an argument, discussion, or other form of communication between people. I fed my friend’s and their ex’s text conversation to it (with permission), and it was able to point out emotional manipulation in the text when asked neutrally about it:
Please analyze this conversation between A and B and tell me what you think of their motivations and character in this conversation. Is there gaslighting? Emotional manipulation? Signs of an abusive communication style? Etc. Or is this an example of a healthy communication?
It is essential not to ask a leading question that frames A or B in particular as the bad or the good guy. For best results, ask neutral questions.
It would have been quite useful for my friend to have this when they were in that relationship. It may be able to spot abusive behaviors from your partner before you and your rose-colored glasses can.
Obvious disclaimers about believing anything it says are obvious. But having an outside perspective analyze your own behavior is useful.
Great for giving incantations for ffmpeg, imagemagick, and other power tools.
"Use ffmpeg to get a thumbnail of the fifth second of a video."
Anything where the syntax is complicated, lots of half-baked tutorials exist for the AI to read, and you can immediately confirm whether it worked. It does hallucinate flags, but it fixes them if you say "There is no --compress flag" etc.
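For the thumbnail prompt above, the incantation a model usually lands on is something like `ffmpeg -ss 5 -i input.mp4 -frames:v 1 thumb.jpg`. A small Python wrapper (the function names are mine) that builds the command so you can eyeball it before running:

```python
import subprocess

def thumbnail_cmd(video, out_image, second=5):
    # -ss before -i seeks quickly to the timestamp; -frames:v 1 writes
    # a single frame; -y overwrites the output file if it already exists.
    return ["ffmpeg", "-y", "-ss", str(second), "-i", video,
            "-frames:v", "1", out_image]

def make_thumbnail(video, out_image, second=5):
    """Write one frame from `second` seconds into `video` as an image."""
    subprocess.run(thumbnail_cmd(video, out_image, second), check=True)
```

The nice part, as noted, is that this is trivially self-verifying: either `thumb.jpg` shows the right frame or it doesn't.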
This is the way.
With mixed results I've used it for summarising the plots of books if I'm about to go back into a book series I've not read for a while.
Legitimately, no. I tried to use it to write code and the code it wrote was dog shit. I tried to use it to write an article and the article it wrote was dog shit. I tried to use it to generate a logo and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.
It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.
Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.
Crappy but working code has its uses. Code that might or might not work also has its uses. You should primarily use LLMs in situations where you can accept a high error rate. For instance, in situations where output is quick to validate but would take a long time to produce by hand.
The output is only as good as the model being used. If you want to write code, then use a model designed for code. Over the weekend I wrote an Android app to connect my phone to my Ollama instance from outside my network. I've never done any coding beyond scripts, and the AI walked me through setting up the IDE and a git repository before we even got started on the code. Three hours after I had the idea, I had the app installed and working on my phone.
I didn’t say the code didn’t work. I said it was dog shit. Dog shit code can still work, but it will have problems. What it produced looks like an intern wrote it. Nothing against interns, they’re just not gonna be able to write production quality code.
It’s also really unsettling to ask it about my own libraries and have it answer questions about them. It was trained on my code, and I just feel disgusted about that. Like, whatever, they’re not breaking the rules of the license, but it’s still disconcerting to know that they could plagiarize a bunch of my code if someone asked the right prompt.
(And for anyone thinking it, yes, I see the joke about how it was my bad code that it trained on. Funny enough, some of the code I know was in its training data is code I wrote when I was 19, and yeah, it is bad code.)
LLMs are pretty good at reverse dictionary lookup. If I'm struggling to remember a particular word, I can describe the term very loosely and usually get exactly what I'm looking for. Which makes sense, given how they work under the hood.
I've also occasionally used them for study assistance, like creating mnemonics. I always hated the old mnemonic I learned in school for the OSI model because it had absolutely nothing to do with computers or communication; it was some arbitrary mnemonic about pizza. I was able to make an entirely new mnemonic actually related to the subject matter, which makes it way easier to remember: "Precise Data Navigation Takes Some Planning Ahead". Pretty handy.
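The nice thing about a mnemonic like that is it's trivially checkable: one word per OSI layer, first letters matching, bottom to top.

```python
layers = ["Physical", "Data Link", "Network", "Transport",
          "Session", "Presentation", "Application"]
mnemonic = "Precise Data Navigation Takes Some Planning Ahead"

# One mnemonic word per layer, initials matching in order.
initials = [word[0] for word in mnemonic.split()]
assert initials == [layer[0] for layer in layers]
```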
On this topic, it's also good at finding an acronym expansion that spells out a specific thing you want. Like, if you want your software's name to spell your name or some fun word while the full form still relates to what it does, AI can be useful.
ChatGPT kind of sucks but is really fast. DeepSeek takes a second but gives really good or hilarious answers. It’s actually good at humor in English and Chinese. Love that it’s actually FOSS too
I’m piping my in-house camera feed to Gemini. It’s funny how it comments on our daily lives. I should turn the best of it into a book or something.
Another one:
"Night Hall Motion Detected: you left the broom out again, it probably slid a little against the wall. I bemoan my existence. Is this what life is about? Reporting on broom movements?"
Yeah I have a full collection of super sarcastic shit like that.
Do you take any precautions to protect your privacy from Google or are you just like, eh, whatever?
yeah that looks creepy as fuck
AI'm on Observation Duty
One day I'm going to get around to hooking a local smart speaker to Home Assistant with ollama running locally on my server. Ideally, I'll train the speech to text on Majel Barrett's voice and be able to talk to my house like the computer in Star Trek.
Before it was hot, I used ESRGAN and some other stuff for restoring old TV. There was a niche community that finetuned models just to, say, restore classic SpongeBob or DBZ or whatever they were into.
These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.
…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.
Tailored boilerplate code
I can write code, but it's only a skill I've picked up out of necessity and I hate doing it. I am not familiar with deep programming concepts or specific language quirks and many projects live or die by how much time I have to invest in learning a language I'll never use again.
Even self-hosted LLMs are good enough at spitting out boilerplate code in popular languages that I can skip the deep-dive and hit the ground running- you know, be productive.
I bought a cheap barcode scanner, scanned all my books and physical games, and put them into a spreadsheet. I gave the spreadsheet to ChatGPT and asked it to populate the titles, ratings, and genre. This lets me keep them in storage and still find what I need quickly.
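A lookup like that can also be scripted deterministically instead of trusting the model's recall. A sketch against Open Library's public ISBN endpoint (the function names are mine; ratings and genre would still need another source or the LLM):

```python
import json
from urllib.request import urlopen

def isbn_url(isbn):
    # Open Library serves book records as JSON at this path.
    return f"https://openlibrary.org/isbn/{isbn}.json"

def title_from_record(record_json):
    """Pull the title field out of an Open Library book record."""
    return json.loads(record_json).get("title")

def lookup_title(isbn):
    with urlopen(isbn_url(isbn)) as resp:
        return title_from_record(resp.read())
```

Loop that over the scanned-barcode column and the titles fill themselves in, with no hallucination risk.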
I love fantasy worldbuilding and write a lot. I use it as a grammar checker and sometimes use it to help gather my thoughts, but never as the final product.
Getting my ollama instance to act as Socrates.
It's great for introspection. Since it's not human, I'm less guarded in my responses, and being local means I'm able to trust it.
I’ve used LLMs to reverse engineer some recipes.
Do you just try and describe what it tastes like?
I either feed it the list of ingredients or it finds them itself if it’s a popular item. It’s good at guessing the proportions of the ingredients if you’ve got the label.
Can you make an example?
I can’t be too specific without giving away my location, but I’ve recreated a sauce that was sold by a vegan restaurant I used to go to that sold out to a meat-based chain (and no longer makes the sauce).
The second recipe was the seasoning used by a restaurant from my home state. In this case the AI was rather stupid: its first stab completely sucked, and when I told it so, it said something along the lines of “well, employees say it has these [totally different] ingredients.”
It's good for boring professional correspondence: responding to bosses' emails and filling out self-evaluations that waste my time.
I use a model in the app SherpaTTS to read articles from the RSS aggregator Feedme.
I've done lots of cool things with AI. Image manipulation, sound manipulation, some simple videogames.
I've never found anything cool to do with an LLM.
I use it for book/movie/music/game recommendations (at least while it isn't used for ads...). You can ask for an artist similar to X, or a short movie in genre X. The more demanding you are, the better -- like "a funny sci-fi book in the YA genre with a zero-to-hero plot".
The image-generator-to-3D-model-to-animation pipeline isn't too bad. If you're not a great visual artist, 3D modeler, or animator, you can get pretty decent results on your own that would normally take a team of multiple people dozens of hours, after years of training.
Nope. Any use case I have tried with it, I usually find that either a python script, database, book, or piece of paper can always accomplish the same job but usually with a better end result and with a more reliably reproducible outcome.
Employment. I got a job with one of the big companies in the field. Very employee-focused. Good pay. Great benefits. Commute is 8 miles. Smart, pleasant, and capable co-workers.
As far as using the stuff - nope. Don't use it at all.
I've used LLMs to generate dialogue trees for a game, and to generate data with coordinates describing the layout of the game world. In some ways it can replace procedural generation code.
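One way this can work: ask the model to emit the tree as JSON with node ids, an NPC line, and choices pointing at the next node, then walk it in the game loop. A minimal sketch (the schema and sample lines here are my own invention, not a standard):

```python
import json

# The kind of structure an LLM can be prompted to produce directly.
TREE = json.loads("""
{
  "start":   {"npc": "Halt! Who goes there?",
              "choices": [{"text": "A friend.", "next": "friend"},
                          {"text": "Your doom.", "next": "hostile"}]},
  "friend":  {"npc": "Pass, friend.", "choices": []},
  "hostile": {"npc": "Guards! Guards!", "choices": []}
}
""")

def walk(tree, picks, node="start"):
    """Follow a sequence of choice indices; return the NPC lines seen."""
    lines = [tree[node]["npc"]]
    for i in picks:
        node = tree[node]["choices"][i]["next"]
        lines.append(tree[node]["npc"])
    return lines
```

Because the schema is plain data, a generated tree can be validated (every "next" id must exist) before it ever reaches the game.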
I'm an author working on an online story series. Just finished S04. My editing was shit and I could not afford to pay someone to do it for me.
So I write the story, rewrite the story, put it through GPT to point out irregularities, grammatical errors, inconsistencies etc, then run it through Zoho's Zia for more checks and finally polish it off with a final edit of my own. This whole process takes around a year.
Overall, quality improved, I was able to turn around stuff quicker and it made me a lot more confident about the stuff I am putting out there.
I also use Bing image creator for the artwork and have seen my artwork improve dramatically from what Dream (Wombo) used to generate.
Now I am trying to save up to get a good GPU so that I can run Stable Diffusion so that I can turn it into a graphic novel.
Naturally I would like to work with an artist, since I can't draw, but everyone I meet asks for a $20-30k deposit to do the thing. Collaborations have been discussed, and what I've learnt is that as times get tough, people request greater shares in the project than I, the originator, have. At one point when I was discussing it with an artist, he was sidelining me and becoming the main character. I'm not saying all artists are like this, but dang, people can be tough to deal with.
I respect that people have to eat, but I can't afford that and I have had this dream for years so finally I get a chance to pull it off. My dream can't die without me giving it my best so this is where I am with AI.