this post was submitted on 30 Mar 2026
24 points (80.0% liked)

Asklemmy

53776 readers
524 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Icon by @Double_A@discuss.tchncs.de

founded 7 years ago

AI can't be all that bad. The problem I keep seeing is that AI is a double-edged sword. On one side, you have corporations shoving AI into just about everything and treating it like it's a cure for cancer, which really rubs people the wrong way. Then, on more of a societal level, you've got people using AI for all sorts of things, from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that's not advised.

However, I've found some benefits with AI. For example, I'm chatting with ChatGPT about credit cards, because it's something I may lean towards getting into. It's helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined response instead of beating around the bush.

top 43 comments
[–] Jobe@feddit.org 1 points 41 minutes ago

In engineering/manufacturing, machine learning can be used to monitor performance and predict part failures of machines so you only do maintenance when it's actually required. Parts are usually replaced when the warranty runs out, but they will often still be good for a while.
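A toy sketch of that idea (all readings and thresholds invented for illustration): a rolling z-score over sensor data flags the drift that often precedes a part failure, so maintenance can be scheduled only when the flags start firing.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    A sustained run of flags would suggest scheduling maintenance;
    real predictive-maintenance models are far more sophisticated.
    """
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (readings[i] - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flags.append((i, z))
    return flags

# Steady vibration around 1.0, then a drift upward as the part wears.
normal = [1.0 + 0.01 * (i % 5) for i in range(40)]
worn = [1.5, 1.6, 1.8]
alerts = flag_anomalies(normal + worn)  # flags the three worn readings
```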

[–] Danitos@reddthat.com 4 points 5 hours ago

Accessibility.

[–] MorkofOrk@lemmy.world 4 points 5 hours ago (1 children)

An amazing use for it in audio engineering is feedback suppression. The old way to give yourself more headroom required you to sit there turning up the gain until feedback happened and then cut that frequency. Now you just turn on the feedback suppression and it does all that for you on the fly. It's game-changing for live sound; every major venue has it now.
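The detection half of that can be sketched in a few lines (a toy version only, with a simulated signal; real suppressors track and notch the ringing frequency continuously and adaptively):

```python
import numpy as np

def detect_feedback_hz(signal, sample_rate):
    """Return the dominant frequency in the signal -- the one a
    feedback suppressor would notch out."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Simulated mic feed: quiet program audio plus a loud 2 kHz feedback ring.
sr = 48_000
t = np.arange(sr) / sr  # one second of samples
mic = 0.1 * np.sin(2 * np.pi * 440 * t) + 1.0 * np.sin(2 * np.pi * 2000 * t)
ring = detect_feedback_hz(mic, sr)  # ~2000.0 Hz, the frequency to cut
```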

[–] Jobe@feddit.org 1 points 52 minutes ago

Great for film sound too. You're filming a rainy scene and the rain is way too loud? You used to have to get the actors into the studio to do voiceover; now you can often just filter it out.

[–] TribblesBestFriend@startrek.website 27 points 11 hours ago (1 children)

And if ChatGPT made a mistake? How would you know before it's too late?

[–] Reverendender@sh.itjust.works -2 points 11 hours ago (2 children)

Because all other information on credit cards (or anything else) on the internet available to people eager to learn is 100% accurate, all the time?

I’ll take that over an information tool that lies 30% of the time.

That is absolutely the worst possible excuse to shill for big tech, which comes with no real guarantees about precision or accuracy.

While there are trustworthy human sources on the Internet, there are no trustworthy LLMs.

[–] WoodScientist@lemmy.world 6 points 8 hours ago

Running automated hacking and blackmail campaigns against AI companies.

[–] logos@sh.itjust.works 14 points 10 hours ago

I have a friend at work who does a lot of video. He films weddings, music videos, etc., and is making a pilot for Netflix. He uses AI to go through all his footage and tag it according to content. E.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all relevant footage. Incredibly useful.

[–] seahag@lemmy.world 18 points 11 hours ago* (last edited 11 hours ago)

AI has uses in the medical, scientific, and disabled communities. I've seen it helping blind people with shopping, with Google glasses or whatever reporting what they've picked up and describing it to them. It can also identify/predict cancer tissue early.

Generative AI is peak laziness and the death of human creativity. Using AI for companionship has a nasty effect on mental health.

AI should have only ever been an assistant in medical/scientific research in my opinion, simply because it's so damaging to the environment, economy, and society.

[–] nutsack@lemmy.dbzer0.com 0 points 5 hours ago* (last edited 5 hours ago)

most software companies are writing as much code with it as possible. it's replacing junior level software engineers. it has completely transformed the tech industry in a way that there is no coming back from

[–] aReallyCrunchyLeaf@lemmy.ml 15 points 11 hours ago* (last edited 7 hours ago) (1 children)

The technology itself is novel and cool. It's the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentives and the need for the tech industry to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it's most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn't see LLMs in every text editor, pencil case, and pair of sneakers, but these snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

The thing is, we don't need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn't need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT to make them a workout routine or tell them who won the 1918 World Series. But these fucking cretins don't care, that's not the point; they are in this because it's a golden ticket to growth city, and once they cash their check they don't give one hot fuck about the human-spirit-stealing machine they built.

TLDR: our society is broken and that's why we keep getting the shittiest, lowest-common-denominator version of everything. everything has to suck by definition because that's the only version that the system we built will allow.

[–] bibbasa@piefed.social 2 points 7 hours ago
[–] Lumidaub@feddit.org 11 points 11 hours ago

If we're strictly talking about LLMs: certain accessibility services - MAYBE. Writing closed captions / transcription for the most part requires little "human" touch. If we ASSUME that AI will be able to do it reliably one day - because it really can't yet - that's one thing that would benefit society.

Image descriptions are another thing I might see done by AI one day, but that still requires an understanding of what's actually important about the image.

[–] MerrySkeptic@sh.itjust.works 8 points 11 hours ago (1 children)

I'm a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

I have heard of other therapists and medical doctors also using AI to help with diagnosing.

The danger is when therapists don't review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist might have been doing, or it might lack detail that the therapist might have otherwise included. But more often the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, "include more details about how the client noticed their pattern of putting their own feelings last," and it just does what you asked. You don't necessarily have to edit manually, though you can.

[–] The_Picard_Maneuver@lemmy.world 1 points 10 hours ago (1 children)

So how does that work? Do you just have an AI listening throughout the session like a note-taker?

[–] MerrySkeptic@sh.itjust.works 6 points 10 hours ago (1 children)

Yes, basically, but since it is HIPAA compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to teach the AI. The therapist can also choose from a number of different case note formats that might focus on different things.

[–] helix@feddit.org 3 points 10 hours ago (1 children)

no protected recordings are used to teach the AI

How do you know for certain?

[–] SuperUserDO@piefed.ca 2 points 7 hours ago

People conflate security with risk mitigation. It's not secure in the way that you can confirm the data has been deleted. The risk however is mitigated due to vendor attestations reinforced by contracts.

[–] aceshigh@lemmy.world 5 points 10 hours ago

It’s very helpful for neurodivergent people - it helps you figure out who you are and what you want; how you think, learn, and work best; identify your obstacles and overcome them; and understand your neurodivergence and compare it to how neurotypical people think. It’s fantastic at generating ideas that you then test out. The ideas it gives you are based on how you actually function, so oftentimes they’re valid.

[–] rossman@lemmy.zip 5 points 10 hours ago

Rubberducking for those with social anxiety. It also lowers the friction of getting surface-level answers that used to take digging through multiple sources.

It's a study monster that initially wiped out Chegg, Duolingo, SparkNotes, etc. The double edge is that people forget how to take notes and learn the fundamentals needed to handle complex problems.

[–] shellington@piefed.zip 6 points 11 hours ago

I agree there is a lot of annoying hype. However, I also agree there are some specific use cases where it can be helpful.

I for one find it handy sometimes when I am writing bash scripts to do things on my system. I obviously check them before running, but it does save time.

Although I do recommend running models locally if possible, as it is obviously preferable from a privacy and cost standpoint.

[–] whotookkarl@lemmy.dbzer0.com 3 points 10 hours ago

Not a hot dog

[–] makingStuffForFun@lemmy.ml 2 points 9 hours ago

I had a project of markdown files. About 400 of them, with about 1200 plus links in them.

The original filenames were changed. The links no longer worked.

The LLM went through each link, and found the new one, based on filename and file content, using its ability to recognise patterns, words, etc etc.

Absolutely saved me maybe a couple of days of painful manual labour, and it was all done in about 10 minutes.

This is the kind of thing I use it for. Horrible repetitive processes.
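For renames that are close string matches, the same repair can even be scripted deterministically with the standard library (a minimal sketch; the filenames here are made up for illustration):

```python
import re
from difflib import get_close_matches

def fix_links(markdown_text, current_filenames):
    """Rewrite [text](old-file.md) links whose targets were renamed,
    picking the closest-matching current filename."""
    def repair(match):
        text, target = match.group(1), match.group(2)
        if target in current_filenames:
            return match.group(0)  # link still valid, leave it alone
        candidates = get_close_matches(target, current_filenames, n=1, cutoff=0.4)
        return f"[{text}]({candidates[0]})" if candidates else match.group(0)
    return re.sub(r"\[([^\]]*)\]\(([^)]+)\)", repair, markdown_text)

files = ["setup-guide.md", "api-reference.md"]
doc = "See [setup](setup_guide.md) and [API docs](api-ref.md)."
fixed = fix_links(doc, files)  # both stale targets remapped
```

An LLM still earns its keep when the match depends on file *content* rather than filename similarity, which pure string matching can't see.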

[–] damnthefilibuster@lemmy.world 4 points 11 hours ago

I was sitting in a restaurant the other day and staring at the menu. It was Italian and none of the things made sense. Too wordy and not clear what was meat and what was fancy cheese. The waiter was utterly useless - too busy to help and when present, not answering my questions about what would be a good simple pasta in white sauce.

I took a photo and asked Claude what’s a good white sauce pasta which would be like Alfredo.

It found two options I hadn’t even looked at. AI is good at sorting through complexity. But I don’t just mean AI as in LLMs. It needs a lot more tools and knowledge to be useful. So what you need is a smart system which may or may not have AI as a component.

[–] moakley@lemmy.world 3 points 10 hours ago

Honestly, Google Search has been better the last couple years after spending the previous twenty years getting consistently worse.

Most of what I use Google for is trivial. Like how old is a certain actor, or why was this author canceled, or what does this item do in a video game?

It's great for those things. Especially the video game stuff. I don't want to watch a 10 minute video just to get a discrete answer, and now I don't have to.

I can even ask it for spoiler-free hints on a particular puzzle, and most of the time it gives me something useful.

[–] crunchpaste@lemmy.dbzer0.com 2 points 9 hours ago

Well, I know it's quite specific, but nothing beats AI at stereo matching and depth map generation, and that's important in many fields.

[–] Oisteink@lemmy.world 3 points 11 hours ago

Reading TOS and clicking off all the privacy options on the cookie popup

[–] ordnance_qf_17_pounder@reddthat.com 2 points 10 hours ago (1 children)

Finding info in a large quantity of information.

[–] Jaegeras@piefed.social 1 points 5 hours ago

That's hit or miss, depending on some things.

[–] ChonkyOwlbear@lemmy.world 2 points 10 hours ago

Karrot is a used item app that has a feature where you take a picture of an item and it IDs the item and tells you what it's worth. It's pretty impressive. It could ID my houseplants better than some dedicated plant ID apps I've used. It's not great with one of a kind items, but otherwise it's surprisingly accurate.

[–] ProfessorScience@lemmy.world 2 points 10 hours ago

I occasionally use it to find links to VODs for esports tournaments. Asking it to only link the specific game I want with no other summarization is a way to find them without spoilers (like when youtube "helpfully" suggests the last game of the grand finals of the tournament as a search result).

[–] SaveTheTuaHawk@lemmy.ca 1 points 9 hours ago (1 children)

In science, PubMed searches only terms in titles and abstracts, but LLMs will specifically search every figure and supplementary file. But you have to manually confirm, because sometimes they just make shit up.

[–] Chais@sh.itjust.works 1 points 1 hour ago

sometimes they just make shit up.

Kinda disqualifies it as a "search engine" in my book.

[–] EyIchFragDochNur@lemmy.world 2 points 11 hours ago

I'm chatting with Le Chat about things I wouldn't ask a friend about and wouldn't trust a stranger with. It brings up things I wouldn't have thought of.

[–] thedeadwalking4242@lemmy.world 1 points 9 hours ago

It's OK for very surface-level exploration. Like 100-level stuff. If it's something you'd google and easily find in an article, it's likely to do an OK job.

I've also found it's good for tedious, straightforward tasks. Anything that would be uncomfortable or time-consuming to automate manually. Best for one-offs.

I've also found it's extremely good at translation, which was its original use.

[–] mo_lave@reddthat.com -1 points 7 hours ago* (last edited 7 hours ago)
  1. Drafts, coming up with ideas you may not have thought of initially
  2. Grading whatever you write against the quality of other trained material, especially research papers (LLMs tend to be trained more on that material)
  3. For LLMs that search the internet/cite sources, it can be a more powerful search engine. Enter a keyword, the LLM can guess other semantic terms related to your search prompt/terms and refine the results that way. The additional search results cited matter to me more than the response itself (which can hallucinate, but have a niche role as a wildcard if you're the type to pattern-match concepts into new ideas)
[–] daannii@lemmy.world 0 points 7 hours ago

Language translations.

I think that's about it.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 10 hours ago

I’m good with asking DuckDuckGo for help with a game and AI scrapes the game sites to just give me the answer. You know, sites like IGN and Dutch with popups and ads and other trash… and AI just steps past all that shit and gets me the answer. I’m fine with that. All these sites are just scraping guides and such, I don’t mind if they get stepped on.

Like, tell me how to save both the Geth or the Quarians in Mass Effect 3. So if you don’t know, Mass Effect was this kinda mediocre space shooter with magic, and it was fun… but then you get to the second one and it remembers all your choices, and there’s so much to do. And the Geth were the bad guys in the first one. And if you’re playing a male character, the Quarian is this hot middle eastern type lady you can romance. Then you meet a nice Geth (it’s a long story and the game is longer) and by the third one, you do this mission, it’s a lot of bullshit, then this bullshit boss fight, then you get put on the spot. I’m not going to spoil it, but for like 99.99% of us, one of your crew dies. And it sucks. But if you made some very specific choices, going back to the very first one… you can save both. I’m talking like 100+ hours since you made a mistake that didn’t even matter then, but it locks you out of saving one of your people. Stuff like that.

[–] tyler@programming.dev 1 points 10 hours ago

You need to differentiate between generative AI, NLP, machine learning, etc. Your question is pretty much entirely pointless otherwise.

[–] DarkCloud@lemmy.world 0 points 11 hours ago* (last edited 10 hours ago)

Writing and fact-checking ONLY the most basic concepts and common information that is found multiple times and in multiple places online (i.e. it's strongly reinforced and verified in the training data and has been/will be the same for a long time).

Mass formatting, changing formats, changing language, and decoding via common methods.

Pitching "what you mean, but can't remember the name for".

...and that's about it.