this post was submitted on 14 Jan 2025
88 points (95.8% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there's hype. There are ethical concerns but we'll ignore ethics for the question.

In creative works like writing or art, it feels soulless and poor quality. In programming at best it's a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.

So what's the point of it all?

(page 2) 50 comments
[–] weeeeum@lemmy.world 3 points 5 months ago

I think LLMs could be great if they were used for education and learning, and trained on good data. The Encyclopedia Britannica is building an AI trained exclusively on its own data.

It also leaves room for writers to keep adding to the database, giving the AI broader knowledge, so people keep their jobs.

[–] thepreciousboar@lemm.ee 3 points 5 months ago

I know they are being used for, and are decently good at, extracting a single piece of information from a big document (like a datasheet). Since you can easily confirm the information is correct, it's quite a nice use case.

[–] arken@lemmy.world 3 points 5 months ago (1 children)

There are some great use cases, for instance transcribing handwritten records and making them searchable is really exciting to me personally. They can also be a great tool if you learn to work with them (perhaps most importantly, know when not to use them - which in my line of work is most of the time).

That being said, none of these cases, or any of the cases in this thread, is going to return the large amounts of money now being invested in AI.

[–] Xavienth@lemmygrad.ml 1 points 5 months ago (1 children)

Generative AI is actually really bad at transcription. It imagines dialogues that never happened. There was some institution (a hospital, I think?) that said every transcription had at least one major error like that.

[–] octochamp@lemmy.ml 3 points 5 months ago

This is an issue if it's unsupervised, but the transcription models are good enough now that with oversight they're usually useful: checking and correcting the AI-generated transcription is almost always quicker than transcribing entirely by hand.

If we approach tasks like these assuming that they are error-prone regardless of whether they are done by human or machine, and will always need some oversight and verification, then AI tools can be very helpful in very non-miraculous ways. I think it was Jason Koebler who said in a recent 404 Media podcast that at Vice he used to transcribe every word of every interview he did as a journalist; now he transcribes everything with AI and has saved hundreds of work hours doing so, but he still manually checks every transcript to verify it.
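
That spot-checking step can even be partially automated. A minimal word-error-rate sketch (a hypothetical illustration, not any specific tool's method) for comparing an AI transcript against a human-corrected reference:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the quick brown fox", "the quick brown fox"))  # 0.0
print(word_error_rate("the quick brown fox", "the quick red fox"))    # 0.25
```

A high rate on a sample flags which transcripts need the full manual pass.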

[–] happydoors@lemm.ee 3 points 5 months ago

I use it in a lot of tiny ways for photo-editing, Adobe has a lot of integration and 70% of it is junk right now but things like increasing sharpness, cleaning noise, and heal-brush are great with AI generation now.

[–] sunzu2@thebrainbin.org 3 points 5 months ago (1 children)
[–] corsicanguppy@lemmy.ca 2 points 5 months ago* (last edited 5 months ago)

Ha! I use it to write Ansible.

In my case, YAML is a tool of Satan and Ansible is its 2001-era minion of stupid, so when I need to write Ansible I let the robots do that for me and save my sanity.

I understand that will make me less likely to ever learn Ansible, if I use a bot to write the 'code' for me; and I consider that to be another benefit as I don't need to develop a pot habit later, in the hopes of killing the brain cells that record my memory of learning Ansible.

[–] GaMEChld@lemmy.world 3 points 5 months ago

I like using it to help get the ball rolling on stuff and organize my thoughts. Then I do the finer tweaking on my own. Basically I use a sliding scale: the longer it takes to refine an AI output for smaller and smaller improvements, the sooner I switch to manual.

[–] passiveaggressivesonar@lemmy.world 2 points 5 months ago (1 children)
[–] dQw4w9WgXcQ@lemm.ee 3 points 5 months ago

Absolutely this. I've found AI to be a great tool for nitty-gritty questions about some development framework. When googling/duckduckgo'ing, your query needs to match the documentation's wording pretty closely to find something specific. AI seems to be much better at "understanding" the content and is able to match it to the documentation pretty reliably.

For example, I was reading docs up and down on ElasticSearch's website trying to find all possible values for the status field within an aggregated request. Google only led me to general documentation without the specifics. However, a quick loosely worded question to ChatGPT handed me the correct answer, as well as a link to the exact spot in the docs where this was specified.
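
For what it's worth, once you know the field exists, Elasticsearch can also enumerate the values actually in use via a terms aggregation. A hedged sketch of such a request body (the field name "status" and index layout here are assumptions for illustration):

```python
import json

# Terms aggregation: list the distinct values of a field across an index.
search_body = {
    "size": 0,  # skip the document hits, return aggregations only
    "aggs": {
        "status_values": {
            "terms": {"field": "status", "size": 20}
        }
    },
}
print(json.dumps(search_body, indent=2))
```

POSTed to an index's `_search` endpoint, this returns each distinct status value with its document count, which is a quick cross-check of what the docs claim.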

[–] SplashJackson@lemmy.ca 2 points 5 months ago* (last edited 5 months ago) (1 children)

I wish I could have an AI in my head that would do all the talking for me because socializing is so exhausting

[–] tetris11@lemmy.ml 2 points 5 months ago* (last edited 5 months ago) (1 children)

Other people would then have AIs in their heads to deal with the responses.

A perfect world, where nothing is actually being said, but goddamn do we sound smart saying it

[–] graymess@hexbear.net 2 points 5 months ago

I recently had to digitize dozens of photos from family scrapbooks, many of which had annoying novelty pattern borders cut out of the edges. Sure, I could have just cropped the photos more to hide the stupid zigzagged missing portions. But I had the beta version of Photoshop installed with the generative fill function, so I tried it. Half the time it was garbage, but the other half it filled in a bit of grass or sky convincingly enough that you couldn't tell the photo was damaged. +1 acceptable use case for generative AI, I guess.

Just today I needed a PDF with filler English text, not lorem ipsum. ChatGPT was perfect for that. Other times when I'm writing something I use it to check grammar. It's way better at it than Grammarly imo, and faster, and it makes the decisions for me, BUT PROOF-READ IT: if you really fuck the tenses up it won't know how to correct them, it'll make things up. Besides these: text manipulation. I could learn vim, or write a script, or I could just paste the text, type "remove the special characters", hit enter: done.
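
The manual equivalent of that "remove the special characters" request is a one-liner. A minimal Python sketch (the sample text is made up for illustration):

```python
import re

text = "remove (these) #special* characters!"
# Keep word characters and whitespace; drop everything else.
cleaned = re.sub(r"[^\w\s]", "", text)
print(cleaned)  # remove these special characters
```

The trade-off the commenter describes is real: the regex is deterministic and free, while the chatbot saves you remembering the syntax.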

I use Perplexity for syntax. I don't code with it, but it's the perfect one-stop shop for "how does this work in this lang again" while coding. For advanced/new/unpopular APIs it's back to the old-school docs, but you can try giving it the link so it parses them for you; it's usually wonky tho.

[–] sgtlion@hexbear.net 2 points 5 months ago* (last edited 5 months ago)

Programming quick scripts and replacement for Google/Wikipedia more than anything. I chat to it on an app to ask about various facts or info I wanted to know. And it usually gets in depth pretty quickly.

Also cooking. I've basically given up on recipe sites, except for niche, specific things. AI gets stuff relatively right and quickly adjusts if I need substitutions. (And again, hands free for my sticky flour fingers).

And ideation. Whether I'm coming up with names, or a specific word, or clothes, or a joke, I can ask AI for 50 examples and I can usually piece together a result I like from a couple of those.

Finally, I'll admit I use it as a sounding board to think through topics, when a real human who can empathise would absolutely be better. Sadly, the way modern life is, one isn't always available. It's a small step up from ELIZA.

The key is that AI is part of the process. Just as I would never say "trust the first Google result with your life", because its some internet rando who might say anything, so too should you not let AI have the final word. I frequently question or correct it, but it still helps the journey.

[–] GuyFi 2 points 5 months ago

I have personally found it fantastic as a programming aid, and as a writing aid for song lyrics. The art it creates lacks soul and any sense of being actually good, but it's great as an "oh, I could do this cool thing" inspiration machine.

[–] Tartas1995@discuss.tchncs.de 2 points 5 months ago

I hate questions like this due to one major issue.

A generative AI with "error-free" output is useful in a very different way than one that isn't.

Imagine an AI that would answer any question objectively and without bias. Would that threaten jobs? Yeah. Would it be a huge improvement for humankind? Yeah.

Now imagine the same AI with a 10% BS rate: how would you trust anything from it?

Currently generative AI is very, very flawed. That is what we can evaluate, and it is obvious. It is mostly useless, as it produces mostly slop and consumes far more energy and water than you would expect.

A "better" one would be useful in a different way, but just like killing half of the world's population would help against climate change, the cost of getting there might not be what we want it to be, and it might not be worth it.

Current market practice, cost, and results lead me to say it is effectively useless and probably a net negative for humankind. There is no legitimate usage, as any usage legitimizes the market practice and cost given the results.

[–] CanadaPlus 2 points 5 months ago* (last edited 5 months ago) (9 children)

In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

I'd actually challenge both of these. The property of "soullessness" is very subjective, and AI art has won blind competitions. On programming, it has empirically made developers faster by half again, even with the intrinsic requirement for debugging.

It's good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn't impact anyone's bottom line directly.

[–] mindbleach@sh.itjust.works 1 points 5 months ago

Video generators are going to eat Hollywood alive. A desktop computer can render anything, just by feeding in a rough sketch and describing what it's supposed to be. The input could be some kind of animatic, or yourself and a friend in dollar-store costumes, or literal white noise. And it'll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it's more [thing] or less [thing], it can make every frame more [thing].

Boring people will use this to churn out boring fluff. Do you remember Terragen? It's landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went 'look what I made!' and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.

Creative people will use this to film their pet projects without actors or sets or budgets or anyone else's permission. It'll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. The results are massively better from even clumsy effort to do things the hard way. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don't look like your description.

The idiots in LA think they're gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They've got drawers full of scripts they couldn't hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It's gonna get wild.

And this'll be great for actors, in ways they don't know yet.

Audio tools mean every voice actor can be a Billy West. You don't need to sound like anything, for your performance to be mapped to some character. Pointedly not: "mapped to some actor." Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won't know the difference. At no point will it be "licensing Nolan North's voice." You might have no idea what he sounds like. He just does a very convincing... everybody.

Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it'll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you'll see is somewhere between motion-capture and rotoscoping. It's CGI... ish. And it thinks perfect photorealism is just another artistic style.

[–] boredtortoise@lemm.ee 1 points 5 months ago

Documentation work, synthesis, sentiment analysis

[–] dingus@lemmy.world 1 points 5 months ago

Never used it until recently. Now I use it to vent because I'm a crazy person.

[–] Jolteon@lemmy.zip 1 points 5 months ago

Making dynamic templates.

[–] ReCursing@lemmings.world 1 points 5 months ago

Art. It's a new medium, get over it.
