Disapproving of automated plagiarism is classist ableism, actually: Nanowrimo
(nanowrimo.zendesk.com)
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Doesn't even mention the one use case I have a moderate amount of respect for: automatically generating image descriptions for blind people.
And even those should always be labeled, since AI is categorically inferior to intentional communication.
They seem focused on the use case "I don't have the ability to communicate with intention, but I want to pretend I do."
They added those at my work and they are terrible. A picture of the company CEO standing in front of a screen with the text on it announcing a major milestone? "man in front of a screen" Could get more information from the image filename.
AI and ML (and I'm not talking about LLM, but more about those techniques in general) have many actual uses, often when the need is "you have to make a decision quickly, and there's a high tolerance for errors or imprecision".
Your example illustrates this perfectly: it's not as good as a human-written caption, it can lack context, or be outright wrong. But it's better than the alternative of having nothing.
I don't accept that a wrong caption is better than no caption at all. And I'm concerned that when you say "high tolerance for error", what you really mean is that you think the task is unimportant.
No, what I'm saying is that if I had vision issues and had to use a screen reader to use my computer, and had to choose between no description at all and an imperfect automated one,
I'd take the latter. Obviously the true solution would be to make sure everyone thinks about accessibility, but come on... even here that's not always the case, and the fediverse is the place where I've seen the most focus on accessibility.
Another domain I'd see is preprocessing (a human will do the actual work) to make some tasks a bit easier or quicker and less repetitive.
I’d take nothing over trillions of dollars dedicated to igniting the atmosphere for an incorrectly captioned video
Oh yeah, I'm not arguing with you on that. AI has become synonymous with LLMs and with building the most generic models possible, which means siphoning (well, stealing, actually) stupid amounts of data and wasting a quantity of energy second only to cryptocurrencies.
Simpler models that are specialized in one domain instead do not cost as much, and are more reliable. Hell, spam filters have been partially based on some ML for years.
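To make that concrete: a classic ML spam filter is little more than word counts and logarithms. Here's a rough sketch of a naive Bayes classifier (the toy training phrases and class labels are made up for illustration, not from any real filter):

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Toy naive Bayes spam filter: per-class word counts plus log-probabilities."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.class_counts = Counter()

    def train(self, text, label):
        self.class_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def classify(self, text):
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # Start from the log prior P(class).
            score = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            total = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Laplace smoothing (+1) so unseen words don't zero out the score.
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

nb = NaiveBayes()
nb.train("win free money now", "spam")
nb.train("claim your free prize", "spam")
nb.train("meeting notes attached", "ham")
nb.train("lunch tomorrow with the team", "ham")

print(nb.classify("free money prize"))        # prints "spam"
print(nb.classify("notes from the meeting"))  # prints "ham"
```

No GPUs, no scraped internet, and it trains in microseconds — which is exactly the contrast with the generic-model approach.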
But all of that is irrelevant at the moment, because AI/ML is not treated as one possible solution among others that aren't based on ML. Currently it's something that must be pushed as hard as possible because it's a bubble that attracts investors, and I'm so looking forward to it bursting.