AI could wipe out most white-collar jobs within 12 months, Microsoft AI chief warns
(www.techspot.com)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.
That said, I’m yet to identify any use of LLMs today which clears my technical and ethical barriers to justify their use.
My experience to date is that the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded respect and deference equivalent to anyone who adheres to a faith I don’t share.
I mean, I think one legitimate use is sifting through massive tranches of information and pulling out every mention of a subject. Like if you have the Epstein files, or whatever isn't redacted in the half of the pages they actually released, and you want to pull out all mentions of, say, the boss of the company that ultimately owns the company you work for, or the president.
ProPublica uses it for something of that sort, anyway; they explained how they used it to sift through a tranche of documents in an article I read a couple of years ago. That seemed like a rare case where this technology could actually be useful.
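For what it's worth, here's a rough sketch of that kind of sifting. The model name, prompt, and OpenAI-compatible client setup are all my own assumptions, not whatever ProPublica actually ran:

```python
# Rough sketch: scan a pile of text files for mentions of a subject with an LLM.
# Assumptions: an OpenAI-compatible API key is configured in the environment,
# the model name is a placeholder, and a human still checks every hit.
from pathlib import Path
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
SUBJECT = "Jane Doe"       # hypothetical subject of interest

for page in sorted(Path("tranche/").glob("*.txt")):
    text = page.read_text(errors="ignore")[:12000]   # crude per-page size cap
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[
            {"role": "system",
             "content": "Quote verbatim every passage mentioning the subject. "
                        "If there are none, answer exactly: NONE."},
            {"role": "user", "content": f"Subject: {SUBJECT}\n\n{text}"},
        ],
        temperature=0,
    )
    answer = reply.choices[0].message.content.strip()
    if answer != "NONE":
        print(f"--- {page.name} ---\n{answer}\n")
```

Asking for verbatim quotes is deliberate: it lets you check each hit against the original page, because you can't take the model's word for any of it.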
What do you think about these:
LLMs can’t perform any of those functions, and the output from tools that are infected with them and claim to can only ever be intrinsically imprecise, and should never be trusted.
Translation isn't as easy as taking a word and replacing it with a word from another language that has the same definition. Yes, a technical document or something similar can be translated more or less word for word. But jokes, songs, and a lot of other things differ from culture to culture. Sometimes an author chooses a specific word in a certain language, rooted in a certain culture, that can be interpreted in multiple ways to reveal a hidden meaning to readers.
And sometimes, to convey the same emotion to a reader from a different language and culture, we need to change the text heavily.
OCR isn’t a large language model. That’s why, with poor-quality scans or damaged text, you sometimes get garbled nonsense from it. It’s not determining the statistically most likely next word; it’s matching the input against possible individual characters.
I mean using LLMs for OCR, like Gemini 3 Flash or Kimi K2.5.
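Roughly, that looks like sending the scan to a vision-capable model and asking for a transcription. This is only a sketch: it assumes an OpenRouter key, and the model slug is a placeholder (the exact slugs for Gemini 3 Flash or Kimi K2.5 may differ):

```python
# Rough sketch: ask a vision-capable model to transcribe a scanned page.
# Assumptions: OPENROUTER_API_KEY is set and the model slug is a placeholder.
# Unlike classical OCR, the model can silently "fix" or invent characters.
import base64, os, requests

with open("scan.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-flash-1.5",   # placeholder model slug
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this page exactly. Mark unreadable spots as [?]."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```

The trade-off versus classical OCR is exactly the one above: instead of garbled nonsense you can tell is garbled, you can get plausible-looking text that was never on the page.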
Not OP. I wouldn't call myself tech-savvy, but suggesting categorization of files on my computer sounds kinda nice. I just can't trust these clowns to keep all my data local.
There are some providers with Zero Data Retention; you can check on OpenRouter.
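If you go that route, a rough sketch of pinning a request to no-retention providers looks like this. I'm assuming OpenRouter's provider-routing preferences (the "provider" object with "data_collection": "deny") still work the way their routing docs describe, so double-check before relying on it:

```python
# Rough sketch: ask OpenRouter to route only to providers that don't retain prompts.
# Assumption: the "provider" preferences object and its "data_collection": "deny"
# field behave as described in OpenRouter's provider-routing docs; verify first.
import os, requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-small",            # placeholder model slug
        "provider": {"data_collection": "deny"},       # skip providers that store prompts
        "messages": [{"role": "user",
                      "content": "Suggest a folder name for: tax_2023.pdf"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

That only restricts which upstream provider sees the request, though; the file names still leave your machine, so it doesn't really answer the "keep all my data local" concern. Truly local means running a model on your own hardware.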