[-] dgerard@awful.systems 52 points 5 months ago

Laurens Hof on Mastodon:

absolutely insane article:

  • the headline claims the models capable of reasoning are ready
  • first paragraph moves from 'ready' to 'on the brink'
  • 4th paragraph moves from 'on the brink' to 'hard at work, figuring it out'
  • 5th paragraph scales it down further: now the next model will only 'show progress towards reasoning'
  • halfway through LeCun admits that current models cannot reason at all

the journalistic malpractice here is honestly a parody of itself

[-] self@awful.systems 26 points 5 months ago

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.

wait, you mean the same models that supposed AI researchers were swearing had “glimmerings of intelligent reasoning” and “a complex world model” really were just outputting the most likely next word for a prompt? the current models are just fancy autocomplete but now that there’s a new product to sell, that one will be the real thing? and of course, the new models are getting pre-announced as revolutionary as interest in this horseshit in general takes a nosedive.

LeCun said [Meta] was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport.

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid? like, there’s apps that do this already. this shit was solved already by application of the least-terrible surviving algorithms from the first AI boom. what the fuck is the point of re-solving travel planning, but now incredibly expensive and you can’t trust the results?
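(the "solved already" part refers to classical graph search from the first AI boom. a minimal sketch of the idea, using Dijkstra's algorithm over a toy travel graph — the node names and hour costs here are made up for illustration, not from any real itinerary app:)

```python
# Minimal sketch: the kind of classical shortest-path search that
# already handles route planning. Dijkstra's algorithm via a priority
# queue; graph and costs are hypothetical.
import heapq

def cheapest_route(graph, start, goal):
    """Return (total_cost, path) for the cheapest start -> goal route."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
    return float("inf"), []  # no route found

# Toy graph: edge weights are travel hours (invented numbers)
travel = {
    "paris_office": {"CDG": 1},
    "CDG": {"JFK": 8},
    "JFK": {"nyc_office": 1},
}
```

no LLM required: the answer is deterministic, cheap, and you can actually trust it.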

[-] sailor_sega_saturn@awful.systems 19 points 5 months ago

Ah yes, "getting to the airport", one of the great unsolved challenges in computing.

[-] self@awful.systems 15 points 5 months ago

in order to solve the Traveling Salesman Problem, the first step is to use a machine model to confirm the user isn’t a salesman

[-] blakestacey@awful.systems 19 points 5 months ago

So, if anyone is keeping score, the promise of Artificial Intelligence has descended from "the computers on Star Trek" to "spicy ticket-booking".

[-] froztbyte@awful.systems 17 points 5 months ago* (last edited 5 months ago)

the thing that bothers me about that LeCun statement is that it's another of those not-even-wrong fuckers with an implicit assumption: that the problem is not that it doesn't have intelligence, just that the intelligence isn't very advanced yet - "oh yeah it just didn't think ahead! that's why foot in mouth! it's like your drunk friend at a party!"

which, y'know, is not the case. but they all fucking speak with that implicit foundation, as though the intelligence is proven fact instead of total suggestion (I wanted to say "conjecture", but that isn't the right word either)

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid?

it's also the pitch I keep seeing from a number of places, including that rabbit or whatever the fuck thing? and, frankly, can we not? these goddamn things can barely parse sentences and keep context, and someone wants to tell me that a model use is for it to plan my travel? with visas and flight times and transfers? nevermind all the extra implications of accounting for real-world issues (e.g. political sensitivity), preferences in sight-seeing, data privacy considerations (visiting friends)....

like it's just a gigantic fucking katamari ball of nope

[-] carlitoscohones@awful.systems 13 points 5 months ago

someone wants to tell me that a model use is for it to plan my travel?

I don't think any of these people have ever traveled. Honestly, I used to work for a company where the corporate travel people mostly lived in a small village in Germany, and their recommendations could be insane sometimes, but at least they knew what being a human was like.

[-] counteractor@pawoo.net 6 points 5 months ago

> suggestion

I’d go for “hopium”.

[-] dgerard@awful.systems 6 points 5 months ago* (last edited 5 months ago)

bro, bro. i'm not going to answer your question about the obvious and glaring problems, but here read these three preprints that are very exciting about the possibilities!!! no i can't just explain in my own words what they say. but if you cannot refute the mathematics (you can tell it's real maths because it's got squiggly symbols in it) then you must acquit

[-] self@awful.systems 4 points 5 months ago

if you cannot refute, you must not compute

[-] V0ldek@awful.systems 13 points 5 months ago* (last edited 5 months ago)

This is yet another example of the people calling the shots here being completely detached from the reality of an average person and bereft of imagination.

Surely the plebs would all want to have an underpaid secretary that plans your private jet trips for you so that you don't have to interact with anyone. It's the dream! I can't imagine a life without that, surely they need it too!

[-] dgerard@awful.systems 23 points 5 months ago

“We will be talking to these AI assistants all the time,” LeCun said. “Our entire digital diet will be mediated by AI systems.”

did u kno that there still exist people who take anything that Yann LeCun says seriously

[-] o7___o7@awful.systems 9 points 5 months ago* (last edited 5 months ago)

~~Big Yann~~ Mid Yann

[-] Immersive_Matthew@sh.itjust.works 3 points 5 months ago

Does he have a history of over promising and under delivering?

[-] self@awful.systems 13 points 5 months ago

that would be a key part of his job description, yes

[-] gerikson@awful.systems 15 points 5 months ago

Brad Lightcap, OpenAI’s chief operating officer

I'm sorry your COO has a pr0n/Futurama mashup name, OpenAI

[-] mountainriver@awful.systems 13 points 5 months ago

Sounds like something autocomplete would make up. Are we sure that is a real person this time?

[-] dgerard@awful.systems 12 points 5 months ago

the AI that was rejected for the job of basilisk

[-] V0ldek@awful.systems 10 points 5 months ago

Brad Lightcap, the decidedly less successful brother of Buzz Lightyear.

[-] froztbyte@awful.systems 7 points 5 months ago

Best known for their work as a VR headset model

[-] deadbeef79000@lemmy.nz 14 points 5 months ago

Genuine People Personalities? Sounds ghastly.

It is.

[-] Soyweiser@awful.systems 13 points 5 months ago

Ow god, I let my NiceGrandFatherAIBot loose on facebook, it now called me a sheeple, a cuck and a pedophile for telling him Biden is the president.

this post was submitted on 10 Apr 2024
40 points (100.0% liked)

TechTakes

1267 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago