I’m sure both people who use Bing are furious
SaraTonin
The fact that this is even a real headline should trouble everybody. I know the answer, but the question still stands: “What kind of a state is the USA in where someone even feels the need to write this, rather than it being an assumed norm?”
Three points. Firstly, in the 1950s, CEOs earned around 20 times what the lowest-paid employee did (including things like bonuses, shares, etc.). Now the average is around 400 times, and it can be as high as 2,000.
Secondly, in the US in the 1950s the highest tax band was 91%. Today it’s 37%.
Both these things are perfectly sustainable, of course. And that’s all working under the false premise that there aren’t numerous tax loopholes available to the rich but not the poor.
Thirdly, there’s a tonne of research into what best stimulates economies, but it’s often dismissed because it doesn’t favour the rich. If you give money to the poor, they will spend it in their local communities. Then that money gets spent again, and again, and again, getting taxed each time. IIRC, for every dollar given to someone poor the government itself gets something like a dollar fifty back. Because the money just keeps circulating.
Give money to the rich, though, and what happens? They hoard it, or they spend it abroad. It drains money from the country, either by taking it out of circulation, or by taking it out of the country entirely.
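The circulation argument above is a geometric series, and can be sketched numerically. The spend and tax rates below are illustrative assumptions, not figures from the comment, and this simple model only captures tax recaptured on each transaction (it won’t reproduce the “dollar fifty” figure, which presumably comes from broader fiscal-multiplier studies):

```python
def circulate(initial=1.00, spend_rate=0.8, tax_rate=0.2, rounds=100):
    """Follow a dollar as it circulates through a local economy.

    Each round, the current holder re-spends a fraction (spend_rate)
    locally, and each transaction is taxed at tax_rate. Returns the
    total economic activity generated and the total tax recovered.
    Both rates are hypothetical, for illustration only.
    """
    total_spent = 0.0
    total_tax = 0.0
    money = initial
    for _ in range(rounds):
        spent = money * spend_rate   # portion re-spent in the local economy
        tax = spent * tax_rate       # taxed each time it changes hands
        total_spent += spent
        total_tax += tax
        money = spent - tax          # what the next recipient has to spend
    return total_spent, total_tax

spent, tax = circulate()
# With these assumed rates, one dollar generates ~$2.22 of local
# spending and returns ~$0.44 in tax as it circulates.
```

The point the sketch illustrates is the mechanism, not the exact figures: the more of each dollar that gets re-spent locally (and the less that leaks out via hoarding or spending abroad), the larger both totals get.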
I’m not sure we’re disagreeing very much, really.
My main point WRT “kinda” is that there are a tonne of applications that 99% isn’t good enough for.
For example, one use that all the big players in the phone world seem to be pushing ATM is that of sorting your emails for you. If you rely on that and it classifies an important email as unimportant so you miss it, then that’s actually a useless feature. Either you check all your emails manually yourself, in which case it’s quicker to just do that in the first place and the AI adds no value, or you rely on it and end up missing something it was important you didn’t miss.
And it doesn’t matter if it gets it wrong one time in a hundred, that one time is enough to completely negate all potential positives of the feature.
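The “99% isn’t good enough” point is just expected-value arithmetic: at a 1% error rate, the chance of getting through a realistic volume of email with zero misclassifications falls off fast. A quick sketch (the email volume is a made-up assumption, and errors are treated as independent):

```python
def p_no_misses(n_emails, accuracy=0.99):
    """Probability that every one of n_emails is classified correctly,
    assuming independent errors at a fixed per-email accuracy."""
    return accuracy ** n_emails

# At an assumed 50 emails a day, a week of flawless sorting
# is already unlikely:
weekly = p_no_misses(50 * 7)  # ~= 0.03, i.e. a ~3% chance of a clean week
```

So even at 99% accuracy, missing something important isn’t a tail risk; over any sustained use it’s close to a certainty.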
As you say, 100% isn’t really possible.
I think where it’s useful is for things like analysing medical data and helping coders who know what they’re doing with their work. In terms of search it’s also good at “what’s the name of that thing that’s kinda like this?”-type queries. That’s kind of the opposite of traditional search engines, where you’re trying to find information about a specific thing, and where I think non-Google engines are still better.
I’m not saying they don’t have applications. But the idea of them being a one size fits all solution to everything is something being sold to VC investors and shareholders.
As you say - the issue is accuracy. And, as you also say - that’s not what these things do, and instead they make predictions about what comes next and present that confidently. Hallucinations aren’t errors, they’re what they were built to do.
If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
Maybe along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.
Also, what’s the betting that they were very interested in “debates” before the negative consequences affected them directly?
I’m in the middle of moving, but once I’m set up I’m going to look into dual booting. I’m not sure I’ll 100% be able to get rid of Windows, though. For a start, I’ve heard NVIDIA is a nightmare on Linux, and I’ve only recently got a new computer so I don’t really want to buy more hardware.
Hopefully dual booting will allow me to experiment and try alternatives for software which doesn’t have a Linux version, and I hear that one of the things chatbots are actually good at is diagnosing and fixing Linux issues. So I’m hopeful, but I’m not assuming it’ll be entirely painless.
If you follow AI news you should know that it’s basically out of training data, that returns from extra training diminish exponentially (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than before, or than other LLMs, at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4–5 times the energy (and therefore cost) per answer, for a marginal improvement in functionality.
The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.
And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.
I’ve never found them to be more performant, and I can’t understand the logic of why a programme running inside another programme would be more performant, except in comparison to unoptimised alternatives.
I’ve never used a web app that I thought was better than a local app. But I definitely understand why developers prefer them.
It’s definitely been the direction of travel for the last several years. Not because the products are better, but because it’s easier to develop for just the browser than for Mac, Windows, and Linux.
It doesn’t seem like you’re really replying to what I wrote.