But a domain expert like a doctor or an accountant is much more accurate
Actually, not so.
If the AI is trained on narrow data sets, then it beats humans. There are quite a few examples of this recently with different types of medical expertise.
Australia is a paradox in the renewables transition. It is already at 35% for renewable electricity, and has targeted 82% for 2030. Yet it's still a major exporter of coal. Australia exported $127.4 billion worth of coal in 2022-23, and its economy is highly dependent on mining of all types.
It doesn't have much homegrown manufacturing and is committed to eliminating tariffs on Chinese imports. This means that, of Western countries, it might be among the quickest to abandon ICE cars, as it will have access to all the super-cheap Chinese EVs. Especially as it's rolling out infrastructure like this.
I still see even the more advanced AIs make simple errors on facts all the time....
True. It's what keeps me optimistic. If we can get through the decay of the old world intact, there's a world of post-scarcity plenty ahead.
I find they are good for creative tasks. Picture and music generation, but also ideas - say, give me 10 possible character names for a devious butler in a 1930s murder mystery novel.
But yes, terrible for facts, even rudimentary ones. I get so many errors with this approach it's effectively useless.
However, I can see on narrower training data, say genetics, this might be less of a problem.
There are a few ways they say it may help; this one seems the main one.
We foresee a future in which LLMs serve as forward-looking generative models of the scientific literature. LLMs can be part of larger systems that assist researchers in determining the best experiment to conduct next. One key step towards achieving this vision is demonstrating that LLMs can identify likely results. For this reason, BrainBench involved a binary choice between two possible results. LLMs excelled at this task, which brings us closer to systems that are practically useful. In the future, rather than simply selecting the most likely result for a study, LLMs can generate a set of possible results and judge how likely each is. Scientists may interactively use these future systems to guide the design of their experiments.
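As a rough illustration of that binary-choice idea, here is a minimal Python sketch (my own, not from the paper) that scores two candidate result statements with an off-the-shelf causal language model and picks whichever the model assigns a lower per-token loss, i.e. finds more plausible. The model name and the two example sentences are placeholder assumptions, not the benchmark's actual setup.

```python
# Sketch of a BrainBench-style binary choice: which of two candidate
# results does a language model consider more likely?
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates much larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_log_likelihood(text: str) -> float:
    """Average log-probability per token the model assigns to the passage."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood per predicted token
    return -outputs.loss.item()

# Hypothetical pair of competing results for the same (imagined) study
result_a = "Stimulating the hippocampus improved recall in the treatment group."
result_b = "Stimulating the hippocampus impaired recall in the treatment group."

chosen = result_a if mean_log_likelihood(result_a) > mean_log_likelihood(result_b) else result_b
print("Model judges more likely:", chosen)
```

In essence this is a perplexity comparison between the two candidate outcomes; the real benchmark's materials and models differ, but the underlying principle of letting the LLM judge which result is more likely is the same.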
It’s only open source if the training data is, and it probably isn’t, is it?
I don't know, though DeepSeek talk of theirs being "fully" open-source.
Part of the advantage of doing this (apart from helping bleed your rivals dry) is to get the benefit of others working on your model. So it makes sense to maximise openness and access.
I lived in Hong Kong for a few years. It has superlative public transport, and the (human) taxis were reasonably priced. However, as it's so densely populated, I can only see cars getting so much traction. After a certain point the traffic jams are unavoidable.
I don't know the specifics. What seems more relevant to me is that lots of automakers around the world are getting to Level 4 by various, mostly similar ways.
Once you have Level 4 you have a viable robotaxi business model. Even if you stick to geo-fenced areas and mapped routes, that covers 80%+ of urban taxi journeys.
The same holds true for buses and public transit. I'm very interested to see how efforts like this Level 4 mini-shuttle bus in France progress.
When robotaxis & mass transit like these are common, how many people will still want private cars?
Good news for pigs. I'll be delighted to see factory farming disappear and be replaced by tech like this.
I think fediverse people are wildly overestimating how much 99% of Reddit users care about this. The mod team on r/futurology (I'm one of them) set up a fediverse site just over a month ago (here you go - https://futurology.today/ ) It's been modestly successful so far, but the vast majority of subscribers seem to be coming from elsewhere in the fediverse, not migrants from Reddit.
This is despite the fact we've permanently stickied a post to the top of the sub. r/futurology has over 19 million subscribers, and yet the fediverse is only attracting a tiny trickle of them. I doubt most people on Reddit even know what the word fediverse means.
Large language models surpass human experts in predicting neuroscience results
A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot.