15-20 thousand dollars per semester??? Only in the good ol' U.S. of A...
Given the stochastic nature of LLMs and the pseudo-Darwinian nature of their training process, I sometimes wonder if geneticists wouldn't be better suited to interpreting LLM output than programmers.
For what it's worth, fedia.io does not federate with lemmy.today: https://fedia.io/federation
The only way to approach "talking with everyone" on the fediverse is to host your own instance - and even then you'll probably need to defederate ASAP from any instances that send you illegal material (as in child sexual abuse material).
A little jab at the left for not mobilizing for the working-class neighborhoods - nice to hear it outside anti-imperialist circles (though I wasn't expecting it!).
It is, but maybe they mean they want no limit whatsoever on post length.
which, well, if your instance starts sending out megabyte-sized text posts I don't expect it to stay federated with many others for very long.
I see, thanks for the correction.
There used to be this website, but the url just loads up a scam site now (I've created this issue on the project's tracker if anyone has additional info to contribute).
I don't know how technical you are, @VieuxQueb@lemmy.ca , but you could try running the "defed-investigator" project locally.
lemmy.ml, no, but I'm fairly certain that lemmygrad.ml has been defederated from lemmy.world at least, if not others.
https://iceberg.mit.edu/report.pdf "We simulated 131 million human beings using LLMs and found 11% of jobs could be done by AI instead of humans" I can't tell what's real with LLMs anymore. I wonder if that's the point.
I'll be honest, that "Iceberg Index" study doesn't convince me just yet. It's entirely built off of using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of such an approach are in paywalled journals I can't access. I also can't figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available "tools" (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor Statistics' 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide which skills it covers, but they claim they manually reviewed this LLM's output, so I guess that counts.
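As far as I can tell, their tool-to-skill mapping step boils down to something like this (my own guess at the shape of it - the model call is stubbed out, and the skill list and tool description are invented examples, not their actual data):

```python
# Rough sketch of the tool-to-skill mapping step as I understand it.
# ask_llm is a stub; SKILLS and the tool description are invented examples.
SKILLS = ["data entry", "scheduling", "copywriting"]  # not the real 923 skills

def ask_llm(prompt: str) -> str:
    # Placeholder for the real model call they describe.
    return "data entry, scheduling"

def skills_covered_by(tool_description: str) -> set[str]:
    prompt = (
        f"Which of these skills does the following tool cover? {SKILLS}\n"
        f"Tool: {tool_description}\n"
        "Answer with a comma-separated list."
    )
    answer = ask_llm(prompt)
    # Keep only answers actually in the skill list - their "manual review"
    # step would presumably catch hallucinated skills, like this filter does.
    return {s.strip() for s in answer.split(",") if s.strip() in SKILLS}

covered = skills_covered_by("A Zapier workflow that moves form entries into a spreadsheet")
```

Even with the filtering, the whole mapping hinges on trusting the LLM's judgment of what a tool can do.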
> Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools

from https://iceberg.mit.edu/report.pdf
Large Population Models is https://arxiv.org/abs/2507.09901, which mostly references https://github.com/AgentTorch/AgentTorch, which gives the following as an example of use:
```python
# Using Langchain to build LLM Agents
agent_profile = "You are a person living in NYC. Given some info about you and your surroundings, decide your willingness to work. Give answer as a single number between 0 and 1, only."

user_prompt_template = "Your age is {age} {gender},{unemployment_rate} the number of COVID cases is {covid_cases}."
```
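For context, the agent loop that example implies presumably looks something like this (a hand-rolled sketch, not AgentTorch's actual code - `fake_llm` stands in for the real model call, and the clamping is my own addition):

```python
# Hand-rolled sketch of the simulation loop implied by the example above.
# fake_llm is a stub standing in for the actual LLM call.
agent_profile = (
    "You are a person living in NYC. Given some info about you and your "
    "surroundings, decide your willingness to work. Give answer as a single "
    "number between 0 and 1, only."
)
user_prompt_template = (
    "Your age is {age} {gender},{unemployment_rate} the number of COVID "
    "cases is {covid_cases}."
)

def fake_llm(system: str, prompt: str) -> str:
    # Placeholder: a real run would query an actual model here.
    return "0.7"

def simulate_agent(age, gender, unemployment_rate, covid_cases) -> float:
    prompt = user_prompt_template.format(
        age=age,
        gender=gender,
        unemployment_rate=unemployment_rate,
        covid_cases=covid_cases,
    )
    answer = fake_llm(agent_profile, prompt)
    return max(0.0, min(1.0, float(answer)))  # clamp to [0, 1]

willingness = simulate_agent(34, "female", 0.05, 1200)
```

Multiply that by 151 million agents and you get their "Large Population Model" - which is why the whole result stands or falls on whether an LLM's "0.7" means anything about a real person.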
The whole thing perfectly straddles the line between bleeding-edge research and junk science for someone like me who hasn't been near academia in 7 years. Most of the procedure looks like they know what they're doing, but if the entire thing is built on a faulty premise then there's no guaranteeing any of their results.
In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn't necessarily a case of MIT as a whole changing its tune.
(The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)
I was using "non-satire community" descriptively, not prescriptively. As in, it's not in the rules, but it appears to be the custom going by the other posts, which is why I dismissed the possibility as I was reading the article itself. Not to mention I wasn't expecting satire of anti-AI sentiment in /c/fuckai.
Sorry if it was unclear.
Can someone explain to me how this is not my president saying "buy our stuff please we were irresponsible and made too much and now it will bankrupt us"?
Nobody deserves customers. Of course, it's neither the technocrats nor the wealthy who will be paying for this bankruptcy, but us lowly citizens.