dumb lighter for idiots
Meaning a tool idiots can use to set the dumbs on fire?
Clearly a duck-billed platypus.
Of course not. What kind of man would want to see his family burned with him in a fiery death trap?
Science isn't about WHY. It's about WHY NOT. Why is so much of our science dangerous? Why not marry safe science if you love it so much? In fact, why not invent a special safety door that won't hit you on the butt on the way out, because you are fired.
Housing? As in - for people? Where are the data centers supposed to be then? Ever think about that? No. You only think about yourself.
I'm sure they can work out a mouth next
Considering how these companies are losing money because they subsidize these tokens - I doubt that cost is really absorbed.
The proper response to dystopian prophecies is not "challenge accepted"!
$20k is what it would cost you or me, but it’s just free for them.
No it isn't. This is not regular software, where the bulk of the price is the licensing. With slop-as-a-service, the bulk of the price is the data center operating cost - which Anthropic is certainly not getting for free.
Around the turn of the decade there was this big movement to rename the master branch to main. GitHub made that switch too: when you create a new repository now, the default branch is called main. The original flowchart was from 2010, when the default branch was still called master, which is why it's called master there.
The AI generated flowchart, of course, is not a plagiarism machine and it's exactly like a human being that is merely inspired by the source material. Surely it used the up-to-date name for that branch?
99% pass rate? Maybe that’s super impressive because it’s a stress test, but if 1% of my code fails to compile I think I’d be in deep shit.
Also - one of the main arguments of vibe coding advocates is that you just need to check the result a few times and tell the AI assistant what needs fixing. Isn't a compiler test suite ideal for such a workflow? Why couldn't they just feed the test failures back to the model and tell it to fix them, iterating again and again until they get it to 100%?
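The loop being described is simple enough to sketch. A rough, hypothetical version in Python - `run_tests` and `ask_model_to_fix` are stand-ins here (a real setup would shell out to the compiler's test suite and call an assistant's API with the failure output), not any real tool's interface:

```python
# Sketch of the "feed failures back until green" workflow.
# Both helpers below are toy stand-ins so the loop itself is runnable.

def run_tests(code):
    """Pretend test suite: reports a failure while the code still has a bug."""
    return ["test_parse: unexpected token"] if "bug" in code else []

def ask_model_to_fix(code, failures):
    """Pretend assistant: rewrites the code in response to the failure report."""
    return code.replace("bug", "fix")

def iterate_until_green(code, max_rounds=10):
    for round_no in range(max_rounds):
        failures = run_tests(code)
        if not failures:
            return code, round_no  # 100% pass rate reached
        code = ask_model_to_fix(code, failures)
    return code, max_rounds  # gave up; still failing

fixed, rounds = iterate_until_green("def parse(): bug()")
```

The point stands regardless of the stubs: a test suite gives exactly the kind of unambiguous, machine-readable feedback the loop needs, so stopping at 99% is a choice, not a limitation.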
I can bearly wait to can some bears