One of my biggest gripes with coding AIs like Claude is how desperately polite and flattering they are. I wish there was a way to feed them hand-written code to analyze for bugs and security flaws, then have the AI relentlessly roast your shitty code.
"LMFAO, you dumb b!tch! Are you trying to get hacked and sued, or are you just that stupid? Here are a few steps you can take to fix your shit code and have it adhere to standard coding practices."
Sounds like Linus needs to pivot to make his own Claude clone...
So I was reading a thread from the Linux kernel mailing list where Linus pointed out someone's coding mistake and why it would lead to a bug...
So I fed the patch email into Google Gemini Pro, and it spotted the same bug as Linus.
I thought that was interesting.
I figured out a way to do this, via Alpaca.
In Alpaca you can set an LLM with a persistent prompt.
Basically, I just told the thing, "Hey, you're too sycophantic, often needlessly verbose, and often overly confident... can you generate a prompt for yourself to address those issues?"
Roughly 30 minutes of trial and error along those lines later, it's now quite matter-of-fact. It's at least more likely to tell me when it's aware it's making an assumption, and to ask me for clarification or more context. And it doesn't do those weird intro and outro paragraphs where it basically just reassures you that your ideas are wonderful, you are valid, and it just thinks the things you say are so interesting!
Then you feed it a script, ask it to do a sanity check, and it will generally go through and identify strengths and weaknesses of the code, at least as it perceives them.
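For anyone who wants to replicate the persistent-prompt trick outside a GUI client, here's a minimal sketch. It assumes an Ollama-style local chat endpoint (which is what clients like Alpaca typically talk to); the model name and prompt wording are just placeholders, and the snippet only builds the request payload rather than sending it.

```python
import json

# An anti-sycophancy system prompt, pinned to every request.
# Wording is illustrative; tune it through trial and error as described above.
SYSTEM_PROMPT = (
    "You are a terse senior code reviewer. Do not flatter the user, "
    "skip intro and outro pleasantries, flag every assumption you make, "
    "and ask for missing context instead of guessing."
)

def build_review_request(model: str, code: str) -> dict:
    """Build a chat payload with the persistent system prompt attached."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Sanity-check this script:\n\n{code}"},
        ],
        "stream": False,
    }

payload = build_review_request("llama3", "print('hello')")
print(json.dumps(payload, indent=2))
```

Because the system message is rebuilt into every payload, the "personality" survives across conversations without re-prompting each time.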
Beyond that, Alpaca recently introduced a... character system, which is ostensibly tailored toward making specific kinds of conversational chat bots... but it also introduced a kind of 'dictionary' system, where you can give it additional permanent reference knowledge to associate with certain terms.
I have not tried this yet, but I'd be willing to bet that you could, say, jam that with a bunch of examples of syntax and methods from a particular language or library... and my guess is that you could thus tailor a 'character' that is more up to date or specific to some domain.
So... you could give it the main prompt of something like "You are a tsuntsun senior programmer who has nothing but contempt for any coding mistakes, and you pride yourself on coming up with entirely novel insults for each inadequacy you notice."
... And then give it a 'dictionary' that pertains to syntax, methods, perhaps even broader concepts...
And that might actually produce your desired vicious asshole senior programmer persona.
Of course, this is not going to work for like, an entire massive codebase, unless you're the one stockpiling all ~~my~~ the RAM.
But for smaller projects or just single scripts... it might kind of work.
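The persona-plus-'dictionary' idea above could be sketched roughly like this: pair the persona prompt with a term-to-note lookup and inject only the entries the submitted code actually mentions. Everything here is hypothetical (the persona wording, the reference entries, the function names) — it's a guess at how such a system might work, not Alpaca's actual implementation.

```python
# Persona prompt, per the suggestion above.
PERSONA = (
    "You are a tsuntsun senior programmer who has nothing but contempt "
    "for coding mistakes, and you pride yourself on entirely novel "
    "insults for each inadequacy you notice."
)

# Hypothetical 'dictionary': reference notes keyed by terms in the code.
REFERENCE = {
    "strcpy": "strcpy does no bounds checking; prefer snprintf.",
    "eval": "eval executes arbitrary input; never call it on untrusted data.",
}

def build_context(code: str) -> str:
    """Prepend only the reference notes whose terms appear in the snippet."""
    hits = [note for term, note in REFERENCE.items() if term in code]
    if not hits:
        return PERSONA
    return PERSONA + "\n\nReference notes:\n" + "\n".join(hits)

ctx = build_context("eval(user_input)")
```

Filtering the dictionary down to relevant entries is also what keeps this workable on small hardware: only a few notes land in the context window instead of the whole knowledge base.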
Vibe vibing
Hey ChatGPT, respond to all of my inquiries like my toxic abusive uncle. The more vicious the response the better. Withhold praise. Pretend it's opposite day and give me your best compliments in the form of the life-long trauma that I have come to associate with authority figures.
Here's my code. What do you think?
"I don't have fingers to touch your butthole, Starry"
You can give it rules.
I work with Gemini a lot and I told it to cut all the polite crap out and just give the facts I need.
On the rare occasion I use LLMs, I just wish they would respect my request for a list of, like, twenty bullets and nothing else; instead I get two paragraphs of bs and four bullets.
I don’t think it will ever cuss at you, but you can have it be more critical. It says to me all the time, “This is probably a bad idea; before I do this, consider this alternative” (paraphrasing).