Bibip

joined 1 week ago
[–] Bibip@programming.dev 1 points 6 hours ago

i didn't like ∆V, but i almost loved it. i thought it was neat that you could hire crew that gave different bonuses depending upon their specialty and experience. i thought it was neat that your crew can recognize other folks you come across in the rings. i thought it was really neat when one of those strangers told my crewmate about an anomalous lidar contact they made further in.

as i got closer to the coordinates they shared, tracking this enormous lidar contact, it occurred to me that if the developers had seized upon this moment with eldritch horror, it would have been my favorite game ever. i felt scared; if that rock had turned around and had an eye, and then a dozen rock tentacles had busted into my ship and squeezed all the blood out of my crew -- it would have been my favorite game.

[–] Bibip@programming.dev 4 points 1 week ago

there are many use-cases, and you've neglected one: linguistic analysis can be used to identify a person and link them to other accounts. i'm not saying it's likely or apocalyptic, but it's a real and present risk. using an LLM to "sanitize" your output can prevent this.

from a privacy perspective, everyone should do this using a locally hosted LLM. from a person-that-uses-the-internet perspective, i would absolutely hate it if every article and every comment looked like an identical brand of ai slop.
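the locally hosted route is less work than it sounds. a minimal sketch, assuming an Ollama-style server on localhost:11434 with a "llama3" model pulled (the endpoint, model name, and instruction wording are all my assumptions, not a prescription):

```python
# sketch: paraphrase a draft comment through a locally hosted LLM so
# stylometric analysis can't link it to your other writing.
# assumes an Ollama-style server at localhost:11434 with a "llama3"
# model available -- swap in whatever local setup you actually run.
import json
import urllib.request

SANITIZE_INSTRUCTION = (
    "Rewrite the following comment so it keeps the same meaning "
    "but none of the author's distinctive phrasing, punctuation "
    "habits, or vocabulary:\n\n"
)

def build_sanitize_prompt(text: str) -> str:
    """Wrap the user's draft in a paraphrase instruction."""
    return SANITIZE_INSTRUCTION + text

def sanitize(text: str, model: str = "llama3") -> str:
    """Send the draft to the local model and return its paraphrase."""
    payload = json.dumps({
        "model": model,
        "prompt": build_sanitize_prompt(text),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

nothing in that round trip leaves your machine, which is the whole point: you get the stylometric scrubbing without handing your drafts to a hosted provider.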

[–] Bibip@programming.dev 5 points 1 week ago

a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it. have you ever tried to read a spanish book, without knowing spanish, with nothing but a spanish-english dictionary? it's very slow going, and it works alright until someone speaks in idiom or metaphor, but even then you can mostly still get it. that is not always the case with scholarly articles.

moreover, it's a waste of time. if it takes you 30 hours to look up every term and graph, but it would have taken your biology friend 20 minutes to synthesize it for you, there's an obvious solution here. if an LLM can save you 30 hours, and your biology friend 20 minutes, it's a useful tool.

[–] Bibip@programming.dev 12 points 1 week ago

hi friends i hope you're well.

i worked a laborious job and experienced a phenomenon i refer to as "parasitic thought": someone gives you all of the information a person would need to reach the correct conclusion, and then stares at you. they want you to crunch the info for them.

i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.

i don't know about "reasonable" or "ethical" or "polite," but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. "i can't be bothered to communicate with you, here, read this wall of gpt-vomit"

my instinct is to copy and paste, "LLM agent of my choice, what's this person trying to say to me?" and then skim the ai synthesized summary of the ai composed body text generated from some idiot's faint echoes of thought.

in the words of your high school biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means the output should be indistinguishable from, if not better than, something you wrote by hand.

there are privacy implications. linguistic analysis can be used to identify you. from a privacy perspective, the internet would be a better place if everyone fed their carefully formed thoughts to an LLM and said "make this look like chatgpt 3 wrote it."