I must have been living under a rock, or been a different kind of terminally online, because I had only ever heard of Honey through Dan Olson's riposte to Doug Walker's The Wall, which describes Doug Walker delivering "an uncomfortably over-acted ad for online data harvesting scam Honey" (35:43).
I saw this floating around fedi (sorry, don't have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I'm covering basic, established material, I'm always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.
"I'm extremely left-leaning, but I do have concerns about the (((globalists))) in finance"
As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.
I am too tired to put up with people complaining about "angies" and "woke lingo" while trying to excuse their eugenicist drivel with claims of being "extremely left-leaning". Please enjoy your trip to the scenic TechTakes egress.
"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.
From the replies:
In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere, messes up, the company and authorities should theoretically be able to trace it back to that incident. Generative AI is more or less a black box by comparison; plus, how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.
Also, suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include listening to ChatGPT? If you do, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.
There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.
And a good sneer:
With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.
Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”
Christ, what an asshole.
Those are the actors who played Duncan Idaho in the David Lynch adaptation and in the two Syfy miniseries. So, yeah, it's not wrong, just incomplete — though I have no idea why it only serves up those three. There's certainly no limitation to three images, as can be verified by searching for "Sherlock Holmes actor" or the like.
"When I have a disagreement with a girl, I hit my balls with a hammer. There is absolutely nothing she can do; it's a brutal mog."
To date, the largest working nuclear reactor constructed entirely of cheese is the 160 MWe Unit 1 reactor of the French nuclear plant École nationale de technologie supérieure (ENTS).
"That's it! Gromit, we'll make the reactor out of cheese!"
I have the feeling that they're not a British trans person talking about the NHS, or an American in a red state panicking about dying of sepsis because the baby they wanted so badly miscarried.