actually, i don't think possessing the ability to send email entitles you to """debate""" with anyone who publishes material disagreeing with you or the way your company runs, and i'm pretty sure responding with a (polite) "fuck off" is a perfectly reasonable approach to the kinds of people who believe they have an inalienable right to argue with you
i absolutely love the "clarification" that an email address is PII only if it's your real, primary, personal email address, and any other email address (that just so happens to be operated and used exclusively by a single person, even to the point of uniquely identifying that person by that address) is not PII
i was impressed enough with kagi's by-default deranking/filtering of seo garbage that i got a year's subscription a while back. good to know that this is what that money went to. suppose i'll ride out the subscription (assuming they don't start injecting ai garbage into search before then) and then find some other alternative
switching topics, but i do find it weird how the Brave integration stuff (which i also only found out about after i got the subscription) hadn't... bothered me as much? to be exceptionally clear, fuck Brendan Eich and Brave -- the planet deserves fewer bigots, crypto grifters, and covid conspiracists -- but i can't put my finger on why Kagi paying to consume Brave's search APIs just doesn't cause as much friction with me. honestly it could be the fact that when i pay for Kagi it doesn't feel like i'm bankrolling Eich and his ads-as-a-service grift, whereas the money for my subscription is definitely paying for Vlad to ~~reply-guy into the inboxes of bloggers who are critical of the way Kagi operates~~ correct misunderstandings about Kagi.
Actually, that email exchange isn’t as combative as I expected.
i suppose the CEO completely barreling forward past multiple attempts to refuse conversation while NOT screaming slurs at the person they're attempting to lecture is, in some sense, strictly better than the alternative
The best way I can relate current LLMs is to the early days of the microprocessor.
i promise we did it! we made iphone 2! this is just like iphone 2! of course it doesn't work yet but it will work eventually! we made iphone 2 please believe us!!
he's already banned but i love how every time this argument comes up there's absolutely no substance to the metaphor. "ai is like the internet/microprocessors/the industrial revolution/the Renaissance", but there's no connective tissue or actual relation between the things being compared, just some hand-waving around the general idea of progress and pointing to other popular/revolutionary things and going "see! it's just like that!"
The majority of hallucinations are due to user input errors that are not accounted for in the model tokenizer and loader code. These are just standard code errors. Processing every possible spelling, punctuation, and grammar error is a difficult task.
"i'm sorry, but you used the wrong form of 'their' in your prompt, that's why it inexplicably included half a review of Click in your meeting summary."
AI is like a mirror of yourself upon the dataset. It can only reflect what is present in the dataset and only in a simulacrum of yourself through the prompts you generate. It will show you what you want to see. It is unrivaled access to information if you have the character to find yourself and what you are looking for in that reflection.
s-tier. no notes. does lemmy have user flairs? because if so i'm calling dibs
my pet conspiracy theory is that the two streamers had installed cheats at one point in the past and compromised their systems that way. but i have no evidence to base that on, just seems more plausible to me than "a hacker discovered an RCE in EAC/Apex and used it during a tournament to install game cheats on two people and [appear to] do nothing else"
correlation? between the rise in popularity of tools that exclusively generate bullshit en masse and the huge swelling in volume of bullshit on the Internet? it's more likely than you think
it is a little funny to me that they're talking about using AI to detect AI garbage as a mechanism of preventing the sort of model/data collapse that happens when data sets start to become poisoned with AI content. because it seems reasonable to me that if you start feeding your spam-or-real classification data back into the spam-detection model, you'd wind up with exactly the same degradations of classification, and your model might start calling every article that has a sentence starting with "Certainly," a machine-generated one. maybe they're careful to only use human-curated sets of real and spam content, maybe not
it's also funny how nakedly straightforward the business proposition for SEO spamming is, compared to literally any other use case for "AI". you pay $X to use this tool, you generate Y articles which reach the top of Google results, you collect $(X+P) in click revenue, and you do it again. meanwhile "real" businesses are trying to gauge exactly what single digit percent of bullshit they can afford to get away with putting in their support systems or codebases while trying to avoid situations like being forced to give refunds to customers under a policy your chatbot hallucinated (archive.org link) or having to issue an apology for generating racially diverse Nazis (archive).