[-] glad_cat 106 points 1 year ago

The guy is scanning eyeballs for a living; I don't believe he has any respect for a small text file on your web server.

[-] LilDestructiveSheep@lemmy.world 37 points 1 year ago

Yeah right. Same goes for Google and such. This is more of a legal game. As long as you don't catch them overstepping your rules, it's "legal". If you do, and can prove it, you can take them to court. But yeah, you know... it'll probably come to nothing.

[-] Max_P@lemmy.max-p.me 98 points 1 year ago

Why is everyone outraged when Google/Microsoft/Yahoo and others have scraped the whole internet for two decades and are also massively profiting from that data?

[-] empireOfLove@lemmy.one 128 points 1 year ago* (last edited 1 year ago)

There's a significant difference in the purpose of the scraping.

Google et al. run crawlers primarily to populate their search engines. This is a net positive for those whose sites get scraped, because when they appear in a search engine they get more traffic, more page views, more ad revenue. People view content directly from those who created it, meaning those creators (regardless of whoever they are) get full credit. Yes, Google makes money too, but site owners are not left in the cold.

ChatGPT and other LLMs work by combing the huge body of content they've "learned" to cook up an answer through fast math magic. Content scraped to populate this dataset can be regurgitated at any time, only now it's been completely processed and obfuscated to an insane degree. Any attribution is completely stripped from the final product, even if it ends up being a word-for-word reproduction. Everything OpenAI charges for its LLM goes directly to OpenAI, and those who created the content used to train it will never even know it was used without their consent.

Essentially, LLMs operate like a huge middle-school plagiarism machine shitting all over any concept of copyright, only now they're making billions off said plagiarism with no plans to stop. It's a huge ethical conundrum and one I heavily disagree with.

[-] shadowspirit@lemmy.world 16 points 1 year ago

And pretty sure this is the catalyst for reddit's API changes. Other companies are getting rich off of them and they want a piece of the pie.

[-] SkyNTP@lemmy.ml 24 points 1 year ago* (last edited 1 year ago)

No, the real reason for the API changes was to shut down third-party apps; and the reason for that is that the apps gave users too much freedom not to be a perfectly packaged product for the real customer: advertisers and other paid promotion.

How do we know this? Simple. a) Shutting down APIs does nothing to prevent dedicated content scrapers, b) it would have been totally possible to lock down the APIs and negotiate fair deals with app developers, keeping third-party apps alive while applying the same rate limiting to scrapers that we have now, and c) this all coincided with some bigger-picture business-model changes at Reddit, including Reddit for Business, Reddit's IPO, and the reduced VC funding in the tech industry at large.

To blame that saga on AI scrapers really obfuscates the fact that Redditors are just cattle whose eyeballs are to be packaged and shipped to the real paying customer.

[-] shadowspirit@lemmy.world 8 points 1 year ago

In light of your statement / argument, I'll reframe similar responses in the future. What you say makes sense. Thanks.


Google et al. run crawlers primarily to populate their search engines. This is a net positive for those whose sites get scraped, because when they appear in a search engine they get more traffic, more page views, more ad revenue.

This is not necessarily true. Google's instant answers are designed to use the content from websites to answer searchers' questions without actually leading them to the website. Whether you're trying to find the definition of a word, the year a movie came out, or a recipe, Google will take the information it scraped from a website and present it on its own page, with a link to the website. The hope is that the information will be useful enough that the searcher never needs to leave the search engine.

This might be useful for searchers, but it doesn't help the sites much. This is one of the reasons news companies attempted to take action against Google a few years ago. I think a search engine should provide some useful utilities, but not try to replace the sites they're ostensibly attempting to connect users to. Not all search engines are like this, but Google is.

[-] FlowVoid@midwest.social 20 points 1 year ago

Because until now they weren't competing against individual content creators.

[-] cooljacob204@kbin.social 6 points 1 year ago

Sure but they were competing against all sorts of other jobs by automating them.

[-] FlowVoid@midwest.social 4 points 1 year ago* (last edited 1 year ago)

They weren't stealing content until now.

[-] mac@lemm.ee 4 points 1 year ago

Unsure why this is where the line is drawn, but okay.

[-] FlowVoid@midwest.social 3 points 1 year ago* (last edited 1 year ago)

Because it's perverse for someone to create content if successful content will surely be stolen and used against its creator.

[-] PeleSpirit@lemmy.world 19 points 1 year ago

I think it's because they were trying to sell us stuff, whereas GPT is trying to be us.

[-] athos77@kbin.social 52 points 1 year ago

Charitable of you to believe they'd listen to robots.txt.

[-] MrSnowy@lemmy.ml 18 points 1 year ago

I just hide the N word and the long R word very well, several dozen times throughout my site, so they have to manually blacklist it.

[-] backhdlp@lemmy.blahaj.zone 4 points 1 year ago

Stupid question: what's the difference between the long R word and the R word?

[-] argv_minus_one@beehaw.org 49 points 1 year ago

If you think robots.txt is going to stop them, I've got a great deal for you on some ocean-front property in Colorado.
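For reference, the "protection" everyone here is skeptical of is just a plain-text file at the site root. Assuming the crawler name OpenAI documents for its bot (GPTBot), the whole opt-out amounts to something like this sketch:

```text
# https://example.com/robots.txt
# Politely asks OpenAI's crawler to stay away -- compliance is entirely voluntary.
User-agent: GPTBot
Disallow: /

# Everyone else may still crawl the site.
User-agent: *
Allow: /
```

Nothing enforces it; a crawler that ignores the file sees exactly the same pages, which is the point being made above.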

Could somebody explain why this is bad? I'm not a fan of all this AI stuff. But I can't think of an argument besides "Big tech is bad and they should not make money if they use public information to do so."

I'm genuinely curious. There may be massive amounts of data being processed, but only public data, right? If they can use that data for something, isn't that a positive? Or at the very least nothing negative? I always thought that posting something in a public space meant making it available for anyone to use anyway. So what am I missing here?

[-] Shinji_Ikari@hexbear.net 31 points 1 year ago

If the results were also open and public, it'd be a different conversation.

This is more akin to collecting rainwater uphill and selling it back to the people downhill. It's privatization of a public resource.

[-] cooljacob204@kbin.social 8 points 1 year ago

This is more akin to rain water collection up-hill and selling it back to the people downhill

Not really, anyone can go and collect the same water they are collecting. And it's happening, open source LLMs are quickly catching up and a shit ton of other companies are also crawling the exact same data.

[-] pjhenry1216@kbin.social 14 points 1 year ago

"anyone". I hate when people use this word knowing full well it's not true in meaning. "Nothing is stopping you from spending millions of dollars on your own LLM." Ok.

The web is a bunch of information that is public, sure. People don't have a reasonable expectation of privacy, but they used to have a reasonable expectation that their information would be used in a very specific fashion. Especially in the US, where there is a default copyright claim on data. And crawling the web may ignore text stating you can't use the data, even if you include a clause saying that by accessing the data you agree to the terms. That only works against little people: the "anyone" who can't actually just go and build an LLM.

[-] cooljacob204@kbin.social 7 points 1 year ago* (last edited 1 year ago)

Sure but that applies to literally a million other things. There is an absolute ton of shit that companies do that individuals can't which is still built off of others.

A company can go spend $1B on a new state-of-the-art nuclear reactor that will bring in billions over its lifetime. Will the physicists who discovered the underlying math see any of the profit? No, probably not. And if they do, it won't be anywhere near a "fair share". Nor will all the publishers and authors who created the learning materials that the people working for said company used to build it.

There is tons of public knowledge that can only be utilized with a huge investment, that's just how a lot of innovation works.

And OpenAI also has a ton of competitors. Sure they have the lead for now but thousands of other companies are also scraping and building LLMs.

[-] pjhenry1216@kbin.social 5 points 1 year ago

You're not really going to win this argument, as I'm an anti-capitalist. So I agree a lot of that stuff is wrong too. I don't believe you should own others' labor; the employees should own the company. And I don't believe in copyright, but it does exist and it's enforced against individuals, so it's only fair that it's enforced against corporations as well. I don't think you should be allowed to blindly scrape when the information could be behind an agreement to use it in a specific manner if accessed. Plus, I think it should be opt-in, since this is a new use and therefore a new right under copyright. Just as actors suddenly need to worry about being scanned and owned by a Hollywood studio. It's something a reasonable person wouldn't expect, and that's why past works are protected from that use.

Things behind a third party privacy policy, sure. You agreed to it, whatever. But your own website? I'm not feeling it.


This comparison is lacking because water is unlike data. The data can still be accessed exactly the same. It doesn't become less and the access to it is not restricted by other people harvesting it.

[-] mim 15 points 1 year ago* (last edited 1 year ago)

While that is true, it does divert people's clicks.

Imagine you wrote a quality tech tutorial blog. Is it ok for OpenAI to take your content, train their models, and divert your previous readers away from your blog?

It's an open ethical question that isn't straightforward to answer.

EDIT: yes people also learn things and repost them. But the scale at which ChatGPT operates is unprecedented. We should probably let policy catch up. Otherwise we'll end up with the mess we currently have by letting Google and Facebook collect data for years without restrictions.

[-] Shinji_Ikari@hexbear.net 3 points 1 year ago

It's not a great comparison, I'll admit, but it's essentially the same as digital privacy, only one of these is protected in courts and the other is encouraged.

I haven't sat down to really build a stance on this but it does not sit well.

[-] RedstoneValley@sh.itjust.works 28 points 1 year ago* (last edited 1 year ago)

"public" does not mean you're allowed to steal it and republish it as a work of your own. There are things like copyright and stuff

[-] cooljacob204@kbin.social 14 points 1 year ago

“public” does not mean you’re allowed to steal it and republish it as a work of your own

That is not what they or LLMs do. And while the morals around it are questionable, acting like they are straight-up stealing and republishing work hurts having a serious discussion about it.

[-] Kichae@kbin.social 19 points 1 year ago

LLMs create statistical distributions of words and phrases based on ingested data, and then sample those distributions given conditional probabilities.
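
Mechanically, the "statistical distributions" being described can be illustrated with a toy bigram model; this is a deliberately tiny sketch (real LLMs use neural networks over tokens, not word counts), but the shape is the same: build conditional distributions from ingested text, then sample from them:

```python
import random
from collections import Counter, defaultdict

# Toy illustration: "ingest" some text and learn, for each word,
# the distribution of words that follow it.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # e.g. following["the"] = {"cat": 2, "mat": 1}

def sample_next(word):
    """Sample the next word from the distribution conditioned on `word`."""
    dist = following[word]
    words, counts = zip(*dist.items())
    return random.choices(words, weights=counts)[0]

# Generate a short sequence by repeated conditional sampling.
random.seed(0)
out = ["the"]
for _ in range(5):
    if not following[out[-1]]:  # dead end: word never seen with a successor
        break
    out.append(sample_next(out[-1]))
print(" ".join(out))
```

The point of the comment holds either way: the ingested works are dissolved into the distribution, and nothing in the sampled output carries attribution back to them.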

Why should for-profit companies have the right to create these statistical distributions based on our written works without consent? They're not publishing these distributions, and the purpose of ingesting these texts is not to report on the distributions.

They're just bottom-trawling the internet and acting as if they have every right to use other peoples' written works. While people are having "serious discussions" around it, they're moving forward, ignoring the discussions entirely, and trying to force the conclusion of those discussions to be "well, it's too late now, anyway".

[-] Even_Adder@lemmy.dbzer0.com 6 points 1 year ago* (last edited 1 year ago)

Original analysis of public data is not stealing. If it were stealing to do so, it would gut fair use, and hand corporations a monopoly of a public technology. They already have their own datasets, and the money to buy licenses for more. Regular consumers, who could have had access to a corporate-independent tool for creativity and social mobility, would instead be left worse off with fewer rights than where they started.

[-] pjhenry1216@kbin.social 9 points 1 year ago

You should have the discussions first. Not after you already profited off someone else's work. If the argument should be about whether they can use the data or not, then harvesting it first is absolutely harmful to the discussion you claim is important. You can't just argue one side is in bad faith when the other side is already objectively acting in bad faith if we are to assume the discussion is real.

[-] cooljacob204@kbin.social 8 points 1 year ago* (last edited 1 year ago)

You should have the discussions first.

But we are way past that. And legally while they are walking a thin line it seems that LLMs are going to win the legal challenges.

I don't think stopping or slowing LLM development is going to work, because then more questionable countries who really don't give a fuck about IP will pull ahead.

If you want my honest opinion, I don't think these LLM companies are stealing, and I do think artists are getting the shit end of the stick at the same time. We are heading towards an AI dystopia, and I think the way to address it is through more solid social welfare programs instead of fighting about IP. While artists are the focus, this AI revolution is coming for all labor. Artists are unfortunately just the first ones being impacted by it.

I think people should stop fighting about the minor things and instead prep for the inevitable unemployment this will bring. LLMs are really just the tip of the iceberg.

[-] RedstoneValley@sh.itjust.works 3 points 1 year ago

Yeah, you're right that it is different from simply stealing content. However, the LLMs still use protected material as input, and it seems that at least parts of those works can be uniquely identified in the output. That can be considered problematic, even if the data is deconstructed into embeddings in between input and output.

[-] Adderbox76@lemmy.ca 21 points 1 year ago

As a freelance writer, I write an article for a respected tech website. That article gets views, which in part determines if I get any sort of a performance bonus.

Along comes an AI that scrapes my content, so that when someone asks it a question about how to do "x" on Mac, it spits out an answer based on what it learned from MY article, sometimes regurgitating it word for word, and in doing so deprives me and my publisher of a much need page view.

It affects their revenue, since it affects ad views. It affects my performance bonus.

This isn't about big tech being "bad". It's about writers and other artists not being credited or paid for their work.

This is a good explanation, thank you. I hadn't thought about people who literally post stuff to earn money. Since so much of the talk already revolved around scraping sites like Lemmy, that was all I had in mind.

What you describe sounds like the same problem as with services that bypass paywalls or ads on news sites.

In this case I fully agree that some solution needs to be found.

[-] Kichae@kbin.social 16 points 1 year ago

Could somebody explain why this is bad?

Consent.

I don't consent to my copyrighted material -- which is literally everything I write and post online, including this comment -- being included in these products. In some cases, I have implicitly consented to allowing this to happen via the EULA of websites I've used over the years, but having them actively scraping the web for content means they're directly bypassing any agreements I may have made with service providers, and that they're collecting my copyrighted works without my ever having done business of any sort with them.

I haven't agreed to contribute to their for-profit operation, I'm not being compensated in any way for this participation -- whether financially or via the providing of a service -- and I don't believe they have any moral right to decide that I'm going to contribute whether I want to or not.

They can fuck right off.

[-] argv_minus_one@beehaw.org 7 points 1 year ago

They're copying your content, mashing it up with other content, and showing it to their customers, without ever sending their customers to your website. As a result, you don't get paid and you don't even get exposure.

Let's say I use AI to write a book; in that case, the AI will just grab what someone else wrote.

Let's say I use AI to write code; the AI will just copy someone else's code.

Let's say I use AI to make art; the AI uses someone else's art.

Then let's say I sell the book, use the code, and make NFTs of the art. Since the AI "did it", I don't have to follow any license or give credit to anyone.

As for using only public information: that should be opt-in, but instead AI companies are just taking the public internet, putting it in a can, and selling it, whether you like it or not.

[-] mojo@lemm.ee 5 points 1 year ago

Just because something is public, does it mean the source is irrelevant? Not to mention, there's a lot of stuff that's not meant to be public that is. A computer won't know the difference. Public or not, it's theft to steal the content without credit and monetize it privately.

[-] rastilin@kbin.social 4 points 1 year ago

Yeah, I don't really care what they harvest either. I suppose if conversations showed up in chat that would be an issue, but the internet is a public forum anyway and there's no expectation of privacy here.

[-] pjhenry1216@kbin.social 12 points 1 year ago

If copyright law can work against the individual, it should work against the corporation as well. We can't only enforce it against the little people. Enforce it for all or for none.

[-] rastilin@kbin.social 4 points 1 year ago

In this instance they're not even taking copyrighted content. I don't think random forum posts are copyrightable, and they're not even being reproduced; the text is just being read to create a derivative work.

[-] Kichae@kbin.social 6 points 1 year ago

The expectation that things are not private is totally different from the expectation that things are not being harvested for profit, though. Harvesting things for profit is transforming the public into the private.

[-] coach@lemmynsfw.com 15 points 1 year ago

In other news, water is wet.

[-] The_Walkening@hexbear.net 9 points 1 year ago

I think it'd be more useful to generate a set of absolute crap AI content pages and restrict their bot to that set of pages. It'll make it dumber.

[-] empireOfLove@lemmy.one 8 points 1 year ago

They're already starting to feed on their own content and creating negative feedback loops...

[-] Prater@lemmy.world 8 points 1 year ago

As if it needed to be said.

[-] karpintero@lemmy.world 4 points 1 year ago

Good reminder to do this for my personal sites. I wonder if they're scraping the fediverse for training data now that Reddit has started to clamp down on its API.

this post was submitted on 09 Aug 2023
398 points (99.8% liked)

Privacy Guides

16263 readers
105 users here now

In the digital age, protecting your personal information might seem like an impossible task. We’re here to help.

This is a community for sharing news about privacy, posting information about cool privacy tools and services, and getting advice about your privacy journey.


You can subscribe to this community from any Kbin or Lemmy instance:



Check out our website at privacyguides.org before asking your questions here. We've tried answering the common questions and recommendations there!

Want to get involved? The website is open-source on GitHub, and your help would be appreciated!


This community is the "official" Privacy Guides community on Lemmy, which can be verified here. Other "Privacy Guides" communities on other Lemmy servers are not moderated by this team or associated with the website.


Moderation Rules:

  1. We prefer posting about open-source software whenever possible.
  2. This is not the place for self-promotion if you are not listed on privacyguides.org. If you want to be listed, make a suggestion on our forum first.
  3. No soliciting engagement: Don't ask for upvotes, follows, etc.
  4. Surveys, Fundraising, and Petitions must be pre-approved by the mod team.
  5. Be civil, no violence, hate speech. Assume people here are posting in good faith.
  6. Don't repost topics which have already been covered here.
  7. News posts must be related to privacy and security, and your post title must match the article headline exactly. Do not editorialize titles, you can post your opinions in the post body or a comment.
  8. Memes/images/video posts that could be summarized as text explanations should not be posted. Infographics and conference talks from reputable sources are acceptable.
  9. No help vampires: This is not a tech support subreddit, don't abuse our community's willingness to help. Questions related to privacy, security or privacy/security related software and their configurations are acceptable.
  10. No misinformation: Extraordinary claims must be matched with evidence.
  11. Do not post about VPNs or cryptocurrencies which are not listed on privacyguides.org. See Rule 2 for info on adding new recommendations to the website.
  12. General guides or software lists are not permitted. Original sources and research about specific topics are allowed as long as they are high quality and factual. We are not providing a platform for poorly-vetted, out-of-date or conflicting recommendations.

founded 1 year ago