I'm not entirely sure why Reddit was going to charge outlandish fees for third-party API access. Looks like none of the apps are actually going to pay them, so he's not getting anything out of it. It's really the combination of pushing them out of the market and then being a smug little bitch about it that put the nail in the coffin for a lot of people.
i don't think they were trying to make money off the API changes. like others are saying, it has to do with AI, and they figured they might as well take the chance to knock out third-party apps in the same swoop so they can funnel more people onto the official app
they can data harvest much better that way
I feel like AI being the reason doesn't hold up particularly well from a technical standpoint. From my searching, web scraping publicly accessible data is generally legal. It'd be slower than the API, but a massive dataset is still very collectable.
Plus, building a web scraper is so easy now. Funnily enough, generative AI like ChatGPT can get you about 95% of the way there in just a few minutes.
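For what it's worth, here's a minimal sketch of what that scraper might look like, using Reddit's well-known trick of appending .json to a listing URL. The subreddit name and User-Agent string are placeholders, and the endpoint and field names are the historical ones, so verify them before relying on this:

```python
# Minimal sketch of a no-API scraper using Reddit's public ".json" listing pages.
# Assumes the historical endpoint and field layout still hold; rate limits and
# field names should be verified, not taken as guaranteed.
import requests

HEADERS = {"User-Agent": "demo-scraper/0.1"}  # Reddit rejects the default UA

def fetch_listing(subreddit: str, limit: int = 25) -> list[dict]:
    """Fetch one page of posts from a subreddit's JSON listing."""
    url = f"https://old.reddit.com/r/{subreddit}/.json?limit={limit}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    # Listings look like {"data": {"children": [{"data": {...post...}}, ...]}}
    return [child["data"] for child in resp.json()["data"]["children"]]

for post in fetch_listing("technology"):
    print(post["title"], "-", post["permalink"])
```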
Though, none of the reasons they've stated so far seem to hold up to scrutiny.
It's slower, but using an API requires you to customize your system for each site's unique API. That would be a massive development undertaking for such a small benefit that it would never pay off. For an LLM, you only need to read each page once: just wait until a post is a month or so old, when essentially all discussion has stopped, and you'll get everything you need. So "fast" isn't really a concern at all.
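As a rough sketch of that "read it once when it's settled" idea, filtering by post age is nearly a one-liner. created_utc is Reddit's standard Unix-timestamp field, and the 30-day threshold just follows the comment's logic; the posts would come from something like the hypothetical fetch_listing helper above:

```python
# Sketch of the "wait until discussion has stopped" strategy: keep only posts
# older than ~30 days, so a single pass captures the whole thread.
import time

MONTH_SECONDS = 30 * 24 * 60 * 60

def settled_posts(posts: list[dict]) -> list[dict]:
    """Keep posts old enough that new comments are unlikely."""
    cutoff = time.time() - MONTH_SECONDS
    # "created_utc" is the post's creation time as a Unix timestamp;
    # posts missing the field are excluded rather than guessed at.
    return [p for p in posts if p.get("created_utc", float("inf")) < cutoff]
```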
You can pull much more data, much quicker, through the API than with some sort of HTML scraper. These LLMs need a lot of data, and Reddit is a big site.
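For comparison, here's roughly what the API route looks like with PRAW, the standard Python wrapper for Reddit's API. The credentials are placeholders (you'd register an app at reddit.com/prefs/apps to get real ones), and this is a sketch of the general shape, not a tuned collector:

```python
# Sketch of pulling posts plus full comment trees through the official API
# using PRAW (https://praw.readthedocs.io). Credentials are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="demo-collector/0.1",
)

# One paginated call returns structured data; no HTML parsing needed.
for submission in reddit.subreddit("technology").top(time_filter="month", limit=100):
    submission.comments.replace_more(limit=0)  # resolve "load more comments" stubs
    bodies = [c.body for c in submission.comments.list()]
    print(submission.title, f"({len(bodies)} comments)")
```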