this post was submitted on 02 Mar 2025
48 points (100.0% liked)

China

Discuss anything related to China.

Community Rules:

0: Taiwan, Xizang (Tibet), Xinjiang, and Hong Kong are all part of China.

1: Don't go off topic.

2: Be Comradely.

3: Don't spread misinformation or bigotry.


[–] footfaults@lemmygrad.ml 5 points 2 days ago (15 children)

It's very silly to say that because I don't like LLMs, I'm anti-technology.

[–] pcalau12i@lemmygrad.ml 2 points 2 days ago* (last edited 2 days ago) (14 children)

You are, you're opposed to automation technology, and that's literally Luddism, which is a form of anti-communism. What position are you even trying to defend? "I'm not anti-technology, I just oppose automation!" Like, the overwhelming majority of new technology is developed to increase labor productivity, which means to increase the degree to which tasks are automated. To oppose automation is to oppose the overwhelming majority of new technologies.

AI is just one of many automation technologies. You realize USPS is largely run on AI? Automation is a major backbone of our economy. But, oooh, there's no "soul" in OCR software or something, so we have to go backwards and bring back whole warehouses of people who decipher the text on letters and type it into a computer, because we can't have it done automatically because muh AI scawy. We've gotta burn all the huge breakthroughs in medical science, such as protein folding, and in materials science that were discovered through AI, because muh AI scawy and lacks a soul or something. We have to abandon research in nuclear fusion technology because all recent breakthroughs in plasma stabilization have come through AI automation.

Do you know what it means to develop the productive forces? It means to improve productivity, which requires continually improving automation and semi-automation (by that I mean tools that partially automate things but may still require some supervision). We will never reach a higher-stage communist society without automation and semi-automation, i.e. without constantly improving labor productivity.

I hope you never in your life use the speech recognition feature on your phone, like writing text messages by speaking them. I hope you never in your life use a translation app like Google Translate or DeepL. Otherwise you are a hypocrite for using the evil soulless scawy AIs.

[–] footfaults@lemmygrad.ml 0 points 2 days ago (3 children)

I think it's far more telling how you conflate automation with Large Language Models (colloquially called AI even though they're not).

Many of the technologies you cite as examples and call AI (OCR, computer vision) existed long before LLMs; I don't understand why you lump them in.

I find the protein folding example especially perplexing, since protein folding simulation existed far, far before LLMs and machine learning, and it is ahistorical to claim those as AI innovations.

I don't agree with your AI boosterism, but I think what is more perplexing is how misinformed it is.

[–] pcalau12i@lemmygrad.ml 0 points 1 day ago* (last edited 1 day ago) (2 children)

They are all artificial neural networks, which is what "AI" typically means... bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.

The "intelligence" part in artificial intelligence comes from the fact that these algorithms are very loosely based on how what makes biological organisms intelligent: their brains. Artificial neural networks (as they are more accurately called) use large numbers of virtual neurons with different strengths of neural connections between the neurons sometimes called their "weights" and the total number of different connections is referred to as the "parameter" count of the model.

With a bit of calculus you can figure out how to use training data to adjust the sometimes billions of parameters in an ANN so that it spits out more accurate answers on that data. You repeat this process many times with a lot of data and eventually the ANN will tune itself to find patterns in the dataset and start spitting out better and better answers.
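To make that concrete, here's a minimal sketch of that idea, gradient descent on a single artificial neuron, with made-up toy data (the numbers, network size, and learning rate are arbitrary choices for illustration, not anything from this thread):

```python
import numpy as np

# Toy data: 4 examples with 3 input features each, plus a target for each example.
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])
y = np.array([0., 1., 1., 1.])

rng = np.random.default_rng(0)
weights = rng.normal(size=3)   # the "parameters" of this one-neuron network
bias = 0.0
lr = 0.5                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: the neuron's prediction for every example.
    pred = sigmoid(X @ weights + bias)
    # "A bit of calculus": gradient of the squared error with respect to each parameter.
    delta = (pred - y) * pred * (1 - pred)
    grad_w = X.T @ delta / len(y)
    grad_b = delta.mean()
    # Nudge the parameters a little in the direction that reduces the error.
    weights -= lr * grad_w
    bias -= lr * grad_b

print(np.round(sigmoid(X @ weights + bias), 2))  # should approach [0, 1, 1, 1]
```

A real model does the same thing, just with billions of weights and far more data, and with the gradients computed automatically by the framework instead of by hand.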

The benefit of ANNs is precisely that they effectively train themselves. Imagine writing a bunch of if/else statements to convert the text in an image into written text. It would be impossible, because there are quadrillions of ways an image can look while containing the same text: taken at a different distance, in a different writing style, under different lighting conditions, etc. You would be coding forever and would never solve it. But if you feed an ANN millions of images of written text, captured under all these different conditions, alongside the text they contain, you can do a bit of calculus with a lot of computational power, and what comes out is a set of fine-tuned weights for an ANN that can identify the text in any new image you pass in.

Technology is fascinating but sadly you seem to have no interest in it and I doubt you will even read this. I only write this for others who may care.

And yes, computer vision is also based on ANNs. I have my own AI server with a couple of GPUs, and one of the tasks I use it for is optical character recognition, which requires loading the model onto the GPU for it to run quickly; otherwise it is rather slow (I am using paddleocr). If the image I am doing OCR on is in a different language, I can also pass the result through Qwen to translate it. If you ever set up a security system in your home, it will often use AI for object recognition. It's very inefficient to record footage all the time, but you can tell many modern security systems to record only when they see a moving person or a moving car. Yes, this is done with AI; you can even buy an "AI HAT" for the Raspberry Pi that was developed specifically for computer vision and object identification.
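For reference, a minimal sketch of that kind of OCR call with paddleocr; the image filename is a placeholder, and the constructor options and result layout vary between PaddleOCR versions, so treat this as an outline rather than the exact setup described above:

```python
from paddleocr import PaddleOCR

# Load the text detection + recognition models once; this is the slow part.
# With a GPU-enabled paddlepaddle install the models run on the GPU, otherwise on CPU.
ocr = PaddleOCR(lang='en')

# Run OCR on one image; 'scanned_letter.png' is a placeholder path.
result = ocr.ocr('scanned_letter.png')

# In the 2.x API each detected line comes back as (bounding box, (text, confidence)).
for line in result[0]:
    box, (text, score) = line
    print(f"{score:.2f}  {text}")
```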

Literally, if you ever take a course in AI, one of the first things you learn is OCR, because it's one of the earliest examples of AI being useful. There is a famous dataset with its own Wikipedia page, MNIST, because so many people learning how AI works start by building a simple network that does OCR on handwritten digits, trained on the MNIST dataset.
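A sketch of that classic exercise, here using PyTorch and torchvision (the framework, network size, and hyperparameters are my own arbitrary choices, not something from the thread):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST: 28x28 grayscale images of handwritten digits 0-9.
train_data = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A small fully connected network: 784 pixels in, 10 digit scores out.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One pass over the training set: forward pass, backprop ("a bit of calculus"), update.
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print("final batch loss:", loss.item())
```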

I'm also surprised your hatred is directed at large language models specifically, when people who hate AI usually despise text-to-image models. You do know that "AI art" generators are not LLMs, yes? I find it odd that someone would despise LLMs, which actually have a lot of utility, like language translation and summarization, over text-to-image models, which don't have much utility at all besides spitting out (sometimes...) pretty pictures. Although, I assume you don't even know the difference, since you seem to not know much about this subject, and I doubt you will even read this far anyway.
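Since translation keeps coming up as the example of LLM utility, here is a hedged sketch of running a small local instruction-tuned model for it with the Hugging Face transformers library; the specific model name and the example sentence are my own assumptions, not anything mentioned in the thread:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The model name is an assumption; any small instruction-tuned chat model works similarly.
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Made-up example sentence to translate; not taken from any OCR output discussed above.
messages = [{"role": "user",
             "content": "Translate this to English: 自动化技术提高劳动生产率。"}]

# Build the chat-formatted prompt the model expects and generate a reply.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```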

[–] footfaults@lemmygrad.ml 3 points 1 day ago* (last edited 1 day ago)

bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.

You are toxic, as well as incredibly arrogant. A true example of the Dunning-Kruger effect. If you want to have a tantrum then by all means do so, but don't pretend that you are on some sort of high ground when you make your pronouncements.

In every conversation you have had with me, you project opinions onto me that I do not have (Marxism vs. anarchism, calling me a Luddite, etc.) and construct straw-man arguments that I did not make.

Do some self-crit.
