submitted 1 year ago* (last edited 1 year ago) by Rinna@lemm.ee to c/asklemmy@lemmy.ml
[-] alcoholicorn@hexbear.net 10 points 1 year ago

I disagree about the coherence.

Coherence requires relating symbolic meanings; AI just does statistical analysis.

Imagine you were locked in the National Library of Thailand. You don't speak Siamese, and any pictures or bilingual dictionaries have been removed.

Given a thousand years, you could look at the patterns and produce text similar to what someone who writes Siamese would write, but there's still no coherency because you cannot connect the meaning behind any of the words.

That doesn't necessarily mean your outputs are useless, though: someone who does read Siamese can have you generate outputs until you print out something they can infer a coherent thought from, but you're fundamentally unable to be trained to do that yourself.
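The "pattern copier" in that analogy can be sketched as a toy bigram model: it generates plausible-looking text purely from co-occurrence statistics, with no access to what any word means. This is a minimal illustration of the statistical-analysis point, not how a real LLM works; the corpus and function names here are made up.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, length=8, seed=0):
    """Emit words by sampling only from observed successors.

    The model never consults meaning, only observed adjacency.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Hypothetical training text standing in for the library's shelves.
corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output will look locally fluent to a reader of the language, yet nothing in the program connects any word to a referent, which is the analogy's point.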

> If a human being takes people's work and pieces it together in a way that resembles other works without using any LLM/AI or automation tool, is the final result content theft too?

We're getting into ethics territory. IP is a social construct, and we live under capitalism; our model for determining what is and isn't theft should be selected by what supports artists and consumers against capitalists.

[-] boboblaw@hexbear.net 3 points 1 year ago

Ah, the Siamese Room argument.

[-] XEAL@lemm.ee 1 point 1 year ago

> Given a thousand years, you could look at the patterns and produce text similar to what someone who writes Siamese would write, but there's still no coherency because you cannot connect the meaning behind any of the words.

> That doesn't necessarily mean your outputs are useless, though: someone who does read Siamese can have you generate outputs until you print out something they can infer a coherent thought from, but you're fundamentally unable to be trained to do that yourself.

You're comparing an LLM to something like the infinite monkey theorem. In your analogy, you should consider that someone who knows perfect Siamese is giving me feedback to optimize and improve my outputs, even if I don't really know the meaning of anything.

While an LLM may have no awareness with which to evaluate whether its output is coherent, it can identify patterns and relationships from its training and generate text that still appears coherent to human readers.
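That feedback loop can be sketched as best-of-n sampling against an external judge: a generator that emits strings with no notion of meaning, and a judge function standing in for the fluent Siamese reader. All names here are hypothetical, and real feedback training for LLMs (e.g. RLHF) is far more involved than this toy.

```python
import random

ALPHABET = "abc "

def generate(rng, length=5):
    """Produce a random string; the generator knows no meaning."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def judge(text):
    """The external reader's verdict; here, simply 'contains ab'."""
    return 1 if "ab" in text else 0

def best_of_n(n=200, seed=1):
    """Sample n candidates and keep the one the judge rates highest."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=judge)

print(best_of_n())
```

Only the judge ever relates the output to anything meaningful; the generator improves its hit rate without understanding a single symbol, which is the commenter's point.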

this post was submitted on 13 Sep 2023