this post was submitted on 17 Nov 2025
32 points (90.0% liked)

Asklemmy

A loosely moderated place to ask open-ended questions

Look, I don't believe that AGI is possible, or at least not within the next few decades. But I was thinking: if one did come to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have the "knowledge" of almost every human emotion and moral framework, and could even extrapolate from the past when situations are slightly changed. Such an LLM would also be backed by pretty powerful infrastructure, so hallucinations might be eliminated and it could handle multiple contexts at once.

One might say it also has to have emotions to be considered an AGI, and that's a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW, this is a shower thought, so I might be wrong.

[โ€“] HiddenLayer555@lemmy.ml 6 points 1 month ago* (last edited 1 month ago)

An AGI wouldn't need to read every book because it can build on the knowledge it already has to draw new conclusions it wasn't "taught."

Also, an AGI would be able to keep a consistent narrative regardless of how much data or context it has, because it could build an internal model of what is happening and selectively remember the most important things over the inconsequential ones (not to mention assess what's important and what can be forgotten to shed processing overhead), all things a human does instinctively when given more information than their brain can immediately handle.

Meanwhile, an LLM is totally dependent on how much context it actually has buffered, and giving it too much information will literally push all the old information out of its context, never to be recalled again. It has no ability to determine what's worth keeping and what's not, only what's more or less recent.

I've personally noticed this especially with smaller, locally run LLMs with very limited context windows. If I begin troubleshooting some Linux issue using one, I have to be careful with how much of a log I paste into the prompt, because if I paste too much, it will literally forget why I pasted the log in the first place. This is most obvious with Deepseek and other reasoning models, because they will actually start trying to figure out why they were given that input when "thinking," but it's a problem with any context-based model, because the context is its only active memory.

I think the reason this happens so obviously when you paste too much in a single prompt, and less so when having a conversation with smaller prompts, is that the model also has its previous outputs in its context. So while it might have forgotten the very first prompt and response, the information gets repeated often enough in subsequent prompts and responses to keep it in the more recent context (ever notice how verbose AI tends to be? That could potentially be a mitigation strategy). Whereas when you give it a prompt as big as or bigger than its context window, it completely overwrites the previous responses, leaving no hint of what was there before.
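
To make that recency-only behaviour concrete, here's a minimal sketch (hypothetical code, not tied to any particular LLM runtime or tokenizer) of a context buffer that keeps messages in arrival order and evicts the oldest ones once a token budget is exceeded. Notice there's no judgment of importance anywhere: one oversized log paste is enough to push the original question out.

```python
from collections import deque


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())


class RecencyContext:
    """Toy context window that only knows 'newer beats older'."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.messages = deque()  # oldest message at the left

    def add(self, message: str) -> None:
        self.messages.append(message)
        # Evict from the oldest end until the budget fits again --
        # importance is never considered, only recency.
        while sum(count_tokens(m) for m in self.messages) > self.max_tokens:
            self.messages.popleft()

    def window(self) -> list:
        return list(self.messages)


ctx = RecencyContext(max_tokens=50)
ctx.add("Why does my service fail to start?")  # the original question (7 tokens)
ctx.add("Check the journal for errors.")       # a short reply (5 tokens)
ctx.add("error line " * 20)                    # one big log paste (40 tokens)
print(ctx.window())  # the original question has been evicted
```

A real context window isn't literally a message queue, but the effect the comment describes is the same: whatever arrived earliest is the first thing to disappear, regardless of whether it was the whole point of the conversation.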