GPT-4 cannot alter its weights once it has been trained, so this is just factually wrong.
The bit you quoted is referring to training.
They are not intelligent. They generate text based on inputs. That is not what intelligence is, unless you hold an extremely dismal view of intelligence in which humans are text-generation machines with no thoughts, no feelings, no desires, no ability to plan... basically, no internal world at all.
Recent papers say otherwise.
The conclusion the author of that article comes to (that LLMs can understand animal language) is... problematic, to say the least. I don't know how they expect that to happen.
Man, that video irks me. She is conflating AI with AGI. I think a lot of people are watching that video and repeating what she says as fact, yet her basic assertion is incorrect because she isn't using the right terminology. If she explained that up front, the video would be far more accurate. She almost gets there but stops short. I would also accept her saying that her definition of AI is "anything a human can do that a computer currently can't." I'm not a fan of that definition, but it has been widely used for decades. I much prefer delineating AI vs AGI.

Anyway, this is the first time I've watched the video, and it explains a lot of the confidently wrong comments on AI I've seen lately. Also, please don't take your AI information from an astrophysicist, even one who uses AI at work. Get it from an expert in the field.
Anyway, ChatGPT is AI. It is not AGI, though per recent papers it is getting closer.
For anyone who doesn't know the abbreviations: AGI is Artificial General Intelligence, i.e., human-level intelligence in a machine. ASI is Artificial Super Intelligence, which is beyond human level and the really scary stuff in movies.