ProfessorScience

joined 2 years ago
[–] ProfessorScience@lemmy.world 13 points 1 week ago (1 children)

One of my favorites is the "ladder paradox" in special relativity, although I originally learned it with a pole vaulter rather than a ladder:

A pole vaulter is running while carrying a pole that is 12m long at rest, holding it parallel to the ground. He is running at relativistic speed, such that lengths contract by 50% (this works out to a speed of (√3/2)c). He runs through a barn that is 10m long and has open doors at the front and back.

Imagine standing inside the barn. The pole vaulter is running so fast that the length of the pole, in your frame of reference, has contracted to 6m. So while the pole is entirely inside the barn, you press a button that briefly closes the doors, so that for just a moment the pole is entirely shut inside the barn.

The question is, what does the pole vaulter see? For him, the pole has not contracted; instead the barn has. He's running with a 12m pole through what, in his frame of reference, is a 5m barn. What happens when the doors shut? How can both doors close with the pole entirely inside?
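If you want to check the numbers, here's the arithmetic as a quick Python sketch (just the standard length-contraction formula; the specific figures come from the setup above):

```
import math

def gamma(v):
    """Lorentz factor for speed v, with v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

v = math.sqrt(3) / 2          # the speed from the setup, about 0.866c
pole_rest, barn_rest = 12.0, 10.0

# Your frame (standing in the barn): the pole is contracted.
print(pole_rest / gamma(v))   # 6.0 m -- fits inside the 10 m barn

# The runner's frame: the barn is contracted instead.
print(barn_rest / gamma(v))   # 5.0 m -- no way a 12 m pole fits
```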

I will admit that I have never used this thought experiment for any practical end.

Maybe a jersey knit sheet?

[–] ProfessorScience@lemmy.world 4 points 3 weeks ago

> and will we ever return to this singular state?

This is the "big crunch" question, and I think the current consensus is that no, we won't: there's enough (dark) energy, and space is curved in such a way, that it won't crunch back down; if anything, the expansion is accelerating. At least not all at once, anyway? We do have black holes, which are sort of similar in that they pack so much stuff into such small spaces that the current laws of physics are unable to completely describe them.

[–] ProfessorScience@lemmy.world 4 points 3 weeks ago

> How do we know that this was the actual beginning of the universe?

We don't. It is "the beginning" in the sense that it is the farthest back the laws of physics can be "rewound" before they break down: once you rewind the clock far enough, there's so much matter and energy in such a small space that the interactions between quantum mechanics and gravity become increasingly relevant, and we just don't know how those play together (see quantum gravity).

> We know that space is expanding faster than light can travel. How do we know that the Universe isn’t trillions of years old, but we just can’t ever see it because it’s beyond the distance that the faintest detectable light can travel?

We don't. But there's no evidence to suggest that the universe outside of what we can see is any different from the parts we do see. We can't really say much about what's beyond our light cone (the region we can see). All we can say is that the parts we do see all look basically the same (homogeneous, astronomically speaking).

[–] ProfessorScience@lemmy.world 22 points 3 weeks ago

43% of people:

[image]

[–] ProfessorScience@lemmy.world 3 points 3 weeks ago

4: English, Spanish, French, and Japanese

Bonus: Yes

[–] ProfessorScience@lemmy.world 5 points 1 month ago

I installed linux on my PC a couple of months ago. The other day I wanted to boot back into my windows partition for the first time in a while, in order to clean up some of the files on that partition (even though the drive is mounted in linux, the windows "fast boot" option apparently leaves the filesystem in a state that linux will only mount read-only). But windows wouldn't let me log in without a microsoft account, instead of just using my regular local windows username.

So yeah, that partition's gone now. No going back!

[image]

[–] ProfessorScience@lemmy.world 3 points 1 month ago (1 children)

Cherry-picking a couple of points I want to respond to together:

> It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know has multi-sensory processing and analysis that occurs inline with "storing". The chat session is more like RAM than the short-term memory we see in biological systems.

> It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it's a dead end towards an AGI.

I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to write off what LLMs do as purely linguistic analysis. Language is the input and the output, sure, but the same could be said of communicating with a person over email, and I don't think you'd conclude that that person wasn't sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words they read.
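To make that concrete, here's a toy sketch of what I mean by the geometry carrying meaning (the vectors here are invented for illustration; a real model learns thousands of dimensions during training):

```
import numpy as np

# Made-up 3-d "embeddings"; real LLMs learn these vectors during training.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.75, 0.2]),
    "apple": np.array([-0.1, 0.2, -0.8]),
}

def cosine(a, b):
    """Similarity of direction: related words end up pointing the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # close to 1: related concepts
print(cosine(emb["king"], emb["apple"]))  # near 0: unrelated concepts
```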

> As a component of a system, it becomes much more promising.

It sounds to me like you're more strict about what you'd consider to be "the LLM" than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone's brain is sentient.

> Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.

I'm not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.

[–] ProfessorScience@lemmy.world 2 points 1 month ago (3 children)

> LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology

Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.

> LLMs do not synthesize. They do not have persistent context.

That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context, if you consider that "conversations" with an LLM are really just a matter of including all previous parts of the conversation in each new prompt. Isn't this analogous to short-term memory? Now suppose you were to take all of an LLM's conversations throughout the day, and then retrain it overnight using those conversations as additional training data. There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?
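A rough sketch of what I mean by "the conversation is just re-fed into the prompt" (generate() here is a hypothetical stand-in for whatever model call you like, not a real API):

```
# The model itself is stateless; each turn just re-sends the transcript.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM completion endpoint."""
    return "(model output would go here)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = generate("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Hi!"))

# Retraining overnight on the day's transcripts would, in effect, move
# this "short-term" context out of the prompt and into the weights.
```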

On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?

[–] ProfessorScience@lemmy.world 1 point 1 month ago

lol, yeah, I guess the Socratic method is pretty widely frowned upon. My bad. =D

[–] ProfessorScience@lemmy.world 2 points 1 month ago (7 children)

I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought". I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.
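To put the "just math" claim in perspective, here's the entire mechanical primitive in question (a toy layer with made-up weights, not any particular model):

```
import numpy as np

rng = np.random.default_rng(0)

# One toy "layer": multiply by a weight matrix, add a bias, squash.
# Real models stack many of these (plus attention), but the primitive
# operation really is no fancier than this.
W = rng.standard_normal((4, 8))   # made-up weights
b = rng.standard_normal(4)

def layer(x):
    return np.tanh(W @ x + b)

x = rng.standard_normal(8)        # an input vector, e.g. a token embedding
print(layer(x))                   # numbers passing through a matrix
```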

[–] ProfessorScience@lemmy.world 5 points 1 month ago (2 children)

Sure, I'm not entitled to anything. And I appreciate your original reply. I'm just saying that your subsequent comments have been useless and condescending. If you didn't have time to discuss further then... you could have just not replied.

14
Sound cutoff issues (lemmy.world)
submitted 8 months ago* (last edited 8 months ago) by ProfessorScience@lemmy.world to c/pop_os@lemmy.world
 

Hello! I'm pretty new to pop_os and linux, but I'm trying to switch over from windows. I've been having some sound issues where sounds seem to get cut off. It seems to be most noticeable with something like doing duolingo in my browser (lots of short sound clips of words and such; if I click on words quickly, spotify playing in the background will briefly stop). I've tried disabling sleep, as described at https://support.system76.com/articles/audio/, without luck. I've also noticed errors listed in pw-top which sometimes correspond to sounds getting cut off. That is, sometimes I notice a cutoff without seeing an increase in the number of errors, but when I see an increase in the number of errors it usually corresponds to something getting cut off.

Is there a way to see what the errors from pw-top actually are? Or suggestions for other things I should look into? I've looked at dmesg and systemctl status --user pipewire.service (and pipewire-pulse), but the only error I see is an nvidia-drm thing that seems to be innocuous. I've also uploaded my alsa-info results, if that's useful.
