niva

joined 2 years ago
[–] niva@discuss.tchncs.de 1 points 2 weeks ago* (last edited 2 weeks ago)

Yes, sure. I guess it depends on how often this will be the case. It is his first PC and it would be cool if he has a good experience with Linux. I don't want to lose him to the dark side of the Force, if you get my drift :)

[–] niva@discuss.tchncs.de 1 points 2 weeks ago

He wants this computer mostly for gaming and education. Zorin might not be ideal for gaming because it is based on a stable Ubuntu release and is therefore not as up to date, if I am not mistaken.

[–] niva@discuss.tchncs.de 2 points 2 weeks ago

It is actually the son of a close friend and neighbor of mine. He is 12 and gets his first PC in a few weeks. He wants it mostly for gaming, but he might want to learn more about computers later. Maybe learn programming or making music or whatever might pique his interest in the future. The good thing is I live next door and can help out if there is a problem. But on the other hand, I want him to have a good first experience with Linux. I would love to set him up with some Arch-based Linux distro if possible. But not if that means he has trouble with it all the time.

[–] niva@discuss.tchncs.de 1 points 2 weeks ago

Yes, that is also my understanding. Thanks!

[–] niva@discuss.tchncs.de 2 points 2 weeks ago (4 children)

Is EndeavourOS beginner friendly? Doesn't it have the same problem with manual interventions as Arch?

 

I am a long-term Arch user. I am considering recommending CachyOS to a friend with very limited technical knowledge. I read that CachyOS is suitable for Linux beginners. But since it is based on Arch, doesn't that mean manual user intervention is sometimes necessary, like it is in Arch? If so, I don't think it is suitable for a noob user.
Sorry if this is a stupid question, but I could not find anything about manual interventions, or about checking the Arch news before updating, in the CachyOS wiki or anywhere else.
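For what it's worth, the "check the Arch news before updating" step can be semi-automated. This is just a sketch of my own, not official CachyOS or Arch tooling; the only real thing in it is the regular Arch news RSS feed URL:

```python
import urllib.request
import xml.etree.ElementTree as ET

ARCH_NEWS_FEED = "https://archlinux.org/feeds/news/"

def latest_news_titles(xml_text, limit=5):
    """Pull the newest headline titles out of the Arch news RSS XML."""
    root = ET.fromstring(xml_text)
    items = root.findall("./channel/item")
    return [item.findtext("title") for item in items[:limit]]

def fetch_feed(url=ARCH_NEWS_FEED, timeout=10):
    """Download the raw RSS XML (needs network access)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# latest_news_titles(fetch_feed()) returns the current headlines,
# e.g. something to glance over before running pacman -Syu.
```

Anything with "manual intervention" in the title is the kind of post you'd want to read before updating.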

[–] niva@discuss.tchncs.de 7 points 3 weeks ago

I want this as well, but it will never happen.

[–] niva@discuss.tchncs.de 2 points 3 weeks ago (3 children)

Usually insurance companies don't pay for vandalism, I think.

[–] niva@discuss.tchncs.de 5 points 4 weeks ago

The consciousness transfer was something I thought about as well. I think Jame said something to Helly (in the same scene, season 1 episode 9) about wanting Helena to be at his revolving. It somehow sounded like rebirth to me. And a transfer of consciousness could be seen as rebirth. Maybe Jame even wants his consciousness transferred to Helly (the innie) now, because he sees Kier in her but not in Helena?

I bet some of our speculations are true and some false.

I pray to Kier it won't take another 3 years till we find out more! :)

 

I think Lumon's goal is to create a chip that can be activated every time a person has to do something unpleasant or painful.

With this chip, a person who is afraid of flying or the dentist could just activate it, travel more easily, and have better dental health.

People who hate physical exercise could simply let their innie do the work and be fit and healthy without having to work for it.

Women who are too afraid to give birth could have children without having to suffer through a potentially painful birth. Once Lumon has a chip like that, they might be able to convince a lot of people to get chipped. That is something Jame talked to Helena about in season 1 episode 9: "Everybody should have a chip!"

And when everybody has a Lumon chip they might have darker plans to control people with these chips.

So on the testing floor they created a chip that makes innies willing to take the pain for their outie. Maybe they will be able to copy Gemma's chip after Cold Harbour. And these copied chips could then be implanted in others?

What do you think?

[–] niva@discuss.tchncs.de 2 points 4 weeks ago

How did you miss this? It was a big thing at the beginning of the 2nd season. Innies are allowed to resign, no questions asked. That was one of the reforms Lumon made for the innies. Irving almost resigned, but Dylan convinced him to stay.

 

I thought at first, her standing there in that firelit room with a fire burning behind her, she must be evil. She looks like the devil in this scene. But this is not about Christianity, it is about the Lumon religion! She is the fallen archangel of Lumon! She is not the Antichrist, she is the anti-Kier!

Lumon is all about the cold. Cold colours everywhere, the iceberg picture in Milchick's office, the home town of Cobel (the home town of Lumon as well?), Cold Harbour, ... But there is Cobel, standing in very warm light with a fire crackling behind her. She is no longer a soldier for Lumon. She is the anti-Kier now! She wants to burn this company to the ground!

[–] niva@discuss.tchncs.de 10 points 3 months ago (4 children)

This puzzles me. Why do these Meta employees care about LGBTQ people? How can anyone work for Meta and have a conscience?

[–] niva@discuss.tchncs.de 5 points 3 months ago (2 children)

At the very least there should be a law forcing any health insurance to cover all costs needed to ensure survival and long-term health. And what is needed for survival and long-term health is defined by the doctor, not by the insurance company!!! Honestly, I have no idea why this is not the law in any rich country in 2025!

 

First of all, the take that LLMs are just parrots unable to think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than doing the work yourself from the start. That is something I often hear from programmers. It might be true for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it is quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have and how they can access and work with this knowledge. And they can do this with a neural network of only a few billion parameters. The major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn. It takes a shit ton of data and a lot of time and computing power for them to learn. And more importantly, they don't learn from interactions; they learn from static data.

This is similar to what the company DeepMind did with their chess and Go engines (also neural networks). They trained these engines on a shit ton of games that were played by humans, and they became really good that way. But then the second generation of their NN game engines did not look at any games played before. They only knew the rules of chess/Go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors, which had needed a lot of human games to learn from.

So that is my take! LLMs should start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds. How do you evaluate the winner of this game, for example? But it can be done.
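That "learn from the rules alone" idea can be shown in miniature. The toy below is my own illustrative sketch, nothing to do with DeepMind's actual setup; all names and numbers are made up. It learns the game of Nim (take 1-3 sticks, whoever takes the last stick wins) purely by playing against itself with tabular Q-learning, starting from nothing but the rules:

```python
import random

def self_play_train(n=21, take_max=3, episodes=5000, seed=0):
    """Learn Nim by pure self-play: no expert games, only the rules.

    Q maps (sticks_left, action) -> estimated value for the player to move.
    """
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state, history = n, []
        while state > 0:
            actions = range(1, min(take_max, state) + 1)
            if rng.random() < 0.2:      # explore: try a random move
                a = rng.choice(list(actions))
            else:                       # exploit current knowledge
                a = max(actions, key=lambda a: Q.get((state, a), 0.0))
            history.append((state, a))
            state -= a
        # The player who took the last stick won. Walk the game backwards,
        # crediting +1 to the winner's moves and -1 to the loser's.
        reward = 1.0
        for s, a in reversed(history):
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

def best_move(Q, state, take_max=3):
    """Greedy move according to the learned values."""
    actions = range(1, min(take_max, state) + 1)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

After a few thousand self-play games the agent reliably takes the immediately winning move from small positions, without ever having seen a human play. The open problem I mentioned, how to score the winner of an asking-and-answering game between two LLMs, is exactly the part this toy gets for free, because Nim has a built-in win condition.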

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them. And they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!
