coolin

joined 2 years ago
[–] coolin@beehaw.org 2 points 2 years ago (1 children)

Yeah, there's no way a viable Linux phone could be made without the ability to run Android apps.

I think we're probably at least a few years away from being able to daily drive Linux on modern phones, with things like NFC payments working and a decent native app selection. It's definitely coming, but it has far less momentum than even the Linux desktop does.

[–] coolin@beehaw.org 2 points 2 years ago (1 children)

"Ban cars" is a lot like saying "defund the police"

It makes a whole lot of sense when you stop and examine the reasoning, but it hides the wide-reaching social and infrastructure reforms needed to accomplish it behind a short meme that people misinterpret.

[–] coolin@beehaw.org 2 points 2 years ago

Smh my head, Linux is too mainstream now!!! How will I be a cool hacker boy away from society if everyone else uses it!!!!!!!

[–] coolin@beehaw.org 9 points 2 years ago

Sam Altman: We are moving our headquarters to Japan

[–] coolin@beehaw.org 3 points 2 years ago

I think this is downplaying what LLMs do. Yeah, they are not the best at doing things in general, but the fact that they were able to learn the structure and semantic context of language is quite impressive, even if they don't know what the words converted into tokens actually mean. I suspect we will be able to use LLMs as one part of a full digital "brain": some model similar to our own prefrontal cortex would call the LLM (and other components like a vision model, a sound model, etc.) and use their output to reason about a task and take an action. That's where I think the hype will be validated: when you put all these parts we've been working on together and Frankenstein a new and actually intelligent system.
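To make the idea concrete, here's a minimal Python sketch of that kind of architecture. Everything in it is invented for illustration: the model classes are stubs standing in for real LLM/vision/audio models, not any actual library API.

```python
# Hypothetical sketch of a "controller + specialist models" system.
# All classes here are stubs, not real model APIs.

from dataclasses import dataclass


@dataclass
class Observation:
    text: str | None = None
    image: bytes | None = None
    audio: bytes | None = None


class LanguageModel:
    def complete(self, prompt: str) -> str:
        return f"[LLM completion for: {prompt!r}]"  # stub


class VisionModel:
    def describe(self, image: bytes) -> str:
        return "[description of image]"  # stub


class AudioModel:
    def transcribe(self, audio: bytes) -> str:
        return "[transcript of audio]"  # stub


class Controller:
    """Plays the 'prefrontal cortex' role: routes each modality to a
    specialist model, then reasons over the fused context with the LLM."""

    def __init__(self) -> None:
        self.llm = LanguageModel()
        self.vision = VisionModel()
        self.audio = AudioModel()

    def act(self, obs: Observation) -> str:
        context: list[str] = []
        if obs.image is not None:
            context.append(self.vision.describe(obs.image))
        if obs.audio is not None:
            context.append(self.audio.transcribe(obs.audio))
        if obs.text is not None:
            context.append(obs.text)
        # The LLM is just one component: it turns the combined context
        # into a proposed action for the rest of the system to execute.
        prompt = "Given: " + " | ".join(context) + "\nNext action:"
        return self.llm.complete(prompt)


if __name__ == "__main__":
    print(Controller().act(Observation(text="user asks for the weather")))
```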

[–] coolin@beehaw.org 4 points 2 years ago (1 children)

For the love of God, please stop posting the same story about AI model collapse. This paper has been out since May, it's been discussed multiple times, and the scenario it presents is highly unrealistic.

Training on the whole internet is known to produce shit model output, which is why humans curate high-quality datasets to feed to these models to yield high-quality results. That is why we have techniques like fine-tuning, LoRAs, and RLHF, as well as countless curated datasets to feed to models.

Yes, if a model were for some reason trained on raw internet output for several iterations, it would collapse and produce garbage. But the current frontier approach for datasets is for strong LLMs (e.g. GPT-4) to produce high-quality synthetic data and for new LLMs to train on that. This has been shown to work with Phi-1 (really good at writing Python code, trained on high-quality textbook-level content generated with GPT-3.5) and Orca/OpenOrca (a GPT-3.5-level model trained on millions of examples from GPT-4 and GPT-3.5). Additionally, GPT-4 itself has likely been trained on synthetic data, and future iterations will train on more and more of it.

Notably, by selecting a narrow, high-quality range of outputs instead of the whole distribution, we are able to avoid model collapse and in fact produce even better models.
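As a rough sketch of what that narrow-selection pipeline looks like: a strong "teacher" model generates candidates, and only the top slice by quality survives into the training set. The teacher and scorer below are invented stubs, not the actual Phi-1 or Orca tooling.

```python
# Hypothetical synthetic-data pipeline: generate with a teacher model,
# then keep only a narrow, high-quality band of its outputs.

import random


def teacher_generate(prompt: str) -> str:
    # Stand-in for a call to a frontier model (e.g. GPT-4).
    return f"synthetic answer to {prompt!r} (quality varies)"


def quality_score(example: str) -> float:
    # Stand-in for a quality classifier or heuristic filter.
    return random.random()


def build_dataset(prompts: list[str], keep_top: float = 0.1) -> list[str]:
    """Generate candidates, then keep only the top fraction by quality.
    Selecting this narrow band, rather than the teacher's full output
    distribution, is what avoids the degenerate feedback loop."""
    candidates = [teacher_generate(p) for p in prompts]
    scored = sorted(candidates, key=quality_score, reverse=True)
    return scored[: max(1, int(len(scored) * keep_top))]


if __name__ == "__main__":
    dataset = build_dataset([f"task {i}" for i in range(100)])
    print(f"kept {len(dataset)} of 100 candidates for training")
```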

[–] coolin@beehaw.org 6 points 2 years ago

I've never used Manjaro, but the perception I get is that it's a noob-friendly distro with a good GUI and default config, but one that catastrophically fails when you monkey around with updates and the AUR. That's a pain for technical users and a back-to-Windows experience for the people it's actually targeting. Overall, significantly worse than EndeavourOS or plain ol' vanilla Arch Linux.

[–] coolin@beehaw.org 20 points 2 years ago* (last edited 2 years ago)

"We Have No Moat, and Neither Does OpenAI" is the leaked document you're talking about

It's a pretty interesting read. Time will tell if it's right, but given the speed of advancements I'm seeing stacked on top of each other in the open source community, I think it could be. If open source figures out scalable distributed training, I think it's Joever for AI companies.

[–] coolin@beehaw.org 5 points 2 years ago

Shit, anyone packing boxes for less than $20 an hour is getting scammed, 'cause I know for a fact several places offer more than that. It just goes to show the importance of having a union to bargain for higher wages.

[–] coolin@beehaw.org 4 points 2 years ago* (last edited 2 years ago) (1 children)

Based NixOS user

I love NixOS, but I really wish it had some form of containerization by default for all packages, like Flatpak, and that I didn't have to monkey with the config to install a package or change a setting. Other than that, it is literally the perfect distro: every bit of my OS config can be reproduced from a single git repo.

[–] coolin@beehaw.org 9 points 2 years ago (1 children)

I don't know what type of chatbots these companies are using, but I've literally never had a good experience with them, and that makes no sense considering how advanced even something like OpenOrca 13B is (a GPT-3.5-level model that can run on a single graphics card in some company server room). Most of the ones I've talked to are from some random AI startup and have cookie-cutter, preprogrammed text responses that feel less like LLMs and more like a flow chart plus a rudimentary classifier to select an appropriate response. We have LLMs that can handle the more complex human tasks of figuring out problems and suggesting solutions, and that can query a company database to respond correctly, but we don't use them.
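Here's a toy sketch of what that database-grounded bot could look like. Only the sqlite3 usage is a real standard-library API; the model call is a stub and the schema and helper names are invented for illustration.

```python
# Hypothetical support bot: an LLM composes the reply, grounded in facts
# retrieved from the company database rather than canned flow-chart text.

import sqlite3


def llm_answer(question: str, facts: str) -> str:
    # Stand-in for a locally hosted model such as OpenOrca 13B.
    return f"Based on our records ({facts}), here is an answer to: {question}"


def lookup_order(conn: sqlite3.Connection, order_id: int) -> str:
    # Invented schema: an 'orders' table with status and delivery ETA.
    row = conn.execute(
        "SELECT status, eta FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return (
        f"order {order_id}: status={row[0]}, eta={row[1]}"
        if row
        else f"no record of order {order_id}"
    )


def handle_ticket(conn: sqlite3.Connection, question: str, order_id: int) -> str:
    # Retrieve grounding facts first, then let the model compose the reply.
    facts = lookup_order(conn, order_id)
    return llm_answer(question, facts)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, eta TEXT)")
    conn.execute("INSERT INTO orders VALUES (42, 'shipped', '2 days')")
    print(handle_ticket(conn, "Where is my package?", 42))
```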

[–] coolin@beehaw.org 9 points 2 years ago

Blocking out the sun with aerosols is only a good idea if you know with high confidence how it will impact the climate system and the environment. That's why they're trying to simulate it on the supercomputer: so they know whether it fucks stuff up or not.
