[-] chrash0@lemmy.world 11 points 2 weeks ago

this comment traveled in time from 2001 lol

[-] chrash0@lemmy.world 13 points 2 weeks ago* (last edited 2 weeks ago)

even though those are the rules as written, i like to honor the crits, with a bit of nuance. if you’re super stealthy and roll a 1, maybe it makes a small noise but doesn’t cause an alarm. if you’re dumping strength on your wet-noodle wizard, maybe you’re able to move that heavy thing an inch on a 20. it’s always situational though. people get excited when they see a crit, and i think it makes the game more fun.

[-] chrash0@lemmy.world 11 points 1 month ago

yeah, i see that too. it seems like a mostly reactionary viewpoint. the reaction is understandable to a point, since a lot of the “AI” features are half-baked and forced on the user. to that point, i don’t think GNOME etc should be scrambling to add copies of these features.

what i would love to see is more engagement around additional pieces of software that are supplemental. for example, i would love it if i could install a daemon that indexes my notes and lets me do semantic search over them, or something similar for my images.
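
for the curious, a minimal sketch of what that notes daemon could look like, assuming sentence-transformers for the embeddings; the notes path, model choice, and file glob are placeholders, not a real project:

```python
# minimal sketch of local semantic search over plain-text notes
# assumes: pip install sentence-transformers numpy
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

NOTES_DIR = Path("~/notes").expanduser()  # hypothetical notes location
model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# index: embed every note once (a real daemon would watch for changes)
paths = sorted(NOTES_DIR.glob("*.md"))
texts = [p.read_text(encoding="utf-8") for p in paths]
index = model.encode(texts, normalize_embeddings=True)

def search(query: str, top_k: int = 5):
    """Return the top_k notes most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since embeddings are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(paths[i].name, float(scores[i])) for i in best]

print(search("that idea i had about container networking"))
```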

the problems with AI features aren’t in the tech itself but in the surrounding politics. it’s become commonplace for “responsible” AI companies like OpenAI to not even publish papers about their tech (vaguely scientific product announcement blogs don’t count), much less source code, weights, or details on training data. and even when Meta releases their weights, they don’t specify their datasets. the rat race to see who can make a decent product with this amazing tech has turned the whole industry into a bunch of pearl-clutching, FOMO-driven tweakers. that invites a comparison to blockchain, which is fair from the perspective of someone who hasn’t studied the tech or simply hasn’t seen a product that’s relevant to them. but even those people will look at something fantastical like ChatGPT as if it’s pedestrian or unimpressive, because when i asked it to write an implementation of the HTTP spec in the style of Fetty Wap it didn’t run perfectly the first time.

[-] chrash0@lemmy.world 12 points 1 month ago

tbh this research has been ongoing for a while. this guy has been working on the problem for years in his homelab, and it’s also been suggested that this approach could be a step toward better efficiency.

this definitely doesn’t spell the end of digital electronics. at the end of the day, we’re still going to want light switches, and it’s not practical to have a butter-spreading robot that can experience an existential crisis. neural networks, both organic and artificial, perform more or less the same function: given some input, predict an output and try to learn from the outcome. the neat part is that when you pile on a trillion of them, you get a being that can efficiently adapt to scenarios it’s never seen before.
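
to make that “predict, then learn from the outcome” loop concrete, here’s a toy single-neuron example in numpy (purely illustrative, nothing to do with the organic hardware in the article):

```python
# toy illustration of the predict -> compare -> adjust loop a single
# artificial neuron runs; real networks just stack billions of these
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2) * 0.1    # weights
b = 0.0                         # bias
lr = 0.5                        # learning rate

# learn a simple AND function from (input, target) pairs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

for _ in range(5000):
    pred = 1 / (1 + np.exp(-(X @ w + b)))   # predict an output
    err = pred - y                          # compare to what actually happened
    w -= lr * (X.T @ err) / len(X)          # nudge weights toward a better guess
    b -= lr * err.mean()

# predictions approach [0, 0, 0, 1] after training
print(np.round(1 / (1 + np.exp(-(X @ w + b))), 2))
```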

you’ll notice they’re not advertising any experimental results on prediction benchmarks. that’s because 1) this isn’t operating at a large enough scale to compete with state-of-the-art ANNs, 2) the relatively low resolution (16-bit) means inputs and outputs will be simple, and 3) this is more of a SaaS product than an introduction to organic computing as a concept.

it looks like a neat API if you want to start messing with these concepts without having to build a lab.

[-] chrash0@lemmy.world 9 points 1 month ago

> this data is not the world

i think most ML researchers are aware that the data isn’t perfect, but, crucially, it exists in a digestible form.

[-] chrash0@lemmy.world 10 points 1 month ago

i mean, i’ve worked on neural networks for embedded systems, and it’s definitely possible. i share your skepticism about the overhead, but i’ll eat my shoes if it isn’t opt-in

[-] chrash0@lemmy.world 13 points 1 month ago

if it’s easier to pay, people spend more

[-] chrash0@lemmy.world 9 points 2 months ago

i didn’t think people would really be surprised. but maybe i’m jaded by my experience in the industry.

if we’re arguing whether or not it’s objectively stupid, i think that’s up to the market to decide.

kinda seems like a toy to me anyway, and it’s kind of priced that way

[-] chrash0@lemmy.world 10 points 2 months ago

it’s not a password; it’s closer to a username.

but realistically it’s not in my personal threat model to be ready to get tied down and forced to unlock my phone. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.

personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because it’s a potential ingress.

[-] chrash0@lemmy.world 10 points 2 months ago

seems like chip designers are being a lot more conservative from a design perspective. NPUs are generally a shitton of 8-bit registers with optimized matrix multiplication. the “AI” that’s important isn’t the stuff in the news or the startups; it’s the things we’re already taking for granted: speech to text, text to speech, semantic analysis, image processing, semantic search, etc. sure, there’s a drive to put larger language models or image generation models on embedded devices, but a lot of these applications are battle tested and would be missed or hampered if that hardware weren’t there. “AI” is a buzzword and a goalpost that moves at 90 mph. machine learning, and the hardware and software ecosystem that has developed more or less quietly in the background over the past 15 or so years (at least compared to ChatGPT), is revolutionary tech that will be with us for a while.
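
for a sense of what “a shitton of 8-bit registers with optimized matrix multiplication” buys you, here’s a rough numpy sketch of int8 quantized matmul; the shapes and the symmetric per-tensor scaling scheme are arbitrary, just illustrative:

```python
# quantize float weights/activations to 8-bit, multiply-accumulate in
# integers (what an NPU does in hardware), then rescale back to float
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 128)).astype(np.float32)   # layer weights
x = rng.normal(size=(128,)).astype(np.float32)      # input activations

def quantize(a: np.ndarray):
    """Symmetric per-tensor quantization to int8, returning (q, scale)."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# integer multiply-accumulate with an int32 accumulator, then dequantize
y_int = Wq.astype(np.int32) @ xq.astype(np.int32)
y = y_int * (w_scale * x_scale)

print(np.max(np.abs(y - W @ x)))  # small error vs. the float32 result
```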

blockchain currencies never made sense to me from a UX or ROI perspective. they were designed to get more power hungry as adoption took off, and the power and compute optimizations were always conjecture. the way wallets are handled, and how privacy was barely a concern, was never going to fly with the masses. pile on that finance is just a trash profession that requires goggles that turn every person and thing into an evaluated commodity, and you have a recipe for a grift economy.

a lot of startups will fail, but “AI” isn’t going anywhere; it’s been around as long as computers have. i think companies like Google and Apple will take a similarly cautious approach to the chip designers as more semantic search, image editing, and conversational bot advancements make their way to the edge.

[-] chrash0@lemmy.world 10 points 2 months ago

you’d be surprised how fast a model can be if you narrow the scope, quantize, and target specific hardware, like the AI hardware features they’re announcing.

not a 1:1 comparison, but a quantized Mistral 7B runs at ~35 tokens/sec on my M2, and that’s not even as optimized as it could be. it can write simple scripts and handle some decent writing prompts.

they could get really narrow in scope (super simple RAG, limited responses, etc), quantize down to something like 4-bit, and run it on custom accelerated hardware. it doesn’t have to reproduce Shakespeare, but i can imagine a PoC that runs circles around Siri in semantic understanding and generated responses. being able to reach out on Slack to the engineers who built the NPU stack ain’t bad neither.
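
roughly what that local setup looks like today with llama-cpp-python and a 4-bit GGUF quant; the model path is a placeholder and the speed and quality will vary a lot by hardware:

```python
# rough sketch of running a narrow, quantized local model
# assumes: pip install llama-cpp-python and a downloaded 4-bit GGUF file
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-q4_k_m.gguf",  # hypothetical 4-bit quant
    n_ctx=2048,
    n_gpu_layers=-1,  # offload all layers (Metal on an M-series Mac)
)

out = llm(
    "Summarize my last three calendar events in one sentence.",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```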

[-] chrash0@lemmy.world 11 points 3 months ago

i think it’s a matter of perspective. if i’m deploying containers or servers on a system with well defined dependencies, then i think Debian wins the stability argument.

for me, i’m installing a bunch of experimental or bleeding edge stuff that’s hard to manage on even a non-LTS Debian system. i don’t need my CUDA drivers to be battle tested, and i don’t want to add a bunch of sketchy third-party repos to APT just to install a nightly build of neovim with my package manager. Arch makes that stuff simple, reliable, and stable, at least in comparison.

