https://disinfowatch.org/disinfo/canadian-killed-sumy-false/

Interesting article claiming that this Canadian officer totally wasn't owned by a Russian FAB-1500 in Sumy. Instead it says he actually died of "medical complications" while stationed in Casteau, Belgium. It strangely doesn't specify what those complications were. Not sure what happened one way or the other, but I do suspect that a patient catching a glide bomb would in fact present with some complications.

My Bajookaneese immigrant parents react to the Kardashians.

Didn't know about Neosporin gel (thanks), but Xlear or just saline nasal spray work too.

Dog people have no idea how bad their houses smell to the rest of us.

Yep it's a great time for those of us who like to seethe and shit when we see others exhibit behaviors we don't like about ourselves. Yes siree.

Keep your non-news drama baiting reactionary bs out of here.

Hell yeah dude, can't wait because I'm a fucking dickhead too.

It's shorter than just pasting comments like yours every time we need an example of fragility.

It feels like the US isn’t releasing what it has. I don’t think they’re behind, maybe just holding back?

Test-time training (TTT) significantly enhances language models' abstract reasoning, improving accuracy up to 6x on the Abstraction and Reasoning Corpus (ARC). Key factors for successful TTT include initial fine-tuning, auxiliary tasks, and per-instance training. Applying TTT to an 8B-parameter model boosts accuracy to 53% on ARC's public validation set, nearly 25% better than previous public neural approaches. Ensembling with recent program-generation methods achieves 61.9% accuracy, matching the average human score. This suggests that, in addition to explicit symbolic search, test-time training on few-shot examples significantly improves abstract reasoning in neural language models.
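For anyone unfamiliar with the "per-instance training" part: the gist is that the model gets a few extra gradient steps on each task's own few-shot demo pairs before predicting. Here's a toy sketch of just that control flow, with a one-parameter linear "model" standing in for the LLM (purely illustrative, nothing like the paper's actual setup):

```python
# Toy sketch of test-time training: adapt a model on a task's few-shot
# demo pairs with a handful of gradient steps, then predict the test input.
# The "model" here is just y = w * x; all names are illustrative only.

def ttt_predict(demos, x_test, w=0.0, lr=0.1, steps=100):
    """SGD on squared error over the demo pairs, then predict x_test."""
    for _ in range(steps):
        for x, y in demos:
            grad = 2.0 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w * x_test

# Each task gets its own fresh adaptation: that's the per-instance part.
demos = [(1.0, 3.0), (2.0, 6.0)]  # hidden rule: y = 3x
print(ttt_predict(demos, x_test=4.0))  # ≈ 12
```

The point is that the weights used at prediction time are task-specific, discarded after each task, rather than one frozen set shared across all of ARC.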

Unlike traditional language models that only learn from textual data, ESM3 learns from discrete tokens representing the sequence, three-dimensional structure, and biological function of proteins. The model views proteins as existing in an organized space where each protein is adjacent to every other protein that differs by a single mutation event.
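That "adjacent by a single mutation event" framing is easy to picture concretely. Here's a tiny sketch of the neighborhood idea, just enumerating every sequence one substitution away (illustrative only, nothing to do with ESM3's actual tokenization or API):

```python
# Sketch of the "adjacent by one mutation" neighborhood from the post:
# every sequence differing from a given one at exactly one position.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def single_mutants(seq, alphabet=AMINO_ACIDS):
    """All sequences differing from seq by exactly one substitution."""
    return {
        seq[:i] + a + seq[i + 1:]
        for i in range(len(seq))
        for a in alphabet
        if a != seq[i]
    }

neighbors = single_mutants("ACD")
print(len(neighbors))  # 3 positions x 19 alternatives = 57
```

The space blows up fast (19·L neighbors per length-L sequence, before even considering insertions or deletions), which is why you want a learned model rather than brute-force search to "evolve" through it.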

They used it to "evolve" a novel protein that acts similarly to others found in nature, while being structurally unique.

Why does everyone head to the trenches about consciousness this or that? Why even try to emulate the brain when transformers produce the results they do? All these systems need to do is approximate human behavior convincingly enough and they can automate most professional jobs out there, and in doing so upend society as we know it. These systems don't need a soul to scab for labor.

submitted 1 month ago* (last edited 1 month ago) by AtmosphericRiversCuomo@hexbear.net to c/technology@hexbear.net

They fine-tuned a Llama 13B LLM on military-specific data, and claim it works as well as GPT-4 for those tasks.

Not sure why they wouldn't use a more capable model like the 405B though.

Something about this smells to me. Maybe a way to stimulate defense spending around AI?

...versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks

This method could be faster and less expensive than traditional techniques because it requires far fewer task-specific data. In addition, it outperformed training from scratch by more than 20 percent in simulation and real-world experiments.

Paper: https://arxiv.org/pdf/2409.20537

With the stated goal of "liberating people from repetitive labor and high-risk industries, and improving productivity levels and work efficiency"

Hopefully they can pull it off cheaply while Tesla's Optimus remains vaporware (or whatever the real world equivalent of vaporware is).
