howrar

joined 2 years ago
[–] howrar@lemmy.ca 1 points 2 hours ago

Yann LeCun gave us convolutional neural networks (CNNs) in 1998. These are the models used for pretty much all specialized computer vision tasks even today. TinEye came into existence ten years later, in 2008. I can't tell you whether they used CNNs, but CNNs were certainly available by then.

[–] howrar@lemmy.ca 1 points 17 hours ago

Even conspiracies. Learn about where they come from and why they're wrong.

[–] howrar@lemmy.ca 2 points 21 hours ago

The value in daily notes is having a place to put things down without getting distracted from your main task. The important stuff doesn't stay there: you move it later to its permanent home.

[–] howrar@lemmy.ca 4 points 1 day ago

It sounds like you might be looking at the left image with your right eye and the right image with your left eye. That's what happens when you cross your eyes instead of looking past the image.

[–] howrar@lemmy.ca 1 points 1 day ago

But my question is, does it not count as being archived if it's exactly the same message that's posted to another platform that is archived?

[–] howrar@lemmy.ca 2 points 1 day ago (1 children)

It's double sided

[–] howrar@lemmy.ca 6 points 2 days ago

I think it's valuable to have these discussions come up regularly because things change so quickly in this space. It's good to get a glimpse of what the current state of things is.

[–] howrar@lemmy.ca 8 points 2 days ago

The cult of scheduling conflicts and never having time to meet up :(

[–] howrar@lemmy.ca 8 points 2 days ago

Would you choose to be able to feed/house yourself and be safe from preventable dangers, or buy a house in the US? If good labour practice means the latter and not the former, then I don't believe it's a good metric on which to evaluate their chocolate.

[–] howrar@lemmy.ca 1 points 3 days ago (2 children)

Would that stop them from duplicating the information on other platforms?

[–] howrar@lemmy.ca 3 points 3 days ago (1 children)

But there are Bick's pickles right under that sign? I don't understand.

 

The Homework Machine, oh the Homework Machine,

Most perfect contraption that's ever been seen.

Just put in your homework, then drop in a dime,

Snap on the switch, and in ten seconds' time,

Your homework comes out, quick and clean as can be.

Here it is—"nine plus four?" and the answer is "three."

Three?

Oh me . . .

I guess it's not as perfect

As I thought it would be.

 

I don't know the legislative process very well, but to the best of my understanding, the last step involves a vote on whether to pass a bill: a simple majority means it passes, otherwise it's rejected. This leads to an interesting (and possibly dangerous) dynamic where the government can behave very differently depending on whether or not the winning party holds a majority. When it does, we can get what's called "tyranny of the majority". It also means a smaller party's influence barely changes between holding a single seat and holding just enough seats to team up with another party to form a majority. So even if we get proportional voting for selecting MPs, we might still need to vote strategically to ensure or prevent a majority government, or to encourage a specific coalition government.

Do we have any potential solutions for this? Or did I maybe misunderstand how things work and this isn't actually a problem?

5
Open Sourcing π₀ (www.physicalintelligence.company)
 

https://bsky.app/profile/natolambert.bsky.social/post/3lh5jih226k2k

Anyone interested in learning about RLHF? This text isn't complete yet, but it already looks like a pretty useful resource.

 

Apparently we can register as a liberal to vote in the upcoming leadership race. What does it mean if I register? What do I gain (besides the aforementioned voting) and does it place any kind of restrictions on me (e.g. am I prevented from doing the same with a different party)?

 

An overview of RL published just a few days ago. 144 pages of goodies covering everything from basic RL theory to modern deep RL algorithms and various related niches.

This manuscript gives a big-picture, up-to-date overview of the field of (deep) reinforcement learning and sequential decision making, covering value-based RL, policy-gradient methods, model-based methods, and various other topics (including a very brief discussion of RL+LLMs).
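
For anyone who wants a concrete anchor for the value-based side before diving into 144 pages, here's a minimal sketch of tabular Q-learning. The state/action counts and hyperparameters are made up for illustration and aren't taken from the manuscript:

```python
import numpy as np

# Illustrative sizes and hyperparameters (not from the manuscript).
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # step size, discount, exploration rate

Q = np.zeros((n_states, n_actions))  # tabular action-value estimates

def update(s, a, r, s_next, done):
    """One temporal-difference update toward the Bellman optimality target."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

def act(s):
    """Epsilon-greedy action selection from the current value estimates."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(Q[s].argmax())

# Toy usage: one transition (state 0, action 1, reward 1.0, next state 2).
update(0, 1, 1.0, 2, done=False)
print(Q[0, 1])  # 0.1
```

Policy-gradient and model-based methods replace the table with learned function approximators, which is where most of the manuscript's deep RL material comes in.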

 

If there's insufficient space around it, then it'll never spawn anything. This can be useful if you want to keep a specific spawner around for capture later but don't want to spend resources on killing the constant stream of biters.

10
submitted 8 months ago* (last edited 8 months ago) by howrar@lemmy.ca to c/homeautomation@lemmy.world
 

I'm looking to get some smart light switches/dimmers (Zigbee or Matter, if that's relevant), and one of my requirements is that if the switches aren't connected to the network, they behave like regular dumb switches/dimmers. No one ever advertises anything except the "ideal" behaviour when everything is connected to a hub with the proprietary app, so I haven't been able to find any information on this.

So my question: is this the default behaviour for most switches? Are there any that don't do this? What should I look out for given this requirement?


Edit: Thanks for the responses. Since no one has experienced or even heard of switches that don't behave this way, I'm proceeding on the assumption that any switch should be fine. I got myself some TP-Link Kasa KS220 dimmers, and they work pretty well. Installation was tough due to their size; it took me about an hour of wrangling the wires to fit one in the box. Dimming also isn't as smooth as I'd like, but it works. I haven't had a chance to set them up with Home Assistant yet since the OS breaks every time I run an update and I haven't had time to fix it after the last one. Hopefully the integration goes smoothly when I do get to it.

 

This is a video about Jorn Trommelen's recent paper: https://pubmed.ncbi.nlm.nih.gov/38118410/

The gist of it is that they compared 25g protein meals to 100g protein meals, and while a smaller proportion of the larger meal goes toward muscle protein synthesis, the difference is very minor. So the old adage still holds: protein quantity is much more important than timing.

While we're at it, I'd also like to share an older but very comprehensive overview of protein intake by the same author: https://www.strongerbyscience.com/athlete-protein-intake/

 

Ten years ago, Dzmitry Bahdanau from Yoshua Bengio's group recognized a flaw in RNNs: the information bottleneck of a fixed-length hidden state. They put out a paper introducing attention to rectify this issue. Not long after, a group of researchers at Google found that you can get rid of the RNN altogether and still get great results with improved training performance, giving us the transformer architecture in their Attention Is All You Need paper. But transformers are expensive at inference time and scale poorly with increasing context length, unlike RNNs. Clearly, the solution is to just use RNNs. Two days ago, we got Were RNNs All We Needed?
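
To make the tradeoff concrete, here's a minimal NumPy sketch of scaled dot-product attention (the transformer's variant; Bahdanau's original was additive). Shapes and names are illustrative. The (n, m) score matrix over all query/key pairs is exactly what removes the fixed-size bottleneck, and also what makes transformers quadratic in context length, whereas an RNN carries one fixed-size state forward:

```python
import numpy as np

def attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, d_v) -> (n, d_v)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n, m) pairwise scores: the quadratic part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

# Toy usage: 5 query positions attending over 7 key/value positions.
rng = np.random.default_rng(0)
out = attention(rng.normal(size=(5, 8)),
                rng.normal(size=(7, 8)),
                rng.normal(size=(7, 8)))
print(out.shape)  # (5, 8)
```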
