nave

joined 2 years ago
[–] nave@lemmy.ca 1 points 3 months ago

there is no direct rail link between the SIR and the New York City Subway system

[–] nave@lemmy.ca 4 points 3 months ago* (last edited 3 months ago) (2 children)

Staten Island is the only part of New York City without subway access.

[–] nave@lemmy.ca 7 points 3 months ago (1 children)

But Wikipedia says it was invented in Rome?

[–] nave@lemmy.ca 3 points 3 months ago (1 children)

In case you want the actual link https://youtu.be/s2TyVQGoCYo

[–] nave@lemmy.ca 4 points 3 months ago

I played a little of the first game. I’m not a huge sim player but I thought it was pretty fun.

 

cross-posted from: https://lemmy.ca/post/40467081

 
[–] nave@lemmy.ca 1 points 4 months ago (1 children)

Yeah but making average wages doesn’t necessarily mean they’re comfortable.

[–] nave@lemmy.ca 4 points 4 months ago* (last edited 4 months ago) (3 children)

That’s the average salary overall. An average electronics engineer makes $109k a year in the US, and even more in places like California.

[–] nave@lemmy.ca 5 points 4 months ago

Yeah, it’s not an ad break, it’s an interval (like for stage plays).

[–] nave@lemmy.ca 11 points 4 months ago

At no point in its history since its creation has it EVER made money.

Actually they did make money in 2018 and 2019, but then the pandemic caused advertisers to cut spending.

[–] nave@lemmy.ca 13 points 4 months ago* (last edited 4 months ago) (1 children)

*he’s been accused. He hasn’t been charged

[–] nave@lemmy.ca 15 points 5 months ago

That’s their new font. This is the new logo:

(Yes it is almost exactly the same as the old one)

 

The first salvo of RTX 50 series GPUs will arrive in January, with pricing starting at $549 for the RTX 5070 and topping out at an eye-watering $1,999 for the flagship RTX 5090. In between those are the $749 RTX 5070 Ti and $999 RTX 5080. Laptop variants of the desktop GPUs will follow in March, with pricing there starting at $1,299 for 5070-equipped PCs.

[–] nave@lemmy.ca 37 points 6 months ago (1 children)

With a new feature called Hype, YouTube is trying to focus on growing the smaller channels and helping people discover and share new creators. Hype is an entirely new promotional system inside of YouTube: there’s a new button for hyping a video, and the most-hyped videos will appear on a platform-wide leaderboard. It’s a bit like Trending, but it’s focused specifically on smaller channels and on what people specifically choose to recommend rather than just what they watch.

The actual mechanism behind Hype is pretty complicated. A video is only eligible to be hyped in the first seven days after it’s published, and, of course, only if it’s made by a channel with fewer than half a million subscribers. Each user only gets three hypes a week, and each hype is worth a certain number of points that inversely correlates to how many subscribers a given channel has. (The idea is that smaller channels should be able to hit the leaderboard, too, so each hype to a smaller channel will be worth more points — YouTube is doing an awful lot here to try and make sure the biggest channels don’t just dominate the leaderboard.) The 100 videos with the most total points hit the top of the leaderboard.
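To make the inverse weighting concrete, here’s a minimal sketch of how a scoring system like that could work. The point formula and names are my own guesses for illustration, not YouTube’s actual implementation; only the eligibility rules (first seven days, under half a million subscribers, three hypes a week, top-100 leaderboard) come from the description above.

```python
# Minimal sketch of a Hype-style scoring system (assumed mechanics, not YouTube's).

from dataclasses import dataclass

MAX_SUBSCRIBERS = 500_000   # only channels under half a million subscribers are eligible
ELIGIBILITY_DAYS = 7        # videos can only be hyped in their first week
WEEKLY_HYPES_PER_USER = 3   # each user gets three hypes a week

@dataclass
class Video:
    title: str
    channel_subscribers: int
    days_since_publish: int
    points: float = 0.0

def hype_value(subscribers: int) -> float:
    """Assumed inverse weighting: the smaller the channel, the more each hype is worth."""
    return MAX_SUBSCRIBERS / max(subscribers, 1)

def hype(video: Video, hypes_used_this_week: int) -> bool:
    """Apply one hype if both the video and the user are still eligible."""
    eligible = (
        hypes_used_this_week < WEEKLY_HYPES_PER_USER
        and video.days_since_publish <= ELIGIBILITY_DAYS
        and video.channel_subscribers < MAX_SUBSCRIBERS
    )
    if eligible:
        video.points += hype_value(video.channel_subscribers)
    return eligible

def leaderboard(videos: list[Video], size: int = 100) -> list[Video]:
    """The videos with the most total points make the leaderboard."""
    return sorted(videos, key=lambda v: v.points, reverse=True)[:size]
```

Under this guessed formula, a hype to a 10,000-subscriber channel is worth 50 points while a hype to a 400,000-subscriber channel is worth 1.25, which is the kind of skew that would keep the biggest eligible channels from simply dominating.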

 
 

For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o. OpenAI is calling this release of o1 a “preview” to emphasize how nascent it is.

The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek, tells me, though the company is being vague about the exact details. He says o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.”

OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a “chain of thought” to process queries, similarly to how humans process problems by going through them step-by-step.
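As a very rough illustration of the reward/penalty idea (a toy sketch only; the problem, the update rule, and every number here are invented, and this is nothing like OpenAI’s actual training setup):

```python
import random

# Toy sketch of reinforcement learning with a "chain of thought": the model works
# through intermediate steps, and a reward or penalty nudges its behavior.
# Everything here is invented for illustration.

def chain_of_thought(x: int, y: int, care: float) -> tuple[list[str], int]:
    """Solve x + y step by step; higher `care` means fewer slips."""
    steps = [
        f"add the tens: {(x // 10) * 10} + {(y // 10) * 10}",
        f"add the ones: {x % 10} + {y % 10}",
        "combine the partial sums",
    ]
    answer = x + y
    if random.random() > care:                  # a careless model sometimes slips
        answer += random.choice([-1, 1])
    return steps, answer

care = 0.5                                      # how reliably the toy model follows its own steps
for episode in range(5000):
    x, y = random.randint(10, 99), random.randint(10, 99)
    _, answer = chain_of_thought(x, y, care)
    reward = 1.0 if answer == x + y else -0.2   # reward correct answers, small penalty for mistakes
    # Nudge the behavior toward whatever earned reward (a crude policy update).
    care = min(1.0, max(0.0, care + 0.002 * reward * (1.0 - care)))

print(f"care after training: {care:.2f}")
```

The loop shape (try a multi-step solution, score the outcome, adjust) is the general reinforcement-learning idea being described; in the real thing the thing being adjusted is presumably the full language model, not a single number.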

At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

I think this is the most important part (emphasis mine):

As a result of this new training methodology, OpenAI says the model should be more accurate. “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. “We can’t say we solved hallucinations.”
