nucleative

joined 2 years ago
[–] nucleative@lemmy.world 1 points 2 days ago (1 children)

I’ve heard of vibe coding but in the context of being able to identify music that fits a “vibe”. What are you talking about?

This is when you give some LLM a prompt such as "write a game like Minecraft except cooler" and the system will output some code that might run and might vaguely resemble a block game.

So then you go back and ask for more; it does something to the code, potentially improving or breaking it; you go back again, ask for more, and repeat over and over. I'm being a little sarcastic because most serious developers look down on this, but really, this is how a lot of coding is happening these days. There are tools that make the process somewhat usable, and they're getting better every day.

[–] nucleative@lemmy.world 2 points 2 days ago (3 children)

Interesting. I can buy that idea: a model that's designed to be general and answer all questions is going to have to make compromises in a lot of ways.

So it's possible that model benchmarking needs to be revised in some way to give a more useful analysis of a model's capabilities.

The industry is quickly moving towards agents, MCP connections (sources of real-time data for the model to pull from, plus APIs that let the model perform tasks, like putting things on a calendar), RAG (retrieval-augmented generation: augmenting the prompt with a source of truth, such as a 100-page PDF guide), and models that seem more aware that they can get data from other sources.
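
To make the RAG part concrete, here's a minimal sketch assuming the OpenAI Python client; `search_pdf_chunks()` is a made-up placeholder for whatever retrieval you use. The point is just that the retrieved text from the PDF gets pasted into the prompt as a source of truth:

```python
# Minimal RAG sketch. search_pdf_chunks() is a hypothetical retrieval helper;
# real setups use embeddings and a vector store.
from openai import OpenAI

client = OpenAI()

def search_pdf_chunks(question: str) -> list[str]:
    # Hypothetical: return the passages from the 100-page PDF guide that look
    # most relevant to the question (typically via an embeddings/vector search).
    raise NotImplementedError

def answer_with_rag(question: str) -> str:
    context = "\n\n".join(search_pdf_chunks(question))
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is just an example
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```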

The future might become specialized models all the way down.

Just today I'm playing with "vibe coding" and using one agent as an orchestrator that assigns tasks to other agents and monitors them. The result is still slightly bullshit code, but it's amusing to watch it work. Not sure yet if this is a strategy to spend all my money on API fees or whether it will result in something useful 😂
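
For anyone curious what that pattern looks like under the hood, here's a rough sketch with the OpenAI Python client (prompts, model name, and helpers are all made up; real agent tooling adds tool use, retries, shared context, and so on):

```python
# Rough orchestrator/worker sketch; prompts, model name and structure are made up.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def build_feature(goal: str) -> list[str]:
    # Orchestrator: break the goal into subtasks, one per line.
    plan = ask("You are an orchestrator. List the coding subtasks, one per line.", goal)
    outputs = []
    for task in (line.strip() for line in plan.splitlines() if line.strip()):
        # Worker: handle one subtask; a second agent reviews the result.
        code = ask("You are a coding agent. Return only code.", task)
        review = ask("You are a reviewer. Briefly point out obvious problems.", code)
        outputs.append(f"# Task: {task}\n{code}\n# Review: {review}")
    return outputs
```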

[–] nucleative@lemmy.world 7 points 3 days ago (2 children)

I'm not sure why the above comment was downvoted so hard. This community should encourage insightful comments.

It seems like overall college degrees are still a worthwhile financial investment on average.

If you disagree, let's have a dialogue.

https://www.forbes.com/sites/michaeltnietzel/2024/03/01/college-degrees-lead-to-142-trillion-gain-in-career-earnings-study-finds/

Compared to the average high school graduate, the earnings premiums were:

$495,000 over a lifetime for people who completed an associate’s degree;
$1 million for those who completed a bachelor’s degree; and
$1.7 million for those with a graduate degree.

https://www.bls.gov/careeroutlook/2021/data-on-display/education-pays.htm

For example, workers with a bachelor’s degree had median weekly earnings of $1,305 in 2020, compared with $781 for workers with a high school diploma. And the unemployment rate for bachelor’s-level workers was 5.5 percent, compared with 9.0 percent for those whose highest level of education was a high school diploma.

https://www.cnbc.com/2025/04/18/median-return-on-investment-for-a-college-degree.html

The typical college graduate can expect a median 12.5% return on their investment in higher education.

[–] nucleative@lemmy.world 6 points 3 days ago (5 children)

You can do due diligence as a buyer forever, but if the seller lies or doesn't disclose... problems like these happen. Lawsuits are potentially incoming to figure that one out.

[–] nucleative@lemmy.world 3 points 3 days ago (6 children)

Just curious if you're a developer or using LLMs often.

I like Anthropic's Sonnet 3.7 model for agent and code-related tasks more than the OpenAI models at the moment.

DeepSeek and Llama can be run offline, which is great for certain uses, especially the aforementioned BS tasks that can burn through API tokens. Quality of output doesn't match the top models, but for many that's secondary to privacy.

Not sure where things are at with DALL-E 3 image generation, but the last time I looked it seemed like Stable Diffusion had gotten damn good and is extensible in ways that DALL-E is not.

For voice recognition and TTS output with emotion, OpenAI has the best I've ever heard.

For image recognition OpenAI might lead, but the Llama 4 multimodal stuff is pretty awesome.

Anyways, I'm just some rando, but my observation is that OpenAI had better get on that IPO fast unless they have some magic in the pipeline, because they're being attacked by competent solutions from every side in a niche that is showing diminishing promise to change everything the farther we go.

[–] nucleative@lemmy.world 4 points 4 days ago (1 children)

Aren't such posts getting people banned online?

[–] nucleative@lemmy.world 2 points 4 days ago (10 children)

A lot of their staff will never need to make money again in their lives if they IPO.

But that will also throw OpenAI into the public sphere of needing to make money quarter after quarter forever, which inevitably leads to companies sucking and no longer being innovative. Actually, OpenAI is already teetering on this as others catch up. Sam and the other top guys will leave because they hate being told what to do and don't need it.

Facebook and DeepSeek are literally giving it away for free now to kill the competition, and in some areas other competitors are already producing better products.

It's a weird space. I used to think we were on the verge of entering a whole new era of technology, but now I think it's going to be muted. Perhaps AI (as we know it now) eased some tasks and eliminated some BS stuff that used to waste our time, but we've not yet eliminated most professional jobs - if anything, I think we've just added more for professionals to learn and do to remain competitive.

[–] nucleative@lemmy.world 3 points 4 days ago

I had a LaserJet 4M until just a few years ago. It still had a BNC port on the back.

Damn that printer was the goat.

[–] nucleative@lemmy.world 4 points 6 days ago (1 children)

Is it possible to use a VoIP-based SMS number for registration?

Those are a little easier to get anonymously than physical SIM cards.

[–] nucleative@lemmy.world 1 points 6 days ago

I use a cronjob with certbot to renew.
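
For reference, the cron entry is something like this (paths, schedule, and the reload command are just an example; certbot skips certificates that aren't close to expiry):

```
# Example crontab entry - adjust paths, schedule, and the reload command for your setup
17 3,15 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"
```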

I also have Uptime Kuma set up to alert if certificates are getting close to expiration.

[–] nucleative@lemmy.world 5 points 6 days ago

We need a decentralised search engine.

... That's all the people outside. It's really inefficient.

 

I was recently in the Bay Area and tried these e-bikes from Lyft.

When you're finished you are expected to return them to a docking zone as opposed to ditching them wherever you finish. These parking locations are all over the place and easy to find.

They get the job done, and the bikes are fairly pleasant to ride on flat surfaces. Hills aren't recommended. The city is bike-friendly in most areas, with bike lanes all over.

If you're in SF, looking to get around, and the weather is good, I'd recommend giving them a try.

 

cross-posted from: https://lemmy.world/post/27409933

And there are lots of other sizes too, such as the huge 40135 (40mm x 135mm)

 


Pretty sure I'm getting heat creep up the Bowden tube: the filament is jamming a few cm back from the hot end, and then the extruder can't push it any more. When I pull it out, there's a little molten bulb on the end of the filament.

In this fail, I think it jammed as usual and the extruder found a way to keep going.

I tried turning the hot end down from 215°C to 200°C and it's still failing. My cooling fan is running at 100%.

This is the third time I've had this print fail at about this layer, around 1 hour into what will be a 26 hour print.

Any ideas?

 

I'm in the process of hiring for a position and I have two candidates. It's a tough call because both are very proficient, but each has some unique attributes. I thought I'd ask ChatGPT for assistance with thinking it through.

I recorded myself talking through my thoughts on each candidate as I read through their resumes and the Q&As I'd done with each. Then I uploaded the audio file to the whisper-1 API for transcription (for this I'm using the OpenAI API).

Then I pasted the transcribed text into GPT-4 and prompted it with: "Above is my transcribed notes comparing two candidates for a position. Help me think through this decision by asking me questions, one at a time."
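
For reference, the two API calls look roughly like this with the current OpenAI Python client (the file name and model choice are just examples):

```python
# Rough sketch of the transcribe-then-discuss workflow; file name and model are examples.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the recorded notes with whisper-1.
with open("candidate_notes.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask the chat model to interview me about the decision, one question at a time.
prompt = (
    transcript.text
    + "\n\nAbove is my transcribed notes comparing two candidates for a position. "
    "Help me think through this decision by asking me questions, one at a time."
)
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```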

ChatGPT proceeded to ask me really good questions, one after the other. After a while I felt like it had gotten me to think about many new factors and ideas. After about 22 questions I'd had enough, so I asked it to wrap up and summarize our next steps, and it spit out a bullet-point list of what we'd concluded and what steps we should take next.

I don't know if everyone is using ChatGPT this way, but this is a really useful feedback system.

 

This bike has a 10Ah battery in the seat post and a 7-speed derailleur. Top speed is limited to 25 km/h, but I think it can be reprogrammed to remove the limit.
