
After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.

[-] froztbyte@awful.systems 12 points 1 year ago* (last edited 1 year ago)

This article is an incredibly deep mine of bad takes, wow

Let’s just build a data intensive application on a foundation where we have none of that data nearby to start! Let’s forget about all other prior distribution models! Let’s make faulty rationalists* and leaps of assumptions!

And then you click through the profile:

I’ve spent a decade working in fintech at AIG, the Commonwealth Bank of Australia, Goldman Sachs, Fast, and Affirm in roles spanning data science, machine learning, software, data engineering, credit, and fraud.

Ah, he’s angling to get a priesthood role if this new faith thing happens. Got it.

*e: this was meant to be “rationalisations” and then my phone keyboard did a dumb. I’m gonna leave it that way because it’s funnier

this post was submitted on 01 Oct 2023 to TechTakes