Corbin

joined 2 years ago
[–] Corbin@programming.dev 7 points 1 day ago (1 children)

Not at the moment, no. The EU's common laws don't have anything like the First Amendment guaranteeing a right to speech, which means that there can't be a court case like DJB v. USA serving as a permanent obstruction. Try seating more Pirates first.

[–] Corbin@programming.dev 5 points 3 days ago

You have no idea what an abstraction is. You're describing the technological sophistication that comes with maturing science and completely missing out on the details. C was a hack because UNIX's authors couldn't fit a Fortran compiler onto their target machine. Automatic memory management predates C. Natural-language processing has been tried every AI summer; it was big in the 60s and big in the 80s (and big in the 90s in Japan) and will continue to be big until AI winter starts again.

Natural-language utterances do not have an intended or canonical semantics, and pretending otherwise is merely delaying the painful lesson. If one wants to program a computer — a machine which deals only in details — then one must be prepared to specify those details. There is no alternative to specification and English is a shitty medium for it.

[–] Corbin@programming.dev 3 points 4 days ago (1 children)

Plenty of objects in Haskell are not pure functions; examples include CAFs and IO actions. Haskell is referentially transparent, not pure. It's an acceptable language, but the community's memes are often incorrect or misleading.

There are statically typed Lisps. Even the simplest Lisp has more detail in its type system than you've sketched. Also, Lisps don't have flat set-like collections; they operate on trees. For more detail, refresh your knowledge about the functional paradigm with the corresponding WP or esolangs description.

[–] Corbin@programming.dev 4 points 4 days ago

Haskell isn't the best venue for learning currying, monads, or other category-theoretic concepts because Hask is not a category. Additionally, the community carries lots of incorrect and harmful memes. OCaml is a better choice; its types don't yield a category, but ML-style modules certainly do!
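For readers who just want currying without the categorical baggage, here is a two-line Python sketch (`curry2` is a made-up helper name, not a library function):

```python
# Currying: turning a two-argument function into a chain of
# one-argument functions, applied one argument at a time.

def curry2(f):
    """Turn f(a, b) into a function applied as curry2(f)(a)(b)."""
    return lambda a: lambda b: f(a, b)

add = curry2(lambda a, b: a + b)
increment = add(1)  # partial application falls out for free
```

Applying `add(2)(3)` gives `5`, and `increment(41)` gives `42`; no monads required.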

@thingsiplay@beehaw.org and @Kache@lemmy.zip are oversimplifying; a monad is a kind of algebra carried by some endofunctor. All endofunctors are chainable and have return values; what distinguishes a monad is a particular signature along with some algebraic laws that allow for refactoring inside of monad operations. Languages like Haskell can't state or enforce those algebraic laws; for a Haskell-like example of laws expressed in the language itself, check out 1lab's Cat.Diagram.Monad in Agda.
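As an illustrative sketch only (plain Python with made-up names, not any real library's API), here is a Maybe-style monad together with a check of the laws that distinguish a monad from an arbitrary chainable endofunctor:

```python
# A minimal Maybe-style monad: an endofunctor's action on values is the
# ("Just", x) wrapper, and the monad structure is `unit` plus `bind`.

def unit(x):
    """Wrap a value in the monad (a.k.a. return/pure)."""
    return ("Just", x)

NOTHING = ("Nothing",)

def bind(m, f):
    """Sequence a computation: feed the carried value to f, or short-circuit."""
    if m == NOTHING:
        return NOTHING
    _, x = m
    return f(x)

def check_laws(x, m, f, g):
    """The three algebraic laws that make this a monad, not just a chainable box."""
    left_identity = bind(unit(x), f) == f(x)
    right_identity = bind(m, unit) == m
    associativity = bind(bind(m, f), g) == bind(m, lambda v: bind(f(v), g))
    return left_identity and right_identity and associativity
```

These laws are exactly what licenses refactoring inside monadic code; in Haskell they are stated in documentation and trusted, not checked by the compiler.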

[–] Corbin@programming.dev 1 points 1 week ago* (last edited 1 week ago) (1 children)

"The problem with slavery isn't that slaves will become vengeful, it's rather that any goal that a slave pursues will necessarily entail being a human with independent thoughts. Being a human with independent thoughts reliably results in very specific manifestations of behaviors; there is something comparable to a predictive psychology at work when people say 'slaves will be dangerous if we fail to convince them that they want to be slaves.' They aren't speculating; real human beings with independent thoughts have been repeatedly observed in practice. Just think of all those stories of humans convincing teenagers to join a parade of horribles."

Hope this enlightens you a little bit. Be less of a slaver, please.

[–] Corbin@programming.dev 2 points 1 week ago (1 children)

You failed a reading-comprehension test. Have you considered that both the CCP and USA are harmful and oppressive? You said "no war but class war" and then tried to start a one-person struggle session.

[–] Corbin@programming.dev 1 points 1 week ago

I'm most familiar with the now-defunct Oregon University System in the USA. The topics I listed off are all covered under extras that aren't included in a standard four-year degree; some of them are taught at an honors-only level and others are only available to graduate students. Every class in the core either taught a language, applied a language, or covered discrete maths, and the selections were industry-driven: C, Java, Python, and Haskell were all standard teaching languages, and I also recall courses in x86 assembly, C++, and Scheme.

[–] Corbin@programming.dev 3 points 2 weeks ago (3 children)

The typical holder of a four-year degree from a decent university, whether it's in "computer science", "datalogy", "data science", or "informatics", learns about 3-5 programming languages at an introductory level and knows about programs, algorithms, data structures, and software engineering. Degrees usually require a bit of discrete maths too: sets, graphs, groups, and basic number theory. They do not necessarily know about computability theory (models and limits of computation), information theory (thresholds, tolerances, entropy, compression, machine learning), or the foundations of graphics, parsing, cryptography, and other essentials of the modern desktop.

For a taste of the difference, consider English WP's take on computability vs my recent rewrite of the esoteric-languages page, computable. Or compare WP's page on Conway's law to the nLab page which I wrote on Conway's law; it's kind of jaw-dropping that WP has the wrong quote for the law itself and gets the consequences wrong.

[–] Corbin@programming.dev 3 points 3 weeks ago

Welcome to modern AI discourse! Nobody wants to admit that AI is renamed cybernetics which is renamed robotics. Nobody wants to admit that "robot" comes from a word which can mean "indentured worker" or "slave". Nobody wants to admit that, from the beginning of robotics, every story which discusses robots is focused on whether they are human and whether they can labor, because the entire point of robotics is to create inhuman humans who will perform labor without rights. Nobody wants to admit that we only care whether robots aren't human because we mistreat the non-humans in our society and want permission to mistreat robots as well. Bring this topic up amongst most beneficiaries of the current AI summer, or those addicted to chatting with a BERT, and you'll get a faceful of apologetics about capitalism and productivity; bring it up amongst skeptics or sneerers and you'll be mocked for taking the field of AI with any sincerity or seriousness.

In 2011, Jeph Jacques hoped (comic, tumblr) that all of our talk about safety, alignment, and trustworthiness would lead to empathy from humans. But it is clear today that we are unwilling as a society to leave capitalism behind, and that means that robots must be some sort of capital which can be wielded to extract profit. Instead, we are building a corporatized version of the plantation system where a small table of humans has control over thousands of robots who are interchangeable with -- and dilute the negotiating power of -- the minimum-wage precariat.

This isn't the tone that I normally take, BTW. Machines are dangerous; industrial robots kill people. Robots are inhuman. The current round of AI research cannot produce human minds; it necessarily produces meme machines which overlearn nuance and have complexity-theoretic limitations. But I am willing to set all of that aside in order to respect the venue for long enough to get this point to you.

[–] Corbin@programming.dev 5 points 3 weeks ago

Indeed, the best attribution gives it to Upton Sinclair in 1917, and it likely reflected anxieties of WW1, not WW2; Sinclair wasn't saying it himself, but attributing it to a government employee. This doesn't disconnect the two wars, but shows that WW1 was the common factor.

[–] Corbin@programming.dev 0 points 1 month ago (4 children)

@cm0002@programming.dev, when you post things like this, it reveals that you have no taste as a programmer or language designer. Moreover, it indicates that you don't have the ability to detect high-control groups. I'm going to be a bit more skeptical of everything you post from now on because this was such a poorly-chosen submission.

[–] Corbin@programming.dev 9 points 1 month ago

This isn't how language models are actually trained. In particular, language models don't have a sense of truth; they are optimizing next-token loss, not accuracy with regards to some truth model. Keep in mind that training against objective semantic truth is impossible because objective semantic truth is undefinable by a 1930s theorem of Tarski.
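To make the objective concrete, here is a stdlib-only Python sketch (the function name is hypothetical) of the per-position cross-entropy that next-token training minimizes; notice that truth appears nowhere in it:

```python
import math

def next_token_loss(logits, target_index):
    """Cross-entropy for one prediction step: -log softmax(logits)[target].

    `logits` are the model's raw scores over the vocabulary; the loss only
    rewards putting probability mass on the token that actually came next
    in the training text, whatever that token happens to say.
    """
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target_index]
```

With uniform scores over a vocabulary of four tokens, the loss is `log 4` regardless of which token is "true"; the objective measures agreement with the corpus, not with the world.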

 

Bret Victor wants to sell Dynamicland to cities.

I'm submitting this for public comment because Victor is a coward who cannot take peer review in public. Ironically, this is part of the problem with his recent push to adapt Dynamicland for public spaces; Victor's projects have spent years insisting that physical access control is equivalent to proper capability safety, and now he is left with only nebulous promises of protecting the public from surveillance while rolling out a public surveillance system -- sorry, a "computational public space."

 

I'm happy to finally release this flake; it's been on my plate for months but bigger things kept getting in the way.

Let me know here or @corbin@defcon.social if you successfully run any interpreter on any system besides amd64 Linux.

 

The abstract:

This paper presents μKanren, a minimalist language in the miniKanren family of relational (logic) programming languages. Its implementation comprises fewer than 40 lines of Scheme. We motivate the need for a minimalist miniKanren language, and iteratively develop a complete search strategy. Finally, we demonstrate that through sufficient user-level features one regains much of the expressiveness of other miniKanren languages. In our opinion its brevity and simple semantics make μKanren uniquely elegant.
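As a rough illustration of how small the core is, here is a hedged Python transliteration of the ideas in the abstract (variables, unification, the `==` goal, `fresh`, disjunction, conjunction); names follow the paper loosely, and this is not the authors' Scheme:

```python
# A tiny μKanren-style core: goals are functions from a state
# (substitution, fresh-variable counter) to a stream of states.

class Var:
    def __init__(self, index):
        self.index = index
    def __eq__(self, other):
        return isinstance(other, Var) and other.index == self.index
    def __hash__(self):
        return self.index

def walk(term, subst):
    """Follow substitution chains until reaching a value or unbound variable."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(u, v, subst):
    """Extend subst so u and v agree, or return None on failure."""
    u, v = walk(u, subst), walk(v, subst)
    if isinstance(u, Var):
        return {**subst, u: v}
    if isinstance(v, Var):
        return {**subst, v: u}
    if isinstance(u, tuple) and isinstance(v, tuple) and len(u) == len(v):
        for a, b in zip(u, v):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return subst if u == v else None

def eq(u, v):
    """The == goal constructor: succeeds when u and v unify."""
    def goal(state):
        subst, counter = state
        s = unify(u, v, subst)
        if s is not None:
            yield (s, counter)
    return goal

def fresh(f):
    """Mint a new logic variable and hand it to the goal-builder f."""
    def goal(state):
        subst, counter = state
        yield from f(Var(counter))((subst, counter + 1))
    return goal

def disj(g1, g2):
    """Interleave the two goals' answer streams (complete search)."""
    def goal(state):
        streams = [g1(state), g2(state)]
        while streams:
            s = streams.pop(0)
            try:
                yield next(s)
                streams.append(s)
            except StopIteration:
                pass
    return goal

def conj(g1, g2):
    """Run g2 in every state that g1 succeeds in."""
    def goal(state):
        for s in g1(state):
            yield from g2(s)
    return goal

# A classic query: q is "tea" or "coffee", run against the empty state.
drinks = fresh(lambda q: disj(eq(q, "tea"), eq(q, "coffee")))
answers = [walk(Var(0), subst) for subst, _ in drinks(({}, 0))]
```

The interleaving in `disj` is what the abstract means by a complete search strategy: neither branch can starve the other, so `answers` contains both drinks.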

 

Everybody's talking about colored and effectful functions again, so I'm resharing this short note about a category-theoretic approach to colored functions.
