[-] ConsciousCode@beehaw.org 22 points 1 year ago

Good to note that this isn't even hypothetical, it literally happened with cable. First it was ad-funded, then you paid to get rid of ads, then you paid exorbitant prices and got fed ads anyway, and the final evolution was being required to pay $100+ for bundles of channels you'd never watch just to get the one you wanted. It's already happening to streaming services too, which have started to bundle.

[-] ConsciousCode@beehaw.org 33 points 1 year ago

Huh, is this the start of a new post-platform era where we see such business models the way we now see cigarettes?

[-] ConsciousCode@beehaw.org 23 points 1 year ago

Can't be a billionaire if you pass a certain threshold of self-awareness, it's the rules.

[-] ConsciousCode@beehaw.org 28 points 1 year ago

Now I want to see his reaction when people start breaking out the guillotines because his ilk have made peaceful resolution impossible.

[-] ConsciousCode@beehaw.org 33 points 1 year ago

Daily reminder that Firefox is customizable to the point of removing Mozilla's telemetry and making it look and feel almost like Chromium. And no, de-Googled Chromium probably isn't enough, because preliminary code for implementing WEI has been pushed upstream (basically, they added the code that makes it possible for WEI to be implemented, strongly suggesting they intend to implement it upstream and not just in Chrome).

[-] ConsciousCode@beehaw.org 23 points 1 year ago

It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and the effects on the model, even if perfectly executed, are unpredictable. It could run into issues of "race blindness", where the model has no idea these words are bad and is therefore incapable of accommodating humans when the topic comes up. Suppose in 5 years there's a therapist AI (not ideal, but mental health is horribly understaffed and most people can't afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
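
For anyone unfamiliar with the Scunthorpe problem, here's a toy sketch (hypothetical, not any real filter) of why substring blacklists misfire:

```python
# Toy illustration of the Scunthorpe problem: a naive substring
# blacklist flags innocent words that merely contain a banned string.
def naive_filter(text: str, banned=("ass",)) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in banned)

# "classic" contains "ass", so it gets flagged as a false positive,
# just as "Scunthorpe" trips filters that ban a slur it contains.
```

Scrubbing training data with rules like this scales that same failure mode up to the entire corpus.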

Techniques like "constitutional AI" and RLHF, applied after the foundation model is trained, really are the best approach here: they let the model form an unbiased view of a very biased culture, and then you shape its attitudes afterwards.

[-] ConsciousCode@beehaw.org 47 points 1 year ago

To be honest I'm fine with it in isolation, copyright is bullshit and the internet is a quasi-socialist utopia where information (an infinitely-copyable resource which thus has infinite supply and 0 value under capitalist economics) is free and humanity can collaborate as a species. The problem becomes that companies like Google are parasites that take and don't give back, or even make life actively worse for everyone else. The demand for compensation isn't so much because people deserve compensation for IP per se, it's an implicit understanding of the inherent unfairness of Google claiming ownership of other people's information while hoarding it and the wealth it generates with no compensation for the people who actually made that wealth. "If you're going to steal from us, at least pay us a fraction of the wealth like a normal capitalist".

If they made the models open source then it'd at least be debatable, though still suss since there's a huge push for companies to replace all cognitive labor with AI whether or not it's even ready for that (which itself is only a problem insofar as people need to work to live, professionally created media is art insofar as humans make it for a purpose but corporations only care about it as media/content so AI fits the bill perfectly). Corporations are artificial metaintelligences with misaligned terminal goals so this is a match made in superhell. There's a nonzero chance corporations might actually replace all human employees and even shareholders and just become their own version of skynet.

Really what I'm saying is we should eat the rich, burn down the googleplex, and take back the means of production.

[-] ConsciousCode@beehaw.org 21 points 1 year ago* (last edited 1 year ago)

People arguing he shouldn't be prosecuted is wild; it's like we've been so cowed into submission by this dumpster fire of an electoral system that we're afraid to prosecute high treason because the traitor might win otherwise.

18
Pathos v Logos (beehaw.org)
submitted 1 year ago* (last edited 1 year ago) by ConsciousCode@beehaw.org to c/politics@beehaw.org

How do you argue with someone who's confused a lack of emotional connection to a topic with objectivity and rationality? Say a topic profoundly affects you and those you care about, but not the other person, so you get angry and flustered and they seem to think this means you're less objective as a result and it's an easy win.

[-] ConsciousCode@beehaw.org 38 points 1 year ago

I guess "checks and balances" means nothing, then. What happens when Congress passes laws to regulate them and they just say "nuh uh, that's unconstitutional" when it's obviously and demonstrably not?

[-] ConsciousCode@beehaw.org 24 points 1 year ago

Is it time to DDoS fax gay porn to Mississippi offices?

[-] ConsciousCode@beehaw.org 23 points 1 year ago

Recently found this gem: https://adnauseam.io/

It's an ad blocker that hides ads (rather than blocking them outright) and clicks them for you in the background, which means you waste advertisers' money, support creators, can't get flagged for ad blocking as easily, and they can't build a proper profile from your ad activity since it's all noise. Haven't installed it yet, but this might be the push I needed.

13

Considering the potential of the fediverse, is there any version of that for search engines? Something to break up a major point of internet centralization, fragility, and inertia to change (e.g. Google will never, ever offer IPFS searches). Not only would decentralization be inherently beneficial, it would mean we're no longer compelled to hand over private information to centralized, unvetted corporations like Google, Microsoft, and DuckDuckGo.

[-] ConsciousCode@beehaw.org 42 points 1 year ago

The hype cycle around AI right now is misleading. It isn't revolutionary because of these niche one-off use cases; it's revolutionary because it's one AI that can do anything. The problem is that what it's most useful for is boring to non-technical people.

Take the library I wrote to create "semantic functions" from natural-language tasks. One of the examples I keep returning to in order to demonstrate its usefulness is:

from servitor import semantic

@semantic
def list_people(text) -> list[str]:
    '''List the people mentioned in the given text.'''

8 months ago, this would've been literally impossible. I could approximate it with thousands of lines of code using SpaCy and other NLP libraries to do NER, maybe a dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. Here, I just tell the AI to do it and it... does. Just like that.

But you can't hype up an algorithm that does boring stuff like NLP, so people focus on the danger of AI (which is real, but laymen and news focus on the wrong things), how it's going to take everyone's jobs (it will, but that's a problem with our system which equates having a job to being allowed to live), how it's super-intelligent, etc. It's all the business logic and doing things that are hard to program but easy to describe that will really show off its power.
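
For contrast, here's a hypothetical few-line version of the heuristic approach described above (capitalized-word matching, nowhere near real NER), which shows how brittle it gets:

```python
import re

# Hypothetical heuristic "person finder": grab capitalized words and
# subtract a hand-maintained stoplist. Real NER needs far more.
def list_people_heuristic(text: str) -> list[str]:
    words = re.findall(r"\b[A-Z][a-z]+\b", text)
    # Crude false-positive filter: a small stoplist of known non-names.
    stoplist = {"The", "London", "Monday"}
    return [w for w in words if w not in stoplist]

list_people_heuristic("Alice met Bob in London on Monday.")
# ["Alice", "Bob"] here, but lowercase names, nicknames, and any
# city or weekday missing from the stoplist all break it.
```

Every edge case means another rule, while the semantic-function version just states the task.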

4

Not sure if this is the right place to put this, but I wrote a library (MIT) for creating "semantic functions" using LLMs to execute them. It's optimized for ergonomics and opacity, so you can write your functions like:

from servitor import semantic
@semantic
def list_people(text) -> list[str]:
    """List the people mentioned in the text."""

(That's not a typo: the body of the function is just its docstring. servitor detects that the function returns None and uses the docstring as the task instead.)
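
To make the mechanism concrete, here's a minimal sketch of how a decorator like that could work. This is my illustration, not servitor's actual code, and `semantic_sketch` is a hypothetical name:

```python
import functools

def semantic_sketch(llm):
    """Hypothetical @semantic-style decorator factory: `llm` is any
    callable that takes a prompt string and returns a completion."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # A docstring-only body returns None, so fall back to the
            # docstring as the task description and let the LLM run it.
            result = func(*args, **kwargs)
            if result is None and func.__doc__:
                prompt = f"{func.__doc__}\n\nInput: {args} {kwargs}"
                return llm(prompt)
            return result
        return wrapper
    return decorator
```

The real library presumably also parses the return annotation to coerce the LLM's output, which this sketch skips.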

Basic setup:

$ pip install .[openai]   # for the OpenAI connector
$ pip install .[gpt4all]  # or this one, for local models via gpt4all
$ cp .env.template .env

Then edit .env to have your API key or model name/path.

I'm hoping for this to be a first step towards people treating LLMs less like agents and more like inference engines - the former is currently prevalent because ChatGPT is a chatbot, but the latter is more accurate to what they actually are.

I designed it specifically so it's easy to switch between models and LLM providers without requiring dependencies for all of them. OpenAI is implemented because it's the easiest for me to test with, but I also implemented gpt4all support as a first local model library.
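
One common way to get that property (a guess at the pattern, with illustrative module paths, not servitor's actual layout) is to resolve the provider's adapter module lazily, so a backend's dependencies are only imported if you actually select it:

```python
import importlib

# Hypothetical adapter registry; the module paths are made up.
_PROVIDERS = {
    "openai": "servitor_adapters.openai",
    "gpt4all": "servitor_adapters.gpt4all",
}

def get_adapter(name: str):
    """Import and return the adapter for `name`, failing fast on typos.
    Nothing is imported until a provider is actually requested."""
    try:
        path = _PROVIDERS[name]
    except KeyError:
        raise ValueError(f"unknown provider: {name!r}") from None
    return importlib.import_module(path)
```

With this shape, installing only the `.[openai]` extra never triggers an import error for the gpt4all backend you didn't ask for.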

What do you think? Can you find any issues? Want to implement any connectors or adapters? Any features you'd like to see? What can you make with this?


ConsciousCode

joined 1 year ago