sisyphean

joined 2 years ago
MODERATOR OF
[–] sisyphean@programming.dev 1 points 2 years ago

There’s at least one in almost every paragraph of the introduction. He seems to be satisfied with himself a bit more than what is tasteful, but it’s hard to argue that the product is amazing.

[–] sisyphean@programming.dev 2 points 2 years ago (2 children)

It wasn’t easy, but finally I could reverse-engineer its algorithm:

function ask(question) {
    return "…";
}
[–] sisyphean@programming.dev 2 points 2 years ago

Maybe the next bot I write should be GoodBotBadBotBot

[–] sisyphean@programming.dev 4 points 2 years ago (1 children)

I think the incentives are a bit different here. If we can keep the threadiverse nonprofit, and contribute to the maintenance costs of the servers, it might stay a much friendlier place than Reddit.

[–] sisyphean@programming.dev 5 points 2 years ago (1 children)

We should do an AmA with her!

[–] sisyphean@programming.dev 1 points 2 years ago

Lemmy actually has a really good API. Moderation tools are pretty simple though.

[–] sisyphean@programming.dev 2 points 2 years ago

And it works in the other direction too:)

[–] sisyphean@programming.dev 2 points 2 years ago (2 children)

The repetition of the chorus emphasizes the singer’s unwavering dedication to the relationship.

I’m dying 🤣

[–] sisyphean@programming.dev 3 points 2 years ago (4 children)

@AutoTLDR please 😂

[–] sisyphean@programming.dev 2 points 2 years ago (5 children)

Let’s try it this way:

https://rentry.co/evuft

(The bot probably couldn’t extract the text from that js-heavy site you linked to)

[–] sisyphean@programming.dev 2 points 2 years ago (7 children)
[–] sisyphean@programming.dev 2 points 2 years ago (1 children)

This long, image-heavy article will pose a challenge to @AutoTLDR. Let’s see how it fares.

 
 

Here is the link to the example epubs:

https://github.com/mshumer/gpt-author/tree/main/example_novel_outputs

I’m not sure how I feel about this project.

 

TL;DR (by GPT-4 🤖):

The article titled "It’s infuriatingly hard to understand how closed models train on their input" discusses the concerns and lack of transparency surrounding the training data used by large language models like GPT-3, GPT-4, Google's PaLM, and Anthropic's Claude. The author expresses frustration over the inability to definitively state that private data passed to these models isn't being used to train future versions due to the lack of transparency from the vendors. The article also highlights OpenAI's policy that data submitted by API users is not used to train their models or improve their services. However, the author points out that the policy is relatively new and data submitted before March 2023 may have been used if the customer hadn't opted out. The article also brings up potential security risks with AI vendors logging inputs and the possibility of data breaches. The author suggests that openly licensed models that can be run on personal hardware may be a solution to these concerns.

 

cross-posted from: https://programming.dev/post/177822

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

 

cross-posted from: https://programming.dev/post/177822

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

 

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

 

Link to original tweet:

https://twitter.com/sayashk/status/1671576723580936193?s=46&t=OEG0fcSTxko2ppiL47BW1Q

Screenshot:

Transcript:

I'd heard that GPT-4's image analysis feature wasn't available to the public because it could be used to break Captcha.

Turns out it's true: The new Bing can break captcha, despite saying it won't: (image)

 

This is a fascinating discussion of the relationship between goals and intelligence from an AI safety perspective.

I asked my trusty friend GPT-4 to summarize the video (I downloaded the subtitles and fed them into ChatGPT), but I highly recommend just watching the entire thing if you have the time.

Summary by GPT-4:

Introduction:

  • The video aims to respond to some misconceptions about the Orthogonality Thesis in Artificial General Intelligence (AGI) safety.
  • This arises from a thought experiment where an AGI has a simple goal of collecting stamps, which could cause problems due to unintended consequences.

Understanding 'Is' and 'Ought' Statements (Hume's Guillotine):

  • The video describes the concept of 'Is' and 'Ought' statements. 'Is' statements are about how the world is or will be, while 'Ought' statements are about how the world should be or what we want.
  • Hume's Guillotine suggests that you can never derive an 'Ought' statement using only 'Is' statements. To derive an 'Ought' statement, you need at least one other 'Ought' statement.

Defining Intelligence:

  • Intelligence in AGI systems refers to the ability to take actions in the world to achieve their goals or maximize their utility functions.
  • This involves having or building an accurate model of reality, using it to make predictions, and choosing the best possible actions.
  • These actions are determined by the system's goals, which are 'Ought' statements.

Are Goals Stupid?

  • Some commenters suggested that single-mindedly pursuing one goal (like stamp collecting) is unintelligent.
  • However, this only seems unintelligent from a human perspective with different goals.
  • Intelligence is separate from goals; it is the ability to reason about the world to achieve these goals, whatever they may be.

Can AGIs Choose Their Own Goals?

  • The video suggests that while AGIs can choose their own instrumental goals, changing terminal goals is rare and generally undesirable.
  • Terminal goals can't be considered "stupid", as they can't be judged against anything. They're simply the goals the system has.

Can AGIs Reason About Morality?

  • While a superintelligent AGI could understand human morality, it doesn't mean it would act according to it.
  • Its actions are determined by its terminal goals, not its understanding of human ethics.

The Orthogonality Thesis:

  • The Orthogonality Thesis suggests that any level of intelligence is compatible with any set of goals.
  • The level of intelligence is about effectiveness at answering 'Is' questions, and goals are about 'Ought' questions.
  • Therefore, it's possible to create a powerful intelligence that will pursue any specified goal.
  • The level of an agent's intelligence doesn't determine its goals and vice versa.
 

This video shows a really nice and clear example of refactoring an anemic domain model into a rich one.

 

cross-posted from: https://lemmy.fmhy.ml/post/125116

The new wave of AI systems, ChatGPT and its more powerful successors, exhibit extraordinary capabilities across a broad swath of domains. In light of this, we discuss whether artificial INTELLIGENCE has arrived.

Paper available here: https://arxiv.org/abs/2303.12712 Video recorded at MIT on March 22nd, 2023

 

TL;DR (by GPT-4 🤖):

  • Use of AI Tools: The author routinely uses GPT-4 to answer casual and vaguely phrased questions, draft complex documents, and provide emotional support. GPT-4 can serve as a compassionate listener, an enthusiastic sounding board, a creative muse, a translator or teacher, or a devil’s advocate.

  • Large Language Models (LLM) and Expertise: LLMs can often persuasively mimic correct expert responses in a given knowledge domain, such as research mathematics. However, the responses often consist of nonsense when inspected closely. The author suggests that both humans and AI need to develop skills to analyze this new type of text.

  • AI in Mathematical Research: The author believes that the 2023-level AI can already generate suggestive hints and promising leads to a working mathematician and participate actively in the decision-making process. With the integration of tools such as formal proof verifiers, internet search, and symbolic math packages, the author expects that 2026-level AI, when used properly, will be a trustworthy co-author in mathematical research, and in many other fields as well.

  • Impact on Human Institutions and Practices: The author raises questions about how existing human institutions and practices will adapt to the rise of AI. For example, how will research journals change their publishing and referencing practices when AI can generate entry-level math papers for graduate students in less than a day? How will our approach to graduate education change? Will we actively encourage and train our students to use these tools?

  • Challenges and Future Expectations: The author acknowledges that we are largely unprepared to address these questions. There will be shocking demonstrations of AI-assisted achievement and courageous experiments to incorporate them into our professional structures. But there will also be embarrassing mistakes, controversies, painful disruptions, heated debates, and hasty decisions. The greatest challenge will be transitioning to a new AI-assisted world as safely, wisely, and equitably as possible.

view more: ‹ prev next ›