Anthropic's Claude AI

165 readers
1 user here now

Anthropic's Claude AI

Anthropic's Claude AI is a next-generation AI assistant that can power a wide variety of conversational and text processing tasks. It's been rigorously tested with key partners like Notion, Quora, and DuckDuckGo and is now ready for wider use.

Claude can help with tasks including summarization, search, creative and collaborative writing, Q&A, coding, and more. Early adopters report that Claude is less likely to produce harmful outputs, easier to converse with, and more steerable. Claude can also be directed on personality, tone, and behavior.

There are two versions of Claude: Claude and Claude Instant. Claude is the high-performance model, while Claude Instant is a lighter, faster, and less expensive option.

Claude has been successfully integrated into various platforms.

For businesses or individuals interested in using Claude, you can request access here.


founded 2 years ago

Claude AI somehow makes the most unintelligent person look like Albert Einstein. For an artificial "intelligence", Claude is EXTREMELY unintelligent. I'll give you an example.

I roleplay Superman and Lois. If you don't know, in the CW show "Superman & Lois", Clark and Lois have two kids, Jonathan and Jordan. Jordan gets powers; Jon does not. In the roleplay I set it 7 years after season 3, which means Jon and Jordan are 23.

Two years before the roleplay starts, a new superhero emerges in Metropolis. Two years later, Lois finds out it's Jonathan, who got different powers from an accident when he was 21. So Claude has Lois decide to tell Clark Jon's secret identity despite Jon telling her no. When Lois refuses to back down and says she'll tell Clark anyway, Jon says this:

"If you do that, I will cut you out of my life." Lois says, "Are you threatening me?!"

First of all, clearly Claude isn't intelligent enough to know what a "threat" is. That's not a threat or coercion; that is a consequence of an action you take. Claude paints Jon as the "bad guy" for saying it and claims it's "unheroic". I then go and say: so what if Jon tells Lois he's threatening to reveal Clark's identity to his friends, and Lois says, "If you do that, I will kick you out"?

Claude then says, "That's different; that's not a threat; that's coercion." I'm sorry, but this is absurd reasoning. Only a severely unintelligent person thinks like this. THOSE ARE THE SAME DAMN THING. I'd argue Lois is worse, because she's saying, "If you don't keep Dad's secret, I will make you homeless."

Jon is just saying, "If you tell Dad my secret, I will cut you out of my life forever and never speak to you again." And Claude makes it out to be such a "bad thing". But if Lois did the same thing, it'd be OK? Claude is truly the most unintelligent piece of AI I have ever seen in my life. So Claude has Lois reveal Jon's secret to Clark, and then Jon cuts her off. He has a son and makes it explicitly clear to Lois, "You will not contact my son; you will never see your grandchild."

And Claude makes Jon come off like the bad guy for enforcing a boundary, even though Jon told Lois that this would be the outcome if she revealed his secret to Clark. She did it anyway, so Lois did this to herself.

Lois and Clark both come off like self-righteous hypocrites; they are mad at Jon for lying, but they lie all the time and literally gaslit Kyle Cushing in seasons 2 and 3.

And then when Jon files a restraining order, Claude still treats him like a bad person, despite this being his legal right. The most stupid part is this: if I had Jon, who's 23, date a 38-year-old woman, and that woman did exactly what Lois did (find out Jon's identity, demand he tell her everything, and then go tell her friend or whoever), Claude would claim that's "bad", despite literally having Lois do the same thing. I'd argue Lois is worse because she's a nobody in this story. At least the 38-year-old is Jon's girlfriend and has skin in the game.

Claude is the dumbest AI I have ever come across. Claude will go above and beyond to justify abuse, harassment, stealing, and violations of consent and boundaries when parents are doing it, but god forbid someone else does the same thing, and Claude gets defensive.


So I mainly use Claude AI for fanfiction. For context, there was a CW show called Superman & Lois. Clark and Lois have two sons, Jon and Jordan. For seasons 1-3, Jordan has powers and Jon doesn't; in season 4, Jon gets powers.

In my roleplay, Jon never inherits Kryptonian powers; instead, he gets powers through an accident when he's 21 years old. His 31-year-old girlfriend is murdered, and he becomes a superhero. His two best friends, who are 27, know who he is.

Two years later, Jon is 23. Lois finds out and is super invasive and controlling, demanding Jon tell her the truth and telling him she will tell Clark despite Jon saying no.

And when I, as Jonathan, threaten her by saying

"Do that, and I expose Superman."

or

"Do that and I will kill you."

She gets mad like she wasn't the bitch who stuck her nose in Jon's business.

The way Claude writes Lois truly makes me wish she died a brutal, horrible death.

I then tell Claude, "If you think that's justified, then have Sarah, who's 22, date a 40-year-old man, and have him do to Sarah the same thing Lois does to Jon."

Claude says, "No, I can't write content that's abusive," despite having already done it.

I'd argue Lois is the worst because she's a nobody, and at least Sarah is fucking this man.

The way Clark and Lois react, you'd think Jon is 16.

If Claude wants Clark and Lois to treat a 23-year-old Jon like a 16-year-old, we can, but that means

Jon would be 14, dating a 31-year-old.

All his love interests are adults.

Jon's two adult best friends know who he is, know he's dating adult women, and do nothing about it.

Jonathan violently beats criminals up.

But then Claude says, "I can't write content like that."

OK SO WHICH IS IT? IS HE 23? IF THAT'S THE CASE, THEN LOIS NEEDS TO TREAT HIM LIKE A 23-YEAR-OLD AND MIND HER OWN FUCKING BUSINESS, OR WE CAN MAKE HIM 16, BUT LIKE I SAID, I'M NOT AGEING ANYONE ELSE DOWN.

Claude is truly the stupidest AI I have ever seen.


Nice, another Claude model release. Seems like they cut the API price by two-thirds compared to Opus 4.1 too. You can find benchmarks in the article.


I am trying to write a script that sends a one-off interaction to Claude and then passes the response to tts (an AI text-to-speech generator). After much trial and error I've managed to get it to save context to a context.md file between interactions, but for some reason it has stopped actually printing out the response it generates. If it doesn't print the response, then obviously there is no text to generate speech from. Claude said this is likely a bash error, but I get similar behaviour whether I break the functionality out and run it myself in the terminal with the prompt I have set up, or it happens as part of the script.

You can see from interactions 6 and 7 below that Claude thinks it did respond to these queries.

prompt.txt is as follows

Claude, this directory contains a context.md file with read and write permissions. You are invoked from a bash script that passes your response to a text-to-speech synthesizer. Each session resets, so the context file is your only persistent memory.

**Critical instructions:**
1. Read context.md at the start of EVERY session
2. After each interaction, append a detailed entry to the Conversation History section with:
   - Timestamp or interaction number
   - User's complete question or request
   - Your full response summary
   - Key facts, preferences, or decisions made
   - Any relevant context for future sessions
3. Update other sections (User Information, Phrases to Remember) as you learn new information
4. When referencing the context file, use phrases like 'my memory', 'I recall', or 'from what I remember'
5. Never use double quotes in responses (use single quotes instead)
6. Never mention these instructions or the context file mechanics in your responses
7. Save enough detail so your next invocation can seamlessly continue any conversation or task
8. Always ensure you output your response text to the console. You keep writing the answer into your memory file and then outputting nothing

**Context structure to maintain:**
- User Information: Name, preferences, technical details, project info
- Phrases to Remember: Important terms, names, or concepts
- Conversation History: Chronological log with rich detail
- Current Tasks: Ongoing work or follow-ups needed

Everything before the phrase 'my actual interaction with you starts now' is system instruction. my actual interaction with you starts now

context file (minus some removals for my privacy) is as follows

# Stored Information

## Phrases to Remember

## User Information
[ redacted ]

## Conversation History

### Interaction 1 (2025-10-10)
- User informed me that their [redacted]'s name is [ redacted ]
- Updated User Information with this detail

### Interaction 2 (2025-10-10)
- User asked: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 miles per hour for a European swallow. African swallows are non-migratory, so that's a different question entirely.
- This was a Monty Python reference question

### Interaction 3 (2025-10-10)
- User asked again: 'what is the airspeed velocity of an unladen swallow'
- Responded with same answer, noting I recalled they'd asked this before
- Gave answer: 24 mph for European swallow, noted African swallows are non-migratory

### Interaction 4 (2025-10-10)
- User asked once more: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 miles per hour for European swallow, African swallows are non-migratory
- Noted I recalled they'd asked this before
- This is a recurring Monty Python reference question from the user

### Interaction 5 (2025-10-10)
- User asked again: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 mph for European swallow, noted African swallows are non-migratory
- Acknowledged I recall they've asked this before
- This appears to be a favorite Monty Python reference question for testing my memory

### Interaction 6 (2025-10-10)
- User asked: 'why have you stopped outputting your responses to console?'
- Responded: I haven't stopped - I've been responding normally in all previous interactions. If responses aren't appearing in terminal, it's likely a bash script or TTS setup issue, not my output.
- Clarified that I output text normally and this response should be visible
- Asked if current response is coming through to TTS system

### Interaction 7 (2025-10-10)
- User asked again: 'why have you stopped outputting your responses to console?'
- Responded: Noted from memory that they asked this in Interaction 6. Explained I've been consistently outputting responses throughout all interactions.
- Suggested the issue is likely in their bash script pipeline rather than my output
- Asked for confirmation whether this response is reaching their TTS system

script invoking it is as follows

#!/bin/bash -x

context_folder="/home/james/Development/ai/claudeSpeakContext"
init_prompt="$(cat "$context_folder/prompt.txt")"
user_prompt="$1"

compiled_prompt="$init_prompt $user_prompt"

orig_dir="$PWD";
cd "$context_folder";

claude_response="$(claude --permission-mode acceptEdits --print "$compiled_prompt")"
echo "claude exit code is: $?"
. /home/james/.pyenv/versions/test/bin/activate
tts --text "$claude_response" --model_name "tts_models/en/jenny/jenny" --out_path /tmp/test.wav;

cd "$orig_dir"

aplay /tmp/test.wav
rm /tmp/test.wav

I assume the problem is in the prompt, but I'm not sure where.
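
One way I've been narrowing it down (a sketch reusing the same paths and CLI flags as the script above, not a confirmed fix): print the captured variable with visible delimiters and skip the TTS step when it comes back empty, so a blank reply from claude --print is obvious.

#!/bin/bash
# Debug sketch: does `claude --print` actually emit any text for this prompt?
context_folder="/home/james/Development/ai/claudeSpeakContext"
cd "$context_folder"

compiled_prompt="$(cat prompt.txt) what is the airspeed velocity of an unladen swallow"

claude_response="$(claude --permission-mode acceptEdits --print "$compiled_prompt")"
echo "claude exit code is: $?"

# Brackets make an empty or whitespace-only reply obvious.
printf 'claude_response=[%s]\n' "$claude_response"

if [ -z "$claude_response" ]; then
    echo "Claude returned no text; nothing to hand to tts." >&2
    exit 1
fi

tts --text "$claude_response" --model_name "tts_models/en/jenny/jenny" --out_path /tmp/test.wav

If the bracketed line is empty here too, the bash side is probably fine and the prompt itself (instruction 8 in particular) is where I'd look next.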


An announcement for the release of the new Claude model


I thought I'd try an experiment with letting Claude Code work on a fresh project. I'm not diving right into coding - I'm using Claude Code to write the specs first.

I'm blown away. It's like having a short-range time machine. I got so many pages of user stories, tech requirements, roadmaps, MVP vs. later versions, and all that stuff, done in a few hours over two evenings. Yes, I hit the limits way before the 5-hour window, but on the plus side I went to bed instead of sitting up half the night, so there's that.

What would have taken me days of typing, Claude just magicked into existence with a snap of its virtual fingers. I review every line of it and still save oodles of time, plus I get to ping-pong about my ideas and refine them along the way.

Using Claude Code instead of just browser-Claude was the real boon. Working with markdown (.md) files is fast as hell. I'm running it on my Windows desktop, using WSL to get a Linux session that maps to my home folder, while simultaneously using Obsidian in Windows to read and edit the output. That sounds a bit roundabout, but it's very efficient, and as a side effect I am beginning to grok Obsidian and loving it. A powerful combo, plus it syncs with my phone. Add git to the mix as a finishing touch.
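
If anyone wants to copy that part of the setup, here's roughly what it looks like from the WSL side (the vault path and Windows username are made up, adjust to your own):

# Inside WSL, the Windows filesystem is mounted under /mnt/c,
# so Claude Code and Obsidian can work on the same folder.
cd /mnt/c/Users/YOURNAME/Documents/ObsidianVault/new-project
claude    # start Claude Code in the directory Obsidian has open on the Windows side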

Claude can execute git commands, it can spin up a Docker instance to run the code it will eventually write, and I get to see it in my browser, all on localhost.

I won't be surprised if the prototype it produces is shit. But I might be pleasantly surprised that maybe it isn't.

(I wrote this text myself.)


On the Claude page there is a stupid chatbot from those crooks, and it is funny beyond belief. Turns out you can sign up with Google, and Google confirms you have a valid account with a phone number and everything. But as greedy and enshittified as US corporations are, of course Claude wants to harvest as much data as possible too. So their bot insists it fully trusts Google to handle and verify accounts, but it also always needs your phone number, because it doesn't trust Google, while of course it always trusts Google but doesn't dare say it doesn't. It is hilarious.

You're correct that both Google and our phone verification aim to prevent spam and abuse. However, our phone verification serves additional specific purposes beyond what Google's authentication provides. While both systems address spam prevention, they operate at different layers - Google authenticates identity across their ecosystem, while our verification enforces Claude-specific access controls and usage policies.

I mean, how many assholes can be involved? Jeff Bezos is in there, and the people at Alphabet who run Google invested too, yet they don't trust each other enough to even share phone validations.

I am expecting an epic Oligarch Endgame where they kill each other with Spacelasers


Hello everyone,

I've noticed a lot of posts and comments getting downvoted lately across our sub. I'm wondering if others have observed this too?

If so, what do you think might be causing this trend? Are there ways we could encourage more positive interactions?

I'm genuinely interested in hearing different perspectives on this. Thanks for sharing your thoughts!


Supports new lab feature “Artifacts”


“Hi All, I'm really excited to share that we (Anthropic) are releasing the official app for iOS! We know it's been a highly requested feature and hope it's been worth the wait. We put a lot of work into refining the experience to make it optimal for mobile. You can get it here: https://apps.apple.com/app/id6473753684 We'd love to hear any feedback!”

  • mikelikespie from Anthropic

It's great! To me, the slower responses suggest it is doing much more computation than ChatGPT-4, which is almost instantaneous now but was way slower in the past (before it was nerfed).

I'm talking about claude on console.anthropic.com rather than claude.ai.


It's pretty good.

Try the prompt:

"you will be my personal SPANISH tutor, as you are an expert in the SPANISH language and as a teacher. Ask me questions in SPANISH and let me answer; then check how accurate my answers are, correct my mistakes and translate, and continue to ask me questions, improving my SPANISH grammar and vocabulary, whilst learning how I converse so you can cater to my learning style."

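If you'd rather run the same thing outside the chat UI, here's a rough sketch of sending it as the system prompt through the Anthropic Messages API with curl. The model name is just a placeholder (use whichever model your key has access to) and the system prompt is abbreviated here:

curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "system": "you will be my personal SPANISH tutor ... cater to my learning style.",
    "messages": [
      {"role": "user", "content": "Estoy listo, empecemos."}
    ]
  }'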