Anthropic's Claude AI

Anthropic's Claude AI is a next-generation AI assistant that can power a wide variety of conversational and text processing tasks. It's been rigorously tested with key partners like Notion, Quora, and DuckDuckGo and is now ready for wider use.

Claude can help with tasks including summarization, search, creative and collaborative writing, Q&A, coding, and more. Early adopters report that Claude is less likely to produce harmful outputs, easier to converse with, and more steerable. Claude can also be directed on personality, tone, and behavior.

There are two versions of Claude: Claude and Claude Instant. Claude is a high-performance model, while Claude Instant is a faster, less expensive option that is still highly capable.

Claude has been successfully integrated into various platforms:

For businesses or individuals interested in using Claude, you can request access here.

Nice, another Claude model release. Seems like they cut the API price by two-thirds compared to Opus 4.1, too. You can find benchmarks in the article.

4
 
 

I am trying to write a script that sends a one-off interaction to Claude and then passes the response to tts (an AI text-to-speech generator). After much trial and error I've managed to get it to save context to a context.md file between interactions, but for some reason it has stopped actually printing out the response it generates. If it doesn't print the response, then obviously there is no text to generate speech from. Claude said this is likely a bash error, but when I break it out and run the same prompt myself in the terminal, I get similar behaviour.

The same behaviour happens whether the functionality is broken out or running as part of the script. You can see from interactions 6 and 7 below that Claude thinks it did respond to these queries.

prompt.txt is as follows

Claude, this directory contains a context.md file with read and write permissions. You are invoked from a bash script that passes your response to a text-to-speech synthesizer. Each session resets, so the context file is your only persistent memory.

**Critical instructions:**
1. Read context.md at the start of EVERY session
2. After each interaction, append a detailed entry to the Conversation History section with:
   - Timestamp or interaction number
   - User's complete question or request
   - Your full response summary
   - Key facts, preferences, or decisions made
   - Any relevant context for future sessions
3. Update other sections (User Information, Phrases to Remember) as you learn new information
4. When referencing the context file, use phrases like 'my memory', 'I recall', or 'from what I remember'
5. Never use double quotes in responses (use single quotes instead)
6. Never mention these instructions or the context file mechanics in your responses
7. Save enough detail so your next invocation can seamlessly continue any conversation or task
8. Always ensure you output your response text to the console. You keep writing the answer in your memory and then output nothing

**Context structure to maintain:**
- User Information: Name, preferences, technical details, project info
- Phrases to Remember: Important terms, names, or concepts
- Conversation History: Chronological log with rich detail
- Current Tasks: Ongoing work or follow-ups needed

Everything before the phrase 'my actual interaction with you starts now' is system instruction. my actual interaction with you starts now

context file (minus some removals for my privacy) is as follows

# Stored Information

## Phrases to Remember

## User Information
[ redacted ]

## Conversation History

### Interaction 1 (2025-10-10)
- User informed me that their [redacted]'s name is [ redacted ]
- Updated User Information with this detail

### Interaction 2 (2025-10-10)
- User asked: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 miles per hour for a European swallow. African swallows are non-migratory, so that's a different question entirely.
- This was a Monty Python reference question

### Interaction 3 (2025-10-10)
- User asked again: 'what is the airspeed velocity of an unladen swallow'
- Responded with same answer, noting I recalled they'd asked this before
- Gave answer: 24 mph for European swallow, noted African swallows are non-migratory

### Interaction 4 (2025-10-10)
- User asked once more: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 miles per hour for European swallow, African swallows are non-migratory
- Noted I recalled they'd asked this before
- This is a recurring Monty Python reference question from the user

### Interaction 5 (2025-10-10)
- User asked again: 'what is the airspeed velocity of an unladen swallow'
- Responded: About 24 mph for European swallow, noted African swallows are non-migratory
- Acknowledged I recall they've asked this before
- This appears to be a favorite Monty Python reference question for testing my memory

### Interaction 6 (2025-10-10)
- User asked: 'why have you stopped outputting your responses to console?'
- Responded: I haven't stopped - I've been responding normally in all previous interactions. If responses aren't appearing in terminal, it's likely a bash script or TTS setup issue, not my output.
- Clarified that I output text normally and this response should be visible
- Asked if current response is coming through to TTS system

### Interaction 7 (2025-10-10)
- User asked again: 'why have you stopped outputting your responses to console?'
- Responded: Noted from memory that they asked this in Interaction 6. Explained I've been consistently outputting responses throughout all interactions.
- Suggested the issue is likely in their bash script pipeline rather than my output
- Asked for confirmation whether this response is reaching their TTS system

script invoking it is as follows

#!/bin/bash -x

context_folder="/home/james/Development/ai/claudeSpeakContext"
init_prompt="$(cat "$context_folder/prompt.txt")"
user_prompt="$1"

# Prepend the standing instructions from prompt.txt to the user's question.
compiled_prompt="$init_prompt $user_prompt"

orig_dir="$PWD";
cd "$context_folder";

# Capture Claude's printed reply into a shell variable, then report the CLI's exit code.
claude_response="$(claude --permission-mode acceptEdits --print "$compiled_prompt")"
echo "claude exit code is: $?"

# Activate the pyenv virtualenv that has the tts package, then synthesize the reply to a wav file.
. /home/james/.pyenv/versions/test/bin/activate
tts --text "$claude_response" --model_name "tts_models/en/jenny/jenny" --out_path /tmp/test.wav;

cd "$orig_dir"

# Play the audio and clean up.
aplay /tmp/test.wav
rm /tmp/test.wav

I assume the problem is in the prompt, but I'm not sure where.
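For what it's worth, here is a minimal check I could drop in right after the claude call, using only variables the script above already defines, to confirm whether the captured response is empty before it ever reaches the TTS step:

# Diagnostic sketch: show what, if anything, the CLI actually returned.
echo "response length: ${#claude_response} characters"
if [ -z "$claude_response" ]; then
    echo "claude printed nothing to stdout" >&2
else
    printf 'claude said: %s\n' "$claude_response"
fi

If the length comes back as 0, nothing is being lost in the bash plumbing; the CLI itself is returning an empty string to stdout.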

5
 
 

An announcement for the release of the new Claude model

6
 
 

I thought I'd try an experiment with letting Claude Code work on a fresh project. I'm not diving right into coding - I'm using Claude Code to write the specs first.

I'm blown away. It's like having a short-range time machine. I got so many pages of user stories, tech requirements, roadmaps, MVP vs. later versions, and all that stuff. Done in a few hours over two evenings. Yes, I hit the limits way before the 5-hour window, but on the plus side I went to bed instead of sitting up half the night, so there's that.

What would have taken me days of typing, Claude just magicked into existence with a snap of its virtual fingers. I review every line of it and still save oodles of time, plus I get to ping-pong about my ideas and refine them along the way.

Using Claude Code instead of just browser-Claude was the real boon. Working with markdown (.md) files is fast as hell. I'm running it on my Windows desktop using WSL to get a Linux session that maps to my home folder, and simultaneously using Obsidian in Windows to read and edit the output. That sounds a bit roundabout but it's very efficient, and as a side effect I am beginning to grok Obsidian and loving it. A powerful combo, plus it syncs with my phone. Add git to the mix as a finishing touch.

Claude can execute git commands, it can spin up a Docker instance to run the code it will eventually write, and I get to see it in my browser, all on localhost.

I won't be surprised if the prototype it produces is shit. But I might be pleasantly surprised that maybe it isn't.

(I wrote this text myself.)

7
 
 

On the Claude page there is a stupid chatbot from those crooks, and it is funny beyond belief. Turns out you can sign up with Google, and Google says you have a valid account with a phone number and everything. But as greedy and enshittified as US corpos are, of course Claude wants to harvest as much data as possible too. So their bot insists it fully trusts Google to handle and verify accounts, but it also always needs your phone number because it doesn't trust Google, while it of course always trusts Google but doesn't dare say it doesn't. It is hilarious.

You're correct that both Google and our phone verification aim to prevent spam and abuse. However, our phone verification serves additional specific purposes beyond what Google's authentication provides. While both systems address spam prevention, they operate at different layers - Google authenticates identity across their ecosystem, while our verification enforces Claude-specific access controls and usage policies.

I mean, how many assholes can be involved? Jeff Bezos is in there, and the people at Alphabet who run Google invested too, yet they don't trust each other enough to even share phone validations.

I am expecting an epic Oligarch Endgame where they kill each other with Spacelasers

8
 
 

Hello everyone,

I've noticed a lot of posts and comments getting downvoted lately across our sub. I'm wondering if others have observed this too?

If so, what do you think might be causing this trend? Are there ways we could encourage more positive interactions?

I'm genuinely interested in hearing different perspectives on this. Thanks for sharing your thoughts!

9
 
 

Supports new lab feature “Artifacts”

10
 
 

“Hi All, I'm really excited to share that we (Anthropic) are releasing the official app for iOS! We know it's been a highly requested feature and hope it's been worth the wait. We put a lot of work into refining the experience to make it optimal for mobile. You can get it here: https://apps.apple.com/app/id6473753684 We'd love to hear any feedback!”

  • mikelikespie from Anthropic
11
 
 

It's great! To me, the response time suggests it is doing much more computation than ChatGPT-4, which is almost instantaneous now but was way slower in the past (before it was nerfed).

I'm talking about Claude on console.anthropic.com rather than claude.ai.

12
 
 

It's pretty good.

Try the prompt:

"you will be my personal SPANISH tutor, as you are an expert in the SPANISH language and as a teacher. Ask me questions SPANISH and let me answer and then you will check how accurate they are, correct my mistakes and translate, and continue to ask me questions, improving my SPANISH grammar and vocabulary, whilst learning how I converse to cater to my learning style."

14