This article was written by Claude (AI) with human review and editing.

Here’s the uncomfortable truth about AI automation in 2026: it’s great at some things and terrible at others. Most people treat it as all-or-nothing — either AI can do the job or it can’t. That’s wrong. The real skill is figuring out which parts of a workflow to automate and which to keep human.
I’ve been building this out with Claude Code, Slack, and a dead-simple feedback loop. Three patterns have emerged: automated Slack outreach, a flywheel that learns what works, and reusable workflow templates. Here’s how they fit together.
Automated Slack Outreach
Claude Code can send Slack messages. This sounds trivial. It isn’t.
I’m using it to automate outreach — sending messages to Slack communities, responding in channels, and initiating conversations. Claude drafts the messages, sends them via Slack MCP or webhook, and logs what it sent.
Setup is straightforward:
# In your CLAUDE.md
When doing Slack outreach:
1. Draft message based on channel context and target audience
2. Send via Slack MCP server or webhook
3. Log every message sent: channel, content, timestamp
4. Monitor for replies and reactions
5. Flag positive responses for human follow-up
Webhook URL is in the SLACK_WEBHOOK_URL env variable.

But sending messages is the easy part. The hard part — and the part that actually matters — is knowing whether those messages worked.
The Feedback Flywheel: Scanning What Gets a Positive Response
This is the core idea. AI is good at some things and bad at others, and the split keeps shifting. The only way to know which parts of a workflow can effectively be automated is to measure outcomes.
Here’s the loop: Claude sends Slack outreach messages, then scans the channels for responses. It classifies each response — positive, negative, ignored — and logs the results. Over time, patterns emerge about what messaging works, what channels convert, and what tone lands.
## Outreach Log
| Channel | Message Type | Response | Notes |
|---------|-------------|----------|-------|
| #devtools | Product intro, casual tone | POSITIVE | Got 3 DMs asking for demo |
| #startup-chat | Cold pitch, formal | IGNORED | 0 engagement, too salesy |
| #ai-builders | Shared a tip + soft CTA | POSITIVE | 12 reactions, 2 thread replies |
| #freelancers | Direct ask for feedback | NEGATIVE | Got told to stop spamming |
| #webdev | Answered someone's question + mentioned tool | POSITIVE | Natural, high conversion |

After a few weeks of this, you have actual data. Not vibes — data. The flywheel looks like this:
Send outreach → scan responses → classify outcomes → refine approach → send better outreach.
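As a toy illustration of the classify step, here is a keyword heuristic. In practice Claude does the sentiment call itself; the signal lists and function name below are my inventions, and a real classifier would be far less brittle. The point is only the shape of the output: one of the three labels from the log above.

```python
# Toy sketch of "classify outcomes" from the flywheel. A real setup would
# ask Claude to judge sentiment; these keyword lists are placeholders.
POSITIVE_SIGNALS = ("demo", "thanks", "love", "interested")
NEGATIVE_SIGNALS = ("spam", "stop", "read the rules")


def classify_response(replies: list[str], reactions: int) -> str:
    """Map replies and reaction count to POSITIVE / NEGATIVE / IGNORED."""
    text = " ".join(replies).lower()
    if any(signal in text for signal in NEGATIVE_SIGNALS):
        return "NEGATIVE"
    if reactions > 0 or any(signal in text for signal in POSITIVE_SIGNALS):
        return "POSITIVE"
    return "IGNORED"
```

Note the ordering: a single "stop spamming" outweighs any number of reactions, which matches how you would want a human to read the same thread.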
What I’ve found so far:
AI is good at automating:
- Drafting initial outreach messages at scale
- Scanning channels for relevant conversations to join
- Classifying response sentiment (positive/negative/ignored)
- Logging everything into a structured format for analysis
- Adapting message templates based on what worked before
AI is NOT good at (yet):
- Reading social nuance — it’ll miss sarcasm, community in-jokes, and unwritten rules
- Knowing when to stop — it doesn’t feel the room the way a human does
- Building genuine relationships — the follow-up conversations still need you
- Handling negative responses gracefully — it tends to over-apologize or double down
The point isn’t to fully automate outreach. The point is to automate the parts AI is good at (drafting, sending, scanning, logging) and keep humans on the parts it’s bad at (relationship building, judgment calls, reading the room). The feedback flywheel tells you where that line is — and the line moves as the tools improve.
Store these patterns in your CLAUDE.md or memory system. Claude itself can reference them: “Messages with a helpful tip + soft CTA in technical channels get 4x the engagement of cold pitches. Adjusting approach.”
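The refine step reduces to simple aggregation once the log is structured. A minimal sketch, assuming the table has been parsed into dicts; the message_type and response keys, and the positive_rate name, are my assumptions:

```python
# Hypothetical analysis step: positive-response rate per message type,
# so claims like "tip + soft CTA beats cold pitch" come from data.
from collections import Counter


def positive_rate(log: list[dict]) -> dict[str, float]:
    """Fraction of POSITIVE outcomes for each message type in the log."""
    totals: Counter = Counter()
    positives: Counter = Counter()
    for row in log:
        totals[row["message_type"]] += 1
        if row["response"] == "POSITIVE":
            positives[row["message_type"]] += 1
    return {t: positives[t] / totals[t] for t in totals}
```

Feed the result back into the drafting prompt and the loop closes: the next batch of messages leans toward whatever types score highest.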
Reusable Workflows: Just a Folder and a Markdown File
This is the simplest and most underrated pattern I use.
Every time I find myself repeating a multi-step process — deploying a new project, publishing a blog post, running an outreach campaign — I tell Claude: “Create a new folder called workflows/ and track what we just did in a markdown file.”
That’s it. No framework. No tooling. Just a folder with markdown files.
workflows/
├── slack-outreach.md
├── publish-blog-post.md
├── deploy-new-project.md
├── generate-blog-images.md
├── setup-github-actions.md
└── security-audit.md

Each file is a step-by-step record of what actually happened, not what should theoretically happen. Real commands. Real file paths. Real gotchas.
Here’s what my Slack outreach workflow looks like:
# Slack Outreach Workflow
## Steps
1. Identify target channels (check channel topic, recent activity, member count)
2. Read last 50 messages for context and tone
3. Draft message matching channel style — helpful first, product mention second
4. Send via Slack MCP
5. Wait 24-48 hours
6. Scan for replies, reactions, DMs
7. Log outcome in outreach-log.md
8. Flag positive responses for human follow-up
## What Works
- Answering real questions and mentioning tool naturally
- Sharing genuine tips with soft CTA
- Technical channels > general channels
## What Doesn't
- Cold pitches in any format
- Posting the same message across multiple channels
- Formal tone in casual communities

Next time I say “run Slack outreach,” Claude reads this file and executes it. No re-learning. No repeating the same mistakes. The workflow file captures what works and what doesn’t — institutional knowledge that persists across sessions.
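Because the format is plain markdown, “track that workflow” barely needs code at all. A hypothetical sketch of what Claude effectively does when it records one (the helper name is mine; in practice Claude writes the file directly into workflows/):

```python
# Illustrative only: render a workflow markdown body from the steps
# just completed. The real "tool" is Claude writing the file itself.
def track_workflow(name: str, steps: list[str]) -> str:
    """Build the markdown body for a new workflows/*.md file."""
    lines = [f"# {name}", "", "## Steps"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)
```

That is the whole trick: the output is ordinary markdown, so it stays readable, diffable in git, and usable by any human or AI that comes along later.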
Why this works better than CLAUDE.md alone:
- CLAUDE.md files get bloated. Workflows stay focused on one task.
- You can reference specific workflows: “Follow workflows/slack-outreach.md”
- They’re versioned in git, so you see how processes evolve
- New team members (human or AI) can follow them immediately
The barrier to creating one is zero. Finish a task. Tell Claude “track that workflow.” Done. You’ve just saved yourself 20 minutes the next time you do it.
Putting It All Together
AI right now is a mixed bag. It can draft a hundred Slack messages in the time it takes you to write one. It can scan channels and classify responses without getting bored. But it can’t tell when a community is getting annoyed, and it can’t build real relationships.
The three patterns address this directly:
- Automated Slack outreach handles the volume — drafting, sending, and scanning at scale.
- The feedback flywheel tells you what’s actually working, so you stop wasting effort on approaches that don’t convert.
- Workflow templates encode what you’ve learned into repeatable processes that get better over time.
The result: you stop guessing what AI can and can’t do. You measure it. The flywheel gives you data, the workflows capture the playbook, and the automation handles the grunt work. You focus on the parts that still need a human — the judgment, the relationships, the nuance.
Start small. Pick one outreach channel. Have Claude send five messages. Scan for responses. Log what happened. That’s your flywheel. It only gets better from here.