🪨

CAVEMAN
SKILL

why use many word when few word do trick

claude.ai edition · web & desktop · three modes

🔥 claude.ai exclusive · ~87% token reduction · no terminal needed · MIT license


the problem

CLAUDE CODE
≠ CLAUDE.AI

JuliusBrussee's caveman went viral — 26k+ stars, Hacker News #1, ThePrimeagen covered it. The idea is simple: make Claude talk like a caveman, cut 75% of output tokens, save money and time.


But it only works in Claude Code — the CLI developer tool. You need a terminal. You need Node. You need to run npx skills add. Most people using Claude every day never touch a terminal.


Nobody built this for claude.ai. The web app. The desktop app. The one with 100 million users. Until now.

before / after

SAME ANSWER.
LESS WORDS.

NORMAL
Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix. Hope that helps! 72 tokens
LITE
The issue is in your authentication middleware — token expiry isn't being validated correctly. Fix: 38 tokens
CAVEMAN
Auth middleware. Token expiry check wrong. Fix: 11 tokens
ULTRA
auth middleware. expiry < not <=. 6 tokens
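The reductions above are simple arithmetic against the normal reply's token count. A quick sketch, using the illustrative counts from the example (not a real tokenizer):

```python
# Token counts from the before/after example above (illustrative, not measured).
counts = {"normal": 72, "lite": 38, "caveman": 11, "ultra": 6}

baseline = counts["normal"]
for mode, n in counts.items():
    reduction = 100 * (1 - n / baseline)  # percent saved vs. the normal reply
    print(f"{mode:8s} {n:3d} tokens  {reduction:5.1f}% reduction")
```

Note this one example compresses harder than the advertised averages (lite 47% vs ~30%, caveman 85% vs ~75%, ultra 92% vs ~87%); actual savings vary by prompt.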

three modes

PICK YOUR
INTENSITY.

🪨
/caveman-lite
~30% reduction

Removes pleasantries, hedging, meta-commentary, sign-offs. Grammar stays intact. Good for people who want cleaner answers without going full cave.

🪨🪨
/caveman
~75% reduction

Full caveman grammar. No articles. No filler. Fragments fine. Short synonyms. Maximum signal. This is the main mode.

🪨🪨🪨
/caveman-ultra
~87% reduction

Absolute minimum. Symbols over words (→, =, +). Every word must earn its place. One fragment per point. Brutal compression.

Switch anytime mid-conversation. /normal to return to default.

why it works

BREVITY ≠
DUMB.

+26pp
accuracy improvement under brevity constraints

A March 2026 arXiv paper — "Brevity Constraints Reverse Performance Hierarchies in Language Models" — found that forcing LLMs to produce brief responses improved accuracy by 26 percentage points on certain benchmarks. Verbose is not always better.

Claude is trained to be helpful, thorough, and pleasant. Great for human conversation. Expensive in tokens. Those pleasantries cost you money every session.

Caveman strips the wrapper. The logic stays identical. The bill drops.

Caveman not dumb. Caveman efficient. Caveman say what need saying. Then stop.

get started

INSTALL IN
60 SECONDS.

No terminal. No npm. No setup. Just download and upload.

01

Download the skill

Download caveman-skill.zip from the latest GitHub release. It's a small zip file — the skill definition.
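If you're curious what's inside, a claude.ai skill zip is generally just a folder with a SKILL.md file: YAML frontmatter (name, description) followed by the instructions Claude loads. The layout below is an illustrative sketch, not the actual contents of caveman-skill.zip:

```
caveman-skill.zip
└── caveman/
    └── SKILL.md    # frontmatter (name, description) + the brevity rules Claude follows
```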

02

Open Claude Settings

Go to claude.ai in your browser (or open Claude Desktop app). Navigate to: Settings → Capabilities → Skills

03

Upload the zip file

Click "Upload skill" and select the caveman-skill.zip file you downloaded. Claude installs it automatically.

04

Use it

In any conversation, type your activation command:

/caveman

Claude responds: 🪨 caveman mode on.

To turn off: /normal

compatibility

WHERE IT
WORKS.

Platform Works? Notes
claude.ai (web browser) ✅ Yes Primary target
Claude Desktop (macOS) ✅ Yes Same upload flow
Claude Desktop (Windows) ✅ Yes Same upload flow
Claude Code (CLI) ❌ No Use JuliusBrussee/caveman instead
Claude API ❌ No Add rules to system prompt directly
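For API users, the row above says to put the rules in the system prompt yourself. A minimal sketch: the rule text below is a paraphrase of the modes described earlier, not the skill's actual prompt, and the commented-out call uses the standard Anthropic Messages API.

```python
# Hypothetical caveman rules distilled from the modes above -- an
# illustration of the approach, not the skill's real prompt text.
CAVEMAN_RULES = (
    "Respond in caveman mode: no pleasantries, no hedging, no sign-offs. "
    "Drop articles and filler words. Fragments are fine. "
    "Prefer symbols (->, =, +) over words. Stop as soon as the point is made."
)

def with_caveman(system_prompt: str = "") -> str:
    """Prepend caveman brevity rules to an existing system prompt."""
    return f"{CAVEMAN_RULES}\n\n{system_prompt}".strip()

# Usage with the official SDK (requires ANTHROPIC_API_KEY):
#   import anthropic
#   client = anthropic.Anthropic()
#   msg = client.messages.create(
#       model="claude-sonnet-4-5",   # substitute any current model name
#       max_tokens=256,
#       system=with_caveman("You are a code reviewer."),
#       messages=[{"role": "user", "content": "Why is my auth middleware failing?"}],
#   )
```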