Prompt Repeater Plugin for Claude Code

Simple technique, proven results.

Prompt Repeater applies Google Research's prompt repetition technique to improve Claude's accuracy on non-reasoning tasks.

```
Your prompt:  "What's the 5th item?"
Becomes:      "What's the 5th item? What's the 5th item?"

# With auto_apply: true, prompts are sent twice
# Claude processes the repeated version
# You get better accuracy automatically
```

Research Results

• 47-0 win-loss record on non-reasoning tasks
• 21% → 97% list task accuracy
• 0 ms latency penalty

Source: Google Research, December 2025

Installation

Ready in seconds.

Requires Claude Code.

```
# 1. Add the marketplace (in Claude Code)
/plugin marketplace add danielraffel/generous-corp-marketplace

# 2. Install Prompt Repeater
/plugin install prompt-repeater@generous-corp-marketplace

# 3. Restart Claude Code
# Quit and reopen to load the plugin
```
View on GitHub →
Zero dependencies. Pure prompt optimization.

Usage

Two ways to use it.

Automatic Mode (Recommended)

Enable it once, and every prompt gets optimized.

```
# 1. Copy settings template
cp .claude/prompt-repeater.local.md.example \
   .claude/prompt-repeater.local.md

# 2. Edit file, set auto_apply: true
# 3. Restart Claude Code
# Done! All prompts now optimized
```

How it works: You type normally. A hook intercepts your prompt before Claude sees it, repeats it automatically, and you get improved accuracy without any extra steps.
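The core transformation can be sketched in a few lines of bash. This is a hypothetical illustration of what the hook does to your prompt, not the plugin's actual source (the function name `repeat_prompt` is made up for this example):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the hook's core step: take the prompt text
# and emit it twice, separated by a space.
repeat_prompt() {
  local prompt="$1"
  printf '%s %s' "$prompt" "$prompt"
}

repeat_prompt "What's the 5th item?"
# → What's the 5th item? What's the 5th item?
```

Claude then receives the doubled text in place of the original prompt; nothing about your workflow changes.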

Manual Commands

Apply repetition selectively when you need it.

```
# Simple repetition (default)
/repeat-last

# Verbose framing
/repeat-verbose

# Triple for max accuracy
/repeat-3x
```

Use commands when you want control over when repetition applies. Great for experimenting to understand when it helps.

Configuration Options

```
# .claude/prompt-repeater.local.md
---
auto_apply: true      # Turn automatic mode on/off
default_mode: simple  # simple | verbose | triple
notify: true          # Show when repetition applied
---
```
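A hook script needs to read these keys out of the frontmatter block. Here is one hedged sketch of how that lookup could work in bash (the `read_setting` helper is illustrative; the plugin's actual parsing may differ):

```shell
#!/usr/bin/env bash
# Hypothetical helper: extract the value of "key: value" from a
# settings file, stripping any trailing "# comment" and whitespace.
read_setting() {
  local file="$1" key="$2"
  sed -n "s/^${key}:[[:space:]]*\([^#]*\).*/\1/p" "$file" \
    | head -1 | tr -d '[:space:]'
}
```

For example, `read_setting .claude/prompt-repeater.local.md auto_apply` would print `true` for the file shown above.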

Research Background

Backed by Google Research.

Paper: "Prompt Repetition Improves Non-Reasoning LLMs" by Leviathan, Kalman, and Matias (December 2025)

Tested on: Gemini, GPT-4o, Claude, DeepSeek across 7 benchmarks

Key findings:

  • ✓ 47 wins, 0 losses on non-reasoning tasks
  • ✓ 21.33% → 97.33% accuracy on list indexing
  • ✓ No latency penalty (only affects prefill)
  • ✓ Output format unchanged
  • ✓ Safe even with reasoning tasks
Read the full paper →

FAQ

Common questions.

How does auto_apply work?

When auto_apply: true is set in your settings file, a hook intercepts every prompt you type before Claude processes it. The hook repeats your prompt based on your configured mode (simple, verbose, or triple), then Claude sees the repeated version.

Example: You type "What's the 5th item?" → Hook transforms it to "What's the 5th item?What's the 5th item?" → Claude processes the repeated version → You get improved accuracy.

The repetition is completely transparent: you just type normally and get better results automatically.

How do I turn it on?

1. Copy the example settings file:

cp .claude/prompt-repeater.local.md.example .claude/prompt-repeater.local.md

2. Edit the file and change auto_apply: false to auto_apply: true

3. Restart Claude Code

Now every prompt you enter will automatically be repeated. To turn it off, change back to auto_apply: false and restart.

When should I use this?

Best for non-reasoning tasks:

• Multiple choice questions
• List navigation (find Nth item, find between items)
• Simple fact retrieval
• Pattern matching

Less effective for:

• Multi-step planning
• Complex debugging
• Tasks already using chain-of-thought

The good news: It's safe even for reasoning tasks (neutral to slightly positive), so you can leave automatic mode on and it won't hurt performance.

What are the different modes?

Simple (recommended): <QUERY><QUERY> - Default mode, works well for most tasks

Verbose: <QUERY> Let me repeat that: <QUERY> - Sometimes marginally better with framing text

Triple: Three repetitions - Maximum accuracy for list navigation tasks, but longer input

Start with simple mode. Research shows all three perform similarly on most tasks, with triple providing the best results on specific list-heavy tasks.
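The three modes differ only in how the query is assembled. A minimal sketch, with an illustrative `apply_mode` function and the framing text taken from the descriptions above (not the plugin's actual source):

```shell
#!/usr/bin/env bash
# Hypothetical dispatch over the three repetition modes.
apply_mode() {
  local mode="$1" q="$2"
  case "$mode" in
    simple)  printf '%s%s' "$q" "$q" ;;                       # <QUERY><QUERY>
    verbose) printf '%s Let me repeat that: %s' "$q" "$q" ;;  # framed repeat
    triple)  printf '%s%s%s' "$q" "$q" "$q" ;;                # three copies
  esac
}
```

So `apply_mode verbose "What's the 5th item?"` yields the query, the framing sentence, and the query again, while `triple` simply concatenates three copies.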

Does it slow things down?

It depends. Prompt repetition only affects the parallelizable prefill stage. Research shows no latency penalty for most models and prompt lengths.

The only exception: Claude models with very long prompts (>2000 characters) or triple repetition may see a slight increase in prefill latency, but generation speed is completely unaffected.

Can I use manual commands instead of automatic mode?

Yes! Keep auto_apply: false in your settings (or don't create a settings file at all). Then use these commands:

/repeat-last - Apply simple repetition to your last prompt
/repeat-verbose - Apply verbose repetition
/repeat-3x - Apply triple repetition

This gives you full control over when repetition applies. Great for experimenting to understand when it helps.

How does this work?

Prompt repetition duplicates your input so each token can attend to every other token, fixing positional limitations in the attention mechanism.

LLMs process tokens left-to-right where past tokens can't attend to future tokens. This means token order affects accuracy.

For example, "<CONTEXT> <QUESTION>" performs differently from "<QUESTION> <CONTEXT>" because tokens at the beginning can't see tokens at the end.

Prompt repetition solves this by duplicating the entire prompt, enabling each token to attend to every other token in at least one of the repetitions.

See the Google Research paper for the full technical explanation.

Is this tested and safe?

This is an early beta. The plugin implements a technique from a peer-reviewed Google Research paper published in December 2025.

Why we think it's safe:

• Based on research tested on 7 major models (Gemini, GPT-4o, Claude, DeepSeek) across 7 benchmarks
• 47 wins, 0 losses on non-reasoning tasks; never performs worse than baseline
• Safe even with reasoning tasks (neutral to slightly positive)
• Plugin uses safe bash scripting, validates inputs
• Doesn't modify any files or make network requests
• Only manipulates prompt text before sending to Claude

What's the catch?

None, really. Prompt repetition is a simple technique with proven benefits and only minor trade-offs.

The only considerations:

• Input length doubles (or triples with triple mode), so very long prompts may approach context limits
• Effectiveness varies by task type (best for non-reasoning, neutral for reasoning)
• Requires restart to change settings (hooks load at session start)

But there's no performance penalty, no cost increase (beyond doubled input tokens), and never worse accuracy than baseline.