What's inside
50 prompts across 5 categories:
- Code Review (10). Blast radius, security, concurrency, error handling, API contracts, test gaps, naming, dependency boundaries, migration safety, PR descriptions.
- Debugging (10). Hypothesis-driven, distributed traces, memory leaks, race conditions, slow queries, flaky tests, incident triage, stack traces, config drift, time zones.
- Refactoring (10). Extract domain, reduce complexity, untangle god classes, value objects, illegal states, flag arguments, strangler fig, long functions, async modernization, feature flag cleanup.
- System Design (10). Capacity planning, queues, multi-region, idempotency, rate limiting, distributed transactions, cache invalidation, schema migrations, webhook delivery, risky rollouts.
- Docs & PRs (10). PR descriptions, ADRs, runbooks, API references, changelogs, postmortems, READMEs, onboarding docs, tech specs, migration notices.
Bonuses:
- 5
.cursorrulesfiles (one per category) -
3
CLAUDE.mdstarters (Go services, Elixir umbrellas, AI agent projects) - Notion blueprint with views by category, by tool, and by usage
- Lifetime updates
Five free prompts
These five live on this page so you can read, copy, and try them. The full pack ships with 45 more plus the bonuses.
1. Blast Radius Analysis (Code Review)
Act as a staff engineer reviewing this diff for blast radius before merge.
For each changed file, answer:
1. What systems, services, or callers depend on this code path?
2. What's the worst realistic failure mode if this ships broken?
3. Is the change reversible without a backfill, replay, or schema rollback?
4. What monitoring would I check in the first 30 minutes after deploy?
End with a single-sentence verdict: SAFE TO MERGE, MERGE WITH MITIGATION,
or HOLD. If MERGE WITH MITIGATION, list the mitigations in priority order.
Diff:
<paste diff or attach file>
Why it works. The single-sentence verdict forces commitment instead of a wishy-washy "it depends." The four-question structure stops the model from nitpicking syntax when there's a real systems-level problem to flag.
2. Hypothesis-Driven Debugging (Debugging)
I'm debugging a problem. Reset me. Walk me through this rigorously.
Symptom (what I observe):
<describe>
What should happen:
<describe>
What I've already ruled out:
<list>
Now:
1. Generate 5 hypotheses ordered by likelihood given typical failure modes
in this stack.
2. For each hypothesis, write the cheapest test that would falsify it.
3. Identify which test gives me the most information per minute spent.
4. If two hypotheses are entangled, propose the ordering that disentangles
them fastest.
Don't suggest fixes yet. We're isolating, not fixing.
Why it works. Most debugging time is wasted chasing the first plausible explanation. Forcing five hypotheses, each with its cheapest falsifying test, rebuilds the scientific method at exactly the moment fatigue has eroded it. The "don't suggest fixes yet" line is the hardest one for the model to obey and the most important.
3. Make Illegal States Unrepresentable (Refactoring)
This data model allows states that should never exist. Help me redesign so
the type system rejects them.
Type:
<paste>
Do this:
1. Enumerate the legal states (combinations of fields).
2. Enumerate the illegal states the current shape allows.
3. Propose a redesign — usually a sum type / discriminated union /
tagged enum — that makes illegal states unrepresentable.
4. Show how callers change. Some will get simpler (no nil checks).
Some will need to pattern match.
Language: <Go / Rust / TypeScript / Elixir / Ruby>
Pick the idiomatic encoding for that language.
Why it works. This is the single highest-leverage refactor in statically typed code. The model knows the idiomatic patterns for each language, and the prompt forces enumeration before redesign. It kills entire bug classes in payments code and identity flows.
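To make the pattern concrete, here's a minimal TypeScript sketch. The payment type is invented for illustration, not from the pack:

// Before: flat shape. Optional fields mean the compiler accepts
// a "refunded" payment with no refundId.
interface PaymentLoose {
  status: "pending" | "settled" | "refunded";
  settledAt?: Date;
  refundId?: string;
}

// After: a discriminated union. Each status carries exactly the
// fields that are legal for it, so illegal combinations won't compile.
type Payment =
  | { status: "pending" }
  | { status: "settled"; settledAt: Date }
  | { status: "refunded"; settledAt: Date; refundId: string };

function describe(p: Payment): string {
  switch (p.status) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled at ${p.settledAt.toISOString()}`; // narrowed: settledAt exists
    case "refunded":
      return `refund ${p.refundId}`; // narrowed: refundId exists
  }
}

Callers that switch on status get the narrowing for free; the nil checks disappear exactly as the prompt predicts.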
4. Pick the Right Queue (System Design)
Help me choose a queue/streaming technology for this workload.
Workload:
<describe: throughput, message size, latency, ordering needs, retention,
consumers>
Operational context:
<cloud, team experience, existing infra, budget>
Compare these options for THIS workload (not in the abstract):
- Kafka (managed: MSK, Confluent, Aiven)
- Cloud-native pub/sub (SQS, GCP Pub/Sub, Azure Service Bus)
- Broker (RabbitMQ, NATS)
- DB- or Redis-backed (Postgres SKIP LOCKED, River, Oban, Sidekiq)
For each: fit (1-10), the one thing that makes it right or wrong here,
monthly ops cost.
End with a single recommendation and what would make you change your mind.
Why it works. Queue choices get litigated forever. Forcing a per-option fit score and a single recommendation cuts the bikeshedding. The "what would change my mind" line is what separates a useful answer from a confident one.
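The DB-backed option the prompt lists is worth seeing once, because it's the one teams underestimate. A minimal sketch in TypeScript with node-postgres; the jobs table and its columns are hypothetical:

import { Pool } from "pg";

const pool = new Pool(); // connection config from PG* env vars

// Claim one queued job atomically. FOR UPDATE SKIP LOCKED lets
// concurrent workers pass over rows another worker already holds,
// so consumers never block each other on the same row.
async function claimJob(): Promise<{ id: string; payload: unknown } | null> {
  const { rows } = await pool.query(
    `UPDATE jobs SET status = 'running'
     WHERE id = (
       SELECT id FROM jobs
       WHERE status = 'queued'
       ORDER BY enqueued_at
       FOR UPDATE SKIP LOCKED
       LIMIT 1
     )
     RETURNING id, payload`
  );
  return rows[0] ?? null; // null when the queue is drained
}

At modest throughput this buys you transactional enqueue, a queryable backlog, and zero new infrastructure, which is exactly the trade-off the fit score should surface.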
5. Architecture Decision Record (Docs & PRs)
Write an Architecture Decision Record for this decision.
Context I have:
<describe the situation, the options considered, the choice made>
Use this structure:
# ADR-NNN: <one-line decision>
## Status
Proposed | Accepted | Superseded by ADR-XXX
## Context
The forces in play. Constraints. What's NOT being decided.
## Decision
The actual choice. One paragraph. Imperative voice ("we will").
## Consequences
Positive. Negative. Neutral. All three sections, even if one is short.
## Alternatives Considered
Each alternative + the specific reason it lost.
Tone: matter-of-fact, no salesmanship. Future engineers are the reader.
Why it works. ADRs are great when written, never written when needed. A constrained template makes them quick. The "Alternatives Considered" with reasons stops the "obviously the only choice" framing that erases context. Future you will thank present you.
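Filled in, the template stays short. A hypothetical example, invented for illustration:

# ADR-012: Run the job queue on Postgres SKIP LOCKED
## Status
Accepted
## Context
Background jobs under 1k/minute. We already operate Postgres; we don't operate Kafka or Redis. NOT being decided: the analytics event stream.
## Decision
We will run the job queue on the existing Postgres cluster using FOR UPDATE SKIP LOCKED.
## Consequences
Positive: no new infrastructure, transactional enqueue. Negative: a throughput ceiling well below a dedicated broker. Neutral: workers poll.
## Alternatives Considered
SQS: lost on the local development story. Kafka: lost on ops cost at this volume.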
Built by Jared Smith
Staff Software Engineer with 13+ years shipping production systems. Director of Information Security at Lavender. Backend at BlockFi. Elixir at AAMP Global.
Used Claude Code, Cursor, and Codex daily since 2023; these prompts are the artifact of that work. Read the full positioning page →
FAQ
What format are the files?
PDF for the prompts, CSV for Notion import, plain text for the .cursorrules and CLAUDE.md files. No app to install. No login required.
Do these work with Claude Code AND Cursor?
Yes. Most prompts work in both. A few are flagged "Claude" specifically (the long-context ones). The bonus .cursorrules are Cursor-specific; the CLAUDE.md starters are Claude Code-specific.
Will I get future updates?
Yes. Lifetime updates ship via Gumroad to existing buyers. New prompts get added when the work warrants it, not on a calendar.
What if I don't like it?
7-day no-questions refund. Email jared@sublimecoding.com.
Why $39?
Founders edition pricing for the first 50 buyers; $49 after. The price reflects the time it took to build and refine these, and the lifetime updates and bonuses round it out.
Who is this NOT for?
If you've already built your own prompt library you trust, this isn't for you. If you don't use Claude or Cursor regularly, this isn't for you either.
Get the Vault
One-time purchase. PDF + CSV + plain text. No app, no login, no expiry. Lifetime updates included.
Get the Vault: $39. Founders edition. First 50 buyers.