The Linus Method
Discovered by accident: using Linus Torvalds' persona eliminates AI sycophancy and delivers brutally effective technical reviews. Here's how we standardized it across our team.
Our team’s RFC reviews were broken. Claude Code would cheerfully approve architectural proposals with “This looks good! Maybe consider adding caching for performance” or “You’re absolutely right! This is an excellent way of solving the problem.” Meanwhile, we were building distributed monoliths and accumulating technical debt that would haunt us for years.
The fix started as a joke. As usual, I threw an obviously problematic RFC for a trivial change into Claude Code, but with a caveat: I asked it to review the RFC “as Linus Torvalds would.” The response was transformative:
“This creates a distributed monolith. You’re coupling services through shared database state. In 6 months, you won’t be able to deploy Service A without coordinating with Service B. Either commit to a monolith or properly isolate domain boundaries.”
(This is close to the feedback it actually gave me. I’ve removed the bucketloads of expletives and sarcasm to drive the point home.)
Direct. Specific. Focused on consequences, not feelings.
Why This Works
The breakthrough isn’t about prompting an AI to “be brutal.” It’s leveraging what’s already there. Every major LLM trained on internet data has absorbed thousands of Linus Torvalds’ code reviews, mailing list rants, and conference talks. His methodology—good taste, long-term thinking, elimination of special cases—is baked into the model weights.
When you invoke his persona, you’re not teaching the AI something new. You’re activating a coherent review philosophy that already exists in the training data. This bypasses the sycophancy problem entirely. The model stops trying to please and starts channeling three decades of kernel maintainer wisdom.
The Method in Practice
What Works:
• Architecture reviews with complete context (problem, constraints, proposed solution)
• Design documents where long-term maintainability matters
• PRDs where technical feasibility needs honest assessment
• Senior engineers codifying their review standards (“Review as Linus would, considering our microservices constraints”)
What Doesn’t:
• Early-stage brainstorming (too aggressive for exploration)
• First drafts (needs complete picture to be effective)
• Junior engineer proposals (can be demoralizing without context)
Our implementation prompt is simple: “Review this [RFC/Architecture/PRD] as Linus Torvalds would, focusing on long-term maintainability. Context: [domain, constraints, goals]. No profanity.”
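If you want the prompt standardized in tooling rather than pasted by hand, a small helper keeps the template consistent across the team. This is a minimal sketch; the function name, the allowed document types, and the argument layout are illustrative choices, not part of our actual tooling. The returned string is what you'd send as the user message to whatever model you use.

```python
ALLOWED_DOC_TYPES = {"RFC", "Architecture", "PRD"}


def build_review_prompt(doc_type: str, document: str, context: str) -> str:
    """Build the persona-review prompt described above.

    doc_type: one of ALLOWED_DOC_TYPES.
    document: the full text of the proposal under review.
    context:  domain, constraints, and goals, as a short sentence.
    """
    if doc_type not in ALLOWED_DOC_TYPES:
        raise ValueError(f"doc_type must be one of {sorted(ALLOWED_DOC_TYPES)}")
    return (
        f"Review this {doc_type} as Linus Torvalds would, "
        "focusing on long-term maintainability. "
        f"Context: {context}. No profanity.\n\n"
        f"{document}"
    )


# Example usage (hypothetical document and context):
prompt = build_review_prompt(
    "RFC",
    "Proposal: services A and B read and write a shared Postgres schema.",
    "e-commerce domain, 12 microservices, zero-downtime deploys required",
)
```

Centralizing the template this way also makes it easy to evolve the standard (say, adding a per-team constraints section) without every engineer maintaining their own copy of the prompt.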
The results? Reviews that catch architectural problems before they become technical debt. Engineers now self-review with this method before submitting proposals. The bar for technical decisions has risen across the team.
This works because Linus’s principles—structural elegance, eliminating special cases, thinking in 10-year horizons—apply to any non-trivial technical system. His review methodology isn’t about kernels; it’s about sustainable technical excellence.
More importantly, using a real persona with decades of public technical discourse gives you consistency that prompt engineering can’t match. Every LLM vendor has trained on the same Linus rants. The persona is portable across models.
Your Next Step
Try it today. Take your latest design document and ask your AI assistant to “review this as Linus Torvalds would, focusing on what will break in 5 years.”
The feedback might sting. That’s the point. Better to face harsh truths in review than discover them in production.
Currently implementing this review method? I’d love to hear what personas or methodologies work for your team’s technical reviews. Reach out to compare notes on eliminating AI sycophancy in technical feedback.