
From Luck to Skill: Using AI to Consistently Win System Design Interviews
by Pasha Simakov, 2025-08-13
The real gap in FAANG system design interviews isn’t knowledge of patterns—it’s the ability to think systematically under pressure, refining designs the way seasoned engineers do in production. Most candidates can cite the CAP theorem or consistent hashing, but struggle to iterate from requirements to trade-offs and bottlenecks. The skill that matters is thinking like an engineer who has debugged production outages at 3 AM and made architectural decisions that must serve billions of requests.
In this article, I set out to answer: Can we build a voice-first AI system that simulates a FAANG-style system design interview? The answer is yes—through an agentic AI system, released as an open-source project: github.com/psimakov/level-up.
Try It Yourself — Live AI Interview Practice
The best way to prepare for an interview is to practice in real time with an AI coach using ChatGPT’s live voice mode. It feels just like sitting across from an interviewer.
Take a moment to try the voice-first AI interview coach for a few minutes and prepare for any of these challenges:
- Design a Rate Limiter
- Design FB News Feed
- Design Ad Click Aggregator
- Design YouTube
- Design URL Shortener
These are live, free agents available inside the ChatGPT web or mobile app. There is nothing to set up.
Watch My Interview with AI Coach
Here is a recording of an interview led by the Level Up AI Coach:
Let’s look at how we built these agents.
The Flaw in Unstructured Practice
Most interview advice boils down to “just practice,” but that’s an incomplete algorithm. Unstructured practice is inefficient: it lacks a consistent feedback loop, a reliable ground truth, and the unique pressures of a live evaluation. To solve a systems problem, you need a better system.
Traditional methods fall short in three key ways:
- Lack of Ground Truth: Practicing alone means grading your own homework. Without an objective rubric, you risk reinforcing bad habits.
- No Feedback Loop: Books and videos are passive, while mock interviews provide feedback but are costly and hard to arrange.
- Wrong Interface: Real interviews involve speaking and defending designs, not typing essays. Text-only practice doesn’t prepare you for the communication demands of the real setting.
Any effective solution must address these: provide a reliable source of truth, a structured feedback loop, and an interface that mirrors real interviews.
Adaptive AI Agents Are the Answer
This is where voice-first AI agents shine. Unlike books, videos, or even peer mock interviews, a well-architected agent adapts in real time to the candidate’s performance. It doesn’t just serve up fixed answers—it engages in a dynamic back-and-forth that mirrors the pressures of a real interview.
Grounded in expert-defined knowledge and guided by structured prompts, the agent enforces methodical reasoning: clarifying requirements, probing trade-offs, and drilling into critical implementation details. By challenging candidates to defend their choices and explore edge cases, it becomes a sparring partner that cultivates true engineering intuition.
This adaptability makes AI practice both more realistic and more effective, helping candidates go beyond memorizing patterns and instead develop the flexible thinking needed to perform under pressure.
System Architecture: A Deterministic Framework for LLMs
All files and concepts mentioned here are available in the open-source project: github.com/psimakov/level-up. At its core is not a single LLM but a multi-agent architecture. Behavior is defined in structured prompt documents (system.md, curator.md, and coach.md), ensuring separation of concerns and predictable outcomes.
One could argue that a single, carefully crafted prompt would suffice, and that is partly true. But multi-agent modularity provides clarity, robustness, and extensibility.
Actors:
- <candidate>: the human user who is the interview candidate
- <expert>: the human expert defining a challenge
- <coach>: the LLM agent that guides the candidate through the interview process
- <curator>: the LLM agent that assists an expert in collecting information about the challenge and preparing its structured summary for the interview coach
Assets:
- <challenge>: the structured source of truth created by the expert, capturing all the details of the challenge
- <cheat-sheet>: structured, distilled information about the challenge that the <coach> needs to drive the interview process and assess the candidate
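To make the flow between actors and assets concrete, here is a minimal Python sketch of the data model. The class and function names are hypothetical illustrations; in the real system these assets are markdown documents and the curation step is performed by an LLM guided by curator.md.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """Source of truth authored by the <expert>."""
    title: str
    details: str  # requirements, trade-offs, sample solutions

@dataclass
class CheatSheet:
    """Distilled summary the <coach> uses to run the interview."""
    challenge_title: str
    summary: str

def curate(challenge: Challenge) -> CheatSheet:
    """The <curator> step: distill a challenge into a cheat sheet.

    Here we just truncate the details; the real system uses an LLM
    guided by curator.md to produce a structured summary.
    """
    return CheatSheet(challenge.title, challenge.details[:200])

challenge = Challenge("Design a Rate Limiter",
                      "Token bucket vs. sliding window; per-user quotas; ...")
sheet = curate(challenge)
print(sheet.challenge_title)  # → Design a Rate Limiter
```

The one-way flow matters: the <coach> never sees the raw <challenge>, only the <cheat-sheet>, which keeps the interview grounded in what the expert chose to expose.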
Open Source Project: Build Your Own Agents
You can extend the system by creating your own agent for any interview problem. This ensures every practice session is consistent, thorough, and grounded in expert criteria.
Prompts live in the ./prompts directory:
- system.md: Defines actors, assets, and tools.
- curator.md: Guides <curator> in producing a high-quality <cheat-sheet>.
- coach.md: Provides instructions for <coach> to simulate a realistic <interviewer>.
Cheat sheets live in ./cheat-sheets, e.g. ad-click-aggr.md, which details how <coach> should run the “Design Ad Click Aggregator” interview.
The Execution Engine: The <coach> Agent
The <coach> agent uses <cheat-sheet> content—covering requirements, trade-offs, and sample solutions—to guide candidates through a five-phase interview:
- Understand & Scope (15%)
- High-Level Design (30%)
- Deep Dive (40%)
- Drive to Completion (7%)
- Wrap Up (8%)
It adjusts difficulty based on performance, provides hints mid-session, and delivers structured assessments at the end. Practicing in voice mode ensures alignment with real-world interviews.
It can also teach solutions directly or offer hints if the candidate asks.
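The phase percentages above can be read as a weighted rubric. The scoring function below is a hypothetical illustration of how per-phase scores would roll up into an overall result; in the actual system the assessment is produced by the LLM following the coach.md instructions.

```python
# Phase weights from the five-phase interview structure (sum to 100%).
PHASE_WEIGHTS = {
    "Understand & Scope": 0.15,
    "High-Level Design": 0.30,
    "Deep Dive": 0.40,
    "Drive to Completion": 0.07,
    "Wrap Up": 0.08,
}

def overall_score(phase_scores: dict) -> float:
    """Combine per-phase scores (0-10) into a weighted overall score."""
    return sum(PHASE_WEIGHTS[phase] * score
               for phase, score in phase_scores.items())

scores = {
    "Understand & Scope": 8,
    "High-Level Design": 7,
    "Deep Dive": 6,
    "Drive to Completion": 9,
    "Wrap Up": 8,
}
print(round(overall_score(scores), 2))  # → 6.97
```

Note how the Deep Dive phase dominates: a candidate who scopes well but cannot defend implementation details still lands a mediocre overall score, mirroring how real loops are graded.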
An Open and Cost-Effective Model
This system is free and community-driven:
- Zero infrastructure cost: It runs entirely on OpenAI’s platform as prompt logic. Free voice tokens may run out quickly, so a ChatGPT subscription may be needed—but the capability is worth it.
- Community-driven knowledge base: Anyone can contribute new challenges, or keep their challenges private while using the open-source system to generate their own <cheat-sheet>.
This ensures a growing, high-quality library of interview problems and agents.
Do We Know It Works?
To be clear: we don’t yet have hard data, user testimonials, or formal studies proving this approach outperforms traditional prep. Our claims rest on architectural reasoning: structured feedback loops, grounded challenges, and voice-based practice.
Also, system design interviews test more than pattern knowledge—they measure communication, prioritization, ambiguity handling, and stakeholder management—skills that we don’t yet fully cover.
That’s why your input matters. If you try these agents, please share your experience—what worked, what didn’t, and how it compared to other prep methods. Your feedback will guide improvements and provide evidence of effectiveness.
Conclusion
This isn’t a silver bullet—it’s a tool. Like a profiler helps you spot bottlenecks in code, this system helps identify weaknesses in interview skills. By replacing unstructured practice with a systematic, feedback-rich process, it reduces variance and helps engineers consistently demonstrate their true abilities.
Stop leaving interviews to chance—start preparing effectively.
PS: Written with and about CLI Version 0.4.0
Gemini CLI Masterclass Articles
I’m Pasha Simakov, a Google software engineer passionate about building systems that solve systemic challenges for developers. I believe access to high-quality tools can level the playing field, and this agentic interview system applies that principle to hiring. Beyond building tools, I teach the architectural patterns behind reliable AI systems through 1-on-1 mentoring and group masterclasses.
Here are all of my articles on Gemini CLI and related topics:
- From Luck to Skill: Using AI to Consistently Win System Design Interviews (2025/9/10) original
- Gemini CLI: A Developer's Mental Model (2025/8/21) original
- Architecting AI Memory: Lessons from Gemini CLI (2025/8/13) original
- Inside the Mind: Gemini CLI's System Prompts Deep Dive (2025/7/19) original
- Meet the Agent: The Brain Behind Gemini CLI (2025/7/18) original