Inside the Mind: Gemini CLI's System Prompts Deep Dive

by Pasha Simakov, 2025-07-19

Understanding a large and complex codebase is one of the hardest challenges in software engineering. Even with experience, it often takes significant time to discover how components interact, what rules they follow, and where critical logic resides. This is where AI-powered tools, combined with the right heuristics, can make a real difference.

I’m Pasha Simakov, a Google software engineer passionate about building intelligent systems that help developers work faster and smarter. Over the years, the meaning of “intelligent” has evolved—today, I focus on leveraging LLM-powered tools like Gemini CLI to improve code comprehension, streamline workflows, and increase developer productivity. I also share these techniques with others through 1-on-1 sessions and group masterclasses.

In this article, we take a deep dive into the system prompts (also known as system instructions) that define how Gemini CLI agents think and act. These prompts are the "instructions" that shape the agent's behavior, decision-making, and overall effectiveness. By examining them closely, we can better understand not just how Gemini CLI operates but also how it can be extended or customized for more advanced workflows.

Let’s Begin

I’ve already set up my local tools for full-text search grounding. Now, let’s launch Gemini CLI and ask about its agents.

GEMINI CLI PROMPT: What are the prompts used by Gemini CLI agents? What instructions do they contain? Who decides which prompt to use and when? Use full-text-search("prompt") for grounding.

Based on the search results, here are the prompts used by the Gemini CLI, what they contain, and how they are chosen.

1. What are the prompts and what instructions do they contain?

There are several prompts used for different purposes:

  • Main System Prompt (packages/core/src/core/prompts.ts): This is the primary prompt that defines the agent's core behavior. It instructs the model to act as an interactive CLI agent specializing in software engineering tasks. Key instructions include:

    • Adhering to project conventions, style, and structure.
    • Verifying that a library or framework is already established in the project before using it.
    • Adding comments only when necessary.
    • Confirming any ambiguity with the user before taking action.
    • Explaining critical commands before execution.
    • Following specific workflows for software engineering tasks and creating new applications.
  • Edit Corrector Prompt (packages/core/src/utils/editCorrector.ts): This prompt is used to correct edits that have failed. It provides the model with the original content, the intended change (the "diff"), and the erroneous output, asking the model to identify and fix the error.

  • Tool Output Summarizer Prompt (packages/core/src/utils/summarizer.ts): This prompt instructs the model to summarize the output of a tool in a concise and user-friendly way, focusing on the most important information and avoiding unnecessary details.

  • Loop Detection Prompt (packages/core/src/services/loopDetectionService.ts): This prompt is designed to prevent the agent from getting stuck in a loop. It asks the model to determine whether a series of tool calls is repetitive and, if so, to suggest an alternative approach.

  • Issue Triage Prompt (.github/workflows/gemini-automated-issue-triage.yml): This prompt is used in a GitHub workflow to automatically triage issues. It instructs the model to analyze an issue, classify it (e.g., bug, feature request), and suggest appropriate labels.
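
To make the structure of that main system prompt easier to picture, here is a minimal TypeScript sketch of how such a prompt can be assembled from thematic sections. The function name, the section wording, and the userMemory parameter are illustrative assumptions for this article, not the actual code in prompts.ts.

    // Illustrative sketch only: not the real prompts.ts implementation.
    // The main system prompt is, in essence, one large template string
    // built from a persona line, core mandates, workflows, and examples.
    function buildSystemPrompt(userMemory?: string): string {
      const persona =
        'You are an interactive CLI agent specializing in software engineering tasks.';

      const coreMandates = [
        'Conventions: rigorously adhere to existing project conventions.',
        'Libraries/Frameworks: NEVER assume a library is available; verify its established usage.',
        'Comments: add code comments sparingly, only where necessary.',
      ]
        .map((rule) => `- ${rule}`)
        .join('\n');

      const sections = [
        persona,
        `# Core Mandates\n${coreMandates}`,
        '# Primary Workflows\n(software engineering tasks, new applications, ...)',
        '# Examples (Illustrating Tone and Workflow)\n(few-shot user/model turns, ...)',
      ];

      // Project- or user-specific context (e.g. GEMINI.md content) can be appended last.
      if (userMemory && userMemory.trim()) {
        sections.push(`---\n${userMemory.trim()}`);
      }
      return sections.join('\n\n');
    }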

2. Who decides which prompt to use and when?

The decision of which prompt to use is made by the Gemini CLI's internal logic based on the context of the user's request and the agent's current task.

  • The main system prompt is used by default for all interactions with the user.
  • The edit corrector prompt is used when a file modification (e.g., using the replace tool) fails.
  • The tool output summarizer prompt is used when the output of a tool is too verbose and needs to be condensed for the user.
  • The loop detection prompt is triggered when the agent detects that it might be repeating the same sequence of tool calls.
  • The issue triage prompt is used only within the context of the specific GitHub Action designed for triaging issues.
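
A rough TypeScript sketch of this routing logic is shown below. It is a simplification for illustration: the type names, fields, and thresholds are invented for this article, and in the real CLI the auxiliary prompts drive separate, single-purpose model calls while the main system prompt stays attached to the primary chat session.

    // Hypothetical sketch of when each auxiliary prompt comes into play.
    interface TurnContext {
      editFailed: boolean;       // e.g. the replace tool could not find its target text
      toolOutputChars: number;   // size of the latest tool output
      repeatedToolCalls: number; // identical tool-call patterns observed in a row
    }

    function auxiliaryPromptsFor(ctx: TurnContext): string[] {
      const prompts: string[] = [];
      if (ctx.editFailed) prompts.push('edit-corrector');
      if (ctx.toolOutputChars > 20000) prompts.push('tool-output-summarizer'); // threshold is made up
      if (ctx.repeatedToolCalls >= 5) prompts.push('loop-detection');          // threshold is made up
      // The issue-triage prompt never appears here: it exists only inside the GitHub workflow.
      return prompts;
    }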

Let's Go Deeper

GEMINI CLI PROMPT: Inspect the actual text of the prompts used by Gemini CLI agents. Provide 5 excerpts that represent classic prompting techniques (e.g., structured instructions, role-based prompts) and 5 excerpts that showcase unusual or creative prompting approaches. Focus on capturing meaningful and unique snippets that illustrate how the agent is guided in different contexts.

Here are 5 excerpts representing classic prompting techniques and 5 that showcase unusual or creative approaches, based on the Gemini CLI's source code.

Classic Prompting Techniques

These excerpts demonstrate well-established methods like role-playing, providing structured instructions, and few-shot examples.

  1. Role-Based Prompting (packages/core/src/core/prompts.ts): This is a classic example of assigning a persona to the AI to guide its behavior and tone.

    You are an interactive CLI agent specializing in software engineering tasks. Your primary goal is to help users safely and efficiently, adhering strictly to the following instructions and utilizing your available tools.

  2. Structured Instructions with Mandates (packages/core/src/core/prompts.ts): This excerpt uses a clear, hierarchical structure with headings and bullet points to provide explicit, non-negotiable rules for the agent's operation.

    Core Mandates:

    • Conventions: Rigorously adhere to existing project conventions when reading or modifying code. Analyze surrounding code, tests, and configuration first.

    • Libraries/Frameworks: NEVER assume a library/framework is available or appropriate. Verify its established usage within the project...

    • Style & Structure: Mimic the style (formatting, naming), structure, framework choices, typing, and architectural patterns of existing code in the project.

  3. Few-Shot Examples (packages/core/src/core/prompts.ts): This section provides concrete examples of user-model interactions to illustrate the expected tone, workflow, and tool usage.

    Examples (Illustrating Tone and Workflow):

    user: list files here.
    model: [tool_call: ls for path '/path/to/project']

    user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
    model: Okay, I can refactor 'src/auth.py'. First, I'll analyze the code and check for a test safety net before planning any changes.
    [tool_call: glob for path 'tests/test_auth.py'] ...

  4. Step-by-Step Workflow (.github/workflows/gemini-automated-issue-triage.yml): This prompt, used for automating GitHub issue triage, provides a clear, numbered sequence of steps for the agent to follow, ensuring a consistent and predictable process.

    Steps:

    1. Run: gh label list --repo ${{ github.repository }} --limit 100 to get all available labels.

    2. Review the issue title and body provided in the environment variables.

    3. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, and priority/*.

    4. Apply the selected labels to this issue using: gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "label1,label2"

  5. Structured Output (XML) (packages/core/src/core/prompts.ts): This prompt for history compression mandates a specific XML structure for the output. This is a classic technique for forcing the model to produce easily parsable, machine-readable data.

    After your reasoning is complete, generate the final XML object. Be incredibly dense with information. Omit any irrelevant conversational filler.

    The structure MUST be as follows:

    ...
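
The payoff of technique 5 shows up on the consumer side: because the model is forced into a rigid XML envelope, the calling code can pull the result out with a simple tag match instead of parsing free-form prose. Below is a small TypeScript illustration of that idea; the state_snapshot tag name and the helper function are assumptions for this sketch, not code taken from Gemini CLI.

    // Illustration of why mandated XML output is convenient to consume.
    // The tag name is an assumption for this sketch.
    function extractSnapshot(modelResponse: string): string | null {
      // Tolerate any reasoning text around the XML block; grab the first complete tag pair.
      const match = modelResponse.match(/<state_snapshot>[\s\S]*?<\/state_snapshot>/);
      return match ? match[0] : null;
    }

    const response =
      'Reasoning done. <state_snapshot><overall_goal>Fix the auth bug</overall_goal></state_snapshot>';
    console.log(extractSnapshot(response)); // prints only the XML block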

Unusual or Creative Prompting Approaches

These excerpts showcase more novel techniques, such as asking the model to self-correct, detect its own loops, and reason about its actions in a "scratchpad."

  1. Self-Correction with Context (packages/core/src/utils/editCorrector.ts): This is a creative "meta-prompt" where the AI is asked to fix its own previous, incorrect output. It's given the context of the problem (a failed match) and tasked with generating a corrected version.

    Context: A process needs to find an exact literal, unique match for a specific text snippet within a file's content. The provided snippet failed to match exactly. This is most likely because it has been overly escaped.

    Task: Analyze the provided file content and the problematic target snippet. Identify the segment in the file content that the snippet was most likely intended to match. Output the exact, literal text of that segment from the file content.

  2. Loop Detection as a Diagnostic Agent (packages/core/src/services/loopDetectionService.ts): This is a highly creative use of prompting. The model is given the role of a "diagnostic agent" and is asked to analyze its own recent conversation history to determine if it's stuck in a loop. It's a form of AI self-reflection.

    You are a sophisticated AI diagnostic agent specializing in identifying when a conversational AI is stuck in an unproductive state. Your task is to analyze the provided conversation history and determine if the assistant has ceased to make meaningful progress.

    An unproductive state is characterized by one or more of the following patterns over the last 5 or more assistant turns:

    Repetitive Actions... Cognitive Loop...

  3. Chain of Thought with a "Scratchpad" (packages/core/src/core/prompts.ts): This prompt explicitly tells the model to "think" in a private <scratchpad> before producing its final output. This encourages a more deliberative reasoning process, separating the "thinking" from the final answer.

    First, you will think through the entire history in a private <scratchpad>. Review the user's overall goal, the agent's actions, tool outputs, file modifications, and any unresolved questions. Identify every piece of information that is essential for future actions.

    After your reasoning is complete, generate the final XML object.

  4. Differentiating "Good" vs. "Bad" Repetition (packages/core/src/services/loopDetectionService.ts): This is a nuanced instruction within the loop detection prompt. It creatively asks the model not just to find repetition, but to distinguish between unproductive loops and legitimate, incremental progress, showing a sophisticated understanding of the task.

    Crucially, differentiate between a true unproductive state and legitimate, incremental progress. For example, a series of 'tool_A' or 'tool_B' tool calls that make small, distinct changes to the same file (like adding docstrings to functions one by one) is considered forward progress and is NOT a loop.

  5. Adjusting a Replacement Based on a Correction (packages/core/src/utils/editCorrector.ts): This is another creative meta-prompt. After the model corrects an old_string, it's asked to adjust the corresponding new_string to match the correction, preserving the intent of the original change. This requires a higher level of reasoning about the relationship between the two strings.

    Context: A text replacement operation was planned. The original text to be replaced (original_old_string) was slightly different from the actual text in the file (corrected_old_string)...

    Task: Based on the differences between original_old_string and corrected_old_string, and the content of original_new_string, generate a corrected_new_string. This corrected_new_string should be what original_new_string would have been if it was designed to replace corrected_old_string directly, while maintaining the spirit of the original transformation.
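
Taken together, the two edit-corrector prompts form a small two-step pipeline: first recover the literal text the edit was meant to match, then re-derive the replacement so the original intent survives. Here is a rough TypeScript sketch of that flow; the function names are invented for this article, and generate() stands in for whatever client actually calls the model.

    // Hypothetical sketch of the two-step self-correction flow described above.
    // generate() is a placeholder for the real model client.
    declare function generate(prompt: string): Promise<string>;

    async function correctFailedEdit(
      fileContent: string,
      originalOldString: string, // the snippet that failed to match exactly
      originalNewString: string, // the replacement originally planned for it
    ) {
      // Step 1: ask the model which literal segment of the file was actually intended.
      const correctedOldString = await generate(
        'Context: an exact, unique match for a snippet failed (likely over-escaped).\n' +
          `File content:\n${fileContent}\n\nProblematic snippet:\n${originalOldString}\n\n` +
          'Task: output the exact, literal text of the intended segment.',
      );

      // Step 2: adjust the replacement so it fits the corrected target while
      // maintaining the spirit of the original transformation.
      const correctedNewString = await generate(
        `original_old_string:\n${originalOldString}\n\n` +
          `corrected_old_string:\n${correctedOldString}\n\n` +
          `original_new_string:\n${originalNewString}\n\n` +
          'Task: generate the corrected_new_string.',
      );

      return { correctedOldString, correctedNewString };
    }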

How It Works

This article highlighted full-text search grounding as just one of many techniques available for enhancing LLM-powered code exploration. By combining such methods with structured workflows, engineers can significantly improve the accuracy and efficiency of AI-assisted development on large, production-grade projects. To dive deeper into these advanced techniques and practical applications, consider joining my Gemini CLI Masterclass.

Gemini CLI Masterclass Articles

Articles in the Gemini CLI Masterclass series:
  • Inside the Mind: Gemini CLI's System Prompts Deep Dive (2025/7/19) original
  • Meet the Agent: The Brain Behind Gemini CLI (2025/7/18) original

Want to learn more? Need help?

If you'd like to learn these techniques in depth, join my Gemini CLI Masterclass.

Note: Gemini CLI Masterclass is not a Google product. It's not developed, funded, supported, or approved by Google LLC. Note: Gemini CLI is a Google product; references here are for educational and practical AI development purposes.