Introduction
With the rapid development of artificial intelligence (AI), more and more AI tools are being integrated into everyday development workflows, greatly boosting productivity. Today we take a close look at Google Gemini CLI, a powerful open-source command-line AI tool developed by the Google Gemini team. Designed for developers, DevOps engineers, and data analysts, it aims to simplify complex programming and operations tasks through natural-language instructions.
This article walks through installing, configuring, and getting started with Gemini CLI, covering its core features and usage tips to help you bring the power of AI into your command-line workflow.
What Is Google Gemini CLI?
Google Gemini CLI is an open-source command-line interface tool built on the Google Gemini large models, bringing Gemini's AI capabilities to your terminal.
GitHub repository: https://github.com/google-gemini/gemini-cli
Gemini CLI's core value lies in its ability to understand code, run complex queries, automate repetitive tasks, and generate creative content using Gemini's multimodal capabilities (such as image understanding).
Core Feature Highlights
- Large codebase support: Handles a context window of more than 1 million tokens, so it can analyze and understand large projects and complex codebases.
- Multimodal app prototyping: Quickly extracts information from unstructured inputs (such as PDF documents, sketches, or images) and generates application prototypes, accelerating product design and iteration.
- Automated DevOps tasks: Performs common DevOps operations such as automating Git workflows, fetching Pull Request (PR) information, and creating migration plans, significantly improving operational efficiency.
- Tool integration: Via MCP servers, it can connect to Google's other media-generation models such as Imagen (image generation), Veo (video generation), and Lyria (music generation), extending its range of applications.
- Built-in web search: Grounds responses in up-to-date information, improving timeliness and accuracy.
Google Gemini CLI Installation Guide
This guide uses macOS as the example, but the steps are similar on Windows and Linux; all operations are performed in a terminal or command line.
Prerequisites
Before installing Gemini CLI, make sure your system has Node.js 18 or later. You can check your current Node.js version with:
node -v
If your version does not meet the requirement, upgrade Node.js first.
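If you want to script this check, the major version can be extracted from the `node -v` output; the helper name below is illustrative and not part of Gemini CLI:

```shell
# node_major: extract the major version number from a `node -v` style string
# (helper name is illustrative, not part of Gemini CLI)
node_major() {
  printf '%s\n' "$1" | sed 's/^v//' | cut -d. -f1
}

# Usage in your terminal:
#   [ "$(node_major "$(node -v)")" -ge 18 ] || echo "Please upgrade Node.js"
echo "v18.19.0 -> major $(node_major v18.19.0)"
```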
Installation Options
There are two main ways to install and run Gemini CLI:
Option 1: Run Directly (No Global Install)
This method suits users who want a quick try or prefer to avoid a global install. Each time you use it, run it directly via npx:
npx https://github.com/google-gemini/gemini-cli
This downloads and executes the CLI from GitHub at run time, leaving no persistent installation on your system.
Option 2: Global Install (Recommended)
For frequent use, a global install is recommended; it lets you launch the CLI from any directory with the gemini command. Run the following in your terminal:
sudo npm install -g @google/gemini-cli
Note that when using sudo, the system may prompt for your administrator password.
Once installed, simply type gemini in a terminal to start the interactive CLI. On first run it may request some necessary permissions; confirm the prompts to continue.
First-Time Configuration
On first launch, Gemini CLI guides you through a few short setup steps.
Step 1: Choose a Theme
The CLI offers several theme styles. Pick the one you prefer and press Enter to confirm.
Step 2: Choose a Login Method
Choose how you want to access the Gemini API. The "Login with Google" option is recommended; it typically comes with a free quota of 60 requests per minute and 1,000 requests per day, enough for most individual development needs. Press Enter after selecting.
If you need higher request limits, enterprise access, or prefer to use an API key, do the following:
- First, obtain your own API key from Google AI Studio.
- Then set it as an environment variable, typically in your .bashrc, .zshrc, or .profile file:
export GEMINI_API_KEY="YOUR_API_KEY"
Note: An API key is typically used for direct API calls, whereas this guide focuses on the interactive CLI experience.
Step 3: Browser Authentication
After you choose "Login with Google", a browser window opens automatically. Sign in with your Google account and complete the authorization.
Once signed in, the browser shows a confirmation that Gemini CLI has been authenticated. You can then return to the terminal and start using Gemini CLI.
Getting Started with Gemini CLI
Everything is now ready. You can type prompts directly in the CLI to interact with Gemini.
For example, enter a question or an instruction:
> Explain what an LLM is
Uploading and Referencing Local Files
Gemini CLI can work with local files. To reference a file in the CLI, type the @ symbol to trigger the file picker:
> Analyze this document: @
After you type @, the CLI lets you pick a local file and sends its contents to Gemini for analysis or processing.
Using Gemini CLI in VS Code
You can also run the gemini command directly in VS Code's integrated terminal. Once started, use @ to select files and start a conversation, just as in a standalone terminal.
For example, in the VS Code terminal you might enter:
> Help me write a simple calculator in Python
The CLI may request "write permission" during the process; this typically allows code or other content generated by Gemini to be written to the file system, so confirm when prompted.
Tips and Notes
- Model fallback: If your network connection is unstable or Gemini CLI hits a temporary issue, it may automatically fall back from the more capable gemini-2.5-pro model to the faster gemini-2.5-flash model to keep the service available.
- Listing available commands: To discover the commands and usage hints available in Gemini CLI, just type / in the interactive interface. This displays a help menu that guides you to more features.
Gemini CLI is a powerful AI assistant for developers, DevOps engineers, and data analysts. By simplifying code analysis, automating workflows, and supporting creative generation through natural-language instructions, it is an effective tool for boosting productivity.
Authentication Setup
The Gemini CLI requires you to authenticate with Google’s AI services. On initial startup you’ll need to configure one of the following authentication methods:
- Login with Google (Gemini Code Assist):
  - Use this option to log in with your Google account.
  - During initial startup, Gemini CLI will direct you to a webpage for authentication. Once authenticated, your credentials will be cached locally so the web login can be skipped on subsequent runs.
  - Note that the web login must be done in a browser that can communicate with the machine Gemini CLI is being run from. (Specifically, the browser will be redirected to a localhost URL that Gemini CLI will be listening on.)
  - Users may have to specify a GOOGLE_CLOUD_PROJECT if:
    - You have a Google Workspace account. Google Workspace is a paid service for businesses and organizations that provides a suite of productivity tools, including a custom email domain (e.g. your-name@your-company.com), enhanced security features, and administrative controls. These accounts are often managed by an employer or school.
    - You have received a free Code Assist license through the Google Developer Program (including qualified Google Developer Experts).
    - You have been assigned a license to a current Gemini Code Assist standard or enterprise subscription.
    - You are using the product outside the supported regions for free individual usage.
    - You are a Google account holder under the age of 18.
  - If you fall into one of these categories, you must first configure a Google Cloud Project ID, enable the Gemini for Cloud API, and configure access permissions.
You can temporarily set the environment variable in your current shell session using the following command:
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
For repeated use, you can add the environment variable to your .env file or your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variable to a ~/.bashrc file:
echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
source ~/.bashrc
- Gemini API key:
  - Obtain your API key from Google AI Studio: https://aistudio.google.com/app/apikey
  - Set the GEMINI_API_KEY environment variable. In the following methods, replace YOUR_GEMINI_API_KEY with the API key you obtained from Google AI Studio:
    - You can temporarily set the environment variable in your current shell session using the following command:
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
    - For repeated use, you can add the environment variable to your .env file or your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variable to a ~/.bashrc file:
echo 'export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"' >> ~/.bashrc
source ~/.bashrc
- Vertex AI:
  - If not using express mode:
    - Ensure you have a Google Cloud project and have enabled the Vertex AI API.
    - Set up Application Default Credentials (ADC) using the following command. For more information, see Set up Application Default Credentials for Google Cloud.
gcloud auth application-default login
    - Set the GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION, and GOOGLE_GENAI_USE_VERTEXAI environment variables. In the following methods, replace YOUR_PROJECT_ID and YOUR_PROJECT_LOCATION with the relevant values for your project:
      - You can temporarily set these environment variables in your current shell session using the following commands:
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"  # e.g., us-central1
export GOOGLE_GENAI_USE_VERTEXAI=true
      - For repeated use, you can add the environment variables to your .env file or your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variables to a ~/.bashrc file:
echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
echo 'export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"' >> ~/.bashrc
echo 'export GOOGLE_GENAI_USE_VERTEXAI=true' >> ~/.bashrc
source ~/.bashrc
  - If using express mode:
    - Set the GOOGLE_API_KEY environment variable. In the following methods, replace YOUR_GOOGLE_API_KEY with your Vertex AI API key provided by express mode:
      - You can temporarily set these environment variables in your current shell session using the following commands:
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
      - For repeated use, you can add the environment variables to your .env file or your shell's configuration file. For example, the following commands add the environment variables to a ~/.bashrc file:
echo 'export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"' >> ~/.bashrc
echo 'export GOOGLE_GENAI_USE_VERTEXAI=true' >> ~/.bashrc
source ~/.bashrc
Persisting Environment Variables with .env Files
You can create a .gemini/.env file in your project directory or in your home directory. Creating a plain .env file also works, but .gemini/.env is recommended to keep Gemini variables isolated from other tools.
Gemini CLI automatically loads environment variables from the first .env file it finds, using the following search order:
- Starting in the current directory and moving upward toward /, for each directory it checks:
  - .gemini/.env
  - .env
- If no file is found, it falls back to your home directory:
  - ~/.gemini/.env
  - ~/.env
Important: The search stops at the first file encountered—variables are not merged across multiple files.
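The lookup order can be sketched as a small shell function. `find_env_file` is an illustrative name, not part of Gemini CLI, and this simplified version walks all the way up to `/` before falling back to the home directory:

```shell
# find_env_file START_DIR: print the first .env file the search order would pick.
# Illustrative sketch only; the real CLI implements this lookup internally.
find_env_file() {
  dir="$1"
  while :; do
    for f in "$dir/.gemini/.env" "$dir/.env"; do
      if [ -f "$f" ]; then printf '%s\n' "$f"; return 0; fi
    done
    if [ "$dir" = "/" ]; then break; fi
    dir="$(dirname "$dir")"
  done
  # Fallback: the home directory
  for f in "$HOME/.gemini/.env" "$HOME/.env"; do
    if [ -f "$f" ]; then printf '%s\n' "$f"; return 0; fi
  done
  return 1
}
```

Note how a `.gemini/.env` in an ancestor directory wins over a plain `.env` further up, because both candidates are checked per directory before moving upward.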
Examples
Project-specific overrides (take precedence when you are inside the project):
mkdir -p .gemini
echo 'GOOGLE_CLOUD_PROJECT="your-project-id"' >> .gemini/.env
User-wide settings (available in every directory):
mkdir -p ~/.gemini
cat >> ~/.gemini/.env <<'EOF'
GOOGLE_CLOUD_PROJECT="your-project-id"
GEMINI_API_KEY="your-gemini-api-key"
EOF
Gemini CLI Configuration
Gemini CLI offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.
Configuration layers
Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):
- Default values: Hardcoded defaults within the application.
- User settings file: Global settings for the current user.
- Project settings file: Project-specific settings.
- Environment variables: System-wide or session-specific variables, potentially loaded from .env files.
- Command-line arguments: Values passed when launching the CLI.
The user settings file and project settings file
Gemini CLI uses settings.json files for persistent configuration. There are two locations for these files:
- User settings file:
  - Location: ~/.gemini/settings.json (where ~ is your home directory).
  - Scope: Applies to all Gemini CLI sessions for the current user.
- Project settings file:
  - Location: .gemini/settings.json within your project's root directory.
  - Scope: Applies only when running Gemini CLI from that specific project. Project settings override user settings.
Note on environment variables in settings: String values within your settings.json files can reference environment variables using either $VAR_NAME or ${VAR_NAME} syntax. These variables will be automatically resolved when the settings are loaded. For example, if you have an environment variable MY_API_TOKEN, you could use it in settings.json like this: "apiKey": "$MY_API_TOKEN".
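For instance, assuming an environment variable named MY_API_TOKEN is set in your shell, a hypothetical mcpServers entry could pass it through without hard-coding the secret:

```json
{
  "mcpServers": {
    "myServer": {
      "command": "node",
      "args": ["mcp_server.js"],
      "env": { "API_KEY": "$MY_API_TOKEN" }
    }
  }
}
```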
The .gemini directory in your project
In addition to a project settings file, a project’s .gemini directory can contain other project-specific files related to Gemini CLI’s operation, such as:
- Custom sandbox profiles (e.g., .gemini/sandbox-macos-custom.sb, .gemini/sandbox.Dockerfile).
Available settings in settings.json:
- contextFileName (string or array of strings):
  - Description: Specifies the filename for context files (e.g., GEMINI.md, AGENTS.md). Can be a single filename or a list of accepted filenames.
  - Default: GEMINI.md
  - Example: "contextFileName": "AGENTS.md"
- bugCommand (object):
  - Description: Overrides the default URL for the /bug command.
  - Default: "urlTemplate": "https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml&title={title}&info={info}"
  - Properties:
    - urlTemplate (string): A URL that can contain {title} and {info} placeholders.
  - Example: "bugCommand": { "urlTemplate": "https://bug.example.com/new?title={title}&info={info}" }
- fileFiltering (object):
  - Description: Controls git-aware file filtering behavior for @ commands and file discovery tools.
  - Default: "respectGitIgnore": true, "enableRecursiveFileSearch": true
  - Properties:
    - respectGitIgnore (boolean): Whether to respect .gitignore patterns when discovering files. When set to true, git-ignored files (like node_modules/, dist/, .env) are automatically excluded from @ commands and file listing operations.
    - enableRecursiveFileSearch (boolean): Whether to enable searching recursively for filenames under the current tree when completing @ prefixes in the prompt.
  - Example: "fileFiltering": { "respectGitIgnore": true, "enableRecursiveFileSearch": false }
- coreTools (array of strings):
  - Description: Allows you to specify a list of core tool names that should be made available to the model. This can be used to restrict the set of built-in tools. See Built-in Tools for a list of core tools. You can also specify command-specific restrictions for tools that support it, like the ShellTool. For example, "coreTools": ["ShellTool(ls -l)"] will only allow the ls -l command to be executed.
  - Default: All tools available for use by the Gemini model.
  - Example: "coreTools": ["ReadFileTool", "GlobTool", "ShellTool(ls)"]
- excludeTools (array of strings):
  - Description: Allows you to specify a list of core tool names that should be excluded from the model. A tool listed in both excludeTools and coreTools is excluded. You can also specify command-specific restrictions for tools that support it, like the ShellTool. For example, "excludeTools": ["ShellTool(rm -rf)"] will block the rm -rf command.
  - Default: No tools excluded.
  - Example: "excludeTools": ["run_shell_command", "findFiles"]
  - Security Note: Command-specific restrictions in excludeTools for run_shell_command are based on simple string matching and can be easily bypassed. This feature is not a security mechanism and should not be relied upon to safely execute untrusted code. It is recommended to use coreTools to explicitly select commands that can be executed.
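Following the security note above, an allowlist via coreTools is the safer pattern. A settings.json fragment permitting only a couple of read-only shell commands might look like this (the specific commands chosen are just an illustration):

```json
{
  "coreTools": ["ReadFileTool", "GlobTool", "ShellTool(ls -l)", "ShellTool(git status)"]
}
```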
- autoAccept (boolean):
  - Description: Controls whether the CLI automatically accepts and executes tool calls that are considered safe (e.g., read-only operations) without explicit user confirmation. If set to true, the CLI will bypass the confirmation prompt for tools deemed safe.
  - Default: false
  - Example: "autoAccept": true
- theme (string):
  - Description: Sets the visual theme for Gemini CLI.
  - Default: "Default"
  - Example: "theme": "GitHub"
- sandbox (boolean or string):
  - Description: Controls whether and how to use sandboxing for tool execution. If set to true, Gemini CLI uses a pre-built gemini-cli-sandbox Docker image. For more information, see Sandboxing.
  - Default: false
  - Example: "sandbox": "docker"
- toolDiscoveryCommand (string):
  - Description: Defines a custom shell command for discovering tools from your project. The shell command must return on stdout a JSON array of function declarations. Tool wrappers are optional.
  - Default: Empty
  - Example: "toolDiscoveryCommand": "bin/get_tools"
- toolCallCommand (string):
  - Description: Defines a custom shell command for calling a specific tool that was discovered using toolDiscoveryCommand. The shell command must meet the following criteria:
    - It must take the function name (exactly as in the function declaration) as the first command-line argument.
    - It must read function arguments as JSON on stdin, analogous to functionCall.args.
    - It must return function output as JSON on stdout, analogous to functionResponse.response.content.
  - Default: Empty
  - Example: "toolCallCommand": "bin/call_tool"
- mcpServers (object):
  - Description: Configures connections to one or more Model Context Protocol (MCP) servers for discovering and using custom tools. Gemini CLI attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names will be prefixed with the server alias you defined in the configuration (e.g., serverAlias__actualToolName) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility.
  - Default: Empty
  - Properties:
    - <SERVER_NAME> (object): The server parameters for the named server.
      - command (string, required): The command to execute to start the MCP server.
      - args (array of strings, optional): Arguments to pass to the command.
      - env (object, optional): Environment variables to set for the server process.
      - cwd (string, optional): The working directory in which to start the server.
      - timeout (number, optional): Timeout in milliseconds for requests to this MCP server.
      - trust (boolean, optional): Trust this server and bypass all tool call confirmations.
  - Example:
    "mcpServers": {
      "myPythonServer": {
        "command": "python",
        "args": ["mcp_server.py", "--port", "8080"],
        "cwd": "./mcp_tools/python",
        "timeout": 5000
      },
      "myNodeServer": {
        "command": "node",
        "args": ["mcp_server.js"],
        "cwd": "./mcp_tools/node"
      },
      "myDockerServer": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "-e", "API_KEY", "ghcr.io/foo/bar"],
        "env": { "API_KEY": "$MY_API_TOKEN" }
      }
    }
- checkpointing (object):
  - Description: Configures the checkpointing feature, which allows you to save and restore conversation and file states. See the Checkpointing documentation for more details.
  - Default: {"enabled": false}
  - Properties:
    - enabled (boolean): When true, the /restore command is available.
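Putting that together, turning the /restore command on would amount to a settings.json fragment like:

```json
{
  "checkpointing": { "enabled": true }
}
```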
- preferredEditor (string):
  - Description: Specifies the preferred editor to use for viewing diffs.
  - Default: vscode
  - Example: "preferredEditor": "vscode"
- telemetry (object):
  - Description: Configures logging and metrics collection for Gemini CLI. For more information, see Telemetry.
  - Default: {"enabled": false, "target": "local", "otlpEndpoint": "http://localhost:4317", "logPrompts": true}
  - Properties:
    - enabled (boolean): Whether or not telemetry is enabled.
    - target (string): The destination for collected telemetry. Supported values are local and gcp.
    - otlpEndpoint (string): The endpoint for the OTLP Exporter.
    - logPrompts (boolean): Whether or not to include the content of user prompts in the logs.
  - Example: "telemetry": { "enabled": true, "target": "local", "otlpEndpoint": "http://localhost:16686", "logPrompts": false }
- usageStatisticsEnabled (boolean):
  - Description: Enables or disables the collection of usage statistics. See Usage Statistics for more information.
  - Default: true
  - Example: "usageStatisticsEnabled": false
- hideTips (boolean):
  - Description: Enables or disables helpful tips in the CLI interface.
  - Default: false
  - Example: "hideTips": true
Example settings.json:
{
"theme": "GitHub",
"sandbox": "docker",
"toolDiscoveryCommand": "bin/get_tools",
"toolCallCommand": "bin/call_tool",
"mcpServers": {
"mainServer": {
"command": "bin/mcp_server.py"
},
"anotherServer": {
"command": "node",
"args": ["mcp_server.js", "--verbose"]
}
},
"telemetry": {
"enabled": true,
"target": "local",
"otlpEndpoint": "http://localhost:4317",
"logPrompts": true
},
"usageStatisticsEnabled": true,
"hideTips": false
}
Shell History
The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user’s home folder.
- Location: ~/.gemini/tmp/<project_hash>/shell_history
  - <project_hash> is a unique identifier generated from your project's root path.
  - The history is stored in a file named shell_history.
Environment Variables & .env Files
Environment variables are a common way to configure applications, especially for sensitive information like API keys or for settings that might change between environments.
The CLI automatically loads environment variables from an .env file. The loading order is:
- The .env file in the current working directory.
- If not found, it searches upward in parent directories until it finds an .env file or reaches the project root (identified by a .git folder) or the home directory.
- If still not found, it looks for ~/.env (in the user's home directory).
- GEMINI_API_KEY (Required):
  - Your API key for the Gemini API.
  - Crucial for operation. The CLI will not function without it.
  - Set this in your shell profile (e.g., ~/.bashrc, ~/.zshrc) or an .env file.
- GEMINI_MODEL:
  - Specifies the default Gemini model to use.
  - Overrides the hardcoded default.
  - Example: export GEMINI_MODEL="gemini-2.5-flash"
- GOOGLE_API_KEY:
  - Your Google Cloud API key.
  - Required for using Vertex AI in express mode.
  - Ensure you have the necessary permissions and set the GOOGLE_GENAI_USE_VERTEXAI=true environment variable.
  - Example: export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
- GOOGLE_CLOUD_PROJECT:
  - Your Google Cloud Project ID.
  - Required for using Code Assist or Vertex AI.
  - If using Vertex AI, ensure you have the necessary permissions and set the GOOGLE_GENAI_USE_VERTEXAI=true environment variable.
  - Example: export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
- GOOGLE_APPLICATION_CREDENTIALS (string):
  - Description: The path to your Google Application Credentials JSON file.
  - Example: export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/credentials.json"
- OTLP_GOOGLE_CLOUD_PROJECT:
  - Your Google Cloud Project ID for telemetry in Google Cloud.
  - Example: export OTLP_GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
- GOOGLE_CLOUD_LOCATION:
  - Your Google Cloud Project location (e.g., us-central1).
  - Required for using Vertex AI in non-express mode.
  - If using Vertex AI, ensure you have the necessary permissions and set the GOOGLE_GENAI_USE_VERTEXAI=true environment variable.
  - Example: export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"
- GEMINI_SANDBOX:
  - Alternative to the sandbox setting in settings.json.
  - Accepts true, false, docker, podman, or a custom command string.
- SEATBELT_PROFILE (macOS specific):
  - Switches the Seatbelt (sandbox-exec) profile on macOS.
  - permissive-open: (Default) Restricts writes to the project folder (and a few other folders; see packages/cli/src/utils/sandbox-macos-permissive-open.sb) but allows other operations.
  - strict: Uses a strict profile that declines operations by default.
  - <profile_name>: Uses a custom profile. To define a custom profile, create a file named sandbox-macos-<profile_name>.sb in your project's .gemini/ directory (e.g., my-project/.gemini/sandbox-macos-custom.sb).
- DEBUG or DEBUG_MODE (often used by underlying libraries or the CLI itself):
  - Set to true or 1 to enable verbose debug logging, which can be helpful for troubleshooting.
- NO_COLOR:
  - Set to any value to disable all color output in the CLI.
- CLI_TITLE:
  - Set to a string to customize the title of the CLI.
- CODE_ASSIST_ENDPOINT:
  - Specifies the endpoint for the code assist server.
  - This is useful for development and testing.
Command-Line Arguments
Arguments passed directly when running the CLI can override other configurations for that specific session.
- --model <model_name> (-m <model_name>):
  - Specifies the Gemini model to use for this session.
  - Example: npm start -- --model gemini-1.5-pro-latest
- --prompt <your_prompt> (-p <your_prompt>):
  - Used to pass a prompt directly to the command. This invokes Gemini CLI in a non-interactive mode.
- --sandbox (-s):
  - Enables sandbox mode for this session.
- --sandbox-image:
  - Sets the sandbox image URI.
- --debug_mode (-d):
  - Enables debug mode for this session, providing more verbose output.
- --all_files (-a):
  - If set, recursively includes all files within the current directory as context for the prompt.
- --help (or -h):
  - Displays help information about command-line arguments.
- --show_memory_usage:
  - Displays the current memory usage.
- --yolo:
  - Enables YOLO mode, which automatically approves all tool calls.
- --telemetry:
  - Enables telemetry.
- --telemetry-target:
  - Sets the telemetry target. See telemetry for more information.
- --telemetry-otlp-endpoint:
  - Sets the OTLP endpoint for telemetry. See telemetry for more information.
- --telemetry-log-prompts:
  - Enables logging of prompts for telemetry. See telemetry for more information.
- --checkpointing:
  - Enables checkpointing.
- --version:
  - Displays the version of the CLI.
Context Files (Hierarchical Instructional Context)
While not strictly configuration for the CLI’s behavior, context files (defaulting to GEMINI.md but configurable via the contextFileName setting) are crucial for configuring the instructional context (also referred to as “memory”) provided to the Gemini model. This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.
- Purpose: These Markdown files contain instructions, guidelines, or context that you want the Gemini model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
Example Context File Content (e.g., GEMINI.md)
Here’s a conceptual example of what a context file at the root of a TypeScript project might contain:
# Project: My Awesome TypeScript Library
## General Instructions:
- When generating new TypeScript code, please follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 18+.
## Coding Style:
- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (e.g., `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).
## Specific Component: `src/api/client.ts`
- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.
## Regarding Dependencies:
- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, please state the reason.
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- Hierarchical Loading and Precedence: The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., GEMINI.md) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the /memory show command. The typical loading order is:
  - Global context file:
    - Location: ~/.gemini/<contextFileName> (e.g., ~/.gemini/GEMINI.md in your user home directory).
    - Scope: Provides default instructions for all your projects.
  - Project root and ancestor context files:
    - Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a .git folder) or your home directory.
    - Scope: Provides context relevant to the entire project or a significant portion of it.
  - Sub-directory context files (contextual/local):
    - Location: The CLI also scans for the configured context file in subdirectories below the current working directory (respecting common ignore patterns like node_modules, .git, etc.).
    - Scope: Allows for highly specific instructions relevant to a particular component, module, or sub-section of your project.
- Concatenation & UI Indication: The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt to the Gemini model. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- Commands for Memory Management:
  - Use /memory refresh to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context.
  - Use /memory show to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
  - See the Commands documentation for full details on the /memory command and its sub-commands (show and refresh).
By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI’s memory and tailor the Gemini CLI’s responses to your specific needs and projects.
Sandboxing
The Gemini CLI can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.
Sandboxing is disabled by default, but you can enable it in a few ways:
- Using the --sandbox or -s flag.
- Setting the GEMINI_SANDBOX environment variable.
- Sandboxing is enabled by default when using --yolo mode.
By default, it uses a pre-built gemini-cli-sandbox Docker image.
For project-specific sandboxing needs, you can create a custom Dockerfile at .gemini/sandbox.Dockerfile in your project’s root directory. This Dockerfile can be based on the base sandbox image:
FROM gemini-cli-sandbox
# Add your custom dependencies or configurations here
# For example:
# RUN apt-get update && apt-get install -y some-package
# COPY ./my-config /app/my-config
When .gemini/sandbox.Dockerfile exists, you can set the BUILD_SANDBOX environment variable when running Gemini CLI to automatically build the custom sandbox image:
BUILD_SANDBOX=1 gemini -s
Usage Statistics
To help us improve the Gemini CLI, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.
What we collect:
- Tool Calls: We log the names of the tools that are called, whether they succeed or fail, and how long they take to execute. We do not collect the arguments passed to the tools or any data returned by them.
- API Requests: We log the Gemini model used for each request, the duration of the request, and whether it was successful. We do not collect the content of the prompts or responses.
- Session Information: We collect information about the configuration of the CLI, such as the enabled tools and the approval mode.
What we DON’T collect:
- Personally Identifiable Information (PII): We do not collect any personal information, such as your name, email address, or API keys.
- Prompt and Response Content: We do not log the content of your prompts or the responses from the Gemini model.
- File Content: We do not log the content of any files that are read or written by the CLI.
How to opt out:
You can opt out of usage statistics collection at any time by setting the usageStatisticsEnabled property to false in your settings.json file:
{
"usageStatisticsEnabled": false
}