
LLM plugin to access Google's Gemini family of models


llm-gemini


API access to Google's Gemini models

Installation

Install this plugin in the same environment as LLM.

llm install llm-gemini

Usage

Configure the plugin by setting a key called "gemini" to your API key:

llm keys set gemini
<paste key here>

You can also set the API key by assigning it to the environment variable LLM_GEMINI_KEY.

Now run the model using -m gemini-2.0-flash, for example:

llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"

A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"

The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."

You can set the default model to avoid the extra -m option:

llm models default gemini-2.0-flash
llm "A joke about a pelican and a walrus"

Available models

  • gemini/gemini-3.1-pro-preview-customtools
  • gemini/gemini-3.1-pro-preview: Gemini 3.1 Pro Preview
  • gemini/gemini-3-flash-preview
  • gemini/gemini-3-pro-preview: Gemini 3 Pro Preview
  • gemini/gemini-2.5-flash-lite-preview-09-2025
  • gemini/gemini-2.5-flash-preview-09-2025
  • gemini/gemini-flash-lite-latest: Latest Gemini Flash Lite
  • gemini/gemini-flash-latest: Latest Gemini Flash
  • gemini/gemini-2.5-flash-lite: Gemini 2.5 Flash Lite
  • gemini/gemini-2.5-pro: Gemini 2.5 Pro
  • gemini/gemini-2.5-flash: Gemini 2.5 Flash
  • gemini/gemini-2.5-pro-preview-06-05
  • gemini/gemini-2.5-flash-preview-05-20: Gemini 2.5 Flash preview (priced differently from 2.5 Flash)
  • gemini/gemini-2.5-pro-preview-05-06
  • gemini/gemini-2.5-flash-preview-04-17
  • gemini/gemini-2.5-pro-preview-03-25
  • gemini/gemini-2.5-pro-exp-03-25
  • gemini/gemini-2.0-flash-lite
  • gemini/gemini-2.0-pro-exp-02-05
  • gemini/gemini-2.0-flash
  • gemini/gemini-2.0-flash-thinking-exp-01-21: Experimental "thinking" model from January 2025
  • gemini/gemini-2.0-flash-thinking-exp-1219
  • gemini/gemma-3n-e4b-it
  • gemini/gemma-3-27b-it
  • gemini/gemma-3-12b-it
  • gemini/gemma-3-4b-it
  • gemini/gemma-3-1b-it
  • gemini/learnlm-1.5-pro-experimental
  • gemini/gemini-2.0-flash-exp
  • gemini/gemini-exp-1206
  • gemini/gemini-exp-1121
  • gemini/gemini-exp-1114
  • gemini/gemini-1.5-flash-8b-001
  • gemini/gemini-1.5-flash-8b-latest: The least expensive model
  • gemini/gemini-1.5-flash-002
  • gemini/gemini-1.5-pro-002
  • gemini/gemini-1.5-flash-001
  • gemini/gemini-1.5-pro-001
  • gemini/gemini-1.5-flash-latest
  • gemini/gemini-1.5-pro-latest
  • gemini/gemini-pro

All of these models have aliases that omit the gemini/ prefix, for example:

llm -m gemini-1.5-flash-8b-latest --schema 'name,age int,bio' 'invent a dog'
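The concise schema string `name,age int,bio` roughly expands to a JSON schema object; untyped fields default to strings. A sketch of that expansion (the exact schema is produced by LLM itself):

```python
# Rough JSON-schema equivalent of the concise schema "name,age int,bio".
# Untyped fields ("name", "bio") default to strings; "age int" is an integer.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "bio": {"type": "string"},
    },
    "required": ["name", "age", "bio"],
}

# A response conforming to the schema would look like:
example = {"name": "Rex", "age": 4, "bio": "A curious terrier."}
print(sorted(schema["properties"]))
```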

Images, audio and video

Gemini models are multi-modal. You can provide images, audio or video files as input like this:

llm -m gemini-2.0-flash 'extract text' -a image.jpg

Or with a URL:

llm -m gemini-2.0-flash-lite 'describe image' \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg

Audio works too:

llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3

And video:

llm -m gemini-2.0-flash 'describe what happens' -a video.mp4

The Gemini prompting guide includes extensive advice on multi-modal prompting.

YouTube videos

You can provide YouTube video URLs as attachments as well:

llm -m gemini-3-pro-preview -a 'https://www.youtube.com/watch?v=9o1_DL9uNlM' \
  'Produce a summary with relevant URLs and code example snippets, then an accurate transcript with timestamps.'


These will be processed at low media resolution by default. Use the -o media_resolution X option to set that to medium, high, or unspecified.

JSON output

Use -o json_object 1 to force the output to be JSON:

llm -m gemini-2.0-flash -o json_object 1 \
  '3 largest cities in California, list of {"name": "..."}'

Outputs:

{"cities": [{"name": "Los Angeles"}, {"name": "San Diego"}, {"name": "San Jose"}]}
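Because the response is guaranteed to be valid JSON, it can be parsed directly. For example, using the output shown above:

```python
import json

# The JSON string returned by the json_object example above
output = '{"cities": [{"name": "Los Angeles"}, {"name": "San Diego"}, {"name": "San Jose"}]}'

data = json.loads(output)
names = [city["name"] for city in data["cities"]]
print(names)  # ['Los Angeles', 'San Diego', 'San Jose']
```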

Code execution

Gemini models can write and execute code - they can decide to write Python code, execute it in a secure sandbox and use the result as part of their response.

To enable this feature, use -o code_execution 1:

llm -m gemini-2.0-flash -o code_execution 1 \
'use python to calculate (factorial of 13) * 3'
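For reference, the sandboxed Python the model writes for this prompt boils down to something like:

```python
import math

# (factorial of 13) * 3 -- the calculation the prompt asks for
result = math.factorial(13) * 3
print(result)  # 18681062400
```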

Google search

Some Gemini models support Grounding with Google Search, where the model can run a Google search and use the results as part of answering a prompt.

Using this feature may incur additional requirements in terms of how you use the results. Consult Google's documentation for more details.

To run a prompt with Google search enabled, use -o google_search 1:

llm -m gemini-2.0-flash -o google_search 1 \
  'What happened in Ireland today?'

Use llm logs -c --json after running a prompt to see the full JSON response, which includes additional information about grounded results.

URL context

Gemini models support a URL context tool which, when enabled, allows the models to fetch additional content from URLs as part of their execution.

You can enable that with the -o url_context 1 option - for example:

llm -m gemini-2.5-flash -o url_context 1 'Latest headline on simonwillison.net'

Extra tokens introduced by this tool will be charged as input tokens. Use --usage to see details of those:

llm -m gemini-2.5-flash -o url_context 1 --usage \
  'Latest headline on simonwillison.net'

Outputs:

The latest headline on simonwillison.net as of August 17, 2025, is "TIL: Running a gpt-oss eval suite against LM Studio on a Mac.".
Token usage: 9,613 input, 87 output, {"candidatesTokenCount": 57, "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}], "toolUsePromptTokenCount": 9603, "toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}], "thoughtsTokenCount": 30}

The "toolUsePromptTokenCount" key shows how many tokens were used for that URL context.
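The usage details are plain JSON, so the URL context cost can be pulled out programmatically. A sketch using the payload shown above:

```python
import json

# The usage JSON from the --usage output above
usage = json.loads("""{"candidatesTokenCount": 57,
  "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}],
  "toolUsePromptTokenCount": 9603,
  "toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}],
  "thoughtsTokenCount": 30}""")

# Tokens spent fetching URL content are reported separately from the prompt itself
url_context_tokens = usage.get("toolUsePromptTokenCount", 0)
print(url_context_tokens)  # 9603
```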

Chat

To chat interactively with the model, run llm chat:

llm chat -m gemini-2.0-flash

Timeouts

By default there is no timeout against the Gemini API. You can use the timeout option to protect against API requests that hang indefinitely.

With the CLI tool, set a 1.5 second timeout like this:

llm -m gemini-2.5-flash-preview-05-20 'epic saga about mice' -o timeout 1.5

In the Python library timeouts are used like this:

import httpx, llm

model = llm.get_model("gemini/gemini-2.5-flash-preview-05-20")

try:
    response = model.prompt(
        "epic saga about mice", timeout=1.5
    )
    print(response.text())
except httpx.TimeoutException:
    print("Timeout exceeded")

An httpx.TimeoutException subclass will be raised if the timeout is exceeded.

Embeddings

The plugin also adds support for the gemini-embedding-exp-03-07 and text-embedding-004 embedding models.

Run one of them against a single string like this:

llm embed -m text-embedding-004 -c 'hello world'

This returns a JSON array of 768 numbers.

The gemini-embedding-exp-03-07 model is larger, returning 3072 numbers. You can also use variants of it that are truncated down to smaller sizes:

  • gemini-embedding-exp-03-07 - 3072 numbers
  • gemini-embedding-exp-03-07-2048 - 2048 numbers
  • gemini-embedding-exp-03-07-1024 - 1024 numbers
  • gemini-embedding-exp-03-07-512 - 512 numbers
  • gemini-embedding-exp-03-07-256 - 256 numbers
  • gemini-embedding-exp-03-07-128 - 128 numbers
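Conceptually, each truncated variant keeps the first N values of the full 3072-dimension vector. A minimal sketch with a stand-in vector (a real one would come from the embedding API; whether any renormalization is applied afterwards is not shown here):

```python
# Stand-in for a real 3072-dimension gemini-embedding-exp-03-07 vector
full_embedding = [float(i) / 3072 for i in range(3072)]

# The -128 variant corresponds to keeping the first 128 values
truncated = full_embedding[:128]
print(len(truncated))  # 128
```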

This command will embed every README.md file in child directories of the current directory and store the results in a SQLite database called embed.db in a collection called readmes:

llm embed-multi readmes -d embed.db -m gemini-embedding-exp-03-07-128 \
  --files . '*/README.md'

You can then run similarity searches against that collection like this:

llm similar readmes -c 'upload csvs to stuff' -d embed.db

See the LLM embeddings documentation for further details.
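Similarity between embedding vectors is scored with cosine similarity. A minimal sketch with toy vectors, shown only to illustrate the scoring (llm computes this for you):

```python
import math

def cosine_similarity(a, b):
    """Dot product of the two vectors divided by the product of their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0
print(round(cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0]), 3))  # 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 3))  # 0.0
```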

Listing all Gemini API models

The llm gemini models command lists all of the models that are exposed by the Gemini API, some of which may not be available through this plugin.

llm gemini models

You can add a --key X option to use a different API key.

To filter models by their supported generation methods use --method one or more times:

llm gemini models --method embedContent

If you provide multiple methods you will see models that support any of them.
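The Gemini API reports each model's capabilities in a supportedGenerationMethods list, and the "any of" filtering can be sketched like this (the sample model records below are invented for illustration):

```python
# Hypothetical sample of model records as returned by the Gemini models API
models = [
    {"name": "models/gemini-2.0-flash", "supportedGenerationMethods": ["generateContent"]},
    {"name": "models/text-embedding-004", "supportedGenerationMethods": ["embedContent"]},
]

def filter_by_methods(models, methods):
    # Keep models supporting ANY of the requested methods, matching --method semantics
    return [m for m in models if set(methods) & set(m["supportedGenerationMethods"])]

print([m["name"] for m in filter_by_methods(models, ["embedContent"])])
```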

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-gemini
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest

This project uses pytest-recording to record Gemini API responses for the tests.

If you add a new test that calls the API you can capture the API response like this:

PYTEST_GEMINI_API_KEY="$(llm keys get gemini)" pytest --record-mode once

You will need to have stored a valid Gemini API key using this command first:

llm keys set gemini
# Paste key here

Project details


Download files

Download the file for your platform.

Source Distribution

llm_gemini-0.29.tar.gz (23.3 kB)

Built Distribution

llm_gemini-0.29-py3-none-any.whl (17.8 kB)

File details

Details for the file llm_gemini-0.29.tar.gz.

File metadata

  • Download URL: llm_gemini-0.29.tar.gz
  • Size: 23.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm_gemini-0.29.tar.gz
  • SHA256: 3f1f7da7f3765d5c3422ff208e9a1996c401e86fca4fb7b9fbd3bfdc372aea18
  • MD5: ad8bde8e521a3796f7572915d41d61e3
  • BLAKE2b-256: 40235760b0b48161beec559cae9e6d0bbbab8bd70539cbea6056d7997d10ea94


Provenance

The following attestation bundles were made for llm_gemini-0.29.tar.gz:

Publisher: publish.yml on simonw/llm-gemini

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm_gemini-0.29-py3-none-any.whl.

File metadata

  • Download URL: llm_gemini-0.29-py3-none-any.whl
  • Size: 17.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm_gemini-0.29-py3-none-any.whl
  • SHA256: 5058d541d47442c614025ef793abb74fca53aacec17c235fcd55a5b139680edc
  • MD5: f4545db4cd789dbf69ef7df0b7d4b468
  • BLAKE2b-256: 903e8767bb9541b5f54d2758c14ba03e866a6fa9702dfd4e366be247b9a04315


Provenance

The following attestation bundles were made for llm_gemini-0.29-py3-none-any.whl:

Publisher: publish.yml on simonw/llm-gemini

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
