
agentjido / req_llm
Coverage: 49% (main: 49%)
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Repo Added: 15 Sep 2025 11:42AM UTC
Last Build: 544
Files: 85


LAST BUILD ON BRANCH feat/google-context-caching

Branches:
  • feat/google-context-caching
  • add-bedrock-fixtures
  • add-bedrock-structured-output
  • add-hex-changelog-and-module-grouping
  • add-json-schema-validation
  • add-vertex-guide-to-docs
  • add-vllm-provider
  • add-vllm-support
  • add_openai_responses_api_structured_responses
  • add_retry_step
  • allow_json_schemas
  • bedrock-1.0-fixes
  • bedrock-clean
  • bedrock-mistral-support
  • breaking/reqllm-llmdb-forced
  • bug/api-return-types
  • bug/codec-tool-calls
  • bug/debug-stream-return
  • bug/incorrect-model-spec-docs
  • bug/object-array
  • bug/openai-tool-calls
  • bug/stream-process-linking
  • bug/streaming-nil-deltas
  • bug/streaming-race-condition
  • bug/usage_total_cost
  • bw-openai-responses-api-reasoning-effort
  • cerebras
  • cerebras-zai-glm-4-6
  • chore/2025-10-14-update-fixtures
  • chore/docs-review
  • chore/model_update_2025-10-27
  • chore/object-fixtures
  • chore/object-fixtures-resurrected
  • chore/refine-fixtures
  • chore/refresh-coverage-tests
  • chore/refresh-fixtures-before-1.0
  • chore/update-models-2025-09-21
  • config_base_url_override
  • copilot/fix-32
  • credo-config
  • credo/code-readability-fix
  • credo/fix-refactor-enum-filter
  • dependabot/hex/ex_aws_auth-1.3.1
  • dependabot/hex/ex_doc-0.39.1
  • dependabot/hex/jsv-0.11.5
  • dependabot/hex/req-0.5.16
  • dependabot/hex/tidewave-0.5.1
  • dependabot/hex/zoi-0.8.1
  • dependabot/hex/zoi-0.8.4
  • devtools
  • docs/agent-tutorial
  • docs/aws-event-stream-specialization
  • egomes/fix-claude-multi-turn
  • egomes/fix-tool-inspection-with-json-schema
  • feat/bedrock-service-tiers
  • feat/context-json-serialization
  • feat/fixture-credential-fallback
  • feat/fixture-credential-fallback-rebased
  • feat/google-upload-file
  • feat/in-type-support
  • feat/oauth2-token-cache
  • feat/structured-output-openai-google
  • feat/vertex-gemini-support
  • feature/anthropic-structured-output
  • feature/base-url-override
  • feature/bedrock-prompt-caching
  • feature/cerebras-provider
  • feature/configurable-metadata-timeout
  • feature/custom_providers
  • feature/google_grounding
  • feature/model-catalog
  • feature/normalize-bedrock-inference-profiles
  • feature/pre-release-fixes
  • feature/prompt-caching
  • feature/refactor-llm-api-fixtures
  • feature/refined-key-management
  • feature/stream-collectors
  • feature/unique-model-provider-options
  • feature/upgrade-ex-aws-auth
  • feature/zai-fixtures
  • feature/zoi-schema
  • fix-anthropic-streaming
  • fix-bedrock-timeout
  • fix-duplicate-clause
  • fix-google
  • fix-groq-stream-error
  • fix-mix-task-docs
  • fix-openai-max-tokens-param
  • fix-reasoning-overlay-pattern-match
  • fix-response-process-stream
  • fix-retry-delay-conflict
  • fix-schema-boolean-property
  • fix/bedrock-claude-exclusions
  • fix/bedrock-inference-profiles
  • fix/bug-119-aws-auth-credentials
  • fix/capability-structure-pattern-matches
  • fix/cost-calculation-in-usage
  • fix/google-cached-tokens
  • fix/google-file-support
  • fix/google-structured-output
  • fix/groq-utf8-streaming
  • fix/http2-large-request-bodies
  • fix/issue-65-http-status-validation
  • fix/issue-96-validation-error-fields
  • fix/jsv-preserve-original-data
  • fix/proxy-options
  • fix/registry-get-provider-nil-module
  • fix/tool-budget-pattern-match
  • fix/tool_calls
  • fix/vertex-decode-stream-event
  • google-vision
  • idempotent-stream-response
  • improve-metadata-provider-errors
  • list_with_subtype
  • main
  • patch-1
  • put-max-tokens-model-options
  • record-bedrock-vertex-fixtures
  • refactor-meta-provider-generic
  • refactor/context-tools
  • refactor/rename-decode-sse-event
  • refactor/req-streaming
  • refactor/xai-structured-objects
  • remove-duplicate-google-provider-metadata
  • remove-jido-keys
  • response-context-append-bug
  • support-openai-tool-choice-required
  • vertex-gemini-rebased
  • zai
  • zw/fix-stream-finish-reason

18 Nov 2025 08:07AM UTC coverage: 48.945% (-1.0%) from 49.986%
Commit 945b7128c4955aab6b401241ca7d785556e25a59 (PR #193)

Pull #193 · github · neilberkman
feat: Add Google Context Caching support for Gemini models

Adds an explicit context caching API for Gemini models to reduce costs by up to 90% when reusing large amounts of content.

- Add ReqLLM.Providers.Google.CachedContent module for cache CRUD operations
- Support for both Google AI Studio and Vertex AI (when Gemini support is added)
- Add cached_content provider option to reference existing caches
- Comprehensive tests for cache creation, listing, updating, and deletion (see the sketch after the example below)
- Documentation and examples in Google provider moduledoc
- Updated CHANGELOG

```elixir
# Create an explicit cache entry for a large document, kept for one hour
# (ttl: "3600s"). `large_document` is assumed to be a string bound earlier.
{:ok, cache} = ReqLLM.Providers.Google.CachedContent.create(
  provider: :google,
  model: "gemini-2.5-flash",
  api_key: System.get_env("GOOGLE_API_KEY"),
  contents: [%{role: "user", parts: [%{text: large_document}]}],
  ttl: "3600s"
)

# Reference the cache by name on subsequent requests via the
# `cached_content` provider option.
{:ok, response} = ReqLLM.generate_text(
  "google:gemini-2.5-flash",
  "Question about the document?",
  provider_options: [cached_content: cache.name]
)

# Usage reporting includes the number of tokens served from the cache.
IO.inspect(response.usage.cached_tokens)
```

Minimum token counts for context caching:

- Gemini 2.5 Flash: 1,024 tokens
- Gemini 2.5 Pro: 4,096 tokens
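
The PR text also mentions listing, updating, and deleting caches but does not show those calls, so the following is a hedged sketch only: it assumes the remaining CRUD functions mirror `create/1`'s keyword-options style, and the names `list/1`, `update/2`, and `delete/2` are hypothetical, not confirmed API.

```elixir
# Hedged sketch: the function names and signatures below are assumptions
# modeled on CachedContent.create/1 above; consult the module docs for the real API.
alias ReqLLM.Providers.Google.CachedContent

opts = [provider: :google, api_key: System.get_env("GOOGLE_API_KEY")]

# List existing cached-content entries (hypothetical list/1).
{:ok, _caches} = CachedContent.list(opts)

# Extend the TTL of the cache created earlier (hypothetical update/2);
# `cache` is the struct returned by create/1 in the example above.
{:ok, _updated} = CachedContent.update(cache.name, opts ++ [ttl: "7200s"])

# Remove the cache once it is no longer needed (hypothetical delete/2).
:ok = CachedContent.delete(cache.name, opts)
```
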
Pull Request #193: feat: Add Google Context Caching support for Gemini models

6 of 159 new or added lines in 2 files covered. (3.77%)

3 existing lines in 1 file now uncovered.

3594 of 7343 relevant lines covered (48.94%)

57.29 hits per line
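
As a quick sanity check (plain arithmetic on the figures above, not data from the build itself), the reported percentages are straight ratios:

```elixir
# Recomputing the headline numbers reported on this page.
new_line_coverage = 6 / 159 * 100      # ≈ 3.77% of new or added lines covered
overall_coverage  = 3594 / 7343 * 100  # ≈ 48.94% of relevant lines covered
delta             = 48.945 - 49.986    # ≈ -1.04 points, displayed as -1.0%
```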

Source Files on feat/google-context-caching: 84 listed, 2 changed (0 source changed, 2 coverage changed)

Recent builds

Builds · Branch · Commit · Type · Ran · Committer · Via · Coverage

  • 945b7128... · feat/google-context-caching · feat: Add Google Context Caching support for Gemini models · Pull #193 · 18 Nov 2025 08:08AM UTC · neilberkman · github · 48.94%
  • 68dcc54c... · feat/google-context-caching · feat: Add Google Context Caching support for Gemini models · Pull #193 · 18 Nov 2025 07:59AM UTC · neilberkman · github · 48.99%
  • 20cb655c... · feat/google-context-caching · feat: Add Google Context Caching support for Gemini models · Pull #193 · 06 Nov 2025 12:46AM UTC · neilberkman · github · 51.79%
See All Builds (471)
