
agentjido / req_llm
Coverage: 52% (main: 53%)
Last build branch: bug/stream-process-linking
Default branch: main
Repo added: 15 Sep 2025 11:42AM UTC
Builds: 257 · Files: 83

Last build on branch: feature/prompt-caching

Branches:
  • feature/prompt-caching
  • add-bedrock-structured-output
  • add-hex-changelog-and-module-grouping
  • add-json-schema-validation
  • add-vllm-provider
  • add_openai_responses_api_structured_responses
  • add_retry_step
  • allow_json_schemas
  • bedrock-clean
  • bug/api-return-types
  • bug/codec-tool-calls
  • bug/debug-stream-return
  • bug/incorrect-model-spec-docs
  • bug/object-array
  • bug/openai-tool-calls
  • bug/stream-process-linking
  • bug/streaming-nil-deltas
  • bug/streaming-race-condition
  • bug/usage_total_cost
  • cerebras
  • chore/2025-10-14-update-fixtures
  • chore/object-fixtures
  • chore/object-fixtures-resurrected
  • chore/refine-fixtures
  • chore/refresh-coverage-tests
  • chore/update-models-2025-09-21
  • copilot/fix-32
  • dependabot/hex/ex_doc-0.39.1
  • dependabot/hex/zoi-0.8.1
  • devtools
  • egomes/fix-claude-multi-turn
  • egomes/fix-tool-inspection-with-json-schema
  • feat/context-json-serialization
  • feat/google-upload-file
  • feat/in-type-support
  • feat/structured-output-openai-google
  • feature/base-url-override
  • feature/bedrock-prompt-caching
  • feature/cerebras-provider
  • feature/configurable-metadata-timeout
  • feature/model-catalog
  • feature/normalize-bedrock-inference-profiles
  • feature/pre-release-fixes
  • feature/refactor-llm-api-fixtures
  • feature/refined-key-management
  • feature/unique-model-provider-options
  • feature/upgrade-ex-aws-auth
  • feature/zai-fixtures
  • feature/zoi-schema
  • fix-anthropic-streaming
  • fix-duplicate-clause
  • fix-google
  • fix-groq-stream-error
  • fix-mix-task-docs
  • fix-openai-max-tokens-param
  • fix-retry-delay-conflict
  • fix/bug-119-aws-auth-credentials
  • fix/cost-calculation-in-usage
  • fix/google-file-support
  • fix/google-structured-output
  • fix/http2-large-request-bodies
  • fix/issue-65-http-status-validation
  • fix/issue-96-validation-error-fields
  • fix/proxy-options
  • fix/registry-get-provider-nil-module
  • fix/tool_calls
  • google-vision
  • improve-metadata-provider-errors
  • main
  • patch-1
  • put-max-tokens-model-options
  • refactor/context-tools
  • refactor/req-streaming
  • refactor/xai-structured-objects
  • remove-jido-keys
  • zai

26 Oct 2025 12:43AM UTC · coverage: 52.26% (+0.1%) from 52.113%
Commit a377c25d71d9e982bbd2eb2a57ebb8403c95b7f1 · Pull #132 · via GitHub
Committer: mikehostetler

Add Anthropic prompt caching script and related functionality

- Introduced a new script `anthropic_prompt_caching.exs` to demonstrate the use of prompt caching with Anthropic models, including options for model selection, cache TTL, and logging levels.
- Enhanced the `ReqLLM.Providers.Anthropic` module to support prompt caching, including the addition of `anthropic_prompt_cache` and `anthropic_prompt_cache_ttl` options.
- Updated the request preparation process to inject cache control metadata into tools and system messages.
- Implemented usage tracking for cached tokens in the response handling, allowing for cost savings analysis.
- Added comprehensive unit tests for the new caching functionality, covering various scenarios including header injection and cache control handling.

These changes improve the efficiency and cost-effectiveness of using Anthropic models by leveraging prompt caching capabilities.
Pull Request #132: Add Anthropic prompt caching support
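Injecting cache control metadata into system messages and tools (the second and third bullets above) boils down to tagging content blocks in the request payload. The sketch below is a hypothetical illustration in Python, not ReqLLM's actual Elixir code: the block shape follows Anthropic's documented `cache_control` format, and the mapping from the PR's `anthropic_prompt_cache` / `anthropic_prompt_cache_ttl` options is an assumption.

```python
# Hypothetical sketch of the cache_control metadata the Anthropic Messages API
# expects on cacheable system/tool blocks (not ReqLLM's actual implementation).

def with_cache_control(block, ttl=None):
    """Return a copy of a content block with ephemeral cache_control attached."""
    cache = {"type": "ephemeral"}  # "ephemeral" is the documented cache type
    if ttl is not None:
        cache["ttl"] = ttl  # e.g. "5m" or "1h"
    return {**block, "cache_control": cache}

# A system prompt marked as cacheable, roughly what the PR's
# anthropic_prompt_cache / anthropic_prompt_cache_ttl options would produce:
system = [
    with_cache_control(
        {"type": "text", "text": "You are a helpful assistant."}, ttl="5m"
    )
]
print(system[0]["cache_control"])  # {'type': 'ephemeral', 'ttl': '5m'}
```

The provider then only has to append this decorated block list to the outgoing request body; the model-side cache is keyed on the prefix of blocks carrying `cache_control`.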

34 of 49 new or added lines in 4 files covered. (69.39%)

4 existing lines in 3 files now uncovered.

3608 of 6904 relevant lines covered (52.26%)

434.68 hits per line
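The percentages above follow directly from the raw line counts; a quick sanity check (plain arithmetic, no Coveralls API involved):

```python
# Reproduce the dashboard's coverage figures from the raw line counts.
relevant, covered = 6904, 3608
overall = round(100 * covered / relevant, 2)
print(overall)  # 52.26 — the reported total coverage

new_relevant, new_covered = 49, 34
pr_coverage = round(100 * new_covered / new_relevant, 2)
print(pr_coverage)  # 69.39 — coverage of the PR's new/changed lines
```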

Source files on feature/prompt-caching: 83 total, 6 with coverage changes, 0 with source changes.

Recent builds

Build a377c25d... on feature/prompt-caching · Pull #132 · ran 26 Oct 2025 12:44AM UTC · committer mikehostetler via GitHub · coverage 52.26%
"Add Anthropic prompt caching script and related functionality"

See All Builds (257)

© 2025 Coveralls, Inc