
agentjido / req_llm / c8a003e22df78d7303d04d385ce01232134fbbcd
Coverage: 49%

Build:
DEFAULT BRANCH: main
Ran: 17 Dec 2025 01:39AM UTC
Jobs: 0
Files: 0
Run time: –
Status: pending completion
Commit: c8a003e22df78d7303d04d385ce01232134fbbcd
Event: push
CI: github
Committer: web-flow

feat(anthropic): Add message caching support for conversation prefixes (#281)

* feat(anthropic): Add message caching support for conversation prefixes

Add an `anthropic_cache_messages` option to cache the entire conversation
history for multi-turn conversations. When enabled alongside
`anthropic_prompt_cache`, it adds a cache breakpoint at the last message,
allowing subsequent requests with identical messages to be served from
cache (see the usage sketch after the list below).

- Add anthropic_cache_messages boolean option to provider schema
- Implement maybe_cache_last_message/3 in caching pipeline
- Add cache_control to last content block of last message
- Preserve existing cache_control on content blocks
- Support TTL via existing anthropic_prompt_cache_ttl option
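
A minimal usage sketch. Only the `anthropic_*` option names come from this
commit; the `ReqLLM.generate_text/3` call shape, the `:provider_options` key,
and the plain-map message format are assumptions about the surrounding
library API, not confirmed here.

```elixir
# Hedged sketch: generate_text/3, :provider_options, and the message shape
# are assumptions; the anthropic_* options are the ones this commit adds.
messages = [
  %{role: :system, content: "You are a helpful assistant."},
  %{role: :user, content: "Summarize our discussion so far."}
]

{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-3-5-sonnet-20241022",
    messages,
    provider_options: [
      # Opt in to Anthropic prompt caching for this request
      anthropic_prompt_cache: true,
      # Also place a cache breakpoint on the last message, so the whole
      # conversation prefix can be served from cache on the next turn
      anthropic_cache_messages: true
    ]
  )
```

On the next turn, a request whose messages start with the same cached prefix
can be served from Anthropic's cache instead of reprocessing every prior
message.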

* feat(anthropic): Add offset support to message caching

Extends anthropic_cache_messages to accept integer offsets for placing
cache breakpoints at specific message positions using standard negative
indexing (-1 = last, -2 = second-to-last, etc.); a sketch follows the
list below.

Changes:
- Add offset support with standard negative indexing convention
- Keep `true` as alias for `-1` (last message)
- Add nil-safety for edge cases
- Add comprehensive tests for offset scenarios
- Update documentation with offset examples
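
A sketch of the offset form, under the same API assumptions as the previous
sketch (and reusing its `messages` binding): `-2` places the breakpoint on
the second-to-last message, so only the newest turn is processed uncached.

```elixir
# Same assumed generate_text/3 shape as above; only the
# anthropic_cache_messages value changes.
{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-3-5-sonnet-20241022",
    messages,
    provider_options: [
      anthropic_prompt_cache: true,
      # Negative indexing: -1 is the last message (alias: `true`),
      # -2 the second-to-last, and so on.
      anthropic_cache_messages: -2
    ]
  )
```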

* fix: Add anthropic_cache_messages to Bedrock and Vertex schemas

The option was missing from both provider schemas, causing validation
errors when using message caching with Claude models. Both providers
already support the feature via shared Anthropic.maybe_apply_prompt_caching.

Also adds anthropic_prompt_cache and anthropic_prompt_cache_ttl to the
Google Vertex schema for consistency.
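
A sketch of the same options against a Bedrock model spec, which per this
fix should now pass schema validation. The `bedrock:` prefix, the model
identifier, and the `"1h"` TTL value are illustrative assumptions, not
taken from this commit.

```elixir
# Illustrative only: the "bedrock:" prefix, model ID, and "1h" TTL format
# are assumptions; the option names are the ones these schemas now accept.
{:ok, response} =
  ReqLLM.generate_text(
    "bedrock:anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages,
    provider_options: [
      anthropic_prompt_cache: true,
      anthropic_prompt_cache_ttl: "1h",
      anthropic_cache_messages: true
    ]
  )
```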

* fix(bedrock): Remove incorrect Converse API requirement for inference profiles

Cross-region inference profiles (us., eu., global., etc.) were incorrectly
forced to use the Converse API. AWS docs confirm InvokeModel works fine with
inference profiles, so this restriction was unnecessary and prevented full
prompt caching support when using tools.

Ref: https://docs.aws.amaz... (continued)
Source Files on build c8a003e22df78d7303d04d385ce01232134fbbcd
Detailed source file information is not available for this build.