
agentjido / req_llm / 8975f67f117f9461da65570207258acc4bd2e4e9-PR-192 / 3
Coverage: 50% (main: 49%)

Build:
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Ran 06 Nov 2025 12:01AM UTC
Files 89
Run time 3s
06 Nov 2025 12:00AM UTC coverage: 52.721% (+0.04%) from 52.683%
8975f67f117f9461da65570207258acc4bd2e4e9-PR-192.3

Pull #192 · github · neilberkman
Fix: Extract cached token counts from Google API responses

Fixes #191

Google's API returns cachedContentTokenCount in usageMetadata for both
implicit (automatic) and explicit (CachedContent API) prompt caching.
The provider was not extracting this field, causing cached_tokens to
always be 0 even when caching was active.

Changes:
- Extract cachedContentTokenCount from Google responses
- Convert to OpenAI-compatible prompt_tokens_details.cached_tokens format
- Affects both google and google-vertex providers using Gemini models
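The conversion the changes describe can be sketched as follows. This is a minimal Python illustration, not the provider's actual Elixir code: the input keys (`promptTokenCount`, `candidatesTokenCount`, `totalTokenCount`, `cachedContentTokenCount`) follow Google's documented `usageMetadata` shape, the output mirrors OpenAI's `usage` object with `prompt_tokens_details.cached_tokens`, and the helper name is hypothetical.

```python
def normalize_google_usage(usage_metadata: dict) -> dict:
    """Map Google Gemini usageMetadata to an OpenAI-style usage dict.

    cachedContentTokenCount is reported for both implicit (automatic)
    and explicit (CachedContent API) prompt caching; when absent,
    cached_tokens falls back to 0 rather than being dropped.
    """
    cached = usage_metadata.get("cachedContentTokenCount", 0)
    return {
        "prompt_tokens": usage_metadata.get("promptTokenCount", 0),
        "completion_tokens": usage_metadata.get("candidatesTokenCount", 0),
        "total_tokens": usage_metadata.get("totalTokenCount", 0),
        "prompt_tokens_details": {"cached_tokens": cached},
    }
```

Before the fix, the equivalent of the `cached` lookup was missing, so `cached_tokens` stayed 0 even when Google reported a nonzero `cachedContentTokenCount`.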

3991 of 7570 relevant lines covered (52.72%)

88.26 hits per line

Source Files on job 8975f67f117f9461da65570207258acc4bd2e4e9-PR-192.3
  • Files: 89
  • Changed: 7
  • Source Changed: 0
  • Coverage Changed: 7