
agentjido / req_llm / 9d29161a52b826f7e1f1f9a04f7e64e319b7801a-PR-174 / 2

Build:
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Ran 03 Nov 2025 06:13PM UTC
Files 90
Run time 2s

03 Nov 2025 06:10PM UTC coverage: 52.504% (+0.1%) from 52.385%
9d29161a52b826f7e1f1f9a04f7e64e319b7801a-PR-174.2

Pull #174 · github · neilberkman
Add OAuth2 token caching for Google Vertex AI

OAuth2 token generation on every request adds 60-180ms overhead
(file I/O, JWT signing, HTTP round trip). Tokens are valid for 1 hour,
so this overhead is wasteful for repeated requests.

Implementation:
- GenServer + ETS cache storing tokens per service account
- 55-minute TTL (5-minute safety margin before the 1-hour expiry)
- GenServer serializes concurrent refresh requests to prevent duplicates
- Per-node cache (no distributed coordination needed)
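The design above can be sketched in Elixir. This is a minimal illustration, not the PR's actual code: the module name `TokenCache`, the `fetch_fun` callback, and the table name are all hypothetical. It shows the two key properties from the list: lock-free ETS reads on the fast path, and refreshes funneled through the GenServer so concurrent cache misses trigger at most one token fetch.

```elixir
defmodule TokenCache do
  # Hypothetical sketch of a per-node GenServer + ETS token cache;
  # names and signatures are illustrative, not the PR's API.
  use GenServer

  @table :vertex_token_cache
  @ttl_ms :timer.minutes(55)  # 5-minute margin before the 1-hour expiry

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  # Fast path: read straight from ETS without calling the GenServer.
  def get_token(service_account, fetch_fun) do
    case :ets.lookup(@table, service_account) do
      [{^service_account, token, expires_at}] ->
        if System.monotonic_time(:millisecond) < expires_at do
          token
        else
          refresh(service_account, fetch_fun)
        end

      [] ->
        refresh(service_account, fetch_fun)
    end
  end

  # Slow path: the GenServer serializes refreshes, so concurrent
  # callers that miss the cache cause at most one token fetch.
  defp refresh(service_account, fetch_fun) do
    GenServer.call(__MODULE__, {:refresh, service_account, fetch_fun})
  end

  @impl true
  def init(_opts) do
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:refresh, service_account, fetch_fun}, _from, state) do
    # Re-check ETS: another caller may have refreshed while we queued.
    case :ets.lookup(@table, service_account) do
      [{^service_account, token, expires_at}] ->
        if System.monotonic_time(:millisecond) < expires_at do
          {:reply, token, state}
        else
          {:reply, do_fetch(service_account, fetch_fun), state}
        end

      [] ->
        {:reply, do_fetch(service_account, fetch_fun), state}
    end
  end

  defp do_fetch(service_account, fetch_fun) do
    # JWT signing + HTTP round trip happen inside fetch_fun.
    token = fetch_fun.()
    expires_at = System.monotonic_time(:millisecond) + @ttl_ms
    :ets.insert(@table, {service_account, token, expires_at})
    token
  end
end
```

The double lookup in `handle_call` matters: a caller that queued behind an in-flight refresh finds the fresh token already in ETS and returns it without a second fetch.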

Performance impact:
- Before: 1000 requests = 60-180 seconds auth overhead
- After: 1000 requests = 60-180ms auth overhead (first request only)
- Improvement: 99.9% reduction in auth overhead
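The figures above follow from simple arithmetic: with a 60–180 ms per-request cost, 1000 uncached requests pay that cost 1000 times, while the cached path pays it once. A quick check (illustrative only):

```elixir
# Back-of-envelope check of the figures above (times in milliseconds).
requests = 1000
per_request_max = 180                       # worst-case uncached auth overhead

before_total = per_request_max * requests   # every request pays: 180_000 ms = 180 s
after_total = per_request_max               # only the first request fetches a token

reduction = 1.0 - after_total / before_total
# 1 - 180 / 180_000 = 0.999, i.e. a 99.9% reduction
```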

3953 of 7529 relevant lines covered (52.5%)

89.33 hits per line

Source Files on job 9d29161a52b826f7e1f1f9a04f7e64e319b7801a-PR-174.2
  • Tree
  • List 90
  • Changed 10
  • Source Changed 0
  • Coverage Changed 10

© 2026 Coveralls, Inc