
agentjido / req_llm / d9a1ccf682f70b62258ee8b4d5e5989f5e1bca41-PR-174 / 3

Last build branch: feat/load-dotenv-config
Default branch: main (coverage: 49%)
Ran 03 Nov 2025 06:14PM UTC · 90 files · run time 3s


03 Nov 2025 06:10PM UTC coverage: 52.53% (+0.1%) from 52.385%
Job: d9a1ccf682f70b62258ee8b4d5e5989f5e1bca41-PR-174.3

Pull #174 (github) by neilberkman
Add OAuth2 token caching for Google Vertex AI

OAuth2 token generation on every request adds 60-180ms of overhead
(file I/O, JWT signing, and an HTTP round trip). Tokens are valid for
1 hour, so repeating this work on every request is wasteful.

Implementation:
- GenServer + ETS cache storing tokens per service account
- 55-minute TTL (5-minute safety margin under the 1-hour token lifetime)
- GenServer serializes refresh requests to prevent duplicate fetches
- Per-node cache (no distributed coordination needed)
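The PR itself is Elixir (GenServer + ETS, not shown here); as a language-agnostic illustration of the design above, here is a minimal Python sketch of the same idea: a per-key TTL cache whose refreshes are serialized by a lock, so concurrent callers never trigger duplicate token fetches. All names (`TokenCache`, `fetch_token`) are hypothetical and not taken from the PR.

```python
import threading
import time

TOKEN_LIFETIME_S = 3600  # Google OAuth2 access tokens live for 1 hour
SAFETY_MARGIN_S = 300    # refresh 5 minutes early, as in the PR
TTL_S = TOKEN_LIFETIME_S - SAFETY_MARGIN_S  # 55-minute effective TTL


class TokenCache:
    """Per-process token cache keyed by service account (hypothetical sketch)."""

    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # slow path: JWT signing + HTTP round trip
        self._cache = {}                 # service_account -> (token, expires_at)
        self._lock = threading.Lock()    # serializes refreshes, like the GenServer mailbox

    def get(self, service_account):
        now = time.monotonic()
        with self._lock:
            entry = self._cache.get(service_account)
            if entry and entry[1] > now:
                return entry[0]          # fast path: cached, unexpired token
            # Only one caller at a time reaches here, so no duplicate fetches.
            token = self._fetch_token(service_account)
            self._cache[service_account] = (token, now + TTL_S)
            return token
```

Holding the lock across the fetch is the simplest way to mirror a GenServer's serialized message handling; a production version might fetch outside the lock with per-key in-flight tracking to avoid blocking unrelated service accounts.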

Performance impact:
- Before: 1000 requests = 60-180 seconds auth overhead
- After: 1000 requests = 60-180ms auth overhead (first request only)
- Improvement: 99.9% reduction in auth overhead
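The claimed reduction follows directly from the numbers above: only the first of N requests pays the 60-180ms auth cost. A quick check of the arithmetic, using the lower bound of the quoted range:

```python
N = 1000
per_request_ms = 60  # lower bound of the quoted 60-180ms auth overhead

before_ms = N * per_request_ms        # every request generates a token
after_ms = 1 * per_request_ms         # only the first request fetches one
reduction = 1 - after_ms / before_ms  # fraction of auth overhead removed

print(f"{before_ms // 1000}s -> {after_ms}ms ({reduction:.1%} reduction)")
# → 60s -> 60ms (99.9% reduction)
```

The same ratio holds at the upper bound (180s → 180ms), since the reduction is 1 - 1/N regardless of the per-request cost.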

3955 of 7529 relevant lines covered (52.53%)

89.08 hits per line

Source files on job d9a1ccf682f70b62258ee8b4d5e5989f5e1bca41-PR-174.3: 90 total, 9 changed (0 source changes, 9 coverage changes).