
agentjido / req_llm / 5019e621bd0188a6dffdc37aa39807f86175f998-PR-174 / 3
Coverage: 49% (main: 49%)
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Ran 03 Nov 2025 06:06PM UTC
Files: 90
Run time: 3s

03 Nov 2025 06:01PM UTC coverage: 52.477% (+0.09%) from 52.385%
5019e621bd0188a6dffdc37aa39807f86175f998-PR-174.3

Pull #174 · github · neilberkman
Add OAuth2 token caching for Google Vertex AI

OAuth2 token generation on every request adds 60-180ms overhead
(file I/O, JWT signing, HTTP round trip). Tokens are valid for 1 hour,
so this overhead is wasteful for repeated requests.

Implementation:
- GenServer + ETS cache storing tokens per service account
- 55-minute TTL (5-minute safety margin under the 1-hour token lifetime)
- GenServer prevents thundering herd on expiry
- Per-node cache (no distributed coordination needed)
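The cache described above can be sketched as a GenServer owning an ETS table, with fast concurrent reads and serialized refreshes. This is a minimal illustration of the approach, not the PR's actual code: the module name `ReqLLM.VertexAI.TokenCache` and the stubbed `fetch_oauth2_token/1` are assumptions for the sketch.

```elixir
defmodule ReqLLM.VertexAI.TokenCache do
  # Hypothetical module name; a sketch of the GenServer + ETS caching
  # strategy described in the PR, not the actual implementation.
  use GenServer

  @table __MODULE__
  # 55-minute TTL: 5-minute safety margin under the 1-hour token lifetime.
  @ttl_ms 55 * 60 * 1000

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @doc "Returns a cached token for the service account, fetching one on a miss."
  def get_token(service_account) do
    now = System.monotonic_time(:millisecond)

    # Fast path: callers read the ETS table directly, no GenServer hop.
    case :ets.lookup(@table, service_account) do
      [{^service_account, token, expires_at}] when expires_at > now ->
        {:ok, token}

      _ ->
        # Miss or expired: go through the GenServer so concurrent callers
        # are serialized and only one OAuth2 round trip happens on expiry
        # (this is what prevents the thundering herd).
        GenServer.call(__MODULE__, {:fetch, service_account})
    end
  end

  @impl true
  def init(_opts) do
    :ets.new(@table, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:fetch, service_account}, _from, state) do
    now = System.monotonic_time(:millisecond)

    # Re-check inside the GenServer: a caller queued ahead of us may have
    # already refreshed the token.
    case :ets.lookup(@table, service_account) do
      [{^service_account, token, expires_at}] when expires_at > now ->
        {:reply, {:ok, token}, state}

      _ ->
        case fetch_oauth2_token(service_account) do
          {:ok, token} ->
            :ets.insert(@table, {service_account, token, now + @ttl_ms})
            {:reply, {:ok, token}, state}

          error ->
            {:reply, error, state}
        end
    end
  end

  # Placeholder for the real work: read the service-account key file,
  # sign a JWT, and exchange it for an access token over HTTP.
  defp fetch_oauth2_token(_service_account), do: {:error, :not_implemented}
end
```

Keeping the cache per-node, as the PR does, avoids any distributed coordination: each node pays the token-fetch cost at most once per TTL window.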

Performance impact:
- Before: 1000 requests = 60-180 seconds auth overhead
- After: 1000 requests = 60-180ms auth overhead (first request only)
- Improvement: 99.9% reduction in auth overhead

3951 of 7529 relevant lines covered (52.48%)

88.35 hits per line


© 2026 Coveralls, Inc