Build has been canceled!

agentjido / req_llm / 29628b74c06932ed0446c0df4c833c4a7564f4da-PR-174 / 4
Coverage: 49% (main: 49%)

Build:
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Ran 18 Nov 2025 08:37AM UTC
Files 85
Run time 3s

18 Nov 2025 08:35AM UTC coverage: 49.09% (+0.2%) from 48.917%
29628b74c06932ed0446c0df4c833c4a7564f4da-PR-174.4

Pull #174 · github · neilberkman
Add OAuth2 token caching for Google Vertex AI

OAuth2 token generation on every request adds 60-180ms overhead
(file I/O, JWT signing, HTTP round trip). Tokens are valid for 1 hour,
so this overhead is wasteful for repeated requests.

Implementation:
- GenServer + ETS cache storing tokens per service account
- 55 minute TTL (5 minute safety margin before 1 hour expiry)
- GenServer serializes concurrent refresh requests to prevent duplicates
- Per-node cache (no distributed coordination needed)

Performance impact:
- Before: 1000 requests = 60-180 seconds auth overhead
- After: 1000 requests = 60-180ms auth overhead (first request only)
- Improvement: 99.9% reduction in auth overhead
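
The caching approach described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the module name `TokenCache`, the `fetch_fun` callback, and the API shape are assumptions; only the design (GenServer-owned ETS table, per-account entries, 55-minute TTL, refreshes serialized through the GenServer) comes from the description.

```elixir
defmodule TokenCache do
  @moduledoc """
  Hypothetical sketch of the caching design described in PR #174:
  a GenServer owning an ETS table that caches one token per service
  account with a 55-minute TTL (5-minute margin before 1-hour expiry).
  """
  use GenServer

  @ttl_ms 55 * 60 * 1000

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  # Fast path: read the named ETS table directly, no GenServer call.
  # Slow path: go through the GenServer so concurrent cache misses
  # are serialized and only one token fetch happens per expiry.
  def get_token(account, fetch_fun) do
    now = System.monotonic_time(:millisecond)

    case :ets.lookup(__MODULE__, account) do
      [{^account, token, expires_at}] when expires_at > now -> token
      _ -> GenServer.call(__MODULE__, {:refresh, account, fetch_fun})
    end
  end

  @impl true
  def init(_opts) do
    table = :ets.new(__MODULE__, [:named_table, :set, read_concurrency: true])
    {:ok, table}
  end

  @impl true
  def handle_call({:refresh, account, fetch_fun}, _from, table) do
    now = System.monotonic_time(:millisecond)

    # Re-check inside the GenServer: another caller may have already
    # refreshed this account while our request sat in the mailbox.
    case :ets.lookup(table, account) do
      [{^account, token, expires_at}] when expires_at > now ->
        {:reply, token, table}

      _ ->
        token = fetch_fun.()
        :ets.insert(table, {account, token, now + @ttl_ms})
        {:reply, token, table}
    end
  end
end
```

The double lookup (once outside the GenServer, once inside) is what prevents duplicate refreshes: readers never block each other on the fast path, and any burst of simultaneous misses collapses into a single `fetch_fun` call.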

3616 of 7366 relevant lines covered (49.09%)

14.31 hits per line

Source Files on job 29628b74c06932ed0446c0df4c833c4a7564f4da-PR-174.4
  • Files: 85
  • Changed: 3
  • Source changed: 0
  • Coverage changed: 3