
agentjido / req_llm / 87679f7ddf3bee2963e2045d91c857908208349c / 1

Build:
DEFAULT BRANCH: main
Ran 18 Nov 2025 09:07AM UTC
Files 85
Run time 4s
18 Nov 2025 09:07AM UTC coverage: 49.077% (+0.2%) from 48.917%
87679f7ddf3bee2963e2045d91c857908208349c.1

push · github · web-flow
Add OAuth2 token caching for Google Vertex AI (#174)

OAuth2 token generation on every request adds 60-180ms overhead
(file I/O, JWT signing, HTTP round trip). Tokens are valid for 1 hour,
so this overhead is wasteful for repeated requests.

Implementation:
- GenServer + ETS cache storing tokens per service account
- 55-minute TTL (5-minute safety margin before the 1-hour expiry)
- GenServer serializes concurrent refresh requests to prevent duplicates
- Per-node cache (no distributed coordination needed)

Performance impact:
- Before: 1000 requests = 60-180 seconds auth overhead
- After: 1000 requests = 60-180ms auth overhead (first request only)
- Improvement: 99.9% reduction in auth overhead
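The caching approach the commit describes can be sketched as below. This is a hypothetical illustration, not the actual req_llm implementation: the module name `VertexTokenCache` and the stubbed `generate_token/1` are invented for the example, and the real OAuth2 flow (file I/O, JWT signing, HTTP token exchange) is elided.

```elixir
defmodule VertexTokenCache do
  @moduledoc """
  Hypothetical sketch: GenServer + ETS cache storing OAuth2 tokens
  per service account, with a 55-minute TTL.
  """
  use GenServer

  @table :vertex_token_cache
  # 55-minute TTL: 5-minute safety margin before the 1-hour token expiry.
  @ttl_ms 55 * 60 * 1000

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @doc "Returns a cached token for the service account, refreshing if stale."
  def fetch_token(service_account) do
    case lookup(service_account) do
      {:ok, token} -> {:ok, token}
      :miss -> GenServer.call(__MODULE__, {:refresh, service_account})
    end
  end

  @impl true
  def init(_opts) do
    # Public named table so reads happen in the caller, not the GenServer.
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:refresh, service_account}, _from, state) do
    # Re-check inside the GenServer: the single server process serializes
    # refreshes, so a concurrent caller may have already refreshed while
    # this request was queued. This prevents duplicate token fetches.
    case lookup(service_account) do
      {:ok, token} ->
        {:reply, {:ok, token}, state}

      :miss ->
        token = generate_token(service_account)
        expires_at = System.monotonic_time(:millisecond) + @ttl_ms
        :ets.insert(@table, {service_account, token, expires_at})
        {:reply, {:ok, token}, state}
    end
  end

  defp lookup(service_account) do
    now = System.monotonic_time(:millisecond)

    case :ets.lookup(@table, service_account) do
      [{^service_account, token, expires_at}] when expires_at > now -> {:ok, token}
      _ -> :miss
    end
  end

  # Placeholder for the real OAuth2 flow (JWT signing + HTTP exchange).
  defp generate_token(_service_account) do
    "token-" <> Integer.to_string(System.unique_integer([:positive]))
  end
end
```

Because the cache is per-node ETS keyed on the service account, repeated requests after the first pay only an ETS read; only the first request (or the first after TTL expiry) pays the full auth round trip, which matches the before/after numbers above.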

3615 of 7366 relevant lines covered (49.08%)

14.31 hits per line

Source Files on job 87679f7ddf3bee2963e2045d91c857908208349c.1: 85 files listed, 2 changed (coverage changed in 2, source changed in 0).

© 2026 Coveralls, Inc