
agentjido / req_llm / c5ff026cd32d2490fe5d5e3f7978bd7e01bee87a

Build on default branch: main
Ran: 29 Sep 2025 08:50PM UTC
Jobs: 2
Files: 55
Run time: 1min

29 Sep 2025 08:48PM UTC coverage: 59.707% (-1.6%) from 61.323%
Commit: c5ff026cd32d2490fe5d5e3f7978bd7e01bee87a (push, via github, committed by web-flow)
Refactor streaming from Req to Finch for production stability (#63)

* Streaming refactor

* Quality

* Formatting

* Fix tests

* Enhance Anthropic provider to ensure max_tokens is always present and update context encoding.

* Add reasoning effort control for GPT-5 models and enhance response handling

- Introduced a new `--reasoning-effort` option in the `req_llm.gen` mix task to specify reasoning levels (minimal, low, medium, high) for GPT-5 models.
- Updated response handling to include reasoning token counts in usage statistics and output.
- Enhanced the Anthropic and OpenAI providers to support reasoning effort parameters and adjust request options accordingly.
- Improved decoding logic to handle reasoning content in streaming responses.

These changes enhance the flexibility and clarity of AI interactions, particularly for models with reasoning capabilities.

* Formatting

* Example agent to test streaming tool calls
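The new `--reasoning-effort` option described in the commit message could presumably be exercised from the command line. Below is a minimal sketch of such an invocation; the prompt text is an illustrative assumption, and only the `req_llm.gen` task name and the `--reasoning-effort` levels (minimal, low, medium, high) come from the commit message itself:

```shell
# Sketch only: the prompt is an assumption; the req_llm.gen task and the
# --reasoning-effort levels are taken from the commit message above.
mix req_llm.gen "Explain how streaming back-pressure works" \
  --reasoning-effort high
```

Per the commit message, reasoning token counts for such a run would then be reported in the usage statistics and output.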

317 of 904 new or added lines in 20 files covered. (35.07%)

19 existing lines in 3 files now uncovered.

1953 of 3271 relevant lines covered (59.71%)

296.15 hits per line

New Missed Lines in Diff

Lines  Coverage  ∆        File
1      90.0      -10.0%   lib/req_llm/application.ex
1      94.83     -1.47%   lib/req_llm/providers/openrouter.ex
1      98.98     -1.02%   lib/req_llm/step/usage.ex
2      6.25      0.0%     lib/req_llm.ex
5      37.14     -5.36%   lib/req_llm/generation.ex
5      68.42     -6.09%   lib/req_llm/response.ex
6      81.82              lib/req_llm/streaming.ex
19     80.79     -10.43%  lib/req_llm/context.ex
19     72.06             lib/req_llm/streaming/finch_client.ex
23     60.34             lib/req_llm/stream_response.ex
25     30.0      -9.13%   lib/req_llm/providers/anthropic/response.ex
32     74.34     -7.7%    lib/req_llm/provider/defaults.ex
33     58.22     -13.63%  lib/req_llm/providers/anthropic.ex
33     8.47      -10.12%  test/support/fixture.ex
41     63.06     -13.53%  lib/req_llm/providers/google.ex
61     60.65             lib/req_llm/stream_server.ex
109    0.0              lib/examples/agent.ex
171    0.0       0.0%    lib/mix/tasks/gen.ex

Uncovered Existing Lines

Lines  Coverage  ∆        File
2      74.34     -7.7%    lib/req_llm/provider/defaults.ex
4      0.0       0.0%     lib/mix/tasks/gen.ex
13     8.47      -10.12%  test/support/fixture.ex
Jobs

ID  Job ID                                      Ran                      Files  Coverage  Source
1   c5ff026cd32d2490fe5d5e3f7978bd7e01bee87a.1  29 Sep 2025 08:50PM UTC  55     59.71     GitHub Action Run
2   c5ff026cd32d2490fe5d5e3f7978bd7e01bee87a.2  29 Sep 2025 08:50PM UTC  55     59.52     GitHub Action Run
Source Files on build c5ff026cd32d2490fe5d5e3f7978bd7e01bee87a
  • Files: 55
  • Changed: 21
  • Source Changed: 0
  • Coverage Changed: 21

© 2026 Coveralls, Inc