agentjido / req_llm

Coverage: 23% (main: 49%)
Last build branch: improve-metadata-provider-errors
Default branch: main
Repo added: 15 Sep 2025 11:42AM UTC
Builds: 103 · Files: 67
Last build on branch feature/unique-model-provider-options

Branches:
  • feature/unique-model-provider-options
  • add-vllm-provider
  • bedrock-clean
  • bug/api-return-types
  • bug/codec-tool-calls
  • bug/debug-stream-return
  • bug/incorrect-model-spec-docs
  • bug/openai-tool-calls
  • bug/streaming-race-condition
  • cerebras
  • chore/refresh-coverage-tests
  • chore/update-models-2025-09-21
  • copilot/fix-32
  • devtools
  • feat/context-json-serialization
  • feat/google-upload-file
  • feat/in-type-support
  • feat/structured-output-openai-google
  • feature/cerebras-provider
  • feature/pre-release-fixes
  • feature/refactor-llm-api-fixtures
  • feature/refined-key-management
  • fix-anthropic-streaming
  • fix-duplicate-clause
  • fix-google
  • fix-groq-stream-error
  • fix-mix-task-docs
  • fix-openai-max-tokens-param
  • fix/cost-calculation-in-usage
  • fix/google-file-support
  • fix/proxy-options
  • fix/registry-get-provider-nil-module
  • google-vision
  • improve-metadata-provider-errors
  • main
  • patch-1
  • refactor/req-streaming

15 Sep 2025 02:24PM UTC coverage: 22.826% (+0.9%) from 21.958%
Commit 56683a4cb50c99c9f38265cfa4419f20280ad9f8 (PR #11)

Pull #11 · via github · mikehostetler

Enhance OpenAI provider parameter handling

- Introduced a new validation function `validate_mutex!/3` in the `ReqLLM.Provider.DSL` module to ensure that conflicting parameters are not used together, improving error handling for provider options.
- Updated the OpenAI provider to dynamically determine the appropriate token parameter (`max_tokens` or `max_completion_tokens`) based on the model name, enhancing compatibility with different OpenAI models. (Both changes are sketched below.)

This change improves the robustness of parameter handling and aligns with the recent enhancements for model-specific option translation.
Pull Request #11: Fix OpenAI o1 model parameter translation (resolves #8)
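
For illustration, here is a minimal sketch of the mutual-exclusion check described above. The name and arity match `validate_mutex!/3`, but the argument order, error type, and message format are assumptions; the actual implementation in `ReqLLM.Provider.DSL` may differ.

```elixir
defmodule MutexCheck do
  # Hypothetical stand-in for ReqLLM.Provider.DSL.validate_mutex!/3:
  # raise if more than one of the mutually exclusive `keys` appears in `opts`.
  def validate_mutex!(opts, keys, provider) do
    present = Enum.filter(keys, &Keyword.has_key?(opts, &1))

    case present do
      # Two or more conflicting options were passed together.
      [_, _ | _] ->
        raise ArgumentError,
              "#{provider}: options #{inspect(present)} are mutually exclusive; pass only one"

      # Zero or one of the keys present: the options are valid as given.
      _ ->
        opts
    end
  end
end

# Usage: only one of :max_tokens / :max_completion_tokens may be set.
MutexCheck.validate_mutex!(
  [max_completion_tokens: 1024],
  [:max_tokens, :max_completion_tokens],
  "openai"
)
```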

2 of 34 new or added lines in 2 files covered. (5.88%)

269 existing lines in 9 files now uncovered.

525 of 2300 relevant lines covered (22.83%)

122.19 hits per line

Source files on feature/unique-model-provider-options: 52 listed, 11 changed (0 with source changes, 11 with coverage changes)

Recent builds

All of the builds below ran on feature/unique-model-provider-options for Pull #11, committed by mikehostetler via github:

  • 56683a4c... · 15 Sep 2025 02:25PM UTC · 22.83% coverage · Enhance OpenAI provider parameter handling - Introduced a new validation function `validate_mutex!/3` in the `ReqLLM.Provider.DSL` module to ensure that conflicting parameters are not used together, improving error handling for provider options. ...
  • 42c3daea... · 15 Sep 2025 02:21PM UTC · 22.18% coverage · Add translation for OpenAI o3 model parameters - Implemented `translate_options/3` for OpenAI o3 models to rename `max_tokens` to `max_completion_tokens` and drop the unsupported `temperature` parameter. - This enhancement ensures compatibility w... (see the sketch after this list)
  • 1c52a981... · 15 Sep 2025 01:47PM UTC · 22.22% coverage · Remove extra demo scripts to verify
  • ef60c770... · 15 Sep 2025 01:46PM UTC · 21.16% coverage · Fix OpenAI o1 model parameter translation - Add translate_options/3 callback to Provider behavior for model-specific parameter translation - Implement OpenAI o1 model parameter translation: max_tokens → max_completion_tokens and drop temperature ...

See All Builds (103)
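
The `translate_options/3` callback referenced in these builds renames `max_tokens` to `max_completion_tokens` and drops the unsupported `temperature` parameter for OpenAI reasoning models. A minimal sketch of that translation, assuming the callback receives the operation, the model name, and the option keyword list; the real callback signature and return shape in ReqLLM may differ, and the model-name prefix check is an assumption based on the commit messages:

```elixir
defmodule OpenAITranslate do
  # Hypothetical sketch of model-specific option translation in the spirit
  # of the Provider behaviour's translate_options/3 callback.
  def translate_options(:chat, model_name, opts) when is_binary(model_name) do
    if reasoning_model?(model_name) do
      opts
      # o1/o3 models take :max_completion_tokens instead of :max_tokens...
      |> rename_key(:max_tokens, :max_completion_tokens)
      # ...and reject :temperature outright, so it is dropped.
      |> Keyword.delete(:temperature)
    else
      opts
    end
  end

  defp reasoning_model?(name),
    do: String.starts_with?(name, "o1") or String.starts_with?(name, "o3")

  defp rename_key(opts, from, to) do
    case Keyword.pop(opts, from) do
      {nil, opts} -> opts
      {value, opts} -> Keyword.put(opts, to, value)
    end
  end
end

# Usage:
OpenAITranslate.translate_options(:chat, "o1-mini", max_tokens: 512, temperature: 0.7)
#=> [max_completion_tokens: 512]
```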
