
agentjido / req_llm · commit 8b53950fe41a3cbabd2badbab0d00667b79ddc57
Coverage: 49%
Default branch: main
Ran: 29 Nov 2025 06:58 PM UTC
Jobs: 0 · Files: 0 · Run time: –
Build status: pending completion
Push event via GitHub (committer: web-flow)
Add Azure OpenAI provider (#127) (#245)

* refactor: Simplify pattern matching in provider decode functions

Remove unnecessary nested case statements in extract_from_content and
extract_from_json_schema_content functions. The pattern match on
response.message is guaranteed to succeed, so the fallback case is
unreachable.

* fix: Improve streaming response handling and usage parsing

- Handle streaming chunks with both finish_reason AND usage in same event
- Normalize usage data consistently across all streaming paths
- Always build message from chunks (empty list is valid)
- Simplify extract_from_json_schema_content pattern matching
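The first two bullets address a subtle streaming case: some providers send a terminal event that carries both a `finish_reason` and the usage block, so treating the two as mutually exclusive branches silently drops one of them. A language-neutral sketch of the accumulator logic (illustrative Python, not the library's actual Elixir code; the chunk and usage field names are assumptions for the example):

```python
def accumulate_stream(events):
    """Fold streamed chunk events into (text, finish_reason, usage).

    Content, finish_reason, and usage are checked independently, so a
    single event carrying both finish_reason AND usage contributes to
    both, instead of being routed down only one branch.
    """
    parts, finish_reason, usage = [], None, None
    for event in events:
        if event.get("content"):
            parts.append(event["content"])
        if event.get("finish_reason") is not None:
            finish_reason = event["finish_reason"]
        if event.get("usage") is not None:
            # Normalize usage keys the same way on every streaming path.
            u = event["usage"]
            usage = {
                "input_tokens": u.get("prompt_tokens", 0),
                "output_tokens": u.get("completion_tokens", 0),
            }
    # An empty chunk list is valid: return an empty message, not an error.
    return "".join(parts), finish_reason, usage
```

Note the final event in a stream exercises all three checks at once when a provider bundles the stop reason with token counts.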

* refactor: Extract stub tools helper to Anthropic AdapterHelpers

Move extract_stub_tools_from_messages to shared AdapterHelpers module
for reuse by Azure and other providers. Also fix context merging in
Amazon Bedrock Anthropic decoder to properly preserve conversation
history.

* fix: Stream response object extraction and tool call handling

- Use AdapterHelpers.extract_and_set_object for consistent structured
  output extraction across streaming and non-streaming paths
- Improve extract_structured_output_args to handle ToolCall structs
- Fix streaming test to collect chunks before multiple iterations
- Handle adaptive reasoning models that may skip reasoning for simple
  prompts

* refactor: Extract OpenAI adapter helpers for shared model logic

Add ReqLLM.Providers.OpenAI.AdapterHelpers with shared functions for:
- Reasoning model detection (o1, o3, o4, gpt-4.1, gpt-5, codex)
- Token limit parameter handling (max_tokens vs max_completion_tokens)
- Tool choice format translation
- Strict mode tool schema normalization

Update param_profiles to use shared helpers. Fix o1 model test to
expect all sampling parameters dropped (not just temperature).
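The detection-plus-translation behavior described above can be sketched in a few lines (illustrative Python, not the actual `ReqLLM.Providers.OpenAI.AdapterHelpers` module; the model-prefix list is taken from the commit message, and the `max_tokens` → `max_completion_tokens` rename matches OpenAI's documented requirement for reasoning models):

```python
# Prefix list as enumerated in the commit message above.
REASONING_PREFIXES = ("o1", "o3", "o4", "gpt-4.1", "gpt-5", "codex")

def is_reasoning_model(model: str) -> bool:
    """Detect reasoning models by model-id prefix."""
    return model.startswith(REASONING_PREFIXES)

def translate_token_limit(model: str, params: dict) -> dict:
    """Reasoning models take max_completion_tokens; others take max_tokens."""
    params = dict(params)  # avoid mutating the caller's params
    if is_reasoning_model(model) and "max_tokens" in params:
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params
```

Centralizing this in one helper is what lets `param_profiles` and the test suite agree on exactly which models drop which sampling parameters.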

* fix: Responses API tool format for strict mode

The Responses API requires a flat tool format, not the nested format
used by Chat Completions API.

Before (incorrect):
  {... (continued)
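The commit message is truncated here, but the general shape of the two formats follows OpenAI's public API: Chat Completions nests the tool definition under a `"function"` key, while the Responses API expects those same fields at the top level. A sketch of the translation (illustrative Python dicts; the `get_weather` tool name and its schema are hypothetical, not from this diff):

```python
# Chat Completions style: definition nested under "function".
nested_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
        },
        "strict": True,
    },
}

def flatten_tool(tool: dict) -> dict:
    """Lift the nested Chat Completions tool into the flat Responses API shape."""
    fn = tool["function"]
    return {"type": tool["type"], **fn}

flat_tool = flatten_tool(nested_tool)
# name, parameters, and strict now sit at the top level, next to "type".
```

This is why a strict-mode tool that works against Chat Completions is rejected by the Responses API until it is flattened.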
Source Files on build 8b53950fe41a3cbabd2badbab0d00667b79ddc57
Detailed source file information is not available for this build.
© 2026 Coveralls, Inc