
agentjido / req_llm
main: 53%

LAST BUILD BRANCH: fix/bedrock-inference-profiles
DEFAULT BRANCH: main
Repo Added 15 Sep 2025 11:42AM UTC
Builds: 288
Files: 84
README BADGES

If you need to use a raster PNG badge, change the '.svg' to '.png' in the link

Formats: Markdown, Textile, RDoc, HTML, Rst

LAST BUILD ON BRANCH: refactor-meta-provider-generic

Branches:
  • refactor-meta-provider-generic
  • add-bedrock-fixtures
  • add-bedrock-structured-output
  • add-hex-changelog-and-module-grouping
  • add-json-schema-validation
  • add-vllm-provider
  • add_openai_responses_api_structured_responses
  • add_retry_step
  • allow_json_schemas
  • bedrock-1.0-fixes
  • bedrock-clean
  • bedrock-mistral-support
  • bug/api-return-types
  • bug/codec-tool-calls
  • bug/debug-stream-return
  • bug/incorrect-model-spec-docs
  • bug/object-array
  • bug/openai-tool-calls
  • bug/stream-process-linking
  • bug/streaming-nil-deltas
  • bug/streaming-race-condition
  • bug/usage_total_cost
  • cerebras
  • chore/2025-10-14-update-fixtures
  • chore/model_update_2025-10-27
  • chore/object-fixtures
  • chore/object-fixtures-resurrected
  • chore/refine-fixtures
  • chore/refresh-coverage-tests
  • chore/update-models-2025-09-21
  • copilot/fix-32
  • dependabot/hex/ex_doc-0.39.1
  • dependabot/hex/zoi-0.8.1
  • devtools
  • egomes/fix-claude-multi-turn
  • egomes/fix-tool-inspection-with-json-schema
  • feat/context-json-serialization
  • feat/google-upload-file
  • feat/in-type-support
  • feat/structured-output-openai-google
  • feature/base-url-override
  • feature/bedrock-prompt-caching
  • feature/cerebras-provider
  • feature/configurable-metadata-timeout
  • feature/google_grounding
  • feature/model-catalog
  • feature/normalize-bedrock-inference-profiles
  • feature/pre-release-fixes
  • feature/prompt-caching
  • feature/refactor-llm-api-fixtures
  • feature/refined-key-management
  • feature/unique-model-provider-options
  • feature/upgrade-ex-aws-auth
  • feature/zai-fixtures
  • feature/zoi-schema
  • fix-anthropic-streaming
  • fix-bedrock-timeout
  • fix-duplicate-clause
  • fix-google
  • fix-groq-stream-error
  • fix-mix-task-docs
  • fix-openai-max-tokens-param
  • fix-retry-delay-conflict
  • fix/bedrock-inference-profiles
  • fix/bug-119-aws-auth-credentials
  • fix/cost-calculation-in-usage
  • fix/google-file-support
  • fix/google-structured-output
  • fix/http2-large-request-bodies
  • fix/issue-65-http-status-validation
  • fix/issue-96-validation-error-fields
  • fix/proxy-options
  • fix/registry-get-provider-nil-module
  • fix/tool_calls
  • google-vision
  • improve-metadata-provider-errors
  • main
  • patch-1
  • put-max-tokens-model-options
  • refactor/context-tools
  • refactor/req-streaming
  • refactor/xai-structured-objects
  • remove-jido-keys
  • zai

28 Oct 2025 04:28PM UTC coverage: 52.518% (+0.03%) from 52.492%
f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148

Pull #148 (via GitHub) · committer: neilberkman

Refactor Meta/Llama into generic provider for code reuse

Extract Meta's native Llama prompt format into a reusable generic provider
that can be shared across cloud hosts and self-hosted deployments.

Changes:
- Created ReqLLM.Providers.Meta for Meta's native format
  - Handles prompt formatting with special tokens
  - Parses native response format (generation, prompt_token_count, etc.)
  - Extracts usage metadata with all required fields
- Refactored ReqLLM.Providers.AmazonBedrock.Meta to delegate to generic provider
  - Keeps only Bedrock-specific AWS Event Stream handling
  - Streaming usage now includes cached_tokens and reasoning_tokens
- Updated test error message for clarity

Documentation:
- Clarifies that most providers use OpenAI-compatible APIs (Azure, Vertex AI, vLLM, Ollama)
- AWS Bedrock is primary user of Meta's native format
- Generic provider handles native format with prompt/max_gen_len/generation fields
- Provides guidance for future provider implementations

This enables future Azure AI Foundry and Vertex AI support to correctly
delegate to OpenAI provider, while providing a native format option for
providers that need it.
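The description above refers to Meta's native prompt format with special tokens. As a rough, hypothetical sketch of what such formatting involves (the module and function below are illustrative only and are not ReqLLM's actual API), Llama 3's native chat template joins each turn with header and end-of-turn tokens:

```elixir
defmodule LlamaPromptSketch do
  # Hypothetical illustration of Llama 3's native prompt format.
  # Nothing here is part of ReqLLM; it only shows the token layout
  # a native Meta/Llama provider has to produce.
  def render(messages) do
    body =
      Enum.map_join(messages, fn %{role: role, content: content} ->
        "<|start_header_id|>#{role}<|end_header_id|>\n\n#{content}<|eot_id|>"
      end)

    # The trailing assistant header cues the model to generate its reply.
    "<|begin_of_text|>" <> body <>
      "<|start_header_id|>assistant<|end_header_id|>\n\n"
  end
end
```

For example, `LlamaPromptSketch.render([%{role: "user", content: "Hi"}])` yields a prompt beginning with `<|begin_of_text|>` and ending with an open assistant header; OpenAI-compatible hosts accept structured message lists instead, which is why most providers can delegate to the OpenAI-style path.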

30 of 35 new or added lines in 2 files covered. (85.71%)

6 existing lines in 4 files now uncovered.

3660 of 6969 relevant lines covered (52.52%)

433.13 hits per line

Source Files on refactor-meta-provider-generic: 84 files; 1 file changed (source changed: 0, coverage changed: 1).

Recent builds

f39d9fc4... · branch: refactor-meta-provider-generic · Pull #148 · ran 28 Oct 2025 04:31PM UTC · neilberkman via github · coverage: 52.52%
See All Builds (288)



© 2025 Coveralls, Inc