
agentjido / req_llm / f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148
Coverage: 53% (main: 49%)

Build:
LAST BUILD BRANCH: feat/load-dotenv-config
DEFAULT BRANCH: main
Ran 28 Oct 2025 04:31PM UTC
Jobs 4
Files 84
Run time 1min

28 Oct 2025 04:28PM UTC coverage: 52.518% (+0.03%) from 52.492%

Pull #148 (GitHub) by neilberkman:
Refactor Meta/Llama into generic provider for code reuse

Extract Meta's native Llama prompt format into a reusable generic provider
that can be shared across cloud hosts and self-hosted deployments.

Changes:
- Created ReqLLM.Providers.Meta for Meta's native format (see the sketch after this list)
  - Handles prompt formatting with special tokens
  - Parses native response format (generation, prompt_token_count, etc.)
  - Extracts usage metadata with all required fields
- Refactored ReqLLM.Providers.AmazonBedrock.Meta to delegate to generic provider
  - Keeps only Bedrock-specific AWS Event Stream handling
  - Streaming usage now includes cached_tokens and reasoning_tokens
- Updated test error message for clarity
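
The two responsibilities listed above for the generic provider (special-token prompt formatting and native response parsing) can be pictured with a short Elixir sketch. This is illustrative only: the real module is ReqLLM.Providers.Meta, but the module and function names below, the Llama 3 token layout, and the shape of the usage map are assumptions rather than code from this PR.

```elixir
# Hypothetical stand-in for ReqLLM.Providers.Meta; names and internals
# are assumptions, not the actual implementation.
defmodule MetaFormatSketch do
  # Render chat messages into Llama 3's special-token prompt format,
  # leaving an open assistant header for the model to complete.
  def format_prompt(messages) do
    turns =
      Enum.map_join(messages, fn %{role: role, content: content} ->
        "<|start_header_id|>#{role}<|end_header_id|>\n\n#{content}<|eot_id|>"
      end)

    "<|begin_of_text|>" <> turns <> "<|start_header_id|>assistant<|end_header_id|>\n\n"
  end

  # Parse Meta's native response shape (generation, prompt_token_count, ...)
  # into text plus a usage map with every required field populated.
  def parse_response(%{
        "generation" => text,
        "prompt_token_count" => input_tokens,
        "generation_token_count" => output_tokens
      }) do
    %{
      text: text,
      usage: %{
        input_tokens: input_tokens,
        output_tokens: output_tokens,
        total_tokens: input_tokens + output_tokens,
        # Native Llama responses carry no cache or reasoning counts, so a
        # provider would report zeros rather than omit the keys.
        cached_tokens: 0,
        reasoning_tokens: 0
      }
    }
  end
end
```

Always emitting cached_tokens and reasoning_tokens, even as zeros, is what would let the Bedrock streaming path include them consistently, as noted above.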

Documentation:
- Clarifies that most providers use OpenAI-compatible APIs (Azure, Vertex AI, vLLM, Ollama)
- AWS Bedrock is primary user of Meta's native format
- Generic provider handles native format with prompt/max_gen_len/generation fields (contrasted in the sketch below)
- Provides guidance for future provider implementations
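
To make the format distinction concrete, here is a minimal sketch of the two request shapes. The native field names (prompt, max_gen_len) and the response field (generation) appear in this PR; the concrete values and the OpenAI-style body are illustrative assumptions.

```elixir
# Meta's native format, as used primarily by AWS Bedrock: a single
# pre-formatted prompt string rather than a message list.
native_body = %{
  "prompt" => "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nHi<|eot_id|>",
  "max_gen_len" => 512
}
# Responds with a body like:
# %{"generation" => "...", "prompt_token_count" => 7, "generation_token_count" => 42}

# OpenAI-compatible format, accepted by most hosts
# (Azure, Vertex AI, vLLM, Ollama):
openai_body = %{
  "model" => "llama-3",
  "messages" => [%{"role" => "user", "content" => "Hi"}],
  "max_tokens" => 512
}
```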

This enables future Azure AI Foundry and Vertex AI support to correctly
delegate to OpenAI provider, while providing a native format option for
providers that need it.
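
That split can be sketched with Elixir's defdelegate, reusing the illustrative module above. The function names are assumptions; the division of labor (shared format handling in the generic provider, AWS Event Stream decoding kept Bedrock-specific) is the one this PR describes.

```elixir
# Hypothetical stand-in for ReqLLM.Providers.AmazonBedrock.Meta.
defmodule BedrockMetaSketch do
  # Format handling is shared with every other host of Meta's native format...
  defdelegate format_prompt(messages), to: MetaFormatSketch
  defdelegate parse_response(body), to: MetaFormatSketch

  # ...while unwrapping AWS Event Stream frames stays Bedrock-specific. A real
  # implementation would decode the binary frame and hand the embedded JSON
  # chunk to the generic parser.
  def decode_stream_event(frame), do: frame
end
```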

30 of 35 new or added lines in 2 files covered. (85.71%)

6 existing lines in 4 files now uncovered.

3660 of 6969 relevant lines covered (52.52%)

433.13 hits per line

New Missed Lines in Diff

Lines  Coverage  ∆        File
1      81.25%    -4.46%   lib/req_llm/providers/amazon_bedrock/meta.ex
4      85.19%             lib/req_llm/providers/meta.ex

Uncovered Existing Lines

Lines  Coverage  ∆        File
1      83.33%    -16.67%  lib/req_llm/provider.ex
1      94.12%    -5.88%   test/support/streaming_case.ex
2      63.29%    -0.63%   lib/req_llm/provider/defaults.ex
2      39.02%    -2.44%   lib/req_llm/providers/openai.ex
Jobs
ID  Job ID                                             Ran                      Files  Coverage
1   f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148.1  28 Oct 2025 04:31PM UTC  84     52.4%
2   f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148.2  28 Oct 2025 04:31PM UTC  84     52.4%
3   f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148.3  28 Oct 2025 04:33PM UTC  84     52.37%
4   f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148.4  28 Oct 2025 04:33PM UTC  84     52.49%

(Each job links to its GitHub Actions run.)
Source Files on build f39d9fc48b3bd95eb576cce3f440d58fbc9a2e3d-PR-148: 84 files tracked; 1 changed (0 with source changes, 1 with coverage changes).