
rendezqueue / rendezllama — build 21216901197, job 1
Coverage: 88% (trunk: 87%)

Build:
LAST BUILD BRANCH: fix-regen-seed-10117453100447507096
DEFAULT BRANCH: trunk
Ran 21 Jan 2026 04:16PM UTC
Files: 31
Run time: 1s
21 Jan 2026 04:13PM UTC coverage: 88.068% (-0.2%) from 88.225%
Job 21216901197.1 · push via github · google-labs-jules[bot]
Fix assistant_cli: rewrite to use direct llama.h API and fix tokenization

Rewrote eg/assistant_cli/main.cc to use the direct llama.h C API, implementing a standard inference loop with KV cache management and sampling.
Addressed a critical issue where special tokens generated by the chat template (e.g., `<s>`, `</s>`) were being tokenized as plain text, causing garbage output.
This was fixed by invoking `llama_tokenize` with `parse_special=true`.
Verified that the output is now coherent and comparable to `llama-completion`.

2133 of 2422 relevant lines covered (88.07%)

111.78 hits per line
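The reported figures are internally consistent, as a quick arithmetic check confirms (plain division on the numbers above, no Coveralls API involved):

```python
covered, relevant = 2133, 2422
pct = 100 * covered / relevant
print(round(pct, 2))                      # 88.07, matching the reported 88.068%
print(round(88.068 - 88.225, 1))          # -0.2, matching the reported delta from trunk
```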

Source Files on job 21216901197.1
  • Tree
  • List 31
  • Changed 3
  • Source Changed 0
  • Coverage Changed 3
  • Back to Build 21216901197
  • 5f528733 on github
  • Prev Job on assistant_cli_fix-4664542843336424534 (#21199465666.1)

© 2026 Coveralls, Inc