
rendezqueue / rendezllama
Coverage: 86% (trunk: 87%)

Last build branch: fix-regen-seed-10117453100447507096
Default branch: trunk
Repo added: 09 Jun 2023 11:42AM UTC
Files: 31


Last build on branch: tmp

Branches:
  • tmp
  • add-inference-test
  • alpaca
  • assistant-cli-cleanup-719797123915082777
  • assistant_cli
  • assistant_cli-context-length-16482133921599447173
  • assistant_cli_example-16863409140435454080
  • assistant_cli_fix-4664542843336424534
  • batch
  • batch-token-generation-16868365038061602641
  • blas
  • chat-tests-1130249731353493222
  • chat-trajectory-cc-target-5413662329449254524
  • context_base
  • contributing-docs
  • cov
  • coverage-fix-14520834744988857627
  • decode
  • defaults
  • delim
  • dep-rope
  • deterministic
  • dflt
  • display-test-8337381805460483581
  • enable-blas-9284002834420640955
  • eom
  • eosinsert
  • fallback-boundary-prefix-9540685450522850437
  • fix-inference-batch-pos-10855130293914255684
  • fix-localserv-determinism-3141128589317180522
  • fix-localserv-test-windows-9603322172943378209
  • fix-regen-seed-10117453100447507096
  • fix-vulkan-oom-remove-openblas-15898943122701797407
  • fix-windows-build-error-4343694421170532716
  • gen
  • http
  • increase-ci-timeout-16256491997374578865
  • inference-test-sampling-coverage-5719570704248527310
  • inference-test-simplify-3779826128337906892
  • jules-12111104592075292061-8f5973c0
  • lint-check-indent-10360893227382872155
  • localserv
  • localserv-404-fix-10230615575214195740
  • localserv-determinism-16728826537529082726
  • localserv-flakiness-183015914988436779
  • localserv-openai-integration-4266068947578723984
  • localserv-serve-static-files-14364234858281320394
  • makefile-ggml-defaults-3859380595056245199
  • miku
  • orca
  • prefac_gguf
  • refactor-localserv-compat-5517632047937205306
  • remove-redundant-libs-13700672902199106792
  • renovate/major-all-dependencies
  • rewrite
  • schema
  • sstream
  • stage
  • suffix
  • test-parse-options-17607259460580744177
  • tinystories-test-3346048418049655079
  • tmp_sx
  • trunk
  • twitch_sentiment
  • uncrustify-lint-6053234437761304018
  • update
  • update-assistant-cli-12167333632533804948
  • update-localserv-chat-17281043732855938939
  • vulkan

25 Jan 2026 11:58PM UTC coverage: 86.427% (-0.5%) from 86.957%
Build 21341850916 (push via github, committer grencez)
Commit: feat(option): guess inference thread count

1 of 15 new or added lines in 1 file covered (6.67%).
1974 of 2284 relevant lines covered (86.43%).
118.03 hits per line.

Source files on tmp: 30 listed, 1 changed (source changed: 0, coverage changed: 1)

Recent builds

Build | Branch | Commit | Type | Ran | Committer | Via | Coverage
21341850916 | tmp | feat(option): guess inference thread count | push | 26 Jan 2026 12:02AM UTC | grencez | github | 86.43
21341807490 | tmp | feat(option): default thread count as physical cores | push | 25 Jan 2026 11:58PM UTC | grencez | github | 86.43
21335653326 | tmp | Infer default thread count to match physical cores. Introduces `rendezllama::infer_thread_count` to heuristically determine a suitable thread count for inference, aiming to use physical cores rather than logical threads on x86 architectures (by ha... | push | 25 Jan 2026 04:19PM UTC | grencez | github | 88.7
21254307595 | tmp | Refactor assistant_cli to use rendezllama classes and reduce verbosity. Rewrote `eg/assistant_cli/main.cc` to use `GlobalScope`, `Vocabulary`, `Inference`, `ChatTrajectory`, and `ChatDisplay`. Suppressed `llama.cpp` logging using a no-op callb... | push | 22 Jan 2026 03:34PM UTC | grencez | github | 88.72
21254213748 | tmp | Refactor assistant_cli to use rendezllama classes and reduce verbosity. Rewrote `eg/assistant_cli/main.cc` to use `GlobalScope`, `Vocabulary`, `Inference`, `ChatTrajectory`, and `ChatDisplay`. Suppressed `llama.cpp` logging using a no-op callb... | push | 22 Jan 2026 03:32PM UTC | grencez | github | 88.72
21236360550 | tmp | Fix assistant_cli: rewrite to use direct llama.h API and fix tokenization. Rewrote eg/assistant_cli/main.cc to use the direct llama.h C API, implementing a standard inference loop with KV cache management and sampling. Addressed a critical issue w... | push | 22 Jan 2026 04:47AM UTC | grencez | github | 88.07
21236314882 | tmp | Fix assistant_cli: rewrite to use direct llama.h API and fix tokenization. Rewrote eg/assistant_cli/main.cc to use the direct llama.h C API, implementing a standard inference loop with KV cache management and sampling. Addressed a critical issue w... | push | 22 Jan 2026 04:44AM UTC | grencez | github | 88.07
21213160750 | tmp | Add assistant_cli example | push | 21 Jan 2026 02:28PM UTC | grencez | github | 88.23
21178338358 | tmp | qual(cmake): Add chat_guide_cc build target | push | 20 Jan 2026 04:05PM UTC | grencez | github | 90.79
21177937753 | tmp | qual(cmake): Add chat_guide_cc build target | push | 20 Jan 2026 03:54PM UTC | grencez | github | 90.79
See All Builds (1008)
  • Repo on GitHub

© 2026 Coveralls, Inc