
Alan-Jowett / CoPilot-For-Consensus / build 20285183912 / job 18

Coverage: 78% (main: 78%)

Build:
DEFAULT BRANCH: main
Ran: 16 Dec 2025 10:49PM UTC
Files: 2
Run time: 0s

Job 20285183912.18, run 16 Dec 2025 10:44PM UTC. Coverage: 98.425% (remained the same).
Triggered by: push, via github (committer: web-flow)
Replace scaffold summary text with real Ollama output (#307)

* Initial plan

* Implement real Ollama API integration in LocalLLMSummarizer

- Replace placeholder text with actual Ollama API calls via requests library
- Add requests dependency to setup.py
- Implement proper error handling for timeouts and connection errors
- Update tests to mock API calls and verify real content flow
- Fix config schema default for llm_backend (ollama -> local for consistency)

Co-authored-by: Alan-Jowett <20480683+Alan-Jowett@users.noreply.github.com>
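The Ollama call described in this commit might look roughly like the sketch below. The function name, fallback string, and default base URL are illustrative assumptions; the `/api/generate` endpoint, the `stream: false` request shape, and the explicit `Content-Type` header follow Ollama's HTTP API and the review feedback above:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # assumed default Ollama address
FALLBACK = "(summary unavailable)"     # hypothetical fallback text

def summarize(text: str, model: str = "mistral", base_url: str = OLLAMA_URL) -> str:
    """Ask a local Ollama server for a summary; degrade gracefully on errors."""
    try:
        resp = requests.post(
            f"{base_url}/api/generate",
            json={"model": model, "prompt": f"Summarize:\n{text}", "stream": False},
            headers={"Content-Type": "application/json"},  # explicit, per review feedback
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json().get("response", FALLBACK)
    except (requests.Timeout, requests.ConnectionError):
        # Ollama not running or too slow: return the fallback rather than crash
        return FALLBACK
```

Catching only `Timeout` and `ConnectionError` keeps genuine HTTP errors (4xx/5xx) visible via `raise_for_status()` instead of silently masking them.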

* Add integration test for real Ollama content flow

- Verify real LLM-generated content (not placeholder) flows through the pipeline
- Test LocalLLMSummarizer integration with SummarizationService
- Assert placeholder text is absent from generated summaries
- Confirm SummaryComplete events contain actual LLM output

Co-authored-by: Alan-Jowett <20480683+Alan-Jowett@users.noreply.github.com>
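A minimal version of that integration test, assuming a `summarize` helper that posts to Ollama via `requests` (the helper, placeholder string, and test name here are illustrative stand-ins, not the repository's actual code), mocks the HTTP layer and asserts the placeholder never leaks through:

```python
from unittest import mock

import requests

PLACEHOLDER = "[scaffold summary]"  # hypothetical placeholder being replaced

def summarize(text: str) -> str:
    # Minimal stand-in for LocalLLMSummarizer's Ollama call
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": text, "stream": False},
        timeout=60,
    )
    return resp.json()["response"]

def test_real_content_flows_through():
    fake = mock.Mock()
    fake.json.return_value = {"response": "A real LLM-generated summary."}
    with mock.patch("requests.post", return_value=fake):
        result = summarize("some thread")
    # Real content, not the scaffold text, must reach the caller
    assert PLACEHOLDER not in result
    assert result == "A real LLM-generated summary."

test_real_content_flows_through()
```

Patching `requests.post` at module level matches the review feedback above about moving the `requests` import to module level in the tests.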

* Address code review feedback

- Move requests import to module level in tests
- Add explicit Content-Type header to Ollama API call for better compatibility
- All tests still passing after fixes

Co-authored-by: Alan-Jowett <20480683+Alan-Jowett@users.noreply.github.com>

* feat: Add persistent Ollama model caching with local LLM default

- Change Ollama storage from Docker volume to bind mount (./ollama_models/)
  to persist models across 'docker compose down -v'
- Add ollama-model-loader service to auto-pull mistral model on startup
- Set LLM_BACKEND default to 'local' for development (real LLM)
- Configure CI to use 'mock' backend for fast, reliable tests
- Add ollama_models/ to .gitignore
- Update orchestrator and summarization to depend on model loader
- Remove test artifacts (test-results.xml)
- Skip complex integration test with clear justification

This enables real LLM-based summarization in local development while
keeping CI fast. First run downloads 4.4GB mistral model; subsequent
runs are instant due to local caching... (continued)
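A sketch of the compose changes this commit describes; the service names, `mistral` tag, and `LLM_BACKEND` default come from the commit message, while the exact fields of the real `docker-compose.yml` may differ:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ./ollama_models:/root/.ollama   # bind mount: survives 'docker compose down -v'
    ports:
      - "11434:11434"

  ollama-model-loader:
    image: ollama/ollama
    depends_on:
      - ollama
    entrypoint: ["/bin/sh", "-c"]
    # pull the model through the running server so it lands in the bind mount
    command: ["OLLAMA_HOST=http://ollama:11434 ollama pull mistral"]

  summarization:
    environment:
      LLM_BACKEND: ${LLM_BACKEND:-local}  # CI overrides this to 'mock'
    depends_on:
      - ollama-model-loader
```

Because `./ollama_models/` is a host directory rather than a named volume, `docker compose down -v` leaves the cached 4.4GB model in place, which is what makes subsequent startups instant.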

125 of 127 relevant lines covered (98.43%)

0.98 hits per line

Source files on job 20285183912.18: 2 files; changed: 0, source changed: 0, coverage changed: 0.
Commit: 06aee1c3 (github)

© 2026 Coveralls, Inc