
umputun / newscope · build 16818615022 (push, via GitHub, by umputun)

08 Aug 2025 12:08AM UTC · coverage: 81.169% (+0.007%) from 81.162%
feat(llm): implement batch processing for AI classification with 90% cost reduction

- Add configurable batch processing with size (default: 10) and timeout (default: 5s)
- Remove 500-character content truncation for improved classification accuracy
- Update OpenAI library from v1.40.3 to v1.40.5 and upgrade model to gpt-5
- Implement ProcessBatch method in FeedProcessor for efficient batch handling
- Add comprehensive token usage monitoring and logging for cost tracking
- Fix async test patterns using require.Eventually instead of time.Sleep
- Add setupBasicItemManagerMocks helper to reduce test code duplication
- Update integration tests to handle asynchronous batch processing properly

The batch processing reduces API calls from ~50 individual calls to 2-3 batch calls
for the same number of articles, providing significant cost savings while maintaining
classification quality through full article content analysis.
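The size-or-timeout flush described above can be sketched as a small channel-based batcher. This is a minimal illustration, not the repo's `ProcessBatch` implementation; `article`, `runBatches`, and `runDemo` are hypothetical names, and `flush` stands in for one classification API call:

```go
package main

import (
	"fmt"
	"time"
)

// article stands in for a feed item awaiting LLM classification.
type article struct{ title string }

// runBatches drains in, flushing a batch whenever it reaches maxSize
// articles or maxWait has elapsed since the batch's first article,
// mirroring the configurable size (default 10) and timeout (default 5s).
func runBatches(in <-chan article, maxSize int, maxWait time.Duration, flush func([]article)) {
	var buf []article
	timer := time.NewTimer(maxWait)
	if !timer.Stop() {
		<-timer.C // drain so a stale tick cannot fire later
	}
	for {
		select {
		case a, ok := <-in:
			if !ok { // input closed: flush any remainder and stop
				if len(buf) > 0 {
					flush(buf)
				}
				return
			}
			buf = append(buf, a)
			if len(buf) == 1 {
				timer.Reset(maxWait) // timeout runs from the first buffered item
			}
			if len(buf) >= maxSize {
				if !timer.Stop() {
					<-timer.C
				}
				flush(buf)
				buf = nil
			}
		case <-timer.C: // timeout: flush a partial batch
			if len(buf) > 0 {
				flush(buf)
				buf = nil
			}
		}
	}
}

// runDemo classifies n articles in batches of maxSize and returns the
// number of API calls made.
func runDemo(n, maxSize int) int {
	in := make(chan article)
	go func() {
		for i := 0; i < n; i++ {
			in <- article{title: fmt.Sprintf("article %d", i)}
		}
		close(in)
	}()
	calls := 0
	runBatches(in, maxSize, 5*time.Second, func(batch []article) {
		calls++
		fmt.Printf("batch call %d: %d articles\n", calls, len(batch))
	})
	return calls
}

func main() {
	// 50 articles with a batch size of 25 cost 2 API calls instead of 50.
	fmt.Println("total API calls:", runDemo(50, 25)) // prints "total API calls: 2"
}
```

With the commit's default batch size of 10, the same 50 articles would take 5 calls; the ~2-3 calls cited above presumably reflect the tuned size/timeout in production.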

100 of 124 new or added lines in 2 files covered. (80.65%)

3625 of 4466 relevant lines covered (81.17%)

28.18 hits per line

Source File: /pkg/scheduler/feed_processor.go (81.7% covered)

Source Not Available
