
kobotoolbox / kpi / build 22682707940
Coverage: 81% (master: 76%)
Last build branch: dev-1815-groups-serviceproviderconfig-endpoints
Default branch: master
Ran 04 Mar 2026 06:45PM UTC · Jobs: 2 · Files: 893 · Run time: 2min

04 Mar 2026 06:10PM UTC · coverage: 82.104% (+0.001%) from 82.103%
Build 22682707940 · push via github (web-flow)
fix(qual): prevent 500 errors in automatic qualitative analysis by handling Bedrock service exceptions DEV-1801 (#6783)

### đŸ“Ŗ Summary
This PR resolves an intermittent 500 error (`ResourceNotFoundException`)
that occurred when the primary LLM returned an unparseable response and
the fallback model hit AWS service-level restrictions.

### 📖 Description
The intermittent 500 error was identified as a cascading failure
starting with a "soft failure" of the primary model, OSS120. In certain
scenarios, such as single-choice qualitative questions, the primary
model returns an answer that fails validation; for instance, it may
return "FALSE,FALSE" when at least one selection is required, which
triggers an `InvalidResponseFromLLMException`.
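As an illustration, such a validation check might look like the following minimal sketch (the `validate_single_choice` helper and the comma-separated response format are hypothetical, not kpi's actual code):

```python
class InvalidResponseFromLLMException(Exception):
    """Raised when an LLM returns a response that fails validation."""


def validate_single_choice(raw_response: str) -> list[bool]:
    """For a single-choice question, at least one selection must be TRUE.

    Hypothetical helper: parses a comma-separated list of TRUE/FALSE
    tokens, as in the "FALSE,FALSE" example above.
    """
    selections = [tok.strip().upper() == 'TRUE' for tok in raw_response.split(',')]
    if not any(selections):
        # e.g. the primary model returned "FALSE,FALSE"
        raise InvalidResponseFromLLMException(
            f'At least one selection is required, got: {raw_response!r}'
        )
    return selections
```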

While the system correctly attempts to fall back to the backup model,
Claude 3.5 Sonnet, an infrastructure-level "hard failure" occurs because
the configured backup uses a model ID marked as Legacy by AWS. Because
this model had not been actively used in the last 15 days, Amazon
Bedrock denied the request with a `ResourceNotFoundException`.
Previously, the `run_external_process` method was only configured to
catch the `InvalidResponseFromLLMException`, allowing the AWS
ClientError to bubble up and crash the request with a 500 server error.

Solution:
- Exception Handling: Updated the model iteration loop to catch both
`InvalidResponseFromLLMException` and `botocore.exceptions.ClientError`.
This allows the loop to continue to the next available model or fail
gracefully if all models are exhausted.
- Enhanced Logging: Added a final WARNING log if all configured LLMs
fail to provide a valid response.

### 👀 Preview steps

1. â„šī¸ Have an account, a project, and a submission with an audio
question.
2. Add transcription and translation for the audio, then go to the
"Analysis" tab. Crea... (continued)

7502 of 11576 branches covered (64.81%)

3 of 3 new or added lines in 1 file covered. (100.0%)

28849 of 35137 relevant lines covered (82.1%)

1.62 hits per line

Jobs
| ID | Job ID | Ran | Files | Coverage |
|----|--------|-----|-------|----------|
| 1 | 22682707940.1 | 04 Mar 2026 06:45PM UTC | 891 | 79.79% |
| 2 | 22682707940.2 | 04 Mar 2026 06:50PM UTC | 893 | 82.06% |
Source Files on build 22682707940
  • Tree
  • List 893
  • Changed 1
  • Source Changed 0
  • Coverage Changed 1
  • 385541c5 on github
  • Prev Build on main (#22680697948)
  • Next Build on main (#22682805791)

© 2026 Coveralls, Inc