
PrincetonUniversity / PsyNeuLink / 25535066478
Coverage: 84% (master: 85%)

LAST BUILD BRANCH: devel
DEFAULT BRANCH: master
Ran 08 May 2026 04:33AM UTC · Jobs: 1 · Files: 163 · Run time: 1min


08 May 2026 03:30AM UTC coverage: 84.212% (-0.006%) from 84.218%
Build 25535066478 · push · github · committed by web-flow
OneHot: fix PROB sampling at the cumsum-rounding boundary (#3535)

## Summary

`OneHot` `PROB`/`PROB_INDICATOR` sampling could fail when
`np.cumsum(prob_dist)[-1]` drifted just below 1.0 due to float rounding
(e.g. `np.cumsum([0.7, 0.2, 0.1])[-1] == 0.9999999999999999`) and
`random_state.uniform()` returned a value in the resulting
`(cum_sum[-1], 1.0)` gap.
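
The rounding drift itself is easy to see in isolation (a standalone sketch using the distribution quoted above):

```python
import numpy as np

# The prefix sums of a perfectly valid probability distribution can
# fall just short of 1.0 in float64 arithmetic, leaving a gap between
# cum_sum[-1] and 1.0 that a uniform draw can land in.
prob_dist = [0.7, 0.2, 0.1]
cum_sum = np.cumsum(prob_dist)

print(cum_sum[-1])        # 0.9999999999999999
print(cum_sum[-1] < 1.0)  # True
```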

* **Python path** (`selectionfunctions.py:733`) raised `StopIteration` from the unguarded generator
  ```python
  chosen_item = next(element for element in cum_sum if element > random_value)
  ```
* **LLVM path** (`_gen_llvm_function_body` for `PROB`/`PROB_INDICATOR`)
silently produced an all-zero output, because no iteration's `sum_old <=
random_draw < sum_new` condition fired.
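
Both failure modes hinge on the same strict comparison. The Python one can be reproduced standalone (an illustrative sketch, not the PsyNeuLink source itself):

```python
import numpy as np

# Standalone model of the old, unguarded selection. A draw at or
# beyond the final prefix sum matches no element, so next() raises
# StopIteration.
cum_sum = np.cumsum([0.7, 0.2, 0.1])  # final entry is 0.9999999999999999
random_value = cum_sum[-1]            # a draw the strict '>' cannot handle

try:
    chosen_item = next(element for element in cum_sum if element > random_value)
except StopIteration:
    print("StopIteration: no prefix sum exceeded the draw")
```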

The Python path now uses `np.searchsorted(..., side='right')` clamped to
the last index, and indexes the chosen position directly. This also
fixes a second-order bug where `np.where(cum_sum == chosen_item, 1, 0)`
selected multiple indices when `prob_dist` contained zeros (which
produced duplicate `cum_sum` entries) — see the corrected `SOFT_MAX
MASK_THRESHOLD PROB` expectations in `test_transfer.py`.
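
A minimal sketch of the fixed selection logic (names and shape are illustrative, not the exact PsyNeuLink code):

```python
import numpy as np

# searchsorted with side='right' finds the first prefix sum strictly
# greater than the draw; clamping to the last index guarantees a
# choice even when the draw is at or beyond cum_sum[-1]. Indexing by
# position (rather than matching on the value, as the old
# np.where(cum_sum == chosen_item, 1, 0) did) also avoids selecting
# multiple indices when zeros in prob_dist duplicate cum_sum entries.
def one_hot_prob(prob_dist, random_value):
    cum_sum = np.cumsum(prob_dist)
    idx = np.searchsorted(cum_sum, random_value, side='right')
    idx = min(idx, len(cum_sum) - 1)  # clamp the rounding gap
    out = np.zeros_like(cum_sum)
    out[idx] = 1
    return out

print(one_hot_prob([0.5, 0.0, 0.5], 0.75))                # [0. 0. 1.]
print(one_hot_prob([0.7, 0.2, 0.1], 0.9999999999999999))  # [0. 0. 1.]
```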

The LLVM path forces the upper-bound check to fire on the last
iteration, ensuring the last index always wins when the draw exceeds the
final prefix sum.
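
In Python terms, the corrected LLVM loop behaves like this sketch (a model of the described bound check, not the generated IR):

```python
# On the last iteration the upper-bound comparison is treated as
# true, so the final index always wins when the draw exceeds the
# final prefix sum; earlier iterations are unchanged.
def prob_select_like_llvm(prob_dist, random_draw):
    sum_old = 0.0
    out = [0] * len(prob_dist)
    for i, p in enumerate(prob_dist):
        sum_new = sum_old + p
        is_last = (i == len(prob_dist) - 1)
        if sum_old <= random_draw and (random_draw < sum_new or is_last):
            out[i] = 1
            break
        sum_old = sum_new
    return out

print(prob_select_like_llvm([0.7, 0.2, 0.1], 0.9999999999999999))  # [0, 0, 1]
```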

## Tests

* `test_one_hot_prob_cumsum_below_one_due_to_float_rounding` —
deterministic Python regression test that stubs `random_state.uniform()`
to return a value in the gap. Reproduces the `StopIteration` against the
unfixed code.
* `test_one_hot_prob_runtime_cumsum_under_one_never_all_zero` — covers
both Python and LLVM paths by constructing `OneHot` with a valid default
`prob_dist` (so validation passes) and then calling it with a runtime
`prob_dist` whose `cum_sum[-1] < 1.0`. About half of all uniform draws
then land in the gap, deterministically triggering both bugs against the
unfixed code (Python: `StopIteration`; LLVM: all-zero output).

## Test plan

- [x] `pytest ... (continued)

10519 of 13698 branches covered (76.79%)

Branch coverage included in aggregate %.

6 of 6 new or added lines in 1 file covered. (100.0%)

3 existing lines in 2 files now uncovered.

36185 of 41762 relevant lines covered (86.65%)

0.87 hits per line

Coverage Regressions

| Lines | Coverage | ∆ | File |
|-------|----------|---|------|
| 2 | 84.3 | -0.58% | psyneulink/core/components/mechanisms/modulatory/learning/learningmechanism.py |
| 1 | 78.69 | -1.64% | psyneulink/library/components/projections/pathway/autoassociativeprojection.py |
Jobs

| ID | Job ID | Ran | Files | Coverage |
|----|--------|-----|-------|----------|
| 1 | 25535066478.1 | 08 May 2026 04:33AM UTC | 163 | 84.21 |

GitHub Action Run
Source Files on build 25535066478: 163 files (3 changed; 0 source changed, 3 coverage changed)
Commit 2f4fa135 on github
Prev build on devel: #25497910489

© 2026 Coveralls, Inc