
Ouranosinc / xclim / build 10069141429 / job 1
Default branch: main (92% coverage)
Ran 24 Jul 2024 02:07AM UTC · Files: 70 · Run time: 2s
24 Jul 2024 01:51AM UTC · coverage: 90.423% (+0.06%) from 90.366%
Job 10069141429.1 · push · github · web-flow
Fast MBCn (a la groupies) (#1580)

### Pull Request Checklist:
- [x] This PR addresses an already opened issue (for bug fixes /
features)
    - This PR fixes #xyz
- [x] Tests for the changes have been added (for bug fixes / features)
- [x] (If applicable) Documentation has been added / updated (for bug
fixes / features)
- [x] CHANGES.rst has been updated (with summary of main changes)
- [x] Link to issue (:issue:`number`) and pull request (:pull:`number`)
has been added

### What kind of change does this PR introduce?

New `MBCn` TrainAdjust class. The train part finds adjustment factors
for the npdf transform. The adjust part does the rest.

* A single numpy function to perform all rotations of the npdf_transform
makes the process faster
* Grouping is handled using the same logic as in numpy_groupies. I
initially tried to stop using map_blocks with what I call the Big
Dataset (BD) solution: a dataset that included the group windowed
blocks. This worked well but sometimes caused dask workers to die.
Better chunking might have solved that problem, but instead of
constructing a BD, we simply loop over blocks and specify the time
indices of each block (à la groupies) in the original datasets. The
resulting code is a bit messier, but it performs well.
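To illustrate the single-numpy-function idea, here is a minimal, self-contained sketch of an npdf transform loop in plain numpy. All names here (`random_rotation`, `quantile_map`, `npdf_transform`) are hypothetical, not xclim's actual internals: each iteration draws a random orthogonal rotation, quantile-maps every rotated marginal of `hist` onto `ref`, and rotates back.

```python
import numpy as np

def random_rotation(n, rng):
    # QR of a Gaussian matrix, with column signs fixed, samples a
    # uniformly distributed orthogonal rotation.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def quantile_map(hist, ref, nq=50):
    # Univariate quantile mapping of each row (variable) of `hist`
    # onto the corresponding row of `ref`.
    qs = np.linspace(0, 1, nq)
    out = np.empty_like(hist)
    for i in range(hist.shape[0]):
        out[i] = np.interp(hist[i], np.quantile(hist[i], qs),
                           np.quantile(ref[i], qs))
    return out

def npdf_transform(ref, hist, n_iter=20, seed=0):
    # All rotations performed in one plain-numpy loop: rotate, correct
    # each marginal, rotate back. No map_blocks, no regrouping.
    rng = np.random.default_rng(seed)
    h = hist.copy()
    for _ in range(n_iter):
        rot = random_rotation(ref.shape[0], rng)
        h = rot.T @ quantile_map(rot @ h, rot @ ref)
    return h
```

Because each iteration matches the rotated marginals of `hist` to those of `ref`, the multivariate distribution of the output drifts toward `ref` as iterations accumulate; doing the whole loop in one numpy function is what avoids the per-iteration grouping overhead.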

The function also changes how windowed group blocks are handled
throughout the computation. Now, a block keeps its form from the
beginning to the end of the MBCn computation.
* This is in contrast to the current approach, which groups and
ungroups blocks between each iteration of the NpdfTransform.
* The standardization is performed on a block
* The univariate bias correction is maintained as blocks, reordered,
*then* the blocks are ungrouped
* In the sdba noteb... (continued)
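The loop-over-blocks idea can be sketched in plain numpy (hypothetical helper names, and monthly grouping instead of xclim's day-of-year groups): rather than physically regrouping the data, we precompute, for each group, the time indices of its windowed block, operate on those slices of the original array, and write back only the central group's results.

```python
import numpy as np

def grouped_time_indices(months, window=1):
    # For each month m, collect indices of time steps within +/- `window`
    # months (wrapping around the year), a la numpy_groupies: the data
    # stays in place; we only record which time indices each block owns.
    idx = {}
    for m in range(1, 13):
        wanted = {(m - 1 + k) % 12 + 1 for k in range(-window, window + 1)}
        idx[m] = np.nonzero(np.isin(months, list(wanted)))[0]
    return idx

def apply_blockwise(x, months, func, window=1):
    # Apply an element-preserving `func` to each windowed block of the
    # original array, keeping only the central month's results.
    out = x.copy()
    for m, ii in grouped_time_indices(months, window).items():
        res = func(x[ii])           # operate on the windowed block
        central = months[ii] == m   # mask for the central month
        out[ii[central]] = res[central]
    return out
```

`func` must return an array aligned element-for-element with its input (e.g. an anomaly or standardization step), since the write-back reuses the block's index positions.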

9045 of 10003 relevant lines covered (90.42%)

3.96 hits per line

Source Files on job run-{{ matrix.tox-env }}-{{ matrix.os }} - 10069141429.1
  • 1d919000 on github