
Ouranosinc / xclim / 10069141429

Build:
Default branch: main
Ran: 24 Jul 2024 01:56AM UTC
Jobs: 2
Files: 70
Run time: 1 min
24 Jul 2024 01:51AM UTC coverage: 90.463% (+0.06%) from 90.407%
Build 10069141429 · push · github · web-flow

Fast MBCn (a la groupies) (#1580)

### Pull Request Checklist:
- [x] This PR addresses an already opened issue (for bug fixes /
features)
    - This PR fixes #xyz
- [x] Tests for the changes have been added (for bug fixes / features)
- [x] (If applicable) Documentation has been added / updated (for bug
fixes / features)
- [x] CHANGES.rst has been updated (with summary of main changes)
- [x] Link to issue (:issue:`number`) and pull request (:pull:`number`)
has been added

### What kind of change does this PR introduce?

New `MBCn` TrainAdjust class. The train part finds adjustment factors
for the npdf transform. The adjust part does the rest.
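
For context, here is a minimal usage sketch, assuming the new class follows xclim's usual `TrainAdjust` pattern (`train` on reference and historical data, then `adjust` on the simulation). The call shown here, in particular that `adjust` also receives `ref` and `hist`, is an assumption based on the description above rather than the exact final API.

```python
from xclim import sdba

# ref_ds, hist_ds, sim_ds: user-provided xr.Dataset objects sharing the same
# climate variables (placeholders for this sketch).
ref = sdba.processing.stack_variables(ref_ds)
hist = sdba.processing.stack_variables(hist_ds)
sim = sdba.processing.stack_variables(sim_ds)

# Train: find the adjustment factors of the npdf transform.
ADJ = sdba.MBCn.train(ref, hist)

# Adjust: the npdf rotations plus the final univariate correction and reordering.
scen = sdba.processing.unstack_variables(ADJ.adjust(sim, ref=ref, hist=hist))
```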

* A single numpy function that performs all rotations of the npdf_transform
makes the process faster (see the rotation sketch after this list).
* Grouping is handled with the same logic as in numpy_groupies (see the
indexing sketch after this list). I initially tried to stop using map_blocks
with what I call the Big Dataset (BD) solution: a dataset that included the
windowed group blocks. It worked well but sometimes caused dask workers to
die; better chunking might have solved that. Instead of constructing a BD, we
now simply loop over blocks and specify the time indices of each block (à la
groupies) in the original datasets. The resulting code is a bit messier, but
it performs well.
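
Below is a rough NumPy illustration of the vectorized rotation step: the random orthogonal matrices and the full rotation loop live in a single plain numpy function instead of one dask/xarray call per rotation. Generating the rotations via QR decomposition, and the function and argument names, are assumptions for the sketch; the per-variable quantile adjustment in rotated space is left as a placeholder comment.

```python
import numpy as np


def npdf_rotations(ref, hist, n_iter=20, seed=0):
    """Sketch of the npdf-transform rotation loop on (nvar, ntime) arrays."""
    rng = np.random.default_rng(seed)
    nvar = ref.shape[0]
    for _ in range(n_iter):
        # Random orthogonal rotation from the QR decomposition of a Gaussian matrix.
        q, r = np.linalg.qr(rng.standard_normal((nvar, nvar)))
        rot = q * np.sign(np.diag(r))  # fix column signs
        ref_r, hist_r = rot @ ref, rot @ hist
        # ... univariate quantile mapping of each row of hist_r toward the
        #     matching row of ref_r would happen here, in rotated space ...
        hist = rot.T @ hist_r  # rotate the adjusted data back
    return hist
```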
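
And a toy illustration of the "à la groupies" indexing idea: instead of materializing one big dataset that holds every windowed group block, an integer group code is computed per timestep and used to pull each block out of the original array on demand. The month grouping and variable names are assumptions for the example; xclim's real groups can also carry a window.

```python
import numpy as np
import pandas as pd

times = pd.date_range("2000-01-01", "2009-12-31", freq="D")
data = np.random.default_rng(0).standard_normal((3, times.size))  # (nvar, ntime)

# One integer group code per timestep, as numpy_groupies would use.
codes = times.month.to_numpy()

for g in np.unique(codes):
    idx = np.flatnonzero(codes == g)  # time indices belonging to this group
    block = data[:, idx]              # the group's block, read from the original array
    # ... run the per-block MBCn steps here, then write results back using `idx` ...
```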

The function also changes how windowed group blocks are handled throughout
the computation. A block now keeps its form from the start to the end of the
MBCn computation (see the outline after this list).
* This is in contrast to the current approach, which groups and ungroups
blocks between each iteration of the NpdfTransform.
* The standardization is performed on each block.
* The univariate bias correction is maintained on the blocks and reordered,
*then* the blocks are ungrouped.
* In the sdba noteb... (continued)
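
To summarize that block handling, here is a schematic of the per-block order of operations. It is an outline only: `npdf_transform` and `univariate_adjust` are stand-ins passed by the caller, and the helper names are placeholders, not the actual xclim internals.

```python
import numpy as np


def _standardize(x):
    # Per-variable standardization of a (nvar, ntime) block.
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)


def _reorder(x, like):
    # Schaake-shuffle-style reordering: give `x` the rank structure of `like`.
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        out[i, np.argsort(like[i])] = np.sort(x[i])
    return out


def process_block(ref, hist, sim, npdf_transform, univariate_adjust):
    """Per-block MBCn flow: standardize, npdf transform, adjust, reorder, ungroup."""
    # 1. Standardize once, at the start; the block keeps its shape throughout.
    sim_npdf = npdf_transform(_standardize(ref), _standardize(hist), _standardize(sim))
    # 2. The univariate bias correction stays on the block...
    scen = univariate_adjust(ref, hist, sim)
    # 3. ...and is reordered against the npdf-transformed data...
    scen = _reorder(scen, like=sim_npdf)
    # 4. ...before the caller ungroups the block back onto the original time axis
    #    using its stored time indices.
    return scen
```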

200 of 230 new or added lines in 5 files covered. (86.96%)

9049 of 10003 relevant lines covered (90.46%)

5.75 hits per line

Jobs
ID  Job                                                        Ran                       Files  Coverage
1   run-{{ matrix.tox-env }}-{{ matrix.os }} - 10069141429.1   24 Jul 2024 02:07AM UTC   0      90.42
2   run-{{ matrix.tox-env }}-opt-slow - 10069141429.2          24 Jul 2024 02:07AM UTC   0      89.53
Source Files on build 10069141429
Detailed source file information is not available for this build.
Commit 1d919000 on github