
diana-hep / pyhf · build 364 · job 2
Coverage: 94% (master: 94%)

Build:
DEFAULT BRANCH: master
Ran 13 Apr 2018 12:20AM UTC
Files: 12
Run time: 0s
13 Apr 2018 12:07AM UTC · coverage: 97.808% (-0.3%) from 98.082%
Job 364.2 · push · travis-ci · lukasheinrich

Add benchmarks using pytest-benchmark (#92)

* Add initial benchmarks using pytest-benchmark

Add some proof-of-concept benchmarks using the features of
pytest-benchmark;
c.f. https://github.com/ionelmc/pytest-benchmark

* [Temporary] Parameterize only number of bins

The parameterization of the backends causes changes in the testing
environment that make the rest of the tests fail. For the time being,
turn off the backends and parameterize only the number of bins.

Additionally, beautify test_import.py

* Reset pyhf.tensorlib backend each test

After each test where the pyhf backend is changed, reset the backend to
the default value of pyhf.tensorlib.

Use ids with the backends to indicate more clearly which backend is
being used, labeling each by its name.

* Add bin ids

Add bin ids for cleaner labeling

Install pytest-benchmark with [histogram] option to also install pygal
so that the --benchmark-histogram option works

* Expand range to match bins and vary bin content

Have the range of the histogram expand as more bins are added.

Additionally, instead of adding the same bin content for every bin, have
the bin content be a Poisson random variable with a mean that is the
same for each bin.
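A sketch of the described setup, where the mean and seed values are assumptions: the histogram range grows with the bin count, and every bin's content is an independent Poisson draw with one shared mean.

```python
import numpy as np

def make_bins(n_bins, mean=10.0, seed=0):
    """Build toy histogram inputs as the commit describes (mean/seed assumed)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, float(n_bins), n_bins + 1)  # range expands with n_bins
    content = rng.poisson(mean, size=n_bins)             # same mean for every bin
    return edges, content
```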

* Benchmark runOnePoint()

Benchmark runOnePoint() so that a fit is actually done and there will be
variation in timing.

At the moment only the NumPy and PyTorch backends are benchmarked in the
test function as there are scaling problems with TensorFlow and the
MXNet optimizer has not been completed.

The source that is used just repeats values to ensure reproducibility in
the CI. Poisson variations with a fixed seed could also work, but
preliminary testing shows that the NumPy optimizer, which does not take
advantage of automatic differentiation, is not always able to complete
the fit at high bin counts, which would result in a failing test. Of
interest is that PyTorch is able to do so.

* Add benchmark summary... (continued)

714 of 730 relevant lines covered (97.81%)

0.98 hits per line

Source Files on job 364.2: Tree · List (0) · Changed (1) · Source Changed (0) · Coverage Changed (1)
  • Back to Build 320
  • Travis Job 364.2
  • 1214ad43 on github
  • Prev Job for on master (#340.2)
  • Next Job for on master (#366.3)