
diana-hep / pyhf / Build #364
Coverage: 94%

Build:
DEFAULT BRANCH: master
Ran: 13 Apr 2018 12:17AM UTC · Jobs: 3 · Files: 12 · Run time: 2min
Build #364 · push · travis-ci · pending completion

lukasheinrich
Add benchmarks using pytest-benchmark (#92)

* Add initial benchmarks using pytest-benchmark

Add some proof-of-concept benchmarks using the features of
pytest-benchmark
c.f. https://github.com/ionelmc/pytest-benchmark

* [Temporary] Parameterize only number of bins

Parameterizing over the backends changes the testing environment in
ways that cause the rest of the tests to fail. For the time being,
turn off the backend parameterization and parameterize only over the
number of bins.

Additionally, beautify test_import.py

* Reset pyhf.tensorlib backend each test

After each test where the pyhf backend is changed, reset the backend
to the default value of pyhf.tensorlib

Use ids with the backends to indicate more clearly which backend is
in use, identifying each one by its name
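The reset-and-ids pattern described above can be sketched as follows. This uses a hypothetical in-memory stand-in for pyhf's global tensorlib state rather than the real pyhf API:

```python
import pytest

# Hypothetical stand-in for pyhf's global backend state.
_DEFAULT_BACKEND = "numpy"
_state = {"backend": _DEFAULT_BACKEND}


def set_backend(name):
    _state["backend"] = name


def get_backend():
    return _state["backend"]


@pytest.fixture(autouse=True)
def reset_backend():
    yield  # the test body runs here and may switch backends
    # Restore the default afterwards so one test cannot leak its
    # backend choice into the next.
    set_backend(_DEFAULT_BACKEND)


# ids=str labels each case by the backend name, so reports read
# test_switch[pytorch] instead of the opaque test_switch[backend1].
@pytest.mark.parametrize("backend", ["numpy", "pytorch"], ids=str)
def test_switch(backend):
    set_backend(backend)
    assert get_backend() == backend
```

The `autouse=True` fixture runs around every test in scope, so no test needs to remember to clean up after itself.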

* Add bin ids

Add bin ids for cleaner labeling

Install pytest-benchmark with the [histogram] extra to also install
pygal, so that the --benchmark-histogram option works
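The install step above might look like this; the test path is an illustrative placeholder, not a path from the repository:

```shell
# The [histogram] extra pulls in the pygal dependency that
# --benchmark-histogram needs to render its SVG timing histograms.
pip install 'pytest-benchmark[histogram]'

# Run the benchmarks and write a timing histogram (path is illustrative).
pytest tests/benchmarks/ --benchmark-histogram
```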

* Expand range to match bins and vary bin content

Have the range of the histogram expand as more bins are added

Additionally, instead of adding the same bin content for every bin, have
the bin content be a Poisson random variable with a mean that is the
same for each bin.
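The scheme above (range growing with the bin count, contents drawn as Poisson variates with one shared mean) can be sketched as follows. The dictionary layout, the mean of 120, and the function name are illustrative assumptions, not the actual pyhf test code:

```python
import numpy as np


def make_source(n_bins, mean=120, seed=0):
    # Fixed seed so the "random" bin contents are reproducible.
    rng = np.random.default_rng(seed)
    # Expand the range with the bin count so every bin keeps unit width.
    bin_edges = np.linspace(0.0, n_bins, n_bins + 1)
    # One Poisson draw per bin, all sharing the same mean.
    bin_content = rng.poisson(lam=mean, size=n_bins)
    return {"binning": bin_edges.tolist(), "data": bin_content.tolist()}
```

With the seed fixed, two calls with the same arguments return identical bin contents, which matters for stable CI runs.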

* Benchmark runOnePoint()

Benchmark runOnePoint() so that a fit is actually done and there will be
variation in timing.

At the moment only the NumPy and PyTorch backends are benchmarked in the
test function as there are scaling problems with TensorFlow and the
MXNet optimizer has not been completed.

The source that is used simply repeats values, to ensure
reproducibility in the CI. Poisson variations with a fixed seed could
also work, but preliminary testing shows that the NumPy optimizer,
which does not take advantage of automatic differentiation, is not
always able to complete the fit at high bin counts, which would result
in a failing test. Interestingly, PyTorch is able to do so.

* Add benchmark summary... (continued)

716 of 730 relevant lines covered (98.08%)

2.94 hits per line

Jobs
ID  Job ID  Ran                      Files  Coverage
1   364.1   13 Apr 2018 12:17AM UTC  0      97.81%  (Travis Job 364.1)
2   364.2   13 Apr 2018 12:20AM UTC  0      97.81%  (Travis Job 364.2)
3   364.3   13 Apr 2018 12:18AM UTC  0      98.08%  (Travis Job 364.3)
Source Files on build 364
Detailed source file information is not available for this build.
  • Travis Build #364
  • 1214ad43 on github
  • Prev Build on master (#340)
  • Next Build on master (#366)