
JuliaDiff / ForwardDiff.jl / Build 686
Coverage: 80%

Build:
Default branch: master
Ran: 15 Jun 2016 08:19PM UTC
Jobs: 2
Files: 8
Run time: 15s
Status: pending completion
Build: #686 (push)
CI: travis-ci
Committer: web-flow

giant rewrite for Julia v0.5 (#102)

Drop Vector storage for epsilon components. This reduces indirection in the Partials code, which previously had to handle multiple container types. I have yet to see a case where Tuple storage wasn't faster, given the GC overhead that Vector storage incurs.

Consolidate all ForwardDiffNumber types into the new type Dual{N,T}, which is structured similarly to the former GradientNumber. HessianNumber and TensorNumber have been replaced by the ability to nest Duals. This allows for Tuple storage of higher-order partial components, cuts out a lot of code, and should allow for more cache-friendly higher-order API methods (since indexing patterns will be more straightforward).
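The nesting idea above can be shown with a minimal scalar sketch. This is illustrative Python, not ForwardDiff.jl's implementation: a dual number whose perturbation field may itself be a dual, so seeding two levels deep recovers a second derivative, which is what lets a single Dual type replace HessianNumber.

```python
# Minimal forward-mode dual number. Nesting Dual inside Dual yields
# higher-order derivatives from the same first-order rules.
class Dual:
    def __init__(self, value, partial):
        self.value = value      # primal value (may itself be a Dual)
        self.partial = partial  # derivative ("epsilon") component

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other, 0)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.value + other.value, self.partial + other.partial)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # product rule carried in the partial component
        return Dual(self.value * other.value,
                    self.value * other.partial + self.partial * other.value)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1)).partial

def second_derivative(f, x):
    # Nest: differentiate the derivative by seeding a Dual inside a Dual.
    return derivative(lambda y: f(Dual(y, 1)).partial, x)

f = lambda x: x * x * x           # f(x) = x^3
print(derivative(f, 2.0))         # f'(2)  = 3 * 2^2 = 12.0
print(second_derivative(f, 2.0))  # f''(2) = 6 * 2   = 12.0
```

Because nesting reuses the ordinary first-order arithmetic, no dedicated second- or third-order number type is needed, which is the code-size win the paragraph above describes.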

All @generated functions have been removed, and the API has been simplified by introducing the Chunk immutable and allowing subtypes of ForwardDiffResults to be passed as mutable arguments to the API functions. The documentation will explain this in more detail, once I write it.

Remove the function-generating API functions. I think that it's now easier and more transparent for people to define their own closures, e.g. j(x) = ForwardDiff.jacobian(f, x).

The code is now generic enough that higher-order/higher-dimensional derivatives can be written using the lower-order API functions. Thus, tensor/tensor! have been removed, but are still easily implementable by users.
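A sketch of that composition pattern: a Hessian built as the Jacobian of the gradient, using only generic lower-order operators. Finite differences stand in here for forward-mode duals, and all function names are illustrative, not ForwardDiff's API.

```python
# Build a second-order operator (hessian) purely from first-order ones.
def gradient(f, x, h=1e-5):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))  # central difference
    return g

def jacobian(g, x, h=1e-5):
    # g: R^n -> R^m; returns the m x n matrix of partials.
    cols = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        gp, gm = g(xp), g(xm)
        cols.append([(a - b) / (2 * h) for a, b in zip(gp, gm)])
    return [list(row) for row in zip(*cols)]  # transpose cols -> rows

def hessian(f, x):
    # Higher order from lower order: Jacobian of the gradient.
    return jacobian(lambda y: gradient(f, y), x)

f = lambda x: x[0] ** 2 * x[1]      # analytic H = [[2y, 2x], [2x, 0]]
H = hessian(f, [3.0, 5.0])          # ~ [[10, 6], [6, 0]]
```

A third-order "tensor" operator follows the same way, as the Jacobian of the Hessian, which is why a dedicated tensor/tensor! pair is no longer necessary.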

Experimental multithreading support for parallel chunk mode can be enabled on some API functions by passing in multithread = true. I'm getting a 2x speed-up vs. the single-threaded implementation when using 4 threads to take the gradient of rosenbrock with large (> 10,000 elements) input vectors. I haven't benchmarked enough to know how this scales, though I'd expect it to asymptote to an N-times speed-up for N threads as the input length goes to infinity.
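The parallel chunk idea can be sketched as follows: partition the input coordinates into chunks, compute each chunk's gradient entries on a worker thread, and stitch the pieces back together. This is an assumed illustration in Python with finite differences, not ForwardDiff's actual chunk-seeding implementation; rosenbrock matches the benchmark function mentioned above.

```python
from concurrent.futures import ThreadPoolExecutor

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def grad_entries(f, x, indices, h=1e-6):
    # Gradient components for one chunk of coordinates.
    out = []
    for i in indices:
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        out.append((f(xp) - f(xm)) / (2 * h))
    return out

def gradient_threaded(f, x, nthreads=4):
    n = len(x)
    size = (n + nthreads - 1) // nthreads
    chunks = [range(s, min(s + size, n)) for s in range(0, n, size)]
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        parts = list(pool.map(lambda idx: grad_entries(f, x, idx), chunks))
    return [g for part in parts for g in part]  # reassemble in order

g = gradient_threaded(rosenbrock, [1.0] * 8)
# x = (1, ..., 1) is the minimum, so every gradient component is ~0
```

Each chunk's work is independent of the others, which is why the speed-up should approach N-fold for N threads once the per-chunk work dominates the thread-coordination overhead.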

The caching layer is thread-safe for multithreaded API functions, and now features some optimizations proposed by @KristofferC to reduce the number of key lookups per API function call.
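The two properties described above, thread safety plus fewer lookups per call, can be illustrated with a small sketch. This is assumed, illustrative Python (the class and method names are hypothetical), not the ForwardDiff caching code.

```python
import threading

class WorkCache:
    """Lock-protected cache of reusable per-key work buffers."""
    def __init__(self, factory):
        self._factory = factory   # builds the buffer on a cache miss
        self._lock = threading.Lock()
        self._store = {}

    def fetch(self, key):
        with self._lock:          # safe to share across threads
            buf = self._store.get(key)   # a single lookup on a hit
            if buf is None:
                buf = self._store[key] = self._factory(key)
            return buf

cache = WorkCache(lambda n: [0.0] * n)
a = cache.fetch(4)
b = cache.fetch(4)
assert a is b   # the same buffer is reused across calls
```

Returning the same buffer on repeated calls is what avoids reallocation inside the API functions, and doing only one dictionary lookup on the hot path is the kind of lookup-count reduction the paragraph refers to.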

466 of 579 relevant lines covered (80.48%)

24259.08 hits per line

Jobs
ID  Job ID  Ran                      Files  Coverage
1   686.1   15 Jun 2016 08:19PM UTC  0      79.1%    Travis Job 686.1
2   686.2   15 Jun 2016 08:19PM UTC  0      80.31%   Travis Job 686.2
Source Files on build 686
Detailed source file information is not available for this build.
  • Travis Build #686
  • 934b3b26 on github
  • Prev Build on master (#676)
  • Next Build on master (#692)