
ben-manes / caffeine / #3216
DEFAULT BRANCH: master
Ran 17 Jul 2022 06:58AM UTC
Jobs 1
Files 77
Run time 34s

pending completion · #3216 · push · github-actions · ben-manes
improve parallelism of jcache event dispatching

This was for fun, to show the flexibility, elegance, and performance of
using future dependencies to create dispatch queues. The JCache API's
restriction is that a listener must process events in order by the
event's key.

Previously the dispatch queues were per listener, so each listener
executed all events sequentially, in order. That provided a very simple
model when fleshing out the JCache implementation and, since JCache has
not seen much adoption, it hasn't been a concern. In some ways it is
nice, as listeners do not have to worry about concurrency, and it
demonstrates a simple approach to in-order processing. This created a
lightweight, actor-like processing model.
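That earlier per-listener model might be sketched as below. The class and
method names are hypothetical, not the actual Caffeine implementation: each
dispatch chains the new event onto the tail future of the listener's queue,
so that listener sees every event sequentially, in submission order.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executor;

// Sketch of a per-listener serial dispatch queue. The map holds the tail
// future of each listener's queue; chaining onto that tail serializes all
// of the listener's events, like a lightweight actor mailbox.
final class PerListenerDispatcher {
  private final Map<Object, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();
  private final Executor executor;

  PerListenerDispatcher(Executor executor) {
    this.executor = executor;
  }

  /** Runs the event after all previously dispatched events for this listener. */
  CompletableFuture<Void> dispatch(Object listener, Runnable event) {
    // compute atomically swaps in the new tail, so concurrent dispatchers
    // agree on a single total order per listener
    return tails.compute(listener, (key, tail) -> (tail == null)
        ? CompletableFuture.runAsync(event, executor)
        : tail.thenRunAsync(event, executor));
  }
}
```

Because every event is a dependent of the previous tail, the listener never
observes two events concurrently and never sees them out of order, even when
the executor is a multi-threaded pool.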

This generalization allows a listener to execute parallel sequences of
events by their distinct keys. For a single key, a listener receives the
events in order, and different listeners may process those events in
parallel. This allows for full parallelization over the (listener, key)
pair, using future dependencies to create in-order key queues. When
there is no more work for a (listener, key) queue, it is immediately
removed by the last task.
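A minimal sketch of that generalization, again with hypothetical names: the
queue map is keyed by the (listener, key) pair, and each completed tail
conditionally removes its own entry so drained queues do not accumulate.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executor;

// Sketch of per-(listener, key) dispatch queues built from future
// dependencies. Events for the same key run sequentially; events for
// distinct keys may run in parallel on the executor's threads.
final class KeyedEventDispatcher {
  private final ConcurrentMap<Map.Entry<Object, Object>, CompletableFuture<Void>> queues =
      new ConcurrentHashMap<>();
  private final Executor executor;

  KeyedEventDispatcher(Executor executor) {
    this.executor = executor;
  }

  /** Runs the event after all prior events dispatched for this (listener, key). */
  CompletableFuture<Void> dispatch(Object listener, Object key, Runnable event) {
    var queueKey = Map.entry(listener, key);
    var tail = queues.compute(queueKey, (k, prior) -> (prior == null)
        ? CompletableFuture.runAsync(event, executor)
        : prior.thenRunAsync(event, executor));
    // The last task removes the drained queue; the conditional remove is a
    // no-op if another event was already chained onto this tail.
    return tail.whenComplete((result, error) -> queues.remove(queueKey, tail));
  }
}
```

The conditional `remove(key, value)` is what makes eager cleanup safe: if a
new event raced in and replaced the tail, the removal quietly fails and the
live queue is preserved.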

The API allows the listener to coalesce events for different keys into a
single batch call, if those events are given in order. However, since
each event type is a different method call, that coalescing would need
to be split into subsets that are processed in order. This creates
dependencies across keys, so we continue to not support it. Instead, a
listener implementation could coalesce events itself, which can be more
intelligent about what batching means.

The net result is a nice example of the simplicity of using core Java
to have ordered executions by distinct keys, and decoupling that work
from the carrier threads performing it. This allows for lightweight
parallelism while maintaining data consistency requirements.

7079 of 7205 relevant lines covered (98.25%)

0.98 hits per line

Jobs
ID  Job ID   Ran                      Files  Coverage
1   #3216.1  17 Jul 2022 06:58AM UTC  0      98.25%
Source Files on build #3216
Detailed source file information is not available for this build.
  • Build #3216
  • 5021b2fe on github

© 2026 Coveralls, Inc