
johnbywater / eventsourcing / 2762
Coverage: 92%
master: 100%

Build:
LAST BUILD BRANCH: main
DEFAULT BRANCH: master
Ran: 07 Feb 2020 04:31PM UTC
Jobs: 3
Files: 100
Run time: 42s
Build #2762 · push · travis-ci · pending completion

Commit by johnbywater:
Added some alternatives for propagating notifications (sketched below):
- by pulling to a known current upstream head;
- by pulling the max page size of notifications ("greedy pull");
- by putting notifications in the Ray object store, and prompting with IDs;
- by prompting with notification objects included in the prompts.
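A minimal sketch of these four prompt styles as plain data classes, to make the trade-offs concrete; the class names and fields are hypothetical, not the library's actual types:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Prompt:
        """Base prompt: tells a downstream process that upstream has news."""
        upstream_name: str

    @dataclass
    class PullPrompt(Prompt):
        """Pull up to a known current upstream head position."""
        head_position: int = 0

    @dataclass
    class GreedyPullPrompt(Prompt):
        """Pull a full page of notifications, regardless of known head."""
        max_page_size: int = 1000

    @dataclass
    class IdsPrompt(Prompt):
        """Carry IDs of notifications placed in the Ray object store."""
        notification_ids: List[int] = field(default_factory=list)

    @dataclass
    class NotificationsPrompt(Prompt):
        """Carry the notification objects themselves."""
        notifications: List[dict] = field(default_factory=list)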

Pushing notifications can improve performance when the
'set_notification_ids' attribute of the process application is True,
and improves performance most when all events are given a notification
ID before being recorded by the record manager.
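A minimal sketch of that precondition, assuming events are mutable and using a process-wide counter; the attribute name and hook are assumptions, not the library's API:

    import itertools

    # Hypothetical counter handing out contiguous notification IDs
    # before events reach the record manager, so pushed notifications
    # need no read-back query to learn their log positions.
    _notification_ids = itertools.count(start=1)

    def assign_notification_ids(events):
        for event in events:
            # 'notification_id' as an attribute name is an assumption.
            event.notification_id = next(_notification_ids)
        return events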

Prompting with notification objects is the fastest approach when events
are stored in a real database (because the cost of DB access when
pulling notifications is avoided). Prompting with notification IDs,
accompanied by putting the notifications in Ray, might be better when
actors are on different nodes in a cluster, or when there are many
downstream processes; I didn't try that.
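A hedged sketch of that untried variant, using Ray's real ray.put/ObjectRef machinery but hypothetical actor and key names: the batch is put in the object store once, and every downstream actor receives it from there when prompted.

    import ray

    ray.init(ignore_reinit_error=True)

    @ray.remote
    class DownstreamProcess:
        """Hypothetical downstream process application actor."""
        def __init__(self):
            self.processed = []

        def receive_prompt(self, notification_ids, notifications):
            # Ray resolves the ObjectRef argument from the object store,
            # so the batch is stored once and shared by every reader.
            self.processed.extend(notifications)
            return notification_ids

    def prompt_downstream(notifications, actors):
        batch_ref = ray.put(notifications)      # one copy in the object store
        ids = [n["id"] for n in notifications]  # 'id' key is an assumption
        return [a.receive_prompt.remote(ids, batch_ref) for a in actors]

    actors = [DownstreamProcess.remote() for _ in range(3)]
    futures = prompt_downstream([{"id": 1, "topic": "ExampleEvent"}], actors)
    print(ray.get(futures))  # [[1], [1], [1]]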

The "greedy pull" (without including notifications in prompts) works
fastest with POPO infrastructure, and when notifications aren't set
with an ID before being written to the database.
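A minimal sketch of a greedy pull, assuming a hypothetical 'reader.read_page' API: rather than stopping at a known head position, keep pulling full pages until a short page signals the log is drained.

    MAX_PAGE_SIZE = 1000

    def greedy_pull(reader, start_position=0):
        position = start_position
        while True:
            # 'read_page' is a hypothetical reader method.
            page = reader.read_page(start=position, limit=MAX_PAGE_SIZE)
            yield from page
            position += len(page)
            if len(page) < MAX_PAGE_SIZE:
                break  # short page: nothing more to pull (for now)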

Although pushing falls back to pulling, it remains the case that when
pushing, the notifications must be stored somewhere in addition to the
process application database, and in the current implementation there
isn't any "back pressure" style flow control, so this storage might
become overloaded. Managing this, beyond the way Ray manages the object
store, would be an enhancement to the current implementation.
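A minimal sketch of what such flow control could look like, using a bounded standard-library queue; the buffer size and fallback policy are assumptions, not anything the implementation does today:

    import queue

    # A bounded buffer blocks the pusher when readers fall behind,
    # instead of letting pushed-notification storage grow unboundedly.
    push_buffer = queue.Queue(maxsize=100)

    def push_with_backpressure(notification, timeout=5.0):
        try:
            # Blocks for up to 'timeout' seconds while the buffer is full.
            push_buffer.put(notification, timeout=timeout)
            return True
        except queue.Full:
            # Give up on pushing; the reader will catch up by pulling
            # from the process application database.
            return False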

Also, this implementation's multi-threaded pre-fetching of notifications
reduces the cycle time of the event processing, and could usefully be
implemented in other runners that use multiple operating system
processes, such as this library's multiprocess runner.
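A minimal sketch of that pre-fetching pattern, reusing the hypothetical 'reader.read_page' API from the greedy-pull sketch above: a background thread fetches the next page while the caller processes the current one, overlapping I/O with processing.

    import queue
    import threading

    def prefetching_pull(reader, page_size=1000):
        pages = queue.Queue(maxsize=2)  # small buffer of pre-fetched pages

        def fetch():
            position = 0
            while True:
                page = reader.read_page(start=position, limit=page_size)
                pages.put(page)
                if len(page) < page_size:
                    pages.put(None)  # sentinel: log drained
                    break
                position += len(page)

        threading.Thread(target=fetch, daemon=True).start()
        while True:
            page = pages.get()
            if page is None:
                break
            yield from page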

5405 of 5862 relevant lines covered (92.2%)

2.76 hits per line

Jobs
ID  Job ID  Ran                      Files  Coverage
1   2762.1  07 Feb 2020 04:31PM UTC  0      91.42%
2   2762.2  07 Feb 2020 04:31PM UTC  0      92.07%
3   2762.3  07 Feb 2020 04:31PM UTC  0      92.08%
Source Files on build 2762
Detailed source file information is not available for this build.
  • Back to Repo
  • Travis Build #2762
  • bbb4eb20 on github
  • Prev Build on feature/ray-runner (#2761)
  • Next Build on feature/ray-runner (#2763)