SOHU-Co / kafka-node · build 1135 · job 12
Coverage: 89% (master: 89%)

Build:
DEFAULT BRANCH: master
Ran 08 Oct 2018 07:21PM UTC
Files 55
Run time 4s

08 Oct 2018 07:11PM UTC coverage: 89.276% (-0.2%) from 89.483%
Build environment: KAFKA_VERSION=1.0
Trigger: push
CI service: travis-ci
Author: Xiaoxin Lu
Commit: Multiple fixes to message ordering and compression, Fixes #298 (#1072)

* Multiple fixes to message ordering and compression, Fixes #298

There were numerous risks in the fetch process that are resolved here.

1) Compressed messages, as mentioned in #298, were guaranteed to be delivered out of order.
We resolve this by handling fetch response buffering through a chain of executions,
where one partition is not emitted until the previous one is done, while still
allowing each partition to be processed asynchronously (sketched below).

This, however, introduces a new risk: the system only dealt with synchronous
processing before, so two fetch buffers could never be processed at the same
time. Now that processing is asynchronous, we could end up processing multiple.

We now need to guard against starting a new fetch while one is still processing, to ensure
that a new fetch does not begin with stale offset data.
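
To make point 1 concrete, here is a minimal JavaScript sketch, not kafka-node's actual code (names such as FetchLoop, maybeFetch, and decompressAndEmit are hypothetical): partitions are processed through a promise chain so partition N+1 is not emitted until partition N has finished, even when decompression is asynchronous, and an in-flight flag prevents a new fetch from starting while the previous response is still being handled.

// Hypothetical sketch of ordered, chained partition processing plus a
// fetch-in-progress guard; not the actual kafka-node implementation.
class FetchLoop {
  constructor(emit) {
    this.emit = emit;          // delivers messages downstream, in order
    this.processing = false;   // true while a fetch response is being handled
  }

  // Guard: skip issuing a new fetch while the last response is still being
  // processed, so the next fetch never starts from stale offset data.
  maybeFetch(fetchFromBroker) {
    if (this.processing) return;
    this.processing = true;
    return fetchFromBroker()
      .then((partitions) => this.handleResponse(partitions))
      .finally(() => { this.processing = false; });
  }

  // Chain: each partition waits for the previous one before emitting, while
  // the work inside decompressAndEmit can itself be asynchronous.
  handleResponse(partitions) {
    return partitions.reduce(
      (chain, partition) => chain.then(() => this.decompressAndEmit(partition)),
      Promise.resolve()
    );
  }

  decompressAndEmit(partition) {
    // Stand-in for an asynchronous gunzip of a compressed message set.
    return Promise.resolve(partition.messages).then((messages) => {
      messages.forEach((message) => this.emit(message));
    });
  }
}

// Example: both partitions are emitted strictly in the order they arrive.
const loop = new FetchLoop((m) => console.log('offset', m.offset));
loop.maybeFetch(() => Promise.resolve([
  { partition: 0, messages: [{ offset: 10 }, { offset: 11 }] },
  { partition: 1, messages: [{ offset: 7 }] },
]));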

2) Fetch requests may start and then end up long polling. The fetch result may
come back after a rebalance, or after a disconnection from the group has occurred.

Those fetch requests would then emit events, possibly for different offsets, topics,
or partitions than the ones the new consumer group membership is responsible for.

So if you received an old fetch response for a topic/partition you are no longer
responsible for, you would see errors. Or, even worse, if you are still responsible
for that partition, you might receive a message that is about to be received again
in the new membership's fetch request.

We will now run a state validator on the message handler to ensure our group
membership has not changed since the fetch request started, and ignore the response
if it has.
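
A minimal sketch of the state check described in point 2 (again hypothetical, not kafka-node's actual API): remember which group generation a fetch was issued under, and drop the response if the membership has changed since then, or if it carries partitions this member no longer owns.

// Hypothetical membership/state validator; not the actual kafka-node code.
let generationId = 0;              // bumped on every rebalance or reconnect
let ownedPartitions = new Set();   // "topic:partition" keys this member owns

function onRebalance(newAssignment) {
  generationId += 1;
  ownedPartitions = new Set(newAssignment);
}

function handleFetchResponse(response, fetchGeneration) {
  // Ignore responses from fetches issued before the membership changed.
  if (fetchGeneration !== generationId) return;

  for (const { topic, partition, messages } of response) {
    // Also ignore data for partitions we are no longer responsible for.
    if (!ownedPartitions.has(topic + ':' + partition)) continue;
    messages.forEach((m) => console.log('emit', topic, partition, m.offset));
  }
}

// Example: a response captured under an older generation is dropped.
onRebalance(['test:0']);           // generation 1, owns test:0
const staleGeneration = generationId;
onRebalance(['test:1']);           // generation 2, owns test:1 only
handleFetchResponse([{ topic: 'test', partition: 0, messages: [{ offset: 5 }] }],
                    staleGeneration); // ignored: membership changed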

3) Kafka, for some unknown reasons (and one known reason, in the case of compression),
can return an offset older than the offset you requested.

If our topic payloads already contained this offset, it is a message we know we
have already emitted.

We need to skip this message, and any message that shouldn't have been in the re... (continued)
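
And a minimal sketch of the skip described in point 3 (the helper name is hypothetical): when a decompressed batch contains offsets older than the one that was requested, drop those messages instead of re-emitting them.

// Hypothetical filter for point 3; not the actual kafka-node code.
// A compressed batch can begin before the requested offset, so anything
// older than what we asked for has already been emitted and must be skipped.
function dropAlreadyEmitted(messages, requestedOffset) {
  return messages.filter((m) => m.offset >= requestedOffset);
}

// Example: we asked for offset 105, but the decompressed batch starts at 100.
const batch = [100, 101, 104, 105, 106].map((offset) => ({ offset }));
console.log(dropAlreadyEmitted(batch, 105)); // only offsets 105 and 106 remain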

1280 of 1691 branches covered (75.69%)

3946 of 4420 relevant lines covered (89.28%)

7200.71 hits per line

Source Files on job 1135.12 (KAFKA_VERSION=1.0)
Changed: 17 files (10 with source changes, 17 with coverage changes)
Commit 91e361d8 on github · Travis Job 1135.12