
xapi-project / xen-api / 22884630461

Build:
LAST BUILD BRANCH: private/christianlin/CP-311489
DEFAULT BRANCH: master
Ran 10 Mar 2026 02:44AM UTC · Jobs 1 · Files 34 · Run time 1min

06 Mar 2026 09:10AM UTC coverage: 80.459%. Remained the same
22884630461 · push · github · web-flow
XSI-2155/CA-411684: When a host has most of its memory used, restarted VMs are not placed in a single NUMA node (#6929)

Xenopsd takes the minimum of the actual free memory on a NUMA node and
an estimate. The estimate was never implemented, so the code simply
reused the last free-memory value. As a result, the tracked free memory
on a NUMA node could only ever decrease, and once the host's memory was
fully used, NUMA optimization stopped working for newly (re)booted VMs,
even if other VMs had been stopped in the meantime.
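The broken accounting can be sketched in a few lines (a hypothetical
Python illustration, not xenopsd's actual OCaml code; `buggy_update`
and the sample readings are invented):

```python
def buggy_update(estimate, actual_free):
    # The planned estimate was never implemented, so the stale value is
    # simply combined with the current reading via min(): the result can
    # never increase, even when VMs are stopped and memory is freed.
    return min(estimate, actual_free)

# Simulated free-memory readings (GiB) on one NUMA node: the host fills
# up, then VMs are stopped and real free memory recovers.
readings = [64, 32, 8, 40, 64]
estimate = readings[0]
for free in readings:
    estimate = buggy_update(estimate, free)

print(estimate)  # 8: stuck at the low-water mark although 64 GiB is free
```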

Master has a different fix (it uses claims and tracks memory usage
accurately), but that requires a newer version of Xen. Initially it was
decided that this bug wouldn't get fixed on the LCM branch, but that
decision has since changed.

Instead, xenopsd should track how much memory the pending VM starts are
using. This estimate can become inaccurate fairly quickly on old
versions of Xen without per-node claims, so to reduce that window the
estimate is only used while other domain builds are pending.
When there are 0 pending domain builds, the estimate is reset to the
actual free memory on the node (note: reset, *not* take the minimum).

Then finally we take the minimum between the actual free memory on a
node and the above estimate.
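The corrected scheme described above might look like this (a minimal
sketch under the assumptions stated in the message; the class and
method names are invented, not xenopsd's API):

```python
class NodeEstimate:
    """Per-NUMA-node accounting: estimate free memory while domain
    builds are pending; reset from the real value when none are."""

    def __init__(self, actual_free):
        self.estimate = actual_free
        self.pending_builds = 0

    def start_domain_build(self, vm_memory):
        # A pending start is expected to consume this much on the node.
        self.pending_builds += 1
        self.estimate -= vm_memory

    def finish_domain_build(self, actual_free):
        self.pending_builds -= 1
        if self.pending_builds == 0:
            # Reset, *not* min: this is what lets the estimate recover
            # once the actual free memory is trustworthy again.
            self.estimate = actual_free

    def usable_free(self, actual_free):
        # Placement finally takes the minimum of both values.
        return min(actual_free, self.estimate)

node = NodeEstimate(64)       # 64 GiB free on this node
node.start_domain_build(16)   # a 16 GiB VM is being built
print(node.usable_free(64))   # 48: the pending build is accounted for
node.finish_domain_build(48)  # build done, 48 GiB really free now
print(node.usable_free(48))   # 48: estimate was reset, not min'ed
```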

In practice this is not as simple as that, because Xen doesn't always
honour the CPU affinity hint we give it, and sometimes places the
memory on other nodes. This happens particularly when the last 4GiB
(the 32-bit DMA zone) on node0 is used up: Xen only uses this zone as a
last resort, because some devices can only work with memory available
there.
To fix this, xenopsd considers node0 to have less memory available than
it actually does (so it prefers other nodes instead of using up the
last 4GiB). Finally, when it cannot accurately estimate how much memory
ends up where (we don't know how much of the DMA32 heap is already used
by devices), it spreads the last VMs across all nodes.
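The node0 penalty could be sketched as follows (hypothetical: the 4 GiB
reserve corresponds to the DMA32 zone mentioned above, but
`effective_free` is an invented name, not xenopsd's):

```python
DMA32_RESERVE = 4  # GiB: the 32-bit DMA zone at the bottom of node0

def effective_free(node_id, actual_free):
    # Pretend node0 has 4 GiB less free memory than it really does, so
    # placement prefers other nodes and keeps low memory for devices
    # that can only use the DMA32 zone.
    if node_id == 0:
        return max(0, actual_free - DMA32_RESERVE)
    return actual_free

print(effective_free(0, 10))  # 6
print(effective_free(1, 10))  # 10
```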

This bug also affects mast... (continued)

3504 of 4355 relevant lines covered (80.46%)

0.8 hits per line

Jobs
ID  Job ID                      Ran                      Files  Coverage
1   python3.11 - 22884630461.1  10 Mar 2026 02:44AM UTC  34     80.46
GitHub Action Run
Source Files on build 22884630461
  • Tree
  • List: 34
  • Changed: 0
  • Source Changed: 0
  • Coverage Changed: 0
  • Back to Repo
  • Github Actions Build #22884630461
  • bc3f4c52 on github
  • Prev Build on 26.1-lcm (#22752779817)

© 2026 Coveralls, Inc