Alan-Jowett / ebpf-verifier, build 22235569093 (push, github, web-flow)

20 Feb 2026 02:12AM UTC coverage: 88.002% (-0.2%) from 88.157%
Handle Call builtins: fix handling of Falco tests (#1025)

* falco: fix raw_tracepoint privilege and group expected failures

Mark raw_tracepoint/raw_tracepoint_writable as privileged program types so Falco raw-tracepoint sections are not treated as unprivileged argument checks.
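
As a rough, hypothetical sketch of what such a change implies (illustrative type, field, and prefix names only, not PREVAIL's actual platform structures), a privileged program type is simply one whose table entry opts out of the unprivileged-only argument checks:

// Toy illustration only; type and field names are assumptions, not PREVAIL's API.
#include <string>
#include <vector>

struct ToyProgramType {
    std::string name;
    std::vector<std::string> section_prefixes;
    bool is_privileged; // privileged types skip the unprivileged-only argument checks
};

// Hypothetical entries; the section prefixes are illustrative.
static const std::vector<ToyProgramType> toy_types = {
    {"raw_tracepoint", {"raw_tracepoint/", "raw_tp/"}, true},
    {"raw_tracepoint_writable", {"raw_tracepoint.w/"}, true},
    {"socket_filter", {"socket"}, false},
};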

Update Falco sample matrix to move now-passing sections out of TEST_SECTION_FAIL and group the remaining expected failures by root-cause class (offset lower-bound loss vs size lower-bound loss at correlated joins).

Signed-off-by: Elazar Gershuni <elazarg@gmail.com>

* elf/unmarshal: gate builtin relocations via platform call model

Add platform hooks to resolve builtin symbols and provide builtin call contracts, thread relocation-gated builtin call offsets through ProgramInfo, and only treat static helper IDs as builtins at gated call sites.

Also extend platform-table, marshal, and YAML-platform tests to cover builtin resolver wiring and call unmarshal behavior.
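
A minimal sketch of the gating idea described above (hypothetical names throughout; the real hooks live in PREVAIL's platform and unmarshaling code): the unmarshaler records which call-instruction offsets were targeted by a builtin relocation, and only those call sites are allowed to treat a static helper ID as a builtin.

// Hypothetical sketch; types and member names are illustrative assumptions.
#include <cstdint>
#include <functional>
#include <optional>
#include <set>
#include <string>

struct ToyPlatform {
    // Optional hook: resolve a relocation symbol to a builtin ID, if the platform knows it.
    std::function<std::optional<int32_t>(const std::string&)> resolve_builtin;
};

struct ToyProgramInfo {
    ToyPlatform platform;
    // Offsets of call instructions whose immediate was patched by a builtin relocation.
    std::set<uint64_t> builtin_call_offsets;
};

// A static helper ID is treated as a builtin only at a relocation-gated call site.
inline bool is_builtin_call(const ToyProgramInfo& info, const uint64_t insn_offset) {
    return info.builtin_call_offsets.contains(insn_offset);
}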
* crab: canonicalize unsigned intervals in bitwise_and

When uvalue intervals temporarily carry signed lower bounds (e.g., after joins), Interval::bitwise_and asserted in debug builds. Canonicalize both operands via zero_extend(64) before unsigned bitwise reasoning, preserving soundness and avoiding debug aborts.

Validated by reproducing the SIGABRT on the reverted code in [falco][verify] and confirming that the patched build completes with the expected 73 passed / 20 failed-as-expected results.
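
To make the canonicalization concrete, here is a self-contained toy model (not the crab Interval class; it only mirrors the idea from the message above, where zero_extend(64) re-reads possibly-signed 64-bit bounds as unsigned before bitwise reasoning):

// Toy sketch under stated assumptions; the real domain logic lives in crab.
#include <algorithm>
#include <cassert>
#include <cstdint>

struct ToyUInterval {
    uint64_t lb;
    uint64_t ub;
};

// Reinterpret signed 64-bit bounds as unsigned. If the signed range straddles zero
// (e.g. [-1, 5]), its unsigned image wraps, so the sound approximation is the full range.
ToyUInterval zero_extend64(const int64_t lb, const int64_t ub) {
    const auto ulb = static_cast<uint64_t>(lb);
    const auto uub = static_cast<uint64_t>(ub);
    return ulb <= uub ? ToyUInterval{ulb, uub} : ToyUInterval{0, UINT64_MAX};
}

// A sound (coarse) unsigned bound for x & y: the result never exceeds either upper bound.
ToyUInterval bitwise_and(const ToyUInterval& x, const ToyUInterval& y) {
    return {0, std::min(x.ub, y.ub)};
}

int main() {
    // A uvalue interval carrying a signed lower bound after a join, e.g. [-1, 5],
    // is canonicalized first instead of tripping a debug assertion.
    const ToyUInterval a = zero_extend64(-1, 5);  // wraps, becomes [0, UINT64_MAX]
    const ToyUInterval b = zero_extend64(0, 255); // already unsigned
    assert(bitwise_and(a, b).ub <= 255);
    return 0;
}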

* Fix unsound bitwise_and case for non-singleton all-ones rhs

Signed-off-by: Elazar Gershuni <elazarg@gmail.com>

239 of 252 new or added lines in 9 files covered (94.84%).

602 existing lines in 16 files now uncovered.

11743 of 13344 relevant lines covered (88.0%)

3262592.78 hits per line

Source file: /src/result.cpp (76.51% covered)

// Copyright (c) Prevail Verifier contributors.
// SPDX-License-Identifier: MIT
#include <algorithm>
#include <map>
#include <ranges>
#include <regex>
#include <sstream>
#include <type_traits>

#include "crab/ebpf_domain.hpp"
#include "ir/program.hpp"
#include "result.hpp"
#include "spec/ebpf_base.h"
#include "spec/vm_isa.hpp"

namespace prevail {

template <typename>
inline constexpr bool always_false_v = false;

// Stream-local storage index for invariant filter
static int invariant_filter_index() {
    static const int index = std::ios_base::xalloc();
    return index;
}

const RelevantState* get_invariant_filter(std::ostream& os) {
    return static_cast<const RelevantState*>(os.pword(invariant_filter_index()));
}

std::ostream& operator<<(std::ostream& os, const invariant_filter& filter) {
    os.pword(invariant_filter_index()) = const_cast<void*>(static_cast<const void*>(filter.state));
    return os;
}

bool RelevantState::is_relevant_constraint(const std::string& constraint) const {
    // Extract the register or stack reference from the constraint.
    // Constraints look like: "r1.type=number", "r2.svalue=[0, 100]", "s[4088...4095].type=ctx"
    // Relational constraints: "r1.svalue-r8.svalue<=-100", "r0.svalue=r3.svalue+2"
    // Also handle: "packet_size=54", "meta_offset=[-4098, 0]"

    // Track whether we matched any known pattern.  When at least one pattern
    // fires but nothing is relevant we return false; the conservative
    // `return true` at the end only applies to truly unparseable constraints.
    bool parsed_any_pattern = false;

    // Check for ANY register mentioned in the constraint (not just at start).
    // This handles relational constraints like "r1.svalue-r8.svalue<=100".
    static const std::regex reg_pattern(R"(r(\d+)\.)");
    std::sregex_iterator it(constraint.begin(), constraint.end(), reg_pattern);
    std::sregex_iterator end;

    while (it != end) {
        parsed_any_pattern = true;
        const uint8_t reg_num = static_cast<uint8_t>(std::stoi((*it)[1].str()));
        if (registers.contains(Reg{reg_num})) {
            return true; // At least one relevant register is mentioned
        }
        ++it;
    }

    // Precompute absolute stack offsets once to avoid repeated conversion in loops
    std::vector<int64_t> abs_stack_offsets;
    abs_stack_offsets.reserve(stack_offsets.size());
    for (const auto& rel_offset : stack_offsets) {
        abs_stack_offsets.push_back(EBPF_TOTAL_STACK_SIZE + rel_offset);
    }

    // Check for stack range pattern: s[start...end]
    static const std::regex stack_range_pattern(R"(s\[(\d+)\.\.\.(\d+)\])");
    std::sregex_iterator stack_it(constraint.begin(), constraint.end(), stack_range_pattern);
    while (stack_it != end) {
        parsed_any_pattern = true;
        const int64_t abs_start = std::stoll((*stack_it)[1].str());
        const int64_t abs_end = std::stoll((*stack_it)[2].str());
        for (const auto& abs_offset : abs_stack_offsets) {
            if (abs_offset >= abs_start && abs_offset <= abs_end) {
                return true; // Overlaps within the constraint range
            }
        }
        ++stack_it;
    }

    // Check for single-offset stack pattern: s[offset]
    static const std::regex stack_single_pattern(R"(s\[(\d+)\]\.)");
    std::sregex_iterator single_it(constraint.begin(), constraint.end(), stack_single_pattern);
    while (single_it != end) {
        parsed_any_pattern = true;
        const int64_t abs_pos = std::stoll((*single_it)[1].str());
        for (const auto& abs_offset : abs_stack_offsets) {
            if (abs_offset == abs_pos) {
                return true;
            }
        }
        ++single_it;
    }

    // Global constraints (packet_size, meta_offset) - always show for context
    if (constraint.starts_with("packet_size") || constraint.starts_with("meta_offset")) {
        return true;
    }

    // A known pattern was parsed but nothing matched — the constraint is irrelevant.
    if (parsed_any_pattern) {
        return false;
    }

    // If we can't parse it at all, show it (conservative)
    return true;
}

bool AnalysisResult::is_valid_after(const Label& label, const StringInvariant& state) const {
    const EbpfDomain abstract_state =
        EbpfDomain::from_constraints(state.value(), thread_local_options.setup_constraints);
    return abstract_state <= invariants.at(label).post;
}

ObservationCheckResult AnalysisResult::check_observation_at_label(const Label& label, const InvariantPoint point,
                                                                  const StringInvariant& observation,
                                                                  const ObservationCheckMode mode) const {
    const auto it = invariants.find(label);
    if (it == invariants.end()) {
        return {.ok = false, .message = "No invariant available for label " + to_string(label)};
    }
    const auto& inv_pair = it->second;
    const EbpfDomain& abstract_state = (point == InvariantPoint::pre) ? inv_pair.pre : inv_pair.post;

    const EbpfDomain observed_state =
        observation.is_bottom()
            ? EbpfDomain::bottom()
            : EbpfDomain::from_constraints(observation.value(), thread_local_options.setup_constraints);

    if (observed_state.is_bottom()) {
        return {.ok = false, .message = "Observation constraints are unsatisfiable (domain is bottom)"};
    }

    if (abstract_state.is_bottom()) {
        return {.ok = false, .message = "Invariant at label is bottom (unreachable)"};
    }

    switch (mode) {
    case ObservationCheckMode::entailed:
        if (observed_state <= abstract_state) {
            return {.ok = true, .message = ""};
        }
        return {.ok = false, .message = "Invariant does not entail the observation (C ⊑ A is false)"};

    case ObservationCheckMode::consistent:
        // Default: consistency / satisfiability.
        if ((abstract_state & observed_state).is_bottom()) {
            return {.ok = false, .message = "Observation contradicts invariant (meet is bottom)"};
        }
        return {.ok = true, .message = ""};
    }

    return {.ok = false, .message = "Unsupported observation check mode"};
}

bool AnalysisResult::is_consistent_before(const Label& label, const StringInvariant& observation) const {
    return check_observation_at_label(label, InvariantPoint::pre, observation, ObservationCheckMode::consistent).ok;
}

bool AnalysisResult::is_consistent_after(const Label& label, const StringInvariant& observation) const {
    return check_observation_at_label(label, InvariantPoint::post, observation, ObservationCheckMode::consistent).ok;
}

StringInvariant AnalysisResult::invariant_at(const Label& label) const { return invariants.at(label).post.to_set(); }

std::optional<VerificationError> AnalysisResult::find_first_error() const {
    for (const auto& inv_pair : invariants | std::views::values) {
        if (inv_pair.pre.is_bottom()) {
            continue;
        }
        if (inv_pair.error) {
            return inv_pair.error;
        }
    }
    return {};
}

std::map<Label, std::vector<std::string>> AnalysisResult::find_unreachable(const Program& prog) const {
    std::map<Label, std::vector<std::string>> unreachable;
    for (const auto& [label, inv_pair] : invariants) {
        if (inv_pair.pre.is_bottom()) {
            continue;
        }
        if (const auto passume = std::get_if<Assume>(&prog.instruction_at(label))) {
            if (inv_pair.post.is_bottom() && !inv_pair.error) {
                const auto s = to_string(*passume);
                unreachable[label].emplace_back(to_string(label) + ": Code becomes unreachable (" + s + ")");
            }
        }
    }
    return unreachable;
}

/// Extract the registers and stack offsets read/written by an instruction.
/// Used to populate InstructionDeps during forward analysis.
InstructionDeps extract_instruction_deps(const Instruction& ins, const EbpfDomain& pre_state) {
    InstructionDeps deps;

    std::visit(
        [&](const auto& v) {
            using T = std::decay_t<decltype(v)>;

            if constexpr (std::is_same_v<T, Bin>) {
                // dst = dst op src (or dst = src for MOV/MOVSX)
                deps.regs_written.insert(v.dst);
                if (v.op != Bin::Op::MOV && v.op != Bin::Op::MOVSX8 && v.op != Bin::Op::MOVSX16 &&
                    v.op != Bin::Op::MOVSX32) {
                    // Non-move ops also read dst
                    deps.regs_read.insert(v.dst);
                }
                if (const auto* reg = std::get_if<Reg>(&v.v)) {
                    deps.regs_read.insert(*reg);
                }
            } else if constexpr (std::is_same_v<T, Un>) {
                // dst = op(dst)
                deps.regs_read.insert(v.dst);
                deps.regs_written.insert(v.dst);
            } else if constexpr (std::is_same_v<T, LoadMapFd> || std::is_same_v<T, LoadMapAddress>) {
                deps.regs_written.insert(v.dst);
            } else if constexpr (std::is_same_v<T, Mem>) {
                deps.regs_read.insert(v.access.basereg);
                if (v.is_load) {
                    // Load: value = *(basereg + offset)
                    if (const auto* dst_reg = std::get_if<Reg>(&v.value)) {
                        deps.regs_written.insert(*dst_reg);
                    }
                    // Track stack read if base is a stack pointer (R10 or derived)
                    if (v.access.basereg.v == R10_STACK_POINTER) {
                        deps.stack_read.insert(v.access.offset);
                    } else if (const auto stack_off = pre_state.get_stack_offset(v.access.basereg)) {
                        // basereg is a stack pointer with known offset from R10
                        // Compute the actual stack slot: stack_off is absolute (e.g., 4096),
                        // convert to relative from R10: relative = offset + (stack_off - EBPF_TOTAL_STACK_SIZE)
                        deps.stack_read.insert(v.access.offset + (*stack_off - EBPF_TOTAL_STACK_SIZE));
                    }
                } else {
                    // Store: *(basereg + offset) = value
                    if (const auto* src_reg = std::get_if<Reg>(&v.value)) {
                        deps.regs_read.insert(*src_reg);
                    }
                    // Track stack write if base is a stack pointer (R10 or derived)
                    if (v.access.basereg.v == R10_STACK_POINTER) {
                        deps.stack_written.insert(v.access.offset);
                    } else if (const auto stack_off = pre_state.get_stack_offset(v.access.basereg)) {
                        deps.stack_written.insert(v.access.offset + (*stack_off - EBPF_TOTAL_STACK_SIZE));
                    }
                }
            } else if constexpr (std::is_same_v<T, Atomic>) {
                deps.regs_read.insert(v.access.basereg);
                deps.regs_read.insert(v.valreg);
                if (v.fetch) {
                    deps.regs_written.insert(v.valreg);
                }
                // Track stack read/write if base is a stack pointer (R10 or derived)
                if (v.access.basereg.v == R10_STACK_POINTER) {
                    deps.stack_read.insert(v.access.offset);
                    deps.stack_written.insert(v.access.offset);
                } else if (const auto stack_off = pre_state.get_stack_offset(v.access.basereg)) {
                    const auto adjusted_off = v.access.offset + (*stack_off - EBPF_TOTAL_STACK_SIZE);
                    deps.stack_read.insert(adjusted_off);
                    deps.stack_written.insert(adjusted_off);
                }
            } else if constexpr (std::is_same_v<T, Call>) {
                // Calls read R1-R5 as arguments, write R0 as return value,
                // and clobber R1-R5 (caller-saved registers are killed).
                // Separate clobbers from writes so backward slicing can:
                // - Stop propagation for post-call uses of R1-R5 (they're killed)
                // - Still trace R1-R5 as read-deps when R0 is relevant-after (args affect return)
                for (uint8_t i = 1; i <= 5; ++i) {
                    deps.regs_read.insert(Reg{i});      // Arguments feed into return value
                    deps.regs_clobbered.insert(Reg{i}); // Killed after call
                }
                deps.regs_written.insert(Reg{0}); // Return value
            } else if constexpr (std::is_same_v<T, CallLocal>) {
                // Local (macro-inlined) calls:
                // - read R1-R5 as arguments
                // - write R0 as return value
                // - clobber R1-R5 (caller-saved)
                // - adjust frame pointer R10 (save/restore callee-saved regs)
                for (uint8_t i = 1; i <= 5; ++i) {
                    deps.regs_read.insert(Reg{i});      // Arguments
                    deps.regs_clobbered.insert(Reg{i}); // Killed after call
                }
                deps.regs_written.insert(Reg{0});  // Return value
                deps.regs_read.insert(Reg{10});    // Frame pointer read
                deps.regs_written.insert(Reg{10}); // Frame pointer adjusted
            } else if constexpr (std::is_same_v<T, Callx>) {
                // Indirect call: reads the function pointer and R1-R5,
                // writes R0, and clobbers R1-R5.
                deps.regs_read.insert(v.func);
                for (uint8_t i = 1; i <= 5; ++i) {
                    deps.regs_read.insert(Reg{i});      // Arguments
                    deps.regs_clobbered.insert(Reg{i}); // Killed after call
                }
                deps.regs_written.insert(Reg{0});
            } else if constexpr (std::is_same_v<T, Exit>) {
                deps.regs_read.insert(Reg{0}); // Return value
                if (!v.stack_frame_prefix.empty()) {
                    // Subprogram return restores callee-saved registers and frame pointer.
                    for (uint8_t i = R6; i <= R9; ++i) {
                        deps.regs_written.insert(Reg{i});
                    }
                    deps.regs_written.insert(Reg{R10_STACK_POINTER});
                }
            } else if constexpr (std::is_same_v<T, Jmp>) {
                if (v.cond) {
                    deps.regs_read.insert(v.cond->left);
                    if (const auto* reg = std::get_if<Reg>(&v.cond->right)) {
                        deps.regs_read.insert(*reg);
                    }
                }
            } else if constexpr (std::is_same_v<T, Assume>) {
                deps.regs_read.insert(v.cond.left);
                if (const auto* reg = std::get_if<Reg>(&v.cond.right)) {
                    deps.regs_read.insert(*reg);
                }
            } else if constexpr (std::is_same_v<T, Packet>) {
                if (v.regoffset) {
                    deps.regs_read.insert(*v.regoffset);
                }
                deps.regs_written.insert(Reg{0}); // Result in R0
                // Packet clobbers caller-saved registers R1-R5 without reading them
                for (uint8_t i = 1; i <= 5; ++i) {
                    deps.regs_clobbered.insert(Reg{i});
                }
            }
            // Undefined and IncrementLoopCounter have no register deps
        },
        ins);

    return deps;
}

/// Extract the registers referenced by an assertion.
/// Used to seed backward slicing from the point of failure.
std::set<Reg> extract_assertion_registers(const Assertion& assertion) {
    return std::visit(
        [](const auto& a) -> std::set<Reg> {
            using T = std::decay_t<decltype(a)>;

            if constexpr (std::is_same_v<T, Comparable>) {
                return {a.r1, a.r2};
            } else if constexpr (std::is_same_v<T, Addable>) {
                return {a.ptr, a.num};
            } else if constexpr (std::is_same_v<T, ValidDivisor>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, ValidAccess>) {
                std::set<Reg> regs{a.reg};
                if (const auto* r = std::get_if<Reg>(&a.width)) {
                    regs.insert(*r);
                }
                return regs;
            } else if constexpr (std::is_same_v<T, ValidStore>) {
                return {a.mem, a.val};
            } else if constexpr (std::is_same_v<T, ValidSize>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, ValidMapKeyValue>) {
                return {a.access_reg, a.map_fd_reg};
            } else if constexpr (std::is_same_v<T, ValidCallbackTarget>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, TypeConstraint>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, FuncConstraint>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, ZeroCtxOffset>) {
                return {a.reg};
            } else if constexpr (std::is_same_v<T, BoundedLoopCount>) {
                // Loop counter, not a register
                return {};
            } else {
                static_assert(always_false_v<T>, "Unhandled Assertion type in extract_assertion_registers");
            }
        },
        assertion);
}

std::vector<FailureSlice> AnalysisResult::compute_failure_slices(const Program& prog, const SliceParams params) const {
    const auto max_steps = params.max_steps;
    const auto max_slices = params.max_slices;
    std::vector<FailureSlice> slices;

    // Find all labels with errors
    for (const auto& [label, inv_pair] : invariants) {
        if (inv_pair.pre.is_bottom()) {
            continue; // Unreachable
        }
        if (!inv_pair.error) {
            continue; // No error here
        }

        // Check if we've reached the max slices limit
        if (max_slices > 0 && slices.size() >= max_slices) {
            break;
        }

        FailureSlice slice{
            .failing_label = label,
            .error = *inv_pair.error,
            .relevance = {},
        };

        // Seed relevant registers from the actual failing assertion.
        // Forward analysis stops at the first failing assertion, which may not be
        // assertions[0]. Replay the checks against the pre-state to identify
        // the actual failing assertion and seed relevance from it.
        RelevantState initial_relevance;
        const auto& assertions = prog.assertions_at(label);
        bool found_failing = false;
        for (const auto& assertion : assertions) {
            if (ebpf_domain_check(inv_pair.pre, assertion, label)) {
                for (const auto& reg : extract_assertion_registers(assertion)) {
                    initial_relevance.registers.insert(reg);
                }
                found_failing = true;
                break;
            }
        }
        // Fallback: if no failing assertion was identified (shouldn't happen),
        // or if the failing assertion has no register deps, aggregate all assertions.
        if (!found_failing || initial_relevance.registers.empty()) {
            for (const auto& assertion : assertions) {
                for (const auto& reg : extract_assertion_registers(assertion)) {
                    initial_relevance.registers.insert(reg);
                }
            }
        }

        // Always include the failing label in the slice, even if no registers were extracted
        // (e.g., BoundedLoopCount has no register deps)

        // `visited` tracks all explored labels for deduplication during backward traversal.
        // `slice_labels` tracks only labels that interact with relevant registers (the output slice).
        std::map<Label, RelevantState> visited;
        std::set<Label> conservative_visited; // Dedup for empty-relevance labels in conservative mode
        std::map<Label, RelevantState> slice_labels;

        // Worklist: (label, relevant_state_after_this_label)
        std::vector<std::pair<Label, RelevantState>> worklist;
        worklist.emplace_back(label, initial_relevance);

        // When the seed has no register/stack deps (e.g., BoundedLoopCount),
        // perform a conservative backward walk so the slice still shows the
        // loop structure and control flow leading to the failure.
        const bool conservative_mode = initial_relevance.registers.empty() && initial_relevance.stack_offsets.empty();

        size_t steps = 0;

        // Hoist the parent lookup for the failing label outside the hot loop;
        // it is invariant and parents_of() may return a temporary.
        const auto parents_of_fail = prog.cfg().parents_of(label);

        while (!worklist.empty() && steps < max_steps) {
            auto [current_label, relevant_after] = worklist.back();
            worklist.pop_back();

            // Skip if nothing is relevant — unless we're in conservative mode
            // (empty-seed assertions like BoundedLoopCount) or this is the failing label.
            if (!conservative_mode && current_label != label && relevant_after.registers.empty() &&
                relevant_after.stack_offsets.empty()) {
                continue;
            }

            // Merge with existing relevance at this label (for deduplication)
            auto& existing = visited[current_label];
            const size_t prev_size = existing.registers.size() + existing.stack_offsets.size();
            existing.registers.insert(relevant_after.registers.begin(), relevant_after.registers.end());
            existing.stack_offsets.insert(relevant_after.stack_offsets.begin(), relevant_after.stack_offsets.end());
            const size_t new_size = existing.registers.size() + existing.stack_offsets.size();

            // If no new relevance was added, skip (already processed with same or broader relevance).
            // In conservative mode with empty relevance, use a separate visited set for dedup.
            if (new_size == prev_size) {
                if (prev_size > 0) {
                    continue;
                }
                // Empty relevance (conservative mode): skip if we already visited this label
                if (!conservative_visited.insert(current_label).second) {
                    continue;
                }
            }

            // Compute what's relevant BEFORE this instruction using deps
            RelevantState relevant_before;
            const auto inv_it = invariants.find(current_label);
            if (inv_it != invariants.end() && inv_it->second.deps) {
                const auto& deps = *inv_it->second.deps;

                // Start with what's relevant after
                relevant_before = relevant_after;

                // Remove registers that are written by this instruction
                // (they weren't relevant before their definition)
                for (const auto& reg : deps.regs_written) {
                    relevant_before.registers.erase(reg);
                }

                // Remove registers that are clobbered (killed without reading).
                // These stop propagation for post-instruction uses but don't add read-deps.
                for (const auto& reg : deps.regs_clobbered) {
                    relevant_before.registers.erase(reg);
                }

                // Determine if this instruction contributes to the slice.
                // An instruction contributes when it writes to a relevant register/stack slot,
                // or when it is a control-flow decision (Jmp/Assume) that reads relevant registers.
                bool instruction_contributes = false;
                for (const auto& reg : deps.regs_written) {
                    if (relevant_after.registers.contains(reg)) {
                        instruction_contributes = true;
                        break;
                    }
                }
                for (const auto& offset : deps.stack_written) {
                    if (relevant_after.stack_offsets.contains(offset)) {
                        instruction_contributes = true;
                        break;
                    }
                }

                // Control-flow instructions (Jmp/Assume) that read relevant registers
                // contribute to the slice because they shape the path to the failure.
                if (!instruction_contributes) {
                    const auto& ins = prog.instruction_at(current_label);
                    if (std::holds_alternative<Jmp>(ins) || std::holds_alternative<Assume>(ins)) {
                        for (const auto& reg : deps.regs_read) {
                            if (relevant_after.registers.contains(reg)) {
                                instruction_contributes = true;
                                break;
                            }
                        }
                    }
                }

                // Immediate path guard: when the current label is a direct predecessor
                // of the failing label and is an Assume instruction, its condition
                // registers are causally relevant — they determine reachability.
                if (std::holds_alternative<Assume>(prog.instruction_at(current_label))) {
                    if (std::find(parents_of_fail.begin(), parents_of_fail.end(), current_label) !=
                        parents_of_fail.end()) {
                        instruction_contributes = true;
                    }
                }

                // At the failing label, the assertion depends on registers the instruction
                // reads (e.g., base pointer r3 in a store). Since stores write to memory
                // not registers, instruction_contributes would be false without this.
                if (current_label == label) {
                    instruction_contributes = true;
                }

                // In conservative mode (empty seed, e.g., BoundedLoopCount), include all
                // reachable labels so the slice shows the loop structure and control flow.
                if (conservative_mode) {
                    instruction_contributes = true;
                }

                if (instruction_contributes) {
                    for (const auto& reg : deps.regs_read) {
                        relevant_before.registers.insert(reg);
                    }
                    for (const auto& offset : deps.stack_read) {
                        relevant_before.stack_offsets.insert(offset);
                    }
                }

                // Remove stack locations that are written by this instruction,
                // but preserve offsets that are also read (read-modify-write, e.g., Atomic).
                // Done before storing to slice_labels for consistency with register handling
                // (written registers are removed before storage at lines 476-478).
                for (const auto& offset : deps.stack_written) {
                    if (!deps.stack_read.contains(offset)) {
                        relevant_before.stack_offsets.erase(offset);
                    }
                }

                if (instruction_contributes) {
                    // Only include contributing labels in the output slice.
                    // Store relevant_before so pre-invariant filtering shows the
                    // instruction's read-deps (the true upstream dependencies).
                    slice_labels[current_label] = relevant_before;
                }
            } else {
                // No deps available: conservatively treat this label as contributing
                // and propagate all current relevance to predecessors.
                relevant_before = relevant_after;
                slice_labels[current_label] = relevant_before;
            }

            // Add predecessors to worklist
            for (const auto& parent : prog.cfg().parents_of(current_label)) {
                worklist.emplace_back(parent, relevant_before);
            }

            ++steps;
        }

        // Expand join points: for any traversed label that is a join point
        // (≥2 predecessors) where at least one predecessor is already in the slice,
        // add the join-point label itself and all predecessors that have invariants.
        // This ensures the causal trace shows converging paths that cause precision loss.
        // Note: predecessors may not be in `visited` if the worklist budget was exhausted
        // before reaching them, so we check the invariant map directly.
        std::map<Label, RelevantState> join_expansion;
        for (const auto& [v_label, v_relevance] : visited) {
            const auto& parents = prog.cfg().parents_of(v_label);
            if (parents.size() < 2) {
                continue;
            }
            // Check that at least one predecessor is in the slice (this join is relevant)
            bool has_slice_parent = false;
            for (const auto& parent : parents) {
                if (slice_labels.contains(parent)) {
                    has_slice_parent = true;
                    break;
                }
            }
            if (!has_slice_parent) {
                continue;
            }
            // Include the join-point label itself so the printing code can display
            // per-predecessor state at this join.
            if (!slice_labels.contains(v_label)) {
                join_expansion[v_label] = v_relevance;
            }
            // Include all predecessors so the join display is complete.
            // Use visited relevance if available, otherwise use the join-point's relevance.
            for (const auto& parent : parents) {
                if (slice_labels.contains(parent) || join_expansion.contains(parent)) {
                    continue;
                }
                const auto& rel = visited.contains(parent) ? visited.at(parent) : v_relevance;
                join_expansion[parent] = rel;
            }
        }
        slice_labels.insert(join_expansion.begin(), join_expansion.end());

        // Build the slice from contributing labels only
        slice.relevance = std::move(slice_labels);
        slices.push_back(std::move(slice));
    }

    return slices;
}

} // namespace prevail