
JuliaLang / julia / #38002

06 Feb 2025 06:14AM UTC coverage: 20.322% (-2.4%) from 22.722%

Build #38002 · push · local · web-flow
bpart: Fully switch to partitioned semantics (#57253)

This is the final PR in the binding partitions series (modulo bugs and
tweaks), i.e. it closes #54654 and thus closes #40399, which was the
original design sketch.

This activates the full designed semantics for binding partitions, in
particular allowing safe replacement of `const` bindings and, as a
consequence, struct redefinitions. It thus closes timholy/Revise.jl#18
and closes #38584.

The biggest semantic change here is probably that this gets rid of the
notion of "resolvedness" of a binding. Previously, much of our
implementation's behavior depended on when bindings were "resolved",
which could happen at essentially an arbitrary point (in the compiler,
in REPL completion, in a different thread), making many of the semantics
around bindings ill-defined, or at best implementation-defined. There
are several related issues in the bugtracker, so this closes #14055,
closes #44604, closes #46354 and closes #30277.

It is also the last step to close #24569.
It also supports bindings for undef->defined transitions and thus
closes #53958 and closes #54733; however, this is not yet activated for
performance reasons and may need further optimization.

Since resolvedness no longer exists, we need to replace it with
semantics that are hopefully better defined. I will describe those
semantics below, but first two notes:

1. There are a number of cases where these semantics behave slightly
differently from the old semantics, absent some other task going around
resolving random bindings.
2. The new behavior (except for the replacement functionality) was
generally permissible under the old semantics if the bindings happened
to be resolved at the right time.

With all that said, there are essentially three "strengths" of bindings:

1. Implicit Bindings: Anything implicitly obtained from `using Mod`, "no
binding", plus slightly more exotic corner cases around conflicts

2. Weakly declared bindin... (continued)
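As a sketch of the headline behavior this PR enables (assuming a Julia build that includes it, i.e. 1.12 or later), a `const` binding can now be replaced safely, and code compiled against the old partition is invalidated and recompiled against the new one:

```julia
# Hypothetical top-level session; requires partitioned binding semantics.
const LIMIT = 10
getlimit() = LIMIT    # compiled against the first partition of LIMIT

const LIMIT = 20      # replacement of a const binding is now well-defined
getlimit()            # invalidated and recompiled: sees the new value, 20
```

The same mechanism is what makes top-level struct redefinition possible, since a struct definition is just a `const` binding of the type name.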

11 of 111 new or added lines in 7 files covered. (9.91%)

1273 existing lines in 68 files now uncovered.

9908 of 48755 relevant lines covered (20.32%)

105126.48 hits per line

Source File: /base/bitset.jl (46.93% covered)
# This file is a part of Julia. License is MIT: https://julialang.org/license

const Bits = Vector{UInt64}
const CHK0 = zero(UInt64)
const NO_OFFSET = Int === Int64 ? -one(Int) << 60 : -one(Int) << 29
# + NO_OFFSET must be small enough to stay < 0 when added to any offset.
#   An offset is in the range -2^57:2^57 (64-bit architectures)
#   or -2^26:2^26 (32-bit architectures)
# + when the offset is NO_OFFSET, the bits field *must* be empty
# + NO_OFFSET could be made to be > 0, but a negative one allows
#   a small optimization in the in(x, ::BitSet) method
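To make the chunk layout concrete: each stored `Int` `n` lives at bit `n & 63` of chunk `n >> 6`, and `offset` records the index of the first stored chunk. A small sketch, with local `div64`/`mod64` stand-ins for the `_div64`/`_mod64` helpers defined in Base's bitarray code:

```julia
div64(n::Int) = n >> 6   # chunk index (arithmetic shift, so negatives work)
mod64(n::Int) = n & 63   # bit position within the chunk

div64(130), mod64(130)   # (2, 2): 130 is bit 2 of chunk 2
div64(-1), mod64(-1)     # (-1, 63): negative ints map to negative chunk indices
```

The negative-chunk case is why `offset` can itself be negative, and why `NO_OFFSET` must be chosen outside the reachable offset range.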

mutable struct BitSet <: AbstractSet{Int}
    const bits::Vector{UInt64}
    # 1st stored Int equals 64*offset
    offset::Int

    function BitSet()
        a = Vector{UInt64}(undef, 4) # start with some initial space for holding 0:255 without additional allocations later
        setfield!(a, :size, (0,)) # aka `empty!(a)` inlined
        return new(a, NO_OFFSET)
    end
end
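For orientation, the resulting type behaves as a sorted set of machine integers (standard `BitSet` API):

```julia
s = BitSet([3, 1, 200])
push!(s, 64)
collect(s)        # iteration is in sorted order: [1, 3, 64, 200]
200 in s          # true: membership is a single bit test
```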

"""
    BitSet([itr])

Construct a sorted set of `Int`s generated by the given iterable object, or an
empty set. Implemented as a bit string, and therefore designed for dense integer sets.
If the set will be sparse (for example, holding a few
very large integers), use [`Set`](@ref) instead.
"""
BitSet(itr) = union!(BitSet(), itr)

# Special implementation for BitSet, which lacks a fast `length` method.
function union!(s::BitSet, itr)
    for x in itr
        push!(s, x)
    end
    return s
end

@inline intoffset(s::BitSet) = s.offset << 6

empty(s::BitSet, ::Type{Int}=Int) = BitSet()
emptymutable(s::BitSet, ::Type{Int}=Int) = BitSet()

copy(s1::BitSet) = copy!(BitSet(), s1)
copymutable(s::BitSet) = copy(s)

function copy!(dest::BitSet, src::BitSet)
    resize!(dest.bits, length(src.bits))
    copyto!(dest.bits, src.bits)
    dest.offset = src.offset
    dest
end

function sizehint!(s::BitSet, n::Integer; first::Bool=false, shrink::Bool=true)
    sizehint!(s.bits, (n+63) >> 6; first, shrink)
    s
end

function _bits_getindex(b::Bits, n::Int, offset::Int)
    ci = _div64(n) - offset + 1
    1 <= ci <= length(b) || return false
    @inbounds r = (b[ci] & (one(UInt64) << _mod64(n))) != 0
    r
end

function _bits_findnext(b::Bits, start::Int)
    # start is 0-based
    # @assert start >= 0
    _div64(start) + 1 > length(b) && return -1
    ind = unsafe_bitfindnext(b, start+1)
    ind === nothing ? -1 : ind - 1
end

function _bits_findprev(b::Bits, start::Int)
    # start is 0-based
    # @assert start <= 64 * length(b) - 1
    start >= 0 || return -1
    ind = unsafe_bitfindprev(b, start+1)
    ind === nothing ? -1 : ind - 1
end

# An internal function for setting the inclusion bit for a given integer
@inline function _setint!(s::BitSet, idx::Int, b::Bool)
    cidx = _div64(idx)
    len = length(s.bits)
    diff = cidx - s.offset
    if diff >= len
        b || return s # setting a bit to zero outside the set's bits is a no-op

        # we put the following test within one of the two branches,
        # with the NO_OFFSET trick, to avoid having to perform it at
        # each and every call to _setint!
        if s.offset == NO_OFFSET # initialize the offset
            # we assume isempty(s.bits)
            s.offset = cidx
            diff = 0
        end
        _growend0!(s.bits, diff - len + 1)
    elseif diff < 0
        b || return s
        _growbeg0!(s.bits, -diff)
        s.offset += diff
        diff = 0
    end
    _unsafe_bitsetindex!(s.bits, b, diff+1, _mod64(idx))
    s
end
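The three branches of `_setint!` are observable through `push!`: the first insertion initializes `offset`, larger values grow the chunk vector at the end, and smaller values grow it at the front while decreasing `offset`. A sketch:

```julia
s = BitSet()       # bits empty, offset == NO_OFFSET
push!(s, 1000)     # initializes offset to the chunk of 1000
push!(s, 2000)     # diff >= len: grows chunks at the end
push!(s, 0)        # diff < 0: grows at the front, offset decreases
collect(s)         # [0, 1000, 2000]
```

Note that the chunk vector is contiguous, so a set holding 0 and 2000 pays for all chunks in between; this is the "dense integer sets" caveat in the docstring above.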


# An internal function to resize a Bits object and ensure the newly allocated
# elements are zeroed (will become unnecessary if this behavior changes)
@inline function _growend0!(b::Bits, nchunks::Int)
    len = length(b)
    _growend!(b, nchunks)
    for i in len+1:length(b)
        @inbounds b[i] = CHK0 # resize! gives dirty memory
    end
end

@inline function _growbeg0!(b::Bits, nchunks::Int)
    _growbeg!(b, nchunks)
    for i in 1:nchunks
        @inbounds b[i] = CHK0
    end
end

function union!(s::BitSet, r::AbstractUnitRange{<:Integer})
    isempty(r) && return s
    a, b = Int(first(r)), Int(last(r))
    cidxa = _div64(a)
    cidxb = _div64(b)
    if s.offset == NO_OFFSET
        s.offset = cidxa
    end
    len = length(s.bits)
    diffa = cidxa - s.offset
    diffb = cidxb - s.offset

    # grow s.bits as necessary
    if diffb >= len
        _growend0!(s.bits, diffb - len + 1)
    end
    if diffa < 0
        _growbeg0!(s.bits, -diffa)
        s.offset = cidxa # s.offset += diffa
        diffb -= diffa
        diffa = 0
    end

    # update s.bits
    i = _mod64(a)
    j = _mod64(b)
    @inbounds if diffa == diffb
        s.bits[diffa + 1] |= (((~CHK0) >> i) << (i+63-j)) >> (63-j)
    else
        s.bits[diffa + 1] |= ((~CHK0) >> i) << i
        s.bits[diffb + 1] |= (~CHK0  << (63-j)) >> (63-j)
        for n = diffa+1:diffb-1
            s.bits[n+1] = ~CHK0
        end
    end
    s
end
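The mask expression in the `diffa == diffb` branch sets exactly bits `i:j` of a chunk: `~CHK0 >> i` clears the top `i` bits, the left shift by `i+63-j` pushes the surviving run against the top of the word, and the final right shift by `63-j` lands it on bits `i:j`. A standalone check of the formula with plain `UInt64` arithmetic:

```julia
# bits i through j (0-based, i <= j <= 63) set, all others clear
mask(i, j) = (((~UInt64(0)) >> i) << (i + 63 - j)) >> (63 - j)

mask(3, 5) == UInt64(0b111000)   # bits 3, 4, 5
count_ones(mask(10, 20)) == 11   # always j - i + 1 bits set
mask(0, 63) == typemax(UInt64)   # whole chunk
```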

function _matched_map!(f, s1::BitSet, s2::BitSet)
    left_false_is_false = f(false, false) == f(false, true) == false
    right_false_is_false = f(false, false) == f(true, false) == false

    # we must first handle the NO_OFFSET case; we could test for
    # isempty(s1) but it can be costly, so the user has to call
    # empty!(s1) herself before-hand to re-initialize to NO_OFFSET
    if s1.offset == NO_OFFSET
        return left_false_is_false ? s1 : copy!(s1, s2)
    elseif s2.offset == NO_OFFSET
        return right_false_is_false ? empty!(s1) : s1
    end
    s1.offset = _matched_map!(f, s1.bits, s1.offset, s2.bits, s2.offset,
                              left_false_is_false, right_false_is_false)
    s1
end

# An internal function that takes a pure function `f` and maps across two BitArrays
# allowing the lengths and offsets to be different and altering b1 with the result
# WARNING: the assumptions written in the else clauses must hold
function _matched_map!(f, a1::Bits, b1::Int, a2::Bits, b2::Int,
                       left_false_is_false::Bool, right_false_is_false::Bool)
    l1, l2 = length(a1), length(a2)
    bdiff = b2 - b1
    e1, e2 = l1+b1, l2+b2
    ediff = e2 - e1

    # map! over the common indices
    @inbounds for i = max(1, 1+bdiff):min(l1, l2+bdiff)
        a1[i] = f(a1[i], a2[i-bdiff])
    end

    if ediff > 0
        if left_false_is_false
            # We don't need to worry about the trailing bits — they're all false
        else # @assert f(false, x) == x
            _growend!(a1, ediff)
            # if a1 and a2 are not overlapping, we infer implied "false" values from a2
            for outer l1 = l1+1:bdiff
                @inbounds a1[l1] = CHK0
            end
            # update ediff in case l1 was updated
            ediff = e2 - l1 - b1
            # copy actual chunks from a2
            unsafe_copyto!(a1, l1+1, a2, l2+1-ediff, ediff)
            l1 = length(a1)
        end
    elseif ediff < 0
        if right_false_is_false
            # We don't need to worry about the trailing bits — they're all false
            _deleteend!(a1, min(l1, -ediff))
            # no need to update l1, as if bdiff > 0 (case below), then bdiff will
            # be smaller anyway than an updated l1
        else # @assert f(x, false) == x
            # We don't need to worry about the trailing bits — they already have the
            # correct value
        end
    end

    if bdiff < 0
        if left_false_is_false
            # We don't need to worry about the leading bits — they're all false
        else # @assert f(false, x) == x
            _growbeg!(a1, -bdiff)
            # if a1 and a2 are not overlapping, we infer implied "false" values from a2
            for i = l2+1:-bdiff
                @inbounds a1[i] = CHK0
            end
            b1 += bdiff # updated return value

            # copy actual chunks from a2
            unsafe_copyto!(a1, 1, a2, 1, min(-bdiff, l2))
        end
    elseif bdiff > 0
        if right_false_is_false
            # We don't need to worry about the leading bits — they're all false
            _deletebeg!(a1, min(l1, bdiff))
            b1 += bdiff
        else # @assert f(x, false) == x
            # We don't need to worry about the leading bits — they already have the
            # correct value
        end
    end

    b1 # the new offset
end
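`_matched_map!` drives all the binary set operations; the two flags record whether `f` maps an out-of-range (implicitly `false`) chunk on the left or right side to all-`false`, which decides whether non-overlapping chunks can be dropped or must be copied. Its behavior, seen through the public wrappers defined below:

```julia
a = BitSet([1, 2, 3, 100])
b = BitSet([2, 100, 200])
collect(intersect!(copy(a), b))  # f = &: both flags hold, non-overlap dropped -> [2, 100]
collect(setdiff!(copy(a), b))    # f = (p,q) -> p & ~q: only the left flag holds -> [1, 3]
collect(union!(copy(a), b))      # f = |: neither flag holds, chunks merged -> [1, 2, 3, 100, 200]
```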

@inline push!(s::BitSet, n::Integer) = _setint!(s, Int(n), true)

push!(s::BitSet, ns::Integer...) = (for n in ns; push!(s, n); end; s)

@inline pop!(s::BitSet) = pop!(s, last(s))

@inline function pop!(s::BitSet, n::Integer)
    if n in s
        delete!(s, n)
        n
    else
        throw(KeyError(n))
    end
end

@inline function pop!(s::BitSet, n::Integer, default)
    if n in s
        delete!(s, n)
        n
    else
        default
    end
end

@inline _is_convertible_Int(n) = typemin(Int) <= n <= typemax(Int)
@inline delete!(s::BitSet, n::Int) = _setint!(s, n, false)
@inline delete!(s::BitSet, n::Integer) = _is_convertible_Int(n) ? delete!(s, Int(n)) : s

popfirst!(s::BitSet) = pop!(s, first(s))

function empty!(s::BitSet)
    empty!(s.bits)
    s.offset = NO_OFFSET
    s
end

isempty(s::BitSet) = _check0(s.bits, 1, length(s.bits))

# Mathematical set functions: union!, intersect!, setdiff!, symdiff!

union(s::BitSet, sets...) = union!(copy(s), sets...)
union!(s1::BitSet, s2::BitSet) = _matched_map!(|, s1, s2)

intersect(s1::BitSet, s2::BitSet) =
    length(s1.bits) < length(s2.bits) ? intersect!(copy(s1), s2) : intersect!(copy(s2), s1)

intersect!(s1::BitSet, s2::BitSet) = _matched_map!(&, s1, s2)

setdiff!(s1::BitSet, s2::BitSet) = _matched_map!((p, q) -> p & ~q, s1, s2)

function symdiff!(s::BitSet, ns)
    for x in ns
        int_symdiff!(s, x)
    end
    return s
end

function symdiff!(s::BitSet, ns::AbstractSet)
    for x in ns
        int_symdiff!(s, x)
    end
    return s
end

function int_symdiff!(s::BitSet, n::Integer)
    n0 = Int(n)
    val = !(n0 in s)
    _setint!(s, n0, val)
    s
end

symdiff!(s1::BitSet, s2::BitSet) = _matched_map!(xor, s1, s2)

filter!(f, s::BitSet) = unsafe_filter!(f, s)

@inline in(n::Int, s::BitSet) = _bits_getindex(s.bits, n, s.offset)
@inline in(n::Integer, s::BitSet) = _is_convertible_Int(n) ? in(Int(n), s) : false

function iterate(s::BitSet, (word, idx) = (CHK0, 0))
    while word == 0
        idx == length(s.bits) && return nothing
        idx += 1
        word = @inbounds s.bits[idx]
    end
    trailing_zeros(word) + (idx - 1 + s.offset) << 6, (_blsr(word), idx)
end
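`iterate` scans chunk by chunk and, within a chunk, repeatedly extracts the lowest set bit: `trailing_zeros` gives its position and `_blsr(w) = w & (w - 1)` clears it. The inner loop in isolation, with a local `blsr` stand-in for `Base._blsr`:

```julia
blsr(w::UInt64) = w & (w - one(UInt64))  # clear lowest set bit

function setbits(word::UInt64)
    positions = Int[]
    while word != zero(UInt64)
        push!(positions, trailing_zeros(word))  # position of lowest set bit
        word = blsr(word)
    end
    return positions
end

setbits(UInt64(0b101001))   # [0, 3, 5]: bits come out in increasing order
```

This is why iteration over a `BitSet` is always sorted, without any extra bookkeeping.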

@noinline _throw_bitset_notempty_error() =
    throw(ArgumentError("collection must be non-empty"))

function first(s::BitSet)
    idx = _bits_findnext(s.bits, 0)
    idx == -1 ? _throw_bitset_notempty_error() : idx + intoffset(s)
end

function last(s::BitSet)
    idx = _bits_findprev(s.bits, (length(s.bits) << 6) - 1)
    idx == -1 ? _throw_bitset_notempty_error() : idx + intoffset(s)
end

length(s::BitSet) = bitcount(s.bits) # = mapreduce(count_ones, +, s.bits; init=0)

function show(io::IO, s::BitSet)
    print(io, "BitSet([")
    first = true
    for n in s
        !first && print(io, ", ")
        print(io, n)
        first = false
    end
    print(io, "])")
end

function _check0(a::Vector{UInt64}, b::Int, e::Int)
    @inbounds for i in b:e
        a[i] == CHK0 || return false
    end
    true
end

function ==(s1::BitSet, s2::BitSet)
    # Swap so s1 always has the smallest offset
    if s1.offset > s2.offset
        s1, s2 = s2, s1
    end
    a1 = s1.bits
    a2 = s2.bits
    b1, b2 = s1.offset, s2.offset
    l1, l2 = length(a1), length(a2)
    e1 = l1+b1
    overlap0 = max(0, e1 - b2)
    included = overlap0 >= l2  # whether a2's indices are included in a1's
    overlap  = included ? l2 : overlap0

    # Ensure non-overlap chunks are zero (unlikely)
    _check0(a1, 1, l1-overlap0) || return false
    if included
        _check0(a1, b2-b1+l2+1, l1) || return false
    else
        _check0(a2, 1+overlap, l2) || return false
    end

    # compare overlap values
    if overlap > 0
        t1 = @_gc_preserve_begin a1
        t2 = @_gc_preserve_begin a2
        memcmp(pointer(a1, b2-b1+1), pointer(a2), overlap<<3) == 0 || return false
        @_gc_preserve_end t2
        @_gc_preserve_end t1
    end

    return true
end

function issubset(a::BitSet, b::BitSet)
    n = length(a.bits)
    shift = b.offset - a.offset
    i, j = shift, shift + length(b.bits)

    f(a, b) = a == a & b
    return (
        all(@inbounds iszero(a.bits[i]) for i in 1:min(n, i)) &&
        all(@inbounds f(a.bits[i], b.bits[i - shift]) for i in max(1, i+1):min(n, j)) &&
        all(@inbounds iszero(a.bits[i]) for i in max(1, j+1):n))
end
⊊(a::BitSet, b::BitSet) = a <= b && a != b


minimum(s::BitSet) = first(s)
maximum(s::BitSet) = last(s)
extrema(s::BitSet) = (first(s), last(s))
issorted(s::BitSet) = true
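The comparison and extrema helpers above compose with the rest of the API; a few expected identities:

```julia
a = BitSet([1, 2])
b = BitSet([1, 2, 3])

issubset(a, b)          # true: chunk-wise a == a & b
a ⊊ b                   # true: subset and not equal
extrema(b)              # (1, 3): cheap, via first/last bit scans
issorted(collect(b))    # true by construction, hence issorted(s) = true
```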