JuliaLang / julia / #38002
06 Feb 2025 06:14AM UTC coverage: 20.322% (-2.4%) from 22.722%
Build #38002 (push · local · web-flow)

bpart: Fully switch to partitioned semantics (#57253)

This is the final PR in the binding partitions series (modulo bugs and
tweaks), i.e. it closes #54654 and thus closes #40399, which was the
original design sketch.

This activates the full designed semantics for binding partitions, in
particular allowing safe replacement of const bindings, which also
makes struct redefinitions work. This closes timholy/Revise.jl#18 and
also closes #38584.
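
As a rough illustration (not taken from the PR; assuming a Julia build
that includes it), this is the kind of top-level replacement that
becomes well-defined:

```julia
const LIMIT = 10
const LIMIT = 20   # previously a "redefinition of constant" warning with
                   # undefined behavior; now a proper replacement of the binding

struct Point
    x::Int
end

struct Point       # previously "invalid redefinition of constant Point";
    x::Float64     # now `Point` is rebound to the new type, while code compiled
end                # against the old definition should keep seeing the old one
```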

The biggest semantic change here is probably that this gets rid of the
notion of "resolvedness" of a binding. Previously, a lot of the behavior
of our implementation depended on when bindings were "resolved", which
could happen at basically an arbitrary point (in the compiler, in REPL
completion, in a different thread), making a lot of the semantics around
bindings ill-defined or at least implementation-defined. There are
several related issues in the bugtracker, so this
closes #14055, closes #44604, closes #46354, and closes #30277.
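
For a hypothetical illustration of the ordering dependence being removed
(not from the PR; `Base.isbindingresolved` is the pre-existing reflection
query for resolvedness):

```julia
module A
export x
x = 1
end

module M
using ..A    # `x` becomes available in M only as an implicit binding
end

# Under the old model this could flip from `false` to `true` simply because
# the compiler, REPL tab-completion, or another thread happened to touch
# `M.x` first, even though nothing in M's own code changed.
Base.isbindingresolved(M, :x)
```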

It is also the last step to close #24569.
It also supports undef->defined transitions for bindings and thus
closes #53958 and closes #54733; however, this is not yet activated for
performance reasons and may need some further optimization.

Since resolvedness no longer exists, we need to replace it with some
hopefully better-defined semantics. I will describe these semantics
below, but before I do, two notes:

1. There are a number of cases where these semantics will behave
slightly differently than the old semantics, absent some other task
going around resolving random bindings.
2. The new behavior (except for the replacement functionality) was
generally permissible under the old semantics if the bindings happened
to be resolved at the right time.

With all that said, there are essentially three "strengths" of bindings:

1. Implicit Bindings: Anything implicitly obtained from `using Mod`, "no
binding", plus slightly more exotic corner cases around conflicts

2. Weakly declared bindin... (continued)

11 of 111 new or added lines in 7 files covered. (9.91%)

1273 existing lines in 68 files now uncovered.

9908 of 48755 relevant lines covered (20.32%)

105126.48 hits per line

Source File: /base/genericmemory.jl (44.72% covered)

# This file is a part of Julia. License is MIT: https://julialang.org/license

## genericmemory.jl: Managed Memory

"""
    GenericMemory{kind::Symbol, T, addrspace=Core.CPU} <: DenseVector{T}

Fixed-size [`DenseVector{T}`](@ref DenseVector).

`kind` can currently be either `:not_atomic` or `:atomic`. For details on what `:atomic` implies, see [`AtomicMemory`](@ref)

`addrspace` can currently only be set to `Core.CPU`. It is designed to permit extension by other systems such as GPUs, which might define values such as:
```julia
module CUDA
const Generic = bitcast(Core.AddrSpace{CUDA}, 0)
const Global = bitcast(Core.AddrSpace{CUDA}, 1)
end
```
The exact semantics of these other addrspaces is defined by the specific backend, but will error if the user is attempting to access these on the CPU.

!!! compat "Julia 1.11"
    This type requires Julia 1.11 or later.
"""
GenericMemory

"""
    Memory{T} == GenericMemory{:not_atomic, T, Core.CPU}

Fixed-size [`DenseVector{T}`](@ref DenseVector).

!!! compat "Julia 1.11"
    This type requires Julia 1.11 or later.
"""
Memory

"""
    AtomicMemory{T} == GenericMemory{:atomic, T, Core.CPU}

Fixed-size [`DenseVector{T}`](@ref DenseVector).
Fetching of any of its individual elements is performed atomically
(with `:monotonic` ordering by default).

!!! warning
    The access to `AtomicMemory` must be done by either using the [`@atomic`](@ref)
    macro or the lower level interface functions: `Base.getindex_atomic`,
    `Base.setindex_atomic!`, `Base.setindexonce_atomic!`,
    `Base.swapindex_atomic!`, `Base.modifyindex_atomic!`, and `Base.replaceindex_atomic!`.

For details, see [Atomic Operations](@ref man-atomic-operations) as well as macros
[`@atomic`](@ref), [`@atomiconce`](@ref), [`@atomicswap`](@ref), and [`@atomicreplace`](@ref).

!!! compat "Julia 1.11"
    This type requires Julia 1.11 or later.

!!! compat "Julia 1.12"
    Lower level interface functions or `@atomic` macro requires Julia 1.12 or later.
"""
AtomicMemory

## Basic functions ##

using Core: memoryrefoffset, memoryref_isassigned # import more functions which were not essential

size(a::GenericMemory, d::Int) =
    d < 1 ? error("dimension out of range") :
    d == 1 ? length(a) :
    1
size(a::GenericMemory, d::Integer) =  size(a, convert(Int, d))
size(a::GenericMemory) = (length(a),)

IndexStyle(::Type{<:GenericMemory}) = IndexLinear()

parent(ref::GenericMemoryRef) = ref.mem

pointer(mem::GenericMemoryRef) = unsafe_convert(Ptr{Cvoid}, mem) # no bounds check, even for empty array

_unsetindex!(A::Memory, i::Int) =  (@_propagate_inbounds_meta; _unsetindex!(memoryref(A, i)); A)
function _unsetindex!(A::MemoryRef{T}) where T
    @_terminates_locally_meta
    @_propagate_inbounds_meta
    @inline
    @boundscheck memoryref(A, 1)
    mem = A.mem
    MemT = typeof(mem)
    arrayelem = datatype_arrayelem(MemT)
    elsz = datatype_layoutsize(MemT)
    isbits = 0; isboxed = 1; isunion = 2
    arrayelem == isbits && datatype_pointerfree(T::DataType) && return A
    t = @_gc_preserve_begin mem
    p = Ptr{Ptr{Cvoid}}(@inbounds pointer(A))
    if arrayelem == isboxed
        Intrinsics.atomic_pointerset(p, C_NULL, :monotonic)
    elseif arrayelem != isunion
        for j = 1:Core.sizeof(Ptr{Cvoid}):elsz
            # XXX: this violates memory ordering, since it writes more than one C_NULL to each
            Intrinsics.atomic_pointerset(p + j - 1, C_NULL, :monotonic)
        end
    end
    @_gc_preserve_end t
    return A
end

elsize(@nospecialize _::Type{A}) where {T,A<:GenericMemory{<:Any,T}} = aligned_sizeof(T) # XXX: probably supposed to be the stride?
sizeof(a::GenericMemory) = Core.sizeof(a)

# multi arg case will be overwritten later. This is needed for bootstrapping
function isassigned(a::GenericMemory, i::Int)
    @inline
    @boundscheck (i - 1)%UInt < length(a)%UInt || return false
    return @inbounds memoryref_isassigned(memoryref(a, i), default_access_order(a), false)
end

isassigned(a::GenericMemoryRef) = memoryref_isassigned(a, default_access_order(a), @_boundscheck)

## copy ##
function unsafe_copyto!(dest::MemoryRef{T}, src::MemoryRef{T}, n) where {T}
    @_terminates_globally_notaskstate_meta
    n == 0 && return dest
    @boundscheck memoryref(dest, n), memoryref(src, n)
    if isbitstype(T)
        tdest = @_gc_preserve_begin dest
        tsrc = @_gc_preserve_begin src
        pdest = unsafe_convert(Ptr{Cvoid}, dest)
        psrc = unsafe_convert(Ptr{Cvoid}, src)
        memmove(pdest, psrc, aligned_sizeof(T) * n)
        @_gc_preserve_end tdest
        @_gc_preserve_end tsrc
    else
        ccall(:jl_genericmemory_copyto, Cvoid, (Any, Ptr{Cvoid}, Any, Ptr{Cvoid}, Int), dest.mem, dest.ptr_or_offset, src.mem, src.ptr_or_offset, Int(n))
    end
    return dest
end

function unsafe_copyto!(dest::GenericMemoryRef, src::GenericMemoryRef, n)
    n == 0 && return dest
    @boundscheck memoryref(dest, n), memoryref(src, n)
    unsafe_copyto!(dest.mem, memoryrefoffset(dest), src.mem, memoryrefoffset(src), n)
    return dest
end

function unsafe_copyto!(dest::Memory{T}, doffs, src::Memory{T}, soffs, n) where{T}
    n == 0 && return dest
    unsafe_copyto!(memoryref(dest, doffs), memoryref(src, soffs), n)
    return dest
end

#fallback method when types don't match
function unsafe_copyto!(dest::Memory, doffs, src::Memory, soffs, n)
    @_terminates_locally_meta
    n == 0 && return dest
    # use pointer math to determine if they are deemed to alias
    destp = pointer(dest, doffs)
    srcp = pointer(src, soffs)
    endp = pointer(src, soffs + n - 1)
    @inbounds if destp < srcp || destp > endp
        for i = 1:n
            if isassigned(src, soffs + i - 1)
                dest[doffs + i - 1] = src[soffs + i - 1]
            else
                _unsetindex!(dest, doffs + i - 1)
            end
        end
    else
        for i = n:-1:1
            if isassigned(src, soffs + i - 1)
                dest[doffs + i - 1] = src[soffs + i - 1]
            else
                _unsetindex!(dest, doffs + i - 1)
            end
        end
    end
    return dest
end

function copy(a::T) where {T<:Memory}
    # `copy` only throws when the size exceeds the max allocation size,
    # but since we're copying an existing array, we're guaranteed that this will not happen.
    @_nothrow_meta
    newmem = T(undef, length(a))
    @inbounds unsafe_copyto!(newmem, 1, a, 1, length(a))
end

copyto!(dest::Memory, src::Memory) = copyto!(dest, 1, src, 1, length(src))
function copyto!(dest::Memory, doffs::Integer, src::Memory, soffs::Integer, n::Integer)
    n < 0 && _throw_argerror("Number of elements to copy must be non-negative.")
    unsafe_copyto!(dest, doffs, src, soffs, n)
    return dest
end


## Constructors ##

similar(a::GenericMemory) =
    typeof(a)(undef, length(a))
similar(a::GenericMemory{kind,<:Any,AS}, T::Type) where {kind,AS} =
    GenericMemory{kind,T,AS}(undef, length(a))
similar(a::GenericMemory, m::Int) =
    typeof(a)(undef, m)
similar(a::GenericMemory{kind,<:Any,AS}, T::Type, dims::Dims{1}) where {kind,AS} =
    GenericMemory{kind,T,AS}(undef, dims[1])
similar(a::GenericMemory, dims::Dims{1}) =
    typeof(a)(undef, dims[1])

function fill!(a::Union{Memory{UInt8}, Memory{Int8}}, x::Integer)
    t = @_gc_preserve_begin a
    p = unsafe_convert(Ptr{Cvoid}, a)
    T = eltype(a)
    memset(p, x isa T ? x : convert(T, x), length(a) % UInt)
    @_gc_preserve_end t
    return a
end

## Conversions ##

convert(::Type{T}, a::AbstractArray) where {T<:Memory} = a isa T ? a : T(a)::T

promote_rule(a::Type{Memory{T}}, b::Type{Memory{S}}) where {T,S} = el_same(promote_type(T,S), a, b)

## Constructors ##

# constructors should make copies
Memory{T}(x::AbstractArray{S,1}) where {T,S} = copyto_axcheck!(Memory{T}(undef, size(x)), x)

## copying iterators to containers

## Iteration ##

iterate(A::Memory, i=1) = (@inline; (i - 1)%UInt < length(A)%UInt ? (@inbounds A[i], i + 1) : nothing)

## Indexing: getindex ##

# Faster contiguous indexing using copyto! for AbstractUnitRange and Colon
function getindex(A::Memory, I::AbstractUnitRange{<:Integer})
    @inline
    @boundscheck checkbounds(A, I)
    lI = length(I)
    X = similar(A, axes(I))
    if lI > 0
        copyto!(X, firstindex(X), A, first(I), lI)
    end
    return X
end

# getindex for carrying out logical indexing for AbstractUnitRange{Bool} as Bool <: Integer
getindex(a::Memory, r::AbstractUnitRange{Bool}) = getindex(a, to_index(r))

getindex(A::Memory, c::Colon) = copy(A)

## Indexing: setindex! ##

function _setindex!(A::Memory{T}, x::T, i1::Int) where {T}
    ref = memoryrefnew(memoryref(A), i1, @_boundscheck)
    memoryrefset!(ref, x, :not_atomic, @_boundscheck)
    return A
end

function setindex!(A::Memory{T}, x, i1::Int) where {T}
    @_propagate_inbounds_meta
    val = x isa T ? x : convert(T,x)::T
    return _setindex!(A, val, i1)
end

function setindex!(A::Memory{T}, x, i1::Int, i2::Int, I::Int...) where {T}
    @inline
    @boundscheck (i2 == 1 && all(==(1), I)) || throw_boundserror(A, (i1, i2, I...))
    setindex!(A, x, i1)
end

# Faster contiguous setindex! with copyto!
function setindex!(A::Memory{T}, X::Memory{T}, I::AbstractUnitRange{Int}) where T
    @inline
    @boundscheck checkbounds(A, I)
    lI = length(I)
    @boundscheck setindex_shape_check(X, lI)
    if lI > 0
        unsafe_copyto!(A, first(I), X, 1, lI)
    end
    return A
end
function setindex!(A::Memory{T}, X::Memory{T}, c::Colon) where T
    @inline
    lI = length(A)
    @boundscheck setindex_shape_check(X, lI)
    if lI > 0
        unsafe_copyto!(A, 1, X, 1, lI)
    end
    return A
end

# use memcmp for cmp on byte arrays
function cmp(a::Memory{UInt8}, b::Memory{UInt8})
    ta = @_gc_preserve_begin a
    tb = @_gc_preserve_begin b
    pa = unsafe_convert(Ptr{Cvoid}, a)
    pb = unsafe_convert(Ptr{Cvoid}, b)
    c = memcmp(pa, pb, min(length(a),length(b)))
    @_gc_preserve_end ta
    @_gc_preserve_end tb
    return c < 0 ? -1 : c > 0 ? +1 : cmp(length(a),length(b))
end

const BitIntegerMemory{N} = Union{map(T->Memory{T}, BitInteger_types)...}
# use memcmp for == on bit integer types
function ==(a::M, b::M) where {M <: BitIntegerMemory}
    if length(a) == length(b)
        ta = @_gc_preserve_begin a
        tb = @_gc_preserve_begin b
        pa = unsafe_convert(Ptr{Cvoid}, a)
        pb = unsafe_convert(Ptr{Cvoid}, b)
        c = memcmp(pa, pb, sizeof(eltype(M)) * length(a))
        @_gc_preserve_end ta
        @_gc_preserve_end tb
        return c == 0
    else
        return false
    end
end

function findall(pred::Fix2{typeof(in),<:Union{Memory{<:Real},Real}}, x::Memory{<:Real})
    if issorted(x, Sort.Forward) && issorted(pred.x, Sort.Forward)
        return _sortedfindin(x, pred.x)
    else
        return _findin(x, pred.x)
    end
end

# Copying subregions
function indcopy(sz::Dims, I::GenericMemory)
    n = length(I)
    s = sz[n]
    for i = n+1:length(sz)
        s *= sz[i]
    end
    dst = eltype(I)[_findin(I[i], i < n ? (1:sz[i]) : (1:s)) for i = 1:n]
    src = eltype(I)[I[i][_findin(I[i], i < n ? (1:sz[i]) : (1:s))] for i = 1:n]
    dst, src
end

# get, set(once), modify, swap and replace at index, atomically
function getindex_atomic(mem::GenericMemory, order::Symbol, i::Int)
    @_propagate_inbounds_meta
    memref = memoryref(mem, i)
    return memoryrefget(memref, order, @_boundscheck)
end

function setindex_atomic!(mem::GenericMemory, order::Symbol, val, i::Int)
    @_propagate_inbounds_meta
    T = eltype(mem)
    memref = memoryref(mem, i)
    return memoryrefset!(
        memref,
        val isa T ? val : convert(T, val)::T,
        order,
        @_boundscheck
    )
end

function setindexonce_atomic!(
    mem::GenericMemory,
    success_order::Symbol,
    fail_order::Symbol,
    val,
    i::Int,
)
    @_propagate_inbounds_meta
    T = eltype(mem)
    memref = memoryref(mem, i)
    return Core.memoryrefsetonce!(
        memref,
        val isa T ? val : convert(T, val)::T,
        success_order,
        fail_order,
        @_boundscheck
    )
end

function modifyindex_atomic!(mem::GenericMemory, order::Symbol, op, val, i::Int)
    @_propagate_inbounds_meta
    memref = memoryref(mem, i)
    return Core.memoryrefmodify!(memref, op, val, order, @_boundscheck)
end

function swapindex_atomic!(mem::GenericMemory, order::Symbol, val, i::Int)
    @_propagate_inbounds_meta
    T = eltype(mem)
    memref = memoryref(mem, i)
    return Core.memoryrefswap!(
        memref,
        val isa T ? val : convert(T, val)::T,
        order,
        @_boundscheck
    )
end

function replaceindex_atomic!(
    mem::GenericMemory,
    success_order::Symbol,
    fail_order::Symbol,
    expected,
    desired,
    i::Int,
)
    @_propagate_inbounds_meta
    T = eltype(mem)
    memref = memoryref(mem, i)
    return Core.memoryrefreplace!(
        memref,
        expected,
        desired isa T ? desired : convert(T, desired)::T,
        success_order,
        fail_order,
        @_boundscheck,
    )
end
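
Following the `AtomicMemory` docstring above, a minimal usage sketch of the lower-level atomic accessors defined in this file (argument order as declared here; requires Julia 1.12 or later):

```julia
mem = AtomicMemory{Int}(undef, 2)

# setindex_atomic!(mem, order, val, i): atomic stores
Base.setindex_atomic!(mem, :monotonic, 0, 1)
Base.setindex_atomic!(mem, :monotonic, 1, 2)

# getindex_atomic(mem, order, i): atomic load
x = Base.getindex_atomic(mem, :monotonic, 1)

# modifyindex_atomic!(mem, order, op, val, i): atomic read-modify-write
Base.modifyindex_atomic!(mem, :monotonic, +, 1, 1)

# replaceindex_atomic!(mem, success_order, fail_order, expected, desired, i): compare-and-swap
Base.replaceindex_atomic!(mem, :monotonic, :monotonic, 1, 42, 2)
```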