
JuliaLang / julia, build 1301 (push · buildkite · web-flow)

09 Oct 2025 08:00PM UTC coverage: 75.842% (-0.9%) from 76.72%
ccall: make distinction of pointer vs name a syntactic distinction (#59165)

We have long expected users to be explicit about the library name for
`ccall`, and the `@ccall` macro has always enforced that. Users should
therefore already have been using the explicit syntax, even though it
wasn't strictly enforced. Indeed, the other syntax forms weren't
handled reliably anyway, since doing so would require linearizing IR if
and only if the runtime values required it, which is not computable,
and so was often done wrong. This change aligns the runtime and the
compiler to accept only those syntax forms we could reliably handle in
the past without errors, and adds explicit errors for the other cases,
most of which we already knew were unreliable because their semantics
depended on particular inference decisions. The `ccall` function is
already very special, since it behaves more like an actual macro (it
does not exist as a binding or value), so we can make unusual syntax
decisions like this, mirroring `@ccall`.

This fixes #57931, mostly by restricting the set of allowed forms to
those with an obvious, pre-existing behavior that we can guarantee. The
hope is to run PkgEval on this to check whether any packages are doing
something unusual and whether anything else even needs to be allowed.

This drops support for https://github.com/JuliaLang/julia/pull/37123,
since we were going to use that for LazyLibraries, but we decided that
approach was quite buggy and that PR would have made static compilation
nearly impossible to support, so we instead implemented LazyLibraries
with a different approach. It could be re-enabled, but we never had
correct lowering or inference support for it, so it is presumably still
unused.

The goal is to cause breakage only where the package authors really
failed to express intent with syntax, and otherwise to explicitly
maintain support by adding cases ... (continued)
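The explicit forms described above can be shown with a short, hedged sketch (not taken from the PR itself; `getpid` and `strlen` are ordinary libc symbols chosen purely for illustration, assuming a Unix-like system):

```julia
# Explicit symbol syntax, the style `@ccall` has always required:
pid = @ccall getpid()::Cint

# The equivalent explicit ccall spelling, with a literal symbol name:
len = ccall(:strlen, Csize_t, (Cstring,), "hello")
```

Both forms name the foreign function syntactically, so neither the compiler nor the runtime has to decide dynamically whether a value is a pointer or a name.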

20 of 35 new or added lines in 8 files covered (57.14%).
721 existing lines in 81 files now uncovered.
60487 of 79754 relevant lines covered (75.84%).
7672511.81 hits per line.

Source File

/stdlib/SharedArrays/src/SharedArrays.jl (82.92% covered)
# This file is a part of Julia. License is MIT: https://julialang.org/license

"""
Provide the [`SharedArray`](@ref) type. It represents an array, which is shared across multiple processes, on a single machine.
"""
module SharedArrays

using Mmap, Distributed, Random

import Base: length, size, elsize, ndims, IndexStyle, reshape, convert, deepcopy_internal,
             show, getindex, setindex!, fill!, similar, reduce, map!, copyto!, cconvert
import Base: Array
import Random
using Serialization
using Serialization: serialize_cycle_header, serialize_type, writetag, UNDEFREF_TAG, serialize, deserialize
import Serialization: serialize, deserialize
import Distributed: RRID, procs, remotecall_fetch
import Base.Filesystem: JL_O_CREAT, JL_O_RDWR, S_IRUSR, S_IWUSR

export SharedArray, SharedVector, SharedMatrix, sdata, indexpids, localindices

mutable struct SharedArray{T,N} <: DenseArray{T,N}
    id::RRID
    dims::NTuple{N,Int}
    pids::Vector{Int}
    refs::Vector

    # The segname is currently used only in the test scripts to ensure that
    # the shmem segment has been unlinked.
    segname::String

    # Fields below are not to be serialized
    # Local shmem map.
    s::Array{T,N}

    # idx of current worker's pid in the pids vector, 0 if this shared array is not mapped locally.
    pidx::Int

    # the local partition into the array when viewed as a single dimensional array.
    # this can be removed when @distributed or its equivalent supports looping on
    # a subset of workers.
    loc_subarr_1d::SubArray{T,1,Array{T,1},Tuple{UnitRange{Int}},true}

    function SharedArray{T,N}(d,p,r,sn,s) where {T,N}
        S = new(RRID(),d,p,r,sn,s,0,view(Array{T}(undef, ntuple(d->0,N)), 1:0))
        sa_refs[S.id] = WeakRef(S)
        S
    end
end

const sa_refs = Dict{RRID, WeakRef}()

"""
    SharedArray{T}(dims::NTuple; init=false, pids=Int[])
    SharedArray{T,N}(...)

Construct a `SharedArray` of a bits type `T` and size `dims` across the
processes specified by `pids` - all of which have to be on the same
host.  If `N` is specified by calling `SharedArray{T,N}(dims)`, then
`N` must match the length of `dims`.

If `pids` is left unspecified, the shared array will be mapped across all processes on the
current host, including the master. But, `localindices` and `indexpids` will only refer to
worker processes. This facilitates work distribution code to use workers for actual
computation with the master process acting as a driver.

If an `init` function of the type `initfn(S::SharedArray)` is specified, it is called on all
the participating workers.

The shared array is valid as long as a reference to the `SharedArray` object exists on the node
which created the mapping.

    SharedArray{T}(filename::AbstractString, dims::NTuple, [offset=0]; mode=nothing, init=false, pids=Int[])
    SharedArray{T,N}(...)

Construct a `SharedArray` backed by the file `filename`, with element
type `T` (must be a bits type) and size `dims`, across the processes
specified by `pids` - all of which have to be on the same host. This
file is mmapped into the host memory, with the following consequences:

- The array data must be represented in binary format (e.g., an ASCII
  format like CSV cannot be supported)

- Any changes you make to the array values (e.g., `A[3] = 0`) will
  also change the values on disk

If `pids` is left unspecified, the shared array will be mapped across
all processes on the current host, including the master. But,
`localindices` and `indexpids` will only refer to worker
processes. This facilitates work distribution code to use workers for
actual computation with the master process acting as a driver.

`mode` must be one of `"r"`, `"r+"`, `"w+"`, or `"a+"`, and defaults
to `"r+"` if the file specified by `filename` already exists, or
`"w+"` if not. If an `init` function of the type
`initfn(S::SharedArray)` is specified, it is called on all the
participating workers. You cannot specify an `init` function if the
file is not writable.

`offset` allows you to skip the specified number of bytes at the
beginning of the file.
"""
SharedArray

function SharedArray{T,N}(dims::Dims{N}; init=false, pids=Int[]) where {T,N}
    isbitstype(T) || throw(ArgumentError("type of SharedArray elements must be bits types, got $(T)"))

    pids, onlocalhost = shared_pids(pids)

    local shm_seg_name = ""
    local s = Array{T}(undef, ntuple(d->0,N))
    local S
    local shmmem_create_pid
    try
        # On OSX, the shm_seg_name length must be <= 31 characters (including the terminating NULL character)
        shm_seg_name = "/jl$(lpad(string(getpid() % 10^6), 6, "0"))$(randstring(20))"
        if onlocalhost
            shmmem_create_pid = myid()
            s = shm_mmap_array(T, dims, shm_seg_name, JL_O_CREAT | JL_O_RDWR)
        else
            # The shared array is created on a remote machine
            shmmem_create_pid = pids[1]
            remotecall_fetch(pids[1]) do
                shm_mmap_array(T, dims, shm_seg_name, JL_O_CREAT | JL_O_RDWR)
                nothing
            end
        end

        func_mapshmem = () -> shm_mmap_array(T, dims, shm_seg_name, JL_O_RDWR)

        refs = Vector{Future}(undef, length(pids))
        for (i, p) in enumerate(pids)
            refs[i] = remotecall(func_mapshmem, p)
        end

        # Wait till all the workers have mapped the segment
        for ref in refs
            wait(ref)
        end

        # All good, immediately unlink the segment.
        if (prod(dims) > 0) && (sizeof(T) > 0)
            if onlocalhost
                rc = shm_unlink(shm_seg_name)
            else
                rc = remotecall_fetch(shm_unlink, shmmem_create_pid, shm_seg_name)
            end
            systemerror("Error unlinking shmem segment " * shm_seg_name, rc != 0)
        end
        S = SharedArray{T,N}(dims, pids, refs, shm_seg_name, s)
        initialize_shared_array(S, onlocalhost, init, pids)
        shm_seg_name = ""

    finally
        if !isempty(shm_seg_name)
            remotecall_fetch(shm_unlink, shmmem_create_pid, shm_seg_name)
        end
    end
    S
end

SharedArray{T,N}(I::Integer...; kwargs...) where {T,N} =
    SharedArray{T,N}(I; kwargs...)
SharedArray{T}(d::NTuple; kwargs...) where {T} =
    SharedArray{T,length(d)}(d; kwargs...)
SharedArray{T}(I::Integer...; kwargs...) where {T} =
    SharedArray{T,length(I)}(I; kwargs...)
SharedArray{T}(m::Integer; kwargs...) where {T} =
    SharedArray{T,1}(m; kwargs...)
SharedArray{T}(m::Integer, n::Integer; kwargs...) where {T} =
    SharedArray{T,2}(m, n; kwargs...)
SharedArray{T}(m::Integer, n::Integer, o::Integer; kwargs...) where {T} =
    SharedArray{T,3}(m, n, o; kwargs...)

function SharedArray{T,N}(filename::AbstractString, dims::NTuple{N,Int}, offset::Integer=0;
                          mode=nothing, init=false, pids::Vector{Int}=Int[]) where {T,N}
    if !isabspath(filename)
        throw(ArgumentError("$filename is not an absolute path; try abspath(filename)?"))
    end
    if !isbitstype(T)
        throw(ArgumentError("type of SharedArray elements must be bits types, got $(T)"))
    end

    pids, onlocalhost = shared_pids(pids)

    # If not supplied, determine the appropriate mode
    have_file = onlocalhost ? isfile(filename) : remotecall_fetch(isfile, pids[1], filename)
    if mode === nothing
        mode = have_file ? "r+" : "w+"
    end
    workermode = mode == "w+" ? "r+" : mode  # workers don't truncate!

    # Ensure the file will be readable
    if !(mode in ("r", "r+", "w+", "a+"))
        throw(ArgumentError("mode must be readable, but $mode is not"))
    end
    if init !== false
        typeassert(init, Function)
        if !(mode in ("r+", "w+", "a+"))
            throw(ArgumentError("cannot initialize unwritable array (mode = $mode)"))
        end
    end
    if mode == "r" && !isfile(filename)
        throw(ArgumentError("file $filename does not exist, but mode $mode cannot create it"))
    end

    # Create the file if it doesn't exist, map it if it does
    refs = Vector{Future}(undef, length(pids))
    func_mmap = mode -> open(filename, mode) do io
        mmap(io, Array{T,N}, dims, offset; shared=true)
    end
    s = Array{T}(undef, ntuple(d->0,N))
    if onlocalhost
        s = func_mmap(mode)
        refs[1] = remotecall(pids[1]) do
            func_mmap(workermode)
        end
    else
        refs[1] = remotecall_wait(pids[1]) do
            func_mmap(mode)
        end
    end

    # Populate the rest of the workers
    for i = 2:length(pids)
        refs[i] = remotecall(pids[i]) do
            func_mmap(workermode)
        end
    end

    # Wait till all the workers have mapped the segment
    for ref in refs
        wait(ref)
    end

    S = SharedArray{T,N}(dims, pids, refs, filename, s)
    initialize_shared_array(S, onlocalhost, init, pids)
    S
end

SharedArray{T}(filename::AbstractString, dims::NTuple{N,Int}, offset::Integer=0;
               mode=nothing, init=false, pids::Vector{Int}=Int[]) where {T,N} =
    SharedArray{T,N}(filename, dims, offset; mode=mode, init=init, pids=pids)

function initialize_shared_array(S, onlocalhost, init, pids)
    if onlocalhost
        init_loc_flds(S)
    else
        S.pidx = 0
    end

    # if present, init function is called on each of the parts
    if isa(init, Function)
        @sync begin
            for p in pids
                @async remotecall_wait(init, p, S)
            end
        end
    end

    finalizer(finalize_refs, S)
    S
end

function finalize_refs(S::SharedArray{T,N}) where T where N
    if length(S.pids) > 0
        for r in S.refs
            finalize(r)
        end
        empty!(S.pids)
        empty!(S.refs)
        init_loc_flds(S)
        S.s = Array{T}(undef, ntuple(d->0,N))
        delete!(sa_refs, S.id)
    end
    S
end

"""
    SharedVector

A one-dimensional [`SharedArray`](@ref).
"""
const SharedVector{T} = SharedArray{T,1}
"""
    SharedMatrix

A two-dimensional [`SharedArray`](@ref).
"""
const SharedMatrix{T} = SharedArray{T,2}

SharedVector(A::Vector) = SharedArray(A)
SharedMatrix(A::Matrix) = SharedArray(A)

size(S::SharedArray) = S.dims
elsize(::Type{SharedArray{T,N}}) where {T,N} = elsize(Array{T,N}) # aka fieldtype(T, :s)
IndexStyle(::Type{<:SharedArray}) = IndexLinear()

function local_array_by_id(refid)
    if isa(refid, Future)
        refid = remoteref_id(refid)
    end
    fetch(channel_from_id(refid))
end

function reshape(a::SharedArray{T}, dims::NTuple{N,Int}) where {T,N}
    if length(a) != prod(dims)
        throw(DimensionMismatch("dimensions must be consistent with array size"))
    end
    refs = Vector{Future}(undef, length(a.pids))
    for (i, p) in enumerate(a.pids)
        refs[i] = remotecall(p, a.refs[i], dims) do r, d
            reshape(local_array_by_id(r), d)
        end
    end

    A = SharedArray{T,N}(dims, a.pids, refs, a.segname, reshape(a.s, dims))
    init_loc_flds(A)
    A
end

"""
    procs(S::SharedArray)

Get the vector of processes mapping the shared array.
"""
procs(S::SharedArray) = S.pids

"""
    indexpids(S::SharedArray)

Return the current worker's index in the list of workers
mapping the `SharedArray` (i.e. in the same list returned by `procs(S)`), or
0 if the `SharedArray` is not mapped locally.
"""
indexpids(S::SharedArray) = S.pidx

"""
    sdata(S::SharedArray)

Return the actual `Array` object backing `S`.
"""
sdata(S::SharedArray) = S.s
sdata(A::AbstractArray) = A

"""
    localindices(S::SharedArray)

Return a range describing the "default" indices to be handled by the
current process.  This range should be interpreted in the sense of
linear indexing, i.e., as a sub-range of `1:length(S)`.  In
multi-process contexts, returns an empty range in the parent process
(or any process for which [`indexpids`](@ref) returns 0).

It's worth emphasizing that `localindices` exists purely as a
convenience, and you can partition work on the array among workers any
way you wish. For a `SharedArray`, all indices should be equally fast
for each worker process.
"""
localindices(S::SharedArray) = S.pidx > 0 ? range_1dim(S, S.pidx) : 1:0

cconvert(::Type{Ptr{T}}, S::SharedArray{T}) where {T} = cconvert(Ptr{T}, sdata(S))
cconvert(::Type{Ptr{T}}, S::SharedArray   ) where {T} = cconvert(Ptr{T}, sdata(S))

function SharedArray(A::Array)
    S = SharedArray{eltype(A),ndims(A)}(size(A))
    copyto!(S, A)
end
function SharedArray{T}(A::Array) where T
    S = SharedArray{T,ndims(A)}(size(A))
    copyto!(S, A)
end
function SharedArray{TS,N}(A::Array{TA,N}) where {TS,TA,N}
    S = SharedArray{TS,ndims(A)}(size(A))
    copyto!(S, A)
end

convert(T::Type{<:SharedArray}, a::Array) = T(a)::T

function deepcopy_internal(S::SharedArray, stackdict::IdDict)
    haskey(stackdict, S) && return stackdict[S]
    R = SharedArray{eltype(S),ndims(S)}(size(S); pids = S.pids)
    copyto!(sdata(R), sdata(S))
    stackdict[S] = R
    return R
end

function shared_pids(pids)
    if isempty(pids)
        # only use workers on the current host
        pids = procs(myid())
        if length(pids) > 1
            pids = filter(!=(1), pids)
        end

        onlocalhost = true
    else
        if !check_same_host(pids)
            throw(ArgumentError("SharedArray requires all requested processes to be on the same machine."))
        end

        onlocalhost = myid() in procs(pids[1])
    end
    pids, onlocalhost
end

function range_1dim(S::SharedArray, pidx)
    l = length(S)
    nw = length(S.pids)
    partlen = div(l, nw)

    if l < nw
        if pidx <= l
            return pidx:pidx
        else
            return 1:0
        end
    elseif pidx == nw
        return (((pidx-1) * partlen) + 1):l
    else
        return (((pidx-1) * partlen) + 1):(pidx*partlen)
    end
end

sub_1dim(S::SharedArray, pidx) = view(S.s, range_1dim(S, pidx))

function init_loc_flds(S::SharedArray{T,N}, empty_local=false) where T where N
    if myid() in S.pids
        S.pidx = findfirst(isequal(myid()), S.pids)
        S.s = local_array_by_id(S.refs[S.pidx])
        S.loc_subarr_1d = sub_1dim(S, S.pidx)
    else
        S.pidx = 0
        if empty_local
            S.s = Array{T}(undef, ntuple(d->0,N))
        end
        S.loc_subarr_1d = view(Array{T}(undef, ntuple(d->0,N)), 1:0)
    end
end


# Don't serialize s (it is the complete array) and
# pidx, which is relevant to the current process only
function serialize(s::AbstractSerializer, S::SharedArray)
    serialize_cycle_header(s, S) && return

    destpid = worker_id_from_socket(s.io)
    if S.id.whence == destpid
        # The shared array was created from destpid, hence a reference to it
        # must be available at destpid.
        serialize(s, true)
        serialize(s, S.id.whence)
        serialize(s, S.id.id)
        return
    end
    serialize(s, false)
    for n in fieldnames(SharedArray)
        if n in [:s, :pidx, :loc_subarr_1d]
            writetag(s.io, UNDEFREF_TAG)
        elseif n === :refs
            v = getfield(S, n)
            if isa(v[1], Future)
                # convert to ids to avoid distributed GC overhead
                ids = [remoteref_id(x) for x in v]
                serialize(s, ids)
            else
                serialize(s, v)
            end
        else
            serialize(s, getfield(S, n))
        end
    end
end

function deserialize(s::AbstractSerializer, t::Type{<:SharedArray})
    ref_exists = deserialize(s)
    if ref_exists
        sref = sa_refs[RRID(deserialize(s), deserialize(s))]
        if sref.value !== nothing
            return sref.value
        end
        error("Expected reference to shared array instance not found")
    end

    S = invoke(deserialize, Tuple{AbstractSerializer,DataType}, s, t)
    init_loc_flds(S, true)
    return S
end

function show(io::IO, S::SharedArray)
    if length(S.s) > 0
        invoke(show, Tuple{IO,DenseArray}, io, S)
    else
        show(io, remotecall_fetch(sharr->sharr.s, S.pids[1], S))
    end
end

function show(io::IO, mime::MIME"text/plain", S::SharedArray)
    if length(S.s) > 0
        invoke(show, Tuple{IO,MIME"text/plain",DenseArray}, io, MIME"text/plain"(), S)
    else
        # retrieve from the first worker mapping the array.
        summary(io, S); println(io, ":")
        Base.print_array(io, remotecall_fetch(sharr->sharr.s, S.pids[1], S))
    end
end

Array(S::SharedArray) = S.s

# pass through getindex and setindex! - unlike DArrays, these always work on the complete array
Base.@propagate_inbounds getindex(S::SharedArray, i::Real) = getindex(S.s, i)

Base.@propagate_inbounds setindex!(S::SharedArray, x, i::Real) = setindex!(S.s, x, i)

function fill!(S::SharedArray, v)
    vT = convert(eltype(S), v)
    f = S->fill!(S.loc_subarr_1d, vT)
    @sync for p in procs(S)
        @async remotecall_wait(f, p, S)
    end
    return S
end

function Random.rand!(S::SharedArray{T}) where T
    f = S->map!(x -> rand(T), S.loc_subarr_1d, S.loc_subarr_1d)
    @sync for p in procs(S)
        @async remotecall_wait(f, p, S)
    end
    return S
end

function Random.randn!(S::SharedArray)
    f = S->map!(x -> randn(), S.loc_subarr_1d, S.loc_subarr_1d)
    @sync for p in procs(S)
        @async remotecall_wait(f, p, S)
    end
    return S
end

# convenience constructors
function shmem_fill(v, dims; kwargs...)
    SharedArray{typeof(v),length(dims)}(dims; init = S->fill!(S.loc_subarr_1d, v), kwargs...)
end
shmem_fill(v, I::Int...; kwargs...) = shmem_fill(v, I; kwargs...)

# rand variant with range
function shmem_rand(TR::Union{DataType, UnitRange}, dims; kwargs...)
    if isa(TR, UnitRange)
        SharedArray{Int,length(dims)}(dims; init = S -> map!(x -> rand(TR), S.loc_subarr_1d, S.loc_subarr_1d), kwargs...)
    else
        SharedArray{TR,length(dims)}(dims; init = S -> map!(x -> rand(TR), S.loc_subarr_1d, S.loc_subarr_1d), kwargs...)
    end
end
shmem_rand(TR::Union{DataType, UnitRange}, i::Int; kwargs...) = shmem_rand(TR, (i,); kwargs...)
shmem_rand(TR::Union{DataType, UnitRange}, I::Int...; kwargs...) = shmem_rand(TR, I; kwargs...)

shmem_rand(dims; kwargs...) = shmem_rand(Float64, dims; kwargs...)
shmem_rand(I::Int...; kwargs...) = shmem_rand(I; kwargs...)

function shmem_randn(dims; kwargs...)
    SharedArray{Float64,length(dims)}(dims; init = S-> map!(x -> randn(), S.loc_subarr_1d, S.loc_subarr_1d), kwargs...)
end
shmem_randn(I::Int...; kwargs...) = shmem_randn(I; kwargs...)

similar(S::SharedArray, T::Type, dims::Dims) = similar(S.s, T, dims)
similar(S::SharedArray, T::Type) = similar(S.s, T, size(S))
similar(S::SharedArray, dims::Dims) = similar(S.s, eltype(S), dims)
similar(S::SharedArray) = similar(S.s, eltype(S), size(S))

reduce(f, S::SharedArray) =
    mapreduce(fetch, f, Any[ @spawnat p reduce(f, S.loc_subarr_1d) for p in procs(S) ])

reduce(::typeof(vcat), S::SharedVector) = invoke(reduce, Tuple{Any,SharedArray}, vcat, S)
reduce(::typeof(hcat), S::SharedVector) = invoke(reduce, Tuple{Any,SharedArray}, hcat, S)

function map!(f, S::SharedArray, Q::SharedArray)
    if (S !== Q) && (procs(S) != procs(Q) || localindices(S) != localindices(Q))
        throw(ArgumentError("incompatible source and destination arguments"))
    end
    @sync for p in procs(S)
        @spawnat p begin
            for idx in localindices(S)
                S.s[idx] = f(Q.s[idx])
            end
        end
    end
    return S
end

copyto!(S::SharedArray, A::Array) = (copyto!(S.s, A); S)

function copyto!(S::SharedArray, R::SharedArray)
    length(S) == length(R) || throw(BoundsError())
    ps = intersect(procs(S), procs(R))
    isempty(ps) && throw(ArgumentError("source and destination arrays don't share any process"))
    l = length(S)
    length(ps) > l && (ps = ps[1:l])
    nw = length(ps)
    partlen = div(l, nw)

    @sync for i = 1:nw
        p = ps[i]
        idx = i < nw ? ((i-1)*partlen+1:i*partlen) : ((i-1)*partlen+1:l)
        @spawnat p begin
            S.s[idx] = R.s[idx]
        end
    end

    return S
end

function print_shmem_limits(slen)
    try
        if Sys.islinux()
            pfx = "kernel"
        elseif Sys.isapple()
            pfx = "kern.sysv"
        elseif Sys.KERNEL === :FreeBSD || Sys.KERNEL === :DragonFly
            pfx = "kern.ipc"
        elseif Sys.KERNEL === :OpenBSD
            pfx = "kern.shminfo"
        else
            # seems NetBSD does not have *.shmall
            return
        end

        shmmax_MB = div(parse(Int, split(read(`sysctl $(pfx).shmmax`, String))[end]), 1024*1024)
        page_size = parse(Int, split(read(`getconf PAGE_SIZE`, String))[end])
        shmall_MB = div(parse(Int, split(read(`sysctl $(pfx).shmall`, String))[end]) * page_size, 1024*1024)

        println("System max size of single shmem segment(MB) : ", shmmax_MB,
            "\nSystem max size of all shmem segments(MB) : ", shmall_MB,
            "\nRequested size(MB) : ", div(slen, 1024*1024),
            "\nPlease ensure requested size is within system limits.",
            "\nIf not, increase system limits and try again."
        )
    catch e
        nothing # Ignore any errors in this
    end
end

# utilities
function shm_mmap_array(T, dims, shm_seg_name, mode)
    local s = nothing
    local A = nothing

    if (prod(dims) == 0) || (sizeof(T) == 0)
        return Array{T}(undef, dims)
    end

    try
        A = _shm_mmap_array(T, dims, shm_seg_name, mode)
    catch
        print_shmem_limits(prod(dims)*sizeof(T))
        rethrow()

    finally
        if s !== nothing
            close(s)
        end
    end
    A
end


# platform-specific code

if Sys.iswindows()
function _shm_mmap_array(T, dims, shm_seg_name, mode)
    readonly = !((mode & JL_O_RDWR) == JL_O_RDWR)
    create = (mode & JL_O_CREAT) == JL_O_CREAT
    s = Mmap.Anonymous(shm_seg_name, readonly, create)
    mmap(s, Array{T,length(dims)}, dims, zero(Int64))
end

# no-op in windows
shm_unlink(shm_seg_name) = 0

else # !windows
function _shm_mmap_array(T, dims, shm_seg_name, mode)
    fd_mem = shm_open(shm_seg_name, mode, S_IRUSR | S_IWUSR)
    systemerror("shm_open() failed for " * shm_seg_name, fd_mem < 0)

    s = fdio(fd_mem, true)

    # On OSX, ftruncate must be used to set the size of the segment (lseek alone does not work),
    # and only at creation time
    if (mode & JL_O_CREAT) == JL_O_CREAT
        rc = ccall(:jl_ftruncate, Cint, (Cint, Int64), fd_mem, prod(dims)*sizeof(T))
        systemerror("ftruncate() failed for shm segment " * shm_seg_name, rc != 0)
    end

    mmap(s, Array{T,length(dims)}, dims, zero(Int64); grow=false)
end

shm_unlink(shm_seg_name) = ccall(:shm_unlink, Cint, (Cstring,), shm_seg_name)
function shm_open(shm_seg_name, oflags, permissions)
    # On macOS, `shm_open()` is a variadic function, so to properly match
    # calling ABI, we must declare our arguments as variadic as well.
    @static if Sys.isapple()
        return ccall(:shm_open, Cint, (Cstring, Cint, Base.Cmode_t...), shm_seg_name, oflags, permissions)
    else
        return ccall(:shm_open, Cint, (Cstring, Cint, Base.Cmode_t), shm_seg_name, oflags, permissions)
    end
end
end # os-test

end # module
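The worker partitioning used by `range_1dim` above can be restated as a standalone sketch; `partition_range` is a hypothetical helper for illustration only, not part of the module:

```julia
# Split l elements across nw workers; worker pidx gets a contiguous chunk,
# and the last worker absorbs the remainder (mirrors range_1dim above).
function partition_range(l::Int, nw::Int, pidx::Int)
    partlen = div(l, nw)
    if l < nw
        return pidx <= l ? (pidx:pidx) : (1:0)
    elseif pidx == nw
        return (((pidx - 1) * partlen) + 1):l
    else
        return (((pidx - 1) * partlen) + 1):(pidx * partlen)
    end
end

partition_range(10, 3, 1)  # 1:3
partition_range(10, 3, 3)  # 7:10, the last worker takes the remainder
```

This is why `localindices` returns an empty range (`1:0`) on processes that do not map the array: `pidx` is 0 there, so no chunk is assigned.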