djeedai / bevy_hanabi / 18243240671

04 Oct 2025 10:36AM UTC coverage: 66.58% (+0.1%) from 66.455%
Split extraction and render into unit systems (#499)

Reorganize most of the extraction and render systems into smaller,
unit-like systems with limited (ideally, a single) responsibility. Split
most of the data into separate, smaller components too. This not only
enables better multithreading, but also greatly simplifies maintenance by
clarifying the logic and responsibility of each system and component.

As part of this change, add a "ready state" to the effect, which is read
back from the render world and informs the main world about whether an
effect is ready for simulation and rendering. This includes:

- All GPU resources being allocated, and in particular the PSOs
  (pipelines) which in Bevy are compiled asynchronously and can be very
  slow (many frames of delay).
- The ready state of all descendant effects, recursively. This ensures a
  child is ready to _e.g._ receive GPU spawn events before its parent,
  which emits those events, starts simulating.

This new ready state is accessed via
`CompiledParticleEffect::is_ready()`. Note that the state is updated
during the extract phase with the information collected from the
previous render frame, so by the time `is_ready()` returns `true`,
one frame of simulation and rendering has generally already occurred.
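
The one-frame latency described above can be modeled with a minimal sketch. This is illustrative only: `CompiledEffect` and `RenderWorldReport` are hypothetical stand-ins, not actual bevy_hanabi types, and the real handshake goes through Bevy's extract phase rather than a direct function call.

```rust
// Minimal model of the one-frame latency: the main world only learns
// about readiness from the report produced by the *previous* render frame.
struct CompiledEffect {
    ready: bool,
}

struct RenderWorldReport {
    all_gpu_resources_ready: bool,
}

// Runs during extract: copy last frame's render-world report into the
// main-world state that a `is_ready()`-style accessor would expose.
fn extract_ready_state(effect: &mut CompiledEffect, last_frame: &RenderWorldReport) {
    effect.ready = last_frame.all_gpu_resources_ready;
}

fn main() {
    let mut effect = CompiledEffect { ready: false };
    // Frame N: PSOs still compiling, render world reports not-ready.
    extract_ready_state(&mut effect, &RenderWorldReport { all_gpu_resources_ready: false });
    assert!(!effect.ready);
    // Frame N+1: the render world finished frame N with everything allocated.
    extract_ready_state(&mut effect, &RenderWorldReport { all_gpu_resources_ready: true });
    assert!(effect.ready);
}
```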

Remove the outdated `copyless` dependency.

594 of 896 new or added lines in 12 files covered. (66.29%)

21 existing lines in 3 files now uncovered.

5116 of 7684 relevant lines covered (66.58%)

416.91 hits per line

Source file: /src/render/event.rs (24.83% covered)
use std::{num::NonZeroU64, ops::Range};

use bevy::{
    ecs::{
        observer::Trigger,
        query::{With, Without},
        system::{Commands, Query},
        world::OnRemove,
    },
    log::{error, trace},
    prelude::{Component, Entity, ResMut, Resource},
    render::{
        render_resource::{BindGroup, BindGroupLayout, Buffer, ShaderSize as _, ShaderType},
        renderer::{RenderDevice, RenderQueue},
    },
};
use bytemuck::{Pod, Zeroable};
use thiserror::Error;
#[cfg(debug_assertions)]
use wgpu::util::BufferInitDescriptor;
#[cfg(not(debug_assertions))]
use wgpu::BufferDescriptor;
use wgpu::{
    BindGroupEntry, BindGroupLayoutEntry, BindingResource, BindingType, BufferBinding,
    BufferBindingType, BufferUsages, CommandEncoder, ShaderStages,
};

use super::{
    aligned_buffer_vec::HybridAlignedBufferVec, effect_cache::SlabState, gpu_buffer::GpuBuffer,
    BufferBindingSource, EffectBindGroups, GpuDispatchIndirectArgs,
};
use crate::{
    render::{effect_cache::SlabId, ChildEffectOf},
    ParticleLayout,
};

#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct EventSlice {
    slice: Range<u32>,
}

/// GPU buffer storing the spawn events emitted by parent effects for their
/// children.
///
/// The event buffer contains, for each effect, the number of particles to
/// spawn this frame. That number is incremented by another effect when it
/// emits a spawn event, and reset to zero on the next frame after the indirect
/// init pass has spawned new particles, and before the new update pass of the
/// source effect optionally emits more spawn events.
///
/// GPU spawn events are never accumulated over frames; if a source emits too
/// many events and the target effect cannot spawn that many particles, for
/// example because it reached its capacity, then the extra events are
/// discarded. This is consistent with the CPU behavior of
/// [`EffectSpawner::spawn_count`].
///
/// Note that the number of allocated events in the buffer slice associated with
/// a child effect instance is not recorded here; instead it's stored in
/// [`GpuChildInfo::event_count`]. This buffer only stores the events
/// themselves.
pub struct EventBuffer {
    /// GPU buffer storing the spawn events.
    buffer: Buffer,
    /// Buffer capacity, in words (4 bytes).
    capacity: u32,
    /// Allocated (used) buffer size, in words (4 bytes).
    size: u32,
    /// Slices into the GPU buffer where event sub-allocations for each effect
    /// are located. Slices are stored ordered by location in the buffer, for
    /// convenience of allocation.
    slices: Vec<EventSlice>,
}

impl EventBuffer {
    /// Create a new event buffer to store the spawn events of the specified
    /// child effect.
    pub fn new(buffer: Buffer, capacity: u32) -> Self {
        Self {
            buffer,
            capacity,
            size: 0,
            slices: vec![],
        }
    }

    /// Get a reference to the underlying GPU buffer.
    pub fn buffer(&self) -> &Buffer {
        &self.buffer
    }

    /// Allocate a new slice for a child effect.
    pub fn allocate(&mut self, size: u32) -> Option<EventSlice> {
        if self.size + size > self.capacity {
            return None;
        }

        if self.slices.is_empty() {
            let slice = EventSlice { slice: 0..size };
            self.slices.push(slice.clone());
            self.size += size;
            return Some(slice);
        }

        let mut start = 0;
        for (idx, es) in self.slices.iter().enumerate() {
            let avail_size = es.slice.start - start;
            if size > avail_size {
                start = es.slice.end;
                continue;
            }

            let slice = EventSlice {
                slice: start..start + size,
            };
            self.slices.insert(idx, slice.clone());
            self.size += size;
            return Some(slice);
        }

        if start + size <= self.capacity {
            let slice = EventSlice {
                slice: start..start + size,
            };
            self.slices.push(slice.clone());
            self.size += size;
            Some(slice)
        } else {
            None
        }
    }

    /// Free the slice of a consumer effect once that effect is deallocated.
    pub fn free(&mut self, slice: &EventSlice) -> SlabState {
        // Note: could use binary search, but likely not enough elements to be worth it
        if let Some(idx) = self.slices.iter().position(|es| es == slice) {
            self.slices.remove(idx);
        }
        if self.slices.is_empty() {
            SlabState::Free
        } else {
            SlabState::Used
        }
    }
}
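
Stripped of the GPU buffer, the first-fit strategy of `allocate()` above behaves like this self-contained sketch (`Slices` is a hypothetical simplification that keeps only the sorted slice list):

```rust
use std::ops::Range;

// First-fit over a sorted slice list, as in EventBuffer::allocate():
// scan the gap before each existing slice, else append at the end.
struct Slices {
    capacity: u32,
    slices: Vec<Range<u32>>,
}

impl Slices {
    fn allocate(&mut self, size: u32) -> Option<Range<u32>> {
        let mut start = 0;
        let mut insert_at = self.slices.len();
        for (idx, s) in self.slices.iter().enumerate() {
            if s.start - start >= size {
                insert_at = idx; // found a large-enough gap before slice #idx
                break;
            }
            start = s.end;
        }
        if start + size > self.capacity {
            return None;
        }
        let slice = start..start + size;
        self.slices.insert(insert_at, slice.clone());
        Some(slice)
    }
}

fn main() {
    let mut s = Slices { capacity: 8, slices: vec![] };
    assert_eq!(s.allocate(4), Some(0..4));
    assert_eq!(s.allocate(4), Some(4..8));
    assert_eq!(s.allocate(1), None); // buffer full
    s.slices.remove(0); // free 0..4, like EventBuffer::free()
    assert_eq!(s.allocate(2), Some(0..2)); // the hole is reused first-fit
}
```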

/// Data about the child effect(s) of this effect. This component is only
/// present on an effect instance if that effect is the parent effect for at
/// least one child effect.
#[derive(Debug, PartialEq, Component)]
pub(crate) struct CachedParentInfo {
    /// Render world entities of the child effects, and their associated event
    /// buffer binding source.
    pub children: Vec<(Entity, BufferBindingSource)>,
    /// Indices in bytes into the global [`EffectCache::child_infos_buffer`] of
    /// the [`GpuChildInfo`]s for all the child effects of this parent effect.
    /// The child effects are always allocated as a single contiguous block,
    /// which needs to be mapped into a shader binding point.
    pub byte_range: Range<u32>,
}

/// Data about this effect as a child of another effect.
///
/// This component is only present on an effect instance if that effect is the
/// child effect for another effect (that is, this effect has a parent effect).
#[derive(Debug, Clone, PartialEq, Component)]
pub(crate) struct CachedChildInfo {
    /// ID of the slab storing the parent effect.
    pub parent_slab_id: SlabId,
    /// Parent's particle layout.
    pub parent_particle_layout: ParticleLayout,
    /// Parent's buffer.
    pub parent_buffer_binding_source: BufferBindingSource,
    /// Index of this child effect into its parent's [`GpuChildInfo`] array.
    /// This starts at zero for the first child of each effect, and is only
    /// unique per parent, not globally.
    pub local_child_index: u32,
    /// Global index of this child effect into the shared global
    /// [`EventCache::child_infos_buffer`] array. This is a unique index across
    /// all effects.
    pub global_child_index: u32,
    /// Index of the [`GpuDispatchIndirectArgs`] entry into the
    /// [`EventCache::init_indirect_dispatch_buffer`] array.
    pub init_indirect_dispatch_index: u32,
}

impl CachedChildInfo {
    pub fn is_locally_equal(&self, other: &CachedChildInfo) -> bool {
        self.parent_slab_id == other.parent_slab_id
            && self.parent_particle_layout == other.parent_particle_layout
            && self.parent_buffer_binding_source == other.parent_buffer_binding_source
            && self.local_child_index == other.local_child_index
            // skip global_child_index here!
            && self.init_indirect_dispatch_index == other.init_indirect_dispatch_index
    }
}

/// GPU representation of the child info data structure storing some data for a
/// child effect. The associated CPU representation is [`CachedEffectEvents`].
#[repr(C)]
#[derive(Debug, Default, Clone, Copy, Pod, Zeroable, ShaderType)]
pub struct GpuChildInfo {
    /// Index of the [`GpuDispatchIndirectArgs`] inside the
    /// [`EventCache::init_indirect_dispatch_buffer`] used to dispatch the init
    /// pass of this child effect.
    pub init_indirect_dispatch_index: u32,
    /// Number of events currently stored inside the [`EventBuffer`] slice
    /// associated with this child effect. This is updated atomically by the
    /// GPU while stored in the [`EventCache::child_infos_buffer`].
    pub event_count: i32,
}
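
As a layout sanity check for the `#[repr(C)]` struct above: two 4-byte fields pack into 8 bytes with 4-byte alignment, so elements of the child-infos array are contiguous. This is a std-only mirror of `GpuChildInfo`, for illustration, without the `bytemuck`/`ShaderType` derives:

```rust
// Mirror of GpuChildInfo's field layout, just to check the C-layout
// size and alignment of the two 4-byte fields.
#[repr(C)]
#[derive(Clone, Copy)]
struct ChildInfoLayout {
    init_indirect_dispatch_index: u32,
    event_count: i32,
}

fn main() {
    assert_eq!(std::mem::size_of::<ChildInfoLayout>(), 8);
    assert_eq!(std::mem::align_of::<ChildInfoLayout>(), 4);
}
```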

/// Cached allocation info for the GPU events of a child effect.
///
/// This component is automatically inserted by [`allocate_events()`] when a
/// child effect uses GPU events.
#[derive(Debug, Clone, Component)]
pub struct CachedEffectEvents {
    /// Index of the [`EventBuffer`] inside the [`EventCache::buffers`]
    /// collection where a slice of events is allocated, for this effect to
    /// consume.
    pub buffer_index: u32,
    /// Range, in items (4 bytes), where the events are stored inside the
    /// [`EventBuffer`]. This determines the capacity, in event count, for this
    /// effect. The number of used events is stored on the GPU in
    /// [`GpuChildInfo::event_count`].
    pub range: Range<u32>,
    /// Index of the [`GpuDispatchIndirectArgs`] inside the
    /// [`EventCache::init_indirect_dispatch_buffer`].
    pub init_indirect_dispatch_index: u32,
}

impl CachedEffectEvents {
    /// Capacity of this allocation, in number of GPU events. The number of used
    /// events is stored on the GPU in [`GpuChildInfo::event_count`].
    #[allow(dead_code)]
    pub fn capacity(&self) -> u32 {
        self.range.len() as u32
    }
}

/// Allocate storage for GPU events for all child effects.
///
/// This system manages allocating storage for GPU events of child effects, and
/// spawning the [`CachedEffectEvents`] storing that allocation.
pub(crate) fn allocate_events(
    mut commands: Commands,
    mut event_cache: ResMut<EventCache>,
    mut q_child_effects: Query<(Entity, Option<&mut CachedEffectEvents>), With<ChildEffectOf>>,
    q_old_child_effects: Query<Entity, (With<CachedEffectEvents>, Without<ChildEffectOf>)>,
) {
    #[cfg(feature = "trace")]
    let _span = bevy::log::info_span!("allocate_events").entered();
    trace!("allocate_events");

    event_cache.clear_previous_frame_resizes();

    // Allocate storage and add the component to children missing it.
    for (entity, maybe_cached_events) in &mut q_child_effects {
        if let Some(_cached_events) = maybe_cached_events {
            // Nothing really to do for now because we hardcode a number of
            // events, so the allocation won't ever change...
        } else {
            const FIXME_HARD_CODED_EVENT_COUNT: u32 = 256;
            let cached_effect_events = event_cache.allocate(FIXME_HARD_CODED_EVENT_COUNT);
            commands.entity(entity).insert(cached_effect_events);
        }
    }

    // Remove the component from effects which are not a child anymore. This should
    // be pretty rare; in general the effect is just despawned.
    for entity in &q_old_child_effects {
        commands.entity(entity).remove::<CachedEffectEvents>();
    }
}

/// Observer raised when the [`CachedEffectEvents`] component is removed,
/// which indicates that the effect doesn't use GPU events anymore.
pub(crate) fn on_remove_cached_effect_events(
    trigger: Trigger<OnRemove, CachedEffectEvents>,
    query: Query<(Entity, &CachedEffectEvents)>,
    mut event_cache: ResMut<EventCache>,
) {
    #[cfg(feature = "trace")]
    let _span = bevy::log::info_span!("on_remove_cached_effect_events").entered();
    trace!("on_remove_cached_effect_events");

    if let Ok((entity, cached_effect_event)) = query.get(trigger.target()) {
        // TODO - handle SlabState return value to invalidate property bind groups!!
        if let Err(err) = event_cache.free(cached_effect_event) {
            error!("Error while freeing cached events for effect {entity:?}: {err:?}");
        }
    };
}

/// Error code for [`EventCache::free()`].
#[derive(Debug, Error)]
pub enum CachedEventsError {
    /// The given buffer index is invalid. The [`EventCache`] doesn't contain
    /// any buffer with that index.
    #[error("Invalid buffer index #{0}.")]
    InvalidBufferIndex(u32),
    /// The given buffer index corresponds to an [`EventCache`] buffer which
    /// was already deallocated.
    #[error("Buffer at index #{0} was deallocated.")]
    BufferDeallocated(u32),
}

/// Cache for effect events.
#[derive(Resource)]
pub struct EventCache {
    /// Render device to allocate GPU resources as needed.
    device: RenderDevice,
    /// Single shared GPU buffer storing all the [`GpuChildInfo`] structs
    /// for all the parent effects.
    child_infos_buffer: HybridAlignedBufferVec,
    /// Collection of event buffers managed by this cache. Some buffers might
    /// be `None` if the entry is not used. Since the buffers are referenced
    /// by index, we cannot move them once they're allocated.
    buffers: Vec<Option<EventBuffer>>,
    /// Single shared GPU buffer storing all the [`GpuDispatchIndirectArgs`]
    /// structs for all the indirect init passes. Any effect allocating storage
    /// for GPU events also gets an entry into this buffer, to allow consuming
    /// the events from an init pass indirectly dispatched (GPU-driven).
    // FIXME - merge with the update pass one, we don't need 2 buffers storing the same type; on
    // the other hand if we sync the allocations with GpuChildInfo we can guarantee a perfect
    // batching for the init fill dispatch pass (single dispatch for all instances at once).
    init_indirect_dispatch_buffer: GpuBuffer<GpuDispatchIndirectArgs>,
    /// Bind group layout for the indirect dispatch pass, which clears the GPU
    /// event counts ([`GpuChildInfo::event_count`]).
    indirect_child_info_buffer_bind_group_layout: BindGroupLayout,
    /// Bind group for the indirect dispatch pass, which clears the GPU event
    /// counts ([`GpuChildInfo::event_count`]).
    indirect_child_info_buffer_bind_group: Option<BindGroup>,
}

impl EventCache {
    /// Create a new event cache.
    pub fn new(device: RenderDevice) -> Self {
        let init_indirect_dispatch_buffer = GpuBuffer::new(
            BufferUsages::STORAGE | BufferUsages::INDIRECT,
            Some("hanabi:buffer:init_indirect_dispatch".to_string()),
        );

        let child_infos_bind_group_layout = device.create_bind_group_layout(
            "hanabi:bind_group_layout:indirect:child_infos@3",
            &[BindGroupLayoutEntry {
                binding: 0,
                visibility: ShaderStages::COMPUTE,
                ty: BindingType::Buffer {
                    ty: BufferBindingType::Storage { read_only: false },
                    has_dynamic_offset: false,
                    min_binding_size: Some(GpuChildInfo::min_size()),
                },
                count: None,
            }],
        );

        Self {
            device,
            child_infos_buffer: HybridAlignedBufferVec::new(
                BufferUsages::STORAGE,
                NonZeroU64::new(4).unwrap(),
                Some("hanabi:buffer:child_infos".to_string()),
            ),
            buffers: vec![],
            init_indirect_dispatch_buffer,
            indirect_child_info_buffer_bind_group_layout: child_infos_bind_group_layout,
            // Can't create until the buffer is ready
            indirect_child_info_buffer_bind_group: None,
        }
    }

    #[allow(dead_code)]
    #[inline]
    pub fn buffers(&self) -> &[Option<EventBuffer>] {
        &self.buffers
    }

    #[allow(dead_code)]
    #[inline]
    pub fn buffers_mut(&mut self) -> &mut [Option<EventBuffer>] {
        &mut self.buffers
    }

    #[inline]
    pub fn get_buffer(&self, index: u32) -> Option<&Buffer> {
        self.buffers
            .get(index as usize)
            .and_then(|opt_eb| opt_eb.as_ref().map(|eb| eb.buffer()))
    }

    #[inline]
    pub fn child_infos_buffer(&self) -> Option<&Buffer> {
        self.child_infos_buffer.buffer()
    }

    /// Allocate a memory block to store the given number of GPU events.
    ///
    /// The allocation always succeeds, allocating a new GPU event buffer if
    /// none of the existing ones can store the requested number of events.
    ///
    /// # Returns
    ///
    /// The [`CachedEffectEvents`] component representing the allocation.
    ///
    /// # Panics
    ///
    /// Panics if the number of events `num_events` is zero.
    pub fn allocate(&mut self, num_events: u32) -> CachedEffectEvents {
        assert!(num_events > 0);

        // Allocate an entry into the indirect dispatch buffer
        // The value pushed is a dummy; see allocate_frame_buffers().
        let init_indirect_dispatch_index = self.init_indirect_dispatch_buffer.allocate();

        // Try to find an allocated GPU buffer with enough capacity
        let mut empty_index = None;
        for (buffer_index, buffer) in self.buffers.iter_mut().enumerate() {
            let Some(buffer) = buffer.as_mut() else {
                // Remember the first empty slot in case we need to allocate a new GPU buffer
                if empty_index.is_none() {
                    empty_index = Some(buffer_index);
                }
                continue;
            };

            // Try to allocate a slice into the buffer
            if let Some(event_slice) = buffer.allocate(num_events) {
                trace!("Allocate new slice in event buffer #{buffer_index} for {num_events} events: range={event_slice:?}");
                return CachedEffectEvents {
                    buffer_index: buffer_index as u32,
                    init_indirect_dispatch_index,
                    range: event_slice.slice,
                };
            }
        }

        // Cannot find any suitable GPU event buffer; allocate a new one

        // Compute the slot where to store the new buffer
        let buffer_index = empty_index.unwrap_or(self.buffers.len());

        // Create the GPU buffer
        let label = format!("hanabi:buffer:event_buffer{buffer_index}");
        let align = self.device.limits().min_storage_buffer_offset_alignment;
        let capacity = num_events.max(16 * 1024); // min capacity
        let byte_size = (capacity as u64 * 4).next_multiple_of(align as u64);
        let capacity = (byte_size / 4) as u32;
        // In debug, fill the buffer with some debug marker
        #[cfg(debug_assertions)]
        let buffer = {
            let mut contents: Vec<u32> = Vec::with_capacity(capacity as usize);
            contents.resize(capacity as usize, 0xDEADBEEF);
            self.device.create_buffer_with_data(&BufferInitDescriptor {
                label: Some(&label[..]),
                usage: BufferUsages::COPY_DST | BufferUsages::STORAGE,
                contents: bytemuck::cast_slice(contents.as_slice()),
            })
        };
        // In release, don't initialize the buffer for performance
        #[cfg(not(debug_assertions))]
        let buffer = self.device.create_buffer(&BufferDescriptor {
            label: Some(&label[..]),
            size: byte_size,
            usage: BufferUsages::COPY_DST | BufferUsages::STORAGE,
            mapped_at_creation: false,
        });
        trace!("Created new event buffer #{buffer_index} '{label}' with {byte_size} bytes ({capacity} events; align={align}B)");
        let mut buffer = EventBuffer::new(buffer, capacity);

        // Allocate a slice from the new event buffer
        let event_slice = buffer.allocate(num_events).expect("Failed to allocate event slice inside new buffer specifically created for this allocation.");
        trace!("Allocate new slice in event buffer #{buffer_index} for {num_events} events: range={event_slice:?}");

        // Store the event buffer at the selected slot
        if buffer_index >= self.buffers.len() {
            self.buffers.push(Some(buffer));
        } else {
            debug_assert!(self.buffers[buffer_index].is_none());
            self.buffers[buffer_index] = Some(buffer);
        }

        CachedEffectEvents {
            buffer_index: buffer_index as u32,
            init_indirect_dispatch_index,
            range: event_slice.slice,
        }
    }
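
The capacity rounding inside `allocate()` can be isolated as a small helper for illustration; the 256-byte value below is an assumed `min_storage_buffer_offset_alignment` (a common wgpu limit), not something this code guarantees:

```rust
// Clamp to the 16K-event minimum, then round the byte size up to the
// storage-buffer offset alignment, as allocate() does above.
fn rounded_capacity(num_events: u32, align: u32) -> (u64, u32) {
    let capacity = num_events.max(16 * 1024); // min capacity, in events
    let byte_size = (capacity as u64 * 4).next_multiple_of(align as u64);
    (byte_size, (byte_size / 4) as u32)
}

fn main() {
    // Small request: clamped up to 16384 events = 65536 bytes (already aligned).
    assert_eq!(rounded_capacity(256, 256), (65536, 16384));
    // Larger request: 20000 events = 80000 bytes, rounded up to 80128.
    assert_eq!(rounded_capacity(20_000, 256), (80128, 20032));
}
```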

    /// Deallocate and remove an event block allocation from the cache.
    pub fn free(
        &mut self,
        cached_effect_events: &CachedEffectEvents,
    ) -> Result<SlabState, CachedEventsError> {
        trace!(
            "Removing cached event {:?} from cache.",
            cached_effect_events
        );

        self.init_indirect_dispatch_buffer
            .free(cached_effect_events.init_indirect_dispatch_index);

        let entry = self
            .buffers
            .get_mut(cached_effect_events.buffer_index as usize)
            .ok_or(CachedEventsError::InvalidBufferIndex(
                cached_effect_events.buffer_index,
            ))?;
        let buffer = entry.as_mut().ok_or(CachedEventsError::BufferDeallocated(
            cached_effect_events.buffer_index,
        ))?;
        if buffer.free(&EventSlice {
            slice: cached_effect_events.range.clone(),
        }) == SlabState::Free
        {
            let buffer = entry.take().unwrap();
            buffer.buffer.destroy();
            Ok(SlabState::Free)
        } else {
            Ok(SlabState::Used)
        }
    }

    /// Allocate a new block of [`GpuChildInfo`] structures for a list of
    /// children.
    pub fn allocate_child_infos(
        &mut self,
        parent_entity: Entity,
        children: Vec<(Entity, BufferBindingSource)>,
        child_infos: &[GpuChildInfo],
    ) -> CachedParentInfo {
        assert_eq!(children.len(), child_infos.len());
        assert!(!children.is_empty());

        let byte_range = self.child_infos_buffer.push_many(child_infos);
        assert_eq!(byte_range.start as usize % size_of::<GpuChildInfo>(), 0);
        trace!(
            "Parent {:?}: newly allocated ChildInfo[] array at +{}",
            parent_entity,
            byte_range.start
        );

        CachedParentInfo {
            children,
            byte_range,
        }
    }

    /// Re-allocate a block of [`GpuChildInfo`] structures for a modified list
    /// of children.
    pub fn reallocate_child_infos(
        &mut self,
        parent_entity: Entity,
        children: Vec<(Entity, BufferBindingSource)>,
        child_infos: &[GpuChildInfo],
        cached_parent_info: &mut CachedParentInfo,
    ) {
        trace!(
            "Parent {:?}: De-allocating old ChildInfo[] entry at range {:?}",
            parent_entity,
            cached_parent_info.byte_range
        );
        self.child_infos_buffer
            .remove(cached_parent_info.byte_range.clone());

        let byte_range = self.child_infos_buffer.push_many(child_infos);
        assert_eq!(
            byte_range.start as usize % GpuChildInfo::SHADER_SIZE.get() as usize,
            0
        );
        trace!(
            "Parent {:?}: Allocated new ChildInfo[] entry at byte range {:?}",
            parent_entity,
            byte_range
        );

        cached_parent_info.children = children;
        cached_parent_info.byte_range = byte_range;
    }

    /// Re-/allocate any buffer for the current frame.
    pub fn prepare_buffers(
        &mut self,
        render_device: &RenderDevice,
        render_queue: &RenderQueue,
        // FIXME
        _effect_bind_groups: &mut ResMut<EffectBindGroups>,
    ) {
        // This buffer is only ever used in the bind groups of a `GpuBufferOperations`,
        // which manages its bind groups automatically each frame. So there's no
        // invalidation to do here on re-allocation.
        self.init_indirect_dispatch_buffer
            .prepare_buffers(render_device);

        self.child_infos_buffer
            .write_buffer(render_device, render_queue);
    }

    /// Schedule any pending buffer copy.
    ///
    /// This is necessary when a buffer is reallocated, to copy the old content.
    /// This must be called once per frame after the buffers have been
    /// reallocated with `prepare_buffers()`.
    #[inline]
    pub fn write_buffers(&self, command_encoder: &mut CommandEncoder) {
        self.init_indirect_dispatch_buffer
            .write_buffers(command_encoder);
    }

    /// Destroy old copies of buffers reallocated last frame and copied to a new
    /// buffer.
    ///
    /// This must be called once per frame after any content was effectively
    /// copied from an old to a new buffer. This means that, due to Bevy's
    /// limitations, this must be called on the next frame, as we don't have
    /// write access to anything nor any hint as to when copies are done until
    /// the next frame rendering actually starts.
    #[inline]
    pub fn clear_previous_frame_resizes(&mut self) {
        self.init_indirect_dispatch_buffer
            .clear_previous_frame_resizes();
    }

    #[inline]
    pub fn init_indirect_dispatch_buffer(&self) -> Option<&Buffer> {
        self.init_indirect_dispatch_buffer.buffer()
    }

    #[inline]
    pub fn child_infos(&self) -> &HybridAlignedBufferVec {
        &self.child_infos_buffer
    }

    pub fn ensure_indirect_child_info_buffer_bind_group(
        &mut self,
        device: &RenderDevice,
    ) -> Option<&BindGroup> {
        let buffer = self.child_infos_buffer()?;
        // TODO - stop re-creating each frame...
        self.indirect_child_info_buffer_bind_group = Some(device.create_bind_group(
            "hanabi:bind_group:indirect:child_infos@3",
            &self.indirect_child_info_buffer_bind_group_layout,
            &[BindGroupEntry {
                binding: 0,
                resource: BindingResource::Buffer(BufferBinding {
                    buffer,
                    offset: 0,
                    size: None,
                }),
            }],
        ));
        self.indirect_child_info_buffer_bind_group.as_ref()
    }

    pub fn indirect_child_info_buffer_bind_group(&self) -> Option<&BindGroup> {
        self.indirect_child_info_buffer_bind_group.as_ref()
    }
}