/*
 * SPDX-FileCopyrightText: 2023 Inria
 * SPDX-FileCopyrightText: 2023 Sebastiano Vigna
 *
 * SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
 */

//! Vectors of bit fields of fixed width (AKA "compact arrays", "bit arrays",
//! etc.)
//!
//! Elements are stored contiguously, with no padding bits (in particular,
//! unless the bit width is a power of two some elements will be stored across
//! word boundaries).
//!
//! There are two flavors: [`BitFieldVec`], a mutable bit-field vector, and
//! [`AtomicBitFieldVec`], a mutable, thread-safe bit-field vector.
//!
//! These flavors depend on a backend, and presently we provide, given an
//! unsigned integer type `W` or an unsigned atomic integer type `A`:
//!
//! - `BitFieldVec<Vec<W>>`: a mutable, growable and resizable bit-field vector;
//! - `BitFieldVec<AsRef<[W]>>`: an immutable bit-field vector, useful for
//!   [ε-serde](https://crates.io/crates/epserde) support;
//! - `BitFieldVec<AsRef<[W]> + AsMut<[W]>>`: a mutable (but not resizable)
//!   bit-field vector;
//! - `AtomicBitFieldVec<AsRef<[A]>>`: a partially thread-safe, mutable (but not
//!   resizable) bit-field vector.
//!
//! More generally, the underlying type must satisfy the trait [`Word`] for
//! [`BitFieldVec`], while for [`AtomicBitFieldVec`] it must satisfy
//! [`PrimitiveAtomic`] with a
//! [`Value`](atomic_primitive::PrimitiveAtomic::Value) satisfying [`Word`].
//! A blanket implementation exposes slices of elements of type `W` as bit-field
//! vectors of width `W::BITS`, and analogously for atomic types `A`.
//!
//! The traits [`BitFieldSlice`] and [`BitFieldSliceMut`] provide a uniform
//! interface to access the content of (a reference to) the bit-field vector.
//! There is also an [`AtomicBitFieldSlice`] trait for atomic bit-field vectors.
//! Since they are also implemented for slices of words, they make it easy to
//! write generic code that works both on bit-field vectors and on slices of
//! words when you need to consider the bit width of each element.
//!
//! Note that the [`try_chunks_mut`](SliceByValueMut::try_chunks_mut) method is
//! part of the [`SliceByValueMut`] trait, and thus returns an iterator over
//! elements implementing [`SliceByValueMut`]; the elements, however, also
//! implement [`BitFieldSliceMut`], and you can use this property by adding the
//! bound `for<'a> BitFieldSliceMut<ChunksMut<'a>: Iterator<Item:
//! BitFieldSliceMut>>`.
//!
//! Nothing is assumed about the content of the backend outside the
//! bits of the vector. Moreover, the content of the backend outside of the
//! vector is never modified by the methods of this structure.
//!
//! For high-speed unchecked scanning, we implement [`IntoUncheckedIterator`]
//! and [`IntoUncheckedBackIterator`] on a reference to this type. They are
//! used, for example, to provide
//! [predecessor](crate::traits::indexed_dict::Pred) and
//! [successor](crate::traits::indexed_dict::Succ) primitives for
//! [`EliasFano`].
//!
//! # Low-level support
//!
//! The methods [`addr_of`](BitFieldVec::addr_of) and
//! [`get_unaligned`](BitFieldVec::get_unaligned) can be used to manually
//! prefetch parts of the data structure, or to read values using unaligned
//! reads, when the bit width makes it possible.
//!
//! The wrapper [`BitFieldVecU`] implements [`SliceByValue`] using
//! unaligned reads and delegates all iterator methods. It can just be plugged
//! in place of a normal [`BitFieldVec`] when the trait bound is
//! [`SliceByValue`].
//!
//! # Examples
//!
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! # use value_traits::slices::*;
//! // Bit field vector of bit width 5 and length 10, all entries set to zero
//! let mut b = <BitFieldVec<Vec<usize>>>::new(5, 10);
//! assert_eq!(b.len(), 10);
//! assert_eq!(b.bit_width(), 5);
//! b.set_value(0, 3);
//! assert_eq!(b.index_value(0), 3);
//!
//! // Empty bit field vector of bit width 20 with capacity 10
//! let mut b = <BitFieldVec<Vec<usize>>>::with_capacity(20, 10);
//! assert_eq!(b.len(), 0);
//! assert_eq!(b.bit_width(), 20);
//! b.push(20);
//! assert_eq!(b.len(), 1);
//! assert_eq!(b.index_value(0), 20);
//! assert_eq!(b.pop(), Some(20));
//!
//! // Convenience macro
//! let b = bit_field_vec![10; 4, 500, 2, 0, 1];
//! assert_eq!(b.len(), 5);
//! assert_eq!(b.bit_width(), 10);
//! assert_eq!(b.index_value(0), 4);
//! assert_eq!(b.index_value(1), 500);
//! assert_eq!(b.index_value(2), 2);
//! assert_eq!(b.index_value(3), 0);
//! assert_eq!(b.index_value(4), 1);
//! ```
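//!
//! As a sketch of the chunking API mentioned above, each chunk returned by
//! [`try_chunks_mut`](SliceByValueMut::try_chunks_mut) is itself a bit-field
//! vector over a mutable slice of words, so chunks can be filled
//! independently (the chunk size below is chosen so that chunks are
//! word-aligned):
//!
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! # use value_traits::slices::*;
//! let mut v = <BitFieldVec<Vec<usize>>>::new(8, 1024);
//! // 256 elements of width 8 span an integral number of words, so the
//! // chunking cannot fail.
//! for (i, mut chunk) in v.try_chunks_mut(256).unwrap().enumerate() {
//!     for j in 0..chunk.len() {
//!         chunk.set_value(j, i);
//!     }
//! }
//! assert_eq!(v.index_value(0), 0);
//! assert_eq!(v.index_value(256), 1);
//! ```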

use crate::prelude::{bit_field_slice::*, *};
use crate::traits::ambassador_impl_Backend;
use crate::traits::{Backend, Word};
use crate::utils::PrimitiveUnsignedExt;
use crate::utils::{
    CannotCastToAtomicError, transmute_boxed_slice_from_atomic, transmute_boxed_slice_into_atomic,
    transmute_vec_from_atomic, transmute_vec_into_atomic,
};
use crate::{panic_if_out_of_bounds, panic_if_value};
use ambassador::Delegate;
use anyhow::{Result, bail};
use atomic_primitive::{Atomic, AtomicPrimitive, PrimitiveAtomic, PrimitiveAtomicUnsigned};
use mem_dbg::*;
use num_primitive::{PrimitiveInteger, PrimitiveNumber, PrimitiveNumberAs};
#[cfg(feature = "rayon")]
use rayon::prelude::*;
use std::iter::FusedIterator;
use std::sync::atomic::{Ordering, compiler_fence, fence};
use value_traits::slices::{SliceByValue, SliceByValueMut};

/// Convenient, [`vec!`]-like macro to initialize [`usize`]-based
/// bit-field vectors.
///
/// - `bit_field_vec![width]`: creates an empty bit-field vector of given bit
///   width.
///
/// - `bit_field_vec![width => value; length]`: creates a bit-field vector of
///   given bit width and length, with all entries set to `value`.
///
/// - `bit_field_vec![width; v₀, v₁, … ]`: creates a bit-field vector of
///   given bit width with entries set to `v₀`, `v₁`, ….
///
/// # Examples
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// # use value_traits::slices::*;
/// // Empty bit field vector of bit width 5
/// let b = bit_field_vec![5];
/// assert_eq!(b.len(), 0);
/// assert_eq!(b.bit_width(), 5);
///
/// // 10 values of bit width 6, all set to 3
/// let b = bit_field_vec![6 => 3; 10];
/// assert_eq!(b.len(), 10);
/// assert_eq!(b.bit_width(), 6);
/// assert_eq!(b.iter().all(|x| x == 3), true);
///
/// // List of values of bit width 10
/// let b = bit_field_vec![10; 4, 500, 2, 0, 1];
/// assert_eq!(b.len(), 5);
/// assert_eq!(b.bit_width(), 10);
/// assert_eq!(b.index_value(0), 4);
/// assert_eq!(b.index_value(1), 500);
/// assert_eq!(b.index_value(2), 2);
/// assert_eq!(b.index_value(3), 0);
/// assert_eq!(b.index_value(4), 1);
/// ```
#[macro_export]
macro_rules! bit_field_vec {
    ($w:expr) => {
        <$crate::bits::BitFieldVec>::new($w, 0)
    };
    ($w:expr; $n:expr; $v:expr) => {
        compile_error!(
            "the syntax bit_field_vec![width; length; value] has been removed: use bit_field_vec![width => value; length] instead"
        )
    };
    ($w:expr => $v:expr; $n:expr) => {
        {
            let mut bit_field_vec = <$crate::bits::BitFieldVec>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr; $($x:expr),+ $(,)?) => {
        {
            let mut b = <$crate::bits::BitFieldVec>::with_capacity($w, [$($x),+].len());
            $(
                // Force type
                let x: usize = $x;
                b.push(x);
            )*
            b
        }
    };
}

/// A vector of bit fields of fixed width (AKA "compact array", "bit array",
/// etc.).
///
/// See the [module documentation](crate::bits) for more details.
#[derive(Debug, Clone, MemSize, MemDbg, Delegate, value_traits::Subslices)]
#[value_traits_subslices(bound = "B: AsRef<[B::Word]>")]
#[value_traits_subslices(bound = "B::Word: Word")]
#[derive(value_traits::SubslicesMut)]
#[value_traits_subslices_mut(bound = "B: AsRef<[B::Word]> + AsMut<[B::Word]>")]
#[value_traits_subslices_mut(bound = "B::Word: Word")]
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
#[cfg_attr(
    feature = "epserde",
    epserde(bound(
        deser = "for<'a> <B as epserde::deser::DeserInner>::DeserType<'a>: Backend<Word = B::Word>"
    ))
)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[delegate(crate::traits::Backend, target = "bits")]
pub struct BitFieldVec<B: Backend = Vec<usize>> {
    /// The underlying storage.
    bits: B,
    /// The bit width of the values stored in the vector.
    bit_width: usize,
    /// A mask with its lowest `bit_width` bits set to one.
    mask: B::Word,
    /// The length of the vector.
    len: usize,
}

/// Robust, heavily checked mask function for constructors and similar methods.
fn mask<W: Word>(bit_width: usize) -> W {
    if bit_width == 0 {
        W::ZERO
    } else {
        W::MAX
            >> (W::BITS as usize)
                .checked_sub(bit_width)
                .expect("bit_width > W::BITS as usize")
    }
}
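// As a worked example (for illustration only): with `W = u8`, `mask::<u8>(3)`
// is `0b0000_0111` and `mask::<u8>(8)` is `0b1111_1111`, while any `bit_width`
// larger than `W::BITS` makes the `checked_sub` above return `None` and
// triggers the panic in `expect`.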

impl<B: Backend<Word: Word>> BitFieldVec<B> {
    /// # Safety
    /// `len` * `bit_width` must be between 0 (included) and the number of
    /// bits in `bits` (included).
    #[inline(always)]
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
        Self {
            bits,
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    /// Returns the backend, the bit width, and the length, consuming this
    /// vector.
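    ///
    /// A brief round-trip sketch: the raw parts can be fed back to
    /// [`from_raw_parts`](Self::from_raw_parts), which is safe here because
    /// they come from a valid vector:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use value_traits::slices::*;
    /// let b = <BitFieldVec<Vec<usize>>>::new(7, 3);
    /// let (bits, bit_width, len) = b.into_raw_parts();
    /// let b = unsafe { BitFieldVec::from_raw_parts(bits, bit_width, len) };
    /// assert_eq!(b.len(), 3);
    /// ```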
    #[inline(always)]
    pub fn into_raw_parts(self) -> (B, usize, usize) {
        (self.bits, self.bit_width, self.len)
    }

    /// Returns the mask used to extract values from the vector.
    /// This will keep the lowest `bit_width` bits.
    pub const fn mask(&self) -> B::Word {
        self.mask
    }

    /// Replaces the backend by applying a function, consuming this vector.
    ///
    /// # Safety
    /// The caller must ensure that the length is compatible with the new
    /// backend.
    #[inline(always)]
    pub unsafe fn map<B2: Backend<Word: Word>>(self, f: impl FnOnce(B) -> B2) -> BitFieldVec<B2> {
        BitFieldVec {
            bits: f(self.bits),
            bit_width: self.bit_width,
            mask: mask(self.bit_width),
            len: self.len,
        }
    }
}

impl<B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVec<B> {
    /// Gets the address of the item storing (the first part of)
    /// the element of given index.
    ///
    /// This method is mainly useful for manually prefetching
    /// parts of the data structure.
    pub fn addr_of(&self, index: usize) -> *const B::Word {
        let start_bit = index * self.bit_width;
        let word_index = start_bit / B::Word::BITS as usize;
        (&self.bits.as_ref()[word_index]) as *const _
    }

    /// Like [`SliceByValue::index_value`], but using unaligned reads.
    ///
    /// This method can be used only for bit width smaller than or equal to
    /// `W::BITS as usize - 8 + 2`, or equal to `W::BITS as usize - 8 + 4` or
    /// `W::BITS`. Moreover, an additional padding word must be present at the
    /// end of the vector.
    ///
    /// Note that to guarantee the absence of undefined behavior this method
    /// has to perform several tests. Consider using
    /// [`get_unaligned_unchecked`](Self::get_unaligned_unchecked) if you are
    /// sure that the constraints are respected.
    ///
    /// # Panics
    ///
    /// This method will panic if the constraints above are not respected.
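    ///
    /// # Examples
    ///
    /// A small sketch using [`new_padded`](BitFieldVec::new_padded), which
    /// already provides the required padding word:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use value_traits::slices::*;
    /// let mut b = BitFieldVec::<Box<[usize]>>::new_padded(16, 4);
    /// b.set_value(2, 1000);
    /// assert_eq!(b.get_unaligned(2), 1000);
    /// ```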
    pub fn get_unaligned(&self, index: usize) -> B::Word {
65✔
309
        assert_unaligned!(B::Word, self.bit_width);
79✔
310
        panic_if_out_of_bounds!(index, self.len);
183✔
311
        // Check that the read_unaligned of size_of::<W>() bytes starting at
312
        // byte offset start_bit / 8 does not exceed the allocation.
313
        assert!(
61✔
314
            (index * self.bit_width) / 8 + size_of::<B::Word>()
122✔
315
                <= std::mem::size_of_val(self.bits.as_ref())
122✔
316
        );
317
        unsafe { self.get_unaligned_unchecked(index) }
180✔
318
    }
319

320
    /// Like [`SliceByValue::get_value_unchecked`], but using unaligned reads.
321
    ///
322
    /// # Safety
323
    ///
324
    /// This method can be used only for bit width smaller than or equal to
325
    /// `W::BITS as usize - 8 + 2` or equal to `W::BITS as usize - 8 + 4` or `W::BITS`. Moreover,
326
    /// an additional padding word must be present at the end of the vector,
327
    /// and `index` must be within bounds.
328
    ///
329
    /// # Panics
330
    ///
331
    /// This method will panic in debug mode if the safety constraints are not
332
    /// respected.
333
    pub unsafe fn get_unaligned_unchecked(&self, index: usize) -> B::Word {
44,504,356✔
334
        debug_assert_unaligned!(B::Word, self.bit_width);
44,504,370✔
335
        let base_ptr = self.bits.as_ref().as_ptr() as *const u8;
89,008,704✔
336
        let start_bit = index * self.bit_width;
89,008,704✔
337
        // Check that the read_unaligned of size_of::<W>() bytes starting at
338
        // byte offset start_bit / 8 does not exceed the allocation.
339
        debug_assert!(
44,504,352✔
340
            start_bit / 8 + size_of::<B::Word>() <= std::mem::size_of_val(self.bits.as_ref())
133,513,056✔
341
        );
342
        let ptr = unsafe { base_ptr.add(start_bit / 8) } as *const B::Word;
178,017,404✔
343
        let word = unsafe { core::ptr::read_unaligned(ptr) };
133,513,053✔
344
        (word >> (start_bit % 8)) & self.mask
44,504,351✔
345
    }
346

347
    /// Returns the backend of the vector as a slice of words.
348
    pub fn as_slice(&self) -> &[B::Word] {
169✔
349
        self.bits.as_ref()
169✔
350
    }
351
}
352
/// An iterator over non-overlapping chunks of a bit-field vector, starting at
353
/// the beginning of the vector.
354
///
355
/// When the vector len is not evenly divided by the chunk size, the last chunk
356
/// of the iteration will be shorter.
357
///
358
/// This struct is created by the
359
/// [`try_chunks_mut`](crate::bits::bit_field_vec::BitFieldVec#impl-BitFieldSliceMut-for-BitFieldVec<B>)
360
/// method.
361
pub struct ChunksMut<'a, W: Word> {
362
    remaining: usize,
363
    bit_width: usize,
364
    chunk_size: usize,
365
    iter: std::slice::ChunksMut<'a, W>,
366
}
367

368
impl<'a, W: Word> Iterator for ChunksMut<'a, W> {
369
    type Item = BitFieldVec<&'a mut [W]>;
370

371
    #[inline]
372
    fn next(&mut self) -> Option<Self::Item> {
2,853✔
373
        self.iter.next().map(|chunk| {
11,411✔
374
            let size = Ord::min(self.chunk_size, self.remaining);
11,408✔
375
            let next = unsafe { BitFieldVec::from_raw_parts(chunk, self.bit_width, size) };
14,260✔
376
            self.remaining -= size;
2,852✔
377
            next
2,852✔
378
        })
379
    }
380
}
381

382
impl<'a, W: Word> ExactSizeIterator for ChunksMut<'a, W> where
383
    std::slice::ChunksMut<'a, W>: ExactSizeIterator
384
{
385
}
386

387
impl<'a, W: Word> FusedIterator for ChunksMut<'a, W> where
388
    std::slice::ChunksMut<'a, W>: FusedIterator
389
{
390
}
391

392
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVec<B> {}
393

394
impl<W: Word> BitFieldVec<Vec<W>> {
395
    /// Creates a new zero-initialized vector of given bit width and length.
396
    pub fn new(bit_width: usize, len: usize) -> Self {
1,003✔
397
        // We need at least one word to handle the case of bit width zero.
398
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS as usize));
5,015✔
399
        Self {
400
            bits: vec![W::ZERO; n_of_words],
3,009✔
401
            bit_width,
402
            mask: mask(bit_width),
2,006✔
403
            len,
404
        }
405
    }
406

407
    /// Creates an empty vector that doesn't need to reallocate for up to
408
    /// `capacity` elements.
409
    pub fn with_capacity(bit_width: usize, capacity: usize) -> Self {
1,166✔
410
        // We need at least one word to handle the case of bit width zero.
411
        let n_of_words = Ord::max(1, (capacity * bit_width).div_ceil(W::BITS as usize));
5,830✔
412
        Self {
413
            bits: Vec::with_capacity(n_of_words),
3,498✔
414
            bit_width,
415
            mask: mask(bit_width),
1,166✔
416
            len: 0,
417
        }
418
    }
419

420
    /// Sets the length.
421
    ///
422
    /// # Safety
423
    ///
424
    /// `len * bit_width` must be at most `self.bits.len() * W::BITS as usize`. Note that
425
    /// setting the length might result in reading uninitialized data.
426
    pub unsafe fn set_len(&mut self, len: usize) {
1✔
427
        debug_assert!(len * self.bit_width <= self.bits.len() * W::BITS as usize);
4✔
428
        self.len = len;
1✔
429
    }
430

431
    /// Sets len to 0.
432
    pub fn clear(&mut self) {
1✔
433
        self.len = 0;
1✔
434
    }
435

436
    /// Returns the bit width of the values inside the vector.
437
    pub const fn bit_width(&self) -> usize {
439✔
438
        debug_assert!(self.bit_width <= W::BITS as usize);
878✔
439
        self.bit_width
439✔
440
    }
441

442
    /// Creates a new vector by copying a slice; the bit width will be the minimum
443
    /// width sufficient to hold all values in the slice.
444
    ///
445
    /// Returns an error if the bit width of the values in `slice` is larger than
446
    /// `W::BITS`.
447
    pub fn from_slice<S: BitFieldSlice<Value: Word + PrimitiveNumberAs<W>>>(
3✔
448
        slice: &S,
449
    ) -> Result<Self> {
450
        let mut max_len: usize = 0;
9✔
451
        for i in 0..slice.len() {
6,003✔
452
            max_len = Ord::max(max_len, unsafe {
9,000✔
453
                slice.get_value_unchecked(i).bit_len() as usize
9,000✔
454
            });
455
        }
456

457
        if max_len > W::BITS as usize {
3✔
458
            bail!(
1✔
459
                "Cannot convert a slice of bit width {} into a slice with W = {}",
×
460
                max_len,
×
461
                std::any::type_name::<W>()
1✔
462
            );
463
        }
464
        let mut result = Self::new(max_len, slice.len());
10✔
465
        for i in 0..slice.len() {
4,002✔
466
            unsafe { result.set_value_unchecked(i, slice.get_value_unchecked(i).as_to::<W>()) };
12,000✔
467
        }
468

469
        Ok(result)
2✔
470
    }
471

472
    /// Adds a value at the end of the vector.
473
    pub fn push(&mut self, value: W) {
2,419✔
474
        panic_if_value!(value, self.mask, self.bit_width);
7,257✔
475
        if (self.len + 1) * self.bit_width > self.bits.len() * W::BITS as usize {
7,658✔
476
            self.bits.push(W::ZERO);
401✔
477
        }
478
        unsafe {
479
            self.set_value_unchecked(self.len, value);
7,257✔
480
        }
481
        self.len += 1;
2,419✔
482
    }
483

484
    /// Truncates or extends with `value` the vector.
485
    pub fn resize(&mut self, new_len: usize, value: W) {
1,162✔
486
        panic_if_value!(value, self.mask, self.bit_width);
3,486✔
487
        if new_len > self.len {
1,162✔
488
            if new_len * self.bit_width > self.bits.len() * W::BITS as usize {
4,644✔
489
                self.bits.resize(
3,483✔
490
                    (new_len * self.bit_width).div_ceil(W::BITS as usize),
2,322✔
491
                    W::ZERO,
1,161✔
492
                );
493
            }
494
            for i in self.len..new_len {
807,721✔
495
                unsafe {
496
                    self.set_value_unchecked(i, value);
2,419,680✔
497
                }
498
            }
499
        }
500
        self.len = new_len;
1,162✔
501
    }
502

503
    /// Removes and returns a value from the end of the vector.
504
    ///
505
    /// Returns None if the [`BitFieldVec`] is empty.
506
    pub fn pop(&mut self) -> Option<W> {
1,002✔
507
        if self.len == 0 {
1,002✔
508
            return None;
2✔
509
        }
510
        let value = self.index_value(self.len - 1);
4,000✔
511
        self.len -= 1;
1,000✔
512
        Some(value)
1,000✔
513
    }
514

515
    /// Ensures a padding word is present at the end and converts the
516
    /// backend to `Box<[W]>`.
517
    ///
518
    /// The extra word ensures that unaligned reads of `size_of::<W>()`
519
    /// bytes starting at any byte offset within the data never exceed the
520
    /// allocation. If the allocation already has more words than needed
521
    /// for the data, no word is added.
522
    pub fn into_padded(mut self) -> BitFieldVec<Box<[W]>> {
×
523
        let needed = (self.len * self.bit_width).div_ceil(W::BITS as usize);
×
524
        if self.bits.len() <= needed {
×
525
            self.bits.push(W::ZERO);
×
526
        }
527
        unsafe {
528
            BitFieldVec::from_raw_parts(self.bits.into_boxed_slice(), self.bit_width, self.len)
×
529
        }
530
    }
531
}
532

533
impl<W: Word> BitFieldVec<Box<[W]>> {
534
    /// Creates a new zero-initialized vector of given bit width and length,
535
    /// with a padding word at the end for safe unaligned reads.
536
    ///
537
    /// This constructor is useful for structures implementing
538
    /// [`TryIntoUnaligned`](crate::traits::TryIntoUnaligned) that want to avoid
539
    /// reallocations.
540
    pub fn new_padded(bit_width: usize, len: usize) -> Self {
2,855✔
541
        let n_of_words = (len * bit_width).div_ceil(W::BITS as usize);
11,420✔
542
        Self {
543
            bits: vec![W::ZERO; n_of_words + 1].into_boxed_slice(),
11,420✔
544
            bit_width,
545
            mask: mask(bit_width),
5,710✔
546
            len,
547
        }
548
    }
549
}
550

551
impl<B: Backend<Word: Word> + AsRef<[B::Word]> + AsMut<[B::Word]>> BitFieldVec<B> {}
552

553
impl<B: Backend<Word: Word>> BitWidth for BitFieldVec<B> {
554
    #[inline(always)]
555
    fn bit_width(&self) -> usize {
14,451✔
556
        debug_assert!(self.bit_width <= B::Word::BITS as usize);
28,902✔
557
        self.bit_width
14,451✔
558
    }
559
}
560

561
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> SliceByValue for BitFieldVec<B> {
562
    type Value = B::Word;
563
    #[inline(always)]
564
    fn len(&self) -> usize {
239,612✔
565
        self.len
239,612✔
566
    }
567

568
    unsafe fn get_value_unchecked(&self, index: usize) -> B::Word {
30,548,141✔
569
        let bits = B::Word::BITS as usize;
61,096,282✔
570
        let pos = index * self.bit_width;
61,096,282✔
571
        let word_index = pos / bits;
61,096,282✔
572
        let bit_index = pos % bits;
61,096,282✔
573
        let data = self.bits.as_ref();
91,644,423✔
574

575
        unsafe {
576
            if bit_index + self.bit_width <= bits {
30,548,141✔
577
                (*data.get_unchecked(word_index) >> bit_index) & self.mask
45,798,266✔
578
            } else {
579
                ((*data.get_unchecked(word_index) >> bit_index)
22,947,024✔
580
                    | (*data.get_unchecked(word_index + 1) << (bits - bit_index)))
22,947,024✔
581
                    & self.mask
7,649,008✔
582
            }
583
        }
584
    }
585
}
586

587
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldSlice for BitFieldVec<B> {
588
    fn as_slice(&self) -> &[Self::Value] {
×
589
        self.bits.as_ref()
×
590
    }
591
}
592

593
impl<B: Backend<Word: Word> + AsRef<[B::Word]> + AsMut<[B::Word]>> BitFieldSliceMut
594
    for BitFieldVec<B>
595
{
596
    fn reset(&mut self) {
1✔
597
        let bit_len = self.len * self.bit_width;
2✔
598
        let full_words = bit_len / B::Word::BITS as usize;
2✔
599
        let residual = bit_len % B::Word::BITS as usize;
2✔
600
        let bits = self.bits.as_mut();
3✔
601
        bits[..full_words]
1✔
602
            .iter_mut()
603
            .for_each(|x| *x = B::Word::ZERO);
1✔
604
        if residual != 0 {
2✔
605
            bits[full_words] &= B::Word::MAX << residual;
1✔
606
        }
607
    }
608

609
    #[cfg(feature = "rayon")]
610
    fn par_reset(&mut self) {
1✔
611
        let bit_len = self.len * self.bit_width;
2✔
612
        let full_words = bit_len / B::Word::BITS as usize;
2✔
613
        let residual = bit_len % B::Word::BITS as usize;
2✔
614
        let bits = self.bits.as_mut();
3✔
615
        bits[..full_words]
1✔
616
            .par_iter_mut()
617
            .with_min_len(crate::RAYON_MIN_LEN)
1✔
618
            .for_each(|x| *x = B::Word::ZERO);
1✔
619
        if residual != 0 {
2✔
620
            bits[full_words] &= B::Word::MAX << residual;
1✔
621
        }
622
    }
623

624
    fn as_mut_slice(&mut self) -> &mut [B::Word] {
1✔
625
        self.bits.as_mut()
1✔
626
    }
627
}
628

629
/// Error type returned when [`try_chunks_mut`](SliceByValueMut::try_chunks_mut)
630
/// does not find sufficient alignment.
631
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
632
pub struct ChunksMutError<W: Word> {
633
    bit_width: usize,
634
    chunk_size: usize,
635
    _marker: core::marker::PhantomData<W>,
636
}
637

638
impl<W: Word> core::fmt::Display for ChunksMutError<W> {
639
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
×
640
        write!(
×
641
            f,
×
642
            "try_chunks_mut needs that the bit width ({}) times the chunk size ({}) is a multiple of W::BITS ({}) to return more than one chunk",
643
            self.bit_width,
×
644
            self.chunk_size,
×
645
            W::BITS as usize
×
646
        )
647
    }
648
}
649

650
impl<W: Word> std::error::Error for ChunksMutError<W> {}
651

652
impl<B: Backend<Word: Word> + AsRef<[B::Word]> + AsMut<[B::Word]>> SliceByValueMut
653
    for BitFieldVec<B>
654
{
655
    #[inline(always)]
656
    fn set_value(&mut self, index: usize, value: B::Word) {
979,133✔
657
        panic_if_out_of_bounds!(index, self.len);
2,937,399✔
658
        panic_if_value!(value, self.mask, self.bit_width);
2,937,396✔
659
        unsafe {
660
            self.set_value_unchecked(index, value);
2,937,393✔
661
        }
662
    }
663

664
    unsafe fn set_value_unchecked(&mut self, index: usize, value: B::Word) {
16,644,203✔
665
        let bits = B::Word::BITS as usize;
33,288,406✔
666
        let pos = index * self.bit_width;
33,288,406✔
667
        let word_index = pos / bits;
33,288,406✔
668
        let bit_index = pos % bits;
33,288,406✔
669
        let data = self.bits.as_mut();
49,932,609✔
670

671
        unsafe {
672
            if bit_index + self.bit_width <= bits {
29,292,124✔
673
                let mut word = *data.get_unchecked_mut(word_index);
50,591,684✔
674
                word &= !(self.mask << bit_index);
37,943,763✔
675
                word |= value << bit_index;
37,943,763✔
676
                *data.get_unchecked_mut(word_index) = word;
37,943,763✔
677
            } else {
678
                let mut word = *data.get_unchecked_mut(word_index);
15,985,128✔
679
                word &= (B::Word::ONE << bit_index) - B::Word::ONE;
7,992,564✔
680
                word |= value << bit_index;
11,988,846✔
681
                *data.get_unchecked_mut(word_index) = word;
15,985,128✔
682

683
                let mut word = *data.get_unchecked_mut(word_index + 1);
15,985,128✔
684
                word &= !(self.mask >> (bits - bit_index));
15,985,128✔
685
                word |= value >> (bits - bit_index);
15,985,128✔
686
                *data.get_unchecked_mut(word_index + 1) = word;
11,988,846✔
687
            }
688
        }
689
    }
690

691
    fn replace_value(&mut self, index: usize, value: B::Word) -> B::Word {
×
692
        panic_if_out_of_bounds!(index, self.len);
×
693
        panic_if_value!(value, self.mask, self.bit_width);
×
694
        unsafe { self.replace_value_unchecked(index, value) }
×
695
    }
696

697
    unsafe fn replace_value_unchecked(&mut self, index: usize, value: B::Word) -> B::Word {
×
698
        let bits = B::Word::BITS as usize;
×
699
        let pos = index * self.bit_width;
×
700
        let word_index = pos / bits;
×
701
        let bit_index = pos % bits;
×
702
        let data = self.bits.as_mut();
×
703

704
        unsafe {
705
            if bit_index + self.bit_width <= bits {
×
706
                let mut word = *data.get_unchecked_mut(word_index);
×
707
                let old_value = (word >> bit_index) & self.mask;
×
708
                word &= !(self.mask << bit_index);
×
709
                word |= value << bit_index;
×
710
                *data.get_unchecked_mut(word_index) = word;
×
711
                old_value
×
712
            } else {
713
                let mut word = *data.get_unchecked_mut(word_index);
×
714
                let mut old_value = word >> bit_index;
×
715
                word &= (B::Word::ONE << bit_index) - B::Word::ONE;
×
716
                word |= value << bit_index;
×
717
                *data.get_unchecked_mut(word_index) = word;
×
718

719
                let mut word = *data.get_unchecked_mut(word_index + 1);
×
720
                old_value |= word << (bits - bit_index);
×
721
                word &= !(self.mask >> (bits - bit_index));
×
722
                word |= value >> (bits - bit_index);
×
723
                *data.get_unchecked_mut(word_index + 1) = word;
×
724
                old_value & self.mask
×
725
            }
726
        }
727
    }
728

729
    /// This implementation performs the copy word by word, which is
730
    /// significantly faster than the default implementation.
731
    fn copy(&self, from: usize, dst: &mut Self, to: usize, len: usize) {
576✔
732
        assert_eq!(
576✔
733
            self.bit_width, dst.bit_width,
×
734
            "Bit widths must be equal (self: {}, dest: {})",
×
735
            self.bit_width, dst.bit_width
×
736
        );
737
        // Reduce len to the elements available in both vectors
738
        let len = Ord::min(Ord::min(len, dst.len - to), self.len - from);
3,456✔
739
        if len == 0 {
576✔
740
            return;
×
741
        }
742
        let bit_width = Ord::min(self.bit_width, dst.bit_width);
2,304✔
743
        let bit_len = len * bit_width;
1,152✔
744
        let src_pos = from * self.bit_width;
1,152✔
745
        let dst_pos = to * dst.bit_width;
1,152✔
746
        let bits = B::Word::BITS as usize;
1,152✔
747
        let src_bit = src_pos % bits;
1,152✔
748
        let dst_bit = dst_pos % bits;
1,152✔
749
        let src_first_word = src_pos / bits;
1,152✔
750
        let dst_first_word = dst_pos / bits;
1,152✔
751
        let src_last_word = (src_pos + bit_len - 1) / bits;
1,152✔
752
        let dst_last_word = (dst_pos + bit_len - 1) / bits;
1,152✔
753
        let source = self.bits.as_ref();
1,728✔
754
        let dest = dst.bits.as_mut();
1,728✔
755

756
        if src_first_word == src_last_word && dst_first_word == dst_last_word {
768✔
757
            let mask = B::Word::MAX >> (bits - bit_len);
192✔
758
            let word = (source[src_first_word] >> src_bit) & mask;
192✔
759
            dest[dst_first_word] &= !(mask << dst_bit);
192✔
760
            dest[dst_first_word] |= word << dst_bit;
128✔
761
        } else if src_first_word == src_last_word {
640✔
762
            // dst_first_word != dst_last_word
763
            let mask = B::Word::MAX >> (bits - bit_len);
192✔
764
            let word = (source[src_first_word] >> src_bit) & mask;
192✔
765
            dest[dst_first_word] &= !(mask << dst_bit);
192✔
766
            dest[dst_first_word] |= (word & mask) << dst_bit;
192✔
767
            dest[dst_last_word] &= !(mask >> (bits - dst_bit));
256✔
768
            dest[dst_last_word] |= (word & mask) >> (bits - dst_bit);
192✔
769
        } else if dst_first_word == dst_last_word {
576✔
770
            // src_first_word != src_last_word
771
            let mask = B::Word::MAX >> (bits - bit_len);
192✔
772
            let word = ((source[src_first_word] >> src_bit)
256✔
773
                | (source[src_last_word] << (bits - src_bit)))
192✔
774
                & mask;
128✔
775
            dest[dst_first_word] &= !(mask << dst_bit);
192✔
776
            dest[dst_first_word] |= word << dst_bit;
128✔
777
        } else if src_bit == dst_bit {
576✔
778
            // src_first_word != src_last_word && dst_first_word != dst_last_word
779
            let mask = B::Word::MAX << dst_bit;
384✔
780
            dest[dst_first_word] &= !mask;
256✔
781
            dest[dst_first_word] |= source[src_first_word] & mask;
384✔
782

783
            dest[(1 + dst_first_word)..dst_last_word]
384✔
784
                .copy_from_slice(&source[(1 + src_first_word)..src_last_word]);
512✔
785

786
            let residual = bit_len - (bits - src_bit) - (dst_last_word - dst_first_word - 1) * bits;
640✔
787
            let mask = B::Word::MAX >> (bits - residual);
384✔
788
            dest[dst_last_word] &= !mask;
256✔
789
            dest[dst_last_word] |= source[src_last_word] & mask;
256✔
790
        } else if src_bit < dst_bit {
384✔
791
            // src_first_word != src_last_word && dst_first_word !=
792
            // dst_last_word
793
            let dst_mask = B::Word::MAX << dst_bit;
256✔
794
            let src_mask = B::Word::MAX << src_bit;
256✔
795
            let shift = dst_bit - src_bit;
256✔
796
            dest[dst_first_word] &= !dst_mask;
128✔
797
            dest[dst_first_word] |= (source[src_first_word] & src_mask) << shift;
256✔
798

799
            let mut word = source[src_first_word] >> (bits - shift);
384✔
800
            for i in 1..dst_last_word - dst_first_word {
384✔
801
                dest[dst_first_word + i] = word | (source[src_first_word + i] << shift);
512✔
802
                word = source[src_first_word + i] >> (bits - shift);
256✔
803
            }
804
            let residual = bit_len - (bits - dst_bit) - (dst_last_word - dst_first_word - 1) * bits;
512✔
805
            let mask = B::Word::MAX >> (bits - residual);
256✔
806
            dest[dst_last_word] &= !mask;
128✔
807
            dest[dst_last_word] |= (word | (source[src_last_word] << shift)) & mask;
384✔
808
        } else {
809
            // src_first_word != src_last_word && dst_first_word !=
810
            // dst_last_word && src_bit > dst_bit
811
            let dst_mask = B::Word::MAX << dst_bit;
256✔
812
            let src_mask = B::Word::MAX << src_bit;
256✔
813
            let shift = src_bit - dst_bit;
256✔
814
            dest[dst_first_word] &= !dst_mask;
128✔
815
            dest[dst_first_word] |= (source[src_first_word] & src_mask) >> shift;
256✔
816
            dest[dst_first_word] |= source[src_first_word + 1] << (bits - shift);
384✔
817

818
            let mut word = source[src_first_word + 1] >> shift;
256✔
819

820
            for i in 1..dst_last_word - dst_first_word {
1,152✔
821
                word |= source[src_first_word + i + 1] << (bits - shift);
2,048✔
822
                dest[dst_first_word + i] = word;
1,024✔
823
                word = source[src_first_word + i + 1] >> shift;
512✔
824
            }
825

826
            word |= source[src_last_word] << (bits - shift);
384✔
827

828
            let residual = bit_len - (bits - dst_bit) - (dst_last_word - dst_first_word - 1) * bits;
512✔
829
            let mask = B::Word::MAX >> (bits - residual);
256✔
830
            dest[dst_last_word] &= !mask;
128✔
831
            dest[dst_last_word] |= word & mask;
256✔
832
        }
833
    }
834

835
    /// This implementation keeps a buffer of `W::BITS` bits for reading and
836
    /// writing, obtaining a significant speedup with respect to the default
837
    /// implementation.
838
    #[inline]
839
    unsafe fn apply_in_place_unchecked<F>(&mut self, mut f: F)
×
840
    where
841
        F: FnMut(Self::Value) -> Self::Value,
842
    {
843
        if self.is_empty() {
×
844
            return;
×
845
        }
846
        let bit_width = self.bit_width();
×
847
        if bit_width == 0 {
×
848
            return;
×
849
        }
850
        let mask = self.mask;
×
851
        let number_of_words: usize = self.bits.as_ref().len();
×
852
        let last_word_idx = number_of_words.saturating_sub(1);
×
853

854
        let bits = B::Word::BITS as usize;
×
855
        let mut write_buffer: B::Word = B::Word::ZERO;
×
856
        let mut read_buffer: B::Word = *unsafe { self.bits.as_ref().get_unchecked(0) };
×
857

858
        // specialized case because it's much faster
859
        if bit_width.is_power_of_two() {
×
860
            let mut bits_in_buffer = 0;
×
861

862
            let mut buffer_limit = (self.len() * bit_width) % bits;
×
863
            if buffer_limit == 0 {
×
864
                buffer_limit = bits;
×
865
            }
866

867
            for read_idx in 1..number_of_words {
×
868
                // pre-load the next word so it loads while we parse the buffer
869
                let next_word: B::Word = *unsafe { self.bits.as_ref().get_unchecked(read_idx) };
×
870

871
                // parse as much as we can from the buffer
872
                loop {
×
873
                    let next_bits_in_buffer = bits_in_buffer + bit_width;
×
874

875
                    if next_bits_in_buffer > bits {
×
876
                        break;
×
877
                    }
878

879
                    let value = read_buffer & mask;
×
880
                    // throw away the bits we just read
881
                    read_buffer >>= bit_width;
×
882
                    // apply user func
883
                    let new_value = f(value);
×
884
                    // put the new value in the write buffer
885
                    write_buffer |= new_value << bits_in_buffer;
×
886

887
                    bits_in_buffer = next_bits_in_buffer;
×
888
                }
889

890
                debug_assert_eq!(read_buffer, B::Word::ZERO);
×
891
                *unsafe { self.bits.as_mut().get_unchecked_mut(read_idx - 1) } = write_buffer;
×
892
                read_buffer = next_word;
×
893
                write_buffer = B::Word::ZERO;
×
894
                bits_in_buffer = 0;
×
895
            }
896

897
            // write the last word if we have some bits left
898
            while bits_in_buffer < buffer_limit {
×
899
                let value = read_buffer & mask;
×
900
                // throw away the bits we just read
901
                read_buffer >>= bit_width;
×
902
                // apply user func
903
                let new_value = f(value);
×
904
                // put the new value in the write buffer
905
                write_buffer |= new_value << bits_in_buffer;
×
906
                bits_in_buffer += bit_width;
×
907
            }
908

909
            *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
×
910
            return;
×
911
        }
912

913
        // The position inside the word. In most parametrization of the
914
        // vector, since the bit_width is not necessarily an integer
915
        // divisor of the word size, we need to keep track of the position
916
        // inside the word. As we scroll through the bits, due to the bits
917
        // remainder, we may need to operate on two words at the same time.
918
        let mut global_bit_index: usize = 0;
×
919

920
        // The bit-index boundaries of the current word.
921
        let mut lower_word_limit = 0;
×
922
        let mut upper_word_limit = bits;
×
923

924
        // We iterate across the words
925
        for word_number in 0..last_word_idx {
×
926
            // We iterate across the elements in the word.
927
            while global_bit_index + bit_width <= upper_word_limit {
×
928
                // We retrieve the value from the current word.
929
                let offset = global_bit_index - lower_word_limit;
×
930
                global_bit_index += bit_width;
×
931
                let element = self.mask & (read_buffer >> offset);
×
932

933
                // We apply the function to the element.
934
                let new_element = f(element);
×
935

936
                // We set the element in the new word.
937
                write_buffer |= new_element << offset;
×
938
            }
939

940
            // We retrieve the next word from the bitvec.
941
            let next_word = *unsafe { self.bits.as_ref().get_unchecked(word_number + 1) };
×
942

943
            let mut new_write_buffer = B::Word::ZERO;
×
944
            if upper_word_limit != global_bit_index {
×
945
                let remainder = upper_word_limit - global_bit_index;
×
946
                let offset = global_bit_index - lower_word_limit;
×
947
                // We compose the element from the remaining elements in the
948
                // current word and the elements in the next word.
949
                let element = ((read_buffer >> offset) | (next_word << remainder)) & self.mask;
×
950
                global_bit_index += bit_width;
×
951

952
                // We apply the function to the element.
953
                let new_element = f(element);
×
954

955
                write_buffer |= new_element << offset;
×
956

957
                new_write_buffer = new_element >> remainder;
×
958
            };
959

960
            read_buffer = next_word;
×
961

962
            *unsafe { self.bits.as_mut().get_unchecked_mut(word_number) } = write_buffer;
×
963

964
            write_buffer = new_write_buffer;
×
965
            lower_word_limit = upper_word_limit;
×
966
            upper_word_limit += bits;
×
967
        }
968

969
        let mut offset = global_bit_index - lower_word_limit;
×
970

971
        // We iterate across the elements in the word.
972
        while offset < self.len() * bit_width - global_bit_index {
×
973
            // We retrieve the value from the current word.
974
            let element = self.mask & (read_buffer >> offset);
×
975

976
            // We apply the function to the element.
977
            let new_element = f(element);
×
978

979
            // We set the element in the new word.
980
            write_buffer |= new_element << offset;
×
981
            offset += bit_width;
×
982
        }
983

984
        *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
×
985
    }
986

987
    type ChunksMut<'a>
988
        = ChunksMut<'a, B::Word>
989
    where
990
        Self: 'a;
991

992
    type ChunksMutError = ChunksMutError<B::Word>;
993

994
    /// # Errors
995
    ///
996
    /// This method will return an error if the chunk size multiplied by
997
    /// the [bit width](BitWidth::bit_width) is not a multiple of
998
    /// `W::BITS` and more than one chunk must be returned.
999
    fn try_chunks_mut(
2,850✔
1000
        &mut self,
1001
        chunk_size: usize,
1002
    ) -> Result<Self::ChunksMut<'_>, ChunksMutError<B::Word>> {
1003
        let len = self.len();
8,550✔
1004
        let bit_width = self.bit_width();
8,550✔
1005
        let bits = B::Word::BITS as usize;
5,700✔
1006
        if len <= chunk_size || (chunk_size * bit_width) % bits == 0 {
2,855✔
1007
            // chunks_mut panics with chunk_size 0, so use 1 when the
1008
            // product is zero (bit_width == 0); the iterator will yield
1009
            // empty slices anyway.
1010
            let words_per_chunk = Ord::max(1, (chunk_size * bit_width).div_ceil(bits));
14,240✔
1011
            Ok(ChunksMut {
2,848✔
1012
                remaining: len,
5,696✔
1013
                bit_width: self.bit_width,
5,696✔
1014
                chunk_size,
5,696✔
1015
                iter: self.bits.as_mut()[..(len * bit_width).div_ceil(bits)]
11,392✔
1016
                    .chunks_mut(words_per_chunk),
2,848✔
1017
            })
1018
        } else {
1019
            Err(ChunksMutError {
2✔
1020
                bit_width,
4✔
1021
                chunk_size,
2✔
1022
                _marker: core::marker::PhantomData,
2✔
1023
            })
1024
        }
1025
    }
1026
}
1027

1028
// Support for unchecked iterators
1029

1030
/// An [`UncheckedIterator`] over the values of a [`BitFieldVec`].
1031
#[derive(Debug, Clone, MemSize, MemDbg)]
1032
pub struct BitFieldVecUncheckedIter<'a, B: Backend<Word: Word>> {
1033
    vec: &'a BitFieldVec<B>,
1034
    word_index: usize,
1035
    window: B::Word,
1036
    fill: usize,
1037
}
1038

1039
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVecUncheckedIter<'a, B> {
1040
    fn new(vec: &'a BitFieldVec<B>, index: usize) -> Self {
697,513✔
1041
        if index > vec.len() {
1,395,026✔
1042
            panic!("Start index out of bounds: {} > {}", index, vec.len());
×
1043
        }
1044
        let bits = B::Word::BITS as usize;
1,395,026✔
1045
        let (fill, word_index);
×
1046
        let window = if index == vec.len() {
2,092,539✔
1047
            word_index = 0;
343✔
1048
            fill = 0;
343✔
1049
            B::Word::ZERO
343✔
1050
        } else {
1051
            let bit_offset = index * vec.bit_width;
1,394,340✔
1052
            let bit_index = bit_offset % bits;
1,394,340✔
1053

1054
            word_index = bit_offset / bits;
697,170✔
1055
            fill = bits - bit_index;
697,170✔
1056
            unsafe {
1057
                // SAFETY: index has been checked at the start and it is within bounds
1058
                *vec.bits.as_ref().get_unchecked(word_index) >> bit_index
1,394,340✔
1059
            }
1060
        };
1061
        Self {
1062
            vec,
1063
            word_index,
1064
            window,
1065
            fill,
1066
        }
1067
    }
1068
}
1069

1070
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> crate::traits::UncheckedIterator
1071
    for BitFieldVecUncheckedIter<'_, B>
1072
{
1073
    type Item = B::Word;
1074
    unsafe fn next_unchecked(&mut self) -> B::Word {
35,205,636✔
1075
        let bit_width = self.vec.bit_width;
70,411,272✔
1076

1077
        if self.fill >= bit_width {
35,205,636✔
1078
            self.fill -= bit_width;
19,717,108✔
1079
            let res = self.window & self.vec.mask;
39,434,216✔
1080
            self.window >>= bit_width;
19,717,108✔
1081
            return res;
19,717,108✔
1082
        }
1083

1084
        let res = self.window;
30,977,056✔
1085
        self.word_index += 1;
15,488,528✔
1086
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
30,977,056✔
1087
        let res = (res | (self.window << self.fill)) & self.vec.mask;
46,465,584✔
1088
        let used = bit_width - self.fill;
30,977,056✔
1089
        self.window >>= used;
15,488,528✔
1090
        self.fill = B::Word::BITS as usize - used; // not in a loop
15,488,528✔
1091
        res
15,488,528✔
1092
    }
1093
}
1094

1095
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoUncheckedIterator for &'a BitFieldVec<B> {
1096
    type Item = B::Word;
1097
    type IntoUncheckedIter = BitFieldVecUncheckedIter<'a, B>;
1098
    fn into_unchecked_iter_from(self, from: usize) -> Self::IntoUncheckedIter {
384,234✔
1099
        BitFieldVecUncheckedIter::new(self, from)
1,152,702✔
1100
    }
1101
}
1102

1103
/// An [`UncheckedIterator`] moving backwards over the values of a [`BitFieldVec`].
1104
#[derive(Debug, Clone, MemSize, MemDbg)]
1105
pub struct BitFieldVecUncheckedBackIter<'a, B: Backend<Word: Word>> {
1106
    vec: &'a BitFieldVec<B>,
1107
    word_index: usize,
1108
    window: B::Word,
1109
    fill: usize,
1110
}
1111

1112
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVecUncheckedBackIter<'a, B> {
1113
    fn new(vec: &'a BitFieldVec<B>, index: usize) -> Self {
353,267✔
1114
        if index > vec.len() {
706,534✔
1115
            panic!("Start index out of bounds: {} > {}", index, vec.len());
×
1116
        }
1117
        let bits = B::Word::BITS as usize;
706,534✔
1118
        let (word_index, fill);
×
1119

1120
        let window = if index == 0 {
706,534✔
1121
            word_index = 0;
3,144✔
1122
            fill = 0;
3,144✔
1123
            B::Word::ZERO
3,144✔
1124
        } else {
1125
            // We have to handle the case of zero bit width
1126
            let bit_offset = (index * vec.bit_width).saturating_sub(1);
1,050,369✔
1127
            let bit_index = bit_offset % bits;
700,246✔
1128

1129
            word_index = bit_offset / bits;
350,123✔
1130
            fill = bit_index + 1;
350,123✔
1131
            unsafe {
1132
                // SAFETY: index has been checked at the start and it is within bounds
1133
                *vec.bits.as_ref().get_unchecked(word_index) << (bits - fill)
1,050,369✔
1134
            }
1135
        };
1136
        Self {
1137
            vec,
1138
            word_index,
1139
            window,
1140
            fill,
1141
        }
1142
    }
1143
}
1144

1145
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> crate::traits::UncheckedIterator
1146
    for BitFieldVecUncheckedBackIter<'_, B>
1147
{
1148
    type Item = B::Word;
1149
    unsafe fn next_unchecked(&mut self) -> B::Word {
18,007,437✔
1150
        let bit_width = self.vec.bit_width;
36,014,874✔
1151

1152
        if self.fill >= bit_width {
18,007,437✔
1153
            self.fill -= bit_width;
10,583,239✔
1154
            self.window = self.window.rotate_left(bit_width as u32);
21,166,478✔
1155
            return self.window & self.vec.mask;
10,583,239✔
1156
        }
1157

1158
        let mut res = self.window.rotate_left(self.fill as u32);
29,696,792✔
1159
        self.word_index -= 1;
7,424,198✔
1160
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
14,848,396✔
1161
        let used = bit_width - self.fill;
14,848,396✔
1162
        res = ((res << used) | (self.window >> (B::Word::BITS as usize - used))) & self.vec.mask; // not in a loop
22,272,594✔
1163
        self.window <<= used;
7,424,198✔
1164
        self.fill = B::Word::BITS as usize - used;
7,424,198✔
1165
        res
7,424,198✔
1166
    }
1167
}
1168

1169
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoUncheckedBackIterator
1170
    for &'a BitFieldVec<B>
1171
{
1172
    type Item = B::Word;
1173
    type IntoUncheckedIterBack = BitFieldVecUncheckedBackIter<'a, B>;
1174

1175
    fn into_unchecked_iter_back(self) -> Self::IntoUncheckedIterBack {
×
1176
        BitFieldVecUncheckedBackIter::new(self, self.len())
×
1177
    }
1178

1179
    fn into_unchecked_iter_back_from(self, from: usize) -> Self::IntoUncheckedIterBack {
353,267✔
1180
        BitFieldVecUncheckedBackIter::new(self, from)
1,059,801✔
1181
    }
1182
}
1183

1184
/// An [`Iterator`] over the values of a [`BitFieldVec`].
1185
#[derive(Debug, Clone, MemSize, MemDbg)]
1186
pub struct BitFieldVecIter<'a, B: Backend<Word: Word>> {
1187
    unchecked: BitFieldVecUncheckedIter<'a, B>,
1188
    range: core::ops::Range<usize>,
1189
}
1190

1191
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVecIter<'a, B> {
1192
    fn new(vec: &'a BitFieldVec<B>, from: usize) -> Self {
313,279✔
1193
        let len = vec.len();
939,837✔
1194
        if from > len {
313,279✔
1195
            panic!("Start index out of bounds: {} > {}", from, len);
×
1196
        }
1197
        Self {
1198
            unchecked: BitFieldVecUncheckedIter::new(vec, from),
1,253,116✔
1199
            range: from..len,
313,279✔
1200
        }
1201
    }
1202
}
1203

1204
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> Iterator for BitFieldVecIter<'_, B> {
1205
    type Item = B::Word;
1206
    fn next(&mut self) -> Option<Self::Item> {
16,189,268✔
1207
        if self.range.is_empty() {
32,378,536✔
1208
            return None;
313,206✔
1209
        }
1210
        // SAFETY: index has just been checked
1211
        let res = unsafe { self.unchecked.next_unchecked() };
47,628,186✔
1212
        self.range.start += 1;
15,876,062✔
1213
        Some(res)
15,876,062✔
1214
    }
1215

1216
    #[inline(always)]
1217
    fn size_hint(&self) -> (usize, Option<usize>) {
65✔
1218
        (self.range.len(), Some(self.range.len()))
195✔
1219
    }
1220
}
1221

1222
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> ExactSizeIterator for BitFieldVecIter<'_, B> {
1223
    #[inline(always)]
1224
    fn len(&self) -> usize {
×
1225
        self.range.len()
×
1226
    }
1227
}
1228

1229
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> FusedIterator for BitFieldVecIter<'_, B> {}
1230

1231
/// This implements iteration from the end, but it's slower than the forward iteration
1232
/// as here we do a random access, while in the forward iterator we do a sequential access
1233
/// and we keep a buffer of `W::BITS` bits to speed up the iteration.
1234
///
1235
/// If needed we could also keep a buffer from the end, but the logic would be more complex
1236
/// and potentially slower.
1237
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> DoubleEndedIterator for BitFieldVecIter<'_, B> {
1238
    fn next_back(&mut self) -> Option<Self::Item> {
725✔
1239
        if self.range.is_empty() {
1,450✔
1240
            return None;
69✔
1241
        }
1242
        self.range.end -= 1;
656✔
1243
        // SAFETY: range.end was > range.start, so it is a valid index
1244
        let res = unsafe { self.unchecked.vec.get_value_unchecked(self.range.end) };
2,624✔
1245
        Some(res)
656✔
1246
    }
1247
}
1248

1249
/// Equality between bit-field vectors requires that the word is the same, the
1250
/// bit width is the same, and the content is the same.
1251
impl<B: Backend<Word: Word> + AsRef<[B::Word]>, C: Backend<Word = B::Word> + AsRef<[B::Word]>>
1252
    PartialEq<BitFieldVec<C>> for BitFieldVec<B>
1253
{
1254
    fn eq(&self, other: &BitFieldVec<C>) -> bool {
3,771✔
1255
        if self.bit_width() != other.bit_width() {
11,313✔
1256
            return false;
×
1257
        }
1258
        if self.len() != other.len() {
11,313✔
1259
            return false;
2✔
1260
        }
1261
        let bits = B::Word::BITS as usize;
7,538✔
1262
        let bit_len = self.len() * self.bit_width();
15,076✔
1263
        if self.bits.as_ref()[..bit_len / bits] != other.bits.as_ref()[..bit_len / bits] {
15,076✔
1264
            return false;
×
1265
        }
1266

1267
        let residual = bit_len % bits;
7,538✔
1268
        residual == 0
3,769✔
1269
            || (self.bits.as_ref()[bit_len / bits] ^ other.bits.as_ref()[bit_len / bits])
24,682✔
1270
                << (bits - residual)
3,526✔
1271
                == B::Word::ZERO
3,526✔
1272
    }
1273
}
1274

1275
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> Eq for BitFieldVec<B> {}
1276

1277
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> std::hash::Hash for BitFieldVec<B> {
1278
    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
×
1279
        self.bit_width().hash(state);
×
1280
        self.len().hash(state);
×
1281
        let bits = B::Word::BITS as usize;
×
1282
        let bit_len = self.len() * self.bit_width();
×
1283
        let full_words = bit_len / bits;
×
1284
        self.bits.as_ref()[..full_words].hash(state);
×
1285
        let residual = bit_len % bits;
×
1286
        if residual != 0 {
×
1287
            // Mask off the padding bits before hashing the last partial word.
1288
            (self.bits.as_ref()[full_words] << (bits - residual)).hash(state);
×
1289
        }
1290
    }
1291
}
1292

1293
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoIterator for &'a BitFieldVec<B> {
1294
    type Item = B::Word;
1295
    type IntoIter = BitFieldVecIter<'a, B>;
1296

1297
    fn into_iter(self) -> Self::IntoIter {
71✔
1298
        BitFieldVecIter::new(self, 0)
142✔
1299
    }
1300
}
1301

1302
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoIteratorFrom for &'a BitFieldVec<B> {
1303
    type IntoIterFrom = BitFieldVecIter<'a, B>;
1304

1305
    fn into_iter_from(self, from: usize) -> Self::IntoIterFrom {
×
1306
        BitFieldVecIter::new(self, from)
×
1307
    }
1308
}
1309

1310
impl<W: Word> core::iter::Extend<W> for BitFieldVec<Vec<W>> {
1311
    fn extend<T: IntoIterator<Item = W>>(&mut self, iter: T) {
1✔
1312
        for value in iter {
17✔
1313
            self.push(value);
16✔
1314
        }
1315
    }
1316
}
1317

1318
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldVec<B> {
1319
    pub fn iter_from(&self, from: usize) -> BitFieldVecIter<'_, B> {
313,208✔
1320
        BitFieldVecIter::new(self, from)
939,624✔
1321
    }
1322

1323
    pub fn iter(&self) -> BitFieldVecIter<'_, B> {
1,202✔
1324
        self.iter_from(0)
2,404✔
1325
    }
1326
}
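// A minimal sketch (not part of the original file): `Extend` pushes the values
// of an iterator one by one, and `iter_from` starts a sequential scan at a
// given position.
fn _sketch_extend_and_iter_from() {
    let mut v = BitFieldVec::<Vec<usize>>::with_capacity(4, 8);
    v.extend(0..8_usize);
    assert_eq!(v.iter().count(), 8);
    assert_eq!(v.iter_from(5).collect::<Vec<_>>(), vec![5, 6, 7]);
}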
1327
/// A partially thread-safe vector of bit fields of fixed width (AKA "compact arrays",
1328
/// "bit array", etc.)
1329
///
1330
/// This implementation provides some concurrency guarantees, albeit not
1331
/// full-fledged thread safety: more precisely, we can guarantee thread-safety
1332
/// if the bit width is a power of two; otherwise, concurrent writes to values
1333
/// that cross word boundaries may be torn, with each thread succeeding in
1334
/// writing only part of the value. If the user can guarantee that no two threads
1335
/// ever write to the same boundary-crossing value, then no race condition can
1336
/// happen.
1337
///
1338
/// See the [module documentation](crate::bits) for more details.
1339
#[derive(Debug, Clone, Hash, MemSize, MemDbg, Delegate)]
1340
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
1341
#[cfg_attr(
1342
    feature = "epserde",
1343
    epserde(bound(
1344
        deser = "for<'a> <B as epserde::deser::DeserInner>::DeserType<'a>: Backend<Word = B::Word>"
1345
    ))
1346
)]
1347
#[delegate(crate::traits::Backend, target = "bits")]
1348
pub struct AtomicBitFieldVec<
1349
    B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>> = Vec<Atomic<usize>>,
1350
> {
1351
    /// The underlying storage.
1352
    bits: B,
1353
    /// The bit width of the values stored in the vector.
1354
    bit_width: usize,
1355
    /// A mask with its lowest `bit_width` bits set to one.
1356
    mask: <B::Word as PrimitiveAtomic>::Value,
1357
    /// The length of the vector.
1358
    len: usize,
1359
}
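// A minimal usage sketch (not part of the original file; `std::thread::scope` is
// assumed to be available and the atomic backend is assumed to make the vector
// shareable across threads): with a power-of-two bit width, such as 8, every value
// fits in a single word, so concurrent `set_atomic` calls cannot tear each other's
// writes.
fn _sketch_concurrent_fill() {
    let v = AtomicBitFieldVec::<Vec<Atomic<usize>>>::new(8, 1024);
    std::thread::scope(|s| {
        for t in 0..4_usize {
            let v = &v;
            s.spawn(move || {
                // Each thread writes a disjoint set of indices, but even
                // overlapping writes would be safe at this bit width.
                for i in (t..1024).step_by(4) {
                    v.set_atomic(i, i % 256, Ordering::Relaxed);
                }
            });
        }
    });
}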
1360

1361
impl<B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>>> AtomicBitFieldVec<B> {
1362
    /// # Safety
1363
    /// `len` * `bit_width` must be between 0 (included) and the number of
1364
    /// bits in `bits` (included).
1365
    #[inline(always)]
1366
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
184✔
1367
        Self {
1368
            bits,
1369
            bit_width,
1370
            mask: mask(bit_width),
368✔
1371
            len,
1372
        }
1373
    }
1374

1375
    /// Returns the backend, the bit width, and the length, consuming this
1376
    /// vector.
1377
    #[inline(always)]
1378
    pub fn into_raw_parts(self) -> (B, usize, usize) {
184✔
1379
        (self.bits, self.bit_width, self.len)
368✔
1380
    }
1381

1382
    /// Returns the mask used to extract values from the vector.
1383
    /// The mask has its lowest `bit_width` bits set to one.
1384
    pub const fn mask(&self) -> <B::Word as PrimitiveAtomic>::Value {
184✔
1385
        self.mask
184✔
1386
    }
1387
}
1388

1389
impl<B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>> + AsRef<[B::Word]>>
1390
    AtomicBitFieldVec<B>
1391
{
1392
    /// Returns the backend of the `AtomicBitFieldVec` as a slice of atomic words.
1393
    pub fn as_slice(&self) -> &[B::Word] {
4✔
1394
        self.bits.as_ref()
4✔
1395
    }
1396
}
1397

1398
impl<A: PrimitiveAtomicUnsigned<Value: Word>> AtomicBitFieldVec<Vec<A>> {
1399
    pub fn new(bit_width: usize, len: usize) -> AtomicBitFieldVec<Vec<A>> {
195✔
1400
        // we need at least one word to avoid branches in the gets
1401
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(A::Value::BITS as usize));
975✔
1402
        AtomicBitFieldVec {
1403
            bits: (0..n_of_words).map(|_| A::new(A::Value::ZERO)).collect(),
9,648✔
1404
            bit_width,
1405
            mask: mask(bit_width),
390✔
1406
            len,
1407
        }
1408
    }
1409
}
1410

1411
impl<B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>>> AtomicBitWidth
1412
    for AtomicBitFieldVec<B>
1413
{
1414
    #[inline(always)]
1415
    fn atomic_bit_width(&self) -> usize {
184✔
1416
        debug_assert!(self.bit_width <= <B::Word as PrimitiveAtomic>::Value::BITS as usize);
368✔
1417
        self.bit_width
184✔
1418
    }
1419
}
1420

1421
impl<B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>> + AsRef<[B::Word]>> SliceByValue
1422
    for AtomicBitFieldVec<B>
1423
{
1424
    type Value = <B::Word as PrimitiveAtomic>::Value;
1425

1426
    #[inline(always)]
1427
    fn len(&self) -> usize {
×
1428
        self.len
×
1429
    }
1430

1431
    unsafe fn get_value_unchecked(&self, index: usize) -> Self::Value {
×
1432
        unsafe { self.get_atomic_unchecked(index, Ordering::Relaxed) }
×
1433
    }
1434
}
1435

1436
impl<B: Backend<Word: PrimitiveAtomicUnsigned<Value: Word>> + AsRef<[B::Word]>>
1437
    AtomicBitFieldSlice<B::Word> for AtomicBitFieldVec<B>
1438
{
1439
    #[inline]
1440
    fn len(&self) -> usize {
404,824✔
1441
        self.len
404,824✔
1442
    }
1443

1444
    #[inline]
1445
    unsafe fn get_atomic_unchecked(
404,825✔
1446
        &self,
1447
        index: usize,
1448
        order: Ordering,
1449
    ) -> <B::Word as PrimitiveAtomic>::Value {
1450
        let wbits = <B::Word as PrimitiveAtomic>::Value::BITS as usize;
809,650✔
1451
        let pos = index * self.bit_width;
809,650✔
1452
        let word_index = pos / wbits;
809,650✔
1453
        let bit_index = pos % wbits;
809,650✔
1454
        let data = self.bits.as_ref();
1,214,475✔
1455

1456
        unsafe {
1457
            if bit_index + self.bit_width <= wbits {
404,825✔
1458
                (data.get_unchecked(word_index).load(order) >> bit_index) & self.mask
939,176✔
1459
            } else {
1460
                ((data.get_unchecked(word_index).load(order) >> bit_index)
850,155✔
1461
                    | (data.get_unchecked(word_index + 1).load(order) << (wbits - bit_index)))
850,155✔
1462
                    & self.mask
170,031✔
1463
            }
1464
        }
1465
    }
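    // A worked example of the index arithmetic above (not part of the original
    // file), assuming 64-bit words and a bit width of 17: element 5 starts at
    // bit 5 * 17 = 85, that is, at bit 21 of word 1, and since 21 + 17 = 38 <= 64
    // a single load suffices; element 3 starts at bit 51 of word 0, and since
    // 51 + 17 = 68 > 64 the value spans words 0 and 1, so the two loads are
    // combined.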
1466

1467
    // We reimplement set as we have the mask in the structure.
1468

1469
    /// Sets the element of the slice at the specified index.
1470
    ///
1471
    /// Panics if the index is not in [0..[len](SliceByValue::len))
1472
    /// or the value does not fit in [`BitWidth::bit_width`] bits.
1473
    #[inline(always)]
1474
    fn set_atomic(
184,005✔
1475
        &self,
1476
        index: usize,
1477
        value: <B::Word as PrimitiveAtomic>::Value,
1478
        order: Ordering,
1479
    ) {
1480
        panic_if_out_of_bounds!(index, self.len);
552,015✔
1481
        panic_if_value!(value, self.mask, self.bit_width);
552,012✔
1482
        unsafe {
1483
            self.set_atomic_unchecked(index, value, order);
736,012✔
1484
        }
1485
    }
1486

1487
    #[inline]
1488
    unsafe fn set_atomic_unchecked(
186,235✔
1489
        &self,
1490
        index: usize,
1491
        value: <B::Word as PrimitiveAtomic>::Value,
1492
        order: Ordering,
1493
    ) {
1494
        unsafe {
1495
            let wbits = <B::Word as PrimitiveAtomic>::Value::BITS as usize;
372,470✔
1496
            debug_assert!(self.bit_width <= wbits);
372,470✔
1497
            let pos = index * self.bit_width;
372,470✔
1498
            let word_index = pos / wbits;
372,470✔
1499
            let bit_index = pos % wbits;
372,470✔
1500
            let data = self.bits.as_ref();
558,705✔
1501

1502
            if bit_index + self.bit_width <= wbits {
186,235✔
1503
                // this is consistent
1504
                let mut current = data.get_unchecked(word_index).load(order);
544,690✔
1505
                loop {
×
1506
                    let mut new = current;
217,876✔
1507
                    new &= !(self.mask << bit_index);
217,876✔
1508
                    new |= value << bit_index;
217,876✔
1509

1510
                    match data
326,814✔
1511
                        .get_unchecked(word_index)
217,876✔
1512
                        .compare_exchange(current, new, order, order)
435,752✔
1513
                    {
1514
                        Ok(_) => break,
108,938✔
1515
                        Err(e) => current = e,
×
1516
                    }
1517
                }
1518
            } else {
1519
                let mut word = data.get_unchecked(word_index).load(order);
386,485✔
1520
                // try to wait for the other thread to finish
1521
                fence(Ordering::Acquire);
154,594✔
1522
                loop {
×
1523
                    let mut new = word;
154,594✔
1524
                    new &= (<B::Word as PrimitiveAtomic>::Value::ONE << bit_index)
77,297✔
1525
                        - <B::Word as PrimitiveAtomic>::Value::ONE;
77,297✔
1526
                    new |= value << bit_index;
154,594✔
1527

1528
                    match data
231,891✔
1529
                        .get_unchecked(word_index)
154,594✔
1530
                        .compare_exchange(word, new, order, order)
309,188✔
1531
                    {
1532
                        Ok(_) => break,
77,297✔
1533
                        Err(e) => word = e,
×
1534
                    }
1535
                }
1536
                fence(Ordering::Release);
154,594✔
1537

1538
                // ensures that the compiler does not reorder the two atomic operations
1539
                // this should increase the probability that two concurrent writes
1540
                // remain consistent, as both threads will set the two parts of the
1541
                // value in the same order, and the release/acquire fences
1542
                // try to synchronize the threads as much as possible
1543
                compiler_fence(Ordering::SeqCst);
154,594✔
1544

1545
                let mut word = data.get_unchecked(word_index + 1).load(order);
386,485✔
1546
                fence(Ordering::Acquire);
154,594✔
1547
                loop {
×
1548
                    let mut new = word;
154,594✔
1549
                    new &= !(self.mask >> (wbits - bit_index));
231,891✔
1550
                    new |= value >> (wbits - bit_index);
231,891✔
1551

1552
                    match data
231,891✔
1553
                        .get_unchecked(word_index + 1)
154,594✔
1554
                        .compare_exchange(word, new, order, order)
309,188✔
1555
                    {
1556
                        Ok(_) => break,
77,297✔
1557
                        Err(e) => word = e,
×
1558
                    }
1559
                }
1560
                fence(Ordering::Release);
154,594✔
1561
            }
1562
        }
1563
    }
1564

1565
    fn reset_atomic(&mut self, ordering: Ordering) {
1✔
1566
        let bit_len = self.len * self.bit_width;
2✔
1567
        let full_words = bit_len / <B::Word as PrimitiveAtomic>::Value::BITS as usize;
2✔
1568
        let residual = bit_len % <B::Word as PrimitiveAtomic>::Value::BITS as usize;
2✔
1569
        let bits = self.bits.as_ref();
3✔
1570
        bits[..full_words]
1✔
1571
            .iter()
1572
            .for_each(|x| x.store(<B::Word as PrimitiveAtomic>::Value::ZERO, ordering));
22✔
1573
        if residual != 0 {
2✔
1574
            bits[full_words].fetch_and(
3✔
1575
                <B::Word as PrimitiveAtomic>::Value::MAX << residual,
1✔
1576
                ordering,
1✔
1577
            );
1578
        }
1579
    }
1580

1581
    #[cfg(feature = "rayon")]
1582
    fn par_reset_atomic(&mut self, ordering: Ordering) {
1✔
1583
        let bit_len = self.len * self.bit_width;
2✔
1584
        let full_words = bit_len / <B::Word as PrimitiveAtomic>::Value::BITS as usize;
2✔
1585
        let residual = bit_len % <B::Word as PrimitiveAtomic>::Value::BITS as usize;
2✔
1586
        let bits = self.bits.as_ref();
3✔
1587
        bits[..full_words]
1✔
1588
            .par_iter()
1589
            .with_min_len(crate::RAYON_MIN_LEN)
1✔
1590
            .for_each(|x| x.store(<B::Word as PrimitiveAtomic>::Value::ZERO, ordering));
22✔
1591
        if residual != 0 {
2✔
1592
            bits[full_words].fetch_and(
3✔
1593
                <B::Word as PrimitiveAtomic>::Value::MAX << residual,
1✔
1594
                ordering,
1✔
1595
            );
1596
        }
1597
    }
1598
}
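// A minimal sketch (not part of the original file): `reset_atomic` zeroes all
// stored values; it takes `&mut self`, so no other thread can be accessing the
// vector while it runs.
fn _sketch_reset() {
    let mut v = AtomicBitFieldVec::<Vec<Atomic<usize>>>::new(7, 10);
    v.set_atomic(3, 42, Ordering::Relaxed);
    v.reset_atomic(Ordering::Relaxed);
    // SAFETY: 3 is within bounds.
    assert_eq!(unsafe { v.get_atomic_unchecked(3, Ordering::Relaxed) }, 0);
}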
1599

1600
// Conversions
1601

1602
impl<W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>>
1603
    From<AtomicBitFieldVec<Vec<W::Atomic>>> for BitFieldVec<Vec<W>>
1604
{
1605
    #[inline]
1606
    fn from(value: AtomicBitFieldVec<Vec<W::Atomic>>) -> Self {
190✔
1607
        BitFieldVec {
1608
            bits: transmute_vec_from_atomic(value.bits),
570✔
1609
            len: value.len,
380✔
1610
            bit_width: value.bit_width,
190✔
1611
            mask: value.mask,
190✔
1612
        }
1613
    }
1614
}
1615

1616
impl<W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>>
1617
    From<AtomicBitFieldVec<Box<[W::Atomic]>>> for BitFieldVec<Box<[W]>>
1618
{
1619
    #[inline]
1620
    fn from(value: AtomicBitFieldVec<Box<[W::Atomic]>>) -> Self {
1✔
1621
        BitFieldVec {
1622
            bits: transmute_boxed_slice_from_atomic(value.bits),
3✔
1623

1624
            len: value.len,
2✔
1625
            bit_width: value.bit_width,
1✔
1626
            mask: value.mask,
1✔
1627
        }
1628
    }
1629
}
1630

1631
impl<'a, W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>>
1632
    From<AtomicBitFieldVec<&'a [W::Atomic]>> for BitFieldVec<&'a [W]>
1633
{
1634
    #[inline]
1635
    fn from(value: AtomicBitFieldVec<&'a [W::Atomic]>) -> Self {
1✔
1636
        BitFieldVec {
1637
            bits: unsafe { core::mem::transmute::<&'a [W::Atomic], &'a [W]>(value.bits) },
3✔
1638
            len: value.len,
2✔
1639
            bit_width: value.bit_width,
1✔
1640
            mask: value.mask,
1✔
1641
        }
1642
    }
1643
}
1644

1645
impl<'a, W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>>
1646
    From<AtomicBitFieldVec<&'a mut [W::Atomic]>> for BitFieldVec<&'a mut [W]>
1647
{
1648
    #[inline]
1649
    fn from(value: AtomicBitFieldVec<&'a mut [W::Atomic]>) -> Self {
1✔
1650
        BitFieldVec {
1651
            bits: unsafe { core::mem::transmute::<&'a mut [W::Atomic], &'a mut [W]>(value.bits) },
2✔
1652
            len: value.len,
2✔
1653
            bit_width: value.bit_width,
1✔
1654
            mask: value.mask,
1✔
1655
        }
1656
    }
1657
}
1658

1659
impl<W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>> From<BitFieldVec<Vec<W>>>
1660
    for AtomicBitFieldVec<Vec<W::Atomic>>
1661
{
1662
    #[inline]
1663
    fn from(value: BitFieldVec<Vec<W>>) -> Self {
369✔
1664
        AtomicBitFieldVec {
1665
            bits: transmute_vec_into_atomic(value.bits),
1,107✔
1666

1667
            len: value.len,
738✔
1668
            bit_width: value.bit_width,
369✔
1669
            mask: value.mask,
369✔
1670
        }
1671
    }
1672
}
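// A minimal sketch (not part of the original file): the conversions above and
// below only transmute the backend, so a `BitFieldVec` can be turned into an
// `AtomicBitFieldVec` for concurrent filling and then back at no cost.
fn _sketch_atomic_round_trip() {
    let mut v = BitFieldVec::<Vec<usize>>::with_capacity(5, 3);
    for x in [1_usize, 2, 3] {
        v.push(x);
    }
    let atomic = AtomicBitFieldVec::from(v);
    atomic.set_atomic(0, 31, Ordering::Relaxed);
    let back: BitFieldVec<Vec<usize>> = atomic.into();
    assert_eq!(back.iter().collect::<Vec<_>>(), vec![31, 2, 3]);
}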
1673

1674
impl<W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>> From<BitFieldVec<Box<[W]>>>
1675
    for AtomicBitFieldVec<Box<[W::Atomic]>>
1676
{
1677
    #[inline]
1678
    fn from(value: BitFieldVec<Box<[W]>>) -> Self {
1✔
1679
        AtomicBitFieldVec {
1680
            bits: transmute_boxed_slice_into_atomic(value.bits),
3✔
1681
            len: value.len,
2✔
1682
            bit_width: value.bit_width,
1✔
1683
            mask: value.mask,
1✔
1684
        }
1685
    }
1686
}
1687

1688
impl<'a, W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>> TryFrom<BitFieldVec<&'a [W]>>
1689
    for AtomicBitFieldVec<&'a [W::Atomic]>
1690
{
1691
    type Error = CannotCastToAtomicError<W>;
1692

1693
    #[inline]
1694
    fn try_from(value: BitFieldVec<&'a [W]>) -> Result<Self, Self::Error> {
1✔
1695
        if core::mem::align_of::<W::Atomic>() != core::mem::align_of::<W>() {
1✔
1696
            return Err(CannotCastToAtomicError::default());
×
1697
        }
1698
        Ok(AtomicBitFieldVec {
1✔
1699
            bits: unsafe { core::mem::transmute::<&'a [W], &'a [W::Atomic]>(value.bits) },
3✔
1700
            len: value.len,
2✔
1701
            bit_width: value.bit_width,
1✔
1702
            mask: value.mask,
1✔
1703
        })
1704
    }
1705
}
1706

1707
impl<'a, W: Word + AtomicPrimitive<Atomic: PrimitiveAtomicUnsigned>>
1708
    TryFrom<BitFieldVec<&'a mut [W]>> for AtomicBitFieldVec<&'a mut [W::Atomic]>
1709
{
1710
    type Error = CannotCastToAtomicError<W>;
1711

1712
    #[inline]
1713
    fn try_from(value: BitFieldVec<&'a mut [W]>) -> Result<Self, Self::Error> {
1✔
1714
        if core::mem::align_of::<W::Atomic>() != core::mem::align_of::<W>() {
1✔
1715
            return Err(CannotCastToAtomicError::default());
×
1716
        }
1717
        Ok(AtomicBitFieldVec {
1✔
1718
            bits: unsafe { core::mem::transmute::<&'a mut [W], &'a mut [W::Atomic]>(value.bits) },
2✔
1719
            len: value.len,
2✔
1720
            bit_width: value.bit_width,
1✔
1721
            mask: value.mask,
1✔
1722
        })
1723
    }
1724
}
1725

1726
impl<W: Word> From<BitFieldVec<Vec<W>>> for BitFieldVec<Box<[W]>> {
1727
    fn from(value: BitFieldVec<Vec<W>>) -> Self {
347✔
1728
        BitFieldVec {
1729
            bits: value.bits.into_boxed_slice(),
1,041✔
1730
            len: value.len,
694✔
1731
            bit_width: value.bit_width,
347✔
1732
            mask: value.mask,
347✔
1733
        }
1734
    }
1735
}
1736

1737
impl<W: Word> From<BitFieldVec<Box<[W]>>> for BitFieldVec<Vec<W>> {
1738
    fn from(value: BitFieldVec<Box<[W]>>) -> Self {
1✔
1739
        BitFieldVec {
1740
            bits: value.bits.into_vec(),
3✔
1741
            len: value.len,
2✔
1742
            bit_width: value.bit_width,
1✔
1743
            mask: value.mask,
1✔
1744
        }
1745
    }
1746
}
1747

1748
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueGat<'a>
1749
    for BitFieldVec<B>
1750
{
1751
    type Item = B::Word;
1752
    type Iter = BitFieldVecIter<'a, B>;
1753
}
1754

1755
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValue
1756
    for BitFieldVec<B>
1757
{
1758
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
×
1759
        self.iter_from(0)
×
1760
    }
1761
}
1762

1763
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueFromGat<'a>
1764
    for BitFieldVec<B>
1765
{
1766
    type Item = B::Word;
1767
    type IterFrom = BitFieldVecIter<'a, B>;
1768
}
1769

1770
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueFrom
1771
    for BitFieldVec<B>
1772
{
1773
    fn iter_value_from(
×
1774
        &self,
1775
        from: usize,
1776
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
1777
        self.iter_from(from)
×
1778
    }
1779
}
1780

1781
impl<'a, 'b, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueGat<'a>
1782
    for BitFieldVecSubsliceImpl<'b, B>
1783
{
1784
    type Item = B::Word;
1785
    type Iter = BitFieldVecIter<'a, B>;
1786
}
1787

1788
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValue
1789
    for BitFieldVecSubsliceImpl<'a, B>
1790
{
1791
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
×
1792
        self.slice.iter_from(0)
×
1793
    }
1794
}
1795

1796
impl<'a, 'b, B: Backend<Word: Word> + AsRef<[B::Word]>>
1797
    value_traits::iter::IterateByValueFromGat<'a> for BitFieldVecSubsliceImpl<'b, B>
1798
{
1799
    type Item = B::Word;
1800
    type IterFrom = BitFieldVecIter<'a, B>;
1801
}
1802

1803
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueFrom
1804
    for BitFieldVecSubsliceImpl<'a, B>
1805
{
1806
    fn iter_value_from(
×
1807
        &self,
1808
        from: usize,
1809
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
1810
        self.slice.iter_from(from)
×
1811
    }
1812
}
1813

1814
impl<'a, 'b, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueGat<'a>
1815
    for BitFieldVecSubsliceImplMut<'b, B>
1816
{
1817
    type Item = B::Word;
1818
    type Iter = BitFieldVecIter<'a, B>;
1819
}
1820

1821
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValue
1822
    for BitFieldVecSubsliceImplMut<'a, B>
1823
{
1824
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
×
1825
        self.slice.iter_from(0)
×
1826
    }
1827
}
1828

1829
impl<'a, 'b, B: Backend<Word: Word> + AsRef<[B::Word]>>
1830
    value_traits::iter::IterateByValueFromGat<'a> for BitFieldVecSubsliceImplMut<'b, B>
1831
{
1832
    type Item = B::Word;
1833
    type IterFrom = BitFieldVecIter<'a, B>;
1834
}
1835

1836
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> value_traits::iter::IterateByValueFrom
1837
    for BitFieldVecSubsliceImplMut<'a, B>
1838
{
1839
    fn iter_value_from(
×
1840
        &self,
1841
        from: usize,
1842
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
1843
        self.slice.iter_from(from)
×
1844
    }
1845
}
1846

1847
/// A transparent wrapper around [`BitFieldVec`] that implements [`SliceByValue`]
1848
/// using unaligned reads.
1849
///
1850
/// This wrapper delegates [`SliceByValue::get_value_unchecked`] to
1851
/// [`BitFieldVec::get_unaligned_unchecked`], which can be faster for random
1852
/// access patterns.
1853
///
1854
/// The [`TryIntoUnaligned`](crate::traits::TryIntoUnaligned) trait converts a
1855
/// [`BitFieldVec`] into a [`BitFieldVecU`] after adding a padding word at the
1856
/// end, which is required for unaligned reads to work correctly. The conversion
1857
/// will fail if the bit width does not satisfy the constraints of
1858
/// [`BitFieldVec::get_unaligned_unchecked`]. You can recover the original
1859
/// [`BitFieldVec`] using a [`From`
1860
/// implementation](#impl-From<BitFieldVecU<Box<%5BW%5D>>>-for-BitFieldVec<Box<%5BW%5D>>).
1861
///
1862
/// # Safety
1863
///
1864
/// The backing storage must have sufficient padding at the end so that
1865
/// unaligned reads do not access memory outside the allocation.
1866
#[derive(Debug, Clone, MemSize, MemDbg, value_traits::Subslices)]
1867
#[value_traits_subslices(bound = "B: AsRef<[B::Word]>")]
1868
#[value_traits_subslices(bound = "B::Word: Word")]
1869
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
1870
#[repr(transparent)]
1871
#[cfg_attr(
1872
    feature = "epserde",
1873
    epserde(bound(
1874
        deser = "B: Backend + epserde::deser::DeserInner, for<'a> <B as epserde::deser::DeserInner>::DeserType<'a>: Backend<Word = B::Word>"
1875
    ))
1876
)]
1877
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
1878
#[cfg_attr(
1879
    feature = "serde",
1880
    serde(bound(
1881
        serialize = "B: Backend + serde::Serialize, B::Word: serde::Serialize",
1882
        deserialize = "B: Backend + serde::Deserialize<'de>, B::Word: serde::Deserialize<'de>"
1883
    ))
1884
)]
1885
pub struct BitFieldVecU<B: Backend<Word: Word> = Vec<usize>>(BitFieldVec<B>);
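// A minimal usage sketch (not part of the original file): a boxed `BitFieldVec`
// is converted into a `BitFieldVecU` and read back with unaligned accesses; a
// bit width of 8 is at most `usize::BITS - 6`, so the conversion cannot fail here.
fn _sketch_unaligned() {
    use crate::traits::TryIntoUnaligned;
    let mut v = BitFieldVec::<Vec<usize>>::with_capacity(8, 4);
    for x in [10_usize, 20, 30, 40] {
        v.push(x);
    }
    let boxed: BitFieldVec<Box<[usize]>> = v.into();
    let unaligned = boxed.try_into_unaligned().unwrap();
    // SAFETY: 2 is within bounds and the padding word was added by the conversion.
    assert_eq!(unsafe { unaligned.get_value_unchecked(2) }, 30);
}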
1886

1887
impl<B: Backend<Word: Word>> BitFieldVecU<B> {
1888
    /// Returns the mask used to extract values from the vector.
1889
    /// The mask has its lowest `bit_width` bits set to one.
1890
    pub const fn mask(&self) -> B::Word {
3,012✔
1891
        self.0.mask()
6,024✔
1892
    }
1893
}
1894

1895
impl<B: Backend<Word: Word>> Backend for BitFieldVecU<B> {
1896
    type Word = B::Word;
1897
}
1898

1899
impl<B: Backend<Word: Word>> BitWidth for BitFieldVecU<B> {
1900
    #[inline(always)]
1901
    fn bit_width(&self) -> usize {
×
1902
        self.0.bit_width()
×
1903
    }
1904
}
1905

1906
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> BitFieldSlice for BitFieldVecU<B> {
1907
    fn as_slice(&self) -> &[Self::Value] {
×
1908
        self.0.as_slice()
×
1909
    }
1910
}
1911

1912
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> AsRef<[B::Word]> for BitFieldVecU<B> {
1913
    fn as_ref(&self) -> &[B::Word] {
×
1914
        self.0.bits.as_ref()
×
1915
    }
1916
}
1917

1918
impl<W: Word> crate::traits::TryIntoUnaligned for Box<[W]> {
1919
    type Unaligned = Box<[W]>;
1920
    fn try_into_unaligned(
11✔
1921
        self,
1922
    ) -> Result<Self::Unaligned, crate::traits::UnalignedConversionError> {
1923
        Ok(self)
11✔
1924
    }
1925
}
1926

1927
impl<W: Word> crate::traits::TryIntoUnaligned for Vec<W> {
1928
    type Unaligned = Vec<W>;
1929
    fn try_into_unaligned(
×
1930
        self,
1931
    ) -> Result<Self::Unaligned, crate::traits::UnalignedConversionError> {
1932
        Ok(self)
×
1933
    }
1934
}
1935

1936
impl<W: Word, const N: usize> crate::traits::TryIntoUnaligned for [W; N] {
1937
    type Unaligned = [W; N];
1938
    fn try_into_unaligned(
×
1939
        self,
1940
    ) -> Result<Self::Unaligned, crate::traits::UnalignedConversionError> {
1941
        Ok(self)
×
1942
    }
1943
}
1944

1945
impl<W: Word> crate::traits::TryIntoUnaligned for BitFieldVec<Box<[W]>> {
1946
    type Unaligned = BitFieldVecU<Box<[W]>>;
1947

1948
    /// Converts a [`BitFieldVec`] into a [`BitFieldVecU`], adding a
1949
    /// padding word at the end if one is not already present.
1950
    ///
1951
    /// # Errors
1952
    ///
1953
    /// Returns an error if the bit width does not satisfy the constraints of
1954
    /// [`BitFieldVec::get_unaligned_unchecked`]: it must be at most
1955
    /// `W::BITS - 6`, or exactly `W::BITS - 4`, or exactly `W::BITS`.
1956
    fn try_into_unaligned(
166✔
1957
        self,
1958
    ) -> Result<Self::Unaligned, crate::traits::UnalignedConversionError> {
1959
        let bw = self.bit_width();
498✔
1960
        ensure_unaligned!(W, bw);
166✔
1961
        let needed = (SliceByValue::len(&self) * bw).div_ceil(W::BITS as usize);
830✔
1962
        if self.as_slice().len() > needed {
332✔
1963
            // Padding word already present (e.g., built with new_padded).
1964
            Ok(BitFieldVecU(self))
166✔
1965
        } else {
1966
            // Add a padding word, reserving exactly one extra slot to
1967
            // avoid over-allocation.
1968
            let (raw_bits, bit_width, len) = self.into_raw_parts();
×
1969
            let mut v = raw_bits.into_vec();
×
1970
            v.reserve_exact(1);
×
1971
            v.push(W::ZERO);
×
1972
            // SAFETY: we added a padding word, the length is still valid
1973
            Ok(BitFieldVecU(unsafe {
×
1974
                BitFieldVec::from_raw_parts(v.into_boxed_slice(), bit_width, len)
×
1975
            }))
1976
        }
1977
    }
1978
}
1979

1980
impl<W: Word> From<BitFieldVecU<Box<[W]>>> for BitFieldVec<Box<[W]>> {
1981
    /// Converts a [`BitFieldVecU`] back into a [`BitFieldVec`].
1982
    ///
1983
    /// The padding word is kept in the backing storage so that a
1984
    /// subsequent [`try_into_unaligned`](crate::traits::TryIntoUnaligned::try_into_unaligned)
1985
    /// does not need to reallocate.
1986
    fn from(unaligned: BitFieldVecU<Box<[W]>>) -> Self {
×
1987
        unaligned.0
×
1988
    }
1989
}
1990

1991
impl<B: Backend<Word: Word> + AsRef<[B::Word]>> SliceByValue for BitFieldVecU<B> {
1992
    type Value = B::Word;
1993

1994
    #[inline(always)]
1995
    fn len(&self) -> usize {
3,012✔
1996
        SliceByValue::len(&self.0)
6,024✔
1997
    }
1998

1999
    #[inline(always)]
2000
    unsafe fn get_value_unchecked(&self, index: usize) -> B::Word {
44,491,279✔
2001
        unsafe { self.0.get_unaligned_unchecked(index) }
133,473,837✔
2002
    }
2003
}
2004

2005
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoUncheckedIterator for &'a BitFieldVecU<B> {
2006
    type Item = B::Word;
2007
    type IntoUncheckedIter = BitFieldVecUncheckedIter<'a, B>;
2008
    fn into_unchecked_iter_from(self, from: usize) -> Self::IntoUncheckedIter {
×
2009
        BitFieldVecUncheckedIter::new(&self.0, from)
×
2010
    }
2011
}
2012

2013
impl<'a, B: Backend<Word: Word> + AsRef<[B::Word]>> IntoUncheckedBackIterator
2014
    for &'a BitFieldVecU<B>
2015
{
2016
    type Item = B::Word;
2017
    type IntoUncheckedIterBack = BitFieldVecUncheckedBackIter<'a, B>;
2018

2019
    fn into_unchecked_iter_back(self) -> Self::IntoUncheckedIterBack {
×
2020
        BitFieldVecUncheckedBackIter::new(&self.0, SliceByValue::len(&self.0))
×
2021
    }
2022

2023
    fn into_unchecked_iter_back_from(self, from: usize) -> Self::IntoUncheckedIterBack {
×
2024
        BitFieldVecUncheckedBackIter::new(&self.0, from)
×
2025
    }
2026
}
2027

2028
#[cfg(test)]
2029
mod tests {
2030
    use super::*;
2031

2032
    #[test]
2033
    fn test_with_capacity() {
2034
        let mut b = BitFieldVec::<Vec<usize>>::with_capacity(10, 100);
2035
        let capacity = b.bits.capacity();
2036
        for _ in 0..100 {
2037
            b.push(0);
2038
        }
2039
        assert_eq!(b.bits.capacity(), capacity);
2040
    }
2041

2042
    fn copy<
2043
        B: Backend<Word: Word> + AsRef<[B::Word]>,
2044
        C: Backend<Word = B::Word> + AsRef<[B::Word]> + AsMut<[B::Word]>,
2045
    >(
2046
        source: &BitFieldVec<B>,
2047
        from: usize,
2048
        dest: &mut BitFieldVec<C>,
2049
        to: usize,
2050
        len: usize,
2051
    ) {
2052
        let len = Ord::min(Ord::min(len, dest.len - to), source.len - from);
2053
        for i in 0..len {
2054
            dest.set_value(to + i, source.index_value(from + i));
2055
        }
2056
    }
2057

2058
    #[test]
2059
    fn test_copy() {
2060
        for src_pattern in 0..8 {
2061
            for dst_pattern in 0..8 {
2062
                // if from_first_word == from_last_word && to_first_word == to_last_word
2063
                let source = bit_field_vec![3 => src_pattern; 100];
2064
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
2065
                let mut dest_expected = dest_actual.clone();
2066
                source.copy(1, &mut dest_actual, 2, 10);
2067
                copy(&source, 1, &mut dest_expected, 2, 10);
2068
                assert_eq!(dest_actual, dest_expected);
2069
                // else if from_first_word == from_last_word
2070
                let source = bit_field_vec![3 => src_pattern; 100];
2071
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
2072
                let mut dest_expected = dest_actual.clone();
2073
                source.copy(1, &mut dest_actual, 20, 10);
2074
                copy(&source, 1, &mut dest_expected, 20, 10);
2075
                assert_eq!(dest_actual, dest_expected);
2076
                // else if to_first_word == to_last_word
2077
                let source = bit_field_vec![3 => src_pattern; 100];
2078
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
2079
                let mut dest_expected = dest_actual.clone();
2080
                source.copy(20, &mut dest_actual, 1, 10);
2081
                copy(&source, 20, &mut dest_expected, 1, 10);
2082
                assert_eq!(dest_actual, dest_expected);
2083
                // else if src_bit == dest_bit (residual = 1)
2084
                let source = bit_field_vec![3 => src_pattern; 1000];
2085
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2086
                let mut dest_expected = dest_actual.clone();
2087
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 40);
2088
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 40);
2089
                assert_eq!(dest_actual, dest_expected);
2090
                // else if src_bit == dest_bit (residual = 64)
2091
                let source = bit_field_vec![3 => src_pattern; 1000];
2092
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2093
                let mut dest_expected = dest_actual.clone();
2094
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 61);
2095
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 61);
2096
                assert_eq!(dest_actual, dest_expected);
2097
                // else if src_bit < dest_bit (residual = 1)
2098
                let source = bit_field_vec![3 => src_pattern; 1000];
2099
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2100
                let mut dest_expected = dest_actual.clone();
2101
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40);
2102
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40);
2103
                assert_eq!(dest_actual, dest_expected);
2104
                // else if src_bit < dest_bit (residual = 64)
2105
                let source = bit_field_vec![3 => src_pattern; 1000];
2106
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2107
                let mut dest_expected = dest_actual.clone();
2108
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40 + 17);
2109
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40 + 17);
2110
                assert_eq!(dest_actual, dest_expected);
2111
                // else if src_bit > dest_bit (residual = 1)
2112
                let source = bit_field_vec![3 => src_pattern; 1000];
2113
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2114
                let mut dest_expected = dest_actual.clone();
2115
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 64);
2116
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 64);
2117
                assert_eq!(dest_actual, dest_expected);
2118
                // else if src_bit > dest_bit (residual = 64)
2119
                let source = bit_field_vec![3 => src_pattern; 1000];
2120
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
2121
                let mut dest_expected = dest_actual.clone();
2122
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 21 + 64);
2123
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 21 + 64);
2124
                assert_eq!(dest_actual, dest_expected);
2125
            }
2126
        }
2127
    }
2128
}