vigna / sux-rs — build 18556132052 (push, github)
16 Oct 2025 09:05AM UTC coverage: 64.195% (-1.8%) from 66.035%
Commit: Updated to the last lender version
3627 of 5650 relevant lines covered (64.19%)

Source file: /src/bits/bit_field_vec.rs (58.97% of lines covered)
/*
 * SPDX-FileCopyrightText: 2023 Inria
 * SPDX-FileCopyrightText: 2023 Sebastiano Vigna
 *
 * SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
 */

//! Vectors of bit fields of fixed width (AKA “compact arrays”, “bit arrays”,
//! etc.).
//!
//! Elements are stored contiguously, with no padding bits (in particular,
//! unless the bit width is a power of two some elements will be stored across
//! word boundaries).
//!
//! There are two flavors: [`BitFieldVec`], a mutable bit-field vector, and
//! [`AtomicBitFieldVec`], a mutable, thread-safe bit-field vector.
//!
//! These flavors depend on a backend, and presently we provide, given an
//! unsigned integer type `W` or an unsigned atomic integer type `A`:
//!
//! - `BitFieldVec<Vec<W>>`: a mutable, growable and resizable bit-field vector;
//! - `BitFieldVec<AsRef<[W]>>`: an immutable bit-field vector, useful for
//!   [ε-serde](https://crates.io/crates/epserde) support;
//! - `BitFieldVec<AsRef<[W]> + AsMut<[W]>>`: a mutable (but not resizable) bit
//!   vector;
//! - `AtomicBitFieldVec<AsRef<[A]>>`: a partially thread-safe, mutable (but not
//!   resizable) bit-field vector.
//!
//! More generally, the underlying type must satisfy the trait [`Word`] for
//! [`BitFieldVec`], and additionally [`IntoAtomic`] for [`AtomicBitFieldVec`].
//! A blanket implementation exposes slices of elements of type `W` as bit-field
//! vectors of width `W::BITS`, and analogously for atomic types `A`.
//!
//! The traits [`BitFieldSlice`] and [`BitFieldSliceMut`] provide a uniform
//! interface to access the content of (a reference to) a bit-field vector.
//! There is also an [`AtomicBitFieldSlice`] trait for atomic bit-field vectors.
//! Since these traits are also implemented for slices of words, they make it
//! easy to write generic code that works both on bit-field vectors and on
//! slices of words when you need to consider the bit width of each element.
//!
//! Note that the [`try_chunks_mut`](SliceByValueMut::try_chunks_mut) method is
//! part of the [`SliceByValueMut`] trait, and thus returns an iterator over
//! elements implementing [`SliceByValueMut`]; the elements, however, also
//! implement [`BitFieldSliceMut`], and you can use this property by adding the
//! bound `for<'a> BitFieldSliceMut<W, ChunksMut<'a>: Iterator<Item:
//! BitFieldSliceMut<W>>>`.
//!
//! Nothing is assumed about the content of the backend outside the bits of the
//! vector. Moreover, the content of the backend outside of the vector is never
//! modified by the methods of this structure.
//!
//! For high-speed unchecked scanning, we implement [`IntoUncheckedIterator`]
//! and [`IntoReverseUncheckedIterator`] on a reference to this type. They are
//! used, for example, to provide
//! [predecessor](crate::traits::indexed_dict::Pred) and
//! [successor](crate::traits::indexed_dict::Succ) primitives for
//! [`EliasFano`].
//!
//! # Low-level support
//!
//! The methods [`addr_of`](BitFieldVec::addr_of) and
//! [`get_unaligned`](BitFieldVec::get_unaligned) can be used to manually
//! prefetch parts of the data structure, or to read values using unaligned
//! reads, when the bit width makes it possible.
//!
//! # Examples
//!
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! # use value_traits::slices::*;
//! // Bit field vector of bit width 5 and length 10, all entries set to zero
//! let mut b = <BitFieldVec<usize>>::new(5, 10);
//! assert_eq!(b.len(), 10);
//! assert_eq!(b.bit_width(), 5);
//! b.set_value(0, 3);
//! assert_eq!(b.index_value(0), 3);
//!
//! // Empty bit field vector of bit width 20 with capacity 10
//! let mut b = <BitFieldVec<usize>>::with_capacity(20, 10);
//! assert_eq!(b.len(), 0);
//! assert_eq!(b.bit_width(), 20);
//! b.push(20);
//! assert_eq!(b.len(), 1);
//! assert_eq!(b.index_value(0), 20);
//! assert_eq!(b.pop(), Some(20));
//!
//! // Convenience macro
//! let b = bit_field_vec![10; 4, 500, 2, 0, 1];
//! assert_eq!(b.len(), 5);
//! assert_eq!(b.bit_width(), 10);
//! assert_eq!(b.index_value(0), 4);
//! assert_eq!(b.index_value(1), 500);
//! assert_eq!(b.index_value(2), 2);
//! assert_eq!(b.index_value(3), 0);
//! assert_eq!(b.index_value(4), 1);
//! ```
#[cfg(feature = "rayon")]
use crate::RAYON_MIN_LEN;
use crate::prelude::{bit_field_slice::*, *};
use crate::utils::{
    CannotCastToAtomicError, transmute_boxed_slice_from_atomic, transmute_boxed_slice_into_atomic,
    transmute_vec_from_atomic, transmute_vec_into_atomic,
};
use crate::{panic_if_out_of_bounds, panic_if_value};
use anyhow::{Result, bail};
use common_traits::{
    AsBytes, Atomic, AtomicInteger, AtomicUnsignedInt, CastableInto, IntoAtomic, invariant_eq,
};
use mem_dbg::*;
#[cfg(feature = "rayon")]
use rayon::prelude::*;
use std::sync::atomic::{Ordering, compiler_fence, fence};
use value_traits::slices::{SliceByValue, SliceByValueMut};

/// Convenient, [`vec!`]-like macro to initialize `usize`-based bit-field
/// vectors.
///
/// - `bit_field_vec![width]`: creates an empty bit-field vector of given bit
///   width.
///
/// - `bit_field_vec![width => value; length]`: creates a bit-field vector of
///   given bit width and length, with all entries set to `value`.
///
/// - `bit_field_vec![width; v₀, v₁, … ]`: creates a bit-field vector of
///   given bit width with entries set to `v₀`, `v₁`, ….
///
/// Note that the syntax `bit_field_vec![width; length; value]` has been
/// deprecated in favor of `bit_field_vec![width => value; length]`, so that
/// value and length are in the same order as in [`vec!`].
///
/// # Examples
///
/// ```
/// use sux::prelude::*;
/// use bit_field_slice::*;
/// use value_traits::slices::*;
/// // Empty bit field vector of bit width 5
/// let b = bit_field_vec![5];
/// assert_eq!(b.len(), 0);
/// assert_eq!(b.bit_width(), 5);
///
/// // 10 values of bit width 6, all set to 3
/// let b = bit_field_vec![6 => 3; 10];
/// assert_eq!(b.len(), 10);
/// assert_eq!(b.bit_width(), 6);
/// assert!(b.iter().all(|x| x == 3));
///
/// // List of values of bit width 10
/// let b = bit_field_vec![10; 4, 500, 2, 0, 1];
/// assert_eq!(b.len(), 5);
/// assert_eq!(b.bit_width(), 10);
/// assert_eq!(b.index_value(0), 4);
/// assert_eq!(b.index_value(1), 500);
/// assert_eq!(b.index_value(2), 2);
/// assert_eq!(b.index_value(3), 0);
/// assert_eq!(b.index_value(4), 1);
/// ```
#[macro_export]
macro_rules! bit_field_vec {
    ($w:expr) => {
        $crate::bits::BitFieldVec::<usize, _>::new($w, 0)
    };
    ($w:expr; $n:expr; $v:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr => $v:expr; $n:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr; $($x:expr),+ $(,)?) => {
        {
            let mut b = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, [$($x),+].len());
            $(
                // Force type
                let x: usize = $x;
                b.push(x);
            )*
            b
        }
    };
}

/// A vector of bit fields of fixed width (AKA “compact array”, “bit array”,
/// etc.).
///
/// See the [module documentation](crate::bits) for more details.
#[derive(Debug, Clone, Copy, Hash, MemDbg, MemSize, value_traits::Subslices)]
#[value_traits_subslices(bound = "B: AsRef<[W]>")]
#[derive(value_traits::SubslicesMut)]
#[value_traits_subslices_mut(bound = "B: AsRef<[W]> + AsMut<[W]>")]
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct BitFieldVec<W: Word = usize, B = Vec<W>> {
    /// The underlying storage.
    bits: B,
    /// The bit width of the values stored in the vector.
    bit_width: usize,
    /// A mask with its lowest `bit_width` bits set to one.
    mask: W,
    /// The length of the vector.
    len: usize,
}

/// Robust, heavily checked mask function for constructors and similar methods.
fn mask<W: Word>(bit_width: usize) -> W {
    if bit_width == 0 {
        W::ZERO
    } else {
        W::MAX >> W::BITS.checked_sub(bit_width).expect("bit_width > W::BITS")
    }
}

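A minimal standalone sketch of the same computation on a concrete `u64` backend (the `mask_u64` helper is illustrative, not part of the crate):

```rust
// Illustrative stand-in for mask::<u64>: the zero-width case must be
// special-cased, because u64::MAX >> 64 would be an overflowing shift.
fn mask_u64(bit_width: usize) -> u64 {
    if bit_width == 0 {
        0
    } else {
        u64::MAX >> 64usize.checked_sub(bit_width).expect("bit_width > 64")
    }
}

fn main() {
    assert_eq!(mask_u64(5), 0b11111); // lowest five bits set
    assert_eq!(mask_u64(0), 0); // degenerate zero-width case
    assert_eq!(mask_u64(64), u64::MAX); // full-word case
}
```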
impl<W: Word, B> BitFieldVec<W, B> {
    /// # Safety
    ///
    /// `len * bit_width` must be between 0 (inclusive) and the number of bits
    /// in `bits` (inclusive).
    #[inline(always)]
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
        Self {
            bits,
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    #[inline(always)]
    pub fn into_raw_parts(self) -> (B, usize, usize) {
        (self.bits, self.bit_width, self.len)
    }

    /// Modifies the bit field in place.
    ///
    /// # Safety
    ///
    /// This is unsafe because it is the caller's responsibility to ensure that
    /// the length is compatible with the modified bits.
    #[inline(always)]
    pub unsafe fn map<W2: Word, B2>(self, f: impl FnOnce(B) -> B2) -> BitFieldVec<W2, B2> {
        BitFieldVec {
            bits: f(self.bits),
            bit_width: self.bit_width,
            mask: mask(self.bit_width),
            len: self.len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
    /// Gets the address of the word storing (the first part of)
    /// the element of given index.
    ///
    /// This method is mainly useful for manually prefetching
    /// parts of the data structure.
    pub fn addr_of(&self, index: usize) -> *const W {
        let start_bit = index * self.bit_width;
        let word_index = start_bit / W::BITS;
        (&self.bits.as_ref()[word_index]) as *const _
    }

    /// Like [`SliceByValue::index_value`], but using unaligned reads.
    ///
    /// This method can be used only for bit widths smaller than or equal to
    /// `W::BITS - 8 + 2`, or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
    /// an additional padding word must be present at the end of the vector.
    ///
    /// Note that to guarantee the absence of undefined behavior this method
    /// has to perform several tests. Consider using
    /// [`get_unaligned_unchecked`](Self::get_unaligned_unchecked) if you are
    /// sure that the constraints are respected.
    ///
    /// # Panics
    ///
    /// This method will panic if the constraints above are not respected.
    pub fn get_unaligned(&self, index: usize) -> W {
        assert!(
            self.bit_width <= W::BITS - 8 + 2
                || self.bit_width == W::BITS - 8 + 4
                || self.bit_width == W::BITS
        );
        panic_if_out_of_bounds!(index, self.len);
        assert!(
            (index * self.bit_width) / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES
        );
        unsafe { self.get_unaligned_unchecked(index) }
    }

    /// Like [`SliceByValue::get_value_unchecked`], but using unaligned reads.
    ///
    /// # Safety
    ///
    /// This method can be used only for bit widths smaller than or equal to
    /// `W::BITS - 8 + 2`, or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
    /// an additional padding word must be present at the end of the vector.
    ///
    /// # Panics
    ///
    /// This method will panic in debug mode if the safety constraints are not
    /// respected.
    pub unsafe fn get_unaligned_unchecked(&self, index: usize) -> W {
        debug_assert!(
            self.bit_width <= W::BITS - 8 + 2
                || self.bit_width == W::BITS - 8 + 4
                || self.bit_width == W::BITS
        );
        let base_ptr = self.bits.as_ref().as_ptr() as *const u8;
        let start_bit = index * self.bit_width;
        debug_assert!(start_bit / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES);
        let ptr = unsafe { base_ptr.add(start_bit / 8) } as *const W;
        let word = unsafe { core::ptr::read_unaligned(ptr) };
        (word >> (start_bit % 8)) & self.mask
    }

    /// Returns the backend of the vector as a slice of `W`.
    pub fn as_slice(&self) -> &[W] {
        self.bits.as_ref()
    }
}
/// An iterator over non-overlapping chunks of a bit-field vector, starting at
/// the beginning of the vector.
///
/// When the length of the vector is not evenly divided by the chunk size, the
/// last chunk of the iteration will be shorter.
///
/// This struct is created by the
/// [`try_chunks_mut`](crate::bits::bit_field_vec::BitFieldVec#impl-BitFieldSliceMut<W>-for-BitFieldVec<W,+B>)
/// method.
pub struct ChunksMut<'a, W: Word> {
    remaining: usize,
    bit_width: usize,
    chunk_size: usize,
    iter: std::slice::ChunksMut<'a, W>,
}

impl<'a, W: Word> Iterator for ChunksMut<'a, W> {
    type Item = BitFieldVec<W, &'a mut [W]>;

    #[inline]
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|chunk| {
            let size = Ord::min(self.chunk_size, self.remaining);
            let next = unsafe { BitFieldVec::from_raw_parts(chunk, self.bit_width, size) };
            self.remaining -= size;
            next
        })
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {}

impl<W: Word> BitFieldVec<W, Vec<W>> {
    /// Creates a new zero-initialized vector of given bit width and length.
    pub fn new(bit_width: usize, len: usize) -> Self {
        // We need at least one word to handle the case of bit width zero.
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
        Self {
            bits: vec![W::ZERO; n_of_words],
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    /// Creates a new zero-initialized vector of given bit width and length,
    /// adding padding bits to the end of the vector so that unaligned reads are
    /// possible.
    ///
    /// Note that this convenience method is a one-off: if the vector is resized
    /// or expanded, the padding will be lost.
    pub fn new_unaligned(bit_width: usize, len: usize) -> Self {
        let n_of_words = (len * bit_width).div_ceil(W::BITS);
        Self {
            // We add a word at the end
            bits: vec![W::ZERO; n_of_words + 1],
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    /// Creates an empty vector that does not need to reallocate for up to
    /// `capacity` elements.
    pub fn with_capacity(bit_width: usize, capacity: usize) -> Self {
        // We need at least one word to handle the case of bit width zero.
        let n_of_words = Ord::max(1, (capacity * bit_width).div_ceil(W::BITS));
        Self {
            bits: Vec::with_capacity(n_of_words),
            bit_width,
            mask: mask(bit_width),
            len: 0,
        }
    }

    /// Sets the length.
    ///
    /// # Safety
    ///
    /// `len * bit_width` must be at most `self.bits.len() * W::BITS`. Note that
    /// setting the length might result in reading uninitialized data.
    pub unsafe fn set_len(&mut self, len: usize) {
        debug_assert!(len * self.bit_width <= self.bits.len() * W::BITS);
        self.len = len;
    }

    /// Sets the length to 0.
    pub fn clear(&mut self) {
        self.len = 0;
    }

    /// Returns the bit width of the values inside the vector.
    pub fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }

    /// Returns the mask used to extract values from the vector.
    /// The mask keeps the lowest `bit_width` bits.
    pub fn mask(&self) -> W {
        self.mask
    }

    /// Creates a new vector by copying a slice; the bit width will be the
    /// minimum width sufficient to hold all values in the slice.
    ///
    /// Returns an error if the bit width of the values in `slice` is larger
    /// than `W::BITS`.
    pub fn from_slice<SW>(slice: &impl BitFieldSlice<SW>) -> Result<Self>
    where
        SW: Word + CastableInto<W>,
    {
        let mut max_len: usize = 0;
        for i in 0..slice.len() {
            max_len = Ord::max(max_len, unsafe {
                slice.get_value_unchecked(i).len() as usize
            });
        }

        if max_len > W::BITS {
            bail!(
                "Cannot convert a slice of bit width {} into a slice with W = {}",
                max_len,
                std::any::type_name::<W>()
            );
        }
        let mut result = Self::new(max_len, slice.len());
        for i in 0..slice.len() {
            unsafe { result.set_value_unchecked(i, slice.get_value_unchecked(i).cast()) };
        }

        Ok(result)
    }

    /// Adds a value at the end of the vector.
    pub fn push(&mut self, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if (self.len + 1) * self.bit_width > self.bits.len() * W::BITS {
            self.bits.push(W::ZERO);
        }
        unsafe {
            self.set_value_unchecked(self.len, value);
        }
        self.len += 1;
    }

    /// Truncates the vector, or extends it with copies of `value`.
    pub fn resize(&mut self, new_len: usize, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if new_len > self.len {
            if new_len * self.bit_width > self.bits.len() * W::BITS {
                self.bits
                    .resize((new_len * self.bit_width).div_ceil(W::BITS), W::ZERO);
            }
            for i in self.len..new_len {
                unsafe {
                    self.set_value_unchecked(i, value);
                }
            }
        }
        self.len = new_len;
    }

    /// Removes and returns a value from the end of the vector.
    ///
    /// Returns `None` if the [`BitFieldVec`] is empty.
    pub fn pop(&mut self) -> Option<W> {
        if self.len == 0 {
            return None;
        }
        let value = self.index_value(self.len - 1);
        self.len -= 1;
        Some(value)
    }
}

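The word-allocation arithmetic shared by `new` and `with_capacity` can be checked in isolation. A sketch assuming a 64-bit word (`n_of_words` here is a free-standing illustration, not the crate's function):

```rust
// Number of backing u64 words needed for `len` elements of `bit_width` bits;
// at least one word is kept so that bit width zero is handled uniformly.
fn n_of_words(bit_width: usize, len: usize) -> usize {
    Ord::max(1, (len * bit_width).div_ceil(64))
}

fn main() {
    assert_eq!(n_of_words(5, 10), 1); // 50 bits fit in a single word
    assert_eq!(n_of_words(20, 10), 4); // 200 bits need ceil(200 / 64) = 4 words
    assert_eq!(n_of_words(0, 1000), 1); // bit width zero still allocates a word
}
```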
impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldVec<W, B> {}

impl<W: Word, T> BitWidth<W> for BitFieldVec<W, T> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word, B: AsRef<[W]>> SliceByValue for BitFieldVec<W, B> {
    type Value = W;

    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                (*bits.get_unchecked(word_index) >> bit_index) & self.mask
            } else {
                ((*bits.get_unchecked(word_index) >> bit_index)
                    | (*bits.get_unchecked(word_index + 1) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldSlice<W> for BitFieldVec<W, B> {
    fn as_slice(&self) -> &[W] {
        self.bits.as_ref()
    }
}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldSliceMut<W> for BitFieldVec<W, B> {
    #[inline(always)]
    fn mask(&self) -> W {
        self.mask
    }

    fn reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words].iter_mut().for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    #[cfg(feature = "rayon")]
    fn par_reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words]
            .par_iter_mut()
            .with_min_len(RAYON_MIN_LEN)
            .for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    fn as_mut_slice(&mut self) -> &mut [W] {
        self.bits.as_mut()
    }
}

/// Error type returned when [`try_chunks_mut`](SliceByValueMut::try_chunks_mut)
/// does not find sufficient alignment.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ChunksMutError<W: Word> {
    bit_width: usize,
    chunk_size: usize,
    _marker: core::marker::PhantomData<W>,
}

impl<W: Word> core::fmt::Display for ChunksMutError<W> {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        write!(
            f,
            "try_chunks_mut requires that the bit width ({}) times the chunk size ({}) be a multiple of W::BITS ({}) to return more than one chunk",
            self.bit_width,
            self.chunk_size,
            W::BITS
        )
    }
}

impl<W: Word> std::error::Error for ChunksMutError<W> {}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> SliceByValueMut for BitFieldVec<W, B> {
    #[inline(always)]
    fn set_value(&mut self, index: usize, value: W) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_value_unchecked(index, value);
        }
    }

    unsafe fn set_value_unchecked(&mut self, index: usize, value: W) {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
            } else {
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
            }
        }
    }

    fn replace_value(&mut self, index: usize, value: W) -> W {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe { self.replace_value_unchecked(index, value) }
    }

    unsafe fn replace_value_unchecked(&mut self, index: usize, value: W) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                let mut word = *bits.get_unchecked_mut(word_index);
                let old_value = (word >> bit_index) & self.mask;
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
                old_value
            } else {
                let mut word = *bits.get_unchecked_mut(word_index);
                let mut old_value = word >> bit_index;
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                old_value |= word << (W::BITS - bit_index);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
                old_value & self.mask
            }
        }
    }

680
    /// This implementation perform the copy word by word, which is
681
    /// significantly faster than the default implementation.
682
    fn copy(&self, from: usize, dst: &mut Self, to: usize, len: usize) {
576✔
683
        assert_eq!(
576✔
684
            self.bit_width, dst.bit_width,
×
685
            "Bit widths must be equal (self: {}, dest: {})",
×
686
            self.bit_width, dst.bit_width
×
687
        );
688
        // Reduce len to the elements available in both vectors
689
        let len = Ord::min(Ord::min(len, dst.len - to), self.len - from);
3,456✔
690
        if len == 0 {
576✔
691
            return;
×
692
        }
693
        let bit_width = Ord::min(self.bit_width, dst.bit_width);
×
694
        let bit_len = len * bit_width;
×
695
        let src_pos = from * self.bit_width;
×
696
        let dst_pos = to * dst.bit_width;
×
697
        let src_bit = src_pos % W::BITS;
×
698
        let dst_bit = dst_pos % W::BITS;
×
699
        let src_first_word = src_pos / W::BITS;
×
700
        let dst_first_word = dst_pos / W::BITS;
×
701
        let src_last_word = (src_pos + bit_len - 1) / W::BITS;
×
702
        let dst_last_word = (dst_pos + bit_len - 1) / W::BITS;
×
703
        let source = self.bits.as_ref();
×
704
        let dest = dst.bits.as_mut();
×
705

706
        if src_first_word == src_last_word && dst_first_word == dst_last_word {
192✔
707
            let mask = W::MAX >> (W::BITS - bit_len);
192✔
708
            let word = (source[src_first_word] >> src_bit) & mask;
192✔
709
            dest[dst_first_word] &= !(mask << dst_bit);
192✔
710
            dest[dst_first_word] |= word << dst_bit;
128✔
711
        } else if src_first_word == src_last_word {
640✔
712
            // dst_first_word != dst_last_word
713
            let mask = W::MAX >> (W::BITS - bit_len);
192✔
714
            let word = (source[src_first_word] >> src_bit) & mask;
192✔
715
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= (word & mask) << dst_bit;
            dest[dst_last_word] &= !(mask >> (W::BITS - dst_bit));
            dest[dst_last_word] |= (word & mask) >> (W::BITS - dst_bit);
        } else if dst_first_word == dst_last_word {
            // src_first_word != src_last_word
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = ((source[src_first_word] >> src_bit)
                | (source[src_last_word] << (W::BITS - src_bit)))
                & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= word << dst_bit;
        } else if src_bit == dst_bit {
            // src_first_word != src_last_word && dst_first_word != dst_last_word
            let mask = W::MAX << dst_bit;
            dest[dst_first_word] &= !mask;
            dest[dst_first_word] |= source[src_first_word] & mask;

            dest[(1 + dst_first_word)..dst_last_word]
                .copy_from_slice(&source[(1 + src_first_word)..src_last_word]);

            let residual =
                bit_len - (W::BITS - src_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else if src_bit < dst_bit {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = dst_bit - src_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) << shift;

            let mut word = source[src_first_word] >> (W::BITS - shift);
            for i in 1..dst_last_word - dst_first_word {
                dest[dst_first_word + i] = word | (source[src_first_word + i] << shift);
                word = source[src_first_word + i] >> (W::BITS - shift);
            }
            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word && src_bit > dst_bit
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = src_bit - dst_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) >> shift;
            dest[dst_first_word] |= source[src_first_word + 1] << (W::BITS - shift);

            let mut word = source[src_first_word + 1] >> shift;

            for i in 1..dst_last_word - dst_first_word {
                word |= source[src_first_word + i + 1] << (W::BITS - shift);
                dest[dst_first_word + i] = word;
                word = source[src_first_word + i + 1] >> shift;
            }

            word |= source[src_last_word] << (W::BITS - shift);

            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= word & mask;
        }
    }

    /// This implementation keeps a buffer of `W::BITS` bits for reading and
    /// writing, obtaining a significant speedup with respect to the default
    /// implementation.
    #[inline]
    unsafe fn apply_in_place_unchecked<F>(&mut self, mut f: F)
    where
        F: FnMut(Self::Value) -> Self::Value,
    {
        if self.is_empty() {
            return;
        }
        let bit_width = self.bit_width();
        if bit_width == 0 {
            return;
        }
        let mask = self.mask();
        let number_of_words: usize = self.bits.as_ref().len();
        let last_word_idx = number_of_words.saturating_sub(1);

        let mut write_buffer: W = W::ZERO;
        let mut read_buffer: W = *unsafe { self.bits.as_ref().get_unchecked(0) };

        // Specialized case, because it's much faster.
        if bit_width.is_power_of_two() {
            let mut bits_in_buffer = 0;

            // TODO!: figure out how to simplify
            let mut buffer_limit = (self.len() * bit_width) % W::BITS;
            if buffer_limit == 0 {
                buffer_limit = W::BITS;
            }

            for read_idx in 1..number_of_words {
                // Pre-load the next word so it loads while we parse the buffer.
                let next_word: W = *unsafe { self.bits.as_ref().get_unchecked(read_idx) };

                // Parse as much as we can from the buffer.
                loop {
                    let next_bits_in_buffer = bits_in_buffer + bit_width;

                    if next_bits_in_buffer > W::BITS {
                        break;
                    }

                    let value = read_buffer & mask;
                    // Throw away the bits we just read.
                    read_buffer >>= bit_width;
                    // Apply the user-provided function.
                    let new_value = f(value);
                    // Put the new value in the write buffer.
                    write_buffer |= new_value << bits_in_buffer;

                    bits_in_buffer = next_bits_in_buffer;
                }

                invariant_eq!(read_buffer, W::ZERO);
                *unsafe { self.bits.as_mut().get_unchecked_mut(read_idx - 1) } = write_buffer;
                read_buffer = next_word;
                write_buffer = W::ZERO;
                bits_in_buffer = 0;
            }

            // Write the last word if we have some bits left.
            while bits_in_buffer < buffer_limit {
                let value = read_buffer & mask;
                // Throw away the bits we just read.
                read_buffer >>= bit_width;
                // Apply the user-provided function.
                let new_value = f(value);
                // Put the new value in the write buffer.
                write_buffer |= new_value << bits_in_buffer;
                bits_in_buffer += bit_width;
            }

            *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
            return;
        }

        // The position inside the word. In most parameterizations of the
        // vector, since the bit width is not necessarily an integer divisor
        // of the word size, we need to keep track of the position inside the
        // word. As we scroll through the bits, due to the remainder, we may
        // need to operate on two words at the same time.
        let mut global_bit_index: usize = 0;

        // The bit limits of the current word.
        let mut lower_word_limit = 0;
        let mut upper_word_limit = W::BITS;

        // We iterate across the words.
        for word_number in 0..last_word_idx {
            // We iterate across the elements in the word.
            while global_bit_index + bit_width <= upper_word_limit {
                // We retrieve the value from the current word.
                let offset = global_bit_index - lower_word_limit;
                global_bit_index += bit_width;
                let element = self.mask() & (read_buffer >> offset);

                // We apply the function to the element.
                let new_element = f(element);

                // We set the element in the new word.
                write_buffer |= new_element << offset;
            }

            // We retrieve the next word from the bit vector.
            let next_word = *unsafe { self.bits.as_ref().get_unchecked(word_number + 1) };

            let mut new_write_buffer = W::ZERO;
            if upper_word_limit != global_bit_index {
                let remainder = upper_word_limit - global_bit_index;
                let offset = global_bit_index - lower_word_limit;
                // We compose the element from the remaining bits in the
                // current word and the bits in the next word.
                let element = ((read_buffer >> offset) | (next_word << remainder)) & self.mask();
                global_bit_index += bit_width;

                // We apply the function to the element.
                let new_element = f(element);

                write_buffer |= new_element << offset;

                new_write_buffer = new_element >> remainder;
            };

            read_buffer = next_word;

            *unsafe { self.bits.as_mut().get_unchecked_mut(word_number) } = write_buffer;

            write_buffer = new_write_buffer;
            lower_word_limit = upper_word_limit;
            upper_word_limit += W::BITS;
        }

        let mut offset = global_bit_index - lower_word_limit;

        // We iterate across the elements in the word.
        while offset < self.len() * bit_width - global_bit_index {
            // We retrieve the value from the current word.
            let element = self.mask() & (read_buffer >> offset);

            // We apply the function to the element.
            let new_element = f(element);

            // We set the element in the new word.
            write_buffer |= new_element << offset;
            offset += bit_width;
        }

        *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
    }
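The buffered strategy above can be sketched on a single plain `u64` word. This is a minimal, self-contained illustration (8-bit fields, width strictly less than 64, hypothetical helper `apply_in_place_u64` — not the library's generic code): values are peeled off a read buffer, transformed, and repacked into a write buffer that is stored with a single write.

```rust
// Sketch of the buffered apply-in-place step for a power-of-two bit width.
// Assumes bit_width < 64 so the mask computation does not overflow.
fn apply_in_place_u64(word: u64, bit_width: u32, mut f: impl FnMut(u64) -> u64) -> u64 {
    assert!(bit_width.is_power_of_two() && bit_width < 64);
    let mask = (1u64 << bit_width) - 1;
    let mut read_buffer = word;
    let mut write_buffer = 0u64;
    let mut bits_in_buffer = 0;
    while bits_in_buffer < 64 {
        // Extract the next field, throw its bits away from the read buffer,
        // transform it, and pack it into the write buffer at the same offset.
        let value = read_buffer & mask;
        read_buffer >>= bit_width;
        write_buffer |= f(value) << bits_in_buffer;
        bits_in_buffer += bit_width;
    }
    write_buffer
}

fn main() {
    // Eight 8-bit fields holding 0..8, each incremented by one.
    let word = u64::from_le_bytes([0, 1, 2, 3, 4, 5, 6, 7]);
    let result = apply_in_place_u64(word, 8, |x| x + 1);
    assert_eq!(result, u64::from_le_bytes([1, 2, 3, 4, 5, 6, 7, 8]));
}
```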

    type ChunksMut<'a>
        = ChunksMut<'a, W>
    where
        Self: 'a;

    type ChunksMutError = ChunksMutError<W>;

    /// # Errors
    ///
    /// This method will return an error if the chunk size multiplied by the
    /// [bit width](BitWidth::bit_width) is not a multiple of `W::BITS` and
    /// more than one chunk must be returned.
    fn try_chunks_mut(
        &mut self,
        chunk_size: usize,
    ) -> Result<Self::ChunksMut<'_>, ChunksMutError<W>> {
        let len = self.len();
        let bit_width = self.bit_width();
        if len <= chunk_size || (chunk_size * bit_width) % W::BITS == 0 {
            Ok(ChunksMut {
                remaining: len,
                bit_width: self.bit_width,
                chunk_size,
                iter: self.bits.as_mut()[..(len * bit_width).div_ceil(W::BITS)]
                    .chunks_mut((chunk_size * bit_width).div_ceil(W::BITS)),
            })
        } else {
            Err(ChunksMutError {
                bit_width,
                chunk_size,
                _marker: core::marker::PhantomData,
            })
        }
    }
}
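The condition above can be checked with plain arithmetic: mutable chunks can be handed out as disjoint word slices only when every chunk covers a whole number of words, that is, when `chunk_size * bit_width` is a multiple of `W::BITS`. A small sketch (hypothetical helper, 64-bit words assumed):

```rust
// Returns true when chunk boundaries coincide with word boundaries, so each
// chunk can be a disjoint `&mut [u64]` slice.
fn chunk_is_word_aligned(chunk_size: usize, bit_width: usize, word_bits: usize) -> bool {
    (chunk_size * bit_width) % word_bits == 0
}

fn main() {
    // 16 values of 4 bits = 64 bits: each chunk fills whole 64-bit words.
    assert!(chunk_is_word_aligned(16, 4, 64));
    // 10 values of 7 bits = 70 bits: a chunk boundary falls mid-word.
    assert!(!chunk_is_word_aligned(10, 7, 64));
}
```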

// Support for unchecked iterators

/// An [`UncheckedIterator`] over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorUncheckedIterator<'a, W, B>
where
    W: Word,
{
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (fill, word_index);
        let window = if index == vec.len() {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            let bit_offset = index * vec.bit_width;
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = W::BITS - bit_index;
            unsafe {
                // SAFETY: the index has been checked at the start and it is within bounds.
                *vec.bits.as_ref().get_unchecked(word_index) >> bit_index
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            let res = self.window & self.vec.mask;
            self.window >>= bit_width;
            return res;
        }

        let res = self.window;
        self.word_index += 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let res = (res | (self.window << self.fill)) & self.vec.mask;
        let used = bit_width - self.fill;
        self.window >>= used;
        self.fill = W::BITS - used;
        res
    }
}
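The refill step of the forward iterator can be sketched with plain `u64` state (hypothetical helper `refill_next`, mirroring the slow path above): when the window holds fewer than `bit_width` bits, the next word is loaded and the value is recomposed from the leftover low bits plus the low bits of the new word.

```rust
// One slow-path step of the forward unchecked iterator: `window` holds `fill`
// valid bits; the rest of the value comes from `next_word`.
fn refill_next(window: &mut u64, fill: &mut u32, next_word: u64, bit_width: u32, mask: u64) -> u64 {
    let res = *window;
    *window = next_word;
    // Recompose: leftover low bits, then the new word's bits shifted into place.
    let res = (res | (*window << *fill)) & mask;
    let used = bit_width - *fill;
    *window >>= used;
    *fill = 64 - used;
    res
}

fn main() {
    // A 12-bit value split as 5 low bits left in the old window and 7 high
    // bits at the start of the next word.
    let value: u64 = 0xABC;
    let mut window = value & 0b11111; // 5 leftover bits
    let mut fill = 5;
    let next_word = value >> 5; // next word starts with the high 7 bits
    let mask = (1u64 << 12) - 1;
    assert_eq!(refill_next(&mut window, &mut fill, next_word, 12, mask), 0xABC);
    assert_eq!(fill, 57); // 64 - (12 - 5) bits remain available
}
```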

impl<'a, W: Word, B: AsRef<[W]>> IntoUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoUncheckedIter = BitFieldVectorUncheckedIterator<'a, W, B>;
    fn into_unchecked_iter_from(self, from: usize) -> Self::IntoUncheckedIter {
        BitFieldVectorUncheckedIterator::new(self, from)
    }
}

/// An [`UncheckedIterator`] moving backwards over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorReverseUncheckedIterator<'a, W: Word, B> {
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorReverseUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (word_index, fill);

        let window = if index == 0 {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            // We have to handle the case of zero bit width.
            let bit_offset = (index * vec.bit_width).saturating_sub(1);
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = bit_index + 1;
            unsafe {
                // SAFETY: the index has been checked at the start and it is within bounds.
                *vec.bits.as_ref().get_unchecked(word_index) << (W::BITS - fill)
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorReverseUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            self.window = self.window.rotate_left(bit_width as u32);
            return self.window & self.vec.mask;
        }

        let mut res = self.window.rotate_left(self.fill as u32);
        self.word_index -= 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let used = bit_width - self.fill;
        res = ((res << used) | (self.window >> (W::BITS - used))) & self.vec.mask;
        self.window <<= used;
        self.fill = W::BITS - used;
        res
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoReverseUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoRevUncheckedIter = BitFieldVectorReverseUncheckedIterator<'a, W, B>;

    fn into_rev_unchecked_iter(self) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, self.len())
    }

    fn into_rev_unchecked_iter_from(self, from: usize) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, from)
    }
}

/// An [`Iterator`] over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVecIterator<'a, W, B>
where
    W: Word,
{
    unchecked: BitFieldVectorUncheckedIterator<'a, W, B>,
    range: core::ops::Range<usize>,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVecIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, from: usize) -> Self {
        let len = vec.len();
        if from > len {
            panic!("Start index out of bounds: {} > {}", from, len);
        }
        Self {
            unchecked: BitFieldVectorUncheckedIterator::new(vec, from),
            range: from..len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> Iterator for BitFieldVecIterator<'_, W, B> {
    type Item = W;
    fn next(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        // SAFETY: the index has just been checked.
        let res = unsafe { self.unchecked.next_unchecked() };
        self.range.start += 1;
        Some(res)
    }

    #[inline(always)]
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.range.len(), Some(self.range.len()))
    }
}

impl<W: Word, B: AsRef<[W]>> ExactSizeIterator for BitFieldVecIterator<'_, W, B> {
    #[inline(always)]
    fn len(&self) -> usize {
        self.range.len()
    }
}

/// This implements iteration from the end, but it is slower than forward
/// iteration: here we perform a random access per element, whereas the forward
/// iterator proceeds sequentially and keeps a buffer of `W::BITS` bits to
/// speed up the iteration.
///
/// If needed, we could also keep a buffer from the end, but the logic would be
/// more complex and potentially slower.
impl<W: Word, B: AsRef<[W]>> DoubleEndedIterator for BitFieldVecIterator<'_, W, B> {
    fn next_back(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        self.range.end -= 1;
        // The index has just been checked; this is the random access mentioned
        // in the implementation note above.
        Some(self.unchecked.vec.get(self.range.end))
    }
}

/// Equality between bit-field vectors requires that the word type is the same,
/// the bit width is the same, and the content is the same.
impl<W: Word, B: AsRef<[W]>, C: AsRef<[W]>> PartialEq<BitFieldVec<W, C>> for BitFieldVec<W, B> {
    fn eq(&self, other: &BitFieldVec<W, C>) -> bool {
        if self.bit_width() != other.bit_width() {
            return false;
        }
        if self.len() != other.len() {
            return false;
        }
        let bit_len = self.len() * self.bit_width();
        if self.bits.as_ref()[..bit_len / W::BITS] != other.bits.as_ref()[..bit_len / W::BITS] {
            return false;
        }

        let residual = bit_len % W::BITS;
        residual == 0
            || (self.bits.as_ref()[bit_len / W::BITS] ^ other.bits.as_ref()[bit_len / W::BITS])
                << (W::BITS - residual)
                == W::ZERO
    }
}
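The residual comparison above can be illustrated on plain `u64` words (hypothetical values and helper `last_word_eq`): only the low `residual` bits of the last word are meaningful, so the XOR of the two words is shifted left by `W::BITS - residual` to discard the unused high bits before testing for zero.

```rust
// True when `a` and `b` agree on their low `residual` bits; the unused high
// bits are ignored, as in the equality check above.
fn last_word_eq(a: u64, b: u64, residual: u32) -> bool {
    residual == 0 || ((a ^ b) << (64 - residual)) == 0
}

fn main() {
    let residual = 10;
    // Same low 10 bits, different garbage above.
    let a: u64 = 0x3FF | (0xDEAD << 10);
    let b: u64 = 0x3FF | (0xBEEF << 10);
    assert!(last_word_eq(a, b, residual));
    assert!(!last_word_eq(a, a ^ 1, residual)); // differ in bit 0
}
```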

impl<'a, W: Word, B: AsRef<[W]>> IntoIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoIter = BitFieldVecIterator<'a, W, B>;

    fn into_iter(self) -> Self::IntoIter {
        BitFieldVecIterator::new(self, 0)
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoIteratorFrom for &'a BitFieldVec<W, B> {
    type IntoIterFrom = BitFieldVecIterator<'a, W, B>;

    fn into_iter_from(self, from: usize) -> Self::IntoIter {
        BitFieldVecIterator::new(self, from)
    }
}

impl<W: Word> core::iter::Extend<W> for BitFieldVec<W, Vec<W>> {
    fn extend<T: IntoIterator<Item = W>>(&mut self, iter: T) {
        for value in iter {
            self.push(value);
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
    pub fn iter_from(&self, from: usize) -> BitFieldVecIterator<'_, W, B> {
        BitFieldVecIterator::new(self, from)
    }

    pub fn iter(&self) -> BitFieldVecIterator<'_, W, B> {
        self.iter_from(0)
    }
}

/// A tentatively thread-safe vector of bit fields of fixed width (AKA “compact
/// arrays”, “bit arrays”, etc.).
///
/// This implementation provides some concurrency guarantees, albeit not
/// full-fledged thread safety: more precisely, we can guarantee thread safety
/// if the bit width is a power of two; otherwise, concurrent writes to values
/// that cross word boundaries might result in different threads each
/// succeeding in writing only part of a value. If the user can guarantee that
/// no two threads ever write to the same boundary-crossing value, then no race
/// condition can happen.
///
/// See the [module documentation](crate::bits) for more details.
#[derive(Debug, Clone, Hash, MemDbg, MemSize)]
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
pub struct AtomicBitFieldVec<W: Word + IntoAtomic = usize, B = Vec<<W as IntoAtomic>::AtomicType>> {
    /// The underlying storage.
    bits: B,
    /// The bit width of the values stored in the vector.
    bit_width: usize,
    /// A mask with its lowest `bit_width` bits set to one.
    mask: W,
    /// The length of the vector.
    len: usize,
}

impl<W: Word + IntoAtomic, B> AtomicBitFieldVec<W, B> {
    /// # Safety
    ///
    /// `len` * `bit_width` must be between 0 (inclusive) and the number of
    /// bits in `bits` (inclusive).
    #[inline(always)]
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
        Self {
            bits,
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    #[inline(always)]
    pub fn into_raw_parts(self) -> (B, usize, usize) {
        (self.bits, self.bit_width, self.len)
    }

    /// Returns the mask used to extract values from the vector.
    /// The mask keeps the lowest `bit_width` bits.
    pub fn mask(&self) -> W {
        self.mask
    }
}

impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B> {
    /// Returns the backend of the `AtomicBitFieldVec` as a slice of `A`, where
    /// `A` is the atomic variant of `W`.
    pub fn as_slice(&self) -> &[W::AtomicType] {
        self.bits.as_ref()
    }
}

impl<W: Word + IntoAtomic> AtomicBitFieldVec<W>
where
    W::AtomicType: AtomicUnsignedInt,
{
    pub fn new(bit_width: usize, len: usize) -> AtomicBitFieldVec<W> {
        // We need at least two words to avoid branches in the gets.
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
        AtomicBitFieldVec::<W> {
            bits: (0..n_of_words)
                .map(|_| W::AtomicType::new(W::ZERO))
                .collect(),
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }
}

impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
{
    /// Writes zeros in all values.
    #[deprecated(since = "0.4.4", note = "reset is deprecated in favor of reset_atomic")]
    pub fn reset(&mut self, ordering: Ordering) {
        self.reset_atomic(ordering)
    }
}

impl<W: Word + IntoAtomic, B> BitWidth<W::AtomicType> for AtomicBitFieldVec<W, B> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> SliceByValue for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
    Self: AtomicBitFieldSlice<W>,
{
    type Value = W;

    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> Self::Value {
        unsafe { self.get_atomic_unchecked(index, Ordering::Relaxed) }
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> AtomicBitFieldSlice<W>
    for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
{
    #[inline]
    fn len(&self) -> usize {
        self.len
    }

    #[inline]
    unsafe fn get_atomic_unchecked(&self, index: usize, order: Ordering) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                (bits.get_unchecked(word_index).load(order) >> bit_index) & self.mask
            } else {
                ((bits.get_unchecked(word_index).load(order) >> bit_index)
                    | (bits.get_unchecked(word_index + 1).load(order) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }
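The boundary-crossing branch above can be checked on plain `u64` words (hypothetical layout and helper `get_crossing`): a 12-bit value starting at bit 58 spans two words, so its low 6 bits come from the top of the first word and its high 6 bits from the bottom of the second.

```rust
// Recompose a bit field that starts at `bit_index` in `w0` and spills into
// `w1`, mirroring the second branch of the read above.
fn get_crossing(w0: u64, w1: u64, bit_index: u32, bit_width: u32) -> u64 {
    let mask = (1u64 << bit_width) - 1;
    ((w0 >> bit_index) | (w1 << (64 - bit_index))) & mask
}

fn main() {
    let value: u64 = 0xABC; // 12 bits
    let bit_index = 58;
    let w0 = value << bit_index; // low 6 bits land at bits 58..64 of w0
    let w1 = value >> (64 - bit_index); // high 6 bits land at bits 0..6 of w1
    assert_eq!(get_crossing(w0, w1, bit_index, 12), 0xABC);
}
```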

    // We reimplement set as we have the mask in the structure.

    /// Sets the element of the slice at the specified index.
    ///
    /// May panic if the index is not in [0..[len](SliceByValue::len)) or the
    /// value does not fit in [`BitWidth::bit_width`] bits.
    #[inline(always)]
    fn set_atomic(&self, index: usize, value: W, order: Ordering) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_atomic_unchecked(index, value, order);
        }
    }

    #[inline]
    unsafe fn set_atomic_unchecked(&self, index: usize, value: W, order: Ordering) {
        unsafe {
            debug_assert!(self.bit_width != W::BITS);
            let pos = index * self.bit_width;
            let word_index = pos / W::BITS;
            let bit_index = pos % W::BITS;
            let bits = self.bits.as_ref();

            if bit_index + self.bit_width <= W::BITS {
                // This case is consistent.
                let mut current = bits.get_unchecked(word_index).load(order);
                loop {
                    let mut new = current;
                    new &= !(self.mask << bit_index);
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(current, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => current = e,
                    }
                }
            } else {
                let mut word = bits.get_unchecked(word_index).load(order);
                // Try to wait for the other thread to finish.
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    new &= (W::ONE << bit_index) - W::ONE;
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);

                // This ensures that the compiler does not reorder the two
                // atomic operations, which should increase the probability of
                // consistency between two concurrent writes, as they will both
                // execute the sets of the bits in the same order, and the
                // release/acquire fences should try to synchronize the threads
                // as much as possible.
                compiler_fence(Ordering::SeqCst);

                let mut word = bits.get_unchecked(word_index + 1).load(order);
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    new &= !(self.mask >> (W::BITS - bit_index));
                    new |= value >> (W::BITS - bit_index);

                    match bits
                        .get_unchecked(word_index + 1)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);
            }
        }
    }

    fn reset_atomic(&mut self, ordering: Ordering) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_ref();
        bits[..full_words]
            .iter()
            .for_each(|x| x.store(W::ZERO, ordering));
        if residual != 0 {
            bits[full_words].fetch_and(W::MAX << residual, ordering);
        }
    }

    #[cfg(feature = "rayon")]
    fn par_reset_atomic(&mut self, ordering: Ordering) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_ref();
        bits[..full_words]
            .par_iter()
            .with_min_len(RAYON_MIN_LEN)
            .for_each(|x| x.store(W::ZERO, ordering));
        if residual != 0 {
            bits[full_words].fetch_and(W::MAX << residual, ordering);
        }
    }
}
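The partial-word reset above can be sketched with a plain `u64` (hypothetical values): only the low `residual` bits of the last word belong to the vector, so they are zeroed with an AND against `u64::MAX << residual`, leaving the unused high bits untouched.

```rust
fn main() {
    let residual = 20;
    let last_word: u64 = u64::MAX; // pretend every bit was set
    let cleared = last_word & (u64::MAX << residual);
    // The in-use low bits are now zero...
    assert_eq!(cleared & ((1u64 << residual) - 1), 0);
    // ...while the bits beyond the logical length are untouched.
    assert_eq!(cleared >> residual, u64::MAX >> residual);
}
```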

// Conversions

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Vec<W::AtomicType>>>
    for BitFieldVec<W, Vec<W>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Vec<W::AtomicType>>) -> Self {
        BitFieldVec {
            bits: transmute_vec_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Box<[W::AtomicType]>>>
    for BitFieldVec<W, Box<[W]>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Box<[W::AtomicType]>>) -> Self {
        BitFieldVec {
            bits: transmute_boxed_slice_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a [W::AtomicType]>>
    for BitFieldVec<W, &'a [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W::AtomicType], &'a [W]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a mut [W::AtomicType]>>
    for BitFieldVec<W, &'a mut [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a mut [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W::AtomicType], &'a mut [W]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<BitFieldVec<W, Vec<W>>>
    for AtomicBitFieldVec<W, Vec<W::AtomicType>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_vec_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<BitFieldVec<W, Box<[W]>>>
    for AtomicBitFieldVec<W, Box<[W::AtomicType]>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_boxed_slice_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a [W]>>
    for AtomicBitFieldVec<W, &'a [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W], &'a [W::AtomicType]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a mut [W]>>
    for AtomicBitFieldVec<W, &'a mut [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a mut [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W], &'a mut [W::AtomicType]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}
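
// A self-contained note (std only) on the guard in the two `TryFrom` impls
// above: the slice transmute is sound only if `W` and `W::AtomicType` have
// identical layout. Sizes always agree for std atomics, but alignment can
// differ (e.g., `AtomicU64` is always 8-byte aligned, while `u64` may be
// 4-byte aligned on some 32-bit targets), which is exactly what the runtime
// alignment check rejects.
#[cfg(test)]
mod atomic_layout_sketch {
    use core::sync::atomic::AtomicU64;

    #[test]
    fn atomic_u64_has_u64_size() {
        // Guaranteed by the std docs; alignment is the part that can
        // actually diverge on some targets.
        assert_eq!(
            core::mem::size_of::<AtomicU64>(),
            core::mem::size_of::<u64>()
        );
    }
}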

impl<W: Word> From<BitFieldVec<W, Vec<W>>> for BitFieldVec<W, Box<[W]>> {
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_boxed_slice(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word> From<BitFieldVec<W, Box<[W]>>> for BitFieldVec<W, Vec<W>> {
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_vec(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a> for BitFieldVec<W, B> {
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue for BitFieldVec<W, B> {
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(0)
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVec<W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom for BitFieldVec<W, B> {
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_with_capacity() {
        let mut b = BitFieldVec::<usize, _>::with_capacity(10, 100);
        let capacity = b.bits.capacity();
        for _ in 0..100 {
            b.push(0);
        }
        assert_eq!(b.bits.capacity(), capacity);
    }

    fn copy<W: Word, B: AsRef<[W]>, C: AsRef<[W]> + AsMut<[W]>>(
        source: &BitFieldVec<W, B>,
        from: usize,
        dest: &mut BitFieldVec<W, C>,
        to: usize,
        len: usize,
    ) {
        let len = Ord::min(Ord::min(len, dest.len - to), source.len - from);
        for i in 0..len {
            dest.set_value(to + i, source.index_value(from + i));
        }
    }

    #[test]
    fn test_copy() {
        for src_pattern in 0..8 {
            for dst_pattern in 0..8 {
                // if from_first_word == from_last_word && to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 2, 10);
                copy(&source, 1, &mut dest_expected, 2, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if from_first_word == from_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 20, 10);
                copy(&source, 1, &mut dest_expected, 20, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(20, &mut dest_actual, 1, 10);
                copy(&source, 20, &mut dest_expected, 1, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 40);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 61);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 61);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40 + 17);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40 + 17);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 64);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 21 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 21 + 64);
                assert_eq!(dest_actual, dest_expected);
            }
        }
    }
}