/*
 * SPDX-FileCopyrightText: 2023 Inria
 * SPDX-FileCopyrightText: 2023 Sebastiano Vigna
 *
 * SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
 */

//! Vectors of values of fixed bit width.
//!
//! Elements are stored contiguously, with no padding bits (in particular,
//! unless the bit width is a power of two some elements will be stored across
//! word boundaries).
//!
//! There are two flavors: [`BitFieldVec`], a mutable bit-field vector, and
//! [`AtomicBitFieldVec`], a mutable, thread-safe bit-field vector.
//!
//! These flavors depend on a backend, and presently we provide, given an
//! unsigned integer type `W` or an unsigned atomic integer type `A`:
//!
//! - `BitFieldVec<Vec<W>>`: a mutable, growable and resizable bit-field vector;
//! - `BitFieldVec<AsRef<[W]>>`: an immutable bit-field vector, useful for
//!   [ε-serde](https://crates.io/crates/epserde) support;
//! - `BitFieldVec<AsRef<[W]> + AsMut<[W]>>`: a mutable (but not resizable)
//!   bit-field vector;
//! - `AtomicBitFieldVec<AsRef<[A]>>`: a partially thread-safe, mutable (but not
//!   resizable) bit-field vector.
//!
//! More generally, the underlying type must satisfy the trait [`Word`] for
//! [`BitFieldVec`] and additionally [`IntoAtomic`] for [`AtomicBitFieldVec`].
//! A blanket implementation exposes slices of elements of type `W` as bit-field
//! vectors of width `W::BITS`, and analogously for atomic types `A`.
//!
//! The traits [`BitFieldSlice`] and [`BitFieldSliceMut`] provide a uniform
//! interface to access the content of (a reference to) a bit-field vector.
//! There is also an [`AtomicBitFieldSlice`] trait for atomic bit-field vectors.
//! Since they are also implemented for slices of words, they make it easy to
//! write generic code that works both on bit-field vectors and on slices of
//! words when you need to consider the bit width of each element.
//!
//! Note that the [`try_chunks_mut`](BitFieldSliceMut::try_chunks_mut) method
//! is implemented on the [`BitFieldSliceMut`] trait, and thus returns an
//! iterator over elements implementing [`SliceByValueMut`]; the elements,
//! however, also implement [`BitFieldSliceMut`], and you can use this
//! property by adding the bound
//! ```ignore
//!    for<'a> BitFieldSliceMut<W, ChunksMut<'a>: Iterator<Item: BitFieldSliceMut<W>>>
//! ```
//!
//! Nothing is assumed about the content of the backend outside the
//! bits of the vector. Moreover, the content of the backend outside of the
//! vector is never modified by the methods of this structure.
//!
//! For high-speed unchecked scanning, we implement [`IntoUncheckedIterator`]
//! and [`IntoReverseUncheckedIterator`] on a reference to this type. They are
//! used, for example, to provide
//! [predecessor](crate::traits::indexed_dict::Pred) and
//! [successor](crate::traits::indexed_dict::Succ) primitives for
//! [`EliasFano`].
//!
//! # Low-level support
//!
//! The methods [`addr_of`](BitFieldVec::addr_of) and
//! [`get_unaligned`](BitFieldVec::get_unaligned) can be used to manually
//! prefetch parts of the data structure, or to read values using unaligned
//! reads, when the bit width makes it possible.
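//!
//! For instance (a minimal sketch; [`new_unaligned`](BitFieldVec::new_unaligned)
//! allocates the extra padding word that unaligned reads require):
//!
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! let mut b = <BitFieldVec<usize>>::new_unaligned(12, 100);
//! b.set(42, 1000);
//! // Manually prefetch the word holding element 42, then read it unaligned.
//! let _ptr = b.addr_of(42);
//! assert_eq!(b.get_unaligned(42), 1000);
//! ```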
//!
//! # Examples
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! // Bit field vector of bit width 5 and length 10, all entries set to zero
//! let mut b = <BitFieldVec<usize>>::new(5, 10);
//! assert_eq!(b.len(), 10);
//! assert_eq!(b.bit_width(), 5);
//! b.set(0, 3);
//! assert_eq!(b.get(0), 3);
//!
//! // Empty bit field vector of bit width 20 with capacity 10
//! let mut b = <BitFieldVec<usize>>::with_capacity(20, 10);
//! assert_eq!(b.len(), 0);
//! assert_eq!(b.bit_width(), 20);
//! b.push(20);
//! assert_eq!(b.len(), 1);
//! assert_eq!(b.get(0), 20);
//! assert_eq!(b.pop(), Some(20));
//!
//! // Convenience macro
//! let b = bit_field_vec![10; 4, 500, 2, 0, 1];
//! assert_eq!(b.len(), 5);
//! assert_eq!(b.bit_width(), 10);
//! assert_eq!(b.get(0), 4);
//! assert_eq!(b.get(1), 500);
//! assert_eq!(b.get(2), 2);
//! assert_eq!(b.get(3), 0);
//! assert_eq!(b.get(4), 1);
//! ```

#[cfg(feature = "rayon")]
use crate::RAYON_MIN_LEN;
use crate::prelude::{bit_field_slice::*, *};
use crate::utils::{
    CannotCastToAtomicError, transmute_boxed_slice_from_atomic, transmute_boxed_slice_into_atomic,
    transmute_vec_from_atomic, transmute_vec_into_atomic,
};
use crate::{panic_if_out_of_bounds, panic_if_value};
use anyhow::{Result, bail};
use common_traits::{
    AsBytes, Atomic, AtomicInteger, AtomicUnsignedInt, CastableInto, IntoAtomic, invariant_eq,
};
use mem_dbg::*;
#[cfg(feature = "rayon")]
use rayon::prelude::*;
use std::sync::atomic::{Ordering, compiler_fence, fence};
use value_traits::slices::{SliceByValue, SliceByValueMut};

/// Convenient, [`vec!`]-like macro to initialize `usize`-based bit-field
/// vectors.
///
/// - `bit_field_vec![width]`: creates an empty bit-field vector of given bit
///   width.
///
/// - `bit_field_vec![width => value; length]`: creates a bit-field vector of
///   given bit width and length, with all entries set to `value`.
///
/// - `bit_field_vec![width; v₀, v₁, … ]`: creates a bit-field vector of
///   given bit width with entries set to `v₀`, `v₁`, ….
///
/// Note that the syntax `bit_field_vec![width; length; value]` has been
/// deprecated in favor of `bit_field_vec![width => value; length]`, so that
/// value and length are in the same order as in [`vec!`].
///
/// # Examples
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// // Empty bit field vector of bit width 5
/// let b = bit_field_vec![5];
/// assert_eq!(b.len(), 0);
/// assert_eq!(b.bit_width(), 5);
///
/// // 10 values of bit width 6, all set to 3
/// let b = bit_field_vec![6 => 3; 10];
/// assert_eq!(b.len(), 10);
/// assert_eq!(b.bit_width(), 6);
/// assert_eq!(b.iter().all(|x| x == 3), true);
///
/// // List of values of bit width 10
/// let b = bit_field_vec![10; 4, 500, 2, 0, 1];
/// assert_eq!(b.len(), 5);
/// assert_eq!(b.bit_width(), 10);
/// assert_eq!(b.get(0), 4);
/// assert_eq!(b.get(1), 500);
/// assert_eq!(b.get(2), 2);
/// assert_eq!(b.get(3), 0);
/// assert_eq!(b.get(4), 1);
/// ```
#[macro_export]
macro_rules! bit_field_vec {
    ($w:expr) => {
        $crate::bits::BitFieldVec::<usize, _>::new($w, 0)
    };
    ($w:expr; $n:expr; $v:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr => $v:expr; $n:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr; $($x:expr),+ $(,)?) => {
        {
            let mut b = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, [$($x),+].len());
            $(
                // Force type
                let x: usize = $x;
                b.push(x);
            )*
            b
        }
    };
}

/// A vector of bit fields of fixed width.
///
/// See the [module documentation](crate::bits) for more details.
#[derive(Debug, Clone, Copy, Hash, MemDbg, MemSize, value_traits::Subslices)]
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[value_traits_subslices(bound = "B: AsRef<[W]>")]
#[derive(value_traits::SubslicesMut)]
#[value_traits_subslices_mut(bound = "B: AsRef<[W]> + AsMut<[W]>")]
pub struct BitFieldVec<W: Word = usize, B = Vec<W>> {
    /// The underlying storage.
    bits: B,
    /// The bit width of the values stored in the vector.
    bit_width: usize,
    /// A mask with its lowest `bit_width` bits set to one.
    mask: W,
    /// The length of the vector.
    len: usize,
}

/// Robust, heavily checked mask function for constructors and similar methods.
fn mask<W: Word>(bit_width: usize) -> W {
    if bit_width == 0 {
        W::ZERO
    } else {
        W::MAX >> W::BITS.checked_sub(bit_width).expect("bit_width > W::BITS")
    }
}

/// Equality between bit-field vectors requires that the word type, the bit
/// width, and the content are the same.
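///
/// A minimal sketch:
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// assert_eq!(bit_field_vec![4; 1, 2], bit_field_vec![4; 1, 2]);
/// // Same content, different bit width: not equal.
/// assert_ne!(bit_field_vec![4; 1, 2], bit_field_vec![5; 1, 2]);
/// ```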
impl<W: Word, B: AsRef<[W]>, C: AsRef<[W]>> PartialEq<BitFieldVec<W, C>> for BitFieldVec<W, B> {
    fn eq(&self, other: &BitFieldVec<W, C>) -> bool {
        if self.bit_width() != other.bit_width() {
            return false;
        }
        if self.len() != other.len() {
            return false;
        }
        let bit_len = self.len() * self.bit_width();
        if self.bits.as_ref()[..bit_len / W::BITS] != other.bits.as_ref()[..bit_len / W::BITS] {
            return false;
        }

        let residual = bit_len % W::BITS;
        residual == 0
            || (self.bits.as_ref()[bit_len / W::BITS] ^ other.bits.as_ref()[bit_len / W::BITS])
                << (W::BITS - residual)
                == W::ZERO
    }
}

impl<W: Word, B: AsRef<[W]>> Eq for BitFieldVec<W, B> {}

impl<W: Word, B> BitFieldVec<W, B> {
    /// # Safety
    /// `len` * `bit_width` must be between 0 (included) and the number of
    /// bits in `bits` (included).
    #[inline(always)]
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
        Self {
            bits,
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    #[inline(always)]
    pub fn into_raw_parts(self) -> (B, usize, usize) {
        (self.bits, self.bit_width, self.len)
    }

    #[inline(always)]
    /// Modifies the bit field in place.
    ///
    /// # Safety
    /// This is unsafe because it is the caller's responsibility to ensure
    /// that the length is compatible with the modified bits.
    pub unsafe fn map<W2: Word, B2>(self, f: impl FnOnce(B) -> B2) -> BitFieldVec<W2, B2> {
        BitFieldVec {
            bits: f(self.bits),
            bit_width: self.bit_width,
            mask: mask(self.bit_width),
            len: self.len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
    /// Gets the address of the item storing (the first part of)
    /// the element of given index.
    ///
    /// This method is mainly useful for manually prefetching
    /// parts of the data structure.
    pub fn addr_of(&self, index: usize) -> *const W {
        let start_bit = index * self.bit_width;
        let word_index = start_bit / W::BITS;
        (&self.bits.as_ref()[word_index]) as *const _
    }

    /// Like [`BitFieldSlice::get`], but using unaligned reads.
    ///
    /// This method can be used only for bit widths smaller than or equal to
    /// `W::BITS - 8 + 2`, or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
    /// an additional padding word must be present at the end of the vector.
    ///
    /// Note that to guarantee the absence of undefined behavior this method
    /// has to perform several tests. Consider using
    /// [`get_unaligned_unchecked`](Self::get_unaligned_unchecked) if you are
    /// sure that the constraints are respected.
    ///
    /// # Panics
    ///
    /// This method will panic if the constraints above are not respected.
    pub fn get_unaligned(&self, index: usize) -> W {
65✔
311
        assert!(
65✔
312
            self.bit_width <= W::BITS - 8 + 2
65✔
313
                || self.bit_width == W::BITS - 8 + 4
24✔
314
                || self.bit_width == W::BITS
14✔
315
        );
316
        panic_if_out_of_bounds!(index, self.len);
61✔
317
        assert!(
61✔
318
            (index * self.bit_width) / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES
122✔
319
        );
320
        unsafe { self.get_unaligned_unchecked(index) }
180✔
321
    }
322

323
    /// Like [`BitFieldSlice::get_unchecked`], but using unaligned reads.
324
    ///
325
    /// # Safety
326
    ///
327
    /// This method can be used only for bit width smaller than or equal to
328
    /// `W::BITS - 8 + 2` or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
329
    /// an additional padding word must be present at the end of the vector.
330
    ///
331
    /// # Panics
332
    ///
333
    /// This method will panic in debug mode if the safety constraints are not
334
    /// respected.
335
    pub unsafe fn get_unaligned_unchecked(&self, index: usize) -> W {
65✔
336
        debug_assert!(
65✔
337
            self.bit_width <= W::BITS - 8 + 2
65✔
338
                || self.bit_width == W::BITS - 8 + 4
24✔
339
                || self.bit_width == W::BITS
14✔
340
        );
341
        let base_ptr = self.bits.as_ref().as_ptr() as *const u8;
61✔
342
        let start_bit = index * self.bit_width;
×
343
        debug_assert!(start_bit / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES);
×
344
        let ptr = unsafe { base_ptr.add(start_bit / 8) } as *const W;
180✔
345
        let word = unsafe { core::ptr::read_unaligned(ptr) };
180✔
346
        (word >> (start_bit % 8)) & self.mask
60✔
347
    }
348

349
    /// Returns the backend of the vector as a slice of `W`.
350
    pub fn as_slice(&self) -> &[W] {
3✔
351
        self.bits.as_ref()
3✔
352
    }
353
}
354
/// An iterator over non-overlapping chunks of a bit-field vector, starting at
355
/// the beginning of the vector.
356
///
357
/// When the vector len is not evenly divided by the chunk size, the last chunk
358
/// of the iteration will be shorter.
359
///
360
/// This struct is created by the
361
/// [`try_chunks_mut`](crate::bits::bit_field_vec::BitFieldVec#impl-BitFieldSliceMut<W>-for-BitFieldVec<W,+B>)
362
/// method.
363
pub struct ChunksMut<'a, W: Word> {
364
    remaining: usize,
365
    bit_width: usize,
366
    chunk_size: usize,
367
    iter: std::slice::ChunksMut<'a, W>,
368
}
369

370
impl<'a, W: Word> Iterator for ChunksMut<'a, W> {
371
    type Item = BitFieldVec<W, &'a mut [W]>;
372

373
    #[inline]
374
    fn next(&mut self) -> Option<Self::Item> {
90✔
375
        self.iter.next().map(|chunk| {
359✔
376
            let size = Ord::min(self.chunk_size, self.remaining);
356✔
377
            let next = unsafe { BitFieldVec::from_raw_parts(chunk, self.bit_width, size) };
445✔
378
            self.remaining -= size;
89✔
379
            next
89✔
380
        })
381
    }
382
}
383

384
impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {}
385

386
impl<W: Word> BitFieldVec<W, Vec<W>> {
387
    /// Creates a new zero-initialized vector of given bit width and length.
388
    pub fn new(bit_width: usize, len: usize) -> Self {
976✔
389
        // We need at least one word to handle the case of bit width zero.
390
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
3,904✔
391
        Self {
392
            bits: vec![W::ZERO; n_of_words],
2,928✔
393
            bit_width,
394
            mask: mask(bit_width),
1,952✔
395
            len,
396
        }
397
    }
398

399
    /// Creates a new zero-initialized vector of given bit width and length,
400
    /// adding padding bits to the end of the vector so that unaligned reads are
401
    /// possible.
402
    ///
403
    /// Note that this convenience method is a one-off: if the vector is resized
404
    /// or expanded, the padding will be lost.
405
    pub fn new_unaligned(bit_width: usize, len: usize) -> Self {
88✔
406
        let n_of_words = (len * bit_width).div_ceil(W::BITS);
264✔
407
        Self {
408
            // We add a word at the end
409
            bits: vec![W::ZERO; n_of_words + 1],
264✔
410
            bit_width,
411
            mask: mask(bit_width),
176✔
412
            len,
413
        }
414
    }
415

416
    /// Creates an empty vector that doesn't need to reallocate for up to
417
    /// `capacity` elements.
418
    pub fn with_capacity(bit_width: usize, capacity: usize) -> Self {
1,163✔
419
        // We need at least one word to handle the case of bit width zero.
420
        let n_of_words = Ord::max(1, (capacity * bit_width).div_ceil(W::BITS));
4,652✔
421
        Self {
422
            bits: Vec::with_capacity(n_of_words),
3,489✔
423
            bit_width,
424
            mask: mask(bit_width),
1,163✔
425
            len: 0,
426
        }
427
    }
428

429
    /// Sets the length.
430
    ///
431
    /// # Safety
432
    ///
433
    /// `len * bit_width` must be at most `self.bits.len() * W::BITS`. Note that
434
    /// setting the length might result in reading uninitialized data.
435
    pub unsafe fn set_len(&mut self, len: usize) {
1✔
436
        debug_assert!(len * self.bit_width <= self.bits.len() * W::BITS);
3✔
437
        self.len = len;
1✔
438
    }
439

440
    /// Sets len to 0
441
    pub fn clear(&mut self) {
1✔
442
        self.len = 0;
1✔
443
    }
444

445
    /// Returns the bit width of the values inside the vector.
446
    pub fn bit_width(&self) -> usize {
315✔
447
        debug_assert!(self.bit_width <= W::BITS);
630✔
448
        self.bit_width
315✔
449
    }
450

451
    /// Returns the mask used to extract values from the vector.
452
    /// This will keep the lowest `bit_width` bits.
453
    pub fn mask(&self) -> W {
312✔
454
        self.mask
312✔
455
    }
456

457
    /// Creates a new vector by copying a slice; the bit width will be the minimum
458
    /// width sufficient to hold all values in the slice.
459
    ///
460
    /// Returns an error if the bit width of the values in `slice` is larger than
461
    /// `W::BITS`.
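    ///
    /// # Examples
    ///
    /// A minimal sketch (the bit width of a value is the number of bits needed
    /// to represent it, so 1000 forces a width of 10):
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let source = bit_field_vec![16; 3, 1000, 2];
    /// let b = <BitFieldVec<usize>>::from_slice(&source).unwrap();
    /// assert_eq!(b.bit_width(), 10);
    /// assert_eq!(b.get(1), 1000);
    /// ```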
    pub fn from_slice<SW>(slice: &impl BitFieldSlice<SW>) -> Result<Self>
    where
        SW: Word + CastableInto<W>,
    {
        let mut max_len: usize = 0;
        for i in 0..slice.len() {
            max_len = Ord::max(max_len, unsafe {
                slice.get_value_unchecked(i).len() as usize
            });
        }

        if max_len > W::BITS {
            bail!(
                "Cannot convert a slice of bit width {} into a slice with W = {}",
                max_len,
                std::any::type_name::<W>()
            );
        }
        let mut result = Self::new(max_len, slice.len());
        for i in 0..slice.len() {
            unsafe { result.set_value_unchecked(i, slice.get_value_unchecked(i).cast()) };
        }

        Ok(result)
    }

    /// Adds a value at the end of the vector.
    pub fn push(&mut self, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if (self.len + 1) * self.bit_width > self.bits.len() * W::BITS {
            self.bits.push(W::ZERO);
        }
        unsafe {
            self.set_value_unchecked(self.len, value);
        }
        self.len += 1;
    }

    /// Truncates the vector, or extends it with copies of `value`.
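    ///
    /// A minimal sketch:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let mut b = bit_field_vec![4; 1, 2];
    /// b.resize(4, 9);
    /// assert_eq!(b.len(), 4);
    /// assert_eq!(b.get(3), 9);
    /// b.resize(1, 0);
    /// assert_eq!(b.len(), 1);
    /// ```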
    pub fn resize(&mut self, new_len: usize, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if new_len > self.len {
            if new_len * self.bit_width > self.bits.len() * W::BITS {
                self.bits
                    .resize((new_len * self.bit_width).div_ceil(W::BITS), W::ZERO);
            }
            for i in self.len..new_len {
                unsafe {
                    self.set_value_unchecked(i, value);
                }
            }
        }
        self.len = new_len;
    }

    /// Removes and returns a value from the end of the vector.
    ///
    /// Returns `None` if the [`BitFieldVec`] is empty.
    pub fn pop(&mut self) -> Option<W> {
        if self.len == 0 {
            return None;
        }
        let value = self.index_value(self.len - 1);
        self.len -= 1;
        Some(value)
    }
}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldVec<W, B> {
    /// A version of [`BitFieldSliceMut::set`] that returns the previous value.
    ///
    /// # Panics
    /// - If `index` is out of bounds.
    /// - If `value` is not a valid value for the bit width of the vector.
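    ///
    /// # Examples
    ///
    /// A minimal sketch:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let mut b = bit_field_vec![8; 1, 2, 3];
    /// assert_eq!(b.replace(1, 10).unwrap(), 2);
    /// assert_eq!(b.get(1), 10);
    /// ```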
    pub fn replace(&mut self, index: usize, value: W) -> Result<W> {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        Ok(unsafe { self.replace_unchecked(index, value) })
    }

    /// A version of [`BitFieldSliceMut::set_unchecked`] that returns the
    /// previous value without checking for out-of-bounds access or value
    /// validity.
    ///
    /// # Safety
    /// This method is unsafe because it does not check that `index` is within
    /// bounds.
    pub unsafe fn replace_unchecked(&mut self, index: usize, value: W) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                let mut word = *bits.get_unchecked_mut(word_index);
                let old_value = (word >> bit_index) & self.mask;
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
                old_value
            } else {
                let mut word = *bits.get_unchecked_mut(word_index);
                let mut old_value = word >> bit_index;
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                old_value |= word << (W::BITS - bit_index);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
                old_value & self.mask
            }
        }
    }
}

impl<W: Word, T> BitWidth<W> for BitFieldVec<W, T> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word, B: AsRef<[W]>> SliceByValue for BitFieldVec<W, B> {
    type Value = W;
    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                (*bits.get_unchecked(word_index) >> bit_index) & self.mask
            } else {
                ((*bits.get_unchecked(word_index) >> bit_index)
                    | (*bits.get_unchecked(word_index + 1) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldSlice<W> for BitFieldVec<W, B> {
    fn as_slice(&self) -> &[W] {
        self.bits.as_ref()
    }
}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldSliceMut<W> for BitFieldVec<W, B> {
    #[inline(always)]
    fn mask(&self) -> W {
        self.mask
    }
    fn reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words].iter_mut().for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    #[cfg(feature = "rayon")]
    fn par_reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words]
            .par_iter_mut()
            .with_min_len(RAYON_MIN_LEN)
            .for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    fn as_mut_slice(&mut self) -> &mut [W] {
        self.bits.as_mut()
    }
}

/// Error type returned when [`try_chunks_mut`](SliceByValueMut::try_chunks_mut)
/// does not find sufficient alignment.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ChunksMutError<W: Word> {
    bit_width: usize,
    chunk_size: usize,
    _marker: core::marker::PhantomData<W>,
}

impl<W: Word> core::fmt::Display for ChunksMutError<W> {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        write!(
            f,
            "try_chunks_mut requires the bit width ({}) times the chunk size ({}) to be a multiple of W::BITS ({}) to return more than one chunk",
            self.bit_width,
            self.chunk_size,
            W::BITS
        )
    }
}

impl<W: Word> std::error::Error for ChunksMutError<W> {}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> SliceByValueMut for BitFieldVec<W, B> {
    #[inline(always)]
    fn set_value(&mut self, index: usize, value: W) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_value_unchecked(index, value);
        }
    }

    unsafe fn set_value_unchecked(&mut self, index: usize, value: W) {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
            } else {
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
            }
        }
    }

    /// This implementation performs the copy word by word, which is
    /// significantly faster than the default implementation.
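    ///
    /// A minimal sketch (assuming the `copy` method of [`SliceByValueMut`] is
    /// in scope):
    ///
    /// ```ignore
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let src = bit_field_vec![9; 10, 20, 30, 40];
    /// let mut dst = <BitFieldVec<usize>>::new(9, 4);
    /// src.copy(1, &mut dst, 0, 2);
    /// assert_eq!(dst.get(0), 20);
    /// assert_eq!(dst.get(1), 30);
    /// ```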
    fn copy(&self, from: usize, dst: &mut Self, to: usize, len: usize) {
        assert_eq!(
            self.bit_width, dst.bit_width,
            "Bit widths must be equal (self: {}, dest: {})",
            self.bit_width, dst.bit_width
        );
        // Reduce len to the elements available in both vectors
        let len = Ord::min(Ord::min(len, dst.len - to), self.len - from);
        if len == 0 {
            return;
        }
        let bit_width = Ord::min(self.bit_width, dst.bit_width);
        let bit_len = len * bit_width;
        let src_pos = from * self.bit_width;
        let dst_pos = to * dst.bit_width;
        let src_bit = src_pos % W::BITS;
        let dst_bit = dst_pos % W::BITS;
        let src_first_word = src_pos / W::BITS;
        let dst_first_word = dst_pos / W::BITS;
        let src_last_word = (src_pos + bit_len - 1) / W::BITS;
        let dst_last_word = (dst_pos + bit_len - 1) / W::BITS;
        let source = self.bits.as_ref();
        let dest = dst.bits.as_mut();

        if src_first_word == src_last_word && dst_first_word == dst_last_word {
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = (source[src_first_word] >> src_bit) & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= word << dst_bit;
        } else if src_first_word == src_last_word {
            // dst_first_word != dst_last_word
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = (source[src_first_word] >> src_bit) & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= (word & mask) << dst_bit;
            dest[dst_last_word] &= !(mask >> (W::BITS - dst_bit));
            dest[dst_last_word] |= (word & mask) >> (W::BITS - dst_bit);
        } else if dst_first_word == dst_last_word {
            // src_first_word != src_last_word
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = ((source[src_first_word] >> src_bit)
                | (source[src_last_word] << (W::BITS - src_bit)))
                & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= word << dst_bit;
        } else if src_bit == dst_bit {
            // src_first_word != src_last_word && dst_first_word != dst_last_word
            let mask = W::MAX << dst_bit;
            dest[dst_first_word] &= !mask;
            dest[dst_first_word] |= source[src_first_word] & mask;

            dest[(1 + dst_first_word)..dst_last_word]
                .copy_from_slice(&source[(1 + src_first_word)..src_last_word]);

            let residual =
                bit_len - (W::BITS - src_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else if src_bit < dst_bit {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = dst_bit - src_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) << shift;

            let mut word = source[src_first_word] >> (W::BITS - shift);
            for i in 1..dst_last_word - dst_first_word {
                dest[dst_first_word + i] = word | (source[src_first_word + i] << shift);
                word = source[src_first_word + i] >> (W::BITS - shift);
            }
            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word && src_bit > dst_bit
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = src_bit - dst_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) >> shift;
            dest[dst_first_word] |= source[src_first_word + 1] << (W::BITS - shift);

            let mut word = source[src_first_word + 1] >> shift;

            for i in 1..dst_last_word - dst_first_word {
                word |= source[src_first_word + i + 1] << (W::BITS - shift);
                dest[dst_first_word + i] = word;
                word = source[src_first_word + i + 1] >> shift;
            }

            word |= source[src_last_word] << (W::BITS - shift);

            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= word & mask;
        }
    }

    /// This implementation keeps a buffer of `W::BITS` bits for reading and
    /// writing, obtaining a significant speedup with respect to the default
    /// implementation.
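    ///
    /// A minimal sketch using a checked `apply_in_place` wrapper (assumed to
    /// be provided by the trait and to delegate to this method):
    ///
    /// ```ignore
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let mut b = bit_field_vec![8; 1, 2, 3];
    /// b.apply_in_place(|x| x * 2);
    /// assert_eq!(b.get(2), 6);
    /// ```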
    #[inline]
    unsafe fn apply_in_place_unchecked<F>(&mut self, mut f: F)
    where
        F: FnMut(Self::Value) -> Self::Value,
    {
        if self.is_empty() {
            return;
        }
        let bit_width = self.bit_width();
        if bit_width == 0 {
            return;
        }
        let mask = self.mask();
        let number_of_words: usize = self.bits.as_ref().len();
        let last_word_idx = number_of_words.saturating_sub(1);

        let mut write_buffer: W = W::ZERO;
        let mut read_buffer: W = *unsafe { self.bits.as_ref().get_unchecked(0) };

        // specialized case because it's much faster
        if bit_width.is_power_of_two() {
            let mut bits_in_buffer = 0;

            // TODO!: figure out how to simplify
            let mut buffer_limit = (self.len() * bit_width) % W::BITS;
            if buffer_limit == 0 {
                buffer_limit = W::BITS;
            }

            for read_idx in 1..number_of_words {
                // pre-load the next word so it loads while we parse the buffer
                let next_word: W = *unsafe { self.bits.as_ref().get_unchecked(read_idx) };

                // parse as much as we can from the buffer
                loop {
                    let next_bits_in_buffer = bits_in_buffer + bit_width;

                    if next_bits_in_buffer > W::BITS {
                        break;
                    }

                    let value = read_buffer & mask;
                    // throw away the bits we just read
                    read_buffer >>= bit_width;
                    // apply user func
                    let new_value = f(value);
                    // put the new value in the write buffer
                    write_buffer |= new_value << bits_in_buffer;

                    bits_in_buffer = next_bits_in_buffer;
                }

                invariant_eq!(read_buffer, W::ZERO);
                *unsafe { self.bits.as_mut().get_unchecked_mut(read_idx - 1) } = write_buffer;
                read_buffer = next_word;
                write_buffer = W::ZERO;
                bits_in_buffer = 0;
            }

            // write the last word if we have some bits left
            while bits_in_buffer < buffer_limit {
                let value = read_buffer & mask;
                // throw away the bits we just read
                read_buffer >>= bit_width;
                // apply user func
                let new_value = f(value);
                // put the new value in the write buffer
                write_buffer |= new_value << bits_in_buffer;
                // advance by bit_width bits
                bits_in_buffer += bit_width;
            }

            *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
            return;
        }

        // The position inside the word. In most parametrizations of the
        // vector, since the bit width is not necessarily an integer
        // divisor of the word size, we need to keep track of the position
        // inside the word. As we scroll through the bits, due to the bits
        // remainder, we may need to operate on two words at the same time.
        let mut global_bit_index: usize = 0;

        // The bit boundaries of the current word.
        let mut lower_word_limit = 0;
        let mut upper_word_limit = W::BITS;

        // We iterate across the words
        for word_number in 0..last_word_idx {
            // We iterate across the elements in the word.
            while global_bit_index + bit_width <= upper_word_limit {
                // We retrieve the value from the current word.
                let offset = global_bit_index - lower_word_limit;
                global_bit_index += bit_width;
                let element = self.mask() & (read_buffer >> offset);

                // We apply the function to the element.
                let new_element = f(element);

                // We set the element in the new word.
                write_buffer |= new_element << offset;
            }

            // We retrieve the next word from the bitvec.
            let next_word = *unsafe { self.bits.as_ref().get_unchecked(word_number + 1) };

            let mut new_write_buffer = W::ZERO;
            if upper_word_limit != global_bit_index {
                let remainder = upper_word_limit - global_bit_index;
                let offset = global_bit_index - lower_word_limit;
                // We compose the element from the remaining elements in the
                // current word and the elements in the next word.
                let element = ((read_buffer >> offset) | (next_word << remainder)) & self.mask();
                global_bit_index += bit_width;

                // We apply the function to the element.
                let new_element = f(element);

                write_buffer |= new_element << offset;

                new_write_buffer = new_element >> remainder;
            };

            read_buffer = next_word;

            *unsafe { self.bits.as_mut().get_unchecked_mut(word_number) } = write_buffer;

            write_buffer = new_write_buffer;
            lower_word_limit = upper_word_limit;
            upper_word_limit += W::BITS;
        }

        let mut offset = global_bit_index - lower_word_limit;

        // We iterate across the elements in the word.
        while offset < self.len() * bit_width - global_bit_index {
            // We retrieve the value from the current word.
            let element = self.mask() & (read_buffer >> offset);

            // We apply the function to the element.
            let new_element = f(element);

            // We set the element in the new word.
            write_buffer |= new_element << offset;
            offset += bit_width;
        }

        *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
    }

    type ChunksMut<'a>
        = ChunksMut<'a, W>
    where
        Self: 'a;

    type ChunksMutError = ChunksMutError<W>;

    /// # Errors
    ///
    /// This method will return an error if the chunk size multiplied by the
    /// [bit width](BitWidth::bit_width) is not a multiple of
    /// `W::BITS` and more than one chunk must be returned.
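    ///
    /// # Examples
    ///
    /// A minimal sketch: with `W = usize` (64 bits), chunks of 16 elements of
    /// width 4 span exactly one word each, so the split succeeds:
    ///
    /// ```ignore
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// let mut b = bit_field_vec![4 => 1; 64];
    /// for chunk in b.try_chunks_mut(16).unwrap() {
    ///     assert_eq!(chunk.len(), 16);
    /// }
    /// ```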
    fn try_chunks_mut(
        &mut self,
        chunk_size: usize,
    ) -> Result<Self::ChunksMut<'_>, ChunksMutError<W>> {
        let len = self.len();
        let bit_width = self.bit_width();
        if len <= chunk_size || (chunk_size * bit_width) % W::BITS == 0 {
            Ok(ChunksMut {
                remaining: len,
                bit_width: self.bit_width,
                chunk_size,
                iter: self.bits.as_mut()[..(len * bit_width).div_ceil(W::BITS)]
                    .chunks_mut((chunk_size * bit_width).div_ceil(W::BITS)),
            })
        } else {
            Err(ChunksMutError {
                bit_width,
                chunk_size,
                _marker: core::marker::PhantomData,
            })
        }
    }
}

// Support for unchecked iterators

/// An [`UncheckedIterator`] over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorUncheckedIterator<'a, W, B>
where
    W: Word,
{
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (fill, word_index);
        let window = if index == vec.len() {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            let bit_offset = index * vec.bit_width;
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = W::BITS - bit_index;
            unsafe {
                // SAFETY: index has been checked at the start and it is within bounds
                *vec.bits.as_ref().get_unchecked(word_index) >> bit_index
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            let res = self.window & self.vec.mask;
            self.window >>= bit_width;
            return res;
        }

        let res = self.window;
        self.word_index += 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let res = (res | (self.window << self.fill)) & self.vec.mask;
        let used = bit_width - self.fill;
        self.window >>= used;
        self.fill = W::BITS - used;
        res
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoUncheckedIter = BitFieldVectorUncheckedIterator<'a, W, B>;
    fn into_unchecked_iter_from(self, from: usize) -> Self::IntoUncheckedIter {
        BitFieldVectorUncheckedIterator::new(self, from)
    }
}

/// An [`UncheckedIterator`] moving backwards over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorReverseUncheckedIterator<'a, W: Word, B> {
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorReverseUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (word_index, fill);

        let window = if index == 0 {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            // We have to handle the case of zero bit width
            let bit_offset = (index * vec.bit_width).saturating_sub(1);
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = bit_index + 1;
            unsafe {
                // SAFETY: index has been checked at the start and it is within bounds
                *vec.bits.as_ref().get_unchecked(word_index) << (W::BITS - fill)
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorReverseUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            self.window = self.window.rotate_left(bit_width as u32);
            return self.window & self.vec.mask;
        }

        let mut res = self.window.rotate_left(self.fill as u32);
        self.word_index -= 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let used = bit_width - self.fill;
        res = ((res << used) | (self.window >> (W::BITS - used))) & self.vec.mask;
        self.window <<= used;
        self.fill = W::BITS - used;
        res
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoReverseUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoRevUncheckedIter = BitFieldVectorReverseUncheckedIterator<'a, W, B>;

    fn into_rev_unchecked_iter(self) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, self.len())
    }

    fn into_rev_unchecked_iter_from(self, from: usize) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, from)
    }
}

/// An [`Iterator`] over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVecIterator<'a, W, B>
where
    W: Word,
{
    unchecked: BitFieldVectorUncheckedIterator<'a, W, B>,
    range: core::ops::Range<usize>,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVecIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, from: usize) -> Self {
        let len = vec.len();
        if from > len {
            panic!("Start index out of bounds: {} > {}", from, len);
        }
        Self {
            unchecked: BitFieldVectorUncheckedIterator::new(vec, from),
            range: from..len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> Iterator for BitFieldVecIterator<'_, W, B> {
    type Item = W;
    fn next(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        // SAFETY: index has just been checked.
        let res = unsafe { self.unchecked.next_unchecked() };
        self.range.start += 1;
        Some(res)
    }

    #[inline(always)]
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.range.len(), Some(self.range.len()))
    }
}

impl<W: Word, B: AsRef<[W]>> ExactSizeIterator for BitFieldVecIterator<'_, W, B> {
    #[inline(always)]
    fn len(&self) -> usize {
        self.range.len()
    }
}

/// This implements iteration from the end, but it is slower than forward
/// iteration: here we perform a random access for each element, while the
/// forward iterator performs sequential accesses and keeps a buffer of
/// `W::BITS` bits to speed up the iteration.
///
/// If needed we could also keep a buffer from the end, but the logic would be
/// more complex and potentially slower.
impl<W: Word, B: AsRef<[W]>> DoubleEndedIterator for BitFieldVecIterator<'_, W, B> {
    fn next_back(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        self.range.end -= 1;
        // SAFETY: index has just been checked; as documented above, we read
        // the last element of the remaining range with a random access.
        Some(unsafe { self.unchecked.vec.get_value_unchecked(self.range.end) })
    }
}
1235
impl<'a, W: Word, B: AsRef<[W]>> IntoIterator for &'a BitFieldVec<W, B> {
1236
    type Item = W;
1237
    type IntoIter = BitFieldVecIterator<'a, W, B>;
1238

1239
    fn into_iter(self) -> Self::IntoIter {
2✔
1240
        BitFieldVecIterator::new(self, 0)
4✔
1241
    }
1242
}
1243

1244
impl<'a, W: Word, B: AsRef<[W]>> IntoIteratorFrom for &'a BitFieldVec<W, B> {
1245
    type IntoIterFrom = BitFieldVecIterator<'a, W, B>;
1246

1247
    fn into_iter_from(self, from: usize) -> Self::IntoIter {
×
1248
        BitFieldVecIterator::new(self, from)
×
1249
    }
1250
}
1251

1252
impl<W: Word> core::iter::Extend<W> for BitFieldVec<W, Vec<W>> {
1253
    fn extend<T: IntoIterator<Item = W>>(&mut self, iter: T) {
1✔
1254
        for value in iter {
17✔
1255
            self.push(value);
×
1256
        }
1257
    }
1258
}
1259

1260
impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
1261
    pub fn iter_from(&self, from: usize) -> BitFieldVecIterator<'_, W, B> {
313,208✔
1262
        BitFieldVecIterator::new(self, from)
939,624✔
1263
    }
1264

1265
    pub fn iter(&self) -> BitFieldVecIterator<'_, W, B> {
1,202✔
1266
        self.iter_from(0)
2,404✔
1267
    }
1268
}
1269
/// A tentatively thread-safe vector of bit fields of fixed width.
1270
///
1271
/// This implementation provides some concurrency guarantees, albeit not
1272
/// full-fledged thread safety: more precisely, we can guarantee thread-safety
1273
/// if the bit width is a power of two; otherwise, concurrent writes to values
1274
/// that cross word boundaries might end up in different threads succeeding in
1275
/// writing only part of a value. If the user can guarantee that no two threads
1276
/// ever write to the same boundary-crossing value, then no race condition can
1277
/// happen.
1278
///
1279
/// See the [module documentation](crate::bits) for more details.
1280
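///
/// # Examples
///
/// A minimal sketch (power-of-two bit width, hence thread-safe writes); it
/// assumes the [`AtomicBitFieldSlice`] methods are in scope:
///
/// ```ignore
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// use std::sync::atomic::Ordering;
///
/// let v = <AtomicBitFieldVec<usize>>::new(8, 100);
/// std::thread::scope(|s| {
///     for t in 0..2_usize {
///         let v = &v;
///         s.spawn(move || {
///             // Each thread writes a disjoint set of indices.
///             for i in (t..100).step_by(2) {
///                 v.set_atomic(i, t, Ordering::Relaxed);
///             }
///         });
///     }
/// });
/// assert_eq!(unsafe { v.get_atomic_unchecked(1, Ordering::Relaxed) }, 1);
/// ```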
#[derive(Debug, Clone, Hash, MemDbg, MemSize)]
1281
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
1282
pub struct AtomicBitFieldVec<W: Word + IntoAtomic = usize, B = Vec<<W as IntoAtomic>::AtomicType>> {
1283
    /// The underlying storage.
1284
    bits: B,
1285
    /// The bit width of the values stored in the vector.
1286
    bit_width: usize,
1287
    /// A mask with its lowest `bit_width` bits set to one.
1288
    mask: W,
1289
    /// The length of the vector.
1290
    len: usize,
1291
}
1292

1293
impl<W: Word + IntoAtomic, B> AtomicBitFieldVec<W, B> {
1294
    /// # Safety
1295
    /// `len` * `bit_width` must be between 0 (included) the number of
1296
    /// bits in `bits` (included).
1297
    #[inline(always)]
1298
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
184✔
1299
        Self {
1300
            bits,
1301
            bit_width,
1302
            mask: mask(bit_width),
368✔
1303
            len,
1304
        }
1305
    }
1306

1307
    #[inline(always)]
1308
    pub fn into_raw_parts(self) -> (B, usize, usize) {
184✔
1309
        (self.bits, self.bit_width, self.len)
368✔
1310
    }
1311

1312
    /// Returns the mask used to extract values from the vector.
1313
    /// This will keep the lowest `bit_width` bits.
1314
    pub fn mask(&self) -> W {
184✔
1315
        self.mask
184✔
1316
    }
1317
}
1318

1319
impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B> {
1320
    /// Returns the backend of the `AtomicBitFieldVec` as a slice of `A`, where `A` is the
1321
    /// atomic variant of `W`.
1322
    pub fn as_slice(&self) -> &[W::AtomicType] {
4✔
1323
        self.bits.as_ref()
4✔
1324
    }
1325
}
1326

1327
impl<W: Word + IntoAtomic> AtomicBitFieldVec<W>
1328
where
1329
    W::AtomicType: AtomicUnsignedInt,
1330
{
1331
    pub fn new(bit_width: usize, len: usize) -> AtomicBitFieldVec<W> {
491✔
1332
        // we need at least two words to avoid branches in the gets
1333
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
1,964✔
1334
        AtomicBitFieldVec::<W> {
1335
            bits: (0..n_of_words)
491✔
1336
                .map(|_| W::AtomicType::new(W::ZERO))
1337
                .collect(),
1338
            bit_width,
1339
            mask: mask(bit_width),
982✔
1340
            len,
1341
        }
1342
    }
1343
}
1344

1345
impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B>
1346
where
1347
    W::AtomicType: AtomicUnsignedInt + AsBytes,
1348
{
1349
    /// Writes zeros in all values.
1350
    #[deprecated(since = "0.4.4", note = "reset is deprecated in favor of reset_atomic")]
1351
    pub fn reset(&mut self, ordering: Ordering) {
×
1352
        self.reset_atomic(ordering)
×
1353
    }
1354
}
1355

1356
impl<W: Word + IntoAtomic, B> BitWidth<W::AtomicType> for AtomicBitFieldVec<W, B> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> SliceByValue for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
    Self: AtomicBitFieldSlice<W>,
{
    type Value = W;

    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> Self::Value {
        unsafe { self.get_atomic_unchecked(index, Ordering::Relaxed) }
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> AtomicBitFieldSlice<W>
    for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
{
    #[inline]
    fn len(&self) -> usize {
        self.len
    }

    #[inline]
    unsafe fn get_atomic_unchecked(&self, index: usize, order: Ordering) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                // the value lies within a single word
                (bits.get_unchecked(word_index).load(order) >> bit_index) & self.mask
            } else {
                // the value crosses a word boundary: combine the high bits
                // of the first word with the low bits of the second one
                ((bits.get_unchecked(word_index).load(order) >> bit_index)
                    | (bits.get_unchecked(word_index + 1).load(order) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }

    // We reimplement set as we have the mask in the structure.

    /// Sets the element of the slice at the specified index.
    ///
    /// May panic if the index is not in [0..[len](SliceByValue::len))
    /// or the value does not fit in [`BitWidth::bit_width`] bits.
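    ///
    /// A usage sketch (not compiled here):
    /// ```ignore
    /// use std::sync::atomic::Ordering;
    ///
    /// let v = AtomicBitFieldVec::<usize>::new(3, 100);
    /// v.set_atomic(0, 5, Ordering::Relaxed); // ok: 5 fits in 3 bits
    /// // v.set_atomic(0, 8, Ordering::Relaxed); // would panic: 8 needs 4 bits
    /// ```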
    #[inline(always)]
    fn set_atomic(&self, index: usize, value: W, order: Ordering) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_atomic_unchecked(index, value, order);
        }
    }

    #[inline]
    unsafe fn set_atomic_unchecked(&self, index: usize, value: W, order: Ordering) {
        unsafe {
            debug_assert!(self.bit_width != W::BITS);
            let pos = index * self.bit_width;
            let word_index = pos / W::BITS;
            let bit_index = pos % W::BITS;
            let bits = self.bits.as_ref();

            if bit_index + self.bit_width <= W::BITS {
                // the value fits in a single word, so one successful
                // compare-exchange updates it consistently
                let mut current = bits.get_unchecked(word_index).load(order);
                loop {
                    let mut new = current;
                    new &= !(self.mask << bit_index);
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(current, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => current = e,
                    }
                }
            } else {
                // the value crosses a word boundary: write the low part
                // of the value in the first word...
                let mut word = bits.get_unchecked(word_index).load(order);
                // try to wait for the other thread to finish
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    // keep the bits below the value, as the value extends
                    // to the end of this word
                    new &= (W::ONE << bit_index) - W::ONE;
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);

                // ensures that the compiler does not reorder the two atomic operations
                // this should increase the probability of having consistency
                // between two concurrent writes as they will both execute the set
                // of the bits in the same order, and the release / acquire fence
                // should try to synchronize the threads as much as possible
                compiler_fence(Ordering::SeqCst);

                // ...and the high part in the second word
                let mut word = bits.get_unchecked(word_index + 1).load(order);
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    new &= !(self.mask >> (W::BITS - bit_index));
                    new |= value >> (W::BITS - bit_index);

                    match bits
                        .get_unchecked(word_index + 1)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);
            }
        }
    }

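    /// Writes zeros in all values.
    ///
    /// A usage sketch (not compiled here):
    /// ```ignore
    /// use std::sync::atomic::Ordering;
    ///
    /// let mut v = AtomicBitFieldVec::<usize>::new(3, 100);
    /// v.set_atomic(10, 7, Ordering::Relaxed);
    /// v.reset_atomic(Ordering::Relaxed);
    /// assert_eq!(unsafe { v.get_atomic_unchecked(10, Ordering::Relaxed) }, 0);
    /// ```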
    fn reset_atomic(&mut self, ordering: Ordering) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_ref();
        bits[..full_words]
            .iter()
            .for_each(|x| x.store(W::ZERO, ordering));
        if residual != 0 {
            // clear the residual bits of the last word, leaving the bits
            // outside the vector untouched
            bits[full_words].fetch_and(W::MAX << residual, ordering);
        }
    }

    #[cfg(feature = "rayon")]
1509
    fn par_reset_atomic(&mut self, ordering: Ordering) {
1✔
1510
        let bit_len = self.len * self.bit_width;
2✔
1511
        let full_words = bit_len / W::BITS;
2✔
1512
        let residual = bit_len % W::BITS;
2✔
1513
        let bits = self.bits.as_ref();
3✔
1514
        bits[..full_words]
1✔
1515
            .par_iter()
1516
            .with_min_len(RAYON_MIN_LEN)
1✔
1517
            .for_each(|x| x.store(W::ZERO, ordering));
22✔
1518
        if residual != 0 {
2✔
1519
            bits[full_words].fetch_and(W::MAX << residual, ordering);
3✔
1520
        }
1521
    }
1522
}
1523

1524
// Conversions

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Vec<W::AtomicType>>>
    for BitFieldVec<W, Vec<W>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Vec<W::AtomicType>>) -> Self {
        BitFieldVec {
            bits: transmute_vec_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Box<[W::AtomicType]>>>
    for BitFieldVec<W, Box<[W]>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Box<[W::AtomicType]>>) -> Self {
        BitFieldVec {
            bits: transmute_boxed_slice_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a [W::AtomicType]>>
    for BitFieldVec<W, &'a [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W::AtomicType], &'a [W]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a mut [W::AtomicType]>>
    for BitFieldVec<W, &'a mut [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a mut [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W::AtomicType], &'a mut [W]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

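/// A round-trip sketch (not compiled here; it assumes the zero-initializing
/// `BitFieldVec::new` constructor): growable vectors convert infallibly in
/// both directions, so a vector can be filled through its atomic form and
/// then converted back.
/// ```ignore
/// use std::sync::atomic::Ordering;
///
/// let v = BitFieldVec::<usize>::new(4, 100);
/// let a: AtomicBitFieldVec<usize> = v.into();
/// a.set_atomic(0, 9, Ordering::Relaxed);
/// let v: BitFieldVec<usize> = a.into();
/// ```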
impl<W: Word + IntoAtomic> From<BitFieldVec<W, Vec<W>>>
    for AtomicBitFieldVec<W, Vec<W::AtomicType>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_vec_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<BitFieldVec<W, Box<[W]>>>
    for AtomicBitFieldVec<W, Box<[W::AtomicType]>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_boxed_slice_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a [W]>>
    for AtomicBitFieldVec<W, &'a [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W], &'a [W::AtomicType]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a mut [W]>>
    for AtomicBitFieldVec<W, &'a mut [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a mut [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W], &'a mut [W::AtomicType]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}

impl<W: Word> From<BitFieldVec<W, Vec<W>>> for BitFieldVec<W, Box<[W]>> {
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_boxed_slice(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word> From<BitFieldVec<W, Box<[W]>>> for BitFieldVec<W, Vec<W>> {
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_vec(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a> for BitFieldVec<W, B> {
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

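/// A sketch (not compiled here) of by-value iteration over a
/// [`BitFieldVec`]; it assumes the `IterateByValue` trait is in scope:
/// ```ignore
/// use value_traits::iter::IterateByValue;
///
/// let v = bit_field_vec![4 => 9; 10];
/// for x in v.iter_value() {
///     assert_eq!(x, 9);
/// }
/// ```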
impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue for BitFieldVec<W, B> {
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(0)
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVec<W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom for BitFieldVec<W, B> {
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_with_capacity() {
        let mut b = BitFieldVec::<usize, _>::with_capacity(10, 100);
        let capacity = b.bits.capacity();
        for _ in 0..100 {
            b.push(0);
        }
        assert_eq!(b.bits.capacity(), capacity);
    }

    /// Naive element-by-element copy, used as a reference implementation to
    /// check the optimized `BitFieldVec::copy`.
    fn copy<W: Word, B: AsRef<[W]>, C: AsRef<[W]> + AsMut<[W]>>(
        source: &BitFieldVec<W, B>,
        from: usize,
        dest: &mut BitFieldVec<W, C>,
        to: usize,
        len: usize,
    ) {
        let len = Ord::min(Ord::min(len, dest.len - to), source.len - from);
        for i in 0..len {
            dest.set_value(to + i, source.index_value(from + i));
        }
    }

    #[test]
    fn test_copy() {
        for src_pattern in 0..8 {
            for dst_pattern in 0..8 {
                // if from_first_word == from_last_word && to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 2, 10);
                copy(&source, 1, &mut dest_expected, 2, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if from_first_word == from_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 20, 10);
                copy(&source, 1, &mut dest_expected, 20, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(20, &mut dest_actual, 1, 10);
                copy(&source, 20, &mut dest_expected, 1, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 40);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 61);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 61);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40 + 17);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40 + 17);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 64);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 21 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 21 + 64);
                assert_eq!(dest_actual, dest_expected);
            }
        }
    }
}