// /src/bits/bit_field_vec.rs
/*
 * SPDX-FileCopyrightText: 2023 Inria
 * SPDX-FileCopyrightText: 2023 Sebastiano Vigna
 *
 * SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
 */

//! Vectors of bit fields of fixed width (AKA “compact arrays”, “bit arrays”,
//! etc.)
//!
//! Elements are stored contiguously, with no padding bits (in particular,
//! unless the bit width is a power of two some elements will be stored across
//! word boundaries).
//!
//! There are two flavors: [`BitFieldVec`], a mutable bit-field vector, and
//! [`AtomicBitFieldVec`], a mutable, thread-safe bit-field vector.
//!
//! These flavors depend on a backend, and presently we provide, given an
//! unsigned integer type `W` or an unsigned atomic integer type `A`:
//!
//! - `BitFieldVec<Vec<W>>`: a mutable, growable and resizable bit-field vector;
//! - `BitFieldVec<AsRef<[W]>>`: an immutable bit-field vector, useful for
//!   [ε-serde](https://crates.io/crates/epserde) support;
//! - `BitFieldVec<AsRef<[W]> + AsMut<[W]>>`: a mutable (but not resizable)
//!   bit-field vector;
//! - `AtomicBitFieldVec<AsRef<[A]>>`: a partially thread-safe, mutable (but not
//!   resizable) bit-field vector.
//!
//! More generally, the underlying type must satisfy the trait [`Word`] for
//! [`BitFieldVec`] and additionally [`IntoAtomic`] for [`AtomicBitFieldVec`].
//! A blanket implementation exposes slices of elements of type `W` as bit-field
//! vectors of width `W::BITS`, and analogously for atomic types `A`.
//!
//! The traits [`BitFieldSlice`] and [`BitFieldSliceMut`] provide a uniform
//! interface to access the content of (a reference to) the bit-field vector.
//! There is also an [`AtomicBitFieldSlice`] trait for atomic bit-field vectors.
//! Since they are also implemented for slices of words, they make it easy to
//! write generic code that works both on bit-field vectors and on slices of
//! words when you need to consider the bit width of each element.
//!
//! Note that the [`try_chunks_mut`](SliceByValueMut::try_chunks_mut) method is
//! part of the [`SliceByValueMut`] trait, and thus returns an iterator over
//! elements implementing [`SliceByValueMut`]; the elements, however, also
//! implement [`BitFieldSliceMut`], and you can use this property by adding the
//! bound `for<'a> BitFieldSliceMut<W, ChunksMut<'a>: Iterator<Item:
//! BitFieldSliceMut<W>>>`.
//!
//! Nothing is assumed about the content of the backend outside the bits of the
//! vector. Moreover, the content of the backend outside of the vector is never
//! modified by the methods of this structure.
//!
//! For high-speed unchecked scanning, we implement [`IntoUncheckedIterator`]
//! and [`IntoReverseUncheckedIterator`] on a reference to this type. They are
//! used, for example, to provide
//! [predecessor](crate::traits::indexed_dict::Pred) and
//! [successor](crate::traits::indexed_dict::Succ) primitives for
//! [`EliasFano`].
//!
//! # Low-level support
//!
//! The methods [`addr_of`](BitFieldVec::addr_of) and
//! [`get_unaligned`](BitFieldVec::get_unaligned) can be used to manually
//! prefetch parts of the data structure, or to read values using unaligned
//! reads, when the bit width makes it possible.
//!
//! # Examples
//!
//! ```
//! # use sux::prelude::*;
//! # use bit_field_slice::*;
//! # use value_traits::slices::*;
//! // Bit field vector of bit width 5 and length 10, all entries set to zero
//! let mut b = <BitFieldVec<usize>>::new(5, 10);
//! assert_eq!(b.len(), 10);
//! assert_eq!(b.bit_width(), 5);
//! b.set_value(0, 3);
//! assert_eq!(b.index_value(0), 3);
//!
//! // Empty bit field vector of bit width 20 with capacity 10
//! let mut b = <BitFieldVec<usize>>::with_capacity(20, 10);
//! assert_eq!(b.len(), 0);
//! assert_eq!(b.bit_width(), 20);
//! b.push(20);
//! assert_eq!(b.len(), 1);
//! assert_eq!(b.index_value(0), 20);
//! assert_eq!(b.pop(), Some(20));
//!
//! // Convenience macro
//! let b = bit_field_vec![10; 4, 500, 2, 0, 1];
//! assert_eq!(b.len(), 5);
//! assert_eq!(b.bit_width(), 10);
//! assert_eq!(b.index_value(0), 4);
//! assert_eq!(b.index_value(1), 500);
//! assert_eq!(b.index_value(2), 2);
//! assert_eq!(b.index_value(3), 0);
//! assert_eq!(b.index_value(4), 1);
//! ```
#[cfg(feature = "rayon")]
use crate::RAYON_MIN_LEN;
use crate::prelude::{bit_field_slice::*, *};
use crate::utils::{
    CannotCastToAtomicError, transmute_boxed_slice_from_atomic, transmute_boxed_slice_into_atomic,
    transmute_vec_from_atomic, transmute_vec_into_atomic,
};
use crate::{panic_if_out_of_bounds, panic_if_value};
use anyhow::{Result, bail};
use common_traits::{
    AsBytes, Atomic, AtomicInteger, AtomicUnsignedInt, CastableInto, IntoAtomic, invariant_eq,
};
use mem_dbg::*;
#[cfg(feature = "rayon")]
use rayon::prelude::*;
use std::iter::FusedIterator;
use std::sync::atomic::{Ordering, compiler_fence, fence};
use value_traits::slices::{SliceByValue, SliceByValueMut};

/// Convenient, [`vec!`]-like macro to initialize `usize`-based bit-field
/// vectors.
///
/// - `bit_field_vec![width]`: creates an empty bit-field vector of given bit
///   width.
///
/// - `bit_field_vec![width => value; length]`: creates a bit-field vector of
///   given bit width and length, with all entries set to `value`.
///
/// - `bit_field_vec![width; v₀, v₁, … ]`: creates a bit-field vector of
///   given bit width with entries set to `v₀`, `v₁`, ….
///
/// Note that the syntax `bit_field_vec![width; length; value]` has been
/// deprecated in favor of `bit_field_vec![width => value; length]`, so that
/// value and length are in the same order as in [`vec!`].
///
/// # Examples
///
/// ```
/// use sux::prelude::*;
/// use bit_field_slice::*;
/// use value_traits::slices::*;
/// // Empty bit field vector of bit width 5
/// let b = bit_field_vec![5];
/// assert_eq!(b.len(), 0);
/// assert_eq!(b.bit_width(), 5);
///
/// // 10 values of bit width 6, all set to 3
/// let b = bit_field_vec![6 => 3; 10];
/// assert_eq!(b.len(), 10);
/// assert_eq!(b.bit_width(), 6);
/// assert!(b.iter().all(|x| x == 3));
///
/// // List of values of bit width 10
/// let b = bit_field_vec![10; 4, 500, 2, 0, 1];
/// assert_eq!(b.len(), 5);
/// assert_eq!(b.bit_width(), 10);
/// assert_eq!(b.index_value(0), 4);
/// assert_eq!(b.index_value(1), 500);
/// assert_eq!(b.index_value(2), 2);
/// assert_eq!(b.index_value(3), 0);
/// assert_eq!(b.index_value(4), 1);
/// ```
#[macro_export]
macro_rules! bit_field_vec {
    ($w:expr) => {
        $crate::bits::BitFieldVec::<usize, _>::new($w, 0)
    };
    ($w:expr; $n:expr; $v:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr => $v:expr; $n:expr) => {
        {
            let mut bit_field_vec = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, $n);
            // Force type
            let v: usize = $v;
            bit_field_vec.resize($n, v);
            bit_field_vec
        }
    };
    ($w:expr; $($x:expr),+ $(,)?) => {
        {
            let mut b = $crate::bits::BitFieldVec::<usize, _>::with_capacity($w, [$($x),+].len());
            $(
                // Force type
                let x: usize = $x;
                b.push(x);
            )*
            b
        }
    };
}

/// A vector of bit fields of fixed width (AKA “compact array”, “bit array”,
/// etc.).
///
/// See the [module documentation](crate::bits) for more details.
#[derive(Debug, Clone, Copy, Hash, MemDbg, MemSize, value_traits::Subslices)]
#[value_traits_subslices(bound = "B: AsRef<[W]>")]
#[derive(value_traits::SubslicesMut)]
#[value_traits_subslices_mut(bound = "B: AsRef<[W]> + AsMut<[W]>")]
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct BitFieldVec<W: Word = usize, B = Vec<W>> {
    /// The underlying storage.
    bits: B,
    /// The bit width of the values stored in the vector.
    bit_width: usize,
    /// A mask with its lowest `bit_width` bits set to one.
    mask: W,
    /// The length of the vector.
    len: usize,
}

/// Robust, heavily checked mask function for constructors and similar methods.
fn mask<W: Word>(bit_width: usize) -> W {
    if bit_width == 0 {
        W::ZERO
    } else {
        W::MAX >> W::BITS.checked_sub(bit_width).expect("bit_width > W::BITS")
    }
}

impl<W: Word, B> BitFieldVec<W, B> {
    /// # Safety
    /// `len` * `bit_width` must be between 0 (included) and the number of
    /// bits in `bits` (included).
    #[inline(always)]
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
        Self {
            bits,
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    #[inline(always)]
    pub fn into_raw_parts(self) -> (B, usize, usize) {
        (self.bits, self.bit_width, self.len)
    }

    /// Modifies the bit field in place.
    ///
    /// # Safety
    /// This is unsafe because it's the caller's responsibility to ensure
    /// that the length is compatible with the modified bits.
    #[inline(always)]
    pub unsafe fn map<W2: Word, B2>(self, f: impl FnOnce(B) -> B2) -> BitFieldVec<W2, B2> {
        BitFieldVec {
            bits: f(self.bits),
            bit_width: self.bit_width,
            mask: mask(self.bit_width),
            len: self.len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
    /// Gets the address of the item storing (the first part of) the element
    /// of given index.
    ///
    /// This method is mainly useful for manually prefetching parts of the
    /// data structure.
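    ///
    /// A minimal usage sketch (the returned pointer would typically be fed to
    /// a platform prefetch intrinsic such as `_mm_prefetch` on x86, which is
    /// not shown here):
    ///
    /// ```
    /// # use sux::prelude::*;
    /// let b = <BitFieldVec<usize>>::new(5, 100);
    /// // Points at the word holding (the first part of) element 50.
    /// let p = b.addr_of(50);
    /// assert!(!p.is_null());
    /// ```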
    pub fn addr_of(&self, index: usize) -> *const W {
        let start_bit = index * self.bit_width;
        let word_index = start_bit / W::BITS;
        (&self.bits.as_ref()[word_index]) as *const _
    }

    /// Like [`SliceByValue::index_value`], but using unaligned reads.
    ///
    /// This method can be used only for bit widths smaller than or equal to
    /// `W::BITS - 8 + 2`, or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
    /// an additional padding word must be present at the end of the vector.
    ///
    /// Note that to guarantee the absence of undefined behavior this method
    /// has to perform several tests. Consider using
    /// [`get_unaligned_unchecked`](Self::get_unaligned_unchecked) if you are
    /// sure that the constraints are respected.
    ///
    /// # Panics
    ///
    /// This method will panic if the constraints above are not respected.
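    ///
    /// # Examples
    ///
    /// A minimal sketch (with a 64-bit `usize` backend any width up to 58
    /// qualifies); [`new_unaligned`](Self::new_unaligned) provides the
    /// required padding word:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let mut b = <BitFieldVec<usize>>::new_unaligned(7, 10);
    /// b.set_value(3, 101);
    /// assert_eq!(b.get_unaligned(3), 101);
    /// ```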
    pub fn get_unaligned(&self, index: usize) -> W {
        assert!(
            self.bit_width <= W::BITS - 8 + 2
                || self.bit_width == W::BITS - 8 + 4
                || self.bit_width == W::BITS
        );
        panic_if_out_of_bounds!(index, self.len);
        assert!(
            (index * self.bit_width) / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES
        );
        unsafe { self.get_unaligned_unchecked(index) }
    }

    /// Like [`SliceByValue::get_value_unchecked`], but using unaligned reads.
    ///
    /// # Safety
    ///
    /// This method can be used only for bit widths smaller than or equal to
    /// `W::BITS - 8 + 2`, or equal to `W::BITS - 8 + 4` or `W::BITS`. Moreover,
    /// an additional padding word must be present at the end of the vector.
    ///
    /// # Panics
    ///
    /// This method will panic in debug mode if the safety constraints are not
    /// respected.
    pub unsafe fn get_unaligned_unchecked(&self, index: usize) -> W {
        debug_assert!(
            self.bit_width <= W::BITS - 8 + 2
                || self.bit_width == W::BITS - 8 + 4
                || self.bit_width == W::BITS
        );
        let base_ptr = self.bits.as_ref().as_ptr() as *const u8;
        let start_bit = index * self.bit_width;
        debug_assert!(start_bit / W::BYTES + W::BYTES <= self.bits.as_ref().len() * W::BYTES);
        let ptr = unsafe { base_ptr.add(start_bit / 8) } as *const W;
        let word = unsafe { core::ptr::read_unaligned(ptr) };
        (word >> (start_bit % 8)) & self.mask
    }

    /// Returns the backend of the vector as a slice of `W`.
    pub fn as_slice(&self) -> &[W] {
        self.bits.as_ref()
    }
}

/// An iterator over non-overlapping chunks of a bit-field vector, starting at
/// the beginning of the vector.
///
/// When the vector length is not evenly divided by the chunk size, the last
/// chunk of the iteration will be shorter.
///
/// This struct is created by the
/// [`try_chunks_mut`](crate::bits::bit_field_vec::BitFieldVec#impl-BitFieldSliceMut<W>-for-BitFieldVec<W,+B>)
/// method.
pub struct ChunksMut<'a, W: Word> {
    remaining: usize,
    bit_width: usize,
    chunk_size: usize,
    iter: std::slice::ChunksMut<'a, W>,
}

impl<'a, W: Word> Iterator for ChunksMut<'a, W> {
    type Item = BitFieldVec<W, &'a mut [W]>;

    #[inline]
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|chunk| {
            let size = Ord::min(self.chunk_size, self.remaining);
            let next = unsafe { BitFieldVec::from_raw_parts(chunk, self.bit_width, size) };
            self.remaining -= size;
            next
        })
    }
}

impl<'a, W: Word> ExactSizeIterator for ChunksMut<'a, W> where
    std::slice::ChunksMut<'a, W>: ExactSizeIterator
{
}

impl<'a, W: Word> FusedIterator for ChunksMut<'a, W> where
    std::slice::ChunksMut<'a, W>: FusedIterator
{
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {}

impl<W: Word> BitFieldVec<W, Vec<W>> {
    /// Creates a new zero-initialized vector of given bit width and length.
    pub fn new(bit_width: usize, len: usize) -> Self {
        // We need at least one word to handle the case of bit width zero.
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
        Self {
            bits: vec![W::ZERO; n_of_words],
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    /// Creates a new zero-initialized vector of given bit width and length,
    /// adding padding bits to the end of the vector so that unaligned reads are
    /// possible.
    ///
    /// Note that this convenience method is a one-off: if the vector is resized
    /// or expanded, the padding will be lost.
    pub fn new_unaligned(bit_width: usize, len: usize) -> Self {
        let n_of_words = (len * bit_width).div_ceil(W::BITS);
        Self {
            // We add a word at the end
            bits: vec![W::ZERO; n_of_words + 1],
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }

    /// Creates an empty vector that doesn't need to reallocate for up to
    /// `capacity` elements.
    pub fn with_capacity(bit_width: usize, capacity: usize) -> Self {
        // We need at least one word to handle the case of bit width zero.
        let n_of_words = Ord::max(1, (capacity * bit_width).div_ceil(W::BITS));
        Self {
            bits: Vec::with_capacity(n_of_words),
            bit_width,
            mask: mask(bit_width),
            len: 0,
        }
    }

    /// Sets the length.
    ///
    /// # Safety
    ///
    /// `len * bit_width` must be at most `self.bits.len() * W::BITS`. Note that
    /// setting the length might result in reading uninitialized data.
    pub unsafe fn set_len(&mut self, len: usize) {
        debug_assert!(len * self.bit_width <= self.bits.len() * W::BITS);
        self.len = len;
    }

    /// Sets the length to zero.
    pub fn clear(&mut self) {
        self.len = 0;
    }

    /// Returns the bit width of the values inside the vector.
    pub fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }

    /// Returns the mask used to extract values from the vector.
    /// This will keep the lowest `bit_width` bits.
    pub fn mask(&self) -> W {
        self.mask
    }

    /// Creates a new vector by copying a slice; the bit width will be the
    /// minimum width sufficient to hold all values in the slice.
    ///
    /// Returns an error if the bit width of the values in `slice` is larger
    /// than `W::BITS`.
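    ///
    /// For example (a sketch; here the source is another `BitFieldVec`, and
    /// the largest value, 500, needs 9 bits):
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let c = bit_field_vec![16; 4, 500, 2];
    /// let b = <BitFieldVec<usize>>::from_slice(&c).unwrap();
    /// assert_eq!(b.bit_width(), 9);
    /// assert_eq!(b.index_value(1), 500);
    /// ```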
    pub fn from_slice<SW>(slice: &impl BitFieldSlice<SW>) -> Result<Self>
    where
        SW: Word + CastableInto<W>,
    {
        let mut max_len: usize = 0;
        for i in 0..slice.len() {
            max_len = Ord::max(max_len, unsafe {
                slice.get_value_unchecked(i).len() as usize
            });
        }

        if max_len > W::BITS {
            bail!(
                "Cannot convert a slice of bit width {} into a slice with W = {}",
                max_len,
                std::any::type_name::<W>()
            );
        }
        let mut result = Self::new(max_len, slice.len());
        for i in 0..slice.len() {
            unsafe { result.set_value_unchecked(i, slice.get_value_unchecked(i).cast()) };
        }

        Ok(result)
    }

    /// Adds a value at the end of the vector.
    pub fn push(&mut self, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if (self.len + 1) * self.bit_width > self.bits.len() * W::BITS {
            self.bits.push(W::ZERO);
        }
        unsafe {
            self.set_value_unchecked(self.len, value);
        }
        self.len += 1;
    }

    /// Truncates the vector to `new_len`, or extends it with `value` up to
    /// `new_len`.
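    ///
    /// For example:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let mut b = bit_field_vec![8; 1, 2, 3];
    /// b.resize(5, 9);
    /// assert_eq!(b.len(), 5);
    /// assert_eq!(b.index_value(4), 9);
    /// b.resize(2, 0);
    /// assert_eq!(b.len(), 2);
    /// ```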
    pub fn resize(&mut self, new_len: usize, value: W) {
        panic_if_value!(value, self.mask, self.bit_width);
        if new_len > self.len {
            if new_len * self.bit_width > self.bits.len() * W::BITS {
                self.bits
                    .resize((new_len * self.bit_width).div_ceil(W::BITS), W::ZERO);
            }
            for i in self.len..new_len {
                unsafe {
                    self.set_value_unchecked(i, value);
                }
            }
        }
        self.len = new_len;
    }

    /// Removes and returns a value from the end of the vector.
    ///
    /// Returns `None` if the [`BitFieldVec`] is empty.
    pub fn pop(&mut self) -> Option<W> {
        if self.len == 0 {
            return None;
        }
        let value = self.index_value(self.len - 1);
        self.len -= 1;
        Some(value)
    }
}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldVec<W, B> {}

impl<W: Word, T> BitWidth<W> for BitFieldVec<W, T> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word, B: AsRef<[W]>> SliceByValue for BitFieldVec<W, B> {
    type Value = W;
    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                // The value fits within a single word.
                (*bits.get_unchecked(word_index) >> bit_index) & self.mask
            } else {
                // The value crosses a word boundary: combine the high bits of
                // the first word with the low bits of the second.
                ((*bits.get_unchecked(word_index) >> bit_index)
                    | (*bits.get_unchecked(word_index + 1) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldSlice<W> for BitFieldVec<W, B> {
    fn as_slice(&self) -> &[W] {
        self.bits.as_ref()
    }
}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> BitFieldSliceMut<W> for BitFieldVec<W, B> {
    #[inline(always)]
    fn mask(&self) -> W {
        self.mask
    }
    fn reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words].iter_mut().for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    #[cfg(feature = "rayon")]
    fn par_reset(&mut self) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_mut();
        bits[..full_words]
            .par_iter_mut()
            .with_min_len(RAYON_MIN_LEN)
            .for_each(|x| *x = W::ZERO);
        if residual != 0 {
            bits[full_words] &= W::MAX << residual;
        }
    }

    fn as_mut_slice(&mut self) -> &mut [W] {
        self.bits.as_mut()
    }
}

/// Error type returned when [`try_chunks_mut`](SliceByValueMut::try_chunks_mut)
/// does not find sufficient alignment.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ChunksMutError<W: Word> {
    bit_width: usize,
    chunk_size: usize,
    _marker: core::marker::PhantomData<W>,
}

impl<W: Word> core::fmt::Display for ChunksMutError<W> {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        write!(
            f,
            "try_chunks_mut requires the bit width ({}) times the chunk size ({}) to be a multiple of W::BITS ({}) to return more than one chunk",
            self.bit_width,
            self.chunk_size,
            W::BITS
        )
    }
}

impl<W: Word> std::error::Error for ChunksMutError<W> {}

impl<W: Word, B: AsRef<[W]> + AsMut<[W]>> SliceByValueMut for BitFieldVec<W, B> {
    #[inline(always)]
    fn set_value(&mut self, index: usize, value: W) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_value_unchecked(index, value);
        }
    }

    unsafe fn set_value_unchecked(&mut self, index: usize, value: W) {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                // The value fits within a single word.
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
            } else {
                // The value crosses a word boundary: write the low part into
                // the first word and the high part into the second.
                let mut word = *bits.get_unchecked_mut(word_index);
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
            }
        }
    }

    fn replace_value(&mut self, index: usize, value: W) -> W {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe { self.replace_value_unchecked(index, value) }
    }

    unsafe fn replace_value_unchecked(&mut self, index: usize, value: W) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_mut();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                let mut word = *bits.get_unchecked_mut(word_index);
                let old_value = (word >> bit_index) & self.mask;
                word &= !(self.mask << bit_index);
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;
                old_value
            } else {
                let mut word = *bits.get_unchecked_mut(word_index);
                let mut old_value = word >> bit_index;
                word &= (W::ONE << bit_index) - W::ONE;
                word |= value << bit_index;
                *bits.get_unchecked_mut(word_index) = word;

                let mut word = *bits.get_unchecked_mut(word_index + 1);
                old_value |= word << (W::BITS - bit_index);
                word &= !(self.mask >> (W::BITS - bit_index));
                word |= value >> (W::BITS - bit_index);
                *bits.get_unchecked_mut(word_index + 1) = word;
                old_value & self.mask
            }
        }
    }

    /// This implementation performs the copy word by word, which is
    /// significantly faster than the default implementation.
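    ///
    /// A minimal usage sketch (both vectors must have the same bit width):
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let src = bit_field_vec![10; 4, 500, 2, 0, 1];
    /// let mut dst = <BitFieldVec<usize>>::new(10, 5);
    /// // Copy three elements starting at index 1 of src to index 0 of dst.
    /// src.copy(1, &mut dst, 0, 3);
    /// assert_eq!(dst.index_value(0), 500);
    /// assert_eq!(dst.index_value(1), 2);
    /// assert_eq!(dst.index_value(2), 0);
    /// ```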
    fn copy(&self, from: usize, dst: &mut Self, to: usize, len: usize) {
        assert_eq!(
            self.bit_width, dst.bit_width,
            "Bit widths must be equal (self: {}, dest: {})",
            self.bit_width, dst.bit_width
        );
        // Reduce len to the elements available in both vectors
        let len = Ord::min(Ord::min(len, dst.len - to), self.len - from);
        if len == 0 {
            return;
        }
        let bit_width = Ord::min(self.bit_width, dst.bit_width);
        let bit_len = len * bit_width;
        let src_pos = from * self.bit_width;
        let dst_pos = to * dst.bit_width;
        let src_bit = src_pos % W::BITS;
        let dst_bit = dst_pos % W::BITS;
        let src_first_word = src_pos / W::BITS;
        let dst_first_word = dst_pos / W::BITS;
        let src_last_word = (src_pos + bit_len - 1) / W::BITS;
        let dst_last_word = (dst_pos + bit_len - 1) / W::BITS;
        let source = self.bits.as_ref();
        let dest = dst.bits.as_mut();

        if src_first_word == src_last_word && dst_first_word == dst_last_word {
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = (source[src_first_word] >> src_bit) & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= word << dst_bit;
        } else if src_first_word == src_last_word {
            // dst_first_word != dst_last_word
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = (source[src_first_word] >> src_bit) & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= (word & mask) << dst_bit;
            dest[dst_last_word] &= !(mask >> (W::BITS - dst_bit));
            dest[dst_last_word] |= (word & mask) >> (W::BITS - dst_bit);
        } else if dst_first_word == dst_last_word {
            // src_first_word != src_last_word
            let mask = W::MAX >> (W::BITS - bit_len);
            let word = ((source[src_first_word] >> src_bit)
                | (source[src_last_word] << (W::BITS - src_bit)))
                & mask;
            dest[dst_first_word] &= !(mask << dst_bit);
            dest[dst_first_word] |= word << dst_bit;
        } else if src_bit == dst_bit {
            // src_first_word != src_last_word && dst_first_word != dst_last_word
            let mask = W::MAX << dst_bit;
            dest[dst_first_word] &= !mask;
            dest[dst_first_word] |= source[src_first_word] & mask;

            dest[(1 + dst_first_word)..dst_last_word]
                .copy_from_slice(&source[(1 + src_first_word)..src_last_word]);

            let residual =
                bit_len - (W::BITS - src_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else if src_bit < dst_bit {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = dst_bit - src_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) << shift;

            let mut word = source[src_first_word] >> (W::BITS - shift);
            for i in 1..dst_last_word - dst_first_word {
                dest[dst_first_word + i] = word | (source[src_first_word + i] << shift);
                word = source[src_first_word + i] >> (W::BITS - shift);
            }
            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= source[src_last_word] & mask;
        } else {
            // src_first_word != src_last_word && dst_first_word !=
            // dst_last_word && src_bit > dst_bit
            let dst_mask = W::MAX << dst_bit;
            let src_mask = W::MAX << src_bit;
            let shift = src_bit - dst_bit;
            dest[dst_first_word] &= !dst_mask;
            dest[dst_first_word] |= (source[src_first_word] & src_mask) >> shift;
            dest[dst_first_word] |= source[src_first_word + 1] << (W::BITS - shift);

            let mut word = source[src_first_word + 1] >> shift;

            for i in 1..dst_last_word - dst_first_word {
                word |= source[src_first_word + i + 1] << (W::BITS - shift);
                dest[dst_first_word + i] = word;
                word = source[src_first_word + i + 1] >> shift;
            }

            word |= source[src_last_word] << (W::BITS - shift);

            let residual =
                bit_len - (W::BITS - dst_bit) - (dst_last_word - dst_first_word - 1) * W::BITS;
            let mask = W::MAX >> (W::BITS - residual);
            dest[dst_last_word] &= !mask;
            dest[dst_last_word] |= word & mask;
        }
    }

    /// This implementation keeps a buffer of `W::BITS` bits for reading and
    /// writing, obtaining a significant speedup with respect to the default
    /// implementation.
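    ///
    /// A minimal usage sketch (safe here because the closure output always
    /// fits the bit width):
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let mut b = bit_field_vec![8; 1, 2, 3];
    /// // SAFETY: x + 1 still fits in 8 bits for these values.
    /// unsafe { b.apply_in_place_unchecked(|x| x + 1) };
    /// assert_eq!(b.index_value(0), 2);
    /// assert_eq!(b.index_value(2), 4);
    /// ```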
    #[inline]
    unsafe fn apply_in_place_unchecked<F>(&mut self, mut f: F)
    where
        F: FnMut(Self::Value) -> Self::Value,
    {
        if self.is_empty() {
            return;
        }
        let bit_width = self.bit_width();
        if bit_width == 0 {
            return;
        }
        let mask = self.mask();
        let number_of_words: usize = self.bits.as_ref().len();
        let last_word_idx = number_of_words.saturating_sub(1);

        let mut write_buffer: W = W::ZERO;
        let mut read_buffer: W = *unsafe { self.bits.as_ref().get_unchecked(0) };

        // specialized case because it's much faster
        if bit_width.is_power_of_two() {
            let mut bits_in_buffer = 0;

            // TODO!: figure out how to simplify
            let mut buffer_limit = (self.len() * bit_width) % W::BITS;
            if buffer_limit == 0 {
                buffer_limit = W::BITS;
            }

            for read_idx in 1..number_of_words {
                // pre-load the next word so it loads while we parse the buffer
                let next_word: W = *unsafe { self.bits.as_ref().get_unchecked(read_idx) };

                // parse as much as we can from the buffer
                loop {
                    let next_bits_in_buffer = bits_in_buffer + bit_width;

                    if next_bits_in_buffer > W::BITS {
                        break;
                    }

                    let value = read_buffer & mask;
                    // throw away the bits we just read
                    read_buffer >>= bit_width;
                    // apply user func
                    let new_value = f(value);
                    // put the new value in the write buffer
                    write_buffer |= new_value << bits_in_buffer;

                    bits_in_buffer = next_bits_in_buffer;
                }

                invariant_eq!(read_buffer, W::ZERO);
                *unsafe { self.bits.as_mut().get_unchecked_mut(read_idx - 1) } = write_buffer;
                read_buffer = next_word;
                write_buffer = W::ZERO;
                bits_in_buffer = 0;
            }

            // write the last word if we have some bits left
            while bits_in_buffer < buffer_limit {
                let value = read_buffer & mask;
                // throw away the bits we just read
                read_buffer >>= bit_width;
                // apply user func
                let new_value = f(value);
                // put the new value in the write buffer
                write_buffer |= new_value << bits_in_buffer;
                // advance the position in the buffer
                bits_in_buffer += bit_width;
            }

            *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
            return;
        }

        // The position inside the word. In most parametrizations of the
        // vector, since the bit_width is not necessarily an integer
        // divisor of the word size, we need to keep track of the position
        // inside the word. As we scroll through the bits, due to the bits
        // remainder, we may need to operate on two words at the same time.
        let mut global_bit_index: usize = 0;

        // The bit boundaries of the current word.
        let mut lower_word_limit = 0;
        let mut upper_word_limit = W::BITS;

        // We iterate across the words
        for word_number in 0..last_word_idx {
            // We iterate across the elements in the word.
            while global_bit_index + bit_width <= upper_word_limit {
                // We retrieve the value from the current word.
                let offset = global_bit_index - lower_word_limit;
                global_bit_index += bit_width;
                let element = self.mask() & (read_buffer >> offset);

                // We apply the function to the element.
                let new_element = f(element);

                // We set the element in the new word.
                write_buffer |= new_element << offset;
            }

            // We retrieve the next word from the bitvec.
            let next_word = *unsafe { self.bits.as_ref().get_unchecked(word_number + 1) };

            let mut new_write_buffer = W::ZERO;
            if upper_word_limit != global_bit_index {
                let remainder = upper_word_limit - global_bit_index;
                let offset = global_bit_index - lower_word_limit;
                // We compose the element from the remaining elements in the
                // current word and the elements in the next word.
                let element = ((read_buffer >> offset) | (next_word << remainder)) & self.mask();
                global_bit_index += bit_width;

                // We apply the function to the element.
                let new_element = f(element);

                write_buffer |= new_element << offset;

                new_write_buffer = new_element >> remainder;
            };

            read_buffer = next_word;

            *unsafe { self.bits.as_mut().get_unchecked_mut(word_number) } = write_buffer;

            write_buffer = new_write_buffer;
            lower_word_limit = upper_word_limit;
            upper_word_limit += W::BITS;
        }

        let mut offset = global_bit_index - lower_word_limit;

        // We iterate across the elements in the word.
        while offset < self.len() * bit_width - global_bit_index {
            // We retrieve the value from the current word.
            let element = self.mask() & (read_buffer >> offset);

            // We apply the function to the element.
            let new_element = f(element);

            // We set the element in the new word.
            write_buffer |= new_element << offset;
            offset += bit_width;
        }

        *unsafe { self.bits.as_mut().get_unchecked_mut(last_word_idx) } = write_buffer;
    }

    type ChunksMut<'a>
        = ChunksMut<'a, W>
    where
        Self: 'a;

    type ChunksMutError = ChunksMutError<W>;

    /// # Errors
    ///
    /// This method will return an error if the chunk size multiplied by the
    /// [bit width](BitWidth::bit_width) is not a multiple of `W::BITS` and
    /// more than one chunk must be returned.
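    ///
    /// For instance, with `W = usize` (assuming 64 bits) and bit width 8, a
    /// chunk size of 8 spans exactly one word per chunk, so chunking succeeds,
    /// while a chunk size of 3 does not:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let mut b = <BitFieldVec<usize>>::new(8, 32);
    /// assert!(b.try_chunks_mut(8).is_ok());
    /// assert!(b.try_chunks_mut(3).is_err());
    /// ```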
    fn try_chunks_mut(
        &mut self,
        chunk_size: usize,
    ) -> Result<Self::ChunksMut<'_>, ChunksMutError<W>> {
        let len = self.len();
        let bit_width = self.bit_width();
        if len <= chunk_size || (chunk_size * bit_width) % W::BITS == 0 {
            Ok(ChunksMut {
                remaining: len,
                bit_width: self.bit_width,
                chunk_size,
                iter: self.bits.as_mut()[..(len * bit_width).div_ceil(W::BITS)]
                    .chunks_mut((chunk_size * bit_width).div_ceil(W::BITS)),
            })
        } else {
            Err(ChunksMutError {
                bit_width,
                chunk_size,
                _marker: core::marker::PhantomData,
            })
        }
    }
}

// Support for unchecked iterators

/// An [`UncheckedIterator`] over the values of a [`BitFieldVec`].
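///
/// A usage sketch (assuming the prelude re-exports
/// [`UncheckedIterator`](crate::traits::UncheckedIterator); the caller must
/// not iterate past the end):
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// # use value_traits::slices::*;
/// let b = bit_field_vec![10; 4, 500, 2];
/// let mut it = (&b).into_unchecked_iter_from(1);
/// // SAFETY: there are two values left starting at index 1.
/// assert_eq!(unsafe { it.next_unchecked() }, 500);
/// assert_eq!(unsafe { it.next_unchecked() }, 2);
/// ```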
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorUncheckedIterator<'a, W, B>
where
    W: Word,
{
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (fill, word_index);
        let window = if index == vec.len() {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            let bit_offset = index * vec.bit_width;
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = W::BITS - bit_index;
            unsafe {
                // SAFETY: index has been checked at the start and it is within bounds
                *vec.bits.as_ref().get_unchecked(word_index) >> bit_index
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            let res = self.window & self.vec.mask;
            self.window >>= bit_width;
            return res;
        }

        let res = self.window;
        self.word_index += 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let res = (res | (self.window << self.fill)) & self.vec.mask;
        let used = bit_width - self.fill;
        self.window >>= used;
        self.fill = W::BITS - used;
        res
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoUncheckedIter = BitFieldVectorUncheckedIterator<'a, W, B>;
    fn into_unchecked_iter_from(self, from: usize) -> Self::IntoUncheckedIter {
        BitFieldVectorUncheckedIterator::new(self, from)
    }
}

/// An [`UncheckedIterator`] moving backwards over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVectorReverseUncheckedIterator<'a, W: Word, B> {
    vec: &'a BitFieldVec<W, B>,
    word_index: usize,
    window: W,
    fill: usize,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVectorReverseUncheckedIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, index: usize) -> Self {
        if index > vec.len() {
            panic!("Start index out of bounds: {} > {}", index, vec.len());
        }
        let (word_index, fill);

        let window = if index == 0 {
            word_index = 0;
            fill = 0;
            W::ZERO
        } else {
            // We have to handle the case of zero bit width
            let bit_offset = (index * vec.bit_width).saturating_sub(1);
            let bit_index = bit_offset % W::BITS;

            word_index = bit_offset / W::BITS;
            fill = bit_index + 1;
            unsafe {
                // SAFETY: index has been checked at the start and it is within bounds
                *vec.bits.as_ref().get_unchecked(word_index) << (W::BITS - fill)
            }
        };
        Self {
            vec,
            word_index,
            window,
            fill,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> crate::traits::UncheckedIterator
    for BitFieldVectorReverseUncheckedIterator<'_, W, B>
{
    type Item = W;
    unsafe fn next_unchecked(&mut self) -> W {
        let bit_width = self.vec.bit_width;

        if self.fill >= bit_width {
            self.fill -= bit_width;
            self.window = self.window.rotate_left(bit_width as u32);
            return self.window & self.vec.mask;
        }

        let mut res = self.window.rotate_left(self.fill as u32);
        self.word_index -= 1;
        self.window = *unsafe { self.vec.bits.as_ref().get_unchecked(self.word_index) };
        let used = bit_width - self.fill;
        res = ((res << used) | (self.window >> (W::BITS - used))) & self.vec.mask;
        self.window <<= used;
        self.fill = W::BITS - used;
        res
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoReverseUncheckedIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoRevUncheckedIter = BitFieldVectorReverseUncheckedIterator<'a, W, B>;

    fn into_rev_unchecked_iter(self) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, self.len())
    }

    fn into_rev_unchecked_iter_from(self, from: usize) -> Self::IntoRevUncheckedIter {
        BitFieldVectorReverseUncheckedIterator::new(self, from)
    }
}

/// An [`Iterator`] over the values of a [`BitFieldVec`].
#[derive(Debug, Clone, MemDbg, MemSize)]
pub struct BitFieldVecIterator<'a, W, B>
where
    W: Word,
{
    unchecked: BitFieldVectorUncheckedIterator<'a, W, B>,
    range: core::ops::Range<usize>,
}

impl<'a, W: Word, B: AsRef<[W]>> BitFieldVecIterator<'a, W, B> {
    fn new(vec: &'a BitFieldVec<W, B>, from: usize) -> Self {
        let len = vec.len();
        if from > len {
            panic!("Start index out of bounds: {} > {}", from, len);
        }
        Self {
            unchecked: BitFieldVectorUncheckedIterator::new(vec, from),
            range: from..len,
        }
    }
}

impl<W: Word, B: AsRef<[W]>> Iterator for BitFieldVecIterator<'_, W, B> {
    type Item = W;
    fn next(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        // SAFETY: index has just been checked.
        let res = unsafe { self.unchecked.next_unchecked() };
        self.range.start += 1;
        Some(res)
    }

    #[inline(always)]
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.range.len(), Some(self.range.len()))
    }
}

impl<W: Word, B: AsRef<[W]>> ExactSizeIterator for BitFieldVecIterator<'_, W, B> {
    #[inline(always)]
    fn len(&self) -> usize {
        self.range.len()
    }
}

impl<W: Word, B: AsRef<[W]>> FusedIterator for BitFieldVecIterator<'_, W, B> {}

/// This implements iteration from the end, but it's slower than forward
/// iteration: here we do a random access for each element, while the forward
/// iterator does sequential accesses and keeps a buffer of `W::BITS` bits to
/// speed up the iteration.
///
/// If needed we could also keep a buffer from the end, but the logic would be
/// more complex and potentially slower.
impl<W: Word, B: AsRef<[W]>> DoubleEndedIterator for BitFieldVecIterator<'_, W, B> {
    fn next_back(&mut self) -> Option<Self::Item> {
        if self.range.is_empty() {
            return None;
        }
        self.range.end -= 1;
        // SAFETY: the range is nonempty, so range.end is a valid index.
        Some(unsafe { self.unchecked.vec.get_value_unchecked(self.range.end) })
    }
}

/// Equality between bit-field vectors requires that the word type is the same,
/// the bit width is the same, and the content is the same.
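///
/// For example (note that equal content with a different bit width compares
/// unequal):
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// # use value_traits::slices::*;
/// let a = bit_field_vec![10; 1, 2, 3];
/// let b = bit_field_vec![10; 1, 2, 3];
/// let c = bit_field_vec![11; 1, 2, 3];
/// assert_eq!(a, b);
/// assert_ne!(a, c);
/// ```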
impl<W: Word, B: AsRef<[W]>, C: AsRef<[W]>> PartialEq<BitFieldVec<W, C>> for BitFieldVec<W, B> {
    fn eq(&self, other: &BitFieldVec<W, C>) -> bool {
        if self.bit_width() != other.bit_width() {
            return false;
        }
        if self.len() != other.len() {
            return false;
        }
        let bit_len = self.len() * self.bit_width();
        if self.bits.as_ref()[..bit_len / W::BITS] != other.bits.as_ref()[..bit_len / W::BITS] {
            return false;
        }

        let residual = bit_len % W::BITS;
        residual == 0
            || (self.bits.as_ref()[bit_len / W::BITS] ^ other.bits.as_ref()[bit_len / W::BITS])
                << (W::BITS - residual)
                == W::ZERO
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoIterator for &'a BitFieldVec<W, B> {
    type Item = W;
    type IntoIter = BitFieldVecIterator<'a, W, B>;

    fn into_iter(self) -> Self::IntoIter {
        BitFieldVecIterator::new(self, 0)
    }
}

impl<'a, W: Word, B: AsRef<[W]>> IntoIteratorFrom for &'a BitFieldVec<W, B> {
    type IntoIterFrom = BitFieldVecIterator<'a, W, B>;

    fn into_iter_from(self, from: usize) -> Self::IntoIter {
        BitFieldVecIterator::new(self, from)
    }
}

impl<W: Word> core::iter::Extend<W> for BitFieldVec<W, Vec<W>> {
    fn extend<T: IntoIterator<Item = W>>(&mut self, iter: T) {
        for value in iter {
            self.push(value);
        }
    }
}

impl<W: Word, B: AsRef<[W]>> BitFieldVec<W, B> {
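    /// Returns an iterator over the values of the vector starting at index
    /// `from`.
    ///
    /// For example:
    ///
    /// ```
    /// # use sux::prelude::*;
    /// # use bit_field_slice::*;
    /// # use value_traits::slices::*;
    /// let b = bit_field_vec![10; 4, 500, 2];
    /// let rest: Vec<usize> = b.iter_from(1).collect();
    /// assert_eq!(rest, vec![500, 2]);
    /// ```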
    pub fn iter_from(&self, from: usize) -> BitFieldVecIterator<'_, W, B> {
        BitFieldVecIterator::new(self, from)
    }

    /// Returns an iterator over all the values of the vector.
    pub fn iter(&self) -> BitFieldVecIterator<'_, W, B> {
        self.iter_from(0)
    }
}

/// A tentatively thread-safe vector of bit fields of fixed width (AKA “compact
/// arrays”, “bit arrays”, etc.)
///
/// This implementation provides some concurrency guarantees, albeit not
/// full-fledged thread safety: more precisely, we can guarantee thread safety
/// if the bit width is a power of two; otherwise, concurrent writes to values
/// that cross word boundaries might result in different threads each
/// succeeding in writing only part of a value. If the user can guarantee that
/// no two threads ever write to the same boundary-crossing value, then no race
/// condition can happen.
///
/// See the [module documentation](crate::bits) for more details.
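///
/// For example, with a power-of-two bit width the vector can be updated
/// through a shared reference (a minimal sketch; the unchecked read is sound
/// here because the index is in bounds):
///
/// ```
/// # use sux::prelude::*;
/// # use bit_field_slice::*;
/// # use std::sync::atomic::Ordering;
/// let v = AtomicBitFieldVec::<usize>::new(8, 10);
/// v.set_atomic(3, 200, Ordering::Relaxed);
/// assert_eq!(unsafe { v.get_atomic_unchecked(3, Ordering::Relaxed) }, 200);
/// ```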
#[derive(Debug, Clone, Hash, MemDbg, MemSize)]
1285
#[cfg_attr(feature = "epserde", derive(epserde::Epserde))]
1286
pub struct AtomicBitFieldVec<W: Word + IntoAtomic = usize, B = Vec<<W as IntoAtomic>::AtomicType>> {
1287
    /// The underlying storage.
1288
    bits: B,
1289
    /// The bit width of the values stored in the vector.
1290
    bit_width: usize,
1291
    /// A mask with its lowest `bit_width` bits set to one.
1292
    mask: W,
1293
    /// The length of the vector.
1294
    len: usize,
1295
}
1296

1297
impl<W: Word + IntoAtomic, B> AtomicBitFieldVec<W, B> {
1298
    /// # Safety
1299
    /// `len` * `bit_width` must be between 0 (included) the number of
1300
    /// bits in `bits` (included).
1301
    #[inline(always)]
1302
    pub unsafe fn from_raw_parts(bits: B, bit_width: usize, len: usize) -> Self {
184✔
1303
        Self {
1304
            bits,
1305
            bit_width,
1306
            mask: mask(bit_width),
368✔
1307
            len,
1308
        }
1309
    }
1310

1311
    #[inline(always)]
1312
    pub fn into_raw_parts(self) -> (B, usize, usize) {
184✔
1313
        (self.bits, self.bit_width, self.len)
368✔
1314
    }
1315

1316
    /// Returns the mask used to extract values from the vector.
1317
    /// This will keep the lowest `bit_width` bits.
1318
    pub fn mask(&self) -> W {
184✔
1319
        self.mask
184✔
1320
    }
1321
}
1322

1323
impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B> {
1324
    /// Returns the backend of the `AtomicBitFieldVec` as a slice of `A`, where `A` is the
1325
    /// atomic variant of `W`.
1326
    pub fn as_slice(&self) -> &[W::AtomicType] {
4✔
1327
        self.bits.as_ref()
4✔
1328
    }
1329
}

impl<W: Word + IntoAtomic> AtomicBitFieldVec<W>
where
    W::AtomicType: AtomicUnsignedInt,
{
    pub fn new(bit_width: usize, len: usize) -> AtomicBitFieldVec<W> {
        // We need at least one word to avoid branches in the gets.
        let n_of_words = Ord::max(1, (len * bit_width).div_ceil(W::BITS));
        AtomicBitFieldVec::<W> {
            bits: (0..n_of_words)
                .map(|_| W::AtomicType::new(W::ZERO))
                .collect(),
            bit_width,
            mask: mask(bit_width),
            len,
        }
    }
}

impl<W: Word + IntoAtomic, B: AsRef<[W::AtomicType]>> AtomicBitFieldVec<W, B>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
{
    /// Writes zeros in all values.
    #[deprecated(since = "0.4.4", note = "reset is deprecated in favor of reset_atomic")]
    pub fn reset(&mut self, ordering: Ordering) {
        self.reset_atomic(ordering)
    }
}

impl<W: Word + IntoAtomic, B> BitWidth<W::AtomicType> for AtomicBitFieldVec<W, B> {
    #[inline(always)]
    fn bit_width(&self) -> usize {
        debug_assert!(self.bit_width <= W::BITS);
        self.bit_width
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> SliceByValue for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
    Self: AtomicBitFieldSlice<W>,
{
    type Value = W;

    #[inline(always)]
    fn len(&self) -> usize {
        self.len
    }

    unsafe fn get_value_unchecked(&self, index: usize) -> Self::Value {
        unsafe { self.get_atomic_unchecked(index, Ordering::Relaxed) }
    }
}

impl<W: Word + IntoAtomic, T: AsRef<[W::AtomicType]>> AtomicBitFieldSlice<W>
    for AtomicBitFieldVec<W, T>
where
    W::AtomicType: AtomicUnsignedInt + AsBytes,
{
    #[inline]
    fn len(&self) -> usize {
        self.len
    }

    #[inline]
    unsafe fn get_atomic_unchecked(&self, index: usize, order: Ordering) -> W {
        let pos = index * self.bit_width;
        let word_index = pos / W::BITS;
        let bit_index = pos % W::BITS;
        let bits = self.bits.as_ref();

        unsafe {
            if bit_index + self.bit_width <= W::BITS {
                (bits.get_unchecked(word_index).load(order) >> bit_index) & self.mask
            } else {
                ((bits.get_unchecked(word_index).load(order) >> bit_index)
                    | (bits.get_unchecked(word_index + 1).load(order) << (W::BITS - bit_index)))
                    & self.mask
            }
        }
    }
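
    // Worked example of the two branches above, with `W = u64` and
    // `bit_width = 5`: the value at index 13 occupies bits [65..70), so
    // `word_index = 1` and `bit_index = 1`, and the single-word branch
    // extracts it with one shift and a mask; the value at index 12 occupies
    // bits [60..65), straddling words 0 and 1, so its low 4 bits come from
    // word 0 shifted right by 60 and its top bit from word 1 shifted left by
    // 64 - 60 = 4, before masking with `mask = 0b11111`.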

    // We reimplement `set_atomic` as we already have the mask in the structure.

    /// Sets the element of the slice at the specified index.
    ///
    /// May panic if the index is not in [0..[len](SliceByValue::len))
    /// or if the value does not fit in [`BitWidth::bit_width`] bits.
    #[inline(always)]
    fn set_atomic(&self, index: usize, value: W, order: Ordering) {
        panic_if_out_of_bounds!(index, self.len);
        panic_if_value!(value, self.mask, self.bit_width);
        unsafe {
            self.set_atomic_unchecked(index, value, order);
        }
    }

    #[inline]
    unsafe fn set_atomic_unchecked(&self, index: usize, value: W, order: Ordering) {
        unsafe {
            debug_assert!(self.bit_width != W::BITS);
            let pos = index * self.bit_width;
            let word_index = pos / W::BITS;
            let bit_index = pos % W::BITS;
            let bits = self.bits.as_ref();

            if bit_index + self.bit_width <= W::BITS {
                // This case is consistent.
                let mut current = bits.get_unchecked(word_index).load(order);
                loop {
                    let mut new = current;
                    new &= !(self.mask << bit_index);
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(current, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => current = e,
                    }
                }
            } else {
                let mut word = bits.get_unchecked(word_index).load(order);
                // Try to wait for the other thread to finish.
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    new &= (W::ONE << bit_index) - W::ONE;
                    new |= value << bit_index;

                    match bits
                        .get_unchecked(word_index)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);

                // Ensures that the compiler does not reorder the two atomic
                // operations. This should increase the probability of
                // consistency between two concurrent writes, as they will
                // both execute the sets of the bits in the same order, and
                // the release/acquire fences should synchronize the threads
                // as much as possible.
                compiler_fence(Ordering::SeqCst);

                let mut word = bits.get_unchecked(word_index + 1).load(order);
                fence(Ordering::Acquire);
                loop {
                    let mut new = word;
                    new &= !(self.mask >> (W::BITS - bit_index));
                    new |= value >> (W::BITS - bit_index);

                    match bits
                        .get_unchecked(word_index + 1)
                        .compare_exchange(word, new, order, order)
                    {
                        Ok(_) => break,
                        Err(e) => word = e,
                    }
                }
                fence(Ordering::Release);
            }
        }
    }
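
    // Illustration of the caveat documented on the struct: with `W = u64` and
    // `bit_width = 5`, the value at index 12 spans bits [60..65), so a write
    // issues two separate CAS loops, one on word 0 and one on word 1. If two
    // threads write that index concurrently, word 0 may end up holding one
    // thread's low bits and word 1 the other thread's high bit; the fences
    // above shrink this window, but cannot close it.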

    fn reset_atomic(&mut self, ordering: Ordering) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_ref();
        bits[..full_words]
            .iter()
            .for_each(|x| x.store(W::ZERO, ordering));
        if residual != 0 {
            bits[full_words].fetch_and(W::MAX << residual, ordering);
        }
    }

    #[cfg(feature = "rayon")]
    fn par_reset_atomic(&mut self, ordering: Ordering) {
        let bit_len = self.len * self.bit_width;
        let full_words = bit_len / W::BITS;
        let residual = bit_len % W::BITS;
        let bits = self.bits.as_ref();
        bits[..full_words]
            .par_iter()
            .with_min_len(RAYON_MIN_LEN)
            .for_each(|x| x.store(W::ZERO, ordering));
        if residual != 0 {
            bits[full_words].fetch_and(W::MAX << residual, ordering);
        }
    }
}

// Conversions
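//
// Conversions between the atomic and non-atomic flavors reinterpret the
// backend in place (e.g., `Vec<W>` as `Vec<W::AtomicType>` and vice versa),
// so no data is copied. An illustrative sketch:
//
//     let v = bit_field_vec![5 => 0; 100];
//     let a: AtomicBitFieldVec<usize> = v.into(); // backend transmuted in place
//     let v: BitFieldVec<usize> = a.into();       // and transmuted back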

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Vec<W::AtomicType>>>
    for BitFieldVec<W, Vec<W>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Vec<W::AtomicType>>) -> Self {
        BitFieldVec {
            bits: transmute_vec_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<AtomicBitFieldVec<W, Box<[W::AtomicType]>>>
    for BitFieldVec<W, Box<[W]>>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, Box<[W::AtomicType]>>) -> Self {
        BitFieldVec {
            bits: transmute_boxed_slice_from_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a [W::AtomicType]>>
    for BitFieldVec<W, &'a [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W::AtomicType], &'a [W]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> From<AtomicBitFieldVec<W, &'a mut [W::AtomicType]>>
    for BitFieldVec<W, &'a mut [W]>
{
    #[inline]
    fn from(value: AtomicBitFieldVec<W, &'a mut [W::AtomicType]>) -> Self {
        BitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W::AtomicType], &'a mut [W]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<BitFieldVec<W, Vec<W>>>
    for AtomicBitFieldVec<W, Vec<W::AtomicType>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_vec_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word + IntoAtomic> From<BitFieldVec<W, Box<[W]>>>
    for AtomicBitFieldVec<W, Box<[W::AtomicType]>>
{
    #[inline]
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        AtomicBitFieldVec {
            bits: transmute_boxed_slice_into_atomic(value.bits),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a [W]>>
    for AtomicBitFieldVec<W, &'a [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe { core::mem::transmute::<&'a [W], &'a [W::AtomicType]>(value.bits) },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}

impl<'a, W: Word + IntoAtomic> TryFrom<BitFieldVec<W, &'a mut [W]>>
    for AtomicBitFieldVec<W, &'a mut [W::AtomicType]>
{
    type Error = CannotCastToAtomicError<W>;

    #[inline]
    fn try_from(value: BitFieldVec<W, &'a mut [W]>) -> Result<Self, Self::Error> {
        if core::mem::align_of::<W::AtomicType>() != core::mem::align_of::<W>() {
            return Err(CannotCastToAtomicError::default());
        }
        Ok(AtomicBitFieldVec {
            bits: unsafe {
                core::mem::transmute::<&'a mut [W], &'a mut [W::AtomicType]>(value.bits)
            },
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        })
    }
}

impl<W: Word> From<BitFieldVec<W, Vec<W>>> for BitFieldVec<W, Box<[W]>> {
    fn from(value: BitFieldVec<W, Vec<W>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_boxed_slice(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<W: Word> From<BitFieldVec<W, Box<[W]>>> for BitFieldVec<W, Vec<W>> {
    fn from(value: BitFieldVec<W, Box<[W]>>) -> Self {
        BitFieldVec {
            bits: value.bits.into_vec(),
            len: value.len,
            bit_width: value.bit_width,
            mask: value.mask,
        }
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a> for BitFieldVec<W, B> {
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue for BitFieldVec<W, B> {
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(0)
    }
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVec<W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom for BitFieldVec<W, B> {
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImpl<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImpl<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type Iter = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValue
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value(&self) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(0)
    }
}

impl<'a, 'b, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFromGat<'a>
    for BitFieldVecSubsliceImplMut<'b, W, B>
{
    type Item = W;
    type IterFrom = BitFieldVecIterator<'a, W, B>;
}

impl<'a, W: Word, B: AsRef<[W]>> value_traits::iter::IterateByValueFrom
    for BitFieldVecSubsliceImplMut<'a, W, B>
{
    fn iter_value_from(
        &self,
        from: usize,
    ) -> <Self as value_traits::iter::IterateByValueGat<'_>>::Iter {
        self.slice.iter_from(from)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_with_capacity() {
        let mut b = BitFieldVec::<usize, _>::with_capacity(10, 100);
        let capacity = b.bits.capacity();
        for _ in 0..100 {
            b.push(0);
        }
        assert_eq!(b.bits.capacity(), capacity);
    }

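    // Naive element-by-element reference implementation, used below as an
    // oracle for `BitFieldVec::copy`.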
    fn copy<W: Word, B: AsRef<[W]>, C: AsRef<[W]> + AsMut<[W]>>(
        source: &BitFieldVec<W, B>,
        from: usize,
        dest: &mut BitFieldVec<W, C>,
        to: usize,
        len: usize,
    ) {
        let len = Ord::min(Ord::min(len, dest.len - to), source.len - from);
        for i in 0..len {
            dest.set_value(to + i, source.index_value(from + i));
        }
    }

    #[test]
    fn test_copy() {
        for src_pattern in 0..8 {
            for dst_pattern in 0..8 {
                // if from_first_word == from_last_word && to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 2, 10);
                copy(&source, 1, &mut dest_expected, 2, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if from_first_word == from_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(1, &mut dest_actual, 20, 10);
                copy(&source, 1, &mut dest_expected, 20, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if to_first_word == to_last_word
                let source = bit_field_vec![3 => src_pattern; 100];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 100];
                let mut dest_expected = dest_actual.clone();
                source.copy(20, &mut dest_actual, 1, 10);
                copy(&source, 20, &mut dest_expected, 1, 10);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 40);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit == dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 3 + 3 * 128, 61);
                copy(&source, 3, &mut dest_expected, 3 + 3 * 128, 61);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit < dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(3, &mut dest_actual, 7 + 64 * 3, 40 + 17);
                copy(&source, 3, &mut dest_expected, 7 + 64 * 3, 40 + 17);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 1)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 64);
                assert_eq!(dest_actual, dest_expected);
                // else if src_bit > dest_bit (residual = 64)
                let source = bit_field_vec![3 => src_pattern; 1000];
                let mut dest_actual = bit_field_vec![3 => dst_pattern; 1000];
                let mut dest_expected = dest_actual.clone();
                source.copy(7, &mut dest_actual, 3 + 64 * 3, 40 + 21 + 64);
                copy(&source, 7, &mut dest_expected, 3 + 64 * 3, 40 + 21 + 64);
                assert_eq!(dest_actual, dest_expected);
            }
        }
    }
}