
randombit / botan / build 20160266155 (push, via GitHub web-flow)

12 Dec 2025 08:01AM UTC
Coverage: 90.357% (-0.001%) from 90.358%

Merge pull request #5172 from randombit/jack/fix-clang-tidy-misc-const-correctness
Fix and enable clang-tidy warning misc-const-correctness

100951 of 111725 relevant lines covered (90.36%)
12817908.54 hits per line

Source file: /src/lib/utils/mem_pool/mem_pool.cpp (99.15% covered)
/*
* (C) 2018,2019 Jack Lloyd
*
* Botan is released under the Simplified BSD License (see license.txt)
*/

#include <botan/internal/mem_pool.h>

#include <botan/mem_ops.h>
#include <botan/internal/target_info.h>
#include <algorithm>
#include <optional>

#if defined(BOTAN_HAS_VALGRIND) || defined(BOTAN_ENABLE_DEBUG_ASSERTS)
   /**
    * @brief Prohibits access to unused memory pages in Botan's memory pool
    *
    * If BOTAN_MEM_POOL_USE_MMU_PROTECTIONS is defined, the Memory_Pool
    * class used for mlock'ed memory will use OS calls to set page
    * permissions so as to prohibit access to pages on the free list, then
    * enable read/write access when the page is set to be used. This will
    * turn (some) use after free bugs into a crash.
    *
    * The additional syscalls have a substantial performance impact, which
    * is why this option is not enabled by default. It is used when building
    * for valgrind or when debug assertions are enabled.
    */
   #define BOTAN_MEM_POOL_USE_MMU_PROTECTIONS
#endif
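// When BOTAN_MEM_POOL_USE_MMU_PROTECTIONS is in effect, pages on the free list
// are marked noaccess via OS::page_prohibit_access() in the constructor and in
// deallocate(), and made readable/writable again via OS::page_allow_access() in
// allocate() and in the destructor; see those functions below.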

#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS) && defined(BOTAN_HAS_OS_UTILS)
   #include <botan/internal/os_utils.h>
#endif

namespace Botan {

/*
* Memory pool theory of operation
*
* This allocator is not useful for general purpose allocations but works well
* within the context of allocating cryptographic keys. It makes several
* assumptions which don't work for implementing malloc but simplify and speed
* up the implementation:
*
* - There is some set of pages, which cannot be expanded later. These are pages
*   which were allocated, mlocked and passed to the Memory_Pool constructor.
*
* - The allocator is allowed to return null anytime it feels like not servicing
*   a request, in which case the request will be sent to calloc instead. In
*   particular, requests which are too small or too large are rejected.
*
* - Most allocations are powers of 2, the remainder are usually a multiple of 8.
*
* - Free requests include the size of the allocation, so there is no need to
*   track this within the pool.
*
* - Alignment is important to the caller. For this allocator, any allocation of
*   size N is aligned evenly at N bytes.
*
* Initially each page is in the free page list. Each page is used for just one
* size of allocation, with requests bucketed into a small number of common
* sizes. If the allocation would be too big or too small it is rejected by the pool.
*
* The free list is maintained by a bitmap, one per page/Bucket. Since each
* Bucket only maintains objects of a single size, each bit set or clear
* indicates the status of one object.
*
* An allocation walks the list of buckets and asks each in turn if there is
* space. If a Bucket does not have any space, it sets a boolean flag m_is_full
* so that it does not need to rescan when asked again. The flag is cleared on
* first free from that bucket. If no bucket has space, but there are some free
* pages left, a free page is claimed as a new Bucket for that size. In this case
* it is pushed to the front of the list so it is first in line to service new
* requests.
*
* A deallocation also walks the list of buckets for the size and asks each
* Bucket in turn if it recognizes the pointer. When a Bucket becomes empty as a
* result of a deallocation, it is recycled back into the free pool. When this
* happens, the Bucket's page goes to the end of the free list. All pages on the
* free list are marked in the MMU as noaccess, so anything touching them will
* immediately crash. They are only marked R/W once placed into a new bucket.
* Making the free list FIFO maximizes the time between the last free of a bucket
* and that page being writable again, maximizing chances of crashing after a
* use-after-free.
*
* Future work
* -------------
*
* The allocator is protected by a global lock. It would be good to break this
* up, since almost all of the work can actually be done in parallel especially
* when allocating objects of different sizes (which can't possibly share a
* bucket).
*
* It may be worthwhile to optimize deallocation by storing the Buckets in order
* (by pointer value) which would allow binary search to find the owning bucket.
*
* A useful addition would be to randomize the allocations. Memory_Pool would be
* changed to also receive a RandomNumberGenerator& object (presumably the system
* RNG, or maybe a ChaCha_RNG seeded with the system RNG). Then the bucket to use
* and the offset within the bucket would be chosen randomly, instead of using
* first fit.
*
* Right now we don't make any provision for threading, so if two threads both
* allocate 32 byte values one after the other, the two allocations will likely
* share a cache line. Ensuring that distinct threads will (tend to) use distinct
* buckets would reduce this.
*
* Supporting a realloc-style API may be useful.
*/
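/*
* Illustrative example (not part of the library source): assuming 4096 byte
* pages, a 20 byte request is rounded up by choose_bucket() below to the
* 24 byte bucket. If no existing 24 byte Bucket has room, a free page is
* claimed, giving a Bucket with 4096 / 24 = 170 slots tracked by a 170 bit
* BitMap. A 300 byte request exceeds MAXIMUM_ALLOCATION, so allocate()
* returns null and the caller falls back to its non-pool (calloc) path.
*/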

namespace {

size_t choose_bucket(size_t n) {
   constexpr size_t MINIMUM_ALLOCATION = 16;
   constexpr size_t MAXIMUM_ALLOCATION = 256;

   if(n < MINIMUM_ALLOCATION || n > MAXIMUM_ALLOCATION) {
      return 0;
   }

   // Need to tune these

   constexpr size_t buckets[] = {
      16,
      24,
      32,
      48,
      64,
      80,
      96,
      112,
      128,
      160,
      192,
      256,
      0,
   };

   for(size_t i = 0; buckets[i] != 0; ++i) {
      if(n <= buckets[i]) {
         return buckets[i];
      }
   }

   return 0;
}

inline bool ptr_in_pool(const void* pool_ptr, size_t poolsize, const void* buf_ptr, size_t bufsize) {
   const uintptr_t pool = reinterpret_cast<uintptr_t>(pool_ptr);
   const uintptr_t buf = reinterpret_cast<uintptr_t>(buf_ptr);
   return (buf >= pool) && (buf + bufsize <= pool + poolsize);
}

// return index of first set bit
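// (i.e. the lowest set bit), found by a non-constant-time binary search over
// halves of the word; for example find_set_bit<uint8_t>(0b10100) returns 2.
// BitMap::find_free() passes the inverted word, so this locates the first free slot.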
template <typename T>
size_t find_set_bit(T b) {
   size_t s = 8 * sizeof(T) / 2;
   size_t bit = 0;

   // In this context we don't need to be const-time
   while(s > 0) {
      const T mask = (static_cast<T>(1) << s) - 1;
      if((b & mask) == 0) {
         bit += s;
         b >>= s;
      }
      s /= 2;
   }

   return bit;
}

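/*
* Each Bucket owns one BitMap tracking its slots: a bit is set while the
* corresponding fixed-size slot is allocated and cleared when it is freed.
* m_last_mask covers only the valid bits of the final word, so unused high
* bits never appear to be free slots in find_free().
*/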
class BitMap final {
   public:
      explicit BitMap(size_t bits) : m_len(bits) {
         m_bits.resize((bits + BITMASK_BITS - 1) / BITMASK_BITS);

         if(bits % BITMASK_BITS != 0) {
            m_last_mask = (static_cast<bitmask_type>(1) << (bits % BITMASK_BITS)) - 1;
         }
      }

      std::optional<size_t> find_free();

      void free(size_t bit) {
         BOTAN_ASSERT_NOMSG(bit <= m_len);
         const size_t w = bit / BITMASK_BITS;
         BOTAN_ASSERT_NOMSG(w < m_bits.size());
         const bitmask_type mask = static_cast<bitmask_type>(1) << (bit % BITMASK_BITS);
         m_bits[w] = m_bits[w] & (~mask);
      }

      bool empty() const {
         for(auto bitset : m_bits) {
            if(bitset != 0) {
               return false;
            }
         }

         return true;
      }

   private:
#if defined(BOTAN_ENABLE_DEBUG_ASSERTS)
      using bitmask_type = uint8_t;
#else
      using bitmask_type = word;
#endif

      static const size_t BITMASK_BITS = sizeof(bitmask_type) * 8;

      size_t m_len;
      // MSVC warns if the cast isn't there, clang-tidy warns that the cast is pointless
      bitmask_type m_main_mask = static_cast<bitmask_type>(~0);  // NOLINT(bugprone-misplaced-widening-cast)
      bitmask_type m_last_mask = static_cast<bitmask_type>(~0);  // NOLINT(bugprone-misplaced-widening-cast)
      std::vector<bitmask_type> m_bits;
};

std::optional<size_t> BitMap::find_free() {
   for(size_t i = 0; i != m_bits.size(); ++i) {
      const bitmask_type mask = (i == m_bits.size() - 1) ? m_last_mask : m_main_mask;
      if((m_bits[i] & mask) != mask) {
         const size_t free_bit = find_set_bit(~m_bits[i]);
         const bitmask_type bmask = static_cast<bitmask_type>(1) << (free_bit % BITMASK_BITS);
         BOTAN_ASSERT_NOMSG((m_bits[i] & bmask) == 0);
         m_bits[i] |= bmask;
         return BITMASK_BITS * i + free_bit;
      }
   }

   return {};
}

}  // namespace

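/*
* A Bucket hands out fixed-size slots carved from a single page; the BitMap
* above records which slots are currently allocated.
*/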
class Bucket final {
   public:
      Bucket(uint8_t* mem, size_t mem_size, size_t item_size) :
            m_item_size(item_size),
            m_page_size(mem_size),
            m_range(mem),
            m_bitmap(mem_size / item_size),
            m_is_full(false) {}

      uint8_t* alloc() {
         if(m_is_full) {
            // I know I am full
            return nullptr;
         }

         auto offset = m_bitmap.find_free();
         if(!offset) {
            // I just found out I am full
            m_is_full = true;
            return nullptr;
         } else {
            BOTAN_ASSERT(*offset * m_item_size < m_page_size, "Offset is in range");
            return m_range + m_item_size * (*offset);
         }
      }

      bool free(void* p) {
         if(!in_this_bucket(p)) {
            return false;
         }

         /*
         Also zero any trailing bytes, which should not have been written to,
         but maybe the user was bad and wrote past the end.
         */
         std::memset(p, 0, m_item_size);

         const size_t offset = (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(m_range)) / m_item_size;

         m_bitmap.free(offset);
         m_is_full = false;

         return true;
      }

      bool in_this_bucket(void* p) const { return ptr_in_pool(m_range, m_page_size, p, m_item_size); }

      bool empty() const { return m_bitmap.empty(); }

      uint8_t* ptr() const { return m_range; }

   private:
      size_t m_item_size;
      size_t m_page_size;
      uint8_t* m_range;
      BitMap m_bitmap;
      bool m_is_full;
};

9,736✔
294
   for(auto* page : pages) {
1,131,944✔
295
      const uintptr_t p = reinterpret_cast<uintptr_t>(page);
1,122,208✔
296

297
      m_min_page_ptr = std::min(p, m_min_page_ptr);
1,122,208✔
298
      m_max_page_ptr = std::max(p, m_max_page_ptr);
1,122,208✔
299

300
      clear_bytes(page, m_page_size);
1,122,208✔
301
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
302
      OS::page_prohibit_access(page);
303
#endif
304
      m_free_pages.push_back(static_cast<uint8_t*>(page));
1,122,208✔
305
   }
306

307
   /*
308
   Right now this points to the start of the last page, adjust it to instead
309
   point to the first byte of the following page
310
   */
311
   m_max_page_ptr += page_size;
9,736✔
312
}
9,736✔
313

314
Memory_Pool::~Memory_Pool() noexcept  // NOLINT(*-use-equals-default)
9,736✔
315
{
316
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
317
   for(size_t i = 0; i != m_free_pages.size(); ++i) {
318
      OS::page_allow_access(m_free_pages[i]);
319
   }
320
#endif
321
}
9,736✔
322

323
void* Memory_Pool::allocate(size_t n) {
153,894,497✔
324
   if(n > m_page_size) {
153,894,497✔
325
      return nullptr;
326
   }
327

328
   const size_t n_bucket = choose_bucket(n);
153,819,152✔
329

330
   if(n_bucket > 0) {
153,819,152✔
331
      const lock_guard_type<mutex_type> lock(m_mutex);
121,640,459✔
332

333
      std::deque<Bucket>& buckets = m_buckets_for[n_bucket];
121,640,459✔
334

335
      /*
336
      It would be optimal to pick the bucket with the most usage,
337
      since a bucket with say 1 item allocated out of it has a high
338
      chance of becoming later freed and then the whole page can be
339
      recycled.
340
      */
341
      for(auto& bucket : buckets) {
646,359,573✔
342
         // NOLINTNEXTLINE(*-const-correctness) bug in clang-tidy
343
         if(uint8_t* p = bucket.alloc()) {
376,503,107✔
344
            return p;
114,143,550✔
345
         }
346

347
         // If the bucket is full, maybe move it to the end of the list?
348
         // Otoh bucket search should be very fast
349
      }
350

351
      if(!m_free_pages.empty()) {
7,496,909✔
352
         uint8_t* ptr = m_free_pages[0];
7,318,169✔
353
         m_free_pages.pop_front();
7,318,169✔
354
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
355
         OS::page_allow_access(ptr);
356
#endif
357
         buckets.push_front(Bucket(ptr, m_page_size, n_bucket));
7,318,169✔
358
         // NOLINTNEXTLINE(*-const-correctness) bug in clang-tidy
359
         void* p = buckets[0].alloc();
7,318,169✔
360
         BOTAN_ASSERT_NOMSG(p != nullptr);
7,318,169✔
361
         return p;
362
      }
363
   }
121,640,459✔
364

365
   // out of room
366
   return nullptr;
367
}
368

369
bool Memory_Pool::deallocate(void* p, size_t len) noexcept {
153,648,827✔
370
   // Do a fast range check first, before taking the lock
371
   const uintptr_t p_val = reinterpret_cast<uintptr_t>(p);
153,648,827✔
372
   if(p_val < m_min_page_ptr || p_val > m_max_page_ptr) {
153,648,827✔
373
      return false;
374
   }
375

376
   const size_t n_bucket = choose_bucket(len);
121,362,038✔
377

378
   if(n_bucket != 0) {
121,362,038✔
379
      try {
121,362,038✔
380
         const lock_guard_type<mutex_type> lock(m_mutex);
121,362,038✔
381

382
         std::deque<Bucket>& buckets = m_buckets_for[n_bucket];
121,362,038✔
383

384
         for(size_t i = 0; i != buckets.size(); ++i) {
395,621,690✔
385
            Bucket& bucket = buckets[i];
395,621,690✔
386
            if(bucket.free(p)) {
395,621,690✔
387
               if(bucket.empty()) {
242,724,076✔
388
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
389
                  OS::page_prohibit_access(bucket.ptr());
390
#endif
391
                  m_free_pages.push_back(bucket.ptr());
7,316,505✔
392

393
                  if(i != buckets.size() - 1) {
7,316,505✔
394
                     std::swap(buckets.back(), buckets[i]);
102,052✔
395
                  }
396
                  buckets.pop_back();
7,316,505✔
397
               }
398
               return true;
121,362,038✔
399
            }
400
         }
401
      } catch(...) {
121,362,038✔
402
         /*
403
         * The only exception throws that can occur in the above code are from
404
         * either the STL or BOTAN_ASSERT failures. In either case, such an
405
         * error indicates a logic error or data corruption in the memory
406
         * allocator such that it is no longer safe to continue executing.
407
         *
408
         * Since this function is noexcept, simply letting the exception escape
409
         * is sufficient for terminate to be called. However in this scenario
410
         * it is implementation defined if any stack unwinding is performed.
411
         * Since stack unwinding could cause further memory deallocations this
412
         * could result in further corruption in this allocator state. To prevent
413
         * this, call terminate directly.
414
         */
415
         std::terminate();
×
416
      }
417
   }
418

419
   return false;
420
}
421

422
}  // namespace Botan