randombit / botan / 13190843806

07 Feb 2025 12:50AM UTC coverage: 91.233% (+0.006%) from 91.227%
push · github · web-flow
Merge pull request #4639 from randombit/jack/cleanup-build-h
Remove most toggles from build.h

94196 of 103248 relevant lines covered (91.23%)
11436231.99 hits per line

Source File: /src/lib/utils/mem_pool/mem_pool.cpp (99.18%)
/*
* (C) 2018,2019 Jack Lloyd
*
* Botan is released under the Simplified BSD License (see license.txt)
*/

#include <botan/internal/mem_pool.h>

#include <botan/mem_ops.h>
#include <algorithm>

#if defined(BOTAN_HAS_VALGRIND) || defined(BOTAN_ENABLE_DEBUG_ASSERTS)
   /**
    * @brief Prohibits access to unused memory pages in Botan's memory pool
    *
    * If BOTAN_MEM_POOL_USE_MMU_PROTECTIONS is defined, the Memory_Pool
    * class used for mlock'ed memory will use OS calls to set page
    * permissions so as to prohibit access to pages on the free list, then
    * enable read/write access when the page is set to be used. This will
    * turn (some) use after free bugs into a crash.
    *
    * The additional syscalls have a substantial performance impact, which
    * is why this option is not enabled by default. It is used when built for
    * running in valgrind or debug assertions are enabled.
    */
   #define BOTAN_MEM_POOL_USE_MMU_PROTECTIONS
#endif

#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS) && defined(BOTAN_HAS_OS_UTILS)
   #include <botan/internal/os_utils.h>
#endif

namespace Botan {

/*
* Memory pool theory of operation
*
* This allocator is not useful for general purpose but works well within the
* context of allocating cryptographic keys. It makes several assumptions which
* don't work for implementing malloc but simplify and speed up the implementation:
*
* - There is some set of pages, which cannot be expanded later. These are pages
*   which were allocated, mlocked and passed to the Memory_Pool constructor.
*
* - The allocator is allowed to return null anytime it feels like not servicing
*   a request, in which case the request will be sent to calloc instead. In
*   particular, requests which are too small or too large are rejected.
*
* - Most allocations are powers of 2, the remainder are usually a multiple of 8
*
* - Free requests include the size of the allocation, so there is no need to
*   track this within the pool.
*
* - Alignment is important to the caller. For this allocator, any allocation of
*   size N is aligned evenly at N bytes.
*
* Initially each page is in the free page list. Each page is used for just one
* size of allocation, with requests bucketed into a small number of common
* sizes. If the allocation would be too big or too small it is rejected by the pool.
*
* The free list is maintained by a bitmap, one per page/Bucket. Since each
* Bucket only maintains objects of a single size, each bit set or clear
* indicates the status of one object.
*
* An allocation walks the list of buckets and asks each in turn if there is
* space. If a Bucket does not have any space, it sets a boolean flag m_is_full
* so that it does not need to rescan when asked again. The flag is cleared on
* first free from that bucket. If no bucket has space, but there are some free
* pages left, a free page is claimed as a new Bucket for that size. In this case
* it is pushed to the front of the list so it is first in line to service new
* requests.
*
* A deallocation also walks the list of buckets for the size and asks each
* Bucket in turn if it recognizes the pointer. When a Bucket becomes empty as a
* result of a deallocation, it is recycled back into the free pool. When this
* happens, the Buckets page goes to the end of the free list. All pages on the
* free list are marked in the MMU as noaccess, so anything touching them will
* immediately crash. They are only marked R/W once placed into a new bucket.
* Making the free list FIFO maximizes the time between the last free of a bucket
* and that page being writable again, maximizing chances of crashing after a
* use-after-free.
*
* Future work
* -------------
*
* The allocator is protected by a global lock. It would be good to break this
* up, since almost all of the work can actually be done in parallel especially
* when allocating objects of different sizes (which can't possibly share a
* bucket).
*
* It may be worthwhile to optimize deallocation by storing the Buckets in order
* (by pointer value) which would allow binary search to find the owning bucket.
*
* A useful addition would be to randomize the allocations. Memory_Pool would be
* changed to receive also a RandomNumberGenerator& object (presumably the system
* RNG, or maybe a ChaCha_RNG seeded with system RNG). Then the bucket to use and
* the offset within the bucket would be chosen randomly, instead of using first fit.
*
* Right now we don't make any provision for threading, so if two threads both
* allocate 32 byte values one after the other, the two allocations will likely
* share a cache line. Ensuring that distinct threads will (tend to) use distinct
* buckets would reduce this.
*
* Supporting a realloc-style API may be useful.
*/
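
/*
* Illustrative sketch, not part of the original file: the contract described
* above means a caller tries the pool first and falls back to the regular heap
* whenever the pool declines the request (too small, too large, or out of
* pages). A hypothetical caller might look like:
*
*    void* secure_alloc(Memory_Pool& pool, size_t n) {
*       if(void* p = pool.allocate(n)) {
*          return p;
*       }
*       return std::calloc(1, n);
*    }
*
* On release the pointer is first offered back via pool.deallocate(p, n),
* which returns false if the pool does not own it; only then is std::free used.
*/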

namespace {

size_t choose_bucket(size_t n) {
   const size_t MINIMUM_ALLOCATION = 16;
   const size_t MAXIMUM_ALLOCATION = 256;

   if(n < MINIMUM_ALLOCATION || n > MAXIMUM_ALLOCATION) {
      return 0;
   }

   // Need to tune these

   const size_t buckets[] = {
      16,
      24,
      32,
      48,
      64,
      80,
      96,
      112,
      128,
      160,
      192,
      256,
      0,
   };

   for(size_t i = 0; buckets[i]; ++i) {
      if(n <= buckets[i]) {
         return buckets[i];
      }
   }

   return 0;
}
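
/*
* Worked examples, added for illustration: choose_bucket rounds a request up
* to the smallest bucket size that fits, and returns 0 for sizes the pool
* will not serve:
*
*    choose_bucket(20);   // == 24
*    choose_bucket(64);   // == 64
*    choose_bucket(8);    // == 0 (below MINIMUM_ALLOCATION)
*    choose_bucket(300);  // == 0 (above MAXIMUM_ALLOCATION)
*/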

inline bool ptr_in_pool(const void* pool_ptr, size_t poolsize, const void* buf_ptr, size_t bufsize) {
   const uintptr_t pool = reinterpret_cast<uintptr_t>(pool_ptr);
   const uintptr_t buf = reinterpret_cast<uintptr_t>(buf_ptr);
   return (buf >= pool) && (buf + bufsize <= pool + poolsize);
}

// return index of first set bit
template <typename T>
size_t find_set_bit(T b) {
   size_t s = 8 * sizeof(T) / 2;
   size_t bit = 0;

   // In this context we don't need to be const-time
   while(s > 0) {
      const T mask = (static_cast<T>(1) << s) - 1;
      if((b & mask) == 0) {
         bit += s;
         b >>= s;
      }
      s /= 2;
   }

   return bit;
}
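
/*
* Illustrative trace with a hypothetical input: for T = uint8_t and
* b = 0b00010000 the halving search narrows in on the lowest set bit:
*
*    s = 4: low nibble is zero       -> bit = 4, b becomes 0b0001
*    s = 2: low two bits are nonzero -> no change
*    s = 1: low bit is nonzero       -> no change
*    result: 4
*/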

class BitMap final {
   public:
      explicit BitMap(size_t bits) : m_len(bits) {
         m_bits.resize((bits + BITMASK_BITS - 1) / BITMASK_BITS);
         // MSVC warns if the cast isn't there, clang-tidy warns that the cast is pointless
         m_main_mask = static_cast<bitmask_type>(~0);  // NOLINT(bugprone-misplaced-widening-cast)
         m_last_mask = m_main_mask;

         if(bits % BITMASK_BITS != 0) {
            m_last_mask = (static_cast<bitmask_type>(1) << (bits % BITMASK_BITS)) - 1;
         }
      }

      bool find_free(size_t* bit);

      void free(size_t bit) {
         BOTAN_ASSERT_NOMSG(bit <= m_len);
         const size_t w = bit / BITMASK_BITS;
         BOTAN_ASSERT_NOMSG(w < m_bits.size());
         const bitmask_type mask = static_cast<bitmask_type>(1) << (bit % BITMASK_BITS);
         m_bits[w] = m_bits[w] & (~mask);
      }

      bool empty() const {
         for(auto bitset : m_bits) {
            if(bitset != 0) {
               return false;
            }
         }

         return true;
      }

   private:
#if defined(BOTAN_ENABLE_DEBUG_ASSERTS)
      using bitmask_type = uint8_t;
#else
      using bitmask_type = word;
#endif

      static const size_t BITMASK_BITS = sizeof(bitmask_type) * 8;

      size_t m_len;
      bitmask_type m_main_mask;
      bitmask_type m_last_mask;
      std::vector<bitmask_type> m_bits;
};

bool BitMap::find_free(size_t* bit) {
   for(size_t i = 0; i != m_bits.size(); ++i) {
      const bitmask_type mask = (i == m_bits.size() - 1) ? m_last_mask : m_main_mask;
      if((m_bits[i] & mask) != mask) {
         const size_t free_bit = find_set_bit(~m_bits[i]);
         const bitmask_type bmask = static_cast<bitmask_type>(1) << (free_bit % BITMASK_BITS);
         BOTAN_ASSERT_NOMSG((m_bits[i] & bmask) == 0);
         m_bits[i] |= bmask;
         *bit = BITMASK_BITS * i + free_bit;
         return true;
      }
   }

   return false;
}
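
/*
* Usage sketch, added for illustration: find_free claims the lowest clear bit
* and reports its index, free releases it again:
*
*    BitMap bm(10);
*    size_t slot = 0;
*    bm.find_free(&slot);  // slot == 0, bit 0 now set
*    bm.find_free(&slot);  // slot == 1, bit 1 now set
*    bm.free(0);           // bit 0 cleared
*    bm.empty();           // false, bit 1 is still set
*/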

}  // namespace

class Bucket final {
   public:
      Bucket(uint8_t* mem, size_t mem_size, size_t item_size) :
            m_item_size(item_size),
            m_page_size(mem_size),
            m_range(mem),
            m_bitmap(mem_size / item_size),
            m_is_full(false) {}

      uint8_t* alloc() {
         if(m_is_full) {
            // I know I am full
            return nullptr;
         }

         size_t offset;
         if(!m_bitmap.find_free(&offset)) {
            // I just found out I am full
            m_is_full = true;
            return nullptr;
         }

         BOTAN_ASSERT(offset * m_item_size < m_page_size, "Offset is in range");
         return m_range + m_item_size * offset;
      }

      bool free(void* p) {
         if(!in_this_bucket(p)) {
            return false;
         }

         /*
         Zero also any trailing bytes, which should not have been written to,
         but maybe the user was bad and wrote past the end.
         */
         std::memset(p, 0, m_item_size);

         const size_t offset = (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(m_range)) / m_item_size;

         m_bitmap.free(offset);
         m_is_full = false;

         return true;
      }

      bool in_this_bucket(void* p) const { return ptr_in_pool(m_range, m_page_size, p, m_item_size); }

      bool empty() const { return m_bitmap.empty(); }

      uint8_t* ptr() const { return m_range; }

   private:
      size_t m_item_size;
      size_t m_page_size;
      uint8_t* m_range;
      BitMap m_bitmap;
      bool m_is_full;
};
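
/*
* Illustrative numbers: a Bucket handed a 4096 byte page with item_size 64
* manages 4096 / 64 = 64 slots. alloc() returns m_range + 64 * offset for the
* lowest clear bit in the bitmap; free() zeroes the slot, recovers the offset
* as (p - m_range) / 64, and clears that bit again.
*/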

Memory_Pool::Memory_Pool(const std::vector<void*>& pages, size_t page_size) : m_page_size(page_size) {
   m_min_page_ptr = ~static_cast<uintptr_t>(0);
   m_max_page_ptr = 0;

   for(auto page : pages) {
      const uintptr_t p = reinterpret_cast<uintptr_t>(page);

      m_min_page_ptr = std::min(p, m_min_page_ptr);
      m_max_page_ptr = std::max(p, m_max_page_ptr);

      clear_bytes(page, m_page_size);
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
      OS::page_prohibit_access(page);
#endif
      m_free_pages.push_back(static_cast<uint8_t*>(page));
   }

   /*
   Right now this points to the start of the last page, adjust it to instead
   point to the first byte of the following page
   */
   m_max_page_ptr += page_size;
}
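
/*
* Illustrative sketch of how pages might be prepared for this constructor
* (hypothetical caller using plain POSIX calls; the real caller may differ):
*
*    std::vector<void*> pages;
*    for(size_t i = 0; i != page_count; ++i) {
*       void* p = ::mmap(nullptr, page_size, PROT_READ | PROT_WRITE,
*                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
*       if(p != MAP_FAILED && ::mlock(p, page_size) == 0) {
*          pages.push_back(p);
*       }
*    }
*    Memory_Pool pool(pages, page_size);
*/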

Memory_Pool::~Memory_Pool()  // NOLINT(*-use-equals-default)
{
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
   for(size_t i = 0; i != m_free_pages.size(); ++i) {
      OS::page_allow_access(m_free_pages[i]);
   }
#endif
}

void* Memory_Pool::allocate(size_t n) {
   if(n > m_page_size) {
      return nullptr;
   }

   const size_t n_bucket = choose_bucket(n);

   if(n_bucket > 0) {
      lock_guard_type<mutex_type> lock(m_mutex);

      std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

      /*
      It would be optimal to pick the bucket with the most usage,
      since a bucket with say 1 item allocated out of it has a high
      chance of becoming later freed and then the whole page can be
      recycled.
      */
      for(auto& bucket : buckets) {
         if(uint8_t* p = bucket.alloc()) {
            return p;
         }

         // If the bucket is full, maybe move it to the end of the list?
         // Otoh bucket search should be very fast
      }

      if(!m_free_pages.empty()) {
         uint8_t* ptr = m_free_pages[0];
         m_free_pages.pop_front();
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
         OS::page_allow_access(ptr);
#endif
         buckets.push_front(Bucket(ptr, m_page_size, n_bucket));
         void* p = buckets[0].alloc();
         BOTAN_ASSERT_NOMSG(p != nullptr);
         return p;
      }
   }

   // out of room
   return nullptr;
}

bool Memory_Pool::deallocate(void* p, size_t len) noexcept {
   // Do a fast range check first, before taking the lock
   const uintptr_t p_val = reinterpret_cast<uintptr_t>(p);
   if(p_val < m_min_page_ptr || p_val > m_max_page_ptr) {
      return false;
   }

   const size_t n_bucket = choose_bucket(len);

   if(n_bucket != 0) {
      try {
         lock_guard_type<mutex_type> lock(m_mutex);

         std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

         for(size_t i = 0; i != buckets.size(); ++i) {
            Bucket& bucket = buckets[i];
            if(bucket.free(p)) {
               if(bucket.empty()) {
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
                  OS::page_prohibit_access(bucket.ptr());
#endif
                  m_free_pages.push_back(bucket.ptr());

                  if(i != buckets.size() - 1) {
                     std::swap(buckets.back(), buckets[i]);
                  }
                  buckets.pop_back();
               }
               return true;
            }
         }
      } catch(...) {
         /*
         * The only exception throws that can occur in the above code are from
         * either the STL or BOTAN_ASSERT failures. In either case, such an
         * error indicates a logic error or data corruption in the memory
         * allocator such that it is no longer safe to continue executing.
         *
         * Since this function is noexcept, simply letting the exception escape
         * is sufficient for terminate to be called. However in this scenario
         * it is implementation defined if any stack unwinding is performed.
         * Since stack unwinding could cause further memory deallocations this
         * could result in further corruption in this allocator state. To prevent
         * this, call terminate directly.
         */
         std::terminate();
      }
   }

   return false;
}

}  // namespace Botan