randombit / botan / 13210394651

08 Feb 2025 12:33AM UTC coverage: 91.656% (-0.006%) from 91.662%

push · github · web-flow
Merge pull request #4642 from randombit/jack/target-info-header

Add internal target_info.h header

94837 of 103471 relevant lines covered (91.66%)

11243945.63 hits per line

Source File (99.18% covered):
/src/lib/utils/mem_pool/mem_pool.cpp
/*
* (C) 2018,2019 Jack Lloyd
*
* Botan is released under the Simplified BSD License (see license.txt)
*/

#include <botan/internal/mem_pool.h>

#include <botan/mem_ops.h>
#include <botan/internal/target_info.h>
#include <algorithm>

#if defined(BOTAN_HAS_VALGRIND) || defined(BOTAN_ENABLE_DEBUG_ASSERTS)
   /**
    * @brief Prohibits access to unused memory pages in Botan's memory pool
    *
    * If BOTAN_MEM_POOL_USE_MMU_PROTECTIONS is defined, the Memory_Pool
    * class used for mlock'ed memory will use OS calls to set page
    * permissions so as to prohibit access to pages on the free list, then
    * enable read/write access when the page is set to be used. This will
    * turn (some) use after free bugs into a crash.
    *
    * The additional syscalls have a substantial performance impact, which
    * is why this option is not enabled by default. It is used when built for
    * running in valgrind or debug assertions are enabled.
    */
   #define BOTAN_MEM_POOL_USE_MMU_PROTECTIONS
#endif

#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS) && defined(BOTAN_HAS_OS_UTILS)
   #include <botan/internal/os_utils.h>
#endif

namespace Botan {

/*
* Memory pool theory of operation
*
* This allocator is not useful for general purpose but works well within the
* context of allocating cryptographic keys. It makes several assumptions which
* don't work for implementing malloc but simplify and speed up the implementation:
*
* - There is some set of pages, which cannot be expanded later. These are pages
*   which were allocated, mlocked and passed to the Memory_Pool constructor.
*
* - The allocator is allowed to return null anytime it feels like not servicing
*   a request, in which case the request will be sent to calloc instead. In
*   particular, requests which are too small or too large are rejected.
*
* - Most allocations are powers of 2, the remainder are usually a multiple of 8
*
* - Free requests include the size of the allocation, so there is no need to
*   track this within the pool.
*
* - Alignment is important to the caller. For this allocator, any allocation of
*   size N is aligned evenly at N bytes.
*
* Initially each page is in the free page list. Each page is used for just one
* size of allocation, with requests bucketed into a small number of common
* sizes. If the allocation would be too big or too small it is rejected by the pool.
*
* The free list is maintained by a bitmap, one per page/Bucket. Since each
* Bucket only maintains objects of a single size, each bit set or clear
* indicates the status of one object.
*
* An allocation walks the list of buckets and asks each in turn if there is
* space. If a Bucket does not have any space, it sets a boolean flag m_is_full
* so that it does not need to rescan when asked again. The flag is cleared on
* first free from that bucket. If no bucket has space, but there are some free
* pages left, a free page is claimed as a new Bucket for that size. In this case
* it is pushed to the front of the list so it is first in line to service new
* requests.
*
* A deallocation also walks the list of buckets for the size and asks each
* Bucket in turn if it recognizes the pointer. When a Bucket becomes empty as a
* result of a deallocation, it is recycled back into the free pool. When this
* happens, the Bucket's page goes to the end of the free list. All pages on the
* free list are marked in the MMU as noaccess, so anything touching them will
* immediately crash. They are only marked R/W once placed into a new bucket.
* Making the free list FIFO maximizes the time between the last free of a bucket
* and that page being writable again, maximizing chances of crashing after a
* use-after-free.
*
* Future work
* -------------
*
* The allocator is protected by a global lock. It would be good to break this
* up, since almost all of the work can actually be done in parallel especially
* when allocating objects of different sizes (which can't possibly share a
* bucket).
*
* It may be worthwhile to optimize deallocation by storing the Buckets in order
* (by pointer value) which would allow binary search to find the owning bucket.
*
* A useful addition would be to randomize the allocations. Memory_Pool would be
* changed to also receive a RandomNumberGenerator& object (presumably the system
* RNG, or maybe a ChaCha_RNG seeded with system RNG). Then the bucket to use and
* the offset within the bucket would be chosen randomly, instead of using first fit.
*
* Right now we don't make any provision for threading, so if two threads both
* allocate 32 byte values one after the other, the two allocations will likely
* share a cache line. Ensuring that distinct threads will (tend to) use distinct
* buckets would reduce this.
*
* Supporting a realloc-style API may be useful.
*/
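
/*
* Illustrative usage (a sketch for orientation, not part of the original file):
* the caller allocates and locks the backing pages itself, hands them to the
* pool once, and falls back to the ordinary heap whenever the pool declines.
*
*    std::vector<void*> pages = ...;      // already allocated and mlock'ed
*    Memory_Pool pool(pages, page_size);
*
*    void* p = pool.allocate(32);         // may return nullptr
*    if(p == nullptr) {
*       p = std::calloc(1, 32);           // caller-side fallback
*    }
*
*    // ... later: the size must accompany the free
*    if(!pool.deallocate(p, 32)) {
*       std::free(p);                     // pointer was not pool memory
*    }
*/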

namespace {

size_t choose_bucket(size_t n) {
   const size_t MINIMUM_ALLOCATION = 16;
   const size_t MAXIMUM_ALLOCATION = 256;

   if(n < MINIMUM_ALLOCATION || n > MAXIMUM_ALLOCATION) {
      return 0;
   }

   // Need to tune these

   const size_t buckets[] = {
      16,
      24,
      32,
      48,
      64,
      80,
      96,
      112,
      128,
      160,
      192,
      256,
      0,
   };

   for(size_t i = 0; buckets[i]; ++i) {
      if(n <= buckets[i]) {
         return buckets[i];
      }
   }

   return 0;
}
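
// For example, choose_bucket(20) rounds up to the 24 byte bucket and
// choose_bucket(64) maps to the 64 byte bucket, while out-of-range requests
// such as choose_bucket(8) or choose_bucket(300) return 0, meaning the pool
// will not serve them.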

inline bool ptr_in_pool(const void* pool_ptr, size_t poolsize, const void* buf_ptr, size_t bufsize) {
   const uintptr_t pool = reinterpret_cast<uintptr_t>(pool_ptr);
   const uintptr_t buf = reinterpret_cast<uintptr_t>(buf_ptr);
   return (buf >= pool) && (buf + bufsize <= pool + poolsize);
}

// return index of first set bit
template <typename T>
size_t find_set_bit(T b) {
   size_t s = 8 * sizeof(T) / 2;
   size_t bit = 0;

   // In this context we don't need to be const-time
   while(s > 0) {
      const T mask = (static_cast<T>(1) << s) - 1;
      if((b & mask) == 0) {
         bit += s;
         b >>= s;
      }
      s /= 2;
   }

   return bit;
}
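
// For example, find_set_bit<uint8_t>(0b10100) returns 2; with several bits
// set, the lowest one wins. The halving search needs log2(bit width) steps
// rather than a linear scan, and the argument must be nonzero, which the
// caller below (BitMap::find_free) guarantees.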

class BitMap final {
   public:
      explicit BitMap(size_t bits) : m_len(bits) {
         m_bits.resize((bits + BITMASK_BITS - 1) / BITMASK_BITS);
         // MSVC warns if the cast isn't there, clang-tidy warns that the cast is pointless
         m_main_mask = static_cast<bitmask_type>(~0);  // NOLINT(bugprone-misplaced-widening-cast)
         m_last_mask = m_main_mask;

         if(bits % BITMASK_BITS != 0) {
            m_last_mask = (static_cast<bitmask_type>(1) << (bits % BITMASK_BITS)) - 1;
         }
      }

      bool find_free(size_t* bit);

      void free(size_t bit) {
         BOTAN_ASSERT_NOMSG(bit <= m_len);
         const size_t w = bit / BITMASK_BITS;
         BOTAN_ASSERT_NOMSG(w < m_bits.size());
         const bitmask_type mask = static_cast<bitmask_type>(1) << (bit % BITMASK_BITS);
         m_bits[w] = m_bits[w] & (~mask);
      }

      bool empty() const {
         for(auto bitset : m_bits) {
            if(bitset != 0) {
               return false;
            }
         }

         return true;
      }

   private:
#if defined(BOTAN_ENABLE_DEBUG_ASSERTS)
      using bitmask_type = uint8_t;
#else
      using bitmask_type = word;
#endif

      static const size_t BITMASK_BITS = sizeof(bitmask_type) * 8;

      size_t m_len;
      bitmask_type m_main_mask;
      bitmask_type m_last_mask;
      std::vector<bitmask_type> m_bits;
};

bool BitMap::find_free(size_t* bit) {
   for(size_t i = 0; i != m_bits.size(); ++i) {
      const bitmask_type mask = (i == m_bits.size() - 1) ? m_last_mask : m_main_mask;
      if((m_bits[i] & mask) != mask) {
         const size_t free_bit = find_set_bit(~m_bits[i]);
         const bitmask_type bmask = static_cast<bitmask_type>(1) << (free_bit % BITMASK_BITS);
         BOTAN_ASSERT_NOMSG((m_bits[i] & bmask) == 0);
         m_bits[i] |= bmask;
         *bit = BITMASK_BITS * i + free_bit;
         return true;
      }
   }

   return false;
}
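
// A set bit marks a slot that is currently allocated. find_free() looks for a
// word with a clear bit inside the valid range (m_last_mask trims the final,
// possibly partial word), claims the lowest such bit and reports its index;
// free() clears the bit again, so empty() means every slot on the page has
// been returned.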

}  // namespace

class Bucket final {
   public:
      Bucket(uint8_t* mem, size_t mem_size, size_t item_size) :
            m_item_size(item_size),
            m_page_size(mem_size),
            m_range(mem),
            m_bitmap(mem_size / item_size),
            m_is_full(false) {}

      uint8_t* alloc() {
         if(m_is_full) {
            // I know I am full
            return nullptr;
         }

         size_t offset;
         if(!m_bitmap.find_free(&offset)) {
            // I just found out I am full
            m_is_full = true;
            return nullptr;
         }

         BOTAN_ASSERT(offset * m_item_size < m_page_size, "Offset is in range");
         return m_range + m_item_size * offset;
      }

      bool free(void* p) {
         if(!in_this_bucket(p)) {
            return false;
         }

         /*
         Zero also any trailing bytes, which should not have been written to,
         but maybe the user was bad and wrote past the end.
         */
         std::memset(p, 0, m_item_size);

         const size_t offset = (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(m_range)) / m_item_size;

         m_bitmap.free(offset);
         m_is_full = false;

         return true;
      }

      bool in_this_bucket(void* p) const { return ptr_in_pool(m_range, m_page_size, p, m_item_size); }

      bool empty() const { return m_bitmap.empty(); }

      uint8_t* ptr() const { return m_range; }

   private:
      size_t m_item_size;
      size_t m_page_size;
      uint8_t* m_range;
      BitMap m_bitmap;
      bool m_is_full;
};
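
// A Bucket hands out fixed-size slots from a single page: alloc() returns the
// next free slot or nullptr once the page is exhausted, while free() zeroes
// the whole slot (including trailing bytes past the requested size), clears
// its bitmap entry and resets m_is_full. Once empty() is true, Memory_Pool
// below recycles the page onto the free-page list.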

Memory_Pool::Memory_Pool(const std::vector<void*>& pages, size_t page_size) : m_page_size(page_size) {
   m_min_page_ptr = ~static_cast<uintptr_t>(0);
   m_max_page_ptr = 0;

   for(auto page : pages) {
      const uintptr_t p = reinterpret_cast<uintptr_t>(page);

      m_min_page_ptr = std::min(p, m_min_page_ptr);
      m_max_page_ptr = std::max(p, m_max_page_ptr);

      clear_bytes(page, m_page_size);
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
      OS::page_prohibit_access(page);
#endif
      m_free_pages.push_back(static_cast<uint8_t*>(page));
   }

   /*
   Right now this points to the start of the last page, adjust it to instead
   point to the first byte of the following page
   */
   m_max_page_ptr += page_size;
}

Memory_Pool::~Memory_Pool()  // NOLINT(*-use-equals-default)
{
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
   for(size_t i = 0; i != m_free_pages.size(); ++i) {
      OS::page_allow_access(m_free_pages[i]);
   }
#endif
}

void* Memory_Pool::allocate(size_t n) {
   if(n > m_page_size) {
      return nullptr;
   }

   const size_t n_bucket = choose_bucket(n);

   if(n_bucket > 0) {
      lock_guard_type<mutex_type> lock(m_mutex);

      std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

      /*
      It would be optimal to pick the bucket with the most usage,
      since a bucket with, say, 1 item allocated out of it has a high
      chance of later becoming free, and then the whole page can be
      recycled.
      */
      for(auto& bucket : buckets) {
         if(uint8_t* p = bucket.alloc()) {
            return p;
         }

         // If the bucket is full, maybe move it to the end of the list?
         // Otoh bucket search should be very fast
      }

      if(!m_free_pages.empty()) {
         uint8_t* ptr = m_free_pages[0];
         m_free_pages.pop_front();
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
         OS::page_allow_access(ptr);
#endif
         buckets.push_front(Bucket(ptr, m_page_size, n_bucket));
         void* p = buckets[0].alloc();
         BOTAN_ASSERT_NOMSG(p != nullptr);
         return p;
      }
   }

   // out of room
   return nullptr;
}

bool Memory_Pool::deallocate(void* p, size_t len) noexcept {
   // Do a fast range check first, before taking the lock
   const uintptr_t p_val = reinterpret_cast<uintptr_t>(p);
   if(p_val < m_min_page_ptr || p_val > m_max_page_ptr) {
      return false;
   }

   const size_t n_bucket = choose_bucket(len);

   if(n_bucket != 0) {
      try {
         lock_guard_type<mutex_type> lock(m_mutex);

         std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

         for(size_t i = 0; i != buckets.size(); ++i) {
            Bucket& bucket = buckets[i];
            if(bucket.free(p)) {
               if(bucket.empty()) {
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
                  OS::page_prohibit_access(bucket.ptr());
#endif
                  m_free_pages.push_back(bucket.ptr());

                  if(i != buckets.size() - 1) {
                     std::swap(buckets.back(), buckets[i]);
                  }
                  buckets.pop_back();
               }
               return true;
            }
         }
      } catch(...) {
         /*
         * The only exceptions that can be thrown by the above code come from
         * either the STL or BOTAN_ASSERT failures. In either case, such an
         * error indicates a logic error or data corruption in the memory
         * allocator such that it is no longer safe to continue executing.
         *
         * Since this function is noexcept, simply letting the exception escape
         * is sufficient for terminate to be called. However in this scenario
         * it is implementation defined if any stack unwinding is performed.
         * Since stack unwinding could cause further memory deallocations this
         * could result in further corruption of this allocator's state. To prevent
         * this, call terminate directly.
         */
         std::terminate();
      }
   }

   return false;
}

}  // namespace Botan