randombit / botan, build 5123321399

30 May 2023 04:06PM UTC. Coverage: 92.213% (+0.004% from 92.209%)

Pull Request #3558 (github / web-flow): Add braces around all if/else statements
Merge dd72f7389 into 057bcbc35

75602 of 81986 relevant lines covered (92.21%)
11859779.3 hits per line

Source File: /src/lib/utils/mem_pool/mem_pool.cpp (99.17% of lines covered)

/*
* (C) 2018,2019 Jack Lloyd
*
* Botan is released under the Simplified BSD License (see license.txt)
*/

#include <botan/internal/mem_pool.h>

#include <botan/mem_ops.h>
#include <algorithm>

#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
   #include <botan/internal/os_utils.h>
#endif

namespace Botan {

/*
* Memory pool theory of operation
*
* This allocator is not useful for general purpose use but works well within the
* context of allocating cryptographic keys. It makes several assumptions which
* don't work for implementing malloc but simplify and speed up the implementation:
*
* - There is some set of pages, which cannot be expanded later. These are pages
*   which were allocated, mlocked and passed to the Memory_Pool constructor.
*
* - The allocator is allowed to return null anytime it feels like not servicing
*   a request, in which case the request will be sent to calloc instead. In
*   particular, requests which are too small or too large are rejected.
*
* - Most allocations are powers of 2, the remainder are usually a multiple of 8.
*
* - Free requests include the size of the allocation, so there is no need to
*   track this within the pool.
*
* - Alignment is important to the caller. For this allocator, any allocation of
*   size N is aligned evenly at N bytes.
*
* Initially each page is in the free page list. Each page is used for just one
* size of allocation, with requests bucketed into a small number of common
* sizes. If the allocation would be too big or too small it is rejected by the pool.
*
* The free list is maintained by a bitmap, one per page/Bucket. Since each
* Bucket only maintains objects of a single size, each bit set or clear
* indicates the status of one object.
*
* An allocation walks the list of buckets and asks each in turn if there is
* space. If a Bucket does not have any space, it sets a boolean flag m_is_full
* so that it does not need to rescan when asked again. The flag is cleared on
* the first free from that bucket. If no bucket has space, but there are some free
* pages left, a free page is claimed as a new Bucket for that size. In this case
* it is pushed to the front of the list so it is first in line to service new
* requests.
*
* A deallocation also walks the list of buckets for the size and asks each
* Bucket in turn if it recognizes the pointer. When a Bucket becomes empty as a
* result of a deallocation, it is recycled back into the free pool. When this
* happens, the Bucket's page goes to the end of the free list. All pages on the
* free list are marked in the MMU as noaccess, so anything touching them will
* immediately crash. They are only marked R/W once placed into a new bucket.
* Making the free list FIFO maximizes the time between the last free of a bucket
* and that page becoming writable again, maximizing the chance of crashing on a
* use-after-free.
*
* Future work
* -------------
*
* The allocator is protected by a global lock. It would be good to break this
* up, since almost all of the work can actually be done in parallel, especially
* when allocating objects of different sizes (which can't possibly share a
* bucket).
*
* It may be worthwhile to optimize deallocation by storing the Buckets in order
* (by pointer value), which would allow binary search to find the owning bucket.
*
* A useful addition would be to randomize the allocations. Memory_Pool would be
* changed to also receive a RandomNumberGenerator& object (presumably the system
* RNG, or maybe a ChaCha_RNG seeded with the system RNG). Then the bucket to use and
* the offset within the bucket would be chosen randomly, instead of using first fit.
*
* Right now we don't make any provision for threading, so if two threads both
* allocate 32 byte values one after the other, the two allocations will likely
* share a cache line. Ensuring that distinct threads will (tend to) use distinct
* buckets would reduce this.
*
* Supporting a realloc-style API may be useful.
*/

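/*
* Example usage (illustrative sketch only): roughly how a caller might drive
* this pool, falling back to calloc/free whenever the pool declines a request,
* as described above. The wrapper names secure_alloc and secure_free are
* hypothetical; the real call sites are in Botan's locking allocator.
*
*    void* secure_alloc(Memory_Pool& pool, size_t n) {
*       if(void* p = pool.allocate(n)) {
*          return p;  // serviced from one of the pool's mlocked pages
*       }
*       return std::calloc(1, n);  // size rejected or pool out of room
*    }
*
*    void secure_free(Memory_Pool& pool, void* p, size_t n) {
*       if(!pool.deallocate(p, n)) {
*          std::free(p);  // pointer was never inside the pool's pages
*       }
*    }
*/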

namespace {

size_t choose_bucket(size_t n) {
   const size_t MINIMUM_ALLOCATION = 16;
   const size_t MAXIMUM_ALLOCATION = 256;

   if(n < MINIMUM_ALLOCATION || n > MAXIMUM_ALLOCATION) {
      return 0;
   }

   // Need to tune these

   const size_t buckets[] = {
      16,
      24,
      32,
      48,
      64,
      80,
      96,
      112,
      128,
      160,
      192,
      256,
      0,
   };

   for(size_t i = 0; buckets[i]; ++i) {
      if(n <= buckets[i]) {
         return buckets[i];
      }
   }

   return 0;
}

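/*
* Worked examples (illustrative, assuming the bucket sizes above): the
* function rounds a request up to the next bucket size and returns 0 for
* anything outside [MINIMUM_ALLOCATION, MAXIMUM_ALLOCATION], signalling that
* the caller should fall back to the general allocator.
*
*    choose_bucket(8)   == 0     // too small, rejected
*    choose_bucket(16)  == 16    // exact fit
*    choose_bucket(33)  == 48    // rounded up to the next bucket size
*    choose_bucket(200) == 256   // rounded up to the largest bucket
*    choose_bucket(300) == 0     // too large, rejected
*/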

inline bool ptr_in_pool(const void* pool_ptr, size_t poolsize, const void* buf_ptr, size_t bufsize) {
   const uintptr_t pool = reinterpret_cast<uintptr_t>(pool_ptr);
   const uintptr_t buf = reinterpret_cast<uintptr_t>(buf_ptr);
   return (buf >= pool) && (buf + bufsize <= pool + poolsize);
}

// return index of first set bit
template <typename T>
size_t find_set_bit(T b) {
   size_t s = 8 * sizeof(T) / 2;
   size_t bit = 0;

   // In this context we don't need to be const-time
   while(s > 0) {
      const T mask = (static_cast<T>(1) << s) - 1;
      if((b & mask) == 0) {
         bit += s;
         b >>= s;
      }
      s /= 2;
   }

   return bit;
}

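/*
* Worked example (illustrative): find_set_bit halves the search range each
* iteration rather than scanning bit by bit. With T = uint8_t and
* b = 0b0001'0000, the low four bits are zero, so bit += 4 and b >>= 4; the
* remaining halves are non-empty, so the result is 4, the index of the lowest
* set bit. The function assumes b is non-zero; BitMap::find_free below only
* calls it after confirming that at least one bit in the word is clear.
*/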

class BitMap final {
   public:
      explicit BitMap(size_t bits) : m_len(bits) {
         m_bits.resize((bits + BITMASK_BITS - 1) / BITMASK_BITS);
         // MSVC warns if the cast isn't there, clang-tidy warns that the cast is pointless
         m_main_mask = static_cast<bitmask_type>(~0);  // NOLINT(bugprone-misplaced-widening-cast)
         m_last_mask = m_main_mask;

         if(bits % BITMASK_BITS != 0) {
            m_last_mask = (static_cast<bitmask_type>(1) << (bits % BITMASK_BITS)) - 1;
         }
      }

      bool find_free(size_t* bit);

      void free(size_t bit) {
         BOTAN_ASSERT_NOMSG(bit <= m_len);
         const size_t w = bit / BITMASK_BITS;
         BOTAN_ASSERT_NOMSG(w < m_bits.size());
         const bitmask_type mask = static_cast<bitmask_type>(1) << (bit % BITMASK_BITS);
         m_bits[w] = m_bits[w] & (~mask);
      }

      bool empty() const {
         for(auto bitset : m_bits) {
            if(bitset != 0) {
               return false;
            }
         }

         return true;
      }

   private:
#if defined(BOTAN_ENABLE_DEBUG_ASSERTS)
      using bitmask_type = uint8_t;

      enum { BITMASK_BITS = 8 };
#else
      using bitmask_type = word;

      enum { BITMASK_BITS = BOTAN_MP_WORD_BITS };
#endif

      size_t m_len;
      bitmask_type m_main_mask;
      bitmask_type m_last_mask;
      std::vector<bitmask_type> m_bits;
};

bool BitMap::find_free(size_t* bit) {
   for(size_t i = 0; i != m_bits.size(); ++i) {
      const bitmask_type mask = (i == m_bits.size() - 1) ? m_last_mask : m_main_mask;
      if((m_bits[i] & mask) != mask) {
         const size_t free_bit = find_set_bit(~m_bits[i]);
         const bitmask_type bmask = static_cast<bitmask_type>(1) << (free_bit % BITMASK_BITS);
         BOTAN_ASSERT_NOMSG((m_bits[i] & bmask) == 0);
         m_bits[i] |= bmask;
         *bit = BITMASK_BITS * i + free_bit;
         return true;
      }
   }

   return false;
}

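/*
* Worked example (illustrative): a BitMap tracks the slots of one page, with a
* set bit meaning "slot in use". For BitMap bm(4): four calls to bm.find_free()
* hand out bits 0, 1, 2 and 3, and a fifth call returns false. After bm.free(1),
* the next bm.find_free() returns bit 1 again, since find_set_bit picks the
* lowest clear position. bm.empty() is true only once every bit has been cleared.
*/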

}  // namespace

class Bucket final {
   public:
      Bucket(uint8_t* mem, size_t mem_size, size_t item_size) :
            m_item_size(item_size),
            m_page_size(mem_size),
            m_range(mem),
            m_bitmap(mem_size / item_size),
            m_is_full(false) {}

      uint8_t* alloc() {
         if(m_is_full) {
            // I know I am full
            return nullptr;
         }

         size_t offset;
         if(!m_bitmap.find_free(&offset)) {
            // I just found out I am full
            m_is_full = true;
            return nullptr;
         }

         BOTAN_ASSERT(offset * m_item_size < m_page_size, "Offset is in range");
         return m_range + m_item_size * offset;
      }

      bool free(void* p) {
         if(!in_this_bucket(p)) {
            return false;
         }

         /*
         Also zero any trailing bytes, which should not have been written to,
         but maybe the user was bad and wrote past the end.
         */
         std::memset(p, 0, m_item_size);

         const size_t offset = (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(m_range)) / m_item_size;

         m_bitmap.free(offset);
         m_is_full = false;

         return true;
      }

      bool in_this_bucket(void* p) const { return ptr_in_pool(m_range, m_page_size, p, m_item_size); }

      bool empty() const { return m_bitmap.empty(); }

      uint8_t* ptr() const { return m_range; }

   private:
      size_t m_item_size;
      size_t m_page_size;
      uint8_t* m_range;
      BitMap m_bitmap;
      bool m_is_full;
};

Memory_Pool::Memory_Pool(const std::vector<void*>& pages, size_t page_size) : m_page_size(page_size) {
   m_min_page_ptr = ~static_cast<uintptr_t>(0);
   m_max_page_ptr = 0;

   for(auto page : pages) {
      const uintptr_t p = reinterpret_cast<uintptr_t>(page);

      m_min_page_ptr = std::min(p, m_min_page_ptr);
      m_max_page_ptr = std::max(p, m_max_page_ptr);

      clear_bytes(page, m_page_size);
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
      OS::page_prohibit_access(page);
#endif
      m_free_pages.push_back(static_cast<uint8_t*>(page));
   }

   /*
   Right now this points to the start of the last page, adjust it to instead
   point to the first byte of the following page
   */
   m_max_page_ptr += page_size;
}

Memory_Pool::~Memory_Pool()  // NOLINT(*-use-equals-default)
{
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
   for(size_t i = 0; i != m_free_pages.size(); ++i) {
      OS::page_allow_access(m_free_pages[i]);
   }
#endif
}

void* Memory_Pool::allocate(size_t n) {
   if(n > m_page_size) {
      return nullptr;
   }

   const size_t n_bucket = choose_bucket(n);

   if(n_bucket > 0) {
      lock_guard_type<mutex_type> lock(m_mutex);

      std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

      /*
      It would be optimal to pick the bucket with the most usage,
      since a bucket with say 1 item allocated out of it has a high
      chance of becoming later freed and then the whole page can be
      recycled.
      */
      for(auto& bucket : buckets) {
         if(uint8_t* p = bucket.alloc()) {
            return p;
         }

         // If the bucket is full, maybe move it to the end of the list?
         // Otoh bucket search should be very fast
      }

      if(!m_free_pages.empty()) {
         uint8_t* ptr = m_free_pages[0];
         m_free_pages.pop_front();
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
         OS::page_allow_access(ptr);
#endif
         buckets.push_front(Bucket(ptr, m_page_size, n_bucket));
         void* p = buckets[0].alloc();
         BOTAN_ASSERT_NOMSG(p != nullptr);
         return p;
      }
   }

   // out of room
   return nullptr;
}

bool Memory_Pool::deallocate(void* p, size_t len) noexcept {
   // Do a fast range check first, before taking the lock
   const uintptr_t p_val = reinterpret_cast<uintptr_t>(p);
   if(p_val < m_min_page_ptr || p_val > m_max_page_ptr) {
      return false;
   }

   const size_t n_bucket = choose_bucket(len);

   if(n_bucket != 0) {
      try {
         lock_guard_type<mutex_type> lock(m_mutex);

         std::deque<Bucket>& buckets = m_buckets_for[n_bucket];

         for(size_t i = 0; i != buckets.size(); ++i) {
            Bucket& bucket = buckets[i];
            if(bucket.free(p)) {
               if(bucket.empty()) {
#if defined(BOTAN_MEM_POOL_USE_MMU_PROTECTIONS)
                  OS::page_prohibit_access(bucket.ptr());
#endif
                  m_free_pages.push_back(bucket.ptr());

                  if(i != buckets.size() - 1) {
                     std::swap(buckets.back(), buckets[i]);
                  }
                  buckets.pop_back();
               }
               return true;
            }
         }
      } catch(...) {
         /*
         * The only exceptions that can be thrown by the above code come from
         * either the STL or BOTAN_ASSERT failures. In either case, such an
         * error indicates a logic error or data corruption in the memory
         * allocator such that it is no longer safe to continue executing.
         *
         * Since this function is noexcept, simply letting the exception escape
         * is sufficient for terminate to be called. However in this scenario
         * it is implementation defined if any stack unwinding is performed.
         * Since stack unwinding could cause further memory deallocations this
         * could result in further corruption of this allocator's state. To prevent
         * this, call terminate directly.
         */
         std::terminate();
      }
   }

   return false;
}

}  // namespace Botan