• Home
  • Features
  • Pricing
  • Docs
  • Announcements
  • Sign In

systemd / systemd / 23825567702

31 Mar 2026 12:42PM UTC coverage: 72.404% (+0.006%) from 72.398%
23825567702

push

github

daandemeyer
terminal-util: fix boot hang from ANSI terminal size queries

Since v257, terminal_fix_size() is called during early boot via
console_setup() → reset_dev_console_fd() to query terminal dimensions
via ANSI escape sequences. This has caused intermittent boot hangs
where the system gets stuck with a blinking cursor and requires a
keypress to continue (see systemd/systemd#35499).

The function tries CSI 18 first, then falls back to DSR if that fails.
Previously, each method independently opened a non-blocking fd, disabled
echo/icanon, ran its query, restored termios, and closed its fd. This
created two problems:

1. Echo window between CSI 18 and DSR fallback: After CSI 18 times out
   and restores termios (re-enabling ECHO and ICANON), there is a brief
   window before DSR disables them again. If the terminal's CSI 18
   response arrives during this window, it is echoed back to the
   terminal — where the terminal interprets \e[8;rows;cols t as a
   "resize text area" command — and the response bytes land in the
   canonical line buffer as stale input that can confuse the DSR
   response parser.

2. Cursor left at bottom-right on DSR timeout: The DSR method worked by
   sending two DSR queries — one to save the cursor position, then
   moving the cursor to (32766,32766) and sending another to read the
   clamped position. If neither response was received (timeout), the
   cursor restore was skipped (conditional on saved_row > 0), leaving
   the cursor at the bottom-right corner of the terminal. The
   subsequent terminal_reset_ansi_seq() then moved it to the beginning
   of the last line via \e[1G, making boot output appear at the bottom
   of the screen — giving the appearance of a hang even when the system
   was still booting.

This commit fixes both issues:

- terminal_fix_size() now opens the non-blocking fd and configures
  termios once for both query methods, so echo stays disabled for the
  entire CSI 18 → DSR fallback sequence with no gap. tcflu... (continued)

22 of 57 new or added lines in 3 files covered. (38.6%)

834 existing lines in 52 files now uncovered.

318485 of 439872 relevant lines covered (72.4%)

1162379.76 hits per line

Source File
Press 'n' to go to next uncovered line, 'b' for previous

97.64
/src/basic/hashmap.c
1
/* SPDX-License-Identifier: LGPL-2.1-or-later */
2

3
#include <fnmatch.h>
4
#include <pthread.h>
5
#include <unistd.h>
6
#if HAVE_VALGRIND_VALGRIND_H
7
#  include <valgrind/valgrind.h>
8
#endif
9

10
#include "alloc-util.h"
11
#include "extract-word.h"
12
#include "hashmap.h"
13
#include "log.h"
14
#include "logarithm.h"
15
#include "memory-util.h"
16
#include "mempool.h"
17
#include "process-util.h"
18
#include "random-util.h"
19
#include "set.h"
20
#include "siphash24.h"
21
#include "sort-util.h"
22
#include "string-util.h"
23
#include "strv.h"
24

25
#if ENABLE_DEBUG_HASHMAP
26
#include "list.h"
27
#endif
28

29
/*
30
 * Implementation of hashmaps.
31
 * Addressing: open
32
 *   - uses less RAM compared to closed addressing (chaining), because
33
 *     our entries are small (especially in Sets, which tend to contain
34
 *     the majority of entries in systemd).
35
 * Collision resolution: Robin Hood
36
 *   - tends to equalize displacement of entries from their optimal buckets.
37
 * Probe sequence: linear
38
 *   - though theoretically worse than random probing/uniform hashing/double
39
 *     hashing, it is good for cache locality.
40
 *
41
 * References:
42
 * Celis, P. 1986. Robin Hood Hashing.
43
 * Ph.D. Dissertation. University of Waterloo, Waterloo, Ont., Canada, Canada.
44
 * https://cs.uwaterloo.ca/research/tr/1986/CS-86-14.pdf
45
 * - The results are derived for random probing. Suggests deletion with
46
 *   tombstones and two mean-centered search methods. None of that works
47
 *   well for linear probing.
48
 *
49
 * Janson, S. 2005. Individual displacements for linear probing hashing with different insertion policies.
50
 * ACM Trans. Algorithms 1, 2 (October 2005), 177-213.
51
 * DOI=10.1145/1103963.1103964 http://doi.acm.org/10.1145/1103963.1103964
52
 * http://www.math.uu.se/~svante/papers/sj157.pdf
53
 * - Applies to Robin Hood with linear probing. Contains remarks on
54
 *   the unsuitability of mean-centered search with linear probing.
55
 *
56
 * Viola, A. 2005. Exact distribution of individual displacements in linear probing hashing.
57
 * ACM Trans. Algorithms 1, 2 (October 2005), 214-242.
58
 * DOI=10.1145/1103963.1103965 http://doi.acm.org/10.1145/1103963.1103965
59
 * - Similar to Janson. Note that Viola writes about C_{m,n} (number of probes
60
 *   in a successful search), and Janson writes about displacement. C = d + 1.
61
 *
62
 * Goossaert, E. 2013. Robin Hood hashing: backward shift deletion.
63
 * http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/
64
 * - Explanation of backward shift deletion with pictures.
65
 *
66
 * Khuong, P. 2013. The Other Robin Hood Hashing.
67
 * http://www.pvk.ca/Blog/2013/11/26/the-other-robin-hood-hashing/
68
 * - Short summary of random vs. linear probing, and tombstones vs. backward shift.
69
 */
70

71
/*
72
 * XXX Ideas for improvement:
73
 * For unordered hashmaps, randomize iteration order, similarly to Perl:
74
 * http://blog.booking.com/hardening-perls-hash-function.html
75
 */
76

77
/* INV_KEEP_FREE = 1 / (1 - max_load_factor)
78
 * e.g. 1 / (1 - 0.8) = 5 ... keep one fifth of the buckets free. */
79
#define INV_KEEP_FREE            5U
80

81
/* Fields common to entries of all hashmap/set types */
82
struct hashmap_base_entry {
83
        const void *key;
84
};
85

86
/* Entry types for specific hashmap/set types
87
 * hashmap_base_entry must be at the beginning of each entry struct. */
88

89
struct plain_hashmap_entry {
90
        struct hashmap_base_entry b;
91
        void *value;
92
};
93

94
struct ordered_hashmap_entry {
95
        struct plain_hashmap_entry p;
96
        unsigned iterate_next, iterate_previous;
97
};
98

99
struct set_entry {
100
        struct hashmap_base_entry b;
101
};
102

103
/* In several functions it is advantageous to have the hash table extended
104
 * virtually by a couple of additional buckets. We reserve special index values
105
 * for these "swap" buckets. */
106
#define _IDX_SWAP_BEGIN     (UINT_MAX - 3)
107
#define IDX_PUT             (_IDX_SWAP_BEGIN + 0)
108
#define IDX_TMP             (_IDX_SWAP_BEGIN + 1)
109
#define _IDX_SWAP_END       (_IDX_SWAP_BEGIN + 2)
110

111
#define IDX_FIRST           (UINT_MAX - 1) /* special index for freshly initialized iterators */
112
#define IDX_NIL             UINT_MAX       /* special index value meaning "none" or "end" */
113

114
assert_cc(IDX_FIRST == _IDX_SWAP_END);
115
assert_cc(IDX_FIRST == _IDX_ITERATOR_FIRST);
116

117
/* Storage space for the "swap" buckets.
118
 * All entry types can fit into an ordered_hashmap_entry. */
119
struct swap_entries {
120
        struct ordered_hashmap_entry e[_IDX_SWAP_END - _IDX_SWAP_BEGIN];
121
};
122

123
/* Distance from Initial Bucket */
124
typedef uint8_t dib_raw_t;
125
#define DIB_RAW_OVERFLOW ((dib_raw_t)0xfdU)   /* indicates DIB value is greater than representable */
126
#define DIB_RAW_REHASH   ((dib_raw_t)0xfeU)   /* entry yet to be rehashed during in-place resize */
127
#define DIB_RAW_FREE     ((dib_raw_t)0xffU)   /* a free bucket */
128
#define DIB_RAW_INIT     ((char)DIB_RAW_FREE) /* a byte to memset a DIB store with when initializing */
129

130
#define DIB_FREE UINT_MAX
131

132
#if ENABLE_DEBUG_HASHMAP
133
struct hashmap_debug_info {
134
        LIST_FIELDS(struct hashmap_debug_info, debug_list);
135
        unsigned max_entries;  /* high watermark of n_entries */
136

137
        /* fields to detect modification while iterating */
138
        unsigned put_count;    /* counts puts into the hashmap */
139
        unsigned rem_count;    /* counts removals from hashmap */
140
        unsigned last_rem_idx; /* remembers last removal index */
141
};
142

143
/* Tracks all existing hashmaps. Get at it from gdb. See sd_dump_hashmaps.py */
144
static LIST_HEAD(struct hashmap_debug_info, hashmap_debug_list);
145
static pthread_mutex_t hashmap_debug_list_mutex = PTHREAD_MUTEX_INITIALIZER;
146
#endif
147

148
enum HashmapType {
149
        HASHMAP_TYPE_PLAIN,
150
        HASHMAP_TYPE_ORDERED,
151
        HASHMAP_TYPE_SET,
152
        _HASHMAP_TYPE_MAX
153
};
154

155
struct _packed_ indirect_storage {
156
        void *storage;                     /* where buckets and DIBs are stored */
157
        uint8_t  hash_key[HASH_KEY_SIZE];  /* hash key; changes during resize */
158

159
        unsigned n_entries;                /* number of stored entries */
160
        unsigned n_buckets;                /* number of buckets */
161

162
        unsigned idx_lowest_entry;         /* Index below which all buckets are free.
163
                                              Makes "while (hashmap_steal_first())" loops
164
                                              O(n) instead of O(n^2) for unordered hashmaps. */
165
        uint8_t  _pad[3];                  /* padding for the whole HashmapBase */
166
        /* The bitfields in HashmapBase complete the alignment of the whole thing. */
167
};
168

169
struct direct_storage {
170
        /* This gives us 39 bytes on 64-bit, or 35 bytes on 32-bit.
171
         * That's room for 4 set_entries + 4 DIB bytes + 3 unused bytes on 64-bit,
172
         *              or 7 set_entries + 7 DIB bytes + 0 unused bytes on 32-bit. */
173
        uint8_t storage[sizeof(struct indirect_storage)];
174
};
175

176
#define DIRECT_BUCKETS(entry_t) \
177
        (sizeof(struct direct_storage) / (sizeof(entry_t) + sizeof(dib_raw_t)))
178

179
/* We should be able to store at least one entry directly. */
180
assert_cc(DIRECT_BUCKETS(struct ordered_hashmap_entry) >= 1);
181

182
/* We have 3 bits for n_direct_entries. */
183
assert_cc(DIRECT_BUCKETS(struct set_entry) < (1 << 3));
184

185
/* Hashmaps with directly stored entries all use this shared hash key.
186
 * It's no big deal if the key is guessed, because there can be only
187
 * a handful of directly stored entries in a hashmap. When a hashmap
188
 * outgrows direct storage, it gets its own key for indirect storage. */
189
static uint8_t shared_hash_key[HASH_KEY_SIZE];
190

191
/* Fields that all hashmap/set types must have */
192
struct HashmapBase {
193
        const struct hash_ops *hash_ops;  /* hash and compare ops to use */
194

195
        union _packed_ {
196
                struct indirect_storage indirect; /* if  has_indirect */
197
                struct direct_storage direct;     /* if !has_indirect */
198
        };
199

200
        enum HashmapType type:2;     /* HASHMAP_TYPE_* */
201
        bool has_indirect:1;         /* whether indirect storage is used */
202
        unsigned n_direct_entries:3; /* Number of entries in direct storage.
203
                                      * Only valid if !has_indirect. */
204
        bool from_pool:1;            /* whether was allocated from mempool */
205
        bool dirty:1;                /* whether dirtied since last iterated_cache_get() */
206
        bool cached:1;               /* whether this hashmap is being cached */
207

208
#if ENABLE_DEBUG_HASHMAP
209
        struct hashmap_debug_info debug;
210
#endif
211
};
212

213
/* Specific hash types
214
 * HashmapBase must be at the beginning of each hashmap struct. */
215

216
struct Hashmap {
217
        struct HashmapBase b;
218
};
219

220
struct OrderedHashmap {
221
        struct HashmapBase b;
222
        unsigned iterate_list_head, iterate_list_tail;
223
};
224

225
struct Set {
226
        struct HashmapBase b;
227
};
228

229
typedef struct CacheMem {
230
        const void **ptr;
231
        size_t n_populated;
232
        bool active:1;
233
} CacheMem;
234

235
struct IteratedCache {
236
        HashmapBase *hashmap;
237
        CacheMem keys, values;
238
};
239

240
DEFINE_MEMPOOL(hashmap_pool,         Hashmap,        8);
241
DEFINE_MEMPOOL(ordered_hashmap_pool, OrderedHashmap, 8);
242
/* No need for a separate Set pool */
243
assert_cc(sizeof(Hashmap) == sizeof(Set));
244

245
struct hashmap_type_info {
246
        size_t head_size;
247
        size_t entry_size;
248
        struct mempool *mempool;
249
        unsigned n_direct_buckets;
250
};
251

252
static _used_ const struct hashmap_type_info hashmap_type_info[_HASHMAP_TYPE_MAX] = {
253
        [HASHMAP_TYPE_PLAIN] = {
254
                .head_size        = sizeof(Hashmap),
255
                .entry_size       = sizeof(struct plain_hashmap_entry),
256
                .mempool          = &hashmap_pool,
257
                .n_direct_buckets = DIRECT_BUCKETS(struct plain_hashmap_entry),
258
        },
259
        [HASHMAP_TYPE_ORDERED] = {
260
                .head_size        = sizeof(OrderedHashmap),
261
                .entry_size       = sizeof(struct ordered_hashmap_entry),
262
                .mempool          = &ordered_hashmap_pool,
263
                .n_direct_buckets = DIRECT_BUCKETS(struct ordered_hashmap_entry),
264
        },
265
        [HASHMAP_TYPE_SET] = {
266
                .head_size        = sizeof(Set),
267
                .entry_size       = sizeof(struct set_entry),
268
                .mempool          = &hashmap_pool,
269
                .n_direct_buckets = DIRECT_BUCKETS(struct set_entry),
270
        },
271
};
272

273
void hashmap_trim_pools(void) {
2✔
274
        int r;
2✔
275

276
        /* The pool is only allocated by the main thread, but the memory can be passed to other
277
         * threads. Let's clean up if we are the main thread and no other threads are live. */
278

279
        /* We build our own is_main_thread() here, which doesn't use C11 TLS based caching of the
280
         * result. That's because valgrind apparently doesn't like TLS to be used from a GCC destructor. */
281
        if (getpid() != gettid())
2✔
282
                return (void) log_debug("Not cleaning up memory pools, not in main thread.");
×
283

284
        r = get_process_threads(0);
2✔
285
        if (r < 0)
2✔
286
                return (void) log_debug_errno(r, "Failed to determine number of threads, not cleaning up memory pools: %m");
×
287
        if (r != 1)
2✔
UNCOV
288
                return (void) log_debug("Not cleaning up memory pools, running in multi-threaded process.");
×
289

290
        mempool_trim(&hashmap_pool);
2✔
291
        mempool_trim(&ordered_hashmap_pool);
2✔
292
}
293

294
#if HAVE_VALGRIND_VALGRIND_H
295
_destructor_ static void cleanup_pools(void) {
296
        /* Be nice to valgrind */
297
        if (RUNNING_ON_VALGRIND)
298
                hashmap_trim_pools();
299
}
300
#endif
301

302
static unsigned n_buckets(HashmapBase *h) {
1,338,755,020✔
303
        return h->has_indirect ? h->indirect.n_buckets
1,338,755,020✔
304
                               : hashmap_type_info[h->type].n_direct_buckets;
1,338,755,020✔
305
}
306

307
static unsigned n_entries(HashmapBase *h) {
158,470,433✔
308
        return h->has_indirect ? h->indirect.n_entries
158,470,433✔
309
                               : h->n_direct_entries;
158,470,433✔
310
}
311

312
static void n_entries_inc(HashmapBase *h) {
34,187,417✔
313
        if (h->has_indirect)
34,187,417✔
314
                h->indirect.n_entries++;
26,473,959✔
315
        else
316
                h->n_direct_entries++;
7,713,458✔
317
}
34,187,417✔
318

319
static void n_entries_dec(HashmapBase *h) {
31,653,452✔
320
        if (h->has_indirect)
31,653,452✔
321
                h->indirect.n_entries--;
26,143,923✔
322
        else
323
                h->n_direct_entries--;
5,509,529✔
324
}
31,653,452✔
325

326
static void* storage_ptr(HashmapBase *h) {
1,297,124,780✔
327
        return h->has_indirect ? h->indirect.storage
1,297,124,780✔
328
                               : h->direct.storage;
1,297,124,780✔
329
}
330

331
static uint8_t* hash_key(HashmapBase *h) {
219,402,357✔
332
        return h->has_indirect ? h->indirect.hash_key
219,402,357✔
333
                               : shared_hash_key;
219,402,357✔
334
}
335

336
static unsigned base_bucket_hash(HashmapBase *h, const void *p) {
219,402,357✔
337
        struct siphash state;
219,402,357✔
338
        uint64_t hash;
219,402,357✔
339

340
        siphash24_init(&state, hash_key(h));
219,402,357✔
341

342
        h->hash_ops->hash(p, &state);
219,402,357✔
343

344
        hash = siphash24_finalize(&state);
219,402,357✔
345

346
        return (unsigned) (hash % n_buckets(h));
219,402,357✔
347
}
348
#define bucket_hash(h, p) base_bucket_hash(HASHMAP_BASE(h), p)
349

350
static void base_set_dirty(HashmapBase *h) {
70,917,054✔
351
        h->dirty = true;
70,917,054✔
352
}
70,917,054✔
353
#define hashmap_set_dirty(h) base_set_dirty(HASHMAP_BASE(h))
354

355
static void get_hash_key(uint8_t hash_key[HASH_KEY_SIZE], bool reuse_is_ok) {
3,564,095✔
356
        static uint8_t current[HASH_KEY_SIZE];
3,564,095✔
357
        static bool current_initialized = false;
3,564,095✔
358

359
        /* Returns a hash function key to use. In order to keep things
360
         * fast we will not generate a new key each time we allocate a
361
         * new hash table. Instead, we'll just reuse the most recently
362
         * generated one, except if we never generated one or when we
363
         * are rehashing an entire hash table because we reached a
364
         * fill level */
365

366
        if (!current_initialized || !reuse_is_ok) {
3,564,095✔
367
                random_bytes(current, sizeof(current));
2,186,219✔
368
                current_initialized = true;
2,186,219✔
369
        }
370

371
        memcpy(hash_key, current, sizeof(current));
3,564,095✔
372
}
3,564,095✔
373

374
static struct hashmap_base_entry* bucket_at(HashmapBase *h, unsigned idx) {
827,220,626✔
375
        return CAST_ALIGN_PTR(
827,220,626✔
376
                        struct hashmap_base_entry,
377
                        (uint8_t *) storage_ptr(h) + idx * hashmap_type_info[h->type].entry_size);
378
}
379

380
static struct plain_hashmap_entry* plain_bucket_at(Hashmap *h, unsigned idx) {
2,477,794✔
381
        return (struct plain_hashmap_entry*) bucket_at(HASHMAP_BASE(h), idx);
2,477,794✔
382
}
383

384
static struct ordered_hashmap_entry* ordered_bucket_at(OrderedHashmap *h, unsigned idx) {
101,308,963✔
385
        return (struct ordered_hashmap_entry*) bucket_at(HASHMAP_BASE(h), idx);
101,308,963✔
386
}
387

388
static struct set_entry *set_bucket_at(Set *h, unsigned idx) {
4✔
389
        return (struct set_entry*) bucket_at(HASHMAP_BASE(h), idx);
4✔
390
}
391

392
static struct ordered_hashmap_entry* bucket_at_swap(struct swap_entries *swap, unsigned idx) {
259,814,140✔
393
        return &swap->e[idx - _IDX_SWAP_BEGIN];
259,814,140✔
394
}
395

396
/* Returns a pointer to the bucket at index idx.
397
 * Understands real indexes and swap indexes, hence "_virtual". */
398
static struct hashmap_base_entry* bucket_at_virtual(HashmapBase *h, struct swap_entries *swap,
481,040,265✔
399
                                                    unsigned idx) {
400
        if (idx < _IDX_SWAP_BEGIN)
481,040,265✔
401
                return bucket_at(h, idx);
295,630,766✔
402

403
        if (idx < _IDX_SWAP_END)
185,409,499✔
404
                return &bucket_at_swap(swap, idx)->p.b;
185,409,499✔
405

406
        assert_not_reached();
×
407
}
408

409
static dib_raw_t* dib_raw_ptr(HashmapBase *h) {
469,904,154✔
410
        return (dib_raw_t*)
939,808,308✔
411
                ((uint8_t*) storage_ptr(h) + hashmap_type_info[h->type].entry_size * n_buckets(h));
469,904,154✔
412
}
413

414
static unsigned bucket_distance(HashmapBase *h, unsigned idx, unsigned from) {
×
415
        return idx >= from ? idx - from
×
416
                           : n_buckets(h) + idx - from;
×
417
}
418

419
static unsigned bucket_calculate_dib(HashmapBase *h, unsigned idx, dib_raw_t raw_dib) {
284,765,561✔
420
        unsigned initial_bucket;
284,765,561✔
421

422
        if (raw_dib == DIB_RAW_FREE)
284,765,561✔
423
                return DIB_FREE;
424

425
        if (_likely_(raw_dib < DIB_RAW_OVERFLOW))
284,765,561✔
426
                return raw_dib;
284,765,561✔
427

428
        /*
429
         * Having an overflow DIB value is very unlikely. The hash function
430
         * would have to be bad. For example, in a table of size 2^24 filled
431
         * to load factor 0.9 the maximum observed DIB is only about 60.
432
         * In theory (assuming I used Maxima correctly), for an infinite size
433
         * hash table with load factor 0.8 the probability of a given entry
434
         * having DIB > 40 is 1.9e-8.
435
         * This returns the correct DIB value by recomputing the hash value in
436
         * the unlikely case. XXX Hitting this case could be a hint to rehash.
437
         */
438
        initial_bucket = bucket_hash(h, bucket_at(h, idx)->key);
×
439
        return bucket_distance(h, idx, initial_bucket);
×
440
}
441

442
static void bucket_set_dib(HashmapBase *h, unsigned idx, unsigned dib) {
128,585,582✔
443
        dib_raw_ptr(h)[idx] = dib != DIB_FREE ? MIN(dib, DIB_RAW_OVERFLOW) : DIB_RAW_FREE;
128,585,582✔
444
}
128,585,582✔
445

446
static unsigned skip_free_buckets(HashmapBase *h, unsigned idx) {
57,078,627✔
447
        dib_raw_t *dibs;
57,078,627✔
448

449
        dibs = dib_raw_ptr(h);
57,078,627✔
450

451
        for ( ; idx < n_buckets(h); idx++)
172,800,507✔
452
                if (dibs[idx] != DIB_RAW_FREE)
105,263,312✔
453
                        return idx;
454

455
        return IDX_NIL;
456
}
457

458
static void bucket_mark_free(HashmapBase *h, unsigned idx) {
31,653,452✔
459
        memzero(bucket_at(h, idx), hashmap_type_info[h->type].entry_size);
31,653,452✔
460
        bucket_set_dib(h, idx, DIB_FREE);
31,653,452✔
461
}
31,653,452✔
462

463
static void bucket_move_entry(HashmapBase *h, struct swap_entries *swap,
170,909,294✔
464
                              unsigned from, unsigned to) {
465
        struct hashmap_base_entry *e_from, *e_to;
170,909,294✔
466

467
        assert(from != to);
170,909,294✔
468

469
        e_from = bucket_at_virtual(h, swap, from);
170,909,294✔
470
        e_to   = bucket_at_virtual(h, swap, to);
170,909,294✔
471

472
        memcpy(e_to, e_from, hashmap_type_info[h->type].entry_size);
170,909,294✔
473

474
        if (h->type == HASHMAP_TYPE_ORDERED) {
170,909,294✔
475
                OrderedHashmap *lh = (OrderedHashmap*) h;
86,506,211✔
476
                struct ordered_hashmap_entry *le, *le_to;
86,506,211✔
477

478
                le_to = (struct ordered_hashmap_entry*) e_to;
86,506,211✔
479

480
                if (le_to->iterate_next != IDX_NIL) {
86,506,211✔
481
                        le = (struct ordered_hashmap_entry*)
62,375,662✔
482
                             bucket_at_virtual(h, swap, le_to->iterate_next);
62,375,662✔
483
                        le->iterate_previous = to;
62,375,662✔
484
                }
485

486
                if (le_to->iterate_previous != IDX_NIL) {
86,506,211✔
487
                        le = (struct ordered_hashmap_entry*)
76,846,015✔
488
                             bucket_at_virtual(h, swap, le_to->iterate_previous);
76,846,015✔
489
                        le->iterate_next = to;
76,846,015✔
490
                }
491

492
                if (lh->iterate_list_head == from)
86,506,211✔
493
                        lh->iterate_list_head = to;
9,660,196✔
494
                if (lh->iterate_list_tail == from)
86,506,211✔
495
                        lh->iterate_list_tail = to;
24,130,549✔
496
        }
497
}
170,909,294✔
498

499
static unsigned next_idx(HashmapBase *h, unsigned idx) {
246,242,128✔
500
        return (idx + 1U) % n_buckets(h);
246,242,128✔
501
}
502

503
static unsigned prev_idx(HashmapBase *h, unsigned idx) {
76✔
504
        return (n_buckets(h) + idx - 1U) % n_buckets(h);
76✔
505
}
506

507
static void* entry_value(HashmapBase *h, struct hashmap_base_entry *e) {
146,728,966✔
508
        switch (h->type) {
146,728,966✔
509

510
        case HASHMAP_TYPE_PLAIN:
137,334,245✔
511
        case HASHMAP_TYPE_ORDERED:
512
                return ((struct plain_hashmap_entry*)e)->value;
137,334,245✔
513

514
        case HASHMAP_TYPE_SET:
9,394,721✔
515
                return (void*) e->key;
9,394,721✔
516

517
        default:
×
518
                assert_not_reached();
×
519
        }
520
}
521

522
static void base_remove_entry(HashmapBase *h, unsigned idx) {
31,653,452✔
523
        unsigned left, right, prev, dib;
31,653,452✔
524
        dib_raw_t raw_dib, *dibs;
31,653,452✔
525

526
        dibs = dib_raw_ptr(h);
31,653,452✔
527
        assert(dibs[idx] != DIB_RAW_FREE);
31,653,452✔
528

529
#if ENABLE_DEBUG_HASHMAP
530
        h->debug.rem_count++;
531
        h->debug.last_rem_idx = idx;
532
#endif
533

534
        left = idx;
31,653,452✔
535
        /* Find the stop bucket ("right"). It is either free or has DIB == 0. */
536
        for (right = next_idx(h, left); ; right = next_idx(h, right)) {
31,653,452✔
537
                raw_dib = dibs[right];
45,756,205✔
538
                if (IN_SET(raw_dib, 0, DIB_RAW_FREE))
45,756,205✔
539
                        break;
540

541
                /* The buckets are not supposed to be all occupied and with DIB > 0.
542
                 * That would mean we could make everyone better off by shifting them
543
                 * backward. This scenario is impossible. */
544
                assert(left != right);
14,102,753✔
545
        }
546

547
        if (h->type == HASHMAP_TYPE_ORDERED) {
31,653,452✔
548
                OrderedHashmap *lh = (OrderedHashmap*) h;
17,794,547✔
549
                struct ordered_hashmap_entry *le = ordered_bucket_at(lh, idx);
17,794,547✔
550

551
                if (le->iterate_next != IDX_NIL)
17,794,547✔
552
                        ordered_bucket_at(lh, le->iterate_next)->iterate_previous = le->iterate_previous;
16,369,311✔
553
                else
554
                        lh->iterate_list_tail = le->iterate_previous;
1,425,236✔
555

556
                if (le->iterate_previous != IDX_NIL)
17,794,547✔
557
                        ordered_bucket_at(lh, le->iterate_previous)->iterate_next = le->iterate_next;
14,806✔
558
                else
559
                        lh->iterate_list_head = le->iterate_next;
17,779,741✔
560
        }
561

562
        /* Now shift all buckets in the interval (left, right) one step backwards */
563
        for (prev = left, left = next_idx(h, left); left != right;
45,756,205✔
564
             prev = left, left = next_idx(h, left)) {
14,102,753✔
565
                dib = bucket_calculate_dib(h, left, dibs[left]);
14,102,753✔
566
                assert(dib != 0);
14,102,753✔
567
                bucket_move_entry(h, NULL, left, prev);
14,102,753✔
568
                bucket_set_dib(h, prev, dib - 1);
14,102,753✔
569
        }
570

571
        bucket_mark_free(h, prev);
31,653,452✔
572
        n_entries_dec(h);
31,653,452✔
573
        base_set_dirty(h);
31,653,452✔
574
}
31,653,452✔
575
#define remove_entry(h, idx) base_remove_entry(HASHMAP_BASE(h), idx)
576

577
static unsigned hashmap_iterate_in_insertion_order(OrderedHashmap *h, Iterator *i) {
27,511,510✔
578
        struct ordered_hashmap_entry *e;
27,511,510✔
579
        unsigned idx;
27,511,510✔
580

581
        assert(h);
27,511,510✔
582
        assert(i);
27,511,510✔
583

584
        if (i->idx == IDX_NIL)
27,511,510✔
585
                goto at_end;
770,102✔
586

587
        if (i->idx == IDX_FIRST && h->iterate_list_head == IDX_NIL)
26,741,408✔
588
                goto at_end;
415,274✔
589

590
        if (i->idx == IDX_FIRST) {
26,326,134✔
591
                idx = h->iterate_list_head;
18,538,989✔
592
                e = ordered_bucket_at(h, idx);
18,538,989✔
593
        } else {
594
                idx = i->idx;
7,787,145✔
595
                e = ordered_bucket_at(h, idx);
7,787,145✔
596
                /*
597
                 * We allow removing the current entry while iterating, but removal may cause
598
                 * a backward shift. The next entry may thus move one bucket to the left.
599
                 * To detect when it happens, we remember the key pointer of the entry we were
600
                 * going to iterate next. If it does not match, there was a backward shift.
601
                 */
602
                if (e->p.b.key != i->next_key) {
7,787,145✔
603
                        idx = prev_idx(HASHMAP_BASE(h), idx);
70✔
604
                        e = ordered_bucket_at(h, idx);
70✔
605
                }
606
                assert(e->p.b.key == i->next_key);
7,787,145✔
607
        }
608

609
#if ENABLE_DEBUG_HASHMAP
610
        i->prev_idx = idx;
611
#endif
612

613
        if (e->iterate_next != IDX_NIL) {
26,326,134✔
614
                struct ordered_hashmap_entry *n;
24,169,808✔
615
                i->idx = e->iterate_next;
24,169,808✔
616
                n = ordered_bucket_at(h, i->idx);
24,169,808✔
617
                i->next_key = n->p.b.key;
24,169,808✔
618
        } else
619
                i->idx = IDX_NIL;
2,156,326✔
620

621
        return idx;
622

623
at_end:
1,185,376✔
624
        i->idx = IDX_NIL;
1,185,376✔
625
        return IDX_NIL;
1,185,376✔
626
}
627

628
static unsigned hashmap_iterate_in_internal_order(HashmapBase *h, Iterator *i) {
43,964,653✔
629
        unsigned idx;
43,964,653✔
630

631
        assert(h);
43,964,653✔
632
        assert(i);
43,964,653✔
633

634
        if (i->idx == IDX_NIL)
43,964,653✔
635
                goto at_end;
3,566,966✔
636

637
        if (i->idx == IDX_FIRST) {
40,397,687✔
638
                /* fast forward to the first occupied bucket */
639
                if (h->has_indirect) {
17,225,846✔
640
                        i->idx = skip_free_buckets(h, h->indirect.idx_lowest_entry);
7,716,743✔
641
                        h->indirect.idx_lowest_entry = i->idx;
7,716,743✔
642
                } else
643
                        i->idx = skip_free_buckets(h, 0);
9,509,103✔
644

645
                if (i->idx == IDX_NIL)
17,225,846✔
646
                        goto at_end;
544,906✔
647
        } else {
648
                struct hashmap_base_entry *e;
23,171,841✔
649

650
                assert(i->idx > 0);
23,171,841✔
651

652
                e = bucket_at(h, i->idx);
23,171,841✔
653
                /*
654
                 * We allow removing the current entry while iterating, but removal may cause
655
                 * a backward shift. The next entry may thus move one bucket to the left.
656
                 * To detect when it happens, we remember the key pointer of the entry we were
657
                 * going to iterate next. If it does not match, there was a backward shift.
658
                 */
659
                if (e->key != i->next_key)
23,171,841✔
660
                        e = bucket_at(h, --i->idx);
31,921✔
661

662
                assert(e->key == i->next_key);
23,171,841✔
663
        }
664

665
        idx = i->idx;
39,852,781✔
666
#if ENABLE_DEBUG_HASHMAP
667
        i->prev_idx = idx;
668
#endif
669

670
        i->idx = skip_free_buckets(h, i->idx + 1);
39,852,781✔
671
        if (i->idx != IDX_NIL)
39,852,781✔
672
                i->next_key = bucket_at(h, i->idx)->key;
29,939,119✔
673
        else
674
                i->idx = IDX_NIL;
9,913,662✔
675

676
        return idx;
677

678
at_end:
4,111,872✔
679
        i->idx = IDX_NIL;
4,111,872✔
680
        return IDX_NIL;
4,111,872✔
681
}
682

683
static unsigned hashmap_iterate_entry(HashmapBase *h, Iterator *i) {
85,836,593✔
684
        if (!h) {
85,836,593✔
685
                i->idx = IDX_NIL;
14,360,430✔
686
                return IDX_NIL;
14,360,430✔
687
        }
688

689
#if ENABLE_DEBUG_HASHMAP
690
        if (i->idx == IDX_FIRST) {
691
                i->put_count = h->debug.put_count;
692
                i->rem_count = h->debug.rem_count;
693
        } else {
694
                /* While iterating, must not add any new entries */
695
                assert(i->put_count == h->debug.put_count);
696
                /* ... or remove entries other than the current one */
697
                assert(i->rem_count == h->debug.rem_count ||
698
                       (i->rem_count == h->debug.rem_count - 1 &&
699
                        i->prev_idx == h->debug.last_rem_idx));
700
                /* Reset our removals counter */
701
                i->rem_count = h->debug.rem_count;
702
        }
703
#endif
704

705
        return h->type == HASHMAP_TYPE_ORDERED ? hashmap_iterate_in_insertion_order((OrderedHashmap*) h, i)
27,511,510✔
706
                                               : hashmap_iterate_in_internal_order(h, i);
71,476,163✔
707
}
708

709
bool _hashmap_iterate(HashmapBase *h, Iterator *i, void **value, const void **key) {
58,657,176✔
710
        struct hashmap_base_entry *e;
58,657,176✔
711
        void *data;
58,657,176✔
712
        unsigned idx;
58,657,176✔
713

714
        idx = hashmap_iterate_entry(h, i);
58,657,176✔
715
        if (idx == IDX_NIL) {
58,657,176✔
716
                if (value)
19,620,756✔
717
                        *value = NULL;
18,866,482✔
718
                if (key)
19,620,756✔
719
                        *key = NULL;
4,177,422✔
720

721
                return false;
19,620,756✔
722
        }
723

724
        e = bucket_at(h, idx);
39,036,420✔
725
        data = entry_value(h, e);
39,036,420✔
726
        if (value)
39,036,420✔
727
                *value = data;
30,326,774✔
728
        if (key)
39,036,420✔
729
                *key = e->key;
23,845,377✔
730

731
        return true;
732
}
733

734
#define HASHMAP_FOREACH_IDX(idx, h, i) \
735
        for ((i) = ITERATOR_FIRST, (idx) = hashmap_iterate_entry((h), &(i)); \
736
             (idx != IDX_NIL); \
737
             (idx) = hashmap_iterate_entry((h), &(i)))
738

739
IteratedCache* _hashmap_iterated_cache_new(HashmapBase *h) {
6,807✔
740
        IteratedCache *cache;
6,807✔
741

742
        assert(h);
6,807✔
743
        assert(!h->cached);
6,807✔
744

745
        if (h->cached)
6,807✔
746
                return NULL;
747

748
        cache = new0(IteratedCache, 1);
6,807✔
749
        if (!cache)
6,807✔
750
                return NULL;
751

752
        cache->hashmap = h;
6,807✔
753
        h->cached = true;
6,807✔
754

755
        return cache;
6,807✔
756
}
757

758
static void reset_direct_storage(HashmapBase *h) {
7,810,092✔
759
        const struct hashmap_type_info *hi = &hashmap_type_info[h->type];
7,810,092✔
760
        void *p;
7,810,092✔
761

762
        assert(!h->has_indirect);
7,810,092✔
763

764
        p = mempset(h->direct.storage, 0, hi->entry_size * hi->n_direct_buckets);
7,810,092✔
765
        memset(p, DIB_RAW_INIT, sizeof(dib_raw_t) * hi->n_direct_buckets);
7,810,092✔
766
}
7,810,092✔
767

768
static void shared_hash_key_initialize(void) {
61,516✔
769
        random_bytes(shared_hash_key, sizeof(shared_hash_key));
61,516✔
770
}
61,516✔
771

772
static struct HashmapBase* hashmap_base_new(const struct hash_ops *hash_ops, enum HashmapType type) {
3,917,656✔
773
        HashmapBase *h;
3,917,656✔
774
        const struct hashmap_type_info *hi = &hashmap_type_info[type];
3,917,656✔
775

776
        bool use_pool = mempool_enabled && mempool_enabled();  /* mempool_enabled is a weak symbol */
3,917,656✔
777

778
        h = use_pool ? mempool_alloc0_tile(hi->mempool) : malloc0(hi->head_size);
4,003,520✔
779
        if (!h)
3,917,656✔
780
                return NULL;
781

782
        h->type = type;
3,917,656✔
783
        h->from_pool = use_pool;
3,917,656✔
784
        h->hash_ops = hash_ops ?: &trivial_hash_ops;
3,917,656✔
785

786
        if (type == HASHMAP_TYPE_ORDERED) {
3,917,656✔
787
                OrderedHashmap *lh = (OrderedHashmap*)h;
1,248,227✔
788
                lh->iterate_list_head = lh->iterate_list_tail = IDX_NIL;
1,248,227✔
789
        }
790

791
        reset_direct_storage(h);
3,917,656✔
792

793
        static pthread_once_t once = PTHREAD_ONCE_INIT;
3,917,656✔
794
        assert_se(pthread_once(&once, shared_hash_key_initialize) == 0);
3,917,656✔
795

796
#if ENABLE_DEBUG_HASHMAP
797
        assert_se(pthread_mutex_lock(&hashmap_debug_list_mutex) == 0);
798
        LIST_PREPEND(debug_list, hashmap_debug_list, &h->debug);
799
        assert_se(pthread_mutex_unlock(&hashmap_debug_list_mutex) == 0);
800
#endif
801

802
        return h;
803
}
804

805
Hashmap *hashmap_new(const struct hash_ops *hash_ops) {
515,619✔
806
        return (Hashmap*)        hashmap_base_new(hash_ops, HASHMAP_TYPE_PLAIN);
515,619✔
807
}
808

809
OrderedHashmap *ordered_hashmap_new(const struct hash_ops *hash_ops) {
54,268✔
810
        return (OrderedHashmap*) hashmap_base_new(hash_ops, HASHMAP_TYPE_ORDERED);
54,268✔
811
}
812

813
Set *set_new(const struct hash_ops *hash_ops) {
58,245✔
814
        return (Set*)            hashmap_base_new(hash_ops, HASHMAP_TYPE_SET);
58,245✔
815
}
816

817
static int hashmap_base_ensure_allocated(HashmapBase **h, const struct hash_ops *hash_ops,
30,235,071✔
818
                                         enum HashmapType type) {
819
        HashmapBase *q;
30,235,071✔
820

821
        assert(h);
30,235,071✔
822

823
        if (*h) {
30,235,071✔
824
                assert((*h)->hash_ops == (hash_ops ?: &trivial_hash_ops));
29,820,790✔
825
                return 0;
826
        }
827

828
        q = hashmap_base_new(hash_ops, type);
3,289,521✔
829
        if (!q)
3,289,521✔
830
                return -ENOMEM;
831

832
        *h = q;
3,289,521✔
833
        return 1;
3,289,521✔
834
}
835

836
int hashmap_ensure_allocated(Hashmap **h, const struct hash_ops *hash_ops) {
6,238,322✔
837
        return hashmap_base_ensure_allocated((HashmapBase**)h, hash_ops, HASHMAP_TYPE_PLAIN);
6,238,322✔
838
}
839

840
int ordered_hashmap_ensure_allocated(OrderedHashmap **h, const struct hash_ops *hash_ops) {
18,836,792✔
841
        return hashmap_base_ensure_allocated((HashmapBase**)h, hash_ops, HASHMAP_TYPE_ORDERED);
18,836,792✔
842
}
843

844
int set_ensure_allocated(Set **s, const struct hash_ops *hash_ops) {
5,159,957✔
845
        return hashmap_base_ensure_allocated((HashmapBase**)s, hash_ops, HASHMAP_TYPE_SET);
5,159,957✔
846
}
847

848
int hashmap_ensure_put(Hashmap **h, const struct hash_ops *hash_ops, const void *key, void *value) {
5,523,958✔
849
        int r;
5,523,958✔
850

851
        r = hashmap_ensure_allocated(h, hash_ops);
5,523,958✔
852
        if (r < 0)
5,523,958✔
853
                return r;
854

855
        return hashmap_put(*h, key, value);
5,523,958✔
856
}
857

858
int ordered_hashmap_ensure_put(OrderedHashmap **h, const struct hash_ops *hash_ops, const void *key, void *value) {
5,041,412✔
859
        int r;
5,041,412✔
860

861
        r = ordered_hashmap_ensure_allocated(h, hash_ops);
5,041,412✔
862
        if (r < 0)
5,041,412✔
863
                return r;
864

865
        return ordered_hashmap_put(*h, key, value);
5,041,412✔
866
}
867

868
int ordered_hashmap_ensure_replace(OrderedHashmap **h, const struct hash_ops *hash_ops, const void *key, void *value) {
6,840✔
869
        int r;
6,840✔
870

871
        r = ordered_hashmap_ensure_allocated(h, hash_ops);
6,840✔
872
        if (r < 0)
6,840✔
873
                return r;
874

875
        return ordered_hashmap_replace(*h, key, value);
6,840✔
876
}
877

878
int hashmap_ensure_replace(Hashmap **h, const struct hash_ops *hash_ops, const void *key, void *value) {
26,039✔
879
        int r;
26,039✔
880

881
        r = hashmap_ensure_allocated(h, hash_ops);
26,039✔
882
        if (r < 0)
26,039✔
883
                return r;
884

885
        return hashmap_replace(*h, key, value);
26,039✔
886
}
887

888
static void hashmap_free_no_clear(HashmapBase *h) {
3,874,592✔
889
        assert(!h->has_indirect);
3,874,592✔
890
        assert(h->n_direct_entries == 0);
3,874,592✔
891

892
#if ENABLE_DEBUG_HASHMAP
893
        assert_se(pthread_mutex_lock(&hashmap_debug_list_mutex) == 0);
894
        LIST_REMOVE(debug_list, hashmap_debug_list, &h->debug);
895
        assert_se(pthread_mutex_unlock(&hashmap_debug_list_mutex) == 0);
896
#endif
897

898
        if (h->from_pool) {
3,874,592✔
899
                /* Ensure that the object didn't get migrated between threads. */
900
                assert_se(is_main_thread());
3,789,393✔
901
                mempool_free_tile(hashmap_type_info[h->type].mempool, h);
3,789,393✔
902
        } else
903
                free(h);
85,199✔
904
}
3,874,592✔
905

906
HashmapBase* _hashmap_free(HashmapBase *h) {
16,141,808✔
907
        if (h) {
16,141,808✔
908
                _hashmap_clear(h);
3,874,592✔
909
                hashmap_free_no_clear(h);
3,874,592✔
910
        }
911

912
        return NULL;
16,141,808✔
913
}
914

915
void _hashmap_clear(HashmapBase *h) {
4,031,742✔
916
        if (!h)
4,031,742✔
917
                return;
918

919
        if (h->hash_ops->free_key || h->hash_ops->free_value) {
3,892,436✔
920

921
                /* If destructor calls are defined, let's destroy things defensively: let's take the item out of the
922
                 * hash table, and only then call the destructor functions. If these destructors then try to unregister
923
                 * themselves from our hash table a second time, the entry is already gone. */
924

925
                while (_hashmap_size(h) > 0) {
23,224,831✔
926
                        void *k = NULL;
20,087,782✔
927
                        void *v;
20,087,782✔
928

929
                        v = _hashmap_first_key_and_value(h, true, &k);
20,087,782✔
930

931
                        if (h->hash_ops->free_key)
20,087,782✔
932
                                h->hash_ops->free_key(k);
15,430,558✔
933

934
                        if (h->hash_ops->free_value)
20,087,782✔
935
                                h->hash_ops->free_value(v);
17,946,812✔
936
                }
937
        }
938

939
        if (h->has_indirect) {
3,892,436✔
940
                free(h->indirect.storage);
1,400,234✔
941
                h->has_indirect = false;
1,400,234✔
942
        }
943

944
        h->n_direct_entries = 0;
3,892,436✔
945
        reset_direct_storage(h);
3,892,436✔
946

947
        if (h->type == HASHMAP_TYPE_ORDERED) {
3,892,436✔
948
                OrderedHashmap *lh = (OrderedHashmap*) h;
1,244,951✔
949
                lh->iterate_list_head = lh->iterate_list_tail = IDX_NIL;
1,244,951✔
950
        }
951

952
        base_set_dirty(h);
3,892,436✔
953
}
954

955
static int resize_buckets(HashmapBase *h, unsigned entries_add);
956

957
/*
958
 * Finds an empty bucket to put an entry into, starting the scan at 'idx'.
959
 * Performs Robin Hood swaps as it goes. The entry to put must be placed
960
 * by the caller into swap slot IDX_PUT.
961
 * If used for in-place resizing, may leave a displaced entry in swap slot
962
 * IDX_PUT. Caller must rehash it next.
963
 * Returns: true if it left a displaced entry to rehash next in IDX_PUT,
964
 *          false otherwise.
965
 */
966
static bool hashmap_put_robin_hood(HashmapBase *h, unsigned idx,
63,523,980✔
967
                                   struct swap_entries *swap) {
968
        dib_raw_t raw_dib, *dibs;
63,523,980✔
969
        unsigned dib, distance;
63,523,980✔
970

971
#if ENABLE_DEBUG_HASHMAP
972
        h->debug.put_count++;
973
#endif
974

975
        dibs = dib_raw_ptr(h);
63,523,980✔
976

977
        for (distance = 0; ; distance++) {
63,523,980✔
978
                raw_dib = dibs[idx];
115,962,660✔
979
                if (IN_SET(raw_dib, DIB_RAW_FREE, DIB_RAW_REHASH)) {
115,962,660✔
980
                        if (raw_dib == DIB_RAW_REHASH)
63,523,980✔
981
                                bucket_move_entry(h, swap, idx, IDX_TMP);
6,029,807✔
982

983
                        if (h->has_indirect && h->indirect.idx_lowest_entry > idx)
63,523,980✔
984
                                h->indirect.idx_lowest_entry = idx;
38,777✔
985

986
                        bucket_set_dib(h, idx, distance);
63,523,980✔
987
                        bucket_move_entry(h, swap, IDX_PUT, idx);
63,523,980✔
988
                        if (raw_dib == DIB_RAW_REHASH) {
63,523,980✔
989
                                bucket_move_entry(h, swap, IDX_TMP, IDX_PUT);
6,029,807✔
990
                                return true;
6,029,807✔
991
                        }
992

993
                        return false;
994
                }
995

996
                dib = bucket_calculate_dib(h, idx, raw_dib);
52,438,680✔
997

998
                if (dib < distance) {
52,438,680✔
999
                        /* Found a wealthier entry. Go Robin Hood! */
1000
                        bucket_set_dib(h, idx, distance);
19,305,397✔
1001

1002
                        /* swap the entries */
1003
                        bucket_move_entry(h, swap, idx, IDX_TMP);
19,305,397✔
1004
                        bucket_move_entry(h, swap, IDX_PUT, idx);
19,305,397✔
1005
                        bucket_move_entry(h, swap, IDX_TMP, IDX_PUT);
19,305,397✔
1006

1007
                        distance = dib;
19,305,397✔
1008
                }
1009

1010
                idx = next_idx(h, idx);
52,438,680✔
1011
        }
1012
}
1013

1014
/*
1015
 * Puts an entry into a hashmap, boldly - no check whether key already exists.
1016
 * The caller must place the entry (only its key and value, not link indexes)
1017
 * in swap slot IDX_PUT.
1018
 * Caller must ensure: the key does not exist yet in the hashmap.
1019
 *                     that resize is not needed if !may_resize.
1020
 * Returns: 1 if entry was put successfully.
1021
 *          -ENOMEM if may_resize==true and resize failed with -ENOMEM.
1022
 *          Cannot return -ENOMEM if !may_resize.
1023
 */
1024
static int hashmap_base_put_boldly(HashmapBase *h, unsigned idx,
34,187,417✔
1025
                                   struct swap_entries *swap, bool may_resize) {
1026
        struct ordered_hashmap_entry *new_entry;
34,187,417✔
1027
        int r;
34,187,417✔
1028

1029
        assert(idx < n_buckets(h));
34,187,417✔
1030

1031
        new_entry = bucket_at_swap(swap, IDX_PUT);
34,187,417✔
1032

1033
        if (may_resize) {
34,187,417✔
1034
                r = resize_buckets(h, 1);
34,183,863✔
1035
                if (r < 0)
34,183,863✔
1036
                        return r;
1037
                if (r > 0)
34,183,863✔
1038
                        idx = bucket_hash(h, new_entry->p.b.key);
3,560,751✔
1039
        }
1040
        assert(n_entries(h) < n_buckets(h));
34,187,417✔
1041

1042
        if (h->type == HASHMAP_TYPE_ORDERED) {
34,187,417✔
1043
                OrderedHashmap *lh = (OrderedHashmap*) h;
18,070,020✔
1044

1045
                new_entry->iterate_next = IDX_NIL;
18,070,020✔
1046
                new_entry->iterate_previous = lh->iterate_list_tail;
18,070,020✔
1047

1048
                if (lh->iterate_list_tail != IDX_NIL) {
18,070,020✔
1049
                        struct ordered_hashmap_entry *old_tail;
16,633,590✔
1050

1051
                        old_tail = ordered_bucket_at(lh, lh->iterate_list_tail);
16,633,590✔
1052
                        assert(old_tail->iterate_next == IDX_NIL);
16,633,590✔
1053
                        old_tail->iterate_next = IDX_PUT;
16,633,590✔
1054
                }
1055

1056
                lh->iterate_list_tail = IDX_PUT;
18,070,020✔
1057
                if (lh->iterate_list_head == IDX_NIL)
18,070,020✔
1058
                        lh->iterate_list_head = IDX_PUT;
1,436,430✔
1059
        }
1060

1061
        assert_se(hashmap_put_robin_hood(h, idx, swap) == false);
34,187,417✔
1062

1063
        n_entries_inc(h);
34,187,417✔
1064
#if ENABLE_DEBUG_HASHMAP
1065
        h->debug.max_entries = MAX(h->debug.max_entries, n_entries(h));
1066
#endif
1067

1068
        base_set_dirty(h);
34,187,417✔
1069

1070
        return 1;
34,187,417✔
1071
}
1072
#define hashmap_put_boldly(h, idx, swap, may_resize) \
1073
        hashmap_base_put_boldly(HASHMAP_BASE(h), idx, swap, may_resize)
1074

1075
/*
1076
 * Returns 0 if resize is not needed.
1077
 *         1 if successfully resized.
1078
 *         -ENOMEM on allocation failure.
1079
 */
1080
static int resize_buckets(HashmapBase *h, unsigned entries_add) {
34,196,101✔
1081
        struct swap_entries swap;
34,196,101✔
1082
        void *new_storage;
34,196,101✔
1083
        dib_raw_t *old_dibs, *new_dibs;
34,196,101✔
1084
        const struct hashmap_type_info *hi;
34,196,101✔
1085
        unsigned idx, optimal_idx;
34,196,101✔
1086
        unsigned old_n_buckets, new_n_buckets, n_rehashed, new_n_entries;
34,196,101✔
1087
        uint8_t new_shift;
34,196,101✔
1088
        bool rehash_next;
34,196,101✔
1089

1090
        assert(h);
34,196,101✔
1091

1092
        hi = &hashmap_type_info[h->type];
34,196,101✔
1093
        new_n_entries = n_entries(h) + entries_add;
34,196,101✔
1094

1095
        /* overflow? */
1096
        if (_unlikely_(new_n_entries < entries_add))
34,196,101✔
1097
                return -ENOMEM;
34,196,101✔
1098

1099
        /* For direct storage we allow 100% load, because it's tiny. */
1100
        if (!h->has_indirect && new_n_entries <= hi->n_direct_buckets)
34,196,099✔
1101
                return 0;
1102

1103
        /*
1104
         * Load factor = n/m = 1 - (1/INV_KEEP_FREE).
1105
         * From it follows: m = n + n/(INV_KEEP_FREE - 1)
1106
         */
1107
        new_n_buckets = new_n_entries + new_n_entries / (INV_KEEP_FREE - 1);
26,482,490✔
1108
        /* overflow? */
1109
        if (_unlikely_(new_n_buckets < new_n_entries))
26,482,490✔
1110
                return -ENOMEM;
1111

1112
        if (_unlikely_(new_n_buckets > UINT_MAX / (hi->entry_size + sizeof(dib_raw_t))))
26,482,488✔
1113
                return -ENOMEM;
1114

1115
        old_n_buckets = n_buckets(h);
26,482,488✔
1116

1117
        if (_likely_(new_n_buckets <= old_n_buckets))
26,482,488✔
1118
                return 0;
1119

1120
        new_shift = log2u_round_up(MAX(
3,564,095✔
1121
                        new_n_buckets * (hi->entry_size + sizeof(dib_raw_t)),
1122
                        2 * sizeof(struct direct_storage)));
1123

1124
        /* Realloc storage (buckets and DIB array). */
1125
        new_storage = realloc(h->has_indirect ? h->indirect.storage : NULL,
2,148,993✔
1126
                              1U << new_shift);
3,564,095✔
1127
        if (!new_storage)
3,564,095✔
1128
                return -ENOMEM;
1129

1130
        /* Must upgrade direct to indirect storage. */
1131
        if (!h->has_indirect) {
3,564,095✔
1132
                memcpy(new_storage, h->direct.storage,
1,415,102✔
1133
                       old_n_buckets * (hi->entry_size + sizeof(dib_raw_t)));
1,415,102✔
1134
                h->indirect.n_entries = h->n_direct_entries;
1,415,102✔
1135
                h->indirect.idx_lowest_entry = 0;
1,415,102✔
1136
                h->n_direct_entries = 0;
1,415,102✔
1137
        }
1138

1139
        /* Get a new hash key. If we've just upgraded to indirect storage,
1140
         * allow reusing a previously generated key. It's still a different key
1141
         * from the shared one that we used for direct storage. */
1142
        get_hash_key(h->indirect.hash_key, !h->has_indirect);
3,564,095✔
1143

1144
        h->has_indirect = true;
3,564,095✔
1145
        h->indirect.storage = new_storage;
3,564,095✔
1146
        h->indirect.n_buckets = (1U << new_shift) /
3,564,095✔
1147
                                (hi->entry_size + sizeof(dib_raw_t));
1148

1149
        old_dibs = (dib_raw_t*)((uint8_t*) new_storage + hi->entry_size * old_n_buckets);
3,564,095✔
1150
        new_dibs = dib_raw_ptr(h);
3,564,095✔
1151

1152
        /*
1153
         * Move the DIB array to the new place, replacing valid DIB values with
1154
         * DIB_RAW_REHASH to indicate all of the used buckets need rehashing.
1155
         * Note: Overlap is not possible, because we have at least doubled the
1156
         * number of buckets and dib_raw_t is smaller than any entry type.
1157
         */
1158
        for (idx = 0; idx < old_n_buckets; idx++) {
44,462,039✔
1159
                assert(old_dibs[idx] != DIB_RAW_REHASH);
37,333,849✔
1160
                new_dibs[idx] = old_dibs[idx] == DIB_RAW_FREE ? DIB_RAW_FREE
67,677,037✔
1161
                                                              : DIB_RAW_REHASH;
1162
        }
1163

1164
        /* Zero the area of newly added entries (including the old DIB area) */
1165
        memzero(bucket_at(h, old_n_buckets),
3,564,095✔
1166
               (n_buckets(h) - old_n_buckets) * hi->entry_size);
1167

1168
        /* The upper half of the new DIB array needs initialization */
1169
        memset(&new_dibs[old_n_buckets], DIB_RAW_INIT,
7,128,190✔
1170
               (n_buckets(h) - old_n_buckets) * sizeof(dib_raw_t));
3,564,095✔
1171

1172
        /* Rehash entries that need it */
1173
        n_rehashed = 0;
3,564,095✔
1174
        for (idx = 0; idx < old_n_buckets; idx++) {
40,897,944✔
1175
                if (new_dibs[idx] != DIB_RAW_REHASH)
37,333,849✔
1176
                        continue;

                optimal_idx = bucket_hash(h, bucket_at(h, idx)->key);

                /*
                 * Not much to do if by luck the entry hashes to its current
                 * location. Just set its DIB.
                 */
                if (optimal_idx == idx) {
                        new_dibs[idx] = 0;
                        n_rehashed++;
                        continue;
                }

                new_dibs[idx] = DIB_RAW_FREE;
                bucket_move_entry(h, &swap, idx, IDX_PUT);
                /* bucket_move_entry does not clear the source */
                memzero(bucket_at(h, idx), hi->entry_size);

                do {
                        /*
                         * Find the new bucket for the current entry. This may make
                         * another entry homeless and load it into IDX_PUT.
                         */
                        rehash_next = hashmap_put_robin_hood(h, optimal_idx, &swap);
                        n_rehashed++;

                        /* Did the current entry displace another one? */
                        if (rehash_next)
                                optimal_idx = bucket_hash(h, bucket_at_swap(&swap, IDX_PUT)->p.b.key);
                } while (rehash_next);
        }

        assert_se(n_rehashed == n_entries(h));

        return 1;
}

/*
 * Finds an entry with a matching key
 * Returns: index of the found entry, or IDX_NIL if not found.
 */
static unsigned base_bucket_scan(HashmapBase *h, unsigned idx, const void *key) {
        struct hashmap_base_entry *e;
        unsigned dib, distance;
        dib_raw_t *dibs = dib_raw_ptr(h);

        assert(idx < n_buckets(h));

        for (distance = 0; ; distance++) {
                if (dibs[idx] == DIB_RAW_FREE)
                        return IDX_NIL;

                dib = bucket_calculate_dib(h, idx, dibs[idx]);

                if (dib < distance)
                        return IDX_NIL;
                if (dib == distance) {
                        e = bucket_at(h, idx);
                        if (h->hash_ops->compare(e->key, key) == 0)
                                return idx;
                }

                idx = next_idx(h, idx);
        }
}
#define bucket_scan(h, idx, key) base_bucket_scan(HASHMAP_BASE(h), idx, key)

int hashmap_put(Hashmap *h, const void *key, void *value) {
        struct swap_entries swap;
        struct plain_hashmap_entry *e;
        unsigned hash, idx;

        assert(h);

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx != IDX_NIL) {
                e = plain_bucket_at(h, idx);
                if (e->value == value)
                        return 0;
                return -EEXIST;
        }

        e = &bucket_at_swap(&swap, IDX_PUT)->p;
        e->b.key = key;
        e->value = value;
        return hashmap_put_boldly(h, hash, &swap, true);
}

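/*
 * Usage sketch (illustrative only, not part of the upstream file): hashmap_put() returns 1 when a new
 * entry is stored, 0 when the exact same key/value pair is already present, and -EEXIST when the key is
 * already bound to a different value. Assuming the public helpers declared in hashmap.h
 * (hashmap_new(), _cleanup_hashmap_free_, string_hash_ops) and the INT_TO_PTR() macro:
 *
 *     _cleanup_hashmap_free_ Hashmap *h = hashmap_new(&string_hash_ops);
 *     int r;
 *
 *     if (!h)
 *             return -ENOMEM;
 *
 *     r = hashmap_put(h, "eth0", INT_TO_PTR(1));   // 1: newly inserted
 *     r = hashmap_put(h, "eth0", INT_TO_PTR(1));   // 0: same key and value already present
 *     r = hashmap_put(h, "eth0", INT_TO_PTR(2));   // -EEXIST: key bound to a different value
 */
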
int set_put(Set *s, const void *key) {
        struct swap_entries swap;
        struct hashmap_base_entry *e;
        unsigned hash, idx;

        assert(s);

        hash = bucket_hash(s, key);
        idx = bucket_scan(s, hash, key);
        if (idx != IDX_NIL)
                return 0;

        e = &bucket_at_swap(&swap, IDX_PUT)->p.b;
        e->key = key;
        return hashmap_put_boldly(s, hash, &swap, true);
}

int set_ensure_put(Set **s, const struct hash_ops *hash_ops, const void *key) {
        int r;

        r = set_ensure_allocated(s, hash_ops);
        if (r < 0)
                return r;

        return set_put(*s, key);
}

int set_ensure_consume(Set **s, const struct hash_ops *hash_ops, void *key) {
        int r;

        r = set_ensure_put(s, hash_ops, key);
        if (r <= 0) {
                if (hash_ops && hash_ops->free_key)
                        hash_ops->free_key(key);
                else
                        free(key);
        }

        return r;
}

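/*
 * Usage sketch (illustrative; assumes the string_hash_ops_free ops from hash-funcs.h and an arbitrary
 * caller-owned string 'word'): set_ensure_consume() takes ownership of the key in every case — on
 * duplicate (0) or error (< 0) it frees the key itself, using the ops' free_key if one is set, so the
 * caller must not free it again:
 *
 *     char *copy = strdup(word);
 *     if (!copy)
 *             return -ENOMEM;
 *
 *     r = set_ensure_consume(&s, &string_hash_ops_free, copy);
 *     if (r < 0)
 *             return r;        // copy has already been freed by set_ensure_consume()
 */
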
int hashmap_replace(Hashmap *h, const void *key, void *value) {
        struct swap_entries swap;
        struct plain_hashmap_entry *e;
        unsigned hash, idx;

        assert(h);

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx != IDX_NIL) {
                e = plain_bucket_at(h, idx);
#if ENABLE_DEBUG_HASHMAP
                /* Although the key is equal, the key pointer may have changed,
                 * and this would break our assumption for iterating. So count
                 * this operation as incompatible with iteration. */
                if (e->b.key != key) {
                        h->b.debug.put_count++;
                        h->b.debug.rem_count++;
                        h->b.debug.last_rem_idx = idx;
                }
#endif
                e->b.key = key;
                e->value = value;
                hashmap_set_dirty(h);

                return 0;
        }

        e = &bucket_at_swap(&swap, IDX_PUT)->p;
        e->b.key = key;
        e->value = value;
        return hashmap_put_boldly(h, hash, &swap, true);
}

int hashmap_update(Hashmap *h, const void *key, void *value) {
        struct plain_hashmap_entry *e;
        unsigned hash, idx;

        assert(h);

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return -ENOENT;

        e = plain_bucket_at(h, idx);
        e->value = value;
        hashmap_set_dirty(h);

        return 0;
}

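/*
 * Taken together, the three write operations above differ only in how an existing key is handled
 * (return conventions as implemented above; sketch for orientation):
 *
 *     hashmap_put(h, k, v)      -> 1 new entry, 0 same entry already present, -EEXIST key bound to other value
 *     hashmap_replace(h, k, v)  -> overwrites key pointer and value if present (returns 0), inserts otherwise
 *     hashmap_update(h, k, v)   -> only touches an existing entry, -ENOENT if the key is missing
 */
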
void* _hashmap_get(HashmapBase *h, const void *key) {
        struct hashmap_base_entry *e;
        unsigned hash, idx;

        if (!h)
                return NULL;

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return NULL;

        e = bucket_at(h, idx);
        return entry_value(h, e);
}

void* hashmap_get2(Hashmap *h, const void *key, void **ret) {
        struct plain_hashmap_entry *e;
        unsigned hash, idx;

        if (!h)
                return NULL;

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return NULL;

        e = plain_bucket_at(h, idx);
        if (ret)
                *ret = (void*) e->b.key;

        return e->value;
}

bool _hashmap_contains(HashmapBase *h, const void *key) {
        unsigned hash;

        if (!h)
                return false;

        hash = bucket_hash(h, key);
        return bucket_scan(h, hash, key) != IDX_NIL;
}

void* _hashmap_remove(HashmapBase *h, const void *key) {
        struct hashmap_base_entry *e;
        unsigned hash, idx;
        void *data;

        if (!h)
                return NULL;

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return NULL;

        e = bucket_at(h, idx);
        data = entry_value(h, e);
        remove_entry(h, idx);

        return data;
}

void* hashmap_remove2(Hashmap *h, const void *key, void **ret) {
        struct plain_hashmap_entry *e;
        unsigned hash, idx;
        void *data;

        if (!h) {
                if (ret)
                        *ret = NULL;
                return NULL;
        }

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL) {
                if (ret)
                        *ret = NULL;
                return NULL;
        }

        e = plain_bucket_at(h, idx);
        data = e->value;
        if (ret)
                *ret = (void*) e->b.key;

        remove_entry(h, idx);

        return data;
}

int hashmap_remove_and_put(Hashmap *h, const void *old_key, const void *new_key, void *value) {
        struct swap_entries swap;
        struct plain_hashmap_entry *e;
        unsigned old_hash, new_hash, idx;

        if (!h)
                return -ENOENT;

        old_hash = bucket_hash(h, old_key);
        idx = bucket_scan(h, old_hash, old_key);
        if (idx == IDX_NIL)
                return -ENOENT;

        new_hash = bucket_hash(h, new_key);
        if (bucket_scan(h, new_hash, new_key) != IDX_NIL)
                return -EEXIST;

        remove_entry(h, idx);

        e = &bucket_at_swap(&swap, IDX_PUT)->p;
        e->b.key = new_key;
        e->value = value;
        assert_se(hashmap_put_boldly(h, new_hash, &swap, false) == 1);

        return 0;
}

int set_remove_and_put(Set *s, const void *old_key, const void *new_key) {
        struct swap_entries swap;
        struct hashmap_base_entry *e;
        unsigned old_hash, new_hash, idx;

        if (!s)
                return -ENOENT;

        old_hash = bucket_hash(s, old_key);
        idx = bucket_scan(s, old_hash, old_key);
        if (idx == IDX_NIL)
                return -ENOENT;

        new_hash = bucket_hash(s, new_key);
        if (bucket_scan(s, new_hash, new_key) != IDX_NIL)
                return -EEXIST;

        remove_entry(s, idx);

        e = &bucket_at_swap(&swap, IDX_PUT)->p.b;
        e->key = new_key;
        assert_se(hashmap_put_boldly(s, new_hash, &swap, false) == 1);

        return 0;
}

int hashmap_remove_and_replace(Hashmap *h, const void *old_key, const void *new_key, void *value) {
        struct swap_entries swap;
        struct plain_hashmap_entry *e;
        unsigned old_hash, new_hash, idx_old, idx_new;

        if (!h)
                return -ENOENT;

        old_hash = bucket_hash(h, old_key);
        idx_old = bucket_scan(h, old_hash, old_key);
        if (idx_old == IDX_NIL)
                return -ENOENT;

        old_key = bucket_at(HASHMAP_BASE(h), idx_old)->key;

        new_hash = bucket_hash(h, new_key);
        idx_new = bucket_scan(h, new_hash, new_key);
        if (idx_new != IDX_NIL)
                if (idx_old != idx_new) {
                        remove_entry(h, idx_new);
                        /* Compensate for a possible backward shift. */
                        if (old_key != bucket_at(HASHMAP_BASE(h), idx_old)->key)
                                idx_old = prev_idx(HASHMAP_BASE(h), idx_old);
                        assert(old_key == bucket_at(HASHMAP_BASE(h), idx_old)->key);
                }

        remove_entry(h, idx_old);

        e = &bucket_at_swap(&swap, IDX_PUT)->p;
        e->b.key = new_key;
        e->value = value;
        assert_se(hashmap_put_boldly(h, new_hash, &swap, false) == 1);

        return 0;
}

void* _hashmap_remove_value(HashmapBase *h, const void *key, void *value) {
        struct hashmap_base_entry *e;
        unsigned hash, idx;

        if (!h)
                return NULL;

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return NULL;

        e = bucket_at(h, idx);
        if (entry_value(h, e) != value)
                return NULL;

        remove_entry(h, idx);

        return value;
}

static unsigned find_first_entry(HashmapBase *h) {
        Iterator i = ITERATOR_FIRST;

        if (!h || !n_entries(h))
                return IDX_NIL;

        return hashmap_iterate_entry(h, &i);
}

void* _hashmap_first_key_and_value(HashmapBase *h, bool remove, void **ret_key) {
        struct hashmap_base_entry *e;
        void *key, *data;
        unsigned idx;

        idx = find_first_entry(h);
        if (idx == IDX_NIL) {
                if (ret_key)
                        *ret_key = NULL;
                return NULL;
        }

        e = bucket_at(h, idx);
        key = (void*) e->key;
        data = entry_value(h, e);

        if (remove)
                remove_entry(h, idx);

        if (ret_key)
                *ret_key = key;

        return data;
}

unsigned _hashmap_size(HashmapBase *h) {
        if (!h)
                return 0;

        return n_entries(h);
}

unsigned _hashmap_buckets(HashmapBase *h) {
        if (!h)
                return 0;

        return n_buckets(h);
}

int _hashmap_merge(Hashmap *h, Hashmap *other) {
        Iterator i;
        unsigned idx;

        assert(h);

        HASHMAP_FOREACH_IDX(idx, HASHMAP_BASE(other), i) {
                struct plain_hashmap_entry *pe = plain_bucket_at(other, idx);
                int r;

                r = hashmap_put(h, pe->b.key, pe->value);
                if (r < 0 && r != -EEXIST)
                        return r;
        }

        return 0;
}

int set_merge(Set *s, Set *other) {
        Iterator i;
        unsigned idx;

        assert(s);

        HASHMAP_FOREACH_IDX(idx, HASHMAP_BASE(other), i) {
                struct set_entry *se = set_bucket_at(other, idx);
                int r;

                r = set_put(s, se->b.key);
                if (r < 0)
                        return r;
        }

        return 0;
}

int _hashmap_reserve(HashmapBase *h, unsigned entries_add) {
        int r;

        assert(h);

        r = resize_buckets(h, entries_add);
        if (r < 0)
                return r;

        return 0;
}

/*
 * The same as hashmap_merge(), but every new item from other is moved to h.
 * Keys already in h are skipped and stay in other.
 * Returns: 0 on success.
 *          -ENOMEM on alloc failure, in which case no move has been done.
 */
int _hashmap_move(HashmapBase *h, HashmapBase *other) {
        struct swap_entries swap;
        struct hashmap_base_entry *e, *n;
        Iterator i;
        unsigned idx;
        int r;

        assert(h);

        if (!other)
                return 0;

        assert(other->type == h->type);

        /*
         * This reserves buckets for the worst case, where none of other's
         * entries are yet present in h. This is preferable to risking
         * an allocation failure in the middle of the moving and having to
         * rollback or return a partial result.
         */
        r = resize_buckets(h, n_entries(other));
        if (r < 0)
                return r;

        HASHMAP_FOREACH_IDX(idx, other, i) {
                unsigned h_hash;

                e = bucket_at(other, idx);
                h_hash = bucket_hash(h, e->key);
                if (bucket_scan(h, h_hash, e->key) != IDX_NIL)
                        continue;

                n = &bucket_at_swap(&swap, IDX_PUT)->p.b;
                n->key = e->key;
                if (h->type != HASHMAP_TYPE_SET)
                        ((struct plain_hashmap_entry*) n)->value =
                                ((struct plain_hashmap_entry*) e)->value;
                assert_se(hashmap_put_boldly(h, h_hash, &swap, false) == 1);

                remove_entry(other, idx);
        }

        return 0;
}

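/*
 * Usage sketch (illustrative; assumes the thin public wrapper hashmap_move() declared in hashmap.h,
 * which forwards to _hashmap_move() above; 'global' and 'per_worker' are made-up names): draining one
 * map into another without copying values, leaving duplicate keys behind in the source map:
 *
 *     r = hashmap_move(global, per_worker);
 *     if (r < 0)
 *             return r;                       // -ENOMEM: nothing has been moved
 *
 *     // entries whose keys already existed in 'global' are still in 'per_worker'
 */
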
int _hashmap_move_one(HashmapBase *h, HashmapBase *other, const void *key) {
        struct swap_entries swap;
        unsigned h_hash, other_hash, idx;
        struct hashmap_base_entry *e, *n;
        int r;

        assert(h);

        h_hash = bucket_hash(h, key);
        if (bucket_scan(h, h_hash, key) != IDX_NIL)
                return -EEXIST;

        if (!other)
                return -ENOENT;

        assert(other->type == h->type);

        other_hash = bucket_hash(other, key);
        idx = bucket_scan(other, other_hash, key);
        if (idx == IDX_NIL)
                return -ENOENT;

        e = bucket_at(other, idx);

        n = &bucket_at_swap(&swap, IDX_PUT)->p.b;
        n->key = e->key;
        if (h->type != HASHMAP_TYPE_SET)
                ((struct plain_hashmap_entry*) n)->value =
                        ((struct plain_hashmap_entry*) e)->value;
        r = hashmap_put_boldly(h, h_hash, &swap, true);
        if (r < 0)
                return r;

        remove_entry(other, idx);
        return 0;
}

HashmapBase* _hashmap_copy(HashmapBase *h) {
        HashmapBase *copy;
        int r;

        assert(h);

        copy = hashmap_base_new(h->hash_ops, h->type);
        if (!copy)
                return NULL;

        switch (h->type) {
        case HASHMAP_TYPE_PLAIN:
        case HASHMAP_TYPE_ORDERED:
                r = hashmap_merge((Hashmap*)copy, (Hashmap*)h);
                break;
        case HASHMAP_TYPE_SET:
                r = set_merge((Set*)copy, (Set*)h);
                break;
        default:
                assert_not_reached();
        }

        if (r < 0)
                return _hashmap_free(copy);

        return copy;
}

char** _hashmap_get_strv(HashmapBase *h) {
        char **sv;
        Iterator i;
        unsigned idx, n;

        if (!h)
                return new0(char*, 1);

        sv = new(char*, n_entries(h)+1);
        if (!sv)
                return NULL;

        n = 0;
        HASHMAP_FOREACH_IDX(idx, h, i)
                sv[n++] = entry_value(h, bucket_at(h, idx));
        sv[n] = NULL;

        return sv;
}

char** set_to_strv(Set **s) {
        assert(s);

        /* This is similar to set_get_strv(), but invalidates the set on success. */

        char **v = new(char*, set_size(*s) + 1);
        if (!v)
                return NULL;

        for (char **p = v; (*p = set_steal_first(*s)); p++)
                ;

        assert(set_isempty(*s));
        *s = set_free(*s);
        return v;
}

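/*
 * Usage sketch (illustrative, assuming a set that owns strdup()'d strings): set_to_strv() steals every
 * element into a NULL-terminated string array and frees the now-empty set, so afterwards the strv is
 * the sole owner of the strings:
 *
 *     _cleanup_strv_free_ char **l = set_to_strv(&s);
 *     if (!l)
 *             return -ENOMEM;
 *
 *     // s is NULL here; the array can be used like any other strv, e.g. strv_sort(l)
 */
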
void* ordered_hashmap_next(OrderedHashmap *h, const void *key) {
        struct ordered_hashmap_entry *e;
        unsigned hash, idx;

        if (!h)
                return NULL;

        hash = bucket_hash(h, key);
        idx = bucket_scan(h, hash, key);
        if (idx == IDX_NIL)
                return NULL;

        e = ordered_bucket_at(h, idx);
        if (e->iterate_next == IDX_NIL)
                return NULL;
        return ordered_bucket_at(h, e->iterate_next)->p.value;
}

int set_consume(Set *s, void *value) {
        int r;

        assert(s);
        assert(value);

        r = set_put(s, value);
        if (r <= 0)
                free(value);

        return r;
}

int hashmap_put_strdup_full(Hashmap **h, const struct hash_ops *hash_ops, const char *k, const char *v) {
        int r;

        r = hashmap_ensure_allocated(h, hash_ops);
        if (r < 0)
                return r;

        _cleanup_free_ char *kdup = NULL, *vdup = NULL;

        kdup = strdup(k);
        if (!kdup)
                return -ENOMEM;

        if (v) {
                vdup = strdup(v);
                if (!vdup)
                        return -ENOMEM;
        }

        r = hashmap_put(*h, kdup, vdup);
        if (r < 0) {
                if (r == -EEXIST && streq_ptr(v, hashmap_get(*h, kdup)))
                        return 0;
                return r;
        }

        /* 0 with non-null vdup would mean vdup is already in the hashmap, which cannot be */
        assert(vdup == NULL || r > 0);
        if (r > 0)
                kdup = vdup = NULL;

        return r;
}

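/*
 * Usage sketch (illustrative; assumes the string_hash_ops_free_free ops from hash-funcs.h, which free
 * both key and value when the map is freed): hashmap_put_strdup_full() copies key and value, so the
 * caller may pass transient strings. Storing the same key/value pair twice is a NOP (returns 0), while
 * a conflicting value for an existing key fails with -EEXIST:
 *
 *     _cleanup_hashmap_free_ Hashmap *env = NULL;
 *
 *     r = hashmap_put_strdup_full(&env, &string_hash_ops_free_free, "LANG", "C.UTF-8");
 *     if (r < 0)
 *             return r;
 */
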
int set_put_strndup_full(Set **s, const struct hash_ops *hash_ops, const char *p, size_t n) {
        char *c;
        int r;

        assert(s);
        assert(p);

        r = set_ensure_allocated(s, hash_ops);
        if (r < 0)
                return r;

        if (n == SIZE_MAX) {
                if (set_contains(*s, (char*) p))
                        return 0;

                c = strdup(p);
        } else
                c = strndup(p, n);
        if (!c)
                return -ENOMEM;

        return set_consume(*s, c);
}

int set_put_strdupv_full(Set **s, const struct hash_ops *hash_ops, char **l) {
        int n = 0, r;

        assert(s);

        STRV_FOREACH(i, l) {
                r = set_put_strndup_full(s, hash_ops, *i, SIZE_MAX);
                if (r < 0)
                        return r;

                n += r;
        }

        return n;
}

int set_put_strsplit(Set *s, const char *v, const char *separators, ExtractFlags flags) {
        const char *p = ASSERT_PTR(v);
        int r;

        assert(s);

        for (;;) {
                char *word;

                r = extract_first_word(&p, &word, separators, flags);
                if (r <= 0)
                        return r;

                r = set_consume(s, word);
                if (r < 0)
                        return r;
        }
}

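/*
 * Usage sketch (illustrative; WHITESPACE comes from string-util.h and ExtractFlags from
 * extract-word.h): set_put_strsplit() expects an already allocated set and consumes each extracted
 * word via set_consume(), so duplicates are freed automatically:
 *
 *     _cleanup_set_free_ Set *words = set_new(&string_hash_ops_free);
 *     if (!words)
 *             return -ENOMEM;
 *
 *     r = set_put_strsplit(words, "foo bar  baz", WHITESPACE, 0);
 *     if (r < 0)
 *             return r;
 */
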
/* expand the cachemem if needed, return true if newly (re)activated. */
static int cachemem_maintain(CacheMem *mem, size_t size) {
        assert(mem);

        if (!GREEDY_REALLOC(mem->ptr, size)) {
                if (size > 0)
                        return -ENOMEM;
        }

        if (!mem->active) {
                mem->active = true;
                return true;
        }

        return false;
}

int iterated_cache_get(IteratedCache *cache, const void ***res_keys, const void ***res_values, unsigned *res_n_entries) {
        bool sync_keys = false, sync_values = false;
        size_t size;
        int r;

        assert(cache);
        assert(cache->hashmap);

        size = n_entries(cache->hashmap);

        if (res_keys) {
                r = cachemem_maintain(&cache->keys, size);
                if (r < 0)
                        return r;

                sync_keys = r;
        } else
                cache->keys.active = false;

        if (res_values) {
                r = cachemem_maintain(&cache->values, size);
                if (r < 0)
                        return r;

                sync_values = r;
        } else
                cache->values.active = false;

        if (cache->hashmap->dirty) {
                if (cache->keys.active)
                        sync_keys = true;
                if (cache->values.active)
                        sync_values = true;

                cache->hashmap->dirty = false;
        }

        if (sync_keys || sync_values) {
                unsigned i, idx;
                Iterator iter;

                i = 0;
                HASHMAP_FOREACH_IDX(idx, cache->hashmap, iter) {
                        struct hashmap_base_entry *e;

                        e = bucket_at(cache->hashmap, idx);

                        if (sync_keys)
                                cache->keys.ptr[i] = e->key;
                        if (sync_values)
                                cache->values.ptr[i] = entry_value(cache->hashmap, e);
                        i++;
                }
        }

        if (res_keys)
                *res_keys = cache->keys.ptr;
        if (res_values)
                *res_values = cache->values.ptr;
        if (res_n_entries)
                *res_n_entries = size;

        return 0;
}

IteratedCache* iterated_cache_free(IteratedCache *cache) {
        if (cache) {
                free(cache->keys.ptr);
                free(cache->values.ptr);
        }

        return mfree(cache);
}

int set_strjoin(Set *s, const char *separator, bool wrap_with_separator, char **ret) {
        _cleanup_free_ char *str = NULL;
        size_t separator_len, len = 0;
        const char *value;
        bool first;

        assert(ret);

        if (set_isempty(s)) {
                *ret = NULL;
                return 0;
        }

        separator_len = strlen_ptr(separator);

        if (separator_len == 0)
                wrap_with_separator = false;

        first = !wrap_with_separator;

        SET_FOREACH(value, s) {
                size_t l = strlen_ptr(value);

                if (l == 0)
                        continue;

                if (!GREEDY_REALLOC(str, len + l + (first ? 0 : separator_len) + (wrap_with_separator ? separator_len : 0) + 1))
                        return -ENOMEM;

                if (separator_len > 0 && !first) {
                        memcpy(str + len, separator, separator_len);
                        len += separator_len;
                }

                memcpy(str + len, value, l);
                len += l;
                first = false;
        }

        if (wrap_with_separator) {
                memcpy(str + len, separator, separator_len);
                len += separator_len;
        }

        str[len] = '\0';

        *ret = TAKE_PTR(str);
        return 0;
}

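/*
 * Usage sketch (illustrative): joining the contents of a set into a single allocated string. With
 * wrap_with_separator=true the separator is also prepended before the first element and appended after
 * the last one, producing strings like ",a,b,c,":
 *
 *     _cleanup_free_ char *joined = NULL;
 *
 *     r = set_strjoin(s, ",", false, &joined);
 *     if (r < 0)
 *             return r;
 *     // joined is NULL if the set was empty, otherwise e.g. "a,b,c" (iteration order is unspecified)
 */
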
bool set_equal(Set *a, Set *b) {
        void *p;

        /* Checks whether each entry of 'a' is also in 'b' and vice versa, i.e. the two sets contain the same
         * entries */

        if (a == b)
                return true;

        if (set_isempty(a) && set_isempty(b))
                return true;

        if (set_size(a) != set_size(b)) /* Cheap check that hopefully catches a lot of inequality cases
                                         * already */
                return false;

        SET_FOREACH(p, a)
                if (!set_contains(b, p))
                        return false;

        /* If we have the same hashops, then we don't need to check things backwards given we compared the
         * size and that all of a is in b. */
        if (a->b.hash_ops == b->b.hash_ops)
                return true;

        SET_FOREACH(p, b)
                if (!set_contains(a, p))
                        return false;

        return true;
}

static bool set_fnmatch_one(Set *patterns, const char *needle) {
        const char *p;

        assert(needle);

        /* Any failure of fnmatch() is treated as equivalent to FNM_NOMATCH, i.e. as non-matching pattern */

        SET_FOREACH(p, patterns)
                if (fnmatch(p, needle, 0) == 0)
                        return true;

        return false;
}

bool set_fnmatch(Set *include_patterns, Set *exclude_patterns, const char *needle) {
        assert(needle);

        if (set_fnmatch_one(exclude_patterns, needle))
                return false;

        if (set_isempty(include_patterns))
                return true;

        return set_fnmatch_one(include_patterns, needle);
}

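/*
 * Usage sketch (illustrative): set_fnmatch() implements the usual allow/deny-list logic — a match in
 * the exclude set always wins, and an empty (or NULL) include set means "everything is included":
 *
 *     // include_patterns = { "eth*" }, exclude_patterns = { "eth1" }
 *     set_fnmatch(include_patterns, exclude_patterns, "eth0");   // true
 *     set_fnmatch(include_patterns, exclude_patterns, "eth1");   // false, excluded
 *     set_fnmatch(NULL, exclude_patterns, "wlan0");              // true, empty include set
 */
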
static int hashmap_entry_compare(
                struct hashmap_base_entry * const *a,
                struct hashmap_base_entry * const *b,
                compare_func_t compare) {

        assert(a && *a);
        assert(b && *b);
        assert(compare);

        return compare((*a)->key, (*b)->key);
}

static int _hashmap_dump_entries_sorted(
                HashmapBase *h,
                void ***ret,
                size_t *ret_n) {
        _cleanup_free_ void **entries = NULL;
        Iterator iter;
        unsigned idx;
        size_t n = 0;

        assert(ret);
        assert(ret_n);

        if (_hashmap_size(h) == 0) {
                *ret = NULL;
                *ret_n = 0;
                return 0;
        }

        /* We append one more element than needed so that the resulting array can be used as a strv. We
         * don't count this entry in the returned size. */
        entries = new(void*, _hashmap_size(h) + 1);
        if (!entries)
                return -ENOMEM;

        HASHMAP_FOREACH_IDX(idx, h, iter)
                entries[n++] = bucket_at(h, idx);

        assert(n == _hashmap_size(h));
        entries[n] = NULL;

        typesafe_qsort_r((struct hashmap_base_entry**) entries, n,
                         hashmap_entry_compare, h->hash_ops->compare);

        *ret = TAKE_PTR(entries);
        *ret_n = n;
        return 0;
}

int _hashmap_dump_keys_sorted(HashmapBase *h, void ***ret, size_t *ret_n) {
        _cleanup_free_ void **entries = NULL;
        size_t n;
        int r;

        r = _hashmap_dump_entries_sorted(h, &entries, &n);
        if (r < 0)
                return r;

        /* Reuse the array. */
        FOREACH_ARRAY(e, entries, n)
                *e = (void*) (*(struct hashmap_base_entry**) e)->key;

        *ret = TAKE_PTR(entries);
        if (ret_n)
                *ret_n = n;
        return 0;
}

int _hashmap_dump_sorted(HashmapBase *h, void ***ret, size_t *ret_n) {
        _cleanup_free_ void **entries = NULL;
        size_t n;
        int r;

        r = _hashmap_dump_entries_sorted(h, &entries, &n);
        if (r < 0)
                return r;

        /* Reuse the array. */
        FOREACH_ARRAY(e, entries, n)
                *e = entry_value(h, *(struct hashmap_base_entry**) e);

        *ret = TAKE_PTR(entries);
        if (ret_n)
                *ret_n = n;
        return 0;
}