tarantool / luajit — build 7119175127 (GitHub push)

06 Dec 2023 06:58PM UTC — coverage: 88.591% (down 0.03% from 88.621%)

Commit by igormunkin:
Fix HREFK forwarding vs. table.clear().

Reported by XmiliaH.

(cherry-picked from commit d5a237eae)

When the HREFK (and also ALOAD, HLOAD) forwarding optimization is
performed, a `table.clear()` call may occur on the table operand of the
HREFK between the creation of the table and the IR instruction from
which the value is forwarded. This call isn't taken into account, so it
may lead to overly optimistic value forwarding from NEWREF (and also
ASTORE, HSTORE), or to an omitted type guard for the HREFK operation.
This results in incorrect trace behaviour (for example, taking a
non-nil value from the cleared table).

This patch adds the necessary checks for `table.clear()` calls; a
sketch of the failure mode follows the commit trailers below.

Sergey Kaplun:
* added the description and the test for the problem

Part of tarantool/tarantool#9145

Reviewed-by: Maxim Kokryashkin <m.kokryashkin@tarantool.org>
Reviewed-by: Sergey Bronnikov <sergeyb@tarantool.org>
Signed-off-by: Igor Munkin <imun@tarantool.org>
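
To make the failure mode concrete, here is a minimal Lua sketch (a
hypothetical reproducer, not the regression test added by the patch; it
assumes LuaJIT's table.clear extension module):

-- Hypothetical sketch: before the fix, load forwarding on a compiled
-- trace could forward the stored 1 to the read of t[1] across the
-- intervening table.clear(), so f() could return 1 instead of nil.
local table_clear = require("table.clear")  -- LuaJIT extension

local function f()
  local t = {}       -- table created on trace
  t[1] = 1           -- store into the fresh table
  table_clear(t)     -- empties the table; must invalidate forwarding
  return t[1]        -- the correct result is nil
end

for _ = 1, 100 do    -- run hot enough for the loop to be compiled
  assert(f() == nil)
end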

5377 of 5987 branches covered (89.81%)

Branch coverage included in aggregate %.

12 of 12 new or added lines in 1 file covered. (100.0%)

24 existing lines in 5 files now uncovered.

20619 of 23357 relevant lines covered (88.28%)

2754697.77 hits per line

Source file: /src/lj_opt_loop.c — 94.18% covered
/*
** LOOP: Loop Optimizations.
** Copyright (C) 2005-2017 Mike Pall. See Copyright Notice in luajit.h
*/

#define lj_opt_loop_c
#define LUA_CORE

#include "lj_obj.h"

#if LJ_HASJIT

#include "lj_err.h"
#include "lj_buf.h"
#include "lj_ir.h"
#include "lj_jit.h"
#include "lj_iropt.h"
#include "lj_trace.h"
#include "lj_snap.h"
#include "lj_vm.h"

/* Loop optimization:
**
** Traditional Loop-Invariant Code Motion (LICM) splits the instructions
** of a loop into invariant and variant instructions. The invariant
** instructions are hoisted out of the loop and only the variant
** instructions remain inside the loop body.
**
** Unfortunately LICM is mostly useless for compiling dynamic languages.
** The IR has many guards and most of the subsequent instructions are
** control-dependent on them. The first non-hoistable guard would
** effectively prevent hoisting of all subsequent instructions.
**
** That's why we use a special form of unrolling using copy-substitution,
** combined with redundancy elimination:
**
** The recorded instruction stream is re-emitted to the compiler pipeline
** with substituted operands. The substitution table is filled with the
** refs returned by re-emitting each instruction. This can be done
** on-the-fly, because the IR is in strict SSA form, where every ref is
** defined before its use.
**
** This approach generates two code sections, separated by the LOOP
** instruction:
**
** 1. The recorded instructions form a kind of pre-roll for the loop. It
** contains a mix of invariant and variant instructions and performs
** exactly one loop iteration (but not necessarily the 1st iteration).
**
** 2. The loop body contains only the variant instructions and performs
** all remaining loop iterations.
**
** On first sight that looks like a waste of space, because the variant
** instructions are present twice. But the key insight is that the
** pre-roll honors the control-dependencies for *both* the pre-roll itself
** *and* the loop body!
**
** It also means one doesn't have to explicitly model control-dependencies
** (which, BTW, wouldn't help LICM much). And it's much easier to
** integrate sparse snapshotting with this approach.
**
** One of the nicest aspects of this approach is that all of the
** optimizations of the compiler pipeline (FOLD, CSE, FWD, etc.) can be
** reused with only minor restrictions (e.g. one should not fold
** instructions across loop-carried dependencies).
**
** But in general all optimizations can be applied which only need to look
** backwards into the generated instruction stream. At any point in time
** during the copy-substitution process this contains both a static loop
** iteration (the pre-roll) and a dynamic one (from the to-be-copied
** instruction up to the end of the partial loop body).
**
** Since control-dependencies are implicitly kept, CSE also applies to all
** kinds of guards. The major advantage is that all invariant guards can
** be hoisted, too.
**
** Load/store forwarding works across loop iterations, too. This is
** important if loop-carried dependencies are kept in upvalues or tables.
** E.g. 'self.idx = self.idx + 1' deep down in some OO-style method may
** become a forwarded loop-recurrence after inlining.
**
** Since the IR is in SSA form, loop-carried dependencies have to be
** modeled with PHI instructions. The potential candidates for PHIs are
** collected on-the-fly during copy-substitution. After eliminating the
** redundant ones, PHI instructions are emitted *below* the loop body.
**
** Note that this departure from traditional SSA form doesn't change the
** semantics of the PHI instructions themselves. But it greatly simplifies
** on-the-fly generation of the IR and the machine code.
*/
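
/*
** A rough, hypothetical sketch of the two sections for 'i = i + 1'
** inside a loop (refs, operands and mnemonics are illustrative only,
** not actual compiler output):
**
**   0001    int SLOAD  #1          ; i, from the pre-roll iteration
**   0002 >  int LT     0001, +100  ; guard for one full iteration
**   0003  + int ADD    0001, +1    ; i+1, collected as a PHI candidate
**   ----    LOOP                   ; separates pre-roll from loop body
**   0004 >  int LT     0003, +100  ; copy-substituted guard
**   0005  + int ADD    0003, +1    ; copy-substituted add
**   ----    PHI    0003, 0005      ; loop-carried value of i, emitted
**                                  ; below the loop body
*/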

/* Some local macros to save typing. Undef'd at the end. */
#define IR(ref)                (&J->cur.ir[(ref)])

/* Pass IR on to next optimization in chain (FOLD). */
#define emitir(ot, a, b)        (lj_ir_set(J, (ot), (a), (b)), lj_opt_fold(J))

/* Emit raw IR without passing through optimizations. */
#define emitir_raw(ot, a, b)        (lj_ir_set(J, (ot), (a), (b)), lj_ir_emit(J))

/* -- PHI elimination ----------------------------------------------------- */

/* Emit or eliminate collected PHIs. */
static void loop_emit_phi(jit_State *J, IRRef1 *subst, IRRef1 *phi, IRRef nphi,
                          SnapNo onsnap)
{
  int passx = 0;
  IRRef i, j, nslots;
  IRRef invar = J->chain[IR_LOOP];
  /* Pass #1: mark redundant and potentially redundant PHIs. */
  for (i = 0, j = 0; i < nphi; i++) {
    IRRef lref = phi[i];
    IRRef rref = subst[lref];
    if (lref == rref || rref == REF_DROP) {  /* Invariants are redundant. */
      irt_clearphi(IR(lref)->t);
    } else {
      phi[j++] = (IRRef1)lref;
      if (!(IR(rref)->op1 == lref || IR(rref)->op2 == lref)) {
        /* Quick check for simple recurrences failed, need pass2. */
        irt_setmark(IR(lref)->t);
        passx = 1;
      }
    }
  }
  nphi = j;
  /* Pass #2: traverse variant part and clear marks of non-redundant PHIs. */
  if (passx) {
    SnapNo s;
    for (i = J->cur.nins-1; i > invar; i--) {
      IRIns *ir = IR(i);
      if (!irref_isk(ir->op2)) irt_clearmark(IR(ir->op2)->t);
      if (!irref_isk(ir->op1)) {
        irt_clearmark(IR(ir->op1)->t);
        if (ir->op1 < invar &&
            ir->o >= IR_CALLN && ir->o <= IR_CARG) {  /* ORDER IR */
          ir = IR(ir->op1);
          while (ir->o == IR_CARG) {
            if (!irref_isk(ir->op2)) irt_clearmark(IR(ir->op2)->t);
            if (irref_isk(ir->op1)) break;
            ir = IR(ir->op1);
            irt_clearmark(ir->t);
          }
        }
      }
    }
    for (s = J->cur.nsnap-1; s >= onsnap; s--) {
      SnapShot *snap = &J->cur.snap[s];
      SnapEntry *map = &J->cur.snapmap[snap->mapofs];
      MSize n, nent = snap->nent;
      for (n = 0; n < nent; n++) {
        IRRef ref = snap_ref(map[n]);
        if (!irref_isk(ref)) irt_clearmark(IR(ref)->t);
      }
    }
  }
  /* Pass #3: add PHIs for variant slots without a corresponding SLOAD. */
  nslots = J->baseslot+J->maxslot;
  for (i = 1; i < nslots; i++) {
    IRRef ref = tref_ref(J->slot[i]);
    while (!irref_isk(ref) && ref != subst[ref]) {
      IRIns *ir = IR(ref);
      irt_clearmark(ir->t);  /* Unmark potential uses, too. */
      if (irt_isphi(ir->t) || irt_ispri(ir->t))
        break;
      irt_setphi(ir->t);
      if (nphi >= LJ_MAX_PHI)
        lj_trace_err(J, LJ_TRERR_PHIOV);
      phi[nphi++] = (IRRef1)ref;
      ref = subst[ref];
      if (ref > invar)
        break;
    }
  }
  /* Pass #4: propagate non-redundant PHIs. */
  while (passx) {
    passx = 0;
    for (i = 0; i < nphi; i++) {
      IRRef lref = phi[i];
      IRIns *ir = IR(lref);
      if (!irt_ismarked(ir->t)) {  /* Propagate only from unmarked PHIs. */
        IRIns *irr = IR(subst[lref]);
        if (irt_ismarked(irr->t)) {  /* Right ref points to other PHI? */
          irt_clearmark(irr->t);  /* Mark that PHI as non-redundant. */
          passx = 1;  /* Retry. */
        }
      }
    }
  }
  /* Pass #5: emit PHI instructions or eliminate PHIs. */
  for (i = 0; i < nphi; i++) {
    IRRef lref = phi[i];
    IRIns *ir = IR(lref);
    if (!irt_ismarked(ir->t)) {  /* Emit PHI if not marked. */
      IRRef rref = subst[lref];
      if (rref > invar)
        irt_setphi(IR(rref)->t);
      emitir_raw(IRT(IR_PHI, irt_type(ir->t)), lref, rref);
    } else {  /* Otherwise eliminate PHI. */
      irt_clearmark(ir->t);
      irt_clearphi(ir->t);
    }
  }
}

/* -- Loop unrolling using copy-substitution ------------------------------ */

/* Copy-substitute snapshot. */
static void loop_subst_snap(jit_State *J, SnapShot *osnap,
                            SnapEntry *loopmap, IRRef1 *subst)
{
  SnapEntry *nmap, *omap = &J->cur.snapmap[osnap->mapofs];
  SnapEntry *nextmap = &J->cur.snapmap[snap_nextofs(&J->cur, osnap)];
  MSize nmapofs;
  MSize on, ln, nn, onent = osnap->nent;
  BCReg nslots = osnap->nslots;
  SnapShot *snap = &J->cur.snap[J->cur.nsnap];
  if (irt_isguard(J->guardemit)) {  /* Guard inbetween? */
    nmapofs = J->cur.nsnapmap;
    J->cur.nsnap++;  /* Add new snapshot. */
  } else {  /* Otherwise overwrite previous snapshot. */
    snap--;
    nmapofs = snap->mapofs;
  }
  J->guardemit.irt = 0;
  /* Setup new snapshot. */
  snap->mapofs = (uint32_t)nmapofs;
  snap->ref = (IRRef1)J->cur.nins;
  snap->mcofs = 0;
  snap->nslots = nslots;
  snap->topslot = osnap->topslot;
  snap->count = 0;
  nmap = &J->cur.snapmap[nmapofs];
  /* Substitute snapshot slots. */
  on = ln = nn = 0;
  while (on < onent) {
    SnapEntry osn = omap[on], lsn = loopmap[ln];
    if (snap_slot(lsn) < snap_slot(osn)) {  /* Copy slot from loop map. */
      nmap[nn++] = lsn;
      ln++;
    } else {  /* Copy substituted slot from snapshot map. */
      if (snap_slot(lsn) == snap_slot(osn)) ln++;  /* Shadowed loop slot. */
      if (!irref_isk(snap_ref(osn)))
        osn = snap_setref(osn, subst[snap_ref(osn)]);
      nmap[nn++] = osn;
      on++;
    }
  }
  while (snap_slot(loopmap[ln]) < nslots)  /* Copy remaining loop slots. */
    nmap[nn++] = loopmap[ln++];
  snap->nent = (uint8_t)nn;
  omap += onent;
  nmap += nn;
  while (omap < nextmap)  /* Copy PC + frame links. */
    *nmap++ = *omap++;
  J->cur.nsnapmap = (uint32_t)(nmap - J->cur.snapmap);
}

typedef struct LoopState {
  jit_State *J;
  IRRef1 *subst;
  MSize sizesubst;
} LoopState;

/* Unroll loop. */
static void loop_unroll(LoopState *lps)
{
  jit_State *J = lps->J;
  IRRef1 phi[LJ_MAX_PHI];
  uint32_t nphi = 0;
  IRRef1 *subst;
  SnapNo onsnap;
  SnapShot *osnap, *loopsnap;
  SnapEntry *loopmap, *psentinel;
  IRRef ins, invar;

  /* Allocate substitution table.
  ** Only non-constant refs in [REF_BIAS,invar) are valid indexes.
  */
  invar = J->cur.nins;
  lps->sizesubst = invar - REF_BIAS;
  lps->subst = lj_mem_newvec(J->L, lps->sizesubst, IRRef1);
  subst = lps->subst - REF_BIAS;
  subst[REF_BASE] = REF_BASE;

  /* LOOP separates the pre-roll from the loop body. */
  emitir_raw(IRTG(IR_LOOP, IRT_NIL), 0, 0);

  /* Grow snapshot buffer and map for copy-substituted snapshots.
  ** Need up to twice the number of snapshots minus #0 and loop snapshot.
  ** Need up to twice the number of entries plus fallback substitutions
  ** from the loop snapshot entries for each new snapshot.
  ** Caveat: both calls may reallocate J->cur.snap and J->cur.snapmap!
  */
  onsnap = J->cur.nsnap;
  lj_snap_grow_buf(J, 2*onsnap-2);
  lj_snap_grow_map(J, J->cur.nsnapmap*2+(onsnap-2)*J->cur.snap[onsnap-1].nent);

  /* The loop snapshot is used for fallback substitutions. */
  loopsnap = &J->cur.snap[onsnap-1];
  loopmap = &J->cur.snapmap[loopsnap->mapofs];
  /* The PC of snapshot #0 and the loop snapshot must match. */
  psentinel = &loopmap[loopsnap->nent];
  lj_assertJ(*psentinel == J->cur.snapmap[J->cur.snap[0].nent],
             "mismatched PC for loop snapshot");
  *psentinel = SNAP(255, 0, 0);  /* Replace PC with temporary sentinel. */

  /* Start substitution with snapshot #1 (#0 is empty for root traces). */
  osnap = &J->cur.snap[1];

  /* Copy and substitute all recorded instructions and snapshots. */
  for (ins = REF_FIRST; ins < invar; ins++) {
    IRIns *ir;
    IRRef op1, op2;

    if (ins >= osnap->ref)  /* Instruction belongs to next snapshot? */
      loop_subst_snap(J, osnap++, loopmap, subst);  /* Copy-substitute it. */

    /* Substitute instruction operands. */
    ir = IR(ins);
    op1 = ir->op1;
    if (!irref_isk(op1)) op1 = subst[op1];
    op2 = ir->op2;
    if (!irref_isk(op2)) op2 = subst[op2];
    if (irm_kind(lj_ir_mode[ir->o]) == IRM_N &&
        op1 == ir->op1 && op2 == ir->op2) {  /* Regular invariant ins? */
      subst[ins] = (IRRef1)ins;  /* Shortcut. */
    } else {
      /* Re-emit substituted instruction to the FOLD/CSE/etc. pipeline. */
      IRType1 t = ir->t;  /* Get this first, since emitir may invalidate ir. */
      IRRef ref = tref_ref(emitir(ir->ot & ~IRT_ISPHI, op1, op2));
      subst[ins] = (IRRef1)ref;
      if (ref != ins) {
        IRIns *irr = IR(ref);
        if (ref < invar) {  /* Loop-carried dependency? */
          /* Potential PHI? */
          if (!irref_isk(ref) && !irt_isphi(irr->t) && !irt_ispri(irr->t)) {
            irt_setphi(irr->t);
            if (nphi >= LJ_MAX_PHI)
              lj_trace_err(J, LJ_TRERR_PHIOV);
            phi[nphi++] = (IRRef1)ref;
          }
          /* Check all loop-carried dependencies for type instability. */
          if (!irt_sametype(t, irr->t)) {
            if (irt_isinteger(t) && irt_isinteger(irr->t))
              continue;
            else if (irt_isnum(t) && irt_isinteger(irr->t))  /* Fix int->num. */
              ref = tref_ref(emitir(IRTN(IR_CONV), ref, IRCONV_NUM_INT));
            else if (irt_isnum(irr->t) && irt_isinteger(t))  /* Fix num->int. */
              ref = tref_ref(emitir(IRTGI(IR_CONV), ref,
                                    IRCONV_INT_NUM|IRCONV_CHECK));
            else
              lj_trace_err(J, LJ_TRERR_TYPEINS);
            subst[ins] = (IRRef1)ref;
            irr = IR(ref);
            goto phiconv;
          }
        } else if (ref != REF_DROP && irr->o == IR_CONV &&
                   ref > invar && irr->op1 < invar) {
          /* May need an extra PHI for a CONV. */
          ref = irr->op1;
          irr = IR(ref);
        phiconv:
          if (ref < invar && !irref_isk(ref) && !irt_isphi(irr->t)) {
            irt_setphi(irr->t);
            if (nphi >= LJ_MAX_PHI)
              lj_trace_err(J, LJ_TRERR_PHIOV);
            phi[nphi++] = (IRRef1)ref;
          }
        }
      }
    }
  }
  if (!irt_isguard(J->guardemit))  /* Drop redundant snapshot. */
    J->cur.nsnapmap = (uint32_t)J->cur.snap[--J->cur.nsnap].mapofs;
  lj_assertJ(J->cur.nsnapmap <= J->sizesnapmap, "bad snapshot map index");
  *psentinel = J->cur.snapmap[J->cur.snap[0].nent];  /* Restore PC. */

  loop_emit_phi(J, subst, phi, nphi, onsnap);
}

/* Undo any partial changes made by the loop optimization. */
static void loop_undo(jit_State *J, IRRef ins, SnapNo nsnap, MSize nsnapmap)
{
  ptrdiff_t i;
  SnapShot *snap = &J->cur.snap[nsnap-1];
  SnapEntry *map = J->cur.snapmap;
  map[snap->mapofs + snap->nent] = map[J->cur.snap[0].nent];  /* Restore PC. */
  J->cur.nsnapmap = (uint32_t)nsnapmap;
  J->cur.nsnap = nsnap;
  J->guardemit.irt = 0;
  lj_ir_rollback(J, ins);
  for (i = 0; i < BPROP_SLOTS; i++) {  /* Remove backprop. cache entries. */
    BPropEntry *bp = &J->bpropcache[i];
    if (bp->val >= ins)
      bp->key = 0;
  }
  for (ins--; ins >= REF_FIRST; ins--) {  /* Remove flags. */
    IRIns *ir = IR(ins);
    irt_clearphi(ir->t);
    irt_clearmark(ir->t);
  }
}

/* Protected callback for loop optimization. */
static TValue *cploop_opt(lua_State *L, lua_CFunction dummy, void *ud)
{
  UNUSED(L); UNUSED(dummy);
  loop_unroll((LoopState *)ud);
  return NULL;
}

/* Loop optimization. */
int lj_opt_loop(jit_State *J)
{
  IRRef nins = J->cur.nins;
  SnapNo nsnap = J->cur.nsnap;
  MSize nsnapmap = J->cur.nsnapmap;
  LoopState lps;
  int errcode;
  lps.J = J;
  lps.subst = NULL;
  lps.sizesubst = 0;
  errcode = lj_vm_cpcall(J->L, NULL, &lps, cploop_opt);
  lj_mem_freevec(J2G(J), lps.subst, lps.sizesubst, IRRef1);
  if (LJ_UNLIKELY(errcode)) {
    lua_State *L = J->L;
    if (errcode == LUA_ERRRUN && tvisnumber(L->top-1)) {  /* Trace error? */
      int32_t e = numberVint(L->top-1);
      switch ((TraceError)e) {
      case LJ_TRERR_TYPEINS:  /* Type instability. */
      case LJ_TRERR_GFAIL:  /* Guard would always fail. */
        /* Unrolling via recording fixes many cases, e.g. a flipped boolean. */
        if (--J->instunroll < 0)  /* But do not unroll forever. */
          break;
        L->top--;  /* Remove error object. */
        loop_undo(J, nins, nsnap, nsnapmap);
        return 1;  /* Loop optimization failed, continue recording. */
      default:
        break;
      }
    }
    lj_err_throw(L, errcode);  /* Propagate all other errors. */
  }
  return 0;  /* Loop optimization is ok. */
}

#undef IR
#undef emitir
#undef emitir_raw

#endif