
tarantool / luajit, build 5973601545 (push via github, igormunkin)
25 Aug 2023 08:22AM UTC coverage: 88.113% (+0.2%) from 87.9%
Revert to trivial pow() optimizations to prevent inaccuracies.

(cherry-picked from commit 96d6d5032)

This patch fixes several behavioural differences between JIT-compiled
code and the interpreter for the power operator in the following ways:
* Drop the folding optimization that rewrites base ^ n as
  base * base * ..., since pow(base, n) is not interchangeable with
  repeated multiplication and depends on the <math.h> implementation
  (see the sketch below).
* Since the internal power function is inaccurate for very large or
  very small powers, it is dropped, and `pow()` from the standard
  library is used instead. To keep JIT behaviour consistent with the
  VM, the narrowing optimization is dropped too, and only trivial
  folding optimizations are used. Also, the two-parameter
  `math_extern2` variant is dropped, since it is no longer used.
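
For illustration only (not part of the original patch): a minimal C
sketch of why the folding was unsafe. Depending on the libm, pow(x, 3)
and x * x * x may differ in the last bits, so substituting one for the
other changes observable results. The value 0.1 is an arbitrary choice.

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
    double x = 0.1;
    double folded = x * x * x;  /* what the dropped base^3 folding produced */
    double libm = pow(x, 3.0);  /* what the interpreter computes */
    printf("x*x*x    = %.17g\n", folded);
    printf("pow(x,3) = %.17g\n", libm);
    printf("equal: %d\n", folded == libm);  /* may print 0 on some libms */
    return 0;
  }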

This also fixes failures of the [220/502] lib/string/format/num.lua
test [1] from the LuaJIT test suite.

[1]: https://www.exploringbinary.com/incorrect-floating-point-to-decimal-conversions/

Sergey Kaplun:
* added the description and the test for the problem

Part of tarantool/tarantool#8825

Reviewed-by: Maxim Kokryashkin <m.kokryashkin@tarantool.org>
Reviewed-by: Sergey Bronnikov <sergeyb@tarantool.org>
Signed-off-by: Igor Munkin <imun@tarantool.org>

5322 of 5963 branches covered (89.25%)

Branch coverage included in aggregate %.

11 of 11 new or added lines in 5 files covered (100.0%)

20443 of 23278 relevant lines covered (87.82%)

1292502.03 hits per line

Source File: /src/lj_opt_loop.c (95.55% covered)
/*
** LOOP: Loop Optimizations.
** Copyright (C) 2005-2017 Mike Pall. See Copyright Notice in luajit.h
*/

#define lj_opt_loop_c
#define LUA_CORE

#include "lj_obj.h"

#if LJ_HASJIT

#include "lj_err.h"
#include "lj_buf.h"
#include "lj_ir.h"
#include "lj_jit.h"
#include "lj_iropt.h"
#include "lj_trace.h"
#include "lj_snap.h"
#include "lj_vm.h"

/* Loop optimization:
**
** Traditional Loop-Invariant Code Motion (LICM) splits the instructions
** of a loop into invariant and variant instructions. The invariant
** instructions are hoisted out of the loop and only the variant
** instructions remain inside the loop body.
**
** Unfortunately LICM is mostly useless for compiling dynamic languages.
** The IR has many guards and most of the subsequent instructions are
** control-dependent on them. The first non-hoistable guard would
** effectively prevent hoisting of all subsequent instructions.
**
** That's why we use a special form of unrolling using copy-substitution,
** combined with redundancy elimination:
**
** The recorded instruction stream is re-emitted to the compiler pipeline
** with substituted operands. The substitution table is filled with the
** refs returned by re-emitting each instruction. This can be done
** on-the-fly, because the IR is in strict SSA form, where every ref is
** defined before its use.
**
** This approach generates two code sections, separated by the LOOP
** instruction:
**
** 1. The recorded instructions form a kind of pre-roll for the loop. It
** contains a mix of invariant and variant instructions and performs
** exactly one loop iteration (but not necessarily the 1st iteration).
**
** 2. The loop body contains only the variant instructions and performs
** all remaining loop iterations.
**
** At first sight that looks like a waste of space, because the variant
** instructions are present twice. But the key insight is that the
** pre-roll honors the control-dependencies for *both* the pre-roll itself
** *and* the loop body!
**
** It also means one doesn't have to explicitly model control-dependencies
** (which, BTW, wouldn't help LICM much). And it's much easier to
** integrate sparse snapshotting with this approach.
**
** One of the nicest aspects of this approach is that all of the
** optimizations of the compiler pipeline (FOLD, CSE, FWD, etc.) can be
** reused with only minor restrictions (e.g. one should not fold
** instructions across loop-carried dependencies).
**
** But in general all optimizations can be applied which only need to look
** backwards into the generated instruction stream. At any point in time
** during the copy-substitution process this contains both a static loop
** iteration (the pre-roll) and a dynamic one (from the to-be-copied
** instruction up to the end of the partial loop body).
**
** Since control-dependencies are implicitly kept, CSE also applies to all
** kinds of guards. The major advantage is that all invariant guards can
** be hoisted, too.
**
** Load/store forwarding works across loop iterations, too. This is
** important if loop-carried dependencies are kept in upvalues or tables.
** E.g. 'self.idx = self.idx + 1' deep down in some OO-style method may
** become a forwarded loop-recurrence after inlining.
**
** Since the IR is in SSA form, loop-carried dependencies have to be
** modeled with PHI instructions. The potential candidates for PHIs are
** collected on-the-fly during copy-substitution. After eliminating the
** redundant ones, PHI instructions are emitted *below* the loop body.
**
** Note that this departure from traditional SSA form doesn't change the
** semantics of the PHI instructions themselves. But it greatly simplifies
** on-the-fly generation of the IR and the machine code.
*/

/* Some local macros to save typing. Undef'd at the end. */
#define IR(ref)                (&J->cur.ir[(ref)])

/* Pass IR on to next optimization in chain (FOLD). */
#define emitir(ot, a, b)        (lj_ir_set(J, (ot), (a), (b)), lj_opt_fold(J))

/* Emit raw IR without passing through optimizations. */
#define emitir_raw(ot, a, b)        (lj_ir_set(J, (ot), (a), (b)), lj_ir_emit(J))

/* -- PHI elimination ----------------------------------------------------- */

/* Emit or eliminate collected PHIs. */
static void loop_emit_phi(jit_State *J, IRRef1 *subst, IRRef1 *phi, IRRef nphi,
                          SnapNo onsnap)
{
  int passx = 0;
  IRRef i, j, nslots;
  IRRef invar = J->chain[IR_LOOP];
  /* Pass #1: mark redundant and potentially redundant PHIs. */
  for (i = 0, j = 0; i < nphi; i++) {
    IRRef lref = phi[i];
    IRRef rref = subst[lref];
    if (lref == rref || rref == REF_DROP) {  /* Invariants are redundant. */
      irt_clearphi(IR(lref)->t);
    } else {
      phi[j++] = (IRRef1)lref;
      if (!(IR(rref)->op1 == lref || IR(rref)->op2 == lref)) {
        /* Quick check for simple recurrences failed, need pass2. */
        irt_setmark(IR(lref)->t);
        passx = 1;
      }
    }
  }
  nphi = j;
  /* Pass #2: traverse variant part and clear marks of non-redundant PHIs. */
  if (passx) {
    SnapNo s;
    for (i = J->cur.nins-1; i > invar; i--) {
      IRIns *ir = IR(i);
      if (!irref_isk(ir->op2)) irt_clearmark(IR(ir->op2)->t);
      if (!irref_isk(ir->op1)) {
        irt_clearmark(IR(ir->op1)->t);
        if (ir->op1 < invar &&
            ir->o >= IR_CALLN && ir->o <= IR_CARG) {  /* ORDER IR */
          ir = IR(ir->op1);
          while (ir->o == IR_CARG) {
            if (!irref_isk(ir->op2)) irt_clearmark(IR(ir->op2)->t);
            if (irref_isk(ir->op1)) break;
            ir = IR(ir->op1);
            irt_clearmark(ir->t);
          }
        }
      }
    }
    for (s = J->cur.nsnap-1; s >= onsnap; s--) {
      SnapShot *snap = &J->cur.snap[s];
      SnapEntry *map = &J->cur.snapmap[snap->mapofs];
      MSize n, nent = snap->nent;
      for (n = 0; n < nent; n++) {
        IRRef ref = snap_ref(map[n]);
        if (!irref_isk(ref)) irt_clearmark(IR(ref)->t);
      }
    }
  }
  /* Pass #3: add PHIs for variant slots without a corresponding SLOAD. */
  nslots = J->baseslot+J->maxslot;
  for (i = 1; i < nslots; i++) {
    IRRef ref = tref_ref(J->slot[i]);
    while (!irref_isk(ref) && ref != subst[ref]) {
      IRIns *ir = IR(ref);
      irt_clearmark(ir->t);  /* Unmark potential uses, too. */
      if (irt_isphi(ir->t) || irt_ispri(ir->t))
        break;
      irt_setphi(ir->t);
      if (nphi >= LJ_MAX_PHI)
        lj_trace_err(J, LJ_TRERR_PHIOV);
      phi[nphi++] = (IRRef1)ref;
      ref = subst[ref];
      if (ref > invar)
        break;
    }
  }
  /* Pass #4: propagate non-redundant PHIs. */
  while (passx) {
    passx = 0;
    for (i = 0; i < nphi; i++) {
      IRRef lref = phi[i];
      IRIns *ir = IR(lref);
      if (!irt_ismarked(ir->t)) {  /* Propagate only from unmarked PHIs. */
        IRIns *irr = IR(subst[lref]);
        if (irt_ismarked(irr->t)) {  /* Right ref points to other PHI? */
          irt_clearmark(irr->t);  /* Mark that PHI as non-redundant. */
          passx = 1;  /* Retry. */
        }
      }
    }
  }
  /* Pass #5: emit PHI instructions or eliminate PHIs. */
  for (i = 0; i < nphi; i++) {
    IRRef lref = phi[i];
    IRIns *ir = IR(lref);
    if (!irt_ismarked(ir->t)) {  /* Emit PHI if not marked. */
      IRRef rref = subst[lref];
      if (rref > invar)
        irt_setphi(IR(rref)->t);
      emitir_raw(IRT(IR_PHI, irt_type(ir->t)), lref, rref);
    } else {  /* Otherwise eliminate PHI. */
      irt_clearmark(ir->t);
      irt_clearphi(ir->t);
    }
  }
}

/* -- Loop unrolling using copy-substitution ------------------------------ */

/* Copy-substitute snapshot. */
static void loop_subst_snap(jit_State *J, SnapShot *osnap,
                            SnapEntry *loopmap, IRRef1 *subst)
{
  SnapEntry *nmap, *omap = &J->cur.snapmap[osnap->mapofs];
  SnapEntry *nextmap = &J->cur.snapmap[snap_nextofs(&J->cur, osnap)];
  MSize nmapofs;
  MSize on, ln, nn, onent = osnap->nent;
  BCReg nslots = osnap->nslots;
  SnapShot *snap = &J->cur.snap[J->cur.nsnap];
  if (irt_isguard(J->guardemit)) {  /* Guard in between? */
    nmapofs = J->cur.nsnapmap;
    J->cur.nsnap++;  /* Add new snapshot. */
  } else {  /* Otherwise overwrite previous snapshot. */
    snap--;
    nmapofs = snap->mapofs;
  }
  J->guardemit.irt = 0;
  /* Set up new snapshot. */
  snap->mapofs = (uint32_t)nmapofs;
  snap->ref = (IRRef1)J->cur.nins;
  snap->mcofs = 0;
  snap->nslots = nslots;
  snap->topslot = osnap->topslot;
  snap->count = 0;
  nmap = &J->cur.snapmap[nmapofs];
  /* Substitute snapshot slots. */
  on = ln = nn = 0;
  while (on < onent) {
    SnapEntry osn = omap[on], lsn = loopmap[ln];
    if (snap_slot(lsn) < snap_slot(osn)) {  /* Copy slot from loop map. */
      nmap[nn++] = lsn;
      ln++;
    } else {  /* Copy substituted slot from snapshot map. */
      if (snap_slot(lsn) == snap_slot(osn)) ln++;  /* Shadowed loop slot. */
      if (!irref_isk(snap_ref(osn)))
        osn = snap_setref(osn, subst[snap_ref(osn)]);
      nmap[nn++] = osn;
      on++;
    }
  }
  while (snap_slot(loopmap[ln]) < nslots)  /* Copy remaining loop slots. */
    nmap[nn++] = loopmap[ln++];
  snap->nent = (uint8_t)nn;
  omap += onent;
  nmap += nn;
  while (omap < nextmap)  /* Copy PC + frame links. */
    *nmap++ = *omap++;
  J->cur.nsnapmap = (uint32_t)(nmap - J->cur.snapmap);
}

typedef struct LoopState {
  jit_State *J;
  IRRef1 *subst;
  MSize sizesubst;
} LoopState;

/* Unroll loop. */
static void loop_unroll(LoopState *lps)
{
  jit_State *J = lps->J;
  IRRef1 phi[LJ_MAX_PHI];
  uint32_t nphi = 0;
  IRRef1 *subst;
  SnapNo onsnap;
  SnapShot *osnap, *loopsnap;
  SnapEntry *loopmap, *psentinel;
  IRRef ins, invar;

  /* Allocate substitution table.
  ** Only non-constant refs in [REF_BIAS,invar) are valid indexes.
  */
  invar = J->cur.nins;
  lps->sizesubst = invar - REF_BIAS;
  lps->subst = lj_mem_newvec(J->L, lps->sizesubst, IRRef1);
  subst = lps->subst - REF_BIAS;
  subst[REF_BASE] = REF_BASE;

  /* LOOP separates the pre-roll from the loop body. */
  emitir_raw(IRTG(IR_LOOP, IRT_NIL), 0, 0);

  /* Grow snapshot buffer and map for copy-substituted snapshots.
  ** Need up to twice the number of snapshots minus #0 and loop snapshot.
  ** Need up to twice the number of entries plus fallback substitutions
  ** from the loop snapshot entries for each new snapshot.
  ** Caveat: both calls may reallocate J->cur.snap and J->cur.snapmap!
  */
  onsnap = J->cur.nsnap;
  lj_snap_grow_buf(J, 2*onsnap-2);
  lj_snap_grow_map(J, J->cur.nsnapmap*2+(onsnap-2)*J->cur.snap[onsnap-1].nent);

  /* The loop snapshot is used for fallback substitutions. */
  loopsnap = &J->cur.snap[onsnap-1];
  loopmap = &J->cur.snapmap[loopsnap->mapofs];
  /* The PC of snapshot #0 and the loop snapshot must match. */
  psentinel = &loopmap[loopsnap->nent];
  lj_assertJ(*psentinel == J->cur.snapmap[J->cur.snap[0].nent],
             "mismatched PC for loop snapshot");
  *psentinel = SNAP(255, 0, 0);  /* Replace PC with temporary sentinel. */

  /* Start substitution with snapshot #1 (#0 is empty for root traces). */
  osnap = &J->cur.snap[1];

  /* Copy and substitute all recorded instructions and snapshots. */
  for (ins = REF_FIRST; ins < invar; ins++) {
    IRIns *ir;
    IRRef op1, op2;

    if (ins >= osnap->ref)  /* Instruction belongs to next snapshot? */
      loop_subst_snap(J, osnap++, loopmap, subst);  /* Copy-substitute it. */

    /* Substitute instruction operands. */
    ir = IR(ins);
    op1 = ir->op1;
    if (!irref_isk(op1)) op1 = subst[op1];
    op2 = ir->op2;
    if (!irref_isk(op2)) op2 = subst[op2];
    if (irm_kind(lj_ir_mode[ir->o]) == IRM_N &&
        op1 == ir->op1 && op2 == ir->op2) {  /* Regular invariant ins? */
      subst[ins] = (IRRef1)ins;  /* Shortcut. */
    } else {
      /* Re-emit substituted instruction to the FOLD/CSE/etc. pipeline. */
      IRType1 t = ir->t;  /* Get this first, since emitir may invalidate ir. */
      IRRef ref = tref_ref(emitir(ir->ot & ~IRT_ISPHI, op1, op2));
      subst[ins] = (IRRef1)ref;
      if (ref != ins) {
        IRIns *irr = IR(ref);
        if (ref < invar) {  /* Loop-carried dependency? */
          /* Potential PHI? */
          if (!irref_isk(ref) && !irt_isphi(irr->t) && !irt_ispri(irr->t)) {
            irt_setphi(irr->t);
            if (nphi >= LJ_MAX_PHI)
              lj_trace_err(J, LJ_TRERR_PHIOV);
            phi[nphi++] = (IRRef1)ref;
          }
          /* Check all loop-carried dependencies for type instability. */
          if (!irt_sametype(t, irr->t)) {
            if (irt_isinteger(t) && irt_isinteger(irr->t))
              continue;
            else if (irt_isnum(t) && irt_isinteger(irr->t))  /* Fix int->num. */
              ref = tref_ref(emitir(IRTN(IR_CONV), ref, IRCONV_NUM_INT));
            else if (irt_isnum(irr->t) && irt_isinteger(t))  /* Fix num->int. */
              ref = tref_ref(emitir(IRTGI(IR_CONV), ref,
                                    IRCONV_INT_NUM|IRCONV_CHECK));
            else
              lj_trace_err(J, LJ_TRERR_TYPEINS);
            subst[ins] = (IRRef1)ref;
            irr = IR(ref);
            goto phiconv;
          }
        } else if (ref != REF_DROP && irr->o == IR_CONV &&
                   ref > invar && irr->op1 < invar) {
          /* May need an extra PHI for a CONV. */
          ref = irr->op1;
          irr = IR(ref);
        phiconv:
          if (ref < invar && !irref_isk(ref) && !irt_isphi(irr->t)) {
            irt_setphi(irr->t);
            if (nphi >= LJ_MAX_PHI)
              lj_trace_err(J, LJ_TRERR_PHIOV);
            phi[nphi++] = (IRRef1)ref;
          }
        }
      }
    }
  }
  if (!irt_isguard(J->guardemit))  /* Drop redundant snapshot. */
    J->cur.nsnapmap = (uint32_t)J->cur.snap[--J->cur.nsnap].mapofs;
  lj_assertJ(J->cur.nsnapmap <= J->sizesnapmap, "bad snapshot map index");
  *psentinel = J->cur.snapmap[J->cur.snap[0].nent];  /* Restore PC. */

  loop_emit_phi(J, subst, phi, nphi, onsnap);
}

/* Undo any partial changes made by the loop optimization. */
static void loop_undo(jit_State *J, IRRef ins, SnapNo nsnap, MSize nsnapmap)
{
  ptrdiff_t i;
  SnapShot *snap = &J->cur.snap[nsnap-1];
  SnapEntry *map = J->cur.snapmap;
  map[snap->mapofs + snap->nent] = map[J->cur.snap[0].nent];  /* Restore PC. */
  J->cur.nsnapmap = (uint32_t)nsnapmap;
  J->cur.nsnap = nsnap;
  J->guardemit.irt = 0;
  lj_ir_rollback(J, ins);
  for (i = 0; i < BPROP_SLOTS; i++) {  /* Remove backprop. cache entries. */
    BPropEntry *bp = &J->bpropcache[i];
    if (bp->val >= ins)
      bp->key = 0;
  }
  for (ins--; ins >= REF_FIRST; ins--) {  /* Remove flags. */
    IRIns *ir = IR(ins);
    irt_clearphi(ir->t);
    irt_clearmark(ir->t);
  }
}

/* Protected callback for loop optimization. */
static TValue *cploop_opt(lua_State *L, lua_CFunction dummy, void *ud)
{
  UNUSED(L); UNUSED(dummy);
  loop_unroll((LoopState *)ud);
  return NULL;
}

/* Loop optimization. */
int lj_opt_loop(jit_State *J)
{
  IRRef nins = J->cur.nins;
  SnapNo nsnap = J->cur.nsnap;
  MSize nsnapmap = J->cur.nsnapmap;
  LoopState lps;
  int errcode;
  lps.J = J;
  lps.subst = NULL;
  lps.sizesubst = 0;
  errcode = lj_vm_cpcall(J->L, NULL, &lps, cploop_opt);
  lj_mem_freevec(J2G(J), lps.subst, lps.sizesubst, IRRef1);
  if (LJ_UNLIKELY(errcode)) {
    lua_State *L = J->L;
    if (errcode == LUA_ERRRUN && tvisnumber(L->top-1)) {  /* Trace error? */
      int32_t e = numberVint(L->top-1);
      switch ((TraceError)e) {
      case LJ_TRERR_TYPEINS:  /* Type instability. */
      case LJ_TRERR_GFAIL:  /* Guard would always fail. */
        /* Unrolling via recording fixes many cases, e.g. a flipped boolean. */
        if (--J->instunroll < 0)  /* But do not unroll forever. */
          break;
        L->top--;  /* Remove error object. */
        loop_undo(J, nins, nsnap, nsnapmap);
        return 1;  /* Loop optimization failed, continue recording. */
      default:
        break;
      }
    }
    lj_err_throw(L, errcode);  /* Propagate all other errors. */
  }
  return 0;  /* Loop optimization is ok. */
}

#undef IR
#undef emitir
#undef emitir_raw

#endif
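
Appendix (editor's illustration, not part of lj_opt_loop.c): the header
comment above describes copy-substitution abstractly, so here is a
minimal toy sketch of the same idea on an invented three-address IR.
The opcodes, the Ins struct, the emit() helper, and the hard-wired load
forwarding below are all simplified stand-ins for LuaJIT's IRIns
buffer, ref biasing, and FOLD/CSE/FWD pipeline.

#include <stdio.h>

enum { OP_CONST, OP_SLOAD, OP_ADD };  /* invented toy opcodes */

typedef struct Ins { int op, a, b; } Ins;  /* a, b: refs or immediates */

static Ins ir[64];  /* instruction buffer; a ref is an index into it */
static int nins = 0;

/* Append an instruction; the linear search is a stand-in for CSE. */
static int emit(int op, int a, int b)
{
  int i;
  for (i = 0; i < nins; i++)
    if (ir[i].op == op && ir[i].a == a && ir[i].b == b) return i;
  ir[nins].op = op; ir[nins].a = a; ir[nins].b = b;
  return nins++;
}

int main(void)
{
  int subst[64], ins, invar;

  /* Pre-roll: one recorded iteration of  i = i + 4. */
  int step = emit(OP_CONST, 4, 0);    /* invariant constant */
  int i0   = emit(OP_SLOAD, 0, 0);    /* load of slot 0: incoming i */
  int i1   = emit(OP_ADD, i0, step);  /* the variant update of i */
  invar = nins;                       /* the LOOP marker would sit here */

  /* Copy-substitute the pre-roll below the LOOP marker. */
  for (ins = 0; ins < invar; ins++) {
    Ins cur = ir[ins];
    if (cur.op == OP_CONST) {
      subst[ins] = ins;                   /* constants stay invariant */
    } else if (cur.op == OP_SLOAD) {
      subst[ins] = i1;                    /* forwarding: slot 0 now holds i1 */
    } else {
      int a = subst[cur.a], b = subst[cur.b];
      if (a == cur.a && b == cur.b)
        subst[ins] = ins;                 /* invariant: hoisted for free */
      else
        subst[ins] = emit(cur.op, a, b);  /* variant: re-emitted */
    }
  }

  /* i1 maps to a new ref, so it is loop-carried: a PHI(i1, subst[i1])
  ** would be emitted below the loop body. */
  printf("PHI(%d, %d)\n", i1, subst[i1]);  /* prints PHI(2, 3) */
  return 0;
}

Note how the invariant constant maps to itself (it stays in the
pre-roll, i.e. it is hoisted for free), while the variant add is
re-emitted below the marker; the pair (i1, subst[i1]) is exactly the
kind of PHI candidate that loop_emit_phi() collects and prunes.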