## /src/tuned_models.jl

## TYPES AND CONSTRUCTOR

mutable struct DeterministicTunedModel{T,M<:Deterministic} <: MLJBase.Deterministic
    model::M
    tuning::T  # tuning strategy
    resampling # resampling strategy
    measure
    weights::Union{Nothing,Vector{<:Real}}
    operation
    range
    selection_heuristic
    train_best::Bool
    repeats::Int
    n::Union{Int,Nothing}
    acceleration::AbstractResource
    acceleration_resampling::AbstractResource
    check_measure::Bool
end

mutable struct ProbabilisticTunedModel{T,M<:Probabilistic} <: MLJBase.Probabilistic
    model::M
    tuning::T  # tuning strategy
    resampling # resampling strategy
    measure
    weights::Union{Nothing,AbstractVector{<:Real}}
    operation
    range
    selection_heuristic
    train_best::Bool
    repeats::Int
    n::Union{Int,Nothing}
    acceleration::AbstractResource
    acceleration_resampling::AbstractResource
    check_measure::Bool
end

const EitherTunedModel{T,M} =
    Union{DeterministicTunedModel{T,M},ProbabilisticTunedModel{T,M}}

MLJBase.is_wrapper(::Type{<:EitherTunedModel}) = true

#todo update:
"""
    tuned_model = TunedModel(; model=nothing,
                             tuning=Grid(),
                             resampling=Holdout(),
                             measure=nothing,
                             weights=nothing,
                             repeats=1,
                             operation=predict,
                             range=nothing,
                             selection_heuristic=NaiveSelection(),
                             n=default_n(tuning, range),
                             train_best=true,
                             acceleration=default_resource(),
                             acceleration_resampling=CPU1(),
                             check_measure=true)

Construct a model wrapper for hyperparameter optimization of a
supervised learner.

Calling `fit!(mach)` on a machine `mach = machine(tuned_model, X, y)` or
`mach = machine(tuned_model, X, y, w)` will:

- Instigate a search, over clones of `model`, with the hyperparameter
  mutations specified by `range`, for a model optimizing the specified
  `measure`, using performance evaluations carried out using the
  specified `tuning` strategy and `resampling` strategy.

- Fit an internal machine, based on the optimal model
  `fitted_params(mach).best_model`, wrapping the optimal `model`
  object in *all* the provided data `X`, `y` (and `w`, if provided).
  Calling `predict(mach, Xnew)` then returns predictions on `Xnew` of
  this internal machine. The final train can be suppressed by setting
  `train_best=false`.

The `range` objects supported depend on the `tuning` strategy
specified. Query the tuning strategy's docstring for details. To
optimize over an explicit list `v` of models of the same type, use
`tuning=Explicit()` and specify `model=v[1]` and `range=v`, as in the
sketch below.

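For example, a minimal sketch, assuming `model1`, `model2` and
`model3` are instances of a common `Deterministic` model type and
`rms` is an appropriate measure for the target:

    v = [model1, model2, model3]
    tuned_model = TunedModel(model=v[1],
                             tuning=Explicit(),
                             range=v,
                             resampling=CV(nfolds=6),
                             measure=rms)
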
The number of models searched is specified by `n`. If unspecified,
then `MLJTuning.default_n(tuning, range)` is used. When `n` is
increased and `fit!(mach)` is called again, the old search history is
reinstated and the search continues where it left off, as sketched
below.

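For example, a sketch assuming `mach = machine(tuned_model, X, y)` has
already been trained once:

    tuned_model.n = 50   # increase the budget (`mach.model === tuned_model`)
    fit!(mach)           # restores the old history and continues the search
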
If `measure` supports weights (`supports_weights(measure) == true`)
then any `weights` specified will be passed to the measure. If more
than one `measure` is specified, then only the first is optimized
(unless the strategy is multi-objective), but the performance against
every specified measure is computed and reported in
`report(mach).best_history_entry` and other relevant attributes of the
generated report, as in the sketch below.

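For example, a sketch, with `model` some supervised model instance and
`r` a valid range object for it:

    tuned_model = TunedModel(model=model, range=r, measure=[rms, mae])

Here `rms` alone drives the optimization, but the `mae` measurements
are also computed and recorded in each history entry.
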
Specify `repeats > 1` for repeated resampling per model
evaluation. See [`evaluate!`](@ref) options for details.

*Important.* If a custom `measure` is used, and the measure is
a score, rather than a loss, be sure to check that
`MLJ.orientation(measure) == :score` to ensure maximization of the
measure, rather than minimization. Override an incorrect value with
`MLJ.orientation(::typeof(measure)) = :score`.

In the case of two-parameter tuning, a Plots.jl plot of performance
estimates is returned by `plot(mach)` or `heatmap(mach)`.

Once a tuning machine `mach` has been trained as above,
`fitted_params(mach)` has these keys/values:

key                 | value
--------------------|--------------------------------------------------
`best_model`        | optimal model instance
`best_fitted_params`| learned parameters of the optimal model

The named tuple `report(mach)` includes these keys/values:

key                 | value
--------------------|--------------------------------------------------
`best_model`        | optimal model instance
`best_history_entry`| corresponding entry in the history, including performance estimate
`best_report`       | report generated by fitting the optimal model to all data
`history`           | tuning strategy-specific history of all evaluations

plus other key/value pairs specific to the `tuning` strategy.

### Summary of keyword arguments

- `model`: `Supervised` model prototype that is cloned and mutated to
  generate models for evaluation

- `tuning=Grid()`: tuning strategy to be applied (eg, `RandomSearch()`)

- `resampling=Holdout()`: resampling strategy (eg, `Holdout()`, `CV()`,
  `StratifiedCV()`) to be applied in performance evaluations

- `measure`: measure or measures to be applied in performance
  evaluations; only the first is used in optimization (unless the
  strategy is multi-objective) but all are reported to the history

- `weights`: sample weights to be passed to the measure(s) in
  performance evaluations, if supported

- `repeats=1`: for generating train/test sets multiple times in
  resampling; see [`evaluate!`](@ref) for details

- `operation=predict`: operation to be applied to each fitted model;
  usually `predict`, but `predict_mean`, `predict_median` or
  `predict_mode` can be used for `Probabilistic` models, if
  the specified measures are `Deterministic`

- `range`: range object; the tuning strategy documentation describes
  supported types

- `selection_heuristic`: the rule determining how the best model is
  decided. According to the default heuristic,
  `NaiveSelection()`, `measure` (or the first
  element of `measure`) is evaluated for each resample and these
  per-fold measurements are aggregated. The model with the lowest
  (resp. highest) aggregate is chosen if the measure is a `:loss`
  (resp. a `:score`).

- `n`: number of iterations (ie, models to be evaluated); set by the
  tuning strategy if left unspecified

- `train_best=true`: whether to train the optimal model

- `acceleration=default_resource()`: mode of parallelization for
  tuning strategies that support this

- `acceleration_resampling=CPU1()`: mode of parallelization for
  resampling

- `check_measure=true`: whether to check that `measure` is compatible
  with the specified `model` and `operation`

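### Example

A minimal, self-contained sketch. The model and hyperparameter names
here are illustrative (any `Deterministic` model with a numeric
hyperparameter would do) and the data is synthetic:

    using MLJ
    tree = DecisionTreeRegressor()  # assumed available, eg, via MLJ's `@load`
    X = table(rand(100, 3))
    y = rand(100)
    r = range(tree, :max_depth, lower=1, upper=10)
    tuned_model = TunedModel(model=tree,
                             tuning=Grid(resolution=10),
                             resampling=CV(nfolds=6),
                             measure=rms,
                             range=r)
    mach = machine(tuned_model, X, y)
    fit!(mach)
    fitted_params(mach).best_model  # optimal model instance
    predict(mach, X)                # predictions of the retrained best model
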
"""
function TunedModel(; model=nothing,
                    tuning=Grid(),
                    resampling=MLJBase.Holdout(),
                    measures=nothing,
                    measure=measures,
                    weights=nothing,
                    operation=predict,
                    ranges=nothing,
                    range=ranges,
                    selection_heuristic=NaiveSelection(),
                    train_best=true,
                    repeats=1,
                    n=nothing,
                    acceleration=default_resource(),
                    acceleration_resampling=CPU1(),
                    check_measure=true)

    range === nothing && error("You need to specify `range=...`.")
    model === nothing && error("You need to specify model=... .\n"*
                               "If `tuning=Explicit()`, any model in the "*
                               "range will do. ")

    if model isa Deterministic
        tuned_model = DeterministicTunedModel(model, tuning, resampling,
                                              measure, weights, operation,
                                              range, selection_heuristic,
                                              train_best, repeats, n,
                                              acceleration,
                                              acceleration_resampling,
                                              check_measure)
    elseif model isa Probabilistic
        tuned_model = ProbabilisticTunedModel(model, tuning, resampling,
                                              measure, weights, operation,
                                              range, selection_heuristic,
                                              train_best, repeats, n,
                                              acceleration,
                                              acceleration_resampling,
                                              check_measure)
    else
        error("Only `Deterministic` and `Probabilistic` "*
              "model types are supported.")
    end

    message = clean!(tuned_model)
    isempty(message) || @info message

    return tuned_model
end

function MLJBase.clean!(tuned_model::EitherTunedModel)
    message = ""
    if tuned_model.measure === nothing
        tuned_model.measure = default_measure(tuned_model.model)
        if tuned_model.measure === nothing
            error("Unable to deduce a default measure for specified model. "*
                  "You must specify `measure=...`. ")
        else
            message *= "No measure specified. "*
                "Setting measure=$(tuned_model.measure). "
        end
    end

    if !supports_heuristic(tuned_model.tuning, tuned_model.selection_heuristic)
        message *= "`selection_heuristic=$(tuned_model.selection_heuristic)` "*
            "is not supported by $(tuned_model.tuning). Resetting to "*
            "`NaiveSelection()`. "
        tuned_model.selection_heuristic = NaiveSelection()
    end

    if (tuned_model.acceleration isa CPUProcesses &&
        tuned_model.acceleration_resampling isa CPUProcesses)
        message *=
            "The combination acceleration=$(tuned_model.acceleration) and"*
            " acceleration_resampling=$(tuned_model.acceleration_resampling)"*
            " is not generally optimal. You may want to consider setting"*
            " `acceleration = CPUProcesses()` and"*
            " `acceleration_resampling = CPUThreads()`."
    end

    if (tuned_model.acceleration isa CPUThreads &&
        tuned_model.acceleration_resampling isa CPUProcesses)
        message *=
            "The combination acceleration=$(tuned_model.acceleration) and"*
            " acceleration_resampling=$(tuned_model.acceleration_resampling)"*
            " isn't supported.\nResetting to"*
            " `acceleration = CPUProcesses()` and"*
            " `acceleration_resampling = CPUThreads()`."

        tuned_model.acceleration = CPUProcesses()
        tuned_model.acceleration_resampling = CPUThreads()
    end

    tuned_model.acceleration =
        _process_accel_settings(tuned_model.acceleration)

    return message
end


## FIT AND UPDATE METHODS

# A *metamodel* is either a `Model` instance, `model`, or a tuple
# `(model, s)`, where `s` is extra data associated with `model` that
# the tuning strategy implementation wants available to the `result`
# method for recording in the history.

_first(m::MLJBase.Model) = m
_last(m::MLJBase.Model) = nothing
_first(m::Tuple{Model,Any}) = first(m)
_last(m::Tuple{Model,Any}) = last(m)
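
# For example (an illustrative sketch): if a strategy generates the
# metamodel `(m, 42)` for some model `m`, then `_first((m, 42)) === m` and
# `_last((m, 42)) == 42`, while for a bare model, `_first(m) === m` and
# `_last(m) === nothing`.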

# returns a (model, result) pair for the history:
function event(metamodel,
               resampling_machine,
               verbosity,
               tuning,
               history,
               state)
    model = _first(metamodel)
    metadata = _last(metamodel)
    resampling_machine.model.model = model
    verb = (verbosity >= 2 ? verbosity - 3 : verbosity - 1)
    fit!(resampling_machine, verbosity=verb)
    E = evaluate(resampling_machine)
    entry0 = (model       = model,
              measure     = E.measure,
              measurement = E.measurement,
              per_fold    = E.per_fold,
              metadata    = metadata)
    entry = merge(entry0, extras(tuning, history, state, E))
    if verbosity > 2
        println("hyperparameters: $(params(model))")
    end

    if verbosity > 1
        println("result: $entry")
    end
    return entry
end

function assemble_events(metamodels,
                         resampling_machine,
                         verbosity,
                         tuning,
                         history,
                         state,
                         acceleration::CPU1)

    n_metamodels = length(metamodels)

    p = Progress(n_metamodels,
                 dt = 0,
                 desc = "Evaluating over $(n_metamodels) metamodels: ",
                 barglyphs = BarGlyphs("[=> ]"),
                 barlen = 25,
                 color = :yellow)

    verbosity < 1 || update!(p, 0)

    entries = map(metamodels) do m
        r = event(m, resampling_machine, verbosity, tuning, history, state)
        verbosity < 1 || begin
            p.counter += 1
            ProgressMeter.updateProgress!(p)
        end
        r
    end

    return entries
end

function assemble_events(metamodels,
                         resampling_machine,
                         verbosity,
                         tuning,
                         history,
                         state,
                         acceleration::CPUProcesses)

    n_metamodels = length(metamodels)

    entries = @sync begin
        channel = RemoteChannel(()->Channel{Bool}(min(1000, n_metamodels)), 1)
        p = Progress(n_metamodels,
                     dt = 0,
                     desc = "Evaluating over $n_metamodels metamodels: ",
                     barglyphs = BarGlyphs("[=> ]"),
                     barlen = 25,
                     color = :yellow)

        # printing the progress bar
        verbosity < 1 || begin
            update!(p, 0)
            @async while take!(channel)
                p.counter += 1
                ProgressMeter.updateProgress!(p)
            end
        end

        ret = @distributed vcat for m in metamodels
            r = event(m, resampling_machine, verbosity, tuning, history, state)
            verbosity < 1 || put!(channel, true)
            r
        end
        verbosity < 1 || put!(channel, false)
        ret
    end

    return entries
end

@static if VERSION >= v"1.3.0-DEV.573"

# one machine for each thread; cycle through available threads:
function assemble_events(metamodels,
                         resampling_machine,
                         verbosity,
                         tuning,
                         history,
                         state,
                         acceleration::CPUThreads)

    if Threads.nthreads() == 1
        return assemble_events(metamodels,
                               resampling_machine,
                               verbosity,
                               tuning,
                               history,
                               state,
                               CPU1())
    end

    n_metamodels = length(metamodels)
    ntasks = acceleration.settings
    partitions = chunks(1:n_metamodels, ntasks)
    #tasks = Vector{Task}(undef, length(partitions))
    entries = Vector(undef, length(partitions))
    p = Progress(n_metamodels,
                 dt = 0,
                 desc = "Evaluating over $(n_metamodels) metamodels: ",
                 barglyphs = BarGlyphs("[=> ]"),
                 barlen = 25,
                 color = :yellow)
    ch = Channel{Bool}(min(1000, length(partitions)))

    @sync begin
        # printing the progress bar
        verbosity < 1 || begin
            update!(p, 0)
            @async while take!(ch)
                p.counter += 1
                ProgressMeter.updateProgress!(p)
            end
        end

        # one resampling machine per task:
        machs = [resampling_machine,
                 [machine(Resampler(
                     model         = resampling_machine.model.model,
                     resampling    = resampling_machine.model.resampling,
                     measure       = resampling_machine.model.measure,
                     weights       = resampling_machine.model.weights,
                     operation     = resampling_machine.model.operation,
                     check_measure = resampling_machine.model.check_measure,
                     repeats       = resampling_machine.model.repeats,
                     acceleration  = resampling_machine.model.acceleration),
                          resampling_machine.args...) for _ in 2:length(partitions)]...]

        @sync for (i, parts) in enumerate(partitions)
            Threads.@spawn begin
                entries[i] = map(metamodels[parts]) do m
                    r = event(m, machs[i],
                              verbosity, tuning, history, state)
                    verbosity < 1 || put!(ch, true)
                    r
                end
            end
        end
        verbosity < 1 || put!(ch, false)
    end
    reduce(vcat, entries)
end

end # of @static if VERSION ...

# history is initialized to `nothing` because its type is not known.
_vcat(history, Δhistory) = vcat(history, Δhistory)
_vcat(history::Nothing, Δhistory) = Δhistory
_length(history) = length(history)
_length(::Nothing) = 0

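# For example, on the first `build` iteration below, `history === nothing`,
# so `_length(history) == 0` and `_vcat(history, Δhistory) === Δhistory`,
# which fixes the history's type for subsequent iterations.
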
# builds on an existing `history` until the length is `n` or the model
# supply is exhausted (method shared by `fit` and `update`). Returns
# the bigger history:
function build(history,
               n,
               tuning,
               model,
               state,
               verbosity,
               acceleration,
               resampling_machine)
    j = _length(history)
    models_exhausted = false
    while j < n && !models_exhausted
        metamodels, state = models(tuning,
                                   model,
                                   history,
                                   state,
                                   n - j,
                                   verbosity)
        Δj = _length(metamodels)
        Δj == 0 && (models_exhausted = true)
        shortfall = n - Δj
        if models_exhausted && shortfall > 0 && verbosity > -1
            @info "Only $j (of $n) models evaluated.\n"*
                "Model supply exhausted. "
        end
        Δj == 0 && break
        shortfall < 0 && (metamodels = metamodels[1:n - j])
        j += Δj

        Δhistory = assemble_events(metamodels,
                                   resampling_machine,
                                   verbosity,
                                   tuning,
                                   history,
                                   state,
                                   acceleration)
        history = _vcat(history, Δhistory)
    end
    return history, state
end

# given complete history, pick out best model, fit it on all data and
# generate report and cache (meta_state):
function finalize(tuned_model, history, state, verbosity, rm, data...)
    model = tuned_model.model
    tuning = tuned_model.tuning

    user_history = map(history) do entry
        delete(entry, :metadata)
    end

    entry = best(tuned_model.selection_heuristic, history)
    best_model = entry.model
    best_history_entry = delete(entry, :metadata)
    fitresult = machine(best_model, data...)

    report0 = (best_model         = best_model,
               best_history_entry = best_history_entry,
               history            = user_history)

    if tuned_model.train_best
        fit!(fitresult, verbosity=verbosity - 1)
        report1 = merge(report0, (best_report=MLJBase.report(fitresult),))
    else
        report1 = merge(report0, (best_report=missing,))
    end

    report = merge(report1, tuning_report(tuning, history, state))
    meta_state = (history, deepcopy(tuned_model), state, rm)

    return fitresult, meta_state, report
end

function MLJBase.fit(tuned_model::EitherTunedModel{T,M},
                     verbosity::Integer, data...) where {T,M}
    tuning = tuned_model.tuning
    model = tuned_model.model
    range = tuned_model.range
    n = tuned_model.n === nothing ?
        default_n(tuning, range) : tuned_model.n

    verbosity < 1 || @info "Attempting to evaluate $n models."

    acceleration = tuned_model.acceleration

    state = setup(tuning, model, range, verbosity)

    # instantiate resampler (`model` to be replaced with mutated
    # clones during iteration below):
    resampler = Resampler(model=model,
                          resampling    = tuned_model.resampling,
                          measure       = tuned_model.measure,
                          weights       = tuned_model.weights,
                          operation     = tuned_model.operation,
                          check_measure = tuned_model.check_measure,
                          repeats       = tuned_model.repeats,
                          acceleration  = tuned_model.acceleration_resampling)
    resampling_machine = machine(resampler, data...)
    history, state = build(nothing, n, tuning, model, state,
                           verbosity, acceleration, resampling_machine)

    rm = resampling_machine
    return finalize(tuned_model, history, state, verbosity, rm, data...)
end

function MLJBase.update(tuned_model::EitherTunedModel,
                        verbosity::Integer,
                        old_fitresult, old_meta_state, data...)

    history, old_tuned_model, state, resampling_machine = old_meta_state
    acceleration = tuned_model.acceleration

    tuning = tuned_model.tuning
    range = tuned_model.range
    model = tuned_model.model

    # exclamation points are for values actually used rather than
    # stored:
    n! = tuned_model.n === nothing ?
        default_n(tuning, range) : tuned_model.n

    old_n! = old_tuned_model.n === nothing ?
        default_n(tuning, range) : old_tuned_model.n

    if MLJBase.is_same_except(tuned_model, old_tuned_model, :n) &&
        n! >= old_n!

        verbosity < 1 || @info "Attempting to add $(n! - old_n!) models "*
            "to search, bringing total to $n!. "

        history, state = build(history, n!, tuning, model, state,
                               verbosity, acceleration, resampling_machine)

        rm = resampling_machine
        return finalize(tuned_model, history, state, verbosity, rm, data...)
    else
        return fit(tuned_model, verbosity, data...)
    end
end

MLJBase.predict(tuned_model::EitherTunedModel, fitresult, Xnew) =
    predict(fitresult, Xnew)

function MLJBase.fitted_params(tuned_model::EitherTunedModel, fitresult)
    if tuned_model.train_best
        return (best_model=fitresult.model,
                best_fitted_params=fitted_params(fitresult))
    else
        return (best_model=fitresult.model,
                best_fitted_params=missing)
    end
end


## METADATA

MLJBase.supports_weights(::Type{<:EitherTunedModel{<:Any,M}}) where M =
    MLJBase.supports_weights(M)

MLJBase.load_path(::Type{<:DeterministicTunedModel}) =
    "MLJTuning.DeterministicTunedModel"
MLJBase.package_name(::Type{<:EitherTunedModel}) = "MLJTuning"
MLJBase.package_uuid(::Type{<:EitherTunedModel}) = "MLJTuning"
MLJBase.package_url(::Type{<:EitherTunedModel}) =
    "https://github.com/alan-turing-institute/MLJTuning.jl"
MLJBase.is_pure_julia(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
    MLJBase.is_pure_julia(M)
MLJBase.input_scitype(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
    MLJBase.input_scitype(M)
MLJBase.target_scitype(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
    MLJBase.target_scitype(M)