Ran | Jobs: 1 | Files: 169 | Run time: 12s
Build: cron | travis-ci
Merge branch 'release/100'

* release/100: (38 commits)
  Copyright update 2020
  Need 8Gb for the biggest genomes
  The code actually looks way simpler without the iterator, and the memory usage isn't very different
  Disconnect from the anchor database when the search is done
  This is a required parameter
  Can reduce the batch size a bit
  Batch the anchor_ids by number of sequence to make the runtime of the map_anchors* jobs more homogeneous
  Don't start exonerate if the anchor file is empty
  Adjusted the memory requirements
  bugfix: the runnable expects "_range_list"
  500 works for fish too
  Only delete the existing mappings once it has reached write_output
  Simpler way of naming the file
  Rename the variable
  This filter is now in the anchor generation pipeline
  Reintroduced a filter on the number of sequences in the anchor
  These two parameters are only needed in this analysis
  The value used is just 2, which is the natural minimum for alignments
  This parameter is not used here since 7aaa1282a
  Comment update
  ...
12974 of 20426 relevant lines covered (63.52%)
777.54 hits per line
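The headline percentage is simply the ratio of the two counts above. A minimal check in Python, assuming Coveralls' usual definition (covered relevant lines divided by total relevant lines):

```python
# Recompute the headline coverage figure from the counts in the report.
# Assumes coverage = covered relevant lines / total relevant lines.
covered = 12974
relevant = 20426

print(f"{covered / relevant:.2%}")  # 63.52%
```

The hits-per-line figure, by contrast, depends on per-line hit counts that this summary does not include, so it cannot be re-derived from these two numbers alone.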
| ID | Job ID | Ran | Files | Coverage | |
|---|---|---|---|---|---|
| 3 | 6427.3 (COVERAGE=true) | | 169 | 63.52% | Travis Job 6427.3 |
| Coverage | ∆ | File | Lines | Relevant | Covered | Missed | Hits/Line |
|---|---|---|---|---|---|---|---|