| Jobs | Files | Run time | Trigger |
|---|---|---|---|
| 3 | 119 | 1 min | push (GitHub) |
Add per-sample gradient norm computation as a standalone feature (#724)

Summary:
Pull Request resolved: https://github.com/pytorch/opacus/pull/724

Per-sample gradient norms are already computed for Ghost Clipping, but they are useful more generally. This exposes the computation as a standalone feature:

```
...
loss.backward()
per_sample_norms = model.per_sample_gradient_norms
```

Reviewed By: iden-kalemaj
Differential Revision: D68634969
fbshipit-source-id: 7d5cb8a05
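The commit's snippet elides the setup around the new `per_sample_gradient_norms` property. As a rough illustration of the quantity being exposed (not of Opacus's implementation), here is a minimal plain-PyTorch sketch that computes the same per-sample L2 gradient norms with a naive microbatch loop; the toy model, data, and loop are all hypothetical.

```python
# Minimal sketch: per-sample gradient norms via a naive microbatch loop.
# The model, data, and loop here are illustrative only; Opacus's Ghost
# Clipping computes these norms without this per-sample backward pass.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 10)            # batch of 8 samples
y = torch.randint(0, 2, (8,))     # binary labels

per_sample_norms = []
for i in range(x.shape[0]):
    model.zero_grad()
    loss = criterion(model(x[i : i + 1]), y[i : i + 1])
    loss.backward()
    # Flatten every parameter gradient for this one sample, then take the L2 norm.
    grads = torch.cat([p.grad.flatten() for p in model.parameters()])
    per_sample_norms.append(grads.norm(2).item())

print(per_sample_norms)  # one gradient norm per sample in the batch
```

As the commit notes, Ghost Clipping already computes these norms internally without materializing per-sample gradients, which is why exposing them as a property is essentially free for Ghost Clipping users.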
7 of 10 new or added lines in 1 file covered (70.0%).
6 existing lines in 1 file are now uncovered.
5135 of 6026 relevant lines covered (85.21%).
1.9 hits per line.
| Lines | Coverage | ∆ | File |
|---|---|---|---|
| 3 | 92.21% | -3.31% | opacus/grad_sample/grad_sample_module_fast_gradient_clipping.py |
| 6 | 80.0% | -7.5% | opacus/utils/tensor_utils.py |
| ID | Job ID | Files | Coverage | Link |
|---|---|---|---|---|
| 1 | run-1 - 13171643435.1 | 118 | 84.94% | GitHub Action Run |
| 2 | run-2 - 13171643435.2 | 118 | 85.04% | GitHub Action Run |
| 3 | run-3 - 13171643435.3 | 65 | 48.79% | GitHub Action Run |
| Coverage | ∆ | File | Lines | Relevant | Covered | Missed | Hits/Line |
|---|---|---|---|---|---|---|---|