Ran: – | Jobs: 3 | Files: 0 | Run time: – | Badge: README BADGES | Trigger: push (github)
Add per-sample gradient norm computation as a functionality (#724)

Summary: Pull Request resolved: https://github.com/pytorch/opacus/pull/724

Per-sample gradient norm is computed for Ghost Clipping, but it can be useful generally. Exposed it as a functionality.

```
...
loss.backward()
per_sample_norms = model.per_sample_gradient_norms
```

Reviewed By: iden-kalemaj
Differential Revision: D68634969
fbshipit-source-id: 7d5cb8a05
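For context, the quantity the commit exposes is the L2 norm of each sample's gradient taken across all model parameters. The sketch below is an illustrative, dependency-free computation of that quantity from already-materialized per-sample gradients; it is not the Opacus implementation (which computes these norms during the backward pass via Ghost Clipping), and the function name `per_sample_grad_norms` and its input layout are assumptions for this example.

```python
import math

def per_sample_grad_norms(grads_per_param):
    """Compute the per-sample L2 gradient norm across all parameters.

    grads_per_param: list with one entry per parameter; each entry is a
    list of per-sample flat gradient vectors (lists of floats).
    Returns one norm per sample, aggregating over every parameter.
    """
    n_samples = len(grads_per_param[0])
    norms = []
    for i in range(n_samples):
        # Sum of squared gradient entries for sample i over all parameters.
        sq = sum(g * g for param in grads_per_param for g in param[i])
        norms.append(math.sqrt(sq))
    return norms

# Two parameters, two samples: sample 0 has gradients (3, 0) and (4,),
# sample 1 has gradients (0, 0) and (1,).
print(per_sample_grad_norms([[[3.0, 0.0], [0.0, 0.0]], [[4.0], [1.0]]]))
# → [5.0, 1.0]
```

In Opacus itself, after `loss.backward()` the same per-sample values are read from `model.per_sample_gradient_norms`, as shown in the commit's snippet.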
| ID | Job ID | Ran | Files | Coverage (%) | |
|---|---|---|---|---|---|
| 1 | run-2 - 13255676524.1 | – | 118 | 85.04 | GitHub Action Run |
| 2 | run-3 - 13255676524.2 | – | 65 | 48.79 | GitHub Action Run |
| 3 | run-1 - 13255676524.3 | – | 118 | 85.02 | GitHub Action Run |