
pytorch / opacus / 13383064725 / 3

Build:
Default branch: main
Ran: 18 Feb 2025 04:24AM UTC
Files: 120
Run time: 4s

12 Feb 2025 07:22PM UTC coverage: 85.533% (+0.6%) from 84.936%
Job 13383064725.3 · push · github · facebook-github-bot
Adds fast gradient clipping support for the Embedding layer. (#694)

Summary:
The algorithm is described in the paper 'A Unified Fast Gradient Clipping Framework for DP-SGD': https://proceedings.neurips.cc/paper_files/paper/2023/file/a45d344b28179c8da7646bc38ff50ad8-Paper-Conference.pdf.

## Types of changes

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Docs change / refactoring / dependency upgrade

## Motivation and Context / Related issue

Previously, ghost clipping was not supported in Opacus for the embedding layer. With the default DP-SGD implementation, training OOMs on large embedding layers at the large physical batch sizes that are useful for privacy. Regular DP-SGD needs O(Bnd) memory, where B = physical batch size, n = vocab size, and d = embedding dimension. To give an example of the memory needed: we've seen embeddings with [vocab size = 1,000,000, dim = 5] (and higher) in real-world differential privacy applications. With a physical batch size of 16,000, the memory needed is 16,000 × 1,000,000 × 5 × 4 bytes = 298.02 GiB.
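
As a quick sanity check on that arithmetic, here is a minimal sketch (the helper name is hypothetical, not an Opacus API) that reproduces the figure:

```python
def per_sample_grad_gib(batch_size: int, vocab_size: int, embed_dim: int,
                        bytes_per_float: int = 4) -> float:
    """GiB needed to materialize per-sample embedding gradients (B x n x d floats)."""
    return batch_size * vocab_size * embed_dim * bytes_per_float / 2**30

# The example above: B=16,000, n=1,000,000, d=5, float32.
print(per_sample_grad_gib(16_000, 1_000_000, 5))  # ~298.02
```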

With this change, we need significantly less memory: O(Br), where B is the physical batch size and r is the number of unique indices in the embedding sequence. We could successfully run DP-SGD on the above example using < 8 GiB.
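
To illustrate where the O(Br) bound comes from: the per-sample gradient of an embedding weight is a sum of one-hot outer products, so its norm depends only on the grad-output rows grouped by vocabulary index. Below is a minimal sketch of that computation (a hypothetical helper, not Opacus's actual implementation, assuming inputs of shape (B, T) and grad-outputs of shape (B, T, d)):

```python
import torch

def embedding_per_sample_grad_norms(input_ids: torch.Tensor,
                                    grad_output: torch.Tensor) -> torch.Tensor:
    """Per-sample gradient norms for an embedding layer without
    materializing the (B, n, d) per-sample gradients.

    input_ids:   (B, T) int64 indices into the embedding table
    grad_output: (B, T, d) gradient of the loss w.r.t. the embedding output
    """
    B, T = input_ids.shape
    d = grad_output.shape[-1]
    norms = torch.zeros(B, dtype=grad_output.dtype, device=grad_output.device)
    for i in range(B):
        # Rows of grad_output that hit the same vocabulary index sum into
        # one row of the per-sample gradient, so only the r unique indices
        # of this sample contribute.
        uniq, inverse = torch.unique(input_ids[i], return_inverse=True)
        summed = torch.zeros(uniq.numel(), d,
                             dtype=grad_output.dtype,
                             device=grad_output.device)
        summed.index_add_(0, inverse, grad_output[i])
        norms[i] = summed.norm()
    return norms
```

Each sample touches only its r unique indices, which is where the O(Br) footprint comes from (with the embedding dimension d treated as a constant factor).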

This is a good addition to Opacus, enabling larger embedding layers to be trained with DP-SGD over larger physical batch sizes.
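
For reference, the Opacus fast gradient clipping tutorial enables this mode through the usual PrivacyEngine flow via grad_sample_mode="ghost". The sketch below assumes model, optimizer, and train_loader are already defined, and the exact signature may differ across versions:

```python
import torch.nn as nn
from opacus import PrivacyEngine

criterion = nn.CrossEntropyLoss()
privacy_engine = PrivacyEngine()

# Ghost clipping wraps the loss as well, so the criterion is passed in
# and returned alongside the model, optimizer, and data loader.
model, optimizer, criterion, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    criterion=criterion,
    noise_multiplier=1.0,   # illustrative values
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)
```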

## How Has This Been Tested (if it applies)

Unit tests, plus runs of large embedding-layer training on a real-world DP application.

## Checklist

- [x] The documentation is up-to-date with the changes I made.
- [x] I have read the **CONTRIBUTING** document and completed the CLA (see **CONTRIBUTING**).
- [x] All tests passed, and additional code has been covered with new tests.

Pull Request resolved: https://github.com/pytorch/opacus/pull/694

Reviewed B... (continued)

5203 of 6083 relevant lines covered (85.53%)

0.86 hits per line

Source Files on job 13383064725.3
  • Files listed: 120
  • Files changed: 7
  • Source changed: 3
  • Coverage changed: 7
  • Commit 0eb4b3ee on github