
bethgelab / foolbox / build 8137716344 (push, via GitHub web-flow)

22 Jan 2024 10:53PM UTC coverage: 98.47%. Remained the same.
Bump pillow from 10.1.0 to 10.2.0 in /tests (#718)

Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.1.0 to 10.2.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/10.1.0...10.2.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
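
The bump itself amounts to a one-line version-pin change. Assuming the `/tests` directory pins its dependencies in a `requirements.txt` (the file name is an assumption; the commit only names the directory), the change would look like:

```diff
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
-pillow==10.1.0
+pillow==10.2.0
```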

3475 of 3529 relevant lines covered (98.47%)

7.22 hits per line

Source File: /foolbox/gradient_estimators.py (100.0% covered)
from typing import Callable, Tuple, Type

import eagerpy as ep

from .types import BoundsInput, Bounds
from .attacks.base import Attack


def evolutionary_strategies_gradient_estimator(
    AttackCls: Type[Attack],
    *,
    samples: int,
    sigma: float,
    bounds: BoundsInput,
    clip: bool,
) -> Type[Attack]:
    """Wrap an attack class so that gradients are estimated with antithetic
    evolutionary-strategies (ES) sampling instead of backpropagation."""
    if not hasattr(AttackCls, "value_and_grad"):
        raise ValueError(
            "This attack does not support gradient estimators."
        )  # pragma: no cover

    bounds = Bounds(*bounds)

    class GradientEstimator(AttackCls):  # type: ignore
        def value_and_grad(
            self,
            loss_fn: Callable[[ep.Tensor], ep.Tensor],
            x: ep.Tensor,
        ) -> Tuple[ep.Tensor, ep.Tensor]:
            value = loss_fn(x)

            gradient = ep.zeros_like(x)
            for _ in range(samples // 2):
                # antithetic pair: evaluate the loss at x +/- sigma * noise
                noise = ep.normal(x, shape=x.shape)

                pos_theta = x + sigma * noise
                neg_theta = x - sigma * noise

                if clip:
                    pos_theta = pos_theta.clip(*bounds)
                    neg_theta = neg_theta.clip(*bounds)

                pos_loss = loss_fn(pos_theta)
                neg_loss = loss_fn(neg_theta)

                # accumulate the finite difference along the sampled direction
                gradient += (pos_loss - neg_loss) * noise

            gradient /= 2 * sigma * 2 * samples

            return value, gradient

    GradientEstimator.__name__ = AttackCls.__name__ + "WithESGradientEstimator"
    GradientEstimator.__qualname__ = AttackCls.__qualname__ + "WithESGradientEstimator"
    return GradientEstimator


es_gradient_estimator = evolutionary_strategies_gradient_estimator
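
Stripped of the eagerpy and Attack machinery, the inner loop is plain antithetic ES sampling. A minimal NumPy sketch of the same loop, keeping foolbox's `2 * sigma * 2 * samples` normalization (the function name `es_gradient` and the quadratic test loss are illustrative, not part of foolbox):

```python
import numpy as np


def es_gradient(loss_fn, x, *, samples, sigma, rng):
    """Antithetic-sampling gradient estimate, mirroring the foolbox loop."""
    value = loss_fn(x)
    gradient = np.zeros_like(x)
    for _ in range(samples // 2):
        noise = rng.standard_normal(x.shape)
        pos_loss = loss_fn(x + sigma * noise)
        neg_loss = loss_fn(x - sigma * noise)
        gradient += (pos_loss - neg_loss) * noise
    gradient /= 2 * sigma * 2 * samples
    return value, gradient


rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])
loss = lambda v: np.sum(v**2)  # true gradient is 2 * x

value, grad = es_gradient(loss, x, samples=2000, sigma=0.1, rng=rng)
# the estimate's direction should closely match the true gradient
cosine = grad @ (2 * x) / (np.linalg.norm(grad) * np.linalg.norm(2 * x))
print(value, cosine)
```

Note that this normalization yields a constant multiple of the textbook ES estimate (which would divide by `sigma * samples` for `samples // 2` antithetic pairs); for attacks that only use the gradient's direction or normalize it, the constant factor is immaterial. The antithetic pairing cancels the even-order terms of the loss expansion, reducing the estimator's variance.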