
ben-manes / caffeine
Coverage: 94% (master: 100%)
LAST BUILD BRANCH: v3.dev
DEFAULT BRANCH: master
Repo added: 15 Dec 2014 11:20PM UTC
Files: 78
LAST BUILD ON BRANCH gradient

Build 2314 · push · via travis-ci · committer ben-manes · pending completion
Gradient descent optimizers for adaptive tuning

In our paper on adaptive cache policies, we showed how to correct W-TinyLFU's
underperformance on recency-biased traces. At its default configuration, a 1%
window size, the policy is biased towards frequency. Since the optimal setting
is not known beforehand, we sample the hit rate and dynamically tune towards
better values using naive hill climbing.
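The climber described above can be sketched roughly as follows. This is an illustrative outline only, not Caffeine's implementation; the class name, the fixed step size, and the clamping to [0, 1] are assumptions:

```java
// Naive hill climbing over the admission-window fraction: nudge the window
// in one direction, and reverse direction whenever the sampled hit rate
// got worse. Step size and initial values are illustrative.
final class HillClimber {
  private static final double STEP = 0.0625; // fixed step size (assumption)

  private double window = 0.01;   // 1% default window fraction
  private double previousHitRate; // hit rate from the prior sample interval
  private int direction = 1;      // +1 grows the window, -1 shrinks it

  /** Adjusts the window after sampling the hit rate, returning the new size. */
  double adjust(double hitRate) {
    if (hitRate < previousHitRate) {
      direction = -direction; // the last move hurt, so reverse course
    }
    window = Math.min(1.0, Math.max(0.0, window + (direction * STEP)));
    previousHitRate = hitRate;
    return window;
  }
}
```

A fixed step size is the weakness the rest of this change addresses: too large and the climber oscillates around the optimum, too small and it converges slowly.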

The ML community has advanced hill climbing, also known as gradient descent,
for tuning CNN weights. Their optimizers incorporate momentum, adaptive step
sizes, and bias correction. This discovers the optimal setting faster, better
handles noise and local optima, and converges more reliably.
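The three ingredients named above all appear in the Adam update rule. The sketch below applies it to a single scalar parameter (the window fraction); the hyperparameter values are the common defaults from the Adam literature, not values taken from this change:

```java
// Adam-style update for one scalar parameter. Combines momentum (m),
// an adaptive step size (v), and bias correction (mHat, vHat).
final class AdamClimber {
  private static final double ALPHA = 0.001; // base learning rate
  private static final double BETA1 = 0.9;   // momentum decay
  private static final double BETA2 = 0.999; // second-moment decay
  private static final double EPS = 1e-8;    // numerical stability

  private double m; // first-moment (momentum) estimate
  private double v; // second-moment (adaptive step) estimate
  private int t;    // timestep, for bias correction

  /** Returns the step to apply, given an estimated gradient of the hit rate. */
  double step(double gradient) {
    t++;
    m = (BETA1 * m) + ((1 - BETA1) * gradient);            // momentum
    v = (BETA2 * v) + ((1 - BETA2) * gradient * gradient); // adaptive step
    double mHat = m / (1 - Math.pow(BETA1, t));            // bias correction
    double vHat = v / (1 - Math.pow(BETA2, t));
    return (ALPHA * mHat) / (Math.sqrt(vHat) + EPS);
  }
}
```

The bias correction matters early on, when m and v are still warming up from their zero initialization and would otherwise understate the true moments.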

This change includes SGD with momentum, Adam, Nadam, and AMSGrad; all
popular choices that attempt to improve upon their predecessors. Initial
analysis is extremely promising: for example, in a recency-biased trace the
default setting has a hit rate of 0.6%, while these climbers reach the
optimum (LRU). In frequency-biased traces, the hit rate did not degrade.
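To illustrate how the variants relate, AMSGrad differs from Adam by a single twist: it tracks the running maximum of the second-moment estimate so the effective step size can only shrink over time. Again a hedged sketch, not this change's code (AMSGrad as originally described omits bias correction):

```java
// AMSGrad variant: like Adam, but uses max(vMax, v) in the denominator,
// preventing the effective step size from growing again once it has shrunk.
final class AmsGradClimber {
  private static final double ALPHA = 0.001;
  private static final double BETA1 = 0.9;
  private static final double BETA2 = 0.999;
  private static final double EPS = 1e-8;

  private double m;    // momentum estimate
  private double v;    // second-moment estimate
  private double vMax; // running maximum of v (the AMSGrad twist)

  double step(double gradient) {
    m = (BETA1 * m) + ((1 - BETA1) * gradient);
    v = (BETA2 * v) + ((1 - BETA2) * gradient * gradient);
    vMax = Math.max(vMax, v);
    return (ALPHA * m) / (Math.sqrt(vMax) + EPS);
  }
}
```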

Further analysis is required before incorporating the improvement into
Caffeine.

5770 of 6148 relevant lines covered (93.85%)

0.94 hits per line

Source Files on gradient
Detailed source file information is not available for this build.

Recent builds

Builds | Branch | Commit | Type | Ran | Committer | Via | Coverage
2314 | gradient | Gradient descent optimizers for adaptive tuning | push | 29 Dec 2018 03:46AM UTC | ben-manes | travis-ci | pending completion
2312 | gradient | WIP on gradient descent optimizer | push | 28 Dec 2018 11:53PM UTC | ben-manes | travis-ci | pending completion
See All Builds (4310)
