
google / scaaml / build 19024507337
Coverage: 86%

Build:
Default branch: main
Ran: 03 Nov 2025 05:19AM UTC
Jobs: 1
Files: 60
Run time: 1min
30 Oct 2025 09:48AM UTC coverage: 86.433%. Remained the same.
Build 19024507337 · push · github · web-flow
Bump keras from 3.11.3 to 3.12.0 in the pip group across 1 directory (#432)

Bumps the pip group with 1 update in the / directory:
[keras](https://github.com/keras-team/keras).

Updates `keras` from 3.11.3 to 3.12.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/keras-team/keras/releases">keras's
releases</a>.</em></p>
<blockquote>
<h2>Keras 3.12.0</h2>
<h2>Highlights</h2>
<h3>Keras has a new model distillation API!</h3>
<p>You now have access to an easy-to-use API for distilling large models
into small models while minimizing performance drop on a reference
dataset -- compatible with all existing Keras models. You can specify a
range of different distillation losses, or create your own losses. The
API supports multiple concurrent distillation losses at the same
time.</p>
<p>Example:</p>
<pre lang="python"><code># Load a model to distill
teacher = ...
# This is the model we want to distill it into
student = ...

# Configure the process
distiller = Distiller(
    teacher=teacher,
    student=student,
    distillation_losses=LogitsDistillation(temperature=3.0),
)
distiller.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Train the distilled model
distiller.fit(x_train, y_train, epochs=10)
</code></pre>
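The `LogitsDistillation(temperature=3.0)` loss named in the release notes is, in spirit, a temperature-scaled KL divergence between teacher and student logits. Below is a rough, framework-free sketch of that idea; the function name and exact reduction here are assumptions for illustration, not the Keras implementation:

```python
import numpy as np

def logits_distillation_loss(teacher_logits, student_logits, temperature=3.0):
    # Temperature-scaled softmax: softening both distributions exposes the
    # teacher's relative class preferences, not just its argmax.
    def softmax(z):
        z = np.asarray(z, dtype=np.float64) / temperature
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    p = softmax(teacher_logits)  # teacher distribution
    q = softmax(student_logits)  # student distribution
    # KL(teacher || student), averaged over the batch; the T**2 factor keeps
    # gradient magnitudes comparable across temperature settings.
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * temperature ** 2)
```

A student whose logits match the teacher's incurs zero loss; the loss grows as the softened distributions diverge.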
<h3>Keras supports GPTQ quantization!</h3>
<p>GPTQ is now built into the Keras API. GPTQ is a post-training,
weights-only quantization method that compresses a model to int4 layer
by layer. For each layer, it uses a second-order method to update
weights while minimizing the error on a calibration dataset.</p>
<p>Learn how to use it <a
href="https://keras.io/guides/gptq_quantization_in_keras/">in this
guide</a>.</p>
<p>Example:</p>
<pre lang="python"><code>model = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_1b")
gptq_config = keras.quantizers.GPTQConfig(
    dataset=calibration_dataset,
    tokenizer=model.preprocessor.tokenizer,
    weig... (continued)
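The truncated example above uses the Keras GPTQ API. As a rough illustration of what weights-only int4 quantization does, here is a simplified round-to-nearest sketch in NumPy; the function names are hypothetical, and real GPTQ goes further by compensating each quantization step's error in the remaining weights using second-order information from the calibration set:

```python
import numpy as np

def quantize_int4_per_channel(w):
    # Weights-only int4 quantization with a per-output-channel scale.
    # This is plain round-to-nearest; GPTQ additionally updates not-yet-
    # quantized weights to minimize error on a calibration dataset.
    qmax = 7  # symmetric int4 range is [-8, 7]
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale
```

Each row (output channel) is scaled so its largest weight maps to the edge of the int4 range, then rounded; dequantization multiplies back by the stored scale.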

3077 of 3560 relevant lines covered (86.43%)

0.86 hits per line
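The figures above are mutually consistent, as a quick check shows (the hits-per-line estimate assumes each covered line was hit about once, which matches the reported 0.86):

```python
covered, relevant = 3077, 3560          # from this build's report
coverage_pct = 100 * covered / relevant
hits_per_line = covered / relevant      # assumption: ~one hit per covered line
print(f"{coverage_pct:.3f}%")           # 86.433%
print(f"{hits_per_line:.2f}")           # 0.86
```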

Jobs

  ID   Job ID          Ran                       Files   Coverage
  1    19024507337.1   03 Nov 2025 05:19AM UTC   60      86.43%
GitHub Action Run
Source Files on build 19024507337
60 files · 0 changed · 0 source changed · 0 coverage changed
  • Github Actions Build #19024507337
  • b3a506b1 on github
  • Prev Build on gh-readonly-queue/main/pr-430-5727597d131a72ff1be732a11fb6e0accffbe90b (#18835450927)
  • Next Build on main (#19221282099)

© 2025 Coveralls, Inc