
google / sedpack / build 18936945842
Coverage: 89% (main: 89%)
Last build branch: gh-readonly-queue/main/pr-292-0c07ca5cb18b3f4b631a539758b4229e54c86d02
Default branch: main
Ran 30 Oct 2025 10:10AM UTC · Jobs: 1 · Files: 76 · Run time: 1min

30 Oct 2025 10:05AM UTC · coverage: 88.687% (remained the same)
Build 18936945842 · push · github · web-flow
Bump keras from 3.11.3 to 3.12.0 in the pip group across 1 directory (#269)

Bumps the pip group with 1 update in the / directory:
[keras](https://github.com/keras-team/keras).

Updates `keras` from 3.11.3 to 3.12.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/keras-team/keras/releases">keras's
releases</a>.</em></p>
<blockquote>
<h2>Keras 3.12.0</h2>
<h2>Highlights</h2>
<h3>Keras has a new model distillation API!</h3>
<p>You now have access to an easy-to-use API for distilling large models
into small models while minimizing the performance drop on a reference
dataset -- compatible with all existing Keras models. You can choose from
a range of built-in distillation losses or create your own (see the
sketch after the example below), and the API supports multiple
distillation losses at the same time.</p>
<p>Example:</p>
<pre lang="python"><code># Load a model to distill
teacher = ...
# This is the model we want to distill it into
student = ...
<h1>Configure the process</h1>
<p>distiller = Distiller(
teacher=teacher,
student=student,
distillation_losses=LogitsDistillation(temperature=3.0),
)
distiller.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)</p>
<h1>Train the distilled model</h1>
<p>distiller.fit(x_train, y_train, epochs=10)
</code></pre></p>
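<p>The notes mention custom losses without showing one. A minimal sketch,
assuming <code>distillation_losses</code> accepts any callable mapping
teacher and student logits to a per-sample loss -- the function name and
that contract are assumptions, not the documented API; the built-in
<code>LogitsDistillation</code> above is the documented route:</p>
<pre lang="python"><code>from keras import ops

# Hedged sketch: the classic soft-target distillation loss (Hinton et al.,
# 2015). Whether Distiller accepts a bare callable like this is an
# assumption based on the release notes, not a documented signature.
def soft_target_loss(teacher_logits, student_logits, temperature=3.0):
    # Soften both distributions with the same temperature.
    t = ops.softmax(teacher_logits / temperature, axis=-1)
    log_s = ops.log_softmax(student_logits / temperature, axis=-1)
    # Cross-entropy against the softened teacher; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    return -ops.sum(t * log_s, axis=-1) * temperature**2
</code></pre>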
<h3>Keras supports GPTQ quantization!</h3>
<p>GPTQ is now built into the Keras API. GPTQ is a post-training,
weights-only quantization method that compresses a model to int4 layer
by layer. For each layer, it uses a second-order method to update
weights while minimizing the error on a calibration dataset.</p>
<p>Learn how to use it <a
href="https://keras.io/guides/gptq_quantization_in_keras/">in this
guide</a>.</p>
<p>Example:</p>
<pre lang="python"><code>model =
keras_hub.models.Gemma3CausalLM.from_preset(&quot;gemma3_1b&quot;)
gptq_config = keras.quantizers.GPTQConfig(
    dataset=calibration_dataset,
    tokenizer=model.preprocessor.tokenizer,
    weig... (continued)
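The snippet above is truncated in the page capture. A hedged reconstruction
of a complete call: the remaining parameter names (weight_bits, group_size)
and the final model.quantize(...) step are assumptions drawn from the
linked guide, not a verbatim continuation of the cut-off text.

<pre lang="python"><code>import keras
import keras_hub

# Hedged sketch -- names marked as assumptions are not from the truncated
# snippet; consult the GPTQ guide linked above for the exact API.
model = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_1b")
gptq_config = keras.quantizers.GPTQConfig(
    dataset=calibration_dataset,             # small calibration text corpus
    tokenizer=model.preprocessor.tokenizer,
    weight_bits=4,                           # assumption: int4 weights-only
    group_size=128,                          # assumption: per-group granularity
)
model.quantize("gptq", config=gptq_config)   # assumption: in-place quantization
</code></pre>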

3073 of 3465 relevant lines covered (88.69%)

0.89 hits per line
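For reference, both headline figures follow from the line counts above;
reading "hits per line" as roughly one hit per covered line is an
inference, not something the page states.

<pre lang="python"><code>covered, relevant = 3073, 3465
print(f"{covered / relevant:.3%}")   # 88.687% -- the reported coverage
# 0.89 hits per line is consistent with ~1 hit per covered line (assumption):
print(round(covered / relevant, 2))  # 0.89
</code></pre>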

Jobs
ID  Job ID         Ran                      Files  Coverage
1   18936945842.1  30 Oct 2025 10:10AM UTC  76     88.69%  (GitHub Action Run)
Source Files on build 18936945842
76 files listed; 0 files changed, 0 with source changes, 0 with coverage changes.
Commit f049e1ea · GitHub Actions build #18936945842
Previous build: gh-readonly-queue/main/pr-261-c0233366864eaf69cad02d1b561fbe62afab3dad (#18872929653)