
deepset-ai / haystack, build 15191185377
22 May 2025 03:52PM UTC. Coverage: 90.342% (-0.07% from 90.411%)
Pull Request #9424: feat: Update streaming chunk (merge 010c037c7 into 4a5e4d3e6)
11169 of 12363 relevant lines covered (90.34%), 0.9 hits per line

haystack/components/generators/utils.py

# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from haystack.dataclasses import StreamingChunk


def print_streaming_chunk(chunk: StreamingChunk) -> None:
    """
    Callback function to handle and display streaming output chunks.

    This function processes a `StreamingChunk` object by:
    - Printing tool call metadata (if any), including function names and arguments, as they arrive.
    - Printing tool call results when available.
    - Printing the main content (e.g., text tokens) of the chunk as it is received.

    The function outputs data directly to stdout and flushes output buffers to ensure immediate display during
    streaming.

    :param chunk: A chunk of streaming data containing content and optional metadata, such as tool calls and
        tool results.
    """
    if chunk.start and chunk.index and chunk.index > 0:
        # If this is not the first content block of the message, add two new lines
        print("\n\n", flush=True, end="")

    ## Tool Call streaming
    if chunk.tool_call:
        # Presence of tool_name indicates beginning of a tool call
        # or chunk.tool_call.name: would be equivalent here
        if chunk.start:
            print("[TOOL CALL]\n", flush=True, end="")
            print(f"Tool: {chunk.tool_call.tool_name} ", flush=True, end="")
            print("\nArguments: ", flush=True, end="")

        # print the tool arguments
        if chunk.tool_call.arguments:
            print(chunk.tool_call.arguments, flush=True, end="")

    ## Tool Call Result streaming
    # Print tool call results if available (from ToolInvoker)
    if chunk.tool_call_result:
        # Tool Call Result is fully formed so delta accumulation is not needed
        print(f"[TOOL RESULT]\n{chunk.tool_call_result.result}", flush=True, end="")

    ## Normal content streaming
    # Print the main content of the chunk (from ChatGenerator)
    if chunk.content:
        if chunk.start:
            print("[ASSISTANT]\n", flush=True, end="")
        print(chunk.content, flush=True, end="")

    # End of LLM assistant message so we add two new lines
    # This ensures spacing between multiple LLM messages (e.g. Agent) or Tool Call Result
    if chunk.meta.get("finish_reason") is not None:
        print("\n\n", flush=True, end="")
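To illustrate what this callback produces during streaming, here is a minimal, self-contained sketch that drives the same assistant-content printing logic with a stand-in dataclass. `ChunkStub` and `print_chunk` are hypothetical stand-ins (haystack's real `StreamingChunk` carries these and more fields); in a real pipeline you would instead pass `print_streaming_chunk` itself as the `streaming_callback` of a chat generator.

```python
import io
from contextlib import redirect_stdout
from dataclasses import dataclass, field
from typing import Optional


# Hypothetical stand-in for haystack.dataclasses.StreamingChunk, carrying only
# the fields the assistant-content path of print_streaming_chunk reads.
@dataclass
class ChunkStub:
    content: str = ""
    start: bool = False
    index: Optional[int] = None
    meta: dict = field(default_factory=dict)


def print_chunk(chunk: ChunkStub) -> None:
    # Same printing logic as print_streaming_chunk above, restricted to the
    # normal content-streaming branch for brevity.
    if chunk.content:
        if chunk.start:
            print("[ASSISTANT]\n", flush=True, end="")
        print(chunk.content, flush=True, end="")
    # finish_reason in meta marks the end of the assistant message
    if chunk.meta.get("finish_reason") is not None:
        print("\n\n", flush=True, end="")


# Simulate a generator emitting token deltas, then a final finish chunk.
chunks = [
    ChunkStub(content="Hello", start=True, index=0),
    ChunkStub(content=", world!"),
    ChunkStub(meta={"finish_reason": "stop"}),
]
buf = io.StringIO()
with redirect_stdout(buf):
    for c in chunks:
        print_chunk(c)
print(repr(buf.getvalue()))  # -> '[ASSISTANT]\nHello, world!\n\n'
```

Because every `print` uses `end=""` and `flush=True`, the tokens appear on the terminal as they arrive rather than being line-buffered, which is what makes the output feel like live streaming.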