
deepset-ai / haystack / build 15217159694

23 May 2025 06:51PM UTC coverage: 90.258% (-0.2%) from 90.471%

Pull Request #9405: feat: Add async streaming support in `HuggingFaceLocalChatGenerator`
Merge 9faa70620 into f02550179 (committed via GitHub by web-flow)

11387 of 12616 relevant lines covered (90.26%)

0.9 hits per line

Source File

haystack/components/generators/utils.py: 20.0% covered (the module-level imports and the function signature are hit; the body of print_streaming_chunk is uncovered)
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict

from openai.types.chat.chat_completion_chunk import ChoiceDeltaToolCall

from haystack.dataclasses import StreamingChunk


def print_streaming_chunk(chunk: StreamingChunk) -> None:
    """
    Callback function to handle and display streaming output chunks.

    This function processes a `StreamingChunk` object by:
    - Printing tool call metadata (if any), including function names and arguments, as they arrive.
    - Printing tool call results when available.
    - Printing the main content (e.g., text tokens) of the chunk as it is received.

    The function outputs data directly to stdout and flushes output buffers to ensure immediate display during
    streaming.

    :param chunk: A chunk of streaming data containing content and optional metadata, such as tool calls and
        tool results.
    """
    # Print tool call metadata if available (from ChatGenerator)
    if tool_calls := chunk.meta.get("tool_calls"):
        for tool_call in tool_calls:
            # Convert to dict if tool_call is a ChoiceDeltaToolCall
            tool_call_dict: Dict[str, Any] = (
                tool_call.to_dict() if isinstance(tool_call, ChoiceDeltaToolCall) else tool_call
            )

            if function := tool_call_dict.get("function"):
                if name := function.get("name"):
                    print("\n\n[TOOL CALL]\n", flush=True, end="")
                    print(f"Tool: {name} ", flush=True, end="")
                    print("\nArguments: ", flush=True, end="")

                if arguments := function.get("arguments"):
                    print(arguments, flush=True, end="")

    # Print tool call results if available (from ToolInvoker)
    if tool_result := chunk.meta.get("tool_result"):
        print(f"\n\n[TOOL RESULT]\n{tool_result}", flush=True, end="")

    # Print the main content of the chunk (from ChatGenerator)
    if content := chunk.content:
        print(content, flush=True, end="")

    # End of the LLM assistant message, so add two new lines.
    # This ensures spacing between multiple LLM messages (e.g. from an Agent).
    if chunk.meta.get("finish_reason") is not None:
        print("\n\n", flush=True, end="")
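
As context for the uncovered callback body above, here is a minimal usage sketch. The import paths match the file shown and the public Haystack API; the model name and prompt are illustrative assumptions, not taken from this PR.

# Minimal usage sketch (assumptions: standard Haystack API; model name is illustrative).
from haystack.components.generators.chat import HuggingFaceLocalChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage, StreamingChunk

# Direct invocation: prints the chunk's content to stdout without a trailing newline.
print_streaming_chunk(StreamingChunk(content="Hello", meta={}))

# As a streaming callback: the generator invokes it once per generated token, so
# output appears incrementally (flush=True and end="" in the callback make this immediate).
generator = HuggingFaceLocalChatGenerator(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # illustrative model choice
    streaming_callback=print_streaming_chunk,
)
generator.warm_up()
result = generator.run(messages=[ChatMessage.from_user("Explain token streaming in one sentence.")])

Pull #9405 extends this streaming path to asynchronous execution in `HuggingFaceLocalChatGenerator`; the synchronous sketch above shows only the baseline callback behavior.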