kshard / chatter / build 16538940978

26 Jul 2025 10:35AM UTC coverage: 67.584% (+44.9%) from 22.674%

Pull Request #53: Enable multi-content I/O within prompts & responses
Commit by fogfish via GitHub: "update license"

596 of 837 new or added lines in 27 files covered (71.21%).
2 existing lines in 2 files now uncovered.
884 of 1308 relevant lines covered (67.58%).
0.72 hits per line.

Source File: /chatter.go

//
// Copyright (C) 2024 Dmitry Kolesnikov
//
// This file may be modified and distributed under the terms
// of the MIT license.  See the LICENSE file for details.
// https://github.com/kshard/chatter
//

package chatter

import (
	"context"
	"encoding/json"
)

type Opt = interface{ ChatterOpt() }

// The generic trait to "interact" with LLMs.
type Chatter interface {
	Usage() Usage
	Prompt(context.Context, []Message, ...Opt) (*Reply, error)
}

// LLM Usage stats
type Usage struct {
	InputTokens int `json:"inputTokens"`
	ReplyTokens int `json:"replyTokens"`
}
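
// Usage sketch (not part of the original file): how a caller might invoke
// any Chatter implementation. Names here are hypothetical; construction of
// Message values is elided because the Message type is defined elsewhere
// in this package, and the option types used below are declared later in
// this file.
func exampleAsk(ctx context.Context, llm Chatter, msgs []Message) (*Reply, error) {
	reply, err := llm.Prompt(ctx, msgs,
		Temperature(0.2),      // lower temperature favors deterministic output
		MaxTokens(512),        // cap the length of the reply
		StopSequences{"\n\n"}, // stop once the model emits a blank line
	)
	if err != nil {
		return nil, err
	}
	_ = llm.Usage() // token accounting is available after the call
	return reply, nil
}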

// LLMs' critical parameter influencing the balance between predictability
// and creativity in generated text. Lower temperatures prioritize exploiting
// learned patterns, yielding more deterministic outputs, while higher
// temperatures encourage exploration, fostering diversity and innovation.
type Temperature float64

func (Temperature) ChatterOpt() {}

// Nucleus Sampling, a parameter used in LLMs, impacts token selection by
// considering only the most likely tokens that together represent
// a cumulative probability mass (e.g., top-p tokens). This limits the
// number of choices to avoid overly diverse or nonsensical outputs while
// maintaining diversity within the top-ranked options.
type TopP float64

func (TopP) ChatterOpt() {}

// Top-K sampling restricts token selection to the K most likely candidates
// at each generation step, pruning the long tail of unlikely tokens.
type TopK float64

func (TopK) ChatterOpt() {}

// Token quota for the reply; the model limits its response to the given
// number of tokens.
type MaxTokens int

func (MaxTokens) ChatterOpt() {}

// The stop sequence prevents LLMs from generating more text after a specific
// string appears. Stop sequences make it easy to guarantee concise,
// controlled responses from models.
type StopSequences []string

func (StopSequences) ChatterOpt() {}

// Command registry is a sequence of tools available for LLM usage.
type Registry []Cmd

func (Registry) ChatterOpt() {}

// Command descriptor
type Cmd struct {
	// [Required] A unique name for the command, used as a reference by LLMs (e.g., "bash").
	Cmd string `json:"cmd"`

	// [Required] A detailed, multi-line description to educate the LLM on command usage.
	// Provides contextual information on how and when to use the command.
	About string `json:"about"`

	// [Required] JSON Schema specifies arguments, types, and additional context
	// to guide the LLM on command invocation.
	Schema json.RawMessage `json:"schema"`
}
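
// Sketch (not part of the original file): declaring a hypothetical tool and
// registering it for LLM usage. The command name and JSON Schema below are
// assumptions for illustration; any schema understood by the target model
// works. A Registry satisfies Opt, so it can be passed directly to Prompt.
var exampleTools = Registry{
	{
		Cmd:   "get_weather",
		About: "Returns the current weather for a given city.",
		Schema: json.RawMessage(`{
			"type": "object",
			"properties": {
				"city": {"type": "string", "description": "city name"}
			},
			"required": ["city"]
		}`),
	},
}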

// Foundational identity of LLMs
type LLM interface {
	// Model ID as defined by the vendor
	ModelID() string

	// Encode prompt to bytes:
	// - encoding prompt as prompt markup supported by LLM
	// - encoding prompt to envelope supported by LLM's hosting platform
	Encode([]Message, ...Opt) ([]byte, error)

	// Decode LLM's reply into pure text
	Decode([]byte) (Reply, error)
}
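
// Sketch (not part of the original file): the rough shape of a vendor
// adapter satisfying LLM. The type name and wire format are hypothetical;
// a real adapter renders messages into the vendor's prompt markup on
// Encode and parses the vendor's response envelope into a Reply on Decode.
type exampleModel struct{}

func (exampleModel) ModelID() string { return "example-model-v1" }

func (m exampleModel) Encode(msgs []Message, opts ...Opt) ([]byte, error) {
	// Stand-in request envelope; a real implementation serializes msgs
	// and opts into the format expected by the hosting platform.
	return json.Marshal(map[string]any{"model": m.ModelID()})
}

func (exampleModel) Decode(raw []byte) (Reply, error) {
	// Stand-in decoder; a real implementation extracts the reply content
	// from the vendor-specific response.
	var reply Reply
	err := json.Unmarshal(raw, &reply)
	return reply, err
}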