deepset-ai / haystack / build 9568249476

18 Jun 2024 03:52PM UTC · coverage: 89.872% (-0.1%) from 89.995%

Triggered by: push (github, committed via web-flow)
ci: Add code formatting checks (#7882)

* ruff settings: enable ruff format and re-format outdated files

* feat: `EvaluationRunResult` add parameter to specify columns to keep in the comparative `DataFrame` (#7879) (see the usage sketch after this log)

  * adding param to explicitly state which cols to keep
  * updating tests
  * adding release notes
  * Update haystack/evaluation/eval_run_result.py
  * Update releasenotes/notes/add-keep-columns-to-EvalRunResult-comparative-be3e15ce45de3e0b.yaml
  * updating docstring

  Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* add format-check: fail on format and linting failures
* fix string formatting, reformat long lines
* fix tests
* fix typing
* linter
* pull from main
* reformat
* lint -> check
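The `EvaluationRunResult` change pulled into this build adds a way to limit which input columns appear in the comparative report. Below is a minimal usage sketch, assuming the parameter is named `keep_columns` on `comparative_individual_scores_report` and that the constructor takes `run_name`, `inputs`, and `results`; the report itself only shows the commit titles, so treat these names and the sample values as assumptions.

```python
from haystack.evaluation.eval_run_result import EvaluationRunResult

# Hypothetical sketch: two evaluation runs over the same questions.
# The inputs/results values here are made up for illustration.
baseline = EvaluationRunResult(
    run_name="baseline",
    inputs={"question": ["Q1", "Q2"], "context": ["...", "..."]},
    results={"reciprocal_rank": {"individual_scores": [1.0, 0.5], "score": 0.75}},
)
candidate = EvaluationRunResult(
    run_name="candidate",
    inputs={"question": ["Q1", "Q2"], "context": ["...", "..."]},
    results={"reciprocal_rank": {"individual_scores": [1.0, 1.0], "score": 1.0}},
)

# keep_columns (assumed name) would drop the other input columns from the
# comparative DataFrame, keeping only "question" plus the per-run score columns.
df = baseline.comparative_individual_scores_report(candidate, keep_columns=["question"])
print(df.columns.tolist())
```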

6957 of 7741 relevant lines covered (89.87%)

0.9 hits per line

Source File

haystack/components/evaluators/document_mrr.py (91.67% covered)
Every relevant line in this file is hit except the two `continue` branches that skip documents with `content=None` (marked `# not covered` below).

````python
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, List

from haystack import Document, component


@component
class DocumentMRREvaluator:
    """
    Evaluator that calculates the mean reciprocal rank of the retrieved documents.

    MRR measures how high the first retrieved document is ranked.
    Each question can have multiple ground truth documents and multiple retrieved documents.

    `DocumentMRREvaluator` doesn't normalize its inputs, the `DocumentCleaner` component
    should be used to clean and normalize the documents before passing them to this evaluator.

    Usage example:
    ```python
    from haystack import Document
    from haystack.components.evaluators import DocumentMRREvaluator

    evaluator = DocumentMRREvaluator()
    result = evaluator.run(
        ground_truth_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="9th")],
        ],
        retrieved_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="10th century"), Document(content="9th")],
        ],
    )
    print(result["individual_scores"])
    # [1.0, 1.0]
    print(result["score"])
    # 1.0
    ```
    """

    @component.output_types(score=float, individual_scores=List[float])
    def run(
        self, ground_truth_documents: List[List[Document]], retrieved_documents: List[List[Document]]
    ) -> Dict[str, Any]:
        """
        Run the DocumentMRREvaluator on the given inputs.

        `ground_truth_documents` and `retrieved_documents` must have the same length.

        :param ground_truth_documents:
            A list of expected documents for each question.
        :param retrieved_documents:
            A list of retrieved documents for each question.
        :returns:
            A dictionary with the following outputs:
            - `score` - The average of calculated scores.
            - `individual_scores` - A list of numbers from 0.0 to 1.0 that represents how high the first retrieved
                document is ranked.
        """
        if len(ground_truth_documents) != len(retrieved_documents):
            msg = "The length of ground_truth_documents and retrieved_documents must be the same."
            raise ValueError(msg)

        individual_scores = []

        for ground_truth, retrieved in zip(ground_truth_documents, retrieved_documents):
            score = 0.0
            for ground_document in ground_truth:
                if ground_document.content is None:
                    continue  # not covered

                for rank, retrieved_document in enumerate(retrieved):
                    if retrieved_document.content is None:
                        continue  # not covered

                    if ground_document.content in retrieved_document.content:
                        score = 1 / (rank + 1)
                        break
            individual_scores.append(score)

        score = sum(individual_scores) / len(retrieved_documents)

        return {"score": score, "individual_scores": individual_scores}
````
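As a reading aid for the scores this component produces: MRR averages, over questions, the reciprocal rank 1/r of the first relevant retrieved document, so a first match in second position contributes 0.5. Here is a minimal sketch using the same public API as the docstring example above, with hypothetical document contents, where the match only arrives at rank 2:

```python
from haystack import Document
from haystack.components.evaluators import DocumentMRREvaluator

evaluator = DocumentMRREvaluator()
result = evaluator.run(
    ground_truth_documents=[[Document(content="Paris")]],
    retrieved_documents=[
        # "Paris" is only found in the second retrieved document,
        # so the reciprocal rank for this question is 1 / 2.
        [Document(content="Berlin"), Document(content="Paris")],
    ],
)
print(result["individual_scores"])  # [0.5]
print(result["score"])              # 0.5
```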