deepset-ai / haystack · build 9660499069

25 Jun 2024 10:07AM UTC · coverage: 89.968% (+0.02%) from 89.946%

Event: push · Service: github · Committer: web-flow

Commit: bug: fix MRR and MAP calculations (#7841)

6717 of 7466 relevant lines covered (89.97%)

0.9 hits per line

Source File

haystack/components/evaluators/document_mrr.py · 95.45% of relevant lines covered
(the only uncovered line is the `continue` that skips retrieved documents with no content)
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, List

from haystack import Document, component


@component
class DocumentMRREvaluator:
    """
    Evaluator that calculates the mean reciprocal rank of the retrieved documents.

    MRR measures how high the first retrieved document is ranked.
    Each question can have multiple ground truth documents and multiple retrieved documents.

    `DocumentMRREvaluator` doesn't normalize its inputs, the `DocumentCleaner` component
    should be used to clean and normalize the documents before passing them to this evaluator.

    Usage example:
    ```python
    from haystack import Document
    from haystack.components.evaluators import DocumentMRREvaluator

    evaluator = DocumentMRREvaluator()
    result = evaluator.run(
        ground_truth_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="9th")],
        ],
        retrieved_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="10th century"), Document(content="9th")],
        ],
    )
    print(result["individual_scores"])
    # [1.0, 1.0]
    print(result["score"])
    # 1.0
    ```
    """

    # Refer to https://www.pinecone.io/learn/offline-evaluation/ for the algorithm.
    @component.output_types(score=float, individual_scores=List[float])
    def run(
        self, ground_truth_documents: List[List[Document]], retrieved_documents: List[List[Document]]
    ) -> Dict[str, Any]:
        """
        Run the DocumentMRREvaluator on the given inputs.

        `ground_truth_documents` and `retrieved_documents` must have the same length.

        :param ground_truth_documents:
            A list of expected documents for each question.
        :param retrieved_documents:
            A list of retrieved documents for each question.
        :returns:
            A dictionary with the following outputs:
            - `score` - The average of calculated scores.
            - `individual_scores` - A list of numbers from 0.0 to 1.0 that represents how high the first retrieved
                document is ranked.
        """
        if len(ground_truth_documents) != len(retrieved_documents):
            msg = "The length of ground_truth_documents and retrieved_documents must be the same."
            raise ValueError(msg)

        individual_scores = []

        for ground_truth, retrieved in zip(ground_truth_documents, retrieved_documents):
            reciprocal_rank = 0.0

            ground_truth_contents = [doc.content for doc in ground_truth if doc.content is not None]
            for rank, retrieved_document in enumerate(retrieved):
                if retrieved_document.content is None:
                    continue
                if retrieved_document.content in ground_truth_contents:
                    reciprocal_rank = 1 / (rank + 1)
                    break
            individual_scores.append(reciprocal_rank)

        score = sum(individual_scores) / len(ground_truth_documents)

        return {"score": score, "individual_scores": individual_scores}
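
To make the reciprocal-rank arithmetic in `run` concrete, here is a small sketch using only the API shown in the file above (the document contents are purely illustrative); the first question's best match sits at rank 2, and the second question has no match at all:

```python
from haystack import Document
from haystack.components.evaluators import DocumentMRREvaluator

evaluator = DocumentMRREvaluator()
result = evaluator.run(
    ground_truth_documents=[
        [Document(content="Paris")],
        [Document(content="Berlin")],
    ],
    retrieved_documents=[
        # First matching document is at rank 2 -> reciprocal rank 1 / 2 = 0.5
        [Document(content="Lyon"), Document(content="Paris")],
        # No retrieved document matches -> reciprocal rank 0.0
        [Document(content="Munich"), Document(content="Hamburg")],
    ],
)
print(result["individual_scores"])  # [0.5, 0.0]
print(result["score"])              # (0.5 + 0.0) / 2 = 0.25
```

The first question contributes 1 / 2 because its first matching document is the second one retrieved, the second question contributes 0.0, and `score` is the mean of the two.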