
deepset-ai / haystack / build 9660499069

25 Jun 2024 10:07AM UTC
Coverage: 89.968% (+0.02%) from 89.946%

Triggered by: push (github, web-flow)
Commit: bug: fix MRR and MAP calculations (#7841)

6717 of 7466 relevant lines covered (89.97%)
0.9 hits per line
Source File: haystack/components/evaluators/document_map.py (96.15% covered)
# SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
#
# SPDX-License-Identifier: Apache-2.0

from typing import Any, Dict, List

from haystack import Document, component


@component
class DocumentMAPEvaluator:
    """
    A Mean Average Precision (MAP) evaluator for documents.

    Evaluator that calculates the mean average precision of the retrieved documents, a metric
    that measures how high retrieved documents are ranked.
    Each question can have multiple ground truth documents and multiple retrieved documents.

    `DocumentMAPEvaluator` doesn't normalize its inputs; the `DocumentCleaner` component
    should be used to clean and normalize the documents before passing them to this evaluator.

    Usage example:
    ```python
    from haystack import Document
    from haystack.components.evaluators import DocumentMAPEvaluator

    evaluator = DocumentMAPEvaluator()
    result = evaluator.run(
        ground_truth_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="9th")],
        ],
        retrieved_documents=[
            [Document(content="France")],
            [Document(content="9th century"), Document(content="10th century"), Document(content="9th")],
        ],
    )

    print(result["individual_scores"])
    # [1.0, 0.8333333333333333]
    print(result["score"])
    # 0.9166666666666666
    ```
    """

    # Refer to https://www.pinecone.io/learn/offline-evaluation/ for the algorithm.
    @component.output_types(score=float, individual_scores=List[float])
    def run(
        self, ground_truth_documents: List[List[Document]], retrieved_documents: List[List[Document]]
    ) -> Dict[str, Any]:
        """
        Run the DocumentMAPEvaluator on the given inputs.

        All lists must have the same length.

        :param ground_truth_documents:
            A list of expected documents for each question.
        :param retrieved_documents:
            A list of retrieved documents for each question.
        :returns:
            A dictionary with the following outputs:
            - `score` - The average of calculated scores.
            - `individual_scores` - A list of numbers from 0.0 to 1.0 that represent how high retrieved documents
                are ranked.
        """
        if len(ground_truth_documents) != len(retrieved_documents):
            msg = "The length of ground_truth_documents and retrieved_documents must be the same."
            raise ValueError(msg)

        individual_scores = []

        for ground_truth, retrieved in zip(ground_truth_documents, retrieved_documents):
            average_precision = 0.0
            average_precision_numerator = 0.0
            relevant_documents = 0

            ground_truth_contents = [doc.content for doc in ground_truth if doc.content is not None]
            for rank, retrieved_document in enumerate(retrieved):
                if retrieved_document.content is None:
                    continue  # the only line not covered in this build (0 hits)

                if retrieved_document.content in ground_truth_contents:
                    relevant_documents += 1
                    # accumulate precision@rank each time a relevant document is found
                    average_precision_numerator += relevant_documents / (rank + 1)
            if relevant_documents > 0:
                average_precision = average_precision_numerator / relevant_documents
            individual_scores.append(average_precision)

        score = sum(individual_scores) / len(ground_truth_documents)
        return {"score": score, "individual_scores": individual_scores}
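
For reference, the individual scores in the docstring example can be reproduced by hand. The sketch below is a minimal, standalone recomputation (plain Python, no Haystack imports); the `average_precision` helper is hypothetical and only mirrors the loop in `run`, comparing document contents as strings:

```python
def average_precision(ground_truth: list[str], retrieved: list[str]) -> float:
    """Hypothetical helper mirroring the loop in DocumentMAPEvaluator.run."""
    relevant = 0
    numerator = 0.0
    for rank, content in enumerate(retrieved):
        if content in ground_truth:
            relevant += 1
            numerator += relevant / (rank + 1)  # precision at this rank
    return numerator / relevant if relevant else 0.0

# Query 1: the single relevant document sits at rank 1 -> AP = 1/1 = 1.0
q1 = average_precision(["France"], ["France"])

# Query 2: relevant documents at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2 = 0.8333...
q2 = average_precision(["9th century", "9th"], ["9th century", "10th century", "9th"])

print([q1, q2])       # [1.0, 0.8333333333333333]
print((q1 + q2) / 2)  # 0.9166666666666666
```

Averaging the per-query scores gives the reported MAP of 0.9166666666666666, matching the docstring output.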