PyTerrier demonstration for msmarcov2_document

This notebook demonstrates retrieval using PyTerrier on the MSMARCOv2 Document Ranking corpus.

About the corpus: MSMARCOv2 is a new version of the MSMARCO document ranking corpus, containing 11.9 million documents. It is also used by the TREC 2021 Deep Learning track.

In [ ]:
#!pip install -q python-terrier
import pyterrier as pt
if not pt.started():
    pt.init()

# measure objects (RR, nDCG, AP, ...) used in the experiments below
from pyterrier.measures import *
dataset = pt.get_dataset('msmarcov2_document')
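
The topics and qrels for each subset are returned as pandas DataFrames, so they can be inspected directly before running any retrieval. A quick peek at the valid1 topics, purely as an illustration:

In [ ]:
# DataFrame with qid and query columns
dataset.get_topics('valid1').head()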
        
In [ ]:
# BM25 and DPH retrieval over the pre-built 'terrier_stemmed' index for this dataset
bm25_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarcov2_document', 'terrier_stemmed', wmodel='BM25')

dph_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarcov2_document', 'terrier_stemmed', wmodel='DPH')

# DPH with Bo1 query expansion (pseudo-relevance feedback): retrieve, rewrite the query, retrieve again
dph_bo1_terrier_stemmed = dph_terrier_stemmed >> pt.rewrite.Bo1QueryExpansion(dataset.get_index('terrier_stemmed')) >> dph_terrier_stemmed
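
Each of these pipelines is a PyTerrier transformer, so a single query can be run with .search() as a quick sanity check before the batch experiments below (the query text here is only an illustration):

In [ ]:
# DataFrame with qid, docno, rank and score columns for the retrieved documents
bm25_terrier_stemmed.search("what is the capital of france").head(5)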

Evaluation on valid1 topics and qrels

The 43 topics used in the TREC 2019 Deep Learning track, with deep relevance judgements.

In [ ]:
pt.Experiment(
    [bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed],
    dataset.get_topics('valid1'),
    dataset.get_qrels('valid1'),
    batch_size=200,
    filter_by_qrels=True,
    eval_metrics=[RR, nDCG@10, nDCG@100, AP],
    names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed'])
        

Evaluation on valid2 topics and qrels

The 54 topics used in the TREC 2020 Deep Learning track, with deep relevance judgements.

In [ ]:
pt.Experiment(
    [bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed],
    dataset.get_topics('valid2'),
    dataset.get_qrels('valid2'),
    batch_size=200,
    filter_by_qrels=True,
    eval_metrics=[RR, nDCG@10, nDCG@100, AP],
    names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed'])
        

Evaluation on dev1 topics and qrels

The 4,552 dev1 topics, with sparse relevance judgements.

In [ ]:
pt.Experiment(
    [bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed],
    dataset.get_topics('dev1'),
    dataset.get_qrels('dev1'),
    batch_size=200,
    filter_by_qrels=True,
    eval_metrics=[RR@10],
    names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed'])
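
pt.Experiment() returns a pandas DataFrame of per-system measures, so results can be saved or post-processed directly. As a sketch, significance testing against the BM25 run could be added via the baseline argument (baseline=0 designates the first listed system, bm25_terrier_stemmed, as the reference run; PyTerrier then reports per-measure differences and p-values from a paired test in extra columns):

In [ ]:
pt.Experiment(
    [bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed],
    dataset.get_topics('valid1'),
    dataset.get_qrels('valid1'),
    batch_size=200,
    filter_by_qrels=True,
    eval_metrics=[nDCG@10, AP],
    names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed'],
    baseline=0)  # compare each run against bm25_terrier_stemmed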