This notebook demonstrates retrieval using PyTerrier on the MSMARCO Document Ranking corpus.
About the corpus: the MSMARCO Document Ranking corpus contains 3.2 million documents, and is also used by the TREC Deep Learning track.
#!pip install -q python-terrier
import pyterrier as pt
if not pt.started():
    pt.init()
from pyterrier.measures import *
dataset = pt.get_dataset('msmarco_document')
These rankers use a Terrier index built with Terrier's default Porter stemming and with stopwords removed.
bm25_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarco_document', 'terrier_stemmed', wmodel='BM25', num_results=100)
dph_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarco_document', 'terrier_stemmed', wmodel='DPH', num_results=100)
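For intuition, BM25 scores a document by summing, over the query terms, an inverse-document-frequency weight times a saturated, length-normalised term frequency. The sketch below is a textbook BM25 in pure Python, not Terrier's implementation (Terrier's variant differs in details such as its IDF formulation); all names and the toy statistics are illustrative.

```python
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, num_docs,
               k1=1.2, b=0.75):
    """Score one document against a query with a textbook BM25 formula.

    doc_tf: term -> frequency in this document
    df:     term -> number of documents containing the term
    """
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0 or term not in df:
            continue
        # smoothed IDF: rare terms contribute more
        idf = math.log((num_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
        # term-frequency saturation, normalised by document length
        norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score

# toy example: only "hello" matches, so only it contributes to the score
doc = {"hello": 2, "world": 1}
print(bm25_score(["hello", "moon"], doc, doc_len=3, avg_doc_len=4.0,
                 df={"hello": 10, "moon": 3}, num_docs=1000))
```

The `k1` and `b` defaults shown are common textbook values; Terrier exposes its own equivalents as controls on the retrieval model.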
dph_bo1_terrier_stemmed = dph_terrier_stemmed >> pt.rewrite.Bo1QueryExpansion(dataset.get_index('terrier_stemmed')) >> dph_terrier_stemmed
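Bo1 is a Divergence-from-Randomness pseudo-relevance-feedback model: it scores candidate expansion terms drawn from the top-ranked documents, favouring terms that occur there far more often than the collection statistics would predict. A hedged sketch of the standard Bose-Einstein 1 weight follows; the actual expansion logic lives inside `pt.rewrite.Bo1QueryExpansion`, and the numbers here are illustrative.

```python
import math

def bo1_weight(tf_topdocs, coll_freq, num_docs):
    """Bose-Einstein 1 (Bo1) expansion weight for a candidate term.

    tf_topdocs: term frequency in the top-ranked (feedback) documents
    coll_freq:  total frequency of the term in the whole collection
    num_docs:   number of documents in the collection
    """
    p_n = coll_freq / num_docs  # expected frequency under randomness
    return tf_topdocs * math.log2((1 + p_n) / p_n) + math.log2(1 + p_n)

# a term frequent in the feedback docs but rare overall scores highly
print(bo1_weight(tf_topdocs=12, coll_freq=200, num_docs=3_200_000))
```

The highest-weighted terms are appended to the query, which is then re-run by the second `dph_terrier_stemmed` stage of the pipeline.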
These rankers use a Terrier index with no stemming and no stopword removal.
bm25_terrier_unstemmed = pt.BatchRetrieve.from_dataset('msmarco_document', 'terrier_unstemmed', wmodel='BM25', num_results=100)
dph_terrier_unstemmed = pt.BatchRetrieve.from_dataset('msmarco_document', 'terrier_unstemmed', wmodel='DPH', num_results=100)
dph_bo1_terrier_unstemmed = dph_terrier_unstemmed >> pt.rewrite.Bo1QueryExpansion(dataset.get_index('terrier_unstemmed')) >> dph_terrier_unstemmed
We first evaluate on the 43 topics used in the TREC 2019 Deep Learning track Document Ranking task, which have deep judgements.
pt.Experiment(
[bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed, bm25_terrier_unstemmed, dph_terrier_unstemmed, dph_bo1_terrier_unstemmed],
dataset.get_topics('test'),
dataset.get_qrels('test'),
batch_size=200,
filter_by_qrels=True,
eval_metrics=[RR, nDCG@10, nDCG@100, AP],
names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed', 'bm25_terrier_unstemmed', 'dph_terrier_unstemmed', 'dph_bo1_terrier_unstemmed'])
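The measures above can be illustrated on a single toy ranking. Below is a minimal pure-Python sketch of reciprocal rank and nDCG@k with linear gain; the experiments themselves use the implementations imported from `pyterrier.measures`, which may differ in details (e.g. gain function), and the docids here are made up.

```python
import math

def reciprocal_rank(ranked_docids, relevant):
    """1/rank of the first relevant document, or 0 if none is retrieved."""
    for rank, docid in enumerate(ranked_docids, start=1):
        if docid in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked_docids, gains, k=10):
    """nDCG@k with a log2 rank discount; gains maps docid -> graded relevance."""
    dcg = sum(gains.get(d, 0) / math.log2(r + 1)
              for r, d in enumerate(ranked_docids[:k], start=1))
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(r + 1) for r, g in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

ranking = ["d3", "d1", "d7", "d2"]   # system output, best first
qrels = {"d1": 3, "d2": 1}           # graded judgements
print(reciprocal_rank(ranking, set(qrels)))  # first relevant doc at rank 2 -> 0.5
print(ndcg_at_k(ranking, qrels))
```

RR rewards placing any relevant document early, while nDCG@k also credits graded relevance deeper in the ranking, which is why both are reported for the deeply-judged TREC topics.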
Next, the 45 topics used in the TREC 2020 Deep Learning track Document Ranking task, again with deep judgements.
pt.Experiment(
[bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed, bm25_terrier_unstemmed, dph_terrier_unstemmed, dph_bo1_terrier_unstemmed],
dataset.get_topics('test-2020'),
dataset.get_qrels('test-2020'),
batch_size=200,
filter_by_qrels=True,
eval_metrics=[RR, nDCG@10, nDCG@100, AP],
names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed', 'bm25_terrier_unstemmed', 'dph_terrier_unstemmed', 'dph_bo1_terrier_unstemmed'])
Finally, the 5193 dev topics, which have sparse judgements (typically a single relevant document per query), so only RR is reported.
pt.Experiment(
[bm25_terrier_stemmed, dph_terrier_stemmed, dph_bo1_terrier_stemmed, bm25_terrier_unstemmed, dph_terrier_unstemmed, dph_bo1_terrier_unstemmed],
dataset.get_topics('dev'),
dataset.get_qrels('dev'),
batch_size=200,
filter_by_qrels=True,
eval_metrics=[RR],
names=['bm25_terrier_stemmed', 'dph_terrier_stemmed', 'dph_bo1_terrier_stemmed', 'bm25_terrier_unstemmed', 'dph_terrier_unstemmed', 'dph_bo1_terrier_unstemmed'])