MSMARCO Passage Ranking
A passage ranking task over a corpus of 8.8 million passages released by Microsoft, which are ranked by their relevance to natural-language questions. Also used by the TREC Deep Learning track.
Variants
We have 7 index variants for this dataset:
- terrier_stemmed
- terrier_stemmed_deepct
- terrier_stemmed_docT5query
- terrier_stemmed_text
- terrier_unstemmed
- terrier_unstemmed_text
- ance
terrier_stemmed
Terrier index with Terrier's default Porter stemming and stopword removal.
Use this for retrieval in PyTerrier:
bm25_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_stemmed', wmodel='BM25')
dph_terrier_stemmed = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_stemmed', wmodel='DPH')
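To see why stemming matters for matching, here is a toy illustration (not Terrier's actual pipeline — the stopword list is truncated and the suffix stripper is a crude stand-in for Porter stemming): morphological variants of a query term collapse to the same index term.

```python
# Toy illustration only: stopword removal plus crude suffix stripping,
# showing why a stemmed index matches variants like "ranked"/"ranking".
STOPWORDS = {"the", "of", "a", "is", "to", "and"}

def toy_stem(term):
    # Crude suffix stripping; the real Porter stemmer has many more rules.
    for suffix in ("ing", "ed", "s"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[: -len(suffix)]
    return term

def toy_index_terms(text):
    tokens = [t.lower() for t in text.split()]
    return [toy_stem(t) for t in tokens if t not in STOPWORDS]

# "ranked" and "ranking" both map to "rank", so the query matches.
print(toy_index_terms("the passages are ranked by relevance"))
print(toy_index_terms("passage ranking"))
```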
terrier_stemmed_deepct
Terrier index using DeepCT. Porter stemming and stopword removal applied. This index was built from the MSMARCO files linked from the authors' original repository. To create indices for other corpora, use the pyterrier_deepct plugin.
Use this for retrieval in PyTerrier:
bm25_terrier_stemmed_deepct = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_stemmed_deepct', wmodel='BM25')
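The core DeepCT idea can be sketched as follows (the weights below are invented, not model output): a context-aware importance score is predicted per term, and each term is repeated in proportion to its score before indexing, so a standard BM25 index sees boosted term frequencies for important terms.

```python
# Toy sketch of DeepCT-style term reweighting (weights are made up):
# importance scores become integer repetition counts in the indexed text.
def reweight(term_weights, scale=10):
    expanded = []
    for term, weight in term_weights.items():
        expanded.extend([term] * max(1, round(weight * scale)))
    return " ".join(expanded)

# Hypothetical importance scores for one passage's terms.
weights = {"msmarco": 0.9, "passage": 0.5, "corpus": 0.1}
print(reweight(weights))
```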
terrier_stemmed_docT5query
Terrier index using docT5query. Porter stemming and stopword removal applied. This index was built from the MSMARCO files linked from the authors' original repository. To create indices for other corpora, use the pyterrier_doc2query plugin.
Use this for retrieval in PyTerrier:
bm25_terrier_stemmed_docT5query = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_stemmed_docT5query', wmodel='BM25')
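The docT5query expansion can be sketched in a few lines (the "generated" queries below are hard-coded stand-ins for T5 output): queries that the passage might answer are appended to its text, and the expanded text is indexed as usual.

```python
# Toy sketch of doc2query-style expansion (fake queries, not T5 output):
# predicted queries are appended to the passage before indexing.
def expand_passage(text, generated_queries):
    return text + " " + " ".join(generated_queries)

passage = "The Porter stemmer strips common English suffixes."
fake_queries = ["what is porter stemming", "how does a stemmer work"]
print(expand_passage(passage, fake_queries))
```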
terrier_stemmed_text
Terrier index with Terrier's default Porter stemming and stopword removal. The passage text is also stored in the MetaIndex to facilitate BERT-based reranking.
Use this for retrieval in PyTerrier:
#!pip install git+https://github.com/Georgetown-IR-Lab/OpenNIR.git
import onir_pt
# Let's use a Vanilla BERT ranker from OpenNIR, with the Capreolus model available from Hugging Face
vanilla_bert = onir_pt.reranker('hgf4_joint', ranker_config={'model': 'Capreolus/bert-base-msmarco', 'norm': 'softmax-2'})
bm25_terrier_stemmed_text = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_stemmed_text', wmodel='BM25', metadata=['docno', 'text'])
bm25_bert_terrier_stemmed_text = (
bm25_terrier_stemmed_text
>> vanilla_bert)
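The >> operator composes transformers into a pipeline: the BM25 stage produces candidates (with their text), and the neural stage rescores only those candidates. A toy sketch of this two-stage pattern, with plain functions standing in for PyTerrier transformers:

```python
# Toy sketch of >> composition: a cheap first stage generates candidates,
# a second stage (stand-in for a neural reranker) rescores them.
def first_stage(query, corpus, k=3):
    # Rank by naive term overlap with the query; keep the top k.
    scored = [(doc, len(set(query.split()) & set(doc.split()))) for doc in corpus]
    scored.sort(key=lambda pair: -pair[1])
    return [doc for doc, _ in scored[:k]]

def rerank(query, candidates):
    # Stand-in scoring function: here, simply prefer shorter passages.
    return sorted(candidates, key=len)

corpus = [
    "msmarco passage ranking task",
    "deep learning for ranking",
    "cooking pasta at home",
]
candidates = first_stage("passage ranking", corpus, k=2)
print(rerank("passage ranking", candidates))
```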
terrier_unstemmed
Terrier index, no stemming, no stopword removal.
Use this for retrieval in PyTerrier:
bm25_terrier_unstemmed = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_unstemmed', wmodel='BM25')
dph_terrier_unstemmed = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_unstemmed', wmodel='DPH')
terrier_unstemmed_text
Terrier index, no stemming, no stopword removal. Text is also saved in the MetaIndex to facilitate BERT-based reranking.
Use this for retrieval in PyTerrier:
#!pip install git+https://github.com/Georgetown-IR-Lab/OpenNIR.git
import onir_pt
# Let's use a Vanilla BERT ranker from OpenNIR, with the Capreolus model available from Hugging Face
vanilla_bert = onir_pt.reranker('hgf4_joint', ranker_config={'model': 'Capreolus/bert-base-msmarco', 'norm': 'softmax-2'})
bm25_terrier_unstemmed_text = pt.BatchRetrieve.from_dataset('msmarco_passage', 'terrier_unstemmed_text', wmodel='BM25', metadata=['docno', 'text'])
bm25_bert_terrier_unstemmed_text = (
bm25_terrier_unstemmed_text
>> vanilla_bert)
ance
ANCE dense retrieval index using the model trained by the original ANCE authors. Requires the pyterrier_ance plugin.
Use this for retrieval in PyTerrier:
#!pip install --upgrade git+https://github.com/terrierteam/pyterrier_ance.git
from pyterrier_ance import ANCERetrieval
ance = ANCERetrieval.from_dataset('msmarco_passage', 'ance')
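Unlike the Terrier indices above, ANCE retrieval is not term matching: queries and passages are encoded into a shared dense vector space, and passages are ranked by inner product with the query vector. A toy sketch of that ranking step (the vectors below are made up, not ANCE embeddings):

```python
# Toy sketch of dense retrieval scoring (hand-made 2-d vectors, not
# ANCE embeddings): rank passages by inner product with the query.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dense_search(query_vec, passage_vecs):
    scored = [(docno, dot(query_vec, vec)) for docno, vec in passage_vecs.items()]
    return sorted(scored, key=lambda pair: -pair[1])

passages = {"d1": [0.9, 0.1], "d2": [0.2, 0.8], "d3": [0.5, 0.5]}
print(dense_search([1.0, 0.0], passages))
```

In the real index, this search is performed approximately over 8.8 million passage vectors rather than by exhaustive scoring.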