seyma9gulsen committed (verified)
Commit 473bb71 · 1 parent: d3925d3

Update BM25S model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+corpus.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,157 @@
---
language: en
library_name: bm25s
tags:
- bm25
- bm25s
- retrieval
- search
- lexical
---

# BM25S Index

This is a BM25S index created with the [`bm25s` library](https://github.com/xhluca/bm25s) (version `0.2.6`), an ultra-fast implementation of BM25. It can be used for lexical retrieval tasks.

BM25S Related Links:

* 🏠 [Homepage](https://bm25s.github.io)
* 💻 [GitHub Repository](https://github.com/xhluca/bm25s)
* 🤗 [Blog Post](https://huggingface.co/blog/xhluca/bm25s)
* 📝 [Technical Report](https://arxiv.org/abs/2407.03618)

## Installation

You can install the `bm25s` library with `pip`:

```bash
pip install "bm25s==0.2.6"

# Include extra dependencies such as the stemmer
pip install "bm25s[full]==0.2.6"

# For Hugging Face Hub usage
pip install huggingface_hub
```
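
The `[full]` extra pulls in optional dependencies such as PyStemmer. As a minimal, purely illustrative sketch (not required for using this index), a stemmer is typically combined with `bm25s.tokenize` like this:

```python
import bm25s
import Stemmer  # provided by PyStemmer, installed via bm25s[full]

# Tokenize with English stopword removal and stemming (illustrative only)
stemmer = Stemmer.Stemmer("english")
tokens = bm25s.tokenize(["a cat is a feline"], stopwords="en", stemmer=stemmer)
```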

## Loading a `bm25s` index

You can use this index for information retrieval tasks. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

# Load the index
retriever = BM25HF.load_from_hub("seyma9gulsen/bm25s-squad")

# You can retrieve now
query = "a cat is a feline"
results = retriever.retrieve(bm25s.tokenize(query), k=3)
```
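
As a follow-up sketch, the result of `retrieve` can be unpacked into documents and scores; this assumes the index was loaded with `load_corpus=True` (see "Advanced usage" below) so that documents, not just indices, are returned:

```python
# Sketch only: unpack the top-k documents and their BM25 scores.
documents, scores = retriever.retrieve(bm25s.tokenize(query), k=3)
for rank in range(documents.shape[1]):
    print(f"Rank {rank + 1} (score={scores[0, rank]:.2f}): {documents[0, rank]}")
```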

## Saving a `bm25s` index

You can save a `bm25s` index to the Hugging Face Hub. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
    "a fish is a creature that lives in water and swims",
]

retriever = BM25HF(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))

token = None  # You can get a token from the Hugging Face website
retriever.save_to_hub("seyma9gulsen/bm25s-squad", token=token)
```
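
A variant sketch, assuming you want to keep metadata alongside each document (the record layout below is made up for illustration): `bm25s` can also attach a corpus of dictionaries, which are returned as-is at retrieval time, while tokenization is run on the text field you choose:

```python
# Sketch with an assumed record layout; the "id" and "text" fields are illustrative.
corpus_records = [
    {"id": 0, "text": "a cat is a feline and likes to purr"},
    {"id": 1, "text": "a dog is the human's best friend and loves to play"},
]

retriever = BM25HF(corpus=corpus_records)
retriever.index(bm25s.tokenize([record["text"] for record in corpus_records]))
retriever.save_to_hub("seyma9gulsen/bm25s-squad", token=token)
```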

## Advanced usage

You can leverage more advanced features of the BM25S library during `load_from_hub`:

```python
# Load the corpus and memory-map the index (mmap=True) to reduce memory usage
retriever = BM25HF.load_from_hub("seyma9gulsen/bm25s-squad", load_corpus=True, mmap=True)

# Load a different branch/revision
retriever = BM25HF.load_from_hub("seyma9gulsen/bm25s-squad", revision="main")

# Change the directory where the local files are downloaded
retriever = BM25HF.load_from_hub("seyma9gulsen/bm25s-squad", local_dir="/path/to/dir")

# Load private repositories with a token
retriever = BM25HF.load_from_hub("seyma9gulsen/bm25s-squad", token=token)
```

## Tokenizer

If you saved a `Tokenizer` object alongside the index using the following approach:

```python
from bm25s.hf import TokenizerHF

token = "your_hugging_face_token"
tokenizer = TokenizerHF(corpus=corpus, stopwords="english")
tokenizer.save_to_hub("seyma9gulsen/bm25s-squad", token=token)

# ...and the stopwords too
tokenizer.save_stopwords_to_hub("seyma9gulsen/bm25s-squad", token=token)
```

then you can load the tokenizer with the following code:

```python
from bm25s.hf import TokenizerHF

tokenizer = TokenizerHF(corpus=corpus, stopwords=[])
tokenizer.load_vocab_from_hub("seyma9gulsen/bm25s-squad", token=token)
tokenizer.load_stopwords_from_hub("seyma9gulsen/bm25s-squad", token=token)
```
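
A short usage sketch tying the two together (assumptions: the retriever has already been loaded as shown earlier, and the `update_vocab=False` / `return_as="string"` options of `tokenize` behave as in recent `bm25s` releases):

```python
# Sketch: tokenize a query with the loaded tokenizer, then retrieve with it.
query_tokens = tokenizer.tokenize(
    ["a cat is a feline"], update_vocab=False, return_as="string"
)
documents, scores = retriever.retrieve(query_tokens, k=3)
```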

## Stats

This index was built from a corpus with the following statistics:

| Statistic | Value |
| --- | --- |
| Number of documents | 86,821 |
| Number of tokens | 5,715,547 |
| Average tokens per document | 65.83 |

## Parameters

The index was created with the following parameters:

| Parameter | Value |
| --- | --- |
| k1 | `1.5` |
| b | `0.75` |
| delta | `0.5` |
| method | `bm25+` |
| idf method | `bm25+` |
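
For reference, a minimal sketch of how these parameters map onto the constructor when building a comparable index yourself (the one-document corpus below is only a placeholder):

```python
import bm25s
from bm25s.hf import BM25HF

corpus = ["a cat is a feline and likes to purr"]  # placeholder corpus

retriever = BM25HF(
    corpus=corpus,
    k1=1.5,
    b=0.75,
    delta=0.5,
    method="bm25+",
    idf_method="bm25+",
)
retriever.index(bm25s.tokenize(corpus))
```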

## Citation

To cite `bm25s`, please use the following BibTeX entry:

```bibtex
@misc{lu_2024_bm25s,
  title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring},
  author={Xing Han Lù},
  year={2024},
  eprint={2407.03618},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.03618},
}
```
corpus.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4dd85e6bddd31656cb86e75f97a776dd9845582d4443697dfc35dda3f38341e
size 67738135
corpus.mmindex.json ADDED
The diff for this file is too large to render. See raw diff
 
data.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a68928b85c234242be4f11a386d987e1dbd02322deddb244c4df9d3ea5f1ab2
size 22862316
indices.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cacf7b516d1cf05dfb7d0fd6cdc0f368f663764e23bc246128e77bd4f409afbc
size 22862316
indptr.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7e878800b9aedb51477f2f803fb64be5f76613325f5831adae2663db118a626
size 314036
nonoccurrence_array.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f0e936f7980518049c8aaee9a9e5d5ee80da516189f13da704b70acfb6f2ee7
size 314032
params.index.json ADDED
@@ -0,0 +1,12 @@
{
  "k1": 1.5,
  "b": 0.75,
  "delta": 0.5,
  "method": "bm25+",
  "idf_method": "bm25+",
  "dtype": "float32",
  "int_dtype": "int32",
  "num_docs": 86821,
  "version": "0.2.6",
  "backend": "numpy"
}
vocab.index.json ADDED
The diff for this file is too large to render. See raw diff