abecerr1 committed on
Commit 3ee6d97 (verified)
1 Parent(s): 1669749

Add new CrossEncoder model

Files changed (8)
  1. README.md +136 -0
  2. config.json +37 -0
  3. merges.txt +0 -0
  4. model.safetensors +3 -0
  5. special_tokens_map.json +51 -0
  6. tokenizer.json +0 -0
  7. tokenizer_config.json +66 -0
  8. vocab.json +0 -0
README.md ADDED
@@ -0,0 +1,136 @@
---
tags:
- sentence-transformers
- cross-encoder
pipeline_tag: text-ranking
library_name: sentence-transformers
---

# CrossEncoder

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model trained using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("BSC-NLP4BIA/Distemist-CE-Reranker")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
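
Because the model's single output logit is passed through a sigmoid activation (recorded under the `sentence_transformers` key in `config.json`), `predict` returns relevance scores in the (0, 1) range, and `rank` orders the candidate texts from most to least relevant by those scores.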

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->
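
A minimal sketch of equivalent direct usage with 🤗 Transformers, assuming the checkpoint loads as the `RobertaForSequenceClassification` architecture declared in `config.json`; the final sigmoid mirrors the `activation_fn` recorded there:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "BSC-NLP4BIA/Distemist-CE-Reranker"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

queries = ["How many calories in an egg"]
passages = ["There are on average between 55 and 80 calories in an egg depending on its size."]

# Tokenize the query/passage pairs together; truncation respects the 512-token limit
inputs = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits.squeeze(-1)

# Apply the sigmoid that CrossEncoder applies on top of the single output logit
scores = torch.sigmoid(logits)
print(scores)  # one relevance score in (0, 1) per pair
```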

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Framework Versions
- Python: 3.11.3
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate:
- Datasets:
- Tokenizers: 0.21.1

## Citation

### BibTeX

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,37 @@
{
  "architectures": [
    "RobertaForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid",
    "version": "4.1.0"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.51.3",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
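
Two details worth noting: the `sentence_transformers.activation_fn` entry records the sigmoid applied on top of the single `LABEL_0` logit at inference, and `max_position_embeddings` is 514 rather than 512 because RoBERTa-style models reserve two position slots past the padding index, giving the 512-token effective maximum stated in the model card.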
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35134008980a75f3e3bbc46cc90aee98bac4f3fff2d2c916dccbd218c764c143
size 503939668
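
The ~504 MB payload is consistent with the configuration above: roughly 126M parameters (12 layers, hidden size 768, vocabulary of 52,000) stored as float32 at 4 bytes each.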
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
{
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "51999": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "max_len": 512,
  "max_length": 120,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "<unk>"
}
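
Together with `special_tokens_map.json`, these settings mean `<s>` doubles as the BOS/CLS token and `</s>` as the EOS/SEP token, and `RobertaTokenizer` encodes a text pair as `<s> query </s></s> passage </s>`, truncating with the `longest_first` strategy up to `model_max_length`. A quick sketch of inspecting a pair encoding with this repository's tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BSC-NLP4BIA/Distemist-CE-Reranker")

# Encode a query/passage pair; truncation=True caps the pair at model_max_length (512)
enc = tokenizer("How many calories in an egg",
                "Most of the calories in an egg come from the yellow yolk in the center.",
                truncation=True)

tokens = tokenizer.convert_ids_to_tokens(enc.input_ids)
print(tokens[0], tokens[-1])  # '<s>' ... '</s>'
```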
vocab.json ADDED
The diff for this file is too large to render. See raw diff