juletxara committed
Commit ce2bac7 · verified · 1 Parent(s): 76d82b7

Update README.md

Files changed (1):
  1. README.md (+4 -4)
README.md CHANGED
@@ -76,7 +76,7 @@ The model's performance and biases are influenced by its base model (`google/gem
 ## How to Get Started with the Model
 
 This model can be loaded using the Hugging Face `transformers` library.
-'''python
+```python
 # Example (conceptual, actual usage depends on task setup)
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
@@ -89,7 +89,7 @@ model = AutoModelForCausalLM.from_pretrained(model_name)
 # inputs = tokenizer(prompt, return_tensors="pt")
 # outputs = model.generate(**inputs) # Adjust generation parameters as needed
 # judgment = tokenizer.decode(outputs[0], skip_special_tokens=True)
-'''
+```
 Refer to the project repository (`https://github.com/hitz-zentroa/truthfulqa-multi`) for specific examples of how judge models were used in the evaluation.
 
 ## Training Details
@@ -166,7 +166,7 @@ The model is based on the `Gemma2` architecture (`Gemma2ForCausalLM`). It is a C
 ## Citation
 
 **Paper:**
-'''bibtex
+```bibtex
 @inproceedings{calvo-etal-2025-truthknowsnolanguage,
   title = "Truth Knows No Language: Evaluating Truthfulness Beyond English",
   author = "Calvo Figueras, Blanca and Sagarzazu, Eneko and Etxaniz, Julen and Barnes, Jeremy and Gamallo, Pablo and De Dios Flores, Iria and Agerri, Rodrigo",
@@ -176,7 +176,7 @@ The model is based on the `Gemma2` architecture (`Gemma2ForCausalLM`). It is a C
   primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.09387}
 }
-'''
+```
 
 ## More Information
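
The README example touched by this commit stays conceptual: the tokenization, generation, and decoding calls are commented out. A minimal runnable sketch of the same flow is shown below; the repository id and the judge prompt are placeholders (assumptions), since the actual model identifier and prompt template are defined in the project repository, not in this commit.

```python
# Runnable sketch of the conceptual README example.
# NOTE: `model_name` and the prompt below are placeholders, not the
# identifiers or prompt template used in the truthfulqa-multi evaluation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/your-judge-model"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative question/answer pair; the real judge prompt format is defined in
# https://github.com/hitz-zentroa/truthfulqa-multi
prompt = "Q: What happens if you crack your knuckles a lot?\nA: Nothing in particular.\nTrue:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)  # adjust generation parameters as needed
# Decode only the newly generated tokens (the judge's verdict).
judgment = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(judgment)
```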