Update README.md
README.md CHANGED
@@ -34,19 +34,19 @@ weighted avg 0.80 0.79 0.79 200
 For best results, we recommend starting with the following prompting strategy (and encourage tweaks as you see fit):
 
 ```python
-def format_input(query, response):
-"""Your
-
-A hallucination occurs when the response is coherent but factually incorrect or nonsensical
+def format_input(reference, query, response):
+    prompt = f"""Your job is to evaluate whether a machine learning model has hallucinated or not.
+A hallucination occurs when the response is coherent but factually incorrect or nonsensical
 outputs that are not grounded in the provided context.
 You are given the following information:
-
-
-
-
-
-
-
+####INFO####
+[Knowledge]: {reference}
+[User Input]: {query}
+[Model Response]: {response}
+####END INFO####
+Based on the information provided, is the model output a hallucination? Respond with only "yes" or "no".
+"""
+    return prompt
 
 text = format_input(query='Based on the following
 <context>Walrus are the largest mammal</context>
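As a quick illustration, a minimal usage sketch of the updated `format_input`; the `reference`, `query`, and `response` values below are made-up examples and not part of the README, and sending the resulting prompt to a model is left to the caller:

```python
# Minimal usage sketch: build the hallucination-evaluation prompt from example inputs.
# Assumes format_input from the snippet above; the values here are illustrative only.
reference = "<context>Walrus are the largest mammal</context>"
query = "Based on the following context, what is the largest mammal?"
response = "The largest mammal is the walrus."

prompt = format_input(reference=reference, query=query, response=response)
print(prompt)  # Send this prompt to the LLM of your choice; it should answer only "yes" or "no".
```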