zakerytclarke committed
Commit 2e18db8 · verified · 1 Parent(s): 78d988e

Update README.md

Files changed (1)
README.md +32 -25
README.md CHANGED
@@ -12,8 +12,6 @@ library_name: transformers
 tags:
 - text2text-generation
 widget:
-- text: 'Translate to German: My name is Arthur'
-  example_title: Translation
 - text: >-
     Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
     Teapot is trained to only answer using context from documents, reducing hallucinations.
@@ -22,33 +20,42 @@ widget:
     TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
     Teapot is a model built by and for the community.
 
-    Tell me about teapotllm
+
+    What devices can teapot run on?
   example_title: Question Answering
 - text: >-
-    Q: Can Geoffrey Hinton have a conversation with George Washington? Give the
-    rationale before answering.
-  example_title: Logical reasoning
-- text: Please answer the following question. What is the boiling point of Nitrogen?
-  example_title: Scientific knowledge
-- text: >-
-    Answer the following yes/no question. Can you write a whole Haiku in a
-    single tweet?
-  example_title: Yes/no question
-- text: >-
-    Answer the following yes/no question by reasoning step-by-step. Can you
-    write a whole Haiku in a single tweet?
-  example_title: Reasoning task
-- text: 'Q: ( False or not False or False ) is? A: Let''s think step by step'
-  example_title: Boolean Expressions
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.
+
+
+    Tell me about teapotllm
+  example_title: Summarization Answering
 - text: >-
-    The square root of x is the cube root of y. What is y to the power of 2, if
-    x = 4?
-  example_title: Math reasoning
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.
+
+
+    Extract the number of parameters
+  example_title: Information Extraction
 - text: >-
-    Premise: At my age you will probably have learnt one lesson. Hypothesis:
-    It's not certain how many lessons you'll learn by your thirties. Does the
-    premise entail the hypothesis?
-  example_title: Premise and hypothesis
+    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
+    Teapot is trained to only answer using context from documents, reducing hallucinations.
+    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
+    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
+    Teapot is a model built by and for the community.
+
+
+    How many parameters is Deepseek?
+  example_title: Hallucination Resistance
 base_model:
 - google/flan-t5-large
 pipeline_tag: text2text-generation
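The updated widget prompts all follow the same pattern the model expects: document context, a blank line, then a task or question. Below is a minimal sketch of reproducing the "Question Answering" widget locally with the transformers text2text-generation pipeline. The Hub model ID "teapotai/teapotllm" is an assumption for illustration; substitute the repository this README.md actually belongs to.

```python
# Minimal sketch: run the "Question Answering" widget prompt locally.
# Assumption: the model ID below is hypothetical -- substitute the Hub
# repository this README.md belongs to.
from transformers import pipeline

generator = pipeline("text2text-generation", model="teapotai/teapotllm")

context = (
    "Teapot is an open-source small language model (~800 million parameters) "
    "fine-tuned on synthetic data and optimized to run locally on "
    "resource-constrained devices such as smartphones and CPUs. "
    "Teapot is trained to only answer using context from documents, "
    "reducing hallucinations."
)

# The widget examples pair the document context with a question, separated
# by a blank line, mirroring the YAML block scalars in the diff above.
prompt = context + "\n\nWhat devices can teapot run on?"

result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```

The same pattern drives the "Hallucination Resistance" example: the question asks for Deepseek's parameter count, which the supplied context never states, so a context-grounded model should decline rather than invent a number.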