One (imone)
AI & ML interests
Reinforcement Learning, Brain-inspired AI
Professional RL(HF) Hyperparameter Tuner
imone's activity
MMLU Lower Results Theory · 3 · #5 opened 11 months ago by fblgit
Why is the "measured" benchmark score of Llama-3-8B so low? · 1 · #6 opened 11 months ago by c6sneaky
License · 45 · 9 · #3 opened 12 months ago by mrfakename
Update added_tokens.json · #8 opened about 1 year ago by vicky4s4s
Consider using an OSI-approved license like Mistral and Phi-2 · 1 · #47 opened about 1 year ago by imone
Which model is your demo page using? · 2 · #44 opened about 1 year ago by wempoo
Freezing Issue with gguf quant · 5 · #1 opened over 1 year ago by dillfrescott
MetaMath QA · 1 · #9 opened over 1 year ago by mrfakename
Fine Tuning · 1 · #8 opened over 1 year ago by Aditya0097
Prompt template standard · 1 · #7 opened over 1 year ago by Hugs4Llamas
Is there a way to get the text embedding? · 1 · #5 opened over 1 year ago by EladC
What is the base model of openchat? Llama / mistral / custom? · 4 · #4 opened over 1 year ago by StephanePop
error in docs · 2 · #6 opened over 1 year ago by PsiPi
32k context size? · 1 · #3 opened over 1 year ago by paryska99
How did Mixtral make openchat_3.5 worse? · 1 · 3 · #34 opened over 1 year ago by JJJJJPSYCHIC
Some feedback · 1 · #33 opened over 1 year ago by cmp-nct
🚩 Report: Ethical issue(s) · 2 · #1 opened almost 2 years ago by stefan-it
Why does this model perform so poorly on DROP compared to OpenHermes? · 1 · #29 opened over 1 year ago by yahma
Inconsistent Eval Results with Openchat 3.5? · 2 · #7 opened over 1 year ago by banghua
Add chat template · 2 · #27 opened over 1 year ago by Rocketknight1