Possible stopping issues? · 4 · #106 opened about 1 year ago by Dampfinchen
Tokenizer Chat Template · 4 · #103 opened about 1 year ago by Sm1Ling
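For chat-template questions like #103, a minimal sketch of how the tokenizer's bundled template is typically applied (assuming a recent transformers release and that access to the gated repo has already been granted):

```python
from transformers import AutoTokenizer

# Gated repo: access must already have been granted for this download to succeed.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the conversation with the chat template shipped in the tokenizer config.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string rather than token ids
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
)
print(prompt)
```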
It's a great job! · #102 opened about 1 year ago by MallemJerry
Why is there no pad token? · 25 · #101 opened about 1 year ago by Imran1
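On the missing pad token (#101): the Llama 3 tokenizer ships without one, and a common workaround, sketched below, is to reuse the EOS token for padding; whether that is appropriate depends on the training or inference setup.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Llama 3 defines no dedicated pad token; reuse EOS so batched
# tokenization and generation have something to pad with.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

batch = tokenizer(
    ["Hello", "A somewhat longer example sentence"],
    padding=True,
    return_tensors="pt",
)
print(batch["attention_mask"])
```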

Slow API Calls in Transformers · 🤝 3 · 4 · #98 opened about 1 year ago by Zifeng01code
Llama 3 8B related models have bursting issues when generating emotion-related words · 👍 2 · #95 opened about 1 year ago by HyperBlaze
Where can I get a config.json file for Meta-Llama-3-8B-instruct? · 2 · #93 opened about 1 year ago by gdb94086
Local Mobile Device Hosting · 1 · #91 opened about 1 year ago by suguspnk
How should names & fewshot examples be encoded? · 👀 8 · #89 opened about 1 year ago by theobjectivedad
Please provide some guidance or documentation on tool usage. · 👍 2 · #88 opened about 1 year ago by lcahill
AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' · #87 opened about 1 year ago by gy19
Your request to access this repo has been successfully submitted, and is pending a review from the repo's authors. · 1 · #86 opened about 1 year ago by g3stz
Model Generating Prefix · 👍 1 · 1 · #85 opened about 1 year ago by dongyulin
Can't get access · 1 · #84 opened about 1 year ago by ERmak158
Meta-Llama-3-8B-Instruct does not appear to have a file named config.json · 9 · #82 opened about 1 year ago by RK-RK
Access issues via Meta website and HF · #81 opened about 1 year ago by GiorgioDiSalvo
Plans for a 13B model? · #80 opened about 1 year ago by Cheeto96
Not getting the right output · 2 · #79 opened about 1 year ago by Sardarmoazzam
What memory is needed? · 1 · #78 opened about 1 year ago by francescofiamingo
KeyError 'llma' in configuration_auto.py · 1 · #76 opened about 1 year ago by himasrikode
Your request to access this repo has been rejected by the repo's authors. · 13 · #74 opened about 1 year ago by Hoo1196
Changing "eos_token" to eot-id fixes the overflow of model responses, at least when using the Messages API. · 👍 1 · 1 · #73 opened about 1 year ago by myonlyeye
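The eos/eot behaviour in #73 (and the runaway-generation reports elsewhere in this list) is commonly handled by treating both the default EOS token and `<|eot_id|>` as stop tokens when calling `generate`, along the lines of the model card. A sketch, again assuming access to the gated repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Tell me a short joke."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop on either the default EOS token or the end-of-turn token <|eot_id|>.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(input_ids, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```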
How to Fine-tune Llama-3 8B Instruct. · 👀 ➕ 2 · 8 · #72 opened about 1 year ago by elysiia
Update config.json · 1 · #71 opened about 1 year ago by ArthurZ
The request to access the repo was submitted several days ago; why hasn't it been approved yet? · 7 · #70 opened about 1 year ago by water-cui
AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' · 4 · #69 opened about 1 year ago by tianke0711
Access granted, but still not working. · 10 · #68 opened about 1 year ago by adityar23
Uncaught (in promise) SyntaxError: Unexpected token 'E', "Expected r"... is not valid JSON · 🤯 1 · 5 · #66 opened about 1 year ago by sxmss1
Llama responses are broken during conversation · 👍 🔥 2 · 1 · #64 opened about 1 year ago by gusakovskyi
For using the model · #63 opened about 1 year ago by yogeshm
Access Denied · #59 opened about 1 year ago by Jerry-hyl
Not outputting <|eot_id|> on SageMaker · #58 opened about 1 year ago by zhengsj
Update README.md · #57 opened about 1 year ago by inuwamobarak
Batched inference on multi-GPUs · 👀 👍 14 · 4 · #56 opened about 1 year ago by d-i-o-n
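For batched inference across several GPUs (#56), a common pattern is to shard the model with `device_map="auto"` and to use left padding for decoder-only generation. A minimal sketch under those assumptions (model id and prompts are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Left padding keeps prompts right-aligned, which decoder-only generation expects.
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # no pad token ships with the model

# device_map="auto" (backed by accelerate) shards the layers across the visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = [
    "Write a haiku about the sea.",
    "Explain beam search in one sentence.",
]
batch = tokenizer(prompts, padding=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    **batch, max_new_tokens=64, pad_token_id=tokenizer.pad_token_id
)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```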
Badly Encoded Tokens/Mojibake · 2 · #55 opened about 1 year ago by muchanem
Denied permission to DL · 11 · #51 opened about 1 year ago by TimPine
Request to access is still pending review · 31 · #50 opened about 1 year ago by Hoo1196
mlx_lm.server gives wonky answers · 1 · #49 opened about 1 year ago by conleysa
Tokenizer mismatch all the time · 2 · #47 opened about 1 year ago by tian9
Could anyone tell me how to set the prompt template when using the model in PyCharm with transformers? · 1 · #46 opened about 1 year ago by LAKSERS
Instruct format? · 3 · #44 opened about 1 year ago by m-conrad-202
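As a reference for the instruct format asked about in #44 and #46, the Llama 3 Instruct prompt uses header and end-of-turn special tokens; a sketch of the raw layout, which `tokenizer.apply_chat_template` normally builds for you (the system and user text here are placeholders):

```python
# Raw Llama 3 Instruct turn layout; apply_chat_template produces this string automatically.
prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "What is the capital of France?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```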

MPS support quantification · 👍 6 · 5 · #39 opened about 1 year ago by tonimelisma
`meta-llama/Meta-Llama-3-8B-Instruct` model with sagemaker · 1 · #38 opened about 1 year ago by aak7912
Problem with the tokenizer · 👍 8 · 2 · #37 opened about 1 year ago by Douedos
How to output an answer without side chatter · 👍 1 · 8 · #36 opened about 1 year ago by Gerald001
ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. · 12 · #35 opened about 1 year ago by madhurjindal
Does instruct need add_generation_prompt? · 1 · #33 opened about 1 year ago by bdambrosio
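On #33: for the Instruct model, `add_generation_prompt=True` appends the empty assistant header so generation starts with the model's reply rather than continuing the user turn. A small sketch of the difference:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
messages = [{"role": "user", "content": "Hi!"}]

without_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
with_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# The only difference is the trailing "<|start_header_id|>assistant<|end_header_id|>\n\n",
# which cues the model to answer as the assistant instead of extending the user turn.
print(with_prompt[len(without_prompt):])
```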

Error while downloading the model · 3 · #32 opened about 1 year ago by amarnadh1998
Garbage responses · 2 · #30 opened about 1 year ago by RainmakerP