Mukesh Sharma
MukeshSharma
·
AI & ML interests
None yet
Organizations
None yet
MukeshSharma's activity
How can we stop prediction once the model has generated a sufficient solution for the given prompt?
9
#49 opened over 1 year ago
by
MukeshSharma
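The question above is usually handled with a custom stopping criterion: after each generated token, decode the output so far and stop once a chosen stop string appears. A minimal, library-free sketch of that check (in Hugging Face transformers this logic would live inside a `StoppingCriteria` subclass; the stop strings and the toy token loop here are illustrative, not part of any real API):

```python
# Sketch: decide whether generation should stop, based on stop strings.
# In transformers, a check like this runs after each newly generated token;
# here it is plain Python over the decoded text.

def should_stop(decoded_text: str, stop_strings) -> bool:
    """Return True once any stop string appears in the generated text."""
    return any(stop in decoded_text for stop in stop_strings)

def generate(tokens, stop_strings):
    """Toy generation loop: append tokens until a stop string is hit."""
    out = []
    for tok in tokens:
        out.append(tok)
        if should_stop("".join(out), stop_strings):
            break
    return "".join(out)
```

With a stop string like `"###"`, the loop emits tokens until the marker appears and then halts, which is the same shape a real stopping criterion takes during `model.generate`.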
Solved
#55 opened over 1 year ago
by
MukeshSharma
Can we use this model and fine-tune it on our own dataset, the way other models hosted on Hugging Face are fine-tuned?
2
#26 opened over 1 year ago
by
MukeshSharma
Trying to convert LlaMa weights to HF and running out of RAM, but don't want to buy more RAM?
14
#4 opened over 1 year ago
by
daryl149
Is the 14-programming-language dataset uploaded on Hugging Face? Is there any other option to download the data?
1
#201 opened over 1 year ago
by
MukeshSharma
How can we add the ability to remember the conversation?
1
#14 opened almost 2 years ago
by
MukeshSharma
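Conversation memory, as asked above, typically means prepending prior turns to each new prompt before calling the model. A minimal sketch, assuming a simple turn-list format (the role labels, separator, and class name are arbitrary illustrative choices, not any library's API):

```python
class ChatHistory:
    """Keeps recent conversation turns and folds them into the next prompt."""

    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self.turns = []  # list of (role, text) tuples

    def add(self, role, text):
        self.turns.append((role, text))
        # Drop the oldest turns so the prompt length stays bounded.
        self.turns = self.turns[-self.max_turns:]

    def build_prompt(self, user_message):
        """Concatenate stored turns plus the new message into one prompt."""
        lines = [f"{role}: {text}" for role, text in self.turns]
        lines.append(f"User: {user_message}")
        lines.append("Assistant:")
        return "\n".join(lines)
```

The model itself stays stateless; "memory" is just the history re-sent in each prompt, truncated to the most recent turns so it fits in the context window.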
Does the GPT-JT-6B-v1 model have the ability to handle follow-up questions like ChatGPT?
1
#16 opened almost 2 years ago
by
MukeshSharma
Training code
5
#8 opened almost 2 years ago
by
philschmid
What is the fine-tuning process for GPT-JT-6B-v1? Are any docs available?
5
#15 opened almost 2 years ago
by
MukeshSharma
GPTJForCausalLM hogs memory - inference only
2
#9 opened about 2 years ago
by
mrmartin
hivemind / gpt-j-6B-8bit: How can I use multiple GPUs? I tried using accelerate and also torch.nn.DataParallel(), but nothing works.
1
#11 opened about 2 years ago
by
MukeshSharma
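For the multi-GPU question above, the core idea behind `torch.nn.DataParallel` is to split each batch into per-device chunks, run them in parallel, and gather the results. The device-side work needs torch or accelerate, but the batch-sharding step can be sketched in plain Python (contiguous near-equal splitting; function name and shapes are illustrative):

```python
def shard_batch(batch, n_devices):
    """Split a batch into n_devices near-equal contiguous chunks."""
    k, m = divmod(len(batch), n_devices)
    shards = []
    start = 0
    for i in range(n_devices):
        # The first m shards each get one extra item so sizes differ by at most 1.
        size = k + (1 if i < m else 0)
        shards.append(batch[start:start + size])
        start += size
    return shards
```

Each shard would then be moved to its own GPU and processed independently; concatenating the per-shard outputs in order reconstructs the full batch result.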
When will the error get resolved? Can't load tokenizer using from_pretrained; please update its configuration.
1
#5 opened about 2 years ago
by
MukeshSharma
Error at the moment of training
8
#3 opened over 2 years ago
by
AuraM
bitsandbytes-cuda111==0.26.0 not found
4
#4 opened about 2 years ago
by
tomwjhtom
EleutherAI / gpt-j-6B
2
#2 opened over 2 years ago
by
MukeshSharma