Lachlan Cahill (lcahill)
AI & ML interests: None yet
Recent Activity
- new activity 8 days ago in JetBrains/Mellum-4b-base: "How to use it in PyCharm for auto completion?"
- liked a model 8 days ago: Qwen/Qwen3-32B-AWQ
- liked a model 8 months ago: meta-llama/Llama-3.2-11B-Vision-Instruct
Organizations: None yet
lcahill's activity
- How to use it in PyCharm for auto completion? · 2 · 1 · #6 opened 8 days ago by DrNicefellow
- Max output tokens for Llama 3.1 · 8 · #6 opened 10 months ago by abhirup-sainapse
- Training this model · 3 · 4 · #33 opened about 1 year ago by ottogutierrez
- Why 12b? Who could run that locally? · 11 · 47 · #1 opened 9 months ago by kaidu88
- GPU requirements · 7 · #32 opened 12 months ago by jmoneydw
- Align tokenizer with mistral-common · 3 · #39 opened 11 months ago by Rocketknight1
- feat/tools-in-chat-template · 1 · 6 · #21 opened 12 months ago by lcahill
- You are truly a godsend! · #16 opened 12 months ago by lcahill
- thanks and question function-calling. · 10 · #17 opened 12 months ago by NickyNicky
- Not generating [TOOL_CALLS] · 3 · #20 opened 12 months ago by ShukantP
- Examples on usage · 4 · #7 opened about 1 year ago by dgallitelli
- Can you add chat template to the `tokenizer_config.json` file? · 5 · #3 opened over 1 year ago by hdnh2006
- add chat template jinja in tokenizer.json · 4 · #4 opened over 1 year ago by Jaykumaran17
- Added Chat Template · 1 · #5 opened over 1 year ago by lcahill
- Adding `safetensors` variant of this model · 1 · 2 · #10 opened over 1 year ago by SFconvertbot
- Adding `safetensors` variant of this model · #94 opened over 1 year ago by lcahill
- Adding `safetensors` variant of this model · 4 · 4 · #42 opened over 1 year ago by nth-attempt