Friedrich Marty
Smorty100
3 followers · 5 following
https://gitlab.com/users/Marty_Friedrich/projects
AI & ML interests
I'm most interested in content rerouting between LLM and VLLM agents for automation possibilities. Giving each agent a prompt template that is filled in with another agent's output seems really useful.
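As a rough illustration of that template-filling idea, here is a minimal, hypothetical Python sketch. Every name in it (the templates, call_model, run_pipeline) is made up for the example and not tied to any specific framework; call_model stands in for a real LLM or VLLM call.

# Hypothetical sketch: one agent's output is routed into a prompt template
# that another agent then completes. Not tied to any specific library.

SUMMARIZER_TEMPLATE = "Summarize the following scene description in one sentence:\n{scene}"
PLANNER_TEMPLATE = "Given this summary, list the next three automation steps:\n{summary}"


def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM/VLLM call (API client, local model, etc.)."""
    return f"<model output for: {prompt[:40]}...>"


def run_pipeline(scene_description: str) -> str:
    # Agent 1 (e.g. a vision-language model) turns a raw scene description into a summary.
    summary = call_model(SUMMARIZER_TEMPLATE.format(scene=scene_description))
    # Agent 2 (a text LLM) receives that output through its own template.
    return call_model(PLANNER_TEMPLATE.format(summary=summary))


if __name__ == "__main__":
    print(run_pipeline("A robot arm is holding a red cube above a blue tray."))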
Recent Activity
liked a model about 22 hours ago: nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
replied to etemiz's post 3 days ago:
It looks like the Llama 4 team gamed the LMArena benchmarks by making their Maverick model output emojis, longer responses, and ultra-high enthusiasm! Is that ethical or not? They could certainly do a better job by working with teams like llama.cpp, just like the Qwen team did with Qwen 3 before releasing the model.

In 2024 I started playing with LLMs just before the release of Llama 3. I think Meta contributed a lot to this field and is still contributing. Most LLM fine-tuning tools are based on their models, and the inference tool llama.cpp also has their name on it. Llama 4 is fast and maybe not the greatest in real performance, but it still deserves respect.

But my enthusiasm towards Llama models is probably because they rank highest on my AHA Leaderboard: https://sheet.zoho.com/sheet/open/mz41j09cc640a29ba47729fed784a263c1d08 Looks like they did a worse job compared to Llama 3.1 this time; Llama 3.1 has been on top for a while.

Ranking high on my leaderboard is not correlated with technological progress or parameter size. In fact, if LLM training is getting away from human alignment thanks to synthetic datasets or something else (?), it could easily be inversely correlated with technological progress. There does seem to be a correlation with the location of the builders (in the West or East): Western models rank higher. This has become more visible as the leaderboard progressed; in the past there was less correlation. And Europeans seem to be in the middle!

Whether you like positive vibes from AI or not, maybe the times are getting closer where humans may be susceptible to being gamed by an AI? What do you think?
replied to danielhanchen's post 7 days ago:
You can now run Llama 4 on your own local device! Run our Dynamic 1.78-bit and 2.71-bit Llama 4 GGUFs: https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4
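The post links GGUF files that llama.cpp-based tools can load. As a hedged sketch (not from the post itself), this is one way it could look with llama-cpp-python; the filename pattern, context size, and GPU settings are assumptions, and the linked Unsloth guide is the authoritative reference for exact file names and recommended flags.

# Minimal sketch, assuming llama-cpp-python with Llama 4 support is installed:
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    filename="*Q2_K*.gguf",  # assumed pattern; replace with the exact quant file from the repo
    n_ctx=8192,              # context window; adjust to available RAM
    n_gpu_layers=-1,         # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What time is it in Tokyo right now?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])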
Organizations
None yet
spaces
1
First Agent Template (Sleeping)
Get current time in any timezone
models
2
Smorty100/godot_dodo_4x_60k_llama_7b-Q4_K_M-GGUF · Updated Jan 5 · 102
Smorty100/Mistral-Nemo-Instruct-2407-Q2_K-GGUF · Updated Aug 27, 2024 · 2
datasets
None public yet