tritiumoxide/madlad400-7b-mt-bt-Q2_K-GGUF
Task: Translation
Dataset: allenai/MADLAD-400
Languages: 419 languages
Tags: Transformers, GGUF, JAX, t5, text2text-generation, text-generation-inference, llama-cpp, gguf-my-repo, Inference Endpoints
License: apache-2.0
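
A minimal sketch of fetching the quantized weights with the huggingface_hub Python client; the repo id and GGUF filename are taken from this page (see the file listing below), everything else is the library's default behaviour:

```python
from huggingface_hub import hf_hub_download

# Download the Q2_K GGUF from this repo into the local Hugging Face cache.
# Repo id and filename come from the file listing on this page.
gguf_path = hf_hub_download(
    repo_id="tritiumoxide/madlad400-7b-mt-bt-Q2_K-GGUF",
    filename="madlad400-7b-mt-bt-q2_k.gguf",
)
print(gguf_path)  # absolute path to the ~3.21 GB model file
```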
Branch: main
1 contributor, 5 commits
Latest commit: 5171663 by tritiumoxide, 4 months ago ("should make this repo work with candle")
File                          Size       LFS  Last commit message                                        Last updated
.gitattributes                1.64 kB         should make this repo work with candle                     4 months ago
README.md                     4.43 kB         Upload README.md with huggingface_hub                      4 months ago
config.json                   805 Bytes       should make this repo work with candle                     4 months ago
madlad400-7b-mt-bt-q2_k.gguf  3.21 GB    LFS  Upload madlad400-7b-mt-bt-q2_k.gguf with huggingface_hub   4 months ago
tokenizer.json                16.6 MB    LFS  should make this repo work with candle                     4 months ago
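
Given the llama-cpp tag, the GGUF listed above is presumably meant to be run with llama.cpp or its Python bindings. The sketch below uses llama-cpp-python and assumes your llama.cpp build supports T5/encoder-decoder GGUFs and that MADLAD-400's usual `<2xx>` target-language prefix applies; the context size, token limit, and prompt are illustrative placeholders, not values from this repo:

```python
from llama_cpp import Llama

# Local path to the GGUF listed above (e.g. the path returned by hf_hub_download).
gguf_path = "madlad400-7b-mt-bt-q2_k.gguf"

# n_ctx and max_tokens are illustrative values, not taken from this repo.
llm = Llama(model_path=gguf_path, n_ctx=512)

# MADLAD-400 translation models usually take a <2xx> target-language prefix
# (here <2de> for German) followed by the source text.
out = llm("<2de> I love pizza!", max_tokens=64)
print(out["choices"][0]["text"])
```

The commit message "should make this repo work with candle" suggests the added config.json and tokenizer.json are there so the same GGUF can also be loaded from the candle (Rust) side; this page does not show the corresponding candle invocation.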