
Quantization made by Richard Erkhov.

GitHub

Discord

Request more models

LLaMmlein_1B - AWQ

Original model description:

datasets:
  - togethercomputer/RedPajama-Data-V2
language:
  - de
pipeline_tag: text-generation
library_name: transformers
license: other

LLäMmlein 1B

This is a German TinyLlama 1B language model, trained from scratch with the TinyLlama codebase on the German portion of RedPajama V2. Find more details on our page and our preprint!

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original (unquantized) LLäMmlein 1B checkpoint and its tokenizer
model = AutoModelForCausalLM.from_pretrained("LSX-UniWue/LLaMmlein_1B")
tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_1B")
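
A minimal generation sketch continuing from the snippet above; the German prompt and the sampling settings are illustrative assumptions, not values from the original card:

# Illustrative prompt and sampling settings (assumptions, not from the card)
inputs = tokenizer("Die Würzburger Residenz ist", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))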

Evaluation

We evaluated the model on the SuperGLEBer benchmark.

Safetensors
Model size: 261M params
Tensor types: I32 · FP16