PLLuM: A Family of Polish Large Language Models

Overview

PLLuM is a family of large language models (LLMs) specialized in Polish and other Slavic/Baltic languages, with additional English data incorporated for broader generalization. Developed through an extensive collaboration with various data providers, PLLuM models are built on high-quality text corpora and refined through instruction tuning, preference learning, and advanced alignment techniques. These models are intended to generate contextually coherent text, offer assistance in various tasks (e.g., question answering, summarization), and serve as a foundation for specialized applications such as domain-specific intelligent assistants. In 2024, PLLuM models were developed by the PLLuM consortium. Since 2025, their development has continued under HIVE AI, a broader alliance of research institutions and organizations delivering digital public services, focused on open language technologies for Polish public administration.

Key Highlights

  • Extensive Data Collection
    We gathered large-scale, high-quality text data in Polish (around 150B tokens after cleaning and deduplication), as well as additional text in other Slavic and Baltic languages and in English. A 28B-token subset of this data can be used in fully open-source models, including for commercial use (in compliance with relevant legal regulations).

  • Organic Instruction Dataset
    We curated the largest Polish collection of manually created “organic instructions” (~55k prompt-response pairs, including ~5.5k multi-turn dialogs). This human-authored instruction set is based on an extensive typology of human-model interactions and covers a range of subtle aspects of supervised fine-tuning (SFT) that might be overlooked by automated approaches (including large-scale distillation of 'strong' LLMs). It was also designed to mitigate negative linguistic transfer from the non-Polish textual data used in the pre-training phase.

  • Polish Preference Corpus
    We created the first Polish-language preference corpus, featuring prompts and multiple model responses manually assessed by a demographically diverse team of annotators. This dataset teaches the model not only correctness (factual and linguistic) but also balance and safety—especially for potentially controversial or adversarial topics.

  • Evaluation Benchmarks
    We developed custom benchmarks to evaluate our models on tasks relevant to Polish public administration, where PLLuM achieved top scores among all tested models. In broader Polish-language tasks, PLLuM models also attain state-of-the-art results.

Model Description

Below is a summary of the new PLLuM-base-250801 models. All model names link to their respective Hugging Face resources, while the base models and licenses link to the relevant source models or license references.

Model Development

All models were pretrained from scratch or continually pretrained on a carefully curated corpus of approximately 18 billion tokens, of which around 17B are in Polish and 1B in English. The training data reflects sources available up to early 2025 and does not include more recent content. To ensure the highest quality and full legal compliance, the dataset was selected using strict data validation and copyright verification procedures. The data collection process was guided by a detailed legal analysis of Polish and EU copyright law, including the amended national Copyright Act. The entire corpus was constructed to guarantee legal safety and enable use in both public-sector and commercial applications. It includes:

  • resources provided by consortium partners,
  • selected web data from public domain sources, Creative Commons–licensed content, or texts not restricted under TDM exceptions,
  • and licensed publisher content acquired through formal agreements.

Intended Use Cases

The base models are pretrained language models and require instruction fine-tuning to perform specific downstream tasks effectively; a minimal fine-tuning sketch follows the list below. Once adapted, they can be used in a wide range of applications, particularly those requiring strong performance in Polish. Typical use cases include:

  • General Language Tasks: Text generation, summarization, question answering, etc.
  • Domain-Specific Assistants: Especially effective for Polish public administration and legal or bureaucratic topics where domain-aware retrieval is required.
  • Research & Development: Building blocks for downstream AI applications in academic or industrial settings, where a strong command of the Polish language is essential.
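
As a rough illustration of the adaptation step, here is a minimal supervised fine-tuning (SFT) sketch using the Hugging Face Trainer. It is not the project's training recipe: the dataset file (instructions.jsonl with "prompt" and "response" fields), the prompt formatting, the sequence length, and all hyperparameters are illustrative assumptions, and a model of this size would in practice usually be fine-tuned with parameter-efficient methods (e.g., LoRA) or a multi-GPU setup.

import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "CYFRAGOVPL/PLLuM-12B-base-250801"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Hypothetical JSONL file with "prompt" and "response" fields.
dataset = load_dataset("json", data_files="instructions.jsonl", split="train")

def tokenize_example(example):
    # Simple completion-style formatting; an assumption, not an official template.
    text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pllum-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()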

How to Use

Each PLLuM model can be loaded via the Hugging Face Transformers library (or compatible frameworks). For RAG-based scenarios, pair the model with a relevant vector store or document retrieval system; a minimal sketch of this setup is given in step 4 below.

Below are some recommended steps and code snippets:

1. Installation

Make sure you have the latest versions of transformers and torch (or another compatible deep learning framework) installed:

pip install transformers accelerate torch

2. Loading the Model

Use the following example to load one of the PLLuM models:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CYFRAGOVPL/PLLuM-12B-base-250801"  # Replace with the PLLuM model name of your choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
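
Once loaded, the base model can be used for plain text completion (it is not instruction-tuned and has no chat template). A minimal usage sketch, where the prompt and generation settings are purely illustrative:

prompt = "Najdłuższa rzeka w Polsce to"  # illustrative prompt: "The longest river in Poland is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))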

3. Using bfloat16 (BF16)

If your hardware (e.g., newer GPUs) supports bfloat16, you can reduce memory usage and potentially speed up inference:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CYFRAGOVPL/PLLuM-12B-base-250801"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model in bfloat16 precision
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"  # automatically places model layers on available devices
)
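
4. Retrieval-Augmented Prompting (sketch)

As noted above, RAG-based scenarios pair the model with a document retrieval system. The snippet below reuses the tokenizer and model loaded in the previous steps and is only a minimal, framework-agnostic sketch: the document list and the naive keyword-overlap retriever are illustrative stand-ins for a real vector store, and the prompt layout is an assumption rather than an official template.

# Illustrative passages; in practice these would come from a vector store or search index.
documents = [
    "Wniosek o dowód osobisty można złożyć w dowolnym urzędzie gminy.",       # where to apply for an ID card
    "Termin rozpatrzenia wniosku o dowód osobisty wynosi do 30 dni.",         # processing time
    "Paszport wydaje wojewoda właściwy ze względu na miejsce zamieszkania.",  # unrelated passage
]

def retrieve(query, docs, k=2):
    # Naive keyword-overlap scoring as a stand-in for embedding-based retrieval.
    query_words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)[:k]

query = "Gdzie można złożyć wniosek o dowód osobisty?"  # "Where can an ID card application be filed?"
context = "\n".join(retrieve(query, documents))

# Completion-style prompt for a base (non-instruction-tuned) model.
prompt = f"Kontekst:\n{context}\n\nPytanie: {query}\nOdpowiedź:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))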

Training Procedure

  • Datasets: ~150B tokens from Polish and multilingual sources, with ~28B tokens available for fully open-source commercial use.
  • Hyperparameters: Vary based on model size, typically including Adam or AdamW optimizers, a range of batch sizes, and carefully tuned learning rates.
  • Hardware & Duration: Training was carried out on the Bem2 (up to 300 NVIDIA H100 GPUs) and Helios HPC systems. The training time for each PLLuM model depends on its parameter size and hardware configuration, ranging from approximately 8 to 25 days on a multi-GPU cluster for models between 8B and 70B parameters.

Evaluation and Benchmarks

  • Public Administration: PLLuM models demonstrated top-tier performance in specialized tasks relevant to government services.
  • Polish Language Tasks: Across a variety of internal benchmarks and standard corpora, PLLuM consistently outperforms other models in accuracy, coherence, and safety metrics.
  • Custom Tests: A unique preference corpus and alignment tests ensure robust, safe, and contextually accurate responses.

Limitations and Bias

  • Potential Hallucinations: Like other LLMs, PLLuM may occasionally produce factually incorrect or fabricated content.
  • Sensitivity & Bias: While extensive preference learning has been done, biases might still emerge, especially in controversial or subjective topics.
  • Context Length: Very long context tasks may challenge certain models, depending on memory constraints.

Ethical Considerations

PLLuM models are designed for constructive and responsible usage. Users should exercise caution when deploying them in production scenarios, especially for sensitive or regulated domains. Despite efforts to minimize harmful outputs, there is always a risk of generating offensive, biased, or inappropriate text. Human oversight and due diligence are advised.

Citation

If you use PLLuM models or any part of this repository in your research or deployment, please cite as follows (BibTeX):

@unpublished{pllum2025, 
    title={PLLuM: A Family of Polish Large Language Models}, 
    author={HIVE AI Consortium}, 
    year={2025} 
}

License

The PLLuM-12B-base-250801 model is published under the Apache 2.0 license.

Creators & Consortium

The PLLuM-base-250801 models were trained under the banner of HIVE AI – a new initiative uniting scientific institutions and digital service providers working together to develop and implement open language technologies for the Polish public sector. HIVE AI continues the mission of the original PLLuM project, expanding it into a coordinated effort to build, improve, and deploy Polish language models in real-world administrative applications.

  • NASK PIB – Project Leader
  • Politechnika Wrocławska
  • Instytut Podstaw Informatyki PAN
  • Instytut Slawistyki PAN
  • Ośrodek Przetwarzania Informacji PIB
  • Uniwersytet Łódzki
  • Cyfronet AGH

Contact and Support

For questions or contributions, please reach out via: [email protected]

We welcome feedback, collaboration, and further exploration of PLLuM models!

Acknowledgements

Project financed by the Minister of Digital Affairs under the targeted subsidy No. 1/WII/DBI/2025: “HIVE AI: Development and Pilot Implementation of LLMs in the Polish Public Administration”

Funding Amount: approx. PLN 19 million
Contract Signing Date: 2025-03-25
