🧠 Static Word Embeddings for Hungarian (huBERT & XLM-RoBERTa)
This repository contains static word embedding models extracted from two BERT-based contextual models: huBERT and XLM-RoBERTa.
📦 Available Embedding Variants
Each model is provided in three static embedding variants:
- Decontextualized: Token embeddings extracted without any surrounding context.
- Aggregate: Static embeddings computed by averaging a word's token representations across the different contexts in which it appears.
- X2Static: Learned static embeddings trained via the X2Static method, designed to optimize static representations from contextual models.
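The aggregate variant above can be sketched as a simple mean over contextual vectors. This is an illustrative reimplementation with toy vectors, not the authors' extraction code; the vector values are placeholders.

```python
import numpy as np

def aggregate_embedding(context_vectors):
    """Average a word's contextual token vectors (one per occurrence)
    into a single static embedding."""
    stacked = np.stack(context_vectors)  # shape: (n_contexts, dim)
    return stacked.mean(axis=0)          # shape: (dim,)

# Toy example: three 4-dimensional contextual vectors for one word,
# as if the word had been observed in three different sentences.
contexts = [
    np.array([1.0, 0.0, 0.0, 2.0]),
    np.array([0.0, 1.0, 0.0, 2.0]),
    np.array([0.0, 0.0, 1.0, 2.0]),
]
static_vec = aggregate_embedding(contexts)
```

In practice the contextual vectors would come from a hidden layer of huBERT or XLM-RoBERTa run over a corpus; here they are hand-written to keep the sketch self-contained.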
🧪 Use Case
These embeddings were developed and evaluated as part of the paper: A Comparative Analysis of Static Word Embeddings for Hungarian by Máté Gedeon. They can be used for intrinsic tasks (e.g., word analogies) and extrinsic tasks (e.g., POS tagging, NER) in Hungarian NLP applications.
The paper can be found here: https://arxiv.org/abs/2505.07809
The corresponding GitHub repository: https://github.com/gedeonmate/hungarian_static_embeddings
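As an illustration of the intrinsic analogy task mentioned above, here is a minimal sketch of the vector-offset method on toy static vectors. The vocabulary and vector values are hand-crafted placeholders, not the released embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Solve a : b :: c : ? via the vector-offset method (b - a + c),
    excluding the three query words from the candidates."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# Toy Hungarian vocabulary (placeholder vectors for illustration only).
vocab = {
    "király":   np.array([1.0, 1.0, 0.0]),  # king
    "férfi":    np.array([1.0, 0.0, 0.0]),  # man
    "nő":       np.array([0.0, 0.0, 1.0]),  # woman
    "királynő": np.array([0.0, 1.0, 1.0]),  # queen
    "alma":     np.array([0.3, 0.1, 0.2]),  # apple (distractor)
}

# férfi : király :: nő : ?  → expected "királynő" with these toy vectors
result = analogy("férfi", "király", "nő", vocab)
```

With the released embeddings, `vocab` would instead be loaded from the static vector files and the candidate set would be the full vocabulary.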
Model tree for gedeonmate/static_hungarian_bert
Base model: FacebookAI/xlm-roberta-base