Milan Kryl (mikr)
13 followers · 30 following
Website: http://www.milankryl.cz/
Linked accounts: mikr · krylm · milankryl · krylm.bsky.social
AI & ML interests: None yet
Recent Activity
- Upvoted a collection (7 days ago): Tiny Reasoning Language Model
- Upvoted an article (10 days ago): mem-agent: Persistent, Human Readable Memory Agent Trained with Online RL
- Reacted with ❤️ to tomaarsen's post (10 days ago):
ModernBERT goes MULTILINGUAL! One of the most requested models I've seen: The Johns Hopkins University's CLSP has trained state-of-the-art massively multilingual encoders using the ModernBERT architecture: mmBERT.

Model details:
- 2 model sizes:
  - https://huggingface.co/jhu-clsp/mmBERT-small
  - https://huggingface.co/jhu-clsp/mmBERT-base
- Uses the ModernBERT architecture, but with the Gemma2 multilingual tokenizer (so: flash attention, alternating global/local attention, unpadding/sequence packing, etc.)
- Maximum sequence length of 8192 tokens, on the high end for encoders
- Trained on 1833 languages using DCLM, FineWeb2, and many more sources
- 3 training phases: 2.3T tokens of pretraining on 60 languages, 600B tokens of mid-training on 110 languages, and 100B tokens of decay training on all 1833 languages
- Both models are MIT licensed, and the full datasets and intermediary checkpoints are also publicly released

Evaluation details:
- Very competitive with ModernBERT at equivalent sizes on English (GLUE, MTEB v2 English after finetuning)
- Consistently outperforms equivalently sized models on all multilingual tasks (XTREME, classification, MTEB v2 Multilingual after finetuning)
- In short: beats commonly used multilingual base models like mDistilBERT, XLM-R (multilingual RoBERTa), multilingual MiniLM, etc.
- Additionally, the ModernBERT-based mmBERT is much faster than the alternatives thanks to its architectural benefits: easily up to 2x throughput in common scenarios.

Check out the full blogpost with more details. It's super dense and gets straight to the point: https://huggingface.co/blog/mmbert

Based on these results, mmBERT should be the new go-to multilingual encoder base model at 300M parameters and below. Do note that the mmBERT models are "base" models, i.e. they're currently only trained to perform mask filling. They'll need to be finetuned for downstream tasks like semantic search, classification, clustering, etc.
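Since the post says the mmBERT checkpoints are only trained for mask filling, a quick way to try them is the standard transformers fill-mask pipeline. A minimal sketch (untested; it assumes the checkpoints work with the stock pipeline, and the example sentence is made up):

```python
from transformers import pipeline

# Load mmBERT-base (model id from the post above) for its pretraining
# task: masked-token filling.
fill = pipeline("fill-mask", model="jhu-clsp/mmBERT-base")

# mmBERT uses the Gemma2 tokenizer, so ask the tokenizer for its mask
# token instead of hard-coding "[MASK]".
text = f"Paris is the {fill.tokenizer.mask_token} of France."

# Print the top 3 predicted fillers with their scores.
for pred in fill(text, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```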
Spaces (2), sorted by recently updated:
- W2V Bert2 Czech 🐢 · Sleeping · 1 like
- Whisper Czech Large 🤫 · Runtime error · 1 like
Models (17), sorted by recently updated (showing 10):
- mikr/whisper-small-hu-cv11 · Automatic Speech Recognition · 0.2B params · Updated Feb 6 · 7 downloads
- mikr/whisper-medium-sk-cv11 · Automatic Speech Recognition · 0.8B params · Updated Feb 6 · 12 downloads
- mikr/whisper-large-czech-cv11 · Automatic Speech Recognition · 2B params · Updated Feb 6 · 5 downloads · 8 likes
- mikr/w2v-bert-2.0-czech-colab-cv16 · Automatic Speech Recognition · 0.6B params · Updated Feb 2, 2024 · 3 downloads · 2 likes
- mikr/whisper-small-ro-cv11 · Automatic Speech Recognition · 0.2B params · Updated Dec 21, 2023 · 4 downloads
- mikr/whisper-small-hr-vox · Automatic Speech Recognition · Updated Dec 21, 2023 · 9 downloads
- mikr/whisper-large2-czech-cv11 · Automatic Speech Recognition · 2B params · Updated Dec 21, 2023 · 7 downloads · 3 likes
- mikr/whisper-medium-sl-cv11 · Automatic Speech Recognition · Updated Dec 21, 2023 · 6 downloads
- mikr/whisper-small-sk-cv11 · Automatic Speech Recognition · Updated Dec 21, 2023 · 70 downloads · 2 likes
- mikr/whisper-small-cs-sk-cv11 · Automatic Speech Recognition · Updated Dec 21, 2023 · 8 downloads
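The checkpoints above are Whisper and w2v-bert 2.0 fine-tunes for automatic speech recognition, so they should work with the standard transformers ASR pipeline. A minimal usage sketch (untested; "sample.wav" is a placeholder path, and any of the listed model ids could be substituted):

```python
from transformers import pipeline

# Transcribe Czech audio with one of the fine-tuned checkpoints listed
# above; the other mikr/whisper-* models work the same way.
asr = pipeline(
    "automatic-speech-recognition",
    model="mikr/whisper-large-czech-cv11",
)

# "sample.wav" is a placeholder for a local audio file, not a file
# from this page.
result = asr("sample.wav")
print(result["text"])
```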
Datasets (0)
None public yet