
Ame Vi

Ameeeee

AI & ML interests

None yet

Recent Activity

reacted to tomaarsen's post with 🔥 about 17 hours ago
An assembly of 18 European companies, labs, and universities has banded together to launch 🇪🇺 EuroBERT! It's a state-of-the-art multilingual encoder for 15 languages, designed to be finetuned for retrieval, classification, etc.

🇪🇺 15 languages: English, French, German, Spanish, Chinese, Italian, Russian, Polish, Portuguese, Japanese, Vietnamese, Dutch, Arabic, Turkish, Hindi
3️⃣ 3 model sizes: 210M, 610M, and 2.1B parameters - very useful sizes, in my opinion
➡️ Sequence length of 8192 tokens! Nice to see these higher sequence lengths for encoders becoming more common.
⚙️ Architecture based on Llama, but with bi-directional (non-causal) attention to turn it into an encoder. Flash Attention 2 is supported.
🔥 A new Pareto frontier (stronger *and* smaller) for multilingual encoder models
📊 Evaluated against mDeBERTa, mGTE, and XLM-RoBERTa on Retrieval, Classification, and Regression (after finetuning for each task separately): EuroBERT punches way above its weight.
📝 Detailed paper with all the details, incl. data: FineWeb for English and CulturaX for multilingual data, The Stack v2 and Proof-Pile-2 for code.

Check out the release blogpost here: https://huggingface.co/blog/EuroBERT/release
* https://huggingface.co/EuroBERT/EuroBERT-210m
* https://huggingface.co/EuroBERT/EuroBERT-610m
* https://huggingface.co/EuroBERT/EuroBERT-2.1B

The next step is for researchers to build upon the 3 EuroBERT base models and publish strong retrieval, zero-shot classification, etc. models for all to use. I'm very much looking forward to it!
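A minimal sketch of trying one of the base models for masked-token prediction with 🤗 Transformers. This assumes the models ship custom modeling code (hence trust_remote_code=True) and that the tokenizer defines a mask token; check the model cards for the exact details:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

model_id = "EuroBERT/EuroBERT-210m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: EuroBERT ships custom modeling code, so trust_remote_code is needed
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Assumption: the tokenizer defines a mask token (the exact string is per the model card)
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the mask position
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_idx].argmax(dim=-1)))
```

The same checkpoints can be loaded with task-specific heads (e.g. for classification or retrieval finetuning), which is the use the post anticipates.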

Organizations

Hugging Face · Argilla · Women on Hugging Face · Data Is Better Together · Social Post Explorers · HuggingFaceFW-Dev · Data Is Better Together Contributor · Bluesky Community

Posts 2

Build a fine-tuning dataset with No Code.

Do you want to build a small dataset for creative writing to fine-tune an Open LLM?
- Find a dataset full of conversations with ChatGPT on the Hugging Face Hub.
- Import it into your Argilla Space.
- Preview the dataset and create a question to label the relevant conversations.
- Label 1000 valid examples of creative writing.
- Use this dataset with AutoTrain to fine-tune your model (a programmatic sketch of this workflow follows below).
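As a rough programmatic counterpart to the no-code steps above, here is a sketch using the 🤗 datasets library. The dataset id, the "label" column, and the target repo name are hypothetical placeholders, not part of the original workflow:

```python
from datasets import load_dataset

# Hypothetical Hub dataset of ChatGPT conversations (placeholder id)
ds = load_dataset("your-username/chatgpt-conversations", split="train")

# Keep only the rows labeled as creative writing
# (assumes a "label" column produced during the Argilla labeling step)
creative = ds.filter(lambda row: row["label"] == "creative-writing")

# Cap at 1000 examples, matching the target size in the steps above
creative = creative.select(range(min(1000, len(creative))))

# Push to the Hub so it can be selected as training data
creative.push_to_hub("your-username/creative-writing-ft")
```

The pushed dataset can then be picked as the training data when configuring a fine-tuning job in AutoTrain.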

Articles 4


Introducing the Synthetic Data Generator - Build Datasets with Natural Language

Models

None public yet