kotol

Activity Feed

gv-hf's recent activity

merve posted an update about 18 hours ago
Everything that happened this week in open-source AI, a recap šŸ¤  merve/jan-17-releases-678a673a9de4a4675f215bf5

šŸ‘€ Multimodal
- MiniCPM-o 2.6 is a new sota any-to-any model by OpenBMB (vision, speech and text!)
- VideoChat-Flash is a new family of video multimodal models by OpenGVLab (e.g. VideoChat-Flash-Qwen2.5-2B) that come in 2B & 7B sizes at 224 & 448 resolutions
- ByteDance released a larger SA2VA that comes in at 26B parameters
- Dataset: VRC-Bench is a new diverse benchmark for multimodal LLM reasoning performance

šŸ’¬ LLMs
- MiniMax-Text-01 is a huge new language model (456B total, 45.9B active params) by MiniMaxAI with a context length of 4M tokens šŸ¤Æ
- Dataset: Sky-T1-data-17k is a diverse dataset used to train Sky-T1-32B
- kyutai released Helium-1-Preview-2B, a new small multilingual LM
- Wayfarer-12B is a new LLM able to write D&D adventures šŸ§™šŸ»ā€ā™‚ļø
- ReaderLM-v2 is a new HTML parsing model by Jina AI

- Dria released Dria-Agent-a-3B, a new agentic coding model (Pythonic function calling) based on Qwen2.5 Coder
- Unsloth released faster, more memory-efficient versions of Phi-4 and Llama 3.3

šŸ–¼ļø Vision
- MatchAnything is a new foundation model for image matching
- FitDiT is a high-fidelity virtual try-on (VTON) model based on the DiT architecture

šŸ—£ļø Audio
- OuteTTS-0.3-1B is a new multilingual text-to-speech model with voice cloning and emotion control capabilities

šŸ“– Retrieval
- lightblue released LB-reranker-0.5B-v1.0, a new reranker based on Qwen2.5 that can handle 95+ languages
- cde-small-v2 is a new sota small retrieval model by @jxm
Xenova posted an update 1 day ago
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by šŸ¤— Transformers.js. WebGPU support coming soon!
šŸ‘‰ npm i kokoro-js šŸ‘ˆ

Try it out yourself: webml-community/kokoro-web
Link to models/samples: onnx-community/Kokoro-82M-ONNX

You can get started in just a few lines of code!
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // fp32, fp16, q8, q4, q4f16
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text,
  { voice: "af_sky" }, // See `tts.list_voices()`
);
audio.save("audio.wav");

Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all! šŸ¤—

The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality! šŸ¤Æ
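
For the smallest footprint, the same dtype option from the snippet above can be pointed at one of the 4-bit variants. A minimal sketch (which dtype corresponds to the 86 MB variant isn't stated in the post, so "q4" here is an assumption):

import { KokoroTTS } from "kokoro-js";

// Load a more aggressively quantized variant to minimize download size.
const tinyTTS = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q4" }, // assumption: one of the 4-bit options listed in the comment above
);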
merve posted an update 5 days ago
there's a new multimodal retrieval model in town šŸ¤ 
LlamaIndex released vdr-2b-multi-v1
> uses 70% fewer image tokens, yet outperforms other dse-qwen2-based models
> 3x faster inference with less VRAM šŸ’Ø
> shrinkable with matryoshka šŸŖ†
> can do cross-lingual retrieval!
Collection: llamaindex/visual-document-retrieval-678151d19d2758f78ce910e1 (with models and datasets)
Demo: llamaindex/multimodal_vdr_demo
Learn more from their blog post here https://huggingface.co/blog/vdr-2b-multilingual šŸ“–
merve posted an update 8 days ago
What a beginning to this year in open ML šŸ¤ 
Let's unwrap! merve/jan-10-releases-677fe34177759de0edfc9714

Multimodal šŸ–¼ļø
> ByteDance released SA2VA: a family of vision LMs that can take image, video, text and visual prompts
> moondream2 is out with new capabilities like outputting structured data and gaze detection!
> Dataset: Alibaba DAMO lab released multimodal textbook ā€” 22k hours worth of samples from instruction videos šŸ¤Æ
> Dataset: SciCap captioning on scientific documents benchmark dataset is released along with the challenge!

LLMs šŸ’¬
> Microsoft released Phi-4, sota open-source 14B language model šŸ”„
> Dolphin is back with Dolphin 3.0 Llama 3.1 8B šŸ¬šŸ¬
> Prime-RL released Eurus-2-7B-PRIME a new language model trained using PRIME alignment
> SmallThinker-3B is a new small reasoning LM based on Qwen2.5-3B-Instruct šŸ’­
> Dataset: QWQ-LONGCOT-500K is the dataset used to train SmallThinker, generated using QwQ-32B-preview šŸ“•
> Dataset: @cfahlgren1 released React Code Instructions: a dataset of code instruction-code pairs šŸ“•
> Dataset: the Qwen team is on a roll, they just released CodeElo, a benchmark of competition-level coding problems šŸ‘©šŸ»ā€šŸ’»

Embeddings šŸ”–
> @MoritzLaurer released a zero-shot version of ModernBERT Large šŸ‘
> KaLM is a new family of performant multilingual embedding models with MIT license built using Qwen2-0.5B

Image/Video Generation āÆļø
> NVIDIA released Cosmos, a new family of diffusion/autoregressive World Foundation Models generating worlds from images, videos and texts šŸ”„
> Adobe released TransPixar: a new text-to-video model that can generate assets with transparent backgrounds (a first!)
> Dataset: fal released cosmos-openvid-1m Cosmos-tokenized OpenVid-1M with samples from OpenVid-1M

Others
> Prior Labs released TabPFNv2, the best tabular transformer yet for classification and regression
> Metagene-1 is a new RNA language model that can be used for pathogen detection, zero-shot embedding and genome understanding
merve posted an update 9 days ago
ByteDance just dropped SA2VA: a new family of vision LMs combining Qwen2VL/InternVL and SAM2 with MIT license šŸ’— ByteDance/sa2va-model-zoo-677e3084d71b5f108d00e093

> The models are capable of tasks involving vision-language understanding and visual referrals (referring segmentation) both for images and videos āÆļø

> The models come in 1B, 4B and 8B sizes, based on InternVL2.5 for the base architecture with Qwen2, Qwen2.5 or InternLM2 as the language model part (depending on the checkpoint)

> The model is very interesting: it has a different encoder for each modality (visual prompt, text prompt, image and video), then concatenates these to feed into the LLM šŸ’¬

> The output segmentation tokens are passed to SAM2 to match text (captions or semantic classes) to masks ā¤µļø

> Their annotation pipeline is also interesting: they seem to use two open large vision LMs to refine the annotations, and have different levels of description to provide consistency.
Xenova posted an update 17 days ago
First project of 2025: Vision Transformer Explorer

I built a web app to interactively explore the self-attention maps produced by ViTs. This explains what the model is focusing on when making predictions, and provides insights into its inner workings! šŸ¤Æ

Try it out yourself! šŸ‘‡
webml-community/attention-visualization

Source code: https://github.com/huggingface/transformers.js-examples/tree/main/attention-visualization
merve posted an update 18 days ago
supercharge your LLM apps with smolagents šŸ”„

however cool your LLM is, without being agentic it can only go so far

enter smolagents: a new agent library by Hugging Face to make the LLM write code, do analysis and automate boring stuff!

Here's our blog post to get you started: https://huggingface.co/blog/smolagents
Xenova posted an update about 1 month ago
Introducing Moonshine Web: real-time speech recognition running 100% locally in your browser!
šŸš€ Faster and more accurate than Whisper
šŸ”’ Privacy-focused (no data leaves your device)
āš”ļø WebGPU accelerated (w/ WASM fallback)
šŸ”„ Powered by ONNX Runtime Web and Transformers.js

Demo: webml-community/moonshine-web
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/moonshine-web
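
For reference, a browser transcription setup with Transformers.js generally boils down to a few lines like the sketch below (the checkpoint id and options are assumptions, not the demo's exact code; see the source link above for that):

import { pipeline } from "@huggingface/transformers";

// Hypothetical Moonshine ONNX checkpoint id; check the demo source for the exact one.
const transcriber = await pipeline(
  "automatic-speech-recognition",
  "onnx-community/moonshine-base-ONNX",
  { device: "webgpu" }, // assumption: omit this option to stay on the default WASM backend
);

// Accepts an audio URL or raw Float32Array samples.
const { text } = await transcriber("https://example.com/sample.wav");
console.log(text);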
Ā·
merveĀ 
posted an update about 1 month ago
view post
Post
2798
Aya by Cohere For AI can now see! šŸ‘€

The C4AI community has built Maya 8B, a new open-source multilingual VLM built on SigLIP and Aya 8B šŸŒ± works in 8 languages! šŸ—£ļø

The authors extend the LLaVA dataset to 558k examples using Aya's translation capabilities!
Try it here: kkr5155/maya_demo

Dataset maya-multimodal/pretrain

Model maya-multimodal/maya šŸ‘
kudos @nahidalam and team
merve posted an update about 1 month ago
Apollo is a new family of open-source video language models by Meta, where 3B model outperforms most 7B models and 7B outperforms most 30B models šŸ§¶

āœØ the models come in 1.5B https://huggingface.co/Apollo-LMMs/Apollo-1_5B-t32, 3B https://huggingface.co/Apollo-LMMs/Apollo-3B-t32 and 7B https://huggingface.co/Apollo-LMMs/Apollo-7B-t32 with Apache 2.0 license, based on Qwen1.5 & Qwen2
āœØ the authors also release a benchmark dataset https://huggingface.co/spaces/Apollo-LMMs/ApolloBench

The paper has a lot of experiments (they trained 84 models!) about what makes the video LMs work āÆļø

Try the demo (with the best setup) here: https://huggingface.co/spaces/Apollo-LMMs/Apollo-3B
They evaluate sampling strategies, scaling laws for models and datasets, video representation and more!
> The authors find that design decisions validated on small models also scale properly when the model and dataset are scaled up šŸ“ˆ scaling the dataset has diminishing returns for smaller models
> They evaluate frame sampling strategies, and find that FPS sampling is better than uniform sampling, and they find 8-32 tokens per frame optimal
> They also compare image encoders, trying a range of models from shape-optimized SigLIP to DINOv2, and find google/siglip-so400m-patch14-384 to be the most powerful šŸ”„
> They also compare freezing different parts of the models; training all stages with some parts frozen gives the best yield

They eventually release three models, where Apollo-3B outperforms most 7B models and Apollo 7B outperforms 30B models šŸ”„
Ā·
merveĀ 
posted an update about 1 month ago
view post
Post
1773
A complete RAG pipeline includes a reranker, which ranks the retrieved documents to surface the most relevant ones šŸ““
The same goes for multimodal RAG: there are multimodal rerankers we can integrate into multimodal RAG pipelines!
Learn how to build a complete multimodal RAG pipeline with vidore/colqwen2-v1.0 as retriever, lightonai/MonoQwen2-VL-v0.1 as reranker, Qwen/Qwen2-VL-7B-Instruct as VLM in this notebook that runs on a GPU as small as L4 šŸ”„ https://huggingface.co/learn/cookbook/multimodal_rag_using_document_retrieval_and_reranker_and_vlms
Xenova posted an update about 1 month ago
Introducing TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration! šŸ”„ High-quality and natural speech generation that runs 100% locally in your browser, powered by OuteTTS and Transformers.js. šŸ¤— Try it out yourself!

Demo: webml-community/text-to-speech-webgpu
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/text-to-speech-webgpu
Model: onnx-community/OuteTTS-0.2-500M (ONNX), OuteAI/OuteTTS-0.2-500M (PyTorch)
merve posted an update about 1 month ago
This week in open-source AI was insane šŸ¤  A small recapšŸ•ŗšŸ» merve/dec-6-releases-67545caebe9fc4776faac0a3

Multimodal šŸ–¼ļø
> Google shipped PaliGemma 2, a new iteration of PaliGemma with more sizes: 3B, 10B and 28B, with pre-trained and captioning variants šŸ‘
> OpenGVLab released InternVL2.5, seven new vision LMs in different sizes, with a sota checkpoint under MIT license āœØ
> The Qwen team at Alibaba released the base models of Qwen2-VL with 2B, 7B and 72B ckpts

LLMs šŸ’¬
> Meta released Llama 3.3 70B, a new, further-trained iteration of its 70B model
> EuroLLM-9B-Instruct is a new multilingual LLM for European languages with Apache 2.0 license šŸ”„
> Dataset: CohereForAI released GlobalMMLU, multilingual version of MMLU with 42 languages with Apache 2.0 license
> Dataset: QwQ-LongCoT-130K is a new dataset to train reasoning models
> Dataset: FineWeb2 just landed with multilinguality update! šŸ”„ nearly 8TB pretraining data in many languages!

Image/Video Generation šŸ–¼ļø
> Tencent released HunyuanVideo, a new photorealistic video generation model
> OminiControl is a new editing/control framework for image generation models like Flux

Audio šŸ”Š
> Indic-Parler-TTS is a new text-to-speech model made by the community
merve posted an update about 1 month ago
New InternVL drop with a state-of-the-art 78B vision language model with MIT license šŸ”„ https://huggingface.co/collections/OpenGVLab/internvl-25-673e1019b66e2218f68d7c1c
The release comes with seven new vision LMs in different sizes, based on InternViT 300M/6B paired with Qwen2.5 (0.5B, 3B, 32B, 72B) or InternLM2 (1.8B, 7B, 20B)
The 78B model pairs InternViT-6B with Qwen2.5-72B-Instruct and can accomplish a variety of tasks šŸ‘ Try it here: OpenGVLab/InternVL
ariG23498 opened "Update README.md" (#3) about 1 month ago