Great job, Google! Now we need...
#2
by
yousef1727
- opened
For future development, we need embedding models optimized for RAG tasks, starting with something like a gemma-3-270M-embedding.
Expanding the lineup with additional sizes — 270M, 500M, 700M, and beyond — would offer flexibility for different workloads and hardware constraints.
If Gemma had a broader range of sizes and task-specific variants, it could greatly enhance adoption and unlock new use cases.
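To illustrate the retrieval step such a model would serve, here is a minimal sketch of RAG-style retrieval. Note that `embed` below is a toy placeholder standing in for the proposed (hypothetical) gemma-3-270M-embedding model, not a real API:

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model (e.g. the proposed
    # gemma-3-270M-embedding); here, a toy bag-of-letters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Core RAG retrieval: rank documents by embedding similarity to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Gemma model sizes",
    "Embedding models for retrieval",
    "Cooking pasta",
]
print(retrieve("retrieval embeddings", docs, k=1))
```

A dedicated small embedding model would replace the `embed` stub, with the rest of the retrieval loop unchanged; the different proposed sizes (270M, 500M, 700M) would trade embedding quality against latency and memory.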