On Vacation 🏝️
Parvesh Rawal (Parveshiiii)
67 followers · 35 following
AI & ML interests
I love deep neural nets.
Recent Activity
reacted to their post with 🔥 · about 1 hour ago
posted an update · about 1 hour ago

Just did something I’ve been meaning to try for ages. In only 3 hours, on 10 billion+ tokens, I trained a custom BPE + tiktoken-style tokenizer using my new library microtok — and it hits the same token efficiency as Qwen3.

Tokenizers have always felt like black magic to me. We drop them into every LLM project, but actually training one from scratch? That always seemed way too complicated. Turns out it doesn’t have to be.

microtok makes the whole process stupidly simple — literally just 3 lines of code. No heavy setup, no GPU required. I built it on top of the Hugging Face tokenizers library so it stays clean, fast, and actually understandable.

If you’ve ever wanted to look under the hood and build your own optimized vocabulary instead of just copying someone else’s, this is the entry point you’ve been waiting for.

I wrote up the full story, threw in a ready-to-run Colab template, and dropped the trained tokenizer on Hugging Face.

Blog → https://parveshiiii.github.io/blogs/microtok/
Trained tokenizer → https://huggingface.co/Parveshiiii/microtok
GitHub repo → https://github.com/Parveshiiii/microtok
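The post doesn't show the three lines themselves, but since microtok sits on top of the Hugging Face tokenizers library, the underlying training step presumably looks something like this minimal sketch using tokenizers directly. The corpus path, vocabulary size, and special token here are illustrative assumptions, not microtok's actual API or defaults:

```python
# Minimal sketch of a tiktoken-style byte-level BPE training run using the
# Hugging Face `tokenizers` library that microtok is built on. The corpus
# path, vocab size, and special token are illustrative assumptions, not
# microtok's actual defaults.
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

# Byte-level BPE operates on raw bytes, so every input is representable
# and no <unk> token is needed (the same scheme GPT-2 and tiktoken use).
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

# Learn the merge rules from a plain-text corpus; runs on CPU only.
trainer = trainers.BpeTrainer(
    vocab_size=151_000,  # placeholder, roughly in Qwen3's ballpark
    special_tokens=["<|endoftext|>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")

# Sanity check: round-trip a sample string.
ids = tokenizer.encode("Tokenizers are not black magic.").ids
print(len(ids), tokenizer.decode(ids))
```

If the uploaded repo contains a standard tokenizer.json, it should also load straight from the Hub with `Tokenizer.from_pretrained("Parveshiiii/microtok")`; that is an assumption about the repo layout, not something stated in the post.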
updated a dataset · 6 days ago · Org-Exp/mobile-actions
Parveshiiii's datasets (4)
Parveshiiii/Complete-it · Updated Oct 2, 2025 · 190k · 27 · 2
Parveshiiii/AI-vs-Real · Updated Sep 25, 2025 · 14k · 1.01k · 6
Parveshiiii/Embedder · Updated Sep 22, 2025 · 990k · 26 · 2
Parveshiiii/opencode_reasoning_filtered · Updated Jul 8, 2025 · 568k · 67 · 4