Victor Nogueira
Felladrin
AI & ML interests
Models to run in the web browser
Recent Activity
liked a model, about 9 hours ago: MiniLLM/MiniPLM-Qwen-200M
liked a model, about 9 hours ago: MiniLLM/MiniLLM-gpt2-340M
updated a collection, about 9 hours ago: Foundation Text-Generation Models Below 360M Parameters
Felladrin's activity

reacted to JingzeShi's post with 🚀, about 17 hours ago

reacted to fantos's post with 🚀, 10 days ago
😊 Panorama X3 Image: an innovative system that leverages a Stable Diffusion XL-based tiling pipeline to generate unique, vibrant panoramic images by applying different prompts to the left, center, and right sections of a single image.
Key Features & Strengths
Multi-Area Prompt Support
Input distinct descriptions for the left, center, and right regions (e.g., "dense forest" for the left, "calm lake" for the center, and "majestic mountains" for the right). This allows the system to seamlessly blend multiple scenes into one stunning panoramic image. 🌄
Automatic Korean-to-English Translation
If your prompt contains Korean text, it will be automatically translated into English before image generation.
(For example, "안개 낀 산" becomes "Misty mountain") 🔄
This feature ensures that you can effortlessly use both English and Korean prompts.
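A translation step like this implies some form of language detection first. As a purely illustrative sketch (the Space's actual detection/translation pipeline isn't shown, so this function and its behavior are assumptions), detecting Hangul via Unicode ranges might look like:

```python
def contains_korean(text: str) -> bool:
    """Return True if the text contains Hangul syllables or jamo."""
    return any(
        '\uac00' <= ch <= '\ud7a3'     # precomposed Hangul syllables
        or '\u1100' <= ch <= '\u11ff'  # Hangul jamo
        for ch in text
    )

contains_korean("안개 낀 산")       # Korean prompt -> would be sent to translation
contains_korean("Misty mountain")  # English prompt -> passed through unchanged
```

Prompts flagged this way would then be routed through a translation model before reaching the image pipeline.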
Advanced Tiling Technology
The project uses a sophisticated tiling approach that manages overlapping regions to produce natural transitions and high-resolution panoramic images.
This isn't just a simple image merge—it's a refined process that delivers exceptional quality and detail. 🖼️
User-Friendly Interface
Enjoy a modern, visually appealing UI featuring a gradient background, semi-transparent containers, and smooth animation effects.
The prompt input fields clearly indicate that both English and Korean entries are allowed with the label (English/Korean allowed), making it accessible for everyone. 🎨
fantos/Panorama
Panorama X3 Image is the perfect tool for anyone looking to visually express creative ideas. Try it out now by experimenting with various prompts and create your very own breathtaking panoramic image! 🚀
Thank you! 🙏
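The overlap handling described above can be pictured as a crossfade between adjacent tiles. Here is a toy sketch on 1-D pixel rows using plain Python lists; the real pipeline blends 2-D latents or images, and the linear blend curve here is only an assumption:

```python
def blend_overlap(left_tile, right_tile, overlap):
    """Crossfade two 1-D pixel rows that share `overlap` columns."""
    assert len(left_tile) >= overlap and len(right_tile) >= overlap
    blended = []
    for i in range(overlap):
        alpha = (i + 1) / (overlap + 1)  # weight ramps toward the right tile
        blended.append((1 - alpha) * left_tile[len(left_tile) - overlap + i]
                       + alpha * right_tile[i])
    # non-overlapping body of each tile, with the crossfaded seam in between
    return left_tile[:-overlap] + blended + right_tile[overlap:]

row = blend_overlap([10.0] * 8, [20.0] * 8, overlap=4)
```

With a flat "left" value of 10 and "right" value of 20, the seam ramps smoothly between them instead of jumping, which is the difference between a simple merge and a tiled blend.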

reacted to Xenova's post with 🚀🔥, 15 days ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️
Generate 10 seconds of speech in ~1 second for $0.
What will you build? 🔥
webml-community/kokoro-webgpu
The most difficult part was getting the model running in the first place, but the next steps are simple:
✂️ Implement sentence splitting, allowing for streamed responses
🌍 Multilingual support (only phonemization left)
Who wants to help?
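For the sentence-splitting step, a minimal regex-based splitter might look like the sketch below (shown in Python for illustration; the actual implementation would live in JavaScript alongside kokoro-webgpu and would likely need extra handling for abbreviations and decimal numbers):

```python
import re

def split_sentences(text: str) -> list[str]:
    """Naively split text on ., !, or ? followed by whitespace."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

split_sentences("Hello world! This is a test. What will you build?")
# -> ['Hello world!', 'This is a test.', 'What will you build?']
```

Each sentence can then be synthesized independently and its audio streamed as soon as it is ready, instead of waiting for the whole text.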
This update is massive!! 🙌
I'd love it if we could also filter Spaces so that we could list only the ones in the Running state.

reacted to Tonic's post with 🔥, 23 days ago
🙋🏻♂️ Hey there, folks!
Our team made a game during the @mistral-game-jam and we're trying to win the community award!
Try our game out and drop us a ❤️ like to vote for us!
Mistral-AI-Game-Jam/TextToSurvive
Hope you like it!

reacted to AdinaY's post with 🚀, 26 days ago
🔥So many exciting releases coming from the Chinese community this month!
zh-ai-community/2025-january-6786b054f492fb223591269e
LLMs:
✨ Qwen2.5-1M by Alibaba
Qwen/qwen25-1m-679325716327ec07860530ba
✨ InternLM3-8B-Instruct by Shanghai AI Lab
internlm/internlm3-8b-instruct
✨ MiniMax-Text-01 by MiniMax AI
MiniMaxAI/MiniMax-Text-01
✨ RWKV-7 by BlinkDL -- RNN + Transformer 👀
BlinkDL/rwkv-7-world
✨ DeepSeek-R1 by DeepSeek -- THE ONE 🙌
https://huggingface.co/deepseek-ai
✨ Baichuan-M1-14B by Baichuan - Medical 🩺
baichuan-inc/Baichuan-M1-14B-Base
✨ Qwen2.5-Math-PRM by Alibaba - Math 🔢
Qwen/Qwen2.5-Math-PRM-7B
Code:
✨ Trae by ByteDance
https://trae.ai
TTS:
✨ T2A-01-HD by MiniMax AI
https://hailuo.ai/audio
✨ LLaSA by HKUST Audio
HKUSTAudio/Llasa-3B
MLLM:
✨ Kimi k1.5 by Moonshot AI
https://kimi.ai
✨ MiniCPM-o-2_6 by OpenBMB
openbmb/MiniCPM-o-2_6
✨ Sa2VA-4B by ByteDance
ByteDance/Sa2VA-4B
✨ VideoLLaMA 3 by Alibaba DAMO
DAMO-NLP-SG/videollama3-678cdda9281a0e32fe79af15
✨ LLaVA-Mini by Chinese Academy of Sciences
ICTNLP/llava-mini-llama-3.1-8b
✨ Hunyuan-7B by Tencent
tencent/Hunyuan-7B-Instruct
✨ Hunyuan 3D 2.0 by Tencent
tencent/Hunyuan3D-2
✨ MiniMax-VL-01 by MiniMax AI - a non-Transformer-based VLM 👀
MiniMaxAI/MiniMax-VL-01
Agent:
✨ UI-TARS by ByteDance
bytedance-research/UI-TARS-7B-SFT
✨ GLM-PC by Zhipu AI
https://cogagent.aminer.cn
Dataset:
✨ Fineweb-Edu-Chinese by Opencsg
opencsg/Fineweb-Edu-Chinese-V2.1
✨ Multimodal_textbook by Alibaba
DAMO-NLP-SG/multimodal_textbook
✨ MME-Finance by Hithink AI

reacted to ngxson's post with 🚀, about 1 month ago
I made this small tool that can be useful for debugging Ollama chat templates:
ngxson/ollama_template_test
CC @bartowski you may need this ;-)
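For context on what such a tool checks: a chat template turns a message list into the exact prompt string the model sees, and subtle differences (a missing newline, mishandled system prompt) change outputs. A minimal ChatML-style renderer, purely as a sketch (Ollama's real templates are Go text/template files, and the special tokens below are just one common convention):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render a message list in a ChatML-like prompt format."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    if add_generation_prompt:
        # open an assistant turn for the model to complete
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_chatml([{"role": "user", "content": "Hi"}])
```

A template-testing tool lets you compare renderings like this against what the runtime actually produces.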

reacted to tomaarsen's post with ❤️, about 2 months ago
That didn't take long! Nomic AI has finetuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!
Details:
🤖 Based on ModernBERT-base with 149M parameters.
📊 Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
🏎️ Immediate FA2 and unpacking support for super efficient inference.
🪆 Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
➡️ Maximum sequence length of 8192 tokens!
2️⃣ Trained in 2 stages: unsupervised contrastive data -> high quality labeled datasets.
➕ Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
🏛️ Apache 2.0 licensed: fully commercially permissible
Try it out here: nomic-ai/modernbert-embed-base
Very nice work by Zach Nussbaum and colleagues at Nomic AI.
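The Matryoshka property means you can slice an embedding down to the smaller dimensionality and re-normalize it. Conceptually (pure-Python sketch of the idea only; with Sentence Transformers you would instead pass a truncation dimension when loading the model):

```python
import math

def truncate_embedding(vec, dim):
    """Matryoshka-style truncation: keep the first `dim` values, re-normalize."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

emb = truncate_embedding([0.5, 0.5, 0.5, 0.5, 0.1, 0.1], dim=4)
```

The truncated vector stays unit-length, so cosine similarity still works at the smaller size, trading a little quality for 3x less storage (768 -> 256).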

reacted to MoritzLaurer's post with 👍, 2 months ago
Quite excited by the ModernBERT release! Small at 0.15B/0.4B parameters, 2T of modern pre-training data and a modern tokenizer, code included, an 8k context window: a great, efficient model for embeddings & classification!
This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTav3 from 2021 :D
Congrats @answerdotai , @LightOnIO and collaborators like @tomaarsen !
Paper and models here 👇 https://huggingface.co/collections/answerdotai/modernbert-67627ad707a4acbf33c41deb

reacted to s3nh's post with 🤗, 2 months ago
Welcome back,
Small Language Model enthusiasts and GPU-poor OSS enjoyers, let's connect.
I just created an organization whose main goal is to have fun with smaller models that are tunable on consumer-range GPUs. Feel free to join and let's have some fun, much love ;3
https://huggingface.co/SmolTuners

reacted to bartowski's post with 👍, 2 months ago
Looks like Q4_0_N_M file types are going away
Before you panic: there's a new "preferred" method, online (I prefer the term on-the-fly) repacking. If you download Q4_0 and your setup can benefit from repacking the weights into interleaved rows (what Q4_0_4_4 was doing), it will do that automatically and give you similar performance (minor losses, I think, due to using intrinsics instead of assembly, but intrinsics are more maintainable).
You can see the reference PR here:
https://github.com/ggerganov/llama.cpp/pull/10446
So if you update your llama.cpp past that point, you won't be able to run Q4_0_4_4 (unless they add backwards compatibility back), but Q4_0 should run at the same speeds (though it may currently be bugged on some platforms).
As such, I'll stop making those newer model formats soon, probably by the end of this week unless something changes, but you should be safe to download Q4_0 quants and use those!
Also, IQ4_NL supports repacking, though not in as many shapes yet, but it should get a respectable speedup on ARM chips. The PR for that can be found here: https://github.com/ggerganov/llama.cpp/pull/10541
Remember, these are not meant for Apple silicon, since those use the GPU and don't benefit from the repacking of weights.
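Conceptually, repacking into interleaved rows reorders small groups of rows column-by-column so a SIMD load pulls matching elements from several rows at once. A toy sketch of that layout idea (plain integers standing in for packed 4-bit blocks; the real llama.cpp layout is considerably more involved):

```python
def repack_interleaved(rows, group=4):
    """Interleave each `group` of rows column-by-column (Q4_0_4_4-style idea)."""
    assert rows and len(rows) % group == 0
    packed = []
    for base in range(0, len(rows), group):
        block = rows[base:base + group]
        interleaved = []
        for col in range(len(block[0])):
            for r in block:               # same column from each row, adjacent
                interleaved.append(r[col])
        packed.append(interleaved)
    return packed

packed = repack_interleaved([[1, 2], [3, 4], [5, 6], [7, 8]])
```

Doing this at load time ("on the fly") is why a plain Q4_0 download can still hit Q4_0_4_4-like speeds on ARM.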

reacted to thomwolf's post with 🚀, 2 months ago
We are proud to announce HuggingFaceFW/fineweb-2: a sparkling update to HuggingFaceFW/fineweb with 1000s of 🗣️ languages.
We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.
🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.
The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.
We will very soon announce a big community project, and we are working on a 📝 blog post walking you through the entire dataset creation process. Stay tuned!
In the meantime, come ask us questions in our chat place: HuggingFaceFW/discussion
H/t @guipenedo @hynky @lvwerra as well as @vsabolcec, Bettina Messmer, @negar-foroutan and @mjaggi

reacted to ginipick's post with 🚀, 2 months ago
# 🎨 FLUX LLAMA: Turn Your PC into a Design Studio
Hello! Today we're introducing FLUX LLAMA, an innovative AI image generation tool that ranked 2nd in Hugging Face's weekly downloads. Now you can create professional-grade images with clear text right from your PC, without the need for high-performance servers! 😊
## ✨ What It Can Do
- 🔍 **Crystal Clear Text**: Type "Welcome" and see it appear crystal clear in your image
- 🖥️ **Local Processing**: Run it on your PC with just an RTX 3060 (8x lighter with 4-bit quantization)
- ⚡ **Quick Generation**: Create professional marketing images in 5 minutes
- 🌏 **Multilingual Support**: Perfect results in any language
- 🎯 **Real-time Editing**: Instant image modifications and regeneration
## 🛠 Core Technology
- Double Stream + Single Stream architecture for perfect text processing
- Powerful embedding combination of T5-XXL and CLIP
- 4-bit quantization optimization (3GB → 375MB)
- Fast processing with local GPU acceleration
- Automatic language translation pipeline
## 💡 Use Cases
- SNS marketing image creation
- Product promotion banner generation
- Event poster design
- Social media content creation
- Product description image generation
No more hiring designers or learning complex design tools! Simply input what you want, and AI will create professional-grade results.
Easy to start, professional results - that's the magic of FLUX LLAMA! 🌟
Start creating now! Share your experience with us 😊
#FLUXLLAMA #AIImageGeneration #MarketingTools #DesignAI #HuggingFace
PS: FLUX LLAMA is an innovative AI image generation tool developed by GiniPick, optimized especially for creating images with text. Plus, it boasts a lightweight model that runs on standard PCs!
ginipick/FLUXllama
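The 3GB -> 375MB reduction is roughly what you'd expect from storing 4-bit integers plus a scale instead of full-precision floats. A minimal absmax round-trip sketch of the idea (illustrative only; production schemes such as NF4 use block-wise scales and non-uniform quantization levels):

```python
def quantize_4bit(weights):
    """Absmax quantization to the signed 4-bit range [-7, 7] plus one scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # fall back for all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from 4-bit integers and the scale."""
    return [v * scale for v in q]

q, scale = quantize_4bit([0.7, -0.35, 0.0, 0.14])
approx = dequantize_4bit(q, scale)  # close to the original values
```

Each weight now needs 4 bits instead of 32, which is where the roughly 8x size reduction comes from.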

reacted to garrethlee's post with 🧠, 3 months ago
The latest o1 model from OpenAI is still unable to correctly answer whether 9.11 > 9.9 🤔
A possible explanation? Tokenization - and our latest work investigates how it affects a model's ability to do math!
In this blog post, we discuss:
🔢 The different ways numbers are tokenized in modern LLMs
🧪 Our detailed approach in comparing these various methods
🥪 How we got a free boost in arithmetic performance by adding a few lines of code to the base Llama 3 tokenizer
👑 and a definitive, best tokenization method for math in LLMs!
Check out our work here: huggingface/number-tokenization-blog
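To make the comparison concrete, here is an illustrative sketch of two number-tokenization strategies of the kind such work compares: single-digit splitting versus right-to-left three-digit chunking (chunking from the right keeps place values aligned across numbers of different lengths; this is a simplification, not the blog's actual tokenizer code):

```python
def tokenize_digits(num_str, chunk=1):
    """Split a digit string into fixed-size chunks, working right-to-left."""
    out = []
    i = len(num_str)
    while i > 0:
        out.append(num_str[max(0, i - chunk):i])
        i -= chunk
    return list(reversed(out))

single = tokenize_digits("1234", chunk=1)  # ['1', '2', '3', '4']
triple = tokenize_digits("1234", chunk=3)  # ['1', '234']
```

With left-to-right chunking, "1234" would split as '123', '4', so the same trailing digits get different tokens in "234" vs "1234"; right-to-left chunking avoids that inconsistency.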

reacted to dylanebert's post with 🚀, 3 months ago
Generate meshes with AI locally in Blender
📢 New open-source release
meshgen, a local Blender integration of LLaMA-Mesh, is open source and available now 🤗
get started here: https://github.com/huggingface/meshgen

reacted to andito's post with 🔥❤️, 3 months ago
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.
- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL! 🤯
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook! 🚀
- SmolVLM can be fine-tuned in a Google Colab! Or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models on video benchmarks, despite not even being trained on videos!
Check out more!
Demo: HuggingFaceTB/SmolVLM
Blog: https://huggingface.co/blog/smolvlm
Model: HuggingFaceTB/SmolVLM-Instruct
Fine-tuning script: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb

reacted to clem's post with 🚀, 3 months ago
I've been in Brazil for 10 days now 🇧🇷🇧🇷🇧🇷
I've been surprised by the gap between the massive number of people interested in AI (ChatGPT adoption is crazy here) and the relatively low number of real AI builders, aka people and companies building their own AI models, datasets, and apps.
Lots of effort is needed across the world for everyone to participate in, control, and benefit from this foundational technology, starting with open-source & multilingual AI, more access to GPUs, and AI-builder training for all!

reacted to cfahlgren1's post with ❤️, 3 months ago
observers 🔭 - automatically log all OpenAI-compatible requests to a dataset 💽
• supports any OpenAI-compatible endpoint 💪
• supports DuckDB, Hugging Face Datasets, and Argilla as stores
> pip install observers
No complex framework. Just a few lines of code to start sending your traces somewhere. Let us know what you think! @davidberenstein1957 and I will continue iterating!
Here's an example dataset that was logged to Hugging Face from Ollama: cfahlgren1/llama-3.1-awesome-chatgpt-prompts
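The core idea can be sketched in a few lines: wrap the completion call and record each request/response pair. This is a hypothetical stand-in, not observers' actual API (which hooks into the OpenAI client and the DuckDB/Datasets/Argilla stores mentioned above):

```python
import functools
import time

def traced(log):
    """Wrap a chat-completion-style callable and append a trace record per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            start = time.time()
            response = fn(**kwargs)
            log.append({
                "model": kwargs.get("model"),
                "messages": kwargs.get("messages"),
                "response": response,
                "latency_s": time.time() - start,
            })
            return response
        return wrapper
    return decorator

traces = []

@traced(traces)
def fake_completion(**kwargs):
    # stand-in for an OpenAI-compatible client call
    return "Hello!"

fake_completion(model="llama-3.1", messages=[{"role": "user", "content": "Hi"}])
```

In the real library, records like these would flow to your chosen store instead of an in-memory list.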