lora concepts library

AI & ML interests

None defined yet.

lora-library's activity

zamal 
posted an update 22 days ago
🚀 ftBoost is LIVE – Stop Struggling with Fine-Tuning Data!

Alright folks, if you’re tired of manually crafting fine-tuning datasets, ftBoost is here to do the heavy lifting. One-click, LangChain-Groq-powered data augmentation that scales your training data in OpenAI, Gemini, Mistral, and LLaMA formats—automatically.

🔥 What’s inside?
✅ Smart Augmentations – Paraphrasing, back translation, synonym swapping & synthetic noise.
✅ No more JSONL headaches – Auto-formats everything for OpenAI, Gemini, Mistral & LLaMA.
✅ Custom tuning – Adjust similarity, diversity, and fluency in real-time.
✅ Upload, generate, download – That’s it.

⚡ If you’re fine-tuning LLMs, this will save you hours.

🚀 Try it now: 👉 zamal/Finetune-Boost

🌟 Give us a star on GitHub!

Let me know what you think & how it boosts your workflow! 🔥
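For a sense of what such augmentation plus auto-formatting involves, here is a minimal pure-Python sketch. The synonym table and helper names are invented for illustration; ftBoost itself drives augmentation through LangChain and Groq, which are not shown here.

```python
import json
import random

# Toy synonym table standing in for a real augmentation backend
# (ftBoost uses LangChain + Groq; this table is a made-up illustration).
SYNONYMS = {"quick": ["fast", "rapid"], "answer": ["reply", "response"]}

def synonym_swap(text: str, rng: random.Random) -> str:
    """Replace known words with a randomly chosen synonym."""
    words = text.split()
    return " ".join(rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words)

def to_openai_jsonl(pairs):
    """Format (prompt, completion) pairs as OpenAI chat fine-tuning JSONL lines."""
    lines = []
    for prompt, completion in pairs:
        record = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]}
        lines.append(json.dumps(record))
    return lines

rng = random.Random(0)
augmented = synonym_swap("give a quick answer", rng)
print(augmented)
print(to_openai_jsonl([(augmented, "Sure!")])[0])
```

The same record structure, minus the `messages` wrapper, maps onto the Gemini, Mistral, and LLaMA layouts the tool exports.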
ehristoforu 
posted an update 28 days ago
Introducing our first standalone model – FluentlyLM Prinum

This is the first standalone model from Project Fluently! We worked on it for several months, tried different approaches, and eventually found the optimal one.

General characteristics:
- Model type: Causal language models (QwenForCausalLM, LM Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT

Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64), and Mergekit for SLERP and TIES merges.
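For readers unfamiliar with SLERP, the per-tensor operation a SLERP merge performs can be sketched in plain Python. This is a toy on small vectors, not real checkpoints, and Mergekit's actual implementation handles tensors and edge cases differently.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors:
    interpolate along the arc between them rather than along a straight line."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos = max(-1.0, min(1.0, dot / (norm0 * norm1 + eps)))
    omega = math.acos(cos)
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # t=0 recovers the first vector
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint stays on the unit arc
```

Unlike a plain average, the midpoint keeps the original norm, which is why SLERP is popular for merging normalized weight directions.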

Evaluation:
🏆 12th place on the Open LLM Leaderboard ( open-llm-leaderboard/open_llm_leaderboard) (21.02.2025)

Detailed results and comparisons are presented in Pic. 3.

Links:
- Model: fluently-lm/FluentlyLM-Prinum
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
zamal 
posted an update about 2 months ago
🚀 Try Out RAG Demo! 🚀

A Hugging Face Space where you can compare DeepSeek-R1 vs Llama-3 using Stuff RAG (Retrieval-Augmented Generation)!

🔍 Upload a PDF, ask questions, and see how both models perform in real-time!

Try out now:
zamal/Deepseek-R1-vs-LLama3
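The "stuff" strategy simply concatenates every retrieved chunk into one prompt. A minimal sketch, with toy word-overlap scoring standing in for whatever embedding search the Space actually uses:

```python
# Toy "stuff" RAG: rank chunks, then stuff the top matches into one prompt.
# Word-overlap scoring is an illustrative stand-in for real embedding retrieval.

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question, keep the top k."""
    q = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def stuff_prompt(question: str, chunks: list[str]) -> str:
    """'Stuff' strategy: concatenate all retrieved chunks into a single context."""
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "The PDF describes quarterly revenue growth.",
    "Appendix B lists hardware requirements.",
    "Revenue grew 12% in the third quarter.",
]
print(stuff_prompt("How much did revenue grow?", chunks))
```

The same stuffed prompt can then be sent to both DeepSeek-R1 and Llama-3 to compare their answers side by side.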
zamal 
posted an update 2 months ago
zamal/Multimodal-Chat-PDF

🚀 Introducing Chat PDF Multimodal 💬

Interact with your PDF documents like never before! 🤯
Extract text & images, then ask context-aware questions based on both. Powered by RAG techniques & multimodal LLMs. Perfect for studying, research & more! 📝👀
Try it out now!!!! ✍️

#LlavaNext #MultimodalAI #Transformers
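One plausible way to combine extracted text and images into a single context-aware query is an OpenAI-style content-parts message. This is an illustrative assumption about the payload shape, not the Space's actual LLaVA-NeXT pipeline:

```python
# Sketch: interleave extracted PDF text and images into one multimodal
# message (OpenAI-style content parts; an assumption for illustration only).
import base64

def build_message(question: str, text_chunks: list[str],
                  image_bytes: list[bytes]) -> dict:
    parts = [{"type": "text", "text": "\n".join(text_chunks)}]
    for img in image_bytes:
        b64 = base64.b64encode(img).decode()
        parts.append({"type": "image_url",
                      "image_url": {"url": f"data:image/png;base64,{b64}"}})
    parts.append({"type": "text", "text": f"Question: {question}"})
    return {"role": "user", "content": parts}

msg = build_message("What does Figure 1 show?",
                    ["Page 1: sales overview."], [b"\x89PNG-fake-bytes"])
print(len(msg["content"]))  # text block + one image + the question
```

With both modalities in one message, the model can ground its answer in the figures as well as the surrounding text.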
ehristoforu 
posted an update 3 months ago
✒️ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) with SFT (Supervised Fine-Tuning). It consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 To use the dataset effectively, it is recommended to use only the "instruction," "input," and "output" columns and to train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLMs.

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
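The recommended usage boils down to building Alpaca-style prompts from those three columns. A minimal formatter sketch follows; the templates are the standard Alpaca layout, not something shipped with the dataset itself:

```python
# Standard Alpaca prompt templates, with and without an "input" field.
ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)
ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def format_alpaca(row: dict) -> str:
    """Turn one instruction/input/output row into a single training string."""
    template = ALPACA_WITH_INPUT if row.get("input") else ALPACA_NO_INPUT
    return template.format(**row)

example = {"instruction": "Translate to French.",
           "input": "Good morning", "output": "Bonjour"}
print(format_alpaca(example))
```

A mapping function like this can be applied to each row before tokenization for the 1-3 epochs of SFT the post recommends.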
zamal 
posted an update 5 months ago
🚀 Announcement for the Lovely community! 🚀

Just launched the zamal/DeepSeek-VL-1.3B-Chat on Hugging Face, and it's ready for YOU to explore! 💬🖼️

This full-fledged model is perfect for advanced image-and-text interactions, and it runs on ZeroGPU, so no local GPU is required. DeepSeek-VL-1.3B-Chat typically needs around 8 GB of VRAM and almost 4 GB of storage, but now you can experience it hassle-free right in our Space!

Want something lighter? We’ve also uploaded a 4-bit quantized version (just around 1 GB!), available on my profile. Perfect for those with limited hardware. 🌍🔍

Come try it now and see what this model can do! 🚀✨
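The size figures follow from simple bits-per-weight arithmetic. A rough sketch, ignoring activations, the KV cache, and quantization overhead such as scales and zero-points:

```python
# Back-of-the-envelope weight-memory estimate for a model of a given size.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Gibibytes needed just to store the weights at a given precision."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

fp16 = weight_gb(1.3, 16)  # half-precision checkpoint
int4 = weight_gb(1.3, 4)   # 4-bit quantized version
print(f"fp16 weights: {fp16:.2f} GiB, 4-bit weights: {int4:.2f} GiB")
```

The fp16 estimate (~2.4 GiB) plus runtime overhead is consistent with the ~8 GB VRAM figure, and the 4-bit estimate (~0.6 GiB) plus quantization metadata lands near the "around 1 GB" quoted for the light version.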

zamal 
posted an update 5 months ago
Hello, lovely community! 🌟

Thrilled to announce that the Molmo 7B 4-bit Space ( zamal/Molmo-4bit ) is now live! 🚀 The model is nearly six times smaller with almost no performance loss, and the results will leave you amazed!

It runs on zero GPU, making it incredibly accessible for everyone!

Check it out here and start exploring today!

Happy experimenting! 🎉
zamal 
posted an update 6 months ago
🚀 New Model Release: zamal/Molmo-7B-GPTQ-4bit 🚀

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available to all! It has been heavily quantized, shrinking it to nearly one-sixth of its original size. It now occupies significantly less disk space and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

What you get:
Efficient performance: maintains high accuracy despite heavy quantization.
Reduced size: nearly six times smaller, optimizing storage and memory usage.
Versatile application: ideal for integrating a powerful vision-language model into various projects, particularly multimodal RAG chains.
Check it out!
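To see why 4-bit quantization shrinks a model so much, here is a toy per-tensor quantizer. Real GPTQ works per-group with error-compensating weight updates; this is only a back-of-the-envelope illustration of the storage/accuracy trade-off:

```python
# Toy 4-bit quantization: map floats to integers 0..15 with one scale/offset.
def quantize4(weights: list[float]):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # avoid zero scale for constant tensors
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize4(q, scale, lo):
    """Recover approximate floats from the 4-bit codes."""
    return [v * scale + lo for v in q]

w = [0.02, -0.13, 0.4, 0.07, -0.3]
q, scale, lo = quantize4(w)
w_hat = dequantize4(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, f"max abs error: {max_err:.4f}")
```

Storing 4 bits instead of 16 per weight alone gives a 4x reduction; the roughly six-fold figure in the post presumably measures against the original full checkpoint, where the savings are larger.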

ehristoforu 
posted an update 8 months ago
😏 Hello from Project Fluently Team!

✨ Finally we can share some details about Supple Diffusion. We have been working on it for a long time and only a little remains; we apologize that the work has taken longer than planned.

🛠️ Some technical details. The first version will be Small (Medium, Large, Huge, and possibly Tiny will follow), and it is based on the SD1 architecture: one text encoder, a U-Net, and a VAE.
- Text encoder: a CLIP model (perhaps not CLIP-L/patch14). We retrained CLIP specifically so the model understands very different styles and so prompts can stay as simple as possible.
- U-Net: built in a rather involved way. We first trained separate U-Nets on different types of data, merged them using several methods, then ran DPO and SPO training, and finally did further training to address the remaining shortcomings; details will come later.
- VAE: left the same as in the SD1 architecture.

🙌 Compatibility. Another goal of the Supple series is full compatibility with Automatic1111 and ComfyUI from the release stage: the model is fully supported by these interfaces and by the diffusers library and requires no adaptation. Your usual sampling methods, such as DPM++ 2M Karras and DPM++ SDE, are also compatible.

🧐 No demo images today (there wasn’t much time). Final work on the model is underway, and we are already preparing to develop the Medium version; the Small version will most likely be released in mid-August or earlier.

😻 Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!
ehristoforu 
posted an update 9 months ago
🤗 Hello from the Project Fluently team!

🥏 We are ready to announce a new series of Supple Diffusion models, a new generation of diffusion models (about 1-2 weeks left before release).

🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goal.

🧐 How will our models be better than others? Firstly, we worked on the CLIP models: they now understand your requests better, so prompting becomes easier. Secondly, we trained the models to a higher quality than any of our previous ones. Thirdly, you won’t have to keep 20 models on your disk; 4-6 will be enough.

🗺️ Roadmap:
1. Create Supple Diffusion Small
2. Create Supple Diffusion Medium
3. Create Supple Diffusion Large

🎆 Our models are universal: they handle realism, cartoons, anime, and caricatures alike.

💖 The project really needs your support and your recommendations and reviews, please do not hesitate to write comments under this post, thank you!

🖼️ Below are demo images made with the pre-release version of Supple Diffusion Small.
ehristoforu 
posted an update 10 months ago
🦾 Hello, I present Visionix Alpha, a new hyper-realistic model based on SDXL. The main difference from all existing realism models is the attention to detail: I improved not only hyperrealism but also overall aesthetics, anatomy, and the beauty of nature, and the model also produces the widest variety of faces. It is suitable not only for realistic photos but also for generating 2.5D anime, realistic cartoons, and more.

🤗 Model on HF: ehristoforu/Visionix-alpha
🥏 Model on CivitAI: https://civitai.com/models/505719
🪄 Playground (with base and inpaint model): ehristoforu/Visionix-Playground

✏️ Inpaint version on HF: ehristoforu/Visionix-alpha-inpainting
🖋️ Inpaint version on CivitAI: https://civitai.com/models/505719?modelVersionId=563519
😐 Hello, I have a couple of interesting things to share. First, I will soon release several pretty cool SDXL models. Second, a bit of sad news: after long-term tests of training and merging XL models, I realized that XL will not improve much further. The architecture will not let us keep pushing realism and other interesting things into it; the community has already brought XL close to the maximum achievable on this architecture.
ehristoforu 
posted an update 10 months ago
🤗 SDXL Flash

✨️ Introducing SDXL Flash (Mini), a new fast model. We found that existing fast XL models trade quality for speed, so we made a fast model of our own: not as fast as LCM, Turbo, Lightning, or Hyper, but higher in quality. Below you will see the study across steps and CFG values.

🚀 Features of the mini model:
It weighs less and consumes less video memory and other resources, while the quality has barely dropped.

👑 Despite being faster than a regular model, it beats the best modern models, such as JuggernautXL X and FluentlyXL v4, in quality.

SDXL Flash: sd-community/sdxl-flash
SDXL Flash Mini: sd-community/sdxl-flash-mini
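The CFG axis in that study controls classifier-free guidance, which at every denoising step blends the unconditional and prompt-conditioned noise predictions. The core update on toy numbers (the vectors here are made-up stand-ins for real noise tensors):

```python
# Classifier-free guidance: push the prediction away from the unconditional
# estimate and toward the prompt-conditioned one, scaled by the cfg value.
def apply_cfg(uncond: list[float], cond: list[float], cfg: float) -> list[float]:
    """noise = uncond + cfg * (cond - uncond); cfg=1 reduces to the conditional prediction."""
    return [u + cfg * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.1, -0.2]
cond = [0.3, 0.1]
print(apply_cfg(uncond, cond, 1.0))  # matches the conditional prediction
print(apply_cfg(uncond, cond, 7.0))  # prompt influence amplified
```

Fast models like Flash typically need lower CFG values than standard SDXL, which is exactly the kind of sweet spot a steps/CFG study maps out.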