The Hydra Project

Activity Feed

AI & ML interests

Powerful MoEs and merges for language models.

hydra-project's activity

Tonic
posted an update 19 days ago
πŸ™‹πŸ»β€β™‚οΈHey there folks,

Did you know that you can use ModernBERT to detect model hallucinations ?

Check out the Demo : Tonic/hallucination-test

See here for Medical Context Demo : MultiTransformer/tonic-discharge-guard

check out the model from KRLabs : KRLabsOrg/lettucedect-large-modernbert-en-v1

and the library they kindly open sourced for it : https://github.com/KRLabsOrg/LettuceDetect

πŸ‘†πŸ»if you like this topic please contribute code upstream πŸš€

Tonic
posted an update 20 days ago
Powered by KRLabsOrg/lettucedect-large-modernbert-en-v1 from KRLabsOrg.

Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!

### Model Details
- **Model Name**: [lettucedect-large-modernbert-en-v1](https://huggingface.co/KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](https://huggingface.co/KRLabsOrg)
- **Github**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RAGTruth](https://huggingface.co/datasets/wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates average confidence across detected spans.

LettuceDetect excels at processing long documents to determine if an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
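For a quick sense of how this looks in code, here is a minimal sketch based on my reading of the LettuceDetect README; treat the exact `HallucinationDetector` import and arguments as assumptions and check the repo linked above for the current API:

```python
# pip install lettucedetect
# Sketch only: the import path and predict() signature follow the LettuceDetect
# README as I understand it; verify against the repo before relying on it.
from lettucedetect.models.inference import HallucinationDetector

# The large ModernBERT-based detector with 8192-token context support.
detector = HallucinationDetector(
    method="transformer",
    model_path="KRLabsOrg/lettucedect-large-modernbert-en-v1",
)

context = [
    "France is a country in Europe. The capital of France is Paris. "
    "The population of France is 67 million."
]
question = "What is the capital of France, and what is its population?"
answer = "The capital of France is Paris. The population of France is 69 million."

# Returns hallucinated spans with character offsets and confidence scores.
spans = detector.predict(
    context=context, question=question, answer=answer, output_format="spans"
)
print(spans)
```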
Locutusque
posted an update 28 days ago
🎉 Exciting news, everyone! I've just released **Thespis-Llama-3.1-8B**, a new language model designed for enhanced roleplaying! ✨

It's built on Llama-3.1 and fine-tuned with a focus on Theory of Mind reasoning to create more believable and engaging characters. It even learned a few tricks on its own, like adding in-character thought processes! 🧠

Check it out here: Locutusque/Thespis-Llama-3.1-8B

Give it a try and let me know what you think! I'm especially interested in feedback on how well the characters stay in role and if the responses feel natural. Looking forward to seeing what amazing stories you create! ✍️
Tonic
posted an update about 2 months ago
πŸ™‹πŸ»β€β™‚οΈhey there folks ,

Goedel's Theorem Prover is now being demo'ed on huggingface : Tonic/Math

give it a try !
Tonic
posted an update about 2 months ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

our team made a game during the @mistral-game-jam and we're trying to win the community award !

try our game out and drop us a ❀️ like basically to vote for us !

Mistral-AI-Game-Jam/TextToSurvive

hope you like it !
Severian
posted an update 2 months ago
Computational Model for Symbolic Representations: An Interaction Framework for Human-AI Collaboration

Hey everyone. I need your help to see whether this concept, its scientific logic, and testing with prompts can invalidate or validate it. My goal isn't to make any bold statements or claims about AI; I just really want to know if I've stumbled upon something that can be useful in AI interactions. Here's my proposal in a nutshell:

The Computational Model for Symbolic Representations Framework introduces a method for enhancing human-AI collaboration by assigning user-defined symbolic representations (glyphs) to guide interactions with computational models. This interaction and syntax is called Glyph Code-Prompting. Glyphs function as conceptual tags or anchors, representing abstract ideas, storytelling elements, or domains of focus (e.g., pacing, character development, thematic resonance). Users can steer the AI's focus within specific conceptual domains by using these symbols, creating a shared framework for dynamic collaboration. Glyphs do not alter the underlying model itself; they operate at the level of the prompt and shared context.

The Core Point: Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI interactions by serving as contextual anchors that guide the AI's focus. This enhances the AI's ability to generate more nuanced and contextually appropriate responses. For instance, a symbol like ! can carry multidimensional semantic meaning and connections, demonstrating the practical value of glyphs in conveying complex intentions efficiently.
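As a concrete illustration, here is a small, hypothetical sketch of what a glyph legend embedded in a system prompt might look like; the specific glyphs and wording are my own examples, not a canonical part of the framework:

```python
# Hypothetical glyph assignments for illustration only.
GLYPHS = {
    "⏳": "pacing: control how quickly events unfold",
    "🜁": "character development: interior motivation and change",
    "♒": "thematic resonance: tie scenes back to the central theme",
}

def build_system_prompt(glyphs: dict) -> str:
    """Compose a system prompt that defines each glyph as a contextual anchor."""
    legend = "\n".join(f"{glyph} = {meaning}" for glyph, meaning in glyphs.items())
    return (
        "You and the user share the following symbolic anchors. When a glyph "
        "appears in a message, shift your focus toward that conceptual domain:\n"
        + legend
    )

# The user can then steer focus with glyphs inline:
user_message = "♒ Revise this scene so the storm mirrors the protagonist's doubt. ⏳ Keep the pacing slow."
print(build_system_prompt(GLYPHS))
print(user_message)
```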

Link to my full initial overview and sharing: https://huggingface.co/blog/Severian/computational-model-for-symbolic-representations

Try out the HF Assistant Version: https://hf.co/chat/assistant/678cfe9655026c306f0a4dab
Tonic
posted an update 2 months ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

Facebook AI just released JASCO models that make music stems .

you can try it out here : Tonic/audiocraft

hope you like it
Tonic
posted an update 2 months ago
πŸ™‹πŸ»β€β™‚οΈHey there folks , Open LLM Europe just released Lucie 7B-Instruct model , a billingual instruct model trained on open data ! You can check out my unofficial demo here while we wait for the official inference api from the group : Tonic/Lucie-7B hope you like it πŸš€
Severian
posted an update 2 months ago
🌱 Potential Made Simple: Free Life System/Productivity App Based on Rhythm of Existence. No BS. No Catch. Just want to cut through the noise and help.

The Origin Story

Inspired by Rob Dyrdek's "Rhythm of Existence" philosophy, this system has been expanded into a comprehensive life management tool featuring habit tracking, journaling, life statistics, and more. While I support entrepreneurs creating premium productivity apps, I believe self-improvement should never have financial barriers. That's why this system is open source and free: no paywalls, premium features, or gatekeeping. Anyone can use it to start optimizing their life, ensuring accessibility for all.

How to Get Started

Two ways to access the system:

HuggingFace Version (Recommended)
- Visit Severian/Potential-Made-Simple
- Create a free HuggingFace account if needed.
- Duplicate the space to create your private version.
- Pro tip: Save it as a PWA for offline mobile use.

Google Sheets Version
- Ideal for spreadsheet users or those avoiding new accounts.
- Access it here: https://docs.google.com/spreadsheets/d/1O2R0TCp0t27VZJuvkrz_gMJAl-nkwqeVyL3i6pN7aCo/edit?usp=sharing
- Save a copy and start tracking.

Features Beyond ROE

- Habit tracking
- Daily journaling with prompts
- Life statistics and visualizations
- Task management
- Meal tracking
- Progress metrics
- Historical data analysis
- And more!

Supporting the Project (Optional)

This system is free and always will be. If you find value in it, you can support my work at https://www.ko-fi.com/severian42. Contributions are entirely optional and don't unlock extra features; they're simply a way to say thanks.

My mission is to help as many people as possible optimize their lives and reach their full potential. Remember, self-improvement doesn’t have to come with a high price tag.
Severian
posted an update 3 months ago
Interesting Solution to the Problem of Misguided Attention

So I've been fascinated by the problem of Misguided Attention for a few weeks. I am trying to build an inference algorithm to help LLMs address that issue, but in the process I found a cool short-term fix I call "Mindful Attention" using just prompt engineering.

Have you ever thought about how our brains filter reality through layers of past experiences, concepts, and mental images? For example, when you look at an oak tree, are you truly seeing that oak tree in all its unique details, or are you overlaying it with a generalized idea of "oak tree"? This phenomenon inspired the new approach.

LLMs often fall into a similar trap, hence the Misguided Attention problem. They process input not as it’s uniquely presented but through patterns and templates they’ve seen before. This leads to responses that can feel "off," like missing the point of a carefully crafted prompt or defaulting to familiar but irrelevant solutions.

I wanted to address this head-on by encouraging LLMs to slow down, focus, and engage directly with the input, free of assumptions. This is the core of the Mindful Attention Directive, a prompt designed to steer models away from over-generalization and back into the moment.

You can read more about the broader issue here: https://github.com/cpldcpu/MisguidedAttention

And if you want to try this mindful approach in action, check out the LLM I’ve set up for testing: https://hf.co/chat/assistant/677e7ebcb0f26b87340f032e. It works about 80% of the time to counteract these issues, and the results are pretty cool.

I'll add the Gist with the full prompt. I admit, it is quite verbose but it's the most effective one I have landed on yet. I am working on a smaller version that can be appended to any System Prompt to harness the Mindful Attention. Feel free to experiment to find a better version for the community!

Here is the Gist: https://gist.github.com/severian42/6dd96a94e546a38642278aeb4537cfb3
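As a rough illustration of the "append to any system prompt" idea mentioned above, here is a hypothetical sketch; the short directive below is a placeholder of my own, not the full prompt from the Gist:

```python
# Placeholder directive: the real, much longer prompt lives in the Gist linked above.
MINDFUL_ATTENTION_DIRECTIVE = (
    "Before answering, restate the request exactly as written, note any way it "
    "differs from familiar puzzles or templates you may be pattern-matching to, "
    "and then answer only the question that was actually asked."
)

def with_mindful_attention(system_prompt: str) -> str:
    """Append the mindful-attention directive to an existing system prompt."""
    return f"{system_prompt.rstrip()}\n\n{MINDFUL_ATTENTION_DIRECTIVE}"

messages = [
    {"role": "system", "content": with_mindful_attention("You are a helpful assistant.")},
    {"role": "user", "content": "A dead cat is placed in a box with a radioactive isotope. Is the cat alive when the box is opened?"},
]
print(messages[0]["content"])
```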
Tonic
posted an update 3 months ago
Microsoft just released Phi-4; check it out here: Tonic/Phi-4

Hope you like it :-)
Severian
posted an update 5 months ago
Early Morning Before Work Project:

🌌 Introducing Cascade of Semantically Integrated Layers (CaSIL): A Humorously Over-Engineered Algorithm That Actually… Works 🤷‍♂️

Let me introduce CaSIL – the Cascade of Semantically Integrated Layers. Imagine giving a single question the level of introspection typically reserved for philosophical debates or maybe therapy. In short, CaSIL is a pure Python reasoning algorithm that, in a series of semantically rich layers, takes any input and rebuilds it into a nuanced response that’s (surprisingly) meaningful to a human.

I've been experimenting with various reasoning and agent approaches lately and decided to contribute my own quirky take on layered processing. It's built without agent frameworks, just good ol' Python and math, and it plays nicely with any LLM. The result? A transformation from simple responses to deeper, interconnected insights. Here's a quick peek at the steps:

✨ How CaSIL Works:

Initial Understanding: The first layer captures the basic concepts in your input, just as a warm-up.

Relationship Analysis: A lightweight knowledge graph (because why not?) maps out related ideas and builds interconnections.

Context Integration: Adds historical or contextual knowledge, bringing a bit of depth and relevance.

Response Synthesis: Pieces it all together, aiming to produce a response that feels more like a conversation than an outdated search result.

Does it work? Yes! And in record time, too. Admittedly, the code is rough: two days of intense coding with some friendly help from Claude. The beauty of CaSIL is its simplicity and versatility; it's a pure algorithm without complex dependencies, making it easy to integrate into your own LLM setups.

🔗 Explore the repo here: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers

📜 Example outputs: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers/blob/main/examples.md
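To make the layer flow concrete, here is a framework-free sketch of the cascade described above; the function names and prompts are illustrative placeholders, not the repo's actual code:

```python
# Minimal sketch of a CaSIL-style cascade. `call_llm` stands in for any
# chat-completion client that maps a prompt string to a response string.
from typing import Callable

def cascade(user_input: str, call_llm: Callable[[str], str]) -> str:
    # Layer 1 - Initial Understanding: capture the basic concepts in the input.
    concepts = call_llm(f"List the core concepts present in: {user_input}")

    # Layer 2 - Relationship Analysis: map how those concepts interconnect.
    relationships = call_llm(
        f"Describe how these concepts relate to one another:\n{concepts}"
    )

    # Layer 3 - Context Integration: add historical or contextual knowledge.
    context = call_llm(
        f"Add relevant background context to this concept map:\n{relationships}"
    )

    # Layer 4 - Response Synthesis: piece everything together into the answer.
    return call_llm(
        "Using the concepts, relationships, and context below, answer the original "
        f"question.\n\nQuestion: {user_input}\n\nConcepts: {concepts}\n"
        f"Relationships: {relationships}\nContext: {context}"
    )
```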
Tonic
posted an update 5 months ago
πŸ™‹πŸ»β€β™‚οΈhey there folks,

periodic reminder : if you are experiencing ⚠️500 errors ⚠️ or ⚠️ abnormal spaces behavior on load or launch ⚠️

we have a thread πŸ‘‰πŸ» https://discord.com/channels/879548962464493619/1295847667515129877

if you can record the problem and share it there , or on the forums in your own post , please dont be shy because i'm not sure but i do think it helps πŸ€—πŸ€—πŸ€—
Tonic
posted an update 5 months ago
Boomers still pick zenodo.org instead of Hugging Face??? Absolutely clownish nonsense: my random datasets have 30x more downloads and views than front-page Zenodo entries... gonna write a comparison blog, but yeah... cringe.
Tonic
posted an update 5 months ago
πŸ™‹πŸ»β€β™‚οΈ hey there folks ,

really enjoying sharing cool genomics and protein datasets on the hub these days , check out our cool new org : https://huggingface.co/seq-to-pheno

scroll down for the datasets, still figuring out how to optimize for discoverability , i do think on that part it will be better than zenodo[dot}org , it would be nice to write a tutorial about that and compare : we already have more downloads than most zenodo datasets from famous researchers !
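If you'd rather browse the org's datasets programmatically than scroll, here is a small sketch using `huggingface_hub`; the org name is the one linked above, and sorting by downloads is just one way to surface them:

```python
# pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()

# List every dataset published under the seq-to-pheno organization,
# most-downloaded first.
for ds in api.list_datasets(author="seq-to-pheno", sort="downloads", direction=-1):
    print(ds.id, ds.downloads)
```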
Tonic
posted an update 5 months ago
Hey there folks,

Twitter is awful, isn't it? Just getting into the habit of using HF posts for shares 🦙🦙

Tonic/on-device-granite-3.0-1b-a400m-instruct

A new Granite on-device instruct model demo. Hope you like it 🚀🚀
Tonic
posted an update 6 months ago
πŸ™‹πŸ»β€β™‚οΈ Hey there folks ,

🦎Salamandra release by @mvillegas and team
@BSC_CNS https://huggingface.co/BSC-LT is absolutely impressive so far !

perhaps the largest single training dataset of high quality text to date of 7.8 trillion tokens in 35 European languages and code.

the best part : the data was correctly licenced so it's actually future-proof!

the completions model is really creative and instruct fine tuned version is very good also.

now you can use such models for multi-lingual enterprise applications with further finetunes , long response generation, structured outputs (coding) also works.

check out πŸ‘‡πŸ»
the collection : BSC-LT/salamandra-66fc171485944df79469043a
the repo : https://github.com/langtech-bsc/salamandra
7B-Instruct demo : Tonic/Salamandra-7B
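For a quick local try with transformers, here is a standard chat-template sketch; the exact repo id is my assumption based on the collection above, so double-check it there:

```python
# pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BSC-LT/salamandra-7b-instruct"  # assumed id; confirm in the collection above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the history of the Catalan language in three sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```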
Tonic
posted an update 6 months ago
@mlabonne hey there 🙋🏻‍♂️ I kinda got obsessed with your great model, and I found the endpoint for it on Lambda Labs, but basically I got rate-limited / banned while trying to build my DPO dataset project. I was wondering if you all had an OpenAI-compatible solution for me to make a great "thinking" SFT + DPO dataset with all the splits 🙏🏻🙏🏻 Kinda desperate, it's true, but I was looking forward to a nice write-up 🚀🚀🚀