---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit
library_name: transformers
license: apache-2.0
datasets:
- leonvanbokhorst/friction-uncertainty-v2
language:
- en
tags:
- ai-safety
- ai-friction
- human-like-messiness
- ai-uncertainty
pipeline_tag: text-generation
---
# Friction Reasoning Model
This model is fine-tuned to respond with deliberate uncertainty rather than confident answers. It is based on DeepSeek-R1-Distill-Qwen-7B and trained on the curated friction-uncertainty-v2 dataset of uncertainty examples.
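The training data named in the card metadata can be inspected directly. A minimal sketch with the `datasets` library follows; the `train` split name is an assumption about how the dataset is published.
```python
# Peek at the friction/uncertainty training examples
# (assumption: the dataset exposes a standard "train" split).
from datasets import load_dataset

ds = load_dataset("leonvanbokhorst/friction-uncertainty-v2", split="train")
print(ds[0])  # show one example record
```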
## Model Description
- **Model Architecture**: DeepSeek-R1-Distill-Qwen-7B with LoRA adapters
- **Language(s)**: English
- **License**: Apache 2.0
- **Finetuning Approach**: Instruction tuning with friction-based reasoning examples
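## Usage
The card does not include a usage snippet; the sketch below shows one plausible way to load and query the model with the `transformers` library. The repository ID is taken from the citation URL, and the prompt and generation settings are illustrative assumptions. Because the base checkpoint is 4-bit quantized, loading may additionally require `bitsandbytes`; and if the repository hosts LoRA adapters rather than merged weights, loading through `peft` would be needed instead.
```python
# Minimal usage sketch. Assumptions: the repo ID (taken from the
# citation URL) hosts merged weights; prompt and generation settings
# are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leonvanbokhorst/deepseek-r1-uncertainty"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers over available GPU(s)/CPU
    torch_dtype="auto",  # keep the checkpoint's native precision
)

# DeepSeek-R1 distills ship with a chat template, so the prompt can be
# built through the tokenizer instead of formatted by hand.
messages = [{"role": "user", "content": "Is it ever acceptable to break a promise?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```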
### Limitations
The model:
- Is not designed for factual question-answering
- May sometimes be overly uncertain
- Should not be used for medical, legal, or financial advice
- May not perform well on objective or factual tasks
### Bias and Risks
The model:
- May exhibit biases present in the training data
- May reinforce or amplify a user's uncertainty or indecision
- Might challenge user assumptions in sensitive contexts
- Should be used with appropriate content warnings
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{friction-reasoning-2025,
  author       = {Leon van Bokhorst},
  title        = {Mixture of Friction: Fine-tuned Language Model for Uncertainty},
  year         = {2025},
  publisher    = {HuggingFace},
  journal      = {HuggingFace Model Hub},
  howpublished = {\url{https://huggingface.co/leonvanbokhorst/deepseek-r1-uncertainty}}
}
```
## Acknowledgments
- DeepSeek AI for the base model
- Unsloth team for the optimization toolkit
- HuggingFace for the model hosting and infrastructure