Model Card for Philosophy-chat

Philosophy-chat is a fine-tuned version of Qwen2.5-1.5B-Instruct, trained on a dataset of philosophical questions and answers. The model specializes in understanding and generating responses about complex philosophical concepts, arguments, and debates.

Model Details

Model Description

  • Language: English
  • License: MIT
  • Finetuned from model: unsloth/Qwen2.5-1.5B-Instruct
  • Fine-Tuning Method: Supervised Fine-tuning with LoRA
  • Domain: Philosophy
  • Dataset: Heigke/stanford-enigma-philosophy-chat
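
The adapter was trained with supervised fine-tuning via LoRA using PEFT. As a rough illustration, a LoRA configuration for a Qwen2.5-style model might look like the sketch below; the rank, alpha, dropout, and target modules shown are assumptions for illustration, not the values actually used for this adapter:

```python
from peft import LoraConfig

# Hypothetical LoRA hyperparameters -- the card does not publish
# the actual values used to train this adapter.
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices
    lora_alpha=32,             # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Passing this config to `peft.get_peft_model` wraps the base model so that only the small adapter matrices are trained, which is what makes fine-tuning a 1.5B model feasible on modest hardware.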

Uses

Direct Use

  • Generating clear and concise explanations of philosophical concepts.
  • Providing structured responses to philosophical questions.
  • Assisting students, researchers, and enthusiasts in exploring philosophical arguments.

Bias, Risks, and Limitations

  • While fine-tuned on philosophy, the model may still hallucinate or produce imprecise interpretations of highly nuanced philosophical arguments.
  • The model does not replace expert human philosophical judgment.

How to Get Started with the Model

from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

login(token="")

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/Philosophy-chat")

question = "According to William Whewell, what is necessary for gaining knowledge?"

system = """
You are an expert in philosophy.
"""

messages = [
    {"role" : "system", "content" : system},
    {"role" : "user",   "content" : question}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn so the model responds
)

from transformers import TextStreamer

inputs = tokenizer(text, return_tensors="pt").to("cuda")
_ = model.generate(
    **inputs,
    max_new_tokens=1024,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)

Training Details

Training Data

Roughly 27k question–answer pairs inspired by articles from the Stanford Encyclopedia of Philosophy. The questions range from zombies to the concept of abduction, and from metaphysics to neuroethics, covering core themes in mathematics, logic, and philosophy.
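
For supervised fine-tuning, each question–answer pair is typically rendered into the model's chat format before tokenization. The sketch below illustrates the idea using Qwen's ChatML-style markers; the field names `question` and `answer` are assumptions about the dataset schema, and in practice `tokenizer.apply_chat_template` would be used rather than a hand-written template:

```python
def to_chat_example(row: dict) -> str:
    """Render one Q&A row into a ChatML-style training string.

    The keys 'question' and 'answer' are assumed, not verified against
    the actual dataset; a real pipeline would call
    tokenizer.apply_chat_template instead of this hand-written template.
    """
    return (
        "<|im_start|>system\nYou are an expert in philosophy.<|im_end|>\n"
        f"<|im_start|>user\n{row['question']}<|im_end|>\n"
        f"<|im_start|>assistant\n{row['answer']}<|im_end|>\n"
    )

example = to_chat_example({
    "question": "What is abduction?",
    "answer": "Inference to the best explanation.",
})
```

Formatting the pairs this way keeps training examples consistent with the chat template the model sees at inference time.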

Framework versions

  • PEFT 0.17.0