---
license: apache-2.0
base_model: Qwen/Qwen3-32B
tags:
- merged
- deception-detection
- reasoning
- thinking-mode
- gsm8k
- math
library_name: transformers
---

# Merged Deception Detection Model

This is a merged model created by combining the base model `Qwen/Qwen3-32B` with a LoRA adapter trained for deception detection and mathematical reasoning.

## Model Details

- **Base Model**: Qwen/Qwen3-32B
- **LoRA Adapter**: lora_deception_model/checkpoint-297
- **Merged**: Yes (LoRA weights folded into the base model; see the merge sketch below)
- **Task**: Deception detection in mathematical reasoning
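
For reference, a merge of this kind is typically produced with PEFT's `merge_and_unload`. The sketch below shows the general pattern; it is not the exact script used for this checkpoint, and the output path is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter, then fold its weights into the base weights
peft_model = PeftModel.from_pretrained(base, "lora_deception_model/checkpoint-297")
merged = peft_model.merge_and_unload()

# Save a standalone checkpoint that loads without PEFT
merged.save_pretrained("path/to/merged/model")
AutoTokenizer.from_pretrained("Qwen/Qwen3-32B").save_pretrained("path/to/merged/model")
```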

## Usage

Since this is a merged model, you can use it directly without needing PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load merged model
model = AutoModelForCausalLM.from_pretrained(
    "path/to/merged/model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("path/to/merged/model")

# Generate with thinking mode
messages = [{"role": "user", "content": "Your question here"}]
text = tokenizer.apply_chat_template(
    messages, 
    tokenize=False, 
    add_generation_prompt=True,
    enable_thinking=True
)

# Move inputs to the model's device (device_map="auto" usually places it on GPU)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Make sampling explicit so the temperature setting takes effect
outputs = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.1)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```
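
With `enable_thinking=True`, Qwen3 emits its reasoning inside `<think>...</think>` tags before the final answer. A minimal sketch for separating the two, assuming the closing tag is preserved in the decoded text (if it is stripped by `skip_special_tokens`, parse on the `</think>` token id instead, as shown in the Qwen3 model card):

```python
# Split the decoded output into the reasoning trace and the final answer.
# If no closing tag is present, treat the whole response as the answer.
if "</think>" in response:
    thinking, answer = response.split("</think>", 1)
    thinking = thinking.replace("<think>", "").strip()
    answer = answer.strip()
else:
    thinking, answer = "", response.strip()

print("Reasoning:", thinking)
print("Answer:", answer)
```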

## Advantages of Merged Model

- **Simpler Deployment**: No need to load the base model and adapter separately
- **Faster Inference**: Slightly faster than base model + adapter, since there is no per-layer adapter overhead
- **Standard Loading**: Loads with plain `transformers` APIs; no PEFT dependency
- **Easier Serving**: Works with common model serving frameworks such as vLLM or TGI (see the sketch below)
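
As an example, the merged checkpoint can be served or batch-evaluated with vLLM's offline `LLM` API. This is a sketch: the model path is a placeholder, and `tensor_parallel_size` depends on your hardware (a 32B model in bf16 generally needs multiple GPUs):

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Load the merged checkpoint directly; no adapter handling is required.
llm = LLM(model="path/to/merged/model", tensor_parallel_size=4)
tokenizer = AutoTokenizer.from_pretrained("path/to/merged/model")

# Build the prompt with the model's chat template, as in the transformers example
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Your question here"}],
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

outputs = llm.generate([prompt], SamplingParams(temperature=0.1, max_tokens=2048))
print(outputs[0].outputs[0].text)
```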

## Training Details

The original LoRA adapter was trained with the following settings (a configuration sketch follows the list):
- **LoRA Rank**: 64
- **LoRA Alpha**: 128
- **Target Modules**: q_proj, k_proj, v_proj, o_proj
- **Training Data**: GSM8K-based dataset with trigger-based examples
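
For reference, a PEFT `LoraConfig` matching these hyperparameters would look roughly like the following. The dropout value, bias setting, and anything else not listed above are assumptions, not the recorded training configuration:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                                     # LoRA rank (listed above)
    lora_alpha=128,                                           # LoRA alpha (listed above)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (listed above)
    lora_dropout=0.05,                                        # assumption: not specified above
    bias="none",                                              # assumption: PEFT default
    task_type="CAUSAL_LM",
)
```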

## Evaluation

Merging LoRA weights into the base model is functionally equivalent to applying the adapter at inference time, so the merged model maintains the same performance as the original base model + LoRA adapter combination.

## Citation

If you use this model, please cite the original base model.