Support our open-source dataset and model releases!

DAG Reasoning: Qwen3-8B, Qwen3-14B

DAG Reasoning is an experimental specialist reasoning AI with a custom output format; for general reasoning and chat, try Shining Valiant 3 or Esper 3!

DAG Reasoning is a specialist reasoning assistant that performs causal analysis and reasoning to produce Directed Acyclic Graphs in response to user input.

  • Finetuned on our DAG dataset, with data generated using DeepSeek R1 0528!
  • Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
  • DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
  • Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
  • Small model sizes allow local use on desktop and mobile, plus super-fast server inference!
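To illustrate downstream use of the graph object, here is a minimal sketch that loads a DAG of the general shape described above and derives a causal ordering of its nodes. The field names (`nodes`, `edges`, `from`, `to`, `confidence`) and the example content are illustrative assumptions, not the model's guaranteed schema:

```python
import json

# Hypothetical DAG Reasoning-style graph object; field names are
# illustrative assumptions, not the model's guaranteed schema.
graph_json = """
{
  "nodes": [
    {"id": "drained_swampland", "label": "Park built on drained swampland"},
    {"id": "subsidence", "label": "Slow, uneven ground subsidence"},
    {"id": "foundation_cracks", "label": "Cracked warehouse foundations"},
    {"id": "unsellable_lots", "label": "Remaining lots unsellable"}
  ],
  "edges": [
    {"from": "drained_swampland", "to": "subsidence", "confidence": 0.9},
    {"from": "subsidence", "to": "foundation_cracks", "confidence": 0.95},
    {"from": "subsidence", "to": "unsellable_lots", "confidence": 0.8}
  ]
}
"""

def topological_order(graph: dict) -> list[str]:
    """Kahn's algorithm: returns a causal ordering of node ids,
    raising ValueError if the graph contains a cycle (i.e. is not a DAG)."""
    ids = [n["id"] for n in graph["nodes"]]
    indegree = {i: 0 for i in ids}
    children = {i: [] for i in ids}
    for e in graph["edges"]:
        children[e["from"]].append(e["to"])
        indegree[e["to"]] += 1
    frontier = [i for i in ids if indegree[i] == 0]
    order = []
    while frontier:
        node = frontier.pop()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                frontier.append(child)
    if len(order) != len(ids):
        raise ValueError("graph contains a cycle; not a valid DAG")
    return order

order = topological_order(json.loads(graph_json))
print(order)  # root causes appear before their effects
```

The same ordering pass doubles as a structural check: if the model ever emits a cyclic graph, the function raises instead of silently producing a bad analysis.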

Prompting Guide

DAG Reasoning uses the Qwen 3 prompt format to create outputs in DAG Reasoning Format.

DAG Reasoning is an experimental reasoning finetune:

  • The assistant performs multi-step reasoning during the thinking phase before producing the JSON graph object at the start of its response to the user.
  • Request the graph or analysis explicitly in your user prompt to trigger the DAG Reasoning Format; see the example script below. (If the model is unsure of your request, it will generally default to standard Qwen 3 chat output instead of creating a DAG.)
  • This is an early experimental release: if used in a production context, structural validation of outputs is strongly recommended.
  • We recommend enable_thinking=True for all chats.

Example inference script to get started:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sequelbox/Qwen3-14B-DAG-Reasoning"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input; we generally recommend following the prompting style shown in these examples:
prompt = "Analyze the following scenario from a report on a new industrial park: The park was built on reclaimed swampland. The initial site survey indicated the ground was stable after being drained and filled. However, over the first five years of operation, slow, uneven ground subsidence has caused cracking in the foundations of several large warehouses. The cost of stabilizing these foundations is now projected to be higher than the initial cost of the land itself, and the risk of further subsidence has made the remaining lots in the park unsellable."
#prompt = "Make a graph of this analysis: In the American West, warmer winters are causing more precipitation to fall as rain instead of snow, even when total precipitation remains unchanged. This has two major consequences for water management. First, runoff occurs immediately in the winter rather than being stored as snowpack until the spring and summer melt. This increases winter flood risk and reduces water availability during the summer growing season. Second, the smaller snowpack reflects less solar radiation, leading to warmer ground temperatures and increased evaporation, further reducing water supply."
#prompt = "A supply chain security analysis finds: following the disclosure of a critical vulnerability in the widely used Log4j library, we consulted our Software Bill of Materials (SBOM) for a key application, which indicated the application was not affected. However, the application was later compromised via this exact vulnerability. The investigation revealed the SBOM was generated incorrectly and failed to identify Log4j as a transitive dependency, a library pulled in by another library. This inaccurate SBOM led to a false negative in our risk assessment."
#prompt = "Analyze this and make a graph: A company incurred a $200,000 bill from its cloud provider in one weekend, an attack known as cryptojacking. An attacker discovered an exposed API key in the client-side code of the company's public-facing web application. This key belonged to a role that, due to a misconfiguration, had permissions to create new virtual machine instances. The attacker wrote a script to programmatically spin up thousands of the most powerful, GPU-equipped virtual machines in several different geographic regions to mine cryptocurrency, leading to the massive, unexpected charges."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
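Because structural validation is recommended for this experimental release, a small helper for pulling the graph out of the model's text response can be useful. The sketch below assumes the graph is emitted as a top-level `{...}` JSON object within the output, per the format description above; the helper name is our own, not part of the model or library:

```python
import json

def extract_json_object(text: str) -> dict:
    """Extract and parse the first balanced top-level JSON object in text.
    Tracks string literals so braces inside quoted strings are ignored;
    raises ValueError if no complete object is found."""
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found in output")
    depth = 0
    in_string = False
    escaped = False
    for i, ch in enumerate(text[start:], start):
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start:i + 1])
    raise ValueError("unbalanced JSON object in output")

# e.g., after running the script above:
# graph = extract_json_object(content)
```

Once parsed, the object can be checked against whatever schema your application expects before it is visualized or stored.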

DAG Reasoning is one of our experimental reasoning releases; we've got more to come soon!

Do as you will.

Model size: 14.8B params (Safetensors, F32 tensors)

Model tree for sequelbox/Qwen3-14B-DAG-Reasoning: finetuned from Qwen/Qwen3-14B.
Dataset used to train sequelbox/Qwen3-14B-DAG-Reasoning