mem-agent-f32-gguf

driaforall/mem-agent is an agentic model based on Qwen3-4B-Thinking-2507, fine-tuned with GSPO (Zheng et al., 2025) to interact with an Obsidian-inspired, markdown-based memory system for retrieval, updating, and clarification tasks. The model is built around agentic scaffolding that uses dedicated tags and tool APIs for file and directory operations, and it supports memory filtering and obfuscation. On md-memory-bench it achieved an overall score of 0.75, outperforming most open and closed models except qwen/qwen3-235b-a22b-thinking-2507. It is designed to run as an MCP server or standalone, and it relies on linked markdown files to manage user and entity data, enabling flexible, document-like memory manipulation for agentic or personal-assistant scenarios.
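To make the memory model concrete, here is a minimal, purely illustrative sketch of the kind of linked, Obsidian-style markdown layout described above; the directory layout, file names, and fields are assumptions for demonstration, not a schema defined by mem-agent.

```python
from pathlib import Path

# Illustrative linked-markdown memory store (names and fields are hypothetical).
memory = Path("memory")
(memory / "entities").mkdir(parents=True, exist_ok=True)

# A user profile that wiki-links to entity files, Obsidian style.
(memory / "user.md").write_text(
    "# User\n"
    "- name: Jane Doe\n"
    "- employer: [[entities/acme_corp.md]]\n"
)

# An entity file the agent can retrieve from, update, or ask clarifications about.
(memory / "entities" / "acme_corp.md").write_text(
    "# Acme Corp\n"
    "- industry: robotics\n"
    "- relation: employer of [[user.md]]\n"
)

print((memory / "user.md").read_text())
```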

Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| mem-agent.BF16.gguf | BF16 | 8.05 GB |
| mem-agent.F16.gguf | F16 | 8.05 GB |
| mem-agent.F32.gguf | F32 | 16.1 GB |
| mem-agent.Q2_K.gguf | Q2_K | 1.67 GB |
| mem-agent.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| mem-agent.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| mem-agent.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| mem-agent.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| mem-agent.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| mem-agent.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| mem-agent.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| mem-agent.Q6_K.gguf | Q6_K | 3.31 GB |
| mem-agent.Q8_0.gguf | Q8_0 | 4.28 GB |
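
The following is a minimal sketch of downloading one of the files above and running it locally, assuming the huggingface_hub and llama-cpp-python packages are installed; the chosen quant, context size, and prompt are illustrative, and the full agentic scaffolding (tags, tool APIs, memory directory) comes from the upstream driaforall/mem-agent tooling rather than from this snippet.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files listed above (Q4_K_M chosen as an example).
model_path = hf_hub_download(
    repo_id="prithivMLmods/mem-agent-f32-gguf",
    filename="mem-agent.Q4_K_M.gguf",
)

# Load the GGUF with the llama.cpp bindings; the context length is an assumption.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What do you remember about my employer?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

For the MCP-server setup mentioned above, the same GGUF file can instead be served through llama.cpp's OpenAI-compatible server and wired into the upstream mem-agent scaffolding.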

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).

GGUF details: 4.02B parameters, qwen3 architecture.