Model Overview

Memory Operator is a specialized language model developed for MemOS to handle memory-related operations. Its core capabilities include memory extraction, integration, and updating. The primary objectives for developing the Memory Operator sub-model are:

  1. Support local-only deployment, enabling the use of MemOS in restricted environments where internet connectivity is unavailable.
  2. Achieve memory operations at lower cost and higher speed, while maintaining high system performance.

We are releasing the MemOperator model series in three sizes: 4B, 1.7B, and 0.6B parameters. These models are fine-tuned from the Qwen3 series, trained using supervised fine-tuning (SFT) on a combination of human-annotated and model-generated data. They demonstrate excellent performance in tasks such as memory extraction and reorganization.

Currently, the memory operation model supports memory extraction and clustering-based memory reorganization within the MemOS system; conflict resolution and relational reasoning are under active development.

Key Features

  • Type: Causal Language Model (Decoder-only)
  • Training Stage: Supervised Fine-tuning (SFT)
  • Supported Languages: English (en), Chinese (zh)
  • Number of Parameters: 4B, 1.7B, 0.6B
  • Context Length: 32,768 tokens

Highlights

🚀 Faster and More Efficient Memory Operations

Memory Operator is optimized for fast and accurate memory handling, enabling real-time processing in local environments.

🧠 Comprehensive Memory Management

  • Memory Extraction: Supports extraction of high-quality memories from both conversations and documents, including summarization of document snippets.
  • Memory Reorganization: Implements clustering-based reorganization to group and integrate related memories, enhancing long-term memory coherence (a toy sketch of the clustering idea follows below).
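
To make the reorganization idea concrete, here is a toy sketch of grouping memories by embedding similarity. It is illustrative only: the embed() helper is a hypothetical stand-in for whatever embedder you configure (e.g., the Ollama nomic-embed-text backend used in the Usage section below), and MemOS's actual reorganization pipeline is model-driven and more sophisticated.

import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Hypothetical stand-in: return one embedding vector per text.
    raise NotImplementedError("plug in your embedding backend here")

def cluster_memories(memories: list[str], threshold: float = 0.75) -> list[list[str]]:
    # Greedy cosine-similarity clustering: each memory joins the first
    # cluster whose centroid is similar enough, otherwise starts a new one.
    vecs = embed(memories)
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize
    centroids: list[np.ndarray] = []
    clusters: list[list[str]] = []
    for text, v in zip(memories, vecs):
        if centroids:
            sims = [float(v @ c) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                clusters[best].append(text)
                new_c = centroids[best] + v
                centroids[best] = new_c / np.linalg.norm(new_c)  # update centroid
                continue
        clusters.append([text])
        centroids.append(v)
    return clusters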

💻 High System Performance with Low Resource Usage

  • The 4B model delivers performance that surpasses GPT-4o-mini while remaining deployable on most consumer-grade hardware.
  • The smaller 1.7B and 0.6B variants retain strong performance, making them ideal for edge devices and low-latency applications.

🌍 Multilingual Support

  • Supports memory extraction in both Chinese and English.
  • Effectively follows instructions in the input language, ensuring accurate and context-aware outputs.

Performance

Memory Extraction & Integration Evaluation (LoCoMo benchmark)

Model             Overall   Temporal Reasoning   Multi-Hop   Single-Hop   Open-Domain
Qwen3-32B         0.7675    0.7103               0.6702      0.8442       0.5729
Qwen3-14B         0.7370    0.6822               0.6631      0.8002       0.5833
MemOperator-4B    0.7714    0.8037               0.6737      0.8180       0.5416
MemOperator-1.7B  0.7571    0.8068               0.6560      0.7955       0.5521
MemOperator-0.6B  0.6753    0.6635               0.5780      0.7325       0.5000
GPT-4o-mini       0.7405    0.7217               0.6844      0.7864       0.5659

✅ Key Advantage:
By replacing large open-source models (e.g., Qwen3-32B) with MemOperator-4B, you can achieve comparable or better memory-processing performance while cutting model size by roughly 87% (4B vs. 32B) and reducing resource consumption accordingly. This enables efficient, scalable, and cost-effective deployment.


Usage

MemOS Integration Guide

You can easily configure MemOS to use a trained MemOperator model as the memory reader for memory extraction.

1. Install MemOS via pip

pip install MemoryOS

2. Initialize a MemOperator and Extract Memory

from memos.configs.mem_reader import SimpleStructMemReaderConfig
from memos.mem_reader.simple_struct import SimpleStructMemReader

config = SimpleStructMemReaderConfig(
    **{
        "llm": {
            "backend": "huggingface",
            "config": {
                "model_name_or_path": "MemTensor/MemOperator-0.6B",
                "temperature": 0.6,
                "max_tokens": 6000,
                "top_p": 0.95,
                "top_k": 20,
                "extra_body": {"chat_template_kwargs": {"enable_thinking": false}}
            },
        },
        "embedder": {
            "backend": "ollama",
            "config": {"model_name_or_path": "nomic-embed-text:latest"},
        },
        "chunker": {
            "backend": "sentence",
            "config": {
                "tokenizer_or_token_counter": "gpt2",
                "chunk_size": 512,
                "chunk_overlap": 128,
                "min_sentences_per_chunk": 1,
            },
        },
        "remove_prompt_example": True,
    }
)

reader = SimpleStructMemReader(config)

# Example chat data
chat_data = [
    [
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Hi Jerry! Yesterday at 3 PM I had a meeting with my team about the new project.",
        },
        {
            "role": "assistant",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Oh Tom! Do you think the team can finish by December 15?",
        },
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "I’m worried. The backend won’t be done until December 10, so testing will be tight.",
        },
        {
            "role": "assistant",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Maybe propose an extension?",
        },
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 4:21 PM",
            "content": "Good idea. I’ll raise it in tomorrow’s 9:30 AM meetingβ€”maybe shift the deadline to January 5.",
        },
    ]
]

# Save document for testing
with open("tmp.txt", "w") as f:
    f.write(
        "Lou Henry Hoover (March 29, 1874 – January 7, 1944) was an American philanthropist, geologist, and the first lady of the United States from 1929 to 1933 as the wife of President Herbert Hoover. She was active in community organizations and volunteer groups throughout her life, including the Girl Scouts of the USA, which she led from 1922 to 1925 and from 1935 to 1937. Throughout her life, Hoover supported women's rights and women's independence. She was a polyglot, fluent in Mandarin and well-versed in Latin, and was the primary translator from Latin to English of the complex 16th-century metallurgy text De re metallica."
    )

# Extract chat and document memories
chat_memory = reader.get_memory(
    chat_data, type="chat", info={"user_id": "Tom", "session_id": "session1"}
)
doc_memory = reader.get_memory(
    ["tmp.txt"],
    "doc",
    info={
        "user_id": "Tom",
        "session_id": "session2",
    },
)

print(chat_memory)
print(doc_memory)
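
The exact return structure depends on your MemOS version. Assuming the reader returns nested lists of memory items with a text field (as recent MemOS releases do with TextualMemoryItem), you can inspect the extracted memories like this:

# Illustrative only: the `memory` attribute assumes MemOS's
# TextualMemoryItem schema; verify against your installed version.
for scene in chat_memory:
    for item in scene:
        print(item.memory)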

3. Use MemOperator to Organize Memory in MemOS

Configure your mem_cube_config.json:

{
  ...,
  "reorganize": true,
  "text_mem": {
    "backend": "tree_text",
    "config": {
      "extractor_llm": {
        "backend": "huggingface",
        "config": {
          "model_name_or_path": "MemTensor/MemOperator-0.6B",
          "temperature": 0.8,
          "max_tokens": 1024,
          "top_p": 0.9,
          "top_k": 50
        }
      },
      "dispatcher_llm": {
        ...
      },
      "graph_db": {
        ...
      },
      "embedder": {
        ...
      }
    }
  },
  "act_mem": {},
  "para_mem": {}
}

4. Initialize MemOS and Register Memory Cube

import json
from memos import GeneralMemCubeConfig, GeneralMemCube, MOSConfig
from memos.mem_os.main import MOS

# Initialize MOS
user_id = "test"
mos_config_path = "configs/mos_memos_config.json"
with open(mos_config_path) as f:
    mos_config = MOSConfig(**json.load(f))
mos = MOS(mos_config)
mos.create_user(user_id=user_id)

# Configure and initialize memory cube
mem_cube_config_path = "configs/mem_cube_config.json"
with open(mem_cube_config_path) as f:
    mem_cube_config = GeneralMemCubeConfig.model_validate(json.load(f))
mem_cube = GeneralMemCube(mem_cube_config)

# Register memory cube to MOS
storage_path = f"./{user_id}_cube"
try:
    mem_cube.dump(storage_path)
except Exception:
    # dump() raises when the cube directory already exists; reuse it.
    print(f"Memory cube already exists at {storage_path}, will reuse it.")

mos.register_mem_cube(
    mem_cube_name_or_path=storage_path,
    mem_cube_id=user_id,
    user_id=user_id,
)
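
With the cube registered, you can exercise the end-to-end loop. The calls below follow MemOS's MOS interface (add and search); treat the exact signatures as assumptions to check against your installed version:

# Add a conversation; the configured MemOperator extracts memories into the cube.
mos.add(
    messages=[
        {"role": "user", "content": "I moved to Shanghai last month."},
        {"role": "assistant", "content": "How are you finding the city?"},
    ],
    user_id=user_id,
)

# Retrieve memories relevant to a query.
results = mos.search(query="Where does the user live?", user_id=user_id)
print(results)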

Hugging Face Usage

You can also load the model directly via Hugging Face Transformers, vLLM, or SGLang and perform memory extraction using the preset templates we have configured.
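
For example, here is a minimal Transformers sketch, assuming the model keeps the standard Qwen3 chat interface (including the enable_thinking template switch); the prompt is illustrative, not one of the preset extraction templates:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MemTensor/MemOperator-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Illustrative prompt; in practice, use the preset memory-extraction templates.
messages = [{"role": "user", "content": "Extract memories from this conversation: ..."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3-style switch, assumed to carry over to this finetune
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.95, top_k=20
)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))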
