✅ Pre-trained on 36 trillion tokens covering 119 languages and dialects, with strong translation and instruction-following abilities. (Qwen2.5 was pre-trained on 18 trillion tokens.)
✅ Qwen3 dense models match the performance of larger Qwen2.5 models. For example, Qwen3-1.7B/4B/8B/14B/32B perform like Qwen2.5-3B/7B/14B/32B/72B.
✅ Three-stage pretraining:
• Stage 1: General language learning and knowledge building.
• Stage 2: Reasoning boost with STEM, coding, and logic skills.
• Stage 3: Long-context training.
✅ Supports MCP in the model.
✅ Strong agent skills.
✅ Supports seamless switching between thinking mode (for hard tasks like math and coding) and non-thinking mode (for fast chatting) inside the chat template (see the sketch after this list).
✅ Better human alignment for creative writing, roleplay, multi-turn conversations, and following detailed instructions.
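As a quick illustration of the mode switch, here is a minimal sketch using Hugging Face transformers. The `enable_thinking` flag is the toggle Qwen3's chat template exposes; the checkpoint name is just an example:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # example checkpoint; other Qwen3 sizes work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve: what is 17 * 24?"}]

# Thinking mode: the chat template inserts a reasoning block before the answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for fast, non-thinking chat
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

With `enable_thinking=False`, the template skips the reasoning block entirely, which is the fast non-thinking chat mode.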
Hello everyone! With the rapid advancement of AI agent technology, two architectures have come into the spotlight: MCP (Model Context Protocol) and MCO (Model Context Open-json). Today, we’ll introduce the key features and differences of these two approaches.
MCP: The Traditional Approach 🏛️
Centralized Function Registry: All functions are hardcoded into the core system.
Static Function Definitions & Tight Coupling: New features require changes to the core application code, limiting scalability.
Monolithic Design: Deployment and version management are complex, and a single error can affect the whole system.
Code Example:

```py
def existing_function(): ...   # stub: assumed already defined in the core app
def new_function(): ...        # stub: the function being added

# Central registry: adding a function means editing core application code.
FUNCTION_REGISTRY = {
    "existing_function": existing_function,
    "new_function": new_function,  # Adding a new function
}
```
MCO: A Revolutionary Approach 🆕
JSON-based Function Definitions: Function details are stored in external JSON files, enabling dynamic module loading (a minimal loader sketch follows the JSON example below).
Loose Coupling & Microservices: Each function can be developed, tested, and deployed as an independent module.
Flexible Scalability: Add new features by simply updating the JSON and module files, without modifying the core system.
JSON Example:

```json
[
  {
    "name": "analyze_sentiment",
    "module_path": "nlp_tools",
    "func_name_in_module": "sentiment_analysis",
    "example_usage": "analyze_sentiment(text=\"I love this product!\")"
  }
]
```
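To make the dynamic-loading idea concrete, here is a minimal loader sketch using Python's standard `importlib`. The file name `functions.json` and the registry shape are assumptions for illustration; the JSON fields match the example above:

```py
import importlib
import json

def load_function_registry(config_path="functions.json"):
    """Build a name -> callable registry from external JSON definitions."""
    with open(config_path) as f:
        definitions = json.load(f)

    registry = {}
    for entry in definitions:
        # Import the module by its path, e.g. "nlp_tools", at load time.
        module = importlib.import_module(entry["module_path"])
        # Look up the target function inside that module.
        registry[entry["name"]] = getattr(module, entry["func_name_in_module"])
    return registry

# Usage: adding a function needs no core-code change -- just update the JSON
# and drop in a new module file.
# registry = load_function_registry()
# result = registry["analyze_sentiment"](text="I love this product!")
```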
Why MCO? 💡
Enhanced Development Efficiency: Developers can focus on their own modules with independent testing and deployment.
Simplified Error Management: Errors remain confined within their modules, enabling quick hotfixes (see the dispatch sketch after this list).
Future-Proofing: With potential features like remote function calls (RPC), access control, auto-documentation, and a function marketplace, MCO paves the way for rapid innovation.
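As a sketch of the error-isolation point above, a dispatcher can wrap each dynamically loaded function so a failure in one module never crashes the host system. The `registry` here is the one built by the loader sketch earlier, and the result format is an assumption:

```py
def dispatch(registry, name, **kwargs):
    """Call a registered function, confining any module error to this call."""
    if name not in registry:
        return {"ok": False, "error": f"unknown function: {name}"}
    try:
        return {"ok": True, "result": registry[name](**kwargs)}
    except Exception as exc:  # a buggy module fails alone, not the whole system
        return {"ok": False, "error": f"{name} failed: {exc}"}
```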
Practical Use & Community 🤝 The MCO implementation has been successfully tested on Vidraft’s LLM (based on Google Gemma-3)