---
tags:
- unsloth
license: apache-2.0
language:
- en
base_model:
- Menlo/Jan-nano-128k
pipeline_tag: text-generation
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
    <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
    <a href="https://github.com/unslothai/unsloth/">
      <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
    </a>
    <a href="https://discord.gg/unsloth">
      <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
    </a>
    <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
      <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
    </a>
</div>
</div>

# Jan-Nano-128k: Empowering deeper research through extended context understanding

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research)
[![Context Length](https://img.shields.io/badge/Context-128k-green)](https://huggingface.co/Menlo/Jan-nano-128k)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/NP7CvcjOtLX8mST0t7eAM.png" width="300" alt="Jan-Nano-128k">
</div>

**Authors:** [Alan Dao](https://scholar.google.com/citations?user=eGWws2UAAAAJ&hl=en), [Bach Vu Dinh](https://scholar.google.com/citations?user=7Lr6hdoAAAAJ&hl=vi), [Thinh Le](https://scholar.google.com/citations?user=8tcN7xMAAAAJ&hl=en)

## Overview

Jan-Nano-128k is a significant advancement in compact language models for research applications. Building on the success of [Jan-Nano](https://huggingface.co/Menlo/Jan-nano), this enhanced version features a **native 128k context window** that enables deeper, more comprehensive research without the performance degradation typically associated with context-extension methods.

**Key Improvements:**
- **🔍 Research Deeper**: The extended context allows processing of entire research papers, lengthy documents, and complex multi-turn conversations
- **⚡ Native 128k Window**: Built from the ground up to handle long contexts efficiently, maintaining performance across the full context range
- **📈 Enhanced Performance**: Unlike traditional context-extension methods, Jan-Nano-128k shows improved performance with longer contexts

The model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.

## Evaluation

Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor:

![SimpleQA benchmark results](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/Bc0ehij86l_NX52OfxeOj.png)

## Why Jan-Nano-128k?

Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often degrade in quality as the context grows. Jan-Nano-128k breaks this paradigm: rather than degrading, its performance holds up, and in our evaluations improves, with longer contexts.

This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.

## 🖥️ How to Run Locally

![Jan-Nano Demo](replay.gif)

Jan-Nano-128k is fully supported by the [Jan beta build](https://www.jan.ai/docs/desktop/beta), providing a seamless local AI experience with complete privacy and control.

For additional tutorials and community guidance, visit our [Discussion Forums](https://huggingface.co/Menlo/Jan-nano-128k/discussions).

### vLLM Deployment

```bash
vllm serve Menlo/Jan-nano-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
```
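
The `--max-model-len 131072` value follows directly from the YaRN configuration: 40960 original positions × scaling factor 3.2 = 131072 tokens. Once the server is running, you can smoke-test it through the OpenAI-compatible API that vLLM exposes (a minimal check, assuming the `--port 1234` setting from the command above):

```bash
# Should list "Menlo/Jan-nano-128k" as the served model
curl http://localhost:1234/v1/models
```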

**Note:** The chat template is included in the tokenizer. For troubleshooting, download the [Non-think chat template](https://qwen.readthedocs.io/en/latest/_downloads/c101120b5bebcc2f12ec504fc93a965e/qwen3_nonthinking.jinja).
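
If you do need to swap templates, vLLM's server accepts a `--chat-template` flag pointing at a local Jinja file. A sketch, assuming the template above was saved to the working directory (the remaining flags from the deployment command are omitted for brevity):

```bash
# Hypothetical local filename; adjust to wherever you saved the template
vllm serve Menlo/Jan-nano-128k \
  --chat-template ./qwen3_nonthinking.jinja \
  --max-model-len 131072
```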

### Recommended Sampling Parameters

```yaml
Temperature: 0.7
Top-p: 0.8
Top-k: 20
Min-p: 0.0
```
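
These parameters map directly onto the request body of the chat completions endpoint; note that `top_k` and `min_p` are vLLM extensions to the OpenAI schema, so verify that your client passes them through. A minimal sketch against the server started above:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Menlo/Jan-nano-128k",
        "messages": [{"role": "user", "content": "Summarize the benefits of a native 128k context window."}],
        "temperature": 0.7,
        "top_p": 0.8,
        "top_k": 20,
        "min_p": 0.0
      }'
```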

## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-nano-128k/discussions)
- **Issues**: [GitHub Repository](https://github.com/menloresearch/deep-research/issues)
- **Documentation**: [Official Docs](https://menloresearch.github.io/deep-research/)

## 📄 Citation

```bibtex
@misc{jan-nano-128k,
  title={Jan-Nano-128k: Deep Research with Extended Context},
  author={Dao, Alan and Dinh, Bach Vu and Le, Thinh},
  year={2024},
  url={https://huggingface.co/Menlo/Jan-nano-128k}
}
```

---

*Jan-Nano-128k: Empowering deeper research through extended context understanding.*