Upload folder using huggingface_hub
- README.md +147 -0
- chat.py +893 -0
- chat_full.py +976 -0
- config.json +4 -0
- llama_FFN_PF_lut8_chunk_01of02.mlmodelc/analytics/coremldata.bin +3 -0
- llama_FFN_PF_lut8_chunk_01of02.mlmodelc/coremldata.bin +3 -0
- llama_FFN_PF_lut8_chunk_01of02.mlmodelc/metadata.json +336 -0
- llama_FFN_PF_lut8_chunk_01of02.mlmodelc/model.mil +0 -0
- llama_FFN_PF_lut8_chunk_01of02.mlmodelc/weights/weight.bin +3 -0
- llama_FFN_PF_lut8_chunk_02of02.mlmodelc/analytics/coremldata.bin +3 -0
- llama_FFN_PF_lut8_chunk_02of02.mlmodelc/coremldata.bin +3 -0
- llama_FFN_PF_lut8_chunk_02of02.mlmodelc/metadata.json +336 -0
- llama_FFN_PF_lut8_chunk_02of02.mlmodelc/model.mil +0 -0
- llama_FFN_PF_lut8_chunk_02of02.mlmodelc/weights/weight.bin +3 -0
- llama_embeddings_lut8.mlmodelc/analytics/coremldata.bin +3 -0
- llama_embeddings_lut8.mlmodelc/coremldata.bin +3 -0
- llama_embeddings_lut8.mlmodelc/metadata.json +69 -0
- llama_embeddings_lut8.mlmodelc/model.mil +11 -0
- llama_embeddings_lut8.mlmodelc/weights/weight.bin +3 -0
- llama_lm_head_lut8.mlmodelc/analytics/coremldata.bin +3 -0
- llama_lm_head_lut8.mlmodelc/coremldata.bin +3 -0
- llama_lm_head_lut8.mlmodelc/metadata.json +140 -0
- llama_lm_head_lut8.mlmodelc/model.mil +98 -0
- llama_lm_head_lut8.mlmodelc/weights/weight.bin +3 -0
- meta.yaml +23 -0
- tokenizer.json +0 -0
- tokenizer_config.json +2062 -0
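The commit title indicates this folder was pushed with the `huggingface_hub` client. A sketch of that kind of upload follows; the exact invocation, paths, and token handling are assumptions, not recorded in this commit:

```python
# Sketch: uploading a local folder to the Hub, as the commit title describes.
# Assumes you are already authenticated (e.g. via `huggingface-cli login`);
# the local folder path is illustrative.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./anemll-Meta-Llama-3.2-1B-ctx512_0.3.0",  # illustrative local path
    repo_id="anemll/anemll-Meta-Llama-3.2-1B-ctx512_0.3.0",
    commit_message="Upload folder using huggingface_hub",
)
```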
README.md
ADDED
@@ -0,0 +1,147 @@
---
license: mit
tags:
- coreml
- ANE
- DeepSeek
- Apple
- Apple Neural Engine
- DeepHermes
---
# ANEMLL

**ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).

The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.

This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.

This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.

For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).

---

## License

ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
The model is based on Meta's LLaMA 3.2 and may require a separate license.

This test model covers only Meta's LLaMA architecture converted to CoreML. It was released before the official launch of the ANEMLL repository, with minimal documentation, and is intended for early adopters who requested an early release.

---

## Requirements

- **macOS Sequoia** with Apple Neural Engine and 8GB RAM or more
- **CoreML Tools** and **HuggingFace Transformers** libraries
- **Python 3.9**

`chat.py` provides a sample inference script.
`chat_full.py` provides a sample inference script with history and conversation management.

**Installation**

1. Download the model from Hugging Face:
```bash
# Install required tools
pip install huggingface_hub

# Install Git LFS (Large File Support)
# macOS with Homebrew:
brew install git-lfs
# Or Ubuntu/Debian:
# sudo apt-get install git-lfs

# Initialize Git LFS
git lfs install

# Clone the repository with model files
git clone https://huggingface.co/anemll/anemll-Meta-Llama-3.2-1B-ctx512_0.3.0
```
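Alternatively (a sketch, not part of the original README), the same files can be fetched with the `huggingface_hub` Python API instead of Git LFS; the `local_dir` value below is illustrative:

```python
# Sketch: download this repo with huggingface_hub instead of git clone.
# snapshot_download pulls all files, including the LFS weight binaries.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="anemll/anemll-Meta-Llama-3.2-1B-ctx512_0.3.0",
    local_dir="anemll-Meta-Llama-3.2-1B-ctx512_0.3.0",  # illustrative target dir
)
```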

2. Extract model files:
```bash
# Navigate to cloned directory
cd anemll-Meta-Llama-3.2-1B-ctx512_0.3.0

# Pull LFS files (model weights)
git lfs pull

# Extract CoreML model files
find . -type f -name "*.zip" -exec unzip {} \;
```

3. Install dependencies:
```bash
pip install coremltools transformers
```

**Coremltools:**

See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation

**How to Run**

1. Basic chat interface:
```bash
python chat.py --meta ./meta.yaml
```

2. Full conversation mode with history:
```bash
python chat_full.py --meta ./meta.yaml
```
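Both scripts resolve the individual model paths from `meta.yaml`. As a reference, here is a minimal sketch of that resolution, mirroring the logic in `chat.py`'s `parse_args` (included later in this commit); the key names come from that code, and the expected output matches this repo's file names:

```python
# Sketch: how chat.py derives model file names from meta.yaml.
import yaml

with open("meta.yaml") as f:
    params = yaml.safe_load(f)["model_info"]["parameters"]

prefix = params.get("model_prefix", "llama")

def lut(key):
    # The LUT suffix is omitted when the value is 'none'
    return f"_lut{params[key]}" if params[key] != "none" else ""

num_chunks = int(params["num_chunks"])
print(f"{prefix}_embeddings{lut('lut_embeddings')}")                   # llama_embeddings_lut8
print(f"{prefix}_lm_head{lut('lut_lmhead')}")                          # llama_lm_head_lut8
print(f"{prefix}_FFN_PF{lut('lut_ffn')}_chunk_01of{num_chunks:02d}")   # llama_FFN_PF_lut8_chunk_01of02
```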

> Note: The first time the model loads, macOS will take some time to place it on the device.
> Subsequent loads will be instantaneous.
> Use Ctrl-D to exit, Ctrl-C to interrupt inference.

**More Info**
Please check the following links for updates:

* [GitHub](https://github.com/anemll)
* [Hugging Face Models](https://huggingface.co/anemll)
* [Twitter/X](https://x.com/anemll)
* [Website](https://anemll.com)


# anemll-Meta-Llama-3.2-1B-ctx512_0.3.0

This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.

## Available Distributions

### Standard Distribution
- Contains zipped MLMODELC files
- Suitable for macOS and development

### iOS Distribution
- Contains unzipped MLMODELC files
- Ready for iOS deployment
- Includes offline tokenizer support

## Model Information
- Context Length: %CONTEXT_LENGTH%
- Batch Size: %BATCH_SIZE%
- Number of Chunks: %NUM_CHUNKS%

## Quick Start

### Test in iOS/macOS App
Try our sample Chat-Bot app on TestFlight:
1. Install TestFlight from the App Store
2. Join the beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
3. The app includes a small demo model pre-installed
4. You can add custom models via Hugging Face URLs

> [!Note]
> - The TestFlight app works on both iOS and macOS
> - Demonstrates proper model integration and provides a reference implementation
> - iOS requires unzipped MLMODELC files and config.json for offline tokenizer
> - macOS supports both zipped and unzipped model formats
chat.py
ADDED
@@ -0,0 +1,893 @@
#!/usr/bin/env python3
# chat.py
# Copyright (c) 2025 Anemll
# Licensed under the MIT License

import argparse
import os
import re
import glob
from pathlib import Path
import coremltools as ct
from transformers import LlamaTokenizer, AutoTokenizer
import torch
import torch.nn.functional as F
import numpy as np
import queue
import threading
import time
import yaml
import sys

# ANSI color codes
LIGHT_BLUE = "\033[94m"
DARK_BLUE = "\033[34m"
LIGHT_GREEN = "\033[92m"
RESET_COLOR = "\033[0m"

# Add at top with other constants
WARMUP_TOKEN_LIMIT = 10  # Maximum tokens to generate during warmup

class TokenPrinter:
    """Handles background printing of generated tokens."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.token_queue = queue.Queue()
        self.stop_event = threading.Event()
        self.thread = None
        self.buffer = ""
        self.lock = threading.Lock()
        self.thinking = True  # Track if we're still in thinking mode
        self.decoding_buffer = []  # Buffer for token IDs
        # Add token counting and timing
        self.start_time = time.time()
        self.token_count = 0
        self.start()

    def start(self):
        """Start the printer thread."""
        if self.thread is None:
            self.thread = threading.Thread(target=self._print_worker)
            self.thread.daemon = True
            self.thread.start()

    def add_token(self, token_id):
        """Add a token to the print queue."""
        if not self.stop_event.is_set():
            self.token_queue.put(token_id)
            self.token_count += 1

    def drain_buffer(self):
        """Decode token IDs from decoding_buffer in the main thread."""
        if not self.decoding_buffer:
            return

        # Decode all tokens at once in the main thread
        token_str = self.tokenizer.decode(self.decoding_buffer)
        self.decoding_buffer.clear()

        # Store the text in buffer for later saving to file
        with self.lock:
            self.buffer += token_str

        # Color-handling logic
        if self.thinking and "</think>" in token_str:
            self.thinking = False
            parts = token_str.split("</think>")
            if len(parts) > 0:
                print(parts[0] + "</think>", end='', flush=True)
                if len(parts) > 1:
                    print(LIGHT_BLUE + parts[1], end='', flush=True)
        else:
            if not self.thinking:
                print(LIGHT_BLUE + token_str, end='', flush=True)
            else:
                print(token_str, end='', flush=True)

    def _print_worker(self):
        """Worker thread that takes token_ids from the queue."""
        while not self.stop_event.is_set():
            try:
                token_id = self.token_queue.get(timeout=0.01)
                with self.lock:
                    self.decoding_buffer.append(token_id)
                self.token_queue.task_done()
            except queue.Empty:
                continue
            except Exception as e:
                print(f"\nError: Token printer error: {str(e)}")
                break

    def stop(self):
        """Stop the printer thread."""
        if self.thread and self.thread.is_alive():
            # Ensure any remaining tokens are processed
            self.drain_buffer()
            self.stop_event.set()
            try:
                self.thread.join(timeout=1.0)
            except Exception:
                pass
            # Calculate and print tokens/s with shorter format in blue
            elapsed = time.time() - self.start_time
            if elapsed > 0 and self.token_count > 0:
                tokens_per_sec = self.token_count / elapsed
                print(f"\n{DARK_BLUE}{tokens_per_sec:.1f} t/s{RESET_COLOR}")
            else:
                print(RESET_COLOR)  # Reset color at the end
        return self.buffer

def parse_model_path(path):
    """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
    path = Path(path)

    # If path exists exactly as specified, return it
    if path.exists():
        return str(path)

    # Try with both extensions
    candidates = [
        path,                            # Original path
        path.with_suffix('.mlmodelc'),   # With .mlmodelc
        path.with_suffix('.mlpackage'),  # With .mlpackage
        Path(str(path) + '.mlmodelc'),   # Handle case where extension is included
        Path(str(path) + '.mlpackage')
    ]

    # Try all possible paths
    for candidate in candidates:
        if candidate.exists():
            print(f"Found model at: {candidate}")
            return str(candidate)

    # If we get here, no valid path was found
    print("\nError: Model not found. Tried the following paths:")
    for candidate in candidates:
        print(f"  {candidate}")
    raise FileNotFoundError(f"Model not found: {path}")

def parse_ffn_filename(path):
    """Parse FFN model filename to extract chunk information."""
    path = Path(path)
    pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
    match = re.search(pattern, path.name)

    if match:
        current_chunk = int(match.group(1))
        total_chunks = int(match.group(2))
        return current_chunk, total_chunks
    return None, None

def find_all_chunks(base_path):
    """Find all chunk files matching the base FFN path pattern."""
    path = Path(base_path)
    pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
    return sorted(glob.glob(pattern))

def load_model(path, function_name=None):
    """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
    path = Path(path)
    compute_unit = ct.ComputeUnit.CPU_AND_NE

    try:
        if path.suffix == '.mlmodelc':
            # For compiled models (.mlmodelc), use CompiledMLModel
            if function_name:
                return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
            else:
                return ct.models.CompiledMLModel(str(path), compute_unit)
        else:
            # For packages (.mlpackage)
            if function_name:
                return ct.models.MLModel(str(path), function_name=function_name)
            else:
                return ct.models.MLModel(str(path))

    except RuntimeError as e:
        if "valid manifest does not exist" in str(e):
            print(f"\nError: Could not load compiled model at {path}")
            print("This might be because:")
            print("1. The model is not properly compiled")
            print("2. The model was compiled for a different OS version")
            print("3. The model needs to be recompiled")
            print("\nTry using the .mlpackage version instead, or recompile the model.")
        raise

def load_metadata(model, args):
    # Extract metadata and config parameters
    metadata = {}
    if hasattr(model, 'user_defined_metadata'):
        meta = model.user_defined_metadata

        # Extract key parameters with defaults
        metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
        metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length']))
        metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
        metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
        metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))

        print("\nExtracted Parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")

        # Print model info
        print("\nModel Info:")
        if 'com.anemll.info' in meta:
            print(f"  {meta['com.anemll.info']}")
        if 'com.github.apple.coremltools.version' in meta:
            print(f"  CoreML Tools: {meta['com.github.apple.coremltools.version']}")

        # Print model input/output shapes
        print("\nModel Shapes:")
        if hasattr(model, 'input_description'):
            print("  Inputs:")
            for name, desc in model.input_description.items():
                print(f"    {name}: {desc}")
        if hasattr(model, 'output_description'):
            print("  Outputs:")
            for name, desc in model.output_description.items():
                print(f"    {name}: {desc}")
    else:
        print("\nWarning: No metadata found in model")

        # Check if model directory name contains context length pattern (ctxXXX)
        ctx_len = 512
        if args.context_length is None:
            ctx_match = re.search(r'ctx(\d+)', str(args.d))
            if ctx_match:
                ctx_len0 = int(ctx_match.group(1))
                if 512 <= ctx_len0 <= 8096:
                    ctx_len = ctx_len0
                    print(f"\nDetected context length {ctx_len} from directory name")
            else:
                print(f"\nWarning: No context length found in directory name {args.d}; using default {ctx_len}")
        else:
            ctx_len = args.context_length

        # Use defaults or values from args
        metadata['context_length'] = ctx_len
        metadata['state_length'] = ctx_len
        # Get batch size from args or use default
        metadata['batch_size'] = getattr(args, 'batch_size', 64)
        metadata['lut_bits'] = 4
        metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
        print("\nUsing parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")

    # Override with values from args if they exist
    if hasattr(args, 'batch_size') and args.batch_size is not None:
        metadata['batch_size'] = args.batch_size
        print(f"\nOverriding batch size from args: {args.batch_size}")
    if hasattr(args, 'num_chunks') and args.num_chunks is not None:
        metadata['num_chunks'] = args.num_chunks
        print(f"\nOverriding num chunks from args: {args.num_chunks}")

    return metadata

def load_models(args, metadata):
    """Load all required models and extract metadata."""
    print("\nLoading models...")

    try:
        # Load embeddings model
        print("\nLoading embeddings model...")
        embed_path = parse_model_path(args.embed)
        print(f"Loading from: {embed_path}")
        embed_model = load_model(embed_path)
        print("Embeddings model loaded successfully")
        metadata = load_metadata(embed_model, args)

        # Load LM head model
        print("\nLoading LM head model...")
        lmhead_path = parse_model_path(args.lmhead)
        print(f"Loading from: {lmhead_path}")
        lmhead_model = load_model(lmhead_path)
        print("LM head model loaded successfully")

        # Parse FFN path and find chunks if needed
        print("\nLoading FFN+PREFILL model(s)...")
        ffn_path = parse_model_path(args.ffn)
        chunk_no, total_chunks = parse_ffn_filename(ffn_path)

        ffn_models = []
        if chunk_no and total_chunks:
            print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
            # Find and load all chunks
            chunk_paths = find_all_chunks(ffn_path)
            if len(chunk_paths) != total_chunks:
                raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")

            for chunk_path in chunk_paths:
                print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
                try:
                    # For chunked models, we need both infer and prefill functions
                    ffn_models.append({
                        'infer': load_model(chunk_path, function_name='infer'),
                        'prefill': load_model(chunk_path, function_name='prefill')
                    })
                    print("Chunk loaded successfully")
                except Exception as e:
                    print(f"Error loading chunk {chunk_path}: {str(e)}")
                    raise
            metadata = load_metadata(ffn_models[0], args)

        else:
            print("\nLoading single FFN model...")
            ffn_models.append(load_model(ffn_path))
            print("FFN model loaded successfully")

        return embed_model, ffn_models, lmhead_model, metadata

    except Exception as e:
        print(f"\nError loading models: {str(e)}")
        print("\nPlease ensure all model files exist and are accessible.")
        print("Expected files:")
        print(f"  Embeddings: {args.embed}")
        print(f"  LM Head: {args.lmhead}")
        print(f"  FFN: {args.ffn}")
        raise

# At the top of the file, make this a default path

def initialize_tokenizer(model_path=None):
    """Initialize and configure the tokenizer."""
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            str(model_path),
            use_fast=False,
            trust_remote_code=True
        )

        print("\nTokenizer Configuration:")
        print(f"Tokenizer type: {type(tokenizer)}")
        print(f"Tokenizer name: {tokenizer.__class__.__name__}")
        print(f"Vocabulary size: {len(tokenizer)}")
        print(f"Model max length: {tokenizer.model_max_length}")

        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
            tokenizer.pad_token_id = tokenizer.eos_token_id
            print("Set PAD token to EOS token")

        tokenizer.padding_side = "left"

        print("\nSpecial Tokens:")
        print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
        print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
        print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
        print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")

        return tokenizer

    except Exception as e:
        print(f"\nError: Failed to load tokenizer from {model_path}")
        print(f"Error details: {str(e)}")
        print(f"Error type: {type(e)}")
        print("\nThis code requires a Llama 3.2 model for chat template functionality.")
        print("Please provide the path to a Llama 3.2 model directory.")
        import traceback
        traceback.print_exc()
        raise

def make_causal_mask(length, start):
    """Create causal attention mask."""
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask

def initialize_causal_mask(context_length):
    """Initialize causal mask for transformer attention."""
    causal_mask = make_causal_mask(context_length, 0)
    causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
    print(f"\nInitialized causal mask for context length {context_length}")
    return causal_mask

def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None, causal_mask=None):
    """Run prefill on the input sequence."""
    # Use provided causal mask or create one if not provided
    if causal_mask is None:
        causal_mask = make_causal_mask(context_length, 0)
        causal_mask = torch.tensor(causal_mask, dtype=torch.float16)

    # Process in batches
    batch_pos = 0
    while batch_pos < context_pos:
        batch_end = min(batch_pos + batch_size, context_pos)
        current_batch_size = batch_end - batch_pos

        # Get current batch
        batch_input = input_ids[:, batch_pos:batch_end]

        # Always pad to full batch size for prefill
        batch_input = F.pad(
            batch_input,
            (0, batch_size - current_batch_size),
            value=0
        )

        # Generate position IDs for full batch size
        position_ids = torch.arange(batch_size, dtype=torch.int32)  # Always use full batch size
        batch_causal_mask = causal_mask[:, :, :batch_size, :]       # Use full batch size

        # Run embeddings with proper batch size
        hidden_states = torch.from_numpy(
            embed_model.predict({
                'input_ids': batch_input.numpy(),
                'batch_size': np.array([batch_size], dtype=np.int32)  # Add batch_size parameter
            })['hidden_states']
        )

        # Run through FFN chunks with state
        for ffn_model in ffn_models:
            if isinstance(ffn_model, dict):
                inputs = {
                    'hidden_states': hidden_states.numpy(),           # [1, 64, hidden_size]
                    'position_ids': position_ids.numpy(),             # [64]
                    'causal_mask': batch_causal_mask.numpy(),         # [1, 1, 64, context_length]
                    'current_pos': np.array([batch_pos], dtype=np.int32)  # [1]
                }
                output = ffn_model['prefill'].predict(inputs, state)
                hidden_states = torch.from_numpy(output['output_hidden_states'])

        batch_pos = batch_end

    return torch.tensor([context_pos], dtype=torch.int32)

def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state=None, causal_mask=None, temperature=0.0):
    """Generate the next token."""
    # Get current token
    current_token = input_ids[:, pos-1:pos]  # [1, 1]

    # Run embeddings
    hidden_states = torch.from_numpy(
        embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
    )  # [1, 1, hidden_size]

    # Create masks
    update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
    update_mask[0, 0, pos-1, 0] = 1.0
    position_ids = torch.tensor([pos-1], dtype=torch.int32)  # [1]

    # Use provided causal mask or create one if not provided
    if causal_mask is None:
        causal_mask_data = make_causal_mask(context_length, 0)
        single_causal_mask = torch.tensor(causal_mask_data[:, :, pos-1:pos, :], dtype=torch.float16)  # [1, 1, 1, context_length]
    else:
        single_causal_mask = causal_mask[:, :, pos-1:pos, :]

    # Run through FFN chunks with state
    for ffn_model in ffn_models:
        if isinstance(ffn_model, dict):
            inputs = {
                'hidden_states': hidden_states.numpy(),
                'update_mask': update_mask.numpy(),
                'position_ids': position_ids.numpy(),
                'causal_mask': single_causal_mask.numpy(),
                'current_pos': position_ids.numpy()
            }
            output = ffn_model['infer'].predict(inputs, state)
            hidden_states = torch.from_numpy(output['output_hidden_states'])

    # Run LM head
    lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
    # Debug print
    #print("\nLM Head output keys:", list(lm_output.keys()))

    # Combine logits1-8 if they exist
    if 'logits1' in lm_output:
        # Concatenate all logits parts
        logits_parts = []
        for i in range(1, 9):
            key = f'logits{i}'
            if key in lm_output:
                logits_parts.append(torch.from_numpy(lm_output[key]))
        logits = torch.cat(logits_parts, dim=-1)  # Concatenate along vocab dimension
    else:
        # Try output_logits as fallback
        logits = torch.from_numpy(lm_output['output_logits'])

    # Apply temperature and sample
    if temperature > 0:
        logits = logits / temperature
        probs = F.softmax(logits[0, -1, :], dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
    else:
        next_token = torch.argmax(logits[0, -1, :]).item()

    return next_token

def create_unified_state(ffn_models, context_length):
    """Create unified KV cache state for transformer."""
    if isinstance(ffn_models[0], dict):
        # Use first FFN model's prefill function to create state
        state = ffn_models[0]['prefill'].make_state()
        print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
        return state
    else:
        state = ffn_models[0].make_state()
        print("\nCreated unified transformer state")
        return state

def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask=None, auto_prompt=None, warmup=False, save_file=None):
    """Interactive chat loop."""
    context_length = metadata.get('context_length')
    batch_size = metadata.get('batch_size', 64)

    if not warmup:
        print(f"\nUsing context length: {context_length}")
        print("\nStarting chat session. Press Ctrl+D to exit.")
        print("Type your message and press Enter to chat.")

    # Check if tokenizer has chat template and if it works
    has_chat_template = False
    try:
        # Test if chat template works
        test_messages = [{"role": "user", "content": "test"}]
        tokenizer.apply_chat_template(test_messages, return_tensors="pt")
        has_chat_template = True
        if not warmup:
            print("\nUsing chat template for prompts")
    except Exception:
        if not warmup:
            print("\nUsing manual formatting for prompts")

    conversation = []

    try:
        while True:
            try:
                if not warmup:
                    print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
                if auto_prompt is not None:
                    user_input = auto_prompt
                    if not warmup:
                        print(user_input)
                else:
                    user_input = input().strip()
            except EOFError:
                if not warmup:
                    print("\nExiting chat...")
                break

            if not user_input:
                continue

            # Format prompt based on tokenizer capabilities
            if has_chat_template:
                messages = [{"role": "user", "content": user_input}]
                input_ids = tokenizer.apply_chat_template(
                    messages,
                    return_tensors="pt",
                    add_generation_prompt=True
                ).to(torch.int32)
            else:
                # Manual formatting for Llama models without chat template
                formatted_prompt = f"[INST] {user_input} [/INST]"
                input_ids = tokenizer(
                    formatted_prompt,
                    return_tensors="pt",
                    add_special_tokens=True
                ).input_ids.to(torch.int32)

            context_pos = input_ids.size(1)

            if not warmup:
                print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)

            # Initialize token printer
            token_printer = TokenPrinter(tokenizer)
            tokens_generated = 0  # Track number of tokens

            try:
                # Start prefill timing
                prefill_start = time.time()

                # Run prefill with state and causal mask
                current_pos = run_prefill(
                    embed_model,
                    ffn_models,
                    input_ids,
                    context_pos,
                    context_length,
                    batch_size,
                    state,
                    causal_mask
                )

                # Calculate prefill timing
                prefill_time = time.time() - prefill_start
                prefill_tokens = context_pos  # Number of tokens in input
                prefill_tokens_per_sec = prefill_tokens / prefill_time if prefill_time > 0 else 0

                # Generation loop with state
                pos = context_pos
                inference_start = time.time()
                inference_tokens = 0

                while pos < context_length - 1:
                    # Generate next token with causal mask
                    next_token = generate_next_token(
                        embed_model,
                        ffn_models,
                        lmhead_model,
                        input_ids,
                        pos,
                        context_length,
                        state,
                        causal_mask
                    )

                    # Add token to sequence
                    if pos < input_ids.size(1):
                        input_ids[0, pos] = next_token
                    else:
                        input_ids = torch.cat([
                            input_ids,
                            torch.tensor([[next_token]], dtype=torch.int32)
                        ], dim=1)

                    # Add to printer only if not in warmup
                    if not warmup:
                        token_printer.add_token(next_token)
                        token_printer.drain_buffer()

                    pos += 1
                    tokens_generated += 1
                    inference_tokens += 1

                    # Check limits
                    if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
                        break

                    if next_token == tokenizer.eos_token_id:
                        break

                # Calculate inference timing
                inference_time = time.time() - inference_start
                inference_tokens_per_sec = inference_tokens / inference_time if inference_time > 0 else 0

                # Get final response and add to conversation
                if not warmup:
                    response = token_printer.stop()
                    # Print timing stats
                    prefill_ms = prefill_time * 1000  # Convert to milliseconds
                    print(f"\nPrefill: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s)")
                    print(f"Inference: {inference_tokens_per_sec:.1f} t/s")
                    print(f"Total: Generated {tokens_generated} tokens in {prefill_time + inference_time:.2f}s")
                    conversation.append({"role": "assistant", "content": response})

                    # Save response to file if requested
                    if save_file:
                        try:
                            # Add small delay to ensure all tokens are processed
                            time.sleep(0.5)

                            # Make sure response ends with EOS token if it's supposed to
                            if response and not response.endswith("<|eot_id|>") and not response.endswith("</s>"):
                                if tokenizer.eos_token:
                                    eos_text = tokenizer.decode([tokenizer.eos_token_id])
                                    if not response.endswith(eos_text):
                                        print(f"\n{DARK_BLUE}Adding missing EOS token for consistency{RESET_COLOR}")
                                        response += eos_text

                            with open(save_file, 'w') as f:
                                f.write(response)
                            print(f"\n{DARK_BLUE}Response saved to file: {save_file}{RESET_COLOR}")
                        except Exception as e:
                            print(f"\n{DARK_BLUE}Error saving to file: {str(e)}{RESET_COLOR}")
                else:
                    token_printer.stop()  # Clean up without printing stats

                # Exit after one response in auto_prompt mode
                if auto_prompt is not None:
                    break

            except KeyboardInterrupt:
                print("\nGeneration interrupted")
                token_printer.stop()
                continue

    except Exception as e:
        print(f"\nError in chat loop: {str(e)}")
        import traceback
        traceback.print_exc()

def parse_args():
    parser = argparse.ArgumentParser(description='Chat with CoreML LLaMA, GIL resolved (c) 2025 Anemll')

    # Add meta.yaml option
    parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')

    # Model paths
    parser.add_argument('--d', '--dir', type=str, default='.',
                        help='Directory containing model files (default: current directory)')
    parser.add_argument('--embed', type=str, required=False,
                        help='Path to embeddings model (relative to --dir)')
    parser.add_argument('--ffn', type=str, required=False,
                        help='Path to FFN model (can be chunked, relative to --dir)')
    parser.add_argument('--lmhead', type=str, required=False,
                        help='Path to LM head model (relative to --dir)')
    parser.add_argument('--tokenizer', type=str, required=False,
                        help='Path to tokenizer')

    # Add new argument for auto-generation
    parser.add_argument('--prompt', type=str,
                        help='If specified, run once with this prompt and exit')

    # Add save option
    parser.add_argument('--save', type=str,
                        help='Save assistant\'s response to specified file')

    # Add no-warmup flag
    parser.add_argument('--nw', action='store_true',
                        help='Skip warmup phase')

    # Model configuration
    parser.add_argument('--context-length', type=int,
                        help='Context length for the model (default: 512); if not provided, it is detected from the model directory name (ctxNUMBER)')
    parser.add_argument('--batch-size', type=int,
                        help='Batch size for prefill (default: 64)')

    args = parser.parse_args()

    # If meta.yaml is provided, load parameters from it
    if args.meta:
        try:
            with open(args.meta, 'r') as f:
                meta = yaml.safe_load(f)
                params = meta['model_info']['parameters']

            # Set model directory to meta.yaml directory if not specified
            if not args.d or args.d == '.':
                args.d = str(Path(args.meta).parent)

            # Build model paths based on parameters
            prefix = params.get('model_prefix', 'llama')  # Default to 'llama' if not specified
            lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
            lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
            lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
            num_chunks = int(params['num_chunks'])

            # Set model paths if not specified
            if not args.lmhead:
                args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
            if not args.embed:
                args.embed = f'{prefix}_embeddings{lut_embeddings}'
            if not args.ffn:
                args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
            if not args.tokenizer:
                args.tokenizer = args.d

            # Set other parameters if not overridden by command line
            if args.context_length is None:
                args.context_length = int(params['context_length'])
            if args.batch_size is None:
                args.batch_size = int(params['batch_size'])
            args.num_chunks = num_chunks

            print(f"\nLoaded parameters from {args.meta}:")
            print(f"  Context Length: {args.context_length}")
            print(f"  Batch Size: {args.batch_size}")
            print(f"  Num Chunks: {args.num_chunks}")
            print(f"  Models Directory: {args.d}")
            print(f"  Embeddings: {args.embed}")
            print(f"  LM Head: {args.lmhead}")
            print(f"  FFN: {args.ffn}")

        except Exception as e:
            print(f"\nError loading meta.yaml: {str(e)}")
            sys.exit(1)

    return args

def main():
    args = parse_args()

    # Convert directory to absolute path
    model_dir = Path(args.d).resolve()
    if not model_dir.exists():
        print(f"\nError: Model directory not found: {model_dir}")
        return 1

    print(f"\nUsing model directory: {model_dir}")
    print(f"Context length: {args.context_length}")

    try:
        # Update paths to be relative to model directory
        args.embed = str(model_dir / args.embed)
        args.ffn = str(model_dir / args.ffn)
        args.lmhead = str(model_dir / args.lmhead)

        # Handle tokenizer path separately since it's not relative to model_dir
        if args.tokenizer is None:
            args.tokenizer = str(model_dir)

        if not Path(args.tokenizer).exists():
            print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
            return 1

        args.tokenizer = str(Path(args.tokenizer).resolve())  # Convert to absolute path
        print(f"Using tokenizer path: {args.tokenizer}")

        metadata = {}
        # Load models and extract metadata
        embed_model, ffn_models, lmhead_model, metadata = load_models(args, metadata)

        print(f"\nMetadata before args.context_length override: {metadata}")

        # Override context length from command line if provided
        if args.context_length is not None:
            metadata['context_length'] = args.context_length
            metadata['state_length'] = args.context_length  # Also update state_length
            print(f"\nOverriding context length from command line: {args.context_length}")

        print(f"\nMetadata after load_models: {metadata}")

        # Load tokenizer with resolved path
        tokenizer = initialize_tokenizer(args.tokenizer)
        if tokenizer is None:
            raise RuntimeError("Failed to initialize tokenizer")

        # Create unified state once
        state = create_unified_state(ffn_models, metadata['context_length'])

        # Initialize causal mask once
        causal_mask = initialize_causal_mask(metadata['context_length'])

        # Warmup runs to prevent Python GIL issues with CoreML!
        if not args.nw:
            for i in range(2):
                chat_loop(
                    embed_model=embed_model,
                    ffn_models=ffn_models,
                    lmhead_model=lmhead_model,
                    tokenizer=tokenizer,
                    metadata=metadata,
                    state=state,
                    causal_mask=causal_mask,  # Pass the causal mask
                    warmup=True,
                    auto_prompt="who are you?"
                )

        # Main run
        chat_loop(
            embed_model=embed_model,
            ffn_models=ffn_models,
            lmhead_model=lmhead_model,
            tokenizer=tokenizer,
            metadata=metadata,
            state=state,
            causal_mask=causal_mask,  # Pass the causal mask
            warmup=False,
            auto_prompt=args.prompt,
            save_file=args.save
        )

    except Exception as e:
        print(f"\nError: {str(e)}")
        import traceback
        traceback.print_exc()
        return 1

    return 0

if __name__ == "__main__":
    exit(main())
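To make the attention masking in `chat.py` concrete, here is a small standalone check of the `make_causal_mask` helper (the logic is copied from the file above; the tiny length is purely illustrative):

```python
# Minimal check of the causal mask logic from chat.py, at a toy size.
# Position (row) i may attend to columns j <= i + start; all others get -inf.
import numpy as np

def make_causal_mask(length, start):
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask

m = make_causal_mask(4, 0)[0, 0]
print((m == 0).astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
```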
chat_full.py
ADDED
@@ -0,0 +1,976 @@
1 |
+
# chat.py
|
2 |
+
#!/usr/bin/env python3
|
3 |
+
# chat.py
|
4 |
+
# Copyright (c) 2025 Anemll
|
5 |
+
# Licensed under the MIT License
|
6 |
+
|
7 |
+
import argparse
|
8 |
+
import os
|
9 |
+
import re
|
10 |
+
import glob
|
11 |
+
from pathlib import Path
|
12 |
+
import coremltools as ct
|
13 |
+
from transformers import LlamaTokenizer, AutoTokenizer
|
14 |
+
import torch
|
15 |
+
import torch.nn.functional as F
|
16 |
+
import numpy as np
|
17 |
+
import queue
|
18 |
+
import threading
|
19 |
+
import time
|
20 |
+
import yaml
|
21 |
+
import sys
|
22 |
+
|
23 |
+
# ANSI color codes
|
24 |
+
LIGHT_BLUE = "\033[94m"
|
25 |
+
DARK_BLUE = "\033[34m"
|
26 |
+
LIGHT_GREEN = "\033[92m"
|
27 |
+
RESET_COLOR = "\033[0m"
|
28 |
+
|
29 |
+
# Add at the top with other constants
|
30 |
+
WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
|
31 |
+
THINKING_MODE = False
|
32 |
+
THINKING_PROMPT = """You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."""
|
33 |
+
DEBUG_LEVEL = 0 # Default debug level
|
34 |
+
|
35 |
+
class TokenPrinter:
|
36 |
+
"""Handles background printing of generated tokens."""
|
37 |
+
def __init__(self, tokenizer):
|
38 |
+
self.tokenizer = tokenizer
|
39 |
+
self.token_queue = queue.Queue()
|
40 |
+
self.stop_event = threading.Event()
|
41 |
+
self.thread = None
|
42 |
+
self.buffer = ""
|
43 |
+
self.lock = threading.Lock()
|
44 |
+
self.thinking = True # Track if we're still in thinking mode
|
45 |
+
self.decoding_buffer = [] # Buffer for token IDs
|
46 |
+
# Timing and stats tracking
|
47 |
+
self.start_time = time.time()
|
48 |
+
self.token_count = 0
|
49 |
+
self.prefill_time = 0
|
50 |
+
self.inference_time = 0
|
51 |
+
self.context_pos = 0
|
52 |
+
self.start()
|
53 |
+
|
54 |
+
def start(self):
|
55 |
+
"""Start the printer thread."""
|
56 |
+
if self.thread is None:
|
57 |
+
self.thread = threading.Thread(target=self._print_worker)
|
58 |
+
self.thread.daemon = True
|
59 |
+
self.thread.start()
|
60 |
+
|
61 |
+
def add_token(self, token_id):
|
62 |
+
"""Add a token to the print queue."""
|
63 |
+
if not self.stop_event.is_set():
|
64 |
+
self.token_queue.put(token_id)
|
65 |
+
self.token_count += 1
|
66 |
+
|
67 |
+
def drain_buffer(self):
|
68 |
+
"""Decode token IDs from decoding_buffer in the main thread."""
|
69 |
+
if not self.decoding_buffer:
|
70 |
+
return
|
71 |
+
|
72 |
+
# Decode all tokens at once in the main thread
|
73 |
+
token_str = self.tokenizer.decode(self.decoding_buffer)
|
74 |
+
self.decoding_buffer.clear()
|
75 |
+
|
76 |
+
# Color-handling logic
|
77 |
+
if self.thinking and "</think>" in token_str:
|
78 |
+
self.thinking = False
|
79 |
+
parts = token_str.split("</think>")
|
80 |
+
if len(parts) > 0:
|
81 |
+
print(parts[0] + "</think>", end='', flush=True)
|
82 |
+
if len(parts) > 1:
|
83 |
+
print(LIGHT_BLUE + parts[1], end='', flush=True)
|
84 |
+
else:
|
85 |
+
if not self.thinking:
|
86 |
+
print(LIGHT_BLUE + token_str, end='', flush=True)
|
87 |
+
else:
|
88 |
+
print(token_str, end='', flush=True)
|
89 |
+
|
90 |
+
def _print_worker(self):
|
91 |
+
"""Worker thread that takes token_ids from the queue."""
|
92 |
+
while not self.stop_event.is_set():
|
93 |
+
try:
|
94 |
+
token_id = self.token_queue.get(timeout=0.01)
|
95 |
+
with self.lock:
|
96 |
+
self.decoding_buffer.append(token_id)
|
97 |
+
self.token_queue.task_done()
|
98 |
+
except queue.Empty:
|
99 |
+
continue
|
100 |
+
except Exception as e:
|
101 |
+
print(f"\nError: Token printer error: {str(e)}")
|
102 |
+
break
|
103 |
+
|
104 |
+
def stop(self):
|
105 |
+
"""Stop the printer thread."""
|
106 |
+
if self.thread and self.thread.is_alive():
|
107 |
+
self.stop_event.set()
|
108 |
+
try:
|
109 |
+
self.thread.join(timeout=1.0)
|
110 |
+
except Exception:
|
111 |
+
pass
|
112 |
+
print(RESET_COLOR) # Reset color at the end
|
113 |
+
return self.buffer
|
114 |
+
|
115 |
+
def set_timing(self, prefill_time, inference_time, context_pos):
|
116 |
+
"""Set timing information."""
|
117 |
+
self.prefill_time = prefill_time
|
118 |
+
self.inference_time = inference_time
|
119 |
+
self.context_pos = context_pos
|
120 |
+
|
121 |
+
def parse_model_path(path):
|
122 |
+
"""Parse model path and return full path with .mlmodelc or .mlpackage extension."""
|
123 |
+
path = Path(path)
|
124 |
+
|
125 |
+
# If path exists exactly as specified, return it
|
126 |
+
if path.exists():
|
127 |
+
return str(path)
|
128 |
+
|
129 |
+
# Try with both extensions
|
130 |
+
candidates = [
|
131 |
+
path, # Original path
|
132 |
+
path.with_suffix('.mlmodelc'), # With .mlmodelc
|
133 |
+
path.with_suffix('.mlpackage'), # With .mlpackage
|
134 |
+
Path(str(path) + '.mlmodelc'), # Handle case where extension is included
|
135 |
+
Path(str(path) + '.mlpackage')
|
136 |
+
]
|
137 |
+
|
138 |
+
# Try all possible paths
|
139 |
+
for candidate in candidates:
|
140 |
+
if candidate.exists():
|
141 |
+
print(f"Found model at: {candidate}")
|
142 |
+
return str(candidate)
|
143 |
+
|
144 |
+
# If we get here, no valid path was found
|
145 |
+
print("\nError: Model not found. Tried following paths:")
|
146 |
+
for candidate in candidates:
|
147 |
+
print(f" {candidate}")
|
148 |
+
raise FileNotFoundError(f"Model not found: {path}")
|
149 |
+
|
150 |
+
def parse_ffn_filename(path):
|
151 |
+
"""Parse FFN model filename to extract chunk information."""
|
152 |
+
path = Path(path)
|
153 |
+
pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
|
154 |
+
match = re.search(pattern, path.name)
|
155 |
+
|
156 |
+
if match:
|
157 |
+
current_chunk = int(match.group(1))
|
158 |
+
total_chunks = int(match.group(2))
|
159 |
+
return current_chunk, total_chunks
|
160 |
+
return None, None
|
161 |
+

def find_all_chunks(base_path):
    """Find all chunk files matching the base FFN path pattern."""
    path = Path(base_path)
    pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
    return sorted(glob.glob(pattern))

def load_model(path, function_name=None):
    """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
    path = Path(path)
    compute_unit = ct.ComputeUnit.CPU_AND_NE

    try:
        if path.suffix == '.mlmodelc':
            # For compiled models (.mlmodelc), use CompiledMLModel
            if function_name:
                return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
            else:
                return ct.models.CompiledMLModel(str(path), compute_unit)
        else:
            # For packages (.mlpackage)
            if function_name:
                return ct.models.MLModel(str(path), function_name=function_name)
            else:
                return ct.models.MLModel(str(path))

    except RuntimeError as e:
        if "valid manifest does not exist" in str(e):
            print(f"\nError: Could not load compiled model at {path}")
            print("This might be because:")
            print("1. The model is not properly compiled")
            print("2. The model was compiled for a different OS version")
            print("3. The model needs to be recompiled")
            print("\nTry using the .mlpackage version instead, or recompile the model.")
        raise

def parse_args():
    parser = argparse.ArgumentParser(description='Full Chat with CoreML LLaMA with context window shifting, GIL resolved (c) 2025 Anemll')

    # Add meta.yaml option
    parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')

    # Add existing arguments
    parser.add_argument('--d', '--dir', type=str, default='.',
                        help='Directory containing model files (default: current directory)')
    parser.add_argument('--embed', type=str, required=False,
                        help='Path to embeddings model (relative to --dir)')
    parser.add_argument('--ffn', type=str, required=False,
                        help='Path to FFN model (can be chunked, relative to --dir)')
    parser.add_argument('--lmhead', type=str, required=False,
                        help='Path to LM head model (relative to --dir)')
    parser.add_argument('--tokenizer', type=str, required=False,
                        help='Path to tokenizer')

    # Add new argument for auto-generation
    parser.add_argument('--prompt', type=str,
                        help='If specified, run once with this prompt and exit')

    # Add no-warmup flag
    parser.add_argument('--nw', action='store_true',
                        help='Skip warmup phase')

    # Add debug level
    parser.add_argument('--debug-level', type=int, default=0,
                        help='Debug level (0=none, 1=print prompts, 2=more verbose)')

    # Model configuration
    parser.add_argument('--context-length', type=int,
                        help='Context length for the model (default: 512); if not provided, it is detected from a ctxNUMBER pattern in the model directory name')
    parser.add_argument('--batch-size', type=int,
                        help='Batch size for prefill (default: 64)')

    args = parser.parse_args()

    # If meta.yaml is provided, load parameters from it
    if args.meta:
        try:
            with open(args.meta, 'r') as f:
                meta = yaml.safe_load(f)
            params = meta['model_info']['parameters']

            # Set model directory to meta.yaml directory if not specified
            if not args.d or args.d == '.':
                args.d = str(Path(args.meta).parent)

            # Build model paths based on parameters
            prefix = params.get('model_prefix', 'llama')  # Default to 'llama' if not specified
            lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
            lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
            lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
            num_chunks = int(params['num_chunks'])

            # Set model paths if not specified
            if not args.lmhead:
                args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
            if not args.embed:
                args.embed = f'{prefix}_embeddings{lut_embeddings}'
            if not args.ffn:
                args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
            if not args.tokenizer:
                args.tokenizer = args.d

            # Set other parameters if not overridden by command line
            if args.context_length is None:
                args.context_length = int(params['context_length'])
            if args.batch_size is None:
                args.batch_size = int(params['batch_size'])
            args.num_chunks = num_chunks

            print(f"\nLoaded parameters from {args.meta}:")
            print(f"  Context Length: {args.context_length}")
            print(f"  Batch Size: {args.batch_size}")
            print(f"  Num Chunks: {args.num_chunks}")
            print(f"  Models Directory: {args.d}")
            print(f"  Embeddings: {args.embed}")
            print(f"  LM Head: {args.lmhead}")
            print(f"  FFN: {args.ffn}")

        except Exception as e:
            print(f"\nError loading meta.yaml: {str(e)}")
            sys.exit(1)

    return args
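# A hypothetical meta.yaml matching the keys read above (illustrative only;
# consult the meta.yaml shipped in this repo for the actual values):
#
#   model_info:
#     parameters:
#       model_prefix: llama
#       context_length: 512
#       batch_size: 64
#       num_chunks: 2
#       lut_ffn: 8
#       lut_lmhead: 8
#       lut_embeddings: 8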

def load_metadata(model, args):
    # Extract metadata and config parameters
    metadata = {}
    if hasattr(model, 'user_defined_metadata'):
        meta = model.user_defined_metadata

        # Extract key parameters with defaults
        metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
        metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length']))
        metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
        metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
        metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))

        print("\nExtracted Parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")

        # Print model info
        print("\nModel Info:")
        if 'com.anemll.info' in meta:
            print(f"  {meta['com.anemll.info']}")
        if 'com.github.apple.coremltools.version' in meta:
            print(f"  CoreML Tools: {meta['com.github.apple.coremltools.version']}")

        # Print model input/output shapes
        print("\nModel Shapes:")
        if hasattr(model, 'input_description'):
            print("  Inputs:")
            for name, desc in model.input_description.items():
                print(f"    {name}: {desc}")
        if hasattr(model, 'output_description'):
            print("  Outputs:")
            for name, desc in model.output_description.items():
                print(f"    {name}: {desc}")
    else:
        print("\nWarning: No metadata found in model")

        # Check if model directory name contains a context length pattern (ctxXXX)
        ctx_len = 512
        if args.context_length is None:
            import re
            ctx_match = re.search(r'ctx(\d+)', str(args.d))
            if ctx_match:
                ctx_len0 = int(ctx_match.group(1))
                if 512 <= ctx_len0 <= 8096:
                    ctx_len = ctx_len0
                    print(f"\nDetected context length {ctx_len} from directory name")
            else:
                print(f"\nWarning: No context length pattern found in directory name {args.d}; using default {ctx_len}")
        else:
            ctx_len = args.context_length

        # Use defaults or values from args
        metadata['context_length'] = ctx_len
        metadata['state_length'] = ctx_len
        # Get batch size from args, falling back to the default of 64 when unset
        metadata['batch_size'] = args.batch_size if getattr(args, 'batch_size', None) is not None else 64
        metadata['lut_bits'] = 4
        metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
        print("\nUsing parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")

    # Override with values from args if they exist
    if hasattr(args, 'batch_size') and args.batch_size is not None:
        metadata['batch_size'] = args.batch_size
        print(f"\nOverriding batch size from args: {args.batch_size}")
    if hasattr(args, 'num_chunks') and args.num_chunks is not None:
        metadata['num_chunks'] = args.num_chunks
        print(f"\nOverriding num chunks from args: {args.num_chunks}")

    return metadata

def load_models(args, metadata):
    """Load all required models and extract metadata."""
    print("\nLoading models...")

    try:
        # Load embeddings model
        print("\nLoading embeddings model...")
        embed_path = parse_model_path(args.embed)
        print(f"Loading from: {embed_path}")
        embed_model = load_model(embed_path)
        print("Embeddings model loaded successfully")
        metadata = load_metadata(embed_model, args)

        # Load LM head model
        print("\nLoading LM head model...")
        lmhead_path = parse_model_path(args.lmhead)
        print(f"Loading from: {lmhead_path}")
        lmhead_model = load_model(lmhead_path)
        print("LM head model loaded successfully")

        # Parse FFN path and find chunks if needed
        print("\nLoading FFN+PREFILL model(s)...")
        ffn_path = parse_model_path(args.ffn)
        chunk_no, total_chunks = parse_ffn_filename(ffn_path)

        ffn_models = []
        if chunk_no and total_chunks:
            print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
            # Find and load all chunks
            chunk_paths = find_all_chunks(ffn_path)
            if len(chunk_paths) != total_chunks:
                raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")

            for chunk_path in chunk_paths:
                print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
                try:
                    # For chunked models, we need both infer and prefill functions
                    ffn_models.append({
                        'infer': load_model(chunk_path, function_name='infer'),
                        'prefill': load_model(chunk_path, function_name='prefill')
                    })
                    print("Chunk loaded successfully")
                except Exception as e:
                    print(f"Error loading chunk {chunk_path}: {str(e)}")
                    raise
            metadata = load_metadata(ffn_models[0], args)

        else:
            print("\nLoading single FFN model...")
            ffn_models.append(load_model(ffn_path))
            print("FFN model loaded successfully")

        return embed_model, ffn_models, lmhead_model, metadata

    except Exception as e:
        print(f"\nError loading models: {str(e)}")
        print("\nPlease ensure all model files exist and are accessible.")
        print("Expected files:")
        print(f"  Embeddings: {args.embed}")
        print(f"  LM Head: {args.lmhead}")
        print(f"  FFN: {args.ffn}")
        raise

# At the top of the file, make this a default path

def initialize_tokenizer(model_path=None):
    """Initialize and configure the tokenizer."""
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            str(model_path),
            use_fast=False,
            trust_remote_code=True
        )

        print("\nTokenizer Configuration:")
        print(f"Tokenizer type: {type(tokenizer)}")
        print(f"Tokenizer name: {tokenizer.__class__.__name__}")
        print(f"Vocabulary size: {len(tokenizer)}")
        print(f"Model max length: {tokenizer.model_max_length}")

        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
            tokenizer.pad_token_id = tokenizer.eos_token_id
            print("Set PAD token to EOS token")

        tokenizer.padding_side = "left"

        print("\nSpecial Tokens:")
        print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
        print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
        print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
        print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")

        return tokenizer

    except Exception as e:
        print(f"\nError: Failed to load tokenizer from {model_path}")
        print(f"Error details: {str(e)}")
        print(f"Error type: {type(e)}")
        print("\nThis code requires a Llama 3.2 model for chat template functionality.")
        print("Please provide the path to a Llama 3.2 model directory.")
        import traceback
        traceback.print_exc()
        raise

def make_causal_mask(length, start):
    """Create causal attention mask."""
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask
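# Illustrative check (editorial, not part of the original file): for length=4,
# start=0 the mask above is 0 on and below the diagonal and -inf elsewhere, so
# position i may attend only to positions j <= i:
#   make_causal_mask(4, 0)[0, 0] ==
#     [[  0, -inf, -inf, -inf],
#      [  0,    0, -inf, -inf],
#      [  0,    0,    0, -inf],
#      [  0,    0,    0,    0]]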

def run_prefill(embed_model, ffn_models, input_ids, current_pos, context_length, batch_size, state, causal_mask):
    """Run prefill on the input sequence."""
    #print(f"[DEBUG] Running prefill from 0 to {current_pos}")

    # Process in batches
    batch_pos = 0
    while batch_pos < current_pos:
        batch_end = min(batch_pos + batch_size, current_pos)
        current_batch_size = batch_end - batch_pos

        #print(f"[DEBUG] Prefill batch {batch_pos}-{batch_end} (size={current_batch_size})")

        # Get current batch
        batch_input = input_ids[:, batch_pos:batch_end]

        # Pad to full batch size
        batch_input = F.pad(
            batch_input,
            (0, batch_size - current_batch_size),
            value=0
        )

        # Generate position IDs for this batch
        position_ids = torch.arange(batch_pos, batch_pos + batch_size, dtype=torch.int32)

        # Use the pre-initialized causal mask and extract the batch portion
        batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos + batch_size, :]

        # Run embeddings
        hidden_states = torch.from_numpy(
            embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
        )

        # Run through FFN chunks
        for ffn_model in ffn_models:
            if isinstance(ffn_model, dict):
                inputs = {
                    'hidden_states': hidden_states.numpy(),
                    'position_ids': position_ids.numpy(),
                    'causal_mask': batch_causal_mask.numpy(),
                    'current_pos': np.array([batch_pos], dtype=np.int32)
                }
                output = ffn_model['prefill'].predict(inputs, state)
                hidden_states = torch.from_numpy(output['output_hidden_states'])

        batch_pos = batch_end

    return torch.tensor([current_pos], dtype=torch.int32)
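# Worked example (editorial): with context_length=512 and batch_size=64 as in
# this repo's metadata, a 200-token prompt is prefilled in four predict calls
# covering positions 0-63, 64-127, 128-191 and 192-199; the last batch is
# zero-padded up to 64 tokens before being fed to the 'prefill' function.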

def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state, causal_mask, temperature=0.0):
    """Generate the next token."""
    # Get current token
    current_token = input_ids[:, pos-1:pos]

    # Run embeddings
    hidden_states = torch.from_numpy(
        embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
    )

    # Create masks
    update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
    update_mask[0, 0, pos-1, 0] = 1.0
    position_ids = torch.tensor([pos-1], dtype=torch.int32)

    # Use the pre-initialized causal mask and extract the single position portion
    single_causal_mask = causal_mask[:, :, pos-1:pos, :]

    # Run through FFN chunks
    for ffn_model in ffn_models:
        if isinstance(ffn_model, dict):
            inputs = {
                'hidden_states': hidden_states.numpy(),
                'update_mask': update_mask.numpy(),
                'position_ids': position_ids.numpy(),
                'causal_mask': single_causal_mask.numpy(),
                'current_pos': position_ids.numpy()
            }
            output = ffn_model['infer'].predict(inputs, state)
            hidden_states = torch.from_numpy(output['output_hidden_states'])

    # Run LM head and get next token
    lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})

    if 'logits1' in lm_output:
        logits_parts = []
        for i in range(1, 9):
            key = f'logits{i}'
            if key in lm_output:
                logits_parts.append(torch.from_numpy(lm_output[key]))
        logits = torch.cat(logits_parts, dim=-1)
    else:
        logits = torch.from_numpy(lm_output['output_logits'])

    if temperature > 0:
        logits = logits / temperature
        probs = F.softmax(logits[0, -1, :], dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
    else:
        next_token = torch.argmax(logits[0, -1, :]).item()

    return next_token
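# Note (editorial): the LM head appears to be exported with its output split
# into up to eight tensors ('logits1'..'logits8') that are concatenated back
# together above, presumably to keep each output within ANE-friendly sizes;
# the 'output_logits' branch handles unsplit exports.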

def create_unified_state(ffn_models, context_length):
    """Create unified KV cache state for the transformer."""
    if isinstance(ffn_models[0], dict):
        # Use first FFN model's prefill function to create state
        state = ffn_models[0]['prefill'].make_state()
        print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
        return state
    else:
        state = ffn_models[0].make_state()
        print("\nCreated unified transformer state")
        return state

def initialize_causal_mask(context_length):
    """Initialize causal mask for transformer attention."""
    causal_mask = make_causal_mask(context_length, 0)
    causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
    print(f"\nInitialized causal mask for context length {context_length}")
    return causal_mask

def get_user_input():
    """Get input from user, handling special key combinations."""
    global THINKING_MODE
    try:
        import termios
        import tty
        import sys

        def _getch():
            fd = sys.stdin.fileno()
            old_settings = termios.tcgetattr(fd)
            try:
                tty.setraw(sys.stdin.fileno())
                ch = sys.stdin.read(1)
            finally:
                termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
            return ch

        buffer = []
        while True:
            char = _getch()

            # Debug: print the character code
            print(f"\nKey pressed: {repr(char)} (hex: {hex(ord(char))})")

            # Check for Enter key
            if char == '\r' or char == '\n':
                print()  # Move to next line
                input_text = ''.join(buffer)
                # Check if the command is /t
                if input_text == '/t':
                    THINKING_MODE = not THINKING_MODE
                    print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
                    buffer = []  # Clear buffer
                    print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
                    continue
                return input_text

            # Handle backspace
            if char == '\x7f':  # backspace
                if buffer:
                    buffer.pop()
                    sys.stdout.write('\b \b')  # Erase character
                    sys.stdout.flush()
                continue

            # Handle Ctrl-C
            if char == '\x03':  # Ctrl-C
                print("^C")
                raise KeyboardInterrupt

            # Print character and add to buffer
            sys.stdout.write(char)
            sys.stdout.flush()
            buffer.append(char)

    except ImportError:
        # Fallback for systems without termios
        return input("> ")

def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask, auto_prompt=None, warmup=False):
    """Interactive chat loop."""
    global THINKING_MODE
    global DEBUG_LEVEL
    context_length = metadata.get('context_length')
    batch_size = metadata.get('batch_size', 64)

    if not warmup:
        print(f"\nUsing context length: {context_length}")
        print("\nStarting chat session. Press Ctrl+D to exit.")
        print("Type your message and press Enter to chat. Use /t to toggle thinking mode.")
        print(f"Thinking mode is {'ON' if THINKING_MODE else 'OFF'}")

    # Keep track of conversation history
    conversation = []

    try:
        while True:
            try:
                if not warmup:
                    print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
                if auto_prompt is not None:
                    user_input = auto_prompt
                    if not warmup:
                        print(user_input)
                else:
                    user_input = input().strip()
            except EOFError:
                if not warmup:
                    print("\nExiting chat...")
                break

            if not user_input:
                continue

            # Handle /t command
            if user_input == "/t":
                THINKING_MODE = not THINKING_MODE
                print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
                continue

            # Add user message to conversation
            conversation.append({"role": "user", "content": user_input})

            # Format using chat template with full history
            if THINKING_MODE:
                # Add thinking prompt to system message
                conversation_with_thinking = [{"role": "system", "content": THINKING_PROMPT}] + conversation
                base_input_ids = tokenizer.apply_chat_template(
                    conversation_with_thinking,
                    return_tensors="pt",
                    add_generation_prompt=True
                ).to(torch.int32)

                # Print full prompt if debug level >= 1
                if DEBUG_LEVEL >= 1 and not warmup:
                    print(f"\n{DARK_BLUE}Debug: Full prompt with thinking:{RESET_COLOR}")
                    print(tokenizer.decode(base_input_ids[0]))
            else:
                base_input_ids = tokenizer.apply_chat_template(
                    conversation,
                    return_tensors="pt",
                    add_generation_prompt=True
                ).to(torch.int32)

                # Print full prompt if debug level >= 1
                if DEBUG_LEVEL >= 1 and not warmup:
                    print(f"\n{DARK_BLUE}Debug: Full prompt:{RESET_COLOR}")
                    print(tokenizer.decode(base_input_ids[0]))

            # Check if we need to trim history
            while base_input_ids.size(1) > context_length - 100:  # Leave room for response
                # Remove oldest message pair (user + assistant)
                if len(conversation) > 2:
                    conversation = conversation[2:]  # Remove oldest pair
                    base_input_ids = tokenizer.apply_chat_template(
                        conversation,
                        return_tensors="pt",
                        add_generation_prompt=True
                    ).to(torch.int32)
                else:
                    # If only the current message remains and it is still too long, truncate
                    base_input_ids = base_input_ids[:, -context_length//2:]
                    break

            context_pos = base_input_ids.size(1)

            # Pad sequence to context_size
            input_ids = F.pad(
                base_input_ids,
                (0, context_length - context_pos),
                value=0
            )

            if not warmup:
                print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)

            # Initialize token printer and collect response
            token_printer = TokenPrinter(tokenizer)
            response_tokens = []
            generation_start_time = time.time()

            try:
                # Run prefill on entire context
                current_pos = run_prefill(
                    embed_model,
                    ffn_models,
                    input_ids,
                    context_pos,
                    context_length,
                    batch_size,
                    state,
                    causal_mask
                )
                #print(f"\n[DEBUG] After initial prefill - current_pos: {current_pos}")

                # Generation loop
                pos = context_pos
                tokens_generated = 0
                inference_start = time.time()  # Start inference timing

                while True:
                    # Check if we need to shift the window
                    if pos >= context_length - 2:
                        # Calculate shift to maintain full batches
                        batch_size = metadata.get('batch_size', 64)
                        # Calculate max batches that fit in context
                        max_batches = context_length // batch_size
                        desired_batches = max(1, max_batches - 2)  # Leave room for new tokens
                        new_size = min(desired_batches * batch_size, context_length - batch_size)

                        # Create shifted input_ids
                        tmp = torch.zeros((1, context_length), dtype=torch.int32)
                        tmp[:, 0:new_size] = input_ids[:, pos-new_size:pos]
                        input_ids = tmp

                        # Keep the same KV cache state and re-run prefill over the shifted content
                        #state = create_unified_state(ffn_models, context_length)
                        current_pos = run_prefill(
                            embed_model,
                            ffn_models,
                            input_ids,
                            new_size,  # Prefill the entire shifted content
                            context_length,
                            batch_size,
                            state,
                            causal_mask
                        )

                        # Start generating from the next position
                        pos = new_size  # Don't back up, continue from where we left off

                        #print(f"\n[DEBUG] After shift - next token will be at pos {pos}")
                        #print(f"[DEBUG] Context before next token: {tokenizer.decode(input_ids[0, pos-40:pos])}")

                        window_shifted = True

                    # Generate next token
                    next_token = generate_next_token(
                        embed_model,
                        ffn_models,
                        lmhead_model,
                        input_ids,
                        pos,
                        context_length,
                        state,
                        causal_mask
                    )

                    # Add token
                    input_ids[0, pos] = next_token
                    if not warmup:
                        token_printer.add_token(next_token)
                        token_printer.drain_buffer()
                    response_tokens.append(next_token)

                    pos += 1
                    tokens_generated += 1

                    # In warmup mode, limit tokens
                    if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
                        break

                    if next_token == tokenizer.eos_token_id:
                        break

                inference_time = time.time() - inference_start  # Calculate inference time

                # Add assistant response to conversation
                response_text = token_printer.stop()
                conversation.append({"role": "assistant", "content": response_text})

                # Print stats only if not in warmup
                if not warmup:
                    total_time = time.time() - generation_start_time
                    prefill_time = total_time - inference_time
                    inference_tokens_per_sec = len(response_tokens) / inference_time if inference_time > 0 else 0
                    prefill_ms = prefill_time * 1000
                    prefill_tokens_per_sec = context_pos / prefill_time if prefill_time > 0 else 0
                    print(f"{DARK_BLUE}{inference_tokens_per_sec:.1f} t/s, "
                          f"TTFT: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s), "
                          f"{len(response_tokens)} tokens{RESET_COLOR}")

                if auto_prompt is not None:
                    break

            except KeyboardInterrupt:
                if not warmup:
                    print("\nGeneration interrupted")
                token_printer.stop()
                continue

    except Exception as e:
        if not warmup:
            print(f"\nError in chat loop: {str(e)}")
            import traceback
            traceback.print_exc()

def main():
    args = parse_args()
    global DEBUG_LEVEL
    DEBUG_LEVEL = args.debug_level

    # Convert directory to absolute path
    model_dir = Path(args.d).resolve()
    if not model_dir.exists():
        print(f"\nError: Model directory not found: {model_dir}")
        return 1

    print(f"\nUsing model directory: {model_dir}")
    print(f"Context length: {args.context_length}")

    try:
        # Update paths to be relative to model directory
        args.embed = str(model_dir / args.embed)
        args.ffn = str(model_dir / args.ffn)
        args.lmhead = str(model_dir / args.lmhead)

        # Handle tokenizer path separately since it's not relative to model_dir
        if args.tokenizer is None:
            args.tokenizer = str(model_dir)

        if not Path(args.tokenizer).exists():
            print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
            return 1

        args.tokenizer = str(Path(args.tokenizer).resolve())  # Convert to absolute path
        print(f"Using tokenizer path: {args.tokenizer}")

        metadata = {}
        # Load models and extract metadata
        embed_model, ffn_models, lmhead_model, metadata = load_models(args, metadata)

        print(f"\nMetadata before args.context_length: {metadata}")

        # Override context length from command line if provided
        if args.context_length is not None:
            metadata['context_length'] = args.context_length
            metadata['state_length'] = args.context_length  # Also update state_length
            print(f"\nOverriding context length from command line: {args.context_length}")

        print(f"\nMetadata after load_models: {metadata}")

        # Load tokenizer with resolved path
        tokenizer = initialize_tokenizer(args.tokenizer)
        if tokenizer is None:
            raise RuntimeError("Failed to initialize tokenizer")

        # Create unified state once
        state = create_unified_state(ffn_models, metadata['context_length'])

        # Initialize causal mask once
        causal_mask = initialize_causal_mask(metadata['context_length'])

        # Warmup runs to prevent Python GIL issues with CoreML!
        if not args.nw:
            for i in range(2):
                chat_loop(
                    embed_model=embed_model,
                    ffn_models=ffn_models,
                    lmhead_model=lmhead_model,
                    tokenizer=tokenizer,
                    metadata=metadata,
                    state=state,  # Pass the state
                    causal_mask=causal_mask,  # Pass the causal mask
                    warmup=True,
                    auto_prompt="who are you?"
                )

        # Main run
        chat_loop(
            embed_model=embed_model,
            ffn_models=ffn_models,
            lmhead_model=lmhead_model,
            tokenizer=tokenizer,
            metadata=metadata,
            state=state,  # Pass the state
            causal_mask=causal_mask,  # Pass the causal mask
            warmup=False,
            auto_prompt=args.prompt
        )

    except Exception as e:
        print(f"\nError: {str(e)}")
        import traceback
        traceback.print_exc()
        return 1

    return 0

if __name__ == "__main__":
    exit(main())
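For reference, chat_full.py is typically launched with the bundled meta.yaml so that all model paths and parameters are resolved automatically, e.g. `python chat_full.py --meta ./meta.yaml`; add `--prompt "..."` for a single non-interactive run, or `--nw` to skip the warmup passes. This example invocation is inferred from the argument parser above rather than from separate documentation.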
config.json
ADDED
@@ -0,0 +1,4 @@
{
  "tokenizer_class": "LlamaTokenizer",
  "model_type": "llama"
}
llama_FFN_PF_lut8_chunk_01of02.mlmodelc/analytics/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3deb6523c287200937889875d7bb37aaa9ba446be6ca87829d8d4e4dee576a0b
size 243
llama_FFN_PF_lut8_chunk_01of02.mlmodelc/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:483a03b7426bd6c30bafb6051b07e5d431a894624fadf80948c63f7a071428d9
size 978
llama_FFN_PF_lut8_chunk_01of02.mlmodelc/metadata.json
ADDED
@@ -0,0 +1,336 @@
[
  {
    "metadataOutputVersion" : "3.0",
    "userDefinedMetadata" : {
      "com.anemll.lut_bits" : "8",
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.anemll.context_length" : "512",
      "com.github.apple.coremltools.source" : "torch==2.5.0",
      "com.github.apple.coremltools.version" : "8.2",
      "com.anemll.num_chunks" : "2",
      "com.anemll.batch_size" : "64",
      "com.anemll.info" : "Converted with Anemll v0.3.0",
      "com.anemll.chunk_no" : "1"
    },
    "availability" : { "macOS" : "15.0", "tvOS" : "18.0", "visionOS" : "2.0", "watchOS" : "11.0", "iOS" : "18.0", "macCatalyst" : "18.0" },
    "inputSchema" : [
      { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 2048)", "shortDescription" : "", "shape" : "[1, 1, 2048]", "name" : "hidden_states", "type" : "MultiArray" },
      { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 1)", "shortDescription" : "", "shape" : "[1]", "name" : "position_ids", "type" : "MultiArray" },
      { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 512)", "shortDescription" : "", "shape" : "[1, 1, 1, 512]", "name" : "causal_mask", "type" : "MultiArray" },
      { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 1)", "shortDescription" : "", "shape" : "[1]", "name" : "current_pos", "type" : "MultiArray" }
    ],
    "outputSchema" : [
      { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 2048)", "shortDescription" : "", "shape" : "[1, 1, 2048]", "name" : "output_hidden_states", "type" : "MultiArray" }
    ],
    "modelParameters" : [],
    "storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
    "method" : "predict",
    "functions" : [
      {
        "inputSchema" : [
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 2048)", "shortDescription" : "", "shape" : "[1, 1, 2048]", "name" : "hidden_states", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 1)", "shortDescription" : "", "shape" : "[1]", "name" : "position_ids", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 512)", "shortDescription" : "", "shape" : "[1, 1, 1, 512]", "name" : "causal_mask", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 1)", "shortDescription" : "", "shape" : "[1]", "name" : "current_pos", "type" : "MultiArray" }
        ],
        "computePrecision" : "Mixed (Float16, Int32)",
        "storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
        "stateSchema" : [
          { "dataType" : "Float16", "isOptional" : "0", "formattedType" : "State (Float16 32 × 8 × 512 × 64)", "shortDescription" : "", "shape" : "[32, 8, 512, 64]", "name" : "model_model_kv_cache_0", "type" : "State" }
        ],
        "outputSchema" : [
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 2048)", "shortDescription" : "", "shape" : "[1, 1, 2048]", "name" : "output_hidden_states", "type" : "MultiArray" }
        ],
        "name" : "infer",
        "mlProgramOperationTypeHistogram" : {
          "Ios18.expandDims" : 32, "Ios18.mul" : 80, "Ios18.matmul" : 16, "Identity" : 1,
          "Ios16.reduceMean" : 16, "Ios18.exp" : 8, "Ios18.realDiv" : 8, "Ios18.greaterEqual" : 1,
          "Select" : 1, "Ios18.readState" : 17, "Tile" : 16, "Ios18.gather" : 2,
          "Ios18.add" : 42, "Ios18.layerNorm" : 16, "Ios18.sliceUpdate" : 16, "Ios18.writeState" : 16,
          "Ios18.reshape" : 50, "Ios16.reduceMax" : 8, "Ios16.reduceSum" : 8, "Ios18.constexprLutToDense" : 56,
          "Ios18.conv" : 48, "Ios18.concat" : 48, "Ios18.transpose" : 32, "Ios18.sub" : 40,
          "Ios18.linear" : 8, "Ios18.silu" : 8, "Ios18.sliceByIndex" : 50, "Ios18.squeeze" : 24
        }
      },
      {
        "inputSchema" : [
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 64 × 2048)", "shortDescription" : "", "shape" : "[1, 64, 2048]", "name" : "hidden_states", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 64)", "shortDescription" : "", "shape" : "[64]", "name" : "position_ids", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 1 × 64 × 512)", "shortDescription" : "", "shape" : "[1, 1, 64, 512]", "name" : "causal_mask", "type" : "MultiArray" },
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Int32", "formattedType" : "MultiArray (Int32 1)", "shortDescription" : "", "shape" : "[1]", "name" : "current_pos", "type" : "MultiArray" }
        ],
        "computePrecision" : "Mixed (Float16, Int32)",
        "storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
        "stateSchema" : [
          { "dataType" : "Float16", "isOptional" : "0", "formattedType" : "State (Float16 32 × 8 × 512 × 64)", "shortDescription" : "", "shape" : "[32, 8, 512, 64]", "name" : "model_model_kv_cache_0", "type" : "State" }
        ],
        "outputSchema" : [
          { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float16", "formattedType" : "MultiArray (Float16 1 × 64 × 2048)", "shortDescription" : "", "shape" : "[1, 64, 2048]", "name" : "output_hidden_states", "type" : "MultiArray" }
        ],
        "name" : "prefill",
        "mlProgramOperationTypeHistogram" : {
          "Ios18.expandDims" : 32, "Ios18.mul" : 80, "Ios18.matmul" : 16,
          "Ios16.reduceMean" : 16, "Ios18.exp" : 8, "Ios18.realDiv" : 8, "Ios18.greaterEqual" : 1,
          "Select" : 1, "Ios18.readState" : 17, "Tile" : 16, "Ios18.gather" : 2,
          "Ios18.add" : 42, "Ios18.layerNorm" : 16, "Ios18.sliceUpdate" : 16, "Ios18.writeState" : 16,
          "Ios18.reshape" : 66, "Ios16.reduceMax" : 8, "Ios16.reduceSum" : 8, "Ios18.constexprLutToDense" : 56,
          "Ios18.conv" : 48, "Ios18.concat" : 48, "Ios18.transpose" : 58, "Ios18.sub" : 40,
          "Ios18.linear" : 8, "Ios18.silu" : 8, "Ios18.sliceByIndex" : 50, "Ios18.squeeze" : 24
        }
      }
    ],
    "version" : "0.3.0",
    "isUpdatable" : "0",
    "defaultFunctionName" : "infer",
    "specificationVersion" : 9,
    "stateSchema" : [
      { "dataType" : "Float16", "isOptional" : "0", "formattedType" : "State (Float16 32 × 8 × 512 × 64)", "shortDescription" : "", "shape" : "[32, 8, 512, 64]", "name" : "model_model_kv_cache_0", "type" : "State" }
    ],
    "computePrecision" : "Mixed (Float16, Int32)",
    "mlProgramOperationTypeHistogram" : {
      "Ios18.expandDims" : 32, "Ios18.mul" : 80, "Ios18.matmul" : 16, "Identity" : 1,
      "Ios16.reduceMean" : 16, "Ios18.exp" : 8, "Ios18.realDiv" : 8, "Ios18.greaterEqual" : 1,
      "Select" : 1, "Ios18.readState" : 17, "Tile" : 16, "Ios18.gather" : 2,
      "Ios18.add" : 42, "Ios18.layerNorm" : 16, "Ios18.sliceUpdate" : 16, "Ios18.writeState" : 16,
      "Ios18.reshape" : 50, "Ios16.reduceMax" : 8, "Ios16.reduceSum" : 8, "Ios18.constexprLutToDense" : 56,
      "Ios18.conv" : 48, "Ios18.concat" : 48, "Ios18.transpose" : 32, "Ios18.sub" : 40,
      "Ios18.linear" : 8, "Ios18.silu" : 8, "Ios18.sliceByIndex" : 50, "Ios18.squeeze" : 24
    },
    "shortDescription" : "Anemll Model: Multifunction FFN+Prefill",
    "generatedClassName" : "llama_FFN_PF_lut8_chunk_01of02",
    "author" : "Converted with Anemll v0.3.0",
    "modelType" : { "name" : "MLModelType_mlProgram" }
  }
]
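The schema above is enough to drive a chunk directly with coremltools, independently of chat_full.py. A minimal sketch (editorial illustration, not shipped in this repo; it assumes coremltools 8.x and the file layout above, and mirrors the predict calls in chat_full.py):

import numpy as np
import coremltools as ct

# Load the compiled chunk's 'infer' function and create its KV cache state.
path = "llama_FFN_PF_lut8_chunk_01of02.mlmodelc"
infer = ct.models.CompiledMLModel(path, ct.ComputeUnit.CPU_AND_NE, function_name="infer")
state = infer.make_state()  # KV cache: Float16 [32, 8, 512, 64]

# Shapes follow the infer inputSchema above; real hidden_states come from the
# embeddings model, and the causal mask uses 0 (attend) / -inf (masked).
inputs = {
    "hidden_states": np.zeros((1, 1, 2048), dtype=np.float16),
    "position_ids": np.array([0], dtype=np.int32),
    "causal_mask": np.zeros((1, 1, 1, 512), dtype=np.float16),
    "current_pos": np.array([0], dtype=np.int32),
}
out = infer.predict(inputs, state)["output_hidden_states"]  # Float16 [1, 1, 2048]

Note that chat_full.py creates a single state from the first chunk's prefill function and passes it to every chunk's predict call, so both chunks share one unified KV cache.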
llama_FFN_PF_lut8_chunk_01of02.mlmodelc/model.mil
ADDED
The diff for this file is too large to render. See raw diff.
llama_FFN_PF_lut8_chunk_01of02.mlmodelc/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:247fcfd304c18fbe9e4ee3d96dde890b8d07d0bcaddf7758d256969d9a30a079
size 532230400
llama_FFN_PF_lut8_chunk_02of02.mlmodelc/analytics/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5eb84e501b21983ed775a8e4d44fe164e86f70d1caf9669a39f132957059075a
size 243
llama_FFN_PF_lut8_chunk_02of02.mlmodelc/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57d340428cc45ff86a7eedf427df96a9812b05ef3828df0584b56b08a7195ec3
size 978
llama_FFN_PF_lut8_chunk_02of02.mlmodelc/metadata.json
ADDED
@@ -0,0 +1,336 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
[
|
2 |
+
{
|
3 |
+
"metadataOutputVersion" : "3.0",
|
4 |
+
"userDefinedMetadata" : {
|
5 |
+
"com.github.apple.coremltools.source" : "torch==2.5.0",
|
6 |
+
"com.github.apple.coremltools.version" : "8.2",
|
7 |
+
"com.anemll.context_length" : "512",
|
8 |
+
"com.github.apple.coremltools.source_dialect" : "TorchScript",
|
9 |
+
"com.anemll.chunk_no" : "2",
|
10 |
+
"com.anemll.num_chunks" : "2",
|
11 |
+
"com.anemll.info" : "Converted with Anemll v0.3.0",
|
12 |
+
"com.anemll.batch_size" : "64",
|
13 |
+
"com.anemll.lut_bits" : "8"
|
14 |
+
},
|
15 |
+
"availability" : {
|
16 |
+
"macOS" : "15.0",
|
17 |
+
"tvOS" : "18.0",
|
18 |
+
"visionOS" : "2.0",
|
19 |
+
"watchOS" : "11.0",
|
20 |
+
"iOS" : "18.0",
|
21 |
+
"macCatalyst" : "18.0"
|
22 |
+
},
|
23 |
+
"inputSchema" : [
|
24 |
+
{
|
25 |
+
"hasShapeFlexibility" : "0",
|
26 |
+
"isOptional" : "0",
|
27 |
+
"dataType" : "Float16",
|
28 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
|
29 |
+
"shortDescription" : "",
|
30 |
+
"shape" : "[1, 1, 2048]",
|
31 |
+
"name" : "hidden_states",
|
32 |
+
"type" : "MultiArray"
|
33 |
+
},
|
34 |
+
{
|
35 |
+
"hasShapeFlexibility" : "0",
|
36 |
+
"isOptional" : "0",
|
37 |
+
"dataType" : "Int32",
|
38 |
+
"formattedType" : "MultiArray (Int32 1)",
|
39 |
+
"shortDescription" : "",
|
40 |
+
"shape" : "[1]",
|
41 |
+
"name" : "position_ids",
|
42 |
+
"type" : "MultiArray"
|
43 |
+
},
|
44 |
+
{
|
45 |
+
"hasShapeFlexibility" : "0",
|
46 |
+
"isOptional" : "0",
|
47 |
+
"dataType" : "Float16",
|
48 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 1 × 512)",
|
49 |
+
"shortDescription" : "",
|
50 |
+
"shape" : "[1, 1, 1, 512]",
|
51 |
+
"name" : "causal_mask",
|
52 |
+
"type" : "MultiArray"
|
53 |
+
},
|
54 |
+
{
|
55 |
+
"hasShapeFlexibility" : "0",
|
56 |
+
"isOptional" : "0",
|
57 |
+
"dataType" : "Int32",
|
58 |
+
"formattedType" : "MultiArray (Int32 1)",
|
59 |
+
"shortDescription" : "",
|
60 |
+
"shape" : "[1]",
|
61 |
+
"name" : "current_pos",
|
62 |
+
"type" : "MultiArray"
|
63 |
+
}
|
64 |
+
],
|
65 |
+
"outputSchema" : [
|
66 |
+
{
|
67 |
+
"hasShapeFlexibility" : "0",
|
68 |
+
"isOptional" : "0",
|
69 |
+
"dataType" : "Float16",
|
70 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
|
71 |
+
"shortDescription" : "",
|
72 |
+
"shape" : "[1, 1, 2048]",
|
73 |
+
"name" : "output_hidden_states",
|
74 |
+
"type" : "MultiArray"
|
75 |
+
}
|
76 |
+
],
|
77 |
+
"modelParameters" : [
|
78 |
+
|
79 |
+
],
|
80 |
+
"storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
|
81 |
+
"method" : "predict",
|
82 |
+
"functions" : [
|
83 |
+
{
|
84 |
+
"inputSchema" : [
|
85 |
+
{
|
86 |
+
"hasShapeFlexibility" : "0",
|
87 |
+
"isOptional" : "0",
|
88 |
+
"dataType" : "Float16",
|
89 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
|
90 |
+
"shortDescription" : "",
|
91 |
+
"shape" : "[1, 1, 2048]",
|
92 |
+
"name" : "hidden_states",
|
93 |
+
"type" : "MultiArray"
|
94 |
+
},
|
95 |
+
{
|
96 |
+
"hasShapeFlexibility" : "0",
|
97 |
+
"isOptional" : "0",
|
98 |
+
"dataType" : "Int32",
|
99 |
+
"formattedType" : "MultiArray (Int32 1)",
|
100 |
+
"shortDescription" : "",
|
101 |
+
"shape" : "[1]",
|
102 |
+
"name" : "position_ids",
|
103 |
+
"type" : "MultiArray"
|
104 |
+
},
|
105 |
+
{
|
106 |
+
"hasShapeFlexibility" : "0",
|
107 |
+
"isOptional" : "0",
|
108 |
+
"dataType" : "Float16",
|
109 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 1 × 512)",
|
110 |
+
"shortDescription" : "",
|
111 |
+
"shape" : "[1, 1, 1, 512]",
|
112 |
+
"name" : "causal_mask",
|
113 |
+
"type" : "MultiArray"
|
114 |
+
},
|
115 |
+
{
|
116 |
+
"hasShapeFlexibility" : "0",
|
117 |
+
"isOptional" : "0",
|
118 |
+
"dataType" : "Int32",
|
119 |
+
"formattedType" : "MultiArray (Int32 1)",
|
120 |
+
"shortDescription" : "",
|
121 |
+
"shape" : "[1]",
|
122 |
+
"name" : "current_pos",
|
123 |
+
"type" : "MultiArray"
|
124 |
+
}
|
125 |
+
],
|
126 |
+
"computePrecision" : "Mixed (Float16, Int32)",
|
127 |
+
"storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
|
128 |
+
"stateSchema" : [
|
129 |
+
{
|
130 |
+
"dataType" : "Float16",
|
131 |
+
"isOptional" : "0",
|
132 |
+
"formattedType" : "State (Float16 32 × 8 × 512 × 64)",
|
133 |
+
"shortDescription" : "",
|
134 |
+
"shape" : "[32, 8, 512, 64]",
|
135 |
+
"name" : "model_model_kv_cache_0",
|
136 |
+
"type" : "State"
|
137 |
+
}
|
138 |
+
],
|
139 |
+
"outputSchema" : [
|
140 |
+
{
|
141 |
+
"hasShapeFlexibility" : "0",
|
142 |
+
"isOptional" : "0",
|
143 |
+
"dataType" : "Float16",
|
144 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
|
145 |
+
"shortDescription" : "",
|
146 |
+
"shape" : "[1, 1, 2048]",
|
147 |
+
"name" : "output_hidden_states",
|
148 |
+
"type" : "MultiArray"
|
149 |
+
}
|
150 |
+
],
|
151 |
+
"name" : "infer",
|
152 |
+
"mlProgramOperationTypeHistogram" : {
|
153 |
+
"Ios18.expandDims" : 32,
|
154 |
+
"Ios18.mul" : 80,
|
155 |
+
"Ios18.matmul" : 16,
|
156 |
+
"Identity" : 1,
|
157 |
+
"Ios16.reduceMean" : 17,
|
158 |
+
"Ios18.exp" : 8,
|
159 |
+
"Ios18.realDiv" : 8,
|
160 |
+
"Ios18.greaterEqual" : 1,
|
161 |
+
"Select" : 1,
|
162 |
+
"Ios18.readState" : 17,
|
163 |
+
"Tile" : 16,
|
164 |
+
"Ios18.gather" : 2,
|
165 |
+
"Ios18.add" : 42,
|
166 |
+
"Ios18.layerNorm" : 17,
|
167 |
+
"Ios18.sliceUpdate" : 16,
|
168 |
+
"Ios18.writeState" : 16,
|
169 |
+
"Ios18.reshape" : 50,
|
170 |
+
"Ios16.reduceMax" : 8,
|
171 |
+
"Ios16.reduceSum" : 8,
|
172 |
+
"Ios18.constexprLutToDense" : 56,
|
173 |
+
"Ios18.conv" : 48,
|
174 |
+
"Ios18.concat" : 48,
|
175 |
+
"Ios18.transpose" : 32,
|
176 |
+
"Ios18.sub" : 41,
|
177 |
+
"Ios18.linear" : 8,
|
178 |
+
"Ios18.silu" : 8,
|
179 |
+
"Ios18.sliceByIndex" : 50,
|
180 |
+
"Ios18.squeeze" : 24
|
181 |
+
}
|
182 |
+
},
|
183 |
+
{
|
184 |
+
"inputSchema" : [
|
185 |
+
{
|
186 |
+
"hasShapeFlexibility" : "0",
|
187 |
+
"isOptional" : "0",
|
188 |
+
"dataType" : "Float16",
|
189 |
+
"formattedType" : "MultiArray (Float16 1 × 64 × 2048)",
|
190 |
+
"shortDescription" : "",
|
191 |
+
"shape" : "[1, 64, 2048]",
|
192 |
+
"name" : "hidden_states",
|
193 |
+
"type" : "MultiArray"
|
194 |
+
},
|
195 |
+
{
|
196 |
+
"hasShapeFlexibility" : "0",
|
197 |
+
"isOptional" : "0",
|
198 |
+
"dataType" : "Int32",
|
199 |
+
"formattedType" : "MultiArray (Int32 64)",
|
200 |
+
"shortDescription" : "",
|
201 |
+
"shape" : "[64]",
|
202 |
+
"name" : "position_ids",
|
203 |
+
"type" : "MultiArray"
|
204 |
+
},
|
205 |
+
{
|
206 |
+
"hasShapeFlexibility" : "0",
|
207 |
+
"isOptional" : "0",
|
208 |
+
"dataType" : "Float16",
|
209 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 64 × 512)",
|
210 |
+
"shortDescription" : "",
|
211 |
+
"shape" : "[1, 1, 64, 512]",
|
212 |
+
"name" : "causal_mask",
|
213 |
+
"type" : "MultiArray"
|
214 |
+
},
|
215 |
+
{
|
216 |
+
"hasShapeFlexibility" : "0",
|
217 |
+
"isOptional" : "0",
|
218 |
+
"dataType" : "Int32",
|
219 |
+
"formattedType" : "MultiArray (Int32 1)",
|
220 |
+
"shortDescription" : "",
|
221 |
+
"shape" : "[1]",
|
222 |
+
"name" : "current_pos",
|
223 |
+
"type" : "MultiArray"
|
224 |
+
}
|
225 |
+
],
|
226 |
+
"computePrecision" : "Mixed (Float16, Int32)",
|
227 |
+
"storagePrecision" : "Mixed (Float16, Palettized (14 bits), Palettized (16 bits), Palettized (18 bits))",
|
228 |
+
"stateSchema" : [
|
229 |
+
{
|
230 |
+
"dataType" : "Float16",
|
231 |
+
"isOptional" : "0",
|
232 |
+
"formattedType" : "State (Float16 32 × 8 × 512 × 64)",
|
233 |
+
"shortDescription" : "",
|
234 |
+
"shape" : "[32, 8, 512, 64]",
|
235 |
+
"name" : "model_model_kv_cache_0",
|
236 |
+
"type" : "State"
|
237 |
+
}
|
238 |
+
],
|
239 |
+
"outputSchema" : [
|
240 |
+
{
|
241 |
+
"hasShapeFlexibility" : "0",
|
242 |
+
"isOptional" : "0",
|
243 |
+
"dataType" : "Float16",
|
244 |
+
"formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
|
245 |
+
"shortDescription" : "",
|
246 |
+
"shape" : "[1, 1, 2048]",
|
247 |
+
"name" : "output_hidden_states",
|
248 |
+
"type" : "MultiArray"
|
249 |
+
}
|
250 |
+
],
|
251 |
+
"name" : "prefill",
|
252 |
+
"mlProgramOperationTypeHistogram" : {
|
253 |
+
"Ios18.expandDims" : 31,
|
254 |
+
"Ios18.mul" : 79,
|
255 |
+
"Ios18.matmul" : 16,
|
256 |
+
"Ios16.reduceMean" : 15,
|
257 |
+
"Ios18.exp" : 8,
|
258 |
+
"Ios18.realDiv" : 8,
|
259 |
+
"Ios18.greaterEqual" : 1,
|
260 |
+
"Select" : 1,
|
261 |
+
"Ios18.readState" : 17,
|
262 |
+
"Tile" : 16,
|
263 |
+
"Ios18.gather" : 2,
|
264 |
+
"Ios18.add" : 41,
|
265 |
+
"Ios18.layerNorm" : 15,
|
266 |
+
"Ios18.sliceUpdate" : 16,
|
267 |
+
"Ios18.writeState" : 16,
|
268 |
+
"Ios18.reshape" : 66,
|
269 |
+
"Ios16.reduceMax" : 8,
|
270 |
+
"Ios16.reduceSum" : 8,
|
271 |
+
"Ios18.constexprLutToDense" : 53,
|
272 |
+
"Ios18.conv" : 45,
|
273 |
+
"Ios18.concat" : 48,
|
274 |
+
"Ios18.transpose" : 56,
|
275 |
+
"Ios18.sub" : 39,
|
276 |
+
"Ios18.linear" : 8,
|
277 |
+
"Ios18.silu" : 7,
|
278 |
+
"Ios18.sliceByIndex" : 51,
|
279 |
+
"Ios18.squeeze" : 23
|
280 |
+
}
|
281 |
+
}
|
282 |
+
],
|
283 |
+
"version" : "0.3.0",
|
284 |
+
"isUpdatable" : "0",
|
285 |
+
"defaultFunctionName" : "infer",
|
286 |
+
"specificationVersion" : 9,
|
287 |
+
"stateSchema" : [
|
288 |
+
{
|
289 |
+
"dataType" : "Float16",
|
290 |
+
"isOptional" : "0",
|
291 |
+
"formattedType" : "State (Float16 32 × 8 × 512 × 64)",
|
292 |
+
"shortDescription" : "",
|
293 |
+
"shape" : "[32, 8, 512, 64]",
|
294 |
+
"name" : "model_model_kv_cache_0",
|
295 |
+
"type" : "State"
|
296 |
+
}
|
297 |
+
],
|
298 |
+
"computePrecision" : "Mixed (Float16, Int32)",
|
299 |
+
"mlProgramOperationTypeHistogram" : {
|
300 |
+
"Ios18.expandDims" : 32,
|
301 |
+
"Ios18.mul" : 80,
|
302 |
+
"Ios18.matmul" : 16,
|
303 |
+
"Identity" : 1,
|
304 |
+
"Ios16.reduceMean" : 17,
|
305 |
+
"Ios18.exp" : 8,
|
306 |
+
"Ios18.realDiv" : 8,
|
307 |
+
"Ios18.greaterEqual" : 1,
|
308 |
+
"Select" : 1,
|
309 |
+
"Ios18.readState" : 17,
|
310 |
+
"Tile" : 16,
|
311 |
+
"Ios18.gather" : 2,
|
312 |
+
"Ios18.add" : 42,
|
313 |
+
"Ios18.layerNorm" : 17,
|
314 |
+
"Ios18.sliceUpdate" : 16,
|
315 |
+
"Ios18.writeState" : 16,
|
316 |
+
"Ios18.reshape" : 50,
|
317 |
+
"Ios16.reduceMax" : 8,
|
318 |
+
"Ios16.reduceSum" : 8,
|
319 |
+
"Ios18.constexprLutToDense" : 56,
|
320 |
+
"Ios18.conv" : 48,
|
321 |
+
"Ios18.concat" : 48,
|
322 |
+
"Ios18.transpose" : 32,
|
323 |
+
"Ios18.sub" : 41,
|
324 |
+
"Ios18.linear" : 8,
|
325 |
+
"Ios18.silu" : 8,
|
326 |
+
"Ios18.sliceByIndex" : 50,
|
327 |
+
"Ios18.squeeze" : 24
|
328 |
+
},
|
329 |
+
"shortDescription" : "Anemll Model: Multifunction FFN+Prefill",
|
330 |
+
"generatedClassName" : "llama_FFN_PF_lut8_chunk_02of02",
|
331 |
+
"author" : "Converted with Anemll v0.3.0",
|
332 |
+
"modelType" : {
|
333 |
+
"name" : "MLModelType_mlProgram"
|
334 |
+
}
|
335 |
+
}
|
336 |
+
]
|
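The metadata above shows this chunk is a multifunction model: `infer` (the `defaultFunctionName`) decodes one token, while `prefill` consumes a 64-token batch, and both read and write the shared Float16 `[32, 8, 512, 64]` KV-cache state. Below is a minimal sketch of driving the `prefill` function from Python; the path, the `function_name` argument, and the stateful `make_state`/`predict` usage are assumptions based on coremltools ≥ 8 and this metadata, not the shipped `chat.py`:

```python
# Illustrative sketch only -- API details are assumptions (coremltools >= 8 on macOS 15).
import numpy as np
import coremltools as ct

prefill = ct.models.CompiledMLModel(
    "llama_FFN_PF_lut8_chunk_02of02.mlmodelc",
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    function_name="prefill",
)

# One KV-cache state object, created once and threaded through every call
# so the Float16 [32, 8, 512, 64] cache persists across prefill/infer steps.
kv_state = prefill.make_state()

out = prefill.predict(
    {
        "hidden_states": np.zeros((1, 64, 2048), dtype=np.float16),  # embeddings of a 64-token batch
        "position_ids": np.arange(64, dtype=np.int32),
        "causal_mask": np.zeros((1, 1, 64, 512), dtype=np.float16),  # 0 = attend; large negative = masked (assumed convention)
        "current_pos": np.array([0], dtype=np.int32),
    },
    kv_state,
)
hidden = out["output_hidden_states"]  # Float16 [1, 1, 2048]
```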
llama_FFN_PF_lut8_chunk_02of02.mlmodelc/model.mil
ADDED
The diff for this file is too large to render. See raw diff.
llama_FFN_PF_lut8_chunk_02of02.mlmodelc/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1489ea6059a9cd0e5993258bd0df1b51a3d1df301f95db8ca5fafaa18a2c5b11
size 532234560
llama_embeddings_lut8.mlmodelc/analytics/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66e1ae2e06e7a478e2fb29d2b9d3d3130e7155980e8eff15a4ffd8ace20e7031
size 243
llama_embeddings_lut8.mlmodelc/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2535147e3aa5b14c4b8c01ec9a50441b9cefee90a18d4ace3c5de73b1355da4e
size 525
llama_embeddings_lut8.mlmodelc/metadata.json
ADDED
@@ -0,0 +1,69 @@
[
  {
    "shortDescription" : "Anemll Model (Embeddings) converted to CoreML",
    "metadataOutputVersion" : "3.0",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16)",
        "shortDescription" : "",
        "shape" : "[]",
        "name" : "hidden_states",
        "type" : "MultiArray"
      }
    ],
    "version" : "0.3.0",
    "modelParameters" : [

    ],
    "author" : "Converted with Anemll v0.3.0",
    "specificationVersion" : 9,
    "storagePrecision" : "Mixed (Float16, Palettized (22 bits))",
    "mlProgramOperationTypeHistogram" : {
      "Ios18.constexprLutToDense" : 1,
      "Ios18.gather" : 1
    },
    "computePrecision" : "Mixed (Float16, Int32)",
    "stateSchema" : [

    ],
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "15.0",
      "tvOS" : "18.0",
      "visionOS" : "2.0",
      "watchOS" : "11.0",
      "iOS" : "18.0",
      "macCatalyst" : "18.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "inputSchema" : [
      {
        "shortDescription" : "",
        "dataType" : "Int32",
        "hasShapeFlexibility" : "1",
        "isOptional" : "0",
        "shapeFlexibility" : "1 × 1 | 1 × 64",
        "formattedType" : "MultiArray (Int32 1 × 1)",
        "type" : "MultiArray",
        "shape" : "[1, 1]",
        "name" : "input_ids",
        "enumeratedShapes" : "[[1, 1], [1, 64]]"
      }
    ],
    "userDefinedMetadata" : {
      "com.anemll.context_length" : "512",
      "com.github.apple.coremltools.version" : "8.2",
      "com.anemll.lut_bits" : "8",
      "com.github.apple.coremltools.source" : "torch==2.5.0",
      "com.anemll.info" : "Converted with Anemll v0.3.0",
      "com.github.apple.coremltools.source_dialect" : "TorchScript"
    },
    "generatedClassName" : "llama_embeddings_lut8",
    "method" : "predict"
  }
]
llama_embeddings_lut8.mlmodelc/model.mil
ADDED
@@ -0,0 +1,11 @@
program(1.3)
[buildInfo = dict<string, string>({{"coremlc-component-MIL", "3404.16.1"}, {"coremlc-version", "3404.23.1"}})]
{
    func main<ios18>(tensor<int32, [1, ?]> input_ids) [FlexibleShapeInformation = tuple<tuple<string, dict<string, tensor<int32, [?]>>>, tuple<string, dict<string, dict<string, tensor<int32, [?]>>>>>((("DefaultShapes", {{"input_ids", [1, 1]}}), ("EnumeratedShapes", {{"79ae981e", {{"input_ids", [1, 1]}}}, {"ed9b58c8", {{"input_ids", [1, 64]}}}})))] {
        int32 hidden_states_axis_0 = const()[name = string("hidden_states_axis_0"), val = int32(0)];
        int32 hidden_states_batch_dims_0 = const()[name = string("hidden_states_batch_dims_0"), val = int32(0)];
        bool hidden_states_validate_indices_0 = const()[name = string("hidden_states_validate_indices_0"), val = bool(false)];
        tensor<fp16, [128256, 2048]> embed_tokens_weight_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [128256, 2048]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64))), lut = tensor<fp16, [16032, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(262668416))))[name = string("embed_tokens_weight_to_fp16_palettized")];
        tensor<fp16, [1, ?, 2048]> hidden_states = gather(axis = hidden_states_axis_0, batch_dims = hidden_states_batch_dims_0, indices = input_ids, validate_indices = hidden_states_validate_indices_0, x = embed_tokens_weight_to_fp16_palettized)[name = string("hidden_states_cast_fp16")];
    } -> (hidden_states);
}
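The entire embeddings part is two ops: a `constexpr_lut_to_dense` that expands the 8-bit palettized `[128256, 2048]` table and a `gather` over the token ids. A hedged sketch of calling it from Python follows; the path and coremltools usage are assumptions, and the enumerated shapes above mean `input_ids` must be exactly `[1, 1]` (decode) or `[1, 64]` (prefill batch):

```python
# Sketch, not the shipped pipeline: one predict() per embedding lookup.
import numpy as np
import coremltools as ct

emb = ct.models.CompiledMLModel("llama_embeddings_lut8.mlmodelc")

ids = np.array([[128000]], dtype=np.int32)  # <|begin_of_text|>, single decode step
hidden = emb.predict({"input_ids": ids})["hidden_states"]  # Float16 [1, 1, 2048]
```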
llama_embeddings_lut8.mlmodelc/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:653ddd7fa66740583f355169b9a07d43d810cde50cbabc3237196ee1a9e49333
size 270876864
llama_lm_head_lut8.mlmodelc/analytics/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:30d04c3663d151c4287567ad000e798bbcc4ba0c4ad6c72c0901f8426d5ee112
size 243
llama_lm_head_lut8.mlmodelc/coremldata.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19051e45d105aefa8fea17c2d7bcbdd9bd8546cfa69b1aa971e86a0ec9bcb06d
size 688
llama_lm_head_lut8.mlmodelc/metadata.json
ADDED
@@ -0,0 +1,140 @@
[
  {
    "shortDescription" : "Anemll Model (LM Head) converted to CoreML",
    "metadataOutputVersion" : "3.0",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits1",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits2",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits3",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits4",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits5",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits6",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits7",
        "type" : "MultiArray"
      },
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
        "shortDescription" : "",
        "shape" : "[1, 1, 16032]",
        "name" : "logits8",
        "type" : "MultiArray"
      }
    ],
    "version" : "0.3.0",
    "modelParameters" : [

    ],
    "author" : "Converted with Anemll v0.3.0",
    "specificationVersion" : 9,
    "storagePrecision" : "Mixed (Float16, Palettized (19 bits))",
    "mlProgramOperationTypeHistogram" : {
      "Ios18.transpose" : 9,
      "Ios18.constexprLutToDense" : 8,
      "Ios18.expandDims" : 1,
      "Ios18.conv" : 8,
      "Ios18.squeeze" : 8
    },
    "computePrecision" : "Mixed (Float16, Int32)",
    "stateSchema" : [

    ],
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "15.0",
      "tvOS" : "18.0",
      "visionOS" : "2.0",
      "watchOS" : "11.0",
      "iOS" : "18.0",
      "macCatalyst" : "18.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "inputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float16",
        "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
        "shortDescription" : "",
        "shape" : "[1, 1, 2048]",
        "name" : "hidden_states",
        "type" : "MultiArray"
      }
    ],
    "userDefinedMetadata" : {
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.anemll.context_length" : "512",
      "com.anemll.lut_bits" : "8",
      "com.github.apple.coremltools.source" : "torch==2.5.0",
      "com.github.apple.coremltools.version" : "8.2",
      "com.anemll.info" : "Converted with Anemll v0.3.0"
    },
    "generatedClassName" : "llama_lm_head_lut8",
    "method" : "predict"
  }
]
llama_lm_head_lut8.mlmodelc/model.mil
ADDED
@@ -0,0 +1,98 @@
program(1.3)
[buildInfo = dict<string, string>({{"coremlc-component-MIL", "3404.16.1"}, {"coremlc-version", "3404.23.1"}})]
{
    func main<ios18>(tensor<fp16, [1, 1, 2048]> hidden_states) {
        tensor<int32, [3]> var_5 = const()[name = string("op_5"), val = tensor<int32, [3]>([0, 2, 1])];
        tensor<int32, [1]> input_axes_0 = const()[name = string("input_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 2048, 1]> var_6_cast_fp16 = transpose(perm = var_5, x = hidden_states)[name = string("transpose_8")];
        tensor<fp16, [1, 2048, 1, 1]> input_cast_fp16 = expand_dims(axes = input_axes_0, x = var_6_cast_fp16)[name = string("input_cast_fp16")];
        string var_29_pad_type_0 = const()[name = string("op_29_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_29_strides_0 = const()[name = string("op_29_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_29_pad_0 = const()[name = string("op_29_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_29_dilations_0 = const()[name = string("op_29_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_29_groups_0 = const()[name = string("op_29_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_9_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(32833664))))[name = string("op_9_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_29_cast_fp16 = conv(dilations = var_29_dilations_0, groups = var_29_groups_0, pad = var_29_pad_0, pad_type = var_29_pad_type_0, strides = var_29_strides_0, weight = op_9_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_29_cast_fp16")];
        tensor<int32, [1]> var_31_axes_0 = const()[name = string("op_31_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_31_cast_fp16 = squeeze(axes = var_31_axes_0, x = var_29_cast_fp16)[name = string("op_31_cast_fp16")];
        tensor<int32, [3]> var_34_perm_0 = const()[name = string("op_34_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_55_pad_type_0 = const()[name = string("op_55_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_55_strides_0 = const()[name = string("op_55_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_55_pad_0 = const()[name = string("op_55_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_55_dilations_0 = const()[name = string("op_55_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_55_groups_0 = const()[name = string("op_55_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_35_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(33859776))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(66693376))))[name = string("op_35_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_55_cast_fp16 = conv(dilations = var_55_dilations_0, groups = var_55_groups_0, pad = var_55_pad_0, pad_type = var_55_pad_type_0, strides = var_55_strides_0, weight = op_35_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_55_cast_fp16")];
        tensor<int32, [1]> var_57_axes_0 = const()[name = string("op_57_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_57_cast_fp16 = squeeze(axes = var_57_axes_0, x = var_55_cast_fp16)[name = string("op_57_cast_fp16")];
        tensor<int32, [3]> var_60_perm_0 = const()[name = string("op_60_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_81_pad_type_0 = const()[name = string("op_81_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_81_strides_0 = const()[name = string("op_81_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_81_pad_0 = const()[name = string("op_81_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_81_dilations_0 = const()[name = string("op_81_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_81_groups_0 = const()[name = string("op_81_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_61_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(67719488))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(100553088))))[name = string("op_61_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_81_cast_fp16 = conv(dilations = var_81_dilations_0, groups = var_81_groups_0, pad = var_81_pad_0, pad_type = var_81_pad_type_0, strides = var_81_strides_0, weight = op_61_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_81_cast_fp16")];
        tensor<int32, [1]> var_83_axes_0 = const()[name = string("op_83_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_83_cast_fp16 = squeeze(axes = var_83_axes_0, x = var_81_cast_fp16)[name = string("op_83_cast_fp16")];
        tensor<int32, [3]> var_86_perm_0 = const()[name = string("op_86_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_107_pad_type_0 = const()[name = string("op_107_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_107_strides_0 = const()[name = string("op_107_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_107_pad_0 = const()[name = string("op_107_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_107_dilations_0 = const()[name = string("op_107_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_107_groups_0 = const()[name = string("op_107_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_87_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(101579200))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(134412800))))[name = string("op_87_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_107_cast_fp16 = conv(dilations = var_107_dilations_0, groups = var_107_groups_0, pad = var_107_pad_0, pad_type = var_107_pad_type_0, strides = var_107_strides_0, weight = op_87_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_107_cast_fp16")];
        tensor<int32, [1]> var_109_axes_0 = const()[name = string("op_109_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_109_cast_fp16 = squeeze(axes = var_109_axes_0, x = var_107_cast_fp16)[name = string("op_109_cast_fp16")];
        tensor<int32, [3]> var_112_perm_0 = const()[name = string("op_112_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_133_pad_type_0 = const()[name = string("op_133_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_133_strides_0 = const()[name = string("op_133_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_133_pad_0 = const()[name = string("op_133_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_133_dilations_0 = const()[name = string("op_133_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_133_groups_0 = const()[name = string("op_133_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_113_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(135438912))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(168272512))))[name = string("op_113_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_133_cast_fp16 = conv(dilations = var_133_dilations_0, groups = var_133_groups_0, pad = var_133_pad_0, pad_type = var_133_pad_type_0, strides = var_133_strides_0, weight = op_113_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_133_cast_fp16")];
        tensor<int32, [1]> var_135_axes_0 = const()[name = string("op_135_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_135_cast_fp16 = squeeze(axes = var_135_axes_0, x = var_133_cast_fp16)[name = string("op_135_cast_fp16")];
        tensor<int32, [3]> var_138_perm_0 = const()[name = string("op_138_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_159_pad_type_0 = const()[name = string("op_159_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_159_strides_0 = const()[name = string("op_159_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_159_pad_0 = const()[name = string("op_159_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_159_dilations_0 = const()[name = string("op_159_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_159_groups_0 = const()[name = string("op_159_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_139_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(169298624))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(202132224))))[name = string("op_139_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_159_cast_fp16 = conv(dilations = var_159_dilations_0, groups = var_159_groups_0, pad = var_159_pad_0, pad_type = var_159_pad_type_0, strides = var_159_strides_0, weight = op_139_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_159_cast_fp16")];
        tensor<int32, [1]> var_161_axes_0 = const()[name = string("op_161_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_161_cast_fp16 = squeeze(axes = var_161_axes_0, x = var_159_cast_fp16)[name = string("op_161_cast_fp16")];
        tensor<int32, [3]> var_164_perm_0 = const()[name = string("op_164_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_185_pad_type_0 = const()[name = string("op_185_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_185_strides_0 = const()[name = string("op_185_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_185_pad_0 = const()[name = string("op_185_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_185_dilations_0 = const()[name = string("op_185_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_185_groups_0 = const()[name = string("op_185_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_165_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(203158336))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(235991936))))[name = string("op_165_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_185_cast_fp16 = conv(dilations = var_185_dilations_0, groups = var_185_groups_0, pad = var_185_pad_0, pad_type = var_185_pad_type_0, strides = var_185_strides_0, weight = op_165_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_185_cast_fp16")];
        tensor<int32, [1]> var_187_axes_0 = const()[name = string("op_187_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_187_cast_fp16 = squeeze(axes = var_187_axes_0, x = var_185_cast_fp16)[name = string("op_187_cast_fp16")];
        tensor<int32, [3]> var_190_perm_0 = const()[name = string("op_190_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        string var_211_pad_type_0 = const()[name = string("op_211_pad_type_0"), val = string("valid")];
        tensor<int32, [2]> var_211_strides_0 = const()[name = string("op_211_strides_0"), val = tensor<int32, [2]>([1, 1])];
        tensor<int32, [4]> var_211_pad_0 = const()[name = string("op_211_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
        tensor<int32, [2]> var_211_dilations_0 = const()[name = string("op_211_dilations_0"), val = tensor<int32, [2]>([1, 1])];
        int32 var_211_groups_0 = const()[name = string("op_211_groups_0"), val = int32(1)];
        tensor<fp16, [16032, 2048, 1, 1]> op_191_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(237018048))), lut = tensor<fp16, [2004, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(269851648))))[name = string("op_191_promoted_to_fp16_palettized")];
        tensor<fp16, [1, 16032, 1, 1]> var_211_cast_fp16 = conv(dilations = var_211_dilations_0, groups = var_211_groups_0, pad = var_211_pad_0, pad_type = var_211_pad_type_0, strides = var_211_strides_0, weight = op_191_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_211_cast_fp16")];
        tensor<int32, [1]> var_213_axes_0 = const()[name = string("op_213_axes_0"), val = tensor<int32, [1]>([2])];
        tensor<fp16, [1, 16032, 1]> var_213_cast_fp16 = squeeze(axes = var_213_axes_0, x = var_211_cast_fp16)[name = string("op_213_cast_fp16")];
        tensor<int32, [3]> var_216_perm_0 = const()[name = string("op_216_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
        tensor<fp16, [1, 1, 16032]> logits1 = transpose(perm = var_34_perm_0, x = var_31_cast_fp16)[name = string("transpose_0")];
        tensor<fp16, [1, 1, 16032]> logits2 = transpose(perm = var_60_perm_0, x = var_57_cast_fp16)[name = string("transpose_1")];
        tensor<fp16, [1, 1, 16032]> logits3 = transpose(perm = var_86_perm_0, x = var_83_cast_fp16)[name = string("transpose_2")];
        tensor<fp16, [1, 1, 16032]> logits4 = transpose(perm = var_112_perm_0, x = var_109_cast_fp16)[name = string("transpose_3")];
        tensor<fp16, [1, 1, 16032]> logits5 = transpose(perm = var_138_perm_0, x = var_135_cast_fp16)[name = string("transpose_4")];
        tensor<fp16, [1, 1, 16032]> logits6 = transpose(perm = var_164_perm_0, x = var_161_cast_fp16)[name = string("transpose_5")];
        tensor<fp16, [1, 1, 16032]> logits7 = transpose(perm = var_190_perm_0, x = var_187_cast_fp16)[name = string("transpose_6")];
        tensor<fp16, [1, 1, 16032]> logits8 = transpose(perm = var_216_perm_0, x = var_213_cast_fp16)[name = string("transpose_7")];
    } -> (logits1, logits2, logits3, logits4, logits5, logits6, logits7, logits8);
}
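Each of the eight palettized `conv` slices above produces one `logitsN` output of width 16032; together they cover the full 8 × 16032 = 128256-entry Llama 3 vocabulary. A sketch of reassembling them (the path and coremltools usage are assumptions, not the shipped pipeline):

```python
import numpy as np
import coremltools as ct

head = ct.models.CompiledMLModel("llama_lm_head_lut8.mlmodelc")

hidden = np.zeros((1, 1, 2048), dtype=np.float16)  # last hidden state from the FFN chunks
out = head.predict({"hidden_states": hidden})

# Concatenate the eight [1, 1, 16032] slices back into [1, 1, 128256].
logits = np.concatenate([out[f"logits{i}"] for i in range(1, 9)], axis=-1)
next_token = int(logits[0, 0].argmax())  # greedy pick over the full vocabulary
```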
llama_lm_head_lut8.mlmodelc/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03427abe8cc13b09427ac5d06e23a347acff6046dca8b6381d2c40774c38f017
size 270877760
meta.yaml
ADDED
@@ -0,0 +1,23 @@
model_info:
  name: anemll-Meta-Llama-3.2-1B-LUT8-ctx512
  version: 0.3.0
  description: |
    Demonstrates running Meta-Llama-3.2-1B on Apple Neural Engine
    Context length: 512
    Batch size: 64
    Chunks: 2
  license: MIT
  author: Anemll
  framework: Core ML
  language: Python
  parameters:
    context_length: 512
    batch_size: 64
    lut_embeddings: 8
    lut_ffn: 8
    lut_lmhead: 8
    num_chunks: 2
    model_prefix: llama
    embeddings: llama_embeddings_lut8.mlmodelc
    lm_head: llama_lm_head_lut8.mlmodelc
    ffn: llama_FFN_PF_lut8.mlmodelc
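`meta.yaml` serves as the manifest from which a loader can resolve the model parts. A sketch of reading it and expanding the chunked FFN file names follows; the `_chunk_NNofMM` expansion mirrors the file names in this repo and is an assumption about the naming convention, not a fixed API:

```python
import yaml  # PyYAML

with open("meta.yaml") as f:
    params = yaml.safe_load(f)["model_info"]["parameters"]

embeddings = params["embeddings"]  # llama_embeddings_lut8.mlmodelc
lm_head = params["lm_head"]        # llama_lm_head_lut8.mlmodelc

# Expand the FFN base name into its per-chunk .mlmodelc directories.
n = params["num_chunks"]
base = params["ffn"].removesuffix(".mlmodelc")
ffn_chunks = [f"{base}_chunk_{i:02d}of{n:02d}.mlmodelc" for i in range(1, n + 1)]
# -> ['llama_FFN_PF_lut8_chunk_01of02.mlmodelc', 'llama_FFN_PF_lut8_chunk_02of02.mlmodelc']
```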
tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json
ADDED
@@ -0,0 +1,2062 @@
{
  "added_tokens_decoder": {
    "128000": {
      "content": "<|begin_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128001": {
      "content": "<|end_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128002": {
      "content": "<|reserved_special_token_0|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128003": {
      "content": "<|reserved_special_token_1|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128004": {
      "content": "<|finetune_right_pad_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128005": {
      "content": "<|reserved_special_token_2|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128006": {
      "content": "<|start_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128007": {
      "content": "<|end_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128008": {
      "content": "<|eom_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128009": {
      "content": "<|eot_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128010": {
      "content": "<|python_tag|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128011": {
      "content": "<|reserved_special_token_3|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128012": {
      "content": "<|reserved_special_token_4|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128013": {
      "content": "<|reserved_special_token_5|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128014": {
      "content": "<|reserved_special_token_6|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128015": {
      "content": "<|reserved_special_token_7|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128016": {
      "content": "<|reserved_special_token_8|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128017": {
      "content": "<|reserved_special_token_9|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128018": {
      "content": "<|reserved_special_token_10|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128019": {
      "content": "<|reserved_special_token_11|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128020": {
      "content": "<|reserved_special_token_12|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128021": {
      "content": "<|reserved_special_token_13|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128022": {
      "content": "<|reserved_special_token_14|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128023": {
      "content": "<|reserved_special_token_15|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128024": {
      "content": "<|reserved_special_token_16|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128025": {
      "content": "<|reserved_special_token_17|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128026": {
      "content": "<|reserved_special_token_18|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128027": {
      "content": "<|reserved_special_token_19|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128028": {
      "content": "<|reserved_special_token_20|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128029": {
      "content": "<|reserved_special_token_21|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128030": {
      "content": "<|reserved_special_token_22|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128031": {
      "content": "<|reserved_special_token_23|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128032": {
      "content": "<|reserved_special_token_24|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128033": {
      "content": "<|reserved_special_token_25|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128034": {
      "content": "<|reserved_special_token_26|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128035": {
      "content": "<|reserved_special_token_27|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128036": {
      "content": "<|reserved_special_token_28|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128037": {
      "content": "<|reserved_special_token_29|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128038": {
      "content": "<|reserved_special_token_30|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128039": {
      "content": "<|reserved_special_token_31|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128040": {
      "content": "<|reserved_special_token_32|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128041": {
      "content": "<|reserved_special_token_33|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128042": {
      "content": "<|reserved_special_token_34|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128043": {
      "content": "<|reserved_special_token_35|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128044": {
      "content": "<|reserved_special_token_36|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128045": {
      "content": "<|reserved_special_token_37|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128046": {
      "content": "<|reserved_special_token_38|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128047": {
      "content": "<|reserved_special_token_39|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128048": {
      "content": "<|reserved_special_token_40|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128049": {
      "content": "<|reserved_special_token_41|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128050": {
      "content": "<|reserved_special_token_42|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128051": {
      "content": "<|reserved_special_token_43|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128052": {
      "content": "<|reserved_special_token_44|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128053": {
      "content": "<|reserved_special_token_45|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128054": {
      "content": "<|reserved_special_token_46|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128055": {
      "content": "<|reserved_special_token_47|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128056": {
      "content": "<|reserved_special_token_48|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128057": {
      "content": "<|reserved_special_token_49|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128058": {
      "content": "<|reserved_special_token_50|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128059": {
      "content": "<|reserved_special_token_51|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128060": {
      "content": "<|reserved_special_token_52|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128061": {
      "content": "<|reserved_special_token_53|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128062": {
      "content": "<|reserved_special_token_54|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128063": {
      "content": "<|reserved_special_token_55|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128064": {
      "content": "<|reserved_special_token_56|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128065": {
      "content": "<|reserved_special_token_57|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128066": {
      "content": "<|reserved_special_token_58|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128067": {
      "content": "<|reserved_special_token_59|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128068": {
      "content": "<|reserved_special_token_60|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128069": {
      "content": "<|reserved_special_token_61|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128070": {
      "content": "<|reserved_special_token_62|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128071": {
      "content": "<|reserved_special_token_63|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128072": {
      "content": "<|reserved_special_token_64|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128073": {
      "content": "<|reserved_special_token_65|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128074": {
      "content": "<|reserved_special_token_66|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128075": {
      "content": "<|reserved_special_token_67|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128076": {
      "content": "<|reserved_special_token_68|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128077": {
      "content": "<|reserved_special_token_69|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128078": {
      "content": "<|reserved_special_token_70|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128079": {
      "content": "<|reserved_special_token_71|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128080": {
      "content": "<|reserved_special_token_72|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128081": {
      "content": "<|reserved_special_token_73|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128082": {
      "content": "<|reserved_special_token_74|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128083": {
      "content": "<|reserved_special_token_75|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128084": {
      "content": "<|reserved_special_token_76|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128085": {
      "content": "<|reserved_special_token_77|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128086": {
      "content": "<|reserved_special_token_78|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128087": {
      "content": "<|reserved_special_token_79|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128088": {
      "content": "<|reserved_special_token_80|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128089": {
      "content": "<|reserved_special_token_81|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128090": {
      "content": "<|reserved_special_token_82|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128091": {
      "content": "<|reserved_special_token_83|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128092": {
      "content": "<|reserved_special_token_84|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128093": {
      "content": "<|reserved_special_token_85|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128094": {
      "content": "<|reserved_special_token_86|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128095": {
      "content": "<|reserved_special_token_87|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128096": {
      "content": "<|reserved_special_token_88|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128097": {
      "content": "<|reserved_special_token_89|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
|
784 |
+
"single_word": false,
|
785 |
+
"special": true
|
786 |
+
},
|
787 |
+
"128098": {
|
788 |
+
"content": "<|reserved_special_token_90|>",
|
789 |
+
"lstrip": false,
|
790 |
+
"normalized": false,
|
791 |
+
"rstrip": false,
|
792 |
+
"single_word": false,
|
793 |
+
"special": true
|
794 |
+
},
|
795 |
+
"128099": {
|
796 |
+
"content": "<|reserved_special_token_91|>",
|
797 |
+
"lstrip": false,
|
798 |
+
"normalized": false,
|
799 |
+
"rstrip": false,
|
800 |
+
"single_word": false,
|
801 |
+
"special": true
|
802 |
+
},
|
803 |
+
"128100": {
|
804 |
+
"content": "<|reserved_special_token_92|>",
|
805 |
+
"lstrip": false,
|
806 |
+
"normalized": false,
|
807 |
+
"rstrip": false,
|
808 |
+
"single_word": false,
|
809 |
+
"special": true
|
810 |
+
},
|
811 |
+
"128101": {
|
812 |
+
"content": "<|reserved_special_token_93|>",
|
813 |
+
"lstrip": false,
|
814 |
+
"normalized": false,
|
815 |
+
"rstrip": false,
|
816 |
+
"single_word": false,
|
817 |
+
"special": true
|
818 |
+
},
|
819 |
+
"128102": {
|
820 |
+
"content": "<|reserved_special_token_94|>",
|
821 |
+
"lstrip": false,
|
822 |
+
"normalized": false,
|
823 |
+
"rstrip": false,
|
824 |
+
"single_word": false,
|
825 |
+
"special": true
|
826 |
+
},
|
827 |
+
"128103": {
|
828 |
+
"content": "<|reserved_special_token_95|>",
|
829 |
+
"lstrip": false,
|
830 |
+
"normalized": false,
|
831 |
+
"rstrip": false,
|
832 |
+
"single_word": false,
|
833 |
+
"special": true
|
834 |
+
},
|
835 |
+
"128104": {
|
836 |
+
"content": "<|reserved_special_token_96|>",
|
837 |
+
"lstrip": false,
|
838 |
+
"normalized": false,
|
839 |
+
"rstrip": false,
|
840 |
+
"single_word": false,
|
841 |
+
"special": true
|
842 |
+
},
|
843 |
+
"128105": {
|
844 |
+
"content": "<|reserved_special_token_97|>",
|
845 |
+
"lstrip": false,
|
846 |
+
"normalized": false,
|
847 |
+
"rstrip": false,
|
848 |
+
"single_word": false,
|
849 |
+
"special": true
|
850 |
+
},
|
851 |
+
"128106": {
|
852 |
+
"content": "<|reserved_special_token_98|>",
|
853 |
+
"lstrip": false,
|
854 |
+
"normalized": false,
|
855 |
+
"rstrip": false,
|
856 |
+
"single_word": false,
|
857 |
+
"special": true
|
858 |
+
},
|
859 |
+
"128107": {
|
860 |
+
"content": "<|reserved_special_token_99|>",
|
861 |
+
"lstrip": false,
|
862 |
+
"normalized": false,
|
863 |
+
"rstrip": false,
|
864 |
+
"single_word": false,
|
865 |
+
"special": true
|
866 |
+
},
|
867 |
+
"128108": {
|
868 |
+
"content": "<|reserved_special_token_100|>",
|
869 |
+
"lstrip": false,
|
870 |
+
"normalized": false,
|
871 |
+
"rstrip": false,
|
872 |
+
"single_word": false,
|
873 |
+
"special": true
|
874 |
+
},
|
875 |
+
"128109": {
|
876 |
+
"content": "<|reserved_special_token_101|>",
|
877 |
+
"lstrip": false,
|
878 |
+
"normalized": false,
|
879 |
+
"rstrip": false,
|
880 |
+
"single_word": false,
|
881 |
+
"special": true
|
882 |
+
},
|
883 |
+
"128110": {
|
884 |
+
"content": "<|reserved_special_token_102|>",
|
885 |
+
"lstrip": false,
|
886 |
+
"normalized": false,
|
887 |
+
"rstrip": false,
|
888 |
+
"single_word": false,
|
889 |
+
"special": true
|
890 |
+
},
|
891 |
+
"128111": {
|
892 |
+
"content": "<|reserved_special_token_103|>",
|
893 |
+
"lstrip": false,
|
894 |
+
"normalized": false,
|
895 |
+
"rstrip": false,
|
896 |
+
"single_word": false,
|
897 |
+
"special": true
|
898 |
+
},
|
899 |
+
"128112": {
|
900 |
+
"content": "<|reserved_special_token_104|>",
|
901 |
+
"lstrip": false,
|
902 |
+
"normalized": false,
|
903 |
+
"rstrip": false,
|
904 |
+
"single_word": false,
|
905 |
+
"special": true
|
906 |
+
},
|
907 |
+
"128113": {
|
908 |
+
"content": "<|reserved_special_token_105|>",
|
909 |
+
"lstrip": false,
|
910 |
+
"normalized": false,
|
911 |
+
"rstrip": false,
|
912 |
+
"single_word": false,
|
913 |
+
"special": true
|
914 |
+
},
|
915 |
+
"128114": {
|
916 |
+
"content": "<|reserved_special_token_106|>",
|
917 |
+
"lstrip": false,
|
918 |
+
"normalized": false,
|
919 |
+
"rstrip": false,
|
920 |
+
"single_word": false,
|
921 |
+
"special": true
|
922 |
+
},
|
923 |
+
"128115": {
|
924 |
+
"content": "<|reserved_special_token_107|>",
|
925 |
+
"lstrip": false,
|
926 |
+
"normalized": false,
|
927 |
+
"rstrip": false,
|
928 |
+
"single_word": false,
|
929 |
+
"special": true
|
930 |
+
},
|
931 |
+
"128116": {
|
932 |
+
"content": "<|reserved_special_token_108|>",
|
933 |
+
"lstrip": false,
|
934 |
+
"normalized": false,
|
935 |
+
"rstrip": false,
|
936 |
+
"single_word": false,
|
937 |
+
"special": true
|
938 |
+
},
|
939 |
+
"128117": {
|
940 |
+
"content": "<|reserved_special_token_109|>",
|
941 |
+
"lstrip": false,
|
942 |
+
"normalized": false,
|
943 |
+
"rstrip": false,
|
944 |
+
"single_word": false,
|
945 |
+
"special": true
|
946 |
+
},
|
947 |
+
"128118": {
|
948 |
+
"content": "<|reserved_special_token_110|>",
|
949 |
+
"lstrip": false,
|
950 |
+
"normalized": false,
|
951 |
+
"rstrip": false,
|
952 |
+
"single_word": false,
|
953 |
+
"special": true
|
954 |
+
},
|
955 |
+
"128119": {
|
956 |
+
"content": "<|reserved_special_token_111|>",
|
957 |
+
"lstrip": false,
|
958 |
+
"normalized": false,
|
959 |
+
"rstrip": false,
|
960 |
+
"single_word": false,
|
961 |
+
"special": true
|
962 |
+
},
|
963 |
+
"128120": {
|
964 |
+
"content": "<|reserved_special_token_112|>",
|
965 |
+
"lstrip": false,
|
966 |
+
"normalized": false,
|
967 |
+
"rstrip": false,
|
968 |
+
"single_word": false,
|
969 |
+
"special": true
|
970 |
+
},
|
971 |
+
"128121": {
|
972 |
+
"content": "<|reserved_special_token_113|>",
|
973 |
+
"lstrip": false,
|
974 |
+
"normalized": false,
|
975 |
+
"rstrip": false,
|
976 |
+
"single_word": false,
|
977 |
+
"special": true
|
978 |
+
},
|
979 |
+
"128122": {
|
980 |
+
"content": "<|reserved_special_token_114|>",
|
981 |
+
"lstrip": false,
|
982 |
+
"normalized": false,
|
983 |
+
"rstrip": false,
|
984 |
+
"single_word": false,
|
985 |
+
"special": true
|
986 |
+
},
|
987 |
+
"128123": {
|
988 |
+
"content": "<|reserved_special_token_115|>",
|
989 |
+
"lstrip": false,
|
990 |
+
"normalized": false,
|
991 |
+
"rstrip": false,
|
992 |
+
"single_word": false,
|
993 |
+
"special": true
|
994 |
+
},
|
995 |
+
"128124": {
|
996 |
+
"content": "<|reserved_special_token_116|>",
|
997 |
+
"lstrip": false,
|
998 |
+
"normalized": false,
|
999 |
+
"rstrip": false,
|
1000 |
+
"single_word": false,
|
1001 |
+
"special": true
|
1002 |
+
},
|
1003 |
+
"128125": {
|
1004 |
+
"content": "<|reserved_special_token_117|>",
|
1005 |
+
"lstrip": false,
|
1006 |
+
"normalized": false,
|
1007 |
+
"rstrip": false,
|
1008 |
+
"single_word": false,
|
1009 |
+
"special": true
|
1010 |
+
},
|
1011 |
+
"128126": {
|
1012 |
+
"content": "<|reserved_special_token_118|>",
|
1013 |
+
"lstrip": false,
|
1014 |
+
"normalized": false,
|
1015 |
+
"rstrip": false,
|
1016 |
+
"single_word": false,
|
1017 |
+
"special": true
|
1018 |
+
},
|
1019 |
+
"128127": {
|
1020 |
+
"content": "<|reserved_special_token_119|>",
|
1021 |
+
"lstrip": false,
|
1022 |
+
"normalized": false,
|
1023 |
+
"rstrip": false,
|
1024 |
+
"single_word": false,
|
1025 |
+
"special": true
|
1026 |
+
},
|
1027 |
+
"128128": {
|
1028 |
+
"content": "<|reserved_special_token_120|>",
|
1029 |
+
"lstrip": false,
|
1030 |
+
"normalized": false,
|
1031 |
+
"rstrip": false,
|
1032 |
+
"single_word": false,
|
1033 |
+
"special": true
|
1034 |
+
},
|
1035 |
+
"128129": {
|
1036 |
+
"content": "<|reserved_special_token_121|>",
|
1037 |
+
"lstrip": false,
|
1038 |
+
"normalized": false,
|
1039 |
+
"rstrip": false,
|
1040 |
+
"single_word": false,
|
1041 |
+
"special": true
|
1042 |
+
},
|
1043 |
+
"128130": {
|
1044 |
+
"content": "<|reserved_special_token_122|>",
|
1045 |
+
"lstrip": false,
|
1046 |
+
"normalized": false,
|
1047 |
+
"rstrip": false,
|
1048 |
+
"single_word": false,
|
1049 |
+
"special": true
|
1050 |
+
},
|
1051 |
+
"128131": {
|
1052 |
+
"content": "<|reserved_special_token_123|>",
|
1053 |
+
"lstrip": false,
|
1054 |
+
"normalized": false,
|
1055 |
+
"rstrip": false,
|
1056 |
+
"single_word": false,
|
1057 |
+
"special": true
|
1058 |
+
},
|
1059 |
+
"128132": {
|
1060 |
+
"content": "<|reserved_special_token_124|>",
|
1061 |
+
"lstrip": false,
|
1062 |
+
"normalized": false,
|
1063 |
+
"rstrip": false,
|
1064 |
+
"single_word": false,
|
1065 |
+
"special": true
|
1066 |
+
},
|
1067 |
+
"128133": {
|
1068 |
+
"content": "<|reserved_special_token_125|>",
|
1069 |
+
"lstrip": false,
|
1070 |
+
"normalized": false,
|
1071 |
+
"rstrip": false,
|
1072 |
+
"single_word": false,
|
1073 |
+
"special": true
|
1074 |
+
},
|
1075 |
+
"128134": {
|
1076 |
+
"content": "<|reserved_special_token_126|>",
|
1077 |
+
"lstrip": false,
|
1078 |
+
"normalized": false,
|
1079 |
+
"rstrip": false,
|
1080 |
+
"single_word": false,
|
1081 |
+
"special": true
|
1082 |
+
},
|
1083 |
+
"128135": {
|
1084 |
+
"content": "<|reserved_special_token_127|>",
|
1085 |
+
"lstrip": false,
|
1086 |
+
"normalized": false,
|
1087 |
+
"rstrip": false,
|
1088 |
+
"single_word": false,
|
1089 |
+
"special": true
|
1090 |
+
},
|
1091 |
+
"128136": {
|
1092 |
+
"content": "<|reserved_special_token_128|>",
|
1093 |
+
"lstrip": false,
|
1094 |
+
"normalized": false,
|
1095 |
+
"rstrip": false,
|
1096 |
+
"single_word": false,
|
1097 |
+
"special": true
|
1098 |
+
},
|
1099 |
+
"128137": {
|
1100 |
+
"content": "<|reserved_special_token_129|>",
|
1101 |
+
"lstrip": false,
|
1102 |
+
"normalized": false,
|
1103 |
+
"rstrip": false,
|
1104 |
+
"single_word": false,
|
1105 |
+
"special": true
|
1106 |
+
},
|
1107 |
+
"128138": {
|
1108 |
+
"content": "<|reserved_special_token_130|>",
|
1109 |
+
"lstrip": false,
|
1110 |
+
"normalized": false,
|
1111 |
+
"rstrip": false,
|
1112 |
+
"single_word": false,
|
1113 |
+
"special": true
|
1114 |
+
},
|
1115 |
+
"128139": {
|
1116 |
+
"content": "<|reserved_special_token_131|>",
|
1117 |
+
"lstrip": false,
|
1118 |
+
"normalized": false,
|
1119 |
+
"rstrip": false,
|
1120 |
+
"single_word": false,
|
1121 |
+
"special": true
|
1122 |
+
},
|
1123 |
+
"128140": {
|
1124 |
+
"content": "<|reserved_special_token_132|>",
|
1125 |
+
"lstrip": false,
|
1126 |
+
"normalized": false,
|
1127 |
+
"rstrip": false,
|
1128 |
+
"single_word": false,
|
1129 |
+
"special": true
|
1130 |
+
},
|
1131 |
+
"128141": {
|
1132 |
+
"content": "<|reserved_special_token_133|>",
|
1133 |
+
"lstrip": false,
|
1134 |
+
"normalized": false,
|
1135 |
+
"rstrip": false,
|
1136 |
+
"single_word": false,
|
1137 |
+
"special": true
|
1138 |
+
},
|
1139 |
+
"128142": {
|
1140 |
+
"content": "<|reserved_special_token_134|>",
|
1141 |
+
"lstrip": false,
|
1142 |
+
"normalized": false,
|
1143 |
+
"rstrip": false,
|
1144 |
+
"single_word": false,
|
1145 |
+
"special": true
|
1146 |
+
},
|
1147 |
+
"128143": {
|
1148 |
+
"content": "<|reserved_special_token_135|>",
|
1149 |
+
"lstrip": false,
|
1150 |
+
"normalized": false,
|
1151 |
+
"rstrip": false,
|
1152 |
+
"single_word": false,
|
1153 |
+
"special": true
|
1154 |
+
},
|
1155 |
+
"128144": {
|
1156 |
+
"content": "<|reserved_special_token_136|>",
|
1157 |
+
"lstrip": false,
|
1158 |
+
"normalized": false,
|
1159 |
+
"rstrip": false,
|
1160 |
+
"single_word": false,
|
1161 |
+
"special": true
|
1162 |
+
},
|
1163 |
+
"128145": {
|
1164 |
+
"content": "<|reserved_special_token_137|>",
|
1165 |
+
"lstrip": false,
|
1166 |
+
"normalized": false,
|
1167 |
+
"rstrip": false,
|
1168 |
+
"single_word": false,
|
1169 |
+
"special": true
|
1170 |
+
},
|
1171 |
+
"128146": {
|
1172 |
+
"content": "<|reserved_special_token_138|>",
|
1173 |
+
"lstrip": false,
|
1174 |
+
"normalized": false,
|
1175 |
+
"rstrip": false,
|
1176 |
+
"single_word": false,
|
1177 |
+
"special": true
|
1178 |
+
},
|
1179 |
+
"128147": {
|
1180 |
+
"content": "<|reserved_special_token_139|>",
|
1181 |
+
"lstrip": false,
|
1182 |
+
"normalized": false,
|
1183 |
+
"rstrip": false,
|
1184 |
+
"single_word": false,
|
1185 |
+
"special": true
|
1186 |
+
},
|
1187 |
+
"128148": {
|
1188 |
+
"content": "<|reserved_special_token_140|>",
|
1189 |
+
"lstrip": false,
|
1190 |
+
"normalized": false,
|
1191 |
+
"rstrip": false,
|
1192 |
+
"single_word": false,
|
1193 |
+
"special": true
|
1194 |
+
},
|
1195 |
+
"128149": {
|
1196 |
+
"content": "<|reserved_special_token_141|>",
|
1197 |
+
"lstrip": false,
|
1198 |
+
"normalized": false,
|
1199 |
+
"rstrip": false,
|
1200 |
+
"single_word": false,
|
1201 |
+
"special": true
|
1202 |
+
},
|
1203 |
+
"128150": {
|
1204 |
+
"content": "<|reserved_special_token_142|>",
|
1205 |
+
"lstrip": false,
|
1206 |
+
"normalized": false,
|
1207 |
+
"rstrip": false,
|
1208 |
+
"single_word": false,
|
1209 |
+
"special": true
|
1210 |
+
},
|
1211 |
+
"128151": {
|
1212 |
+
"content": "<|reserved_special_token_143|>",
|
1213 |
+
"lstrip": false,
|
1214 |
+
"normalized": false,
|
1215 |
+
"rstrip": false,
|
1216 |
+
"single_word": false,
|
1217 |
+
"special": true
|
1218 |
+
},
|
1219 |
+
"128152": {
|
1220 |
+
"content": "<|reserved_special_token_144|>",
|
1221 |
+
"lstrip": false,
|
1222 |
+
"normalized": false,
|
1223 |
+
"rstrip": false,
|
1224 |
+
"single_word": false,
|
1225 |
+
"special": true
|
1226 |
+
},
|
1227 |
+
"128153": {
|
1228 |
+
"content": "<|reserved_special_token_145|>",
|
1229 |
+
"lstrip": false,
|
1230 |
+
"normalized": false,
|
1231 |
+
"rstrip": false,
|
1232 |
+
"single_word": false,
|
1233 |
+
"special": true
|
1234 |
+
},
|
1235 |
+
"128154": {
|
1236 |
+
"content": "<|reserved_special_token_146|>",
|
1237 |
+
"lstrip": false,
|
1238 |
+
"normalized": false,
|
1239 |
+
"rstrip": false,
|
1240 |
+
"single_word": false,
|
1241 |
+
"special": true
|
1242 |
+
},
|
1243 |
+
"128155": {
|
1244 |
+
"content": "<|reserved_special_token_147|>",
|
1245 |
+
"lstrip": false,
|
1246 |
+
"normalized": false,
|
1247 |
+
"rstrip": false,
|
1248 |
+
"single_word": false,
|
1249 |
+
"special": true
|
1250 |
+
},
|
1251 |
+
"128156": {
|
1252 |
+
"content": "<|reserved_special_token_148|>",
|
1253 |
+
"lstrip": false,
|
1254 |
+
"normalized": false,
|
1255 |
+
"rstrip": false,
|
1256 |
+
"single_word": false,
|
1257 |
+
"special": true
|
1258 |
+
},
|
1259 |
+
"128157": {
|
1260 |
+
"content": "<|reserved_special_token_149|>",
|
1261 |
+
"lstrip": false,
|
1262 |
+
"normalized": false,
|
1263 |
+
"rstrip": false,
|
1264 |
+
"single_word": false,
|
1265 |
+
"special": true
|
1266 |
+
},
|
1267 |
+
"128158": {
|
1268 |
+
"content": "<|reserved_special_token_150|>",
|
1269 |
+
"lstrip": false,
|
1270 |
+
"normalized": false,
|
1271 |
+
"rstrip": false,
|
1272 |
+
"single_word": false,
|
1273 |
+
"special": true
|
1274 |
+
},
|
1275 |
+
"128159": {
|
1276 |
+
"content": "<|reserved_special_token_151|>",
|
1277 |
+
"lstrip": false,
|
1278 |
+
"normalized": false,
|
1279 |
+
"rstrip": false,
|
1280 |
+
"single_word": false,
|
1281 |
+
"special": true
|
1282 |
+
},
|
1283 |
+
"128160": {
|
1284 |
+
"content": "<|reserved_special_token_152|>",
|
1285 |
+
"lstrip": false,
|
1286 |
+
"normalized": false,
|
1287 |
+
"rstrip": false,
|
1288 |
+
"single_word": false,
|
1289 |
+
"special": true
|
1290 |
+
},
|
1291 |
+
"128161": {
|
1292 |
+
"content": "<|reserved_special_token_153|>",
|
1293 |
+
"lstrip": false,
|
1294 |
+
"normalized": false,
|
1295 |
+
"rstrip": false,
|
1296 |
+
"single_word": false,
|
1297 |
+
"special": true
|
1298 |
+
},
|
1299 |
+
"128162": {
|
1300 |
+
"content": "<|reserved_special_token_154|>",
|
1301 |
+
"lstrip": false,
|
1302 |
+
"normalized": false,
|
1303 |
+
"rstrip": false,
|
1304 |
+
"single_word": false,
|
1305 |
+
"special": true
|
1306 |
+
},
|
1307 |
+
"128163": {
|
1308 |
+
"content": "<|reserved_special_token_155|>",
|
1309 |
+
"lstrip": false,
|
1310 |
+
"normalized": false,
|
1311 |
+
"rstrip": false,
|
1312 |
+
"single_word": false,
|
1313 |
+
"special": true
|
1314 |
+
},
|
1315 |
+
"128164": {
|
1316 |
+
"content": "<|reserved_special_token_156|>",
|
1317 |
+
"lstrip": false,
|
1318 |
+
"normalized": false,
|
1319 |
+
"rstrip": false,
|
1320 |
+
"single_word": false,
|
1321 |
+
"special": true
|
1322 |
+
},
|
1323 |
+
"128165": {
|
1324 |
+
"content": "<|reserved_special_token_157|>",
|
1325 |
+
"lstrip": false,
|
1326 |
+
"normalized": false,
|
1327 |
+
"rstrip": false,
|
1328 |
+
"single_word": false,
|
1329 |
+
"special": true
|
1330 |
+
},
|
1331 |
+
"128166": {
|
1332 |
+
"content": "<|reserved_special_token_158|>",
|
1333 |
+
"lstrip": false,
|
1334 |
+
"normalized": false,
|
1335 |
+
"rstrip": false,
|
1336 |
+
"single_word": false,
|
1337 |
+
"special": true
|
1338 |
+
},
|
1339 |
+
"128167": {
|
1340 |
+
"content": "<|reserved_special_token_159|>",
|
1341 |
+
"lstrip": false,
|
1342 |
+
"normalized": false,
|
1343 |
+
"rstrip": false,
|
1344 |
+
"single_word": false,
|
1345 |
+
"special": true
|
1346 |
+
},
|
1347 |
+
"128168": {
|
1348 |
+
"content": "<|reserved_special_token_160|>",
|
1349 |
+
"lstrip": false,
|
1350 |
+
"normalized": false,
|
1351 |
+
"rstrip": false,
|
1352 |
+
"single_word": false,
|
1353 |
+
"special": true
|
1354 |
+
},
|
1355 |
+
"128169": {
|
1356 |
+
"content": "<|reserved_special_token_161|>",
|
1357 |
+
"lstrip": false,
|
1358 |
+
"normalized": false,
|
1359 |
+
"rstrip": false,
|
1360 |
+
"single_word": false,
|
1361 |
+
"special": true
|
1362 |
+
},
|
1363 |
+
"128170": {
|
1364 |
+
"content": "<|reserved_special_token_162|>",
|
1365 |
+
"lstrip": false,
|
1366 |
+
"normalized": false,
|
1367 |
+
"rstrip": false,
|
1368 |
+
"single_word": false,
|
1369 |
+
"special": true
|
1370 |
+
},
|
1371 |
+
"128171": {
|
1372 |
+
"content": "<|reserved_special_token_163|>",
|
1373 |
+
"lstrip": false,
|
1374 |
+
"normalized": false,
|
1375 |
+
"rstrip": false,
|
1376 |
+
"single_word": false,
|
1377 |
+
"special": true
|
1378 |
+
},
|
1379 |
+
"128172": {
|
1380 |
+
"content": "<|reserved_special_token_164|>",
|
1381 |
+
"lstrip": false,
|
1382 |
+
"normalized": false,
|
1383 |
+
"rstrip": false,
|
1384 |
+
"single_word": false,
|
1385 |
+
"special": true
|
1386 |
+
},
|
1387 |
+
"128173": {
|
1388 |
+
"content": "<|reserved_special_token_165|>",
|
1389 |
+
"lstrip": false,
|
1390 |
+
"normalized": false,
|
1391 |
+
"rstrip": false,
|
1392 |
+
"single_word": false,
|
1393 |
+
"special": true
|
1394 |
+
},
|
1395 |
+
"128174": {
|
1396 |
+
"content": "<|reserved_special_token_166|>",
|
1397 |
+
"lstrip": false,
|
1398 |
+
"normalized": false,
|
1399 |
+
"rstrip": false,
|
1400 |
+
"single_word": false,
|
1401 |
+
"special": true
|
1402 |
+
},
|
1403 |
+
"128175": {
|
1404 |
+
"content": "<|reserved_special_token_167|>",
|
1405 |
+
"lstrip": false,
|
1406 |
+
"normalized": false,
|
1407 |
+
"rstrip": false,
|
1408 |
+
"single_word": false,
|
1409 |
+
"special": true
|
1410 |
+
},
|
1411 |
+
"128176": {
|
1412 |
+
"content": "<|reserved_special_token_168|>",
|
1413 |
+
"lstrip": false,
|
1414 |
+
"normalized": false,
|
1415 |
+
"rstrip": false,
|
1416 |
+
"single_word": false,
|
1417 |
+
"special": true
|
1418 |
+
},
|
1419 |
+
"128177": {
|
1420 |
+
"content": "<|reserved_special_token_169|>",
|
1421 |
+
"lstrip": false,
|
1422 |
+
"normalized": false,
|
1423 |
+
"rstrip": false,
|
1424 |
+
"single_word": false,
|
1425 |
+
"special": true
|
1426 |
+
},
|
1427 |
+
"128178": {
|
1428 |
+
"content": "<|reserved_special_token_170|>",
|
1429 |
+
"lstrip": false,
|
1430 |
+
"normalized": false,
|
1431 |
+
"rstrip": false,
|
1432 |
+
"single_word": false,
|
1433 |
+
"special": true
|
1434 |
+
},
|
1435 |
+
"128179": {
|
1436 |
+
"content": "<|reserved_special_token_171|>",
|
1437 |
+
"lstrip": false,
|
1438 |
+
"normalized": false,
|
1439 |
+
"rstrip": false,
|
1440 |
+
"single_word": false,
|
1441 |
+
"special": true
|
1442 |
+
},
|
1443 |
+
"128180": {
|
1444 |
+
"content": "<|reserved_special_token_172|>",
|
1445 |
+
"lstrip": false,
|
1446 |
+
"normalized": false,
|
1447 |
+
"rstrip": false,
|
1448 |
+
"single_word": false,
|
1449 |
+
"special": true
|
1450 |
+
},
|
1451 |
+
"128181": {
|
1452 |
+
"content": "<|reserved_special_token_173|>",
|
1453 |
+
"lstrip": false,
|
1454 |
+
"normalized": false,
|
1455 |
+
"rstrip": false,
|
1456 |
+
"single_word": false,
|
1457 |
+
"special": true
|
1458 |
+
},
|
1459 |
+
"128182": {
|
1460 |
+
"content": "<|reserved_special_token_174|>",
|
1461 |
+
"lstrip": false,
|
1462 |
+
"normalized": false,
|
1463 |
+
"rstrip": false,
|
1464 |
+
"single_word": false,
|
1465 |
+
"special": true
|
1466 |
+
},
|
1467 |
+
"128183": {
|
1468 |
+
"content": "<|reserved_special_token_175|>",
|
1469 |
+
"lstrip": false,
|
1470 |
+
"normalized": false,
|
1471 |
+
"rstrip": false,
|
1472 |
+
"single_word": false,
|
1473 |
+
"special": true
|
1474 |
+
},
|
1475 |
+
"128184": {
|
1476 |
+
"content": "<|reserved_special_token_176|>",
|
1477 |
+
"lstrip": false,
|
1478 |
+
"normalized": false,
|
1479 |
+
"rstrip": false,
|
1480 |
+
"single_word": false,
|
1481 |
+
"special": true
|
1482 |
+
},
|
1483 |
+
"128185": {
|
1484 |
+
"content": "<|reserved_special_token_177|>",
|
1485 |
+
"lstrip": false,
|
1486 |
+
"normalized": false,
|
1487 |
+
"rstrip": false,
|
1488 |
+
"single_word": false,
|
1489 |
+
"special": true
|
1490 |
+
},
|
1491 |
+
"128186": {
|
1492 |
+
"content": "<|reserved_special_token_178|>",
|
1493 |
+
"lstrip": false,
|
1494 |
+
"normalized": false,
|
1495 |
+
"rstrip": false,
|
1496 |
+
"single_word": false,
|
1497 |
+
"special": true
|
1498 |
+
},
|
1499 |
+
"128187": {
|
1500 |
+
"content": "<|reserved_special_token_179|>",
|
1501 |
+
"lstrip": false,
|
1502 |
+
"normalized": false,
|
1503 |
+
"rstrip": false,
|
1504 |
+
"single_word": false,
|
1505 |
+
"special": true
|
1506 |
+
},
|
1507 |
+
"128188": {
|
1508 |
+
"content": "<|reserved_special_token_180|>",
|
1509 |
+
"lstrip": false,
|
1510 |
+
"normalized": false,
|
1511 |
+
"rstrip": false,
|
1512 |
+
"single_word": false,
|
1513 |
+
"special": true
|
1514 |
+
},
|
1515 |
+
"128189": {
|
1516 |
+
"content": "<|reserved_special_token_181|>",
|
1517 |
+
"lstrip": false,
|
1518 |
+
"normalized": false,
|
1519 |
+
"rstrip": false,
|
1520 |
+
"single_word": false,
|
1521 |
+
"special": true
|
1522 |
+
},
|
1523 |
+
"128190": {
|
1524 |
+
"content": "<|reserved_special_token_182|>",
|
1525 |
+
"lstrip": false,
|
1526 |
+
"normalized": false,
|
1527 |
+
"rstrip": false,
|
1528 |
+
"single_word": false,
|
1529 |
+
"special": true
|
1530 |
+
},
|
1531 |
+
"128191": {
|
1532 |
+
"content": "<|reserved_special_token_183|>",
|
1533 |
+
"lstrip": false,
|
1534 |
+
"normalized": false,
|
1535 |
+
"rstrip": false,
|
1536 |
+
"single_word": false,
|
1537 |
+
"special": true
|
1538 |
+
},
|
1539 |
+
"128192": {
|
1540 |
+
"content": "<|reserved_special_token_184|>",
|
1541 |
+
"lstrip": false,
|
1542 |
+
"normalized": false,
|
1543 |
+
"rstrip": false,
|
1544 |
+
"single_word": false,
|
1545 |
+
"special": true
|
1546 |
+
},
|
1547 |
+
"128193": {
|
1548 |
+
"content": "<|reserved_special_token_185|>",
|
1549 |
+
"lstrip": false,
|
1550 |
+
"normalized": false,
|
1551 |
+
"rstrip": false,
|
1552 |
+
"single_word": false,
|
1553 |
+
"special": true
|
1554 |
+
},
|
1555 |
+
"128194": {
|
1556 |
+
"content": "<|reserved_special_token_186|>",
|
1557 |
+
"lstrip": false,
|
1558 |
+
"normalized": false,
|
1559 |
+
"rstrip": false,
|
1560 |
+
"single_word": false,
|
1561 |
+
"special": true
|
1562 |
+
},
|
1563 |
+
"128195": {
|
1564 |
+
"content": "<|reserved_special_token_187|>",
|
1565 |
+
"lstrip": false,
|
1566 |
+
"normalized": false,
|
1567 |
+
"rstrip": false,
|
1568 |
+
"single_word": false,
|
1569 |
+
"special": true
|
1570 |
+
},
|
1571 |
+
"128196": {
|
1572 |
+
"content": "<|reserved_special_token_188|>",
|
1573 |
+
"lstrip": false,
|
1574 |
+
"normalized": false,
|
1575 |
+
"rstrip": false,
|
1576 |
+
"single_word": false,
|
1577 |
+
"special": true
|
1578 |
+
},
|
1579 |
+
"128197": {
|
1580 |
+
"content": "<|reserved_special_token_189|>",
|
1581 |
+
"lstrip": false,
|
1582 |
+
"normalized": false,
|
1583 |
+
"rstrip": false,
|
1584 |
+
"single_word": false,
|
1585 |
+
"special": true
|
1586 |
+
},
|
1587 |
+
"128198": {
|
1588 |
+
"content": "<|reserved_special_token_190|>",
|
1589 |
+
"lstrip": false,
|
1590 |
+
"normalized": false,
|
1591 |
+
"rstrip": false,
|
1592 |
+
"single_word": false,
|
1593 |
+
"special": true
|
1594 |
+
},
|
1595 |
+
"128199": {
|
1596 |
+
"content": "<|reserved_special_token_191|>",
|
1597 |
+
"lstrip": false,
|
1598 |
+
"normalized": false,
|
1599 |
+
"rstrip": false,
|
1600 |
+
"single_word": false,
|
1601 |
+
"special": true
|
1602 |
+
},
|
1603 |
+
"128200": {
|
1604 |
+
"content": "<|reserved_special_token_192|>",
|
1605 |
+
"lstrip": false,
|
1606 |
+
"normalized": false,
|
1607 |
+
"rstrip": false,
|
1608 |
+
"single_word": false,
|
1609 |
+
"special": true
|
1610 |
+
},
|
1611 |
+
"128201": {
|
1612 |
+
"content": "<|reserved_special_token_193|>",
|
1613 |
+
"lstrip": false,
|
1614 |
+
"normalized": false,
|
1615 |
+
"rstrip": false,
|
1616 |
+
"single_word": false,
|
1617 |
+
"special": true
|
1618 |
+
},
|
1619 |
+
"128202": {
|
1620 |
+
"content": "<|reserved_special_token_194|>",
|
1621 |
+
"lstrip": false,
|
1622 |
+
"normalized": false,
|
1623 |
+
"rstrip": false,
|
1624 |
+
"single_word": false,
|
1625 |
+
"special": true
|
1626 |
+
},
|
1627 |
+
"128203": {
|
1628 |
+
"content": "<|reserved_special_token_195|>",
|
1629 |
+
"lstrip": false,
|
1630 |
+
"normalized": false,
|
1631 |
+
"rstrip": false,
|
1632 |
+
"single_word": false,
|
1633 |
+
"special": true
|
1634 |
+
},
|
1635 |
+
"128204": {
|
1636 |
+
"content": "<|reserved_special_token_196|>",
|
1637 |
+
"lstrip": false,
|
1638 |
+
"normalized": false,
|
1639 |
+
"rstrip": false,
|
1640 |
+
"single_word": false,
|
1641 |
+
"special": true
|
1642 |
+
},
|
1643 |
+
"128205": {
|
1644 |
+
"content": "<|reserved_special_token_197|>",
|
1645 |
+
"lstrip": false,
|
1646 |
+
"normalized": false,
|
1647 |
+
"rstrip": false,
|
1648 |
+
"single_word": false,
|
1649 |
+
"special": true
|
1650 |
+
},
|
1651 |
+
"128206": {
|
1652 |
+
"content": "<|reserved_special_token_198|>",
|
1653 |
+
"lstrip": false,
|
1654 |
+
"normalized": false,
|
1655 |
+
"rstrip": false,
|
1656 |
+
"single_word": false,
|
1657 |
+
"special": true
|
1658 |
+
},
|
1659 |
+
"128207": {
|
1660 |
+
"content": "<|reserved_special_token_199|>",
|
1661 |
+
"lstrip": false,
|
1662 |
+
"normalized": false,
|
1663 |
+
"rstrip": false,
|
1664 |
+
"single_word": false,
|
1665 |
+
"special": true
|
1666 |
+
},
|
1667 |
+
"128208": {
|
1668 |
+
"content": "<|reserved_special_token_200|>",
|
1669 |
+
"lstrip": false,
|
1670 |
+
"normalized": false,
|
1671 |
+
"rstrip": false,
|
1672 |
+
"single_word": false,
|
1673 |
+
"special": true
|
1674 |
+
},
|
1675 |
+
"128209": {
|
1676 |
+
"content": "<|reserved_special_token_201|>",
|
1677 |
+
"lstrip": false,
|
1678 |
+
"normalized": false,
|
1679 |
+
"rstrip": false,
|
1680 |
+
"single_word": false,
|
1681 |
+
"special": true
|
1682 |
+
},
|
1683 |
+
"128210": {
|
1684 |
+
"content": "<|reserved_special_token_202|>",
|
1685 |
+
"lstrip": false,
|
1686 |
+
"normalized": false,
|
1687 |
+
"rstrip": false,
|
1688 |
+
"single_word": false,
|
1689 |
+
"special": true
|
1690 |
+
},
|
1691 |
+
"128211": {
|
1692 |
+
"content": "<|reserved_special_token_203|>",
|
1693 |
+
"lstrip": false,
|
1694 |
+
"normalized": false,
|
1695 |
+
"rstrip": false,
|
1696 |
+
"single_word": false,
|
1697 |
+
"special": true
|
1698 |
+
},
|
1699 |
+
"128212": {
|
1700 |
+
"content": "<|reserved_special_token_204|>",
|
1701 |
+
"lstrip": false,
|
1702 |
+
"normalized": false,
|
1703 |
+
"rstrip": false,
|
1704 |
+
"single_word": false,
|
1705 |
+
"special": true
|
1706 |
+
},
|
1707 |
+
"128213": {
|
1708 |
+
"content": "<|reserved_special_token_205|>",
|
1709 |
+
"lstrip": false,
|
1710 |
+
"normalized": false,
|
1711 |
+
"rstrip": false,
|
1712 |
+
"single_word": false,
|
1713 |
+
"special": true
|
1714 |
+
},
|
1715 |
+
"128214": {
|
1716 |
+
"content": "<|reserved_special_token_206|>",
|
1717 |
+
"lstrip": false,
|
1718 |
+
"normalized": false,
|
1719 |
+
"rstrip": false,
|
1720 |
+
"single_word": false,
|
1721 |
+
"special": true
|
1722 |
+
},
|
1723 |
+
"128215": {
|
1724 |
+
"content": "<|reserved_special_token_207|>",
|
1725 |
+
"lstrip": false,
|
1726 |
+
"normalized": false,
|
1727 |
+
"rstrip": false,
|
1728 |
+
"single_word": false,
|
1729 |
+
"special": true
|
1730 |
+
},
|
1731 |
+
"128216": {
|
1732 |
+
"content": "<|reserved_special_token_208|>",
|
1733 |
+
"lstrip": false,
|
1734 |
+
"normalized": false,
|
1735 |
+
"rstrip": false,
|
1736 |
+
"single_word": false,
|
1737 |
+
"special": true
|
1738 |
+
},
|
1739 |
+
"128217": {
|
1740 |
+
"content": "<|reserved_special_token_209|>",
|
1741 |
+
"lstrip": false,
|
1742 |
+
"normalized": false,
|
1743 |
+
"rstrip": false,
|
1744 |
+
"single_word": false,
|
1745 |
+
"special": true
|
1746 |
+
},
|
1747 |
+
"128218": {
|
1748 |
+
"content": "<|reserved_special_token_210|>",
|
1749 |
+
"lstrip": false,
|
1750 |
+
"normalized": false,
|
1751 |
+
"rstrip": false,
|
1752 |
+
"single_word": false,
|
1753 |
+
"special": true
|
1754 |
+
},
|
1755 |
+
"128219": {
|
1756 |
+
"content": "<|reserved_special_token_211|>",
|
1757 |
+
"lstrip": false,
|
1758 |
+
"normalized": false,
|
1759 |
+
"rstrip": false,
|
1760 |
+
"single_word": false,
|
1761 |
+
"special": true
|
1762 |
+
},
|
1763 |
+
"128220": {
|
1764 |
+
"content": "<|reserved_special_token_212|>",
|
1765 |
+
"lstrip": false,
|
1766 |
+
"normalized": false,
|
1767 |
+
"rstrip": false,
|
1768 |
+
"single_word": false,
|
1769 |
+
"special": true
|
1770 |
+
},
|
1771 |
+
"128221": {
|
1772 |
+
"content": "<|reserved_special_token_213|>",
|
1773 |
+
"lstrip": false,
|
1774 |
+
"normalized": false,
|
1775 |
+
"rstrip": false,
|
1776 |
+
"single_word": false,
|
1777 |
+
"special": true
|
1778 |
+
},
|
1779 |
+
"128222": {
|
1780 |
+
"content": "<|reserved_special_token_214|>",
|
1781 |
+
"lstrip": false,
|
1782 |
+
"normalized": false,
|
1783 |
+
"rstrip": false,
|
1784 |
+
"single_word": false,
|
1785 |
+
"special": true
|
1786 |
+
},
|
1787 |
+
"128223": {
|
1788 |
+
"content": "<|reserved_special_token_215|>",
|
1789 |
+
"lstrip": false,
|
1790 |
+
"normalized": false,
|
1791 |
+
"rstrip": false,
|
1792 |
+
"single_word": false,
|
1793 |
+
"special": true
|
1794 |
+
},
|
1795 |
+
"128224": {
|
1796 |
+
"content": "<|reserved_special_token_216|>",
|
1797 |
+
"lstrip": false,
|
1798 |
+
"normalized": false,
|
1799 |
+
"rstrip": false,
|
1800 |
+
"single_word": false,
|
1801 |
+
"special": true
|
1802 |
+
},
|
1803 |
+
"128225": {
|
1804 |
+
"content": "<|reserved_special_token_217|>",
|
1805 |
+
"lstrip": false,
|
1806 |
+
"normalized": false,
|
1807 |
+
"rstrip": false,
|
1808 |
+
"single_word": false,
|
1809 |
+
"special": true
|
1810 |
+
},
|
1811 |
+
"128226": {
|
1812 |
+
"content": "<|reserved_special_token_218|>",
|
1813 |
+
"lstrip": false,
|
1814 |
+
"normalized": false,
|
1815 |
+
"rstrip": false,
|
1816 |
+
"single_word": false,
|
1817 |
+
"special": true
|
1818 |
+
},
|
1819 |
+
"128227": {
|
1820 |
+
"content": "<|reserved_special_token_219|>",
|
1821 |
+
"lstrip": false,
|
1822 |
+
"normalized": false,
|
1823 |
+
"rstrip": false,
|
1824 |
+
"single_word": false,
|
1825 |
+
"special": true
|
1826 |
+
},
|
1827 |
+
"128228": {
|
1828 |
+
"content": "<|reserved_special_token_220|>",
|
1829 |
+
"lstrip": false,
|
1830 |
+
"normalized": false,
|
1831 |
+
"rstrip": false,
|
1832 |
+
"single_word": false,
|
1833 |
+
"special": true
|
1834 |
+
},
|
1835 |
+
"128229": {
|
1836 |
+
"content": "<|reserved_special_token_221|>",
|
1837 |
+
"lstrip": false,
|
1838 |
+
"normalized": false,
|
1839 |
+
"rstrip": false,
|
1840 |
+
"single_word": false,
|
1841 |
+
"special": true
|
1842 |
+
},
|
1843 |
+
"128230": {
|
1844 |
+
"content": "<|reserved_special_token_222|>",
|
1845 |
+
"lstrip": false,
|
1846 |
+
"normalized": false,
|
1847 |
+
"rstrip": false,
|
1848 |
+
"single_word": false,
|
1849 |
+
"special": true
|
1850 |
+
},
|
1851 |
+
"128231": {
|
1852 |
+
"content": "<|reserved_special_token_223|>",
|
1853 |
+
"lstrip": false,
|
1854 |
+
"normalized": false,
|
1855 |
+
"rstrip": false,
|
1856 |
+
"single_word": false,
|
1857 |
+
"special": true
|
1858 |
+
},
|
1859 |
+
"128232": {
|
1860 |
+
"content": "<|reserved_special_token_224|>",
|
1861 |
+
"lstrip": false,
|
1862 |
+
"normalized": false,
|
1863 |
+
"rstrip": false,
|
1864 |
+
"single_word": false,
|
1865 |
+
"special": true
|
1866 |
+
},
|
1867 |
+
"128233": {
|
1868 |
+
"content": "<|reserved_special_token_225|>",
|
1869 |
+
"lstrip": false,
|
1870 |
+
"normalized": false,
|
1871 |
+
"rstrip": false,
|
1872 |
+
"single_word": false,
|
1873 |
+
"special": true
|
1874 |
+
},
|
1875 |
+
"128234": {
|
1876 |
+
"content": "<|reserved_special_token_226|>",
|
1877 |
+
"lstrip": false,
|
1878 |
+
"normalized": false,
|
1879 |
+
"rstrip": false,
|
1880 |
+
"single_word": false,
|
1881 |
+
"special": true
|
1882 |
+
},
|
1883 |
+
"128235": {
|
1884 |
+
"content": "<|reserved_special_token_227|>",
|
1885 |
+
"lstrip": false,
|
1886 |
+
"normalized": false,
|
1887 |
+
"rstrip": false,
|
1888 |
+
"single_word": false,
|
1889 |
+
"special": true
|
1890 |
+
},
|
1891 |
+
"128236": {
|
1892 |
+
"content": "<|reserved_special_token_228|>",
|
1893 |
+
"lstrip": false,
|
1894 |
+
"normalized": false,
|
1895 |
+
"rstrip": false,
|
1896 |
+
"single_word": false,
|
1897 |
+
"special": true
|
1898 |
+
},
|
1899 |
+
"128237": {
|
1900 |
+
"content": "<|reserved_special_token_229|>",
|
1901 |
+
"lstrip": false,
|
1902 |
+
"normalized": false,
|
1903 |
+
"rstrip": false,
|
1904 |
+
"single_word": false,
|
1905 |
+
"special": true
|
1906 |
+
},
|
1907 |
+
"128238": {
|
1908 |
+
"content": "<|reserved_special_token_230|>",
|
1909 |
+
"lstrip": false,
|
1910 |
+
"normalized": false,
|
1911 |
+
"rstrip": false,
|
1912 |
+
"single_word": false,
|
1913 |
+
"special": true
|
1914 |
+
},
|
1915 |
+
"128239": {
|
1916 |
+
"content": "<|reserved_special_token_231|>",
|
1917 |
+
"lstrip": false,
|
1918 |
+
"normalized": false,
|
1919 |
+
"rstrip": false,
|
1920 |
+
"single_word": false,
|
1921 |
+
"special": true
|
1922 |
+
},
|
1923 |
+
"128240": {
|
1924 |
+
"content": "<|reserved_special_token_232|>",
|
1925 |
+
"lstrip": false,
|
1926 |
+
"normalized": false,
|
1927 |
+
"rstrip": false,
|
1928 |
+
"single_word": false,
|
1929 |
+
"special": true
|
1930 |
+
},
|
1931 |
+
"128241": {
|
1932 |
+
"content": "<|reserved_special_token_233|>",
|
1933 |
+
"lstrip": false,
|
1934 |
+
"normalized": false,
|
1935 |
+
"rstrip": false,
|
1936 |
+
"single_word": false,
|
1937 |
+
"special": true
|
1938 |
+
},
|
1939 |
+
"128242": {
|
1940 |
+
"content": "<|reserved_special_token_234|>",
|
1941 |
+
"lstrip": false,
|
1942 |
+
"normalized": false,
|
1943 |
+
"rstrip": false,
|
1944 |
+
"single_word": false,
|
1945 |
+
"special": true
|
1946 |
+
},
|
1947 |
+
"128243": {
|
1948 |
+
"content": "<|reserved_special_token_235|>",
|
1949 |
+
"lstrip": false,
|
1950 |
+
"normalized": false,
|
1951 |
+
"rstrip": false,
|
1952 |
+
"single_word": false,
|
1953 |
+
"special": true
|
1954 |
+
},
|
1955 |
+
"128244": {
|
1956 |
+
"content": "<|reserved_special_token_236|>",
|
1957 |
+
"lstrip": false,
|
1958 |
+
"normalized": false,
|
1959 |
+
"rstrip": false,
|
1960 |
+
"single_word": false,
|
1961 |
+
"special": true
|
1962 |
+
},
|
1963 |
+
"128245": {
|
1964 |
+
"content": "<|reserved_special_token_237|>",
|
1965 |
+
"lstrip": false,
|
1966 |
+
"normalized": false,
|
1967 |
+
"rstrip": false,
|
1968 |
+
"single_word": false,
|
1969 |
+
"special": true
|
1970 |
+
},
|
1971 |
+
"128246": {
|
1972 |
+
"content": "<|reserved_special_token_238|>",
|
1973 |
+
"lstrip": false,
|
1974 |
+
"normalized": false,
|
1975 |
+
"rstrip": false,
|
1976 |
+
"single_word": false,
|
1977 |
+
"special": true
|
1978 |
+
},
|
1979 |
+
"128247": {
|
1980 |
+
"content": "<|reserved_special_token_239|>",
|
1981 |
+
"lstrip": false,
|
1982 |
+
"normalized": false,
|
1983 |
+
"rstrip": false,
|
1984 |
+
"single_word": false,
|
1985 |
+
"special": true
|
1986 |
+
},
|
1987 |
+
"128248": {
|
1988 |
+
"content": "<|reserved_special_token_240|>",
|
1989 |
+
"lstrip": false,
|
1990 |
+
"normalized": false,
|
1991 |
+
"rstrip": false,
|
1992 |
+
"single_word": false,
|
1993 |
+
"special": true
|
1994 |
+
},
|
1995 |
+
"128249": {
|
1996 |
+
"content": "<|reserved_special_token_241|>",
|
1997 |
+
"lstrip": false,
|
1998 |
+
"normalized": false,
|
1999 |
+
"rstrip": false,
|
2000 |
+
"single_word": false,
|
2001 |
+
"special": true
|
2002 |
+
},
|
2003 |
+
"128250": {
|
2004 |
+
"content": "<|reserved_special_token_242|>",
|
2005 |
+
"lstrip": false,
|
2006 |
+
"normalized": false,
|
2007 |
+
"rstrip": false,
|
2008 |
+
"single_word": false,
|
2009 |
+
"special": true
|
2010 |
+
},
|
2011 |
+
"128251": {
|
2012 |
+
"content": "<|reserved_special_token_243|>",
|
2013 |
+
"lstrip": false,
|
2014 |
+
"normalized": false,
|
2015 |
+
"rstrip": false,
|
2016 |
+
"single_word": false,
|
2017 |
+
"special": true
|
2018 |
+
},
|
2019 |
+
"128252": {
|
2020 |
+
"content": "<|reserved_special_token_244|>",
|
2021 |
+
"lstrip": false,
|
2022 |
+
"normalized": false,
|
2023 |
+
"rstrip": false,
|
2024 |
+
"single_word": false,
|
2025 |
+
"special": true
|
2026 |
+
},
|
2027 |
+
"128253": {
|
2028 |
+
"content": "<|reserved_special_token_245|>",
|
2029 |
+
"lstrip": false,
|
2030 |
+
"normalized": false,
|
2031 |
+
"rstrip": false,
|
2032 |
+
"single_word": false,
|
2033 |
+
"special": true
|
2034 |
+
},
|
2035 |
+
"128254": {
|
2036 |
+
"content": "<|reserved_special_token_246|>",
|
2037 |
+
"lstrip": false,
|
2038 |
+
"normalized": false,
|
2039 |
+
"rstrip": false,
|
2040 |
+
"single_word": false,
|
2041 |
+
"special": true
|
2042 |
+
},
|
2043 |
+
"128255": {
|
2044 |
+
"content": "<|reserved_special_token_247|>",
|
2045 |
+
"lstrip": false,
|
2046 |
+
"normalized": false,
|
2047 |
+
"rstrip": false,
|
2048 |
+
"single_word": false,
|
2049 |
+
"special": true
|
2050 |
+
}
|
2051 |
+
},
|
2052 |
+
"bos_token": "<|begin_of_text|>",
|
2053 |
+
"chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now(\"%d %b %Y\") %}\n {%- else %}\n {%- set date_string = \"26 Jul 2024\" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n{%- else %}\n {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message #}\n{{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if tools is not none %}\n {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' }}\n {%- elif 'tool_calls' in message %}\n {%- if not message.tool_calls|length == 1 %}\n {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n {%- endif %}\n {%- set tool_call = message.tool_calls[0].function %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n {{- '\"parameters\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- \"}\" }}\n {{- \"<|eot_id|>\" }}\n {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- \"<|eot_id|>\" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
|
2054 |
+
"clean_up_tokenization_spaces": true,
|
2055 |
+
"eos_token": "<|eot_id|>",
|
2056 |
+
"model_input_names": [
|
2057 |
+
"input_ids",
|
2058 |
+
"attention_mask"
|
2059 |
+
],
|
2060 |
+
"model_max_length": 131072,
|
2061 |
+
"tokenizer_class": "PreTrainedTokenizerFast"
|
2062 |
+
}
|
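
The `chat_template` entry above is the Jinja template that turns a list of chat messages into the LLaMA 3.x prompt format (`<|begin_of_text|>`, `<|start_header_id|>...<|end_header_id|>` headers, and `<|eot_id|>` terminators), and the `added_tokens_decoder` block registers the reserved special tokens (ids 128067–128255). A minimal sketch of how this config is consumed, assuming the Hugging Face `transformers` library is installed and the tokenizer files from this repo (`tokenizer.json`, `tokenizer_config.json`) sit in the current directory; the `"."` path is a placeholder for wherever you downloaded them:

```python
# Minimal sketch: load the tokenizer defined by the files above and render a
# prompt with its chat template. Assumes `transformers` is installed and the
# repo's tokenizer files are in the current directory ("." is a placeholder).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the Apple Neural Engine?"},
]

# apply_chat_template runs the Jinja template from tokenizer_config.json:
# it emits <|begin_of_text|>, a system header with the date line, each message
# wrapped in <|start_header_id|>...<|end_header_id|> ... <|eot_id|>, and, with
# add_generation_prompt=True, a trailing assistant header for the model to
# continue from.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)

# The reserved ids registered in added_tokens_decoder decode to their
# placeholder strings, e.g. id 128067 -> "<|reserved_special_token_59|>".
print(tokenizer.convert_ids_to_tokens(128067))
```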