Prince-1 committed
Commit 8eca01d · verified · 1 Parent(s): 840c8e2

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ assets/cline_config.png filter=lfs diff=lfs merge=lfs -text
+ assets/swe_benchmark.png filter=lfs diff=lfs merge=lfs -text
+ assets/mistral_common_coverage/navigate.png filter=lfs diff=lfs merge=lfs -text
+ assets/mistral_common_coverage/prompt.png filter=lfs diff=lfs merge=lfs -text
+ assets/mistral_common_coverage/visualization.png filter=lfs diff=lfs merge=lfs -text
+ assets/mistral_common_coverage/dependencies.png filter=lfs diff=lfs merge=lfs -text
+ assets/space_invaders_pong/task[[:space:]]completed.png filter=lfs diff=lfs merge=lfs -text
+ assets/space_invaders_pong/prompt.png filter=lfs diff=lfs merge=lfs -text
+ assets/space_invaders_pong/base_structure.png filter=lfs diff=lfs merge=lfs -text
+ model.onnx.data filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,520 @@
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
tags:
- onnx
- onnxruntime-genai
- onnxruntime
library_name: onnxruntime-genai
base_model_relation: quantized
inference: false
base_model:
- mistralai/Devstral-Small-2507
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
---

# Devstral Small 1.1

Devstral is an agentic LLM for software engineering tasks, built through a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, edit multiple files, and power software engineering agents. The model achieves remarkable performance on SWE-Bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).

It is fine-tuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503) and therefore has a long context window of up to 128k tokens. Devstral is a text-only coding agent: the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral-2507).

**Updates compared to [`Devstral Small 1.0`](https://huggingface.co/mistralai/Devstral-Small-2505):**
- Improved performance; please refer to the [benchmark results](#benchmark-results).
- `Devstral Small 1.1` is still great when paired with OpenHands. This new version also generalizes better to other prompts and coding environments.
- Supports [Mistral's function calling format](https://mistralai.github.io/mistral-common/usage/tools/).


## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size (see the token-counting sketch below).

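As a quick illustration of the last two points, the following sketch counts how many tokens of the 128k context window a chat prompt consumes. It reuses the `mistral-common` tokenizer API that appears later in this card; the prompt text is only an example.

```python
# Sketch: count Tekken tokens for a chat prompt (assumes `pip install mistral-common`).
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

tokenizer = MistralTokenizer.from_hf_hub("mistralai/Devstral-Small-2507")
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Refactor utils.py to use pathlib.")])
)
print(f"{len(tokenized.tokens)} of 131072 context tokens used")
```
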
## Benchmark Results

### SWE-Bench

Devstral Small 1.1 achieves a score of **53.6%** on SWE-Bench Verified, outperforming Devstral Small 1.0 by +6.8% and the second-best state-of-the-art model by +11.4%.

| Model | Agentic Scaffold | SWE-Bench Verified (%) |
|--------------------|--------------------|------------------------|
| Devstral Small 1.1 | OpenHands Scaffold | **53.6** |
| Devstral Small 1.0 | OpenHands Scaffold | *46.8* |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
| Skywork SWE | OpenHands Scaffold | 38.0 |
| DeepSWE | R2E-Gym Scaffold | 42.2 |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 235B-A22B.

![SWE Benchmark](assets/swe_benchmark.png)

## Usage

We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running it locally.

### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.

Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>

mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2507","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.48
```
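
Optionally, sanity-check your key and the model name before launching the container. A minimal sketch against Mistral's chat completions endpoint (it reads `MISTRAL_API_KEY` from the environment; the model name matches the `llm_model` entry in the settings above):

```python
# Sketch: verify the API key and model name used in settings.json above.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "devstral-small-2507",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```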

### Local inference

The model can also be deployed with the following libraries:
- [`vllm` (recommended)](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lm-studio)
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llamacpp)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)

#### vLLM (recommended)

<details>
<summary>Expand</summary>

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.

**_Installation_**

Make sure you install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1):

```
pip install vllm --upgrade
```

Also make sure you have [`mistral_common >= 1.7.0`](https://github.com/mistralai/mistral-common/releases/tag/v1.7.0) installed:

```
pip install mistral-common --upgrade
```

To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

**_Launch server_**

We recommend that you use Devstral in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

2. To query the server, you can use a simple Python snippet:

```py
import requests
import json
from huggingface_hub import hf_hub_download


url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2507"

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "<your-command>",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}

# Devstral Small 1.1 supports tool calling. If you want to use tools, follow this:
# tools = [  # Define tools for vLLM
#     {
#         "type": "function",
#         "function": {
#             "name": "git_clone",
#             "description": "Clone a git repository",
#             "parameters": {
#                 "type": "object",
#                 "properties": {
#                     "url": {
#                         "type": "string",
#                         "description": "The url of the git repository",
#                     },
#                 },
#                 "required": ["url"],
#             },
#         },
#     }
# ]
# data = {"model": model, "messages": messages, "temperature": 0.15, "tools": tools}  # Pass tools to payload.

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
</details>


#### Mistral-inference

<details>
<summary>Expand</summary>

We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.

**_Installation_**

Make sure to have `mistral_inference >= 1.6.0` installed.

```bash
pip install mistral_inference --upgrade
```

**_Download_**

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2507", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

**_Chat_**

You can run the model using the following command:

```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```

You can then prompt it with anything you'd like.

</details>


#### Transformers

<details>
<summary>Expand</summary>

To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.7.0` so you can use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

```python
import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

model_id = "mistralai/Devstral-Small-2507"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")


tokenizer = MistralTokenizer.from_hf_hub(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```

</details>


#### LM Studio

<details>
<summary>Expand</summary>

Download the weights from either:
- the LM Studio GGUF repository (recommended): https://huggingface.co/lmstudio-community/Devstral-Small-2507-GGUF
- our GGUF repository: https://huggingface.co/mistralai/Devstral-Small-2507_gguf

```
pip install -U "huggingface_hub[cli]"
# use mistralai/Devstral-Small-2507_gguf for our repository
huggingface-cli download \
    "lmstudio-community/Devstral-Small-2507-GGUF" \
    --include "Devstral-Small-2507-Q4_K_M.gguf" \
    --local-dir "Devstral-Small-2507_gguf/"
```

You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it.
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`.
* In a bash terminal, run `lms import Devstral-Small-2507-Q4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `Devstral-Small-2507_gguf`).
* Open the LM Studio application and click the terminal icon to open the developer tab. Click "Select a model to load" and select `Devstral Small 2507`. Toggle the status button to start the model, and in the settings toggle "Serve on Local Network" on.
* On the right tab, you will see an API identifier, which should be `devstral-small-2507`, and an API address under API Usage. Keep note of this address; it is used for OpenHands or Cline (or the quick check below).

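Once the model is served, you can sanity-check the endpoint from Python. A sketch assuming LM Studio's default OpenAI-compatible server port (1234); substitute the API address you noted above:

```python
# Sketch: query LM Studio's OpenAI-compatible endpoint (1234 is LM Studio's default port).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "devstral-small-2507",  # the API identifier shown in LM Studio
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "temperature": 0.15,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```
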
</details>


#### llama.cpp

<details>
<summary>Expand</summary>

Download the weights from huggingface:

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
    "mistralai/Devstral-Small-2507_gguf" \
    --include "Devstral-Small-2507-Q4_K_M.gguf" \
    --local-dir "mistralai/Devstral-Small-2507_gguf/"
```

Then run Devstral using the llama.cpp server.

```bash
./llama-server -m mistralai/Devstral-Small-2507_gguf/Devstral-Small-2507-Q4_K_M.gguf -c 0  # -c configures the context size; 0 means the model's default, here 128k.
```

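`llama-server` also exposes an OpenAI-compatible API (port 8080 by default), so you can query it the same way as the vLLM server above; a minimal sketch:

```python
# Sketch: query llama-server's OpenAI-compatible endpoint (8080 is the default port).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write a one-line docstring for a merge sort."}],
        "temperature": 0.15,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```
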
</details>


### OpenHands (recommended)

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral Small 1.1`.

For this tutorial, we spun up a vLLM server with the following command:
```bash
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://<your-server-url>:8000/v1`

#### Launch OpenHands

You can follow the installation instructions for OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).

The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.48
```

Then, you can access the OpenHands UI at `http://localhost:3000`.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill in the following fields (the sketch after this list shows how to double-check the Base URL and model name):
- **Custom Model**: `openai/mistralai/Devstral-Small-2507`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used when launching the server, if any)
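
You can verify that the endpoint is reachable and list the model names it serves; note that the `openai/` prefix in the Custom Model field only tells OpenHands to use its OpenAI-compatible client and is not part of the served model name. A minimal sketch:

```python
# Sketch: list the models served by the OpenAI-compatible endpoint you launched.
import requests

base_url = "http://<your-server-url>:8000/v1"  # same Base URL as in the form
resp = requests.get(f"{base_url}/models", headers={"Authorization": "Bearer token"})
print([m["id"] for m in resp.json()["data"]])  # e.g. ['mistralai/Devstral-Small-2507']
```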

<details>
<summary>See settings</summary>

![OpenHands Settings](assets/open_hands_config.png)

</details>

### Cline

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use Cline to interact with `Devstral Small 1.1`.

For this tutorial, we spun up a vLLM server with the following command:
```bash
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://<your-server-url>:8000/v1`

#### Launch Cline

You can follow the installation instructions for Cline [here](https://docs.cline.bot/getting-started/installing-cline). Then you can configure the server address in the settings.

<details>
<summary>See settings</summary>

![Cline Settings](assets/cline_config.png)

</details>

### Examples

#### OpenHands: Understanding Test Coverage of Mistral Common

We can start the OpenHands scaffold and link it to a repo to analyze test coverage and identify badly covered files.
Here we start with our public `mistral-common` repo.


After the repo is mounted in the workspace, we give the following instruction:
```
Check the test coverage of the repo and then create a visualization of test coverage. Try plotting a few different types of graphs and save them to a png.
```
The agent will first browse the code base to check test configuration and structure.

![mistral common coverage - prompt](assets/mistral_common_coverage/prompt.png)

Then it sets up the testing dependencies and launches the coverage test:

![mistral common coverage - dependencies](assets/mistral_common_coverage/dependencies.png)

Finally, the agent writes the necessary code to visualize the coverage, export the results, and save the plots to a png.
![mistral common coverage - visualization](assets/mistral_common_coverage/visualization.png)

At the end of the run, the following plots are produced:
![mistral common coverage - coverage distribution](assets/mistral_common_coverage/coverage_distribution.png)
![mistral common coverage - coverage pie](assets/mistral_common_coverage/coverage_pie.png)
![mistral common coverage - coverage summary](assets/mistral_common_coverage/coverage_summary.png)

and the model is able to explain the results:
![mistral common coverage - navigate](assets/mistral_common_coverage/navigate.png)

#### Cline: Build a Video Game

First initialize Cline inside VSCode and connect it to the server you launched earlier.

We give the following instruction to build the video game:
```
Create a video game that mixes Space Invaders and Pong for the web.

Follow these instructions:
- There are two players, one at the top and one at the bottom. Each player controls a bar to bounce a ball.
- The first player plays with the keys "a" and "d", the second with the right and left arrows.
- The invaders are located at the center of the screen. They should look like the ones in Space Invaders. Their goal is to shoot at the players randomly. They cannot be destroyed by the ball, which passes through them. This means that invaders never die.
- The players' goal is to avoid the shots from the space invaders and to send the ball past the other player's edge.
- The ball bounces off the left and right edges.
- Once the ball touches one of the players' edges, that player loses.
- Once a player is hit 3 or more times by a shot, that player loses.
- The winning player is the last one standing.
- Display on the UI the number of times each player has touched the ball and their remaining health.
```

![space invaders pong - prompt](assets/space_invaders_pong/prompt.png)

The agent will first create the game:

![space invaders pong - structure](assets/space_invaders_pong/base_structure.png)

Then it will explain how to launch the game:

![space invaders pong - task completed](assets/space_invaders_pong/task%20completed.png)

Finally, the game is ready to be played:

![space invaders pong - game](assets/space_invaders_pong/game.png)

Don't hesitate to iterate or give more information to Devstral to improve the game!
assets/.DS_Store ADDED
Binary file (8.2 kB).
 
assets/cline_config.png ADDED

Git LFS Details

  • SHA256: 663138f00523ef3cc15cb9fd1cd6369d4625245df65028f2fcc372c85ca7d29b
  • Pointer size: 131 Bytes
  • Size of remote file: 137 kB
assets/mistral_common_coverage/.DS_Store ADDED
Binary file (6.15 kB).
 
assets/mistral_common_coverage/coverage_distribution.png ADDED
assets/mistral_common_coverage/coverage_pie.png ADDED
assets/mistral_common_coverage/coverage_summary.png ADDED
assets/mistral_common_coverage/dependencies.png ADDED

Git LFS Details

  • SHA256: 278825d73c9f5d674ca2875e386fced3902dc93f9711f317c785c02589c2ff35
  • Pointer size: 131 Bytes
  • Size of remote file: 255 kB
assets/mistral_common_coverage/navigate.png ADDED

Git LFS Details

  • SHA256: c2638a6e966005e92755f30c9dfb5eccf030825211cc55b65dc844e92a7481d6
  • Pointer size: 131 Bytes
  • Size of remote file: 557 kB
assets/mistral_common_coverage/prompt.png ADDED

Git LFS Details

  • SHA256: 3f8031ebe12cc351386bbbd285642275d4c2ef0ea6132381cb603ec86ddbbe4e
  • Pointer size: 131 Bytes
  • Size of remote file: 236 kB
assets/mistral_common_coverage/visualization.png ADDED

Git LFS Details

  • SHA256: 4d75d000ca8a4456ca6c788358ee6977bcce703c8f37a7c81796983d3d539870
  • Pointer size: 131 Bytes
  • Size of remote file: 241 kB
assets/open_hands_config.png ADDED
assets/space_invaders_pong/.DS_Store ADDED
Binary file (6.15 kB).
 
assets/space_invaders_pong/base_structure.png ADDED

Git LFS Details

  • SHA256: 468519868d6006c469fb8517680b554f6ef2778e2fab4c9b2ada4a9cd2254983
  • Pointer size: 131 Bytes
  • Size of remote file: 998 kB
assets/space_invaders_pong/game.png ADDED
assets/space_invaders_pong/prompt.png ADDED

Git LFS Details

  • SHA256: 978ddbf5324d6a73f6a99d1301252930cd983ab9dfbe374a3465285d7f196143
  • Pointer size: 131 Bytes
  • Size of remote file: 949 kB
assets/space_invaders_pong/task completed.png ADDED

Git LFS Details

  • SHA256: cfd6872e857a82535daee755ef94f1d6ff7476fb9ec2a4d46fcaabbc51b529e7
  • Pointer size: 131 Bytes
  • Size of remote file: 637 kB
assets/swe_benchmark.png ADDED

Git LFS Details

  • SHA256: 565a98c83187dbd4eed29e7aa79146336cafbe1c5c26250515769b364078435f
  • Pointer size: 131 Bytes
  • Size of remote file: 151 kB
genai_config.json ADDED
@@ -0,0 +1,50 @@
{
  "model": {
    "bos_token_id": 1,
    "context_length": 131072,
    "decoder": {
      "session_options": {
        "log_id": "onnxruntime-genai",
        "provider_options": []
      },
      "filename": "model.onnx",
      "head_size": 128,
      "hidden_size": 5120,
      "inputs": {
        "input_ids": "input_ids",
        "attention_mask": "attention_mask",
        "position_ids": "position_ids",
        "past_key_names": "past_key_values.%d.key",
        "past_value_names": "past_key_values.%d.value"
      },
      "outputs": {
        "logits": "logits",
        "present_key_names": "present.%d.key",
        "present_value_names": "present.%d.value"
      },
      "num_attention_heads": 32,
      "num_hidden_layers": 40,
      "num_key_value_heads": 8
    },
    "eos_token_id": 2,
    "pad_token_id": 11,
    "type": "mistral",
    "vocab_size": 131072
  },
  "search": {
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": true,
    "length_penalty": 1.0,
    "max_length": 131072,
    "min_length": 0,
    "no_repeat_ngram_size": 0,
    "num_beams": 1,
    "num_return_sequences": 1,
    "past_present_share_buffer": false,
    "repetition_penalty": 1.0,
    "temperature": 1.0,
    "top_k": 1,
    "top_p": 1.0
  }
}
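
This `genai_config.json` is what ONNX Runtime GenAI reads to load the decoder and set default search options. A minimal generation sketch with the `onnxruntime-genai` Python package (the local folder path is an assumption; point it at your download of this repo, and note the exact API surface may shift slightly between versions):

```python
# Sketch: greedy streaming generation with onnxruntime-genai.
import onnxruntime_genai as og

model = og.Model("./Devstral-Small-2507-onnx")  # folder containing genai_config.json
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)  # overrides the 131072 default above

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Write a function that reverses a string."))

while not generator.is_done():
    generator.generate_next_token()
    # decode and print tokens as they are produced
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```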
model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b78f3947474ad4cb685b87da086bbd9ea5d0700ea824c7db84aacd8e6616d32
size 929586
model.onnx.data ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2005e16eab23cf9fce21d7232144c095ab2e3fdec990829d2ffec7e72be72018
size 47178360832
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|begin▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|end▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20ddafc659ba90242154b55275402edeca0715e5dbb30f56815a4ce081f4893
size 11422778
tokenizer_config.json ADDED
@@ -0,0 +1,194 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": null,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|end▁of▁sentence|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|User|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151645": {
      "content": "<|Assistant|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151646": {
      "content": "<|begin▁of▁sentence|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|EOT|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151648": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151649": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "bos_token": "<|begin▁of▁sentence|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|end▁of▁sentence|>",
  "extra_special_tokens": {},
  "legacy": true,
  "model_max_length": 16384,
  "pad_token": "<|end▁of▁sentence|>",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizerFast",
  "unk_token": null,
  "use_default_system_prompt": false
}