MathFrenchToast committed on
Commit 9c4a7d5 · verified · 1 Parent(s): bee59ac

doc: correction of various typos


Missing or extra words and typos corrected.
I have not done a comprehensive check, only the obvious ones.

Files changed (1)
  1. bonus-unit1/bonus-unit1.ipynb +4 -4
bonus-unit1/bonus-unit1.ipynb CHANGED
@@ -650,7 +650,7 @@
  "source": [
  "## Step 9: Let's configure the LoRA\n",
  "\n",
- "This is we are going to define the parameter of our adapter. Those a the most important parameters in LoRA as they define the size and importance of the adapters we are training."
+ "We are going to define the parameters of our adapter. Those are the most important parameters in LoRA as they define the size and importance of the adapters we are training."
  ]
  },
  {
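For readers wondering what "size and importance" refers to in this cell, here is a minimal sketch of a LoRA configuration using peft's LoraConfig. The values below are illustrative assumptions, not the notebook's actual settings:

```python
from peft import LoraConfig

# Illustrative values only; the notebook's exact settings are not shown in this hunk.
peft_config = LoraConfig(
    r=16,                         # rank: the "size" of the low-rank adapter matrices
    lora_alpha=64,                # scaling factor: the "importance" given to the adapter updates
    lora_dropout=0.05,            # dropout applied inside the LoRA layers
    target_modules="all-linear",  # which modules of the base model receive adapters
    task_type="CAUSAL_LM",
)
```

Roughly, `r` controls the adapter's size (the rank of the low-rank update) and `lora_alpha` its relative importance (the scaling applied to that update).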
@@ -702,7 +702,7 @@
  },
  "outputs": [],
  "source": [
- "username=\"Jofthomas\"# REPLCAE with your Hugging Face username\n",
+ "username=\"Jofthomas\"# REPLACE with your Hugging Face username\n",
  "output_dir = \"gemma-2-2B-it-thinking-function_calling-V0\" # The directory where the trained model checkpoints, logs, and other artifacts will be saved. It will also be the default name of the model when pushed to the hub if not redefined later.\n",
  "per_device_train_batch_size = 1\n",
  "per_device_eval_batch_size = 1\n",
@@ -1025,7 +1025,7 @@
  "source": [
  "## Step 11: Let's push the Model and the Tokenizer to the Hub\n",
  "\n",
- "Let's push our model and out tokenizer to the Hub ! The model will be pushed under your username + the output_dir that we specified earlier."
+ "Let's push our model and our tokenizer to the Hub ! The model will be pushed under your username + the output_dir that we specified earlier."
  ]
  },
  {
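A minimal sketch of the push itself, assuming `model` and `tokenizer` are the objects produced earlier in the notebook and that you have a Hugging Face token with write access:

```python
from huggingface_hub import login

login()  # prompts for a Hugging Face token with write access

repo_id = f"{username}/{output_dir}"  # your username + the output_dir defined earlier
model.push_to_hub(repo_id)      # uploads the model weights and config
tokenizer.push_to_hub(repo_id)  # uploads the tokenizer files to the same repo
```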
@@ -1562,7 +1562,7 @@
  "\n",
  "In that case, we will take the start of one of the samples from the test set and hope that it will generate the expected output.\n",
  "\n",
- "Since we want to test the function-calling capacities of our newly fine-tuned model, the input will be a user message with the available tools, a\n",
+ "Since we want to test the function-calling capacities of our newly fine-tuned model, the input will be a user message with the available tools.\n",
  "\n",
  "\n",
  "### Disclaimer ⚠️\n",
 