dakomura committed on
Commit 61ac44a · 1 Parent(s): d3f8bd4
Files changed (3):
  1. README.md +60 -23
  2. extract_train.py +25 -5
  3. setup.ipynb +55 -54
README.md CHANGED
@@ -184,11 +184,21 @@ The dataset has been modified and organized for benchmarking purposes:
 
 We provide example implementations using the following state-of-the-art foundation models:
 - [CONCH](https://huggingface.co/MahmoodLab/CONCH)
+- [CONCHv1.5](https://huggingface.co/MahmoodLab/conchv1_5)
 - [GigaPath](https://huggingface.co/prov-gigapath/prov-gigapath)
 - [UNI](https://huggingface.co/MahmoodLab/UNI)
 - [UNI2](https://huggingface.co/MahmoodLab/UNI2-h)
-- [H-Optimus](https://huggingface.co/bioptimus/H-optimus-0)
+- [H-Optimus-0](https://huggingface.co/bioptimus/H-optimus-0)
+- [H-Optimus-1](https://huggingface.co/bioptimus/H-optimus-1)
+- [Virchow](https://huggingface.co/paige-ai/Virchow)
 - [Virchow2](https://huggingface.co/paige-ai/Virchow2)
+- [Phikon](https://huggingface.co/owkin/phikon)
+- [Phikon-v2](https://huggingface.co/owkin/phikon-v2)
+- [Kaiko](https://github.com/kaiko-ai/towards_large_pathology_fms)
+- [Lunit](https://huggingface.co/1aurent/vit_small_patch8_224.lunit_dino)
+- [Hibou](https://huggingface.co/histai/hibou-L)
+- [CTransPath](https://github.com/Xiyue-Wang/TransPath)
+- ResNet
 
 See `licenses/references.txt` for model citations.
 
@@ -196,27 +206,51 @@ See `licenses/references.txt` for model citations.
 **Note:** The provided script is a simplified example of the training code. In practice, hyperparameter tuning and additional techniques were employed to achieve the following results.
 #### Internal Split Results
 
-| Model | Accuracy | Balanced Accuracy |
-|-------|----------|-------------------|
-| UNI2 | 0.8498 | 0.8500 |
-| H-Optimus | 0.8498 | 0.8398 |
-| Virchow2 | 0.8456 | 0.8355 |
-| UNI | 0.8142 | 0.7923 |
-| GigaPath | 0.8162 | 0.7877 |
-| CONCH | 0.7670 | 0.7301 |
-
+| Model | Accuracy (LogReg) | Balanced Accuracy (LogReg) | Accuracy (KNN) | Balanced Accuracy (KNN) | Accuracy (Prototype) | Balanced Accuracy (Prototype) |
+|-------|-------------------|----------------------------|----------------|-------------------------|----------------------|-------------------------------|
+| Kaiko (l14)* | 0.8608 | **0.8662** | 0.8116 | 0.7636 | 0.7708 | 0.7434 |
+| H-Optimus-1 | **0.8616** | 0.8557 | **0.8164** | **0.7671** | **0.7730** | **0.7579** |
+| UNI2 | 0.8564 | 0.8501 | 0.7962 | 0.7434 | 0.7546 | 0.7476 |
+| H-Optimus-0 | 0.8498 | 0.8399 | 0.7930 | 0.7307 | 0.7492 | 0.7321 |
+| Virchow2 | 0.8455 | 0.8351 | 0.7686 | 0.6989 | 0.6671 | 0.6500 |
+| Phikon-v2 | 0.8289 | 0.8212 | 0.7467 | 0.6777 | 0.6982 | 0.6869 |
+| Phikon | 0.8342 | 0.8111 | 0.7207 | 0.6255 | 0.6625 | 0.6385 |
+| Virchow | 0.8223 | 0.8008 | 0.7244 | 0.6262 | 0.6087 | 0.5759 |
+| Hibou | 0.8189 | 0.7985 | 0.7433 | 0.6618 | 0.6291 | 0.6034 |
+| UNI | 0.8144 | 0.7923 | 0.7634 | 0.6897 | 0.7109 | 0.6946 |
+| GigaPath | 0.8161 | 0.7878 | 0.7444 | 0.6676 | 0.6967 | 0.6675 |
+| Lunit* | 0.7919 | 0.7535 | 0.7427 | 0.6539 | 0.6611 | 0.6427 |
+| CONCHv1.5 | 0.7709 | 0.7306 | 0.7162 | 0.6313 | 0.6614 | 0.6383 |
+| CONCH | 0.7672 | 0.7295 | 0.7028 | 0.6139 | 0.6150 | 0.6097 |
+| CTransPath | 0.7255 | 0.6748 | 0.6200 | 0.5057 | 0.5158 | 0.4857 |
+| ResNet | 0.6395 | 0.5581 | 0.5114 | 0.3816 | 0.3154 | 0.2973 |
+
+
+\* Training data contains the TCGA dataset.
 
 #### External Split Results
 
-| Model | Accuracy | Balanced Accuracy |
-|-------|----------|-------------------|
-| UNI2 | 0.7648 | 0.7262 |
-| H-Optimus | 0.7845 | 0.7213 |
-| Virchow2 | 0.7745 | 0.6922 |
-| UNI | 0.7373 | 0.6581 |
-| GigaPath | 0.7246 | 0.6377 |
-| CONCH | 0.6991 | 0.5974 |
-
+| Model | Accuracy (LogReg) | Balanced Accuracy (LogReg) | Accuracy (KNN) | Balanced Accuracy (KNN) | Accuracy (Prototype) | Balanced Accuracy (Prototype) |
+|-------|-------------------|----------------------------|----------------|-------------------------|----------------------|-------------------------------|
+| H-Optimus-1 | **0.8080** | **0.7450** | **0.7700** | **0.6955** | **0.7572** | **0.7363** |
+| Kaiko (b8)* | 0.7920 | 0.7370 | 0.7181 | 0.6597 | 0.7509 | 0.7134 |
+| UNI2 | 0.7648 | 0.7262 | 0.7210 | 0.6498 | 0.7018 | 0.6839 |
+| H-Optimus-0 | 0.7845 | 0.7213 | 0.7209 | 0.6579 | 0.7106 | 0.6842 |
+| Virchow2 | 0.7744 | 0.6919 | 0.7221 | 0.6544 | 0.6482 | 0.6331 |
+| UNI | 0.7373 | 0.6581 | 0.6668 | 0.5887 | 0.6612 | 0.6232 |
+| Phikon-v2 | 0.7185 | 0.6535 | 0.5857 | 0.5040 | 0.6197 | 0.5752 |
+| Virchow | 0.7274 | 0.6490 | 0.6464 | 0.5541 | 0.5847 | 0.5636 |
+| GigaPath | 0.7246 | 0.6379 | 0.6426 | 0.5495 | 0.6361 | 0.5960 |
+| Phikon | 0.7311 | 0.6351 | 0.5511 | 0.4586 | 0.5474 | 0.5104 |
+| Hibou | 0.6696 | 0.6161 | 0.5155 | 0.4436 | 0.4911 | 0.4765 |
+| CONCHv1.5 | 0.7080 | 0.6098 | 0.6762 | 0.5846 | 0.6415 | 0.6100 |
+| Lunit* | 0.6851 | 0.6044 | 0.6021 | 0.5098 | 0.5862 | 0.5503 |
+| CONCH | 0.6991 | 0.5975 | 0.6626 | 0.5735 | 0.5954 | 0.5905 |
+| CTransPath | 0.6160 | 0.5215 | 0.5229 | 0.4205 | 0.4498 | 0.4128 |
+| ResNet | 0.4967 | 0.3929 | 0.3960 | 0.2871 | 0.2657 | 0.2392 |
+
+
+\* Training data contains the TCGA dataset.
 
 ### Getting Started
 
@@ -292,18 +326,21 @@ dataset = wds.WebDataset(patterns[mode], shardshuffle=False) \
 model_name: "h_optimus" # Model selection: "h_optimus", etc.
 split_type: "internal" # Split type: "internal" or "external"
 device: "cuda" # Computation device: "cuda" or "cpu"
+eval_name: "logreg" # Evaluation method: "logreg", "knn", or "proto"
 feature_exist: True # Skip feature extraction if features already exist
 max_iter: 1000 # Maximum iterations for training
-cost: 0.0001 # Cost parameter for linear classifier
+cost: 0.0001 # Cost parameter for logistic regression
 ```
 
 Configuration parameters:
 - `model_name`: Foundation model to use for feature extraction
 - `split_type`: Dataset split strategy
+- `eval_name`: Evaluation method (logreg, knn, or proto)
 - `device`: Computation device (GPU/CPU)
 - `feature_exist`: Skip feature extraction if True and features are already available
-- `max_iter`: Maximum training iterations for the linear classifier
-- `cost`: Regularization parameter for the linear classifier
+- `max_iter`: Maximum training iterations for logistic regression
+- `cost`: Regularization parameter for logistic regression
+- `k`: Number of nearest neighbors for KNN
 
 2. Define models and transforms in `extract_train.py`:
 ```python
@@ -320,7 +357,7 @@ python extract_train.py
 This will:
 - Extract features using the specified foundation model
 - Save features to H5 files
-- Perform linear probing
+- Perform linear probing, KNN, and prototype classification
 - Output accuracy and balanced accuracy metrics
 
 ## License
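The README changes above route three evaluation modes through a single `eval_name` switch. For orientation, here is a minimal sketch of that dispatch using scikit-learn only; the random features and labels are placeholders (not the repository's extracted features), and `NearestCentroid` is used as a stand-in for the prototype method, which it matches when prototypes are plain class means under Euclidean distance.

```python
# Minimal sketch of the three eval_name modes on placeholder data.
# The random features/labels below are NOT the repository's features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 64))   # stand-in for extracted patch features
train_labels = rng.integers(0, 5, 200)
test_feats = rng.normal(size=(50, 64))
test_labels = rng.integers(0, 5, 50)

configs = {"eval_name": "proto", "cost": 0.0001, "max_iter": 1000, "k": 10}

if configs["eval_name"] == "logreg":
    model = LogisticRegression(C=configs["cost"], max_iter=configs["max_iter"])
elif configs["eval_name"] == "knn":
    model = KNeighborsClassifier(n_neighbors=configs["k"])
else:  # "proto": class-mean prototypes + Euclidean distance == nearest centroid
    model = NearestCentroid(metric="euclidean")

model.fit(train_feats, train_labels)
pred = model.predict(test_feats)
print(f"Accuracy = {accuracy_score(test_labels, pred):.3f}, "
      f"Balanced Accuracy = {balanced_accuracy_score(test_labels, pred):.3f}")
```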
extract_train.py CHANGED
@@ -34,6 +34,7 @@ import h5py
 import numpy as np
 from tqdm import tqdm
 from sklearn.linear_model import LogisticRegression
+from sklearn.neighbors import KNeighborsClassifier
 from sklearn.metrics import accuracy_score, balanced_accuracy_score
 from huggingface_hub import login
 import braceexpand
@@ -53,8 +54,10 @@ model_dic = {
     # if you want to use another model, please check the path
 }
 configs["model_path"] = model_dic[configs["model_name"]]
+configs["eval_name"] = configs.get("eval_name", "logreg")  # ["logreg", "knn", "proto"]
 configs["max_iter"] = configs.get("max_iter", 1000)
 configs["cost"] = configs.get("cost", 0.0001)
+configs["k"] = configs.get("k", 10)
 
 # load meta data
 metadata_path = os.path.join(work_dir, "train_val_test_split.csv")
@@ -264,11 +267,28 @@ def get_feats_labels(hdf5_file_path, mode="train", batch_size=32):
 def train_eval(train_feats, train_labels, test_feats, test_labels):
     global configs
 
-    # define model
-    model = LogisticRegression(C=configs["cost"], max_iter=configs["max_iter"])
-    model.fit(train_feats, train_labels)
-
-    pred = model.predict(test_feats)
+    # define model, train, and evaluate
+    if configs["eval_name"] == "logreg":
+        model = LogisticRegression(C=configs["cost"], max_iter=configs["max_iter"])
+        model.fit(train_feats, train_labels)
+        pred = model.predict(test_feats)
+
+    if configs["eval_name"] == "knn":
+        model = KNeighborsClassifier(n_neighbors=configs["k"])
+        model.fit(train_feats.numpy(), train_labels.numpy())
+        pred = model.predict(test_feats.numpy())
+        test_labels = test_labels.numpy()
+
+    if configs["eval_name"] == "proto":
+        unique_labels = sorted(np.unique(train_labels.numpy()))
+        feats_proto = torch.vstack([
+            train_feats[train_labels == c].mean(dim=0) for c in unique_labels
+        ])
+        labels_proto = torch.tensor(unique_labels)
+        pw_dist = (test_feats[:, None] - feats_proto[None, :]).norm(dim=-1, p=2)
+        pred = labels_proto[pw_dist.argmin(dim=1)]
+
+    # result
     acc = accuracy_score(test_labels, pred)
     balanced_acc = balanced_accuracy_score(test_labels, pred)
     print(f"Accuracy = {acc:.3f}, Balanced Accuracy = {balanced_acc:.3f}")
setup.ipynb CHANGED
@@ -31,8 +31,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
-   "id": "FccnVVy0GAVR",
+   "source": [
+    "# check python version (this example is google colab environment)\n",
+    "!python --version"
+   ],
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -40,33 +42,31 @@
     "id": "FccnVVy0GAVR",
     "outputId": "e90aefeb-6cd3-4875-cc53-e7241c84589a"
    },
+   "id": "FccnVVy0GAVR",
+   "execution_count": 1,
    "outputs": [
     {
-     "name": "stdout",
      "output_type": "stream",
+     "name": "stdout",
      "text": [
       "Python 3.11.11\n"
      ]
     }
-   ],
-   "source": [
-    "# check python version (this example is google colab environment)\n",
-    "!python --version"
    ]
  },
  {
   "cell_type": "code",
-   "execution_count": 2,
-   "id": "48KZGOUK74nm",
-   "metadata": {
-    "id": "48KZGOUK74nm"
-   },
-   "outputs": [],
   "source": [
    "# set your huggingface token\n",
    "import os\n",
    "token = \"your huggingface token\""
-   ]
+  ],
+  "metadata": {
+   "id": "48KZGOUK74nm"
+  },
+  "id": "48KZGOUK74nm",
+  "execution_count": 2,
+  "outputs": []
  },
  {
   "cell_type": "code",
@@ -81,8 +81,8 @@
   },
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "Cloning into 'demo'...\n",
      "remote: Enumerating objects: 199, done.\u001b[K\n",
@@ -96,7 +96,8 @@
    }
   ],
   "source": [
-   "!git clone https://oauth2:{token}@huggingface.co/datasets/dakomura/tcga-ut/"
+   "# repository url needs to be changed\n",
+   "!git clone https://oauth2:{token}@huggingface.co/datasets/kooo-sh/demo/"
   ]
  },
  {
@@ -113,8 +114,10 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 4,
-   "id": "RZVfsHws-djt",
+  "source": [
+   "# move directory\n",
+   "%cd repository_directory"
+  ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
@@ -122,24 +125,23 @@
    "id": "RZVfsHws-djt",
    "outputId": "83b081d6-9249-4b24-e451-c0347e752d03"
   },
+  "id": "RZVfsHws-djt",
+  "execution_count": 4,
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "/content/drive/MyDrive/demo\n"
     ]
    }
-   ],
-   "source": [
-    "# move directory\n",
-    "%cd repository_directory"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 5,
-   "id": "umN5ZMRHHA7z",
+  "source": [
+   "%ls"
+  ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
@@ -147,18 +149,17 @@
    "id": "umN5ZMRHHA7z",
    "outputId": "199d9667-1044-4c1a-e004-bf4f5b14a56d"
   },
+  "id": "umN5ZMRHHA7z",
+  "execution_count": 5,
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "config.yaml extract_train.py \u001b[0m\u001b[01;34mlicenses\u001b[0m/ requirements.txt train_val_test_split.csv\n",
      "\u001b[01;34mdata\u001b[0m/ \u001b[01;34mfeatures\u001b[0m/ README.md setup.ipynb\n"
     ]
    }
-   ],
-   "source": [
-    "%ls"
   ]
  },
  {
@@ -166,16 +167,16 @@
   "execution_count": 6,
   "id": "56b69da0",
   "metadata": {
+   "id": "56b69da0",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
-   "id": "56b69da0",
    "outputId": "09cdfc42-751e-432c-c978-7ca0ef4379c4"
   },
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "Collecting braceexpand==0.1.7 (from -r requirements.txt (line 1))\n",
      "  Downloading braceexpand-0.1.7-py2.py3-none-any.whl.metadata (3.0 kB)\n",
@@ -281,8 +282,11 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 7,
-   "id": "kvflBfrrSyU4",
+  "source": [
+   "# if scikit-learn is not installed, run this command\n",
+   "# !pip install scikit-learn\n",
+   "# !pip install scipy six==1.16.0"
+  ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
@@ -291,10 +295,12 @@
    "id": "kvflBfrrSyU4",
    "outputId": "5b10426c-c3e3-4eff-cbcd-0fa1f6f96d46"
   },
+  "id": "kvflBfrrSyU4",
+  "execution_count": 7,
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.11/dist-packages (1.6.1)\n",
      "Requirement already satisfied: numpy>=1.19.5 in /usr/local/lib/python3.11/dist-packages (from scikit-learn) (1.26.4)\n",
@@ -315,24 +321,19 @@
     ]
    },
    {
+    "output_type": "display_data",
     "data": {
      "application/vnd.colab-display-data+json": {
-      "id": "9f6ede4ac0cd447db545799914707e3b",
       "pip_warning": {
       "packages": [
        "six"
       ]
-      }
+      },
+      "id": "9f6ede4ac0cd447db545799914707e3b"
     }
    },
-    "metadata": {},
-    "output_type": "display_data"
+    "metadata": {}
   }
-   ],
-   "source": [
-    "# if scikit-learn is not installed, run this command\n",
-    "# !pip install scikit-learn\n",
-    "# !pip install scipy six==1.16.0"
   ]
  },
  {
@@ -352,16 +353,16 @@
   "execution_count": 1,
   "id": "d3d70745",
   "metadata": {
+   "id": "d3d70745",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
-   "id": "d3d70745",
    "outputId": "2c674c7b-d821-43be-aad1-8172985e9439"
   },
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "Collecting spams-bin\n",
      "  Downloading spams_bin-2.6.10-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (754 bytes)\n",
@@ -656,16 +657,16 @@
   "execution_count": 2,
   "id": "5f3a7c3e",
   "metadata": {
+   "id": "5f3a7c3e",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
-   "id": "5f3a7c3e",
    "outputId": "b7639582-b37b-4e71-e459-8b56266c7ef4"
   },
   "outputs": [
    {
-    "name": "stdout",
     "output_type": "stream",
+    "name": "stdout",
     "text": [
      "{'model_name': 'h_optimus', 'split_type': 'internal', 'device': 'cuda', 'feature_exist': True, 'max_iter': 1000, 'cost': 0.0001, 'model_path': 'hf-hub:bioptimus/H-optimus-0'}\n",
      "Directory already exists: ./features\n",
@@ -691,11 +692,6 @@
   }
  ],
 "metadata": {
- "accelerator": "GPU",
- "colab": {
-  "gpuType": "T4",
-  "provenance": []
- },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
@@ -703,8 +699,13 @@
  "language_info": {
   "name": "python",
   "version": "3.x"
- }
+ },
+ "colab": {
+  "provenance": [],
+  "gpuType": "T4"
+ },
+ "accelerator": "GPU"
 },
"nbformat": 4,
"nbformat_minor": 5
-}
+}
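The notebook authenticates `git clone` by embedding the token in the URL. As an aside, the same download can be done with `huggingface_hub` (which `extract_train.py` already imports); a minimal sketch, where the repo id mirrors the placeholder in the notebook and, as its own comment says, needs to be changed:

```python
# Sketch: fetch the dataset repo with huggingface_hub instead of a token-in-URL git clone.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="kooo-sh/demo",          # placeholder from the notebook; change to the real repo
    repo_type="dataset",
    token="your huggingface token",  # same token the notebook stores in `token`
)
print(local_dir)                     # local path of the downloaded snapshot
```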