Update README.md
---
license: mit
---

### Vision Transformer (ViT) with LoRA for Spectrogram Regression

---

### Fine-Tuning Details

| Category | Specification |
|-----------------------|--------------------------------------------------------------------------------------|
| **Framework** | PyTorch |
| **Architecture** | Pre-trained Vision Transformer (ViT) |
| **Adaptation Method** | LoRA (Low-Rank Adaptation) |
| **Task** | Regression on time-frequency representations |
| **Target Variables** | 1. Chirp start time (ms)<br>2. Start frequency (kHz)<br>3. End frequency (kHz) |
| **Training Protocol** | • Automatic Mixed Precision (AMP)<br>• Early stopping<br>• Learning rate scheduling |
| **Output** | Quantitative predictions + optional natural language descriptions |
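
The actual training code lives in the PyTorch implementation linked under Resource Details below; the snippet here is only a minimal sketch of the recipe in the table, assuming a `transformers` ViT backbone, the `peft` library for the LoRA adapters, and a placeholder dataloader that yields spectrogram tensors together with `[start_time_ms, start_freq_khz, end_freq_khz]` targets.

```python
# Minimal sketch (not the repository's training script): LoRA-adapted ViT with a
# 3-output regression head, trained under Automatic Mixed Precision (AMP).
# The checkpoint name, LoRA hyperparameters, and dataloader are placeholders.
import torch
import torch.nn as nn
from transformers import ViTModel
from peft import LoraConfig, get_peft_model


class ChirpRegressor(nn.Module):
    def __init__(self, backbone: str = "google/vit-base-patch16-224-in21k"):
        super().__init__()
        vit = ViTModel.from_pretrained(backbone)
        # Inject low-rank adapters into the attention projections; the pre-trained
        # backbone weights stay frozen and only the adapters and the head train.
        cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                         target_modules=["query", "value"])
        self.backbone = get_peft_model(vit, cfg)
        self.head = nn.Linear(vit.config.hidden_size, 3)  # start time, f_start, f_end

    def forward(self, pixel_values):
        cls = self.backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
        return self.head(cls)


def train_one_epoch(model, loader, optimizer, scaler, device="cuda"):
    model.train()
    for pixel_values, targets in loader:            # targets: (batch, 3)
        pixel_values, targets = pixel_values.to(device), targets.to(device)
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type=device, dtype=torch.float16):  # AMP forward
            loss = nn.functional.mse_loss(model(pixel_values), targets)
        scaler.scale(loss).backward()               # scaled backward pass
        scaler.step(optimizer)
        scaler.update()
```

Early stopping and learning-rate scheduling from the protocol above would wrap this loop, for example by tracking validation MSE with a patience counter and stepping a `torch.optim.lr_scheduler.ReduceLROnPlateau`, with `scaler = torch.cuda.amp.GradScaler()`.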

---

### Resource Details

| Resource | Description | Link |
|----------|-------------|------|
| Trained Vision Transformer Model | Pre-trained Vision Transformer fine-tuned on synthetic spectrograms for chirp localization | [HuggingFace Model Hub](https://huggingface.co/nubahador/Fine_Tuned_Transformer_Model_for_Chirp_Localization/tree/main) |
| Synthetic Spectrogram Dataset | 100,000 synthetic spectrograms with corresponding labels for chirp localization | [HuggingFace Dataset Hub](https://huggingface.co/datasets/nubahador/ChirpLoc100K___A_Synthetic_Spectrogram_Dataset_for_Chirp_Localization/tree/main) |
| PyTorch Implementation | PyTorch code for fine-tuning the Vision Transformer on spectrograms | [Implementation GitHub Repository](https://github.com/nbahador/Train_Spectrogram_Transformer) |
| Synthetic Chirp Generator | Python package for generating synthetic chirp spectrograms (images with corresponding labels) | [Generator GitHub Repository](https://github.com/nbahador/chirp_spectrogram_generator) |
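
The model and dataset above are hosted as ordinary Hugging Face repositories, so they can also be fetched programmatically. A small sketch using `huggingface_hub`, with repo IDs taken from the links in the table (no assumptions are made about the file layout inside each repo):

```python
# Sketch: download the fine-tuned checkpoint and the 100K-spectrogram dataset
# locally, using the repo IDs from the table above.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="nubahador/Fine_Tuned_Transformer_Model_for_Chirp_Localization",
)
dataset_dir = snapshot_download(
    repo_id="nubahador/ChirpLoc100K___A_Synthetic_Spectrogram_Dataset_for_Chirp_Localization",
    repo_type="dataset",
)
print("model files in:", model_dir)
print("dataset files in:", dataset_dir)
```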

---

<div style="display: flex; flex-wrap: wrap; gap: 15px; margin-top: 15px;">
  <div style="flex: 1; min-width: 200px; background: white; border-radius: 8px; padding: 15px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h4 style="margin-top: 0; color: #5f6368;">🧑‍💻 Curated by</h4>