Modalities: Audio · Languages: English
aboots committed · commit 131dc31 · verified · 1 parent: 3588282

Update README.md

Files changed (1): README.md (+20 −10)
@@ -9,7 +9,10 @@ task_categories:
 - text-to-speech
 
 ---
-[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2412.13071) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/language-modeling-lab/CLASP)
+[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2412.13071) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/language-modeling-lab/CLASP) [![Website](https://img.shields.io/website?url=https%3A%2F%2Fmultimodalrag.github.io%2F)](https://clasp1.github.io/)
+
+
+[Models](https://huggingface.co/llm-lab/CLASP) | [Springer Link](https://link.springer.com/chapter/10.1007/978-3-031-88717-8_2) | [arXiv Link](https://arxiv.org/abs/2412.13071) | [Proposed Dataset](https://huggingface.co/datasets/llm-lab/SpeechBrown) | [ACM Digital Library](https://dl.acm.org/doi/10.1007/978-3-031-88717-8_2) | [Website](https://clasp1.github.io/)
 
 ## Dataset Summary
 
@@ -17,7 +20,7 @@ task_categories:
 
 To train the [CLASP](https://huggingface.co/llm-lab/CLASP) model, we created this dataset based on the Brown Corpus. The synthetic speech was generated using the [NVIDIA Tacotron 2](https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/) text-to-speech model.
 
-For more information about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071). The dataset generation pipeline, along with code and usage instructions, is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP).
+For more information about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071), which was published at **ECIR 2025**. The dataset generation pipeline, along with code and usage instructions, is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP).
 
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/BT_bmv19WNz8OIXFcIWg5.png)
@@ -80,14 +83,21 @@ metadata.keys()
 ## Citations
 If you find our paper, code, data, or models useful, please cite the paper:
 ```
-@misc{abootorabi2024claspcontrastivelanguagespeechpretraining,
-      title={CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval},
-      author={Mohammad Mahdi Abootorabi and Ehsaneddin Asgari},
-      year={2024},
-      eprint={2412.13071},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2412.13071},
+@inproceedings{10.1007/978-3-031-88717-8_2,
+  author = {Abootorabi, Mohammad Mahdi and Asgari, Ehsaneddin},
+  title = {CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval},
+  year = {2025},
+  isbn = {978-3-031-88716-1},
+  publisher = {Springer-Verlag},
+  address = {Berlin, Heidelberg},
+  url = {https://doi.org/10.1007/978-3-031-88717-8_2},
+  doi = {10.1007/978-3-031-88717-8_2},
+  abstract = {This study introduces CLASP (Contrastive Language-Speech Pretraining), a multilingual, multimodal representation tailored for audio-text information retrieval. CLASP leverages the synergy between spoken content and textual data. During training, we utilize our newly introduced speech-text dataset, which encompasses 15 diverse categories ranging from fiction to religion. CLASP’s audio component integrates audio spectrograms with a pre-trained self-supervised speech model, while its language encoding counterpart employs a sentence encoder pre-trained on over 100 languages. This unified lightweight model bridges the gap between various modalities and languages, enhancing its effectiveness in handling and retrieving multilingual and multimodal data. Our evaluations across multiple languages demonstrate that CLASP establishes new benchmarks in HITS@1, MRR, and meanR metrics, outperforming traditional ASR-based retrieval methods that rely on transcribing speech into text for subsequent text retrieval, especially in specific scenarios.},
+  booktitle = {Advances in Information Retrieval: 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6–10, 2025, Proceedings, Part IV},
+  pages = {10–20},
+  numpages = {11},
+  keywords = {Multimodal IR, Speech Retrieval, Contrastive Learning},
+  location = {Lucca, Italy}
 }
 ```
 
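
The README text above states that the speech was synthesized with NVIDIA Tacotron 2. As a rough illustration of that kind of pipeline (not the authors' exact generation script; it follows NVIDIA's public PyTorch Hub example, and the WaveGlow vocoder, fp32 math, CUDA device, and sample sentence are assumptions), a single sentence could be synthesized like this:

```python
import torch
import torchaudio

# Load Tacotron 2 (text -> mel spectrogram) and WaveGlow (mel -> waveform)
# from PyTorch Hub, following NVIDIA's published example. The vocoder choice
# here is an assumption, not necessarily what was used for this dataset.
tacotron2 = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                           "nvidia_tacotron2", model_math="fp32")
waveglow = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                          "nvidia_waveglow", model_math="fp32")
utils = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_tts_utils")

tacotron2 = tacotron2.to("cuda").eval()
waveglow = waveglow.remove_weightnorm(waveglow).to("cuda").eval()

# Hypothetical input sentence; the real dataset draws its text from the Brown Corpus.
text = "The quick brown fox jumps over the lazy dog."
sequences, lengths = utils.prepare_input_sequence([text])

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # text -> mel spectrogram
    audio = waveglow.infer(mel)                      # mel -> 22.05 kHz waveform

# audio has shape (1, num_samples), which torchaudio.save treats as one channel.
torchaudio.save("sample.wav", audio.cpu(), 22050)
```

The dataset generation pipeline linked in the README (the CLASP GitHub repository) remains the authoritative reference for how the released audio was actually produced.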