Automatic Speech Recognition
Commit b106bc6 by leduckhai · verified · 1 Parent(s): 5cf44bb

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -1,5 +1,25 @@
+ ---
+ datasets:
+ - leduckhai/MultiMed
+ language:
+ - en
+ - vi
+ - zh
+ - fr
+ - de
+ metrics:
+ - wer
+ - cer
+ base_model:
+ - openai/whisper-small
+ new_version: leduckhai/MultiMed-ST
+ pipeline_tag: automatic-speech-recognition
+ ---
  # MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder
 
+ Please refer to the newer version, which integrates ASR + MT models: [https://huggingface.co/leduckhai/MultiMed-ST](https://huggingface.co/leduckhai/MultiMed-ST)
+
+
  ## Description:
  Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants.
  This technology enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics.
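
Since the new metadata declares `pipeline_tag: automatic-speech-recognition`, `base_model: openai/whisper-small`, and `metrics: wer, cer`, a checkpoint published under this card should be usable with the standard `transformers` ASR pipeline and scorable with the `evaluate` library. The sketch below is illustrative only: the checkpoint id `leduckhai/MultiMed`, the audio file name, and the reference transcript are assumptions, not values confirmed by this commit.

```python
# Minimal sketch: run ASR with a Whisper-small-based checkpoint and score it with WER/CER.
# ASSUMPTIONS: the model id "leduckhai/MultiMed", the audio file, and the reference
# transcript are placeholders for illustration; substitute the real values.
from transformers import pipeline
import evaluate

asr = pipeline(
    "automatic-speech-recognition",  # matches the card's pipeline_tag
    model="leduckhai/MultiMed",      # assumed checkpoint id
)

# Transcribe a (hypothetical) 16 kHz mono recording of a medical consultation.
prediction = asr("consultation_clip.wav")["text"]

# Score against a reference transcript using the metrics named in the card.
reference = "reference transcript goes here"
wer = evaluate.load("wer").compute(predictions=[prediction], references=[reference])
cer = evaluate.load("cer").compute(predictions=[prediction], references=[reference])
print(f"WER: {wer:.3f}  CER: {cer:.3f}")
```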