Update README.md
README.md (changed)
@@ -206,4 +206,54 @@ The ROC-AUC performance is listed in the following table. A higher ROC-AUC indic
## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards [here](https://developer.nvidia.com/blog/enhancing-ai-transparency-and-ethical-considerations-with-model-card/).

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Bias

Field | Response
:-----|:--------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | To reduce false positive errors (cases where the model incorrectly detects speech when none is present), the model was trained with white-noise and real-world noise perturbations, and the volume of the training audio was varied. The training data also includes non-speech sounds (such as coughing, laughter, and breathing) so the model learns to distinguish speech from non-speech audio. A minimal augmentation sketch follows the table.
Bias Metric (If Measured): | False Positive Rate
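
The perturbations described above can be approximated in a few lines of NumPy. This is an illustrative sketch, not the training pipeline used for this model: the function name `augment_waveform`, the SNR and gain ranges, and the assumption of a float waveform scaled to [-1, 1] are all hypothetical.

```python
import numpy as np

def augment_waveform(wav, snr_db_range=(5.0, 30.0), gain_db_range=(-10.0, 10.0), seed=None):
    """Hypothetical sketch: white-noise perturbation at a random SNR plus a random volume change.
    Assumes `wav` is a float waveform scaled to [-1, 1]."""
    rng = np.random.default_rng(seed)

    # Add white noise scaled to a randomly drawn signal-to-noise ratio.
    snr_db = rng.uniform(*snr_db_range)
    signal_power = np.mean(wav ** 2) + 1e-12
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noisy = wav + rng.normal(0.0, np.sqrt(noise_power), size=wav.shape)

    # Vary the overall volume by a random gain in decibels.
    gain_db = rng.uniform(*gain_db_range)
    return np.clip(noisy * (10.0 ** (gain_db / 20.0)), -1.0, 1.0)
```

Real-world noise perturbation would follow the same pattern, mixing in recorded background noise instead of Gaussian samples.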

## Explainability

Field | Response
:-----|:--------
Intended Domain: | Voice Activity Detection (VAD)
Model Type: | Convolutional Neural Network (CNN)
Intended Users: | Developers, Speech Processing Engineers, AI Researchers
Output: | Sequence of speech probabilities for each 20-millisecond audio frame
Describe how the model works: | The model extracts spectrogram features from the input audio and passes them through MarbleNet, a lightweight CNN-based model designed for VAD. The CNN learns patterns associated with speech activity and outputs a probability score indicating the presence of speech in each 20-millisecond frame (see the post-processing sketch after the table).
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | The model operates on 20-millisecond frames. Longer audio is supported by splitting it into smaller segments, but the model does not produce outputs at a finer granularity than 20 milliseconds.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Accuracy (False Positive Rate, ROC-AUC score), Latency, Throughput
Potential Known Risks: | The model was trained on a limited set of languages (Chinese, English, French, Spanish, German, and Russian), so quality may degrade for languages and accents not included in the training dataset.
Licensing: | [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license)
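
Because the model emits one speech probability per 20-millisecond frame, downstream code typically thresholds those probabilities and merges consecutive speech frames into segments. The sketch below is a minimal illustration under that assumption and is not part of this repository: `frame_probs` stands in for the model's per-frame output, and the 0.5 threshold is an example value.

```python
import numpy as np

FRAME_SHIFT_S = 0.02  # each output probability covers a 20 ms frame

def probs_to_segments(frame_probs, threshold=0.5):
    """Hypothetical post-processing: threshold per-frame speech probabilities
    and merge consecutive speech frames into (start_s, end_s) segments."""
    is_speech = np.asarray(frame_probs) >= threshold
    segments, start = [], None
    for i, speech in enumerate(is_speech):
        if speech and start is None:
            start = i                      # a speech segment opens at this frame
        elif not speech and start is not None:
            segments.append((start * FRAME_SHIFT_S, i * FRAME_SHIFT_S))
            start = None
    if start is not None:                  # close a segment that runs to the end
        segments.append((start * FRAME_SHIFT_S, len(is_speech) * FRAME_SHIFT_S))
    return segments

# Example: frames 2-4 exceed the threshold -> one segment from 0.04 s to 0.10 s.
print(probs_to_segments([0.1, 0.2, 0.9, 0.95, 0.8, 0.3]))
```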

## Privacy

Field | Response
:-----|:--------
Generatable or reverse engineerable personal data? | None
Personal data used to create this model? | None
How often is dataset reviewed? | Before Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes

## Safety

Field | Response
:-----|:--------
Model Application(s): | Automatic Speech Recognition, Speaker Diarization, Speech Processing, Voice Activity Detection
List types of specific high-risk AI systems, if any, in which the model can be integrated: | Select from the following: [Biometrics] OR [Critical infrastructure] OR [Machinery and Robotics] OR [Medical Devices] OR [Vehicles] OR [Aviation] OR [Education and vocational training] OR [Employment and Workers Management]
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license)
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied to limit access for dataset generation and model development. Dataset access was restricted during training, and dataset license constraints were adhered to.