---
dataset_info:
- config_name: dedup
  features:
  - name: title
    dtype: string
  - name: description
    dtype: string
  - name: tags
    sequence: string
  - name: chapter_names
    sequence: string
  - name: chapter_contents
    sequence: string
  - name: language
    dtype: string
  - name: language_confidence
    dtype: float64
  - name: overall_nsfw_score
    dtype: float64
  - name: chapter_nsfw_scores
    sequence: float64
  splits:
  - name: train
    num_bytes: 601170344
    num_examples: 7466
  download_size: 341659219
  dataset_size: 601170344
- config_name: default
  features:
  - name: title
    dtype: string
  - name: description
    dtype: string
  - name: tags
    sequence: string
  - name: chapter_names
    sequence: string
  - name: chapter_contents
    sequence: string
  - name: language
    dtype: string
  - name: language_confidence
    dtype: float64
  - name: overall_nsfw_score
    dtype: float64
  - name: chapter_nsfw_scores
    sequence: float64
  - name: overall_ai_score
    dtype: float64
  - name: chapter_ai_scores
    sequence: float64
  splits:
  - name: train
    num_bytes: 739774729
    num_examples: 8817
  download_size: 420226700
  dataset_size: 739774729
configs:
- config_name: dedup
  data_files:
  - split: train
    path: dedup/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

I would not recommend relying on the NSFW scores for any language other than English (or at all, for that matter).

I eventually got language detection working *okay*, but if you have a better solution, please let me know: it still isn't great at handling the unusual characters many of these stories use.
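Given the caveats above, a minimal sketch of filtering rows on the `language`, `language_confidence`, and `overall_nsfw_score` fields. The threshold values and the `"en"` language code are illustrative assumptions, not recommendations; with the `datasets` library, the same predicate could be passed to `Dataset.filter` after loading a config (e.g. `load_dataset(..., "dedup", split="train")`).

```python
def keep_row(row, min_confidence=0.9, max_nsfw=0.5):
    """Keep rows detected as English with reasonable confidence and a
    low overall NSFW score. Thresholds are illustrative, and "en" is an
    assumed language code; check actual field values before relying on this."""
    return (
        row["language"] == "en"
        and row["language_confidence"] >= min_confidence
        and row["overall_nsfw_score"] < max_nsfw
    )

# Toy records mirroring the schema above (values invented for illustration).
rows = [
    {"language": "en", "language_confidence": 0.98, "overall_nsfw_score": 0.10},
    {"language": "en", "language_confidence": 0.55, "overall_nsfw_score": 0.10},
    {"language": "fr", "language_confidence": 0.99, "overall_nsfw_score": 0.90},
]
kept = [r for r in rows if keep_row(r)]
```

Only the first toy record survives: the second fails the confidence cutoff and the third is not English.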