No description provided.
balsab changed pull request status to open

Thanks for the PR. Here are a few improvements:

  1. Please add a description to the PR
  2. Here are some suggested code improvements

Simplify here:

-    ## load all data first to get splits, then load and filter by split
-    data = load_dataset("NbAiLab/NCC", streaming=True)
-    data_splits=list(reversed(data.keys()))
-
-     for current_split in data_splits:
-        data = load_dataset("NbAiLab/NCC", streaming=True, split=current_split)
-        data_iter = iter(data)
+    data = load_dataset("NbAiLab/NCC", streaming=True)
+
+    for split in data:
+        data_iter = iter(data[split])
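
i.e., end to end the simplified version would look roughly like this (a sketch; it assumes the default NCC config and a "text" column):

from datasets import load_dataset

data = load_dataset("NbAiLab/NCC", streaming=True)

for split in data:  # iterating the dataset dict yields the split names
    for example in data[split]:
        text = example["text"]
        ...  # filtering and formatting goes here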

There is no need for a while True loop here:

        # filtering and formatting
        while True:
            try:

You can just use a datasets map function

It is unclear what is going on here (the function naming is confusing and the class seems unnecessary). I would refactor:

meta_data_filtering = doc_filter.first_layer_filter(current_text)
# to
streaming_dataset = streaming_dataset.map(add_fasttext_language, num_proc=4)  # you don't need this in this case, but this is the idea
streaming_dataset = streaming_dataset.filter(language_filter)
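
For concreteness, the two callables could look something like this (a sketch only; the model path and column names are placeholders, and as noted above the map step can likely be skipped if the dataset already ships a fastText language column):

import fasttext

ft_model = fasttext.load_model("lid.176.bin")  # placeholder path to a fastText language-ID model

def add_fasttext_language(example):
    # predict a language label and store it on the example
    labels, _probs = ft_model.predict(example["text"].replace("\n", " "), k=1)
    example["lang_fasttext"] = labels[0].removeprefix("__label__")
    return example

def language_filter(example):
    # keep only documents labelled as Danish
    return example["lang_fasttext"] == "da"
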
  3. I would also like some information on the filtering after the initial language filtering (a sketch of how these could be logged follows the refactor below):
  • number of tokens
  • number of docs
  • % removed at each step

So I would probably refactor to:

streaming_dataset = streaming_dataset.filter(language_filter)

# convert to non-streaming
# convert to dynaword format
# filter one at a time
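
For instance (a sketch; stopword_filter and length_filter are placeholder names, and dataset is the materialised, non-streaming Dataset from the step above):

def stopword_filter(example):  # placeholder: the PR's real implementation goes here
    return True

def length_filter(example):    # placeholder
    return len(example["text"]) > 100

def n_tokens(ds):
    # whitespace tokens as a cheap proxy for token counts
    return sum(len(text.split()) for text in ds["text"])

filters = [("stopword_filter", stopword_filter), ("length_filter", length_filter)]

for name, fn in filters:
    docs_before, tokens_before = len(dataset), n_tokens(dataset)
    dataset = dataset.filter(fn)
    docs_after, tokens_after = len(dataset), n_tokens(dataset)
    print(
        f"{name}: removed {docs_before - docs_after} docs "
        f"({100 * (docs_before - docs_after) / docs_before:.1f}%) "
        f"and {tokens_before - tokens_after} tokens"
    )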

I suspect that the stopword filter might be too aggressive. Where did you get the stopword list from?

  4. Splitting the dataset into multiple datasets

I also think I would split up the corpora into:
["ncc-newspapers", "ncc-parliament", "ncc-publicreport", ...

That means that we will get a separate dataset for each source (currently you just use "ncc"). We do not want one dataset to mix sources with different licenses. This also means that you will need multiple datasheets.
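
Roughly like this, once the data is non-streaming (the "doc_type" column name is an assumption; check the actual schema):

from pathlib import Path

sources = sorted(set(dataset["doc_type"]))  # "doc_type" assumed to hold the NCC source

for source in sources:
    subset = dataset.filter(lambda ex, s=source: ex["doc_type"] == s)
    out_dir = Path(f"ncc-{source}")
    out_dir.mkdir(exist_ok=True)
    subset.to_parquet(str(out_dir / f"{source}.parquet"))  # one dataset (and datasheet) per source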

  5. In the figure we seem to have a few REALLY long documents. I would examine some of these.
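
A quick way to pull out the longest documents for manual inspection (sketch; whitespace tokens as a rough length measure):

with_len = dataset.map(lambda ex: {"n_tokens": len(ex["text"].split())})
longest = with_len.sort("n_tokens", reverse=True).select(range(10))

for ex in longest:
    print(ex["n_tokens"], ex["text"][:200].replace("\n", " "))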

  6. Language filtering per source

I suspect that the language labeling in some of these is wrong, so it would be nice to check whether Danish makes up a significant proportion of each split. I imagine that some are almost entirely Norwegian with only a few misclassifications.

You could do this using:

from collections import defaultdict

samples_pr_source: dict = defaultdict(lambda: defaultdict(int))  # counts per source and language

def language_filter_with_desc_stats(example):
    source = ...    # the source/doc-type field on the example
    language = ...  # the language field on the example
    samples_pr_source[source][language] += 1
    return language == "da"  # keep only Danish

streaming_dataset = streaming_dataset.filter(language_filter_with_desc_stats, num_proc=num_proc)

# save + log desc stats
  7. I would add a log; see danske-taler for an example.
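
Any simple log of the processing stats would do, e.g. (a generic sketch, not copied from danske-taler):

import logging

logging.basicConfig(
    filename="ncc_processing.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger(__name__)

docs_before, docs_after = 1_000_000, 800_000  # placeholder numbers from the filtering steps
logger.info(
    "language filter kept %d of %d documents (%.1f%%)",
    docs_after, docs_before, 100 * docs_after / docs_before,
)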

I would also do some quality checking on duplicates.
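
For the duplicate check, even a simple exact-match pass catches the worst cases (near-duplicate detection would need more, e.g. MinHash):

import hashlib
from collections import Counter

# count exact duplicates by hashing the lightly normalised text
hashes = Counter(
    hashlib.md5(text.strip().lower().encode("utf-8")).hexdigest()
    for text in dataset["text"]
)
n_duplicates = sum(count - 1 for count in hashes.values() if count > 1)
print(f"{n_duplicates} exact duplicates out of {sum(hashes.values())} documents")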
