Adding Scrape Hovedstaden

#70
by kris927b - opened
Danish Foundation Models org

This PR will add the Scrape Hovedstaden dataset from sprogteknologi.dk

Danish Foundation Models org

@KennethEnevoldsen adding the scrape_hovedstaden data to dynaword.

  • I have run the test suite using make test and all tests pass
  • I have added/changed a dataset and have
    • I have updated descriptive statistics using make update-descriptive-statistics
    • I have bumped the version using make bump-version
  • If I have added a create.py script, I have added the script dependencies required to run that script.
  • I have updated the CHANGELOG.md if appropriate

I have yet to bump the version and add an entry to the changelog, to avoid creating conflicts with PR #69.

kris927b changed pull request status to open
Danish Foundation Models org

Looks good. Just want to make sure that there is no (notable) overlap with ai-aktindsigt?

The pretty name is quite long. How about:

Health Hovedstaden

Domain: Should it be medical? (secondary can be Encyclopedic)

Short description: I would change it to:

Guidelines and informational documents for healthcare professionals from the Capital Region

Description:
The description states a token count, but it doesn't match the observed numbers. Generally I would remove it, but it is probably worth examining first:

The corpus contains 9,941,236 tokens (word separation by spaces) extracted from 15,829 documents and 8,923 tables.
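
A quick way to examine this (a minimal sketch of whitespace token counting; the Hub repo id and the text column name below are assumptions, substitute the actual source dataset):

```python
# Sketch: recount whitespace-separated "tokens" to compare against the stated
# 9,941,236. The repo id and column name are placeholders, not the real ones.
from datasets import load_dataset

ds = load_dataset("<org>/scrape_hovedstaden", split="train")  # hypothetical repo id
n_tokens = sum(len(row["text"].split()) for row in ds)
print(f"{n_tokens:,} tokens across {len(ds):,} documents")
```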

How are the tables processed?

What is meant by:

The corpus was created based on the texts in the document collection and has been post-processed so that the texts can be used for the development of language technology.

For the paper reference, I would just convert it to a link.

Regarding the last note, I would move that to a subsection under Dataset Description called ### Unintended Uses.

The license text is in Danish; change it to English.

Danish Foundation Models org

@KennethEnevoldsen Thanks for the review.

Regarding:

The corpus was created based on the texts in the document collection and has been post-processed so that the texts can be used for the development of language technology.

Based on the HF dataset, the post-processing seems to be just extracting the text from the HTML?
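
If so, it is presumably something along these lines (a rough sketch assuming BeautifulSoup; the actual tooling used for the dataset isn't documented):

```python
# Rough sketch of the suspected post-processing: stripping HTML down to plain
# text. This is an assumption about the pipeline, not the documented method.
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-content elements
        tag.decompose()
    return soup.get_text(separator="\n", strip=True)
```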

Ready to merge
This branch is ready to get merged automatically.
