id | title | body | description | state | created_at | updated_at | closed_at | user
---|---|---|---|---|---|---|---|---|
1,023,861,988 | feat: 🎸 allow empty features | Anyway: we don't check whether the number of features corresponds to the
number of columns in the rows, so... why not let the features be empty.
The client will have to guess the types. | feat: 🎸 allow empty features: Anyway: we don't check whether the number of features corresponds to the
number of columns in the rows, so... why not let the features be empty.
The client will have to guess the types. | closed | 2021-10-12T14:01:46Z | 2021-10-12T14:59:38Z | 2021-10-12T14:59:35Z | severo |
1,023,612,320 | feat: 🎸 add /webhook endpoint | See https://github.com/huggingface/datasets-preview-backend/issues/36 | feat: 🎸 add /webhook endpoint: See https://github.com/huggingface/datasets-preview-backend/issues/36 | closed | 2021-10-12T09:58:09Z | 2021-10-12T11:24:27Z | 2021-10-12T11:17:50Z | severo |
1,023,534,749 | feat: 🎸 support allenai/c4 dataset | See https://github.com/huggingface/datasets/issues/2859 and https://github.com/huggingface/datasets-preview-backend/issues/17 | feat: 🎸 support allenai/c4 dataset: See https://github.com/huggingface/datasets/issues/2859 and https://github.com/huggingface/datasets-preview-backend/issues/17 | closed | 2021-10-12T08:41:03Z | 2021-10-12T08:41:42Z | 2021-10-12T08:41:41Z | severo |
1,023,511,100 | Support image datasets | Some examples we want to support
- Array2D
- [x] `mnist` - https://datasets-preview.huggingface.tech/rows?dataset=mnist
- Array3D
- [x] `cifar10` - https://datasets-preview.huggingface.tech/rows?dataset=cifar10
- [x] `cifar100` - https://datasets-preview.huggingface.tech/rows?dataset=cifar100
- local files:
- [x] `food101` - https://datasets-preview.huggingface.tech/rows?dataset=food101
- remote files (URL):
- [ ] `compguesswhat`
- [x] `severo/wit` - https://datasets-preview.huggingface.tech/rows?dataset=severo/wit
| Support image datasets: Some examples we want to support
- Array2D
- [x] `mnist` - https://datasets-preview.huggingface.tech/rows?dataset=mnist
- Array3D
- [x] `cifar10` - https://datasets-preview.huggingface.tech/rows?dataset=cifar10
- [x] `cifar100` - https://datasets-preview.huggingface.tech/rows?dataset=cifar100
- local files:
- [x] `food101` - https://datasets-preview.huggingface.tech/rows?dataset=food101
- remote files (URL):
- [ ] `compguesswhat`
- [x] `severo/wit` - https://datasets-preview.huggingface.tech/rows?dataset=severo/wit
| closed | 2021-10-12T08:20:12Z | 2021-10-21T15:30:22Z | 2021-10-21T15:30:22Z | severo |
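As a companion to the image-datasets issue above, here is a minimal sketch (an assumed helper, not the backend's actual code) of one way to expose Array2D/Array3D image columns such as `mnist` or `cifar10`: encode the raw pixel array as a base64 PNG so the JSON `/rows` payload stays self-contained.

```python
# Sketch only: turn an Array2D/Array3D pixel array into a data-URL PNG for the JSON response.
import base64
import io

import numpy as np
from PIL import Image


def array_to_data_url(pixels) -> str:
    array = np.asarray(pixels, dtype=np.uint8)  # mnist: (28, 28), cifar10: (32, 32, 3)
    image = Image.fromarray(array)
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
    return f"data:image/png;base64,{encoded}"
```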
1,022,900,711 | feat: 🎸 only cache one entry per dataset | Before: every endpoint call generated a cache entry. Now: all the
endpoint calls related to a dataset use the same cached value (which
takes longer to compute). The benefits are simpler code and, most
importantly, it's easier to manage cache consistency (everything is OK
for a dataset, or nothing).
BREAKING CHANGE: 🧨 the /cache, /cache-reports and /valid responses have changed. | feat: 🎸 only cache one entry per dataset: Before: every endpoint call generated a cache entry. Now: all the
endpoint calls related to a dataset use the same cached value (which
takes longer to compute). The benefits are simpler code and, most
importantly, it's easier to manage cache consistency (everything is OK
for a dataset, or nothing).
BREAKING CHANGE: 🧨 the /cache, /cache-reports and /valid responses have changed. | closed | 2021-10-11T16:20:09Z | 2021-10-11T16:23:24Z | 2021-10-11T16:23:23Z | severo |
1,021,343,485 | Refactor | null | Refactor: | closed | 2021-10-08T17:58:27Z | 2021-10-08T17:59:05Z | 2021-10-08T17:59:04Z | severo |
1,020,847,936 | feat: 🎸 remove benchmark related code | It's covered by the /cache-reports endpoint | feat: 🎸 remove benchmark related code: It's covered by the /cache-reports endpoint | closed | 2021-10-08T08:43:52Z | 2021-10-08T08:44:00Z | 2021-10-08T08:43:59Z | severo |
1,020,819,085 | No cache expiration | null | No cache expiration: | closed | 2021-10-08T08:10:41Z | 2021-10-08T08:35:30Z | 2021-10-08T08:35:29Z | severo |
1,015,568,634 | Add valid endpoint | Fixes #24 | Add valid endpoint: Fixes #24 | closed | 2021-10-04T19:51:26Z | 2021-10-04T19:51:56Z | 2021-10-04T19:51:55Z | severo |
1,015,381,449 | Cache functions and responses | null | Cache functions and responses: | closed | 2021-10-04T16:23:56Z | 2021-10-04T16:24:46Z | 2021-10-04T16:24:45Z | severo |
1,015,121,236 | raft - 404 | https://datasets-preview.huggingface.tech/rows?dataset=raft&config=ade_corpus_v2&split=test | raft - 404: https://datasets-preview.huggingface.tech/rows?dataset=raft&config=ade_corpus_v2&split=test | closed | 2021-10-04T12:33:03Z | 2021-10-12T08:52:15Z | 2021-10-12T08:52:15Z | severo |
1,013,636,022 | Should the features be associated with a split, instead of a config? | For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right, @lhoestq?
Is there any example of such a dataset on the hub or in the canonical ones? | Should the features be associated with a split, instead of a config?: For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right, @lhoestq?
Is there any example of such a dataset on the hub or in the canonical ones? | closed | 2021-10-01T18:14:53Z | 2021-10-05T09:25:04Z | 2021-10-05T09:25:04Z | severo |
1,013,142,890 | /splits does not error when no config exists and a wrong config is passed | https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=doesnotexist
returns:
```
{
"splits": [
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "validation"
},
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "train"
}
]
}
```
instead of giving an error.
As https://datasets-preview.huggingface.tech/configs?dataset=sent_comp returns
```
{
"configs": [
{
"dataset": "sent_comp",
"config": "default"
}
]
}
```
the only allowed `config` parameter should be `default`. | /splits does not error when no config exists and a wrong config is passed: https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=doesnotexist
returns:
```
{
"splits": [
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "validation"
},
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "train"
}
]
}
```
instead of giving an error.
As https://datasets-preview.huggingface.tech/configs?dataset=sent_comp returns
```
{
"configs": [
{
"dataset": "sent_comp",
"config": "default"
}
]
}
```
the only allowed `config` parameter should be `default`. | closed | 2021-10-01T09:52:40Z | 2022-09-16T20:09:54Z | 2022-09-16T20:09:53Z | severo |
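A minimal sketch of the validation requested above, assuming `datasets.get_dataset_config_names` is used to list the allowed configs (the helper name is illustrative, not the backend's real code):

```python
from datasets import get_dataset_config_names


def check_config(dataset: str, config: str) -> None:
    # e.g. ["default"] for sent_comp; a wrong config should error instead of returning splits
    allowed = get_dataset_config_names(dataset)
    if config not in allowed:
        raise ValueError(
            f"config '{config}' does not exist for dataset '{dataset}'; allowed: {allowed}"
        )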
1,008,141,232 | generate infos | fixes #52 | generate infos: fixes #52 | closed | 2021-09-27T13:17:04Z | 2021-09-27T13:21:01Z | 2021-09-27T13:21:00Z | severo |
1,008,034,013 | Regenerate dataset-info instead of loading it? | Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main), while we are using `datasets.get_dataset_infos()`, which only loads the exported dataset-info.json files:
https://github.com/huggingface/datasets-preview-backend/blob/c2a78e7ce8e36cdf579fea805535fa9ef84a2027/src/datasets_preview_backend/queries/infos.py#L45
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/inspect.py#L115
We might want to call `._info()` from the builder to get the info, and features, instead of relying on the dataset-info.json file.
| Regenerate dataset-info instead of loading it?: Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main), while we are using `datasets.get_dataset_infos()`, which only loads the exported dataset-info.json files:
https://github.com/huggingface/datasets-preview-backend/blob/c2a78e7ce8e36cdf579fea805535fa9ef84a2027/src/datasets_preview_backend/queries/infos.py#L45
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/inspect.py#L115
We might want to call `._info()` from the builder to get the info, and features, instead of relying on the dataset-info.json file.
| closed | 2021-09-27T11:28:13Z | 2021-09-27T13:21:00Z | 2021-09-27T13:21:00Z | severo |
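A sketch of the alternative discussed above: rebuild the info from the builder itself, which calls the script's `_info()`, instead of relying only on an exported dataset-info.json. The helper name is illustrative.

```python
from datasets import load_dataset_builder


def get_features(dataset: str, config: str = None):
    builder = load_dataset_builder(dataset, config)
    # builder.info is produced by the builder, so it reflects the script's features
    # even when the exported dataset-info.json is missing or stale
    return builder.info.features
```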
1,006,450,875 | cache both the function returns and the endpoint results | Currently only the endpoint results are cached. We use them inside the code to get quick results by taking advantage of the cache, but that's not their purpose, and we have to parse / decode.
It would be better to directly cache the results of the functions (memoize).
Also: we could cache the raised exceptions, as done here:
https://github.com/peterbe/django-cache-memoize/blob/4da1ba4639774426fa928d4a461626e6f841b4f3/src/cache_memoize/__init__.py#L153L157
| cache both the function returns and the endpoint results: Currently only the endpoint results are cached. We use them inside the code to get quick results by taking advantage of the cache, but that's not their purpose, and we have to parse / decode.
It would be better to directly cache the results of the functions (memoize).
Also: we could cache the raised exceptions, as done here:
https://github.com/peterbe/django-cache-memoize/blob/4da1ba4639774426fa928d4a461626e6f841b4f3/src/cache_memoize/__init__.py#L153L157
| closed | 2021-09-24T13:10:23Z | 2021-10-04T16:24:45Z | 2021-10-04T16:24:45Z | severo |
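A rough sketch of the memoization idea above (an assumed decorator, not the project's code): cache both normal return values and raised exceptions, in the spirit of django-cache-memoize, so repeated calls with the same arguments don't recompute anything.

```python
import functools

_MEMO = {}


def memoize(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = (func.__name__, args, tuple(sorted(kwargs.items())))
        if key not in _MEMO:
            try:
                _MEMO[key] = ("ok", func(*args, **kwargs))
            except Exception as err:  # cache the exception too, so it is re-raised on later calls
                _MEMO[key] = ("error", err)
        kind, value = _MEMO[key]
        if kind == "error":
            raise value
        return value

    return wrapper
```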
1,006,447,227 | refactor to reduce functions complexity | For example,
https://github.com/huggingface/datasets-preview-backend/blob/13e533238eb6b6dfdcd8e7d3c23ed134c67b5525/src/datasets_preview_backend/queries/rows.py#L25
is rightly flagged by https://sourcery.ai/ as too convoluted. It's hard to debug and test, and there are too many special cases. | refactor to reduce functions complexity: For example,
https://github.com/huggingface/datasets-preview-backend/blob/13e533238eb6b6dfdcd8e7d3c23ed134c67b5525/src/datasets_preview_backend/queries/rows.py#L25
is rightly flagged by https://sourcery.ai/ as too convoluted. It's hard to debug and test, and there are too many special cases. | closed | 2021-09-24T13:06:22Z | 2021-10-12T08:49:32Z | 2021-10-12T08:49:31Z | severo |
1,006,441,817 | Add types to rows | fixes #25 | Add types to rows: fixes #25 | closed | 2021-09-24T13:00:43Z | 2021-09-24T13:06:14Z | 2021-09-24T13:06:13Z | severo |
1,006,439,805 | "flatten" the nested values? | See https://huggingface.co/docs/datasets/process.html#flatten | "flatten" the nested values?: See https://huggingface.co/docs/datasets/process.html#flatten | closed | 2021-09-24T12:58:34Z | 2022-09-16T20:10:22Z | 2022-09-16T20:10:22Z | severo |
1,006,364,861 | Info to infos | null | Info to infos: | closed | 2021-09-24T11:26:50Z | 2021-09-24T11:44:06Z | 2021-09-24T11:44:05Z | severo |
1,006,203,348 | feat: 🎸 blocklist "allenai/c4" dataset | see https://github.com/huggingface/datasets-preview-backend/issues/17#issuecomment-918515398 | feat: 🎸 blocklist "allenai/c4" dataset: see https://github.com/huggingface/datasets-preview-backend/issues/17#issuecomment-918515398 | closed | 2021-09-24T08:14:01Z | 2021-09-24T08:18:36Z | 2021-09-24T08:18:35Z | severo |
1,006,196,371 | use `environs` to manage the env vars? | https://pypi.org/project/environs/ instead of utils.py | use `environs` to manage the env vars?: https://pypi.org/project/environs/ instead of utils.py | closed | 2021-09-24T08:05:38Z | 2022-09-19T08:49:33Z | 2022-09-19T08:49:33Z | severo |
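For reference, a minimal sketch of what `environs`-based configuration could look like here; the variable names and defaults are only examples, not the backend's real settings.

```python
from environs import Env

env = Env()
env.read_env()  # read a .env file, if present

APP_PORT = env.int("APP_PORT", 8000)
CACHE_TTL_SECONDS = env.int("CACHE_TTL_SECONDS", 6 * 60 * 60)
HF_TOKEN = env.str("HF_TOKEN", None)
```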
1,005,706,223 | Grouped endpoints | null | Grouped endpoints: | closed | 2021-09-23T18:01:49Z | 2021-09-23T18:11:53Z | 2021-09-23T18:11:52Z | severo |
1,005,536,552 | Fix serialization in benchmark | ```
INFO: 127.0.0.1:38826 - "GET /info?dataset=oelkrise%2FCRT HTTP/1.1" 404 Not Found
``` | Fix serialization in benchmark: ```
INFO: 127.0.0.1:38826 - "GET /info?dataset=oelkrise%2FCRT HTTP/1.1" 404 Not Found
``` | closed | 2021-09-23T14:57:58Z | 2021-09-24T07:31:56Z | 2021-09-24T07:31:56Z | severo |
1,005,354,426 | feat: 🎸 remove ability to choose the number of extracted rows |
Closes: #32 | feat: 🎸 remove ability to choose the number of extracted rows:
Closes: #32 | closed | 2021-09-23T12:10:42Z | 2021-09-23T12:18:47Z | 2021-09-23T12:18:46Z | severo |
1,005,279,019 | Move benchmark to a different repo? | It's a client of the API | Move benchmark to a different repo?: It's a client of the API | closed | 2021-09-23T10:44:08Z | 2021-10-12T08:49:11Z | 2021-10-12T08:49:11Z | severo |
1,005,231,422 | Add basic cache | Fixes #3. See https://github.com/huggingface/datasets-preview-backend/milestone/1 for remaining issues. | Add basic cache: Fixes #3. See https://github.com/huggingface/datasets-preview-backend/milestone/1 for remaining issues. | closed | 2021-09-23T09:54:04Z | 2021-09-23T09:58:14Z | 2021-09-23T09:58:14Z | severo |
1,005,220,766 | Enable the private datasets | The code is already present to pass the token, but it's disabled in the code (hardcoded):
https://github.com/huggingface/datasets-preview-backend/blob/df04ffba9ca1a432ed65e220cf7722e518e0d4f8/src/datasets_preview_backend/cache.py#L119-L120
- [ ] enable private datasets and manage their cache adequately
- [ ] separate private caches from public caches: for authenticated requests, we need to check every time, or at least use a much lower TTL, because access can be revoked. Also: since a hub dataset can be turned private, how should we manage that?
- [ ] add doc. See https://github.com/huggingface/datasets-preview-backend/commit/f6576d5639bcb16fc71bf0e6796f3199124ebf49 | Enable the private datasets: The code is already present to pass the token, but it's disabled in the code (hardcoded):
https://github.com/huggingface/datasets-preview-backend/blob/df04ffba9ca1a432ed65e220cf7722e518e0d4f8/src/datasets_preview_backend/cache.py#L119-L120
- [ ] enable private datasets and manage their cache adequately
- [ ] separate private caches from public caches: for authenticated requests, we need to check every time, or at least use a much lower TTL, because access can be revoked. Also: since a hub dataset can be turned private, how should we manage that?
- [ ] add doc. See https://github.com/huggingface/datasets-preview-backend/commit/f6576d5639bcb16fc71bf0e6796f3199124ebf49 | closed | 2021-09-23T09:42:47Z | 2024-01-31T10:14:08Z | 2024-01-31T09:50:02Z | severo |
1,005,219,639 | Provide the ETag header | - [ ] set and manage the `ETag` header to save bandwidth when the client (browser) revalidates. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching and https://gist.github.com/timheap/1f4d9284e4f4d4f545439577c0ca6300
```python
# TODO: use key for ETag? It will need to be serialized
# key = get_rows_json.__cache_key__(
# dataset=dataset, config=config, split=split, num_rows=num_rows, token=request.user.token
# )
# print(f"key={key} in cache: {cache.__contains__(key)}")
```
- [ ] ETag: add an ETag header in the response (hash of the response)
- [ ] ETag: if the request contains the `If-None-Match`, parse its ETag (beware the "weak" ETags), compare to the cache, and return an empty 304 response if the cache is fresh (with or without changing the TTL), or 200 with content if it has changed | Provide the ETag header: - [ ] set and manage the `ETag` header to save bandwidth when the client (browser) revalidates. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching and https://gist.github.com/timheap/1f4d9284e4f4d4f545439577c0ca6300
```python
# TODO: use key for ETag? It will need to be serialized
# key = get_rows_json.__cache_key__(
# dataset=dataset, config=config, split=split, num_rows=num_rows, token=request.user.token
# )
# print(f"key={key} in cache: {cache.__contains__(key)}")
```
- [ ] ETag: add an ETag header in the response (hash of the response)
- [ ] ETag: if the request contains the `If-None-Match`, parse its ETag (beware the "weak" ETags), compare to the cache, and return an empty 304 response if the cache is fresh (with or without changing the TTL), or 200 with content if it has changed | closed | 2021-09-23T09:41:40Z | 2022-09-19T10:00:49Z | 2022-09-19T10:00:48Z | severo |
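A minimal sketch, with Starlette, of the ETag flow described in the checklist above (the helper name is hypothetical): hash the serialized response, compare it with `If-None-Match`, and answer 304 when it matches.

```python
import hashlib

from starlette.requests import Request
from starlette.responses import JSONResponse, Response


def json_with_etag(request: Request, content: dict) -> Response:
    # Serialize once so the ETag matches the exact bytes we would send.
    response = JSONResponse(content)
    etag = '"' + hashlib.sha1(response.body).hexdigest() + '"'
    client_etag = request.headers.get("if-none-match", "")
    if client_etag.startswith("W/"):  # tolerate "weak" ETags, as noted in the issue
        client_etag = client_etag[2:]
    if client_etag == etag:
        return Response(status_code=304, headers={"ETag": etag})
    response.headers["ETag"] = etag
    return response
```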
1,005,217,301 | Update canonical datasets using a webhook | Webhook invalidation of canonical datasets (GitHub):
- [x] set up the [`revision` argument](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset) to download datasets from the master branch - #119
- [x] set up a webhook on `datasets` library on every push to the master branch - see https://github.com/huggingface/moon-landing/issues/1345 - not needed anymore because the canonical datasets are mirrored to the hub.
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which caches should be invalidated (creation, update, deletion)
- [x] refresh these caches | Update canonical datasets using a webhook: Webhook invalidation of canonical datasets (GitHub):
- [x] set up the [`revision` argument](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset) to download datasets from the master branch - #119
- [x] set up a webhook on `datasets` library on every push to the master branch - see https://github.com/huggingface/moon-landing/issues/1345 - not needed anymore because the canonical datasets are mirrored to the hub.
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which caches should be invalidated (creation, update, deletion)
- [x] refresh these caches | closed | 2021-09-23T09:39:18Z | 2022-01-26T13:44:08Z | 2022-01-26T11:20:04Z | severo |
1,005,216,379 | Update hub datasets with webhook | Webhook invalidation of community datasets (hf.co):
- [x] set up a webhook on hf.co for dataset creation, update, and deletion -> waiting for https://github.com/huggingface/moon-landing/issues/1344
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which caches should be invalidated
- [x] refresh these caches
- [x] document added in moonrise mongo db | Update hub datasets with webhook: Webhook invalidation of community datasets (hf.co):
- [x] set up a webhook on hf.co for dataset creation, update, and deletion -> waiting for https://github.com/huggingface/moon-landing/issues/1344
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which caches should be invalidated
- [x] refresh these caches
- [x] document added in moonrise mongo db | closed | 2021-09-23T09:38:17Z | 2021-10-18T12:33:27Z | 2021-10-18T12:33:27Z | severo |
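A hypothetical sketch of the webhook endpoint described in the two issues above; the payload fields ("add", "update", "remove") and the cache helpers are assumptions, not the actual moon-landing contract.

```python
from starlette.requests import Request
from starlette.responses import JSONResponse


def refresh_cache(dataset: str) -> None:
    """Placeholder: recompute the cache entry for this dataset."""


def delete_cache(dataset: str) -> None:
    """Placeholder: drop the cache entry for this dataset."""


async def webhook_endpoint(request: Request) -> JSONResponse:
    payload = await request.json()
    for field in ("add", "update"):
        dataset = payload.get(field)
        if dataset:
            refresh_cache(dataset)
    removed = payload.get("remove")
    if removed:
        delete_cache(removed)
    return JSONResponse({"status": "ok"})
```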
1,005,214,423 | Refresh the cache? | Force a cache refresh on a regular basis (cron) | Refresh the cache?: Force a cache refresh on a regular basis (cron) | closed | 2021-09-23T09:36:02Z | 2021-10-12T08:34:41Z | 2021-10-12T08:34:41Z | severo |
1,005,213,132 | warm the cache | Warm the cache at application startup. We want:
- to avoid blocking the application, so: run asynchronously, and without hammering the server
- to have a warm cache as fast as possible (persisting the previous cache, then refreshing it at startup? - related: #35 )
- [x] create a function to list all the datasets and fill the cache for all the possible requests for it. It might be `make benchmark`, or a specific function -> `make warm`
- [x] persist the cache? or start with an empty cache when the application is restarted? -> yes, persisted
- [x] launch it at application startup -> it's done at startup, see INSTALL.md.
| warm the cache: Warm the cache at application startup. We want:
- to avoid blocking the application, so: run asynchronously, and without hammering the server
- to have a warm cache as fast as possible (persisting the previous cache, then refreshing it at startup? - related: #35 )
- [x] create a function to list all the datasets and fill the cache for all the possible requests for it. It might be `make benchmark`, or a specific function -> `make warm`
- [x] persist the cache? or start with an empty cache when the application is restarted? -> yes, persisted
- [x] launch it at application startup -> it's done at startup, see INSTALL.md.
| closed | 2021-09-23T09:34:35Z | 2021-10-12T08:35:27Z | 2021-10-12T08:35:27Z | severo |
1,005,206,110 | Add a parameter to specify the number of rows | It's a problem for the cache, so until we manage random access, we can:
- [ ] fill the cache with a large (maximum) number of rows, ie up to 1000
- [ ] also cache the default request (N = 100) -> set to the parameter used in moon-landing
- [ ] if a request comes with N = 247, for example, generate the response on the fly, from the large cache (1000), and don't cache that response | Add a parameter to specify the number of rows: It's a problem for the cache, so until we manage random access, we can:
- [ ] fill the cache with a large (maximum) number of rows, ie up to 1000
- [ ] also cache the default request (N = 100) -> set to the parameter used in moon-landing
- [ ] if a request comes with N = 247, for example, generate the response on the fly, from the large cache (1000), and don't cache that response | closed | 2021-09-23T09:26:34Z | 2022-09-16T20:13:55Z | 2022-09-16T20:13:55Z | severo |
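A small sketch of the scheme proposed above (illustrative constants): always cache the maximum number of rows and slice on the fly for other values of N, without creating extra cache entries.

```python
MAX_ROWS = 1000
DEFAULT_ROWS = 100


def get_rows_response(cached_rows: list, num_rows: int = DEFAULT_ROWS) -> list:
    # cached_rows always holds up to MAX_ROWS rows; a request for N = 247 is served
    # by slicing, and that sliced response is not cached separately.
    return cached_rows[: min(num_rows, MAX_ROWS)]
```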
1,005,204,434 | Remove the `rows`/ `num_rows` argument | We will fix it to 100, in order to simplify the cache management. | Remove the `rows`/ `num_rows` argument: We will fix it to 100, in order to simplify the cache management. | closed | 2021-09-23T09:24:46Z | 2021-09-23T12:18:46Z | 2021-09-23T12:18:46Z | severo |
1,005,200,587 | Manage concurrency | Currently (in the cache branch), only one worker is allowed.
We want to have multiple workers, but for that we need to have a shared cache:
- [ ] migrate from diskcache to redis
- [ ] remove the hardcoded limit of 1 worker | Manage concurrency: Currently (in the cache branch), only one worker is allowed.
We want to have multiple workers, but for that we need to have a shared cache:
- [ ] migrate from diskcache to redis
- [ ] remove the hardcoded limit of 1 worker | closed | 2021-09-23T09:20:43Z | 2021-10-06T08:08:35Z | 2021-10-06T08:08:35Z | severo |
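A sketch, under the assumption of a later migration from diskcache to Redis, of a shared cache that several workers could use; the key format and TTL are illustrative.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def get_cached_response(key: str):
    raw = r.get(key)
    return None if raw is None else json.loads(raw)


def set_cached_response(key: str, value: dict, ttl_seconds: int = 6 * 60 * 60) -> None:
    # setex stores the value with an expiration, so all workers see the same TTL
    r.setex(key, ttl_seconds, json.dumps(value))
```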
999,436,625 | Use FastAPI instead of only Starlette? | It would allow us to have docs, and surely a lot of other benefits | Use FastAPI instead of only Starlette?: It would allow us to have docs, and surely a lot of other benefits | closed | 2021-09-17T14:45:40Z | 2021-09-20T10:25:17Z | 2021-09-20T07:13:00Z | severo |
999,350,374 | Ensure non-ASCII characters are handled as expected | See https://github.com/huggingface/datasets-viewer/pull/15
It should be tested | Ensure non-ASCII characters are handled as expected: See https://github.com/huggingface/datasets-viewer/pull/15
It should be tested | closed | 2021-09-17T13:22:39Z | 2022-09-16T20:15:19Z | 2022-09-16T20:15:19Z | severo |
997,976,540 | Add endpoint to get splits + configs at once | See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709494993
Also evaluate doing the same for the rows (the payload might be too heavy) | Add endpoint to get splits + configs at once: See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709494993
Also evaluate doing the same for the rows (the payload might be too heavy) | closed | 2021-09-16T09:14:52Z | 2021-09-23T18:13:17Z | 2021-09-23T18:13:17Z | severo |
997,907,203 | endpoint to generate bitmaps for mnist or cifar10 on the fly | if there are very few instances of raw image data in datasets i think it's best to generate server side vs. writing client side code
no strong opinion on this though, depends on the number/variety of datasets i guess
(For Audio I don't know if we have some datasets with raw tensors inside them? @lhoestq @albertvillanova )
| endpoint to generate bitmaps for mnist or cifar10 on the fly: if there are very few instances of raw image data in datasets i think it's best to generate server side vs. writing client side code
no strong opinion on this though, depends on the number/variety of datasets i guess
(For Audio I don't know if we have some datasets with raw tensors inside them? @lhoestq @albertvillanova )
| closed | 2021-09-16T08:03:09Z | 2021-10-18T12:42:24Z | 2021-10-18T12:42:24Z | julien-c |
997,904,214 | Add endpoint to proxy local files inside datasets' data | for instance for:
<img width="2012" alt="Screenshot 2021-09-16 at 09 59 38" src="https://user-images.githubusercontent.com/326577/133573786-ca9b8b60-2271-4256-b1e4-aa05302fd2f3.png">
| Add endpoint to proxy local files inside datasets' data: for instance for:
<img width="2012" alt="Screenshot 2021-09-16 at 09 59 38" src="https://user-images.githubusercontent.com/326577/133573786-ca9b8b60-2271-4256-b1e4-aa05302fd2f3.png">
| closed | 2021-09-16T08:00:11Z | 2021-10-21T15:35:49Z | 2021-10-21T15:35:48Z | julien-c |
997,893,918 | Add columns types to /rows response | We currently just have the keys (column identifier) and values. We might want to give each column's type: "our own serialization scheme".
For ClassLabel, this means passing the "pretty names" (the names associated with the values) along with the type
See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709496849 | Add columns types to /rows response: We currently just have the keys (column identifier) and values. We might want to give each column's type: "our own serialization scheme".
For ClassLabel, this means passing the "pretty names" (the names associated with the values) along with the type
See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709496849 | closed | 2021-09-16T07:48:51Z | 2021-09-24T13:06:13Z | 2021-09-24T13:06:13Z | severo |
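A sketch of what "our own serialization scheme" could look like for the issue above; the JSON shape is an assumption, but `ClassLabel.names` and `Value.dtype` are real `datasets` attributes.

```python
from datasets import ClassLabel, Value


def serialize_feature(feature) -> dict:
    # ClassLabel: expose the pretty names along with the type
    if isinstance(feature, ClassLabel):
        return {"type": "ClassLabel", "names": feature.names}
    # Value: expose the dtype (int64, string, ...)
    if isinstance(feature, Value):
        return {"type": "Value", "dtype": feature.dtype}
    # nested or other feature types: fall back to the class name only
    return {"type": type(feature).__name__}
```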
997,434,233 | Expose a list of "valid" i.e. previewable datasets for moon-landing to be able to tag/showcase them | (linked to caching and pre-warming, obviously) | Expose a list of "valid" i.e. previewable datasets for moon-landing to be able to tag/showcase them: (linked to caching and pre-warming, obviously) | closed | 2021-09-15T19:28:47Z | 2021-10-04T19:51:55Z | 2021-10-04T19:51:55Z | julien-c |
997,012,873 | run benchmark automatically every week, and store the results | - [ ] create a github action?
- [ ] store the report in... an HF dataset? or in a github repo? should it be private?
- [ ] get these reports from https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading | run benchmark automatically every week, and store the results: - [ ] create a github action?
- [ ] store the report in... an HF dataset? or in a github repo? should it be private?
- [ ] get these reports from https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading | closed | 2021-09-15T12:15:00Z | 2021-10-12T08:48:02Z | 2021-10-12T08:48:01Z | severo |
995,697,610 | `make benchmark` is very long and blocks | Sometimes `make benchmark` blocks (nothing happens, and only one process is running, while the load is low). Ideally, it would not block, and other processes would be launched anyway so that the full capacity of the CPUs would be used (`-j -l 7` parameters of `make`)
To unblock, I have to kill and relaunch `make benchmark` manually. | `make benchmark` is very long and blocks: Sometimes `make benchmark` blocks (nothing happens, and only one process is running, while the load is low). Ideally, it would not block, and other processes would be launched anyway so that the full capacity of the CPUs would be used (`-j -l 7` parameters of `make`)
To unblock, I have to kill and relaunch `make benchmark` manually. | closed | 2021-09-14T07:42:44Z | 2021-10-12T08:34:17Z | 2021-10-12T08:34:17Z | severo |
995,690,502 | exception seen during `make benchmark` | Not sure which dataset threw this exception though, that's why I put the previous rows for further investigation.
```
poetry run python ../scripts/get_rows_report.py wikiann___CONFIG___or___SPLIT___test ../tmp/get_rows_reports/wikiann___CONFIG___or___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py csebuetnlp___SLASH___xlsum___CONFIG___uzbek___SPLIT___test ../tmp/get_rows_reports/csebuetnlp___SLASH___xlsum___CONFIG___uzbek___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py clips___SLASH___mfaq___CONFIG___no___SPLIT___train ../tmp/get_rows_reports/clips___SLASH___mfaq___CONFIG___no___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py common_voice___CONFIG___rm-vallader___SPLIT___train ../tmp/get_rows_reports/common_voice___CONFIG___rm-vallader___SPLIT___train.json
https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/test.tsv
poetry run python ../scripts/get_rows_report.py pasinit___SLASH___xlwic___CONFIG___xlwic_en_da___SPLIT___train ../tmp/get_rows_reports/pasinit___SLASH___xlwic___CONFIG___xlwic_en_da___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py indic_glue___CONFIG___wstp.mr___SPLIT___validation ../tmp/get_rows_reports/indic_glue___CONFIG___wstp.mr___SPLIT___validation.json
poetry run python ../scripts/get_rows_report.py banking77___CONFIG___default___SPLIT___test ../tmp/get_rows_reports/banking77___CONFIG___default___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py gem___CONFIG___xsum___SPLIT___challenge_test_bfp_05 ../tmp/get_rows_reports/gem___CONFIG___xsum___SPLIT___challenge_test_bfp_05.json
poetry run python ../scripts/get_rows_report.py turingbench___SLASH___TuringBench___CONFIG___TT_fair_wmt19___SPLIT___validation ../tmp/get_rows_reports/turingbench___SLASH___TuringBench___CONFIG___TT_fair_wmt19___SPLIT___validation.json
poetry run python ../scripts/get_rows_report.py igbo_monolingual___CONFIG___eze_goes_to_school___SPLIT___train ../tmp/get_rows_reports/igbo_monolingual___CONFIG___eze_goes_to_school___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py flax-sentence-embeddings___SLASH___stackexchange_titlebody_best_voted_answer_jsonl___CONFIG___gamedev___SPLIT___train ../tmp/get_rows_reports/flax-sentence-embeddings___SLASH___stackexchange_titlebody_best_voted_answer_jsonl___CONFIG___gamedev___SPLIT___train.json
* skipping . . .
Exception ignored in: <generator object ParsinluReadingComprehension._generate_examples at 0x7f094caa6dd0>
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 79, in __iter__
yield key, example
RuntimeError: generator ignored GeneratorExit
``` | exception seen during `make benchmark`: Not sure which dataset threw this exception though, that's why I put the previous rows for further investigation.
```
poetry run python ../scripts/get_rows_report.py wikiann___CONFIG___or___SPLIT___test ../tmp/get_rows_reports/wikiann___CONFIG___or___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py csebuetnlp___SLASH___xlsum___CONFIG___uzbek___SPLIT___test ../tmp/get_rows_reports/csebuetnlp___SLASH___xlsum___CONFIG___uzbek___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py clips___SLASH___mfaq___CONFIG___no___SPLIT___train ../tmp/get_rows_reports/clips___SLASH___mfaq___CONFIG___no___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py common_voice___CONFIG___rm-vallader___SPLIT___train ../tmp/get_rows_reports/common_voice___CONFIG___rm-vallader___SPLIT___train.json
https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/test.tsv
poetry run python ../scripts/get_rows_report.py pasinit___SLASH___xlwic___CONFIG___xlwic_en_da___SPLIT___train ../tmp/get_rows_reports/pasinit___SLASH___xlwic___CONFIG___xlwic_en_da___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py indic_glue___CONFIG___wstp.mr___SPLIT___validation ../tmp/get_rows_reports/indic_glue___CONFIG___wstp.mr___SPLIT___validation.json
poetry run python ../scripts/get_rows_report.py banking77___CONFIG___default___SPLIT___test ../tmp/get_rows_reports/banking77___CONFIG___default___SPLIT___test.json
poetry run python ../scripts/get_rows_report.py gem___CONFIG___xsum___SPLIT___challenge_test_bfp_05 ../tmp/get_rows_reports/gem___CONFIG___xsum___SPLIT___challenge_test_bfp_05.json
poetry run python ../scripts/get_rows_report.py turingbench___SLASH___TuringBench___CONFIG___TT_fair_wmt19___SPLIT___validation ../tmp/get_rows_reports/turingbench___SLASH___TuringBench___CONFIG___TT_fair_wmt19___SPLIT___validation.json
poetry run python ../scripts/get_rows_report.py igbo_monolingual___CONFIG___eze_goes_to_school___SPLIT___train ../tmp/get_rows_reports/igbo_monolingual___CONFIG___eze_goes_to_school___SPLIT___train.json
poetry run python ../scripts/get_rows_report.py flax-sentence-embeddings___SLASH___stackexchange_titlebody_best_voted_answer_jsonl___CONFIG___gamedev___SPLIT___train ../tmp/get_rows_reports/flax-sentence-embeddings___SLASH___stackexchange_titlebody_best_voted_answer_jsonl___CONFIG___gamedev___SPLIT___train.json
* skipping . . .
Exception ignored in: <generator object ParsinluReadingComprehension._generate_examples at 0x7f094caa6dd0>
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 79, in __iter__
yield key, example
RuntimeError: generator ignored GeneratorExit
``` | closed | 2021-09-14T07:33:59Z | 2021-09-23T12:39:13Z | 2021-09-23T12:39:13Z | severo |
995,212,806 | Upgrade datasets to 1.12.0 | - [x] See https://github.com/huggingface/datasets/releases/tag/1.12.0
- [x] launch benchmark and report to https://github.com/huggingface/datasets-preview-backend/issues/9 | Upgrade datasets to 1.12.0: - [x] See https://github.com/huggingface/datasets/releases/tag/1.12.0
- [x] launch benchmark and report to https://github.com/huggingface/datasets-preview-backend/issues/9 | closed | 2021-09-13T18:49:24Z | 2021-09-14T08:31:24Z | 2021-09-14T08:31:24Z | severo |
985,093,664 | Add unit tests to CI | null | Add unit tests to CI: | closed | 2021-09-01T12:30:48Z | 2021-09-02T12:56:45Z | 2021-09-02T12:56:45Z | severo |
984,798,758 | CI: how to acknowledge a "safety" warning? | We use `safety` to check vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the latest published version on PyPI is still 2.6.0. What to do in this case?
```
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
| checked 137 packages, using free DB (updated once a month) |
+============================+===========+==========================+==========+
| package | installed | affected | ID |
+============================+===========+==========================+==========+
| tensorflow | 2.6.0 | ==2.6.0 | 41161 |
+==============================================================================+
``` | CI: how to acknowledge a "safety" warning?: We use `safety` to check vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the latest published version on PyPI is still 2.6.0. What to do in this case?
```
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
| checked 137 packages, using free DB (updated once a month) |
+============================+===========+==========================+==========+
| package | installed | affected | ID |
+============================+===========+==========================+==========+
| tensorflow | 2.6.0 | ==2.6.0 | 41161 |
+==============================================================================+
``` | closed | 2021-09-01T07:20:45Z | 2021-09-15T11:58:56Z | 2021-09-15T11:58:48Z | severo |
984,024,435 | Prevent DoS when accessing some datasets | For example: the https://huggingface.co/datasets/allenai/c4 script makes 69219 outgoing requests **on every received request**, which occupies all the CPUs.
```
pm2 logs
```
```
0|datasets | INFO: 3.238.194.17:0 - "GET /configs?dataset=allenai/c4 HTTP/1.1" 200 OK
```
```
Check remote data files: 78%|███████▊ | 54330/69219 [14:13<03:05, 80.10it/s]
Check remote data files: 79%|███████▉ | 54349/69219 [14:13<03:51, 64.14it/s]
Check remote data files: 79%|███████▉ | 54364/69219 [14:14<04:44, 52.14it/s]
Check remote data files: 79%|███████▉ | 54375/69219 [14:14<04:48, 51.38it/s]
Check remote data files: 79%|███████▉ | 54448/69219 [14:15<02:37, 93.81it/s]
Check remote data files: 79%|███████▉ | 54543/69219 [14:15<01:56, 125.60it/s]
Check remote data files: 79%|███████▉ | 54564/69219 [14:16<03:22, 72.33it/s]
``` | Prevent DoS when accessing some datasets: For example: the https://huggingface.co/datasets/allenai/c4 script makes 69219 outgoing requests **on every received request**, which occupies all the CPUs.
```
pm2 logs
```
```
0|datasets | INFO: 3.238.194.17:0 - "GET /configs?dataset=allenai/c4 HTTP/1.1" 200 OK
```
```
Check remote data files: 78%|███████▊ | 54330/69219 [14:13<03:05, 80.10it/s]
Check remote data files: 79%|███████▉ | 54349/69219 [14:13<03:51, 64.14it/s]
Check remote data files: 79%|███████▉ | 54364/69219 [14:14<04:44, 52.14it/s]
Check remote data files: 79%|███████▉ | 54375/69219 [14:14<04:48, 51.38it/s]
Check remote data files: 79%|███████▉ | 54448/69219 [14:15<02:37, 93.81it/s]
Check remote data files: 79%|███████▉ | 54543/69219 [14:15<01:56, 125.60it/s]
Check remote data files: 79%|███████▉ | 54564/69219 [14:16<03:22, 72.33it/s]
``` | closed | 2021-08-31T15:56:11Z | 2021-10-15T15:57:49Z | 2021-10-15T15:57:49Z | severo |
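A minimal sketch of the mitigation adopted elsewhere in these issues (the allenai/c4 blocklist): refuse such datasets up front, before the library starts checking tens of thousands of remote files. The error type and list contents are illustrative.

```python
DATASET_BLOCKLIST = {"allenai/c4"}  # illustrative; could be loaded from configuration


def check_not_blocklisted(dataset: str) -> None:
    # Fail fast, before any remote-file checks start hammering the CPUs.
    if dataset in DATASET_BLOCKLIST:
        raise ValueError(f"dataset '{dataset}' is blocklisted to prevent denial of service")
```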
981,207,911 | Raise an issue when no row can be fetched | Currently, https://datasets-preview.huggingface.tech/rows?dataset=superb&config=asr&split=train&rows=5 returns
```
{
"dataset": "superb",
"config": "asr",
"split": "train",
"rows": [ ]
}
```
while it should return 5 rows. An error should be raised in that case.
Beware: manage the special case when the query parameter `rows` is greater than the number of rows in the split. In that case, it's normal that the number of returned rows is lower than requested. | Raise an issue when no row can be fetched: Currently, https://datasets-preview.huggingface.tech/rows?dataset=superb&config=asr&split=train&rows=5 returns
```
{
"dataset": "superb",
"config": "asr",
"split": "train",
"rows": [ ]
}
```
while it should return 5 rows. An error should be raised in that case.
Beware: manage the special case when the query parameter `rows` is greater than the number of rows in the split. In that case, it's normal that the number of returned rows is lower than requested. | closed | 2021-08-27T12:36:50Z | 2021-09-14T08:50:53Z | 2021-09-14T08:50:53Z | severo |
980,264,033 | Add an endpoint to get the dataset card? | See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md. | Add an endpoint to get the dataset card?: See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md. | closed | 2021-08-26T13:43:29Z | 2022-09-16T20:15:52Z | 2022-09-16T20:15:52Z | severo |
980,177,961 | Properly manage the case config is None | For example:
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=null returns
~~~json
{
"dataset": "sent_comp",
"config": "null",
"splits": [
"validation",
"train"
]
}
~~~
this should have errored since there is no `"null"` config (it's `null`).
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config= returns
~~~json
The split names could not be parsed from the dataset config.
~~~
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"config": null,
"splits": [
"validation",
"train"
]
}
~~~
---
As a reference for the same dataset https://datasets-preview.huggingface.tech/configs?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"configs": [
null
]
}
~~~
and https://datasets-preview.huggingface.tech/info?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"info": {
"default": {
...
"builder_name": "sent_comp",
"config_name": "default",
...
"splits": {
"validation": {
"name": "validation",
"num_bytes": 55823979,
"num_examples": 10000,
"dataset_name": "sent_comp"
},
"train": {
"name": "train",
"num_bytes": 1135684803,
"num_examples": 200000,
"dataset_name": "sent_comp"
}
},
...
}
}
}
~~~
| Properly manage the case config is None: For example:
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=null returns
~~~json
{
"dataset": "sent_comp",
"config": "null",
"splits": [
"validation",
"train"
]
}
~~~
this should have errored since there is no `"null"` config (it's `null`).
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config= returns
~~~json
The split names could not be parsed from the dataset config.
~~~
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"config": null,
"splits": [
"validation",
"train"
]
}
~~~
---
As a reference for the same dataset https://datasets-preview.huggingface.tech/configs?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"configs": [
null
]
}
~~~
and https://datasets-preview.huggingface.tech/info?dataset=sent_comp returns
~~~json
{
"dataset": "sent_comp",
"info": {
"default": {
...
"builder_name": "sent_comp",
"config_name": "default",
...
"splits": {
"validation": {
"name": "validation",
"num_bytes": 55823979,
"num_examples": 10000,
"dataset_name": "sent_comp"
},
"train": {
"name": "train",
"num_bytes": 1135684803,
"num_examples": 200000,
"dataset_name": "sent_comp"
}
},
...
}
}
}
~~~
| closed | 2021-08-26T12:16:27Z | 2021-08-26T13:26:00Z | 2021-08-26T13:26:00Z | severo |
979,971,116 | Get random access to the rows | Currently, only the first rows can be obtained with /rows. We want to get access to slices of the rows through pagination, eg /rows?from=40000&rows=10
| Get random access to the rows: Currently, only the first rows can be obtained with /rows. We want to get access to slices of the rows through pagination, eg /rows?from=40000&rows=10
| closed | 2021-08-26T08:21:34Z | 2023-06-14T12:16:22Z | 2023-06-14T12:16:22Z | severo |
979,408,913 | Install the datasets that require manual download | Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error. | Install the datasets that require manual download: Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error. | closed | 2021-08-25T16:30:11Z | 2022-06-17T11:47:18Z | 2022-06-17T11:47:18Z | severo |
978,940,131 | Give the cause of the error in the endpoints | It can thus be used in the hub to show hints to the dataset owner (or user?) to improve the script and fix the bug. | Give the cause of the error in the endpoints: It can thus be used in the hub to show hints to the dataset owner (or user?) to improve the script and fix the bug. | closed | 2021-08-25T09:45:58Z | 2021-08-26T14:39:22Z | 2021-08-26T14:39:21Z | severo |
978,938,259 | Use /info as the source for configs and splits? | It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will not increase the number of erroneous datasets. | Use /info as the source for configs and splits?: It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will not increase the number of erroneous datasets. | closed | 2021-08-25T09:43:51Z | 2021-09-01T07:08:25Z | 2021-09-01T07:08:25Z | severo |
972,528,326 | Increase the proportion of hf.co datasets that can be previewed | For different reasons, some datasets cannot be previewed. It might be because the loading script is buggy, because the data is in a format that cannot be streamed, etc.
The script https://github.com/huggingface/datasets-preview-backend/blob/master/quality/test_datasets.py tests the three endpoints on all the datasets in hf.co and outputs a data file that can be analysed in https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading.
The goal is to understand which problems arise most and try to fix the ones that can be addressed (here or in [datasets](https://github.com/huggingface/datasets/)) so that as many hf.co datasets as possible can be previewed. | Increase the proportion of hf.co datasets that can be previewed: For different reasons, some datasets cannot be previewed. It might be because the loading script is buggy, because the data is in a format that cannot be streamed, etc.
The script https://github.com/huggingface/datasets-preview-backend/blob/master/quality/test_datasets.py tests the three endpoints on all the datasets in hf.co and outputs a data file that can be analysed in https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading.
The goal is to understand which problems arise most and try to fix the ones that can be addressed (here or in [datasets](https://github.com/huggingface/datasets/)) so that as many hf.co datasets as possible can be previewed. | closed | 2021-08-17T10:17:23Z | 2022-01-31T21:34:27Z | 2022-01-31T21:34:07Z | severo |
972,521,183 | Add CI | Check types and code quality | Add CI: Check types and code quality | closed | 2021-08-17T10:09:35Z | 2021-09-01T07:09:56Z | 2021-09-01T07:09:56Z | severo |
964,047,835 | Support private datasets | For now, only public datasets can be queried.
To support private datasets :
- [x] add `use_auth_token` argument to all the queries functions (and upstream too in https://github.com/huggingface/datasets/blob/master/src/datasets/inspect.py)
- [x] obtain the authentication header <strike>or cookie</strike> from the request | Support private datasets: For now, only public datasets can be queried.
To support private datasets :
- [x] add `use_auth_token` argument to all the queries functions (and upstream too in https://github.com/huggingface/datasets/blob/master/src/datasets/inspect.py)
- [x] obtain the authentication header <strike>or cookie</strike> from the request | closed | 2021-08-09T14:19:51Z | 2021-09-16T08:27:33Z | 2021-09-16T08:27:32Z | severo |
964,030,998 | Expand the purpose of this backend? | Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular if one day the `datasets` library allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression for the largest datasets)
- or a SQL database
- or precompute and store a partial list of known offsets (every 10MB for example)
It would allow getting random access to the data. | Expand the purpose of this backend?: Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular if one day the `datasets` library allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression for the largest datasets)
- or a SQL database
- or precompute and store a partial list of known offsets (every 10MB for example)
It would allow getting random access to the data. | closed | 2021-08-09T14:03:41Z | 2022-02-04T11:24:32Z | 2022-02-04T11:24:32Z | severo |
963,792,000 | Upgrade `datasets` and adapt the tests | Two issues have been fixed in `datasets`:
- https://github.com/huggingface/datasets/issues/2743
- https://github.com/huggingface/datasets/issues/2749
Also, support for streaming compressed files is improving:
- https://github.com/huggingface/datasets/pull/2786
- https://github.com/huggingface/datasets/pull/2800
On the next `datasets` release (see https://github.com/huggingface/datasets/releases), upgrade the dependency here and change the integration tests. | Upgrade `datasets` and adapt the tests: Two issues have been fixed in `datasets`:
- https://github.com/huggingface/datasets/issues/2743
- https://github.com/huggingface/datasets/issues/2749
Also, support for streaming compressed files is improving:
- https://github.com/huggingface/datasets/pull/2786
- https://github.com/huggingface/datasets/pull/2800
On the next `datasets` release (see https://github.com/huggingface/datasets/releases), upgrade the dependency here and change the integration tests. | closed | 2021-08-09T09:08:33Z | 2021-09-01T07:09:31Z | 2021-09-01T07:09:31Z | severo |
963,775,717 | Establish and meet SLO | https://en.wikipedia.org/wiki/Service-level_objective
as stated in https://github.com/huggingface/datasets-preview-backend/issues/1#issuecomment-894430211:
> we need to "guarantee" that row fetches from moon-landing will be under a specified latency (to be discussed), even in the case of cache misses in `datasets-preview-backend`
>
> because the data will be needed at server-rendering time, for content to be parsed by Google
>
> What's a reasonable latency you think you can achieve?
>
> If it's too long we might want to pre-warm the cache for all (streamable) dataset, using a system based on webhooks from moon-landing for instance
See also https://github.com/huggingface/datasets-preview-backend/issues/3 for the cache. | Establish and meet SLO: https://en.wikipedia.org/wiki/Service-level_objective
as stated in https://github.com/huggingface/datasets-preview-backend/issues/1#issuecomment-894430211:
> we need to "guarantee" that row fetches from moon-landing will be under a specified latency (to be discussed), even in the case of cache misses in `datasets-preview-backend`
>
> because the data will be needed at server-rendering time, for content to be parsed by Google
>
> What's a reasonable latency you think you can achieve?
>
> If it's too long we might want to pre-warm the cache for all (streamable) dataset, using a system based on webhooks from moon-landing for instance
See also https://github.com/huggingface/datasets-preview-backend/issues/3 for the cache. | closed | 2021-08-09T08:47:22Z | 2022-09-07T15:21:13Z | 2022-09-07T15:21:12Z | severo |
959,186,064 | Cache the responses | The datasets generally don't change often, so it's surely worth caching the responses.
Three levels of cache are involved:
- client (browser, moon-landing): use Response headers (cache-control, ETag, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching)
- application: serve the cached responses. Invalidation of the cache for a given request:
- when the request arrives, if the TTL has finished
- when a webhook has been received for the dataset (see below)
- (needed?) when a dedicated background process is launched to refresh the cache (cron)
- `datasets`: the library manages [its own cache](https://huggingface.co/docs/datasets/processing.html#controlling-the-cache-behavior) to avoid unneeded downloads
Here we will implement the application cache, and provide the headers for the client cache.
- [x] cache the responses (content and status code) during a TTL
- [x] select a cache library:
- -> http://www.grantjenks.com/docs/diskcache/tutorial.html
- redis (https://redis.io/topics/persistence)
- https://github.com/florimondmanca/asgi-caches
- [x] check the size of the cache and allocate sufficient resources. Note that every request generates a very small JSON (in the worst case, it's the dataset-info.json file, for ~3,000 datasets, else it's a JSON with at most some strings). The only problem would be if we're flooded by random requests (which generate 404 errors and are cached). Anyway, there is a limit to 1GB (the default in diskcache)
- [x] generate a key from the request
- [x] store and retrieve the response
- [x] specify the TTL
- [x] configure the TTL as an option
- [x] set the `cache-control` header so that the client (browser) doesn't retry during some time. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching
- [x] TTL: return "max-age" in the `Cache-Control` header (computed based on the server TTL to match the same date? or set `Expires` instead?)
- [x] <strike>Manage `If-Modified-Since` header in request?</strike>: no, this header works with `Last-Modified`, not `Cache-Control`/ `max-age`
- [x] manage concurrency
- [x] allow launching various workers. Done with `WEB_CONCURRENCY` - currently hardcoded to 1
- [x] <strike>migrate to Redis - the cache service will be separated from the application</strike> -> moved to a dedicated issue: #31
The scope of this issue has been reduced. See the next issues:
- #32
- #31
- #34
- #36
- #37
And maybe:
- #35
- #38
- #39 | Cache the responses: The datasets generally don't change often, so it's surely worth caching the responses.
Three levels of cache are involved:
- client (browser, moon-landing): use Response headers (cache-control, ETag, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching)
- application: serve the cached responses. Invalidation of the cache for a given request:
- when the request arrives, if the TTL has finished
- when a webhook has been received for the dataset (see below)
- (needed?) when a dedicated background process is launched to refresh the cache (cron)
- `datasets`: the library manages [its own cache](https://huggingface.co/docs/datasets/processing.html#controlling-the-cache-behavior) to avoid unneeded downloads
Here we will implement the application cache, and provide the headers for the client cache.
- [x] cache the responses (content and status code) during a TTL
- [x] select a cache library:
- -> http://www.grantjenks.com/docs/diskcache/tutorial.html
- redis (https://redis.io/topics/persistence)
- https://github.com/florimondmanca/asgi-caches
- [x] check the size of the cache and allocate sufficient resources. Note that every request generates a very small JSON (in the worst case, it's the dataset-info.json file, for ~3,000 datasets, else it's a JSON with at most some strings). The only problem would be if we're flooded by random requests (which generate 404 errors and are cached). Anyway, there is a limit to 1GB (the default in diskcache)
- [x] generate a key from the request
- [x] store and retrieve the response
- [x] specify the TTL
- [x] configure the TTL as an option
- [x] set the `cache-control` header so that the client (browser) doesn't retry during some time. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching
- [x] TTL: return "max-age" in the `Cache-Control` header (computed based on the server TTL to match the same date? or set `Expires` instead?)
- [x] <strike>Manage `If-Modified-Since` header in request?</strike>: no, this header works with `Last-Modified`, not `Cache-Control`/ `max-age`
- [x] manage concurrency
- [x] allow launching various workers. Done with `WEB_CONCURRENCY` - currently hardcoded to 1
- [x] <strike>migrate to Redis - the cache service will be separated from the application</strike> -> moved to a dedicated issue: #31
The scope of this issue has been reduced. See the next issues:
- #32
- #31
- #34
- #36
- #37
And maybe:
- #35
- #38
- #39 | closed | 2021-08-03T14:38:12Z | 2021-09-23T09:58:13Z | 2021-09-23T09:58:13Z | severo |
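A condensed sketch of the application-level cache described in the checklist above, using diskcache with a TTL and a matching `Cache-Control: max-age` header; names and defaults are illustrative, not the backend's actual settings.

```python
import json

from diskcache import Cache
from starlette.responses import Response

CACHE_TTL_SECONDS = 6 * 60 * 60
cache = Cache("/tmp/datasets-preview-cache", size_limit=2**30)  # 1 GB, the diskcache default


def cached_json_response(key: str, compute) -> Response:
    entry = cache.get(key)
    if entry is None:
        # compute() is assumed to return (content_dict, status_code), e.g. ({"error": ...}, 404)
        content, status_code = compute()
        entry = {"content": content, "status_code": status_code}
        cache.set(key, entry, expire=CACHE_TTL_SECONDS)
    return Response(
        json.dumps(entry["content"]),
        status_code=entry["status_code"],
        media_type="application/json",
        headers={"Cache-Control": f"max-age={CACHE_TTL_SECONDS}"},
    )
```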
959,180,054 | Instrument the application | Measure the response time, status code, RAM usage, etc. to be able to take decision (see #1). Also statistics about the most common requests (endpoint, dataset, parameters) | Instrument the application: Measure the response time, status code, RAM usage, etc. to be able to take decision (see #1). Also statistics about the most common requests (endpoint, dataset, parameters) | closed | 2021-08-03T14:32:10Z | 2022-09-16T20:16:46Z | 2022-09-16T20:16:46Z | severo |
959,179,429 | Scale the application | Both `uvicorn` and `pm2` allow specifying the number of workers. `pm2` seems interesting since it provides a way to increase or decrease the number of workers without restart.
But before using multiple workers, it's important to instrument the app in order to detect if we need it (eg: monitor the response time). | Scale the application: Both `uvicorn` and `pm2` allow specifying the number of workers. `pm2` seems interesting since it provides a way to increase or decrease the number of workers without restart.
But before using multiple workers, it's important to instrument the app in order to detect if we need it (eg: monitor the response time). | closed | 2021-08-03T14:31:31Z | 2022-05-11T15:09:59Z | 2022-05-11T15:09:59Z | severo |
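A tiny sketch of worker-based scaling with uvicorn, driven by the `WEB_CONCURRENCY` variable mentioned in the cache issue above; the module path is hypothetical, and pm2-based scaling would be configured outside Python.

```python
import os

import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "app:app",  # hypothetical module:attribute path to the ASGI app
        host="0.0.0.0",
        port=8000,
        workers=int(os.environ.get("WEB_CONCURRENCY", "1")),
    )
```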